From xen-devel-bounces@lists.xenproject.org Sun Nov 01 02:31:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 02:31:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17061.41846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ39W-0002LI-Kj; Sun, 01 Nov 2020 02:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17061.41846; Sun, 01 Nov 2020 02:31:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ39W-0002L9-D7; Sun, 01 Nov 2020 02:31:30 +0000
Received: by outflank-mailman (input) for mailman id 17061;
 Sun, 01 Nov 2020 02:31:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZ39V-0002Kb-82
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 02:31:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcdc3733-711b-4ca1-8500-a80ec407b0b6;
 Sun, 01 Nov 2020 02:31:21 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ39N-0005fg-6o; Sun, 01 Nov 2020 02:31:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ39M-0002Dy-AW; Sun, 01 Nov 2020 02:31:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ39M-0001cc-A1; Sun, 01 Nov 2020 02:31:20 +0000
X-Inumbo-ID: bcdc3733-711b-4ca1-8500-a80ec407b0b6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wVLFneDQfW0hqltbTPY3OF3DHUWJNfJ5wmIFVSnFIUo=; b=vXzA10uOou108+t3+VVr/DKmHS
	UfL+zDSdQDyyVy1lQPUHwLaZEXgJlPooSc2BN5oxUSSkgmshpZdh+hF14gVPGzg7bUsvE/VYaPJJf
	44akGJ2+rzdaO5cwinFdOs0gH7rfn3vRt3OnbxxW1o7mARRMbQtYAhrNykbytOPAQuDA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156331-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156331: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 02:31:20 +0000

flight 156331 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156331/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156268
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156291
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156291
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156291
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156291
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156291
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156291
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156291
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156291
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156291
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156291
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156291
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883

Last test of basis   156291  2020-10-29 08:15:53 Z    2 days
Failing since        156315  2020-10-30 09:45:54 Z    1 days    2 attempts
Testing same since   156331  2020-10-31 07:47:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   16a20963b3..7056f2f89f  7056f2f89f03f2f804ac7e776c7b2b000cd716cd -> master


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 05:30:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 05:30:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17088.41934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ5wN-0001W6-Lm; Sun, 01 Nov 2020 05:30:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17088.41934; Sun, 01 Nov 2020 05:30:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ5wN-0001Vm-9d; Sun, 01 Nov 2020 05:30:07 +0000
Received: by outflank-mailman (input) for mailman id 17088;
 Sun, 01 Nov 2020 05:30:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZ5wL-0001JL-GH
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 05:30:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68d48f6d-eb28-498c-b15c-144ebf45d609;
 Sun, 01 Nov 2020 05:30:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ5wH-0001LY-4t; Sun, 01 Nov 2020 05:30:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ5wG-00049P-5k; Sun, 01 Nov 2020 05:30:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ5wG-00049c-38; Sun, 01 Nov 2020 05:30:00 +0000
X-Inumbo-ID: 68d48f6d-eb28-498c-b15c-144ebf45d609
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=axAtupdHEpgoxBqWnuOZVoP8LPAkfZ0zMTmFWLiOPwA=; b=WL6tmB0waFdodWykNuQL0X9uhK
	TtkLF7BXeuRKLiHjjtWjt+mloVdanSduszZVJyRrKJCWhAuNecHe5FZCA5huvoc+kTqqyp3wbPbFr
	9s6Jblh8LVJWQd80ioylTbMatuzbeVS90/OMPtXMuxN2PAZCgmxuFvGMzuU2xWKAf0v8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156334-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156334: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5fc6b075e165f641fbc366b58b578055762d5f8c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 05:30:00 +0000

flight 156334 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156334/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5fc6b075e165f641fbc366b58b578055762d5f8c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   92 days
Failing since        152366  2020-08-01 20:49:34 Z   91 days  153 attempts
Testing same since   156334  2020-10-31 13:06:41 Z    0 days    1 attempts

------------------------------------------------------------
3401 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 648786 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 08:56:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 08:56:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17113.41949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ9A1-0002b8-On; Sun, 01 Nov 2020 08:56:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17113.41949; Sun, 01 Nov 2020 08:56:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ9A1-0002b1-Lh; Sun, 01 Nov 2020 08:56:25 +0000
Received: by outflank-mailman (input) for mailman id 17113;
 Sun, 01 Nov 2020 08:56:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZ9A0-0002aw-4T
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 08:56:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb0bc588-f93f-4d66-be0e-425060e419bc;
 Sun, 01 Nov 2020 08:56:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ99v-00064V-BL; Sun, 01 Nov 2020 08:56:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ99v-0005Mz-3Y; Sun, 01 Nov 2020 08:56:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ99v-0001Zt-2y; Sun, 01 Nov 2020 08:56:19 +0000
X-Inumbo-ID: eb0bc588-f93f-4d66-be0e-425060e419bc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NhF8+DbObRiYICKYiSARRE9K8SAzdD3oa3Dj8r0uxuA=; b=NxjlF2yCqGDm8EZeZAdQvkW261
	B34PheylKe9PplwLw/4jH+PefJrxTCXBElluoMA1Y7HaC53TQvslO2GH1KNVdl8fd1rhWPVgYP2rE
	ZesK/LoJiXWB3dgNnBfWbbFx//rC2Kygf/I+anXy3pUaKFgbqNIbdt8fWR1bGkCj8bLI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156340-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156340: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3b7bb8f451977e81e252e23cbf817029fe40d494
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 08:56:19 +0000

flight 156340 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156340/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3b7bb8f451977e81e252e23cbf817029fe40d494
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  114 days
Failing since        151818  2020-07-11 04:18:52 Z  113 days  108 attempts
Testing same since   156328  2020-10-31 04:20:10 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23514 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 09:14:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 09:14:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17122.41967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ9RZ-0004QQ-Db; Sun, 01 Nov 2020 09:14:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17122.41967; Sun, 01 Nov 2020 09:14:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZ9RZ-0004QJ-AV; Sun, 01 Nov 2020 09:14:33 +0000
Received: by outflank-mailman (input) for mailman id 17122;
 Sun, 01 Nov 2020 09:14:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZ9RX-0004Pe-Ce
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 09:14:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 620c9104-e50b-4af0-924d-b613495c390a;
 Sun, 01 Nov 2020 09:14:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ9RN-0006Sw-Qs; Sun, 01 Nov 2020 09:14:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ9RN-0006JW-HY; Sun, 01 Nov 2020 09:14:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZ9RN-0000M9-H6; Sun, 01 Nov 2020 09:14:21 +0000
X-Inumbo-ID: 620c9104-e50b-4af0-924d-b613495c390a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s2P/6NbxusjM1nyg3fWiNip8itxU7Xk0AB+P5Bk1HwY=; b=y+pkNac/kNJ6+lf5tegcQBpD+7
	/LDNQtdAANGadUTMQaBWW0vmrI4FQEZ4MOjmLKEQrWZc2biygJ9MW0sG4WeVygW4N71g3vhHC+lxW
	Msb/Uha63hie49P4u591x7zuoNW83Cs3ZoW/JZ6SNFiAelQWFvSd9pnJNDq/zuFyyO68=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156336-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156336: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
X-Osstest-Versions-That:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 09:14:21 +0000

flight 156336 xen-4.12-testing real [real]
flight 156342 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156336/
http://logs.test-lab.xenproject.org/osstest/logs/156342/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 156035

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156035
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156035
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156035
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7
baseline version:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3

Last test of basis   156035  2020-10-20 13:36:02 Z   11 days
Testing same since   156263  2020-10-27 18:36:53 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4100d463dbdd95d85fabe387dd5676bed75f65f7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)

commit b1d6f37aa5aa9f3fc5a269b9dd21b7feb7444be0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up new mapping
    settings before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 10:34:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 10:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17144.41979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZAga-0002vH-GS; Sun, 01 Nov 2020 10:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17144.41979; Sun, 01 Nov 2020 10:34:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZAga-0002vA-DJ; Sun, 01 Nov 2020 10:34:08 +0000
Received: by outflank-mailman (input) for mailman id 17144;
 Sun, 01 Nov 2020 10:34:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZAgZ-0002v5-0P
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 10:34:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e46194b-0c77-4533-bee2-08e91fea2118;
 Sun, 01 Nov 2020 10:34:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZAgW-000876-Uk; Sun, 01 Nov 2020 10:34:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZAgW-0003xq-H5; Sun, 01 Nov 2020 10:34:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZAgW-00084t-Gf; Sun, 01 Nov 2020 10:34:04 +0000
X-Inumbo-ID: 3e46194b-0c77-4533-bee2-08e91fea2118
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bqTvrxa6HXSVsQsDt9yn7JDNYgmnkd84bC/C043e8d0=; b=4sbP68AbwYmKmoHhWHuGwdmrCg
	zhSlPgYY21vpuog6gqfHvk66K06S6wt+Em60EimetVJJ57LX4c0xBULd5I16/QJhAZP+UwwOukgRw
	9ZpwwUciMB4ya5VsteTn6Bed2xvZxrRdpqHLNsDZOavbbY8PdTEA+VUclC1THVWaCUdE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156344-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156344: all pass - PUSHED
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 10:34:04 +0000

flight 156344 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156344/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883

Last test of basis   156274  2020-10-28 09:20:30 Z    4 days
Testing same since   156344  2020-11-01 09:18:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tim Deegan <tim@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   16a20963b3..7056f2f89f  7056f2f89f03f2f804ac7e776c7b2b000cd716cd -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 12:15:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 12:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17174.41996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZCG5-00032i-6H; Sun, 01 Nov 2020 12:14:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17174.41996; Sun, 01 Nov 2020 12:14:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZCG5-00032b-2u; Sun, 01 Nov 2020 12:14:53 +0000
Received: by outflank-mailman (input) for mailman id 17174;
 Sun, 01 Nov 2020 12:14:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZCG3-000321-Kw
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 12:14:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f158af1-6ed9-47c5-a04f-170cc9cab35c;
 Sun, 01 Nov 2020 12:14:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZCFu-0001jh-Su; Sun, 01 Nov 2020 12:14:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZCFu-0000ti-JZ; Sun, 01 Nov 2020 12:14:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZCFu-00086V-J5; Sun, 01 Nov 2020 12:14:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kZCG3-000321-Kw
	for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 12:14:51 +0000
X-Inumbo-ID: 6f158af1-6ed9-47c5-a04f-170cc9cab35c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6f158af1-6ed9-47c5-a04f-170cc9cab35c;
	Sun, 01 Nov 2020 12:14:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IW1Bw0yYdzG9sIwyyFscqajkeylUmSTBeoaGJCZAdF0=; b=RBuE8IZFhhRzl/PCNbr1Xyxlx3
	dJmCoIO+w6QfiA4HgbKAfrboclqJN1aUiCoCLXprTgIkdIfrJGBx2zA6aqr67WMxLnR9Y/rjRubPm
	+qb88HCXxxdvjfON5T2A93r2O9bMY7EzX0V7xRqHzf81RfflNe4DsT/ytiPBey9JRRvc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZCFu-0001jh-Su; Sun, 01 Nov 2020 12:14:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZCFu-0000ti-JZ; Sun, 01 Nov 2020 12:14:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZCFu-00086V-J5; Sun, 01 Nov 2020 12:14:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156338-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156338: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5e6464f9c6756c95d036c4acf7ce557a7eb3a7be
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 12:14:42 +0000

flight 156338 qemu-mainline real [real]
flight 156346 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156338/
http://logs.test-lab.xenproject.org/osstest/logs/156346/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                5e6464f9c6756c95d036c4acf7ce557a7eb3a7be
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   73 days
Failing since        152659  2020-08-21 14:07:39 Z   71 days  162 attempts
Testing same since   156338  2020-10-31 21:37:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55741 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 15:17:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 15:17:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17241.42041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZF6S-0001og-UI; Sun, 01 Nov 2020 15:17:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17241.42041; Sun, 01 Nov 2020 15:17:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZF6S-0001oZ-Q1; Sun, 01 Nov 2020 15:17:08 +0000
Received: by outflank-mailman (input) for mailman id 17241;
 Sun, 01 Nov 2020 15:17:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZF6R-0001oU-Qw
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 15:17:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ab06e06-0f57-4141-809d-8354bff08eaf;
 Sun, 01 Nov 2020 15:17:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZF6P-0005PL-1F; Sun, 01 Nov 2020 15:17:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZF6O-000225-NG; Sun, 01 Nov 2020 15:17:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZF6O-0002O5-Mi; Sun, 01 Nov 2020 15:17:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kZF6R-0001oU-Qw
	for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 15:17:08 +0000
X-Inumbo-ID: 1ab06e06-0f57-4141-809d-8354bff08eaf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+wehpNm2SK3jEAgWbymZtZjwAqlTfLzrMuLhUYkW2AA=; b=n5PX4+KbjgVxzCpFV2VMSMfQwV
	O71zLh7b+OCnaAHEt7LjFQ0UTqOBesLGkoFFfLr4SY1imAmKxdqBQPCm4Laku/NJz2tFTK1HQ7kV0
	v9aw5PWygeMKm61wPcx04QfVy4tB5jsvE4Bd0U1h7nvTsqPgs1DLIF5n1tNBWN5521bI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156339: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 15:17:04 +0000

flight 156339 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156339/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156331
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156331
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156331
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156331
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156331
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156331
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156331
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156331
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156331
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156331
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156331
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156331
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156339  2020-11-01 02:35:36 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Nov 01 17:26:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 17:26:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17261.42062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZH7N-0004vv-IO; Sun, 01 Nov 2020 17:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17261.42062; Sun, 01 Nov 2020 17:26:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZH7N-0004vo-EW; Sun, 01 Nov 2020 17:26:13 +0000
Received: by outflank-mailman (input) for mailman id 17261;
 Sun, 01 Nov 2020 17:26:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kZH7M-0004vj-3e
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 17:26:12 +0000
Received: from mail-wr1-x434.google.com (unknown [2a00:1450:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc3682fe-d563-467f-8d1f-4faac47f9117;
 Sun, 01 Nov 2020 17:26:10 +0000 (UTC)
Received: by mail-wr1-x434.google.com with SMTP id n15so11953902wrq.2
 for <xen-devel@lists.xenproject.org>; Sun, 01 Nov 2020 09:26:10 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id 6sm6826590wrc.88.2020.11.01.09.26.08
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 01 Nov 2020 09:26:09 -0800 (PST)
X-Inumbo-ID: fc3682fe-d563-467f-8d1f-4faac47f9117
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=81VH2H9EYCqx4v/04RbUnhPjmr5GDUw/nv05RtWy32I=;
        b=QO1fNIfmzx61QO/itaiw4gOc7mEjmasFWAVvs39eTh/Fp4ChDj/EHGWBJkpG35e0ox
         lbMH31qY7xXzKPE29Qu0HdkcOHiHmlPwl4ORViZLuE1o2E6DgAHyq/kFZDcduvtinpLU
         QiubOjGNLWzQZwZwg3rC3ogOR1GZh/q4RbSiWAEQ7p6XlkXMbu2iJnJaD4ZsLu75asMC
         o6zr/npoJA3m5TMMgMrKkSglNExOwupWOfsUqRcs9rZPAG/yUNsXvDO22t9vadZw+saW
         w71iASKzlDatnTbHPnONi1qC5ifLA5gtR8rq6JFnmsRtasF02rxcZMn231T6W1lCplyy
         OoKw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=81VH2H9EYCqx4v/04RbUnhPjmr5GDUw/nv05RtWy32I=;
        b=s/MjONtykyJM5P/yOI0AmtbEskIVcRHp+7kJwyeQ688iF6R7j/UHEvpJy7P9gA6cch
         dnO5SLqR7QqKOpgLax+LTZCSC1xXi3xR4HT4Q1j/XHpQrCJ4eNZNKB8Juuf1YVp5gfS4
         vgaebyTd3ZPtx5xpQmabYw6GiCe7tAVlC3Q7oApFy+qxd7Q9kBQ8Fowyuh+uivwVsF1y
         jD8eX6aAeOk4n2mxCatUMstQ/GGV2lWS3kjYuF445DdLQVK/LQ36BU2VAV/2y7BtXV2l
         Eu/yNdRpVVNjCXlHLeImT4+U+/7LpijPhnM5LsI0J70/B77QFw3Pd4cQpXy6z/isQg5O
         HvrQ==
X-Gm-Message-State: AOAM533Z4tsf/rPcCwL8mtEi6HG2Rk3+5FYWsk5/FWjr18zz1LXIEMcG
	np8eYKBzFZ1e7Vhh1XE/D4U=
X-Google-Smtp-Source: ABdhPJzyzZDz8pu8cWV/CIgvqGsV/zeL+GTq1V501DZv6Hnbnq7JszjICgWpFCSej+OREij+fTCWOQ==
X-Received: by 2002:a5d:62cf:: with SMTP id o15mr15123798wrv.49.1604251569772;
        Sun, 01 Nov 2020 09:26:09 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: sstabellini@kernel.org
Cc: ehem+xen@m5p.com,
	julien@xen.org,
	roman@zededa.com,
	xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Date: Sun,  1 Nov 2020 17:26:08 +0000
Message-Id: <20201101172608.90996-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <alpine.DEB.2.21.2010301240450.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010301240450.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,


>> I think the best compromise is still to use an ACPI string to detect
>> the platform. For instance, would it be possible to use the OEMID
>> fields in RSDT, XSDT, FADT?  Possibly even a combination of them?
>>
>> Another option might be to get the platform name from UEFI somehow. 
>
> I included appropriate strings in e-mail.  Suitable strings do appear
> in `dmesg`.


Just as a heads-up, SMCCC does define the optional SMCCC_ARCH_SOC_ID [1]
function, and it is listed as mandatory in the Server Base Boot Requirements
(SBBR); see p. 15 of ARM DEN 0044F [2].

Unfortunately it looks like the RPi 4's firmware doesn't currently support
this, or at least the rpi4-uefi project [3] didn't think so as of firmware
version 1.6 [4]. I couldn't find equivalent SBBR feature-tracking pages on
that site for firmware versions 1.7 or 1.8 to confirm, nor could I find any
reference to SMCCC_ARCH_SOC_ID in the RPi 4 firmware sources [5].

On the bright side, while not very helpful in the short term, note that
Arm's recently announced SystemReady [6] program is an evolution of
ServerReady (SBSA+SBBR) extended to other segments and applications,
including Embedded, IoT, and general Linux Boot.

That means in future we should see more platform firmware supporting
SMCCC_ARCH_SOC_ID, as the SiPs will (hopefully) want their platforms to
be SystemReady certified.

Hope that's useful info.

Thanks,
Ash.

[1] https://developer.arm.com/documentation/den0028/c
[2] https://developer.arm.com/documentation/den0044/latest
[3] https://rpi4-uefi.dev/about/
[4] https://rpi4-uefi.dev/status-v1-6-firmware/
[5] https://github.com/pftf/RPi4/tree/master
[6] https://developer.arm.com/architectures/system-architectures/arm-systemready


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 19:09:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 19:09:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17287.42073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZIiw-0005FS-Mm; Sun, 01 Nov 2020 19:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17287.42073; Sun, 01 Nov 2020 19:09:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZIiw-0005FL-Jw; Sun, 01 Nov 2020 19:09:06 +0000
Received: by outflank-mailman (input) for mailman id 17287;
 Sun, 01 Nov 2020 19:09:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZIiu-0005FG-Uy
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 19:09:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f97540c2-4218-4bcc-9bf4-a93715c2e394;
 Sun, 01 Nov 2020 19:09:02 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZIis-0002E7-0A; Sun, 01 Nov 2020 19:09:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZIir-000779-Ng; Sun, 01 Nov 2020 19:09:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZIir-000482-NB; Sun, 01 Nov 2020 19:09:01 +0000
X-Inumbo-ID: f97540c2-4218-4bcc-9bf4-a93715c2e394
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N+8pSV9r+y7j/0/wWcXGT7ImEggJvD/EwsoCyhATIgY=; b=39XoVf7BZ3dn1YygjC4ATs/YN2
	MJaemFfPWOE8Hm5rAP3U3QX1QyKW9Mdx7bL+bQMNzVyZsobB9ZzkWSaWh60qJVDKsUDTO5sHBaJmq
	T0ESd7xnTmK0EWWAtZR0L6okSqOXIzqi0ydjHKajP2i5BVMfBjyMw31SGQdyNU91cMn4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156341: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c2dc4c073fb71b50904493657a7622b481b346e3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 19:09:01 +0000

flight 156341 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156341/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                c2dc4c073fb71b50904493657a7622b481b346e3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   92 days
Failing since        152366  2020-08-01 20:49:34 Z   91 days  154 attempts
Testing same since   156341  2020-11-01 05:34:21 Z    0 days    1 attempts

------------------------------------------------------------
3402 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 649628 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 01 21:16:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 01 Nov 2020 21:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17319.42095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZKiC-0007pd-0n; Sun, 01 Nov 2020 21:16:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17319.42095; Sun, 01 Nov 2020 21:16:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZKiB-0007pW-TF; Sun, 01 Nov 2020 21:16:27 +0000
Received: by outflank-mailman (input) for mailman id 17319;
 Sun, 01 Nov 2020 21:16:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HYFK=EH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZKiB-0007os-Bn
 for xen-devel@lists.xenproject.org; Sun, 01 Nov 2020 21:16:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83c7369f-fc28-4df8-afb9-de04debe3f04;
 Sun, 01 Nov 2020 21:16:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZKi3-0004qD-Kh; Sun, 01 Nov 2020 21:16:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZKi3-0006SO-C4; Sun, 01 Nov 2020 21:16:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZKi3-000586-Bc; Sun, 01 Nov 2020 21:16:19 +0000
X-Inumbo-ID: 83c7369f-fc28-4df8-afb9-de04debe3f04
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U/SrMVNeY8EnM9HhiuB+RtMPyJ1umvoP1pRab17IYTA=; b=E8HU7UfMwRCVURB+1ZoRqz4+MK
	8Vd1Cg/umrOidvkRIW3BubCuWQiRMs+RROf5VV/NyRWO7387dKMCW7v6Phu/gmyEACZ/ZBhn3DVA3
	BnpgIShd9x8XqXX9aQdQnKxbVHnrlFowo/G83oSlLbTOEEgqewPoEekyAVNdXvjIMZ74=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156343: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
X-Osstest-Versions-That:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 01 Nov 2020 21:16:19 +0000

flight 156343 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156343/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156035
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156035
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156035
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156035
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7
baseline version:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3

Last test of basis   156035  2020-10-20 13:36:02 Z   12 days
Testing same since   156263  2020-10-27 18:36:53 Z    5 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0108b011e1..4100d463db  4100d463dbdd95d85fabe387dd5676bed75f65f7 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 00:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 00:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17348.42120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZNrV-0008UJ-9n; Mon, 02 Nov 2020 00:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17348.42120; Mon, 02 Nov 2020 00:38:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZNrV-0008UC-6m; Mon, 02 Nov 2020 00:38:17 +0000
Received: by outflank-mailman (input) for mailman id 17348;
 Mon, 02 Nov 2020 00:38:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZNrT-0008U7-Ie
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 00:38:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe3e264e-91f3-4e91-8cc1-dded81675127;
 Mon, 02 Nov 2020 00:38:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZNrP-000154-OV; Mon, 02 Nov 2020 00:38:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZNrP-0008H7-CZ; Mon, 02 Nov 2020 00:38:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZNrP-00042t-C5; Mon, 02 Nov 2020 00:38:11 +0000
X-Inumbo-ID: fe3e264e-91f3-4e91-8cc1-dded81675127
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GKIpEP1XMMuQR59IdTFOBL/I3RQMffx9AsMWxs15tm4=; b=FkbK4L3h1PY4L0alM4D/QhATCY
	NNQzyna9CkrQSfpo7JvMkm4d9gsbkMXhVzL/aSuK4EtGJc19t5sZFXyTvroojO+wVNLQ93CFLRjn/
	8eiKXw+UBv0Ff5rFQr3u/MM82MKTam5TGu/YRRPqsz0tLHz8RgornjWT1y+7zRHLlaLQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156345: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b300b28b78145b832f1112d77035111e35112cec
X-Osstest-Versions-That:
    linux=bde3f94035b0e5a724853544d65d00536e1889b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 00:38:11 +0000

flight 156345 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156345/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156293
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156293
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156293
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156293
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156293
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156293
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156293
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156293
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156293
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156293
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156293
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b300b28b78145b832f1112d77035111e35112cec
baseline version:
 linux                bde3f94035b0e5a724853544d65d00536e1889b2

Last test of basis   156293  2020-10-29 09:13:51 Z    3 days
Testing same since   156345  2020-11-01 11:14:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandr Nogikh <nogikh@google.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrew Gabbasov <andrew_gabbasov@mentor.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arjun Roy <arjunroy@google.com>
  Arnd Bergmann <arnd@arndb.de>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Borislav Petkov <bp@suse.de>
  Christian Eggers <ceggers@arri.de>
  Christian Lamparter <chunkeey@gmail.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  dann frazier <dann.frazier@canonical.com>
  Deepa Dinamani <deepa.kernel@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Frederic Barrat <fbarrat@linux.ibm.com>
  Gao Xiang <hsiangkao@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Gustavo A. R. Silva <gustavo@embeddedor.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ido Schimmel <idosch@nvidia.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jia-Ju Bai <baijiaju@tsinghua.edu.cn>
  Jian Cai <jiancai@google.com>
  Jon Maloy <jmaloy@redhat.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juergen Gross <jgross@suse.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kim Phillips <kim.phillips@amd.com>
  Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
  Leon Romanovsky <leonro@nvidia.com>
  Lijun Pan <ljp@linux.ibm.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Marc Zyngier <maz@kernel.org>
  Masahiro Fujiwara <fujiwara.masahiro@gmail.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Schaller <misch@google.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Neal Cardwell <ncardwell@google.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Oleksandr Shamray <oleksandrs@nvidia.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Paras Sharma <parashar@codeaurora.org>
  Pavel Machek <pavel@ucw.cz>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju Rangoju <rajur@chelsio.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Ricky Wu <ricky_wu@realtek.com>
  Roberto Sassu <roberto.sassu@huawei.com>
  Saeed Mirzamohammadi <saeed.mirzamohammadi@oracle.com>
  Serge Belyshev <belyshev@depni.sinp.msu.ru>
  Soheil Hassas Yeganeh <soheil@google.com>
  Song Liu <songliubraving@fb.com>
  Souptick Joarder <jrdr.linux@gmail.com>
  Stafford Horne <shorne@gmail.com>
  Stephen Hemminger <stephen@networkplumber.org>
  Thomas Gleixner <tglx@linutronix.de>
  Tomasz Maciej Nowak <tmn505@gmail.com>
  Tung Nguyen <tung.q.nguyen@dektech.com.au>
  Vasundhara Volam <vasundhara-v.volam@broadcom.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Zenghui Yu <yuzenghui@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   bde3f94035b0..b300b28b7814  b300b28b78145b832f1112d77035111e35112cec -> tested/linux-5.4
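The "hooks/update hook was ignored" hints above come from git itself: a hook file exists in the bare repository but lacks the executable bit, so git skips it and prints the advice. A minimal sketch of the two remedies git suggests, using a hypothetical throwaway bare repository (not the real xenbits path):

```shell
# Create a throwaway bare repo to demonstrate; the path is illustrative only.
repo=$(mktemp -d)/demo.git
git init --bare -q "$repo"

# A hook file that is present but not executable triggers the hint on push.
printf '#!/bin/sh\nexit 0\n' > "$repo/hooks/post-update"

# Remedy 1: mark the hook executable so git will actually run it.
chmod +x "$repo/hooks/post-update"

# Remedy 2: keep hooks non-executable but silence the advice message,
# as the hint itself suggests.
git -C "$repo" config advice.ignoredHook false

# Verify both settings took effect.
test -x "$repo/hooks/post-update" && echo "hook is executable"
git -C "$repo" config advice.ignoredHook
```

Either change alone would make the hints disappear on subsequent pushes; which is appropriate depends on whether the hooks are meant to run.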


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 04:15:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 04:15:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17361.42139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZRFc-00015c-QA; Mon, 02 Nov 2020 04:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17361.42139; Mon, 02 Nov 2020 04:15:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZRFc-00015T-Ji; Mon, 02 Nov 2020 04:15:24 +0000
Received: by outflank-mailman (input) for mailman id 17361;
 Mon, 02 Nov 2020 04:15:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZRFb-00014v-Kn
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 04:15:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fcc781c-1451-4f66-8e20-65489dc13ef0;
 Mon, 02 Nov 2020 04:15:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZRFS-0004ZB-M9; Mon, 02 Nov 2020 04:15:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZRFS-0001u0-BR; Mon, 02 Nov 2020 04:15:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZRFS-00006q-Ag; Mon, 02 Nov 2020 04:15:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kZRFb-00014v-Kn
	for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 04:15:23 +0000
X-Inumbo-ID: 8fcc781c-1451-4f66-8e20-65489dc13ef0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8fcc781c-1451-4f66-8e20-65489dc13ef0;
	Mon, 02 Nov 2020 04:15:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GNbe8PT23iRr0HxT1R4D4a/tkUutt3h1O0oF6qYccNA=; b=psZvU7mY04gSiEaGuKqPaSmRUW
	rsI2VgcpDW3Zi7/fE3gG77lDNKCj4wTYPxdbI9oiNgpngQ+APQL4HXCIvQBPpTqeLs3eSb2UjatI4
	yYXnlXNtp30N47G5+9G3K7M4+mpAJpbI2RQKoIxNBCmq/2icFFqkiY4dSHP4EqOvZkbQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZRFS-0004ZB-M9; Mon, 02 Nov 2020 04:15:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZRFS-0001u0-BR; Mon, 02 Nov 2020 04:15:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZRFS-00006q-Ag; Mon, 02 Nov 2020 04:15:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156347-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156347: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2ab6c494339652e69ec405dc779d83c46c8faf98
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 04:15:14 +0000

flight 156347 qemu-mainline real [real]
flight 156355 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156347/
http://logs.test-lab.xenproject.org/osstest/logs/156355/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                2ab6c494339652e69ec405dc779d83c46c8faf98
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   73 days
Failing since        152659  2020-08-21 14:07:39 Z   72 days  163 attempts
Testing same since   156347  2020-11-01 12:16:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55831 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 05:53:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 05:53:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17368.42151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZSmj-0001Ne-PY; Mon, 02 Nov 2020 05:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17368.42151; Mon, 02 Nov 2020 05:53:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZSmj-0001NX-MK; Mon, 02 Nov 2020 05:53:41 +0000
Received: by outflank-mailman (input) for mailman id 17368;
 Mon, 02 Nov 2020 05:53:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nG/o=EI=epam.com=prvs=95752e5c40=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kZSmi-0001NS-1X
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 05:53:40 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2afc28af-af07-4d03-bf0e-e9c5c08e2acc;
 Mon, 02 Nov 2020 05:53:38 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A25pG3m019687; Mon, 2 Nov 2020 05:53:34 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2057.outbound.protection.outlook.com [104.47.12.57])
 by mx0b-0039f301.pphosted.com with ESMTP id 34he0ustwh-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Nov 2020 05:53:34 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5010.eurprd03.prod.outlook.com (2603:10a6:208:10b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Mon, 2 Nov
 2020 05:53:32 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3499.030; Mon, 2 Nov 2020
 05:53:32 +0000
X-Inumbo-ID: 2afc28af-af07-4d03-bf0e-e9c5c08e2acc
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kXZfIuCn04uAIRU49CljNtCLtxxz4x82467CXC9TcySXFIV0yMPXUDehYeYdUOjTLDlmNKLLg3prxRCZ8epFnjnX7AcNEeXviavs65YbSH8p7LZUMubaMuCx+MrKVDzG4zVT3M9Yx9gO+iwrzRHqpZtZTFELJH/ioFBewqr1oFtaCR7+4F5xdmL7eXCGL6VblMyxOXuKiGXZsmos5BSCka5S8FNUGm0156xwO28lFlpMNC7Jp9VVDzKVr/VPjeyBbaX08WbKe2hf/7H78r952zc2ngGlW7X1RFspUy2HHVq9Pwbw8iH6a7CAF248avaRC1qQeNhpi0kD3HWbqb5XzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PhcCXt9ozxSanJ8vIzdy30LTqtz+UqFM/wuYQheCPLI=;
 b=EU2m0H5GD4AEu8j98vBunxbugqxNLRMpxBNqXdd1q1qDfqtaD+JF6PYbRbIczvyVtcsb+il6p8wmdiIFEu38cwEsg2Ry7yUOCw8ApErUkXfbaadFFoFQkJVjDZMz4cu9IhlfzYlqWcU4OIt1zxi5nt8DlTJsVnWCkD4naGQ7Iu4ixmE4BL4GBQD8e+fcpaD3TkRoduD+8eQpztd4SqbgnP3M8kLn8HJyAxi5jkrm5t14n7Ds8av9oDZ+6pEB/9wJ7aSsJdNemm8+f0KhDFGYBb+lDnjfZ5gvnJ0ed3oP/PFWszCaqNgjoTZeugbyRrPUcIWtu2wtMNZvvYa9C9ewdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PhcCXt9ozxSanJ8vIzdy30LTqtz+UqFM/wuYQheCPLI=;
 b=Mcw77cJhDD5so/TNQne70CetHy7wkcJmbBXKWIiivSVxIEknEVsNy8h6gMtOevYpwnKw9TzXCy7lDvrKjiOZqicc6lkqQe3w5S8j+aylO/T8d5/IPJJbq5wpQ3GWw5S+9uHPXq5NAz8zvIXkT5AFkCH/OtW2cS1aB95ftb8dOPv+2mpxODLCB4gs28rida7L6nt44yUnzqtmiFrzKcYZSEmJ9BGk3XQjk1hhNcsh/2i4w5PKX+5583zfUv4OfNToPxffKUhlGTi2Vt3JcGPGsv8i4GdskxQFmTYPCxPy2esleGSSFZBXqxsTRojyDiDjprH8OHwb1Ur/XT6kCfQiwg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Rahul Singh <Rahul.Singh@arm.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien
 Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWrqmeCV4SGlhADU2GY14D47cn76mwOdAAgAAEO4CAABJtAIAECzGA
Date: Mon, 2 Nov 2020 05:53:32 +0000
Message-ID: <2960c8cf-d237-67c1-fcc0-3171d509dbd8@epam.com>
References: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <2AB3A125-D530-4627-A877-EC2BCDCD63DC@arm.com>
 <da9d0192-7431-83ab-be1a-cc107ee1ac4c@epam.com>
 <E1137D39-EDF2-4663-A990-7628B7057B45@arm.com>
In-Reply-To: <E1137D39-EDF2-4663-A990-7628B7057B45@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 34119cc6-f243-4f2b-78ab-08d87ef3a176
x-ms-traffictypediagnostic: AM0PR03MB5010:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB501017E20F5971311770D7EEE7100@AM0PR03MB5010.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 nkphQ+9QQ9+8umcPy945bBuAzveDbwRVEuxOzlfYkRD6tFfXjdz4s+I+jjsQmissxXkIZYY7LR5h46Qy2Jtkb4zc69Z1MWqF8qAdmQ20W4N9onsCzjf32cBj9Je+yyLY5Ci4aWyvLpWlNqN8ZKcmUwg/rksIpZaG34oOkU/Go+E6iQc5Aex08r9BY86GbeiqsUef4+TUq/M0pzpvsrtqvN2s3nFc+3MIb+yJW/whWRlj9kIIqvvHizeY2hWo4PVLxfUYAoOi9bnhFoiqyVtb7fhWea3JAXxDuikwAdvuORyZSvk4vuAZhUEEQLP4rcSvcUFuerXQ7b87f0U8kx4tr7zypThvS2RnyBlxpJ09U7ZZK5/D5uxkEtBHd8CqByf/+N9SWsd71YPuWoM3WqpwRQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(366004)(136003)(346002)(376002)(8936002)(8676002)(4326008)(83380400001)(86362001)(6512007)(31686004)(36756003)(54906003)(26005)(6506007)(2906002)(186003)(53546011)(6486002)(2616005)(478600001)(966005)(107886003)(6916009)(66476007)(5660300002)(316002)(76116006)(66446008)(71200400001)(64756008)(66556008)(66946007)(31696002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 xxaDutRHPRmgxh1o/Xk21TucwP7aWZoa8ucKlVwGvo8HLp41GNTxkBYfBAwoYcfiTUZ3uHFg/VMYPts0lJu8AGeDa5iVhlajIpQmLNce0y5bc0MbAQQLK1fiOWh0WG0Jsfec0JKa5uprBRHtLZ6Au/c3Qcbr05m76/3zRePvJNJNDtvpCrU2O1f95VGvX3PQTTcPL/Gw6B+TaYN9eMbvY7UAFL25GkQl9lcTZTuJ5HurIbx+3Rh6FZb0AuHCFD+Chx3WXc1b+NPKPgt/2Y0ip8xBok40B9z3OU4cR6QKUWXzUoVgbqKVJB1jgyWuWIO2BYJXsqu+fRJjUYR4orupY5iPbei2Qw1ui2vyPw/1QInCmsAJZLoGw0MgVNFxv6dSAwscsxq/vNsaKxk+qGdkDEXPLyAMCDToy6067qiFp8LSMJ/w5WUtJdbClv0+rj7C3fv5KqfCDHo9b6pziU9pN/HX5JZkuWgZD0jAMImy/IsC495czMTFzku3Vc0NsOJDyB7laachijnEAL/t6N5XTa2/XTk/9ZDZ3942iMAragXvLRkZkGTil4w4cB0/3CS1RSg001ipUjsfvqqWnHWLzsar0lpcTb8Cq4gqWBuJQARI4vk8CPJP/IqA4Jk7vF+R1AFdAPPvYPdfgA7R0IhwSQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <EEC7D742632E714E854E27AADF31E81B@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34119cc6-f243-4f2b-78ab-08d87ef3a176
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Nov 2020 05:53:32.3316
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: YWz+fauaGUP78q/TMR38uGAqWZT8E3UtX4N5ymdMhzKKux6sK0fMcOyWxDHRIy7wPrgsE/VjFUvnMXYuHSOE1+IFuxPv/lkBW3+BrSAElSn4Gz9r4YbOEpuVkjz8Gg8R
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5010
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-02_01:2020-10-30,2020-11-02 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 mlxscore=0
 bulkscore=0 impostorscore=0 lowpriorityscore=0 malwarescore=0 spamscore=0
 phishscore=0 adultscore=0 mlxlogscore=884 suspectscore=0 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011020048

Hi,

On 10/30/20 6:08 PM, Bertrand Marquis wrote:
> Hi Oleksandr,
>
>> On 30 Oct 2020, at 15:02, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>
>> Hi,
>>
>> On 10/30/20 4:47 PM, Rahul Singh wrote:
>>> Hello Oleksandr,
>>>
>>>> On 30 Oct 2020, at 10:44 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>>>
>>>> Hi, Rahul!
>>>>
>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>> the Linux SMMUv3 driver.
>>>>>
>>>>> Major differences between the Linux driver are as follows:
>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>     that supports both Stage-1 and Stage-2 translations.
>>>> First of all thank you for the efforts!
>>>>
>>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>>>
>>>> that this combination will not work as of now:
>>>>
>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>> I have limited knowledge about QEMU internals. From what I see in the logs, the fault occurred at early driver initialisation when the SMMU driver was trying to probe the HW.
>>>
>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>
>>>> [snip]
>>>>
>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>
>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>> Yes, as of now only Stage-2 is supported in XEN. If we have any requirement or use case that depends on Stage-1 translation we can support that also in XEN.
>> The use case is below: PCI passthrough and various configurations including SR-IOV, which is possible with QEMU...
> This is currently not in the list of configurations supported or that we have planned on our side to support.
>
> But we would be more than happy to review any changes to enable this :-)

Fair enough

Unfortunately we do not have any HW with SMMUv3 at the moment, so I am not sure we can invest time in this at the moment.

>
> Regards
> Bertrand

Thank you,

Oleksandr

>
>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>
>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>>
>>>>
>>>> Thank you in advance,
>>>>
>>>> Oleksandr
>>>>
>>>> [1] https://urldefense.com/v3/__https://patchwork.ozlabs.org/project/qemu-devel/cover/1524665762-31355-1-git-send-email-eric.auger@redhat.com/__;!!GF_29dbcQIUBPA!h-EaE0OnSbXtLBSwIS311alDl7pn8sH7sihgIYqilM5-r-8kCH6USNNlLB3xhbzc6eczUOrhcw$ [patchwork[.]ozlabs[.]org]
>>> Regards,
>>> Rahul
>> Thank you,
>>
>> Oleksandr
>


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 05:55:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 05:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17371.42162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZSov-0001VA-6w; Mon, 02 Nov 2020 05:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17371.42162; Mon, 02 Nov 2020 05:55:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZSov-0001V3-3j; Mon, 02 Nov 2020 05:55:57 +0000
Received: by outflank-mailman (input) for mailman id 17371;
 Mon, 02 Nov 2020 05:55:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pnj+=EI=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kZSot-0001Uv-Tp
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 05:55:56 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81110818-03c2-4211-841c-ed5a36365fab;
 Mon, 02 Nov 2020 05:55:55 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id v19so8753760lji.5
 for <xen-devel@lists.xenproject.org>; Sun, 01 Nov 2020 21:55:55 -0800 (PST)
Received: from [192.168.10.4] ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id k13sm2007725lfe.179.2020.11.01.21.55.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 01 Nov 2020 21:55:53 -0800 (PST)
X-Inumbo-ID: 81110818-03c2-4211-841c-ed5a36365fab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=Lw6JhrZAacucyBbqvQgzEycZM3mRTn4rRKHi43j13hI=;
        b=tJkrpkuvHdKsxOjwbAj204dqo/ug6ebscDcSkkOnpWzSpRnMsM2BrIUGxVlIgGQpBb
         VgQu7StFn+63uvo7wCVs8oVgzU6VSN+qjjrRBUvy2x0yPx2dCWmSHNPzti4w45h0tOP3
         oWAuwFPy9FBAhxgOND9S2wUIjcUEKDzX3Jyd1CH8XNi6vcy9hbX/tkgPAy8mRiYSY7h0
         lMVx64kllGCvYsDm4bwXBYwlhMTbwMTHC30jYCeW6hhur3+q8UZci/9I3GGXkiG/Vrhu
         2HjEzP6P9iBedNkOTNbRYdCQHQWMRvy7WJJil1lhAlJu73M3ZO0spop6ZH7vOTLxTNd7
         rZiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=Lw6JhrZAacucyBbqvQgzEycZM3mRTn4rRKHi43j13hI=;
        b=I6PNihi6fYZc7Ot+KlghQ3mtHXwQ8TKxOa/tcTle3PSFrcb53I9YibnOtElom/0yPB
         521MSp9BMAEFqrJytz/n1XFKjGbyjzay0SiCNglKu9mz4xH4gLaENYGTDHRnQLr3NfW8
         YeRmmkQK8HWAa6rSBxu+4fRqS8UTJlwh+XR35OmkET4qz5U5woZAcUhcC4w+7z7/b7lf
         Mat5u8crDxNKhv2iy3DCXazw3gRX24GAF6sLnCoCeGoz8GN2bfHKAUnHvIjpguqnUxkT
         SmEZrqNjpT7em613PBZxKKXpCxOHDfS8v8mi9aP5+f9tfxorL8s/v5ua6zKFxlnsriLS
         dL7Q==
X-Gm-Message-State: AOAM530W0hyTJHvYIEOSbbLFftIPagw7iEd7eXSvc3FR6G1xT2Ju/wHx
	GlULYTbfPNmTzBTSmmDlnxI=
X-Google-Smtp-Source: ABdhPJzT7TB+JDrDcnmrSyxG4L3jkKeCzdQhZt2+kymG7S/gQnZffogjcRMMMtqO+gxqogLy+SuH5Q==
X-Received: by 2002:a2e:6e10:: with SMTP id j16mr5588344ljc.320.1604296553863;
        Sun, 01 Nov 2020 21:55:53 -0800 (PST)
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Rahul Singh <rahul.singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "bertrand.marquis@arm.com" <bertrand.marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
From: Oleksandr Andrushchenko <andr2000@gmail.com>
Message-ID: <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
Date: Mon, 2 Nov 2020 07:55:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US

Hi, Julien!

On 10/30/20 7:18 PM, Julien Grall wrote:
> Hi Oleksandr,
>
> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>> the Linux SMMUv3 driver.
>>>
>>> Major differences between the Linux driver are as follows:
>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>      that supports both Stage-1 and Stage-2 translations.
>>
>> First of all thank you for the efforts!
>>
>> I tried the patch with QEMU and would like to know if my understanding is correct
>>
>> that this combination will not work as of now:
>>
>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>> (XEN) Data Abort Trap. Syndrome=0x1940010
>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>
>> [snip]
>>
>> If this is expected then is there any plan to make QEMU work as well?
>>
>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>
> Just for clarification, you are trying to boot Xen on QEMU, right?
Exactly
>
> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-table layout between stage-1 and stage-2 is different.
So, it is even more work then
>
>>
>>
>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>
>> implementation, so it could allow testing different setups and configurations with QEMU.
>
> I would recommend to get the SMMU supporting stage-2 page-tables.
You mean in QEMU?
>
> Regardless of that, I think Xen should be able to say the SMMU is not supported rather than crashing.
Yes, that would be nice
>
> Cheers,
>
Thank you,

Oleksandr



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 07:24:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 07:24:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17395.42193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZUCF-0000zY-8Q; Mon, 02 Nov 2020 07:24:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17395.42193; Mon, 02 Nov 2020 07:24:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZUCF-0000zR-59; Mon, 02 Nov 2020 07:24:07 +0000
Received: by outflank-mailman (input) for mailman id 17395;
 Mon, 02 Nov 2020 07:24:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1ztG=EI=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kZUCD-0000zL-8Z
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 07:24:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.47]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72abb0b7-3622-4b38-b649-8f10208f4b52;
 Mon, 02 Nov 2020 07:24:01 +0000 (UTC)
Received: from AM6PR10CA0101.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:8c::42)
 by AM0PR08MB5010.eurprd08.prod.outlook.com (2603:10a6:208:15c::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Mon, 2 Nov
 2020 07:23:58 +0000
Received: from VE1EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8c:cafe::32) by AM6PR10CA0101.outlook.office365.com
 (2603:10a6:209:8c::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Mon, 2 Nov 2020 07:23:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT052.mail.protection.outlook.com (10.152.19.173) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Mon, 2 Nov 2020 07:23:57 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Mon, 02 Nov 2020 07:23:57 +0000
Received: from e84b342dbfc6.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 ADD86BBA-9E3C-4E8F-8AF0-90183795CFF0.1; 
 Mon, 02 Nov 2020 07:23:52 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e84b342dbfc6.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 02 Nov 2020 07:23:52 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM0PR08MB5491.eurprd08.prod.outlook.com (2603:10a6:208:189::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Mon, 2 Nov
 2020 07:23:48 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3499.030; Mon, 2 Nov 2020
 07:23:48 +0000
X-Inumbo-ID: 72abb0b7-3622-4b38-b649-8f10208f4b52
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6S2p8fLfA1cy8W9OIPNhLu4MpDe+iPaqmv4j2xsDyMI=;
 b=FjYQh2L1TMTfO+EW732YbbVHBeSJbJtVYvAu7Kqz0yeeiQKeAJwCMjAxfG+LAq1M1NJ5ph1jlLGldaqmFpWHN8zYBJoGYHZd0Rc6ySonz0N/pZw+QWz496lIVYhCv3uVutPj2JyzdIyPu7yjxam6Ho2wboHhbDQuXnXmk0hnCVk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mnzC5kP+Dc043b4hYo5X1kh2DfzSVFyo0UApvX3I7irsszf036vncFibu89XRivgNPB7va+IkcVOgfmGsXLZhs+qIwcwJ8ZI5EGyuFFhVAxMbSp4r6ZRt/PIOuMooCeGb2v1S3ohSuo/pY5KW4Ebif1kDc+1+UJ9aWLxa6ssQlT7PrgcJNj+9CjZEHXwX27l4thgK0kCy53dNVkmUNkjWtX+is42Y4OLa8hc+Vv96skRuCvrYUzMcPwGPKTi5XUIe9TKQgfyudFX2j0IDmc0uffJDUNmyS+N2O8fO3VTJapqLJELELgo7pA+Jsiyi3sEB19Z33VhXOuVaQy5aDN8Uw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6S2p8fLfA1cy8W9OIPNhLu4MpDe+iPaqmv4j2xsDyMI=;
 b=dY1Gx6fL3d3mVOY/LDMgl3MJcdl+VSFQMzuY63auYFXQkE82fb9bNjxu8MydYtfy/u+UUZfd4hxG++imJKyNCKZRFSXbm901HrV/9e2e7L9rqkHAStP7P3/72R6MrJYlrKXBsco9ttFgVNCk+UBT/e5W4+4b0ceB1BMlcoxloGEz89rgBkRmcLuGg08/l7v+aSCExc+cYWxs0zjCj9B3f2LjUClNkswuxBn+w0CtPN3Yqjc6MWgzd+sUfCWmqqbLDkqE3WZlnjer1+a8cUU+fDxNLqEnuHJ8+G4fjA9dYZxLVpsQDwF7tZgoz8Qu/KZXuquWrvUt0RPvV606WcS1BA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, Masami Hiramatsu
	<masami.hiramatsu@linaro.org>, =?utf-8?B?QWxleCBCZW5uw6ll?=
	<alex.bennee@linaro.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, xen-devel
	<xen-devel@lists.xenproject.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Julien Grall <Julien.Grall@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Tim Deegan <tim@xen.org>, Daniel De Graaf
	<dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Jun
 Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Anthony
 PERARD <anthony.perard@citrix.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: RE: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
Thread-Topic: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
Thread-Index:
 AQHWoxKpQGDpryS/tEKvZ0gj7CwnMKmuR8kAgAC6SwCAABIuAIAAFmWAgADwXgCAAjNoAIACOnTQ
Date: Mon, 2 Nov 2020 07:23:48 +0000
Message-ID:
 <AM0PR08MB3747802302FE70971AE91F6F9E100@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
 <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
 <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
 <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com>
 <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
In-Reply-To:
 <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 2EB0BAA943FAB44D8909F91824950BB4.0
x-checkrecipientchecked: true
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.112]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: b8c38b05-c8e1-4b9a-0a22-08d87f004355
x-ms-traffictypediagnostic: AM0PR08MB5491:|AM0PR08MB5010:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5010A81C3FDABE72AE340A569E100@AM0PR08MB5010.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5491
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6295b572-ed8d-4122-ed70-08d87f003de7
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Nov 2020 07:23:57.7870
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b8c38b05-c8e1-4b9a-0a22-08d87f004355
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5010

Hi Oleksandr,

Thanks for sharing the virtio-disk backend. I have tested it on the Arm FVP_base platform.
We used Domain-0 to run the virtio disk backend. The backend disk is a loop device:
    "virtio_disks": [
        {
            "backend_domname": "Domain-0",
            "devid": 0,
            "disks": [
                {
                    "filename": "/dev/loop0"
                }
            ]
        }
    ],

It works fine and I've pasted some logs:

--------------------------------------------
Domain-0 logs:
main: read backend domid 0
(XEN) gnttab_mark_dirty not implemented yet
(XEN) domain_direct_pl011_init for domain#2
main: read frontend domid 2
  Info: connected to dom2

demu_seq_next: >XENSTORE_ATTACHED
demu_seq_next: domid = 2
demu_seq_next: filename[0] = /dev/loop0
demu_seq_next: readonly[0] = 0
demu_seq_next: base[0]     = 0x2000000
demu_seq_next: irq[0]      = 33
demu_seq_next: >XENCTRL_OPEN
demu_seq_next: >XENEVTCHN_OPEN
demu_seq_next: >XENFOREIGNMEMORY_OPEN
demu_seq_next: >XENDEVICEMODEL_OPEN
demu_initialize: 2 vCPU(s)
demu_seq_next: >SERVER_REGISTERED
demu_seq_next: ioservid = 0
demu_seq_next: >RESOURCE_MAPPED
demu_seq_next: shared_iopage = 0xffffae6de000
demu_seq_next: buffered_iopage = 0xffffae6dd000
demu_seq_next: >SERVER_ENABLED
demu_seq_next: >PORT_ARRAY_ALLOCATED
demu_seq_next: >EVTCHN_PORTS_BOUND
demu_seq_next: VCPU0: 3 -> 7
demu_seq_next: VCPU1: 5 -> 8
demu_seq_next: >EVTCHN_BUF_PORT_BOUND
demu_seq_next: 0 -> 9
demu_register_memory_space: 2000000 - 20001ff
  Info: (virtio/mmio.c) virtio_mmio_init:290: virtio-mmio.devices=0x200@0x2000000:33
demu_seq_next: >DEVICE_INITIALIZED
demu_seq_next: >INITIALIZED
IO request not ready
IO request not ready

----------------
Dom-U logs:
[    0.491037] xen:xen_evtchn: Event-channel device installed
[    0.493600] Initialising Xen pvcalls frontend driver
[    0.516807] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    0.525565] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    0.562275] brd: module loaded
[    0.595300] loop: module loaded
[    0.683800] virtio_blk virtio0: [vda] 131072 512-byte logical blocks (67.1 MB/64.0 MiB)
[    0.684000] vda: detected capacity change from 0 to 67108864


/ # dd if=/dev/vda of=/dev/null bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (64.0MB) copied, 3.196242 seconds, 20.0MB/s
/ # dd if=/dev/zero of=/dev/vda bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (64.0MB) copied, 3.704594 seconds, 17.3MB/s
---------------------

Read/write works fine in dom-U. The FVP platform is an emulator, so the performance figures are not representative.
We will test it on real hardware like the N1SDP.

Thanks,
Wei Chen

------------------------------------------------------------------------------------
From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
Sent: 1 November 2020 5:11
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>; Alex Bennée <alex.bennee@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>; xen-devel <xen-devel@lists.xenproject.org>; Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <Julien.Grall@arm.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Tim Deegan <tim@xen.org>; Daniel De Graaf <dgdegra@tycho.nsa.gov>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Anthony PERARD <anthony.perard@citrix.com>; Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm



On Fri, Oct 30, 2020 at 1:34 PM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
Hi Oleksandr,

Hi Masami, all

[sorry for the possible format issue]

>> >
>> >        Could you tell me how I can test it?
>> >
>> >
>> > I assume it is due to the lack of the virtio-disk backend (which I haven't shared yet as I focused on the IOREQ/DM support on Arm in the
>> > first place).
>> > Could you wait a little bit, I am going to share it soon.
>>
>> Do you have a quick-and-dirty hack you can share in the meantime? Even
>> just on github as a special branch? It would be very useful to be able
>> to have a test-driver for the new feature.
>
> Well, I will provide a branch on github with our PoC virtio-disk backend by the end of this week. It will be possible to test this series with it.

Great! OK I'll be waiting for the PoC backend.

Thank you!

You can find the virtio-disk backend PoC (shared as is) at [1].
Brief description...

The virtio-disk backend PoC is a completely standalone entity (an IOREQ server) which emulates a virtio-mmio disk device.
It is based on code from DEMU [2] (for the IOREQ server parts) and some code from kvmtool [3] to implement the virtio protocol and
disk operations over the underlying H/W, plus Xenbus code to read the configuration from Xenstore
(it is configured via the domain config file). The last patch in this series (marked as RFC) actually adds the required bits to the libxl code.

Some notes...

The backend can be used with the current V2 IOREQ series [4] without any modifications; all you need is to enable
CONFIG_IOREQ_SERVER on Arm [5], since it is disabled by default within this series.

Please note that in our system we run the backend in DomD (a driver domain). I haven't tested it in Dom0,
since in our system Dom0 is thin (without any H/W) and only used to launch VMs, so there is no underlying block H/W.
But I expect it is possible to run it in Dom0 as well (at least there is nothing specific to a particular domain in the backend itself, nothing hardcoded).
If you are going to run a backend in a domain other than Dom0, you need to write your own (FLASK) policy for the backend (running in that domain)
to be able to issue DM related requests, etc. For test purposes only, you could use this patch [6] that tweaks the Xen dummy policy (not for upstream).

As I mentioned elsewhere, you don't need to modify the guest Linux (DomU), just enable the VirtIO related configs.
If I remember correctly, the following would be enough:
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
Likewise, if your host Linux (Dom0 or DomD) version is >= 4.17 you don't need to modify it either.
Otherwise, you need to cherry-pick "xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE" from upstream to be able
to use the acquire interface for the resource mapping.


We usually build the backend in the context of the Yocto build process and run it as a systemd service,
but you can also build and run it manually (it should be launched before DomU creation).

There are no command line options at all. Everything is configured via the domain configuration file:
# This option is mandatory, it shows that VirtIO is going to be used by the guest
virtio=1
# Example of domain configuration (two disks are assigned to the guest, the latter in readonly mode):
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

Hope that helps. Feel free to ask questions if any.

[1] https://github.com/xen-troops/virtio-disk/commits/ioreq_v3
[2] https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=summary
[3] https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
[4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3
[5] https://github.com/otyshchenko1/xen/commit/ee221102193f0422a240832edc41d73f6f3da923
[6] https://github.com/otyshchenko1/xen/commit/be868a63014b7aa6c9731d5692200d7f2f57c611

-- 
Regards,

Oleksandr Tyshchenko


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 08:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 08:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17448.42223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZUm8-0005Cc-VM; Mon, 02 Nov 2020 08:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17448.42223; Mon, 02 Nov 2020 08:01:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZUm8-0005CV-SO; Mon, 02 Nov 2020 08:01:12 +0000
Received: by outflank-mailman (input) for mailman id 17448;
 Mon, 02 Nov 2020 08:01:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iSH1=EI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZUm7-0005CP-6F
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 08:01:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8fafef0-1772-4d2d-97c9-b081d63fb8ec;
 Mon, 02 Nov 2020 08:01:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B906BACF6;
 Mon,  2 Nov 2020 08:01:06 +0000 (UTC)
X-Inumbo-ID: a8fafef0-1772-4d2d-97c9-b081d63fb8ec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604304066;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eASlqPS2sMyXaJyuaVVUn00Iuuc1tTvT28lcNIMn5Ls=;
	b=GyzVGfbuCkbTv+LDLSgMYQE3Rb35CzldsqmPFMT4JBJa+jIs2B+n6rMMVhXBt1LZuAiuXZ
	INFE5XbaR+a8VQ7BXdzEp+W4ncp1wzwQ9grnolcgvRUHVeWd8C1axHztv6UqlPdnuHesD7
	Ov3/mn+EazA6bDyveK/3OL4Awpc0kfQ=
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org
References: <20201031002405.4545-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
Date: Mon, 2 Nov 2020 09:01:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201031002405.4545-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 31.10.2020 01:24, Stefano Stabellini wrote:
> --- a/xen/Kconfig
> +++ b/xen/Kconfig
> @@ -35,14 +35,13 @@ config DEFCONFIG_LIST
>  	default ARCH_DEFCONFIG
>  
>  config EXPERT
> -	bool "Configure standard Xen features (expert users)"
> +	bool "Configure EXPERT features"
>  	help
> -	  This option allows certain base Xen options and settings
> -	  to be disabled or tweaked. This is for specialized environments
> -	  which can tolerate a "non-standard" Xen.
> -	  Only use this if you really know what you are doing.
> -	  Xen binaries built with this option enabled are not security
> -	  supported.
> +	  This option allows certain experimental (see SUPPORT.md) Xen
> +	  options and settings to be enabled/disabled. This is for
> +	  specialized environments which can tolerate a "non-standard" Xen.
> +	  Only use this if you really know what you are doing.  Xen binaries
> +	  built with this option enabled are not security supported.
>  	default n

I'm definitely in favor of this - more than once I have
wondered about the prompt text.

> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>  	  SBSA Generic UART implements a subset of ARM PL011 UART.
>  
>  config ARM_SSBD
> -	bool "Speculative Store Bypass Disable" if EXPERT
> -	depends on HAS_ALTERNATIVE
> +	bool "Speculative Store Bypass Disable"
> +	depends on HAS_ALTERNATIVE && EXPERT
>  	default y

Taking this one as an example: I'm afraid that when the default
isn't "n" (or there's no default directive at all, which ought
to be equivalent to and preferred over "default n"), such a
transformation is not functionally identical. Before your
change, with !EXPERT this option defaults to y. After your
change the option is unavailable, which resolves to it being
off for all consuming purposes.
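
To sketch the difference with a minimal Kconfig fragment (illustrative only, not the actual Xen sources - the two variants would of course not coexist in one file):

```kconfig
# Variant 1: only the *prompt* is conditional. With !EXPERT the
# user is never asked, but the symbol still takes its default (y).
config ARM_SSBD
	bool "Speculative Store Bypass Disable" if EXPERT
	default y

# Variant 2: the whole option depends on EXPERT. With !EXPERT the
# symbol is not available at all, so it resolves to n everywhere.
config ARM_SSBD
	bool "Speculative Store Bypass Disable"
	depends on EXPERT
	default y
```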

IOW there are reasons to have "if ..." attached to prompts
(this construct indeed makes only the prompt conditional, not
the entire option), but there are also cases where the cleanup
you do is indeed desirable / helpful.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 08:29:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 08:29:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17475.42260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZVDg-0007Kc-Lv; Mon, 02 Nov 2020 08:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17475.42260; Mon, 02 Nov 2020 08:29:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZVDg-0007KV-IH; Mon, 02 Nov 2020 08:29:40 +0000
Received: by outflank-mailman (input) for mailman id 17475;
 Mon, 02 Nov 2020 08:29:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZVDf-0007Jg-NT
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 08:29:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93de7c3e-fb4b-4a2b-b172-45fc8e47b9b5;
 Mon, 02 Nov 2020 08:29:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVDX-0002BS-8Z; Mon, 02 Nov 2020 08:29:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVDW-0007Bx-VC; Mon, 02 Nov 2020 08:29:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVDW-00041U-Uj; Mon, 02 Nov 2020 08:29:30 +0000
X-Inumbo-ID: 93de7c3e-fb4b-4a2b-b172-45fc8e47b9b5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9dwBaGGHqYPpJDWIs7CKJ3cpMfv5xkRSiAWLOBUz4qI=; b=y4FlFSWmRGu08hCFJWV1nx4TTp
	OS7fXq4v7IwdlZRYT92TLkjs556MuogbZdnA7WnSRta8FaSHBFiv5XOXPHLAtP7XBZ0O+7IVvdyRd
	34br44UDu+nw4bLfPdwCXi17TcKTvQpSSVZd+DI4wM90x0C0h0YAOHjiQDPUfCDT2WLk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156357-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156357: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=466e57541c7bb8787f6933aff201cdf9b3190de1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 08:29:30 +0000

flight 156357 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156357/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              466e57541c7bb8787f6933aff201cdf9b3190de1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  115 days
Failing since        151818  2020-07-11 04:18:52 Z  114 days  109 attempts
Testing same since   156357  2020-11-02 04:19:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23539 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 08:43:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 08:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17484.42275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZVQj-0000dO-Sc; Mon, 02 Nov 2020 08:43:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17484.42275; Mon, 02 Nov 2020 08:43:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZVQj-0000dD-Nt; Mon, 02 Nov 2020 08:43:09 +0000
Received: by outflank-mailman (input) for mailman id 17484;
 Mon, 02 Nov 2020 08:43:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZVQh-0000ce-U8
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 08:43:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae49cbf0-41e5-4ee4-8bc4-147311a599d1;
 Mon, 02 Nov 2020 08:43:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVQa-0002SD-8p; Mon, 02 Nov 2020 08:43:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVQZ-0007sm-VB; Mon, 02 Nov 2020 08:43:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVQZ-00009m-Ul; Mon, 02 Nov 2020 08:42:59 +0000
X-Inumbo-ID: ae49cbf0-41e5-4ee4-8bc4-147311a599d1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6YY4VYvPnmHXT0IuFCI10H1kcZ8aMWxHS5MZMLSzUYY=; b=Tf35v/9ysQgTeZBlfQfZEOPvZD
	axofWXcQS6mDdFpQATpEt8MPbDCn0RadtXrl5LIS/sYZAd+g2klFTMMjDlEUqsm+eN7zoDvDWOdvo
	JjEFMx4o6y/tIPKjXsiVh4j0bQnT9OARVJqizK2wqHuLpblaAw7aSnYNc0DRQTIcP+E0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156353-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156353: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2363c6926098ee5c75c8780d07f88f5c21010683
X-Osstest-Versions-That:
    ovmf=8ead7af22bc596de23cdcc46e1f1a8c4e721d6d0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 08:42:59 +0000

flight 156353 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156353/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2363c6926098ee5c75c8780d07f88f5c21010683
baseline version:
 ovmf                 8ead7af22bc596de23cdcc46e1f1a8c4e721d6d0

Last test of basis   156329  2020-10-31 05:43:41 Z    2 days
Testing same since   156353  2020-11-02 01:40:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>
  Jon Hunter <jonathanh@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8ead7af22b..2363c69260  2363c6926098ee5c75c8780d07f88f5c21010683 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 09:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 09:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17507.42304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZVkh-0002gJ-1F; Mon, 02 Nov 2020 09:03:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17507.42304; Mon, 02 Nov 2020 09:03:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZVkg-0002gC-UZ; Mon, 02 Nov 2020 09:03:46 +0000
Received: by outflank-mailman (input) for mailman id 17507;
 Mon, 02 Nov 2020 09:03:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZVkf-0002g7-Sr
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 09:03:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7c75eb9-1875-4e9d-b53e-b4332cab6c68;
 Mon, 02 Nov 2020 09:03:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVkd-0002tV-1Y; Mon, 02 Nov 2020 09:03:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVkc-0000Hc-NM; Mon, 02 Nov 2020 09:03:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZVkc-00026s-Mx; Mon, 02 Nov 2020 09:03:42 +0000
X-Inumbo-ID: d7c75eb9-1875-4e9d-b53e-b4332cab6c68
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a6TmkqrB+Un/FpAUn4wNbxcCZ5HVBi455SCuiTP3sqA=; b=CbSz+jZdHP2wES1i18wGNbCACW
	gmDaORVB92U1qQuQPEq40M0/SL1iPo+C0E8yJLJei7NXbpywto9TefKzH1phF3UBKeErmDtUk6tXc
	c3RdP2sumkCmRWjKuATg9ArU1QubCV0Aq65dgGG6wkCyeE+X7o3Z90I8lR+w4YHO+FoU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156351-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156351: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7b56fbd83e261484da43f04090bce07570bd117f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 09:03:42 +0000

flight 156351 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156351/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                7b56fbd83e261484da43f04090bce07570bd117f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   93 days
Failing since        152366  2020-08-01 20:49:34 Z   92 days  155 attempts
Testing same since   156351  2020-11-01 19:40:22 Z    0 days    1 attempts

------------------------------------------------------------
3413 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 652361 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 09:56:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 09:56:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17561.42356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZWZ8-0007Sf-Qd; Mon, 02 Nov 2020 09:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17561.42356; Mon, 02 Nov 2020 09:55:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZWZ8-0007SY-NO; Mon, 02 Nov 2020 09:55:54 +0000
Received: by outflank-mailman (input) for mailman id 17561;
 Mon, 02 Nov 2020 09:55:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=prqF=EI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kZWZ7-0007ST-Be
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 09:55:53 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1e::61c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fbcc601-24ef-463e-9750-32015307dbe5;
 Mon, 02 Nov 2020 09:55:51 +0000 (UTC)
Received: from DB7PR05CA0058.eurprd05.prod.outlook.com (2603:10a6:10:2e::35)
 by HE1PR0801MB1882.eurprd08.prod.outlook.com (2603:10a6:3:4e::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Mon, 2 Nov
 2020 09:55:46 +0000
Received: from DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2e:cafe::d3) by DB7PR05CA0058.outlook.office365.com
 (2603:10a6:10:2e::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Mon, 2 Nov 2020 09:55:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT060.mail.protection.outlook.com (10.152.21.231) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Mon, 2 Nov 2020 09:55:45 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Mon, 02 Nov 2020 09:55:45 +0000
Received: from 37cb86306c1b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 375AA55F-F668-4FFC-BA20-7B42D580659F.1; 
 Mon, 02 Nov 2020 09:55:31 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 37cb86306c1b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 02 Nov 2020 09:55:31 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5324.eurprd08.prod.outlook.com (2603:10a6:10:11e::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Mon, 2 Nov
 2020 09:55:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3499.030; Mon, 2 Nov 2020
 09:55:29 +0000
X-Inumbo-ID: 8fbcc601-24ef-463e-9750-32015307dbe5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G0gDik8R7PSzlglryaKEUqXXKkBxBVwE4A9c2jEMqk0=;
 b=SEZYp1Tm6b680BpNk226f+UMamtHWFqx0zYjkiRIPVaR9v+7U6/aM4R5cis4dTpL+2bWv3z2BA6IZM6J+RlaMOa3FONr6EzvomfUkgVZI3E1mILG0LeIjwvRBzahX7SNMpYWH2KSG/sRyKuQqZdr3Py+RZQC/McuV6Nu9sMetkQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2d167d128e151492
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DWVB0dYJtJ4JA60s1Fu9TGrXGJUIt+io6YOoGzkIkd3w8lLUVvZx58njiPstBHm6vWitUzchyvfYPIFuP+GpMYZtSVEWwf8uJ3gzwht5rhtRFxClvOK/zL4zNjiAEAZPCsv9qVaXqyLXNTpuGwPz6/kUHDeyryZI+zl+xTuvaIin9e9LxDlGHyXimmxxF/LuOo1BkRMuSWUBfwhseIUwCQFnmzWhLpTZ32GCDBqfxZ7oM537dZyKtR/wzRb1JshYx9dAGISts/gUlhmKEi29VZYHWHzUw3b0cPqA1besxJqqstEtP/7w2wsm0BdoiJSlaRBOeKzX98dBPfOEhURxDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G0gDik8R7PSzlglryaKEUqXXKkBxBVwE4A9c2jEMqk0=;
 b=aSgGdjFZOmyk4BL/+ujOJKNDakmu9qaGG/gy3gMoeP7sq/QIQXCigiX5oX4Hifk/Y/gGfZcxJdZXf9AQhscSBWkoKMZF8+ihtdVU+WqLqA+6kY501U+N4SPbhx/9zyhHaNmh76XhncvHUvT8SwkrGo6+ZZVo3TX3as/4ra4zTetfch2xzLQ9YZARv807v6nGEgqet+jU1hk0VTlSKkxeTresvzSd8Vp+Ehm6VnXxYHJYhHAFVxWS5lwiKzRkttZeOa6beuiJGeOR+dFQS58gogIHw9zzjRbQJMjaT+MruoC4DrDbEcohjWNCSEAIVWzeQ5un0CQfwNmh5bfJ3NIFfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: Julien Grall <julien@xen.org>, Oleksandr Andrushchenko
	<Oleksandr_Andrushchenko@epam.com>, Rahul Singh <Rahul.Singh@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWoVvyGh0jWzCHpk6w/U7XN+08x6mwEIWAgABuPwCAA/gngIAAQvMA
Date: Mon, 2 Nov 2020 09:55:28 +0000
Message-ID: <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
In-Reply-To: <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7183759b-8108-4077-1299-08d87f1577a4
x-ms-traffictypediagnostic: DB8PR08MB5324:|HE1PR0801MB1882:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB1882A6D0CA2AA222930858DA9D100@HE1PR0801MB1882.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gmhycnIQWvczKt+r04VjLG9SGDE45UWz9kMr2f5fwCEgXuUPROkCDwfbXEIlJ+OYmPKOnmtBcfkA/8Uzv2vpP1TtwweHYC8wRgYigiAQQNa6VuyxiyWbMCZTRANlHaWKg4ak7unnLQbexpKPbEkS96RcgOPfPGS/s4K0vXpyYG5Q/BCmYN55CMqYdgsKaMZ1m35O8a2kd0wGd+PVagOzNMMLpa1SiJoAXLwjOzwKRooSPjBlMFwC2VALNlOYIkXDmksSvJmUOKQh2PziwUHcW7W6VOM3PqYFPJjzNMmAOb9YN9CNLnOJMsmm2DwhGVDPkWRZ5XeZZXCf1CwGdRsgtQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(376002)(396003)(136003)(39860400002)(8936002)(66946007)(66446008)(64756008)(8676002)(53546011)(316002)(6916009)(478600001)(6486002)(186003)(2906002)(91956017)(66556008)(4326008)(66476007)(5660300002)(54906003)(76116006)(36756003)(33656002)(2616005)(86362001)(71200400001)(6506007)(26005)(6512007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 jHyB7bYeeqaloQ/ho29RfCVbgiOnKWAlPl6k5gppsheq5JWvr/6mSyidbcxjyU2lMGxPSxWdosVEN2Urm3Zj/Wnzs6vj5JGBRRZ8JVnryqkkU5DzCHnTQMH/4pd4RbUwk61F53jnF5sfzFZW1Gld1zzKriQ/kYOhc+2EOn21sP2crBFov72bG9mrE04zPUyX6Fe7uno9SAwkaX2PxvuWXgSgXl2QpjlX/9TTz2lK+u27Lnr2+PjT78uWFj/GHw4FC7NyNxSA+0uqSuTdh+NAEVgzCAWThWGIflJSW7kCGkMCRtH6RlGjV9fDJrrD4kQX7s16QFfyCCXqjMfZ4Z6pWMHZJ9JF8G7RIrgiK5rjsVrcHFhOkwr0zThlmgbrp1gmJSj43vVuEoP/tzAZyngSESPjbbVNHs7KwORrrPRP8go593wYGFp6Zg17ruvpv+h349mjKEgYGL7KSZLgsjDy8+TCm93ZFJEYxWW3QmYGugkv9U5sNpVLS8oLgO7X/IxZqTATxLn09O7aLeT3TuCTKVVc8f+WdPeE6LFDM/NakCqZMQL9bQvZsulfIuX1MYudLQnAWaPToRlppb9LM/9sL/1dYsgJD3WgBQaaAjrnaM4SNwFPMKtYOxv5PrDDXyezkNd60Iavezy6FH++Jzcjqg==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4636A511809D2E409C1A878D561BE447@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5324
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	09ae4d72-eff8-4c24-f918-08d87f156e18
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TNVMaUjxaRXl+En1QEQtpDCXWmmRO7IO5mLaYK+lPqNODdHo+/X4ZGXHuAbbSiUNPAJ68RvmXNfyHmHSUv25OhI4/Rp9oeoI3rRd3d7pVuzzO3aOKlnKABBjjFOtzbvJ8MUPsZ2zhPNKvMQIfsyMxEdji0fSef/V616py+wqvGBjZOHvCul9tFGT5EYzND4JYC0ZYiTd7RZ+jhUfXOycUTH0SFoZIEXSWIqqezekvmb/pPYWLe79OZP+BEHJJFYebPiz9WuTcFALnAXO4dnPW15nYVIXAPOPeA8PHeyPUi/A9UnnWwdw8nAZrsJV6ZiBGBu/3T5e9fNNeN2RVRRaGKkD4pdcteZ+2ZRo9MNIITYsbdbuAJ7F5YKKDhGgtHRlZuqXX+pVL9p/JdXr++Ra7Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(346002)(376002)(46966005)(6862004)(107886003)(26005)(4326008)(70206006)(70586007)(6512007)(186003)(8936002)(5660300002)(336012)(2616005)(6486002)(54906003)(82740400003)(8676002)(53546011)(316002)(47076004)(478600001)(2906002)(36756003)(356005)(86362001)(82310400003)(6506007)(33656002)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Nov 2020 09:55:45.1087
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7183759b-8108-4077-1299-08d87f1577a4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1882

Hi,

> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>
> Hi, Julien!
>
> On 10/30/20 7:18 PM, Julien Grall wrote:
>> Hi Oleksandr,
>>
>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>> the Linux SMMUv3 driver.
>>>>
>>>> Major differences from the Linux driver are as follows:
>>>> 1. Only Stage-2 translation is supported, as compared to the Linux driver
>>>>      that supports both Stage-1 and Stage-2 translations.
>>>
>>> First of all, thank you for the efforts!
>>>
>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>> that this combination will not work as of now:
>>>
>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>
>>> [snip]
>>>
>>> If this is expected, then is there any plan to make QEMU work as well?
>>>
>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on the QEMU side.
>>
>> Just for clarification, you are trying to boot Xen on QEMU, right?
> Exactly
>>
>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M, because the page-table layout between stage-1 and stage-2 is different.
> So, it is even more work then

Overall it would make more sense to spend some time adding proper support in QEMU than trying to modify the driver to support QEMU right now.

>>
>>>
>>>
>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>
>> I would recommend to get the SMMU supporting stage-2 page-tables.
> You mean in QEMU?

See before.

>>
>> Regardless of that, I think Xen should be able to say the SMMU is not supported rather than crashing.
> Yes, that would be nice

Fully agree, and we will look into that.

Anything you could share so that we could quickly reproduce your setup would be more than great.

Regards
Bertrand

>>
>> Cheers,
>>
> Thank you,
>
> Oleksandr



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 10:12:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 10:12:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17578.42368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZWp6-0000pP-8I; Mon, 02 Nov 2020 10:12:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17578.42368; Mon, 02 Nov 2020 10:12:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZWp6-0000pI-5A; Mon, 02 Nov 2020 10:12:24 +0000
Received: by outflank-mailman (input) for mailman id 17578;
 Mon, 02 Nov 2020 10:12:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nG/o=EI=epam.com=prvs=95752e5c40=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kZWp4-0000pD-T7
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 10:12:23 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b28e0276-fc8f-442d-b9bd-ae75d3b5f551;
 Mon, 02 Nov 2020 10:12:21 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A2A9jji027951; Mon, 2 Nov 2020 10:12:15 GMT
Received: from eur01-db5-obe.outbound.protection.outlook.com
 (mail-db5eur01lp2056.outbound.protection.outlook.com [104.47.2.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 34h02rucp0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Nov 2020 10:12:15 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4018.eurprd03.prod.outlook.com (2603:10a6:208:73::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Mon, 2 Nov
 2020 10:12:12 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3499.030; Mon, 2 Nov 2020
 10:12:12 +0000
X-Inumbo-ID: b28e0276-fc8f-442d-b9bd-ae75d3b5f551
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QvU02lfRj6Ic2TkzUWbl49vG1i7F61/iqykKOBe81g2Rwmca6HpPOcdKMW3/xktRn1BAUpq0XlFnNSytjDWwiRhP98oKbK3stEKrjpvZK6cYAtmJ0W9nTpxbw37dBGhxqzw9bTTUh4TdO6ukrHrTvnhikRcDDtY8ZAUNYBXbKOtJ5tJP7p0YG8SVHhgYs06MdN5DOtkwxG6yU5tUnXVwiqH4LADn0QlvD0GRBDE1f+frmx3OVsUK5ETvfRMLu6APodY1Tzmnxi9aJ9rZwmSRg2b+qcg+fRBB2kGaYrQJST0NbbWcjvbZYCC6hS11m9802S1P6rjR9GtiptgGoFlIqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zL3imZAcZA9qSoMPlX7S/WFD8xp3V8WJlIBuF+fohzI=;
 b=cCIahvhaVK0jzSvjc19O46XabJLzmta9BT9K6o08V5vOM1VhdKoJipHPya3b3twXeUOcE+JSP+fBGr+SImOO6xiG+FWQrOoDV3MUCAcDNbhPcNGHRxNnvARss97YTR/rq4aJiFHJeF2gbuVEfml8v8s1hEG68HKJ2O0u9w7Cf8pRkhFAIfW82zuZnf4jJsPlBu8guVWgRaWbQonE/e81LPb4XzndLkRNj5/fYGaO2KRRp9DVkH7T1nFkLlWN4fNodUPGxtSqyCqpwmu8cLjOqFMApm62C4/5U3GOb13yIPYaIT1oUAwTmTGtNjyt5LjUJJ6eAdR/jBOVRM6t9VSFOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zL3imZAcZA9qSoMPlX7S/WFD8xp3V8WJlIBuF+fohzI=;
 b=AFZX+hZusjZqLzjL41M4Lmi/Wm06mF2QQ0fpnvY5aOh6ARWvaTZAhRBI91sIMhVNlWuatPE6SO8KzPgHwNonxYYaY7909Bf6axbiVScTONqxBVh4Vc0nuiBndwYUzqK0XGA95B+o4MXdnHsoxamjYAxIyEg9sSWdN9SZZe+ry9wDf2I5+o/J/PdYT2NdF3nDuBS8QFoiQGk5STUoEUSr1S0bjsHHrpESXN+acvRWS2cSMcFNq/lR/u+vqUupDrhy0jS4kpGPEp9j7QoxtX23jTfgIsVp139dBUatyy73p7H5p/EZE5O6SCX1NZOegjgYff1Odq7JACfZhhlx6hApvQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>
CC: Julien Grall <julien@xen.org>, Rahul Singh <Rahul.Singh@arm.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWrqmeCV4SGlhADU2GY14D47cn76mwZCkAgAP4JoCAAELzAIAABK0A
Date: Mon, 2 Nov 2020 10:12:12 +0000
Message-ID: <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
References: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
In-Reply-To: <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 19913cd9-188a-405e-f722-08d87f17c44c
x-ms-traffictypediagnostic: AM0PR03MB4018:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB4018BD5FDEFC3AEDD7B72F7BE7100@AM0PR03MB4018.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 QmCAyQSuA2CxWJ+sxQ27VFpjob+3mVJxoQfNgySvGwIMSMBCTNr/cfIfX+oNmDErpwJhg/Rugd9wYxHGu8duo5qlfGpeM/g+ID+L79Ptf+E+F2JGmxCAH0vzhzEezHbU80A1Ga+sLH54M7bUz7knuhsSAKWlEHzZWO6oQNwsZZxzhiR5KK48/JqD12nx2VfOB5Rqpjlfkj1i7PuJGoB6ArQitjePHVofxmpRdzuhH9MzQyfPUuZwDS3Xe9NuL95nQX6LrHofcFeaKxI8sxF1fvlXwpKD7/BL65vXKopMdTEtkoU22vHf07sY+GcJUQpDWW6exYR5F0dIE/wuL4CC0g==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(376002)(136003)(346002)(396003)(5660300002)(186003)(86362001)(478600001)(31686004)(26005)(71200400001)(8936002)(8676002)(2906002)(6512007)(66946007)(316002)(76116006)(36756003)(4326008)(66556008)(66446008)(54906003)(64756008)(107886003)(2616005)(53546011)(31696002)(6506007)(6486002)(66476007)(110136005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 4xBIy1IEY1f8XaPLPa83UGk+E9bKj0vcqJTl9xEptYX28U+nWqYiceGW+Hwjq52Fw3WdhR1TJbIwT65JrEf8EDSKUKsdt5LnphojdY2djAgCauvxrr+RyvGm5Ww5JQK6sEM75DHhXc3gqAt9XkNOykkDrV6LFq+JxaZatYyPnxOV6BOUJbTsLohUhYTHaYKkqFHFlfprdSX+hkUTA+AFPTEIbqyRzjdz4JysBCADJv59d9aDB4Z6Lah9DwdxhAN3OacLjurmS2KlhvQ32h1xcrgg2RopSTBSlS7ozF1+DiXyFjURG6dNL3nOJUgGeSC+dWifxr5DCpKOJ00AD91/tc2U9AmThnm3tSO0VnSFg9/iB1QWZ201NANRSyO/CcEnOEYaFOas1x8reaZGdF9eQxg2CQJGpAjiS823ADYy2RM4+skR+47k5rT03fgUzEwzBfty+ttOoUhnKwRguwYEYGmQO073ogWiYbv4oXjLZBpNFNCItxB/oy9AHodN7VdCpC6yAK7JCJ8L2Z1am1/87/EEJ+/lwTTsMeywY0kWsQwpXDuSADJYCUV0Hcu411tCa4bodh7Yhm4aQVvg+I/FxKditQsK8IFmSTqRQD8XRblT60FcyF4OFSxp7MKg6Q1zN8AQjtIHe+xjS52h+nmfsw==
Content-Type: text/plain; charset="utf-8"
Content-ID: <16FA9744C5E1C64DAA9B614D3904BE0E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 19913cd9-188a-405e-f722-08d87f17c44c
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Nov 2020 10:12:12.6143
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: F+KhXKdUHdhdqzRcCStAvSEceDDIuzx3TIvD0Z3rZRFpjdOAL6DcVQ5dbTB+ckXTRiCbow9CME+67suC4eLYJCv8v5I2FA+dUzGB7KTOmumXxDjt+d0JEug36X+sdn4d
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4018
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-02_03:2020-11-02,2020-11-02 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 adultscore=0 clxscore=1015 priorityscore=1501 malwarescore=0
 suspectscore=0 impostorscore=0 spamscore=0 bulkscore=0 mlxscore=0
 mlxlogscore=999 phishscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011020080

Hi,

On 11/2/20 11:55 AM, Bertrand Marquis wrote:
> Hi,
>
>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>
>> Hi, Julien!
>>
>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>> Hi Oleksandr,
>>>
>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>> the Linux SMMUv3 driver.
>>>>>
>>>>> Major differences between the Linux driver are as follows:
>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>       that supports both Stage-1 and Stage-2 translations.
>>>> First of all thank you for the efforts!
>>>>
>>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>>>
>>>> that this combination will not work as of now:
>>>>
>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>
>>>> [snip]
>>>>
>>>> If this is expected, then is there any plan to make QEMU work as well?
>>>>
>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on the QEMU side.
>>> Just for clarification, you are trying to boot Xen on QEMU, right?
>> Exactly
>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>> So, it is even more work then
> Overall it would make more sense to spend some time adding proper support in Qemu than trying to modify the driver to support Qemu right now.
>
>>>>
>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>
>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>> I would recommend to get the SMMU supporting stage-2 page-tables.
>> You mean in QEMU?
> See before.
>
>>> Regardless of that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>> Yes, that would be nice
> Fully agree and we will look into that.
>
> Anything you could share so that we could quickly reproduce your setup would be more than great.

Nothing special,

qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \

-machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \

-nographic -serial mon:stdio [..snip..]

I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1

>
> Regards
> Bertrand
>
>>> Cheers,
>>>
>> Thank you,
>>
>> Oleksandr


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 11:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 11:05:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17601.42380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZXds-0005BB-3E; Mon, 02 Nov 2020 11:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17601.42380; Mon, 02 Nov 2020 11:04:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZXdr-0005B4-Vr; Mon, 02 Nov 2020 11:04:51 +0000
Received: by outflank-mailman (input) for mailman id 17601;
 Mon, 02 Nov 2020 11:04:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGCx=EI=st.com=fabrice.gasnier@srs-us1.protection.inumbo.net>)
 id 1kZXdp-0005Az-9B
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 11:04:49 +0000
Received: from mx07-00178001.pphosted.com (unknown [91.207.212.93])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f4f80c1-2be4-462c-9ecf-db3daae5b729;
 Mon, 02 Nov 2020 11:04:47 +0000 (UTC)
Received: from pps.filterd (m0046660.ppops.net [127.0.0.1])
 by mx07-00178001.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A2B2ptP020099; Mon, 2 Nov 2020 12:04:45 +0100
Received: from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])
 by mx07-00178001.pphosted.com with ESMTP id 34h031a3kw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Nov 2020 12:04:45 +0100
Received: from euls16034.sgp.st.com (euls16034.sgp.st.com [10.75.44.20])
 by beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 614FF100034;
 Mon,  2 Nov 2020 12:04:43 +0100 (CET)
Received: from Webmail-eu.st.com (sfhdag1node3.st.com [10.75.127.3])
 by euls16034.sgp.st.com (STMicroelectronics) with ESMTP id 96E252AD9F8;
 Mon,  2 Nov 2020 12:04:42 +0100 (CET)
Received: from [10.211.2.101] (10.75.127.45) by SFHDAG1NODE3.st.com
 (10.75.127.3) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 2 Nov
 2020 12:04:37 +0100
X-Inumbo-ID: 0f4f80c1-2be4-462c-9ecf-db3daae5b729
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=st.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=STMicroelectronics;
 bh=c46BUwKMyDOK+b3xICLc4SPGOwqFHPWlLu+N0eBiDsc=;
 b=qlgXi6rP2TqaGZFGGpTYwMqvPmL90+2jNgkIDUekhb1daD59BN2gIpspD9UJNonwWatj
 OHUgUW/gkuHA03EmB0FarCr3OF9wq9SA2yCiOWRX+Ek9VIR1j/QRToM/UC2lNjWAf3Kq
 Gq1pybeCkQlXMIpdTOQ7P/iFELEu7O6whhQllNenKEe5/XK6+0VIIemg87dHr4bkgTaI
 jiPcxx/TlvDgIqRR4jdhdt3eOubNpSinANuzNPCvX7gJnJy8bY2eDVhF2FtvabV8/mVh
 TOcw0QoQ4k7AwJJMs4NADeWVkO0Uncs5SNpNP0tOBNteIl80ydQPyt/xaWqAI4BRaYqI hQ== 
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CC: Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	Javier González <javier@javigon.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Alexandre Belloni <alexandre.belloni@bootlin.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Andrew Donnellan <ajd@linux.ibm.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Baolin Wang <baolin.wang7@gmail.com>,
	Benson Leung <bleung@chromium.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Bruno Meneguele <bmeneg@redhat.com>,
	Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy <dmurphy@ti.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Enric Balletbo i Serra <enric.balletbo@collabora.com>,
	Felipe Balbi <balbi@kernel.org>,
	Frederic Barrat <fbarrat@linux.ibm.com>,
	Guenter Roeck <groeck@chromium.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Heikki Krogerus <heikki.krogerus@linux.intel.com>,
	Jens Axboe <axboe@kernel.dk>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	Jonathan Cameron <jic23@kernel.org>, Juergen Gross <jgross@suse.com>,
	Konstantin Khlebnikov <koct9i@gmail.com>,
	Kranthi Kuntala <kranthi.kuntala@intel.com>,
	Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
	Lars-Peter Clausen <lars@metafoo.de>, Len Brown <lenb@kernel.org>,
	Leonid Maksymchuk <leonmaxx@gmail.com>,
	Ludovic Desroches <ludovic.desroches@microchip.com>,
	Mario Limonciello <mario.limonciello@dell.com>,
	Mark Gross <mgross@linux.intel.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>,
	Nicolas Ferre <nicolas.ferre@microchip.com>,
	Niklas Cassel <niklas.cassel@wdc.com>,
	Oded Gabbay <oded.gabbay@gmail.com>,
	Oleh Kravchenko <oleg@kaa.org.ua>, Orson Zhai <orsonzhai@gmail.com>,
	Pavel Machek <pavel@ucw.cz>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
	Peter Rosin <peda@axentia.se>, Petr Mladek <pmladek@suse.com>,
	Philippe Bergheaud <felix@linux.ibm.com>,
	Richard Cochran <richardcochran@gmail.com>,
	Sebastian Reichel <sre@kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
	Thomas Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>,
	Vaibhav Jain <vaibhav@linux.ibm.com>,
	Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>, <linux-acpi@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>, <linux-iio@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	<linux-pm@vger.kernel.org>, <linux-stm32@st-md-mailman.stormreply.com>,
	<linux-usb@vger.kernel.org>, <linuxppc-dev@lists.ozlabs.org>,
	<netdev@vger.kernel.org>, <xen-devel@lists.xenproject.org>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
 <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
 <5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
 <20201030110925.3e09d59e@coco.lan>
From: Fabrice Gasnier <fabrice.gasnier@st.com>
Message-ID: <cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
Date: Mon, 2 Nov 2020 12:04:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201030110925.3e09d59e@coco.lan>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.75.127.45]
X-ClientProxiedBy: SFHDAG4NODE2.st.com (10.75.127.11) To SFHDAG1NODE3.st.com
 (10.75.127.3)
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-02_03:2020-11-02,2020-11-02 signatures=0

On 10/30/20 11:09 AM, Mauro Carvalho Chehab wrote:
> On Fri, 30 Oct 2020 10:19:12 +0100
> Fabrice Gasnier <fabrice.gasnier@st.com> wrote:
> 
>> Hi Mauro,
>>
>> [...]
>>
>>>  
>>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
>>> +KernelVersion:	4.12
>>> +Contact:	benjamin.gaignard@st.com
>>> +Description:
>>> +		Reading returns the list possible quadrature modes.
>>> +
>>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
>>> +KernelVersion:	4.12
>>> +Contact:	benjamin.gaignard@st.com
>>> +Description:
>>> +		Configure the device counter quadrature modes:
>>> +
>>> +		channel_A:
>>> +			Encoder A input servers as the count input and B as
>>> +			the UP/DOWN direction control input.
>>> +
>>> +		channel_B:
>>> +			Encoder B input serves as the count input and A as
>>> +			the UP/DOWN direction control input.
>>> +
>>> +		quadrature:
>>> +			Encoder A and B inputs are mixed to get direction
>>> +			and count with a scale of 0.25.
>>> +  
>>
> 
> Hi Fabrice,
> 
>> I just noticed that since Jonathan's question in v1.
>>
>> The above ABI has been moved in the past, as discussed in [1]. You can take a
>> look at:
>> b299d00 IIO: stm32: Remove quadrature related functions from trigger driver
>>
>> Could you please remove the above chunk?
>>
>> With that, for the stm32 part:
>> Acked-by: Fabrice Gasnier <fabrice.gasnier@st.com>
> 
> 
> Hmm... probably those were re-introduced due to a rebase. This
> series was originally written about 1.5 years ago.
> 
> I'll drop those hunks.

Hi Mauro, Greg,

I just figured out that this patch has been applied with the above hunk.

This should be dropped: is there a fix on its way already?
(I may have missed it)

Please advise,
Fabrice
> 
> Thanks!
> Mauro
> 
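[Editorial aside, not part of the thread: the sysfs attributes quoted above are plain text files, so userspace reads them with ordinary file I/O. The helper below is an illustrative sketch; `read_iio_attr` and the fixed-size path buffer are assumptions for the example, not code from any kernel tree.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch: read one sysfs attribute of an IIO device,
 * e.g. /sys/bus/iio/devices/iio:device0/in_count0_quadrature_mode.
 * Returns 0 on success, -1 on error; strips the trailing newline
 * that sysfs attribute reads conventionally carry.
 */
static int read_iio_attr(const char *device_dir, const char *name,
                         char *buf, size_t len)
{
    char path[512];
    FILE *f;

    snprintf(path, sizeof(path), "%s/%s", device_dir, name);
    f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, (int)len, f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';
    return 0;
}
```

With such a helper, `in_count_quadrature_mode_available` yields the space-separated mode list (`channel_A channel_B quadrature`) and `in_count0_quadrature_mode` the currently configured mode.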


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 12:46:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 12:46:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17632.42391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZDb-0005HT-1k; Mon, 02 Nov 2020 12:45:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17632.42391; Mon, 02 Nov 2020 12:45:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZDa-0005HM-V2; Mon, 02 Nov 2020 12:45:50 +0000
Received: by outflank-mailman (input) for mailman id 17632;
 Mon, 02 Nov 2020 12:45:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+SPx=EI=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kZZDZ-0005HH-4L
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 12:45:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00866e5f-1b31-4c79-bbf0-9b7014648075;
 Mon, 02 Nov 2020 12:45:48 +0000 (UTC)
Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 889FA223EA;
 Mon,  2 Nov 2020 12:45:45 +0000 (UTC)
X-Inumbo-ID: 00866e5f-1b31-4c79-bbf0-9b7014648075
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604321146;
	bh=ncwNHxV3miBTR7+Pee29GvWhEvy0Pry8NIrln2TBpDk=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=fplJUMXVQzQPj7nj4dUnCkxvodKHovhtsUbV4g2aUQ79+fJ6E4tzwHHtLwPrawdfm
	 ygXz9B3qUMtetcgvuAGg7Eh6Ty6IbDb7/eMxgsGMBrLyyTRXblVKHla6o0f+HiEPyI
	 A/3xupQPmbuXzKqd7UnamHtVajjU98f5F1v6PGqY=
Date: Mon, 2 Nov 2020 13:46:41 +0100
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Fabrice Gasnier <fabrice.gasnier@st.com>
Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
	Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	Javier González <javier@javigon.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Alexandre Belloni <alexandre.belloni@bootlin.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Andrew Donnellan <ajd@linux.ibm.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Baolin Wang <baolin.wang7@gmail.com>,
	Benson Leung <bleung@chromium.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Bruno Meneguele <bmeneg@redhat.com>,
	Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy <dmurphy@ti.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Enric Balletbo i Serra <enric.balletbo@collabora.com>,
	Felipe Balbi <balbi@kernel.org>,
	Frederic Barrat <fbarrat@linux.ibm.com>,
	Guenter Roeck <groeck@chromium.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Heikki Krogerus <heikki.krogerus@linux.intel.com>,
	Jens Axboe <axboe@kernel.dk>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	Jonathan Cameron <jic23@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Konstantin Khlebnikov <koct9i@gmail.com>,
	Kranthi Kuntala <kranthi.kuntala@intel.com>,
	Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
	Lars-Peter Clausen <lars@metafoo.de>, Len Brown <lenb@kernel.org>,
	Leonid Maksymchuk <leonmaxx@gmail.com>,
	Ludovic Desroches <ludovic.desroches@microchip.com>,
	Mario Limonciello <mario.limonciello@dell.com>,
	Mark Gross <mgross@linux.intel.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>,
	Nicolas Ferre <nicolas.ferre@microchip.com>,
	Niklas Cassel <niklas.cassel@wdc.com>,
	Oded Gabbay <oded.gabbay@gmail.com>,
	Oleh Kravchenko <oleg@kaa.org.ua>, Orson Zhai <orsonzhai@gmail.com>,
	Pavel Machek <pavel@ucw.cz>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
	Peter Rosin <peda@axentia.se>, Petr Mladek <pmladek@suse.com>,
	Philippe Bergheaud <felix@linux.ibm.com>,
	Richard Cochran <richardcochran@gmail.com>,
	Sebastian Reichel <sre@kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
	Thomas Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>,
	Vaibhav Jain <vaibhav@linux.ibm.com>,
	Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>, linux-acpi@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-iio@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-pm@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
	linux-usb@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, xen-devel@lists.xenproject.org,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201102124641.GA881895@kroah.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
 <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
 <5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
 <20201030110925.3e09d59e@coco.lan>
 <cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>

On Mon, Nov 02, 2020 at 12:04:36PM +0100, Fabrice Gasnier wrote:
> On 10/30/20 11:09 AM, Mauro Carvalho Chehab wrote:
> > On Fri, 30 Oct 2020 10:19:12 +0100
> > Fabrice Gasnier <fabrice.gasnier@st.com> wrote:
> > 
> >> Hi Mauro,
> >>
> >> [...]
> >>
> >>>  
> >>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> >>> +KernelVersion:	4.12
> >>> +Contact:	benjamin.gaignard@st.com
> >>> +Description:
> >>> +		Reading returns the list possible quadrature modes.
> >>> +
> >>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
> >>> +KernelVersion:	4.12
> >>> +Contact:	benjamin.gaignard@st.com
> >>> +Description:
> >>> +		Configure the device counter quadrature modes:
> >>> +
> >>> +		channel_A:
> >>> +			Encoder A input servers as the count input and B as
> >>> +			the UP/DOWN direction control input.
> >>> +
> >>> +		channel_B:
> >>> +			Encoder B input serves as the count input and A as
> >>> +			the UP/DOWN direction control input.
> >>> +
> >>> +		quadrature:
> >>> +			Encoder A and B inputs are mixed to get direction
> >>> +			and count with a scale of 0.25.
> >>> +  
> >>
> > 
> > Hi Fabrice,
> > 
> >> I just noticed that since Jonathan's question in v1.
> >>
> >> The above ABI has been moved in the past, as discussed in [1]. You can take a
> >> look at:
> >> b299d00 IIO: stm32: Remove quadrature related functions from trigger driver
> >>
> >> Could you please remove the above chunk?
> >>
> >> With that, for the stm32 part:
> >> Acked-by: Fabrice Gasnier <fabrice.gasnier@st.com>
> > 
> > 
> > Hmm... probably those were re-introduced due to a rebase. This
> > series was originally written about 1.5 years ago.
> > 
> > I'll drop those hunks.
> 
> Hi Mauro, Greg,
> 
> I just figured out that this patch has been applied with the above hunk.
> 
> This should be dropped: is there a fix on its way already?
> (I may have missed it)

Can you send a fix for just this hunk?

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:12:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:12:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17643.42428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdh-0007uT-Lx; Mon, 02 Nov 2020 13:12:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17643.42428; Mon, 02 Nov 2020 13:12:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdh-0007uM-IH; Mon, 02 Nov 2020 13:12:49 +0000
Received: by outflank-mailman (input) for mailman id 17643;
 Mon, 02 Nov 2020 13:12:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZZdg-0007rF-5g
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:12:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c5cd9084-e640-4889-ab05-49d381c193a2;
 Mon, 02 Nov 2020 13:12:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C226AB8EC;
 Mon,  2 Nov 2020 13:12:41 +0000 (UTC)
X-Inumbo-ID: c5cd9084-e640-4889-ab05-49d381c193a2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604322761;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SMWFDeqyO4Bqdr2XBTO3XuyCVTVJJjcu3BY2gEaI7+0=;
	b=JJDx9OQIvSVxsESeXmi9zIcNr8tsUuIarqK6CP/0pAG4ox1NPFkoPUl92GKoGN1awPs6va
	QLETHCgUOgD5qVHaXfXZu+GUBWq5RJE19pfDqzTVLrbPTzjKJEFbl8giH0jwd7Gk/NRfzw
	ZclF903txCmPv+XjdDooLn8+i+qrvkw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] xen/spinlocks: spin_trylock with interrupts off is always fine
Date: Mon,  2 Nov 2020 14:12:38 +0100
Message-Id: <20201102131239.14134-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201102131239.14134-1-jgross@suse.com>
References: <20201102131239.14134-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.

Add a bool parameter "try" to check_lock() for handling this case.

Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with the wrong interrupt flags, this will be caught when taking
the lock.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- corrected comment (Jan Beulich)
---
 xen/common/spinlock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce3106e2d3..b4aaf6bce6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug)
+static void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
      * 
      * To guard against this subtle bug we latch the IRQ safety of every
      * spinlock in the system, on first use.
+     *
+     * A spin_trylock() with interrupts off is always fine, as this can't
+     * block and above deadlock scenario doesn't apply.
      */
+    if ( try && irq_safe )
+        return;
+
     if ( unlikely(debug->irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
@@ -102,7 +108,7 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l) ((void)0)
+#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
@@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 
 int _spin_is_locked(spinlock_t *lock)
 {
-    check_lock(&lock->debug);
-
     /*
      * Recursive locks may be locked by another CPU, yet we return
      * "false" here, making this function suitable only for use in
@@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
         return 0;
@@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:12:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:12:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17641.42403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdd-0007rW-58; Mon, 02 Nov 2020 13:12:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17641.42403; Mon, 02 Nov 2020 13:12:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdd-0007rP-28; Mon, 02 Nov 2020 13:12:45 +0000
Received: by outflank-mailman (input) for mailman id 17641;
 Mon, 02 Nov 2020 13:12:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZZdb-0007rF-9m
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:12:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e6f08998-ba57-4cb0-b617-e8048056c0f4;
 Mon, 02 Nov 2020 13:12:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98FD6B8E4;
 Mon,  2 Nov 2020 13:12:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604322761;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=2E8DyD98Ac4pDiAkzO0onJ0vvorQRsLItcjw4sbcAOA=;
	b=g2SX6IO7NtiWY/T/JCrKCwH/FknIsXtCCowAfOc8vFserqcrfAErCX8WJX9sMP9dJ332mf
	8q2jgNwf1UECS/tlrRK30ZpXHX6FWgeUpGF8msz/YdzLLyt0xvRxwzv4oFlpF6D9gvEkOD
	u2TTxW6Ye2/aFyx75jZXDsjKJZxCMvE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] xen/locking: fix and enhance lock debugging
Date: Mon,  2 Nov 2020 14:12:37 +0100
Message-Id: <20201102131239.14134-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This small series fixes two issues with spinlock debug code and adds
lock debug code to rwlocks in order to catch IRQ violations.

Juergen Gross (2):
  xen/spinlocks: spin_trylock with interrupts off is always fine
  xen/rwlock: add check_lock() handling to rwlocks

 xen/common/spinlock.c      | 17 ++++++++++-------
 xen/include/xen/rwlock.h   | 11 +++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 23 insertions(+), 7 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:12:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:12:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17642.42409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdd-0007ry-FH; Mon, 02 Nov 2020 13:12:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17642.42409; Mon, 02 Nov 2020 13:12:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdd-0007rn-9k; Mon, 02 Nov 2020 13:12:45 +0000
Received: by outflank-mailman (input) for mailman id 17642;
 Mon, 02 Nov 2020 13:12:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZZdb-0007rK-JQ
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:12:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7b84fdd6-28e7-4f23-99b9-3f92f0a91a01;
 Mon, 02 Nov 2020 13:12:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A637B8EF;
 Mon,  2 Nov 2020 13:12:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604322762;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T2Y6gfqjASfLX10Wg8Pwm6fRGu9Jh/ZF1PVmHXzkWYQ=;
	b=cuMWbktmuNjtb+fskPzMG61Ak3K97IFTgVS0UmXBlIfcAqmAhOslXS4IA33CvaVGOmLxoK
	Mx8W/FwYYFQ/KevfCJcYeV8NbEI96j8SDjsBgez/NcwqqYlVcrOtTPmjDfMqBXT+CHdJsX
	PK1BwVR3wN5wuB8OXjduqWcYKgvb1Ak=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
Date: Mon,  2 Nov 2020 14:12:39 +0100
Message-Id: <20201102131239.14134-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201102131239.14134-1-jgross@suse.com>
References: <20201102131239.14134-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Checking whether a lock is used consistently with regard to interrupts
being on or off is beneficial for rwlocks, too.

So add check_lock() calls to the rwlock functions, and make
check_lock() globally accessible for this purpose.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- call check_lock() unconditionally in try_lock variants (Jan Beulich)
---
 xen/common/spinlock.c      |  3 +--
 xen/include/xen/rwlock.h   | 11 +++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index b4aaf6bce6..405322c6b8 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug, bool try)
+void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -108,7 +108,6 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 427664037a..94496a0f53 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
@@ -66,6 +67,7 @@ static inline int _read_trylock(rwlock_t *lock)
          */
         if ( likely(_can_read_lock(cnts)) )
             return 1;
+
         atomic_sub(_QR_BIAS, &lock->cnts);
     }
     preempt_enable();
@@ -87,7 +89,10 @@ static inline void _read_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( likely(_can_read_lock(cnts)) )
+    {
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
@@ -162,7 +167,10 @@ static inline void _write_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
+    {
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     queue_write_lock_slowpath(lock);
     /*
@@ -197,6 +205,7 @@ static inline int _write_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( unlikely(cnts) ||
          unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
@@ -328,6 +337,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
         /* Drop the read lock because we don't need it anymore. */
         read_unlock(&percpu_rwlock->rwlock);
     }
+    else
+        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
 }
 
 static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ca13b600a0..9fa4e600c1 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,11 +21,13 @@ union lock_debug {
     };
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+void check_lock(union lock_debug *debug, bool try);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
+#define check_lock(l, t) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:12:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:12:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17644.42439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdp-00080g-05; Mon, 02 Nov 2020 13:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17644.42439; Mon, 02 Nov 2020 13:12:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZZdo-000802-Rl; Mon, 02 Nov 2020 13:12:56 +0000
Received: by outflank-mailman (input) for mailman id 17644;
 Mon, 02 Nov 2020 13:12:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZZdn-0007zL-Nv
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:12:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35334cdf-a47e-4928-bc6a-cb4f99ae239b;
 Mon, 02 Nov 2020 13:12:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZZdk-0007zi-OZ; Mon, 02 Nov 2020 13:12:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZZdk-0004KP-FU; Mon, 02 Nov 2020 13:12:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZZdk-0005IF-F3; Mon, 02 Nov 2020 13:12:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YHnYs76GVwf0r5J4CReRH1GUzXDLnnQioDrLRNle/OM=; b=ZS5HAwrzZVyjRnrsHXX2sFR7V7
	R0tulzFthkUlOusJCuVJPTo/DRWiVNPlDERWsO70qN70+0XAyzVuq7bMjdXr4SPIaCpTVhTQ0fnor
	Z13/OafYY7d8MwDUVvC5bO8jsdGb2abq3/BG4jWUC7lNviHDLyFDEXSMRe0NWDLbAcCw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156354-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156354: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-destroy:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 13:12:52 +0000

flight 156354 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156354/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-saverestore.2 fail pass in 156339
 test-amd64-i386-libvirt-xsm  22 guest-destroy              fail pass in 156339
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156339

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156339
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156339
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156339
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156339
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156339
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156339
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156339
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156339
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156339
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156339
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156339
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156339
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156354  2020-11-02 01:52:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:42:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17661.42455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZa5z-0002OZ-Jz; Mon, 02 Nov 2020 13:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17661.42455; Mon, 02 Nov 2020 13:42:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZa5z-0002OS-H4; Mon, 02 Nov 2020 13:42:03 +0000
Received: by outflank-mailman (input) for mailman id 17661;
 Mon, 02 Nov 2020 13:42:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZa5y-0002ON-Qn
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:42:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1130bbd7-d25c-4f8d-a91d-a2b6f5882fb1;
 Mon, 02 Nov 2020 13:42:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2EA7B933;
 Mon,  2 Nov 2020 13:41:58 +0000 (UTC)
X-Inumbo-ID: 1130bbd7-d25c-4f8d-a91d-a2b6f5882fb1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604324519;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gDUO+DQwIHvMULVJb6sBEqas3ynl/02HlJzoSrE8BzI=;
	b=NbkBkTTxXf5IcYoUWSvLRgsLmb1m1nJ58yo3MtJ5nS49cdUwODrzziyMfyZb8asJf341wu
	Td5cEG6HmQ190C0D0YQxIHFIHcQ6Co7vUgOPAWA84ZlnAwLzacl74cIC7/7Rj/PfR6fw6L
	NWTwIxmCZvHNVwBOxRdCDojjzFaF0BM=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
Date: Mon, 2 Nov 2020 14:41:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.20 11:28, Jan Beulich wrote:
> On 16.10.2020 12:58, Juergen Gross wrote:
>> --- a/xen/arch/x86/pv/shim.c
>> +++ b/xen/arch/x86/pv/shim.c
>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>       if ( port_is_valid(guest, port) )
>>       {
>>           struct evtchn *chn = evtchn_from_port(guest, port);
>> -        unsigned long flags;
>>   
>> -        spin_lock_irqsave(&chn->lock, flags);
>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>> -        spin_unlock_irqrestore(&chn->lock, flags);
>> +        if ( evtchn_read_trylock(chn) )
>> +        {
>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>> +            evtchn_read_unlock(chn);
>> +        }
> 
> Does this want some form of else, e.g. at least a printk_once()?

No, I don't think so.

This is just a race with the port_is_valid() test above where the
port is just being switched to invalid.

> 
>> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>       if ( rc )
>>           goto out;
>>   
>> -    flags = double_evtchn_lock(lchn, rchn);
>> +    double_evtchn_lock(lchn, rchn);
> 
> This introduces an unfortunate conflict with my conversion of
> the per-domain event lock to an rw one: It acquires rd's lock
> in read mode only, while the requirements here would not allow
> doing so. (Same in evtchn_close() then.)

Is it a problem to use write mode for those cases?

> 
>> @@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>>   
>>       lchn = evtchn_from_port(ld, lport);
>>   
>> -    spin_lock_irqsave(&lchn->lock, flags);
>> +    if ( !evtchn_read_trylock(lchn) )
>> +        return 0;
> 
> With this, the auxiliary call to xsm_evtchn_send() up from
> here should also go away again (possibly in a separate follow-
> on, which would then likely be a clean revert).

Yes.

> 
>> @@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
>>   
>>       d = v->domain;
>>       chn = evtchn_from_port(d, port);
>> -    spin_lock(&chn->lock);
>> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
>> -    spin_unlock(&chn->lock);
>> +    if ( evtchn_read_trylock(chn) )
>> +    {
>> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
>> +        evtchn_read_unlock(chn);
>> +    }
>>   
>>    out:
>>       spin_unlock_irqrestore(&v->virq_lock, flags);
>> @@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
>>           goto out;
>>   
>>       chn = evtchn_from_port(d, port);
>> -    spin_lock(&chn->lock);
>> -    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
>> -    spin_unlock(&chn->lock);
>> +    if ( evtchn_read_trylock(chn) )
>> +    {
>> +        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
>> +        evtchn_read_unlock(chn);
>> +    }
>>   
>>    out:
>>       spin_unlock_irqrestore(&v->virq_lock, flags);
> 
> As said before, I think these lock uses can go away altogether.
> I shall put together a patch.
> 
> And on the whole I'd really prefer if we first convinced ourselves
> that there's no way to simply get rid of the IRQ-safe locking
> forms instead, before taking a decision to go with this model with
> its extra constraints.
> 
>> @@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
>>   {
>>       struct domain *d = current->domain;
>>       struct evtchn *evtchn;
>> -    unsigned long flags;
>>   
>>       if ( unlikely(!port_is_valid(d, port)) )
>>           return -EINVAL;
>>   
>>       evtchn = evtchn_from_port(d, port);
>> -    spin_lock_irqsave(&evtchn->lock, flags);
>> -    evtchn_port_unmask(d, evtchn);
>> -    spin_unlock_irqrestore(&evtchn->lock, flags);
>> +    if ( evtchn_read_trylock(evtchn) )
>> +    {
>> +        evtchn_port_unmask(d, evtchn);
>> +        evtchn_read_unlock(evtchn);
>> +    }
> 
> I think this wants mentioning together with send / query in the
> description.

Okay.

> 
>> --- a/xen/include/xen/event.h
>> +++ b/xen/include/xen/event.h
>> @@ -105,6 +105,60 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>>   #define bucket_from_port(d, p) \
>>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>>   
>> +#define EVENT_WRITE_LOCK_INC    INT_MIN
>> +
>> +/*
>> + * Lock an event channel exclusively. This is allowed only with holding
>> + * d->event_lock AND when the channel is free or unbound either when taking
>> + * or when releasing the lock, as any concurrent operation on the event
>> + * channel using evtchn_read_trylock() will just assume the event channel is
>> + * free or unbound at the moment.
> 
> ... when the evtchn_read_trylock() returns false.

Okay.

> 
>> + */
>> +static inline void evtchn_write_lock(struct evtchn *evtchn)
>> +{
>> +    int val;
>> +
>> +    /*
>> +     * The lock can't be held by a writer already, as all writers need to
>> +     * hold d->event_lock.
>> +     */
>> +    ASSERT(atomic_read(&evtchn->lock) >= 0);
>> +
>> +    /* No barrier needed, atomic_add_return() is full barrier. */
>> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
>> +          val != EVENT_WRITE_LOCK_INC;
> 
> The _INC suffix is slightly odd for this 2nd use, but I guess
> the dual use will make it so for about any name you may pick.

I'll switch to a normal rwlock in V4.
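For reference, the counter-based scheme under discussion can be modelled
stand-alone roughly as below. This is only an illustrative sketch: the
write side mirrors the quoted hunk, but the read side is a guessed
counterpart (the patch excerpt above does not show it), and C11
stdatomic stands in for Xen's atomic_t.

```c
/*
 * Stand-alone model of the counter-based per-channel lock discussed
 * above: the counter is >= 0 while only readers hold it, and goes
 * negative (by EVENT_WRITE_LOCK_INC == INT_MIN) once a writer enters.
 */
#include <assert.h>
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

#define EVENT_WRITE_LOCK_INC INT_MIN

struct evtchn {
    atomic_int lock;            /* >= 0: reader count; < 0: writer active */
};

/* Reader: bump the count, but back off if a writer holds the lock. */
static bool evtchn_read_trylock(struct evtchn *evtchn)
{
    if ( atomic_fetch_add(&evtchn->lock, 1) < 0 )
    {
        atomic_fetch_sub(&evtchn->lock, 1);
        return false;
    }
    return true;
}

static void evtchn_read_unlock(struct evtchn *evtchn)
{
    atomic_fetch_sub(&evtchn->lock, 1);
}

/* Writer: add INT_MIN, then spin until the remaining readers drain. */
static void evtchn_write_lock(struct evtchn *evtchn)
{
    /* No writer can hold it already: writers serialize on d->event_lock. */
    assert(atomic_load(&evtchn->lock) >= 0);

    atomic_fetch_add(&evtchn->lock, EVENT_WRITE_LOCK_INC);
    while ( atomic_load(&evtchn->lock) != EVENT_WRITE_LOCK_INC )
        ;                       /* readers still inside their sections */
}

static void evtchn_write_unlock(struct evtchn *evtchn)
{
    atomic_fetch_sub(&evtchn->lock, EVENT_WRITE_LOCK_INC);
}
```

The dual use of EVENT_WRITE_LOCK_INC remarked on above is visible here:
it is both the value the writer adds and the value the counter must
reach before the writer may proceed.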


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:53:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:53:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17666.42467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZaGZ-0003NT-Kr; Mon, 02 Nov 2020 13:52:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17666.42467; Mon, 02 Nov 2020 13:52:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZaGZ-0003NM-Ht; Mon, 02 Nov 2020 13:52:59 +0000
Received: by outflank-mailman (input) for mailman id 17666;
 Mon, 02 Nov 2020 13:52:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iSH1=EI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZaGX-0003NA-W6
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:52:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4e8783a3-702b-43c5-8cf2-555bb8e3d1b8;
 Mon, 02 Nov 2020 13:52:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 90768AC3F;
 Mon,  2 Nov 2020 13:52:56 +0000 (UTC)
X-Inumbo-ID: 4e8783a3-702b-43c5-8cf2-555bb8e3d1b8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604325176;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gza3k4FbBeMI9MxVAdSOqIzM9oPUqn08pyvWUJUwN6g=;
	b=SQLG6KFm2DbYucRpb6FNU4zfsmj3kJRm7yV8j0v3YF5IxxOjPBCvOXPWACa9eOzUiK6EVk
	ca8lfycYzPS1kEtGY1dBRmmnkGDqGZEE/YarWjBAg0UUpr5PLmhEa7fpj1wb2P/Qk4O0Ko
	l55+FSvUB5bvWPPIbtSCLssVZSubeaQ=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
Date: Mon, 2 Nov 2020 14:52:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.11.2020 14:41, Jürgen Groß wrote:
> On 20.10.20 11:28, Jan Beulich wrote:
>> On 16.10.2020 12:58, Juergen Gross wrote:
>>> --- a/xen/arch/x86/pv/shim.c
>>> +++ b/xen/arch/x86/pv/shim.c
>>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>>       if ( port_is_valid(guest, port) )
>>>       {
>>>           struct evtchn *chn = evtchn_from_port(guest, port);
>>> -        unsigned long flags;
>>>   
>>> -        spin_lock_irqsave(&chn->lock, flags);
>>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>> -        spin_unlock_irqrestore(&chn->lock, flags);
>>> +        if ( evtchn_read_trylock(chn) )
>>> +        {
>>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>> +            evtchn_read_unlock(chn);
>>> +        }
>>
>> Does this want some form of else, e.g. at least a printk_once()?
> 
> No, I don't think so.
> 
> This is just a race with the port_is_valid() test above where the
> port is just being switched to invalid.

This may be such a race yes, but why do you think it _will_ be?
Any holding of the lock for writing (or in fact, any pending
acquire in write mode) will make this fail, which - if it's not
such a race - will mean an event which wasn't sent when it
should have been, with potentially fatal (to the guest)
consequences.

>>> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>       if ( rc )
>>>           goto out;
>>>   
>>> -    flags = double_evtchn_lock(lchn, rchn);
>>> +    double_evtchn_lock(lchn, rchn);
>>
>> This introduces an unfortunate conflict with my conversion of
>> the per-domain event lock to an rw one: It acquires rd's lock
>> in read mode only, while the requirements here would not allow
>> doing so. (Same in evtchn_close() then.)
> 
> Is it a problem to use write mode for those cases?

"Problem" can have a wide range of meanings - it's not going to
be the end of the world, but I view any use of a write lock as
a problem when a read lock would suffice. This can still harm
parallelism.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 13:59:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 13:59:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17670.42479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZaMl-0003aw-As; Mon, 02 Nov 2020 13:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17670.42479; Mon, 02 Nov 2020 13:59:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZaMl-0003ap-7l; Mon, 02 Nov 2020 13:59:23 +0000
Received: by outflank-mailman (input) for mailman id 17670;
 Mon, 02 Nov 2020 13:59:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZaMj-0003ak-Hg
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 13:59:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c13a6c88-96a8-4279-bf04-683ca3a7aa10;
 Mon, 02 Nov 2020 13:59:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A6BABACB5;
 Mon,  2 Nov 2020 13:59:19 +0000 (UTC)
X-Inumbo-ID: c13a6c88-96a8-4279-bf04-683ca3a7aa10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604325559;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vTBSOfCc5O2MV43b/TL4S4stycwueeg7Za2DteTavHc=;
	b=Q+erQINg9zOG+Rh5R+aq4TYMc/z6iTIi7kWgG+LxADR1gwOtEmD6MqT5Ik8tv1hmamHish
	ACYQUsaYjP8U4o0tpsRMa/po1GfPM0CufQUQahlNckfoF/F2jZSPOWTkGBdL/GIhfQZ8Ab
	g8r64n/Jhdb0IFXY29xOUx9nVIP30Us=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
 <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
Date: Mon, 2 Nov 2020 14:59:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.11.20 14:52, Jan Beulich wrote:
> On 02.11.2020 14:41, Jürgen Groß wrote:
>> On 20.10.20 11:28, Jan Beulich wrote:
>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>> --- a/xen/arch/x86/pv/shim.c
>>>> +++ b/xen/arch/x86/pv/shim.c
>>>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>>>        if ( port_is_valid(guest, port) )
>>>>        {
>>>>            struct evtchn *chn = evtchn_from_port(guest, port);
>>>> -        unsigned long flags;
>>>>    
>>>> -        spin_lock_irqsave(&chn->lock, flags);
>>>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>>> -        spin_unlock_irqrestore(&chn->lock, flags);
>>>> +        if ( evtchn_read_trylock(chn) )
>>>> +        {
>>>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>>> +            evtchn_read_unlock(chn);
>>>> +        }
>>>
>>> Does this want some form of else, e.g. at least a printk_once()?
>>
>> No, I don't think so.
>>
>> This is just a race with the port_is_valid() test above where the
>> port is just being switched to invalid.
> 
> This may be such a race yes, but why do you think it _will_ be?

According to the outlined lock discipline there is no other
possibility (assuming that the lock discipline is honored).

I'll have a look whether I can add some ASSERT()s to catch any
lock discipline violation.
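A hypothetical sketch of what such an ASSERT() could check, based on the
lock comment quoted earlier (writers must hold d->event_lock, and the
channel must be free or unbound). The helper name, the struct layouts
and the spin_is_locked()/ASSERT() shims are stand-ins here, not the
actual patch:

```c
/*
 * Self-contained sketch of a lock-discipline check for the write side.
 * spin_is_locked() and ASSERT() model their Xen counterparts.
 */
#include <assert.h>
#include <stdbool.h>

#define ASSERT assert

struct spinlock { bool locked; };
static bool spin_is_locked(const struct spinlock *l) { return l->locked; }

enum { ECS_FREE, ECS_UNBOUND, ECS_INTERDOMAIN };

struct domain { struct spinlock event_lock; };
struct evtchn { int state; };

/*
 * Writers must hold d->event_lock, and the channel must be free or
 * unbound, or concurrent evtchn_read_trylock() users could wrongly
 * conclude the channel is free/unbound when the trylock fails.
 */
static void check_write_lock_discipline(const struct domain *d,
                                        const struct evtchn *chn)
{
    ASSERT(spin_is_locked(&d->event_lock));
    ASSERT(chn->state == ECS_FREE || chn->state == ECS_UNBOUND);
}
```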

> 
>>>> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>        if ( rc )
>>>>            goto out;
>>>>    
>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>> +    double_evtchn_lock(lchn, rchn);
>>>
>>> This introduces an unfortunate conflict with my conversion of
>>> the per-domain event lock to an rw one: It acquires rd's lock
>>> in read mode only, while the requirements here would not allow
>>> doing so. (Same in evtchn_close() then.)
>>
>> Is it a problem to use write mode for those cases?
> 
> "Problem" can have a wide range of meanings - it's not going to
> be the end of the world, but I view any use of a write lock as
> a problem when a read lock would suffice. This can still harm
> parallelism.

Both cases are very rare in the lifetime of an event channel. I don't
think you'll ever be able to measure any performance impact from
switching these cases to a write lock for any well behaved guest.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 14:43:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 14:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17683.42491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZb3E-0007qQ-QY; Mon, 02 Nov 2020 14:43:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17683.42491; Mon, 02 Nov 2020 14:43:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZb3E-0007qJ-MD; Mon, 02 Nov 2020 14:43:16 +0000
Received: by outflank-mailman (input) for mailman id 17683;
 Mon, 02 Nov 2020 14:43:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K/pe=EI=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
 id 1kZb3D-0007qD-On
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 14:43:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1242c726-ef0d-4624-bcfc-6d2770191fbb;
 Mon, 02 Nov 2020 14:43:15 +0000 (UTC)
Received: from coco.lan (ip5f5ad5bd.dynamic.kabel-deutschland.de
 [95.90.213.189])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 63B96223FB;
 Mon,  2 Nov 2020 14:42:56 +0000 (UTC)
X-Inumbo-ID: 1242c726-ef0d-4624-bcfc-6d2770191fbb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604328194;
	bh=3oIo7M+kbfes9glXaYwVawYl0ThrpApAaoS3m8AadNs=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=o7vSyqavrA+tPEp8JUSX2Gre2OPdTrpaBl//HWqb1ndQojQ2+WQ+GCP6o1zviBXm5
	 BkixBkiludftpOuYl5RGYgd4ap8zUj38MqoeY5NGyExHQDLlRrYM+dKrsl0Sr0HfH8
	 qhyJWoyOHNQJOxgn2sog8B5I0MfzAibMKr1lQUA4=
Date: Mon, 2 Nov 2020 15:42:50 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Fabrice Gasnier <fabrice.gasnier@st.com>, Linux Doc Mailing List
 <linux-doc@vger.kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier =?UTF-8?B?R29uesOhbGV6?=
 <javier@javigon.com>, Jonathan Corbet <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Felipe Balbi <balbi@kernel.org>,
 Frederic Barrat <fbarrat@linux.ibm.com>, Guenter Roeck
 <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>, Heikki Krogerus
 <heikki.krogerus@linux.intel.com>, Jens Axboe <axboe@kernel.dk>, Johannes
 Thumshirn <johannes.thumshirn@wdc.com>, Jonathan Cameron
 <jic23@kernel.org>, Juergen Gross <jgross@suse.com>, Konstantin Khlebnikov
 <koct9i@gmail.com>, Kranthi Kuntala <kranthi.kuntala@intel.com>, Lakshmi
 Ramasubramanian <nramas@linux.microsoft.com>, Lars-Peter Clausen
 <lars@metafoo.de>, Len Brown <lenb@kernel.org>, Leonid Maksymchuk
 <leonmaxx@gmail.com>, Ludovic Desroches <ludovic.desroches@microchip.com>,
 Mario Limonciello <mario.limonciello@dell.com>, Mark Gross
 <mgross@linux.intel.com>, Maxime Coquelin <mcoquelin.stm32@gmail.com>,
 Michael Ellerman <mpe@ellerman.id.au>, Mika Westerberg
 <mika.westerberg@linux.intel.com>, Mike Kravetz <mike.kravetz@oracle.com>,
 Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>, Nicolas
 Ferre <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>,
 Oded Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>,
 Orson Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>, Vaibhav Jain
 <vaibhav@linux.ibm.com>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201102154250.45bee17f@coco.lan>
In-Reply-To: <20201102124641.GA881895@kroah.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
	<58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
	<5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
	<20201030110925.3e09d59e@coco.lan>
	<cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
	<20201102124641.GA881895@kroah.com>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 2 Nov 2020 13:46:41 +0100
Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:

> On Mon, Nov 02, 2020 at 12:04:36PM +0100, Fabrice Gasnier wrote:
> > On 10/30/20 11:09 AM, Mauro Carvalho Chehab wrote:  
> > > On Fri, 30 Oct 2020 10:19:12 +0100
> > > Fabrice Gasnier <fabrice.gasnier@st.com> wrote:
> > >   
> > >> Hi Mauro,
> > >>
> > >> [...]
> > >>  
> > >>>  
> > >>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> > >>> +KernelVersion:	4.12
> > >>> +Contact:	benjamin.gaignard@st.com
> > >>> +Description:
> > >>> +		Reading returns the list of possible quadrature modes.
> > >>> +
> > >>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
> > >>> +KernelVersion:	4.12
> > >>> +Contact:	benjamin.gaignard@st.com
> > >>> +Description:
> > >>> +		Configure the device counter quadrature modes:
> > >>> +
> > >>> +		channel_A:
> > >>> +			Encoder A input serves as the count input and B as
> > >>> +			the UP/DOWN direction control input.
> > >>> +
> > >>> +		channel_B:
> > >>> +			Encoder B input serves as the count input and A as
> > >>> +			the UP/DOWN direction control input.
> > >>> +
> > >>> +		quadrature:
> > >>> +			Encoder A and B inputs are mixed to get direction
> > >>> +			and count with a scale of 0.25.
> > >>> +    
> > >>  
> > > 
> > > Hi Fabrice,
> > >   
> > >> I just noticed that since Jonathan question in v1.
> > >>
> > >> Above ABI has been moved in the past as discussed in [1]. You can take a
> > >> look at:
> > >> b299d00 IIO: stm32: Remove quadrature related functions from trigger driver
> > >>
> > >> Could you please remove the above chunk ?
> > >>
> > >> With that, for the stm32 part:
> > >> Acked-by: Fabrice Gasnier <fabrice.gasnier@st.com>  
> > > 
> > > 
> > > Hmm... probably those were re-introduced due to a rebase. This
> > > series was originally written about 1.5 years ago.
> > > 
> > > I'll drop those hunks.  
> > 
> > Hi Mauro, Greg,
> > 
> > I just figured out this patch has been applied with above hunk.
> > 
> > This should be dropped: is there a fix on its way already ?
> > (I may have missed it)  
> 
> Can you send a fix for just this hunk?

Hmm...

	$ git grep /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
	Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:What:                /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
	Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:What:             /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
	Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:What:               /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available

Even re-doing the changes from 
changeset b299d00420e2 ("IIO: stm32: Remove quadrature related functions from trigger driver")
at Documentation/ABI/testing/sysfs-bus-iio-timer-stm32, there's still
a third duplicate of some of those, as reported by the script:

	$ ./scripts/get_abi.pl validate 2>&1|grep quadra
	Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:117  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:14
	Warning: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available is defined 3 times:  Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:111  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:8

As in_count_quadrature_mode_available is also defined at:
	Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2

The best option here seems to be a patch that also drops the other
duplicate of this, probably moving in_count_quadrature_mode_available
to a generic entry inside Documentation/ABI/testing/sysfs-bus-iio.
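For illustration, the duplicate detection that the validate step performs boils down to collecting every "What:" line across the ABI files and flagging any sysfs path that is described more than once. A minimal Python sketch of that idea — not the real scripts/get_abi.pl, and with made-up stand-in file contents:

```python
# Hypothetical miniature of get_abi.pl's duplicate check: map each "What:"
# sysfs path to the files defining it, then report paths defined twice.
from collections import defaultdict

# Illustrative stand-ins for Documentation/ABI/testing files.
abi_files = {
    "sysfs-bus-iio-timer-stm32":
        "What:\t\t/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available\n"
        "KernelVersion:\t4.12\n",
    "sysfs-bus-iio-lptimer-stm32":
        "What:\t\t/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available\n"
        "KernelVersion:\t4.13\n",
}

def find_duplicates(files):
    seen = defaultdict(list)          # sysfs path -> list of defining files
    for name, text in files.items():
        for line in text.splitlines():
            if line.startswith("What:"):
                seen[line.split(None, 1)[1].strip()].append(name)
    return {path: where for path, where in seen.items() if len(where) > 1}

for path, where in find_duplicates(abi_files).items():
    print(f"Warning: {path} is defined {len(where)} times:  {'  '.join(where)}")
```

Moving such an entry into a single generic file (as proposed above) makes the duplicate list empty for that path.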

Comments?

Thanks,
Mauro

PS.: the IIO subsystem is the one that currently has the most duplicated
ABI entries:

$ ./scripts/get_abi.pl validate 2>&1|grep iio
Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:0  Documentation/ABI/testing/sysfs-bus-iio:394
Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:1  Documentation/ABI/testing/sysfs-bus-iio:395
Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:2  Documentation/ABI/testing/sysfs-bus-iio:396
Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:3  Documentation/ABI/testing/sysfs-bus-iio:397
Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:4  Documentation/ABI/testing/sysfs-bus-iio:398
Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:5  Documentation/ABI/testing/sysfs-bus-iio:399
Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_preset is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:100  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:0
Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:117  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:14
Warning: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available is defined 3 times:  Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:111  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:8
Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:0  Documentation/ABI/testing/sysfs-bus-iio:599
Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_powerdown is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:36  Documentation/ABI/testing/sysfs-bus-iio:588
Warning: /sys/bus/iio/devices/iio:deviceX/out_currentY_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-light-lm3533-als:43  Documentation/ABI/testing/sysfs-bus-iio-health-afe440x:38
Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:0  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:0
Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw_available is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:1  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:1
Warning: /sys/bus/iio/devices/iio:deviceX/sensor_sensitivity is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-distance-srf08:0  Documentation/ABI/testing/sysfs-bus-iio-proximity-as3935:8
Warning: /sys/bus/iio/devices/triggerX/sampling_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:92  Documentation/ABI/testing/sysfs-bus-iio:45


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:01:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:01:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17691.42503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbKw-0001CV-E3; Mon, 02 Nov 2020 15:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17691.42503; Mon, 02 Nov 2020 15:01:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbKw-0001CO-AO; Mon, 02 Nov 2020 15:01:34 +0000
Received: by outflank-mailman (input) for mailman id 17691;
 Mon, 02 Nov 2020 15:01:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qrwO=EI=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kZbKv-0001CJ-Er
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:01:33 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.50]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a7469c7-f9c5-437a-8f77-a208c161ef14;
 Mon, 02 Nov 2020 15:01:29 +0000 (UTC)
Received: from DB6PR07CA0204.eurprd07.prod.outlook.com (2603:10a6:6:42::34) by
 AM0PR08MB5506.eurprd08.prod.outlook.com (2603:10a6:208:17e::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Mon, 2 Nov
 2020 15:01:27 +0000
Received: from DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::c8) by DB6PR07CA0204.outlook.office365.com
 (2603:10a6:6:42::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.10 via Frontend
 Transport; Mon, 2 Nov 2020 15:01:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT061.mail.protection.outlook.com (10.152.21.234) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Mon, 2 Nov 2020 15:01:27 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Mon, 02 Nov 2020 15:01:27 +0000
Received: from 7f2b0e6bc2ac.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 753343AE-E938-4056-978D-9300F927EBAE.1; 
 Mon, 02 Nov 2020 15:01:03 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7f2b0e6bc2ac.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 02 Nov 2020 15:01:03 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6233.eurprd08.prod.outlook.com (2603:10a6:10:204::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Mon, 2 Nov
 2020 15:01:01 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.030; Mon, 2 Nov 2020
 15:01:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qrwO=EI=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kZbKv-0001CJ-Er
	for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:01:33 +0000
X-Inumbo-ID: 1a7469c7-f9c5-437a-8f77-a208c161ef14
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown [40.107.14.50])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1a7469c7-f9c5-437a-8f77-a208c161ef14;
	Mon, 02 Nov 2020 15:01:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AaEQNjMOgQnE4pUoDXwLH+fGx6hSaobFsJPsMkK+0UQ=;
 b=AeLybA7sIW+4p3EEMEv2YM6Abn/nBLVc+75Abm4kr9zpJFJjEODKtIXCZ+3Ksjs+DhEis2Qa8KzgaeorbYnBoFjSzZwicTfvzCD6KLIcpTRmFnM6GpsB6NnmNjLYcVsKEnAN7ZbcigL+b6dj7lqindb4dQCMurEsbgiOWZrWOlc=
Received: from DB6PR07CA0204.eurprd07.prod.outlook.com (2603:10a6:6:42::34) by
 AM0PR08MB5506.eurprd08.prod.outlook.com (2603:10a6:208:17e::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Mon, 2 Nov
 2020 15:01:27 +0000
Received: from DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::c8) by DB6PR07CA0204.outlook.office365.com
 (2603:10a6:6:42::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.10 via Frontend
 Transport; Mon, 2 Nov 2020 15:01:27 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT061.mail.protection.outlook.com (10.152.21.234) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Mon, 2 Nov 2020 15:01:27 +0000
Received: ("Tessian outbound ba2270a55485:v64"); Mon, 02 Nov 2020 15:01:27 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: c41971efbfeabcda
X-CR-MTA-TID: 64aa7808
Received: from 7f2b0e6bc2ac.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 753343AE-E938-4056-978D-9300F927EBAE.1;
	Mon, 02 Nov 2020 15:01:03 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7f2b0e6bc2ac.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Mon, 02 Nov 2020 15:01:03 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VHgSWPIlnMpWLdIQHsToT3LPjEkJdDTz9a77yeozOqCGwvs1h2GxjuOs68us+gVVbPPDsE54oprzyB3j/SLwH/8dkm5fBrLgiZ5O1pCQ1Ico2gFnhO82sUaQD6qTrwu6TLLlbIB6lCSINaWOIMn4ASBrpqscrF4EKq+FNXvfzxEEVjv0WLEcEZaBC/21mMOoBMHPnGamvzBAPSOu2vBjNiTLtxxV8/hrw2JjL9B2hzzcYCCcD/+DFo4L9A2msS7Ex72zGB0vTS8btFMG0oc4OFEPbHGDv+/fWXGbIo2IUFPPSKG/rcGtpk4yCRtXwuYMs778jxl9z73e86F7J1znRQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AaEQNjMOgQnE4pUoDXwLH+fGx6hSaobFsJPsMkK+0UQ=;
 b=a7W2LOIV1/wirexUAfd6xnZtNTMSvuhvzalW3UQWeMvHl2Gcs3VrtiAhZYLOKPh8HLU6LQ5k1rfQ8OfrRUf7o6Q4/4kt2dli3vx2DiPqkOJoB5eUIzwr+9wlPasPF7Klb/QqBQlQHNWvFePBdfK7+qUiDyAFiWRRnuzE9tRT5nIMBCNhfKNPGKtGdXTsN4rWZd3qFzyYreRzQ/KTGENcULTqokE6wJvtaHkdStJ2QcesfrLSSvf25lGmmtZt1ZrazadsMQtTJEupnZF1bqkIzXksDmytBUguOiFL2Pur19JSBG2enpuub8JQnRxds68moUMcns/pqDur3UI9moDIQQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AaEQNjMOgQnE4pUoDXwLH+fGx6hSaobFsJPsMkK+0UQ=;
 b=AeLybA7sIW+4p3EEMEv2YM6Abn/nBLVc+75Abm4kr9zpJFJjEODKtIXCZ+3Ksjs+DhEis2Qa8KzgaeorbYnBoFjSzZwicTfvzCD6KLIcpTRmFnM6GpsB6NnmNjLYcVsKEnAN7ZbcigL+b6dj7lqindb4dQCMurEsbgiOWZrWOlc=
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6233.eurprd08.prod.outlook.com (2603:10a6:10:204::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Mon, 2 Nov
 2020 15:01:01 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.030; Mon, 2 Nov 2020
 15:01:01 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr Andrushchenko
	<andr2000@gmail.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamwEIWAgABuPwCAA/gngIAAQvMAgAAErACAAFCygA==
Date: Mon, 2 Nov 2020 15:01:01 +0000
Message-ID: <932EA344-D673-49A0-B919-F505EB43F211@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
In-Reply-To: <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 93bcbb26-2cc7-4ec2-a40b-08d87f402c65
x-ms-traffictypediagnostic: DBBPR08MB6233:|AM0PR08MB5506:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5506CD6023F1EF7920F0B526FC100@AM0PR08MB5506.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 W0LownhjIfRiMhQPH6IVUFKTAdz/BMFAV2JJgIEAFiXHt3Gddhq4eYRsdXdLvq6+v93rroqMdf1WXlu1RSGnIV+hAnpLa6uFuZXFjUjALW0y06PaCWpw4/vfMfTIVQ6Qq1qLVzdO15J7snDA8BbI5EkX5D2B3eQs9iQwqjH5mZSLcyqk5wDKII4YXBdWWRi361jdtwJ/lh+Uya2wpIxrSHfWKqZio3JLZ5XmQDmWbRhQxqce98KlIPi4NlwiSYUwVe8MiY5YQJ5ZYpx1zfhGOMjcpgXYO+c42dF1YzHPQ/i1VNYB2lRoY0T0VN+Dc8GesLoEMKvxxU3GTvO5OeJmoA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(39860400002)(346002)(136003)(376002)(8936002)(8676002)(4326008)(33656002)(54906003)(6512007)(36756003)(26005)(86362001)(6506007)(2906002)(186003)(6486002)(2616005)(53546011)(478600001)(6916009)(64756008)(66556008)(5660300002)(91956017)(71200400001)(316002)(76116006)(66446008)(66476007)(66946007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 BRnlfceNR/mRMNKR5ptNAzfZZPZjkabWzh6cbIYf0Q7pWYDmmv3xHN48Jyd4TZsi/cJdx/wDfuMvyJQdqnU8/LeL3oY+J/p1eiRCbJvZsoQHLnHjj1H/CadzqjAtIBh87znUUHDFf5gcySrWjSjp5X4jdCmLwGHeAGDHXoJGgJlYPXgPZCa3Xh2g495aAbPxOu4kRTMLOdM+wmLutIp8Qr8Hyzy3DyL9lqh2r+xSycbZL+r5P51pNoSOtACk+TBEDdcfLkz1TQC8dGUB0uD51LZnJyGqY1Le4b3ZF570LXKnkAbZYSqqiX0Ey9wOcahQI0ZC37cPmdEF8Wg+8a4T6P98J4dOJR4MshjYQwvdXb1MjYR3EbJoZ0tn/NUp20F5RgwEH7cye9kZkmaSmf90CXa0bRwk6QXnnez4mZC+1wd0uKLyeteEnw4RGSrIo2O0JTsRfvAyIWspuSsNhYgnoaYMj9xfktcbo7yy6gTmL+tZq1igdSz2D6H6Om1lFwPA0HGlG6VkeQ15nr/uDF8AsN9BBMVs5uGcKdOncAL/KNAp1hSu3iyUoqJ4IDWcdS1sqQoApN4lGkzLwTn46N4Y+vweUdV6EHi3tMy9CLh1AWzV/mS111zuckR33iUF9OF6wlEaify8NxbQU+6vgeJNAg==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CABC18CC94F24B4EAA06A3BCF71F4262@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6233
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fd225763-bc09-44e9-4d43-08d87f401d3f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	LA9WjY8hZnOHIkF2SoZSn15IFHO19qM14fRe5ZYVsG7bl9hiHE/k2S5s/UIT8ag/SEGPOM4r8sSKUcrlnnzZyPxaNo2FeK6ceUYBrrmulHAmEYm8J+h5QP1ssyWBqD7L1IrucrjV60ihHaaGMzbY4pz1qeGxVoIjSjnriYQNgY5RN3n230qLqkCbZPgmEIpuB58AEuceM+hzxosI+TV7uFtRemPG6nsedv9XtJhz/HCwCPG0rfaaitkb/uXppKB/JEoz9zr9v6c/HBEeXfcsRJrk8sElTfEDLKwvQ5IwQwGr3mSkaZkysV00NiTfpLVgJLB4T4EAtzEoXfdK0jsueBoYaQGxhyou9GsqAb65Y6rdL5IWZAdY4jwjXWgwYKSvUaU/rqv5RP0T1sLaKHkUxA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(136003)(346002)(396003)(46966005)(81166007)(70206006)(26005)(82740400003)(47076004)(8936002)(82310400003)(336012)(2616005)(6862004)(5660300002)(6512007)(6506007)(53546011)(8676002)(4326008)(316002)(6486002)(186003)(36756003)(86362001)(478600001)(107886003)(70586007)(2906002)(356005)(33656002)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Nov 2020 15:01:27.2348
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 93bcbb26-2cc7-4ec2-a40b-08d87f402c65
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5506

Hello Oleksandr,

> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> Hi,
> 
> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>> Hi,
>> 
>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>> 
>>> Hi, Julien!
>>> 
>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>> Hi Oleksandr,
>>>> 
>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>> the Linux SMMUv3 driver.
>>>>>> 
>>>>>> Major differences between the Linux driver are as follows:
>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>      that supports both Stage-1 and Stage-2 translations.
>>>>> First of all thank you for the efforts!
>>>>> 
>>>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>>>> 
>>>>> that this combination will not work as of now:
>>>>> 
>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>> 
>>>>> [snip]
>>>>> 
>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>> 
>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>> Just for clarification, you are trying to boot Xen on QEMU, right?
>>> Exactly
>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>> So, it is even more work then
>> Overall it would make more sense to spend some time adding proper support in Qemu than trying to modify the driver to support Qemu right now.
>> 
>>>>> 
>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>> 
>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>> I would recommend to get the SMMU supporting stage-2 page-tables.
>>> You mean in QEMU?
>> See before.
>> 
>>>> Regardless of that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>> Yes, that would be nice
>> Fully agree and we will look into that.
>> 
>> Anything you could share so that we could quickly reproduce your setup would be more than great.
> 
> Nothing special,
> 
> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
> 
> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
> 
> -nographic -serial mon:stdio [..snip..]
> 
> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1

Thanks for sharing the information. I will also try to reproduce the issue and will let you know the results.

> 
>> 
>> Regards
>> Bertrand
>> 
>>>> Cheers,
>>>> 
>>> Thank you,
>>> 
>>> Oleksandr

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:04:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:04:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17702.42515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbNV-0001NQ-SA; Mon, 02 Nov 2020 15:04:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17702.42515; Mon, 02 Nov 2020 15:04:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbNV-0001NJ-PC; Mon, 02 Nov 2020 15:04:13 +0000
Received: by outflank-mailman (input) for mailman id 17702;
 Mon, 02 Nov 2020 15:04:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZbNU-0001N9-6O
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:04:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6db869b9-c5c1-488e-aa3f-aee8f2395eb6;
 Mon, 02 Nov 2020 15:04:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CD42BAC53;
 Mon,  2 Nov 2020 15:04:10 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kZbNU-0001N9-6O
	for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:04:12 +0000
X-Inumbo-ID: 6db869b9-c5c1-488e-aa3f-aee8f2395eb6
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 6db869b9-c5c1-488e-aa3f-aee8f2395eb6;
	Mon, 02 Nov 2020 15:04:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604329450;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=OBSHY70RSdE1JKjBVnT51+G54Dab9ykWCkhn2/kRJ9I=;
	b=YKHpv4kB72+593ck+Bhh8C2jkWOtQS8B8uhzBRjJR6gKP8zNBeaKw8vPE3awmGL2LqXzbo
	3Yv+v55VTQNFw4eaj5X5pfETy7MEW1q9Tfr7Pvsziepd1P3p3NhCWFYyvZFv2AnBnRCUur
	ntMewqi08NpLbTusJ4YYsa5Ybdo32x0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 0/2] XSA-343 followup patches
Date: Mon,  2 Nov 2020 16:04:06 +0100
Message-Id: <20201102150408.4954-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches for XSA-343 produced some fallout; especially the event
channel locking has proven to be problematic.

Patch 1 targets fifo event channels, avoiding races in case the fifo
queue of a specific event channel has been changed.

Patch 2 modifies the per event channel locking scheme in order to
avoid deadlocks and other problems resulting from the XSA-343 patches
having changed the event channel lock to require IRQs to be off.

Changes in V4:
- switched to real rwlock

Changes in V3:
- addressed comments

Juergen Gross (2):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock

 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 120 ++++++++++++++++++-------------------
 xen/common/event_fifo.c    |  25 ++++++--
 xen/include/xen/event.h    |  55 +++++++++++++----
 xen/include/xen/sched.h    |   6 +-
 6 files changed, 131 insertions(+), 90 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:04:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:04:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17703.42527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbNX-0001Ol-4y; Mon, 02 Nov 2020 15:04:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17703.42527; Mon, 02 Nov 2020 15:04:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbNX-0001Oe-1Z; Mon, 02 Nov 2020 15:04:15 +0000
Received: by outflank-mailman (input) for mailman id 17703;
 Mon, 02 Nov 2020 15:04:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZbNV-0001NE-9U
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:04:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2004d714-69eb-45ba-a865-cf5ba3c6fe44;
 Mon, 02 Nov 2020 15:04:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0925BAE66;
 Mon,  2 Nov 2020 15:04:11 +0000 (UTC)
X-Inumbo-ID: 2004d714-69eb-45ba-a865-cf5ba3c6fe44
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604329451;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XS9rtjxJ5bDV22GVepJ1dC1HEn3ZxgpC7bx8NFsirl4=;
	b=To/4YmMhyDepZoT/nTGF/ts8yxIRNr4icExb2CUTRcBwl9atfu9T3NPA7srtGX40HwDTxl
	RJDesn7HdlbNi0oe/mjO79fOgpp1JeJ6ElmVZ0bNpnNllkXQr5tku80DEctgvSF7Cfs5g8
	c3sXtrzwsp9mb8LXVXtiUVp9M8NM8nQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon,  2 Nov 2020 16:04:07 +0100
Message-Id: <20201102150408.4954-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201102150408.4954-1-jgross@suse.com>
References: <20201102150408.4954-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When sending an event, the event might need to change
queues, and the old queue needs to be kept around so that the links
between queue elements stay intact. For this purpose the event channel
contains last_priority and last_vcpu_id values allowing the old queue
to be identified.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistencies.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c6e58d2a1a..79090c04ca 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq = { };
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:04:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17704.42539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbNb-0001Rq-E2; Mon, 02 Nov 2020 15:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17704.42539; Mon, 02 Nov 2020 15:04:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbNb-0001Ri-Ao; Mon, 02 Nov 2020 15:04:19 +0000
Received: by outflank-mailman (input) for mailman id 17704;
 Mon, 02 Nov 2020 15:04:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZbNa-0001NE-85
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:04:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fa79b7b2-5b70-4f9b-b00a-15a918aa5037;
 Mon, 02 Nov 2020 15:04:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4C25BAE7A;
 Mon,  2 Nov 2020 15:04:11 +0000 (UTC)
X-Inumbo-ID: fa79b7b2-5b70-4f9b-b00a-15a918aa5037
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604329451;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VfRVDesY1aj8uixfP3uQI2NKUrlvpo7r4BslaV+ZyxU=;
	b=n/Rf0MoV7RbJtEFdUu33dV1Gr10R+kNW6/KXGK7M3xAYhlijeKfPJmIUs1rQkvzIUOOElr
	P3HT2qAQnsKhzqcYxd767YGjee1Er7smePw7Lnuf5cSEIfEuEGzetycdNN2f7ixnGdJATr
	t/L1PESWygvkeFSXhbE2x7MhU+WtFS4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v4 2/2] xen/evtchn: rework per event channel lock
Date: Mon,  2 Nov 2020 16:04:08 +0100
Message-Id: <20201102150408.4954-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201102150408.4954-1-jgross@suse.com>
References: <20201102150408.4954-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking for the case of
sending an event, removing the need to disable interrupts when taking
the lock.

The lock is needed to avoid races between event channel state changes
(creation, closing, binding) and normal operations (setting pending,
[un]masking, priority changes).

Use an rwlock, but with some restrictions:

- normal operations use read_trylock(); in case the lock cannot be
  obtained, the operation is omitted or a default state is returned

- closing an event channel uses write_lock(), with an ASSERT() that
  the lock is taken as writer only when the state of the event channel
  is appropriate (either free or unbound) either before or after the
  locked region.

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- switch to rwlock
- add ASSERT() to verify correct write_lock() usage

V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 120 ++++++++++++++++++-------------------
 xen/include/xen/event.h    |  55 +++++++++++++----
 xen/include/xen/sched.h    |   3 +-
 5 files changed, 111 insertions(+), 82 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_read_trylock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cd4a2c0501..89606e0385 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -133,7 +133,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        rwlock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -255,7 +255,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -271,14 +270,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -291,32 +290,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -326,7 +319,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -362,7 +354,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -379,7 +371,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -402,7 +394,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -440,14 +431,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -464,7 +455,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -476,13 +466,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -526,7 +516,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -559,14 +548,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -587,7 +576,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -688,14 +676,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -703,9 +691,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -725,7 +713,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -738,7 +725,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return 0;
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -773,7 +761,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -806,9 +794,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +827,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -849,7 +841,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -864,9 +855,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1068,15 +1061,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1155,16 +1149,17 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     struct domain *d = current->domain;
     unsigned int port = set_priority->port;
     struct evtchn *chn;
-    long ret;
-    unsigned long flags;
+    long ret = 0;
 
     if ( !port_is_valid(d, port) )
         return -EINVAL;
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    ret = evtchn_port_set_priority(d, chn, set_priority->priority);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        ret = evtchn_port_set_priority(d, chn, set_priority->priority);
+        evtchn_read_unlock(chn);
+    }
 
     return ret;
 }
@@ -1332,7 +1327,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1345,14 +1339,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1388,7 +1382,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1403,7 +1396,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1413,7 +1407,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 2ed4be78f6..9adf70fe31 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+/*
+ * Lock an event channel exclusively. This is allowed only with holding
+ * d->event_lock AND when the channel is free or unbound either when taking
+ * or when releasing the lock, as any concurrent operation on the event
+ * channel using evtchn_read_trylock() will just assume the event channel is
+ * free or unbound at the moment when the evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}
+
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    return read_trylock(&evtchn->lock);
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    read_unlock(&evtchn->lock);
+}
+
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -234,12 +267,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -252,12 +286,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_read_trylock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..97c65d2917 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    rwlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
@@ -114,6 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
+    u8 old_state;      /* State when taking lock in write mode. */
     u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17718.42551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbTh-0002Vs-DO; Mon, 02 Nov 2020 15:10:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17718.42551; Mon, 02 Nov 2020 15:10:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbTh-0002Vl-9X; Mon, 02 Nov 2020 15:10:37 +0000
Received: by outflank-mailman (input) for mailman id 17718;
 Mon, 02 Nov 2020 15:07:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rjxj=EI=linux.vnet.ibm.com=ego@srs-us1.protection.inumbo.net>)
 id 1kZbR0-0001mW-3T
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:07:50 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0abcd69-4916-42be-bab3-3106f35419fb;
 Mon, 02 Nov 2020 15:07:49 +0000 (UTC)
Received: from pps.filterd (m0098413.ppops.net [127.0.0.1])
 by mx0b-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A2F2KcC171448; Mon, 2 Nov 2020 10:07:00 -0500
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0b-001b2d01.pphosted.com with ESMTP id 34hn6g8qy6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Nov 2020 10:06:59 -0500
Received: from m0098413.ppops.net (m0098413.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0A2F3Hje180510;
 Mon, 2 Nov 2020 10:06:58 -0500
Received: from ppma05wdc.us.ibm.com (1b.90.2fa9.ip4.static.sl-reverse.com
 [169.47.144.27])
 by mx0b-001b2d01.pphosted.com with ESMTP id 34hn6g8qx8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Nov 2020 10:06:58 -0500
Received: from pps.filterd (ppma05wdc.us.ibm.com [127.0.0.1])
 by ppma05wdc.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 0A2EwVLc006665;
 Mon, 2 Nov 2020 15:06:56 GMT
Received: from b01cxnp23034.gho.pok.ibm.com (b01cxnp23034.gho.pok.ibm.com
 [9.57.198.29]) by ppma05wdc.us.ibm.com with ESMTP id 34h09mnscy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 02 Nov 2020 15:06:56 +0000
Received: from b01ledav003.gho.pok.ibm.com (b01ledav003.gho.pok.ibm.com
 [9.57.199.108])
 by b01cxnp23034.gho.pok.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 0A2F6uY712190366
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 2 Nov 2020 15:06:56 GMT
Received: from b01ledav003.gho.pok.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 59131B205F;
 Mon,  2 Nov 2020 15:06:56 +0000 (GMT)
Received: from b01ledav003.gho.pok.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 6038CB2064;
 Mon,  2 Nov 2020 15:06:55 +0000 (GMT)
Received: from sofia.ibm.com (unknown [9.199.57.175])
 by b01ledav003.gho.pok.ibm.com (Postfix) with ESMTP;
 Mon,  2 Nov 2020 15:06:55 +0000 (GMT)
Received: by sofia.ibm.com (Postfix, from userid 1000)
 id 06C7A2E323C; Mon,  2 Nov 2020 20:36:52 +0530 (IST)
X-Inumbo-ID: e0abcd69-4916-42be-bab3-3106f35419fb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=date : from : to : cc :
 subject : message-id : reply-to : references : mime-version : content-type
 : in-reply-to; s=pp1; bh=ej00TIy/LHYCfZ0dOvBL2CZGGwy/uU70D7NYwppsmbI=;
 b=CeaooSde8P9MLJGQKN7VpxuBW/kugZ0Ubk8qYGUGVzkeohHArRsAJfqjFGzuwYnrSWTi
 /ptajn1Z7ZgrR3zZo31N4TxfYQu4DwGZIM9MOdp/Y+Dsb/J7RVxYpG/UlZjrBsr38JMO
 QU2XYT4rgLfb/wu/r4f3NmrOWsd4bjbrs6bQe/3v6mVbSf7UsID1yUh0YKSXWWRY7S5G
 u7rS5ryqq0f4CSoaiy5ymxKzUUuttHCjWDDJlWCNeRqBInxF3obTc89/GHKBGSfpbGvF
 v2/iDtwKFAbGThkzyRZhk+yph3cAassBbG/Fw51NpGogJLBtisfxJdZunU8gGohvdF9x Dg== 
Date: Mon, 2 Nov 2020 20:36:51 +0530
From: Gautham R Shenoy <ego@linux.vnet.ibm.com>
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Linux Doc Mailing List <linux-doc@vger.kernel.org>,
        Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
        Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
        "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
        "Jason A. Donenfeld" <Jason@zx2c4.com>,
        Javier =?iso-8859-1?Q?Gonz=E1lez?= <javier@javigon.com>,
        Jonathan Corbet <corbet@lwn.net>,
        "Martin K. Petersen" <martin.petersen@oracle.com>,
        "Rafael J. Wysocki" <rjw@rjwysocki.net>,
        Alexander Shishkin <alexander.shishkin@linux.intel.com>,
        Alexandre Belloni <alexandre.belloni@bootlin.com>,
        Alexandre Torgue <alexandre.torgue@st.com>,
        Andrew Donnellan <ajd@linux.ibm.com>,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Baolin Wang <baolin.wang7@gmail.com>,
        Benson Leung <bleung@chromium.org>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>,
        Bruno Meneguele <bmeneg@redhat.com>,
        Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy <dmurphy@ti.com>,
        Dan Williams <dan.j.williams@intel.com>,
        Enric Balletbo i Serra <enric.balletbo@collabora.com>,
        Fabrice Gasnier <fabrice.gasnier@st.com>,
        Felipe Balbi <balbi@kernel.org>,
        Frederic Barrat <fbarrat@linux.ibm.com>,
        Guenter Roeck <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>,
        Heikki Krogerus <heikki.krogerus@linux.intel.com>,
        Jens Axboe <axboe@kernel.dk>,
        Johannes Thumshirn <johannes.thumshirn@wdc.com>,
        Jonathan Cameron <jic23@kernel.org>, Juergen Gross <jgross@suse.com>,
        Konstantin Khlebnikov <koct9i@gmail.com>,
        Kranthi Kuntala <kranthi.kuntala@intel.com>,
        Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
        Lars-Peter Clausen <lars@metafoo.de>, Len Brown <lenb@kernel.org>,
        Leonid Maksymchuk <leonmaxx@gmail.com>,
        Ludovic Desroches <ludovic.desroches@microchip.com>,
        Mario Limonciello <mario.limonciello@dell.com>,
        Maxime Coquelin <mcoquelin.stm32@gmail.com>,
        Michael Ellerman <mpe@ellerman.id.au>,
        Mika Westerberg <mika.westerberg@linux.intel.com>,
        Mike Kravetz <mike.kravetz@oracle.com>,
        Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>,
        Nicolas Ferre <nicolas.ferre@microchip.com>,
        Niklas Cassel <niklas.cassel@wdc.com>,
        Oleh Kravchenko <oleg@kaa.org.ua>, Orson Zhai <orsonzhai@gmail.com>,
        Pavel Machek <pavel@ucw.cz>,
        Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
        Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
        Peter Rosin <peda@axentia.se>, Petr Mladek <pmladek@suse.com>,
        Philippe Bergheaud <felix@linux.ibm.com>,
        Richard Cochran <richardcochran@gmail.com>,
        Sebastian Reichel <sre@kernel.org>,
        Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
        Thomas Gleixner <tglx@linutronix.de>,
        Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
        Vishal Verma <vishal.l.verma@intel.com>, linux-acpi@vger.kernel.org,
        linux-arm-kernel@lists.infradead.org, linux-iio@vger.kernel.org,
        linux-kernel@vger.kernel.org, linux-mm@kvack.org,
        linux-pm@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
        linux-usb@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
        netdev@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 20/33] docs: ABI: testing: make the files compatible with
 ReST output
Message-ID: <20201102150651.GA4379@in.ibm.com>
Reply-To: ego@linux.vnet.ibm.com
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
 <4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-02_09:2020-11-02,2020-11-02 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 clxscore=1011
 phishscore=0 mlxscore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0
 malwarescore=0 priorityscore=1501 spamscore=0 adultscore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011020116

On Wed, Oct 28, 2020 at 03:23:18PM +0100, Mauro Carvalho Chehab wrote:
> From: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> 
> Some files over there won't parse well by Sphinx.
> 

[..snip..]



> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index b555df825447..274c337ec6a9 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -151,23 +151,28 @@ Description:
>  		The processor idle states which are available for use have the
>  		following attributes:
> 
> -		name: (RO) Name of the idle state (string).
> +		======== ==== =================================================
> +		name:	 (RO) Name of the idle state (string).
> 
>  		latency: (RO) The latency to exit out of this idle state (in
> -		microseconds).
> +			      microseconds).
> 
> -		power: (RO) The power consumed while in this idle state (in
> -		milliwatts).
> +		power:   (RO) The power consumed while in this idle state (in
> +			      milliwatts).
> 
> -		time: (RO) The total time spent in this idle state (in microseconds).
> +		time:    (RO) The total time spent in this idle state
> +			      (in microseconds).
> 
> -		usage: (RO) Number of times this state was entered (a count).
> +		usage:	 (RO) Number of times this state was entered (a count).
> 
> -		above: (RO) Number of times this state was entered, but the
> -		       observed CPU idle duration was too short for it (a count).
> +		above:	 (RO) Number of times this state was entered, but the
> +			      observed CPU idle duration was too short for it
> +			      (a count).
> 
> -		below: (RO) Number of times this state was entered, but the
> -		       observed CPU idle duration was too long for it (a count).
> +		below: 	 (RO) Number of times this state was entered, but the
> +			      observed CPU idle duration was too long for it
> +			      (a count).
> +		======== ==== =================================================
> 
>  What:		/sys/devices/system/cpu/cpuX/cpuidle/stateN/desc
>  Date:		February 2008
> @@ -290,6 +295,7 @@ Description:	Processor frequency boosting control
>  		This switch controls the boost setting for the whole system.
>  		Boosting allows the CPU and the firmware to run at a frequency
>  		beyound it's nominal limit.
> +
>  		More details can be found in
>  		Documentation/admin-guide/pm/cpufreq.rst
> 

The changes to cpuidle states look good to me.


[..snip..]

> @@ -414,30 +434,30 @@ Description:	POWERNV CPUFreq driver's frequency throttle stats directory and
>  		throttle attributes exported in the 'throttle_stats' directory:
> 
>  		- turbo_stat : This file gives the total number of times the max
> -		frequency is throttled to lower frequency in turbo (at and above
> -		nominal frequency) range of frequencies.
> +		  frequency is throttled to lower frequency in turbo (at and above
> +		  nominal frequency) range of frequencies.
> 
>  		- sub_turbo_stat : This file gives the total number of times the
> -		max frequency is throttled to lower frequency in sub-turbo(below
> -		nominal frequency) range of frequencies.
> +		  max frequency is throttled to lower frequency in sub-turbo(below
> +		  nominal frequency) range of frequencies.
> 
>  		- unthrottle : This file gives the total number of times the max
> -		frequency is unthrottled after being throttled.
> +		  frequency is unthrottled after being throttled.
> 
>  		- powercap : This file gives the total number of times the max
> -		frequency is throttled due to 'Power Capping'.
> +		  frequency is throttled due to 'Power Capping'.
> 
>  		- overtemp : This file gives the total number of times the max
> -		frequency is throttled due to 'CPU Over Temperature'.
> +		  frequency is throttled due to 'CPU Over Temperature'.
> 
>  		- supply_fault : This file gives the total number of times the
> -		max frequency is throttled due to 'Power Supply Failure'.
> +		  max frequency is throttled due to 'Power Supply Failure'.
> 
>  		- overcurrent : This file gives the total number of times the
> -		max frequency is throttled due to 'Overcurrent'.
> +		  max frequency is throttled due to 'Overcurrent'.
> 
>  		- occ_reset : This file gives the total number of times the max
> -		frequency is throttled due to 'OCC Reset'.
> +		  frequency is throttled due to 'OCC Reset'.
> 
>  		The sysfs attributes representing different throttle reasons like
>  		powercap, overtemp, supply_fault, overcurrent and occ_reset map to


This hunk for the powernv cpufreq driver looks good to me.
For these two hunks,

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>




From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:19:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:19:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17780.42563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbbf-0002qF-8h; Mon, 02 Nov 2020 15:18:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17780.42563; Mon, 02 Nov 2020 15:18:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbbf-0002q8-5l; Mon, 02 Nov 2020 15:18:51 +0000
Received: by outflank-mailman (input) for mailman id 17780;
 Mon, 02 Nov 2020 15:18:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iSH1=EI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZbbd-0002q3-Gn
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:18:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9643477f-c65f-4ab3-bdc4-fb318bb58377;
 Mon, 02 Nov 2020 15:18:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5E9AAE59;
 Mon,  2 Nov 2020 15:18:45 +0000 (UTC)
X-Inumbo-ID: 9643477f-c65f-4ab3-bdc4-fb318bb58377
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604330326;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8djcS6Gnn02ROuP1zULEJkbt+SQPjMv385Pk+A5bAbc=;
	b=BgrCjLi9zBezHUaZYJBvtd5J9g6RrMlb1zY5PsEEXFYGMwC0lYII0DqJlq513KSIM16k4A
	5E7q4PBhdaxCt9QwiB8H/ptLrHJZOm3WEH99dtc3Nh/NHYJ3r/VOenOVsBjPzknz0qxIsA
	/f43joKVQUH2ljFVgMrr+gh1Uiqo5mQ=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
 <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
 <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6cf9d927-5e8d-a705-0fac-38f81da07d7e@suse.com>
Date: Mon, 2 Nov 2020 16:18:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.11.2020 14:59, Jürgen Groß wrote:
> On 02.11.20 14:52, Jan Beulich wrote:
>> On 02.11.2020 14:41, Jürgen Groß wrote:
>>> On 20.10.20 11:28, Jan Beulich wrote:
>>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>>> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>>        if ( rc )
>>>>>            goto out;
>>>>>    
>>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>>> +    double_evtchn_lock(lchn, rchn);
>>>>
>>>> This introduces an unfortunate conflict with my conversion of
>>>> the per-domain event lock to an rw one: It acquires rd's lock
>>>> in read mode only, while the requirements here would not allow
>>>> doing so. (Same in evtchn_close() then.)
>>>
>>> Is it a problem to use write mode for those cases?
>>
>> "Problem" can have a wide range of meanings - it's not going to
>> be the end of the world, but I view any use of a write lock as
>> a problem when a read lock would suffice. This can still harm
>> parallelism.
> 
> Both cases are very rare ones in the life time of an event channel. I
> don't think you'll ever be able to measure any performance impact from
> switching these cases to a write lock for any well-behaved guest.

I agree as far as the lifetime of an individual port goes, but
we're talking about the per-domain lock here. (Perhaps my
choice of context in your patch wasn't the best one, as there
it is the per-channel lock of which two instances get acquired.
I'm sorry if this has led to any confusion.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 15:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 15:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17804.42575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbjH-0003lK-26; Mon, 02 Nov 2020 15:26:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17804.42575; Mon, 02 Nov 2020 15:26:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZbjG-0003lD-VL; Mon, 02 Nov 2020 15:26:42 +0000
Received: by outflank-mailman (input) for mailman id 17804;
 Mon, 02 Nov 2020 15:26:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ZVo=EI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZbjF-0003l8-0H
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 15:26:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3d918238-e49b-4bd0-8070-8dde5abb1ff9;
 Mon, 02 Nov 2020 15:26:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9FC6ADE1;
 Mon,  2 Nov 2020 15:26:39 +0000 (UTC)
X-Inumbo-ID: 3d918238-e49b-4bd0-8070-8dde5abb1ff9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604330799;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Txby/LXnCeZ/LMefkzA/IQRIzMMiQ3n22/yN1s1IsdA=;
	b=lQ65OqDF0naqMDOguWpTxa6/J52LzJ+RfqR1kQ9iTrgdB6qp9Pj2G9OrTQ74Enh2iX6hGG
	6JCSpfxv744w5+1czmyRi9ziZczo7Tn+ypubxgeKhnjVgJFWvhg0MElxB0NDOhqvBWFEcd
	hWs23IhxnCchQj8oOhEPd1jzxqYKUAo=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
 <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
 <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
 <6cf9d927-5e8d-a705-0fac-38f81da07d7e@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b5ff1e48-1245-5ea7-cf4a-3a198450aa49@suse.com>
Date: Mon, 2 Nov 2020 16:26:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <6cf9d927-5e8d-a705-0fac-38f81da07d7e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.11.20 16:18, Jan Beulich wrote:
> On 02.11.2020 14:59, Jürgen Groß wrote:
>> On 02.11.20 14:52, Jan Beulich wrote:
>>> On 02.11.2020 14:41, Jürgen Groß wrote:
>>>> On 20.10.20 11:28, Jan Beulich wrote:
>>>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>>>> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>>>         if ( rc )
>>>>>>             goto out;
>>>>>>     
>>>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>>>> +    double_evtchn_lock(lchn, rchn);
>>>>>
>>>>> This introduces an unfortunate conflict with my conversion of
>>>>> the per-domain event lock to an rw one: It acquires rd's lock
>>>>> in read mode only, while the requirements here would not allow
>>>>> doing so. (Same in evtchn_close() then.)
>>>>
>>>> Is it a problem to use write mode for those cases?
>>>
>>> "Problem" can have a wide range of meanings - it's not going to
>>> be the end of the world, but I view any use of a write lock as
>>> a problem when a read lock would suffice. This can still harm
>>> parallelism.
>>
>> Both cases are very rare ones in the life time of an event channel. I
>> don't think you'll ever be able to measure any performance impact from
>> switching these cases to a write lock for any well-behaved guest.
> 
> I agree as far as the lifetime of an individual port goes, but
> we're talking about the per-domain lock here. (Perhaps my
> choice of context in your patch wasn't the best one, as there
> it is the per-channel lock of which two instances get acquired.
> I'm sorry if this has led to any confusion.)

Hmm, with the switch to an ordinary rwlock it should be fine to drop
the requirement to hold the domain's event channel lock exclusively
for taking the per-channel lock as a writer.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 17:27:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 17:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17860.42612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZdbr-0006ME-DB; Mon, 02 Nov 2020 17:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17860.42612; Mon, 02 Nov 2020 17:27:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZdbr-0006M7-A3; Mon, 02 Nov 2020 17:27:11 +0000
Received: by outflank-mailman (input) for mailman id 17860;
 Mon, 02 Nov 2020 17:27:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZdbq-0006LT-DA
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 17:27:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0655350-c75d-4627-8528-97bc2825036e;
 Mon, 02 Nov 2020 17:27:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZdbh-0005NF-Go; Mon, 02 Nov 2020 17:27:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZdbh-0003Nr-7K; Mon, 02 Nov 2020 17:27:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZdbh-00035B-6r; Mon, 02 Nov 2020 17:27:01 +0000
X-Inumbo-ID: f0655350-c75d-4627-8528-97bc2825036e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mst6+WSuL617xdloWlPXlrbfCF8zDrJLUW1ukSK9MDE=; b=c9GNNnE6Q2kr2WGTsUkhFWVMh2
	n4+dxQGE+hvcnJYi5T1SDqQLipcuP1640Qdt1Gt7rWM49ZonDWAAmwtnKCtKR7nnimzGK6cA1/HEK
	qViYNcmj1zKR8iuDsrqbQK4fCMqNcdxjxqRPSiSTPTvCxTzWsIfuvlYDqqqGgjmbmQrk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156356-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156356: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=700d20b49e303549b32d3a7a3efbfcee8c7a4f6c
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 17:27:01 +0000

flight 156356 qemu-mainline real [real]
flight 156370 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156356/
http://logs.test-lab.xenproject.org/osstest/logs/156370/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                700d20b49e303549b32d3a7a3efbfcee8c7a4f6c
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   74 days
Failing since        152659  2020-08-21 14:07:39 Z   73 days  164 attempts
Testing same since   156356  2020-11-02 04:17:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 56167 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 18:05:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 18:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17873.42623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZeCx-0001Um-A1; Mon, 02 Nov 2020 18:05:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17873.42623; Mon, 02 Nov 2020 18:05:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZeCx-0001Uf-6w; Mon, 02 Nov 2020 18:05:31 +0000
Received: by outflank-mailman (input) for mailman id 17873;
 Mon, 02 Nov 2020 18:05:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NLV7=EI=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kZeCv-0001Ua-OK
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 18:05:29 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5443ecca-c200-4938-a142-862bbd67ac79;
 Mon, 02 Nov 2020 18:05:28 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id b1so18549791lfp.11
 for <xen-devel@lists.xenproject.org>; Mon, 02 Nov 2020 10:05:28 -0800 (PST)
Received: from [100.64.112.11] (ll-18.209.223.85.sovam.net.ua. [85.223.209.18])
 by smtp.gmail.com with ESMTPSA id p18sm2522064lfa.111.2020.11.02.10.05.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 02 Nov 2020 10:05:26 -0800 (PST)
X-Inumbo-ID: 5443ecca-c200-4938-a142-862bbd67ac79
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=91giC+mxG8WeUfryWnpYaUZpwW48FmL3KEYlVuhHF7U=;
        b=n6jRcVN58UPalHAKowzUVJc9OXrWkDYUlX/CR11eia36ZH3d9KTUTeEZtElCFdNVa4
         vdfU+UREXBr10VRJL0nFK4f2bVY74gXLgdJ6xg73LX+gM0kd7JwbG6UrxgWKx621Rfew
         sAU29X6Iz/OuedqszErCrveWCHTgtMr3vg2wSp3FH2HyILsB9LTKOocJUglwpBJTd+wl
         2t/Aqz+zxunvqJhdjUOTYf0aGvMwKmZ12amQo3CMB0DTIEr7RFPDybuIwMHGpMtatNC7
         UxocncEGqXk/UyivawPnArL+Hr3KLqQQTmSVGFnXbIlQLzwATnUGkioPcu2fGapM8alK
         d7hg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=91giC+mxG8WeUfryWnpYaUZpwW48FmL3KEYlVuhHF7U=;
        b=JHuE7B3gayX+NnM47Z3PhDVL+bi1f+yvrxm3azz2NSsHVA3m/N6espAGkTjJDbaIxv
         XhE2Wdxc1f1E643laS60QxN1BhHSgWCB9H2D0ndyvLMPVgUrst0xBexDMvCpgCAxm33f
         XFhbY5GB34JPFnSfCY+TE0PxzoHSMrwakhIGdxRBiVYa/okUEhRJlEzV/iXYdrSMf+/f
         ExQwzG/Txfy3zawRfMURT7GBRAALyXSGbDWwc7iu4hi1aw8jvWtkNKmMjn90mhsi7Mhm
         yNEgCOmKBEOEgO/qoTIuGFZy4zbJb/vg5VAENauSIL5YCbZW6aIyABh81/ZkLKJoTY+/
         91lg==
X-Gm-Message-State: AOAM530OYkLfifsZ1IqJe0O283l2XthpQ8+7WquOOB1muY36FY3XAHI1
	RJOPRVnq/aatxIuMxwP5TnE=
X-Google-Smtp-Source: ABdhPJz6/Y3JNtUGzP1ugZpfTXGYJruVrvS7FAcwSI4QdDE0pf+BpiUhypoyDDyW74SbvTwI36fJBQ==
X-Received: by 2002:ac2:5938:: with SMTP id v24mr5736541lfi.228.1604340327677;
        Mon, 02 Nov 2020 10:05:27 -0800 (PST)
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Wei Chen <Wei.Chen@arm.com>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <Julien.Grall@arm.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
 <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
 <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
 <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com>
 <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
 <AM0PR08MB3747802302FE70971AE91F6F9E100@AM0PR08MB3747.eurprd08.prod.outlook.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <22edd064-bd78-ba8e-6537-820a929e21e6@gmail.com>
Date: Mon, 2 Nov 2020 20:05:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <AM0PR08MB3747802302FE70971AE91F6F9E100@AM0PR08MB3747.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 02.11.20 09:23, Wei Chen wrote:
> Hi Oleksandr,

Hi Wei.


>
> Thanks for sharing the virtio-disk backend. I have tested it on the Arm FVP_Base platform.
> We used Domain-0 to run the virtio disk backend. The backend disk is a loop device.
>      "virtio_disks": [
>          {
>              "backend_domname": "Domain-0",
>              "devid": 0,
>              "disks": [
>                  {
>                      "filename": "/dev/loop0"
>                  }
>              ]
>          }
>      ],
>
> It works fine and I've pasted some logs:
>
> -------------------------------------------
> Domain-0 logs:
> main: read backend domid 0
> (XEN) gnttab_mark_dirty not implemented yet
> (XEN) domain_direct_pl011_init for domain#2
> main: read frontend domid 2
>    Info: connected to dom2
>
> demu_seq_next: >XENSTORE_ATTACHED
> demu_seq_next: domid = 2
> demu_seq_next: filename[0] = /dev/loop0
> demu_seq_next: readonly[0] = 0
> demu_seq_next: base[0]     = 0x2000000
> demu_seq_next: irq[0]      = 33
> demu_seq_next: >XENCTRL_OPEN
> demu_seq_next: >XENEVTCHN_OPEN
> demu_seq_next: >XENFOREIGNMEMORY_OPEN
> demu_seq_next: >XENDEVICEMODEL_OPEN
> demu_initialize: 2 vCPU(s)
> demu_seq_next: >SERVER_REGISTERED
> demu_seq_next: ioservid = 0
> demu_seq_next: >RESOURCE_MAPPED
> demu_seq_next: shared_iopage = 0xffffae6de000
> demu_seq_next: buffered_iopage = 0xffffae6dd000
> demu_seq_next: >SERVER_ENABLED
> demu_seq_next: >PORT_ARRAY_ALLOCATED
> demu_seq_next: >EVTCHN_PORTS_BOUND
> demu_seq_next: VCPU0: 3 -> 7
> demu_seq_next: VCPU1: 5 -> 8
> demu_seq_next: >EVTCHN_BUF_PORT_BOUND
> demu_seq_next: 0 -> 9
> demu_register_memory_space: 2000000 - 20001ff
>    Info: (virtio/mmio.c) virtio_mmio_init:290: virtio-mmio.devices=0x200@0x2000000:33
> demu_seq_next: >DEVICE_INITIALIZED
> demu_seq_next: >INITIALIZED
> IO request not ready
> IO request not ready
>
> ----------------
> Dom-U logs:
> [    0.491037] xen:xen_evtchn: Event-channel device installed
> [    0.493600] Initialising Xen pvcalls frontend driver
> [    0.516807] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> [    0.525565] cacheinfo: Unable to detect cache hierarchy for CPU 0
> [    0.562275] brd: module loaded
> [    0.595300] loop: module loaded
> [    0.683800] virtio_blk virtio0: [vda] 131072 512-byte logical blocks (67.1 MB/64.0 MiB)
> [    0.684000] vda: detected capacity change from 0 to 67108864
>
>
> / # dd if=/dev/vda of=/dev/null bs=1M count=64
> 64+0 records in
> 64+0 records out
> 67108864 bytes (64.0MB) copied, 3.196242 seconds, 20.0MB/s
> / # dd if=/dev/zero of=/dev/vda bs=1M count=64
> 64+0 records in
> 64+0 records out
> 67108864 bytes (64.0MB) copied, 3.704594 seconds, 17.3MB/s
> ---------------------
>
> The read/write seems OK in the dom-U. The FVP platform is an emulator, so the performance numbers are not representative.
> We will test it on real hardware such as the N1SDP.
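(For anyone reproducing the setup quoted above: the /dev/loop0 backend can be prepared along these lines. This is only a sketch — the file name is illustrative, the 64 MiB size matches the 67108864-byte capacity the dom-U reported, and the losetup step needs root, so it is left commented out.)

```shell
# Create a 64 MiB backing file -- matches the 67108864-byte vda seen in the dom-U.
dd if=/dev/zero of=disk.img bs=1M count=64

# Attach it as the loop device referenced by the "filename" field in the
# virtio_disks config (requires root):
# losetup /dev/loop0 disk.img
```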


This is really good news. Thank you for testing!


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Nov 02 18:17:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 18:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17878.42635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZeOs-0002Sr-DW; Mon, 02 Nov 2020 18:17:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17878.42635; Mon, 02 Nov 2020 18:17:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZeOs-0002Sk-AQ; Mon, 02 Nov 2020 18:17:50 +0000
Received: by outflank-mailman (input) for mailman id 17878;
 Mon, 02 Nov 2020 18:17:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZeOq-0002Sf-Gi
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 18:17:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c51f87a7-a8e4-4aa0-ae1b-148342c08d89;
 Mon, 02 Nov 2020 18:17:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZeOo-0006Vk-N0; Mon, 02 Nov 2020 18:17:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZeOo-0006Ep-E6; Mon, 02 Nov 2020 18:17:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZeOo-0000rW-Db; Mon, 02 Nov 2020 18:17:46 +0000
X-Inumbo-ID: c51f87a7-a8e4-4aa0-ae1b-148342c08d89
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xen3E4pcb4uCRimPPoNVOFSmHC8/As9wM+MeeXkA5jg=; b=G1GdXP9eQMHL+T4s0DwbBNXO3v
	xJ6+5t1Kr6KZdt2R8kYr3frG0GVfqhKo+Cp12ZP+7Tio04Zwrk+VrzE3vN9V+Dbj83Ooy9k50BQNV
	sAHWEdy/1oPzV64PkKhlp49HKiLQrBda16BB0jEjmE9WGuAys6nnhjRYvnJyAVelP2Kk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156359-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156359: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ffddac3e0f2e0af54b48a86848193a5ad30def10
X-Osstest-Versions-That:
    ovmf=2363c6926098ee5c75c8780d07f88f5c21010683
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 18:17:46 +0000

flight 156359 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156359/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ffddac3e0f2e0af54b48a86848193a5ad30def10
baseline version:
 ovmf                 2363c6926098ee5c75c8780d07f88f5c21010683

Last test of basis   156353  2020-11-02 01:40:58 Z    0 days
Testing same since   156359  2020-11-02 08:43:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2363c69260..ffddac3e0f  ffddac3e0f2e0af54b48a86848193a5ad30def10 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 21:22:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 21:22:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17930.42651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZhHc-0001at-0a; Mon, 02 Nov 2020 21:22:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17930.42651; Mon, 02 Nov 2020 21:22:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZhHb-0001am-TF; Mon, 02 Nov 2020 21:22:31 +0000
Received: by outflank-mailman (input) for mailman id 17930;
 Mon, 02 Nov 2020 21:22:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0q6w=EI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kZhHZ-0001ae-UO
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 21:22:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4609a19-df7c-4e31-9d58-27fd73b49407;
 Mon, 02 Nov 2020 21:22:29 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2041620870;
 Mon,  2 Nov 2020 21:22:28 +0000 (UTC)
X-Inumbo-ID: f4609a19-df7c-4e31-9d58-27fd73b49407
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604352148;
	bh=VLEs5r+Tq8CyV7YqEW5qiMidUK+i28krX2EW6kK/wMs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XphNAwkzmTVq0XQFyLUwxQ7HHQwblJUvXA0j6bOzFR3wRo+n6Oe6tss42hwaBDhVS
	 hKxXYCGK9WJL5/zHrybZhmxXS1UZKZ1FUq7Q3KkKvf/3J0h5gGBr6clNBNAbbNhx6O
	 NlaMm2Lv+JtoWEneQ/L/T8Be/g7UZ6Jj2lXG2Nmg=
Date: Mon, 2 Nov 2020 13:22:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Ash Wilding <ash.j.wilding@gmail.com>
cc: sstabellini@kernel.org, ehem+xen@m5p.com, julien@xen.org, roman@zededa.com, 
    xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
In-Reply-To: <20201101172608.90996-1-ash.j.wilding@gmail.com>
Message-ID: <alpine.DEB.2.21.2011021321050.5812@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010301240450.12247@sstabellini-ThinkPad-T480s> <20201101172608.90996-1-ash.j.wilding@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 1 Nov 2020, Ash Wilding wrote:
> >> I think the best compromise is still to use an ACPI string to detect
> >> the platform. For instance, would it be possible to use the OEMID
> >> fields in RSDT, XSDT, FADT?  Possibly even a combination of them?
> >>
> >> Another option might be to get the platform name from UEFI somehow. 
> >
> > I included appropriate strings in e-mail.  Suitable strings do appear
> > in `dmesg`.
> 
> 
> Just as a heads-up, SMCCC does define the optional SMCCC_ARCH_SOC_ID [1]
> function, and it is listed as mandatory in the Server Base Boot Requirements
> (SBBR); see p. 15 of Arm DEN 0044F [2].

Thanks for sharing, it is good to know there is a "proper" way to do it.


> Unfortunately it looks like RPi 4's firmware doesn't currently support
> this, or at least the rpi4-uefi project [3] didn't think so as of FW
> version 1.6 [4], but I couldn't find equivalent SBBR feature tracking
> pages on that site for FW versions 1.7 or 1.8 to confirm, nor could I
> find any reference to SMCCC_ARCH_SOC_ID in the RPi 4 FW sources [5].

Well, call me an optimist but maybe it is just one patch away from
happening :-)


> On the bright side, while not very helpful in the short-term, note that
> Arm's recently announced SystemReady [6] program is an evolution of
> ServerReady (SBSA+SBBR) but for other segments and applications incl.
> Embedded, IoT, and general Linux Boot.
> 
> That means in future we should see more platform firmware supporting
> SMCCC_ARCH_SOC_ID, as the SiPs will (hopefully) want their platforms to
> be SystemReady certified.
> 
> Hope that's useful info.
> 
> Thanks,
> Ash.
> 
> [1] https://developer.arm.com/documentation/den0028/c
> [2] https://developer.arm.com/documentation/den0044/latest
> [3] https://rpi4-uefi.dev/about/
> [4] https://rpi4-uefi.dev/status-v1-6-firmware/
> [5] https://github.com/pftf/RPi4/tree/master
> [6] https://developer.arm.com/architectures/system-architectures/arm-systemready
> 


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 21:41:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 21:41:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17937.42663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZha6-0003Lu-Hp; Mon, 02 Nov 2020 21:41:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17937.42663; Mon, 02 Nov 2020 21:41:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZha6-0003Ln-EW; Mon, 02 Nov 2020 21:41:38 +0000
Received: by outflank-mailman (input) for mailman id 17937;
 Mon, 02 Nov 2020 21:41:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0q6w=EI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kZha5-0003Li-OA
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 21:41:37 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 516f7b90-4a80-4f29-a5bb-4e9304106864;
 Mon, 02 Nov 2020 21:41:37 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9B62922226;
 Mon,  2 Nov 2020 21:41:35 +0000 (UTC)
X-Inumbo-ID: 516f7b90-4a80-4f29-a5bb-4e9304106864
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604353296;
	bh=1IbfQyS4uyoGg+tEuYRBMLFmab7hE8yyN2GX0BZIHSk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=J2K0jJvpTDVDwpEfiEWMDSX3FCp3wPBUmBhQ2HmWrr1AhUMkDyqN6JQC3rVkqIWkW
	 ibRxILEJ7uzxq0VZNFw/w/Zb9uiHqzsqCFNQTZXPpdwJzB2TT/Ry6xYigXS2DgrF1O
	 U17NscmteM6Zg91mnFSm12HRDOL3rFcCZou2mmsw=
Date: Mon, 2 Nov 2020 13:41:34 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org, 
    julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
In-Reply-To: <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
Message-ID: <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s>
References: <20201031002405.4545-1-sstabellini@kernel.org> <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 2 Nov 2020, Jan Beulich wrote:
> On 31.10.2020 01:24, Stefano Stabellini wrote:
> > --- a/xen/Kconfig
> > +++ b/xen/Kconfig
> > @@ -35,14 +35,13 @@ config DEFCONFIG_LIST
> >  	default ARCH_DEFCONFIG
> >  
> >  config EXPERT
> > -	bool "Configure standard Xen features (expert users)"
> > +	bool "Configure EXPERT features"
> >  	help
> > -	  This option allows certain base Xen options and settings
> > -	  to be disabled or tweaked. This is for specialized environments
> > -	  which can tolerate a "non-standard" Xen.
> > -	  Only use this if you really know what you are doing.
> > -	  Xen binaries built with this option enabled are not security
> > -	  supported.
> > +	  This option allows certain experimental (see SUPPORT.md) Xen
> > +	  options and settings to be enabled/disabled. This is for
> > +	  specialized environments which can tolerate a "non-standard" Xen.
> > +	  Only use this if you really know what you are doing.  Xen binaries
> > +	  built with this option enabled are not security supported.
> >  	default n
> 
> I'm definitely in favor of this - it was more than once that I
> wondered about the prompt text.

Thanks, I agree!


> > @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
> >  	  SBSA Generic UART implements a subset of ARM PL011 UART.
> >  
> >  config ARM_SSBD
> > -	bool "Speculative Store Bypass Disable" if EXPERT
> > -	depends on HAS_ALTERNATIVE
> > +	bool "Speculative Store Bypass Disable"
> > +	depends on HAS_ALTERNATIVE && EXPERT
> >  	default y
> 
> At the example of this, I'm afraid when the default isn't "n"
> (or there's no default directive at all, as ought to be
> equivalent to and preferred over "default n"), such a
> transformation is not functionally identical: Before your
> change, with !EXPERT this option defaults to y. After your
> change this option is unavailable (which resolves to it being
> off for all consuming purposes).
> 
> IOW there are reasons to have "if ..." attached to the prompts
> (for this construct indeed only making the prompt conditional,
> not the entire option), but there are also cases where the
> cleanup you do is indeed desirable / helpful.

Yeah, thanks for catching it, it is obviously a problem.

My intention was just to somehow "tag" the EXPERT options so that they
would show in the menu. Maybe a better, simpler way to do it is
to add the word "EXPERT" to the one-line prompt:

 config ARM_SSBD
-	bool "Speculative Store Bypass Disable" if EXPERT
+	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
 	depends on HAS_ALTERNATIVE
 	default y
 	help


What do you think?
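For what it's worth, the difference Jan describes can be spelled out side by side (hypothetical FOO symbol; the two alternatives are shown one after the other, not as a literal Kconfig file):

```kconfig
# Conditional prompt: with !EXPERT the prompt is hidden, but the symbol
# still exists and takes its default, so FOO=y.
config FOO
	bool "Foo feature" if EXPERT
	default y

# Dependency: with !EXPERT the symbol is unavailable altogether, so it
# resolves to off for all consuming purposes.
config FOO
	bool "Foo feature"
	depends on EXPERT
	default y
```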


From xen-devel-bounces@lists.xenproject.org Mon Nov 02 21:59:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 02 Nov 2020 21:59:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17943.42678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZhr1-0004RM-3J; Mon, 02 Nov 2020 21:59:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17943.42678; Mon, 02 Nov 2020 21:59:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZhr1-0004RF-0D; Mon, 02 Nov 2020 21:59:07 +0000
Received: by outflank-mailman (input) for mailman id 17943;
 Mon, 02 Nov 2020 21:59:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7jo9=EI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZhr0-0004Qh-Aj
 for xen-devel@lists.xenproject.org; Mon, 02 Nov 2020 21:59:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 749588dd-b356-445d-9a1f-87022b728bd0;
 Mon, 02 Nov 2020 21:58:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZhqs-0002dU-NS; Mon, 02 Nov 2020 21:58:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZhqs-0000LD-C9; Mon, 02 Nov 2020 21:58:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZhqs-0007i3-Bd; Mon, 02 Nov 2020 21:58:58 +0000
X-Inumbo-ID: 749588dd-b356-445d-9a1f-87022b728bd0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ia9y48BiCv6vde6/gLhtKPNvXU7ALVSxOtl3nVNWxQ0=; b=3gvlCM+WuYOCpCaUTJOMNBG38v
	N+hJUvvXdnN3UuqBKge3QmwRzuuttp52A9w/tgto+NZ6ns8JVSv9hwHAihnhdG3mpXxaWgLzNFD0y
	Lzf4f+72spESviioPSgwgPDmxpXc9Sva9Z+RjvrJqgTGarMktuyN2QkzW6wsE/Baoyqg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156358-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156358: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=97b7b5567fba6918a656ad349051b5343b5dea2e
X-Osstest-Versions-That:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 02 Nov 2020 21:58:58 +0000

flight 156358 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156358/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156343
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156343
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156343
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156343
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156343
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156343
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156343
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156343
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156343
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156343
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156343
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156343
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  97b7b5567fba6918a656ad349051b5343b5dea2e
baseline version:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7

Last test of basis   156343  2020-11-01 09:16:08 Z    1 days
Testing same since   156358  2020-11-02 08:38:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4100d463db..97b7b5567f  97b7b5567fba6918a656ad349051b5343b5dea2e -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 00:56:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 00:56:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17961.42694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZkca-0003DY-TT; Tue, 03 Nov 2020 00:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17961.42694; Tue, 03 Nov 2020 00:56:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZkca-0003DR-Px; Tue, 03 Nov 2020 00:56:24 +0000
Received: by outflank-mailman (input) for mailman id 17961;
 Tue, 03 Nov 2020 00:56:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZkcZ-0003DM-09
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 00:56:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de9b2049-144c-442b-9d9f-98c8976e54da;
 Tue, 03 Nov 2020 00:56:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZkcV-0006nY-P8; Tue, 03 Nov 2020 00:56:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZkcV-0007zK-FP; Tue, 03 Nov 2020 00:56:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZkcV-0006Vt-Eu; Tue, 03 Nov 2020 00:56:19 +0000
X-Inumbo-ID: de9b2049-144c-442b-9d9f-98c8976e54da
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ME7HAoBJ9mubATEAW1cfFx8mwYTjcje5/wqN9y3DFW4=; b=3CRfivDSUVBGD7fdjzQrlvjo08
	TKa0pN58GmHLcyrmguKy1iN1iPPyn/xRB+nl1HsD9RKF/NuvFqY8I3GngmS/BxnzpkQWHgfJpef4w
	SX/t1Fjg82UGox2XweF4Vy9hMKT+tANH5OtN8DuLg8kWYnvaqYuRd1uPgyfrFApvw6o8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156360-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156360: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3cea11cd5e3b00d91caf0b4730194039b45c5891
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 00:56:19 +0000

flight 156360 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156360/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3cea11cd5e3b00d91caf0b4730194039b45c5891
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   94 days
Failing since        152366  2020-08-01 20:49:34 Z   93 days  156 attempts
Testing same since   156360  2020-11-02 09:06:18 Z    0 days    1 attempts

------------------------------------------------------------
3413 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 652367 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 04:49:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 04:49:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17981.42709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZoFu-0002R7-42; Tue, 03 Nov 2020 04:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17981.42709; Tue, 03 Nov 2020 04:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZoFt-0002Qy-TC; Tue, 03 Nov 2020 04:49:13 +0000
Received: by outflank-mailman (input) for mailman id 17981;
 Tue, 03 Nov 2020 04:49:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZoFt-0002Qt-5V
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 04:49:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8a4f700-8963-4ea6-b15e-3c8d6977b927;
 Tue, 03 Nov 2020 04:49:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZoFp-00080z-8J; Tue, 03 Nov 2020 04:49:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZoFp-0003Zc-0B; Tue, 03 Nov 2020 04:49:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZoFo-0007jD-U7; Tue, 03 Nov 2020 04:49:08 +0000
X-Inumbo-ID: c8a4f700-8963-4ea6-b15e-3c8d6977b927
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1aE6TF5R6Arnn7pjnQ7aIU4sCVxb3O47Jpu8faMpSPg=; b=ZAuuPlI7pgKPrSbtb3XBhCFDaC
	iBvyYJS2qTnD9eYPWIbT0qBYpV2t+L5ySKs8aGlNJvT5oaGlJyAXz6A4jlS73OXg0lWJdeZzUmLuA
	tHcrCfDXPSZmCsg+ACDTUqifRhxblxWGK0WweiNxgCjCbbvk7t+QO5STPC9q8/Y6+s0o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156371-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156371: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b139d11ae198aba0e009daddf7a3370ce84b2d09
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 04:49:08 +0000

flight 156371 qemu-mainline real [real]
flight 156377 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156371/
http://logs.test-lab.xenproject.org/osstest/logs/156377/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b139d11ae198aba0e009daddf7a3370ce84b2d09
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   74 days
Failing since        152659  2020-08-21 14:07:39 Z   73 days  165 attempts
Testing same since   156371  2020-11-02 17:37:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57443 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 07:24:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 07:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17998.42726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZqfM-0007k4-5q; Tue, 03 Nov 2020 07:23:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17998.42726; Tue, 03 Nov 2020 07:23:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZqfM-0007jx-2o; Tue, 03 Nov 2020 07:23:40 +0000
Received: by outflank-mailman (input) for mailman id 17998;
 Tue, 03 Nov 2020 07:23:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZqfK-0007js-Us
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 07:23:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e2b2af6-ad21-4499-84aa-af16af16e540;
 Tue, 03 Nov 2020 07:23:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZqfG-00034K-B8; Tue, 03 Nov 2020 07:23:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZqfG-0005f9-1D; Tue, 03 Nov 2020 07:23:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZqfG-0006XJ-0l; Tue, 03 Nov 2020 07:23:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kZqfK-0007js-Us
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 07:23:38 +0000
X-Inumbo-ID: 7e2b2af6-ad21-4499-84aa-af16af16e540
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7e2b2af6-ad21-4499-84aa-af16af16e540;
	Tue, 03 Nov 2020 07:23:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/A12o0LDKQnvUc03u8qHPNtU09XMjaiJGjRKBPLDlAI=; b=DkX4Pc3Y2SNucgPMDLtiCefEGk
	vI7VUA79PzQ3webWDpP5sAlX2athFTCiu4YyOH043HtDKyEvj/eexdSTML10m1xHZEoxt6FogexNt
	LuNklXuJt2arSHnnQa7/oFujhe4FryarC3CW3uPLQpDgvfY1RGisvVH7KhtYvyx1oRSc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZqfG-00034K-B8; Tue, 03 Nov 2020 07:23:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZqfG-0005f9-1D; Tue, 03 Nov 2020 07:23:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZqfG-0006XJ-0l; Tue, 03 Nov 2020 07:23:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156376-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156376: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=85c8c29214ffcfae50a05d0379afca27a28a147f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 07:23:34 +0000

flight 156376 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156376/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              85c8c29214ffcfae50a05d0379afca27a28a147f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  116 days
Failing since        151818  2020-07-11 04:18:52 Z  115 days  110 attempts
Testing same since   156376  2020-11-03 04:20:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23956 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 08:14:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 08:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18017.42741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZrSM-0004C2-A1; Tue, 03 Nov 2020 08:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18017.42741; Tue, 03 Nov 2020 08:14:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZrSM-0004Bv-76; Tue, 03 Nov 2020 08:14:18 +0000
Received: by outflank-mailman (input) for mailman id 18017;
 Tue, 03 Nov 2020 08:14:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZrSK-0004Bq-B7
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 08:14:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 439aca46-d17e-429c-8439-21789aefb5c3;
 Tue, 03 Nov 2020 08:14:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79EF7AC6A;
 Tue,  3 Nov 2020 08:14:13 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kZrSK-0004Bq-B7
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 08:14:16 +0000
X-Inumbo-ID: 439aca46-d17e-429c-8439-21789aefb5c3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 439aca46-d17e-429c-8439-21789aefb5c3;
	Tue, 03 Nov 2020 08:14:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604391253;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rfEbVcIhf2++JwztfKmp+knnaDJbbwEmnJBOoNoFZAc=;
	b=odWbReI2kFoxA6bed5moN/kIIaNvOXeTuvbhi0+WqqAoo7IWHOmnnYCccjEaUMtiWi7jJN
	UilymOtAQMdq1NGwyO5O9olcXh7IR/uhldm7m0No6/FbIwh5XnqSssNk5IOMK5lVbM6fJP
	/RbxbjbHMjZwMO/hQDj/dd3LtOAN+5k=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 79EF7AC6A;
	Tue,  3 Nov 2020 08:14:13 +0000 (UTC)
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org
References: <20201031002405.4545-1-sstabellini@kernel.org>
 <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
 <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com>
Date: Tue, 3 Nov 2020 09:14:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.11.2020 22:41, Stefano Stabellini wrote:
> On Mon, 2 Nov 2020, Jan Beulich wrote:
>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>  	  SBSA Generic UART implements a subset of ARM PL011 UART.
>>>  
>>>  config ARM_SSBD
>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>> -	depends on HAS_ALTERNATIVE
>>> +	bool "Speculative Store Bypass Disable"
>>> +	depends on HAS_ALTERNATIVE && EXPERT
>>>  	default y
>>
>> At the example of this, I'm afraid when the default isn't "n"
>> (or there's no default directive at all, as ought to be
>> equivalent to and preferred over "default n"), such a
>> transformation is not functionally identical: Before your
>> change, with !EXPERT this option defaults to y. After your
>> change this option is unavailable (which resolves to it being
>> off for all consuming purposes).
>>
>> IOW there are reasons to have "if ..." attached to the prompts
>> (for this construct indeed only making the prompt conditional,
>> not the entire option), but there are also cases where the
>> cleanup you do is indeed desirable / helpful.
> 
> Yeah, thanks for catching it, it is obviously a problem.
> 
> My intention was just to "tag" somehow the options to EXPERT so that it
> would show on the menu. Maybe a better, simpler, way to do it is
> to add the word "EXPERT" to the one line prompt:
> 
>  config ARM_SSBD
> -	bool "Speculative Store Bypass Disable" if EXPERT
> +	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>  	depends on HAS_ALTERNATIVE
>  	default y
>  	help
> 
> 
> What do you think?

While on the surface this may look like an improvement, I don't
see how it would actually help: If you read the Kconfig file
itself, the dependency is seen anyway. And on the menu I don't
see the point of telling someone who has enabled EXPERT that a
certain option is (or is not) an expert one. If they think
they're experts, so should they be treated. (It was, after all,
a deliberate decision to make enabling expert mode easier, and
hence easier to use for what one might consider not-really-
experts. I realize saying so may be considered tendentious; I
mean it in a purely technical sense, and I'd like to apologize
in advance to anyone not sharing this as a possible perspective
to take.)

Plus, of course, the addition of such (EXPERT) markers to
future options' prompts is liable to get forgotten now and then,
so sooner or later we'd likely end up with an inconsistent
mixture anyway.
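[For reference, the semantic difference discussed above can be sketched as follows; both forms of the same option are shown side by side for illustration only, so this is not valid as a single Kconfig file:]

```kconfig
# Form 1: "if EXPERT" makes only the *prompt* conditional. Under
# !EXPERT the user is never asked, but "default y" still applies,
# so the option is silently enabled.
config ARM_SSBD
	bool "Speculative Store Bypass Disable" if EXPERT
	depends on HAS_ALTERNATIVE
	default y

# Form 2: EXPERT in "depends on" gates the whole option. Under
# !EXPERT the option cannot be set at all and resolves to =n for
# all consumers -- a functional change from form 1.
config ARM_SSBD
	bool "Speculative Store Bypass Disable"
	depends on HAS_ALTERNATIVE && EXPERT
	default y
```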

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:02:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18031.42760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsCj-0008Sc-2m; Tue, 03 Nov 2020 09:02:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18031.42760; Tue, 03 Nov 2020 09:02:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsCi-0008SV-Vs; Tue, 03 Nov 2020 09:02:12 +0000
Received: by outflank-mailman (input) for mailman id 18031;
 Tue, 03 Nov 2020 09:02:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZsCh-0008SQ-Kx
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:02:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fd9f288e-df66-4b16-b094-9c531d797801;
 Tue, 03 Nov 2020 09:02:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C791CAD5C;
 Tue,  3 Nov 2020 09:02:08 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kZsCh-0008SQ-Kx
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:02:11 +0000
X-Inumbo-ID: fd9f288e-df66-4b16-b094-9c531d797801
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id fd9f288e-df66-4b16-b094-9c531d797801;
	Tue, 03 Nov 2020 09:02:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604394128;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vi9N5nBf/53EkCEaORyxgE2bXfiCipN+Ny20Cw3cmG4=;
	b=qA9kbPajaM2g0q5hYoiN/u6A086fOOX4VEwrKmAyOaapVRTHBDPh3WQalim3kS/YAXHhTw
	3Yp2pct+i30maGzwqlDL/Shh42i5n/89lcOkh4aBkt9LG/v2I8Hs7fi5/MPey09N1tbPAz
	1ctO42TMdBIzCuksck4k5nMvvc9GOQI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C791CAD5C;
	Tue,  3 Nov 2020 09:02:08 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201102131239.14134-1-jgross@suse.com>
 <20201102131239.14134-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
Date: Tue, 3 Nov 2020 10:02:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201102131239.14134-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.11.2020 14:12, Juergen Gross wrote:
> --- a/xen/include/xen/rwlock.h
> +++ b/xen/include/xen/rwlock.h
> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>      u32 cnts;
>  
>      preempt_disable();
> +    check_lock(&lock->lock.debug, true);
>      cnts = atomic_read(&lock->cnts);
>      if ( likely(_can_read_lock(cnts)) )
>      {

I'm sorry for being picky, but this still isn't matching
_spin_trylock(). Perhaps the difference is really benign, but
there the check sits ahead of preempt_disable(). (It has a
clear reason to be this way there, but without a clear reason
for things to be differently here, I think matching ordering
may help, at least to avoid questions.)

> @@ -66,6 +67,7 @@ static inline int _read_trylock(rwlock_t *lock)
>           */
>          if ( likely(_can_read_lock(cnts)) )
>              return 1;
> +
>          atomic_sub(_QR_BIAS, &lock->cnts);
>      }
>      preempt_enable();

Stray change?

> @@ -87,7 +89,10 @@ static inline void _read_lock(rwlock_t *lock)
>       * arch_lock_acquire_barrier().
>       */
>      if ( likely(_can_read_lock(cnts)) )
> +    {
> +        check_lock(&lock->lock.debug, false);
>          return;
> +    }
>  
>      /* The slowpath will decrement the reader count, if necessary. */
>      queue_read_lock_slowpath(lock);
> @@ -162,7 +167,10 @@ static inline void _write_lock(rwlock_t *lock)
>       * arch_lock_acquire_barrier().
>       */
>      if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
> +    {
> +        check_lock(&lock->lock.debug, false);
>          return;
> +    }
>  
>      queue_write_lock_slowpath(lock);
>      /*

Maybe also for these two, but likely more importantly for ...

> @@ -328,6 +337,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
>          /* Drop the read lock because we don't need it anymore. */
>          read_unlock(&percpu_rwlock->rwlock);
>      }
> +    else
> +        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
>  }

... this one a brief comment may be warranted to clarify why
the call sits here rather than at the top?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:10:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18037.42772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsKw-0000xR-VE; Tue, 03 Nov 2020 09:10:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18037.42772; Tue, 03 Nov 2020 09:10:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsKw-0000xK-SC; Tue, 03 Nov 2020 09:10:42 +0000
Received: by outflank-mailman (input) for mailman id 18037;
 Tue, 03 Nov 2020 09:10:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZsKv-0000xF-Vg
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:10:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a5d943b6-1b37-4ab5-8941-afedee81f72d;
 Tue, 03 Nov 2020 09:10:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8AB51B135;
 Tue,  3 Nov 2020 09:10:40 +0000 (UTC)
X-Inumbo-ID: a5d943b6-1b37-4ab5-8941-afedee81f72d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604394640;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BO8z69KtKlnocrm4zZqoiLQXhMRld82wUq4xgkLjsbc=;
	b=YUXncDl6NIMMWYqCEQxJfb39CCvV/+YXKxYKZtKQqYfuI1Ttdh8Y6C8I/I0OmShQ7D5qTc
	QCyy3k1GzkGdBF7txgXpvaNpkJ9/kNyq9dEpTs4P35NiW4jar9UrafRDbv8XKjcusx8p+2
	v7R0MU6BifArWQwllP8m5M4vN+aVcIA=
Subject: Re: [PATCH v1 1/2] Define build dates/time based on SOURCE_DATE_EPOCH
To: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <57423c6627e00fbc3f41d3f6be6ba1e15abb96fc.1604156731.git.frederic.pierret@qubes-os.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <68db059e-bc65-ee1a-ac3c-004b4552133a@suse.com>
Date: Tue, 3 Nov 2020 10:10:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <57423c6627e00fbc3f41d3f6be6ba1e15abb96fc.1604156731.git.frederic.pierret@qubes-os.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 31.10.2020 16:14, Frédéric Pierret (fepitre) wrote:
> --- a/tools/firmware/hvmloader/Makefile
> +++ b/tools/firmware/hvmloader/Makefile
> @@ -21,7 +21,11 @@ XEN_ROOT = $(CURDIR)/../../..
>  include $(XEN_ROOT)/tools/firmware/Rules.mk
>  
>  # SMBIOS spec requires format mm/dd/yyyy
> +ifneq ($(SOURCE_DATE_EPOCH),)
> +SMBIOS_REL_DATE ?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "+%m/%d/%Y" 2>/dev/null)
> +else
>  SMBIOS_REL_DATE ?= $(shell date +%m/%d/%Y)
> +endif

As this pattern recurs, how about abstracting it away via a
definition (perhaps to be placed in ./Config.mk) along the
lines of (variable name subject to improvement)

DATE_EPOCH_OPTS := $(if $(SOURCE_DATE_EPOCH),-u -d "@$(SOURCE_DATE_EPOCH)")

and then here simply

SMBIOS_REL_DATE ?= $(shell date $(DATE_EPOCH_OPTS) +%m/%d/%Y)

(i.e. in particular also without any "ifneq()")?
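As a quick sanity check of the collapsed form, the same conditional can be emulated in plain shell; the epoch value below is illustrative only (taken from this thread's timestamps), and GNU date is assumed for the `-d "@..."` form:

```shell
# Mirror the proposed DATE_EPOCH_OPTS logic: pin the date when
# SOURCE_DATE_EPOCH is set, otherwise fall through to "now".
SOURCE_DATE_EPOCH=1604394640   # illustrative value, not from any real build
DATE_EPOCH_OPTS=${SOURCE_DATE_EPOCH:+-u -d @$SOURCE_DATE_EPOCH}
SMBIOS_REL_DATE=$(date $DATE_EPOCH_OPTS +%m/%d/%Y)
echo "$SMBIOS_REL_DATE"   # 11/03/2020 for this epoch
```

With `SOURCE_DATE_EPOCH` unset, `DATE_EPOCH_OPTS` expands to nothing and `date` falls back to the current time, matching the behaviour of the `ifneq` version.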

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:15:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:15:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18042.42784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsPb-00019m-JV; Tue, 03 Nov 2020 09:15:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18042.42784; Tue, 03 Nov 2020 09:15:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsPb-00019f-Fb; Tue, 03 Nov 2020 09:15:31 +0000
Received: by outflank-mailman (input) for mailman id 18042;
 Tue, 03 Nov 2020 09:15:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZsPa-00019a-Bt
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:15:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1c5e7289-613b-43b9-9cdf-3ca1feb2393c;
 Tue, 03 Nov 2020 09:15:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E25B4ACF5;
 Tue,  3 Nov 2020 09:15:27 +0000 (UTC)
X-Inumbo-ID: 1c5e7289-613b-43b9-9cdf-3ca1feb2393c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604394928;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OTaupi/1smCjLukJ2z+KNJLk6tcqet1wRcBYM0kglSw=;
	b=b37Iku0YUTN/E+Xso/zrADLCnBK6tIyIfdIY6nghSFqRL+IxomTCXgL6chVWnhFRZ9i7fO
	i39lRu3ApnZwYrYYRIkNZAF6GAvmI7GhRq+JQlhJ5/S9jAof+qevHXSAOWfZ6GwqH1TgiN
	NjzHVjxVwcRBBMW1qnYoAJQVyJG00v8=
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fabf886c-4270-9620-5a72-210a2fccb016@suse.com>
Date: Tue, 3 Nov 2020 10:15:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 31.10.2020 16:14, Frédéric Pierret (fepitre) wrote:
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>  export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>  -include xen-version
>  
> +export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)

In patch 1 you also use the variable under tools/. Why do you
place this here rather than in the top-level Makefile?

This said, I'm not convinced anyway that we want this to be the
default. I'd rather see this as something to be set by the
package build process of distros, outside of any of the source
files.
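For illustration, a minimal sketch of what setting the value on the packaging side (rather than in the tree) might look like, assuming GNU coreutils and allowing for builds from a tarball rather than a git checkout:

```shell
# Export SOURCE_DATE_EPOCH from the package build script: use the
# last commit time when building from git, else the current time.
epoch=$(git log -1 --format=%ct 2>/dev/null || true)
[ -n "$epoch" ] || epoch=$(date +%s)
export SOURCE_DATE_EPOCH="$epoch"
echo "SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH"
```

Either branch yields a plain integer epoch, so downstream `date -u -d "@$SOURCE_DATE_EPOCH"` invocations behave the same regardless of where the value came from.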

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18046.42796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsSP-0001Km-5N; Tue, 03 Nov 2020 09:18:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18046.42796; Tue, 03 Nov 2020 09:18:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsSP-0001Kf-2E; Tue, 03 Nov 2020 09:18:25 +0000
Received: by outflank-mailman (input) for mailman id 18046;
 Tue, 03 Nov 2020 09:18:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWmA=EJ=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kZsSN-0001Ka-Ng
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:18:23 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54e63eb8-de31-40c8-9378-b20d91cbcbe7;
 Tue, 03 Nov 2020 09:18:22 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604395096815928.1676514834194;
 Tue, 3 Nov 2020 01:18:16 -0800 (PST)
X-Inumbo-ID: 54e63eb8-de31-40c8-9378-b20d91cbcbe7
ARC-Seal: i=1; a=rsa-sha256; t=1604395098; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=k9jdRcOxDXPr5tA2JlsPs03MYPBh6kL1C3c/zZq3f5s+o8NF2K+q+1gITK+UnKDXWu+eduXAufQVTLDtwb0RveSLe4czT49IK2qvGm3NDUrHfdTJicbjCil038P61LSQ3wQEQAxsnFy25RIDHLP/RbttSsORygFPwHFROZygZgg=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604395098; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=DCXrlknV304fNI7tghg2f9ApOklKQCZfC6k3Qw5tdnM=; 
	b=KFFyoCipgFUHXwq1Qmat9/q15U3zzTZEgjf1PwvPKcu3JW5hgztg0QYG/o6ZPCHJOxQPH2kKflnbAxiWWB0IIs47Qyo3ncqez9A8vPLsNZLaG4N3Tu3yWbWJ4ug/QgvkYl/NQLpIY9ZKBJJFdnwIzZRc0cQYnDEGznZriyhQLGE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604395098;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=DCXrlknV304fNI7tghg2f9ApOklKQCZfC6k3Qw5tdnM=;
	b=WcNn6ppMq63n312cshENASXLUx5/X1R+qh7tOezNykggr0rmrsb4SIcC5s9Gve0F
	DHJP8K4bSRNmmHXan+cFUD/PdlU40lare3Xie8PIfVmH4pKapFjWw0+15S4wqK+VBhD
	XA0cQJcmQa7ElcH8UKRdx8RbLBDBL+URS+RTHfiU=
Subject: Re: [PATCH v1 1/2] Define build dates/time based on SOURCE_DATE_EPOCH
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <57423c6627e00fbc3f41d3f6be6ba1e15abb96fc.1604156731.git.frederic.pierret@qubes-os.org>
 <68db059e-bc65-ee1a-ac3c-004b4552133a@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Message-ID: <0e312fe7-3dc9-e23e-ec70-70f1d8fc962f@qubes-os.org>
Date: Tue, 3 Nov 2020 10:18:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <68db059e-bc65-ee1a-ac3c-004b4552133a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CGeHsiwKj3rHrUriGhqvWUaQ25MTejxrj"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--CGeHsiwKj3rHrUriGhqvWUaQ25MTejxrj
Content-Type: multipart/mixed; boundary="xulLMhs0LkivEtEoQsCzZJNqJtbG8uxGi";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Message-ID: <0e312fe7-3dc9-e23e-ec70-70f1d8fc962f@qubes-os.org>
Subject: Re: [PATCH v1 1/2] Define build dates/time based on SOURCE_DATE_EPOCH
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <57423c6627e00fbc3f41d3f6be6ba1e15abb96fc.1604156731.git.frederic.pierret@qubes-os.org>
 <68db059e-bc65-ee1a-ac3c-004b4552133a@suse.com>
In-Reply-To: <68db059e-bc65-ee1a-ac3c-004b4552133a@suse.com>

--xulLMhs0LkivEtEoQsCzZJNqJtbG8uxGi
Content-Type: multipart/mixed;
 boundary="------------0280C1EE39BC06747DCFA01C"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0280C1EE39BC06747DCFA01C
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable


On 11/3/20 10:10 AM, Jan Beulich wrote:
> On 31.10.2020 16:14, Frédéric Pierret (fepitre) wrote:
>> --- a/tools/firmware/hvmloader/Makefile
>> +++ b/tools/firmware/hvmloader/Makefile
>> @@ -21,7 +21,11 @@ XEN_ROOT = $(CURDIR)/../../..
>>   include $(XEN_ROOT)/tools/firmware/Rules.mk
>>
>>   # SMBIOS spec requires format mm/dd/yyyy
>> +ifneq ($(SOURCE_DATE_EPOCH),)
>> +SMBIOS_REL_DATE ?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "+%m/%d/%Y" 2>/dev/null)
>> +else
>>   SMBIOS_REL_DATE ?= $(shell date +%m/%d/%Y)
>> +endif
>
> As this pattern recurs, how about abstracting it away via a
> definition (perhaps to be placed in ./Config.mk) along the
> lines of (variable name subject to improvement)
>
> DATE_EPOCH_OPTS := $(if $(SOURCE_DATE_EPOCH),-u -d "@$(SOURCE_DATE_EPOCH)")
>
> and then here simply
>
> SMBIOS_REL_DATE ?= $(shell date $(DATE_EPOCH_OPTS) +%m/%d/%Y)
>
> (i.e. in particular also without any "ifneq()")?
>
> Jan
>

Hi Jan,

Yes, it makes sense. I'll prepare another version with your comments. Thank you.

Frédéric

--------------0280C1EE39BC06747DCFA01C--

--xulLMhs0LkivEtEoQsCzZJNqJtbG8uxGi--

--CGeHsiwKj3rHrUriGhqvWUaQ25MTejxrj
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+hIFUACgkQSEAQtc3F
duL+mRAAiafGIEjO+CpiVg4+dxeCl9Re561GVlRNQg47vRwECPoeiFzVC85zQrq3
72caB/hwE8ctYRxQODonT8AAKcFjNH0v3BksT6Yi2mGY6pCi64/DVoyGEDDYzHUp
nEwtUgbJvIydQtBYXiRlxJUq7ft2VBy3CZLtYFP5TquRHd3KK5dhkbuuk/+ZyGLl
33naZJpyTUmzvHb7c7uSJSO0n+p1FMvYJrqb5WNEkNbZOYnf0SRKk3pTouKlhbhE
Nk8hUhNTle3kMBKhbN8EctEhM0UdNc2GcTNyMyqWVeg+teGXAu4I556SJ/HdwLBO
ecGv6ly7acwgsY+KnfAWJpUrADvJlOSpD4oP5SZ3Otm0vxl2w0u8Q3zdVk88+kQX
rPnGNPRcJVgU2pvqaRYgKNKveDEUQzGcMb8Om/kAZHdyDYyTwMUEeQLL+vogjxDc
TYRr5HgKdqrfLumc/XCXu2j29rgBYUHPO7+gRMpAjErPMRxkqVs/0YAGBVl89WT3
uHTsMjepUAzTwbAiKyTecEAthSnoz5b6gQcr5P+wQdCpElMQ+aCYcQTl+yyPSxbu
C6HINNl+qilf2R2KIBp8z1bj8AcpIvpl5suyTiMlmq3yj6jHTjAppkD60bPHK3L0
ExdAkY/YNXiFuTkCHKFO4Wj7q5xRWmbsgC/+it6P9QLZTSbuB2U=
=y+vN
-----END PGP SIGNATURE-----

--CGeHsiwKj3rHrUriGhqvWUaQ25MTejxrj--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:21:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:21:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18053.42808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsVF-00029V-Jw; Tue, 03 Nov 2020 09:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18053.42808; Tue, 03 Nov 2020 09:21:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsVF-00029O-Gx; Tue, 03 Nov 2020 09:21:21 +0000
Received: by outflank-mailman (input) for mailman id 18053;
 Tue, 03 Nov 2020 09:21:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWmA=EJ=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kZsVE-00029I-4y
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:21:20 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 808df712-02ad-40a5-b977-bd81c04550e2;
 Tue, 03 Nov 2020 09:21:19 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604395271618145.04295290782;
 Tue, 3 Nov 2020 01:21:11 -0800 (PST)
X-Inumbo-ID: 808df712-02ad-40a5-b977-bd81c04550e2
ARC-Seal: i=1; a=rsa-sha256; t=1604395273; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=KrHTiuNYwRjlG87inCb9R8Dero7mByvHFhtGlRjpvgtvXMj7kSjB4iRbPoGa4+4fVs93/jyyahAlDK9GGOx3lvk6+aa65Bn7H552EmWv1IFYcTzb0eXnyVCAZthsEQWxZRoTvWhlENS9ourpI3itkMGnFLjbCgcFrswvWEvENQE=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604395273; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=PONRj/GF8R+7Q+1kKZid9bzvtTksyiJqHw0juDmgOks=; 
	b=Mo8hGUMt+DwotarDnHVKupWYGB1p+Z39/3ey3Xg3+hyeqNwCQtZh+Y4+cVfAMsdHD5MZl4peZ582d9KS5MV9YMsrzS5Bvbv6ecP4RUGeeDcwgtk4LjpSJPoS3A0us9Lr3alqWKoJzZetqhrMvJAQETuc8bx2tNayziCkVXN/I6k=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604395273;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=PONRj/GF8R+7Q+1kKZid9bzvtTksyiJqHw0juDmgOks=;
	b=hKlpdPIYEfqzsKk4UnC/Xo2faXfbF6B6Dinw+MQPRoMD5qcJ0rb91W0qaUNng58Z
	Ip1OlGEiPja+++e1aiCLZSxPflTwmG9+mc08jlAn7RRDyY3YNGVt7SfLYeVaO5UOIAW
	6OWSsPj8Cbhr4M8hTwssLAMcn6jy8ZhXwQchXo+0=
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <fabf886c-4270-9620-5a72-210a2fccb016@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Message-ID: <2898550a-6921-4c5a-920f-37486e2599ea@qubes-os.org>
Date: Tue, 3 Nov 2020 10:21:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <fabf886c-4270-9620-5a72-210a2fccb016@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="3AiVYwPScRXDmpiNBK1OElkIBX4VHftoG"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--3AiVYwPScRXDmpiNBK1OElkIBX4VHftoG
Content-Type: multipart/mixed; boundary="Ro7wrEcWLT5ES2cTy32dhAq6Eqe8rkwP9";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <2898550a-6921-4c5a-920f-37486e2599ea@qubes-os.org>
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <fabf886c-4270-9620-5a72-210a2fccb016@suse.com>
In-Reply-To: <fabf886c-4270-9620-5a72-210a2fccb016@suse.com>

--Ro7wrEcWLT5ES2cTy32dhAq6Eqe8rkwP9
Content-Type: multipart/mixed;
 boundary="------------1F1F56FE57EDBD3DA5A1FD01"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1F1F56FE57EDBD3DA5A1FD01
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 11/3/20 at 10:15 AM, Jan Beulich wrote:
> On 31.10.2020 16:14, Fr=C3=A9d=C3=A9ric Pierret (fepitre) wrote:
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?=3D -unstable$(XEN_VENDORVERS=
ION)
>>   export XEN_FULLVERSION   =3D $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_E=
XTRAVERSION)
>>   -include xen-version
>>  =20
>> +export SOURCE_DATE_EPOCH	?=3D $(shell git log -1 --format=3D%ct 2>/de=
v/null)
>=20
> In patch 1 you also use the variable under tools/. Why do you
> place this here rather than in the top level Makefile?
>=20
> This said I'm not convinced anyway that we want this to be the
> default. I'd rather see this as something to get set by the
> package build process of distros, outside of any of the source
> files.
>=20
> Jan
>=20

In fact, this was intended to provide a default/example value. Indeed, each
package manager should handle this with a changelog or similar.

This is not mandatory, and if it is not wanted by default, maybe add this
example value to the INSTALL documentation?

Fr=C3=A9d=C3=A9ric

--------------1F1F56FE57EDBD3DA5A1FD01
Content-Type: application/pgp-keys;
 name="OpenPGP_0x484010B5CDC576E2.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0x484010B5CDC576E2.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0D=
y2r
UVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8vtovi=
97s
WIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIvltoBrYnEl=
D1P
yp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpXgYzTN/kq8sxBM=
h2O
rQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PLw5koqPs/6JEIVI+t0=
pyg
+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZNbYRXzkI91HCt40X2bTb2=
jTK
gvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpaA/GPaJ5DjzV0q9mkYkGDLYI3J=
/J+
s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVirEVBum723MFH4DxhTrOoWgta2nyRHO=
oi0
z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvtLbYnlSt3v32nfUXh12aQPwU/LCGIzq4oF=
NVr
Np3aWPnSajLPpQARAQABzTxGcsOpZMOpcmljIFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpY=
y5w
aWVycmV0QHF1YmVzLW9zLm9yZz7CwXgEEwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWA=
gMB
Ah4BAheAAAoJEEhAELXNxXbiPLkQAI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZ=
Jnl
+Fb5HBgthU9lBdMqNySg+s8yekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ=
9uI
TprfEqX7V2OLbrYW94qwR8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04=
Uek
t5I2Zc8iRDF9kneINiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoD=
RC/
bsAow7cBudj+lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR=
9ze
2O5Yh+/BunrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2=
VHT
fLmAOt+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1cox=
FUw
eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBkob=
1Ep
fW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKbxM/Ny=
xHr
mNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCNHe8XNVqBp=
lV0
yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOmqpbSLbra8NP8M=
u5F
ZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8V+9+dpLEU75+mpHU7=
GlE
CfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M++ZmWfEV5nCP2qvzeYDGA=
L6B
bWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr5aNCaNpv/i4gLO1IScdfDwm6g=
dfB
2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hgYlDWdbImhNL/Z7iL3eayH7T9qAVNU=
587
MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpAH+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYH=
wuc
3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYDyhxVGbeWp820cb0s1f689XCXqFYAzTfCit+Ee=
boY
ORN5CGioXzS+z0S9IhPbdUuvqs7xvC248bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9a=
Pe/
Zw4PTKWvbJlcFioofLwTQE1XvWomFPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMA=
AoJ
EEhAELXNxXbilSkP/2NcazvUDGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrP=
X8h
jITWD9ZA2bbZZ+J+a39vyY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9x=
K8m
WXwhn90SHNadEf28ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q=
8aa
+G8iAkNJcb+Wx5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Ny=
f2M
KKbWRJntjy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJ=
Us/
geoCUBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1=
KjH
uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex1=
C+w
3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PXpm5Pw=
4st
VEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQQMhGv8Dnb=
AdO
OOnumAXWq0+wl5uP
=3DRWX1
-----END PGP PUBLIC KEY BLOCK-----

--------------1F1F56FE57EDBD3DA5A1FD01--

--Ro7wrEcWLT5ES2cTy32dhAq6Eqe8rkwP9--

--3AiVYwPScRXDmpiNBK1OElkIBX4VHftoG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+hIQQACgkQSEAQtc3F
duLguBAA19Ez9+SQDToo26C5I2MbJ2MCPLegCxvx2Zh+Enq7j747nzsm30mdbpGs
/woQS5Og/oiTchCIuu6GdHWtOjmE5NwdYdEwyxAe7pyF32A0jNl66Q8xyYxwDOoL
nAhDGMqE2ka95ZOjhwJh9919a30tCLTwnLhKR1eBGA2ev8mZdRHJiHqgx+d9a3Dp
2GW+yroQ4QhbE/anW7tDE5lW57Oxt40hneYw8h82NzGkj3G/081ULbmmplUMLZ7I
VdsuQmYtdQKg90dCcAisMe5RTpua3JBBvTSBzQpoEmFgwPuwGESn4UZoHdn9Wf98
+qJxO4Mq3JdSA6OvAH2D1KjAQiOp4BwVRtJue+g9nv5hMF5VNLksZdZOhiT5nfn9
gYjyaRkvkYUZYE2ac2hC7ScBTodCrI6JMjp+oPPqIhaBdG821SmPsyvF/FycNfi5
lyj2IyoT9dzNCiwCPuOyiKLbWx/DMllK8aoZQEbAu4fs90bw3PP18eboTV8HmaaO
QnLZjIPewZeA7TMxx5YbQTJVAR73joAGrlZBrb+aRgv9vEcjdrTNw7Kr3HnlVYl/
yDjvCu2uis02c9SC0bJesMVbVWv++jsu5dDNMVT+EUysFOkPuXswB/4XQC7dLjd2
FQhPFtObI53cZNts7XJjOMwzCQAviZNg39RCg11NypUGcBUWL40=
=jRWA
-----END PGP SIGNATURE-----

--3AiVYwPScRXDmpiNBK1OElkIBX4VHftoG--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:22:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:22:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18059.42820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsWC-0002Gp-Tl; Tue, 03 Nov 2020 09:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18059.42820; Tue, 03 Nov 2020 09:22:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsWC-0002Gi-Ql; Tue, 03 Nov 2020 09:22:20 +0000
Received: by outflank-mailman (input) for mailman id 18059;
 Tue, 03 Nov 2020 09:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E907=EJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZsWB-0002Gb-R0
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:22:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 9478bd21-64a1-4952-a7b0-0e2999bb0843;
 Tue, 03 Nov 2020 09:22:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4BA9EB135;
 Tue,  3 Nov 2020 09:22:18 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=E907=EJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kZsWB-0002Gb-R0
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:22:19 +0000
X-Inumbo-ID: 9478bd21-64a1-4952-a7b0-0e2999bb0843
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 9478bd21-64a1-4952-a7b0-0e2999bb0843;
	Tue, 03 Nov 2020 09:22:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604395338;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/IiJdOQPz7XfvvSH6xuVZ0PEx+N5yh7XsTyq1HCQWm4=;
	b=JTaZU3a1rHV2JXgPspchjpKviKdR5cU7Sm6LxMIKqzyBtfJzpvAD33MdY1r/8Ewxehwspp
	BxNSejHYU4Ek05VP1psNPXybEdB6HDqgco+Us1nKpGH6boUgU7vMskBD9MMupFgbMnkkn6
	XT4GPUgLysH63tg3QeWHVukWWcX1tL8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 4BA9EB135;
	Tue,  3 Nov 2020 09:22:18 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201102131239.14134-1-jgross@suse.com>
 <20201102131239.14134-3-jgross@suse.com>
 <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <890b6547-ca4f-b195-6b9d-9078ba35c357@suse.com>
Date: Tue, 3 Nov 2020 10:22:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.20 10:02, Jan Beulich wrote:
> On 02.11.2020 14:12, Juergen Gross wrote:
>> --- a/xen/include/xen/rwlock.h
>> +++ b/xen/include/xen/rwlock.h
>> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>       u32 cnts;
>>   
>>       preempt_disable();
>> +    check_lock(&lock->lock.debug, true);
>>       cnts = atomic_read(&lock->cnts);
>>       if ( likely(_can_read_lock(cnts)) )
>>       {
> 
> I'm sorry for being picky, but this still isn't matching
> _spin_trylock(). Perhaps the difference is really benign, but
> there the check sits ahead of preempt_disable(). (It has a
> clear reason to be this way there, but without a clear reason
> for things to be differently here, I think matching ordering
> may help, at least to avoid questions.)

I think this is more an optimization opportunity: I'd rather move the
preempt_disable() into the first if clause, as there is no need to
disable preemption when the first read of the lock already shows that
acquiring it will fail.

If you want, I can prepend a patch doing that optimization.

> 
>> @@ -66,6 +67,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>            */
>>           if ( likely(_can_read_lock(cnts)) )
>>               return 1;
>> +
>>           atomic_sub(_QR_BIAS, &lock->cnts);
>>       }
>>       preempt_enable();
> 
> Stray change?

Oh yes, a leftover from the old positioning of check_lock().

> 
>> @@ -87,7 +89,10 @@ static inline void _read_lock(rwlock_t *lock)
>>        * arch_lock_acquire_barrier().
>>        */
>>       if ( likely(_can_read_lock(cnts)) )
>> +    {
>> +        check_lock(&lock->lock.debug, false);
>>           return;
>> +    }
>>   
>>       /* The slowpath will decrement the reader count, if necessary. */
>>       queue_read_lock_slowpath(lock);
>> @@ -162,7 +167,10 @@ static inline void _write_lock(rwlock_t *lock)
>>        * arch_lock_acquire_barrier().
>>        */
>>       if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
>> +    {
>> +        check_lock(&lock->lock.debug, false);
>>           return;
>> +    }
>>   
>>       queue_write_lock_slowpath(lock);
>>       /*
> 
> Maybe also for these two, but likely more importantly for ...
> 
>> @@ -328,6 +337,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
>>           /* Drop the read lock because we don't need it anymore. */
>>           read_unlock(&percpu_rwlock->rwlock);
>>       }
>> +    else
>> +        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
>>   }
> 
> ... this one a brief comment may be warranted to clarify why
> the call sits here rather than at the top?

Okay.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:23:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:23:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18063.42831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsWt-0002Qj-At; Tue, 03 Nov 2020 09:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18063.42831; Tue, 03 Nov 2020 09:23:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsWt-0002Qc-7p; Tue, 03 Nov 2020 09:23:03 +0000
Received: by outflank-mailman (input) for mailman id 18063;
 Tue, 03 Nov 2020 09:23:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZsWr-0002QT-FL
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:23:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8cd90120-311d-4a5f-89a9-0a4673ba5589;
 Tue, 03 Nov 2020 09:23:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4A9C0AC97;
 Tue,  3 Nov 2020 09:23:00 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kZsWr-0002QT-FL
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:23:01 +0000
X-Inumbo-ID: 8cd90120-311d-4a5f-89a9-0a4673ba5589
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 8cd90120-311d-4a5f-89a9-0a4673ba5589;
	Tue, 03 Nov 2020 09:23:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604395380;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YzatKUxPuJ4BmtBk4o15c/AANhscTKaCziIkcBaMskc=;
	b=urT8XrOEb9glwysdL6XbTnDep5UMoa8f1AEEmsD7t4Brwpxd3HXDlDKJK/qPWULSofxPut
	wXvkSH6nFn15rwY0dUwBQgGNqfcTD28umCD9fFWA1Ue6ARtnol3xDrusot91QTVw+lpr2s
	QcwJK5RleSnt9JtV36aL5AG24JubVio=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 4A9C0AC97;
	Tue,  3 Nov 2020 09:23:00 +0000 (UTC)
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <fabf886c-4270-9620-5a72-210a2fccb016@suse.com>
 <2898550a-6921-4c5a-920f-37486e2599ea@qubes-os.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <512c3480-5736-ff1b-3ba1-46eacd04e058@suse.com>
Date: Tue, 3 Nov 2020 10:23:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <2898550a-6921-4c5a-920f-37486e2599ea@qubes-os.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.11.2020 10:21, Frédéric Pierret wrote:
> 
> 
> On 11/3/20 at 10:15 AM, Jan Beulich wrote:
>> On 31.10.2020 16:14, Frédéric Pierret (fepitre) wrote:
>>> --- a/xen/Makefile
>>> +++ b/xen/Makefile
>>> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>>>   export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>>>   -include xen-version
>>>   
>>> +export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)
>>
>> In patch 1 you also use the variable under tools/. Why do you
>> place this here rather than in the top level Makefile?
>>
>> This said I'm not convinced anyway that we want this to be the
>> default. I'd rather see this as something to get set by the
>> package build process of distros, outside of any of the source
>> files.
> 
> In fact, this was intended to provide a default/example value. Indeed, each
> package manager should handle this with a changelog or similar.
> 
> This is not mandatory, and if it is not wanted by default, maybe add this
> example value to the INSTALL documentation?

I'm certainly fine with putting this in the docs. (Whether
INSTALL is the right place I can't tell.)
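As a sketch of what such documentation might suggest to packagers (the fallback order and variable handling below are illustrative assumptions, not an agreed Xen convention), a distro build script could export the value itself instead of relying on a Makefile default:

```shell
# Illustrative packaging snippet: set SOURCE_DATE_EPOCH outside the Xen
# build system, falling back from an explicit value, to git metadata,
# to the current time.  The fallback order is an example, not policy.
epoch="${SOURCE_DATE_EPOCH:-}"

if [ -z "$epoch" ]; then
    # Works only when building from a git checkout.
    epoch=$(git log -1 --format=%ct 2>/dev/null || true)
fi

# Last resort: current time (not reproducible, but never empty).
: "${epoch:=$(date +%s)}"

export SOURCE_DATE_EPOCH="$epoch"
echo "SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH"
```

A real package build would typically derive the epoch from its changelog (as suggested above) rather than from git or the clock.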

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18074.42844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsdy-0003LM-3p; Tue, 03 Nov 2020 09:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18074.42844; Tue, 03 Nov 2020 09:30:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsdx-0003LF-Vs; Tue, 03 Nov 2020 09:30:21 +0000
Received: by outflank-mailman (input) for mailman id 18074;
 Tue, 03 Nov 2020 09:30:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZsdw-0003L5-Fc
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 80e816b4-7c1e-46e0-a62b-57310a1ac20d;
 Tue, 03 Nov 2020 09:30:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEAE2AF4F;
 Tue,  3 Nov 2020 09:30:18 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZsdw-0003L5-Fc
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:20 +0000
X-Inumbo-ID: 80e816b4-7c1e-46e0-a62b-57310a1ac20d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 80e816b4-7c1e-46e0-a62b-57310a1ac20d;
	Tue, 03 Nov 2020 09:30:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id AEAE2AF4F;
	Tue,  3 Nov 2020 09:30:18 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v7 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
Date: Tue,  3 Nov 2020 10:30:06 +0100
Message-Id: <20201103093015.1063-2-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The parameters map and is_iomem always have the same value. Remove them
to prepare the function for conversion to struct dma_buf_map.

v4:
	* don't check for !kmap->virtual; will always be false

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 16d68c04ea5d..e305fadb8bc8 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -378,32 +378,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				      bool map, bool *is_iomem)
+static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	bool is_iomem;
 
 	if (gbo->kmap_use_count > 0)
 		goto out;
 
-	if (kmap->virtual || !map)
-		goto out;
-
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
 	if (ret)
 		return ERR_PTR(ret);
 
 out:
-	if (!kmap->virtual) {
-		if (is_iomem)
-			*is_iomem = false;
-		return NULL; /* not mapped; don't increment ref */
-	}
 	++gbo->kmap_use_count;
-	if (is_iomem)
-		return ttm_kmap_obj_virtual(kmap, is_iomem);
-	return kmap->virtual;
+	return ttm_kmap_obj_virtual(kmap, &is_iomem);
 }
 
 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
@@ -448,7 +438,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+	base = drm_gem_vram_kmap_locked(gbo);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto err_drm_gem_vram_unpin_locked;
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18076.42868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse2-0003Oa-Jj; Tue, 03 Nov 2020 09:30:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18076.42868; Tue, 03 Nov 2020 09:30:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse2-0003OQ-GD; Tue, 03 Nov 2020 09:30:26 +0000
Received: by outflank-mailman (input) for mailman id 18076;
 Tue, 03 Nov 2020 09:30:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZse1-0003L5-EE
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2b8da4b5-8bd0-473c-8059-49da96a8f0a8;
 Tue, 03 Nov 2020 09:30:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEF6BAF5B;
 Tue,  3 Nov 2020 09:30:18 +0000 (UTC)
X-Inumbo-ID: 2b8da4b5-8bd0-473c-8059-49da96a8f0a8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v7 00/10] Support GEM object mappings from I/O memory
Date: Tue,  3 Nov 2020 10:30:05 +0100
Message-Id: <20201103093015.1063-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.

This patchset changes GEM's vmap/vunmap interfaces to forward pointers
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all framebuffers
that require and support them.

Patches #1 to #4 prepare VRAM helpers and drivers.

Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap
that is usable with TTM-based GEM drivers, and patch #6 updates GEM's
vmap/vunmap callback to forward instances of type struct dma_buf_map. While
the patch touches many files throughout the DRM modules, the applied changes
are mostly trivial interface fixes. Several TTM-based GEM drivers now use
the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to
forward struct dma_buf_map.

With struct dma_buf_map propagated through the layers, patches #8 to #10
convert DRM clients and generic fbdev emulation to use it. Updating the
fbdev framebuffer will select the correct functions, either for system or
I/O memory.

There is also a set of IGT testcases for fbdev at [1]. Reading and writing
fbdev device files has several corner cases near the EOF that the tests
cover as well. The original fbdev code has different semantics across its
implementations (sys, cfb). Patch #10 and the testcases intend to harmonize
the behaviour and serve as a reference.

v7:
	* return the number of read/written bytes in fbdev code, if any
	* init QXL cursor from BO buffer (kernel test robot)
	* use min_t(size_t,) (kernel test robot)
v6:
	* don't call page_to_phys() on fbdev framebuffers in I/O memory;
	  warn instead (Daniel)
v5:
	* rebase onto latest TTM changes (Christian)
	* support TTM premapped memory correctly (Christian)
	* implement fb_read/fb_write internally (Sam, Daniel)
	* cleanups
v4:
	* provide TTM vmap/vunmap plus GEM helpers and convert drivers
	  over (Christian, Daniel)
	* remove several empty functions
	* more TODOs and documentation (Daniel)
v3:
	* recreate the whole patchset on top of struct dma_buf_map
v2:
	* RFC patchset

[1] https://gitlab.freedesktop.org/tzimmermann/igt-gpu-tools/-/merge_requests/1

Thomas Zimmermann (10):
  drm/vram-helper: Remove invariant parameters from internal kmap
    function
  drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
  drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
  drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
  drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
  drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM
    backends
  drm/gem: Update internal GEM vmap/vunmap interfaces to use struct
    dma_buf_map
  drm/gem: Store client buffer mappings as struct dma_buf_map
  dma-buf-map: Add memcpy and pointer-increment interfaces
  drm/fb_helper: Support framebuffers in I/O memory

 Documentation/gpu/todo.rst                  |  37 ++-
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +--
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/bochs/bochs_kms.c           |   1 -
 drivers/gpu/drm/drm_client.c                |  38 +--
 drivers/gpu/drm/drm_fb_helper.c             | 250 ++++++++++++++++++--
 drivers/gpu/drm/drm_gem.c                   |  29 ++-
 drivers/gpu/drm/drm_gem_cma_helper.c        |  27 +--
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 ++--
 drivers/gpu/drm/drm_gem_ttm_helper.c        |  38 +++
 drivers/gpu/drm/drm_gem_vram_helper.c       | 117 +++++----
 drivers/gpu/drm/drm_internal.h              |   5 +-
 drivers/gpu/drm/drm_prime.c                 |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   3 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       |   1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  12 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c     |  12 -
 drivers/gpu/drm/exynos/exynos_drm_gem.h     |   2 -
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 --
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +-
 drivers/gpu/drm/qxl/qxl_display.c           |  15 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 ++-
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +-
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 --
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c           |  72 ++++++
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   7 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 +-
 drivers/gpu/drm/vkms/vkms_plane.c           |  15 +-
 drivers/gpu/drm/vkms/vkms_writeback.c       |  22 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_client.h                    |   7 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   3 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_ttm_helper.h            |   6 +
 include/drm/drm_gem_vram_helper.h           |  14 +-
 include/drm/drm_mode_config.h               |  12 -
 include/drm/ttm/ttm_bo_api.h                |  28 +++
 include/linux/dma-buf-map.h                 |  93 +++++++-
 64 files changed, 856 insertions(+), 438 deletions(-)

--
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18077.42880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse3-0003Qv-TF; Tue, 03 Nov 2020 09:30:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18077.42880; Tue, 03 Nov 2020 09:30:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse3-0003Qo-Pe; Tue, 03 Nov 2020 09:30:27 +0000
Received: by outflank-mailman (input) for mailman id 18077;
 Tue, 03 Nov 2020 09:30:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZse2-0003LA-QQ
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 73e736b1-9aab-494e-ae1c-615bc0e5d06e;
 Tue, 03 Nov 2020 09:30:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8502CB138;
 Tue,  3 Nov 2020 09:30:21 +0000 (UTC)
X-Inumbo-ID: 73e736b1-9aab-494e-ae1c-615bc0e5d06e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v7 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
Date: Tue,  3 Nov 2020 10:30:10 +0100
Message-Id: <20201103093015.1063-6-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location and/or writecombine flags.

On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into the TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
object callbacks.

v5:
	* use size_t for storing mapping size (Christian)
	* ignore premapped memory areas correctly in ttm_bo_vunmap()
	* rebase onto latest TTM interfaces (Christian)
	* remove BUG() from ttm_bo_vmap() (Christian)
v4:
	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
	  Christian)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
 drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
 include/drm/drm_gem_ttm_helper.h     |  6 +++
 include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
 include/linux/dma-buf-map.h          | 20 ++++++++
 5 files changed, 164 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 0e4fb9ba43ad..db4c14d78a30 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 }
 EXPORT_SYMBOL(drm_gem_ttm_print_info);
 
+/**
+ * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: [out] returns the dma-buf mapping.
+ *
+ * Maps a GEM object with ttm_bo_vmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	return ttm_bo_vmap(bo, map);
+
+}
+EXPORT_SYMBOL(drm_gem_ttm_vmap);
+
+/**
+ * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: dma-buf mapping.
+ *
+ * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
+ * &drm_gem_object_funcs.vunmap callback.
+ */
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	ttm_bo_vunmap(bo, map);
+}
+EXPORT_SYMBOL(drm_gem_ttm_vunmap);
+
 /**
  * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
  * @gem: GEM object.
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index ecb54415d1ca..7ccb2295cac1 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -32,6 +32,7 @@
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
 #include <linux/io.h>
 #include <linux/highmem.h>
 #include <linux/wait.h>
@@ -471,6 +472,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 }
 EXPORT_SYMBOL(ttm_bo_kunmap);
 
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+	int ret;
+
+	ret = ttm_mem_io_reserve(bo->bdev, mem);
+	if (ret)
+		return ret;
+
+	if (mem->bus.is_iomem) {
+		void __iomem *vaddr_iomem;
+		size_t size = bo->num_pages << PAGE_SHIFT;
+
+		if (mem->bus.addr)
+			vaddr_iomem = (void __iomem *)mem->bus.addr;
+		else if (mem->bus.caching == ttm_write_combined)
+			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
+		else
+			vaddr_iomem = ioremap(mem->bus.offset, size);
+
+		if (!vaddr_iomem)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
+
+	} else {
+		struct ttm_operation_ctx ctx = {
+			.interruptible = false,
+			.no_wait_gpu = false
+		};
+		struct ttm_tt *ttm = bo->ttm;
+		pgprot_t prot;
+		void *vaddr;
+
+		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+		if (ret)
+			return ret;
+
+		/*
+		 * We need to use vmap to get the desired page protection
+		 * or to make the buffer object look contiguous.
+		 */
+		prot = ttm_io_prot(bo, mem, PAGE_KERNEL);
+		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
+		if (!vaddr)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr(map, vaddr);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(ttm_bo_vmap);
+
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+
+	if (dma_buf_map_is_null(map))
+		return;
+
+	if (!map->is_iomem)
+		vunmap(map->vaddr);
+	else if (!mem->bus.addr)
+		iounmap(map->vaddr_iomem);
+	dma_buf_map_clear(map);
+
+	ttm_mem_io_free(bo->bdev, &bo->mem);
+}
+EXPORT_SYMBOL(ttm_bo_vunmap);
+
 static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
 				 bool dst_use_tt)
 {
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 118cef76f84f..7c6d874910b8 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -10,11 +10,17 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+struct dma_buf_map;
+
 #define drm_gem_ttm_of_gem(gem_obj) \
 	container_of(gem_obj, struct ttm_buffer_object, base)
 
 void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 			    const struct drm_gem_object *gem);
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map);
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map);
 int drm_gem_ttm_mmap(struct drm_gem_object *gem,
 		     struct vm_area_struct *vma);
 
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 37102e45e496..2c59a785374c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -48,6 +48,8 @@ struct ttm_bo_global;
 
 struct ttm_bo_device;
 
+struct dma_buf_map;
+
 struct drm_mm_node;
 
 struct ttm_placement;
@@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
  */
 void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
 
+/**
+ * ttm_bo_vmap
+ *
+ * @bo: The buffer object.
+ * @map: pointer to a struct dma_buf_map representing the map.
+ *
+ * Sets up a kernel virtual mapping, using ioremap or vmap, of the
+ * data in the buffer object. The parameter @map returns the virtual
+ * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
+ *
+ * Returns
+ * -ENOMEM: Out of memory.
+ * -EINVAL: Invalid range.
+ */
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
+/**
+ * ttm_bo_vunmap
+ *
+ * @bo: The buffer object.
+ * @map: Object describing the map to unmap.
+ *
+ * Unmaps a kernel map set up by ttm_bo_vmap().
+ */
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
 /**
  * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
  *
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
  *
  *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
  *
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
 	map->is_iomem = false;
 }
 
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map:		The dma-buf mapping structure
+ * @vaddr_iomem:	An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+					       void __iomem *vaddr_iomem)
+{
+	map->vaddr_iomem = vaddr_iomem;
+	map->is_iomem = true;
+}
+
 /**
  * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
  * @lhs:	The dma-buf mapping structure
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18075.42856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsdz-0003ME-Cw; Tue, 03 Nov 2020 09:30:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18075.42856; Tue, 03 Nov 2020 09:30:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZsdz-0003M7-8A; Tue, 03 Nov 2020 09:30:23 +0000
Received: by outflank-mailman (input) for mailman id 18075;
 Tue, 03 Nov 2020 09:30:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZsdx-0003LA-Su
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fa5de9f4-1ba5-4022-85a8-5d65f79434e3;
 Tue, 03 Nov 2020 09:30:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08299B119;
 Tue,  3 Nov 2020 09:30:20 +0000 (UTC)
X-Inumbo-ID: fa5de9f4-1ba5-4022-85a8-5d65f79434e3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v7 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
Date: Tue,  3 Nov 2020 10:30:08 +0100
Message-Id: <20201103093015.1063-4-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function etnaviv_gem_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
 3 files changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 914f0867ff71..9682c26d89bb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
 void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 67d9a2b9ea6a..bbd235473645 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
 	.unpin = etnaviv_gem_prime_unpin,
 	.get_sg_table = etnaviv_gem_prime_get_sg_table,
 	.vmap = etnaviv_gem_prime_vmap,
-	.vunmap = etnaviv_gem_prime_vunmap,
 	.vm_ops = &vm_ops,
 };
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..a6d9932a32ae 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
 	return etnaviv_gem_vmap(obj);
 }
 
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* TODO msm_gem_vunmap() */
-}
-
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma)
 {
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18078.42891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse7-0003VX-D0; Tue, 03 Nov 2020 09:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18078.42891; Tue, 03 Nov 2020 09:30:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse7-0003VM-8s; Tue, 03 Nov 2020 09:30:31 +0000
Received: by outflank-mailman (input) for mailman id 18078;
 Tue, 03 Nov 2020 09:30:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZse6-0003L5-EH
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 80c0e393-8ceb-4e78-a8e2-d0618d8fb958;
 Tue, 03 Nov 2020 09:30:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 398E5B016;
 Tue,  3 Nov 2020 09:30:19 +0000 (UTC)
X-Inumbo-ID: 80c0e393-8ceb-4e78-a8e2-d0618d8fb958
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v7 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
Date: Tue,  3 Nov 2020 10:30:07 +0100
Message-Id: <20201103093015.1063-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
 drivers/gpu/drm/vc4/vc4_bo.c         |  1 -
 include/drm/drm_gem_cma_helper.h     |  1 -
 3 files changed, 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- *     address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
 static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.free = drm_gem_cma_free_object,
 	.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
 	.export = vc4_prime_export,
 	.get_sg_table = drm_gem_cma_prime_get_sg_table,
 	.vmap = vc4_prime_vmap,
-	.vunmap = drm_gem_cma_prime_vunmap,
 	.vm_ops = &vc4_vm_ops,
 };
 
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18079.42904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse8-0003YH-NY; Tue, 03 Nov 2020 09:30:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18079.42904; Tue, 03 Nov 2020 09:30:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZse8-0003Y3-JT; Tue, 03 Nov 2020 09:30:32 +0000
Received: by outflank-mailman (input) for mailman id 18079;
 Tue, 03 Nov 2020 09:30:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZse7-0003LA-QV
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 868bce74-7153-4d6c-af59-32e68e8fce37;
 Tue, 03 Nov 2020 09:30:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B272B135;
 Tue,  3 Nov 2020 09:30:23 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZse7-0003LA-QV
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:31 +0000
X-Inumbo-ID: 868bce74-7153-4d6c-af59-32e68e8fce37
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 868bce74-7153-4d6c-af59-32e68e8fce37;
	Tue, 03 Nov 2020 09:30:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1B272B135;
	Tue,  3 Nov 2020 09:30:23 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v7 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
Date: Tue,  3 Nov 2020 10:30:12 +0100
Message-Id: <20201103093015.1063-8-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

GEM's internal vmap and vunmap interfaces now wrap memory pointers in
struct dma_buf_map. drm_gem_vmap() returns an errno code and fills a
caller-provided struct dma_buf_map instead of returning a raw pointer
or ERR_PTR() value; drm_gem_vunmap() takes the mapping structure
instead of a void pointer.
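
As an editor's illustration (not part of the patch): the new calling
convention can be sketched in plain userspace C with stand-in types. The
struct layouts and the helper name gem_vmap_model() are simplified
assumptions; the real definitions live in <linux/dma-buf-map.h> and the
DRM headers.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-ins for the kernel types (simplified; the real dma_buf_map also
 * carries an __iomem pointer in a union). */
struct dma_buf_map { void *vaddr; int is_iomem; };
struct drm_gem_object { struct dma_buf_map backing; int has_vmap_cb; };

/* Models drm_gem_vmap() after this patch: the status is the return
 * value, and the mapping is delivered through the out-parameter. */
static int gem_vmap_model(struct drm_gem_object *obj, struct dma_buf_map *map)
{
	if (!obj->has_vmap_cb)
		return -EOPNOTSUPP;	/* no .vmap callback installed */
	*map = obj->backing;		/* callback fills the mapping */
	if (!map->vaddr)
		return -ENOMEM;		/* callback produced a NULL map */
	return 0;
}
```

Callers accordingly switch from IS_ERR() checks on a returned pointer to
checking an integer status, as the drm_client.c hunk in the diff shows.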

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
 drivers/gpu/drm/drm_gem.c      | 26 +++++++++++++-------------
 drivers/gpu/drm/drm_internal.h |  5 +++--
 drivers/gpu/drm/drm_prime.c    | 14 ++++----------
 4 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Noralf Trønnes
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  */
 void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (buffer->vaddr)
 		return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
-	if (IS_ERR(vaddr))
-		return vaddr;
+	ret = drm_gem_vmap(buffer->gem, &map);
+	if (ret)
+		return ERR_PTR(ret);
 
-	buffer->vaddr = vaddr;
+	buffer->vaddr = map.vaddr;
 
-	return vaddr;
+	return map.vaddr;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+	drm_gem_vunmap(buffer->gem, &map);
 	buffer->vaddr = NULL;
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 4231fda26e70..eb2d23e04be9 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map;
 	int ret;
 
 	if (!obj->funcs->vmap)
-		return ERR_PTR(-EOPNOTSUPP);
+		return -EOPNOTSUPP;
 
-	ret = obj->funcs->vmap(obj, &map);
+	ret = obj->funcs->vmap(obj, map);
 	if (ret)
-		return ERR_PTR(ret);
-	else if (dma_buf_map_is_null(&map))
-		return ERR_PTR(-ENOMEM);
+		return ret;
+	else if (dma_buf_map_is_null(map))
+		return -ENOMEM;
 
-	return map.vaddr;
+	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
-	if (!vaddr)
+	if (dma_buf_map_is_null(map))
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, &map);
+		obj->funcs->vunmap(obj, map);
+
+	/* Always set the mapping to NULL. Callers may rely on this. */
+	dma_buf_map_clear(map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 2bdac3557765..81d386b5b92a 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
 
 struct dentry;
 struct dma_buf;
+struct dma_buf_map;
 struct drm_connector;
 struct drm_crtc;
 struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 187b55ede62e..302e2bb3dfff 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
  * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in map.
  *
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
  */
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
-	void *vaddr;
 
-	vaddr = drm_gem_vmap(obj);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
+	return drm_gem_vmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, map->vaddr);
+	drm_gem_vunmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18080.42916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseD-0003fF-5j; Tue, 03 Nov 2020 09:30:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18080.42916; Tue, 03 Nov 2020 09:30:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseD-0003ew-0L; Tue, 03 Nov 2020 09:30:37 +0000
Received: by outflank-mailman (input) for mailman id 18080;
 Tue, 03 Nov 2020 09:30:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZseB-0003L5-Em
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a0de23ba-368d-40dc-8b4b-74b63af6e1cb;
 Tue, 03 Nov 2020 09:30:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B16F0B11A;
 Tue,  3 Nov 2020 09:30:20 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZseB-0003L5-Em
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:35 +0000
X-Inumbo-ID: a0de23ba-368d-40dc-8b4b-74b63af6e1cb
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id a0de23ba-368d-40dc-8b4b-74b63af6e1cb;
	Tue, 03 Nov 2020 09:30:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B16F0B11A;
	Tue,  3 Nov 2020 09:30:20 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v7 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
Date: Tue,  3 Nov 2020 10:30:09 +0100
Message-Id: <20201103093015.1063-5-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
them before changing the interface to use struct dma_buf_map. As a side
effect of removing exynos_drm_gem_prime_vmap(), the returned error code
changes from -ENOMEM to -EOPNOTSUPP.
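
The error-code shift can be illustrated with a small standalone model
(editor's sketch; gem_vmap_errno() and empty_vmap() are hypothetical
names). drm_gem_vmap() reports -EOPNOTSUPP when no .vmap callback is
installed, but -ENOMEM when a callback runs and yields no address,
which is exactly what the removed empty stub used to do.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

typedef void *(*vmap_cb)(void);

/* Models the decision in drm_gem_vmap(): a missing callback versus a
 * callback that returns no mapping produce different error codes. */
static int gem_vmap_errno(vmap_cb vmap)
{
	void *vaddr;

	if (!vmap)
		return -EOPNOTSUPP;	/* no callback installed */
	vaddr = vmap();
	if (!vaddr)
		return -ENOMEM;		/* callback produced no mapping */
	return 0;
}

/* The removed exynos stub behaved like this: always NULL. */
static void *empty_vmap(void) { return NULL; }
```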

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
 drivers/gpu/drm/exynos/exynos_drm_gem.h |  2 --
 2 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 4afbf5109cbf..4396224227d1 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -135,8 +135,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
 static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
 	.free = exynos_drm_gem_free_object,
 	.get_sg_table = exynos_drm_gem_prime_get_sg_table,
-	.vmap = exynos_drm_gem_prime_vmap,
-	.vunmap	= exynos_drm_gem_prime_vunmap,
 	.vm_ops = &exynos_drm_gem_vm_ops,
 };
 
@@ -469,16 +467,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 	return &exynos_gem->base;
 }
 
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	return NULL;
-}
-
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma)
 {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 74e926abeff0..a23272fb96fb 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -107,8 +107,6 @@ struct drm_gem_object *
 exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 				     struct dma_buf_attachment *attach,
 				     struct sg_table *sgt);
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma);
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18081.42924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseD-0003gr-T1; Tue, 03 Nov 2020 09:30:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18081.42924; Tue, 03 Nov 2020 09:30:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseD-0003gH-If; Tue, 03 Nov 2020 09:30:37 +0000
Received: by outflank-mailman (input) for mailman id 18081;
 Tue, 03 Nov 2020 09:30:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZseC-0003LA-Qp
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 04239475-b5a0-46ff-968f-ce3578ee099d;
 Tue, 03 Nov 2020 09:30:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A49C2B1A6;
 Tue,  3 Nov 2020 09:30:24 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZseC-0003LA-Qp
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:36 +0000
X-Inumbo-ID: 04239475-b5a0-46ff-968f-ce3578ee099d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 04239475-b5a0-46ff-968f-ce3578ee099d;
	Tue, 03 Nov 2020 09:30:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A49C2B1A6;
	Tue,  3 Nov 2020 09:30:24 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v7 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Tue,  3 Nov 2020 10:30:14 +0100
Message-Id: <20201103093015.1063-10-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Framebuffer updates require a memcpy from system memory into the
mapping, plus a pointer-increment function to advance through the
buffer. Add both interfaces, with documentation.
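
As an editor's sketch (not part of the patch), the intended usage
pattern can be modelled in plain C. Only the system-memory branch is
implemented here, since memcpy_toio() and __iomem pointers exist only in
the kernel, and the struct layout is likewise simplified.

```c
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Simplified model of struct dma_buf_map: system memory only. */
struct dma_buf_map { void *vaddr; int is_iomem; };

static void dma_buf_map_memcpy_to(struct dma_buf_map *dst,
				  const void *src, size_t len)
{
	/* kernel version: if (dst->is_iomem) memcpy_toio(...); else ... */
	memcpy(dst->vaddr, src, len);
}

static void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}
```

A framebuffer update then becomes a loop of memcpy-then-increment over
the mapping, matching the code-block example in the documentation that
this patch adds.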

v5:
	* include <linux/string.h> to build on sparc64 (Sam)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 include/linux/dma-buf-map.h | 73 ++++++++++++++++++++++++++++++++-----
 1 file changed, 63 insertions(+), 10 deletions(-)

diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..583a3a1f9447 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -7,6 +7,7 @@
 #define __DMA_BUF_MAP_H__
 
 #include <linux/io.h>
+#include <linux/string.h>
 
 /**
  * DOC: overview
@@ -32,6 +33,14 @@
  * accessing the buffer. Use the returned instance and the helper functions
  * to access the buffer's memory in the correct way.
  *
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
  * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
 * considered bad style. Rather than accessing its fields directly, use one
  * of the provided helper functions, or implement your own. For example,
@@ -51,6 +60,14 @@
  *
 *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
  *
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_clear(&map);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -73,17 +90,19 @@
  *	if (dma_buf_map_is_equal(&sys_map, &io_map))
  *		// always false
  *
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
  *
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ *	const void *src = ...; // source buffer
+ *	size_t len = ...; // length of src
+ *
+ *	dma_buf_map_memcpy_to(&map, src, len);
+ *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
  */
 
 /**
@@ -210,4 +229,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
 	}
 }
 
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst:	The dma-buf mapping structure
+ * @src:	The source buffer
+ * @len:	The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+	if (dst->is_iomem)
+		memcpy_toio(dst->vaddr_iomem, src, len);
+	else
+		memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map:	The dma-buf mapping structure
+ * @incr:	The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct value will be updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+	if (map->is_iomem)
+		map->vaddr_iomem += incr;
+	else
+		map->vaddr += incr;
+}
+
 #endif /* __DMA_BUF_MAP_H__ */
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18082.42940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseI-0003pq-Ea; Tue, 03 Nov 2020 09:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18082.42940; Tue, 03 Nov 2020 09:30:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseI-0003pX-8n; Tue, 03 Nov 2020 09:30:42 +0000
Received: by outflank-mailman (input) for mailman id 18082;
 Tue, 03 Nov 2020 09:30:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZseG-0003L5-Em
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 76d9e46c-6026-48f7-810e-4737969eeac1;
 Tue, 03 Nov 2020 09:30:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EBCEEB1A0;
 Tue,  3 Nov 2020 09:30:23 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZseG-0003L5-Em
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:40 +0000
X-Inumbo-ID: 76d9e46c-6026-48f7-810e-4737969eeac1
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 76d9e46c-6026-48f7-810e-4737969eeac1;
	Tue, 03 Nov 2020 09:30:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EBCEEB1A0;
	Tue,  3 Nov 2020 09:30:23 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v7 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
Date: Tue,  3 Nov 2020 10:30:13 +0100
Message-Id: <20201103093015.1063-9-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Kernel DRM clients now store their framebuffer address in an instance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.

Callers of drm_client_buffer_vmap() receive a copy of the mapping in
the call's supplied argument. The copy can be accessed and modified with
dma_buf_map interfaces.

v6:
	* don't call page_to_phys() on framebuffers in I/O memory;
	  warn instead (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
 drivers/gpu/drm/drm_fb_helper.c | 32 ++++++++++++++++++++-----------
 include/drm/drm_client.h        |  7 ++++---
 3 files changed, 45 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ac0082bed966..fe573acf1067 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	struct drm_device *dev = buffer->client->dev;
 
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	drm_gem_vunmap(buffer->gem, &buffer->map);
 
 	if (buffer->gem)
 		drm_gem_object_put(buffer->gem);
@@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
+ * @map_copy: Returns the mapped memory's address
  *
  * This function maps a client buffer into kernel address space. If the
- * buffer is already mapped, it returns the mapping's address.
+ * buffer is already mapped, it returns the existing mapping's address.
  *
  * Client buffer mappings are not ref'counted. Each call to
  * drm_client_buffer_vmap() should be followed by a call to
  * drm_client_buffer_vunmap(); or the client buffer should be mapped
  * throughout its lifetime.
  *
+ * The returned address is a copy of the internal value. In contrast to
+ * other vmap interfaces, it is not required by the client's vunmap
+ * function, so you can modify it at will during blit and draw operations.
+ *
  * Returns:
- *	The mapped memory's address
+ *	0 on success, or a negative errno code otherwise.
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+int
+drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
 {
-	struct dma_buf_map map;
+	struct dma_buf_map *map = &buffer->map;
 	int ret;
 
-	if (buffer->vaddr)
-		return buffer->vaddr;
+	if (dma_buf_map_is_set(map))
+		goto out;
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap(buffer->gem, &map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
-	buffer->vaddr = map.vaddr;
+out:
+	*map_copy = *map;
 
-	return map.vaddr;
+	return 0;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+	struct dma_buf_map *map = &buffer->map;
 
-	drm_gem_vunmap(buffer->gem, &map);
-	buffer->vaddr = NULL;
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 92e0db30fdf7..a0d88130fedb 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->vaddr + offset;
+	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
@@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(vaddr))
+			ret = drm_client_buffer_vmap(helper->buffer, &map);
+			if (ret)
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
 		}
@@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct drm_framebuffer *fb;
 	struct fb_info *fbi;
 	u32 format;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2096,14 +2098,22 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+		if (ret)
+			return ret;
+		if (map.is_iomem)
+			fbi->screen_base = map.vaddr_iomem;
+		else
+			fbi->screen_buffer = map.vaddr;
 
-		fbi->screen_buffer = vaddr;
-		/* Shamelessly leak the physical address to user-space */
+		/*
+		 * Shamelessly leak the physical address to user-space. As
+		 * page_to_phys() is undefined for I/O memory, warn in this
+		 * case.
+		 */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
-		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
+		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0 &&
+		    !drm_WARN_ON_ONCE(dev, map.is_iomem))
 			fbi->fix.smem_start =
 				page_to_phys(virt_to_page(fbi->screen_buffer));
 #endif
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_
 
+#include <linux/dma-buf-map.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;
 
 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @map: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct dma_buf_map map;
 
 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18083.42950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseJ-0003s6-Gm; Tue, 03 Nov 2020 09:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18083.42950; Tue, 03 Nov 2020 09:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseJ-0003ri-2E; Tue, 03 Nov 2020 09:30:43 +0000
Received: by outflank-mailman (input) for mailman id 18083;
 Tue, 03 Nov 2020 09:30:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZseH-0003LA-Qt
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3513d16f-a580-434e-bd34-81474ac20a00;
 Tue, 03 Nov 2020 09:30:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6DE2AACF5;
 Tue,  3 Nov 2020 09:30:25 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZseH-0003LA-Qt
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:41 +0000
X-Inumbo-ID: 3513d16f-a580-434e-bd34-81474ac20a00
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 3513d16f-a580-434e-bd34-81474ac20a00;
	Tue, 03 Nov 2020 09:30:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6DE2AACF5;
	Tue,  3 Nov 2020 09:30:25 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v7 10/10] drm/fb_helper: Support framebuffers in I/O memory
Date: Tue,  3 Nov 2020 10:30:15 +0100
Message-Id: <20201103093015.1063-11-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.

For drivers with direct access to the framebuffer memory, the callback
functions in struct fb_ops test for the type of memory and call the
respective fb_sys_ or fb_cfb_ functions. Read and write operations are
implemented internally by DRM's fbdev helper.

For drivers that employ a shadow buffer, fbdev's blit function retrieves
the framebuffer address as struct dma_buf_map, and uses dma_buf_map
interfaces to access the buffer.

The bochs driver on sparc64 uses a workaround to flag the framebuffer as
I/O memory and avoid a HW exception. With the introduction of struct
dma_buf_map, this is no longer required. The patch removes the respective
code from both bochs and fbdev.

v7:
	* use min_t(size_t,) (kernel test robot)
	* return the number of bytes read/written, if any (fbdev testcase)
v5:
	* implement fb_read/fb_write internally (Daniel, Sam)
v4:
	* move dma_buf_map changes into separate patch (Daniel)
	* TODO list: comment on fbdev updates (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 Documentation/gpu/todo.rst        |  19 ++-
 drivers/gpu/drm/bochs/bochs_kms.c |   1 -
 drivers/gpu/drm/drm_fb_helper.c   | 220 ++++++++++++++++++++++++++++--
 include/drm/drm_mode_config.h     |  12 --
 4 files changed, 223 insertions(+), 29 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 59f63f1d7680..acca232b025b 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -201,13 +201,28 @@ Convert drivers to use drm_fbdev_generic_setup()
 ------------------------------------------------
 
 Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
-atomic modesetting and GEM vmap support. Current generic fbdev emulation
-expects the framebuffer in system memory (or system-like memory).
+atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
+expected the framebuffer in system memory or system-like memory. By employing
+struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
+as well.
 
 Contact: Maintainer of the driver you plan to convert
 
 Level: Intermediate
 
+Reimplement functions in drm_fbdev_fb_ops without fbdev
+-------------------------------------------------------
+
+A number of callback functions in drm_fbdev_fb_ops could benefit from
+being rewritten without dependencies on the fbdev module. Some of the
+helpers could further benefit from using struct dma_buf_map instead of
+raw pointers.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
 drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
 -----------------------------------------------------------------
 
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
 	bochs->dev->mode_config.preferred_depth = 24;
 	bochs->dev->mode_config.prefer_shadow = 0;
 	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
-	bochs->dev->mode_config.fbdev_use_iomem = true;
 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
 
 	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index a0d88130fedb..01ba1da28511 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
 }
 
 static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
-					  struct drm_clip_rect *clip)
+					  struct drm_clip_rect *clip,
+					  struct dma_buf_map *dst)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
-	for (y = clip->y1; y < clip->y2; y++) {
-		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
-			memcpy(dst, src, len);
-		else
-			memcpy_toio((void __iomem *)dst, src, len);
+	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
 
+	for (y = clip->y1; y < clip->y2; y++) {
+		dma_buf_map_memcpy_to(dst, src, len);
+		dma_buf_map_incr(dst, fb->pitches[0]);
 		src += fb->pitches[0];
-		dst += fb->pitches[0];
 	}
 }
 
@@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 			ret = drm_client_buffer_vmap(helper->buffer, &map);
 			if (ret)
 				return;
-			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
+			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
 		}
+
 		if (helper->fb->funcs->dirty)
 			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
 						 &clip_copy, 1);
@@ -2027,6 +2026,199 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static bool drm_fbdev_use_iomem(struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
+}
+
+static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count,
+				   loff_t pos)
+{
+	const char __iomem *src = info->screen_base + pos;
+	size_t alloc_size = min(count, PAGE_SIZE);
+	ssize_t ret = 0;
+	int err = 0;
+	char *tmp;
+
+	tmp = kmalloc(alloc_size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	while (count) {
+		size_t c = min_t(size_t, count, alloc_size);
+
+		memcpy_fromio(tmp, src, c);
+		if (copy_to_user(buf, tmp, c)) {
+			err = -EFAULT;
+			break;
+		}
+
+		src += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(tmp);
+
+	return ret ? ret : err;
+}
+
+static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
+				     loff_t pos)
+{
+	const char *src = info->screen_buffer + pos;
+
+	if (copy_to_user(buf, src, count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	size_t total_size;
+	ssize_t ret;
+
+	if (info->screen_size)
+		total_size = info->screen_size;
+	else
+		total_size = info->fix.smem_len;
+
+	if (pos >= total_size)
+		return 0;
+	if (count >= total_size)
+		count = total_size;
+	if (total_size - count < pos)
+		count = total_size - pos;
+
+	if (drm_fbdev_use_iomem(info))
+		ret = fb_read_screen_base(info, buf, count, pos);
+	else
+		ret = fb_read_screen_buffer(info, buf, count, pos);
+
+	if (ret > 0)
+		*ppos += ret;
+
+	return ret;
+}
+
+static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
+				    loff_t pos)
+{
+	char __iomem *dst = info->screen_base + pos;
+	size_t alloc_size = min(count, PAGE_SIZE);
+	ssize_t ret = 0;
+	int err = 0;
+	u8 *tmp;
+
+	tmp = kmalloc(alloc_size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	while (count) {
+		size_t c = min_t(size_t, count, alloc_size);
+
+		if (copy_from_user(tmp, buf, c)) {
+			err = -EFAULT;
+			break;
+		}
+		memcpy_toio(dst, tmp, c);
+
+		dst += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(tmp);
+
+	return ret ? ret : err;
+}
+
+static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
+				      loff_t pos)
+{
+	char *dst = info->screen_buffer + pos;
+
+	if (copy_from_user(dst, buf, count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	size_t total_size;
+	ssize_t ret;
+	int err = 0;
+
+	if (info->screen_size)
+		total_size = info->screen_size;
+	else
+		total_size = info->fix.smem_len;
+
+	if (pos > total_size)
+		return -EFBIG;
+	if (count > total_size) {
+		err = -EFBIG;
+		count = total_size;
+	}
+	if (total_size - count < pos) {
+		if (!err)
+			err = -ENOSPC;
+		count = total_size - pos;
+	}
+
+	/*
+	 * Copy to framebuffer even if we already logged an error. Emulates
+	 * the behavior of the original fbdev implementation.
+	 */
+	if (drm_fbdev_use_iomem(info))
+		ret = fb_write_screen_base(info, buf, count, pos);
+	else
+		ret = fb_write_screen_buffer(info, buf, count, pos);
+
+	if (ret > 0)
+		*ppos += ret;
+
+	return ret ? ret : err;
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_fillrect(info, rect);
+	else
+		drm_fb_helper_sys_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *area)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_copyarea(info, area);
+	else
+		drm_fb_helper_sys_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_imageblit(info, image);
+	else
+		drm_fb_helper_sys_imageblit(info, image);
+}
+
 static const struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2226,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
 	 */
 	bool prefer_shadow_fbdev;
 
-	/**
-	 * @fbdev_use_iomem:
-	 *
-	 * Set to true if framebuffer reside in iomem.
-	 * When set to true memcpy_toio() is used when copying the framebuffer in
-	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
-	 *
-	 * FIXME: This should be replaced with a per-mapping is_iomem
-	 * flag (like ttm does), and then used everywhere in fbdev code.
-	 */
-	bool fbdev_use_iomem;
-
 	/**
 	 * @quirk_addfb_prefer_xbgr_30bpp:
 	 *
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18086.42964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseN-000413-PY; Tue, 03 Nov 2020 09:30:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18086.42964; Tue, 03 Nov 2020 09:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZseN-00040r-JN; Tue, 03 Nov 2020 09:30:47 +0000
Received: by outflank-mailman (input) for mailman id 18086;
 Tue, 03 Nov 2020 09:30:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kZseL-0003L5-FB
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7cf77d85-608b-470d-a40b-eb1c7ae134a0;
 Tue, 03 Nov 2020 09:30:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53345B19D;
 Tue,  3 Nov 2020 09:30:22 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xrPR=EJ=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kZseL-0003L5-FB
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:30:45 +0000
X-Inumbo-ID: 7cf77d85-608b-470d-a40b-eb1c7ae134a0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 7cf77d85-608b-470d-a40b-eb1c7ae134a0;
	Tue, 03 Nov 2020 09:30:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 53345B19D;
	Tue,  3 Nov 2020 09:30:22 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v7 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
Date: Tue,  3 Nov 2020 10:30:11 +0100
Message-Id: <20201103093015.1063-7-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201103093015.1063-1-tzimmermann@suse.de>
References: <20201103093015.1063-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch replaces the use of raw pointers in the GEM object functions'
vmap/vunmap interfaces with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.

TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
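
For reference, the type at the center of this conversion can be modelled in
userspace roughly as below. This is a simplified illustration of the kernel's
struct dma_buf_map from <linux/dma-buf-map.h>, not the real header: the kernel
version uses __iomem annotations and additional helpers, and the names here
mirror the kernel API only for clarity.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of the kernel's struct dma_buf_map: one pointer plus a
 * flag recording whether the mapping is in system or I/O memory. */
struct dma_buf_map {
	union {
		void *vaddr;        /* valid if !is_iomem */
		void *vaddr_iomem;  /* valid if is_iomem; __iomem in the kernel */
	};
	bool is_iomem;
};

/* Mirrors dma_buf_map_set_vaddr(): record a system-memory mapping. */
static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Mirrors dma_buf_map_is_null(): no mapping has been established. */
static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->vaddr == NULL;
}
```

Callers must check is_iomem before dereferencing, which is exactly what the
TODO comments added by this patch ("Use mapping abstraction properly") defer.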

v7:
	* init QXL cursor to mapped BO buffer (kernel test robot)
v5:
	* update vkms after switch to shmem
v4:
	* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
	* fix a trailing { in drm_gem_vmap()
	* remove several empty functions instead of converting them (Daniel)
	* comment uses of raw pointers with a TODO (Daniel)
	* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 Documentation/gpu/todo.rst                  |  18 ++++
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 -------
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +++--
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/drm_gem.c                   |  23 +++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  10 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 +++++----
 drivers/gpu/drm/drm_gem_vram_helper.c       | 107 ++++++++++----------
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   9 +-
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 ----
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +--
 drivers/gpu/drm/qxl/qxl_display.c           |  15 ++-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 ++-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 +++---
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +--
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 ----
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 ++--
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   6 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 ++-
 drivers/gpu/drm/vkms/vkms_plane.c           |  15 ++-
 drivers/gpu/drm/vkms/vkms_writeback.c       |  22 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   2 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_vram_helper.h           |  14 +--
 49 files changed, 349 insertions(+), 308 deletions(-)
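
The pattern repeated across the backends below: a vmap callback that used to
return a raw pointer (or an ERR_PTR-encoded error) now fills in a
caller-provided struct dma_buf_map and returns an errno-style int. A userspace
sketch of the new calling convention, with hypothetical names (demo_bo,
demo_vmap) and a simplified dma_buf_map standing in for the kernel types:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

/* Hypothetical backend buffer object, for illustration only. */
struct demo_bo {
	void *backing;  /* kernel virtual address of the backing store */
};

/* New-style vmap: report the mapping through *map, return 0 or -errno,
 * instead of returning a raw pointer. */
static int demo_vmap(struct demo_bo *bo, struct dma_buf_map *map)
{
	if (!bo->backing)
		return -ENOMEM;
	map->vaddr = bo->backing;
	map->is_iomem = false;  /* system memory in this sketch */
	return 0;
}
```

Error handling in callers changes accordingly: instead of IS_ERR()/PTR_ERR()
on the returned pointer, they test the int return value, as the ast and lima
hunks below show.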

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 6b224ef14455..59f63f1d7680 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -450,6 +450,24 @@ Contact: Ville Syrjälä, Daniel Vetter
 
 Level: Intermediate
 
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
 
 Core refactorings
 =================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 64376dd298ed..f5c7aa7894d5 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -232,6 +232,7 @@ config DRM_RADEON
 	select FW_LOADER
         select DRM_KMS_HELPER
         select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
@@ -252,6 +253,7 @@ config DRM_AMDGPU
 	select DRM_KMS_HELPER
 	select DRM_SCHED
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
 #include <linux/dma-fence-array.h>
 #include <linux/pci-p2pdma.h>
 
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 /**
  * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
  * @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
 bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
 				      struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 8ea6fc745769..a6142ede78fb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
 
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
 
 #include "amdgpu.h"
 #include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
 	.open = amdgpu_gem_object_open,
 	.close = amdgpu_gem_object_close,
 	.export = amdgpu_gem_prime_export,
-	.vmap = amdgpu_gem_prime_vmap,
-	.vunmap = amdgpu_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
 	struct amdgpu_bo		*parent;
 	struct amdgpu_bo		*shadow;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	struct amdgpu_mn		*mn;
 
 
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	void __iomem *vaddr;
+	struct dma_buf_map map;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		vaddr = drm_gem_vram_vmap(gbo);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
+		ret = drm_gem_vram_vmap(gbo, &map);
+		if (ret) {
 			drm_gem_vram_unpin(gbo);
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
 
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.vaddr[i] = vaddr;
+		ast->cursor.map[i] = map;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo;
+	struct dma_buf_map map;
 	int ret;
 	void *src;
 	void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	ret = drm_gem_vram_pin(gbo, 0);
 	if (ret)
 		return ret;
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret)
 		goto err_drm_gem_vram_unpin;
-	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
 
 	/* do data transfer to cursor BO */
 	update_cursor_image(dst, src, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 	drm_gem_vram_unpin(gbo);
 
 	return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 __iomem *sig;
 	u8 jreg;
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
 #ifndef __AST_DRV_H__
 #define __AST_DRV_H__
 
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
 
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index d586068f5509..4231fda26e70 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
 #include <linux/pagemap.h>
 #include <linux/shmem_fs.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
 #include <linux/mem_encrypt.h>
 #include <linux/pagevec.h>
 
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
-	else
-		vaddr = ERR_PTR(-EOPNOTSUPP);
+	if (!obj->funcs->vmap)
+		return ERR_PTR(-EOPNOTSUPP);
 
-	if (!vaddr)
-		vaddr = ERR_PTR(-ENOMEM);
+	ret = obj->funcs->vmap(obj, &map);
+	if (ret)
+		return ERR_PTR(ret);
+	else if (dma_buf_map_is_null(&map))
+		return ERR_PTR(-ENOMEM);
 
-	return vaddr;
+	return map.vaddr;
 }
 
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
 	if (!vaddr)
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, vaddr);
+		obj->funcs->vunmap(obj, &map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * driver's &drm_gem_object_funcs.vmap callback.
  *
  * Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
-	return cma_obj->vaddr;
+	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8233bda4692f..499189c48f0b 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0)
-		return shmem->vaddr;
+	if (shmem->vmap_use_count++ > 0) {
+		dma_buf_map_set_vaddr(map, shmem->vaddr);
+		return 0;
+	}
 
 	if (obj->import_attach) {
-		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
-		if (!ret)
-			shmem->vaddr = map.vaddr;
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+		if (!ret) {
+			if (WARN_ON(map->is_iomem)) {
+				ret = -EIO;
+				goto err_put_pages;
+			}
+			shmem->vaddr = map->vaddr;
+		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 				    VM_MAP, prot);
 		if (!shmem->vaddr)
 			ret = -ENOMEM;
+		else
+			dma_buf_map_set_vaddr(map, shmem->vaddr);
 	}
 
 	if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
-	return shmem->vaddr;
+	return 0;
 
 err_put_pages:
 	if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
  *
  * This function makes sure that a contiguous kernel virtual address mapping
  * exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	void *vaddr;
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
-		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+		return ret;
+	ret = drm_gem_shmem_vmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 
-	return vaddr;
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+					struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
 
 	if (WARN_ON_ONCE(!shmem->vmap_use_count))
 		return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 		return;
 
 	if (obj->import_attach)
-		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	else
 		vunmap(shmem->vaddr);
 
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
  * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * also be called by drivers directly, in which case it will hide the
  * differences between dma-buf imported and natively allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index e305fadb8bc8..5e1b09ba5f2b 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 
 #include <drm/drm_debugfs.h>
@@ -112,8 +113,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
 	 * up; only release the GEM object.
 	 */
 
-	WARN_ON(gbo->kmap_use_count);
-	WARN_ON(gbo->kmap.virtual);
+	WARN_ON(gbo->vmap_use_count);
+	WARN_ON(dma_buf_map_is_set(&gbo->map));
 
 	drm_gem_object_release(&gbo->bo.base);
 }
@@ -378,29 +379,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+				    struct dma_buf_map *map)
 {
 	int ret;
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
-	bool is_iomem;
 
-	if (gbo->kmap_use_count > 0)
+	if (gbo->vmap_use_count > 0)
 		goto out;
 
-	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 out:
-	++gbo->kmap_use_count;
-	return ttm_kmap_obj_virtual(kmap, &is_iomem);
+	++gbo->vmap_use_count;
+	*map = gbo->map;
+
+	return 0;
 }
 
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+				       struct dma_buf_map *map)
 {
-	if (WARN_ON_ONCE(!gbo->kmap_use_count))
+	struct drm_device *dev = gbo->bo.base.dev;
+
+	if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
 		return;
-	if (--gbo->kmap_use_count > 0)
+
+	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+		return; /* BUG: map not mapped from this BO */
+
+	if (--gbo->vmap_use_count > 0)
 		return;
 
 	/*
@@ -414,7 +423,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
- * @gbo:	The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -423,48 +434,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
  * unmap and unpin the GEM VRAM object.
  *
  * Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
-	void *base;
 
 	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo);
-	if (IS_ERR(base)) {
-		ret = PTR_ERR(base);
+	ret = drm_gem_vram_kmap_locked(gbo, map);
+	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
-	}
 
 	ttm_bo_unreserve(&gbo->bo);
 
-	return base;
+	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
 err_ttm_bo_unreserve:
 	ttm_bo_unreserve(&gbo->bo);
-	return ERR_PTR(ret);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
 /**
  * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo:	The GEM VRAM object to unmap
- * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  *
  * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
  * the documentation for drm_gem_vram_vmap() for more information.
  */
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
 
@@ -472,7 +479,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
 	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
 		return;
 
-	drm_gem_vram_kunmap_locked(gbo);
+	drm_gem_vram_kunmap_locked(gbo, map);
 	drm_gem_vram_unpin_locked(gbo);
 
 	ttm_bo_unreserve(&gbo->bo);
@@ -563,15 +570,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
 					       bool evict,
 					       struct ttm_resource *new_mem)
 {
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	struct ttm_buffer_object *bo = &gbo->bo;
+	struct drm_device *dev = bo->base.dev;
 
-	if (WARN_ON_ONCE(gbo->kmap_use_count))
+	if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
 		return;
 
-	if (!kmap->virtual)
-		return;
-	ttm_bo_kunmap(kmap);
-	kmap->virtual = NULL;
+	ttm_bo_vunmap(bo, &gbo->map);
 }
 
 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -837,37 +842,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
 }
 
 /**
- * drm_gem_vram_object_vmap() - \
-	Implements &struct drm_gem_object_funcs.vmap
- * @gem:	The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ *	Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(base))
-		return NULL;
-	return base;
+	return drm_gem_vram_vmap(gbo, map);
 }
 
 /**
- * drm_gem_vram_object_vunmap() - \
-	Implements &struct drm_gem_object_funcs.vunmap
- * @gem:	The GEM object to unmap
- * @vaddr:	The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ *	Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  */
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
-				       void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
-	drm_gem_vram_vunmap(gbo, vaddr);
+	drm_gem_vram_vunmap(gbo, map);
 }
 
 /*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return etnaviv_gem_vmap(obj);
+	void *vaddr = etnaviv_gem_vmap(obj);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(obj);
 }
 
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
 	if (bo->heap_size)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
-	return drm_gem_shmem_vmap(obj);
+	return drm_gem_shmem_vmap(obj, map);
 }
 
 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
 
+#include <linux/dma-buf-map.h>
 #include <linux/kthread.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 	struct lima_dump_chunk_buffer *buffer_chunk;
 	u32 size, task_size, mem_size;
 	int i;
+	struct dma_buf_map map;
+	int ret;
 
 	mutex_lock(&dev->error_task_list_lock);
 
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			data = drm_gem_shmem_vmap(&bo->base.base);
-			if (IS_ERR_OR_NULL(data)) {
+			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+			if (ret) {
 				kvfree(et);
 				goto out;
 			}
 
-			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_shmem_vunmap(&bo->base.base, data);
+			drm_gem_shmem_vunmap(&bo->base.base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
 		      struct drm_rect *clip)
 {
 	struct drm_device *dev = &mdev->base;
+	struct dma_buf_map map;
 	void *vmap;
+	int ret;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (drm_WARN_ON(dev, !vmap))
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (drm_WARN_ON(dev, ret))
 		return; /* BUG: SHMEM BO should always be vmapped */
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 
 	/* Always scanout image at VRAM offset 0 */
 	mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
 	select FW_LOADER
 	select DRM_KMS_HELPER
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
 	select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
 	select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
 	unsigned mode;
 
 	struct nouveau_drm_tile *tile;
-
-	struct ttm_bo_kmap_obj dma_buf_vmap;
 };
 
 static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index dd51cd0ae20c..787d05eefd9c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
  *
  */
 
+#include <drm/drm_gem_ttm_helper.h>
+
 #include "nouveau_drv.h"
 #include "nouveau_dma.h"
 #include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
 	.pin = nouveau_gem_prime_pin,
 	.unpin = nouveau_gem_prime_unpin,
 	.get_sg_table = nouveau_gem_prime_get_sg_table,
-	.vmap = nouveau_gem_prime_vmap,
-	.vunmap = nouveau_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
-			  &nvbo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
-	ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
 struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 							 struct dma_buf_attachment *attach,
 							 struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
 #include <drm/drm_gem_shmem_helper.h>
 #include <drm/panfrost_drm.h>
 #include <linux/completion.h>
+#include <linux/dma-buf-map.h>
 #include <linux/iopoll.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map;
 	struct drm_gem_shmem_object *bo;
 	u32 cfg, as;
 	int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
-	if (IS_ERR(perfcnt->buf)) {
-		ret = PTR_ERR(perfcnt->buf);
+	ret = drm_gem_shmem_vmap(&bo->base, &map);
+	if (ret)
 		goto err_put_mapping;
-	}
+	perfcnt->buf = map.vaddr;
 
 	/*
 	 * Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
 
 	if (user != perfcnt->user)
 		return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..b783c3abdfc9 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
 
 #include <linux/crc32.h>
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_drv.h>
 #include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct drm_gem_object *obj;
 	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
+	struct dma_buf_map user_map;
+	struct dma_buf_map cursor_map;
 	void *user_ptr;
 	int size = 64*64*4;
 
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framebuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_map);
 		if (ret)
 			goto out_free_release;
+		user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 		ret = qxl_alloc_bo_reserved(qdev, release,
 					    sizeof(struct qxl_cursor) + size,
@@ -613,9 +617,13 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
 		if (ret)
 			goto out_backoff;
+		if (cursor_map.is_iomem) /* TODO: Use mapping abstraction properly */
+			cursor = (struct qxl_cursor __force *)cursor_map.vaddr_iomem;
+		else
+			cursor = (struct qxl_cursor *)cursor_map.vaddr;
 
 		cursor->header.unique = 0;
 		cursor->header.type = SPICE_CURSOR_TYPE_ALPHA;
@@ -1133,6 +1141,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
 	int ret;
 	struct drm_gem_object *gobj;
+	struct dma_buf_map map;
 	int monitors_config_size = sizeof(struct qxl_monitors_config) +
 		qxl_num_crtc * sizeof(struct qxl_head);
 
@@ -1149,7 +1158,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, &map);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
  * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 
 #include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 					      unsigned int num_clips,
 					      struct qxl_bo *clips_bo)
 {
+	struct dma_buf_map map;
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
-	if (ret) {
+	ret = qxl_bo_kmap(clips_bo, &map);
+	if (ret)
 		return NULL;
-	}
+	dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
 	dev_clips->num_rects = num_clips;
 	dev_clips->chunk.next_chunk = 0;
 	dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	int stride = fb->pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
 	int depth = fb->format->cpp[0] * 8;
+	struct dma_buf_map surface_map;
 	uint8_t *surface_base;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, &surface_map);
 	if (ret)
 		goto out_release_backoff;
+	surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	ret = qxl_image_init(qdev, release, dimage, surface_base,
 			     left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
  * Definitions taken from spice-protocol, plus kernel driver specific bits.
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/dma-fence.h>
 #include <linux/firmware.h>
 #include <linux/platform_device.h>
@@ -50,6 +51,8 @@
 
 #include "qxl_dev.h"
 
+struct dma_buf_map;
+
 #define DRIVER_AUTHOR		"Dave Airlie"
 
 #define DRIVER_NAME		"qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
 	/* Protected by tbo.reserved */
 	struct ttm_place		placements[3];
 	struct ttm_placement		placement;
-	struct ttm_bo_kmap_obj		kmap;
+	struct dma_buf_map		map;
 	void				*kptr;
 	unsigned int                    map_count;
 	int                             type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 547d46c14d56..ceebc5881f68 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
  *          Alon Levy
  */
 
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
 #include "qxl_drv.h"
 #include "qxl_object.h"
 
-#include <linux/io-mapping.h>
 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
 {
 	struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
-		if (ptr)
-			*ptr = bo->kptr;
 		bo->map_count++;
-		return 0;
+		goto out;
 	}
-	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+	r = ttm_bo_vmap(&bo->tbo, &bo->map);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	if (ptr)
-		*ptr = bo->kptr;
 	bo->map_count = 1;
+
+	/* TODO: Remove kptr in favor of map everywhere. */
+	if (bo->map.is_iomem)
+		bo->kptr = (void *)bo->map.vaddr_iomem;
+	else
+		bo->kptr = bo->map.vaddr;
+
+out:
+	*map = bo->map;
 	return 0;
 }
 
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 	void *rptr;
 	int ret;
 	struct io_mapping *map;
+	struct dma_buf_map bo_map;
 
 	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
 		map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &bo_map);
 	if (ret)
 		return NULL;
+	rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	rptr += page_offset * PAGE_SIZE;
 	return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
 	if (bo->map_count > 0)
 		return;
 	bo->kptr = NULL;
-	ttm_bo_kunmap(&bo->kmap);
+	ttm_bo_vunmap(&bo->tbo, &bo->map);
 }
 
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
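The qxl hunks above show the core idea of the conversion: `qxl_bo_kmap()` no longer hands back a bare `void *` but fills a `struct dma_buf_map`, a tagged union that records whether the mapping lives in system memory or I/O memory. The following is a standalone userspace sketch of that pattern, not the kernel header itself; the `sketch_` names are hypothetical stand-ins for `dma_buf_map`, `dma_buf_map_set_vaddr()`, and an errno-returning kmap.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical userspace model of struct dma_buf_map: the union keeps
 * one slot per address space, and is_iomem says which one is valid, so
 * callers stop guessing from a bare void pointer. */
struct sketch_map {
	union {
		void *vaddr;       /* system memory */
		void *vaddr_iomem; /* I/O memory (__iomem in the kernel) */
	};
	bool is_iomem;
};

/* Mirrors dma_buf_map_set_vaddr(): mark the mapping as system memory. */
static void sketch_map_set_vaddr(struct sketch_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* New-style kmap: report success as an errno-style int and return the
 * mapping through the out-parameter, instead of NULL or ERR_PTR(). */
static int sketch_kmap(void *backing, struct sketch_map *map)
{
	if (!backing)
		return -ENOMEM;
	sketch_map_set_vaddr(map, backing);
	return 0;
}
```

Callers that still need a plain pointer branch on `is_iomem`, exactly as the converted `qxl_cursor_atomic_update()` does; the "TODO: Use mapping abstraction properly" comments in the patch mark those interim call sites.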
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
-	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, map);
 	if (ret < 0)
-		return ERR_PTR(ret);
+		return ret;
 
-	return ptr;
+	return 0;
 }
 
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
 	/* Constant after initialization */
 	struct radeon_device		*rdev;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	pid_t				pid;
 
 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
 #include <drm/drm_debugfs.h>
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
 #include <drm/radeon_drm.h>
 
 #include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
 struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 static const struct drm_gem_object_funcs radeon_gem_object_funcs;
 
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
 	.pin = radeon_gem_prime_pin,
 	.unpin = radeon_gem_prime_unpin,
 	.get_sg_table = radeon_gem_prime_get_sg_table,
-	.vmap = radeon_gem_prime_vmap,
-	.vunmap = radeon_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct dma_buf_attachment *attach,
 							struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
 	return ERR_PTR(ret);
 }
 
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
-	if (rk_obj->pages)
-		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
-			    pgprot_writecombine(PAGE_KERNEL));
+	if (rk_obj->pages) {
+		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+				  pgprot_writecombine(PAGE_KERNEL));
+		if (!vaddr)
+			return -ENOMEM;
+		dma_buf_map_set_vaddr(map, vaddr);
+		return 0;
+	}
 
 	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
-		return NULL;
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
 
-	return rk_obj->kvaddr;
+	return 0;
 }
 
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		vunmap(vaddr);
+		vunmap(map->vaddr);
 		return;
 	}
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
 rockchip_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm driver mmap file operations */
 int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
  */
 
 #include <linux/console.h>
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 			       struct drm_rect *rect)
 {
 	struct cirrus_device *cirrus = to_cirrus(fb->dev);
+	struct dma_buf_map map;
 	void *vmap;
 	int idx, ret;
 
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	if (!drm_dev_enter(&cirrus->dev, &idx))
 		goto out;
 
-	ret = -ENOMEM;
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (!vmap)
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret)
 		goto out_dev_exit;
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (cirrus->cpp == fb->format->cpp[0])
 		drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	else
 		WARN_ON_ONCE("cpp mismatch");
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 	ret = 0;
 
 out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 {
 	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
 	struct drm_framebuffer *fb;
+	struct dma_buf_map map;
 	void *vaddr;
 	u8 *src;
 
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
-		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
+		GM12U320_ERR("failed to vmap fb: %d\n", ret);
 		goto put_fb;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (fb->obj[0]->import_attach) {
 		ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
 	}
 vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 put_fb:
 	drm_framebuffer_put(fb);
 	gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	struct urb *urb;
 	struct drm_rect clip;
 	int log_bpp;
+	struct dma_buf_map map;
 	void *vaddr;
 
 	ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 			return ret;
 	}
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
 		DRM_ERROR("failed to vmap fb\n");
 		goto out_dma_buf_end_cpu_access;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	urb = udl_get_urb(dev);
 	if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	ret = 0;
 
 out_drm_gem_shmem_vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 out_dma_buf_end_cpu_access:
 	if (import_attach) {
 		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
  *          Michael Thayer <michael.thayer@oracle.com,
  *          Hans de Goede <hdegoede@redhat.com>
  */
+
+#include <linux/dma-buf-map.h>
 #include <linux/export.h>
 
 #include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
 	u32 flags;
+	struct dma_buf_map map;
+	int ret;
 	u8 *src;
 
 	/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret) {
 		/*
 		 * BUG: we should have pinned the BO in prepare_fb().
 		 */
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
 	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	/*
 	 * The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
 	if (bo->validated_shader) {
 		DRM_DEBUG("mmapping of shader BOs not allowed.\n");
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, map);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 7003e7f14a48..b4409ead8f55 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
 	struct page **pages;
+	void *vaddr;
 
 	pages = vgem_pin_pages(bo);
 	if (IS_ERR(pages))
-		return NULL;
+		return PTR_ERR(pages);
+
+	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
 
-	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	return 0;
 }
 
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 	vgem_unpin_pages(bo);
 }
 
diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
index 9890137bcb8d..0824327cc860 100644
--- a/drivers/gpu/drm/vkms/vkms_plane.c
+++ b/drivers/gpu/drm/vkms/vkms_plane.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0+
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
@@ -146,15 +148,16 @@ static int vkms_prepare_fb(struct drm_plane *plane,
 			   struct drm_plane_state *state)
 {
 	struct drm_gem_object *gem_obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (!state->fb)
 		return 0;
 
 	gem_obj = drm_gem_fb_get_obj(state->fb, 0);
-	vaddr = drm_gem_shmem_vmap(gem_obj);
-	if (IS_ERR(vaddr))
-		DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(gem_obj, &map);
+	if (ret)
+		DRM_ERROR("vmap failed: %d\n", ret);
 
 	return drm_gem_fb_prepare_fb(plane, state);
 }
@@ -164,13 +167,15 @@ static void vkms_cleanup_fb(struct drm_plane *plane,
 {
 	struct drm_gem_object *gem_obj;
 	struct drm_gem_shmem_object *shmem_obj;
+	struct dma_buf_map map;
 
 	if (!old_state->fb)
 		return;
 
 	gem_obj = drm_gem_fb_get_obj(old_state->fb, 0);
 	shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0));
-	drm_gem_shmem_vunmap(gem_obj, shmem_obj->vaddr);
+	dma_buf_map_set_vaddr(&map, shmem_obj->vaddr);
+	drm_gem_shmem_vunmap(gem_obj, &map);
 }
 
 static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = {
diff --git a/drivers/gpu/drm/vkms/vkms_writeback.c b/drivers/gpu/drm/vkms/vkms_writeback.c
index 26b903926872..67f80ab1e85f 100644
--- a/drivers/gpu/drm/vkms/vkms_writeback.c
+++ b/drivers/gpu/drm/vkms/vkms_writeback.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0+
 
-#include "vkms_drv.h"
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 #include <drm/drm_writeback.h>
 #include <drm/drm_probe_helper.h>
@@ -8,6 +9,8 @@
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_gem_shmem_helper.h>
 
+#include "vkms_drv.h"
+
 static const u32 vkms_wb_formats[] = {
 	DRM_FORMAT_XRGB8888,
 };
@@ -65,19 +68,20 @@ static int vkms_wb_prepare_job(struct drm_writeback_connector *wb_connector,
 			       struct drm_writeback_job *job)
 {
 	struct drm_gem_object *gem_obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (!job->fb)
 		return 0;
 
 	gem_obj = drm_gem_fb_get_obj(job->fb, 0);
-	vaddr = drm_gem_shmem_vmap(gem_obj);
-	if (IS_ERR(vaddr)) {
-		DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr));
-		return PTR_ERR(vaddr);
+	ret = drm_gem_shmem_vmap(gem_obj, &map);
+	if (ret) {
+		DRM_ERROR("vmap failed: %d\n", ret);
+		return ret;
 	}
 
-	job->priv = vaddr;
+	job->priv = map.vaddr;
 
 	return 0;
 }
@@ -87,12 +91,14 @@ static void vkms_wb_cleanup_job(struct drm_writeback_connector *connector,
 {
 	struct drm_gem_object *gem_obj;
 	struct vkms_device *vkmsdev;
+	struct dma_buf_map map;
 
 	if (!job->fb)
 		return;
 
 	gem_obj = drm_gem_fb_get_obj(job->fb, 0);
-	drm_gem_shmem_vunmap(gem_obj, job->priv);
+	dma_buf_map_set_vaddr(&map, job->priv);
+	drm_gem_shmem_vunmap(gem_obj, &map);
 
 	vkmsdev = drm_device_to_vkms_device(gem_obj->dev);
 	vkms_set_composer(&vkmsdev->output, false);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	void *vaddr;
 
 	if (!xen_obj->pages)
-		return NULL;
+		return -ENOMEM;
 
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
-	return vmap(xen_obj->pages, xen_obj->num_pages,
-		    VM_MAP, PAGE_KERNEL);
+	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+		     VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr)
+				    struct dma_buf_map *map)
 {
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 }
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
 #define __XEN_DRM_FRONT_GEM_H
 
 struct dma_buf_attachment;
+struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
 struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr);
+				    struct dma_buf_map *map);
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
 				 struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
 
 #include <drm/drm_vma_manager.h>
 
+struct dma_buf_map;
 struct drm_gem_object;
 
 /**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+#include <linux/dma-buf-map.h>
 #include <linux/kernel.h> /* for container_of() */
 
 struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
 
 /**
  * struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem:	GEM object
  * @bo:		TTM buffer object
- * @kmap:	Mapping information for @bo
+ * @map:	Mapping information for @bo
  * @placement:	TTM placement information. Supported placements are \
 	%TTM_PL_VRAM and %TTM_PL_SYSTEM
  * @placements:	TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
  */
 struct drm_gem_vram_object {
 	struct ttm_buffer_object bo;
-	struct ttm_bo_kmap_obj kmap;
+	struct dma_buf_map map;
 
 	/**
-	 * @kmap_use_count:
+	 * @vmap_use_count:
 	 *
 	 * Reference count on the virtual address.
 	 * The address are un-mapped when the count reaches zero.
 	 */
-	unsigned int kmap_use_count;
+	unsigned int vmap_use_count;
 
 	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
 	struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 09:46:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 09:46:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18133.42975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZstt-0005bH-Jp; Tue, 03 Nov 2020 09:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18133.42975; Tue, 03 Nov 2020 09:46:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZstt-0005bA-Gn; Tue, 03 Nov 2020 09:46:49 +0000
Received: by outflank-mailman (input) for mailman id 18133;
 Tue, 03 Nov 2020 09:46:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ki2W=EJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kZsts-0005aR-7f
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 09:46:48 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e90bc63e-d2db-4c3d-a3d9-67183df2d1c2;
 Tue, 03 Nov 2020 09:46:45 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C586667373; Tue,  3 Nov 2020 10:46:43 +0100 (CET)
X-Inumbo-ID: e90bc63e-d2db-4c3d-a3d9-67183df2d1c2
Date: Tue, 3 Nov 2020 10:46:43 +0100
From: Christoph Hellwig <hch@lst.de>
To: konrad.wilk@oracle.com
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-5.10] swiotlb: remove the tbl_dma_addr argument to
 swiotlb_tbl_map_single
Message-ID: <20201103094643.GA18936@lst.de>
References: <20201023063309.3472987-1-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201023063309.3472987-1-hch@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

ping?

On Fri, Oct 23, 2020 at 08:33:09AM +0200, Christoph Hellwig wrote:
> The tbl_dma_addr argument is used to check the DMA boundary for the
> allocations, and thus needs to be a dma_addr_t.  swiotlb-xen instead
> passed a physical address, which could lead to incorrect results for
> strange offsets.  Fix this by removing the parameter entirely and
> hard-coding the DMA address for io_tlb_start instead.
> 
> Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
>  drivers/iommu/intel/iommu.c |  5 ++---
>  drivers/xen/swiotlb-xen.c   |  3 +--
>  include/linux/swiotlb.h     | 10 +++-------
>  kernel/dma/swiotlb.c        | 16 ++++++----------
>  4 files changed, 12 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 8651f6d4dfa032..6b560e6f193096 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
>  	 * page aligned, we don't need to use a bounce page.
>  	 */
>  	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
> -		tlb_addr = swiotlb_tbl_map_single(dev,
> -				phys_to_dma_unencrypted(dev, io_tlb_start),
> -				paddr, size, aligned_size, dir, attrs);
> +		tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
> +				aligned_size, dir, attrs);
>  		if (tlb_addr == DMA_MAPPING_ERROR) {
>  			goto swiotlb_error;
>  		} else {
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 71ce1b7a23d1cc..2b385c1b4a99cb 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>  	 */
>  	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
>  
> -	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
> -				     phys, size, size, dir, attrs);
> +	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
>  	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
>  		return DMA_MAPPING_ERROR;
>  
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 513913ff748626..3bb72266a75a1d 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -45,13 +45,9 @@ enum dma_sync_target {
>  	SYNC_FOR_DEVICE = 1,
>  };
>  
> -extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> -					  dma_addr_t tbl_dma_addr,
> -					  phys_addr_t phys,
> -					  size_t mapping_size,
> -					  size_t alloc_size,
> -					  enum dma_data_direction dir,
> -					  unsigned long attrs);
> +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> +		size_t mapping_size, size_t alloc_size,
> +		enum dma_data_direction dir, unsigned long attrs);
>  
>  extern void swiotlb_tbl_unmap_single(struct device *hwdev,
>  				     phys_addr_t tlb_addr,
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index b4eea0abc3f002..92e2f54f24c01b 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -441,14 +441,11 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
>  	}
>  }
>  
> -phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> -				   dma_addr_t tbl_dma_addr,
> -				   phys_addr_t orig_addr,
> -				   size_t mapping_size,
> -				   size_t alloc_size,
> -				   enum dma_data_direction dir,
> -				   unsigned long attrs)
> +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> +		size_t mapping_size, size_t alloc_size,
> +		enum dma_data_direction dir, unsigned long attrs)
>  {
> +	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
>  	unsigned long flags;
>  	phys_addr_t tlb_addr;
>  	unsigned int nslots, stride, index, wrap;
> @@ -667,9 +664,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
>  	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
>  			      swiotlb_force);
>  
> -	swiotlb_addr = swiotlb_tbl_map_single(dev,
> -			phys_to_dma_unencrypted(dev, io_tlb_start),
> -			paddr, size, size, dir, attrs);
> +	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
> +			attrs);
>  	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
>  		return DMA_MAPPING_ERROR;
>  
> -- 
> 2.28.0
> 
> _______________________________________________
> iommu mailing list
> iommu@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
---end quoted text---


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:01:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18141.42988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZt7X-0007ND-N8; Tue, 03 Nov 2020 10:00:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18141.42988; Tue, 03 Nov 2020 10:00:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZt7X-0007N6-K4; Tue, 03 Nov 2020 10:00:55 +0000
Received: by outflank-mailman (input) for mailman id 18141;
 Tue, 03 Nov 2020 10:00:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RD1Y=EJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kZt7W-0007N1-Dr
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:00:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d9fd4aa-2df8-4af0-a04e-9ca1e8fb4547;
 Tue, 03 Nov 2020 10:00:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kZt7Q-00072x-Lu; Tue, 03 Nov 2020 10:00:48 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kZt7Q-0002iL-Ai; Tue, 03 Nov 2020 10:00:48 +0000
X-Inumbo-ID: 8d9fd4aa-2df8-4af0-a04e-9ca1e8fb4547
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=uP2W4gg5bUqMmDSdmcX7RGZycNlWhY3WAnjFDfiGHMg=; b=L6TXfZKWRvZISHxjF4GOInxMLT
	MjCX4vkf0flN21X6LC6PxA71OsMKBSs8+e4xuYuAXFiDjveBQQ/p7+k/aOOFWMd4JJ2pffV8+1XST
	k2lUfRMIdlqVMIWFemzZK1OPrsevqGwsCfOIt2Z5xXuWGjWsF8LyAGoobBLpZP74Wc8A=;
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
From: Julien Grall <julien@xen.org>
Message-ID: <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
Date: Tue, 3 Nov 2020 10:00:46 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Frédéric,

On 31/10/2020 15:14, Frédéric Pierret (fepitre) wrote:
> ---
>   xen/Makefile | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/xen/Makefile b/xen/Makefile
> index 30b1847515..4cc35556ef 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>   export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>   -include xen-version
>   
> +export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)

It is possible to download a tarball for a Xen release (see [1]). Such 
tarballs don't contain the .git directory, so this command would fail.

Should we fall back to "date" in this case?

> +
>   export XEN_WHOAMI	?= $(USER)
>   export XEN_DOMAIN	?= $(shell ([ -x /bin/dnsdomainname ] && /bin/dnsdomainname) || ([ -x /bin/domainname ] && /bin/domainname || echo [unknown]))
>   ifneq ($(SOURCE_DATE_EPOCH),)
> 

Cheers,

[1] https://xenproject.org/downloads/
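A fallback along the lines suggested above could be sketched in shell as follows. This is a hypothetical illustration of the idea, not the actual xen/Makefile change:

```shell
# Hypothetical sketch: prefer the last git commit timestamp, and fall
# back to the current time when the tree has no .git directory (e.g. a
# release tarball), so SOURCE_DATE_EPOCH is always populated.
epoch="$(git log -1 --format=%ct 2>/dev/null || true)"
[ -n "$epoch" ] || epoch="$(date +%s)"
export SOURCE_DATE_EPOCH="$epoch"
echo "SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH"
```

Note that falling back to `date` trades reproducibility for a build that always has a timestamp; for reproducible-builds purposes an empty (unset) value may actually be preferable, which is the behaviour discussed later in the thread.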

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:04:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:04:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18146.42999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtAh-0007Xu-75; Tue, 03 Nov 2020 10:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18146.42999; Tue, 03 Nov 2020 10:04:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtAh-0007Xn-43; Tue, 03 Nov 2020 10:04:11 +0000
Received: by outflank-mailman (input) for mailman id 18146;
 Tue, 03 Nov 2020 10:04:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtAf-0007Xi-My
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:04:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 642a1675-042a-4447-a37c-ae9add388907;
 Tue, 03 Nov 2020 10:04:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5655FAC97;
 Tue,  3 Nov 2020 10:04:07 +0000 (UTC)
X-Inumbo-ID: 642a1675-042a-4447-a37c-ae9add388907
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604397847;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b0xc2v+PYfbZu2imG7C1mJGgJmyzK+56mOxeutQXxpg=;
	b=bAgcDmMM2g+Emdg5ZIh9wDYEkNdpKnONdXxtT1/9Vgqlf+TEZ3/RvA6lK7edj7OuaZ4wkq
	r3qoN+BiVp7r4YkWR47/Hq/jNm1CIwrgn5HdZJOgYQB25rRKAmrsZfxc76KVq466neom5f
	rSnYazf1dW37cz1y84B4oB6N9NrjwaU=
Subject: Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201102131239.14134-1-jgross@suse.com>
 <20201102131239.14134-3-jgross@suse.com>
 <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
 <890b6547-ca4f-b195-6b9d-9078ba35c357@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fe41300c-3013-73ae-7ffa-7cd36705d0c2@suse.com>
Date: Tue, 3 Nov 2020 11:04:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <890b6547-ca4f-b195-6b9d-9078ba35c357@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.11.2020 10:22, Jürgen Groß wrote:
> On 03.11.20 10:02, Jan Beulich wrote:
>> On 02.11.2020 14:12, Juergen Gross wrote:
>>> --- a/xen/include/xen/rwlock.h
>>> +++ b/xen/include/xen/rwlock.h
>>> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>>       u32 cnts;
>>>   
>>>       preempt_disable();
>>> +    check_lock(&lock->lock.debug, true);
>>>       cnts = atomic_read(&lock->cnts);
>>>       if ( likely(_can_read_lock(cnts)) )
>>>       {
>>
>> I'm sorry for being picky, but this still isn't matching
>> _spin_trylock(). Perhaps the difference is really benign, but
>> there the check sits ahead of preempt_disable(). (It has a
>> clear reason to be this way there, but without a clear reason
>> for things to be differently here, I think matching ordering
>> may help, at least to avoid questions.)
> 
> I think this is more of an optimization opportunity: I'd rather move the
> preempt_disable() into the first if clause, as there is no need to
> disable preemption when the first read of the lock already shows that
> acquiring it will fail.
> 
> If you want I can prepend a patch doing that optimization.

I'd appreciate you doing so, yet I'd like to point out that
then the same question remains for _write_trylock(). Perhaps
a similar transformation is possible there, but it'll at
least be more code churn. Which of course isn't a reason not
to go this route there too.

This said - wouldn't what you suggest be wrong if we had
actual preemption in the hypervisor? Preemption hitting
between e.g. these two lines

    cnts = atomic_read(&lock->cnts);
    if ( likely(_can_read_lock(cnts)) )

would not yield the intended result, would it? (It wouldn't
affect correctness afaics, because the caller has to be
prepared anyway that the attempt fails, but the amount of
effectively false negatives would grow, as would the number
of cases where the slower path is taken for no reason.)

Question therefore is how much we care about keeping code
ready for "real" preemption, when we have ample other places
that would need changing first, before such could be enabled.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:05:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:05:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18153.43012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtBz-0007f2-Il; Tue, 03 Nov 2020 10:05:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18153.43012; Tue, 03 Nov 2020 10:05:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtBz-0007ev-FB; Tue, 03 Nov 2020 10:05:31 +0000
Received: by outflank-mailman (input) for mailman id 18153;
 Tue, 03 Nov 2020 10:05:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtBy-0007eq-C0
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:05:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 36af6dd4-fd0b-4ded-bbb7-fa034a39a686;
 Tue, 03 Nov 2020 10:05:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 373E1ABDE;
 Tue,  3 Nov 2020 10:05:29 +0000 (UTC)
X-Inumbo-ID: 36af6dd4-fd0b-4ded-bbb7-fa034a39a686
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604397929;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=15Ql4n2Yymab8Emht2sqi1yTHDi/l/eT3pvMQcUPCaY=;
	b=svRUg5IcZy6Sus9ZXIRsgbTTWQq/oeEdthT1M5VS7EjVqBRT0K5CHMZOaqjpcCkJuiPEkc
	jm4kz79GX+1I4jRhkqbhTTtTZhb7Pf/6P9fs6uVokxxTHw8hP+acfRKGMIDhvP5zLOY6ZI
	f5Vv9ynLwzI6/iNPUDkAlZCWp+jnb+4=
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
Date: Tue, 3 Nov 2020 11:05:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.11.2020 11:00, Julien Grall wrote:
> Hi Frédéric,
> 
> On 31/10/2020 15:14, Frédéric Pierret (fepitre) wrote:
>> ---
>>   xen/Makefile | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/xen/Makefile b/xen/Makefile
>> index 30b1847515..4cc35556ef 100644
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>>   export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>>   -include xen-version
>>   
>> +export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)
> 
> It is possible to download a tarball for a Xen release (see [1]). Such 
> tarballs don't contain the .git directory, so this command would fail.
> 
> Should we fall back to "date" in this case?

Isn't this what already happens? The variable would be assigned
an empty value in this case, wouldn't it?
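The behaviour Jan describes can be checked with a small hypothetical shell experiment: a failing command inside a command substitution (and likewise inside Make's `$(shell ...)`) simply expands to the empty string, so the existing `ifneq ($(SOURCE_DATE_EPOCH),)` guard in xen/Makefile is skipped:

```shell
# A failing command's output substitution yields an empty string,
# mirroring what $(shell git log ...) produces in a tree without .git.
val="$(cat /nonexistent/path 2>/dev/null || true)"
echo "length=${#val}"
# prints: length=0
```

So the tarball case degrades gracefully to "variable unset/empty" rather than an error, which is why no explicit fallback is strictly required.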

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:06:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18160.43027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtDC-0007nk-Tq; Tue, 03 Nov 2020 10:06:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18160.43027; Tue, 03 Nov 2020 10:06:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtDC-0007nd-Qi; Tue, 03 Nov 2020 10:06:46 +0000
Received: by outflank-mailman (input) for mailman id 18160;
 Tue, 03 Nov 2020 10:06:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZtDC-0007mz-1a
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:06:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db784db4-22a8-4b63-9a5f-1d410bdd47e0;
 Tue, 03 Nov 2020 10:06:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZtD5-0007Ai-N7; Tue, 03 Nov 2020 10:06:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZtD5-0007B4-CO; Tue, 03 Nov 2020 10:06:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZtD5-0003oH-Bu; Tue, 03 Nov 2020 10:06:39 +0000
X-Inumbo-ID: db784db4-22a8-4b63-9a5f-1d410bdd47e0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p69p33bfWAgRRLMqXzUDfhZu2cU9FbjpvINoDO9wa3s=; b=RfoMXM0e2BahTgAQ58CIfDb6r1
	r9sYwm5KL5GZASyz8dtSsMUivscrRKzAXDAOVOgkkBuzjEXYr3rxTOjSUSBc2amBfWOJlVGe9Axtn
	02vcErxxX50sF9Tp5WfRVhbDrPSm5BvY5KYJIAETF1CdpaQtIVZyzPsmL1hFENunaxgY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156374-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156374: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0166dad49698fbe263759755382006d64a0ac825
X-Osstest-Versions-That:
    ovmf=ffddac3e0f2e0af54b48a86848193a5ad30def10
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 10:06:39 +0000

flight 156374 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156374/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0166dad49698fbe263759755382006d64a0ac825
baseline version:
 ovmf                 ffddac3e0f2e0af54b48a86848193a5ad30def10

Last test of basis   156359  2020-11-02 08:43:26 Z    1 days
Testing same since   156374  2020-11-03 01:55:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  fengyunhua <fengyunhua@byosoft.com.cn>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ffddac3e0f..0166dad496  0166dad49698fbe263759755382006d64a0ac825 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:11:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:11:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18169.43039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtHZ-0000FR-GL; Tue, 03 Nov 2020 10:11:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18169.43039; Tue, 03 Nov 2020 10:11:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtHZ-0000FK-D9; Tue, 03 Nov 2020 10:11:17 +0000
Received: by outflank-mailman (input) for mailman id 18169;
 Tue, 03 Nov 2020 10:11:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWmA=EJ=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kZtHX-0000FF-L0
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:11:15 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e64b63d-8b8f-479b-921e-2830258cfc95;
 Tue, 03 Nov 2020 10:11:14 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604398265597262.2454710694693;
 Tue, 3 Nov 2020 02:11:05 -0800 (PST)
X-Inumbo-ID: 2e64b63d-8b8f-479b-921e-2830258cfc95
ARC-Seal: i=1; a=rsa-sha256; t=1604398270; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=UI9NcwXVBDVYzRF18XAqtzn/fN60aCM2YyOOFo+Xu3UaWKkq0/v5Kaorq2p3K8fj79DKd6Ebi7R370wG0T/RtxszUj2aMXKYcQkqVXQlcDilSGv4Kn8Hcje+xeIrO6yzFzKhKL9CmKcSBqg4pR2f6ppdHQ9M+6I0DW4C9qnb12k=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604398270; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=NBUnu4eFKiYodaIYDWrN7A5S8fdkqH8j0CP5dcjuKFI=; 
	b=e/RNSBtibPxnKrzBjWhhTVTIrk/L1ElysRf9GlfhsxyMcr9m0xZGOJUhz1sC8zSkLD/hmYTF1OXIAR7R1/fk1fCLnUKIyQ8bJ/qws7oah7wYt5jKjtIt0KvsvvvBRbVbKMBAjVUWet/e7cfvQhzf/6grLSY7BQBFNIYy+3v1Lis=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604398270;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=NBUnu4eFKiYodaIYDWrN7A5S8fdkqH8j0CP5dcjuKFI=;
	b=ZMkE/ttfaMeXATVtSeFKomFa46uzn3iiQ6QlF/XD9IQF9VM1mkZMpl1OFXfy8GXJ
	LDmLekD131YVydpjad4zmUfaMz1GxiGY2ajoahqU8TjegU9Hc5SmRCYBqqYrNxTKkb9
	TBND+jfAPe2EHQPvOhrhHUhPzIcZcNWY7dtmUcuk=
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
 <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Message-ID: <89363871-0fb7-701b-f6e6-7b94c767618f@qubes-os.org>
Date: Tue, 3 Nov 2020 11:11:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="GWsyUHbKc3ZtpIUfRi3ZFrwKN3zMwKvAI"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--GWsyUHbKc3ZtpIUfRi3ZFrwKN3zMwKvAI
Content-Type: multipart/mixed; boundary="I8Tp5nlXdgO4uIHZEZlKIgz7bhuNpQyKE";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <89363871-0fb7-701b-f6e6-7b94c767618f@qubes-os.org>
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
 <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
In-Reply-To: <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>

--I8Tp5nlXdgO4uIHZEZlKIgz7bhuNpQyKE
Content-Type: multipart/mixed;
 boundary="------------162C3657FAB2C1BF50A93984"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------162C3657FAB2C1BF50A93984
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit


On 11/3/20 at 11:05 AM, Jan Beulich wrote:
> On 03.11.2020 11:00, Julien Grall wrote:
>> Hi Frédéric,
>>
Hi Julien,

>> On 31/10/2020 15:14, Frédéric Pierret (fepitre) wrote:
>>> ---
>>>    xen/Makefile | 2 ++
>>>    1 file changed, 2 insertions(+)
>>>
>>> diff --git a/xen/Makefile b/xen/Makefile
>>> index 30b1847515..4cc35556ef 100644
>>> --- a/xen/Makefile
>>> +++ b/xen/Makefile
>>> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>>>    export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>>>    -include xen-version
>>>
>>> +export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)
>>
>> It is possible to download a tarball for Xen release (see [1]). They
>> don't contain the .git directory and therefore this command would fail.
>>
>> Should we fallback to "date" in this case?
> 
> Isn't this what already happens? The variable would be assigned
> an empty value in this case, wouldn't it?

Julien, Jan: yes, it already falls back to "date" if the variable is
empty (that is the reason for the "2>/dev/null") in the other check of
whether SOURCE_DATE_EPOCH is defined. Maybe there is a more elegant way
to do this. Depending on what is wanted here, providing (or not) a
default value in the case of git sources could instead be documented,
as suggested previously.

> Jan
> 

Regards,
Frédéric

--------------162C3657FAB2C1BF50A93984
Content-Type: application/pgp-keys;
 name="OpenPGP_0x484010B5CDC576E2.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0x484010B5CDC576E2.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0D=
y2r
UVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8vtovi=
97s
WIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIvltoBrYnEl=
D1P
yp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpXgYzTN/kq8sxBM=
h2O
rQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PLw5koqPs/6JEIVI+t0=
pyg
+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZNbYRXzkI91HCt40X2bTb2=
jTK
gvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpaA/GPaJ5DjzV0q9mkYkGDLYI3J=
/J+
s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVirEVBum723MFH4DxhTrOoWgta2nyRHO=
oi0
z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvtLbYnlSt3v32nfUXh12aQPwU/LCGIzq4oF=
NVr
Np3aWPnSajLPpQARAQABzTxGcsOpZMOpcmljIFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpY=
y5w
aWVycmV0QHF1YmVzLW9zLm9yZz7CwXgEEwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWA=
gMB
Ah4BAheAAAoJEEhAELXNxXbiPLkQAI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZ=
Jnl
+Fb5HBgthU9lBdMqNySg+s8yekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ=
9uI
TprfEqX7V2OLbrYW94qwR8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04=
Uek
t5I2Zc8iRDF9kneINiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoD=
RC/
bsAow7cBudj+lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR=
9ze
2O5Yh+/BunrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2=
VHT
fLmAOt+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1cox=
FUw
eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBkob=
1Ep
fW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKbxM/Ny=
xHr
mNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCNHe8XNVqBp=
lV0
yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOmqpbSLbra8NP8M=
u5F
ZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8V+9+dpLEU75+mpHU7=
GlE
CfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M++ZmWfEV5nCP2qvzeYDGA=
L6B
bWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr5aNCaNpv/i4gLO1IScdfDwm6g=
dfB
2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hgYlDWdbImhNL/Z7iL3eayH7T9qAVNU=
587
MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpAH+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYH=
wuc
3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYDyhxVGbeWp820cb0s1f689XCXqFYAzTfCit+Ee=
boY
ORN5CGioXzS+z0S9IhPbdUuvqs7xvC248bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9a=
Pe/
Zw4PTKWvbJlcFioofLwTQE1XvWomFPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMA=
AoJ
EEhAELXNxXbilSkP/2NcazvUDGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrP=
X8h
jITWD9ZA2bbZZ+J+a39vyY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9x=
K8m
WXwhn90SHNadEf28ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q=
8aa
+G8iAkNJcb+Wx5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Ny=
f2M
KKbWRJntjy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJ=
Us/
geoCUBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1=
KjH
uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex1=
C+w
3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PXpm5Pw=
4st
VEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQQMhGv8Dnb=
AdO
OOnumAXWq0+wl5uP
=3DRWX1
-----END PGP PUBLIC KEY BLOCK-----

--------------162C3657FAB2C1BF50A93984--

--I8Tp5nlXdgO4uIHZEZlKIgz7bhuNpQyKE--

--GWsyUHbKc3ZtpIUfRi3ZFrwKN3zMwKvAI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+hLLUACgkQSEAQtc3F
duIYwBAApiqOPvN8zhoU4ZAgC4pnHdds/YLGmHYzgCOrcypSlN7jXJEcPpKb/lhx
75tSMJfokHl33hrIochleyaYYrwMPrwpa3akG4vb1D0kXY50Ps29ofdEOcQj1a4N
rSB18cyKzXpYCN769QpfA1osqstB0oGCtID40SOFC2Jbh7V6g4JvCQR7yrJDfMEb
BvYkCZiB/EZzq2mP4e6NNV5J+o3bDVHJcjQLLt8ZsPQ/R4xvmZzhikzb3RM/+r28
oFlrcwZQg8w5LifYCrQpKlAjLxQ+ogF/Vmu1PXqDZcANSed3bRYwO5NRJu+c8024
nATWM4uB2cfQFUappO5XVLejmkkztIrFut6lNYJuDMyGoV0KjfBdMzi8fFxNA3F+
vBHHvAHNupej7HTNr19hhKqRnGa/i8StJdfI2VNEJSj1HaXRbTkvYYs2efmh2w7p
kXIRyuptbXFuuLRacTFkJzUSoeznVbBDULIc6Rufn9Q80euY23j3rMLkb7h25kVe
oSLrW3zXMvkgDrpJa5RiJ9EzO4bjq76SEuDNBiIiZ9IRUJMvJpfbX8oUx3L7Qy1J
eLFvKoqVehnCBfXvzjusMfNoGGLnP1CUe0drNZMTTg9BEC2SqN3AuLnaEjWkTL2j
a7gjNUFgMvzJjUKRMIvctMCT79vY+gV7+B6B9Ul1Cc5q+MGOfsc=
=MV6O
-----END PGP SIGNATURE-----

--GWsyUHbKc3ZtpIUfRi3ZFrwKN3zMwKvAI--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:11:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:11:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18171.43050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtHv-0000LK-T5; Tue, 03 Nov 2020 10:11:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18171.43050; Tue, 03 Nov 2020 10:11:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtHv-0000LD-QE; Tue, 03 Nov 2020 10:11:39 +0000
Received: by outflank-mailman (input) for mailman id 18171;
 Tue, 03 Nov 2020 10:11:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RD1Y=EJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kZtHt-0000Ks-PI
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:11:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f73d4dfb-1def-4300-b6da-4f9a0286b198;
 Tue, 03 Nov 2020 10:11:37 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kZtHr-0007HG-Ul; Tue, 03 Nov 2020 10:11:35 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kZtHr-0003PE-K6; Tue, 03 Nov 2020 10:11:35 +0000
X-Inumbo-ID: f73d4dfb-1def-4300-b6da-4f9a0286b198
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4rLaVaOO907rpPajXsw1gY33QleW8696vrLdcGBlplU=; b=hvygHmXJDa7itQQl5lkKMggdEq
	XJ2CpbZArrExwnOphtq4WnJGWhWBS/prgo/3s6r6zkWLIX2Aperb9f4XK9gWGYN++X2YMsH1FwZBp
	VFPQgtf3e7v+S4XLkrBTSDcOK4hEbnQnR+qtZLfAK3MZ/4Ltj0Azenznqyza/gl0ndN4=;
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
 <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a855e71d-3610-0377-75e5-f08a02e96a25@xen.org>
Date: Tue, 3 Nov 2020 10:11:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 03/11/2020 10:05, Jan Beulich wrote:
> On 03.11.2020 11:00, Julien Grall wrote:
>> Hi Frédéric,
>>
>> On 31/10/2020 15:14, Frédéric Pierret (fepitre) wrote:
>>> ---
>>>    xen/Makefile | 2 ++
>>>    1 file changed, 2 insertions(+)
>>>
>>> diff --git a/xen/Makefile b/xen/Makefile
>>> index 30b1847515..4cc35556ef 100644
>>> --- a/xen/Makefile
>>> +++ b/xen/Makefile
>>> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>>>    export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>>>    -include xen-version
>>>    
>>> +export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)
>>
>> It is possible to download a tarball for Xen release (see [1]). They
>> don't contain the .git directory and therefore this command would fail.
>>
>> Should we fallback to "date" in this case?
> 
> Isn't this what already happens? The variable would be assigned
> an empty value in this case, wouldn't it?

My question was whether an empty SOURCE_DATE_EPOCH is acceptable.

Looking at patch #1, the users of the variable will use "date" if it is 
empty. Why can't this behavior be common?
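Making the fallback common could be sketched roughly as follows (a hypothetical illustration in plain shell, not the actual series; the `date +%s` default and the variable handling are assumptions for the sketch):

```shell
# Hypothetical sketch: compute SOURCE_DATE_EPOCH once, falling back to the
# current time when the tree is not a git checkout (e.g. a release tarball,
# where "git log" fails and prints nothing).
SOURCE_DATE_EPOCH="$(git log -1 --format=%ct 2>/dev/null)"
if [ -z "$SOURCE_DATE_EPOCH" ]; then
    SOURCE_DATE_EPOCH="$(date +%s)"
fi
export SOURCE_DATE_EPOCH
echo "$SOURCE_DATE_EPOCH"
```

With such a shared default, every consumer of the variable would see a non-empty value, instead of each user of SOURCE_DATE_EPOCH re-implementing the "date" fallback.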

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:17:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:17:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18183.43063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtN9-0000bN-IM; Tue, 03 Nov 2020 10:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18183.43063; Tue, 03 Nov 2020 10:17:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtN9-0000bG-FC; Tue, 03 Nov 2020 10:17:03 +0000
Received: by outflank-mailman (input) for mailman id 18183;
 Tue, 03 Nov 2020 10:17:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E907=EJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kZtN8-0000bB-W2
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:17:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 336011dc-13d0-411d-bacc-db1ba8ef08e8;
 Tue, 03 Nov 2020 10:17:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 407E3B1BC;
 Tue,  3 Nov 2020 10:17:01 +0000 (UTC)
X-Inumbo-ID: 336011dc-13d0-411d-bacc-db1ba8ef08e8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604398621;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ync/xtTvyKL1xtcqWVxnTWrnpg8DkQp4LxfeelTDEb8=;
	b=ZIHJU0TfGGH/yQtgQ187tflRUfypkYWsTJjhzzqiJa81rarQx3I2O21bnYlHw7XQnsPyXT
	CAe3j1geaNtiiXffrRTNPfrFZS6TaRTWaK39YXgdPikpgewQlF5+f4AbdRkbnOLbjNxTQl
	Lg+DMQwfuR6UjoQeNXCtydvGVA1Ioqk=
Subject: Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201102131239.14134-1-jgross@suse.com>
 <20201102131239.14134-3-jgross@suse.com>
 <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
 <890b6547-ca4f-b195-6b9d-9078ba35c357@suse.com>
 <fe41300c-3013-73ae-7ffa-7cd36705d0c2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <4e0d3709-1a0a-4dbd-436d-b22a4736ac0d@suse.com>
Date: Tue, 3 Nov 2020 11:17:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <fe41300c-3013-73ae-7ffa-7cd36705d0c2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.11.20 11:04, Jan Beulich wrote:
> On 03.11.2020 10:22, Jürgen Groß wrote:
>> On 03.11.20 10:02, Jan Beulich wrote:
>>> On 02.11.2020 14:12, Juergen Gross wrote:
>>>> --- a/xen/include/xen/rwlock.h
>>>> +++ b/xen/include/xen/rwlock.h
>>>> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>>>        u32 cnts;
>>>>    
>>>>        preempt_disable();
>>>> +    check_lock(&lock->lock.debug, true);
>>>>        cnts = atomic_read(&lock->cnts);
>>>>        if ( likely(_can_read_lock(cnts)) )
>>>>        {
>>>
>>> I'm sorry for being picky, but this still isn't matching
>>> _spin_trylock(). Perhaps the difference is really benign, but
>>> there the check sits ahead of preempt_disable(). (It has a
>>> clear reason to be this way there, but without a clear reason
>>> for things to be differently here, I think matching ordering
>>> may help, at least to avoid questions.)
>>
>> I think this is more an optimization opportunity: I'd rather move the
>> preempt_disable() into the first if clause, as there is no need to
>> disable preemption in case the first read of the lock already leads
>> to failure acquiring the lock.
>>
>> If you want I can prepend a patch doing that optimization.
> 
> I'd appreciate you doing so, yet I'd like to point out that
> then the same question remains for _write_trylock(). Perhaps
> a similar transformation is possible there, but it'll at
> least be more code churn. Which of course isn't a reason not
> to go this route there too.

Shouldn't be very hard. It would just need to split the if clause
into two clauses.
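For illustration, a minimal sketch of the "split the if clause" shape being
discussed, modeled with C11 atomics rather than Xen's actual rwlock_t,
preempt_disable() or check_lock() (all names below are stand-ins, not the
real implementation): the initial test for the lock being free runs with
preemption still enabled, and only the actual acquisition attempt sits in
the preemption-disabled section.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-ins for Xen's primitives, for illustration only. */
static int preempt_count;
static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }
static void check_lock(void)      { /* lock-order debug checks go here */ }

#define QW_LOCKED 0xff000000u   /* placeholder "writer holds lock" value */

typedef struct { _Atomic unsigned int cnts; } rwlock_t;

/*
 * Split write-trylock variant: fail fast without ever disabling
 * preemption when the lock is visibly taken; disable preemption only
 * around the actual acquisition attempt.
 */
static bool write_trylock_narrow(rwlock_t *lock)
{
    unsigned int expected = 0;

    check_lock();
    if ( atomic_load(&lock->cnts) != 0 )
        return false;            /* fast fail, preemption never disabled */

    preempt_disable();
    if ( !atomic_compare_exchange_strong(&lock->cnts, &expected, QW_LOCKED) )
    {
        preempt_enable();        /* lost the race after all */
        return false;
    }
    return true;                 /* lock held; preemption stays disabled */
}

static void write_unlock(rwlock_t *lock)
{
    atomic_store(&lock->cnts, 0);
    preempt_enable();
}
```

As the thread notes, the trade-off is that with real preemption a context
switch between the initial test and the acquisition attempt would raise the
rate of false negatives, though not affect correctness.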

> This said - wouldn't what you suggest be wrong if we had
> actual preemption in the hypervisor? Preemption hitting
> between e.g. these two lines
> 
>      cnts = atomic_read(&lock->cnts);
>      if ( likely(_can_read_lock(cnts)) )
> 
> would not yield the intended result, would it? (It wouldn't
> affect correctness afaics, because the caller has to be
> prepared anyway that the attempt fails, but the amount of
> effectively false negatives would grow, as would the number
> of cases where the slower path is taken for no reason.)

And this in turn would hit _spin_trylock() the same way.

IMO we should harmonize all the trylock variants in this regard:
either they all keep the preemption-disabled section as small as
possible, or they all include the initial test for whether acquiring
the lock is possible at all in this section.

> Question therefore is how much we care about keeping code
> ready for "real" preemption, when we have ample other places
> that would need changing first, before such could be enabled.

Yes. And depending on the answer regarding the route to go (wide or
narrow no-preemption section), either the rwlock or the spinlock
trylock variants should be adapted.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:17:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:17:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18187.43075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtNl-0000hk-Ry; Tue, 03 Nov 2020 10:17:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18187.43075; Tue, 03 Nov 2020 10:17:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtNl-0000hd-Oj; Tue, 03 Nov 2020 10:17:41 +0000
Received: by outflank-mailman (input) for mailman id 18187;
 Tue, 03 Nov 2020 10:17:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NRwk=EJ=bitdefender.com=aisaila@srs-us1.protection.inumbo.net>)
 id 1kZtNl-0000hS-1P
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:17:41 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.116]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e353be96-bd4c-463e-a2a6-db10b76f7ae8;
 Tue, 03 Nov 2020 10:17:35 +0000 (UTC)
Received: from DB8PR02MB5740.eurprd02.prod.outlook.com (2603:10a6:10:eb::10)
 by DB9PR02MB6555.eurprd02.prod.outlook.com (2603:10a6:10:214::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Tue, 3 Nov
 2020 10:17:29 +0000
Received: from DB8PR02MB5740.eurprd02.prod.outlook.com
 ([fe80::502:48a4:c75a:b6e4]) by DB8PR02MB5740.eurprd02.prod.outlook.com
 ([fe80::502:48a4:c75a:b6e4%7]) with mapi id 15.20.3499.032; Tue, 3 Nov 2020
 10:17:29 +0000
Received: from [192.168.1.109] (86.120.241.86) by
 VI1PR07CA0140.eurprd07.prod.outlook.com (2603:10a6:802:16::27) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.10 via Frontend Transport; Tue, 3 Nov 2020 10:17:28 +0000
X-Inumbo-ID: e353be96-bd4c-463e-a2a6-db10b76f7ae8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SdRfrLQjnCs7vEULZAvq0vfyQuszTRi5bDIR0CPD4TIW2TG+Dft7bxd3xC53yVQcxGs0SniAudIxoBNPZ1QhOWUeoEtzuVuzQ3l4i0n34fFg5zoEFpwP/LQiI0cVqN2tIHOYukeEd+KCJ/YqStLOM5ygavXhAcTl3TO0CpMCPjxCAKyufOKu3bszsiAZhIQL8HLQhKqjfAsvmbR7ocAq5CEV7KPZdjiT7wjhyxdT2cL2ZBWKLxD5dKgFTXDr7mMRSj0PtduU7W6xzaERR/vcY6uklrF0WZ4O3ZyMzMuhdOWWA+VvWmQuDurrb7azj2AxiE8P+qMICy+tbCX22E9cyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jdIG85fWpAj0fad0Dj93K+NaZxGc7Fyc4sDoc28mt6M=;
 b=OdqFTNVDY1kBRjqUJHSKne7iULzCboy9iAb0t+JU6r3VSYx9tQylOag8WskMT8NUYiz+XoYKhB4HrwFpXcjVIIjqhEq1IqPJTIedMc3KoulJTF5oBY9+1y8NHD5eEgzPy87ynXFU1HWMNZDr1FDfCVLSQ9v7W22s/Gl8iwzdOh8eLEoAH6nSX9Hzp9YsXcQL5CpK8nqx0Iy80cOzvbjIiaRhdaBI2LXeHl4f0k04BVIvI+A2BFJJx1BcPkVBRbF6w37RAA71iyNjPUst5hwMxlMyQj+7DohOqPoKJyvpPMBjzCu/D99AzSKB8KAk17lLjP8+Lxmuc7yC/pUevsR3jg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=bitdefender.com; dmarc=pass action=none
 header.from=bitdefender.com; dkim=pass header.d=bitdefender.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=bitdefender.onmicrosoft.com; s=selector2-bitdefender-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jdIG85fWpAj0fad0Dj93K+NaZxGc7Fyc4sDoc28mt6M=;
 b=Op/LYqd7dbl21OTx25fa7zno0KVJsmXn8se22OTijSj8+qvqVhYtRN28P6+yZill7+L+5ypHTaXqY/7NC5NdI95RUZIk2qn+BddofHNEgHtO3JlL7xDSktypagVA81oEJP2Ab0LXQYvyC/HpJwyh1rhpEE1jwDzUFusuYi2P3Ws=
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=bitdefender.com;
Subject: Re: [PATCH RFC v2 8/8] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <247f0d77-9447-47d0-4fa6-8e17b3e6a6de@suse.com>
From: Isaila Alexandru <aisaila@bitdefender.com>
Organization: BD
Message-ID: <60302534-1dfb-af5f-4974-1790edcb2f17@bitdefender.com>
Date: Tue, 3 Nov 2020 12:17:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
In-Reply-To: <247f0d77-9447-47d0-4fa6-8e17b3e6a6de@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [86.120.241.86]
X-ClientProxiedBy: VI1PR07CA0140.eurprd07.prod.outlook.com
 (2603:10a6:802:16::27) To DB8PR02MB5740.eurprd02.prod.outlook.com
 (2603:10a6:10:eb::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 80a74515-4496-43cd-280f-08d87fe1ab5c
X-MS-TrafficTypeDiagnostic: DB9PR02MB6555:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<DB9PR02MB65558A83DE2CC948608ED058AB110@DB9PR02MB6555.eurprd02.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5MaxMyOG933V3HljAGaY4tbLFp6IDjjUPpGoRGn0nC6RInzOHwhFruUSroOATwc8T/wJWV4XFH0+KGW2wufdzZWaI5JyYH2itcUbbnht7UZDmx+fOJsM92fyMQuIIwbAClHqk1GVqzDg9YjlZWLNFbTRDfOPd3emiwjxdlgHMrHhOrchD7eKbc2ZYJmBvaNesdduLU9m+jSpRJjhDhmdsSfme6J4l0DWtGk+F8mU5TDwkB+yjztxgJDmqo0NJZ5ZgfTS25eUoocxUy7LDDV7K818au7ED+3N5o8J5GRRAsGa/2TdhPSU8Xib+lowUeHdOgKXYiuL+6VPC8MafRGJShcpfocOFi87VK/UqcLyUd1WQqI7VbGoorjDRibQPDDU
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB8PR02MB5740.eurprd02.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(136003)(376002)(39850400004)(366004)(396003)(6636002)(8676002)(8936002)(4326008)(2906002)(86362001)(478600001)(5660300002)(31696002)(66556008)(36916002)(83380400001)(110136005)(316002)(54906003)(16576012)(6486002)(956004)(16526019)(26005)(31686004)(36756003)(53546011)(2616005)(66946007)(66476007)(52116002)(186003)(43740500002);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData:
	x8WBTT0m2mE4s/QApjnkVou1kE3rX5w4udgCbOR/CjMgfGe2l4brL5f2MFAs4Txc6FIeZzXkKfaolgSRvwzMFSlIY3L48o+rO3FH4fQXwhdCd6f6BFzcAhjG7A1AWyiTPM1S6Zksbh9wbBPMdT4JpEYahHtc+l0zqytslFg7bS5tbbFiKuF0CNlANYD6lGkHRvBgp4CqSwG3bkxfYFIR7rLHI9LHaXPEbHASXMv4EkwB/6kBOk/qWyXMvJnTqb3zq7HoA3Uh2fxJxRPT2Tn4fWyR9V3GG+u/dpi/y36Jk8vmn5UNmvPEdmTsNtO0CkjyJA7rX9RmsrBVHKwqMifVpYtDZj8YgPl9Dm5hh9HWULEinqb8P6hWXouPX5dHP6PzvHs+uNJs1AIIY5vP71yZ6nEli22aZsVgSlr1c0Ee3LF+dQSc8vSayCHr811l00YLkOtJWsjRsizQlD7XF5hCoHDS7STA0lcmpWqIw1Ew+WNjeON7zd5qaLX3e/LFntAFSq9Kh55+9h+2J7FKa6QoGO+szaAeHVHnRMhhwiSpzgRbudAAa5sVmgpieWGTxJgPzeg8g/VTTOnv8VwaR4BS7HlPETB3PY47kNbs+TFyWy/vOPLs8EzoVLh9Gp16JVNSqgX8fsQXVt1aUMpC/j5P+w==
X-OriginatorOrg: bitdefender.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 80a74515-4496-43cd-280f-08d87fe1ab5c
X-MS-Exchange-CrossTenant-AuthSource: DB8PR02MB5740.eurprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Nov 2020 10:17:29.3407
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 487baf29-f1da-469a-9221-243f830c36f3
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6xVwgcXQXHeqO53BVqBTs3vw68C426+sHZiKWjM7QGF5L9bSRaKiPx6r1xn1025PVq4KvLrODHhSVP3iwGdWyLvYRa+rXkaPAIDTVW4eqd4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR02MB6555


Hi Jan and sorry for the late reply,

On 20.10.2020 17:13, Jan Beulich wrote:
> While there don't look to be any problems with this right now, the lock
> order implications from holding the lock can be very difficult to follow
> (and may be easy to violate unknowingly). The present callbacks don't
> (and no such callback should) have any need for the lock to be held.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TODO: vm_event_disable() frees the structures used by respective
>        callbacks - need to either use call_rcu() for freeing, or maintain
>        a count of in-progress calls, for evtchn_close() to wait to drop
>        to zero before dropping the lock / returning.

I would go with the second solution and maintain a count of in-progress 
calls.
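Roughly the shape the second option could take, sketched here with C11
atomics and hypothetical names (not Xen's actual ones): the send path bumps
a per-channel count around the callback made with the per-channel lock
dropped, and the close path waits for that count to drain before freeing
the callback's data.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical stand-in for the per-channel state, illustration only. */
struct evtchn_stub {
    _Atomic unsigned int callbacks_in_progress;
};

static int stub_calls;
static void stub_callback(void) { stub_calls++; }

/* evtchn_send() side: count the callback as in progress while it runs
 * with the per-channel lock dropped. */
static void notify_unlocked(struct evtchn_stub *chn, void (*fn)(void))
{
    atomic_fetch_add(&chn->callbacks_in_progress, 1);
    /* ... per-channel lock dropped here ... */
    fn();
    atomic_fetch_sub(&chn->callbacks_in_progress, 1);
}

/* evtchn_close() side: wait for in-flight callbacks to finish before
 * the structures they use may be freed. */
static void close_wait(struct evtchn_stub *chn)
{
    while ( atomic_load(&chn->callbacks_in_progress) != 0 )
        ;   /* cpu_relax() equivalent in real code */
}
```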

Tamas, Petre, how does this sound?

Alex

> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -763,9 +763,18 @@ int evtchn_send(struct domain *ld, unsig
>           rport = lchn->u.interdomain.remote_port;
>           rchn  = evtchn_from_port(rd, rport);
>           if ( consumer_is_xen(rchn) )
> -            xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
> -        else
> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
> +        {
> +            /* Don't keep holding the lock for the call below. */
> +            xen_event_channel_notification_t fn = xen_notification_fn(rchn);
> +            struct vcpu *rv = rd->vcpu[rchn->notify_vcpu_id];
> +
> +            rcu_lock_domain(rd);
> +            spin_unlock_irqrestore(&lchn->lock, flags);
> +            fn(rv, rport);
> +            rcu_unlock_domain(rd);
> +            return 0;
> +        }
> +        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>           break;
>       case ECS_IPI:
>           evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
> 


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:25:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18194.43086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtVJ-0001ff-Lo; Tue, 03 Nov 2020 10:25:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18194.43086; Tue, 03 Nov 2020 10:25:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtVJ-0001fY-Iq; Tue, 03 Nov 2020 10:25:29 +0000
Received: by outflank-mailman (input) for mailman id 18194;
 Tue, 03 Nov 2020 10:25:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RD1Y=EJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kZtVI-0001fS-39
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:25:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43676133-2a44-47e3-b61d-241c26751e86;
 Tue, 03 Nov 2020 10:25:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kZtVG-0007aC-FW; Tue, 03 Nov 2020 10:25:26 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kZtVG-0004C0-7J; Tue, 03 Nov 2020 10:25:26 +0000
X-Inumbo-ID: 43676133-2a44-47e3-b61d-241c26751e86
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lYx25Ok5HuPPiU/ax9x92A5EQfjW5LYB9OKN/o2KBNg=; b=l8EKAHzH6Pqy9c5CRPe66ISyXR
	V3nb/GApbKPjixggwewWutcyWUjPmovnQgRhug/PXaOxKXmiNg/OtLLHjRpYi9zpakYIs/qpRORGq
	DsL4YFSyGP0vw2nq/ydXdVd/1kCkE7aTW9oWBBlsXb06lIdnS9wSjqe4wN35hMwa5KY4=;
Subject: Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201102131239.14134-1-jgross@suse.com>
 <20201102131239.14134-3-jgross@suse.com>
 <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
 <890b6547-ca4f-b195-6b9d-9078ba35c357@suse.com>
 <fe41300c-3013-73ae-7ffa-7cd36705d0c2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a3ad7dea-5528-43f8-ec0a-8ec678fcc2a7@xen.org>
Date: Tue, 3 Nov 2020 10:25:24 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <fe41300c-3013-73ae-7ffa-7cd36705d0c2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 03/11/2020 10:04, Jan Beulich wrote:
> On 03.11.2020 10:22, Jürgen Groß wrote:
>> On 03.11.20 10:02, Jan Beulich wrote:
>>> On 02.11.2020 14:12, Juergen Gross wrote:
> Question therefore is how much we care about keeping code
> ready for "real" preemption, when we have ample other places
> that would need changing first, before such could be enabled

The question we should ask ourselves is whether we think anyone would want 
to use preemption in Xen.

Some of the emulation in Xen on Arm (e.g. ITS, SMMUv3, set/way) would 
have been easier to implement if the code were preemptible.

I also hear from time to time stakeholders asking for preemptible spin 
locks (this is useful for RT).

Therefore, I think there is value in keeping the code as preempt-ready 
as possible.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:30:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:30:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18201.43099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtZu-0002X5-9s; Tue, 03 Nov 2020 10:30:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18201.43099; Tue, 03 Nov 2020 10:30:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtZu-0002Wy-5F; Tue, 03 Nov 2020 10:30:14 +0000
Received: by outflank-mailman (input) for mailman id 18201;
 Tue, 03 Nov 2020 10:30:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtZs-0002Wn-C2
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:30:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3a8be889-3d9d-4cb6-8d6e-7400351fef8e;
 Tue, 03 Nov 2020 10:30:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E264DAC1F;
 Tue,  3 Nov 2020 10:30:10 +0000 (UTC)
X-Inumbo-ID: 3a8be889-3d9d-4cb6-8d6e-7400351fef8e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604399411;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=w4mv0BeQ5SnZL8dczHZZVGsgXftnVKtTXzRs2eUJ1Vg=;
	b=hTaaV4G/HWT/ViHh9aFm/U27wGBspuZ3QTFVnlkOrk/iUXlThpFPWsMPi0lBTvDeB5zNWv
	nIRCe1zCdgiO2AmYogTXWMXRgyBUFn/hhsVzVXEje8BmK9j7ZzOEOABR9l/AcyNQ5xpNmI
	Mbaam4DSwlgHxIl6BPkGadjE9JS/fRA=
Subject: Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201102131239.14134-1-jgross@suse.com>
 <20201102131239.14134-3-jgross@suse.com>
 <fb3a1a5a-15ea-218f-a6d8-8e9d8d1bc2a7@suse.com>
 <890b6547-ca4f-b195-6b9d-9078ba35c357@suse.com>
 <fe41300c-3013-73ae-7ffa-7cd36705d0c2@suse.com>
 <4e0d3709-1a0a-4dbd-436d-b22a4736ac0d@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8f837077-5c3a-0ec1-75e6-d467bf77ab10@suse.com>
Date: Tue, 3 Nov 2020 11:30:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4e0d3709-1a0a-4dbd-436d-b22a4736ac0d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.11.2020 11:17, Jürgen Groß wrote:
> On 03.11.20 11:04, Jan Beulich wrote:
>> This said - wouldn't what you suggest be wrong if we had
>> actual preemption in the hypervisor? Preemption hitting
>> between e.g. these two lines
>>
>>      cnts = atomic_read(&lock->cnts);
>>      if ( likely(_can_read_lock(cnts)) )
>>
>> would not yield the intended result, would it? (It wouldn't
>> affect correctness afaics, because the caller has to be
>> prepared anyway that the attempt fails, but the amount of
>> effectively false negatives would grow, as would the number
>> of cases where the slower path is taken for no reason.)
> 
> And this in turn would hit _spin_trylock() the same way.

True.

> IMO we should harmonize all the trylock variants in this regard:
> either they all keep the preemption-disabled section as small as
> possible, or they all include the initial test for whether acquiring
> the lock is possible at all in this section.
> 
>> Question therefore is how much we care about keeping code
>> ready for "real" preemption, when we have ample other places
>> that would need changing first, before such could be enabled.
> 
> Yes. And depending on the answer regarding the route to go (wide or
> narrow no-preemption section), either the rwlock or the spinlock
> trylock variants should be adapted.

Well, personally I'd slightly prefer the adjustment you suggested,
but Julien's subsequent reply points in the other direction.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:47:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18207.43114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtqq-0003ab-Qk; Tue, 03 Nov 2020 10:47:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18207.43114; Tue, 03 Nov 2020 10:47:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtqq-0003aU-MR; Tue, 03 Nov 2020 10:47:44 +0000
Received: by outflank-mailman (input) for mailman id 18207;
 Tue, 03 Nov 2020 10:47:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZtqq-0003Zw-6k
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:47:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abc01a7f-e467-45b4-abbd-2364a8bd5404;
 Tue, 03 Nov 2020 10:47:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZtqi-00082P-Kz; Tue, 03 Nov 2020 10:47:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZtqi-0000ZF-9X; Tue, 03 Nov 2020 10:47:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZtqi-0000qF-94; Tue, 03 Nov 2020 10:47:36 +0000
X-Inumbo-ID: abc01a7f-e467-45b4-abbd-2364a8bd5404
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pHnRBR0fq9K37bLccAv2QwHWkukuRkoiyg6Ms+tqcvE=; b=oWvVMXmqpIT6C+bZTwE+yC0rLo
	neHaK6tm7uH7pmtpRHb9ciDWmjMMRCLD1mcEuA/MFqHuvQd14IphjxOk3uoauJ1JgT+QKWA7lkTIa
	kYMLopvONUQoQk71qQZlxdNOdvxfKaw54quPforxpiJaAvcOd/Bw7q0KXVVuxoqXVnTE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156372-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156372: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b7cbaf59f62f8ab8f157698f9e31642bff525bd0
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 10:47:36 +0000

flight 156372 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156372/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b7cbaf59f62f8ab8f157698f9e31642bff525bd0
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   94 days
Failing since        152366  2020-08-01 20:49:34 Z   93 days  157 attempts
Testing same since   156372  2020-11-03 01:11:18 Z    0 days    1 attempts

------------------------------------------------------------
3417 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 652941 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:48:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:48:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18210.43126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtre-0003j6-7U; Tue, 03 Nov 2020 10:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18210.43126; Tue, 03 Nov 2020 10:48:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtre-0003iz-4b; Tue, 03 Nov 2020 10:48:34 +0000
Received: by outflank-mailman (input) for mailman id 18210;
 Tue, 03 Nov 2020 10:48:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtrd-0003it-Ct
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:48:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id de19c2c1-ad7d-42ac-9a8b-0508dd27284e;
 Tue, 03 Nov 2020 10:48:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4A292ACA3;
 Tue,  3 Nov 2020 10:48:30 +0000 (UTC)
X-Inumbo-ID: de19c2c1-ad7d-42ac-9a8b-0508dd27284e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604400510;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=gYrpeny+gi/EErZyFZ2M4dvf/JKpXhwPCuX6X8avS6Y=;
	b=TTJaZChX9Xj+6Abp8cQvkg5eXolfXzll3Ugxc0bv99UUKLvhDIQn3Hd0GsBJ3MOPhW403J
	kdAEQUi6NP7Zn3Y1j/wUiJVVIQCJIfFGcKVPR/Mr83ihqcm+0QoNKO4MMoYu4buxaxFS6i
	3UgXsLH6CK12C1Fo4aUXtuVDqYjHVGg=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mm: drop guest_get_eff_l1e()
Message-ID: <8a94e96d-14e6-d145-3532-91dab96c8209@suse.com>
Date: Tue, 3 Nov 2020 11:48:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no actual user of it: pv_ro_page_fault() has a
guest_kernel_mode() conditional around its only call site.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -56,27 +56,6 @@ l1_pgentry_t *map_guest_l1e(unsigned lon
 }
 
 /*
- * Read the guest's l1e that maps this address, from the kernel-mode
- * page tables.
- */
-static l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
-{
-    struct vcpu *curr = current;
-    const bool user_mode = !(curr->arch.flags & TF_kernel_mode);
-    l1_pgentry_t l1e;
-
-    if ( user_mode )
-        toggle_guest_pt(curr);
-
-    l1e = guest_get_eff_l1e(linear);
-
-    if ( user_mode )
-        toggle_guest_pt(curr);
-
-    return l1e;
-}
-
-/*
  * Map a guest's LDT page (covering the byte at @offset from start of the LDT)
  * into Xen's virtual range.  Returns true if the mapping changed, false
  * otherwise.
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -5,8 +5,11 @@ l1_pgentry_t *map_guest_l1e(unsigned lon
 
 int new_guest_cr3(mfn_t mfn);
 
-/* Read a PV guest's l1e that maps this linear address. */
-static inline l1_pgentry_t guest_get_eff_l1e(unsigned long linear)
+/*
+ * Read the guest's l1e that maps this address, from the kernel-mode
+ * page tables.
+ */
+static inline l1_pgentry_t guest_get_eff_kern_l1e(unsigned long linear)
 {
     l1_pgentry_t l1e;
 
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -342,7 +342,7 @@ int pv_ro_page_fault(unsigned long addr,
     bool mmio_ro;
 
     /* Attempt to read the PTE that maps the VA being accessed. */
-    pte = guest_get_eff_l1e(addr);
+    pte = guest_get_eff_kern_l1e(addr);
 
     /* We are only looking for read-only mappings */
     if ( ((l1e_get_flags(pte) & (_PAGE_PRESENT | _PAGE_RW)) != _PAGE_PRESENT) )


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:48:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:48:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18212.43137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtrt-0003nv-Fq; Tue, 03 Nov 2020 10:48:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18212.43137; Tue, 03 Nov 2020 10:48:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtrt-0003nn-Cl; Tue, 03 Nov 2020 10:48:49 +0000
Received: by outflank-mailman (input) for mailman id 18212;
 Tue, 03 Nov 2020 10:48:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z2DW=EJ=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kZtrs-0003nc-F4
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:48:48 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7916cdea-72dd-402a-9260-b5574d4f4397;
 Tue, 03 Nov 2020 10:48:47 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id y12so17982236wrp.6
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 02:48:47 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o7sm25760887wrp.23.2020.11.03.02.48.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 03 Nov 2020 02:48:45 -0800 (PST)
X-Inumbo-ID: 7916cdea-72dd-402a-9260-b5574d4f4397
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=VJ35ZHV3XnN6jiKh2gQB8iLy4AOnRolbYpXB6GM3TsI=;
        b=XO9CSoLy3mZkeMgD+EIjikVEe6/m7oZuCIW6EMhkbBqFSU9tlLp0HJ2CMPDT/0sYQK
         8+zYrTiCZDGS1/KSBOn8WH92g7f+YJMxw1Ue058fgQd8pt86kg8VXA6p+BJMegoV0byw
         q7GENAey5/OS7aMICxLrks7l1icZlLj7sN3ruFKRx9qK52PHtNWVEkAiPP/kv4uU4DMS
         thHpRnp9Ecn712gbHBhvJRU0VYd7u0nQC7nhv34aeoTJqvcxRGLXX4THOvZB3ZsW1tki
         jeYzSnpfEmzcHoO++76275Qai5Krf+Q2TCmSWD/Ozcc06SkqFqNTAb46mn/9rt2yKB5k
         dc6Q==
X-Gm-Message-State: AOAM530RDFShyzg9Iz60eSgGib2Qm8YhInpjllqjyw3Aq/mAzwu8vKlV
	C+VCYcxLu0VvVG0GBWMcOJM=
X-Google-Smtp-Source: ABdhPJz/mis5AMwtmHGZ8jufZY7NYfZW25nI47mf9aw1+ZsiS01rOKqyV7qv4sUhAawrWkoPCCgxJw==
X-Received: by 2002:adf:ff82:: with SMTP id j2mr25344824wrr.401.1604400526404;
        Tue, 03 Nov 2020 02:48:46 -0800 (PST)
Date: Tue, 3 Nov 2020 10:48:44 +0000
From: Wei Liu <wl@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org,
	Anthony PERARD <anthony.perard@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] libxl: Add suppress-vmdesc to QEMU machine
Message-ID: <20201103104844.t73k4rukp7jezk7d@liuwe-devbox-debian-v2>
References: <20201029190332.31161-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201029190332.31161-1-jandryuk@gmail.com>
User-Agent: NeoMutt/20180716

On Thu, Oct 29, 2020 at 03:03:32PM -0400, Jason Andryuk wrote:
> The device model state saved by QMP xen-save-devices-state doesn't
> include the vmdesc json.  When restoring an HVM, xen-load-devices-state
> always triggers "Expected vmdescription section, but got 0".  This is
> not a problem when restore comes from a file.  However, when QEMU runs
> in a Linux stubdom and the state comes over a console, EOF is not
> received.  This delays the restore, though it does eventually complete.
> 
> Setting suppress-vmdesc skips looking for the vmdesc during restore and
> avoids the wait.
> 
> QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
> sets it manually for xenfv, and for xen_platform_pci=0 when -machine pc
> is used.
> 
> QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
> submission" added suppress-vmdesc in QEMU 2.3.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> 
> ---
> QEMU 2.3 came out in 2015, so setting suppress-vmdesc unilaterally
> should be okay...  Is this okay?

Anthony, what is your opinion on this?

Wei.
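[Editorial aside: suppress-vmdesc is an ordinary QEMU machine property, so the behaviour under discussion can also be tried by hand when launching the device model directly. The invocation below is purely illustrative and not taken from the patch; everything other than the -machine property is a placeholder.]

```shell
# Launch a device model with vmdesc suppression enabled on the xenfv
# machine type; only the -machine property matters for this discussion.
qemu-system-x86_64 \
    -machine xenfv,suppress-vmdesc=on \
    -m 1024 \
    -monitor stdio
```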


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:55:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:55:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18223.43150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtxr-0004l5-6V; Tue, 03 Nov 2020 10:54:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18223.43150; Tue, 03 Nov 2020 10:54:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtxr-0004ky-3U; Tue, 03 Nov 2020 10:54:59 +0000
Received: by outflank-mailman (input) for mailman id 18223;
 Tue, 03 Nov 2020 10:54:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtxp-0004kt-AE
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:54:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3930e305-901e-4d5c-9c4b-75858e291101;
 Tue, 03 Nov 2020 10:54:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4BFD6ACA3;
 Tue,  3 Nov 2020 10:54:55 +0000 (UTC)
X-Inumbo-ID: 3930e305-901e-4d5c-9c4b-75858e291101
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604400895;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=oV/mDKp+mMMwMLDcuxuCHfmSupqxXZf2vCXQnS4sp20=;
	b=KxhMsVlVWGe3m5z1HBJ6pWdybTdrSCA67i4+8zFNF+cb3KdKiqSotJlAjYctt8F6fWVPXF
	J5edqeKWycetFymAJ1QtlYUxM/Qjm9vIz0w6+22RVyURlnz0YieYY43qddMt8jwLf3KgM0
	jQ5ZGyAGlhcBGRX5yi5aZ7KFfcjRr6o=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] x86/PV: memory management consistency and minor
 relaxations
Message-ID: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Date: Tue, 3 Nov 2020 11:54:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Especially the latter three patches provide only small possible
gains, as far as I can tell. I nevertheless wanted to put up the
entire set for consideration.

1: consistently inline {,un}adjust_guest_l<N>e()
2: fold redundant calls to adjust_guest_l<N>e()
3: _PAGE_RW changes may take fast path of mod_l[234]_entry()
4: restrict TLB flushing after mod_l[234]_entry()
5: avoid TLB flushing after mod_l3_entry()

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:56:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:56:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18226.43162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtz8-0004rW-J9; Tue, 03 Nov 2020 10:56:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18226.43162; Tue, 03 Nov 2020 10:56:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtz8-0004rP-Ec; Tue, 03 Nov 2020 10:56:18 +0000
Received: by outflank-mailman (input) for mailman id 18226;
 Tue, 03 Nov 2020 10:56:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtz7-0004rK-Og
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:56:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id cb50a6d0-afb3-4660-b2f5-dc039872bad7;
 Tue, 03 Nov 2020 10:56:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 202A3ACC6;
 Tue,  3 Nov 2020 10:56:16 +0000 (UTC)
X-Inumbo-ID: cb50a6d0-afb3-4660-b2f5-dc039872bad7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604400976;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DSYgoCWg+O2ARrh9htTTAYEpYfiAsuVOPNb70f//H6g=;
	b=s9Bbik8yGbntPDPFJ8VUPmUuRKNRuIEOCYWZsvuqIgaELjQRoHeS9yZ/ecGduHlfTW1bX0
	M6caXtlTchlls2cy4SPMZmyIwIuJ9u0S3U3fPFu+eH/qbS5N6T0kN+g7nu/bDI/ouhaJ3T
	7P7pDMZe/LS/eLaSsevLDt6yas55ouY=
Subject: [PATCH 1/5] x86/PV: consistently inline {,un}adjust_guest_l<N>e()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Message-ID: <686d7f09-6313-6dd0-2133-8646308aea5b@suse.com>
Date: Tue, 3 Nov 2020 11:56:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Commit 8a74707a7c ("x86/nospec: Use always_inline to fix code gen for
evaluate_nospec") converted inline to always_inline for
adjust_guest_l[134]e(), but left adjust_guest_l2e() and
unadjust_guest_l3e() alone without saying why these two would differ in
the needed / wanted treatment. Adjust these two as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Actually I question the need for always_inline here, for two reasons:
1) All that's guarded are updates to local variables, depending
   merely on the values held by these local variables (really: function
   arguments) on input. As a result it would look to me as if we wanted
   evaluate_nospec()-free variants of is_pv{,_32bit}_domain() and the
   like, to be used e.g. here.
2) These functions don't act as predicates, and hence the concern
   expressed in said commit doesn't apply here: Callers wouldn't observe
   misplaced LFENCEs, as they don't use the results of these helpers for
   further (direct) code flow control.
---
So far I've observed only clang failing to inline the two functions
when just "inline" is in place.

--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -99,8 +99,8 @@ static always_inline l1_pgentry_t adjust
     return l1e;
 }
 
-static inline l2_pgentry_t adjust_guest_l2e(l2_pgentry_t l2e,
-                                            const struct domain *d)
+static always_inline l2_pgentry_t adjust_guest_l2e(l2_pgentry_t l2e,
+                                                   const struct domain *d)
 {
     if ( likely(l2e_get_flags(l2e) & _PAGE_PRESENT) &&
          likely(!is_pv_32bit_domain(d)) )
@@ -119,8 +119,8 @@ static always_inline l3_pgentry_t adjust
     return l3e;
 }
 
-static inline l3_pgentry_t unadjust_guest_l3e(l3_pgentry_t l3e,
-                                              const struct domain *d)
+static always_inline l3_pgentry_t unadjust_guest_l3e(l3_pgentry_t l3e,
+                                                     const struct domain *d)
 {
     if ( unlikely(is_pv_32bit_domain(d)) &&
          likely(l3e_get_flags(l3e) & _PAGE_PRESENT) )



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:56:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:56:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18227.43174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtzO-0004w6-QU; Tue, 03 Nov 2020 10:56:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18227.43174; Tue, 03 Nov 2020 10:56:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtzO-0004vy-N4; Tue, 03 Nov 2020 10:56:34 +0000
Received: by outflank-mailman (input) for mailman id 18227;
 Tue, 03 Nov 2020 10:56:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=41xg=EJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kZtzN-0004vj-5I
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:56:33 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9134783-9da0-4d9c-8ca2-f7f7338c729c;
 Tue, 03 Nov 2020 10:56:30 +0000 (UTC)
X-Inumbo-ID: b9134783-9da0-4d9c-8ca2-f7f7338c729c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604400990;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=4a/o2HIbskxD+OMC2wnXN4PLrvWgZz4D8VfelyDZl54=;
  b=APqr6+c4ngzg/2S0Ds5DHaKwYK2rT+FQGZMJ1+NDiv9tIzNafweCMD51
   WFD6Q/GjAw9Ei+AkOtGnKIjoxTRvA6uto3cHRICT0+wqzWGJM7uQSUoQw
   UxIkzDsz4Zr87hRUgW6s55dQpU78Gju0Rb4xMCCOSWcisn3nuAlUeAhth
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Ni+TYlycDocH0KZY4Jpi9OCSCMZerC2bTdGnC8k/vxXMq0UR3sCi3eWMP5anzVrz2rJqmxUO+z
 zetNB5+W4Ebgd7hkJnrZ+ooONILwK/t87ByjQy5kEud9GEvgTTBi0c13R1EsMnjQeLNFHhDGxw
 0v81akX4N5UkS4zxUA5Vrrt4nNWXT0X5DXTu7J0hbQ5tbp7jIFUFSKwY7KCOrhpWvhrttbau4g
 d6a5Uvaz4YKU2Mkk9w4Jffe8rAR38K999xrJ9r4PXQhQPQM/dzvdZ353daGFGfwXSF71uvFE/v
 98E=
X-SBRS: None
X-MesageID: 30694830
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,447,1596513600"; 
   d="scan'208";a="30694830"
Subject: Re: [PATCH] x86/mm: drop guest_get_eff_l1e()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <8a94e96d-14e6-d145-3532-91dab96c8209@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c7add7b3-f01e-043f-6fdd-4c6e3e37d33e@citrix.com>
Date: Tue, 3 Nov 2020 10:56:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8a94e96d-14e6-d145-3532-91dab96c8209@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 03/11/2020 10:48, Jan Beulich wrote:
> There's no actual user of it: pv_ro_page_fault() has a
> guest_kernel_mode() conditional around its only call site.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:56:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:56:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18230.43185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtzb-00051K-3E; Tue, 03 Nov 2020 10:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18230.43185; Tue, 03 Nov 2020 10:56:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtzb-00051D-0C; Tue, 03 Nov 2020 10:56:47 +0000
Received: by outflank-mailman (input) for mailman id 18230;
 Tue, 03 Nov 2020 10:56:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZtzZ-00050b-Gh
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:56:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 04892558-b76c-4dd4-a760-6c6e647d37d9;
 Tue, 03 Nov 2020 10:56:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F5F7ACA3;
 Tue,  3 Nov 2020 10:56:44 +0000 (UTC)
X-Inumbo-ID: 04892558-b76c-4dd4-a760-6c6e647d37d9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604401004;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Plv1raHJm4EtIIaRg6S3XXueuXOBPrKTjHui585CBRI=;
	b=aKGd7R8Mwz5j5hUMKw5siQ7Bt3cP0zwUWFZCFb4TLPXIwSjYYXpUVj11WWl9SPk/r0ic0I
	mav5ccvs5TAnmx9rMR3MRRVrstEWwn7ciognEowcPg6FqBVfcbHPRr5JaYDq4Kk05GV9J/
	GTpYnaT1OBqGb1WTHnd35PHZEF1l/XM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2F5F7ACA3;
	Tue,  3 Nov 2020 10:56:44 +0000 (UTC)
Subject: [PATCH 2/5] x86/PV: fold redundant calls to adjust_guest_l<N>e()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Message-ID: <0199d771-a138-702a-2514-9139d0881175@suse.com>
Date: Tue, 3 Nov 2020 11:56:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

At least from an abstract perspective it is quite odd for us to compare
adjusted old and unadjusted new page table entries when determining
whether the fast path can be used. This is largely benign because
FASTPATH_FLAG_WHITELIST covers most of the flags which the adjustments
may set, and the flags getting set don't affect the outcome of
get_page_from_l<N>e(). There's one exception: 32-bit L3 entries get
_PAGE_RW set, but get_page_from_l3e() doesn't allow linear page tables
to be created at this level for such guests. Apart from this, _PAGE_RW
is unused by get_page_from_l<N>e() (for N > 1), and hence forcing the
bit on early has no functional effect.

The main reason for the change, however, is that adjust_guest_l<N>e()
aren't exactly cheap - both in terms of pure code size and because each
one has at least one evaluate_nospec() by way of containing
is_pv_32bit_domain() conditionals.

Call the functions once ahead of the fast path checks, instead of twice
after.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2199,10 +2199,11 @@ static int mod_l1_entry(l1_pgentry_t *pl
             nl1e = l1e_from_page(page, l1e_get_flags(nl1e));
         }
 
+        nl1e = adjust_guest_l1e(nl1e, pt_dom);
+
         /* Fast path for sufficiently-similar mappings. */
         if ( !l1e_has_changed(ol1e, nl1e, ~FASTPATH_FLAG_WHITELIST) )
         {
-            nl1e = adjust_guest_l1e(nl1e, pt_dom);
             rc = UPDATE_ENTRY(l1, pl1e, ol1e, nl1e, gl1mfn, pt_vcpu,
                               preserve_ad);
             if ( page )
@@ -2227,7 +2228,6 @@ static int mod_l1_entry(l1_pgentry_t *pl
         if ( page )
             put_page(page);
 
-        nl1e = adjust_guest_l1e(nl1e, pt_dom);
         if ( unlikely(!UPDATE_ENTRY(l1, pl1e, ol1e, nl1e, gl1mfn, pt_vcpu,
                                     preserve_ad)) )
         {
@@ -2279,10 +2279,11 @@ static int mod_l2_entry(l2_pgentry_t *pl
             return -EINVAL;
         }
 
+        nl2e = adjust_guest_l2e(nl2e, d);
+
         /* Fast path for sufficiently-similar mappings. */
         if ( !l2e_has_changed(ol2e, nl2e, ~FASTPATH_FLAG_WHITELIST) )
         {
-            nl2e = adjust_guest_l2e(nl2e, d);
             if ( UPDATE_ENTRY(l2, pl2e, ol2e, nl2e, mfn, vcpu, preserve_ad) )
                 return 0;
             return -EBUSY;
@@ -2291,7 +2292,6 @@ static int mod_l2_entry(l2_pgentry_t *pl
         if ( unlikely((rc = get_page_from_l2e(nl2e, mfn, d, 0)) < 0) )
             return rc;
 
-        nl2e = adjust_guest_l2e(nl2e, d);
         if ( unlikely(!UPDATE_ENTRY(l2, pl2e, ol2e, nl2e, mfn, vcpu,
                                     preserve_ad)) )
         {
@@ -2341,10 +2341,11 @@ static int mod_l3_entry(l3_pgentry_t *pl
             return -EINVAL;
         }
 
+        nl3e = adjust_guest_l3e(nl3e, d);
+
         /* Fast path for sufficiently-similar mappings. */
         if ( !l3e_has_changed(ol3e, nl3e, ~FASTPATH_FLAG_WHITELIST) )
         {
-            nl3e = adjust_guest_l3e(nl3e, d);
             rc = UPDATE_ENTRY(l3, pl3e, ol3e, nl3e, mfn, vcpu, preserve_ad);
             return rc ? 0 : -EFAULT;
         }
@@ -2354,7 +2355,6 @@ static int mod_l3_entry(l3_pgentry_t *pl
             return rc;
         rc = 0;
 
-        nl3e = adjust_guest_l3e(nl3e, d);
         if ( unlikely(!UPDATE_ENTRY(l3, pl3e, ol3e, nl3e, mfn, vcpu,
                                     preserve_ad)) )
         {
@@ -2403,10 +2403,11 @@ static int mod_l4_entry(l4_pgentry_t *pl
             return -EINVAL;
         }
 
+        nl4e = adjust_guest_l4e(nl4e, d);
+
         /* Fast path for sufficiently-similar mappings. */
         if ( !l4e_has_changed(ol4e, nl4e, ~FASTPATH_FLAG_WHITELIST) )
         {
-            nl4e = adjust_guest_l4e(nl4e, d);
             rc = UPDATE_ENTRY(l4, pl4e, ol4e, nl4e, mfn, vcpu, preserve_ad);
             return rc ? 0 : -EFAULT;
         }
@@ -2416,7 +2417,6 @@ static int mod_l4_entry(l4_pgentry_t *pl
             return rc;
         rc = 0;
 
-        nl4e = adjust_guest_l4e(nl4e, d);
         if ( unlikely(!UPDATE_ENTRY(l4, pl4e, ol4e, nl4e, mfn, vcpu,
                                     preserve_ad)) )
         {



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:57:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18235.43197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtzt-000591-CX; Tue, 03 Nov 2020 10:57:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18235.43197; Tue, 03 Nov 2020 10:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZtzt-00058u-9c; Tue, 03 Nov 2020 10:57:05 +0000
Received: by outflank-mailman (input) for mailman id 18235;
 Tue, 03 Nov 2020 10:57:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWmA=EJ=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kZtzr-00058i-UZ
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:57:03 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ab2ffa7-182c-4c64-b407-ec8f33589217;
 Tue, 03 Nov 2020 10:57:02 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604401014860529.1942258083284;
 Tue, 3 Nov 2020 02:56:54 -0800 (PST)
X-Inumbo-ID: 3ab2ffa7-182c-4c64-b407-ec8f33589217
ARC-Seal: i=1; a=rsa-sha256; t=1604401017; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=XeyooSQxiPGnuLuzbrWqPiEFRvTTC1Qa1kXvxJ2KUbLyOTioYOSoTCt+cKhFf5n8uCr9IxesOJvz6cblXw5mwxzckYbuQM1E7kzqatPoiy+opPJ9ssNySImQPSft2UTC0hjuvv2wQkXY2UixI1wTp8wv52Bb7bAakrRlPJUCqgQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604401017; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=iCaF0jp3IRN9tZUCkkPA3t5ikCaMDd+YC3b9Mg7+CvE=; 
	b=KvmY9BPMv2X7jRTWYkIK2Pk1D6GemKnOWiwVsolmtV4viUL1WBPQH3xzJpBuolr0P6ORsUEyvgkB9z6mOmeet10tu9VDm036Wy1Z1wU8/XAcJwm9iFMa/x8kbIf9TdkA3ilL5vaej6ZztNZ02AAKclZUGRESHYZrFkqdxlsOBNU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604401017;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=iCaF0jp3IRN9tZUCkkPA3t5ikCaMDd+YC3b9Mg7+CvE=;
	b=IbF4XyenHHowpDK3R1tgvNYCbPAQkCQTtOQwBYzE8suiDShmR3RVpcg/DTO56THf
	dsoTS6fSrdaG9EFf7qnl61Gtik1Pz+pivmkew6Ny4M2SjgI5MSNYnvl6mi3/Vb33HAF
	EinPo4segyugLOPS+SLcpJ+ofFuvv1pyB72/K9LM=
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
 <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
 <a855e71d-3610-0377-75e5-f08a02e96a25@xen.org>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
Message-ID: <1dfe9edb-e56b-0e0b-cd91-5af2e1558f98@qubes-os.org>
Date: Tue, 3 Nov 2020 11:56:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <a855e71d-3610-0377-75e5-f08a02e96a25@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="836bcfuhQ4PB7MWcAcyI0ET6Qy7igEiMT"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--836bcfuhQ4PB7MWcAcyI0ET6Qy7igEiMT
Content-Type: multipart/mixed; boundary="r5aCyVpMEAHf1sjm0VkBw0VhzJfyS6p0q";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <1dfe9edb-e56b-0e0b-cd91-5af2e1558f98@qubes-os.org>
Subject: Re: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
 <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
 <396c2991-1a90-bc1a-70e7-eaaf62c309d8@xen.org>
 <19a09f0e-c544-f122-b3af-881d132d7df9@suse.com>
 <a855e71d-3610-0377-75e5-f08a02e96a25@xen.org>
In-Reply-To: <a855e71d-3610-0377-75e5-f08a02e96a25@xen.org>

--r5aCyVpMEAHf1sjm0VkBw0VhzJfyS6p0q
Content-Type: multipart/mixed;
 boundary="------------370ECCE17B1C94732C8CA024"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------370ECCE17B1C94732C8CA024
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 11/3/20 at 11:11 AM, Julien Grall wrote:
>
>
> On 03/11/2020 10:05, Jan Beulich wrote:
>> On 03.11.2020 11:00, Julien Grall wrote:
>>> Hi Frédéric,
>>>
>>> On 31/10/2020 15:14, Frédéric Pierret (fepitre) wrote:
>>>> ---
>>>>   xen/Makefile | 2 ++
>>>>   1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/xen/Makefile b/xen/Makefile
>>>> index 30b1847515..4cc35556ef 100644
>>>> --- a/xen/Makefile
>>>> +++ b/xen/Makefile
>>>> @@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
>>>>   export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>>>>   -include xen-version
>>>> +export SOURCE_DATE_EPOCH    ?= $(shell git log -1 --format=%ct 2>/dev/null)
>>>
>>> It is possible to download a tarball for Xen release (see [1]). They
>>> don't contain the .git directory and therefore this command would fail.
>>>
>>> Should we fallback to "date" in this case?
>>
>> Isn't this what already happens? The variable would be assigned
>> an empty value in this case, wouldn't it?
>
> My question was whether empty SOURCE_DATE_EPOCH is acceptable?
>
> Looking at patch #1, the users of the variable will use "date" if it
> is empty. Why can't this behavior be common?
>
> Cheers,
>

In fact, we could fallback to date in SOURCE_DATE_EPOCH definition and in=
 this case this would always be defined. Now, I'm wondering how misleadin=
g that could be with respect to its definition (see [1]): "The value MUST=
 be reproducible (deterministic) across different executions of the build=
, depending only on the source code.". In this case, if someone looks to =
the code and interpret the build time etc, defined with respect to SOURCE=
_DATE_EPOCH, that would be odd?

Regards,
Fr=C3=A9d=C3=A9ric


[1]: https://reproducible-builds.org/specs/source-date-epoch/
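
Concretely, such a combined fallback could look like this shell sketch (illustrative only; the posted patch implements just the git half, and the reproducibility caveat above applies to the date branch):

```shell
#!/bin/sh
# Sketch of a SOURCE_DATE_EPOCH fallback: prefer the last git commit
# timestamp, and only when no git history is available (e.g. a release
# tarball without .git) fall back to the current time -- at the cost of
# losing build reproducibility in the tarball case.
epoch="$(git log -1 --format=%ct 2>/dev/null || true)"
[ -n "$epoch" ] || epoch="$(date +%s)"
export SOURCE_DATE_EPOCH="$epoch"
echo "$SOURCE_DATE_EPOCH"
```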

--------------370ECCE17B1C94732C8CA024--

--r5aCyVpMEAHf1sjm0VkBw0VhzJfyS6p0q--


--836bcfuhQ4PB7MWcAcyI0ET6Qy7igEiMT--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:57:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18236.43210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu00-0005D2-RX; Tue, 03 Nov 2020 10:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18236.43210; Tue, 03 Nov 2020 10:57:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu00-0005Ct-OF; Tue, 03 Nov 2020 10:57:12 +0000
Received: by outflank-mailman (input) for mailman id 18236;
 Tue, 03 Nov 2020 10:57:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZu00-0005Ce-AO
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:57:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0644aa43-9c30-4c86-a07a-069f6d49718c;
 Tue, 03 Nov 2020 10:57:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EA4E8AC8C;
 Tue,  3 Nov 2020 10:57:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604401031;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CJNC9hq2wQG0dp8NuWS9S6pYUYMVclkRS7uxeyQUMfg=;
	b=igrnGwQHe+fGfsjzqv1C1t0njA1Vqdwr9aTe3ohuAqK27dREWRl3WGeVuHLgjhOAgslJuj
	a8sMfztZgTDdJfOuaSFGi+HUuHehDnG+lfXAXVG9PuaATCDaKMt/0LohJBTM1iZvnuiOOM
	1QbGjpByttdN8ZCKi/JGTUjInSFvfPY=
Subject: [PATCH 3/5] x86/PV: _PAGE_RW changes may take fast path of
 mod_l[234]_entry()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Message-ID: <11633161-6809-db0c-44e6-e5f383f4ebd2@suse.com>
Date: Tue, 3 Nov 2020 11:57:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The only time _PAGE_RW matters when validating an L2 or higher entry is
when an attempt is made to install a linear page table. Therefore, when
such installs are disallowed at build time, we can allow _PAGE_RW changes
to take the fast paths there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2140,6 +2140,18 @@ static void l3t_unlock(struct page_info
     (_PAGE_NX_BIT | _PAGE_AVAIL_HIGH | _PAGE_AVAIL | _PAGE_GLOBAL | \
      _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_USER)
 
+/*
+ * PDE flags that a guest may change without re-validating the PDE.
+ * All other bits affect translation, caching, or Xen's safety. When
+ * guest-created linear page tables aren't allowed, intermediate page
+ * tables may have _PAGE_RW altered without this requiring re-validation.
+ */
+#ifndef CONFIG_PV_LINEAR_PT
+# define FASTPATH_PDE_FLAG_WHITELIST (FASTPATH_FLAG_WHITELIST | _PAGE_RW)
+#else
+# define FASTPATH_PDE_FLAG_WHITELIST FASTPATH_FLAG_WHITELIST
+#endif
+
 /* Update the L1 entry at pl1e to new value nl1e. */
 static int mod_l1_entry(l1_pgentry_t *pl1e, l1_pgentry_t nl1e,
                         mfn_t gl1mfn, unsigned int cmd,
@@ -2282,7 +2294,7 @@ static int mod_l2_entry(l2_pgentry_t *pl
         nl2e = adjust_guest_l2e(nl2e, d);
 
         /* Fast path for sufficiently-similar mappings. */
-        if ( !l2e_has_changed(ol2e, nl2e, ~FASTPATH_FLAG_WHITELIST) )
+        if ( !l2e_has_changed(ol2e, nl2e, ~FASTPATH_PDE_FLAG_WHITELIST) )
         {
             if ( UPDATE_ENTRY(l2, pl2e, ol2e, nl2e, mfn, vcpu, preserve_ad) )
                 return 0;
@@ -2344,7 +2356,7 @@ static int mod_l3_entry(l3_pgentry_t *pl
         nl3e = adjust_guest_l3e(nl3e, d);
 
         /* Fast path for sufficiently-similar mappings. */
-        if ( !l3e_has_changed(ol3e, nl3e, ~FASTPATH_FLAG_WHITELIST) )
+        if ( !l3e_has_changed(ol3e, nl3e, ~FASTPATH_PDE_FLAG_WHITELIST) )
         {
             rc = UPDATE_ENTRY(l3, pl3e, ol3e, nl3e, mfn, vcpu, preserve_ad);
             return rc ? 0 : -EFAULT;
@@ -2406,7 +2418,7 @@ static int mod_l4_entry(l4_pgentry_t *pl
         nl4e = adjust_guest_l4e(nl4e, d);
 
         /* Fast path for sufficiently-similar mappings. */
-        if ( !l4e_has_changed(ol4e, nl4e, ~FASTPATH_FLAG_WHITELIST) )
+        if ( !l4e_has_changed(ol4e, nl4e, ~FASTPATH_PDE_FLAG_WHITELIST) )
         {
             rc = UPDATE_ENTRY(l4, pl4e, ol4e, nl4e, mfn, vcpu, preserve_ad);
             return rc ? 0 : -EFAULT;



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:57:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:57:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18245.43222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu0O-0005Mv-53; Tue, 03 Nov 2020 10:57:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18245.43222; Tue, 03 Nov 2020 10:57:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu0O-0005Mo-13; Tue, 03 Nov 2020 10:57:36 +0000
Received: by outflank-mailman (input) for mailman id 18245;
 Tue, 03 Nov 2020 10:57:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZu0M-0005Mb-WD
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:57:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f6f69f0e-ad4a-49e5-a948-7e5a417ccecc;
 Tue, 03 Nov 2020 10:57:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30207AC8C;
 Tue,  3 Nov 2020 10:57:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604401053;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=McZ0BgAGUmxFUr4xKQf+8oaTAT8M4nBK6Ql+lYPz1kI=;
	b=NKitQ3yw1SjnKZ/+2j2zIe4JKMG+VmNdTN42crxe6oDwvnMHURiFha0+v5bkb1Lnw1bsex
	vxtGVZnjvBDH4fCdrIH+1AgxDyI5chTA8FTjWQPU9sqmh5hXOVuGFm8V5FD4knFfELn7WZ
	b8MV8PZFT+zYpJb2FePlWIqJueK7zOU=
Subject: [PATCH 4/5] x86/PV: restrict TLB flushing after mod_l[234]_entry()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Message-ID: <eac90675-bcf3-3818-1f5f-f9825349e22c@suse.com>
Date: Tue, 3 Nov 2020 11:57:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Just as we avoid invoking remote root pt flushes when all uses of an
L4 table can be accounted for locally, the same can be done for all of
L[234] for the linear pt flush when the table is a "free floating" one,
i.e. one which is pinned but not hooked up anywhere. While this situation
doesn't occur very often, it can be observed.

Since this breaks one of the implications of the XSA-286 fix, drop the
flush_root_pt_local variable again and set ->root_pgt_changed directly,
just like it was before that change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While adjusting the big comment that was added for XSA-286 I wondered
why it talks about the "construction of 32bit PV guests". How are 64-bit
PV guests different in this regard?

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3903,8 +3903,7 @@ long do_mmu_update(
     struct vcpu *curr = current, *v = curr;
     struct domain *d = v->domain, *pt_owner = d, *pg_owner;
     mfn_t map_mfn = INVALID_MFN, mfn;
-    bool flush_linear_pt = false, flush_root_pt_local = false,
-        flush_root_pt_others = false;
+    bool flush_linear_pt = false, flush_root_pt_others = false;
     uint32_t xsm_needed = 0;
     uint32_t xsm_checked = 0;
     int rc = put_old_guest_table(curr);
@@ -4054,7 +4053,9 @@ long do_mmu_update(
                         break;
                     rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
-                    if ( !rc )
+                    if ( !rc &&
+                         (page->u.inuse.type_info & PGT_count_mask) >
+                         1 + !!(page->u.inuse.type_info & PGT_pinned) )
                         flush_linear_pt = true;
                     break;
 
@@ -4063,7 +4064,9 @@ long do_mmu_update(
                         break;
                     rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
-                    if ( !rc )
+                    if ( !rc &&
+                         (page->u.inuse.type_info & PGT_count_mask) >
+                         1 + !!(page->u.inuse.type_info & PGT_pinned) )
                         flush_linear_pt = true;
                     break;
 
@@ -4072,7 +4075,9 @@ long do_mmu_update(
                         break;
                     rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
-                    if ( !rc )
+                    if ( !rc &&
+                         (page->u.inuse.type_info & PGT_count_mask) >
+                         1 + !!(page->u.inuse.type_info & PGT_pinned) )
                         flush_linear_pt = true;
                     if ( !rc && pt_owner->arch.pv.xpti )
                     {
@@ -4082,7 +4087,7 @@ long do_mmu_update(
                                     mfn) )
                         {
                             local_in_use = true;
-                            flush_root_pt_local = true;
+                            get_cpu_info()->root_pgt_changed = true;
                         }
 
                         /*
@@ -4199,8 +4204,8 @@ long do_mmu_update(
     /*
      * Perform required TLB maintenance.
      *
-     * This logic currently depend on flush_linear_pt being a superset of the
-     * flush_root_pt_* conditions.
+     * This logic currently depends on flush_linear_pt being a superset of the
+     * flush_root_pt_others condition.
      *
      * pt_owner may not be current->domain.  This may occur during
      * construction of 32bit PV guests, or debugging of PV guests.  The
@@ -4219,7 +4224,7 @@ long do_mmu_update(
      * pt_owner->dirty_cpumask), and/or all *other* dirty CPUs as there are
      * references we can't account for locally.
      */
-    if ( flush_linear_pt /* || flush_root_pt_local || flush_root_pt_others */ )
+    if ( flush_linear_pt /* || flush_root_pt_others */ )
     {
         unsigned int cpu = smp_processor_id();
         cpumask_t *mask = pt_owner->dirty_cpumask;
@@ -4236,12 +4241,8 @@ long do_mmu_update(
             cpumask_copy(mask, pt_owner->dirty_cpumask);
             __cpumask_clear_cpu(cpu, mask);
 
-            flush_local(FLUSH_TLB |
-                        (flush_root_pt_local ? FLUSH_ROOT_PGTBL : 0));
+            flush_local(FLUSH_TLB);
         }
-        else
-            /* Sanity check.  flush_root_pt_local implies local cpu is dirty. */
-            ASSERT(!flush_root_pt_local);
 
         /* Flush the remote dirty CPUs.  Does not include the local CPU. */
         if ( !cpumask_empty(mask) )
@@ -4249,8 +4250,8 @@ long do_mmu_update(
                        (flush_root_pt_others ? FLUSH_ROOT_PGTBL : 0));
     }
     else
-        /* Sanity check.  flush_root_pt_* implies flush_linear_pt. */
-        ASSERT(!flush_root_pt_local && !flush_root_pt_others);
+        /* Sanity check.  flush_root_pt_others implies flush_linear_pt. */
+        ASSERT(!flush_root_pt_others);
 
     perfc_add(num_page_updates, i);
 



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 10:58:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 10:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18250.43233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu15-0005ZY-Dm; Tue, 03 Nov 2020 10:58:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18250.43233; Tue, 03 Nov 2020 10:58:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu15-0005ZR-Ao; Tue, 03 Nov 2020 10:58:19 +0000
Received: by outflank-mailman (input) for mailman id 18250;
 Tue, 03 Nov 2020 10:58:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZu13-0005ZA-TZ
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 10:58:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4007d05e-76e6-4600-9784-edba02b9c6d9;
 Tue, 03 Nov 2020 10:58:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4BF9AC8C;
 Tue,  3 Nov 2020 10:58:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604401096;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a9sluiyHVwCFug4AwAJlFQ8aR4Cf8H0Ya34y3kuy2Mo=;
	b=C3ynvVrco/gurFNHp6ftuYtcaLcHxm7KFkeJBHVfSJ3BX8jQJIEEudeofRgArYkYtCrn2D
	WFBXmg94LarjaLN5OvxcmKXU6tj6ROgmYdGEqB8wZYr/GfPNt68i6HWJpzwrIMnjng1Uhf
	00Ef5QNorpMHb3neyyEyplbXyj6bNOc=
Subject: [PATCH 5/5] x86/PV32: avoid TLB flushing after mod_l3_entry()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Message-ID: <181be414-49b4-3bd3-bb55-cef443191e60@suse.com>
Date: Tue, 3 Nov 2020 11:58:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

32-bit guests may not depend upon the side effect of using ordinary
4-level paging when running on a 64-bit hypervisor. For L3 entry updates
to take effect, they have to use a CR3 reload. Therefore there's no need
to issue a paging-structure-invalidating TLB flush in this case.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4066,7 +4066,8 @@ long do_mmu_update(
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
                     if ( !rc &&
                          (page->u.inuse.type_info & PGT_count_mask) >
-                         1 + !!(page->u.inuse.type_info & PGT_pinned) )
+                         1 + !!(page->u.inuse.type_info & PGT_pinned) &&
+                         !is_pv_32bit_domain(pt_owner) )
                         flush_linear_pt = true;
                     break;
 



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 11:06:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 11:06:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18268.43246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu96-0006Zr-A7; Tue, 03 Nov 2020 11:06:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18268.43246; Tue, 03 Nov 2020 11:06:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZu96-0006Zk-74; Tue, 03 Nov 2020 11:06:36 +0000
Received: by outflank-mailman (input) for mailman id 18268;
 Tue, 03 Nov 2020 11:06:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wgX3=EJ=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kZu95-0006Zf-2k
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 11:06:35 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00046dc7-a81e-4813-9b21-eaffd9bd9dc1;
 Tue, 03 Nov 2020 11:06:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604401593;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=FZPXhwelPhnrf+4ehSPMNoGpYrwJNkm/DXpvYHfd2qw=;
  b=Lx5WgpJEhfDe3gIUNrJHkbcL2U4NBGRDU9cQqoTpYAqPl1RZmdHN4Fvj
   6PtJpuAvEXmPzQYUA6GI/JrsuZ0fTPv/Due8rKkjkgPnAlRgy6T6oDyYl
   UQGu3xNAJz7hbmsVF4pPolPq036V7oc0bqP/sAH5ekF7dylnfVdlvZJyY
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: wcxy9Q6E0IN2rhwcNQ2fL8Wpaz2P7mTkdsOeh8ZFwAxxT14BZTN4m3PMeIDjywMYXeFN2RN8/T
 0F7u/Duk0fEgagtyudNVfaGmQdzaWt2aWTa9LVtqVClqyGUDjJSWyFEBUZvq5TGSQGENGYgN6a
 +uj2T/bwTBMlF9epg+VFaDbPGwPsqmHurc6uDB6gT0c8KQrWg5S0vekT5lKYOhTyIt7jhbCijz
 1KI7REgaG1Nh2T3WlncHrA3ahyXcTK+j41ytr4ox/NaZKaf73y9PcpQvvRfb2Y2Mn3hk+3BVDB
 TBs=
X-SBRS: None
X-MesageID: 30333882
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,447,1596513600"; 
   d="scan'208";a="30333882"
Date: Tue, 3 Nov 2020 11:06:29 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Wei Liu <wl@xen.org>
CC: Jason Andryuk <jandryuk@gmail.com>, <xen-devel@lists.xenproject.org>, "Ian
 Jackson" <iwj@xenproject.org>
Subject: Re: [PATCH v2] libxl: Add suppress-vmdesc to QEMU machine
Message-ID: <20201103110629.GH2214@perard.uk.xensource.com>
References: <20201029190332.31161-1-jandryuk@gmail.com>
 <20201103104844.t73k4rukp7jezk7d@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201103104844.t73k4rukp7jezk7d@liuwe-devbox-debian-v2>

On Tue, Nov 03, 2020 at 10:48:44AM +0000, Wei Liu wrote:
> On Thu, Oct 29, 2020 at 03:03:32PM -0400, Jason Andryuk wrote:
> > The device model state saved by QMP xen-save-devices-state doesn't
> > include the vmdesc json.  When restoring an HVM, xen-load-devices-state
> > always triggers "Expected vmdescription section, but got 0".  This is
> > not a problem when restore comes from a file.  However, when QEMU runs
> > in a linux stubdom and comes over a console, EOF is not received.  This
> > causes a delay restoring - though it does restore.
> > 
> > Setting suppress-vmdesc skips looking for the vmdesc during restore and
> > avoids the wait.
> > 
> > QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
> > sets it manually both for xenfv and, with xen_platform_pci=0, when
> > -machine pc is used.
> > 
> > QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
> > submission" added suppress-vmdesc in QEMU 2.3.
> > 
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

:-(, sorry, I never received that email.

> > ---
> > QEMU 2.3 came out in 2015, so setting suppress-vmdesc unconditionally
> > should be okay...  Is this okay?


> Anthony, what is your opinion on this?

That it's fine, and I actually asked for the libxl patch. For reference,
QEMU 2.3 is in qemu-xen-4.7.
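For illustration, a toy sketch (Python, not the actual libxl C code; the
function name is made up) of the machine-argument selection the patch
describes - xenfv normally, pc when xen_platform_pci=0, with
suppress-vmdesc=on appended in both cases:

```python
def qemu_machine_arg(xen_platform_pci: bool) -> str:
    """Toy model of the -machine value libxl would pass to QEMU.

    The helper name and shape are illustrative only; they do not
    reflect the actual libxl implementation.
    """
    machine = "xenfv" if xen_platform_pci else "pc"
    # suppress-vmdesc exists since QEMU 2.3 (commit 9850c6047b8b), so it
    # is safe to set unconditionally for the QEMU versions in question.
    return machine + ",suppress-vmdesc=on"

print(qemu_machine_arg(True))   # xenfv,suppress-vmdesc=on
print(qemu_machine_arg(False))  # pc,suppress-vmdesc=on
```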

So,
Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 11:14:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 11:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18275.43258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZuGw-0007W6-5S; Tue, 03 Nov 2020 11:14:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18275.43258; Tue, 03 Nov 2020 11:14:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZuGw-0007Vz-1k; Tue, 03 Nov 2020 11:14:42 +0000
Received: by outflank-mailman (input) for mailman id 18275;
 Tue, 03 Nov 2020 11:14:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=41xg=EJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kZuGv-0007Vu-L2
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 11:14:41 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7799992a-4755-4e16-bb0b-524553b67d03;
 Tue, 03 Nov 2020 11:14:40 +0000 (UTC)
X-Inumbo-ID: 7799992a-4755-4e16-bb0b-524553b67d03
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604402080;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=mrdvdn5dLPJM1XqjHNf/63I2y3o/edJRShyfDo0fAE4=;
  b=BGSSyfxez2eB9op3D/L783jdH7siC8/OAHdXMGcxgbF17IplzzOX8bpD
   JcE3SGIKudg2vNLZBcURaoRlbr7AyhOCwoo+Q9I9TZ3Qe+52DkJwUfY4G
   FBYgkleZsFX3ZLddLNvRZxnj7jKmCUD5Gms+0iI8UVGErrwy1zT6NSVZD
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: /gGnsiUzLgUOel4dU4OjzAt6FnV5GIz5kF9b5Vfhn+w8i0TykcZzyzeHJ4Xy8vAln4HPjhlDVU
 ozGiE8qRKe3fHYWLhncQNoOGdo6y9Ki0cLm2tQxb0ajUdbnhcjZEDUBQhcnwF6vgaPv2HW7YFi
 OWcuHhjmhva+V7TE9vVi2AtnFo9fzGkUp8uK4447RpPURQ80nc3y2Xgl9KUeE5E43RvI0VyXTS
 eDqxX6FI01pPW/cEA8ckHfmeZLgMXITZGu3RkAL8DSc5Q6bsWSYdI5aifo8xvwoVLHqdW6WeDL
 dTY=
X-SBRS: None
X-MesageID: 31458756
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,447,1596513600"; 
   d="scan'208";a="31458756"
Subject: Re: [PATCH 4/5] x86/PV: restrict TLB flushing after
 mod_l[234]_entry()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
 <eac90675-bcf3-3818-1f5f-f9825349e22c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a10a2dd1-f536-1182-9106-61ec5741f32e@citrix.com>
Date: Tue, 3 Nov 2020 11:14:34 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <eac90675-bcf3-3818-1f5f-f9825349e22c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 03/11/2020 10:57, Jan Beulich wrote:
> Just like we avoid invoking remote root pt flushes when all uses of an
> L4 table can be accounted for locally, the same can be done for all of
> L[234] for the linear pt flush when the table is a "free floating" one,
> i.e. it is pinned but not hooked up anywhere. While this situation
> doesn't occur very often, it can be observed.
>
> Since this breaks one of the implications of the XSA-286 fix, drop the
> flush_root_pt_local variable again and set ->root_pgt_changed directly,
> just like it was before that change.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> While adjusting the big comment that was added for XSA-286 I wondered
> why it talks about the "construction of 32bit PV guests". How are 64-bit
> PV guests different in this regard?

Because the sole caller is move_l3_below_4G(), used only for 32-bit PV
guests, which don't support folded CR3s.

It's not impossible that future changes to PV construction might change
this, but it is highly unlikely in practice.
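As a toy model (Python, not Xen's actual C; the predicate and its
parameters are made up for illustration) of the condition the patch
description calls "free floating" - pinned but not hooked into any root
table, hence no remote linear-pt flush is needed:

```python
def needs_remote_linear_flush(pinned: bool, hook_count: int) -> bool:
    """Toy predicate: a pinned table hooked up nowhere ('free floating')
    cannot be reachable through another CPU's linear pagetables, so the
    remote flush can be skipped.  Names here are illustrative only."""
    free_floating = pinned and hook_count == 0
    return not free_floating

print(needs_remote_linear_flush(True, 0))   # False: free floating, skip
print(needs_remote_linear_flush(True, 2))   # True: hooked up, must flush
```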

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 12:43:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 12:43:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18293.43270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZvet-0006lz-Au; Tue, 03 Nov 2020 12:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18293.43270; Tue, 03 Nov 2020 12:43:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZvet-0006ls-6s; Tue, 03 Nov 2020 12:43:31 +0000
Received: by outflank-mailman (input) for mailman id 18293;
 Tue, 03 Nov 2020 12:43:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZves-0006ln-H5
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 12:43:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08e357bf-7e22-4591-bf3b-bd2c891ecc85;
 Tue, 03 Nov 2020 12:43:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZvep-00022u-R2; Tue, 03 Nov 2020 12:43:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZvep-0004tJ-FI; Tue, 03 Nov 2020 12:43:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZvep-0006LP-El; Tue, 03 Nov 2020 12:43:27 +0000
X-Inumbo-ID: 08e357bf-7e22-4591-bf3b-bd2c891ecc85
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KxSwQ5LPereUCuodCFh/0e8ETOFjBldV9lO4mavYsNI=; b=Yz4tNaUopydJmp1ibYaOGkFgmk
	LqvMTaCLi2O6oQl2ufFPH5zaQx7yu/LUWwpHkBLNXbxueigXgd/4hB8jz7VLvyXCRWtINnkYL1AaE
	7B+h+DhGnj5VPmQ2pib4LGsrpzni9CiRQRjyId1R15KX0TPvGKtDskau/QMA2SqWUpfY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156373-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156373: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-destroy:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 12:43:27 +0000

flight 156373 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156373/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-saverestore.2 fail in 156354 pass in 156373
 test-amd64-i386-libvirt-xsm  22 guest-destroy    fail in 156354 pass in 156373
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 156354 pass in 156373
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail pass in 156339
 test-amd64-amd64-xl-rtds     18 guest-localmigrate         fail pass in 156354

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 156354 like 156339
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156354
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156354
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156354
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156354
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156354
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156354
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156354
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156354
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156354
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156354
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156354
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156373  2020-11-03 01:53:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 12:48:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 12:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18302.43284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZvjV-0006zA-Ve; Tue, 03 Nov 2020 12:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18302.43284; Tue, 03 Nov 2020 12:48:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZvjV-0006z3-Sd; Tue, 03 Nov 2020 12:48:17 +0000
Received: by outflank-mailman (input) for mailman id 18302;
 Tue, 03 Nov 2020 12:48:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZvjU-0006ye-Ka
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 12:48:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 00b55428-4144-4d6d-aca8-6e0c8b08d64c;
 Tue, 03 Nov 2020 12:48:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1423EABF4;
 Tue,  3 Nov 2020 12:48:10 +0000 (UTC)
X-Inumbo-ID: 00b55428-4144-4d6d-aca8-6e0c8b08d64c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604407690;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Bb6NVLYqpVmYabYL6BOTpVfhp2o8pN3uG0+yWcEyIQ8=;
	b=G+9iN9EGEt6TnbIECLM2YqgbvAzGk47zid855sfHF4PgqqmF5K3U27XrX2ujEMyQxVRXbD
	2flQS3sG8GFvMY1QnHtiiZ3ywL0vxOKTNlREXB9l3FCwXQd7purATD5WBZsmr+GIySmn4E
	9hBwVOfGMASJ3ANInbnFnhgALL68Pgc=
From: Jan Beulich <jbeulich@suse.com>
Subject: Xen 4.13.2 released
To: xen-announce@lists.xenproject.org
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <ed219f15-479b-5d06-c835-eb4f4c64db3a@suse.com>
Date: Tue, 3 Nov 2020 13:48:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All,

I am pleased to announce the release of Xen 4.13.2. This is available
immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.13
(tag RELEASE-4.13.2) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-13-series/xen-project-4-13-2/
(where a list of changes can also be found).
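For readers fetching the release from git, the announcement above amounts to cloning the stable tree and checking out the release tag. This is an illustrative sketch; the clone URL is the usual public mirror of the repository linked above:

```shell
# Clone the Xen tree and check out the 4.13.2 release tag.
git clone https://xenbits.xen.org/git-http/xen.git
cd xen
git checkout RELEASE-4.13.2
```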

We recommend that all users of the 4.13 stable series update to this
latest point release.

Regards, Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 13:23:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 13:23:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18309.43297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZwHg-0001we-Nz; Tue, 03 Nov 2020 13:23:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18309.43297; Tue, 03 Nov 2020 13:23:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZwHg-0001wX-K8; Tue, 03 Nov 2020 13:23:36 +0000
Received: by outflank-mailman (input) for mailman id 18309;
 Tue, 03 Nov 2020 13:23:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWmA=EJ=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kZwHf-0001wS-Hu
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 13:23:35 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c98a69c-0a88-4b3b-ba66-e22264191470;
 Tue, 03 Nov 2020 13:23:33 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604409807635678.5731888745144;
 Tue, 3 Nov 2020 05:23:27 -0800 (PST)
X-Inumbo-ID: 0c98a69c-0a88-4b3b-ba66-e22264191470
ARC-Seal: i=1; a=rsa-sha256; t=1604409810; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=KGF0NMy1kAv0vXrL4Tw8mkirUZk3GIV6779qQyYE4TIq1EcVG+pQ2fSQy+3+fYPby/lWHK8juXe4YpW0NKhUidGVc5h+qNP5d/digAi9Au6nLGK1LlRRLsKn8nM2LPjZPULwX14AdR2QN4PI97gszCnjNn9XEUYliWFEgnTIz14=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604409810; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=ZlTOgByCt7DOl2He7UDNW7nUhQJH71ZDkAVL6UNd+jg=; 
	b=Glc/+f0pw5k08U6xZlygS0D6GHGZE2kM9RO3zmiIESrm8kvesadfUtGRuUuYnOALCLQWTHDmXFzTQI2IHxu2cvxXnQR3nLTJZnLPIAIHdhZLgOH7xZWKNDfjLVrDajGZMm5TvJYBI2jvnuLRUQSsuobA/d8rrLOaqOmcPoUlQVs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604409810;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=ZlTOgByCt7DOl2He7UDNW7nUhQJH71ZDkAVL6UNd+jg=;
	b=KMJPea6jWejrwQiD5n79SyzwKRZ79L3nYleZMWvbjpooYJmW4aGZKlqOkisL2awM
	GjHAMKWaQO/PUedj6p+gIsan4WQWMbny+WKwwiNGvXALrTJb/uArMgFilrbhlqUL+7R
	Gd4V/qY9rgAhJnM466KTs6k3CF30cXu73DlDFMWE=
To: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <JGross@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
 <20201031025442.GF1447@mail-itl>
 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
 <20201031040817.GG1447@mail-itl>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <2cbcca0b-8415-8f98-46d8-12279703cbbe@qubes-os.org>
Date: Tue, 3 Nov 2020 14:23:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201031040817.GG1447@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="etjCQT4fRPKgKdCift001N2KNluWz3KHp"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--etjCQT4fRPKgKdCift001N2KNluWz3KHp
Content-Type: multipart/mixed; boundary="Fgc5yE9xkpOvZvMEjiYp7UpYOjhRMHUXm";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <JGross@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Message-ID: <2cbcca0b-8415-8f98-46d8-12279703cbbe@qubes-os.org>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
 <20201031025442.GF1447@mail-itl>
 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
 <20201031040817.GG1447@mail-itl>
In-Reply-To: <20201031040817.GG1447@mail-itl>

--Fgc5yE9xkpOvZvMEjiYp7UpYOjhRMHUXm
Content-Type: multipart/mixed;
 boundary="------------5424715F85275EB09E1D04C2"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5424715F85275EB09E1D04C2
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 10/31/20 at 5:08 AM, marmarek@invisiblethingslab.com wrote:
> On Sat, Oct 31, 2020 at 04:27:58AM +0100, Dario Faggioli wrote:
>> On Sat, 2020-10-31 at 03:54 +0100, marmarek@invisiblethingslab.com
>> wrote:
>>> On Sat, Oct 31, 2020 at 02:34:32AM +0000, Dario Faggioli wrote:
>>> (XEN) *** Dumping CPU7 host state: ***
>>> (XEN) Xen call trace:
>>> (XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
>>> (XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
>>> (XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
>>> (XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
>>> (XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
>>> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
>>> (XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30
>>>
>>> And so on, for (almost?) all CPUs.
>>
>> Right. So, it seems like a live (I would say) lock. It might happen on
>> some resource which is shared among domains. And introduced (the
>> livelock, not the resource or the sharing) in 4.14.
>>
>> Just giving a quick look, I see that vmx_do_resume() calls
>> vmx_clear_vmcs() which calls on_selected_cpus() which takes the
>> call_lock spinlock.
>>
>> And none of these seems to have received much attention recently.
>>
>> But this is just a really basic analysis!
>
> I've looked at on_selected_cpus() and my understanding is this:
> 1. take call_lock spinlock
> 2. set function+args+what cpus to be called in a global "call_data" variable
> 3. ask CPUs to execute that function (smp_send_call_function_mask() call)
> 4. wait for all requested CPUs to execute the function, still holding
> the spinlock
> 5. only then - release the spinlock
>
> So, if any CPU does not execute the requested function for any reason,
> it will keep the call_lock locked forever.
>
> I don't see any CPU waiting on step 4, but also I don't see call traces
> from CPU3 and CPU8 in the log - that's because they are in guest (dom0
> here) context, right? I do see "guest state" dumps from them.
> The only three CPUs that do log Xen call traces and are not waiting on
> that spin lock are:
>
> CPU0:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040240f89>] R vcpu_unblock+0x9/0x50
> (XEN)    [<ffff82d0402e0171>] S vcpu_kick+0x11/0x60
> (XEN)    [<ffff82d0402259c8>] S tasklet.c#do_tasklet_work+0x68/0xc0
> (XEN)    [<ffff82d040225a59>] S tasklet.c#tasklet_softirq_action+0x39/0x60
> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
> (XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30
>
> CPU4:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040227043>] R set_timer+0x133/0x220
> (XEN)    [<ffff82d040234e90>] S credit.c#csched_tick+0/0x3a0
> (XEN)    [<ffff82d04022660f>] S timer.c#timer_softirq_action+0x9f/0x300
> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
> (XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20
>
> CPU14:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040222dc0>] R do_softirq+0/0x10
> (XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20
>
> I'm not sure if any of those is related to that spin lock,
> on_selected_cpus() call, or anything like that...
>

Hi,
Some new info :). Marek has sent me a patch (https://gist.github.com/marmarek/810ae5c079d218928535514b08a07716) to help in debugging this issue. With the default credit2 scheduler, here are some logs from two successive crashes:
- https://gist.github.com/fepitre/76a1e154249c7326c743d6a6d004a67b
- https://gist.github.com/fepitre/ab00091980cb8110fb3d349aecc3a644

Also, I've been testing the "dom0=pvh" option and the system has remained stable for more than 24 hours. At least, I've not experienced any freeze since I've been using this option.

I hope that gives some hints.

Regards,
Frédéric

--------------5424715F85275EB09E1D04C2--

--Fgc5yE9xkpOvZvMEjiYp7UpYOjhRMHUXm--

--etjCQT4fRPKgKdCift001N2KNluWz3KHp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+hWcsACgkQSEAQtc3F
duKswQ//aBxtEa5DLW5ySsZpKml9QRnel5b9ENEEWGspUasvCSklBF0/rGejJKIn
R3da0fsEtdrTmgzFw5atiC30+8iADmBaNpGEbUuFays4t8scuEytv2EpeSko4qgo
QpvX6QDVPXxWOLgCCw1BAX6RbsQBoJw/8DE9dl41DYhCfXuMKc+CMmsFEHleQwPC
TUxBkgE9RRMjZnniBd+lcrl3DPqyCEuJH75hjBO7yfohdb2F+F2aWVgbKKSrB14V
gCm+XvkvCz+4o4Qu+T51fNcLQvhmayoU9m4Plg9OeSMlTBewGbkpsOv1csPOm502
/CXe8bAapxj5VX2EVqU3JOAspamV/GJF37r84dGcggXijf6Yt+CL+fHnNhX7oQ+k
2QWSwrBvn9V53syCLy6HC3LjsAw7X0d3DRMuPLgqMhuqmdyJCZ89Nqw4/0TJk5Av
oEN/TGO6LcgXqQHjf8whfPLAhk5tZzkaTZKmyVrsu7kCKsKWIk/C0VZteQ4o+wUE
Kc3cMoYm9KaYVvgklCYztdN9S/WJjb/7z1+NmpCPLJM7z6s+FCe5LG6TbhVLi4Dv
uRqb/oxjrCawKL2MoHvkgynYJ80qTUxfAe9HLkBFNWlqVPCDeI/gpzYg0/7zIDrJ
szFpln5WzaV7BqrEYsFhOjoD08uhOVQraTvfLdltCHP+n7Uos/M=
=INQm
-----END PGP SIGNATURE-----

--etjCQT4fRPKgKdCift001N2KNluWz3KHp--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 14:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 14:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18341.43329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZx5N-0006Qb-R7; Tue, 03 Nov 2020 14:14:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18341.43329; Tue, 03 Nov 2020 14:14:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZx5N-0006QU-O4; Tue, 03 Nov 2020 14:14:57 +0000
Received: by outflank-mailman (input) for mailman id 18341;
 Tue, 03 Nov 2020 14:14:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UbT5=EJ=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kZx5M-0006QP-5t
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 14:14:56 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bb6a2ad-fda9-4f5a-84cf-420f02f1a27f;
 Tue, 03 Nov 2020 14:14:55 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id 126so22343094lfi.8
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 06:14:54 -0800 (PST)
X-Inumbo-ID: 5bb6a2ad-fda9-4f5a-84cf-420f02f1a27f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Y4scG6pUiFINvPm6eKyKAnfnRZPW9pBwBTyJ9KNQu5Q=;
        b=VmyHQS9mChaxsl90FnoFH9+rrTqn0jq2qilY53BjZBwYH2XR1TNUtDLunezPjsUZaE
         UZjLI3iUZ05iFirN4ZrlMKcZQBqMQFKZ+MdOHoUjXKGPdfaup4trry8leSNKxckcABD/
         gOSF5dh6ioQV+/+QI3+V2+EUPXZuDzHH6vM4CDVcvZIZnkjTb4RB3yMEdKS0KQ9xSPPZ
         sw0AUdmamS9oIe5jF7URKJFdAkGF89nyfUQsWKsUUD1yNrRCEwREleNZqt9MeKZZbwUI
         dMOoz3npURlBx/mjBjcE7DThOt94JJli0wC9riPs5Bzy3zAdutZ9FkrGwEEHQOplqvxo
         hHSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Y4scG6pUiFINvPm6eKyKAnfnRZPW9pBwBTyJ9KNQu5Q=;
        b=NPnK4gb4gud6PrK4KgHsKfr5p9r1QMu/IcGHZbz5KMEZfkwAJ/QnCNPUucNz/0DgYG
         OJ+thJF3DUgM4G+Fhv3aOtMNRU6f5GuKxqlY9G3HF5a36NDTao0KMsITbwnIv7dlNFWK
         Kc0+qyZVVeaUhNJED3YZf5MXHg4ydYNUvg3f0bnluFIAm1FSELdwYovhwE15ap5ly7Q7
         XRQeUONqZ0DjUqQfJGJls06rxenTAxxTDFwOXO/Mh1BENwKKOtvVIIJ9/Rc/Wm8F7pR6
         kzb7x/wuYZ06Frb1wDtZE8vftXa2ndJKwExdnHHpYnmCubsj0r/XKwylI0R+RtD3uSLb
         v/0Q==
X-Gm-Message-State: AOAM532xQAkkcOSIGRwPvZIBWoP0GXyxPre3L6pEYI5BYuhoObNlB8wC
	BbaHZQt8rtJDWon6FFvyanAdyGA3H07KiKfKbSk=
X-Google-Smtp-Source: ABdhPJzawSsh37agbgRkhMRYqz0NpLWGwTIn7QqM8AiqHUkmdqP0WyDDF2ZNo/a+nODYqbiNGbgR5Pvs2BLeG69QEhI=
X-Received: by 2002:ac2:47fc:: with SMTP id b28mr7422181lfp.454.1604412893787;
 Tue, 03 Nov 2020 06:14:53 -0800 (PST)
MIME-Version: 1.0
References: <20201029190332.31161-1-jandryuk@gmail.com> <20201103104844.t73k4rukp7jezk7d@liuwe-devbox-debian-v2>
 <20201103110629.GH2214@perard.uk.xensource.com>
In-Reply-To: <20201103110629.GH2214@perard.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 3 Nov 2020 09:14:41 -0500
Message-ID: <CAKf6xpv9JZPhUjCFKj_gneSqMsiWQ99LFUgwE9-wQtUXZw-Q0Q@mail.gmail.com>
Subject: Re: [PATCH v2] libxl: Add suppress-vmdesc to QEMU machine
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Wei Liu <wl@xen.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	Ian Jackson <iwj@xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 3, 2020 at 6:06 AM Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> On Tue, Nov 03, 2020 at 10:48:44AM +0000, Wei Liu wrote:
> > On Thu, Oct 29, 2020 at 03:03:32PM -0400, Jason Andryuk wrote:
> > > The device model state saved by QMP xen-save-devices-state doesn't
> > > include the vmdesc json.  When restoring an HVM, xen-load-devices-state
> > > always triggers "Expected vmdescription section, but got 0".  This is
> > > not a problem when restore comes from a file.  However, when QEMU runs
> > > in a linux stubdom and comes over a console, EOF is not received.  This
> > > causes a delay restoring - though it does restore.
> > >
> > > Setting suppress-vmdesc skips looking for the vmdesc during restore and
> > > avoids the wait.
> > >
> > > QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
> > > sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
>>> used.
> > >
> > > QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
> > > submission" added suppress-vmdesc in QEMU 2.3.
> > >
> > > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
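[Editor's note] The QEMU machine property being discussed can also be set by hand on a QEMU command line. The invocation below is an illustrative sketch (memory size and disk image are placeholders, not the exact arguments libxl generates):

```shell
# Start a "pc" machine with vmdesc suppressed, so that
# xen-load-devices-state does not wait for a vmdesc section on restore.
qemu-system-i386 \
    -machine pc,suppress-vmdesc=on \
    -m 1024 \
    -drive file=guest.img,format=raw
```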
>
> :-(, sorry, I never received that email.

Sorry about that.  I included your address, but Gmail said:
"""
Address not found

Your message wasn't delivered to anthony.perard@citrix.com because the
address couldn't be found, or is unable to receive mail.

The response from the remote server was:

550 Too many invalid recipients
"""

> > > ---
> > > QEMU 2.3 came out in 2015, so setting suppress-vmdesc unilaterally
> > > should be okay...  Is this okay?
>
>
> > Anthony, what is your opinion on this?
>
> That it's fine, and I actually asked for the libxl patch. For reference,
> QEMU 2.3 is in qemu-xen-4.7.
>
> So,
> Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

Jason


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 14:15:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 14:15:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18342.43341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZx6C-0006Vh-4Q; Tue, 03 Nov 2020 14:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18342.43341; Tue, 03 Nov 2020 14:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZx6C-0006Va-1H; Tue, 03 Nov 2020 14:15:48 +0000
Received: by outflank-mailman (input) for mailman id 18342;
 Tue, 03 Nov 2020 14:15:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ws1C=EJ=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kZx6A-0006VO-US
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 14:15:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d42fa0d2-2b25-4e99-925e-86b4702fb666;
 Tue, 03 Nov 2020 14:15:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61B36ABF4;
 Tue,  3 Nov 2020 14:15:45 +0000 (UTC)
X-Inumbo-ID: d42fa0d2-2b25-4e99-925e-86b4702fb666
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604412945;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=viJwzLFB/ThmL9uDbtzwZKdx+ghvo51wknw6OEPeAuw=;
	b=qlhovEST3SDrr3DP5MSZECg1Cngh2Hz6EuN4AT5QOd7K882jL1bKok/dfkLOyHfHP/gaXu
	0MIv06mGl2pOyilPOoAfaOEx67MFne+q3VAWyoID2cJfRz7MmNYBXj/UdUkdgSaAOkBc3K
	qSUwJl0WE679H+2JY3rs1Opm+9ggHtc=
Message-ID: <f9ceee15b46bfe66d126644986c25ced1ed70d0b.camel@suse.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?Fr=E9d=E9ric?= Pierret <frederic.pierret@qubes-os.org>, 
	"marmarek@invisiblethingslab.com"
	 <marmarek@invisiblethingslab.com>
Cc: Juergen Gross <JGross@suse.com>, "George.Dunlap@citrix.com"
	 <George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>, "andrew.cooper3@citrix.com"
	 <andrew.cooper3@citrix.com>
Date: Tue, 03 Nov 2020 15:15:44 +0100
In-Reply-To: <2cbcca0b-8415-8f98-46d8-12279703cbbe@qubes-os.org>
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
	 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
	 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
	 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
	 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
	 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
	 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
	 <20201031025442.GF1447@mail-itl>
	 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
	 <20201031040817.GG1447@mail-itl>
	 <2cbcca0b-8415-8f98-46d8-12279703cbbe@qubes-os.org>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-7qIAXVtpKc+TNdafpvN8"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-7qIAXVtpKc+TNdafpvN8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-11-03 at 14:23 +0100, Frédéric Pierret wrote:
> Hi,
>
Hi,

> Some new info :). Marek has sent me a patch (
> https://gist.github.com/marmarek/810ae5c079d218928535514b08a07716) to
> help in debugging this issue.
>
Ok, thanks for the update (and thanks Marek for putting together the
patch).

> With default credit2 as scheduler, here are some logs from two
> successive crashes:
> - https://gist.github.com/fepitre/76a1e154249c7326c743d6a6d004a67b
> - https://gist.github.com/fepitre/ab00091980cb8110fb3d349aecc3a644
>
Right, this is what you see when you poke at the debug keys, after a
freeze. It would be interesting to see if there is any trace of the
debug output added in the patch _before_ the crash... But I appreciate
that it's not really easy to achieve this.

> Also, I've been testing the "dom0=pvh" option and the system has
> remained stable for more than 24h. At least, I've not experienced any
> freezes since I've been using this option.
>
PVH and what scheduler?

I think it's quite clear that this is not a Credit2 bug, as it shows on
Credit1 as well, but AFAIUI, with Credit2 it shows up more often?

Now we also have a (weak?) dependency on the type of dom0?

What's the configuration that is currently working?
Credit2 + PVH ?
Credit1 + PVH ?
Whatever + PVH ?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-7qIAXVtpKc+TNdafpvN8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+hZhAACgkQFkJ4iaW4
c+5iLQ/+PJo8dRn3FzNtX1sYTJqKL11g+HK/6jea8VwwYRCF9FBjrbGPxrXwqCog
XviJZuAdDLDTCVPoMX4QCPAypGDnAoIK1TJgt5DzKNrvy/IS1vgMo8B33Cw8uz4x
k65HyAXCAjW3yK5FUCEOjSTQhNIigKuKO/dHiMpfwnBAJjh1mCJEQIUpyR1BD4VZ
iJNjM6Cjiy2GGv966P9mkKF3aGRqI3KtrKq4Hv1GHCPVfEHKYEz8CvfQGTNjKhwo
TSdx8piqO/DF40h7dKRub3ytUa/56CiIU+YAJYXSQNAdxKY8NiVTBm86KXeFS9Wr
xneU0leABfOTcGttYnG2BzVlbZ9aqR21RGsBfWK/MkB7XmYAbB97cx/Ix3Hi2NLi
ZKssk31ImqQoTbVv+kEukmssA6TRvf14ohkqr3TvDP9UKnpsNSOgBAJWn4u55XSH
i+cIzf4Fr6kxHrNl3G2YBNxCcf2ejxh/1CuZi4Nu8u3PPdo9UE78AvHJ1KXSsiZy
RsgWyLOeQVLoZDlOPrFavzjWRLR1LG5WzedfX8Aa5Q/TGEqWxY4CdZ5MRm2FtvVr
wMPryRaMwIUqYJ6W5ZRC+ZXui2xv+baeUsYprC4nXVoe/gKerBxd/2zx//9RWjRR
BDzHO0pjUXcAtAxQbr8cfoQCASJTW54LtHU4e9zgkxS6VB9IcQU=
=k401
-----END PGP SIGNATURE-----

--=-7qIAXVtpKc+TNdafpvN8--



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 14:30:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 14:30:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18354.43353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZxKQ-0008F8-Bb; Tue, 03 Nov 2020 14:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18354.43353; Tue, 03 Nov 2020 14:30:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZxKQ-0008F1-7v; Tue, 03 Nov 2020 14:30:30 +0000
Received: by outflank-mailman (input) for mailman id 18354;
 Tue, 03 Nov 2020 14:30:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OHVE=EJ=linaro.org=masami.hiramatsu@srs-us1.protection.inumbo.net>)
 id 1kZxKP-0008Ew-9w
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 14:30:29 +0000
Received: from mail-yb1-xb42.google.com (unknown [2607:f8b0:4864:20::b42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea921a48-e733-47ec-9adb-c0389390518f;
 Tue, 03 Nov 2020 14:30:28 +0000 (UTC)
Received: by mail-yb1-xb42.google.com with SMTP id o70so15018120ybc.1
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 06:30:28 -0800 (PST)
X-Inumbo-ID: ea921a48-e733-47ec-9adb-c0389390518f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=wAq5NZ6AIfbVNF6x0628HtJVRbDZ70m+AsEhKikldfE=;
        b=SCqKreICtHxu9tfvcert9PvuT1zMJOQwUO/jpc5PPM+Vkj3r8YLyiu90Jsi269w24v
         xFNJMmL1tHhplayxiyl1FzNHlwrGjy5zVgA/e3fnampf4VJjWPsH3YJ1DPiFrvATGhuR
         ghgzIiFFehwGFofiHRnk80IxukqjdCRqRjinfaRC6W7WXHW4SpRJDscKXuZxacEB26+h
         xdTbWln85QCCPYHeMpaG/292KlyFTLCgsdUzYcXGOTPJUKCLizvayb95vafONJqFxijV
         LCOgNYEZWlriYQ1So+cfN2zM51aqkVIcFXrWkOPAlhdmpRlqWHpTiTzoiwhfv8jqLUqy
         sB1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=wAq5NZ6AIfbVNF6x0628HtJVRbDZ70m+AsEhKikldfE=;
        b=tb2ZDknGXDdn8/LDbdcIOMSUSDhGo6HZkHewzTjDvU3fcmAXz6XlmNiZeS4Jw0YefF
         SAnSCl3x+VxMxI8p9kHSnCiF7ql7rk+1DM7q8aDi6kjcTNEocCM+txYFfztt1BU1G+Rg
         WeuB0NfF4Ki3qGAr1StTVJ//0nmn2DJFAus4VxaCKuyGPnxuBdkriEoDqx52ypTgUEiA
         +35yQjLParNfRJpV3rMWSf6cGmW17XHv8+MSiBH7jqxsZlRZ6gu3BLIZKAr9QXvDaROH
         1hmlhulPg1e418GN8UX/W2Ulh7jEX97GZ26YBKzHMjQyQuqcQwS0e1VSuvL5tIHzjFEn
         meZA==
X-Gm-Message-State: AOAM532b3cC3A9dTTTc3RTkxKcF4ymbPSie5mDnA6BA8zd1vQPEa3f0Z
	jOC+KF/T74lI6/kQYwlL3J6/RoipmzwmSQpjmUQtLw==
X-Google-Smtp-Source: ABdhPJxdZrQ3p66qfaPjSfvDDL8bDFYstlPsHVpbA79lF+3WjNQuxp7JYGBUTjjhRPUmFh0QmlTS2xsu34cuKyfYvr8=
X-Received: by 2002:a25:d0cb:: with SMTP id h194mr26939162ybg.52.1604413827690;
 Tue, 03 Nov 2020 06:30:27 -0800 (PST)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
 <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
 <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
 <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com> <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
In-Reply-To: <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
From: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Date: Tue, 3 Nov 2020 23:30:16 +0900
Message-ID: <CAA93ih3LcHPLbL7dPof-OAbM2HRJv0neQtMuYDYcYAOYDhVbKA@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Oleksandr,

Thanks for sharing the virtio-disk server, I also tested with a real USB
disk.

In the config file:

virtio = 1
vdisk = [ 'backend=Domain-0, disks=ro:/dev/sda1' ]

And it can be mounted in DomU:

[    2.892874] virtio_blk virtio0: [vda] 1875382927 512-byte logical
blocks (960 GB/894 GiB)
[    2.892925] vda: detected capacity change from 0 to 960196058624
...
root@develbox:~# cat /proc/partitions
major minor  #blocks  name

 254        0  937691463 vda
...
root@develbox:~# mount /dev/vda /mnt/
[  192.260968] EXT4-fs (vda): mounted filesystem with ordered data
mode. Opts: (null)
mount: /mnt: WARNING: source write-protected, mounted read-only.

So the "ro" flag also works correctly.
Great!

Thank you!
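As a quick sanity check, the capacity figures in the dmesg excerpt above are mutually consistent (this is plain arithmetic on the numbers quoted in the log, added here only for illustration):

```python
# Cross-check the virtio_blk capacity figures from the dmesg excerpt.
logical_blocks = 1875382927   # 512-byte logical blocks reported for vda
block_size = 512

capacity_bytes = logical_blocks * block_size
# Matches "detected capacity change from 0 to 960196058624"
print(capacity_bytes)

# /proc/partitions reports sizes in 1 KiB blocks (integer division)
# Matches the "937691463 vda" entry
print(capacity_bytes // 1024)
```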

On Sun, 1 Nov 2020 at 6:10, Oleksandr Tyshchenko <olekstysh@gmail.com> wrote:
>
>
>
> On Fri, Oct 30, 2020 at 1:34 PM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
>>
>> Hi Oleksandr,
>
>
> Hi Masami, all
>
> [sorry for the possible format issue]
>
>>
>> >> >
>> >> >       Could you tell me how can I test it?
>> >> >
>> >> >
>> >> > I assume it is due to the lack of the virtio-disk backend (which I
>> >> > haven't shared yet as I focused on the IOREQ/DM support on Arm in the
>> >> > first place).
>> >> > Could you wait a little bit, I am going to share it soon.
>> >>
>> >> Do you have a quick-and-dirty hack you can share in the meantime? Even
>> >> just on github as a special branch? It would be very useful to be able
>> >> to have a test-driver for the new feature.
>> >
>> > Well, I will provide a branch on github with our PoC virtio-disk backend
>> > by the end of this week. It will be possible to test this series with it.
>>
>> Great! OK I'll be waiting for the PoC backend.
>>
>> Thank you!
>
>
> You can find the virtio-disk backend PoC (shared as is) at [1].
>
> Brief description...
>
> The virtio-disk backend PoC is a completely standalone entity (IOREQ server)
> which emulates a virtio-mmio disk device.
> It is based on code from DEMU [2] (for IOREQ server purposes) and some code
> from kvmtool [3] to implement the virtio protocol, disk operations over the
> underlying H/W, and Xenbus code to be able to read the configuration from
> Xenstore (it is configured via the domain config file). The last patch in
> this series (marked as RFC) actually adds the required bits to the libxl code.
>
> Some notes...
>
> The backend can be used with the current V2 IOREQ series [4] without any
> modifications; all you need is to enable CONFIG_IOREQ_SERVER on Arm [5],
> since it is disabled by default within this series.
>
> Please note that in our system we run the backend in DomD (driver domain).
> I haven't tested it in Dom0, since in our system Dom0 is thin (without any
> H/W) and only used to launch VMs, so there is no underlying block H/W.
> But, I hope, it is possible to run it in Dom0 as well (at least there is
> nothing specific to a particular domain in the backend itself, nothing
> hardcoded). If you are going to run the backend in a domain other than Dom0
> you need to write your own policy (FLASK) for the backend (running in that
> domain) to be able to issue DM related requests, etc. Only for test purposes
> you could use this patch [6] that tweaks the Xen dummy policy (not for
> upstream).
>
> As I mentioned elsewhere you don't need to modify the guest Linux (DomU),
> just enable the VirtIO related configs.
> If I remember correctly, the following would be enough:
> CONFIG_BLK_MQ_VIRTIO=y
> CONFIG_VIRTIO_BLK=y
> CONFIG_VIRTIO=y
> CONFIG_VIRTIO_BALLOON=y
> CONFIG_VIRTIO_MMIO=y
> If I remember correctly, if your host Linux (Dom0 or DomD) version is >= 4.17
> you don't need to modify it either.
> Otherwise, you need to cherry-pick "xen/privcmd: add
> IOCTL_PRIVCMD_MMAP_RESOURCE" from upstream to be able to use the acquire
> interface for the resource mapping.
>
> We usually build the backend in the context of the Yocto build process and
> run it as a systemd service, but you can also build and run it manually
> (it should be launched before DomU creation).
>
> There are no command line options at all. Everything is configured via the
> domain configuration file:
> # This option is mandatory, it shows that VirtIO is going to be used by the guest
> virtio=1
> # Example of domain configuration (two disks are assigned to the guest, the latter is in readonly mode):
> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
>
> Hope that helps. Feel free to ask questions if any.
>
> [1] https://github.com/xen-troops/virtio-disk/commits/ioreq_v3
> [2] https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=summary
> [3] https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
> [4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3
> [5] https://github.com/otyshchenko1/xen/commit/ee221102193f0422a240832edc41d73f6f3da923
> [6] https://github.com/otyshchenko1/xen/commit/be868a63014b7aa6c9731d5692200d7f2f57c611
>
> --
> Regards,
>
> Oleksandr Tyshchenko



-- 
Masami Hiramatsu
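The vdisk configuration string quoted above follows the shape 'backend=<domain>, disks=<mode>:<device>[;<mode>:<device>...]'. A small illustrative parser for that shape (the helper name and the exact field grammar are assumptions inferred only from the two examples in the thread, not from the actual libxl code):

```python
# Hypothetical helper: parse a vdisk entry of the form shown in the thread,
# e.g. 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3'.
def parse_vdisk(entry):
    result = {"backend": None, "disks": []}
    for field in entry.split(","):
        key, _, value = field.strip().partition("=")
        if key == "backend":
            result["backend"] = value
        elif key == "disks":
            # Multiple disks are separated by ';', each as <mode>:<device>
            for disk in value.split(";"):
                mode, _, dev = disk.partition(":")
                result["disks"].append({"mode": mode, "device": dev})
    return result

print(parse_vdisk("backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3"))
```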


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 14:36:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 14:36:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18360.43364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZxQI-0008S6-2C; Tue, 03 Nov 2020 14:36:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18360.43364; Tue, 03 Nov 2020 14:36:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZxQH-0008Rz-Ui; Tue, 03 Nov 2020 14:36:33 +0000
Received: by outflank-mailman (input) for mailman id 18360;
 Tue, 03 Nov 2020 14:36:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWmA=EJ=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kZxQH-0008Ru-5e
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 14:36:33 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d2b808c-643e-450c-abb3-1ed1a3ff6ee8;
 Tue, 03 Nov 2020 14:36:32 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604414186744884.2632144736624;
 Tue, 3 Nov 2020 06:36:26 -0800 (PST)
X-Inumbo-ID: 8d2b808c-643e-450c-abb3-1ed1a3ff6ee8
ARC-Seal: i=1; a=rsa-sha256; t=1604414189; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=AOBScQlcPkeRF7pLurU7Ucs5OR+h6oxZN2CPB0K4wa4B9OqLTqxTRxTwqvw0qcj6IGFhdZc1ZEuCl/EEIHQYN1YjzfLeccBva53ZbWpn+ZB7JRCZphMkAxDgrbL8qxBhrXGi+jXicH9WA24ROXTu/skH6T1OIm1UGT9/9a5ojhw=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604414189; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=aK0qDr51RFIT+8ZZXZByEYXzbH1OSzNaKROwfwDemeU=; 
	b=SDuxn818s3ez7SgMCitduJmZ6aZ+pnTIzAnF8hQ/BPd3RHkc9hclZvB2Wo9CFH8aFpb1ak4J9jjBsfMvMBxfU5DrJtR8jEdX3eV69ixwSjihl7F8w7Nd1l+VY+Qry4U2+BpF/zVbAIaNj78fKR3UFg3GPWb95/9DCES72mvgOeE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604414189;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=aK0qDr51RFIT+8ZZXZByEYXzbH1OSzNaKROwfwDemeU=;
	b=GCRR6DxZhzlrGuwwkqIbqDiYMXIqZdlQ9iuG7hPJEr6KRif9FCv/Kv7FKE//sc5R
	kYzLTIzqTh3wx2Lf4cnkh1qOdXuAfaqyWrvAU6lM3KoIHqD+q1FNv9L4YAXyyZjSaIG
	ltEkb0v7+Qw6T7TySCTtWw9y75q4yupNvFwFWVdM=
To: Dario Faggioli <dfaggioli@suse.com>,
 "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>
Cc: Juergen Gross <JGross@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
 <20201031025442.GF1447@mail-itl>
 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
 <20201031040817.GG1447@mail-itl>
 <2cbcca0b-8415-8f98-46d8-12279703cbbe@qubes-os.org>
 <f9ceee15b46bfe66d126644986c25ced1ed70d0b.camel@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <7eba1d54-940d-54c7-905d-f46e1db48d79@qubes-os.org>
Date: Tue, 3 Nov 2020 15:36:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <f9ceee15b46bfe66d126644986c25ced1ed70d0b.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="RAOUBkqoSPxdHoSIL2bCWrBl0q1hqppqJ"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--RAOUBkqoSPxdHoSIL2bCWrBl0q1hqppqJ
Content-Type: multipart/mixed; boundary="GP4dnsV0Zc2pzpcKgeKTdvIAYMTCnNcqx";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Dario Faggioli <dfaggioli@suse.com>,
 "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>
Cc: Juergen Gross <JGross@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Message-ID: <7eba1d54-940d-54c7-905d-f46e1db48d79@qubes-os.org>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
 <20201031025442.GF1447@mail-itl>
 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
 <20201031040817.GG1447@mail-itl>
 <2cbcca0b-8415-8f98-46d8-12279703cbbe@qubes-os.org>
 <f9ceee15b46bfe66d126644986c25ced1ed70d0b.camel@suse.com>
In-Reply-To: <f9ceee15b46bfe66d126644986c25ced1ed70d0b.camel@suse.com>

--GP4dnsV0Zc2pzpcKgeKTdvIAYMTCnNcqx
Content-Type: multipart/mixed;
 boundary="------------48A79D9974D6BB704456BFDA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------48A79D9974D6BB704456BFDA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11/3/20 at 3:15 PM, Dario Faggioli wrote:
> On Tue, 2020-11-03 at 14:23 +0100, Frédéric Pierret wrote:
>> Hi,
>>
> Hi,
>
>> Some new info :). Marek has sent me a patch (
>> https://gist.github.com/marmarek/810ae5c079d218928535514b08a07716) to
>> help in debugging this issue.
>>
> Ok, thanks for the update (and thanks Marek for putting together the
> patch).
>
>> With default credit2 as scheduler, here are some logs from two
>> successive crashes:
>> - https://gist.github.com/fepitre/76a1e154249c7326c743d6a6d004a67b
>> - https://gist.github.com/fepitre/ab00091980cb8110fb3d349aecc3a644
>>
> Right, this is what you see when you poke at the debug keys, after a
> freeze. It would be interesting to see if there is any trace of the
> debug output added in the patch _before_ the crash... But I appreciate
> that it's not really easy to achieve this.

I'll try to find this info in the next few days.

>> Also, I've been testing the "dom0=pvh" option and the system has
>> remained stable for more than 24h. At least, I've not experienced any
>> freezes since I've been using this option.
>>
> PVH and what scheduler?

PVH and Credit2.

> I think it's quite clear that this is not a Credit2 bug, as it shows on
> Credit1 as well, but AFAIUI, with Credit2 it shows up more often?
>
> Now we also have a (weak?) dependency on the type of dom0?

It seems so.

> What's the configuration that is currently working?
> Credit2 + PVH ?
> Credit1 + PVH ?
> Whatever + PVH ?

The only difference between Credit1 and Credit2 was that Credit2 was
generating freezes faster than Credit1. Here is the output of `xl info`
for the current "stable" running config:

"""
[admin@dom0 ~]$ xl info
host                   : dom0
release                : 5.4.72-2.qubes.x86_64
version                : #1 SMP Sun Oct 18 16:17:06 CEST 2020
machine                : x86_64
nr_cpus                : 16
max_cpu_id             : 63
nr_nodes               : 2
cores_per_socket       : 8
threads_per_core       : 1
cpu_mhz                : 2593.766
hw_caps                : bfebfbff:17bee3ff:2c100800:00000001:00000001:00000000:00000000:00000100
virt_caps              : pv hvm hvm_directio pv_directio hap
total_memory           : 163805
free_memory            : 11077
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 14
xen_extra              : .0
xen_version            : 4.14.0
xen_caps               : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder console=com2 dom0_mem=min:1024M dom0_mem=max:8192M iommu=no-igfx ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096 loglvl=all guest_loglvl=all dom0=pvh
cc_compiler            : gcc (GCC) 10.2.1 20201016 (Red Hat 10.2.1-6)
cc_compile_by          : user
cc_compile_domain      : [unknown]
cc_compile_date        : Sat Oct 31 16:02:25 UTC 2020
build_id               : 0bbdc8aa6f34a2b27c3ac9842741bc022269f9ee
xend_config_format     : 4
"""
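The `xl info` listing above is a simple "key : value" dump, one entry per line. A minimal sketch for turning such output into a dict (the function is illustrative only, not part of the Xen toolstack; it splits on the first colon so values containing colons, like timestamps, survive intact):

```python
# Parse `xl info`-style "key : value" output into a dict.
def parse_xl_info(text):
    info = {}
    for line in text.splitlines():
        # Split on the FIRST colon only; values may themselves contain colons.
        key, sep, value = line.partition(":")
        if sep:
            info[key.strip()] = value.strip()
    return info

sample = """\
xen_version            : 4.14.0
xen_scheduler          : credit2
nr_cpus                : 16
"""
info = parse_xl_info(sample)
print(info["xen_scheduler"])
```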

> Regards
>

Regards,
Frédéric

--------------48A79D9974D6BB704456BFDA
Content-Type: application/pgp-keys;
 name="OpenPGP_0x484010B5CDC576E2.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0x484010B5CDC576E2.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0D=
y2r
UVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8vtovi=
97s
WIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIvltoBrYnElD1P
yp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpXgYzTN/kq8sxBMh2O
rQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PLw5koqPs/6JEIVI+t0pyg
+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZNbYRXzkI91HCt40X2bTb2jTK
gvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpaA/GPaJ5DjzV0q9mkYkGDLYI3J/J+
s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVirEVBum723MFH4DxhTrOoWgta2nyRHOoi0
z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvtLbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVr
Np3aWPnSajLPpQARAQABzTxGcsOpZMOpcmljIFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5w
aWVycmV0QHF1YmVzLW9zLm9yZz7CwXgEEwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMB
Ah4BAheAAAoJEEhAELXNxXbiPLkQAI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl
+Fb5HBgthU9lBdMqNySg+s8yekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uI
TprfEqX7V2OLbrYW94qwR8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uek
t5I2Zc8iRDF9kneINiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/
bsAow7cBudj+lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze
2O5Yh+/BunrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHT
fLmAOt+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBkob1Ep
fW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKbxM/NyxHr
mNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCNHe8XNVqBplV0
yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOmqpbSLbra8NP8Mu5F
ZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8V+9+dpLEU75+mpHU7GlE
CfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M++ZmWfEV5nCP2qvzeYDGAL6B
bWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr5aNCaNpv/i4gLO1IScdfDwm6gdfB
2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hgYlDWdbImhNL/Z7iL3eayH7T9qAVNU587
MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpAH+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc
3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYDyhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboY
ORN5CGioXzS+z0S9IhPbdUuvqs7xvC248bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/
Zw4PTKWvbJlcFioofLwTQE1XvWomFPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJ
EEhAELXNxXbilSkP/2NcazvUDGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8h
jITWD9ZA2bbZZ+J+a39vyY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8m
WXwhn90SHNadEf28ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa
+G8iAkNJcb+Wx5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2M
KKbWRJntjy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/
geoCUBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex1C+w
3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PXpm5Pw4st
VEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQQMhGv8DnbAdO
OOnumAXWq0+wl5uP
=RWX1
-----END PGP PUBLIC KEY BLOCK-----

--------------48A79D9974D6BB704456BFDA--

--GP4dnsV0Zc2pzpcKgeKTdvIAYMTCnNcqx--

--RAOUBkqoSPxdHoSIL2bCWrBl0q1hqppqJ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+haucACgkQSEAQtc3F
duLfdxAAoH5tixJNO30K9SHLpHrFectLPdQhGaf2+vWbYM0enBrCxhJkKwPWOj+O
iY9TRRl4E1nWSpabFKZ5DTuNO2C98so5cv5n/cXkVCFhpsqga0JIP1Q+5jhqVL8S
xwF1O/6pO/Xkf3mDpHEh+Z6Y9Xv//7L1v3y3JZOsXriUp5E/Nrq2P5a8u1qLLIf+
Qyf2uhGYEBn+swhMuX0TcZBnTkaLNFEIifmzIKndaRcqd3pSuklAL6qhToIi1jR8
eetrYgO3QjhIqcRZyiTiZ2+8Q0SJLbbFHf1mRDeecI+5euN26b46JVLZXXUBfNx/
HF0fFYgHG83UKNERt95x+jQr5ZEvuPiyEBOR+V50WYnY0Me+6ZMLQ5xhcfoAXX0U
zHTgay97lsji9oxHAdFxp76SCkBBoJhjttdFraec3vg/p5HB0gm+YGU1jOZMzNmG
u9e5zmGAkaynJxHjYitRQDyZLt1K6X9hoDxo/L3C7sji0i79l9LE1pcF4jtGHPHw
MG6W9xkO0xrh/IxB2AHB7/091mMQ0KSKMISLFhWRTAJ1OENiSQU5TQYPj0yuYKG6
KwzmVY6jIgtlGI5czLH2bJbgoEu7qgXhvm9x6/G5MjN9PGiD4f41QCbVwd3EyXs9
FOjTh8YA85KwlN/QvqvBvuGEimHRKjzWOPde9nFNPuQ+odu1Iyc=
=fQuu
-----END PGP SIGNATURE-----

--RAOUBkqoSPxdHoSIL2bCWrBl0q1hqppqJ--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 14:55:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 14:55:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18369.43377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZxi1-0001n2-LY; Tue, 03 Nov 2020 14:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18369.43377; Tue, 03 Nov 2020 14:54:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZxi1-0001mv-IJ; Tue, 03 Nov 2020 14:54:53 +0000
Received: by outflank-mailman (input) for mailman id 18369;
 Tue, 03 Nov 2020 14:54:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GJlZ=EJ=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1kZxi0-0001mq-Jq
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 14:54:52 +0000
Received: from se15-1.privateemail.com (unknown [198.54.127.72])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb9d0bee-02be-42cd-9d7e-7440ec0bc06b;
 Tue, 03 Nov 2020 14:54:51 +0000 (UTC)
Received: from new-01-3.privateemail.com ([198.54.122.47])
 by se15.registrar-servers.com with esmtpsa (TLSv1.2:AES128-GCM-SHA256:128)
 (Exim 4.92) (envelope-from <tamas@tklengyel.com>) id 1kZxhx-0000Ws-18
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 06:54:51 -0800
Received: from MTA-13.privateemail.com (unknown [10.20.147.29])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by NEW-01-3.privateemail.com (Postfix) with ESMTPS id 8730BA79
 for <xen-devel@lists.xenproject.org>; Tue,  3 Nov 2020 14:54:48 +0000 (UTC)
Received: from mta-13.privateemail.com (localhost [127.0.0.1])
 by mta-13.privateemail.com (Postfix) with ESMTP id 68F7E8005C
 for <xen-devel@lists.xenproject.org>; Tue,  3 Nov 2020 09:54:48 -0500 (EST)
Received: from mail-wr1-f47.google.com (unknown [10.20.151.214])
 by mta-13.privateemail.com (Postfix) with ESMTPA id 289D280053
 for <xen-devel@lists.xenproject.org>; Tue,  3 Nov 2020 14:54:48 +0000 (UTC)
Received: by mail-wr1-f47.google.com with SMTP id e6so1635962wro.1
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 06:54:48 -0800 (PST)
X-Inumbo-ID: fb9d0bee-02be-42cd-9d7e-7440ec0bc06b
X-Gm-Message-State: AOAM532bF4FWaXVviaOiKaJDbPyQlI3yOobPB5O3CCeOIdXvSFaqGNrf
	BP7b2T3hh4QbS14jYL1FJXwHiRevLPaD6qLXGgg=
X-Google-Smtp-Source: ABdhPJztlFXTt/NiTpg/N64/yQuCvguv67n/Kfvd7kzT7TDF/qWbcdo9StlZuLSP5vl9n0XX+CxqNuFktKING+klUb0=
X-Received: by 2002:adf:8284:: with SMTP id 4mr1620841wrc.386.1604415282581;
 Tue, 03 Nov 2020 06:54:42 -0800 (PST)
MIME-Version: 1.0
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <247f0d77-9447-47d0-4fa6-8e17b3e6a6de@suse.com> <60302534-1dfb-af5f-4974-1790edcb2f17@bitdefender.com>
In-Reply-To: <60302534-1dfb-af5f-4974-1790edcb2f17@bitdefender.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 3 Nov 2020 09:54:06 -0500
X-Gmail-Original-Message-ID: <CABfawhnh_4cYbcDjP_yC_8KrMjq=Yfd6=-9=kU+WjQjbfe7AvA@mail.gmail.com>
Message-ID: <CABfawhnh_4cYbcDjP_yC_8KrMjq=Yfd6=-9=kU+WjQjbfe7AvA@mail.gmail.com>
Subject: Re: [PATCH RFC v2 8/8] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Isaila Alexandru <aisaila@bitdefender.com>
Cc: Jan Beulich <jbeulich@suse.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, 
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-Virus-Scanned: ClamAV using ClamSMTP
X-Originating-IP: 198.54.122.47
X-SpamExperts-Domain: o3.privateemail.com
X-SpamExperts-Username: out-03
Authentication-Results: registrar-servers.com; auth=pass (plain) smtp.auth=out-03@o3.privateemail.com
X-SpamExperts-Outgoing-Class: ham
X-SpamExperts-Outgoing-Evidence: Combined (0.09)
X-Recommended-Action: accept
X-Filter-ID: Mvzo4OR0dZXEDF/gcnlw0XvADx2zSFwG+3csxFBPBHmpSDasLI4SayDByyq9LIhVIUNLlB3FHTXn
 c68xm9CuSETNWdUk1Ol2OGx3IfrIJKywOmJyM1qr8uRnWBrbSAGD6dlyFUCNOm4A+RbagoqcGcXW
 KniNrlErAk1ka2x3ssWY/RLwb8LtNblQmhlQVhqrCIm7vQ+wrWho34hgmOAj3dqwTTJhLChIm0ME
 lNcQXjgsWoWxNnQGVmXmGlNWFvqPiTX6paolSqlHgPgYmEpc/CXAqEAQZXCFCwvZK9SRAKXnpk/M
 4VaHG2sgIVYJKl4PiDKbdpSbtI1rabalqsakRhVVJ2mVvokCO2vVjpfr/eX3+RWxljT0AAJgouHP
 rg5R6cTrAfIBtLJVe62uoyOAUop4cZXX4YNvBwClFIJz3G8Bi53Psle5cUmOuJZEMw7GmFVVgWdB
 XKkEH0++/JhR/Qa+FEE+TmTK/6Hl36Y/23+8Ksk+aedMfNWSnJswrtlNb4Rbo17Z1teUSazoNDZl
 dHZc3qoCx6jB3DEPYTf4NonYQ5Td2RXsWX2IABvakz06874MD/XDMPpcnv6l5Hrq7jOPc3EG53uL
 3j4HL+IhMsqbPc0mYnNMTVPrDKDVIdcLW+y7FntU/xwaLfK/la1KDKAaXVo/EC2K+bDeFIM9LXsN
 6BOoBklDmN7a6acxstx8j8QB+r6c4JMDkXz0ELFOj4DOcg83LXH4GksgCfTIQ54TKgAQriuw1aTi
 a+0Fz1FRylBJH/HZL+SBwttDp25IjkFQ4FPMf16J1ee+Ao2D6rE14O9uIBc4R2FHvu2iptL1PT/Y
 jZYaXkTFxzBWuTQERk4vidrAbzPztKfs4s5XguI1eiePaQrZKwdQXE5oc9bbgU04Eqk0ovQHihEN
 x9UXNWt2q/kTHLzyCebGKfzivszIQZBdZwQyA3WtwOFul/wCAEskSfm9PBUpgWCdb6TGLpT7gdGH
 fbJUKKBLQmc/IEXooxwbXsRR+vzvUHpiFqCMdQSGZdUgxw/2/3R/MN3ySHt7xa38BTgaUFztXoSo
 PpB5Ee/uSzbytlYMSQpUVs4HMS+4ayUpOtEhdxekWDmK9g==
X-Report-Abuse-To: spam@se5.registrar-servers.com

On Tue, Nov 3, 2020 at 5:17 AM Isaila Alexandru <aisaila@bitdefender.com> wrote:
>
>
> Hi Jan and sorry for the late reply,
>
> On 20.10.2020 17:13, Jan Beulich wrote:
> > While there don't look to be any problems with this right now, the lock
> > order implications from holding the lock can be very difficult to follow
> > (and may be easy to violate unknowingly). The present callbacks don't
> > (and no such callback should) have any need for the lock to be held.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > TODO: vm_event_disable() frees the structures used by respective
> >        callbacks - need to either use call_rcu() for freeing, or maintain
> >        a count of in-progress calls, for evtchn_close() to wait to drop
> >        to zero before dropping the lock / returning.
>
> I would go with the second solution and maintain a count of in-progress
> calls.
>
> Tamas, Petre how does this sound?

Agree, doing a reference count before freeing is preferred.

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 15:18:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 15:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18382.43388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZy4e-0003fG-C7; Tue, 03 Nov 2020 15:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18382.43388; Tue, 03 Nov 2020 15:18:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZy4e-0003f9-9B; Tue, 03 Nov 2020 15:18:16 +0000
Received: by outflank-mailman (input) for mailman id 18382;
 Tue, 03 Nov 2020 15:18:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZy4d-0003f4-43
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 15:18:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4242ed2a-a6dd-47e7-b4b8-eecbf659d16f;
 Tue, 03 Nov 2020 15:18:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZy4b-0005JL-NH; Tue, 03 Nov 2020 15:18:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZy4b-00053h-FK; Tue, 03 Nov 2020 15:18:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZy4b-0002ex-Eo; Tue, 03 Nov 2020 15:18:13 +0000
X-Inumbo-ID: 4242ed2a-a6dd-47e7-b4b8-eecbf659d16f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Co8AsEZFRShcWQB3T7oOkQ96YM1/hepBkqVXI7qFmZ0=; b=FUWfOCiM9wlYcb3kmT7gF5kTY6
	0d5mSjndQHZXexNmF+nGYhSGGJXgXjE6FNRyOaG+jpMFFg+1hN+PWgbWpCjQojOhLaPOfxGfAzMW/
	OfAM1PGfkR7Fpzy2KRwW4U8bHWVChyemgldAr2gueiXdYBFlvPEtQjyRg1BZuLV0VFTU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156380-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156380: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=375683654d46380e4e557502141e9823f6b68445
X-Osstest-Versions-That:
    ovmf=0166dad49698fbe263759755382006d64a0ac825
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 15:18:13 +0000

flight 156380 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156380/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 375683654d46380e4e557502141e9823f6b68445
baseline version:
 ovmf                 0166dad49698fbe263759755382006d64a0ac825

Last test of basis   156374  2020-11-03 01:55:47 Z    0 days
Testing same since   156380  2020-11-03 10:11:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <pierre.gondois@arm.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0166dad496..375683654d  375683654d46380e4e557502141e9823f6b68445 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 15:59:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 15:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18392.43404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyii-00077m-JF; Tue, 03 Nov 2020 15:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18392.43404; Tue, 03 Nov 2020 15:59:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyii-00077f-Fo; Tue, 03 Nov 2020 15:59:40 +0000
Received: by outflank-mailman (input) for mailman id 18392;
 Tue, 03 Nov 2020 15:59:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kZyig-00077Z-KO
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 15:59:38 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 21a9fa7f-4dfe-4447-b9c0-09a10e142c63;
 Tue, 03 Nov 2020 15:59:36 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8D491139F;
 Tue,  3 Nov 2020 07:59:36 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3BFAE3F66E;
 Tue,  3 Nov 2020 07:59:35 -0800 (PST)
X-Inumbo-ID: 21a9fa7f-4dfe-4447-b9c0-09a10e142c63
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2 0/4] xen/arm: Make PCI passthrough code non-x86 specific
Date: Tue,  3 Nov 2020 15:59:11 +0000
Message-Id: <cover.1604417224.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

This patch series is v2 of preparatory work to make PCI passthrough code
non-x86 specific.

Rahul Singh (4):
  xen/ns16550: solve compilation error on ARM with CONFIG_HAS_PCI
    enabled.
  xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI ATS functionality.
  xen/pci: Move x86 specific code to x86 directory.
  xen/pci: solve compilation error on ARM with HAS_PCI enabled.

 xen/drivers/char/Kconfig             |  7 +++
 xen/drivers/char/ns16550.c           | 32 +++++-----
 xen/drivers/passthrough/ats.h        | 26 ++++++++
 xen/drivers/passthrough/pci.c        | 84 +------------------------
 xen/drivers/passthrough/x86/Makefile |  3 +-
 xen/drivers/passthrough/x86/iommu.c  | 20 ++++++
 xen/drivers/passthrough/x86/pci.c    | 91 ++++++++++++++++++++++++++++
 xen/drivers/pci/Kconfig              |  9 +++
 xen/include/xen/iommu.h              |  2 +
 xen/include/xen/pci.h                |  2 +
 10 files changed, 177 insertions(+), 99 deletions(-)
 create mode 100644 xen/drivers/passthrough/x86/pci.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 16:00:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 16:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18394.43416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjF-0008I3-SJ; Tue, 03 Nov 2020 16:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18394.43416; Tue, 03 Nov 2020 16:00:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjF-0008Hv-On; Tue, 03 Nov 2020 16:00:13 +0000
Received: by outflank-mailman (input) for mailman id 18394;
 Tue, 03 Nov 2020 16:00:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kZyjF-0008Hp-9D
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:13 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 507e93cd-939e-458d-936c-960503b38e8e;
 Tue, 03 Nov 2020 16:00:11 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 644061424;
 Tue,  3 Nov 2020 08:00:06 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2CB253F66E;
 Tue,  3 Nov 2020 08:00:05 -0800 (PST)
X-Inumbo-ID: 507e93cd-939e-458d-936c-960503b38e8e
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/4] xen/ns16550: solve compilation error on ARM with CONFIG_HAS_PCI enabled.
Date: Tue,  3 Nov 2020 15:59:12 +0000
Message-Id: <2aa79510731918d78d515a1679cc141fcf16883e.1604417224.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1604417224.git.rahul.singh@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>

ARM platforms do not have PCI support available. When CONFIG_HAS_PCI
is enabled for ARM, a compilation error is observed for the ns16550
driver.

Fix the compilation error by introducing a new Kconfig option,
CONFIG_HAS_NS16550_PCI, to gate the ns16550 PCI support on x86.

For x86 platforms the option is enabled by default. For ARM platforms it
is disabled by default; once we have proper ns16550 PCI support for ARM,
it can be enabled there as well.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v2:
 - Silently enable HAS_NS16550_PCI for x86 by default.

---
 xen/drivers/char/Kconfig   |  7 +++++++
 xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
 2 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..12a53607d1 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -4,6 +4,13 @@ config HAS_NS16550
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
 
+config HAS_NS16550_PCI
+	def_bool y
+	depends on X86 && HAS_NS16550 && HAS_PCI
+	help
+	  This selects the 16550-series UART PCI support. For most systems,
+	  say Y.
+
 config HAS_CADENCE_UART
 	bool "Xilinx Cadence UART driver"
 	default y
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index d8b52eb813..bd1c2af956 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -16,7 +16,7 @@
 #include <xen/timer.h>
 #include <xen/serial.h>
 #include <xen/iocap.h>
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
@@ -54,7 +54,7 @@ enum serial_param_type {
     reg_shift,
     reg_width,
     stop_bits,
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     bridge_bdf,
     device,
     port_bdf,
@@ -83,7 +83,7 @@ static struct ns16550 {
     unsigned int timeout_ms;
     bool_t intr_works;
     bool_t dw_usr_bsy;
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     /* PCI card parameters. */
     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
@@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
     {"reg-shift", reg_shift},
     {"reg-width", reg_width},
     {"stop-bits", stop_bits},
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     {"bridge", bridge_bdf},
     {"dev", device},
     {"port", port_bdf},
 #endif
 };
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 struct ns16550_config {
     u16 vendor_id;
     u16 dev_id;
@@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
 
 static void pci_serial_early_init(struct ns16550 *uart)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
         return;
 
@@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
 
 static void __init ns16550_init_irq(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     struct ns16550 *uart = port->uart;
 
     if ( uart->msi )
@@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
     uart->timeout_ms = max_t(
         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( uart->bar || uart->ps_bdf_enable )
     {
         if ( uart->param && uart->param->mmio &&
@@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
 
     stop_timer(&uart->timer);
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( uart->bar )
        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
                                   uart->ps_bdf[2]), PCI_COMMAND);
@@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
 
 static void _ns16550_resume(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     struct ns16550 *uart = port->uart;
 
     if ( uart->bar )
@@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
     return 1; /* Everything is MMIO */
 #endif
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     pci_serial_early_init(uart);
 #endif
 
@@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
     return (status == 0x90);
 }
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 static int __init
 pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
 {
@@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
 
     if ( *conf == ',' && *++conf != ',' )
     {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         if ( strncmp(conf, "pci", 3) == 0 )
         {
             if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
@@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
 
     if ( *conf == ',' && *++conf != ',' )
     {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         if ( strncmp(conf, "msi", 3) == 0 )
         {
             conf += 3;
@@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
             uart->irq = simple_strtol(conf, &conf, 10);
     }
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( *conf == ',' && *++conf != ',' )
     {
         conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
@@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
             uart->reg_width = simple_strtoul(param_value, NULL, 0);
             break;
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         case bridge_bdf:
             if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
                             &uart->ps_bdf[1], &uart->ps_bdf[2]) )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 16:00:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 16:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18395.43428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjH-0008Jg-7E; Tue, 03 Nov 2020 16:00:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18395.43428; Tue, 03 Nov 2020 16:00:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjH-0008JX-2B; Tue, 03 Nov 2020 16:00:15 +0000
Received: by outflank-mailman (input) for mailman id 18395;
 Tue, 03 Nov 2020 16:00:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kZyjG-0008I1-0D
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:14 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0d301239-0fe5-4670-a290-67d4b4fefb0d;
 Tue, 03 Nov 2020 16:00:12 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1660E139F;
 Tue,  3 Nov 2020 08:00:12 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B8DAD3F66E;
 Tue,  3 Nov 2020 08:00:10 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kZyjG-0008I1-0D
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:14 +0000
X-Inumbo-ID: 0d301239-0fe5-4670-a290-67d4b4fefb0d
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 0d301239-0fe5-4670-a290-67d4b4fefb0d;
	Tue, 03 Nov 2020 16:00:12 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1660E139F;
	Tue,  3 Nov 2020 08:00:12 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B8DAD3F66E;
	Tue,  3 Nov 2020 08:00:10 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI ATS functionality.
Date: Tue,  3 Nov 2020 15:59:13 +0000
Message-Id: <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1604417224.git.rahul.singh@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>

PCI ATS functionality is neither enabled nor tested for the ARM
architecture, but it is enabled for x86 and referenced in the common
passthrough/pci.c code.

Therefore, introduce a new flag to enable the ATS functionality for
x86 only, avoiding build issues for the ARM architecture.

No functional change.
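The header change follows the usual Kconfig-stub pattern: when the option is off, inline fallbacks keep common callers compiling. A minimal compilable sketch of that pattern (the struct names are opaque stand-ins and CONFIG_PCI_ATS is left undefined here, so the fallback branch is built; the real declarations live in xen/drivers/passthrough/ats.h):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Opaque stand-ins; the real definitions live elsewhere in the tree. */
struct pci_dev;
struct list_head;

#ifdef CONFIG_PCI_ATS
/* Real declarations, implemented by passthrough/ats.c. */
int enable_ats_device(struct pci_dev *pdev, struct list_head *ats_list);
void disable_ats_device(struct pci_dev *pdev);
#else
/* ATS compiled out: inline stubs keep common callers building unchanged. */
static inline int enable_ats_device(struct pci_dev *pdev,
                                    struct list_head *ats_list)
{
    (void)pdev;
    (void)ats_list;
    return -EOPNOTSUPP;   /* report "operation not supported" */
}

static inline void disable_ats_device(struct pci_dev *pdev)
{
    (void)pdev;           /* nothing to disable */
}
#endif
```

With the option disabled, callers see a clean "not supported" error rather than a link failure.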

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v2:
 - Fixed return value of PCI ATS related functions when PCI_ATS is not enabled.
 - Make PCI_ATS a user-selectable Kconfig option.

---
 xen/drivers/passthrough/ats.h        | 26 ++++++++++++++++++++++++++
 xen/drivers/passthrough/x86/Makefile |  2 +-
 xen/drivers/pci/Kconfig              |  9 +++++++++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/ats.h b/xen/drivers/passthrough/ats.h
index 22ae209b37..3a71fedcb4 100644
--- a/xen/drivers/passthrough/ats.h
+++ b/xen/drivers/passthrough/ats.h
@@ -17,6 +17,8 @@
 
 #include <xen/pci_regs.h>
 
+#ifdef CONFIG_PCI_ATS
+
 #define ATS_REG_CAP    4
 #define ATS_REG_CTL    6
 #define ATS_QUEUE_DEPTH_MASK     0x1f
@@ -48,5 +50,29 @@ static inline int pci_ats_device(int seg, int bus, int devfn)
     return pci_find_ext_capability(seg, bus, devfn, PCI_EXT_CAP_ID_ATS);
 }
 
+#else
+
+#define ats_enabled (false)
+
+static inline int enable_ats_device(struct pci_dev *pdev,
+                                    struct list_head *ats_list)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline void disable_ats_device(struct pci_dev *pdev) { }
+
+static inline int pci_ats_enabled(int seg, int bus, int devfn)
+{
+    return 0;
+}
+
+static inline int pci_ats_device(int seg, int bus, int devfn)
+{
+    return 0;
+}
+
+#endif /* CONFIG_PCI_ATS */
+
 #endif /* _ATS_H_ */
 
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index a70cf9460d..aa515c680d 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,2 @@
-obj-y += ats.o
+obj-$(CONFIG_PCI_ATS) += ats.o
 obj-y += iommu.o
diff --git a/xen/drivers/pci/Kconfig b/xen/drivers/pci/Kconfig
index 7da03fa13b..3cb79ea954 100644
--- a/xen/drivers/pci/Kconfig
+++ b/xen/drivers/pci/Kconfig
@@ -1,3 +1,12 @@
 
 config HAS_PCI
 	bool
+
+config PCI_ATS
+	bool "PCI ATS support"
+	default y
+	depends on X86 && HAS_PCI
+	---help---
+	 Enable PCI Address Translation Services.
+
+	 If unsure, say Y.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 16:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 16:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18396.43440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjM-0008QC-GR; Tue, 03 Nov 2020 16:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18396.43440; Tue, 03 Nov 2020 16:00:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjM-0008Q3-CS; Tue, 03 Nov 2020 16:00:20 +0000
Received: by outflank-mailman (input) for mailman id 18396;
 Tue, 03 Nov 2020 16:00:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kZyjK-0008I1-Tc
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:18 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8a62af32-53e9-42dd-8610-ff795a755265;
 Tue, 03 Nov 2020 16:00:17 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D5D2F139F;
 Tue,  3 Nov 2020 08:00:16 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 644413F66E;
 Tue,  3 Nov 2020 08:00:15 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kZyjK-0008I1-Tc
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:18 +0000
X-Inumbo-ID: 8a62af32-53e9-42dd-8610-ff795a755265
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 8a62af32-53e9-42dd-8610-ff795a755265;
	Tue, 03 Nov 2020 16:00:17 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D5D2F139F;
	Tue,  3 Nov 2020 08:00:16 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 644413F66E;
	Tue,  3 Nov 2020 08:00:15 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86 directory.
Date: Tue,  3 Nov 2020 15:59:14 +0000
Message-Id: <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1604417224.git.rahul.singh@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>

The passthrough/pci.c file is common to all architectures, but it
contains x86-specific code.

Move the x86-specific code to the x86 directory to avoid compilation
errors on other architectures.

No functional change.
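The move follows a common arch-hook split: common code calls a function declared in a shared header (the patch puts the declaration in xen/include/xen/pci.h), and each architecture links its own definition. A minimal sketch, with CONFIG_X86 undefined so the hypothetical non-x86 stub below is what gets built (the patch itself only adds the x86 implementation):

```c
#include <assert.h>

struct domain { int id; };   /* stand-in for the real struct domain */

int arch_pci_clean_pirqs(struct domain *d);   /* shared declaration */

#ifdef CONFIG_X86
/* x86/pci.c: walks and tears down passed-through dpci IRQs (elided). */
#else
/* Hypothetical stub another architecture would supply. */
int arch_pci_clean_pirqs(struct domain *d)
{
    (void)d;
    return 0;   /* nothing to clean up on this architecture */
}
#endif

/* Common caller, analogous to pci_release_devices(): stays arch-neutral. */
static int release_devices(struct domain *d)
{
    return arch_pci_clean_pirqs(d);
}
```

The common file never references HVM dpci internals, which is exactly what removes the cross-architecture build breakage.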

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v2:
 - fixed comments.
 - rename pci_clean_dpci_irqs() to arch_pci_clean_pirqs().

---
 xen/drivers/passthrough/pci.c        | 76 +----------------------
 xen/drivers/passthrough/x86/Makefile |  1 +
 xen/drivers/passthrough/x86/iommu.c  |  7 +++
 xen/drivers/passthrough/x86/pci.c    | 91 ++++++++++++++++++++++++++++
 xen/include/xen/pci.h                |  2 +
 5 files changed, 102 insertions(+), 75 deletions(-)
 create mode 100644 xen/drivers/passthrough/x86/pci.c

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 2a3bce1462..04d3e2c0f9 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -14,7 +14,6 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/sched.h>
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
@@ -24,7 +23,6 @@
 #include <xen/irq.h>
 #include <xen/param.h>
 #include <xen/vm_event.h>
-#include <asm/hvm/irq.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -847,71 +845,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     return ret;
 }
 
-static int pci_clean_dpci_irq(struct domain *d,
-                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct dev_intx_gsi_link *digl, *tmp;
-
-    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
-
-    if ( pt_irq_need_timer(pirq_dpci->flags) )
-        kill_timer(&pirq_dpci->timer);
-
-    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
-    {
-        list_del(&digl->list);
-        xfree(digl);
-    }
-
-    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
-
-    if ( !pt_pirq_softirq_active(pirq_dpci) )
-        return 0;
-
-    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
-
-    return -ERESTART;
-}
-
-static int pci_clean_dpci_irqs(struct domain *d)
-{
-    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
-
-    spin_lock(&d->event_lock);
-    hvm_irq_dpci = domain_get_irq_dpci(d);
-    if ( hvm_irq_dpci != NULL )
-    {
-        int ret = 0;
-
-        if ( hvm_irq_dpci->pending_pirq_dpci )
-        {
-            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
-                 ret = -ERESTART;
-            else
-                 hvm_irq_dpci->pending_pirq_dpci = NULL;
-        }
-
-        if ( !ret )
-            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
-        if ( ret )
-        {
-            spin_unlock(&d->event_lock);
-            return ret;
-        }
-
-        hvm_domain_irq(d)->dpci = NULL;
-        free_hvm_irq_dpci(hvm_irq_dpci);
-    }
-    spin_unlock(&d->event_lock);
-    return 0;
-}
-
 /* Caller should hold the pcidevs_lock */
 static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
                            uint8_t devfn)
@@ -971,7 +904,7 @@ int pci_release_devices(struct domain *d)
     int ret;
 
     pcidevs_lock();
-    ret = pci_clean_dpci_irqs(d);
+    ret = arch_pci_clean_pirqs(d);
     if ( ret )
     {
         pcidevs_unlock();
@@ -1375,13 +1308,6 @@ static int __init setup_dump_pcidevs(void)
 }
 __initcall(setup_dump_pcidevs);
 
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    return iommu_intremap
-           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
-}
-
 static int iommu_add_device(struct pci_dev *pdev)
 {
     const struct domain_iommu *hd;
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index aa515c680d..d02ff75de5 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_PCI_ATS) += ats.o
 obj-y += iommu.o
+obj-y += pci.o
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f17b1820f4..875e67b53b 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     return pg;
 }
 
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    return iommu_intremap
+           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/x86/pci.c b/xen/drivers/passthrough/x86/pci.c
new file mode 100644
index 0000000000..59588aa8d4
--- /dev/null
+++ b/xen/drivers/passthrough/x86/pci.c
@@ -0,0 +1,91 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/sched.h>
+#include <xen/pci.h>
+
+static int pci_clean_dpci_irq(struct domain *d,
+                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    struct dev_intx_gsi_link *digl, *tmp;
+
+    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
+
+    if ( pt_irq_need_timer(pirq_dpci->flags) )
+        kill_timer(&pirq_dpci->timer);
+
+    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
+    {
+        list_del(&digl->list);
+        xfree(digl);
+    }
+
+    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
+
+    if ( !pt_pirq_softirq_active(pirq_dpci) )
+        return 0;
+
+    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
+
+    return -ERESTART;
+}
+
+int arch_pci_clean_pirqs(struct domain *d)
+{
+    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !is_hvm_domain(d) )
+        return 0;
+
+    spin_lock(&d->event_lock);
+    hvm_irq_dpci = domain_get_irq_dpci(d);
+    if ( hvm_irq_dpci != NULL )
+    {
+        int ret = 0;
+
+        if ( hvm_irq_dpci->pending_pirq_dpci )
+        {
+            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
+                 ret = -ERESTART;
+            else
+                 hvm_irq_dpci->pending_pirq_dpci = NULL;
+        }
+
+        if ( !ret )
+            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+        if ( ret )
+        {
+            spin_unlock(&d->event_lock);
+            return ret;
+        }
+
+        hvm_domain_irq(d)->dpci = NULL;
+        free_hvm_irq_dpci(hvm_irq_dpci);
+    }
+    spin_unlock(&d->event_lock);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index c4d3879761..fd28d11f6e 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -209,4 +209,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
 void msixtbl_pt_unregister(struct domain *, struct pirq *);
 void msixtbl_pt_cleanup(struct domain *d);
 
+int arch_pci_clean_pirqs(struct domain *d);
+
 #endif /* __XEN_PCI_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 16:00:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 16:00:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18397.43452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjR-0008Ud-20; Tue, 03 Nov 2020 16:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18397.43452; Tue, 03 Nov 2020 16:00:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZyjQ-0008UR-Ul; Tue, 03 Nov 2020 16:00:24 +0000
Received: by outflank-mailman (input) for mailman id 18397;
 Tue, 03 Nov 2020 16:00:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kZyjP-0008I1-RI
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:23 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1dae1f68-0d34-4a5d-86ae-4e8e5e1977e4;
 Tue, 03 Nov 2020 16:00:19 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 97EC413A1;
 Tue,  3 Nov 2020 08:00:19 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E8F123F66E;
 Tue,  3 Nov 2020 08:00:18 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RC30=EJ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kZyjP-0008I1-RI
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:00:23 +0000
X-Inumbo-ID: 1dae1f68-0d34-4a5d-86ae-4e8e5e1977e4
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 1dae1f68-0d34-4a5d-86ae-4e8e5e1977e4;
	Tue, 03 Nov 2020 16:00:19 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 97EC413A1;
	Tue,  3 Nov 2020 08:00:19 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E8F123F66E;
	Tue,  3 Nov 2020 08:00:18 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2 4/4] xen/pci: solve compilation error on ARM with HAS_PCI enabled.
Date: Tue,  3 Nov 2020 15:59:15 +0000
Message-Id: <7b60501fa689a4f2795ea6c34a7475d288f154a9.1604417224.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1604417224.git.rahul.singh@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>

If the mem-sharing, mem-paging, and log-dirty functionality is not
enabled for an architecture that enables HAS_PCI, the compiler will
throw an error.

Move the code to the x86-specific directory to fix the compilation
error.

No functional change.
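The new arch_iommu_usable() hook turns the x86-only checks into a per-architecture predicate. A small sketch of the idea (the three flags are hypothetical stand-ins for the real x86 checks, i.e. mem_sharing_enabled(), the vm_event paging ring, and global log-dirty; the struct is not the real struct domain):

```c
#include <assert.h>
#include <stdbool.h>

struct domain {
    bool mem_sharing;   /* stand-in for mem_sharing_enabled(d) */
    bool mem_paging;    /* stand-in for the vm_event paging ring check */
    bool log_dirty;     /* stand-in for global_logdirty */
};

/* Device assignment is refused while any introspection feature is live;
 * an architecture without these features could simply return true. */
static bool arch_iommu_usable(const struct domain *d)
{
    return !(d->mem_sharing || d->mem_paging || d->log_dirty);
}
```

Common code then only asks the yes/no question, so architectures without mem-sharing or paging never need to reference those subsystems.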

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v2:
 - Move mem-sharing, mem-paging, and log-dirty specific code to the x86 directory.

---
 xen/drivers/passthrough/pci.c       |  8 +-------
 xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++++
 xen/include/xen/iommu.h             |  2 ++
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 04d3e2c0f9..433989e654 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -22,7 +22,6 @@
 #include <xen/iommu.h>
 #include <xen/irq.h>
 #include <xen/param.h>
-#include <xen/vm_event.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -1418,12 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     if ( !is_iommu_enabled(d) )
         return 0;
 
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( d != dom_io &&
-         unlikely(mem_sharing_enabled(d) ||
-                  vm_event_check_ring(d->vm_event_paging) ||
-                  p2m_get_hostp2m(d)->global_logdirty) )
+    if( !arch_iommu_usable(d) )
         return -EXDEV;
 
     /* device_assigned() should already have cleared the device for assignment */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 875e67b53b..b3d151a14c 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -23,6 +23,7 @@
 #include <asm/hvm/io.h>
 #include <asm/io_apic.h>
 #include <asm/setup.h>
+#include <xen/vm_event.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
 struct iommu_ops __read_mostly iommu_ops;
@@ -315,6 +316,18 @@ int iommu_update_ire_from_msi(
            ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
 }
 
+bool_t arch_iommu_usable(struct domain *d)
+{
+
+    /* Prevent device assign if mem paging or mem sharing have been
+     * enabled for this domain */
+    if ( d != dom_io && unlikely(mem_sharing_enabled(d) ||
+                        vm_event_check_ring(d->vm_event_paging) ||
+                        p2m_get_hostp2m(d)->global_logdirty) )
+        return false;
+    else
+        return true;
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 191021870f..493528cee3 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
 
+bool_t arch_iommu_usable(struct domain *d);
+
 #endif /* _IOMMU_H_ */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 16:03:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 16:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18426.43467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZymR-0000bU-IU; Tue, 03 Nov 2020 16:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18426.43467; Tue, 03 Nov 2020 16:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZymR-0000bN-FA; Tue, 03 Nov 2020 16:03:31 +0000
Received: by outflank-mailman (input) for mailman id 18426;
 Tue, 03 Nov 2020 16:03:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZymQ-0000bH-Hc
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:03:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id d58119f1-2a58-4dbe-8cfd-3b605b222a5a;
 Tue, 03 Nov 2020 16:03:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0700EB1FC;
 Tue,  3 Nov 2020 16:03:28 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kZymQ-0000bH-Hc
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:03:30 +0000
X-Inumbo-ID: d58119f1-2a58-4dbe-8cfd-3b605b222a5a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id d58119f1-2a58-4dbe-8cfd-3b605b222a5a;
	Tue, 03 Nov 2020 16:03:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604419408;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=1n1R/+OcklHSrTdUTGvN7hl/2W4K85FsFtO5YTwnR5I=;
	b=No1yQy+kwToENzFVjviwHDwMFpAIfU/fVauQ6p3AonLyno+DK29O/UZ8WlaulrD/8ohNNC
	gxzuYPUZs6JJ9FKIyxZ/6UZdKMoEp6S2j4GNStKSu8jWzdqrTjbqD2TdsEQm3jGBfr4FwD
	Ki5cGCgxRHn3ERJHHXatWIbpgsF18Oo=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0700EB1FC;
	Tue,  3 Nov 2020 16:03:28 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: Xen 4.12.4 released
To: xen-announce@lists.xenproject.org
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <4217a94f-2f8b-71f0-92d5-ff4662fc320c@suse.com>
Date: Tue, 3 Nov 2020 17:03:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All,

I am pleased to announce the release of Xen 4.12.4. This is available
immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.12
(tag RELEASE-4.12.4) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-12-series/xen-project-4-12-4/
(where a list of changes can also be found).

We recommend that all users of the 4.12 stable series update to this
point release, the last that the XenProject team will make from this
stable branch.

Regards, Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 16:29:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 16:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18439.43485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZzB4-0002Ua-R1; Tue, 03 Nov 2020 16:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18439.43485; Tue, 03 Nov 2020 16:28:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZzB4-0002UT-NQ; Tue, 03 Nov 2020 16:28:58 +0000
Received: by outflank-mailman (input) for mailman id 18439;
 Tue, 03 Nov 2020 16:28:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kZzB3-0002Tp-23
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:28:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67beec67-e965-4556-bf54-e4872a88f525;
 Tue, 03 Nov 2020 16:28:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZzAu-0007Kv-1H; Tue, 03 Nov 2020 16:28:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kZzAt-0007v8-Nn; Tue, 03 Nov 2020 16:28:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kZzAt-0000zs-NH; Tue, 03 Nov 2020 16:28:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kZzB3-0002Tp-23
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 16:28:57 +0000
X-Inumbo-ID: 67beec67-e965-4556-bf54-e4872a88f525
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 67beec67-e965-4556-bf54-e4872a88f525;
	Tue, 03 Nov 2020 16:28:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8gCAieIFMS1qhs86H5UZzWGgHqLOfR06+8SkuGUp16Y=; b=VZTNFyItKSKMk5wfUoUDY82ceF
	vjNcWK69NiQvNXkPSBzEkTasb27l5eXOiXfYcScgs9VR8fEKY6weaqbeuK6FSGByvrcA4jKnW4x3F
	zOFNYAj2vwmNWzbRboC6JqhBEFnE6HHRSETvq4LEQedpEH+xePFWaaPiEwUKx6t3mNsg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZzAu-0007Kv-1H; Tue, 03 Nov 2020 16:28:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZzAt-0007v8-Nn; Tue, 03 Nov 2020 16:28:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kZzAt-0000zs-NH; Tue, 03 Nov 2020 16:28:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156378-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156378: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8680d6e36468f1ca00e2fe749bef50585d632401
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 16:28:47 +0000

flight 156378 qemu-mainline real [real]
flight 156385 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156378/
http://logs.test-lab.xenproject.org/osstest/logs/156385/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8680d6e36468f1ca00e2fe749bef50585d632401
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   75 days
Failing since        152659  2020-08-21 14:07:39 Z   74 days  166 attempts
Testing same since   156378  2020-11-03 04:50:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57958 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 17:04:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 17:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18518.43530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZzjO-0006VE-GX; Tue, 03 Nov 2020 17:04:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18518.43530; Tue, 03 Nov 2020 17:04:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZzjO-0006V7-CV; Tue, 03 Nov 2020 17:04:26 +0000
Received: by outflank-mailman (input) for mailman id 18518;
 Tue, 03 Nov 2020 17:04:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3DNo=EJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kZzjM-0006V2-1x
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 17:04:24 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 62d2760d-b54b-470f-9e5d-eab919dffeb5;
 Tue, 03 Nov 2020 17:04:20 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-46-JVSjkzvKPBKUBYVXFfw1gA-1; Tue, 03 Nov 2020 12:04:16 -0500
Received: by mail-wm1-f71.google.com with SMTP id o81so14840wma.0
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 09:04:16 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id g138sm3525743wme.39.2020.11.03.09.04.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Nov 2020 09:04:14 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3DNo=EJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kZzjM-0006V2-1x
	for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 17:04:24 +0000
X-Inumbo-ID: 62d2760d-b54b-470f-9e5d-eab919dffeb5
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 62d2760d-b54b-470f-9e5d-eab919dffeb5;
	Tue, 03 Nov 2020 17:04:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604423060;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=poiJCMzoY3IBe38HXDFYbr82EESqtWVxhmZ6b0tqQv8=;
	b=Tae/yZW+70bCwbh6QK4AMqyfLhtH4atWGEiRKtq0iztWiZLEqQ0gOL0UXfKyWMpINXKzkf
	OLZUYyp7/rEaQvq8eR7bw5TRb9dVLQaxbn4V6R9F9Q6FO0M7o6P0wQTKFszFw87IbLtreb
	uZub9iEudGBvIsrWOiJAP+R9yv3kZWQ=
X-MC-Unique: JVSjkzvKPBKUBYVXFfw1gA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=poiJCMzoY3IBe38HXDFYbr82EESqtWVxhmZ6b0tqQv8=;
        b=kRSairPOlXUFxBcA+Y/M6bRxpf3XUsVhW2hvs+LbXwHeGC/7e6eCqEhiDDNsjmPsM6
         YPEpYYbSOxE9VNotndpwXr2lsKnVTyxpJ6OmpGSFqdLsfOu4DbzywDaDk03PF1aVVSIP
         BHHLns37mNETPdQsu8dASlDIbzr6xQJkQBO4kGk2lzbKKgoVxM1veGgfVzPkMEY7io4N
         3dWJ0qzE6GSNx3GfPEWibkZrFX8RkWos0wfMmVvUNmwS1lCJ9QtJaIV9cRWSxLs0Doea
         Y1CZq8r533SLlTMkZI5k4AIYVL5qoOpwtHu2fcNYfN3eM2/rZ1Dov3MXtN5TItDXNwpG
         2yGQ==
X-Gm-Message-State: AOAM532FJDh4ElKLl3dQfjYqXwr9IJW3cBuFSTeQTWFOh9BfkXxqnFT0
	cMMrAv1Mf9nt+63kvWQjgU9ceQCec7eZt9tDINioSVDoGEF2y9xCdx0B4aHAhIBj4NPvAv3ErDW
	e3ZRmbIxhvQG29DVxXlUthvJG2hI=
X-Received: by 2002:a1c:df8a:: with SMTP id w132mr120529wmg.90.1604423055393;
        Tue, 03 Nov 2020 09:04:15 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwhPavdeXAqmN31QepPdH3WgYyEYFt3nbQh+rILVUD6T4tn1A3Anyoz0UcURl1J401WTJCQrA==
X-Received: by 2002:a1c:df8a:: with SMTP id w132mr120500wmg.90.1604423055231;
        Tue, 03 Nov 2020 09:04:15 -0800 (PST)
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
To: =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Cornelia Huck <cohuck@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, David Hildenbrand <david@redhat.com>,
 qemu-s390x@nongnu.org, Fam Zheng <fam@euphon.net>,
 Richard Henderson <rth@twiddle.net>, Matthew Rosato
 <mjrosato@linux.ibm.com>, Halil Pasic <pasic@linux.ibm.com>,
 Thomas Huth <thuth@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>
References: <20201103164604.2692357-1-philmd@redhat.com>
 <20201103164604.2692357-3-philmd@redhat.com>
 <20201103165247.GT205187@redhat.com>
 <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com>
Message-ID: <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
Date: Tue, 3 Nov 2020 18:04:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

I forgot to Cc the 9pfs & Xen maintainers, doing it now ;)

On 11/3/20 6:01 PM, Philippe Mathieu-Daudé wrote:
> On 11/3/20 5:52 PM, Daniel P. Berrangé wrote:
>> On Tue, Nov 03, 2020 at 05:46:03PM +0100, Philippe Mathieu-Daudé wrote:
>>> We test './configure --without-default-devices' since commit
>>> 20885b5b169 (".travis.yml: test that no-default-device builds
>>> do not regress") in Travis-CI.
>>>
>>> As we prefer to use GitLab-CI, add the equivalent job there.
>>>
>>> One minor difference: the GitLab Ubuntu docker image has the
>>> Xen devel packages installed. As it is automatically selected,
>>> we need to disable it with the --disable-xen option, else the
>>> build fails:
>>>
>>>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
>>>   hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
>>>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
>>>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
>>>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
>>>   collect2: error: ld returned 1 exit status
>>
>> Surely this is a build bug we need to fix rather than ignore in CI ?
> 
> Well, it predates this series, so nobody really cared
> (thus I wonder whether it makes sense to invest
> resources there).
> 
> Anyway I can have a look after 5.2-rc1.
> 
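
For context, the undefined references in the quoted log come down to one object file referencing a symbol whose defining object was dropped from the build. A minimal stand-alone C sketch of the pattern (the names echo QEMU's, but the types, the CONFIG_VIRTFS flag, and the logic are simplified stand-ins, not QEMU's real API):

```c
/* Sketch of the link failure: registration code references xen_9pfs_ops
 * unconditionally, while the object defining it is only compiled when
 * 9pfs support is enabled.  Guarding the reference the same way the
 * definition is guarded avoids the undefined symbol. */
struct XenDevOps {
    int devtype;                      /* placeholder payload */
};

#ifdef CONFIG_VIRTFS                  /* hypothetical build flag */
struct XenDevOps xen_9pfs_ops = { .devtype = 9 };
#endif

int xen_be_register_common(void)
{
#ifdef CONFIG_VIRTFS
    extern struct XenDevOps xen_9pfs_ops;  /* resolves: definition is built */
    return xen_9pfs_ops.devtype;
#else
    return 0;                         /* 9pfs backend simply not registered */
#endif
}
```

Without such a guard, the reference to xen_9pfs_ops is still emitted when the defining object is excluded, and the link fails exactly as in the log.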



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 17:06:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 17:06:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18522.43542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZzlD-0006cD-U8; Tue, 03 Nov 2020 17:06:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18522.43542; Tue, 03 Nov 2020 17:06:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kZzlD-0006c6-Pl; Tue, 03 Nov 2020 17:06:19 +0000
Received: by outflank-mailman (input) for mailman id 18522;
 Tue, 03 Nov 2020 17:06:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xm8A=EJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kZzlC-0006c0-H3
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 17:06:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1490ca39-3cc4-4b42-867e-607e4d040fe3;
 Tue, 03 Nov 2020 17:06:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 468C7AD29;
 Tue,  3 Nov 2020 17:06:17 +0000 (UTC)
X-Inumbo-ID: 1490ca39-3cc4-4b42-867e-607e4d040fe3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604423177;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=5maChqSYeQFqWyEVEH7Ee6tx819ukNa7Fu6didAMlY4=;
	b=MWgUvzf1oeIyHMKevKKNLw6ABTuY/njEfhYejAYZOCsc9drrBBmWoKBTlsabvSBn39uBum
	ATUE1BttuJguF4KbtyLgP+1gH5n2NZuMq2uw1UTt6rTshFmLJojUeEXghQdqb2DvbsOjuu
	zQu+oW1JeL6MByT+hco1kQ6s2zyacn8=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/PV: conditionally avoid raising #GP for early guest MSR
 accesses
Message-ID: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
Date: Tue, 3 Nov 2020 18:06:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Prior to 4.15, Linux, when running in PV mode, did not install a #GP
handler early enough to cover, for example, the rdmsrl_safe() of
MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded read
of MSR_K7_HWCR later in the same function). The respective change
(42b3a4cb5609 "x86/xen: Support early interrupts in xen pv guests") was
backported to 4.14, but no further - presumably because it wasn't
straightforward due to other dependencies.

Therefore, to prevent our change in the handling of guest MSR accesses
from rendering PV Linux 4.13 and older unusable on at least AMD systems,
make the raising of #GP on these paths conditional upon the guest having
installed a handler. Producing zero for reads and discarding writes
isn't necessarily correct and may trip code trying to detect the
presence of MSRs early, but since such detection logic won't work
without a #GP handler anyway, this ought to be a fair workaround.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -886,7 +886,7 @@ static int read_msr(unsigned int reg, ui
         if ( ret == X86EMUL_EXCEPTION )
             x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
 
-        return ret;
+        goto done;
     }
 
     switch ( reg )
@@ -999,7 +999,16 @@ static int read_msr(unsigned int reg, ui
         return X86EMUL_OKAY;
     }
 
-    return X86EMUL_UNHANDLEABLE;
+ done:
+    if ( ret != X86EMUL_OKAY && !curr->arch.pv.trap_ctxt[X86_EXC_GP].address )
+    {
+        gprintk(XENLOG_WARNING, "faking RDMSR 0x%08x\n", reg);
+        *val = 0;
+        x86_emul_reset_event(ctxt);
+        ret = X86EMUL_OKAY;
+    }
+
+    return ret;
 }
 
 static int write_msr(unsigned int reg, uint64_t val,
@@ -1016,7 +1025,7 @@ static int write_msr(unsigned int reg, u
         if ( ret == X86EMUL_EXCEPTION )
             x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
 
-        return ret;
+        goto done;
     }
 
     switch ( reg )
@@ -1172,7 +1181,15 @@ static int write_msr(unsigned int reg, u
         return X86EMUL_OKAY;
     }
 
-    return X86EMUL_UNHANDLEABLE;
+ done:
+    if ( ret != X86EMUL_OKAY && !curr->arch.pv.trap_ctxt[X86_EXC_GP].address )
+    {
+        gprintk(XENLOG_WARNING, "dropping WRMSR 0x%08x\n", reg);
+        x86_emul_reset_event(ctxt);
+        ret = X86EMUL_OKAY;
+    }
+
+    return ret;
 }
 
 static int cache_op(enum x86emul_cache_op op, enum x86_segment seg,
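
The effect of the fallback can be modelled in plain user-space C. This is an illustrative sketch, not the hypervisor code: the types are simplified stand-ins, guest_gp_handler_address plays the role of curr->arch.pv.trap_ctxt[X86_EXC_GP].address, and the handled-MSR case and its value are invented for the example:

```c
#include <stdint.h>
#include <stdio.h>

enum { X86EMUL_OKAY, X86EMUL_UNHANDLEABLE };

/* Stand-in for curr->arch.pv.trap_ctxt[X86_EXC_GP].address: zero means
 * the guest has not yet installed a #GP handler. */
uint64_t guest_gp_handler_address;

int read_msr_model(unsigned int reg, uint64_t *val)
{
    int ret;

    switch ( reg )
    {
    case 0x00000010:          /* an explicitly handled MSR (illustrative) */
        *val = 0x1234;
        ret = X86EMUL_OKAY;
        break;

    default:                  /* unknown MSR */
        ret = X86EMUL_UNHANDLEABLE;
        break;
    }

    /* The patch's fallback: with no #GP handler installed, fake the
     * read as zero instead of letting #GP be raised. */
    if ( ret != X86EMUL_OKAY && !guest_gp_handler_address )
    {
        fprintf(stderr, "faking RDMSR 0x%08x\n", reg);
        *val = 0;
        ret = X86EMUL_OKAY;
    }

    return ret;
}
```

Once the model's guest_gp_handler_address becomes non-zero, unknown MSRs go back to being unhandled, mirroring how the patch only changes behaviour before the guest registers a handler.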


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 17:31:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 17:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18553.43554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka09n-0000nK-Uz; Tue, 03 Nov 2020 17:31:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18553.43554; Tue, 03 Nov 2020 17:31:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka09n-0000nD-S2; Tue, 03 Nov 2020 17:31:43 +0000
Received: by outflank-mailman (input) for mailman id 18553;
 Tue, 03 Nov 2020 17:31:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=41xg=EJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ka09n-0000n8-6W
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 17:31:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfd500e8-a119-4019-8e19-933af3f2c3d5;
 Tue, 03 Nov 2020 17:31:42 +0000 (UTC)
X-Inumbo-ID: dfd500e8-a119-4019-8e19-933af3f2c3d5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604424702;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=s0pQQFFvhWYyXKJWlitZjyFa0XWXukXhrXKQfbo2Kr8=;
  b=Q3MZnZ9q/fhPsBJLHPBHqZml6c5kkhO1+Df9oVPnutfHen4OPHKGuquy
   yjzfo+9qtIcYsL/1XboMafD5PCYetUiS8YHRVgdj14b6cPWHqPtp2i/+S
   hk2BHJ+gWmIHGteK6HjkZ697UVmWdpak7o9jOu283Hq0kQlTPr1p22Df7
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Mm2REViDg1xsQhVFqs0wtoWSRUzOYJIqZ9tDIODDfXplzzSgsKkaU10JZLuKw91WNpbesAD4w2
 8t1YLzZZH2p8RccOoU6qpPmU1UaiBnuy7aics1wouNYlM/AbF/embRwpQs923G7qTKKdXPsHn5
 YJpJ694mjQRTaZ+Ai3lEDsxmQQAB7Kxu+RQAQYxFSiShZQGjjq2Ox8R1/rN+brPZ9FHuXE8Sko
 Ao3+so7lCCX1FkvYYjISpas7hDIpbH3+Qmmzf7kOyuJf2PtGYbNQScz+PA+vkkM/J4hf5sdF5n
 CGc=
X-SBRS: None
X-MesageID: 30636121
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,448,1596513600"; 
   d="scan'208";a="30636121"
Subject: Re: [PATCH] x86/PV: conditionally avoid raising #GP for early guest
 MSR accesses
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
Date: Tue, 3 Nov 2020 17:31:35 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 03/11/2020 17:06, Jan Beulich wrote:
> Prior to 4.15, Linux, when running in PV mode, did not install a #GP
> handler early enough to cover, for example, the rdmsrl_safe() of
> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded read
> of MSR_K7_HWCR later in the same function). The respective change
> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv guests") was
> backported to 4.14, but no further - presumably because it wasn't
> straightforward due to other dependencies.
>
> Therefore, to prevent our change in the handling of guest MSR accesses
> from rendering PV Linux 4.13 and older unusable on at least AMD systems,
> make the raising of #GP on these paths conditional upon the guest having
> installed a handler. Producing zero for reads and discarding writes
> isn't necessarily correct and may trip code trying to detect the
> presence of MSRs early, but since such detection logic won't work
> without a #GP handler anyway, this ought to be a fair workaround.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I appreciate that we probably have to do something, but I don't think
this is a wise move.

Linux is fundamentally buggy.  It is deliberately looking for a
potential #GP fault, given its use of rdmsrl_safe().  The reason this
bug stayed hidden for so long was Xen's inappropriate MSR handling for
guests, and the reasons for changing Xen's behaviour still stand.

This change, in particular, does not apply to any explicitly handled
MSRs, and therefore is not a comprehensive fix.  Nor is it robust
against someone adding code to explicitly handle the impacted MSRs at a
later date (which we are likely to need to do for HWCR), which would
reintroduce this failure to boot.

We should have the impacted MSRs handled explicitly, with a note stating
that this was a bug in Linux 4.14 and older.  We already have
workarounds for similar bugs in Windows, and this also gives us a
timeline for eventually removing support for obsolete workarounds,
rather than a blanket "now and in the future, we'll explicitly tolerate
broken PV behaviour for one bug back in ancient Linux".
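
A sketch of that alternative, explicit handling of the specific MSRs with a dated workaround note, might look as follows. The MSR indices are the architectural ones; everything else is a simplified stand-in for illustration, not Xen's real code:

```c
#include <stdint.h>

#define MSR_K8_TSEG_ADDR 0xc0010112
#define MSR_K7_HWCR      0xc0010015

enum { X86EMUL_OKAY, X86EMUL_UNHANDLEABLE };

int read_msr_explicit(unsigned int reg, uint64_t *val)
{
    switch ( reg )
    {
    case MSR_K8_TSEG_ADDR:
    case MSR_K7_HWCR:
        /*
         * Workaround: Linux 4.13 and older read these MSRs from
         * bsp_init_amd() before installing a #GP handler.  Returning
         * zero is not architecturally correct, but keeps such guests
         * booting, and the workaround can be dropped once those
         * kernels are out of support.
         */
        *val = 0;
        return X86EMUL_OKAY;

    default:
        return X86EMUL_UNHANDLEABLE;
    }
}
```

The advantage over a blanket fallback is that the special case stays attached to the MSRs it exists for, so later adding real handling for HWCR replaces the workaround rather than silently bypassing it.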

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 17:55:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 17:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18575.43587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka0Wa-0002hd-EI; Tue, 03 Nov 2020 17:55:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18575.43587; Tue, 03 Nov 2020 17:55:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka0Wa-0002hW-AA; Tue, 03 Nov 2020 17:55:16 +0000
Received: by outflank-mailman (input) for mailman id 18575;
 Tue, 03 Nov 2020 17:55:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7C57=EJ=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1ka0WY-0002fE-BH
 for xen-devel@lists.xen.org; Tue, 03 Nov 2020 17:55:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd0c566c-c2f0-484b-9800-56ab5276c6ba;
 Tue, 03 Nov 2020 17:55:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1ka0WG-0000dd-Ir; Tue, 03 Nov 2020 17:54:56 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1ka0WG-0001pG-Fi; Tue, 03 Nov 2020 17:54:56 +0000
X-Inumbo-ID: cd0c566c-c2f0-484b-9800-56ab5276c6ba
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=8rKgEwp2k3Cv78fUgxr96EpFh1cDAQGYzbAR/iUCIgY=; b=QLBbuElL5S+2cUc3xByVADxdyQ
	rUOY4eFLXv6sAAWGWBrnxaJBe/X6sIrmHrRzNCcPf5B/TtX90TCHMM6oHu+qPB0741Km8Xnc8vY3P
	PWFpKE8zdNPIKz7Z7dDxD0tUxygib9DCXDtZA+W4/RFESSGhPiletBlN1HaIjuw3agbg=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 286 v5 - x86 PV guest INVLPG-like flushes
 may leave stale TLB entries
Message-Id: <E1ka0WG-0001pG-Fi@xenbits.xenproject.org>
Date: Tue, 03 Nov 2020 17:54:56 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-286
                              version 5

     x86 PV guest INVLPG-like flushes may leave stale TLB entries

UPDATES IN VERSION 5
====================

Patches rewritten to use a completely different approach.

The patches supplied in XSA-286 version 4 were found to have a
significant performance impact.  An alternative approach was developed
and has now been committed to the relevant Xen branches.  The
alternative approach is simpler and mitigates the performance
problems.

At the time of writing the patches in XSA-286 v4 are believed to be
correct and sound, but if we discover that this is not the case we
will not issue a further update.  We recommend the use of the patches
provided in the Xen git branches, which are the same as those attached
in this version of the advisory.

ISSUE DESCRIPTION
=================

x86 PV guest kernels may use hypercalls with INVLPG-like behavior to
invalidate TLB entries even after changes to non-leaf page tables.  Such
changes to non-leaf page tables will, however, also render stale
possible TLB entries created by Xen's internal use of linear page tables
to process guest requests like update-va-mapping.  Invalidation of these
TLB entries has been missing, allowing subsequent guest requests to
change address mappings for one process to potentially modify memory
meanwhile in use elsewhere.

IMPACT
======

Malicious x86 PV guest user mode may be able to escalate their privilege
to that of the guest kernel.

VULNERABLE SYSTEMS
==================

All versions of Xen expose the vulnerability.

The vulnerability is exposed to x86 PV guests only.  x86 HVM/PVH guests
as well as ARM ones are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Jann Horn of Google Project Zero.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

xsa286-unstable/*.patch  xen-unstable
xsa286-4.14/*.patch      Xen 4.14.x
xsa286-4.13/*.patch      Xen 4.13.x
xsa286-4.12/*.patch      Xen 4.12.x
xsa286-4.11/*.patch      Xen 4.11.x
xsa286-4.10/*.patch      Xen 4.10.x

$ sha256sum xsa286* xsa286*/*
a7d4ddb15197dfcb246b84f8a89799f76070cdde99a5c1d0203229d719b0fcc1  xsa286.meta
e5f946b07989db85de2a03e4b88e09324316c0ec12d21c5afb83d463114a1f4f  xsa286-unstable/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch
2a732c958201eb03cc0737278e75f86160e0dedbbe0a13f415ec0d17a90ec009  xsa286-unstable/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch
2da4b60e19b1fbf1daf0d1bc61733763abf5653a6e53ffeadd559d0a01ec8095  xsa286-4.10/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch
5ce7f56a9b2c9a3a63f79d7df2486c24fc130a8658deb182b22416e17c202ae9  xsa286-4.10/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch
2e700e091bfd9d3fd6dd65064ec39a8a40d73bcc94b66852fd2d6fbe9ba6c2db  xsa286-4.11/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch
d622652ce50d59bf45134baabc26b89a24e5d98b1f82230041919089a1cf1620  xsa286-4.11/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch
4dc18a007ddf2bd5022ce194b861989be88170f8188ce49dbea7073bb280202f  xsa286-4.12/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch
2c48331849d4d401b47dfc3db84bb067786b4e53155587235d919781b4a10e76  xsa286-4.12/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch
dd0fad5165dcd0c3d8d551e35fa4fe29653a3b8c5ec52f7f86f434305c946338  xsa286-4.13/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch
de1326efd4a8559c32ac68c89095f3230f723dec2acc80fc01a534578bb1be82  xsa286-4.13/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch
a718f5e19ce821d1fe06f2cdc2f7ad0bbe7c7bca954c283bbc36ad50522f66ef  xsa286-4.14/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch
d659d4a4119b235c7d1054980ceea9424dcc7faf3cfd3fd46627577a424256b5  xsa286-4.14/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+hmVsMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZI2cIAMnry5bAAjp6b9C2YsnAFgwQy114GNMaYUGpktEk
LPLvjyNkQ4ZRxoqUCk/i645h62cI24CfJS1JraHU5kCk2OSRNT6d2OhXkXhRb1qD
NL4tM+9Y5xo8R7HkZ3PV1Xs4RGr1RYuXYNKv6RPj74SpJFGmJYfsZaSgnzNxuNeL
LWFVCSZtFE7RIgOVHCrl+fLH0bFg3A8xKDsRTD8sZ+T7zEpUoe7lq8S/PZmijFAm
1WU/p1l7Fy1DHeIXtvLc82d7y5/ZwQtMgNjzy0BDS+rmuxaJRd6ciQgmj+4eTYXw
biiiFoKKQ/6Kaf/QdI4LlOtrnVmLyskJNnrWeP5BgW+0h7A=
=xMu5
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa286.meta"
Content-Disposition: attachment; filename="xsa286.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAyODYsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI3MWRhNjNiYmI4M2FmOGM4YzUzN2YzNzMxZGRhN2RjMmQy
ZmQzMWFjIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTI4Ni00LjEwLyoucGF0Y2gi
CiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQu
MTEiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAg
ICAgICAgICAiU3RhYmxlUmVmIjogIjYzMTk5ZGZkM2EwNDE4ZjE2NzdjNmNj
ZDdmZTA1YjEyM2FmNDYxMGEiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwK
ICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMjg2LTQu
MTEvKi5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAg
IH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDEwOGIwMTFlMTMz
OTE1YThlYmQzMzYzNjgxMWQ4YzE0MWI2ZTlmMyIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EyODYtNC4xMi8qLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJk
YzM4YzExMDNjZmRjNjQzODYwZTEwYzFiOWU5MjVkYWM4MzMzMmRjIiwKICAg
ICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTI4Ni00LjEzLyoucGF0Y2giCiAgICAgICAgICBd
CiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTQiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogIjdiMWU1ODdmMjVjMmRkYTM4MjM2ZTQ4YWFlODE3Mjk3OThm
MTA2NjMiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMjg2LTQuMTQvKi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAibWFz
dGVyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICI5NjQ3ODFjNmYxNjI4OTM2NzdjNTBh
Nzc5YjdkNTYyYTI5OTcyN2JhIiwKICAgICAgICAgICJQcmVyZXFzIjogW10s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTI4Ni11
bnN0YWJsZS8qLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAg
fQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-unstable/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Disposition: attachment;
 filename="xsa286-unstable/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Transfer-Encoding: base64

RnJvbSAwNTVlMWMzYTNkOTViMWU3NTMxNDgzNjlmYmM0YmE0ODc4MmRkNjAy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBUaHUsIDIyIE9j
dCAyMDIwIDExOjI4OjU4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHg4
Ni9wdjogRHJvcCBGTFVTSF9UTEJfR0xPQkFMIGluIGRvX21tdV91cGRhdGUo
KSBmb3IgWFBUSQoKYy9zIDlkMWQzMWFkOTQ5OCAieDg2OiBzbGlnaHRseSBy
ZWR1Y2UgTWVsdGRvd24gYmFuZC1haWQgb3ZlcmhlYWQiIHJlbW92ZWQgdGhl
CnVzZSBvZiBHbG9iYWwgVExCIGZsdXNoZXMgb24gdGhlIFhlbiBlbnRyeSBw
YXRoLCBidXQgYWRkZWQgYSBGTFVTSF9UTEJfR0xPQkFMCnRvIHRoZSBMNCBw
YXRoIGluIGRvX21tdV91cGRhdGUoKS4KCkhvd2V2ZXIsIHRoaXMgd2FzIHVu
bmVjZXNzYXJ5LgoKSXQgaXMgdGhlIGd1ZXN0cyByZXNwb25zaWJpbGl0eSB0
byBwZXJmb3JtIGFwcHJvcHJpYXRlIFRMQiBmbHVzaGluZyBpZiB0aGUgTDQK
bW9kaWZpY2F0aW9uIGFsdGVyZWQgYW4gZXN0YWJsaXNoZWQgbWFwcGluZyBp
biBhIGZsdXNoLXJlbGV2YW50IHdheS4gIEluIHRoaXMKY2FzZSwgYW4gTU1V
RVhUX09QIGh5cGVyY2FsbCB3aWxsIGZvbGxvdy4gIFRoZSBjYXNlIHdoaWNo
IFhlbiBuZWVkcyB0byBjb3ZlcgppcyB3aGVuIG5ldyBtYXBwaW5ncyBhcmUg
Y3JlYXRlZCwgYW5kIHRoZSByZXN5bmMgb24gdGhlIGV4aXQtdG8tZ3Vlc3Qg
cGF0aApjb3ZlcnMgdGhpcyBjb3JyZWN0bHkuCgpUaGVyZSBpcyBhIGNvcm5l
ciBjYXNlIHdpdGggbXVsdGlwbGUgdkNQVXMgaW4gaHlwZXJjYWxscyBhdCB0
aGUgc2FtZSB0aW1lLAp3aGljaCA5ZDFkMzFhZDk0OTggY2hhbmdlZCwgYW5k
IHRoaXMgcGF0Y2ggY2hhbmdlcyBiYWNrIHRvIGl0cyBvcmlnaW5hbCBYUFRJ
CmJlaGF2aW91ci4KCkFyY2hpdGVjdHVyYWxseSwgZXN0YWJsaXNoZWQgVExC
IGVudHJpZXMgY2FuIGNvbnRpbnVlIHRvIGJlIHVzZWQgdW50aWwgdGhlCmJy
b2FkY2FzdCBmbHVzaCBoYXMgY29tcGxldGVkLiAgVGhlcmVmb3JlLCBldmVu
IHdpdGggY29uY3VycmVudCBoeXBlcmNhbGxzLAp0aGUgZ3Vlc3QgY2Fubm90
IGRlcGVuZCBvbiBvbGRlciBtYXBwaW5ncyBub3QgYmVpbmcgdXNlZCB1bnRp
bCBhbiBNTVVFWFRfT1AKaHlwZXJjYWxsIGNvbXBsZXRlcy4gIFhlbidzIGlt
cGxlbWVudGF0aW9uIG9mIGd1ZXN0LWluaXRpYXRlZCBmbHVzaGVzIHdpbGwK
dGFrZSBjb3JyZWN0IGVmZmVjdCBvbiB0b3Agb2YgYW4gaW4tcHJvZ3Jlc3Mg
aHlwZXJjYWxsLCBwaWNraW5nIHVwIG5ldyBtYXBwaW5nCnNldHRpbmcgYmVm
b3JlIHRoZSBvdGhlciB2Q1BVJ3MgTU1VRVhUX09QIGNvbXBsZXRlcy4KCk5v
dGU6IFRoZSBjb3JyZWN0bmVzcyBvZiB0aGlzIGNoYW5nZSBpcyBub3QgaW1w
YWN0ZWQgYnkgd2hldGhlciBYUFRJIHVzZXMKZ2xvYmFsIG1hcHBpbmdzIG9y
IG5vdC4gIENvcnJlY3RuZXNzIHRoZXJlIGRlcGVuZHMgb24gdGhlIGJlaGF2
aW91ciBvZiBYZW4gb24KdGhlIGVudHJ5L2V4aXQgcGF0aHMgd2hlbiBzd2l0
Y2hpbmcgdHdvL2Zyb20gdGhlIFhQVEkgInNoYWRvdyIgcGFnZXRhYmxlcy4K
ClRoaXMgaXMgKG5vdCByZWFsbHkpIFhTQS0yODYgKGJ1dCBuZWNlc3Nhcnkg
dG8gc2ltcGxpZnkgdGhlIGxvZ2ljKS4KCkZpeGVzOiA5ZDFkMzFhZDk0OTgg
KCJ4ODY6IHNsaWdodGx5IHJlZHVjZSBNZWx0ZG93biBiYW5kLWFpZCBvdmVy
aGVhZCIpClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+Ci0tLQogeGVuL2FyY2gveDg2L21tLmMgfCAy
ICstCiAxIGZpbGUgY2hhbmdlZCwgMSBpbnNlcnRpb24oKyksIDEgZGVsZXRp
b24oLSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCmluZGV4IGIyZjM1YjNlN2QuLjM4MTY4MTg5YWEgMTAw
NjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4
Ni9tbS5jCkBAIC00MTg4LDcgKzQxODgsNyBAQCBsb25nIGRvX21tdV91cGRh
dGUoCiAKICAgICAgICAgY3B1bWFza19hbmRub3QobWFzaywgcHRfb3duZXIt
PmRpcnR5X2NwdW1hc2ssIGNwdW1hc2tfb2YoY3B1KSk7CiAgICAgICAgIGlm
ICggIWNwdW1hc2tfZW1wdHkobWFzaykgKQotICAgICAgICAgICAgZmx1c2hf
bWFzayhtYXNrLCBGTFVTSF9UTEJfR0xPQkFMIHwgRkxVU0hfUk9PVF9QR1RC
TCk7CisgICAgICAgICAgICBmbHVzaF9tYXNrKG1hc2ssIEZMVVNIX1JPT1Rf
UEdUQkwpOwogICAgIH0KIAogICAgIHBlcmZjX2FkZChudW1fcGFnZV91cGRh
dGVzLCBpKTsKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-unstable/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Disposition: attachment;
 filename="xsa286-unstable/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Transfer-Encoding: base64

RnJvbSAxNmEyMDk2M2IzMjA5Nzg4ZjJjMGQzYTNlZWJiN2Q5MmYwM2Y1ODgz
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBNb24sIDE5IE9j
dCAyMDIwIDE1OjUxOjIyICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHg4
Ni9wdjogRmx1c2ggVExCIGluIHJlc3BvbnNlIHRvIHBhZ2luZyBzdHJ1Y3R1
cmUgY2hhbmdlcwoKV2l0aCBNTVVfVVBEQVRFLCBhIFBWIGd1ZXN0IGNhbiBt
YWtlIGNoYW5nZXMgdG8gaGlnaGVyIGxldmVsIHBhZ2V0YWJsZXMuICBUaGlz
CmlzIHNhZmUgZnJvbSBYZW4ncyBwb2ludCBvZiB2aWV3IChhcyB0aGUgdXBk
YXRlIG9ubHkgYWZmZWN0cyBndWVzdCBtYXBwaW5ncyksCmFuZCB0aGUgZ3Vl
c3QgaXMgcmVxdWlyZWQgdG8gZmx1c2ggKGlmIG5lY2Vzc2FyeSkgYWZ0ZXIg
bWFraW5nIHVwZGF0ZXMuCgpIb3dldmVyLCBYZW4ncyB1c2Ugb2YgbGluZWFy
IHBhZ2V0YWJsZXMgKFVQREFURV9WQV9NQVBQSU5HLCBHTlRUQUJPUF9tYXAs
CndyaXRlYWJsZSBwYWdldGFibGVzLCBldGMuKSBpcyBhbiBpbXBsZW1lbnRh
dGlvbiBkZXRhaWwgb3V0c2lkZSBvZiB0aGUKQVBJL0FCSS4KCkNoYW5nZXMg
aW4gdGhlIHBhZ2luZyBzdHJ1Y3R1cmUgcmVxdWlyZSBpbnZhbGlkYXRpb25z
IGluIHRoZSBsaW5lYXIgcGFnZXRhYmxlCnJhbmdlIGZvciBzdWJzZXF1ZW50
IGFjY2Vzc2VzIGludG8gdGhlIGxpbmVhciBwYWdldGFibGVzIHRvIGFjY2Vz
cyBub24tc3RhbGUKbWFwcGluZ3MuICBYZW4gbXVzdCBwcm92aWRlIHN1aXRh
YmxlIGZsdXNoaW5nIHRvIHByZXZlbnQgaW50ZXJtaXhlZCBndWVzdAphY3Rp
b25zIGZyb20gYWNjaWRlbnRhbGx5IGFjY2Vzc2luZy9tb2RpZnlpbmcgdGhl
IHdyb25nIHBhZ2V0YWJsZS4KCkZvciBhbGwgTDIgYW5kIGhpZ2hlciBtb2Rp
ZmljYXRpb25zLCBmbHVzaCB0aGUgVExCLiAgUFYgZ3Vlc3RzIGNhbm5vdCBj
cmVhdGUKTDIgb3IgaGlnaGVyIGVudHJpZXMgd2l0aCB0aGUgR2xvYmFsIGJp
dCBzZXQsIHNvIG5vIG1hcHBpbmdzIGVzdGFibGlzaGVkIGluCnRoZSBsaW5l
YXIgcmFuZ2UgY2FuIGJlIGdsb2JhbC4gIChUaGlzIGNvdWxkIGluIHByaW5j
aXBsZSBiZSBhbiBvcmRlciAzOSBmbHVzaApzdGFydGluZyBhdCBMSU5FQVJf
UFRfVklSVF9TVEFSVCwgYnV0IG5vIHN1Y2ggbWVjaGFuaXNtIGV4aXN0cyBp
biBwcmFjdGljZS4pCgpFeHByZXNzIHRoZSBuZWNlc3NhcnkgZmx1c2hlcyBh
cyBhIHNldCBvZiBib29sZWFucyB3aGljaCBhY2N1bXVsYXRlIGFjcm9zcyB0
aGUKb3BlcmF0aW9uLiAgQ29tbWVudCB0aGUgZmx1c2hpbmcgbG9naWMgZXh0
ZW5zaXZlbHkuCgpUaGlzIGlzIFhTQS0yODYuCgpTaWduZWQtb2ZmLWJ5OiBB
bmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgotLS0K
IHhlbi9hcmNoL3g4Ni9tbS5jIHwgNjkgKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKy0tLS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1
OSBpbnNlcnRpb25zKCspLCAxMCBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4
IDM4MTY4MTg5YWEuLjVhNTAzMzkyODQgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC0zODkxLDcg
KzM4OTEsOCBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgc3RydWN0IHZj
cHUgKmN1cnIgPSBjdXJyZW50LCAqdiA9IGN1cnI7CiAgICAgc3RydWN0IGRv
bWFpbiAqZCA9IHYtPmRvbWFpbiwgKnB0X293bmVyID0gZCwgKnBnX293bmVy
OwogICAgIG1mbl90IG1hcF9tZm4gPSBJTlZBTElEX01GTiwgbWZuOwotICAg
IGJvb2wgc3luY19ndWVzdCA9IGZhbHNlOworICAgIGJvb2wgZmx1c2hfbGlu
ZWFyX3B0ID0gZmFsc2UsIGZsdXNoX3Jvb3RfcHRfbG9jYWwgPSBmYWxzZSwK
KyAgICAgICAgZmx1c2hfcm9vdF9wdF9vdGhlcnMgPSBmYWxzZTsKICAgICB1
aW50MzJfdCB4c21fbmVlZGVkID0gMDsKICAgICB1aW50MzJfdCB4c21fY2hl
Y2tlZCA9IDA7CiAgICAgaW50IHJjID0gcHV0X29sZF9ndWVzdF90YWJsZShj
dXJyKTsKQEAgLTQwNDEsNiArNDA0Miw4IEBAIGxvbmcgZG9fbW11X3VwZGF0
ZSgKICAgICAgICAgICAgICAgICAgICAgICAgIGJyZWFrOwogICAgICAgICAg
ICAgICAgICAgICByYyA9IG1vZF9sMl9lbnRyeSh2YSwgbDJlX2Zyb21faW50
cHRlKHJlcS52YWwpLCBtZm4sCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGNtZCA9PSBNTVVfUFRfVVBEQVRFX1BSRVNFUlZFX0FE
LCB2KTsKKyAgICAgICAgICAgICAgICAgICAgaWYgKCAhcmMgKQorICAgICAg
ICAgICAgICAgICAgICAgICAgZmx1c2hfbGluZWFyX3B0ID0gdHJ1ZTsKICAg
ICAgICAgICAgICAgICAgICAgYnJlYWs7CiAKICAgICAgICAgICAgICAgICBj
YXNlIFBHVF9sM19wYWdlX3RhYmxlOgpAQCAtNDA0OCw2ICs0MDUxLDggQEAg
bG9uZyBkb19tbXVfdXBkYXRlKAogICAgICAgICAgICAgICAgICAgICAgICAg
YnJlYWs7CiAgICAgICAgICAgICAgICAgICAgIHJjID0gbW9kX2wzX2VudHJ5
KHZhLCBsM2VfZnJvbV9pbnRwdGUocmVxLnZhbCksIG1mbiwKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY21kID09IE1NVV9QVF9V
UERBVEVfUFJFU0VSVkVfQUQsIHYpOworICAgICAgICAgICAgICAgICAgICBp
ZiAoICFyYyApCisgICAgICAgICAgICAgICAgICAgICAgICBmbHVzaF9saW5l
YXJfcHQgPSB0cnVlOwogICAgICAgICAgICAgICAgICAgICBicmVhazsKIAog
ICAgICAgICAgICAgICAgIGNhc2UgUEdUX2w0X3BhZ2VfdGFibGU6CkBAIC00
MDU1LDYgKzQwNjAsOCBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgICAg
ICAgICAgICAgICAgICAgICBicmVhazsKICAgICAgICAgICAgICAgICAgICAg
cmMgPSBtb2RfbDRfZW50cnkodmEsIGw0ZV9mcm9tX2ludHB0ZShyZXEudmFs
KSwgbWZuLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBjbWQgPT0gTU1VX1BUX1VQREFURV9QUkVTRVJWRV9BRCwgdik7CisgICAg
ICAgICAgICAgICAgICAgIGlmICggIXJjICkKKyAgICAgICAgICAgICAgICAg
ICAgICAgIGZsdXNoX2xpbmVhcl9wdCA9IHRydWU7CiAgICAgICAgICAgICAg
ICAgICAgIGlmICggIXJjICYmIHB0X293bmVyLT5hcmNoLnB2LnhwdGkgKQog
ICAgICAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgICAg
ICBib29sIGxvY2FsX2luX3VzZSA9IGZhbHNlOwpAQCAtNDA2Myw3ICs0MDcw
LDcgQEAgbG9uZyBkb19tbXVfdXBkYXRlKAogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgbWZuKSApCiAgICAgICAgICAgICAgICAgICAg
ICAgICB7CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbG9jYWxfaW5f
dXNlID0gdHJ1ZTsKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICBnZXRf
Y3B1X2luZm8oKS0+cm9vdF9wZ3RfY2hhbmdlZCA9IHRydWU7CisgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgZmx1c2hfcm9vdF9wdF9sb2NhbCA9IHRy
dWU7CiAgICAgICAgICAgICAgICAgICAgICAgICB9CiAKICAgICAgICAgICAg
ICAgICAgICAgICAgIC8qCkBAIC00MDc1LDcgKzQwODIsNyBAQCBsb25nIGRv
X21tdV91cGRhdGUoCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICgx
ICsgISEocGFnZS0+dS5pbnVzZS50eXBlX2luZm8gJiBQR1RfcGlubmVkKSAr
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBtZm5fZXEocGFnZXRh
YmxlX2dldF9tZm4oY3Vyci0+YXJjaC5ndWVzdF90YWJsZV91c2VyKSwKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBtZm4pICsgbG9j
YWxfaW5fdXNlKSApCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgc3lu
Y19ndWVzdCA9IHRydWU7CisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Zmx1c2hfcm9vdF9wdF9vdGhlcnMgPSB0cnVlOwogICAgICAgICAgICAgICAg
ICAgICB9CiAgICAgICAgICAgICAgICAgICAgIGJyZWFrOwogCkBAIC00MTc3
LDE5ICs0MTg0LDYxIEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICBpZiAo
IHZhICkKICAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UodmEpOwogCi0gICAg
aWYgKCBzeW5jX2d1ZXN0ICkKKyAgICAvKgorICAgICAqIFBlcmZvcm0gcmVx
dWlyZWQgVExCIG1haW50ZW5hbmNlLgorICAgICAqCisgICAgICogVGhpcyBs
b2dpYyBjdXJyZW50bHkgZGVwZW5kIG9uIGZsdXNoX2xpbmVhcl9wdCBiZWlu
ZyBhIHN1cGVyc2V0IG9mIHRoZQorICAgICAqIGZsdXNoX3Jvb3RfcHRfKiBj
b25kaXRpb25zLgorICAgICAqCisgICAgICogcHRfb3duZXIgbWF5IG5vdCBi
ZSBjdXJyZW50LT5kb21haW4uICBUaGlzIG1heSBvY2N1ciBkdXJpbmcKKyAg
ICAgKiBjb25zdHJ1Y3Rpb24gb2YgMzJiaXQgUFYgZ3Vlc3RzLCBvciBkZWJ1
Z2dpbmcgb2YgUFYgZ3Vlc3RzLiAgVGhlCisgICAgICogYmVoYXZpb3VyIGNh
bm5vdCBiZSBjb3JyZWN0IHdpdGggZG9tYWluIHVucGF1c2VkLiAgV2UgdGhl
cmVmb3JlIGV4cGVjdAorICAgICAqIHB0X293bmVyLT5kaXJ0eV9jcHVtYXNr
IHRvIGJlIGVtcHR5LCBidXQgaXQgaXMgYSB3YXN0ZSBvZiBlZmZvcnQgdG8K
KyAgICAgKiBleHBsaWNpdGx5IGNoZWNrIGZvciwgYW5kIGV4Y2x1ZGUsIHRo
aXMgY29ybmVyIGNhc2UuCisgICAgICoKKyAgICAgKiBmbHVzaF9saW5lYXJf
cHQgcmVxdWlyZXMgYSBGTFVTSF9UTEIgdG8gYWxsIGRpcnR5IENQVXMuICBU
aGUgZmx1c2ggbXVzdAorICAgICAqIGJlIHBlcmZvcm1lZCBub3cgdG8gbWFp
bnRhaW4gY29ycmVjdCBiZWhhdmlvdXIgYWNyb3NzIGEgbXVsdGljYWxsLgor
ICAgICAqIGkuZS4gd2UgY2Fubm90IHJlbGF4IEZMVVNIX1RMQiB0byBGTFVT
SF9ST09UX1BHVEJMLCBnaXZlbiB0aGF0IHRoZQorICAgICAqIGZvcm1lciBp
cyBhIHNpZGUgZWZmZWN0IG9mIHRoZSBsYXR0ZXIsIGJlY2F1c2UgdGhlIHJl
c3luYyAod2hpY2ggaXMgaW4KKyAgICAgKiB0aGUgcmV0dXJuLXRvLWd1ZXN0
IHBhdGgpIGhhcHBlbnMgdG9vIGxhdGUuCisgICAgICoKKyAgICAgKiBmbHVz
aF9yb290X3B0XyogcmVxdWlyZXMgRkxVU0hfUk9PVF9QR1RCTCBvbiBlaXRo
ZXIgdGhlIGxvY2FsIENQVQorICAgICAqIChpbXBsaWVzIHB0X293bmVyID09
IGN1cnJlbnQtPmRvbWFpbiBhbmQgY3VycmVudC0+cHJvY2Vzc29yIHNldCBp
bgorICAgICAqIHB0X293bmVyLT5kaXJ0eV9jcHVtYXNrKSwgYW5kL29yIGFs
bCAqb3RoZXIqIGRpcnR5IENQVXMgYXMgdGhlcmUgYXJlCisgICAgICogcmVm
ZXJlbmNlcyB3ZSBjYW4ndCBhY2NvdW50IGZvciBsb2NhbGx5LgorICAgICAq
LworICAgIGlmICggZmx1c2hfbGluZWFyX3B0IC8qIHx8IGZsdXNoX3Jvb3Rf
cHRfbG9jYWwgfHwgZmx1c2hfcm9vdF9wdF9vdGhlcnMgKi8gKQogICAgIHsK
KyAgICAgICAgdW5zaWduZWQgaW50IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQo
KTsKKyAgICAgICAgY3B1bWFza190ICptYXNrID0gcHRfb3duZXItPmRpcnR5
X2NwdW1hc2s7CisKICAgICAgICAgLyoKLSAgICAgICAgICogRm9yY2Ugb3Ro
ZXIgdkNQVS1zIG9mIHRoZSBhZmZlY3RlZCBndWVzdCB0byBwaWNrIHVwIEw0
IGVudHJ5Ci0gICAgICAgICAqIGNoYW5nZXMgKGlmIGFueSkuCisgICAgICAg
ICAqIEFsd2F5cyBoYW5kbGUgbG9jYWwgZmx1c2hpbmcgc2VwYXJhdGVseSAo
aWYgYXBwbGljYWJsZSksIHRvCisgICAgICAgICAqIHNlcGFyYXRlIHRoZSBm
bHVzaCBpbnZvY2F0aW9ucyBhcHByb3ByaWF0ZWx5IGZvciBzY29wZSBvZiB0
aGUgdHdvCisgICAgICAgICAqIGZsdXNoX3Jvb3RfcHRfKiB2YXJpYWJsZXMu
CiAgICAgICAgICAqLwotICAgICAgICB1bnNpZ25lZCBpbnQgY3B1ID0gc21w
X3Byb2Nlc3Nvcl9pZCgpOwotICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSBw
ZXJfY3B1KHNjcmF0Y2hfY3B1bWFzaywgY3B1KTsKKyAgICAgICAgaWYgKCBs
aWtlbHkoY3B1bWFza190ZXN0X2NwdShjcHUsIG1hc2spKSApCisgICAgICAg
IHsKKyAgICAgICAgICAgIG1hc2sgPSBwZXJfY3B1KHNjcmF0Y2hfY3B1bWFz
aywgY3B1KTsKIAotICAgICAgICBjcHVtYXNrX2FuZG5vdChtYXNrLCBwdF9v
d25lci0+ZGlydHlfY3B1bWFzaywgY3B1bWFza19vZihjcHUpKTsKKyAgICAg
ICAgICAgIGNwdW1hc2tfY29weShtYXNrLCBwdF9vd25lci0+ZGlydHlfY3B1
bWFzayk7CisgICAgICAgICAgICBfX2NwdW1hc2tfY2xlYXJfY3B1KGNwdSwg
bWFzayk7CisKKyAgICAgICAgICAgIGZsdXNoX2xvY2FsKEZMVVNIX1RMQiB8
CisgICAgICAgICAgICAgICAgICAgICAgICAoZmx1c2hfcm9vdF9wdF9sb2Nh
bCA/IEZMVVNIX1JPT1RfUEdUQkwgOiAwKSk7CisgICAgICAgIH0KKyAgICAg
ICAgZWxzZQorICAgICAgICAgICAgLyogU2FuaXR5IGNoZWNrLiAgZmx1c2hf
cm9vdF9wdF9sb2NhbCBpbXBsaWVzIGxvY2FsIGNwdSBpcyBkaXJ0eS4gKi8K
KyAgICAgICAgICAgIEFTU0VSVCghZmx1c2hfcm9vdF9wdF9sb2NhbCk7CisK
KyAgICAgICAgLyogRmx1c2ggdGhlIHJlbW90ZSBkaXJ0eSBDUFVzLiAgRG9l
cyBub3QgaW5jbHVkZSB0aGUgbG9jYWwgQ1BVLiAqLwogICAgICAgICBpZiAo
ICFjcHVtYXNrX2VtcHR5KG1hc2spICkKLSAgICAgICAgICAgIGZsdXNoX21h
c2sobWFzaywgRkxVU0hfUk9PVF9QR1RCTCk7CisgICAgICAgICAgICBmbHVz
aF9tYXNrKG1hc2ssIEZMVVNIX1RMQiB8CisgICAgICAgICAgICAgICAgICAg
ICAgIChmbHVzaF9yb290X3B0X290aGVycyA/IEZMVVNIX1JPT1RfUEdUQkwg
OiAwKSk7CiAgICAgfQorICAgIGVsc2UKKyAgICAgICAgLyogU2FuaXR5IGNo
ZWNrLiAgZmx1c2hfcm9vdF9wdF8qIGltcGxpZXMgZmx1c2hfbGluZWFyX3B0
LiAqLworICAgICAgICBBU1NFUlQoIWZsdXNoX3Jvb3RfcHRfbG9jYWwgJiYg
IWZsdXNoX3Jvb3RfcHRfb3RoZXJzKTsKIAogICAgIHBlcmZjX2FkZChudW1f
cGFnZV91cGRhdGVzLCBpKTsKIAotLSAKMi4yMC4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Transfer-Encoding: base64

RnJvbSAyMDEyZGI0NjNiOGI4ZTUwMmY2OWE0NmRkOWYzYWE0N2IzMWM5Mzdm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBUaHUsIDIyIE9j
dCAyMDIwIDExOjI4OjU4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHg4
Ni9wdjogRHJvcCBGTFVTSF9UTEJfR0xPQkFMIGluIGRvX21tdV91cGRhdGUo
KSBmb3IgWFBUSQoKYy9zIDlkMWQzMWFkOTQ5OCAieDg2OiBzbGlnaHRseSBy
ZWR1Y2UgTWVsdGRvd24gYmFuZC1haWQgb3ZlcmhlYWQiIHJlbW92ZWQgdGhl
CnVzZSBvZiBHbG9iYWwgVExCIGZsdXNoZXMgb24gdGhlIFhlbiBlbnRyeSBw
YXRoLCBidXQgYWRkZWQgYSBGTFVTSF9UTEJfR0xPQkFMCnRvIHRoZSBMNCBw
YXRoIGluIGRvX21tdV91cGRhdGUoKS4KCkhvd2V2ZXIsIHRoaXMgd2FzIHVu
bmVjZXNzYXJ5LgoKSXQgaXMgdGhlIGd1ZXN0cyByZXNwb25zaWJpbGl0eSB0
byBwZXJmb3JtIGFwcHJvcHJpYXRlIFRMQiBmbHVzaGluZyBpZiB0aGUgTDQK
bW9kaWZpY2F0aW9uIGFsdGVyZWQgYW4gZXN0YWJsaXNoZWQgbWFwcGluZyBp
biBhIGZsdXNoLXJlbGV2YW50IHdheS4gIEluIHRoaXMKY2FzZSwgYW4gTU1V
RVhUX09QIGh5cGVyY2FsbCB3aWxsIGZvbGxvdy4gIFRoZSBjYXNlIHdoaWNo
IFhlbiBuZWVkcyB0byBjb3ZlcgppcyB3aGVuIG5ldyBtYXBwaW5ncyBhcmUg
Y3JlYXRlZCwgYW5kIHRoZSByZXN5bmMgb24gdGhlIGV4aXQtdG8tZ3Vlc3Qg
cGF0aApjb3ZlcnMgdGhpcyBjb3JyZWN0bHkuCgpUaGVyZSBpcyBhIGNvcm5l
ciBjYXNlIHdpdGggbXVsdGlwbGUgdkNQVXMgaW4gaHlwZXJjYWxscyBhdCB0
aGUgc2FtZSB0aW1lLAp3aGljaCA5ZDFkMzFhZDk0OTggY2hhbmdlZCwgYW5k
IHRoaXMgcGF0Y2ggY2hhbmdlcyBiYWNrIHRvIGl0cyBvcmlnaW5hbCBYUFRJ
CmJlaGF2aW91ci4KCkFyY2hpdGVjdHVyYWxseSwgZXN0YWJsaXNoZWQgVExC
IGVudHJpZXMgY2FuIGNvbnRpbnVlIHRvIGJlIHVzZWQgdW50aWwgdGhlCmJy
b2FkY2FzdCBmbHVzaCBoYXMgY29tcGxldGVkLiAgVGhlcmVmb3JlLCBldmVu
IHdpdGggY29uY3VycmVudCBoeXBlcmNhbGxzLAp0aGUgZ3Vlc3QgY2Fubm90
IGRlcGVuZCBvbiBvbGRlciBtYXBwaW5ncyBub3QgYmVpbmcgdXNlZCB1bnRp
bCBhbiBNTVVFWFRfT1AKaHlwZXJjYWxsIGNvbXBsZXRlcy4gIFhlbidzIGlt
cGxlbWVudGF0aW9uIG9mIGd1ZXN0LWluaXRpYXRlZCBmbHVzaGVzIHdpbGwK
dGFrZSBjb3JyZWN0IGVmZmVjdCBvbiB0b3Agb2YgYW4gaW4tcHJvZ3Jlc3Mg
aHlwZXJjYWxsLCBwaWNraW5nIHVwIG5ldyBtYXBwaW5nCnNldHRpbmcgYmVm
b3JlIHRoZSBvdGhlciB2Q1BVJ3MgTU1VRVhUX09QIGNvbXBsZXRlcy4KCk5v
dGU6IFRoZSBjb3JyZWN0bmVzcyBvZiB0aGlzIGNoYW5nZSBpcyBub3QgaW1w
YWN0ZWQgYnkgd2hldGhlciBYUFRJIHVzZXMKZ2xvYmFsIG1hcHBpbmdzIG9y
IG5vdC4gIENvcnJlY3RuZXNzIHRoZXJlIGRlcGVuZHMgb24gdGhlIGJlaGF2
aW91ciBvZiBYZW4gb24KdGhlIGVudHJ5L2V4aXQgcGF0aHMgd2hlbiBzd2l0
Y2hpbmcgdHdvL2Zyb20gdGhlIFhQVEkgInNoYWRvdyIgcGFnZXRhYmxlcy4K
ClRoaXMgaXMgKG5vdCByZWFsbHkpIFhTQS0yODYgKGJ1dCBuZWNlc3Nhcnkg
dG8gc2ltcGxpZnkgdGhlIGxvZ2ljKS4KCkZpeGVzOiA5ZDFkMzFhZDk0OTgg
KCJ4ODY6IHNsaWdodGx5IHJlZHVjZSBNZWx0ZG93biBiYW5kLWFpZCBvdmVy
aGVhZCIpClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CihjaGVycnkgcGlja2VkIGZyb20gY29tbWl0
IDA1NWUxYzNhM2Q5NWIxZTc1MzE0ODM2OWZiYzRiYTQ4NzgyZGQ2MDIpCi0t
LQogeGVuL2FyY2gveDg2L21tLmMgfCAyICstCiAxIGZpbGUgY2hhbmdlZCwg
MSBpbnNlcnRpb24oKyksIDEgZGVsZXRpb24oLSkKCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDcw
OTBiZTdiNGEuLjJhYWMxY2JmMTAgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4
Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC00MjM1LDcgKzQy
MzUsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAKICAgICAgICAgY3B1bWFz
a19hbmRub3QobWFzaywgcHRfb3duZXItPmRvbWFpbl9kaXJ0eV9jcHVtYXNr
LCBjcHVtYXNrX29mKGNwdSkpOwogICAgICAgICBpZiAoICFjcHVtYXNrX2Vt
cHR5KG1hc2spICkKLSAgICAgICAgICAgIGZsdXNoX21hc2sobWFzaywgRkxV
U0hfVExCX0dMT0JBTCB8IEZMVVNIX1JPT1RfUEdUQkwpOworICAgICAgICAg
ICAgZmx1c2hfbWFzayhtYXNrLCBGTFVTSF9ST09UX1BHVEJMKTsKICAgICB9
CiAKICAgICBwZXJmY19hZGQobnVtX3BhZ2VfdXBkYXRlcywgaSk7Ci0tIAoy
LjIwLjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Transfer-Encoding: base64

RnJvbSA3OGQ5MDNlOTVlZmM1YjAxNjZiMzkzZDI4OWE2ODdjNjQwMTZlOGVm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBNb24sIDE5IE9j
dCAyMDIwIDE1OjUxOjIyICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHg4
Ni9wdjogRmx1c2ggVExCIGluIHJlc3BvbnNlIHRvIHBhZ2luZyBzdHJ1Y3R1
cmUgY2hhbmdlcwoKV2l0aCBNTVVfVVBEQVRFLCBhIFBWIGd1ZXN0IGNhbiBt
YWtlIGNoYW5nZXMgdG8gaGlnaGVyIGxldmVsIHBhZ2V0YWJsZXMuICBUaGlz
CmlzIHNhZmUgZnJvbSBYZW4ncyBwb2ludCBvZiB2aWV3IChhcyB0aGUgdXBk
YXRlIG9ubHkgYWZmZWN0cyBndWVzdCBtYXBwaW5ncyksCmFuZCB0aGUgZ3Vl
c3QgaXMgcmVxdWlyZWQgdG8gZmx1c2ggKGlmIG5lY2Vzc2FyeSkgYWZ0ZXIg
bWFraW5nIHVwZGF0ZXMuCgpIb3dldmVyLCBYZW4ncyB1c2Ugb2YgbGluZWFy
IHBhZ2V0YWJsZXMgKFVQREFURV9WQV9NQVBQSU5HLCBHTlRUQUJPUF9tYXAs
CndyaXRlYWJsZSBwYWdldGFibGVzLCBldGMuKSBpcyBhbiBpbXBsZW1lbnRh
dGlvbiBkZXRhaWwgb3V0c2lkZSBvZiB0aGUKQVBJL0FCSS4KCkNoYW5nZXMg
aW4gdGhlIHBhZ2luZyBzdHJ1Y3R1cmUgcmVxdWlyZSBpbnZhbGlkYXRpb25z
IGluIHRoZSBsaW5lYXIgcGFnZXRhYmxlCnJhbmdlIGZvciBzdWJzZXF1ZW50
IGFjY2Vzc2VzIGludG8gdGhlIGxpbmVhciBwYWdldGFibGVzIHRvIGFjY2Vz
cyBub24tc3RhbGUKbWFwcGluZ3MuICBYZW4gbXVzdCBwcm92aWRlIHN1aXRh
YmxlIGZsdXNoaW5nIHRvIHByZXZlbnQgaW50ZXJtaXhlZCBndWVzdAphY3Rp
b25zIGZyb20gYWNjaWRlbnRhbGx5IGFjY2Vzc2luZy9tb2RpZnlpbmcgdGhl
IHdyb25nIHBhZ2V0YWJsZS4KCkZvciBhbGwgTDIgYW5kIGhpZ2hlciBtb2Rp
ZmljYXRpb25zLCBmbHVzaCB0aGUgVExCLiAgUFYgZ3Vlc3RzIGNhbm5vdCBj
cmVhdGUKTDIgb3IgaGlnaGVyIGVudHJpZXMgd2l0aCB0aGUgR2xvYmFsIGJp
dCBzZXQsIHNvIG5vIG1hcHBpbmdzIGVzdGFibGlzaGVkIGluCnRoZSBsaW5l
YXIgcmFuZ2UgY2FuIGJlIGdsb2JhbC4gIChUaGlzIGNvdWxkIGluIHByaW5j
aXBsZSBiZSBhbiBvcmRlciAzOSBmbHVzaApzdGFydGluZyBhdCBMSU5FQVJf
UFRfVklSVF9TVEFSVCwgYnV0IG5vIHN1Y2ggbWVjaGFuaXNtIGV4aXN0cyBp
biBwcmFjdGljZS4pCgpFeHByZXNzIHRoZSBuZWNlc3NhcnkgZmx1c2hlcyBh
cyBhIHNldCBvZiBib29sZWFucyB3aGljaCBhY2N1bXVsYXRlIGFjcm9zcyB0
aGUKb3BlcmF0aW9uLiAgQ29tbWVudCB0aGUgZmx1c2hpbmcgbG9naWMgZXh0
ZW5zaXZlbHkuCgpUaGlzIGlzIFhTQS0yODYuCgpTaWduZWQtb2ZmLWJ5OiBB
bmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hl
cnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCAxNmEyMDk2M2IzMjA5Nzg4ZjJjMGQz
YTNlZWJiN2Q5MmYwM2Y1ODgzKQotLS0KIHhlbi9hcmNoL3g4Ni9tbS5jIHwg
NjkgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0t
LS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1OSBpbnNlcnRpb25zKCspLCAxMCBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDJhYWMxY2JmMTAuLjY2Mzk5YTVm
NjEgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCkBAIC0zOTQ5LDcgKzM5NDksOCBAQCBsb25nIGRvX21t
dV91cGRhdGUoCiAgICAgc3RydWN0IHZjcHUgKmN1cnIgPSBjdXJyZW50LCAq
diA9IGN1cnI7CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IHYtPmRvbWFpbiwg
KnB0X293bmVyID0gZCwgKnBnX293bmVyOwogICAgIG1mbl90IG1hcF9tZm4g
PSBJTlZBTElEX01GTjsKLSAgICBib29sIHN5bmNfZ3Vlc3QgPSBmYWxzZTsK
KyAgICBib29sIGZsdXNoX2xpbmVhcl9wdCA9IGZhbHNlLCBmbHVzaF9yb290
X3B0X2xvY2FsID0gZmFsc2UsCisgICAgICAgIGZsdXNoX3Jvb3RfcHRfb3Ro
ZXJzID0gZmFsc2U7CiAgICAgdWludDMyX3QgeHNtX25lZWRlZCA9IDA7CiAg
ICAgdWludDMyX3QgeHNtX2NoZWNrZWQgPSAwOwogICAgIGludCByYyA9IHB1
dF9vbGRfZ3Vlc3RfdGFibGUoY3Vycik7CkBAIC00MDk2LDE0ICs0MDk3LDIw
IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAgICAgICAgICAgICBjYXNl
IFBHVF9sMl9wYWdlX3RhYmxlOgogICAgICAgICAgICAgICAgICAgICByYyA9
IG1vZF9sMl9lbnRyeSh2YSwgbDJlX2Zyb21faW50cHRlKHJlcS52YWwpLCBt
Zm4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNt
ZCA9PSBNTVVfUFRfVVBEQVRFX1BSRVNFUlZFX0FELCB2KTsKKyAgICAgICAg
ICAgICAgICAgICAgaWYgKCAhcmMgKQorICAgICAgICAgICAgICAgICAgICAg
ICAgZmx1c2hfbGluZWFyX3B0ID0gdHJ1ZTsKICAgICAgICAgICAgICAgICAg
ICAgYnJlYWs7CiAgICAgICAgICAgICAgICAgY2FzZSBQR1RfbDNfcGFnZV90
YWJsZToKICAgICAgICAgICAgICAgICAgICAgcmMgPSBtb2RfbDNfZW50cnko
dmEsIGwzZV9mcm9tX2ludHB0ZShyZXEudmFsKSwgbWZuLAogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjbWQgPT0gTU1VX1BUX1VQ
REFURV9QUkVTRVJWRV9BRCwgdik7CisgICAgICAgICAgICAgICAgICAgIGlm
ICggIXJjICkKKyAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2xpbmVh
cl9wdCA9IHRydWU7CiAgICAgICAgICAgICAgICAgICAgIGJyZWFrOwogICAg
ICAgICAgICAgICAgIGNhc2UgUEdUX2w0X3BhZ2VfdGFibGU6CiAgICAgICAg
ICAgICAgICAgICAgIHJjID0gbW9kX2w0X2VudHJ5KHZhLCBsNGVfZnJvbV9p
bnRwdGUocmVxLnZhbCksIG1mbiwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgY21kID09IE1NVV9QVF9VUERBVEVfUFJFU0VSVkVf
QUQsIHYpOworICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyApCisgICAg
ICAgICAgICAgICAgICAgICAgICBmbHVzaF9saW5lYXJfcHQgPSB0cnVlOwog
ICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyAmJiAhY3B1X2hhc19ub194
cHRpICkKICAgICAgICAgICAgICAgICAgICAgewogICAgICAgICAgICAgICAg
ICAgICAgICAgYm9vbCBsb2NhbF9pbl91c2UgPSBmYWxzZTsKQEAgLTQxMTEs
NyArNDExOCw3IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAgICAgICAg
ICAgICAgICAgICAgIGlmICggcGFnZXRhYmxlX2dldF9wZm4oY3Vyci0+YXJj
aC5ndWVzdF90YWJsZSkgPT0gbWZuICkKICAgICAgICAgICAgICAgICAgICAg
ICAgIHsKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsb2NhbF9pbl91
c2UgPSB0cnVlOwotICAgICAgICAgICAgICAgICAgICAgICAgICAgIGdldF9j
cHVfaW5mbygpLT5yb290X3BndF9jaGFuZ2VkID0gdHJ1ZTsKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBmbHVzaF9yb290X3B0X2xvY2FsID0gdHJ1
ZTsKICAgICAgICAgICAgICAgICAgICAgICAgIH0KIAogICAgICAgICAgICAg
ICAgICAgICAgICAgLyoKQEAgLTQxMjMsNyArNDEzMCw3IEBAIGxvbmcgZG9f
bW11X3VwZGF0ZSgKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKDEg
KyAhIShwYWdlLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF9waW5uZWQpICsK
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIChwYWdldGFibGVfZ2V0
X3BmbihjdXJyLT5hcmNoLmd1ZXN0X3RhYmxlX3VzZXIpID09CiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgbWZuKSArIGxvY2FsX2luX3VzZSkg
KQotICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN5bmNfZ3Vlc3QgPSB0
cnVlOworICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX3Jvb3Rf
cHRfb3RoZXJzID0gdHJ1ZTsKICAgICAgICAgICAgICAgICAgICAgfQogICAg
ICAgICAgICAgICAgICAgICBicmVhazsKICAgICAgICAgICAgICAgICBjYXNl
IFBHVF93cml0YWJsZV9wYWdlOgpAQCAtNDIyNCwxOSArNDIzMSw2MSBAQCBs
b25nIGRvX21tdV91cGRhdGUoCiAgICAgaWYgKCB2YSApCiAgICAgICAgIHVu
bWFwX2RvbWFpbl9wYWdlKHZhKTsKIAotICAgIGlmICggc3luY19ndWVzdCAp
CisgICAgLyoKKyAgICAgKiBQZXJmb3JtIHJlcXVpcmVkIFRMQiBtYWludGVu
YW5jZS4KKyAgICAgKgorICAgICAqIFRoaXMgbG9naWMgY3VycmVudGx5IGRl
cGVuZCBvbiBmbHVzaF9saW5lYXJfcHQgYmVpbmcgYSBzdXBlcnNldCBvZiB0
aGUKKyAgICAgKiBmbHVzaF9yb290X3B0XyogY29uZGl0aW9ucy4KKyAgICAg
KgorICAgICAqIHB0X293bmVyIG1heSBub3QgYmUgY3VycmVudC0+ZG9tYWlu
LiAgVGhpcyBtYXkgb2NjdXIgZHVyaW5nCisgICAgICogY29uc3RydWN0aW9u
IG9mIDMyYml0IFBWIGd1ZXN0cywgb3IgZGVidWdnaW5nIG9mIFBWIGd1ZXN0
cy4gIFRoZQorICAgICAqIGJlaGF2aW91ciBjYW5ub3QgYmUgY29ycmVjdCB3
aXRoIGRvbWFpbiB1bnBhdXNlZC4gIFdlIHRoZXJlZm9yZSBleHBlY3QKKyAg
ICAgKiBwdF9vd25lci0+ZGlydHlfY3B1bWFzayB0byBiZSBlbXB0eSwgYnV0
IGl0IGlzIGEgd2FzdGUgb2YgZWZmb3J0IHRvCisgICAgICogZXhwbGljaXRs
eSBjaGVjayBmb3IsIGFuZCBleGNsdWRlLCB0aGlzIGNvcm5lciBjYXNlLgor
ICAgICAqCisgICAgICogZmx1c2hfbGluZWFyX3B0IHJlcXVpcmVzIGEgRkxV
U0hfVExCIHRvIGFsbCBkaXJ0eSBDUFVzLiAgVGhlIGZsdXNoIG11c3QKKyAg
ICAgKiBiZSBwZXJmb3JtZWQgbm93IHRvIG1haW50YWluIGNvcnJlY3QgYmVo
YXZpb3VyIGFjcm9zcyBhIG11bHRpY2FsbC4KKyAgICAgKiBpLmUuIHdlIGNh
bm5vdCByZWxheCBGTFVTSF9UTEIgdG8gRkxVU0hfUk9PVF9QR1RCTCwgZ2l2
ZW4gdGhhdCB0aGUKKyAgICAgKiBmb3JtZXIgaXMgYSBzaWRlIGVmZmVjdCBv
ZiB0aGUgbGF0dGVyLCBiZWNhdXNlIHRoZSByZXN5bmMgKHdoaWNoIGlzIGlu
CisgICAgICogdGhlIHJldHVybi10by1ndWVzdCBwYXRoKSBoYXBwZW5zIHRv
byBsYXRlLgorICAgICAqCisgICAgICogZmx1c2hfcm9vdF9wdF8qIHJlcXVp
cmVzIEZMVVNIX1JPT1RfUEdUQkwgb24gZWl0aGVyIHRoZSBsb2NhbCBDUFUK
KyAgICAgKiAoaW1wbGllcyBwdF9vd25lciA9PSBjdXJyZW50LT5kb21haW4g
YW5kIGN1cnJlbnQtPnByb2Nlc3NvciBzZXQgaW4KKyAgICAgKiBwdF9vd25l
ci0+ZGlydHlfY3B1bWFzayksIGFuZC9vciBhbGwgKm90aGVyKiBkaXJ0eSBD
UFVzIGFzIHRoZXJlIGFyZQorICAgICAqIHJlZmVyZW5jZXMgd2UgY2FuJ3Qg
YWNjb3VudCBmb3IgbG9jYWxseS4KKyAgICAgKi8KKyAgICBpZiAoIGZsdXNo
X2xpbmVhcl9wdCAvKiB8fCBmbHVzaF9yb290X3B0X2xvY2FsIHx8IGZsdXNo
X3Jvb3RfcHRfb3RoZXJzICovICkKICAgICB7CisgICAgICAgIHVuc2lnbmVk
IGludCBjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CisgICAgICAgIGNwdW1h
c2tfdCAqbWFzayA9IHB0X293bmVyLT5kb21haW5fZGlydHlfY3B1bWFzazsK
KwogICAgICAgICAvKgotICAgICAgICAgKiBGb3JjZSBvdGhlciB2Q1BVLXMg
b2YgdGhlIGFmZmVjdGVkIGd1ZXN0IHRvIHBpY2sgdXAgTDQgZW50cnkKLSAg
ICAgICAgICogY2hhbmdlcyAoaWYgYW55KS4KKyAgICAgICAgICogQWx3YXlz
IGhhbmRsZSBsb2NhbCBmbHVzaGluZyBzZXBhcmF0ZWx5IChpZiBhcHBsaWNh
YmxlKSwgdG8KKyAgICAgICAgICogc2VwYXJhdGUgdGhlIGZsdXNoIGludm9j
YXRpb25zIGFwcHJvcHJpYXRlbHkgZm9yIHNjb3BlIG9mIHRoZSB0d28KKyAg
ICAgICAgICogZmx1c2hfcm9vdF9wdF8qIHZhcmlhYmxlcy4KICAgICAgICAg
ICovCi0gICAgICAgIHVuc2lnbmVkIGludCBjcHUgPSBzbXBfcHJvY2Vzc29y
X2lkKCk7Ci0gICAgICAgIGNwdW1hc2tfdCAqbWFzayA9IHBlcl9jcHUoc2Ny
YXRjaF9jcHVtYXNrLCBjcHUpOworICAgICAgICBpZiAoIGxpa2VseShjcHVt
YXNrX3Rlc3RfY3B1KGNwdSwgbWFzaykpICkKKyAgICAgICAgeworICAgICAg
ICAgICAgbWFzayA9IHBlcl9jcHUoc2NyYXRjaF9jcHVtYXNrLCBjcHUpOwog
Ci0gICAgICAgIGNwdW1hc2tfYW5kbm90KG1hc2ssIHB0X293bmVyLT5kb21h
aW5fZGlydHlfY3B1bWFzaywgY3B1bWFza19vZihjcHUpKTsKKyAgICAgICAg
ICAgIGNwdW1hc2tfY29weShtYXNrLCBwdF9vd25lci0+ZG9tYWluX2RpcnR5
X2NwdW1hc2spOworICAgICAgICAgICAgX19jcHVtYXNrX2NsZWFyX2NwdShj
cHUsIG1hc2spOworCisgICAgICAgICAgICBmbHVzaF9sb2NhbChGTFVTSF9U
TEIgfAorICAgICAgICAgICAgICAgICAgICAgICAgKGZsdXNoX3Jvb3RfcHRf
bG9jYWwgPyBGTFVTSF9ST09UX1BHVEJMIDogMCkpOworICAgICAgICB9Cisg
ICAgICAgIGVsc2UKKyAgICAgICAgICAgIC8qIFNhbml0eSBjaGVjay4gIGZs
dXNoX3Jvb3RfcHRfbG9jYWwgaW1wbGllcyBsb2NhbCBjcHUgaXMgZGlydHku
ICovCisgICAgICAgICAgICBBU1NFUlQoIWZsdXNoX3Jvb3RfcHRfbG9jYWwp
OworCisgICAgICAgIC8qIEZsdXNoIHRoZSByZW1vdGUgZGlydHkgQ1BVcy4g
IERvZXMgbm90IGluY2x1ZGUgdGhlIGxvY2FsIENQVS4gKi8KICAgICAgICAg
aWYgKCAhY3B1bWFza19lbXB0eShtYXNrKSApCi0gICAgICAgICAgICBmbHVz
aF9tYXNrKG1hc2ssIEZMVVNIX1JPT1RfUEdUQkwpOworICAgICAgICAgICAg
Zmx1c2hfbWFzayhtYXNrLCBGTFVTSF9UTEIgfAorICAgICAgICAgICAgICAg
ICAgICAgICAoZmx1c2hfcm9vdF9wdF9vdGhlcnMgPyBGTFVTSF9ST09UX1BH
VEJMIDogMCkpOwogICAgIH0KKyAgICBlbHNlCisgICAgICAgIC8qIFNhbml0
eSBjaGVjay4gIGZsdXNoX3Jvb3RfcHRfKiBpbXBsaWVzIGZsdXNoX2xpbmVh
cl9wdC4gKi8KKyAgICAgICAgQVNTRVJUKCFmbHVzaF9yb290X3B0X2xvY2Fs
ICYmICFmbHVzaF9yb290X3B0X290aGVycyk7CiAKICAgICBwZXJmY19hZGQo
bnVtX3BhZ2VfdXBkYXRlcywgaSk7CiAKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Transfer-Encoding: base64

RnJvbSAxZDAyMWRiM2M4NzEyZDI1ZTI1ZjA3ODgzM2JhYTE2MGM5MGYyNjBm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBUaHUsIDIyIE9j
dCAyMDIwIDExOjI4OjU4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHg4
Ni9wdjogRHJvcCBGTFVTSF9UTEJfR0xPQkFMIGluIGRvX21tdV91cGRhdGUo
KSBmb3IgWFBUSQoKYy9zIDlkMWQzMWFkOTQ5OCAieDg2OiBzbGlnaHRseSBy
ZWR1Y2UgTWVsdGRvd24gYmFuZC1haWQgb3ZlcmhlYWQiIHJlbW92ZWQgdGhl
CnVzZSBvZiBHbG9iYWwgVExCIGZsdXNoZXMgb24gdGhlIFhlbiBlbnRyeSBw
YXRoLCBidXQgYWRkZWQgYSBGTFVTSF9UTEJfR0xPQkFMCnRvIHRoZSBMNCBw
YXRoIGluIGRvX21tdV91cGRhdGUoKS4KCkhvd2V2ZXIsIHRoaXMgd2FzIHVu
bmVjZXNzYXJ5LgoKSXQgaXMgdGhlIGd1ZXN0cyByZXNwb25zaWJpbGl0eSB0
byBwZXJmb3JtIGFwcHJvcHJpYXRlIFRMQiBmbHVzaGluZyBpZiB0aGUgTDQK
bW9kaWZpY2F0aW9uIGFsdGVyZWQgYW4gZXN0YWJsaXNoZWQgbWFwcGluZyBp
biBhIGZsdXNoLXJlbGV2YW50IHdheS4gIEluIHRoaXMKY2FzZSwgYW4gTU1V
RVhUX09QIGh5cGVyY2FsbCB3aWxsIGZvbGxvdy4gIFRoZSBjYXNlIHdoaWNo
IFhlbiBuZWVkcyB0byBjb3ZlcgppcyB3aGVuIG5ldyBtYXBwaW5ncyBhcmUg
Y3JlYXRlZCwgYW5kIHRoZSByZXN5bmMgb24gdGhlIGV4aXQtdG8tZ3Vlc3Qg
cGF0aApjb3ZlcnMgdGhpcyBjb3JyZWN0bHkuCgpUaGVyZSBpcyBhIGNvcm5l
ciBjYXNlIHdpdGggbXVsdGlwbGUgdkNQVXMgaW4gaHlwZXJjYWxscyBhdCB0
aGUgc2FtZSB0aW1lLAp3aGljaCA5ZDFkMzFhZDk0OTggY2hhbmdlZCwgYW5k
IHRoaXMgcGF0Y2ggY2hhbmdlcyBiYWNrIHRvIGl0cyBvcmlnaW5hbCBYUFRJ
CmJlaGF2aW91ci4KCkFyY2hpdGVjdHVyYWxseSwgZXN0YWJsaXNoZWQgVExC
IGVudHJpZXMgY2FuIGNvbnRpbnVlIHRvIGJlIHVzZWQgdW50aWwgdGhlCmJy
b2FkY2FzdCBmbHVzaCBoYXMgY29tcGxldGVkLiAgVGhlcmVmb3JlLCBldmVu
IHdpdGggY29uY3VycmVudCBoeXBlcmNhbGxzLAp0aGUgZ3Vlc3QgY2Fubm90
IGRlcGVuZCBvbiBvbGRlciBtYXBwaW5ncyBub3QgYmVpbmcgdXNlZCB1bnRp
bCBhbiBNTVVFWFRfT1AKaHlwZXJjYWxsIGNvbXBsZXRlcy4gIFhlbidzIGlt
cGxlbWVudGF0aW9uIG9mIGd1ZXN0LWluaXRpYXRlZCBmbHVzaGVzIHdpbGwK
dGFrZSBjb3JyZWN0IGVmZmVjdCBvbiB0b3Agb2YgYW4gaW4tcHJvZ3Jlc3Mg
aHlwZXJjYWxsLCBwaWNraW5nIHVwIG5ldyBtYXBwaW5nCnNldHRpbmcgYmVm
b3JlIHRoZSBvdGhlciB2Q1BVJ3MgTU1VRVhUX09QIGNvbXBsZXRlcy4KCk5v
dGU6IFRoZSBjb3JyZWN0bmVzcyBvZiB0aGlzIGNoYW5nZSBpcyBub3QgaW1w
YWN0ZWQgYnkgd2hldGhlciBYUFRJIHVzZXMKZ2xvYmFsIG1hcHBpbmdzIG9y
IG5vdC4gIENvcnJlY3RuZXNzIHRoZXJlIGRlcGVuZHMgb24gdGhlIGJlaGF2
aW91ciBvZiBYZW4gb24KdGhlIGVudHJ5L2V4aXQgcGF0aHMgd2hlbiBzd2l0
Y2hpbmcgdHdvL2Zyb20gdGhlIFhQVEkgInNoYWRvdyIgcGFnZXRhYmxlcy4K
ClRoaXMgaXMgKG5vdCByZWFsbHkpIFhTQS0yODYgKGJ1dCBuZWNlc3Nhcnkg
dG8gc2ltcGxpZnkgdGhlIGxvZ2ljKS4KCkZpeGVzOiA5ZDFkMzFhZDk0OTgg
KCJ4ODY6IHNsaWdodGx5IHJlZHVjZSBNZWx0ZG93biBiYW5kLWFpZCBvdmVy
aGVhZCIpClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CihjaGVycnkgcGlja2VkIGZyb20gY29tbWl0
IDA1NWUxYzNhM2Q5NWIxZTc1MzE0ODM2OWZiYzRiYTQ4NzgyZGQ2MDIpCi0t
LQogeGVuL2FyY2gveDg2L21tLmMgfCAyICstCiAxIGZpbGUgY2hhbmdlZCwg
MSBpbnNlcnRpb24oKyksIDEgZGVsZXRpb24oLSkKCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDVj
YTVjOGM5YTIuLjEyOWRhMWU2NDggMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4
Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC00Mjc5LDcgKzQy
NzksNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAKICAgICAgICAgY3B1bWFz
a19hbmRub3QobWFzaywgcHRfb3duZXItPmRpcnR5X2NwdW1hc2ssIGNwdW1h
c2tfb2YoY3B1KSk7CiAgICAgICAgIGlmICggIWNwdW1hc2tfZW1wdHkobWFz
aykgKQotICAgICAgICAgICAgZmx1c2hfbWFzayhtYXNrLCBGTFVTSF9UTEJf
R0xPQkFMIHwgRkxVU0hfUk9PVF9QR1RCTCk7CisgICAgICAgICAgICBmbHVz
aF9tYXNrKG1hc2ssIEZMVVNIX1JPT1RfUEdUQkwpOwogICAgIH0KIAogICAg
IHBlcmZjX2FkZChudW1fcGFnZV91cGRhdGVzLCBpKTsKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Transfer-Encoding: base64

RnJvbSBlMjc0YzhiZGMxMmViNTk2ZTU1MjMzMDQwZThiNDlkYTI3MTUwZjMx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBNb24sIDE5IE9j
dCAyMDIwIDE1OjUxOjIyICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHg4
Ni9wdjogRmx1c2ggVExCIGluIHJlc3BvbnNlIHRvIHBhZ2luZyBzdHJ1Y3R1
cmUgY2hhbmdlcwoKV2l0aCBNTVVfVVBEQVRFLCBhIFBWIGd1ZXN0IGNhbiBt
YWtlIGNoYW5nZXMgdG8gaGlnaGVyIGxldmVsIHBhZ2V0YWJsZXMuICBUaGlz
CmlzIHNhZmUgZnJvbSBYZW4ncyBwb2ludCBvZiB2aWV3IChhcyB0aGUgdXBk
YXRlIG9ubHkgYWZmZWN0cyBndWVzdCBtYXBwaW5ncyksCmFuZCB0aGUgZ3Vl
c3QgaXMgcmVxdWlyZWQgdG8gZmx1c2ggKGlmIG5lY2Vzc2FyeSkgYWZ0ZXIg
bWFraW5nIHVwZGF0ZXMuCgpIb3dldmVyLCBYZW4ncyB1c2Ugb2YgbGluZWFy
IHBhZ2V0YWJsZXMgKFVQREFURV9WQV9NQVBQSU5HLCBHTlRUQUJPUF9tYXAs
CndyaXRlYWJsZSBwYWdldGFibGVzLCBldGMuKSBpcyBhbiBpbXBsZW1lbnRh
dGlvbiBkZXRhaWwgb3V0c2lkZSBvZiB0aGUKQVBJL0FCSS4KCkNoYW5nZXMg
aW4gdGhlIHBhZ2luZyBzdHJ1Y3R1cmUgcmVxdWlyZSBpbnZhbGlkYXRpb25z
IGluIHRoZSBsaW5lYXIgcGFnZXRhYmxlCnJhbmdlIGZvciBzdWJzZXF1ZW50
IGFjY2Vzc2VzIGludG8gdGhlIGxpbmVhciBwYWdldGFibGVzIHRvIGFjY2Vz
cyBub24tc3RhbGUKbWFwcGluZ3MuICBYZW4gbXVzdCBwcm92aWRlIHN1aXRh
YmxlIGZsdXNoaW5nIHRvIHByZXZlbnQgaW50ZXJtaXhlZCBndWVzdAphY3Rp
b25zIGZyb20gYWNjaWRlbnRhbGx5IGFjY2Vzc2luZy9tb2RpZnlpbmcgdGhl
IHdyb25nIHBhZ2V0YWJsZS4KCkZvciBhbGwgTDIgYW5kIGhpZ2hlciBtb2Rp
ZmljYXRpb25zLCBmbHVzaCB0aGUgVExCLiAgUFYgZ3Vlc3RzIGNhbm5vdCBj
cmVhdGUKTDIgb3IgaGlnaGVyIGVudHJpZXMgd2l0aCB0aGUgR2xvYmFsIGJp
dCBzZXQsIHNvIG5vIG1hcHBpbmdzIGVzdGFibGlzaGVkIGluCnRoZSBsaW5l
YXIgcmFuZ2UgY2FuIGJlIGdsb2JhbC4gIChUaGlzIGNvdWxkIGluIHByaW5j
aXBsZSBiZSBhbiBvcmRlciAzOSBmbHVzaApzdGFydGluZyBhdCBMSU5FQVJf
UFRfVklSVF9TVEFSVCwgYnV0IG5vIHN1Y2ggbWVjaGFuaXNtIGV4aXN0cyBp
biBwcmFjdGljZS4pCgpFeHByZXNzIHRoZSBuZWNlc3NhcnkgZmx1c2hlcyBh
cyBhIHNldCBvZiBib29sZWFucyB3aGljaCBhY2N1bXVsYXRlIGFjcm9zcyB0
aGUKb3BlcmF0aW9uLiAgQ29tbWVudCB0aGUgZmx1c2hpbmcgbG9naWMgZXh0
ZW5zaXZlbHkuCgpUaGlzIGlzIFhTQS0yODYuCgpTaWduZWQtb2ZmLWJ5OiBB
bmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hl
cnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCAxNmEyMDk2M2IzMjA5Nzg4ZjJjMGQz
YTNlZWJiN2Q5MmYwM2Y1ODgzKQotLS0KIHhlbi9hcmNoL3g4Ni9tbS5jIHwg
NjkgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0t
LS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1OSBpbnNlcnRpb25zKCspLCAxMCBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDEyOWRhMWU2NDguLjM1MjhjZjZi
ODUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCkBAIC0zOTgzLDcgKzM5ODMsOCBAQCBsb25nIGRvX21t
dV91cGRhdGUoCiAgICAgc3RydWN0IHZjcHUgKmN1cnIgPSBjdXJyZW50LCAq
diA9IGN1cnI7CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IHYtPmRvbWFpbiwg
KnB0X293bmVyID0gZCwgKnBnX293bmVyOwogICAgIG1mbl90IG1hcF9tZm4g
PSBJTlZBTElEX01GTjsKLSAgICBib29sIHN5bmNfZ3Vlc3QgPSBmYWxzZTsK
KyAgICBib29sIGZsdXNoX2xpbmVhcl9wdCA9IGZhbHNlLCBmbHVzaF9yb290
X3B0X2xvY2FsID0gZmFsc2UsCisgICAgICAgIGZsdXNoX3Jvb3RfcHRfb3Ro
ZXJzID0gZmFsc2U7CiAgICAgdWludDMyX3QgeHNtX25lZWRlZCA9IDA7CiAg
ICAgdWludDMyX3QgeHNtX2NoZWNrZWQgPSAwOwogICAgIGludCByYyA9IHB1
dF9vbGRfZ3Vlc3RfdGFibGUoY3Vycik7CkBAIC00MTMzLDYgKzQxMzQsOCBA
QCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgICAgICAgICAgICAgICAgICAg
ICBicmVhazsKICAgICAgICAgICAgICAgICAgICAgcmMgPSBtb2RfbDJfZW50
cnkodmEsIGwyZV9mcm9tX2ludHB0ZShyZXEudmFsKSwgbWZuLAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjbWQgPT0gTU1VX1BU
X1VQREFURV9QUkVTRVJWRV9BRCwgdik7CisgICAgICAgICAgICAgICAgICAg
IGlmICggIXJjICkKKyAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2xp
bmVhcl9wdCA9IHRydWU7CiAgICAgICAgICAgICAgICAgICAgIGJyZWFrOwog
CiAgICAgICAgICAgICAgICAgY2FzZSBQR1RfbDNfcGFnZV90YWJsZToKQEAg
LTQxNDAsNiArNDE0Myw4IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAg
ICAgICAgICAgICAgICAgICAgIGJyZWFrOwogICAgICAgICAgICAgICAgICAg
ICByYyA9IG1vZF9sM19lbnRyeSh2YSwgbDNlX2Zyb21faW50cHRlKHJlcS52
YWwpLCBtZm4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGNtZCA9PSBNTVVfUFRfVVBEQVRFX1BSRVNFUlZFX0FELCB2KTsKKyAg
ICAgICAgICAgICAgICAgICAgaWYgKCAhcmMgKQorICAgICAgICAgICAgICAg
ICAgICAgICAgZmx1c2hfbGluZWFyX3B0ID0gdHJ1ZTsKICAgICAgICAgICAg
ICAgICAgICAgYnJlYWs7CiAKICAgICAgICAgICAgICAgICBjYXNlIFBHVF9s
NF9wYWdlX3RhYmxlOgpAQCAtNDE0Nyw2ICs0MTUyLDggQEAgbG9uZyBkb19t
bXVfdXBkYXRlKAogICAgICAgICAgICAgICAgICAgICAgICAgYnJlYWs7CiAg
ICAgICAgICAgICAgICAgICAgIHJjID0gbW9kX2w0X2VudHJ5KHZhLCBsNGVf
ZnJvbV9pbnRwdGUocmVxLnZhbCksIG1mbiwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY21kID09IE1NVV9QVF9VUERBVEVfUFJF
U0VSVkVfQUQsIHYpOworICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyAp
CisgICAgICAgICAgICAgICAgICAgICAgICBmbHVzaF9saW5lYXJfcHQgPSB0
cnVlOwogICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyAmJiBwdF9vd25l
ci0+YXJjaC5wdl9kb21haW4ueHB0aSApCiAgICAgICAgICAgICAgICAgICAg
IHsKICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2wgbG9jYWxfaW5fdXNl
ID0gZmFsc2U7CkBAIC00MTU0LDcgKzQxNjEsNyBAQCBsb25nIGRvX21tdV91
cGRhdGUoCiAgICAgICAgICAgICAgICAgICAgICAgICBpZiAoIHBhZ2V0YWJs
ZV9nZXRfcGZuKGN1cnItPmFyY2guZ3Vlc3RfdGFibGUpID09IG1mbiApCiAg
ICAgICAgICAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgbG9jYWxfaW5fdXNlID0gdHJ1ZTsKLSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBnZXRfY3B1X2luZm8oKS0+cm9vdF9wZ3RfY2hhbmdl
ZCA9IHRydWU7CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmx1c2hf
cm9vdF9wdF9sb2NhbCA9IHRydWU7CiAgICAgICAgICAgICAgICAgICAgICAg
ICB9CiAKICAgICAgICAgICAgICAgICAgICAgICAgIC8qCkBAIC00MTY2LDcg
KzQxNzMsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICgxICsgISEocGFnZS0+dS5pbnVzZS50eXBlX2lu
Zm8gJiBQR1RfcGlubmVkKSArCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAocGFnZXRhYmxlX2dldF9wZm4oY3Vyci0+YXJjaC5ndWVzdF90YWJs
ZV91c2VyKSA9PQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIG1m
bikgKyBsb2NhbF9pbl91c2UpICkKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBzeW5jX2d1ZXN0ID0gdHJ1ZTsKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBmbHVzaF9yb290X3B0X290aGVycyA9IHRydWU7CiAgICAgICAg
ICAgICAgICAgICAgIH0KICAgICAgICAgICAgICAgICAgICAgYnJlYWs7CiAK
QEAgLTQyNjgsMTkgKzQyNzUsNjEgQEAgbG9uZyBkb19tbXVfdXBkYXRlKAog
ICAgIGlmICggdmEgKQogICAgICAgICB1bm1hcF9kb21haW5fcGFnZSh2YSk7
CiAKLSAgICBpZiAoIHN5bmNfZ3Vlc3QgKQorICAgIC8qCisgICAgICogUGVy
Zm9ybSByZXF1aXJlZCBUTEIgbWFpbnRlbmFuY2UuCisgICAgICoKKyAgICAg
KiBUaGlzIGxvZ2ljIGN1cnJlbnRseSBkZXBlbmQgb24gZmx1c2hfbGluZWFy
X3B0IGJlaW5nIGEgc3VwZXJzZXQgb2YgdGhlCisgICAgICogZmx1c2hfcm9v
dF9wdF8qIGNvbmRpdGlvbnMuCisgICAgICoKKyAgICAgKiBwdF9vd25lciBt
YXkgbm90IGJlIGN1cnJlbnQtPmRvbWFpbi4gIFRoaXMgbWF5IG9jY3VyIGR1
cmluZworICAgICAqIGNvbnN0cnVjdGlvbiBvZiAzMmJpdCBQViBndWVzdHMs
IG9yIGRlYnVnZ2luZyBvZiBQViBndWVzdHMuICBUaGUKKyAgICAgKiBiZWhh
dmlvdXIgY2Fubm90IGJlIGNvcnJlY3Qgd2l0aCBkb21haW4gdW5wYXVzZWQu
ICBXZSB0aGVyZWZvcmUgZXhwZWN0CisgICAgICogcHRfb3duZXItPmRpcnR5
X2NwdW1hc2sgdG8gYmUgZW1wdHksIGJ1dCBpdCBpcyBhIHdhc3RlIG9mIGVm
Zm9ydCB0bworICAgICAqIGV4cGxpY2l0bHkgY2hlY2sgZm9yLCBhbmQgZXhj
bHVkZSwgdGhpcyBjb3JuZXIgY2FzZS4KKyAgICAgKgorICAgICAqIGZsdXNo
X2xpbmVhcl9wdCByZXF1aXJlcyBhIEZMVVNIX1RMQiB0byBhbGwgZGlydHkg
Q1BVcy4gIFRoZSBmbHVzaCBtdXN0CisgICAgICogYmUgcGVyZm9ybWVkIG5v
dyB0byBtYWludGFpbiBjb3JyZWN0IGJlaGF2aW91ciBhY3Jvc3MgYSBtdWx0
aWNhbGwuCisgICAgICogaS5lLiB3ZSBjYW5ub3QgcmVsYXggRkxVU0hfVExC
IHRvIEZMVVNIX1JPT1RfUEdUQkwsIGdpdmVuIHRoYXQgdGhlCisgICAgICog
Zm9ybWVyIGlzIGEgc2lkZSBlZmZlY3Qgb2YgdGhlIGxhdHRlciwgYmVjYXVz
ZSB0aGUgcmVzeW5jICh3aGljaCBpcyBpbgorICAgICAqIHRoZSByZXR1cm4t
dG8tZ3Vlc3QgcGF0aCkgaGFwcGVucyB0b28gbGF0ZS4KKyAgICAgKgorICAg
ICAqIGZsdXNoX3Jvb3RfcHRfKiByZXF1aXJlcyBGTFVTSF9ST09UX1BHVEJM
IG9uIGVpdGhlciB0aGUgbG9jYWwgQ1BVCisgICAgICogKGltcGxpZXMgcHRf
b3duZXIgPT0gY3VycmVudC0+ZG9tYWluIGFuZCBjdXJyZW50LT5wcm9jZXNz
b3Igc2V0IGluCisgICAgICogcHRfb3duZXItPmRpcnR5X2NwdW1hc2spLCBh
bmQvb3IgYWxsICpvdGhlciogZGlydHkgQ1BVcyBhcyB0aGVyZSBhcmUKKyAg
ICAgKiByZWZlcmVuY2VzIHdlIGNhbid0IGFjY291bnQgZm9yIGxvY2FsbHku
CisgICAgICovCisgICAgaWYgKCBmbHVzaF9saW5lYXJfcHQgLyogfHwgZmx1
c2hfcm9vdF9wdF9sb2NhbCB8fCBmbHVzaF9yb290X3B0X290aGVycyAqLyAp
CiAgICAgeworICAgICAgICB1bnNpZ25lZCBpbnQgY3B1ID0gc21wX3Byb2Nl
c3Nvcl9pZCgpOworICAgICAgICBjcHVtYXNrX3QgKm1hc2sgPSBwdF9vd25l
ci0+ZGlydHlfY3B1bWFzazsKKwogICAgICAgICAvKgotICAgICAgICAgKiBG
b3JjZSBvdGhlciB2Q1BVLXMgb2YgdGhlIGFmZmVjdGVkIGd1ZXN0IHRvIHBp
Y2sgdXAgTDQgZW50cnkKLSAgICAgICAgICogY2hhbmdlcyAoaWYgYW55KS4K
KyAgICAgICAgICogQWx3YXlzIGhhbmRsZSBsb2NhbCBmbHVzaGluZyBzZXBh
cmF0ZWx5IChpZiBhcHBsaWNhYmxlKSwgdG8KKyAgICAgICAgICogc2VwYXJh
dGUgdGhlIGZsdXNoIGludm9jYXRpb25zIGFwcHJvcHJpYXRlbHkgZm9yIHNj
b3BlIG9mIHRoZSB0d28KKyAgICAgICAgICogZmx1c2hfcm9vdF9wdF8qIHZh
cmlhYmxlcy4KICAgICAgICAgICovCi0gICAgICAgIHVuc2lnbmVkIGludCBj
cHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7Ci0gICAgICAgIGNwdW1hc2tfdCAq
bWFzayA9IHBlcl9jcHUoc2NyYXRjaF9jcHVtYXNrLCBjcHUpOworICAgICAg
ICBpZiAoIGxpa2VseShjcHVtYXNrX3Rlc3RfY3B1KGNwdSwgbWFzaykpICkK
KyAgICAgICAgeworICAgICAgICAgICAgbWFzayA9IHBlcl9jcHUoc2NyYXRj
aF9jcHVtYXNrLCBjcHUpOwogCi0gICAgICAgIGNwdW1hc2tfYW5kbm90KG1h
c2ssIHB0X293bmVyLT5kaXJ0eV9jcHVtYXNrLCBjcHVtYXNrX29mKGNwdSkp
OworICAgICAgICAgICAgY3B1bWFza19jb3B5KG1hc2ssIHB0X293bmVyLT5k
aXJ0eV9jcHVtYXNrKTsKKyAgICAgICAgICAgIF9fY3B1bWFza19jbGVhcl9j
cHUoY3B1LCBtYXNrKTsKKworICAgICAgICAgICAgZmx1c2hfbG9jYWwoRkxV
U0hfVExCIHwKKyAgICAgICAgICAgICAgICAgICAgICAgIChmbHVzaF9yb290
X3B0X2xvY2FsID8gRkxVU0hfUk9PVF9QR1RCTCA6IDApKTsKKyAgICAgICAg
fQorICAgICAgICBlbHNlCisgICAgICAgICAgICAvKiBTYW5pdHkgY2hlY2su
ICBmbHVzaF9yb290X3B0X2xvY2FsIGltcGxpZXMgbG9jYWwgY3B1IGlzIGRp
cnR5LiAqLworICAgICAgICAgICAgQVNTRVJUKCFmbHVzaF9yb290X3B0X2xv
Y2FsKTsKKworICAgICAgICAvKiBGbHVzaCB0aGUgcmVtb3RlIGRpcnR5IENQ
VXMuICBEb2VzIG5vdCBpbmNsdWRlIHRoZSBsb2NhbCBDUFUuICovCiAgICAg
ICAgIGlmICggIWNwdW1hc2tfZW1wdHkobWFzaykgKQotICAgICAgICAgICAg
Zmx1c2hfbWFzayhtYXNrLCBGTFVTSF9ST09UX1BHVEJMKTsKKyAgICAgICAg
ICAgIGZsdXNoX21hc2sobWFzaywgRkxVU0hfVExCIHwKKyAgICAgICAgICAg
ICAgICAgICAgICAgKGZsdXNoX3Jvb3RfcHRfb3RoZXJzID8gRkxVU0hfUk9P
VF9QR1RCTCA6IDApKTsKICAgICB9CisgICAgZWxzZQorICAgICAgICAvKiBT
YW5pdHkgY2hlY2suICBmbHVzaF9yb290X3B0XyogaW1wbGllcyBmbHVzaF9s
aW5lYXJfcHQuICovCisgICAgICAgIEFTU0VSVCghZmx1c2hfcm9vdF9wdF9s
b2NhbCAmJiAhZmx1c2hfcm9vdF9wdF9vdGhlcnMpOwogCiAgICAgcGVyZmNf
YWRkKG51bV9wYWdlX3VwZGF0ZXMsIGkpOwogCi0tIAoyLjIwLjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Transfer-Encoding: base64

RnJvbSBiMWQ2ZjM3YWE1YWE5ZjNmYzVhMjY5YjlkZDIxYjdmZWI3NDQ0YmUw
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBUaHUsIDIyIE9j
dCAyMDIwIDExOjI4OjU4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHg4
Ni9wdjogRHJvcCBGTFVTSF9UTEJfR0xPQkFMIGluIGRvX21tdV91cGRhdGUo
KSBmb3IgWFBUSQoKYy9zIDlkMWQzMWFkOTQ5OCAieDg2OiBzbGlnaHRseSBy
ZWR1Y2UgTWVsdGRvd24gYmFuZC1haWQgb3ZlcmhlYWQiIHJlbW92ZWQgdGhl
CnVzZSBvZiBHbG9iYWwgVExCIGZsdXNoZXMgb24gdGhlIFhlbiBlbnRyeSBw
YXRoLCBidXQgYWRkZWQgYSBGTFVTSF9UTEJfR0xPQkFMCnRvIHRoZSBMNCBw
YXRoIGluIGRvX21tdV91cGRhdGUoKS4KCkhvd2V2ZXIsIHRoaXMgd2FzIHVu
bmVjZXNzYXJ5LgoKSXQgaXMgdGhlIGd1ZXN0cyByZXNwb25zaWJpbGl0eSB0
byBwZXJmb3JtIGFwcHJvcHJpYXRlIFRMQiBmbHVzaGluZyBpZiB0aGUgTDQK
bW9kaWZpY2F0aW9uIGFsdGVyZWQgYW4gZXN0YWJsaXNoZWQgbWFwcGluZyBp
biBhIGZsdXNoLXJlbGV2YW50IHdheS4gIEluIHRoaXMKY2FzZSwgYW4gTU1V
RVhUX09QIGh5cGVyY2FsbCB3aWxsIGZvbGxvdy4gIFRoZSBjYXNlIHdoaWNo
IFhlbiBuZWVkcyB0byBjb3ZlcgppcyB3aGVuIG5ldyBtYXBwaW5ncyBhcmUg
Y3JlYXRlZCwgYW5kIHRoZSByZXN5bmMgb24gdGhlIGV4aXQtdG8tZ3Vlc3Qg
cGF0aApjb3ZlcnMgdGhpcyBjb3JyZWN0bHkuCgpUaGVyZSBpcyBhIGNvcm5l
ciBjYXNlIHdpdGggbXVsdGlwbGUgdkNQVXMgaW4gaHlwZXJjYWxscyBhdCB0
aGUgc2FtZSB0aW1lLAp3aGljaCA5ZDFkMzFhZDk0OTggY2hhbmdlZCwgYW5k
IHRoaXMgcGF0Y2ggY2hhbmdlcyBiYWNrIHRvIGl0cyBvcmlnaW5hbCBYUFRJ
CmJlaGF2aW91ci4KCkFyY2hpdGVjdHVyYWxseSwgZXN0YWJsaXNoZWQgVExC
IGVudHJpZXMgY2FuIGNvbnRpbnVlIHRvIGJlIHVzZWQgdW50aWwgdGhlCmJy
b2FkY2FzdCBmbHVzaCBoYXMgY29tcGxldGVkLiAgVGhlcmVmb3JlLCBldmVu
IHdpdGggY29uY3VycmVudCBoeXBlcmNhbGxzLAp0aGUgZ3Vlc3QgY2Fubm90
IGRlcGVuZCBvbiBvbGRlciBtYXBwaW5ncyBub3QgYmVpbmcgdXNlZCB1bnRp
bCBhbiBNTVVFWFRfT1AKaHlwZXJjYWxsIGNvbXBsZXRlcy4gIFhlbidzIGlt
cGxlbWVudGF0aW9uIG9mIGd1ZXN0LWluaXRpYXRlZCBmbHVzaGVzIHdpbGwK
dGFrZSBjb3JyZWN0IGVmZmVjdCBvbiB0b3Agb2YgYW4gaW4tcHJvZ3Jlc3Mg
aHlwZXJjYWxsLCBwaWNraW5nIHVwIG5ldyBtYXBwaW5nCnNldHRpbmcgYmVm
b3JlIHRoZSBvdGhlciB2Q1BVJ3MgTU1VRVhUX09QIGNvbXBsZXRlcy4KCk5v
dGU6IFRoZSBjb3JyZWN0bmVzcyBvZiB0aGlzIGNoYW5nZSBpcyBub3QgaW1w
YWN0ZWQgYnkgd2hldGhlciBYUFRJIHVzZXMKZ2xvYmFsIG1hcHBpbmdzIG9y
IG5vdC4gIENvcnJlY3RuZXNzIHRoZXJlIGRlcGVuZHMgb24gdGhlIGJlaGF2
aW91ciBvZiBYZW4gb24KdGhlIGVudHJ5L2V4aXQgcGF0aHMgd2hlbiBzd2l0
Y2hpbmcgdHdvL2Zyb20gdGhlIFhQVEkgInNoYWRvdyIgcGFnZXRhYmxlcy4K
ClRoaXMgaXMgKG5vdCByZWFsbHkpIFhTQS0yODYgKGJ1dCBuZWNlc3Nhcnkg
dG8gc2ltcGxpZnkgdGhlIGxvZ2ljKS4KCkZpeGVzOiA5ZDFkMzFhZDk0OTgg
KCJ4ODY6IHNsaWdodGx5IHJlZHVjZSBNZWx0ZG93biBiYW5kLWFpZCBvdmVy
aGVhZCIpClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CihjaGVycnkgcGlja2VkIGZyb20gY29tbWl0
IDA1NWUxYzNhM2Q5NWIxZTc1MzE0ODM2OWZiYzRiYTQ4NzgyZGQ2MDIpCi0t
LQogeGVuL2FyY2gveDg2L21tLmMgfCAyICstCiAxIGZpbGUgY2hhbmdlZCwg
MSBpbnNlcnRpb24oKyksIDEgZGVsZXRpb24oLSkKCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IGU3
YjhmNGVlNGIuLjg2ZjMxYjMzNGYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4
Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC00MzAxLDcgKzQz
MDEsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAKICAgICAgICAgY3B1bWFz
a19hbmRub3QobWFzaywgcHRfb3duZXItPmRpcnR5X2NwdW1hc2ssIGNwdW1h
c2tfb2YoY3B1KSk7CiAgICAgICAgIGlmICggIWNwdW1hc2tfZW1wdHkobWFz
aykgKQotICAgICAgICAgICAgZmx1c2hfbWFzayhtYXNrLCBGTFVTSF9UTEJf
R0xPQkFMIHwgRkxVU0hfUk9PVF9QR1RCTCk7CisgICAgICAgICAgICBmbHVz
aF9tYXNrKG1hc2ssIEZMVVNIX1JPT1RfUEdUQkwpOwogICAgIH0KIAogICAg
IHBlcmZjX2FkZChudW1fcGFnZV91cGRhdGVzLCBpKTsKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Transfer-Encoding: base64

RnJvbSA0MTAwZDQ2M2RiZGQ5NWQ4NWZhYmUzODdkZDU2NzZiZWQ3NWY2NWY3
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBNb24sIDE5IE9j
dCAyMDIwIDE1OjUxOjIyICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHg4
Ni9wdjogRmx1c2ggVExCIGluIHJlc3BvbnNlIHRvIHBhZ2luZyBzdHJ1Y3R1
cmUgY2hhbmdlcwoKV2l0aCBNTVVfVVBEQVRFLCBhIFBWIGd1ZXN0IGNhbiBt
YWtlIGNoYW5nZXMgdG8gaGlnaGVyIGxldmVsIHBhZ2V0YWJsZXMuICBUaGlz
CmlzIHNhZmUgZnJvbSBYZW4ncyBwb2ludCBvZiB2aWV3IChhcyB0aGUgdXBk
YXRlIG9ubHkgYWZmZWN0cyBndWVzdCBtYXBwaW5ncyksCmFuZCB0aGUgZ3Vl
c3QgaXMgcmVxdWlyZWQgdG8gZmx1c2ggKGlmIG5lY2Vzc2FyeSkgYWZ0ZXIg
bWFraW5nIHVwZGF0ZXMuCgpIb3dldmVyLCBYZW4ncyB1c2Ugb2YgbGluZWFy
IHBhZ2V0YWJsZXMgKFVQREFURV9WQV9NQVBQSU5HLCBHTlRUQUJPUF9tYXAs
CndyaXRlYWJsZSBwYWdldGFibGVzLCBldGMuKSBpcyBhbiBpbXBsZW1lbnRh
dGlvbiBkZXRhaWwgb3V0c2lkZSBvZiB0aGUKQVBJL0FCSS4KCkNoYW5nZXMg
aW4gdGhlIHBhZ2luZyBzdHJ1Y3R1cmUgcmVxdWlyZSBpbnZhbGlkYXRpb25z
IGluIHRoZSBsaW5lYXIgcGFnZXRhYmxlCnJhbmdlIGZvciBzdWJzZXF1ZW50
IGFjY2Vzc2VzIGludG8gdGhlIGxpbmVhciBwYWdldGFibGVzIHRvIGFjY2Vz
cyBub24tc3RhbGUKbWFwcGluZ3MuICBYZW4gbXVzdCBwcm92aWRlIHN1aXRh
YmxlIGZsdXNoaW5nIHRvIHByZXZlbnQgaW50ZXJtaXhlZCBndWVzdAphY3Rp
b25zIGZyb20gYWNjaWRlbnRhbGx5IGFjY2Vzc2luZy9tb2RpZnlpbmcgdGhl
IHdyb25nIHBhZ2V0YWJsZS4KCkZvciBhbGwgTDIgYW5kIGhpZ2hlciBtb2Rp
ZmljYXRpb25zLCBmbHVzaCB0aGUgVExCLiAgUFYgZ3Vlc3RzIGNhbm5vdCBj
cmVhdGUKTDIgb3IgaGlnaGVyIGVudHJpZXMgd2l0aCB0aGUgR2xvYmFsIGJp
dCBzZXQsIHNvIG5vIG1hcHBpbmdzIGVzdGFibGlzaGVkIGluCnRoZSBsaW5l
YXIgcmFuZ2UgY2FuIGJlIGdsb2JhbC4gIChUaGlzIGNvdWxkIGluIHByaW5j
aXBsZSBiZSBhbiBvcmRlciAzOSBmbHVzaApzdGFydGluZyBhdCBMSU5FQVJf
UFRfVklSVF9TVEFSVCwgYnV0IG5vIHN1Y2ggbWVjaGFuaXNtIGV4aXN0cyBp
biBwcmFjdGljZS4pCgpFeHByZXNzIHRoZSBuZWNlc3NhcnkgZmx1c2hlcyBh
cyBhIHNldCBvZiBib29sZWFucyB3aGljaCBhY2N1bXVsYXRlIGFjcm9zcyB0
aGUKb3BlcmF0aW9uLiAgQ29tbWVudCB0aGUgZmx1c2hpbmcgbG9naWMgZXh0
ZW5zaXZlbHkuCgpUaGlzIGlzIFhTQS0yODYuCgpTaWduZWQtb2ZmLWJ5OiBB
bmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hl
cnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCAxNmEyMDk2M2IzMjA5Nzg4ZjJjMGQz
YTNlZWJiN2Q5MmYwM2Y1ODgzKQotLS0KIHhlbi9hcmNoL3g4Ni9tbS5jIHwg
NjkgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0t
LS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1OSBpbnNlcnRpb25zKCspLCAxMCBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDg2ZjMxYjMzNGYuLmRiNGNmZGYy
MGIgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCkBAIC00MDA1LDcgKzQwMDUsOCBAQCBsb25nIGRvX21t
dV91cGRhdGUoCiAgICAgc3RydWN0IHZjcHUgKmN1cnIgPSBjdXJyZW50LCAq
diA9IGN1cnI7CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IHYtPmRvbWFpbiwg
KnB0X293bmVyID0gZCwgKnBnX293bmVyOwogICAgIG1mbl90IG1hcF9tZm4g
PSBJTlZBTElEX01GTjsKLSAgICBib29sIHN5bmNfZ3Vlc3QgPSBmYWxzZTsK
KyAgICBib29sIGZsdXNoX2xpbmVhcl9wdCA9IGZhbHNlLCBmbHVzaF9yb290
X3B0X2xvY2FsID0gZmFsc2UsCisgICAgICAgIGZsdXNoX3Jvb3RfcHRfb3Ro
ZXJzID0gZmFsc2U7CiAgICAgdWludDMyX3QgeHNtX25lZWRlZCA9IDA7CiAg
ICAgdWludDMyX3QgeHNtX2NoZWNrZWQgPSAwOwogICAgIGludCByYyA9IHB1
dF9vbGRfZ3Vlc3RfdGFibGUoY3Vycik7CkBAIC00MTU1LDYgKzQxNTYsOCBA
QCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgICAgICAgICAgICAgICAgICAg
ICBicmVhazsKICAgICAgICAgICAgICAgICAgICAgcmMgPSBtb2RfbDJfZW50
cnkodmEsIGwyZV9mcm9tX2ludHB0ZShyZXEudmFsKSwgbWZuLAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjbWQgPT0gTU1VX1BU
X1VQREFURV9QUkVTRVJWRV9BRCwgdik7CisgICAgICAgICAgICAgICAgICAg
IGlmICggIXJjICkKKyAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2xp
bmVhcl9wdCA9IHRydWU7CiAgICAgICAgICAgICAgICAgICAgIGJyZWFrOwog
CiAgICAgICAgICAgICAgICAgY2FzZSBQR1RfbDNfcGFnZV90YWJsZToKQEAg
LTQxNjIsNiArNDE2NSw4IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAg
ICAgICAgICAgICAgICAgICAgIGJyZWFrOwogICAgICAgICAgICAgICAgICAg
ICByYyA9IG1vZF9sM19lbnRyeSh2YSwgbDNlX2Zyb21faW50cHRlKHJlcS52
YWwpLCBtZm4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGNtZCA9PSBNTVVfUFRfVVBEQVRFX1BSRVNFUlZFX0FELCB2KTsKKyAg
ICAgICAgICAgICAgICAgICAgaWYgKCAhcmMgKQorICAgICAgICAgICAgICAg
ICAgICAgICAgZmx1c2hfbGluZWFyX3B0ID0gdHJ1ZTsKICAgICAgICAgICAg
ICAgICAgICAgYnJlYWs7CiAKICAgICAgICAgICAgICAgICBjYXNlIFBHVF9s
NF9wYWdlX3RhYmxlOgpAQCAtNDE2OSw2ICs0MTc0LDggQEAgbG9uZyBkb19t
bXVfdXBkYXRlKAogICAgICAgICAgICAgICAgICAgICAgICAgYnJlYWs7CiAg
ICAgICAgICAgICAgICAgICAgIHJjID0gbW9kX2w0X2VudHJ5KHZhLCBsNGVf
ZnJvbV9pbnRwdGUocmVxLnZhbCksIG1mbiwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgY21kID09IE1NVV9QVF9VUERBVEVfUFJF
U0VSVkVfQUQsIHYpOworICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyAp
CisgICAgICAgICAgICAgICAgICAgICAgICBmbHVzaF9saW5lYXJfcHQgPSB0
cnVlOwogICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyAmJiBwdF9vd25l
ci0+YXJjaC5wdi54cHRpICkKICAgICAgICAgICAgICAgICAgICAgewogICAg
ICAgICAgICAgICAgICAgICAgICAgYm9vbCBsb2NhbF9pbl91c2UgPSBmYWxz
ZTsKQEAgLTQxNzYsNyArNDE4Myw3IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgK
ICAgICAgICAgICAgICAgICAgICAgICAgIGlmICggcGFnZXRhYmxlX2dldF9w
Zm4oY3Vyci0+YXJjaC5ndWVzdF90YWJsZSkgPT0gbWZuICkKICAgICAgICAg
ICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBsb2NhbF9pbl91c2UgPSB0cnVlOwotICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIGdldF9jcHVfaW5mbygpLT5yb290X3BndF9jaGFuZ2VkID0gdHJ1
ZTsKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICBmbHVzaF9yb290X3B0
X2xvY2FsID0gdHJ1ZTsKICAgICAgICAgICAgICAgICAgICAgICAgIH0KIAog
ICAgICAgICAgICAgICAgICAgICAgICAgLyoKQEAgLTQxODgsNyArNDE5NSw3
IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgKDEgKyAhIShwYWdlLT51LmludXNlLnR5cGVfaW5mbyAmIFBH
VF9waW5uZWQpICsKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIChw
YWdldGFibGVfZ2V0X3BmbihjdXJyLT5hcmNoLmd1ZXN0X3RhYmxlX3VzZXIp
ID09CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbWZuKSArIGxv
Y2FsX2luX3VzZSkgKQotICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN5
bmNfZ3Vlc3QgPSB0cnVlOworICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGZsdXNoX3Jvb3RfcHRfb3RoZXJzID0gdHJ1ZTsKICAgICAgICAgICAgICAg
ICAgICAgfQogICAgICAgICAgICAgICAgICAgICBicmVhazsKIApAQCAtNDI5
MCwxOSArNDI5Nyw2MSBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgaWYg
KCB2YSApCiAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKHZhKTsKIAotICAg
IGlmICggc3luY19ndWVzdCApCisgICAgLyoKKyAgICAgKiBQZXJmb3JtIHJl
cXVpcmVkIFRMQiBtYWludGVuYW5jZS4KKyAgICAgKgorICAgICAqIFRoaXMg
bG9naWMgY3VycmVudGx5IGRlcGVuZCBvbiBmbHVzaF9saW5lYXJfcHQgYmVp
bmcgYSBzdXBlcnNldCBvZiB0aGUKKyAgICAgKiBmbHVzaF9yb290X3B0Xyog
Y29uZGl0aW9ucy4KKyAgICAgKgorICAgICAqIHB0X293bmVyIG1heSBub3Qg
YmUgY3VycmVudC0+ZG9tYWluLiAgVGhpcyBtYXkgb2NjdXIgZHVyaW5nCisg
ICAgICogY29uc3RydWN0aW9uIG9mIDMyYml0IFBWIGd1ZXN0cywgb3IgZGVi
dWdnaW5nIG9mIFBWIGd1ZXN0cy4gIFRoZQorICAgICAqIGJlaGF2aW91ciBj
YW5ub3QgYmUgY29ycmVjdCB3aXRoIGRvbWFpbiB1bnBhdXNlZC4gIFdlIHRo
ZXJlZm9yZSBleHBlY3QKKyAgICAgKiBwdF9vd25lci0+ZGlydHlfY3B1bWFz
ayB0byBiZSBlbXB0eSwgYnV0IGl0IGlzIGEgd2FzdGUgb2YgZWZmb3J0IHRv
CisgICAgICogZXhwbGljaXRseSBjaGVjayBmb3IsIGFuZCBleGNsdWRlLCB0
aGlzIGNvcm5lciBjYXNlLgorICAgICAqCisgICAgICogZmx1c2hfbGluZWFy
X3B0IHJlcXVpcmVzIGEgRkxVU0hfVExCIHRvIGFsbCBkaXJ0eSBDUFVzLiAg
VGhlIGZsdXNoIG11c3QKKyAgICAgKiBiZSBwZXJmb3JtZWQgbm93IHRvIG1h
aW50YWluIGNvcnJlY3QgYmVoYXZpb3VyIGFjcm9zcyBhIG11bHRpY2FsbC4K
KyAgICAgKiBpLmUuIHdlIGNhbm5vdCByZWxheCBGTFVTSF9UTEIgdG8gRkxV
U0hfUk9PVF9QR1RCTCwgZ2l2ZW4gdGhhdCB0aGUKKyAgICAgKiBmb3JtZXIg
aXMgYSBzaWRlIGVmZmVjdCBvZiB0aGUgbGF0dGVyLCBiZWNhdXNlIHRoZSBy
ZXN5bmMgKHdoaWNoIGlzIGluCisgICAgICogdGhlIHJldHVybi10by1ndWVz
dCBwYXRoKSBoYXBwZW5zIHRvbyBsYXRlLgorICAgICAqCisgICAgICogZmx1
c2hfcm9vdF9wdF8qIHJlcXVpcmVzIEZMVVNIX1JPT1RfUEdUQkwgb24gZWl0
aGVyIHRoZSBsb2NhbCBDUFUKKyAgICAgKiAoaW1wbGllcyBwdF9vd25lciA9
PSBjdXJyZW50LT5kb21haW4gYW5kIGN1cnJlbnQtPnByb2Nlc3NvciBzZXQg
aW4KKyAgICAgKiBwdF9vd25lci0+ZGlydHlfY3B1bWFzayksIGFuZC9vciBh
bGwgKm90aGVyKiBkaXJ0eSBDUFVzIGFzIHRoZXJlIGFyZQorICAgICAqIHJl
ZmVyZW5jZXMgd2UgY2FuJ3QgYWNjb3VudCBmb3IgbG9jYWxseS4KKyAgICAg
Ki8KKyAgICBpZiAoIGZsdXNoX2xpbmVhcl9wdCAvKiB8fCBmbHVzaF9yb290
X3B0X2xvY2FsIHx8IGZsdXNoX3Jvb3RfcHRfb3RoZXJzICovICkKICAgICB7
CisgICAgICAgIHVuc2lnbmVkIGludCBjcHUgPSBzbXBfcHJvY2Vzc29yX2lk
KCk7CisgICAgICAgIGNwdW1hc2tfdCAqbWFzayA9IHB0X293bmVyLT5kaXJ0
eV9jcHVtYXNrOworCiAgICAgICAgIC8qCi0gICAgICAgICAqIEZvcmNlIG90
aGVyIHZDUFUtcyBvZiB0aGUgYWZmZWN0ZWQgZ3Vlc3QgdG8gcGljayB1cCBM
NCBlbnRyeQotICAgICAgICAgKiBjaGFuZ2VzIChpZiBhbnkpLgorICAgICAg
ICAgKiBBbHdheXMgaGFuZGxlIGxvY2FsIGZsdXNoaW5nIHNlcGFyYXRlbHkg
KGlmIGFwcGxpY2FibGUpLCB0bworICAgICAgICAgKiBzZXBhcmF0ZSB0aGUg
Zmx1c2ggaW52b2NhdGlvbnMgYXBwcm9wcmlhdGVseSBmb3Igc2NvcGUgb2Yg
dGhlIHR3bworICAgICAgICAgKiBmbHVzaF9yb290X3B0XyogdmFyaWFibGVz
LgogICAgICAgICAgKi8KLSAgICAgICAgdW5zaWduZWQgaW50IGNwdSA9IHNt
cF9wcm9jZXNzb3JfaWQoKTsKLSAgICAgICAgY3B1bWFza190ICptYXNrID0g
cGVyX2NwdShzY3JhdGNoX2NwdW1hc2ssIGNwdSk7CisgICAgICAgIGlmICgg
bGlrZWx5KGNwdW1hc2tfdGVzdF9jcHUoY3B1LCBtYXNrKSkgKQorICAgICAg
ICB7CisgICAgICAgICAgICBtYXNrID0gcGVyX2NwdShzY3JhdGNoX2NwdW1h
c2ssIGNwdSk7CiAKLSAgICAgICAgY3B1bWFza19hbmRub3QobWFzaywgcHRf
b3duZXItPmRpcnR5X2NwdW1hc2ssIGNwdW1hc2tfb2YoY3B1KSk7CisgICAg
ICAgICAgICBjcHVtYXNrX2NvcHkobWFzaywgcHRfb3duZXItPmRpcnR5X2Nw
dW1hc2spOworICAgICAgICAgICAgX19jcHVtYXNrX2NsZWFyX2NwdShjcHUs
IG1hc2spOworCisgICAgICAgICAgICBmbHVzaF9sb2NhbChGTFVTSF9UTEIg
fAorICAgICAgICAgICAgICAgICAgICAgICAgKGZsdXNoX3Jvb3RfcHRfbG9j
YWwgPyBGTFVTSF9ST09UX1BHVEJMIDogMCkpOworICAgICAgICB9CisgICAg
ICAgIGVsc2UKKyAgICAgICAgICAgIC8qIFNhbml0eSBjaGVjay4gIGZsdXNo
X3Jvb3RfcHRfbG9jYWwgaW1wbGllcyBsb2NhbCBjcHUgaXMgZGlydHkuICov
CisgICAgICAgICAgICBBU1NFUlQoIWZsdXNoX3Jvb3RfcHRfbG9jYWwpOwor
CisgICAgICAgIC8qIEZsdXNoIHRoZSByZW1vdGUgZGlydHkgQ1BVcy4gIERv
ZXMgbm90IGluY2x1ZGUgdGhlIGxvY2FsIENQVS4gKi8KICAgICAgICAgaWYg
KCAhY3B1bWFza19lbXB0eShtYXNrKSApCi0gICAgICAgICAgICBmbHVzaF9t
YXNrKG1hc2ssIEZMVVNIX1JPT1RfUEdUQkwpOworICAgICAgICAgICAgZmx1
c2hfbWFzayhtYXNrLCBGTFVTSF9UTEIgfAorICAgICAgICAgICAgICAgICAg
ICAgICAoZmx1c2hfcm9vdF9wdF9vdGhlcnMgPyBGTFVTSF9ST09UX1BHVEJM
IDogMCkpOwogICAgIH0KKyAgICBlbHNlCisgICAgICAgIC8qIFNhbml0eSBj
aGVjay4gIGZsdXNoX3Jvb3RfcHRfKiBpbXBsaWVzIGZsdXNoX2xpbmVhcl9w
dC4gKi8KKyAgICAgICAgQVNTRVJUKCFmbHVzaF9yb290X3B0X2xvY2FsICYm
ICFmbHVzaF9yb290X3B0X290aGVycyk7CiAKICAgICBwZXJmY19hZGQobnVt
X3BhZ2VfdXBkYXRlcywgaSk7CiAKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Transfer-Encoding: base64

RnJvbSBjMTBiMjkzMWJmNjNhNDQ0ZTA5MTcxMTVhYWQzNDhiOTExY2FhODJl
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBUaHUsIDIyIE9j
dCAyMDIwIDExOjI4OjU4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHg4
Ni9wdjogRHJvcCBGTFVTSF9UTEJfR0xPQkFMIGluIGRvX21tdV91cGRhdGUo
KSBmb3IgWFBUSQoKYy9zIDlkMWQzMWFkOTQ5OCAieDg2OiBzbGlnaHRseSBy
ZWR1Y2UgTWVsdGRvd24gYmFuZC1haWQgb3ZlcmhlYWQiIHJlbW92ZWQgdGhl
CnVzZSBvZiBHbG9iYWwgVExCIGZsdXNoZXMgb24gdGhlIFhlbiBlbnRyeSBw
YXRoLCBidXQgYWRkZWQgYSBGTFVTSF9UTEJfR0xPQkFMCnRvIHRoZSBMNCBw
YXRoIGluIGRvX21tdV91cGRhdGUoKS4KCkhvd2V2ZXIsIHRoaXMgd2FzIHVu
bmVjZXNzYXJ5LgoKSXQgaXMgdGhlIGd1ZXN0cyByZXNwb25zaWJpbGl0eSB0
byBwZXJmb3JtIGFwcHJvcHJpYXRlIFRMQiBmbHVzaGluZyBpZiB0aGUgTDQK
bW9kaWZpY2F0aW9uIGFsdGVyZWQgYW4gZXN0YWJsaXNoZWQgbWFwcGluZyBp
biBhIGZsdXNoLXJlbGV2YW50IHdheS4gIEluIHRoaXMKY2FzZSwgYW4gTU1V
RVhUX09QIGh5cGVyY2FsbCB3aWxsIGZvbGxvdy4gIFRoZSBjYXNlIHdoaWNo
IFhlbiBuZWVkcyB0byBjb3ZlcgppcyB3aGVuIG5ldyBtYXBwaW5ncyBhcmUg
Y3JlYXRlZCwgYW5kIHRoZSByZXN5bmMgb24gdGhlIGV4aXQtdG8tZ3Vlc3Qg
cGF0aApjb3ZlcnMgdGhpcyBjb3JyZWN0bHkuCgpUaGVyZSBpcyBhIGNvcm5l
ciBjYXNlIHdpdGggbXVsdGlwbGUgdkNQVXMgaW4gaHlwZXJjYWxscyBhdCB0
aGUgc2FtZSB0aW1lLAp3aGljaCA5ZDFkMzFhZDk0OTggY2hhbmdlZCwgYW5k
IHRoaXMgcGF0Y2ggY2hhbmdlcyBiYWNrIHRvIGl0cyBvcmlnaW5hbCBYUFRJ
CmJlaGF2aW91ci4KCkFyY2hpdGVjdHVyYWxseSwgZXN0YWJsaXNoZWQgVExC
IGVudHJpZXMgY2FuIGNvbnRpbnVlIHRvIGJlIHVzZWQgdW50aWwgdGhlCmJy
b2FkY2FzdCBmbHVzaCBoYXMgY29tcGxldGVkLiAgVGhlcmVmb3JlLCBldmVu
IHdpdGggY29uY3VycmVudCBoeXBlcmNhbGxzLAp0aGUgZ3Vlc3QgY2Fubm90
IGRlcGVuZCBvbiBvbGRlciBtYXBwaW5ncyBub3QgYmVpbmcgdXNlZCB1bnRp
bCBhbiBNTVVFWFRfT1AKaHlwZXJjYWxsIGNvbXBsZXRlcy4gIFhlbidzIGlt
cGxlbWVudGF0aW9uIG9mIGd1ZXN0LWluaXRpYXRlZCBmbHVzaGVzIHdpbGwK
dGFrZSBjb3JyZWN0IGVmZmVjdCBvbiB0b3Agb2YgYW4gaW4tcHJvZ3Jlc3Mg
aHlwZXJjYWxsLCBwaWNraW5nIHVwIG5ldyBtYXBwaW5nCnNldHRpbmcgYmVm
b3JlIHRoZSBvdGhlciB2Q1BVJ3MgTU1VRVhUX09QIGNvbXBsZXRlcy4KCk5v
dGU6IFRoZSBjb3JyZWN0bmVzcyBvZiB0aGlzIGNoYW5nZSBpcyBub3QgaW1w
YWN0ZWQgYnkgd2hldGhlciBYUFRJIHVzZXMKZ2xvYmFsIG1hcHBpbmdzIG9y
IG5vdC4gIENvcnJlY3RuZXNzIHRoZXJlIGRlcGVuZHMgb24gdGhlIGJlaGF2
aW91ciBvZiBYZW4gb24KdGhlIGVudHJ5L2V4aXQgcGF0aHMgd2hlbiBzd2l0
Y2hpbmcgdHdvL2Zyb20gdGhlIFhQVEkgInNoYWRvdyIgcGFnZXRhYmxlcy4K
ClRoaXMgaXMgKG5vdCByZWFsbHkpIFhTQS0yODYgKGJ1dCBuZWNlc3Nhcnkg
dG8gc2ltcGxpZnkgdGhlIGxvZ2ljKS4KCkZpeGVzOiA5ZDFkMzFhZDk0OTgg
KCJ4ODY6IHNsaWdodGx5IHJlZHVjZSBNZWx0ZG93biBiYW5kLWFpZCBvdmVy
aGVhZCIpClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CihjaGVycnkgcGlja2VkIGZyb20gY29tbWl0
IDA1NWUxYzNhM2Q5NWIxZTc1MzE0ODM2OWZiYzRiYTQ4NzgyZGQ2MDIpCi0t
LQogeGVuL2FyY2gveDg2L21tLmMgfCAyICstCiAxIGZpbGUgY2hhbmdlZCwg
MSBpbnNlcnRpb24oKyksIDEgZGVsZXRpb24oLSkKCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDQw
NzM0MGU1ZjUuLjIwNDYxMWNhMmMgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4
Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC00MjY2LDcgKzQy
NjYsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAKICAgICAgICAgY3B1bWFz
a19hbmRub3QobWFzaywgcHRfb3duZXItPmRpcnR5X2NwdW1hc2ssIGNwdW1h
c2tfb2YoY3B1KSk7CiAgICAgICAgIGlmICggIWNwdW1hc2tfZW1wdHkobWFz
aykgKQotICAgICAgICAgICAgZmx1c2hfbWFzayhtYXNrLCBGTFVTSF9UTEJf
R0xPQkFMIHwgRkxVU0hfUk9PVF9QR1RCTCk7CisgICAgICAgICAgICBmbHVz
aF9tYXNrKG1hc2ssIEZMVVNIX1JPT1RfUEdUQkwpOwogICAgIH0KIAogICAg
IHBlcmZjX2FkZChudW1fcGFnZV91cGRhdGVzLCBpKTsKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Transfer-Encoding: base64

RnJvbSAyOGI3ODE3MTI3MWRiYmNlODhiYmQ0Y2IyZGUzZDgyOGE1MWZiMTY5
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBNb24sIDE5IE9j
dCAyMDIwIDE1OjUxOjIyICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHg4
Ni9wdjogRmx1c2ggVExCIGluIHJlc3BvbnNlIHRvIHBhZ2luZyBzdHJ1Y3R1
cmUgY2hhbmdlcwoKV2l0aCBNTVVfVVBEQVRFLCBhIFBWIGd1ZXN0IGNhbiBt
YWtlIGNoYW5nZXMgdG8gaGlnaGVyIGxldmVsIHBhZ2V0YWJsZXMuICBUaGlz
CmlzIHNhZmUgZnJvbSBYZW4ncyBwb2ludCBvZiB2aWV3IChhcyB0aGUgdXBk
YXRlIG9ubHkgYWZmZWN0cyBndWVzdCBtYXBwaW5ncyksCmFuZCB0aGUgZ3Vl
c3QgaXMgcmVxdWlyZWQgdG8gZmx1c2ggKGlmIG5lY2Vzc2FyeSkgYWZ0ZXIg
bWFraW5nIHVwZGF0ZXMuCgpIb3dldmVyLCBYZW4ncyB1c2Ugb2YgbGluZWFy
IHBhZ2V0YWJsZXMgKFVQREFURV9WQV9NQVBQSU5HLCBHTlRUQUJPUF9tYXAs
CndyaXRlYWJsZSBwYWdldGFibGVzLCBldGMuKSBpcyBhbiBpbXBsZW1lbnRh
dGlvbiBkZXRhaWwgb3V0c2lkZSBvZiB0aGUKQVBJL0FCSS4KCkNoYW5nZXMg
aW4gdGhlIHBhZ2luZyBzdHJ1Y3R1cmUgcmVxdWlyZSBpbnZhbGlkYXRpb25z
IGluIHRoZSBsaW5lYXIgcGFnZXRhYmxlCnJhbmdlIGZvciBzdWJzZXF1ZW50
IGFjY2Vzc2VzIGludG8gdGhlIGxpbmVhciBwYWdldGFibGVzIHRvIGFjY2Vz
cyBub24tc3RhbGUKbWFwcGluZ3MuICBYZW4gbXVzdCBwcm92aWRlIHN1aXRh
YmxlIGZsdXNoaW5nIHRvIHByZXZlbnQgaW50ZXJtaXhlZCBndWVzdAphY3Rp
b25zIGZyb20gYWNjaWRlbnRhbGx5IGFjY2Vzc2luZy9tb2RpZnlpbmcgdGhl
IHdyb25nIHBhZ2V0YWJsZS4KCkZvciBhbGwgTDIgYW5kIGhpZ2hlciBtb2Rp
ZmljYXRpb25zLCBmbHVzaCB0aGUgVExCLiAgUFYgZ3Vlc3RzIGNhbm5vdCBj
cmVhdGUKTDIgb3IgaGlnaGVyIGVudHJpZXMgd2l0aCB0aGUgR2xvYmFsIGJp
dCBzZXQsIHNvIG5vIG1hcHBpbmdzIGVzdGFibGlzaGVkIGluCnRoZSBsaW5l
YXIgcmFuZ2UgY2FuIGJlIGdsb2JhbC4gIChUaGlzIGNvdWxkIGluIHByaW5j
aXBsZSBiZSBhbiBvcmRlciAzOSBmbHVzaApzdGFydGluZyBhdCBMSU5FQVJf
UFRfVklSVF9TVEFSVCwgYnV0IG5vIHN1Y2ggbWVjaGFuaXNtIGV4aXN0cyBp
biBwcmFjdGljZS4pCgpFeHByZXNzIHRoZSBuZWNlc3NhcnkgZmx1c2hlcyBh
cyBhIHNldCBvZiBib29sZWFucyB3aGljaCBhY2N1bXVsYXRlIGFjcm9zcyB0
aGUKb3BlcmF0aW9uLiAgQ29tbWVudCB0aGUgZmx1c2hpbmcgbG9naWMgZXh0
ZW5zaXZlbHkuCgpUaGlzIGlzIFhTQS0yODYuCgpTaWduZWQtb2ZmLWJ5OiBB
bmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hl
cnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCAxNmEyMDk2M2IzMjA5Nzg4ZjJjMGQz
YTNlZWJiN2Q5MmYwM2Y1ODgzKQotLS0KIHhlbi9hcmNoL3g4Ni9tbS5jIHwg
NjkgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0t
LS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1OSBpbnNlcnRpb25zKCspLCAxMCBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDIwNDYxMWNhMmMuLmU1NmNkNGJj
NjUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCkBAIC0zOTY5LDcgKzM5NjksOCBAQCBsb25nIGRvX21t
dV91cGRhdGUoCiAgICAgc3RydWN0IHZjcHUgKmN1cnIgPSBjdXJyZW50LCAq
diA9IGN1cnI7CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IHYtPmRvbWFpbiwg
KnB0X293bmVyID0gZCwgKnBnX293bmVyOwogICAgIG1mbl90IG1hcF9tZm4g
PSBJTlZBTElEX01GTiwgbWZuOwotICAgIGJvb2wgc3luY19ndWVzdCA9IGZh
bHNlOworICAgIGJvb2wgZmx1c2hfbGluZWFyX3B0ID0gZmFsc2UsIGZsdXNo
X3Jvb3RfcHRfbG9jYWwgPSBmYWxzZSwKKyAgICAgICAgZmx1c2hfcm9vdF9w
dF9vdGhlcnMgPSBmYWxzZTsKICAgICB1aW50MzJfdCB4c21fbmVlZGVkID0g
MDsKICAgICB1aW50MzJfdCB4c21fY2hlY2tlZCA9IDA7CiAgICAgaW50IHJj
ID0gcHV0X29sZF9ndWVzdF90YWJsZShjdXJyKTsKQEAgLTQxMTksNiArNDEy
MCw4IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAgICAgICAgICAgICAg
ICAgICAgIGJyZWFrOwogICAgICAgICAgICAgICAgICAgICByYyA9IG1vZF9s
Ml9lbnRyeSh2YSwgbDJlX2Zyb21faW50cHRlKHJlcS52YWwpLCBtZm4sCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNtZCA9PSBN
TVVfUFRfVVBEQVRFX1BSRVNFUlZFX0FELCB2KTsKKyAgICAgICAgICAgICAg
ICAgICAgaWYgKCAhcmMgKQorICAgICAgICAgICAgICAgICAgICAgICAgZmx1
c2hfbGluZWFyX3B0ID0gdHJ1ZTsKICAgICAgICAgICAgICAgICAgICAgYnJl
YWs7CiAKICAgICAgICAgICAgICAgICBjYXNlIFBHVF9sM19wYWdlX3RhYmxl
OgpAQCAtNDEyNiw2ICs0MTI5LDggQEAgbG9uZyBkb19tbXVfdXBkYXRlKAog
ICAgICAgICAgICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgICAgICAg
ICAgICAgIHJjID0gbW9kX2wzX2VudHJ5KHZhLCBsM2VfZnJvbV9pbnRwdGUo
cmVxLnZhbCksIG1mbiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY21kID09IE1NVV9QVF9VUERBVEVfUFJFU0VSVkVfQUQsIHYp
OworICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyApCisgICAgICAgICAg
ICAgICAgICAgICAgICBmbHVzaF9saW5lYXJfcHQgPSB0cnVlOwogICAgICAg
ICAgICAgICAgICAgICBicmVhazsKIAogICAgICAgICAgICAgICAgIGNhc2Ug
UEdUX2w0X3BhZ2VfdGFibGU6CkBAIC00MTMzLDYgKzQxMzgsOCBAQCBsb25n
IGRvX21tdV91cGRhdGUoCiAgICAgICAgICAgICAgICAgICAgICAgICBicmVh
azsKICAgICAgICAgICAgICAgICAgICAgcmMgPSBtb2RfbDRfZW50cnkodmEs
IGw0ZV9mcm9tX2ludHB0ZShyZXEudmFsKSwgbWZuLAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBjbWQgPT0gTU1VX1BUX1VQREFU
RV9QUkVTRVJWRV9BRCwgdik7CisgICAgICAgICAgICAgICAgICAgIGlmICgg
IXJjICkKKyAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2xpbmVhcl9w
dCA9IHRydWU7CiAgICAgICAgICAgICAgICAgICAgIGlmICggIXJjICYmIHB0
X293bmVyLT5hcmNoLnB2LnhwdGkgKQogICAgICAgICAgICAgICAgICAgICB7
CiAgICAgICAgICAgICAgICAgICAgICAgICBib29sIGxvY2FsX2luX3VzZSA9
IGZhbHNlOwpAQCAtNDE0MSw3ICs0MTQ4LDcgQEAgbG9uZyBkb19tbXVfdXBk
YXRlKAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbWZu
KSApCiAgICAgICAgICAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgbG9jYWxfaW5fdXNlID0gdHJ1ZTsKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBnZXRfY3B1X2luZm8oKS0+cm9vdF9wZ3Rf
Y2hhbmdlZCA9IHRydWU7CisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Zmx1c2hfcm9vdF9wdF9sb2NhbCA9IHRydWU7CiAgICAgICAgICAgICAgICAg
ICAgICAgICB9CiAKICAgICAgICAgICAgICAgICAgICAgICAgIC8qCkBAIC00
MTUzLDcgKzQxNjAsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICgxICsgISEocGFnZS0+dS5pbnVzZS50
eXBlX2luZm8gJiBQR1RfcGlubmVkKSArCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBtZm5fZXEocGFnZXRhYmxlX2dldF9tZm4oY3Vyci0+YXJj
aC5ndWVzdF90YWJsZV91c2VyKSwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBtZm4pICsgbG9jYWxfaW5fdXNlKSApCi0gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgc3luY19ndWVzdCA9IHRydWU7CisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgZmx1c2hfcm9vdF9wdF9vdGhlcnMg
PSB0cnVlOwogICAgICAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgICAg
ICAgICAgIGJyZWFrOwogCkBAIC00MjU1LDE5ICs0MjYyLDYxIEBAIGxvbmcg
ZG9fbW11X3VwZGF0ZSgKICAgICBpZiAoIHZhICkKICAgICAgICAgdW5tYXBf
ZG9tYWluX3BhZ2UodmEpOwogCi0gICAgaWYgKCBzeW5jX2d1ZXN0ICkKKyAg
ICAvKgorICAgICAqIFBlcmZvcm0gcmVxdWlyZWQgVExCIG1haW50ZW5hbmNl
LgorICAgICAqCisgICAgICogVGhpcyBsb2dpYyBjdXJyZW50bHkgZGVwZW5k
IG9uIGZsdXNoX2xpbmVhcl9wdCBiZWluZyBhIHN1cGVyc2V0IG9mIHRoZQor
ICAgICAqIGZsdXNoX3Jvb3RfcHRfKiBjb25kaXRpb25zLgorICAgICAqCisg
ICAgICogcHRfb3duZXIgbWF5IG5vdCBiZSBjdXJyZW50LT5kb21haW4uICBU
aGlzIG1heSBvY2N1ciBkdXJpbmcKKyAgICAgKiBjb25zdHJ1Y3Rpb24gb2Yg
MzJiaXQgUFYgZ3Vlc3RzLCBvciBkZWJ1Z2dpbmcgb2YgUFYgZ3Vlc3RzLiAg
VGhlCisgICAgICogYmVoYXZpb3VyIGNhbm5vdCBiZSBjb3JyZWN0IHdpdGgg
ZG9tYWluIHVucGF1c2VkLiAgV2UgdGhlcmVmb3JlIGV4cGVjdAorICAgICAq
IHB0X293bmVyLT5kaXJ0eV9jcHVtYXNrIHRvIGJlIGVtcHR5LCBidXQgaXQg
aXMgYSB3YXN0ZSBvZiBlZmZvcnQgdG8KKyAgICAgKiBleHBsaWNpdGx5IGNo
ZWNrIGZvciwgYW5kIGV4Y2x1ZGUsIHRoaXMgY29ybmVyIGNhc2UuCisgICAg
ICoKKyAgICAgKiBmbHVzaF9saW5lYXJfcHQgcmVxdWlyZXMgYSBGTFVTSF9U
TEIgdG8gYWxsIGRpcnR5IENQVXMuICBUaGUgZmx1c2ggbXVzdAorICAgICAq
IGJlIHBlcmZvcm1lZCBub3cgdG8gbWFpbnRhaW4gY29ycmVjdCBiZWhhdmlv
dXIgYWNyb3NzIGEgbXVsdGljYWxsLgorICAgICAqIGkuZS4gd2UgY2Fubm90
IHJlbGF4IEZMVVNIX1RMQiB0byBGTFVTSF9ST09UX1BHVEJMLCBnaXZlbiB0
aGF0IHRoZQorICAgICAqIGZvcm1lciBpcyBhIHNpZGUgZWZmZWN0IG9mIHRo
ZSBsYXR0ZXIsIGJlY2F1c2UgdGhlIHJlc3luYyAod2hpY2ggaXMgaW4KKyAg
ICAgKiB0aGUgcmV0dXJuLXRvLWd1ZXN0IHBhdGgpIGhhcHBlbnMgdG9vIGxh
dGUuCisgICAgICoKKyAgICAgKiBmbHVzaF9yb290X3B0XyogcmVxdWlyZXMg
RkxVU0hfUk9PVF9QR1RCTCBvbiBlaXRoZXIgdGhlIGxvY2FsIENQVQorICAg
ICAqIChpbXBsaWVzIHB0X293bmVyID09IGN1cnJlbnQtPmRvbWFpbiBhbmQg
Y3VycmVudC0+cHJvY2Vzc29yIHNldCBpbgorICAgICAqIHB0X293bmVyLT5k
aXJ0eV9jcHVtYXNrKSwgYW5kL29yIGFsbCAqb3RoZXIqIGRpcnR5IENQVXMg
YXMgdGhlcmUgYXJlCisgICAgICogcmVmZXJlbmNlcyB3ZSBjYW4ndCBhY2Nv
dW50IGZvciBsb2NhbGx5LgorICAgICAqLworICAgIGlmICggZmx1c2hfbGlu
ZWFyX3B0IC8qIHx8IGZsdXNoX3Jvb3RfcHRfbG9jYWwgfHwgZmx1c2hfcm9v
dF9wdF9vdGhlcnMgKi8gKQogICAgIHsKKyAgICAgICAgdW5zaWduZWQgaW50
IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKKyAgICAgICAgY3B1bWFza190
ICptYXNrID0gcHRfb3duZXItPmRpcnR5X2NwdW1hc2s7CisKICAgICAgICAg
LyoKLSAgICAgICAgICogRm9yY2Ugb3RoZXIgdkNQVS1zIG9mIHRoZSBhZmZl
Y3RlZCBndWVzdCB0byBwaWNrIHVwIEw0IGVudHJ5Ci0gICAgICAgICAqIGNo
YW5nZXMgKGlmIGFueSkuCisgICAgICAgICAqIEFsd2F5cyBoYW5kbGUgbG9j
YWwgZmx1c2hpbmcgc2VwYXJhdGVseSAoaWYgYXBwbGljYWJsZSksIHRvCisg
ICAgICAgICAqIHNlcGFyYXRlIHRoZSBmbHVzaCBpbnZvY2F0aW9ucyBhcHBy
b3ByaWF0ZWx5IGZvciBzY29wZSBvZiB0aGUgdHdvCisgICAgICAgICAqIGZs
dXNoX3Jvb3RfcHRfKiB2YXJpYWJsZXMuCiAgICAgICAgICAqLwotICAgICAg
ICB1bnNpZ25lZCBpbnQgY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwotICAg
ICAgICBjcHVtYXNrX3QgKm1hc2sgPSBwZXJfY3B1KHNjcmF0Y2hfY3B1bWFz
aywgY3B1KTsKKyAgICAgICAgaWYgKCBsaWtlbHkoY3B1bWFza190ZXN0X2Nw
dShjcHUsIG1hc2spKSApCisgICAgICAgIHsKKyAgICAgICAgICAgIG1hc2sg
PSBwZXJfY3B1KHNjcmF0Y2hfY3B1bWFzaywgY3B1KTsKIAotICAgICAgICBj
cHVtYXNrX2FuZG5vdChtYXNrLCBwdF9vd25lci0+ZGlydHlfY3B1bWFzaywg
Y3B1bWFza19vZihjcHUpKTsKKyAgICAgICAgICAgIGNwdW1hc2tfY29weSht
YXNrLCBwdF9vd25lci0+ZGlydHlfY3B1bWFzayk7CisgICAgICAgICAgICBf
X2NwdW1hc2tfY2xlYXJfY3B1KGNwdSwgbWFzayk7CisKKyAgICAgICAgICAg
IGZsdXNoX2xvY2FsKEZMVVNIX1RMQiB8CisgICAgICAgICAgICAgICAgICAg
ICAgICAoZmx1c2hfcm9vdF9wdF9sb2NhbCA/IEZMVVNIX1JPT1RfUEdUQkwg
OiAwKSk7CisgICAgICAgIH0KKyAgICAgICAgZWxzZQorICAgICAgICAgICAg
LyogU2FuaXR5IGNoZWNrLiAgZmx1c2hfcm9vdF9wdF9sb2NhbCBpbXBsaWVz
IGxvY2FsIGNwdSBpcyBkaXJ0eS4gKi8KKyAgICAgICAgICAgIEFTU0VSVCgh
Zmx1c2hfcm9vdF9wdF9sb2NhbCk7CisKKyAgICAgICAgLyogRmx1c2ggdGhl
IHJlbW90ZSBkaXJ0eSBDUFVzLiAgRG9lcyBub3QgaW5jbHVkZSB0aGUgbG9j
YWwgQ1BVLiAqLwogICAgICAgICBpZiAoICFjcHVtYXNrX2VtcHR5KG1hc2sp
ICkKLSAgICAgICAgICAgIGZsdXNoX21hc2sobWFzaywgRkxVU0hfUk9PVF9Q
R1RCTCk7CisgICAgICAgICAgICBmbHVzaF9tYXNrKG1hc2ssIEZMVVNIX1RM
QiB8CisgICAgICAgICAgICAgICAgICAgICAgIChmbHVzaF9yb290X3B0X290
aGVycyA/IEZMVVNIX1JPT1RfUEdUQkwgOiAwKSk7CiAgICAgfQorICAgIGVs
c2UKKyAgICAgICAgLyogU2FuaXR5IGNoZWNrLiAgZmx1c2hfcm9vdF9wdF8q
IGltcGxpZXMgZmx1c2hfbGluZWFyX3B0LiAqLworICAgICAgICBBU1NFUlQo
IWZsdXNoX3Jvb3RfcHRfbG9jYWwgJiYgIWZsdXNoX3Jvb3RfcHRfb3RoZXJz
KTsKIAogICAgIHBlcmZjX2FkZChudW1fcGFnZV91cGRhdGVzLCBpKTsKIAot
LSAKMi4yMC4xCgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0001-x86-pv-Drop-FLUSH_TLB_GLOBAL-in-do_mmu_update-for-XP.patch"
Content-Transfer-Encoding: base64

RnJvbSA5NDFmNjlhNDI4Y2Q5ODkxNDQzMDA1MTllNTQ4ZTM0NmM2ODFhMWIz
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBUaHUsIDIyIE9j
dCAyMDIwIDExOjI4OjU4ICswMTAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIHg4
Ni9wdjogRHJvcCBGTFVTSF9UTEJfR0xPQkFMIGluIGRvX21tdV91cGRhdGUo
KSBmb3IgWFBUSQoKYy9zIDlkMWQzMWFkOTQ5OCAieDg2OiBzbGlnaHRseSBy
ZWR1Y2UgTWVsdGRvd24gYmFuZC1haWQgb3ZlcmhlYWQiIHJlbW92ZWQgdGhl
CnVzZSBvZiBHbG9iYWwgVExCIGZsdXNoZXMgb24gdGhlIFhlbiBlbnRyeSBw
YXRoLCBidXQgYWRkZWQgYSBGTFVTSF9UTEJfR0xPQkFMCnRvIHRoZSBMNCBw
YXRoIGluIGRvX21tdV91cGRhdGUoKS4KCkhvd2V2ZXIsIHRoaXMgd2FzIHVu
bmVjZXNzYXJ5LgoKSXQgaXMgdGhlIGd1ZXN0cyByZXNwb25zaWJpbGl0eSB0
byBwZXJmb3JtIGFwcHJvcHJpYXRlIFRMQiBmbHVzaGluZyBpZiB0aGUgTDQK
bW9kaWZpY2F0aW9uIGFsdGVyZWQgYW4gZXN0YWJsaXNoZWQgbWFwcGluZyBp
biBhIGZsdXNoLXJlbGV2YW50IHdheS4gIEluIHRoaXMKY2FzZSwgYW4gTU1V
RVhUX09QIGh5cGVyY2FsbCB3aWxsIGZvbGxvdy4gIFRoZSBjYXNlIHdoaWNo
IFhlbiBuZWVkcyB0byBjb3ZlcgppcyB3aGVuIG5ldyBtYXBwaW5ncyBhcmUg
Y3JlYXRlZCwgYW5kIHRoZSByZXN5bmMgb24gdGhlIGV4aXQtdG8tZ3Vlc3Qg
cGF0aApjb3ZlcnMgdGhpcyBjb3JyZWN0bHkuCgpUaGVyZSBpcyBhIGNvcm5l
ciBjYXNlIHdpdGggbXVsdGlwbGUgdkNQVXMgaW4gaHlwZXJjYWxscyBhdCB0
aGUgc2FtZSB0aW1lLAp3aGljaCA5ZDFkMzFhZDk0OTggY2hhbmdlZCwgYW5k
IHRoaXMgcGF0Y2ggY2hhbmdlcyBiYWNrIHRvIGl0cyBvcmlnaW5hbCBYUFRJ
CmJlaGF2aW91ci4KCkFyY2hpdGVjdHVyYWxseSwgZXN0YWJsaXNoZWQgVExC
IGVudHJpZXMgY2FuIGNvbnRpbnVlIHRvIGJlIHVzZWQgdW50aWwgdGhlCmJy
b2FkY2FzdCBmbHVzaCBoYXMgY29tcGxldGVkLiAgVGhlcmVmb3JlLCBldmVu
IHdpdGggY29uY3VycmVudCBoeXBlcmNhbGxzLAp0aGUgZ3Vlc3QgY2Fubm90
IGRlcGVuZCBvbiBvbGRlciBtYXBwaW5ncyBub3QgYmVpbmcgdXNlZCB1bnRp
bCBhbiBNTVVFWFRfT1AKaHlwZXJjYWxsIGNvbXBsZXRlcy4gIFhlbidzIGlt
cGxlbWVudGF0aW9uIG9mIGd1ZXN0LWluaXRpYXRlZCBmbHVzaGVzIHdpbGwK
dGFrZSBjb3JyZWN0IGVmZmVjdCBvbiB0b3Agb2YgYW4gaW4tcHJvZ3Jlc3Mg
aHlwZXJjYWxsLCBwaWNraW5nIHVwIG5ldyBtYXBwaW5nCnNldHRpbmcgYmVm
b3JlIHRoZSBvdGhlciB2Q1BVJ3MgTU1VRVhUX09QIGNvbXBsZXRlcy4KCk5v
dGU6IFRoZSBjb3JyZWN0bmVzcyBvZiB0aGlzIGNoYW5nZSBpcyBub3QgaW1w
YWN0ZWQgYnkgd2hldGhlciBYUFRJIHVzZXMKZ2xvYmFsIG1hcHBpbmdzIG9y
IG5vdC4gIENvcnJlY3RuZXNzIHRoZXJlIGRlcGVuZHMgb24gdGhlIGJlaGF2
aW91ciBvZiBYZW4gb24KdGhlIGVudHJ5L2V4aXQgcGF0aHMgd2hlbiBzd2l0
Y2hpbmcgdHdvL2Zyb20gdGhlIFhQVEkgInNoYWRvdyIgcGFnZXRhYmxlcy4K
ClRoaXMgaXMgKG5vdCByZWFsbHkpIFhTQS0yODYgKGJ1dCBuZWNlc3Nhcnkg
dG8gc2ltcGxpZnkgdGhlIGxvZ2ljKS4KCkZpeGVzOiA5ZDFkMzFhZDk0OTgg
KCJ4ODY6IHNsaWdodGx5IHJlZHVjZSBNZWx0ZG93biBiYW5kLWFpZCBvdmVy
aGVhZCIpClNpZ25lZC1vZmYtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CihjaGVycnkgcGlja2VkIGZyb20gY29tbWl0
IDA1NWUxYzNhM2Q5NWIxZTc1MzE0ODM2OWZiYzRiYTQ4NzgyZGQ2MDIpCi0t
LQogeGVuL2FyY2gveDg2L21tLmMgfCAyICstCiAxIGZpbGUgY2hhbmdlZCwg
MSBpbnNlcnRpb24oKyksIDEgZGVsZXRpb24oLSkKCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDNj
YjZmYWJkYWUuLjFjYWEyZGYwYTUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4
Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC00MTkzLDcgKzQx
OTMsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAKICAgICAgICAgY3B1bWFz
a19hbmRub3QobWFzaywgcHRfb3duZXItPmRpcnR5X2NwdW1hc2ssIGNwdW1h
c2tfb2YoY3B1KSk7CiAgICAgICAgIGlmICggIWNwdW1hc2tfZW1wdHkobWFz
aykgKQotICAgICAgICAgICAgZmx1c2hfbWFzayhtYXNrLCBGTFVTSF9UTEJf
R0xPQkFMIHwgRkxVU0hfUk9PVF9QR1RCTCk7CisgICAgICAgICAgICBmbHVz
aF9tYXNrKG1hc2ssIEZMVVNIX1JPT1RfUEdUQkwpOwogICAgIH0KIAogICAg
IHBlcmZjX2FkZChudW1fcGFnZV91cGRhdGVzLCBpKTsKLS0gCjIuMjAuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0002-x86-pv-Flush-TLB-in-response-to-paging-structure-cha.patch"
Content-Transfer-Encoding: base64

RnJvbSAxMGJiNjNjMjAzZjQyZDkzMWZhMWZhN2RiYmFlN2NlMTc2NWNlY2Yy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpEYXRlOiBNb24sIDE5IE9j
dCAyMDIwIDE1OjUxOjIyICswMTAwClN1YmplY3Q6IFtQQVRDSCAyLzJdIHg4
Ni9wdjogRmx1c2ggVExCIGluIHJlc3BvbnNlIHRvIHBhZ2luZyBzdHJ1Y3R1
cmUgY2hhbmdlcwoKV2l0aCBNTVVfVVBEQVRFLCBhIFBWIGd1ZXN0IGNhbiBt
YWtlIGNoYW5nZXMgdG8gaGlnaGVyIGxldmVsIHBhZ2V0YWJsZXMuICBUaGlz
CmlzIHNhZmUgZnJvbSBYZW4ncyBwb2ludCBvZiB2aWV3IChhcyB0aGUgdXBk
YXRlIG9ubHkgYWZmZWN0cyBndWVzdCBtYXBwaW5ncyksCmFuZCB0aGUgZ3Vl
c3QgaXMgcmVxdWlyZWQgdG8gZmx1c2ggKGlmIG5lY2Vzc2FyeSkgYWZ0ZXIg
bWFraW5nIHVwZGF0ZXMuCgpIb3dldmVyLCBYZW4ncyB1c2Ugb2YgbGluZWFy
IHBhZ2V0YWJsZXMgKFVQREFURV9WQV9NQVBQSU5HLCBHTlRUQUJPUF9tYXAs
CndyaXRlYWJsZSBwYWdldGFibGVzLCBldGMuKSBpcyBhbiBpbXBsZW1lbnRh
dGlvbiBkZXRhaWwgb3V0c2lkZSBvZiB0aGUKQVBJL0FCSS4KCkNoYW5nZXMg
aW4gdGhlIHBhZ2luZyBzdHJ1Y3R1cmUgcmVxdWlyZSBpbnZhbGlkYXRpb25z
IGluIHRoZSBsaW5lYXIgcGFnZXRhYmxlCnJhbmdlIGZvciBzdWJzZXF1ZW50
IGFjY2Vzc2VzIGludG8gdGhlIGxpbmVhciBwYWdldGFibGVzIHRvIGFjY2Vz
cyBub24tc3RhbGUKbWFwcGluZ3MuICBYZW4gbXVzdCBwcm92aWRlIHN1aXRh
YmxlIGZsdXNoaW5nIHRvIHByZXZlbnQgaW50ZXJtaXhlZCBndWVzdAphY3Rp
b25zIGZyb20gYWNjaWRlbnRhbGx5IGFjY2Vzc2luZy9tb2RpZnlpbmcgdGhl
IHdyb25nIHBhZ2V0YWJsZS4KCkZvciBhbGwgTDIgYW5kIGhpZ2hlciBtb2Rp
ZmljYXRpb25zLCBmbHVzaCB0aGUgVExCLiAgUFYgZ3Vlc3RzIGNhbm5vdCBj
cmVhdGUKTDIgb3IgaGlnaGVyIGVudHJpZXMgd2l0aCB0aGUgR2xvYmFsIGJp
dCBzZXQsIHNvIG5vIG1hcHBpbmdzIGVzdGFibGlzaGVkIGluCnRoZSBsaW5l
YXIgcmFuZ2UgY2FuIGJlIGdsb2JhbC4gIChUaGlzIGNvdWxkIGluIHByaW5j
aXBsZSBiZSBhbiBvcmRlciAzOSBmbHVzaApzdGFydGluZyBhdCBMSU5FQVJf
UFRfVklSVF9TVEFSVCwgYnV0IG5vIHN1Y2ggbWVjaGFuaXNtIGV4aXN0cyBp
biBwcmFjdGljZS4pCgpFeHByZXNzIHRoZSBuZWNlc3NhcnkgZmx1c2hlcyBh
cyBhIHNldCBvZiBib29sZWFucyB3aGljaCBhY2N1bXVsYXRlIGFjcm9zcyB0
aGUKb3BlcmF0aW9uLiAgQ29tbWVudCB0aGUgZmx1c2hpbmcgbG9naWMgZXh0
ZW5zaXZlbHkuCgpUaGlzIGlzIFhTQS0yODYuCgpTaWduZWQtb2ZmLWJ5OiBB
bmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZp
ZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hl
cnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCAxNmEyMDk2M2IzMjA5Nzg4ZjJjMGQz
YTNlZWJiN2Q5MmYwM2Y1ODgzKQotLS0KIHhlbi9hcmNoL3g4Ni9tbS5jIHwg
NjkgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0t
LS0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA1OSBpbnNlcnRpb25zKCspLCAxMCBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDFjYWEyZGYwYTUuLjYxY2Y2YTdi
OWIgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9h
cmNoL3g4Ni9tbS5jCkBAIC0zODk2LDcgKzM4OTYsOCBAQCBsb25nIGRvX21t
dV91cGRhdGUoCiAgICAgc3RydWN0IHZjcHUgKmN1cnIgPSBjdXJyZW50LCAq
diA9IGN1cnI7CiAgICAgc3RydWN0IGRvbWFpbiAqZCA9IHYtPmRvbWFpbiwg
KnB0X293bmVyID0gZCwgKnBnX293bmVyOwogICAgIG1mbl90IG1hcF9tZm4g
PSBJTlZBTElEX01GTiwgbWZuOwotICAgIGJvb2wgc3luY19ndWVzdCA9IGZh
bHNlOworICAgIGJvb2wgZmx1c2hfbGluZWFyX3B0ID0gZmFsc2UsIGZsdXNo
X3Jvb3RfcHRfbG9jYWwgPSBmYWxzZSwKKyAgICAgICAgZmx1c2hfcm9vdF9w
dF9vdGhlcnMgPSBmYWxzZTsKICAgICB1aW50MzJfdCB4c21fbmVlZGVkID0g
MDsKICAgICB1aW50MzJfdCB4c21fY2hlY2tlZCA9IDA7CiAgICAgaW50IHJj
ID0gcHV0X29sZF9ndWVzdF90YWJsZShjdXJyKTsKQEAgLTQwNDYsNiArNDA0
Nyw4IEBAIGxvbmcgZG9fbW11X3VwZGF0ZSgKICAgICAgICAgICAgICAgICAg
ICAgICAgIGJyZWFrOwogICAgICAgICAgICAgICAgICAgICByYyA9IG1vZF9s
Ml9lbnRyeSh2YSwgbDJlX2Zyb21faW50cHRlKHJlcS52YWwpLCBtZm4sCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNtZCA9PSBN
TVVfUFRfVVBEQVRFX1BSRVNFUlZFX0FELCB2KTsKKyAgICAgICAgICAgICAg
ICAgICAgaWYgKCAhcmMgKQorICAgICAgICAgICAgICAgICAgICAgICAgZmx1
c2hfbGluZWFyX3B0ID0gdHJ1ZTsKICAgICAgICAgICAgICAgICAgICAgYnJl
YWs7CiAKICAgICAgICAgICAgICAgICBjYXNlIFBHVF9sM19wYWdlX3RhYmxl
OgpAQCAtNDA1Myw2ICs0MDU2LDggQEAgbG9uZyBkb19tbXVfdXBkYXRlKAog
ICAgICAgICAgICAgICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgICAgICAg
ICAgICAgIHJjID0gbW9kX2wzX2VudHJ5KHZhLCBsM2VfZnJvbV9pbnRwdGUo
cmVxLnZhbCksIG1mbiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY21kID09IE1NVV9QVF9VUERBVEVfUFJFU0VSVkVfQUQsIHYp
OworICAgICAgICAgICAgICAgICAgICBpZiAoICFyYyApCisgICAgICAgICAg
ICAgICAgICAgICAgICBmbHVzaF9saW5lYXJfcHQgPSB0cnVlOwogICAgICAg
ICAgICAgICAgICAgICBicmVhazsKIAogICAgICAgICAgICAgICAgIGNhc2Ug
UEdUX2w0X3BhZ2VfdGFibGU6CkBAIC00MDYwLDYgKzQwNjUsOCBAQCBsb25n
IGRvX21tdV91cGRhdGUoCiAgICAgICAgICAgICAgICAgICAgICAgICBicmVh
azsKICAgICAgICAgICAgICAgICAgICAgcmMgPSBtb2RfbDRfZW50cnkodmEs
IGw0ZV9mcm9tX2ludHB0ZShyZXEudmFsKSwgbWZuLAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBjbWQgPT0gTU1VX1BUX1VQREFU
RV9QUkVTRVJWRV9BRCwgdik7CisgICAgICAgICAgICAgICAgICAgIGlmICgg
IXJjICkKKyAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2xpbmVhcl9w
dCA9IHRydWU7CiAgICAgICAgICAgICAgICAgICAgIGlmICggIXJjICYmIHB0
X293bmVyLT5hcmNoLnB2LnhwdGkgKQogICAgICAgICAgICAgICAgICAgICB7
CiAgICAgICAgICAgICAgICAgICAgICAgICBib29sIGxvY2FsX2luX3VzZSA9
IGZhbHNlOwpAQCAtNDA2OCw3ICs0MDc1LDcgQEAgbG9uZyBkb19tbXVfdXBk
YXRlKAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbWZu
KSApCiAgICAgICAgICAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgbG9jYWxfaW5fdXNlID0gdHJ1ZTsKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBnZXRfY3B1X2luZm8oKS0+cm9vdF9wZ3Rf
Y2hhbmdlZCA9IHRydWU7CisgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Zmx1c2hfcm9vdF9wdF9sb2NhbCA9IHRydWU7CiAgICAgICAgICAgICAgICAg
ICAgICAgICB9CiAKICAgICAgICAgICAgICAgICAgICAgICAgIC8qCkBAIC00
MDgwLDcgKzQwODcsNyBAQCBsb25nIGRvX21tdV91cGRhdGUoCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICgxICsgISEocGFnZS0+dS5pbnVzZS50
eXBlX2luZm8gJiBQR1RfcGlubmVkKSArCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBtZm5fZXEocGFnZXRhYmxlX2dldF9tZm4oY3Vyci0+YXJj
aC5ndWVzdF90YWJsZV91c2VyKSwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBtZm4pICsgbG9jYWxfaW5fdXNlKSApCi0gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgc3luY19ndWVzdCA9IHRydWU7CisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgZmx1c2hfcm9vdF9wdF9vdGhlcnMg
PSB0cnVlOwogICAgICAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgICAg
ICAgICAgIGJyZWFrOwogCkBAIC00MTgyLDE5ICs0MTg5LDYxIEBAIGxvbmcg
ZG9fbW11X3VwZGF0ZSgKICAgICBpZiAoIHZhICkKICAgICAgICAgdW5tYXBf
ZG9tYWluX3BhZ2UodmEpOwogCi0gICAgaWYgKCBzeW5jX2d1ZXN0ICkKKyAg
ICAvKgorICAgICAqIFBlcmZvcm0gcmVxdWlyZWQgVExCIG1haW50ZW5hbmNl
LgorICAgICAqCisgICAgICogVGhpcyBsb2dpYyBjdXJyZW50bHkgZGVwZW5k
IG9uIGZsdXNoX2xpbmVhcl9wdCBiZWluZyBhIHN1cGVyc2V0IG9mIHRoZQor
ICAgICAqIGZsdXNoX3Jvb3RfcHRfKiBjb25kaXRpb25zLgorICAgICAqCisg
ICAgICogcHRfb3duZXIgbWF5IG5vdCBiZSBjdXJyZW50LT5kb21haW4uICBU
aGlzIG1heSBvY2N1ciBkdXJpbmcKKyAgICAgKiBjb25zdHJ1Y3Rpb24gb2Yg
MzJiaXQgUFYgZ3Vlc3RzLCBvciBkZWJ1Z2dpbmcgb2YgUFYgZ3Vlc3RzLiAg
VGhlCisgICAgICogYmVoYXZpb3VyIGNhbm5vdCBiZSBjb3JyZWN0IHdpdGgg
ZG9tYWluIHVucGF1c2VkLiAgV2UgdGhlcmVmb3JlIGV4cGVjdAorICAgICAq
IHB0X293bmVyLT5kaXJ0eV9jcHVtYXNrIHRvIGJlIGVtcHR5LCBidXQgaXQg
aXMgYSB3YXN0ZSBvZiBlZmZvcnQgdG8KKyAgICAgKiBleHBsaWNpdGx5IGNo
ZWNrIGZvciwgYW5kIGV4Y2x1ZGUsIHRoaXMgY29ybmVyIGNhc2UuCisgICAg
ICoKKyAgICAgKiBmbHVzaF9saW5lYXJfcHQgcmVxdWlyZXMgYSBGTFVTSF9U
TEIgdG8gYWxsIGRpcnR5IENQVXMuICBUaGUgZmx1c2ggbXVzdAorICAgICAq
IGJlIHBlcmZvcm1lZCBub3cgdG8gbWFpbnRhaW4gY29ycmVjdCBiZWhhdmlv
dXIgYWNyb3NzIGEgbXVsdGljYWxsLgorICAgICAqIGkuZS4gd2UgY2Fubm90
IHJlbGF4IEZMVVNIX1RMQiB0byBGTFVTSF9ST09UX1BHVEJMLCBnaXZlbiB0
aGF0IHRoZQorICAgICAqIGZvcm1lciBpcyBhIHNpZGUgZWZmZWN0IG9mIHRo
ZSBsYXR0ZXIsIGJlY2F1c2UgdGhlIHJlc3luYyAod2hpY2ggaXMgaW4KKyAg
ICAgKiB0aGUgcmV0dXJuLXRvLWd1ZXN0IHBhdGgpIGhhcHBlbnMgdG9vIGxh
dGUuCisgICAgICoKKyAgICAgKiBmbHVzaF9yb290X3B0XyogcmVxdWlyZXMg
RkxVU0hfUk9PVF9QR1RCTCBvbiBlaXRoZXIgdGhlIGxvY2FsIENQVQorICAg
ICAqIChpbXBsaWVzIHB0X293bmVyID09IGN1cnJlbnQtPmRvbWFpbiBhbmQg
Y3VycmVudC0+cHJvY2Vzc29yIHNldCBpbgorICAgICAqIHB0X293bmVyLT5k
aXJ0eV9jcHVtYXNrKSwgYW5kL29yIGFsbCAqb3RoZXIqIGRpcnR5IENQVXMg
YXMgdGhlcmUgYXJlCisgICAgICogcmVmZXJlbmNlcyB3ZSBjYW4ndCBhY2Nv
dW50IGZvciBsb2NhbGx5LgorICAgICAqLworICAgIGlmICggZmx1c2hfbGlu
ZWFyX3B0IC8qIHx8IGZsdXNoX3Jvb3RfcHRfbG9jYWwgfHwgZmx1c2hfcm9v
dF9wdF9vdGhlcnMgKi8gKQogICAgIHsKKyAgICAgICAgdW5zaWduZWQgaW50
IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKKyAgICAgICAgY3B1bWFza190
ICptYXNrID0gcHRfb3duZXItPmRpcnR5X2NwdW1hc2s7CisKICAgICAgICAg
LyoKLSAgICAgICAgICogRm9yY2Ugb3RoZXIgdkNQVS1zIG9mIHRoZSBhZmZl
Y3RlZCBndWVzdCB0byBwaWNrIHVwIEw0IGVudHJ5Ci0gICAgICAgICAqIGNo
YW5nZXMgKGlmIGFueSkuCisgICAgICAgICAqIEFsd2F5cyBoYW5kbGUgbG9j
YWwgZmx1c2hpbmcgc2VwYXJhdGVseSAoaWYgYXBwbGljYWJsZSksIHRvCisg
ICAgICAgICAqIHNlcGFyYXRlIHRoZSBmbHVzaCBpbnZvY2F0aW9ucyBhcHBy
b3ByaWF0ZWx5IGZvciBzY29wZSBvZiB0aGUgdHdvCisgICAgICAgICAqIGZs
dXNoX3Jvb3RfcHRfKiB2YXJpYWJsZXMuCiAgICAgICAgICAqLwotICAgICAg
ICB1bnNpZ25lZCBpbnQgY3B1ID0gc21wX3Byb2Nlc3Nvcl9pZCgpOwotICAg
ICAgICBjcHVtYXNrX3QgKm1hc2sgPSBwZXJfY3B1KHNjcmF0Y2hfY3B1bWFz
aywgY3B1KTsKKyAgICAgICAgaWYgKCBsaWtlbHkoY3B1bWFza190ZXN0X2Nw
dShjcHUsIG1hc2spKSApCisgICAgICAgIHsKKyAgICAgICAgICAgIG1hc2sg
PSBwZXJfY3B1KHNjcmF0Y2hfY3B1bWFzaywgY3B1KTsKIAotICAgICAgICBj
cHVtYXNrX2FuZG5vdChtYXNrLCBwdF9vd25lci0+ZGlydHlfY3B1bWFzaywg
Y3B1bWFza19vZihjcHUpKTsKKyAgICAgICAgICAgIGNwdW1hc2tfY29weSht
YXNrLCBwdF9vd25lci0+ZGlydHlfY3B1bWFzayk7CisgICAgICAgICAgICBf
X2NwdW1hc2tfY2xlYXJfY3B1KGNwdSwgbWFzayk7CisKKyAgICAgICAgICAg
IGZsdXNoX2xvY2FsKEZMVVNIX1RMQiB8CisgICAgICAgICAgICAgICAgICAg
ICAgICAoZmx1c2hfcm9vdF9wdF9sb2NhbCA/IEZMVVNIX1JPT1RfUEdUQkwg
OiAwKSk7CisgICAgICAgIH0KKyAgICAgICAgZWxzZQorICAgICAgICAgICAg
LyogU2FuaXR5IGNoZWNrLiAgZmx1c2hfcm9vdF9wdF9sb2NhbCBpbXBsaWVz
IGxvY2FsIGNwdSBpcyBkaXJ0eS4gKi8KKyAgICAgICAgICAgIEFTU0VSVCgh
Zmx1c2hfcm9vdF9wdF9sb2NhbCk7CisKKyAgICAgICAgLyogRmx1c2ggdGhl
IHJlbW90ZSBkaXJ0eSBDUFVzLiAgRG9lcyBub3QgaW5jbHVkZSB0aGUgbG9j
YWwgQ1BVLiAqLwogICAgICAgICBpZiAoICFjcHVtYXNrX2VtcHR5KG1hc2sp
ICkKLSAgICAgICAgICAgIGZsdXNoX21hc2sobWFzaywgRkxVU0hfUk9PVF9Q
R1RCTCk7CisgICAgICAgICAgICBmbHVzaF9tYXNrKG1hc2ssIEZMVVNIX1RM
QiB8CisgICAgICAgICAgICAgICAgICAgICAgIChmbHVzaF9yb290X3B0X290
aGVycyA/IEZMVVNIX1JPT1RfUEdUQkwgOiAwKSk7CiAgICAgfQorICAgIGVs
c2UKKyAgICAgICAgLyogU2FuaXR5IGNoZWNrLiAgZmx1c2hfcm9vdF9wdF8q
IGltcGxpZXMgZmx1c2hfbGluZWFyX3B0LiAqLworICAgICAgICBBU1NFUlQo
IWZsdXNoX3Jvb3RfcHRfbG9jYWwgJiYgIWZsdXNoX3Jvb3RfcHRfb3RoZXJz
KTsKIAogICAgIHBlcmZjX2FkZChudW1fcGFnZV91cGRhdGVzLCBpKTsKIAot
LSAKMi4yMC4xCgo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 19:04:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 19:04:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18654.43634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka1aw-00012J-C3; Tue, 03 Nov 2020 19:03:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18654.43634; Tue, 03 Nov 2020 19:03:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka1aw-00012C-8u; Tue, 03 Nov 2020 19:03:50 +0000
Received: by outflank-mailman (input) for mailman id 18654;
 Tue, 03 Nov 2020 19:03:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zFvq=EJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ka1av-000127-1m
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 19:03:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9f1dc3b-b11f-4a8c-bde3-8722dfde5f87;
 Tue, 03 Nov 2020 19:03:47 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1BAEC2074B;
 Tue,  3 Nov 2020 19:03:46 +0000 (UTC)
X-Inumbo-ID: a9f1dc3b-b11f-4a8c-bde3-8722dfde5f87
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604430226;
	bh=9HeorJAkuowbAO8X3uejvWjY5pogCfovFWVu4WF1NN0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PobIdsf1hkWsehple2WynRNOHnx1cJYYGZih7kEM9W9ncZ8Kl/Q0n7tO4TSZvP+wx
	 4jko2EJttIJrw1vgHxsET9xde4on19fk0mvYT/ImmoGQca4RcTJF1o1GMh5EOfQsqi
	 TX3thQkz4qy4oPflmh2k+3zrSlaGzf5h9WuS5fCU=
Date: Tue, 3 Nov 2020 11:03:45 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: anthony.perard@citrix.com, wl@xen.org, ian.jackson@eu.citrix.com
cc: Takahiro Akashi <takahiro.akashi@linaro.org>, 
    Alex Bennée <alex.bennee@linaro.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: BUG: libxl vuart build order
In-Reply-To: <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2011031103180.5812@sstabellini-ThinkPad-T480s>
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com> <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s> <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa> <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa> <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s> <20201030025157.GA18567@laputa> <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2125818377-1604430226=:5812"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2125818377-1604430226=:5812
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

ping?


On Fri, 30 Oct 2020, Stefano Stabellini wrote:
> On Fri, 30 Oct 2020, Takahiro Akashi wrote:
> > Hi Stefano,
> > 
> > On Thu, Oct 29, 2020 at 07:03:28PM -0700, Stefano Stabellini wrote:
> > > + xen-devel and libxl maintainers
> > > 
> > > In short, there is a regression in libxl with the ARM vuart introduced
> > > by moving ARM guests to the PVH build.
> > > 
> > > 
> > > On Thu, 29 Oct 2020, Takahiro Akashi wrote:
> > > > On Wed, Oct 28, 2020 at 02:44:16PM -0700, Stefano Stabellini wrote:
> > > > > On Wed, 28 Oct 2020, Takahiro Akashi wrote:
> > > > > > On Tue, Oct 27, 2020 at 09:02:14AM +0900, Takahiro Akashi wrote:
> > > > > > > On Mon, Oct 26, 2020 at 04:37:30PM -0700, Stefano Stabellini wrote:
> > > > > > > > 
> > > > > > > > On Mon, 26 Oct 2020, Takahiro Akashi wrote:
> > > > > > > > > Stefano,
> > > > > > > > > 
> > > > > > > > > # I'm afraid that I have already bothered you with a lot of questions.
> > > > > > > > > 
> > > > > > > > > When I looked at Xen's vpl011 implementation, I found
> > > > > > > > > that the CR (and LCR_H) registers are not supported (a trap may cause a data abort).
> > > > > > > > > On the other hand, for example, Linux's pl011 driver surely
> > > > > > > > > accesses the CR (and LCR_H) registers.
> > > > > > > > > So I guess that Linux won't be able to use pl011 in a Xen guest VM
> > > > > > > > > if vuart = "sbsa_uart".
> > > > > > > > > 
> > > > > > > > > Is this a known issue or do I miss anything?
> > > > > > > > 
> > > > > > > > Linux should definitely be able to use it, and in fact, I am using it
> > > > > > > > with Linux in my test environment.
> > > > > > > > 
> > > > > > > > I think the confusion comes from the name "vpl011": it is in fact not a
> > > > > > > > full PL011 UART, but an SBSA UART.
> > > > > > > 
> > > > > > > Yeah, I have noticed it.
> > > > > > > 
> > > > > > > > SBSA UART only implements a subset of
> > > > > > > > the PL011 registers. The compatible string is "arm,sbsa-uart", also see
> > > > > > > > drivers/tty/serial/amba-pl011.c:sbsa_uart_probe.
> > > > > > > 
> > > > > > > Looking closely into the details of implementation, I found
> > > > > > > that all the accesses to unimplemented registers, including
> > > > > > > CR, are deliberately avoided in the SBSA part of the Linux driver.
> > > > > > 
> > > > > > So I'm now trying to implement "sbsa-uart" driver on U-Boot
> > > > > > by modifying the existing pl011 driver.
> > > > > > (Please note the current xen'ized U-Boot utilises a para-virtualized
> > > > > > console, i.e. with HVM_PARAM_CONSOLE_PFN.)
> > > > > > 
> > > > > > So far, all my attempts have failed.
> > > > > > 
> > > > > > There are a couple of problems, and one of them is how we can
> > > > > > access vpl011 port (from dom0).
> > > > > > What I did is:
> > > > > > - modify U-Boot's pl011 driver
> > > > > >   (I'm sure that the driver correctly handles a vpl011 device
> > > > > >   with regard to accessing the proper set of registers.)
> > > > > > - start U-Boot guest with "vuart=sbsa_uart" by
> > > > > >     xl create uboot.cfg -c
> > > > > > 
> > > > > > Then I have seen almost nothing on the screen.
> > > > > > Digging into vpl011 implementation, I found that all the characters
> > > > > > written to the DR register will be directed to a "backend domain" if a guest
> > > > > > vm is launched by xl command.
> > > > > > (In case of dom0less, the backend seems to be Xen itself.)
> > > > > > 
> > > > > > As a silly experiment, I modified domain_vpl011_init() to always create
> > > > > > a vpl011 device with "backend_in_domain == false".
> > > > > > Then, I could see more boot messages from U-Boot, but I still fail
> > > > > > to use the device as a console; I mean, we lose all output
> > > > > > at some point and can't type any keys (at a command prompt).
> > > > > > (This will be another problem on U-Boot side.)
> > > > > > 
> > > > > > My first question here is how we can configure and connect a console
> > > > > > in this case?
> > > > > > Should "xl create -c" or "xl console -t vuart" simply work?
> > > > > 
> > > > > "xl create -c" creates a guest and connect to the primary console which
> > > > > is the PV console (i.e. HVM_PARAM_CONSOLE_PFN.)
> > > > 
> > > > So in the case of vuart, it (the console) doesn't work?
> > > > (Apparently, "xl create" doesn't take a '-t' option.)
> > > > 
> > > > > To connect to the emulated sbsa uart you need to pass -t vuart. So yes,
> > > > > "xl console -t vuart domain_name" should get you access to the emulated
> > > > > sbsa uart. The sbsa uart can also be exposed to dom0less guests; you get
> > > > > their output by using CTRL-AAA to switch to the right domU console.
> > > > > 
> > > > > You can add printks to xen/arch/arm/vpl011.c in Xen to see what's
> > > > > happening on the Xen side. vpl011.c is the emulator.
> > > > 
> > > > I'm sure that write to "REG_DR" register is caught by Xen.
> > > > What I don't understand is
> > > > if back_in_domain -> no outputs
> > > > if !back_in_domain -> can see outputs
> > > > 
> > > > (As you know, if a guest is created by xl command, back_in_domain
> > > > is forcedly set to true.)
> > > > 
> > > > I looked into xenstore and found that "vuart/0/tty" does not exist,
> > > > but "console/tty" does.
> > > > How can this happen for vuart?
> > > > (I clearly specified, vuart = "sbsa_uart" in Xen config.)
> > > 
> > > It looks like we have a bug :-(
> > > 
> > > I managed to reproduce the issue. The problem is a race in libxl.
> > > 
> > > tools/libxc/xc_dom_arm.c:alloc_magic_pages is called first, setting
> > > dom->vuart_gfn.  Then, libxl__build_hvm should be setting
> > > state->vuart_gfn to dom->vuart_gfn (like libxl__build_pv does) but it
> > > doesn't.
> > 
> > Thank you for the patch.
> > I confirmed that sbsa-uart driver on U-Boot now works.
> 
> Excellent!
> 
> 
> > === after "xl console -t vuart" ===
> > U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
> > 
> > Xen virtual CPU
> > Model: XENVM-4.15
> > DRAM:  128 MiB
> > 
> > In:    sbsa-pl011
> > Out:   sbsa-pl011
> > Err:   sbsa-pl011
> > xenguest# dm tree
> >  Class     Index  Probed  Driver                Name
> > -----------------------------------------------------------
> >  root          0  [ + ]   root_driver           root_driver
> >  firmware      0  [   ]   psci                  |-- psci
> >  serial        0  [ + ]   serial_pl01x          |-- sbsa-pl011
> >  pvblock       0  [   ]   pvblock               `-- pvblock
> > ===
> > 
> > If possible, I hope that "xl create -c" command would accept "-t vuart"
> > option (or it should automatically select the uart type from the config).
> 
> I think a patch to add the "-t" option to "xl create" would be
> acceptable, right Anthony?
> 
> 
> > > 
> > > ---
> > > 
> > > libxl: set vuart_gfn in libxl__build_hvm
> > > 
> > > Setting vuart_gfn was missed when switching ARM guests to the PVH build.
> > > Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
> > > dom->vuart_gfn.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > 
> > > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > > index f8661e90d4..36fe8915e7 100644
> > > --- a/tools/libxl/libxl_dom.c
> > > +++ b/tools/libxl/libxl_dom.c
> > > @@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
> > >          LOG(ERROR, "hvm build set params failed");
> > >          goto out;
> > >      }
> > > +    state->vuart_gfn = dom->vuart_gfn;
> > >  
> > >      rc = hvm_build_set_xs_values(gc, domid, dom, info);
> > >      if (rc != 0) {
> > 
--8323329-2125818377-1604430226=:5812--


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 19:31:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 19:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18664.43652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka21a-0003eE-I3; Tue, 03 Nov 2020 19:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18664.43652; Tue, 03 Nov 2020 19:31:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka21a-0003e7-Ey; Tue, 03 Nov 2020 19:31:22 +0000
Received: by outflank-mailman (input) for mailman id 18664;
 Tue, 03 Nov 2020 19:31:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ka21Z-0003dV-T0
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 19:31:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7ebe7d6-99ab-4073-8eb8-46d5a43cb481;
 Tue, 03 Nov 2020 19:31:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ka21Q-0002g8-1h; Tue, 03 Nov 2020 19:31:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ka21P-0006Bw-LJ; Tue, 03 Nov 2020 19:31:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ka21P-00074V-Kp; Tue, 03 Nov 2020 19:31:11 +0000
X-Inumbo-ID: e7ebe7d6-99ab-4073-8eb8-46d5a43cb481
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5F5bZxOMNhFkMUE6MVcQXP/9P6SFsxksdy2+irXX0DU=; b=nE5P7aokP7LbhYfB/OU8liCypr
	5YIEuQbmLdNkiRo1eQ+vK5RhQHoHkHpUkzNpj9vyb2CY0w1OCoEROkiN4IcmqRVa8RT3OMBhcImtg
	fT4icdV6dyjazw3vrINbfLIuMjuf9Q38wNC2piaId3UhclM3ClV7g/MB+1aSetATadaM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156381-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156381: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-start:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b7cbaf59f62f8ab8f157698f9e31642bff525bd0
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 19:31:11 +0000

flight 156381 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156381/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 156372 REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 156372 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 156372 pass in 156381
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 156372 pass in 156381
 test-arm64-arm64-xl           8 xen-boot         fail in 156372 pass in 156381
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 156372
 test-amd64-amd64-i386-pvgrub 13 guest-start                fail pass in 156372
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 156372

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 156372 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b7cbaf59f62f8ab8f157698f9e31642bff525bd0
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   94 days
Failing since        152366  2020-08-01 20:49:34 Z   93 days  158 attempts
Testing same since   156372  2020-11-03 01:11:18 Z    0 days    2 attempts

------------------------------------------------------------
3417 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit2 broken
broken-step test-arm64-arm64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 652941 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 19:37:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 19:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18669.43664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka27O-0003sU-AD; Tue, 03 Nov 2020 19:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18669.43664; Tue, 03 Nov 2020 19:37:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka27O-0003sN-6m; Tue, 03 Nov 2020 19:37:22 +0000
Received: by outflank-mailman (input) for mailman id 18669;
 Tue, 03 Nov 2020 19:37:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zFvq=EJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ka27M-0003sI-Ry
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 19:37:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76890bc7-c84f-4ff4-b1b7-5cada1c02668;
 Tue, 03 Nov 2020 19:37:18 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 907AC2080D;
 Tue,  3 Nov 2020 19:37:17 +0000 (UTC)
X-Inumbo-ID: 76890bc7-c84f-4ff4-b1b7-5cada1c02668
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604432238;
	bh=Q+UfnbaKn7KBr0VkNX3ie1Z4rV2/72ppmieFGq2RzvA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZKCBCORsIr7Xsxo9zAN1VXTaZ7kPRlk8BbTAM7O1Zh59PIRRW6eawQh6AQVuj0fAR
	 Eoe8cngYPz7fWLsKrsV0oCBcoeF97N2NILURLa5bJPWUhUsBo3+nscrLvyahyBS1nr
	 4KlGGMPbFwagIL79ISIL4qnazBALQ/fQV1Ubqqhw=
Date: Tue, 3 Nov 2020 11:37:16 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org, 
    julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
In-Reply-To: <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com>
Message-ID: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
References: <20201031002405.4545-1-sstabellini@kernel.org> <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com> <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s> <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 3 Nov 2020, Jan Beulich wrote:
> On 02.11.2020 22:41, Stefano Stabellini wrote:
> > On Mon, 2 Nov 2020, Jan Beulich wrote:
> >> On 31.10.2020 01:24, Stefano Stabellini wrote:
> >>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
> >>>  	  SBSA Generic UART implements a subset of ARM PL011 UART.
> >>>  
> >>>  config ARM_SSBD
> >>> -	bool "Speculative Store Bypass Disable" if EXPERT
> >>> -	depends on HAS_ALTERNATIVE
> >>> +	bool "Speculative Store Bypass Disable"
> >>> +	depends on HAS_ALTERNATIVE && EXPERT
> >>>  	default y
> >>
> >> At the example of this, I'm afraid when the default isn't "n"
> >> (or there's no default directive at all, as ought to be
> >> equivalent to and preferred over "default n"), such a
> >> transformation is not functionally identical: Before your
> >> change, with !EXPERT this option defaults to y. After your
> >> change this option is unavailable (which resolves to it being
> >> off for all consuming purposes).
> >>
> >> IOW there are reasons to have "if ..." attached to the prompts
> >> (for this construct indeed only making the prompt conditional,
> >> not the entire option), but there are also cases where the
> >> cleanup you do is indeed desirable / helpful.
> > 
> > Yeah, thanks for catching it, it is obviously a problem.
> > 
> > My intention was just to somehow "tag" the options tied to EXPERT so
> > that the tag would show up in the menu. Maybe a better, simpler way to
> > do it is to add the word "EXPERT" to the one-line prompt:
> > 
> >  config ARM_SSBD
> > -	bool "Speculative Store Bypass Disable" if EXPERT
> > +	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
> >  	depends on HAS_ALTERNATIVE
> >  	default y
> >  	help
> > 
> > 
> > What do you think?
> 
> While on the surface this may look like an improvement, I don't
> see how it would actually help: If you read the Kconfig file
> itself, the dependency is seen anyway. And on the menu I don't
> see the point of telling someone who has enabled EXPERT that a
> certain option is (or is not) an expert one. If they think
> they're experts, so should they be treated. (It was, after all,
> a deliberate decision to make enabling expert mode easier, and
> hence easier to use for what one might consider not-really-
> experts. I realize saying so may be considered tendentious; I
> mean it in a purely technical sense, and I'd like to apologize
> in advance to anyone not sharing this as a possible perspective
> to take.)
> 
> Plus, of course, the addition of such (EXPERT) markers to
> future options' prompts is liable to get forgotten now and then,
> so sooner or later we'd likely end up with an inconsistent
> mixture anyway.

I tend to agree with you on everything you wrote. The fundamental issue
is that we are (mis)using EXPERT to tag features that are experimental,
as defined by SUPPORT.md.

It is important to be able to distinguish clearly at the Kconfig level
options that are (security) supported from options that are
unsupported/experimental. Today the only way to do it is with EXPERT,
which is not great because:

- it doesn't convey the idea that it is for unsupported/experimental
  features
- if you want to enable one unsupported feature, it is not clear what
  you have to do

So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
the Kconfig menu?

It would make it clearer that by enabling UNSUPPORTED you are going to
get a configuration that is not security supported. And ideally we would
also tag features like ACPI as UNSUPPORTED as I suggested above.
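[For readers following along: the semantic difference Jan points out above can be shown in a minimal Kconfig sketch. The symbol names FOO_PROMPT_GATED and FOO_OPTION_GATED are illustrative only, not options from the Xen tree.]

```kconfig
# Form 1: "if EXPERT" only gates the *prompt*. With !EXPERT the
# option still exists and silently takes its default, so here the
# feature stays enabled (=y) even for non-expert configurations.
config FOO_PROMPT_GATED
	bool "Some feature" if EXPERT
	depends on HAS_ALTERNATIVE
	default y

# Form 2: "depends on EXPERT" gates the *entire option*. With
# !EXPERT the symbol is unavailable, which all consumers see as
# disabled -- a behavioural change from Form 1 whenever the
# default is not "n".
config FOO_OPTION_GATED
	bool "Some feature"
	depends on HAS_ALTERNATIVE && EXPERT
	default y
```

The same distinction would apply unchanged if EXPERT were renamed to UNSUPPORTED: a rename alters what the gate communicates, not which of the two gating semantics is in effect.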


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 20:05:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 20:05:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18678.43676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka2Yf-0006ZB-KY; Tue, 03 Nov 2020 20:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18678.43676; Tue, 03 Nov 2020 20:05:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka2Yf-0006Z4-HQ; Tue, 03 Nov 2020 20:05:33 +0000
Received: by outflank-mailman (input) for mailman id 18678;
 Tue, 03 Nov 2020 20:05:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9NW0=EJ=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1ka2Yd-0006Yz-TU
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 20:05:32 +0000
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06068785-d566-466f-89da-6d579dc82a36;
 Tue, 03 Nov 2020 20:05:30 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id 12so12391830qkl.8
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 12:05:30 -0800 (PST)
Received: from [100.64.74.11] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id 19sm11824433qkf.93.2020.11.03.12.05.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Nov 2020 12:05:29 -0800 (PST)
X-Inumbo-ID: 06068785-d566-466f-89da-6d579dc82a36
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:date:message-id
         :references:cc:in-reply-to:to;
        bh=/K+j6MUmo9QZfRRtW1WN2ZHDw+ebAOYRkq1Ti/Y24Ik=;
        b=nh29CwME8VFbRKAYPXiIXq5Ys6YNYEpDegukX1cgSStgq6X8+HjJc+ymKQXGxEo2IM
         wcF3DZs5P5qtCiGu5Qn8E5HHRm0lORadA4HaStMdOx09Zx8y7Gy3ZKHyJuBccldlTMS9
         33Q6shvKQ+g/AvmPMIABAyjDEdsUX++8sIfdGFCq3wGOl5wa2hPt9Lzvlc4rl2U3Emoz
         FtVzi933+yAgpYML7j8hZR4GCzPJXiOC5iEXecG++hTH/eyEQZHVSrTdZTDZBUQXv8Wn
         I/PVv0i2H1wK8jeomTHUyk1qXbfUZryjN2vayU9UIqWY7Yj0pzEtlXE5YJAnVGrJ39zP
         D7nw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:date:message-id:references:cc:in-reply-to:to;
        bh=/K+j6MUmo9QZfRRtW1WN2ZHDw+ebAOYRkq1Ti/Y24Ik=;
        b=H5+Hhb+DvENB9NsKzMymjwM92Tl18Oq11UKnPgmxKRmcdMCM/MNtE2CzXvVDSGFBKS
         1IAFSlmTDscgapxi1mKzjK/qAO07Xh3THMoY0FN1G7cChe0v6VrCRDP4YJV3eOGlBmEV
         VlEBfuULsGcLMGPDgUeC10/oTuelo06xRf6Jd0w0vN0vkdMxjRv+aMZLGneRsAX4+RIr
         7+Cy3HJxFmRa+Xhv/kw6VNYBt8FBnp5tjRkZ5hAQm9eoObiHmRiNBSrLFzhJdM21P+ud
         rnA46nJyf/7PF/BVGfsPZPsmPo6OdrnIfaUavCP1eSKIl5Yp3IZK/MjYi0cfNsfvHj6i
         Dx3A==
X-Gm-Message-State: AOAM5303xG2kbt1NWEJlEMoecoxUR77GnkMJGb0l88yNqmnD4Y3r9euE
	BDg9eg6/6NTTOOebOrmFGeAQGO4HqBSECA==
X-Google-Smtp-Source: ABdhPJzq8RXLBFiye+POImtg2s5h98c8xut2WGrsrtC+NCMPi3n1i+wQTpceCeg3xgDhSSPosO21qQ==
X-Received: by 2002:a37:4692:: with SMTP id t140mr22333513qka.275.1604433930209;
        Tue, 03 Nov 2020 12:05:30 -0800 (PST)
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
Date: Tue, 3 Nov 2020 15:05:28 -0500
Message-Id: <E359BD65-2917-4087-A6E1-0AD5521CF823@gmail.com>
References: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
Cc: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org,
 Daniel DeGraaf <dgdegra@tycho.nsa.gov>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Roman Shaposhnik <roman@zededa.com>,
 =?utf-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
In-Reply-To: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
To: Stefano Stabellini <sstabellini@kernel.org>
X-Mailer: iPad Mail (18A8395)

On Nov 3, 2020, at 14:37, Stefano Stabellini <sstabellini@kernel.org> wrote:

>
> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>      SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>
>>>>> config ARM_SSBD
>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>> -    depends on HAS_ALTERNATIVE
>>>>> +    bool "Speculative Store Bypass Disable"
>>>>> +    depends on HAS_ALTERNATIVE && EXPERT
>>>>>    default y
>>>>
>>>> At the example of this, I'm afraid when the default isn't "n"
>>>> (or there's no default directive at all, as ought to be
>>>> equivalent to and preferred over "default n"), such a
>>>> transformation is not functionally identical: Before your
>>>> change, with !EXPERT this option defaults to y. After your
>>>> change this option is unavailable (which resolves to it being
>>>> off for all consuming purposes).
>>>>
>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>> (for this construct indeed only making the prompt conditional,
>>>> not the entire option), but there are also cases where the
>>>> cleanup you do is indeed desirable / helpful.
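
To make the point above concrete, here is a minimal sketch of the two forms side by side (ARM_SSBD is just the example from the patch):

```kconfig
# Prompt-only condition: with !EXPERT the question is hidden,
# but "default y" still takes effect, so the option stays enabled.
config ARM_SSBD
    bool "Speculative Store Bypass Disable" if EXPERT
    depends on HAS_ALTERNATIVE
    default y

# Hard dependency: with !EXPERT the option cannot be enabled at
# all, and resolves to off for every consumer.
config ARM_SSBD
    bool "Speculative Store Bypass Disable"
    depends on HAS_ALTERNATIVE && EXPERT
    default y
```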
>>>
>>> Yeah, thanks for catching it, it is obviously a problem.
>>>
>>> My intention was just to "tag" somehow the options to EXPERT so that it
>>> would show on the menu. Maybe a better, simpler, way to do it is
>>> to add the word "EXPERT" to the one line prompt:
>>>
>>> config ARM_SSBD
>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>> +    bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>    depends on HAS_ALTERNATIVE
>>>    default y
>>>    help
>>>
>>>
>>> What do you think?
>>
>> While on the surface this may look like an improvement, I don't
>> see how it would actually help: If you read the Kconfig file
>> itself, the dependency is seen anyway. And on the menu I don't
>> see the point of telling someone who has enabled EXPERT that a
>> certain option is (or is not) an expert one. If they think
>> they're experts, so should they be treated. (It was, after all,
>> a deliberate decision to make enabling expert mode easier, and
>> hence easier to use for what one might consider not-really-
>> experts. I realize saying so may be considered tendentious; I
>> mean it in a purely technical sense, and I'd like to apologize
>> in advance to anyone not sharing this as a possible perspective
>> to take.)
>>
>> Plus, of course, the addition of such (EXPERT) markers to
>> future options' prompts is liable to get forgotten now and then,
>> so sooner or later we'd likely end up with an inconsistent
>> mixture anyway.
>
> I tend to agree with you on everything you wrote. The fundamental issue
> is that we are (mis)using EXPERT to tag features that are experimental,
> as defined by SUPPORT.md.
>
> It is important to be able to distinguish clearly at the kconfig level
> options that are (security) supported from options that are
> unsupported/experimental. Today the only way to do it is with EXPERT
> which is not great because:
>
> - it doesn't convey the idea that it is for unsupported/experimental
>  features
> - if you want to enable one unsupported feature, it is not clear what
>  you have to do
>
> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
> the Kconfig menu?
>
> It would make it clearer that by enabling UNSUPPORTED you are going to
> get a configuration that is not security supported.

If going down this path, there should be one, authoritative, in-tree definition of feature-level support from which Kconfig (build-time policy enforcement) and SUPPORT.md (documentation) can be derived.  Later, even run-time enforcement can be similarly classified.  FuSA may also wish for documented policy to align with enforcement.

Rich

> And ideally we would
> also tag features like ACPI as UNSUPPORTED as I suggested above.
>


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 20:32:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 20:32:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18695.43687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka2yB-0000iI-LZ; Tue, 03 Nov 2020 20:31:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18695.43687; Tue, 03 Nov 2020 20:31:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka2yB-0000iB-IF; Tue, 03 Nov 2020 20:31:55 +0000
Received: by outflank-mailman (input) for mailman id 18695;
 Tue, 03 Nov 2020 20:31:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3DNo=EJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1ka2y9-0000i6-Ln
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 20:31:53 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4df526e7-acc7-48ef-abe8-4c17ac596221;
 Tue, 03 Nov 2020 20:31:52 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-541-elD9pKsNO8aUM0MPiXE0Dw-1; Tue, 03 Nov 2020 15:31:50 -0500
Received: by mail-wm1-f72.google.com with SMTP id t201so249853wmt.1
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 12:31:50 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id x10sm26462274wrp.62.2020.11.03.12.31.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Nov 2020 12:31:48 -0800 (PST)
X-Inumbo-ID: 4df526e7-acc7-48ef-abe8-4c17ac596221
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604435512;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=W2xoQMtcZ45M2FDoabAb+oIuMsPmr+LIxDstfsaLsQQ=;
	b=PaFtiWABs/hju6rEiVxiwFwvi8p0nxvCyqd6C+UnXMW6h9Ui7sudt53kCUL5m6hNcBdJjP
	ADhzP26AYryahEj0+EJcZgag9AFIDo7LsBoTtcAJBhUb8VuWcIc0d2dAqJzJlISc7PFSsL
	/85ec3wJIOi51sZevWxlXXD9vih4GyA=
X-MC-Unique: elD9pKsNO8aUM0MPiXE0Dw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=W2xoQMtcZ45M2FDoabAb+oIuMsPmr+LIxDstfsaLsQQ=;
        b=ZOygKLayUn8dbz27XUEIDvqC4GvmERtb25NBtb6zUyWIYHSLRFWk3UUyVrpilWk/rI
         6jIZQjG46AZAgch1NYiIBkPp1kxLrXvKvldd7JpHAjirtOWBIcWsqwFKn2UG6eAQltHm
         9HI7C9FMOwQV57LDLHddJh+yiRBIGS+wE0KSrm60GNvJED9sxjTjgCp1DC5QaLBPAb+n
         jPenYT0xdoBA+AkJ23LPZswGuTJsXD5DyBrj7KkUo8REMUNrBFPe63jlX682QEasx6/u
         5p98AdmuCsxecJZsG0usiD75oH+reA//gVeft57YeRvdN5YIo9o6jQNCKKeAViPcLo7T
         ZeCw==
X-Gm-Message-State: AOAM5308FWoML/qu9FZWhkeqDIqEe7heL48oMNHEs+5Gba1Kk7PfPDJm
	JNN1EZrY5p3qJ/6Rc3ewqsZjKsw1SuQAFH74j6ZLtl54Uju7IUiB9fTVM6IXH1tZr4YNFNUbpyF
	w2ZB3zboOhHBpfPI37DRXwyAptok=
X-Received: by 2002:adf:e892:: with SMTP id d18mr29946474wrm.103.1604435509296;
        Tue, 03 Nov 2020 12:31:49 -0800 (PST)
X-Google-Smtp-Source: ABdhPJz5Kal2Q2BqN2crCt7Phj5MtwdE1YSjlg+L4Ckbm8BNES6hH2qy/addHtnYSbGKkq4WxUWSBQ==
X-Received: by 2002:adf:e892:: with SMTP id d18mr29946459wrm.103.1604435509125;
        Tue, 03 Nov 2020 12:31:49 -0800 (PST)
Subject: Re: [Qemu-devel] [PULL 09/36] 9p: simplify source file selection
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 "Daniel P . Berrange" <berrange@redhat.com>,
 Cornelia Huck <cohuck@redhat.com>, Greg Kurz <groug@kaod.org>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>
References: <1566284395-30287-1-git-send-email-pbonzini@redhat.com>
 <1566284395-30287-10-git-send-email-pbonzini@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <ab9a47b9-8ac5-c8f1-b035-b8b812551b3b@redhat.com>
Date: Tue, 3 Nov 2020 21:31:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <1566284395-30287-10-git-send-email-pbonzini@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi Paolo,

On 8/20/19 8:59 AM, Paolo Bonzini wrote:
> Express the complex conditions in Kconfig rather than Makefiles, since Kconfig
> is better suited at expressing dependencies and detecting contradictions.
> 
> Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  Kconfig.host        | 1 +
>  fsdev/Makefile.objs | 2 +-
>  hw/9pfs/Kconfig     | 5 +++++
>  3 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/Kconfig.host b/Kconfig.host
> index aec9536..bb6e116 100644
> --- a/Kconfig.host
> +++ b/Kconfig.host
> @@ -28,6 +28,7 @@ config VHOST_USER
>  
>  config XEN
>      bool
> +    select FSDEV_9P if VIRTFS

There is something odd with CONFIG_XEN, as it is used
to select both the accelerator and the hardware.

>  
>  config VIRTFS
>      bool
> diff --git a/fsdev/Makefile.objs b/fsdev/Makefile.objs
> index 24bbb3e..42cd70c 100644
> --- a/fsdev/Makefile.objs
> +++ b/fsdev/Makefile.objs
> @@ -1,6 +1,6 @@
>  # Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
>  # only pull in the actual 9p backend if we also enabled virtio or xen.
> -ifeq ($(call land,$(CONFIG_VIRTFS),$(call lor,$(CONFIG_VIRTIO_9P),$(CONFIG_XEN))),y)
> +ifeq ($(CONFIG_FSDEV_9P),y)
>  common-obj-y = qemu-fsdev.o 9p-marshal.o 9p-iov-marshal.o
>  else
>  common-obj-y = qemu-fsdev-dummy.o
> diff --git a/hw/9pfs/Kconfig b/hw/9pfs/Kconfig
> index 8c5032c..3ae5749 100644
> --- a/hw/9pfs/Kconfig
> +++ b/hw/9pfs/Kconfig
> @@ -1,4 +1,9 @@
> +config FSDEV_9P
> +    bool
> +    depends on VIRTFS

Using "depends on VIRTFS && 9PFS" instead helps reduce the
link failures seen when building with --without-default-devices.
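
In other words, the suggestion would make the symbol read roughly like this (a sketch, not a tested patch):

```kconfig
config FSDEV_9P
    bool
    depends on VIRTFS && 9PFS
```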

> +
>  config VIRTIO_9P
>      bool
>      default y
>      depends on VIRTFS && VIRTIO
> +    select FSDEV_9P

Here I used "depends on FSDEV_9P && VIRTIO" but this
doesn't look right...

Is it possible to include "config-devices.h" in
hw/xen/xen-legacy-backend.c to use CONFIG_9PFS?

xen_be_register_common() unconditionally calls:

  xen_be_register("9pfs", &xen_9pfs_ops);

As I don't have much knowledge of Xen & 9pfs, I'm a bit
lost here (regarding the dependency order).

Thanks,

Phil.



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 21:09:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 21:09:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18708.43700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka3YH-0003Vc-E7; Tue, 03 Nov 2020 21:09:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18708.43700; Tue, 03 Nov 2020 21:09:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka3YH-0003VV-9j; Tue, 03 Nov 2020 21:09:13 +0000
Received: by outflank-mailman (input) for mailman id 18708;
 Tue, 03 Nov 2020 21:09:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gHuI=EJ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1ka3YF-0003VQ-8v
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 21:09:11 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2015d19a-33f9-4d16-9fa4-7c529e393609;
 Tue, 03 Nov 2020 21:09:10 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id a7so24142988lfk.9
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 13:09:09 -0800 (PST)
Received: from [100.64.112.11] (ll-18.209.223.85.sovam.net.ua. [85.223.209.18])
 by smtp.gmail.com with ESMTPSA id k23sm4263023lfm.144.2020.11.03.13.09.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Nov 2020 13:09:07 -0800 (PST)
X-Inumbo-ID: 2015d19a-33f9-4d16-9fa4-7c529e393609
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=cVJnL9ewxvYcGID+vmkXVywCXdNU0ReBKwTW336NY+8=;
        b=JdQGgjULIgafS8NgexgfnaV3QmIIAJ1Y05EcQ6vDxxcEEq0tLg+CBRdLBhTgr9bP3K
         ACsY5PLgFcy3s1d1l7FfTUuU4xtM8wbN2TZNm7UctoTO+FH81CIl7pbpMvEfANtVMJXf
         7tSo+Tgs08VNE0XnLw3eUsi3yi60thBZlMnjtAIqaY3+aFcmXmLhu4YwInIobwA79yxp
         1L5X5Ew6An8uH6nZ8S8s0Eext/KsVMZ+Su3hZBmgLK1KLsZ7tqbQcF/41ACsifnJKwYL
         xVG87gT7TzS5uB9te/EFa0XVML3lNied4X7WrLGVDviO8vM8/0RazYWfFHmLK+lLeaUs
         +4VQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=cVJnL9ewxvYcGID+vmkXVywCXdNU0ReBKwTW336NY+8=;
        b=b1PgM6Ka/Q0ADGb7RKrh2/H+cdte97bb5ThsxXfm2MOOOqTP1Uzro5PB+olS/54Cea
         ajLoTbwj+p+tWYUQJiDyI+IlBtLzDNcx3POP+GSFap/6lBeNVI5IWfdJY/Ihtk4WmO91
         pkuqc0a1U3xCY1i4uOLZhZFB03CiYv0w4gwxgyI4pUzO5lZU7A7SHQG//Ay0o0I/Mn4d
         O+gUD9e25cB1+Hup8Hyvyx7gwWopdsGx1vwY48AI1YVkE9sKqE6bA1XgN+T0eALPzXKa
         rOLZ5KZN0KGKf0WHPnDtJLwk3vxHvsFTrgncFn5E22PEhH5WMKevnlpKJiIoqKN99bnA
         hKmQ==
X-Gm-Message-State: AOAM532CWCnqfEWV8SJhv7iSyAHItEwjKCyEEMUXBBaOhP5/EL3LbOZJ
	kbDPOWgJSt1VHL0TDjmdv98=
X-Google-Smtp-Source: ABdhPJzvh9Qc+v40cQh7jyFuPp1pn7qBhVfemeuTrJZGoZWRgdiaxHrLgPVr9/EvOjphJ7rmMgXcsw==
X-Received: by 2002:a05:6512:3193:: with SMTP id i19mr8927230lfe.80.1604437748765;
        Tue, 03 Nov 2020 13:09:08 -0800 (PST)
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
 <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
 <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
 <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com>
 <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
 <CAA93ih3LcHPLbL7dPof-OAbM2HRJv0neQtMuYDYcYAOYDhVbKA@mail.gmail.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <d1a505ae-58f7-f096-eaaf-d249901c1861@gmail.com>
Date: Tue, 3 Nov 2020 23:09:06 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAA93ih3LcHPLbL7dPof-OAbM2HRJv0neQtMuYDYcYAOYDhVbKA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 03.11.20 16:30, Masami Hiramatsu wrote:
> Hi Oleksandr,

Hi Masami


>
> Thanks for sharing the virtio-disk server; I also tested with a real USB disk.
>
> In config file:
>
> virtio = 1
> vdisk = [ 'backend=Domain-0, disks=ro:/dev/sda1' ]
>
> And it can be mounted in DomU
>
> [    2.892874] virtio_blk virtio0: [vda] 1875382927 512-byte logical
> blocks (960 GB/894 GiB)
> [    2.892925] vda: detected capacity change from 0 to 960196058624
> ...
> root@develbox:~# cat /proc/partitions
> major minor  #blocks  name
>
>   254        0  937691463 vda
> ...
> root@develbox:~# mount /dev/vda /mnt/
> [  192.260968] EXT4-fs (vda): mounted filesystem with ordered data
> mode. Opts: (null)
> mount: /mnt: WARNING: source write-protected, mounted read-only.
>
> So the "ro" flag also works correctly.
> Great!
>
> Thank you!

Sounds great. Thank you for testing!


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 03 21:15:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 21:15:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18714.43711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka3eA-0004Oh-2w; Tue, 03 Nov 2020 21:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18714.43711; Tue, 03 Nov 2020 21:15:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka3eA-0004Oa-09; Tue, 03 Nov 2020 21:15:18 +0000
Received: by outflank-mailman (input) for mailman id 18714;
 Tue, 03 Nov 2020 21:15:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zFvq=EJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ka3e9-0004OV-GP
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 21:15:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58a7ec15-27f4-4506-b0f8-562db1e86a25;
 Tue, 03 Nov 2020 21:15:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0A405205ED;
 Tue,  3 Nov 2020 21:15:15 +0000 (UTC)
X-Inumbo-ID: 58a7ec15-27f4-4506-b0f8-562db1e86a25
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604438115;
	bh=AGr0vX1zbmHHU+V7TxZSTvg1FCY0YXQmEyXlBjDXxKk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ND/oASvlXSWl4Zq9BmkcuWjQQgyX3hx9wd32ljNcJryTtBAn9E4JeFGeSOYMapHVw
	 nGD1GEH5tv2iMv3p2KXvuSwjjfwdq3aojeOcI4moe0t5lXe+LV4lg6fBnZhM9E3koF
	 t05yMTJrs60izMpL1FSDsoFpGgvMY0sb3CJu3fIg=
Date: Tue, 3 Nov 2020 13:15:13 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rich Persaud <persaur@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org, 
    julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org, 
    Daniel DeGraaf <dgdegra@tycho.nsa.gov>, 
    Daniel Smith <dpsmith@apertussolutions.com>, 
    Roman Shaposhnik <roman@zededa.com>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
In-Reply-To: <E359BD65-2917-4087-A6E1-0AD5521CF823@gmail.com>
Message-ID: <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s> <E359BD65-2917-4087-A6E1-0AD5521CF823@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1293458912-1604438115=:5812"


--8323329-1293458912-1604438115=:5812
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 3 Nov 2020, Rich Persaud wrote:
> On Nov 3, 2020, at 14:37, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Tue, 3 Nov 2020, Jan Beulich wrote:
> >>> On 02.11.2020 22:41, Stefano Stabellini wrote:
> >>> On Mon, 2 Nov 2020, Jan Beulich wrote:
> >>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
> >>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
> >>>>>      SBSA Generic UART implements a subset of ARM PL011 UART.
> >>>>> 
> >>>>> config ARM_SSBD
> >>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
> >>>>> -    depends on HAS_ALTERNATIVE
> >>>>> +    bool "Speculative Store Bypass Disable"
> >>>>> +    depends on HAS_ALTERNATIVE && EXPERT
> >>>>>    default y
> >>>> 
> >>>> At the example of this, I'm afraid when the default isn't "n"
> >>>> (or there's no default directive at all, as ought to be
> >>>> equivalent to and preferred over "default n"), such a
> >>>> transformation is not functionally identical: Before your
> >>>> change, with !EXPERT this option defaults to y. After your
> >>>> change this option is unavailable (which resolves to it being
> >>>> off for all consuming purposes).
> >>>> 
> >>>> IOW there are reasons to have "if ..." attached to the prompts
> >>>> (for this construct indeed only making the prompt conditional,
> >>>> not the entire option), but there are also cases where the
> >>>> cleanup you do is indeed desirable / helpful.
> >>> 
> >>> Yeah, thanks for catching it, it is obviously a problem.
> >>> 
> >>> My intention was just to "tag" somehow the options to EXPERT so that it
> >>> would show on the menu. Maybe a better, simpler, way to do it is
> >>> to add the word "EXPERT" to the one line prompt:
> >>> 
> >>> config ARM_SSBD
> >>> -    bool "Speculative Store Bypass Disable" if EXPERT
> >>> +    bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
> >>>    depends on HAS_ALTERNATIVE
> >>>    default y
> >>>    help
> >>> 
> >>> 
> >>> What do you think?
> >> 
> >> While on the surface this may look like an improvement, I don't
> >> see how it would actually help: If you read the Kconfig file
> >> itself, the dependency is seen anyway. And on the menu I don't
> >> see the point of telling someone who has enabled EXPERT that a
> >> certain option is (or is not) an expert one. If they think
> >> they're experts, so should they be treated. (It was, after all,
> >> a deliberate decision to make enabling expert mode easier, and
> >> hence easier to use for what one might consider not-really-
> >> experts. I realize saying so may be considered tendentious; I
> >> mean it in a purely technical sense, and I'd like to apologize
> >> in advance to anyone not sharing this as a possible perspective
> >> to take.)
> >> 
> >> Plus, of course, the addition of such (EXPERT) markers to
> >> future options' prompts is liable to get forgotten now and then,
> >> so sooner or later we'd likely end up with an inconsistent
> >> mixture anyway.
> > 
> > I tend to agree with you on everything you wrote. The fundamental issue
> > is that we are (mis)using EXPERT to tag features that are experimental,
> > as defined by SUPPORT.md.
> > 
> > It is important to be able to distinguish clearly at the kconfig level
> > options that are (security) supported from options that are
> > unsupported/experimental. Today the only way to do it is with EXPERT
> > which is not great because:
> > 
> > - it doesn't convey the idea that it is for unsupported/experimental
> >  features
> > - if you want to enable one unsupported feature, it is not clear what
> >  you have to do
> > 
> > So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
> > the Kconfig menu?
> > 
> > It would make it clearer that by enabling UNSUPPORTED you are going to
> > get a configuration that is not security supported.
> 
> If going down this path, there should be one, authoritative, in-tree definition of feature-level support from which Kconfig (build-time policy enforcement) and SUPPORT.md (documentation) can be derived.  Later, even run-time enforcement can be similarly classified.  FuSA may also wish for documented policy to align with enforcement.

The goal is trying to align Kconfig and SUPPORT.md by clarifying the
EXPERT option, which today is a poor implementation of "experimental".

There could be further improvements down the line, for instance we could
taint Xen when UNSUPPORTED is selected and even have separate kconfig
options for UNSUPPORTED, EXPERIMENTAL, and TECHPREVIEW. FuSa is likely
going to need its own SAFETY option too. Like you suggested, we could
even have a single source of feature-level support information for both
Kconfig and SUPPORT.md.

However, I didn't want to increase the scope of this one patch. For now,
it would be a good start if we replaced EXPERT with something that covers
anything not security supported, for which UNSUPPORTED looks like a good
name.
--8323329-1293458912-1604438115=:5812--
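
[Archive note: the Kconfig semantics Jan describes above can be sketched with a hypothetical FOO/BAR pair — names are illustrative only, not part of the patch. A conditional prompt hides the question but keeps the default; a hard dependency disables the option entirely.]

```kconfig
# Conditional prompt: with EXPERT=n the prompt is hidden, but the
# option still exists and takes its default, so FOO=y ends up in
# the generated .config.
config FOO
	bool "Foo feature" if EXPERT
	default y

# Hard dependency: with EXPERT=n the option does not exist at all,
# so BAR is unset ("off for all consuming purposes") despite the
# "default y" line.
config BAR
	bool "Bar feature"
	depends on EXPERT
	default y
```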


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 21:42:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 21:42:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18726.43724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka44p-0006yq-FD; Tue, 03 Nov 2020 21:42:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18726.43724; Tue, 03 Nov 2020 21:42:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka44p-0006yj-CF; Tue, 03 Nov 2020 21:42:51 +0000
Received: by outflank-mailman (input) for mailman id 18726;
 Tue, 03 Nov 2020 21:42:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9NW0=EJ=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1ka44o-0006yb-3J
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 21:42:50 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63cf9b9f-aa78-4083-8b83-91223c99b922;
 Tue, 03 Nov 2020 21:42:49 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id p3so16719348qkk.7
 for <xen-devel@lists.xenproject.org>; Tue, 03 Nov 2020 13:42:49 -0800 (PST)
Received: from [100.64.72.4] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id i70sm60359qke.11.2020.11.03.13.42.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 03 Nov 2020 13:42:48 -0800 (PST)
X-Inumbo-ID: 63cf9b9f-aa78-4083-8b83-91223c99b922
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:date:message-id
         :references:cc:in-reply-to:to;
        bh=wWBQuSmJ0RTFNuUYO+pCJ1rcL0RhxzAkjUXcspA1ppE=;
        b=rQM8Tn2pN8ZyBdUJRsjhDny2ytxhm2MuAbXN780M+BUUf7eEc8RMXpxkP4iH9f+li2
         iTmhQ0eyat3WhttVH1KCaGBX8PqSyLlNZLaKSNuv/nVOKsF+sSU6nqtz8qvjUt25zKsz
         AQfCU40VxqO/OTxTP1hOa/9x+/4KQRIb6GjB7xW6w9lwAO/xHA8MIyOCygsipqTBr+C8
         wQqhCitx3klBBm2o0qZeRltgXa1MzbNWyuPxMgUkOko78X10ye/oEzo8SHhFpU/H99Kr
         eJ6yhz1wggBID9O5kN8dcXOVTqO9T+BaGimWNwwBLVF+pJPsb04WHHgYd4Qn4X0240nn
         uaNg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:date:message-id:references:cc:in-reply-to:to;
        bh=wWBQuSmJ0RTFNuUYO+pCJ1rcL0RhxzAkjUXcspA1ppE=;
        b=SBURcn8Sx/DHw8y0L9kPzpeT8m4caMpVvYK1tpo1zubiQxWD/C7zO/i27EJziO69A1
         R+1CryUMah8qZClZQ4aXCrf/CfETZFxYfLIzsRALR2Kn+FvYDhGAdj9Lh+gZvXQz9hbb
         HUDa2B1IXCTorKJxmxOyh5PRDJdZdhaJJ7EFwq04SZ4k0OFX6/rPPd29tVHdY4nA87RG
         sUZqBqVfAr/kjNyrboU6i3ND+FuXrli8q4WcqubNJywQqel+inS1rJdpKwug40UDmDSm
         u7UbdlCxuv68e792BBLW4e4xzimoeg3COPdZocNzVjge3Qe9/4EznCBzypF3Boe94HpT
         EPMA==
X-Gm-Message-State: AOAM5337iR+UUAtj7C5c/Nru6/V9R+HJmOohl2HpGW75JJcqavtKqEzt
	wydN/WCGf3t/spwkVjR+aI0LC5p5mQXgZw==
X-Google-Smtp-Source: ABdhPJzY7pCaUi5h6Dxzb8SA6w24n13qI3HvvmQ9RdK18UKWhEaJ1pPJ4lgEkjsmFloKR+enOvewWQ==
X-Received: by 2002:a37:652:: with SMTP id 79mr21286241qkg.163.1604439768588;
        Tue, 03 Nov 2020 13:42:48 -0800 (PST)
Received: from [100.64.72.4] ([173.245.215.240])
        by smtp.gmail.com with ESMTPSA id i70sm60359qke.11.2020.11.03.13.42.47
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Tue, 03 Nov 2020 13:42:48 -0800 (PST)
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
Date: Tue, 3 Nov 2020 16:42:47 -0500
Message-Id: <2A013D8F-C5C3-44E9-8113-1932434D9DDA@gmail.com>
References: <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
Cc: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, George.Dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org,
 Daniel DeGraaf <dgdegra@tycho.nsa.gov>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Roman Shaposhnik <roman@zededa.com>,
 =?utf-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
In-Reply-To: <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
To: Stefano Stabellini <sstabellini@kernel.org>
X-Mailer: iPhone Mail (18A8395)



> On Nov 3, 2020, at 16:15, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 3 Nov 2020, Rich Persaud wrote:
>>> On Nov 3, 2020, at 14:37, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>>>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>>>     SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>>> 
>>>>>>> config ARM_SSBD
>>>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>>>> -    depends on HAS_ALTERNATIVE
>>>>>>> +    bool "Speculative Store Bypass Disable"
>>>>>>> +    depends on HAS_ALTERNATIVE && EXPERT
>>>>>>>   default y
>>>>>> 
>>>>>> At the example of this, I'm afraid when the default isn't "n"
>>>>>> (or there's no default directive at all, as ought to be
>>>>>> equivalent to and preferred over "default n"), such a
>>>>>> transformation is not functionally identical: Before your
>>>>>> change, with !EXPERT this option defaults to y. After your
>>>>>> change this option is unavailable (which resolves to it being
>>>>>> off for all consuming purposes).
>>>>>> 
>>>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>>>> (for this construct indeed only making the prompt conditional,
>>>>>> not the entire option), but there are also cases where the
>>>>>> cleanup you do is indeed desirable / helpful.
>>>>> 
>>>>> Yeah, thanks for catching it, it is obviously a problem.
>>>>> 
>>>>> My intention was just to "tag" somehow the options to EXPERT so that it
>>>>> would show on the menu. Maybe a better, simpler, way to do it is
>>>>> to add the word "EXPERT" to the one line prompt:
>>>>> 
>>>>> config ARM_SSBD
>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>> +    bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>>>   depends on HAS_ALTERNATIVE
>>>>>   default y
>>>>>   help
>>>>> 
>>>>> 
>>>>> What do you think?
>>>> 
>>>> While on the surface this may look like an improvement, I don't
>>>> see how it would actually help: If you read the Kconfig file
>>>> itself, the dependency is seen anyway. And on the menu I don't
>>>> see the point of telling someone who has enabled EXPERT that a
>>>> certain option is (or is not) an expert one. If they think
>>>> they're experts, so should they be treated. (It was, after all,
>>>> a deliberate decision to make enabling expert mode easier, and
>>>> hence easier to use for what one might consider not-really-
>>>> experts. I realize saying so may be considered tendentious; I
>>>> mean it in a purely technical sense, and I'd like to apologize
>>>> in advance to anyone not sharing this as a possible perspective
>>>> to take.)
>>>> 
>>>> Plus, of course, the addition of such (EXPERT) markers to
>>>> future options' prompts is liable to get forgotten now and then,
>>>> so sooner or later we'd likely end up with an inconsistent
>>>> mixture anyway.
>>> 
>>> I tend to agree with you on everything you wrote. The fundamental issue
>>> is that we are (mis)using EXPERT to tag features that are experimental,
>>> as defined by SUPPORT.md.
>>> 
>>> It is important to be able to distinguish clearly at the kconfig level
>>> options that are (security) supported from options that are
>>> unsupported/experimental. Today the only way to do it is with EXPERT
>>> which is not great because:
>>> 
>>> - it doesn't convey the idea that it is for unsupported/experimental
>>> features
>>> - if you want to enable one unsupported feature, it is not clear what
>>> you have to do
>>> 
>>> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
>>> the Kconfig menu?
>>> 
>>> It would make it clearer that by enabling UNSUPPORTED you are going to
>>> get a configuration that is not security supported.
>> 
>> If going down this path, there should be one, authoritative, in-tree definition of feature-level support from which Kconfig (build-time policy enforcement) and SUPPORT.md (documentation) can be derived.  Later, even run-time enforcement can be similarly classified.  FuSA may also wish for documented policy to align with enforcement.
> 
> The goal is trying to align Kconfig and SUPPORT.md by clarifying the
> EXPERT option, which today is a poor implementation of "experimental".
> 
> There could be further improvements down the line, for instance we could
> taint Xen when UNSUPPORTED is selected and even have separate kconfig
> options for UNSUPPORTED, EXPERIMENTAL, and TECHPREVIEW. FuSa is likely
> going to need its own SAFETY option too. Like you suggested, we could
> even have a single source of feature-level support information for both
> Kconfig and SUPPORT.md.
> 
> However, I didn't want to increase the scope of this one patch. For now,
> it would be a good start if we replaced EXPERT with something that covers
> anything not security supported, for which UNSUPPORTED looks like a good
> name.

Kconfig UI is aimed at humans. There is a gulf of semantic difference between EXPERT (human assessment based on local expertise and contextual priorities) and UNSUPPORTED (binary policy assertion independent of local context, which can be parsed by computer).  We also have the supported-with-caveat category of features in SUPPORT.md.

It's not clear that every human-interpreted instance of EXPERT should be replaced by UNSUPPORTED, even if some instances may qualify.  It may be better to have a separate patch which introduces a comprehensive set of labels with matching policy statements.

Another option is to embed/copy SUPPORT.md feature support policy text into the description field of relevant Kconfig menu options.  A human expert can then consider support criteria in their decision, informing consent for use of "supported" (with/without caveats) and "unsupported" features.

Rich


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 23:43:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 23:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18745.43736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka5xU-0000QR-9t; Tue, 03 Nov 2020 23:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18745.43736; Tue, 03 Nov 2020 23:43:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka5xU-0000QK-6h; Tue, 03 Nov 2020 23:43:24 +0000
Received: by outflank-mailman (input) for mailman id 18745;
 Tue, 03 Nov 2020 23:43:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hh/q=EJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ka5xS-0000QC-UE
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 23:43:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14cc51be-554e-4998-aab2-6344b5ce61fc;
 Tue, 03 Nov 2020 23:43:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ka5xN-0007w7-3R; Tue, 03 Nov 2020 23:43:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ka5xM-0002A9-Ni; Tue, 03 Nov 2020 23:43:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ka5xM-0003D0-Mz; Tue, 03 Nov 2020 23:43:16 +0000
X-Inumbo-ID: 14cc51be-554e-4998-aab2-6344b5ce61fc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NKCDcBvx7c5s3WzhyRTXXRcA8meaSYi2cSpMb80AUNg=; b=lLPlZRkg+WDuBKokPWJivS4kNa
	nw0G19gjlKxg/ddMLdPUSvsXk6rwfo+90IVOwang/LBm5DuOFjqnnX1dcpM8NVo+EDP6ZXt6/0czW
	dnAYveEVv1sYEpcFiHV3ExGPpGHdvN4Ch5B7J6uYxZt8DeAJzPmGUMuR+7jLJMr88j14=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156386-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156386: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ad262888993f795db68fd7c2bdfa72f467fe0096
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 03 Nov 2020 23:43:16 +0000

flight 156386 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156386/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ad262888993f795db68fd7c2bdfa72f467fe0096
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   75 days
Failing since        152659  2020-08-21 14:07:39 Z   74 days  167 attempts
Testing same since   156386  2020-11-03 16:37:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)

Not pushing.

(No revision log; it would be 59415 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 03 23:55:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 03 Nov 2020 23:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18752.43751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka69Z-0001QU-MI; Tue, 03 Nov 2020 23:55:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18752.43751; Tue, 03 Nov 2020 23:55:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka69Z-0001QN-IY; Tue, 03 Nov 2020 23:55:53 +0000
Received: by outflank-mailman (input) for mailman id 18752;
 Tue, 03 Nov 2020 23:55:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T6cW=EJ=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
 id 1ka69X-0001QH-Iq
 for xen-devel@lists.xenproject.org; Tue, 03 Nov 2020 23:55:51 +0000
Received: from GBR01-LO2-obe.outbound.protection.outlook.com (unknown
 [40.107.10.97]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1f7b570-775a-4d41-8338-aac8e921dc2b;
 Tue, 03 Nov 2020 23:55:47 +0000 (UTC)
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:83::20)
 by LNXP265MB1562.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:7a::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Tue, 3 Nov
 2020 23:55:45 +0000
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b]) by LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b%6]) with mapi id 15.20.3499.032; Tue, 3 Nov 2020
 23:55:44 +0000
Received: from broadband.bt.com (2a00:23c6:751d:7701:1f1a:39af:4235:7681) by
 LO2P265CA0468.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a2::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.19 via Frontend Transport; Tue, 3 Nov 2020 23:55:44 +0000
X-Inumbo-ID: a1f7b570-775a-4d41-8338-aac8e921dc2b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=acLiPmubFY7zTLVVRmqU8MbAafXqy45Xb+POs5cdb5+trxGF+crqKiuT/Infpum1xHhm5xlNwPkaxn7mcYSG1xHYgUZdb5D2CGlXthqyfaQS0zuGuGggmPUjmlAxtNEfx5AKcbpFtp8zupOmAMteWyyBLyZH+tXTyNQg0XI5Y50RZRtyPjPd+2qwr4iCBBrOL3VaALpfeihzr1G6tiwzWSzFZA4AIUdhgEI/EOnFtXW/vy5WNt3l41os1S2aKc3qz+QMFmiGcISMz1QPJ2E+6uvDZAG6Rpcf4hQ2EiEDFKDdznvRyZpcRzaG4CAhlyEXHpHJBpmtB1SnJ4qL1SqUOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RJHnJ/3supNc9icQhZUh9ry+zzXOutWcNz2oojKU6mM=;
 b=UFVo/yyECTYQCSZy+oFSjk7gbfZnsq0AtxsRpvJqCCvIWcmmYOdTRxJ5E4qPTgB6sv4VKXLgpN9TfgDsSN6uA20TVZQJdpwUzpwCQTOPNl2WrVGhbng2JRTzHZNBCMzlB5IyBQaJcCFK/M+7VpiPB75oHV7rorN4pAMUI6aKB4uxr1/SMJqmVqaTJdgDYTTa5WfXe35eU8UwTTxVAxfoO75rlYPU4f+abimZV/SkghMoNz2MuLz2/ew8CPMWuXiW6mYBUoP/jl4QPmU4KrllFel1hu4kc677HBMaoFpAPKWx0UTSwU4VCihPkG3FCGyAwid3aIXvMSvmmgH/11H8XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=durham.ac.uk; dmarc=pass action=none header.from=durham.ac.uk;
 dkim=pass header.d=durham.ac.uk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=durhamuniversity.onmicrosoft.com;
 s=selector2-durhamuniversity-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RJHnJ/3supNc9icQhZUh9ry+zzXOutWcNz2oojKU6mM=;
 b=CPIlJopdcOrXzdf8qGzK2rdbeSwyOXVbDmjWSl2FFJqEhBc+fJ48maY1+l+BlDhlthp+6Nfc6dGXuCy1YXT1XIKZU5fMoCjAViJL1dWul343NWDOqeErKbKTEfn2bo5nd6Q4oympC3XC4RKBQQTFlaBv7XXIyXtNArC7oajB6Xc=
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=durham.ac.uk;
Date: Tue, 3 Nov 2020 23:55:41 +0000 (GMT)
From: Michael Young <m.a.young@durham.ac.uk>
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Xen 4.13.2 released
In-Reply-To: <ed219f15-479b-5d06-c835-eb4f4c64db3a@suse.com>
Message-ID: <a391cfd1-be4a-add6-cd36-8bb254f9b43f@austen3.home>
References: <ed219f15-479b-5d06-c835-eb4f4c64db3a@suse.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed
X-Originating-IP: [2a00:23c6:751d:7701:1f1a:39af:4235:7681]
X-ClientProxiedBy: LO2P265CA0468.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::24) To LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:83::20)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from broadband.bt.com (2a00:23c6:751d:7701:1f1a:39af:4235:7681) by LO2P265CA0468.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a2::24) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend Transport; Tue, 3 Nov 2020 23:55:44 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 129c6593-d51d-45fe-a314-08d88053fa5e
X-MS-TrafficTypeDiagnostic: LNXP265MB1562:
X-Microsoft-Antispam-PRVS:
	<LNXP265MB15622DBFC3356A8696D235CB87110@LNXP265MB1562.GBRP265.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	G2eoMTLJlR18mF+0ZEidJAL5thP5cm5n8Z+rPZ8y5Ch9yLXXOch4yfAPJYlUd5hRgiVRoL/7/dDOnm0KplJiXxw0xSenJ0owKs1ZJKXmCt8QuAf7uSHKF772SA+XRwcCNkKfoLC2PYXxbb5Vz31LVZHUG7W2hTt9Eog/8JOgy5rpJ9JWFpBYDqvc1xNXZ1B6s2gt5x8Au/3HnrpMH/yWVYT+RAN2+YUj3p/yk0816AwE4SwcMVPfC0OUvkfdm3xW6y6m5DlNXfsaDUXSzecpqxz76BSBKUd7bVmI7gpT5tfOSZT/I8MqSshz2hxbEU9PPAVHx7iSkDmLssnp8XFn6RBeCDyOazGKyYUueH2kRPnkqp6OIBsHroIOQlWIomuxbpgsJLrgx6rNh0uAE9HBSg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(39850400004)(346002)(366004)(396003)(186003)(8936002)(6506007)(16526019)(52116002)(6666004)(2906002)(36756003)(31686004)(4744005)(6916009)(5660300002)(66476007)(83380400001)(31696002)(86362001)(966005)(6512007)(9686003)(66556008)(8676002)(478600001)(786003)(316002)(6486002)(7116003)(66946007)(4326008);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData:
	kHjSdiGzHO+zKsgPm1HWhs5itvde9W+W0Wv9WntQGoIiVOMjPXP67p4Q8BLYMWm+p9SAppM2vliPlpfnYyeCd6NFVSwT7s5RK0poef+4GcG+dv9fedU9gvXd9OZajMQnSIfaU2HqENwE5/sSdMNPg0mByjvP+hwmkhBwyT0ckCjobkwFgrTtw6AJZspMkkS54TGHnjKL1hPFobuhwsTGHLkwHen+qUMJnYXRXcgfsPJ4wh8LXOc31rag8A6U5nXz69LSs+COIo3l/cFlfgSoPAuDAInH8aWJAyYudJN8bUAH2bco9qaeayjqnmP5KKVCNVDHsIT1ZYNnh5lzRZugg6O9kz4TmyJYHbAs9gwQnjtkQH3nlyvssZ/aGgGrE5QPq2J8ZYPD6w9Se0BGIYOMpZMgEFt5wh42Juzzz5UML6GsCa0/2aWc2pfMZhfN9DRXTEtUyC2E2nbojGXkZRMWsiWErw1I4M4JhNsZd+iqwA3GsESudKTCn3ibhto2i4hlBXUERfBP/BgrJv6zLVCgE5RE45rGANyCb3/hGk55F6rH/rPn43CDJ0CV2nF4Phbx1J1uaU0e11IvU9HiMwcMw6AdYyVcrbn8iyT7i7Aa/5Pm3L8r7ECWnlRDza9Zv8wWP0YcDbeTCFvGgamheHANkfJvWh2AAy3Ie0+s4rhdhr9FUv2k/HHa+94FPgXynmrIKVIqiqvPNQSs7a6LMDN4tQ==
X-OriginatorOrg: durham.ac.uk
X-MS-Exchange-CrossTenant-Network-Message-Id: 129c6593-d51d-45fe-a314-08d88053fa5e
X-MS-Exchange-CrossTenant-AuthSource: LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Nov 2020 23:55:44.8589
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 7250d88b-4b68-4529-be44-d59a2d8a6f94
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 01gX0f4TP6wid9ajbk2dXH6wlHdSBT+CyBWfYZMxCWq0ddeWZaWsgJ13psl+nkmAJCC+LgYtY3fLcyrQGU4UHg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LNXP265MB1562

On Tue, 3 Nov 2020, Jan Beulich wrote:

> All,
>
> I am pleased to announce the release of Xen 4.13.2. This is available
> immediately from its git repository
> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.13
> (tag RELEASE-4.13.2) or from the XenProject download page
> https://xenproject.org/downloads/xen-project-archives/xen-project-4-13-series/xen-project-4-13-2/
> (where a list of changes can also be found).

Is the entry for XSA-335 correct on the download page? That was a qemu 
patch but I don't think it was included in 4.13.2.

 	Michael Young


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 02:17:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 02:17:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18766.43765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka8Lo-0001yI-TJ; Wed, 04 Nov 2020 02:16:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18766.43765; Wed, 04 Nov 2020 02:16:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka8Lo-0001yB-Ph; Wed, 04 Nov 2020 02:16:40 +0000
Received: by outflank-mailman (input) for mailman id 18766;
 Wed, 04 Nov 2020 02:16:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ka8Ln-0001xd-6F
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 02:16:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8e3326b-014c-4821-9106-cee1a1aad05f;
 Wed, 04 Nov 2020 02:16:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ka8Lf-0007oz-ET; Wed, 04 Nov 2020 02:16:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ka8Lf-00084D-3K; Wed, 04 Nov 2020 02:16:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ka8Lf-0000bs-2n; Wed, 04 Nov 2020 02:16:31 +0000
X-Inumbo-ID: d8e3326b-014c-4821-9106-cee1a1aad05f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0vBjXgE6d9WH9/Fkmy1V6yfVaYGeuGuWJVMZnkDHkH0=; b=kryR97trgnKJtHBtCDyZfnNioh
	jeLRGUCS7XuVt02N1CqzElqy4+uugl7SSKNdb5TVfEzIZLRIxmwc+uU6q4OSVtgeznZU6QN9oh5ho
	5NXd0Us5EeMtyX8i2nTMlqxO/ddHa9tNUBxe4HMmpX6WXrikdzh4mg7XZfEjYXywb0i4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156387-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156387: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ce2e33ba4163c66ff89d2c0f2a9a51214a122e27
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 02:16:31 +0000

flight 156387 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156387/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ce2e33ba4163c66ff89d2c0f2a9a51214a122e27
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   95 days
Failing since        152366  2020-08-01 20:49:34 Z   94 days  159 attempts
Testing same since   156387  2020-11-03 19:40:08 Z    0 days    1 attempts

------------------------------------------------------------
3418 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit2 broken
broken-step test-arm64-arm64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 653237 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 02:27:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 02:27:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18772.43777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka8WH-0002wk-UC; Wed, 04 Nov 2020 02:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18772.43777; Wed, 04 Nov 2020 02:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ka8WH-0002wd-RC; Wed, 04 Nov 2020 02:27:29 +0000
Received: by outflank-mailman (input) for mailman id 18772;
 Wed, 04 Nov 2020 02:27:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bmKO=EK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ka8WG-0002wY-Q2
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 02:27:28 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5dda95f-de45-4451-8a44-2376a0c7154e;
 Wed, 04 Nov 2020 02:27:27 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F3F2C21534;
 Wed,  4 Nov 2020 02:27:25 +0000 (UTC)
X-Inumbo-ID: e5dda95f-de45-4451-8a44-2376a0c7154e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604456846;
	bh=AAO/693nsK4aj5g61yL0xiPaCLcEqYbqVqiD/UMxyDg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Wbd2WmcxvzgXxQuvpgP0ltdv3CQkkb4HZwVP1XvmO1qWWyKPjV4NuF2TFZU0Y/GeV
	 A//QEjWq1xPg0Ygi3BWhqDBQ+b07XyL1toATNnFX3u9EFSEyBVe1uFRMJT8qBKJskk
	 HjFjiw1L4lmc/SsHy9PgS7i6m2Dc2aVkkf1ePccA=
Date: Tue, 3 Nov 2020 18:27:18 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@redhat.com>
cc: =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>, 
    qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, 
    Cornelia Huck <cohuck@redhat.com>, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    David Hildenbrand <david@redhat.com>, qemu-s390x@nongnu.org, 
    Fam Zheng <fam@euphon.net>, Richard Henderson <rth@twiddle.net>, 
    Matthew Rosato <mjrosato@linux.ibm.com>, Halil Pasic <pasic@linux.ibm.com>, 
    Thomas Huth <thuth@redhat.com>, 
    Wainer dos Santos Moschetta <wainersm@redhat.com>, 
    Christian Borntraeger <borntraeger@de.ibm.com>
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
In-Reply-To: <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
Message-ID: <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
References: <20201103164604.2692357-1-philmd@redhat.com> <20201103164604.2692357-3-philmd@redhat.com> <20201103165247.GT205187@redhat.com> <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com> <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-2447170-1604453887=:3264"
Content-ID: <alpine.DEB.2.21.2011031752410.3264@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2447170-1604453887=:3264
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2011031752411.3264@sstabellini-ThinkPad-T480s>

On Tue, 3 Nov 2020, Philippe Mathieu-Daudé wrote:
> I forgot to Cc the 9pfs & Xen maintainers, doing it now ;)
> 
> On 11/3/20 6:01 PM, Philippe Mathieu-Daudé wrote:
> > On 11/3/20 5:52 PM, Daniel P. Berrangé wrote:
> >> On Tue, Nov 03, 2020 at 05:46:03PM +0100, Philippe Mathieu-Daudé wrote:
> >>> We test './configure --without-default-devices' since commit
> >>> 20885b5b169 (".travis.yml: test that no-default-device builds
> >>> do not regress") in Travis-CI.
> >>>
> >>> As we prefer to use GitLab-CI, add the equivalent job there.
> >>>
> >>> One minor difference: the GitLab Ubuntu docker image has the
> >>> Xen devel packages installed. As it is automatically selected,
> >>> we need to disable it with the --disable-xen option, else the
> >>> build fails:
> >>>
> >>>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
> >>>   hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
> >>>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
> >>>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
> >>>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
> >>>   collect2: error: ld returned 1 exit status
> >>
> >> Surely this is a build bug we need to fix rather than ignore in CI ?
> > 
> > Well it predates this series, so nobody really cared
> > (thus I wonder if it makes sense to invest resources
> > there).
> > 
> > Anyway I can have a look after 5.2-rc1.

Actually I care about Xen and 9pfs support, it is one of the few
combinations that I use regularly and it is even enabled in the Xilinx
product I look after. But admittedly I don't test QEMU master as much as
I should. With the recent changes to the build system it is not very
surprising that there are some issues. It would be great to have a Xen
and 9pfs test in the gitlab CI-loop.


FYI I tried to build the latest QEMU on Alpine Linux 3.12 ARM64 and I
get:

  ninja: unknown tool 'query'

Even after rebuilding ninja master by hand. Any ideas? I don't know much
about ninja.
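The "unknown tool" message means whatever binary is on PATH as `ninja` does not implement the `-t query` subtool that something in the build invokes; older ninja releases and ninja-compatible replacements omit some subtools. As an assumption (not established in this thread), a quick check of what the local binary supports might look like:

```shell
# Hedged diagnostic for "ninja: unknown tool 'query'": ask the binary
# on PATH which subtools it implements ('ninja -t list' prints them)
# and report whether 'query' is among them.
if command -v ninja >/dev/null 2>&1; then
    ninja_tools=$(ninja -t list 2>/dev/null || true)
else
    ninja_tools=""
fi
case "$ninja_tools" in
    *query*) ninja_query=yes ;;
    *)       ninja_query=no  ;;
esac
echo "ninja -t query available: $ninja_query"
```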


So I gave up on that and spun up a Debian Buster x86 container for
this build. That one got past the "ninja: unknown tool 'query'" error.
The build completed without problems to the end.

Either way I can't reproduce the build error above.
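For reference, the build mode under discussion can be reproduced with the flags from the quoted commit message. This is a hedged dry-run sketch (it only prints the commands, since it assumes it is not being run inside a QEMU checkout); drop the `echo` in `run` to execute for real from a QEMU source tree:

```shell
# Dry-run sketch of the build under discussion. The flags come from the
# quoted commit message: --without-default-devices is the mode the CI
# job covers, and --disable-xen avoids the link failure against the
# 9pfs symbols (xen_9pfs_ops etc.) when Xen devel packages are present.
run() { echo "+ $*"; }   # replace 'echo' with actual execution to build
run ./configure --without-default-devices --disable-xen
run make
```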
--8323329-2447170-1604453887=:3264--


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 06:18:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 06:18:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18787.43790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaC7F-00064M-AZ; Wed, 04 Nov 2020 06:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18787.43790; Wed, 04 Nov 2020 06:17:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaC7F-00064F-74; Wed, 04 Nov 2020 06:17:53 +0000
Received: by outflank-mailman (input) for mailman id 18787;
 Wed, 04 Nov 2020 06:17:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gD9+=EK=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kaC7D-000644-3P
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 06:17:51 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 982dc919-27f7-41b6-9c1f-d9a924cb2002;
 Wed, 04 Nov 2020 06:17:48 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-10-2YpDW0klNAusxWLWodXMPg-1; Wed, 04 Nov 2020 01:17:44 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5825A801FDA;
 Wed,  4 Nov 2020 06:17:42 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-151.ams2.redhat.com [10.36.112.151])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E59F75B4D7;
 Wed,  4 Nov 2020 06:17:28 +0000 (UTC)
X-Inumbo-ID: 982dc919-27f7-41b6-9c1f-d9a924cb2002
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604470668;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xfkrGv1aZf1IL5GXKFktkszgcVsO83XbXo4z4Vxwg7M=;
	b=hzDDIW3khQquOZolkNc7Tctr/47KImxvqjYs2Ra9dbZCbd5qv7KjiEoGnBC4wu17d6vwl/
	gMYFGCF5V7jW5IFIlvKsJKnNnvFIF1besSsOVRIYcE/9X8lWuSoGuYJtsfypLHwLciQJog
	z7rGjHaRmP/cbzAPKxnaQxY60T5ciqI=
X-MC-Unique: 2YpDW0klNAusxWLWodXMPg-1
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
To: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Cc: =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>,
 qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Cornelia Huck <cohuck@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, David Hildenbrand <david@redhat.com>,
 qemu-s390x@nongnu.org, Fam Zheng <fam@euphon.net>,
 Richard Henderson <rth@twiddle.net>, Matthew Rosato
 <mjrosato@linux.ibm.com>, Halil Pasic <pasic@linux.ibm.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>
References: <20201103164604.2692357-1-philmd@redhat.com>
 <20201103164604.2692357-3-philmd@redhat.com>
 <20201103165247.GT205187@redhat.com>
 <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com>
 <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
 <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <9ac5e985-a701-f357-29fb-ef7975f5f2c2@redhat.com>
Date: Wed, 4 Nov 2020 07:17:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=thuth@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04/11/2020 03.27, Stefano Stabellini wrote:
[...]
> Actually I care about Xen and 9pfs support, it is one of the few
> combinations that I use regularly and it is even enabled in the Xilinx
> product I look after. But admittedly I don't test QEMU master as much as
> I should. With the recent changes to the build system it is not very
> surprising that there are some issues. It would be great to have a Xen
> and 9pfs test in the gitlab CI-loop.
> 
> 
> FYI I tried to build the latest QEMU on Alpine Linux 3.12 ARM64 and I
> get:
> 
>   ninja: unknown tool 'query'
> 
> Even after rebuilding ninja master by hand. Any ideas? I don't know much
> about ninja.
> 
> 
> So I gave up on that and spun up a Debian Buster x86 container for
> this build. That one got past the "ninja: unknown tool 'query'" error.
> The build completed without problems to the end.
> 
> Either way I can't reproduce the build error above.

Did you run "configure" with "--without-default-devices" ?

 Thomas




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 07:35:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 07:35:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18796.43802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDJQ-0004RV-W1; Wed, 04 Nov 2020 07:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18796.43802; Wed, 04 Nov 2020 07:34:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDJQ-0004RO-T0; Wed, 04 Nov 2020 07:34:32 +0000
Received: by outflank-mailman (input) for mailman id 18796;
 Wed, 04 Nov 2020 07:34:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaDJP-0004RJ-Ud
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 07:34:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3511bffa-4372-4e15-9b4a-132b0542cb09;
 Wed, 04 Nov 2020 07:34:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0B347AE1F;
 Wed,  4 Nov 2020 07:34:30 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kaDJP-0004RJ-Ud
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 07:34:32 +0000
X-Inumbo-ID: 3511bffa-4372-4e15-9b4a-132b0542cb09
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 3511bffa-4372-4e15-9b4a-132b0542cb09;
	Wed, 04 Nov 2020 07:34:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604475270;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vOeiraqo5k/9PCWGLJfcX0cIM7vQ7pneAS3SvoRlm80=;
	b=PJk5dFy37eEw7OICTWUK8jdJp3CguEztZ2s32si1ra6IxRA/c4fawX8qCSwixN9pR9Ck9g
	p584BDupGlYNai2OMXhSRr0N/JFOIQjwdmMgUuoV1dzWAH35xG4Dy2oz12YHQaXoHOTQlW
	olIBZqNzljCyKKZULoTNwRKS2EcLT7Y=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0B347AE1F;
	Wed,  4 Nov 2020 07:34:30 +0000 (UTC)
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org
References: <20201031002405.4545-1-sstabellini@kernel.org>
 <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
 <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s>
 <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com>
 <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e0842284-a894-1e0b-ffbe-484013acefa5@suse.com>
Date: Wed, 4 Nov 2020 08:34:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.2020 20:37, Stefano Stabellini wrote:
> On Tue, 3 Nov 2020, Jan Beulich wrote:
>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>  	  SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>  
>>>>>  config ARM_SSBD
>>>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>>>> -	depends on HAS_ALTERNATIVE
>>>>> +	bool "Speculative Store Bypass Disable"
>>>>> +	depends on HAS_ALTERNATIVE && EXPERT
>>>>>  	default y
>>>>
>>>> Taking this as an example, I'm afraid when the default isn't "n"
>>>> (or there's no default directive at all, as ought to be
>>>> equivalent to and preferred over "default n"), such a
>>>> transformation is not functionally identical: Before your
>>>> change, with !EXPERT this option defaults to y. After your
>>>> change this option is unavailable (which resolves to it being
>>>> off for all consuming purposes).
>>>>
>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>> (for this construct indeed only making the prompt conditional,
>>>> not the entire option), but there are also cases where the
>>>> cleanup you do is indeed desirable / helpful.
>>>
>>> Yeah, thanks for catching it, it is obviously a problem.
>>>
>>> My intention was just to "tag" somehow the options to EXPERT so that it
>>> would show on the menu. Maybe a better, simpler, way to do it is
>>> to add the word "EXPERT" to the one line prompt:
>>>
>>>  config ARM_SSBD
>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>> +	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>  	depends on HAS_ALTERNATIVE
>>>  	default y
>>>  	help
>>>
>>>
>>> What do you think?
>>
>> While on the surface this may look like an improvement, I don't
>> see how it would actually help: If you read the Kconfig file
>> itself, the dependency is seen anyway. And on the menu I don't
>> see the point of telling someone who has enabled EXPERT that a
>> certain option is (or is not) an expert one. If they think
>> they're experts, so should they be treated. (It was, after all,
>> a deliberate decision to make enabling expert mode easier, and
>> hence easier to use for what one might consider not-really-
>> experts. I realize saying so may be considered tendentious; I
>> mean it in a purely technical sense, and I'd like to apologize
>> in advance to anyone not sharing this as a possible perspective
>> to take.)
>>
>> Plus, of course, the addition of such (EXPERT) markers to
>> future options' prompts is liable to get forgotten now and then,
>> so sooner or later we'd likely end up with an inconsistent
>> mixture anyway.
> 
> I tend to agree with you on everything you wrote. The fundamental issue
> is that we are (mis)using EXPERT to tag features that are experimental,
> as defined by SUPPORT.md.
> 
> It is important to be able to distinguish clearly at the kconfig level
> options that are (security) supported from options that are
> unsupported/experimental. Today the only way to do it is with EXPERT
> which is not great because:
> 
> - it doesn't convey the idea that it is for unsupported/experimental
>   features
> - if you want to enable one unsupported feature, it is not clear what
>   you have to do
> 
> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
> the Kconfig menu?

If you mean this to be added to prompt texts, then yes, I'd view
this as helpful. However, ...

> It would make it clearer that by enabling UNSUPPORTED you are going to
> get a configuration that is not security supported. And ideally we would
> also tag features like ACPI as UNSUPPORTED as I suggested above.

... things will get uglier when (just a simple example) something
is supported on x86, but not on Arm.
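
A small Kconfig sketch may make the semantic difference discussed in this
thread concrete (the option names FOO/BAR and prompt texts are hypothetical,
not actual Xen options):

```kconfig
# Only the *prompt* is conditional: with !EXPERT the option still
# exists, just isn't shown in the menu, and silently takes its
# default (y).
config FOO
	bool "Foo feature" if EXPERT
	default y

# The whole option is conditional: with !EXPERT the option is
# unavailable and resolves to n for all consumers, despite "default y".
config BAR
	bool "Bar feature"
	depends on EXPERT
	default y
```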

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 07:48:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 07:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18807.43814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDWY-0005Sm-80; Wed, 04 Nov 2020 07:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18807.43814; Wed, 04 Nov 2020 07:48:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDWY-0005Sf-4I; Wed, 04 Nov 2020 07:48:06 +0000
Received: by outflank-mailman (input) for mailman id 18807;
 Wed, 04 Nov 2020 07:48:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaDWW-0005Sa-Gf
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 07:48:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 73f9da6f-d52d-4dd6-82a5-7dbdd1de9436;
 Wed, 04 Nov 2020 07:47:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E76C3AC2E;
 Wed,  4 Nov 2020 07:47:57 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kaDWW-0005Sa-Gf
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 07:48:04 +0000
X-Inumbo-ID: 73f9da6f-d52d-4dd6-82a5-7dbdd1de9436
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 73f9da6f-d52d-4dd6-82a5-7dbdd1de9436;
	Wed, 04 Nov 2020 07:47:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604476078;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=plaQH/rZx43UY7+AQcByuHKm7pVAFkzf6AKcuqTZLME=;
	b=s9GLsRBXWkiz0BeerND4XkhOOnEHvmVfylMLNHe4VGNLWcU3nwvQPSsyv2R8dxZ4Rssx4u
	ulMyGs+j+UJKcPkRY8Mt18k4wrZAd+ZjAy1j3ftp/2K/w5Ot0H9EzKZM9oVPfqoFRGAHp4
	zK0r6yzwyTVFahKZuekWk39pmUi1rAk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E76C3AC2E;
	Wed,  4 Nov 2020 07:47:57 +0000 (UTC)
Subject: Re: Xen 4.13.2 released
To: Michael Young <m.a.young@durham.ac.uk>,
 Anthony Perard <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>
References: <ed219f15-479b-5d06-c835-eb4f4c64db3a@suse.com>
 <a391cfd1-be4a-add6-cd36-8bb254f9b43f@austen3.home>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a3dfec9d-bb32-c1c5-c00e-ea95c62c9bde@suse.com>
Date: Wed, 4 Nov 2020 08:47:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <a391cfd1-be4a-add6-cd36-8bb254f9b43f@austen3.home>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.11.2020 00:55, Michael Young wrote:
> On Tue, 3 Nov 2020, Jan Beulich wrote:
>> I am pleased to announce the release of Xen 4.13.2. This is available
>> immediately from its git repository
>> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.13
>> (tag RELEASE-4.13.2) or from the XenProject download page
>> https://xenproject.org/downloads/xen-project-archives/xen-project-4-13-series/xen-project-4-13-2/
>> (where a list of changes can also be found).
> 
> Is the entry for XSA-335 correct on the download page? That was a qemu 
> patch but I don't think it was included in 4.13.2.

Interesting, thanks for pointing this out. The qemu-trad part,
albeit "just" a SUPPORT.md update, didn't even make it into
staging yet afaics. While this can perhaps be viewed as benign,
I'm concerned that the qemuu fix also doesn't look to have
landed in any of the branches yet, despite the version bump on
the staging/master branches just 5 days ago. Anthony, Stefano?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 07:56:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 07:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18816.43825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDf4-0006Ni-5j; Wed, 04 Nov 2020 07:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18816.43825; Wed, 04 Nov 2020 07:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDf4-0006Nb-2i; Wed, 04 Nov 2020 07:56:54 +0000
Received: by outflank-mailman (input) for mailman id 18816;
 Wed, 04 Nov 2020 07:56:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaDf2-0006NW-PG
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 07:56:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d472ea04-9487-49d3-8532-0a5c26d382ae;
 Wed, 04 Nov 2020 07:56:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4A4E2AC8C;
 Wed,  4 Nov 2020 07:56:51 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kaDf2-0006NW-PG
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 07:56:52 +0000
X-Inumbo-ID: d472ea04-9487-49d3-8532-0a5c26d382ae
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id d472ea04-9487-49d3-8532-0a5c26d382ae;
	Wed, 04 Nov 2020 07:56:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604476611;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=3gUC3iqXwFKmZS0Iit4je3TNjjqFocBQB9Zcxo+jsaI=;
	b=fLKbCdapDKvrkv6pnuRbpNE5a4mkOG7zywFRokAR74bAIVIXx/tT9gplQrY1lzzgcAkNbb
	RX5xQokRRT0Kq0i4Rg9+bPobPSwCwryrj2tHHOwnmC0GtqakeiTev/XFT96KfPWUb0ea4+
	2X/3TzjlnXQu5SeIQP1cMnGrDtQBCKI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 4A4E2AC8C;
	Wed,  4 Nov 2020 07:56:51 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86/PV: make post-migration page state consistent
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Message-ID: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
Date: Wed, 4 Nov 2020 08:56:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When a page table page gets de-validated, its type reference count drops
to zero (and PGT_validated gets cleared), but its type remains intact.
XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
such pages. An intermediate write to such a page via e.g.
MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
return. In libxc the decision of which pages to normalize / localize
depends solely on the type returned from the domctl. As a result,
without further precautions, the guest won't be able to tell whether
such a page has had its (apparent) PTE entries transitioned to the new
MFNs.

Add a check of PGT_validated, thus consistently avoiding normalization /
localization in the tool stack.

Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead
of open-coding it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Don't change type's type.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -215,7 +215,7 @@ long arch_do_domctl(
 
         for ( i = 0; i < num; ++i )
         {
-            unsigned long gfn = 0, type = 0;
+            unsigned long gfn = 0, type = XEN_DOMCTL_PFINFO_NOTAB;
             struct page_info *page;
             p2m_type_t t;
 
@@ -255,6 +255,8 @@ long arch_do_domctl(
 
                 if ( page->u.inuse.type_info & PGT_pinned )
                     type |= XEN_DOMCTL_PFINFO_LPINTAB;
+                else if ( !(page->u.inuse.type_info & PGT_validated) )
+                    type = XEN_DOMCTL_PFINFO_NOTAB;
 
                 if ( page->count_info & PGC_broken )
                     type = XEN_DOMCTL_PFINFO_BROKEN;
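
The reporting rule the patch establishes can be condensed into a
self-contained sketch (constant values below are stand-ins for
illustration only; the real definitions live in Xen's public domctl and
mm headers, and the PGC_broken override is omitted):

```c
#include <assert.h>

/* Stand-in values for illustration only. */
#define XEN_DOMCTL_PFINFO_NOTAB   (0xfUL << 28)
#define XEN_DOMCTL_PFINFO_LPINTAB (0x1UL << 31)
#define PGT_pinned                (1UL << 0)
#define PGT_validated             (1UL << 1)

/* A pinned page table page keeps its table type and gains LPINTAB;
 * an unpinned page whose type was de-validated is reported as NOTAB,
 * so the tool stack skips normalization / localization for it. */
static unsigned long report_type(unsigned long table_type,
                                 unsigned long type_info)
{
    unsigned long type = table_type;

    if (type_info & PGT_pinned)
        type |= XEN_DOMCTL_PFINFO_LPINTAB;
    else if (!(type_info & PGT_validated))
        type = XEN_DOMCTL_PFINFO_NOTAB;

    return type;
}
```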


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:13:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18830.43837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDug-0000Fp-1t; Wed, 04 Nov 2020 08:13:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18830.43837; Wed, 04 Nov 2020 08:13:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDuf-0000Fi-VG; Wed, 04 Nov 2020 08:13:01 +0000
Received: by outflank-mailman (input) for mailman id 18830;
 Wed, 04 Nov 2020 08:13:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qpwB=EK=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kaDuf-0000Fd-7e
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:13:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d5320e64-d160-492d-ae1e-2d178fd50cbf;
 Wed, 04 Nov 2020 08:13:00 +0000 (UTC)
Received: from mail-pg1-f198.google.com (mail-pg1-f198.google.com
 [209.85.215.198]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-165-t9KBmiWJMd6ejhfdPt5MOA-1; Wed, 04 Nov 2020 03:12:57 -0500
Received: by mail-pg1-f198.google.com with SMTP id e16so13334820pgm.1
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 00:12:57 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qpwB=EK=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
	id 1kaDuf-0000Fd-7e
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:13:01 +0000
X-Inumbo-ID: d5320e64-d160-492d-ae1e-2d178fd50cbf
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id d5320e64-d160-492d-ae1e-2d178fd50cbf;
	Wed, 04 Nov 2020 08:13:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604477580;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ceiJjVJiuP39c7jvGZlPEkqo+iI/UIuc+xm9Ar02lws=;
	b=MnjR2DfFuINNNBiO5W3UWT9QHoQyaUFcm3Iz0IMtdqHAuTiDmrVfQ5kEcWRT/ta39vOxbj
	EhKAemSm7KSIcT94hSNDu7IVzuStHCQ9UY3X9CLyaLJbj53586+xQOi799MHOVn/q8XRSp
	TWnQXkktl5UsWZt2TIad5Q7Twu/eGsM=
Received: from mail-pg1-f198.google.com (mail-pg1-f198.google.com
 [209.85.215.198]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-165-t9KBmiWJMd6ejhfdPt5MOA-1; Wed, 04 Nov 2020 03:12:57 -0500
X-MC-Unique: t9KBmiWJMd6ejhfdPt5MOA-1
Received: by mail-pg1-f198.google.com with SMTP id e16so13334820pgm.1
        for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 00:12:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ceiJjVJiuP39c7jvGZlPEkqo+iI/UIuc+xm9Ar02lws=;
        b=k+S75zqdscXwyR9mBFqLvRv/RjncJlzeMxn+tyxjsShknWL8N5DGV8GaUbtRvXO3Fy
         Ae4sD1F7FQ/151gz7BisC7mg0eCH7TBB6Ms3mRMqRmeqP34F2aJVvpai1VPXTLgVXljy
         pYkmGd+6elc1qaxRp63j2isVMgygUeIUvOaY1N80g9mra/+uyDw2SqR5RReRhDGnkbrH
         C74c0/wkrpE15Vh9aZYWAa0sjmOqX/MdK/lyMFu8nIZMBupBEC9RUBnj8R00V5gEU1ay
         6yyaRJbuYc1gF5/8O3gWw5NwhNtVzijc11kmEKhbrQRE0X3zoah228LPnA/muRxj4Ae7
         HDUw==
X-Gm-Message-State: AOAM533cBHj/eE4/5uUmEXLKk9R96gogYcWlUvuIVmlSHAoL7NHb92yz
	rgxBs8+v+lvqfZY8y4W6rtTLseAFsUc7CWtnFow7gvUsZvlmj62qJ+OJ3ddPn9kqEM3lnEhxJHl
	G4Pf/zqlailnfCH5VoDxKnGOhxbjY3+cY6WcBmV9CyaU=
X-Received: by 2002:a17:90b:783:: with SMTP id l3mr3441351pjz.122.1604477576528;
        Wed, 04 Nov 2020 00:12:56 -0800 (PST)
X-Google-Smtp-Source: ABdhPJy9VEpGaTs4FobVdmURAavhmVrSlcOVxSA8lQRAnf6DR3hx+8yKl8lKQ0FpfM8iza8JMrUtTBW+D3SUMqpvPJM=
X-Received: by 2002:a17:90b:783:: with SMTP id l3mr3441336pjz.122.1604477576301;
 Wed, 04 Nov 2020 00:12:56 -0800 (PST)
MIME-Version: 1.0
References: <20201103164604.2692357-1-philmd@redhat.com> <20201103164604.2692357-3-philmd@redhat.com>
 <20201103165247.GT205187@redhat.com> <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com>
 <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com> <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Wed, 4 Nov 2020 09:12:43 +0100
Message-ID: <CABgObfaAH1fty0y0Z10GALnhy4kL_FqSxPZc2-=PwJgtSrOX0g@mail.gmail.com>
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	=?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Greg Kurz <groug@kaod.org>, 
	Christian Schoenebeck <qemu_oss@crudebyte.com>, qemu-devel@nongnu.org, 
	Cornelia Huck <cohuck@redhat.com>, =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	David Hildenbrand <david@redhat.com>, qemu-s390x@nongnu.org, Fam Zheng <fam@euphon.net>, 
	Richard Henderson <rth@twiddle.net>, Matthew Rosato <mjrosato@linux.ibm.com>, 
	Halil Pasic <pasic@linux.ibm.com>, Thomas Huth <thuth@redhat.com>, 
	Wainer dos Santos Moschetta <wainersm@redhat.com>, Christian Borntraeger <borntraeger@de.ibm.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/alternative; boundary="000000000000fca6bd05b3438b08"

--000000000000fca6bd05b3438b08
Content-Type: text/plain; charset="UTF-8"

On Wed, 4 Nov 2020 at 03:27, Stefano Stabellini <sstabellini@kernel.org>
wrote:

> FYI I tried to build the latest QEMU on Alpine Linux 3.12 ARM64 and I
> get:
>
>   ninja: unknown tool 'query'
>
> Even after rebuilding ninja master by hand. Any ideas? I don't know much
> about ninja.
>

Are you sure you have ninja installed and not its clone samurai (yes, I am
serious)? I have contributed query support to samurai but it hasn't made it
to a release yet.

What is the output of "ninja -t list"?
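
A quick way to tell the two implementations apart on a given system
might look like this (illustrative only; version strings and packaging
vary by distribution):

```shell
# Check which build tool is actually behind the "ninja" command.
# Older samurai releases lack the 'query' subtool that QEMU's build uses.
if command -v ninja >/dev/null 2>&1; then
    ninja --version
    if ninja -t list 2>/dev/null | grep -q query; then
        echo "ninja with 'query' subtool available"
    else
        echo "no 'query' subtool (possibly samurai)"
    fi
else
    echo "ninja not found on PATH"
fi
```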

Paolo


>
> So I gave up on that and I spun up a Debian Buster x86 container for
> this build. That one got past the "ninja: unknown tool 'query'" error.
> The build completed without problems to the end.
>
> Either way I can't reproduce the build error above.

--000000000000fca6bd05b3438b08--



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:15:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18835.43850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxS-0000Pw-HF; Wed, 04 Nov 2020 08:15:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18835.43850; Wed, 04 Nov 2020 08:15:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxS-0000Pp-E7; Wed, 04 Nov 2020 08:15:54 +0000
Received: by outflank-mailman (input) for mailman id 18835;
 Wed, 04 Nov 2020 08:15:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaDxR-0000Pf-Eu
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:15:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id db0330c7-59ee-4464-9368-ed68852fb91b;
 Wed, 04 Nov 2020 08:15:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F1F17ACF1;
 Wed,  4 Nov 2020 08:15:51 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kaDxR-0000Pf-Eu
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:15:53 +0000
X-Inumbo-ID: db0330c7-59ee-4464-9368-ed68852fb91b
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id db0330c7-59ee-4464-9368-ed68852fb91b;
	Wed, 04 Nov 2020 08:15:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604477752;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cUZq7FUinpan5b+B+0dHzJT3VyvkdYgZhHV/8DJXZyg=;
	b=blLJjIxBcUI0qmfuYyhXcjdGGlT5FjKf8X35uuEDD//NMLqxn0iA//AtqHCnHU7Wx+mR9i
	iEaP2cc8sb/dzVK0wgPyvh//ILZEY0V97pKU/5MB47bklfQlLk4XN1dv6cvuBBxxubbnKy
	PxkV2Ri4dXuBSogT/nzxTGuydug2Dm4=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id F1F17ACF1;
	Wed,  4 Nov 2020 08:15:51 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks
Date: Wed,  4 Nov 2020 09:15:49 +0100
Message-Id: <20201104081549.3712-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201104081549.3712-1-jgross@suse.com>
References: <20201104081549.3712-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Checking whether a lock is used consistently with respect to interrupts
being on or off is beneficial for rwlocks, too.

So add check_lock() calls to rwlock functions. For this purpose make
check_lock() globally accessible.
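
The consistency rule being checked can be modelled in a few lines (a
deliberately simplified sketch, not the actual Xen implementation; names
are only loosely based on xen/common/spinlock.c):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model: a lock must be taken either always with interrupts
 * enabled or always with them disabled; mixing the two can deadlock
 * (an IRQ handler may spin on a lock held by the context it interrupted). */
struct lock_debug {
    signed char irq_safe;   /* -1: not yet known, 0: irqs on, 1: irqs off */
};

static void check_lock(struct lock_debug *d, bool irqs_disabled, bool try_lock)
{
    if (d->irq_safe < 0)
        d->irq_safe = irqs_disabled;   /* first acquisition decides */
    else if (!try_lock)                /* check relaxed for trylocks here */
        assert(d->irq_safe == (signed char)irqs_disabled);
}
```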

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- call check_lock() unconditionally in try_lock variants (Jan Beulich)

V3:
- add comments (Jan Beulich)
- place check_lock() calls inside preempt-off region (Jan Beulich)

---
 xen/common/spinlock.c      |  3 +--
 xen/include/xen/rwlock.h   | 15 +++++++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index f4eb50f030..b90981bb27 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug, bool try)
+void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -108,7 +108,6 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 427664037a..0cc9167715 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
@@ -87,7 +88,11 @@ static inline void _read_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( likely(_can_read_lock(cnts)) )
+    {
+        /* The slow path calls check_lock() via spin_lock(). */
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
@@ -162,7 +167,11 @@ static inline void _write_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
+    {
+        /* The slow path calls check_lock() via spin_lock(). */
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     queue_write_lock_slowpath(lock);
     /*
@@ -197,6 +206,7 @@ static inline int _write_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( unlikely(cnts) ||
          unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
@@ -328,6 +338,11 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
         /* Drop the read lock because we don't need it anymore. */
         read_unlock(&percpu_rwlock->rwlock);
     }
+    else
+    {
+        /* All other paths have implicit check_lock() calls via read_lock(). */
+        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
+    }
 }
 
 static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ca13b600a0..9fa4e600c1 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,11 +21,13 @@ union lock_debug {
     };
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+void check_lock(union lock_debug *debug, bool try);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
+#define check_lock(l, t) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:15:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:15:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18836.43862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxT-0000Rb-Ug; Wed, 04 Nov 2020 08:15:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18836.43862; Wed, 04 Nov 2020 08:15:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxT-0000RU-Qm; Wed, 04 Nov 2020 08:15:55 +0000
Received: by outflank-mailman (input) for mailman id 18836;
 Wed, 04 Nov 2020 08:15:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaDxS-0000Pk-3t
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:15:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 06b591f8-6124-46e8-9693-126333d10666;
 Wed, 04 Nov 2020 08:15:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4DA8DACB6;
 Wed,  4 Nov 2020 08:15:51 +0000 (UTC)
X-Inumbo-ID: 06b591f8-6124-46e8-9693-126333d10666
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604477751;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=t0nui2j85lczmOaB6wvrDxMdOES61kNj5LWlu/C7qc0=;
	b=n6NChhe8h/R0Eli5x84sYWWmkbcjIZvdWwYZY/We62l16UTqmbbV7McACk35CFCye39EMF
	kp4CBCXxdsBCHFC0dpUPqrja7jnhBS2SVV1RlgxYWMmqHY2OxYi/I6hl9+m9XCXpdd3+0b
	udN9I2E78uWUwIWzuor4bj5rfnuWG5I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/3] xen/locking: fix and enhance lock debugging
Date: Wed,  4 Nov 2020 09:15:46 +0100
Message-Id: <20201104081549.3712-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This small series fixes two issues with spinlock debug code and adds
lock debug code to rwlocks in order to catch IRQ violations.

Juergen Gross (3):
  xen/spinlocks: spin_trylock with interrupts off is always fine
  xen/locking: harmonize spinlocks and rwlocks regarding preemption
  xen/rwlock: add check_lock() handling to rwlocks

 xen/common/spinlock.c      | 22 ++++++++++++++--------
 xen/include/xen/rwlock.h   | 15 +++++++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 31 insertions(+), 8 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:16:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:16:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18837.43874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxY-0000VD-9d; Wed, 04 Nov 2020 08:16:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18837.43874; Wed, 04 Nov 2020 08:16:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxY-0000V1-5H; Wed, 04 Nov 2020 08:16:00 +0000
Received: by outflank-mailman (input) for mailman id 18837;
 Wed, 04 Nov 2020 08:15:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaDxW-0000Pk-Vc
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:15:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id b17a3e08-d071-4107-9a9a-62c7dbc0f131;
 Wed, 04 Nov 2020 08:15:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7FB21AC23;
 Wed,  4 Nov 2020 08:15:51 +0000 (UTC)
X-Inumbo-ID: b17a3e08-d071-4107-9a9a-62c7dbc0f131
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604477751;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SMWFDeqyO4Bqdr2XBTO3XuyCVTVJJjcu3BY2gEaI7+0=;
	b=dQMJ2oPyWtwflpsHfulQPgBtMzNX2MQIF21kPA26/35M0xKeNVqjfq6c1xWMK+m7FDKPmO
	zTkXnEpzbTzej/zjlWVmri5+f9+2ho75oKOVO5iLV2UpYlv0DIsdpbOhc4fbUE+C2eYeKV
	FCc72zmaABiCECj/knpDXrloiKhgKD0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine
Date: Wed,  4 Nov 2020 09:15:47 +0100
Message-Id: <20201104081549.3712-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201104081549.3712-1-jgross@suse.com>
References: <20201104081549.3712-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.

Add a bool parameter "try" to check_lock() for handling this case.

Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and can even lead to false crashes, e.g. when a
lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happens with interrupts off. In case the lock is
taken with the wrong interrupt flags, this will be caught when taking
the lock.
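
The latching rule, including the new "try" exemption, can be modelled
in user space as follows (illustrative only, not Xen code: the real
check_lock() in xen/common/spinlock.c uses atomics and BUG_ON(); this
sketch merely returns whether a use is consistent):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative user-space model (not Xen code) of check_lock()'s
 * latching rule with the new "try" parameter.
 */
struct lock_debug_model {
    bool latched;    /* IRQ safety recorded on first use? */
    bool irq_safe;   /* first use happened with interrupts off */
};

static bool check_lock_model(struct lock_debug_model *d,
                             bool irqs_off, bool is_try)
{
    /* A trylock with interrupts off can't block: always fine. */
    if ( is_try && irqs_off )
        return true;

    /* Latch the IRQ safety of the lock on first use. */
    if ( !d->latched )
    {
        d->latched = true;
        d->irq_safe = irqs_off;
        return true;
    }

    /* Any later use must match the latched IRQ state. */
    return d->irq_safe == irqs_off;
}
```

So a lock first taken with interrupts enabled may still be try-locked
with interrupts off, while a blocking acquisition with interrupts off
is still flagged.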

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- corrected comment (Jan Beulich)
---
 xen/common/spinlock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce3106e2d3..b4aaf6bce6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug)
+static void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
      * 
      * To guard against this subtle bug we latch the IRQ safety of every
      * spinlock in the system, on first use.
+     *
+     * A spin_trylock() with interrupts off is always fine, as this can't
+     * block and the above deadlock scenario doesn't apply.
      */
+    if ( try && irq_safe )
+        return;
+
     if ( unlikely(debug->irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
@@ -102,7 +108,7 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l) ((void)0)
+#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
@@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 
 int _spin_is_locked(spinlock_t *lock)
 {
-    check_lock(&lock->debug);
-
     /*
      * Recursive locks may be locked by another CPU, yet we return
      * "false" here, making this function suitable only for use in
@@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
         return 0;
@@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:16:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18838.43886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxd-0000Zs-KM; Wed, 04 Nov 2020 08:16:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18838.43886; Wed, 04 Nov 2020 08:16:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDxd-0000Zk-Gl; Wed, 04 Nov 2020 08:16:05 +0000
Received: by outflank-mailman (input) for mailman id 18838;
 Wed, 04 Nov 2020 08:16:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaDxb-0000Pk-Vi
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:16:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9db5219b-9832-4dda-9a2e-46059e9e7c38;
 Wed, 04 Nov 2020 08:15:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BA7DEACC6;
 Wed,  4 Nov 2020 08:15:51 +0000 (UTC)
X-Inumbo-ID: 9db5219b-9832-4dda-9a2e-46059e9e7c38
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604477751;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X4N92E0t7NY8cFsFrRFhreEx/aHGKkH4ttPqzLqhrP0=;
	b=TA0qCEfg66+8vkDgjK6jJWLh9gwYmrr0METgNIVpEsr55+7OnQ0pkhihTDXnPbynRX4C13
	MYySK5w1UrVlhhKfFpz3NLdoMCMDci8QlwUm1hvahwCzlpFT/yrj0xjnJMFgysPmoEvNHp
	lQ3TDSMNoyA+bl3Du3cjZ26C/fhxOWY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks regarding preemption
Date: Wed,  4 Nov 2020 09:15:48 +0100
Message-Id: <20201104081549.3712-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201104081549.3712-1-jgross@suse.com>
References: <20201104081549.3712-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Spinlocks and rwlocks behave differently in their try variants
regarding preemption: rwlocks switch preemption off before testing the
lock, while spinlocks do so only after the first check.

Modify _spin_trylock() to disable preemption before testing whether the
lock is held, in order to make it preemption-ready.
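
The new ordering can be sketched as a small user-space model
(illustrative only, not Xen code: the cmpxchg of the real code is
collapsed into a plain increment, and the names are made up for this
sketch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative user-space model (not Xen code) of the reordered
 * _spin_trylock(): preemption is disabled before the ticket state is
 * observed, matching the rwlock try variants, and re-enabled on the
 * early-exit path.
 */
struct tickets_model { unsigned head, tail; };

static int preempt_count;   /* models preempt_disable()/enable() */

static bool spin_trylock_model(struct tickets_model *t)
{
    preempt_count++;             /* preempt_disable(), now done first */
    if ( t->head != t->tail )    /* lock already held */
    {
        preempt_count--;         /* preempt_enable() before bailing out */
        return false;
    }
    t->tail++;                   /* take a ticket (cmpxchg in reality) */
    return true;
}
```

A failed try leaves the preempt count balanced; a successful one
returns with preemption off, just like the rwlock try variants.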

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 xen/common/spinlock.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index b4aaf6bce6..f4eb50f030 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -240,13 +240,16 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
+    preempt_disable();
     check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
+    {
+        preempt_enable();
         return 0;
+    }
     new = old;
     new.tail++;
-    preempt_disable();
     if ( cmpxchg(&lock->tickets.head_tail,
                  old.head_tail, new.head_tail) != old.head_tail )
     {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:16:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:16:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18847.43898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDyD-0000pT-VU; Wed, 04 Nov 2020 08:16:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18847.43898; Wed, 04 Nov 2020 08:16:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaDyD-0000pM-ST; Wed, 04 Nov 2020 08:16:41 +0000
Received: by outflank-mailman (input) for mailman id 18847;
 Wed, 04 Nov 2020 08:16:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaDyB-0000ou-U9
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:16:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fda3a0c-e114-4ddf-a4bd-f002db54aa76;
 Wed, 04 Nov 2020 08:16:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaDy8-0007k8-BE; Wed, 04 Nov 2020 08:16:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaDy8-0002OG-1Z; Wed, 04 Nov 2020 08:16:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaDy8-0003sk-15; Wed, 04 Nov 2020 08:16:36 +0000
X-Inumbo-ID: 1fda3a0c-e114-4ddf-a4bd-f002db54aa76
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bjomXCNZu0S4cL7NSKCqX9wUugP0HTuAcLNJqVVUG+U=; b=c4WC6dncznRm8uvEPQoOY8AUl9
	12h6PQi6iLxbluwSYO+vaIuhAOjAoXBJub2OiOXxeTgADDvG9mphPjgnPsJ6DI2dmKk9WM9++xNAn
	ft6T8XX4hIZI2KHeYBmbX1ZeP06oIEuTweHaHeEVoUsi5K8ux2noAIDG2traZr0rrlUk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156391-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156391: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=457877eae48d5cf3dc1ff687a8cc69def557a570
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 08:16:36 +0000

flight 156391 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156391/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              457877eae48d5cf3dc1ff687a8cc69def557a570
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  117 days
Failing since        151818  2020-07-11 04:18:52 Z  116 days  111 attempts
Testing same since   156391  2020-11-04 04:20:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 24386 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:22:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:22:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18865.43913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaE3T-0001nA-Mp; Wed, 04 Nov 2020 08:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18865.43913; Wed, 04 Nov 2020 08:22:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaE3T-0001n3-Jf; Wed, 04 Nov 2020 08:22:07 +0000
Received: by outflank-mailman (input) for mailman id 18865;
 Wed, 04 Nov 2020 08:22:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaE3R-0001my-Ql
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:22:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1ecf9401-884d-4793-9b69-64f9d9510726;
 Wed, 04 Nov 2020 08:22:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 890B6AC23;
 Wed,  4 Nov 2020 08:22:04 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kaE3R-0001my-Ql
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:22:05 +0000
X-Inumbo-ID: 1ecf9401-884d-4793-9b69-64f9d9510726
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 1ecf9401-884d-4793-9b69-64f9d9510726;
	Wed, 04 Nov 2020 08:22:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604478124;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=FIHJjp2r63bKNxd9wt4Ny3dqHWucTNU7/7M3PY7mgtA=;
	b=WyQS0+B/nzXQjcPRwai7VmmFlkHHZrWS4FRW+UR8f/5q5khkZUEXTGUzZ7vs6dBV3Jc6JY
	Zkrr9T6ewk24n/pNdsrgALYF+xiBNEeEKXeQzztpEwHp/uA7JCKMJmc98dp018t5GWquXF
	8BgEmQfEwkIw2LahypAz0UOgmWGIVXs=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 890B6AC23;
	Wed,  4 Nov 2020 08:22:04 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/drivers: remove unused pcidevs_trylock()
Date: Wed,  4 Nov 2020 09:22:02 +0100
Message-Id: <20201104082202.12194-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

pcidevs_trylock() is used nowhere, so remove it.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/drivers/passthrough/pci.c | 5 -----
 xen/include/xen/pci.h         | 1 -
 2 files changed, 6 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 2a3bce1462..51e584127e 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -69,11 +69,6 @@ bool_t pcidevs_locked(void)
     return !!spin_is_locked(&_pcidevs_lock);
 }
 
-bool_t pcidevs_trylock(void)
-{
-    return !!spin_trylock_recursive(&_pcidevs_lock);
-}
-
 static struct radix_tree_root pci_segments;
 
 static inline struct pci_seg *get_pseg(u16 seg)
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index c4d3879761..20a54a5bb4 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -148,7 +148,6 @@ struct pci_dev {
 void pcidevs_lock(void);
 void pcidevs_unlock(void);
 bool_t __must_check pcidevs_locked(void);
-bool_t __must_check pcidevs_trylock(void);
 
 bool_t pci_known_segment(u16 seg);
 bool_t pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:30:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18874.43924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEBf-0002ls-Kw; Wed, 04 Nov 2020 08:30:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18874.43924; Wed, 04 Nov 2020 08:30:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEBf-0002ll-Hl; Wed, 04 Nov 2020 08:30:35 +0000
Received: by outflank-mailman (input) for mailman id 18874;
 Wed, 04 Nov 2020 08:30:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAD3=EK=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kaEBe-0002lf-7y
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:30:34 +0000
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb438fa4-13c4-400d-b703-5cf733429253;
 Wed, 04 Nov 2020 08:30:33 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id x7so21098042wrl.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 00:30:33 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-232.amazon.com. [54.240.197.232])
 by smtp.gmail.com with ESMTPSA id x81sm1454581wmg.5.2020.11.04.00.30.31
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 04 Nov 2020 00:30:32 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WAD3=EK=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kaEBe-0002lf-7y
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:30:34 +0000
X-Inumbo-ID: eb438fa4-13c4-400d-b703-5cf733429253
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id eb438fa4-13c4-400d-b703-5cf733429253;
	Wed, 04 Nov 2020 08:30:33 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id x7so21098042wrl.3
        for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 00:30:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=Sh1HtM7uDLB2YmR2hldT2MWKXVVvs+80YeAn9tOAS7U=;
        b=bQ7TcV2Q2mevOYWSH1zwHfWvSPCFmUU+pdHdZadirZQi88257XwabWuCCxhXRBSBWK
         inV/3NxZbo6hHN7vglz905iqZmxydaSUxnuIz+k8OwJiXDKKS1IzYl13Na373BgH6GEL
         47lH1tLb8kWLAvSNQn9uSycF4QvA3JSmNg9VIViEhU2Iv99oaXRDcCZ8roS0YXZABoY1
         querTcbvbuTxf2mkh3gr1JZ+uGtcJl745YvsRFh7D+NEs1TiHWXphY6SHwZM7fyVoIPn
         bV/hKTmVTWueDSNllSTIPkmZE3hzO7G931p9ZCitXb79TXYgxwZshQh1jKVnEcj1KLAZ
         BdUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=Sh1HtM7uDLB2YmR2hldT2MWKXVVvs+80YeAn9tOAS7U=;
        b=kHwSX0HIw2tNLvqJ4MtbGx11hy4n/ibAxHtDuH0SSqKek1C6iO2be9e5ll6Vj69q4V
         RLaQO3rAz1Qe4xTiAhuDHHQQThXOTw/SxNIvsq/qIMhN6zvJOX+5qcFPj9F8YzN4jXZl
         JkyBlzg43ftPz234SOR6ebbkVRJeFGXtoy8AVA/0FpcClafFIbDqLrJwEmW+VYiIe+/E
         j0eb8BzhPAp/idejYky4yY8EpHq9MIKFoyHaZJPn62GV6g+spdhaU4p6P5qGp3lKi0WY
         DJfeaLwgM51D6rqFReP/ARSBmvSIzN6j9ZgFMhA970drbVYKfuZMop2z0B+qp83986Yq
         +Rvw==
X-Gm-Message-State: AOAM530ulcdmgx8jMfA44SnW5wDY4IMwntuHAuHRjev9hoY0NU0wWUCP
	AT+Jd2HrAmngYz6IXJGdotA=
X-Google-Smtp-Source: ABdhPJyiX8fD3zJif1V+yCe3SBi85vbdaVLugtJSioIMPuBR05zWylVXzTx5lJE9v7AXCITtZwmSIA==
X-Received: by 2002:adf:f185:: with SMTP id h5mr15586113wro.10.1604478632596;
        Wed, 04 Nov 2020 00:30:32 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-232.amazon.com. [54.240.197.232])
        by smtp.gmail.com with ESMTPSA id x81sm1454581wmg.5.2020.11.04.00.30.31
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 04 Nov 2020 00:30:32 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>
References: <20201104082202.12194-1-jgross@suse.com>
In-Reply-To: <20201104082202.12194-1-jgross@suse.com>
Subject: RE: [PATCH] xen/drivers: remove unused pcidevs_trylock()
Date: Wed, 4 Nov 2020 08:30:30 -0000
Message-ID: <006c01d6b284$c25cb0d0$47161270$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQE5v9PXgJmLQtnJZ68jgI88qe7mFKrxibtA

> -----Original Message-----
> From: Juergen Gross <jgross@suse.com>
> Sent: 04 November 2020 08:22
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Jan Beulich <jbeulich@suse.com>; Paul Durrant <paul@xen.org>;
> Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei
> Liu <wl@xen.org>
> Subject: [PATCH] xen/drivers: remove unused pcidevs_trylock()
> 
> pcidevs_trylock() is used nowhere, so remove it.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:33:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:33:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18882.43943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEEE-0002y7-9F; Wed, 04 Nov 2020 08:33:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18882.43943; Wed, 04 Nov 2020 08:33:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEEE-0002y0-6F; Wed, 04 Nov 2020 08:33:14 +0000
Received: by outflank-mailman (input) for mailman id 18882;
 Wed, 04 Nov 2020 08:33:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaEED-0002x4-47
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:33:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44af02fe-38d2-4b85-8ee7-8254ad2e0c80;
 Wed, 04 Nov 2020 08:33:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaEE4-00087A-KO; Wed, 04 Nov 2020 08:33:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaEE4-0002uy-Bw; Wed, 04 Nov 2020 08:33:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaEE4-0004hy-BP; Wed, 04 Nov 2020 08:33:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kaEED-0002x4-47
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:33:13 +0000
X-Inumbo-ID: 44af02fe-38d2-4b85-8ee7-8254ad2e0c80
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 44af02fe-38d2-4b85-8ee7-8254ad2e0c80;
	Wed, 04 Nov 2020 08:33:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HV9sRWTlieOfTCKWit+bfto5sI8hyG2t6Hnr25hzhBM=; b=fTdfRtoS4pzDXChRdnjB7/TUQX
	auRNpWnH94+egWuO8BBYOSadvBCPhzQQXIQYr9mWknj7mrh/1Q0Zc+vJ2jar4t8Dl6mBgu84+u90h
	tZbKgL3HKwRsipaJzJ2SXmd7iqr7w8KSwf01a2zdkslxGKJ0DVKxQj8WSRKsyYe1iZ/A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kaEE4-00087A-KO; Wed, 04 Nov 2020 08:33:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kaEE4-0002uy-Bw; Wed, 04 Nov 2020 08:33:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kaEE4-0004hy-BP; Wed, 04 Nov 2020 08:33:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156388-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156388: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3d6e32347a3b57dac7f469a07c5f520e69bd070a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 08:33:04 +0000

flight 156388 qemu-mainline real [real]
flight 156392 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156388/
http://logs.test-lab.xenproject.org/osstest/logs/156392/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3d6e32347a3b57dac7f469a07c5f520e69bd070a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   75 days
Failing since        152659  2020-08-21 14:07:39 Z   74 days  168 attempts
Testing same since   156388  2020-11-04 00:07:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59937 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 08:44:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 08:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18891.43968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEOQ-00040w-FT; Wed, 04 Nov 2020 08:43:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18891.43968; Wed, 04 Nov 2020 08:43:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEOQ-00040p-Bv; Wed, 04 Nov 2020 08:43:46 +0000
Received: by outflank-mailman (input) for mailman id 18891;
 Wed, 04 Nov 2020 08:43:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ed9z=EK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kaEOP-00040k-Ky
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:43:45 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 68352186-03c0-4acc-91a3-a38114ed8126;
 Wed, 04 Nov 2020 08:43:44 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-524-RJP7EuIzPaOQZilfVSpIUQ-1; Wed, 04 Nov 2020 03:43:42 -0500
Received: by mail-wr1-f71.google.com with SMTP id p12so1453748wrx.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 00:43:42 -0800 (PST)
Received: from localhost.localdomain (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id u5sm1311657wml.13.2020.11.04.00.43.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Nov 2020 00:43:40 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ed9z=EK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kaEOP-00040k-Ky
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 08:43:45 +0000
X-Inumbo-ID: 68352186-03c0-4acc-91a3-a38114ed8126
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 68352186-03c0-4acc-91a3-a38114ed8126;
	Wed, 04 Nov 2020 08:43:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604479424;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AINBmnp1bWLA1LEpK/rcM7CoffR1YWyCikg181rfSi4=;
	b=JlOwjAsBoxX4KdjWUg+aAwhAOsmaDxiv5iMXa7iMHzscaHs6ktPZv0PLMXhj3YIWPQAgFe
	JKTBnlHNknkrG+1/5NffNkmMm+jf7KYPp2MrZOp6/dThVfZ+gREr8rReCfiD2ugFAuit3B
	7qBOFg4YntVyxXIirTBD8Wzm8DH9xBs=
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-524-RJP7EuIzPaOQZilfVSpIUQ-1; Wed, 04 Nov 2020 03:43:42 -0500
X-MC-Unique: RJP7EuIzPaOQZilfVSpIUQ-1
Received: by mail-wr1-f71.google.com with SMTP id p12so1453748wrx.3
        for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 00:43:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=AINBmnp1bWLA1LEpK/rcM7CoffR1YWyCikg181rfSi4=;
        b=b0wDPtauIvQG5ODGzeD11rOdnmZu03SCZWvs9rsWlgVs6qD6jlcDxbaVVeaaK5t47E
         w2ASsy8p/lKJVIp26Rb/Ur5BDpKadeCDsWYH12uYkDreLG0eKpkneODVaGkMwy+0MJkz
         BC+jiMaMq/BATCyelTJCnGo+/SgiBCJ0e58j1k/XJZJUONmBmMECbaBfi6bK4hOkngRq
         nKgIhtyiQ1kNni44wli5Qqdyti/vHHXisCEMmNq5b6UtB7M8gM/Za/DlVulzPfmT+Tza
         qu8Dbj8AY+yvM+hZlynMrm2nWobx0XlRMuPravOGNb6decsHtoT7CLV441mh3QRqz7K8
         2dFA==
X-Gm-Message-State: AOAM532aHFpx5fPjazEi5WA8FyCDGBliOBIBub3RddYD5C+fOfubmMV9
	Elrm/E0B/kJ4roGEGVMTmZmHxYuNjgZT4wzOd8ZsOvD7etEJOvpFSD7w32zRa3J1SRSq/XAlS+Q
	NOZwiZLCZOLk1TJQTdHkpTnevPKY=
X-Received: by 2002:a1c:6782:: with SMTP id b124mr3528113wmc.117.1604479421393;
        Wed, 04 Nov 2020 00:43:41 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyO4zhDM8ZE9YI+Bb3lljj4jjXBr4C5MlBdNUU1BGAXUMIJaqRRVxrwdwAf4OskguQTBvWGag==
X-Received: by 2002:a1c:6782:: with SMTP id b124mr3528078wmc.117.1604479421239;
        Wed, 04 Nov 2020 00:43:41 -0800 (PST)
Received: from localhost.localdomain (234.red-83-42-66.dynamicip.rima-tde.net. [83.42.66.234])
        by smtp.gmail.com with ESMTPSA id u5sm1311657wml.13.2020.11.04.00.43.39
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 04 Nov 2020 00:43:40 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Christian Borntraeger <borntraeger@de.ibm.com>,
	Greg Kurz <groug@kaod.org>,
	Fam Zheng <fam@euphon.net>,
	Richard Henderson <rth@twiddle.net>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	qemu-s390x@nongnu.org,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Christian Schoenebeck <qemu_oss@crudebyte.com>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	"Daniel P . Berrange" <berrange@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH-for-5.2 v2 2/4] hw/9pfs: Fix Kconfig dependency problem between 9pfs and Xen
Date: Wed,  4 Nov 2020 09:43:25 +0100
Message-Id: <20201104084327.3010593-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201104084327.3010593-1-philmd@redhat.com>
References: <20201104084327.3010593-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fixes './configure --without-default-devices --enable-xen' build:

  /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
  hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
  /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
  /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
  /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
  collect2: error: ld returned 1 exit status

Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
I'm not sure b2c00bce54c is the real culprit

Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
Cc: Greg Kurz <groug@kaod.org>
Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
---
 hw/9pfs/Kconfig     | 4 ----
 hw/9pfs/meson.build | 2 +-
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/hw/9pfs/Kconfig b/hw/9pfs/Kconfig
index d3ebd737301..3ae57496613 100644
--- a/hw/9pfs/Kconfig
+++ b/hw/9pfs/Kconfig
@@ -2,12 +2,8 @@ config FSDEV_9P
     bool
     depends on VIRTFS
 
-config 9PFS
-    bool
-
 config VIRTIO_9P
     bool
     default y
     depends on VIRTFS && VIRTIO
     select FSDEV_9P
-    select 9PFS
diff --git a/hw/9pfs/meson.build b/hw/9pfs/meson.build
index cc094262122..99be5d91196 100644
--- a/hw/9pfs/meson.build
+++ b/hw/9pfs/meson.build
@@ -15,6 +15,6 @@
   'coxattr.c',
 ))
 fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
-softmmu_ss.add_all(when: 'CONFIG_9PFS', if_true: fs_ss)
+softmmu_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss)
 
 specific_ss.add(when: 'CONFIG_VIRTIO_9P', if_true: files('virtio-9p-device.c'))
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:06:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:06:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18909.44012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEkR-00062b-Vj; Wed, 04 Nov 2020 09:06:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18909.44012; Wed, 04 Nov 2020 09:06:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEkR-00062Q-Rm; Wed, 04 Nov 2020 09:06:31 +0000
Received: by outflank-mailman (input) for mailman id 18909;
 Wed, 04 Nov 2020 09:06:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d6HI=EK=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kaEkP-00060O-Qb
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:06:29 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13280b07-5125-48ab-9d03-1b5a2930a113;
 Wed, 04 Nov 2020 09:06:23 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id s30so3861232lfc.4
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 01:06:23 -0800 (PST)
Received: from [100.64.112.11] (ll-18.209.223.85.sovam.net.ua. [85.223.209.18])
 by smtp.gmail.com with ESMTPSA id n23sm349389lfe.210.2020.11.04.01.06.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Nov 2020 01:06:22 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=d6HI=EK=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kaEkP-00060O-Qb
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:06:29 +0000
X-Inumbo-ID: 13280b07-5125-48ab-9d03-1b5a2930a113
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 13280b07-5125-48ab-9d03-1b5a2930a113;
	Wed, 04 Nov 2020 09:06:23 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id s30so3861232lfc.4
        for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 01:06:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=axTLjJW9pP5qcu9UM7eUKbFLNLSswfMDMuK1tEN4aa8=;
        b=pzS7j4KK5si2KPo7uwgcoPd9rlGFqnz1LjGZtCazdpSnZx9nuoC205qSJm7vlWdFuh
         8wILrkB9eKslJG6NctTJ61K5gSdMYNUdiFpGfwI3FkpbekuwTNtPgdyoC7nCFRRnf3qI
         YrvWZNy4PZhEgd5skCPvpyIQwip1Y0v1EE2C2iPkay2Tv2BgCfBwlgbxdPXQlTbWmMCN
         HWYEYj2UKpFpDYxJbivMtBZsur7jVgAKR4exPIC+lufI2Zv7EWHheqz9ScAzjoi8LbU9
         m9FpBmOq6f535rOPbia126aHQao88T0ZkDg8EY5mb11nkyy3faa7sBwtZsqlpr5GS5o2
         ITpw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=axTLjJW9pP5qcu9UM7eUKbFLNLSswfMDMuK1tEN4aa8=;
        b=teZGehzjYRexU0VAwLxl09bxUpyqsy6ayHxwgx/IodTzpJEeg2cDsgbbQKz2/CZ3Q4
         c3WGTkCROJH4g/TfP9QUH39Us6MC4cR58S7vz8MKjvNM7TsvYaFRjd/QTRgjUjSsq3/P
         tSVnZuoLq6F5NaFJwDXcH3SN0CPgFadN00GumI0H6mcpcquEu1fYWO3AICCIlzjhpS/A
         FDkP5wQGwZ3+wt533i5wNd23fZ4zYUOcKTimeQvkWXFcsTWrX1rgBLFMysW4kfhfLcAL
         9TF7QnNgK9rBmekOswJ4OT6sNvSF4fRLULZOJPi+L3Qx7waZx09LXNIcZSRZ5BywvFBp
         T77w==
X-Gm-Message-State: AOAM531R9foCBQEXpWf/Y0LKsmRNG7B8C3hzLxMIOUSq6xmOjmlkSeFo
	9cxLelZ1SVygfrrwunHsOIU=
X-Google-Smtp-Source: ABdhPJytUl2YF8xoDU181hAQYEzfF9rGbPL42yGnM/scNRtyLavk99yogN8o1jzsIt/ZnLJPDA5ykA==
X-Received: by 2002:a05:6512:3052:: with SMTP id b18mr8239333lfb.505.1604480782765;
        Wed, 04 Nov 2020 01:06:22 -0800 (PST)
Received: from [100.64.112.11] (ll-18.209.223.85.sovam.net.ua. [85.223.209.18])
        by smtp.gmail.com with ESMTPSA id n23sm349389lfe.210.2020.11.04.01.06.21
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 04 Nov 2020 01:06:22 -0800 (PST)
Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Jan Beulich' <jbeulich@suse.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
 <003c01d6a6b0$8c418f50$a4c4adf0$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <3dd55087-0c07-c9f3-e80a-8b136c226475@gmail.com>
Date: Wed, 4 Nov 2020 11:06:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <003c01d6a6b0$8c418f50$a4c4adf0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.10.20 10:13, Paul Durrant wrote:

Hi Paul.

Sorry for the late response.

>> +
>> +/* Called when target domain is paused */
>> +static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
>> +{
>> +    p2m_set_ioreq_server(s->target, 0, s);
>> +}
>> +
>> +/*
>> + * Map or unmap an ioreq server to specific memory type. For now, only
>> + * HVMMEM_ioreq_server is supported, and in the future new types can be
>> + * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
>> + * currently, only write operations are to be forwarded to an ioreq server.
>> + * Support for the emulation of read operations can be added when an ioreq
>> + * server has such requirement in the future.
>> + */
>> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>> +                                                   ioservid_t id,
>> +                                                   uint32_t type,
>> +                                                   uint32_t flags)
>> +{
>> +    struct hvm_ioreq_server *s;
>> +    int rc;
>> +
>> +    if ( type != HVMMEM_ioreq_server )
>> +        return -EINVAL;
>> +
>> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>> +        return -EINVAL;
>> +
>> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>> +
>> +    s = get_ioreq_server(d, id);
>> +
>> +    rc = -ENOENT;
>> +    if ( !s )
>> +        goto out;
>> +
>> +    rc = -EPERM;
>> +    if ( s->emulator != current->domain )
>> +        goto out;
>> +
>> +    rc = p2m_set_ioreq_server(d, flags, s);
>> +
>> + out:
>> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>> +
>> +    if ( rc == 0 && flags == 0 )
>> +    {
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>> +    }
>> +
>> +    return rc;
>> +}
>> +
> The above doesn't really feel right to me. It's really an entry point into the ioreq server code and as such I think it ought to be left in the common code. I suggest replacing the p2m_set_ioreq_server() function with an arch specific function (also taking the type) which you can then implement here.

Agree that it ought to be left in the common code.

However, I am afraid I didn't entirely get your suggestion of how this 
function could be split. On Arm, struct p2m_domain doesn't contain the IOREQ 
fields (p2m->ioreq.XXX), nor is p2m_change_entry_type_global() used, so 
they should be abstracted together with p2m_set_ioreq_server().

So should the whole "if ( rc == 0 && flags == 0 )" check be folded into the 
arch_p2m_set_ioreq_server() implementation on x86? This in turn raises the 
question of whether we can put a spin_unlock after it.

I am wondering whether it would be acceptable to replace 
hvm_map_mem_type_to_ioreq_server() with 
arch_hvm_map_mem_type_to_ioreq_server() here and have the following in the 
common code:

int hvm_map_mem_type_to_ioreq_server(struct domain *d,
                                      ioservid_t id,
                                      uint32_t type,
                                      uint32_t flags)
{
     return arch_hvm_map_mem_type_to_ioreq_server(d, id, type, flags);
}


>
> The rest of the patch looks ok.

Thank you.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:06:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:06:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18908.44000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEkM-000611-Md; Wed, 04 Nov 2020 09:06:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18908.44000; Wed, 04 Nov 2020 09:06:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEkM-00060u-JS; Wed, 04 Nov 2020 09:06:26 +0000
Received: by outflank-mailman (input) for mailman id 18908;
 Wed, 04 Nov 2020 09:06:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okXs=EK=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1kaEkL-00060O-1U
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:06:25 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1abe8c56-48e0-488e-b8cf-7aec765fbb93;
 Wed, 04 Nov 2020 09:06:23 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id b8so21210001wrn.0
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 01:06:23 -0800 (PST)
Received: from dell.default ([91.110.221.242])
 by smtp.gmail.com with ESMTPSA id e25sm1607823wrc.76.2020.11.04.01.06.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Nov 2020 01:06:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ZVySsFYAxVyvKjZwFTKaFMnVATseHBsi4tJ0dIp9TLE=;
        b=K1+Rdz5ogf/5T2jMdF3v9m88jLY/gNqsMKJm061QdgaEQGjK4st39GT4kTZK6c7eAv
         3cJicgPoZBh9YEvxLFGZcZyHE0XnNqBy1XOiWQuoMPDL9faO4RhxQ9VP++HCtvthQR9S
         799P3oQD3ZEWTAn8Z+1PDWW2WOSIRjZ/YNizWauRHAov45YgSsQGFvDCGkpGdXI1KiOg
         QyMiZ+qAf84rNKlDHGrCMb3Z298LfBhK4STT2/MKrClXtDUj1+DxmNn50j9f2xT6h0lS
         2FLRla4Vx5B06Pb85+k62n0XQr8agXXsfRziu37OjAXEKgzSYzjV1kzmZ4S3HLfgeg3X
         jRpA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ZVySsFYAxVyvKjZwFTKaFMnVATseHBsi4tJ0dIp9TLE=;
        b=a1FL8GsqSgKe+OKFkJ/NQsP76ZngEhsvjj7uLBLzTq4Vr+2xRo87W02QehvV8tjKm9
         YbRF+W9ToOlITbapt9Z2sB/1DOfHVXYCMgOZNjiWKVdIQYpsvskRN6fZiwBrE59se+Su
         iulQSS1FRfgj+fRq1rYG8WL3lnxlP1bgDCbZea/k6+zoQi9YxJPuUo0Qz4+Bhr/1qXAa
         a48SxaBR5BgzcfLWGmvwYKp3SVGkxTgEGdMpRFUNRpcxqZ3d8pRPhvNkKHBbuUc1IHXw
         7OoZAPgnY/EMpeMj/T7s8Matc4qUNG5R1wxruxEDSlJ8JrU4Hog6AphMG39OjW7JQnmv
         WHPg==
X-Gm-Message-State: AOAM5318znZdnGV2yF086E/PsziplpR4VVnVrcKeG2gdYEowbTdGStYH
	vhp8UEunn1r8ZCCRu8OFHmX2Bg==
X-Google-Smtp-Source: ABdhPJzn8d8Abh2QVS+FdXToKUqdpB6TwiMSAYlD5+ls0hMcq+h/pWmcBsJhdaEKInesWzdpNiQxsw==
X-Received: by 2002:adf:e384:: with SMTP id e4mr31089426wrm.227.1604480782887;
        Wed, 04 Nov 2020 01:06:22 -0800 (PST)
From: Lee Jones <lee.jones@linaro.org>
To: davem@davemloft.net,
	kuba@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Lee Jones <lee.jones@linaro.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	bpf@vger.kernel.org,
	Daniel Borkmann <daniel@iogearbox.net>,
	Dany Madden <drt@linux.ibm.com>,
	Daris A Nevil <dnevil@snmc.com>,
	Dustin McIntire <dustin@sensoria.com>,
	Erik Stahlman <erik@vt.edu>,
	Geoff Levand <geoff@infradead.org>,
	Grygorii Strashko <grygorii.strashko@ti.com>,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>,
	Ishizaki Kou <kou.ishizaki@toshiba.co.jp>,
	Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>,
	Jens Osterkamp <Jens.Osterkamp@de.ibm.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Allen <jallen@linux.vnet.ibm.com>,
	John Fastabend <john.fastabend@gmail.com>,
	John Williams <john.williams@xilinx.com>,
	Juergen Gross <jgross@suse.com>,
	KP Singh <kpsingh@chromium.org>,
	Kurt Kanzenbach <kurt@linutronix.de>,
	Lijun Pan <ljp@linux.ibm.com>,
	linuxppc-dev@lists.ozlabs.org,
	linux-usb@vger.kernel.org,
	Martin Habets <mhabets@solarflare.com>,
	Martin KaFai Lau <kafai@fb.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Michal Simek <michal.simek@xilinx.com>,
	Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>,
	netdev@vger.kernel.org,
	Nicolas Pitre <nico@fluxnic.net>,
	Paul Durrant <paul@xen.org>,
	Paul Mackerras <paulus@samba.org>,
	Peter Cammaert <pc@denkart.be>,
	Russell King <rmk@arm.linux.org.uk>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Santiago Leon <santi_leon@yahoo.com>,
	Shannon Nelson <snelson@pensando.io>,
	Song Liu <songliubraving@fb.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Sukadev Bhattiprolu <sukadev@linux.ibm.com>,
	Thomas Falcon <tlfalcon@linux.vnet.ibm.com>,
	Utz Bacher <utz.bacher@de.ibm.com>,
	Wei Liu <wei.liu@kernel.org>,
	Woojung Huh <woojung.huh@microchip.com>,
	xen-devel@lists.xenproject.org,
	Yonghong Song <yhs@fb.com>
Subject: [PATCH 00/12] [Set 2] Rid W=1 warnings in Net
Date: Wed,  4 Nov 2020 09:05:58 +0000
Message-Id: <20201104090610.1446616-1-lee.jones@linaro.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This set is part of a larger effort attempting to clean up W=1
kernel builds, which are currently overwhelmingly riddled with
niggly little warnings.

This is the last set.

Lee Jones (12):
  net: usb: lan78xx: Remove lots of set but unused 'ret' variables
  net: ethernet: smsc: smc911x: Mark 'status' as __maybe_unused
  net: ethernet: xilinx: xilinx_emaclite: Document 'txqueue' even if it
    is unused
  net: ethernet: smsc: smc91x: Demote non-conformant kernel function
    header
  net: xen-netback: xenbus: Demote nonconformant kernel-doc headers
  net: ethernet: ti: am65-cpsw-qos: Demote non-conformant function
    header
  net: ethernet: ti: am65-cpts: Document am65_cpts_rx_enable()'s 'en'
    parameter
  net: xen-netfront: Demote non-kernel-doc headers to standard comment
    blocks
  net: ethernet: ibm: ibmvnic: Fix some kernel-doc misdemeanours
  net: ethernet: toshiba: ps3_gelic_net: Fix some kernel-doc
    misdemeanours
  net: ethernet: toshiba: spider_net: Document a whole bunch of function
    parameters
  net: ethernet: ibm: ibmvnic: Fix some kernel-doc issues

 drivers/net/ethernet/ibm/ibmvnic.c            |  27 ++-
 drivers/net/ethernet/smsc/smc911x.c           |   6 +-
 drivers/net/ethernet/smsc/smc91x.c            |   2 +-
 drivers/net/ethernet/ti/am65-cpsw-qos.c       |   2 +-
 drivers/net/ethernet/ti/am65-cpts.c           |   2 +-
 drivers/net/ethernet/toshiba/ps3_gelic_net.c  |   9 +-
 drivers/net/ethernet/toshiba/spider_net.c     |  18 +-
 drivers/net/ethernet/xilinx/xilinx_emaclite.c |   1 +
 drivers/net/usb/lan78xx.c                     | 212 +++++++++---------
 drivers/net/xen-netback/xenbus.c              |   4 +-
 drivers/net/xen-netfront.c                    |   6 +-
 11 files changed, 141 insertions(+), 148 deletions(-)

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: bpf@vger.kernel.org
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dany Madden <drt@linux.ibm.com>
Cc: Daris A Nevil <dnevil@snmc.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dustin McIntire <dustin@sensoria.com>
Cc: Erik Stahlman <erik@vt.edu>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
Cc: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: John Williams <john.williams@xilinx.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: KP Singh <kpsingh@chromium.org>
Cc: Kurt Kanzenbach <kurt@linutronix.de>
Cc: Lijun Pan <ljp@linux.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-usb@vger.kernel.org
Cc: Martin Habets <mhabets@solarflare.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Michal Simek <michal.simek@xilinx.com>
Cc: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
Cc: netdev@vger.kernel.org
Cc: Nicolas Pitre <nico@fluxnic.net>
Cc: Paul Durrant <paul@xen.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Cammaert <pc@denkart.be>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Santiago Leon <santi_leon@yahoo.com>
Cc: Shannon Nelson <snelson@pensando.io>
Cc: Song Liu <songliubraving@fb.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Cc: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Cc: Utz Bacher <utz.bacher@de.ibm.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Woojung Huh <woojung.huh@microchip.com>
Cc: xen-devel@lists.xenproject.org
Cc: Yonghong Song <yhs@fb.com>
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:06:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:06:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18910.44024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEkV-00065M-83; Wed, 04 Nov 2020 09:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18910.44024; Wed, 04 Nov 2020 09:06:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEkV-00065D-4S; Wed, 04 Nov 2020 09:06:35 +0000
Received: by outflank-mailman (input) for mailman id 18910;
 Wed, 04 Nov 2020 09:06:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okXs=EK=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1kaEkU-00060O-Qw
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:06:34 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee51d789-2cc9-4a37-83c3-5fba1d2b78c6;
 Wed, 04 Nov 2020 09:06:30 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id x7so21205841wrl.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 01:06:30 -0800 (PST)
Received: from dell.default ([91.110.221.242])
 by smtp.gmail.com with ESMTPSA id e25sm1607823wrc.76.2020.11.04.01.06.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Nov 2020 01:06:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=PvDQ8jiSdggfocg7QevhSStHyCThX2E8vELTACyHnVM=;
        b=O+gghJDvfp86xHKWNgfCUbwJBZer6plfg8irRb7mhXGvdo7nvbxcHUd63h6bvqOdkS
         tSUboDCoCYlt9DCY7qiK941vY9cWWcrT5dhUFhaoj+IwczKKPvBRX+IcCuLvSThGRsjC
         D58ezTytuMvEmOQY2bTv2oJzSy7M8A1Ra31v/4WhDJ/gdw2Ed9Acb+z0jHdgaSO/RfzF
         aUbBPxkvtygM9zIr6Vg1dDhySOxbbryxrnmEFSg9EURVG1o51j2usPgmkNmwHgmn7/fp
         pBtT9CLc2nU/2WTyzWyLqGfkFnDvqMLpQf/WYFc17TlKpg2vLmamvONRa7oY8mjJW1dG
         wp2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=PvDQ8jiSdggfocg7QevhSStHyCThX2E8vELTACyHnVM=;
        b=Mlm/QNeibY56rRST5gEBt26H2PqzvCEguOAB3sBSsvoTva3kzD/WKdEFsKmOpCPnGw
         L3z+uzxxRxDDgN1h6d6nKgCC8DMoj9nuJvRzDoKyiz912V4VzVr44XONmKO+d40ry6qo
         V1Z+A3CowaKmbTd5/m5bNhON20YahMmkQf7kntzU00/Oyvex/eoxYhAZNXktPDJaUB3/
         pQBd4OXSZYBYEWBSuqoKVBKlV8uh3GA4TVrr4uNySzxNl5YVlHGT2ROkvUz/G2MnWy+G
         jmC9hf0cgVF2Av8c4hnm3QDiWzF2op//4LlZfp+jlVFNKIhDMH0puDt5yukCfV5q0TJD
         vxyQ==
X-Gm-Message-State: AOAM531kpivMTCTeLo+AAZZ37yNYVqe+RTmXYXxfAgFQrqMHsFDz7cpM
	WQnyWkz4TPltLVjuTtXQ54TNAQ==
X-Google-Smtp-Source: ABdhPJxJr4EKzyziM/kfu7bfvSRJog/0BPh5wKW3PHtRCRM2FSGZw1LpbirvmDeO5VkFIi4ChKf8FQ==
X-Received: by 2002:a5d:490a:: with SMTP id x10mr30228709wrq.289.1604480789629;
        Wed, 04 Nov 2020 01:06:29 -0800 (PST)
From: Lee Jones <lee.jones@linaro.org>
To: davem@davemloft.net,
	kuba@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Lee Jones <lee.jones@linaro.org>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org
Subject: [PATCH 05/12] net: xen-netback: xenbus: Demote nonconformant kernel-doc headers
Date: Wed,  4 Nov 2020 09:06:03 +0000
Message-Id: <20201104090610.1446616-6-lee.jones@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201104090610.1446616-1-lee.jones@linaro.org>
References: <20201104090610.1446616-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fixes the following W=1 kernel build warning(s):

 drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'dev' not described in 'frontend_changed'
 drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'frontend_state' not described in 'frontend_changed'
 drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'dev' not described in 'netback_probe'
 drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'id' not described in 'netback_probe'

Cc: Wei Liu <wei.liu@kernel.org>
Cc: Paul Durrant <paul@xen.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/net/xen-netback/xenbus.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f1c1624cec8f5..de1b5471d929b 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -411,7 +411,7 @@ static void read_xenbus_frontend_xdp(struct backend_info *be,
 	vif->xdp_headroom = headroom;
 }
 
-/**
+/*
  * Callback received when the frontend's state changes.
  */
 static void frontend_changed(struct xenbus_device *dev,
@@ -992,7 +992,7 @@ static int netback_remove(struct xenbus_device *dev)
 	return 0;
 }
 
-/**
+/*
  * Entry point to this code when a new device is created.  Allocate the basic
  * structures and switch to InitWait.
  */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:06:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:06:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18911.44036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEka-00069i-I6; Wed, 04 Nov 2020 09:06:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18911.44036; Wed, 04 Nov 2020 09:06:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaEka-00069W-Dl; Wed, 04 Nov 2020 09:06:40 +0000
Received: by outflank-mailman (input) for mailman id 18911;
 Wed, 04 Nov 2020 09:06:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okXs=EK=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1kaEkZ-00060O-Qw
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:06:39 +0000
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf22235b-69f8-49d6-8865-f0aaba7ab350;
 Wed, 04 Nov 2020 09:06:34 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id 13so1627471wmf.0
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 01:06:34 -0800 (PST)
Received: from dell.default ([91.110.221.242])
 by smtp.gmail.com with ESMTPSA id e25sm1607823wrc.76.2020.11.04.01.06.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Nov 2020 01:06:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=kRmJDRZK6hYnZwsJWtSyII6GRFG28FGG7WVn2jtoat0=;
        b=Vy0ifnFNZsdam2EYQqrtqEkqnpo+nDE+vtZ1Exog/zDTxcDNqjIjGsjIU1pi4kN3El
         WJb+mUgrcdzGRp3ibB8ikQUE/nHlWxpqRKWFVzAojniHotRo1+HOdbKPDVdxQrzIx6sR
         o5bH/KMSqaIrUmQaZkvfPrv1gFxI//c+XLeUSm9xVA3kwk7Z26qKNIX1zX0RkPnJ08Uu
         3N/AsWZ+Jl5bU+SZ4vt/SsxAgf4VmWQbEcGIUXSUX2VSwqItVn590AY7hkgF110FhA9z
         j+QBoryGrshn3GGWS2c27qqPt497aXTtIabvKKAeyhw/gsk+3P4aFPmg5/JE0C/2HE/p
         rbiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=kRmJDRZK6hYnZwsJWtSyII6GRFG28FGG7WVn2jtoat0=;
        b=TxOpx8nGQPg3i9XE4R4Njwwol0rz66uGwQ38vi3jhfV4YmPvnsrHviVQLdW3LDsNqM
         jVblj0y1MxiRzqvjjs9tM5GGytrkcr5PUASro1aRmMe9DhWFtiQp31Vq7dJmxpFxjmC1
         kTUUdwN4Sw7JTbJvyli0BYqpma3Aqed8oIm0JC6TeGRYxMofwqlpzSolf9H4G1sEUWZh
         Yq82Rk7Msou0dHffQgjjfwzUCetKiDYgAqM3gwNW8KYBPEczxzFgswIzB+C1aN/0ElCX
         nHR7oCGfn/GlIjSnkB1N/X/OBy4DsKJgrGAa67oTj1o3G0wnOPuH23UGVBHsPYgN5pE9
         dkZQ==
X-Gm-Message-State: AOAM5309+Dl7AA+g55cNfb0HzGvBduHK0bPuvy/2+wIFmW4F3UmFBk/Y
	5NSf47DIASmhJAzwH6T//TgvBQ==
X-Google-Smtp-Source: ABdhPJwqi0yB/+B9/s0ULCABSavH/6P6SV0tchgnVRlqqMJnN0CKjV/Hb1Yj3pHPlBIYEvJ5zSVeQA==
X-Received: by 2002:a1c:1dc1:: with SMTP id d184mr3360241wmd.169.1604480793874;
        Wed, 04 Nov 2020 01:06:33 -0800 (PST)
From: Lee Jones <lee.jones@linaro.org>
To: davem@davemloft.net,
	kuba@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Lee Jones <lee.jones@linaro.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Martin KaFai Lau <kafai@fb.com>,
	Song Liu <songliubraving@fb.com>,
	Yonghong Song <yhs@fb.com>,
	Andrii Nakryiko <andrii@kernel.org>,
	KP Singh <kpsingh@chromium.org>,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org
Subject: [PATCH 08/12] net: xen-netfront: Demote non-kernel-doc headers to standard comment blocks
Date: Wed,  4 Nov 2020 09:06:06 +0000
Message-Id: <20201104090610.1446616-9-lee.jones@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201104090610.1446616-1-lee.jones@linaro.org>
References: <20201104090610.1446616-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Fixes the following W=1 kernel build warning(s):

 drivers/net/xen-netfront.c: In function ‘store_rxbuf’:
 drivers/net/xen-netfront.c:2416:16: warning: variable ‘target’ set but not used [-Wunused-but-set-variable]
 drivers/net/xen-netfront.c:1592: warning: Function parameter or member 'dev' not described in 'netfront_probe'
 drivers/net/xen-netfront.c:1592: warning: Function parameter or member 'id' not described in 'netfront_probe'
 drivers/net/xen-netfront.c:1669: warning: Function parameter or member 'dev' not described in 'netfront_resume'
 drivers/net/xen-netfront.c:2313: warning: Function parameter or member 'dev' not described in 'netback_changed'
 drivers/net/xen-netfront.c:2313: warning: Function parameter or member 'backend_state' not described in 'netback_changed'

Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: KP Singh <kpsingh@chromium.org>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/net/xen-netfront.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 920cac4385bf7..93740ef4cf1b4 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1582,7 +1582,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	return ERR_PTR(err);
 }
 
-/**
+/*
  * Entry point to this code when a new device is created.  Allocate the basic
  * structures and the ring buffers for communication with the backend, and
  * inform the backend of the appropriate details for those.
@@ -1659,7 +1659,7 @@ static void xennet_disconnect_backend(struct netfront_info *info)
 	}
 }
 
-/**
+/*
  * We are reconnecting to the backend, due to a suspend/resume, or a backend
  * driver restart.  We tear down our netif structure and recreate it, but
  * leave the device-layer structures intact so that this is transparent to the
@@ -2305,7 +2305,7 @@ static int xennet_connect(struct net_device *dev)
 	return 0;
 }
 
-/**
+/*
  * Callback received when the backend's state changes.
  */
 static void netback_changed(struct xenbus_device *dev,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:31:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:31:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18934.44048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaF8t-0000YD-M8; Wed, 04 Nov 2020 09:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18934.44048; Wed, 04 Nov 2020 09:31:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaF8t-0000Y6-JE; Wed, 04 Nov 2020 09:31:47 +0000
Received: by outflank-mailman (input) for mailman id 18934;
 Wed, 04 Nov 2020 09:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaF8s-0000Y1-CV
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:31:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 56fbde74-ca87-4682-a618-b7c6c0b3dac8;
 Wed, 04 Nov 2020 09:31:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0BAD1AC0C;
 Wed,  4 Nov 2020 09:31:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604482305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fUfLaQh80kHxebGdmT1vb0kxk26aLHgtzDNP5/t+WZE=;
	b=SZrkiHwWie88Qb3OFFLiaA9/AhqXOuvNgLEitRn+N6FRqb9Uh3EsjZ9d0S49sjR2FlKuFc
	Bhtfq6Pd4C+GZFy4DoiO71w7l6VdgfborjXI8SLvgNczSGRmD48czMgilXZc3JYKIq6TPt
	1eA2iBb0G+GKtbJz5IFM8r33XkRQ8iU=
Subject: Re: [PATCH 08/12] net: xen-netfront: Demote non-kernel-doc headers to
 standard comment blocks
To: Lee Jones <lee.jones@linaro.org>, davem@davemloft.net, kuba@kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Alexei Starovoitov <ast@kernel.org>, Daniel Borkmann <daniel@iogearbox.net>,
 Jesper Dangaard Brouer <hawk@kernel.org>,
 John Fastabend <john.fastabend@gmail.com>, Martin KaFai Lau <kafai@fb.com>,
 Song Liu <songliubraving@fb.com>, Yonghong Song <yhs@fb.com>,
 Andrii Nakryiko <andrii@kernel.org>, KP Singh <kpsingh@chromium.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org, bpf@vger.kernel.org
References: <20201104090610.1446616-1-lee.jones@linaro.org>
 <20201104090610.1446616-9-lee.jones@linaro.org>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <9ba500df-9d22-3a4e-056f-9bb5f2b42440@suse.com>
Date: Wed, 4 Nov 2020 10:31:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201104090610.1446616-9-lee.jones@linaro.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.11.20 10:06, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
> 
>   drivers/net/xen-netfront.c: In function ‘store_rxbuf’:
>   drivers/net/xen-netfront.c:2416:16: warning: variable ‘target’ set but not used [-Wunused-but-set-variable]

Those two warnings are not fixed by the patch.

>   drivers/net/xen-netfront.c:1592: warning: Function parameter or member 'dev' not described in 'netfront_probe'
>   drivers/net/xen-netfront.c:1592: warning: Function parameter or member 'id' not described in 'netfront_probe'
>   drivers/net/xen-netfront.c:1669: warning: Function parameter or member 'dev' not described in 'netfront_resume'
>   drivers/net/xen-netfront.c:2313: warning: Function parameter or member 'dev' not described in 'netback_changed'
>   drivers/net/xen-netfront.c:2313: warning: Function parameter or member 'backend_state' not described in 'netback_changed'
> 
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Jesper Dangaard Brouer <hawk@kernel.org>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Martin KaFai Lau <kafai@fb.com>
> Cc: Song Liu <songliubraving@fb.com>
> Cc: Yonghong Song <yhs@fb.com>
> Cc: Andrii Nakryiko <andrii@kernel.org>
> Cc: KP Singh <kpsingh@chromium.org>
> Cc: xen-devel@lists.xenproject.org
> Cc: netdev@vger.kernel.org
> Cc: bpf@vger.kernel.org
> Signed-off-by: Lee Jones <lee.jones@linaro.org>

With the commit message fixed you can have my:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:36:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:36:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18941.44059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFDf-0000ll-Ac; Wed, 04 Nov 2020 09:36:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18941.44059; Wed, 04 Nov 2020 09:36:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFDf-0000le-7d; Wed, 04 Nov 2020 09:36:43 +0000
Received: by outflank-mailman (input) for mailman id 18941;
 Wed, 04 Nov 2020 09:36:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaFDd-0000lZ-Mq
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:36:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54f9e01b-8461-4680-8f97-e5641947ce30;
 Wed, 04 Nov 2020 09:36:40 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFDa-00010C-8h; Wed, 04 Nov 2020 09:36:38 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFDa-0001V1-0C; Wed, 04 Nov 2020 09:36:38 +0000
X-Inumbo-ID: 54f9e01b-8461-4680-8f97-e5641947ce30
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=A0+ZQVGuozAWyB0PwW/dYFCwPTxrpel0hU9EXFUcS1E=; b=pT4rYi48oYDIhLmwM4vrV4CrhG
	u8Sq3RKeHERMwrmC1TM3wzgWyZeOsArgDW/Hze8KGn7eKseCHxuTWB8wZ4Y1kUS7k+5c4dR+QRThI
	DjN5wrlTrNkTAMHSpVZsHJaehTkKbuO2m1XPWvScJ3QtUYtMg9fCUyLuHhcBsHmM64Nw=;
Subject: Re: [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is
 always fine
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201104081549.3712-1-jgross@suse.com>
 <20201104081549.3712-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6bb8bf65-782c-c569-1698-c00c24f44c09@xen.org>
Date: Wed, 4 Nov 2020 09:36:35 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104081549.3712-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 04/11/2020 08:15, Juergen Gross wrote:
> Even if a spinlock was taken with interrupts on before, calling
> spin_trylock() with interrupts off is fine, as it can't block.
> 
> Add a bool parameter "try" to check_lock() for handling this case.
> 
> Remove the call of check_lock() from _spin_is_locked(), as it really
> serves no purpose and it can even lead to false crashes, e.g. when
> a lock was taken correctly with interrupts enabled and the call of
> _spin_is_locked() happened with interrupts off. In case the lock is
> taken with wrong interrupt flags this will be caught when taking
> the lock.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> V2:
> - corrected comment (Jan Beulich)
> ---
>   xen/common/spinlock.c | 18 +++++++++++-------
>   1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
> index ce3106e2d3..b4aaf6bce6 100644
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -13,7 +13,7 @@
>   
>   static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
>   
> -static void check_lock(union lock_debug *debug)
> +static void check_lock(union lock_debug *debug, bool try)
>   {
>       bool irq_safe = !local_irq_is_enabled();
>   
> @@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
>        *
>        * To guard against this subtle bug we latch the IRQ safety of every
>        * spinlock in the system, on first use.
> +     *
> +     * A spin_trylock() with interrupts off is always fine, as this can't
> +     * block and above deadlock scenario doesn't apply.
>        */
> +    if ( try && irq_safe )
> +        return;
> +
>       if ( unlikely(debug->irq_safe != irq_safe) )
>       {
>           union lock_debug seen, new = { 0 };
> @@ -102,7 +108,7 @@ void spin_debug_disable(void)
>   
>   #else /* CONFIG_DEBUG_LOCKS */
>   
> -#define check_lock(l) ((void)0)
> +#define check_lock(l, t) ((void)0)
>   #define check_barrier(l) ((void)0)
>   #define got_lock(l) ((void)0)
>   #define rel_lock(l) ((void)0)
> @@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
>       spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
>       LOCK_PROFILE_VAR;
>   
> -    check_lock(&lock->debug);
> +    check_lock(&lock->debug, false);
>       preempt_disable();
>       tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
>                                              tickets.head_tail);
> @@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
>   
>   int _spin_is_locked(spinlock_t *lock)
>   {
> -    check_lock(&lock->debug);
> -
>       /*
>        * Recursive locks may be locked by another CPU, yet we return
>        * "false" here, making this function suitable only for use in
> @@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
>   {
>       spinlock_tickets_t old, new;
>   
> -    check_lock(&lock->debug);
> +    check_lock(&lock->debug, true);
>       old = observe_lock(&lock->tickets);
>       if ( old.head != old.tail )
>           return 0;
> @@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
>       BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
>       BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
>   
> -    check_lock(&lock->debug);
> +    check_lock(&lock->debug, true);
>   
>       if ( likely(lock->recurse_cpu != cpu) )
>       {
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:38:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:38:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18946.44072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFFF-0000vO-Mp; Wed, 04 Nov 2020 09:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18946.44072; Wed, 04 Nov 2020 09:38:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFFF-0000vH-Jp; Wed, 04 Nov 2020 09:38:21 +0000
Received: by outflank-mailman (input) for mailman id 18946;
 Wed, 04 Nov 2020 09:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaFFE-0000vC-9Y
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:38:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06bfd912-67c2-401c-9cf5-db3581bc9792;
 Wed, 04 Nov 2020 09:38:19 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFFB-00012z-8E; Wed, 04 Nov 2020 09:38:17 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFFB-0001gp-0t; Wed, 04 Nov 2020 09:38:17 +0000
X-Inumbo-ID: 06bfd912-67c2-401c-9cf5-db3581bc9792
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=gZf0BTVhX+vIA3VdG7OBgsKPbP8EWYIWiotMMRtLN/8=; b=vwl02Uat5R48voil+73l6kqrrH
	XslAY/zx+H0sg4qmo8YJT/epqdNIs+6KNQlm8qHGMmP8xOVu9jcI++snJCjUAaSNqxlYSBvW9sIK0
	dpaTNFLJ7rbpOO8JGnOhPkgIHgWiEG8FtYHQT/HSEZPcJxEHAzVl+JLaRRhOpHKhgTyQ=;
Subject: Re: [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks
 regarding preemption
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201104081549.3712-1-jgross@suse.com>
 <20201104081549.3712-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7adebd65-af5c-a265-cb4e-7dbddf790c3f@xen.org>
Date: Wed, 4 Nov 2020 09:38:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104081549.3712-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 04/11/2020 08:15, Juergen Gross wrote:
> Spinlocks and rwlocks behave differently in the try variants regarding
> preemption: rwlocks switch preemption off before testing the lock,
> while spinlocks do so only after the first check.
> 
> Modify _spin_trylock() to disable preemption before testing whether
> the lock is held, in order to be preemption-ready.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:41:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:41:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18952.44084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFIQ-0001km-6I; Wed, 04 Nov 2020 09:41:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18952.44084; Wed, 04 Nov 2020 09:41:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFIQ-0001kf-39; Wed, 04 Nov 2020 09:41:38 +0000
Received: by outflank-mailman (input) for mailman id 18952;
 Wed, 04 Nov 2020 09:41:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaFIO-0001ka-Oq
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:41:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ee8b392-d115-4846-a83a-b0ce7d78eafd;
 Wed, 04 Nov 2020 09:41:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFIL-00016n-77; Wed, 04 Nov 2020 09:41:33 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFIK-0001v5-VW; Wed, 04 Nov 2020 09:41:33 +0000
X-Inumbo-ID: 6ee8b392-d115-4846-a83a-b0ce7d78eafd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oRZqLVzfa8OX9cgLHLYxyBKVPo00g63SSB3CDWmcECI=; b=cKDPuyTT1xMGtBE0c+0WfgPAsC
	imSiK49oyBSkY1c+Dgq3hgGed946XzDKGarLghPrsPesb1dQb1Wzlm5ITvRseKMX2qa/+A72NpHnU
	PRB8TL6AZ0HTYzvKgnEeHKu79jRUt6UMbK3fqW1X728ulmi9jXshnk9M6wnN7Zdm5/BQ=;
Subject: Re: [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201104081549.3712-1-jgross@suse.com>
 <20201104081549.3712-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <253f82a7-8c7c-0351-9edd-664c421c33fc@xen.org>
Date: Wed, 4 Nov 2020 09:41:31 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104081549.3712-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 04/11/2020 08:15, Juergen Gross wrote:
> Checking whether a lock is consistently used regarding interrupts on
> or off is beneficial for rwlocks, too.
> 
> So add check_lock() calls to rwlock functions. For this purpose make
> check_lock() globally accessible.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:42:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:42:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18955.44095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFJa-0001s3-H8; Wed, 04 Nov 2020 09:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18955.44095; Wed, 04 Nov 2020 09:42:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFJa-0001rw-E6; Wed, 04 Nov 2020 09:42:50 +0000
Received: by outflank-mailman (input) for mailman id 18955;
 Wed, 04 Nov 2020 09:42:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaFJZ-0001rq-4X
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:42:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 508c9b3b-252a-4f83-9cf1-b6b9affc7fb3;
 Wed, 04 Nov 2020 09:42:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E373AF16;
 Wed,  4 Nov 2020 09:42:48 +0000 (UTC)
X-Inumbo-ID: 508c9b3b-252a-4f83-9cf1-b6b9affc7fb3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604482968;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t9wPQ7lokfVzzlp/v4PGMhUOS99/dsu0jEfvYazpJqc=;
	b=MSBzHC7CK9BjXnmbmjgUDzXRpkB5nbEOXEumM/32Ue2L26vrj10tTrI7GyacGgwJVH2IsW
	ufrmEs4xwFFUAgxxnL8JXoW+pdIQYHyLyH/vIYOUm4FyG31wkso52Pu625BOYBTMrpkAlf
	N3L9jc/mYkLf6avU80ku1WSPKAbcV8w=
Subject: Re: [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201104081549.3712-1-jgross@suse.com>
 <20201104081549.3712-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36e3c54f-41c2-e407-39eb-8fd19e17641a@suse.com>
Date: Wed, 4 Nov 2020 10:42:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104081549.3712-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.11.2020 09:15, Juergen Gross wrote:
> Checking whether a lock is consistently used regarding interrupts on
> or off is beneficial for rwlocks, too.
> 
> So add check_lock() calls to rwlock functions. For this purpose make
> check_lock() globally accessible.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:42:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18957.44108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFJj-0001wy-PH; Wed, 04 Nov 2020 09:42:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18957.44108; Wed, 04 Nov 2020 09:42:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFJj-0001wr-MF; Wed, 04 Nov 2020 09:42:59 +0000
Received: by outflank-mailman (input) for mailman id 18957;
 Wed, 04 Nov 2020 09:42:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaFJi-0001wS-HD
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:42:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c61fc5e4-7ce4-490b-a2b1-cf0f6d58ca53;
 Wed, 04 Nov 2020 09:42:57 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFJf-00019f-Pn; Wed, 04 Nov 2020 09:42:55 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFJf-000222-Gr; Wed, 04 Nov 2020 09:42:55 +0000
X-Inumbo-ID: c61fc5e4-7ce4-490b-a2b1-cf0f6d58ca53
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DwXucumgulRseoaPBalUKOCJ3rL802vPK8B20iCXu/M=; b=dhYQJ5+Fbdf8AJn9TKfEuUSIMy
	UvMcv3FYKUgm4UWJ27FFWtX7bGvWNsOJ+l+THfTu8iOl77RnVNt0moAQ8LgSq1oeXaX89BW3r13a4
	kF25bkeEhmLkFUNGSIKyQSC9HgEHNcz2fitvXSGUBFRBTPhXa4FWzqFM9u0Qe9jmtNH0=;
Subject: Re: [PATCH v3 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ddd0454f-700e-c0cc-fb5d-ed44befa3f37@xen.org>
Date: Wed, 4 Nov 2020 09:42:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201016105839.14796-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 16/10/2020 11:58, Juergen Gross wrote:
> The queue for a FIFO event depends on the vcpu_id and the priority
> of the event. When sending an event it might happen that the event
> needs to change queues, and the old queue needs to be kept in order
> to keep the links between queue elements intact. For this purpose
> the event channel contains last_priority and last_vcpu_id elements
> for identifying the old queue.
> 
> In order to avoid races, always access last_priority and last_vcpu_id
> with a single atomic operation, avoiding any inconsistencies.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18961.44120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFJv-00022u-2M; Wed, 04 Nov 2020 09:43:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18961.44120; Wed, 04 Nov 2020 09:43:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFJu-00022n-VT; Wed, 04 Nov 2020 09:43:10 +0000
Received: by outflank-mailman (input) for mailman id 18961;
 Wed, 04 Nov 2020 09:43:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UAnB=EK=kaod.org=groug@srs-us1.protection.inumbo.net>)
 id 1kaFJt-000225-HQ
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:43:09 +0000
Received: from 9.mo51.mail-out.ovh.net (unknown [46.105.48.137])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7b5b7d7-64f8-47e1-a909-648d7b9d2e5a;
 Wed, 04 Nov 2020 09:43:07 +0000 (UTC)
Received: from mxplan5.mail.ovh.net (unknown [10.108.4.47])
 by mo51.mail-out.ovh.net (Postfix) with ESMTPS id 389B1232EDE;
 Wed,  4 Nov 2020 10:43:04 +0100 (CET)
Received: from kaod.org (37.59.142.97) by DAG8EX1.mxp5.local (172.16.2.71)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2044.4; Wed, 4 Nov 2020
 10:43:03 +0100
X-Inumbo-ID: d7b5b7d7-64f8-47e1-a909-648d7b9d2e5a
Authentication-Results: garm.ovh; auth=pass (GARM-97G00225ad514c-a9bd-4a8c-ab32-ef9d4c5470ae,
                    B675344909C57F45DE6B9FBDE8367EDF8CA03E23) smtp.auth=groug@kaod.org
Date: Wed, 4 Nov 2020 10:43:01 +0100
From: Greg Kurz <groug@kaod.org>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
CC: <qemu-devel@nongnu.org>, Christian Borntraeger <borntraeger@de.ibm.com>,
	Fam Zheng <fam@euphon.net>, Richard Henderson <rth@twiddle.net>, "Cornelia
 Huck" <cohuck@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
	<qemu-s390x@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>, Alex
 =?UTF-8?B?QmVubsOpZQ==?= <alex.bennee@linaro.org>, Paolo Bonzini
	<pbonzini@redhat.com>, Christian Schoenebeck <qemu_oss@crudebyte.com>, Wainer
 dos Santos Moschetta <wainersm@redhat.com>, "Daniel P . Berrange"
	<berrange@redhat.com>, David Hildenbrand <david@redhat.com>, Thomas Huth
	<thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, "Anthony
 Perard" <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH-for-5.2 v2 2/4] hw/9pfs: Fix Kconfig dependency problem
 between 9pfs and Xen
Message-ID: <20201104104301.6a6e0009@bahia.lan>
In-Reply-To: <20201104084327.3010593-3-philmd@redhat.com>
References: <20201104084327.3010593-1-philmd@redhat.com>
	<20201104084327.3010593-3-philmd@redhat.com>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.59.142.97]
X-ClientProxiedBy: DAG9EX1.mxp5.local (172.16.2.81) To DAG8EX1.mxp5.local
 (172.16.2.71)
X-Ovh-Tracer-GUID: aae0851a-7046-4d90-b8af-7274b98f0a0c
X-Ovh-Tracer-Id: 8622141487906920720
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedujedruddthedgtdekucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecuqfggjfdpvefjgfevmfevgfenuceurghilhhouhhtmecuhedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhepfffhvffukfgjfhfogggtgfhisehtqhertdertdejnecuhfhrohhmpefirhgvghcumfhurhiiuceoghhrohhugheskhgrohgurdhorhhgqeenucggtffrrghtthgvrhhnpeevlefhtddufffhieevhefhleegleelgfetffetkedugeehjeffgfehhfefueduffenucfkpheptddrtddrtddrtddpfeejrdehledrudegvddrleejnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmohguvgepshhmthhpqdhouhhtpdhhvghlohepmhigphhlrghnhedrmhgrihhlrdhovhhhrdhnvghtpdhinhgvtheptddrtddrtddrtddpmhgrihhlfhhrohhmpehgrhhouhhgsehkrghougdrohhrghdprhgtphhtthhopeigvghnqdguvghvvghlsehlihhsthhsrdigvghnphhrohhjvggtthdrohhrgh

On Wed,  4 Nov 2020 09:43:25 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> Fixes './configure --without-default-devices --enable-xen' build:
> 
>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
>   hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
>   collect2: error: ld returned 1 exit status
> 
> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> I'm not sure b2c00bce54c is the real culprit
> 

FWIW this commit introduced the 9PFS config symbol, which isn't used
anywhere: backends depend on FSDEV_9P, which itself depends on VIRTFS.
So I tend to think b2c00bce54c is the culprit, but _of course_ I could
be wrong :)

Anyway, this patch (+ patch 1) fixes the build break mentioned in the
changelog, so:

Acked-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>

> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> Cc: Greg Kurz <groug@kaod.org>
> Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
> ---
>  hw/9pfs/Kconfig     | 4 ----
>  hw/9pfs/meson.build | 2 +-
>  2 files changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/hw/9pfs/Kconfig b/hw/9pfs/Kconfig
> index d3ebd737301..3ae57496613 100644
> --- a/hw/9pfs/Kconfig
> +++ b/hw/9pfs/Kconfig
> @@ -2,12 +2,8 @@ config FSDEV_9P
>      bool
>      depends on VIRTFS
> 
> -config 9PFS
> -    bool
> -
>  config VIRTIO_9P
>      bool
>      default y
>      depends on VIRTFS && VIRTIO
>      select FSDEV_9P
> -    select 9PFS
> diff --git a/hw/9pfs/meson.build b/hw/9pfs/meson.build
> index cc094262122..99be5d91196 100644
> --- a/hw/9pfs/meson.build
> +++ b/hw/9pfs/meson.build
> @@ -15,6 +15,6 @@
>    'coxattr.c',
>  ))
>  fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
> -softmmu_ss.add_all(when: 'CONFIG_9PFS', if_true: fs_ss)
> +softmmu_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss)
> 
>  specific_ss.add(when: 'CONFIG_VIRTIO_9P', if_true: files('virtio-9p-device.c'))



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:50:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:50:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18974.44131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFQi-00034L-Su; Wed, 04 Nov 2020 09:50:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18974.44131; Wed, 04 Nov 2020 09:50:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFQi-00034E-PY; Wed, 04 Nov 2020 09:50:12 +0000
Received: by outflank-mailman (input) for mailman id 18974;
 Wed, 04 Nov 2020 09:50:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaFQi-000349-1a
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:50:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d866efe1-18f9-4ca1-8523-8d94695793a0;
 Wed, 04 Nov 2020 09:50:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFQe-0001IY-K4; Wed, 04 Nov 2020 09:50:08 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFQe-0002rr-BQ; Wed, 04 Nov 2020 09:50:08 +0000
X-Inumbo-ID: d866efe1-18f9-4ca1-8523-8d94695793a0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=94dmp262XI9nwNVy/C0eSWVi83JcLvwO9vi885Osw2w=; b=b/O5+zg3VtW7pyt2Gi+3dT+cop
	XPbteyiSUr6BPJB79lYKbYy/q54FH3czcc0w9+Lomr8nQYvvaLxGBm0FemHXV5DGFUXSet8kuN0wZ
	IOZUifIas8Bh3iO37Yy5heGpxDXmDOCE4nsl0oNDwCbXllqBA3sSwku6gxgRfnsfPw8k=;
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
 <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
 <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
 <6cf9d927-5e8d-a705-0fac-38f81da07d7e@suse.com>
 <b5ff1e48-1245-5ea7-cf4a-3a198450aa49@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b0065652-5cae-c57e-dcac-d8948f04cda0@xen.org>
Date: Wed, 4 Nov 2020 09:50:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <b5ff1e48-1245-5ea7-cf4a-3a198450aa49@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 02/11/2020 15:26, Jürgen Groß wrote:
> On 02.11.20 16:18, Jan Beulich wrote:
>> On 02.11.2020 14:59, Jürgen Groß wrote:
>>> On 02.11.20 14:52, Jan Beulich wrote:
>>>> On 02.11.2020 14:41, Jürgen Groß wrote:
>>>>> On 20.10.20 11:28, Jan Beulich wrote:
>>>>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>>>>> @@ -360,7 +352,7 @@ static long 
>>>>>>> evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>>>>         if ( rc )
>>>>>>>             goto out;
>>>>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>>>>> +    double_evtchn_lock(lchn, rchn);
>>>>>>
>>>>>> This introduces an unfortunate conflict with my conversion of
>>>>>> the per-domain event lock to an rw one: It acquires rd's lock
>>>>>> in read mode only, while the requirements here would not allow
>>>>>> doing so. (Same in evtchn_close() then.)
>>>>>
>>>>> Is it a problem to use write mode for those cases?
>>>>
>>>> "Problem" can have a wide range of meanings - it's not going to
>>>> be the end of the world, but I view any use of a write lock as
>>>> a problem when a read lock would suffice. This can still harm
>>>> parallelism.
>>>
>>> Both cases are very rare in the lifetime of an event channel. I
>>> don't think you'll ever be able to measure any performance impact from
>>> switching these cases to a write lock for any well-behaved guest.
>>
>> I agree as far as the lifetime of an individual port goes, but
>> we're talking about the per-domain lock here. (Perhaps my
>> choice of context in your patch wasn't the best one, as there
>> it is the per-channel lock of which two instances get acquired.
>> I'm sorry if this has led to any confusion.)
> 
> Hmm, with the switch to an ordinary rwlock it should be fine to drop
> the requirement to hold the domain's event channel lock exclusively
> for taking the per-channel lock as a writer.

I don't think you can drop d->event_lock. It protects us against 
allocating new ports while evtchn_reset() is called.

Without it, you are going to re-open XSA-343.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:56:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18980.44143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFWS-0003KO-Lj; Wed, 04 Nov 2020 09:56:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18980.44143; Wed, 04 Nov 2020 09:56:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFWS-0003KH-Ik; Wed, 04 Nov 2020 09:56:08 +0000
Received: by outflank-mailman (input) for mailman id 18980;
 Wed, 04 Nov 2020 09:56:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaFWR-0003KC-J2
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:56:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id eb1e07be-29bd-4ea4-ac90-862fcc76dd56;
 Wed, 04 Nov 2020 09:56:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05D18AD39;
 Wed,  4 Nov 2020 09:56:06 +0000 (UTC)
X-Inumbo-ID: eb1e07be-29bd-4ea4-ac90-862fcc76dd56
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604483766;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7L3rbLKg4H3gEpdFsKTqGAdRHuG1jvorSTQF5dPDJIM=;
	b=bTCBovgt2jnVLJ61h0zDHqKq7mtGvVw8EwnaO/se9gTtOh1/Ama3DygMxrJJLzoW+njTUS
	haoUJ5QEh9nnyn6km0BXNhQDJqm3Zw4PmszQeSvUhRRgKdCEKZQA7Wlry6VKZhoZkSpaEx
	SUXs9fqZC4obwdp/K87M9zgE5/bKhxM=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
 <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
 <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
 <6cf9d927-5e8d-a705-0fac-38f81da07d7e@suse.com>
 <b5ff1e48-1245-5ea7-cf4a-3a198450aa49@suse.com>
 <b0065652-5cae-c57e-dcac-d8948f04cda0@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9017a6fd-35ec-1d66-7a73-7901f64ea64e@suse.com>
Date: Wed, 4 Nov 2020 10:56:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <b0065652-5cae-c57e-dcac-d8948f04cda0@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.11.20 10:50, Julien Grall wrote:
> Hi Juergen,
> 
> On 02/11/2020 15:26, Jürgen Groß wrote:
>> On 02.11.20 16:18, Jan Beulich wrote:
>>> On 02.11.2020 14:59, Jürgen Groß wrote:
>>>> On 02.11.20 14:52, Jan Beulich wrote:
>>>>> On 02.11.2020 14:41, Jürgen Groß wrote:
>>>>>> On 20.10.20 11:28, Jan Beulich wrote:
>>>>>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>>>>>> @@ -360,7 +352,7 @@ static long 
>>>>>>>> evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>>>>>         if ( rc )
>>>>>>>>             goto out;
>>>>>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>>>>>> +    double_evtchn_lock(lchn, rchn);
>>>>>>>
>>>>>>> This introduces an unfortunate conflict with my conversion of
>>>>>>> the per-domain event lock to an rw one: It acquires rd's lock
>>>>>>> in read mode only, while the requirements here would not allow
>>>>>>> doing so. (Same in evtchn_close() then.)
>>>>>>
>>>>>> Is it a problem to use write mode for those cases?
>>>>>
>>>>> "Problem" can have a wide range of meanings - it's not going to
>>>>> be the end of the world, but I view any use of a write lock as
>>>>> a problem when a read lock would suffice. This can still harm
>>>>> parallelism.
>>>>
>>>> Both cases are very rare in the lifetime of an event channel. I
>>>> don't think you'll ever be able to measure any performance impact from
>>>> switching these cases to a write lock for any well-behaved guest.
>>>
>>> I agree as far as the lifetime of an individual port goes, but
>>> we're talking about the per-domain lock here. (Perhaps my
>>> choice of context in your patch wasn't the best one, as there
>>> it is the per-channel lock of which two instances get acquired.
>>> I'm sorry if this has led to any confusion.)
>>
>> Hmm, with the switch to an ordinary rwlock it should be fine to drop
>> the requirement to hold the domain's event channel lock exclusively
>> for taking the per-channel lock as a writer.
> 
> I don't think you can drop d->event_lock. It protects us against 
> allocating new ports while evtchn_reset() is called.

I wrote "exclusively": with a switch to an rwlock it should be fine to
hold it as a reader, as long as the reset code takes it as a writer.

> Without it, you are going to re-open XSA-343.

Yes, of course.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 09:59:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 09:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18987.44155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFZt-0003Wr-4x; Wed, 04 Nov 2020 09:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18987.44155; Wed, 04 Nov 2020 09:59:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFZt-0003Wk-1s; Wed, 04 Nov 2020 09:59:41 +0000
Received: by outflank-mailman (input) for mailman id 18987;
 Wed, 04 Nov 2020 09:59:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAD3=EK=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kaFZr-0003Wf-8T
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 09:59:39 +0000
Received: from mail-wm1-x336.google.com (unknown [2a00:1450:4864:20::336])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e9bf3eb-cd1b-4b86-a911-4f0d4accc5e0;
 Wed, 04 Nov 2020 09:59:38 +0000 (UTC)
Received: by mail-wm1-x336.google.com with SMTP id k18so1721128wmj.5
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 01:59:38 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-232.amazon.com. [54.240.197.232])
 by smtp.gmail.com with ESMTPSA id f17sm1527673wmh.10.2020.11.04.01.59.36
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 04 Nov 2020 01:59:36 -0800 (PST)
X-Inumbo-ID: 7e9bf3eb-cd1b-4b86-a911-4f0d4accc5e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=IHFCeb24ng8P9vt1Saa9HsamqkximAvecNeAK8FVeqU=;
        b=I+bTsTlfiXGT7IGhNHLFLhxyKgFVJ95LvyjFTAlaLUzqsV63MewoPXsQXJMOlBknAm
         o8c8YoQsIGuYQVRbZFhmbp6WqwEjp2zhkVZMvmRbUCT943Si+Oc7BAKq1lNIwTtzyY2n
         r4goffRbXF8ZnaYHNTgNrt/CfS4hp1Vwwi65RBLA6EuLBP38SfDjmn1hu8uB6annAZsd
         otcZhA7In0OhCv7CIBopweLkYljZlt8Vprknc5wrz4N33VnShXw86McElzMT7cx/ZjgX
         C8ko6Qy4f/uoYJA4g9u1swdw3uKJp6xml6+58aFzZnMlGXpxvzNfhihngIVZk5U03Pzz
         b8Qg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=IHFCeb24ng8P9vt1Saa9HsamqkximAvecNeAK8FVeqU=;
        b=n++fh75TJ/T9232i1YxAL9whtSioGmoCtvGvHFYRWIAOmAesqkV+lhUczvpEI4qUqo
         zLypJvDUFay6gFfjW6zCSRZAvcm8YqtmAvEb7RxUoNgm/oBX+xL/RZsj1LMtwjJRbYZp
         FbJzW6NXd/bsJe+m/kdv3ymRfuRNPD/XbKa5j2Bag9EBidHE2zC/uDm+Az+soCRYKOZ2
         scuHvYinb3G0ira1pRHfkiMhATUvv1g2zzfgK1K6w0xNkAON6LEwN+SfNqLnFj0d0FRj
         RzLUeoH/DApustiOvA5xW0oPQLrBTD1jEXVoCKChnGBUdlMIa+6joCSVK4WULZ3IpqDH
         3+6w==
X-Gm-Message-State: AOAM532Odt3EsV4Ua8Ghk4CLYcCl1K9OZO2CwMnMNZ3P9WHmVBSX6EPZ
	+wHkzFsdYDoqaLWeTUSznZM=
X-Google-Smtp-Source: ABdhPJyqEJvcP2tWo6z4wLtZtpjldmzxe9SWtYvBegrbGkA2uxV5BzimLUzy/wUhks0Nl2ZeEb0YtQ==
X-Received: by 2002:a1c:1b8f:: with SMTP id b137mr3532657wmb.61.1604483977478;
        Wed, 04 Nov 2020 01:59:37 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-2-git-send-email-olekstysh@gmail.com> <003c01d6a6b0$8c418f50$a4c4adf0$@xen.org> <3dd55087-0c07-c9f3-e80a-8b136c226475@gmail.com>
In-Reply-To: <3dd55087-0c07-c9f3-e80a-8b136c226475@gmail.com>
Subject: RE: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Wed, 4 Nov 2020 09:59:35 -0000
Message-ID: <007501d6b291$34160a30$9c421e90$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAH3I7RCAUkXRk0BK34LuKpsc2pQ

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 04 November 2020 09:06
> To: paul@xen.org; xen-devel@lists.xenproject.org
> Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Jan Beulich' <jbeulich@suse.com>; 'Andrew
> Cooper' <andrew.cooper3@citrix.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; 'Julien Grall'
> <julien@xen.org>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Julien
> Grall' <julien.grall@arm.com>
> Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
> 
> 
> On 20.10.20 10:13, Paul Durrant wrote:
> 
> Hi Paul.
> 
> Sorry for the late response.
> 
> >> +
> >> +/* Called when target domain is paused */
> >> +static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
> >> +{
> >> +    p2m_set_ioreq_server(s->target, 0, s);
> >> +}
> >> +
> >> +/*
> >> + * Map or unmap an ioreq server to specific memory type. For now, only
> >> + * HVMMEM_ioreq_server is supported, and in the future new types can be
> >> + * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
> >> + * currently, only write operations are to be forwarded to an ioreq server.
> >> + * Support for the emulation of read operations can be added when an ioreq
> >> + * server has such requirement in the future.
> >> + */
> >> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
> >> +                                                   ioservid_t id,
> >> +                                                   uint32_t type,
> >> +                                                   uint32_t flags)
> >> +{
> >> +    struct hvm_ioreq_server *s;
> >> +    int rc;
> >> +
> >> +    if ( type != HVMMEM_ioreq_server )
> >> +        return -EINVAL;
> >> +
> >> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
> >> +        return -EINVAL;
> >> +
> >> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> >> +
> >> +    s = get_ioreq_server(d, id);
> >> +
> >> +    rc = -ENOENT;
> >> +    if ( !s )
> >> +        goto out;
> >> +
> >> +    rc = -EPERM;
> >> +    if ( s->emulator != current->domain )
> >> +        goto out;
> >> +
> >> +    rc = p2m_set_ioreq_server(d, flags, s);
> >> +
> >> + out:
> >> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> >> +
> >> +    if ( rc == 0 && flags == 0 )
> >> +    {
> >> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> >> +
> >> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> >> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> >> +    }
> >> +
> >> +    return rc;
> >> +}
> >> +
> > The above doesn't really feel right to me. It's really an entry point into the ioreq server code and
> as such I think it ought to be left in the common code. I suggest replacing the p2m_set_ioreq_server()
> function with an arch-specific function (also taking the type) which you can then implement here.
> 
> Agree that it ought to be left in the common code.
> 
> However, I am afraid I didn't entirely get your suggestion of how this
> function could be split. On Arm struct p2m_domain doesn't contain IOREQ
> fields (p2m->ioreq.XXX), nor is p2m_change_entry_type_global() used, so
> they should be abstracted together with p2m_set_ioreq_server().
> 
> So the whole "if ( rc == 0 && flags == 0 )" check should be folded into
> the arch_p2m_set_ioreq_server() implementation on x86? This in turn
> raises the question of whether we can put a spin_unlock after it.
> 

Hi Oleksandr,

I think the code as it stands is really a bit of a layering violation. I
don't really see a problem with retaining the ioreq server lock around
the call to p2m_change_entry_type_global(), so I'd just fold that into
p2m_set_ioreq_server().

  Paul



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 10:02:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 10:02:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.18993.44168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFcn-0004R6-Kt; Wed, 04 Nov 2020 10:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 18993.44168; Wed, 04 Nov 2020 10:02:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFcn-0004Qz-G8; Wed, 04 Nov 2020 10:02:41 +0000
Received: by outflank-mailman (input) for mailman id 18993;
 Wed, 04 Nov 2020 10:02:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaFcl-0004Qu-Ql
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 10:02:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14b14357-a08f-4acc-bf33-7db99ba7f62a;
 Wed, 04 Nov 2020 10:02:38 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFci-0001fr-4H; Wed, 04 Nov 2020 10:02:36 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaFch-00045b-Rc; Wed, 04 Nov 2020 10:02:35 +0000
X-Inumbo-ID: 14b14357-a08f-4acc-bf33-7db99ba7f62a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=y/GlxCVfydnxi4DmfUC2MKhNA18kaAbifN4PtQl9vA0=; b=Ou6lFpjfWWEdgCIvB+AoxrmgXG
	xvbx12Q6Ab/rDxt85jVe9vom4LBSwGAduydSr9pn2fG8R+puvkeYAdGjhyDJ1/4iTN/3BJp53z1ng
	DZzb/sCPL1MihYKyZBXaJ0BbbAYKFebqJoBdjK0gptg7DxmzT1KmWEYwxgeWcf9avkBI=;
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
 <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
 <0c39eb60-9843-9659-f7c5-4e2c3e697ee0@suse.com>
 <c77add99-f92e-126a-5a5e-81a2b5983aa0@suse.com>
 <07cc4218-7aa6-2276-32af-559c0db841b5@suse.com>
 <6cf9d927-5e8d-a705-0fac-38f81da07d7e@suse.com>
 <b5ff1e48-1245-5ea7-cf4a-3a198450aa49@suse.com>
 <b0065652-5cae-c57e-dcac-d8948f04cda0@xen.org>
 <9017a6fd-35ec-1d66-7a73-7901f64ea64e@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cf028788-c70c-d946-72d2-583446494dc7@xen.org>
Date: Wed, 4 Nov 2020 10:02:33 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9017a6fd-35ec-1d66-7a73-7901f64ea64e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 04/11/2020 09:56, Jürgen Groß wrote:
> On 04.11.20 10:50, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 02/11/2020 15:26, Jürgen Groß wrote:
>>> On 02.11.20 16:18, Jan Beulich wrote:
>>>> On 02.11.2020 14:59, Jürgen Groß wrote:
>>>>> On 02.11.20 14:52, Jan Beulich wrote:
>>>>>> On 02.11.2020 14:41, Jürgen Groß wrote:
>>>>>>> On 20.10.20 11:28, Jan Beulich wrote:
>>>>>>>> On 16.10.2020 12:58, Juergen Gross wrote:
>>>>>>>>> @@ -360,7 +352,7 @@ static long 
>>>>>>>>> evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>>>>>>>>>         if ( rc )
>>>>>>>>>             goto out;
>>>>>>>>> -    flags = double_evtchn_lock(lchn, rchn);
>>>>>>>>> +    double_evtchn_lock(lchn, rchn);
>>>>>>>>
>>>>>>>> This introduces an unfortunate conflict with my conversion of
>>>>>>>> the per-domain event lock to an rw one: It acquires rd's lock
>>>>>>>> in read mode only, while the requirements here would not allow
>>>>>>>> doing so. (Same in evtchn_close() then.)
>>>>>>>
>>>>>>> Is it a problem to use write mode for those cases?
>>>>>>
>>>>>> "Problem" can have a wide range of meanings - it's not going to
>>>>>> be the end of the world, but I view any use of a write lock as
>>>>>> a problem when a read lock would suffice. This can still harm
>>>>>> parallelism.
>>>>>
>>>>> Both cases are very rare ones in the lifetime of an event channel. I
>>>>> don't think you'll ever be able to measure any performance impact from
>>>>> switching these cases to a write lock for any well-behaved guest.
>>>>
>>>> I agree as far as the lifetime of an individual port goes, but
>>>> we're talking about the per-domain lock here. (Perhaps my
>>>> choice of context in your patch wasn't the best one, as there
>>>> it is the per-channel lock of which two instances get acquired.
>>>> I'm sorry if this has lead to any confusion.)
>>>
>>> Hmm, with the switch to an ordinary rwlock it should be fine to drop
>>> the requirement to hold the domain's event channel lock exclusively
>>> for taking the per-channel lock as a writer.
>>
>> I don't think you can drop d->event_lock. It protects us against 
>> allocating new ports while evtchn_reset() is called.
> 
> I wrote "exclusively", as in case of a switch to a rwlock it should be
> fine to hold it as a reader in case the reset coding takes it as a
> writer.

Oh I misread your comment. Sorry for the noise.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 10:12:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 10:12:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19004.44192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFmJ-0005UK-MM; Wed, 04 Nov 2020 10:12:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19004.44192; Wed, 04 Nov 2020 10:12:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaFmJ-0005UD-JQ; Wed, 04 Nov 2020 10:12:31 +0000
Received: by outflank-mailman (input) for mailman id 19004;
 Wed, 04 Nov 2020 10:12:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaFmI-0005U5-Al
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 10:12:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 61f7ed7a-541d-45b3-92bd-abbb56f19f3d;
 Wed, 04 Nov 2020 10:12:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2976AC77;
 Wed,  4 Nov 2020 10:12:27 +0000 (UTC)
X-Inumbo-ID: 61f7ed7a-541d-45b3-92bd-abbb56f19f3d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604484748;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=nYkgYjJKItVDm4UZe+Prw4nKA6Okw2VxbQyIPXk2Tr4=;
	b=IYzY6CJlH94WeCAsa5YD9hwvFTHMbh/v2n11VzME6Qj39/lJIzZLZPRyfVhqOlP31lFYn8
	Dx13DPYwsR9nKepybQaqdwTOHJfDgUzmgwMevB+Oht+ODfH6HQKvBDf/Z6XDR7mwDMhSEg
	yzoPpN+J25h78griATtahayjoZdPLss=
From: Jan Beulich <jbeulich@suse.com>
Subject: preparations for 4.14.1
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>
Message-ID: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
Date: Wed, 4 Nov 2020 11:12:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All,

the release is due in a couple of weeks' time. Please point out
backports you find missing from the respective staging branch,
but which you consider relevant. (Ian: Please double-check
there are indeed no tools-side backports needed here.)

Julien, Stefano, on the Arm side I'd like to ask for

5d45ecabe3c0 xen/arm64: force gcc 10+ to always inline generic atomics helpers

just like I did when sending the respective 4.13.2 / 4.12.4
mail. Is there a particular reason it wasn't put in?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 10:51:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 10:51:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19017.44203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaGNa-0000Xh-OS; Wed, 04 Nov 2020 10:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19017.44203; Wed, 04 Nov 2020 10:51:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaGNa-0000Xa-LU; Wed, 04 Nov 2020 10:51:02 +0000
Received: by outflank-mailman (input) for mailman id 19017;
 Wed, 04 Nov 2020 10:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaGNZ-0000XV-So
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 10:51:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b08df39-0b5d-43a5-863b-9c0f669047ce;
 Wed, 04 Nov 2020 10:50:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AFE8DABAE;
 Wed,  4 Nov 2020 10:50:58 +0000 (UTC)
X-Inumbo-ID: 1b08df39-0b5d-43a5-863b-9c0f669047ce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604487058;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Nqi+uR4cKB94poUEzycGIYagU6ii3GpB8CPi8XSd7WU=;
	b=LFV2Fu7QFlo0ktNlIfmHRPVPLuP/dAWnImZnZIAfS27heEwxJtU4scRxMQJNL7XlYrUYIR
	RXFYWKSVGjRfCPWrays7u7YiTzcB65Mm+Yb/3+/2vqkycrxUxrqLlRxhiDTZPhsR+hckP0
	4dB5v7yVX//BH3mMdrAmid/8H5vJ+Us=
Subject: Re: [PATCH] x86/PV: conditionally avoid raising #GP for early guest
 MSR accesses
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
Date: Wed, 4 Nov 2020 11:50:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.11.2020 18:31, Andrew Cooper wrote:
> On 03/11/2020 17:06, Jan Beulich wrote:
>> Prior to 4.15 Linux, when running in PV mode, did not install a #GP
>> handler early enough to cover for example the rdmsrl_safe() of
>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded read
>> of MSR_K7_HWCR later in the same function). The respective change
>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv guests") was
>> backported to 4.14, but no further - presumably since it wasn't really
>> easy because of other dependencies.
>>
>> Therefore, to prevent our change in the handling of guest MSR accesses
>> to render PV Linux 4.13 and older unusable on at least AMD systems, make
>> the raising of #GP on these paths conditional upon the guest having
>> installed a handler. Producing zero for reads and discarding writes
>> isn't necessarily correct and may trip code trying to detect presence of
>> MSRs early, but since such detection logic won't work without a #GP
>> handler anyway, this ought to be a fair workaround.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I appreciate that we probably have to do something, but I don't think
> this is a wise move.

I wouldn't call it wise either, but I'm afraid something along
these lines is necessary.

> Linux is fundamentally buggy.  It is deliberately looking for a
> potential #GP fault given its use of rdmsrl_safe().  The reason this bug
> stayed hidden for so long was as a consequence of Xen's inappropriate
> MSR handling for guests, and the reasons for changing Xen's behaviour
> still stand.

I agree.

> This change, in particular, does not apply to any explicitly handled
> MSRs, and therefore is not a comprehensive fix.

But it's intentional that this deals with the situation in a
generic way, not on a per-MSR basis. If we did as you suggest
further down, we'd have to audit all Linux versions up to 4.14
for similar issues with other MSRs. I don't think this would
be a practical thing to do, and I also don't think that leaving
things as they are until we have concrete reports of problems
is a viable option either.

Adding explicit handling for the two offending MSRs (and any
possible further ones we discover) would imo only be to avoid
issuing the respective log messages.

>  Nor is it robust to
> someone adding code to explicitly handle the impacted MSRs at a later
> date (which we are likely to need to do for HWCR), and which would
> reintroduce this failure to boot.

I'm afraid I don't understand. Looking at the two functions
the patch alters, only X86EMUL_OKAY is used in return statements
other than the final one. If this model is to be followed by
future additions (which I think it ought to be; perhaps we
should add comments to this effect), the code introduced here
will take care of the situation nevertheless.

> We should have the impacted MSRs handled explicitly, with a note stating
> this was a bug in Linux 4.14 and older.  We already have workaround for
> similar bugs in Windows, and it also gives us a timeline to eventually
> removing support for obsolete workarounds, rather than having a "now and
> in the future, we'll explicitly tolerate broken PV behaviour for one bug
> back in ancient linux".

Comparing with Windows isn't very helpful; the patch here is
specifically about PV, and would help other OSes as well in
case they would have missed setting up exceptions early in
just the PV-on-Xen case. For the HVM case I'd indeed rather
see us go the route we've gone for Windows, if need be.

There's one adjustment to the logic here that I've been
considering, but I'm still undecided due to the downsides:
Without exposing the value, we could make the decision to zap
the #GP dependent upon us being able to read the MSR.

The other possible adjustment would be to avoid issuing two
log messages for the same operation (affecting debug builds
only). But the code structure (which isn't really consistent
about when the pre-existing message would get issued)
doesn't directly lend itself to such an adjustment without
altering the behavior for some of the MSRs explicitly
handled.

As a tangent, while discussing this situation, please let's
not forget about this code in Linux:

static u64 xen_read_msr(unsigned int msr)
{
	/*
	 * This will silently swallow a #GP from RDMSR.  It may be worth
	 * changing that.
	 */
	int err;

	return xen_read_msr_safe(msr, &err);
}

static void xen_write_msr(unsigned int msr, unsigned low, unsigned high)
{
	/*
	 * This will silently swallow a #GP from WRMSR.  It may be worth
	 * changing that.
	 */
	xen_write_msr_safe(msr, low, high);
}

Imo this "silent swallowing" has always been the wrong thing
to do, and hence ought to be dropped. Of course right now it
saves the kernel from dying on the HWCR read.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 11:28:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 11:28:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19033.44219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaGxP-0003O7-NT; Wed, 04 Nov 2020 11:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19033.44219; Wed, 04 Nov 2020 11:28:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaGxP-0003O0-KT; Wed, 04 Nov 2020 11:28:03 +0000
Received: by outflank-mailman (input) for mailman id 19033;
 Wed, 04 Nov 2020 11:28:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qpwB=EK=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kaGxN-0003Nv-Vb
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 11:28:02 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3bb1a8dd-decd-472a-bab8-d4a31cbf1952;
 Wed, 04 Nov 2020 11:28:01 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-402-OwZvbp3OPe-dJRQyj0cq6w-1; Wed, 04 Nov 2020 06:27:57 -0500
Received: by mail-wm1-f72.google.com with SMTP id o19so1099410wme.2
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 03:27:57 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id l16sm2008678wrr.83.2020.11.04.03.27.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 04 Nov 2020 03:27:54 -0800 (PST)
X-Inumbo-ID: 3bb1a8dd-decd-472a-bab8-d4a31cbf1952
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604489280;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lalScLFIO8oj7wPwMGuu3i31sDP/fLpN35bkKRc5rK0=;
	b=iMDNXRIhKnES/Cfd9d0m4Gf5Yh/IYNW57Dxx3LV7thnHZC7todvRKL3G/L0ADngpIDVmhj
	jUugzBaRVYMRLtmqeyWAoCNJYcbSB4Kg6rRJ16bapAw5L1cTbdZhpnnWzovXc3XyM6AvK/
	VonYCzlMaNQe+nIBCzib+fRc9hf6jpc=
X-MC-Unique: OwZvbp3OPe-dJRQyj0cq6w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=lalScLFIO8oj7wPwMGuu3i31sDP/fLpN35bkKRc5rK0=;
        b=YIlAPj6Q6z3GlLv2scIn0OO5lb/2CGAqc3qfDLItPBLk+FxRWnqeW8Wq+QghkQxrQ0
         iK1T44UNfpQuPOKTeoKd4b2O2VmtpcnPTfeqt2t7rlIxJA8IYVuTqggYf0nTn0aPV0s2
         nbXdJTf1X1X38RY2JirQ0nU9cYzK93i4+TgZ3GecPCWNPiaUQzhY7rLDNeagfYyyh6RZ
         Zw/33BXs2gRhg56N1ITD2zB/vXmHpJB6TDfzPN4fPP9MuQ1ukCXDd2xIDsY7Rrmsad1X
         dcuF0eDWITCJIceTllASXXVHwYn6O4M8ZVc4S5j+cbFLW7y5DcWLO7qzaroO3kAdweqd
         PVHQ==
X-Gm-Message-State: AOAM530ojDzPrFXgac21oGugCW0gVTHjgTAnAy+mCI8mQO8TCsoytIO2
	JIQwl4R9qiDmmCXYoD5rzjaCbm/dnnInDBl8RGRmyexMKdL42/IF7un+U3MnMGAVdyhPH+xjCnT
	NM0myPuNBxcb38fU/LM5t6TMenFY=
X-Received: by 2002:a1c:a5d8:: with SMTP id o207mr4058906wme.0.1604489275762;
        Wed, 04 Nov 2020 03:27:55 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwg15wfrgT+BshmOIwYfW74zEUfdWPrZdp2kXjj06rz+9QjqdURBX/mJbdLlVqTsjM/n0UQOg==
X-Received: by 2002:a1c:a5d8:: with SMTP id o207mr4058876wme.0.1604489275593;
        Wed, 04 Nov 2020 03:27:55 -0800 (PST)
Subject: Re: [PATCH-for-5.2 v2 2/4] hw/9pfs: Fix Kconfig dependency problem
 between 9pfs and Xen
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
Cc: Christian Borntraeger <borntraeger@de.ibm.com>, Greg Kurz
 <groug@kaod.org>, Fam Zheng <fam@euphon.net>,
 Richard Henderson <rth@twiddle.net>, Cornelia Huck <cohuck@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>, qemu-s390x@nongnu.org,
 Matthew Rosato <mjrosato@linux.ibm.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 "Daniel P . Berrange" <berrange@redhat.com>,
 David Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201104084327.3010593-1-philmd@redhat.com>
 <20201104084327.3010593-3-philmd@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <7049dab1-5b94-6011-6501-609f32414edf@redhat.com>
Date: Wed, 4 Nov 2020 12:27:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201104084327.3010593-3-philmd@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04/11/20 09:43, Philippe Mathieu-Daudé wrote:
> Fixes './configure --without-default-devices --enable-xen' build:
> 
>    /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
>    hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
>    /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
>    /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
>    /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
>    collect2: error: ld returned 1 exit status
> 
> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> Suggested-by: Paolo Bonzini<pbonzini@redhat.com>
> Signed-off-by: Philippe Mathieu-Daudé<philmd@redhat.com>
> ---
> I'm not sure b2c00bce54c is the real culprit

I think it is, probably a wrong conflict resolution.

Paolo



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 11:57:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 11:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19047.44240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHPq-00063m-6U; Wed, 04 Nov 2020 11:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19047.44240; Wed, 04 Nov 2020 11:57:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHPq-00063f-2M; Wed, 04 Nov 2020 11:57:26 +0000
Received: by outflank-mailman (input) for mailman id 19047;
 Wed, 04 Nov 2020 11:57:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ed9z=EK=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kaHPo-00063a-Uv
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 11:57:24 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9e9b505c-a2b5-4078-b703-916355afe2e0;
 Wed, 04 Nov 2020 11:57:22 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-516-lXrsdjXUNr6oVqL82Kh_Qw-1; Wed, 04 Nov 2020 06:57:21 -0500
Received: by mail-wm1-f72.google.com with SMTP id z62so1075200wmb.1
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 03:57:20 -0800 (PST)
Received: from x1w.redhat.com (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id a15sm2226651wrn.75.2020.11.04.03.57.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Nov 2020 03:57:19 -0800 (PST)
X-Inumbo-ID: 9e9b505c-a2b5-4078-b703-916355afe2e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604491042;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vFU0pp+sAaUJV7Uz07XCo0kf/Xpwz4nVfgdLF0eVzjo=;
	b=A2Q5cLB16LgatLO82KUyTVd0znum1wT4xpXBjREVW5nYKlnuqjjFZt0+7V/vyR7266AFAQ
	pUS4ZspjH/xuiM2ku8YpJvnrAxSa9MYu7BEUgrm4bdl9BLXeWHhVuRSKQ0zncnlT4zZ/Nb
	MTwjTitggtTT6nbXdQdktYxbL1qTMS0=
X-MC-Unique: lXrsdjXUNr6oVqL82Kh_Qw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=vFU0pp+sAaUJV7Uz07XCo0kf/Xpwz4nVfgdLF0eVzjo=;
        b=Zw+CDmgVJ+lngKkQWpBI2cjtNXltnLgouh7vzcrvoixLgDbOT+CCMt7JKSBMK+S3fr
         BYdXeyTWwN6bgrVzZ/PXwM9oPwByN5utUz2jZw7DlY6wt+u4G/QckLoIBZ0+40oiYeaq
         kyHTbQpz9MRaJ9xJ5pkB4H+uONWolnPi++8KssBcEW2HwVDbAKOgblEqwfHsatzSRFLN
         p9C6rglMeM3krFBFE9MDOrdC1I5mgMltSchnPbkqprbyVkHnhOBbHXf1pOvYIeao5XHO
         05pu3Hp8EcK31/FBfHVeiuhsDJpv7Wk6sfzxnn7bxZirv+AIICXkIcBltykOaLURbxCQ
         +77Q==
X-Gm-Message-State: AOAM530AkhZEpXbn3OQxweTXBS6auHahdZ1cmtYeOXWgPnx2Kl8spL77
	NAS7hiqKMATh1++Ecuvy68uZ+X79Rp+70OtPxUowwM/w72TFbGjCjdyIOgyKTtmk7zviGT447xw
	JiZigue0R54F9eLUuoc3lVPMw2gI=
X-Received: by 2002:adf:f20e:: with SMTP id p14mr30747724wro.376.1604491039884;
        Wed, 04 Nov 2020 03:57:19 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx6O/flHIYR/UleClnQnn4DXbprA02Qdr+LmWU88GsHun8X3K1oAsLw14FXKYMM+joIbNDW6A==
X-Received: by 2002:adf:f20e:: with SMTP id p14mr30747702wro.376.1604491039757;
        Wed, 04 Nov 2020 03:57:19 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: qemu-s390x@nongnu.org,
	Christian Schoenebeck <qemu_oss@crudebyte.com>,
	Thomas Huth <thuth@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	"Daniel P . Berrange" <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	Fam Zheng <fam@euphon.net>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Greg Kurz <groug@kaod.org>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem between 9pfs and Xen
Date: Wed,  4 Nov 2020 12:57:04 +0100
Message-Id: <20201104115706.3101190-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201104115706.3101190-1-philmd@redhat.com>
References: <20201104115706.3101190-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
CONFIG_9PFS, probably as the result of an incorrect conflict
resolution. This config key is not used anywhere: backends depend on
CONFIG_FSDEV_9P, which itself depends on CONFIG_VIRTFS.

Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
fix the './configure --without-default-devices --enable-xen' build:

  /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function `xen_be_register_common':
  hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
  /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
  /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
  /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
  collect2: error: ld returned 1 exit status

Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
v2: Reworded description (Greg)

Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
Cc: Greg Kurz <groug@kaod.org>
Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
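For context, the dependency chain described in the commit message
corresponds to the following post-patch Kconfig fragment (reproduced
from hw/9pfs/Kconfig purely for illustration; this note is not part of
the commit message):

```kconfig
# Backend code is gated on FSDEV_9P; it is enabled only when a device
# model such as VIRTIO_9P selects it, and only if VIRTFS is enabled.
config FSDEV_9P
    bool
    depends on VIRTFS

config VIRTIO_9P
    bool
    default y
    depends on VIRTFS && VIRTIO
    select FSDEV_9P
```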
---
 hw/9pfs/Kconfig     | 4 ----
 hw/9pfs/meson.build | 2 +-
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/hw/9pfs/Kconfig b/hw/9pfs/Kconfig
index d3ebd737301..3ae57496613 100644
--- a/hw/9pfs/Kconfig
+++ b/hw/9pfs/Kconfig
@@ -2,12 +2,8 @@ config FSDEV_9P
     bool
     depends on VIRTFS
 
-config 9PFS
-    bool
-
 config VIRTIO_9P
     bool
     default y
     depends on VIRTFS && VIRTIO
     select FSDEV_9P
-    select 9PFS
diff --git a/hw/9pfs/meson.build b/hw/9pfs/meson.build
index cc094262122..99be5d91196 100644
--- a/hw/9pfs/meson.build
+++ b/hw/9pfs/meson.build
@@ -15,6 +15,6 @@
   'coxattr.c',
 ))
 fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
-softmmu_ss.add_all(when: 'CONFIG_9PFS', if_true: fs_ss)
+softmmu_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss)
 
 specific_ss.add(when: 'CONFIG_VIRTIO_9P', if_true: files('virtio-9p-device.c'))
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 11:57:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 11:57:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19048.44251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHQ9-00068A-E4; Wed, 04 Nov 2020 11:57:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19048.44251; Wed, 04 Nov 2020 11:57:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHQ9-000683-B6; Wed, 04 Nov 2020 11:57:45 +0000
Received: by outflank-mailman (input) for mailman id 19048;
 Wed, 04 Nov 2020 11:57:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaHQ7-00067k-St
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 11:57:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cf0f170-7279-45a5-a069-a7ec50cf6db9;
 Wed, 04 Nov 2020 11:57:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3C20EAE47;
 Wed,  4 Nov 2020 11:57:41 +0000 (UTC)
X-Inumbo-ID: 6cf0f170-7279-45a5-a069-a7ec50cf6db9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604491061;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=O1uaZHAv5/QAX+mEDwcHXqHWEqFx1Ks4MienuFTtb+A=;
	b=qmPTy0tpPbdZoU8CJac6DNtwN4yn8Cbur1iE1KlYbYE7a5P67IyG7d3bUeGFGas59D5pMq
	mh+OpXRoozUQjHOb0RQmxDUnXJSDwNqW7euMejcIj1uZwJy1P4s5c4MJyb7FPpeSHzyNAC
	19APlfkuY+DNjHZoqZ79QN3pv/Kq9ko=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v4.1 2/2] xen/evtchn: rework per event channel lock
Date: Wed,  4 Nov 2020 12:57:39 +0100
Message-Id: <20201104115739.20144-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking for the case of
sending an event, which removes the need to disable interrupts when
taking the lock.

The lock is needed to avoid races between event channel state changes
(creation, closing, binding) and normal operations (set pending,
[un]masking, priority changes).

Use a rwlock, but with some restrictions:

- normal operations use read_trylock(); if the lock cannot be
  obtained, the operation is omitted or a default state is returned

- closing an event channel uses write_lock(), ASSERT()ing that the
  lock is taken as writer only when the event channel's state is
  appropriate (either free or unbound) before or after the locked
  region.

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4.1:
- corrected comment about lock discipline

V4:
- switch to rwlock
- add ASSERT() to verify correct write_lock() usage

V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 120 ++++++++++++++++++-------------------
 xen/include/xen/event.h    |  55 +++++++++++++----
 xen/include/xen/sched.h    |   3 +-
 5 files changed, 111 insertions(+), 82 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_read_trylock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cd4a2c0501..89606e0385 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -133,7 +133,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        rwlock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -255,7 +255,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -271,14 +270,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -291,32 +290,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -326,7 +319,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -362,7 +354,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -379,7 +371,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -402,7 +394,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -440,14 +431,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -464,7 +455,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -476,13 +466,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -526,7 +516,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -559,14 +548,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -587,7 +576,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -688,14 +676,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -703,9 +691,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -725,7 +713,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -738,7 +725,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return 0;
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -773,7 +761,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -806,9 +794,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +827,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -849,7 +841,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -864,9 +855,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1068,15 +1061,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1155,16 +1149,17 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     struct domain *d = current->domain;
     unsigned int port = set_priority->port;
     struct evtchn *chn;
-    long ret;
-    unsigned long flags;
+    long ret = 0;
 
     if ( !port_is_valid(d, port) )
         return -EINVAL;
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    ret = evtchn_port_set_priority(d, chn, set_priority->priority);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        ret = evtchn_port_set_priority(d, chn, set_priority->priority);
+        evtchn_read_unlock(chn);
+    }
 
     return ret;
 }
@@ -1332,7 +1327,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1345,14 +1339,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1388,7 +1382,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1403,7 +1396,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1413,7 +1407,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 2ed4be78f6..01a0c1ed97 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+/*
+ * Lock an event channel exclusively. This is allowed only when the channel is
+ * free or unbound either when taking or when releasing the lock, as any
+ * concurrent operation on the event channel using evtchn_read_trylock() will
+ * just assume the event channel is free or unbound at the moment when the
+ * evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}
+
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    return read_trylock(&evtchn->lock);
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    read_unlock(&evtchn->lock);
+}
+
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -234,12 +267,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -252,12 +286,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_read_trylock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..97c65d2917 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    rwlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
@@ -114,6 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
+    u8 old_state;      /* State when taking lock in write mode. */
     u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 12:01:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 12:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19059.44264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHTH-000764-EV; Wed, 04 Nov 2020 12:00:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19059.44264; Wed, 04 Nov 2020 12:00:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHTH-00075x-As; Wed, 04 Nov 2020 12:00:59 +0000
Received: by outflank-mailman (input) for mailman id 19059;
 Wed, 04 Nov 2020 12:00:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vd1q=EK=gmail.com=wei.liu.linux@srs-us1.protection.inumbo.net>)
 id 1kaHTG-00075s-Nx
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 12:00:58 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3e54468-407c-4c65-a638-e253e76eee06;
 Wed, 04 Nov 2020 12:00:57 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id c17so2390184wrc.11
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 04:00:57 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id t199sm1871437wmt.46.2020.11.04.04.00.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 04 Nov 2020 04:00:56 -0800 (PST)
X-Inumbo-ID: c3e54468-407c-4c65-a638-e253e76eee06
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=QUSmpWIO++p+EaBmzWHWB9j11K6ZCt/BSPj+8TR1MOs=;
        b=D9MuESqjHpowOfsrDMcQbbWJN04gOFzE8gcX3zYv8JgQl+0Jfap1n95KvENC3AKKmt
         eOIEIhvK+DjzbHrM/0uNF04GWz9uoHtAhTp24XGuW8596eKyQAui8csKt01lSvAyNZvv
         rqBecy1dGNnVMesusVUNS42izR2Atqccowzm7/mdRw0esoRG/L4o/TLc3OlOihw+Ug+A
         UHSTGsl5q0l2ajeup5cymWwqu1q1O/5tjGJl32eOJMr8oZx7o11dhqlcUnIy50UK+pWX
         Q62/vlnGFGUT7/rs39O6fQMwX2mIEH74F5IMMlHsQ0vk8mMeS5xaE4JtmMbAx17Gwix7
         +iYg==
X-Gm-Message-State: AOAM533k/PK5wYuv06vgtzPqdZU7B/w5tPPdd0MMgWn/hyL5vjiy0SHR
	eUUm6H5o9ByPtzcu5C4O/kg=
X-Google-Smtp-Source: ABdhPJyP8064AZcp00/CMnWH1Yf6wcBkvvv9giu/iKtPrEyY0N09EFUoBTXsieznUVpGwXIewZnkPQ==
X-Received: by 2002:a5d:4148:: with SMTP id c8mr31429901wrq.261.1604491256613;
        Wed, 04 Nov 2020 04:00:56 -0800 (PST)
Date: Wed, 4 Nov 2020 12:00:54 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Lee Jones <lee.jones@linaro.org>
Cc: davem@davemloft.net, kuba@kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	bpf@vger.kernel.org
Subject: Re: [PATCH 05/12] net: xen-netback: xenbus: Demote nonconformant
 kernel-doc headers
Message-ID: <20201104120054.jaukbhblpooi5hwf@liuwe-devbox-debian-v2>
References: <20201104090610.1446616-1-lee.jones@linaro.org>
 <20201104090610.1446616-6-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201104090610.1446616-6-lee.jones@linaro.org>
User-Agent: NeoMutt/20180716

On Wed, Nov 04, 2020 at 09:06:03AM +0000, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
> 
>  drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'dev' not described in 'frontend_changed'
>  drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'frontend_state' not described in 'frontend_changed'
>  drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'dev' not described in 'netback_probe'
>  drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'id' not described in 'netback_probe'
> 
> Cc: Wei Liu <wei.liu@kernel.org>

If this is ever needed:

Acked-by: Wei Liu <wei.liu@kernel.org>


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 12:12:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 12:12:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19075.44276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHeT-00086m-H2; Wed, 04 Nov 2020 12:12:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19075.44276; Wed, 04 Nov 2020 12:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaHeT-00086f-Cq; Wed, 04 Nov 2020 12:12:33 +0000
Received: by outflank-mailman (input) for mailman id 19075;
 Wed, 04 Nov 2020 12:12:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaHeS-00086a-PP
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 12:12:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3183ab08-46a1-4f4d-821d-2a753ca1653d;
 Wed, 04 Nov 2020 12:12:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaHeR-0004J1-CE; Wed, 04 Nov 2020 12:12:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaHeR-000480-3B; Wed, 04 Nov 2020 12:12:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaHeR-00015c-1c; Wed, 04 Nov 2020 12:12:31 +0000
X-Inumbo-ID: 3183ab08-46a1-4f4d-821d-2a753ca1653d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8Zt+WUGH0S9wZtCvq6zG31psDFkjRQMAZgs47S9jW1o=; b=ALpJT6YvRdC1XngtcDBzxoFbqf
	dJeMgb/r7qxt4Md1IED/aNuoHzV2LZsmzR2rLcI6h2I7iHO9+Jt9nyBN5G7mAt969x9c9OML5lfqW
	gbIjOognuthfTZdINrd5893sm/q5Ql14BfAsVJHHNQRaigCSPFtw7pOWmOIt5U4fTVO4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156395-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156395: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 12:12:31 +0000

flight 156395 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156395/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156322  2020-10-30 20:02:33 Z    4 days
Testing same since   156395  2020-11-04 09:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   7056f2f89f..9ff9705647  9ff9705647646aa937b5f5c1426a64c69a62b3bd -> smoke
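
[Editorial sketch, not part of the original report: the push line above records a fast-forward of the `smoke` branch from 7056f2f89f to 9ff9705647. The same `old..new` range notation can be used locally to list exactly the commits such a push carried. The snippet below builds a throwaway repository rather than cloning xen.git, so the hashes and commit messages are illustrative only.]

```shell
set -e
# Build a two-commit throwaway repo to demonstrate the A..B range
# notation without needing network access to xen.git.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "baseline revision"
base=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "revision under test"
# Lists the one commit the push would carry; in xen.git the analogous
# command would be: git log --oneline 7056f2f89f..9ff9705647
git log --oneline "$base..HEAD"
```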


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 12:35:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 12:35:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19090.44293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaI0c-0001Y3-5L; Wed, 04 Nov 2020 12:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19090.44293; Wed, 04 Nov 2020 12:35:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaI0c-0001Xw-2T; Wed, 04 Nov 2020 12:35:26 +0000
Received: by outflank-mailman (input) for mailman id 19090;
 Wed, 04 Nov 2020 12:35:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaI0a-0001XO-UM
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 12:35:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5cbcc68-6f10-494f-a307-4026f9f6149c;
 Wed, 04 Nov 2020 12:35:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaI0T-0004m9-HT; Wed, 04 Nov 2020 12:35:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaI0T-0004wb-5F; Wed, 04 Nov 2020 12:35:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaI0T-0001sI-4V; Wed, 04 Nov 2020 12:35:17 +0000
X-Inumbo-ID: d5cbcc68-6f10-494f-a307-4026f9f6149c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0kWkUkOKp6aIxyod5iUbns/jG7Xp5CYOu5Jrr6tk5hM=; b=Uaz31LyhAL8AqCVvrjSgv0QcWO
	CJV/RpQmBlqZwvNGn/8muYO+pIZ2maH0XC1/PyVNfwwZc8Zlp2drI+0cEcUdE25SwaxaQSayTfPBl
	JTNlnjNkBioz5y+Qimg42ELAvyzSeI95C4aqTyFMLfZeMla+/W1lILYMmVxkdQI+G0Ro=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156389-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156389: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 12:35:17 +0000

flight 156389 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156389/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds   18 guest-localmigrate fail in 156373 pass in 156389
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail in 156373 pass in 156389
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 156373

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156373
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156373
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156373
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156373
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156373
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156373
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156373
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156373
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156373
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156373
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156373
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156389  2020-11-04 01:51:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 12:46:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 12:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19099.44305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIBZ-0002Y2-De; Wed, 04 Nov 2020 12:46:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19099.44305; Wed, 04 Nov 2020 12:46:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIBZ-0002Xv-Ag; Wed, 04 Nov 2020 12:46:45 +0000
Received: by outflank-mailman (input) for mailman id 19099;
 Wed, 04 Nov 2020 12:46:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZaG=EK=crudebyte.com=qemu_oss@srs-us1.protection.inumbo.net>)
 id 1kaIBX-0002Xq-8m
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 12:46:43 +0000
Received: from lizzy.crudebyte.com (unknown [91.194.90.13])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id adad4ea0-f79b-49c5-b9bb-862930ad7a1d;
 Wed, 04 Nov 2020 12:46:41 +0000 (UTC)
X-Inumbo-ID: adad4ea0-f79b-49c5-b9bb-862930ad7a1d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=crudebyte.com; s=lizzy; h=Content-Type:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:
	Content-ID:Content-Description;
	bh=/krue73RQvdTWcIblo1I9y7H3d6eOwA1QW3W+4qPpfE=; b=UHh3nTAHNFB2zk0OC/yuN81pAX
	CS3ZhcW/5BcC89K1b2ChCpUquZyD+PzyaUgL8Ho7EDpDwJWA4IZVim23uFNMgtdvecrENGVrtNs2s
	01CK686+wZcKvd7lPZ4BljzXHhOrbVziSz4mYY3VQhU3yOXXGKCChQqqoT2rcJmAmr2UXXkkkV5YI
	nEEBDTWC5LDAqsGp0wZ54FJwoMtufRgdiU+Q8GSfgHfUx2o/UucIDGu5/lFc3X6JJWD/K6rFc0p3x
	3W0m4ixzxQyWOR/ZZ6nRUGTW5O/CaDlLZDTngBRDb28ytW7EQURadvxENGyH+Lk6t++V7qYkPetk7
	UcDWglqg==;
From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>, Fam Zheng <fam@euphon.net>, Thomas Huth <thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, "Daniel P . Berrange" <berrange@redhat.com>, Matthew Rosato <mjrosato@linux.ibm.com>, David Hildenbrand <david@redhat.com>, Alex Bennée <alex.bennee@linaro.org>, Cornelia Huck <cohuck@redhat.com>, Greg Kurz <groug@kaod.org>, Wainer dos Santos Moschetta <wainersm@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org, xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, Richard Henderson <rth@twiddle.net>
Subject: Re: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem between 9pfs and Xen
Date: Wed, 04 Nov 2020 13:18:09 +0100
Message-ID: <8965407.pN9RvXrJQ9@silver>
In-Reply-To: <20201104115706.3101190-3-philmd@redhat.com>
References: <20201104115706.3101190-1-philmd@redhat.com> <20201104115706.3101190-3-philmd@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"

On Wednesday, 4 November 2020 12:57:04 CET Philippe Mathieu-Daudé wrote:
> Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
> CONFIG_9PFS (probably a wrong conflict resolution). This config is
> not used anywhere. Backends depend on CONFIG_FSDEV_9P which itself
> depends on CONFIG_VIRTFS.
>
> Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
> fix the './configure --without-default-devices --enable-xen' build:
>
>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
>   `xen_be_register_common': hw/xen/xen-legacy-backend.c:754: undefined reference to `xen_9pfs_ops'
>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to `local_ops'
>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference to `synth_ops'
>   /usr/bin/ld: libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference to `proxy_ops'
>   collect2: error: ld returned 1 exit status
>
> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Acked-by: Greg Kurz <groug@kaod.org>
> Tested-by: Greg Kurz <groug@kaod.org>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Acked-by: Christian Schoenebeck <qemu_oss@crudebyte.com>

> ---
> v2: Reworded description (Greg)
>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> Cc: Greg Kurz <groug@kaod.org>
> Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
> ---
>  hw/9pfs/Kconfig     | 4 ----
>  hw/9pfs/meson.build | 2 +-
>  2 files changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/hw/9pfs/Kconfig b/hw/9pfs/Kconfig
> index d3ebd737301..3ae57496613 100644
> --- a/hw/9pfs/Kconfig
> +++ b/hw/9pfs/Kconfig
> @@ -2,12 +2,8 @@ config FSDEV_9P
>      bool
>      depends on VIRTFS
>
> -config 9PFS
> -    bool
> -
>  config VIRTIO_9P
>      bool
>      default y
>      depends on VIRTFS && VIRTIO
>      select FSDEV_9P
> -    select 9PFS
> diff --git a/hw/9pfs/meson.build b/hw/9pfs/meson.build
> index cc094262122..99be5d91196 100644
> --- a/hw/9pfs/meson.build
> +++ b/hw/9pfs/meson.build
> @@ -15,6 +15,6 @@
>    'coxattr.c',
>  ))
>  fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
> -softmmu_ss.add_all(when: 'CONFIG_9PFS', if_true: fs_ss)
> +softmmu_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss)
>
>  specific_ss.add(when: 'CONFIG_VIRTIO_9P', if_true:
> files('virtio-9p-device.c'))

Best regards,
Christian Schoenebeck




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 13:16:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 13:16:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19111.44322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIeJ-0005D4-Sk; Wed, 04 Nov 2020 13:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19111.44322; Wed, 04 Nov 2020 13:16:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIeJ-0005Cx-OO; Wed, 04 Nov 2020 13:16:27 +0000
Received: by outflank-mailman (input) for mailman id 19111;
 Wed, 04 Nov 2020 13:16:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yQoB=EK=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kaIeI-0005Cs-CA
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 13:16:26 +0000
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7633ef7-02d2-4b7b-ac54-20794ccd8d66;
 Wed, 04 Nov 2020 13:16:25 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id t13so22848584ljk.12
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 05:16:25 -0800 (PST)
X-Inumbo-ID: f7633ef7-02d2-4b7b-ac54-20794ccd8d66
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=0OL/wmJvx7gfZ3aTve3/MuK63BVPu3oYASR0Jzolnjo=;
        b=okQgrdYvDpd7xsdbJHxqH+E2Z7Mw0Be6KjTCgg2Ne4C0YpYTFKL50VkJ4E0tbsDkk2
         RwxRrInYmIQZoVaMaT2TVpNc2xRg0bgvIXUfHFxy8Oawg5xr8gr+s1UYQMytEykqiNv4
         7Clf20tqCKJ2yVrLQUevqJOhNDmmxlzr4jzpodWcethA2sWUMFdD331WSuZypiEOAHYS
         W1tzEdIwvDOmYFkRFT96jOVTSpRLergU+VtrMPovOwTsN/Q0nY/5ReBXfSCT+sRjQ0vg
         XqMGwVojvnDXwYMpvqy+y+yrvmXqnFev8c1bw+XGk4S5kkupsGXM2vOd1rbyqcc1rV7v
         g6tA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=0OL/wmJvx7gfZ3aTve3/MuK63BVPu3oYASR0Jzolnjo=;
        b=pMaqBiXTGcnEiIrIGvOVQJZSgW6PtK98dCSlijOydRlDOa+hJloWAXd988+fYc5B4v
         1v1jl+l4b/v2RkfLse2dSjRxCJAf73/j6N4xCk4xcVSmEzE3lGfkIuYPdSF7QvMyMUo+
         vbmJUrFu6CMdTYKvW2CcOrVk7w2qihSrC6tp2Xkg4l87XJgpntpCQv7oWiw5g9eHEWqH
         g921J/KpbYBvv7RcVvEVC26fvD6U44yFH4Hpucql94jvTUzPMOdD/zsj9oBq2QF8e6MY
         IzGttoD7iZOn+2NNKbfVqF6bArkqSGHCnstyb9/GnCAvXRmp8DL8EW74BbWgpFOOmBX/
         3xrg==
X-Gm-Message-State: AOAM530mHZhUKVUsWP02MeLRosEV901/3KC+HHstKcvDLbBLXVa5IPwl
	MRH0AYxQqbwmwLyteDWKSgkHXZyPccvA47w1hKE=
X-Google-Smtp-Source: ABdhPJwqPWVzSwls4oFJ80HqOUlFlUkAMQKcUKJWnstqUlkmo9caiEco6a4ZcWN6CcsNelD6LALnFvxGtXWkjZrcw/g=
X-Received: by 2002:a2e:7a0a:: with SMTP id v10mr11464401ljc.13.1604495784376;
 Wed, 04 Nov 2020 05:16:24 -0800 (PST)
MIME-Version: 1.0
References: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
In-Reply-To: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 4 Nov 2020 08:16:13 -0500
Message-ID: <CAKf6xpuzPaRA1gRU_ne6vGxhqZRSTY0UjR3c3_64rffzyqP4oA@mail.gmail.com>
Subject: Re: preparations for 4.14.1
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	George Dunlap <george.dunlap@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 4, 2020 at 5:13 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> All,
>
> the release is due in a couple of weeks time. Please point out
> backports you find missing from the respective staging branch,
> but which you consider relevant. (Ian: Please double check
> there are indeed no tools side backports needed here.)

4ddd6499d999 SUPPORT: Add linux device model stubdom to Toolstack

It was acked before 4.14 was released, but never made it in.  The
feature is in 4.14, and Julien recently committed it to staging.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 13:28:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 13:28:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19118.44333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIpY-0006Cz-Ub; Wed, 04 Nov 2020 13:28:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19118.44333; Wed, 04 Nov 2020 13:28:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIpY-0006Cs-Rp; Wed, 04 Nov 2020 13:28:04 +0000
Received: by outflank-mailman (input) for mailman id 19118;
 Wed, 04 Nov 2020 13:28:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=i8HL=EK=lunn.ch=andrew@srs-us1.protection.inumbo.net>)
 id 1kaIpX-0006Cn-Mw
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 13:28:03 +0000
Received: from vps0.lunn.ch (unknown [185.16.172.187])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1629d08-ab9a-4c44-a38b-4173cae734d4;
 Wed, 04 Nov 2020 13:27:59 +0000 (UTC)
Received: from andrew by vps0.lunn.ch with local (Exim 4.94)
 (envelope-from <andrew@lunn.ch>)
 id 1kaIpF-005D1e-Gs; Wed, 04 Nov 2020 14:27:45 +0100
X-Inumbo-ID: a1629d08-ab9a-4c44-a38b-4173cae734d4
Date: Wed, 4 Nov 2020 14:27:45 +0100
From: Andrew Lunn <andrew@lunn.ch>
To: Lee Jones <lee.jones@linaro.org>
Cc: davem@davemloft.net, kuba@kernel.org, Wei Liu <wei.liu@kernel.org>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>, Paul Durrant <paul@xen.org>,
	netdev@vger.kernel.org, Rusty Russell <rusty@rustcorp.com.au>,
	John Fastabend <john.fastabend@gmail.com>,
	Alexei Starovoitov <ast@kernel.org>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, bpf@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 05/12] net: xen-netback: xenbus: Demote nonconformant
 kernel-doc headers
Message-ID: <20201104132745.GZ933237@lunn.ch>
References: <20201104090610.1446616-1-lee.jones@linaro.org>
 <20201104090610.1446616-6-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201104090610.1446616-6-lee.jones@linaro.org>

On Wed, Nov 04, 2020 at 09:06:03AM +0000, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
> 
>  drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'dev' not described in 'frontend_changed'
>  drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'frontend_state' not described in 'frontend_changed'
>  drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'dev' not described in 'netback_probe'
>  drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'id' not described in 'netback_probe'
> 
> Cc: Wei Liu <wei.liu@kernel.org>
> Cc: Paul Durrant <paul@xen.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Jesper Dangaard Brouer <hawk@kernel.org>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Rusty Russell <rusty@rustcorp.com.au>
> Cc: xen-devel@lists.xenproject.org
> Cc: netdev@vger.kernel.org
> Cc: bpf@vger.kernel.org
> Signed-off-by: Lee Jones <lee.jones@linaro.org>

Reviewed-by: Andrew Lunn <andrew@lunn.ch>

    Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 13:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 13:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19124.44345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIxd-00076t-Qs; Wed, 04 Nov 2020 13:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19124.44345; Wed, 04 Nov 2020 13:36:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaIxd-00076m-NC; Wed, 04 Nov 2020 13:36:25 +0000
Received: by outflank-mailman (input) for mailman id 19124;
 Wed, 04 Nov 2020 13:36:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=i8HL=EK=lunn.ch=andrew@srs-us1.protection.inumbo.net>)
 id 1kaIxb-00076h-W0
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 13:36:24 +0000
Received: from vps0.lunn.ch (unknown [185.16.172.187])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f4b8867-43b3-455d-91c4-7c0146521f9b;
 Wed, 04 Nov 2020 13:36:21 +0000 (UTC)
Received: from andrew by vps0.lunn.ch with local (Exim 4.94)
 (envelope-from <andrew@lunn.ch>)
 id 1kaIxP-005DAy-09; Wed, 04 Nov 2020 14:36:11 +0100
X-Inumbo-ID: 4f4b8867-43b3-455d-91c4-7c0146521f9b
Date: Wed, 4 Nov 2020 14:36:10 +0100
From: Andrew Lunn <andrew@lunn.ch>
To: Lee Jones <lee.jones@linaro.org>
Cc: davem@davemloft.net, kuba@kernel.org, Juergen Gross <jgross@suse.com>,
	Song Liu <songliubraving@fb.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>, netdev@vger.kernel.org,
	bpf@vger.kernel.org, John Fastabend <john.fastabend@gmail.com>,
	linux-kernel@vger.kernel.org, Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>, xen-devel@lists.xenproject.org,
	KP Singh <kpsingh@chromium.org>, Yonghong Song <yhs@fb.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Martin KaFai Lau <kafai@fb.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 08/12] net: xen-netfront: Demote non-kernel-doc headers
 to standard comment blocks
Message-ID: <20201104133610.GB933237@lunn.ch>
References: <20201104090610.1446616-1-lee.jones@linaro.org>
 <20201104090610.1446616-9-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201104090610.1446616-9-lee.jones@linaro.org>

On Wed, Nov 04, 2020 at 09:06:06AM +0000, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
> 
>  drivers/net/xen-netfront.c: In function ‘store_rxbuf’:
>  drivers/net/xen-netfront.c:2416:16: warning: variable ‘target’ set but not used [-Wunused-but-set-variable]
>  drivers/net/xen-netfront.c:1592: warning: Function parameter or member 'dev' not described in 'netfront_probe'
>  drivers/net/xen-netfront.c:1592: warning: Function parameter or member 'id' not described in 'netfront_probe'
>  drivers/net/xen-netfront.c:1669: warning: Function parameter or member 'dev' not described in 'netfront_resume'
>  drivers/net/xen-netfront.c:2313: warning: Function parameter or member 'dev' not described in 'netback_changed'
>  drivers/net/xen-netfront.c:2313: warning: Function parameter or member 'backend_state' not described in 'netback_changed'
> 

commit 8ed7ec1386b646130d80d017ecd4716f866ea570
Author: Andrew Lunn <andrew@lunn.ch>
Date:   Sat Oct 31 19:04:35 2020 +0100

    drivers: net: xen-netfront: Fixed W=1 set but unused warnings

Already in net-next.

	Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 13:42:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 13:42:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19130.44357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJ33-0007zP-Gl; Wed, 04 Nov 2020 13:42:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19130.44357; Wed, 04 Nov 2020 13:42:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJ33-0007zI-Do; Wed, 04 Nov 2020 13:42:01 +0000
Received: by outflank-mailman (input) for mailman id 19130;
 Wed, 04 Nov 2020 13:40:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w5Gd=EK=huawei.com=zhangxinhao1@srs-us1.protection.inumbo.net>)
 id 1kaJ1I-0007xK-8u
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 13:40:12 +0000
Received: from szxga05-in.huawei.com (unknown [45.249.212.191])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a2a619a-caf9-4cf0-bb75-30da3e1cd77a;
 Wed, 04 Nov 2020 13:40:04 +0000 (UTC)
Received: from DGGEMS401-HUB.china.huawei.com (unknown [172.30.72.59])
 by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4CR76H58CBzhg5H;
 Wed,  4 Nov 2020 21:39:03 +0800 (CST)
Received: from huawei.com (10.175.101.6) by DGGEMS401-HUB.china.huawei.com
 (10.3.19.201) with Microsoft SMTP Server id 14.3.487.0; Wed, 4 Nov 2020
 21:38:59 +0800
X-Inumbo-ID: 4a2a619a-caf9-4cf0-bb75-30da3e1cd77a
From: Xinhao Zhang <zhangxinhao1@huawei.com>
To: <qemu-devel@nongnu.org>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <anthony.perard@citrix.com>, <paul@xen.org>,
	<dengkai1@huawei.com>, <alex.chen@huawei.com>, <qemu-trivial@nongnu.org>
Subject: [PATCH] hw/xen: Don't use '#' flag of printf format
Date: Wed, 4 Nov 2020 21:37:09 +0800
Message-ID: <20201104133709.3326630-1-zhangxinhao1@huawei.com>
X-Mailer: git-send-email 2.29.0-rc1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.175.101.6]
X-CFilter-Loop: Reflected

Fix code style: don't use the '#' flag of the printf format ('%#') in
format strings; use the '0x' prefix instead.

Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
Signed-off-by: Kai Deng <dengkai1@huawei.com>
---
 hw/xen/xen_pt.c             | 10 +++++-----
 hw/xen/xen_pt_config_init.c |  6 +++---
 hw/xen/xen_pt_msi.c         | 16 ++++++++--------
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 6d359ee486..a5f3dd590c 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -489,7 +489,7 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s, uint16_t *cmd)
         pci_register_bar(&s->dev, i, type, &s->bar[i]);
 
         XEN_PT_LOG(&s->dev, "IO region %i registered (size=0x%08"PRIx64
-                   " base_addr=0x%08"PRIx64" type: %#x)\n",
+                   " base_addr=0x%08"PRIx64" type: 0x%x)\n",
                    i, r->size, r->base_addr, type);
     }
 
@@ -578,7 +578,7 @@ static void xen_pt_check_bar_overlap(PCIBus *bus, PCIDevice *d, void *opaque)
         if (ranges_overlap(arg->addr, arg->size, r->addr, r->size)) {
             XEN_PT_WARN(&s->dev,
                         "Overlapped to device [%02x:%02x.%d] Region: %i"
-                        " (addr: %#"FMT_PCIBUS", len: %#"FMT_PCIBUS")\n",
+                        " (addr: 0x%"FMT_PCIBUS", len: 0x%"FMT_PCIBUS")\n",
                         pci_bus_num(bus), PCI_SLOT(d->devfn),
                         PCI_FUNC(d->devfn), i, r->addr, r->size);
             arg->rc = true;
@@ -618,8 +618,8 @@ static void xen_pt_region_update(XenPCIPassthroughState *s,
     pci_for_each_device(pci_get_bus(d), pci_dev_bus_num(d),
                         xen_pt_check_bar_overlap, &args);
     if (args.rc) {
-        XEN_PT_WARN(d, "Region: %d (addr: %#"FMT_PCIBUS
-                    ", len: %#"FMT_PCIBUS") is overlapped.\n",
+        XEN_PT_WARN(d, "Region: %d (addr: 0x%"FMT_PCIBUS
+                    ", len: 0x%"FMT_PCIBUS") is overlapped.\n",
                     bar, sec->offset_within_address_space,
                     int128_get64(sec->size));
     }
@@ -786,7 +786,7 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
 
     /* register real device */
     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
-               " to devfn %#x\n",
+               " to devfn 0x%x\n",
                s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function,
                s->dev.devfn);
 
diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index c8724cc7c8..c5c4e943a8 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -1622,7 +1622,7 @@ static int xen_pt_pcie_size_init(XenPCIPassthroughState *s,
         case PCI_EXP_TYPE_PCIE_BRIDGE:
         case PCI_EXP_TYPE_RC_EC:
         default:
-            XEN_PT_ERR(d, "Unsupported device/port type %#x.\n", type);
+            XEN_PT_ERR(d, "Unsupported device/port type 0x%x.\n", type);
             return -1;
         }
     }
@@ -1645,11 +1645,11 @@ static int xen_pt_pcie_size_init(XenPCIPassthroughState *s,
         case PCI_EXP_TYPE_PCIE_BRIDGE:
         case PCI_EXP_TYPE_RC_EC:
         default:
-            XEN_PT_ERR(d, "Unsupported device/port type %#x.\n", type);
+            XEN_PT_ERR(d, "Unsupported device/port type 0x%x.\n", type);
             return -1;
         }
     } else {
-        XEN_PT_ERR(d, "Unsupported capability version %#x.\n", version);
+        XEN_PT_ERR(d, "Unsupported capability version 0x%x.\n", version);
         return -1;
     }
 
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
index fb4b887b92..b71563f98a 100644
--- a/hw/xen/xen_pt_msi.c
+++ b/hw/xen/xen_pt_msi.c
@@ -123,7 +123,7 @@ static int msi_msix_setup(XenPCIPassthroughState *s,
             *ppirq = XEN_PT_UNASSIGNED_PIRQ;
         } else {
             XEN_PT_LOG(&s->dev, "requested pirq %d for MSI%s"
-                       " (vec: %#x, entry: %#x)\n",
+                       " (vec: 0x%x, entry: 0x%x)\n",
                        *ppirq, is_msix ? "-X" : "", gvec, msix_entry);
         }
     }
@@ -142,7 +142,7 @@ static int msi_msix_setup(XenPCIPassthroughState *s,
                                      msix_entry, table_base);
         if (rc) {
             XEN_PT_ERR(&s->dev,
-                       "Mapping of MSI%s (err: %i, vec: %#x, entry %#x)\n",
+                       "Mapping of MSI%s (err: %i, vec: 0x%x, entry 0x%x)\n",
                        is_msix ? "-X" : "", errno, gvec, msix_entry);
             return rc;
         }
@@ -165,8 +165,8 @@ static int msi_msix_update(XenPCIPassthroughState *s,
     int rc = 0;
     uint64_t table_addr = 0;
 
-    XEN_PT_LOG(d, "Updating MSI%s with pirq %d gvec %#x gflags %#x"
-               " (entry: %#x)\n",
+    XEN_PT_LOG(d, "Updating MSI%s with pirq %d gvec 0x%x gflags 0x%x"
+               " (entry: 0x%x)\n",
                is_msix ? "-X" : "", pirq, gvec, gflags, msix_entry);
 
     if (is_msix) {
@@ -208,11 +208,11 @@ static int msi_msix_disable(XenPCIPassthroughState *s,
     }
 
     if (is_binded) {
-        XEN_PT_LOG(d, "Unbind MSI%s with pirq %d, gvec %#x\n",
+        XEN_PT_LOG(d, "Unbind MSI%s with pirq %d, gvec 0x%x\n",
                    is_msix ? "-X" : "", pirq, gvec);
         rc = xc_domain_unbind_msi_irq(xen_xc, xen_domid, gvec, pirq, gflags);
         if (rc) {
-            XEN_PT_ERR(d, "Unbinding of MSI%s failed. (err: %d, pirq: %d, gvec: %#x)\n",
+            XEN_PT_ERR(d, "Unbinding of MSI%s failed. (err: %d, pirq: %d, gvec: 0x%x)\n",
                        is_msix ? "-X" : "", errno, pirq, gvec);
             return rc;
         }
@@ -539,7 +539,7 @@ int xen_pt_msix_init(XenPCIPassthroughState *s, uint32_t base)
     }
 
     if (id != PCI_CAP_ID_MSIX) {
-        XEN_PT_ERR(d, "Invalid id %#x base %#x\n", id, base);
+        XEN_PT_ERR(d, "Invalid id 0x%x base 0x%x\n", id, base);
         return -1;
     }
 
@@ -582,7 +582,7 @@ int xen_pt_msix_init(XenPCIPassthroughState *s, uint32_t base)
         XEN_PT_ERR(d, "Can't open /dev/mem: %s\n", strerror(errno));
         goto error_out;
     }
-    XEN_PT_LOG(d, "table_off = %#x, total_entries = %d\n",
+    XEN_PT_LOG(d, "table_off = 0x%x, total_entries = %d\n",
                table_off, total_entries);
     msix->table_offset_adjust = table_off & 0x0fff;
     msix->phys_iomem_base =
-- 
2.29.0-rc1



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:03:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19143.44369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJNR-0001Su-8z; Wed, 04 Nov 2020 14:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19143.44369; Wed, 04 Nov 2020 14:03:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJNR-0001Sn-5w; Wed, 04 Nov 2020 14:03:05 +0000
Received: by outflank-mailman (input) for mailman id 19143;
 Wed, 04 Nov 2020 14:03:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oYrD=EK=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1kaJNP-0001Si-Az
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:03:03 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 284b590e-2be6-4c20-8c1b-07a31e17e590;
 Wed, 04 Nov 2020 14:03:01 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A4DsuoD112289;
 Wed, 4 Nov 2020 14:02:56 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 34hhw2pud5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 04 Nov 2020 14:02:56 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A4E0jWQ009726;
 Wed, 4 Nov 2020 14:02:55 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3020.oracle.com with ESMTP id 34hw0fjnn9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 04 Nov 2020 14:02:55 +0000
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0A4E2pcr004324;
 Wed, 4 Nov 2020 14:02:52 GMT
Received: from char.us.oracle.com (/10.152.32.25)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 04 Nov 2020 06:02:51 -0800
Received: by char.us.oracle.com (Postfix, from userid 1000)
 id 8D47C6A00F9; Wed,  4 Nov 2020 09:04:38 -0500 (EST)
X-Inumbo-ID: 284b590e-2be6-4c20-8c1b-07a31e17e590
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=co77JaRJ5bF4cYzdCBPJuo/NyzpVk8qYpRiUCdlnp68=;
 b=jVhuBoPc/dJ+iGyewioD5U7LQ3IeSTP587CVWuXApuERGNRAL+7fS27R7ef0ch8tnyWd
 gZ5sSOV3SuvTV7/6FlY5sBNh337J620i2ZInM/EnkbJEvAtR7q2oJ5VBEpO5c2plxBUZ
 AczmPAqHZ7dy9av2C/jYX8xIJ2rKtRYSwNOXczHaU7wy2OjpVOXFcqUDtE4RXJPdGaIe
 gEhBv6gISN1YEgdxfafSqu2yjasV77DQgr/+bou+BP1G41QwnSJET9OnxuIIBSOt2lrr
 Wb9784lVSgtAAs8p/ECZ/w9Xw9onIViN/QhaW6WkhLJNe5amtRFI4EQJAyvt6lijWXkx hA== 
Date: Wed, 4 Nov 2020 09:04:38 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-5.10] swiotlb: remove the tbl_dma_addr argument to
 swiotlb_tbl_map_single
Message-ID: <20201104140438.GA16892@char.us.oracle.com>
References: <20201023063309.3472987-1-hch@lst.de>
 <20201103094643.GA18936@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201103094643.GA18936@lst.de>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9794 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 phishscore=0 bulkscore=0 spamscore=0 malwarescore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011040105
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9794 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 malwarescore=0 mlxscore=0
 suspectscore=0 clxscore=1015 priorityscore=1501 impostorscore=0
 spamscore=0 lowpriorityscore=0 mlxlogscore=999 phishscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011040104

On Tue, Nov 03, 2020 at 10:46:43AM +0100, Christoph Hellwig wrote:
> ping?

Hopefully this goes through. I am in the process of testing it but ran
into testing issues that I believe are unrelated.


> 
> On Fri, Oct 23, 2020 at 08:33:09AM +0200, Christoph Hellwig wrote:
> > The tbl_dma_addr argument is used to check the DMA boundary for the
> > allocations, and thus needs to be a dma_addr_t.  swiotlb-xen instead
> > passed a physical address, which could lead to incorrect results for
> > strange offsets.  Fix this by removing the parameter entirely and hard
> > code the DMA address for io_tlb_start instead.
> > 
> > Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> > ---
> >  drivers/iommu/intel/iommu.c |  5 ++---
> >  drivers/xen/swiotlb-xen.c   |  3 +--
> >  include/linux/swiotlb.h     | 10 +++-------
> >  kernel/dma/swiotlb.c        | 16 ++++++----------
> >  4 files changed, 12 insertions(+), 22 deletions(-)
> > 
> > diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> > index 8651f6d4dfa032..6b560e6f193096 100644
> > --- a/drivers/iommu/intel/iommu.c
> > +++ b/drivers/iommu/intel/iommu.c
> > @@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
> >  	 * page aligned, we don't need to use a bounce page.
> >  	 */
> >  	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
> > -		tlb_addr = swiotlb_tbl_map_single(dev,
> > -				phys_to_dma_unencrypted(dev, io_tlb_start),
> > -				paddr, size, aligned_size, dir, attrs);
> > +		tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
> > +				aligned_size, dir, attrs);
> >  		if (tlb_addr == DMA_MAPPING_ERROR) {
> >  			goto swiotlb_error;
> >  		} else {
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 71ce1b7a23d1cc..2b385c1b4a99cb 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
> >  	 */
> >  	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
> >  
> > -	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
> > -				     phys, size, size, dir, attrs);
> > +	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
> >  	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
> >  		return DMA_MAPPING_ERROR;
> >  
> > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > index 513913ff748626..3bb72266a75a1d 100644
> > --- a/include/linux/swiotlb.h
> > +++ b/include/linux/swiotlb.h
> > @@ -45,13 +45,9 @@ enum dma_sync_target {
> >  	SYNC_FOR_DEVICE = 1,
> >  };
> >  
> > -extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> > -					  dma_addr_t tbl_dma_addr,
> > -					  phys_addr_t phys,
> > -					  size_t mapping_size,
> > -					  size_t alloc_size,
> > -					  enum dma_data_direction dir,
> > -					  unsigned long attrs);
> > +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> > +		size_t mapping_size, size_t alloc_size,
> > +		enum dma_data_direction dir, unsigned long attrs);
> >  
> >  extern void swiotlb_tbl_unmap_single(struct device *hwdev,
> >  				     phys_addr_t tlb_addr,
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index b4eea0abc3f002..92e2f54f24c01b 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -441,14 +441,11 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
> >  	}
> >  }
> >  
> > -phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> > -				   dma_addr_t tbl_dma_addr,
> > -				   phys_addr_t orig_addr,
> > -				   size_t mapping_size,
> > -				   size_t alloc_size,
> > -				   enum dma_data_direction dir,
> > -				   unsigned long attrs)
> > +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> > +		size_t mapping_size, size_t alloc_size,
> > +		enum dma_data_direction dir, unsigned long attrs)
> >  {
> > +	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
> >  	unsigned long flags;
> >  	phys_addr_t tlb_addr;
> >  	unsigned int nslots, stride, index, wrap;
> > @@ -667,9 +664,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
> >  	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
> >  			      swiotlb_force);
> >  
> > -	swiotlb_addr = swiotlb_tbl_map_single(dev,
> > -			phys_to_dma_unencrypted(dev, io_tlb_start),
> > -			paddr, size, size, dir, attrs);
> > +	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
> > +			attrs);
> >  	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
> >  		return DMA_MAPPING_ERROR;
> >  
> > -- 
> > 2.28.0
> > 
> > _______________________________________________
> > iommu mailing list
> > iommu@lists.linux-foundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/iommu
> ---end quoted text---


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:11:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19153.44382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJV3-0002Oo-7y; Wed, 04 Nov 2020 14:10:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19153.44382; Wed, 04 Nov 2020 14:10:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJV3-0002Oh-4m; Wed, 04 Nov 2020 14:10:57 +0000
Received: by outflank-mailman (input) for mailman id 19153;
 Wed, 04 Nov 2020 14:10:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osUd=EK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kaJV1-0002Oc-U6
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:10:56 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.87]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abd61f81-acd7-49dc-9589-6f3b1d50b515;
 Wed, 04 Nov 2020 14:10:53 +0000 (UTC)
Received: from AM4PR0701CA0012.eurprd07.prod.outlook.com
 (2603:10a6:200:42::22) by PR3PR08MB5577.eurprd08.prod.outlook.com
 (2603:10a6:102:81::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Wed, 4 Nov
 2020 14:10:51 +0000
Received: from AM5EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:200:42:cafe::6) by AM4PR0701CA0012.outlook.office365.com
 (2603:10a6:200:42::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.10 via Frontend
 Transport; Wed, 4 Nov 2020 14:10:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT026.mail.protection.outlook.com (10.152.16.155) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 4 Nov 2020 14:10:51 +0000
Received: ("Tessian outbound d5e343850048:v64");
 Wed, 04 Nov 2020 14:10:50 +0000
Received: from 3309a66f0146.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1538297D-4C67-408D-9D30-4C74CCDEA067.1; 
 Wed, 04 Nov 2020 14:10:44 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3309a66f0146.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 04 Nov 2020 14:10:44 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB3348.eurprd08.prod.outlook.com (2603:10a6:208:65::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 4 Nov
 2020 14:10:43 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Wed, 4 Nov 2020
 14:10:43 +0000
X-Inumbo-ID: abd61f81-acd7-49dc-9589-6f3b1d50b515
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JhDyhz0csRHAYH2N6Ig7dn10P+0PUlzAJJyD1Tm3b8A=;
 b=uluDOoZCIsd6t2V8W89WjArumgU09l73S0r9/qWV5iinvD9X35MqDENS465DGP6bKmyrPNKz6VuWavx/PCa4+/cs6N14vLU71ukr/FFRyYUfLb6h52Hy4WGh7ucQB1TONPs98oo7cE8R8NVnGMEKWIkflOKa8TOV6kCaJlQwiVU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a2b72125db14f3ca
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dKv0ekqTTJVPJBhEouYcpmgDO4cAD5ErUVoRN4MLwHvQrU2x2AsOtkpxpb4RXOvm7Z1QLt1xqPDfiXJajJCizDM6R8fVfxOEyBzVpRzbKojwAAipGS6mt6PWvof/pNcbu7ZUG9VkKDoxvR8EhpjbzqNayeGnK6hcYJjxfokTWl1E19elnJgmSmEmU0X32dLdbGyqUPUEZQqCc6UurcpH/Wj3Fjz3sNpYuF5KNOZPgjZISsFT9vcYqA4x8XbL8nBP5nvflGceYG5wytpIUaLBf5gy0jtiCmO+8NRxRfH6X74bJHruYGmvpWIPlRbSjh29qPEZlKWCgobBzi7dvkNMYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JhDyhz0csRHAYH2N6Ig7dn10P+0PUlzAJJyD1Tm3b8A=;
 b=boIC7HRriaCSvUuHpcmgVCo9Gjrs6CYl49IrS2iYPp1XFmfOu3OohUnDxG4jKvpgc0w3DXig7tZEDo/LlDpFiIdPNjeZ8hVV0YBu6UdodIsPy/q8wWuZXIjzrEcH/X/x4sBVl8V3Uzw2fLpE06j52MQjkShDDGVVU16fzJSa5EinU+LQbX8v7nDFSDB6tINce742muVeiMBRNX2w34HtzEgXqb87zID31wg88tGnlhyJZf5Xcz6/0CtWXNzibKhVsNmsXqEG2XhWDG0GYkmY9NayEts+ZsnT76WurzZ986oXauIiD0dYhHpvMgNf0eI7mZb1bI4CsvGQ5ziWvOp+Tw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Rich Persaud <persaur@gmail.com>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <stefano.stabellini@xilinx.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"julien@xen.org" <julien@xen.org>, "wl@xen.org" <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Daniel
 DeGraaf <dgdegra@tycho.nsa.gov>, Daniel Smith <dpsmith@apertussolutions.com>,
	Roman Shaposhnik <roman@zededa.com>,
	=?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
Thread-Topic: [RFC PATCH] xen: EXPERT clean-up
Thread-Index:
 AQHWrxxOadLsV5e3vEiMzomhqHWckam0fmYAgADlPwCAALC9AIAAvt0AgAAH4QCAABN9gIABG7kA
Date: Wed, 4 Nov 2020 14:10:43 +0000
Message-ID: <CA576213-6C2B-4F6C-9C61-EFBBE0FFB960@arm.com>
References: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
 <E359BD65-2917-4087-A6E1-0AD5521CF823@gmail.com>
 <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 815397ca-6ef7-4469-9b07-08d880cb6f99
x-ms-traffictypediagnostic: AM0PR08MB3348:|PR3PR08MB5577:
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB5577B9A215362FFFCE67A3129DEF0@PR3PR08MB5577.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 O8uAL/yjEIjnbGeRkvbbko3Nl6Se4jNE8lGZQoMEtQNVCq642rOEBFf6xQwdYwc8zPw+tsSpXgLeqHcPQWy9jMY+rZbp0kMx/SbtDJ/zoTDx+xNYWFNgILI0f7uJzGLxVl+SycLIgWwros7xexnsdzKBd+46qhBT3DfdudgGos/GL6LjXyscbcDvdaCYPYjV6LCGjPTWiJXVHRws1ZCqC/yJgge5aaOuS6kesMfk1QAfUgDEezqWWDW/8OCFwl1PakXmTv9a9MEt8OuBWUV8nMuxQe9ykg46xyqAk5jPjqZwWBtoAqwqd75QglDqYyrFxHaKoG2BqtzGjZIKTtoLKQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3682.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(396003)(39860400002)(346002)(136003)(366004)(478600001)(54906003)(316002)(6486002)(5660300002)(186003)(6916009)(26005)(8676002)(91956017)(36756003)(53546011)(2616005)(64756008)(6506007)(66556008)(7416002)(86362001)(33656002)(6512007)(8936002)(4326008)(66476007)(83380400001)(66946007)(71200400001)(76116006)(66446008)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 mnSFHwFuIB5C4wApsu5+mxHc22X0E92A4BYlz2q3JPscSEL1tmKgYE7GDaoExZwXXrA5Tmftv4yA9qaGLAN2QuzSNAMhWTsDDADg8q77xKXJjiVtwSi1pWxDFAaufbA/c9HrwVD/BAZIIGoZW/rPcBAyYCtYzoxugawzl/ow2ww1UndXUTQbSHJqNiJ/TYlVthuvCZIhtE89305I9LbpS30YxbzamNLUKos5PPkEpL44X3j02qnJXRvj+snmDHVDJKt2p8chjfponcnrCQ0X8DD/UfH3d4b7PGGQOW2r+SHobyWuHsO+tJUOWPypYT/7i9CaQxBiP7VEOuCQH+KnHf2JL9+JzkvsiDMTyhBssYZH5b+4N7qXnmP7mqta2BPWnlAPqMhguEyQoWe0Ku0qj5aYBnOCmmjNL6FVwf6VDxrMfoaINqO5zmd3kR9oJHACfy7vN6q2nF3EPRofRjM6nsbDWoe+AVCZ0XUqmhFzowVs738IZTckfSCPuFVeMJGqR7mwoAfUpsZ+pWRXMEY7x2TaMueCo2shBK+ivx/fPMoyUFKX8V/F1I1a6vQNiEcG42d5OIC+w4bDG1Vh81mcLOxAINTF2TpNOBlHz2hRmAUek+38mZrNYrbOzkuC4wiILsvspGQNUEI3ET/ABbqr8Q==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <0E9D4B1B93FEAE45AFC27E755C873CF4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3348
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e36ef100-eb3c-4753-b931-08d880cb6af1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fZzVIvABKM1A4dOD+anEzEzPnHVNHZukD00nCqGNx2Md/b0dqnxWt3SqG0Ggzk8o/0fcxP9/LCKucr9rkfGbMN3fxXqqc4Ej4JUkjL1fQ3LtTn3vbfnGjP9Ar304D+ovIfYDotrsN6lEDrxINnyJqtvBjk+xN1icIpztoDsYmIvfS+CtBLhUCEWJFh0S+NHS5ngDZsQ/2ZC3xAjFAtMKVC4GNFVV5Ld9jYbRP3M7fi7WLADJo+w0XnHNCqPW2mwiAVTZasyoQLImYN5f0fnirUDijj/VSAAo3Bt4n9VCrtR1mT1H+scvtlk/PpQnPO2VStc5MnLnSoyArmrsv+o5PqBh9IJoyIxozs+gApBO3h60Cj8fVxdywLJICVBV6Bq9mXT0rB1f9XCgm3S+Qqa1Fw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39860400002)(346002)(396003)(376002)(46966005)(47076004)(4326008)(36756003)(6862004)(70206006)(26005)(70586007)(86362001)(82310400003)(336012)(356005)(8936002)(6486002)(33656002)(82740400003)(83380400001)(6512007)(107886003)(53546011)(8676002)(5660300002)(2616005)(478600001)(186003)(2906002)(81166007)(6506007)(36906005)(316002)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Nov 2020 14:10:51.0451
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 815397ca-6ef7-4469-9b07-08d880cb6f99
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5577

Hi Stefano,

> On 3 Nov 2020, at 21:15, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 3 Nov 2020, Rich Persaud wrote:
>> On Nov 3, 2020, at 14:37, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>>>     SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>>> 
>>>>>>> config ARM_SSBD
>>>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>>>> -    depends on HAS_ALTERNATIVE
>>>>>>> +    bool "Speculative Store Bypass Disable"
>>>>>>> +    depends on HAS_ALTERNATIVE && EXPERT
>>>>>>>   default y
>>>>>> 
>>>>>> At the example of this, I'm afraid when the default isn't "n"
>>>>>> (or there's no default directive at all, as ought to be
>>>>>> equivalent to and preferred over "default n"), such a
>>>>>> transformation is not functionally identical: Before your
>>>>>> change, with !EXPERT this option defaults to y. After your
>>>>>> change this option is unavailable (which resolves to it being
>>>>>> off for all consuming purposes).
>>>>>> 
>>>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>>>> (for this construct indeed only making the prompt conditional,
>>>>>> not the entire option), but there are also cases where the
>>>>>> cleanup you do is indeed desirable / helpful.
>>>>> 
>>>>> Yeah, thanks for catching it, it is obviously a problem.
>>>>> 
>>>>> My intention was just to "tag" somehow the options to EXPERT so that it
>>>>> would show on the menu. Maybe a better, simpler, way to do it is
>>>>> to add the word "EXPERT" to the one line prompt:
>>>>> 
>>>>> config ARM_SSBD
>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>> +    bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>>>   depends on HAS_ALTERNATIVE
>>>>>   default y
>>>>>   help
>>>>> 
>>>>> 
>>>>> What do you think?
>>>> 
>>>> While on the surface this may look like an improvement, I don't
>>>> see how it would actually help: If you read the Kconfig file
>>>> itself, the dependency is seen anyway. And on the menu I don't
>>>> see the point of telling someone who has enabled EXPERT that a
>>>> certain option is (or is not) an expert one. If they think
>>>> they're experts, so should they be treated. (It was, after all,
>>>> a deliberate decision to make enabling expert mode easier, and
>>>> hence easier to use for what one might consider not-really-
>>>> experts. I realize saying so may be considered tendentious; I
>>>> mean it in a purely technical sense, and I'd like to apologize
>>>> in advance to anyone not sharing this as a possible perspective
>>>> to take.)
>>>> 
>>>> Plus, of course, the addition of such (EXPERT) markers to
>>>> future options' prompts is liable to get forgotten now and then,
>>>> so sooner or later we'd likely end up with an inconsistent
>>>> mixture anyway.
>>> 
>>> I tend to agree with you on everything you wrote. The fundamental issue
>>> is that we are (mis)using EXPERT to tag features that are experimental,
>>> as defined by SUPPORT.md.
>>> 
>>> It is important to be able to distinguish clearly at the kconfig level
>>> options that are (security) supported from options that are
>>> unsupported/experimental. Today the only way to do it is with EXPERT
>>> which is not great because:
>>> 
>>> - it doesn't convey the idea that it is for unsupported/experimental
>>> features
>>> - if you want to enable one unsupported feature, it is not clear what
>>> you have to do
>>> 
>>> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
>>> the Kconfig menu?
>>> 
>>> It would make it clearer that by enabling UNSUPPORTED you are going to
>>> get a configuration that is not security supported.
>> 
>> If going down this path, there should be one, authoritative, in-tree definition of feature-level support from which Kconfig (build-time policy enforcement) and SUPPORT.md (documentation) can be derived.  Later, even run-time enforcement can be similarly classified.  FuSA may also wish for documented policy to align with enforcement.
> 
> The goal is trying to align Kconfig and SUPPORT.md by clarifying the
> EXPERT option, which today is a poor implementation of "experimental".
> 
> There could be further improvements down the line, for instance we could
> taint Xen when UNSUPPORTED is selected and even have separate kconfig
> options for UNSUPPORTED, EXPERIMENTAL, and TECHPREVIEW. FuSa is likely
> going to need its own SAFETY option too. Like you suggested, we could
> even have a single source of feature-level support information for both
> Kconfig and SUPPORT.md.
> 

I do think this is a great idea that could make life easier for users.
There could be some generic options as you suggested that the user
could enable or not at the top level of the configuration and, depending
on this, some features/drivers would be possible to select or not.

This would also make it easier to generate a fully security-supported system.

Tainting the system when one of these options is activated could also make
it easier to identify when someone is using a feature that is not supported.

Cheers
Bertrand

> However, I didn't want to increase the scope of this one patch. For now,
> it would be a good start if we replaced EXPERT with something that covers
> anything not security supported, for which UNSUPPORTED looks like a good
> name.
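[Archive note: the Kconfig subtlety Jan points out above can be shown with a
minimal sketch; the FOO/BAR option names are invented for illustration and
are not in the Xen tree.]

```kconfig
# Prompt-only condition, as in the tree today: with EXPERT=n the user is
# never asked, but the option still exists and silently takes its
# default, i.e. it stays enabled.
config FOO
	bool "Enable foo" if EXPERT
	default y

# Whole-option condition, as in the proposed clean-up: with EXPERT=n the
# option does not exist at all, which every consumer sees as disabled.
# Hence the two forms differ whenever the default is not "n".
config BAR
	bool "Enable bar"
	depends on EXPERT
	default y
```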


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:15:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:15:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19183.44393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJZ1-0002co-Ry; Wed, 04 Nov 2020 14:15:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19183.44393; Wed, 04 Nov 2020 14:15:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJZ1-0002ch-Om; Wed, 04 Nov 2020 14:15:03 +0000
Received: by outflank-mailman (input) for mailman id 19183;
 Wed, 04 Nov 2020 14:15:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osUd=EK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kaJZ0-0002cc-Rb
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:15:02 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.54]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0c38656-0b08-496a-b1da-6342c2796308;
 Wed, 04 Nov 2020 14:15:01 +0000 (UTC)
Received: from AM5PR1001CA0069.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::46) by VI1PR08MB4126.eurprd08.prod.outlook.com
 (2603:10a6:803:e1::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 4 Nov
 2020 14:14:55 +0000
Received: from AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:15:cafe::5e) by AM5PR1001CA0069.outlook.office365.com
 (2603:10a6:206:15::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Wed, 4 Nov 2020 14:14:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT014.mail.protection.outlook.com (10.152.16.130) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 4 Nov 2020 14:14:54 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Wed, 04 Nov 2020 14:14:54 +0000
Received: from 939a678c45f5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C884C586-DD1E-4CBE-B99B-E35FFC8DA436.1; 
 Wed, 04 Nov 2020 14:14:17 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 939a678c45f5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 04 Nov 2020 14:14:17 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM4PR08MB2834.eurprd08.prod.outlook.com (2603:10a6:205:5::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Wed, 4 Nov
 2020 14:14:16 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Wed, 4 Nov 2020
 14:14:16 +0000
X-Inumbo-ID: c0c38656-0b08-496a-b1da-6342c2796308
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Tm842fv+F/ERXencghPdS1VUajJx9qC4xm0l8+txgM=;
 b=0xWTJ74kRio2qb12QL2xzttz+FxuX1sYsijEAuJTeQKXvbkfPZYL6C2Nd8P+8gl/y1Czc3Dp1NYvfF4UtkcGRe2bskweZ7h/4RyQX2iy6WCoSc9KAcQ/ARLJ8+g3PYFhVDD3e5w2hnglK/vHJSEs2OMH7C158BBmiRZm0JnOzag=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6d1678f427a3a7cc
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M7SjtB9SSVkBLElusdN0AfzdR04C1P8doHKY68Y+qeSPFkTW6FRKKwEbekNbOLHYWOXxbFgcn3U2zTnyvBAYvigqdQCpkEBB/rjcvsW3v4AWrU69JJoAtUxUcpi2kUppKqEbdw6l0ZYm0fJFi22X+OuQTuAwuig81dY6RIq0iq1NVKYO/EW4uY8IX4PHjF+xkX7CzfhErTa1eVZUxF8ovhARUQ6gk3exnYKs9+pxFfoMVt3fK1zOoqToQjPMK67FUgqeAFJwngKrCWRqujiXWxgL5GDRfH49CqcJ5VUjaT9kYU5GQWd50mlwBsXvjXkvzjiTxYWqMy4Oky1T5nsNpQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Tm842fv+F/ERXencghPdS1VUajJx9qC4xm0l8+txgM=;
 b=oakzshia1Y32jcUklQMhaKAaBP6pKhqvA3QdJ0ogd2OGT6ohBwIn0VAaAwY9u/DCrz5Ra1NrPJ0vsgvbYL4GDeept74VytmqzViyLB5nU9UbaAW1iAf1tkCKeIpxuxSHnf9A09g8Cqzx/JyzJYaQDSSTCJGH9WgC8ugAiwLV/VkqWLotnx7dkbqI4YTt+hhBYH2tIySspOdSWI8lyzRI63fM2AoYsa0tUpdOptFSv2x15m0VFQ5BB2qfqvJeUaQVKaNK7oTdryhN5gfSnE+szIU1/eEOu9MZ1D9LRkIAX86Kg6OY2kx/zW2/Pz5VQeBQ68ezpOnf7S0g7biNepUWfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"julien@xen.org" <julien@xen.org>, "wl@xen.org" <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
Thread-Topic: [RFC PATCH] xen: EXPERT clean-up
Thread-Index:
 AQHWrxxOadLsV5e3vEiMzomhqHWckam0fmYAgADlPwCAALC9AIAAvt0AgADIX4CAAG+2gA==
Date: Wed, 4 Nov 2020 14:14:15 +0000
Message-ID: <FD3CD0C4-6055-443B-B7D9-EAAC4935D2A9@arm.com>
References: <20201031002405.4545-1-sstabellini@kernel.org>
 <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
 <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s>
 <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com>
 <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
 <e0842284-a894-1e0b-ffbe-484013acefa5@suse.com>
In-Reply-To: <e0842284-a894-1e0b-ffbe-484013acefa5@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fa4a5620-643a-4eec-437c-08d880cc00ca
x-ms-traffictypediagnostic: AM4PR08MB2834:|VI1PR08MB4126:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4126091137E862F86257CF4F9DEF0@VI1PR08MB4126.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 laDdxYNjtgCVd09a2NFEjC8z73fE3frF60tu9E38Jhw+WKMMqT77DOHiyrq1jFtrgfmVG3tUavia9wsE7o0u68jtOUQjwH8P0M3zV4OUncT6fOWwiMt/HOtvVNqJIZNFtaom0vvbha0bN0SfKhyW8nhZO6bbJE6dY2TXjv70E1iGxuq+I21FRS6JDJGs83kXl/GQX1QxRevWJlut9AD1JhPDaz4eRvmvF0YCr6xYhPDELrdDehzpYxaaFtjkGJFmjajOISG84Ohm7bgHthHIb3vmb4ED/4R93Zvq+3OfvneivT7ZZiwhp4CP0u5BFon/WWlRWGO2AF7QgdhUZxg61w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3682.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(366004)(39860400002)(376002)(4326008)(71200400001)(54906003)(6916009)(66446008)(53546011)(91956017)(33656002)(8676002)(76116006)(8936002)(83380400001)(66946007)(6512007)(66476007)(64756008)(66556008)(186003)(26005)(478600001)(86362001)(2906002)(316002)(36756003)(6506007)(2616005)(6486002)(5660300002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 +60AkLkIiQDELH28fpn5XHMsbYE19WN6gpu7ssqfQyNsO1lO5zxSGIcnP1yXjC+QCrek7ZKqyzsk9b5pbEnC2sG2tqpuGu4oRTdUlIY2sIG5bJaFsA+29re9jXp+cz2nq0OXNjeQxR4CgPot2uDlBDqHdE3LXD1xfVOAcCnE1wN1D+U5Gi0NoJICGtcoWJKQMRQSsuPw8ZGYvN//qIOCnH7QdwWymJTY2ALJPSYS3f4IU+tTISuOz54iqTjoyLIENlWLdnpfp9FdSh2Y7AzULy0XJkeebq3Yv5BN0U/WNrNontse0wCQMsUNXg6BJdo2Fc9RJ3vrM7YO0vvWnIL6rZNrdfjs6QG1qy9dkr2mheuCuCKEGaYbXOyuahEnK9yTSmgAQLTlpmwuOVqjydyzFCxQvo3CACit+AW9cR+FSLav7w4Cg1nqSAktUUMoe9F0LqyasXMdvjsLA0O+KS+I5NwBsr+p9w1zlwmBcVbvTze1+1YppANY4p/HCmbJma/BeYQHfc3UAj3/iw5jXwV29U4XSITciYAwIFxcW2gKrGbtsoi8t6I9o7GOuOcm/1OTA8ZOxMgNYnmyvIOFmSEKUPTJZtld6DA6q992CklkMY8WhbTLoepqrugCa2/psGh9NdsQNtf/VqO9oo1HfdF+Zw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <915E064EF1964B4D85DECD1B09782DE3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2834
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c38e9ac9-72ba-4cc9-d836-08d880cbe9ae
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	02IwD7bYyTuBvAl9JUjlLkfpLSlQk91EvhhipGtMZdCf8KseT68yVN4wjgDlVI7Ukv0d3qOj3JUlDVXie/OlaDBPNDgf+MwrwfhdZ9/VdP1Tcn/fIM+0rIAsyHCX1BorXKP8DBVcF15wBaPkq7Mxv36sf9ki74HRrmD6E5iqnBXt41u1cWJZ3tH3KD5LKibuOgQuUXglo5hnHC0Y37yveRXke/X6CSg3uSABz73nyC+6hfrRV8kGsvvrPQH+UAEt/F1XHkaV2fZKoILy+JayZ5roVzaTP9ix4dVf9TEe5AclEQTdhypzWx03hGsCLeKRyhcLKBZuNFLN5nzTqMBpfB0qfB2AlfYihzw7YCmAktRx172u8n8BZW96xV7zqCf1yc/c94eNeII5EdmMz2Cppg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39860400002)(136003)(396003)(376002)(46966005)(86362001)(70586007)(6512007)(83380400001)(8676002)(53546011)(36756003)(6506007)(2906002)(70206006)(2616005)(82740400003)(336012)(356005)(33656002)(8936002)(4326008)(6486002)(54906003)(5660300002)(47076004)(26005)(316002)(36906005)(186003)(81166007)(6862004)(82310400003)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Nov 2020 14:14:54.6884
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fa4a5620-643a-4eec-437c-08d880cc00ca
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4126

Hi Jan,

> On 4 Nov 2020, at 07:34, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.11.2020 20:37, Stefano Stabellini wrote:
>> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>> 	  SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>>
>>>>>> config ARM_SSBD
>>>>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>>>>> -	depends on HAS_ALTERNATIVE
>>>>>> +	bool "Speculative Store Bypass Disable"
>>>>>> +	depends on HAS_ALTERNATIVE && EXPERT
>>>>>> 	default y
>>>>>
>>>>> At the example of this, I'm afraid when the default isn't "n"
>>>>> (or there's no default directive at all, as ought to be
>>>>> equivalent to and preferred over "default n"), such a
>>>>> transformation is not functionally identical: Before your
>>>>> change, with !EXPERT this option defaults to y. After your
>>>>> change this option is unavailable (which resolves to it being
>>>>> off for all consuming purposes).
>>>>>
>>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>>> (for this construct indeed only making the prompt conditional,
>>>>> not the entire option), but there are also cases where the
>>>>> cleanup you do is indeed desirable / helpful.
>>>>
>>>> Yeah, thanks for catching it, it is obviously a problem.
>>>>
>>>> My intention was just to "tag" somehow the options to EXPERT so that it
>>>> would show on the menu. Maybe a better, simpler, way to do it is
>>>> to add the word "EXPERT" to the one line prompt:
>>>>
>>>> config ARM_SSBD
>>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>>> +	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>> 	depends on HAS_ALTERNATIVE
>>>> 	default y
>>>> 	help
>>>>
>>>>
>>>> What do you think?
>>>
>>> While on the surface this may look like an improvement, I don't
>>> see how it would actually help: If you read the Kconfig file
>>> itself, the dependency is seen anyway. And on the menu I don't
>>> see the point of telling someone who has enabled EXPERT that a
>>> certain option is (or is not) an expert one. If they think
>>> they're experts, so should they be treated. (It was, after all,
>>> a deliberate decision to make enabling expert mode easier, and
>>> hence easier to use for what one might consider not-really-
>>> experts. I realize saying so may be considered tendentious; I
>>> mean it in a purely technical sense, and I'd like to apologize
>>> in advance to anyone not sharing this as a possible perspective
>>> to take.)
>>>
>>> Plus, of course, the addition of such (EXPERT) markers to
>>> future options' prompts is liable to get forgotten now and then,
>>> so sooner or later we'd likely end up with an inconsistent
>>> mixture anyway.
>>
>> I tend to agree with you on everything you wrote. The fundamental issue
>> is that we are (mis)using EXPERT to tag features that are experimental,
>> as defined by SUPPORT.md.
>>
>> It is important to be able to distinguish clearly at the kconfig level
>> options that are (security) supported from options that are
>> unsupported/experimental. Today the only way to do it is with EXPERT
>> which is not great because:
>>
>> - it doesn't convey the idea that it is for unsupported/experimental
>>  features
>> - if you want to enable one unsupported feature, it is not clear what
>>  you have to do
>>
>> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
>> the Kconfig menu?
>
> If you mean this to be added to prompt texts, then yes, I'd view
> this as helpful. However, ...

+1

>
>> It would make it clearer that by enabling UNSUPPORTED you are going to
>> get a configuration that is not security supported. And ideally we would
>> also tag features like ACPI as UNSUPPORTED as I suggested above.
>
> ... things will get uglier when (just a simple example) something
> is supported on x86, but not on Arm.

It is true that this could happen, but we could easily work around it
by having arch-specific entries select a generic one:

config PCI
	bool

config X86_PCI
	bool "PCI support"
	depends on X86
	select PCI

config ARM_PCI
	bool "PCI support (UNSUPPORTED)"
	depends on ARM && UNSUPPORTED
	select PCI

The option names here are made up, but you get the idea :-)

This makes Kconfig more complex but improves the user configuration
experience, so I think it is a win.

Cheers
Bertrand
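
[Archive note: a rough sketch of the direction discussed in this subthread;
the UNSUPPORTED and SOME_FEATURE names are illustrative only, not the actual
patch. The idea is a single top-level gate that anything outside security
support must depend on, so the menu itself documents the support status.]

```kconfig
config UNSUPPORTED
	bool "Enable features not security supported (see SUPPORT.md)"

# An experimental feature is then gated and labelled consistently:
config SOME_FEATURE
	bool "Some feature (UNSUPPORTED)"
	depends on UNSUPPORTED
```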



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:16:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:16:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19192.44405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJaJ-0002l0-CM; Wed, 04 Nov 2020 14:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19192.44405; Wed, 04 Nov 2020 14:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJaJ-0002ks-9M; Wed, 04 Nov 2020 14:16:23 +0000
Received: by outflank-mailman (input) for mailman id 19192;
 Wed, 04 Nov 2020 14:16:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osUd=EK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kaJaI-0002km-2S
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:16:22 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.45]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8dcdabb3-5564-4839-a402-ec156bf8094c;
 Wed, 04 Nov 2020 14:16:20 +0000 (UTC)
Received: from DB6PR0601CA0006.eurprd06.prod.outlook.com (2603:10a6:4:7b::16)
 by AM9PR08MB6114.eurprd08.prod.outlook.com (2603:10a6:20b:287::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 4 Nov
 2020 14:16:18 +0000
Received: from DB5EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::62) by DB6PR0601CA0006.outlook.office365.com
 (2603:10a6:4:7b::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 4 Nov 2020 14:16:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT031.mail.protection.outlook.com (10.152.20.142) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 4 Nov 2020 14:16:17 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Wed, 04 Nov 2020 14:16:17 +0000
Received: from 739ac3104fd4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5271CC01-2434-44DC-8D23-6C5D1A2F3577.1; 
 Wed, 04 Nov 2020 14:16:11 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 739ac3104fd4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 04 Nov 2020 14:16:11 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB2963.eurprd08.prod.outlook.com (2603:10a6:208:56::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Wed, 4 Nov
 2020 14:16:06 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Wed, 4 Nov 2020
 14:16:06 +0000
X-Inumbo-ID: 8dcdabb3-5564-4839-a402-ec156bf8094c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UW2cmnJ4td5WLg2T2a7LdO1bQQWTx78DnYa+I7gXtlA=;
 b=RXisbY90pmz337GXmlaOE1WRnRZfVGpjc18Q9Bl8T1lQX41M6+GVttrITu3wlR6wZf1XU3cen/24Bk4TRYHnxtNsDsrsbux/waCUQF1vQegef8QA7kjiPDulnESSJSA68lu9NQbYCMNHqg5nBLswcs6PwN10l67MkPnmmkJNseI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT031.mail.protection.outlook.com (10.152.20.142) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 4 Nov 2020 14:16:17 +0000
Received: ("Tessian outbound c189680f801b:v64"); Wed, 04 Nov 2020 14:16:17 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: eb9f7c4593f37762
X-CR-MTA-TID: 64aa7808
Received: from 739ac3104fd4.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 5271CC01-2434-44DC-8D23-6C5D1A2F3577.1;
	Wed, 04 Nov 2020 14:16:11 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 739ac3104fd4.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 04 Nov 2020 14:16:11 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GAADdRQdj9bO0ckwZnyesmilnhbOjHtGS6iVpyxUpL4p+v/zGT2Dmc25nt/YGt334ZKg648ykVFLxdi2X7co6JdOXWo2FDFW0qbooqSBA2OKsMRf7SsEhn3702JZAsZgRSQIrxUOih/Th8D7swcI9uXLQlJo1T6ATzFgb0RIM7eESTxtjNy9KnFI+UFNndxaz48dYIEGDL22cKFkFk8VTDq5WODinvNaExRSjVlOosXAxlReUjJZNv9Mmc3AvgFKo7SyF+6tbiZnYAkYFmimDLrg15avnmKBHz/rnYLnBxVYLyXGy0XUcxY89G6yvxcO8AskqTO052wHDOcpa0yZYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UW2cmnJ4td5WLg2T2a7LdO1bQQWTx78DnYa+I7gXtlA=;
 b=OaaZh6cln8mCre9oRXM/gXuRGRRzdWNA5AUmlOyFLKdszONUv18fAlv09bEiAxjdveqrua5yrbA5l7aSD+t6HehSElOhqbz/oJe9uFAR141DG4FloPjGtLnCwNwmkycoGfE14HsVmGTKoWrVlCdmbxC+4x7tP9dFQEr1ZiN5vG2huufzg8dp8eaK4mJL96czcEhKUuKlY5IB7ahdUk+BtNBau2WZsjpgO08pLQu7CRwod7iIVS1gGhoD8hsncq7VNPGvuNbCesTv28llFMTgmpWEUfYGr9qy+/qSt+y/KQxtOKW6EK9u7ODfNaRkS20PdVmMOBIJ2/79k98crs3NGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UW2cmnJ4td5WLg2T2a7LdO1bQQWTx78DnYa+I7gXtlA=;
 b=RXisbY90pmz337GXmlaOE1WRnRZfVGpjc18Q9Bl8T1lQX41M6+GVttrITu3wlR6wZf1XU3cen/24Bk4TRYHnxtNsDsrsbux/waCUQF1vQegef8QA7kjiPDulnESSJSA68lu9NQbYCMNHqg5nBLswcs6PwN10l67MkPnmmkJNseI=
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB2963.eurprd08.prod.outlook.com (2603:10a6:208:56::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Wed, 4 Nov
 2020 14:16:06 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Wed, 4 Nov 2020
 14:16:06 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v2 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Topic: [PATCH v2 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Index: AQHWsfp6PdgZRIW/7EK8sYDcgU4ri6m4BhwA
Date: Wed, 4 Nov 2020 14:16:06 +0000
Message-ID: <E02CAC20-7E95-458A-9B91-F4C9C400DED5@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <2aa79510731918d78d515a1679cc141fcf16883e.1604417224.git.rahul.singh@arm.com>
In-Reply-To:
 <2aa79510731918d78d515a1679cc141fcf16883e.1604417224.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: a61ff4e6-ed7d-40b3-69c8-08d880cc3254
x-ms-traffictypediagnostic: AM0PR08MB2963:|AM9PR08MB6114:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB61144452AB01CBA93C37CBF39DEF0@AM9PR08MB6114.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 icXjoqASnSZQF8K+Z+PqjNRG55QFjqKS8U6v91YDgn94Yt350w9mIo/leeHPt/kvPbyNyciFhQ4/7JPjX+H+RV7tcnmD7+1MBopaSlQw3qIm0eslA7N6zUPTYZbYdzXpfuUMAon85XGzmpgdAoy3bf5QTrYjDHov8cT0ylk0XmJBaGy+VJD7qCVwwjyfCcOX8XRte+ERmuxeEnlVm/xpTM6vj7DYy8QSwXMCEY0ItK1kvEYFjMFc9bUgIoH54vh+1Z8ueshe+OeiZA9cOx4Yzo3RHpUL35HeCXV14evBo/h5vI4XuTOg9+rjdLdbiRyhhv+x3Uij7nbIo+ZIpNfYUA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3682.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(366004)(396003)(376002)(136003)(4326008)(36756003)(76116006)(66476007)(66556008)(26005)(2906002)(66946007)(6862004)(83380400001)(71200400001)(66446008)(91956017)(64756008)(5660300002)(6486002)(33656002)(86362001)(53546011)(186003)(6636002)(37006003)(6506007)(8676002)(54906003)(6512007)(2616005)(478600001)(316002)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 ao0kNHyCvb8wbBqGLAIPghTa66MKAPZ8DFMvVpY7iGalnmysuRZawGFSzkxtuAP1QY03mXWdYcaN9EQSbbZzcVqMybzJIpG6ErrNIultbll0Yt1NPrzf+GKiQbuqSY73R73NTLezmSRiPspVJVHbXdg7m0wDfdDY3d8MLL+JTSvNlrWHbmD7q/UDfcqAxtC2sVJMB+JvJFdXe0HTtXmnRNvwzLaUXZOdpfAT9p20Oh6fdwRhQ6fgpipTN/gQrNVQvy/9zS32H+9Hvu6N2XhW5dSglTYVuucJz2YBiNzLebyQS4EeQZfXPQYHkUpPrjBRy/UuVOFl9PBqXEaLvoucTuCPs0ycxw/bID9FSPbRNugFLqD8xbknk6SlvFawSQdEnWxAJr26o6j55J7cL/cDZU9WEDGZ0oVo5NNqPCEwRmiBTOT8n+ONwNl92gkSYD9FUO6Mj+glovGLQzO5PeKJVVcYbIa488bg/NzL/ZmwvipFn4hhMJIFeLHF+B4eQPJ/5WyffOpmgsHQEI4w+gSUMdhbNABz7nCvIjUNi29pIZZroG6V0MzclB5SNbpfWVHb594PAMCUF0oWnto3Vk6IGD0IwS9kmQsKwDJO10+7X+7QbfbFeN2BoE99K19cPI4h38cDLRMSEQP08gNsusrg9A==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <EBBD6C253931AE4F8A4A69EF2A5ACC2A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB2963
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	92c7899f-0e06-45ca-bee4-08d880cc2b95
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xckOn7QzFpVLLVnPlqZMEF2ApkCrs0QeljeVNjmwVG49YvnQKU0dwK7ygoblZVkvKmF6mlaGnmqF5RfoODQYDk9/TwDLhCovAGw9in44JkVV2NBW+f96sFkTwsa8bn/WPOzygHaiqcwQULF10YmbGrcWrzvJma2AY7uQctGapMknCj6oUcKL+xwKtoNzhlJ1nR14adW+7nYSW+W2TCycKK59OJAcnW5wTDhXUPCWfOiCvHgeRmx16QbVA0R4F9cAw27C9FimUsc4yzSS5+CsM4xwkoX9WJ6Z9Swzr12NhpEm4LX4FgfHaPZ0lcGeCD8q2JqJzAjhYO1BklMEkI9s6te61xfnVR7zBQ1Njq17Y2lAuCDQ7sul33GMdtOFP0hQOEQGoz8VXk51BD7kah+F6g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(346002)(376002)(396003)(46966005)(26005)(336012)(82740400003)(8676002)(53546011)(6862004)(6506007)(186003)(47076004)(8936002)(2616005)(6486002)(81166007)(86362001)(82310400003)(356005)(83380400001)(36756003)(70206006)(33656002)(4326008)(5660300002)(2906002)(37006003)(6512007)(316002)(54906003)(6636002)(478600001)(70586007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Nov 2020 14:16:17.8931
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a61ff4e6-ed7d-40b3-69c8-08d880cc3254
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6114



> On 3 Nov 2020, at 15:59, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> ARM platforms do not have PCI support available. When CONFIG_HAS_PCI
> is enabled for ARM, a compilation error is observed for the ns16550
> driver.
> 
> Fix the compilation error by introducing a new Kconfig option,
> CONFIG_HAS_NS16550_PCI, to support ns16550 PCI for x86.
> 
> For x86 platforms it is enabled by default. For ARM platforms it is
> disabled by default; once we have proper support for ns16550 PCI on
> ARM we can enable it.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> 
> Changes in v2:
> - Silently enable HAS_NS16550_PCI for x86 by default.
> 
> ---
> xen/drivers/char/Kconfig   |  7 +++++++
> xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
> 2 files changed, 23 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index b572305657..12a53607d1 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -4,6 +4,13 @@ config HAS_NS16550
> 	help
> 	  This selects the 16550-series UART support. For most systems, say Y.
> 
> +config HAS_NS16550_PCI
> +	def_bool y
> +	depends on X86 && HAS_NS16550 && HAS_PCI
> +	help
> +	  This selects the 16550-series UART PCI support. For most systems,
> +	  say Y.
> +
> config HAS_CADENCE_UART
> 	bool "Xilinx Cadence UART driver"
> 	default y
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index d8b52eb813..bd1c2af956 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
> #include <xen/timer.h>
> #include <xen/serial.h>
> #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
> #include <xen/pci.h>
> #include <xen/pci_regs.h>
> #include <xen/pci_ids.h>
> @@ -54,7 +54,7 @@ enum serial_param_type {
>     reg_shift,
>     reg_width,
>     stop_bits,
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     bridge_bdf,
>     device,
>     port_bdf,
> @@ -83,7 +83,7 @@ static struct ns16550 {
>     unsigned int timeout_ms;
>     bool_t intr_works;
>     bool_t dw_usr_bsy;
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     /* PCI card parameters. */
>     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
>     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
> @@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
>     {"reg-shift", reg_shift},
>     {"reg-width", reg_width},
>     {"stop-bits", stop_bits},
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     {"bridge", bridge_bdf},
>     {"dev", device},
>     {"port", port_bdf},
> #endif
> };
> 
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
> struct ns16550_config {
>     u16 vendor_id;
>     u16 dev_id;
> @@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
> 
> static void pci_serial_early_init(struct ns16550 *uart)
> {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>         return;
>=20
> @@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
> 
> static void __init ns16550_init_irq(struct serial_port *port)
> {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     struct ns16550 *uart = port->uart;
> 
>     if ( uart->msi )
> @@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
>     uart->timeout_ms = max_t(
>         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
> 
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     if ( uart->bar || uart->ps_bdf_enable )
>     {
>         if ( uart->param && uart->param->mmio &&
> @@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
> 
>     stop_timer(&uart->timer);
>=20
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     if ( uart->bar )
>        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>                                   uart->ps_bdf[2]), PCI_COMMAND);
> @@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
> 
> static void _ns16550_resume(struct serial_port *port)
> {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     struct ns16550 *uart = port->uart;
> 
>     if ( uart->bar )
> @@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
>     return 1; /* Everything is MMIO */
> #endif
> 
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     pci_serial_early_init(uart);
> #endif
> 
> @@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
>     return (status == 0x90);
> }
> 
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
> static int __init
> pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
> {
> @@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
> 
>     if ( *conf == ',' && *++conf != ',' )
>     {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>         if ( strncmp(conf, "pci", 3) == 0 )
>         {
>             if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
> @@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
> 
>     if ( *conf == ',' && *++conf != ',' )
>     {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>         if ( strncmp(conf, "msi", 3) == 0 )
>         {
>             conf += 3;
> @@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>             uart->irq = simple_strtol(conf, &conf, 10);
>     }
> 
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>     if ( *conf == ',' && *++conf != ',' )
>     {
>         conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
> @@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
>             uart->reg_width = simple_strtoul(param_value, NULL, 0);
>             break;
> 
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>         case bridge_bdf:
>             if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
>                             &uart->ps_bdf[1], &uart->ps_bdf[2]) )
> --
> 2.17.1
> 



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:16:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:16:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19194.44418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJaX-0002pP-N9; Wed, 04 Nov 2020 14:16:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19194.44418; Wed, 04 Nov 2020 14:16:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaJaX-0002pI-J6; Wed, 04 Nov 2020 14:16:37 +0000
Received: by outflank-mailman (input) for mailman id 19194;
 Wed, 04 Nov 2020 14:16:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osUd=EK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kaJaX-0002p8-1G
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:16:37 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 105c36b8-8725-4a9e-9caa-c7c2790daf74;
 Wed, 04 Nov 2020 14:16:35 +0000 (UTC)
Received: from DB6P18901CA0021.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::31)
 by DB6PR08MB2936.eurprd08.prod.outlook.com (2603:10a6:6:25::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.30; Wed, 4 Nov
 2020 14:16:32 +0000
Received: from DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:16:cafe::59) by DB6P18901CA0021.outlook.office365.com
 (2603:10a6:4:16::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Wed, 4 Nov 2020 14:16:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT051.mail.protection.outlook.com (10.152.21.19) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 4 Nov 2020 14:16:31 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64");
 Wed, 04 Nov 2020 14:16:31 +0000
Received: from 739ac3104fd4.6
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1830FDAB-F871-4DCB-8191-9C63795F173B.1; 
 Wed, 04 Nov 2020 14:16:16 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 739ac3104fd4.6
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 04 Nov 2020 14:16:16 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB2963.eurprd08.prod.outlook.com (2603:10a6:208:56::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Wed, 4 Nov
 2020 14:16:13 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Wed, 4 Nov 2020
 14:16:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=osUd=EK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kaJaX-0002p8-1G
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:16:37 +0000
X-Inumbo-ID: 105c36b8-8725-4a9e-9caa-c7c2790daf74
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown [40.107.15.72])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 105c36b8-8725-4a9e-9caa-c7c2790daf74;
	Wed, 04 Nov 2020 14:16:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XBNwGwCJaPBXkdBiV8OA7G4XjO552wM4S80DXQFPtKo=;
 b=79moceiJm0nRxS1pA7oeJcXhPrNHW14Enu77vAAiw1Z0iyfl3F9QXVg+nBIkAnzrDOyOhS6dtzFuzavXNDsT7+YfeJNQkswz3nrO63xPNRYeMd8nAe2weJYlIyJBnfKC1jptQTuoYivv4GWRn/4AHebDjqn0oxFwKLTgqzn6VsM=
Received: from DB6P18901CA0021.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::31)
 by DB6PR08MB2936.eurprd08.prod.outlook.com (2603:10a6:6:25::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.30; Wed, 4 Nov
 2020 14:16:32 +0000
Received: from DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:16:cafe::59) by DB6P18901CA0021.outlook.office365.com
 (2603:10a6:4:16::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Wed, 4 Nov 2020 14:16:32 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT051.mail.protection.outlook.com (10.152.21.19) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 4 Nov 2020 14:16:31 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64"); Wed, 04 Nov 2020 14:16:31 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 35ac0487e9cc3236
X-CR-MTA-TID: 64aa7808
Received: from 739ac3104fd4.6
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 1830FDAB-F871-4DCB-8191-9C63795F173B.1;
	Wed, 04 Nov 2020 14:16:16 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 739ac3104fd4.6
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 04 Nov 2020 14:16:16 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fKJweYYtRxRQvJPR+Ajh5IMWZFi22+YsQK5QyNvXzXOKpYuLi19w7YCMQAwL7T2NH6h3/ciOUHmImnHfo7bjfxna9c1ALsNs6CJUrKtSSl8qpYbG6dn+JYkB5JDLwv381PTJaf3OR7Aw/5MMQjRLPCm/B85VB+euXodhPaaQcxBTB1EGACiXv0OQhYsTH4i8wqipkUnEm7sru0/MlM4c1bN6N6qxUQZij6kbGXxK6NRVEHd+1kRQS2fC6eteE76ptow8ZDbHeVenZu643knUZQKQ89+oIbYIoHDiQm+DwuZ4Rfh2i4nrJzUKo4OhVLR7sfdHtlriIn6PHNrCQH75gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XBNwGwCJaPBXkdBiV8OA7G4XjO552wM4S80DXQFPtKo=;
 b=VDzMk1sT4U9HkhYYockstl2Dc5WjT1q0pbOvwF03b0nVfMtSwPy2ytKzF3OBQJUe+cOVcPmuyNYy8x7X8bzeCb3ukxJArCAKEttqCUQp35mjj2rmru/2xk+Gat7j3euJLjD0vu/g+fMDrkMGJOj/pKX0RdrbJY9AJ0XvQvkrCE4N5IzbonyRPqA5QOXcoD9aTgUue6PJh65W+/ghHu/wXKnpAlLcAN3K74lJkjJWjHwagz7SejfLph785nlkqdUNoT1lzADXBGCR/TLsIWwkCnbQ/zFDhgv0CjrgHlACEe1hn9ULPnVa3fbA4FBzrUZ0XK3XVrSP4e5f//z6e+LjbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XBNwGwCJaPBXkdBiV8OA7G4XjO552wM4S80DXQFPtKo=;
 b=79moceiJm0nRxS1pA7oeJcXhPrNHW14Enu77vAAiw1Z0iyfl3F9QXVg+nBIkAnzrDOyOhS6dtzFuzavXNDsT7+YfeJNQkswz3nrO63xPNRYeMd8nAe2weJYlIyJBnfKC1jptQTuoYivv4GWRn/4AHebDjqn0oxFwKLTgqzn6VsM=
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB2963.eurprd08.prod.outlook.com (2603:10a6:208:56::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Wed, 4 Nov
 2020 14:16:13 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Wed, 4 Nov 2020
 14:16:13 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86 directory.
Thread-Topic: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86 directory.
Thread-Index: AQHWsfp3Q4MXt0Ip3ECqFjMEjNXJ/qm4BiSA
Date: Wed, 4 Nov 2020 14:16:13 +0000
Message-ID: <EAD68E48-36A4-49EA-A6B7-F6CA7D334A85@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
In-Reply-To:
 <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c9d5dbab-0de4-4b07-6438-08d880cc3ab3
x-ms-traffictypediagnostic: AM0PR08MB2963:|DB6PR08MB2936:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR08MB2936F349CFA379E9C1BDB6E79DEF0@DB6PR08MB2936.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xL55LDyIFveu1vbd0UG0uKDBftvWnpZBHX/EjRUIZ+lef8ciZevEHRcC+cFKsASzToLWB0JDApgTrtYZ3rC2ykDf+Io0EBIRRZ5sY5twTZnt+64Dd01Su0FwiINWNJCzQkQWCaR/d96v0/qR9ho7jbanLk4NfVMPL7OayK7aCsTAT2inYjyFlSe07RV72NtMRkM1MfF1GFT0ytWKbQZT9qI0q2P+toMvFhVHc0U60/EY52P6F6eWPH3DXbhb1Ihtl9UbyVGp8V+SDR5mwl91Vy82a4yHRw4bC+gGfdxPLET5Wfc3KmjSreUCLmmRyr39VqN/c5MnFbuAzMoesuUCfBi0WjtERUOt9aobu2TtESZhVy4+NbzXQmBtwcnobXdrBKl/EGwmQ82zLeir3BVRbh8ZHy+bMSO83QzEnwjnNFLLFzAyR1x3HqNdmpxBkPxMPVi0bS0cLKcJHSzDUloXlg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3682.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(366004)(396003)(376002)(136003)(4326008)(36756003)(76116006)(66476007)(66556008)(26005)(2906002)(66946007)(6862004)(83380400001)(71200400001)(66446008)(91956017)(64756008)(5660300002)(6486002)(33656002)(86362001)(53546011)(186003)(6636002)(37006003)(6506007)(8676002)(54906003)(6512007)(2616005)(478600001)(316002)(8936002)(2004002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 UVVTHxMqNHBjQfOUdRPXXPjq/EDAhubcF9WzccewgWFXnmpKM034fP1T04jcu7T4VbhjM5X4alFniCn2J7r7TC4PgGgR4WWmJIZYjzCNsKIe1X10LtWfoUAn1mltga//xWnbs5f1gVIALvVy3QQNHwfn+sofRp4UXH0VPVJ9JQAhqnpREifXid0wvM6fTpv56/VtjTwS1jMcR7YcPodZyGU5bc3cG9b64R/9lPkD0PdzarwYGmZLlORf9kKscZwYShjKqfpQaOeymGSUlE+n7OJ9ecILmn77Zi5yw+OU3vtDsBMS1PH9Upgf1NI9WW4SdAxdghYYdcXscr2vlYh3iqG7KLRIq+Bu2HZsj/58PdGehSgjQIOdZZbaRSbe1e3nehqrmkwkKoApsYdte+XVd7m1QyfNet0M2fCh0cOFdzwt36RfmWVkQnDT87DYZev6jO8W4zUCXHDkbW70kDPNP3di8dMYCWbD9crrYzZL8DGvfzU485LtQwrSBxA/xHIn+/V9qcIoe5fRzLpOcZl1Rl/KFOFZTRGflR5Yx04qNAL1SngrAn6miOSwfsxMVjV/Zbc9q+dCezf/Ne/VNPxlFEeb3iXiayHFdaJddABPCsHDGFJjU5SxEOSwaaNpNmv1JbaXt1bJdD9Rw3r2FbzEwA==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D75432950B3EEE4E855472B412CC15A1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB2963
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0e394a2e-9f6e-486a-9f28-08d880cc2fb0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zUfNqaxFqOTACTZBtpJ8qdEQku1xep+5o+jlaAJCZGmdmR17xFkqI+mdxpBuZD4sj5AY/rXUy+pixT3C82RNYSpWlVQ9VfVtR1lN4ZbgV3Y1yVWCpS3yg0U18DuAk2SW5vURvJg9cHxZzSB5n1AKVht4w7WRHX6dpZ+GHBgV9Qb0xgXvH1OnOj9Fan5nEUZ4qxse5Uia2X4R5t56oDCCqwcDrhM33WmicsvDiiBFQollprbh/+Tj7jUk3D0x9SV2Oc+jJc2ZBoMHR6OeKxU+GyOJ4W4ZsRdNfH/1z+co2Yfvrcl+KZ2O7+ilymD+yxU2Jrn2LrqrEGNGghckUGHCH/3A7AN3WUt41Yc0MvRk0UMJElbn0TELzm79Vytqjg9ni7emRrysK07ooc9Qmot0mOJwfokOtWFpSzwfqYg1Q+YWXFamyWVMDyJGvk3r/LBwHl3dQty8idUB84bEym12bR5MYhy/8T3ZIyjptqjJCaANshsUPbH086lS3zEctq4I
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(39860400002)(376002)(46966005)(82740400003)(2616005)(86362001)(4326008)(8676002)(8936002)(47076004)(70206006)(6512007)(26005)(33656002)(83380400001)(478600001)(70586007)(53546011)(356005)(81166007)(36756003)(2906002)(6486002)(6862004)(186003)(82310400003)(6636002)(336012)(54906003)(37006003)(316002)(6506007)(5660300002)(2004002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Nov 2020 14:16:31.9334
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c9d5dbab-0de4-4b07-6438-08d880cc3ab3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2936



> On 3 Nov 2020, at 15:59, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> passthrough/pci.c is common to all architectures, but there is
> x86-specific code in this file.
> 
> Move the x86-specific code to the x86 directory to avoid compilation
> errors for other architectures.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> 
> Changes in v2:
> - fixed comments.
> - rename pci_clean_dpci_irqs() to arch_pci_clean_pirqs().
> 
> ---
> xen/drivers/passthrough/pci.c        | 76 +----------------------
> xen/drivers/passthrough/x86/Makefile |  1 +
> xen/drivers/passthrough/x86/iommu.c  |  7 +++
> xen/drivers/passthrough/x86/pci.c    | 91 ++++++++++++++++++++++++++++
> xen/include/xen/pci.h                |  2 +
> 5 files changed, 102 insertions(+), 75 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 2a3bce1462..04d3e2c0f9 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,7 +14,6 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
> 
> -#include <xen/sched.h>
> #include <xen/pci.h>
> #include <xen/pci_regs.h>
> #include <xen/pci_ids.h>
> @@ -24,7 +23,6 @@
> #include <xen/irq.h>
> #include <xen/param.h>
> #include <xen/vm_event.h>
> -#include <asm/hvm/irq.h>
> #include <xen/delay.h>
> #include <xen/keyhandler.h>
> #include <xen/event.h>
> @@ -847,71 +845,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>     return ret;
> }
>
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}
> -
> /* Caller should hold the pcidevs_lock */
> static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                            uint8_t devfn)
> @@ -971,7 +904,7 @@ int pci_release_devices(struct domain *d)
>     int ret;
>
>     pcidevs_lock();
> -    ret = pci_clean_dpci_irqs(d);
> +    ret = arch_pci_clean_pirqs(d);
>     if ( ret )
>     {
>         pcidevs_unlock();
> @@ -1375,13 +1308,6 @@ static int __init setup_dump_pcidevs(void)
> }
> __initcall(setup_dump_pcidevs);
>
> -int iommu_update_ire_from_msi(
> -    struct msi_desc *msi_desc, struct msi_msg *msg)
> -{
> -    return iommu_intremap
> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> -}
> -
> static int iommu_add_device(struct pci_dev *pdev)
> {
>     const struct domain_iommu *hd;
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index aa515c680d..d02ff75de5 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,3 @@
> obj-$(CONFIG_PCI_ATS) += ats.o
> obj-y += iommu.o
> +obj-y += pci.o
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f17b1820f4..875e67b53b 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>     return pg;
> }
>
> +int iommu_update_ire_from_msi(
> +    struct msi_desc *msi_desc, struct msi_msg *msg)
> +{
> +    return iommu_intremap
> +           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> +}
> +
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/drivers/passthrough/x86/pci.c b/xen/drivers/passthrough/x86/pci.c
> new file mode 100644
> index 0000000000..59588aa8d4
> --- /dev/null
> +++ b/xen/drivers/passthrough/x86/pci.c
> @@ -0,0 +1,91 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/sched.h>
> +#include <xen/pci.h>
> +
> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index c4d3879761..fd28d11f6e 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -209,4 +209,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
> void msixtbl_pt_unregister(struct domain *, struct pirq *);
> void msixtbl_pt_cleanup(struct domain *d);
>
> +int arch_pci_clean_pirqs(struct domain *d);
> +
> #endif /* __XEN_PCI_H__ */
> --
> 2.17.1
>



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:16:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:16:49 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
Date: Wed, 4 Nov 2020 14:16:10 +0000
Message-ID: <2A5F6B03-4408-4D1A-BBE7-1A72C0EEECC4@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
In-Reply-To:
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>



> On 3 Nov 2020, at 15:59, Rahul Singh <Rahul.Singh@arm.com> wrote:
>
> PCI ATS functionality is not enabled or tested for the ARM
> architecture, but it is enabled for x86 and referenced in common
> passthrough/pci.c code.
>
> Therefore, introduce a new flag to enable the ATS functionality for
> x86 only, to avoid issues for the ARM architecture.
>
> No functional change.
>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
>
> Changes in v2:
> - Fixed return value of PCI ATS related functions when PCI_ATS is not enabled.
> - Make PCI_ATS a user-selectable Kconfig option.
>
> ---
> xen/drivers/passthrough/ats.h        | 26 ++++++++++++++++++++++++++
> xen/drivers/passthrough/x86/Makefile |  2 +-
> xen/drivers/pci/Kconfig              |  9 +++++++++
> 3 files changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/xen/drivers/passthrough/ats.h b/xen/drivers/passthrough/ats.h
> index 22ae209b37..3a71fedcb4 100644
> --- a/xen/drivers/passthrough/ats.h
> +++ b/xen/drivers/passthrough/ats.h
> @@ -17,6 +17,8 @@
>
> #include <xen/pci_regs.h>
>
> +#ifdef CONFIG_PCI_ATS
> +
> #define ATS_REG_CAP    4
> #define ATS_REG_CTL    6
> #define ATS_QUEUE_DEPTH_MASK     0x1f
> @@ -48,5 +50,29 @@ static inline int pci_ats_device(int seg, int bus, int devfn)
>     return pci_find_ext_capability(seg, bus, devfn, PCI_EXT_CAP_ID_ATS);
> }
>
> +#else
> +
> +#define ats_enabled (false)
> +
> +static inline int enable_ats_device(struct pci_dev *pdev,
> +                                    struct list_head *ats_list)
> +{
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline void disable_ats_device(struct pci_dev *pdev) { }
> +
> +static inline int pci_ats_enabled(int seg, int bus, int devfn)
> +{
> +    return 0;
> +}
> +
> +static inline int pci_ats_device(int seg, int bus, int devfn)
> +{
> +    return 0;
> +}
> +
> +#endif /* CONFIG_PCI_ATS */
> +
> #endif /* _ATS_H_ */
>
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index a70cf9460d..aa515c680d 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,2 @@
> -obj-y += ats.o
> +obj-$(CONFIG_PCI_ATS) += ats.o
> obj-y += iommu.o
> diff --git a/xen/drivers/pci/Kconfig b/xen/drivers/pci/Kconfig
> index 7da03fa13b..3cb79ea954 100644
> --- a/xen/drivers/pci/Kconfig
> +++ b/xen/drivers/pci/Kconfig
> @@ -1,3 +1,12 @@
>
> config HAS_PCI
> 	bool
> +
> +config PCI_ATS
> +	bool "PCI ATS support"
> +	default y
> +	depends on X86 && HAS_PCI
> +	---help---
> +	 Enable PCI Address Translation Services.
> +
> +	 If unsure, say Y.
> --
> 2.17.1
>



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:16:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:16:58 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v2 4/4] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Topic: [PATCH v2 4/4] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Index: AQHWsfqNTWzum2g39UK3EjJMrmB9Oam4BigA
Date: Wed, 4 Nov 2020 14:16:16 +0000
Message-ID: <C6FD39DC-1462-45EB-8909-B5F582A159AB@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <7b60501fa689a4f2795ea6c34a7475d288f154a9.1604417224.git.rahul.singh@arm.com>
In-Reply-To:
 <7b60501fa689a4f2795ea6c34a7475d288f154a9.1604417224.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 6571f7e1-5f97-4308-c260-08d880cc4259
x-ms-traffictypediagnostic: AM0PR08MB3348:|AM0PR08MB5236:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB523667A7B8E259D752BD2B289DEF0@AM0PR08MB5236.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:167;OLM:167;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 AAxdgpHdZ4i5nkOosYXRLjYURQMzO7o3E70q70T8kWAKW1Q4Fd/14ZyY/FKe7xsPuT9ASRz6PLgW+pizB7d1S6YCHODasrhDMW0EhZ1EfgD4m5j/QYJkCMvuBfzGtnoW/RCRTZzpv/W+cbsTOLqGMdWumY0TFdCoZJ6joG+eCoztC2i3IfvBw4K83IxsiHwXItrtmEOrrZYW45DddkI1BeHBapQMd40XINSXrB9UDSAMLjSBxfKcgcOF4VyM7SxyZ9fxiw70Jhz7akQLB0sIn+Zdg00T6yFm76ZHmdhjX8v0/Thtt3dmz67FuQ5CeNw1
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3682.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(396003)(39860400002)(346002)(136003)(366004)(478600001)(54906003)(316002)(6486002)(5660300002)(186003)(26005)(6862004)(37006003)(6636002)(8676002)(91956017)(36756003)(53546011)(2616005)(64756008)(6506007)(66556008)(86362001)(33656002)(6512007)(8936002)(4326008)(66476007)(83380400001)(66946007)(71200400001)(76116006)(66446008)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 FeRFkbMoJqc/xSJFgPC3PT6mIxi1DRb6Z5rwsmQfLGWuTv0KFsapVhxfjScD9aHR9bpEXckRDs34NjnRmNOy6+MUfVXFpZRcnZyKu+5mCWFpJY2cIR7zdgescg28tgJIKoeRAD8HIR2ACsVPfNgNfObJFx07A9qE6cNoC/asPUyS0If2uzZ98MqI9TrFw2E6PtCfnYfwiBTFb3CMIANq4oA02W6eZTZAo3B/p+XZJJj2B9Pd1GQ2XC739iIaitga6otgTftXybHb2WKDB7BL8NAj2Mn7JZRizjhu3pYzw1OEjoMycj+rq9BavoNVUvS3GqdYIG+AWy0o9vl3LMJaOE1LnmEOF3/VV/toHVUuuEhgtV/H6sD2aQFeDH00ITrN2OFBD2qCj/Qp581cvmNbtBi39aMQizr0LCdoCj3ZFP8pLuAnybr7wOa/75zDUH4I+TeqXslMt82/4mKRIGdRxCQjtEJCX9Lu2klAQATzuyMo0KFRip9I1yGf9zhYZH3P2zhmKUMuVJJtMr8XNa/5VM0k+oSUYWRzrIkTMcfuCRvVvuYvQhJ2Qxx8XgQvHgoVAy+8V4AP6dHQ4kHviXbJSWjL3FijSc9f5sq8kR1MYDHSBu2uQTLguUiHhPWS3E5FMclY3yQB7XTOmBQYd80hig==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9899AFDE507BE740912BE97C4DAF98E2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3348
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c2c89170-a34e-4ef0-45ff-08d880cc31b7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	T7RB0pcODv9Wm2SGq4/AnqdSSJ6vDq62I6ENNzKkg8MuORS1e3laMijsh8sCMEw4mG5IiA0vCxHdEPPMjNSpMy1Q+7BQrcf6TXJgO4ZKOXs4ihVC1dlRJ2qNp41wtpOind+fTpylnw4QuQZisE5NRx6JM+fSLrnqSlTHPyU9SaMyUv+q0jkNTYgRYUBjwDkZFnJMWgpXlm5FGvVuOE+DuoDLYEWMyoyFD07Ay9U6Xp1+lw6YM6s/ReJRJsyJ9Vp/nLDEQDO2ELbdHxdNOzph7S28gfwoqYCXgQh/aRUOL0oqYSUxuM9gPb6KqYaDzlZ3WiHwGBNKsnBxvM0pRI9E1xvIne1myTq1BkLO8GO2VPJ5S7nymOt71Bg6rf7BlkNm1IyMpKtuF2IA+Xnw6ht01Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(396003)(376002)(46966005)(54906003)(4326008)(36756003)(47076004)(5660300002)(6862004)(478600001)(70206006)(70586007)(316002)(37006003)(8936002)(82310400003)(82740400003)(26005)(8676002)(336012)(33656002)(6486002)(356005)(6636002)(81166007)(86362001)(6506007)(53546011)(186003)(2616005)(2906002)(6512007)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Nov 2020 14:16:44.7727
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6571f7e1-5f97-4308-c260-08d880cc4259
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5236



> On 3 Nov 2020, at 15:59, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> If the mem-sharing, mem-paging and log-dirty functionality is not
> enabled for an architecture when HAS_PCI is enabled, the compiler will
> throw an error.
> 
> Move the code to the x86-specific directory to fix the compilation
> error.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> 
> Changes in v2:
> - Move mem-sharing, mem-paging and log-dirty specific code to the x86
>   directory.
> 
> ---
> xen/drivers/passthrough/pci.c       |  8 +-------
> xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++++
> xen/include/xen/iommu.h             |  2 ++
> 3 files changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 04d3e2c0f9..433989e654 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -22,7 +22,6 @@
> #include <xen/iommu.h>
> #include <xen/irq.h>
> #include <xen/param.h>
> -#include <xen/vm_event.h>
> #include <xen/delay.h>
> #include <xen/keyhandler.h>
> #include <xen/event.h>
> @@ -1418,12 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>     if ( !is_iommu_enabled(d) )
>         return 0;
> 
> -    /* Prevent device assign if mem paging or mem sharing have been
> -     * enabled for this domain */
> -    if ( d != dom_io &&
> -         unlikely(mem_sharing_enabled(d) ||
> -                  vm_event_check_ring(d->vm_event_paging) ||
> -                  p2m_get_hostp2m(d)->global_logdirty) )
> +    if ( !arch_iommu_usable(d) )
>         return -EXDEV;
> 
>     /* device_assigned() should already have cleared the device for assignment */
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index 875e67b53b..b3d151a14c 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -23,6 +23,7 @@
> #include <asm/hvm/io.h>
> #include <asm/io_apic.h>
> #include <asm/setup.h>
> +#include <xen/vm_event.h>
> 
> const struct iommu_init_ops *__initdata iommu_init_ops;
> struct iommu_ops __read_mostly iommu_ops;
> @@ -315,6 +316,18 @@ int iommu_update_ire_from_msi(
>            ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> }
> 
> +bool_t arch_iommu_usable(struct domain *d)
> +{
> +
> +    /* Prevent device assign if mem paging or mem sharing have been
> +     * enabled for this domain */
> +    if ( d != dom_io && unlikely(mem_sharing_enabled(d) ||
> +                        vm_event_check_ring(d->vm_event_paging) ||
> +                        p2m_get_hostp2m(d)->global_logdirty) )
> +        return false;
> +    else
> +        return true;
> +}
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 191021870f..493528cee3 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
> extern struct spinlock iommu_pt_cleanup_lock;
> extern struct page_list_head iommu_pt_cleanup_list;
> 
> +bool_t arch_iommu_usable(struct domain *d);
> +
> #endif /* _IOMMU_H_ */
> 
> /*
> --
> 2.17.1
> 



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 14:58:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 14:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19335.44489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKES-00078I-Aj; Wed, 04 Nov 2020 14:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19335.44489; Wed, 04 Nov 2020 14:57:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKES-00078B-7u; Wed, 04 Nov 2020 14:57:52 +0000
Received: by outflank-mailman (input) for mailman id 19335;
 Wed, 04 Nov 2020 14:57:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaKER-000784-6I
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 14:57:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c70428e-56ad-41b3-8cd0-7f5b1accbc6d;
 Wed, 04 Nov 2020 14:57:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 65796AC54;
 Wed,  4 Nov 2020 14:57:49 +0000 (UTC)
X-Inumbo-ID: 9c70428e-56ad-41b3-8cd0-7f5b1accbc6d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604501869;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=+X9gG1oEAo6Wk53sLH4eZIywq3UxBtoZ9RSq33l1T1c=;
	b=ghX/nGlbchMhuTt8Bdhh6ONd94n8e2xV4RuJpyulAVNCXSfcIGi0kOVYw0723jcs/4UnTm
	EX+TsxtXSUgxrd2Y8wOUILqMdITqAjqGHauU6SLPx0l7bWELL0HsCEW3ki8ygdsckz5Hu4
	EzUYTwRXvorIOhSZUzdR5AJsfOf291s=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 65796AC54;
	Wed,  4 Nov 2020 14:57:49 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] tools/python: pass more -rpath-link options to ld
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Marek Marczykowski <marmarek@invisiblethingslab.com>
Message-ID: <8cf8cfa9-2b0c-123a-2d23-8932e61085fa@suse.com>
Date: Wed, 4 Nov 2020 15:57:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

With the split of libraries, I've observed a number of warnings from
(old?) ld.

Instead of duplicating the additions in two places, introduce a setup.py
make variable holding all the common parts of the invocations.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Pass on and use SHLIB_libxen*.
---
It's unclear to me whether this is ld version dependent - the pattern
of where I've seen such warnings doesn't suggest a clear version
dependency.

Obviously (I think) the other similar variables (XEN_libxen*,
CFLAGS_libxen*, etc) would better also be made use of to eliminate at
least most of the PATH_* variables, but that's not the purpose of this
change.

--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -8,19 +8,21 @@ PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
 PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
 INSTALL_LOG = build/installed_files.txt
 
+setup.py = CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" \
+           SHLIB_libxenctrl="$(SHLIB_libxenctrl)" \
+           SHLIB_libxenguest="$(SHLIB_libxenguest)" \
+           SHLIB_libxenstore="$(SHLIB_libxenstore)" \
+           $(PYTHON) setup.py
+
 .PHONY: build
 build:
-	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
+	$(setup.py) build
 
 .PHONY: install
 install:
 	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-
-	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
-		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
-		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
+	$(setup.py) install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
 		--root="$(DESTDIR)" --force
-
 	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
 	$(INSTALL_PYTHON_PROG) scripts/verify-stream-v2 $(DESTDIR)$(LIBEXEC_BIN)
 
--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -4,6 +4,10 @@ import os, sys
 
 XEN_ROOT = "../.."
 
+SHLIB_libxenctrl = os.environ['SHLIB_libxenctrl'].split()
+SHLIB_libxenguest = os.environ['SHLIB_libxenguest'].split()
+SHLIB_libxenstore = os.environ['SHLIB_libxenstore'].split()
+
 extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
 
 PATH_XEN      = XEN_ROOT + "/tools/include"
@@ -24,7 +28,7 @@ xc = Extension("xc",
                library_dirs       = [ PATH_LIBXENCTRL, PATH_LIBXENGUEST ],
                libraries          = [ "xenctrl", "xenguest" ],
                depends            = [ PATH_LIBXENCTRL + "/libxenctrl.so", PATH_LIBXENGUEST + "/libxenguest.so" ],
-               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
+               extra_link_args    = SHLIB_libxenctrl + SHLIB_libxenguest,
                sources            = [ "xen/lowlevel/xc/xc.c" ])
 
 xs = Extension("xs",
@@ -33,6 +37,7 @@ xs = Extension("xs",
                library_dirs       = [ PATH_XENSTORE ],
                libraries          = [ "xenstore" ],
                depends            = [ PATH_XENSTORE + "/libxenstore.so" ],
+               extra_link_args    = SHLIB_libxenstore,
                sources            = [ "xen/lowlevel/xs/xs.c" ])
 
 plat = os.uname()[0]

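[Editorial aside: the setup.py hunk above indexes os.environ directly, which raises KeyError whenever setup.py is invoked outside the make wrapper. A tolerant lookup is sketched below; the helper name is made up and this is not part of the patch.]

```python
import os

# Sketch only: fall back to an empty flag list when the make wrapper
# did not export the variable (variable names taken from the patch).
def shlib_flags(name):
    return os.environ.get(name, "").split()

SHLIB_libxenctrl = shlib_flags("SHLIB_libxenctrl")
SHLIB_libxenguest = shlib_flags("SHLIB_libxenguest")
SHLIB_libxenstore = shlib_flags("SHLIB_libxenstore")
```

With this variant a plain `python setup.py build` still works, at the cost of silently dropping the extra link flags.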

From xen-devel-bounces@lists.xenproject.org Wed Nov 04 15:30:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 15:30:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19366.44502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKjM-0001Ta-23; Wed, 04 Nov 2020 15:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19366.44502; Wed, 04 Nov 2020 15:29:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKjL-0001TT-VI; Wed, 04 Nov 2020 15:29:47 +0000
Received: by outflank-mailman (input) for mailman id 19366;
 Wed, 04 Nov 2020 15:29:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaKjK-0001TO-ID
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 15:29:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2521abd8-befd-4b0b-a7a6-11224958c628;
 Wed, 04 Nov 2020 15:29:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6DDCAC24;
 Wed,  4 Nov 2020 15:29:44 +0000 (UTC)
X-Inumbo-ID: 2521abd8-befd-4b0b-a7a6-11224958c628
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604503784;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yJuBiKBbZHg2I/SrZfUM6j23W9T9Cy7n8b/npxr/hHE=;
	b=eeKBLglB54cywo9xmW7SJaZ3OnkqiNAeOYVwszZeG10KrzcayMBWNipbUyD4Y360NY9XJr
	EU2m/Mfh6Z995dAQChGxjk8SK54vo6K2E2OZ09Ya9cHB/jOTAfW/eUETxvtJb6OMitdhoF
	gKKoDNXjZcBkov0zGRm+JyjuN++0D54=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C6DDCAC24;
	Wed,  4 Nov 2020 15:29:44 +0000 (UTC)
Subject: Re: [PATCH v4.1 2/2] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201104115739.20144-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae263d8f-b81d-4c47-2760-6ef3823ca780@suse.com>
Date: Wed, 4 Nov 2020 16:29:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104115739.20144-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

> @@ -738,7 +725,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>  
>      lchn = evtchn_from_port(ld, lport);
>  
> -    spin_lock_irqsave(&lchn->lock, flags);
> +    if ( !evtchn_read_trylock(lchn) )
> +        return 0;

Isn't there a change in behavior here? While sends through
ECS_UNBOUND ports indeed get silently ignored, ECS_FREE ones ought
to be getting -EINVAL (as should ECS_UNBOUND ones if they're
Xen-consumer ones). With the failed trylock you don't know which
of the two the port is in the process of being transitioned
to/from. The same would apply for other operations distinguishing
the two states. Right now both evtchn_status() and
evtchn_bind_vcpu() only use the domain-wide lock, but the latter
is getting switched by "evtchn: convert domain event lock to an
r/w one" (granted there's an RFC remark there whether that
transformation is worthwhile). Anyway, the main point of my remark
is that there's another subtlety here which I don't think becomes
obvious from description or comments - where the two states are
mentioned, it gets to look as if they can be treated equally.

> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>  #define bucket_from_port(d, p) \
>      ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>  
> +/*
> + * Lock an event channel exclusively. This is allowed only when the channel is
> + * free or unbound either when taking or when releasing the lock, as any
> + * concurrent operation on the event channel using evtchn_read_trylock() will
> + * just assume the event channel is free or unbound at the moment when the
> + * evtchn_read_trylock() returns false.
> + */
> +static inline void evtchn_write_lock(struct evtchn *evtchn)
> +{
> +    write_lock(&evtchn->lock);
> +
> +    evtchn->old_state = evtchn->state;
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    /* Enforce lock discipline. */
> +    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
> +           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
> +
> +    write_unlock(&evtchn->lock);
> +}

These two aren't needed outside of event_channel.c, are they? If
so, and if they ought to go in a header rather than directly into
the .c file (where I'd prefer the latter, for the sake of minimal
exposure), then it should be event_channel.h.

> @@ -114,6 +114,7 @@ struct evtchn
>          u16 virq;      /* state == ECS_VIRQ */
>      } u;
>      u8 priority;
> +    u8 old_state;      /* State when taking lock in write mode. */
>      u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
>  #ifdef CONFIG_XSM
>      union {

While there's a gap here after the prior patch (which I'm still
not overly happy about), I'm still inclined to ask that the field
be put inside #ifndef NDEBUG, as its only consumer is an
ASSERT().

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 15:39:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 15:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19372.44514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKsu-0002RS-1D; Wed, 04 Nov 2020 15:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19372.44514; Wed, 04 Nov 2020 15:39:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKst-0002RL-UT; Wed, 04 Nov 2020 15:39:39 +0000
Received: by outflank-mailman (input) for mailman id 19372;
 Wed, 04 Nov 2020 15:39:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaKss-0002RG-CC
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 15:39:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d62c1c1-e3e5-486a-8086-f3045d38ccef;
 Wed, 04 Nov 2020 15:39:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3F70ABA2;
 Wed,  4 Nov 2020 15:39:36 +0000 (UTC)
X-Inumbo-ID: 1d62c1c1-e3e5-486a-8086-f3045d38ccef
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604504376;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GboMbT+pyZmsPKTO86iWLZYJtotRgzMxxvlstftZ6Fs=;
	b=Pvc+wS0/ieBA8mHHyOhd4Rzd1LhQLBAWuKuXKN9CrgIxU0ooWzOWFcEBgvxSy4tkxfIBwv
	y/j/9SNuVKaswRYQulvFCywIpqOOma+YDPpYcYrF3ibsVGOrUUpxPIASdb5JimosHij6ix
	KOFx5+nnqqVC2UaToWlVmoVbzscwpnI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A3F70ABA2;
	Wed,  4 Nov 2020 15:39:36 +0000 (UTC)
Subject: Re: [PATCH v2 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
To: Rahul Singh <rahul.singh@arm.com>
Cc: Bertrand.Marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604417224.git.rahul.singh@arm.com>
 <2aa79510731918d78d515a1679cc141fcf16883e.1604417224.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e3863995-b8c3-bf9b-f6c2-957a88e9c6bc@suse.com>
Date: Wed, 4 Nov 2020 16:39:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <2aa79510731918d78d515a1679cc141fcf16883e.1604417224.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.2020 16:59, Rahul Singh wrote:
> ARM platforms do not have PCI support available. When CONFIG_HAS_PCI
> is enabled for ARM a compilation error is observed for ns16550 driver.

I still don't really agree to the approach taken together with
the wording. If Arm was to select HAS_PCI, my expectation would
be that this file compiled fine. You don't mention what
compilation error it is that results, so it's hard to judge if
I'm completely wrong. If, however, this is just a shortcoming
of the Arm implementation right now, then I'd like to ask that
the description here be worded to this effect. This will then
make it easier to understand that the change here can really
be reverted without much further consideration.

> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -4,6 +4,13 @@ config HAS_NS16550
>  	help
>  	  This selects the 16550-series UART support. For most systems, say Y.
>  
> +config HAS_NS16550_PCI
> +	def_bool y
> +	depends on X86 && HAS_NS16550 && HAS_PCI
> +	help
> +	  This selects the 16550-series UART PCI support. For most systems,
> +	  say Y.

There's not much point for a prompt-less option to have help
text. There's definitely no point telling what to select when
really there's nothing to select, due to the lack of a prompt.
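[Editorial aside: taken to its conclusion, Jan's point reduces the option to the following form, i.e. the quoted hunk minus its help text. A sketch of the suggested shape, not a tested hunk.]

```
config HAS_NS16550_PCI
	def_bool y
	depends on X86 && HAS_NS16550 && HAS_PCI
```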

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 15:43:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 15:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19377.44525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKwi-0003Io-Hb; Wed, 04 Nov 2020 15:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19377.44525; Wed, 04 Nov 2020 15:43:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaKwi-0003Ih-Ej; Wed, 04 Nov 2020 15:43:36 +0000
Received: by outflank-mailman (input) for mailman id 19377;
 Wed, 04 Nov 2020 15:43:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaKwg-0003Hy-I7
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 15:43:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f46bcc32-510f-4941-b137-e58d1e359206;
 Wed, 04 Nov 2020 15:43:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9864AB5C;
 Wed,  4 Nov 2020 15:43:32 +0000 (UTC)
X-Inumbo-ID: f46bcc32-510f-4941-b137-e58d1e359206
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604504612;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1yHBIDHWlcIfcBCCjCG/MjJGOs0S/6P/91kiiG2RQIQ=;
	b=mKWJfss6iuxsdPbHrFu0gb6D3lrZdO3wIGm5NU+EGvdowdRTvLy9i97UWDV+VHwucr/UEv
	qO6OO4xfjmN9SeTndhjYaixE5VYlMJRsIW6YY32KVP9QP0e58pVD/UoLls+K+evqv31Bwm
	7NN9OvVk/dsgng0z1iAa7riwCJY80Kc=
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
To: Rahul Singh <rahul.singh@arm.com>
Cc: Bertrand.Marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
Date: Wed, 4 Nov 2020 16:43:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.2020 16:59, Rahul Singh wrote:
> --- a/xen/drivers/pci/Kconfig
> +++ b/xen/drivers/pci/Kconfig
> @@ -1,3 +1,12 @@
>  
>  config HAS_PCI
>  	bool
> +
> +config PCI_ATS
> +	bool "PCI ATS support"
> +	default y
> +	depends on X86 && HAS_PCI
> +	---help---
> +	 Enable PCI Address Translation Services.
> +
> +	 If unsure, say Y.

Support for "---help---" has gone away in Linux, so I think we'd
better not add new instances. Also, help content is typically
indented by a tab and two spaces. With these two adjusted:

Reviewed-by: Jan Beulich <jbeulich@suse.com>
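The two adjustments (plain `help` instead of `---help---`, and help text indented by a tab plus two spaces) would make the quoted hunk read, as a sketch:

```kconfig
config PCI_ATS
	bool "PCI ATS support"
	default y
	depends on X86 && HAS_PCI
	help
	  Enable PCI Address Translation Services.

	  If unsure, say Y.
```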

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 15:49:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 15:49:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19387.44537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaL2e-0003W4-9F; Wed, 04 Nov 2020 15:49:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19387.44537; Wed, 04 Nov 2020 15:49:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaL2e-0003Vx-5k; Wed, 04 Nov 2020 15:49:44 +0000
Received: by outflank-mailman (input) for mailman id 19387;
 Wed, 04 Nov 2020 15:49:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gnt3=EK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaL2d-0003Vs-C7
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 15:49:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5b2d0eb4-3b52-4ddf-9fa5-edbee8b265f0;
 Wed, 04 Nov 2020 15:49:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5080CAD7B;
 Wed,  4 Nov 2020 15:49:41 +0000 (UTC)
X-Inumbo-ID: 5b2d0eb4-3b52-4ddf-9fa5-edbee8b265f0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604504981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hIQIyFIAPlSNxaLVBf76lpwkIn1cYIN4BYSKW6IODZI=;
	b=ByCGftgvh+e7/3IPYM29U2uPll9xnsuZcslpofVkd5CVBjBXAqrkPSEcEBiBrcGPGaiOC5
	YUNp6AYpELBbTUWthwthW9Kcidvr6xTXjJ8Ex3OsQ52HxUws90pMB0pQLYxmFdNaKG1pnm
	kUGt0Y25VHrpsbRCku6Jx9BemUuPctk=
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
From: Jan Beulich <jbeulich@suse.com>
To: Rahul Singh <rahul.singh@arm.com>
Cc: Bertrand.Marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
 <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
Message-ID: <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
Date: Wed, 4 Nov 2020 16:49:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.11.2020 16:43, Jan Beulich wrote:
> On 03.11.2020 16:59, Rahul Singh wrote:
>> --- a/xen/drivers/pci/Kconfig
>> +++ b/xen/drivers/pci/Kconfig
>> @@ -1,3 +1,12 @@
>>  
>>  config HAS_PCI
>>  	bool
>> +
>> +config PCI_ATS
>> +	bool "PCI ATS support"
>> +	default y
>> +	depends on X86 && HAS_PCI
>> +	---help---
>> +	 Enable PCI Address Translation Services.
>> +
>> +	 If unsure, say Y.
> 
> Support for "---help---" has gone away in Linux, so I think we'd
> better not add new instances. Also, help content is typically
> indented by a tab and two spaces. With these two adjusted:
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Initially I wanted to merely reply indicating I'd be fine making
these changes while committing, but there are two more things
(and I withdraw my R-b): For one, isn't struct pci_dev's ats
field now unused when !PCI_ATS? If so, it should get an #ifdef
added. And then, what exactly is it in ats.c that's x86-specific?
Shouldn't the whole file instead be moved one level up, and be
usable by Arm right away?
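Jan's first point can be sketched in isolation. CONFIG_PCI_ATS is the option from the patch, but the struct layout, field name, and accessor below are placeholders for illustration, not Xen's real struct pci_dev:

```c
#include <stddef.h>

#define CONFIG_PCI_ATS 1   /* pretend the Kconfig option is enabled */

struct pci_dev {
    unsigned int devfn;            /* placeholder member */
#ifdef CONFIG_PCI_ATS
    int ats_cap_pos;               /* compiled out when !CONFIG_PCI_ATS */
#endif
};

/* Accessor that keeps callers building in both configurations. */
static inline int pdev_ats_pos(const struct pci_dev *pdev)
{
#ifdef CONFIG_PCI_ATS
    return pdev->ats_cap_pos;
#else
    (void)pdev;
    return -1;
#endif
}
```

With the field under #ifdef, a !PCI_ATS build neither stores nor reads ATS state, which is what makes the otherwise-unused member disappear.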

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 15:53:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 15:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19392.44550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaL6J-0004MW-P6; Wed, 04 Nov 2020 15:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19392.44550; Wed, 04 Nov 2020 15:53:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaL6J-0004MP-MA; Wed, 04 Nov 2020 15:53:31 +0000
Received: by outflank-mailman (input) for mailman id 19392;
 Wed, 04 Nov 2020 15:53:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2Coh=EK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kaL6I-0004MK-SY
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 15:53:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 372b27a0-f417-48be-9aaf-1dfcd7aec012;
 Wed, 04 Nov 2020 15:53:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C4F80AE21;
 Wed,  4 Nov 2020 15:53:28 +0000 (UTC)
X-Inumbo-ID: 372b27a0-f417-48be-9aaf-1dfcd7aec012
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604505208;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lJFO2VI+vE9y/ADUibwFoqOaJoZzmYkwl15bMIxXOSo=;
	b=hZ0dMu4mkggltgqWTE6FmM4bHlY8JvoI6t/2VKyfRXwU5wGMIjYyHldT/thvUQuA4CNbz5
	EcA0H+Ce++xDhVKAQ8RnLSn1pOEO05AH4SVSRpcgj/YnxFCa7AQwbK+oLDoeCDQroaSIdr
	P2TrjLj4TLUOYD+gh7JYwflXBNGEfXg=
Subject: Re: [PATCH v4.1 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201104115739.20144-1-jgross@suse.com>
 <ae263d8f-b81d-4c47-2760-6ef3823ca780@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f558dfae-ecec-9884-00de-4edd65e39b0f@suse.com>
Date: Wed, 4 Nov 2020 16:53:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <ae263d8f-b81d-4c47-2760-6ef3823ca780@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.11.20 16:29, Jan Beulich wrote:
>> @@ -738,7 +725,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>>   
>>       lchn = evtchn_from_port(ld, lport);
>>   
>> -    spin_lock_irqsave(&lchn->lock, flags);
>> +    if ( !evtchn_read_trylock(lchn) )
>> +        return 0;
> 
> Isn't there a change in behavior here? While sends through
> ECS_UNBOUND ports indeed get silently ignored, ECS_FREE ones ought
> to be getting -EINVAL (as should ECS_UNBOUND ones if they're
> Xen-consumer ones). With the failed trylock you don't know which
> of the two the port is in the process of being transitioned
> to/from. The same would apply for other operations distinguishing
> the two states. Right now both evtchn_status() and
> evtchn_bind_vcpu() only use the domain-wide lock, but the latter
> is getting switched by "evtchn: convert domain event lock to an
> r/w one" (granted there's an RFC remark there whether that
> transformation is worthwhile). Anyway, the main point of my remark
> is that there's another subtlety here which I don't think becomes
> obvious from description or comments - where the two states are
> mentioned, it gets to look as if they can be treated equally.

Hmm, evtchn_send() seems to always be called with interrupts enabled.
We could just use a standard read_lock() here if you think the different
states should be treated as they are today.

> 
>> --- a/xen/include/xen/event.h
>> +++ b/xen/include/xen/event.h
>> @@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>>   #define bucket_from_port(d, p) \
>>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>>   
>> +/*
>> + * Lock an event channel exclusively. This is allowed only when the channel is
>> + * free or unbound either when taking or when releasing the lock, as any
>> + * concurrent operation on the event channel using evtchn_read_trylock() will
>> + * just assume the event channel is free or unbound at the moment when the
>> + * evtchn_read_trylock() returns false.
>> + */
>> +static inline void evtchn_write_lock(struct evtchn *evtchn)
>> +{
>> +    write_lock(&evtchn->lock);
>> +
>> +    evtchn->old_state = evtchn->state;
>> +}
>> +
>> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
>> +{
>> +    /* Enforce lock discipline. */
>> +    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
>> +           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
>> +
>> +    write_unlock(&evtchn->lock);
>> +}
> 
> These two aren't needed outside of event_channel.c, are they? If
> so, and if they ought to go in a header rather than directly into
> the .c file (where I'd prefer the latter, for the sake of minimal
> exposure), then it should be event_channel.h.

I wanted to have the locking functions in one place.

In case you prefer it otherwise (and you seem to), I'd rather move
the write lock functions to the .c file.

> 
>> @@ -114,6 +114,7 @@ struct evtchn
>>           u16 virq;      /* state == ECS_VIRQ */
>>       } u;
>>       u8 priority;
>> +    u8 old_state;      /* State when taking lock in write mode. */
>>       u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
>>   #ifdef CONFIG_XSM
>>       union {
> 
> While there's a gap here after the prior patch (which I'm still
> not overly happy about), I'm still inclined to ask that the field
> be put inside #ifndef NDEBUG, as its only consumer is an
> ASSERT().

Fine with me.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 16:02:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 16:02:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19398.44562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLFB-0005pG-NS; Wed, 04 Nov 2020 16:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19398.44562; Wed, 04 Nov 2020 16:02:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLFB-0005p9-KF; Wed, 04 Nov 2020 16:02:41 +0000
Received: by outflank-mailman (input) for mailman id 19398;
 Wed, 04 Nov 2020 16:02:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aHic=EK=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kaLFA-0005p4-6F
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 16:02:40 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ac351229-5d6d-4efc-9696-5ff9f699ee7c;
 Wed, 04 Nov 2020 16:02:37 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-137-4a3h0GbhMgKLAt5Gnfz4pQ-1; Wed, 04 Nov 2020 11:02:33 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6DF88137BC1;
 Wed,  4 Nov 2020 16:01:24 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 604F7277DD;
 Wed,  4 Nov 2020 16:01:22 +0000 (UTC)
X-Inumbo-ID: ac351229-5d6d-4efc-9696-5ff9f699ee7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604505757;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fsQe7Mm1Y7+ujDMAz+Zy/FiKiDElt5mnHtrgipEPHRs=;
	b=ZXNrAXKi2WUKr0L1st/J3I5llCzYd/GVPvct9AYpcDCCz2HcFdV4SHGg3F/vAzl/95bPRi
	el0tq5vTY7tFqW7NoDy04vthx7AewjkIN6+8wOjjYjn4vKIU6Bih7zCS1cKxkPICCndxds
	AJZYu5Cqexw5FtizKAsq1KPW1Sbqa8s=
X-MC-Unique: 4a3h0GbhMgKLAt5Gnfz4pQ-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	Markus Armbruster <armbru@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v2 09/44] qdev: Make qdev_get_prop_ptr() get Object* arg
Date: Wed,  4 Nov 2020 10:59:46 -0500
Message-Id: <20201104160021.2342108-10-ehabkost@redhat.com>
In-Reply-To: <20201104160021.2342108-1-ehabkost@redhat.com>
References: <20201104160021.2342108-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make the code more generic and not specific to TYPE_DEVICE.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
- Fix build error with CONFIG_XEN
  I took the liberty of keeping the Reviewed-by line from
  Marc-André, as the build fix is a trivial one-line change
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  8 ++--
 hw/block/xen-block.c             |  5 +-
 hw/core/qdev-properties-system.c | 57 +++++++++-------------
 hw/core/qdev-properties.c        | 82 +++++++++++++-------------------
 hw/s390x/css.c                   |  5 +-
 hw/s390x/s390-pci-bus.c          |  4 +-
 hw/vfio/pci-quirks.c             |  5 +-
 8 files changed, 68 insertions(+), 100 deletions(-)
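The change is mechanical throughout: each property accessor now resolves the property's offset against the Object* it is handed, instead of first downcasting with DEVICE(obj). A standalone sketch of the offset-based pattern (simplified stand-ins, not QEMU's real types):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for QOM types. */
typedef struct Object { int dummy; } Object;

typedef struct Property {
    const char *name;
    ptrdiff_t offset;              /* offsetof(OwnerType, field) */
} Property;

/* Taking Object* (the base type) rather than DeviceState* is what makes
 * the helper usable by any QOM object, not just devices. */
static void *qdev_get_prop_ptr(Object *obj, Property *prop)
{
    return (char *)obj + prop->offset;
}

/* An owner that is not a DeviceState at all. */
typedef struct MyObject {
    Object parent;
    uint32_t blocksize;
} MyObject;
```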

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 0ea822e6a7..0b92cfc761 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -302,7 +302,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+void *qdev_get_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(DeviceState *dev,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index b58d298c1a..e91c21dd4a 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,8 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    TPMBackend **be = qdev_get_prop_ptr(dev, opaque);
+    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -49,7 +48,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -73,9 +72,8 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 8a7a3f5452..905e4acd97 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -335,9 +335,8 @@ static char *disk_to_vbd_name(unsigned int disk)
 static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -398,7 +397,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index d0fb063a49..c8c73c371b 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -59,9 +59,8 @@ static bool check_prop_still_unset(DeviceState *dev, const char *name,
 static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -87,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -185,7 +184,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(dev, prop);
+    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -218,8 +217,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    CharBackend *be = qdev_get_prop_ptr(dev, opaque);
+    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -232,7 +230,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -272,9 +270,8 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -297,9 +294,8 @@ const PropertyInfo qdev_prop_chr = {
 static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -315,7 +311,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -381,9 +377,8 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
 static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -395,7 +390,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -461,9 +456,8 @@ const PropertyInfo qdev_prop_netdev = {
 static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -475,7 +469,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -582,7 +576,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -674,9 +668,8 @@ const PropertyInfo qdev_prop_multifd_compression = {
 static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -693,7 +686,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -761,7 +754,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -804,8 +797,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    DeviceState *dev = DEVICE(obj);
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -828,9 +820,8 @@ const PropertyInfo qdev_prop_pci_devfn = {
 static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -857,7 +848,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -950,9 +941,8 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
 static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -981,7 +971,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     if (dev->realized) {
@@ -1027,9 +1017,8 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
 static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -1067,7 +1056,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     if (dev->realized) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 3a4638f4de..0a54a922c8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -38,9 +38,9 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
+void *qdev_get_prop_ptr(Object *obj, Property *prop)
 {
-    void *ptr = dev;
+    void *ptr = obj;
     ptr += prop->offset;
     return ptr;
 }
@@ -48,9 +48,8 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
@@ -60,7 +59,7 @@ void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -94,8 +93,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint32_t *p = qdev_get_prop_ptr(dev, props);
+    uint32_t *p = qdev_get_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -107,9 +105,8 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(dev, prop);
+    uint32_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -156,8 +153,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint64_t *p = qdev_get_prop_ptr(dev, props);
+    uint64_t *p = qdev_get_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -169,9 +165,8 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(dev, prop);
+    uint64_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -208,9 +203,8 @@ const PropertyInfo qdev_prop_bit64 = {
 static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -220,7 +214,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -242,9 +236,8 @@ const PropertyInfo qdev_prop_bool = {
 static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -254,7 +247,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -288,9 +281,8 @@ const PropertyInfo qdev_prop_uint8 = {
 void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -300,7 +292,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -322,9 +314,8 @@ const PropertyInfo qdev_prop_uint16 = {
 static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -334,7 +325,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -347,9 +338,8 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
                              void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -359,7 +349,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -388,9 +378,8 @@ const PropertyInfo qdev_prop_int32 = {
 static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -400,7 +389,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -413,9 +402,8 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -425,7 +413,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -454,15 +442,14 @@ const PropertyInfo qdev_prop_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(DEVICE(obj), prop));
+    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -477,7 +464,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -515,9 +502,8 @@ const PropertyInfo qdev_prop_on_off_auto = {
 void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -528,7 +514,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
     if (dev->realized) {
@@ -563,9 +549,8 @@ const PropertyInfo qdev_prop_size32 = {
 static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -581,7 +566,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -653,7 +638,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
     void **arrayptr = (void *)dev + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -699,7 +684,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->prop.offset = eltptr - (void *)dev;
-        assert(qdev_get_prop_ptr(dev, &arrayprop->prop) == eltptr);
+        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
         object_property_add(obj, propname,
                             arrayprop->prop.info->name,
                             arrayprop->prop.info->get,
@@ -893,9 +878,8 @@ void qdev_prop_set_globals(DeviceState *dev)
 static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -905,7 +889,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 9961cfe7bf..2b8f33fec2 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2343,9 +2343,8 @@ void css_reset(void)
 static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2375,7 +2374,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 48a3be802f..ab27b6e848 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1323,7 +1323,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(DEVICE(obj), prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1334,7 +1334,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
     DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 57150913b7..53569925a2 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1488,9 +1488,8 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1501,7 +1500,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 16:02:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 16:02:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19399.44574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLFQ-0005sp-5U; Wed, 04 Nov 2020 16:02:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19399.44574; Wed, 04 Nov 2020 16:02:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLFQ-0005sg-2M; Wed, 04 Nov 2020 16:02:56 +0000
Received: by outflank-mailman (input) for mailman id 19399;
 Wed, 04 Nov 2020 16:02:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aHic=EK=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kaLFP-0005sU-2X
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 16:02:55 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1c2af3d3-50c3-42ab-9229-e9d4a1a7bc1b;
 Wed, 04 Nov 2020 16:02:52 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-105-CscNxkqRPjCIWzye9-3gOA-1; Wed, 04 Nov 2020 11:02:47 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 959C9807104;
 Wed,  4 Nov 2020 16:01:45 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2C53A6EF74;
 Wed,  4 Nov 2020 16:01:45 +0000 (UTC)
X-Inumbo-ID: 1c2af3d3-50c3-42ab-9229-e9d4a1a7bc1b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604505772;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IXyUOnmUYpY+ShWZrhrIe3RnFmX8MuOeLyPSSeGeZRo=;
	b=Zn8Ng+RUm12PoKlqDoKRIBZ2GOaalZfeCF5DgLiSZZ7tBynZXkehr1iV4HL/bvfLy6C8FD
	O+SsE6h790ccUqSjCUIVIaK/+RHdYZYxp4V7Eau3q5Fk9LoCv3hYCC1nhvfY/eIYIhHI1k
	W90T7ujRfcn9JcyPhkDxvkk9j7x0P8g=
X-MC-Unique: CscNxkqRPjCIWzye9-3gOA-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	Markus Armbruster <armbru@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Artyom Tarasenko <atar4qemu@gmail.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v2 22/44] qdev: Move dev->realized check to qdev_property_set()
Date: Wed,  4 Nov 2020 10:59:59 -0500
Message-Id: <20201104160021.2342108-23-ehabkost@redhat.com>
In-Reply-To: <20201104160021.2342108-1-ehabkost@redhat.com>
References: <20201104160021.2342108-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Every single qdev property setter function manually checks
dev->realized.  We can just check dev->realized inside
qdev_property_set() instead.

The check is being added as a separate function
(qdev_prop_allow_set()) because it will become a callback later.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
* Removed unused variable in xen_block_set_vdev()
* Reworked the patch after changes to the earlier patches in the
  series
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 backends/tpm/tpm_util.c          |   6 --
 hw/block/xen-block.c             |   6 --
 hw/core/qdev-properties-system.c |  70 ----------------------
 hw/core/qdev-properties.c        | 100 ++++++-------------------------
 hw/s390x/css.c                   |   6 --
 hw/s390x/s390-pci-bus.c          |   6 --
 hw/vfio/pci-quirks.c             |   6 --
 target/sparc/cpu.c               |   6 --
 8 files changed, 18 insertions(+), 188 deletions(-)

diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index dba2f6b04a..0b07cf55ea 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 905e4acd97..bd1aef63a7 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -395,17 +395,11 @@ static int vbd_name_to_disk(const char *name, const char **endp,
 static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 202abd0e4b..0d3e57bba0 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -94,11 +94,6 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
     bool blk_created = false;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -230,17 +225,11 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -311,18 +300,12 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -390,7 +373,6 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
 static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
@@ -398,11 +380,6 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
     int queues, err = 0, i = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -469,18 +446,12 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
 static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -582,11 +553,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     uint64_t value;
     Error *local_err = NULL;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -686,7 +652,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
 static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
@@ -694,11 +659,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
     char *str;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_str(v, name, &str, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
@@ -754,17 +714,11 @@ const PropertyInfo qdev_prop_reserved_region = {
 static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, NULL)) {
         if (!visit_type_int32(v, name, &value, errp)) {
             return;
@@ -848,7 +802,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
@@ -857,11 +810,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
     unsigned long dom = 0, bus = 0;
     unsigned int slot = 0, func = 0;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -971,16 +919,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
                          errp)) {
         return;
@@ -1056,16 +998,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
                          errp)) {
         return;
@@ -1128,16 +1064,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 0e5ff81da8..ff36eb250e 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -24,6 +24,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
     }
 }
 
+/* returns: true if property is allowed to be set, false otherwise */
+static bool qdev_prop_allow_set(Object *obj, const char *name,
+                                Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return false;
+    }
+    return true;
+}
+
 void qdev_prop_allow_set_link_before_realize(const Object *obj,
                                              const char *name,
                                              Object *val, Error **errp)
@@ -65,6 +78,11 @@ static void field_prop_set(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
+
+    if (!qdev_prop_allow_set(obj, name, errp)) {
+        return;
+    }
+
     return prop->info->set(obj, v, name, opaque, errp);
 }
 
@@ -90,15 +108,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
 
@@ -148,15 +160,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -208,15 +214,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -245,15 +245,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_bool(v, name, ptr, errp);
 }
 
@@ -278,15 +272,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint8(v, name, ptr, errp);
 }
 
@@ -323,15 +311,9 @@ void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
 static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint16(v, name, ptr, errp);
 }
 
@@ -356,15 +338,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
 static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint32(v, name, ptr, errp);
 }
 
@@ -380,15 +356,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
 static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int32(v, name, ptr, errp);
 }
 
@@ -420,15 +390,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
 static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint64(v, name, ptr, errp);
 }
 
@@ -444,15 +408,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
 static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int64(v, name, ptr, errp);
 }
 
@@ -495,16 +453,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
 static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -545,16 +497,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
 static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -621,10 +567,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
     const char *arrayname;
     int i;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
     if (*alenptr) {
         error_setg(errp, "array size property %s may not be set more than once",
                    name);
@@ -864,15 +806,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_size(v, name, ptr, errp);
 }
 
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 7a44320d12..496e2c5801 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
 static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index ab27b6e848..54fac3851d 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1331,16 +1331,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
 static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
     }
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 53569925a2..802979635c 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
     }
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index f5cff4103b..3375fffb38 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
 static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     const int64_t min = MIN_NWINDOWS;
     const int64_t max = MAX_NWINDOWS;
     SPARCCPU *cpu = SPARC_CPU(obj);
     int64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_int(v, name, &value, errp)) {
         return;
     }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 16:03:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 16:03:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19400.44586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLFY-0005yA-FZ; Wed, 04 Nov 2020 16:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19400.44586; Wed, 04 Nov 2020 16:03:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLFY-0005y3-BZ; Wed, 04 Nov 2020 16:03:04 +0000
Received: by outflank-mailman (input) for mailman id 19400;
 Wed, 04 Nov 2020 16:03:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aHic=EK=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kaLFX-0005xf-7n
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 16:03:03 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id b73d7953-f1f9-4f61-9047-38c0175bd15f;
 Wed, 04 Nov 2020 16:03:01 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-478-yjP96hliPAuVsdiaI16oLQ-1; Wed, 04 Nov 2020 11:02:59 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 60639D3EBB;
 Wed,  4 Nov 2020 16:02:22 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DFFA860BFA;
 Wed,  4 Nov 2020 16:02:21 +0000 (UTC)
X-Inumbo-ID: b73d7953-f1f9-4f61-9047-38c0175bd15f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604505781;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YlPrVGPehtxFul7QF3W0SvGE8B+YiJsTwf08ORZXOPE=;
	b=b9LVSH58XKNUiJBlz09dCZyoNy/lzCFLu2xSITGvNsmEqS9DIItZzfre23XZyOWNvLTSC+
	+SJW4cNin0zOgjpOeEpK8tV8ue7PTAFtUVrgCpD7YiiopF3BM9jLRQ+6gQ8+nZoDOTyJTM
	br5F/yiDbyybH4VesxvZFsqgtS2FLTA=
X-MC-Unique: yjP96hliPAuVsdiaI16oLQ-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	Markus Armbruster <armbru@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v2 36/44] qdev: Rename qdev_get_prop_ptr() to object_field_prop_ptr()
Date: Wed,  4 Nov 2020 11:00:13 -0500
Message-Id: <20201104160021.2342108-37-ehabkost@redhat.com>
In-Reply-To: <20201104160021.2342108-1-ehabkost@redhat.com>
References: <20201104160021.2342108-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function will be moved to common QOM code, as it is not
specific to TYPE_DEVICE anymore.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
* Rename to object_field_prop_ptr() instead of object_static_prop_ptr()
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  6 ++--
 hw/block/xen-block.c             |  4 +--
 hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
 hw/core/qdev-properties.c        | 60 ++++++++++++++++----------------
 hw/s390x/css.c                   |  4 +--
 hw/s390x/s390-pci-bus.c          |  4 +--
 hw/vfio/pci-quirks.c             |  4 +--
 8 files changed, 67 insertions(+), 67 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 7f8d5fc206..2bec65c8e5 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -223,7 +223,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop);
+void *object_field_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(Object *obj,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index 0b07cf55ea..bb1ab34a75 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,7 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
+    TPMBackend **be = object_field_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend **be = object_field_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index bd1aef63a7..718d886e5c 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 96a0bc5109..8781b856d3 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -62,7 +62,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_field_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -88,7 +88,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_field_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -181,7 +181,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
+    BlockBackend **ptr = object_field_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -214,7 +214,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
+    CharBackend *be = object_field_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -226,7 +226,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -262,7 +262,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -286,7 +286,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_field_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -301,7 +301,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_field_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -363,7 +363,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -374,7 +374,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -436,7 +436,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -447,7 +447,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -549,7 +549,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -637,7 +637,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -653,7 +653,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -715,7 +715,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t value, *ptr = object_field_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -753,7 +753,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -777,7 +777,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -803,7 +803,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -892,7 +892,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -920,7 +920,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
@@ -962,7 +962,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -999,7 +999,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
@@ -1050,7 +1050,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -1065,7 +1065,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index aeab4ae9b6..9aebd7b8a9 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -50,7 +50,7 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop)
+void *object_field_prop_ptr(Object *obj, Property *prop)
 {
     void *ptr = obj;
     ptr += prop->offset;
@@ -96,7 +96,7 @@ void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -105,7 +105,7 @@ void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -134,7 +134,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    uint32_t *p = qdev_get_prop_ptr(obj, props);
+    uint32_t *p = object_field_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -147,7 +147,7 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(obj, prop);
+    uint32_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -188,7 +188,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    uint64_t *p = qdev_get_prop_ptr(obj, props);
+    uint64_t *p = object_field_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -201,7 +201,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(obj, prop);
+    uint64_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -233,7 +233,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -242,7 +242,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -260,7 +260,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -269,7 +269,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -299,7 +299,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -308,7 +308,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -326,7 +326,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -335,7 +335,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -344,7 +344,7 @@ void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -353,7 +353,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -378,7 +378,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -387,7 +387,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -396,7 +396,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -405,7 +405,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -429,14 +429,14 @@ const PropertyInfo qdev_prop_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
+    g_free(*(char **)object_field_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_field_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -450,7 +450,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -484,7 +484,7 @@ void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -494,7 +494,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
 
     if (!visit_type_size(v, name, &value, errp)) {
@@ -531,7 +531,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     Property *prop = opaque;
     ObjectProperty *op = object_property_find_err(obj, name, &error_abort);
-    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *alenptr = object_field_prop_ptr(obj, prop);
     void **arrayptr = (void *)obj + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -570,7 +570,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->offset = eltptr - (void *)obj;
-        assert(qdev_get_prop_ptr(obj, arrayprop) == eltptr);
+        assert(object_field_prop_ptr(obj, arrayprop) == eltptr);
         object_property_add_field(obj, propname, arrayprop, op->allow_set);
     }
 }
@@ -760,7 +760,7 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -769,7 +769,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 496e2c5801..fe47751df4 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 54fac3851d..99b18d56ba 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1323,7 +1323,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1333,7 +1333,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
 {
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 802979635c..fc8d63c850 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t value, *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
-- 
2.28.0
From xen-devel-bounces@lists.xenproject.org Wed Nov 04 16:04:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 16:04:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19407.44597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLGV-0006Bl-0v; Wed, 04 Nov 2020 16:04:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19407.44597; Wed, 04 Nov 2020 16:04:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaLGU-0006Be-Tf; Wed, 04 Nov 2020 16:04:02 +0000
Received: by outflank-mailman (input) for mailman id 19407;
 Wed, 04 Nov 2020 16:04:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaLGT-0006BR-ER
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 16:04:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be04a26d-74ab-44b2-818d-92aa4907f742;
 Wed, 04 Nov 2020 16:03:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaLGQ-0001Ig-SM; Wed, 04 Nov 2020 16:03:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaLGQ-0008Nl-Hn; Wed, 04 Nov 2020 16:03:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaLGQ-0004AH-H3; Wed, 04 Nov 2020 16:03:58 +0000
X-Inumbo-ID: be04a26d-74ab-44b2-818d-92aa4907f742
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b56R+Q1O0ahCBwetf9cZHHzBWreN0eum7M3degPM0eI=; b=0tKiavS29AERKpM7tocT3vg20j
	z0gGArfqsDWwNHSiQAfC8cbRGyptHBJum+jdoqFETpLsO6JC+jgAkuUbXJR1oiXR2+Z95yGPXRS80
	umXbxfMhig5p7pAYKX0QQlNHUXej7lJgILr9fAxjvB/RbW92CMJL9HurkgF4RRcFR3ak=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156390-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156390: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4ef8451b332662d004df269d4cdeb7d9f31419b5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 16:03:58 +0000

flight 156390 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156390/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4ef8451b332662d004df269d4cdeb7d9f31419b5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   95 days
Failing since        152366  2020-08-01 20:49:34 Z   94 days  160 attempts
Testing same since   156390  2020-11-04 02:20:15 Z    0 days    1 attempts

------------------------------------------------------------
3422 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 654687 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 17:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 17:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19430.44613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMRk-0003xd-KJ; Wed, 04 Nov 2020 17:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19430.44613; Wed, 04 Nov 2020 17:19:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMRk-0003xW-HN; Wed, 04 Nov 2020 17:19:44 +0000
Received: by outflank-mailman (input) for mailman id 19430;
 Wed, 04 Nov 2020 17:19:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kZ++=EK=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kaMRi-0003xQ-Fj
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 17:19:42 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9695d997-d724-4e72-8dee-6c549aab909a;
 Wed, 04 Nov 2020 17:19:41 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0A4HJSw0001780
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 4 Nov 2020 12:19:34 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0A4HJSsS001779;
 Wed, 4 Nov 2020 09:19:28 -0800 (PST) (envelope-from ehem)
X-Inumbo-ID: 9695d997-d724-4e72-8dee-6c549aab909a
Date: Wed, 4 Nov 2020 09:19:28 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Marek Marczykowski <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH v2] tools/python: pass more -rpath-link options to ld
Message-ID: <20201104171928.GA1647@mattapan.m5p.com>
References: <8cf8cfa9-2b0c-123a-2d23-8932e61085fa@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8cf8cfa9-2b0c-123a-2d23-8932e61085fa@suse.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Nov 04, 2020 at 03:57:49PM +0100, Jan Beulich wrote:
> --- a/tools/python/Makefile
> +++ b/tools/python/Makefile
> @@ -8,19 +8,21 @@ PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
>  PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>  INSTALL_LOG = build/installed_files.txt
>  
> +setup.py = CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" \
> +           SHLIB_libxenctrl="$(SHLIB_libxenctrl)" \
> +           SHLIB_libxenguest="$(SHLIB_libxenguest)" \
> +           SHLIB_libxenstore="$(SHLIB_libxenstore)" \
> +           $(PYTHON) setup.py
> +
>  .PHONY: build
>  build:
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> +	$(setup.py) build
>  
>  .PHONY: install
>  install:
>  	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
> -
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> -		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> -		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> +	$(setup.py) install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>  		--root="$(DESTDIR)" --force
> -
>  	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
>  	$(INSTALL_PYTHON_PROG) scripts/verify-stream-v2 $(DESTDIR)$(LIBEXEC_BIN)

Shouldn't similar work of moving all the environment variable settings to
a $(setup.py) variable be done for tools/pygrub/Makefile?

tools/python/Makefile and tools/pygrub/Makefile are presently quite
similar and keeping them similar seems a Good Idea(tm).
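For reference, the analogous refactor in tools/pygrub/Makefile might look roughly like the fragment below. This is only a sketch following the pattern of Jan's patch; pygrub's actual variable names, flags, and recipes may differ:

```make
# Hypothetical sketch only: mirrors the tools/python/Makefile change
# from Jan's patch; pygrub's real recipes and flags may differ.
setup.py = CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
           LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py

.PHONY: build
build:
	$(setup.py) build

.PHONY: install
install:
	$(setup.py) install --record $(INSTALL_LOG) --root="$(DESTDIR)" --force
```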


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 17:22:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 17:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19433.44625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMUk-0004lZ-3Z; Wed, 04 Nov 2020 17:22:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19433.44625; Wed, 04 Nov 2020 17:22:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMUk-0004lS-0D; Wed, 04 Nov 2020 17:22:50 +0000
Received: by outflank-mailman (input) for mailman id 19433;
 Wed, 04 Nov 2020 17:22:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tcr8=EK=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kaMUi-0004lL-EP
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 17:22:48 +0000
Received: from wout2-smtp.messagingengine.com (unknown [64.147.123.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd00318c-eed8-4e5b-a6cc-59d6059b893d;
 Wed, 04 Nov 2020 17:22:46 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id D8483A52;
 Wed,  4 Nov 2020 12:22:45 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Wed, 04 Nov 2020 12:22:46 -0500
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id B6ECF3280391;
 Wed,  4 Nov 2020 12:22:44 -0500 (EST)
X-Inumbo-ID: fd00318c-eed8-4e5b-a6cc-59d6059b893d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=p3a3wJ
	vbPPzG5HJNzeHsRog3mMlxxr8aebuzTaoFanA=; b=kkQISjx8mhLxrnOoXTeZrP
	uZGfbd4zXA094pgzVVG/mtyJJqGzaRCTjx41HaOoK9puNOY8iqcxQNAC+qnnrAfn
	Pw9D2lsoVCIped2JVlTiRb+czrW/4GY0Wo/tc319ce0uykeJoo3xQQM5PYtG7uqk
	TryX17fBYWknJe0Nkez1LYHneiseaPFjI2zWjBLy+lNZV53sc96ZxPFCXBjPksSa
	Ci370n0/F39JUzX/KFxBewmFNhQbsZmGZX0QyaQeluOiDVcxendQ+JPm4TRxzRcK
	D1bjXi8N/yQykIyJ97qW8ZT2xXJqa619zf7e4MCPpLY5agur4L4fD1Uxy0zOIm/Q
	==
X-ME-Sender: <xms:ZeOiXxfGvWR99WEhUJcjMuf-76owQ0-TVbRjPHBrroQMvoBgT7iH7g>
    <xme:ZeOiX_Md8DvCSHgegdC8hxb89TaHyA-L_YKf1XqLstsNVm1OhApr5jqlZV8U3j4Up
    fiWO6V7e4IwCg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddthedguddtudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiuceomhgrrhhmrghrvghksehinhhvihhsihgslhgvth
    hhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgefhtdfhuddtjefgjeev
    hffhjeejtefgjeevgeeijeduhfdtteehieffvdettddvnecukfhppeeluddrieegrdduje
    dtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhm
    pehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:ZeOiX6hqiY3kDct9U_LbieaIPUQqlRBqMNwdBYO1o_f7iw4oYS739w>
    <xmx:ZeOiX6_FGAFaoNdHWyHXq8XJx908iXVlxj3JMQwkWTSLN11a7lIU9A>
    <xmx:ZeOiX9spv_SZuEWN8MOF9PYIsRA2xT00CGZ_mi2hdiDq7bgVTO3aTA>
    <xmx:ZeOiX748IlEyfNNQ4xHa9DCPUadg3JophvUUP8AYC7PgM6zAUxvflQ>
Date: Wed, 4 Nov 2020 18:22:41 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2] tools/python: pass more -rpath-link options to ld
Message-ID: <20201104172241.GY1447@mail-itl>
References: <8cf8cfa9-2b0c-123a-2d23-8932e61085fa@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="VBg0eA7b+gOCQW/G"
Content-Disposition: inline
In-Reply-To: <8cf8cfa9-2b0c-123a-2d23-8932e61085fa@suse.com>


--VBg0eA7b+gOCQW/G
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH v2] tools/python: pass more -rpath-link options to ld

On Wed, Nov 04, 2020 at 03:57:49PM +0100, Jan Beulich wrote:
> With the split of libraries, I've observed a number of warnings from
> (old?) ld.
> 
> Instead of duplicating the additions in two places, introduce a setup.py
> make variable holding all the common parts of the invocations.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

> ---
> v2: Pass on and use SHLIB_libxen*.
> ---
> It's unclear to me whether this is ld version dependent - the pattern
> of where I've seen such warnings doesn't suggest a clear version
> dependency.
> 
> Obviously (I think) the other similar variables (XEN_libxen*,
> CFLAGS_libxen*, etc) would better also be made use of to eliminate at
> least most of the PATH_* variables, but that's not the purpose of this
> change.
> 
> --- a/tools/python/Makefile
> +++ b/tools/python/Makefile
> @@ -8,19 +8,21 @@ PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
>  PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>  INSTALL_LOG = build/installed_files.txt
>  
> +setup.py = CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" \
> +           SHLIB_libxenctrl="$(SHLIB_libxenctrl)" \
> +           SHLIB_libxenguest="$(SHLIB_libxenguest)" \
> +           SHLIB_libxenstore="$(SHLIB_libxenstore)" \
> +           $(PYTHON) setup.py
> +
>  .PHONY: build
>  build:
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> +	$(setup.py) build
>  
>  .PHONY: install
>  install:
>  	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
> -
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> -		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> -		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> +	$(setup.py) install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>  		--root="$(DESTDIR)" --force
> -
>  	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
>  	$(INSTALL_PYTHON_PROG) scripts/verify-stream-v2 $(DESTDIR)$(LIBEXEC_BIN)
>  
> --- a/tools/python/setup.py
> +++ b/tools/python/setup.py
> @@ -4,6 +4,10 @@ import os, sys
>  
>  XEN_ROOT = "../.."
>  
> +SHLIB_libxenctrl = os.environ['SHLIB_libxenctrl'].split()
> +SHLIB_libxenguest = os.environ['SHLIB_libxenguest'].split()
> +SHLIB_libxenstore = os.environ['SHLIB_libxenstore'].split()
> +
>  extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
>  
>  PATH_XEN      = XEN_ROOT + "/tools/include"
> @@ -24,7 +28,7 @@ xc = Extension("xc",
>                 library_dirs       = [ PATH_LIBXENCTRL, PATH_LIBXENGUEST ],
>                 libraries          = [ "xenctrl", "xenguest" ],
>                 depends            = [ PATH_LIBXENCTRL + "/libxenctrl.so", PATH_LIBXENGUEST + "/libxenguest.so" ],
> -               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
> +               extra_link_args    = SHLIB_libxenctrl + SHLIB_libxenguest,
>                 sources            = [ "xen/lowlevel/xc/xc.c" ])
>  
>  xs = Extension("xs",
> @@ -33,6 +37,7 @@ xs = Extension("xs",
>                 library_dirs       = [ PATH_XENSTORE ],
>                 libraries          = [ "xenstore" ],
>                 depends            = [ PATH_XENSTORE + "/libxenstore.so" ],
> +               extra_link_args    = SHLIB_libxenstore,
>                 sources            = [ "xen/lowlevel/xs/xs.c" ])
>  
>  plat = os.uname()[0]

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--VBg0eA7b+gOCQW/G
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+i42EACgkQ24/THMrX
1yw3RAf/YF4OFhGI1/xSacRJqIJ/Myd8w6a3DNyWJ7PYwWBcqhX2xpuxhnzzjLnJ
bj2RO4L91NW8s915dbk+OA4dPZ3/+b4m7FicsbHwmUCyQkQBwadcV7R/UZtuXiHD
fYIPWarRRdiFHVpBMS0wup8GO7zCn+xEKB5CuFwxQQq5aEymCsspuJ3OakvJxoQL
ISDJo1fe8n7lNy9jQ60eIa9A8WXhQfmHuNCjy14+a8KbukwBlMuoha7WUjn5mg6Y
HsC1crBMxntWnAe89RB2EzoCnwZo1gyfpyKWldrtqvql40Si3GD4YQdL24lnBfOF
PnDyIksy2bO9OqeZ9k7fGvkarqvTWQ==
=G1AO
-----END PGP SIGNATURE-----

--VBg0eA7b+gOCQW/G--
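As an aside, the env-var handoff the quoted setup.py hunk relies on (make exports each SHLIB_libxen* as one space-separated string of linker flags, and setup.py splits it into a list suitable for distutils' Extension extra_link_args) can be sketched as follows; the flag values here are made-up examples, not Xen's real rpath-link paths:

```python
import os

# Simulate what the make rule does: export the flags as one string.
# The value is a made-up example, not Xen's actual build layout.
os.environ["SHLIB_libxenctrl"] = (
    "-Wl,-rpath-link=../../tools/libs/toollog "
    "-Wl,-rpath-link=../../tools/libs/call"
)

# What the quoted setup.py hunk does: split the string into a flag
# list, which Extension(extra_link_args=...) expects.
SHLIB_libxenctrl = os.environ["SHLIB_libxenctrl"].split()
print(SHLIB_libxenctrl)
# -> ['-Wl,-rpath-link=../../tools/libs/toollog',
#     '-Wl,-rpath-link=../../tools/libs/call']
```

Because str.split() with no argument splits on any run of whitespace, stray double spaces in the make variable are harmless, but flags containing embedded spaces would be broken apart.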


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 17:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 17:25:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19440.44636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMXW-0004yW-O1; Wed, 04 Nov 2020 17:25:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19440.44636; Wed, 04 Nov 2020 17:25:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMXW-0004yP-Kt; Wed, 04 Nov 2020 17:25:42 +0000
Received: by outflank-mailman (input) for mailman id 19440;
 Wed, 04 Nov 2020 17:25:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aHic=EK=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kaMXV-0004yK-0m
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 17:25:41 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 38d4fb6b-17e7-4f7b-aae3-b74dfee7aad7;
 Wed, 04 Nov 2020 17:25:36 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-206-BliftGsjOkqSm4-6LQDSYg-1; Wed, 04 Nov 2020 12:25:31 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 488668049CB;
 Wed,  4 Nov 2020 17:25:29 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4A6315C5DE;
 Wed,  4 Nov 2020 17:25:21 +0000 (UTC)
X-Inumbo-ID: 38d4fb6b-17e7-4f7b-aae3-b74dfee7aad7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604510736;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qH+Ft2xr8oMmaql/QpsL3d2WQdY3tolquMseQzbcMBo=;
	b=Q6SJsWvvVfwUxbPJ+GeFpNT1OqX3Ek8gG8m/Ht3CmiDLQLp/3+MtklHRZb6U0sJOVQBGB8
	udaOPbtYX1035iDVckbRvArURWgq/hT4y9g2wLOKgAQXywtHMj1T2GeSuOohlZkFG+lldH
	LjX9mW8VV1wm5uc+/J4q3/jNz8xq9UI=
X-MC-Unique: BliftGsjOkqSm4-6LQDSYg-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Igor Mammedov <imammedo@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Artyom Tarasenko <atar4qemu@gmail.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH 4/7] qom: Replace void* parameter with Property* on field getters/setters
Date: Wed,  4 Nov 2020 12:25:09 -0500
Message-Id: <20201104172512.2381656-5-ehabkost@redhat.com>
In-Reply-To: <20201104172512.2381656-1-ehabkost@redhat.com>
References: <20201104172512.2381656-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

All field property getters and setters must interpret the fourth
argument as Property*.  Change the function signature of field
property getters and setters to indicate that.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/qom/field-property-internal.h |   8 +-
 include/qom/field-property.h          |  26 ++++---
 backends/tpm/tpm_util.c               |  11 ++-
 hw/block/xen-block.c                  |   6 +-
 hw/core/qdev-properties-system.c      |  86 +++++++++-------------
 hw/s390x/css.c                        |   6 +-
 hw/s390x/s390-pci-bus.c               |   6 +-
 hw/vfio/pci-quirks.c                  |  10 +--
 qom/property-types.c                  | 102 +++++++++-----------------
 target/sparc/cpu.c                    |   4 +-
 10 files changed, 105 insertions(+), 160 deletions(-)

diff --git a/include/qom/field-property-internal.h b/include/qom/field-property-internal.h
index 7aa27ce836..bc7d25033d 100644
--- a/include/qom/field-property-internal.h
+++ b/include/qom/field-property-internal.h
@@ -9,9 +9,9 @@
 #define QOM_STATIC_PROPERTY_INTERNAL_H
 
 void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp);
+                         Property *prop, Error **errp);
 void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp);
+                         Property *prop, Error **errp);
 
 void field_prop_set_default_value_enum(ObjectProperty *op,
                                        const Property *prop);
@@ -21,9 +21,9 @@ void field_prop_set_default_value_uint(ObjectProperty *op,
                                        const Property *prop);
 
 void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
-                          void *opaque, Error **errp);
+                          Property *prop, Error **errp);
 void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
-                           void *opaque, Error **errp);
+                           Property *prop, Error **errp);
 
 /**
  * object_property_add_field: Add a field property to an object instance
diff --git a/include/qom/field-property.h b/include/qom/field-property.h
index e64a2b3c07..438bb25896 100644
--- a/include/qom/field-property.h
+++ b/include/qom/field-property.h
@@ -54,6 +54,18 @@ struct Property {
     const char   *link_type;
 };
 
+/**
+ * typedef FieldAccessor: a field property getter or setter function
+ * @obj: the object instance
+ * @v: the visitor that contains the property data
+ * @name: the name of the property
+ * @prop: the field property definition
+ * @errp: pointer to error information
+ */
+typedef void FieldAccessor(Object *obj, Visitor *v,
+                           const char *name, Property *prop,
+                           Error **errp);
+
 /**
  * struct PropertyInfo: information on a specific QOM property type
  */
@@ -71,16 +83,10 @@ struct PropertyInfo {
     /** @create: Optional callback for creation of property */
     ObjectProperty *(*create)(ObjectClass *oc, const char *name,
                               Property *prop);
-    /**
-     * @get: Property getter.  The opaque parameter will point to
-     *        the &Property struct for the property.
-     */
-    ObjectPropertyAccessor *get;
-    /**
-     * @set: Property setter.  The opaque parameter will point to
-     *        the &Property struct for the property.
-     */
-    ObjectPropertyAccessor *set;
+    /** @get: Property getter */
+    FieldAccessor *get;
+    /** @set: Property setter */
+    FieldAccessor *set;
     /**
      * @release: Optional release function, called when the object
      * is destroyed
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index bb1ab34a75..e8837938e5 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -32,10 +32,10 @@
 
 /* tpm backend property */
 
-static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void get_tpm(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    TPMBackend **be = object_field_prop_ptr(obj, opaque);
+    TPMBackend **be = object_field_prop_ptr(obj, prop);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -43,10 +43,9 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
     g_free(p);
 }
 
-static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void set_tpm(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
     char *str;
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 718d886e5c..c1ee634639 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -333,9 +333,8 @@ static char *disk_to_vbd_name(unsigned int disk)
 }
 
 static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+                               Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str;
 
@@ -393,9 +392,8 @@ static int vbd_name_to_disk(const char *name, const char **endp,
 }
 
 static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+                               Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 8da68f076c..4c649cb4b2 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -58,10 +58,9 @@ static bool check_prop_still_unset(Object *obj, const char *name,
 
 /* --- drive --- */
 
-static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
-                      Error **errp)
+static void get_drive(Object *obj, Visitor *v, const char *name,
+                      Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     void **ptr = object_field_prop_ptr(obj, prop);
     const char *value;
     char *p;
@@ -165,16 +164,16 @@ fail:
     g_free(str);
 }
 
-static void set_drive(Object *obj, Visitor *v, const char *name, void *opaque,
+static void set_drive(Object *obj, Visitor *v, const char *name, Property *prop,
                       Error **errp)
 {
-    set_drive_helper(obj, v, name, opaque, false, errp);
+    set_drive_helper(obj, v, name, prop, false, errp);
 }
 
 static void set_drive_iothread(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+                               Property *prop, Error **errp)
 {
-    set_drive_helper(obj, v, name, opaque, true, errp);
+    set_drive_helper(obj, v, name, prop, true, errp);
 }
 
 static void release_drive(Object *obj, const char *name, void *opaque)
@@ -211,10 +210,10 @@ const PropertyInfo qdev_prop_drive_iothread = {
 
 /* --- character device --- */
 
-static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void get_chr(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    CharBackend *be = object_field_prop_ptr(obj, opaque);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -222,10 +221,9 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
     g_free(p);
 }
 
-static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void set_chr(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     CharBackend *be = object_field_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
@@ -282,10 +280,9 @@ const PropertyInfo qdev_prop_chr = {
  *   01:02:03:04:05:06
  *   01-02-03-04-05-06
  */
-static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void get_mac(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     MACAddr *mac = object_field_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
@@ -297,10 +294,9 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
     visit_type_str(v, name, &p, errp);
 }
 
-static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void set_mac(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     MACAddr *mac = object_field_prop_ptr(obj, prop);
     int i, pos;
     char *str;
@@ -360,9 +356,8 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
 
 /* --- netdev device --- */
 static void get_netdev(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
@@ -371,9 +366,8 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
 }
 
 static void set_netdev(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
@@ -433,9 +427,8 @@ const PropertyInfo qdev_prop_netdev = {
 
 /* --- audiodev --- */
 static void get_audiodev(Object *obj, Visitor *v, const char* name,
-                         void *opaque, Error **errp)
+                         Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
@@ -444,9 +437,8 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
 }
 
 static void set_audiodev(Object *obj, Visitor *v, const char* name,
-                         void *opaque, Error **errp)
+                         Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
@@ -545,10 +537,9 @@ const PropertyInfo qdev_prop_losttickpolicy = {
 /* --- blocksize --- */
 
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
-                          void *opaque, Error **errp)
+                          Property *prop, Error **errp)
 {
     DeviceState *dev = DEVICE(obj);
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
@@ -634,9 +625,8 @@ const PropertyInfo qdev_prop_multifd_compression = {
  *   and type is a non-negative decimal integer
  */
 static void get_reserved_region(Object *obj, Visitor *v, const char *name,
-                                void *opaque, Error **errp)
+                                Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
@@ -650,9 +640,8 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
 }
 
 static void set_reserved_region(Object *obj, Visitor *v, const char *name,
-                                void *opaque, Error **errp)
+                                Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
@@ -712,9 +701,8 @@ const PropertyInfo qdev_prop_reserved_region = {
  * bus-local address, i.e. "$slot" or "$slot.$fn"
  */
 static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
-                          void *opaque, Error **errp)
+                          Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int32_t value, *ptr = object_field_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
@@ -774,9 +762,8 @@ const PropertyInfo qdev_prop_pci_devfn = {
 /* --- pci host address --- */
 
 static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
-                                 void *opaque, Error **errp)
+                                 Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
@@ -800,9 +787,8 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
  *   if <domain> is not supplied, it's assumed to be 0.
  */
 static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
-                                 void *opaque, Error **errp)
+                                 Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
@@ -889,9 +875,8 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
 /* --- PCIELinkSpeed 2_5/5/8/16 -- */
 
 static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
-                                   void *opaque, Error **errp)
+                                   Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
@@ -917,9 +902,8 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 }
 
 static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
-                                   void *opaque, Error **errp)
+                                   Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
@@ -959,9 +943,8 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
 /* --- PCIELinkWidth 1/2/4/8/12/16/32 -- */
 
 static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
-                                   void *opaque, Error **errp)
+                                   Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
@@ -996,9 +979,8 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 }
 
 static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
-                                   void *opaque, Error **errp)
+                                   Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
@@ -1046,10 +1028,9 @@ const PropertyInfo qdev_prop_pcie_link_width = {
 
 /* --- UUID --- */
 
-static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
-                     Error **errp)
+static void get_uuid(Object *obj, Visitor *v, const char *name,
+                     Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
@@ -1061,10 +1042,9 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 
 #define UUID_VALUE_AUTO        "auto"
 
-static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
-                    Error **errp)
+static void set_uuid(Object *obj, Visitor *v, const char *name,
+                    Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char *str;
 
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index fe47751df4..1400d80689 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2341,9 +2341,8 @@ void css_reset(void)
 }
 
 static void get_css_devid(Object *obj, Visitor *v, const char *name,
-                          void *opaque, Error **errp)
+                          Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
@@ -2370,9 +2369,8 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
  * parse <cssid>.<ssid>.<devid> and assert valid range for cssid/ssid
  */
 static void set_css_devid(Object *obj, Visitor *v, const char *name,
-                          void *opaque, Error **errp)
+                          Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 99b18d56ba..a29bba17b4 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1320,19 +1320,17 @@ static void s390_pci_device_reset(DeviceState *dev)
 }
 
 static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp)
+                             Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
 
 static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp)
+                             Property *prop, Error **errp)
 {
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint32(v, name, ptr, errp)) {
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index fc8d63c850..34f5f5dce2 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1485,20 +1485,18 @@ void vfio_setup_resetfn_quirk(VFIOPCIDevice *vdev)
  * https://lists.gnu.org/archive/html/qemu-devel/2017-08/pdfUda5iEpgOS.pdf
  */
 static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
-                                       const char *name, void *opaque,
-                                       Error **errp)
+                                       const char *name,
+                                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
 
 static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
-                                       const char *name, void *opaque,
-                                       Error **errp)
+                                       const char *name,
+                                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint8_t value, *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint8(v, name, &value, errp)) {
diff --git a/qom/property-types.c b/qom/property-types.c
index 856b5ae76d..82a5932f4a 100644
--- a/qom/property-types.c
+++ b/qom/property-types.c
@@ -8,18 +8,16 @@
 #include "qemu/uuid.h"
 
 void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp)
+                         Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
 
 void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp)
+                         Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
@@ -59,9 +57,8 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
 }
 
 static void prop_get_bit(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp)
+                         Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint32_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
@@ -69,9 +66,8 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
 }
 
 static void prop_set_bit(Object *obj, Visitor *v, const char *name,
-                         void *opaque, Error **errp)
+                         Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     bool value;
 
     if (!visit_type_bool(v, name, &value, errp)) {
@@ -113,9 +109,8 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
 }
 
 static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
-                           void *opaque, Error **errp)
+                           Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint64_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
@@ -123,9 +118,8 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
 }
 
 static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
-                           void *opaque, Error **errp)
+                           Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     bool value;
 
     if (!visit_type_bool(v, name, &value, errp)) {
@@ -144,19 +138,17 @@ const PropertyInfo prop_info_bit64 = {
 
 /* --- bool --- */
 
-static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
-                     Error **errp)
+static void get_bool(Object *obj, Visitor *v, const char *name,
+                     Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
 
-static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
-                     Error **errp)
+static void set_bool(Object *obj, Visitor *v, const char *name,
+                     Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
@@ -171,19 +163,17 @@ const PropertyInfo prop_info_bool = {
 
 /* --- 8bit integer --- */
 
-static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
-                      Error **errp)
+static void get_uint8(Object *obj, Visitor *v, const char *name,
+                      Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
 
-static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
-                      Error **errp)
+static void set_uint8(Object *obj, Visitor *v, const char *name,
+                      Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
@@ -211,18 +201,16 @@ const PropertyInfo prop_info_uint8 = {
 /* --- 16bit integer --- */
 
 static void get_uint16(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
 
 static void set_uint16(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
@@ -238,36 +226,32 @@ const PropertyInfo prop_info_uint16 = {
 /* --- 32bit integer --- */
 
 static void get_uint32(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
 
 static void set_uint32(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
 
 void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
-                          void *opaque, Error **errp)
+                          Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
 
-static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
-                      Error **errp)
+static void set_int32(Object *obj, Visitor *v, const char *name,
+                      Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
@@ -290,36 +274,32 @@ const PropertyInfo prop_info_int32 = {
 /* --- 64bit integer --- */
 
 static void get_uint64(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
 
 static void set_uint64(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
 
 static void get_int64(Object *obj, Visitor *v, const char *name,
-                      void *opaque, Error **errp)
+                      Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
 
 static void set_int64(Object *obj, Visitor *v, const char *name,
-                      void *opaque, Error **errp)
+                      Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
@@ -348,9 +328,8 @@ static void release_string(Object *obj, const char *name, void *opaque)
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     char **ptr = object_field_prop_ptr(obj, prop);
 
     if (!*ptr) {
@@ -362,9 +341,8 @@ static void get_string(Object *obj, Visitor *v, const char *name,
 }
 
 static void set_string(Object *obj, Visitor *v, const char *name,
-                       void *opaque, Error **errp)
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     char **ptr = object_field_prop_ptr(obj, prop);
     char *str;
 
@@ -396,19 +374,17 @@ const PropertyInfo prop_info_on_off_auto = {
 /* --- 32bit unsigned int 'size' type --- */
 
 void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
-                           void *opaque, Error **errp)
+                           Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
 }
 
-static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
-                       Error **errp)
+static void set_size32(Object *obj, Visitor *v, const char *name,
+                       Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
 
@@ -437,14 +413,8 @@ const PropertyInfo prop_info_size32 = {
 /* --- support for array properties --- */
 
 static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
-                              void *opaque, Error **errp)
+                              Property *prop, Error **errp)
 {
-    /* Setter for the property which defines the length of a
-     * variable-sized property array. As well as actually setting the
-     * array-length field in the device struct, we have to create the
-     * array itself and dynamically add the corresponding properties.
-     */
-    Property *prop = opaque;
     ObjectProperty *op = object_property_find_err(obj, name, &error_abort);
     uint32_t *alenptr = object_field_prop_ptr(obj, prop);
     void **arrayptr = (void *)obj + prop->arrayoffset;
@@ -500,19 +470,17 @@ const PropertyInfo prop_info_arraylen = {
 
 /* --- 64bit unsigned int 'size' type --- */
 
-static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
-                     Error **errp)
+static void get_size(Object *obj, Visitor *v, const char *name,
+                     Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
 
-static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
-                     Error **errp)
+static void set_size(Object *obj, Visitor *v, const char *name,
+                     Property *prop, Error **errp)
 {
-    Property *prop = opaque;
     uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 5a9397f19a..3acc99c29c 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -787,7 +787,7 @@ static void sparc_cpu_initfn(Object *obj)
 }
 
 static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+                               Property *prop, Error **errp)
 {
     SPARCCPU *cpu = SPARC_CPU(obj);
     int64_t value = cpu->env.def.nwindows;
@@ -796,7 +796,7 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
 }
 
 static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+                               Property *prop, Error **errp)
 {
     const int64_t min = MIN_NWINDOWS;
     const int64_t max = MAX_NWINDOWS;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 17:25:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 17:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19441.44649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMXc-00051B-5T; Wed, 04 Nov 2020 17:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19441.44649; Wed, 04 Nov 2020 17:25:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMXc-000513-1m; Wed, 04 Nov 2020 17:25:48 +0000
Received: by outflank-mailman (input) for mailman id 19441;
 Wed, 04 Nov 2020 17:25:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aHic=EK=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kaMXa-00050W-Fg
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 17:25:46 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id dff89702-31e5-4c84-b6f6-48a8ba53bbbd;
 Wed, 04 Nov 2020 17:25:43 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-333-Ue3Ir6wOOOWpdg8TaOCtAA-1; Wed, 04 Nov 2020 12:25:41 -0500
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D20F411BD342;
 Wed,  4 Nov 2020 17:25:38 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E97BE65F5E;
 Wed,  4 Nov 2020 17:25:31 +0000 (UTC)
X-Inumbo-ID: dff89702-31e5-4c84-b6f6-48a8ba53bbbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604510742;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Q5BeNYOZ4RDTFOfuNV6EvBOecZflftp3s5pV01xgFqI=;
	b=WhaUsOVnDkpsIxNfLQzar1rNZwy/plNMQavYBea5cAJG2vlTjKx1pG37JZzMkM93EenxBM
	29d+9a5MsGnorTl3PpoiMluMO0oqIcs+c5YSDv1OV84qKxcdNBhoDObBAZHpNtEFrj5TAc
	WDzVjOVtrHBpb3J1PtJHYydVdXNNubs=
X-MC-Unique: Ue3Ir6wOOOWpdg8TaOCtAA-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Igor Mammedov <imammedo@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH 6/7] qom: Add FIELD_PTR, a type-safe wrapper for object_field_prop_ptr()
Date: Wed,  4 Nov 2020 12:25:11 -0500
Message-Id: <20201104172512.2381656-7-ehabkost@redhat.com>
In-Reply-To: <20201104172512.2381656-1-ehabkost@redhat.com>
References: <20201104172512.2381656-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Introduce a FIELD_PTR macro that ensures the struct field being
accessed has the expected size, and returns a pointer of the
correct type.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/qom/field-property.h     | 21 ++++++++++-
 backends/tpm/tpm_util.c          |  6 ++--
 hw/block/xen-block.c             |  4 +--
 hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
 hw/s390x/css.c                   |  4 +--
 hw/s390x/s390-pci-bus.c          |  4 +--
 hw/vfio/pci-quirks.c             |  4 +--
 qom/field-property.c             |  3 +-
 qom/property-types.c             | 60 +++++++++++++++++---------------
 9 files changed, 89 insertions(+), 67 deletions(-)

diff --git a/include/qom/field-property.h b/include/qom/field-property.h
index 1d3bf9699b..58baaca160 100644
--- a/include/qom/field-property.h
+++ b/include/qom/field-property.h
@@ -125,6 +125,25 @@ object_class_property_add_field(ObjectClass *oc, const char *name,
                                 Property *prop,
                                 ObjectPropertyAllowSet allow_set);
 
-void *object_field_prop_ptr(Object *obj, Property *prop);
+/**
+ * object_field_prop_ptr: Get pointer to property field
+ * @obj: the object instance
+ * @prop: field property definition
+ * @expected_size: expected size of struct field
+ *
+ * Don't use this function directly; use the FIELD_PTR() macro instead.
+ */
+void *object_field_prop_ptr(Object *obj, Property *prop, size_t expected_size);
+
+/**
+ * FIELD_PTR: Get pointer to struct field for property
+ *
+ * This returns a pointer of type @type to the struct field
+ * containing the property value.
+ *
+ * @type must match the expected type for the property.
+ */
+#define FIELD_PTR(obj, prop, type) \
+    ((type *)object_field_prop_ptr((obj), (prop), sizeof(type)))
 
 #endif
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index 556e21388c..da80379404 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,7 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    TPMBackend **be = object_field_prop_ptr(obj, prop);
+    TPMBackend **be = FIELD_PTR(obj, prop, TPMBackend *);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -46,7 +46,7 @@ static void get_tpm(Object *obj, Visitor *v, const char *name,
 static void set_tpm(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
+    TPMBackend *s, **be = FIELD_PTR(obj, prop, TPMBackend *);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -65,7 +65,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name,
 
 static void release_tpm(Object *obj, const char *name, Property *prop)
 {
-    TPMBackend **be = object_field_prop_ptr(obj, prop);
+    TPMBackend **be = FIELD_PTR(obj, prop, TPMBackend *);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index c1ee634639..390bf417ab 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -335,7 +335,7 @@ static char *disk_to_vbd_name(unsigned int disk)
 static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                 Property *prop, Error **errp)
 {
-    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = FIELD_PTR(obj, prop, XenBlockVdev);
     char *str;
 
     switch (vdev->type) {
@@ -394,7 +394,7 @@ static int vbd_name_to_disk(const char *name, const char **endp,
 static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                 Property *prop, Error **errp)
 {
-    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = FIELD_PTR(obj, prop, XenBlockVdev);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 2fdd5863bb..1ec64514b9 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -61,7 +61,7 @@ static bool check_prop_still_unset(Object *obj, const char *name,
 static void get_drive(Object *obj, Visitor *v, const char *name,
                       Property *prop, Error **errp)
 {
-    void **ptr = object_field_prop_ptr(obj, prop);
+    void **ptr = FIELD_PTR(obj, prop, void *);
     const char *value;
     char *p;
 
@@ -87,7 +87,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = object_field_prop_ptr(obj, prop);
+    void **ptr = FIELD_PTR(obj, prop, void *);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -179,7 +179,7 @@ static void set_drive_iothread(Object *obj, Visitor *v, const char *name,
 static void release_drive(Object *obj, const char *name, Property *prop)
 {
     DeviceState *dev = DEVICE(obj);
-    BlockBackend **ptr = object_field_prop_ptr(obj, prop);
+    BlockBackend **ptr = FIELD_PTR(obj, prop, BlockBackend *);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -212,7 +212,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    CharBackend *be = object_field_prop_ptr(obj, prop);
+    CharBackend *be = FIELD_PTR(obj, prop, CharBackend);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -223,7 +223,7 @@ static void get_chr(Object *obj, Visitor *v, const char *name,
 static void set_chr(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    CharBackend *be = object_field_prop_ptr(obj, prop);
+    CharBackend *be = FIELD_PTR(obj, prop, CharBackend);
     Chardev *s;
     char *str;
 
@@ -258,7 +258,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name,
 
 static void release_chr(Object *obj, const char *name, Property *prop)
 {
-    CharBackend *be = object_field_prop_ptr(obj, prop);
+    CharBackend *be = FIELD_PTR(obj, prop, CharBackend);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -281,7 +281,7 @@ const PropertyInfo qdev_prop_chr = {
 static void get_mac(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    MACAddr *mac = object_field_prop_ptr(obj, prop);
+    MACAddr *mac = FIELD_PTR(obj, prop, MACAddr);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -295,7 +295,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name,
 static void set_mac(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    MACAddr *mac = object_field_prop_ptr(obj, prop);
+    MACAddr *mac = FIELD_PTR(obj, prop, MACAddr);
     int i, pos;
     char *str;
     const char *p;
@@ -356,7 +356,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
 static void get_netdev(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = FIELD_PTR(obj, prop, NICPeers);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -366,7 +366,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
 static void set_netdev(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = FIELD_PTR(obj, prop, NICPeers);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -427,7 +427,7 @@ const PropertyInfo qdev_prop_netdev = {
 static void get_audiodev(Object *obj, Visitor *v, const char* name,
                           Property *prop, Error **errp)
 {
-    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
+    QEMUSoundCard *card = FIELD_PTR(obj, prop, QEMUSoundCard);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -437,7 +437,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
 static void set_audiodev(Object *obj, Visitor *v, const char* name,
                           Property *prop, Error **errp)
 {
-    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
+    QEMUSoundCard *card = FIELD_PTR(obj, prop, QEMUSoundCard);
     AudioState *state;
     int err = 0;
     char *str;
@@ -538,7 +538,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
                            Property *prop, Error **errp)
 {
     DeviceState *dev = DEVICE(obj);
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -625,7 +625,7 @@ const PropertyInfo qdev_prop_multifd_compression = {
 static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                  Property *prop, Error **errp)
 {
-    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
+    ReservedRegion *rr = FIELD_PTR(obj, prop, ReservedRegion);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -640,7 +640,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
 static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                  Property *prop, Error **errp)
 {
-    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
+    ReservedRegion *rr = FIELD_PTR(obj, prop, ReservedRegion);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -701,7 +701,7 @@ const PropertyInfo qdev_prop_reserved_region = {
 static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                            Property *prop, Error **errp)
 {
-    int32_t value, *ptr = object_field_prop_ptr(obj, prop);
+    int32_t value, *ptr = FIELD_PTR(obj, prop, int32_t);
     unsigned int slot, fn, n;
     char *str;
 
@@ -739,7 +739,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    int32_t *ptr = object_field_prop_ptr(obj, prop);
+    int32_t *ptr = FIELD_PTR(obj, prop, int32_t);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -762,7 +762,7 @@ const PropertyInfo qdev_prop_pci_devfn = {
 static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                   Property *prop, Error **errp)
 {
-    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = FIELD_PTR(obj, prop, PCIHostDeviceAddress);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -787,7 +787,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                   Property *prop, Error **errp)
 {
-    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = FIELD_PTR(obj, prop, PCIHostDeviceAddress);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -875,7 +875,7 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
 static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                     Property *prop, Error **errp)
 {
-    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = FIELD_PTR(obj, prop, PCIExpLinkSpeed);
     int speed;
 
     switch (*p) {
@@ -902,7 +902,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                     Property *prop, Error **errp)
 {
-    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = FIELD_PTR(obj, prop, PCIExpLinkSpeed);
     int speed;
 
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
@@ -943,7 +943,7 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
 static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                     Property *prop, Error **errp)
 {
-    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = FIELD_PTR(obj, prop, PCIExpLinkWidth);
     int width;
 
     switch (*p) {
@@ -979,7 +979,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                     Property *prop, Error **errp)
 {
-    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = FIELD_PTR(obj, prop, PCIExpLinkWidth);
     int width;
 
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
@@ -1029,7 +1029,7 @@ const PropertyInfo qdev_prop_pcie_link_width = {
 static void get_uuid(Object *obj, Visitor *v, const char *name,
                      Property *prop, Error **errp)
 {
-    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
+    QemuUUID *uuid = FIELD_PTR(obj, prop, QemuUUID);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -1043,7 +1043,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name,
 static void set_uuid(Object *obj, Visitor *v, const char *name,
                     Property *prop, Error **errp)
 {
-    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
+    QemuUUID *uuid = FIELD_PTR(obj, prop, QemuUUID);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 1400d80689..5a38919a05 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2343,7 +2343,7 @@ void css_reset(void)
 static void get_css_devid(Object *obj, Visitor *v, const char *name,
                            Property *prop, Error **errp)
 {
-    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
+    CssDevId *dev_id = FIELD_PTR(obj, prop, CssDevId);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2371,7 +2371,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
 static void set_css_devid(Object *obj, Visitor *v, const char *name,
                            Property *prop, Error **errp)
 {
-    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
+    CssDevId *dev_id = FIELD_PTR(obj, prop, CssDevId);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index a29bba17b4..8e38787c99 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1322,7 +1322,7 @@ static void s390_pci_device_reset(DeviceState *dev)
 static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                           Property *prop, Error **errp)
 {
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1331,7 +1331,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
                           Property *prop, Error **errp)
 {
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
 
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 34f5f5dce2..93fb507ec4 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1488,7 +1488,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name,
                                        Property *prop, Error **errp)
 {
-    uint8_t *ptr = object_field_prop_ptr(obj, prop);
+    uint8_t *ptr = FIELD_PTR(obj, prop, uint8_t);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1497,7 +1497,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name,
                                        Property *prop, Error **errp)
 {
-    uint8_t value, *ptr = object_field_prop_ptr(obj, prop);
+    uint8_t value, *ptr = FIELD_PTR(obj, prop, uint8_t);
 
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
diff --git a/qom/field-property.c b/qom/field-property.c
index 865d4929a3..0932a799de 100644
--- a/qom/field-property.c
+++ b/qom/field-property.c
@@ -5,10 +5,11 @@
 #include "qom/field-property.h"
 #include "qom/field-property-internal.h"
 
-void *object_field_prop_ptr(Object *obj, Property *prop)
+void *object_field_prop_ptr(Object *obj, Property *prop, size_t expected_size)
 {
     void *ptr = obj;
     ptr += prop->offset;
+    assert(prop->size == expected_size);
     return ptr;
 }
 
diff --git a/qom/property-types.c b/qom/property-types.c
index 0182a73e38..e01f5a9fef 100644
--- a/qom/property-types.c
+++ b/qom/property-types.c
@@ -10,7 +10,7 @@
 void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
                           Property *prop, Error **errp)
 {
-    int *ptr = object_field_prop_ptr(obj, prop);
+    int *ptr = FIELD_PTR(obj, prop, int);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -18,7 +18,7 @@ void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
 void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
                           Property *prop, Error **errp)
 {
-    int *ptr = object_field_prop_ptr(obj, prop);
+    int *ptr = FIELD_PTR(obj, prop, int);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -47,7 +47,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    uint32_t *p = object_field_prop_ptr(obj, props);
+    uint32_t *p = FIELD_PTR(obj, props, uint32_t);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -59,7 +59,7 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                           Property *prop, Error **errp)
 {
-    uint32_t *p = object_field_prop_ptr(obj, prop);
+    uint32_t *p = FIELD_PTR(obj, prop, uint32_t);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -99,7 +99,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    uint64_t *p = object_field_prop_ptr(obj, props);
+    uint64_t *p = FIELD_PTR(obj, props, uint64_t);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -111,7 +111,7 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                             Property *prop, Error **errp)
 {
-    uint64_t *p = object_field_prop_ptr(obj, prop);
+    uint64_t *p = FIELD_PTR(obj, prop, uint64_t);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -141,7 +141,7 @@ const PropertyInfo prop_info_bit64 = {
 static void get_bool(Object *obj, Visitor *v, const char *name,
                      Property *prop, Error **errp)
 {
-    bool *ptr = object_field_prop_ptr(obj, prop);
+    bool *ptr = FIELD_PTR(obj, prop, bool);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -149,7 +149,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name,
 static void set_bool(Object *obj, Visitor *v, const char *name,
                      Property *prop, Error **errp)
 {
-    bool *ptr = object_field_prop_ptr(obj, prop);
+    bool *ptr = FIELD_PTR(obj, prop, bool);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -166,7 +166,7 @@ const PropertyInfo prop_info_bool = {
 static void get_uint8(Object *obj, Visitor *v, const char *name,
                       Property *prop, Error **errp)
 {
-    uint8_t *ptr = object_field_prop_ptr(obj, prop);
+    uint8_t *ptr = FIELD_PTR(obj, prop, uint8_t);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -174,7 +174,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name,
 static void set_uint8(Object *obj, Visitor *v, const char *name,
                       Property *prop, Error **errp)
 {
-    uint8_t *ptr = object_field_prop_ptr(obj, prop);
+    uint8_t *ptr = FIELD_PTR(obj, prop, uint8_t);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -203,7 +203,7 @@ const PropertyInfo prop_info_uint8 = {
 static void get_uint16(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    uint16_t *ptr = object_field_prop_ptr(obj, prop);
+    uint16_t *ptr = FIELD_PTR(obj, prop, uint16_t);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -211,7 +211,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
 static void set_uint16(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    uint16_t *ptr = object_field_prop_ptr(obj, prop);
+    uint16_t *ptr = FIELD_PTR(obj, prop, uint16_t);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -228,7 +228,7 @@ const PropertyInfo prop_info_uint16 = {
 static void get_uint32(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -236,7 +236,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
 static void set_uint32(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -244,7 +244,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
                            Property *prop, Error **errp)
 {
-    int32_t *ptr = object_field_prop_ptr(obj, prop);
+    int32_t *ptr = FIELD_PTR(obj, prop, int32_t);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -252,7 +252,7 @@ void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
 static void set_int32(Object *obj, Visitor *v, const char *name,
                       Property *prop, Error **errp)
 {
-    int32_t *ptr = object_field_prop_ptr(obj, prop);
+    int32_t *ptr = FIELD_PTR(obj, prop, int32_t);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -276,7 +276,7 @@ const PropertyInfo prop_info_int32 = {
 static void get_uint64(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    uint64_t *ptr = object_field_prop_ptr(obj, prop);
+    uint64_t *ptr = FIELD_PTR(obj, prop, uint64_t);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -284,7 +284,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
 static void set_uint64(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    uint64_t *ptr = object_field_prop_ptr(obj, prop);
+    uint64_t *ptr = FIELD_PTR(obj, prop, uint64_t);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -292,7 +292,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 static void get_int64(Object *obj, Visitor *v, const char *name,
                        Property *prop, Error **errp)
 {
-    int64_t *ptr = object_field_prop_ptr(obj, prop);
+    int64_t *ptr = FIELD_PTR(obj, prop, int64_t);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -300,7 +300,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
 static void set_int64(Object *obj, Visitor *v, const char *name,
                        Property *prop, Error **errp)
 {
-    int64_t *ptr = object_field_prop_ptr(obj, prop);
+    int64_t *ptr = FIELD_PTR(obj, prop, int64_t);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -323,13 +323,14 @@ const PropertyInfo prop_info_int64 = {
 
 static void release_string(Object *obj, const char *name, Property *prop)
 {
-    g_free(*(char **)object_field_prop_ptr(obj, prop));
+    char **ptr = FIELD_PTR(obj, prop, char *);
+    g_free(*ptr);
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    char **ptr = object_field_prop_ptr(obj, prop);
+    char **ptr = FIELD_PTR(obj, prop, char *);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -342,7 +343,7 @@ static void get_string(Object *obj, Visitor *v, const char *name,
 static void set_string(Object *obj, Visitor *v, const char *name,
                         Property *prop, Error **errp)
 {
-    char **ptr = object_field_prop_ptr(obj, prop);
+    char **ptr = FIELD_PTR(obj, prop, char *);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -375,7 +376,7 @@ const PropertyInfo prop_info_on_off_auto = {
 void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
                             Property *prop, Error **errp)
 {
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -384,7 +385,7 @@ void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
 static void set_size32(Object *obj, Visitor *v, const char *name,
                        Property *prop, Error **errp)
 {
-    uint32_t *ptr = object_field_prop_ptr(obj, prop);
+    uint32_t *ptr = FIELD_PTR(obj, prop, uint32_t);
     uint64_t value;
 
     if (!visit_type_size(v, name, &value, errp)) {
@@ -415,7 +416,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
                                Property *prop, Error **errp)
 {
     ObjectProperty *op = object_property_find_err(obj, name, &error_abort);
-    uint32_t *alenptr = object_field_prop_ptr(obj, prop);
+    uint32_t *alenptr = FIELD_PTR(obj, prop, uint32_t);
     void **arrayptr = (void *)obj + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -455,7 +456,8 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          */
         arrayprop->offset = eltptr - (void *)obj;
         arrayprop->size = prop->arrayfieldsize;
-        assert(object_field_prop_ptr(obj, arrayprop) == eltptr);
+        assert(object_field_prop_ptr(obj, arrayprop,
+                                     prop->arrayfieldsize) == eltptr);
         object_property_add_field(obj, propname, arrayprop, op->allow_set);
     }
 }
@@ -472,7 +474,7 @@ const PropertyInfo prop_info_arraylen = {
 static void get_size(Object *obj, Visitor *v, const char *name,
                      Property *prop, Error **errp)
 {
-    uint64_t *ptr = object_field_prop_ptr(obj, prop);
+    uint64_t *ptr = FIELD_PTR(obj, prop, uint64_t);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -480,7 +482,7 @@ static void get_size(Object *obj, Visitor *v, const char *name,
 static void set_size(Object *obj, Visitor *v, const char *name,
                      Property *prop, Error **errp)
 {
-    uint64_t *ptr = object_field_prop_ptr(obj, prop);
+    uint64_t *ptr = FIELD_PTR(obj, prop, uint64_t);
 
     visit_type_size(v, name, ptr, errp);
 }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 17:28:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 17:28:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19451.44660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMab-0005Ix-RP; Wed, 04 Nov 2020 17:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19451.44660; Wed, 04 Nov 2020 17:28:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMab-0005Iq-OR; Wed, 04 Nov 2020 17:28:53 +0000
Received: by outflank-mailman (input) for mailman id 19451;
 Wed, 04 Nov 2020 17:28:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XHTB=EK=linux.ibm.com=stefanb@srs-us1.protection.inumbo.net>)
 id 1kaMaa-0005Il-VO
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 17:28:53 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 235287bf-62c5-468d-9765-a112a8370f2c;
 Wed, 04 Nov 2020 17:28:51 +0000 (UTC)
Received: from pps.filterd (m0098413.ppops.net [127.0.0.1])
 by mx0b-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A4H1jHH087383; Wed, 4 Nov 2020 12:28:41 -0500
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0b-001b2d01.pphosted.com with ESMTP id 34kxep56r5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 04 Nov 2020 12:28:40 -0500
Received: from m0098413.ppops.net (m0098413.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0A4HHO74156506;
 Wed, 4 Nov 2020 12:28:40 -0500
Received: from ppma02wdc.us.ibm.com (aa.5b.37a9.ip4.static.sl-reverse.com
 [169.55.91.170])
 by mx0b-001b2d01.pphosted.com with ESMTP id 34kxep56qk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 04 Nov 2020 12:28:40 -0500
Received: from pps.filterd (ppma02wdc.us.ibm.com [127.0.0.1])
 by ppma02wdc.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 0A4HO5e3009461;
 Wed, 4 Nov 2020 17:28:39 GMT
Received: from b03cxnp08027.gho.boulder.ibm.com
 (b03cxnp08027.gho.boulder.ibm.com [9.17.130.19])
 by ppma02wdc.us.ibm.com with ESMTP id 34h0ew82ks-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 04 Nov 2020 17:28:39 +0000
Received: from b03ledav004.gho.boulder.ibm.com
 (b03ledav004.gho.boulder.ibm.com [9.17.130.235])
 by b03cxnp08027.gho.boulder.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 0A4HSWdS37814678
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 4 Nov 2020 17:28:32 GMT
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id D00BA78064;
 Wed,  4 Nov 2020 17:28:37 +0000 (GMT)
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 56BBB7805C;
 Wed,  4 Nov 2020 17:28:36 +0000 (GMT)
Received: from sbct-3.pok.ibm.com (unknown [9.47.158.153])
 by b03ledav004.gho.boulder.ibm.com (Postfix) with ESMTP;
 Wed,  4 Nov 2020 17:28:36 +0000 (GMT)
X-Inumbo-ID: 235287bf-62c5-468d-9765-a112a8370f2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=CQaIPTu1V00XFULpDd6hcs3dwTUolPI4jmr/URsyTf8=;
 b=sqLpSap+DF5NsmQCSX8z0sQORyvWvpk/qvxxRhQF8K305QsvTgacoLGJIZRhCHe9EPy6
 tmOkjWaWzxieFJhtBVAa7ZZgY4HCBSzXEhldLbRTNb3FvSppcxZTuagZu+0iFlD62GdO
 qTPVpCaA06CfbhXmmZvfzpFi6BGAvt33lAeRbQXCXjn1iHwu9KvgbjvTErblaAMOblYX
 eKwVhM9CYQKJJoBrKsXzWijcSitCVVnB5KkoFTZHf4UL9DAFjWl7t4uwAAb52utjN9CM
 perBcVSnBiPLIfhZ/bWTNlY4X+ARUUceurbjzpGAv64TAbflci+sBgk3ooKdsgFz8oL8 kQ== 
Subject: Re: [PATCH v2 22/44] qdev: Move dev->realized check to
 qdev_property_set()
To: Eduardo Habkost <ehabkost@redhat.com>, qemu-devel@nongnu.org
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Igor Mammedov <imammedo@redhat.com>, Eric Blake <eblake@redhat.com>,
        Markus Armbruster <armbru@redhat.com>,
        =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
        John Snow <jsnow@redhat.com>,
        =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>,
        Stefan Berger <stefanb@linux.vnet.ibm.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Anthony Perard <anthony.perard@citrix.com>,
        Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>,
        Max Reitz <mreitz@redhat.com>, Cornelia Huck <cohuck@redhat.com>,
        Halil Pasic <pasic@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Richard Henderson <rth@twiddle.net>,
        David Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>,
        Matthew Rosato <mjrosato@linux.ibm.com>,
        Alex Williamson <alex.williamson@redhat.com>,
        Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
        Artyom Tarasenko <atar4qemu@gmail.com>, xen-devel@lists.xenproject.org,
        qemu-block@nongnu.org, qemu-s390x@nongnu.org
References: <20201104160021.2342108-1-ehabkost@redhat.com>
 <20201104160021.2342108-23-ehabkost@redhat.com>
From: Stefan Berger <stefanb@linux.ibm.com>
Message-ID: <db0de4f5-81b9-cb59-c0e2-899d7350c64c@linux.ibm.com>
Date: Wed, 4 Nov 2020 12:28:35 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201104160021.2342108-23-ehabkost@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-04_11:2020-11-04,2020-11-04 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 mlxlogscore=999
 impostorscore=0 malwarescore=0 suspectscore=0 adultscore=0 clxscore=1011
 phishscore=0 priorityscore=1501 lowpriorityscore=0 bulkscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011040123

On 11/4/20 10:59 AM, Eduardo Habkost wrote:
> Every single qdev property setter function manually checks
> dev->realized.  We can just check dev->realized inside
> qdev_property_set() instead.
>
> The check is being added as a separate function
> (qdev_prop_allow_set()) because it will become a callback later.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>


> ---
> Changes v1 -> v2:
> * Removed unused variable at xen_block_set_vdev()
> * Redone patch after changes in the previous patches in the
>    series
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>   backends/tpm/tpm_util.c          |   6 --
>   hw/block/xen-block.c             |   6 --
>   hw/core/qdev-properties-system.c |  70 ----------------------
>   hw/core/qdev-properties.c        | 100 ++++++-------------------------
>   hw/s390x/css.c                   |   6 --
>   hw/s390x/s390-pci-bus.c          |   6 --
>   hw/vfio/pci-quirks.c             |   6 --
>   target/sparc/cpu.c               |   6 --
>   8 files changed, 18 insertions(+), 188 deletions(-)
>
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index dba2f6b04a..0b07cf55ea 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index 905e4acd97..bd1aef63a7 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -395,17 +395,11 @@ static int vbd_name_to_disk(const char *name, const char **endp,
>   static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *end;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index 202abd0e4b..0d3e57bba0 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -94,11 +94,6 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
>       bool blk_created = false;
>       int ret;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -230,17 +225,11 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       CharBackend *be = qdev_get_prop_ptr(obj, prop);
>       Chardev *s;
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -311,18 +300,12 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       MACAddr *mac = qdev_get_prop_ptr(obj, prop);
>       int i, pos;
>       char *str;
>       const char *p;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -390,7 +373,6 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
>   static void set_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
>       NetClientState **ncs = peers_ptr->ncs;
> @@ -398,11 +380,6 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
>       int queues, err = 0, i = 0;
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -469,18 +446,12 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
>   static void set_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
>       AudioState *state;
>       int err = 0;
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -582,11 +553,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
>       uint64_t value;
>       Error *local_err = NULL;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_size(v, name, &value, errp)) {
>           return;
>       }
> @@ -686,7 +652,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>   static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
>       Error *local_err = NULL;
> @@ -694,11 +659,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>       char *str;
>       int ret;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_str(v, name, &str, &local_err);
>       if (local_err) {
>           error_propagate(errp, local_err);
> @@ -754,17 +714,11 @@ const PropertyInfo qdev_prop_reserved_region = {
>   static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>       unsigned int slot, fn, n;
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, NULL)) {
>           if (!visit_type_int32(v, name, &value, errp)) {
>               return;
> @@ -848,7 +802,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>   static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
>       char *str, *p;
> @@ -857,11 +810,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>       unsigned long dom = 0, bus = 0;
>       unsigned int slot = 0, func = 0;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -971,16 +919,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>   static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
>       int speed;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
>                            errp)) {
>           return;
> @@ -1056,16 +998,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>   static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
>       int width;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_enum(v, name, &width, prop->info->enum_table,
>                            errp)) {
>           return;
> @@ -1128,16 +1064,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index 0e5ff81da8..ff36eb250e 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -24,6 +24,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
>       }
>   }
>   
> +/* returns: true if property is allowed to be set, false otherwise */
> +static bool qdev_prop_allow_set(Object *obj, const char *name,
> +                                Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +
> +    if (dev->realized) {
> +        qdev_prop_set_after_realize(dev, name, errp);
> +        return false;
> +    }
> +    return true;
> +}
> +
>   void qdev_prop_allow_set_link_before_realize(const Object *obj,
>                                                const char *name,
>                                                Object *val, Error **errp)
> @@ -65,6 +78,11 @@ static void field_prop_set(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> +
> +    if (!qdev_prop_allow_set(obj, name, errp)) {
> +        return;
> +    }
> +
>       return prop->info->set(obj, v, name, opaque, errp);
>   }
>   
> @@ -90,15 +108,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
>   void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
>                               void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
>   }
>   
> @@ -148,15 +160,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
>   static void prop_set_bit(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       bool value;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_bool(v, name, &value, errp)) {
>           return;
>       }
> @@ -208,15 +214,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
>   static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       bool value;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_bool(v, name, &value, errp)) {
>           return;
>       }
> @@ -245,15 +245,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       bool *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_bool(v, name, ptr, errp);
>   }
>   
> @@ -278,15 +272,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint8(v, name, ptr, errp);
>   }
>   
> @@ -323,15 +311,9 @@ void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
>   static void set_uint16(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint16(v, name, ptr, errp);
>   }
>   
> @@ -356,15 +338,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
>   static void set_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint32(v, name, ptr, errp);
>   }
>   
> @@ -380,15 +356,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
>   static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int32_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_int32(v, name, ptr, errp);
>   }
>   
> @@ -420,15 +390,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
>   static void set_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint64(v, name, ptr, errp);
>   }
>   
> @@ -444,15 +408,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
>   static void set_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int64_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_int64(v, name, ptr, errp);
>   }
>   
> @@ -495,16 +453,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
>   static void set_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       char **ptr = qdev_get_prop_ptr(obj, prop);
>       char *str;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -545,16 +497,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
>   static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
>                          Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>       uint64_t value;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_size(v, name, &value, errp)) {
>           return;
>       }
> @@ -621,10 +567,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>       const char *arrayname;
>       int i;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
>       if (*alenptr) {
>           error_setg(errp, "array size property %s may not be set more than once",
>                      name);
> @@ -864,15 +806,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_size(v, name, ptr, errp);
>   }
>   
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 7a44320d12..496e2c5801 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
>   static void set_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
>       char *str;
>       int num, n1, n2;
>       unsigned int cssid, ssid, devid;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index ab27b6e848..54fac3851d 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1331,16 +1331,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
>   static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
>       Property *prop = opaque;
>       uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_uint32(v, name, ptr, errp)) {
>           return;
>       }
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 53569925a2..802979635c 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          const char *name, void *opaque,
>                                          Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_uint8(v, name, &value, errp)) {
>           return;
>       }
> diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
> index f5cff4103b..3375fffb38 100644
> --- a/target/sparc/cpu.c
> +++ b/target/sparc/cpu.c
> @@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
>   static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       const int64_t min = MIN_NWINDOWS;
>       const int64_t max = MAX_NWINDOWS;
>       SPARCCPU *cpu = SPARC_CPU(obj);
>       int64_t value;
>   
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_int(v, name, &value, errp)) {
>           return;
>       }




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 17:54:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 17:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19495.44673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMzf-0007uR-TZ; Wed, 04 Nov 2020 17:54:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19495.44673; Wed, 04 Nov 2020 17:54:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaMzf-0007uK-Qe; Wed, 04 Nov 2020 17:54:47 +0000
Received: by outflank-mailman (input) for mailman id 19495;
 Wed, 04 Nov 2020 17:54:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UAnB=EK=kaod.org=groug@srs-us1.protection.inumbo.net>)
 id 1kaMze-0007uF-A8
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 17:54:46 +0000
Received: from smtpout1.mo804.mail-out.ovh.net (unknown [79.137.123.220])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5447aa65-15d6-4d2b-aa36-09d0fe60705f;
 Wed, 04 Nov 2020 17:54:44 +0000 (UTC)
Received: from mxplan5.mail.ovh.net (unknown [10.109.146.197])
 by mo804.mail-out.ovh.net (Postfix) with ESMTPS id A972570EF28F;
 Wed,  4 Nov 2020 18:54:41 +0100 (CET)
Received: from kaod.org (37.59.142.97) by DAG8EX1.mxp5.local (172.16.2.71)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2044.4; Wed, 4 Nov 2020
 18:54:40 +0100
X-Inumbo-ID: 5447aa65-15d6-4d2b-aa36-09d0fe60705f
Authentication-Results: garm.ovh; auth=pass (GARM-97G002a60b9b55-a377-4b09-8ef1-53f1e0dae1a2,
                    B675344909C57F45DE6B9FBDE8367EDF8CA03E23) smtp.auth=groug@kaod.org
Date: Wed, 4 Nov 2020 18:54:39 +0100
From: Greg Kurz <groug@kaod.org>
To: Christian Schoenebeck <qemu_oss@crudebyte.com>
CC: <qemu-devel@nongnu.org>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
	<philmd@redhat.com>, Fam Zheng <fam@euphon.net>, Thomas Huth
	<thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, "Daniel P .
 Berrange" <berrange@redhat.com>, Matthew Rosato <mjrosato@linux.ibm.com>,
	David Hildenbrand <david@redhat.com>, Alex =?UTF-8?B?QmVubsOpZQ==?=
	<alex.bennee@linaro.org>, Cornelia Huck <cohuck@redhat.com>, "Wainer dos
 Santos Moschetta" <wainersm@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>, <qemu-s390x@nongnu.org>,
	<xen-devel@lists.xenproject.org>, Anthony Perard <anthony.perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, "Richard
 Henderson" <rth@twiddle.net>
Subject: Re: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem
 between 9pfs and Xen
Message-ID: <20201104185439.41e9ddb3@bahia.lan>
In-Reply-To: <8965407.pN9RvXrJQ9@silver>
References: <20201104115706.3101190-1-philmd@redhat.com>
	<20201104115706.3101190-3-philmd@redhat.com>
	<8965407.pN9RvXrJQ9@silver>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.59.142.97]
X-ClientProxiedBy: DAG2EX2.mxp5.local (172.16.2.12) To DAG8EX1.mxp5.local
 (172.16.2.71)
X-Ovh-Tracer-GUID: 003e621b-1e17-4f77-9185-fd9f201e51e8
X-Ovh-Tracer-Id: 16924808878257838352
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100

On Wed, 04 Nov 2020 13:18:09 +0100
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> On Wednesday, 4 November 2020 12:57:04 CET Philippe Mathieu-Daudé wrote:
> > Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
> > CONFIG_9PFS (probably a wrong conflict resolution). This config is
> > not used anywhere. Backends depend on CONFIG_FSDEV_9P which itself
> > depends on CONFIG_VIRTFS.
> >
> > Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
> > fix the './configure --without-default-devices --enable-xen' build:
> >
> >   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
> > `xen_be_register_common': hw/xen/xen-legacy-backend.c:754: undefined
> > reference to `xen_9pfs_ops' /usr/bin/ld:
> > libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to
> > `local_ops' /usr/bin/ld:
> > libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference
> > to `synth_ops' /usr/bin/ld:
> > libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference
> > to `proxy_ops' collect2: error: ld returned 1 exit status
> >
> > Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > Acked-by: Greg Kurz <groug@kaod.org>
> > Tested-by: Greg Kurz <groug@kaod.org>
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >
> > Acked-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> >

Phil,

Same question as Connie's. Do you intend to get this merged, or should
Christian or I take care of it?

> > ---
> > v2: Reworded description (Greg)
> >
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Anthony Perard <anthony.perard@citrix.com>
> > Cc: Paul Durrant <paul@xen.org>
> > Cc: xen-devel@lists.xenproject.org
> > Cc: Greg Kurz <groug@kaod.org>
> > Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > ---
> >  hw/9pfs/Kconfig     | 4 ----
> >  hw/9pfs/meson.build | 2 +-
> >  2 files changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/hw/9pfs/Kconfig b/hw/9pfs/Kconfig
> > index d3ebd737301..3ae57496613 100644
> > --- a/hw/9pfs/Kconfig
> > +++ b/hw/9pfs/Kconfig
> > @@ -2,12 +2,8 @@ config FSDEV_9P
> >      bool
> >      depends on VIRTFS
> >
> > -config 9PFS
> > -    bool
> > -
> >  config VIRTIO_9P
> >      bool
> >      default y
> >      depends on VIRTFS && VIRTIO
> >      select FSDEV_9P
> > -    select 9PFS
> > diff --git a/hw/9pfs/meson.build b/hw/9pfs/meson.build
> > index cc094262122..99be5d91196 100644
> > --- a/hw/9pfs/meson.build
> > +++ b/hw/9pfs/meson.build
> > @@ -15,6 +15,6 @@
> >    'coxattr.c',
> >  ))
> >  fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
> > -softmmu_ss.add_all(when: 'CONFIG_9PFS', if_true: fs_ss)
> > +softmmu_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss)
> >
> >  specific_ss.add(when: 'CONFIG_VIRTIO_9P', if_true:
> > files('virtio-9p-device.c'))
>
> Best regards,
> Christian Schoenebeck
>
>



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 19:03:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 19:03:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19516.44685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaO3z-0005aF-B3; Wed, 04 Nov 2020 19:03:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19516.44685; Wed, 04 Nov 2020 19:03:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaO3z-0005a8-7N; Wed, 04 Nov 2020 19:03:19 +0000
Received: by outflank-mailman (input) for mailman id 19516;
 Wed, 04 Nov 2020 19:03:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kZ++=EK=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kaO3y-0005a3-FV
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 19:03:18 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 088a0cd9-35c6-4f53-9ba3-0485f02a93c2;
 Wed, 04 Nov 2020 19:03:17 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0A4J34pJ002460
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 4 Nov 2020 14:03:10 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0A4J34rm002459;
 Wed, 4 Nov 2020 11:03:04 -0800 (PST) (envelope-from ehem)
X-Inumbo-ID: 088a0cd9-35c6-4f53-9ba3-0485f02a93c2
Date: Wed, 4 Nov 2020 11:03:04 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
Message-ID: <20201104190304.GB1647@mattapan.m5p.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
 <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org>
 <alpine.DEB.2.21.2010230944090.12247@sstabellini-ThinkPad-T480s>
 <bf3b65d2-2642-f1f6-39f1-2f88433e9901@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <bf3b65d2-2642-f1f6-39f1-2f88433e9901@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Oct 26, 2020 at 09:19:49AM +0000, Julien Grall wrote:
> On 23/10/2020 17:57, Stefano Stabellini wrote:
> > On Fri, 23 Oct 2020, Julien Grall wrote:

> >> I am sort of okay to remove EXPERT.
> > 
> > OK. This would help (even without building it by default) because as you
> > go and look at the menu the first time, you'll find ACPI among the
> > options right away.
> 
> To be honest, this step is probably the easiest in the full process to 
> get Xen built and booted on Arm.
> 
> I briefly looked at Elliott's v2, and I can't help thinking that we are
> trying to re-invent EXPERT for ACPI because we think the feature is
> *more* important than any other feature gated by EXPERT.

Yet might that statement in fact be true?

Most of the features currently controlled by CONFIG_EXPERT are relatively
small tweaks that enable less commonly used features.  Some of those are
very high value in certain environments, but they're unimportant in
common environments.  Changing the scheduler might get you an extra
10-50% performance improvement in a special environment.

ACPI support isn't like this.  I don't know what Masami Hiramatsu's system
does if a CONFIG_ACPI=n Xen build is tried.  I haven't actually tried
such a build on mine, but from the code it looks like Xen would panic if
built that way: no output of any sort, simply a panic with the device
appearing to go inert.

> In fact, all the features behind EXPERT are important. But they have 
> been gated by EXPERT because they are not mature enough.

> But I don't think ACPI is mature enough to deserve a different 
> treatment. It would be more useful to get to the stage where ACPI can 
> work without any crash/issue first.

The difference is the severity of failure when the option is off, but
needs to be on.  With most CONFIG_EXPERT options the cost is merely
reduced performance, or a security posture unacceptable to some users.
CONFIG_ACPI=n on the wrong system could be a panic with *no* output.


Tainting sounds reasonable.  Messages in `dmesg` make sense.  A message
plus a 10-second pause on boot seems reasonable.  I think that if
CONFIG_ACPI is off, there should be code that tries to detect ACPI and
emits a warning if anything is detected.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 19:40:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 19:40:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19525.44696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaOdj-0000Zg-54; Wed, 04 Nov 2020 19:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19525.44696; Wed, 04 Nov 2020 19:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaOdj-0000ZZ-1z; Wed, 04 Nov 2020 19:40:15 +0000
Received: by outflank-mailman (input) for mailman id 19525;
 Wed, 04 Nov 2020 19:40:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaOdh-0000ZO-3m
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 19:40:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5babedc-fe41-4bfe-bfdc-0319de2e6995;
 Wed, 04 Nov 2020 19:40:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaOda-0005ke-JD; Wed, 04 Nov 2020 19:40:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaOda-0006VR-9Q; Wed, 04 Nov 2020 19:40:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaOda-0002LD-7C; Wed, 04 Nov 2020 19:40:06 +0000
X-Inumbo-ID: d5babedc-fe41-4bfe-bfdc-0319de2e6995
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zsTIWC5omPRoM0Rud0bdVoLt/wMHkoyf2BNhsJfJckk=; b=Yp+YfMMAFS+AnmbbSWKMeoHTBj
	yk9MS7K/7AMRnerl3T+IBDl1sw1YnxpPb9nSxHychEPhIO/OQJRjVW9wOkcitzYVOnTWb0TYEPCSU
	mCCc2N4G6FcmAzDOsNwc4e/LpCMvXOsuQHGgaGh7KY0byvNfw0T3JvDiespNLo6bCqrA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156393-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156393: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3d6e32347a3b57dac7f469a07c5f520e69bd070a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 19:40:06 +0000

flight 156393 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail pass in 156388

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3d6e32347a3b57dac7f469a07c5f520e69bd070a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   76 days
Failing since        152659  2020-08-21 14:07:39 Z   75 days  169 attempts
Testing same since   156388  2020-11-04 00:07:53 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59937 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 19:41:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 19:41:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19528.44712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaOeZ-0000gp-L4; Wed, 04 Nov 2020 19:41:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19528.44712; Wed, 04 Nov 2020 19:41:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaOeZ-0000gi-HN; Wed, 04 Nov 2020 19:41:07 +0000
Received: by outflank-mailman (input) for mailman id 19528;
 Wed, 04 Nov 2020 19:41:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaOeY-0000gd-Sx
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 19:41:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0cd4b88c-0e62-4d4a-8568-e6f11f89ba39;
 Wed, 04 Nov 2020 19:41:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaOeW-0005ll-MU; Wed, 04 Nov 2020 19:41:04 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaOeW-000869-FI; Wed, 04 Nov 2020 19:41:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Om8i=EK=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kaOeY-0000gd-Sx
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 19:41:06 +0000
X-Inumbo-ID: 0cd4b88c-0e62-4d4a-8568-e6f11f89ba39
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0cd4b88c-0e62-4d4a-8568-e6f11f89ba39;
	Wed, 04 Nov 2020 19:41:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vL2jdxn2WGUUZZ4WMBAgRTtVtQ6fVJOgQBbI3gv4Chc=; b=TtN5PqqLpMcM3siW711OznqaB6
	chs0gqFz1gMZdrvsboAdDl2KP+jmqtgUHXpQKI+leFtmJoQOkXICRdEHQOY2Mbw9EgL1bCl+A7lWU
	OQQP2Lzk7JV9XAmSPLef388oo271JuoUfvVc/tgN+YXr8Bw2iMchd8vqsZgN2Kbz507c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kaOeW-0005ll-MU; Wed, 04 Nov 2020 19:41:04 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239] helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kaOeW-000869-FI; Wed, 04 Nov 2020 19:41:04 +0000
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
 <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org>
 <alpine.DEB.2.21.2010230944090.12247@sstabellini-ThinkPad-T480s>
 <bf3b65d2-2642-f1f6-39f1-2f88433e9901@xen.org>
 <20201104190304.GB1647@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <18d349b9-59b7-0acc-bff7-d29b7e40ac44@xen.org>
Date: Wed, 4 Nov 2020 19:41:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104190304.GB1647@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 04/11/2020 19:03, Elliott Mitchell wrote:
> On Mon, Oct 26, 2020 at 09:19:49AM +0000, Julien Grall wrote:
>> On 23/10/2020 17:57, Stefano Stabellini wrote:
>>> On Fri, 23 Oct 2020, Julien Grall wrote:
> 
>>>> I am sort of okay to remove EXPERT.
>>>
>>> OK. This would help (even without building it by default) because as you
>>> go and look at the menu the first time, you'll find ACPI among the
>>> options right away.
>>
>> To be honest, this step is probably the easiest in the full process to
>> get Xen build and booted on Arm.
>>
>> I briefly looked at Elliot's v2, and I can't keep thinking that we are
>> trying to re-invent EXPERT for ACPI because we think the feature is
>> *more* important than any other feature gated by EXPERT.
> 
> Yet might that statement in fact be true?

Everyone has a different opinion on what's important or not. I am sure 
we can spend hours bikeshedding on that...

> 
> Most of the features currently controlled by CONFIG_EXPERT are relatively
> small tweaks which enable less often used features.  Some of those are
> very high value in certain environments, but they're unimportant in
> common environments.  Changing the scheduler might get you an extra
> 10-50% performance improvement in a special environment.
> 
> ACPI support isn't like this.  I'm unaware what Masami Hiramatsu's system
> does if a CONFIG_ACPI=n Xen build is tried.  I haven't actually tried
> such a build on mine, but from the code it looks like Xen would panic if
> built that way.  No output of any sort.  Simply panic with the device
> appearing to go inert.

There will always be cases where the console is not working:
   1) There is no driver in Xen
   2) There is no SPCR table present

I think you are in the second situation and you had to enable 
earlyprintk. Is that correct?

>> In fact, all the features behind EXPERT are important. But they have
>> been gated by EXPERT because they are not mature enough.
> 
>> But I don't think ACPI is mature enough to deserve a different
>> treatment. It would be more useful to get to the stage where ACPI can
>> work without any crash/issue first.
> 
> The difference is the severity of failure if the option is off, but needs
> to be on.  Most CONFIG_EXPERT options will merely be performance
> reductions or security situations unacceptable to some users.
> CONFIG_ACPI=n on the wrong system could be a panic with *no* output.
> 
> Tainting sounds reasonable.  Messages in `dmesg` make sense.  A message
> plus 10 second pause on boot seems reasonable.  I think if CONFIG_ACPI is
> off, there should be code to try to detect ACPI and emit a warning if
> anything is detected.

All of this would be moot if we focus on getting ACPI (or just a subset) 
working on a few platforms (e.g. RPI, Thunder-X).

I don't think we are too far from this achievement. This would IMHO 
be enough to move ACPI one rung up in the support "ladder". We might even 
be able to do this for 4.15.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 21:57:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 21:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19547.44723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaQmZ-0003cS-HG; Wed, 04 Nov 2020 21:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19547.44723; Wed, 04 Nov 2020 21:57:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaQmZ-0003cL-EH; Wed, 04 Nov 2020 21:57:31 +0000
Received: by outflank-mailman (input) for mailman id 19547;
 Wed, 04 Nov 2020 21:57:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cy9h=EK=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1kaQmX-0003cD-R2
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 21:57:29 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d83d3cb-9cca-4cdf-8fbe-cd88f00e2bb7;
 Wed, 04 Nov 2020 21:57:28 +0000 (UTC)
Received: from [10.10.1.24] (c-73-129-147-140.hsd1.md.comcast.net
 [73.129.147.140]) by mx.zohomail.com
 with SMTPS id 1604527035469513.1134307116392;
 Wed, 4 Nov 2020 13:57:15 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Cy9h=EK=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
	id 1kaQmX-0003cD-R2
	for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 21:57:29 +0000
X-Inumbo-ID: 5d83d3cb-9cca-4cdf-8fbe-cd88f00e2bb7
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5d83d3cb-9cca-4cdf-8fbe-cd88f00e2bb7;
	Wed, 04 Nov 2020 21:57:28 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604527037; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=j/vBMXehQxTPf7sjbk8ovqTsIeQESGcH1Jo9GH2nX9vZw1CWmSysGjeq0oUF2DY30xuxYktp63VpX7O4sqaOl77D0z5IZY+4GBvxgG0BjJI3BFcg0LPI4am99J+NwuDRHNILVc79EPq5U0cSLGA9M++CDjGgJGgX6dHcyS47SYg=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604527037; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=AVIeUoUbZjrTw79cxyPF13nENl8IE/JbmmTQieR44vY=; 
	b=CIHFCpLnAzdUMqEVawGs8VBg1mZ3z2LRumeHCsIQh5+z4LADKYmOpMc7xsWTMKhyW0RyrHSiqwt07YPtWD5jLGt4Ms6f3ojG9NRUa0Zk2injdvSsignPCQke1ru3Zt71BieZfDiLtQpCxYGz9b5tDmRCCGRQWUd9073Iva22HdU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604527037;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=AVIeUoUbZjrTw79cxyPF13nENl8IE/JbmmTQieR44vY=;
	b=DD/2zIBQoMxUuB0RKFcqNRmHcze6TkHdT2y5s1Vy2yWvxZJZBrMTY+Svkin6f8/i
	DvNHne8fwbH64p09FjslVNwmjfwxawdVcP2E4Sbx+8KaUwWWPoAbnNdU7oprbBhKGs5
	c84xwje6GOoC3RColeUIdKrkStWrLEIMNOIRFCg0=
Received: from [10.10.1.24] (c-73-129-147-140.hsd1.md.comcast.net [73.129.147.140]) by mx.zohomail.com
	with SMTPS id 1604527035469513.1134307116392; Wed, 4 Nov 2020 13:57:15 -0800 (PST)
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rich Persaud <persaur@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, iwj@xenproject.org,
 julien@xen.org, wl@xen.org, xen-devel@lists.xenproject.org,
 Daniel DeGraaf <dgdegra@tycho.nsa.gov>, Roman Shaposhnik <roman@zededa.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
 <E359BD65-2917-4087-A6E1-0AD5521CF823@gmail.com>
 <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <15a6c69d-6de2-54b3-b580-5f0fcd83ae96@apertussolutions.com>
Date: Wed, 4 Nov 2020 16:57:11 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011031307430.5812@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

On 11/3/20 4:15 PM, Stefano Stabellini wrote:
> On Tue, 3 Nov 2020, Rich Persaud wrote:
>> On Nov 3, 2020, at 14:37, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>>>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>>>      SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>>>
>>>>>>> config ARM_SSBD
>>>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>>>> -    depends on HAS_ALTERNATIVE
>>>>>>> +    bool "Speculative Store Bypass Disable"
>>>>>>> +    depends on HAS_ALTERNATIVE && EXPERT
>>>>>>>    default y
>>>>>>
>>>>>> At the example of this, I'm afraid when the default isn't "n"
>>>>>> (or there's no default directive at all, as ought to be
>>>>>> equivalent to and preferred over "default n"), such a
>>>>>> transformation is not functionally identical: Before your
>>>>>> change, with !EXPERT this option defaults to y. After your
>>>>>> change this option is unavailable (which resolves to it being
>>>>>> off for all consuming purposes).
>>>>>>
>>>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>>>> (for this construct indeed only making the prompt conditional,
>>>>>> not the entire option), but there are also cases where the
>>>>>> cleanup you do is indeed desirable / helpful.
>>>>>
>>>>> Yeah, thanks for catching it, it is obviously a problem.
>>>>>
>>>>> My intention was just to "tag" somehow the options to EXPERT so that it
>>>>> would show on the menu. Maybe a better, simpler, way to do it is
>>>>> to add the word "EXPERT" to the one line prompt:
>>>>>
>>>>> config ARM_SSBD
>>>>> -    bool "Speculative Store Bypass Disable" if EXPERT
>>>>> +    bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>>>    depends on HAS_ALTERNATIVE
>>>>>    default y
>>>>>    help
>>>>>
>>>>>
>>>>> What do you think?
>>>>
>>>> While on the surface this may look like an improvement, I don't
>>>> see how it would actually help: If you read the Kconfig file
>>>> itself, the dependency is seen anyway. And on the menu I don't
>>>> see the point of telling someone who has enabled EXPERT that a
>>>> certain option is (or is not) an expert one. If they think
>>>> they're experts, so should they be treated. (It was, after all,
>>>> a deliberate decision to make enabling expert mode easier, and
>>>> hence easier to use for what one might consider not-really-
>>>> experts. I realize saying so may be considered tendentious; I
>>>> mean it in a purely technical sense, and I'd like to apologize
>>>> in advance to anyone not sharing this as a possible perspective
>>>> to take.)
>>>>
>>>> Plus, of course, the addition of such (EXPERT) markers to
>>>> future options' prompts is liable to get forgotten now and then,
>>>> so sooner or later we'd likely end up with an inconsistent
>>>> mixture anyway.
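
To make Jan's point concrete with the ARM_SSBD example from the patch 
(a sketch, not the final wording): the two forms are not equivalent 
when EXPERT is disabled.

```kconfig
# Form 1: only the *prompt* is conditional. With !EXPERT the user
# cannot toggle the option, but "default y" still applies, so the
# option ends up enabled.
config ARM_SSBD
	bool "Speculative Store Bypass Disable" if EXPERT
	depends on HAS_ALTERNATIVE
	default y

# Form 2: the whole *option* is conditional. With !EXPERT the symbol
# is unavailable, which resolves to "off" for all consuming purposes.
config ARM_SSBD
	bool "Speculative Store Bypass Disable"
	depends on HAS_ALTERNATIVE && EXPERT
	default y
```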
>>>
>>> I tend to agree with you on everything you wrote. The fundamental issue
>>> is that we are (mis)using EXPERT to tag features that are experimental,
>>> as defined by SUPPORT.md.
>>>
>>> It is important to be able to distinguish clearly at the kconfig level
>>> options that are (security) supported from options that are
>>> unsupported/experimental. Today the only way to do it is with EXPERT
>>> which is not great because:
>>>
>>> - it doesn't convey the idea that it is for unsupported/experimental
>>>  features
>>> - if you want to enable one unsupported feature, it is not clear what
>>>  you have to do
>>>
>>> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
>>> the Kconfig menu?
>>>
>>> It would make it clearer that by enabling UNSUPPORTED you are going to
>>> get a configuration that is not security supported.
>>
>> If going down this path, there should be one, authoritative, in-tree
>> definition of feature-level support from which Kconfig (build-time
>> policy enforcement) and SUPPORT.md (documentation) can be derived.
>> Later, even run-time enforcement can be similarly classified.  FuSA
>> may also wish for documented policy to align with enforcement.
>
> The goal is trying to align Kconfig and SUPPORT.md by clarifying the
> EXPERT option, which today is a poor implementation of "experimental".

Just a thought, but what if EXPERT were kept as the config option that
exposes all of these features, and experimental options were then
required, by convention, to carry an EXPERIMENTAL tag at the beginning
of the option's prompt? This would separate the idea of an EXPERT
configuration mode from that of EXPERIMENTAL features, which can only
be reached in EXPERT mode. The convention could be carried through to
UNSUPPORTED, TECHPREVIEW, or any new types of tags as they are needed.
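
A purely illustrative sketch of that convention (FOO_FEATURE is a
hypothetical option name, not an existing Xen Kconfig symbol):

```kconfig
config FOO_FEATURE
	bool "EXPERIMENTAL: Foo feature support" if EXPERT
	default n
	help
	  Example only: the leading EXPERIMENTAL tag in the prompt marks
	  the feature's support status, while the "if EXPERT" dependency
	  controls whether the prompt is visible at all.
```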

> There could be further improvements down the line, for instance we could
> taint Xen when UNSUPPORTED is selected and even have separate kconfig
> options for UNSUPPORTED, EXPERIMENTAL, and TECHPREVIEW. FuSa is likely
> going to need its own SAFETY option too. Like you suggested, we could
> even have a single source of feature-level support information for both
> Kconfig and SUPPORT.md.
>
> However, I didn't want to increase the scope of this one patch. For now,
> it would be a good start if we replaced EXPERT with something that covers
> anything not security supported, for which UNSUPPORTED looks like a good
> name.
>




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 22:14:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 22:14:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19555.44736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaR2i-0005Qv-WB; Wed, 04 Nov 2020 22:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19555.44736; Wed, 04 Nov 2020 22:14:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaR2i-0005Qo-Sx; Wed, 04 Nov 2020 22:14:12 +0000
Received: by outflank-mailman (input) for mailman id 19555;
 Wed, 04 Nov 2020 22:14:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hAWL=EK=kernel.org=ardb@srs-us1.protection.inumbo.net>)
 id 1kaR2h-0005Qj-BZ
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 22:14:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69b0db74-583a-4977-a32c-2afca0585957;
 Wed, 04 Nov 2020 22:14:10 +0000 (UTC)
Received: from e123331-lin.nice.arm.com
 (lfbn-nic-1-188-42.w2-15.abo.wanadoo.fr [2.15.37.42])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DEEF92074B;
 Wed,  4 Nov 2020 22:14:07 +0000 (UTC)
X-Inumbo-ID: 69b0db74-583a-4977-a32c-2afca0585957
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604528049;
	bh=NOv7OWHClGcxBYpjvN6mn3rmT/y8QjeyqPOmESrkm9k=;
	h=From:To:Cc:Subject:Date:From;
	b=bY/47qmVTLwLDq+6lqqBpsjoyvYnUK5b8q2F82+NxspPMsnv62SZAcUO004FEEuEl
	 Y7T9B5LXSJpDNRTktPwQh3NPcyO9cgzsa225cXvumxOHrUrXwtytBXFWufNV5Ko3dK
	 0MLnAxlQPVXDwW0+2GVNBDgu69Mm2Qp+pZ3M7B+s=
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-efi@vger.kernel.org
Cc: xen-devel@lists.xenproject.org,
	sstabellini@kernel.org,
	jgross@suse.com,
	boris.ostrovsky@oracle.com,
	Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH] efi: x86/xen: switch to efi_get_secureboot_mode helper
Date: Wed,  4 Nov 2020 23:14:00 +0100
Message-Id: <20201104221400.7005-1-ardb@kernel.org>
X-Mailer: git-send-email 2.17.1

Now that we have a static inline helper to discover the platform's secure
boot mode that can be shared between the EFI stub and the kernel proper,
switch to it, and drop some comments about keeping them in sync manually.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/xen/efi.c                        | 37 +++++---------------
 drivers/firmware/efi/libstub/secureboot.c |  3 --
 2 files changed, 9 insertions(+), 31 deletions(-)

diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
index 205a9bc981b0..a27444acaf1e 100644
--- a/arch/x86/xen/efi.c
+++ b/arch/x86/xen/efi.c
@@ -93,37 +93,22 @@ static efi_system_table_t __init *xen_efi_probe(void)
 
 /*
  * Determine whether we're in secure boot mode.
- *
- * Please keep the logic in sync with
- * drivers/firmware/efi/libstub/secureboot.c:efi_get_secureboot().
  */
 static enum efi_secureboot_mode xen_efi_get_secureboot(void)
 {
-	static efi_guid_t efi_variable_guid = EFI_GLOBAL_VARIABLE_GUID;
 	static efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
+	enum efi_secureboot_mode mode;
 	efi_status_t status;
-	u8 moksbstate, secboot, setupmode;
+	u8 moksbstate;
 	unsigned long size;
 
-	size = sizeof(secboot);
-	status = efi.get_variable(L"SecureBoot", &efi_variable_guid,
-				  NULL, &size, &secboot);
-
-	if (status == EFI_NOT_FOUND)
-		return efi_secureboot_mode_disabled;
-
-	if (status != EFI_SUCCESS)
-		goto out_efi_err;
-
-	size = sizeof(setupmode);
-	status = efi.get_variable(L"SetupMode", &efi_variable_guid,
-				  NULL, &size, &setupmode);
-
-	if (status != EFI_SUCCESS)
-		goto out_efi_err;
-
-	if (secboot == 0 || setupmode == 1)
-		return efi_secureboot_mode_disabled;
+	mode = efi_get_secureboot_mode(efi.get_variable);
+	if (mode == efi_secureboot_mode_unknown) {
+		efi_err("Could not determine UEFI Secure Boot status.\n");
+		return efi_secureboot_mode_unknown;
+	}
+	if (mode != efi_secureboot_mode_enabled)
+		return mode;
 
 	/* See if a user has put the shim into insecure mode. */
 	size = sizeof(moksbstate);
@@ -140,10 +125,6 @@ static enum efi_secureboot_mode xen_efi_get_secureboot(void)
  secure_boot_enabled:
 	pr_info("UEFI Secure Boot is enabled.\n");
 	return efi_secureboot_mode_enabled;
-
- out_efi_err:
-	pr_err("Could not determine UEFI Secure Boot status.\n");
-	return efi_secureboot_mode_unknown;
 }
 
 void __init xen_efi_init(struct boot_params *boot_params)
diff --git a/drivers/firmware/efi/libstub/secureboot.c b/drivers/firmware/efi/libstub/secureboot.c
index af18d86c1604..8a18930f3eb6 100644
--- a/drivers/firmware/efi/libstub/secureboot.c
+++ b/drivers/firmware/efi/libstub/secureboot.c
@@ -24,9 +24,6 @@ static efi_status_t get_var(efi_char16_t *name, efi_guid_t *vendor, u32 *attr,
 
 /*
  * Determine whether we're in secure boot mode.
- *
- * Please keep the logic in sync with
- * arch/x86/xen/efi.c:xen_efi_get_secureboot().
  */
 enum efi_secureboot_mode efi_get_secureboot(void)
 {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 04 22:19:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 22:19:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19561.44751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaR7g-0005dj-Jg; Wed, 04 Nov 2020 22:19:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19561.44751; Wed, 04 Nov 2020 22:19:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaR7g-0005dc-Gf; Wed, 04 Nov 2020 22:19:20 +0000
Received: by outflank-mailman (input) for mailman id 19561;
 Wed, 04 Nov 2020 22:19:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaR7f-0005cy-FN
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 22:19:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 034a3f44-4ff8-40c6-a31e-08d9d06d2730;
 Wed, 04 Nov 2020 22:19:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaR7X-0000f7-Mf; Wed, 04 Nov 2020 22:19:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaR7X-0000KL-B2; Wed, 04 Nov 2020 22:19:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaR7X-0003tw-AV; Wed, 04 Nov 2020 22:19:11 +0000
X-Inumbo-ID: 034a3f44-4ff8-40c6-a31e-08d9d06d2730
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/KoEepBzeZw0diDIvKO5TT3F2QXt5AJVGMVpQlFOGzQ=; b=E+8X9SaX4ygU2DO428wNhzf8LZ
	bw9jSCMW/vMqqgnt/ft5pQ9gTdwoPJrKneikq0cvRgNe8QP3yhKZnWN9h0U4iQcuA6xnRp2/jlQPr
	sGZXmW9W473GeAI1vXnPusFepxWSloPzSpUHRtF/NZnebBbx1v80WW7KjQ4LYmxtjRzY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156394-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156394: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5784d1e9424151adfdc836535489bd068c6c0700
X-Osstest-Versions-That:
    xen=10bb63c203f42d931fa1fa7dbbae7ce1765cecf2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 22:19:11 +0000

flight 156394 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156394/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156264
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156264
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156264
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156264
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156264
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156264
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156264
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156264
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156264
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156264
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156264
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156264
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5784d1e9424151adfdc836535489bd068c6c0700
baseline version:
 xen                  10bb63c203f42d931fa1fa7dbbae7ce1765cecf2

Last test of basis   156264  2020-10-27 18:37:55 Z    8 days
Testing same since   156394  2020-11-04 08:37:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   10bb63c203..5784d1e942  5784d1e9424151adfdc836535489bd068c6c0700 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 23:54:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 23:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19574.44767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaSbM-0005jE-U2; Wed, 04 Nov 2020 23:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19574.44767; Wed, 04 Nov 2020 23:54:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaSbM-0005j7-R5; Wed, 04 Nov 2020 23:54:04 +0000
Received: by outflank-mailman (input) for mailman id 19574;
 Wed, 04 Nov 2020 23:54:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nsOX=EK=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kaSbK-0005j2-Pj
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 23:54:02 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9570d43b-4da1-4a0b-8e98-d9cbc78d909e;
 Wed, 04 Nov 2020 23:54:00 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A4NnYdo153877;
 Wed, 4 Nov 2020 23:53:57 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 34hhw2sfa4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 04 Nov 2020 23:53:57 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A4NnnBx113233;
 Wed, 4 Nov 2020 23:53:57 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3030.oracle.com with ESMTP id 34hvrynkur-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 04 Nov 2020 23:53:57 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0A4NrpdL002452;
 Wed, 4 Nov 2020 23:53:53 GMT
Received: from [10.74.103.113] (/10.74.103.113)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 04 Nov 2020 15:53:51 -0800
X-Inumbo-ID: 9570d43b-4da1-4a0b-8e98-d9cbc78d909e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=1iFWj4I945Pa9pfTckxa4OLou5zq0/cCG5xj+n3yHlI=;
 b=VC3YXaB/cSjDL/Aqi4lr0E0dsWAqWZzsiCYkyWm8NKkZXqA3B+ROoH3LRdbnHm3zJ0ub
 lMR+Ctcqr1GCe++HlChadwbJPjzykF5ziruZ4old6PNMfM+SPpFHYVHfJImS9uc+MkUi
 n7ip4uSGXGUuWq1Ilz/wrkHSYtSxwmAnupCqAzxN2jEVsHsdIgFUUOdlSuvKZXMxPlUq
 Er1c1ujg2l8iP/qXHWPHMYPoa2zbXpuYSQrbV7qN0OTPHp2gm9YSU0HSsA+289ruPznT
 jYh0Jdcw5B7w0Aa2mvhTgpN2HHokxHiziXOEnZjpkGBxhR/Ue7kYfPs2Sa2klC3EzWps rg== 
Subject: Re: [PATCH] efi: x86/xen: switch to efi_get_secureboot_mode helper
To: Ard Biesheuvel <ardb@kernel.org>, linux-efi@vger.kernel.org
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, jgross@suse.com
References: <20201104221400.7005-1-ardb@kernel.org>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <f3d902ca-3fa2-aa8a-fb9a-0891b9567751@oracle.com>
Date: Wed, 4 Nov 2020 18:53:50 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104221400.7005-1-ardb@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9795 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 adultscore=0 mlxscore=0
 malwarescore=0 mlxlogscore=999 suspectscore=0 spamscore=0 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011040171
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9795 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 malwarescore=0 mlxscore=0
 suspectscore=0 clxscore=1011 priorityscore=1501 impostorscore=0
 spamscore=0 lowpriorityscore=0 mlxlogscore=999 phishscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011040171


On 11/4/20 5:14 PM, Ard Biesheuvel wrote:
> Now that we have a static inline helper to discover the platform's secure
> boot mode that can be shared between the EFI stub and the kernel proper,
> switch to it, and drop some comments about keeping them in sync manually.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/x86/xen/efi.c                        | 37 +++++---------------
>  drivers/firmware/efi/libstub/secureboot.c |  3 --
>  2 files changed, 9 insertions(+), 31 deletions(-)
>
> diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
> index 205a9bc981b0..a27444acaf1e 100644
> --- a/arch/x86/xen/efi.c
> +++ b/arch/x86/xen/efi.c
> @@ -93,37 +93,22 @@ static efi_system_table_t __init *xen_efi_probe(void)
>  
>  /*
>   * Determine whether we're in secure boot mode.
> - *
> - * Please keep the logic in sync with
> - * drivers/firmware/efi/libstub/secureboot.c:efi_get_secureboot().
>   */
>  static enum efi_secureboot_mode xen_efi_get_secureboot(void)
>  {
> -	static efi_guid_t efi_variable_guid = EFI_GLOBAL_VARIABLE_GUID;
>  	static efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
> +	enum efi_secureboot_mode mode;
>  	efi_status_t status;
> -	u8 moksbstate, secboot, setupmode;
> +	u8 moksbstate;
>  	unsigned long size;
>  
> -	size = sizeof(secboot);
> -	status = efi.get_variable(L"SecureBoot", &efi_variable_guid,
> -				  NULL, &size, &secboot);
> -
> -	if (status == EFI_NOT_FOUND)
> -		return efi_secureboot_mode_disabled;
> -
> -	if (status != EFI_SUCCESS)
> -		goto out_efi_err;
> -
> -	size = sizeof(setupmode);
> -	status = efi.get_variable(L"SetupMode", &efi_variable_guid,
> -				  NULL, &size, &setupmode);
> -
> -	if (status != EFI_SUCCESS)
> -		goto out_efi_err;
> -
> -	if (secboot == 0 || setupmode == 1)
> -		return efi_secureboot_mode_disabled;
> +	mode = efi_get_secureboot_mode(efi.get_variable);


Which tree is this patch against? I don't see a definition of efi_get_secureboot_mode().


> +	if (mode == efi_secureboot_mode_unknown) {
> +		efi_err("Could not determine UEFI Secure Boot status.\n");


We need to include drivers/firmware/efi/libstub/efistub.h for that, which I am not sure is meant to be included anywhere outside of libstub.


-boris




From xen-devel-bounces@lists.xenproject.org Wed Nov 04 23:54:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 23:54:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19579.44781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaSc4-0005ou-8n; Wed, 04 Nov 2020 23:54:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19579.44781; Wed, 04 Nov 2020 23:54:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaSc4-0005on-5o; Wed, 04 Nov 2020 23:54:48 +0000
Received: by outflank-mailman (input) for mailman id 19579;
 Wed, 04 Nov 2020 23:54:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hAWL=EK=kernel.org=ardb@srs-us1.protection.inumbo.net>)
 id 1kaSc2-0005oH-T6
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 23:54:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0fc1270-cef8-45f3-865c-3ef2e991263b;
 Wed, 04 Nov 2020 23:54:46 +0000 (UTC)
Received: from mail-ot1-f46.google.com (mail-ot1-f46.google.com
 [209.85.210.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 59D4F20644
 for <xen-devel@lists.xenproject.org>; Wed,  4 Nov 2020 23:54:45 +0000 (UTC)
Received: by mail-ot1-f46.google.com with SMTP id 32so470168otm.3
 for <xen-devel@lists.xenproject.org>; Wed, 04 Nov 2020 15:54:45 -0800 (PST)
X-Inumbo-ID: f0fc1270-cef8-45f3-865c-3ef2e991263b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604534085;
	bh=ok/fYJAV8bL/1AuLmDXnqp8O7fhDT79bjnQKIeilga8=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=1jBm3LxvPPUFtbaiINDeyZsCFVPiTJkRwgnj6Ofr3pS3Fdj2qTTqn9dWMS5Xz7BYe
	 ttLglkMvdkodFatepjG69rCU2UMAzyYVRoUQhdnAbbQcmSwk7P7hG5obqVvnPFZmHL
	 9J+1qZpq4DCdvahRg8lv+zOquzgflq8c7BGFQNdw=
X-Gm-Message-State: AOAM530PBGGcbOrKt4jUD6lz+ojnVzJ/k6dclhgsWb8tnDhyjD2VCbje
	LpA2lbgKIGcepCuVRgX5Yx0D9EAHUNSR4o8TaWM=
X-Google-Smtp-Source: ABdhPJx0nEZeMyzbhRvrfSiHfUQMv/ufXNej/rub8C+37WnU+/ULONWKCy93retsTVNVdch1MCDAGRDndPMREdLeDz8=
X-Received: by 2002:a9d:62c1:: with SMTP id z1mr142766otk.108.1604534084707;
 Wed, 04 Nov 2020 15:54:44 -0800 (PST)
MIME-Version: 1.0
References: <20201104221400.7005-1-ardb@kernel.org> <f3d902ca-3fa2-aa8a-fb9a-0891b9567751@oracle.com>
In-Reply-To: <f3d902ca-3fa2-aa8a-fb9a-0891b9567751@oracle.com>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Thu, 5 Nov 2020 00:54:33 +0100
X-Gmail-Original-Message-ID: <CAMj1kXH81-3bayCKFQ4cO+Hw4FhRtc=DJr6qTirtg75eGwdZNQ@mail.gmail.com>
Message-ID: <CAMj1kXH81-3bayCKFQ4cO+Hw4FhRtc=DJr6qTirtg75eGwdZNQ@mail.gmail.com>
Subject: Re: [PATCH] efi: x86/xen: switch to efi_get_secureboot_mode helper
To: boris.ostrovsky@oracle.com
Cc: linux-efi <linux-efi@vger.kernel.org>, xen-devel@lists.xenproject.org, 
	sstabellini@kernel.org, Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"

On Thu, 5 Nov 2020 at 00:53, <boris.ostrovsky@oracle.com> wrote:
>
>
> On 11/4/20 5:14 PM, Ard Biesheuvel wrote:
> > Now that we have a static inline helper to discover the platform's secure
> > boot mode that can be shared between the EFI stub and the kernel proper,
> > switch to it, and drop some comments about keeping them in sync manually.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> >  arch/x86/xen/efi.c                        | 37 +++++---------------
> >  drivers/firmware/efi/libstub/secureboot.c |  3 --
> >  2 files changed, 9 insertions(+), 31 deletions(-)
> >
> > diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
> > index 205a9bc981b0..a27444acaf1e 100644
> > --- a/arch/x86/xen/efi.c
> > +++ b/arch/x86/xen/efi.c
> > @@ -93,37 +93,22 @@ static efi_system_table_t __init *xen_efi_probe(void)
> >
> >  /*
> >   * Determine whether we're in secure boot mode.
> > - *
> > - * Please keep the logic in sync with
> > - * drivers/firmware/efi/libstub/secureboot.c:efi_get_secureboot().
> >   */
> >  static enum efi_secureboot_mode xen_efi_get_secureboot(void)
> >  {
> > -     static efi_guid_t efi_variable_guid = EFI_GLOBAL_VARIABLE_GUID;
> >       static efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
> > +     enum efi_secureboot_mode mode;
> >       efi_status_t status;
> > -     u8 moksbstate, secboot, setupmode;
> > +     u8 moksbstate;
> >       unsigned long size;
> >
> > -     size = sizeof(secboot);
> > -     status = efi.get_variable(L"SecureBoot", &efi_variable_guid,
> > -                               NULL, &size, &secboot);
> > -
> > -     if (status == EFI_NOT_FOUND)
> > -             return efi_secureboot_mode_disabled;
> > -
> > -     if (status != EFI_SUCCESS)
> > -             goto out_efi_err;
> > -
> > -     size = sizeof(setupmode);
> > -     status = efi.get_variable(L"SetupMode", &efi_variable_guid,
> > -                               NULL, &size, &setupmode);
> > -
> > -     if (status != EFI_SUCCESS)
> > -             goto out_efi_err;
> > -
> > -     if (secboot == 0 || setupmode == 1)
> > -             return efi_secureboot_mode_disabled;
> > +     mode = efi_get_secureboot_mode(efi.get_variable);
>
>
> Which tree is this patch against? I don't see a definition of efi_get_secureboot_mode().
>
>
> > +     if (mode == efi_secureboot_mode_unknown) {
> > +             efi_err("Could not determine UEFI Secure Boot status.\n");
>
>
> We need to include drivers/firmware/efi/libstub/efistub.h for that, which I am not sure is meant to be included anywhere outside of libstub.
>

Ah yes, my mistake - that should be pr_err, not efi_err.


From xen-devel-bounces@lists.xenproject.org Wed Nov 04 23:54:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 04 Nov 2020 23:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19580.44794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaSc8-0005ro-Jm; Wed, 04 Nov 2020 23:54:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19580.44794; Wed, 04 Nov 2020 23:54:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaSc8-0005rg-Ff; Wed, 04 Nov 2020 23:54:52 +0000
Received: by outflank-mailman (input) for mailman id 19580;
 Wed, 04 Nov 2020 23:54:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NDu8=EK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaSc7-0005oC-EQ
 for xen-devel@lists.xenproject.org; Wed, 04 Nov 2020 23:54:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ce9cc93-2d95-4008-944d-d0a66629782e;
 Wed, 04 Nov 2020 23:54:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaSbx-0002Z2-Sf; Wed, 04 Nov 2020 23:54:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaSbx-0006rI-HJ; Wed, 04 Nov 2020 23:54:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaSbx-0008Ro-Gp; Wed, 04 Nov 2020 23:54:41 +0000
X-Inumbo-ID: 0ce9cc93-2d95-4008-944d-d0a66629782e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c3Ph+QYgdyz5GAzN8nqeDnVRDh178WC6zq/nqJO1kgg=; b=IZtAZRE7pceq24HFGX/Nm/o9U6
	5/tRScP9SfGpEbINLjHHmvodbYZc/AtxQnTSXGqELPmUXZTmu6dGo1oTGFslLvdarWCRsmPvu53tz
	21szGbIKd/9oMrAeHsLb1WrcB9wOceAs0jHI/1BLKizPzxV4+E9Thz0341KrMzXfjgeQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156396-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 156396: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a4ec792d12d58d14e4d0a9cb569be4fd4fe9cf5
X-Osstest-Versions-That:
    xen=78d903e95efc5b0166b393d289a687c64016e8ef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 04 Nov 2020 23:54:41 +0000

flight 156396 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156396/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 156261
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156261
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156261
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156261
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156261
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156261
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156261
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156261
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156261
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156261
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156261
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156261
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156261
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156261
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a4ec792d12d58d14e4d0a9cb569be4fd4fe9cf5
baseline version:
 xen                  78d903e95efc5b0166b393d289a687c64016e8ef

Last test of basis   156261  2020-10-27 18:36:52 Z    8 days
Testing same since   156396  2020-11-04 09:05:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   78d903e95e..7a4ec792d1  7a4ec792d12d58d14e4d0a9cb569be4fd4fe9cf5 -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 01:15:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 01:15:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19600.44809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaTre-0001Ds-TA; Thu, 05 Nov 2020 01:14:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19600.44809; Thu, 05 Nov 2020 01:14:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaTre-0001Dl-Pn; Thu, 05 Nov 2020 01:14:58 +0000
Received: by outflank-mailman (input) for mailman id 19600;
 Thu, 05 Nov 2020 01:14:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kaTrc-0001Dg-Qp
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 01:14:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c1701196-d525-4472-b75a-22e28faf4194;
 Thu, 05 Nov 2020 01:14:55 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 28C59206E3;
 Thu,  5 Nov 2020 01:14:54 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kaTrc-0001Dg-Qp
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 01:14:56 +0000
X-Inumbo-ID: c1701196-d525-4472-b75a-22e28faf4194
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c1701196-d525-4472-b75a-22e28faf4194;
	Thu, 05 Nov 2020 01:14:55 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 28C59206E3;
	Thu,  5 Nov 2020 01:14:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604538894;
	bh=lEDWjSBGCb0lmhyv1pb5gunX7ZUyctdlOYsQpCdP/bg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Njo35RmDiqFbh1qRBewP9nhOPDQYBLV2+DzdJ3butUXz30US5DddgPha5sw56OSPS
	 nAmX4KiP2+nEDr4eiAF75wssPc5UeFE2kXtNRuZYf09S85NEVzvburZqBmbJUpOV0c
	 KEbNT8OPYYdnCs6RX+G3KGsb7SA1gFqfqrAYiZY8=
Date: Wed, 4 Nov 2020 17:14:53 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, 
    "george.dunlap@citrix.com" <george.dunlap@citrix.com>, 
    "iwj@xenproject.org" <iwj@xenproject.org>, 
    "julien@xen.org" <julien@xen.org>, "wl@xen.org" <wl@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
In-Reply-To: <FD3CD0C4-6055-443B-B7D9-EAAC4935D2A9@arm.com>
Message-ID: <alpine.DEB.2.21.2011041704100.3264@sstabellini-ThinkPad-T480s>
References: <20201031002405.4545-1-sstabellini@kernel.org> <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com> <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s> <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com> <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
 <e0842284-a894-1e0b-ffbe-484013acefa5@suse.com> <FD3CD0C4-6055-443B-B7D9-EAAC4935D2A9@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Nov 2020, Bertrand Marquis wrote:
> > On 4 Nov 2020, at 07:34, Jan Beulich <jbeulich@suse.com> wrote:
> > On 03.11.2020 20:37, Stefano Stabellini wrote:
> >> On Tue, 3 Nov 2020, Jan Beulich wrote:
> >>> On 02.11.2020 22:41, Stefano Stabellini wrote:
> >>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
> >>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
> >>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
> >>>>>> 	  SBSA Generic UART implements a subset of ARM PL011 UART.
> >>>>>> 
> >>>>>> config ARM_SSBD
> >>>>>> -	bool "Speculative Store Bypass Disable" if EXPERT
> >>>>>> -	depends on HAS_ALTERNATIVE
> >>>>>> +	bool "Speculative Store Bypass Disable"
> >>>>>> +	depends on HAS_ALTERNATIVE && EXPERT
> >>>>>> 	default y
> >>>>> 
> >>>>> At the example of this, I'm afraid when the default isn't "n"
> >>>>> (or there's no default directive at all, as ought to be
> >>>>> equivalent to and preferred over "default n"), such a
> >>>>> transformation is not functionally identical: Before your
> >>>>> change, with !EXPERT this option defaults to y. After your
> >>>>> change this option is unavailable (which resolves to it being
> >>>>> off for all consuming purposes).
> >>>>> 
> >>>>> IOW there are reasons to have "if ..." attached to the prompts
> >>>>> (for this construct indeed only making the prompt conditional,
> >>>>> not the entire option), but there are also cases where the
> >>>>> cleanup you do is indeed desirable / helpful.
> >>>> 
> >>>> Yeah, thanks for catching it, it is obviously a problem.
> >>>> 
> >>>> My intention was just to "tag" somehow the options to EXPERT so that it
> >>>> would show on the menu. Maybe a better, simpler, way to do it is
> >>>> to add the word "EXPERT" to the one line prompt:
> >>>> 
> >>>> config ARM_SSBD
> >>>> -	bool "Speculative Store Bypass Disable" if EXPERT
> >>>> +	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
> >>>> 	depends on HAS_ALTERNATIVE
> >>>> 	default y
> >>>> 	help
> >>>> 
> >>>> 
> >>>> What do you think?
> >>> 
> >>> While on the surface this may look like an improvement, I don't
> >>> see how it would actually help: If you read the Kconfig file
> >>> itself, the dependency is seen anyway. And on the menu I don't
> >>> see the point of telling someone who has enabled EXPERT that a
> >>> certain option is (or is not) an expert one. If they think
> >>> they're experts, so should they be treated. (It was, after all,
> >>> a deliberate decision to make enabling expert mode easier, and
> >>> hence easier to use for what one might consider not-really-
> >>> experts. I realize saying so may be considered tendentious; I
> >>> mean it in a purely technical sense, and I'd like to apologize
> >>> in advance to anyone not sharing this as a possible perspective
> >>> to take.)
> >>> 
> >>> Plus, of course, the addition of such (EXPERT) markers to
> >>> future options' prompts is liable to get forgotten now and then,
> >>> so sooner or later we'd likely end up with an inconsistent
> >>> mixture anyway.
> >> 
> >> I tend to agree with you on everything you wrote. The fundamental issue
> >> is that we are (mis)using EXPERT to tag features that are experimental,
> >> as defined by SUPPORT.md.
> >> 
> >> It is important to be able to distinguish clearly at the kconfig level
> >> options that are (security) supported from options that are
> >> unsupported/experimental. Today the only way to do it is with EXPERT
> >> which is not great because:
> >> 
> >> - it doesn't convey the idea that it is for unsupported/experimental
> >>  features
> >> - if you want to enable one unsupported feature, it is not clear what
> >>  you have to do
> >> 
> >> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
> >> the Kconfig menu?
> > 
> > If you mean this to be added to prompt texts, then yes, I'd view
> > this as helpful. However, ...
> 
> +1
> 
> > 
> >> It would make it clearer that by enabling UNSUPPORTED you are going to
> >> get a configuration that is not security supported. And ideally we would
> >> also tag features like ACPI as UNSUPPORTED as I suggested above.
> > 
> > ... things will get uglier when (just a simple example) something
> > is supported on x86, but not on Arm.
> 
> It is true that this could happen, but we could easily work around it
> by having arch-specific entries that select the generic one:
> 
> CONFIG_PCI
> 	bool
> 	default n
> 
> CONFIG_X86_PCI
> 	bool if x86
> 	select CONFIG_PCI
> 
> CONFIG_ARM_PCI
> 	bool if arm
> 	depends on UNSUPPORTED
> 	select CONFIG_PCI
> 
> This is not the full syntax or right variables but you get the idea :-)
> 
> This makes Kconfig more complex but improves the user configuration
> experience, so I think it is a win.
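
In valid Kconfig syntax, that pattern could look roughly like the
following (the symbol names and prompts are purely illustrative, not
actual Xen options):

```kconfig
# Hidden generic option, enabled only via the per-arch front ends below.
config PCI
	bool

# x86 version: supported, so no extra gating.
config X86_PCI
	bool "PCI support"
	depends on X86
	select PCI

# Arm version: only visible once UNSUPPORTED is enabled.
config ARM_PCI
	bool "PCI support (UNSUPPORTED)"
	depends on ARM && UNSUPPORTED
	select PCI
```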

It is good that we have a potential clean solution for this. However,
today the problem is only theoretical: none of the EXPERT options under
xen/common has a different support status on Arm vs x86, so it is not
an issue yet.

However, there are a few options in xen/common/Kconfig that honestly fit
the original meaning of EXPERT rather than UNSUPPORTED, such as:
- CMDLINE
- TRACEBUFFER

I don't think we want to change CMDLINE from EXPERT to UNSUPPORTED,
right? Jan, are there any other options, either under xen/common/Kconfig
or xen/arch/x86/Kconfig, that you think should remain EXPERT?


So, I think the plan should be to:

- introduce a new UNSUPPORTED option, alongside EXPERT
- change EXPERT options under xen/arch/arm/Kconfig to UNSUPPORTED
    - ACPI
    - HAS_ITS
    - ARM_SSBD
    - HARDEN_BRANCH_PREDICTOR
    - TEE
- change other EXPERT options to UNSUPPORTED where it makes sense
    - e.g. ARGO
    - EFI_SET_VIRTUAL_ADDRESS_MAP
    - MEM_SHARING
    - TBOOT
    - XEN_SHSTK
    - Jan, anything else?
- add "(UNSUPPORTED)" to the one-line prompt of every UNSUPPORTED option
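
A minimal sketch of what this could look like (the wording, default, and
help text are only assumptions for illustration), keeping the
prompt-only conditional so an option's default is preserved even when it
is hidden, as Jan pointed out above for ARM_SSBD:

```kconfig
config UNSUPPORTED
	bool "Configure unsupported features"
	default EXPERT
	help
	  Allow configuring features that carry no security support,
	  as defined by SUPPORT.md.

# Example consumer: only the prompt is conditional, so with
# !UNSUPPORTED the option still defaults to y.
config ARM_SSBD
	bool "Speculative Store Bypass Disable (UNSUPPORTED)" if UNSUPPORTED
	depends on HAS_ALTERNATIVE
	default y
```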


Do you guys agree?


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 02:48:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 02:48:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19608.44822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaVJz-0000wV-LN; Thu, 05 Nov 2020 02:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19608.44822; Thu, 05 Nov 2020 02:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaVJz-0000wO-Gx; Thu, 05 Nov 2020 02:48:19 +0000
Received: by outflank-mailman (input) for mailman id 19608;
 Thu, 05 Nov 2020 02:48:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kaVJx-0000wJ-PX
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 02:48:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 20a2e41c-6409-4831-b65d-ee97d36f04f1;
 Thu, 05 Nov 2020 02:48:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 245DF207BB;
 Thu,  5 Nov 2020 02:48:15 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kaVJx-0000wJ-PX
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 02:48:17 +0000
X-Inumbo-ID: 20a2e41c-6409-4831-b65d-ee97d36f04f1
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 20a2e41c-6409-4831-b65d-ee97d36f04f1;
	Thu, 05 Nov 2020 02:48:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 245DF207BB;
	Thu,  5 Nov 2020 02:48:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604544496;
	bh=ZRX03WWp+HodhW70KtyU3AosnroCscAI79TzzbsHGMo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=tX/bB4ieWI//QTQJIe7Jp/rxTkdd79TjF1ezfVgEucOxk+jnx/1r2TP3B0IVG0RSU
	 vgsI8Upx66YuO5sZAqFhAA66ilyYACmbv5aCJpzXkLqmmZz+RmEhQHHJh0Uptxw/Pa
	 /EYtXuZou0pV3xIkPO6c9mBLKIFzqwWVm/u/KIgw=
Date: Wed, 4 Nov 2020 18:48:14 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Paolo Bonzini <pbonzini@redhat.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@redhat.com>, 
    =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>, 
    qemu-devel@nongnu.org, Cornelia Huck <cohuck@redhat.com>, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    David Hildenbrand <david@redhat.com>, qemu-s390x@nongnu.org, 
    Fam Zheng <fam@euphon.net>, Richard Henderson <rth@twiddle.net>, 
    Matthew Rosato <mjrosato@linux.ibm.com>, Halil Pasic <pasic@linux.ibm.com>, 
    Thomas Huth <thuth@redhat.com>, 
    Wainer dos Santos Moschetta <wainersm@redhat.com>, 
    Christian Borntraeger <borntraeger@de.ibm.com>
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
In-Reply-To: <CABgObfaAH1fty0y0Z10GALnhy4kL_FqSxPZc2-=PwJgtSrOX0g@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2011041742580.3264@sstabellini-ThinkPad-T480s>
References: <20201103164604.2692357-1-philmd@redhat.com> <20201103164604.2692357-3-philmd@redhat.com> <20201103165247.GT205187@redhat.com> <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com> <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
 <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s> <CABgObfaAH1fty0y0Z10GALnhy4kL_FqSxPZc2-=PwJgtSrOX0g@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1067490152-1604540899=:3264"
Content-ID: <alpine.DEB.2.21.2011041755170.3264@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1067490152-1604540899=:3264
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2011041755171.3264@sstabellini-ThinkPad-T480s>

On Wed, 4 Nov 2020, Paolo Bonzini wrote:
> Il mer 4 nov 2020, 03:27 Stefano Stabellini <sstabellini@kernel.org> ha scritto:
>       FYI I tried to build the latest QEMU on Alpine Linux 3.12 ARM64 and I
>       get:
> 
>         ninja: unknown tool 'query'
> 
>       Even after rebuilding ninja master by hand. Any ideas? I don't know much
>       about ninja.
> 
> 
> Are you sure you have ninja installed and not its clone samurai (yes, I am serious)? I have contributed query support to samurai but it
> hasn't made it to a release yet.
> 
> What is the output of "ninja -t list"?

I repeated all the steps to make sure. The first time I was using
Samurai, because Alpine Linux ships it instead of Ninja. Then I removed
Samurai and built and installed Ninja by hand from
https://github.com/ninja-build/ninja, and that works. Yesterday it was
late and I was distracted by global events -- I must have failed to
update Ninja properly. Sorry for the confusion.
--8323329-1067490152-1604540899=:3264--


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 03:45:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 03:45:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19618.44838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaWCp-000668-Ps; Thu, 05 Nov 2020 03:44:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19618.44838; Thu, 05 Nov 2020 03:44:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaWCp-000661-MV; Thu, 05 Nov 2020 03:44:59 +0000
Received: by outflank-mailman (input) for mailman id 19618;
 Thu, 05 Nov 2020 03:44:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaWCn-00065u-J1
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 03:44:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d64b9810-7aba-46c5-bf3a-0bfe49e98e53;
 Thu, 05 Nov 2020 03:44:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaWCj-00042v-TV; Thu, 05 Nov 2020 03:44:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaWCj-0004gU-Eu; Thu, 05 Nov 2020 03:44:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaWCj-0000qR-E9; Thu, 05 Nov 2020 03:44:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kaWCn-00065u-J1
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 03:44:57 +0000
X-Inumbo-ID: d64b9810-7aba-46c5-bf3a-0bfe49e98e53
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d64b9810-7aba-46c5-bf3a-0bfe49e98e53;
	Thu, 05 Nov 2020 03:44:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h0JxPaFko86YLaj46EC7H2vj0ut3S2cIdeSYNmwafME=; b=CWXDECa1dcSCIo0I3VQ7WRUTO2
	lCcLR0y5kLAsvU0mG9jU9biIET32+hAkpoQfDpwuMTnVnVLIfED7yRLSH+fOpoJQGET7sWDPtxo+o
	1CxarNqU6YeOHSmBqf4qkGAR5y/xyobQZRb9m56Mkw26EVQQIFGRmgTMzq/jRcj501XA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kaWCj-00042v-TV; Thu, 05 Nov 2020 03:44:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kaWCj-0004gU-Eu; Thu, 05 Nov 2020 03:44:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kaWCj-0000qR-E9; Thu, 05 Nov 2020 03:44:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156397-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156397: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b5eb4956e1d2d73546f8cfdef635b6819ed7b527
X-Osstest-Versions-That:
    xen=e274c8bdc12eb596e55233040e8b49da27150f31
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 03:44:53 +0000

flight 156397 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156397/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156277
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156277
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156277
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156277
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156277
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156277
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156277
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156277
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156277
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156277
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156277
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156277
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156277
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b5eb4956e1d2d73546f8cfdef635b6819ed7b527
baseline version:
 xen                  e274c8bdc12eb596e55233040e8b49da27150f31

Last test of basis   156277  2020-10-28 09:43:40 Z    7 days
Testing same since   156397  2020-11-04 09:05:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e274c8bdc1..b5eb4956e1  b5eb4956e1d2d73546f8cfdef635b6819ed7b527 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 04:27:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 04:27:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19626.44853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaWrP-0001Hs-6X; Thu, 05 Nov 2020 04:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19626.44853; Thu, 05 Nov 2020 04:26:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaWrP-0001Hl-3T; Thu, 05 Nov 2020 04:26:55 +0000
Received: by outflank-mailman (input) for mailman id 19626;
 Thu, 05 Nov 2020 04:26:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kaWrN-0001Hg-ME
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 04:26:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 096218f0-2b78-40bc-b5ef-a511557b7f4e;
 Thu, 05 Nov 2020 04:26:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E1DEF20795;
 Thu,  5 Nov 2020 04:26:50 +0000 (UTC)
X-Inumbo-ID: 096218f0-2b78-40bc-b5ef-a511557b7f4e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604550412;
	bh=QcjEAB5mR04fn/Qx7xlKu3OWYq32vZ8nwE1QoeRP+gU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EwnXvsoHHIT2TDy9LrG+wO0NfkNF8tGwA3hCDiybhI759OHEImybik3XV8FH5ygsX
	 ZC4FvAnGSMrtI/8cqQeo3TLt1nZyl244f7/JkkMVSHXvLDpa6DcfrJ94MI+y0JywHS
	 KHwl0vO//NQlDhblv7uYxy9D1RTnIYwT5mwASq1M=
Date: Wed, 4 Nov 2020 20:26:50 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Thomas Huth <thuth@redhat.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@redhat.com>, 
    =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>, 
    qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, 
    Cornelia Huck <cohuck@redhat.com>, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    David Hildenbrand <david@redhat.com>, qemu-s390x@nongnu.org, 
    Fam Zheng <fam@euphon.net>, Richard Henderson <rth@twiddle.net>, 
    Matthew Rosato <mjrosato@linux.ibm.com>, Halil Pasic <pasic@linux.ibm.com>, 
    Wainer dos Santos Moschetta <wainersm@redhat.com>, 
    Christian Borntraeger <borntraeger@de.ibm.com>
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
In-Reply-To: <9ac5e985-a701-f357-29fb-ef7975f5f2c2@redhat.com>
Message-ID: <alpine.DEB.2.21.2011041805060.3264@sstabellini-ThinkPad-T480s>
References: <20201103164604.2692357-1-philmd@redhat.com> <20201103164604.2692357-3-philmd@redhat.com> <20201103165247.GT205187@redhat.com> <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com> <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
 <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s> <9ac5e985-a701-f357-29fb-ef7975f5f2c2@redhat.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Nov 2020, Thomas Huth wrote:
> On 04/11/2020 03.27, Stefano Stabellini wrote:
> [...]
> > Actually I care about Xen and 9pfs support, it is one of the few
> > combinations that I use regularly and it is even enabled in the Xilinx
> > product I look after. But admittedly I don't test QEMU master as much as
> > I should. With the recent changes to the build system it is not very
> > surprising that there are some issues. It would be great to have a Xen
> > and 9pfs test in the gitlab CI-loop.
> > 
> > 
> > FYI I tried to build the latest QEMU on Alpine Linux 3.12 ARM64 and I
> > get:
> > 
> >   ninja: unknown tool 'query'
> > 
> > Even after rebuilding ninja master by hand. Any ideas? I don't know much
> > about ninja.
> > 
> > 
> > So I gave up on that and I spun up a Debian Buster x86 container for
> > this build. That one got past the "ninja: unknown tool 'query'" error.
> > The build completed without problems to the end.
> > 
> > Either way I can't reproduce the build error above.
> 
> Did you run "configure" with "--without-default-devices" ?

Yes, and I still can't repro the issue, strange. Anyway, I saw that
Philippe managed to find and fix the issue with "hw/9pfs: Fix Kconfig
dependency problem between 9pfs and Xen", so all sorted :)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 07:08:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 07:08:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19637.44865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaZNN-00078u-SI; Thu, 05 Nov 2020 07:08:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19637.44865; Thu, 05 Nov 2020 07:08:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaZNN-00078n-PD; Thu, 05 Nov 2020 07:08:05 +0000
Received: by outflank-mailman (input) for mailman id 19637;
 Thu, 05 Nov 2020 07:08:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaZNM-00078i-2g
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 07:08:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74826320-0ee2-4b0d-a579-0ea55aa23172;
 Thu, 05 Nov 2020 07:08:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaZNJ-0000Iu-1N; Thu, 05 Nov 2020 07:08:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaZNI-00070u-ML; Thu, 05 Nov 2020 07:08:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaZNI-0001z4-Lr; Thu, 05 Nov 2020 07:08:00 +0000
X-Inumbo-ID: 74826320-0ee2-4b0d-a579-0ea55aa23172
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TKdttiedhUe8Qn38/CWoCXu3MBG7+L7okN6yOOTjQzs=; b=wZZXAX/tGaF7ATBzABmJ1eAwhE
	40VrbvapdEn4vrlPKKIdKN1swfTitjI5vowsm3n9vXFG9uccdWBtGUe+eKOsaihYsN1JARuqDD77K
	buwbygZSLQhqu4EuNHxOEtnAOewK53XW05MjMPyoOQWoZLsah3XIKm+xJYY5fo70JMVA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156398-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156398: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:host-install(5):broken:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4f9294d21c47415376215d68a0298e88582b8e7a
X-Osstest-Versions-That:
    xen=97b7b5567fba6918a656ad349051b5343b5dea2e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 07:08:00 +0000

flight 156398 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156398/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-cubietruck  5 host-install(5)      broken REGR. vs. 156358

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156358
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156358
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156358
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156358
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156358
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156358
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156358
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156358
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156358
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4f9294d21c47415376215d68a0298e88582b8e7a
baseline version:
 xen                  97b7b5567fba6918a656ad349051b5343b5dea2e

Last test of basis   156358  2020-11-02 08:38:03 Z    2 days
Testing same since   156398  2020-11-04 09:06:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-cubietruck broken
broken-step test-armhf-armhf-xl-cubietruck host-install(5)

Not pushing.

------------------------------------------------------------
commit 4f9294d21c47415376215d68a0298e88582b8e7a
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Wed Nov 4 09:36:36 2020 +0100

    SUPPORT.md: Desupport qemu trad except stub dm
    
    While investigating XSA-335 we discovered that many upstream security
    fixes were missing.  It is not practical to backport them.  There is
    no good reason to be running this very ancient version of qemu, except
    that it is the only way to run a stub dm which is currently supported
    by upstream.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 8587160b3e2951b722d395a0346bb17c3c22152f
    master date: 2020-11-04 09:22:37 +0100
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 08:07:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 08:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19651.44880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaaIx-0004Qs-5n; Thu, 05 Nov 2020 08:07:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19651.44880; Thu, 05 Nov 2020 08:07:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaaIx-0004Ql-2Y; Thu, 05 Nov 2020 08:07:35 +0000
Received: by outflank-mailman (input) for mailman id 19651;
 Thu, 05 Nov 2020 08:07:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaaIv-0004Qg-Ob
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 08:07:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be16e63e-a2b2-4a77-810c-33c86fd83653;
 Thu, 05 Nov 2020 08:07:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93B74AAF1;
 Thu,  5 Nov 2020 08:07:31 +0000 (UTC)
X-Inumbo-ID: be16e63e-a2b2-4a77-810c-33c86fd83653
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604563651;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9fUY27g6Qw/nS/Mfmlj8Z8sM9Fw5/FFjn/5sVWmWWHg=;
	b=MSJiO+w0PrM3wTiJ3Ju0A768zoMzdOqfpCDSfUj8p/ZRIDS1GWYddVbp04xhNYBjiLQwbF
	xMD1aAjyrOXrOi2vmFpAXtBJQDDJrBXJMsO9kGPZQdBiLl7KX+j/RboCm+T+0Fcn9ku187
	AA0zdsh/3q2z9b6isXdRCJDImfvWSfA=
Subject: Re: [RFC PATCH] xen: EXPERT clean-up
To: Stefano Stabellini <sstabellini@kernel.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "julien@xen.org"
 <julien@xen.org>, "wl@xen.org" <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <20201031002405.4545-1-sstabellini@kernel.org>
 <cd44d479-8dba-6311-9386-0c8c1134d07e@suse.com>
 <alpine.DEB.2.21.2011021332460.5812@sstabellini-ThinkPad-T480s>
 <c127499b-810b-63af-5487-2cc9ecfdba09@suse.com>
 <alpine.DEB.2.21.2011031123420.5812@sstabellini-ThinkPad-T480s>
 <e0842284-a894-1e0b-ffbe-484013acefa5@suse.com>
 <FD3CD0C4-6055-443B-B7D9-EAAC4935D2A9@arm.com>
 <alpine.DEB.2.21.2011041704100.3264@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4b60899c-e8c6-bbbb-7a2e-44ed45955e7c@suse.com>
Date: Thu, 5 Nov 2020 09:07:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011041704100.3264@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.11.2020 02:14, Stefano Stabellini wrote:
> On Wed, 4 Nov 2020, Bertrand Marquis wrote:
>>> On 4 Nov 2020, at 07:34, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 03.11.2020 20:37, Stefano Stabellini wrote:
>>>> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>>>> On 02.11.2020 22:41, Stefano Stabellini wrote:
>>>>>> On Mon, 2 Nov 2020, Jan Beulich wrote:
>>>>>>> On 31.10.2020 01:24, Stefano Stabellini wrote:
>>>>>>>> @@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
>>>>>>>> 	  SBSA Generic UART implements a subset of ARM PL011 UART.
>>>>>>>>
>>>>>>>> config ARM_SSBD
>>>>>>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>>>>>>> -	depends on HAS_ALTERNATIVE
>>>>>>>> +	bool "Speculative Store Bypass Disable"
>>>>>>>> +	depends on HAS_ALTERNATIVE && EXPERT
>>>>>>>> 	default y
>>>>>>>
>>>>>>> At the example of this, I'm afraid when the default isn't "n"
>>>>>>> (or there's no default directive at all, as ought to be
>>>>>>> equivalent to and preferred over "default n"), such a
>>>>>>> transformation is not functionally identical: Before your
>>>>>>> change, with !EXPERT this option defaults to y. After your
>>>>>>> change this option is unavailable (which resolves to it being
>>>>>>> off for all consuming purposes).
>>>>>>>
>>>>>>> IOW there are reasons to have "if ..." attached to the prompts
>>>>>>> (for this construct indeed only making the prompt conditional,
>>>>>>> not the entire option), but there are also cases where the
>>>>>>> cleanup you do is indeed desirable / helpful.
>>>>>>
>>>>>> Yeah, thanks for catching it, it is obviously a problem.
>>>>>>
>>>>>> My intention was just to "tag" the options as EXPERT somehow so that it
>>>>>> would show in the menu. Maybe a better, simpler way to do it is
>>>>>> to add the word "EXPERT" to the one-line prompt:
>>>>>>
>>>>>> config ARM_SSBD
>>>>>> -	bool "Speculative Store Bypass Disable" if EXPERT
>>>>>> +	bool "Speculative Store Bypass Disable (EXPERT)" if EXPERT
>>>>>> 	depends on HAS_ALTERNATIVE
>>>>>> 	default y
>>>>>> 	help
>>>>>>
>>>>>>
>>>>>> What do you think?
>>>>>
>>>>> While on the surface this may look like an improvement, I don't
>>>>> see how it would actually help: If you read the Kconfig file
>>>>> itself, the dependency is seen anyway. And on the menu I don't
>>>>> see the point of telling someone who has enabled EXPERT that a
>>>>> certain option is (or is not) an expert one. If they think
>>>>> they're experts, so should they be treated. (It was, after all,
>>>>> a deliberate decision to make enabling expert mode easier, and
>>>>> hence easier to use for what one might consider not-really-
>>>>> experts. I realize saying so may be considered tendentious; I
>>>>> mean it in a purely technical sense, and I'd like to apologize
>>>>> in advance to anyone not sharing this as a possible perspective
>>>>> to take.)
>>>>>
>>>>> Plus, of course, the addition of such (EXPERT) markers to
>>>>> future options' prompts is liable to get forgotten now and then,
>>>>> so sooner or later we'd likely end up with an inconsistent
>>>>> mixture anyway.
>>>>
>>>> I tend to agree with you on everything you wrote. The fundamental issue
>>>> is that we are (mis)using EXPERT to tag features that are experimental,
>>>> as defined by SUPPORT.md.
>>>>
>>>> It is important to be able to distinguish clearly at the kconfig level
>>>> options that are (security) supported from options that are
>>>> unsupported/experimental. Today the only way to do it is with EXPERT
>>>> which is not great because:
>>>>
>>>> - it doesn't convey the idea that it is for unsupported/experimental
>>>>  features
>>>> - if you want to enable one unsupported feature, it is not clear what
>>>>  you have to do
>>>>
>>>> So maybe we should replace EXPERT with UNSUPPORTED (or EXPERIMENTAL) in
>>>> the Kconfig menu?
>>>
>>> If you mean this to be added to prompt texts, then yes, I'd view
>>> this as helpful. However, ...
>>
>> +1
>>
>>>
>>>> It would make it clearer that by enabling UNSUPPORTED you are going to
>>>> get a configuration that is not security supported. And ideally we would
>>>> also tag features like ACPI as UNSUPPORTED as I suggested above.
>>>
>>> ... things will get uglier when (just a simple example) something
>>> is supported on x86, but not on Arm.
>>
>> It is true that this could happen, but we could easily work around this
>> by having arch-specific entries selecting the generic one:
>>
>> CONFIG_PCI
>> 	bool
>> 	default n
>>
>> CONFIG_X86_PCI
>> 	bool if x86
>> 	select CONFIG_PCI
>>
>> CONFIG_ARM_PCI
>> 	bool if arm
>> 	depends on UNSUPPORTED
>> 	select CONFIG_PCI
>>
>> This is not the full syntax or right variables but you get the idea :-)
>>
>> This makes Kconfig more complex but improves the user configuration
>> experience, so I think this is a win.
> 
> It is good that we have a potential clean solution for this. However,
> today this problem is only theoretical because none of the EXPERT
> options under xen/commons have a different support status on ARM vs x86.
> So that's not an issue.
> 
> However, there are a few options in xen/common/Kconfig that honestly fit
> the original meaning of EXPERT rather than UNSUPPORTED, such as:
> - CMDLINE
> - TRACEBUFFER
> 
> I don't think we want to change CMDLINE from EXPERT to UNSUPPORTED,
> right? Jan, are there any other options, either under xen/common/Kconfig
> or xen/arch/x86/Kconfig, that you think should remain EXPERT?

GRANT_TABLE and the "Schedulers" menu. (As a general rule I'd say
that options defaulting to y, and where only the prompt is
conditional, are supported. In the "Schedulers" menu this then
means that the sub-options marked experimental are to become
dependent upon UNSUPPORTED in addition to the menu's EXPERT
dependency.)

Not sure about XSM_FLASK_AVC_STATS.

> So, I think the plan should be to:
> 
> - introduce a new UNSUPPORTED option, alongside EXPERT
> - change EXPERT options under xen/arch/arm/Kconfig to UNSUPPORTED
>     - ACPI
>     - HAS_ITS
>     - ARM_SSBD
>     - HARDEN_BRANCH_PREDICTOR
>     - TEE
> - change other EXPERT options to UNSUPPORTED where it makes sense
>     - e.g. ARGO
>     - EFI_SET_VIRTUAL_ADDRESS_MAP
>     - MEM_SHARING
>     - TBOOT
>     - XEN_SHSTK

Andrew, do we mean this last one to be / remain unsupported? I'd
be inclined to suggest EXPERT wants dropping there, or at least
moving to the prompt.

Jan
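
[Editorial note: the semantic distinction discussed in this thread can be
made concrete with a small Kconfig sketch. ARM_SSBD is the real option
from the patch; ARM_SSBD_ALT is an illustrative name, not a proposal.]

```kconfig
# Form 1: only the *prompt* is conditional. With !EXPERT the prompt is
# hidden, but the symbol still exists and takes its default, so the
# option is silently enabled (y).
config ARM_SSBD
	bool "Speculative Store Bypass Disable" if EXPERT
	depends on HAS_ALTERNATIVE
	default y

# Form 2: the *whole symbol* is conditional. With !EXPERT the dependency
# is unmet, so the option resolves to n for all consuming purposes.
config ARM_SSBD_ALT
	bool "Speculative Store Bypass Disable"
	depends on HAS_ALTERNATIVE && EXPERT
	default y
```

For default-y options the two forms therefore differ in behaviour, which
is why the transformation in the original patch is not functionally
identical.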


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 08:27:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 08:27:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19657.44892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaabt-0006Fc-Su; Thu, 05 Nov 2020 08:27:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19657.44892; Thu, 05 Nov 2020 08:27:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaabt-0006FV-Pw; Thu, 05 Nov 2020 08:27:09 +0000
Received: by outflank-mailman (input) for mailman id 19657;
 Thu, 05 Nov 2020 08:27:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaabs-0006FQ-KG
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 08:27:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12305d7e-24b5-4eb5-a4fa-dd9729b4d165;
 Thu, 05 Nov 2020 08:27:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B5976AAF1;
 Thu,  5 Nov 2020 08:27:06 +0000 (UTC)
X-Inumbo-ID: 12305d7e-24b5-4eb5-a4fa-dd9729b4d165
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604564826;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Q0bbq7bYVfWFcwRu4kTljUaHGYEoVOcrIhukQmraJOw=;
	b=dCM5TJtX7B6lDRAOas3+8Eew1IXo5VpCOnog5Vtw64pOyB6/b16wn/T5cKymvx6lYDaASI
	dRsScF0TaB0mSpvqBl5MPaFVAI3UPQdiHJymbLTDId2qE+MpxjsiVmi7if4TbYQukFzL1t
	wd9CKjvBQlYw42MsJ7KBBIts41yYvl0=
Subject: Re: [PATCH v2] tools/python: pass more -rpath-link options to ld
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek Marczykowski <marmarek@invisiblethingslab.com>
References: <8cf8cfa9-2b0c-123a-2d23-8932e61085fa@suse.com>
 <20201104171928.GA1647@mattapan.m5p.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5960e447-9e3d-a02e-3f11-c6aac01d6452@suse.com>
Date: Thu, 5 Nov 2020 09:27:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201104171928.GA1647@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.11.2020 18:19, Elliott Mitchell wrote:
> On Wed, Nov 04, 2020 at 03:57:49PM +0100, Jan Beulich wrote:
>> --- a/tools/python/Makefile
>> +++ b/tools/python/Makefile
>> @@ -8,19 +8,21 @@ PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
>>  PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>>  INSTALL_LOG = build/installed_files.txt
>>  
>> +setup.py = CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" \
>> +           SHLIB_libxenctrl="$(SHLIB_libxenctrl)" \
>> +           SHLIB_libxenguest="$(SHLIB_libxenguest)" \
>> +           SHLIB_libxenstore="$(SHLIB_libxenstore)" \
>> +           $(PYTHON) setup.py
>> +
>>  .PHONY: build
>>  build:
>> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
>> +	$(setup.py) build
>>  
>>  .PHONY: install
>>  install:
>>  	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
>> -
>> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
>> -		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
>> -		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>> +	$(setup.py) install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>>  		--root="$(DESTDIR)" --force
>> -
>>  	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
>>  	$(INSTALL_PYTHON_PROG) scripts/verify-stream-v2 $(DESTDIR)$(LIBEXEC_BIN)
> 
> Shouldn't similar work of moving all the environment variable settings to
> a $(setup.py) variable be done for tools/pygrub/Makefile?
> 
> tools/python/Makefile and tools/pygrub/Makefile are presently quite
> similar and keeping them similar seems a Good Idea(tm).

The only dependency there is libfsimage - I don't even know whether
the same approach can be used there. If it can, then likely yes, but
I've not observed a similar problem with pygrub, and it's only the
build problem I'm after here, sorry. As said in the post-commit-message
remark, I think there's more consolidation to be done here too, and
it's at that point that pygrub, as applicable, should also be brought
in sync.

Jan
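
[Editorial note: the factoring Elliott suggests could in principle be
applied to tools/pygrub/Makefile as well. The sketch below is purely
hypothetical: the variable names are borrowed from the tools/python
Makefile quoted above and have not been verified against pygrub's
actual rules.]

```make
# Hypothetical sketch only: collect the environment settings once, so
# that the build and install rules invoke setup.py identically.
setup.py = CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
           LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py

.PHONY: build
build:
	$(setup.py) build

.PHONY: install
install:
	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
	$(setup.py) install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
		--root="$(DESTDIR)" --force
```

Note that `setup.py` here is a recursively expanded make variable, so
it is re-evaluated at each use, matching the behaviour of the patch to
tools/python/Makefile.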


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 08:28:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 08:28:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19660.44904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaadN-0006OE-7u; Thu, 05 Nov 2020 08:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19660.44904; Thu, 05 Nov 2020 08:28:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaadN-0006O7-4q; Thu, 05 Nov 2020 08:28:41 +0000
Received: by outflank-mailman (input) for mailman id 19660;
 Thu, 05 Nov 2020 08:28:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R1bg=EL=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1kaadM-0006O0-Bm
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 08:28:40 +0000
Received: from mail-ot1-f41.google.com (unknown [209.85.210.41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df994039-12e6-4d80-8c3f-3d7fc46636a6;
 Thu, 05 Nov 2020 08:28:39 +0000 (UTC)
Received: by mail-ot1-f41.google.com with SMTP id k3so666486otp.12
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 00:28:39 -0800 (PST)
X-Inumbo-ID: df994039-12e6-4d80-8c3f-3d7fc46636a6
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=xicmHF4rJAUG4xgwhhtkHu520Lkff9VoRcMLqSQLnKw=;
        b=VLS+WigWq5UjMBX3KzrpVGTp+8IZJfkIG3aKGNUODK9H00tZceNdGYAk8A0nP1pkHP
         R7BEGx2DzTjTKUtJLapzTzA38r4rx4q2Mn5YZUjX1EBt4Qib35u8dYzr0h0MS2dSbnPj
         5m1Oo/p7NTYLq2K89m6Kb8TQfD/I7Dl7eFCsuhSkIjxoVcUMPMrXbIp+WQX1cH3wb2x+
         v3fYnf5wCoOWfvtqLARdrvJedXsEa+dWpebELbh14SId8NursgGQ0FXQIbQ4MU5mCcC2
         CKa6X/LHyj2kK/ItScHGKIwDHWJLygby9+DnXDE2s7ovHd/AtWNsE4+YPXgtmONW7oaJ
         qrkg==
X-Gm-Message-State: AOAM531ZFiLM3H2mimi0Ezk0yUdtnIS3GbN/5zTrxFmK+Pq9MWR6E9pV
	NEpMXM3XNIPI4DLB0+j+8F/vxdZDu1fgd4aZiQc=
X-Google-Smtp-Source: ABdhPJxEnS6iGJDfFZSOe00EmRjaIdeCWilAcRGjzSID/8pP/7VpwLVCFr4tVDvsPF7uAQfyKHOUI4NIAHwsXXz43kQ=
X-Received: by 2002:a05:6830:2085:: with SMTP id y5mr940046otq.37.1604564919101;
 Thu, 05 Nov 2020 00:28:39 -0800 (PST)
MIME-Version: 1.0
References: <20201103164604.2692357-1-philmd@redhat.com> <20201103164604.2692357-3-philmd@redhat.com>
 <20201103165247.GT205187@redhat.com> <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com>
 <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com> <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
 <9ac5e985-a701-f357-29fb-ef7975f5f2c2@redhat.com> <alpine.DEB.2.21.2011041805060.3264@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011041805060.3264@sstabellini-ThinkPad-T480s>
From: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Date: Thu, 5 Nov 2020 09:19:34 +0100
Message-ID: <CAAdtpL6tGqKyRSZiQK7ZaEuJyG6z2tAauzsDVQnet=3EkuqPBQ@mail.gmail.com>
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Huth <thuth@redhat.com>, Fam Zheng <fam@euphon.net>, 
	=?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Matthew Rosato <mjrosato@linux.ibm.com>, Paul Durrant <paul@xen.org>, 
	=?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Cornelia Huck <cohuck@redhat.com>, Christian Schoenebeck <qemu_oss@crudebyte.com>, Greg Kurz <groug@kaod.org>, 
	Wainer dos Santos Moschetta <wainersm@redhat.com>, qemu-devel@nongnu.org, 
	Halil Pasic <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org, 
	Paolo Bonzini <pbonzini@redhat.com>, Anthony Perard <anthony.perard@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, David Hildenbrand <david@redhat.com>, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	Richard Henderson <rth@twiddle.net>
Content-Type: multipart/alternative; boundary="00000000000005af5d05b357e2c9"

--00000000000005af5d05b357e2c9
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 5 Nov 2020 at 05:28, Stefano Stabellini <sstabellini@kernel.org> wrote:

> On Wed, 4 Nov 2020, Thomas Huth wrote:
> > On 04/11/2020 03.27, Stefano Stabellini wrote:
> > [...]
> > > Actually I care about Xen and 9pfs support; it is one of the few
> > > combinations that I use regularly, and it is even enabled in the Xilinx
> > > product I look after. But admittedly I don't test QEMU master as much as
> > > I should. With the recent changes to the build system it is not very
> > > surprising that there are some issues. It would be great to have a Xen
> > > and 9pfs test in the gitlab CI-loop.
> > >
> > >
> > > FYI I tried to build the latest QEMU on Alpine Linux 3.12 ARM64 and I
> > > get:
> > >
> > >   ninja: unknown tool 'query'
> > >
> > > Even after rebuilding ninja master by hand. Any ideas? I don't know much
> > > about ninja.
> > >
> > >
> > > So I gave up on that and spun up a Debian Buster x86 container for
> > > this build. That one got past the "ninja: unknown tool 'query'" error.
> > > The build completed without problems to the end.
> > >
> > > Either way I can't reproduce the build error above.
> >
> > Did you run "configure" with "--without-default-devices"?
>
> Yes, and still I can't repro the issue, strange. Anyway, I saw that
> Philippe managed to find and fix the issue with "hw/9pfs: Fix Kconfig
> dependency problem between 9pfs and Xen", so all sorted :)

Paolo figured out the problem and sent a diff; I just forwarded it as a formal patch ;)


--00000000000005af5d05b357e2c9--


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 09:25:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 09:25:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19669.44918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kabVp-00039X-Jr; Thu, 05 Nov 2020 09:24:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19669.44918; Thu, 05 Nov 2020 09:24:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kabVp-00039Q-GX; Thu, 05 Nov 2020 09:24:57 +0000
Received: by outflank-mailman (input) for mailman id 19669;
 Thu, 05 Nov 2020 09:24:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kabVo-00038s-BX
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 09:24:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72582182-07c5-43d6-8afc-28c065756f32;
 Thu, 05 Nov 2020 09:24:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kabVh-0003et-7E; Thu, 05 Nov 2020 09:24:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kabVg-0006bB-Vu; Thu, 05 Nov 2020 09:24:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kabVg-0006iI-QO; Thu, 05 Nov 2020 09:24:48 +0000
X-Inumbo-ID: 72582182-07c5-43d6-8afc-28c065756f32
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ow6uMQ/x8lKkxKZowkXVkBLiCWRNKPixiKWZm3gRc9g=; b=HMKSyJyeySbCxEAtSheh3tc+B2
	Z2P4K+a0RCgQSsgjN926jvys/NrANPs6V8n4mY3CcUTPZAKo92qdBjR5rbzDIs5KEaITc7A8/9u6j
	9iifvx/DRTXa538p8mNpT8UuD34hK+kIBbymZgXXRMCfdo1Qag3fwSKKzpfkNM43S80k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156400-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156400: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8d5708833509ece6ac63084dc07c8ac53c4d4c1a
X-Osstest-Versions-That:
    ovmf=375683654d46380e4e557502141e9823f6b68445
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 09:24:48 +0000

flight 156400 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156400/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8d5708833509ece6ac63084dc07c8ac53c4d4c1a
baseline version:
 ovmf                 375683654d46380e4e557502141e9823f6b68445

Last test of basis   156380  2020-11-03 10:11:49 Z    1 days
Testing same since   156400  2020-11-04 12:10:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   375683654d..8d57088335  8d5708833509ece6ac63084dc07c8ac53c4d4c1a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 10:08:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 10:08:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19679.44931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacBh-0006lg-WF; Thu, 05 Nov 2020 10:08:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19679.44931; Thu, 05 Nov 2020 10:08:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacBh-0006lZ-Rt; Thu, 05 Nov 2020 10:08:13 +0000
Received: by outflank-mailman (input) for mailman id 19679;
 Thu, 05 Nov 2020 10:08:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iIoW=EL=linaro.org=linus.walleij@srs-us1.protection.inumbo.net>)
 id 1kacBg-0006lU-9m
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 10:08:12 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc70d049-261d-489f-9eeb-51aa7706f3e9;
 Thu, 05 Nov 2020 10:08:10 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id 11so969902ljf.2
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 02:08:10 -0800 (PST)
X-Inumbo-ID: cc70d049-261d-489f-9eeb-51aa7706f3e9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=eEue8Bau+pupF7CyTGHXC2yqWeY8Sj18qJXhmeK0+qY=;
        b=rvffOctSKYH+6isvUsZ6vjuDGiRiPXrriws2tYpmxIOJoDbVaUBBLJZNLsZfPTViCm
         OB7UJ7kJVc74V/wH1kAJWs50qUWe8BxYk+3Bh0Ntwrn4R80Kw+RHuzDoPIDkFuxVKOMP
         awvWPavfta6vj9AUR/6i26T0t+QGvBOviGEDNRn+gsaMK+rwApX3VuBwIy6OffxGwIGI
         Q9ElrJtiii5nM3Ug/hNAa7sMnYOfnIMkNbbQWWsd9e9E4LqYfoNACefZyIQWVQxYvven
         erCA3D+b6EoRTgIG3bxLlMg5PeozREFm5ydwy7sAu7xfjUKeSkZzmy6N/j40DftaLQPo
         OSmw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=eEue8Bau+pupF7CyTGHXC2yqWeY8Sj18qJXhmeK0+qY=;
        b=XmHRlwZC0pXbdpsGhbYa8/73Sgy1YQjrlU6Bdq14F/kyzYMVqD0XkPFW6MQMeFzGJl
         wxZn7gi23LMws+oe5ZxOv48jojxPljEsas2h5Mg+cE1keqWEsyYEswHP1dgxyQ1P193s
         X52JSaNS/KyBmJD9bkvF5FB+Ftl7w8wM2pekYE4fLtq8LY32q2inMYSIMV3GX/hltpcV
         D6ZghUkiV3FjEvLezuQfMs1VaClc5VHRs9HirRwz+iRK/o56PMd65w03Yci9zsHZHy9T
         RXJaGhX7UH6Ke4+Sz0hU9BIIi3BjytCU4iRRvC6AXpci+LefUvK6cSQj5QASa2YzqgbA
         XfJA==
X-Gm-Message-State: AOAM530Agq4euaHA1yF5ilJCMSO1NufsUHC1aQlLKiZ/8yuEn7RIQtPf
	5BbIoLT+y4eypWFi/FFGecLBcN+nQxK0CPJp8a8DQg==
X-Google-Smtp-Source: ABdhPJzTsAT/B7g6yG/tiOPWkhIyiq+coc///CuocOp+hPojlZHj2TddLjwWNxDtJai798d/HI10cYYk95MMhchsZzo=
X-Received: by 2002:a05:651c:1205:: with SMTP id i5mr658065lja.283.1604570889728;
 Thu, 05 Nov 2020 02:08:09 -0800 (PST)
MIME-Version: 1.0
References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-10-tzimmermann@suse.de>
In-Reply-To: <20201020122046.31167-10-tzimmermann@suse.de>
From: Linus Walleij <linus.walleij@linaro.org>
Date: Thu, 5 Nov 2020 11:07:59 +0100
Message-ID: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Maxime Ripard <mripard@kernel.org>, 
	Dave Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>, Sam Ravnborg <sam@ravnborg.org>, 
	Alex Deucher <alexander.deucher@amd.com>, =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>, 
	Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>, 
	linux+etnaviv@armlinux.org.uk, 
	Christian Gmeiner <christian.gmeiner@gmail.com>, Inki Dae <inki.dae@samsung.com>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, Seung-Woo Kim <sw0312.kim@samsung.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>, 
	Krzysztof Kozlowski <krzk@kernel.org>, yuq825@gmail.com, Ben Skeggs <bskeggs@redhat.com>, 
	Rob Herring <robh@kernel.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, steven.price@arm.com, 
	alyssa.rosenzweig@collabora.com, Sandy Huang <hjc@rock-chips.com>, 
	=?UTF-8?Q?Heiko_St=C3=BCbner?= <heiko@sntech.de>, 
	Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>, Eric Anholt <eric@anholt.net>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, ray.huang@amd.com, 
	Sumit Semwal <sumit.semwal@linaro.org>, Emil Velikov <emil.velikov@collabora.com>, 
	luben.tuikov@amd.com, apaneers@amd.com, melissa.srw@gmail.com, 
	Chris Wilson <chris@chris-wilson.co.uk>, Qinglang Miao <miaoqinglang@huawei.com>, 
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>, amd-gfx@lists.freedesktop.org, 
	virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, lima@lists.freedesktop.org, 
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, xen-devel@lists.xenproject.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linaro-mm-sig@lists.linaro.org
Content-Type: text/plain; charset="UTF-8"

Overall I like this, just an inline question:

On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:

> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.

(...)
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst:       The dma-buf mapping structure
> + * @src:       The source buffer
> + * @len:       The number of byte in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> +       if (dst->is_iomem)
> +               memcpy_toio(dst->vaddr_iomem, src, len);
> +       else
> +               memcpy(dst->vaddr, src, len);
> +}

Are these going to be really big memcpy() operations?

Some platforms have DMA offload engines that can perform memcpy()
(see drivers/dma and include/linux/dmaengine.h), which helps
especially if the CPU doesn't really need to touch the contents
and flush caches etc.
An example exists in some MTD drivers that move large quantities of
data off flash memory like this:
drivers/mtd/nand/raw/cadence-nand-controller.c

Notice that DMAengine and DMAbuf do not have much in common;
the names can be deceiving.

The value of this varies with the system architecture. It is not just
a question of performance but also of power, and of the CPU being
able to do other stuff in parallel during large transfers. So *when*
to use this facility to accelerate memcpy() is a delicate question.

What I'm after here is: if these can be really big, do we want
(in the long run, not now) to open up to the idea of slotting in
hardware-accelerated memcpy() here?

Yours,
Linus Walleij


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 10:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 10:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19685.44943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacPJ-0008RK-6q; Thu, 05 Nov 2020 10:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19685.44943; Thu, 05 Nov 2020 10:22:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacPJ-0008RD-3k; Thu, 05 Nov 2020 10:22:17 +0000
Received: by outflank-mailman (input) for mailman id 19685;
 Thu, 05 Nov 2020 10:22:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aHwt=EL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kacPI-0008R8-59
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 10:22:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da533cfd-0cc0-45b9-9036-151e8f4005b0;
 Thu, 05 Nov 2020 10:22:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6064CABE3;
 Thu,  5 Nov 2020 10:22:14 +0000 (UTC)
X-Inumbo-ID: da533cfd-0cc0-45b9-9036-151e8f4005b0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604571734;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PUazweBzSOrJ/GJtnh5fKJx+H3PsKPOK3TrRHz2M8Do=;
	b=g8u4xtmmSxsyeUKapvsSpqJ/z68+O0NRvdXLRN+mLyKJYCu6sMhsKfISkLCUwzWtutQbOK
	HedpTfikP1LfAaH6PKczmrcqCLNHP5WNdT0HAehndnI/r4tAV41aasjw0Ju1/BLSL0gSEK
	s6OlYHhHKKXnroDRu5Lv6NXayNgervI=
Subject: Re: [PATCH v3 1/3] xen/x86: add nmi continuation framework
To: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201016085350.10233-1-jgross@suse.com>
 <20201016085350.10233-2-jgross@suse.com>
 <12640bbf-475c-3d74-9bb0-57befcadd626@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3b260972-4155-6c83-a4c3-21d096346337@suse.com>
Date: Thu, 5 Nov 2020 11:22:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <12640bbf-475c-3d74-9bb0-57befcadd626@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.20 15:33, Jan Beulich wrote:
> On 16.10.2020 10:53, Juergen Gross wrote:
>> Actions in NMI context are rather limited as e.g. locking is rather
>> fragile.
>>
>> Add a generic framework to continue processing in normal interrupt
>> context after leaving NMI processing.
>>
>> This is done by a high priority interrupt vector triggered via a
>> self IPI from NMI context, which will then call the continuation
>> function specified during NMI handling.
> 
> I'm concerned by there being just a single handler allowed, when
> the series already introduces two uses. A single NMI instance
> may signal multiple things in one go. At the very least we then
> need a priority, such that SERR could override oprofile.

A different approach could be not to introduce a generic interface,
but to explicitly call the continuation handlers in the interrupt
handler.

Instead of a function pointer, a parameter pointer and a busy
indicator (probably another function pointer) per cpu, we'd need for
now only a parameter value per cpu (for the oprofile case) and a
global flag (for the SERR case).

The downside would be having to add additional fields for other
use cases, but for now I think this could be the better way,
especially as this would remove the theoretical case of multiple
issues overwriting one another.

Thoughts?


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 10:37:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 10:37:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19692.44955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacdm-00013Z-HB; Thu, 05 Nov 2020 10:37:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19692.44955; Thu, 05 Nov 2020 10:37:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacdm-00013S-Db; Thu, 05 Nov 2020 10:37:14 +0000
Received: by outflank-mailman (input) for mailman id 19692;
 Thu, 05 Nov 2020 10:37:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PbUW=EL=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kacdk-00013N-RD
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 10:37:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f562ecd-e2f1-40cd-b332-4ddf02de49bc;
 Thu, 05 Nov 2020 10:37:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DA1A7AD18;
 Thu,  5 Nov 2020 10:37:10 +0000 (UTC)
X-Inumbo-ID: 4f562ecd-e2f1-40cd-b332-4ddf02de49bc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: Linus Walleij <linus.walleij@linaro.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
 Maxime Ripard <mripard@kernel.org>, Dave Airlie <airlied@linux.ie>,
 Daniel Vetter <daniel@ffwll.ch>, Sam Ravnborg <sam@ravnborg.org>,
 Alex Deucher <alexander.deucher@amd.com>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>,
 linux+etnaviv@armlinux.org.uk,
 Christian Gmeiner <christian.gmeiner@gmail.com>,
 Inki Dae <inki.dae@samsung.com>, Joonyoung Shim <jy0922.shim@samsung.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>,
 Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>,
 Krzysztof Kozlowski <krzk@kernel.org>, yuq825@gmail.com,
 Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, Sandy Huang <hjc@rock-chips.com>,
 =?UTF-8?Q?Heiko_St=c3=bcbner?= <heiko@sntech.de>,
 Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>,
 Eric Anholt <eric@anholt.net>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 ray.huang@amd.com, Sumit Semwal <sumit.semwal@linaro.org>,
 Emil Velikov <emil.velikov@collabora.com>, luben.tuikov@amd.com,
 apaneers@amd.com, melissa.srw@gmail.com,
 Chris Wilson <chris@chris-wilson.co.uk>,
 Qinglang Miao <miaoqinglang@huawei.com>,
 "open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
 amd-gfx@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
 etnaviv@lists.freedesktop.org,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 xen-devel@lists.xenproject.org,
 Linux Media Mailing List <linux-media@vger.kernel.org>,
 linaro-mm-sig@lists.linaro.org
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-10-tzimmermann@suse.de>
 <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>
From: Thomas Zimmermann <tzimmermann@suse.de>
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment
 interfaces
Message-ID: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de>
Date: Thu, 5 Nov 2020 11:37:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="au3pBN7wEzB3K5LcfiCmM2HMj6BCSqtMB"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--au3pBN7wEzB3K5LcfiCmM2HMj6BCSqtMB
Content-Type: multipart/mixed; boundary="r1jVFk6ICkZhPNx7Iw7VQFjwZE3LDZBXr";
 protected-headers="v1"

--r1jVFk6ICkZhPNx7Iw7VQFjwZE3LDZBXr
Content-Type: multipart/mixed;
 boundary="------------BD3526B269F64B0F265EC3B0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BD3526B269F64B0F265EC3B0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hi

Am 05.11.20 um 11:07 schrieb Linus Walleij:
> Overall I like this, just an inline question:
> 
> On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> 
>> To do framebuffer updates, one needs memcpy from system memory and a
>> pointer-increment function. Add both interfaces with documentation.
> 
> (...)
>> +/**
>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>> + * @dst:       The dma-buf mapping structure
>> + * @src:       The source buffer
>> + * @len:       The number of bytes in src
>> + *
>> + * Copies data into a dma-buf mapping. The source buffer is in system
>> + * memory. Depending on the buffer's location, the helper picks the correct
>> + * method of accessing the memory.
>> + */
>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>> +{
>> +       if (dst->is_iomem)
>> +               memcpy_toio(dst->vaddr_iomem, src, len);
>> +       else
>> +               memcpy(dst->vaddr, src, len);
>> +}
>=20
> Are these going to be really big memcpy() operations?

Individually, each could be a scanline, so a few KiB (4 bytes *
horizontal resolution). Updating a full framebuffer can sum up to
several MiB.

> 
> Some platforms have DMA offload engines that can perform memcpy(). They
> could be used (drivers/dma, include/linux/dmaengine.h), especially if
> the CPU doesn't really need to touch the contents and flush caches etc.
> An example exists in some MTD drivers that move large quantities of
> data off flash memory like this:
> drivers/mtd/nand/raw/cadence-nand-controller.c
> 
> Notice that DMAengine and DMAbuf do not have much in common,
> the names can be deceiving.
> 
> The value of this varies with the system architecture. It is not just
> a question about performance but also about power and the CPU
> being able to do other stuff in parallel for large transfers. So *when*
> to use this facility to accelerate memcpy() is a delicate question.
> 
> What I'm after here is if these can be really big, do we want
> (in the long run, not now) to open up to the idea to slot in
> hardware-accelerated memcpy() here?

We currently use this functionality for the graphical framebuffer
console that most DRM drivers provide. It's non-accelerated and slow,
but this has not been much of a problem so far.

Within DRM, we're more interested in removing console code from drivers
and going for the generic implementation.

Most of the graphics HW allocates framebuffers from video RAM, system
memory or CMA pools and does not really need these memcpys. Only a few
systems with small video RAM require a shadow buffer, which we flush
into VRAM as needed. Those might benefit.

OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
it from the DRM code. I think it all depends on how invasive that change
would be.

Best regards
Thomas

> 
> Yours,
> Linus Walleij
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

--------------BD3526B269F64B0F265EC3B0--

--r1jVFk6ICkZhPNx7Iw7VQFjwZE3LDZBXr--

--au3pBN7wEzB3K5LcfiCmM2HMj6BCSqtMB--


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 10:45:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 10:45:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19699.44969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaclE-00020I-Ge; Thu, 05 Nov 2020 10:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19699.44969; Thu, 05 Nov 2020 10:44:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaclE-00020B-D5; Thu, 05 Nov 2020 10:44:56 +0000
Received: by outflank-mailman (input) for mailman id 19699;
 Thu, 05 Nov 2020 10:44:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaclC-0001zX-S6
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 10:44:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7025109-941a-4837-9b4a-9807e3e608c5;
 Thu, 05 Nov 2020 10:44:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kacl2-0005Kl-BG; Thu, 05 Nov 2020 10:44:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kacl1-0002s9-U7; Thu, 05 Nov 2020 10:44:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kacl1-0002oW-Tf; Thu, 05 Nov 2020 10:44:43 +0000
X-Inumbo-ID: d7025109-941a-4837-9b4a-9807e3e608c5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ex95I7lJRXg/ojX82EngEiWKXrCgxPI3VZOvqQlKyBo=; b=hBPdgph94BOywB2sx4zI6Fe+PT
	L1wwV49n/nY0kNX9v5EL6XD9Euv4mKjNkrYqZe6smOCokz67BpiEYODOlZSO7w/sdUIkyuLlAcY1B
	stXlde1noAZCdhK1XCnYvJAM9WqD4J28BwGuKSL3H6DkIKm8s8E2OUgwxAAJMm/ABfLU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156403-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156403: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=3c8c36c9087da957f580a9bb5ebf7814a753d1c6
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 10:44:43 +0000

flight 156403 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156403/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                3c8c36c9087da957f580a9bb5ebf7814a753d1c6
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   77 days
Failing since        152659  2020-08-21 14:07:39 Z   75 days  170 attempts
Testing same since   156403  2020-11-04 22:20:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60305 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 10:56:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 10:56:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19706.44985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacw3-0002zi-KI; Thu, 05 Nov 2020 10:56:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19706.44985; Thu, 05 Nov 2020 10:56:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kacw3-0002zb-GV; Thu, 05 Nov 2020 10:56:07 +0000
Received: by outflank-mailman (input) for mailman id 19706;
 Thu, 05 Nov 2020 10:56:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kacw2-0002yP-5l
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 10:56:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68c3332c-aac4-4b1c-9083-c5fa52fa723d;
 Thu, 05 Nov 2020 10:55:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kacvu-0005YX-1a; Thu, 05 Nov 2020 10:55:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kacvt-0003HJ-Og; Thu, 05 Nov 2020 10:55:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kacvt-0000iM-OC; Thu, 05 Nov 2020 10:55:57 +0000
X-Inumbo-ID: 68c3332c-aac4-4b1c-9083-c5fa52fa723d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2BvZPZDvCmUFtR7XzZS7gVJCA9jYdXjbxjQhuATTKeA=; b=EcBEYTwUrGdNbFRsvuvSTfAw1T
	D0Rn5YC7wkZ4KR8DtQLcgVc6MCYyeS340hPpWCGTqx705/NbuXN3njq//x7rTwWbjZVlqU7UCbESY
	dCHcXMVxq8jwSubnkcxgcXRbCJ30Jp3uk/IsfYwxUpcp+MzI4fBMq1aJbMVFyCpIdhRo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156406-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156406: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    xen-4.12-testing:build-arm64:xen-build:fail:regression
    xen-4.12-testing:build-amd64:xen-build:fail:regression
    xen-4.12-testing:build-amd64-prev:xen-build:fail:regression
    xen-4.12-testing:build-arm64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-i386:xen-build:fail:regression
    xen-4.12-testing:build-amd64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-armhf:xen-build:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:build-i386-prev:hosts-allocate:starved:nonblocking
    xen-4.12-testing:build-i386-xsm:hosts-allocate:starved:nonblocking
    xen-4.12-testing:build-i386-pvops:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=4f9294d21c47415376215d68a0298e88582b8e7a
X-Osstest-Versions-That:
    xen=97b7b5567fba6918a656ad349051b5343b5dea2e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 10:55:57 +0000

flight 156406 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156406/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 156358
 build-amd64                   6 xen-build                fail REGR. vs. 156358
 build-amd64-prev              6 xen-build                fail REGR. vs. 156358
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156358
 build-i386                    6 xen-build                fail REGR. vs. 156358
 build-amd64-xsm               6 xen-build                fail REGR. vs. 156358
 build-armhf                   6 xen-build                fail REGR. vs. 156358

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      starved n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               starved  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      starved n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 build-i386-prev               2 hosts-allocate               starved  n/a
 build-i386-xsm                2 hosts-allocate               starved  n/a
 build-i386-pvops              2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  4f9294d21c47415376215d68a0298e88582b8e7a
baseline version:
 xen                  97b7b5567fba6918a656ad349051b5343b5dea2e

Last test of basis   156358  2020-11-02 08:38:03 Z    3 days
Testing same since   156398  2020-11-04 09:06:02 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               starved 
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              starved 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             starved 
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            starved 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         starved 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  starved 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  starved 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       starved 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4f9294d21c47415376215d68a0298e88582b8e7a
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Wed Nov 4 09:36:36 2020 +0100

    SUPPORT.md: Desupport qemu trad except stub dm
    
    While investigating XSA-335 we discovered that many upstream security
    fixes were missing.  It is not practical to backport them.  There is
    no good reason to be running this very ancient version of qemu, except
    that it is the only way to run a stub dm which is currently supported
    by upstream.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 8587160b3e2951b722d395a0346bb17c3c22152f
    master date: 2020-11-04 09:22:37 +0100
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 11:00:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 11:00:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19711.45001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kad0D-0003tS-7w; Thu, 05 Nov 2020 11:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19711.45001; Thu, 05 Nov 2020 11:00:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kad0D-0003tL-3Q; Thu, 05 Nov 2020 11:00:25 +0000
Received: by outflank-mailman (input) for mailman id 19711;
 Thu, 05 Nov 2020 11:00:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj/r=EL=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kad0C-0003tF-96
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:00:24 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c0850875-fd4b-4733-a359-ea735ce08dd9;
 Thu, 05 Nov 2020 11:00:21 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-442-j8HSdQU9OQy-Ur9tqgDw7g-1; Thu, 05 Nov 2020 06:00:17 -0500
Received: by mail-wr1-f72.google.com with SMTP id w6so574645wrk.1
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 03:00:16 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e?
 ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e])
 by smtp.gmail.com with ESMTPSA id k18sm2066625wrx.96.2020.11.05.03.00.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 05 Nov 2020 03:00:14 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gj/r=EL=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
	id 1kad0C-0003tF-96
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:00:24 +0000
X-Inumbo-ID: c0850875-fd4b-4733-a359-ea735ce08dd9
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id c0850875-fd4b-4733-a359-ea735ce08dd9;
	Thu, 05 Nov 2020 11:00:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604574021;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X1UViqaY2dw2tJ/uiPRbcc3la6348uy0QyvJfmG6SVc=;
	b=JY49kP0ensAiR9pPkvJ55541OsCvgYqDnyBO/u3UaNaWRsMseBxRvnZC23CGTZqms/LBYI
	dTZHv1XoBQ6xXYzbbmFIm9gwl5ZLNmv/eC/RVG5l3XGXUnjs1AESGJA6RGsKbm5pGzCeJZ
	72vnqmCE+P7tBA8Wd1GmqMlSClCzMAY=
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-442-j8HSdQU9OQy-Ur9tqgDw7g-1; Thu, 05 Nov 2020 06:00:17 -0500
X-MC-Unique: j8HSdQU9OQy-Ur9tqgDw7g-1
Received: by mail-wr1-f72.google.com with SMTP id w6so574645wrk.1
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 03:00:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:to:cc:references:from:subject:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=X1UViqaY2dw2tJ/uiPRbcc3la6348uy0QyvJfmG6SVc=;
        b=uUIg4liAOtHtF+RvP6nXFiBaaelKiP8k5SMDTbEgZirEQLW1rMbr1w7NkJDYewXhAq
         NIZzDUK3C023jiQQuYm/OcU49GDM8rneIwWIkL+iIby/loREhRg5GlM4uaUG63xBRn1U
         1+o3Azz+4K0lef7HY6uHbUwhfxMeL8KCsSVCAwC7Mx44aBz5sKEfYJKkvSBukJEt34BU
         HjJ0aWytdMCN/Dfdf9SYSWO7xbgqWVbiU1wvqnb3HFiwvh06/JgcaQXmPuO2pl+rbUgQ
         RVAJ56VsRvu+emnXJ54MkCWx8oj8TaSVHHLLNhECVqDOkB4WRUzvU2Hf4pjAkxNY1ac/
         mJIg==
X-Gm-Message-State: AOAM531KXd/DAwRT8GVgoMMwPnGfetM6UWBqc6vBt16VzmneUF83KMN5
	bVzyY2Li/MeBqN44p605bmIPWkyJEAkwj8Y3+iNWQ7d1E3BYXAVuzbTtHDBkdURmlPZW1NlBqgf
	bKgtKVHXH5g2okWDmuFDzhuo7D8o=
X-Received: by 2002:adf:cd0c:: with SMTP id w12mr2144968wrm.305.1604574015858;
        Thu, 05 Nov 2020 03:00:15 -0800 (PST)
X-Google-Smtp-Source: ABdhPJybpmotyKASMAB+t7/PgRClyJnCN9SPeZuQ/cw9KvOuv0EQ8kS0mbpjFsUNU3T9k8SUs/TxWg==
X-Received: by 2002:adf:cd0c:: with SMTP id w12mr2144953wrm.305.1604574015658;
        Thu, 05 Nov 2020 03:00:15 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e? ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e])
        by smtp.gmail.com with ESMTPSA id k18sm2066625wrx.96.2020.11.05.03.00.13
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Thu, 05 Nov 2020 03:00:14 -0800 (PST)
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>,
 Daniel P. Berrangé <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>,
 qemu-devel@nongnu.org, Cornelia Huck <cohuck@redhat.com>,
 Alex Bennée <alex.bennee@linaro.org>,
 David Hildenbrand <david@redhat.com>, qemu-s390x@nongnu.org,
 Fam Zheng <fam@euphon.net>, Richard Henderson <rth@twiddle.net>,
 Matthew Rosato <mjrosato@linux.ibm.com>, Halil Pasic <pasic@linux.ibm.com>,
 Thomas Huth <thuth@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>
References: <20201103164604.2692357-1-philmd@redhat.com>
 <20201103164604.2692357-3-philmd@redhat.com>
 <20201103165247.GT205187@redhat.com>
 <7654e063-98d3-84e0-8116-5a1b41d14636@redhat.com>
 <21e90ddb-fe8a-c780-2741-9b7a2f7f1c9a@redhat.com>
 <alpine.DEB.2.21.2011031722100.3264@sstabellini-ThinkPad-T480s>
 <CABgObfaAH1fty0y0Z10GALnhy4kL_FqSxPZc2-=PwJgtSrOX0g@mail.gmail.com>
 <alpine.DEB.2.21.2011041742580.3264@sstabellini-ThinkPad-T480s>
From: Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH-for-5.2 2/3] gitlab-ci: Add a job to cover the
 --without-default-devices config
Message-ID: <462761e7-c466-bb61-1777-cf644c6ad615@redhat.com>
Date: Thu, 5 Nov 2020 12:00:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011041742580.3264@sstabellini-ThinkPad-T480s>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05/11/20 03:48, Stefano Stabellini wrote:
> I repeated all the steps to make sure. The first time I was using
> Samurai because Alpine Linux comes with it and not Ninja. Then, I
> removed Samurai and built and installed Ninja by hand from
> https://github.com/ninja-build/ninja and that actually works. Yesterday
> it was late and I was distracted by global events -- I must have failed
> to update Ninja appropriately. Sorry for the confusion.

FWIW I sent an Alpine merge request to support "ninja -t query".  We 
should add an Alpine container and pipeline once it's merged.

Paolo



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 11:02:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 11:02:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19716.45012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kad1n-00041n-Lu; Thu, 05 Nov 2020 11:02:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19716.45012; Thu, 05 Nov 2020 11:02:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kad1n-00041g-Iy; Thu, 05 Nov 2020 11:02:03 +0000
Received: by outflank-mailman (input) for mailman id 19716;
 Thu, 05 Nov 2020 11:02:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kad1m-00041X-KJ
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:02:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f32cd70-289d-4e20-9ab7-c38be886ae13;
 Thu, 05 Nov 2020 11:01:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kad1j-0005hq-IT; Thu, 05 Nov 2020 11:01:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kad1j-0003W5-3Y; Thu, 05 Nov 2020 11:01:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kad1j-0003iD-34; Thu, 05 Nov 2020 11:01:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kad1m-00041X-KJ
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:02:02 +0000
X-Inumbo-ID: 1f32cd70-289d-4e20-9ab7-c38be886ae13
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1f32cd70-289d-4e20-9ab7-c38be886ae13;
	Thu, 05 Nov 2020 11:01:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cId6GD9nRywjxhe3yZ/ROel4wgPTpfdM426B7GOyd5E=; b=JSRT/pCfcP/zrYkRTXb1wiLEXw
	WvgYTmIW4b7i+hk/LzntRvAALn+2ekQON76awz6J0e8jzaFIqIK/a2GFJKO0TxhGnYg3xRGFw7c+N
	1lcH7NhuV7W8vA/j+Aof06I42+7sQrYvDBULLRQ8KF5FHogF61vWNvwvKJ8fXntgzJpU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kad1j-0005hq-IT; Thu, 05 Nov 2020 11:01:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kad1j-0003W5-3Y; Thu, 05 Nov 2020 11:01:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kad1j-0003iD-34; Thu, 05 Nov 2020 11:01:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156399-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156399: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=83115491d4b3dbcb7c8dbe74ce3e59cdfac69b03
X-Osstest-Versions-That:
    xen=0060ac29bcbdb76d49d2e248ddfcb7afa2345440
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 11:01:59 +0000

flight 156399 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156399/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156317
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156317
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156317
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156317
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156317
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156317
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156317
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156317
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156317
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156317
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156317
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  83115491d4b3dbcb7c8dbe74ce3e59cdfac69b03
baseline version:
 xen                  0060ac29bcbdb76d49d2e248ddfcb7afa2345440

Last test of basis   156317  2020-10-30 11:37:16 Z    5 days
Testing same since   156399  2020-11-04 09:06:15 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0060ac29bc..83115491d4  83115491d4b3dbcb7c8dbe74ce3e59cdfac69b03 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 11:26:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 11:26:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19728.45036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kadOp-0005vl-IR; Thu, 05 Nov 2020 11:25:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19728.45036; Thu, 05 Nov 2020 11:25:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kadOp-0005ve-FQ; Thu, 05 Nov 2020 11:25:51 +0000
Received: by outflank-mailman (input) for mailman id 19728;
 Thu, 05 Nov 2020 11:25:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kadOn-0005vZ-SR
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:25:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c93bcdca-2a91-4ec8-a3ee-173a9ee72605;
 Thu, 05 Nov 2020 11:25:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BAD00ABDE;
 Thu,  5 Nov 2020 11:25:46 +0000 (UTC)
X-Inumbo-ID: c93bcdca-2a91-4ec8-a3ee-173a9ee72605
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604575546;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SoL6+H/FfkWigkfWlqT6sgVtGr/v0ysYD2tc8xweAuQ=;
	b=U+J4W2YUyZIphuYUTDdXDJTerGGYP3c6aei8hz+ruSKB8jHHmqLwnbv1lKH1bTns2Q/KMY
	AtrHi2R9wAXmN3KN13FFwaq6Abrvrn2pEprNsheKswcy76iOLFX3G+t7nqzRPpqOVUpO7k
	RttteLGrbETTtbds1PM7fn9vh4TK1G0=
Subject: Re: [PATCH v3 1/3] xen/x86: add nmi continuation framework
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201016085350.10233-1-jgross@suse.com>
 <20201016085350.10233-2-jgross@suse.com>
 <12640bbf-475c-3d74-9bb0-57befcadd626@suse.com>
 <3b260972-4155-6c83-a4c3-21d096346337@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <420ae10e-c9d2-2d12-fb3e-cdc315d59418@suse.com>
Date: Thu, 5 Nov 2020 12:25:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <3b260972-4155-6c83-a4c3-21d096346337@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.11.2020 11:22, Jürgen Groß wrote:
> On 20.10.20 15:33, Jan Beulich wrote:
>> On 16.10.2020 10:53, Juergen Gross wrote:
>>> Actions in NMI context are rather limited, as e.g. locking is
>>> fragile.
>>>
>>> Add a generic framework to continue processing in normal interrupt
>>> context after leaving NMI processing.
>>>
>>> This is done by a high-priority interrupt vector triggered via a
>>> self-IPI from NMI context, which will then call the continuation
>>> function specified during NMI handling.
>>
>> I'm concerned by there being just a single handler allowed, when
>> the series already introduces two uses. A single NMI instance
>> may signal multiple things in one go. At the very least we then
>> need a priority, such that SERR could override oprofile.
> 
> A different approach could be not to introduce a generic interface,
> but to explicitly call the continuation handlers in the interrupt
> handler.
> 
> Instead of a function pointer, a parameter pointer, and a busy
> indicator (probably another function pointer) per cpu, we'd need,
> for now, only a parameter value per cpu (for the oprofile case) and
> a global flag (for the SERR case).
> 
> The downside would be having to add additional fields for other
> use cases, but for now I think this could be the better way,
> especially as this would remove the theoretical case of multiple
> issues overwriting one another.

Yes, perhaps less abstraction is the better approach here, for now
at least. Let's give Andrew and Roger a little bit of time to
object before going down that route.

Jan
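For readers following along, the single-slot continuation scheme being debated can be modelled in plain user-space C. This is an illustrative sketch only: the names (nmi_set_continuation(), run_continuations()) and the atomics-based slot are assumptions made for the demo, not the actual Xen API, and the memory-ordering care the real (concurrent) code needs is elided. It does show concretely why a lone per-CPU slot loses a second event, which is the concern raised above.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* One continuation function plus argument, standing in for the
 * per-CPU slot of the proposed framework. */
typedef void (*cont_fn_t)(unsigned int arg);

static cont_fn_t _Atomic cont_fn;   /* the single slot */
static unsigned int cont_arg;

/* "NMI context": publish work lock-free.  Returns 0 if the slot is
 * already busy, i.e. a second concurrently signalled event is lost --
 * hence the suggestion of a priority scheme or per-use fields. */
static int nmi_set_continuation(cont_fn_t fn, unsigned int arg)
{
    cont_fn_t expected = NULL;

    if ( !atomic_compare_exchange_strong(&cont_fn, &expected, fn) )
        return 0;                   /* slot busy: event dropped */
    cont_arg = arg;
    return 1;
}

/* Ordinary interrupt context (the self-IPI handler in the patch):
 * consume and clear the slot, then run the continuation. */
static void run_continuations(void)
{
    cont_fn_t fn = atomic_exchange(&cont_fn, (cont_fn_t)NULL);

    if ( fn )
        fn(cont_arg);
}

/* Demo consumer standing in for the oprofile user. */
static unsigned int handled;
static void oprofile_cont(unsigned int arg) { handled = arg; }
```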


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 11:33:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 11:33:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19733.45049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kadW4-0006rF-CP; Thu, 05 Nov 2020 11:33:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19733.45049; Thu, 05 Nov 2020 11:33:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kadW4-0006r8-8v; Thu, 05 Nov 2020 11:33:20 +0000
Received: by outflank-mailman (input) for mailman id 19733;
 Thu, 05 Nov 2020 11:33:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kadW2-0006r3-2X
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:33:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de39cebb-fcb7-42e9-bcfa-f79d3b787125;
 Thu, 05 Nov 2020 11:33:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4FBF8ABE3;
 Thu,  5 Nov 2020 11:33:15 +0000 (UTC)
X-Inumbo-ID: de39cebb-fcb7-42e9-bcfa-f79d3b787125
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604575995;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/FW7ZIc43wsFfXwFpsgIs8hKt6y6twweyPDlhTMzmFQ=;
	b=LYr36KQfi8F5T+nqzxBXyURwPl/cucLx3hiMle6zlOozjMIXE7Nwnj+nkjmRi9yaWF7tn8
	0UCbdHjV6owhcC/KuzVnAdjbFOa3mRChj1X4geBKxXn9sxmtGJjMMqISs3isyquRh4ZVWP
	ym/azaLMrq2xbnI9bvBQ/PX0u125gk0=
Subject: Re: [PATCH v4.1 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201104115739.20144-1-jgross@suse.com>
 <ae263d8f-b81d-4c47-2760-6ef3823ca780@suse.com>
 <f558dfae-ecec-9884-00de-4edd65e39b0f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e930bc95-eeac-b16e-48c8-ef7a5dcb1ec2@suse.com>
Date: Thu, 5 Nov 2020 12:33:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <f558dfae-ecec-9884-00de-4edd65e39b0f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.11.2020 16:53, Jürgen Groß wrote:
> On 04.11.20 16:29, Jan Beulich wrote:
>>> @@ -738,7 +725,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>>>   
>>>       lchn = evtchn_from_port(ld, lport);
>>>   
>>> -    spin_lock_irqsave(&lchn->lock, flags);
>>> +    if ( !evtchn_read_trylock(lchn) )
>>> +        return 0;
>>
>> Isn't there a change in behavior here? While sends through
>> ECS_UNBOUND ports indeed get silently ignored, ECS_FREE ones ought
>> to be getting -EINVAL (as should ECS_UNBOUND ones if they're
>> Xen-consumer ones). With the failed trylock you don't know which
>> of the two the port is in the process of being transitioned
>> to/from. The same would apply for other operations distinguishing
>> the two states. Right now both evtchn_status() and
>> evtchn_bind_vcpu() only use the domain-wide lock, but the latter
>> is getting switched by "evtchn: convert domain event lock to an
>> r/w one" (granted there's an RFC remark there whether that
>> transformation is worthwhile). Anyway, the main point of my remark
>> is that there's another subtlety here which I don't think becomes
>> obvious from description or comments - where the two states are
>> mentioned, it gets to look as if they can be treated equally.
> 
> Hmm, evtchn_send() seems to always be called with interrupts enabled.
> We could just use a standard read_lock() here if you think the different
> states should be treated as they are today.

This would avoid the caveat in this specific case, but it would
remain elsewhere (at least as an abstract trap to fall into). I
suppose evtchn_status() could use a regular read_lock() too, for
the same reason (if it were to be switched), and evtchn_bind_vcpu()
may need a write lock anyway (which is forbidden in your model,
i.e. I'd likely need to drop the switch to the finer-grained lock
there).

>>> --- a/xen/include/xen/event.h
>>> +++ b/xen/include/xen/event.h
>>> @@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>>>   #define bucket_from_port(d, p) \
>>>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>>>   
>>> +/*
>>> + * Lock an event channel exclusively. This is allowed only when the channel is
>>> + * free or unbound either when taking or when releasing the lock, as any
>>> + * concurrent operation on the event channel using evtchn_read_trylock() will
>>> + * just assume the event channel is free or unbound at the moment when the
>>> + * evtchn_read_trylock() returns false.
>>> + */
>>> +static inline void evtchn_write_lock(struct evtchn *evtchn)
>>> +{
>>> +    write_lock(&evtchn->lock);
>>> +
>>> +    evtchn->old_state = evtchn->state;
>>> +}
>>> +
>>> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
>>> +{
>>> +    /* Enforce lock discipline. */
>>> +    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
>>> +           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
>>> +
>>> +    write_unlock(&evtchn->lock);
>>> +}
>>
>> These two aren't needed outside of event_channel.c, are they? If
>> so, and if they ought to go in a header rather than directly into
>> the .c file (where I'd prefer the latter, for the sake of minimal
>> exposure), then it should be event_channel.h.
> 
> I wanted to have the locking functions in one place.
> 
> In case you prefer it otherwise (and you seem to do so) I'd rather move
> the write lock functions to the .c file.

Yes please.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 11:56:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 11:56:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19741.45061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kadsn-0000Gb-EX; Thu, 05 Nov 2020 11:56:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19741.45061; Thu, 05 Nov 2020 11:56:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kadsn-0000GU-Aw; Thu, 05 Nov 2020 11:56:49 +0000
Received: by outflank-mailman (input) for mailman id 19741;
 Thu, 05 Nov 2020 11:56:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kadsm-0000GP-1C
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 11:56:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3b14e2c-1f39-4b33-9413-1d109a036771;
 Thu, 05 Nov 2020 11:56:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59637ABDE;
 Thu,  5 Nov 2020 11:56:46 +0000 (UTC)
X-Inumbo-ID: c3b14e2c-1f39-4b33-9413-1d109a036771
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604577406;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6zB8KEK+nBzVoOInEmzNrPpJZ/h0ekA5YpK91js6qt0=;
	b=dWHUU2T8+npwRfyDN0GZxvf9G1jaz3qA8PfcQrS20b78nrYL3G5mCHET4XG7tmR57AYusv
	B1ju9kQHDCWG+B6WFSwDj3lzUH+WNkKo1//mGm5b4GX2rGvcDUMmFiR0l/68VPvzFQw5za
	9iI/ONCe2uPsIYorMLGMDZF9EG/UF7k=
Subject: Ping: [PATCH] libxl: fix libacpi dependency
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bd68e8f4-ce57-7798-f6d2-53e85319b8d4@suse.com>
Message-ID: <f2172d3f-38fc-7f9f-9b31-2c07a1686cff@suse.com>
Date: Thu, 5 Nov 2020 12:56:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <bd68e8f4-ce57-7798-f6d2-53e85319b8d4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 12:40, Jan Beulich wrote:
> $(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
> so that the generated files exist before their compilation is
> attempted. The same is also necessary for the generated headers: they
> need to exist before the source files that include them get compiled.
> 
> The dependency specified in libacpi's Makefile, otoh, is entirely
> pointless nowadays - no compilation happens there anymore (except for
> tools involved in building the generated files). Together with it, the
> rule generating acpi.a also can go away.
> 
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'd appreciate a libxl side ack (or otherwise) here.

Thanks, Jan

> ---
> Arguably we might also use $(ACPI_OBJS) instead of specifying just the
> one object file we know has respective #include directives.
> 
> --- a/tools/libacpi/Makefile
> +++ b/tools/libacpi/Makefile
> @@ -89,11 +89,6 @@ iasl:
>  	@echo 
>  	@exit 1
>  
> -build.o: ssdt_s3.h ssdt_s4.h ssdt_pm.h ssdt_tpm.h ssdt_laptop_slate.h
> -
> -acpi.a: $(OBJS)
> -	$(AR) rc $@ $(OBJS)
> -
>  clean:
>  	rm -f $(C_SRC) $(H_SRC) $(MK_DSDT) $(C_SRC:=.$(TMP_SUFFIX))
>  	rm -f $(patsubst %.c,%.hex,$(C_SRC)) $(patsubst %.c,%.aml,$(C_SRC)) $(patsubst %.c,%.asl,$(C_SRC))
> --- a/tools/libs/light/Makefile
> +++ b/tools/libs/light/Makefile
> @@ -32,7 +32,7 @@ ACPI_PATH  = $(XEN_ROOT)/tools/libacpi
>  DSDT_FILES-$(CONFIG_X86) = dsdt_pvh.c
>  ACPI_OBJS  = $(patsubst %.c,%.o,$(DSDT_FILES-y)) build.o static_tables.o
>  ACPI_PIC_OBJS = $(patsubst %.o,%.opic,$(ACPI_OBJS))
> -$(DSDT_FILES-y): acpi
> +$(DSDT_FILES-y) build.o: acpi
>  vpath build.c $(ACPI_PATH)/
>  vpath static_tables.c $(ACPI_PATH)/
>  
> 
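To make the intent of the hunk concrete, here is a toy Makefile sketch of the situation being fixed. Target and variable names follow the patch; everything else is illustrative, not the literal tools/libs/light/Makefile:

```make
# 'acpi' runs the recursive make in libacpi/, which generates both the
# dsdt_*.c sources and the ssdt_*.h headers that build.c #includes.
.PHONY: acpi
acpi:
	$(MAKE) -C $(ACPI_PATH)

# Before the fix, only the generated C files waited for the recursive
# make ($(DSDT_FILES-y): acpi), so a parallel build could try to
# compile build.c before its generated headers existed. Adding build.o
# to the same prerequisite list closes that race:
$(DSDT_FILES-y) build.o: acpi
```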



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 12:16:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 12:16:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19757.45085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeBW-0002CM-FF; Thu, 05 Nov 2020 12:16:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19757.45085; Thu, 05 Nov 2020 12:16:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeBW-0002CF-Bj; Thu, 05 Nov 2020 12:16:10 +0000
Received: by outflank-mailman (input) for mailman id 19757;
 Thu, 05 Nov 2020 12:16:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=68sV=EL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kaeBU-0002CA-PD
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 12:16:08 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f117fe0e-a7ff-4b79-aaa6-bd9efbde5763;
 Thu, 05 Nov 2020 12:16:05 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-284-L2z54I-1M8OoQQrCIQUUvA-1; Thu, 05 Nov 2020 07:16:04 -0500
Received: by mail-wm1-f72.google.com with SMTP id o19so552018wme.2
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 04:16:04 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id 89sm2413628wrp.58.2020.11.05.04.16.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 05 Nov 2020 04:16:01 -0800 (PST)
X-Inumbo-ID: f117fe0e-a7ff-4b79-aaa6-bd9efbde5763
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604578565;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=M9ICR1yXh3FG7V16PL+z404rlhj3Yi3iLJ9NPtGdMTA=;
	b=WM/T6UuoLm6j9V4ScywLDc6lX3Y/SFfm3q//cngaZBEt7fY/BDPOpT9NU4UXKcrT0WIO/8
	Lgs4xSUrw+OsLskGmHNdwqTCbIuHv8huChNkBJ56vv2zNbGy9exeCW+Qlgofp8nV1/lgqn
	PpyrX+ai/vw29ZlBpdToLDmwKhjzhjM=
X-MC-Unique: L2z54I-1M8OoQQrCIQUUvA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=M9ICR1yXh3FG7V16PL+z404rlhj3Yi3iLJ9NPtGdMTA=;
        b=Puk5QEBtTeBJDMLdyxZOYMr8vwCusIXnrx0udb8QF5v52KzxambuNPsDx5hZ6E8leF
         wx9kJQp+ylYp8akJf3IMR8DgVu9NymvgFqYb1PBBqu0zxeIR1NYkQHrtKwCPYmkyaTxh
         3Tpn7kyfux50T73oGrk60j/qRQhViA4ryN5z/+CQdcUFjrzxF6TISZ5XmdcUehJgCR5O
         m2Z5rvqt4XJ2/DUdYvLkB/6FtAyOzQG+1Bwk9SRaSWq4EyVRCuehfCWiHCwYurNkYViX
         MKQ+VM/eUeU4H6t8uEfIet6EvazfU1ygb7HvglpkNGQJcOjNkVG/9SSjXBQGX7yfT5Do
         oAkw==
X-Gm-Message-State: AOAM531flenvTlbw+eSv1jedV8mxA9+qcVYVNH8wmITZPF1uBQwXCf0x
	FKNQXu5ypnyXlBynVlPza8yT4ukL0txzhkuLspDv8pmdHgE6P7xhWPlX66LqZBvEnYU1ojEcY1u
	b985GC5jieZsM4fJR2QPOBEpaqZ4=
X-Received: by 2002:a7b:ce8a:: with SMTP id q10mr2406918wmj.101.1604578562984;
        Thu, 05 Nov 2020 04:16:02 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwUna/uTT3GoKmJffb5+QYG325EJWzeGaJI9YrtAx/wl8yEiUHnt1PCFQCITOvfdErMIV251g==
X-Received: by 2002:a7b:ce8a:: with SMTP id q10mr2406887wmj.101.1604578562777;
        Thu, 05 Nov 2020 04:16:02 -0800 (PST)
Subject: Re: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem
 between 9pfs and Xen
To: Greg Kurz <groug@kaod.org>, Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: qemu-devel@nongnu.org, Fam Zheng <fam@euphon.net>,
 Thomas Huth <thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 "Daniel P . Berrange" <berrange@redhat.com>,
 Matthew Rosato <mjrosato@linux.ibm.com>, David Hildenbrand
 <david@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Cornelia Huck <cohuck@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>,
 Richard Henderson <rth@twiddle.net>
References: <20201104115706.3101190-1-philmd@redhat.com>
 <20201104115706.3101190-3-philmd@redhat.com> <8965407.pN9RvXrJQ9@silver>
 <20201104185439.41e9ddb3@bahia.lan>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <17370310-d69c-91ff-763d-52a1355ad605@redhat.com>
Date: Thu, 5 Nov 2020 13:15:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201104185439.41e9ddb3@bahia.lan>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/4/20 6:54 PM, Greg Kurz wrote:
> On Wed, 04 Nov 2020 13:18:09 +0100
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> 
>> On Mittwoch, 4. November 2020 12:57:04 CET Philippe Mathieu-Daudé wrote:
>>> Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
>>> CONFIG_9PFS (probably a wrong conflict resolution). This config is
>>> not used anywhere. Backends depend on CONFIG_FSDEV_9P which itself
>>> depends on CONFIG_VIRTFS.
>>>
>>> Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
>>> fix the './configure --without-default-devices --enable-xen' build:
>>>
>>>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
>>> `xen_be_register_common': hw/xen/xen-legacy-backend.c:754: undefined
>>> reference to `xen_9pfs_ops' /usr/bin/ld:
>>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to
>>> `local_ops' /usr/bin/ld:
>>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference
>>> to `synth_ops' /usr/bin/ld:
>>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference
>>> to `proxy_ops' collect2: error: ld returned 1 exit status
>>>
>>> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
>>> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
>>> Acked-by: Greg Kurz <groug@kaod.org>
>>> Tested-by: Greg Kurz <groug@kaod.org>
>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>
>> Acked-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
>>
> 
> Phil,
> 
> Same questioning as Connie. Do you intend to get this merged or should
> Christian or I take care of that ?

Same answer too =) If you are preparing a pull request, please go ahead!

Thanks,

Phil.
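The dependency chain named in the commit message (backends depend on CONFIG_FSDEV_9P, which itself depends on CONFIG_VIRTFS) can be sketched as a Kconfig fragment. This is illustrative only and may not match the literal contents of QEMU's Kconfig files:

```kconfig
# Illustrative sketch of the chain named in the commit message,
# not QEMU's actual Kconfig entries.
config VIRTFS
    bool

config FSDEV_9P
    bool
    depends on VIRTFS

# Backends then select/depend on FSDEV_9P; the stray CONFIG_9PFS
# symbol was never referenced anywhere, which is why removing it and
# using CONFIG_FSDEV_9P fixes the link failure.
```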



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 12:23:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 12:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19765.45103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeJ0-00038H-At; Thu, 05 Nov 2020 12:23:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19765.45103; Thu, 05 Nov 2020 12:23:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeJ0-00038A-7a; Thu, 05 Nov 2020 12:23:54 +0000
Received: by outflank-mailman (input) for mailman id 19765;
 Thu, 05 Nov 2020 12:23:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t/Oa=EL=kaod.org=groug@srs-us1.protection.inumbo.net>)
 id 1kaeIz-000385-3U
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 12:23:53 +0000
Received: from smtpout1.mo529.mail-out.ovh.net (unknown [178.32.125.2])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1169807e-1fa0-48f7-a66a-e466d77f328d;
 Thu, 05 Nov 2020 12:23:50 +0000 (UTC)
Received: from mxplan5.mail.ovh.net (unknown [10.109.156.217])
 by mo529.mail-out.ovh.net (Postfix) with ESMTPS id 632D06A9D11B;
 Thu,  5 Nov 2020 13:23:48 +0100 (CET)
Received: from kaod.org (37.59.142.98) by DAG8EX1.mxp5.local (172.16.2.71)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2044.4; Thu, 5 Nov 2020
 13:23:47 +0100
X-Inumbo-ID: 1169807e-1fa0-48f7-a66a-e466d77f328d
Authentication-Results: garm.ovh; auth=pass (GARM-98R002f4d6e030-ceec-415c-9026-00fee129f7cd,
                    5D04B6D4EAACA18D9EDEF493C42F41A1D3896549) smtp.auth=groug@kaod.org
Date: Thu, 5 Nov 2020 13:23:46 +0100
From: Greg Kurz <groug@kaod.org>
To: Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?= <philmd@redhat.com>
CC: Christian Schoenebeck <qemu_oss@crudebyte.com>, <qemu-devel@nongnu.org>,
	Fam Zheng <fam@euphon.net>, Thomas Huth <thuth@redhat.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, "Daniel P . Berrange"
	<berrange@redhat.com>, Matthew Rosato <mjrosato@linux.ibm.com>, "David
 Hildenbrand" <david@redhat.com>, Alex =?UTF-8?B?QmVubsOpZQ==?=
	<alex.bennee@linaro.org>, Cornelia Huck <cohuck@redhat.com>, "Wainer dos
 Santos Moschetta" <wainersm@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>, <qemu-s390x@nongnu.org>,
	<xen-devel@lists.xenproject.org>, Anthony Perard <anthony.perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, "Richard
 Henderson" <rth@twiddle.net>
Subject: Re: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem
 between 9pfs and Xen
Message-ID: <20201105132346.5e0adf94@bahia.lan>
In-Reply-To: <17370310-d69c-91ff-763d-52a1355ad605@redhat.com>
References: <20201104115706.3101190-1-philmd@redhat.com>
	<20201104115706.3101190-3-philmd@redhat.com>
	<8965407.pN9RvXrJQ9@silver>
	<20201104185439.41e9ddb3@bahia.lan>
	<17370310-d69c-91ff-763d-52a1355ad605@redhat.com>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-Originating-IP: [37.59.142.98]
X-ClientProxiedBy: DAG6EX1.mxp5.local (172.16.2.51) To DAG8EX1.mxp5.local
 (172.16.2.71)
X-Ovh-Tracer-GUID: b0c3b55b-f88a-4045-9165-58d4c97a7207
X-Ovh-Tracer-Id: 17209380080369375504
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgedujedruddtjedggedtucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecuqfggjfdpvefjgfevmfevgfenuceurghilhhouhhtmecuhedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhepfffhvffukfgjfhfogggtgfhisehtqhertdertdejnecuhfhrohhmpefirhgvghcumfhurhiiuceoghhrohhugheskhgrohgurdhorhhgqeenucggtffrrghtthgvrhhnpeevlefhtddufffhieevhefhleegleelgfetffetkedugeehjeffgfehhfefueduffenucfkpheptddrtddrtddrtddpfeejrdehledrudegvddrleeknecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmohguvgepshhmthhpqdhouhhtpdhhvghlohepmhigphhlrghnhedrmhgrihhlrdhovhhhrdhnvghtpdhinhgvtheptddrtddrtddrtddpmhgrihhlfhhrohhmpehgrhhouhhgsehkrghougdrohhrghdprhgtphhtthhopehrthhhsehtfihiuggulhgvrdhnvght

On Thu, 5 Nov 2020 13:15:59 +0100
Philippe Mathieu-Daudé <philmd@redhat.com> wrote:

> On 11/4/20 6:54 PM, Greg Kurz wrote:
> > On Wed, 04 Nov 2020 13:18:09 +0100
> > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > 
> >> On Mittwoch, 4. November 2020 12:57:04 CET Philippe Mathieu-Daudé wrote:
> >>> Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
> >>> CONFIG_9PFS (probably a wrong conflict resolution). This config is
> >>> not used anywhere. Backends depend on CONFIG_FSDEV_9P which itself
> >>> depends on CONFIG_VIRTFS.
> >>>
> >>> Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
> >>> fix the './configure --without-default-devices --enable-xen' build:
> >>>
> >>>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in function
> >>> `xen_be_register_common': hw/xen/xen-legacy-backend.c:754: undefined
> >>> reference to `xen_9pfs_ops' /usr/bin/ld:
> >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined reference to
> >>> `local_ops' /usr/bin/ld:
> >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined reference
> >>> to `synth_ops' /usr/bin/ld:
> >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined reference
> >>> to `proxy_ops' collect2: error: ld returned 1 exit status
> >>>
> >>> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> >>> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> >>> Acked-by: Greg Kurz <groug@kaod.org>
> >>> Tested-by: Greg Kurz <groug@kaod.org>
> >>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >>
> >> Acked-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> >>
> > 
> > Phil,
> > 
> > Same questioning as Connie. Do you intend to get this merged or should
> > Christian or I take care of that ?
> 
> Same answer too =) If you are preparing a pull request, please go ahead!
> 

Heh I've just seen your answer.

Christian,

Maybe you can add this patch in your next PR ?

> Thanks,
> 
> Phil.
> 



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 12:25:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 12:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19771.45118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeKh-0003Gz-Ob; Thu, 05 Nov 2020 12:25:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19771.45118; Thu, 05 Nov 2020 12:25:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeKh-0003Gs-Ld; Thu, 05 Nov 2020 12:25:39 +0000
Received: by outflank-mailman (input) for mailman id 19771;
 Thu, 05 Nov 2020 12:25:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaeKg-0003GD-GG
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 12:25:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10afa1db-c2be-4bd5-926c-b300a79dfc6b;
 Thu, 05 Nov 2020 12:25:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaeKY-0007QB-H0; Thu, 05 Nov 2020 12:25:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kaeKY-000830-7g; Thu, 05 Nov 2020 12:25:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kaeKY-0005os-7A; Thu, 05 Nov 2020 12:25:30 +0000
X-Inumbo-ID: 10afa1db-c2be-4bd5-926c-b300a79dfc6b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MEiMR3yJSfmQKH2mN1M6SsY7TABegH/aQ/rk/PWW/bI=; b=Wx9eSbYrwFe/ljOR93PMYd9JLG
	SCaNvhxN1d5gqA8xYdRNYDjXI2YId0rQJ5jbr4bvPeXY4+dMrm3bbSMrfOZXAddyKO9FJvtdMEOgk
	KGJu7k4chi4JvUO3r/djAI0+balBiJKMuHxV+/RHDjhd3ZzmDINRcLZU1CRDFCJeU4L8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156409-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156409: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    xen-4.12-testing:build-arm64:xen-build:fail:regression
    xen-4.12-testing:build-amd64:xen-build:fail:regression
    xen-4.12-testing:build-amd64-prev:xen-build:fail:regression
    xen-4.12-testing:build-arm64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-amd64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-armhf:xen-build:fail:regression
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:build-i386-libvirt:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-shadow:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-raw:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-pair:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-livepatch:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-migrupgrade:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-pair:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):starved:nonblocking
    xen-4.12-testing:build-i386-prev:hosts-allocate:starved:nonblocking
    xen-4.12-testing:build-i386-xsm:hosts-allocate:starved:nonblocking
    xen-4.12-testing:build-i386:hosts-allocate:starved:nonblocking
    xen-4.12-testing:build-i386-pvops:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=4f9294d21c47415376215d68a0298e88582b8e7a
X-Osstest-Versions-That:
    xen=97b7b5567fba6918a656ad349051b5343b5dea2e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 12:25:30 +0000

flight 156409 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156409/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 156358
 build-amd64                   6 xen-build                fail REGR. vs. 156358
 build-amd64-prev              6 xen-build                fail REGR. vs. 156358
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156358
 build-amd64-xsm               6 xen-build                fail REGR. vs. 156358
 build-armhf                   6 xen-build                fail REGR. vs. 156358

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         starved n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  starved n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      starved n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) starved n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              starved n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              starved n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              starved n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               starved  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               starved  n/a
 build-i386-libvirt            1 build-check(1)               starved  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               starved  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               starved  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt       1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               starved  n/a
 test-amd64-i386-livepatch     1 build-check(1)               starved  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               starved  n/a
 test-amd64-i386-pair          1 build-check(1)               starved  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               starved n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             starved n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               starved n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             starved n/a
 test-amd64-i386-xl            1 build-check(1)               starved  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               starved  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         starved n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      starved n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              starved n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              starved n/a
 build-i386-prev               2 hosts-allocate               starved  n/a
 build-i386-xsm                2 hosts-allocate               starved  n/a
 build-i386                    2 hosts-allocate               starved  n/a
 build-i386-pvops              2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  4f9294d21c47415376215d68a0298e88582b8e7a
baseline version:
 xen                  97b7b5567fba6918a656ad349051b5343b5dea2e

Last test of basis   156358  2020-11-02 08:38:03 Z    3 days
Testing same since   156398  2020-11-04 09:06:02 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               starved 
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   starved 
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           starved 
 build-amd64-prev                                             fail    
 build-i386-prev                                              starved 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             starved 
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            starved 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         starved 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  starved 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  starved 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  starved 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       starved 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           starved 
 test-amd64-i386-qemuu-rhel6hvm-amd                           starved 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     starved 
 test-amd64-i386-freebsd10-amd64                              starved 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          starved 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          starved 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          starved 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          starved 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          starved 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         starved 
 test-amd64-i386-freebsd10-i386                               starved 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         starved 
 test-amd64-i386-qemuu-rhel6hvm-intel                         starved 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      starved 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    starved 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  starved 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         starved 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 starved 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    starved 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       starved 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              starved 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    starved 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4f9294d21c47415376215d68a0298e88582b8e7a
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Wed Nov 4 09:36:36 2020 +0100

    SUPPORT.md: Desupport qemu trad except stub dm
    
    While investigating XSA-335 we discovered that many upstream security
    fixes were missing.  It is not practical to backport them.  There is
    no good reason to be running this very ancient version of qemu, except
    that it is the only way to run a stub dm which is currently supported
    by upstream.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 8587160b3e2951b722d395a0346bb17c3c22152f
    master date: 2020-11-04 09:22:37 +0100
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 12:29:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 12:29:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19776.45129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeNm-0003T7-CH; Thu, 05 Nov 2020 12:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19776.45129; Thu, 05 Nov 2020 12:28:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeNm-0003T0-9D; Thu, 05 Nov 2020 12:28:50 +0000
Received: by outflank-mailman (input) for mailman id 19776;
 Thu, 05 Nov 2020 12:28:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mRUF=EL=crudebyte.com=qemu_oss@srs-us1.protection.inumbo.net>)
 id 1kaeNk-0003Sq-4F
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 12:28:48 +0000
Received: from lizzy.crudebyte.com (unknown [91.194.90.13])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59d1186d-e553-43cb-bc94-407a058bac8e;
 Thu, 05 Nov 2020 12:28:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=crudebyte.com; s=lizzy; h=Content-Type:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:
	Content-ID:Content-Description;
	bh=YPq/3yfFXLfPvAw4wtoOounGC9mB9vXYDsxqJBUvGTo=; b=Che2Ca+e11VJP8CmIRlpunNLMr
	qRQYJY4aEOG0sW7s1/7HUUp1XpWOrnMQhEb929HTWWJVhd+a4HkpskarxDIq0Mx5NJDMkLSzlv4VU
	yFcXwTsWK9Zs1Mj3kZL1g8mRojrX7WB/Uy3pxPoLLQGo/c55dsUS2KFv/49dHaSDSc5JNGbkRljuE
	qAnqlj3I9P12l8yLegvSKpZyeBET7x7v9yXLWzM3e8xfxSyep95AX6+4VfUDDJoCdPoah94WEtGnQ
	Olx7AccVhzmuG0AHKIG/Nk/isz/Zgy/ZfRPtwS4tbj5DgBKVrZ9xU6aBL0JTHG3q1xebPNdtoNHOe
	kF4G/s6Q==;
From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: Greg Kurz <groug@kaod.org>
Cc: Philippe Mathieu-Daudé <philmd@redhat.com>, qemu-devel@nongnu.org, Fam Zheng <fam@euphon.net>, Thomas Huth <thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, "Daniel P . Berrange" <berrange@redhat.com>, Matthew Rosato <mjrosato@linux.ibm.com>, David Hildenbrand <david@redhat.com>, Alex Bennée <alex.bennee@linaro.org>, Cornelia Huck <cohuck@redhat.com>, Wainer dos Santos Moschetta <wainersm@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org, xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, Richard Henderson <rth@twiddle.net>
Subject: Re: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem between 9pfs and Xen
Date: Thu, 05 Nov 2020 13:28:31 +0100
Message-ID: <2140852.vo20GZeEQY@silver>
In-Reply-To: <20201105132346.5e0adf94@bahia.lan>
References: <20201104115706.3101190-1-philmd@redhat.com> <17370310-d69c-91ff-763d-52a1355ad605@redhat.com> <20201105132346.5e0adf94@bahia.lan>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"

On Thursday, 5 November 2020 13:23:46 CET Greg Kurz wrote:
> On Thu, 5 Nov 2020 13:15:59 +0100
>
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > On 11/4/20 6:54 PM, Greg Kurz wrote:
> > > On Wed, 04 Nov 2020 13:18:09 +0100
> > >
> > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > >> On Wednesday, 4 November 2020 12:57:04 CET Philippe Mathieu-Daudé wrote:
> > >>> Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
> > >>> CONFIG_9PFS (probably a wrong conflict resolution). This config is
> > >>> not used anywhere. Backends depend on CONFIG_FSDEV_9P which itself
> > >>> depends on CONFIG_VIRTFS.
> > >>>
> > >>> Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
> > >>> fix the './configure --without-default-devices --enable-xen' build:
> > >>>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in
> > >>>   function
> > >>> `xen_be_register_common': hw/xen/xen-legacy-backend.c:754: undefined
> > >>> reference to `xen_9pfs_ops' /usr/bin/ld:
> > >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined
> > >>> reference to
> > >>> `local_ops' /usr/bin/ld:
> > >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined
> > >>> reference
> > >>> to `synth_ops' /usr/bin/ld:
> > >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined
> > >>> reference
> > >>> to `proxy_ops' collect2: error: ld returned 1 exit status
> > >>>
> > >>> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> > >>> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > >>> Acked-by: Greg Kurz <groug@kaod.org>
> > >>> Tested-by: Greg Kurz <groug@kaod.org>
> > >>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > >>
> > >> Acked-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > >
> > > Phil,
> > >
> > > Same questioning as Connie. Do you intend to get this merged or should
> > > Christian or I take care of that ?
> >
> > Same answer too =) If you are preparing a pull request, please go ahead!
>
> Heh I've just seen your answer.
>
> Christian,
>
> Maybe you can add this patch in your next PR ?

Yes, I will prepare a 9p PR today anyway, so I will include this patch.

Best regards,
Christian Schoenebeck




From xen-devel-bounces@lists.xenproject.org Thu Nov 05 12:54:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 12:54:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19788.45151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeml-00064U-GZ; Thu, 05 Nov 2020 12:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19788.45151; Thu, 05 Nov 2020 12:54:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeml-00064N-Bw; Thu, 05 Nov 2020 12:54:39 +0000
Received: by outflank-mailman (input) for mailman id 19788;
 Thu, 05 Nov 2020 12:54:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hyP4=EL=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kaemj-00064I-EO
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 12:54:38 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f479eb69-954e-471a-8ff1-bc2432bde4a6;
 Thu, 05 Nov 2020 12:54:36 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id h62so1504242wme.3
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 04:54:36 -0800 (PST)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id m12sm2468188wrs.92.2020.11.05.04.54.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 04:54:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=iR+CVMR9Igk+i7q6zQ360QpQoQUWwq9hKMxFO8Y7cGA=;
        b=bPPrnh4BjIGHxssI9in0eI8avIf6lVSA1Fd+jmaHNuJRYj4GLesWwatvAf+cWdJ1Gn
         +T1a9zT5WiAc7uq+YkZ1UAi9uQqINJcLFkN5pbawGR6HDSkkCEunVz0GKT5BPbCo522L
         G94YF8sd+EF3xMH5d9NYF5qFQjmBr4VXc5pXs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=iR+CVMR9Igk+i7q6zQ360QpQoQUWwq9hKMxFO8Y7cGA=;
        b=Q3ISPAn0Fk5bZU0tEb53lAnqIsght5oC6wa+wtD3fkx3h12NPA1N46Sz3iBvo7yTeQ
         Ep4i6r1VocrpKOXwazRtAoqAouPTfPP+TDrMt8YjkZfKJdpSh4b9YWsXLhjGHE2LCqYq
         qiLQVAF9xOi6JrolTkF7AFvoHz49X9UC1Y1lcBKzXX+WKnM0GZU8u2mj6i/smCuiGWMC
         xBTbptV6f/z9bjxaU+qTZS3JpNecTniLQFJmBPsKZ7oDvvOnV1BW/UOAF/zXJuZiKMsa
         se6mW4sXrCktHbM0Z6miG21jcpuS45rsF2nlrhZ0AqCO2ZEwWyuYxlptcPTzTji5TrG0
         Nr/Q==
X-Gm-Message-State: AOAM532l/jX/UM+2Bygqx6baeyEz6GSqAD6ATdQT2kFlidIs3n5tuHb/
	XlBr+i/sLJBBSZjNNN4ODRHh3A==
X-Google-Smtp-Source: ABdhPJwrW3OjL7kR/bAr74WjxI8azvpsj46DeaIlccjHcQq2wan765xbIbWOOKK6+3jptMrcJuyztQ==
X-Received: by 2002:a1c:6843:: with SMTP id d64mr2670603wmc.131.1604580875284;
        Thu, 05 Nov 2020 04:54:35 -0800 (PST)
Date: Thu, 5 Nov 2020 13:54:31 +0100
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Linus Walleij <linus.walleij@linaro.org>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>, Dave Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>, Sam Ravnborg <sam@ravnborg.org>,
	Alex Deucher <alexander.deucher@amd.com>,
	Christian König <christian.koenig@amd.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Lucas Stach <l.stach@pengutronix.de>, linux+etnaviv@armlinux.org.uk,
	Christian Gmeiner <christian.gmeiner@gmail.com>,
	Inki Dae <inki.dae@samsung.com>,
	Joonyoung Shim <jy0922.shim@samsung.com>,
	Seung-Woo Kim <sw0312.kim@samsung.com>,
	Kyungmin Park <kyungmin.park@samsung.com>,
	Kukjin Kim <kgene@kernel.org>,
	Krzysztof Kozlowski <krzk@kernel.org>, yuq825@gmail.com,
	Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>,
	Tomeu Vizoso <tomeu.vizoso@collabora.com>, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, Sandy Huang <hjc@rock-chips.com>,
	Heiko Stübner <heiko@sntech.de>,
	Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>,
	Eric Anholt <eric@anholt.net>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	ray.huang@amd.com, Sumit Semwal <sumit.semwal@linaro.org>,
	Emil Velikov <emil.velikov@collabora.com>, luben.tuikov@amd.com,
	apaneers@amd.com, melissa.srw@gmail.com,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Qinglang Miao <miaoqinglang@huawei.com>,
	"open list:DRM PANEL DRIVERS" <dri-devel@lists.freedesktop.org>,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
	xen-devel@lists.xenproject.org,
	Linux Media Mailing List <linux-media@vger.kernel.org>,
	linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment
 interfaces
Message-ID: <20201105125431.GW401619@phenom.ffwll.local>
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-10-tzimmermann@suse.de>
 <CACRpkdbvGWKo8y323actUJn9xXmxpgDw1EKLiPH4RqB_kFx=XQ@mail.gmail.com>
 <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <27acbd7e-d72e-4e05-c147-b50f56e21589@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Thu, Nov 05, 2020 at 11:37:08AM +0100, Thomas Zimmermann wrote:
> Hi
> 
> Am 05.11.20 um 11:07 schrieb Linus Walleij:
> > Overall I like this, just an inline question:
> > 
> > On Tue, Oct 20, 2020 at 2:20 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > 
> >> To do framebuffer updates, one needs memcpy from system memory and a
> >> pointer-increment function. Add both interfaces with documentation.
> > 
> > (...)
> >> +/**
> >> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> >> + * @dst:       The dma-buf mapping structure
> >> + * @src:       The source buffer
> >> + * @len:       The number of bytes in src
> >> + *
> >> + * Copies data into a dma-buf mapping. The source buffer is in system
> >> + * memory. Depending on the buffer's location, the helper picks the correct
> >> + * method of accessing the memory.
> >> + */
> >> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> >> +{
> >> +       if (dst->is_iomem)
> >> +               memcpy_toio(dst->vaddr_iomem, src, len);
> >> +       else
> >> +               memcpy(dst->vaddr, src, len);
> >> +}
> > 
> > Are these going to be really big memcpy() operations?
> 
> Individually, each could be a scanline, so a few KiB. (4 bytes *
> horizontal resolution). Updating a full framebuffer can sum up to
> several MiB.
> 
> > 
> > Some platforms have DMA offload engines that can perform memcpy();
> > see drivers/dma and include/linux/dmaengine.h. They could be useful
> > especially if the CPU doesn't really need to touch the contents
> > and flush caches etc.
> > An example exist in some MTD drivers that move large quantities of
> > data off flash memory like this:
> > drivers/mtd/nand/raw/cadence-nand-controller.c
> > 
> > Notice that DMAengine and DMAbuf do not have much in common;
> > the names can be deceiving.
> > 
> > The value of this varies with the system architecture. It is not just
> > a question about performance but also about power and the CPU
> > being able to do other stuff in parallel for large transfers. So *when*
> > to use this facility to accelerate memcpy() is a delicate question.
> > 
> > What I'm after here is: if these can be really big, do we want (in
> > the long run, not now) to open up to the idea of slotting in
> > hardware-accelerated memcpy() here?
> 
> We currently use this functionality for the graphical framebuffer
> console that most DRM drivers provide. It's non-accelerated and slow,
> but this has not been much of a problem so far.
> 
> Within DRM, we're more interested in removing console code from drivers
> and going for the generic implementation.
> 
> Most of the graphics HW allocates framebuffers from video RAM, system
> memory or CMA pools and does not really need these memcpys. Only a few
> systems with small video RAM require a shadow buffer, which we flush
> into VRAM as needed. Those might benefit.
> 
> OTOH, off-loading memcpys to hardware sounds reasonable if we can hide
> it from the DRM code. I think it all depends on how invasive that change
> would be.
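For context, the helper being discussed can be modelled outside the kernel tree. The sketch below is an illustrative user-space approximation, not the kernel code: the anonymous union mirrors struct dma_buf_map from the patch, plain memcpy() stands in for memcpy_toio() (which only exists in kernel context), and scanline_bytes() is a hypothetical helper that just encodes the "4 bytes * horizontal resolution" estimate quoted above.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * User-space model of the kernel's struct dma_buf_map (illustration
 * only): a mapping is either a plain system-memory pointer or an
 * I/O-memory pointer, discriminated by is_iomem.
 */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* stand-in for void __iomem * */
		void *vaddr;
	};
	bool is_iomem;
};

/*
 * Model of dma_buf_map_memcpy_to(): pick the access method based on
 * where the buffer lives. In the kernel the iomem branch must use
 * memcpy_toio(); plain memcpy() stands in for it here.
 */
static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst,
					 const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len); /* kernel: memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}

/* One XRGB8888 scanline: 4 bytes per pixel times the horizontal resolution. */
static inline size_t scanline_bytes(size_t hres)
{
	return 4 * hres;
}
```

At 1920 pixels across, scanline_bytes(1920) gives 7680 bytes per copy, and flushing a full 1920x1080 shadow buffer moves roughly 8 MiB, in line with the "several MiB" estimate above.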

I wouldn't; all the additional locks this would pull in sound like a
nightmare. And when an oops happens, this might be the only thing that
manages to get the oops to the user.

Unless someone really starts caring about fbcon acceleration, I really
wouldn't bother. OK, maybe it also matters for fbdev, but the problem is
that the page-fault interception alone is already expensive, so the only
real solution, if you care about performance in that case, is to use KMS
natively with a dirty-rectangle flip (or the DIRTY syscall).

And in there drivers should (and do) use any dma engines they have to
upload the frames already.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 13:02:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 13:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19796.45169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeug-00071f-Bx; Thu, 05 Nov 2020 13:02:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19796.45169; Thu, 05 Nov 2020 13:02:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaeug-00071Y-8j; Thu, 05 Nov 2020 13:02:50 +0000
Received: by outflank-mailman (input) for mailman id 19796;
 Thu, 05 Nov 2020 13:02:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nmck=EL=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kaeue-00071T-VK
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 13:02:48 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5eccbb7e-be2b-4b6f-8f21-79a82d86c2e0;
 Thu, 05 Nov 2020 13:02:45 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id g12so1658481wrp.10
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 05:02:45 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id r3sm2379129wrm.51.2020.11.05.05.02.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 05:02:44 -0800 (PST)
X-Inumbo-ID: 5eccbb7e-be2b-4b6f-8f21-79a82d86c2e0
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=T8s84WHZg6qe2FOtBV1gA8ngLlQvSa45H08E1gw4jjM=;
        b=F4rwTzYzKF/osyfQ2PQbpFmM6/IVtnKVdzZ8bPHu9yonMKDwKHT1GdbzbUhW1YRDiC
         M1tgakwQhoUwzsx2mSSx9pPCuI76TxYF2ag9cDk7tFNdxCxajRkxt9h8o/JPlTD9Q5s7
         Mw+YmbDU3+GP3Y6bxS2SeBCikQFSCvf/Aw7B/QtsqpdRbftnTRYxr37AEc4aEra2n6l/
         l2SbD/LmQuf7kebmpQkREYFCQbDjtHX6ZUaNgpHVvimB55cV0qt0rHOFBPjNmzVQZu7Y
         NqUZ058mxfWVPWwkyoXInMP+oGmUir7nbZ/QoJGc6tsaRcDi/iiuZixSgOhTtPIgNqZB
         flLg==
X-Gm-Message-State: AOAM531wokYhA9TP0XSKVjy0KcDEg3B9R6Ift9LE86uWm3Z8UiHq331S
	RZg1YvgzZuCiiJElMIhfZm8=
X-Google-Smtp-Source: ABdhPJz6bjocfIpNYXUJM/ZHuszdISGCmatqMCEfgS4fmW/nTwAyc5Ra6rFhikIFpuowEUI0dRdR1A==
X-Received: by 2002:adf:f74e:: with SMTP id z14mr2795110wrp.312.1604581364973;
        Thu, 05 Nov 2020 05:02:44 -0800 (PST)
Date: Thu, 5 Nov 2020 13:02:42 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH] libxl: fix libacpi dependency
Message-ID: <20201105130242.e7hwl7v6ktlkbbkn@liuwe-devbox-debian-v2>
References: <bd68e8f4-ce57-7798-f6d2-53e85319b8d4@suse.com>
 <f2172d3f-38fc-7f9f-9b31-2c07a1686cff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <f2172d3f-38fc-7f9f-9b31-2c07a1686cff@suse.com>
User-Agent: NeoMutt/20180716

On Thu, Nov 05, 2020 at 12:56:46PM +0100, Jan Beulich wrote:
> On 27.10.2020 12:40, Jan Beulich wrote:
> > $(DSDT_FILES-y) depends on the recursive make having run in libacpi/
> > such that the file(s) themselves are generated before compilation is
> > attempted. The same, however, is also necessary for generated headers,
> > before source files that include them get compiled.
> > 
> > The dependency specified in libacpi's Makefile, otoh, is entirely
> > pointless nowadays - no compilation happens there anymore (except for
> > tools involved in building the generated files). Together with it, the
> > rule generating acpi.a can also go away.
> > 
> > Reported-by: Olaf Hering <olaf@aepfle.de>
> > Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'd appreciate a libxl side ack (or otherwise) here.

Acked-by: Wei Liu <wl@xen.org>

Feel free to commit this yourself.


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 14:32:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 14:32:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19826.45235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kagJF-0006Qs-HP; Thu, 05 Nov 2020 14:32:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19826.45235; Thu, 05 Nov 2020 14:32:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kagJF-0006Ql-E0; Thu, 05 Nov 2020 14:32:17 +0000
Received: by outflank-mailman (input) for mailman id 19826;
 Thu, 05 Nov 2020 14:32:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xUX6=EL=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kagJF-0006Qg-2g
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 14:32:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ee3bf18-3bb5-49ec-a471-00b6932875b9;
 Thu, 05 Nov 2020 14:32:15 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kagJD-0001dL-LY
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 14:32:15 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kagJD-0002U9-IA
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 14:32:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kagJA-0003gA-2A; Thu, 05 Nov 2020 14:32:12 +0000
X-Inumbo-ID: 4ee3bf18-3bb5-49ec-a471-00b6932875b9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Subject:CC:To:Date:Message-ID:
	Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=FWf/ST26ihLwGN6j28QWXa9Y+j9p543ZO7qTuQScjWU=; b=DRmw9/KI0AKDNKzFZ85SIvzi2o
	Wr2NvmacBuf48Cb1O5w4bNqWtZmvb03j9XAi20lSyv+oP7mcUa/1XX3xqA9hW08HP9z7O5bYx0NY6
	65w1koRA+yiTcSe9V9FEa/pAlx13trLbLWyM4N884zDvRdBEVt+w5QdyotQTFy93viLY=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24484.3307.827094.796027@mariner.uk.xensource.com>
Date: Thu, 5 Nov 2020 14:32:11 +0000
To: committers@xenproject.org
CC: xen-devel@lists.xenproject.org
Subject: osstest downtime for hardware work
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

We think we have all the pieces now for a bunch of long-awaited
hardware work in the Massachusetts colo.

There will be several days of complete stoppage.  If anyone has
opinions about good or bad times, please let me know.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 15:07:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 15:07:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19846.45279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kagqt-0000rk-KA; Thu, 05 Nov 2020 15:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19846.45279; Thu, 05 Nov 2020 15:07:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kagqt-0000rd-GH; Thu, 05 Nov 2020 15:07:03 +0000
Received: by outflank-mailman (input) for mailman id 19846;
 Thu, 05 Nov 2020 15:07:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mRUF=EL=crudebyte.com=qemu_oss@srs-us1.protection.inumbo.net>)
 id 1kagqs-0000rX-GH
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 15:07:02 +0000
Received: from lizzy.crudebyte.com (unknown [91.194.90.13])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6398b5fb-6989-4d76-8022-30c9ff95a1e3;
 Thu, 05 Nov 2020 15:07:01 +0000 (UTC)
X-Inumbo-ID: 6398b5fb-6989-4d76-8022-30c9ff95a1e3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=crudebyte.com; s=lizzy; h=Content-Type:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:
	Content-ID:Content-Description;
	bh=J80vtLgb+JbeRkicM5+Hda5f+1nTevLNRPjcN23fEMA=; b=gmBm+m8zcHwMHHa2fbyf7qSnpF
	VqVI9pndK0nmSyWd1OcYVa+E4ZG3spmnE6McjnIX7pDMi1/tkaq49cxQNyXHli4VPovL98u6KEzoL
	lhYH/Pq+xgBOQX6J2SOgcaaLLJhUGOvhrM6r3kZJI7hAeRxPMSkDH/tr6p9oRxFfq9QByss84EBv+
	z20eAc/cng1Ga7DI4JxnB0WoxobrUQnw/76hQgm3JkvfLLqBXjBBEexcVE+F4WEFJS+jta74cbxzM
	V5h9ikuPEZGVlopDEH9OTHCdzNCvmWLe1bfKu6wWWbiJItsB1r0w84DDsfsSgu2eVsDLfjIRboOPU
	H5cePrkw==;
From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: Greg Kurz <groug@kaod.org>, Philippe Mathieu-Daudé <philmd@redhat.com>, Fam Zheng <fam@euphon.net>, Thomas Huth <thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, "Daniel P . Berrange" <berrange@redhat.com>, Matthew Rosato <mjrosato@linux.ibm.com>, David Hildenbrand <david@redhat.com>, Alex Bennée <alex.bennee@linaro.org>, Cornelia Huck <cohuck@redhat.com>, Wainer dos Santos Moschetta <wainersm@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, qemu-s390x@nongnu.org, xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, Richard Henderson <rth@twiddle.net>
Subject: Re: [PATCH-for-5.2 v3 2/4] hw/9pfs: Fix Kconfig dependency problem between 9pfs and Xen
Date: Thu, 05 Nov 2020 16:06:45 +0100
Message-ID: <401148579.MYj8lGMC4g@silver>
In-Reply-To: <2140852.vo20GZeEQY@silver>
References: <20201104115706.3101190-1-philmd@redhat.com> <20201105132346.5e0adf94@bahia.lan> <2140852.vo20GZeEQY@silver>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"

On Donnerstag, 5. November 2020 13:28:31 CET Christian Schoenebeck wrote:
> On Donnerstag, 5. November 2020 13:23:46 CET Greg Kurz wrote:
> > On Thu, 5 Nov 2020 13:15:59 +0100
> >
> > Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> > > On 11/4/20 6:54 PM, Greg Kurz wrote:
> > > > On Wed, 04 Nov 2020 13:18:09 +0100
> > > >
> > > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > >> On Mittwoch, 4. November 2020 12:57:04 CET Philippe Mathieu-Daudé
> 
> wrote:
> > > >>> Commit b2c00bce54c ("meson: convert hw/9pfs, cleanup") introduced
> > > >>> CONFIG_9PFS (probably a wrong conflict resolution). This config is
> > > >>> not used anywhere. Backends depend on CONFIG_FSDEV_9P which itself
> > > >>> depends on CONFIG_VIRTFS.
> > > >>>
> > > >>> Remove the invalid CONFIG_9PFS and use CONFIG_FSDEV_9P instead, to
> > > >>>
> > > >>> fix the './configure --without-default-devices --enable-xen' build:
> > > >>>   /usr/bin/ld: libcommon.fa.p/hw_xen_xen-legacy-backend.c.o: in
> > > >>>   function
> > > >>>
> > > >>> `xen_be_register_common': hw/xen/xen-legacy-backend.c:754: undefined
> > > >>> reference to `xen_9pfs_ops' /usr/bin/ld:
> > > >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x8): undefined
> > > >>> reference to
> > > >>> `local_ops' /usr/bin/ld:
> > > >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x20): undefined
> > > >>> reference
> > > >>> to `synth_ops' /usr/bin/ld:
> > > >>> libcommon.fa.p/fsdev_qemu-fsdev.c.o:(.data.rel+0x38): undefined
> > > >>> reference
> > > >>> to `proxy_ops' collect2: error: ld returned 1 exit status
> > > >>>
> > > >>> Fixes: b2c00bce54c ("meson: convert hw/9pfs, cleanup")
> > > >>> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > > >>> Acked-by: Greg Kurz <groug@kaod.org>
> > > >>> Tested-by: Greg Kurz <groug@kaod.org>
> > > >>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > > >>
> > > >> Acked-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > >
> > > > Phil,
> > > >
> > > > Same questioning as Connie. Do you intend to get this merged or should
> > > > Christian or I take care of that ?
> > >
> > > Same answer too =) If you are preparing a pull request, please go ahead!
> >
> > Heh I've just seen your answer.
> >
> > Christian,
> >
> > Maybe you can add this patch in your next PR ?
> 
> Yes, I will prepare a 9p PR today anyway, so I will include this patch.
> 
> Best regards,
> Christian Schoenebeck

Queued on 9p.next:
https://github.com/cschoenebeck/qemu/commits/9p.next

Thanks!

Best regards,
Christian Schoenebeck




From xen-devel-bounces@lists.xenproject.org Thu Nov 05 15:45:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 15:45:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19872.45325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahS8-0004SK-O2; Thu, 05 Nov 2020 15:45:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19872.45325; Thu, 05 Nov 2020 15:45:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahS8-0004SD-KN; Thu, 05 Nov 2020 15:45:32 +0000
Received: by outflank-mailman (input) for mailman id 19872;
 Thu, 05 Nov 2020 15:45:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kahS7-0004RZ-7X
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 15:45:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6f6c81b-a54d-4b5e-af5e-0ff53ce6210a;
 Thu, 05 Nov 2020 15:45:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kahRz-000397-OJ; Thu, 05 Nov 2020 15:45:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kahRz-0003II-F4; Thu, 05 Nov 2020 15:45:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kahRz-000842-EX; Thu, 05 Nov 2020 15:45:23 +0000
X-Inumbo-ID: b6f6c81b-a54d-4b5e-af5e-0ff53ce6210a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Ch7YjUW13MFug06uv68XoT0DEi5CwXPNHi+7OjLbcc=; b=BhA4LRZgTcb3Pq+fUXq2u0XzIL
	d/yFSVFLXgNqoeyY+W7y7ruKhLtM9lNnLRM1pxJoVvxeB2reXGMFmUHmb//KzQLE3iAkSyxZ+XPEJ
	8UphMmOneGg3dRsRX+wEf8XiYq9lgpEPGScdozzxVTr/WPMzvpnZ0eSryyA7fzmQQHp8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156401-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156401: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 15:45:23 +0000

flight 156401 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156401/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156389
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156389
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156389
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156389
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156389
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156389
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156389
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156389
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156389
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156389
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156389
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156389  2020-11-04 01:51:29 Z    1 days
Testing same since   156401  2020-11-04 12:37:40 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7056f2f89f..9ff9705647  9ff9705647646aa937b5f5c1426a64c69a62b3bd -> master


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 15:47:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 15:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19876.45337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahTu-0004af-8z; Thu, 05 Nov 2020 15:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19876.45337; Thu, 05 Nov 2020 15:47:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahTu-0004aY-60; Thu, 05 Nov 2020 15:47:22 +0000
Received: by outflank-mailman (input) for mailman id 19876;
 Thu, 05 Nov 2020 15:47:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BrWA=EL=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kahTs-0004aQ-Vb
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 15:47:21 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3474a805-f8fe-4f4c-bfac-b91a82f71e78;
 Thu, 05 Nov 2020 15:47:19 +0000 (UTC)
X-Inumbo-ID: 3474a805-f8fe-4f4c-bfac-b91a82f71e78
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604591239;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=b6odJvh8DlCnhNfFQ3oF4vb47s500VoLN4BYBUdo5pY=;
  b=G8rMPk8Ltwr9K8QIwDMm/0t8qY012SSR1E7C97ZMR7s5a+vuXH0QfvGg
   NFuh0Y2wUUIIxS90+A9UEzeIywiZO1V7Zfz4zEmt8IlAFouwZKRQGbeh2
   aIQd6m0vPpkUTyKYzCGQRTdw2A19+kaqiDcyOaVpu9cykntq+fKnumKTv
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: knRTjxOZvHCQKunLra8HzW35lbu22WBeWpLW61Z9kY5RqiNZ2Amik6DAszNAw/3lx1YIYn0OkV
 0VWy/8UwwqlObtibf3J3idpj6opkNoU2mxDLvQnmBEvC8I4wETTklJgxy4I8Ay3azqS5QLwah9
 N13TtusNNJyHPvtDM2pupBJ2jrVEhVsSQLDa3uWqM9wmLJ2AbrEQGY4o0HuUxHGilb/33qoFdV
 0B9zMQoF6yBZ97IXtk1HdDyuBgtFxLboqb/ttvhImCkc5ziZQUgORsc8huwQH8tJxBFmEN2Kcf
 0rU=
X-SBRS: None
X-MesageID: 30892731
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,453,1596513600"; 
   d="scan'208";a="30892731"
Date: Thu, 5 Nov 2020 15:41:47 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: Takahiro Akashi <takahiro.akashi@linaro.org>, Alex Bennée
	<alex.bennee@linaro.org>, Masami Hiramatsu <masami.hiramatsu@linaro.org>,
	<ian.jackson@eu.citrix.com>, <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: BUG: libxl vuart build order
Message-ID: <20201105154147.GJ2214@perard.uk.xensource.com>
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com>
 <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s>
 <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa>
 <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa>
 <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s>
 <20201030025157.GA18567@laputa>
 <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>

On Fri, Oct 30, 2020 at 10:46:37AM -0700, Stefano Stabellini wrote:
> On Fri, 30 Oct 2020, Takahiro Akashi wrote:
> > === after "xl console -t vuart" ===
> > U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
> > 
> > Xen virtual CPU
> > Model: XENVM-4.15
> > DRAM:  128 MiB
> > 
> > In:    sbsa-pl011
> > Out:   sbsa-pl011
> > Err:   sbsa-pl011
> > ===
> > 
> > If possible, I hope that the "xl create -c" command would accept a "-t vuart"
> > option (or that it would automatically select the uart type from the config).
> 
> I think a patch to add the "-t" option to "xl create" would be
> acceptable, right Anthony?

I don't know. Why isn't `xl' able to select the vuart as the default one?

Maybe a long option would be better in cases where we would like to
connect to a "secondary" console? I could see `xl create --console=vuart'
being fine; I don't know whether that's possible.

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 15:49:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 15:49:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19882.45349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahW7-0004mO-MG; Thu, 05 Nov 2020 15:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19882.45349; Thu, 05 Nov 2020 15:49:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahW7-0004mH-Iz; Thu, 05 Nov 2020 15:49:39 +0000
Received: by outflank-mailman (input) for mailman id 19882;
 Thu, 05 Nov 2020 15:49:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BrWA=EL=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kahW6-0004mA-MW
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 15:49:38 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 792c0c55-3b5e-48f5-8fe0-4ef12f01a4a1;
 Thu, 05 Nov 2020 15:49:37 +0000 (UTC)
X-Inumbo-ID: 792c0c55-3b5e-48f5-8fe0-4ef12f01a4a1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604591377;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=HSO5V/NnPVdK2gbwb/SgQFXpIENruT650/i5YptNuHI=;
  b=AaVRj+aknzntId9OOcnkYfcxtPkqgvp7yt1U6pqtEKNy7IUsR/nzj0ei
   3k39p6FNMC/13LtYciSRE8+TUDVyLhL+qgcvL/mdBQTFgcipTrd9UaG5D
   buAVb/SInRN6CUKnTx+BTvduWyx+htpp4nPbFpUsrqzgzsKtz0SNiE+lB
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: FRgFSROrUrMjH/hztCUYAkc/jqOVljfGZIB4e7Wk3A7tN9Z3dEjgvM1kRupO56CjASnJUoSlLQ
 OfaAhRxqTqWJE4brvquIjBvsc60gURjFQXvReuQ9J3J4MhQETglWPzZzdElp5iQowhtOvjQQ20
 wDvw1tRQzwpkxeaYgXqW6mWSZToogQlhzpyiLgAlwSDAa47Md1BXbOSks6REdSkGxQ/cMewJmM
 fx6F5oQ/GBhbMdy1pgLSFiApddEoK5m8VE0mMQbp3cdq1qbww8iOF+Erk3Ox5EOTXodiIDOZpK
 aRI=
X-SBRS: None
X-MesageID: 30795796
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,453,1596513600"; 
   d="scan'208";a="30795796"
Date: Thu, 5 Nov 2020 15:49:10 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xinhao Zhang <zhangxinhao1@huawei.com>
CC: <qemu-devel@nongnu.org>, <xen-devel@lists.xenproject.org>,
	<sstabellini@kernel.org>, <paul@xen.org>, <dengkai1@huawei.com>,
	<alex.chen@huawei.com>, <qemu-trivial@nongnu.org>
Subject: Re: [PATCH] hw/xen: Don't use '#' flag of printf format
Message-ID: <20201105154910.GK2214@perard.uk.xensource.com>
References: <20201104133709.3326630-1-zhangxinhao1@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201104133709.3326630-1-zhangxinhao1@huawei.com>

On Wed, Nov 04, 2020 at 09:37:09PM +0800, Xinhao Zhang wrote:
> Fix code style. Don't use the '#' flag of the printf format ('%#') in
> format strings; use a '0x' prefix instead.
> 
> Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
> Signed-off-by: Kai Deng <dengkai1@huawei.com>
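
For context, the two forms are not equivalent at zero: the '#' flag only
adds the "0x" prefix for nonzero values, while an explicit "0x%x" always
prints it. A minimal standalone sketch (the fmt_both() helper is
illustrative only, not part of the patch):

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative helper: format the same value with the '#' flag and with
 * an explicit "0x" prefix, to show why the two differ for zero.
 */
void fmt_both(unsigned int v, char *alt, size_t alen, char *pfx, size_t plen)
{
    snprintf(alt, alen, "%#x", v);   /* '#': "0x" appears only when v != 0 */
    snprintf(pfx, plen, "0x%x", v);  /* explicit prefix: always "0x..." */
}
```

So fmt_both(0, ...) yields "0" versus "0x0", which is why a consistent
explicit prefix is preferred in format strings.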

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 15:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 15:55:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19891.45369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahbo-0005hr-Cv; Thu, 05 Nov 2020 15:55:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19891.45369; Thu, 05 Nov 2020 15:55:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahbo-0005hk-9z; Thu, 05 Nov 2020 15:55:32 +0000
Received: by outflank-mailman (input) for mailman id 19891;
 Thu, 05 Nov 2020 15:55:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kahbn-0005hf-6J
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 15:55:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34be8d1a-bfad-4516-ad4f-a210cd72ef13;
 Thu, 05 Nov 2020 15:55:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 38A9CAFD0;
 Thu,  5 Nov 2020 15:55:28 +0000 (UTC)
X-Inumbo-ID: 34be8d1a-bfad-4516-ad4f-a210cd72ef13
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604591728;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=MYdxot3OwZOi0x0Jjwp1g2jZlQ7iqjA0dOj9aslvivg=;
	b=UetZGdAFKqhOSyq+exNQgpd8MYRvlBcVBhy8UkrYZVPkcJ7Xj63NhNA5gUGUvGLKZN0KRP
	8UI509CVPFnFZGuf7QEWmc41kPF22G85wDI7klGGocmhd45xuRIKPFmjMwfebg0Qd+nw+a
	mXIr/y7mLCjkbOzmZ4N9sXil5RJdkFc=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] gnttab: don't allocate status frame tracking array when
 "gnttab=max_ver:1"
Message-ID: <a484cc88-f41d-5d38-d098-4eda297569a1@suse.com>
Date: Thu, 5 Nov 2020 16:55:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This array can be large when many grant frames are permitted; avoid
allocating it when it's not going to be used anyway. Do so indirectly
though, by making grant_to_status_frames() return zero in this case.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -446,11 +446,14 @@ static inline void active_entry_release(
 
 static inline unsigned int grant_to_status_frames(unsigned int grant_frames)
 {
+    if ( opt_gnttab_max_version < 2 )
+        return 0;
     return DIV_ROUND_UP(grant_frames * GRANT_PER_PAGE, GRANT_STATUS_PER_PAGE);
 }
 
 static inline unsigned int status_to_grant_frames(unsigned int status_frames)
 {
+    ASSERT(opt_gnttab_max_version >= 2);
     return DIV_ROUND_UP(status_frames * GRANT_STATUS_PER_PAGE, GRANT_PER_PAGE);
 }
 


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 15:56:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 15:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19895.45381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahdA-0005p0-PQ; Thu, 05 Nov 2020 15:56:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19895.45381; Thu, 05 Nov 2020 15:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahdA-0005ot-LM; Thu, 05 Nov 2020 15:56:56 +0000
Received: by outflank-mailman (input) for mailman id 19895;
 Thu, 05 Nov 2020 15:56:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kahd9-0005oo-Aw
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 15:56:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3ea6a6a-a16d-4dfc-b43c-aafde0a97eb0;
 Thu, 05 Nov 2020 15:56:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 899E9AB4C;
 Thu,  5 Nov 2020 15:56:53 +0000 (UTC)
X-Inumbo-ID: f3ea6a6a-a16d-4dfc-b43c-aafde0a97eb0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604591813;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=VMviCyLoMsssqtC8a0xLl8BceQ2P2ix8lSvSB04SKyQ=;
	b=Es/zxA4W1Ceh8XlCNJmxsDdGvYZ+Ok+h6CAH1dyo0HX+A4QVVwjNuioNZ94MGDElahDhaE
	wipi5P21NNFfJiFeZxaXNx5Cjw1vDTAI+YgIKldTuQZoK8JueUL/UFnj1bAj39dTB4RHbV
	ypszXu3j05m7+TAM6x/8rf7NA2lONXo=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libxg: don't use max policy in xc_cpuid_xend_policy()
Message-ID: <4fa05759-24ac-5ff3-3db9-94537f6be95d@suse.com>
Date: Thu, 5 Nov 2020 16:56:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Using the max policy undermines the separation between default and max.
For example, turning off AVX512F on an MPX-capable system silently turns
on MPX, despite MPX no longer being part of the default policy. Since
the policy obtained here is used only to determine what to convert 'x'
settings to (but not to e.g. validate '1' settings), for guests with
(suitable) "cpuid=" settings this change has the same effect as the
earlier changes separating default from max, and then moving e.g. MPX
from default to max-only, had for guests without (affected) "cpuid="
settings.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -288,11 +288,11 @@ static int xc_cpuid_xend_policy(
     unsigned int nr_leaves, nr_msrs;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     /*
-     * Three full policies.  The host, domain max, and domain current for the
-     * domain type.
+     * Three full policies.  The host, default for the domain type,
+     * and domain current.
      */
-    xen_cpuid_leaf_t *host = NULL, *max = NULL, *cur = NULL;
-    unsigned int nr_host, nr_max, nr_cur;
+    xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
+    unsigned int nr_host, nr_def, nr_cur;
 
     if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
          di.domid != domid )
@@ -312,7 +312,7 @@ static int xc_cpuid_xend_policy(
 
     rc = -ENOMEM;
     if ( (host = calloc(nr_leaves, sizeof(*host))) == NULL ||
-         (max  = calloc(nr_leaves, sizeof(*max)))  == NULL ||
+         (def  = calloc(nr_leaves, sizeof(*def)))  == NULL ||
          (cur  = calloc(nr_leaves, sizeof(*cur)))  == NULL )
     {
         ERROR("Unable to allocate memory for %u CPUID leaves", nr_leaves);
@@ -330,15 +330,16 @@ static int xc_cpuid_xend_policy(
         goto fail;
     }
 
-    /* Get the domain's max policy. */
+    /* Get the domain type's default policy. */
     nr_msrs = 0;
-    nr_max = nr_leaves;
-    rc = xc_get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_max
-                                              : XEN_SYSCTL_cpu_policy_pv_max,
-                                  &nr_max, max, &nr_msrs, NULL);
+    nr_def = nr_leaves;
+    rc = xc_get_system_cpu_policy(xch,
+                                  di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                         : XEN_SYSCTL_cpu_policy_pv_default,
+                                  &nr_def, def, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s max policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
         rc = -errno;
         goto fail;
     }
@@ -359,10 +360,10 @@ static int xc_cpuid_xend_policy(
     for ( ; xend->leaf != XEN_CPUID_INPUT_UNUSED; ++xend )
     {
         xen_cpuid_leaf_t *cur_leaf = find_leaf(cur, nr_cur, xend);
-        const xen_cpuid_leaf_t *max_leaf = find_leaf(max, nr_max, xend);
+        const xen_cpuid_leaf_t *def_leaf = find_leaf(def, nr_def, xend);
         const xen_cpuid_leaf_t *host_leaf = find_leaf(host, nr_host, xend);
 
-        if ( cur_leaf == NULL || max_leaf == NULL || host_leaf == NULL )
+        if ( cur_leaf == NULL || def_leaf == NULL || host_leaf == NULL )
         {
             ERROR("Missing leaf %#x, subleaf %#x", xend->leaf, xend->subleaf);
             goto fail;
@@ -371,7 +372,7 @@ static int xc_cpuid_xend_policy(
         for ( unsigned int i = 0; i < ARRAY_SIZE(xend->policy); i++ )
         {
             uint32_t *cur_reg = &cur_leaf->a + i;
-            const uint32_t *max_reg = &max_leaf->a + i;
+            const uint32_t *def_reg = &def_leaf->a + i;
             const uint32_t *host_reg = &host_leaf->a + i;
 
             if ( xend->policy[i] == NULL )
@@ -386,7 +387,7 @@ static int xc_cpuid_xend_policy(
                 else if ( xend->policy[i][j] == '0' )
                     val = false;
                 else if ( xend->policy[i][j] == 'x' )
-                    val = test_bit(31 - j, max_reg);
+                    val = test_bit(31 - j, def_reg);
                 else if ( xend->policy[i][j] == 'k' ||
                           xend->policy[i][j] == 's' )
                     val = test_bit(31 - j, host_reg);
@@ -419,7 +420,7 @@ static int xc_cpuid_xend_policy(
 
  fail:
     free(cur);
-    free(max);
+    free(def);
     free(host);
 
     return rc;


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 16:00:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 16:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19903.45393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahgy-0007FN-9V; Thu, 05 Nov 2020 16:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19903.45393; Thu, 05 Nov 2020 16:00:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kahgy-0007FG-6Y; Thu, 05 Nov 2020 16:00:52 +0000
Received: by outflank-mailman (input) for mailman id 19903;
 Thu, 05 Nov 2020 16:00:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N0uV=EL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kahgx-0007FB-5Q
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 16:00:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ef9137a-4b1f-4f5c-933c-55065bd2630e;
 Thu, 05 Nov 2020 16:00:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A048CAB4C;
 Thu,  5 Nov 2020 16:00:49 +0000 (UTC)
X-Inumbo-ID: 6ef9137a-4b1f-4f5c-933c-55065bd2630e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604592049;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QnSmKXuUvQC0v84Z/vr73YbP+4gA75cbChIWCXLpeiI=;
	b=MHj6Kq8E14/D6+HbMg4T22qsbktj3J2mj7Zg/JShRnJtXci2+XVxw6+FGex6oLwdk+Vv96
	+VwvPpPoLfBtPS83MJDN5XINRLztavKpzuT55XDRyZf6mInY8AovN9I8Tbzz1e3rpdgVnD
	Zeyi2zV7DB16q/7uPq+ZgjJjA/zmGJ0=
Subject: Re: [ANNOUNCE] Call for agenda items for 5 November 2020 Community
 Call @ 16:00 UTC
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <948CC2D7-B53D-48CD-879B-6C0DDE0B1EE2@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <50195240-8375-5f9b-d5b7-2a89ec8c99d0@suse.com>
Date: Thu, 5 Nov 2020 17:00:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <948CC2D7-B53D-48CD-879B-6C0DDE0B1EE2@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.2020 15:47, George Dunlap wrote:
> Hi all,
> 
> The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/ and you can edit to add items.  Alternatively, you can reply to this mail directly.
> 
> Agenda items appreciated a few days before the call: please put your name besides items if you edit the document.
> 
> Note the following administrative conventions for the call:
> * Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
> * I usually send out a meeting reminder a few days before with a provisional agenda
> 
> * To allow time to switch between meetings, we'll plan on starting the agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort out technical difficulties &c
> 
> * If you want to be CC'ed please add or remove yourself from the sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/
> 
> Best Regards
> George
> 
> 
> 
> == Dial-in Information ==
> ## Meeting time
> 16:00 - 17:00 UTC
> Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=11&day=5&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179
> 
> 
> ## Dial in details
> Web: https://www.gotomeet.me/GeorgeDunlap
> 
> You can also dial in using your phone.
> Access Code: 168-682-109
> 
> China (Toll Free): 4008 811084
> Germany: +49 692 5736 7317

FYI: This number continues to not work.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 16:27:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 16:27:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19921.45423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kai6R-0000lV-JV; Thu, 05 Nov 2020 16:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19921.45423; Thu, 05 Nov 2020 16:27:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kai6R-0000lO-GQ; Thu, 05 Nov 2020 16:27:11 +0000
Received: by outflank-mailman (input) for mailman id 19921;
 Thu, 05 Nov 2020 16:27:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tWnR=EL=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kai6Q-0000lJ-CB
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 16:27:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db73dad9-cdfa-4d79-9586-8c19731d8204;
 Thu, 05 Nov 2020 16:27:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kai6K-0004Yo-8k; Thu, 05 Nov 2020 16:27:04 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kai6J-0001xA-Rm; Thu, 05 Nov 2020 16:27:04 +0000
X-Inumbo-ID: db73dad9-cdfa-4d79-9586-8c19731d8204
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=CqpsC3IQyLxfKeb/TylKLBCfN6RptM6b7w/FRc2UraU=; b=SLlkA3LcOtTqqKJLMbBFSg/8/9
	+pNs7oT7uO0hBnD+doAw+8gCjULik6m7XlYDf+WrXeavi0cUFXT8RHckNb9JkAc3a8dV28NAXwqIA
	HIn0OmWCFZElmRxEcNAr32CvuU8mSnig1fU6Zz3SAHR+hLgZ9LN/XlPGMsQzdvxPkJeI=;
Subject: Re: BUG: libxl vuart build order
To: Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Cc: Takahiro Akashi <takahiro.akashi@linaro.org>,
 Alex Bennée <alex.bennee@linaro.org>,
 Masami Hiramatsu <masami.hiramatsu@linaro.org>, ian.jackson@eu.citrix.com,
 wl@xen.org, xen-devel@lists.xenproject.org
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com>
 <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s>
 <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa>
 <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa>
 <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s>
 <20201030025157.GA18567@laputa>
 <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>
 <20201105154147.GJ2214@perard.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4933b50f-19da-dac7-78d2-378fa72649a7@xen.org>
Date: Thu, 5 Nov 2020 16:26:58 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201105154147.GJ2214@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 05/11/2020 15:41, Anthony PERARD wrote:
> On Fri, Oct 30, 2020 at 10:46:37AM -0700, Stefano Stabellini wrote:
>> On Fri, 30 Oct 2020, Takahiro Akashi wrote:
>>> === after "xl console -t vuart" ===
>>> U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
>>>
>>> Xen virtual CPU
>>> Model: XENVM-4.15
>>> DRAM:  128 MiB
>>>
>>> In:    sbsa-pl011
>>> Out:   sbsa-pl011
>>> Err:   sbsa-pl011
>>> ===
>>>
>>> If possible, I hope that "xl create -c" command would accept "-t vuart"
>>> option (or it should automatically select the uart type from the config).
>>
>> I think a patch to add the "-t" option to "xl create" would be
>> acceptable, right Anthony?
> 
> I don't know. Why `xl' isn't able to select the vuart as the default one?

The SBSA UART was originally introduced mostly for debugging purposes
and to be compliant with the SBSA specification.

So far, the default console on Arm is the PV console.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 16:29:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 16:29:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19926.45435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kai8f-0000vr-0Y; Thu, 05 Nov 2020 16:29:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19926.45435; Thu, 05 Nov 2020 16:29:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kai8e-0000vk-Tr; Thu, 05 Nov 2020 16:29:28 +0000
Received: by outflank-mailman (input) for mailman id 19926;
 Thu, 05 Nov 2020 16:29:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kai8d-0000ve-Ii
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 16:29:27 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a58dbe1e-ceca-45f6-a591-025824bac781;
 Thu, 05 Nov 2020 16:29:26 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 51E36206D9;
 Thu,  5 Nov 2020 16:29:25 +0000 (UTC)
X-Inumbo-ID: a58dbe1e-ceca-45f6-a591-025824bac781
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604593765;
	bh=tsk+LnGHFChweNP487VxzQI0HHqShJ9D2TXUJyZjNv8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=thwLiDGwO0AMfllqDy4viZz+PDc1XaKGr29lAKh/e+bQGgmqMqRdKvpsSjowN62L0
	 Z28CCcjiwsjTlvHdSN69S+QIf+2fPWCEvyh5qk2Wd5wfjuX8XLs9ZIeLE2BILRMoLX
	 phFTSmPJAeUzBAqnOlqhiVjuQ2gAH3TbVJhMHUpY=
Date: Thu, 5 Nov 2020 08:29:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Anthony PERARD <anthony.perard@citrix.com>
cc: Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Takahiro Akashi <takahiro.akashi@linaro.org>, 
    Alex Bennée <alex.bennee@linaro.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, ian.jackson@eu.citrix.com, 
    wl@xen.org, xen-devel@lists.xenproject.org
Subject: Re: BUG: libxl vuart build order
In-Reply-To: <20201105154147.GJ2214@perard.uk.xensource.com>
Message-ID: <alpine.DEB.2.21.2011050826510.2323@sstabellini-ThinkPad-T480s>
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com> <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s> <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa> <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa> <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s> <20201030025157.GA18567@laputa> <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s> <20201105154147.GJ2214@perard.uk.xensource.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 5 Nov 2020, Anthony PERARD wrote:
> On Fri, Oct 30, 2020 at 10:46:37AM -0700, Stefano Stabellini wrote:
> > On Fri, 30 Oct 2020, Takahiro Akashi wrote:
> > > === after "xl console -t vuart" ===
> > > U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
> > > 
> > > Xen virtual CPU
> > > Model: XENVM-4.15
> > > DRAM:  128 MiB
> > > 
> > > In:    sbsa-pl011
> > > Out:   sbsa-pl011
> > > Err:   sbsa-pl011
> > > ===
> > > 
> > > If possible, I hope that "xl create -c" command would accept "-t vuart"
> > > option (or it should automatically select the uart type from the config).
> > 
> > I think a patch to add the "-t" option to "xl create" would be
> > acceptable, right Anthony?
> 
> I don't know. Why `xl' isn't able to select the vuart as the default one?

Because both consoles are still valid: when the emulated uart is
enabled, the normal PV console is also enabled.


> Maybe a long option would be better in the cases where we would like to
> connect to a "secondary" console? I could see `xl create --console=vuart'
> being fine, I don't know if that's possible.

That's OK for me but keep in mind that xl console already takes -t
vuart. In other words:

1) xl console -t vuart    -> WORKS
2) xl create -c -t vuart  -> DOESN'T WORK


P.S.

Could you also take a quick look at the patch I appended to the previous
email? Or would you prefer me to send it out separately as its own
patch?


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 17:10:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 17:10:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19945.45465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaim3-0005BL-9N; Thu, 05 Nov 2020 17:10:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19945.45465; Thu, 05 Nov 2020 17:10:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaim3-0005BE-6H; Thu, 05 Nov 2020 17:10:11 +0000
Received: by outflank-mailman (input) for mailman id 19945;
 Thu, 05 Nov 2020 17:10:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dB6v=EL=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1kaim2-0005B9-5E
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 17:10:10 +0000
Received: from mail-qv1-xf29.google.com (unknown [2607:f8b0:4864:20::f29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40d07617-8e2c-4002-9d6f-809dcf8c70dc;
 Thu, 05 Nov 2020 17:10:09 +0000 (UTC)
Received: by mail-qv1-xf29.google.com with SMTP id t20so1023270qvv.8
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 09:10:09 -0800 (PST)
Received: from [100.64.72.123] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id e186sm974816qkd.117.2020.11.05.09.10.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 05 Nov 2020 09:10:08 -0800 (PST)
X-Inumbo-ID: 40d07617-8e2c-4002-9d6f-809dcf8c70dc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:date:message-id
         :references:cc:in-reply-to:to;
        bh=1MfltUYu4JYwmtLM9x0pvlw+JF0xlahDh+KM1a24aic=;
        b=jahJVK41WT6o6pxs8NJeRMRUfna/GTPGrfJlajqI/QY3GA29NLhl4va3Gt9FrGnP2m
         V4GYtTSjaOqDNIe8OJrJKKxXQHE5VZmTMhKcI6FbeMy+Q7NzgEYqYZXRL571e6d9es5a
         7An7LZpKc2W9Y/XLT+IEZG8p/NDKxwwokKkdDDQHML8zqTjsoIEjbfK4EWV0D3hmLmO8
         HmSJmILdUtLfXx59CJReegXc/n6K6jHqMJ4uSq3NST/2mE5btuG6W4OdO3UE22FmtHsJ
         rvKGgrcLPJ3dhnqe2KVBYr6UaIbq5me0YfpHpeAEGusqHjvhRsZ2LMJdjsqh4wfbUwQS
         CinA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:date:message-id:references:cc:in-reply-to:to;
        bh=1MfltUYu4JYwmtLM9x0pvlw+JF0xlahDh+KM1a24aic=;
        b=J0HAR8x7YLNNRfbggmHiSkoerVYeswZViKdpVtoqa/E/1WHoCGEsHy86OJl1cqxKpr
         2ni2XVgpF/jfkTxWVexCeBmfRxonvD7If86cLjSiu65TUG7mleKaiHnafuIlt09Chp1w
         VbpbdGk4toz0302ON1APXi1CX24yY7WaXUHDjDRkn1o8X1WKuHtusdrHj1Ry1S/iNXwZ
         myqlb/s1HO6QwujdepQr5na+NrABt8pCcMpMzWfrXgBG6jNvHCqlNg6Y1910PsnpY4Xy
         SHM/JbhOgPWwcCHRMTxKDS/S3doFznwrffoOJ+HwLunmlioJYgvtjjzULgqkmE1kX9XD
         +DgQ==
X-Gm-Message-State: AOAM5335wmAtGkE+afA4s2ivdzw1COHOeWapCcHN0I1FoDiqo9w8+B6o
	XkS7ZAVui1j/xeG/WTpxO9naB1Y+Sbs=
X-Google-Smtp-Source: ABdhPJwp0H98O4ZUkioRarX30v5n+oA/aphQk+8XyC9rqbS4fCFdrsJvqTnJs8ueNbCO1RhCy8ppHg==
X-Received: by 2002:ad4:472c:: with SMTP id l12mr2981964qvz.42.1604596208784;
        Thu, 05 Nov 2020 09:10:08 -0800 (PST)
Content-Type: multipart/alternative; boundary=Apple-Mail-199D7F37-FF39-44C7-9AFD-02B8F8DB1304
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [ANNOUNCE] Call for agenda items for 5 November 2020 Community Call @ 16:00 UTC
Date: Thu, 5 Nov 2020 12:10:07 -0500
Message-Id: <BDAC6E73-9375-48E3-9840-4E990F01E165@gmail.com>
References: <50195240-8375-5f9b-d5b7-2a89ec8c99d0@suse.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 xen-devel@lists.xenproject.org
In-Reply-To: <50195240-8375-5f9b-d5b7-2a89ec8c99d0@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: iPad Mail (18A8395)


--Apple-Mail-199D7F37-FF39-44C7-9AFD-02B8F8DB1304
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: 8bit

> On Nov 5, 2020, at 11:01, Jan Beulich <jbeulich@suse.com> wrote:
> On 30.10.2020 15:47, George Dunlap wrote:
>> Hi all,
>>
>> The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/ and you can edit to add items.  Alternatively, you can reply to this mail directly.
>>
>> Agenda items appreciated a few days before the call: please put your name beside items if you edit the document.
>>
>> Note the following administrative conventions for the call:
>> * Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
>> * I usually send out a meeting reminder a few days before with a provisional agenda
>>
>> * To allow time to switch between meetings, we'll plan on starting the agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort out technical difficulties &c
>>
>> * If you want to be CC'ed please add or remove yourself from the sign-up sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/
>>
>> Best Regards
>> George
>>
>>
>> == Dial-in Information ==
>> ## Meeting time
>> 16:00 - 17:00 UTC
>> Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=11&day=5&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179
>>
>>
>> ## Dial in details
>> Web: https://www.gotomeet.me/GeorgeDunlap
>>
>> You can also dial in using your phone.
>> Access Code: 168-682-109
>>
>> China (Toll Free): 4008 811084
>> Germany: +49 692 5736 7317
>
> FYI: This number continues to not work.
>
> Jan

This appears to be the new number since May 2020, based on [0].  I called and confirmed that the number works and expects a number to be input, but can't otherwise verify the German prompts:

   Germany: +49 721 9881 4161

Rich

[0] https://www.youtube.com/watch?v=Embn9_JEqRo

--Apple-Mail-199D7F37-FF39-44C7-9AFD-02B8F8DB1304--


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 17:52:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 17:52:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19960.45494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajQz-0000KL-Nd; Thu, 05 Nov 2020 17:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19960.45494; Thu, 05 Nov 2020 17:52:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajQz-0000KE-KU; Thu, 05 Nov 2020 17:52:29 +0000
Received: by outflank-mailman (input) for mailman id 19960;
 Thu, 05 Nov 2020 17:52:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aWoZ=EL=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kajQy-0000K9-GG
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 17:52:28 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02f3b7a5-5495-46fd-b349-d5c92d25a256;
 Thu, 05 Nov 2020 17:52:27 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id g12so2774371wrp.10
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 09:52:27 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id a185sm3323029wmf.24.2020.11.05.09.52.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 09:52:15 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 154F51FF9D;
 Thu,  5 Nov 2020 17:51:55 +0000 (GMT)
X-Inumbo-ID: 02f3b7a5-5495-46fd-b349-d5c92d25a256
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=PWX79l8aK71+AYI8WeJL1C20BLKUbqb3b0w9LAzwhV8=;
        b=sKuBjdxgzO5u9D8pgwF/tGU3Jl9e9kVQLXkjUZin+MoLsmfLdqrVeqhqiki4zrHNmt
         5hy7oStzIxiY+o16gdITJ6d17aQlrLazLaOKd1So+BBwfe/FRh1q20LBPkyH1CYBKIBc
         89BFyfERsJdvAKbEqLP1/bDRte++Si1BFXcV5wA53Yg3P8t5AiT8ptxNXg/9K4dFdUJ5
         sPXa8twvD5cGmJ5dvqqFdwnTja/ybh6DzAOswcsvh1WULOZk8Tif6W1wiM9fUjihlpqK
         kiMe6f4QE/wGEOHChHWXznLNi+6OKoDNmYdzcJDS8aOMnUWcpxcWrDD/Hk5hUq8sEImR
         ikkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=PWX79l8aK71+AYI8WeJL1C20BLKUbqb3b0w9LAzwhV8=;
        b=l93Kg90nRJu20jRooqwyZ1DWg2j8+W76gO7/wzUC/XaNcmycjpcBJjiY6XPFw1Drc3
         dV+gVXYxGD2j+PWCWPWzI34+5L/K0K4+qj9gfSJ3Yzu0GeLvXgOskohu+QqxVJEImHb4
         UdjKbX1C9rZWCqkO93ojH484gGHIQrfOgT1r3jePnVt1YCVBtwGiKp+lanGUGWYUlniF
         QqZL747Rfr7zBCiDfBcm86NIE9K/XjPlpWooPYpkH4E+2+tGICFl/p60NWNKWLh3Bmpi
         8XcwlW7okmvSbE6ZgemwSy6eVv1CQCTGYZBeOqzP8XmpIyRGW5HGQq8moEl1P1StewJ2
         CCNQ==
X-Gm-Message-State: AOAM5308b1ppuTNkXw20n0dTnqp1wqeiPYYlqCXTJp8fmhSmVnWnoEQh
	wij/AW6Fs7z1S/Xm40gLaBzXnA==
X-Google-Smtp-Source: ABdhPJykPmP1sXUte37eZYccOK8QD1oW46dCne80dIEM79/5lUgUNgxRnHI2ok44wyDkI60skFiMrA==
X-Received: by 2002:adf:eb4c:: with SMTP id u12mr4588066wrn.73.1604598746349;
        Thu, 05 Nov 2020 09:52:26 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
        by smtp.gmail.com with ESMTPSA id a185sm3323029wmf.24.2020.11.05.09.52.06
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 05 Nov 2020 09:52:15 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
	by zen.linaroharston (Postfix) with ESMTP id 154F51FF9D;
	Thu,  5 Nov 2020 17:51:55 +0000 (GMT)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	masami.hiramatsu@linaro.org,
	takahiro.akashi@linaro.org,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [RFC PATCH  14/15] xen: only build HVM support under CONFIG_XEN_HVM
Date: Thu,  5 Nov 2020 17:51:52 +0000
Message-Id: <20201105175153.30489-15-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201105175153.30489-1-alex.bennee@linaro.org>
References: <20201105175153.30489-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When running on non-x86 systems there is no point in building HVM
support, as it can never be used there. To achieve this we need to
shuffle some of the inline helpers and other stubs about.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/sysemu/xen-mapcache.h |  2 +-
 include/sysemu/xen.h          |  9 +++++----
 accel/stubs/xen-all-stub.c    | 11 +++++++++++
 accel/stubs/xen-stub.c        |  2 --
 accel/stubs/meson.build       |  3 ++-
 hw/i386/xen/meson.build       |  2 +-
 6 files changed, 20 insertions(+), 9 deletions(-)
 create mode 100644 accel/stubs/xen-all-stub.c

diff --git a/include/sysemu/xen-mapcache.h b/include/sysemu/xen-mapcache.h
index c8e7c2f6cf..4bba764745 100644
--- a/include/sysemu/xen-mapcache.h
+++ b/include/sysemu/xen-mapcache.h
@@ -13,7 +13,7 @@
 
 typedef hwaddr (*phys_offset_to_gaddr_t)(hwaddr phys_offset,
                                          ram_addr_t size);
-#ifdef CONFIG_XEN
+#ifdef CONFIG_XEN_HVM
 
 void xen_map_cache_init(phys_offset_to_gaddr_t f,
                         void *opaque);
diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
index 0ca25697e4..43d2314441 100644
--- a/include/sysemu/xen.h
+++ b/include/sysemu/xen.h
@@ -24,7 +24,7 @@ extern bool xen_allowed;
 
 #define xen_enabled()           (xen_allowed)
 
-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_XEN_HVM
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
 void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
                    struct MemoryRegion *mr, Error **errp);
@@ -33,7 +33,10 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
 #else /* !CONFIG_XEN_IS_POSSIBLE */
 
 #define xen_enabled() 0
-#ifndef CONFIG_USER_ONLY
+
+#endif /* CONFIG_XEN_IS_POSSIBLE */
+
+#if !defined(CONFIG_XEN_HVM) && !defined(CONFIG_USER_ONLY)
 static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
     /* nothing */
@@ -45,6 +48,4 @@ static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
 }
 #endif
 
-#endif /* CONFIG_XEN_IS_POSSIBLE */
-
 #endif
diff --git a/accel/stubs/xen-all-stub.c b/accel/stubs/xen-all-stub.c
new file mode 100644
index 0000000000..597c5789cc
--- /dev/null
+++ b/accel/stubs/xen-all-stub.c
@@ -0,0 +1,11 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "sysemu/xen.h"
+
+bool xen_allowed;
diff --git a/accel/stubs/xen-stub.c b/accel/stubs/xen-stub.c
index 7054965c48..6bc9906239 100644
--- a/accel/stubs/xen-stub.c
+++ b/accel/stubs/xen-stub.c
@@ -9,8 +9,6 @@
 #include "sysemu/xen.h"
 #include "qapi/qapi-commands-migration.h"
 
-bool xen_allowed;
-
 void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
diff --git a/accel/stubs/meson.build b/accel/stubs/meson.build
index d65cb6a5e1..dca468c82a 100644
--- a/accel/stubs/meson.build
+++ b/accel/stubs/meson.build
@@ -1,7 +1,8 @@
 softmmu_stub_ss = ss.source_set()
 
 softmmu_stub_ss.add(when: 'CONFIG_HAX', if_false: files('hax-stub.c'))
-softmmu_stub_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
+softmmu_stub_ss.add(when: 'CONFIG_XEN', if_false: files('xen-all-stub.c'))
+softmmu_stub_ss.add(when: 'CONFIG_XEN_HVM', if_false: files('xen-stub.c'))
 softmmu_stub_ss.add(when: 'CONFIG_KVM', if_false: files('kvm-stub.c'))
 softmmu_stub_ss.add(when: 'CONFIG_TCG', if_false: files('tcg-stub.c'))
 
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index be84130300..576e2cc5dc 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,4 +1,4 @@
-i386_ss.add(when: 'CONFIG_XEN', if_true: files(
+i386_ss.add(when: 'CONFIG_XEN_HVM', if_true: files(
   'xen-hvm.c',
   'xen-mapcache.c',
   'xen_apic.c',
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 17:52:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 17:52:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19961.45507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajR5-0000Ls-0W; Thu, 05 Nov 2020 17:52:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19961.45507; Thu, 05 Nov 2020 17:52:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajR4-0000Ll-T9; Thu, 05 Nov 2020 17:52:34 +0000
Received: by outflank-mailman (input) for mailman id 19961;
 Thu, 05 Nov 2020 17:52:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aWoZ=EL=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kajR3-0000K9-Ck
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 17:52:33 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8fea320-2561-47f5-ad9b-5906052258ac;
 Thu, 05 Nov 2020 17:52:31 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id b8so2802604wrn.0
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 09:52:31 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id e6sm3707860wrs.7.2020.11.05.09.52.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 09:52:15 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id D505B1FF9B;
 Thu,  5 Nov 2020 17:51:54 +0000 (GMT)
X-Inumbo-ID: f8fea320-2561-47f5-ad9b-5906052258ac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=S2GIs58/W24s0fp90d9pmy8FUWIlpMCzYhDdwe9s8Gc=;
        b=y8VYfJKIVgovo9dlAt6OJzV2fWsBxN2k3Qtrdb4hD1XgOw6zMwhgc41Dig59hy8yaH
         O1kh16ITKRkR907lC8Js4Ia8jgKPNg5sUs0krQPcEwDAT3Z2C5HTiuetC+S36lkiAVZx
         bofeE7M1HpcodfKTdRNYMmbUyAPkKzHAI6Ed7yK0gieBk6E84RYpJpRJTVzlnKDN04D8
         qCxKR2hB3CbQkoutSoQqNta1PJIj1EpW8SSRQAaCqvWCSV3wgJ0heclI8x2ItRF6Wbwj
         9iR7hjEF4+AEsU0s00PT1p1dLSfHDo0bEiAnjgxIl+uKkXsN91x13BMkXwgi4KWwluWx
         j0dQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=S2GIs58/W24s0fp90d9pmy8FUWIlpMCzYhDdwe9s8Gc=;
        b=pb2wfq62Rzia3x7rjN8SufxldoPDwgFfcD0mca4yMQvt+eXnOg0rS4ok0xTwqpdqs/
         K9fqDikKiTUcIAc499m/MLIGI6eReS+I3MX0djHlapj9SgMXnlqRwpIUIrSCx4ex3k8B
         ZVrH6ZR4/AnyFoARAgkLzNrpI2u+V1AwP8yucA1zV/W1G7WdY8aYE02RXM+amMAcDIAt
         E/EgMW4e6aW/Nz+FT3R6BQVbsB3fB6IR7GqtKxhPXeIS4oSv/Y/ztSz8oHhiQpxOZz7M
         sJYvlBk1VU78T086l5aD9kLLVsXewoDkdb3OXSMPV/CGg6n6EUhrYw8LryKa44PHfpUd
         U/Tw==
X-Gm-Message-State: AOAM5324wZHAFxsn3q5BRGLCrPs8XQYaeHqF/ERux69FvzNsPq3HIAvE
	8s6k9Gop4TEblhogrxI98KYSjwAixMU+zQ==
X-Google-Smtp-Source: ABdhPJxUNz66xbbtM4Jjtwf6tt59B2TjgPZjQafK+I+tj512i1f7vkBJ0d6klAGbXLkJQvMTFOkFIg==
X-Received: by 2002:adf:e443:: with SMTP id t3mr4398348wrm.14.1604598750991;
        Thu, 05 Nov 2020 09:52:30 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
        by smtp.gmail.com with ESMTPSA id e6sm3707860wrs.7.2020.11.05.09.52.04
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 05 Nov 2020 09:52:15 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
	by zen.linaroharston (Postfix) with ESMTP id D505B1FF9B;
	Thu,  5 Nov 2020 17:51:54 +0000 (GMT)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	masami.hiramatsu@linaro.org,
	takahiro.akashi@linaro.org,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [RFC PATCH  12/15] stubs/xen-hw-stub: drop xenstore_store_pv_console_info stub
Date: Thu,  5 Nov 2020 17:51:50 +0000
Message-Id: <20201105175153.30489-13-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201105175153.30489-1-alex.bennee@linaro.org>
References: <20201105175153.30489-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Nothing that calls this function should ever be built without the real
implementation being present, so the stub is unnecessary.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 stubs/xen-hw-stub.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 2ea8190921..15f3921a76 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -10,10 +10,6 @@
 #include "hw/xen/xen.h"
 #include "hw/xen/xen-x86.h"
 
-void xenstore_store_pv_console_info(int i, Chardev *chr)
-{
-}
-
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
     return -1;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 17:56:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 17:56:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19973.45519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajV6-0000eJ-K2; Thu, 05 Nov 2020 17:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19973.45519; Thu, 05 Nov 2020 17:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajV6-0000eC-GC; Thu, 05 Nov 2020 17:56:44 +0000
Received: by outflank-mailman (input) for mailman id 19973;
 Thu, 05 Nov 2020 17:56:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BrWA=EL=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kajV5-0000e7-5I
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 17:56:43 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23483491-a268-4bb1-ae9e-2c096ea8b254;
 Thu, 05 Nov 2020 17:56:41 +0000 (UTC)
X-Inumbo-ID: 23483491-a268-4bb1-ae9e-2c096ea8b254
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604599001;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=K0wNNSD7lPAQz0Xa+y2jiTrG6iq0yYDdSM3++GMp1WU=;
  b=MbZM2qe1vY0TXtJMd1g8biALzkbBr87pPVaX1vh0PeE38KZl5ubUru/O
   KUQunSmzyhv72ZZo6ctwviT9PRwvyeGqQF+H17oHsnjzQBFGFOX930Qw8
   N5vjOme/haOp9CP6QyVHLdAIe2lojlEVo0DEPwd3vKtNPOvluq8bD2+Ml
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yMu/qodxUJu7veUJiFWHe4A1EQnXFMXIO0JemC6Xgj3lc0+Lw87Xic7yrsGUEiL3jjbur1Vuu4
 yMqhC80QvbISiRu27JecULIhN5j/V3V9bSISKafhXR/vRJgCSNh9C9V81STrdCBuDn9woCIbqn
 zjNq33Pu0GLXhen2BjVIlgw6d+DVGqSoUd3X0AA+r5B44lG55Dap8gcJPemQ/g8yhvpDSf5odk
 47RYRS0e+5RGtiYJ/4K0NcKDNQP4sjfI232xFrtQWK9cM7hoK4Mwj14LsNbr6g/Ybjvr6YbxMR
 a2c=
X-SBRS: None
X-MesageID: 30603986
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,454,1596513600"; 
   d="scan'208";a="30603986"
Date: Thu, 5 Nov 2020 17:56:37 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Stefano Stabellini <stefano.stabellini@xilinx.com>, Takahiro Akashi
	<takahiro.akashi@linaro.org>, Alex Bennée <alex.bennee@linaro.org>, "Masami
 Hiramatsu" <masami.hiramatsu@linaro.org>, <ian.jackson@eu.citrix.com>,
	<wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: BUG: libxl vuart build order
Message-ID: <20201105175637.GL2214@perard.uk.xensource.com>
References: <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s>
 <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa>
 <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa>
 <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s>
 <20201030025157.GA18567@laputa>
 <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>
 <20201105154147.GJ2214@perard.uk.xensource.com>
 <alpine.DEB.2.21.2011050826510.2323@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2011050826510.2323@sstabellini-ThinkPad-T480s>

On Thu, Nov 05, 2020 at 08:29:20AM -0800, Stefano Stabellini wrote:
> On Thu, 5 Nov 2020, Anthony PERARD wrote:
> > On Fri, Oct 30, 2020 at 10:46:37AM -0700, Stefano Stabellini wrote:
> > > On Fri, 30 Oct 2020, Takahiro Akashi wrote:
> > > > === after "xl console -t vuart" ===
> > > > U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
> > > > 
> > > > Xen virtual CPU
> > > > Model: XENVM-4.15
> > > > DRAM:  128 MiB
> > > > 
> > > > In:    sbsa-pl011
> > > > Out:   sbsa-pl011
> > > > Err:   sbsa-pl011
> > > > ===
> > > > 
> > > > If possible, I hope that "xl create -c" command would accept "-t vuart"
> > > > option (or it should automatically selects uart type from the config).
> > > 
> > > I think a patch to add the "-t" option to "xl create" would be
> > > acceptable, right Anthony?
> > 
> > I don't know. Why `xl' isn't able to select the vuart as the default one?
> 
> Because both consoles are still valid: when the emulated uart is
> enabled, the normal PV console is also enabled.
> 
> 
> > Maybe a long option would be better in the cases where we would like to
> > connect to a "secondary" console? I could see `xl create --console=vuart'
> > been fine, I don't know if that's possible.
> 
> That's OK for me but keep in mind that xl console already takes -t
> vuart. In other words:

I don't know why we would need the exact same option; `xl console` and
`xl create` are two different commands. Also, I usually prefer long
options for rarely used options, as they make it a bit easier to figure
out what a command is supposed to do without checking the man page (at
least when both long and short options are available).

> 1) xl console -t vuart    -> WORKS

-t for `xl console` works well enough, since it can be read as a
shortcut for "type of console".

> 2) xl create -c -t vuart  -> DOESN'T WORK

But here, -t would not be a "type of console", since we are creating a
VM. Also, `xl create -t vuart` without -c would do nothing, right?
(It would create the VM but ignore the -t.)

Anyway, an option to auto-connect to a different console would be
useful.
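
For illustration, the difference being discussed could look like this;
the `--console=vuart` spelling is hypothetical and does not exist in xl
today:

```shell
# Works today: attach to the vuart of an already-running guest
xl console -t vuart guest

# Works today: create a guest and attach to the default PV console
xl create -c guest.cfg

# Hypothetical syntax under discussion: create a guest and
# auto-connect to the emulated vuart instead of the PV console
xl create --console=vuart guest.cfg
```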

> P.S.
> 
> Could you also take a quick look at the patch I appended to the previous
> email? Or would you prefer me to send it out separately as its own
> patch?

It's probably better to send a patch on its own when it's ready for
review, rather than have it embedded in an email in a long
discussion/debugging thread. That leaves a better chance for others to
spot that the patch exists and review it.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 17:56:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 17:56:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19974.45531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajVG-0000hs-S7; Thu, 05 Nov 2020 17:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19974.45531; Thu, 05 Nov 2020 17:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajVG-0000hk-Oa; Thu, 05 Nov 2020 17:56:54 +0000
Received: by outflank-mailman (input) for mailman id 19974;
 Thu, 05 Nov 2020 17:56:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kajVG-0000hU-0I
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 17:56:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b52e02b-a29d-4066-853d-e0177769699b;
 Thu, 05 Nov 2020 17:56:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kajV9-0006R7-Lv; Thu, 05 Nov 2020 17:56:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kajV9-00022z-EU; Thu, 05 Nov 2020 17:56:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kajV9-0008V2-Dy; Thu, 05 Nov 2020 17:56:47 +0000
X-Inumbo-ID: 0b52e02b-a29d-4066-853d-e0177769699b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z1bnYhdu4MQjpS6lpBvZEtgZ0XafKaPuXajUDRRC4xo=; b=YTtvrVEwKeNmvlooGqwEqsYSlv
	qvQdM7a2bsFYTCXq1uEEwbAMB5k0vG1ZMyOQQ1/AYXYZrfe0chprgI1QMJ13b209Va1L7XooS6y4C
	i9aq03V9PjVs2ZHGM/fmGqUWB2+P/xP13RbcyRRmE9eb6qCO53gC9eDjb03+jvKjNv0A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156405-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156405: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64:xen-build:fail:regression
    libvirt:build-i386-xsm:xen-build:fail:regression
    libvirt:build-armhf:xen-build:fail:regression
    libvirt:build-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f035f53baa2e5dc00b8e866e594672a90b4cea78
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 17:56:47 +0000

flight 156405 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156405/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64                   6 xen-build                fail REGR. vs. 151777
 build-i386-xsm                6 xen-build                fail REGR. vs. 151777
 build-armhf                   6 xen-build                fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f035f53baa2e5dc00b8e866e594672a90b4cea78
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  118 days
Failing since        151818  2020-07-11 04:18:52 Z  117 days  112 attempts
Testing same since   156405  2020-11-05 04:21:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 24784 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 17:58:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 17:58:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.19984.45549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajWr-0000xG-I2; Thu, 05 Nov 2020 17:58:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 19984.45549; Thu, 05 Nov 2020 17:58:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajWr-0000x9-Eu; Thu, 05 Nov 2020 17:58:33 +0000
Received: by outflank-mailman (input) for mailman id 19984;
 Thu, 05 Nov 2020 17:58:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aWoZ=EL=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kajWp-0000x3-Ph
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 17:58:31 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22d79325-16cd-4ab9-a26b-e15f256cb57f;
 Thu, 05 Nov 2020 17:58:31 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id p19so1628514wmg.0
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 09:58:31 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id l16sm3421423wrr.83.2020.11.05.09.58.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 09:58:29 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id BDA3B1FF9A;
 Thu,  5 Nov 2020 17:51:54 +0000 (GMT)
X-Inumbo-ID: 22d79325-16cd-4ab9-a26b-e15f256cb57f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=e1CT0XOx8Vvy8O4jAkV/r0nsAlwbNsoMSDp0qZ+mwlI=;
        b=IHQhRMPbEglYB1FwSKkG+NGlzWF9RwV19ldmzO/oohRLPXz1nJLWx5YunHxGWGRPxo
         /Aqz0lTJ/nD7arDBcf0sbMwranwxpNdfVLle85fnpiHqG1927kcijKhgq1httip10pAa
         KdTz6HLhjzWm0b0qNDlogU0/nN1mAGqhYE2bd3V5QGsXt52QdnSO9PaAy82oAt43TSN+
         IX8ENd//l1b1wJ2d9sbQ3iK8byU9/O/Q37z6PNy87kYYDQnEJtW9ygaJU0DTlRjf22vW
         toaG/0eERqixQ/tBJ3vBkMO1QSBse5bEngAD3ByWZ3P4n9rTt9XiI6N7lU+jtJE3csbr
         9Iwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=e1CT0XOx8Vvy8O4jAkV/r0nsAlwbNsoMSDp0qZ+mwlI=;
        b=a4+ggfAL34+oul7gwx2mrE6zW/SQblJsMruScPHMoE/1qspwATGLJLRKx0lIzBEk6Z
         vLCJHSrNEzaDD5RRC6la87SwzZLtpYlTJrOai42F/xyW0XCtcyUwtiJGAS9d7VyPOKps
         jWYR359P0AmUX+FRUg5rBb0HLZrEZxJqsbN4JOfSvGR9prfdaymeH/1g4kxu/ZXDrMb/
         4rnRYJ0qOAImdB1A5EDoYqRsyhz2sj++BMnCyVQTEPtdO49Ol9viilHjrcTtrUCN8yxB
         606TXkj0xM/FhvSXx9/WaCtJkjNtOfr+Kg4es28Tt4gSB9CVmExAjuSh80jUsuwh8Qn9
         bD+A==
X-Gm-Message-State: AOAM530dy7QEbNWhRwPxMtrES/r+6aKzGyRP0P1WAPLwaBgM5hcc01yZ
	TBuxqlwSnYNqFSihuwLBuqXizg==
X-Google-Smtp-Source: ABdhPJzeGNAWfHzipdSqUYwax9p+jCiQ9HF7pckIJMTgj1GksFvaLT2Ts2O4bY3gUQ2QwqxRyqJ0jg==
X-Received: by 2002:a1c:9d08:: with SMTP id g8mr3981235wme.171.1604599110309;
        Thu, 05 Nov 2020 09:58:30 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: julien@xen.org,
	stefano.stabellini@linaro.org,
	stefano.stabellini@xilinx.com,
	masami.hiramatsu@linaro.org,
	takahiro.akashi@linaro.org,
	andre.przywara@arm.com,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [RFC PATCH  11/15] include/hw/xen.h: drop superfluous struct
Date: Thu,  5 Nov 2020 17:51:49 +0000
Message-Id: <20201105175153.30489-12-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201105175153.30489-1-alex.bennee@linaro.org>
References: <20201105175153.30489-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Chardev is already a typedef'ed struct.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/hw/xen/xen.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 1406648ca5..0f9962b1c1 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -28,7 +28,7 @@ int xen_is_pirq_msi(uint32_t msi_data);
 
 qemu_irq *xen_interrupt_controller_init(void);
 
-void xenstore_store_pv_console_info(int i, struct Chardev *chr);
+void xenstore_store_pv_console_info(int i, Chardev *chr);
 
 void xen_register_framebuffer(struct MemoryRegion *mr);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20000.45576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajsw-0003ar-LY; Thu, 05 Nov 2020 18:21:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20000.45576; Thu, 05 Nov 2020 18:21:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kajsw-0003ak-HV; Thu, 05 Nov 2020 18:21:22 +0000
Received: by outflank-mailman (input) for mailman id 20000;
 Thu, 05 Nov 2020 18:21:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kajsv-0003Ze-Ir
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:21:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2cecc96-017b-4c9e-8cb5-9285c380a33b;
 Thu, 05 Nov 2020 18:21:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kajso-00071x-3Q; Thu, 05 Nov 2020 18:21:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kajsn-00031B-Sn; Thu, 05 Nov 2020 18:21:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kajsn-0003bB-SN; Thu, 05 Nov 2020 18:21:13 +0000
X-Inumbo-ID: e2cecc96-017b-4c9e-8cb5-9285c380a33b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZYCz7L8P3iUSQjB7P92erWsatO8NNesl1Kssu51zA1E=; b=vqMAA/NZJezC5qj8m+e/6ygTAh
	g/SlaWemaBMy9fTasaUsqB+mT1THoPS5d+JQMda012PvMWMRfQpmQQIVBOh/42jzxmxRqrzDzxMIH
	HCyHPvKD0IulnKJTV1RbNDaCm9YCdQ4vDzhr7ho3cZRRPZ6NtMYx2cqcMKl2sgeSWSyY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156444-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156444: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e006b2e3be72e502b86bd9e1405417abd87bdfed
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 18:21:13 +0000

flight 156444 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156444/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e006b2e3be72e502b86bd9e1405417abd87bdfed
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156395  2020-11-04 09:00:24 Z    1 days
Testing same since   156444  2020-11-05 16:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9ff9705647..e006b2e3be  e006b2e3be72e502b86bd9e1405417abd87bdfed -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:39:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20010.45597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakAH-0004hv-8m; Thu, 05 Nov 2020 18:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20010.45597; Thu, 05 Nov 2020 18:39:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakAH-0004ho-5K; Thu, 05 Nov 2020 18:39:17 +0000
Received: by outflank-mailman (input) for mailman id 20010;
 Thu, 05 Nov 2020 18:39:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kakAG-0004hj-F0
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:39:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a53a093-cd5b-4429-be99-6b330b47c21f;
 Thu, 05 Nov 2020 18:39:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kakAA-0007Of-FU; Thu, 05 Nov 2020 18:39:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kakAA-0003V7-4b; Thu, 05 Nov 2020 18:39:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kakAA-0004JW-48; Thu, 05 Nov 2020 18:39:10 +0000
X-Inumbo-ID: 9a53a093-cd5b-4429-be99-6b330b47c21f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=giqocoBG916YuXM5HAG4dHvKDuEgd7A0tfoNB6J7Fu4=; b=jgAkwFyjQV4y4AXbWyC6IZHNMX
	D/WoggC2C9M9Dt8GsmRiNeltmYYpsd0a8cR78UoO4uCS5vVTjOjUImZq8XJVsd/rhLkdGy8lCIVEi
	cQ9zBvnuhhqSgya2VcFIJtAKh15glJWaPtemy7gOuLY4RHvupZiiGAqHy74yWeBbsoZk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156404-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156404: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:host-ping-check-xen:fail:regression
    xen-4.14-testing:build-i386:xen-build:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.14-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fc8fab1bb4d3a16914d8e7f6e288e946e68d5a41
X-Osstest-Versions-That:
    xen=5784d1e9424151adfdc836535489bd068c6c0700
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 18:39:10 +0000

flight 156404 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156404/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt     10 host-ping-check-xen      fail REGR. vs. 156394
 build-i386                    6 xen-build                fail REGR. vs. 156394

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156394
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156394
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156394
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156394
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156394
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fc8fab1bb4d3a16914d8e7f6e288e946e68d5a41
baseline version:
 xen                  5784d1e9424151adfdc836535489bd068c6c0700

Last test of basis   156394  2020-11-04 08:37:48 Z    1 days
Testing same since   156404  2020-11-04 22:36:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fc8fab1bb4d3a16914d8e7f6e288e946e68d5a41
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Nov 4 11:02:30 2020 +0100

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
    master date: 2020-10-23 18:03:18 +0200

commit 898864c3736338548bc2f684b7e307326e0dd4a5
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Nov 4 11:01:27 2020 +0100

    pci: cleanup MSI interrupts before removing device from IOMMU
    
    Doing the MSI cleanup after removing the device from the IOMMU leads
    to the following panic on AMD hardware:
    
    Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
    ----[ Xen-4.13.1-10.0.3-d  x86_64  debug=y   Not tainted ]----
    CPU:    3
    RIP:    e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
    [...]
    Xen call trace:
       [<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
       [<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
       [<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
       [<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
       [<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
       [<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
       [<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
       [<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
       [<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
       [<ffff82d08038a432>] F lstar_enter+0x112/0x120
    
    That's because the call to iommu_remove_device on AMD hardware will
    remove the per-device interrupt remapping table, and hence the call to
    pci_cleanup_msi done afterwards will find a null intremap table and
    crash.
    
    Reorder the calls so that MSI interrupts are torn down before removing
    the device from the IOMMU.
    
    Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables")
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 710f62cc826bb8c7ead99f9d6b6b269e39ff3e98
    master date: 2020-10-23 10:13:14 +0200

commit 9f954ae7fb1c7dc3b02e5ccac6978c3a8e86086e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Nov 4 11:01:02 2020 +0100

    build: use if_changed more consistently (and correctly) for prelink*.o
    
    Switch to $(call if_changed,ld) where possible; presumably not doing so
    in e321576f4047 ("xen/build: start using if_changed") right away was an
    oversight, as it did for Arm in (just) one case. It failed to add
    prelink.o to $(targets), though, causing - judging from the observed
    behavior on x86 - undue rebuilds of the final binary (because of
prelink.o getting rebuilt for $(cmd_prelink.o) being empty, in turn
    because of .prelink.o.cmd not getting read) during "make install-xen".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: dd2cfba88c3d0e144ffec07c6b5b86e54a9d98a9
    master date: 2020-09-22 10:19:38 +0200
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:50:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20019.45618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakKk-0006N1-IA; Thu, 05 Nov 2020 18:50:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20019.45618; Thu, 05 Nov 2020 18:50:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakKk-0006Mu-Ey; Thu, 05 Nov 2020 18:50:06 +0000
Received: by outflank-mailman (input) for mailman id 20019;
 Thu, 05 Nov 2020 18:50:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6HEM=EL=linux.ibm.com=stefanb@srs-us1.protection.inumbo.net>)
 id 1kakKj-0006F7-9u
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:50:05 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.156.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 417bf15c-a35f-4a5b-9548-93de63798a7d;
 Thu, 05 Nov 2020 18:50:02 +0000 (UTC)
Received: from pps.filterd (m0187473.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A5IXkOm036973; Thu, 5 Nov 2020 13:50:00 -0500
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34m7re1txr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 05 Nov 2020 13:49:59 -0500
Received: from m0187473.ppops.net (m0187473.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 0A5Ibre0052700;
 Thu, 5 Nov 2020 13:49:58 -0500
Received: from ppma03dal.us.ibm.com (b.bd.3ea9.ip4.static.sl-reverse.com
 [169.62.189.11])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34m7re1txb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 05 Nov 2020 13:49:58 -0500
Received: from pps.filterd (ppma03dal.us.ibm.com [127.0.0.1])
 by ppma03dal.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 0A5IlBhL027678;
 Thu, 5 Nov 2020 18:49:57 GMT
Received: from b03cxnp08027.gho.boulder.ibm.com
 (b03cxnp08027.gho.boulder.ibm.com [9.17.130.19])
 by ppma03dal.us.ibm.com with ESMTP id 34hs33gutu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 05 Nov 2020 18:49:57 +0000
Received: from b03ledav004.gho.boulder.ibm.com
 (b03ledav004.gho.boulder.ibm.com [9.17.130.235])
 by b03cxnp08027.gho.boulder.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 0A5Inosl39780740
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 5 Nov 2020 18:49:50 GMT
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 932CF7805C;
 Thu,  5 Nov 2020 18:49:55 +0000 (GMT)
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 03A1578060;
 Thu,  5 Nov 2020 18:49:53 +0000 (GMT)
Received: from sbct-3.pok.ibm.com (unknown [9.47.158.153])
 by b03ledav004.gho.boulder.ibm.com (Postfix) with ESMTP;
 Thu,  5 Nov 2020 18:49:53 +0000 (GMT)
X-Inumbo-ID: 417bf15c-a35f-4a5b-9548-93de63798a7d
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.156.1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 417bf15c-a35f-4a5b-9548-93de63798a7d;
	Thu, 05 Nov 2020 18:50:02 +0000 (UTC)
Received: from pps.filterd (m0187473.ppops.net [127.0.0.1])
	by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0A5IXkOm036973;
	Thu, 5 Nov 2020 13:50:00 -0500
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=HF+zIy3dMq9WoBj7Ui49l20NKBjwZxB8ohvtpMNY4X4=;
 b=ob78G21it51g6/PxidTte7GuwtCenIhiwe07h9DqVd+XTpxhnW1STqMkw+zJTKkmD2xq
 s1gOdPOFLEWUNp2OrCi6XEpWpdZa27ytwE19AMJ+Ztgl6IWUg6306jaN7PGjaIf9iBNa
 2q+omAwsCWB6aJmJGktBAg4wFh2TNIpWQJZ1qvmJgk6PmreTXd69rWb3HgSFtKKSVjDX
 EIWNe5SSJjX3fiztI2RHe12DG28OiqfO0U5M9bLDlrtwbFez4aSxf6KAG5fNCX9WrvMU
 fXLEvTVjjnoBNg2KpLZEzT/wCtnfcokkD7VTVeb7i07sKu2mChg9PAB7XBN/JXywDngK Kg== 
Subject: Re: [PATCH v2 36/44] qdev: Rename qdev_get_prop_ptr() to
 object_field_prop_ptr()
To: Eduardo Habkost <ehabkost@redhat.com>, qemu-devel@nongnu.org
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Igor Mammedov <imammedo@redhat.com>, Eric Blake <eblake@redhat.com>,
        Markus Armbruster <armbru@redhat.com>,
        =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
        John Snow <jsnow@redhat.com>,
        =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>,
        Stefan Berger <stefanb@linux.vnet.ibm.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Anthony Perard <anthony.perard@citrix.com>,
        Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>,
        Max Reitz <mreitz@redhat.com>, Cornelia Huck <cohuck@redhat.com>,
        Halil Pasic <pasic@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Richard Henderson <rth@twiddle.net>,
        David Hildenbrand <david@redhat.com>, Thomas Huth <thuth@redhat.com>,
        Matthew Rosato <mjrosato@linux.ibm.com>,
        Alex Williamson <alex.williamson@redhat.com>,
        xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
        qemu-s390x@nongnu.org
References: <20201104160021.2342108-1-ehabkost@redhat.com>
 <20201104160021.2342108-37-ehabkost@redhat.com>
From: Stefan Berger <stefanb@linux.ibm.com>
Message-ID: <5d4020f6-2df7-0f61-4060-ac885dab3bab@linux.ibm.com>
Date: Thu, 5 Nov 2020 13:49:53 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201104160021.2342108-37-ehabkost@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-05_11:2020-11-05,2020-11-05 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999 spamscore=0
 malwarescore=0 adultscore=0 lowpriorityscore=0 clxscore=1015 mlxscore=0
 impostorscore=0 priorityscore=1501 phishscore=0 suspectscore=2 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011050121

On 11/4/20 11:00 AM, Eduardo Habkost wrote:
> The function will be moved to common QOM code, as it is not
> specific to TYPE_DEVICE anymore.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>


> ---
> Changes v1 -> v2:
> * Rename to object_field_prop_ptr() instead of object_static_prop_ptr()
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>   include/hw/qdev-properties.h     |  2 +-
>   backends/tpm/tpm_util.c          |  6 ++--
>   hw/block/xen-block.c             |  4 +--
>   hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
>   hw/core/qdev-properties.c        | 60 ++++++++++++++++----------------
>   hw/s390x/css.c                   |  4 +--
>   hw/s390x/s390-pci-bus.c          |  4 +--
>   hw/vfio/pci-quirks.c             |  4 +--
>   8 files changed, 67 insertions(+), 67 deletions(-)
>
> diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
> index 7f8d5fc206..2bec65c8e5 100644
> --- a/include/hw/qdev-properties.h
> +++ b/include/hw/qdev-properties.h
> @@ -223,7 +223,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
>                              const uint8_t *value);
>   void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
>   
> -void *qdev_get_prop_ptr(Object *obj, Property *prop);
> +void *object_field_prop_ptr(Object *obj, Property *prop);
>   
>   void qdev_prop_register_global(GlobalProperty *prop);
>   const GlobalProperty *qdev_find_global_prop(Object *obj,
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index 0b07cf55ea..bb1ab34a75 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -35,7 +35,7 @@
>   static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
> +    TPMBackend **be = object_field_prop_ptr(obj, opaque);
>       char *p;
>   
>       p = g_strdup(*be ? (*be)->id : "");
> @@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
> +    TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
>       char *str;
>   
>       if (!visit_type_str(v, name, &str, errp)) {
> @@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void release_tpm(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
> +    TPMBackend **be = object_field_prop_ptr(obj, prop);
>   
>       if (*be) {
>           tpm_backend_reset(*be);
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index bd1aef63a7..718d886e5c 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
> +    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
>       char *str;
>   
>       switch (vdev->type) {
> @@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
> +    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *end;
>   
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index 96a0bc5109..8781b856d3 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -62,7 +62,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(obj, prop);
> +    void **ptr = object_field_prop_ptr(obj, prop);
>       const char *value;
>       char *p;
>   
> @@ -88,7 +88,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(obj, prop);
> +    void **ptr = object_field_prop_ptr(obj, prop);
>       char *str;
>       BlockBackend *blk;
>       bool blk_created = false;
> @@ -181,7 +181,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
> +    BlockBackend **ptr = object_field_prop_ptr(obj, prop);
>   
>       if (*ptr) {
>           AioContext *ctx = blk_get_aio_context(*ptr);
> @@ -214,7 +214,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
>   static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
> +    CharBackend *be = object_field_prop_ptr(obj, opaque);
>       char *p;
>   
>       p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
> @@ -226,7 +226,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(obj, prop);
> +    CharBackend *be = object_field_prop_ptr(obj, prop);
>       Chardev *s;
>       char *str;
>   
> @@ -262,7 +262,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void release_chr(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(obj, prop);
> +    CharBackend *be = object_field_prop_ptr(obj, prop);
>   
>       qemu_chr_fe_deinit(be, false);
>   }
> @@ -286,7 +286,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
> +    MACAddr *mac = object_field_prop_ptr(obj, prop);
>       char buffer[2 * 6 + 5 + 1];
>       char *p = buffer;
>   
> @@ -301,7 +301,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
> +    MACAddr *mac = object_field_prop_ptr(obj, prop);
>       int i, pos;
>       char *str;
>       const char *p;
> @@ -363,7 +363,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
> +    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
>       char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
>   
>       visit_type_str(v, name, &p, errp);
> @@ -374,7 +374,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
> +    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
>       NetClientState **ncs = peers_ptr->ncs;
>       NetClientState *peers[MAX_QUEUE_NUM];
>       int queues, err = 0, i = 0;
> @@ -436,7 +436,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
> +    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
>       char *p = g_strdup(audio_get_id(card));
>   
>       visit_type_str(v, name, &p, errp);
> @@ -447,7 +447,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
> +    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
>       AudioState *state;
>       int err = 0;
>       char *str;
> @@ -549,7 +549,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>       uint64_t value;
>       Error *local_err = NULL;
>   
> @@ -637,7 +637,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
> +    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
>       char buffer[64];
>       char *p = buffer;
>       int rc;
> @@ -653,7 +653,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
> +    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
>       Error *local_err = NULL;
>       const char *endptr;
>       char *str;
> @@ -715,7 +715,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t value, *ptr = object_field_prop_ptr(obj, prop);
>       unsigned int slot, fn, n;
>       char *str;
>   
> @@ -753,7 +753,7 @@ invalid:
>   static int print_pci_devfn(Object *obj, Property *prop, char *dest,
>                              size_t len)
>   {
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       if (*ptr == -1) {
>           return snprintf(dest, len, "<unset>");
> @@ -777,7 +777,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
> +    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
>       char buffer[] = "ffff:ff:ff.f";
>       char *p = buffer;
>       int rc = 0;
> @@ -803,7 +803,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
> +    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *e;
>       unsigned long val;
> @@ -892,7 +892,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
>       int speed;
>   
>       switch (*p) {
> @@ -920,7 +920,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
>       int speed;
>   
>       if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
> @@ -962,7 +962,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
>       int width;
>   
>       switch (*p) {
> @@ -999,7 +999,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
>       int width;
>   
>       if (!visit_type_enum(v, name, &width, prop->info->enum_table,
> @@ -1050,7 +1050,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
> +    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
>       char buffer[UUID_FMT_LEN + 1];
>       char *p = buffer;
>   
> @@ -1065,7 +1065,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
> +    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
>       char *str;
>   
>       if (!visit_type_str(v, name, &str, errp)) {
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index aeab4ae9b6..9aebd7b8a9 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -50,7 +50,7 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
>       }
>   }
>   
> -void *qdev_get_prop_ptr(Object *obj, Property *prop)
> +void *object_field_prop_ptr(Object *obj, Property *prop)
>   {
>       void *ptr = obj;
>       ptr += prop->offset;
> @@ -96,7 +96,7 @@ void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int *ptr = qdev_get_prop_ptr(obj, prop);
> +    int *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
>   }
> @@ -105,7 +105,7 @@ void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int *ptr = qdev_get_prop_ptr(obj, prop);
> +    int *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
>   }
> @@ -134,7 +134,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
>   
>   static void bit_prop_set(Object *obj, Property *props, bool val)
>   {
> -    uint32_t *p = qdev_get_prop_ptr(obj, props);
> +    uint32_t *p = object_field_prop_ptr(obj, props);
>       uint32_t mask = qdev_get_prop_mask(props);
>       if (val) {
>           *p |= mask;
> @@ -147,7 +147,7 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *p = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *p = object_field_prop_ptr(obj, prop);
>       bool value = (*p & qdev_get_prop_mask(prop)) != 0;
>   
>       visit_type_bool(v, name, &value, errp);
> @@ -188,7 +188,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
>   
>   static void bit64_prop_set(Object *obj, Property *props, bool val)
>   {
> -    uint64_t *p = qdev_get_prop_ptr(obj, props);
> +    uint64_t *p = object_field_prop_ptr(obj, props);
>       uint64_t mask = qdev_get_prop_mask64(props);
>       if (val) {
>           *p |= mask;
> @@ -201,7 +201,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *p = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *p = object_field_prop_ptr(obj, prop);
>       bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
>   
>       visit_type_bool(v, name, &value, errp);
> @@ -233,7 +233,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    bool *ptr = qdev_get_prop_ptr(obj, prop);
> +    bool *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_bool(v, name, ptr, errp);
>   }
> @@ -242,7 +242,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    bool *ptr = qdev_get_prop_ptr(obj, prop);
> +    bool *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_bool(v, name, ptr, errp);
>   }
> @@ -260,7 +260,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -269,7 +269,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -299,7 +299,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint16_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint16(v, name, ptr, errp);
>   }
> @@ -308,7 +308,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint16_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint16(v, name, ptr, errp);
>   }
> @@ -326,7 +326,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -335,7 +335,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -344,7 +344,7 @@ void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_int32(v, name, ptr, errp);
>   }
> @@ -353,7 +353,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_int32(v, name, ptr, errp);
>   }
> @@ -378,7 +378,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint64(v, name, ptr, errp);
>   }
> @@ -387,7 +387,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint64(v, name, ptr, errp);
>   }
> @@ -396,7 +396,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int64_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_int64(v, name, ptr, errp);
>   }
> @@ -405,7 +405,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int64_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_int64(v, name, ptr, errp);
>   }
> @@ -429,14 +429,14 @@ const PropertyInfo qdev_prop_int64 = {
>   static void release_string(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
> +    g_free(*(char **)object_field_prop_ptr(obj, prop));
>   }
>   
>   static void get_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    char **ptr = qdev_get_prop_ptr(obj, prop);
> +    char **ptr = object_field_prop_ptr(obj, prop);
>   
>       if (!*ptr) {
>           char *str = (char *)"";
> @@ -450,7 +450,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    char **ptr = qdev_get_prop_ptr(obj, prop);
> +    char **ptr = object_field_prop_ptr(obj, prop);
>       char *str;
>   
>       if (!visit_type_str(v, name, &str, errp)) {
> @@ -484,7 +484,7 @@ void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>       uint64_t value = *ptr;
>   
>       visit_type_size(v, name, &value, errp);
> @@ -494,7 +494,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
>                          Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>       uint64_t value;
>   
>       if (!visit_type_size(v, name, &value, errp)) {
> @@ -531,7 +531,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>        */
>       Property *prop = opaque;
>       ObjectProperty *op = object_property_find_err(obj, name, &error_abort);
> -    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *alenptr = object_field_prop_ptr(obj, prop);
>       void **arrayptr = (void *)obj + prop->arrayoffset;
>       void *eltptr;
>       const char *arrayname;
> @@ -570,7 +570,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>            * being inside the device struct.
>            */
>           arrayprop->offset = eltptr - (void *)obj;
> -        assert(qdev_get_prop_ptr(obj, arrayprop) == eltptr);
> +        assert(object_field_prop_ptr(obj, arrayprop) == eltptr);
>           object_property_add_field(obj, propname, arrayprop, op->allow_set);
>       }
>   }
> @@ -760,7 +760,7 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_size(v, name, ptr, errp);
>   }
> @@ -769,7 +769,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_size(v, name, ptr, errp);
>   }
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 496e2c5801..fe47751df4 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
> +    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
>       char buffer[] = "xx.x.xxxx";
>       char *p = buffer;
>       int r;
> @@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
> +    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
>       char *str;
>       int num, n1, n2;
>       unsigned int cssid, ssid, devid;
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index 54fac3851d..99b18d56ba 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1323,7 +1323,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -1333,7 +1333,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
>   {
>       S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       if (!visit_type_uint32(v, name, ptr, errp)) {
>           return;
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 802979635c..fc8d63c850 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr = object_field_prop_ptr(obj, prop);
>   
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t value, *ptr = object_field_prop_ptr(obj, prop);
>   
>       if (!visit_type_uint8(v, name, &value, errp)) {
>           return;




From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20049.45633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRN-0006eL-GC; Thu, 05 Nov 2020 18:56:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20049.45633; Thu, 05 Nov 2020 18:56:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRN-0006eE-Cb; Thu, 05 Nov 2020 18:56:57 +0000
Received: by outflank-mailman (input) for mailman id 20049;
 Thu, 05 Nov 2020 18:56:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRM-0006e9-R2
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:56:56 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03f5b427-568c-49fc-a385-3ec145cd49a3;
 Thu, 05 Nov 2020 18:56:55 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id w1so3008867wrm.4
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:56:55 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.56.53
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:56:53 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kakRM-0006e9-R2
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:56:56 +0000
X-Inumbo-ID: 03f5b427-568c-49fc-a385-3ec145cd49a3
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 03f5b427-568c-49fc-a385-3ec145cd49a3;
	Thu, 05 Nov 2020 18:56:55 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id w1so3008867wrm.4
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:56:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hfScFRyc+sSjBIArHw1y0ULwQ0XgOh8QIfTfRbg6KTQ=;
        b=aQ86Z3gCyi9d7H6PiZXHmAxYcTzwQit7blu5nXVyFZkyPZCu7lni92eRg1zWDIUlAm
         U6ZjtFQy2fhzV1wk97ur+WxBA57x0vdYRFiZ9mTg5SzoU+lAK1eiC6IGrDnvjd7XjxQP
         gEfJD6/jJwlkdrZTVE2tes5k8yR8mighXhnQJJydpo75f8YKs9G5kwRrXXfbZKXyeJQC
         jzGFM6I4ZOh8RfZNdxi5MD3fqjf00njBQD16nHGrCNutH0tFl/AglpixV0XT5JQsCsfm
         qvrICzrkS48/wR7HuF7ztX1DjzZxdzSJGgTzActg81ilzJMKU7qznEVy+tCh2Riukzc6
         2+vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=hfScFRyc+sSjBIArHw1y0ULwQ0XgOh8QIfTfRbg6KTQ=;
        b=QcFr6dOK00GMf3F/yX+pCvcjc2GF2RnZpG4H/Z4XKxbK0i2Ix71B69GN66POenJX/d
         SOrZFIqnalESspArn+kA/LwUqwE8DXNHJdMNQFZk+TUh1o5zVNtcp98GEfonURiyvI5B
         Uw8eSyi3y4MYx2FcBtyYrk8UtnAaQ3Cq958PLxVrK8tmrGmj1YDqQuKjhrjRudmvATVg
         rcVxBKF1PGyw8BCtGgwwqfOsHKPxntha3KPLXyOWh6IIBFIyM5jNLpcz5Lek7KsTVjbK
         cUvCxXByDcN6K69JEuFR3Pcu92ZLtdrt0tnMDnPcE15wfrWHIYJ4zzuuc3whqG5S54dU
         a4Zw==
X-Gm-Message-State: AOAM532y6/+a+7s+0Ajm8w0cOP0NGEE0F/Jb5IteydE9IMgTiXAt3cgB
	uid7K7UBJLopYTU8JRc0WxuIlAd8HHg=
X-Google-Smtp-Source: ABdhPJyq3eVqOAyUJxBTKS5sW8I8V0MbjGQRL7nSksSQBVrHpSjp8wo70Pt5cGi3EcSiv0DBChpPWQ==
X-Received: by 2002:a5d:4d0d:: with SMTP id z13mr4427411wrt.23.1604602614536;
        Thu, 05 Nov 2020 10:56:54 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.56.53
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 05 Nov 2020 10:56:53 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH 0/6] Port Linux LL/SC and LSE atomics to Xen
Date: Thu,  5 Nov 2020 18:55:57 +0000
Message-Id: <20201105185603.24149-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

[this is my personal account, opinions expressed are entirely my own]

Hi,

I'm starting this new series thread to discuss how Linux's LL/SC and LSE
atomics helpers may best be ported to Xen, as per the discussion at [1].


Arguments in favour of doing this
=================================

    - Lets SMMUv3 driver switch to using <asm/atomic.h> rather than
      maintaining its own implementation of the helpers.

    - Provides mitigation against XSA-295 [2], which affects both arm32
      and arm64, across all versions of Xen, and may allow a domU to
      maliciously or erroneously DoS the hypervisor.

    - All Armv8-A core implementations since ~2017 implement LSE, so
      there is an argument to be made that support in Xen is long
      overdue. This is compounded by LSE atomics being more performant
      than LL/SC atomics in most real-world workloads, especially at
      high core counts.

    - We may be able to get improved performance when using LL/SC too,
      as Linux provides helpers with relaxed ordering requirements that
      are currently not available in Xen. To benefit from these we
      would need to go back through existing code and identify where
      the more relaxed versions can safely be used.

    - Anything else?


Arguments against doing this
============================

    - Limited testing infrastructure in place to ensure use of new
      atomics helpers does not introduce new bugs and regressions. This
      is a particularly strong argument given how difficult it can be to
      identify and debug malfunctioning atomics. The old adage applies,
      "If it ain't broke, don't fix it."

    - Anything else?


Disclaimers
===========

    - This is a very rough first-pass effort intended to spur the
      discussions along.

    - Only build-tested on arm64 and arm32, *not* run-tested. I did
      also build for x86_64 just to make sure I didn't inadvertently
      break that.

    - This version only tackles atomics and cmpxchg; I've not yet had a
      chance to look at locks so those are still using LL/SC.


Series contents
===============

    - Patch #1 allows for detecting architectural features advertised
      in ID registers other than ID_AA64PFR{0,1}_EL1 and ID_PFR{0,1}.

    - Patch #2 uses the new infrastructure above to detect the presence
      of Armv8.1-LSE atomic instructions, as advertised by ID register
      ID_AA64ISAR0_EL1.

    - Patch #3 introduces the ARM64_HAS_LSE_ATOMICS hwcap, as well as
      the new Kconfig CONFIG_ARM64_LSE_ATOMICS, which enables runtime
      detection and setting of this hwcap.

    - Patch #4 pulls in the Linux LL/SC and LSE atomics helpers for
      arm64, using the new ARM64_HAS_LSE_ATOMICS hwcap to patch itself
      at runtime to use LSE where available and otherwise falling back
      on LL/SC.

        !! NB: Patch #4 breaks arm32 builds until Patch #5 ports
           Linux's 32-bit LL/SC helpers. I split the patches up
           to make them easier to review and discuss.

    - Patch #5 pulls in the Linux LL/SC atomics helpers for arm32.

    - Finally, Patch #6 removes Xen's dependency on gcc's built-in
      __sync_fetch_and_add() helper, instead using the ported Linux
      atomic_fetch_add() helper.


Any comments, guidance, and discussion on how to improve this first-pass
approach to get LSE atomics support merged into Xen would be greatly
appreciated.

Thanks!
Ash.

[1] https://lore.kernel.org/xen-devel/13baac40-8b10-0def-4e44-0d8f655fcaf1@xen.org/
[2] https://xenbits.xen.org/xsa/advisory-295.txt

Ash Wilding (6):
  xen/arm: Support detection of CPU features in other ID registers
  xen/arm: Add detection of Armv8.1-LSE atomic instructions
  xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
  xen/arm64: Port Linux LL/SC and LSE atomics helpers to Xen
  xen/arm32: Port Linux LL/SC atomics helpers to Xen
  xen/arm: Remove dependency on gcc builtin __sync_fetch_and_add()

 xen/arch/arm/Kconfig                     |  11 +
 xen/arch/arm/Makefile                    |   1 +
 xen/arch/arm/lse.c                       |  13 +
 xen/arch/arm/setup.c                     |  13 +-
 xen/include/asm-arm/arm32/atomic.h       | 261 +++++++-----
 xen/include/asm-arm/arm32/cmpxchg.h      | 403 +++++++++++-------
 xen/include/asm-arm/arm32/system.h       |   2 +-
 xen/include/asm-arm/arm64/atomic.h       | 242 +++++------
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 236 +++++++++++
 xen/include/asm-arm/arm64/atomic_lse.h   | 251 +++++++++++
 xen/include/asm-arm/arm64/cmpxchg.h      | 505 ++++++++++++++++-------
 xen/include/asm-arm/arm64/lse.h          |  53 +++
 xen/include/asm-arm/arm64/system.h       |   2 +-
 xen/include/asm-arm/atomic.h             |  15 +-
 xen/include/asm-arm/cpufeature.h         |  57 +--
 xen/include/asm-arm/system.h             |  10 +-
 xen/include/xen/compiler.h               |   3 +
 17 files changed, 1506 insertions(+), 572 deletions(-)
 create mode 100644 xen/arch/arm/lse.c
 create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h
 create mode 100644 xen/include/asm-arm/arm64/lse.h

-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20050.45645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRX-0006hF-PL; Thu, 05 Nov 2020 18:57:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20050.45645; Thu, 05 Nov 2020 18:57:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRX-0006h7-LQ; Thu, 05 Nov 2020 18:57:07 +0000
Received: by outflank-mailman (input) for mailman id 20050;
 Thu, 05 Nov 2020 18:57:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRW-0006gr-PL
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:06 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3df7f1a5-ac49-40de-ac32-b14f6008216d;
 Thu, 05 Nov 2020 18:57:05 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id w1so3009385wrm.4
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:05 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.03
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:57:03 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kakRW-0006gr-PL
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:06 +0000
X-Inumbo-ID: 3df7f1a5-ac49-40de-ac32-b14f6008216d
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3df7f1a5-ac49-40de-ac32-b14f6008216d;
	Thu, 05 Nov 2020 18:57:05 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id w1so3009385wrm.4
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=JXV2grd6INUy3gkR3e7DtGC7TFJVb0kBB00hdfAKmbI=;
        b=JQMuCwVvrfUlQSiYlywTVwmSgq7MvlOrDxhoITNqLZ0yUQu7Uk+BoyIxaXj6j56MJA
         WXmv04DJuD9lMEalI2Bv4LQUKFmdIGENeZ0AdGbRSS6bmQ4qaoCjZNkOdHyM351lGzSC
         1pOBENorcNwxeBsZO6L/xceFIwgu8k3psV+yn1ipXOJzTrj9ZuGYT96SY9tX2z3SEp5d
         EZRtBqRjSoHA54NUV6URJmI/3SQR2kWouCeeP1M5ZA0HF5ZeGXrK4+PvPeKnn9GNS819
         5GnhlM/jBhJ6xuWqN7FwJmAbOmvHVY46cQZN8e4ymXQ0Hd9lnrkzWcpgtCnUQBlmETiL
         Apng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=JXV2grd6INUy3gkR3e7DtGC7TFJVb0kBB00hdfAKmbI=;
        b=iRXBej+Mosg57Hwvp1Vv/di1f9VVJu6c4PG86GFILjd9DPGYK5wTEgQ6XW+MpV/rxi
         nNPJxosV7LapdWTL12/S3jujQas/AEtf5It7Obj1LTSdZ98VKQ5BQVdng/wTUgmnFS9c
         UVkptaQLC4oj+/JfaCOZFUEyyi2efuYqBCA1s834NhHJ2DAmFfQgkz25G2u5xIm8vin7
         5S+nJV3CwqWOL47LEaTdp0Eu0gjbfbdcGMoOQQ9SIoPJ73FkjbdYOgSoV4CCrmdDZCjR
         LQaNUEHD/7NVAwTyLJKTsoPW8dyVSGFKFPgm7on9ISoT5vi/LH6x/RwZPfmZ1xvUZfAB
         b+dQ==
X-Gm-Message-State: AOAM531Cb4GImtruvNIAoz20C9LQ8Wc+FmDQKKL6jPkeG81/Rgii03cY
	uDaTXXgVTwpbHtYjEK+abPT0UwgFlx0=
X-Google-Smtp-Source: ABdhPJyCljWIKg/99jagCmcOIk9a+ZtYzbTERdVozWW3gWffCC+jUgvSlr1Z2bFuN9Tx+wIr+rHbgQ==
X-Received: by 2002:adf:ed4c:: with SMTP id u12mr4564286wro.63.1604602624600;
        Thu, 05 Nov 2020 10:57:04 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.03
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 05 Nov 2020 10:57:03 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 1/6] xen/arm: Support detection of CPU features in other ID registers
Date: Thu,  5 Nov 2020 18:55:58 +0000
Message-Id: <20201105185603.24149-2-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current Arm boot_cpu_feature64() and boot_cpu_feature32() macros
are hardcoded to only detect features in ID_AA64PFR{0,1}_EL1 and
ID_PFR{0,1} respectively.

This patch replaces these macros with a new macro, boot_cpu_feature(),
which takes an explicit ID register name as an argument.

While we're here, cull cpu_feature64() and cpu_feature32() as they
have no callers (we only ever use the boot CPU features), and update
the printk() messages in setup.c to use the new macro.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/setup.c             |  8 +++---
 xen/include/asm-arm/cpufeature.h | 44 +++++++++++++++-----------------
 2 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a..5121f06fc5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -134,16 +134,16 @@ static void __init processor_id(void)
            cpu_has_gicv3 ? " GICv3-SysReg" : "");
 
     /* Warn user if we find unknown floating-point features */
-    if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
+    if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown Floating-point ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(fp));
+               boot_cpu_feature(pfr64, fp));
 
     /* Warn user if we find unknown AdvancedSIMD features */
-    if ( cpu_has_simd && (boot_cpu_feature64(simd) >= 2) )
+    if ( cpu_has_simd && (boot_cpu_feature(pfr64, simd) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown AdvancedSIMD ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(simd));
+               boot_cpu_feature(pfr64, simd));
 
     printk("  Debug Features: %016"PRIx64" %016"PRIx64"\n",
            boot_cpu_data.dbg64.bits[0], boot_cpu_data.dbg64.bits[1]);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 10878ead8a..f9281ea343 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -1,39 +1,35 @@
 #ifndef __ASM_ARM_CPUFEATURE_H
 #define __ASM_ARM_CPUFEATURE_H
 
+#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)
+
 #ifdef CONFIG_ARM_64
-#define cpu_feature64(c, feat)         ((c)->pfr64.feat)
-#define boot_cpu_feature64(feat)       (boot_cpu_data.pfr64.feat)
-
-#define cpu_has_el0_32    (boot_cpu_feature64(el0) == 2)
-#define cpu_has_el0_64    (boot_cpu_feature64(el0) >= 1)
-#define cpu_has_el1_32    (boot_cpu_feature64(el1) == 2)
-#define cpu_has_el1_64    (boot_cpu_feature64(el1) >= 1)
-#define cpu_has_el2_32    (boot_cpu_feature64(el2) == 2)
-#define cpu_has_el2_64    (boot_cpu_feature64(el2) >= 1)
-#define cpu_has_el3_32    (boot_cpu_feature64(el3) == 2)
-#define cpu_has_el3_64    (boot_cpu_feature64(el3) >= 1)
-#define cpu_has_fp        (boot_cpu_feature64(fp) < 8)
-#define cpu_has_simd      (boot_cpu_feature64(simd) < 8)
-#define cpu_has_gicv3     (boot_cpu_feature64(gic) == 1)
+#define cpu_has_el0_32          (boot_cpu_feature(pfr64, el0) == 2)
+#define cpu_has_el0_64          (boot_cpu_feature(pfr64, el0) >= 1)
+#define cpu_has_el1_32          (boot_cpu_feature(pfr64, el1) == 2)
+#define cpu_has_el1_64          (boot_cpu_feature(pfr64, el1) >= 1)
+#define cpu_has_el2_32          (boot_cpu_feature(pfr64, el2) == 2)
+#define cpu_has_el2_64          (boot_cpu_feature(pfr64, el2) >= 1)
+#define cpu_has_el3_32          (boot_cpu_feature(pfr64, el3) == 2)
+#define cpu_has_el3_64          (boot_cpu_feature(pfr64, el3) >= 1)
+#define cpu_has_fp              (boot_cpu_feature(pfr64, fp) < 8)
+#define cpu_has_simd            (boot_cpu_feature(pfr64, simd) < 8)
+#define cpu_has_gicv3           (boot_cpu_feature(pfr64, gic) == 1)
 #endif
 
-#define cpu_feature32(c, feat)         ((c)->pfr32.feat)
-#define boot_cpu_feature32(feat)       (boot_cpu_data.pfr32.feat)
-
-#define cpu_has_arm       (boot_cpu_feature32(arm) == 1)
-#define cpu_has_thumb     (boot_cpu_feature32(thumb) >= 1)
-#define cpu_has_thumb2    (boot_cpu_feature32(thumb) >= 3)
-#define cpu_has_jazelle   (boot_cpu_feature32(jazelle) > 0)
-#define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
+#define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
+#define cpu_has_thumb     (boot_cpu_feature(pfr32, thumb) >= 1)
+#define cpu_has_thumb2    (boot_cpu_feature(pfr32, thumb) >= 3)
+#define cpu_has_jazelle   (boot_cpu_feature(pfr32, jazelle) > 0)
+#define cpu_has_thumbee   (boot_cpu_feature(pfr32, thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
 #ifdef CONFIG_ARM_32
-#define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
+#define cpu_has_gentimer  (boot_cpu_feature(pfr32, gentimer) == 1)
 #else
 #define cpu_has_gentimer  (1)
 #endif
-#define cpu_has_security  (boot_cpu_feature32(security) > 0)
+#define cpu_has_security  (boot_cpu_feature(pfr32, security) > 0)
 
 #define ARM64_WORKAROUND_CLEAN_CACHE    0
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE    1
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20051.45657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRZ-0006jG-3J; Thu, 05 Nov 2020 18:57:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20051.45657; Thu, 05 Nov 2020 18:57:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRY-0006j6-Tv; Thu, 05 Nov 2020 18:57:08 +0000
Received: by outflank-mailman (input) for mailman id 20051;
 Thu, 05 Nov 2020 18:57:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRX-0006gr-Hb
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:07 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd5a5f0a-c5e6-447d-9134-36238ffbd79f;
 Thu, 05 Nov 2020 18:57:06 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id b8so3013984wrn.0
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:06 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.04
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:57:04 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kakRX-0006gr-Hb
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:07 +0000
X-Inumbo-ID: cd5a5f0a-c5e6-447d-9134-36238ffbd79f
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cd5a5f0a-c5e6-447d-9134-36238ffbd79f;
	Thu, 05 Nov 2020 18:57:06 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id b8so3013984wrn.0
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=1cxZBd9UtbAwSXXwBERB1exoaCA+r5xNDHyZ9ab9j0k=;
        b=dn2Sdwt7lDaVlJ36tXFutn/QDnyBoZUaQBm3vtVJR3qQIR4BcilQE16TK0PqCNRMX5
         x+nO8UUCrLJ3hZUYmFe7hUunJEx5yyuPk2L4Hs887M5FTxQNQ9W0h9gL0w3DKqgNGE8l
         qbi1dU0GQocRhOABQ1wg7Yh3azozTFGBhKbM1n7Rq52zQRBl9IaDukFFZwCmAyL0FLci
         ZZAJsJSPYcvpj9s2ON5W/JgSmFey8rKcwthoVAHzB5U8XJtcWhCrgAN5ePYH/jPcinwi
         P5pq13bukcF6y+IIPY5tWV+RAcpch3a+dMx0wwCNBomVmvc0yyOCSZmD4MhKguXPkomS
         h9vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=1cxZBd9UtbAwSXXwBERB1exoaCA+r5xNDHyZ9ab9j0k=;
        b=fhdLF+qtSki9TQiouQtC3LPNjksMmqbVjNLW0F1Uc0cdP0AJY8tKHUA0QpsnEpLmWg
         ewbxZyJ6dBci+Yua58fIwuuSY9gHITgTxiG9dA3Gls5Qm7MRNkBK6AUjf7Z5E7KEGzZu
         NX3U2A5PCHk+XuS3mlihNqfv6gE4GwwQn5Kc395l3GoG1HRVSCgKEJo2jqdC1EEjFBB7
         v8ZViLLRDSoi+142FFYX5BR9W2MPJM0J+eRKYGXtuobeE2OD68pWNMhymkdm7NQgwJMc
         woqhZ763zGy8nehBVv3/Z+ToDYudWuH0YFAuX3wtbKEZFjpxRESweqIhabCfgRHFcFBi
         jEVA==
X-Gm-Message-State: AOAM530DgMKclVLAIMgZKoh1XgHVCXKVNp6+N3CWTnNwX3jua8lrqQ1C
	KrGbhniPFizzSw8VE9pIqtyWbvZOooE=
X-Google-Smtp-Source: ABdhPJyaKSWUq4UC8sHcV/FCKE6XLTUJDppDVYkUNwm0rWqZzMuRQz8ZP2rFjmFOFazXw8vjLY2WAw==
X-Received: by 2002:a5d:474f:: with SMTP id o15mr4446221wrs.377.1604602625681;
        Thu, 05 Nov 2020 10:57:05 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.04
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 05 Nov 2020 10:57:04 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 2/6] xen/arm: Add detection of Armv8.1-LSE atomic instructions
Date: Thu,  5 Nov 2020 18:55:59 +0000
Message-Id: <20201105185603.24149-3-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use the new infrastructure for detecting CPU features in other ID
registers to detect the presence of Armv8.1-LSE atomic instructions,
as reported by ID_AA64ISAR0_EL1.Atomic.

While we're here, print detection of these instructions in setup.c's
processor_id().

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/setup.c             |  5 +++--
 xen/include/asm-arm/cpufeature.h | 10 +++++++++-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5121f06fc5..138e1957c5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -128,10 +128,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_lse_atomics ? " LSE-Atomics" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index f9281ea343..2366926e82 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -15,6 +15,7 @@
 #define cpu_has_fp              (boot_cpu_feature(pfr64, fp) < 8)
 #define cpu_has_simd            (boot_cpu_feature(pfr64, simd) < 8)
 #define cpu_has_gicv3           (boot_cpu_feature(pfr64, gic) == 1)
+#define cpu_has_lse_atomics     (boot_cpu_feature(isa64, atomic) == 2)
 #endif
 
 #define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
@@ -187,8 +188,15 @@ struct cpuinfo_arm {
         };
     } mm64;
 
-    struct {
+    union {
         uint64_t bits[2];
+        struct {
+            unsigned long __res0 : 20;
+            unsigned long atomic : 4;
+            unsigned long __res1 : 40;
+
+            unsigned long __res2 : 64;
+        };
     } isa64;
 
 #endif
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20053.45669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRd-0006nu-At; Thu, 05 Nov 2020 18:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20053.45669; Thu, 05 Nov 2020 18:57:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRd-0006nl-7d; Thu, 05 Nov 2020 18:57:13 +0000
Received: by outflank-mailman (input) for mailman id 20053;
 Thu, 05 Nov 2020 18:57:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRb-0006gr-O1
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:11 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f58eff1-99bd-4287-a10e-6e50cea65f9f;
 Thu, 05 Nov 2020 18:57:07 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id 33so2992898wrl.7
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:07 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:57:06 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kakRb-0006gr-O1
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:11 +0000
X-Inumbo-ID: 9f58eff1-99bd-4287-a10e-6e50cea65f9f
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9f58eff1-99bd-4287-a10e-6e50cea65f9f;
	Thu, 05 Nov 2020 18:57:07 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id 33so2992898wrl.7
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=c0zMNTYfmc28SWK0VrVhI6pcULa51GufXevQj+g5qp8=;
        b=Lm6c8SGbxdYDpNzRTRwm/sWoLyrOkZnr2F9fvPCjnOOgtDeXdXv1HT0Jvb0dxwBJuK
         FED+Jq9PlHfWxAG3CpMFvwil5wK9S7T1qXTZA8q5DYx6uCL8aHJ+3EfS0nqaovekyX/u
         jbN7fRn2sEhfIAT2Y9uFgUbNIeenFCc6tS7HYnK/JgEFxrdJNapdJjT5nR2TeQzleE0C
         6X4tzQ3i/HR19NjsouoJppPYeT1mq2KoGXZ/LLLVvO+wVDghQXeRs8DtFUx6wUN/QjYF
         CpEVcuPrRp05kVldRCbKd9o7zW4zI1J7r5bgnwHXvVstwyVbPMWNuv1AVDEV3NOp6l1P
         JzqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=c0zMNTYfmc28SWK0VrVhI6pcULa51GufXevQj+g5qp8=;
        b=PBeDbSSdFlmgXiO2I7wJD+kLKMZJCs7iryllgUdRhQEn7IfCrlAanaySPE6pWs88M1
         JWceC31AeUUmzWcdW2YdGld94Bp9GgPou7vl7NqdnhKSQYOF7sfj9ou7EwkrpMIJlnWN
         h2+o7DJdgV8Nbv5aV9VZgoJLO1WRPrW1adgGt8F/RVESqzc61bpu+tUiLL8/Zx4CL3f6
         5bJzKsCJQHW/qccmksk6pSS4IyJnRyTxwo0tcNP7GsWSGNgqnWGMn9LqkRyPGXW7N+dv
         b9ute9Rbw+DM6HyxBw25HzB9+wJfLTGDrtrQ/z1Evq0VsWG1V0FAIy76h271pAEiYCAY
         BIXQ==
X-Gm-Message-State: AOAM533HS88wnfwaWVf5qUpHVLcOOTEi864rrQgPe2gvubVHDfq5FRFf
	rBP8ZL1lF1X3kXvwUgdUvCtROPjpt9s=
X-Google-Smtp-Source: ABdhPJxD3UGgnbHrtL2aQwE7zcXO2D+ilM1nPJwwd/BSpSb7pACKUt2+fZo+z3sKTTEBtTttsyQjzw==
X-Received: by 2002:a5d:6944:: with SMTP id r4mr4667895wrw.151.1604602626876;
        Thu, 05 Nov 2020 10:57:06 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.05
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 05 Nov 2020 10:57:06 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 3/6] xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
Date: Thu,  5 Nov 2020 18:56:00 +0000
Message-Id: <20201105185603.24149-4-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch introduces the ARM64_HAS_LSE_ATOMICS hwcap.

While doing this, CONFIG_ARM64_LSE_ATOMICS is added to control whether
the hwcap is actually detected and set at runtime. Without this Kconfig
option set, we always fall back on LL/SC atomics using Armv8.0 exclusive
accesses.

Note this patch does not actually add the ALTERNATIVE() switching based
on the hwcap being detected and set; that comes later in the series.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/Kconfig             | 11 +++++++++++
 xen/arch/arm/Makefile            |  1 +
 xen/arch/arm/lse.c               | 13 +++++++++++++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 4 files changed, 27 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/lse.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..febc41e492 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -78,6 +78,17 @@ config SBSA_VUART_CONSOLE
 	  Allows a guest to use SBSA Generic UART as a console. The
 	  SBSA Generic UART implements a subset of ARM PL011 UART.
 
+config ARM64_LSE_ATOMICS
+	bool "Armv8.1-LSE Atomics"
+	depends on ARM_64 && HAS_ALTERNATIVE
+	default y
+	---help---
+	When set, dynamically patch Xen at runtime to use Armv8.1-LSE
+	atomics when supported by the system.
+
+	When unset, or when Armv8.1-LSE atomics are not supported by the
+	system, fall back on LL/SC atomics using Armv8.0 exclusive accesses.
+
 config ARM_SSBD
 	bool "Speculative Store Bypass Disable" if EXPERT
 	depends on HAS_ALTERNATIVE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bb..cadd0ad253 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -63,6 +63,7 @@ obj-y += vsmc.o
 obj-y += vpsci.o
 obj-y += vuart.o
 extra-y += $(TARGET_SUBARCH)/head.o
+obj-$(CONFIG_ARM64_LSE_ATOMICS) += lse.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/lse.c b/xen/arch/arm/lse.c
new file mode 100644
index 0000000000..8274dac671
--- /dev/null
+++ b/xen/arch/arm/lse.c
@@ -0,0 +1,13 @@
+
+#include <asm/cpufeature.h>
+#include <xen/init.h>
+
+static int __init update_lse_caps(void)
+{
+    if ( cpu_has_lse_atomics )
+        cpus_set_cap(ARM64_HAS_LSE_ATOMICS);
+
+    return 0;
+}
+
+__initcall(update_lse_caps);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 2366926e82..48c172ee29 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -42,8 +42,9 @@
 #define ARM_SSBD 7
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
+#define ARM64_HAS_LSE_ATOMICS 10
 
-#define ARM_NCAPS           10
+#define ARM_NCAPS           11
 
 #ifndef __ASSEMBLY__
 
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20055.45681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRi-0006uB-MK; Thu, 05 Nov 2020 18:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20055.45681; Thu, 05 Nov 2020 18:57:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRi-0006u1-HH; Thu, 05 Nov 2020 18:57:18 +0000
Received: by outflank-mailman (input) for mailman id 20055;
 Thu, 05 Nov 2020 18:57:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRg-0006gr-O6
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:16 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d03e226e-0ae6-4cb5-b00d-74f4f27c7b7b;
 Thu, 05 Nov 2020 18:57:10 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id w14so2987085wrs.9
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:10 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.08
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:57:08 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kakRg-0006gr-O6
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:16 +0000
X-Inumbo-ID: d03e226e-0ae6-4cb5-b00d-74f4f27c7b7b
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d03e226e-0ae6-4cb5-b00d-74f4f27c7b7b;
	Thu, 05 Nov 2020 18:57:10 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id w14so2987085wrs.9
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=GnhkvWFpWDOq5MdCwowKxEgDQKGXysmyNosoeceNYd4=;
        b=oFwB6g/WwFFvWGjiqHwJln484BrGqoXtl9CsN/D367voX5UFdMBC8xgWVFybdWowDl
         Ltm++Yke4FSVdXQGMo2yGSfzk96atMhyNG1PVILi90JE9Jl+cScm8LVAil6bDoro5n1Q
         hcZzVQRwJ0WFgfy4p9wBdHzHByr2VoA+WhVcxvPnT4e8Q2RvX3R7RHATI8Gbd4nbJZee
         ZE1gicQorTqjPJiy9Fcr+TqB0xfuJdwvJW3o4mxuX2lDMUexibYf7yeKIdeMi/0LJSLY
         W5WfuSxLNRcTiZ06QSSFuR3bJq+7GlWgeCooYWvFxUPzTcB4KAkUHW0YIIStAAMXXqpX
         v30w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=GnhkvWFpWDOq5MdCwowKxEgDQKGXysmyNosoeceNYd4=;
        b=YB+ce3UuciavU/kodh3gauS7Vj9dlB76CzXFefXap42+miM1cXuWrYpMYupDd5Jc8d
         BJvVVpxGBuKEJoKEr3jvRr/iyWcMqbuHy7hTFwIIQ1D6v73R2tWb2pP29Le6j+s6uFLl
         yFNmr1C9+JiI1BIe4vK38RlouX5SNjBo6MNUIpgh2MCYFBpCdo+BH5NxKOBDrGN+3Ak9
         B+URZdJV6/rXcFK9D7EEjuZ8q683+1pGr7BqYFF/abxIJgDa5daxXyVffcDDNf5VkFBF
         k0Kc/uqggvcXA14FnNXavmR0CZyQHDzblQNnj7qFCGJI12ETW0IvBTgR6NGCmAxXSkr/
         kngg==
X-Gm-Message-State: AOAM532TiraOvNim2PEGFWbCrB+v40JBhu+56ijDvuYioX9+/iB4SPbv
	hv6Zmky27FgBwLKCnRLPICcQ//nnzM4=
X-Google-Smtp-Source: ABdhPJzFU2zN+NquNgaFKhweu17Rce7BEBgHZuLvIFeAbYvdnCHCQ4tb5zRoi2NGO0VlGup7gAYNqw==
X-Received: by 2002:adf:e9c6:: with SMTP id l6mr4600491wrn.257.1604602629112;
        Thu, 05 Nov 2020 10:57:09 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.08
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 05 Nov 2020 10:57:08 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 5/6] xen/arm32: Port Linux LL/SC atomics helpers to Xen
Date: Thu,  5 Nov 2020 18:56:02 +0000
Message-Id: <20201105185603.24149-6-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch ports Linux's arm32 LL/SC atomics helpers to Xen.

The opening comment of each header file details the changes made to
that file while porting it to Xen.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h  | 261 ++++++++++--------
 xen/include/asm-arm/arm32/cmpxchg.h | 403 ++++++++++++++++++----------
 xen/include/asm-arm/arm32/system.h  |   2 +-
 3 files changed, 413 insertions(+), 253 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index 2832a72792..544a4ba492 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -1,124 +1,118 @@
 /*
- *  arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- *  Copyright (C) 1996 Russell King.
- *  Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Summary of changes:
+ * 		- Drop redundant includes and redirect others to Xen equivalents
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Drop atomic64_t helper declarations
+ * 		- Drop pre-Armv6 support
+ * 		- Redirect READ_ONCE/WRITE_ONCE to __* equivalents in compiler.h
+ * 		- Add explicit atomic_add_return() and atomic_sub_return() as
+ *		  Linux doesn't define these for arm32. Here we just sandwich
+ *		  the atomic_<op>_return_relaxed() calls with smp_mb()s.
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ARCH_ARM_ARM32_ATOMIC__
-#define __ARCH_ARM_ARM32_ATOMIC__
+#ifndef __ASM_ARM_ARM32_ATOMIC_H
+#define __ASM_ARM_ARM32_ATOMIC_H
+
+#include <xen/compiler.h>
+#include <xen/prefetch.h>
+#include <xen/types.h>
+#include "system.h"
+#include "cmpxchg.h"
+
+/*
+ * On ARM, ordinary assignment (str instruction) doesn't clear the local
+ * strex/ldrex monitor on some implementations. The reason we can use it for
+ * atomic_set() is the clrex or dummy strex done on every exception return.
+ */
+#define atomic_read(v)	__READ_ONCE((v)->counter)
+#define atomic_set(v,i)	__WRITE_ONCE(((v)->counter), (i))
 
 /*
  * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
  * store exclusive to ensure that these are atomic.  We may loop
  * to ensure that the update happens.
  */
-static inline void atomic_add(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
 
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_add\n"
-"1:	ldrex	%0, [%3]\n"
-"	add	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
+#define ATOMIC_OP(op, c_op, asm_op)					\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	prefetchw(&v->counter);						\
+	__asm__ __volatile__("@ atomic_" #op "\n"			\
+"1:	ldrex	%0, [%3]\n"						\
+"	" #asm_op "	%0, %0, %4\n"					\
+"	strex	%1, %0, [%3]\n"						\
+"	teq	%1, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
+	: "r" (&v->counter), "Ir" (i)					\
+	: "cc");							\
+}									\
+
+#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	prefetchw(&v->counter);						\
+									\
+	__asm__ __volatile__("@ atomic_" #op "_return\n"		\
+"1:	ldrex	%0, [%3]\n"						\
+"	" #asm_op "	%0, %0, %4\n"					\
+"	strex	%1, %0, [%3]\n"						\
+"	teq	%1, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
+	: "r" (&v->counter), "Ir" (i)					\
+	: "cc");							\
+									\
+	return result;							\
 }
 
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic_add_return\n"
-"1:	ldrex	%0, [%3]\n"
-"	add	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-
-	smp_mb();
-
-	return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_sub\n"
-"1:	ldrex	%0, [%3]\n"
-"	sub	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
+#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
+static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
+{									\
+	unsigned long tmp;						\
+	int result, val;						\
+									\
+	prefetchw(&v->counter);						\
+									\
+	__asm__ __volatile__("@ atomic_fetch_" #op "\n"			\
+"1:	ldrex	%0, [%4]\n"						\
+"	" #asm_op "	%1, %0, %5\n"					\
+"	strex	%2, %1, [%4]\n"						\
+"	teq	%2, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter)	\
+	: "r" (&v->counter), "Ir" (i)					\
+	: "cc");							\
+									\
+	return result;							\
 }
 
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
+#define atomic_add_return_relaxed	atomic_add_return_relaxed
+#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
+#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
+#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
 
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic_sub_return\n"
-"1:	ldrex	%0, [%3]\n"
-"	sub	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-
-	smp_mb();
-
-	return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_and\n"
-"1:	ldrex	%0, [%3]\n"
-"	and	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (m)
-	: "cc");
-}
+#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
+#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
+#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
+#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
 
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
+static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
 {
 	int oldval;
 	unsigned long res;
 
-	smp_mb();
 	prefetchw(&ptr->counter);
 
 	do {
@@ -132,12 +126,11 @@ static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 		    : "cc");
 	} while (res);
 
-	smp_mb();
-
 	return oldval;
 }
+#define atomic_cmpxchg_relaxed		atomic_cmpxchg_relaxed
 
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int oldval, newval;
 	unsigned long tmp;
@@ -163,13 +156,61 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 
 	return oldval;
 }
+#define atomic_fetch_add_unless		atomic_fetch_add_unless
+
+#define ATOMIC_OPS(op, c_op, asm_op)					\
+	ATOMIC_OP(op, c_op, asm_op)					\
+	ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+	ATOMIC_FETCH_OP(op, c_op, asm_op)
+
+ATOMIC_OPS(add, +=, add)
+ATOMIC_OPS(sub, -=, sub)
+
+#define atomic_andnot atomic_andnot
+
+#undef ATOMIC_OPS
+#define ATOMIC_OPS(op, c_op, asm_op)					\
+	ATOMIC_OP(op, c_op, asm_op)					\
+	ATOMIC_FETCH_OP(op, c_op, asm_op)
+
+ATOMIC_OPS(and, &=, and)
+ATOMIC_OPS(andnot, &= ~, bic)
+ATOMIC_OPS(or,  |=, orr)
+ATOMIC_OPS(xor, ^=, eor)
+
+#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 
-#endif /* __ARCH_ARM_ARM32_ATOMIC__ */
 /*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
+ * Linux doesn't define strict atomic_add_return() or atomic_sub_return()
+ * for arch/arm, so define them manually for Xen.
  */
+
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+	int ret;
+
+	smp_mb();
+	ret = atomic_add_return_relaxed(i, v);
+	smp_mb();
+
+	return ret;
+}
+
+static inline int atomic_sub_return(int i, atomic_t *v)
+{
+	int ret;
+
+	smp_mb();
+	ret = atomic_sub_return_relaxed(i, v);
+	smp_mb();
+
+	return ret;
+}
+
+
+#endif /* __ASM_ARM_ARM32_ATOMIC_H */
diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
index b0bd1d8b68..7aa8d93fc2 100644
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -1,16 +1,36 @@
-#ifndef __ASM_ARM32_CMPXCHG_H
-#define __ASM_ARM32_CMPXCHG_H
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Drop redundant includes and redirect others to Xen equivalents
+ * 		- Assume running on Armv7 so drop support for <= Armv6, and drop
+ * 		  workarounds for StrongARM "swp" instruction errata
+ * 		- Drop local() variants (no callers in Xen)
+ * 		- Add strict versions of xchg(), cmpxchg(), and cmpxchg64() as
+ * 		  Linux does not provide these
+ * 		- Keep the compiler happy by updating __cmpxchg64() ptr arg to
+ * 		  be volatile and make the call to prefetchw() correctly cast
+ * 		  ptr to (const volatile *)
+ * 		- Pull in original Xen arm32 cmpxchg.h definitions of
+ * 		  cmpxchg_timeout*() and cmpxchg64_timeout*() as these are not
+ * 		  provided by Linux and are required for Xen's guest atomics
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+#ifndef __ASM_ARM_ARM32_CMPXCHG_H
+#define __ASM_ARM_ARM32_CMPXCHG_H
 
 #include <xen/prefetch.h>
+#include <xen/types.h>
 
-extern void __bad_xchg(volatile void *, int);
+extern void __bad_cmpxchg(volatile void *ptr, int size);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 {
 	unsigned long ret;
 	unsigned int tmp;
 
-	smp_mb();
 	prefetchw((const void *)ptr);
 
 	switch (size) {
@@ -24,6 +44,16 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+	case 2:
+		asm volatile("@	__xchg2\n"
+		"1:	ldrexh	%0, [%3]\n"
+		"	strexh	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
 	case 4:
 		asm volatile("@	__xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
@@ -34,121 +64,236 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+
 	default:
-		__bad_xchg(ptr, size), ret = 0;
+		/* Cause a link-time error; the size is not supported. */
+		__bad_cmpxchg(ptr, size), ret = 0;
 		break;
 	}
-	smp_mb();
 
 	return ret;
 }
 
-#define xchg(ptr,x) \
-	((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+#define xchg_relaxed(ptr, x) ({						\
+	(__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr),		\
+				   sizeof(*(ptr)));			\
+})
+
+static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+				      unsigned long new, int size)
+{
+	unsigned long oldval, res;
+
+	prefetchw((const void *)ptr);
+
+	switch (size) {
+	case 1:
+		do {
+			asm volatile("@ __cmpxchg1\n"
+			"	ldrexb	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexbeq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+	case 2:
+		do {
+			asm volatile("@ __cmpxchg2\n"
+			"	ldrexh	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexheq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+	case 4:
+		do {
+			asm volatile("@ __cmpxchg4\n"
+			"	ldrex	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexeq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+
+	default:
+		__bad_cmpxchg(ptr, size);
+		oldval = 0;
+	}
+
+	return oldval;
+}
+
+#define cmpxchg_relaxed(ptr,o,n) ({					\
+	(__typeof__(*(ptr)))__cmpxchg((ptr),				\
+				      (unsigned long)(o),		\
+				      (unsigned long)(n),		\
+				      sizeof(*(ptr)));			\
+})
+
+static inline unsigned long long __cmpxchg64(volatile unsigned long long *ptr,
+					     unsigned long long old,
+					     unsigned long long new)
+{
+	unsigned long long oldval;
+	unsigned long res;
+
+	prefetchw((const void *)ptr);
+
+	__asm__ __volatile__(
+"1:	ldrexd		%1, %H1, [%3]\n"
+"	teq		%1, %4\n"
+"	teqeq		%H1, %H4\n"
+"	bne		2f\n"
+"	strexd		%0, %5, %H5, [%3]\n"
+"	teq		%0, #0\n"
+"	bne		1b\n"
+"2:"
+	: "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
+	: "r" (ptr), "r" (old), "r" (new)
+	: "cc");
+
+	return oldval;
+}
+
+#define cmpxchg64_relaxed(ptr, o, n) ({					\
+	(__typeof__(*(ptr)))__cmpxchg64((ptr),				\
+					(unsigned long long)(o),	\
+					(unsigned long long)(n));	\
+})
+
+
+/*
+ * Linux doesn't provide strict versions of xchg(), cmpxchg(), and cmpxchg64(),
+ * so manually define these for Xen as smp_mb() wrappers around the relaxed
+ * variants.
+ */
+
+#define xchg(ptr, x) ({ \
+	long ret; \
+	smp_mb(); \
+	ret = xchg_relaxed(ptr, x); \
+	smp_mb(); \
+	ret; \
+})
+
+#define cmpxchg(ptr, o, n) ({ \
+	long ret; \
+	smp_mb(); \
+	ret = cmpxchg_relaxed(ptr, o, n); \
+	smp_mb(); \
+	ret; \
+})
+
+#define cmpxchg64(ptr, o, n) ({ \
+	long long ret; \
+	smp_mb(); \
+	ret = cmpxchg64_relaxed(ptr, o, n); \
+	smp_mb(); \
+	ret; \
+})
 
 /*
- * Atomic compare and exchange.  Compare OLD with MEM, if identical,
- * store NEW in MEM.  Return the initial value in MEM.  Success is
- * indicated by comparing RETURN with OLD.
+ * This code is from the original Xen arm32 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over. The only changes
+ * here are renaming the macros and functions to explicitly use
+ * "timeout" in their names so that they don't clash with the above.
+ *
+ * We need this here for guest atomics (the only user of the timeout
+ * variants).
  */
 
-extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
-
-#define __CMPXCHG_CASE(sz, name)					\
-static inline bool __cmpxchg_case_##name(volatile void *ptr,		\
-					 unsigned long *old,		\
-					 unsigned long new,		\
-					 bool timeout,			\
-					 unsigned int max_try)		\
-{									\
-	unsigned long oldval;						\
-	unsigned long res;						\
-									\
-	do {								\
-		asm volatile("@ __cmpxchg_case_" #name "\n"		\
-		"	ldrex" #sz "	%1, [%2]\n"			\
-		"	mov	%0, #0\n"				\
-		"	teq	%1, %3\n"				\
-		"	strex" #sz "eq %0, %4, [%2]\n"			\
-		: "=&r" (res), "=&r" (oldval)				\
-		: "r" (ptr), "Ir" (*old), "r" (new)			\
-		: "memory", "cc");					\
-									\
-		if (!res)						\
-			break;						\
-	} while (!timeout || ((--max_try) > 0));			\
-									\
-	*old = oldval;							\
-									\
-	return !res;							\
+#define __CMPXCHG_TIMEOUT_CASE(sz, name)                                        \
+static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,            \
+                                         unsigned long *old,            \
+                                         unsigned long new,             \
+                                         bool timeout,                  \
+                                         unsigned int max_try)          \
+{                                                                       \
+        unsigned long oldval;                                           \
+        unsigned long res;                                              \
+                                                                        \
+        do {                                                            \
+                asm volatile("@ __cmpxchg_timeout_case_" #name "\n"             \
+                "       ldrex" #sz "    %1, [%2]\n"                     \
+                "       mov     %0, #0\n"                               \
+                "       teq     %1, %3\n"                               \
+                "       strex" #sz "eq %0, %4, [%2]\n"                  \
+                : "=&r" (res), "=&r" (oldval)                           \
+                : "r" (ptr), "Ir" (*old), "r" (new)                     \
+                : "memory", "cc");                                      \
+                                                                        \
+                if (!res)                                               \
+                        break;                                          \
+        } while (!timeout || ((--max_try) > 0));                        \
+                                                                        \
+        *old = oldval;                                                  \
+                                                                        \
+        return !res;                                                    \
 }
 
-__CMPXCHG_CASE(b, 1)
-__CMPXCHG_CASE(h, 2)
-__CMPXCHG_CASE( , 4)
+__CMPXCHG_TIMEOUT_CASE(b, 1)
+__CMPXCHG_TIMEOUT_CASE(h, 2)
+__CMPXCHG_TIMEOUT_CASE( , 4)
 
-static inline bool __cmpxchg_case_8(volatile uint64_t *ptr,
-			 	    uint64_t *old,
-			 	    uint64_t new,
-			 	    bool timeout,
-				    unsigned int max_try)
+static inline bool __cmpxchg_timeout_case_8(volatile uint64_t *ptr,
+                                    uint64_t *old,
+                                    uint64_t new,
+                                    bool timeout,
+                                    unsigned int max_try)
 {
-	uint64_t oldval;
-	uint64_t res;
-
-	do {
-		asm volatile(
-		"	ldrexd		%1, %H1, [%3]\n"
-		"	teq		%1, %4\n"
-		"	teqeq		%H1, %H4\n"
-		"	movne		%0, #0\n"
-		"	movne		%H0, #0\n"
-		"	bne		2f\n"
-		"	strexd		%0, %5, %H5, [%3]\n"
-		"2:"
-		: "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
-		: "r" (ptr), "r" (*old), "r" (new)
-		: "memory", "cc");
-		if (!res)
-			break;
-	} while (!timeout || ((--max_try) > 0));
-
-	*old = oldval;
-
-	return !res;
+        uint64_t oldval;
+        uint64_t res;
+
+        do {
+                asm volatile(
+                "       ldrexd          %1, %H1, [%3]\n"
+                "       teq             %1, %4\n"
+                "       teqeq           %H1, %H4\n"
+                "       movne           %0, #0\n"
+                "       movne           %H0, #0\n"
+                "       bne             2f\n"
+                "       strexd          %0, %5, %H5, [%3]\n"
+                "2:"
+                : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
+                : "r" (ptr), "r" (*old), "r" (new)
+                : "memory", "cc");
+                if (!res)
+                        break;
+        } while (!timeout || ((--max_try) > 0));
+
+        *old = oldval;
+
+        return !res;
 }
 
 static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-					unsigned long new, int size,
-					bool timeout, unsigned int max_try)
+                                        unsigned long new, int size,
+                                        bool timeout, unsigned int max_try)
 {
-	prefetchw((const void *)ptr);
+        prefetchw((const void *)ptr);
 
-	switch (size) {
-	case 1:
-		return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-	case 2:
-		return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-	case 4:
-		return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-	default:
-		return __bad_cmpxchg(ptr, size);
-	}
+        switch (size) {
+        case 1:
+                return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
+        case 2:
+                return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
+        case 4:
+                return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
+        default:
+                __bad_cmpxchg(ptr, size);
+                return false;
+        }
 
-	ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-					     unsigned long old,
-					     unsigned long new,
-					     int size)
-{
-	smp_mb();
-	if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
+        ASSERT_UNREACHABLE();
 }
 
 /*
@@ -162,18 +307,18 @@ static always_inline unsigned long __cmpxchg(volatile void *ptr,
  * timeout) and false if the update has failed.
  */
 static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-					    unsigned long *old,
-					    unsigned long new,
-					    int size,
-					    unsigned int max_try)
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
 {
-	bool ret;
+        bool ret;
 
-	smp_mb();
-	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-	smp_mb();
+        smp_mb();
+        ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+        smp_mb();
 
-	return ret;
+        return ret;
 }
 
 /*
@@ -187,43 +332,17 @@ static always_inline bool __cmpxchg_timeout(volatile void *ptr,
  * timeout) and false if the update has failed.
  */
 static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr,
-					      uint64_t *old,
-					      uint64_t new,
-					      unsigned int max_try)
+                                              uint64_t *old,
+                                              uint64_t new,
+                                              unsigned int max_try)
 {
-	bool ret;
+        bool ret;
 
-	smp_mb();
-	ret = __cmpxchg_case_8(ptr, old, new, true, max_try);
-	smp_mb();
+        smp_mb();
+        ret = __cmpxchg_timeout_case_8(ptr, old, new, true, max_try);
+        smp_mb();
 
-	return ret;
+        return ret;
 }
 
-#define cmpxchg(ptr,o,n)						\
-	((__typeof__(*(ptr)))__cmpxchg((ptr),				\
-				       (unsigned long)(o),		\
-				       (unsigned long)(n),		\
-				       sizeof(*(ptr))))
-
-static inline uint64_t cmpxchg64(volatile uint64_t *ptr,
-				 uint64_t old,
-				 uint64_t new)
-{
-	smp_mb();
-	if (!__cmpxchg_case_8(ptr, &old, new, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
+#endif /* __ASM_ARM_ARM32_CMPXCHG_H */
diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index ab57abfbc5..88798d11db 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM32_SYSTEM_H
 #define __ASM_ARM32_SYSTEM_H
 
-#include <asm/arm32/cmpxchg.h>
+#include <asm/atomic.h>
 
 #define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n" : : : "cc" )
 #define local_irq_enable()  asm volatile ( "cpsie i @ local_irq_enable\n" : : : "cc" )
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20059.45693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRn-00070A-5Q; Thu, 05 Nov 2020 18:57:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20059.45693; Thu, 05 Nov 2020 18:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRn-000700-1w; Thu, 05 Nov 2020 18:57:23 +0000
Received: by outflank-mailman (input) for mailman id 20059;
 Thu, 05 Nov 2020 18:57:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRl-0006gr-OE
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:21 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d9dd261-a20c-4b3f-8692-e58e969fd50c;
 Thu, 05 Nov 2020 18:57:11 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id a3so2839263wrx.13
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:11 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:57:09 -0800 (PST)
X-Inumbo-ID: 8d9dd261-a20c-4b3f-8692-e58e969fd50c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=2hblBpQBBS5NdLsTqWg42/ulV6RYw8DM4ijUkOSDxmc=;
        b=N2AqB3IxmclD0OoUcdMw5I5yPLH0O1tzCMCPXYTvw2C6YidmBN9/7/K637M0WhYwRx
         jkn5WD0jD+CbaX3RhodOzYUrkvC9OVeaP2sj3QrnP6EQ/DLMcH+eaynPhRDiBZvM4U4g
         G18mWhEuTn1xJSnM+3vLxQ9+ohhxHmo7xLBHHqsjEhTX5JPwUyPUI5+1f9sFkEbiXkIM
         jsoJY0c7kgxkyeC51HfsFiyd9vNFMUJxaCWsV9EUwBBfhy0CfudlBH4PlLcDlQTmO46n
         IskCQs12U919VbMP2GvrfJj/cfmmUoP+JqkOFjAnecxnJqxTCN2yqqWr4D2QgKXqVo8C
         Ntig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=2hblBpQBBS5NdLsTqWg42/ulV6RYw8DM4ijUkOSDxmc=;
        b=XDHE0aXw2y9XYx2HynmyOlrNIrf1vToOyPQ6kxiGOv3t/a82bJ/Q4qzRe5vr+kE9rB
         HI2hp32vNHfmiovvag2OSwcdenl3Dxn+hm4nA1Aun/I5nzJFYzM22CLnF6j8ISYB1XTG
         L96NsW+DxSl7u0UiQ4CKO9y/vrbJzW7pP0ZN7r3GWndZGXVBiMbOoq9TQ6pJzzcVt4iN
         954/pOMQtF7am1lHcUWkyuoCzwcKlhmqxMLedvKHZpQNJqvnyodT5+TJARbI1CS3IUM7
         tqIX6GqSF+xvybkSUDgxslHg0MoX0A0rjhOEOZX4NR6YN8aZVosVi+r8//480WpWInNh
         mVmw==
X-Gm-Message-State: AOAM5312RtVlGY9Sr4rdF3anuPp7Sf3yUN6aGM+jryQN55Nx6DN0fNYN
	W9ceWwRW/4qc0thiYOrWLP2Jk8Cc5cY=
X-Google-Smtp-Source: ABdhPJzB6BSFEs1paVwMgXDuTbnRDHer5g36E3+ncp6SJksPOa8l52hgblR3dJfJV6G4xUQdVvNp1A==
X-Received: by 2002:a5d:54d0:: with SMTP id x16mr4579092wrv.75.1604602630143;
        Thu, 05 Nov 2020 10:57:10 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 6/6] xen/arm: Remove dependency on gcc builtin __sync_fetch_and_add()
Date: Thu,  5 Nov 2020 18:56:03 +0000
Message-Id: <20201105185603.24149-7-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now that we have explicit implementations of LL/SC and LSE atomics
helpers after porting Linux's versions to Xen, we can drop the reference
to gcc's builtin __sync_fetch_and_add().

This requires some fudging with container_of(), because the existing
users of __sync_fetch_and_add() (namely xen/spinlock.c) pass a pointer
directly to the u32 being modified, while the atomics helpers expect a
pointer to an atomic_t and then access that atomic_t's counter member.

NOTE: spinlock.c uses u32 for the value being added, while the atomics
helpers use int for their counter member. This shouldn't actually matter,
because the addition is performed in assembly and the compiler cannot
reason about signed integer overflow inside inline assembly, but it seems
worth calling out in the commit message.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h |  2 +-
 xen/include/asm-arm/system.h       | 10 +++++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index 544a4ba492..5cf13cc8fa 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -200,6 +200,7 @@ static inline int atomic_add_return(int i, atomic_t *v)
 
 	return ret;
 }
+#define atomic_fetch_add(i, v) atomic_add_return(i, v)
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
@@ -212,5 +213,4 @@ static inline int atomic_sub_return(int i, atomic_t *v)
 	return ret;
 }
 
-
 #endif /* __ASM_ARM_ARM32_ATOMIC_H */
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..86c50915d9 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -3,6 +3,7 @@
 #define __ASM_SYSTEM_H
 
 #include <xen/lib.h>
+#include <xen/kernel.h>
 #include <public/arch-arm.h>
 
 #define sev()           asm volatile("sev" : : : "memory")
@@ -58,7 +59,14 @@ static inline int local_abort_is_enabled(void)
     return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                   \
+    int ret;                                                            \
+                                                                        \
+    atomic_t * tmp = container_of((int *)(&(x)), atomic_t, counter);    \
+    ret = atomic_fetch_add(x, tmp);                                     \
+                                                                        \
+    ret;                                                                \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);
 
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 18:57:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 18:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20063.45705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRs-00076M-HA; Thu, 05 Nov 2020 18:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20063.45705; Thu, 05 Nov 2020 18:57:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakRs-00076C-Co; Thu, 05 Nov 2020 18:57:28 +0000
Received: by outflank-mailman (input) for mailman id 20063;
 Thu, 05 Nov 2020 18:57:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kakRq-0006gr-OZ
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 18:57:26 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c3384f2-72bb-4bd0-8af0-7b6d26181d2a;
 Thu, 05 Nov 2020 18:57:10 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id n15so3008933wrq.2
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 10:57:10 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id n14sm3451536wrt.8.2020.11.05.10.57.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 05 Nov 2020 10:57:07 -0800 (PST)
X-Inumbo-ID: 3c3384f2-72bb-4bd0-8af0-7b6d26181d2a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=8I2Ok6wpRdbW4t7D33HeYP/XkP6I7e2EsC6fbv4S2J4=;
        b=RSifhXyjfPPNLWT28QaLbOs16saU8yP5G7sfH5Wf/cLNS2zGFmWJhgoxiqJd/kIoYY
         1+TaDUmvO2ZeonOKEMKidi1aXwEp66+S8+cVjxo/xo54n7n7ImK7lSVD2+O3Ocx06heB
         hVzseaLgHKGpELZgE3QDct0nKF+eZ/jMBZPKHG9efY8x38gIoLUzcaSsvT8YQZqQUbwl
         XWrH0cUrHNK2SrSxrr55FYd++9Yd7LB6hS3docWhGJLQLsNbgBJsvY9QlZjoc/p5U70+
         7FUzszjkPvoRia5lRNoLncMrvaUSD6uE8uq/HDKfuKn6KeMPpiQTee+S91wzSS7/NI1w
         txEA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=8I2Ok6wpRdbW4t7D33HeYP/XkP6I7e2EsC6fbv4S2J4=;
        b=I4eSMBFkOX2x2p/x0tXjfx+WujSuSmYTZRELbRrZiy+fJLC9yhd9mZLC001FBzuj/D
         S+0EUTAe2bP1TneOLsd7pSeuWYwvegpZ7gHGSsx/eSukhpkvf5hYRDJydvxha08vC1rz
         nRA+NEIcqWn923yuoiIuZlC/dTwU51Nn9z71LIYjJmrEv3bNwm27UFz5WrJgItb1G06s
         yVkhE9NybuhtDjfa5wll798TKs8X5IY7PPQQSPChWNZGVvVi46f2MDcZ/3ihj+Y1yaf9
         fvytTHylNdeTKSFGbAfR/JpwVc4r7TultUhbVpic2CixVH0MLE6A8V3H/FLGM8khYwOw
         rv/g==
X-Gm-Message-State: AOAM532NhtT2tNatluTetwDoEO+ocO2+BF0YB2GGupBYQ0a5fw26bYu0
	+EdLGodBXYma5KBZl7Jn8qOYSHkAQjA=
X-Google-Smtp-Source: ABdhPJwu7tJnCNE3rvZyMq5nd9ZRgtDX+P8XJETy908fBefF8RGAvA0HeH6w48G9zlwwXuRPawsP8Q==
X-Received: by 2002:adf:ead1:: with SMTP id o17mr2263636wrn.396.1604602628146;
        Thu, 05 Nov 2020 10:57:08 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Ash Wilding <ash.j.wilding@gmail.com>
Subject: [RFC PATCH 4/6] xen/arm64: Port Linux LL/SC and LSE atomics helpers to Xen
Date: Thu,  5 Nov 2020 18:56:01 +0000
Message-Id: <20201105185603.24149-5-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-1-ash.j.wilding@gmail.com>
References: <20201105185603.24149-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch ports Linux's arm64 LL/SC and LSE atomics helpers to Xen,
using Linux 5.10-rc2 (last commit 3cea11cd5) as a basis.

The opening comment of each header file details the changes made to
that file while porting it to Xen.

    !! NB: This patch breaks arm32 builds until the next patch in the
           series ports Linux's 32-bit LL/SC helpers. The patches have
           been split this way to aid review and discussion.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/atomic.h       | 242 +++++------
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 236 +++++++++++
 xen/include/asm-arm/arm64/atomic_lse.h   | 251 +++++++++++
 xen/include/asm-arm/arm64/cmpxchg.h      | 505 ++++++++++++++++-------
 xen/include/asm-arm/arm64/lse.h          |  53 +++
 xen/include/asm-arm/arm64/system.h       |   2 +-
 xen/include/asm-arm/atomic.h             |  15 +-
 xen/include/xen/compiler.h               |   3 +
 8 files changed, 1021 insertions(+), 286 deletions(-)
 create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h
 create mode 100644 xen/include/asm-arm/arm64/lse.h

diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
index 2d42567866..5632ff7b13 100644
--- a/xen/include/asm-arm/arm64/atomic.h
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -1,148 +1,124 @@
+
 /*
- * Based on arch/arm64/include/asm/atomic.h
- * which in turn is
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Drop redundant includes and redirect others to Xen equivalents
+ * 		- Rename declarations from arch_atomic_<op>() to atomic_<op>()
+ * 		- Drop atomic64_t helper declarations
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ARCH_ARM_ARM64_ATOMIC
-#define __ARCH_ARM_ARM64_ATOMIC
+#ifndef __ASM_ARM_ARM64_ATOMIC_H
+#define __ASM_ARM_ARM64_ATOMIC_H
 
-/*
- * AArch64 UP and SMP safe atomic ops.  We use load exclusive and
- * store exclusive to ensure that these are atomic.  We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_add\n"
-"1:	ldxr	%w0, %2\n"
-"	add	%w0, %w0, %w3\n"
-"	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i));
-}
+#include <xen/compiler.h>
+#include <xen/types.h>
 
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_add_return\n"
-"1:	ldxr	%w0, %2\n"
-"	add	%w0, %w0, %w3\n"
-"	stlxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i)
-	: "memory");
-
-	smp_mb();
-	return result;
-}
+#include "lse.h"
+#include "cmpxchg.h"
 
-static inline void atomic_sub(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_sub\n"
-"1:	ldxr	%w0, %2\n"
-"	sub	%w0, %w0, %w3\n"
-"	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i));
+#define ATOMIC_OP(op)							\
+static inline void op(int i, atomic_t *v)			\
+{									\
+	__lse_ll_sc_body(op, i, v);					\
 }
 
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_sub_return\n"
-"1:	ldxr	%w0, %2\n"
-"	sub	%w0, %w0, %w3\n"
-"	stlxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i)
-	: "memory");
-
-	smp_mb();
-	return result;
-}
+ATOMIC_OP(atomic_andnot)
+ATOMIC_OP(atomic_or)
+ATOMIC_OP(atomic_xor)
+ATOMIC_OP(atomic_add)
+ATOMIC_OP(atomic_and)
+ATOMIC_OP(atomic_sub)
 
-static inline void atomic_and(int m, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_and\n"
-"1:	ldxr	%w0, %2\n"
-"	and	%w0, %w0, %w3\n"
-"	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (m));
-}
+#undef ATOMIC_OP
 
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-	unsigned long tmp;
-	int oldval;
-
-	smp_mb();
-
-	asm volatile("// atomic_cmpxchg\n"
-"1:	ldxr	%w1, %2\n"
-"	cmp	%w1, %w3\n"
-"	b.ne	2f\n"
-"	stxr	%w0, %w4, %2\n"
-"	cbnz	%w0, 1b\n"
-"2:"
-	: "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
-	: "Ir" (old), "r" (new)
-	: "cc");
-
-	smp_mb();
-	return oldval;
+#define ATOMIC_FETCH_OP(name, op)					\
+static inline int op##name(int i, atomic_t *v)			\
+{									\
+	return __lse_ll_sc_body(op##name, i, v);			\
 }
 
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-	int c, old;
-
-	c = atomic_read(v);
-	while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
-		c = old;
-	return c;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
+#define ATOMIC_FETCH_OPS(op)						\
+	ATOMIC_FETCH_OP(_relaxed, op)					\
+	ATOMIC_FETCH_OP(_acquire, op)					\
+	ATOMIC_FETCH_OP(_release, op)					\
+	ATOMIC_FETCH_OP(        , op)
+
+ATOMIC_FETCH_OPS(atomic_fetch_andnot)
+ATOMIC_FETCH_OPS(atomic_fetch_or)
+ATOMIC_FETCH_OPS(atomic_fetch_xor)
+ATOMIC_FETCH_OPS(atomic_fetch_add)
+ATOMIC_FETCH_OPS(atomic_fetch_and)
+ATOMIC_FETCH_OPS(atomic_fetch_sub)
+ATOMIC_FETCH_OPS(atomic_add_return)
+ATOMIC_FETCH_OPS(atomic_sub_return)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_FETCH_OPS
+#define atomic_read(v)			__READ_ONCE((v)->counter)
+#define atomic_set(v, i)			__WRITE_ONCE(((v)->counter), (i))
+
+#define atomic_add_return_relaxed		atomic_add_return_relaxed
+#define atomic_add_return_acquire		atomic_add_return_acquire
+#define atomic_add_return_release		atomic_add_return_release
+#define atomic_add_return			atomic_add_return
+
+#define atomic_sub_return_relaxed		atomic_sub_return_relaxed
+#define atomic_sub_return_acquire		atomic_sub_return_acquire
+#define atomic_sub_return_release		atomic_sub_return_release
+#define atomic_sub_return			atomic_sub_return
+
+#define atomic_fetch_add_relaxed		atomic_fetch_add_relaxed
+#define atomic_fetch_add_acquire		atomic_fetch_add_acquire
+#define atomic_fetch_add_release		atomic_fetch_add_release
+#define atomic_fetch_add			atomic_fetch_add
+
+#define atomic_fetch_sub_relaxed		atomic_fetch_sub_relaxed
+#define atomic_fetch_sub_acquire		atomic_fetch_sub_acquire
+#define atomic_fetch_sub_release		atomic_fetch_sub_release
+#define atomic_fetch_sub			atomic_fetch_sub
+
+#define atomic_fetch_and_relaxed		atomic_fetch_and_relaxed
+#define atomic_fetch_and_acquire		atomic_fetch_and_acquire
+#define atomic_fetch_and_release		atomic_fetch_and_release
+#define atomic_fetch_and			atomic_fetch_and
+
+#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
+#define atomic_fetch_andnot_acquire	atomic_fetch_andnot_acquire
+#define atomic_fetch_andnot_release	atomic_fetch_andnot_release
+#define atomic_fetch_andnot		atomic_fetch_andnot
+
+#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
+#define atomic_fetch_or_acquire		atomic_fetch_or_acquire
+#define atomic_fetch_or_release		atomic_fetch_or_release
+#define atomic_fetch_or			atomic_fetch_or
+
+#define atomic_fetch_xor_relaxed		atomic_fetch_xor_relaxed
+#define atomic_fetch_xor_acquire		atomic_fetch_xor_acquire
+#define atomic_fetch_xor_release		atomic_fetch_xor_release
+#define atomic_fetch_xor			atomic_fetch_xor
+
+#define atomic_xchg_relaxed(v, new) \
+	xchg_relaxed(&((v)->counter), (new))
+#define atomic_xchg_acquire(v, new) \
+	xchg_acquire(&((v)->counter), (new))
+#define atomic_xchg_release(v, new) \
+	xchg_release(&((v)->counter), (new))
+#define atomic_xchg(v, new) \
+	xchg(&((v)->counter), (new))
+
+#define atomic_cmpxchg_relaxed(v, old, new) \
+	cmpxchg_relaxed(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_acquire(v, old, new) \
+	cmpxchg_acquire(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_release(v, old, new) \
+	cmpxchg_release(&((v)->counter), (old), (new))
+
+#define atomic_andnot			atomic_andnot
+
+#endif /* __ASM_ARM_ARM64_ATOMIC_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
new file mode 100644
index 0000000000..dbcb0e9fe7
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
@@ -0,0 +1,236 @@
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Redirect includes to Xen equivalents
+ * 		- Drop atomic64_t helper definitions
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+
+#include <xen/stringify.h>
+
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+#define __LL_SC_FALLBACK(asm_ops)					\
+"	b	3f\n"							\
+"	.subsection	1\n"						\
+"3:\n"									\
+asm_ops "\n"								\
+"	b	4f\n"							\
+"	.previous\n"							\
+"4:\n"
+#else
+#define __LL_SC_FALLBACK(asm_ops) asm_ops
+#endif
+
+#ifndef CONFIG_CC_HAS_K_CONSTRAINT
+#define K
+#endif
+
+/*
+ * AArch64 UP and SMP safe atomic ops.  We use load exclusive and
+ * store exclusive to ensure that these are atomic.  We may loop
+ * to ensure that the update happens.
+ */
+
+#define ATOMIC_OP(op, asm_op, constraint)				\
+static inline void							\
+__ll_sc_atomic_##op(int i, atomic_t *v)					\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	asm volatile("// atomic_" #op "\n"				\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %2\n"					\
+"1:	ldxr	%w0, %2\n"						\
+"	" #asm_op "	%w0, %w0, %w3\n"				\
+"	stxr	%w1, %w0, %2\n"						\
+"	cbnz	%w1, 1b\n")						\
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+	: __stringify(constraint) "r" (i));				\
+}
+
+#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
+static inline int							\
+__ll_sc_atomic_##op##_return##name(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	asm volatile("// atomic_" #op "_return" #name "\n"		\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %2\n"					\
+"1:	ld" #acq "xr	%w0, %2\n"					\
+"	" #asm_op "	%w0, %w0, %w3\n"				\
+"	st" #rel "xr	%w1, %w0, %2\n"					\
+"	cbnz	%w1, 1b\n"						\
+"	" #mb )								\
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+	: __stringify(constraint) "r" (i)				\
+	: cl);								\
+									\
+	return result;							\
+}
+
+#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \
+static inline int							\
+__ll_sc_atomic_fetch_##op##name(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+	int val, result;						\
+									\
+	asm volatile("// atomic_fetch_" #op #name "\n"			\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %3\n"					\
+"1:	ld" #acq "xr	%w0, %3\n"					\
+"	" #asm_op "	%w1, %w0, %w4\n"				\
+"	st" #rel "xr	%w2, %w1, %3\n"					\
+"	cbnz	%w2, 1b\n"						\
+"	" #mb )								\
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
+	: __stringify(constraint) "r" (i)				\
+	: cl);								\
+									\
+	return result;							\
+}
+
+#define ATOMIC_OPS(...)							\
+	ATOMIC_OP(__VA_ARGS__)						\
+	ATOMIC_OP_RETURN(        , dmb ish,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_OP_RETURN(_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+	ATOMIC_OP_RETURN(_acquire,        , a,  , "memory", __VA_ARGS__)\
+	ATOMIC_OP_RETURN(_release,        ,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)
+
+ATOMIC_OPS(add, add, I)
+ATOMIC_OPS(sub, sub, J)
+
+#undef ATOMIC_OPS
+#define ATOMIC_OPS(...)							\
+	ATOMIC_OP(__VA_ARGS__)						\
+	ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)
+
+ATOMIC_OPS(and, and, K)
+ATOMIC_OPS(or, orr, K)
+ATOMIC_OPS(xor, eor, K)
+/*
+ * GAS converts the mysterious and undocumented BIC (immediate) alias to
+ * an AND (immediate) instruction with the immediate inverted. We don't
+ * have a constraint for this, so fall back to register.
+ */
+ATOMIC_OPS(andnot, bic, )
+
+#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint)	\
+static inline u##sz							\
+__ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,			\
+					 unsigned long old,		\
+					 u##sz new)			\
+{									\
+	unsigned long tmp;						\
+	u##sz oldval;							\
+									\
+	/*								\
+	 * Sub-word sizes require explicit casting so that the compare  \
+	 * part of the cmpxchg doesn't end up interpreting non-zero	\
+	 * upper bits of the register containing "old".			\
+	 */								\
+	if (sz < 32)							\
+		old = (u##sz)old;					\
+									\
+	asm volatile(							\
+	__LL_SC_FALLBACK(						\
+	"	prfm	pstl1strm, %[v]\n"				\
+	"1:	ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n"		\
+	"	eor	%" #w "[tmp], %" #w "[oldval], %" #w "[old]\n"	\
+	"	cbnz	%" #w "[tmp], 2f\n"				\
+	"	st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n"	\
+	"	cbnz	%w[tmp], 1b\n"					\
+	"	" #mb "\n"						\
+	"2:")								\
+	: [tmp] "=&r" (tmp), [oldval] "=&r" (oldval),			\
+	  [v] "+Q" (*(u##sz *)ptr)					\
+	: [old] __stringify(constraint) "r" (old), [new] "r" (new)	\
+	: cl);								\
+									\
+	return oldval;							\
+}
+
+/*
+ * Earlier versions of GCC (no later than 8.1.0) appear to incorrectly
+ * handle the 'K' constraint for the value 4294967295 - thus we use no
+ * constraint for 32 bit operations.
+ */
+__CMPXCHG_CASE(w, b,     ,  8,        ,  ,  ,         , K)
+__CMPXCHG_CASE(w, h,     , 16,        ,  ,  ,         , K)
+__CMPXCHG_CASE(w,  ,     , 32,        ,  ,  ,         , K)
+__CMPXCHG_CASE( ,  ,     , 64,        ,  ,  ,         , L)
+__CMPXCHG_CASE(w, b, acq_,  8,        , a,  , "memory", K)
+__CMPXCHG_CASE(w, h, acq_, 16,        , a,  , "memory", K)
+__CMPXCHG_CASE(w,  , acq_, 32,        , a,  , "memory", K)
+__CMPXCHG_CASE( ,  , acq_, 64,        , a,  , "memory", L)
+__CMPXCHG_CASE(w, b, rel_,  8,        ,  , l, "memory", K)
+__CMPXCHG_CASE(w, h, rel_, 16,        ,  , l, "memory", K)
+__CMPXCHG_CASE(w,  , rel_, 32,        ,  , l, "memory", K)
+__CMPXCHG_CASE( ,  , rel_, 64,        ,  , l, "memory", L)
+__CMPXCHG_CASE(w, b,  mb_,  8, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE(w, h,  mb_, 16, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE(w,  ,  mb_, 32, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE( ,  ,  mb_, 64, dmb ish,  , l, "memory", L)
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name, mb, rel, cl)				\
+static inline long							\
+__ll_sc__cmpxchg_double##name(unsigned long old1,			\
+				      unsigned long old2,		\
+				      unsigned long new1,		\
+				      unsigned long new2,		\
+				      volatile void *ptr)		\
+{									\
+	unsigned long tmp, ret;						\
+									\
+	asm volatile("// __cmpxchg_double" #name "\n"			\
+	__LL_SC_FALLBACK(						\
+	"	prfm	pstl1strm, %2\n"				\
+	"1:	ldxp	%0, %1, %2\n"					\
+	"	eor	%0, %0, %3\n"					\
+	"	eor	%1, %1, %4\n"					\
+	"	orr	%1, %0, %1\n"					\
+	"	cbnz	%1, 2f\n"					\
+	"	st" #rel "xp	%w0, %5, %6, %2\n"			\
+	"	cbnz	%w0, 1b\n"					\
+	"	" #mb "\n"						\
+	"2:")								\
+	: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)	\
+	: "r" (old1), "r" (old2), "r" (new1), "r" (new2)		\
+	: cl);								\
+									\
+	return ret;							\
+}
+
+__CMPXCHG_DBL(   ,        ,  ,         )
+__CMPXCHG_DBL(_mb, dmb ish, l, "memory")
+
+#undef __CMPXCHG_DBL
+#undef K
+
+#endif	/* __ASM_ARM_ARM64_ATOMIC_LL_SC_H */
\ No newline at end of file
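[Editorial aside, not part of the patch: the contract of the `__ll_sc__cmpxchg_case_*()` helpers above — return the value observed at `ptr`, storing `new` only when that value matched `old` — can be sketched in portable C with GCC's `__atomic` builtins, which lower to the same LDXR/STXR loop on arm64 without LSE. The helper name below is hypothetical.]

```c
#include <stdint.h>

/* Hypothetical stand-in for __ll_sc__cmpxchg_case_32(): returns the
 * value observed at *ptr; the store of `new` happens only when that
 * value equalled `old`. */
static inline uint32_t cmpxchg32_sketch(volatile uint32_t *ptr,
                                        uint32_t old, uint32_t new)
{
    uint32_t expected = old;
    /* On arm64 this compiles to an LDXR/STXR retry loop (or CAS
     * when LSE atomics are available). */
    __atomic_compare_exchange_n(ptr, &expected, new, 0,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return expected; /* value seen, whether or not the swap happened */
}
```

Callers therefore detect success by comparing the return value against the `old` they passed in, exactly as the generic cmpxchg wrappers later in this series do.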
diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/arm64/atomic_lse.h
new file mode 100644
index 0000000000..0d579f3262
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic_lse.h
@@ -0,0 +1,251 @@
+
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Drop atomic64_t helper definitions
+ * 		- Switch __always_inline qualifier to always_inline
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+
+#ifndef __ASM_ARM_ARM64_ATOMIC_LSE_H
+#define __ASM_ARM_ARM64_ATOMIC_LSE_H
+
+#define ATOMIC_OP(op, asm_op)						\
+static inline void __lse_atomic_##op(int i, atomic_t *v)			\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+"	" #asm_op "	%w[i], %[v]\n"					\
+	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v));							\
+}
+
+ATOMIC_OP(andnot, stclr)
+ATOMIC_OP(or, stset)
+ATOMIC_OP(xor, steor)
+ATOMIC_OP(add, stadd)
+
+#undef ATOMIC_OP
+
+#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...)			\
+static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+"	" #asm_op #mb "	%w[i], %w[i], %[v]"				\
+	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+#define ATOMIC_FETCH_OPS(op, asm_op)					\
+	ATOMIC_FETCH_OP(_relaxed,   , op, asm_op)			\
+	ATOMIC_FETCH_OP(_acquire,  a, op, asm_op, "memory")		\
+	ATOMIC_FETCH_OP(_release,  l, op, asm_op, "memory")		\
+	ATOMIC_FETCH_OP(        , al, op, asm_op, "memory")
+
+ATOMIC_FETCH_OPS(andnot, ldclr)
+ATOMIC_FETCH_OPS(or, ldset)
+ATOMIC_FETCH_OPS(xor, ldeor)
+ATOMIC_FETCH_OPS(add, ldadd)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_FETCH_OPS
+
+#define ATOMIC_OP_ADD_RETURN(name, mb, cl...)				\
+static inline int __lse_atomic_add_return##name(int i, atomic_t *v)	\
+{									\
+	u32 tmp;							\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	ldadd" #mb "	%w[i], %w[tmp], %[v]\n"			\
+	"	add	%w[i], %w[i], %w[tmp]"				\
+	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_OP_ADD_RETURN(_relaxed,   )
+ATOMIC_OP_ADD_RETURN(_acquire,  a, "memory")
+ATOMIC_OP_ADD_RETURN(_release,  l, "memory")
+ATOMIC_OP_ADD_RETURN(        , al, "memory")
+
+#undef ATOMIC_OP_ADD_RETURN
+
+static inline void __lse_atomic_and(int i, atomic_t *v)
+{
+	asm volatile(
+	__LSE_PREAMBLE
+	"	mvn	%w[i], %w[i]\n"
+	"	stclr	%w[i], %[v]"
+	: [i] "+&r" (i), [v] "+Q" (v->counter)
+	: "r" (v));
+}
+
+#define ATOMIC_FETCH_OP_AND(name, mb, cl...)				\
+static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	mvn	%w[i], %w[i]\n"					\
+	"	ldclr" #mb "	%w[i], %w[i], %[v]"			\
+	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_FETCH_OP_AND(_relaxed,   )
+ATOMIC_FETCH_OP_AND(_acquire,  a, "memory")
+ATOMIC_FETCH_OP_AND(_release,  l, "memory")
+ATOMIC_FETCH_OP_AND(        , al, "memory")
+
+#undef ATOMIC_FETCH_OP_AND
+
+static inline void __lse_atomic_sub(int i, atomic_t *v)
+{
+	asm volatile(
+	__LSE_PREAMBLE
+	"	neg	%w[i], %w[i]\n"
+	"	stadd	%w[i], %[v]"
+	: [i] "+&r" (i), [v] "+Q" (v->counter)
+	: "r" (v));
+}
+
+#define ATOMIC_OP_SUB_RETURN(name, mb, cl...)				\
+static inline int __lse_atomic_sub_return##name(int i, atomic_t *v)	\
+{									\
+	u32 tmp;							\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	neg	%w[i], %w[i]\n"					\
+	"	ldadd" #mb "	%w[i], %w[tmp], %[v]\n"			\
+	"	add	%w[i], %w[i], %w[tmp]"				\
+	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_OP_SUB_RETURN(_relaxed,   )
+ATOMIC_OP_SUB_RETURN(_acquire,  a, "memory")
+ATOMIC_OP_SUB_RETURN(_release,  l, "memory")
+ATOMIC_OP_SUB_RETURN(        , al, "memory")
+
+#undef ATOMIC_OP_SUB_RETURN
+
+#define ATOMIC_FETCH_OP_SUB(name, mb, cl...)				\
+static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	neg	%w[i], %w[i]\n"					\
+	"	ldadd" #mb "	%w[i], %w[i], %[v]"			\
+	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_FETCH_OP_SUB(_relaxed,   )
+ATOMIC_FETCH_OP_SUB(_acquire,  a, "memory")
+ATOMIC_FETCH_OP_SUB(_release,  l, "memory")
+ATOMIC_FETCH_OP_SUB(        , al, "memory")
+
+#undef ATOMIC_FETCH_OP_SUB
+
+#define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...)			\
+static always_inline u##sz						\
+__lse__cmpxchg_case_##name##sz(volatile void *ptr,			\
+					      u##sz old,		\
+					      u##sz new)		\
+{									\
+	register unsigned long x0 asm ("x0") = (unsigned long)ptr;	\
+	register u##sz x1 asm ("x1") = old;				\
+	register u##sz x2 asm ("x2") = new;				\
+	unsigned long tmp;						\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	mov	%" #w "[tmp], %" #w "[old]\n"			\
+	"	cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n"	\
+	"	mov	%" #w "[ret], %" #w "[tmp]"			\
+	: [ret] "+r" (x0), [v] "+Q" (*(unsigned long *)ptr),		\
+	  [tmp] "=&r" (tmp)						\
+	: [old] "r" (x1), [new] "r" (x2)				\
+	: cl);								\
+									\
+	return x0;							\
+}
+
+__CMPXCHG_CASE(w, b,     ,  8,   )
+__CMPXCHG_CASE(w, h,     , 16,   )
+__CMPXCHG_CASE(w,  ,     , 32,   )
+__CMPXCHG_CASE(x,  ,     , 64,   )
+__CMPXCHG_CASE(w, b, acq_,  8,  a, "memory")
+__CMPXCHG_CASE(w, h, acq_, 16,  a, "memory")
+__CMPXCHG_CASE(w,  , acq_, 32,  a, "memory")
+__CMPXCHG_CASE(x,  , acq_, 64,  a, "memory")
+__CMPXCHG_CASE(w, b, rel_,  8,  l, "memory")
+__CMPXCHG_CASE(w, h, rel_, 16,  l, "memory")
+__CMPXCHG_CASE(w,  , rel_, 32,  l, "memory")
+__CMPXCHG_CASE(x,  , rel_, 64,  l, "memory")
+__CMPXCHG_CASE(w, b,  mb_,  8, al, "memory")
+__CMPXCHG_CASE(w, h,  mb_, 16, al, "memory")
+__CMPXCHG_CASE(w,  ,  mb_, 32, al, "memory")
+__CMPXCHG_CASE(x,  ,  mb_, 64, al, "memory")
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name, mb, cl...)					\
+static always_inline long						\
+__lse__cmpxchg_double##name(unsigned long old1,				\
+					 unsigned long old2,		\
+					 unsigned long new1,		\
+					 unsigned long new2,		\
+					 volatile void *ptr)		\
+{									\
+	unsigned long oldval1 = old1;					\
+	unsigned long oldval2 = old2;					\
+	register unsigned long x0 asm ("x0") = old1;			\
+	register unsigned long x1 asm ("x1") = old2;			\
+	register unsigned long x2 asm ("x2") = new1;			\
+	register unsigned long x3 asm ("x3") = new2;			\
+	register unsigned long x4 asm ("x4") = (unsigned long)ptr;	\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\
+	"	eor	%[old1], %[old1], %[oldval1]\n"			\
+	"	eor	%[old2], %[old2], %[oldval2]\n"			\
+	"	orr	%[old1], %[old1], %[old2]"			\
+	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
+	  [v] "+Q" (*(unsigned long *)ptr)				\
+	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
+	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)		\
+	: cl);								\
+									\
+	return x0;							\
+}
+
+__CMPXCHG_DBL(   ,   )
+__CMPXCHG_DBL(_mb, al, "memory")
+
+#undef __CMPXCHG_DBL
+
+#endif	/* __ASM_ARM_ARM64_ATOMIC_LSE_H */
\ No newline at end of file
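[Editorial aside, not part of the patch: `__lse_atomic_sub()` and `__lse_atomic_and()` above rely on two identities, because LSE provides no direct atomic subtract or AND — subtraction is STADD of the negation, and AND is STCLR (clear-on-mask) of the complement. A quick host-side check of those identities, with hypothetical helper names:]

```c
#include <stdint.h>

/* sub via add-of-negation, as in __lse_atomic_sub() (neg + stadd) */
static inline uint32_t sub_via_add(uint32_t v, uint32_t i)
{
    return v + (uint32_t)-i;
}

/* and via clear-of-complement, as in __lse_atomic_and() (mvn + stclr):
 * STCLR computes v & ~mask, so passing mask = ~i yields v & i. */
static inline uint32_t and_via_clr(uint32_t v, uint32_t i)
{
    uint32_t mask = ~i;
    return v & ~mask;
}
```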
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
index 10e4edc022..4ee8291d3e 100644
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -1,136 +1,363 @@
-#ifndef __ASM_ARM64_CMPXCHG_H
-#define __ASM_ARM64_CMPXCHG_H
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Drop redundant includes and redirect others to Xen equivalents
+ * 		- Rename definitions from arch_xchg_<qual>() to xchg_<qual>()
+ * 		- Switch __always_inline qualifier to always_inline
+ * 		- Switch usage of BUILD_BUG() to returning __bad_cmpxchg()
+ * 		- Pull in original Xen arm64 cmpxchg.h definitions of
+ * 		  cmpxchg_timeout*() and cmpxchg64_timeout*() as these are not
+ * 		  provided by Linux and are required for Xen's guest atomics
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
+ */
+#ifndef __ASM_ARM_ARM64_CMPXCHG_H
+#define __ASM_ARM_ARM64_CMPXCHG_H
 
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-	unsigned long ret, tmp;
-
-	switch (size) {
-	case 1:
-		asm volatile("//	__xchg1\n"
-		"1:	ldxrb	%w0, %2\n"
-		"	stlxrb	%w1, %w3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	case 2:
-		asm volatile("//	__xchg2\n"
-		"1:	ldxrh	%w0, %2\n"
-		"	stlxrh	%w1, %w3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u16 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	case 4:
-		asm volatile("//	__xchg4\n"
-		"1:	ldxr	%w0, %2\n"
-		"	stlxr	%w1, %w3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u32 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	case 8:
-		asm volatile("//	__xchg8\n"
-		"1:	ldxr	%0, %2\n"
-		"	stlxr	%w1, %3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u64 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	default:
-		__bad_xchg(ptr, size), ret = 0;
-		break;
-	}
-
-	smp_mb();
-	return ret;
-}
-
-#define xchg(ptr,x) \
-({ \
-	__typeof__(*(ptr)) __ret; \
-	__ret = (__typeof__(*(ptr))) \
-		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \
-	__ret; \
-})
+#include <asm/bug.h>
+#include "lse.h"
 
 extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
 
-#define __CMPXCHG_CASE(w, sz, name)					\
-static inline bool __cmpxchg_case_##name(volatile void *ptr,		\
-					 unsigned long *old,		\
-					 unsigned long new,		\
-					 bool timeout,			\
-					 unsigned int max_try)		\
+/*
+ * We need separate acquire parameters for ll/sc and lse, since the full
+ * barrier case is generated as release+dmb for the former and
+ * acquire+release for the latter.
+ */
+#define __XCHG_CASE(w, sfx, name, sz, mb, nop_lse, acq, acq_lse, rel, cl)	\
+static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr)		\
+{										\
+	u##sz ret;								\
+	unsigned long tmp;							\
+										\
+	asm volatile(ARM64_LSE_ATOMIC_INSN(					\
+	/* LL/SC */								\
+	"	prfm	pstl1strm, %2\n"					\
+	"1:	ld" #acq "xr" #sfx "\t%" #w "0, %2\n"				\
+	"	st" #rel "xr" #sfx "\t%w1, %" #w "3, %2\n"			\
+	"	cbnz	%w1, 1b\n"						\
+	"	" #mb,								\
+	/* LSE atomics */							\
+	"	swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n"		\
+	"	nop\n"							\
+	"	nop\n"							\
+	"	nop\n"							\
+	"	" #nop_lse)							\
+	: "=&r" (ret), "=&r" (tmp), "+Q" (*(u##sz *)ptr)			\
+	: "r" (x)								\
+	: cl);									\
+										\
+	return ret;								\
+}
+
+__XCHG_CASE(w, b,     ,  8,        ,    ,  ,  ,  ,         )
+__XCHG_CASE(w, h,     , 16,        ,    ,  ,  ,  ,         )
+__XCHG_CASE(w,  ,     , 32,        ,    ,  ,  ,  ,         )
+__XCHG_CASE( ,  ,     , 64,        ,    ,  ,  ,  ,         )
+__XCHG_CASE(w, b, acq_,  8,        ,    , a, a,  , "memory")
+__XCHG_CASE(w, h, acq_, 16,        ,    , a, a,  , "memory")
+__XCHG_CASE(w,  , acq_, 32,        ,    , a, a,  , "memory")
+__XCHG_CASE( ,  , acq_, 64,        ,    , a, a,  , "memory")
+__XCHG_CASE(w, b, rel_,  8,        ,    ,  ,  , l, "memory")
+__XCHG_CASE(w, h, rel_, 16,        ,    ,  ,  , l, "memory")
+__XCHG_CASE(w,  , rel_, 32,        ,    ,  ,  , l, "memory")
+__XCHG_CASE( ,  , rel_, 64,        ,    ,  ,  , l, "memory")
+__XCHG_CASE(w, b,  mb_,  8, dmb ish, nop,  , a, l, "memory")
+__XCHG_CASE(w, h,  mb_, 16, dmb ish, nop,  , a, l, "memory")
+__XCHG_CASE(w,  ,  mb_, 32, dmb ish, nop,  , a, l, "memory")
+__XCHG_CASE( ,  ,  mb_, 64, dmb ish, nop,  , a, l, "memory")
+
+#undef __XCHG_CASE
+
+#define __XCHG_GEN(sfx)							\
+static always_inline unsigned long __xchg##sfx(unsigned long x,		\
+					volatile void *ptr,		\
+					int size)			\
 {									\
-	unsigned long oldval;						\
-	unsigned long res;						\
+	switch (size) {							\
+	case 1:								\
+		return __xchg_case##sfx##_8(x, ptr);			\
+	case 2:								\
+		return __xchg_case##sfx##_16(x, ptr);			\
+	case 4:								\
+		return __xchg_case##sfx##_32(x, ptr);			\
+	case 8:								\
+		return __xchg_case##sfx##_64(x, ptr);			\
+	default:							\
+		return __bad_cmpxchg(ptr, size);			\
+	}								\
 									\
-	do {								\
-		asm volatile("// __cmpxchg_case_" #name "\n"		\
-		"	ldxr" #sz "	%" #w "1, %2\n"			\
-		"	mov	%w0, #0\n"				\
-		"	cmp	%" #w "1, %" #w "3\n"			\
-		"	b.ne	1f\n"					\
-		"	stxr" #sz "	%w0, %" #w "4, %2\n"		\
-		"1:\n"							\
-		: "=&r" (res), "=&r" (oldval),				\
-		  "+Q" (*(unsigned long *)ptr)				\
-		: "Ir" (*old), "r" (new)				\
-		: "cc");						\
+	unreachable();							\
+}
+
+__XCHG_GEN()
+__XCHG_GEN(_acq)
+__XCHG_GEN(_rel)
+__XCHG_GEN(_mb)
+
+#undef __XCHG_GEN
+
+#define __xchg_wrapper(sfx, ptr, x)					\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__ret = (__typeof__(*(ptr)))					\
+		__xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr))); \
+	__ret;								\
+})
+
+/* xchg */
+#define xchg_relaxed(...)	__xchg_wrapper(    , __VA_ARGS__)
+#define xchg_acquire(...)	__xchg_wrapper(_acq, __VA_ARGS__)
+#define xchg_release(...)	__xchg_wrapper(_rel, __VA_ARGS__)
+#define xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
+
+#define __CMPXCHG_CASE(name, sz)			\
+static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr,	\
+					      u##sz old,		\
+					      u##sz new)		\
+{									\
+	return __lse_ll_sc_body(_cmpxchg_case_##name##sz,		\
+				ptr, old, new);				\
+}
+
+__CMPXCHG_CASE(    ,  8)
+__CMPXCHG_CASE(    , 16)
+__CMPXCHG_CASE(    , 32)
+__CMPXCHG_CASE(    , 64)
+__CMPXCHG_CASE(acq_,  8)
+__CMPXCHG_CASE(acq_, 16)
+__CMPXCHG_CASE(acq_, 32)
+__CMPXCHG_CASE(acq_, 64)
+__CMPXCHG_CASE(rel_,  8)
+__CMPXCHG_CASE(rel_, 16)
+__CMPXCHG_CASE(rel_, 32)
+__CMPXCHG_CASE(rel_, 64)
+__CMPXCHG_CASE(mb_,  8)
+__CMPXCHG_CASE(mb_, 16)
+__CMPXCHG_CASE(mb_, 32)
+__CMPXCHG_CASE(mb_, 64)
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name)						\
+static inline long __cmpxchg_double##name(unsigned long old1,		\
+					 unsigned long old2,		\
+					 unsigned long new1,		\
+					 unsigned long new2,		\
+					 volatile void *ptr)		\
+{									\
+	return __lse_ll_sc_body(_cmpxchg_double##name, 			\
+				old1, old2, new1, new2, ptr);		\
+}
+
+__CMPXCHG_DBL(   )
+__CMPXCHG_DBL(_mb)
+
+#undef __CMPXCHG_DBL
+
+#define __CMPXCHG_GEN(sfx)						\
+static always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
+					   unsigned long old,		\
+					   unsigned long new,		\
+					   int size)			\
+{									\
+	switch (size) {							\
+	case 1:								\
+		return __cmpxchg_case##sfx##_8(ptr, old, new);		\
+	case 2:								\
+		return __cmpxchg_case##sfx##_16(ptr, old, new);		\
+	case 4:								\
+		return __cmpxchg_case##sfx##_32(ptr, old, new);		\
+	case 8:								\
+		return __cmpxchg_case##sfx##_64(ptr, old, new);		\
+	default:							\
+		return __bad_cmpxchg(ptr, size);			\
+	}								\
 									\
-		if (!res)						\
-			break;						\
-	} while (!timeout || ((--max_try) > 0));			\
+	unreachable();							\
+}
+
+__CMPXCHG_GEN()
+__CMPXCHG_GEN(_acq)
+__CMPXCHG_GEN(_rel)
+__CMPXCHG_GEN(_mb)
+
+#undef __CMPXCHG_GEN
+
+#define __cmpxchg_wrapper(sfx, ptr, o, n)				\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__ret = (__typeof__(*(ptr)))					\
+		__cmpxchg##sfx((ptr), (unsigned long)(o),		\
+				(unsigned long)(n), sizeof(*(ptr)));	\
+	__ret;								\
+})
+
+/* cmpxchg */
+#define cmpxchg_relaxed(...)	__cmpxchg_wrapper(    , __VA_ARGS__)
+#define cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
+#define cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
+#define cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
+#define cmpxchg_local		cmpxchg_relaxed
+
+/* cmpxchg64 */
+#define cmpxchg64_relaxed		cmpxchg_relaxed
+#define cmpxchg64_acquire		cmpxchg_acquire
+#define cmpxchg64_release		cmpxchg_release
+#define cmpxchg64			cmpxchg
+#define cmpxchg64_local		cmpxchg_local
+
+/* cmpxchg_double */
+#define system_has_cmpxchg_double()     1
+
+#define __cmpxchg_double_check(ptr1, ptr2)					\
+({										\
+	if (sizeof(*(ptr1)) != 8)						\
+		__bad_cmpxchg(ptr1, sizeof(*(ptr1)));				\
+	VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1);	\
+})
+
+#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)				\
+({										\
+	int __ret;								\
+	__cmpxchg_double_check(ptr1, ptr2);					\
+	__ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2),	\
+				     (unsigned long)(n1), (unsigned long)(n2),	\
+				     ptr1);					\
+	__ret;									\
+})
+
+#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2)			\
+({										\
+	int __ret;								\
+	__cmpxchg_double_check(ptr1, ptr2);					\
+	__ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2),	\
+				  (unsigned long)(n1), (unsigned long)(n2),	\
+				  ptr1);					\
+	__ret;									\
+})
+
+#define __CMPWAIT_CASE(w, sfx, sz)					\
+static inline void __cmpwait_case_##sz(volatile void *ptr,		\
+				       unsigned long val)		\
+{									\
+	unsigned long tmp;						\
 									\
-	*old = oldval;							\
+	asm volatile(							\
+	"	sevl\n"							\
+	"	wfe\n"							\
+	"	ldxr" #sfx "\t%" #w "[tmp], %[v]\n"			\
+	"	eor	%" #w "[tmp], %" #w "[tmp], %" #w "[val]\n"	\
+	"	cbnz	%" #w "[tmp], 1f\n"				\
+	"	wfe\n"							\
+	"1:"								\
+	: [tmp] "=&r" (tmp), [v] "+Q" (*(unsigned long *)ptr)		\
+	: [val] "r" (val));						\
+}
+
+__CMPWAIT_CASE(w, b, 8);
+__CMPWAIT_CASE(w, h, 16);
+__CMPWAIT_CASE(w,  , 32);
+__CMPWAIT_CASE( ,  , 64);
+
+#undef __CMPWAIT_CASE
+
+#define __CMPWAIT_GEN(sfx)						\
+static always_inline void __cmpwait##sfx(volatile void *ptr,		\
+				  unsigned long val,			\
+				  int size)				\
+{									\
+	switch (size) {							\
+	case 1:								\
+		return __cmpwait_case##sfx##_8(ptr, (u8)val);		\
+	case 2:								\
+		return __cmpwait_case##sfx##_16(ptr, (u16)val);		\
+	case 4:								\
+		return __cmpwait_case##sfx##_32(ptr, val);		\
+	case 8:								\
+		return __cmpwait_case##sfx##_64(ptr, val);		\
+	default:							\
+		__bad_cmpxchg(ptr, size);				\
+	}								\
 									\
-	return !res;							\
+	unreachable();							\
+}
+
+__CMPWAIT_GEN()
+
+#undef __CMPWAIT_GEN
+
+#define __cmpwait_relaxed(ptr, val) \
+	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
+
+/*
+ * This code is from the original Xen arm64 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over. The only changes
+ * here are renaming the macros and functions to explicitly use
+ * "timeout" in their names so that they don't clash with the above.
+ *
+ * We need this here for guest atomics (the only user of the timeout
+ * variants).
+ */
+
+#define __CMPXCHG_TIMEOUT_CASE(w, sz, name)                             \
+static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,    \
+                                         unsigned long *old,            \
+                                         unsigned long new,             \
+                                         bool timeout,                  \
+                                         unsigned int max_try)          \
+{                                                                       \
+        unsigned long oldval;                                           \
+        unsigned long res;                                              \
+                                                                        \
+        do {                                                            \
+                asm volatile("// __cmpxchg_timeout_case_" #name "\n"    \
+                "       ldxr" #sz "     %" #w "1, %2\n"                 \
+                "       mov     %w0, #0\n"                              \
+                "       cmp     %" #w "1, %" #w "3\n"                   \
+                "       b.ne    1f\n"                                   \
+                "       stxr" #sz "     %w0, %" #w "4, %2\n"            \
+                "1:\n"                                                  \
+                : "=&r" (res), "=&r" (oldval),                          \
+                  "+Q" (*(unsigned long *)ptr)                          \
+                : "Ir" (*old), "r" (new)                                \
+                : "cc");                                                \
+                                                                        \
+                if (!res)                                               \
+                        break;                                          \
+        } while (!timeout || ((--max_try) > 0));                        \
+                                                                        \
+        *old = oldval;                                                  \
+                                                                        \
+        return !res;                                                    \
 }
 
-__CMPXCHG_CASE(w, b, 1)
-__CMPXCHG_CASE(w, h, 2)
-__CMPXCHG_CASE(w,  , 4)
-__CMPXCHG_CASE( ,  , 8)
+__CMPXCHG_TIMEOUT_CASE(w, b, 1)
+__CMPXCHG_TIMEOUT_CASE(w, h, 2)
+__CMPXCHG_TIMEOUT_CASE(w,  , 4)
+__CMPXCHG_TIMEOUT_CASE( ,  , 8)
 
 static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-					unsigned long new, int size,
-					bool timeout, unsigned int max_try)
+                                        unsigned long new, int size,
+                                        bool timeout, unsigned int max_try)
 {
-	switch (size) {
-	case 1:
-		return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-	case 2:
-		return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-	case 4:
-		return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-	case 8:
-		return __cmpxchg_case_8(ptr, old, new, timeout, max_try);
-	default:
-		return __bad_cmpxchg(ptr, size);
-	}
+        switch (size) {
+        case 1:
+                return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
+        case 2:
+                return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
+        case 4:
+                return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
+        case 8:
+                return __cmpxchg_timeout_case_8(ptr, old, new, timeout, max_try);
+        default:
+                return __bad_cmpxchg(ptr, size);
+        }
 
-	ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-					     unsigned long old,
-					     unsigned long new,
-					     int size)
-{
-	smp_mb();
-	if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
+        ASSERT_UNREACHABLE();
 }
 
 /*
@@ -144,40 +371,22 @@ static always_inline unsigned long __cmpxchg(volatile void *ptr,
  * timeout) and false if the update has failed.
  */
 static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-					    unsigned long *old,
-					    unsigned long new,
-					    int size,
-					    unsigned int max_try)
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
 {
-	bool ret;
+        bool ret;
 
-	smp_mb();
-	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-	smp_mb();
+        smp_mb();
+        ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+        smp_mb();
 
-	return ret;
+        return ret;
 }
 
-#define cmpxchg(ptr, o, n) \
-({ \
-	__typeof__(*(ptr)) __ret; \
-	__ret = (__typeof__(*(ptr))) \
-		__cmpxchg((ptr), (unsigned long)(o), (unsigned long)(n), \
-			  sizeof(*(ptr))); \
-	__ret; \
-})
+#define __cmpxchg64_timeout(ptr, old, new, max_try)     \
+        __cmpxchg_timeout(ptr, old, new, 8, max_try)
 
-#define cmpxchg64(ptr, o, n) cmpxchg(ptr, o, n)
 
-#define __cmpxchg64_timeout(ptr, old, new, max_try)	\
-	__cmpxchg_timeout(ptr, old, new, 8, max_try)
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
+#endif	/* __ASM_ARM_ARM64_CMPXCHG_H */
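[Editorial aside, not part of the patch: the `__cmpxchg_timeout()` contract above is subtle — a compare *mismatch* is reported as success (with `*old` updated to the value seen), and false is returned only when the exclusive store keeps failing for `max_try` attempts. A plain-C sketch of that contract, with a hypothetical helper name:]

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for __cmpxchg_timeout(): on return, *old holds
 * the last value observed; the return value is false only if the store
 * kept losing the exclusive for max_try attempts. */
static bool cmpxchg_timeout_sketch(volatile uint32_t *ptr, uint32_t *old,
                                   uint32_t new, unsigned int max_try)
{
    while (max_try-- > 0) {
        uint32_t seen = *ptr;
        if (seen != *old) {      /* compare failed: not a timeout */
            *old = seen;
            return true;
        }
        if (__atomic_compare_exchange_n(ptr, &seen, new, 0,
                                        __ATOMIC_SEQ_CST,
                                        __ATOMIC_SEQ_CST))
            return true;         /* swap happened */
        /* lost the exclusive monitor: retry, bounded by max_try */
    }
    return false;
}
```

This bounded retry is what makes the helpers safe against a malicious guest keeping a cache line permanently contended, which is why the guest-atomics code uses the `_timeout` variants rather than plain `cmpxchg()`.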
diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/lse.h
new file mode 100644
index 0000000000..e26245a74b
--- /dev/null
+++ b/xen/include/asm-arm/arm64/lse.h
@@ -0,0 +1,53 @@
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * Summary of changes:
+ * 		- Rename header include guard to reflect Xen directory structure
+ * 		- Drop redundant includes and redirect others to Xen equivalents
+ * 		- Modify hwcap check to use cpus_have_cap()
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+#ifndef __ASM_ARM_ARM64_LSE_H
+#define __ASM_ARM_ARM64_LSE_H
+
+#include "atomic_ll_sc.h"
+
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+
+#define __LSE_PREAMBLE	".arch_extension lse\n"
+
+#include <xen/compiler.h>
+#include <xen/stringify.h>
+#include <xen/types.h>
+
+#include <asm/alternative.h>
+
+#include "atomic_lse.h"
+
+static inline bool system_uses_lse_atomics(void)
+{
+	return cpus_have_cap(ARM64_HAS_LSE_ATOMICS);
+}
+
+#define __lse_ll_sc_body(op, ...)					\
+({									\
+	system_uses_lse_atomics() ?					\
+		__lse_##op(__VA_ARGS__) :				\
+		__ll_sc_##op(__VA_ARGS__);				\
+})
+
+/* In-line patching at runtime */
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse)				\
+	ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
+
+#else	/* CONFIG_ARM64_LSE_ATOMICS */
+
+static inline bool system_uses_lse_atomics(void) { return false; }
+
+#define __lse_ll_sc_body(op, ...)		__ll_sc_##op(__VA_ARGS__)
+
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
+
+#endif	/* CONFIG_ARM64_LSE_ATOMICS */
+#endif	/* __ASM_ARM_ARM64_LSE_H */
\ No newline at end of file
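[Editorial aside, not part of the patch: `__lse_ll_sc_body()` picks one of two same-signature helpers per call from a single capability check, by token-pasting the variant prefix onto the op name. A minimal host-side sketch of that dispatch shape — all names below are hypothetical stand-ins:]

```c
#include <stdbool.h>

/* Stand-ins for the LSE and LL/SC variants of one operation. */
static int __lse_add_one(int x)   { return x + 1; }
static int __ll_sc_add_one(int x) { return x + 1; }

/* Hypothetical capability flag; fixed once at boot in the real code. */
static bool have_lse;

/* Same shape as __lse_ll_sc_body(op, ...): paste the variant prefix
 * onto the op name and select one branch at runtime. */
#define ll_sc_body(op, ...)                     \
({                                              \
    have_lse ? __lse_##op(__VA_ARGS__)          \
             : __ll_sc_##op(__VA_ARGS__);       \
})
```

Because the branch condition is a boot-time constant in practice, the compiler can hoist it, and the out-of-line fallback keeps both implementations available without function pointers.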
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 2e36573ac6..dfbbe4b87d 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-#include <asm/arm64/cmpxchg.h>
+#include <asm/atomic.h>
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits:
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index ac2798d095..866f54d03c 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -2,8 +2,6 @@
 #define __ARCH_ARM_ATOMIC__
 
 #include <xen/atomic.h>
-#include <xen/prefetch.h>
-#include <asm/system.h>
 
 #define build_atomic_read(name, size, width, type) \
 static inline type name(const volatile type *addr) \
@@ -220,10 +218,19 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 
 static inline int atomic_add_unless(atomic_t *v, int a, int u)
 {
-    return __atomic_add_unless(v, a, u);
+    int c, old;
+
+    c = atomic_read(v);
+    while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
+        c = old;
+
+    return c;
 }
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+    return cmpxchg(&((v)->counter), (old), (new));
+}
 
 #endif /* __ARCH_ARM_ATOMIC__ */
 /*
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c0e0ee9f27..aa0546bfe8 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -138,4 +138,7 @@
 # define CLANG_DISABLE_WARN_GCC_COMPAT_END
 #endif
 
+#define __READ_ONCE(x)	    (*(volatile typeof(x) *)&(x))
+#define __WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))
+
 #endif /* __LINUX_COMPILER_H */
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 19:04:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 19:04:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20094.45720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakYf-0008Ms-JQ; Thu, 05 Nov 2020 19:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20094.45720; Thu, 05 Nov 2020 19:04:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakYf-0008Ml-GM; Thu, 05 Nov 2020 19:04:29 +0000
Received: by outflank-mailman (input) for mailman id 20094;
 Thu, 05 Nov 2020 19:04:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kakYe-0008M8-4b
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:04:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b853d52b-bbf4-4714-b497-694b43be174f;
 Thu, 05 Nov 2020 19:04:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kakYV-0007wa-SO; Thu, 05 Nov 2020 19:04:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kakYV-0004xL-Kn; Thu, 05 Nov 2020 19:04:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kakYV-0006XQ-EI; Thu, 05 Nov 2020 19:04:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kakYe-0008M8-4b
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:04:28 +0000
X-Inumbo-ID: b853d52b-bbf4-4714-b497-694b43be174f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b853d52b-bbf4-4714-b497-694b43be174f;
	Thu, 05 Nov 2020 19:04:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZeTqsn85pEVz08IlgN4tiTIb883T7hBb5EcZe5+Yc1I=; b=f8cK7rucY+2xvTwuNgvC05D81f
	5ovuK0iSMfB130gqeYG4hlthb9uv/yT6iR0PBvwey/O6dSTQ9c6CD6Hbx5ZWbHFYpZjedW4S6jus/
	GE+Wts5phN4eiPgpzSv+gjri5Ds1qhTDQskL4n8JsVUPCo5lyiPkISKOuUJRuY4JJ3xE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kakYV-0007wa-SO; Thu, 05 Nov 2020 19:04:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kakYV-0004xL-Kn; Thu, 05 Nov 2020 19:04:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kakYV-0006XQ-EI; Thu, 05 Nov 2020 19:04:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156402-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156402: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4ef8451b332662d004df269d4cdeb7d9f31419b5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 19:04:19 +0000

flight 156402 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156402/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 build-i386-xsm                6 xen-build                fail REGR. vs. 152332
 build-i386                    6 xen-build                fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 build-armhf                   6 xen-build                fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-pair   10 xen-install/src_host fail in 156390 REGR. vs. 152332
 test-amd64-i386-pair   11 xen-install/dst_host fail in 156390 REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install    fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install  fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail in 156390 REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail in 156390 REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 156390 REGR. vs. 152332
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 156390 REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-install fail in 156390 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 156390 REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot       fail in 156390 REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 156390 REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot      fail in 156390 REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot       fail in 156390 REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot       fail in 156390 REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot     fail in 156390 REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot         fail in 156390 REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot       fail in 156390 REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot       fail in 156390 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot         fail in 156390 pass in 156402
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 156390 pass in 156402
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 156390 pass in 156402
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 156390 pass in 156402
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 156390
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 156390
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156390

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot       fail in 156390 REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 156390 blocked in 152332
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 156390 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 156390 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4ef8451b332662d004df269d4cdeb7d9f31419b5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   96 days
Failing since        152366  2020-08-01 20:49:34 Z   95 days  161 attempts
Testing same since   156390  2020-11-04 02:20:15 Z    1 days    2 attempts

------------------------------------------------------------
3422 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 654687 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 19:11:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 19:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20106.45746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakes-0000rA-Ex; Thu, 05 Nov 2020 19:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20106.45746; Thu, 05 Nov 2020 19:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakes-0000r3-Bh; Thu, 05 Nov 2020 19:10:54 +0000
Received: by outflank-mailman (input) for mailman id 20106;
 Thu, 05 Nov 2020 19:10:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kaker-0000qV-03
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:10:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 211ec027-b043-4ba8-8a07-e9cb82129c92;
 Thu, 05 Nov 2020 19:10:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kakej-00084L-K7; Thu, 05 Nov 2020 19:10:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kakej-0005KT-CI; Thu, 05 Nov 2020 19:10:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kakej-0001HK-Bl; Thu, 05 Nov 2020 19:10:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=NZd0=EL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kaker-0000qV-03
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:10:53 +0000
X-Inumbo-ID: 211ec027-b043-4ba8-8a07-e9cb82129c92
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 211ec027-b043-4ba8-8a07-e9cb82129c92;
	Thu, 05 Nov 2020 19:10:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FDosYbVsN85y4et6g/oPjgXaW9KO+u7Vcu/jcy1ZXds=; b=JHcl3XW5OqyAOBZoPE+djW6bIe
	JooYZbzjVYuqmrMex6l4pqERJC7CblUthATGHFZfyRh4fJQjawksd+w5UuuQOp4p9Y1eHXzbHMlDi
	7XP8ZWYN+XRuxxPnQEhRCqByY1ca9TCYfihfG9IYnZMN7j9XjfHEwO1rDm2+mf/NTqWs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kakej-00084L-K7; Thu, 05 Nov 2020 19:10:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kakej-0005KT-CI; Thu, 05 Nov 2020 19:10:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kakej-0001HK-Bl; Thu, 05 Nov 2020 19:10:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156407-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156407: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-xsm:xen-build:fail:regression
X-Osstest-Versions-This:
    ovmf=09af9bd9be2d3e31bba979f8cf6446017b0b863e
X-Osstest-Versions-That:
    ovmf=8d5708833509ece6ac63084dc07c8ac53c4d4c1a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 05 Nov 2020 19:10:45 +0000

flight 156407 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156407/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 156400

version targeted for testing:
 ovmf                 09af9bd9be2d3e31bba979f8cf6446017b0b863e
baseline version:
 ovmf                 8d5708833509ece6ac63084dc07c8ac53c4d4c1a

Last test of basis   156400  2020-11-04 12:10:58 Z    1 days
Testing same since   156407  2020-11-05 09:30:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Jeff Brasen <jbrasen@nvidia.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 09af9bd9be2d3e31bba979f8cf6446017b0b863e
Author: Bob Feng <bob.c.feng@intel.com>
Date:   Wed Nov 4 11:01:39 2020 +0800

    BaseTools: Enable Module Scope Structure Pcd
    
    REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2648
    
    This patch enables the module-scoped Structure PCD usage.
    A user can set a structure PCD field value at module scope. For
    example, under the [components] section of a DSC file, a user can
    override some field values for a specific module.
    
      Package/Module.inf{
          <PcdsFixedAtBuild>
          gUefiTokenSpaceGuid.StructurePcdModule.FieldName | 5
      }
    
    Signed-off-by: Bob Feng <bob.c.feng@intel.com>
    Cc: Liming Gao <gaoliming@byosoft.com.cn>
    Cc: Yuwei Chen <yuwei.chen@intel.com>
    
    Tested-by: Liming Gao <gaoliming@byosoft.com.cn>
    Acked-by: Liming Gao <gaoliming@byosoft.com.cn>
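For context, the override syntax shown in the commit message above would normally appear under a [Components] section of a platform DSC file. A sketch only — the package, module, token-space, and field names here are illustrative placeholders, not real EDK2 identifiers:

```
[Components]
  Package/Module.inf {
      <PcdsFixedAtBuild>
      # Override one field of a structure PCD for this module only;
      # other modules keep the package- or platform-level value.
      gUefiTokenSpaceGuid.StructurePcdModule.FieldName | 5
  }
```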

commit 978b9d511f5b9cb7bc5b09749f86c39bec51525d
Author: Jeff Brasen <jbrasen@nvidia.com>
Date:   Thu Oct 29 01:35:02 2020 +0800

    MdeModulePkg/Gcd: Check memory allocation when initializing memory
    
    CoreInitializeMemoryServices was not checking for existing memory
    allocations created in the HOB producer phase. If there are memory
    allocations outside of the region covered by the HOB list, Gcd could
    select that region for memory, which could leave those allocations
    unhandled and lead to memory overwrites.
    
    Signed-off-by: Jeff Brasen <jbrasen@nvidia.com>
    Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>
    Regression-tested-by: Laszlo Ersek <lersek@redhat.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 19:15:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 19:15:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20125.45761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakja-00016t-96; Thu, 05 Nov 2020 19:15:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20125.45761; Thu, 05 Nov 2020 19:15:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kakja-00016m-66; Thu, 05 Nov 2020 19:15:46 +0000
Received: by outflank-mailman (input) for mailman id 20125;
 Thu, 05 Nov 2020 19:15:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=68sV=EL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kakjZ-00016h-Ob
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:15:45 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 92944e74-fcaa-49b9-ae41-4da99521ce84;
 Thu, 05 Nov 2020 19:15:45 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-264--Qycz7d2O0O_BAs5Y70g2Q-1; Thu, 05 Nov 2020 14:15:42 -0500
Received: by mail-wm1-f72.google.com with SMTP id u207so997472wmu.4
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 11:15:42 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id l1sm4338333wrb.1.2020.11.05.11.15.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 05 Nov 2020 11:15:40 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=68sV=EL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kakjZ-00016h-Ob
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:15:45 +0000
X-Inumbo-ID: 92944e74-fcaa-49b9-ae41-4da99521ce84
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 92944e74-fcaa-49b9-ae41-4da99521ce84;
	Thu, 05 Nov 2020 19:15:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604603745;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zfHkRg9lIDVW7VftAA3B1DAo0AYHo0uxnJbSHZqoR04=;
	b=V0hk9erb1nvkoLwAuWFXHvfTvWotDuxHioT9V0YpPg8SjrZNUvAWPDlvpSlP8w0SpcfP08
	EpcIluhuRGg9R5+qaQ1lHQ9Ac+Z7Ytcbmo+6Sc+ciOuFM/2MdtCL+HvknQGdc5KhPrELoX
	+2cNM4IcY+7QfhcMhcGh9fJKxN3afzg=
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-264--Qycz7d2O0O_BAs5Y70g2Q-1; Thu, 05 Nov 2020 14:15:42 -0500
X-MC-Unique: -Qycz7d2O0O_BAs5Y70g2Q-1
Received: by mail-wm1-f72.google.com with SMTP id u207so997472wmu.4
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 11:15:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=zfHkRg9lIDVW7VftAA3B1DAo0AYHo0uxnJbSHZqoR04=;
        b=EBEBmjtAH/Rf5QXE754QOgoHiw94LpuAvSQAMZqficOYTkefVthhHo3CXqboO2haKu
         /TLlEwiwSe5r6vlrhf9UIx5/Oj822y2PDJ7pVwhPvxR9/cUpD9/H7jFJKzW7jYckqTsb
         d4b2+/HzgG0iXxrQlkLzEV6RgjBvrWKtKN3jSf1xYjp3f4qUqbpw3fFYEnOohKVHFJix
         DHQED81EaVL+rI0qLK38ZZpGOIBwIFg2wMColQkwCo4QSH5HfGN08c/uLM/Tz30hta1r
         gnA3EVTLgcpEsAEUieZtEsT04zuUfqbr1H02IVeTyfN2haAxAq850dHjZsjtkTEMR5n7
         ksdw==
X-Gm-Message-State: AOAM530L3TRB46YnexCxdS6XPBZu2myTb9cmCMcT3GyPjDC6tPysxBJz
	Fb9iEUaas8F46rKZCNodOdleXnrsfGcFDus3r3LIZ7+11L+Av6gL/P7gkkGqphOItlbnbSzbAqE
	zxwo7x3grYIkhF4bi07ibHJC3gAM=
X-Received: by 2002:adf:ef45:: with SMTP id c5mr5021781wrp.117.1604603741631;
        Thu, 05 Nov 2020 11:15:41 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxjKHKzVkbOCoUNiT6e3SRf5UgENSEyZ68H6dNtmYVGkBBAE0Ev1sUGISelLf4x7w4/vZM17g==
X-Received: by 2002:adf:ef45:: with SMTP id c5mr5021766wrp.117.1604603741443;
        Thu, 05 Nov 2020 11:15:41 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net. [83.42.66.234])
        by smtp.gmail.com with ESMTPSA id l1sm4338333wrb.1.2020.11.05.11.15.40
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Thu, 05 Nov 2020 11:15:40 -0800 (PST)
Subject: Re: [RFC PATCH 11/15] include/hw/xen.h: drop superfluous struct
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 masami.hiramatsu@linaro.org, Paul Durrant <paul@xen.org>,
 andre.przywara@arm.com, stefano.stabellini@linaro.org,
 takahiro.akashi@linaro.org, Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>,
 stefano.stabellini@xilinx.com, stratos-dev@op-lists.linaro.org
References: <20201105175153.30489-1-alex.bennee@linaro.org>
 <20201105175153.30489-12-alex.bennee@linaro.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <43ed2ab8-abee-fc88-46cd-ff8d531753fa@redhat.com>
Date: Thu, 5 Nov 2020 20:15:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201105175153.30489-12-alex.bennee@linaro.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/5/20 6:51 PM, Alex Bennée wrote:
> Chardev is already a typedef'ed struct.
> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  include/hw/xen/xen.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 19:20:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 19:20:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20131.45774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaknq-0001yT-RJ; Thu, 05 Nov 2020 19:20:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20131.45774; Thu, 05 Nov 2020 19:20:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaknq-0001yM-OG; Thu, 05 Nov 2020 19:20:10 +0000
Received: by outflank-mailman (input) for mailman id 20131;
 Thu, 05 Nov 2020 19:20:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=68sV=EL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kaknp-0001yH-I6
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:20:09 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1a86649f-01c4-4bc5-b26d-88252e96dbd4;
 Thu, 05 Nov 2020 19:20:08 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-98-vtJzjy-UOM6fM7ec1V9FfQ-1; Thu, 05 Nov 2020 14:20:05 -0500
Received: by mail-wm1-f71.google.com with SMTP id y26so693308wmj.7
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 11:20:04 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id z2sm3604747wmf.45.2020.11.05.11.20.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 05 Nov 2020 11:20:02 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=68sV=EL=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kaknp-0001yH-I6
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 19:20:09 +0000
X-Inumbo-ID: 1a86649f-01c4-4bc5-b26d-88252e96dbd4
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 1a86649f-01c4-4bc5-b26d-88252e96dbd4;
	Thu, 05 Nov 2020 19:20:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604604008;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wPUgjqaNXtymxlf9iwoyhzdAiVmjShyhmkwdqKM2PxU=;
	b=WYiGTMwUKv7lznSPP19w30QsUOqvu6ja77X46oHZNEhmlmgPGOz9soedxRWQVJEdUFfkEr
	+HE5tPtLiLrWpSCsNGpK79br7sANGuclfcvRWvWuyRB7oKO6/XbNuqkaEGVv3iPvDk2v10
	vwg9VRp90jA2sTUKWtHvkozP7ZQHdY4=
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-98-vtJzjy-UOM6fM7ec1V9FfQ-1; Thu, 05 Nov 2020 14:20:05 -0500
X-MC-Unique: vtJzjy-UOM6fM7ec1V9FfQ-1
Received: by mail-wm1-f71.google.com with SMTP id y26so693308wmj.7
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 11:20:04 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=wPUgjqaNXtymxlf9iwoyhzdAiVmjShyhmkwdqKM2PxU=;
        b=XCnGx5fT2f0MNmyZQ4sOADGzvNj1eWAaYVO2CZyomL5yhemXgqo7el4mtXJidcAiRO
         /P92DrlmYv9a7YnrdkLU6t0EDL6qdYym47ah1lZ4YnBLRyWBfr0r1tvYtTSw/S+Qy21z
         HQ8auCHpngDABKhZ+y8IR+8Y2psLvUOmeIKIBvd63YqEtE6Ra/mL3ZxgK62w4i/OPAa9
         SYU6GNdJgf+BXO60ICtKOI8md2lziKgTU7zq16BaWD2hPSEaEW6T6AZaIE4cyPGsguQ7
         TvMykNIFRjBrXchGM4p4wESJ5FpI1Y9q/6T37s3jon7ta325xaxRKxuJcK91bU5HQVFT
         mWZQ==
X-Gm-Message-State: AOAM531QwTdMT8oE2CiU1zcPG4uTFM5Zh1dR/FgfwiacXDav3BEgbyW8
	zWXL2yiy/xmSJBbWEl6XdnrcnHDM/5I7D7aynOZEWnjhFbDgc8bcBtc9SL5jJB1cFWLrCmEQsr7
	ffyu0+Lom9pPnFy2o9H+XNmEYVbo=
X-Received: by 2002:adf:fdc7:: with SMTP id i7mr5113120wrs.198.1604604003736;
        Thu, 05 Nov 2020 11:20:03 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx6AqeaQNElqhuMF+A5YAtRgxx2ExbJf92kxIDaCjH5t2qx4qZclDeMeBM1iCvyk2Wtwmx0Pg==
X-Received: by 2002:adf:fdc7:: with SMTP id i7mr5113102wrs.198.1604604003606;
        Thu, 05 Nov 2020 11:20:03 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net. [83.42.66.234])
        by smtp.gmail.com with ESMTPSA id z2sm3604747wmf.45.2020.11.05.11.20.01
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Thu, 05 Nov 2020 11:20:02 -0800 (PST)
Subject: Re: [RFC PATCH 12/15] stubs/xen-hw-stub: drop
 xenstore_store_pv_console_info stub
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 masami.hiramatsu@linaro.org, Paul Durrant <paul@xen.org>,
 andre.przywara@arm.com, stefano.stabellini@linaro.org,
 takahiro.akashi@linaro.org,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, stefano.stabellini@xilinx.com,
 stratos-dev@op-lists.linaro.org
References: <20201105175153.30489-1-alex.bennee@linaro.org>
 <20201105175153.30489-13-alex.bennee@linaro.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <11afa6f8-ec49-ab2b-2011-ef22665cd0c3@redhat.com>
Date: Thu, 5 Nov 2020 20:20:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201105175153.30489-13-alex.bennee@linaro.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/5/20 6:51 PM, Alex Bennée wrote:
> We should never build something that calls this without having it.

"because ..."?

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  stubs/xen-hw-stub.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
> index 2ea8190921..15f3921a76 100644
> --- a/stubs/xen-hw-stub.c
> +++ b/stubs/xen-hw-stub.c
> @@ -10,10 +10,6 @@
>  #include "hw/xen/xen.h"
>  #include "hw/xen/xen-x86.h"
>  
> -void xenstore_store_pv_console_info(int i, Chardev *chr)
> -{
> -}
> -
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
>  {
>      return -1;
> 



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 21:04:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 21:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20167.45837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamQa-0002Lq-W3; Thu, 05 Nov 2020 21:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20167.45837; Thu, 05 Nov 2020 21:04:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamQa-0002Lj-Rd; Thu, 05 Nov 2020 21:04:16 +0000
Received: by outflank-mailman (input) for mailman id 20167;
 Thu, 05 Nov 2020 21:04:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kamQY-0002Ld-OW
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:04:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce768885-05cb-4fa2-896b-f48bf62dfbf9;
 Thu, 05 Nov 2020 21:04:12 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DE36A20719;
 Thu,  5 Nov 2020 21:04:10 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kamQY-0002Ld-OW
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:04:14 +0000
X-Inumbo-ID: ce768885-05cb-4fa2-896b-f48bf62dfbf9
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ce768885-05cb-4fa2-896b-f48bf62dfbf9;
	Thu, 05 Nov 2020 21:04:12 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id DE36A20719;
	Thu,  5 Nov 2020 21:04:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604610251;
	bh=RLXu6lmZAGdZwVlZF09rCSVfr+dJo4OlU+/n/lNHF5g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=F4fIAMGRKwrpZMIABciM9OLFcQxKCqpxJ5QSCWtfPcbyUx+v9QZ+qGPuEdLTA3u2s
	 TKe2Eo9VaQWowuOpwXVvp4uL72NBZf0thKFm7rQLxwUVqSQYtc2ZFo5c3jTmYIP6Oi
	 EbxuzMNOJwCu/ulI0jTgAXx6kmqttGEZzt4MiIR0=
Date: Thu, 5 Nov 2020 13:04:10 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com, 
    Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for
 PCI ATS functionality.
In-Reply-To: <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
Message-ID: <alpine.DEB.2.21.2011051300450.2323@sstabellini-ThinkPad-T480s>
References: <cover.1604417224.git.rahul.singh@arm.com> <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com> <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com> <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Nov 2020, Jan Beulich wrote:
> On 04.11.2020 16:43, Jan Beulich wrote:
> > On 03.11.2020 16:59, Rahul Singh wrote:
> >> --- a/xen/drivers/pci/Kconfig
> >> +++ b/xen/drivers/pci/Kconfig
> >> @@ -1,3 +1,12 @@
> >>  
> >>  config HAS_PCI
> >>  	bool
> >> +
> >> +config PCI_ATS
> >> +	bool "PCI ATS support"
> >> +	default y
> >> +	depends on X86 && HAS_PCI
> >> +	---help---
> >> +	 Enable PCI Address Translation Services.
> >> +
> >> +	 If unsure, say Y.
> > 
> > Support for "---help---" having gone away in Linux, I think we'd
> > better not add new instances. Also indentation of help content
> > typically is by a tab and two spaces. With these two adjusted
> > 
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Initially I wanted to merely reply indicating I'd be fine making
> these changes while committing, but there are two more things
> (and I withdraw my R-b): For one, isn't struct pci_dev's ats
> field now unused when !PCI_ATS? If so, it should get an #ifdef
> added. And then, what exactly is it in ats.c that's x86-specific?
> Shouldn't the whole file instead be moved one level up, and be
> usable by Arm right away?

If the issue is that ATS wouldn't work on ARM straight away, then I
think it would be best to make this a silent option like we did in patch
#1: if x86 && HAS_PCI -> automatically enable, otherwise disable. I
wouldn't move the code just yet; that's better done when we can actually
test it on ARM.
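A silent option of the kind described above is conventionally written without a prompt string, so it cannot be toggled by the user and simply follows its dependencies. A minimal sketch, assuming the final patch keeps the same option name and dependencies:

```
# Silent option: no prompt string, so Kconfig never asks the user.
# Enabled automatically when the dependencies hold, disabled otherwise.
config PCI_ATS
	def_bool y
	depends on X86 && HAS_PCI
```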


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 21:08:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 21:08:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20172.45855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamUX-0002Yi-J1; Thu, 05 Nov 2020 21:08:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20172.45855; Thu, 05 Nov 2020 21:08:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamUX-0002YZ-FU; Thu, 05 Nov 2020 21:08:21 +0000
Received: by outflank-mailman (input) for mailman id 20172;
 Thu, 05 Nov 2020 21:08:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kamUW-0002Xr-Ud
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:08:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16e75f20-b67c-45cb-9569-a0085e12dea3;
 Thu, 05 Nov 2020 21:08:20 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 62D9920724;
 Thu,  5 Nov 2020 21:08:18 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kamUW-0002Xr-Ud
	for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:08:20 +0000
X-Inumbo-ID: 16e75f20-b67c-45cb-9569-a0085e12dea3
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 16e75f20-b67c-45cb-9569-a0085e12dea3;
	Thu, 05 Nov 2020 21:08:20 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 62D9920724;
	Thu,  5 Nov 2020 21:08:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604610499;
	bh=wta6Db5/cjG8npu1YddXPwncS//KQMBzHEcdsir9X/c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZR7P0wFsk2cEa1hgxYAOXGtGw6iOx/hwCRi0F6hEL9SzT1C6Yt2974qHYCGHNAZnf
	 1N7bZfRpuhPuHI8gI1u+WU7p81Ptlj2hXyTtNqx7EnCvGr63BLC6K5liyq2eDb5udg
	 uN+qSigaL06xLc0kiHVEpfF1pHHWcHGQBE4ArGLU=
Date: Thu, 5 Nov 2020 13:08:17 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86
 directory.
In-Reply-To: <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2011051307160.2323@sstabellini-ThinkPad-T480s>
References: <cover.1604417224.git.rahul.singh@arm.com> <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 3 Nov 2020, Rahul Singh wrote:
> passthrough/pci.c file is common for all architecture, but there is x86
> sepcific code in this file.
   ^ specific

Aside from that:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> Move x86 specific code to the x86 directory to avoid compilation error
> for other architecture.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
> 
> Changes in v2:
>  - fixed comments.
>  - rename pci_clean_dpci_irqs() to arch_pci_clean_pirqs().
> 
> ---
>  xen/drivers/passthrough/pci.c        | 76 +----------------------
>  xen/drivers/passthrough/x86/Makefile |  1 +
>  xen/drivers/passthrough/x86/iommu.c  |  7 +++
>  xen/drivers/passthrough/x86/pci.c    | 91 ++++++++++++++++++++++++++++
>  xen/include/xen/pci.h                |  2 +
>  5 files changed, 102 insertions(+), 75 deletions(-)
>  create mode 100644 xen/drivers/passthrough/x86/pci.c
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 2a3bce1462..04d3e2c0f9 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,7 +14,6 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <xen/sched.h>
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
> @@ -24,7 +23,6 @@
>  #include <xen/irq.h>
>  #include <xen/param.h>
>  #include <xen/vm_event.h>
> -#include <asm/hvm/irq.h>
>  #include <xen/delay.h>
>  #include <xen/keyhandler.h>
>  #include <xen/event.h>
> @@ -847,71 +845,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}
> -
>  /* Caller should hold the pcidevs_lock */
>  static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                             uint8_t devfn)
> @@ -971,7 +904,7 @@ int pci_release_devices(struct domain *d)
>      int ret;
>  
>      pcidevs_lock();
> -    ret = pci_clean_dpci_irqs(d);
> +    ret = arch_pci_clean_pirqs(d);
>      if ( ret )
>      {
>          pcidevs_unlock();
> @@ -1375,13 +1308,6 @@ static int __init setup_dump_pcidevs(void)
>  }
>  __initcall(setup_dump_pcidevs);
>  
> -int iommu_update_ire_from_msi(
> -    struct msi_desc *msi_desc, struct msi_msg *msg)
> -{
> -    return iommu_intremap
> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> -}
> -
>  static int iommu_add_device(struct pci_dev *pdev)
>  {
>      const struct domain_iommu *hd;
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index aa515c680d..d02ff75de5 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,3 @@
>  obj-$(CONFIG_PCI_ATS) += ats.o
>  obj-y += iommu.o
> +obj-y += pci.o
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f17b1820f4..875e67b53b 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>      return pg;
>  }
>  
> +int iommu_update_ire_from_msi(
> +    struct msi_desc *msi_desc, struct msi_msg *msg)
> +{
> +    return iommu_intremap
> +           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/drivers/passthrough/x86/pci.c b/xen/drivers/passthrough/x86/pci.c
> new file mode 100644
> index 0000000000..59588aa8d4
> --- /dev/null
> +++ b/xen/drivers/passthrough/x86/pci.c
> @@ -0,0 +1,91 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/sched.h>
> +#include <xen/pci.h>
> +
> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index c4d3879761..fd28d11f6e 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -209,4 +209,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
>  void msixtbl_pt_unregister(struct domain *, struct pirq *);
>  void msixtbl_pt_cleanup(struct domain *d);
>  
> +int arch_pci_clean_pirqs(struct domain *d);
> +
>  #endif /* __XEN_PCI_H__ */
> -- 
> 2.17.1
> 
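With pci.c now arch-neutral, every architecture that builds the common passthrough code has to supply its own arch_pci_clean_pirqs(). As a hypothetical sketch (not part of this series), an architecture with no HVM dpci state to tear down could satisfy the hook trivially:

```c
/* Hypothetical arch stub, not from this patch: on an architecture
 * without dpci/pirq state there is nothing to clean up, so the hook
 * just reports success and lets pci_release_devices() continue. */
#include <xen/pci.h>

int arch_pci_clean_pirqs(struct domain *d)
{
    return 0;
}
```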


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 21:12:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 21:12:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20180.45866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamY2-0003NN-2L; Thu, 05 Nov 2020 21:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20180.45866; Thu, 05 Nov 2020 21:11:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamY1-0003NG-Vg; Thu, 05 Nov 2020 21:11:57 +0000
Received: by outflank-mailman (input) for mailman id 20180;
 Thu, 05 Nov 2020 21:11:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kamY0-0003NB-JZ
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:11:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18768c3d-33d3-4548-99ac-325cf4fad5e6;
 Thu, 05 Nov 2020 21:11:55 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C858B20724;
 Thu,  5 Nov 2020 21:11:53 +0000 (UTC)
X-Inumbo-ID: 18768c3d-33d3-4548-99ac-325cf4fad5e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604610715;
	bh=zdjGWlZF5+w7QEov4WVg9sEtcjr9gG2zt7rm6H9u/HA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Cn4ylwHfRpks84ZmTRHKfKmcbgCRsMloy3+93lAPKL7CQnb0dafUVh9BMIuUATWff
	 KTSEcoJOqL0cfqZ9GeIt4vFROkGVrpZwBENRuNdSBCb/7O7JD4FH3lVxbzi4+h/f4P
	 vlIsa4mlFu5rplTh+J09TCjeqbVSdlDhzcHpd4Us=
Date: Thu, 5 Nov 2020 13:11:52 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Anthony PERARD <anthony.perard@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Takahiro Akashi <takahiro.akashi@linaro.org>, 
    Alex Bennée <alex.bennee@linaro.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, ian.jackson@eu.citrix.com, 
    wl@xen.org, xen-devel@lists.xenproject.org
Subject: Re: BUG: libxl vuart build order
In-Reply-To: <20201105175637.GL2214@perard.uk.xensource.com>
Message-ID: <alpine.DEB.2.21.2011051309200.2323@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s> <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa> <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s> <20201029114705.GA291577@laputa>
 <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s> <20201030025157.GA18567@laputa> <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s> <20201105154147.GJ2214@perard.uk.xensource.com> <alpine.DEB.2.21.2011050826510.2323@sstabellini-ThinkPad-T480s>
 <20201105175637.GL2214@perard.uk.xensource.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 5 Nov 2020, Anthony PERARD wrote:
> On Thu, Nov 05, 2020 at 08:29:20AM -0800, Stefano Stabellini wrote:
> > On Thu, 5 Nov 2020, Anthony PERARD wrote:
> > > On Fri, Oct 30, 2020 at 10:46:37AM -0700, Stefano Stabellini wrote:
> > > > On Fri, 30 Oct 2020, Takahiro Akashi wrote:
> > > > > === after "xl console -t vuart" ===
> > > > > U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
> > > > > 
> > > > > Xen virtual CPU
> > > > > Model: XENVM-4.15
> > > > > DRAM:  128 MiB
> > > > > 
> > > > > In:    sbsa-pl011
> > > > > Out:   sbsa-pl011
> > > > > Err:   sbsa-pl011
> > > > > ===
> > > > > 
> > > > > If possible, I hope that "xl create -c" command would accept "-t vuart"
> > > > > option (or it should automatically selects uart type from the config).
> > > > 
> > > > I think a patch to add the "-t" option to "xl create" would be
> > > > acceptable, right Anthony?
> > > 
> > > I don't know. Why isn't `xl' able to select the vuart as the default one?
> > 
> > Because both consoles are still valid: when the emulated uart is
> > enabled, the normal PV console is also enabled.
> > 
> > 
> > > Maybe a long option would be better in the cases where we would like to
> > > connect to a "secondary" console? I could see `xl create --console=vuart'
> > > been fine, I don't know if that's possible.
> > 
> > That's OK for me but keep in mind that xl console already takes -t
> > vuart. In other words:
> 
> I don't know why we would need the exact same option; `xl console` and
> `xl create` are two different commands. Also, I usually prefer a long
> option for rarely used options, as it makes it a bit easier to figure out
> what a command is supposed to do (without checking the man page; and when
> both long and short options are available).

That is true. I don't have a strong opinion on whether it should be "-t"
or something longer like "--console"; either one works. I tend to like
"-t" a bit more because it would make the two commands more consistent,
but I don't think it matters much.


> > 1) xl console -t vuart    -> WORKS
> 
> -t for `xl console` kind of works well; -t could be a shortcut for "type
> of console".
> 
> > 2) xl create -c -t vuart  -> DOESN'T WORK
> 
> But here, -t would not be a "type" of console since we are creating a
> VM. Also `xl create -t vuart` without -c would do nothing, right?
> (create a vm but ignoring the -t).

Yes, it would do nothing.


> Anyway, an option to auto-connect to a different console would be
> useful.

Yeah


> > P.S.
> > 
> > Could you also take a quick look at the patch I appended to the previous
> > email? Or would you prefer me to send it out separately as its own
> > patch?
> 
> It's probably better to have a patch on its own when it's ready for
> review, rather than embedded in an email in a long discussion/debugging
> thread. That leaves a better chance for others to spot that a patch
> exists and review it.

All right, I'll resend it separately.


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 21:15:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 21:15:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20190.45885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamb7-0003b0-MQ; Thu, 05 Nov 2020 21:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20190.45885; Thu, 05 Nov 2020 21:15:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kamb7-0003at-JD; Thu, 05 Nov 2020 21:15:09 +0000
Received: by outflank-mailman (input) for mailman id 20190;
 Thu, 05 Nov 2020 21:15:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f69X=EL=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kamb6-0003an-IY
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:15:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 215af06f-87c6-4754-914e-9f4172478095;
 Thu, 05 Nov 2020 21:15:07 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0426920724;
 Thu,  5 Nov 2020 21:15:05 +0000 (UTC)
X-Inumbo-ID: 215af06f-87c6-4754-914e-9f4172478095
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604610906;
	bh=elF/Vxj2afwaRZRW6dqOS+Fhq/qJJgeJJ1Y621Iu2JA=;
	h=Date:From:To:cc:Subject:From;
	b=oP67AJyC7TFmD/Rd8exjUFC6dOnMB03jKiMiDgBfYDlG5hYtQ5cIv+uvhCw5qO7Gc
	 xn7rZ1i1RyvcNc9imOzyIvtUDpB7Ay7SGUlqe7TsFQpB5sBT74JvMbDaJrh1kLdf3Q
	 qdHE2vSh6cN4mWziqxl7jNiCXskegwSm1k1lsZk0=
Date: Thu, 5 Nov 2020 13:15:05 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: takahiro.akashi@linaro.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ian.jackson@eu.citrix.com, wl@xen.org, 
    anthony.perard@citrix.com, sstabellini@kernel.org
Subject: [PATCH] libxl: set vuart_gfn in libxl__build_hvm
Message-ID: <alpine.DEB.2.21.2011051312120.2323@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

libxl: set vuart_gfn in libxl__build_hvm

Setting vuart_gfn was missed when switching ARM guests to the PVH build.
Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
dom->vuart_gfn.

Without this change, xl console cannot connect to the vuart console (-t
vuart), see https://marc.info/?l=xen-devel&m=160402342101366.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index f8661e90d4..36fe8915e7 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
         LOG(ERROR, "hvm build set params failed");
         goto out;
     }
+    state->vuart_gfn = dom->vuart_gfn;
 
     rc = hvm_build_set_xs_values(gc, domid, dom, info);
     if (rc != 0) {


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 21:55:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 21:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20216.45917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kanE1-00079b-0v; Thu, 05 Nov 2020 21:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20216.45917; Thu, 05 Nov 2020 21:55:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kanE0-00079U-U4; Thu, 05 Nov 2020 21:55:20 +0000
Received: by outflank-mailman (input) for mailman id 20216;
 Thu, 05 Nov 2020 21:55:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8P+2=EL=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kanE0-00079P-Dw
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 21:55:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id afd40730-3395-4ca0-9713-7ad4d80e3952;
 Thu, 05 Nov 2020 21:55:19 +0000 (UTC)
Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E65B22080D;
 Thu,  5 Nov 2020 21:55:17 +0000 (UTC)
X-Inumbo-ID: afd40730-3395-4ca0-9713-7ad4d80e3952
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604613318;
	bh=LbPxzojqMT52SYDv1eo1fXPXQR7JVrj6rrCBxpXxugc=;
	h=Subject:To:Cc:From:Date:From;
	b=upQkOdBlX77sXTN+S1tLKsKgkySDPXmZMS3fA9QK0tKMFEPgG6QMOvexF7Z6Tmvkm
	 jsIiwD8b4KGL7RA+aStA+kqgW5xBAT7+8vYt0/AovQY8sghrJwZOjk3vtaNmBdP9bF
	 uQmZSDoQlKrLRogAPixEienbWuuteOnqGe9EZ6xE=
Subject: Patch "linkage: Introduce new macros for assembler symbols" has been added to the 5.4-stable tree
To: a.p.zijlstra@chello.nl,akpm@linux-foundation.org,aryabinin@virtuozzo.com,boris.ostrovsky@oracle.com,bp@suse.de,corbet@lwn.net,gregkh@linuxfoundation.org,hpa@zytor.com,jgross@suse.com,jiancai@google.com,jirislaby@kernel.org,jpoimboe@redhat.com,jslaby@suse.cz,len.brown@intel.com,mark.rutland@arm.com,mingo@kernel.org,pavel@ucw.cz,rafael.j.wysocki@intel.com,tglx@linutronix.de,torvalds@linux-foundation.org,will@kernel.org,x86@kernel.org,xen-devel@lists.xenproject.org
Cc: <stable-commits@vger.kernel.org>
From: <gregkh@linuxfoundation.org>
Date: Thu, 05 Nov 2020 22:55:54 +0100
Message-ID: <160461335418364@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
X-stable: commit
X-Patchwork-Hint: ignore 


This is a note to let you know that I've just added the patch titled

    linkage: Introduce new macros for assembler symbols

to the 5.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     linkage-introduce-new-macros-for-assembler-symbols.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From ffedeeb780dc554eff3d3b16e6a462a26a41d7ec Mon Sep 17 00:00:00 2001
From: Jiri Slaby <jirislaby@kernel.org>
Date: Fri, 11 Oct 2019 13:50:41 +0200
Subject: linkage: Introduce new macros for assembler symbols

From: Jiri Slaby <jslaby@suse.cz>

commit ffedeeb780dc554eff3d3b16e6a462a26a41d7ec upstream.

Introduce new C macros for annotations of functions and data in
assembly. There is a long-standing mess in macros like ENTRY, END,
ENDPROC and similar. They are used in different manners and sometimes
incorrectly.

So introduce macros with clear use to annotate assembly as follows:

a) Support macros for the ones below
   SYM_T_FUNC -- type used by assembler to mark functions
   SYM_T_OBJECT -- type used by assembler to mark data
   SYM_T_NONE -- type used by assembler to mark entries of unknown type

   They are defined as STT_FUNC, STT_OBJECT, and STT_NOTYPE
   respectively. According to the gas manual, this is the most portable
   way. I am not sure about other assemblers, so this can be switched
   back to %function and %object if this turns into a problem.
   Architectures can also override them by something like ", @function"
   if they need.

   SYM_A_ALIGN, SYM_A_NONE -- align the symbol?
   SYM_L_GLOBAL, SYM_L_WEAK, SYM_L_LOCAL -- linkage of symbols

b) Mostly internal annotations, used by the ones below
   SYM_ENTRY -- use only if you have to (for non-paired symbols)
   SYM_START -- use only if you have to (for paired symbols)
   SYM_END -- use only if you have to (for paired symbols)

c) Annotations for code
   SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code
   SYM_INNER_LABEL -- only for labels in the middle of code

   SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for
	one function
   SYM_FUNC_START_ALIAS -- use where there are two global names for one
	function
   SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function

   SYM_FUNC_START -- use for global functions
   SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment
   SYM_FUNC_START_LOCAL -- use for local functions
   SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o
	alignment
   SYM_FUNC_START_WEAK -- use for weak functions
   SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment
   SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
	SYM_FUNC_START_WEAK, ...

   For functions with special (non-C) calling conventions:
   SYM_CODE_START -- use for non-C (special) functions
   SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o
	alignment
   SYM_CODE_START_LOCAL -- use for local non-C (special) functions
   SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special)
	functions, w/o alignment
   SYM_CODE_END -- the end of SYM_CODE_START_LOCAL or SYM_CODE_START

d) For data
   SYM_DATA_START -- global data symbol
   SYM_DATA_START_LOCAL -- local data symbol
   SYM_DATA_END -- the end of the SYM_DATA_START symbol
   SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol
   SYM_DATA -- start+end wrapper around simple global data
   SYM_DATA_LOCAL -- start+end wrapper around simple local data

==========

The macros allow to pair starts and ends of functions and mark functions
correctly in the output ELF objects.
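As an illustration (hypothetical symbol names, not taken from the patch), a paired annotation in an assembly file would look like:

```asm
/* A global, C-callable function: aligned, typed, with correct size. */
SYM_FUNC_START(do_thing)
	ret
SYM_FUNC_END(do_thing)

/* A local helper, not visible outside this object file. */
SYM_FUNC_START_LOCAL(do_thing_helper)
	ret
SYM_FUNC_END(do_thing_helper)

/* Simple global data, start+end wrapper in one line. */
SYM_DATA(thing_flag, .long 1)
```

Each *_START is closed by the matching *_END, which is what lets the assembler emit the symbol's size (via a `.size name, . - name` directive) and type into the ELF object.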

All users of the old macros in x86 are converted to use these in further
patches.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: x86-ml <x86@kernel.org>
Cc: xen-devel@lists.xenproject.org
Link: https://lkml.kernel.org/r/20191011115108.12392-2-jslaby@suse.cz
Cc: Jian Cai <jiancai@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 Documentation/asm-annotations.rst |  216 +++++++++++++++++++++++++++++++++
 Documentation/index.rst           |    8 +
 arch/x86/include/asm/linkage.h    |   10 +
 include/linux/linkage.h           |  245 ++++++++++++++++++++++++++++++++++++--
 4 files changed, 468 insertions(+), 11 deletions(-)

--- /dev/null
+++ b/Documentation/asm-annotations.rst
@@ -0,0 +1,216 @@
+Assembler Annotations
+=====================
+
+Copyright (c) 2017-2019 Jiri Slaby
+
+This document describes the new macros for annotation of data and code in
+assembly. In particular, it contains information about ``SYM_FUNC_START``,
+``SYM_FUNC_END``, ``SYM_CODE_START``, and similar.
+
+Rationale
+---------
+Some code, such as entry points, trampolines, or boot code, needs to be
+written in assembly. As in C, such code is grouped into functions and
+accompanied by data. Standard assemblers do not force users to precisely
+mark these pieces as code or data, or even to specify their length.
+Nevertheless, assemblers provide such annotations to aid debuggers
+throughout the assembly. On top of that, developers also want to mark some
+functions as *global* in order to make them visible outside of their
+translation units.
+
+Over time, the Linux kernel has adopted macros from various projects (like
+``binutils``) to facilitate such annotations. So, for historical reasons,
+developers have been using ``ENTRY``, ``END``, ``ENDPROC``, and other
+annotations in assembly. Due to the lack of documentation, these macros are
+used in the wrong contexts at some locations. ``ENTRY`` was clearly
+intended to denote the beginning of a global symbol (be it data or code).
+``END`` used to mark the end of data or the end of special functions with a
+*non-standard* calling convention. In contrast, ``ENDPROC`` should annotate
+only the ends of *standard* functions.
+
+When these macros are used correctly, they help assemblers generate a
+well-formed object file with both symbol sizes and types set correctly. For
+example, the resulting symbol table of ``arch/x86/lib/putuser.S``::
+
+   Num:    Value          Size Type    Bind   Vis      Ndx Name
+    25: 0000000000000000    33 FUNC    GLOBAL DEFAULT    1 __put_user_1
+    29: 0000000000000030    37 FUNC    GLOBAL DEFAULT    1 __put_user_2
+    32: 0000000000000060    36 FUNC    GLOBAL DEFAULT    1 __put_user_4
+    35: 0000000000000090    37 FUNC    GLOBAL DEFAULT    1 __put_user_8
+
+This is not only important for debugging purposes. When there are properly
+annotated objects like this, tools can be run on them to generate more
+useful information. In particular, ``objtool`` can check such objects and
+fix them if needed. Currently, ``objtool`` can report missing frame pointer
+setup/destruction in functions. It can also automatically generate
+annotations for the :doc:`ORC unwinder <x86/orc-unwinder>` for most code.
+Both of these are especially important for supporting reliable stack
+traces, which are in turn necessary for :doc:`Kernel live patching
+<livepatch/livepatch>`.
+
+Caveat and Discussion
+---------------------
+As one might realize, there were only three macros previously. That is indeed
+insufficient to cover all the combinations of cases:
+
+* standard/non-standard function
+* code/data
+* global/local symbol
+
+There was a discussion_, and instead of extending the existing
+``ENTRY/END*`` macros, it was decided that brand new macros should be
+introduced::
+
+    So how about using macro names that actually show the purpose, instead
+    of importing all the crappy, historic, essentially randomly chosen
+    debug symbol macro names from the binutils and older kernels?
+
+.. _discussion: https://lkml.kernel.org/r/20170217104757.28588-1-jslaby@suse.cz
+
+Macros Description
+------------------
+
+The new macros are prefixed with the ``SYM_`` prefix and can be divided into
+three main groups:
+
+1. ``SYM_FUNC_*`` -- to annotate C-like functions. This means functions with
+   standard C calling conventions, i.e. the stack contains a return address
+   at the predefined place and a return from the function can happen in the
+   standard way. When frame pointers are enabled, the frame pointer shall be
+   saved and restored at the start and end of the function, respectively.
+
+   Checking tools like ``objtool`` should ensure such marked functions conform
+   to these rules. The tools can also easily annotate these functions with
+   debugging information (like *ORC data*) automatically.
+
+2. ``SYM_CODE_*`` -- special functions called with a special stack, such as
+   interrupt handlers with special stack content, trampolines, or startup
+   functions.
+
+   Checking tools mostly skip checking of these functions, but some debug
+   information can still be generated automatically. For correct debug data,
+   this code needs hints like ``UNWIND_HINT_REGS`` provided by developers.
+
+3. ``SYM_DATA*`` -- data belonging to ``.data`` sections and not to
+   ``.text``. Data do not contain instructions, so the tools have to treat
+   them specially: they should not treat the bytes as instructions, nor
+   assign any debug information to them.
+
+Instruction Macros
+~~~~~~~~~~~~~~~~~~
+This section covers ``SYM_FUNC_*`` and ``SYM_CODE_*`` enumerated above.
+
+* ``SYM_FUNC_START`` and ``SYM_FUNC_START_LOCAL`` are supposed to be **the
+  most frequent markings**. They are used for functions with standard calling
+  conventions -- global and local. Like in C, they both align the functions to
+  architecture-specific ``__ALIGN`` bytes. There are also ``_NOALIGN`` variants
+  for special cases where developers do not want this implicit alignment.
+
+  ``SYM_FUNC_START_WEAK`` and ``SYM_FUNC_START_WEAK_NOALIGN`` markings are
+  also offered as an assembler counterpart to the *weak* attribute known from
+  C.
+
+  All of these **shall** be coupled with ``SYM_FUNC_END``. First, it marks
+  the sequence of instructions as a function and emits its computed size
+  into the generated object file. Second, it eases checking and processing
+  of such object files, as the tools can trivially find exact function
+  boundaries.
+
+  So in most cases, developers should write something like the following,
+  with some asm instructions between the macros, of course::
+
+    SYM_FUNC_START(function_hook)
+        ... asm insns ...
+    SYM_FUNC_END(function_hook)
+
+  In fact, this kind of annotation corresponds to the now-deprecated
+  ``ENTRY`` and ``ENDPROC`` macros.
+
+* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` serve for those
+  who decided to have two or more names for one function. The typical use is::
+
+    SYM_FUNC_START_ALIAS(__memset)
+    SYM_FUNC_START(memset)
+        ... asm insns ...
+    SYM_FUNC_END(memset)
+    SYM_FUNC_END_ALIAS(__memset)
+
+  In this example, one can call either ``__memset`` or ``memset`` with the
+  same result, except that the debug information for the instructions is
+  emitted into the object file only once -- for the non-``ALIAS`` case.
+
+* ``SYM_CODE_START`` and ``SYM_CODE_START_LOCAL`` should be used only in
+  special cases -- if you know what you are doing. They are used exclusively
+  for interrupt handlers and the like, where the calling convention is not
+  the C one. ``_NOALIGN`` variants exist too. The use is the same as for the
+  ``FUNC`` category above::
+
+    SYM_CODE_START_LOCAL(bad_put_user)
+        ... asm insns ...
+    SYM_CODE_END(bad_put_user)
+
+  Again, every ``SYM_CODE_START*`` **shall** be coupled with ``SYM_CODE_END``.
+
+  To some extent, this category corresponds to the deprecated ``ENTRY`` and
+  ``END``, except that ``END`` had several other meanings too.
+
+* ``SYM_INNER_LABEL*`` macros denote a label inside a
+  ``SYM_{CODE,FUNC}_START``/``SYM_{CODE,FUNC}_END`` pair. They are very
+  similar to C labels, except that they can be made global. An example of
+  use::
+
+    SYM_CODE_START(ftrace_caller)
+        /* save_mcount_regs fills in first two parameters */
+        ...
+
+    SYM_INNER_LABEL(ftrace_caller_op_ptr, SYM_L_GLOBAL)
+        /* Load the ftrace_ops into the 3rd parameter */
+        ...
+
+    SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
+        call ftrace_stub
+        ...
+        retq
+    SYM_CODE_END(ftrace_caller)
+
+Data Macros
+~~~~~~~~~~~
+Similar to instructions, there are a couple of macros to describe data in
+the assembly.
+
+* ``SYM_DATA_START`` and ``SYM_DATA_START_LOCAL`` mark the start of some data
+  and shall be used in conjunction with either ``SYM_DATA_END``, or
+  ``SYM_DATA_END_LABEL``. The latter also adds a label to the end, so that
+  people can use both ``lstack`` and (local) ``lstack_end`` in the following
+  example::
+
+    SYM_DATA_START_LOCAL(lstack)
+        .skip 4096
+    SYM_DATA_END_LABEL(lstack, SYM_L_LOCAL, lstack_end)
+
+* ``SYM_DATA`` and ``SYM_DATA_LOCAL`` are variants for simple, mostly one-line
+  data::
+
+    SYM_DATA(HEAP,     .long rm_heap)
+    SYM_DATA(heap_end, .long rm_stack)
+
+  Internally, they expand to ``SYM_DATA_START`` paired with
+  ``SYM_DATA_END``.
+
+Support Macros
+~~~~~~~~~~~~~~
+All of the above ultimately reduce to some invocation of ``SYM_START``,
+``SYM_END``, or ``SYM_ENTRY``. Normally, developers should avoid using
+these directly.
+
+Further, in the examples above, one could see ``SYM_L_LOCAL``. There are
+also ``SYM_L_GLOBAL`` and ``SYM_L_WEAK``. All of them denote the linkage of
+the symbol they mark. They are used either in the ``_LABEL`` variants of the
+earlier macros, or in ``SYM_START``.
+
+
+Overriding Macros
+~~~~~~~~~~~~~~~~~
+Architectures can also override any of the macros in their own
+``asm/linkage.h``, including macros specifying the type of a symbol
+(``SYM_T_FUNC``, ``SYM_T_OBJECT``, and ``SYM_T_NONE``). As every macro
+described in this file is surrounded by ``#ifdef`` + ``#endif``, it is enough
+to define the macros differently in the aforementioned architecture-dependent
+header.
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -135,6 +135,14 @@ needed).
    mic/index
    scheduler/index
 
+Architecture-agnostic documentation
+-----------------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   asm-annotations
+
 Architecture-specific documentation
 -----------------------------------
 
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -13,9 +13,13 @@
 
 #ifdef __ASSEMBLY__
 
-#define GLOBAL(name)	\
-	.globl name;	\
-	name:
+/*
+ * GLOBAL is DEPRECATED
+ *
+ * use SYM_DATA_START, SYM_FUNC_START, SYM_INNER_LABEL, SYM_CODE_START, or
+ * similar
+ */
+#define GLOBAL(name)	SYM_ENTRY(name, SYM_L_GLOBAL, SYM_A_NONE)
 
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
 #define __ALIGN		.p2align 4, 0x90
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -75,32 +75,58 @@
 
 #ifdef __ASSEMBLY__
 
+/* SYM_T_FUNC -- type used by assembler to mark functions */
+#ifndef SYM_T_FUNC
+#define SYM_T_FUNC				STT_FUNC
+#endif
+
+/* SYM_T_OBJECT -- type used by assembler to mark data */
+#ifndef SYM_T_OBJECT
+#define SYM_T_OBJECT				STT_OBJECT
+#endif
+
+/* SYM_T_NONE -- type used by assembler to mark entries of unknown type */
+#ifndef SYM_T_NONE
+#define SYM_T_NONE				STT_NOTYPE
+#endif
+
+/* SYM_A_* -- align the symbol? */
+#define SYM_A_ALIGN				ALIGN
+#define SYM_A_NONE				/* nothing */
+
+/* SYM_L_* -- linkage of symbols */
+#define SYM_L_GLOBAL(name)			.globl name
+#define SYM_L_WEAK(name)			.weak name
+#define SYM_L_LOCAL(name)			/* nothing */
+
 #ifndef LINKER_SCRIPT
 #define ALIGN __ALIGN
 #define ALIGN_STR __ALIGN_STR
 
+/* === DEPRECATED annotations === */
+
 #ifndef GLOBAL
+/* deprecated, use SYM_DATA*, SYM_ENTRY, or similar */
 #define GLOBAL(name) \
 	.globl name ASM_NL \
 	name:
 #endif
 
 #ifndef ENTRY
+/* deprecated, use SYM_FUNC_START */
 #define ENTRY(name) \
-	.globl name ASM_NL \
-	ALIGN ASM_NL \
-	name:
+	SYM_FUNC_START(name)
 #endif
 #endif /* LINKER_SCRIPT */
 
 #ifndef WEAK
+/* deprecated, use SYM_FUNC_START_WEAK* */
 #define WEAK(name)	   \
-	.weak name ASM_NL   \
-	ALIGN ASM_NL \
-	name:
+	SYM_FUNC_START_WEAK(name)
 #endif
 
 #ifndef END
+/* deprecated, use SYM_FUNC_END, SYM_DATA_END, or SYM_END */
 #define END(name) \
 	.size name, .-name
 #endif
@@ -110,11 +136,214 @@
  * static analysis tools such as stack depth analyzer.
  */
 #ifndef ENDPROC
+/* deprecated, use SYM_FUNC_END */
 #define ENDPROC(name) \
-	.type name, @function ASM_NL \
-	END(name)
+	SYM_FUNC_END(name)
+#endif
+
+/* === generic annotations === */
+
+/* SYM_ENTRY -- use only if you have to for non-paired symbols */
+#ifndef SYM_ENTRY
+#define SYM_ENTRY(name, linkage, align...)		\
+	linkage(name) ASM_NL				\
+	align ASM_NL					\
+	name:
+#endif
+
+/* SYM_START -- use only if you have to */
+#ifndef SYM_START
+#define SYM_START(name, linkage, align...)		\
+	SYM_ENTRY(name, linkage, align)
+#endif
+
+/* SYM_END -- use only if you have to */
+#ifndef SYM_END
+#define SYM_END(name, sym_type)				\
+	.type name sym_type ASM_NL			\
+	.size name, .-name
+#endif
+
+/* === code annotations === */
+
+/*
+ * FUNC -- C-like functions (proper stack frame etc.)
+ * CODE -- non-C code (e.g. irq handlers with different, special stack etc.)
+ *
+ * Objtool validates stack for FUNC, but not for CODE.
+ * Objtool generates debug info for both FUNC & CODE, but needs special
+ * annotations for each CODE's start (to describe the actual stack frame).
+ *
+ * ALIAS -- does not generate debug info -- the aliased function will
+ */
+
+/* SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL_ALIGN
+#define SYM_INNER_LABEL_ALIGN(name, linkage)	\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_ALIGN)
+#endif
+
+/* SYM_INNER_LABEL -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL
+#define SYM_INNER_LABEL(name, linkage)		\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_NONE)
+#endif
+
+/*
+ * SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for one
+ * function
+ */
+#ifndef SYM_FUNC_START_LOCAL_ALIAS
+#define SYM_FUNC_START_LOCAL_ALIAS(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_FUNC_START_ALIAS -- use where there are two global names for one
+ * function
+ */
+#ifndef SYM_FUNC_START_ALIAS
+#define SYM_FUNC_START_ALIAS(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START -- use for global functions */
+#ifndef SYM_FUNC_START
+/*
+ * The same as SYM_FUNC_START_ALIAS, but we will need to distinguish these two
+ * later.
+ */
+#define SYM_FUNC_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
 #endif
 
+/* SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment */
+#ifndef SYM_FUNC_START_NOALIGN
+#define SYM_FUNC_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
 #endif
 
+/* SYM_FUNC_START_LOCAL -- use for local functions */
+#ifndef SYM_FUNC_START_LOCAL
+/* the same as SYM_FUNC_START_LOCAL_ALIAS, see comment near SYM_FUNC_START */
+#define SYM_FUNC_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
 #endif
+
+/* SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o alignment */
+#ifndef SYM_FUNC_START_LOCAL_NOALIGN
+#define SYM_FUNC_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START_WEAK -- use for weak functions */
+#ifndef SYM_FUNC_START_WEAK
+#define SYM_FUNC_START_WEAK(name)			\
+	SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment */
+#ifndef SYM_FUNC_START_WEAK_NOALIGN
+#define SYM_FUNC_START_WEAK_NOALIGN(name)		\
+	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function */
+#ifndef SYM_FUNC_END_ALIAS
+#define SYM_FUNC_END_ALIAS(name)			\
+	SYM_END(name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
+ * SYM_FUNC_START_WEAK, ...
+ */
+#ifndef SYM_FUNC_END
+/* the same as SYM_FUNC_END_ALIAS, see comment near SYM_FUNC_START */
+#define SYM_FUNC_END(name)				\
+	SYM_END(name, SYM_T_FUNC)
+#endif
+
+/* SYM_CODE_START -- use for non-C (special) functions */
+#ifndef SYM_CODE_START
+#define SYM_CODE_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o alignment */
+#ifndef SYM_CODE_START_NOALIGN
+#define SYM_CODE_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_START_LOCAL -- use for local non-C (special) functions */
+#ifndef SYM_CODE_START_LOCAL
+#define SYM_CODE_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special) functions,
+ * w/o alignment
+ */
+#ifndef SYM_CODE_START_LOCAL_NOALIGN
+#define SYM_CODE_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_END -- the end of SYM_CODE_START_LOCAL, SYM_CODE_START, ... */
+#ifndef SYM_CODE_END
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)
+#endif
+
+/* === data annotations === */
+
+/* SYM_DATA_START -- global data symbol */
+#ifndef SYM_DATA_START
+#define SYM_DATA_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_START -- local data symbol */
+#ifndef SYM_DATA_START_LOCAL
+#define SYM_DATA_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_END -- the end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END
+#define SYM_DATA_END(name)				\
+	SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END_LABEL
+#define SYM_DATA_END_LABEL(name, linkage, label)	\
+	linkage(label) ASM_NL				\
+	.type label SYM_T_OBJECT ASM_NL			\
+	label:						\
+	SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA -- start+end wrapper around simple global data */
+#ifndef SYM_DATA
+#define SYM_DATA(name, data...)				\
+	SYM_DATA_START(name) ASM_NL				\
+	data ASM_NL						\
+	SYM_DATA_END(name)
+#endif
+
+/* SYM_DATA_LOCAL -- start+end wrapper around simple local data */
+#ifndef SYM_DATA_LOCAL
+#define SYM_DATA_LOCAL(name, data...)			\
+	SYM_DATA_START_LOCAL(name) ASM_NL			\
+	data ASM_NL						\
+	SYM_DATA_END(name)
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _LINUX_LINKAGE_H */


Patches currently in stable-queue which might be from jirislaby@kernel.org are

queue-5.4/linkage-introduce-new-macros-for-assembler-symbols.patch


From xen-devel-bounces@lists.xenproject.org Thu Nov 05 22:31:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 22:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20234.45956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kanmj-0002Do-2a; Thu, 05 Nov 2020 22:31:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20234.45956; Thu, 05 Nov 2020 22:31:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kanmi-0002Dh-Vp; Thu, 05 Nov 2020 22:31:12 +0000
Received: by outflank-mailman (input) for mailman id 20234;
 Thu, 05 Nov 2020 22:31:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tWnR=EL=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kanmh-0002Dc-Po
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 22:31:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bd6ba21-4fba-4c15-8991-c4b56f25c927;
 Thu, 05 Nov 2020 22:31:10 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kanmf-0003pG-Lf; Thu, 05 Nov 2020 22:31:09 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kanmf-0007sR-Bj; Thu, 05 Nov 2020 22:31:09 +0000
X-Inumbo-ID: 7bd6ba21-4fba-4c15-8991-c4b56f25c927
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=5WKeOj4EPBmk5WJNSqPI8bfFxGxj/vofaT5jIuYIsMM=; b=i+mTOueq+E/1ZAeibRxJpWdIm6
	C5ybmV12D11q+q6uGAg7Je6FeeSxPQeIdlTNjZGpCMFbYb9NAecoxjs7do2gdtL83t346io6xEsLn
	MHBjxVVHmnks66ebI24P3jHwR6Cf3QUPMyhUIECq9VV2wCklBfbNY5/gvHRF7VgHzE9c=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: traps: Don't panic when receiving an unknown debug trap
Date: Thu,  5 Nov 2020 22:31:06 +0000
Message-Id: <20201105223106.22517-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Even if debug traps are only meant for debugging purposes, it is quite
harsh to crash Xen if one of the traps sent by the guest is not handled.

So switch from a panic() to a printk().

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8f40d0e0b6b1..a36f145e6739 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1410,7 +1410,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
         show_execution_state(regs);
         break;
     default:
-        panic("DOM%d: Unhandled debug trap %#x\n", domid, code);
+        printk("DOM%d: Unhandled debug trap %#x\n", domid, code);
         break;
     }
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 05 23:10:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 23:10:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20254.45986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaoOw-0005mF-GI; Thu, 05 Nov 2020 23:10:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20254.45986; Thu, 05 Nov 2020 23:10:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaoOw-0005m8-D5; Thu, 05 Nov 2020 23:10:42 +0000
Received: by outflank-mailman (input) for mailman id 20254;
 Thu, 05 Nov 2020 23:10:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ooO=EL=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kaoOv-0005m3-25
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 23:10:41 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18093f64-49b3-484a-bf39-9423097a5fcc;
 Thu, 05 Nov 2020 23:10:39 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0A5NAOLn011400
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 5 Nov 2020 18:10:29 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0A5NAN9x011399;
 Thu, 5 Nov 2020 15:10:23 -0800 (PST) (envelope-from ehem)
X-Inumbo-ID: 18093f64-49b3-484a-bf39-9423097a5fcc
Date: Thu, 5 Nov 2020 15:10:23 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: traps: Don't panic when receiving an unknown
 debug trap
Message-ID: <20201105231023.GA9312@mattapan.m5p.com>
References: <20201105223106.22517-1-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201105223106.22517-1-julien@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Nov 05, 2020 at 10:31:06PM +0000, Julien Grall wrote:
> Even if debug traps are only meant for debugging purposes, it is quite
> harsh to crash Xen if one of the traps sent by the guest is not handled.
> 
> So switch from a panic() to a printk().

Might this qualify as security due to potential DoS?


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Nov 05 23:12:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 05 Nov 2020 23:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20257.45998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaoQA-0005sc-RD; Thu, 05 Nov 2020 23:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20257.45998; Thu, 05 Nov 2020 23:11:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaoQA-0005sV-OH; Thu, 05 Nov 2020 23:11:58 +0000
Received: by outflank-mailman (input) for mailman id 20257;
 Thu, 05 Nov 2020 23:11:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tWnR=EL=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaoQ9-0005sJ-Cq
 for xen-devel@lists.xenproject.org; Thu, 05 Nov 2020 23:11:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 149d4174-7ad8-41c6-86dc-b38970065425;
 Thu, 05 Nov 2020 23:11:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaoQ6-0004fD-Oe; Thu, 05 Nov 2020 23:11:54 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaoQ6-000333-Gb; Thu, 05 Nov 2020 23:11:54 +0000
X-Inumbo-ID: 149d4174-7ad8-41c6-86dc-b38970065425
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=RULwjwGxYV2qDZA0JSCa3+689WEGHpw+VitmXQBEN7s=; b=aQxLwmijz2M0Wce+09dqJwjX+X
	XsMCRdf5C66b4x5+s2m5PIwxQpjRgV+Nf5vSmD9oBdSJX7e4QYVrAJa/7kg+HE+RHd7XpjtLI4jPR
	mNqxLnatKXtQqePBx8Z2+cVuBnYODyozXZV2xArWuNZcHU+0ekdQIut0Wk6Z2GuQTgzA=;
Subject: Re: [PATCH] xen/arm: traps: Don't panic when receiving an unknown
 debug trap
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201105223106.22517-1-julien@xen.org>
 <20201105231023.GA9312@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d0dfc285-9d50-b9d6-41c5-b4e9fb1ea6cb@xen.org>
Date: Thu, 5 Nov 2020 23:11:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201105231023.GA9312@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 05/11/2020 23:10, Elliott Mitchell wrote:
> On Thu, Nov 05, 2020 at 10:31:06PM +0000, Julien Grall wrote:
>> Even if debug traps are only meant for debugging purposes, it is quite
>> harsh to crash Xen if one of the traps sent by the guest is not handled.
>>
>> So switch from a panic() to a printk().
> 
> Might this qualify as security due to potential DoS?
This code path only exists with CONFIG_DEBUG=y, which is not security 
supported.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 00:20:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 00:20:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20282.46036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kapUa-00047e-Tk; Fri, 06 Nov 2020 00:20:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20282.46036; Fri, 06 Nov 2020 00:20:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kapUa-00047X-Qc; Fri, 06 Nov 2020 00:20:36 +0000
Received: by outflank-mailman (input) for mailman id 20282;
 Fri, 06 Nov 2020 00:20:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kapUa-00047S-7v
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 00:20:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb542e93-6bf4-4bd8-8090-200d65131cbf;
 Fri, 06 Nov 2020 00:20:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kapUU-0006h9-IB; Fri, 06 Nov 2020 00:20:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kapUU-0005PL-9D; Fri, 06 Nov 2020 00:20:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kapUU-0000VL-8k; Fri, 06 Nov 2020 00:20:30 +0000
X-Inumbo-ID: cb542e93-6bf4-4bd8-8090-200d65131cbf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w5zOVu40soL2j3uh7C54wT7sOsOH81q+Xr1Mf8FTR+M=; b=Fx4admNULwg77k7oAbIVkgAajT
	t0/YDPz2sctEonO3cz2vzebKIFqoErOUZQOuBoOejUBNV6CtlpV1PyKkALsi6aI02lNv4eEwdjrO1
	kp80fzt/EwTAUfcQpBthh662lIMwRSR76D4epEznUeHBr1jp8Gqcw7RZWBT266cV4qes=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156412-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156412: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6e97ed6efa701db070da0054b055c085895aba86
X-Osstest-Versions-That:
    linux=b300b28b78145b832f1112d77035111e35112cec
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 00:20:30 +0000

flight 156412 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156412/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156345

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156345
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156345
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156345
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156345
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156345
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156345
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156345
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156345
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156345
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156345
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156345
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6e97ed6efa701db070da0054b055c085895aba86
baseline version:
 linux                b300b28b78145b832f1112d77035111e35112cec

Last test of basis   156345  2020-11-01 11:14:47 Z    4 days
Testing same since   156412  2020-11-05 11:13:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alain Volmat <avolmat@me.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@canonical.com>
  Alexander Sverdlin <alexander.sverdlin@nokia.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexei Starovoitov <ast@kernel.org>
  Alok Prasad <palok@marvell.com>
  Amit Cohen <amcohen@nvidia.com>
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Donnellan <ajd@linux.ibm.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Grodzovsky <andrey.grodzovsky@amd.com>
  Andrii Nakryiko <andriin@fb.com>
  Andy Lutomirski <luto@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Antonio Borneo <antonio.borneo@st.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Artur Rojek <contact@artur-rojek.eu>
  Arun Kumar Neelakantam <aneela@codeaurora.org>
  Ashish Sangwan <ashishsangwan2@gmail.com>
  Badhri Jagan Sridharan <badhri@google.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Coddington <bcodding@redhat.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Chanwoo Choi <cw00.choi@samsung.com>
  Chao Leng <lengchao@huawei.com>
  Chao Yu <yuchao0@huawei.com>
  Chi-hsien Lin <chi-hsien.lin@cypress.com>
  Chris Lew <clew@codeaurora.org>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <zhang.lyra@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Thompson <daniel.thompson@linaro.org>
  Daniel W. S. Almeida <dwlsalmeida@gmail.com>
  Daniel Xu <dxu@dxuuu.xyz>
  Darrick J. Wong <darrick.wong@oracle.com>
  Dave Airlie <airlied@redhat.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Deepak Kumar Singh <deesin@codeaurora.org>
  Denis Efremov <efremov@linux.com>
  Diana Craciun <diana.craciun@oss.nxp.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  dmitry.torokhov@gmail.com <dmitry.torokhov@gmail.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Anderson <dianders@chromium.org>
  Douglas Gilbert <dgilbert@interlog.com>
  Eric Biggers <ebiggers@google.com>
  Eryk Brol <eryk.brol@amd.com>
  Etienne Carriere <etienne.carriere@linaro.org>
  Evan Quan <evan.quan@amd.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fangzhi Zuo <Jerry.Zuo@amd.com>
  Felipe Balbi <balbi@kernel.org>
  Ferry Toth <fntoth@gmail.com>
  Filipe Manana <fdmanana@suse.com>
  Frank Wunderlich <frank-w@public-files.de>
  Gautham R. Shenoy <ego@linux.vnet.ibm.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hagen Paul Pfeifer <hagen@jauu.net>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Hans Verkuil <hverkuil@xs4all.nl>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Helge Deller <deller@gmx.de>
  Ian Abbott <abbotti@mev.co.uk>
  Ido Schimmel <idosch@nvidia.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  J. Bruce Fields <bfields@redhat.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jamie Iles <jamie@nuviainc.com>
  Jan Kara <jack@suse.cz>
  Jann Horn <jannh@google.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jay Cornwall <jay.cornwall@amd.com>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jing Xiangfeng <jingxiangfeng@huawei.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@kernel.org>
  Jiri Slaby <jirislaby@kernel.org>
  Jiri Slaby <jslaby@suse.cz>
  Jisheng Zhang <Jisheng.Zhang@synaptics.com>
  Joakim Zhang <qiangqing.zhang@nxp.com>
  Joel Stanley <joel@jms.id.au>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <johannes.thumshirn@wdc.com>
  John Ogness <john.ogness@linutronix.de>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Bakker <xc-racer2@live.ca>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Kim Phillips <kim.phillips@amd.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Lang Dai <lang.dai@intel.com>
  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Laurentiu Tudor <laurentiu.tudor@nxp.com>
  Li Jun <jun.li@nxp.com>
  Linu Cherian <lcherian@marvell.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lukas Wunner <lukas@wunner.de>
  Luo Meng <luomeng12@huawei.com>
  Maciej W. Rozycki <macro@linux-mips.org>
  Madhav Chauhan <madhav.chauhan@amd.com>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Mahesh Salgaonkar <mahesh@linux.ibm.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marek Behún <marek.behun@nic.cz>
  Mark Brown <broonie@kernel.org>
  Martin Fuzzey <martin.fuzzey@flowbird.group>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Steigerwald <martin@lichtvoll.de>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mateusz Nosek <mateusznosek0@gmail.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Mathieu Poirier <mathieu.poirier@linaro.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Matthias Brugger <matthias.bgg@gmail.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Neuling <mikey@neuling.org>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Kalderon <michal.kalderon@marvell.com>
  Nadezda Lutovinova <lutovinova@ispras.ru>
  Nathan Lynch <nathanl@linux.ibm.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Nicholas Piggin <npiggin@gmail.com>
  Nilesh Javali <njavali@marvell.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Olga Kornievskaia <kolga@netapp.com>
  Oliver Neukum <oneukum@suse.com>
  Oliver O'Halloran <oohall@gmail.com>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paul Cercueil <paul@crapouillou.net>
  Paul Mackerras <paulus@ozlabs.org>
  Pavel Machek <pavel@ucw.cz>
  Peter Chen <peter.chen@nxp.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Mladek <pmladek@suse.com>
  Philippe Cornu <philippe.cornu@st.com>
  Ping Cheng <ping.cheng@wacom.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qiujun Huang <hqjagain@gmail.com>
  Qu Wenruo <wqu@suse.com>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ran Wang <ran.wang_1@nxp.com>
  Raul E Rangel <rrangel@chromium.org>
  Raymond Tan <raymond.tan@intel.com>
  Richard Weinberger <richard@nod.at>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sam Ravnborg <sam@ravnborg.org>
  Sandeep Singh <sandeep.singh@amd.com>
  Sanket Goswami <Sanket.Goswami@amd.com>
  Santosh Shilimkar <ssantosh@kernel.org>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Sathishkumar Muruganandam <murugana@codeaurora.org>
  Sean Nyekjaer <sean@geanix.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Song Liu <songliubraving@fb.com>
  Stefan Fritsch <sf@sfritsch.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudeep Holla <sudeep.holla@arm.com>
  Sven Schnelle <svens@linux.ibm.com>
  syzbot+75d51fe5bf4ebe988518@syzkaller.appspotmail.com
  syzbot+af90d47a37376844e731@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tero Kristo <t-kristo@ti.com>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tobias Jordan <kernel@cdqe.de>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentin Schneider <valentin.schneider@arm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Wei Huang <wei.huang2@amd.com>
  Wen Gong <wgong@codeaurora.org>
  Wesley Chalmers <Wesley.Chalmers@amd.com>
  Will Deacon <will@kernel.org>
  Wim Van Sebroeck <wim@linux-watchdog.org>
  Wolfram Sang <wsa@kernel.org>
  Wright Feng <wright.feng@cypress.com>
  Xia Jiang <xia.jiang@mediatek.com>
  Xiang Chen <chenxiang66@hisilicon.com>
  Xie He <xie.he.0141@gmail.com>
  Xiongfeng Wang <wangxiongfeng2@huawei.com>
  Xiubo Li <xiubli@redhat.com>
  Yonghong Song <yhs@fb.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  Zhao Heming <heming.zhao@suse.com>
  Zhen Lei <thunder.leizhen@huawei.com>
  Zhengyuan Liu <liuzhengyuan@tj.kylinos.cn>
  Zhihao Cheng <chengzhihao1@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b300b28b7814..6e97ed6efa70  6e97ed6efa701db070da0054b055c085895aba86 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 01:58:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 01:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20315.46097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kar10-0008DO-Q0; Fri, 06 Nov 2020 01:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20315.46097; Fri, 06 Nov 2020 01:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kar10-0008DF-Iz; Fri, 06 Nov 2020 01:58:10 +0000
Received: by outflank-mailman (input) for mailman id 20315;
 Fri, 06 Nov 2020 01:58:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6QlO=EM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kar0z-0008DA-Ef
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 01:58:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0aedff8b-8078-4eaa-b6fc-0d3864a6d986;
 Fri, 06 Nov 2020 01:58:08 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 82474206FB;
 Fri,  6 Nov 2020 01:58:07 +0000 (UTC)
X-Inumbo-ID: 0aedff8b-8078-4eaa-b6fc-0d3864a6d986
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604627888;
	bh=p2AqvGg5ZhzQEJT67RSt8noNjzmsIyRs4D95c2t8J50=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Zi/2enwiCdNc8zBEE86t/rJ9ap0GIp1AaHdJZFJh19G7IxMOnvge51/hRRKUZ6eIo
	 F28jZ/NQpbcnr0/VMglUx3gPIKtvtDzA0/OWiCnrVadY4aaJHYIQbaahS9W5KQSDVN
	 ighSDJAuELPYHdMCtYBa2NTeEJfZ1M9kP1HplzgU=
Date: Thu, 5 Nov 2020 17:58:06 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    George Dunlap <george.dunlap@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>
Subject: Re: preparations for 4.14.1
In-Reply-To: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
Message-ID: <alpine.DEB.2.21.2011051753580.2323@sstabellini-ThinkPad-T480s>
References: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 4 Nov 2020, Jan Beulich wrote:
> All,
> 
> the release is due in a couple of weeks' time. Please point out
> backports you find missing from the respective staging branch,
> but which you consider relevant. (Ian: Please double check
> there are indeed no tools side backports needed here.)
> 
> Julien, Stefano, on the Arm side I'd like to ask for
> 
> 5d45ecabe3c0 xen/arm64: force gcc 10+ to always inline generic atomics helpers
> 
> just like I did when sending the respective 4.13.2 / 4.12.4
> mail. Is there a particular reason it wasn't put in?

No, I have just backported 5d45ecabe3c0 and a couple of other fixes.

Jan, do you think we should also backport the following?

8856a914b build: also check for empty .bss.* in .o -> .init.o conversion


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 01:59:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 01:59:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20318.46108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kar2W-0008KD-1P; Fri, 06 Nov 2020 01:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20318.46108; Fri, 06 Nov 2020 01:59:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kar2V-0008K6-Un; Fri, 06 Nov 2020 01:59:43 +0000
Received: by outflank-mailman (input) for mailman id 20318;
 Fri, 06 Nov 2020 01:59:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6QlO=EM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kar2U-0008Jy-RR
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 01:59:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62ae4875-9c15-4a15-9e47-a5eb2a995a9c;
 Fri, 06 Nov 2020 01:59:42 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E86BA20719;
 Fri,  6 Nov 2020 01:59:40 +0000 (UTC)
X-Inumbo-ID: 62ae4875-9c15-4a15-9e47-a5eb2a995a9c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604627981;
	bh=9qznLD2HeBlwzP97J34MgKmZHCkX8pUNtayvyQfBzqo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=puFRe2O8cjVb892R2fBPY26c4+YfScVOwr9Z0ch2/Hf/8wVLYLoA8D0WK93kYbxLi
	 cYHiTvUQy52zbMcoPMrze52eShnZjs2pSBs2eIsJ1B0zyWfhAfYVSAIiZoS1cwxF6g
	 btJeOq7K3jrVtAM6uFudJueNN3xjkhqP7ggzLkaE=
Date: Thu, 5 Nov 2020 17:59:40 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: traps: Don't panic when receiving an unknown
 debug trap
In-Reply-To: <20201105223106.22517-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2011051759330.2323@sstabellini-ThinkPad-T480s>
References: <20201105223106.22517-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 5 Nov 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Even if debug traps are only meant for debugging purposes, it is quite
> harsh to crash Xen if one of the traps sent by the guest is not handled.
> 
> So switch from a panic() to a printk().
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  xen/arch/arm/traps.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 8f40d0e0b6b1..a36f145e6739 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1410,7 +1410,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
>          show_execution_state(regs);
>          break;
>      default:
> -        panic("DOM%d: Unhandled debug trap %#x\n", domid, code);
> +        printk("DOM%d: Unhandled debug trap %#x\n", domid, code);
>          break;
>      }
>  }
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 05:08:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 05:08:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20374.46228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1katyP-0000MR-KV; Fri, 06 Nov 2020 05:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20374.46228; Fri, 06 Nov 2020 05:07:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1katyP-0000MK-HB; Fri, 06 Nov 2020 05:07:41 +0000
Received: by outflank-mailman (input) for mailman id 20374;
 Fri, 06 Nov 2020 05:07:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1katyN-0000Lb-RD
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 05:07:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24458bd8-64bd-4943-bf97-e7f84edaa07c;
 Fri, 06 Nov 2020 05:07:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1katyG-0000oo-64; Fri, 06 Nov 2020 05:07:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1katyF-0004DU-RL; Fri, 06 Nov 2020 05:07:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1katyF-0001fw-Qd; Fri, 06 Nov 2020 05:07:31 +0000
X-Inumbo-ID: 24458bd8-64bd-4943-bf97-e7f84edaa07c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dCMfPbL3EaZhsHc0wJG2LRW0KSXYTblNpum95vPXP64=; b=fKBFwY2MFtB2Hzc6ptaI74i1nr
	/HWUTTEroSm8FQV/0GlsOZ0d82k6c8n+DToccN7azDOvLkrEuUxj64XRT/O5c76POPQzA4D6Exv28
	wzCqyQTTI2Vcjz037ANdc/n3iRNowwlTF9axTTXmD6nUF2J1p3o+R5/B5YBt4nnBaWp4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156423-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156423: FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:<job status>:broken:regression
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:host-install(5):broken:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4f9294d21c47415376215d68a0298e88582b8e7a
X-Osstest-Versions-That:
    xen=97b7b5567fba6918a656ad349051b5343b5dea2e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 05:07:31 +0000

flight 156423 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156423/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-cubietruck    <job status>                broken in 156398

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-cubietruck 5 host-install(5) broken in 156398 pass in 156423
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2        fail pass in 156398
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 156398

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2 19 guest-localmigrate/x10 fail in 156398 like 156358
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 156398 like 156358
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 156398 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156358
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156358
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156358
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156358
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156358
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156358
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156358
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4f9294d21c47415376215d68a0298e88582b8e7a
baseline version:
 xen                  97b7b5567fba6918a656ad349051b5343b5dea2e

Last test of basis   156358  2020-11-02 08:38:03 Z    3 days
Testing same since   156398  2020-11-04 09:06:02 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-cubietruck broken

Not pushing.

------------------------------------------------------------
commit 4f9294d21c47415376215d68a0298e88582b8e7a
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Wed Nov 4 09:36:36 2020 +0100

    SUPPORT.md: Desupport qemu trad except stub dm
    
    While investigating XSA-335 we discovered that many upstream security
    fixes were missing.  It is not practical to backport them.  There is
    no good reason to be running this very ancient version of qemu, except
    that it is the only way to run a stub dm which is currently supported
    by upstream.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 8587160b3e2951b722d395a0346bb17c3c22152f
    master date: 2020-11-04 09:22:37 +0100
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 05:08:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 05:08:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20292.46243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1katyu-0000TP-Uw; Fri, 06 Nov 2020 05:08:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20292.46243; Fri, 06 Nov 2020 05:08:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1katyu-0000TI-Rf; Fri, 06 Nov 2020 05:08:12 +0000
Received: by outflank-mailman (input) for mailman id 20292;
 Fri, 06 Nov 2020 00:35:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rXhF=EM=redhat.com=bmasney@srs-us1.protection.inumbo.net>)
 id 1kapjG-0005Du-IE
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 00:35:46 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 24a5d140-b925-4246-ad47-1ba62707fb0c;
 Fri, 06 Nov 2020 00:35:45 +0000 (UTC)
Received: from mail-qv1-f71.google.com (mail-qv1-f71.google.com
 [209.85.219.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-509-oBfxYjUMOlOlj2Sr5S426Q-1; Thu, 05 Nov 2020 19:35:44 -0500
Received: by mail-qv1-f71.google.com with SMTP id q19so2085366qvs.5
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 16:35:44 -0800 (PST)
Received: from tp-x1.redhat.com (c-98-239-145-235.hsd1.wv.comcast.net.
 [98.239.145.235])
 by smtp.gmail.com with ESMTPSA id b3sm2002837qte.85.2020.11.05.16.35.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 16:35:42 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rXhF=EM=redhat.com=bmasney@srs-us1.protection.inumbo.net>)
	id 1kapjG-0005Du-IE
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 00:35:46 +0000
X-Inumbo-ID: 24a5d140-b925-4246-ad47-1ba62707fb0c
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 24a5d140-b925-4246-ad47-1ba62707fb0c;
	Fri, 06 Nov 2020 00:35:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604622945;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=8cFssZ0LJiNsdeVizgTqZXbeUskMHHnTsA2koKzQk4Y=;
	b=Rpp4WvTmcBGh64Tds/KVDD8VVVW9x9VBu7eFzTva6HReuDQPLBPxaOIo4/CNlQOcQQCYJY
	tXkT7sli7AQeKDyNg7NahYvSVdpSn74HfVWvibvZi6S7JG8vs2FX3StUENabFYEGF/szQz
	v/DvUkW6Qs2FFQAUJRXZ/PZnQ9R8FlU=
Received: from mail-qv1-f71.google.com (mail-qv1-f71.google.com
 [209.85.219.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-509-oBfxYjUMOlOlj2Sr5S426Q-1; Thu, 05 Nov 2020 19:35:44 -0500
X-MC-Unique: oBfxYjUMOlOlj2Sr5S426Q-1
Received: by mail-qv1-f71.google.com with SMTP id q19so2085366qvs.5
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 16:35:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=8cFssZ0LJiNsdeVizgTqZXbeUskMHHnTsA2koKzQk4Y=;
        b=pawlFFp99l74iiahRy94bE3jGq8P33UQ669zMHLYCHQ146US5ZAXrQBmnY4CWH5124
         yUVFeUJ1GbVrS9nb1YSixeViYTXfbdFScDacyByXIGgLNKAmRvMJDz5A2MI4q5J1K8uw
         KZFvB8hLbPze969f8pN42EmFUxJNKvXSyNM8OgpGCkJyPvDJa0H+war7JKrouqRraQva
         epcfcs+4rx24I5W0X86jQX8/PbDCHDMBPhGZvu4yLVzqAkA8+uefkdYU2BFCmmmH7sBw
         OZc0t2/7UDSo2Ubpx9dTEHp/xWxgCTM5hQFO9jm+ruzdyXBo/Z1PgBGryc1kZkhdiNQI
         /WzA==
X-Gm-Message-State: AOAM530/rL3aons88OGrXcu0dYi7qr4Cx+LA2ABklvR51F0EUr6y+SdP
	wYnK4hC/KQH8sGZYDd6k3e9vdEYM8/F2QFhi6j3j8x0rsNXfn7FRwwoGFuSmAn45nI+iHT6yAw7
	4/EUazCzQ0NKmEvdP4QZD4Wq56ZE=
X-Received: by 2002:a37:617:: with SMTP id 23mr4692633qkg.256.1604622943547;
        Thu, 05 Nov 2020 16:35:43 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxcXHlYQXweZgP7+K8o61FNzv2e/Th67Tuvn2dyHTEh2Lw53WCzH/qHH2/nUUSBWuJ16IKdJg==
X-Received: by 2002:a37:617:: with SMTP id 23mr4692615qkg.256.1604622943297;
        Thu, 05 Nov 2020 16:35:43 -0800 (PST)
Received: from tp-x1.redhat.com (c-98-239-145-235.hsd1.wv.comcast.net. [98.239.145.235])
        by smtp.gmail.com with ESMTPSA id b3sm2002837qte.85.2020.11.05.16.35.42
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 05 Nov 2020 16:35:42 -0800 (PST)
From: Brian Masney <bmasney@redhat.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Cc: tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	dustymabe@redhat.com
Subject: [PATCH] x86/xen: fix warning when running with nosmt mitigations
Date: Thu,  5 Nov 2020 19:35:29 -0500
Message-Id: <20201106003529.391649-1-bmasney@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=bmasney@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), is
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
---
 arch/x86/xen/spinlock.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 799f4eba0a62..4a052459a08e 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -93,9 +93,24 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated and only the primary thread on each CPU core
+	 * is used. In this situation, xen_hvm_smp_prepare_cpus(), and more
+	 * importantly xen_init_lock_cpu(), is not called, so the
+	 * lock_kicker_irq is not initialized for the secondary CPUs. Let's
+	 * exit early if the irq is not set to avoid a warning in the console
+	 * log.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 05:08:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 05:08:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20299.46250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1katyv-0000Ty-Aw; Fri, 06 Nov 2020 05:08:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20299.46250; Fri, 06 Nov 2020 05:08:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1katyv-0000Tl-42; Fri, 06 Nov 2020 05:08:13 +0000
Received: by outflank-mailman (input) for mailman id 20299;
 Fri, 06 Nov 2020 00:47:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rXhF=EM=redhat.com=bmasney@srs-us1.protection.inumbo.net>)
 id 1kapuw-00066e-Rt
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 00:47:50 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 03bb0b44-ec3e-4d54-85ce-d493d375b0a4;
 Fri, 06 Nov 2020 00:47:47 +0000 (UTC)
Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com
 [209.85.222.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-442-_f3ycuUTPVimMG1z3kOKgw-1; Thu, 05 Nov 2020 19:47:45 -0500
Received: by mail-qk1-f197.google.com with SMTP id f126so2187281qke.17
 for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 16:47:45 -0800 (PST)
Received: from tp-x1 (c-98-239-145-235.hsd1.wv.comcast.net. [98.239.145.235])
 by smtp.gmail.com with ESMTPSA id
 d133sm2374130qke.106.2020.11.05.16.47.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 05 Nov 2020 16:47:44 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rXhF=EM=redhat.com=bmasney@srs-us1.protection.inumbo.net>)
	id 1kapuw-00066e-Rt
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 00:47:50 +0000
X-Inumbo-ID: 03bb0b44-ec3e-4d54-85ce-d493d375b0a4
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 03bb0b44-ec3e-4d54-85ce-d493d375b0a4;
	Fri, 06 Nov 2020 00:47:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604623667;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qyPnP4lF1Yi0XyDpJZZP5+UOnDqVjqVsDwjLlXmGNic=;
	b=J2l6QohYRr05Y4dYaT6txMFrxZahuftn0584OSmokXjF2Ix1tr54KEGfms8Nbek76F6UAr
	bivCcm/o/t61n5JvgXVpcR4HGISIe1h1sMFIP+p0Rn76GJEd3NxHAUvKdGKej5IGgGQvwc
	DEDW6Tab0THXEhJX5SI1sVOjBX5xSg0=
Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com
 [209.85.222.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-442-_f3ycuUTPVimMG1z3kOKgw-1; Thu, 05 Nov 2020 19:47:45 -0500
X-MC-Unique: _f3ycuUTPVimMG1z3kOKgw-1
Received: by mail-qk1-f197.google.com with SMTP id f126so2187281qke.17
        for <xen-devel@lists.xenproject.org>; Thu, 05 Nov 2020 16:47:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=qyPnP4lF1Yi0XyDpJZZP5+UOnDqVjqVsDwjLlXmGNic=;
        b=ROYnjUvdVDVgnFpG2heWEhKSZsvpq7Rw09+DldUAj8XJHUZKj+FODKX34GX5BBt5QE
         USz6+W3jOnGwlI9O58qURBrIyUCzFB49eU9XS8pGa8sRjkafoNtMao/+rhjrrbAA3K/7
         BTP3g8wUru7AsBQB9ybVQ7OVWd2dG6oZ4Vip+u04cj1rwIS75dQFVevEDewGg7KkBbmV
         /2rTm7FXB0IEsMTpR/+Fgg46IDzkCM6PcGVFL/wEnYcbZOkQZmPiesOA/jv5ml1GoHIZ
         Xrc2om/v9Ufz0LDw431ioSmpZ2abk8W+dD9Q+Qod5JhGhKQYSQKvGcKMtM7ba28arKbk
         J7qg==
X-Gm-Message-State: AOAM533fsKdrBAROhPDAt+lu/Q+C8a2VjVcxXLhpAG+DwW7Hp6iFZOEj
	h7qqrqHh2Z0wuNs9GEOK2SByk+QzzCUSGGNVa80li7CCYkaUhxpShPLFbp1IDnX68whYgWouCOk
	zAYF+ru6ordqxkNUmE70khexV5cI=
X-Received: by 2002:ac8:6d1:: with SMTP id j17mr4841433qth.230.1604623665165;
        Thu, 05 Nov 2020 16:47:45 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxbRk7Wv/ie/4F9jx8fbSx+H1AvkziIsrSKBGbSjzVCO8ww8iY9yV90ttVKnnSFxemgsIHUZw==
X-Received: by 2002:ac8:6d1:: with SMTP id j17mr4841425qth.230.1604623664969;
        Thu, 05 Nov 2020 16:47:44 -0800 (PST)
Received: from tp-x1 (c-98-239-145-235.hsd1.wv.comcast.net. [98.239.145.235])
        by smtp.gmail.com with ESMTPSA id d133sm2374130qke.106.2020.11.05.16.47.43
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 05 Nov 2020 16:47:44 -0800 (PST)
Date: Thu, 5 Nov 2020 19:47:43 -0500
From: Brian Masney <bmasney@redhat.com>
To: boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
	hpa@zytor.com, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org, dustymabe@redhat.com
Subject: Re: [PATCH] x86/xen: fix warning when running with nosmt mitigations
Message-ID: <20201106004743.GA380136@tp-x1>
References: <20201106003529.391649-1-bmasney@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201106003529.391649-1-bmasney@redhat.com>
User-Agent: Mutt/1.14.6 (2020-07-11)
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=bmasney@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Thu, Nov 05, 2020 at 07:35:29PM -0500, Brian Masney wrote:
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 799f4eba0a62..4a052459a08e 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -93,9 +93,24 @@ void xen_init_lock_cpu(int cpu)
>  
>  void xen_uninit_lock_cpu(int cpu)
>  {
> +	int irq;
> +
>  	if (!xen_pvspin)
>  		return;
>  
> +	/*
> +	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
> +	 * CPUs are not activated and only the primary thread on each CPU core
> +	 * is used. In this situation, xen_hvm_smp_prepare_cpus(), and more
> +	 * importantly xen_init_lock_cpu(), is not called, so the
> +	 * lock_kicker_irq is not initialized for the secondary CPUs. Let's
> +	 * exit early if the irq is not set to avoid a warning in the console
> +	 * log.
> +	 */
> +	irq = per_cpu(lock_kicker_irq, cpu);
> +	if (irq == -1)
> +		return;
> +
>  	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);

As soon as I saw this on lore, I realized that I should have passed the
irq variable to unbind_from_irqhandler() rather than doing another
per_cpu() lookup. I'll wait for feedback about the general approach
before posting a v2.

Brian
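
A minimal sketch of the v2 hunk Brian describes above (hypothetical; not an
actually posted patch) would simply reuse the already-loaded value:

```diff
 	irq = per_cpu(lock_kicker_irq, cpu);
 	if (irq == -1)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
```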



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 05:41:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 05:41:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20398.46278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kauV3-00041e-6C; Fri, 06 Nov 2020 05:41:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20398.46278; Fri, 06 Nov 2020 05:41:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kauV3-00041X-3B; Fri, 06 Nov 2020 05:41:25 +0000
Received: by outflank-mailman (input) for mailman id 20398;
 Fri, 06 Nov 2020 05:41:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kauV1-00040y-Sy
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 05:41:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4a12626-c426-472f-8db3-43c5058f6699;
 Fri, 06 Nov 2020 05:41:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kauUv-0001Ua-DS; Fri, 06 Nov 2020 05:41:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kauUv-0005dV-7C; Fri, 06 Nov 2020 05:41:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kauUv-0006Fr-6l; Fri, 06 Nov 2020 05:41:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kauV1-00040y-Sy
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 05:41:23 +0000
X-Inumbo-ID: c4a12626-c426-472f-8db3-43c5058f6699
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c4a12626-c426-472f-8db3-43c5058f6699;
	Fri, 06 Nov 2020 05:41:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DN7MJoszBXZnKV1IgAR0s1ACwyLSQeWl1dNC4mJGWGE=; b=aZtmtSGBSZAbdZA++GHqbEy1qy
	f3jYHnplimdo+GKbhO7/PzFunptQhHpnHbODNNmWRLMzeN257bTdYxwOs/Lkwu+AaDQj3vsa9Zij3
	q9h7LUS9aQEU3wuiVz2OTseOcrXYKrV2rwiE4M1HPR00WrB/5byAAhyrL80BdBba6iG8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kauUv-0001Ua-DS; Fri, 06 Nov 2020 05:41:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kauUv-0005dV-7C; Fri, 06 Nov 2020 05:41:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kauUv-0006Fr-6l; Fri, 06 Nov 2020 05:41:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156502-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156502: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=957708c2d1ae25d7375abd5e5e70c3043d64f1f1
X-Osstest-Versions-That:
    xen=e006b2e3be72e502b86bd9e1405417abd87bdfed
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 05:41:17 +0000

flight 156502 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156502/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  957708c2d1ae25d7375abd5e5e70c3043d64f1f1
baseline version:
 xen                  e006b2e3be72e502b86bd9e1405417abd87bdfed

Last test of basis   156444  2020-11-05 16:00:29 Z    0 days
Testing same since   156502  2020-11-06 03:01:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e006b2e3be..957708c2d1  957708c2d1ae25d7375abd5e5e70c3043d64f1f1 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20413.46305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavtB-0003R4-0p; Fri, 06 Nov 2020 07:10:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20413.46305; Fri, 06 Nov 2020 07:10:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavtA-0003Qx-UA; Fri, 06 Nov 2020 07:10:24 +0000
Received: by outflank-mailman (input) for mailman id 20413;
 Fri, 06 Nov 2020 07:10:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kavtA-0003Qs-1b
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:10:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6b4e055-bdb2-4878-ade0-0b0770b80ab4;
 Fri, 06 Nov 2020 07:10:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 18084AB8F;
 Fri,  6 Nov 2020 07:10:21 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kavtA-0003Qs-1b
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:10:24 +0000
X-Inumbo-ID: f6b4e055-bdb2-4878-ade0-0b0770b80ab4
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f6b4e055-bdb2-4878-ade0-0b0770b80ab4;
	Fri, 06 Nov 2020 07:10:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604646621;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=8nREtTcjfnzeI+eMBht41WVcOapdA3Tg0/MyKJruAb4=;
	b=rcwCcd6HLqOyJ4Pr8h0zqViohye+ZMz8KFJxCoZ5xbtOcRTUQTfUlCKK96/w7Xu4VA4IF/
	hGnNx4AOi2L8ByaVU3Jo4jvxvZsvzaGA/pWoM7JTYjJsS3Mn/NJs9hwVKHzQEl6dazn4U9
	hQmyvN+eVtOFxrrd3u/N9qeoRzZ3JNc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 18084AB8F;
	Fri,  6 Nov 2020 07:10:21 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/3] introduce and use xvmalloc() et al / shrink x86 xstate
 area
Message-ID: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Date: Fri, 6 Nov 2020 08:10:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While these may seem somewhat unrelated, the connection between them
is the middle one of the patches:

1: mm: introduce xvmalloc() et al and use for grant table allocations
2: x86/xstate: use xvzalloc() for save area allocation
3: x86/xstate: re-size save area when CPUID policy changes

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:11:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:11:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20416.46317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavuc-0003YC-Cd; Fri, 06 Nov 2020 07:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20416.46317; Fri, 06 Nov 2020 07:11:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavuc-0003Y5-9W; Fri, 06 Nov 2020 07:11:54 +0000
Received: by outflank-mailman (input) for mailman id 20416;
 Fri, 06 Nov 2020 07:11:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kavua-0003Xx-HW
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:11:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee6b1276-9ff9-433a-adc1-172ddbc26809;
 Fri, 06 Nov 2020 07:11:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28DBEADE8;
 Fri,  6 Nov 2020 07:11:50 +0000 (UTC)
X-Inumbo-ID: ee6b1276-9ff9-433a-adc1-172ddbc26809
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604646710;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pX4ONVtHGJzi69SDx/yzKFYwcoXyNjHMUa/NNVNyz5U=;
	b=BEt4VvtQfLMRQ32JK8W8lRujrMvuiAHsOc/ivGLIv5FYuGYEBnM98ZkZGEzTL8r3iE6vLK
	y2dawlrH0c9/Exjq4UWFsKGzoIahZDCcMexC19nIYGHXb2nqZBW1qDFs5mSK0t3bdORYDs
	ROGQweOWmN6ETZFjIvcDkiIxwTieJQA=
Subject: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant table
 allocations
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Message-ID: <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
Date: Fri, 6 Nov 2020 08:11:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All of the array allocations in grant_table_init() can exceed a page's
worth of memory, which xmalloc()-based interfaces aren't really suitable
for after boot. Introduce interfaces dynamically switching between
xmalloc() et al and vmalloc() et al, based on requested size, and use
them instead.

All the wrappers in the new header get cloned mostly verbatim from
xmalloc.h, with the sole adjustment of using size_t (instead of
unsigned long) for sizes and unsigned int for alignments.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -37,7 +37,7 @@
 #include <xen/iommu.h>
 #include <xen/paging.h>
 #include <xen/keyhandler.h>
-#include <xen/vmap.h>
+#include <xen/xvmalloc.h>
 #include <xen/nospec.h>
 #include <xsm/xsm.h>
 #include <asm/flushtlb.h>
@@ -1925,27 +1925,28 @@ int grant_table_init(struct domain *d, i
     d->grant_table = gt;
 
     /* Active grant table. */
-    gt->active = xzalloc_array(struct active_grant_entry *,
-                               max_nr_active_grant_frames(gt));
+    gt->active = xvzalloc_array(struct active_grant_entry *,
+                                max_nr_active_grant_frames(gt));
     if ( gt->active == NULL )
         goto out;
 
     /* Tracking of mapped foreign frames table */
     if ( gt->max_maptrack_frames )
     {
-        gt->maptrack = vzalloc(gt->max_maptrack_frames * sizeof(*gt->maptrack));
+        gt->maptrack = xvzalloc_array(struct grant_mapping *,
+                                      gt->max_maptrack_frames);
         if ( gt->maptrack == NULL )
             goto out;
     }
 
     /* Shared grant table. */
-    gt->shared_raw = xzalloc_array(void *, gt->max_grant_frames);
+    gt->shared_raw = xvzalloc_array(void *, gt->max_grant_frames);
     if ( gt->shared_raw == NULL )
         goto out;
 
     /* Status pages for grant table - for version 2 */
-    gt->status = xzalloc_array(grant_status_t *,
-                               grant_to_status_frames(gt->max_grant_frames));
+    gt->status = xvzalloc_array(grant_status_t *,
+                                grant_to_status_frames(gt->max_grant_frames));
     if ( gt->status == NULL )
         goto out;
 
@@ -3870,19 +3871,19 @@ grant_table_destroy(
 
     for ( i = 0; i < nr_grant_frames(t); i++ )
         free_xenheap_page(t->shared_raw[i]);
-    xfree(t->shared_raw);
+    xvfree(t->shared_raw);
 
     for ( i = 0; i < nr_maptrack_frames(t); i++ )
         free_xenheap_page(t->maptrack[i]);
-    vfree(t->maptrack);
+    xvfree(t->maptrack);
 
     for ( i = 0; i < nr_active_grant_frames(t); i++ )
         free_xenheap_page(t->active[i]);
-    xfree(t->active);
+    xvfree(t->active);
 
     for ( i = 0; i < nr_status_frames(t); i++ )
         free_xenheap_page(t->status[i]);
-    xfree(t->status);
+    xvfree(t->status);
 
     xfree(t);
     d->grant_table = NULL;
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -7,6 +7,7 @@
 #include <xen/spinlock.h>
 #include <xen/types.h>
 #include <xen/vmap.h>
+#include <xen/xvmalloc.h>
 #include <asm/page.h>
 
 static DEFINE_SPINLOCK(vm_lock);
@@ -299,11 +300,29 @@ void *vzalloc(size_t size)
     return p;
 }
 
-void vfree(void *va)
+static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
 {
-    unsigned int i, pages;
+    unsigned int i;
     struct page_info *pg;
     PAGE_LIST_HEAD(pg_list);
+
+    ASSERT(pages);
+
+    for ( i = 0; i < pages; i++ )
+    {
+        pg = vmap_to_page(va + i * PAGE_SIZE);
+        ASSERT(pg);
+        page_list_add(pg, &pg_list);
+    }
+    vunmap(va);
+
+    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
+        free_domheap_page(pg);
+}
+
+void vfree(void *va)
+{
+    unsigned int pages;
     enum vmap_region type = VMAP_DEFAULT;
 
     if ( !va )
@@ -315,18 +334,71 @@ void vfree(void *va)
         type = VMAP_XEN;
         pages = vm_size(va, type);
     }
-    ASSERT(pages);
 
-    for ( i = 0; i < pages; i++ )
+    _vfree(va, pages, type);
+}
+
+void xvfree(void *va)
+{
+    unsigned int pages = vm_size(va, VMAP_DEFAULT);
+
+    if ( pages )
+        _vfree(va, pages, VMAP_DEFAULT);
+    else
+        xfree(va);
+}
+
+void *_xvmalloc(size_t size, unsigned int align)
+{
+    ASSERT(align <= PAGE_SIZE);
+    return size <= PAGE_SIZE ? _xmalloc(size, align) : vmalloc(size);
+}
+
+void *_xvzalloc(size_t size, unsigned int align)
+{
+    ASSERT(align <= PAGE_SIZE);
+    return size <= PAGE_SIZE ? _xzalloc(size, align) : vzalloc(size);
+}
+
+void *_xvrealloc(void *va, size_t size, unsigned int align)
+{
+    size_t pages = vm_size(va, VMAP_DEFAULT);
+    void *ptr;
+
+    ASSERT(align <= PAGE_SIZE);
+
+    if ( !pages )
     {
-        struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);
+        if ( size <= PAGE_SIZE )
+            return _xrealloc(va, size, align);
 
-        ASSERT(page);
-        page_list_add(page, &pg_list);
+        ptr = vmalloc(size);
+        if ( ptr && va && va != ZERO_BLOCK_PTR )
+        {
+            /*
+             * xmalloc-based allocations up to PAGE_SIZE don't cross page
+             * boundaries. Therefore, without needing to know the exact
+             * prior allocation size, simply copy the entire tail of the
+             * page containing the earlier allocation.
+             */
+            memcpy(ptr, va, PAGE_SIZE - PAGE_OFFSET(va));
+            xfree(va);
+        }
+    }
+    else if ( pages == PFN_UP(size) )
+        ptr = va;
+    else
+    {
+        ptr = _xvmalloc(size, align);
+        if ( ptr )
+        {
+            memcpy(ptr, va, min(size, pages << PAGE_SHIFT));
+            vfree(va);
+        }
+        else if ( pages > PFN_UP(size) )
+            ptr = va;
     }
-    vunmap(va);
 
-    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
-        free_domheap_page(pg);
+    return ptr;
 }
 #endif
--- /dev/null
+++ b/xen/include/xen/xvmalloc.h
@@ -0,0 +1,70 @@
+
+#ifndef __XVMALLOC_H__
+#define __XVMALLOC_H__
+
+#include <xen/cache.h>
+#include <xen/types.h>
+
+/*
+ * Xen malloc/free-style interface.
+ */
+
+/* Allocate space for typed object. */
+#define xvmalloc(_type) ((_type *)_xvmalloc(sizeof(_type), __alignof__(_type)))
+#define xvzalloc(_type) ((_type *)_xvzalloc(sizeof(_type), __alignof__(_type)))
+
+/* Allocate space for array of typed objects. */
+#define xvmalloc_array(_type, _num) \
+    ((_type *)_xvmalloc_array(sizeof(_type), __alignof__(_type), _num))
+#define xvzalloc_array(_type, _num) \
+    ((_type *)_xvzalloc_array(sizeof(_type), __alignof__(_type), _num))
+
+/* Allocate space for a structure with a flexible array of typed objects. */
+#define xvzalloc_flex_struct(type, field, nr) \
+    ((type *)_xvzalloc(offsetof(type, field[nr]), __alignof__(type)))
+
+#define xvmalloc_flex_struct(type, field, nr) \
+    ((type *)_xvmalloc(offsetof(type, field[nr]), __alignof__(type)))
+
+/* Re-allocate space for a structure with a flexible array of typed objects. */
+#define xvrealloc_flex_struct(ptr, field, nr)                          \
+    ((typeof(ptr))_xvrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \
+                             __alignof__(typeof(*(ptr)))))
+
+/* Allocate untyped storage. */
+#define xvmalloc_bytes(_bytes) _xvmalloc(_bytes, SMP_CACHE_BYTES)
+#define xvzalloc_bytes(_bytes) _xvzalloc(_bytes, SMP_CACHE_BYTES)
+
+/* Free any of the above. */
+extern void xvfree(void *);
+
+/* Free an allocation, and zero the pointer to it. */
+#define XVFREE(p) do { \
+    xvfree(p);         \
+    (p) = NULL;        \
+} while ( false )
+
+/* Underlying functions */
+extern void *_xvmalloc(size_t size, unsigned int align);
+extern void *_xvzalloc(size_t size, unsigned int align);
+extern void *_xvrealloc(void *ptr, size_t size, unsigned int align);
+
+static inline void *_xvmalloc_array(
+    size_t size, unsigned int align, unsigned long num)
+{
+    /* Check for overflow. */
+    if ( size && num > UINT_MAX / size )
+        return NULL;
+    return _xvmalloc(size * num, align);
+}
+
+static inline void *_xvzalloc_array(
+    size_t size, unsigned int align, unsigned long num)
+{
+    /* Check for overflow. */
+    if ( size && num > UINT_MAX / size )
+        return NULL;
+    return _xvzalloc(size * num, align);
+}
+
+#endif /* __XVMALLOC_H__ */



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:13:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20419.46330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavvm-0003hm-OM; Fri, 06 Nov 2020 07:13:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20419.46330; Fri, 06 Nov 2020 07:13:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavvm-0003he-Ke; Fri, 06 Nov 2020 07:13:06 +0000
Received: by outflank-mailman (input) for mailman id 20419;
 Fri, 06 Nov 2020 07:13:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kavvm-0003hX-4y
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:13:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 80ae3e2c-ba3c-49af-8f54-8a979441a74d;
 Fri, 06 Nov 2020 07:13:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1478FAB8F;
 Fri,  6 Nov 2020 07:13:04 +0000 (UTC)
X-Inumbo-ID: 80ae3e2c-ba3c-49af-8f54-8a979441a74d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604646784;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CX7ycSIbHLxk9/beplw9uWXkdYfiG5DPYy83G5AM4JE=;
	b=iRU34nysslf1hYkQBtScKr6egff/SrZ4L7JDeamifCy9dju6eJ2RFuprHn9Xq1hsIfQCPE
	wIVLXZECJYfMgJfTqeKXyUF4KSWdBCHQmoGopXBoM9mRtN66XtB2wmBKHfG5stXPzQKg7M
	E9VF4tBdLe5eM2K3wYo+evZZCbzxPPU=
Subject: [PATCH 2/3] x86/xstate: use xvzalloc() for save area allocation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Message-ID: <c2aabb49-23a8-fe87-058d-d5e802ea7ba2@suse.com>
Date: Fri, 6 Nov 2020 08:13:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This is in preparation for the area size exceeding a page's worth of
space, as will happen with AMX as well as Architectural LBR.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -8,6 +8,7 @@
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
+#include <xen/xvmalloc.h>
 #include <asm/current.h>
 #include <asm/processor.h>
 #include <asm/hvm/support.h>
@@ -522,7 +523,7 @@ int xstate_alloc_save_area(struct vcpu *
 
     /* XSAVE/XRSTOR requires the save area be 64-byte-boundary aligned. */
     BUILD_BUG_ON(__alignof(*save_area) < 64);
-    save_area = _xzalloc(size, __alignof(*save_area));
+    save_area = _xvzalloc(size, __alignof(*save_area));
     if ( save_area == NULL )
         return -ENOMEM;
 
@@ -543,8 +544,7 @@ int xstate_alloc_save_area(struct vcpu *
 
 void xstate_free_save_area(struct vcpu *v)
 {
-    xfree(v->arch.xsave_area);
-    v->arch.xsave_area = NULL;
+    XVFREE(v->arch.xsave_area);
 }
 
 static unsigned int _xstate_ctxt_size(u64 xcr0)


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:14:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20425.46342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavwf-0003qF-2n; Fri, 06 Nov 2020 07:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20425.46342; Fri, 06 Nov 2020 07:14:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kavwe-0003q8-VD; Fri, 06 Nov 2020 07:14:00 +0000
Received: by outflank-mailman (input) for mailman id 20425;
 Fri, 06 Nov 2020 07:13:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kavwd-0003py-Gl
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:13:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c1db2441-854a-4219-8bf7-ae1f6daec640;
 Fri, 06 Nov 2020 07:13:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B53DAB8F;
 Fri,  6 Nov 2020 07:13:57 +0000 (UTC)
X-Inumbo-ID: c1db2441-854a-4219-8bf7-ae1f6daec640
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604646837;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AsM3W0VyaEe4YA6Llrb0dMywZEtsBECVp9wQESYLvaE=;
	b=WyJ6jT0U0bRxPvqiYRoTWfEI9OBd7x7Rf4XEMzBVkdSn7h+byijYaWgXrtezuR5IqcDX0x
	65fvn5yOYbiZySmI8FlTBIaf/p3XpxTmtw8sVhjEMwYuVBqfvfs0y2rfjmGuFMm4E6JUmw
	Cnr9E6RXCiABXkiWHdTFfxRQRd4hPCA=
Subject: [PATCH 3/3] x86/xstate: re-size save area when CPUID policy changes
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Message-ID: <b951615e-c879-40fb-3f4d-ebaf26a01d13@suse.com>
Date: Fri, 6 Nov 2020 08:13:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

vCPUs initially get save areas of maximum size allocated. Features
hidden from the guest (and default-off ones in particular) may allow
a smaller area to suffice.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Seeing that both vcpu_init_fpu() and cpuid_policy_updated() get called
from arch_vcpu_create(), I'm not sure we really need this two-stage
approach - the slightly longer period of time during which
v->arch.xsave_area would remain NULL doesn't look all that problematic.
But since xstate_alloc_save_area() gets called for idle vCPU-s, it has
to stay anyway in some form, so the extra code churn may not be worth
it.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -294,7 +294,21 @@ void update_guest_memory_policy(struct v
     }
 }
 
-void domain_cpu_policy_changed(struct domain *d)
+/*
+ * Called during vcpu construction, and each time the toolstack changes the
+ * CPUID configuration for the domain.
+ */
+static int __must_check cpuid_policy_updated(struct vcpu *v)
+{
+    int rc = xstate_update_save_area(v);
+
+    if ( !rc && is_hvm_vcpu(v) )
+        hvm_cpuid_policy_changed(v);
+
+    return rc;
+}
+
+int domain_cpu_policy_changed(struct domain *d)
 {
     const struct cpuid_policy *p = d->arch.cpuid;
     struct vcpu *v;
@@ -452,13 +466,18 @@ void domain_cpu_policy_changed(struct do
 
     for_each_vcpu ( d, v )
     {
-        cpuid_policy_updated(v);
+        int rc = cpuid_policy_updated(v);
+
+        if ( rc )
+            return rc;
 
         /* If PMU version is zero then the guest doesn't have VPMU */
         if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
              p->basic.pmu_version == 0 )
             vpmu_destroy(v);
     }
+
+    return 0;
 }
 
 #ifndef CONFIG_BIGMEM
@@ -597,7 +616,7 @@ int arch_vcpu_create(struct vcpu *v)
     {
         vpmu_initialise(v);
 
-        cpuid_policy_updated(v);
+        rc = cpuid_policy_updated(v);
     }
 
     return rc;
@@ -841,9 +860,9 @@ int arch_domain_create(struct domain *d,
      */
     d->arch.x87_fip_width = cpu_has_fpu_sel ? 0 : 8;
 
-    domain_cpu_policy_changed(d);
-
-    return 0;
+    rc = domain_cpu_policy_changed(d);
+    if ( !rc )
+        return 0;
 
  fail:
     d->is_dying = DOMDYING_dead;
@@ -2434,16 +2453,6 @@ int domain_relinquish_resources(struct d
     return 0;
 }
 
-/*
- * Called during vcpu construction, and each time the toolstack changes the
- * CPUID configuration for the domain.
- */
-void cpuid_policy_updated(struct vcpu *v)
-{
-    if ( is_hvm_vcpu(v) )
-        hvm_cpuid_policy_changed(v);
-}
-
 void arch_dump_domain_info(struct domain *d)
 {
     paging_dump_domain_info(d);
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -91,7 +91,7 @@ static int update_domain_cpu_policy(stru
     recalculate_cpuid_policy(d);
 
     /* Recalculate relevant dom/vcpu state now the policy has changed. */
-    domain_cpu_policy_changed(d);
+    ret = domain_cpu_policy_changed(d);
 
  out:
     /* Free whichever cpuid/msr structs are not installed in struct domain. */
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -541,6 +541,41 @@ int xstate_alloc_save_area(struct vcpu *
 
     return 0;
 }
+
+int xstate_update_save_area(struct vcpu *v)
+{
+    unsigned int i, size, old;
+    struct xsave_struct *save_area;
+    uint64_t xcr0_max = cpuid_policy_xcr0_max(v->domain->arch.cpuid);
+
+    ASSERT(!is_idle_vcpu(v));
+
+    if ( !cpu_has_xsave )
+        return 0;
+
+    if ( v->arch.xcr0_accum & ~xcr0_max )
+        return -EBUSY;
+
+    for ( size = old = XSTATE_AREA_MIN_SIZE, i = 2; i < xstate_features; ++i )
+    {
+        if ( xcr0_max & (1ull << i) )
+            size = max(size, xstate_offsets[i] + xstate_sizes[i]);
+        if ( v->arch.xcr0_accum & (1ull << i) )
+            old = max(old, xstate_offsets[i] + xstate_sizes[i]);
+    }
+
+    save_area = _xvrealloc(v->arch.xsave_area, size, __alignof(*save_area));
+    if ( !save_area )
+        return -ENOMEM;
+
+    ASSERT(old <= size);
+    memset((void *)save_area + old, 0, size - old);
+
+    v->arch.xsave_area = save_area;
+    v->arch.fpu_ctxt = &v->arch.xsave_area->fpu_sse;
+
+    return 0;
+}
 
 void xstate_free_save_area(struct vcpu *v)
 {
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -78,8 +78,6 @@ void toggle_guest_mode(struct vcpu *);
 /* x86/64: toggle guest page tables between kernel and user modes. */
 void toggle_guest_pt(struct vcpu *);
 
-void cpuid_policy_updated(struct vcpu *v);
-
 /*
  * Initialise a hypercall-transfer page. The given pointer must be mapped
  * in Xen virtual address space (accesses are not validated or checked).
@@ -667,7 +665,7 @@ struct guest_memory_policy
 void update_guest_memory_policy(struct vcpu *v,
                                 struct guest_memory_policy *policy);
 
-void domain_cpu_policy_changed(struct domain *d);
+int __must_check domain_cpu_policy_changed(struct domain *d);
 
 bool update_runstate_area(struct vcpu *);
 bool update_secondary_system_time(struct vcpu *,
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -106,6 +106,7 @@ void compress_xsave_states(struct vcpu *
 /* extended state init and cleanup functions */
 void xstate_free_save_area(struct vcpu *v);
 int xstate_alloc_save_area(struct vcpu *v);
+int xstate_update_save_area(struct vcpu *v);
 void xstate_init(struct cpuinfo_x86 *c);
 unsigned int xstate_ctxt_size(u64 xcr0);
 



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:22:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:22:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20437.46354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaw5B-0004n5-2h; Fri, 06 Nov 2020 07:22:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20437.46354; Fri, 06 Nov 2020 07:22:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaw5A-0004my-Vy; Fri, 06 Nov 2020 07:22:48 +0000
Received: by outflank-mailman (input) for mailman id 20437;
 Fri, 06 Nov 2020 07:22:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaw59-0004mt-U6
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:22:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90a72d2f-16d8-4ae7-a3db-8ae9fea34963;
 Fri, 06 Nov 2020 07:22:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41198ADE8;
 Fri,  6 Nov 2020 07:22:46 +0000 (UTC)
X-Inumbo-ID: 90a72d2f-16d8-4ae7-a3db-8ae9fea34963
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604647366;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xCE71nMImlouxTtVXgPf9Pv6jy5BLSqDSuAjU88BoVE=;
	b=jgVG/WCWVMNJFudDaC9xj2DnWq15vw8C70I50bz2/UwUtTSwe5WqCdHls27L+l9QlSlWIr
	ail5FA/IcFL2VICIb6R9L0qWH3JkBCFlfESOP1lhaiCfa9+eyhvdxLx8dhIFxD5IhgSaxX
	/Ieznda3BtEZsq17/4W9PNRRsoMBwpc=
Subject: Re: [PATCH 3/3] x86/xstate: re-size save area when CPUID policy
 changes
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <b951615e-c879-40fb-3f4d-ebaf26a01d13@suse.com>
Message-ID: <4a3d4494-5b9f-d16f-b643-6a7082d99436@suse.com>
Date: Fri, 6 Nov 2020 08:22:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <b951615e-c879-40fb-3f4d-ebaf26a01d13@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.2020 08:13, Jan Beulich wrote:
> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -541,6 +541,41 @@ int xstate_alloc_save_area(struct vcpu *
>  
>      return 0;
>  }
> +
> +int xstate_update_save_area(struct vcpu *v)
> +{
> +    unsigned int i, size, old;
> +    struct xsave_struct *save_area;
> +    uint64_t xcr0_max = cpuid_policy_xcr0_max(v->domain->arch.cpuid);
> +
> +    ASSERT(!is_idle_vcpu(v));
> +
> +    if ( !cpu_has_xsave )
> +        return 0;
> +
> +    if ( v->arch.xcr0_accum & ~xcr0_max )
> +        return -EBUSY;
> +
> +    for ( size = old = XSTATE_AREA_MIN_SIZE, i = 2; i < xstate_features; ++i )
> +    {
> +        if ( xcr0_max & (1ull << i) )
> +            size = max(size, xstate_offsets[i] + xstate_sizes[i]);
> +        if ( v->arch.xcr0_accum & (1ull << i) )
> +            old = max(old, xstate_offsets[i] + xstate_sizes[i]);
> +    }

This could be shrunk further if we used XSAVEC, or if we really
used XSAVES, as then we wouldn't also need to cover the holes
between components. But since we currently use neither of the two
in reality, this would require more work than just adding the
alternative size calculation here.
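
A minimal standalone sketch of the compacted-format calculation alluded to above (the component sizes below are illustrative, not the real xstate tables, and a genuine XSAVEC/XSAVES implementation would additionally need to honour each component's 64-byte alignment attribute):

```c
#include <assert.h>
#include <stdint.h>

/* Legacy 512-byte XSAVE region plus the 64-byte XSAVE header. */
#define XSTATE_AREA_MIN_SIZE (512 + 64)

/*
 * Compacted format (XSAVEC/XSAVES): components are packed back to back,
 * so the area size is the minimum size plus the sizes -- not the
 * offsets -- of all enabled components above x87/SSE.  The holes the
 * non-compacted layout leaves between components vanish.
 */
static unsigned int xstate_compacted_size(uint64_t xcr0,
                                          const unsigned int sizes[],
                                          unsigned int nr_features)
{
    unsigned int i, size = XSTATE_AREA_MIN_SIZE;

    for ( i = 2; i < nr_features; ++i )
        if ( xcr0 & (1ull << i) )
            size += sizes[i];

    return size;
}
```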

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:42:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:42:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20444.46365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kawO2-0006Zx-Ng; Fri, 06 Nov 2020 07:42:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20444.46365; Fri, 06 Nov 2020 07:42:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kawO2-0006Zq-Kl; Fri, 06 Nov 2020 07:42:18 +0000
Received: by outflank-mailman (input) for mailman id 20444;
 Fri, 06 Nov 2020 07:42:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kawO0-0006Zl-NB
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:42:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edbca07d-9012-4a84-9594-82a8bc47a241;
 Fri, 06 Nov 2020 07:42:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 067F0AB8F;
 Fri,  6 Nov 2020 07:42:15 +0000 (UTC)
X-Inumbo-ID: edbca07d-9012-4a84-9594-82a8bc47a241
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604648535;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=js1AsC3mAsEOOvyOOoSwTqfwIz5Rff+os2gtk4rgxSs=;
	b=I5BpRiTGt1rQBqY1Is4Ru9lKtuBguYuonOzqmf0k0kR+nGa3XAcbXv3FZFtKnV02QM1stD
	kSkGgszywXbNPxDcJ+8IBfzSkbM09pgDqFSUBbIlLSqFzMYa0/RFsKmKFKVyK+HOnDl0D4
	RLuGD+9nDUVqDbTEQP7jlgfwdeJBPtU=
Subject: Re: preparations for 4.14.1
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>
References: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
 <alpine.DEB.2.21.2011051753580.2323@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e12e32ca-8d2e-7314-e942-4de77d72ba4a@suse.com>
Date: Fri, 6 Nov 2020 08:42:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011051753580.2323@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.2020 02:58, Stefano Stabellini wrote:
> On Wed, 4 Nov 2020, Jan Beulich wrote:
>> the release is due in a couple of weeks time. Please point out
>> backports you find missing from the respective staging branch,
>> but which you consider relevant. (Ian: Please double check
>> there are indeed no tools side backports needed here.)
>>
>> Julien, Stefano, on the Arm side I'd like to ask for
>>
>> 5d45ecabe3c0 xen/arm64: force gcc 10+ to always inline generic atomics helpers
>>
>> just like I did when sending the respective 4.13.2 / 4.12.4
>> mail. Is there a particular reason it wasn't put in?
> 
> No, I have just backported 5d45ecabe3c0 and a couple of other fixes.

Thanks.

> Jan, do you think we should backport the following also?
> 
> 8856a914b build: also check for empty .bss.* in .o -> .init.o conversion

Not having it wasn't causing active problems afaict; it was put in
place more to prevent future issues. Have we gained dependencies
on this change which want backporting?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 07:47:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 07:47:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20450.46378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kawSj-0006mr-Ar; Fri, 06 Nov 2020 07:47:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20450.46378; Fri, 06 Nov 2020 07:47:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kawSj-0006mk-7e; Fri, 06 Nov 2020 07:47:09 +0000
Received: by outflank-mailman (input) for mailman id 20450;
 Fri, 06 Nov 2020 07:47:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ANsg=EM=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kawSh-0006mf-3b
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 07:47:07 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.41]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd79651c-5cb4-46c9-818c-ff3699dce459;
 Fri, 06 Nov 2020 07:47:05 +0000 (UTC)
Received: from AM6P191CA0013.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::26)
 by AM9PR08MB5873.eurprd08.prod.outlook.com (2603:10a6:20b:2dd::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 6 Nov
 2020 07:47:03 +0000
Received: from AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::5c) by AM6P191CA0013.outlook.office365.com
 (2603:10a6:209:8b::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Fri, 6 Nov 2020 07:47:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT048.mail.protection.outlook.com (10.152.17.177) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 07:47:03 +0000
Received: ("Tessian outbound 9487ba6994b4:v64");
 Fri, 06 Nov 2020 07:47:03 +0000
Received: from 0bf355236ae0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CA336F12-F449-4C8E-90A7-4320CCBDB646.1; 
 Fri, 06 Nov 2020 07:46:57 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0bf355236ae0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 07:46:57 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM9PR08MB6036.eurprd08.prod.outlook.com (2603:10a6:20b:2dc::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 07:46:55 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::1c4a:d913:232b:674b%7]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 07:46:55 +0000
X-Inumbo-ID: cd79651c-5cb4-46c9-818c-ff3699dce459
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H9v4xcUP4HSrs+2100MskJbUpJznskU9U2tS902rJJE=;
 b=eNvQNzsopzde3woeYKLbOlHD37rDrwi2OCRuMzAbvqjMLR/NAra/I9ZYEzvNKLQngVcLTlC8GG0dqBS5xuG6UN2Ie7XuRCWOqU0cZSC7R+EGr5sVDzvPvLbqE2l31cJEHmaoMvSS37C7XbZwDRKXwakqmZ2F/P3/24gKcazBpMc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8400681a9905736c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GJzlcR/Re7C1bl+N6aXw0uxmF14r/z1zn7rTGGxxa6e3e8ERBMuqDHDPA7cESn3+hdKehtzKDh9Q2Cx6J4pfu9pPaIQsyx1wdPsuweWMDMTPK7I735sUQcTRp83f0TQMqS2RZkviBaHvPD6QXOrSbapAUjOjGDKelOMt2ew5MTK2Aud1Hh7Z7H/OXUkKxTQr2XkrmEFC8bexDRLAlLy+s14FpunePAU3hml0tGbNhvdgasdqEFzgEP0Hch91MIsAInPkYJHIF/kYucR7a7B2hs8r4KFORlZ81Z0uir6mSd5cH54nW4E0aesdKaEJJ6rjKZY7xDQHNJZPOpmIsBlb/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H9v4xcUP4HSrs+2100MskJbUpJznskU9U2tS902rJJE=;
 b=Ezz8FxJZUyuNPMODd0gf9Ei9nJx8x1SgwzmUGOA44MICkTCF8145TM9GPrsD+daM+ruUD5i1z1w5br2jua7/EhXSThdbjeKFNAVynrlJNJA//f7affZXKR0m1MiYCZHHE8j+XtlAdYu/V93OgNycAs6kcr3VHzbzDamHUDBqmBqTG/2JWWbi4Nk8b9tV+CdiIGO+LxBjDUf8YUYQRXDEgecS/J2dwg+8g/VP8gLBFG15OGyiY1O4ym8tNrNIcr247pESzWFO5qB6XCpHp1iIG+rXjZGtGP/+CQPL3h9E81h6ZUBnl5+toxC2pvs/Yw1U7ETKaflHl9Lkcn8ejZIOfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: traps: Don't panic when receiving an unknown
 debug trap
Thread-Topic: [PATCH] xen/arm: traps: Don't panic when receiving an unknown
 debug trap
Thread-Index: AQHWs8NxXvjxsnR7jEOaNZeAcAn8Ham6uneA
Date: Fri, 6 Nov 2020 07:46:55 +0000
Message-ID: <221590C9-15CA-4AFE-8038-6715F84B0971@arm.com>
References: <20201105223106.22517-1-julien@xen.org>
In-Reply-To: <20201105223106.22517-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ac8bf58b-293e-49ab-d3ad-08d8822826be
x-ms-traffictypediagnostic: AM9PR08MB6036:|AM9PR08MB5873:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB5873983109570E22C1CDABFF9DED0@AM9PR08MB5873.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 0sbzWDDMB/Bn652FFq4XjBrxAxt91PiTdKkCmlP5c6Mkd3acwphLpijzZRTPjkLM+R+Ia+gUT7O6BVgAsh9AMRrawJbMXMp0jAUPlC+1qUErTmGOgUqcR8UzeqOznI6pcVxwHFJzUE53MsAi/6d6ZnKndoclyYb4KdsA2OvGocJnOlNBzlutye5hdWgtydOfpM596uRhkFofo1n7S1F5UrzGTPi6cyBO6H8wPJHlyBw3hSqIn4i8HsebMzQzxPk1zIKAJU+JPKZ8lDV0doU0TgSd6zmdhWcqjXRIlSaFPUed3sVgs49RtoZvPhF9/zQqtGxAqF2mg4f6/9XCwY7sUQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3682.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(346002)(376002)(396003)(136003)(4326008)(6916009)(54906003)(316002)(33656002)(6512007)(86362001)(8676002)(2616005)(478600001)(83380400001)(26005)(91956017)(76116006)(8936002)(6506007)(66946007)(66446008)(64756008)(66556008)(53546011)(66476007)(4744005)(6486002)(71200400001)(36756003)(186003)(2906002)(5660300002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 WcTLzmbrhGsA/2zGI4IWa/VIF9hfx6Z0mG4kZC7TR+6chnF3lPAUIc0c2/shVvMSfSftJZwsiPu49idcBd8e6zKh4KPQ07B2fexloOLui8Lo/oqiZcwl8PPqzKNDo8erpocTmURZFFz5k/pZt6/fjGslXOWAJmWpyVuYpR1wH04Yi9P5OrqxKbtxFlL+UQdAvk2jjgqe3MRUuC7BsMGnBmWBqWsozhlj7zphq73TSexHYYTs4lBqFCeasZ+V8l1Eo3/pJAiB3GiRYmCwyZ+jeshSpQr0Q/vVbWrvJ7Prim07JNPBELlQVf8Ad22WzRkCRldFLvvRKtXTZNOJNXHbW5LqIV35rrahsehrTj4LJYVbxx/6hk7pk7EF0gP4umtpcU8JzteKzo76AXWkYhTNjb6/qlrQP1ZHHVyhRyQYp/hwnHfVYNKWM6qY8E2sHQ0lsmolbrR5irY5tDgx9QItGABELDX7Jeiu0DoY9xgESXlVC21+0OxRbsB1j27KTXA5QOf0rjaAKoHIkx7ONfPNXAxQSPFpQmwIXWuEungPdB5EIPFnMRPYNESqdI8k4S+hYTv9zapwI6PIjOZ6v55F76r4rWDRFWc78y4UlUU9L5xGApQvllIFKaZorzBBbcb9Gz8J+4E4seB9GtJUR2R8LA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1C71A6EAAB0B5F4B98811D4656280B73@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6036
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0781e035-1829-48b3-1f19-08d88228221a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	336qZS2y+aA3lMmQrQJhTPcmZgA4KoI6HfuoYJB9w3vzZw1fqBHdet2cabVkWOMdU4sw1VNn26ZMoVgSqoYts7VIl3oMD5Kf0zogd1IOcOYSk2l1FmAobBTOuty+Oq36QncbHce3klekoz7KLNXHv6Z0FFw3RWJrV60mxa2Nop2O0YeIA9J1rpvYrXB8EJJvyZcU0jLYAKhqvFZaGDfovxEw5KJhINO65WKzRCETJG75ASEFloUYkEW7jfoz1IdNQLLF7KqUHFfFnRFREXGNCHhw1kW9euTGih2vpqoCQZZb335d8JA5mJZ9C+GxgbMCYN8OUWWA2m5t7PI+lyY9R7Pdsad8BUTvLrBI7+fcy5oClBjn5BNKAHpGKn2OAvwDAHEtdIjlDxUWCFqZMsrgqg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(376002)(39860400002)(46966005)(356005)(8936002)(316002)(82310400003)(83380400001)(36906005)(336012)(186003)(6862004)(8676002)(33656002)(53546011)(26005)(6486002)(47076004)(54906003)(82740400003)(6506007)(70586007)(5660300002)(2616005)(2906002)(81166007)(107886003)(36756003)(70206006)(4326008)(478600001)(6512007)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 07:47:03.2599
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ac8bf58b-293e-49ab-d3ad-08d8822826be
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5873

Hi Julien,

> On 5 Nov 2020, at 22:31, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Even if debug traps are only meant for debugging purposes, it is quite
> harsh to crash Xen if one of the traps sent by the guest is not handled.
>
> So switch from a panic() to a printk().

Very smart idea :-)

>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/arch/arm/traps.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 8f40d0e0b6b1..a36f145e6739 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1410,7 +1410,7 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
>         show_execution_state(regs);
>         break;
>     default:
> -        panic("DOM%d: Unhandled debug trap %#x\n", domid, code);
> +        printk("DOM%d: Unhandled debug trap %#x\n", domid, code);
>         break;
>     }
> }
> --
> 2.17.1
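
As an illustration of the behavioural change only (plain C, with printf standing in for Xen's printk; the surrounding Xen context such as cpu_user_regs and the domid lookup is omitted): after the patch, an unhandled debug trap is logged and execution continues, instead of panic() taking the whole hypervisor down.

```c
#include <stdio.h>

/* Toy stand-in for the patched handler's default case. */
static int do_debug_trap(int domid, unsigned int code)
{
    switch ( code )
    {
    case 0: /* a known trap: Xen would dump the vCPU state here */
        return 0;
    default:
        printf("DOM%d: Unhandled debug trap %#x\n", domid, code);
        return -1; /* logged, but the guest (and Xen) keep running */
    }
}
```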



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 08:23:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 08:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20484.46390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kax1z-0002PV-MP; Fri, 06 Nov 2020 08:23:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20484.46390; Fri, 06 Nov 2020 08:23:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kax1z-0002PO-JA; Fri, 06 Nov 2020 08:23:35 +0000
Received: by outflank-mailman (input) for mailman id 20484;
 Fri, 06 Nov 2020 08:23:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kax1x-0002PJ-QW
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 08:23:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2243804-0348-43da-b29b-020da43409b2;
 Fri, 06 Nov 2020 08:23:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E584FAB8F;
 Fri,  6 Nov 2020 08:23:31 +0000 (UTC)
X-Inumbo-ID: b2243804-0348-43da-b29b-020da43409b2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604651012;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=R7zbUVH+QMVOh+rrhDzkFhtHNBwMmO2+uGPuRghQXTI=;
	b=RHvo5r7Jodnfp7gv50k5L9fVtwnGyO5g2+yHnDxpBPCp4BhU5JiNO6pouAiYcmOIoi+Jvv
	gLYHFbQa1mAC4bo0aQK0TXEGNvnfCUCQVZMRoAGtkR0BTaMVxVE2PNxUSSv2YZka6MSefG
	nyDLQo5VnkUwLHcmyqFpGe/yzRdCP7Q=
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
 <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
 <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
 <alpine.DEB.2.21.2011051300450.2323@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b3a1280e-871d-3333-335d-8978e8528df5@suse.com>
Date: Fri, 6 Nov 2020 09:23:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011051300450.2323@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.11.2020 22:04, Stefano Stabellini wrote:
> On Wed, 4 Nov 2020, Jan Beulich wrote:
>> On 04.11.2020 16:43, Jan Beulich wrote:
>>> On 03.11.2020 16:59, Rahul Singh wrote:
>>>> --- a/xen/drivers/pci/Kconfig
>>>> +++ b/xen/drivers/pci/Kconfig
>>>> @@ -1,3 +1,12 @@
>>>>  
>>>>  config HAS_PCI
>>>>  	bool
>>>> +
>>>> +config PCI_ATS
>>>> +	bool "PCI ATS support"
>>>> +	default y
>>>> +	depends on X86 && HAS_PCI
>>>> +	---help---
>>>> +	 Enable PCI Address Translation Services.
>>>> +
>>>> +	 If unsure, say Y.
>>>
>>> Support for "---help---" having gone away in Linux, I think we'd
>>> better not add new instances. Also indentation of help content
>>> typically is by a tab and two spaces. With these two adjusted
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> Initially I wanted to merely reply indicating I'd be fine making
>> these changes while committing, but there are two more things
>> (and I withdraw my R-b): For one, isn't struct pci_dev's ats
>> field now unused when !PCI_ATS? If so, it should get an #ifdef
>> added. And then, what exactly is it in ats.c that's x86-specific?
>> Shouldn't the whole file instead be moved one level up, and be
>> usable by Arm right away?
> 
> If the issue is that ATS wouldn't work on ARM straight away, then I
> think it would be best to make this a silent option like we did in patch
> #1: if x86 && HAS_PCI -> automatically enable, otherwise disable.

Taking the opportunity to make this a non-silent option was actually
a request of mine. As long as the code builds and isn't obviously
broken for Arm, I think it shouldn't have an X86 dependency (and it
then should be moved up in the tree). Arguably it could then
default to off for Arm, but when asking for this option to gain a
prompt I also indicated that I wonder whether the default shouldn't
be off on x86 as well, seeing that the controlling command line
option also defaults to off.
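
For illustration, a sketch of how the entry might read with both review points addressed (the modern `help` keyword replacing the deprecated `---help---`, and help text indented by a tab plus two spaces):

```kconfig
config PCI_ATS
	bool "PCI ATS support"
	default y
	depends on X86 && HAS_PCI
	help
	  Enable PCI Address Translation Services.

	  If unsure, say Y.
```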

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 08:45:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 08:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20492.46402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxMy-0004F0-Dt; Fri, 06 Nov 2020 08:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20492.46402; Fri, 06 Nov 2020 08:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxMy-0004Et-Aa; Fri, 06 Nov 2020 08:45:16 +0000
Received: by outflank-mailman (input) for mailman id 20492;
 Fri, 06 Nov 2020 08:45:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l/Pr=EM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kaxMw-0004Eo-W9
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 08:45:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15b3ac57-217e-474e-b29f-6274760d4f07;
 Fri, 06 Nov 2020 08:45:12 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaxMq-0005rz-VL; Fri, 06 Nov 2020 08:45:08 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kaxMq-00061j-OA; Fri, 06 Nov 2020 08:45:08 +0000
X-Inumbo-ID: 15b3ac57-217e-474e-b29f-6274760d4f07
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=jqH4sJLnfwsBMK4hFoqRfpFxqVIa//ygpS4sFstvk4s=; b=h98u+dSL4FufsO3vh3H5ipBO0U
	VUzVkPbNap5sBIvucxL9jKCMogjXIyHRAcTlkoVSziVmm8O42eX3RNRucVwALRUfCU2JJMrTnpsXP
	xrulYbBbKYqYfWDdjdFaqy69KyM0gg7FruzdoJfhbmeo87Nj52xK1HvJ20orrDvKoSWE=;
Subject: Re: preparations for 4.14.1
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6b67e93b-1dff-ff31-457d-400cf33cd4b6@xen.org>
Date: Fri, 6 Nov 2020 08:45:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 04/11/2020 10:12, Jan Beulich wrote:
> All,

Hi Jan,


> the release is due in a couple of weeks' time. Please point out
> backports you find missing from the respective staging branch,
> but which you consider relevant. (Ian: Please double check
> there are indeed no tools side backports needed here.)

Would it be possible to consider the backport mentioned [1]? For 
convenience, this is:

d25cc3ec93eb "libxl: workaround gcc 10.2 maybe-uninitialized warning"

In addition, I would like to request a backport for:

fff1b7f50e75 "libxl: fix -Werror=stringop-truncation in
libxl__prepare_sockaddr_un"

Both patches are necessary to get Xen building with newer GCC.

> 
> Julien, Stefano, on the Arm side I'd like to ask for
> 
> 5d45ecabe3c0 xen/arm64: force gcc 10+ to always inline generic atomics helpers
> 
> just like I did when sending the respective 4.13.2 / 4.12.4
> mail. Is there a particular reason it wasn't put in?
> 
> Jan
> 

Cheers,

[1] <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 08:50:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 08:50:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20498.46414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxSH-00056x-1E; Fri, 06 Nov 2020 08:50:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20498.46414; Fri, 06 Nov 2020 08:50:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxSG-00056q-U1; Fri, 06 Nov 2020 08:50:44 +0000
Received: by outflank-mailman (input) for mailman id 20498;
 Fri, 06 Nov 2020 08:50:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaxSG-00056l-7P
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 08:50:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69d4a2d1-5ffc-452d-a942-5fcfb4a539b7;
 Fri, 06 Nov 2020 08:50:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B729AB8F;
 Fri,  6 Nov 2020 08:50:42 +0000 (UTC)
X-Inumbo-ID: 69d4a2d1-5ffc-452d-a942-5fcfb4a539b7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604652642;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O9oX95/HNDAsnsACCUA+gSWI27///UQmU0uf7P335ms=;
	b=A6GClXKqFCXB1jLSXiIt3iZrfQGPCTz0aMrkzORWLkNySVzcPkCREg7IOEXJLdMg2hQDfR
	IyLPquQe/c81ZYnBRmAmWlnJuMaQL4HqWSN222vAENuRGc9Ai27bcYOAkFYOVKl1Q71lBv
	nSNAnNpyEykPQEu7qmjC4AW0sPj6IRY=
Subject: Re: preparations for 4.14.1
To: Julien Grall <julien@xen.org>, Ian Jackson <ian.jackson@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com>
 <6b67e93b-1dff-ff31-457d-400cf33cd4b6@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6f5aa1f3-fb63-bb38-bb96-497c47a1e920@suse.com>
Date: Fri, 6 Nov 2020 09:50:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <6b67e93b-1dff-ff31-457d-400cf33cd4b6@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.2020 09:45, Julien Grall wrote:
> On 04/11/2020 10:12, Jan Beulich wrote:
>> the release is due in a couple of weeks' time. Please point out
>> backports you find missing from the respective staging branch,
>> but which you consider relevant. (Ian: Please double check
>> there are indeed no tools side backports needed here.)
> 
> Would it be possible to consider the backport mentioned [1]? For 
> convenience, this is:
> 
> d25cc3ec93eb "libxl: workaround gcc 10.2 maybe-uninitialized warning"
> 
> In addition, I would like to request a backport for:
> 
> fff1b7f50e75 "libxl: fix -Werror=stringop-truncation in
> libxl__prepare_sockaddr_un"
> 
> Both patches are necessary to get Xen building with newer GCC.

While I support the request, it really should have gone to Ian.
Ian - please consider.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20510.46436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxk2-0006J1-TA; Fri, 06 Nov 2020 09:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20510.46436; Fri, 06 Nov 2020 09:09:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxk2-0006Iu-Px; Fri, 06 Nov 2020 09:09:06 +0000
Received: by outflank-mailman (input) for mailman id 20510;
 Fri, 06 Nov 2020 09:09:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaxk1-0006Ip-QG
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:09:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cce111b2-c1c0-4b01-8244-8c8b76aa5996;
 Fri, 06 Nov 2020 09:09:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0B21AB8F;
 Fri,  6 Nov 2020 09:09:01 +0000 (UTC)
X-Inumbo-ID: cce111b2-c1c0-4b01-8244-8c8b76aa5996
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604653742;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=anT098hY0iHyYy/U3eo5egMoauESDK4ppoA7DCQR5Jc=;
	b=f10fN1k0FBeS0VDbbl8piuInASDjdNNkfK0KZuWR12SL2i5eMMe51bolsWmeMKYBnTnL9/
	eOmBoZ10aj39paGNyPOHf9C1JJODyL6d/5n2Zaseuv3grL6+BxzIYB3WKxNExwJZOyajwP
	ZOkvrsUAfYPAb4ROq8c4qmFu+Pcyh+o=
Subject: Re: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86 directory.
To: Rahul Singh <rahul.singh@arm.com>
Cc: Bertrand.Marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1604417224.git.rahul.singh@arm.com>
 <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c49bf07f-3d39-b8e8-a3ed-a620aa5de5df@suse.com>
Date: Fri, 6 Nov 2020 10:09:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.2020 16:59, Rahul Singh wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,7 +14,6 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <xen/sched.h>
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>

I think this hunk wants dropping - struct domain continues to be used
in this file, for example.

> @@ -847,71 +845,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}

If this code gets moved, I think it ought to move into
xen/drivers/passthrough/io.c, as that's where all the companion code
sits. (The file as a whole, getting built for x86/HVM only, may want
moving to xen/drivers/passthrough/x86/ if the underlying model isn't
suitable for Arm. Then it probably also would want to be named hvm.c,
to express its limited purpose.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:22:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20516.46447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxwZ-0007xy-45; Fri, 06 Nov 2020 09:22:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20516.46447; Fri, 06 Nov 2020 09:22:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kaxwZ-0007xr-17; Fri, 06 Nov 2020 09:22:03 +0000
Received: by outflank-mailman (input) for mailman id 20516;
 Fri, 06 Nov 2020 09:22:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kaxwX-0007xm-4W
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:22:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ddffdeba-9bfa-4819-9dfe-392daac43a81;
 Fri, 06 Nov 2020 09:21:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41F80ACD5;
 Fri,  6 Nov 2020 09:21:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kaxwX-0007xm-4W
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:22:01 +0000
X-Inumbo-ID: ddffdeba-9bfa-4819-9dfe-392daac43a81
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ddffdeba-9bfa-4819-9dfe-392daac43a81;
	Fri, 06 Nov 2020 09:21:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604654516;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B7dntEEjArwnHoSUb3deA3LDODtxOhxCvIj10IA1iUc=;
	b=XEiFtXc0ZyF+Uf7UrJyJhPGV1WApJzWEdzzP46RsRSREfTZ/PgYpITsx9fQFuKYCIhmpE0
	/d9IXhpP+55pjuCCoKb1FPPftghtV+jqVano181904L6ocNvMI8XLFZ58xMpHQyW3wy13+
	ILJa/nqQp2aCPGFftk7QuwufNDIOjdQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 41F80ACD5;
	Fri,  6 Nov 2020 09:21:56 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
To: Rahul Singh <rahul.singh@arm.com>
Cc: Bertrand.Marquis@arm.com, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1604417224.git.rahul.singh@arm.com>
 <7b60501fa689a4f2795ea6c34a7475d288f154a9.1604417224.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9c3c43c3-241d-4cea-cbad-4184523450c3@suse.com>
Date: Fri, 6 Nov 2020 10:21:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <7b60501fa689a4f2795ea6c34a7475d288f154a9.1604417224.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.2020 16:59, Rahul Singh wrote:
> If mem-sharing, mem-paging and log-dirty functionality is not enabled
> for architecture when HAS_PCI is enabled, compiler will throw an error.

Nit: Is it really "and", not "or"?

> @@ -1418,12 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>      if ( !is_iommu_enabled(d) )
>          return 0;
>  
> -    /* Prevent device assign if mem paging or mem sharing have been 
> -     * enabled for this domain */
> -    if ( d != dom_io &&
> -         unlikely(mem_sharing_enabled(d) ||
> -                  vm_event_check_ring(d->vm_event_paging) ||
> -                  p2m_get_hostp2m(d)->global_logdirty) )
> +    if( !arch_iommu_usable(d) )
>          return -EXDEV;

While iirc I did suggest this name, seeing it used here leaves me
somewhat unhappy with the name, albeit I also can't think of any
better alternative right now. Maybe arch_iommu_use_permitted()?

> @@ -315,6 +316,18 @@ int iommu_update_ire_from_msi(
>             ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
>  }
>  
> +bool_t arch_iommu_usable(struct domain *d)

Just bool please and I very much hope the parameter can be const.

> +{
> +
> +    /* Prevent device assign if mem paging or mem sharing have been
> +     * enabled for this domain */

Please correct comment style as you move it.

> +    if ( d != dom_io && unlikely(mem_sharing_enabled(d) ||
> +                        vm_event_check_ring(d->vm_event_paging) ||
> +                        p2m_get_hostp2m(d)->global_logdirty) )

You've screwed up indentation, and I don't see why ...

> +        return false;
> +    else
> +        return true;
> +}

... this can't be a simple single return statement anyway:

    return d == dom_io ||
           likely(!mem_sharing_enabled(d) &&
                  !vm_event_check_ring(d->vm_event_paging) &&
                  !p2m_get_hostp2m(d)->global_logdirty);

In the course of moving I'd also suggest dropping the use of
likely() here: the way it's used (on an && expression) doesn't
normally have much effect anyway. If anything it should imo
be

    return d == dom_io ||
           (likely(!mem_sharing_enabled(d)) &&
            likely(!vm_event_check_ring(d->vm_event_paging)) &&
            likely(!p2m_get_hostp2m(d)->global_logdirty));

Any transformation to this effect wants mentioning in the
description, though.
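
As a self-contained sketch of the per-condition-hint form suggested
above (the macro definitions and predicate names below are stand-ins
for illustration, not the real Xen helpers or data structures):

```c
#include <stdbool.h>

/* Sketch only: in Xen, likely()/unlikely() are thin wrappers around
 * GCC's __builtin_expect() branch-prediction hint. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical stand-ins for the three per-domain predicates in the
 * patch (mem_sharing_enabled(), vm_event_check_ring() on the paging
 * ring, global_logdirty) and for the d == dom_io check. */
static bool mem_sharing_on, paging_ring_on, logdirty_on, is_dom_io;

/* Single-return form: dom_io is always permitted; otherwise all three
 * features must be off, with each condition hinted individually rather
 * than hinting the combined && expression. */
static bool arch_iommu_use_permitted(void)
{
    return is_dom_io ||
           (likely(!mem_sharing_on) &&
            likely(!paging_ring_on) &&
            likely(!logdirty_on));
}
```

Hinting each sub-condition separately keeps the hint attached to an
individual branch, whereas a single likely() wrapped around the whole
&& chain gives the compiler little to work with.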

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:29:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20522.46460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay33-0008Dm-Sk; Fri, 06 Nov 2020 09:28:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20522.46460; Fri, 06 Nov 2020 09:28:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay33-0008Df-Oi; Fri, 06 Nov 2020 09:28:45 +0000
Received: by outflank-mailman (input) for mailman id 20522;
 Fri, 06 Nov 2020 09:28:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kay32-0008Da-B2
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:28:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c84372e9-3ef3-477f-b696-09d5b8ece2f9;
 Fri, 06 Nov 2020 09:28:40 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kay2y-0006mE-CL; Fri, 06 Nov 2020 09:28:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kay2y-0002L5-0H; Fri, 06 Nov 2020 09:28:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kay2x-00049r-W3; Fri, 06 Nov 2020 09:28:39 +0000
X-Inumbo-ID: c84372e9-3ef3-477f-b696-09d5b8ece2f9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RSl7/g/tRpTzQ6r3rKoUWqpkoqM4WNC6NUe0sZd7vNM=; b=SYSNy+UwLe0WasK86DZU5W6GFo
	5vys+ftkCxXaFC8UYzbWfj4OvTktZK2lTmeyDX/OhVkzyB0dPyW7a6IIRhyg47BvIuI3vzuktIkYg
	ULev98TVitvIblnqbo6ahzTAzLD9Vd91ZdC5/TZVmXlPw6/WNBqBHBpALg+odjaZUEaQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156424-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156424: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3c8c36c9087da957f580a9bb5ebf7814a753d1c6
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 09:28:39 +0000

flight 156424 qemu-mainline real [real]
flight 156521 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156424/
http://logs.test-lab.xenproject.org/osstest/logs/156521/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3c8c36c9087da957f580a9bb5ebf7814a753d1c6
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   78 days
Failing since        152659  2020-08-21 14:07:39 Z   76 days  171 attempts
Testing same since   156403  2020-11-04 22:20:29 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 60305 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:30:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:30:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20528.46475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay4g-0000aa-DG; Fri, 06 Nov 2020 09:30:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20528.46475; Fri, 06 Nov 2020 09:30:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay4g-0000aT-9r; Fri, 06 Nov 2020 09:30:26 +0000
Received: by outflank-mailman (input) for mailman id 20528;
 Fri, 06 Nov 2020 09:30:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kay4e-0000aN-Qh
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:30:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 167c5157-bbae-40d2-8c65-c31a2ce6ba2c;
 Fri, 06 Nov 2020 09:30:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DB635AC35;
 Fri,  6 Nov 2020 09:30:22 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kay4e-0000aN-Qh
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:30:24 +0000
X-Inumbo-ID: 167c5157-bbae-40d2-8c65-c31a2ce6ba2c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 167c5157-bbae-40d2-8c65-c31a2ce6ba2c;
	Fri, 06 Nov 2020 09:30:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655023;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=8YALmS3K80DAZopt76sKC9xNv+zUz9dzUqEri7d1gN8=;
	b=HMVQwkmrdB+hkHLtybJxD8vnxwmz3tQYpVS7BlIX9gQjH0SfOCf9wdvCLuE0HGL991W2JI
	zZe7EeT7AqPyqwOly8lYMsLBPWJ/0OX5nnO/D+Lq/LPzjDkm3tdS3Td4jzvTuzUvb3QKVj
	QTW99cHwWIfVa560ySsdFbDA+Z+fLFw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id DB635AC35;
	Fri,  6 Nov 2020 09:30:22 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/9] x86/p2m: hook adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Message-ID: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Date: Fri, 6 Nov 2020 10:30:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This started out with me getting confused by the two write_p2m_entry()
hooks we have - there really ought to be no more than one, or, if two
were absolutely needed, they imo ought at least to have distinct names.
Other adjustment opportunities (and I hope they're improvements) were
found while getting rid of that one unnecessary layer of indirect calls.

v2 has a build fix for clang in patch 3 and a few new patches.

1: p2m: paging_write_p2m_entry() is a private function
2: p2m: collapse the two ->write_p2m_entry() hooks
3: p2m: suppress audit_p2m hook when possible
4: HAP: move nested-P2M flush calculations out of locked region
5: p2m: split write_p2m_entry() hook
6: p2m: avoid unnecessary calls of write_p2m_entry_pre() hook
7: p2m: pass old PTE directly to write_p2m_entry_pre() hook
8: shadow: cosmetics to sh_unshadow_for_p2m_change()
9: shadow: adjust TLB flushing in sh_unshadow_for_p2m_change()

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:35:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:35:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20535.46487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay90-0000oX-W6; Fri, 06 Nov 2020 09:34:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20535.46487; Fri, 06 Nov 2020 09:34:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay90-0000oQ-T5; Fri, 06 Nov 2020 09:34:54 +0000
Received: by outflank-mailman (input) for mailman id 20535;
 Fri, 06 Nov 2020 09:34:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kay8z-0000oL-69
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:34:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d43b57f9-cd03-4de3-ad00-975e46d5dc47;
 Fri, 06 Nov 2020 09:34:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 378BFACD5;
 Fri,  6 Nov 2020 09:34:51 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kay8z-0000oL-69
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:34:53 +0000
X-Inumbo-ID: d43b57f9-cd03-4de3-ad00-975e46d5dc47
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d43b57f9-cd03-4de3-ad00-975e46d5dc47;
	Fri, 06 Nov 2020 09:34:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655291;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1GwwWGinKRoKjPvFBHMATio3i/QUWXVKUREq8KUhXiA=;
	b=LuMTOVeJF0oNruLJnVnj4z8/cDi5pAxf01CqII3myeLP6M5nmrGK2qKKOipWr77hKiOWzz
	sm/+U+Pq5XC5w2J79I9o445llPQ9hoH1I+BjO4KkdfUaW6YzJLrVT56272x04mRPRUNnS8
	LbZfpxaMdwvjAAgj80toYlVVjlfgvEM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 378BFACD5;
	Fri,  6 Nov 2020 09:34:51 +0000 (UTC)
Subject: [PATCH v2 1/9] x86/p2m: paging_write_p2m_entry() is a private
 function
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <fc48ae42-2a36-5634-de6f-15c7d9e2fca4@suse.com>
Date: Fri, 6 Nov 2020 10:34:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

As it gets installed by p2m_pt_init(), it doesn't need to live in
paging.c. That the function works in terms of l1_pgentry_t further
indicates its non-paging-generic nature. Move it and drop its
paging_ prefix, not adding any new one now that it's static.

This then also makes more obvious that in the EPT case we wouldn't
risk mistakenly calling through the NULL hook pointer.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -108,6 +108,31 @@ static unsigned long p2m_type_to_flags(c
     }
 }
 
+/*
+ * Atomically write a P2M entry and update the paging-assistance state
+ * appropriately.
+ * Arguments: the domain in question, the GFN whose mapping is being updated,
+ * a pointer to the entry to be written, the MFN in which the entry resides,
+ * the new contents of the entry, and the level in the p2m tree at which
+ * we are writing.
+ */
+static int write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level)
+{
+    struct domain *d = p2m->domain;
+    struct vcpu *v = current;
+    int rc = 0;
+
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
+    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) )
+        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
+    else
+        safe_write_pte(p, new);
+
+    return rc;
+}
 
 // Find the next level's P2M entry, checking for out-of-range gfn's...
 // Returns NULL on error.
@@ -594,7 +619,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
         entry_content.l1 = l3e_content.l3;
 
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
-        /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
     }
@@ -631,7 +656,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
 
         /* level 1 entry */
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
-        /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
     }
@@ -666,7 +691,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
         entry_content.l1 = l2e_content.l2;
 
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
-        /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
     }
@@ -1107,7 +1132,7 @@ void p2m_pt_init(struct p2m_domain *p2m)
     p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
-    p2m->write_p2m_entry = paging_write_p2m_entry;
+    p2m->write_p2m_entry = write_p2m_entry;
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
 #else
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -941,27 +941,7 @@ void paging_update_nestedmode(struct vcp
         v->arch.paging.nestedmode = NULL;
     hvm_asid_flush_vcpu(v);
 }
-#endif
 
-int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                           l1_pgentry_t *p, l1_pgentry_t new,
-                           unsigned int level)
-{
-    struct domain *d = p2m->domain;
-    struct vcpu *v = current;
-    int rc = 0;
-
-    if ( v->domain != d )
-        v = d->vcpu ? d->vcpu[0] : NULL;
-    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v) != NULL) )
-        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
-    else
-        safe_write_pte(p, new);
-
-    return rc;
-}
-
-#ifdef CONFIG_HVM
 int __init paging_set_allocation(struct domain *d, unsigned int pages,
                                  bool *preempted)
 {
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -369,18 +369,6 @@ static inline void safe_write_pte(l1_pge
     *p = new;
 }
 
-/* Atomically write a P2M entry and update the paging-assistance state 
- * appropriately. 
- * Arguments: the domain in question, the GFN whose mapping is being updated, 
- * a pointer to the entry to be written, the MFN in which the entry resides, 
- * the new contents of the entry, and the level in the p2m tree at which 
- * we are writing. */
-struct p2m_domain;
-
-int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                           l1_pgentry_t *p, l1_pgentry_t new,
-                           unsigned int level);
-
 /*
  * Called from the guest to indicate that the a process is being
  * torn down and its pagetables will soon be discarded.



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:35:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:35:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20538.46499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay9X-0000tw-8Y; Fri, 06 Nov 2020 09:35:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20538.46499; Fri, 06 Nov 2020 09:35:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kay9X-0000tn-5d; Fri, 06 Nov 2020 09:35:27 +0000
Received: by outflank-mailman (input) for mailman id 20538;
 Fri, 06 Nov 2020 09:35:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kay9V-0000tf-Fk
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:35:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92f376c6-90db-49cd-a42a-8f9f6a815224;
 Fri, 06 Nov 2020 09:35:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E46F6ACC0;
 Fri,  6 Nov 2020 09:35:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655323;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m375nukqvwOyGvxgAYRClerE2/rOEDU6IVFNHZMxu40=;
	b=rrmKAHUnOm1l63rdDUoRYKTTzN5anXRmMXxUSSBvAE3QWwL3Z9ZK+0gDN5yXXbZFjt++mg
	K1+1DieqyHV5EnNiaahDFex1UY15Nj+G3dhfM0ZA0I3D/l1lnbK+ZRO+vjzQ9v4I3b8hiN
	oJ9zHw/4m+JTmfCkGK0Nyi8eJnT709E=
Subject: [PATCH v2 2/9] x86/p2m: collapse the two ->write_p2m_entry() hooks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <0f324fa9-f307-997a-4db9-eb802a8983d9@suse.com>
Date: Fri, 6 Nov 2020 10:35:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Both HAP and shadow code set this hook to the same function in all of
their struct paging_mode instances, regardless of mode, so there's no
point in having the hook there. Nor does the hook need moving
elsewhere - we can use struct p2m_domain's hook directly. This merely
requires (from a strictly formal point of view; in practice it may not
even be needed) making sure we don't end up using safe_write_pte() for
nested P2Ms.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
---
As with the possibly unnecessary p2m_is_nestedp2m() check, I'm not
really sure the paging_get_hostmode() check there is still needed
either. But I didn't want to alter more aspects than necessary here.

Of course with the p2m_is_nestedp2m() check there and with all three of
{hap,nestedp2m,shadow}_write_p2m_entry() now globally accessible, it's
certainly an option to do away with the indirect call there altogether.
In fact we may even be able to go further and fold the three functions:
They're relatively similar, and this would "seamlessly" address the
apparent bug of nestedp2m_write_p2m_entry() not making use of
p2m_entry_modify().
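
The per-P2M dispatch the patch arrives at can be sketched stand-alone
(a hypothetical mock; all type and function names below are stand-ins,
not Xen's): the hook is installed once at p2m init time depending on
the paging flavour, and the local wrapper falls back to a plain store
when no hook is in place.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct p2m_mock;

/* stand-in for the write_p2m_entry hook signature */
typedef int (*write_hook_t)(struct p2m_mock *p2m, unsigned long gfn,
                            unsigned long *p, unsigned long new_e);

struct p2m_mock {
    bool hap;                       /* stand-in for hap_enabled(d) */
    write_hook_t write_p2m_entry;   /* the single, per-P2M hook */
};

static int hap_write(struct p2m_mock *p2m, unsigned long gfn,
                     unsigned long *p, unsigned long new_e)
{
    (void)p2m; (void)gfn;
    *p = new_e;                     /* HAP flavour of the write */
    return 0;
}

static int shadow_write(struct p2m_mock *p2m, unsigned long gfn,
                        unsigned long *p, unsigned long new_e)
{
    (void)p2m; (void)gfn;
    *p = new_e;                     /* shadow flavour of the write */
    return 0;
}

/* analogue of hap_p2m_init()/shadow_p2m_init(): install the hook once */
void p2m_init_mock(struct p2m_mock *p2m)
{
    p2m->write_p2m_entry = p2m->hap ? hap_write : shadow_write;
}

/* analogue of p2m-pt.c's local write_p2m_entry(): go through the hook
 * when one is installed, else fall back to a plain
 * safe_write_pte()-style store */
int write_entry_mock(struct p2m_mock *p2m, unsigned long gfn,
                     unsigned long *p, unsigned long new_e)
{
    if (p2m->write_p2m_entry)
        return p2m->write_p2m_entry(p2m, gfn, p, new_e);
    *p = new_e;
    return 0;
}
```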

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -823,6 +823,11 @@ hap_write_p2m_entry(struct p2m_domain *p
     return 0;
 }
 
+void hap_p2m_init(struct p2m_domain *p2m)
+{
+    p2m->write_p2m_entry = hap_write_p2m_entry;
+}
+
 static unsigned long hap_gva_to_gfn_real_mode(
     struct vcpu *v, struct p2m_domain *p2m, unsigned long gva, uint32_t *pfec)
 {
@@ -846,7 +851,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_real_mode,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 1
 };
@@ -858,7 +862,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_2_levels,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 2
 };
@@ -870,7 +873,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_3_levels,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 3
 };
@@ -882,7 +884,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_4_levels,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 4
 };
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -126,8 +126,9 @@ static int write_p2m_entry(struct p2m_do
 
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
-    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) )
-        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
+    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
+         p2m_is_nestedp2m(p2m) )
+        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
     else
         safe_write_pte(p, new);
 
@@ -209,7 +210,7 @@ p2m_next_level(struct p2m_domain *p2m, v
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
 
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
         if ( rc )
             goto error;
     }
@@ -251,7 +252,7 @@ p2m_next_level(struct p2m_domain *p2m, v
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
                                      flags);
-            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            rc = write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
             if ( rc )
             {
                 unmap_domain_page(l1_entry);
@@ -262,8 +263,7 @@ p2m_next_level(struct p2m_domain *p2m, v
         unmap_domain_page(l1_entry);
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
-                                  level + 1);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
         if ( rc )
             goto error;
     }
@@ -335,7 +335,7 @@ static int p2m_pt_set_recalc_range(struc
             if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
             {
                 set_recalc(l1, e);
-                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
+                err = write_p2m_entry(p2m, first_gfn, pent, e, level);
                 if ( err )
                 {
                     ASSERT_UNREACHABLE();
@@ -412,8 +412,8 @@ static int do_recalc(struct p2m_domain *
                      !needs_recalc(l1, ent) )
                 {
                     set_recalc(l1, ent);
-                    err = p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
-                                               ent, level);
+                    err = write_p2m_entry(p2m, gfn - remainder, &ptab[i], ent,
+                                          level);
                     if ( err )
                     {
                         ASSERT_UNREACHABLE();
@@ -426,7 +426,7 @@ static int do_recalc(struct p2m_domain *
             if ( !err )
             {
                 clear_recalc(l1, e);
-                err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+                err = write_p2m_entry(p2m, gfn, pent, e, level + 1);
                 ASSERT(!err);
 
                 recalc_done = true;
@@ -474,7 +474,7 @@ static int do_recalc(struct p2m_domain *
         }
         else
             clear_recalc(l1, e);
-        err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+        err = write_p2m_entry(p2m, gfn, pent, e, level + 1);
         ASSERT(!err);
 
         recalc_done = true;
@@ -618,7 +618,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
             : l3e_empty();
         entry_content.l1 = l3e_content.l3;
 
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
         /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
@@ -655,7 +655,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
             entry_content = l1e_empty();
 
         /* level 1 entry */
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
@@ -690,7 +690,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
             : l2e_empty();
         entry_content.l1 = l2e_content.l2;
 
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
         /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
@@ -914,7 +914,7 @@ static void p2m_pt_change_entry_type_glo
             int rc;
 
             set_recalc(l1, e);
-            rc = p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
+            rc = write_p2m_entry(p2m, gfn, &tab[i], e, 4);
             if ( rc )
             {
                 ASSERT_UNREACHABLE();
@@ -1132,7 +1132,13 @@ void p2m_pt_init(struct p2m_domain *p2m)
     p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
-    p2m->write_p2m_entry = write_p2m_entry;
+
+    /* Still too early to use paging_mode_hap(). */
+    if ( hap_enabled(p2m->domain) )
+        hap_p2m_init(p2m);
+    else if ( IS_ENABLED(CONFIG_SHADOW_PAGING) )
+        shadow_p2m_init(p2m);
+
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
 #else
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3144,7 +3144,7 @@ static void sh_unshadow_for_p2m_change(s
     }
 }
 
-int
+static int
 shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                        l1_pgentry_t *p, l1_pgentry_t new,
                        unsigned int level)
@@ -3190,6 +3190,11 @@ shadow_write_p2m_entry(struct p2m_domain
     return 0;
 }
 
+void shadow_p2m_init(struct p2m_domain *p2m)
+{
+    p2m->write_p2m_entry = shadow_write_p2m_entry;
+}
+
 /**************************************************************************/
 /* Log-dirty mode support */
 
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4574,7 +4574,6 @@ const struct paging_mode sh_paging_mode
     .gva_to_gfn                    = sh_gva_to_gfn,
     .update_cr3                    = sh_update_cr3,
     .update_paging_modes           = shadow_update_paging_modes,
-    .write_p2m_entry               = shadow_write_p2m_entry,
     .flush_tlb                     = shadow_flush_tlb,
     .guest_levels                  = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables      = sh_detach_old_tables,
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -60,21 +60,12 @@ static void _update_paging_modes(struct
     ASSERT_UNREACHABLE();
 }
 
-static int _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level)
-{
-    ASSERT_UNREACHABLE();
-    return -EOPNOTSUPP;
-}
-
 static const struct paging_mode sh_paging_none = {
     .page_fault                    = _page_fault,
     .invlpg                        = _invlpg,
     .gva_to_gfn                    = _gva_to_gfn,
     .update_cr3                    = _update_cr3,
     .update_paging_modes           = _update_paging_modes,
-    .write_p2m_entry               = _write_p2m_entry,
 };
 
 void shadow_vcpu_init(struct vcpu *v)
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -387,11 +387,6 @@ static inline int sh_remove_write_access
 }
 #endif
 
-/* Functions that atomically write PT/P2M entries and update state */
-int shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                           l1_pgentry_t *p, l1_pgentry_t new,
-                           unsigned int level);
-
 /* Functions that atomically write PV guest PT entries */
 void sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new,
                           mfn_t gmfn);
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -836,6 +836,9 @@ void p2m_flush_nestedp2m(struct domain *
 /* Flushes the np2m specified by np2m_base (if it exists) */
 void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
 
+void hap_p2m_init(struct p2m_domain *p2m);
+void shadow_p2m_init(struct p2m_domain *p2m);
+
 int nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
 
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -139,10 +139,6 @@ struct paging_mode {
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
-    int           (*write_p2m_entry       )(struct p2m_domain *p2m,
-                                            unsigned long gfn,
-                                            l1_pgentry_t *p, l1_pgentry_t new,
-                                            unsigned int level);
     bool          (*flush_tlb             )(bool (*flush_vcpu)(void *ctxt,
                                                                struct vcpu *v),
                                             void *ctxt);



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:36:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:36:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20541.46511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayA6-00010p-KC; Fri, 06 Nov 2020 09:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20541.46511; Fri, 06 Nov 2020 09:36:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayA6-00010i-Fm; Fri, 06 Nov 2020 09:36:02 +0000
Received: by outflank-mailman (input) for mailman id 20541;
 Fri, 06 Nov 2020 09:36:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayA5-00010a-AD
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:36:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fae4db4-407c-470c-aaee-c530b2ecd558;
 Fri, 06 Nov 2020 09:36:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5334ABB2;
 Fri,  6 Nov 2020 09:35:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655359;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5dIUaobboFAZTb8wiUsGf1F1BaRpXPw/Me/kiFs+0h8=;
	b=LfM79q9ssUxZvLO+ivIZ6VleTEMGnqb/VKDRMkTNj+DxImNELy/TWBkzlsmOh76s9ewFHa
	zzTdlMJHENGSoXq9EdG4tMKiN6X76BRJu15KDnp5FMqrYb7QAUKwoxAjIyUx6hOZ2p0iQB
	wb+BKlIzaCIhrUhF8m9+VzGDkKAjyXo=
Subject: [PATCH v2 3/9] x86/p2m: suppress audit_p2m hook when possible
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <3d422bf7-9110-23ac-8061-9a7251f54e66@suse.com>
Date: Fri, 6 Nov 2020 10:35:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When P2M_AUDIT is false, the audit_p2m hook is unused, so instead of
leaving a dangling NULL pointer sitting there, omit the field
altogether.
Instead of adding "#if P2M_AUDIT && defined(CONFIG_HVM)" in even more
places, fold the latter part right into the definition of P2M_AUDIT.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Fix build with newer clang ("defined" may not be the result of a
    macro expansion).
---
I wonder if !NDEBUG wouldn't better be replaced by CONFIG_DEBUG.
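
The folding can be mocked up stand-alone as below (hypothetical names;
CONFIG_HVM is force-defined and MOCK_NDEBUG stands in for NDEBUG so
the example is deterministic). It also illustrates the v2 wrinkle:
the condition has to be spelled out in a plain #if, because newer
clang diagnoses `defined` appearing as the result of macro expansion.

```c
#include <assert.h>

#define CONFIG_HVM 1     /* assumption: HVM support compiled in */
/* MOCK_NDEBUG deliberately left undefined: a debug build */

/*
 * Fold the CONFIG_HVM test into P2M_AUDIT itself, so call sites only
 * need "#if P2M_AUDIT".  Written as a plain #if/#else rather than
 * "#define P2M_AUDIT (!defined(...) && defined(...))", since "defined"
 * produced by macro expansion is undefined behaviour and newer clang
 * warns about it.
 */
#if !defined(MOCK_NDEBUG) && defined(CONFIG_HVM)
#define P2M_AUDIT 1
#else
#define P2M_AUDIT 0
#endif

struct p2m_mock {
    int  (*write_p2m_entry)(void);
#if P2M_AUDIT
    long (*audit_p2m)(void);   /* field only exists in auditing builds */
#endif
};

int p2m_audit_enabled(void)
{
    return P2M_AUDIT;
}
```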

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1012,7 +1012,7 @@ long arch_do_domctl(
         break;
 #endif
 
-#if P2M_AUDIT && defined(CONFIG_HVM)
+#if P2M_AUDIT
     case XEN_DOMCTL_audit_p2m:
         if ( d == currd )
             ret = -EPERM;
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1260,7 +1260,9 @@ int ept_p2m_init(struct p2m_domain *p2m)
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->change_entry_type_range = ept_change_entry_type_range;
     p2m->memory_type_changed = ept_memory_type_changed;
+#if P2M_AUDIT
     p2m->audit_p2m = NULL;
+#endif
     p2m->tlb_flush = ept_tlb_flush;
 
     /* Set the memory type used when accessing EPT paging structures. */
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -971,8 +971,8 @@ static int p2m_pt_change_entry_type_rang
     return err;
 }
 
-#if P2M_AUDIT && defined(CONFIG_HVM)
-long p2m_pt_audit_p2m(struct p2m_domain *p2m)
+#if P2M_AUDIT
+static long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
     unsigned long entry_count = 0, pmbad = 0;
     unsigned long mfn, gfn, m2pfn;
@@ -1120,8 +1120,6 @@ long p2m_pt_audit_p2m(struct p2m_domain
 
     return pmbad;
 }
-#else
-# define p2m_pt_audit_p2m NULL
 #endif /* P2M_AUDIT */
 
 /* Set up the p2m function pointers for pagetable format */
@@ -1141,8 +1139,6 @@ void p2m_pt_init(struct p2m_domain *p2m)
 
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
-#else
-    p2m->audit_p2m = NULL;
 #endif
 }
 
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2435,7 +2435,7 @@ int p2m_altp2m_propagate_change(struct d
 
 /*** Audit ***/
 
-#if P2M_AUDIT && defined(CONFIG_HVM)
+#if P2M_AUDIT
 void audit_p2m(struct domain *d,
                uint64_t *orphans,
                 uint64_t *m2p_bad,
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -31,6 +31,14 @@
 #include <asm/mem_sharing.h>
 #include <asm/page.h>    /* for pagetable_t */
 
+/* Debugging and auditing of the P2M code? */
+#if !defined(NDEBUG) && defined(CONFIG_HVM)
+#define P2M_AUDIT     1
+#else
+#define P2M_AUDIT     0
+#endif
+#define P2M_DEBUGGING 0
+
 extern bool_t opt_hap_1gb, opt_hap_2mb;
 
 /*
@@ -268,7 +276,9 @@ struct p2m_domain {
     int                (*write_p2m_entry)(struct p2m_domain *p2m,
                                           unsigned long gfn, l1_pgentry_t *p,
                                           l1_pgentry_t new, unsigned int level);
+#if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
+#endif
 
     /*
      * P2M updates may require TLBs to be flushed (invalidated).
@@ -758,14 +768,6 @@ extern void p2m_pt_init(struct p2m_domai
 void *map_domain_gfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t *mfn,
                      p2m_query_t q, uint32_t *pfec);
 
-/* Debugging and auditing of the P2M code? */
-#ifndef NDEBUG
-#define P2M_AUDIT     1
-#else
-#define P2M_AUDIT     0
-#endif
-#define P2M_DEBUGGING 0
-
 #if P2M_AUDIT
 extern void audit_p2m(struct domain *d,
                       uint64_t *orphans,



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:36:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20545.46523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayAc-00018Y-0O; Fri, 06 Nov 2020 09:36:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20545.46523; Fri, 06 Nov 2020 09:36:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayAb-00018Q-TR; Fri, 06 Nov 2020 09:36:33 +0000
Received: by outflank-mailman (input) for mailman id 20545;
 Fri, 06 Nov 2020 09:36:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayAa-00017l-Ol
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:36:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22abde42-bc10-405d-a9ce-afd285c90d4b;
 Fri, 06 Nov 2020 09:36:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16E0AABB2;
 Fri,  6 Nov 2020 09:36:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655389;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JtLLY79rVz1/b+gt5Ib9A6igOOIy5NEZdkH5oVbCQnk=;
	b=sOn49/Kv4bzE9dwrFdp7rPduHiaOUosQ6zk0SNy8925zZjyHbA6O0CahuD4p3U3yXkJdLf
	CJFwYxBkJ6j8y+qCqNnuNK6Ma/saDQ6m2NU2uFNrCxR0MId2N3+PE1YwD1+F20dqOgKo6o
	1/8vJzt3QR0lbB5jYe1nSBOU49iY/ww=
Subject: [PATCH v2 4/9] x86/HAP: move nested-P2M flush calculations out of
 locked region
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <df75656c-2d01-e399-e6a8-aab084319976@suse.com>
Date: Fri, 6 Nov 2020 10:36:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

By latching the old MFN into a local variable, the flush calculations
no longer depend on anything but local variables. Hence the point in
time at which they are performed no longer matters, and they can be
moved past the locked region.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
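
The transformation can be sketched with a minimal mock (hypothetical
names, not Xen code): latch the old value while holding the lock; the
flush predicate then reads only locals, so it may safely run after the
unlock.

```c
#include <assert.h>
#include <stdbool.h>

static int lock_depth;                         /* mock paging lock */
static void mock_lock(void)   { lock_depth++; }
static void mock_unlock(void) { lock_depth--; }

/*
 * Write a new entry; return whether a (mock) nested flush would be
 * needed.  The old value is latched inside the locked region, so the
 * flush predicate depends only on local variables and can be
 * evaluated after unlocking.
 */
bool write_entry_mock(unsigned long *entry, unsigned long new_e)
{
    unsigned long old;

    mock_lock();
    old = *entry;      /* latch the old value under the lock */
    *entry = new_e;
    mock_unlock();

    assert(lock_depth == 0);   /* predicate runs outside the lock */
    /* flush unless the old entry was empty or unchanged */
    return old != 0 && old != new_e;
}
```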

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -780,7 +780,7 @@ hap_write_p2m_entry(struct p2m_domain *p
 {
     struct domain *d = p2m->domain;
     uint32_t old_flags;
-    bool_t flush_nestedp2m = 0;
+    mfn_t omfn;
     int rc;
 
     /* We know always use the host p2m here, regardless if the vcpu
@@ -790,21 +790,11 @@ hap_write_p2m_entry(struct p2m_domain *p
 
     paging_lock(d);
     old_flags = l1e_get_flags(*p);
-
-    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) 
-         && !p2m_get_hostp2m(d)->defer_nested_flush ) {
-        /* We are replacing a valid entry so we need to flush nested p2ms,
-         * unless the only change is an increase in access rights. */
-        mfn_t omfn = l1e_get_mfn(*p);
-        mfn_t nmfn = l1e_get_mfn(new);
-
-        flush_nestedp2m = !(mfn_eq(omfn, nmfn)
-            && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
-    }
+    omfn = l1e_get_mfn(*p);
 
     rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
                           p2m_flags_to_type(old_flags), l1e_get_mfn(new),
-                          l1e_get_mfn(*p), level);
+                          omfn, level);
     if ( rc )
     {
         paging_unlock(d);
@@ -817,7 +807,14 @@ hap_write_p2m_entry(struct p2m_domain *p
 
     paging_unlock(d);
 
-    if ( flush_nestedp2m )
+    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) &&
+         !p2m_get_hostp2m(d)->defer_nested_flush &&
+         /*
+          * We are replacing a valid entry so we need to flush nested p2ms,
+          * unless the only change is an increase in access rights.
+          */
+         (!mfn_eq(omfn, l1e_get_mfn(new)) ||
+          !perms_strictly_increased(old_flags, l1e_get_flags(new))) )
         p2m_flush_nestedp2m(d);
 
     return 0;



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:37:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:37:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20548.46535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayB2-0001Fw-A9; Fri, 06 Nov 2020 09:37:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20548.46535; Fri, 06 Nov 2020 09:37:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayB2-0001Fp-6H; Fri, 06 Nov 2020 09:37:00 +0000
Received: by outflank-mailman (input) for mailman id 20548;
 Fri, 06 Nov 2020 09:36:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayB0-0001FZ-Nd
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:36:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6de8ac3-bca3-40d6-b7a4-e9b86530642d;
 Fri, 06 Nov 2020 09:36:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A748ABB2;
 Fri,  6 Nov 2020 09:36:56 +0000 (UTC)
X-Inumbo-ID: d6de8ac3-bca3-40d6-b7a4-e9b86530642d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655416;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Z3NlXY+aD0HsOqqGXGoEXxkfCl7avEaD79QcpQmNwpw=;
	b=cJaf/k/s8d7kH+hOqmFgU3x6DByYJfcNSduE7hDEMP8Aa1xgcJ1slt9L+USK1peIGyRANB
	9H3kh9Whm0uolVwcW+05wI00sKv4dmLceyjGthu2PMuDnWZdt6JFDcQCCpiEQStlq8KfVm
	TvNzCKe2scNiDZi7IeOkZTXfMYP3QhE=
Subject: [PATCH v2 5/9] x86/p2m: split write_p2m_entry() hook
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <80c83e61-b4f9-f8a8-7db1-351521e623f5@suse.com>
Date: Fri, 6 Nov 2020 10:36:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Substantial parts of the present handlers are identical; in fact
nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move the
common parts right into write_p2m_entry(), splitting the hooks into a
"pre" one (needed only by shadow code) and a "post" one.

As to the common parts moved, I think that p2m_flush_nestedp2m() is,
at least from an abstract perspective, also applicable in the shadow
case. Hence it doesn't get a third hook put in place.

The initial comment that was in hap_write_p2m_entry() gets dropped: its
placement was bogus, and looking back at the commit introducing it
(dd6de3ab9985 "Implement Nested-on-Nested") I can't see either what use
of a p2m it was meant to be associated with.
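
The resulting shape can be sketched generically (hypothetical fake_*
types and names, not the actual Xen declarations):

```c
#include <stddef.h>

typedef unsigned int fake_pte_t;     /* illustrative stand-in for a PTE */

struct fake_p2m {
    /* "pre" hook: only shadow code needs it, so it may be NULL. */
    void (*pre)(fake_pte_t old_e, fake_pte_t new_e);
    /* "post" hook: per-implementation flush handling after the write. */
    void (*post)(fake_pte_t oflags);
    fake_pte_t entry;
};

static unsigned int post_calls;      /* observability for the demo only */

static void demo_post(fake_pte_t oflags)
{
    (void)oflags;
    ++post_calls;
}

/* The common part lives in the caller; optional hooks wrap it. */
static int fake_write_entry(struct fake_p2m *p2m, fake_pte_t new_e)
{
    fake_pte_t oflags = p2m->entry;

    if ( p2m->pre )
        p2m->pre(p2m->entry, new_e);

    p2m->entry = new_e;              /* formerly duplicated per handler */

    if ( p2m->post )
        p2m->post(oflags);

    return 0;
}
```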

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
---
RFC: This is effectively the alternative to the suggestion in an earlier
     patch that we might do away with the hook altogether. Of course a
     hybrid approach would also be possible, by using direct calls here
     instead of splitting the hook into two.
---
I'm unsure whether p2m_init_nestedp2m() zapping the "pre" hook is
actually correct, or whether previously the sh_unshadow_for_p2m_change()
invocation was wrongly skipped.

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -774,55 +774,18 @@ static void hap_update_paging_modes(stru
     put_gfn(d, cr3_gfn);
 }
 
-static int
-hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
-                    l1_pgentry_t new, unsigned int level)
+static void
+hap_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
 {
     struct domain *d = p2m->domain;
-    uint32_t old_flags;
-    mfn_t omfn;
-    int rc;
 
-    /* We know always use the host p2m here, regardless if the vcpu
-     * is in host or guest mode. The vcpu can be in guest mode by
-     * a hypercall which passes a domain and chooses mostly the first
-     * vcpu. */
-
-    paging_lock(d);
-    old_flags = l1e_get_flags(*p);
-    omfn = l1e_get_mfn(*p);
-
-    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
-                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
-                          omfn, level);
-    if ( rc )
-    {
-        paging_unlock(d);
-        return rc;
-    }
-
-    safe_write_pte(p, new);
-    if ( old_flags & _PAGE_PRESENT )
+    if ( oflags & _PAGE_PRESENT )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
-
-    paging_unlock(d);
-
-    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) &&
-         !p2m_get_hostp2m(d)->defer_nested_flush &&
-         /*
-          * We are replacing a valid entry so we need to flush nested p2ms,
-          * unless the only change is an increase in access rights.
-          */
-         (!mfn_eq(omfn, l1e_get_mfn(new)) ||
-          !perms_strictly_increased(old_flags, l1e_get_flags(new))) )
-        p2m_flush_nestedp2m(d);
-
-    return 0;
 }
 
 void hap_p2m_init(struct p2m_domain *p2m)
 {
-    p2m->write_p2m_entry = hap_write_p2m_entry;
+    p2m->write_p2m_entry_post = hap_write_p2m_entry_post;
 }
 
 static unsigned long hap_gva_to_gfn_real_mode(
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -71,24 +71,11 @@
 /*        NESTED VIRT P2M FUNCTIONS         */
 /********************************************/
 
-int
-nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
+void
+nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
 {
-    struct domain *d = p2m->domain;
-    uint32_t old_flags;
-
-    paging_lock(d);
-
-    old_flags = l1e_get_flags(*p);
-    safe_write_pte(p, new);
-
-    if (old_flags & _PAGE_PRESENT)
-        guest_flush_tlb_mask(d, p2m->dirty_cpumask);
-
-    paging_unlock(d);
-
-    return 0;
+    if ( oflags & _PAGE_PRESENT )
+        guest_flush_tlb_mask(p2m->domain, p2m->dirty_cpumask);
 }
 
 /********************************************/
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
 {
     struct domain *d = p2m->domain;
     struct vcpu *v = current;
-    int rc = 0;
 
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
     if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
          p2m_is_nestedp2m(p2m) )
-        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
+    {
+        unsigned int oflags;
+        mfn_t omfn;
+        int rc;
+
+        paging_lock(d);
+
+        if ( p2m->write_p2m_entry_pre )
+            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
+
+        oflags = l1e_get_flags(*p);
+        omfn = l1e_get_mfn(*p);
+
+        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
+                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
+                              omfn, level);
+        if ( rc )
+        {
+            paging_unlock(d);
+            return rc;
+        }
+
+        safe_write_pte(p, new);
+
+        if ( p2m->write_p2m_entry_post )
+            p2m->write_p2m_entry_post(p2m, oflags);
+
+        paging_unlock(d);
+
+        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
+             (oflags & _PAGE_PRESENT) &&
+             !p2m_get_hostp2m(d)->defer_nested_flush &&
+             /*
+              * We are replacing a valid entry so we need to flush nested p2ms,
+              * unless the only change is an increase in access rights.
+              */
+             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
+              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
+            p2m_flush_nestedp2m(d);
+    }
     else
         safe_write_pte(p, new);
 
-    return rc;
+    return 0;
 }
 
 // Find the next level's P2M entry, checking for out-of-range gfn's...
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -198,7 +198,8 @@ static int p2m_init_nestedp2m(struct dom
             return -ENOMEM;
         }
         p2m->p2m_class = p2m_nested;
-        p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
+        p2m->write_p2m_entry_pre = NULL;
+        p2m->write_p2m_entry_post = nestedp2m_write_p2m_entry_post;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
     }
 
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3144,34 +3144,22 @@ static void sh_unshadow_for_p2m_change(s
     }
 }
 
-static int
-shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                       l1_pgentry_t *p, l1_pgentry_t new,
-                       unsigned int level)
+static void
+sh_write_p2m_entry_pre(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
+                       l1_pgentry_t new, unsigned int level)
 {
-    struct domain *d = p2m->domain;
-    int rc;
-
-    paging_lock(d);
-
     /* If there are any shadows, update them.  But if shadow_teardown()
      * has already been called then it's not safe to try. */
     if ( likely(d->arch.paging.shadow.total_pages != 0) )
          sh_unshadow_for_p2m_change(d, gfn, p, new, level);
-
-    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
-                          p2m_flags_to_type(l1e_get_flags(*p)),
-                          l1e_get_mfn(new), l1e_get_mfn(*p), level);
-    if ( rc )
-    {
-        paging_unlock(d);
-        return rc;
-    }
-
-    /* Update the entry with new content */
-    safe_write_pte(p, new);
+}
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)
+static void
+sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
+{
+    struct domain *d = p2m->domain;
+
     /* If we're doing FAST_FAULT_PATH, then shadow mode may have
        cached the fact that this is an mmio region in the shadow
        page tables.  Blow the tables away to remove the cache.
@@ -3183,16 +3171,15 @@ shadow_write_p2m_entry(struct p2m_domain
         shadow_blow_tables(d);
         d->arch.paging.shadow.has_fast_mmio_entries = false;
     }
-#endif
-
-    paging_unlock(d);
-
-    return 0;
 }
+#else
+# define sh_write_p2m_entry_post NULL
+#endif
 
 void shadow_p2m_init(struct p2m_domain *p2m)
 {
-    p2m->write_p2m_entry = shadow_write_p2m_entry;
+    p2m->write_p2m_entry_pre  = sh_write_p2m_entry_pre;
+    p2m->write_p2m_entry_post = sh_write_p2m_entry_post;
 }
 
 /**************************************************************************/
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -272,10 +272,13 @@ struct p2m_domain {
                                                   unsigned long first_gfn,
                                                   unsigned long last_gfn);
     void               (*memory_type_changed)(struct p2m_domain *p2m);
-    
-    int                (*write_p2m_entry)(struct p2m_domain *p2m,
-                                          unsigned long gfn, l1_pgentry_t *p,
-                                          l1_pgentry_t new, unsigned int level);
+    void               (*write_p2m_entry_pre)(struct domain *d,
+                                              unsigned long gfn,
+                                              l1_pgentry_t *p,
+                                              l1_pgentry_t new,
+                                              unsigned int level);
+    void               (*write_p2m_entry_post)(struct p2m_domain *p2m,
+                                               unsigned int oflags);
 #if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
 #endif
@@ -472,7 +475,7 @@ void __put_gfn(struct p2m_domain *p2m, u
  *
  * This is also used in the shadow code whenever the paging lock is
  * held -- in those cases, the caller is protected against concurrent
- * p2m updates by the fact that shadow_write_p2m_entry() also takes
+ * p2m updates by the fact that write_p2m_entry() also takes
  * the paging lock.
  *
  * Note that an unlocked accessor only makes sense for a "query" lookup.
@@ -841,8 +844,8 @@ void np2m_flush_base(struct vcpu *v, uns
 void hap_p2m_init(struct p2m_domain *p2m);
 void shadow_p2m_init(struct p2m_domain *p2m);
 
-int nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
+void nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m,
+                                    unsigned int oflags);
 
 /*
  * Alternate p2m: shadow p2m tables used for alternate memory views



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:37:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20555.46547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayBf-0001Np-KM; Fri, 06 Nov 2020 09:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20555.46547; Fri, 06 Nov 2020 09:37:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayBf-0001Ni-Gi; Fri, 06 Nov 2020 09:37:39 +0000
Received: by outflank-mailman (input) for mailman id 20555;
 Fri, 06 Nov 2020 09:37:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayBd-0001NW-FJ
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:37:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f65050a-58fa-413f-86d4-db1c8e387c1c;
 Fri, 06 Nov 2020 09:37:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 021C1AB8F;
 Fri,  6 Nov 2020 09:37:36 +0000 (UTC)
X-Inumbo-ID: 7f65050a-58fa-413f-86d4-db1c8e387c1c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655456;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uv1162pg+vDf/sXxEE8AWt03fZAO5xoSiLQGvgoxcW8=;
	b=e+aBHt4xb0KALE3ZJ8GWRgvj8SifBmznTKK5bGPcbEsSCcX3i8tetmJLH8F1XCiH9vxYZX
	YfPLtbWIs+pQMEnxNcH72O+9ej8ZT7DqL1NrhpCCVln5Qf3oxpRrWfLKahZmyRtLlYYQpA
	NFCw3bSMVOF510ducdTz8jxZLf8aQYE=
Subject: [PATCH v2 6/9] x86/p2m: avoid unnecessary calls of
 write_p2m_entry_pre() hook
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <3386a823-5560-9cf3-5711-219d5bd0e54e@suse.com>
Date: Fri, 6 Nov 2020 10:37:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When shattering a large page, we first construct the new page table page
and only then hook it up. The "pre" hook does nothing in this case, as
the page starts out all blank. Avoid 512 calls into shadow code here by
passing in INVALID_GFN, indicating that the page being updated is not
(yet) associated with any GFN. (The alternative to this change would be
to pass in a correct GFN, which then couldn't be the same on every loop
iteration.)
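
The sentinel check amounts to the following (a hypothetical model; the
FAKE_* names and the call counter are illustrative, not Xen code):

```c
#include <limits.h>

#define FAKE_INVALID_GFN ULONG_MAX   /* sentinel, mirroring INVALID_GFN */

static unsigned int pre_calls;       /* counts hook invocations (demo) */

static void fake_pre_hook(unsigned long gfn)
{
    (void)gfn;
    ++pre_calls;
}

/* The hook is skipped for entries of a page not yet hooked up anywhere. */
static void fake_write_entry(unsigned long gfn)
{
    if ( gfn != FAKE_INVALID_GFN )
        fake_pre_hook(gfn);
    /* ... the actual PTE write would go here ... */
}
```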

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -134,7 +134,7 @@ static int write_p2m_entry(struct p2m_do
 
         paging_lock(d);
 
-        if ( p2m->write_p2m_entry_pre )
+        if ( p2m->write_p2m_entry_pre && gfn != gfn_x(INVALID_GFN) )
             p2m->write_p2m_entry_pre(d, gfn, p, new, level);
 
         oflags = l1e_get_flags(*p);
@@ -290,7 +290,8 @@ p2m_next_level(struct p2m_domain *p2m, v
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
                                      flags);
-            rc = write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            rc = write_p2m_entry(p2m, gfn_x(INVALID_GFN), l1_entry + i,
+                                 new_entry, level);
             if ( rc )
             {
                 unmap_domain_page(l1_entry);



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:38:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20560.46559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayC7-0001We-UC; Fri, 06 Nov 2020 09:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20560.46559; Fri, 06 Nov 2020 09:38:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayC7-0001WX-QM; Fri, 06 Nov 2020 09:38:07 +0000
Received: by outflank-mailman (input) for mailman id 20560;
 Fri, 06 Nov 2020 09:38:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayC7-0001WP-3N
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:38:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba8c70f6-f202-43ec-ae61-5664e5e78cf7;
 Fri, 06 Nov 2020 09:38:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5B66CAB8F;
 Fri,  6 Nov 2020 09:38:05 +0000 (UTC)
X-Inumbo-ID: ba8c70f6-f202-43ec-ae61-5664e5e78cf7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655485;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4luM7MlPeuTiZGoTwImWIMS+Z6u+POF2aKYB3Z/AaQQ=;
	b=hFQdSaoZG3mAy8CIobKtYaGvMoC/vMK+kQDMnf3LPLiyKneaEq0cyf6gsPGEbgmZjeS5tl
	bUdcxqBx2QtTMVQRu50zKbaVYFVyyoeOxzoEtALKxKDIPP31Ub/7b+qqPZ56tUVW6KCJqz
	hpZfRrEzImebNZypRbMb0tAxMSwJQmE=
Subject: [PATCH v2 7/9] x86/p2m: pass old PTE directly to
 write_p2m_entry_pre() hook
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <9d10a4be-e463-0d3d-39c0-0920761537c5@suse.com>
Date: Fri, 6 Nov 2020 10:38:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In no case is a pointer to non-const needed. Since no pointer arithmetic
is done by the sole user of the hook, passing in the PTE itself is quite
fine.
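
The by-value signature guarantees at the type level that the hook cannot
write through the entry (a hypothetical mock of the l1_pgentry_t
accessors; the fake_* names are illustrative):

```c
/* Mock of the PTE type and a flags accessor; illustrative only. */
typedef struct { unsigned long raw; } fake_l1e_t;

static unsigned int fake_get_flags(fake_l1e_t e)
{
    return (unsigned int)(e.raw & 0xfff);
}

/*
 * Receiving the PTE by value: the caller's copy cannot be modified
 * through this parameter, which is what dropping the pointer buys.
 */
static unsigned int fake_old_flags(fake_l1e_t old)
{
    return fake_get_flags(old);
}
```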

While doing this adjustment, also
- drop the intermediate sh_write_p2m_entry_pre():
  sh_unshadow_for_p2m_change() can itself be used as the hook function,
  moving the conditional into it,
- introduce a local variable holding the flags of the old entry.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -135,7 +135,7 @@ static int write_p2m_entry(struct p2m_do
         paging_lock(d);
 
         if ( p2m->write_p2m_entry_pre && gfn != gfn_x(INVALID_GFN) )
-            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
+            p2m->write_p2m_entry_pre(d, gfn, *p, new, level);
 
         oflags = l1e_get_flags(*p);
         omfn = l1e_get_mfn(*p);
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3078,19 +3078,28 @@ static int shadow_test_disable(struct do
  */
 
 static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
-                                       l1_pgentry_t *p, l1_pgentry_t new,
+                                       l1_pgentry_t old, l1_pgentry_t new,
                                        unsigned int level)
 {
+    unsigned int oflags = l1e_get_flags(old);
+
+    /*
+     * If there are any shadows, update them.  But if shadow_teardown()
+     * has already been called then it's not safe to try.
+     */
+    if ( unlikely(!d->arch.paging.shadow.total_pages) )
+        return;
+
     /* The following assertion is to make sure we don't step on 1GB host
      * page support of HVM guest. */
-    ASSERT(!(level > 2 && (l1e_get_flags(*p) & _PAGE_PRESENT) &&
-             (l1e_get_flags(*p) & _PAGE_PSE)));
+    ASSERT(!(level > 2 && (oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
 
     /* If we're removing an MFN from the p2m, remove it from the shadows too */
     if ( level == 1 )
     {
-        mfn_t mfn = l1e_get_mfn(*p);
-        p2m_type_t p2mt = p2m_flags_to_type(l1e_get_flags(*p));
+        mfn_t mfn = l1e_get_mfn(old);
+        p2m_type_t p2mt = p2m_flags_to_type(oflags);
+
         if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(mfn) )
         {
             sh_remove_all_shadows_and_parents(d, mfn);
@@ -3102,15 +3111,15 @@ static void sh_unshadow_for_p2m_change(s
     /* If we're removing a superpage mapping from the p2m, we need to check
      * all the pages covered by it.  If they're still there in the new
      * scheme, that's OK, but otherwise they must be unshadowed. */
-    if ( level == 2 && (l1e_get_flags(*p) & _PAGE_PRESENT) &&
-         (l1e_get_flags(*p) & _PAGE_PSE) )
+    if ( level == 2 && (oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE) )
     {
         unsigned int i;
         cpumask_t flushmask;
-        mfn_t omfn = l1e_get_mfn(*p);
+        mfn_t omfn = l1e_get_mfn(old);
         mfn_t nmfn = l1e_get_mfn(new);
         l1_pgentry_t *npte = NULL;
-        p2m_type_t p2mt = p2m_flags_to_type(l1e_get_flags(*p));
+        p2m_type_t p2mt = p2m_flags_to_type(oflags);
+
         if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
         {
             cpumask_clear(&flushmask);
@@ -3144,16 +3153,6 @@ static void sh_unshadow_for_p2m_change(s
     }
 }
 
-static void
-sh_write_p2m_entry_pre(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
-                       l1_pgentry_t new, unsigned int level)
-{
-    /* If there are any shadows, update them.  But if shadow_teardown()
-     * has already been called then it's not safe to try. */
-    if ( likely(d->arch.paging.shadow.total_pages != 0) )
-         sh_unshadow_for_p2m_change(d, gfn, p, new, level);
-}
-
 #if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)
 static void
 sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
@@ -3178,7 +3177,7 @@ sh_write_p2m_entry_post(struct p2m_domai
 
 void shadow_p2m_init(struct p2m_domain *p2m)
 {
-    p2m->write_p2m_entry_pre  = sh_write_p2m_entry_pre;
+    p2m->write_p2m_entry_pre  = sh_unshadow_for_p2m_change;
     p2m->write_p2m_entry_post = sh_write_p2m_entry_post;
 }
 
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -274,7 +274,7 @@ struct p2m_domain {
     void               (*memory_type_changed)(struct p2m_domain *p2m);
     void               (*write_p2m_entry_pre)(struct domain *d,
                                               unsigned long gfn,
-                                              l1_pgentry_t *p,
+                                              l1_pgentry_t old,
                                               l1_pgentry_t new,
                                               unsigned int level);
     void               (*write_p2m_entry_post)(struct p2m_domain *p2m,



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:38:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20567.46570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayCo-0001fP-6q; Fri, 06 Nov 2020 09:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20567.46570; Fri, 06 Nov 2020 09:38:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayCo-0001fI-3x; Fri, 06 Nov 2020 09:38:50 +0000
Received: by outflank-mailman (input) for mailman id 20567;
 Fri, 06 Nov 2020 09:38:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayCn-0001fA-4R
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:38:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 530704a9-601a-42ed-9867-1ea560e60947;
 Fri, 06 Nov 2020 09:38:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7535DAC35;
 Fri,  6 Nov 2020 09:38:47 +0000 (UTC)
X-Inumbo-ID: 530704a9-601a-42ed-9867-1ea560e60947
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655527;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iCfZA7E0DDQIE58Cvfe3oRsW91UfgHbT3ojqFCv1fG4=;
	b=O8m6+TKDxUzdQVOjFBwmmDkfYXYkV2mqTQ2qNclffpNjx401Oismzf9934kNqelYzkIUl4
	crVgBRfSr0F4kXVsiT/IeZIouNyyQDIT8WT8iQ/iyaqmdwBHDY5GLNNquMCF3LWFfKZEz/
	eZ9sE5IZVRTNccTcvc3jUmxH2eDI/Lk=
Subject: [PATCH v2 8/9] x86/shadow: cosmetics to sh_unshadow_for_p2m_change()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <db7e83c8-e40d-3642-4acf-6320b643140f@suse.com>
Date: Fri, 6 Nov 2020 10:38:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Make a few style adjustments:
- use switch(),
- widen the scope of commonly used variables,
- narrow the scope of other variables.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3081,7 +3081,9 @@ static void sh_unshadow_for_p2m_change(s
                                        l1_pgentry_t old, l1_pgentry_t new,
                                        unsigned int level)
 {
+    mfn_t omfn = l1e_get_mfn(old);
     unsigned int oflags = l1e_get_flags(old);
+    p2m_type_t p2mt = p2m_flags_to_type(oflags);
 
     /*
      * If there are any shadows, update them.  But if shadow_teardown()
@@ -3090,53 +3092,57 @@ static void sh_unshadow_for_p2m_change(s
     if ( unlikely(!d->arch.paging.shadow.total_pages) )
         return;
 
-    /* The following assertion is to make sure we don't step on 1GB host
-     * page support of HVM guest. */
-    ASSERT(!(level > 2 && (oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
-
-    /* If we're removing an MFN from the p2m, remove it from the shadows too */
-    if ( level == 1 )
+    switch ( level )
     {
-        mfn_t mfn = l1e_get_mfn(old);
-        p2m_type_t p2mt = p2m_flags_to_type(oflags);
+    default:
+        /*
+         * The following assertion is to make sure we don't step on 1GB host
+         * page support of HVM guest.
+         */
+        ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
+        break;
 
-        if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(mfn) )
+    /* If we're removing an MFN from the p2m, remove it from the shadows too */
+    case 1:
+        if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) )
         {
-            sh_remove_all_shadows_and_parents(d, mfn);
-            if ( sh_remove_all_mappings(d, mfn, _gfn(gfn)) )
+            sh_remove_all_shadows_and_parents(d, omfn);
+            if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) )
                 guest_flush_tlb_mask(d, d->dirty_cpumask);
         }
-    }
+        break;
 
-    /* If we're removing a superpage mapping from the p2m, we need to check
+    /*
+     * If we're removing a superpage mapping from the p2m, we need to check
      * all the pages covered by it.  If they're still there in the new
-     * scheme, that's OK, but otherwise they must be unshadowed. */
-    if ( level == 2 && (oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE) )
-    {
-        unsigned int i;
-        cpumask_t flushmask;
-        mfn_t omfn = l1e_get_mfn(old);
-        mfn_t nmfn = l1e_get_mfn(new);
-        l1_pgentry_t *npte = NULL;
-        p2m_type_t p2mt = p2m_flags_to_type(oflags);
+     * scheme, that's OK, but otherwise they must be unshadowed.
+     */
+    case 2:
+        if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) )
+            break;
 
         if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
         {
+            unsigned int i;
+            cpumask_t flushmask;
+            mfn_t nmfn = l1e_get_mfn(new);
+            l1_pgentry_t *npte = NULL;
+
             cpumask_clear(&flushmask);
 
             /* If we're replacing a superpage with a normal L1 page, map it */
-            if ( (l1e_get_flags(new) & _PAGE_PRESENT)
-                 && !(l1e_get_flags(new) & _PAGE_PSE)
-                 && mfn_valid(nmfn) )
+            if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
+                 !(l1e_get_flags(new) & _PAGE_PSE) &&
+                 mfn_valid(nmfn) )
                 npte = map_domain_page(nmfn);
 
             gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
 
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
             {
-                if ( !npte
-                     || !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i])))
-                     || !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
+                if ( !npte ||
+                     !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) ||
+                     !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
                 {
                     /* This GFN->MFN mapping has gone away */
                     sh_remove_all_shadows_and_parents(d, omfn);
@@ -3150,6 +3156,8 @@ static void sh_unshadow_for_p2m_change(s
             if ( npte )
                 unmap_domain_page(npte);
         }
+
+        break;
     }
 }
 



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 09:39:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 09:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20574.46583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayDU-0001oo-Jz; Fri, 06 Nov 2020 09:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20574.46583; Fri, 06 Nov 2020 09:39:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayDU-0001oh-H3; Fri, 06 Nov 2020 09:39:32 +0000
Received: by outflank-mailman (input) for mailman id 20574;
 Fri, 06 Nov 2020 09:39:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kayDS-0001oV-Pc
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 09:39:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05f4054e-7a14-4dd1-9a27-4a9d515bc1dc;
 Fri, 06 Nov 2020 09:39:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3DF6EACC0;
 Fri,  6 Nov 2020 09:39:29 +0000 (UTC)
X-Inumbo-ID: 05f4054e-7a14-4dd1-9a27-4a9d515bc1dc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604655569;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JrqwwEQxZ/aX47WNn6gfQ28HffjW3EUbN72Yi3zn4r8=;
	b=bua3CtXiFHEoK1dwhvhpk/4ILkW24qm5ATcn2YD261+YZl10BnlRZtngTKxPIFLvsz8zl1
	poJ6QPY+ZzevKIWiEIW/sEwYXiQiWZRqnUm0qc7n0gUIcm/hW9r8M7WKFloyiFtwSZA+kA
	XsLxdiR+0NIXyM/uJRyYZk4iFJoRK64=
Subject: [PATCH v2 9/9] x86/shadow: adjust TLB flushing in
 sh_unshadow_for_p2m_change()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Message-ID: <76665833-415c-f192-29f6-1340191db7ff@suse.com>
Date: Fri, 6 Nov 2020 10:39:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Accumulating transient state of d->dirty_cpumask in a local variable is
unnecessary here: the flush can just as well be made with the dirty set
as it stands at the time of the call. With this, move the invocation to
a central place at the end of the function.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3084,6 +3084,7 @@ static void sh_unshadow_for_p2m_change(s
     mfn_t omfn = l1e_get_mfn(old);
     unsigned int oflags = l1e_get_flags(old);
     p2m_type_t p2mt = p2m_flags_to_type(oflags);
+    bool flush = false;
 
     /*
      * If there are any shadows, update them.  But if shadow_teardown()
@@ -3108,7 +3109,7 @@ static void sh_unshadow_for_p2m_change(s
         {
             sh_remove_all_shadows_and_parents(d, omfn);
             if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) )
-                guest_flush_tlb_mask(d, d->dirty_cpumask);
+                flush = true;
         }
         break;
 
@@ -3124,12 +3125,9 @@ static void sh_unshadow_for_p2m_change(s
         if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
         {
             unsigned int i;
-            cpumask_t flushmask;
             mfn_t nmfn = l1e_get_mfn(new);
             l1_pgentry_t *npte = NULL;
 
-            cpumask_clear(&flushmask);
-
             /* If we're replacing a superpage with a normal L1 page, map it */
             if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
                  !(l1e_get_flags(new) & _PAGE_PSE) &&
@@ -3147,11 +3145,10 @@ static void sh_unshadow_for_p2m_change(s
                     /* This GFN->MFN mapping has gone away */
                     sh_remove_all_shadows_and_parents(d, omfn);
                     if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
-                        cpumask_or(&flushmask, &flushmask, d->dirty_cpumask);
+                        flush = true;
                 }
                 omfn = mfn_add(omfn, 1);
             }
-            guest_flush_tlb_mask(d, &flushmask);
 
             if ( npte )
                 unmap_domain_page(npte);
@@ -3159,6 +3156,9 @@ static void sh_unshadow_for_p2m_change(s
 
         break;
     }
+
+    if ( flush )
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
 }
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 10:19:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 10:19:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20593.46610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayph-0005UV-UD; Fri, 06 Nov 2020 10:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20593.46610; Fri, 06 Nov 2020 10:19:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayph-0005UO-R0; Fri, 06 Nov 2020 10:19:01 +0000
Received: by outflank-mailman (input) for mailman id 20593;
 Fri, 06 Nov 2020 10:19:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kaypg-0005UG-9x
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 10:19:00 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.69]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0db3e3ed-d3fe-4253-937f-02402044e255;
 Fri, 06 Nov 2020 10:18:57 +0000 (UTC)
Received: from AM5PR0201CA0023.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::33) by AM5PR0801MB1922.eurprd08.prod.outlook.com
 (2603:10a6:203:4b::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.22; Fri, 6 Nov
 2020 10:18:54 +0000
Received: from VE1EUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::66) by AM5PR0201CA0023.outlook.office365.com
 (2603:10a6:203:3d::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 10:18:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT019.mail.protection.outlook.com (10.152.18.153) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 10:18:54 +0000
Received: ("Tessian outbound b03b78fa78b0:v64");
 Fri, 06 Nov 2020 10:18:54 +0000
Received: from 3fc5b060e8eb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 612D9063-7939-4A28-A847-EED1648CC8CB.1; 
 Fri, 06 Nov 2020 10:18:16 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3fc5b060e8eb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 10:18:16 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB8PR08MB5241.eurprd08.prod.outlook.com (2603:10a6:10:e2::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 10:18:15 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 10:18:15 +0000
X-Inumbo-ID: 0db3e3ed-d3fe-4253-937f-02402044e255
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4QplAv8KV3OlhzaqYEU5drAx9Np895aQ32gThNMgrlI=;
 b=f3lNSu22jqXLzC7m4kw3LMKkJ2y3cow2L2D7Md6Lq0zfLK5zjobJXKGQyBYN7DcIxVo2HXzaZS7jpuZMLg2fUbdtT7XzXKGJQ3v5dBAtaN9SKrUX2GfayAGiXDfULaj3ZJTqW7j792sP33D1dgsm2YX3YA2WljZEivRFv29j8TE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9e476fe4602b0054
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fYqDlyHFu9hUA3rLJ3aPDTBa8n1qEcWBx2B2Wt/jeToAmBMY54lzeG5Xotp0ru1cGLd/CiS1keiY9Mt6J33vSo68bUnPm8n4fcNOsJjwuBzZqB4urCn2m8GhRRprpsBTibQnwQgMJMD7btJjF+xpbo4HrYkrU0B9GmIjZOBh7+24um6Q/7YmutfMP7/djy+4Wy7JQtpMrFE6Vq/USnvOHhsa23AAqb3SkEWHqEavcBnBrrZ5iQt+r4cqt0DcXpAV9na2ti3t2s3CsP/QdC/t81Pm1BbfQjhjO9ppOPbKuEbt/T7g9u8El0uaV+Uil+4m5o5dMZHMPJw14DxgBOKY0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4QplAv8KV3OlhzaqYEU5drAx9Np895aQ32gThNMgrlI=;
 b=EfMy0ppIixsu0xzqhlhTLtVmKKG1MKKNBVYBZkNudvOJjyemD0eNf+qehYsECkgWwxBSKrpogb2v6yHMOtNy26CFNnk+TCqHam3uG8BvcjqY+PVcldA6/Ornr1ft961h3k0c4/36ojVOVZuDQrb3XBr59YY42AifyiNDqffN0bF2u8a9c1/vmhyzjFGLkm1QgA6JcXLIV2bhRqSJAvXRRkQPSl2bdeAp4aHbFaStwG84+LhlPh7i+up0upcFNGGazdnvYNXICzv9VsG7B3gw1HcxpDuEbHf7RESNHAHxVFKxrT5r56Oo7pJtDRLJMZOzVmn0477zHaRtf/CB+VoUgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86 directory.
Thread-Topic: [PATCH v2 3/4] xen/pci: Move x86 specific code to x86 directory.
Thread-Index: AQHWsfsnaWR/2G+0X0OcOTmzAzUqaqm61PiAgAATVwA=
Date: Fri, 6 Nov 2020 10:18:15 +0000
Message-ID: <B9DBAE21-C7A2-46A6-AD9F-19C4008A6F1A@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <687101e7e0e6feb64dd8ea63c8cf1aacf1684049.1604417224.git.rahul.singh@arm.com>
 <c49bf07f-3d39-b8e8-a3ed-a620aa5de5df@suse.com>
In-Reply-To: <c49bf07f-3d39-b8e8-a3ed-a620aa5de5df@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d0d53633-8a82-4b28-79d5-08d8823d5d6e
x-ms-traffictypediagnostic: DB8PR08MB5241:|AM5PR0801MB1922:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM5PR0801MB1922D0E4EF17590CB2A83287FCED0@AM5PR0801MB1922.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 jYvQiZ6d9ldRyo/09MhRwLgok0gliAgSmRQ6+ckJHdCwbo2koNvb4qfYcaBWnAAqMWnzc10mS2d1s3/GdNsZ5rtm8pnh7sDZKW+AkSiHF4s26idN3PeNaNjFkFMg0osv7z5vTa6DWJfiH6dvY3iIISeT/LmgHL+T9sPEe3MWNZhBYeqGTbG5UVzRnmnfaQk70YUYD1kSkn6cqnBtX3vSgdhKCfPLk0hWcXUDZ39xZaXSNMS5toAypH98HD2/TW91fFNMdvughLotRZgihw9fqzdKdyLzWWmVaXyRCIFkucwnJ+PnoJefe8JMnqHCA3PwjF2ck6OyNv0XdnDyy5PPvGdiUZUqMe1shmeXm/7AoCXXn56afb5de/fIBtUDrSboEW35dBsdSjB1R+NYX03Urw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(136003)(39860400002)(346002)(376002)(86362001)(6512007)(54906003)(6916009)(2616005)(478600001)(36756003)(33656002)(6486002)(91956017)(83380400001)(316002)(26005)(186003)(5660300002)(53546011)(6506007)(71200400001)(8936002)(2906002)(66476007)(64756008)(76116006)(66946007)(8676002)(66556008)(66446008)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 wC2s7e5muoH5nhEGAQ/r7B0x7YtrWG3Eciy3niV8Q6NkK4oSaiW05yRAcEQY6wegvQ2zokos0t9vhsutT1HLnmHdrn3CbtFfS9miFDvW4GIYiEIVvN6tr3kNGGiTtDqsaV0xAWAJQ9cTZDc7BV/ZyPajxWtSYpv3E8ZtVY0D03DfT640KTChY1wxnWX2Aj3MzO5OeCgxS3MxyD8/oF+9q6gRadTmC+7Va8WMFutQ8z/iBdf10KvlyAu9BbicdYsb9jK8+q3hTakLK75t5eEWKlE01uYdbTjH3gN0I7kccCHeNWRggUsYYOy514iTloevHQa0na/wfkWaxW+pRZgRL0Ld9lKif6ms52zKpYT0MXWSYYso9cxleCEQcsDag0H5Cf3PcdoJ6fgWDSHor/WCvZAucxX8J36BWG7Q9bjlCx6g5caUJ6OTEAQ8M+isZRTd4AEBrjJoeuSR7b7FdeNAGVoHZbPfIm+fKAKOR3SDGUQfW88aYz1zd7gW24ZXkAvpxOr3UIG/1YWTYDJIJ0sv/8Cjaymk/DyLZyYQmu8KTiO5/0tKYEjr0mkMbzsElJPrkr1AfaAuqxIBxe3NZpo/BOmNATGxFVeyLv9h/11c0vsiaylPwqv9zQnO6WTvaJz8XQUWe9Jhi3JMxg830AArKg==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <0A05B1AAB694034DA89A1ABAC24879D2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5241
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6a6c2c8e-ab69-4875-0e93-08d8823d4622
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	rf+v/sm6wTHF8zlqplbuh0owhnT57O6BdIYeQey1NeZx5HosJ6C4qzZNtzKsvPeZ4D97LUS4EcsXRQUibMZ6bidJZBp77Hw+zUfDr5tI4+CXFVFY1Gj7oCONBryht3vJHud9LeUEfgaQA3l5T0FkfUWM2sbq4kDu0kJdKknUVQknY2Nf85mQReOGf/LuS1SOjL71b3eJFrpzFAMVjlvFRhLHpVsRhDHWUYwNN7c6D9lEYlRyaNZEHlTXsJsZ+qHULTFSdkyTXqc4/grDCqXSHp4r7M93PfC03G4xVo38QK3xrfACoOvdJr7DK3cvtYYv2NLBIGAaDC7LOxxHHdrgSsk9s24RT8Lkpn+lVu6tsV7xDs/3WN7CSaEWO7NwyzC0Z5j6s2pLPdhKFRFZxx9X0QjjOL0/TR0YNzZMyMVh37FXRdq2Bcb7q+kKPcfJKzr86kBkW16oP1Nf4w2YZaAmZoJNvpdRSI0ybAqrT4W62ow=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(39860400002)(396003)(46966005)(36756003)(2906002)(8676002)(70586007)(6512007)(70206006)(6486002)(86362001)(6862004)(36906005)(4326008)(81166007)(47076004)(82740400003)(186003)(82310400003)(6506007)(54906003)(336012)(356005)(478600001)(53546011)(2616005)(8936002)(26005)(316002)(5660300002)(83380400001)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 10:18:54.3984
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d0d53633-8a82-4b28-79d5-08d8823d5d6e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1922

Hello Jan,

> On 6 Nov 2020, at 9:09 am, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.11.2020 16:59, Rahul Singh wrote:
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -14,7 +14,6 @@
>>  * this program; If not, see <http://www.gnu.org/licenses/>.
>>  */
>>
>> -#include <xen/sched.h>

It is removed in patch 3/4 of this series.
>> #include <xen/pci.h>
>> #include <xen/pci_regs.h>

I will remove it in the next version.

>> #include <xen/pci_ids.h>

It is required for PCI_VENDOR_ID_INTEL that is referenced in apply_quirks function.

>
> I think this hunk wants dropping - struct domain continues to be used
> in this file, for example.
>
>> @@ -847,71 +845,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>>     return ret;
>> }
>>
>> -static int pci_clean_dpci_irq(struct domain *d,
>> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
>> -{
>> -    struct dev_intx_gsi_link *digl, *tmp;
>> -
>> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
>> -
>> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
>> -        kill_timer(&pirq_dpci->timer);
>> -
>> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
>> -    {
>> -        list_del(&digl->list);
>> -        xfree(digl);
>> -    }
>> -
>> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
>> -
>> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
>> -        return 0;
>> -
>> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
>> -
>> -    return -ERESTART;
>> -}
>> -
>> -static int pci_clean_dpci_irqs(struct domain *d)
>> -{
>> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
>> -
>> -    if ( !is_iommu_enabled(d) )
>> -        return 0;
>> -
>> -    if ( !is_hvm_domain(d) )
>> -        return 0;
>> -
>> -    spin_lock(&d->event_lock);
>> -    hvm_irq_dpci = domain_get_irq_dpci(d);
>> -    if ( hvm_irq_dpci != NULL )
>> -    {
>> -        int ret = 0;
>> -
>> -        if ( hvm_irq_dpci->pending_pirq_dpci )
>> -        {
>> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
>> -                 ret = -ERESTART;
>> -            else
>> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
>> -        }
>> -
>> -        if ( !ret )
>> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
>> -        if ( ret )
>> -        {
>> -            spin_unlock(&d->event_lock);
>> -            return ret;
>> -        }
>> -
>> -        hvm_domain_irq(d)->dpci = NULL;
>> -        free_hvm_irq_dpci(hvm_irq_dpci);
>> -    }
>> -    spin_unlock(&d->event_lock);
>> -    return 0;
>> -}
>
> If this code gets moved, I think it ought to move into
> xen/drivers/passthrough/io.c, as that's where all the companion code
> sits. (The file as a whole, getting built for x86/HVM only, may want
> moving to xen/drivers/passthrough/x86/ if the underlying model isn't
> suitable for Arm. Then it probably also would want to be named hvm.c,
> to express its limited purpose.)


Ok, I will move the code to io.c, move that file to the x86 directory, and rename it to hvm.c.
>
> Jan

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 10:27:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 10:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20608.46622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayy3-0006PU-QO; Fri, 06 Nov 2020 10:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20608.46622; Fri, 06 Nov 2020 10:27:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kayy3-0006PN-NP; Fri, 06 Nov 2020 10:27:39 +0000
Received: by outflank-mailman (input) for mailman id 20608;
 Fri, 06 Nov 2020 10:27:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wn+7=EM=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kayy2-0006PI-Ta
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 10:27:38 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28fd1890-da98-4f7f-9fbd-bae6b5694c32;
 Fri, 06 Nov 2020 10:27:38 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id p1so736102wrf.12
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 02:27:38 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id u6sm1526567wmj.40.2020.11.06.02.27.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 02:27:35 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 156E71FF7E;
 Fri,  6 Nov 2020 10:27:35 +0000 (GMT)
X-Inumbo-ID: 28fd1890-da98-4f7f-9fbd-bae6b5694c32
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=SyBKF5lvpBcPf2igl0ogG/qJ4sn+brqfAU2r6HmwwSE=;
        b=ejZ6JohKG0rn5WQjVRFWJbtHCFsrWiqlveT4qu6dAxHWdIe/z6hoGvh+j+dvqZBsQs
         6CNA9CGpTD/0rr2V5PFwRIKF071QDT8Iocm29HCpZELu3WHs6kmBY7ovACtfc6Fcp5dj
         Eg1NpxjnRLMuzCOFUQo6BTgA1t31GjfUK9rlXZg/qz6U9AzPZfLtX1T0YHWCCLKA6m8Q
         6TC9GnP1DY7A1flkOXGdwv87t1lsdB7vaDmpwitbv9JFKcK1B/O1ThCk5TIrZNt2gNXP
         VJVYo5qlv0Ye2rjkJwfk8DApma5vT8SlTBs+usTLk1Vo/+JXiZ04DbmOMu8PTqZfsTEC
         j9Eg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=SyBKF5lvpBcPf2igl0ogG/qJ4sn+brqfAU2r6HmwwSE=;
        b=mcIyBMdkkyOXTb3RIDVQsGtfxGPAesg0YOX4jTjjpW0b60h0rE3VEr4LOmxxdiKJVp
         jhwNaN+oozXTIfdDf4/GVLagzLpaeudFbZK/z1HTWJe5R/Hj9vMJ07kdnWEQy9WU6zMz
         GEgAKf0KjYouwBp2ygZsyGkSBKQH03YDAJEkQc7PQcbiUaqxUTSdRhZyP4Wlbr+3tNtK
         Og2QcFG5cNW+CL62giRaZ4qYzNdSkT4pH2dlX01vchjtbEpzaM7IF8j+PScHSaQDdyYp
         ohmlxC9FLKj6kibdch4IMrGgDOasI5+p4lg9xFomNzcIKusAUzAOmfRa1R70hVbf4XU/
         sJKA==
X-Gm-Message-State: AOAM53137qOR98c/wcE6/SGacUKg+kiVRx4d3JuS8ygO8E8bYUPKN7Dr
	Qxnt0vIMVhzloEfoypjFpnXHNg==
X-Google-Smtp-Source: ABdhPJw553yGEbhizv9iCXBgj+jdsaxGTZUcWSpTtS2mWg52QvG6/RKniP75yXJMvBedSQPMaQAwbw==
X-Received: by 2002:a05:6000:8d:: with SMTP id m13mr1824130wrx.216.1604658457162;
        Fri, 06 Nov 2020 02:27:37 -0800 (PST)
References: <20201105175153.30489-1-alex.bennee@linaro.org>
 <20201105175153.30489-13-alex.bennee@linaro.org>
 <11afa6f8-ec49-ab2b-2011-ef22665cd0c3@redhat.com>
User-agent: mu4e 1.5.6; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org, masami.hiramatsu@linaro.org, Paul Durrant <paul@xen.org>,
 andre.przywara@arm.com, stefano.stabellini@linaro.org,
 takahiro.akashi@linaro.org, "open list:X86 Xen CPUs"
 <xen-devel@lists.xenproject.org>, Anthony Perard
 <anthony.perard@citrix.com>, Paolo Bonzini <pbonzini@redhat.com>,
 stefano.stabellini@xilinx.com, stratos-dev@op-lists.linaro.org
Subject: Re: [RFC PATCH 12/15] stubs/xen-hw-stub: drop
 xenstore_store_pv_console_info stub
In-reply-to: <11afa6f8-ec49-ab2b-2011-ef22665cd0c3@redhat.com>
Date: Fri, 06 Nov 2020 10:27:35 +0000
Message-ID: <871rh6bx0o.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Philippe Mathieu-Daudé <philmd@redhat.com> writes:

> On 11/5/20 6:51 PM, Alex Bennée wrote:
>> We should never build something that calls this without having it.
>
> "because ..."?

  xen-all.c is only built when we have CONFIG_XEN, which also gates the
  only call site in xen-console.c.

>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>
>>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> ---
>>  stubs/xen-hw-stub.c | 4 ----
>>  1 file changed, 4 deletions(-)
>>
>> diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
>> index 2ea8190921..15f3921a76 100644
>> --- a/stubs/xen-hw-stub.c
>> +++ b/stubs/xen-hw-stub.c
>> @@ -10,10 +10,6 @@
>>  #include "hw/xen/xen.h"
>>  #include "hw/xen/xen-x86.h"
>>
>> -void xenstore_store_pv_console_info(int i, Chardev *chr)
>> -{
>> -}
>> -
>>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
>>  {
>>      return -1;
>>


-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 10:55:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 10:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20627.46634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazOg-0000bV-07; Fri, 06 Nov 2020 10:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20627.46634; Fri, 06 Nov 2020 10:55:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazOf-0000bO-TC; Fri, 06 Nov 2020 10:55:09 +0000
Received: by outflank-mailman (input) for mailman id 20627;
 Fri, 06 Nov 2020 10:55:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kazOe-0000bI-L3
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 10:55:08 +0000
Received: from mail-wm1-x336.google.com (unknown [2a00:1450:4864:20::336])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a06db3de-763b-44c8-b798-6e21abfabedf;
 Fri, 06 Nov 2020 10:55:05 +0000 (UTC)
Received: by mail-wm1-x336.google.com with SMTP id h2so947403wmm.0
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 02:55:05 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id m20sm154250wrg.81.2020.11.06.02.55.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 06 Nov 2020 02:55:02 -0800 (PST)
X-Inumbo-ID: a06db3de-763b-44c8-b798-6e21abfabedf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3VZImL60NGG7RFdtXowdKLIl126llFB85n9FHuUleYg=;
        b=j9HkE8Wf1rHxLbWyroLJ7mwNbONvWlIA/cNM+qzZkVBs6OcUJDLrIHvcAKEa3WcH6f
         bWi7gd5oHU3D1/xqp1rQOGmLYZKwvZmRnOkmiK/eNMtrDtLjIqprz0dSIpRwv6RcIufR
         jxeOxe7NAas3YWyZJgam+3MwBnyZu8p+Ow2dcva+88AndRTDDNd0ZtWNBY72DNnxK+mD
         eKN1u51Jn/dLt4Tv6UAp73k+sjVq7jT1FyNfvaLYdEuqkSONRCmR8FvlKXDILm+VH8aq
         AhDOes26BVPyvsDDjfZ9TRgCSwiBeOiLhvDrunDkPL1lCW8Y35on8uIE44tZPuO90gk9
         oyUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3VZImL60NGG7RFdtXowdKLIl126llFB85n9FHuUleYg=;
        b=Z3Tdv872CrTdXi2hkAwqQ44Q7x/iNgK4ULJRSoyNZ2bq8KoV3m3Lg7ZVK1a3+Fbm93
         rAEaroH1192N0Qj1L295HLNsr4So3N0vOBgTSuOdlkz9UTdJl93INXcetSC9M1K+fL+E
         5br0nRD5AJH6WYQwci6zLJRc46jzFjBJrfCIg0zcjpqvnzkb4zkjmJpYyrGZ1u2zaYbb
         fFcj4rjiY9/4yazh0d3WA9HMmhao1lP1aA84ML2Ins1pQEob3cMjt8lsjqYB8InmkuvR
         PMK5M3ssgx80TrfRlIcNAWMJSf4KEqYolvkKVWjdyimyYMITw/Xi9qWaJQ4lmtqvMJtV
         GpeA==
X-Gm-Message-State: AOAM532SH6PFehfU5O3Ckg4XcxjYtiEIOYYkvu+++NYlGAC4Ji9jQ42M
	4CvpqSDJ54CeMP4btCUfVLgDcGtg/vk=
X-Google-Smtp-Source: ABdhPJzPZ7T7F/WcSfK4wr3+0wn4srf8gI2opX+FQ2roS6wyPQJ/tB0hq+1sf7xAxHQDbqKNAJtkHQ==
X-Received: by 2002:a1c:cc01:: with SMTP id h1mr1904127wmb.114.1604660103774;
        Fri, 06 Nov 2020 02:55:03 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	julien@xen.org,
	rahul.singh@arm.com
Subject: RE: [RFC PATCH 4/6] xen/arm64: Port Linux LL/SC and LSE atomics helpers to Xen
Date: Fri,  6 Nov 2020 10:55:01 +0000
Message-Id: <20201106105501.55396-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-5-ash.j.wilding@gmail.com>
References: <20201105185603.24149-5-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

In retrospect I should have put an intermediate patch between #3 and #4
deleting the existing headers. That would have made the diffs for #4 and
#5 much easier to read, since those patches copy the Linux versions into
Xen wholesale.

I'll do that for V1 when we get there, but for now, to aid readability,
I've pasted the complete header files below. While doing this I also
spent some time last night tidying them up to bring them in line with
the Xen coding style.

Similar email incoming on patch #5 too.

Thanks,
Ash.


========================================================================
====             xen/include/asm-arm/arm64/atomic.h                 ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes:
 *      - Rename header include guard to reflect Xen directory structure
 *      - Drop redundant includes and redirect others to Xen equivalents
 *      - Rename declarations from arch_atomic_<op>() to atomic_<op>()
 *      - Drop atomic64_t helper declarations
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * Copyright (C) 1996 Russell King.
 * Copyright (C) 2002 Deep Blue Solutions Ltd.
 * Copyright (C) 2012 ARM Ltd.
 * SPDX-License-Identifier: GPL-2.0-only
 */
#ifndef __ASM_ARM_ARM64_ATOMIC_H
#define __ASM_ARM_ARM64_ATOMIC_H

#include <xen/compiler.h>
#include <xen/types.h>

#include "lse.h"
#include "cmpxchg.h"

#define ATOMIC_OP(op)                               \
static inline void op(int i, atomic_t *v)           \
{                                                   \
    __lse_ll_sc_body(op, i, v);                     \
}

ATOMIC_OP(atomic_andnot)
ATOMIC_OP(atomic_or)
ATOMIC_OP(atomic_xor)
ATOMIC_OP(atomic_add)
ATOMIC_OP(atomic_and)
ATOMIC_OP(atomic_sub)

#undef ATOMIC_OP

#define ATOMIC_FETCH_OP(name, op)                   \
static inline int op##name(int i, atomic_t *v)      \
{                                                   \
    return __lse_ll_sc_body(op##name, i, v);        \
}

#define ATOMIC_FETCH_OPS(op)            \
    ATOMIC_FETCH_OP(_relaxed, op)       \
    ATOMIC_FETCH_OP(_acquire, op)       \
    ATOMIC_FETCH_OP(_release, op)       \
    ATOMIC_FETCH_OP(        , op)

ATOMIC_FETCH_OPS(atomic_fetch_andnot)
ATOMIC_FETCH_OPS(atomic_fetch_or)
ATOMIC_FETCH_OPS(atomic_fetch_xor)
ATOMIC_FETCH_OPS(atomic_fetch_add)
ATOMIC_FETCH_OPS(atomic_fetch_and)
ATOMIC_FETCH_OPS(atomic_fetch_sub)
ATOMIC_FETCH_OPS(atomic_add_return)
ATOMIC_FETCH_OPS(atomic_sub_return)

#undef ATOMIC_FETCH_OP
#undef ATOMIC_FETCH_OPS

#define atomic_read(v)              __READ_ONCE((v)->counter)
#define atomic_set(v, i)            __WRITE_ONCE(((v)->counter), (i))

#define atomic_add_return_relaxed       atomic_add_return_relaxed
#define atomic_add_return_acquire       atomic_add_return_acquire
#define atomic_add_return_release       atomic_add_return_release
#define atomic_add_return               atomic_add_return

#define atomic_sub_return_relaxed       atomic_sub_return_relaxed
#define atomic_sub_return_acquire       atomic_sub_return_acquire
#define atomic_sub_return_release       atomic_sub_return_release
#define atomic_sub_return               atomic_sub_return

#define atomic_fetch_add_relaxed        atomic_fetch_add_relaxed
#define atomic_fetch_add_acquire        atomic_fetch_add_acquire
#define atomic_fetch_add_release        atomic_fetch_add_release
#define atomic_fetch_add                atomic_fetch_add

#define atomic_fetch_sub_relaxed        atomic_fetch_sub_relaxed
#define atomic_fetch_sub_acquire        atomic_fetch_sub_acquire
#define atomic_fetch_sub_release        atomic_fetch_sub_release
#define atomic_fetch_sub                atomic_fetch_sub

#define atomic_fetch_and_relaxed        atomic_fetch_and_relaxed
#define atomic_fetch_and_acquire        atomic_fetch_and_acquire
#define atomic_fetch_and_release        atomic_fetch_and_release
#define atomic_fetch_and                atomic_fetch_and

#define atomic_fetch_andnot_relaxed     atomic_fetch_andnot_relaxed
#define atomic_fetch_andnot_acquire     atomic_fetch_andnot_acquire
#define atomic_fetch_andnot_release     atomic_fetch_andnot_release
#define atomic_fetch_andnot             atomic_fetch_andnot

#define atomic_fetch_or_relaxed         atomic_fetch_or_relaxed
#define atomic_fetch_or_acquire         atomic_fetch_or_acquire
#define atomic_fetch_or_release         atomic_fetch_or_release
#define atomic_fetch_or                 atomic_fetch_or

#define atomic_fetch_xor_relaxed        atomic_fetch_xor_relaxed
#define atomic_fetch_xor_acquire        atomic_fetch_xor_acquire
#define atomic_fetch_xor_release        atomic_fetch_xor_release
#define atomic_fetch_xor                atomic_fetch_xor

#define atomic_xchg_relaxed(v, new) \
    xchg_relaxed(&((v)->counter), (new))
#define atomic_xchg_acquire(v, new) \
    xchg_acquire(&((v)->counter), (new))
#define atomic_xchg_release(v, new) \
    xchg_release(&((v)->counter), (new))
#define atomic_xchg(v, new) \
    xchg(&((v)->counter), (new))

#define atomic_cmpxchg_relaxed(v, old, new) \
    cmpxchg_relaxed(&((v)->counter), (old), (new))
#define atomic_cmpxchg_acquire(v, old, new) \
    cmpxchg_acquire(&((v)->counter), (old), (new))
#define atomic_cmpxchg_release(v, old, new) \
    cmpxchg_release(&((v)->counter), (old), (new))

#define atomic_andnot            atomic_andnot

#endif /* __ASM_ARM_ARM64_ATOMIC_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
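As a side note for reviewers less familiar with this wrapper layer: the
ATOMIC_OP/ATOMIC_FETCH_OPS machinery above only generates thin
dispatchers into the LL/SC or LSE bodies; the caller-visible semantics
are the usual kernel-style ones. Below is a minimal, portable sketch of
those semantics using GCC/Clang __atomic builtins. This is purely
illustrative and NOT the Xen/arm64 implementation; only the helper names
mirror the header above.

```c
/* Illustration only, NOT the Xen/arm64 implementation: the same
 * caller-visible semantics expressed with GCC/Clang __atomic builtins.
 * atomic_t mirrors the kernel-style { int counter; } layout. */
typedef struct { int counter; } atomic_t;

static inline int atomic_read(atomic_t *v)
{
    return __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
}

static inline void atomic_add(int i, atomic_t *v)
{
    (void)__atomic_add_fetch(&v->counter, i, __ATOMIC_RELAXED);
}

/* atomic_add_return: full-barrier flavour, returns the NEW value */
static inline int atomic_add_return(int i, atomic_t *v)
{
    return __atomic_add_fetch(&v->counter, i, __ATOMIC_SEQ_CST);
}

/* atomic_fetch_add: full-barrier flavour, returns the OLD value */
static inline int atomic_fetch_add(int i, atomic_t *v)
{
    return __atomic_fetch_add(&v->counter, i, __ATOMIC_SEQ_CST);
}

/* atomic_cmpxchg: returns the value observed in memory; the swap
 * happened iff that value equals 'old' */
static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
    (void)__atomic_compare_exchange_n(&v->counter, &old, new, 0,
                                      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return old; /* updated to the observed value on failure */
}
```

The distinction worth keeping in mind when reviewing the asm bodies is
that the _return variants hand back the new value while the fetch_
variants hand back the old one, and the _relaxed/_acquire/_release
suffixes only vary the ordering, not the arithmetic.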




========================================================================
====          xen/include/asm-arm/arm64/atomic_ll_sc.h              ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes:
 *      - Rename header include guard to reflect Xen directory structure
 *      - Redirect includes to Xen equivalents
 *      - Drop atomic64_t helper definitions
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * Copyright (C) 1996 Russell King.
 * Copyright (C) 2002 Deep Blue Solutions Ltd.
 * Copyright (C) 2012 ARM Ltd.
 * SPDX-License-Identifier: GPL-2.0-only
 */

#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H

#include <xen/stringify.h>

#ifdef CONFIG_ARM64_LSE_ATOMICS
#define __LL_SC_FALLBACK(asm_ops)           \
"    b    3f\n"                             \
"    .subsection    1\n"                    \
"3:\n"                                      \
asm_ops "\n"                                \
"    b    4f\n"                             \
"    .previous\n"                           \
"4:\n"
#else
#define __LL_SC_FALLBACK(asm_ops) asm_ops
#endif

#ifndef CONFIG_CC_HAS_K_CONSTRAINT
#define K
#endif

/*
 * AArch64 UP and SMP safe atomic ops.  We use load exclusive and
 * store exclusive to ensure that these are atomic.  We may loop
 * to ensure that the update happens.
 */

#define ATOMIC_OP(op, asm_op, constraint)                               \
static inline void                                                      \
__ll_sc_atomic_##op(int i, atomic_t *v)                                 \
{                                                                       \
    unsigned long tmp;                                                  \
    int result;                                                         \
                                                                        \
    asm volatile("// atomic_" #op "\n"                                  \
    __LL_SC_FALLBACK(                                                   \
"    prfm    pstl1strm, %2\n"                                           \
"1:    ldxr    %w0, %2\n"                                               \
"    " #asm_op "    %w0, %w0, %w3\n"                                    \
"    stxr    %w1, %w0, %2\n"                                            \
"    cbnz    %w1, 1b\n")                                                \
    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)                    \
    : __stringify(constraint) "r" (i));                                 \
}

#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
static inline int                                                       \
__ll_sc_atomic_##op##_return##name(int i, atomic_t *v)                  \
{                                                                       \
    unsigned long tmp;                                                  \
    int result;                                                         \
                                                                        \
    asm volatile("// atomic_" #op "_return" #name "\n"                  \
    __LL_SC_FALLBACK(                                                   \
"    prfm    pstl1strm, %2\n"                                           \
"1:    ld" #acq "xr    %w0, %2\n"                                       \
"    " #asm_op "    %w0, %w0, %w3\n"                                    \
"    st" #rel "xr    %w1, %w0, %2\n"                                    \
"    cbnz    %w1, 1b\n"                                                 \
"    " #mb )                                                            \
    : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)                    \
    : __stringify(constraint) "r" (i)                                   \
    : cl);                                                              \
                                                                        \
    return result;                                                      \
}

#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \
static inline int                                                       \
__ll_sc_atomic_fetch_##op##name(int i, atomic_t *v)                     \
{                                                                       \
    unsigned long tmp;                                                  \
    int val, result;                                                    \
                                                                        \
    asm volatile("// atomic_fetch_" #op #name "\n"                      \
    __LL_SC_FALLBACK(                                                   \
"    prfm    pstl1strm, %3\n"                                           \
"1:    ld" #acq "xr    %w0, %3\n"                                       \
"    " #asm_op "    %w1, %w0, %w4\n"                                    \
"    st" #rel "xr    %w2, %w1, %3\n"                                    \
"    cbnz    %w2, 1b\n"                                                 \
"    " #mb )                                                            \
    : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)       \
    : __stringify(constraint) "r" (i)                                   \
    : cl);                                                              \
                                                                        \
    return result;                                                      \
}

#define ATOMIC_OPS(...)                                                 \
    ATOMIC_OP(__VA_ARGS__)                                              \
    ATOMIC_OP_RETURN(        , dmb ish,  , l, "memory", __VA_ARGS__)    \
    ATOMIC_OP_RETURN(_relaxed,        ,  ,  ,         , __VA_ARGS__)    \
    ATOMIC_OP_RETURN(_acquire,        , a,  , "memory", __VA_ARGS__)    \
    ATOMIC_OP_RETURN(_release,        ,  , l, "memory", __VA_ARGS__)    \
    ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)    \
    ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)    \
    ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)    \
    ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)

ATOMIC_OPS(add, add, I)
ATOMIC_OPS(sub, sub, J)

#undef ATOMIC_OPS
#define ATOMIC_OPS(...)                                                 \
    ATOMIC_OP(__VA_ARGS__)                                              \
    ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)    \
    ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)    \
    ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)    \
    ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)

ATOMIC_OPS(and, and, K)
ATOMIC_OPS(or, orr, K)
ATOMIC_OPS(xor, eor, K)
/*
 * GAS converts the mysterious and undocumented BIC (immediate) alias to
 * an AND (immediate) instruction with the immediate inverted. We don't
 * have a constraint for this, so fall back to register.
 */
ATOMIC_OPS(andnot, bic, )

#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP

#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint)  \
static inline u##sz                                                     \
__ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,                    \
                     unsigned long old,                                 \
                     u##sz new)                                         \
{                                                                       \
    unsigned long tmp;                                                  \
    u##sz oldval;                                                       \
                                                                        \
    /*                                                                  \
     * Sub-word sizes require explicit casting so that the compare      \
     * part of the cmpxchg doesn't end up interpreting non-zero         \
     * upper bits of the register containing "old".                     \
     */                                                                 \
    if (sz < 32)                                                        \
        old = (u##sz)old;                                               \
                                                                        \
    asm volatile(                                                       \
    __LL_SC_FALLBACK(                                                   \
    "    prfm    pstl1strm, %[v]\n"                                     \
    "1:    ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n"               \
    "    eor    %" #w "[tmp], %" #w "[oldval], %" #w "[old]\n"          \
    "    cbnz    %" #w "[tmp], 2f\n"                                    \
    "    st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n"           \
    "    cbnz    %w[tmp], 1b\n"                                         \
    "    " #mb "\n"                                                     \
    "2:")                                                               \
    : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval),                       \
      [v] "+Q" (*(u##sz *)ptr)                                          \
    : [old] __stringify(constraint) "r" (old), [new] "r" (new)          \
    : cl);                                                              \
                                                                        \
    return oldval;                                                      \
}

/*
 * Earlier versions of GCC (no later than 8.1.0) appear to incorrectly
 * handle the 'K' constraint for the value 4294967295 - thus we use no
 * constraint for 32 bit operations.
 */
__CMPXCHG_CASE(w, b,     ,  8,        ,  ,  ,         , K)
__CMPXCHG_CASE(w, h,     , 16,        ,  ,  ,         , K)
__CMPXCHG_CASE(w,  ,     , 32,        ,  ,  ,         , K)
__CMPXCHG_CASE( ,  ,     , 64,        ,  ,  ,         , L)
__CMPXCHG_CASE(w, b, acq_,  8,        , a,  , "memory", K)
__CMPXCHG_CASE(w, h, acq_, 16,        , a,  , "memory", K)
__CMPXCHG_CASE(w,  , acq_, 32,        , a,  , "memory", K)
__CMPXCHG_CASE( ,  , acq_, 64,        , a,  , "memory", L)
__CMPXCHG_CASE(w, b, rel_,  8,        ,  , l, "memory", K)
__CMPXCHG_CASE(w, h, rel_, 16,        ,  , l, "memory", K)
__CMPXCHG_CASE(w,  , rel_, 32,        ,  , l, "memory", K)
__CMPXCHG_CASE( ,  , rel_, 64,        ,  , l, "memory", L)
__CMPXCHG_CASE(w, b,  mb_,  8, dmb ish,  , l, "memory", K)
__CMPXCHG_CASE(w, h,  mb_, 16, dmb ish,  , l, "memory", K)
__CMPXCHG_CASE(w,  ,  mb_, 32, dmb ish,  , l, "memory", K)
__CMPXCHG_CASE( ,  ,  mb_, 64, dmb ish,  , l, "memory", L)

#undef __CMPXCHG_CASE

#define __CMPXCHG_DBL(name, mb, rel, cl)                                \
static inline long                                                      \
__ll_sc__cmpxchg_double##name(unsigned long old1,                       \
                      unsigned long old2,                               \
                      unsigned long new1,                               \
                      unsigned long new2,                               \
                      volatile void *ptr)                               \
{                                                                       \
    unsigned long tmp, ret;                                             \
                                                                        \
    asm volatile("// __cmpxchg_double" #name "\n"                       \
    __LL_SC_FALLBACK(                                                   \
    "    prfm    pstl1strm, %2\n"                                       \
    "1:    ldxp    %0, %1, %2\n"                                        \
    "    eor    %0, %0, %3\n"                                           \
    "    eor    %1, %1, %4\n"                                           \
    "    orr    %1, %0, %1\n"                                           \
    "    cbnz    %1, 2f\n"                                              \
    "    st" #rel "xp    %w0, %5, %6, %2\n"                             \
    "    cbnz    %w0, 1b\n"                                             \
    "    " #mb "\n"                                                     \
    "2:")                                                               \
    : "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)            \
    : "r" (old1), "r" (old2), "r" (new1), "r" (new2)                    \
    : cl);                                                              \
                                                                        \
    return ret;                                                         \
}

__CMPXCHG_DBL(   ,        ,  ,         )
__CMPXCHG_DBL(_mb, dmb ish, l, "memory")

#undef __CMPXCHG_DBL
#undef K

#endif    /* __ASM_ARM_ARM64_ATOMIC_LL_SC_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
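One subtlety worth flagging for review is the sub-word cast in
__CMPXCHG_CASE above: ldxrb/ldxrh zero-extend the loaded value into a
full W register, so stale upper bits in the incoming 'old' argument
would make the comparison spuriously fail. Here is a plain-C simulation
of the 8-bit case (illustrative only; simulated_cmpxchg_u8 is a made-up
name, not part of the header):

```c
#include <stdint.h>

/* Plain-C simulation (not the real LL/SC asm) of an 8-bit cmpxchg,
 * showing why __CMPXCHG_CASE casts 'old' for sub-word sizes. */
static uint8_t simulated_cmpxchg_u8(volatile uint8_t *ptr,
                                    unsigned long old, uint8_t new)
{
    /* ldxrb zero-extends the loaded byte into a wide register */
    unsigned long loaded = *ptr;

    /* the cast the macro performs: without it, non-zero upper bits
     * of 'old' could never match the zero-extended byte */
    old = (uint8_t)old;

    if (loaded == old)
        *ptr = new;          /* stxrb succeeds (no contention here) */

    return (uint8_t)loaded;
}
```

Without the cast, a caller passing old = 0x1234 would never match a
byte in memory holding 0x34, and the store would silently be skipped
even though the low byte compares equal.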




========================================================================
====           xen/include/asm-arm/arm64/atomic_lse.h               ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes:
 *      - Rename header include guard to reflect Xen directory structure
 *      - Drop atomic64_t helper definitions
 *      - Switch __always_inline qualifier to always_inline
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * Copyright (C) 1996 Russell King.
 * Copyright (C) 2002 Deep Blue Solutions Ltd.
 * Copyright (C) 2012 ARM Ltd.
 * SPDX-License-Identifier: GPL-2.0-only
 */

#ifndef __ASM_ARM_ARM64_ATOMIC_LSE_H
#define __ASM_ARM_ARM64_ATOMIC_LSE_H

#define ATOMIC_OP(op, asm_op)                                           \
static inline void __lse_atomic_##op(int i, atomic_t *v)                \
{                                                                       \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
"    " #asm_op "    %w[i], %[v]\n"                                      \
    : [i] "+r" (i), [v] "+Q" (v->counter)                               \
    : "r" (v));                                                         \
}

ATOMIC_OP(andnot, stclr)
ATOMIC_OP(or, stset)
ATOMIC_OP(xor, steor)
ATOMIC_OP(add, stadd)

#undef ATOMIC_OP

#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...)                    \
static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v)     \
{                                                                       \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
"    " #asm_op #mb "    %w[i], %w[i], %[v]"                             \
    : [i] "+r" (i), [v] "+Q" (v->counter)                               \
    : "r" (v)                                                           \
    : cl);                                                              \
                                                                        \
    return i;                                                           \
}

#define ATOMIC_FETCH_OPS(op, asm_op)                                    \
    ATOMIC_FETCH_OP(_relaxed,   , op, asm_op)                           \
    ATOMIC_FETCH_OP(_acquire,  a, op, asm_op, "memory")                 \
    ATOMIC_FETCH_OP(_release,  l, op, asm_op, "memory")                 \
    ATOMIC_FETCH_OP(        , al, op, asm_op, "memory")

ATOMIC_FETCH_OPS(andnot, ldclr)
ATOMIC_FETCH_OPS(or, ldset)
ATOMIC_FETCH_OPS(xor, ldeor)
ATOMIC_FETCH_OPS(add, ldadd)

#undef ATOMIC_FETCH_OP
#undef ATOMIC_FETCH_OPS

#define ATOMIC_OP_ADD_RETURN(name, mb, cl...)                           \
static inline int __lse_atomic_add_return##name(int i, atomic_t *v)     \
{                                                                       \
    u32 tmp;                                                            \
                                                                        \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
    "    ldadd" #mb "    %w[i], %w[tmp], %[v]\n"                        \
    "    add    %w[i], %w[i], %w[tmp]"                                  \
    : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)            \
    : "r" (v)                                                           \
    : cl);                                                              \
                                                                        \
    return i;                                                           \
}

ATOMIC_OP_ADD_RETURN(_relaxed,   )
ATOMIC_OP_ADD_RETURN(_acquire,  a, "memory")
ATOMIC_OP_ADD_RETURN(_release,  l, "memory")
ATOMIC_OP_ADD_RETURN(        , al, "memory")

#undef ATOMIC_OP_ADD_RETURN

static inline void __lse_atomic_and(int i, atomic_t *v)
{
    asm volatile(
    __LSE_PREAMBLE
    "    mvn    %w[i], %w[i]\n"
    "    stclr    %w[i], %[v]"
    : [i] "+&r" (i), [v] "+Q" (v->counter)
    : "r" (v));
}

#define ATOMIC_FETCH_OP_AND(name, mb, cl...)                            \
static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v)      \
{                                                                       \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
    "    mvn    %w[i], %w[i]\n"                                         \
    "    ldclr" #mb "    %w[i], %w[i], %[v]"                            \
    : [i] "+&r" (i), [v] "+Q" (v->counter)                              \
    : "r" (v)                                                           \
    : cl);                                                              \
                                                                        \
    return i;                                                           \
}

ATOMIC_FETCH_OP_AND(_relaxed,   )
ATOMIC_FETCH_OP_AND(_acquire,  a, "memory")
ATOMIC_FETCH_OP_AND(_release,  l, "memory")
ATOMIC_FETCH_OP_AND(        , al, "memory")

#undef ATOMIC_FETCH_OP_AND

static inline void __lse_atomic_sub(int i, atomic_t *v)
{
    asm volatile(
    __LSE_PREAMBLE
    "    neg    %w[i], %w[i]\n"
    "    stadd    %w[i], %[v]"
    : [i] "+&r" (i), [v] "+Q" (v->counter)
    : "r" (v));
}

#define ATOMIC_OP_SUB_RETURN(name, mb, cl...)                           \
static inline int __lse_atomic_sub_return##name(int i, atomic_t *v)     \
{                                                                       \
    u32 tmp;                                                            \
                                                                        \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
    "    neg    %w[i], %w[i]\n"                                         \
    "    ldadd" #mb "    %w[i], %w[tmp], %[v]\n"                        \
    "    add    %w[i], %w[i], %w[tmp]"                                  \
    : [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)           \
    : "r" (v)                                                           \
    : cl);                                                              \
                                                                        \
    return i;                                                           \
}

ATOMIC_OP_SUB_RETURN(_relaxed,   )
ATOMIC_OP_SUB_RETURN(_acquire,  a, "memory")
ATOMIC_OP_SUB_RETURN(_release,  l, "memory")
ATOMIC_OP_SUB_RETURN(        , al, "memory")

#undef ATOMIC_OP_SUB_RETURN

#define ATOMIC_FETCH_OP_SUB(name, mb, cl...)                            \
static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v)      \
{                                                                       \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
    "    neg    %w[i], %w[i]\n"                                         \
    "    ldadd" #mb "    %w[i], %w[i], %[v]"                            \
    : [i] "+&r" (i), [v] "+Q" (v->counter)                              \
    : "r" (v)                                                           \
    : cl);                                                              \
                                                                        \
    return i;                                                           \
}

ATOMIC_FETCH_OP_SUB(_relaxed,   )
ATOMIC_FETCH_OP_SUB(_acquire,  a, "memory")
ATOMIC_FETCH_OP_SUB(_release,  l, "memory")
ATOMIC_FETCH_OP_SUB(        , al, "memory")

#undef ATOMIC_FETCH_OP_SUB

#define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...)                     \
static always_inline u##sz                                              \
__lse__cmpxchg_case_##name##sz(volatile void *ptr,                      \
                          u##sz old,                                    \
                          u##sz new)                                    \
{                                                                       \
    register unsigned long x0 asm ("x0") = (unsigned long)ptr;          \
    register u##sz x1 asm ("x1") = old;                                 \
    register u##sz x2 asm ("x2") = new;                                 \
    unsigned long tmp;                                                  \
                                                                        \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
    "    mov    %" #w "[tmp], %" #w "[old]\n"                           \
    "    cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n"           \
    "    mov    %" #w "[ret], %" #w "[tmp]"                             \
    : [ret] "+r" (x0), [v] "+Q" (*(unsigned long *)ptr),                \
      [tmp] "=&r" (tmp)                                                 \
    : [old] "r" (x1), [new] "r" (x2)                                    \
    : cl);                                                              \
                                                                        \
    return x0;                                                          \
}

__CMPXCHG_CASE(w, b,     ,  8,   )
__CMPXCHG_CASE(w, h,     , 16,   )
__CMPXCHG_CASE(w,  ,     , 32,   )
__CMPXCHG_CASE(x,  ,     , 64,   )
__CMPXCHG_CASE(w, b, acq_,  8,  a, "memory")
__CMPXCHG_CASE(w, h, acq_, 16,  a, "memory")
__CMPXCHG_CASE(w,  , acq_, 32,  a, "memory")
__CMPXCHG_CASE(x,  , acq_, 64,  a, "memory")
__CMPXCHG_CASE(w, b, rel_,  8,  l, "memory")
__CMPXCHG_CASE(w, h, rel_, 16,  l, "memory")
__CMPXCHG_CASE(w,  , rel_, 32,  l, "memory")
__CMPXCHG_CASE(x,  , rel_, 64,  l, "memory")
__CMPXCHG_CASE(w, b,  mb_,  8, al, "memory")
__CMPXCHG_CASE(w, h,  mb_, 16, al, "memory")
__CMPXCHG_CASE(w,  ,  mb_, 32, al, "memory")
__CMPXCHG_CASE(x,  ,  mb_, 64, al, "memory")

#undef __CMPXCHG_CASE

#define __CMPXCHG_DBL(name, mb, cl...)                                  \
static always_inline long                                               \
__lse__cmpxchg_double##name(unsigned long old1,                         \
                     unsigned long old2,                                \
                     unsigned long new1,                                \
                     unsigned long new2,                                \
                     volatile void *ptr)                                \
{                                                                       \
    unsigned long oldval1 = old1;                                       \
    unsigned long oldval2 = old2;                                       \
    register unsigned long x0 asm ("x0") = old1;                        \
    register unsigned long x1 asm ("x1") = old2;                        \
    register unsigned long x2 asm ("x2") = new1;                        \
    register unsigned long x3 asm ("x3") = new2;                        \
    register unsigned long x4 asm ("x4") = (unsigned long)ptr;          \
                                                                        \
    asm volatile(                                                       \
    __LSE_PREAMBLE                                                      \
    "    casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"       \
    "    eor    %[old1], %[old1], %[oldval1]\n"                         \
    "    eor    %[old2], %[old2], %[oldval2]\n"                         \
    "    orr    %[old1], %[old1], %[old2]"                              \
    : [old1] "+&r" (x0), [old2] "+&r" (x1),                             \
      [v] "+Q" (*(unsigned long *)ptr)                                  \
    : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),                 \
      [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)                  \
    : cl);                                                              \
                                                                        \
    return x0;                                                          \
}

__CMPXCHG_DBL(   ,   )
__CMPXCHG_DBL(_mb, al, "memory")

#undef __CMPXCHG_DBL

#endif    /* __ASM_ARM_ARM64_ATOMIC_LSE_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
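Because LSE has no direct atomic-AND instruction, `__lse_atomic_and()` and the `ATOMIC_FETCH_OP_AND` variants above invert the operand with `mvn` and then use `ldclr`/`stclr` (atomic bit-clear). A non-atomic C model of that two-step trick (illustrative only, names are stand-ins):

```c
#include <assert.h>

/*
 * NON-atomic model of the mvn + ldclr sequence used by
 * __lse_atomic_fetch_and() above. LSE provides LDCLR (clear the bits
 * that are set in the operand, returning the old value), so an AND
 * with i is expressed as a clear of ~i.
 */
static int model_fetch_and(int i, int *counter)
{
    int inverted = ~i;            /* mvn  %w[i], %w[i]              */
    int old = *counter;           /* ldclr returns the old value... */

    *counter = old & ~inverted;   /* ...and clears the set bits     */
    return old;
}
```

The net effect is `*counter &= i` with the pre-update value returned, exactly the fetch-and contract.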




========================================================================
====             xen/include/asm-arm/arm64/cmpxchg.h                ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes:
 *      - Rename header include guard to reflect Xen directory structure
 *      - Drop redundant includes and redirect others to Xen equivalents
 *      - Rename definitions from arch_xchg_<qual>() to xchg_<qual>()
 *      - Switch __always_inline qualifier to always_inline
 *      - Switch usage of BUILD_BUG() to returning __bad_cmpxchg()
 *      - Pull in original Xen arm64 cmpxchg.h definitions of
 *           cmpxchg_timeout*() and cmpxchg64_timeout*() as these are not
 *           provided by Linux and are required for Xen's guest atomics
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * Copyright (C) 2012 ARM Ltd.
 * SPDX-License-Identifier: GPL-2.0-only
 */
#ifndef __ASM_ARM_ARM64_CMPXCHG_H
#define __ASM_ARM_ARM64_CMPXCHG_H

#include <asm/bug.h>
#include "lse.h"

extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);

/*
 * We need separate acquire parameters for ll/sc and lse, since the full
 * barrier case is generated as release+dmb for the former and
 * acquire+release for the latter.
 */
#define __XCHG_CASE(w, sfx, name, sz, mb, nop_lse, acq, acq_lse, rel, cl)   \
static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr)     \
{                                                                           \
    u##sz ret;                                                              \
    unsigned long tmp;                                                      \
                                                                            \
    asm volatile(ARM64_LSE_ATOMIC_INSN(                                     \
    /* LL/SC */                                                             \
    "    prfm    pstl1strm, %2\n"                                           \
    "1:    ld" #acq "xr" #sfx "\t%" #w "0, %2\n"                            \
    "    st" #rel "xr" #sfx "\t%w1, %" #w "3, %2\n"                         \
    "    cbnz    %w1, 1b\n"                                                 \
    "    " #mb,                                                             \
    /* LSE atomics */                                                       \
    "    swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n"               \
    "    nop\n"                                                             \
    "    nop\n"                                                             \
    "    nop\n"                                                             \
    "    " #nop_lse)                                                        \
    : "=&r" (ret), "=&r" (tmp), "+Q" (*(u##sz *)ptr)                        \
    : "r" (x)                                                               \
    : cl);                                                                  \
                                                                            \
    return ret;                                                             \
}

__XCHG_CASE(w, b,     ,  8,        ,    ,  ,  ,  ,         )
__XCHG_CASE(w, h,     , 16,        ,    ,  ,  ,  ,         )
__XCHG_CASE(w,  ,     , 32,        ,    ,  ,  ,  ,         )
__XCHG_CASE( ,  ,     , 64,        ,    ,  ,  ,  ,         )
__XCHG_CASE(w, b, acq_,  8,        ,    , a, a,  , "memory")
__XCHG_CASE(w, h, acq_, 16,        ,    , a, a,  , "memory")
__XCHG_CASE(w,  , acq_, 32,        ,    , a, a,  , "memory")
__XCHG_CASE( ,  , acq_, 64,        ,    , a, a,  , "memory")
__XCHG_CASE(w, b, rel_,  8,        ,    ,  ,  , l, "memory")
__XCHG_CASE(w, h, rel_, 16,        ,    ,  ,  , l, "memory")
__XCHG_CASE(w,  , rel_, 32,        ,    ,  ,  , l, "memory")
__XCHG_CASE( ,  , rel_, 64,        ,    ,  ,  , l, "memory")
__XCHG_CASE(w, b,  mb_,  8, dmb ish, nop,  , a, l, "memory")
__XCHG_CASE(w, h,  mb_, 16, dmb ish, nop,  , a, l, "memory")
__XCHG_CASE(w,  ,  mb_, 32, dmb ish, nop,  , a, l, "memory")
__XCHG_CASE( ,  ,  mb_, 64, dmb ish, nop,  , a, l, "memory")

#undef __XCHG_CASE

#define __XCHG_GEN(sfx)                                                 \
static always_inline  unsigned long __xchg##sfx(unsigned long x,        \
                    volatile void *ptr,                                 \
                    int size)                                           \
{                                                                       \
    switch (size) {                                                     \
    case 1:                                                             \
        return __xchg_case##sfx##_8(x, ptr);                            \
    case 2:                                                             \
        return __xchg_case##sfx##_16(x, ptr);                           \
    case 4:                                                             \
        return __xchg_case##sfx##_32(x, ptr);                           \
    case 8:                                                             \
        return __xchg_case##sfx##_64(x, ptr);                           \
    default:                                                            \
        return __bad_cmpxchg(ptr, size);                                \
    }                                                                   \
                                                                        \
    unreachable();                                                      \
}

__XCHG_GEN()
__XCHG_GEN(_acq)
__XCHG_GEN(_rel)
__XCHG_GEN(_mb)

#undef __XCHG_GEN

#define __xchg_wrapper(sfx, ptr, x)                                     \
({                                                                      \
    __typeof__(*(ptr)) __ret;                                           \
    __ret = (__typeof__(*(ptr)))                                        \
        __xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr)));         \
    __ret;                                                              \
})

/* xchg */
#define xchg_relaxed(...)    __xchg_wrapper(    , __VA_ARGS__)
#define xchg_acquire(...)    __xchg_wrapper(_acq, __VA_ARGS__)
#define xchg_release(...)    __xchg_wrapper(_rel, __VA_ARGS__)
#define xchg(...)        __xchg_wrapper( _mb, __VA_ARGS__)

#define __CMPXCHG_CASE(name, sz)                                        \
static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr,       \
                          u##sz old,                                    \
                          u##sz new)                                    \
{                                                                       \
    return __lse_ll_sc_body(_cmpxchg_case_##name##sz,                   \
                ptr, old, new);                                         \
}

__CMPXCHG_CASE(    ,  8)
__CMPXCHG_CASE(    , 16)
__CMPXCHG_CASE(    , 32)
__CMPXCHG_CASE(    , 64)
__CMPXCHG_CASE(acq_,  8)
__CMPXCHG_CASE(acq_, 16)
__CMPXCHG_CASE(acq_, 32)
__CMPXCHG_CASE(acq_, 64)
__CMPXCHG_CASE(rel_,  8)
__CMPXCHG_CASE(rel_, 16)
__CMPXCHG_CASE(rel_, 32)
__CMPXCHG_CASE(rel_, 64)
__CMPXCHG_CASE(mb_,  8)
__CMPXCHG_CASE(mb_, 16)
__CMPXCHG_CASE(mb_, 32)
__CMPXCHG_CASE(mb_, 64)

#undef __CMPXCHG_CASE

#define __CMPXCHG_DBL(name)                                             \
static inline long __cmpxchg_double##name(unsigned long old1,           \
                     unsigned long old2,                                \
                     unsigned long new1,                                \
                     unsigned long new2,                                \
                     volatile void *ptr)                                \
{                                                                       \
    return __lse_ll_sc_body(_cmpxchg_double##name,                      \
                old1, old2, new1, new2, ptr);                           \
}

__CMPXCHG_DBL(   )
__CMPXCHG_DBL(_mb)

#undef __CMPXCHG_DBL

#define __CMPXCHG_GEN(sfx)                                              \
static always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,   \
                       unsigned long old,                               \
                       unsigned long new,                               \
                       int size)                                        \
{                                                                       \
    switch (size) {                                                     \
    case 1:                                                             \
        return __cmpxchg_case##sfx##_8(ptr, old, new);                  \
    case 2:                                                             \
        return __cmpxchg_case##sfx##_16(ptr, old, new);                 \
    case 4:                                                             \
        return __cmpxchg_case##sfx##_32(ptr, old, new);                 \
    case 8:                                                             \
        return __cmpxchg_case##sfx##_64(ptr, old, new);                 \
    default:                                                            \
        return __bad_cmpxchg(ptr, size);                                \
    }                                                                   \
                                                                        \
    unreachable();                                                      \
}

__CMPXCHG_GEN()
__CMPXCHG_GEN(_acq)
__CMPXCHG_GEN(_rel)
__CMPXCHG_GEN(_mb)

#undef __CMPXCHG_GEN

#define __cmpxchg_wrapper(sfx, ptr, o, n)                               \
({                                                                      \
    __typeof__(*(ptr)) __ret;                                           \
    __ret = (__typeof__(*(ptr)))                                        \
        __cmpxchg##sfx((ptr), (unsigned long)(o),                       \
                (unsigned long)(n), sizeof(*(ptr)));                    \
    __ret;                                                              \
})

/* cmpxchg */
#define cmpxchg_relaxed(...)    __cmpxchg_wrapper(    , __VA_ARGS__)
#define cmpxchg_acquire(...)    __cmpxchg_wrapper(_acq, __VA_ARGS__)
#define cmpxchg_release(...)    __cmpxchg_wrapper(_rel, __VA_ARGS__)
#define cmpxchg(...)        __cmpxchg_wrapper( _mb, __VA_ARGS__)
#define cmpxchg_local        cmpxchg_relaxed

/* cmpxchg64 */
#define cmpxchg64_relaxed        cmpxchg_relaxed
#define cmpxchg64_acquire        cmpxchg_acquire
#define cmpxchg64_release        cmpxchg_release
#define cmpxchg64            cmpxchg
#define cmpxchg64_local        cmpxchg_local

/* cmpxchg_double */
#define system_has_cmpxchg_double()     1

#define __cmpxchg_double_check(ptr1, ptr2)                              \
({                                                                      \
    if (sizeof(*(ptr1)) != 8)                                           \
        (void)__bad_cmpxchg((ptr1), sizeof(*(ptr1)));                   \
    VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1);  \
})

#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)                          \
({                                                                          \
    int __ret;                                                              \
    __cmpxchg_double_check(ptr1, ptr2);                                     \
    __ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2),  \
                     (unsigned long)(n1), (unsigned long)(n2),              \
                     ptr1);                                                 \
    __ret;                                                                  \
})

#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2)                    \
({                                                                          \
    int __ret;                                                              \
    __cmpxchg_double_check(ptr1, ptr2);                                     \
    __ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2),     \
                  (unsigned long)(n1), (unsigned long)(n2),                 \
                  ptr1);                                                    \
    __ret;                                                                  \
})

#define __CMPWAIT_CASE(w, sfx, sz)                                      \
static inline void __cmpwait_case_##sz(volatile void *ptr,              \
                       unsigned long val)                               \
{                                                                       \
    unsigned long tmp;                                                  \
                                                                        \
    asm volatile(                                                       \
    "    sevl\n"                                                        \
    "    wfe\n"                                                         \
    "    ldxr" #sfx "\t%" #w "[tmp], %[v]\n"                            \
    "    eor    %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n"             \
    "    cbnz    %" #w "[tmp], 1f\n"                                    \
    "    wfe\n"                                                         \
    "1:"                                                                \
    : [tmp] "=&r" (tmp), [v] "+Q" (*(unsigned long *)ptr)               \
    : [val] "r" (val));                                                 \
}

__CMPWAIT_CASE(w, b, 8);
__CMPWAIT_CASE(w, h, 16);
__CMPWAIT_CASE(w,  , 32);
__CMPWAIT_CASE( ,  , 64);

#undef __CMPWAIT_CASE

#define __CMPWAIT_GEN(sfx)                                              \
static always_inline void __cmpwait##sfx(volatile void *ptr,            \
                  unsigned long val,                                    \
                  int size)                                             \
{                                                                       \
    switch (size) {                                                     \
    case 1:                                                             \
        return __cmpwait_case##sfx##_8(ptr, (u8)val);                   \
    case 2:                                                             \
        return __cmpwait_case##sfx##_16(ptr, (u16)val);                 \
    case 4:                                                             \
        return __cmpwait_case##sfx##_32(ptr, val);                      \
    case 8:                                                             \
        return __cmpwait_case##sfx##_64(ptr, val);                      \
    default:                                                            \
        __bad_cmpxchg(ptr, size);                                       \
    }                                                                   \
                                                                        \
    unreachable();                                                      \
}

__CMPWAIT_GEN()

#undef __CMPWAIT_GEN

#define __cmpwait_relaxed(ptr, val) \
    __cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))

/*
 * This code is from the original Xen arm64 cmpxchg.h, from before the
 * Linux 5.10-rc2 atomics helpers were ported over. The only changes
 * here are renaming the macros and functions to explicitly use
 * "timeout" in their names so that they don't clash with the above.
 *
 * We need this here for guest atomics (the only user of the timeout
 * variants).
 */

#define __CMPXCHG_TIMEOUT_CASE(w, sz, name)                             \
static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,    \
                                         unsigned long *old,            \
                                         unsigned long new,             \
                                         bool timeout,                  \
                                         unsigned int max_try)          \
{                                                                       \
        unsigned long oldval;                                           \
        unsigned long res;                                              \
                                                                        \
        do {                                                            \
                asm volatile("// __cmpxchg_timeout_case_" #name "\n"    \
                "       ldxr" #sz "     %" #w "1, %2\n"                 \
                "       mov     %w0, #0\n"                              \
                "       cmp     %" #w "1, %" #w "3\n"                   \
                "       b.ne    1f\n"                                   \
                "       stxr" #sz "     %w0, %" #w "4, %2\n"            \
                "1:\n"                                                  \
                : "=&r" (res), "=&r" (oldval),                          \
                  "+Q" (*(unsigned long *)ptr)                          \
                : "Ir" (*old), "r" (new)                                \
                : "cc");                                                \
                                                                        \
                if (!res)                                               \
                        break;                                          \
        } while (!timeout || ((--max_try) > 0));                        \
                                                                        \
        *old = oldval;                                                  \
                                                                        \
        return !res;                                                    \
}

__CMPXCHG_TIMEOUT_CASE(w, b, 1)
__CMPXCHG_TIMEOUT_CASE(w, h, 2)
__CMPXCHG_TIMEOUT_CASE(w,  , 4)
__CMPXCHG_TIMEOUT_CASE( ,  , 8)

static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
                                        unsigned long new, int size,
                                        bool timeout, unsigned int max_try)
{
        switch (size) {
        case 1:
                return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
        case 2:
                return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
        case 4:
                return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
        case 8:
                return __cmpxchg_timeout_case_8(ptr, old, new, timeout, max_try);
        default:
                return __bad_cmpxchg(ptr, size);
        }

        ASSERT_UNREACHABLE();
}

/*
 * The helper may fail to update the memory if the action takes too long.
 *
 * @old: On entry, the location pointed to contains the expected old
 * value. On exit, it is updated with the actual old value read from
 * memory.
 * @max_try: Maximum number of iterations
 *
 * The helper returns true when the update has succeeded (i.e. no
 * timeout) and false when it has failed.
 */
static always_inline bool __cmpxchg_timeout(volatile void *ptr,
                                            unsigned long *old,
                                            unsigned long new,
                                            int size,
                                            unsigned int max_try)
{
        bool ret;

        smp_mb();
        ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
        smp_mb();

        return ret;
}

#define __cmpxchg64_timeout(ptr, old, new, max_try)     \
        __cmpxchg_timeout(ptr, old, new, 8, max_try)


#endif    /* __ASM_ARM_ARM64_CMPXCHG_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
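The `__cmpxchg_timeout()` contract described above (update `*old` with the value actually observed, return true only if the store went through) can be sketched as a non-atomic C model. This is purely illustrative; `max_try` is ignored because a plain store cannot spuriously fail the way a bounded LL/SC loop can:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * NON-atomic model of the __cmpxchg_timeout() contract defined above.
 * On entry *old holds the expected value; on exit it holds the value
 * actually found in memory, whether or not the update succeeded.
 */
static bool model_cmpxchg_timeout(unsigned long *mem, unsigned long *old,
                                  unsigned long new, unsigned int max_try)
{
    unsigned long cur = *mem;
    bool success = (cur == *old);

    (void)max_try;               /* a plain store cannot time out */

    if ( success )
        *mem = new;

    *old = cur;                  /* caller learns the actual old value */

    return success;
}
```

This "report the observed value either way" behaviour is what lets the guest-atomics code retry with a refreshed expectation after a failed attempt.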




========================================================================
====               xen/include/asm-arm/arm64/lse.h                  ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes:
 *      - Rename header include guard to reflect Xen directory structure
 *      - Drop redundant includes and redirect others to Xen equivalents
 *      - Modify hwcap check to use cpus_have_cap()
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * SPDX-License-Identifier: GPL-2.0
 */
#ifndef __ASM_ARM_ARM64_LSE_H
#define __ASM_ARM_ARM64_LSE_H

#include "atomic_ll_sc.h"

#ifdef CONFIG_ARM64_LSE_ATOMICS

#define __LSE_PREAMBLE    ".arch_extension lse\n"

#include <xen/compiler.h>
#include <xen/stringify.h>
#include <xen/types.h>

#include <asm/alternative.h>

#include "atomic_lse.h"

static inline bool system_uses_lse_atomics(void)
{
    return cpus_have_cap(ARM64_HAS_LSE_ATOMICS);
}

#define __lse_ll_sc_body(op, ...)           \
({                                          \
    system_uses_lse_atomics() ?             \
        __lse_##op(__VA_ARGS__) :           \
        __ll_sc_##op(__VA_ARGS__);          \
})

/* In-line patching at runtime */
#define ARM64_LSE_ATOMIC_INSN(llsc, lse)    \
    ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)

#else    /* CONFIG_ARM64_LSE_ATOMICS */

static inline bool system_uses_lse_atomics(void) { return false; }

#define __lse_ll_sc_body(op, ...)        __ll_sc_##op(__VA_ARGS__)

#define ARM64_LSE_ATOMIC_INSN(llsc, lse)    llsc

#endif    /* CONFIG_ARM64_LSE_ATOMICS */
#endif    /* __ASM_ARM_ARM64_LSE_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
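For readers unfamiliar with the pattern, the `__lse_ll_sc_body()` macro above selects between two implementations of the same operation at runtime, keyed on a CPU capability flag, by token-pasting the op name onto a per-variant prefix. A minimal C sketch of that dispatch shape (all names here are hypothetical, and both "variants" are trivial stand-ins):

```c
#include <stdbool.h>

/* Stands in for cpus_have_cap(ARM64_HAS_LSE_ATOMICS). */
static bool has_lse;

static int lse_add_op(int *p, int i)   { return *p += i; }  /* "LSE" path  */
static int ll_sc_add_op(int *p, int i) { return *p += i; }  /* LL/SC path  */

/* Same shape as __lse_ll_sc_body(): pick a variant by pasting a prefix. */
#define lse_ll_sc_body_sketch(op, ...) \
    (has_lse ? lse_##op(__VA_ARGS__) : ll_sc_##op(__VA_ARGS__))

static int add_op(int *p, int i)
{
    return lse_ll_sc_body_sketch(add_op, p, i);
}
```

In the real header the selection is cheaper than this ternary suggests: ARM64_LSE_ATOMIC_INSN() patches the instruction stream in place via ALTERNATIVE(), so the capability check disappears after boot.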









From xen-devel-bounces@lists.xenproject.org Fri Nov 06 10:55:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 10:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20628.46646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazOr-0000eS-Dr; Fri, 06 Nov 2020 10:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20628.46646; Fri, 06 Nov 2020 10:55:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazOr-0000eJ-AX; Fri, 06 Nov 2020 10:55:21 +0000
Received: by outflank-mailman (input) for mailman id 20628;
 Fri, 06 Nov 2020 10:55:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kazOp-0000dq-Bi
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 10:55:19 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce82ceec-ef34-4690-bb20-7adb824aad3c;
 Fri, 06 Nov 2020 10:55:17 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id n15so858676wrq.2
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 02:55:17 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id o7sm1645835wrp.23.2020.11.06.02.55.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 06 Nov 2020 02:55:15 -0800 (PST)
X-Inumbo-ID: ce82ceec-ef34-4690-bb20-7adb824aad3c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=dZJX2HXtknH9a8H0jD07Jiilcan+PIwNNe10FvbbF2Y=;
        b=OmsE7xQG76TJID1gtWTa+p9QFbUIZOqRZKIKeSHJqwtgPk0xVGSuhnjlJQmLvKK84X
         bplv1sVaa7URAbZkXDwTs+aOw/xfINcYNvzv0wGOJdJ/61Gfk8+hLJj40nHpNUs4C7gB
         HHa6CvkK7BK+MzQK4VYB+VBYi5rp0VO4+vHrTBl8w1NRd/898h9OU2bgJQDSO6fzTdsh
         QvAopmQvzY+o3ncMK7AYRT0Ns1BnXVCBbG9zo63eBpOvVtxAnng18WYoS85BNfe6587x
         5GTQwaS4lO1AMgIAG2UP0tqcFCEmw1T6QCYMjT1MklDpPjFO2uMj7PT7Z9TEGD7gB5wp
         2Ulw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=dZJX2HXtknH9a8H0jD07Jiilcan+PIwNNe10FvbbF2Y=;
        b=ZSqyYpfGKZSdT8kkLKuZGrThXOamLl69GLOP9oS4LemD83Z2xJiOTyRzvMX6nEnG2v
         P8LsA/M+zGhrfRQtq8hZyf9a3HCH+vpcscd+i8SaFS6jeaS2FkQyEuprJkLcLoAt220n
         vgVKKlLbPovfo/2m3NDqz+PyBpTzDZ15WZb2qOwhZ2QFXtJu8rybhoKSw3Lw+yo3rWki
         DpicpTWf/5/FBlJZ9+7t70zxXGPvgg8fvYw+Jx0zR40mOjQ+wxnxbxZCk9iiMWRDDXjN
         mTdvLpdSSzeZC894eH0gS/H5vD75ITY2ouwbat64uFqTX6TBuZPoQeiqDcTpce+FQ02X
         k1eg==
X-Gm-Message-State: AOAM532v+3DZsyf/aHF5Jok3vHV3AUKydmUNfpqxD8br4xBQ5HSI6YK2
	oNS+WJTKu0hUPFofCCuC8BWqtfCVvoI=
X-Google-Smtp-Source: ABdhPJwRg7O5kOiyupDbpcyg6AeyQdN8e8ZOvWW5nGk1jwIgipAs1X3iqM+p0cbX6T/2Jwrt5d0F6A==
X-Received: by 2002:a5d:6591:: with SMTP id q17mr1973976wru.173.1604660116271;
        Fri, 06 Nov 2020 02:55:16 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	julien@xen.org,
	rahul.singh@arm.com
Subject: RE: [RFC PATCH 5/6] xen/arm32: Port Linux LL/SC atomics helpers to Xen
Date: Fri,  6 Nov 2020 10:55:14 +0000
Message-Id: <20201106105514.55448-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201105185603.24149-6-ash.j.wilding@gmail.com>
References: <20201105185603.24149-6-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

As mentioned in my reply to patch #4 just now, in retrospect I should
have put an intermediate patch between #3 and #4, deleting the existing
headers. This would have made the patch diff for #4 and #5 much easier
to read seeing as they are copying the Linux versions into Xen.

I'll do that for V1 when we get there, but for now to aid in readability
I've pasted the complete header files below. While doing this I also
spent some time last night tidying them up to be in line with the Xen
coding style.

Thanks,
Ash.


========================================================================
====             xen/include/asm-arm/arm32/atomic.h                 ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes:
 *      - Drop redundant includes and redirect others to Xen equivalents
 *      - Rename header include guard to reflect Xen directory structure
 *      - Drop atomic64_t helper declarations
 *      - Drop pre-Armv6 support
 *      - Redirect READ_ONCE/WRITE_ONCE to __* equivalents in compiler.h
 *      - Add explicit atomic_add_return() and atomic_sub_return() as
 *           Linux doesn't define these for arm32. Here we just sandwich
 *           the atomic_<op>_return_relaxed() calls with smp_mb()s.
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * Copyright (C) 1996 Russell King.
 * Copyright (C) 2002 Deep Blue Solutions Ltd.
 * SPDX-License-Identifier: GPL-2.0-only
 */
#ifndef __ASM_ARM_ARM32_ATOMIC_H
#define __ASM_ARM_ARM32_ATOMIC_H

#include <xen/compiler.h>
#include <xen/prefetch.h>
#include <xen/types.h>
#include "system.h"
#include "cmpxchg.h"

/*
 * On ARM, ordinary assignment (str instruction) doesn't clear the local
 * strex/ldrex monitor on some implementations. The reason we can use it for
 * atomic_set() is the clrex or dummy strex done on every exception return.
 */
#define atomic_read(v)      __READ_ONCE((v)->counter)
#define atomic_set(v,i)     __WRITE_ONCE(((v)->counter), (i))

/*
 * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
 * store exclusive to ensure that these are atomic.  We may loop
 * to ensure that the update happens.
 */

#define ATOMIC_OP(op, c_op, asm_op)                                     \
static inline void atomic_##op(int i, atomic_t *v)                      \
{                                                                       \
    unsigned long tmp;                                                  \
    int result;                                                         \
                                                                        \
    prefetchw(&v->counter);                                             \
    __asm__ __volatile__("@ atomic_" #op "\n"                           \
"1:    ldrex    %0, [%3]\n"                                             \
"    " #asm_op "    %0, %0, %4\n"                                       \
"    strex    %1, %0, [%3]\n"                                           \
"    teq    %1, #0\n"                                                   \
"    bne    1b"                                                         \
    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)                   \
    : "r" (&v->counter), "Ir" (i)                                       \
    : "cc");                                                            \
}                                                                       \

#define ATOMIC_OP_RETURN(op, c_op, asm_op)                              \
static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)      \
{                                                                       \
    unsigned long tmp;                                                  \
    int result;                                                         \
                                                                        \
    prefetchw(&v->counter);                                             \
                                                                        \
    __asm__ __volatile__("@ atomic_" #op "_return\n"                    \
"1:    ldrex    %0, [%3]\n"                                             \
"    " #asm_op "    %0, %0, %4\n"                                       \
"    strex    %1, %0, [%3]\n"                                           \
"    teq    %1, #0\n"                                                   \
"    bne    1b"                                                         \
    : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)                   \
    : "r" (&v->counter), "Ir" (i)                                       \
    : "cc");                                                            \
                                                                        \
    return result;                                                      \
}

#define ATOMIC_FETCH_OP(op, c_op, asm_op)                               \
static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)       \
{                                                                       \
    unsigned long tmp;                                                  \
    int result, val;                                                    \
                                                                        \
    prefetchw(&v->counter);                                             \
                                                                        \
    __asm__ __volatile__("@ atomic_fetch_" #op "\n"                     \
"1:    ldrex    %0, [%4]\n"                                             \
"    " #asm_op "    %1, %0, %5\n"                                       \
"    strex    %2, %1, [%4]\n"                                           \
"    teq    %2, #0\n"                                                   \
"    bne    1b"                                                         \
    : "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter)      \
    : "r" (&v->counter), "Ir" (i)                                       \
    : "cc");                                                            \
                                                                        \
    return result;                                                      \
}

#define atomic_add_return_relaxed    atomic_add_return_relaxed
#define atomic_sub_return_relaxed    atomic_sub_return_relaxed
#define atomic_fetch_add_relaxed    atomic_fetch_add_relaxed
#define atomic_fetch_sub_relaxed    atomic_fetch_sub_relaxed

#define atomic_fetch_and_relaxed    atomic_fetch_and_relaxed
#define atomic_fetch_andnot_relaxed    atomic_fetch_andnot_relaxed
#define atomic_fetch_or_relaxed        atomic_fetch_or_relaxed
#define atomic_fetch_xor_relaxed    atomic_fetch_xor_relaxed

static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
{
    int oldval;
    unsigned long res;

    prefetchw(&ptr->counter);

    do {
        __asm__ __volatile__("@ atomic_cmpxchg\n"
        "ldrex    %1, [%3]\n"
        "mov    %0, #0\n"
        "teq    %1, %4\n"
        "strexeq %0, %5, [%3]\n"
            : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
            : "r" (&ptr->counter), "Ir" (old), "r" (new)
            : "cc");
    } while (res);

    return oldval;
}
#define atomic_cmpxchg_relaxed        atomic_cmpxchg_relaxed

static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
    int oldval, newval;
    unsigned long tmp;

    smp_mb();
    prefetchw(&v->counter);

    __asm__ __volatile__ ("@ atomic_add_unless\n"
"1:    ldrex    %0, [%4]\n"
"    teq    %0, %5\n"
"    beq    2f\n"
"    add    %1, %0, %6\n"
"    strex    %2, %1, [%4]\n"
"    teq    %2, #0\n"
"    bne    1b\n"
"2:"
    : "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
    : "r" (&v->counter), "r" (u), "r" (a)
    : "cc");

    if (oldval != u)
        smp_mb();

    return oldval;
}
#define atomic_fetch_add_unless        atomic_fetch_add_unless

#define ATOMIC_OPS(op, c_op, asm_op)        \
    ATOMIC_OP(op, c_op, asm_op)             \
    ATOMIC_OP_RETURN(op, c_op, asm_op)      \
    ATOMIC_FETCH_OP(op, c_op, asm_op)

ATOMIC_OPS(add, +=, add)
ATOMIC_OPS(sub, -=, sub)

#define atomic_andnot atomic_andnot

#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op, asm_op)        \
    ATOMIC_OP(op, c_op, asm_op)             \
    ATOMIC_FETCH_OP(op, c_op, asm_op)

ATOMIC_OPS(and, &=, and)
ATOMIC_OPS(andnot, &= ~, bic)
ATOMIC_OPS(or,  |=, orr)
ATOMIC_OPS(xor, ^=, eor)

#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP

#define atomic_xchg(v, new) (xchg(&((v)->counter), new))

/*
 * Linux doesn't define strict atomic_add_return() or atomic_sub_return()
 * for /arch/arm -- Let's manually define these for Xen.
 */

static inline int atomic_add_return(int i, atomic_t *v)
{
    int ret;

    smp_mb();
    ret = atomic_add_return_relaxed(i, v);
    smp_mb();

    return ret;
}
#define atomic_fetch_add(i, v) atomic_add_return(i, v)

static inline int atomic_sub_return(int i, atomic_t *v)
{
    int ret;

    smp_mb();
    ret = atomic_sub_return_relaxed(i, v);
    smp_mb();

    return ret;
}

#endif /* __ASM_ARM_ARM32_ATOMIC_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
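The `atomic_add_return()`/`atomic_sub_return()` additions above follow a simple pattern: a fully ordered operation is built by bracketing the `_relaxed` variant with full barriers, rather than rewriting the asm with acquire/release semantics. A C11 sketch of the same idea, with hypothetical names and standard atomics standing in for the LL/SC asm:

```c
#include <stdatomic.h>

/* Illustrative sketch only; not the Xen implementation. */
static int add_return_relaxed_sketch(_Atomic int *v, int i)
{
    /* fetch-add returns the old value; the *_return form wants the new one */
    return atomic_fetch_add_explicit(v, i, memory_order_relaxed) + i;
}

static int add_return_sketch(_Atomic int *v, int i)
{
    int ret;

    atomic_thread_fence(memory_order_seq_cst);  /* smp_mb() stand-in */
    ret = add_return_relaxed_sketch(v, i);
    atomic_thread_fence(memory_order_seq_cst);

    return ret;
}
```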




========================================================================
====            xen/include/asm-arm/arm32/cmpxchg.h                 ====
========================================================================

/*
 * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
 *
 * Summary of changes made while porting to Xen:
 *      - Rename header include guard to reflect Xen directory structure
 *      - Drop redundant includes and redirect others to Xen equivalents
 *      - Assume running on Armv7 so drop support for <= Armv6, and drop
 *           workarounds for StrongARM "swp" instruction errata
 *      - Drop local() variants (no callers in Xen)
 *      - Add strict versions of xchg(), cmpxchg(), and cmpxchg64() as
 *           Linux does not provide these
 *      - Keep the compiler happy by updating __cmpxchg64() ptr arg to
 *           be volatile and make the call to prefetchw() correctly cast
 *           ptr to (const volatile *)
 *      - Pull in original Xen arm32 cmpxchg.h definitions of
 *           cmpxchg_timeout*() and cmpxchg64_timeout*() as these are not
 *           provided by Linux and are required for Xen's guest atomics
 *      - Convert tabs to spaces in line with coding style
 *      - Tidy up indentations
 *      - Add Emacs file local variables
 *
 * SPDX-License-Identifier: GPL-2.0
 */
#ifndef __ASM_ARM_ARM32_CMPXCHG_H
#define __ASM_ARM_ARM32_CMPXCHG_H

#include <xen/prefetch.h>
#include <xen/types.h>

extern void __bad_cmpxchg(volatile void *ptr, int size);

static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
{
    unsigned long ret;
    unsigned int tmp;

    prefetchw((const void *)ptr);

    switch (size) {
    case 1:
        asm volatile("@    __xchg1\n"
        "1:    ldrexb    %0, [%3]\n"
        "    strexb    %1, %2, [%3]\n"
        "    teq    %1, #0\n"
        "    bne    1b"
            : "=&r" (ret), "=&r" (tmp)
            : "r" (x), "r" (ptr)
            : "memory", "cc");
        break;
    case 2:
        asm volatile("@    __xchg2\n"
        "1:    ldrexh    %0, [%3]\n"
        "    strexh    %1, %2, [%3]\n"
        "    teq    %1, #0\n"
        "    bne    1b"
            : "=&r" (ret), "=&r" (tmp)
            : "r" (x), "r" (ptr)
            : "memory", "cc");
        break;
    case 4:
        asm volatile("@    __xchg4\n"
        "1:    ldrex    %0, [%3]\n"
        "    strex    %1, %2, [%3]\n"
        "    teq    %1, #0\n"
        "    bne    1b"
            : "=&r" (ret), "=&r" (tmp)
            : "r" (x), "r" (ptr)
            : "memory", "cc");
        break;

    default:
        /* Cause a link-time error, the size is not supported */
        __bad_cmpxchg(ptr, size), ret = 0;
        break;
    }

    return ret;
}

#define xchg_relaxed(ptr, x) ({                        \
    (__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr),        \
                   sizeof(*(ptr)));            \
})

static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
                      unsigned long new, int size)
{
    unsigned long oldval, res;

    prefetchw((const void *)ptr);

    switch (size) {
    case 1:
        do {
            asm volatile("@ __cmpxchg1\n"
            "    ldrexb    %1, [%2]\n"
            "    mov    %0, #0\n"
            "    teq    %1, %3\n"
            "    strexbeq %0, %4, [%2]\n"
                : "=&r" (res), "=&r" (oldval)
                : "r" (ptr), "Ir" (old), "r" (new)
                : "memory", "cc");
        } while (res);
        break;
    case 2:
        do {
            asm volatile("@ __cmpxchg2\n"
            "    ldrexh    %1, [%2]\n"
            "    mov    %0, #0\n"
            "    teq    %1, %3\n"
            "    strexheq %0, %4, [%2]\n"
                : "=&r" (res), "=&r" (oldval)
                : "r" (ptr), "Ir" (old), "r" (new)
                : "memory", "cc");
        } while (res);
        break;
    case 4:
        do {
            asm volatile("@ __cmpxchg4\n"
            "    ldrex    %1, [%2]\n"
            "    mov    %0, #0\n"
            "    teq    %1, %3\n"
            "    strexeq %0, %4, [%2]\n"
                : "=&r" (res), "=&r" (oldval)
                : "r" (ptr), "Ir" (old), "r" (new)
                : "memory", "cc");
        } while (res);
        break;

    default:
        __bad_cmpxchg(ptr, size);
        oldval = 0;
    }

    return oldval;
}

#define cmpxchg_relaxed(ptr,o,n) ({                    \
    (__typeof__(*(ptr)))__cmpxchg((ptr),                \
                      (unsigned long)(o),        \
                      (unsigned long)(n),        \
                      sizeof(*(ptr)));            \
})

static inline unsigned long long __cmpxchg64(volatile unsigned long long *ptr,
                         unsigned long long old,
                         unsigned long long new)
{
    unsigned long long oldval;
    unsigned long res;

    prefetchw((const void *)ptr);

    __asm__ __volatile__(
"1:    ldrexd        %1, %H1, [%3]\n"
"    teq        %1, %4\n"
"    teqeq        %H1, %H4\n"
"    bne        2f\n"
"    strexd        %0, %5, %H5, [%3]\n"
"    teq        %0, #0\n"
"    bne        1b\n"
"2:"
    : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
    : "r" (ptr), "r" (old), "r" (new)
    : "cc");

    return oldval;
}

#define cmpxchg64_relaxed(ptr, o, n) ({                    \
    (__typeof__(*(ptr)))__cmpxchg64((ptr),                \
                    (unsigned long long)(o),    \
                    (unsigned long long)(n));    \
})


/*
 * Linux doesn't provide strict versions of xchg(), cmpxchg(), and cmpxchg64(),
 * so manually define these for Xen as smp_mb() wrappers around the relaxed
 * variants.
 */

#define xchg(ptr, x) ({ \
    long ret; \
    smp_mb(); \
    ret = xchg_relaxed(ptr, x); \
    smp_mb(); \
    ret; \
})

#define cmpxchg(ptr, o, n) ({ \
    long ret; \
    smp_mb(); \
    ret = cmpxchg_relaxed(ptr, o, n); \
    smp_mb(); \
    ret; \
})

#define cmpxchg64(ptr, o, n) ({ \
    long long ret; \
    smp_mb(); \
    ret = cmpxchg64_relaxed(ptr, o, n); \
    smp_mb(); \
    ret; \
})

/*
 * This code is from the original Xen arm32 cmpxchg.h, from before the
 * Linux 5.10-rc2 atomics helpers were ported over. The only changes
 * here are renaming the macros and functions to explicitly use
 * "timeout" in their names so that they don't clash with the above.
 *
 * We need this here for guest atomics (the only user of the timeout
 * variants).
 */

#define __CMPXCHG_TIMEOUT_CASE(sz, name)                                        \
static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,            \
                                         unsigned long *old,            \
                                         unsigned long new,             \
                                         bool timeout,                  \
                                         unsigned int max_try)          \
{                                                                       \
        unsigned long oldval;                                           \
        unsigned long res;                                              \
                                                                        \
        do {                                                            \
                asm volatile("@ __cmpxchg_timeout_case_" #name "\n"             \
                "       ldrex" #sz "    %1, [%2]\n"                     \
                "       mov     %0, #0\n"                               \
                "       teq     %1, %3\n"                               \
                "       strex" #sz "eq %0, %4, [%2]\n"                  \
                : "=&r" (res), "=&r" (oldval)                           \
                : "r" (ptr), "Ir" (*old), "r" (new)                     \
                : "memory", "cc");                                      \
                                                                        \
                if (!res)                                               \
                        break;                                          \
        } while (!timeout || ((--max_try) > 0));                        \
                                                                        \
        *old = oldval;                                                  \
                                                                        \
        return !res;                                                    \
}

__CMPXCHG_TIMEOUT_CASE(b, 1)
__CMPXCHG_TIMEOUT_CASE(h, 2)
__CMPXCHG_TIMEOUT_CASE( , 4)

static inline bool __cmpxchg_timeout_case_8(volatile uint64_t *ptr,
                                    uint64_t *old,
                                    uint64_t new,
                                    bool timeout,
                                    unsigned int max_try)
{
        uint64_t oldval;
        uint64_t res;

        do {
                asm volatile(
                "       ldrexd          %1, %H1, [%3]\n"
                "       teq             %1, %4\n"
                "       teqeq           %H1, %H4\n"
                "       movne           %0, #0\n"
                "       movne           %H0, #0\n"
                "       bne             2f\n"
                "       strexd          %0, %5, %H5, [%3]\n"
                "2:"
                : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
                : "r" (ptr), "r" (*old), "r" (new)
                : "memory", "cc");
                if (!res)
                        break;
        } while (!timeout || ((--max_try) > 0));

        *old = oldval;

        return !res;
}

static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
                                        unsigned long new, int size,
                                        bool timeout, unsigned int max_try)
{
        prefetchw((const void *)ptr);

        switch (size) {
        case 1:
                return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
        case 2:
                return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
        case 4:
                return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
        default:
                __bad_cmpxchg(ptr, size);
                return false;
        }

        ASSERT_UNREACHABLE();
}

/*
 * The helper may fail to update the memory if the action takes too long.
 *
 * @old: On entry, the pointed-to value holds the expected old value; on
 * return it is updated to the actual old value observed.
 * @max_try: Maximum number of iterations
 *
 * The helper returns true when the update succeeded (i.e. no timeout)
 * and false when it failed.
 */
static always_inline bool __cmpxchg_timeout(volatile void *ptr,
                                            unsigned long *old,
                                            unsigned long new,
                                            int size,
                                            unsigned int max_try)
{
    bool ret;

    smp_mb();
    ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
    smp_mb();

    return ret;
}

/*
 * The helper may fail to update the memory if the action takes too long.
 *
 * @old: On entry, the pointed-to value holds the expected old value; on
 * return it is updated to the actual old value observed.
 * @max_try: Maximum number of iterations
 *
 * The helper returns true when the update succeeded (i.e. no timeout)
 * and false when it failed.
 */
static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr,
                                              uint64_t *old,
                                              uint64_t new,
                                              unsigned int max_try)
{
    bool ret;

    smp_mb();
    ret = __cmpxchg_timeout_case_8(ptr, old, new, true, max_try);
    smp_mb();

    return ret;
}

#endif /* __ASM_ARM_ARM32_CMPXCHG_H */
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
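One more aside on the file above: `xchg()` and `cmpxchg()` route through a single front end that passes `sizeof(*(ptr))` to a switch over per-width backends. A minimal sketch of that dispatch shape follows; these stand-ins are NOT atomic and every name is hypothetical, the point is only the macro/switch structure. Note that in Xen an unsupported width becomes a link-time error via the undefined `__bad_cmpxchg()`, whereas this sketch can only fail at runtime.

```c
#include <assert.h>
#include <stdint.h>

/* Non-atomic per-width backends, purely to show the dispatch shape. */
static uint8_t xchg1_sketch(volatile void *p, uint8_t x)
{
    volatile uint8_t *q = p; uint8_t o = *q; *q = x; return o;
}
static uint16_t xchg2_sketch(volatile void *p, uint16_t x)
{
    volatile uint16_t *q = p; uint16_t o = *q; *q = x; return o;
}
static uint32_t xchg4_sketch(volatile void *p, uint32_t x)
{
    volatile uint32_t *q = p; uint32_t o = *q; *q = x; return o;
}

static unsigned long xchg_sketch(volatile void *ptr, unsigned long x, int size)
{
    switch (size) {
    case 1: return xchg1_sketch(ptr, (uint8_t)x);
    case 2: return xchg2_sketch(ptr, (uint16_t)x);
    case 4: return xchg4_sketch(ptr, (uint32_t)x);
    default:
        assert(!"unsupported size");  /* runtime stand-in for __bad_cmpxchg() */
        return 0;
    }
}

/* The front end recovers the pointee's type, as xchg_relaxed() does. */
#define XCHG_SKETCH(ptr, x) \
    ((__typeof__(*(ptr)))xchg_sketch((ptr), (unsigned long)(x), sizeof(*(ptr))))
```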


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 11:06:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 11:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20639.46658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazZt-0001kC-H9; Fri, 06 Nov 2020 11:06:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20639.46658; Fri, 06 Nov 2020 11:06:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazZt-0001k5-Di; Fri, 06 Nov 2020 11:06:45 +0000
Received: by outflank-mailman (input) for mailman id 20639;
 Fri, 06 Nov 2020 11:06:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l/Pr=EM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kazZs-0001k0-SQ
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 11:06:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09041c4b-3001-4285-85b0-aac8a16eaf9c;
 Fri, 06 Nov 2020 11:06:42 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kazZq-0000Vy-42; Fri, 06 Nov 2020 11:06:42 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kazZp-0008Sb-Sh; Fri, 06 Nov 2020 11:06:42 +0000
X-Inumbo-ID: 09041c4b-3001-4285-85b0-aac8a16eaf9c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4xhm3/84xwadizEJTcEOxIINJNiXhV/nT9pem7B1bhA=; b=3aXdrZ685d/SqDq1heCIZbywrt
	HrEicHeC08E/qPxHNbO5OayDxUYnWWhL3MHdbfomD7flgAMn0E5qBxvZnl8qijXvnMZNjUeUiA4Ir
	QH3R9S7zXIzc6lI7URClkSHHjdSJ3mUAsfa4QDz9yrsYaqiOqWN4dvsoZ2OkK4Nc/QO0=;
Subject: Re: [RFC PATCH 4/6] xen/arm64: Port Linux LL/SC and LSE atomics
 helpers to Xen
To: Ash Wilding <ash.j.wilding@gmail.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, rahul.singh@arm.com
References: <20201105185603.24149-5-ash.j.wilding@gmail.com>
 <20201106105501.55396-1-ash.j.wilding@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d6ea3f34-cef5-4a2d-499d-6adb572d6d4a@xen.org>
Date: Fri, 6 Nov 2020 11:06:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201106105501.55396-1-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 06/11/2020 10:55, Ash Wilding wrote:
> Hi,

Hi Ash,

First of all, thank you for taking a stab at adding LSE support in Xen!

> 
> In retrospect I should have put an intermediate patch between #3 and #4,
> deleting the existing headers. This would have made the patch diff for
> #4 and #5 much easier to read seeing as they are copying the Linux
> versions wholesale into Xen.

While I agree it would help the review, it would break Xen's 
bisectability. That said, it should be feasible to fold all the patches 
into one on commit.

If you are going to split the patches then I would suggest the following 
split:
   1) Remove Xen atomic headers
   2) Add a verbatim copy of the Linux headers
   3) Modify them for Xen

With this approach, we can focus on just Xen changes rather than having 
to review the Linux code as well.

> 
> I'll do that for V1 when we get there, but for now to aid in readability
> I've pasted the complete header files below. While doing this I also
> spent some time last night tidying up them up to be in line with the Xen
> coding style.

We usually keep the Linux coding style when a file mainly contains Linux 
code. This makes it easier to port future fixes from Linux to Xen.

Regarding the review, I have quite a bit of backlog for Xen at the 
moment. I will try to review the series in the next couple of weeks.
I hope that's fine with you.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 11:20:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 11:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20646.46670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazmq-0003Qb-O0; Fri, 06 Nov 2020 11:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20646.46670; Fri, 06 Nov 2020 11:20:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kazmq-0003QU-KC; Fri, 06 Nov 2020 11:20:08 +0000
Received: by outflank-mailman (input) for mailman id 20646;
 Fri, 06 Nov 2020 11:20:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kazmp-0003QP-06
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 11:20:07 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fcba73e-1205-48d9-ab96-8cd29fb35ccb;
 Fri, 06 Nov 2020 11:20:05 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id p8so123097wrx.5
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 03:20:05 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id g66sm1846381wmg.37.2020.11.06.03.20.03
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 06 Nov 2020 03:20:04 -0800 (PST)
X-Inumbo-ID: 8fcba73e-1205-48d9-ab96-8cd29fb35ccb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=0e+HHUGLIriVxrXE4S3yIBNTph4M1aFCuoUD1LC7hwg=;
        b=GFsjnWaBn4pEr7u1fdrDBR9sCQRpscB9ukDIMr4NkXrzG4pUhh2rE865oY0R5WdPIn
         qlBl/FH2pLSbU6kQasig8af4aTexef9UVNUrfaelCVA/ybb0S5cUhOnU9AFI2m2xXs9Z
         E6Ypn16c20UiH8+cFaQmSSS0WCZBsyowVoFaL8AJrFXrcBSIGAstseLgOBR2OPhTiVTA
         FVd3Rd5+eKn9LTQegcAK3SMbJDP5si4lxvtBhoBqF7tiJcTvlIzqfHPvGqdvunk7Fwh2
         atfCa41e350+bONWfNR2XTqyn/F3FDkRobcLDc7lAbAgy/L3EYofpy7cKjfOyPMylLMw
         mwTA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=0e+HHUGLIriVxrXE4S3yIBNTph4M1aFCuoUD1LC7hwg=;
        b=TSRHwRad3sKsXvchrfhMJTu5LGmx127omNuNektd42Waa9EHpSJlTQRUSd5DHX/nzI
         3CJ4VebLm3VrlDFIanPnO01MSwuoo7kFePUhYxrZyxszdr64AdgEL8TTMr77DMe0hxTg
         00E+3QrWnzrmvRwj4UaU4o6TyIiFyZ697qxDh/m4J41uYlgUJR84HIaFR4GToBUqYNmG
         XZa2dr0dNgpb/M+uYlA3ANtJfiYkwpT465r2+aNuFc9AAqagvic7y2Eujz7zwjJfO5yI
         /UcJdC/w02Ns66VbjNSH06FzszszH+0jUrNXSwH9VInmKEnWTb2upNleTq7cTSi+eYOW
         QqJA==
X-Gm-Message-State: AOAM532az0cUAvnK+5XlLh+UnnGPmp0x/e112bTJ7ScvprApwA1DOvcY
	8J+WaTDsEUqHpBidW/crJZI=
X-Google-Smtp-Source: ABdhPJyZOGoA5H9byxnrlEE7pI2yEtfVzW6QCq2Joy2jbe/AUMDl6kaW65/Av2m+ud+JDLuoTFEKLA==
X-Received: by 2002:adf:ce84:: with SMTP id r4mr1991047wrn.281.1604661605141;
        Fri, 06 Nov 2020 03:20:05 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: julien@xen.org
Cc: ash.j.wilding@gmail.com,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH 4/6] xen/arm64: Port Linux LL/SC and LSE atomics helpers to Xen
Date: Fri,  6 Nov 2020 11:20:03 +0000
Message-Id: <20201106112003.55831-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <d6ea3f34-cef5-4a2d-499d-6adb572d6d4a@xen.org>
References: <d6ea3f34-cef5-4a2d-499d-6adb572d6d4a@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hey Julien,

>
> First of all, thank you for taking a stab at adding LSE support in
> Xen!

No problem!


>>
>> In retrospect I should have put an intermediate patch between #3 and
>> #4, deleting the existing headers. This would have made the patch
>> diff for #4 and #5 much easier to read seeing as they are copying the
>> Linux versions wholesale into Xen.
>
> While I agree it would help the review, it would break Xen's
> bisectability. That said, it should be feasible to fold all the patches
> into one on commit.
> 
> If you are going to split the patches then I would suggest the
> following split:
>    1) Remove Xen atomic headers
>    2) Add a verbatim copy of the Linux headers
>    3) Modify them for Xen
> 
> With this approach, we can focus on just Xen changes rather than
> having to review the Linux code as well.

Ah-ha, yes, that would be better, I'll do that.


>
> We usually keep the Linux coding style when a file mainly contains Linux
> code. This makes it easier to port future fixes from Linux to Xen.

Understood, I'll drop those updates.


>
> Regarding the review, I have quite a bit of backlog for Xen at the
> moment. I will try to review the series in the next couple of weeks.
> I hope that's fine with you.

No problem at all, and actually that gives me a chance to find some
spare time to post an updated series with the approach you outlined
above (I'm probably not going to get a chance to work on this for at
least a week now).


Many thanks for the feedback :-)

Cheers,
Ash.


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 11:39:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 11:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20665.46694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb05N-0004dT-MS; Fri, 06 Nov 2020 11:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20665.46694; Fri, 06 Nov 2020 11:39:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb05N-0004dM-J1; Fri, 06 Nov 2020 11:39:17 +0000
Received: by outflank-mailman (input) for mailman id 20665;
 Fri, 06 Nov 2020 11:39:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kb05L-0004dH-Qj
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 11:39:15 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::62e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 404dd992-ce7a-455d-87e4-24f62a031e9a;
 Fri, 06 Nov 2020 11:39:13 +0000 (UTC)
Received: from AM5PR0201CA0016.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::26) by AS8PR08MB5879.eurprd08.prod.outlook.com
 (2603:10a6:20b:293::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 11:39:11 +0000
Received: from AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::50) by AM5PR0201CA0016.outlook.office365.com
 (2603:10a6:203:3d::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 11:39:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT003.mail.protection.outlook.com (10.152.16.149) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 11:39:11 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71");
 Fri, 06 Nov 2020 11:39:11 +0000
Received: from 71d6e79502fb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C87B25A5-CEFC-4C17-B3CF-5A832CB80560.1; 
 Fri, 06 Nov 2020 11:39:06 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 71d6e79502fb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 11:39:05 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB7PR08MB3612.eurprd08.prod.outlook.com (2603:10a6:10:4a::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 11:39:02 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 11:39:02 +0000
X-Inumbo-ID: 404dd992-ce7a-455d-87e4-24f62a031e9a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IRDW/i0phvQTSU4Q6CHFQWkeO7MHwgx2OhI37mAXgyk=;
 b=cBqxGCJyJFVtt6rIBEPzVAEzxUpTeRlDo5acHQa+9zL1ISAmW3em7d/1Co+4bDxMx9AOHQIMV6U0696pYOQ6uNipegQRADy5T+TIAZCTXIrkCTFXFCZKPPjzH6Bjt7DR2OSQ0ogQYOZt5Spd1+SEi4/SCzcsP0EIhg2SGvDtk6Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8c91ddf8c5f3d831
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g9XmDt02WHGOtOyVzNcEXR6+f7QlsKNJxi59yT2FV52ZFUA0Gay2yh/APMEWa7yESAU86EVXmvhzgacPQzpqp6n/mXoPibi+0WADmQ79KVndJ7w/emLu0O7YF835XXcsFyucTjyC2WO6T89jcp/lh17gG0iLfnDsRz0LziHgHahXSL3BvuMrliF44yuWJ7RblDEtk3Ro1iu0FGG44iGVqpxMdKzqUCdQH/XCIAqCkbZXP5wit4GSe6jScsEq4MJa6mQ7pRI58Mgd69bxqfJTa8KO7A3kSnurZItYDl0ohXnVsFHbYMeCaTjGKAMvupdGirraLAsddDzuLh7tRcOgjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IRDW/i0phvQTSU4Q6CHFQWkeO7MHwgx2OhI37mAXgyk=;
 b=drKT0EadZvRwaUzZ97Bkxa1nfMIChiT288O4ZvCRPwzHowNAbpgHTR77O4uzNAihC3aVCgjuVK2xvuusYLSGqVfgoFpPx+7qt6Fen2vlembsVJbZ50tPshRQVwWzHxWP9/bnlK5P6G7+HCU1VvnXp+4aVRL/tl3dxYYPkP9ot9WSarqVfS9U1flcLFcc56hCz/g0XCzecg/7D0DRBNnvfz+hxtWu3VE9z8bzgcxWsozKq/dM9dSn3ZDA9+JEA4jD2RWEQ8u98AqX9eMB5ARNqUJvJACi0sRkUf5P89D+NNaiUKNY7WAPE3SAPfzakYke4ly6UQ9V3eX1q68FDtoTGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 4/4] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Topic: [PATCH v2 4/4] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Index: AQHWsfqN+EzmE5QdE0+RqIeW6dAkbKm62JSAgAAmToA=
Date: Fri, 6 Nov 2020 11:39:02 +0000
Message-ID: <1E1E5368-9936-43AD-8A76-9EC3FD1C1C9E@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <7b60501fa689a4f2795ea6c34a7475d288f154a9.1604417224.git.rahul.singh@arm.com>
 <9c3c43c3-241d-4cea-cbad-4184523450c3@suse.com>
In-Reply-To: <9c3c43c3-241d-4cea-cbad-4184523450c3@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9d954e88-92f0-4124-7e11-08d88248949c
x-ms-traffictypediagnostic: DB7PR08MB3612:|AS8PR08MB5879:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB58792AF80DB3720A53EDF537FCED0@AS8PR08MB5879.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 cbcQy2g99mXXVjWVNN6ZTOZBByMONfzc1fLyuRmkruW22biH+ufYoYgwO5mvsPQ++UauT6EcWqGvy4ed8ZxK2RyTxWsrvBFaCP4ZIRR5DCIjt+/zKs2pjlySz37VvdKAvn4d3YFXAY18E+2AhZvxHl+j3KVeTcOBg37uuahnvNUfe/rZUo7vE9huBFd8raC9/M+Xo4QS8tZvLJOgEvhPmwz8uS8XP2bIGIgW6a43R5zP8d7uTgQcuYF9q0rgaYMhPn6a09yV+8kNnplT0qGJN8CfJeTnp2kEy1912pC1qW8fHYecKtr/qwDkTknBBq1L5DNGMTalKNLQnlA47y3Rzw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(136003)(396003)(39860400002)(376002)(6486002)(8676002)(316002)(2906002)(6512007)(55236004)(5660300002)(54906003)(186003)(83380400001)(2616005)(6916009)(91956017)(478600001)(33656002)(66556008)(76116006)(66446008)(6506007)(4326008)(66476007)(86362001)(64756008)(8936002)(66946007)(26005)(71200400001)(36756003)(53546011);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 RfjmWaf+QbVfmJR/ufxJqfOdcbTRTFgEOPAfucSyIPnXHHVEVNndJMXlChsbuiHLMJGsozKUu0p19B/R7mqnxWKpxEAb733J15QoPQlU2ROFMC1HTpghAmEDCXenfhb5bDZ6si9Xr4oieIHzqkmyqHLr4JRoaDOBC47PQUs8yCFxHZCJLnt0k8RYIq0JLqA4O+tk8c0g9MOorR7cUQzjs0DDKi6+8EQYwHnm5NMZrLcOON1pfoUcE3QZVPEvhZ0sXel39TJQgW56x8kZ65xdqOh64kUloDZ+21lxQWOufRradDKL7aNXe5O5MW9RtJpLu5b9VMA6C0FTll3c7Kp+uLBCYHRE/yrN6ASLuyFjSZjDP3w4LX3B0Ug4758W70ASKWtd+x9djT4NyxGMOXISSjJ7Ocy7a3duokdefjHREU/qpeUga55KBRc6ww6K68oCwzvUFZz0/kN25PjN/8utL9DSnP7i9xRUPEHmbDSEQr0K2nhgs7Yit3HgyB8JzC9ygoCpbkn2+3AGd+eqcoEGFiByOGbVJWHoPFpyVD/tFevdibiBoNoXb9nwaj3oaeOs+n0waPRe/vmEVdXM9fvU/XuO8aoP04MZtSjRNpfRz1BmFT8JIj151sOjj3th4Z8seLimOR1T0ncjd3HDUAQ4rw==
Content-Type: text/plain; charset="utf-8"
Content-ID: <43D2D71CAABFAC489BB8F746AB8F1678@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3612
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	630840be-be0e-45a1-6262-08d882488f6d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tgBgswvesXwykXB7/OkWUsdqYkzR1qo8lJz9B7cI5Bj0RrIXfrVsaIwdIlUWzXBtt3JeTJpySoUkPBLYhRmWfsnhgLtpj7VKkHSom0WW3Duv6WlswAvt2+NNUrnbHY4/XsMk2ZS4J6ESDtig6/SAr3whgIOHY50eZ5hI3ya0PPyQhWyRK9x7aWepzHGJtL6YB7UsmL63qKmhpoJqxTx20o3S+YOU4qaqcTfR2UzTgDknmHcpNxiPMW7r/V8+1ud43SEDInMNKu0gYWfNOsUGbpsG6w1XGFFEGEN4Q5iDDXw4vmyx6/BaYICBeYYX7aLcfNC7k0PmYkcyJo6EZ/nRlgNx0hxki3X91ydCNrCW48PPv+wTk3VYGzZGYZ45ppHQDTpFieaDQ1se+QO/OhDzAg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(136003)(39860400002)(46966005)(82740400003)(356005)(6486002)(81166007)(186003)(82310400003)(336012)(83380400001)(47076004)(5660300002)(70586007)(70206006)(2906002)(26005)(6862004)(478600001)(86362001)(36906005)(54906003)(8676002)(6512007)(316002)(33656002)(53546011)(4326008)(55236004)(6506007)(36756003)(2616005)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 11:39:11.4845
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9d954e88-92f0-4124-7e11-08d88248949c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5879

SGVsbG8gSmFuLA0KDQo+IE9uIDYgTm92IDIwMjAsIGF0IDk6MjEgYW0sIEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4gd3JvdGU6DQo+IA0KPiBPbiAwMy4xMS4yMDIwIDE2OjU5LCBSYWh1
bCBTaW5naCB3cm90ZToNCj4+IElmIG1lbS1zaGFyaW5nLCBtZW0tcGFnaW5nIGFuZCBsb2ctZGly
dHkgZnVuY3Rpb25hbGl0eSBpcyBub3QgZW5hYmxlZA0KPj4gZm9yIGFyY2hpdGVjdHVyZSB3aGVu
IEhBU19QQ0kgaXMgZW5hYmxlZCwgY29tcGlsZXIgd2lsbCB0aHJvdyBhbiBlcnJvci4NCj4gDQo+
IE5pdDogSXMgaXQgcmVhbGx5ICJhbmQiLCBub3QgIm9y4oCdPw0KDQpPayB5ZXMgSSB3aWxsIGZp
eCBpbiBuZXh0IHZlcnNpb24uDQo+IA0KPj4gQEAgLTE0MTgsMTIgKzE0MTcsNyBAQCBzdGF0aWMg
aW50IGFzc2lnbl9kZXZpY2Uoc3RydWN0IGRvbWFpbiAqZCwgdTE2IHNlZywgdTggYnVzLCB1OCBk
ZXZmbiwgdTMyIGZsYWcpDQo+PiAgICAgaWYgKCAhaXNfaW9tbXVfZW5hYmxlZChkKSApDQo+PiAg
ICAgICAgIHJldHVybiAwOw0KPj4gDQo+PiAtICAgIC8qIFByZXZlbnQgZGV2aWNlIGFzc2lnbiBp
ZiBtZW0gcGFnaW5nIG9yIG1lbSBzaGFyaW5nIGhhdmUgYmVlbiANCj4+IC0gICAgICogZW5hYmxl
ZCBmb3IgdGhpcyBkb21haW4gKi8NCj4+IC0gICAgaWYgKCBkICE9IGRvbV9pbyAmJg0KPj4gLSAg
ICAgICAgIHVubGlrZWx5KG1lbV9zaGFyaW5nX2VuYWJsZWQoZCkgfHwNCj4+IC0gICAgICAgICAg
ICAgICAgICB2bV9ldmVudF9jaGVja19yaW5nKGQtPnZtX2V2ZW50X3BhZ2luZykgfHwNCj4+IC0g
ICAgICAgICAgICAgICAgICBwMm1fZ2V0X2hvc3RwMm0oZCktPmdsb2JhbF9sb2dkaXJ0eSkgKQ0K
Pj4gKyAgICBpZiggIWFyY2hfaW9tbXVfdXNhYmxlKGQpICkNCj4+ICAgICAgICAgcmV0dXJuIC1F
WERFVjsNCj4gDQo+IFdoaWxlIGlpcmMgSSBkaWQgc3VnZ2VzdCB0aGlzIG5hbWUsIHNlZWluZyBp
dCB1c2VkIGhlcmUgbGVhdmVzIG1lDQo+IHNvbWV3aGF0IHVuaGFwcHkgd2l0aCB0aGUgbmFtZSwg
YWxiZWl0IEkgYWxzbyBjYW4ndCB0aGluayBvZiBhbnkNCj4gYmV0dGVyIGFsdGVybmF0aXZlIHJp
Z2h0IG5vdy4gTWF5YmUgYXJjaF9pb21tdV91c2VfcGVybWl0dGVkKCk/DQoNCk9rIEkgd2lsbCBt
b2RpZnkgYXMgcGVyIHlvdXIgc3VnZ2VzdGlvbi4NCj4gDQo+PiBAQCAtMzE1LDYgKzMxNiwxOCBA
QCBpbnQgaW9tbXVfdXBkYXRlX2lyZV9mcm9tX21zaSgNCj4+ICAgICAgICAgICAgPyBpb21tdV9j
YWxsKCZpb21tdV9vcHMsIHVwZGF0ZV9pcmVfZnJvbV9tc2ksIG1zaV9kZXNjLCBtc2cpIDogMDsN
Cj4+IH0NCj4+IA0KPj4gK2Jvb2xfdCBhcmNoX2lvbW11X3VzYWJsZShzdHJ1Y3QgZG9tYWluICpk
KQ0KPiANCj4gSnVzdCBib29sIHBsZWFzZSBhbmQgSSB2ZXJ5IG11Y2ggaG9wZSB0aGUgcGFyYW1l
dGVyIGNhbiBiZSBjb25zdC4NCg0KT2sgSSB3aWxsIGZpeCBpbiBuZXh0IHZlcnNpb24uDQo+IA0K
Pj4gK3sNCj4+ICsNCj4+ICsgICAgLyogUHJldmVudCBkZXZpY2UgYXNzaWduIGlmIG1lbSBwYWdp
bmcgb3IgbWVtIHNoYXJpbmcgaGF2ZSBiZWVuDQo+PiArICAgICAqIGVuYWJsZWQgZm9yIHRoaXMg
ZG9tYWluICovDQo+IA0KPiBQbGVhc2UgY29ycmVjdCBjb21tZW50IHN0eWxlIGFzIHlvdSBtb3Zl
IGl0Lg0KDQpPay4gDQo+IA0KPj4gKyAgICBpZiAoIGQgIT0gZG9tX2lvICYmIHVubGlrZWx5KG1l
bV9zaGFyaW5nX2VuYWJsZWQoZCkgfHwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICB2bV9l
dmVudF9jaGVja19yaW5nKGQtPnZtX2V2ZW50X3BhZ2luZykgfHwNCj4+ICsgICAgICAgICAgICAg
ICAgICAgICAgICBwMm1fZ2V0X2hvc3RwMm0oZCktPmdsb2JhbF9sb2dkaXJ0eSkgKQ0KPiANCj4g
WW91J3ZlIHNjcmV3ZWQgdXAgaW5kZW50YXRpb24sIGFuZCBJIGRvbid0IHNlZSB3aHkgLi4uDQoN
Ckkgd2lsbCBmaXggaW4gbmV4dCB2ZXJzaW9uLg0KDQo+IA0KPj4gKyAgICAgICAgcmV0dXJuIGZh
bHNlOw0KPj4gKyAgICBlbHNlDQo+PiArICAgICAgICByZXR1cm4gdHJ1ZTsNCj4+ICt9DQo+IA0K
PiAuLi4gdGhpcyBjYW4ndCBiZSBhIHNpbXBsZSBzaW5nbGUgcmV0dXJuIHN0YXRlbWVudCBhbnl3
YXk6DQo+IA0KPiAgICByZXR1cm4gZCA9PSBkb21faW8gfHwNCj4gICAgICAgICAgIGxpa2VseSgh
bWVtX3NoYXJpbmdfZW5hYmxlZChkKSAmJg0KPiAgICAgICAgICAgICAgICAgICF2bV9ldmVudF9j
aGVja19yaW5nKGQtPnZtX2V2ZW50X3BhZ2luZykgJiYNCj4gICAgICAgICAgICAgICAgICAhcDJt
X2dldF9ob3N0cDJtKGQpLT5nbG9iYWxfbG9nZGlydHkpOw0KPiANCj4gSW4gdGhlIGNvdXJzZSBv
ZiBtb3ZpbmcgSSdkIGFsc28gc3VnZ2VzdCBkcm9wcGluZyB0aGUgdXNlIG9mDQo+IGxpa2VseSgp
IGhlcmU6IFRoZSB3YXkgaXQncyB1c2VkIChvbiBhbiAmJiBleHByZXNzaW9uKSBpc24ndA0KPiBu
b3JtYWxseSBoYXZpbmcgbXVjaCBlZmZlY3QgYW55d2F5LiBJZiBhbnl0aGluZyBpdCBzaG91bGQg
aW1vDQo+IGJlDQo+IA0KPiAgICByZXR1cm4gZCA9PSBkb21faW8gfHwNCj4gICAgICAgICAgIChs
aWtlbHkoIW1lbV9zaGFyaW5nX2VuYWJsZWQoZCkpICYmDQo+ICAgICAgICAgICAgbGlrZWx5KCF2
bV9ldmVudF9jaGVja19yaW5nKGQtPnZtX2V2ZW50X3BhZ2luZykpICYmDQo+ICAgICAgICAgICAg
bGlrZWx5KCFwMm1fZ2V0X2hvc3RwMm0oZCktPmdsb2JhbF9sb2dkaXJ0eSkpOw0KPiANCj4gQW55
IHRyYW5zZm9ybWF0aW9uIHRvIHRoaXMgZWZmZWN0IHdhbnRzIG1lbnRpb25pbmcgaW4gdGhlDQo+
IGRlc2NyaXB0aW9uLCB0aG91Z2guDQoNCk9rIEkgd2lsbCBtb2RpZnkgYXMgcGVyIHlvdXIgc3Vn
Z2VzdGlvbi4NCj4gDQo+IEphbg0KDQpSZWdhcmRzLA0KUmFodWwNCg0K


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 11:43:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 11:43:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20679.46706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb09r-0005W4-AU; Fri, 06 Nov 2020 11:43:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20679.46706; Fri, 06 Nov 2020 11:43:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb09r-0005Vx-6H; Fri, 06 Nov 2020 11:43:55 +0000
Received: by outflank-mailman (input) for mailman id 20679;
 Fri, 06 Nov 2020 11:43:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kb09q-0005Vs-Bg
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 11:43:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::61a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f600e120-28e3-4f68-8562-a130a2f95e18;
 Fri, 06 Nov 2020 11:43:52 +0000 (UTC)
Received: from AM6P193CA0058.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:8e::35)
 by AM8PR08MB5697.eurprd08.prod.outlook.com (2603:10a6:20b:1d7::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 11:43:49 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8e:cafe::9c) by AM6P193CA0058.outlook.office365.com
 (2603:10a6:209:8e::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 11:43:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 11:43:48 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Fri, 06 Nov 2020 11:43:48 +0000
Received: from d1ce32fd145c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6075E167-8020-4CF6-AE9C-01E4F2A9BCE1.1; 
 Fri, 06 Nov 2020 11:43:26 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d1ce32fd145c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 11:43:26 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2375.eurprd08.prod.outlook.com (2603:10a6:4:87::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 6 Nov
 2020 11:43:25 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 11:43:25 +0000
X-Inumbo-ID: f600e120-28e3-4f68-8562-a130a2f95e18
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown [2a01:111:f400:7e1b::61a])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f600e120-28e3-4f68-8562-a130a2f95e18;
	Fri, 06 Nov 2020 11:43:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GJId46Y3T+sH6uEXxNLxlE0u6JQeSseFyzKydePrehA=;
 b=c7Oj7kaXullaylvW9reHWS7ufNzzFl2lYBs0n9U4VQI7rNKwWRq3kGZpP+E+mTE55oXzAm/AwG2ilh7pN/ZO842hqzxzWwivvlQ+qtb7FTVwjBmuwG9iAV9L62IX5QT1/EUcNxm9ArVtSdEdYSpLNT8YcLSYSGsgn5ugfY5JY0A=
Received: from AM6P193CA0058.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:8e::35)
 by AM8PR08MB5697.eurprd08.prod.outlook.com (2603:10a6:20b:1d7::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 11:43:49 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8e:cafe::9c) by AM6P193CA0058.outlook.office365.com
 (2603:10a6:209:8e::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 11:43:48 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 11:43:48 +0000
Received: ("Tessian outbound 082214a64d39:v71"); Fri, 06 Nov 2020 11:43:48 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: aaff8c02ab7ec0e2
X-CR-MTA-TID: 64aa7808
Received: from d1ce32fd145c.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 6075E167-8020-4CF6-AE9C-01E4F2A9BCE1.1;
	Fri, 06 Nov 2020 11:43:26 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d1ce32fd145c.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Fri, 06 Nov 2020 11:43:26 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ROydsg4X2tjIodwb9R4DZYugOEt9gR/JGaWAIvO1v8mgW0+DMAdsg7+129Vh3Qoy0LL4omn6jyJ5zDUMGwQRxXzX1S73Sv63l5XIXVy61OJ+FzJzDWiGdPjrWfTzSRxOfThXuxO/8C/AWzJjNhtp7r8XhzaR42/uyoPlrJgoyBsK7WRHrjSG+k/oeWM4uvat0S5KGx28sJaTLoxTAZ2srK9jh19tv1uNjEspBvan7cTVqzRlNh5VLkqaWqJsXN4u59LzZzdGYLjE5hxPy2+EasVp9inJhaFA08cRx56MyToKKDfjYTSRt3aDqe/DHUsD1RSKl+7A+XBYAtgMsxxMLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GJId46Y3T+sH6uEXxNLxlE0u6JQeSseFyzKydePrehA=;
 b=eP4HHnGZtl8r0nIRFjcrQdje2vSLyrbL+MBlpjeNauu8dnOi4hHtROvDzLbGC7HRIu5jBJyeQLTs4cj59UqyC0Vkz8RavbSPiLaphc2ysaA5avVRV0Q2raxcqdTievync+od0hd6t3iD0ChHoaN+0xVEY3KjKgoIWi6YStITJyeenq8DWxJ3YQ1WvuPbciOYT+wbn5oizSCViU76HnYLJGXc2NPeqF1fTKiQSCA2EMX4PbZydjQE89A97FMbL6b/1HFhlxJ6PW1/lrAoLrAWcLZuOC2eKSxX6umwj+/vUtzMxRWDLK8nzqcg7TPUI/KR/g70ulaICorSKgNqGc4yEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GJId46Y3T+sH6uEXxNLxlE0u6JQeSseFyzKydePrehA=;
 b=c7Oj7kaXullaylvW9reHWS7ufNzzFl2lYBs0n9U4VQI7rNKwWRq3kGZpP+E+mTE55oXzAm/AwG2ilh7pN/ZO842hqzxzWwivvlQ+qtb7FTVwjBmuwG9iAV9L62IX5QT1/EUcNxm9ArVtSdEdYSpLNT8YcLSYSGsgn5ugfY5JY0A=
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2375.eurprd08.prod.outlook.com (2603:10a6:4:87::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 6 Nov
 2020 11:43:25 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 11:43:25 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
Thread-Topic: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for
 PCI ATS functionality.
Thread-Index: AQHWsfqLVPZUjJyJak+hwdAAN0KToam4HooAgAABtgCAAt/dgA==
Date: Fri, 6 Nov 2020 11:43:25 +0000
Message-ID: <73189992-EC5B-493D-BB23-6DFB2F11A580@arm.com>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
 <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
 <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
In-Reply-To: <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 63b21911-5888-48a6-9de4-08d8824939ec
x-ms-traffictypediagnostic: DB6PR0802MB2375:|AM8PR08MB5697:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB569733B0E59DEE46DBD9BD01FCED0@AM8PR08MB5697.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oXaAncseH50J6k3Pki6Q0PN8MCT/jcEAmDDZolx//o9q2FHZDtbApmShv9xnw2GYAl0KAxDNN47dGOVWa2VIJ7aalogN89ngSte1Us27qxKJOv/ZJg9H2WgPMv6K7EOF+h9KlAOAmFqi/tbq0arkwbCY7JFENRSw3/7+Rt4xOOiXGmoR22TaUJ6FFRhS32QhVtHF/AKP55ZRQG7BbaKz9ajetrgMuY27HZVL+dGvg3SYxEEzilvncljxDQkfoQ68EnAXOgevWlbMCF2K/IMOHrUdmkFqWn/lREtKIrOFITpkEdMqYKr5KHPIcyJahkS2ZRvGxqu6Vmfd+YlSu28FGw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(39860400002)(396003)(366004)(136003)(2616005)(86362001)(8676002)(6486002)(55236004)(26005)(2906002)(6512007)(478600001)(186003)(5660300002)(8936002)(66556008)(4326008)(36756003)(83380400001)(33656002)(76116006)(64756008)(66446008)(66476007)(66946007)(71200400001)(91956017)(316002)(6506007)(53546011)(6916009)(54906003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 UBXPr7vA0C1+YRl2IZA+oHWYR8nEaYrk/Vme7gNx9d3XsLnLIzBqQReyjl2pG/qYXLTAgHnheV0EdWbOFHNgbWsnCpW7l2h8d9Tet2Ogsh7ldGzylbH/5h9cj0BnQkNsGhsJUfHLanp+CUKoYRzPl0qP0NEtYJvWf3ONFfRMllNFaKBo/eGj0YSnYV/kqawcXiMGrgyc3c2/njTMqIzU4KJzISZhsfe4zht8d+RVBVDOMoav0QIgKNBtrxAh/+vFoILpKOMhiCwZpD1CYHMNQFiifA5Wr76MT10C/hOARiGjoNqSNJ7vPCZ8155rVL+P0AVVWhfKBXOx+hKPjlXZXOl+lgZOlETonb0hlxl1DMbUeKcwifw5XiIwqp8T3mX3cdpVKao1UPYIoJhkZRanX3LV7zKirxfGYbFfzEQEIT9vteRU9AzXVjlgluQI1WsUP/Wogg6RursDU0CpS7BCDflaYJrhYGtan8XOk32r56GPTMiXkaDd+2Q1H+LynqYotXSMki7xszqRuOmYLdK7YZAg4bbHenX9HSnrI7F4+ZTjYQjRrozJtK/eRA+pBXnapZ+o4lKka8qtvSJRW++90zpXGqmE4ELwFJ7vRaxliiqs09chKVCUizAXcDERqiIvMPrkoruD3kzlZ4tSwJIFUw==
Content-Type: text/plain; charset="utf-8"
Content-ID: <F6C6948EEA227F4BADABF2C2AA412626@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2375
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	658aac9d-4696-4edd-fb48-08d882492c2c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JKxWkyIfDJ0l+DtE44VhuvlpDrTAFpdCoTtJYWZgA0pYZxjUiO3fq0fNA2J3TX0k9xoieSL1+oqJeGIp/K4/bCr9KafMJ4DZsEUBlzg5YRxvbYio6t62YxScWeCBZIe6q/NpMWCU83ngINoUksOh7Ydq98M7SzGNiDWkxj58fgX2vxwODzVPLW+jXs9NeHwWNlIx2UmEOvJATy/uojTSTYkQzY5w5PflpYVIYZi0cclfH7+2iIcr8gwR+uX/4AJxZBeCmjWWWsqWfw+OXwN57zTQA8lUmscKNkci7tAOweWZxcQfUsXc7Z4vml5cgD/Fq/NKQrQOtEw973cNPCRqpkJNW570ueVyFU2kMgg6qzj3GYlleNl7TJR5eoGvxuEIfDvpv1Hc2uDu5P5d6st58Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(346002)(396003)(136003)(46966005)(82310400003)(55236004)(86362001)(83380400001)(54906003)(6512007)(6506007)(8936002)(53546011)(36906005)(5660300002)(316002)(6486002)(8676002)(2906002)(26005)(81166007)(186003)(47076004)(478600001)(356005)(70586007)(70206006)(336012)(4326008)(6862004)(2616005)(36756003)(82740400003)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 11:43:48.8307
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 63b21911-5888-48a6-9de4-08d8824939ec
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5697

Hello Jan,

> On 4 Nov 2020, at 3:49 pm, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 04.11.2020 16:43, Jan Beulich wrote:
>> On 03.11.2020 16:59, Rahul Singh wrote:
>>> --- a/xen/drivers/pci/Kconfig
>>> +++ b/xen/drivers/pci/Kconfig
>>> @@ -1,3 +1,12 @@
>>> 
>>> config HAS_PCI
>>> 	bool
>>> +
>>> +config PCI_ATS
>>> +	bool "PCI ATS support"
>>> +	default y
>>> +	depends on X86 && HAS_PCI
>>> +	---help---
>>> +	 Enable PCI Address Translation Services.
>>> +
>>> +	 If unsure, say Y.
>> 
>> Support for "---help---" having gone away in Linux, I think we'd
>> better not add new instances. Also indentation of help content
>> typically is by a tab and two spaces. With these two adjusted
>> 
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Initially I wanted to merely reply indicating I'd be fine making
> these changes while committing, but there are two more things
> (and I withdraw my R-b): For one, isn't struct pci_dev's ats
> field now unused when !PCI_ATS? If so, it should get an #ifdef
> added.

Yes, I tried to #ifdef the ats field in struct pci_dev, but after doing that I found I would have to modify the code related to the x86 iotlb flush. As I have limited knowledge of how the iotlb flush works on x86, I decided not to modify that part of the code.

I have already compiled x86 with !PCI_ATS; it has the same behaviour as the command line option "ats=false", with the struct pci_dev ats field left unused.

> And then, what exactly is it in ats.c that's x86-specific?
> Shouldn't the whole file instead be moved one level up, and be
> usable by Arm right away?

Yes, you are right that ats.c is not x86-specific, but it is not tested on Arm, so I thought we would move ats.c to common code once the ATS code is tested/implemented for Arm.

disable_ats_device() is referenced in the common "passthrough/pci.c" file and defined in "x86/ats.c", therefore I decided to introduce the PCI_ATS option for x86, as I am not sure how the ATS functionality is going to be implemented on Arm.

There are three ways to fix the Arm compilation error related to the disable_ats_device() function:

1. Introduce the PCI_ATS config option for x86, enabled by default, with !PCI_ATS having the same behaviour as the command line option "ats=false".

2. disable_ats_device() is called by iommu_dev_iotlb_flush_timeout(), which as per my understanding is x86-specific (not sure, please confirm). Move iommu_dev_iotlb_flush_timeout() to "passthrough/x86/iommu.c" now, and move the ats.c file to common code once the ATS code is tested for Arm.

3. Move the x86/ats.c file one level up, fixing the compilation error now; if Arm platforms are going to implement the ATS functionality differently from x86, we have to move ats.c back to the x86 directory.

Please suggest how we can proceed further on this.

> 
> Jan

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 12:18:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 12:18:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20726.46729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb0hY-0008O1-Cw; Fri, 06 Nov 2020 12:18:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20726.46729; Fri, 06 Nov 2020 12:18:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb0hY-0008Nu-9v; Fri, 06 Nov 2020 12:18:44 +0000
Received: by outflank-mailman (input) for mailman id 20726;
 Fri, 06 Nov 2020 12:18:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb0hW-0008NM-G5
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 12:18:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0b9e5bc-4b64-4c8f-a261-b18fd0210ed0;
 Fri, 06 Nov 2020 12:18:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb0hO-0001zs-W9; Fri, 06 Nov 2020 12:18:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb0hO-0003sD-N5; Fri, 06 Nov 2020 12:18:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb0hO-0004ir-Mb; Fri, 06 Nov 2020 12:18:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kb0hW-0008NM-G5
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 12:18:42 +0000
X-Inumbo-ID: d0b9e5bc-4b64-4c8f-a261-b18fd0210ed0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d0b9e5bc-4b64-4c8f-a261-b18fd0210ed0;
	Fri, 06 Nov 2020 12:18:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QofY3N9C3RVicqPHDN09y/199t+HLZrkoqc2OK2bktE=; b=dbr0tw0bXyEHWXGxGoNmgIhFZX
	NKVMERL1J9CsKgl0TRPxqQmTH5vZuzVbcZGUsq/kW67/Com830MgAO989FMci7hoyr79JsQBEe19z
	bDWZ1MPZJqZ06IgTaEAWMf5GFBwxOfKzHbVpaLsq2mCZZIwnqpkvPU69DyyoykqtfFlE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb0hO-0001zs-W9; Fri, 06 Nov 2020 12:18:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb0hO-0003sD-N5; Fri, 06 Nov 2020 12:18:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb0hO-0004ir-Mb; Fri, 06 Nov 2020 12:18:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156523-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156523: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2a5f9f6a6932214fda76b9b3c03e024772882d34
X-Osstest-Versions-That:
    xen=957708c2d1ae25d7375abd5e5e70c3043d64f1f1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 12:18:34 +0000

flight 156523 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156523/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2a5f9f6a6932214fda76b9b3c03e024772882d34
baseline version:
 xen                  957708c2d1ae25d7375abd5e5e70c3043d64f1f1

Last test of basis   156502  2020-11-06 03:01:22 Z    0 days
Testing same since   156523  2020-11-06 10:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   957708c2d1..2a5f9f6a69  2a5f9f6a6932214fda76b9b3c03e024772882d34 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 12:48:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 12:48:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20734.46753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1AU-0002ba-Uh; Fri, 06 Nov 2020 12:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20734.46753; Fri, 06 Nov 2020 12:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1AU-0002bT-RD; Fri, 06 Nov 2020 12:48:38 +0000
Received: by outflank-mailman (input) for mailman id 20734;
 Fri, 06 Nov 2020 12:48:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kb1AU-0002a1-30
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 12:48:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.63]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa0781e5-22b9-4bc9-a396-4a741deb25e8;
 Fri, 06 Nov 2020 12:48:34 +0000 (UTC)
Received: from AM6P194CA0057.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::34)
 by DB6PR0801MB1845.eurprd08.prod.outlook.com (2603:10a6:4:39::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 6 Nov
 2020 12:48:30 +0000
Received: from AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::a) by AM6P194CA0057.outlook.office365.com
 (2603:10a6:209:84::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 12:48:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT041.mail.protection.outlook.com (10.152.17.186) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 12:48:30 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71");
 Fri, 06 Nov 2020 12:48:30 +0000
Received: from 7ec34fa4adea.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F0C735FD-E81A-46F1-81DB-822416D04178.1; 
 Fri, 06 Nov 2020 12:48:20 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7ec34fa4adea.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 12:48:20 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0801MB1958.eurprd08.prod.outlook.com (2603:10a6:4:73::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 6 Nov
 2020 12:48:17 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 12:48:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kb1AU-0002a1-30
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 12:48:38 +0000
X-Inumbo-ID: fa0781e5-22b9-4bc9-a396-4a741deb25e8
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown [40.107.21.63])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fa0781e5-22b9-4bc9-a396-4a741deb25e8;
	Fri, 06 Nov 2020 12:48:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jd6T/AmA/q63fvcIN0Kr5awQuDyw2DvGj5Pn/xCbQT0=;
 b=NMG1LnVzXoo82xB8ZNbkPTsyt7986RS0j3pj2MU5SD9TvjLYQWsWKANKaZnSV6rM/5v2Wh5nf3LOTp06Tw4jGau1s3M3XK31VqAzT+G1f6ROdeZKKl024Y1B++tnOLKGJTqctSeYu4nj/lHIwdFVlRMQPlT7XeqPqXD7Pw1QJZM=
Received: from AM6P194CA0057.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::34)
 by DB6PR0801MB1845.eurprd08.prod.outlook.com (2603:10a6:4:39::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 6 Nov
 2020 12:48:30 +0000
Received: from AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::a) by AM6P194CA0057.outlook.office365.com
 (2603:10a6:209:84::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 12:48:30 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT041.mail.protection.outlook.com (10.152.17.186) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 12:48:30 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71"); Fri, 06 Nov 2020 12:48:30 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 48339c812ce88691
X-CR-MTA-TID: 64aa7808
Received: from 7ec34fa4adea.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id F0C735FD-E81A-46F1-81DB-822416D04178.1;
	Fri, 06 Nov 2020 12:48:20 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7ec34fa4adea.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Fri, 06 Nov 2020 12:48:20 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LlF+l+Jq/0jyWR3OVO4KhBrByp58NicI1jQoS1Gaceh2IWMXmTnvgN3swPIIXYLDeJHaFODpJT9j09wXkz/OoZxbhGxSbLulcVqxAkrRb1lFguz6XubglWI2xCUPjX069NWbg0sCe32GopVa9meYcoFhoKVN40mjCsdeYfpBmnIKSToy3GAUXGq8gW1Ai2FJcxpPuRl8CV93rZf4jfrjgUw5JAWCx9bQN50/Mzm/D+XUuXCOd6n3iiyK8A5MonpY7VxwR0jKyHQQHuV7UDNqNklEjsyt/kqO3REl382oeYjeE5ovVR1UnzPeS5bXwlAy2ai+GTEwreESITtnp0JBUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jd6T/AmA/q63fvcIN0Kr5awQuDyw2DvGj5Pn/xCbQT0=;
 b=NQH8zlkncvCtKi1aKytHITnmKsbm84JUst11zmPcyXS8UqtYHhsYt/sQ12JAi+/fw1An/7kj5TdvU9ZHKd+Lk0aevRxovKNlsd9DBXLjjAdmR4QmhndkM5d2EVAJLAqLRHItMogEAKER5cr3gJAV1Zlm4ZvJdjbozDCx/FhC05a9v7ErZ8FoZV9bHWqCNf1A+v75/k/l9BFUHcf/G/A3P//EpDAbsdDj0gqFpxh0FEZnD8FbSV5Y1rXumyulyocEX1jpQvMT/OQ/7MSr9taHtGS6jRwDboS6aemDowBPlmHEUPSpD5OKVQt4iWK6HO/2fYYg+Cj236oaYFphN3SSiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jd6T/AmA/q63fvcIN0Kr5awQuDyw2DvGj5Pn/xCbQT0=;
 b=NMG1LnVzXoo82xB8ZNbkPTsyt7986RS0j3pj2MU5SD9TvjLYQWsWKANKaZnSV6rM/5v2Wh5nf3LOTp06Tw4jGau1s3M3XK31VqAzT+G1f6ROdeZKKl024Y1B++tnOLKGJTqctSeYu4nj/lHIwdFVlRMQPlT7XeqPqXD7Pw1QJZM=
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0801MB1958.eurprd08.prod.outlook.com (2603:10a6:4:73::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 6 Nov
 2020 12:48:17 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 12:48:17 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr Andrushchenko
	<andr2000@gmail.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamwEIWAgABuPwCAA/gngIAAQvMAgAAErACABnTvAA==
Date: Fri, 6 Nov 2020 12:48:17 +0000
Message-ID: <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
In-Reply-To: <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7fe82ee0-40f1-4d25-9ef0-08d882524375
x-ms-traffictypediagnostic: DB6PR0801MB1958:|DB6PR0801MB1845:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB1845BEC38A3FD6082D316E8EFCED0@DB6PR0801MB1845.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Xfyh1IlS5rnU8liWb9lQz1WPfJJzCa9mtkU7MpsqYAaUmbBhJqJbSgaethUifUy9i0F0Pbg+GbT85TOh6n/msontaxCliJ5igws9CnWIEBeolhewDwnqrrEGNeQBOmOdU1Ngns7SY41CJM4TJDRggwS1vLYA8SnPANFWYBcuaulvJqgfRoc5y/dDkT4aUaPECGSJaiTIUR0xUCnGkNPXEx074i55UZrKDvQ4L3AHwLvQ9XjEtJAeH06m+6xt/rYBepBzXZIr653uKXtTwihc1fcghnNx8u1f33BKE+GKtzJDkgSVzfYAQjccc8bh4Yik5JmxDEYTTd6LHCbZD6Jqgw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39840400004)(346002)(376002)(396003)(136003)(366004)(2906002)(36756003)(8936002)(83380400001)(8676002)(33656002)(66556008)(66446008)(316002)(6916009)(6486002)(6512007)(76116006)(86362001)(26005)(91956017)(66476007)(2616005)(64756008)(71200400001)(53546011)(6506007)(54906003)(186003)(66946007)(5660300002)(4326008)(478600001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 wLVWUoOHySoSg1IKXu7m87ufbHcHLady19Ql2Cq0JbpOTglnV6xbHFhuWeoOMe+SRoFvJBMABWpGlMiyi9+s1pJ0e3uJrERdR6r0u2RAjs1fQ04t5TpePxR+aTvI06vXBcky1iyTXbvBYLbYdneSE7CqAanw9OGjWv1MvCziw3s93sPRcMUIpV26u5LKSfYBJB/IydYjuBEtTPRLw03I01qJ6ryaU3awEDL8+i+8BMIr90MOYN3+0ASibaB4pVJgHvv/zrZ/z9IkXoiafwW8MM40mwB8Yc5QBxgFCw9eOj4mg7fVtHfkoyCmSGWDDbm8Up/QpU30RCYB/qVHeNmcGr62Bndq5xl8m241jrIWJ6Pwouqa+Cmz46g+03TQ5Wql7ZpkpBaR/DMJnf7Ti9USr4USlrC2rqrHZQ7vCcJpmFfxqrrMHlCNvUu92NiMAEFUXR3TmkC/ahfwq7a9BFIQ1dA7iT/rpviH6XOXvDH1ts0wT0LRJj8Y4j4wPkychrruqksdh8UjzM+o8D9AtWrTcJO720uctk9HUmLLUtrUcUijvOROPBgTyK8c9Qz72iOjmRcgXQ3RgzvyI3bHE5w8BE5d5UGDXMjbujxIMqSkXyQ1twxRJVC7YOpx1+v/QcPLJ62IvpBii/jTdGY4dWf0/w==
Content-Type: text/plain; charset="utf-8"
Content-ID: <F4ED7B644A6CB545BBA2F390371D1D56@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1958
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	00f85abc-22de-4c0f-46b3-08d882523bb9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BKpvtKJx3Lavhn9Dbksx8bfu7D1XlW4qFseoYuUzVREWfharWLdSh8Atq/ayHohbB1hYaOa5/nnBKJmM4uzhvUxXNdFfGyttziCNco4g16F4KPWA1QQUBFxZtLqqrSjMfUoBmjPlK2a5+x7hYBG9BQsm+l1+jn4WMLEVWpKCbt2n47lcU1bxTet2XxQa4BgOLcMNXiZ7AX5pHwXhbPCM1AkJ6AdQtddIjHUxKkdsKyNxsUlo8IAhU4jSp0aM1d9SslkvXy7QiSQwdRLG0qcdTmdq8tjSi/VrBbQW1S91l1vN3OkjTuD86ClweOMkztQ14By7Y/QsKUXW5up4eaE7McU4xySdT9GxOvwv4yz9e/Gtfqsnx9pudvaBwAg8lOeFsgzUe1TTG8ExyBTaaPFkOg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(376002)(346002)(46966005)(83380400001)(478600001)(8676002)(33656002)(2906002)(36756003)(82740400003)(356005)(6506007)(4326008)(26005)(86362001)(186003)(53546011)(8936002)(2616005)(6862004)(47076004)(54906003)(6486002)(36906005)(82310400003)(316002)(6512007)(5660300002)(336012)(70586007)(107886003)(70206006)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 12:48:30.3097
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7fe82ee0-40f1-4d25-9ef0-08d882524375
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1845

Hello Oleksandr,

> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> Hi,
> 
> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>> Hi,
>> 
>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>> 
>>> Hi, Julien!
>>> 
>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>> Hi Oleksandr,
>>>> 
>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>> the Linux SMMUv3 driver.
>>>>>> 
>>>>>> Major differences between the Linux driver are as follows:
>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>      that supports both Stage-1 and Stage-2 translations.
>>>>> First of all thank you for the efforts!
>>>>> 
>>>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>>>> 
>>>>> that this combination will not work as of now:
>>>>> 
>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>> 
>>>>> [snip]
>>>>> 
>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>> 
>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>> Just for clarification, you are trying to boot Xen on QEMU, right?
>>> Exactly
>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>> So, it is even more work then
>> Overall it would make more sense to spend some time adding proper support in QEMU than trying to modify the driver to support QEMU right now.
>> 
>>>>> 
>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>> 
>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>> I would recommend to get the SMMU supporting stage-2 page-tables.
>>> You mean in QEMU?
>> See before.
>> 
>>>> Regardless that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>> Yes, that would be nice
>> Fully agree and we will look into that.
>> 
>> Anything you could share so that we could quickly reproduce your setup would be more than great.
> 
> Nothing special,
> 
> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
> 
> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
> 
> -nographic -serial mon:stdio [..snip..]
> 
> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1

I just checked and confirmed that QEMU is booting with the Xen SMMUv3 patch, and Xen is able to say that SMMU translation is not supported, as Xen supports Stage-2 translation and QEMU supports Stage-1 only.

(XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
(XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
(XEN) SMMUv3: /smmuv3@9050000: no translation support!
(XEN) I/O virtualisation disabled

The only difference I observed is that you have to add the option "-machine virt,iommu=smmuv3" when launching QEMU.

Please let me know if it also works for you.

> 
>> 
>> Regards
>> Bertrand
>> 
>>>> Cheers,
>>>> 
>>> Thank you,
>>> 
>>> Oleksandr
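Putting together the command line quoted in this message with the extra option mentioned above, the full invocation would look roughly like the following. This is an illustrative sketch: the GIC version, memory size, and the trailing kernel/initrd arguments (elided as [..snip..] in the original) depend on the local setup.

```shell
# Boot Xen under QEMU's virt machine with an emulated SMMUv3 (QEMU 4.2.1).
# Note the iommu=smmuv3 machine property; without it no SMMU node is exposed.
qemu-system-aarch64 \
    -machine virt,gic-version=2,virtualization=true,iommu=smmuv3 \
    -cpu cortex-a57 -smp 4 -m 2048 \
    -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
    -nographic -serial mon:stdio [..snip..]
```

With this setup, QEMU's SMMUv3 model advertises Stage-1 translation only, so the Xen driver is expected to print "no translation support!" and disable I/O virtualisation rather than crash.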


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 12:48:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 12:48:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20733.46741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1AQ-0002aD-Mf; Fri, 06 Nov 2020 12:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20733.46741; Fri, 06 Nov 2020 12:48:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1AQ-0002a6-Iw; Fri, 06 Nov 2020 12:48:34 +0000
Received: by outflank-mailman (input) for mailman id 20733;
 Fri, 06 Nov 2020 12:48:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kb1AP-0002a1-61
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 12:48:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abb86215-f62f-47e1-b118-499187f904a3;
 Fri, 06 Nov 2020 12:48:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 103F5ABDE;
 Fri,  6 Nov 2020 12:48:31 +0000 (UTC)
X-Inumbo-ID: abb86215-f62f-47e1-b118-499187f904a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604666911;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QjH8qlKKtvwtdr/8PQ5bf4BBJvKDVNwlvbztpTDPSrQ=;
	b=ZDYhx+cOt3tVxiRISqyJtKuCaWqIcf81w7EQPjreEiZxFmvufGYYA5ktXWzJB12eva8eDD
	vNSITNdNtVdPJNmpIXHUN7LUTpc+DDl8EQvnCpUVRXto9HvWBZKQu5ANwx+KvDzoqrgIBb
	zBOQYYbAwFrmrXZY6jlhE6G6dsNYHac=
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
 <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
 <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
 <73189992-EC5B-493D-BB23-6DFB2F11A580@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b0f9c2db-a58c-b1d3-9e1a-09b994f1b1d3@suse.com>
Date: Fri, 6 Nov 2020 13:48:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <73189992-EC5B-493D-BB23-6DFB2F11A580@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.11.2020 12:43, Rahul Singh wrote:
> Hello Jan,
> 
>> On 4 Nov 2020, at 3:49 pm, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 04.11.2020 16:43, Jan Beulich wrote:
>>> On 03.11.2020 16:59, Rahul Singh wrote:
>>>> --- a/xen/drivers/pci/Kconfig
>>>> +++ b/xen/drivers/pci/Kconfig
>>>> @@ -1,3 +1,12 @@
>>>>
>>>> config HAS_PCI
>>>> 	bool
>>>> +
>>>> +config PCI_ATS
>>>> +	bool "PCI ATS support"
>>>> +	default y
>>>> +	depends on X86 && HAS_PCI
>>>> +	---help---
>>>> +	 Enable PCI Address Translation Services.
>>>> +
>>>> +	 If unsure, say Y.
>>>
>>> Support for "---help---" having gone away in Linux, I think we'd
>>> better not add new instances. Also indentation of help content
>>> typically is by a tab and two spaces. With these two adjusted
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> Initially I wanted to merely reply indicating I'd be fine making
>> these changes while committing, but there are two more things
>> (and I withdraw my R-b): For one, isn't struct pci_dev's ats
>> field now unused when !PCI_ATS? If so, it should get an #ifdef
>> added.
> 
> Yes, I tried to #ifdef the ats field in struct pci_dev, but after doing that I found that I would have to modify the
> code related to the x86 IOTLB flush. As I have limited knowledge of how the IOTLB flush works on
> x86, I decided not to modify that part of the code.
> 
> I already compiled x86 with !PCI_ATS, and it has the same behaviour as the command-line option "ats=false", with the struct pci_dev ats field unused.
> 
>> And then, what exactly is it in ats.c that's x86-specific?
>> Shouldn't the whole file instead be moved one level up, and be
>> usable by Arm right away?
> 
> Yes, you are right that the ats.c file is not x86-specific, but it has not been tested on Arm, so I thought we would move ats.c to common code once the ATS code is tested/implemented for Arm.
> 
> disable_ats_device() is referenced in the common "passthrough/pci.c" file and defined in "x86/ats.c", therefore I decided to introduce the PCI_ATS option for x86,
> as I am not sure how the ATS functionality is going to be implemented on Arm.
> 
> There are three ways to fix the compilation error for ARM related to disable_ats_device() function.
> 
> 1. Introduce the PCI_ATS config option for x86, enabled by default, so that !PCI_ATS has the same behaviour as the command-line option "ats=false".

I'll be happy to see the config option retained, but that's
orthogonal to the goals here.

> 2. disable_ats_device() is called by iommu_dev_iotlb_flush_timeout(), which as per my understanding is x86-specific (not sure, please confirm).
> Move iommu_dev_iotlb_flush_timeout() to "passthrough/x86/iommu.c" now, and move the ats.c file to common code once the ATS code is tested on Arm.

While the function is currently used by VT-d code only, I again
don't see what's x86-specific about it. Hence ...

> 3. Move the x86/ats.c file one level up, which fixes the compilation error now; if Arm ends up implementing the ATS functionality differently from
> x86, we would have to move ats.c back to the x86 directory.

... I view this as the only "option" among the ones you name.

Jan
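For reference, applying the review comments in this message to the Kconfig hunk quoted above ("help" instead of the removed "---help---" marker, help body indented by a tab plus two spaces) would give a fragment along these lines. This is illustrative only; whether the option stays X86-only is exactly what the rest of the thread debates.

```kconfig
config HAS_PCI
	bool

config PCI_ATS
	bool "PCI ATS support"
	default y
	depends on X86 && HAS_PCI
	help
	  Enable PCI Address Translation Services.

	  If unsure, say Y.
```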


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 13:01:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 13:01:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20758.46765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1MW-0004P7-GT; Fri, 06 Nov 2020 13:01:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20758.46765; Fri, 06 Nov 2020 13:01:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1MW-0004P0-CW; Fri, 06 Nov 2020 13:01:04 +0000
Received: by outflank-mailman (input) for mailman id 20758;
 Fri, 06 Nov 2020 13:01:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BqUO=EM=epam.com=prvs=9579144100=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kb1MV-0004Ov-6q
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 13:01:03 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b40c188e-1e27-4de5-9eab-c9aca128b04a;
 Fri, 06 Nov 2020 13:01:02 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A6CsqAZ011420; Fri, 6 Nov 2020 13:00:56 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2059.outbound.protection.outlook.com [104.47.12.59])
 by mx0a-0039f301.pphosted.com with ESMTP id 34kf84h66w-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 06 Nov 2020 13:00:56 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR0302MB3218.eurprd03.prod.outlook.com (2603:10a6:208:9::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Fri, 6 Nov
 2020 13:00:50 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.021; Fri, 6 Nov 2020
 13:00:50 +0000
X-Inumbo-ID: b40c188e-1e27-4de5-9eab-c9aca128b04a
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dWm18a8+X5jtHm/NUIMsbCFlyyLVd+C++pID0u+9lgtGGZIa4+BgFsUsOGWFZfR9knIvIqWWyH0zu5De3HVY20s9Y9J6DYJ9uiTvJYbG6xqH1c+hCLEusVKWiy/VW8DOWf5lVg0tjGZ1a2L4areQI01cb2O2E9CKl1t+gU7LQ5D2hUpCy4AaxcWrtYSUL4vIBj0baG0cVuNBVHqlIyjfglK2Npo+VH/zkW1ypXApjQL7qP/tzuWoVcPX4ytDdIPIT11cNwlL5OWH25OFXyJyZAHN00nwze4sIkPWx9WiMeAez9aa+ZMSlCo7GXIlybmTZfvFiITD4kr/T44AYN4utQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cwDCZ+OlAF0THOMm7ZqhAlW/d1ywIZxkUK/1c1HgFtM=;
 b=gqMRpnEJeAmcJMaO4zM7sQyt/yr9OZsNdqRwnEHrTBLTJ0lyC3rcGHX/YPWyN8wDo1q5AzD2WWFwBCHd0lU2lM08egI+JOiVop7Jxdr+UgqxG+57KHyvKVJH8RbAPCCmyAwVub/uOXQWyroUkqUqBcIpKFwiwfSK8KJg+mqz3onVxOHxPBIdp+dF1mqkOZlSi6zoXCH8l5DYeIbY9mE0NGFuaN1eAdknqCFdllKDvRHSfSYhmWKZRSdJmNmg/svbFOMfa6rDg52eGlcMVQ4X/kLCg2UmzVqTf8SxBMOtUkEBC9mcQqbEH1BNh7eHQuUV8HXiv3+clTfAX69wOwCAjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cwDCZ+OlAF0THOMm7ZqhAlW/d1ywIZxkUK/1c1HgFtM=;
 b=K1AnOBKQu/0DZeha3rodeS7DgOVII/Q+cz1JbiAz9pt6w+dGQTR9LAK8pQiqGUOHZ67/U3p4VfNqi9MhjWZqmhhk5HBfi2Z1u6s2JjEpPVNz3quUlXp6B/VdvssaEyMxBI9/JrZgFtXp2gOb8Hu5kOGr7303tFj4Vli2mXYECjKjjN8u1bLKGMPVfyTgOxezrOWY7Rfb6PGt3w9xKDx/wpLj3uX/3+p9N5+DnPi7ss26Nqju5XStqnSgVY7eLiGK/1DmqojFTsldqT0K6FAGHosAlbF9ifjZAd3rkZZjGyZt1GG5fu1igGAqKywurFQ+3k1lFUPIM3Nih7DDZgPu0g==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>,
        Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: 
 AQHWrqmeCV4SGlhADU2GY14D47cn76mwZCkAgAP4JoCAAELzAIAABK0AgAZ074CAAAOBgA==
Date: Fri, 6 Nov 2020 13:00:50 +0000
Message-ID: <2f62f34b-f47d-3472-511f-a89ec1cd36c3@epam.com>
References: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
 <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
In-Reply-To: <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 858acb6b-f880-497d-d05a-08d88253fcc4
x-ms-traffictypediagnostic: AM0PR0302MB3218:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR0302MB3218D868E94242D8DE4E812EE7ED0@AM0PR0302MB3218.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 6s5czRuPtB9tqGwNSdSdM6PlMklc9q8aG+/o/hW5BlmGIIOdguyjDkyKGnrpKyHqjtt5b2Q4Qibwh5My42oFTvEBTUPgry9UbrfDv++K90fPhzLAlj3lTXg43lJw9Vxnj5x9eZKhQAdM3fO4R91S21J4O5shhuk+1E4iKWFTdumGLkMTtV5hW6v4wyYXlSUsd6CvUTGVEiy6Dt5ABMxXUW85U9wuSgT4zYj0lLR1Z45BLlSpVuTucE5tyKYUKdkfsykaXmGf7J24FAB7aqiHD6H1MIQ55ban3qBHxurCrNcV/91n5ujks4pYYxdarVZwEfk2++mAM28pfpRVf3x5zg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(376002)(346002)(39860400002)(396003)(53546011)(6506007)(2906002)(107886003)(26005)(31696002)(54906003)(186003)(71200400001)(86362001)(2616005)(316002)(31686004)(6916009)(6512007)(8936002)(6486002)(66476007)(91956017)(8676002)(5660300002)(76116006)(64756008)(66556008)(478600001)(66946007)(4326008)(66446008)(83380400001)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 0qtY1brwH3YTDYsOXw7ogq5cEptvSi6iVcMShB0b3MwjE2JOv0+bPijSdIKSxub8BVLrZpzv/CkmBzXgQAdQl0WbCKmfyRjOypoNGrXy8FbswkWtt9xH2fwDh54fAyAudj7MpdC+fSBzlOsEB/W+oMYBtcaBUKFKLqdDWZuCn/aZQJWg3Vx4eYf8DX8BFD2KT1MIwIS9gpumDfqeuhu3ozWlEXgoX9U9+eny5nkHb/ERlOZ3rTAJLbqfbDOog00LFSlnb5d/efbvmwtfQWysKDIIwOJ6j3TfDues7nenckyEgg0wNnybLOKvvMmfXmg/Z7+djtYgorf52pP0hCi2sNSUqIXPKQjUtmiwXi4cBTWaIf8GomRx8bror5C3QBj6RC6jtgeiibJzgYd0QohDPd/Z2OgfcMjmQ7voAYsAKMNzBkicwkT6pZLwtKZleGgdmwJ1UJifzh1uEWE/H/I7JBrW9zBgRbTo2Llwo4dJND7gnnAfBVQ5GTOzJxtSM2PvKblhp3sArlOwywjHdqbImN7TSMPn9oFzrrRS7UqhwsMyo9fqJhZxwNcD2axNcXYr4VAiUCH7rDYtNiK4bNNUFwB77EEDh/BvOFbrvsiQ1PbaizX4BVyIFFEeC2zgXIBf5QMG1EV98lAiee+UT42kPQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <16B9E70D59E6B944A7DDA3ECB74022D8@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 858acb6b-f880-497d-d05a-08d88253fcc4
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Nov 2020 13:00:50.7269
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: x59XSMMTlwtJYdoFgY/g9GH+vpZLLefpjXTId+VICgllf0YqskvTCMGvLHDy17Hwp3EFhOuG9E01yFSaAr4OXHg56XfcQz8fLgQlDtRFkFSgNdiiIH8j/I3QQrx5k6DU
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR0302MB3218
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-06_04:2020-11-05,2020-11-06 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 clxscore=1015
 priorityscore=1501 bulkscore=0 mlxscore=0 malwarescore=0 adultscore=0
 phishscore=0 mlxlogscore=999 lowpriorityscore=0 impostorscore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011060091

Hello, Rahul!

On 11/6/20 2:48 PM, Rahul Singh wrote:
> Hello Oleksandr,
>
>> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>
>> Hi,
>>
>> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>>> Hi,
>>>
>>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>>>
>>>> Hi, Julien!
>>>>
>>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>>> Hi Oleksandr,
>>>>>
>>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>> the Linux SMMUv3 driver.
>>>>>>>
>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>      that supports both Stage-1 and Stage-2 translations.
>>>>>> First of all thank you for the efforts!
>>>>>>
>>>>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>>>>>
>>>>>> that this combination will not work as of now:
>>>>>>
>>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>>>
>>>>>> [snip]
>>>>>>
>>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>>>
>>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>>> Just for clarification, you are trying to boot Xen on QEMU, right?
>>>> Exactly
>>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>>> So, it is even more work then
>>> Overall it would make more sense to spend some time adding proper support in QEMU than trying to modify the driver to support QEMU right now.
>>>
>>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>>>
>>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>>> I would recommend to get the SMMU supporting stage-2 page-tables.
>>>> You mean in QEMU?
>>> See before.
>>>
>>>>> Regardless that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>>> Yes, that would be nice
>>> Fully agree and we will look into that.
>>>
>>> Anything you could share so that we could quickly reproduce your setup would be more than great.
>> Nothing special,
>>
>> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
>>
>> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
>>
>> -nographic -serial mon:stdio [..snip..]
>>
>> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1
> I just checked and confirmed that QEMU is booting with the Xen SMMUv3 patch, and Xen is able to say that SMMU translation is not supported, as Xen supports Stage-2 translation and QEMU supports Stage-1 only.
>
>
> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
> (XEN) I/O virtualisation disabled
>
> The only difference I observed is that you have to add the option "-machine virt,iommu=smmuv3" when launching QEMU.
I do use that option
>
> Please let me know if it also works for you.
V2VsbCwgSSBzaG91bGQgaGF2ZSByZXBvcnRlZCB0aGF0IGVhcmxpZXIgdGhhdCBJIGRvIG5vdCB1
c2UgdGhlIHN0YWdpbmcgWGVuIGF0IHRoZSBtb21lbnQsDQoNCml0IGlzIDQuMTQuMC4gU28sIGNh
biB0aGlzIGJlIGEgcHJvYmxlbSB3aXRoIHRoYXQgWGVuIHZlcnNpb24/DQoNCkFueXdheXMsIGlm
IGl0IHdvcmtzIHdpdGggdGhlIHN0YWdpbmcgdGhlbiBldmVyeXRoaW5nIGxvb2tzIG9rDQoNClRo
YW5rIHlvdSwNCg0KT2xla3NhbmRyDQoNCj4gICANCj4+PiBSZWdhcmRzDQo+Pj4gQmVydHJhbmQN
Cj4+Pg0KPj4+Pj4gQ2hlZXJzLA0KPj4+Pj4NCj4+Pj4gVGhhbmsgeW91LA0KPj4+Pg0KPj4+PiBP
bGVrc2FuZHI=
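[For reference, the line-wrapped command quoted in the thread, combined with the "-machine virt,iommu=smmuv3" option also discussed there, collapses into a single invocation. This is only a sketch under assumptions: QEMU 4.2.x built from source, and no guest images shown (the thread snips them).]

```shell
# Sketch of the QEMU setup from the thread; the repeated -machine options
# are merged into one. Kernel/Xen image arguments were snipped in the
# original mail and are not reproduced here.
qemu/aarch64-softmmu/qemu-system-aarch64 \
    -machine virt,gic-version=2,virtualization=true,iommu=smmuv3 \
    -cpu cortex-a57 -smp 4 -m 2048 \
    -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
    -nographic -serial mon:stdio
```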


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 13:05:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 13:05:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20774.46776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1Qn-0004cq-4P; Fri, 06 Nov 2020 13:05:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20774.46776; Fri, 06 Nov 2020 13:05:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb1Qn-0004cj-1R; Fri, 06 Nov 2020 13:05:29 +0000
Received: by outflank-mailman (input) for mailman id 20774;
 Fri, 06 Nov 2020 13:05:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=b4TK=EM=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kb1Qm-0004ce-DH
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 13:05:28 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12d5c6d0-433b-42da-b27e-375e982e9011;
 Fri, 06 Nov 2020 13:05:27 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.3 DYNA|AUTH)
 with ESMTPSA id j03b7dwA6D5K1wx
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 6 Nov 2020 14:05:20 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1604667926;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=x1mPxd7+pCC+FxKPAKYcJaRSD0WYNhmvTrFlDWW4z0o=;
	b=DZyQCVXjH/4zUDCspEDAS8vnQiaGWKSdPQJbjvN/Vwxq7vS+6sEUWy8CI3STqSiIb8
	yNTe67ErzJasHop2kcwwugl4pTgI7w5ynZjcteaN9nwsDr1lB7CElLQxiFu0+QD7TokG
	TSzYepg+3+/9iE7/RstDmm2Up9QmCY//KJbGIGjZYaw9ZrXPLq1SQpzGAdPfxIbP16Aa
	9iZ/GMyrGcRYSgeAlKIZr2psISnpOJNvXzUunPBznSliNTa/R05qyxMl4vo8dhPi+dXs
	37Dx2aDNWcy+ZbQ1pUxMwdNejPi/+UD+HZ5BQHtjtYWoU1Y6xNfVd6RtsRnaR3fzGlxP
	c12A==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GhJjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] docs/xl: fix cpupool-cpu-remove
Date: Fri,  6 Nov 2020 14:05:17 +0100
Message-Id: <20201106130518.26875-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The cpu-pool argument must be specified.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index d0f50f0b4a..2f576a25e3 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1357,7 +1357,7 @@ All the specified CPUs that can be added to the cpupool will be added
 to it. If some CPU can't (e.g., because they're already part of another
 cpupool), an error is reported about each one of them.
 
-=item B<cpupool-cpu-remove> I<cpus|node:nodes>
+=item B<cpupool-cpu-remove> I<cpu-pool> I<cpus|node:nodes>
 
 Removes one or more CPUs or NUMA nodes from I<cpu-pool>. CPUs and NUMA
 nodes can be specified as single CPU/node IDs or as ranges, using the

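[With the corrected synopsis, the pool name comes first. A hypothetical invocation for illustration only; the pool name and IDs below are invented, not from the patch.]

```shell
# Remove CPUs 4-7 from the (hypothetical) pool "Pool-work":
xl cpupool-cpu-remove Pool-work 4-7
# Remove all CPUs of NUMA node 1 from the same pool:
xl cpupool-cpu-remove Pool-work node:1
```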

From xen-devel-bounces@lists.xenproject.org Fri Nov 06 13:50:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 13:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20798.46809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb27t-0008Li-QQ; Fri, 06 Nov 2020 13:50:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20798.46809; Fri, 06 Nov 2020 13:50:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb27t-0008Lb-Ma; Fri, 06 Nov 2020 13:50:01 +0000
Received: by outflank-mailman (input) for mailman id 20798;
 Fri, 06 Nov 2020 13:50:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l/Pr=EM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kb27s-0008Ge-08
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 13:50:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 236f252f-8772-4165-9ad0-6ad73faa5ce8;
 Fri, 06 Nov 2020 13:49:57 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kb27n-0003r5-Pe; Fri, 06 Nov 2020 13:49:55 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kb27n-0001tn-Gf; Fri, 06 Nov 2020 13:49:55 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=gYQdDbfk1aRo4aL0nUewX53ao+8afLDZRoOioQFGaA0=; b=dGEDZlOucJiL/r7Z1DxXeJqdHf
	UuhrhX2arUyFOtx5C2fdM9eXJ7MI7VrDvw4SU70ydO70b1vquMMreGA/1YS052WPJ+tMmF64lIOL7
	8vMhRkHD4Gg9XAMp6Aj+3ymdqtw8qwoHqpHCvGvsYkUJ193JFE8ah/u0krS5HGHU3lOY=;
Subject: Re: [PATCH v2 2/4] xen/pci: Introduce new CONFIG_PCI_ATS flag for PCI
 ATS functionality.
To: Jan Beulich <jbeulich@suse.com>, Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1604417224.git.rahul.singh@arm.com>
 <27814e614618c413ac61a9f7a48d795c557bfe5c.1604417224.git.rahul.singh@arm.com>
 <c9874396-44d2-b969-104f-eb40b4e107c9@suse.com>
 <4598bf81-5802-93b8-e160-05c139a6d4cf@suse.com>
 <73189992-EC5B-493D-BB23-6DFB2F11A580@arm.com>
 <b0f9c2db-a58c-b1d3-9e1a-09b994f1b1d3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <297e3080-316d-72fa-dd4e-7f858c6b9db8@xen.org>
Date: Fri, 6 Nov 2020 13:49:53 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <b0f9c2db-a58c-b1d3-9e1a-09b994f1b1d3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 06/11/2020 12:48, Jan Beulich wrote:
> On 06.11.2020 12:43, Rahul Singh wrote:
>> Hello Jan,
>>
>>> On 4 Nov 2020, at 3:49 pm, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 04.11.2020 16:43, Jan Beulich wrote:
>>>> On 03.11.2020 16:59, Rahul Singh wrote:
>>>>> --- a/xen/drivers/pci/Kconfig
>>>>> +++ b/xen/drivers/pci/Kconfig
>>>>> @@ -1,3 +1,12 @@
>>>>>
>>>>> config HAS_PCI
>>>>> 	bool
>>>>> +
>>>>> +config PCI_ATS
>>>>> +	bool "PCI ATS support"
>>>>> +	default y
>>>>> +	depends on X86 && HAS_PCI
>>>>> +	---help---
>>>>> +	 Enable PCI Address Translation Services.
>>>>> +
>>>>> +	 If unsure, say Y.
>>>>
>>>> Support for "---help---" having gone away in Linux, I think we'd
>>>> better not add new instances. Also indentation of help content
>>>> typically is by a tab and two spaces. With these two adjusted
>>>>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Initially I wanted to merely reply indicating I'd be fine making
>>> these changes while committing, but there are two more things
>>> (and I withdraw my R-b): For one, isn't struct pci_dev's ats
>>> field now unused when !PCI_ATS? If so, it should get an #ifdef
>>> added.
>>
>> Yes, I tried to #ifdef the ats field in struct pci_dev, but after doing that I found that I would have to modify the
>> code related to the x86 IOTLB flush. As I have limited knowledge about how the IOTLB flush works for
>> x86, I decided not to modify that part of the code.
>>
>> I already compiled x86 with !PCI_ATS; it has the same behaviour as the command line option "ats=false", with the struct pci_dev ats field left unused.
>>
>>> And then, what exactly is it in ats.c that's x86-specific?
>>> Shouldn't the whole file instead be moved one level up, and be
>>> usable by Arm right away?
>>
>> Yes, you are right that the ats.c file is not x86 specific, but it is not tested for ARM, so I thought we would move ats.c to common code once the ATS code is tested/implemented for ARM.
>>
>> disable_ats_device() is referenced in the common "passthrough/pci.c" file and defined in "x86/ats.c", therefore I decided to introduce the PCI_ATS option for x86,
>> as I am not sure how the ATS functionality is going to be implemented on ARM.
>>
>> There are three ways to fix the compilation error for ARM related to disable_ats_device() function.
>>
>> 1. Introduce the PCI_ATS config option for x86, enabled by default, with !PCI_ATS having the same behaviour as the command line option "ats=false".
> 
> I'll be happy to see the config option retained, but that's
> orthogonal to the goals here.
> 
>> 2. disable_ats_device() is called by iommu_dev_iotlb_flush_timeout(), which as per my understanding is x86 specific (not sure, please confirm).
>> Move iommu_dev_iotlb_flush_timeout() to "passthrough/x86/iommu.c" now, and move the ats.c file to common code once the ATS code is tested for ARM.
> 
> While the function is currently used by VT-d code only, I again
> don't see what's x86-specific about it. Hence ...
The ATS code looks arch-agnostic. So I agree with this statement.

> 
>> 3. Move the x86/ats.c file one level up, fixing the compilation error now; if ARM platforms implement the ATS functionality differently from
>> x86, we would have to move ats.c back to the x86 directory.
> 
> ... I view this as the only "option" among the ones you name.

+1.
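For reference, Jan's two style points applied to the hunk under review would give something like the sketch below (an illustration, not the committed version): the "help" keyword replaces "---help---", which has been removed in Linux, and the help text is indented by a tab plus two spaces.

```kconfig
config PCI_ATS
	bool "PCI ATS support"
	default y
	depends on X86 && HAS_PCI
	help
	  Enable PCI Address Translation Services.

	  If unsure, say Y.
```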

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 13:59:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 13:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20802.46821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2Gj-0000mc-Nu; Fri, 06 Nov 2020 13:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20802.46821; Fri, 06 Nov 2020 13:59:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2Gj-0000mV-Jh; Fri, 06 Nov 2020 13:59:09 +0000
Received: by outflank-mailman (input) for mailman id 20802;
 Fri, 06 Nov 2020 13:59:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kb2Gi-0000mQ-6n
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 13:59:08 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0a::62b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28e8122e-6c12-4750-8ad9-2aceff2a4296;
 Fri, 06 Nov 2020 13:59:05 +0000 (UTC)
Received: from AM5PR0602CA0012.eurprd06.prod.outlook.com
 (2603:10a6:203:a3::22) by AM6PR08MB5126.eurprd08.prod.outlook.com
 (2603:10a6:20b:ef::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 6 Nov
 2020 13:59:00 +0000
Received: from VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:a3:cafe::36) by AM5PR0602CA0012.outlook.office365.com
 (2603:10a6:203:a3::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 13:59:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT063.mail.protection.outlook.com (10.152.18.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 13:58:59 +0000
Received: ("Tessian outbound 39167997cde8:v71");
 Fri, 06 Nov 2020 13:58:59 +0000
Received: from 10479a49d559.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FE4FC947-2DA3-4447-A3DC-1A213827F0C2.1; 
 Fri, 06 Nov 2020 13:58:53 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 10479a49d559.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 13:58:53 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB8PR08MB5482.eurprd08.prod.outlook.com (2603:10a6:10:116::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 6 Nov
 2020 13:58:51 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 13:58:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FWwooVcyJrpyGQlqMMlxBu7rW3tCH0uVg/JsavKGz8g=;
 b=TXLYYFxODjucBcy+AGd1PTqGwC0MuT+WB3G34+YoSY7FrD2ICWcnVD/5hwm0qBTOC+9wu23rkscogtf1s+2WtBEp9BCMjwrM/66YntUxjBxp2GMbbEa+spSODk8pTRvyWllck/0D0JmyoaFWMRf8VQGBu7w/elbe28wY5dVm4AU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 01f93f8bfebe764a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RbCKckoVUcvSHmvmtOj8aDc4+Y+X4QqbMvzI0qVwFjRrYm4ICJCaMMt6exSdaLR/60zv91HKcZwkfv1MSaYOGFEc2FaxBV0UlXlFxUmH7zsVNmfp8ZzwhNarppjP81SFJ/hg04+SIY05fzsH6V2o8p3dLYQ8sEQ42sI67ZlRITzBjQ3eGl9iic+IpDWlx4PsXp6BQGTsXlzzoHhfSiYY7ZCv9jB9/U7wiraxZWob0jSfQ31OS9Mx5dMgfLJsmmZFnplGnk+CqNMHyqR5ln6nBRBCddzhDoRCwthEhCdviuekIXVf8dI9ejSrs+2Mv0JS8SvOYKJJ0rLetlYhaZ/wGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FWwooVcyJrpyGQlqMMlxBu7rW3tCH0uVg/JsavKGz8g=;
 b=LLA8i2QHPqdF2PZDeCq9/9zeUj/ee8l96ZwO7nzzARjTqD54cXfcVFjuhr1CHRKel7HdZPZBhCgKcIafd4il1nE3ftGPHzopbTHTF7HeV8s7XQ11Uz9kHKhvOLzjUjfFEjomRPFOWfAod/0DyBc/N7Ly938dbhk0kqZoR5CuWRfFOPc7+Av9X4EAdEfP2iH3+A8Y3yjg84dglYpgwvPunETYqq5j4qPQW/0O0PqiU9nBBPWkwtLXQwpNRgD8b5Q3nlF8ogTqYkx9bki/Ka3w+24ofFMlGzYe1rxOdycj5QTmq4LEo5bFbdoAeTZhvS9yrdjBGf25BkmcAPUqeeG/ww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr Andrushchenko
	<andr2000@gmail.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamwEIWAgABuPwCAA/gngIAAQvMAgAAErACABnTvAIAAA4IAgAAQNoA=
Date: Fri, 6 Nov 2020 13:58:51 +0000
Message-ID: <20FF6A26-41CF-4888-901A-0FF0ABCC6E64@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
 <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
 <2f62f34b-f47d-3472-511f-a89ec1cd36c3@epam.com>
In-Reply-To: <2f62f34b-f47d-3472-511f-a89ec1cd36c3@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e965285e-f388-4969-76dd-08d8825c1c88
x-ms-traffictypediagnostic: DB8PR08MB5482:|AM6PR08MB5126:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB51263AF486429821FE70D01BFCED0@AM6PR08MB5126.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 t18XbNF4K6jGloN+HDh1A5HE01MuNLNVgeeR/00XmSdQq7UBkvq1EaQAZDN/+Jixc5g+x7PLvVATeJte9TxgnKQokmpDD6M4lrOfp0FYizqyzDbypYqTDWDmIgPqdtMZzIqeFGqDHjBCplONtNpCtaFwMusfJ8M0236WdarKWqUMgU8USVzZb9SDkjfEfWOlypNDYnffNSnfd5F3la5SIegR+gtBG2arWhvx4qXkp2HwlA4mMnODcwzmDTw2VLIdoPmM/5e5XXnoKW7w8vgpnxcf+OFzqU3rFMfrzCN4DD4xue6NeVa9mmbO8DOeF6dJOuH0JMzVP1wxHdgF1ZvLCg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39850400004)(346002)(376002)(136003)(396003)(8936002)(33656002)(71200400001)(5660300002)(2906002)(6486002)(8676002)(6506007)(36756003)(478600001)(64756008)(91956017)(316002)(26005)(54906003)(86362001)(6916009)(53546011)(66446008)(66946007)(186003)(55236004)(83380400001)(66556008)(76116006)(6512007)(2616005)(66476007)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 GsjGizzlL4+gl4PqxOipH4Oq1zLilFeVSV/yoMibzlMTAbshpied1nW7VIoIHICJnQ8WH5+pbbtw0jYAc8w1TPCjrPLcpuuBTRENneeVBIWmt4lHaMYkr/To1Nr2dvWaXemAqK/mETL11cEmweBRCN3g72AunyqwhnDfp1gYBStdgwf9gh6QuUJWjWTMjbwvYpIugw64LdM6Sd2nPopkHVy1BKoc2pWQVoLWBEzchElISk+1C3gOdE+ElPu9ERtww4UlYViAWyB0IHPxnPVhCyHJQ39lcBiudiu2aTAdAvOZO78biYGrYJtsYINYXipZQ25oPwVNEbB36lvYznUFILBGOzv8huuowY8F50+r1wPl5GwCrDpbetS49MTdbtZ61rg4+IIXupthXOSRx3I3E3sL9Ks87AvwlPJLZkK8BNFK4nK2N4psV9SxS5VTrE4v0OKPoL/5rZkwKQDhSrtHIDbrVgKDPsrEb74yBomXpk45a10ip0bayY78zKAs/sOWJ/pNAqlTdJr5dUB2btcAoycn6cGCFqwQOKUUARCK97WNzBvP9QT8hrlIalXyx1JZY6l9ij4CHjDIDYLaL99go2TZUWVbXNdOFmHaTBv8tpk5ot9SwwVTUcrMAHQlJQEx7i0MC4krv3tZounT9RbpKQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <375B1A2ABC60424AA7589FFFD5EF1928@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5482
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	74b8df9c-29d7-4173-b692-08d8825c1788
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xhDIuXj0Bm+E+kvgPs7n2S6xQsA4LlO2MHp8DAtRia9qLUdmFMMW2ez/OlQ4yKp/vDQTbepA9ZGnXh1Oo2Lu3HwaXxhGpCo3RxL+iJ7L+QlTZlmVrj1N6YKThFUVbpkttnn1NhbXH09VAWmQPnz4udRJcH2GtR+SjmnukRzPN5jiIDo9uOJUxi28/qtnuWHBQNYmZbtkMhypkd+g5c26AHnZYuMv4e0+/acbkR0xyN92/ewbOedNaiLBAJSXszJqIWzTI5blB+ORTciQVVTWzIaZn6YT0q8/AZ/xndhRE4hv5QD92Q2gTfB8FxSt0vPwLNGYF9ZeDeuXfDiVFHBLgCM7Ameh5xraylJBnvVhNtEMRwZlZ1EGV4F+0xXzMSYNgYc0WQp9EvxtFe1aiuh3Ug==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39850400004)(346002)(376002)(396003)(46966005)(5660300002)(356005)(2616005)(316002)(53546011)(478600001)(54906003)(6862004)(4326008)(81166007)(36906005)(107886003)(26005)(6486002)(47076004)(2906002)(8676002)(186003)(86362001)(6512007)(36756003)(33656002)(82310400003)(8936002)(70206006)(70586007)(83380400001)(82740400003)(6506007)(55236004)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 13:58:59.9153
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e965285e-f388-4969-76dd-08d8825c1c88
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5126

Hello Oleksandr,

> On 6 Nov 2020, at 1:00 pm, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> Hello, Rahul!
> 
> On 11/6/20 2:48 PM, Rahul Singh wrote:
>> Hello Oleksandr,
>> 
>>> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>> 
>>> Hi,
>>> 
>>> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>>>> Hi,
>>>> 
>>>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>>>> 
>>>>> Hi, Julien!
>>>>> 
>>>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>>>> Hi Oleksandr,
>>>>>> 
>>>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>> 
>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>      that supports both Stage-1 and Stage-2 translations.
>>>>>>> First of all thank you for the efforts!
>>>>>>> 
>>>>>>> I tried the patch with QEMU and would like to know if my understanding correct
>>>>>>> 
>>>>>>> that this combination will not work as of now:
>>>>>>> 
>>>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>>>> 
>>>>>>> [snip]
>>>>>>> 
>>>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>>>> 
>>>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>>>> Just for clarication, you are trying to boot Xen on QEMU, right?
>>>>> Exactly
>>>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>>>> So, it is even more work then
>>>> Overall it would make more sense to spend some time adding proper support in Qemu then trying to modify the driver to support Qemu right now.
>>>> 
>>>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>>>> 
>>>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>>>> I would recommend to get the SMMU supporting supporting stage-2 page-tables.
>>>>> You mean in QEMU?
>>>> See before.
>>>> 
>>>>>> Regardless that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>>>> Yes, that would be nice
>>>> Fully agree and we will look into that.
>>>> 
>>>> Anything you could share so that we could quickly reproduce your setup would be more then great.
>>> Nothing special,
>>> 
>>> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
>>> 
>>> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
>>> 
>>> -nographic -serial mon:stdio [..snip..]
>>> 
>>> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1
>> I just checked and confirmed that QEMU is booting with the Xen SMMUv3 patch and Xen is able to say SMMU translation is not supported, as Xen supports Stage-2 translation and QEMU supports Stage-1 only.
>> 
>> 
>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
>> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
>> (XEN) I/O virtualisation disabled
>> 
>> Only difference I observed is that you have to add the option "-machine virt,iommu=smmuv3" when launching QEMU.
> I do use the option

I used the "-machine virt,iommu=smmuv3" option both while creating the virt-dtb and while launching QEMU.
I also observed the same error you observed when I was not using the "-machine virt,iommu=smmuv3" option when launching QEMU, so I thought this might be the case for you as well; but since you did use the option, it might be some other issue.

>> 
>> Please let me know if it also works for you.
> 
> Well, I should have reported that earlier that I do not use the staging Xen at the moment,
> 
> it is 4.14.0. So, can this be a problem with that Xen version?

I don't think this is a problem with the Xen version.
> 
> Anyways, if it works with the staging then everything looks ok
> 
> Thank you,
> 
> Oleksandr
> 
>> 
>>>> Regards
>>>> Bertrand
>>>> 
>>>>>> Cheers,
>>>>>> 
>>>>> Thank you,
>>>>> 
>>>>> Oleksandr

Regards,
Rahul
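For reference, the thread's conclusion is that the `iommu=smmuv3` machine option must be present both when generating the virt dtb and when launching QEMU. The sketch below just assembles and prints the full invocation discussed above, with the option folded in; the QEMU binary path is the hypothetical build-tree path from the thread, and guest images are not included, so the command is printed rather than executed.

```shell
# Hypothetical build-tree path; adjust to your QEMU installation.
QEMU=qemu/aarch64-softmmu/qemu-system-aarch64

# The iommu=smmuv3 option is the piece that must not be omitted.
OPTS="-machine type=virt \
 -machine virt,virtualization=true,gic-version=2,iommu=smmuv3 \
 -cpu cortex-a57 -smp 4 -m 2048 \
 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
 -nographic -serial mon:stdio"

# Print the full command line instead of running it.
echo "$QEMU $OPTS"
```

With QEMU 4.2.1 (as used in the thread), omitting `iommu=smmuv3` means no SMMUv3 node is emitted in the generated dtb, which reproduces the Data Abort reported above.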


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:05:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:05:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20820.46833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2MX-0001lR-Cg; Fri, 06 Nov 2020 14:05:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20820.46833; Fri, 06 Nov 2020 14:05:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2MX-0001lK-9L; Fri, 06 Nov 2020 14:05:09 +0000
Received: by outflank-mailman (input) for mailman id 20820;
 Fri, 06 Nov 2020 14:05:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5w1g=EM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kb2MW-0001lF-5M
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:05:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58534034-284d-41d8-8933-883d7a961d01;
 Fri, 06 Nov 2020 14:05:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEA6BABDE;
 Fri,  6 Nov 2020 14:05:05 +0000 (UTC)
X-Inumbo-ID: 58534034-284d-41d8-8933-883d7a961d01
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604671505;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=q3SxtmeG7Rpmzb0sczZsSNaIt8qtRAlvGjC8n51vPLY=;
	b=j56sZVbVdjmFskmMbHF3wfA8TFXwlrMXPd98g2zH7hucnIp/2NtDBKLs+0jCgBktK6PEae
	9m72Th+tr0eE2qxZdTl+ln5q9NvIFP7PzuenzQOw77Ui17aoGu7PeI3czhIFh63ceA8mlY
	R3UzfBZRFC+Q5LcvytOZuyuilNSzzLA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/libs/light: correct bitmap operations
Date: Fri,  6 Nov 2020 15:05:04 +0100
Message-Id: <20201106140504.25488-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Libxl bitmap operations for single bits (test, set, reset) take the bit
number as a signed integer without checking that the value is
non-negative.

Correct that by adding the appropriate checks.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/light/libxl_utils.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index b039143b8a..4699c4a0a3 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -688,21 +688,21 @@ int libxl_bitmap_is_empty(const libxl_bitmap *bitmap)
 
 int libxl_bitmap_test(const libxl_bitmap *bitmap, int bit)
 {
-    if (bit >= bitmap->size * 8)
+    if (bit >= bitmap->size * 8 || bit < 0)
         return 0;
     return (bitmap->map[bit / 8] & (1 << (bit & 7))) ? 1 : 0;
 }
 
 void libxl_bitmap_set(libxl_bitmap *bitmap, int bit)
 {
-    if (bit >= bitmap->size * 8)
+    if (bit >= bitmap->size * 8 || bit < 0)
         return;
     bitmap->map[bit / 8] |= 1 << (bit & 7);
 }
 
 void libxl_bitmap_reset(libxl_bitmap *bitmap, int bit)
 {
-    if (bit >= bitmap->size * 8)
+    if (bit >= bitmap->size * 8 || bit < 0)
         return;
     bitmap->map[bit / 8] &= ~(1 << (bit & 7));
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:19:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:19:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20828.46848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2ak-0002p2-Px; Fri, 06 Nov 2020 14:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20828.46848; Fri, 06 Nov 2020 14:19:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2ak-0002ov-Mw; Fri, 06 Nov 2020 14:19:50 +0000
Received: by outflank-mailman (input) for mailman id 20828;
 Fri, 06 Nov 2020 14:19:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb2aj-0002nw-HC
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:19:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4486ed19-37aa-4242-9b88-8c3e9d309995;
 Fri, 06 Nov 2020 14:19:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb2ac-0004XE-1C; Fri, 06 Nov 2020 14:19:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb2ab-0001jy-Kl; Fri, 06 Nov 2020 14:19:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb2ab-0000hq-K2; Fri, 06 Nov 2020 14:19:41 +0000
X-Inumbo-ID: 4486ed19-37aa-4242-9b88-8c3e9d309995
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GebGRC/w9z5FfH3DbOmKIyP4hZg9Vu4dtGpb/ZW6FYc=; b=guN42ePLMfchpYTzODSTFOlqAu
	6o5sxxYAQVmFg4GmFJZB7SV8lxXkLr33IcdZIQenYucvm1t5iVZxFaMiveqvwT1r5mzB+iktTj/Ff
	whsfNQ+AEZhM9MNZg4UBHIbVr5KAPIVOW4bSMIt2KYL44hGthaO1EIw3YF4wEw8NWOko=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156443-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156443: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 14:19:41 +0000

flight 156443 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156443/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156401
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156401
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156401
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156401
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156401
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156401
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156401
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156401
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156401
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156401
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156401
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:23:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20832.46859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2ds-0003f2-6X; Fri, 06 Nov 2020 14:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20832.46859; Fri, 06 Nov 2020 14:23:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2ds-0003ev-3S; Fri, 06 Nov 2020 14:23:04 +0000
Received: by outflank-mailman (input) for mailman id 20832;
 Fri, 06 Nov 2020 14:23:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BqUO=EM=epam.com=prvs=9579144100=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kb2dr-0003eq-0V
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:23:03 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03e3aea9-09c6-4fab-ad29-38d5ee8443a3;
 Fri, 06 Nov 2020 14:23:01 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0A6EMKjg027857; Fri, 6 Nov 2020 14:22:56 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58])
 by mx0a-0039f301.pphosted.com with ESMTP id 34kc4uspt2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 06 Nov 2020 14:22:56 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4019.eurprd03.prod.outlook.com (2603:10a6:208:7b::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 14:22:52 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.021; Fri, 6 Nov 2020
 14:22:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BqUO=EM=epam.com=prvs=9579144100=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kb2dr-0003eq-0V
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:23:03 +0000
X-Inumbo-ID: 03e3aea9-09c6-4fab-ad29-38d5ee8443a3
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 03e3aea9-09c6-4fab-ad29-38d5ee8443a3;
	Fri, 06 Nov 2020 14:23:01 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
	by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0A6EMKjg027857;
	Fri, 6 Nov 2020 14:22:56 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58])
	by mx0a-0039f301.pphosted.com with ESMTP id 34kc4uspt2-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 06 Nov 2020 14:22:56 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DZ6dKe1VpuTJQokPTCDut2boh/ph5u/QXKFVxo6TPraCK95Csdca5AztNb4BdFxsMssV+4uk9nPey4OP93CE50CPWGZ7nkjv29VxpB72r2zt6wNG3m8yypkjwocuzwmfyDkfV4tNd9R4961+oNVk87pIjc91oebrfpXzwQYoerA3T6mS15q//Us2DjwrJo4Ob86e5etiDRu7vktJPuh2myNx2MKBuXnzftOyJKa448KJjwYM+1MM/UPaCmNBuQYxNahseUBNSYNO7FwBxqcV1Dea1C8RfkzYph0IXqSwwCsit5YICXFIVB5ToYgTKBkIAlkIFhvYLSFZ6T59KpE/gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/sOSLbgi+cq7B8/Xa791BEnTwhGiuy34F6AyaYsVc7E=;
 b=jwoLPbEbTlu51BamSj18a1TnUIl/saamN8H87EnV+K+tQeyhFbh+HgXj73AV4t1AlZsJMc0LHPYUbq5M4kTvUs6tyu8OeSFM+kV3NF2mciae1JUOgf/rh/O6MnOi7n7qlKxYHySp8lDQMAi8ZFBJUCabtuMs5AOPoExo+xJ4iDdMRuL7iaw/jVQsgDjj1ojrfYc6GYe9rfC+UndtDqVVrxCaArWAiSGh0oT7j+tbW4vtkGNxj67UaoyhvxITPfSlrMbrFxUoskQH23v3Xq1/wmH6jD+RSpf8dwG/0ZjXWdqRK3bbGzQEVxV1638vjWEhbjdme1+Q7tel1S3My10Mog==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/sOSLbgi+cq7B8/Xa791BEnTwhGiuy34F6AyaYsVc7E=;
 b=Nf7LBG5k49OzVfAGa6XGaRCydA/btLCJyl4YEd9v89YYYUpCc0U2bP7+eWpaa5qartwoslEWPppvf2n9kN6+d72DU6F8/zciqldDcLxnrjJTGAvIRYMLSZtSCZ548Mvl+mITjhG2N0Is1nJ31EC7tP+65Ye4Y+aHlcMT4kGgshwANvom8wd2RTGCoWEQRs4dCGzjfSHwCJnJe749dMPYD/q1DhMJkCH1TkMjr0f5ro+YR7Tzvz768xvm6FTcf1Cwl6tfOPvpzT1O7k+KnY9nJ/31smUt2UbYDfYISD12BBI3MrPsdug9s5VVuHScpVgGzE/yQyoupg2aS/0JCFpjdA==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4019.eurprd03.prod.outlook.com (2603:10a6:208:7b::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 14:22:52 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.021; Fri, 6 Nov 2020
 14:22:52 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>,
        Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: 
 AQHWrqmeCV4SGlhADU2GY14D47cn76mwZCkAgAP4JoCAAELzAIAABK0AgAZ074CAAAOBgIAAEDeAgAAGtIA=
Date: Fri, 6 Nov 2020 14:22:52 +0000
Message-ID: <d2eb2db3-7038-3850-310b-4676102e0a55@epam.com>
References: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
 <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
 <2f62f34b-f47d-3472-511f-a89ec1cd36c3@epam.com>
 <20FF6A26-41CF-4888-901A-0FF0ABCC6E64@arm.com>
In-Reply-To: <20FF6A26-41CF-4888-901A-0FF0ABCC6E64@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: a3bce93c-04db-48ca-83ef-08d8825f7240
x-ms-traffictypediagnostic: AM0PR03MB4019:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB401912A80304BDA9CE0AD6E8E7ED0@AM0PR03MB4019.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 qsd25G5nLkFkU1ATfscr1tZE/Jsdpq2oTq5suNvZkjX3Rt9U6Pvm06DxuIO+Pw9nsI0pB+hgaSAqmdYBDayqz/BaM0Hkd9e2E5E9biLWcmx+1Jm8e02tACdtYWNuc/n46pbfEkckBJxGCBi4OVtGfMkVHOLyu1YBkqmzMUVhbTxpAF0/QSrdXYVUp9B/RnTLIwybIWwEmF/sEJk5vYoUop235k2QZbcUojOniFmlFBGYa+3AH2rCw2alYzP9wiKyMnGVTwlbApJ3kU+CKiq9b/e93znigeJLqvWmjHIQQbjnQkU4XDWPpr+vzvknSurPjG70pZyd5mz/dpQGNCDKNg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(376002)(136003)(396003)(346002)(8676002)(4326008)(8936002)(107886003)(54906003)(2906002)(6512007)(2616005)(76116006)(66446008)(36756003)(66946007)(86362001)(5660300002)(66476007)(31686004)(71200400001)(91956017)(26005)(186003)(66556008)(6916009)(6486002)(53546011)(6506007)(64756008)(31696002)(83380400001)(316002)(478600001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 sqDz/k8VAArfIUk+n65EgNw8UT6mdfKaJ2Whuv4d3UsfhKKIcWg4hV+qnypiDRDwUoRECc7e6uAVLa//mbW9WMT8IkwFovrkX4ENPwkxTKiQHO2yLCXScuMkhhRpNP5mP+67R5dLB+U1ShF63RdnIQFr/mt7yt09OMltvn04+J14a2F8e6ZLFTeAGOUgzszMwxU0h8R0tEDk77LW+HcE223eb3kPFfdD0krPrWlE3HXijCIAfLAfkA4Gmwe8ZhjfYu0F+ShsTyAYbmsH+gDvzclpooDaiN4M+Yzn1wBdbJ8/9KSoGxVbogN5XLjEWbtYwc0sU8NL/zwg7pUi46R8L3q+nE9BTwqD0pPZVTMGkg7C+nPwonsf8nf9/ICOqLZbcadWNPNV9g0MFBrXGaN/wLfluU3A5Z38nFRTVTU5m5wsSHwf+MZ+uUBea34J+0caJQ7+td2MXTvYQi5tqAasZfpm89rTLmb2QUibJoodSKYZrkeA86QGVXjrdJnZyyzSN5ViLHS8oOWEh0/OhRg39DvAalQC+iWmUz6zkkYixvmg6YfxEIBTm1zhzOVtORW61T1MyDgcX2ONjVpjUq0tU2+Xiq99sHc2KG0vQUuJ+q0rvato7TMvZXfiuauvx7/oV/JoR4JOBnShsBiDAyMl3g==
Content-Type: text/plain; charset="utf-8"
Content-ID: <6B88B0A06719AE4FA7BE4EDAD4069E09@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a3bce93c-04db-48ca-83ef-08d8825f7240
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Nov 2020 14:22:52.2015
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 5EqoGLzuTgvKiucqkdoOhRqglAENGZbQ5MclIQAkP/yt5yoPpL6SC8CGDex4q2NMXPbgyf5oCBCrzIaAU+Bz7rDDloTHwURFGKWK8m228WP+FoG97piTPWRm06OJ2WRB
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4019
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-06_06:2020-11-05,2020-11-06 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0
 priorityscore=1501 malwarescore=0 spamscore=0 lowpriorityscore=0
 bulkscore=0 mlxlogscore=999 adultscore=0 suspectscore=0 impostorscore=0
 mlxscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011060102

Hi, Rahul!

On 11/6/20 3:58 PM, Rahul Singh wrote:
> Hello Oleksandr,
>
>> On 6 Nov 2020, at 1:00 pm, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>
>> Hello, Rahul!
>>
>> On 11/6/20 2:48 PM, Rahul Singh wrote:
>>> Hello Oleksandr,
>>>
>>>> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>>>>> Hi,
>>>>>
>>>>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>>>>>
>>>>>> Hi, Julien!
>>>>>>
>>>>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>>>>> Hi Oleksandr,
>>>>>>>
>>>>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>>
>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>       that supports both Stage-1 and Stage-2 translations.
>>>>>>>> First of all thank you for the efforts!
>>>>>>>>
>>>>>>>> I tried the patch with QEMU and would like to know if my understanding correct
>>>>>>>>
>>>>>>>> that this combination will not work as of now:
>>>>>>>>
>>>>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>>>>>
>>>>>>>> [snip]
>>>>>>>>
>>>>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>>>>>
>>>>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>>>>> Just for clarication, you are trying to boot Xen on QEMU, right?
>>>>>> Exactly
>>>>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>>>>> So, it is even more work then
>>>>> Overall it would make more sense to spend some time adding proper support in Qemu then trying to modify the driver to support Qemu right now.
>>>>>
>>>>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>>>>>
>>>>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>>>>> I would recommend to get the SMMU supporting supporting stage-2 page-tables.
>>>>>> You mean in QEMU?
>>>>> See before.
>>>>>
>>>>>>> Regardless that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>>>>> Yes, that would be nice
>>>>> Fully agree and we will look into that.
>>>>>
>>>>> Anything you could share so that we could quickly reproduce your setup would be more then great.
>>>> Nothing special,
>>>>
>>>> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
>>>>
>>>> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
>>>>
>>>> -nographic -serial mon:stdio [..snip..]
>>>>
>>>> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1
>>> I just checked and confirmed that QEMU is booting with XEN SMMUv3 patch and XEN is able to say SMMU translation is not supported. As XEN supports Stage-2 translation and QEMU supports Stage-1 only.
>>>
>>>
>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
>>> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
>>> (XEN) I/O virtualisation disabled
>>>
>>> Only difference I observed is that you have to add option "-machine virt,iommu=smmuv3 “ when launching the QEMU.
>> I do use the option
> I used "-machine virt,iommu=smmuv3 “  option while creating the virt-dtb and while launching the QEMU.
> I also observed the same error what you observed if I am not using the "-machine virt,iommu=smmuv3 “ options when launching the QEMU so I thought this might be case for you also but anyways you have use the options it might be other issue.

Hm, probably that was on my side as now I can see:

(XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
(XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
(XEN) SMMUv3: /smmuv3@9050000: no translation support!
(XEN) I/O virtualisation disabled
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Couldn't configure correctly all the IOMMUs.
(XEN) ****************************************
(XEN)
(XEN) Manual reset required ('noreboot' specified)

So, sorry for the noise, I might have misconfigured something it seems

When you say "Xen is booting" do you mean you see the same panic?

Thank you,

Oleksandr

>
>>> Please let me know if it also works for you.
>> Well, I should have reported that earlier that I do not use the staging Xen at the moment,
>>
>> it is 4.14.0. So, can this be a problem with that Xen version?
> I don’t think so this is the problem with the XEN version.
>> Anyways, if it works with the staging then everything looks ok
>>
>> Thank you,
>>
>> Oleksandr
>>
>>>>> Regards
>>>>> Bertrand
>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Oleksandr
> Regards,
> Rahul


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:35:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:35:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20850.46875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2pd-0004h9-Ha; Fri, 06 Nov 2020 14:35:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20850.46875; Fri, 06 Nov 2020 14:35:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2pd-0004h2-Eh; Fri, 06 Nov 2020 14:35:13 +0000
Received: by outflank-mailman (input) for mailman id 20850;
 Fri, 06 Nov 2020 14:35:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kb2pd-0004gx-2s
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:35:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7e2deb8-426a-4037-8c6f-b565494ea3dd;
 Fri, 06 Nov 2020 14:35:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E137AE09;
 Fri,  6 Nov 2020 14:35:11 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kb2pd-0004gx-2s
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:35:13 +0000
X-Inumbo-ID: f7e2deb8-426a-4037-8c6f-b565494ea3dd
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f7e2deb8-426a-4037-8c6f-b565494ea3dd;
	Fri, 06 Nov 2020 14:35:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604673311;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zie/iFpkyz7Vm32IBQPlFpo7z97lWE4i4u88Ovukv2I=;
	b=Ge48ZGBW8QnB+ajj7ILsuFECNVF6YhUXYFJl4W9zNJNRVO6/sS/1QM97VEvgEVkMbmxXWm
	UCpoMCcwR3Y5ie4aNqxz3uoLt+BQM8Z0M3b/aPN4V1RHqHr1xfE/jNEzG8l0YZL4atOWJ5
	gVOPr/vQre6FL76Lj8bPpuujAV06swc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8E137AE09;
	Fri,  6 Nov 2020 14:35:11 +0000 (UTC)
Subject: Re: [PATCH] tools/libs/light: correct bitmap operations
To: Juergen Gross <jgross@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201106140504.25488-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <61860ac6-133a-0393-e63c-8de9ea13e5f9@suse.com>
Date: Fri, 6 Nov 2020 15:35:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201106140504.25488-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.2020 15:05, Juergen Gross wrote:
> Libxl bitmap operations for single bits (test, set, reset) take the bit
> number as a signed integer without testing the value to be larger than
> 0.
> 
> Correct that by adding the appropriate tests.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Wouldn't it be better to convert the parameter types to unsigned int?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:36:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20854.46887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2r1-0004o4-ST; Fri, 06 Nov 2020 14:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20854.46887; Fri, 06 Nov 2020 14:36:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2r1-0004nx-Pb; Fri, 06 Nov 2020 14:36:39 +0000
Received: by outflank-mailman (input) for mailman id 20854;
 Fri, 06 Nov 2020 14:36:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5w1g=EM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kb2r0-0004nq-QB
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:36:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79accbf8-bcab-4269-803c-97899b621e4a;
 Fri, 06 Nov 2020 14:36:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D72B0ADCD;
 Fri,  6 Nov 2020 14:36:36 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5w1g=EM=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kb2r0-0004nq-QB
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:36:38 +0000
X-Inumbo-ID: 79accbf8-bcab-4269-803c-97899b621e4a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 79accbf8-bcab-4269-803c-97899b621e4a;
	Fri, 06 Nov 2020 14:36:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604673396;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iAiBqHAiaj016BdKVwhEkwV1Lz5pLtNkIc2medpWSvE=;
	b=KT3UtpoJiS8/ub7OQrnhaXkFEBkKZNIDp5nc7I7nMFiYUJeOboKfrOGCTcuGqq2b61MZqk
	P8+90DX/OGD0UaX3fAd6Zpn8H9ZPMT8lqo+uvGH+/4/WOs/CzU207eOVfISmhdefS3oTgH
	4rSGuUH/IuqUdJ79sQajobCsZTGQQak=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D72B0ADCD;
	Fri,  6 Nov 2020 14:36:36 +0000 (UTC)
Subject: Re: [PATCH] tools/libs/light: correct bitmap operations
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201106140504.25488-1-jgross@suse.com>
 <61860ac6-133a-0393-e63c-8de9ea13e5f9@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <22a4dc50-4feb-a934-a58b-7ebcdcf9e3ab@suse.com>
Date: Fri, 6 Nov 2020 15:36:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <61860ac6-133a-0393-e63c-8de9ea13e5f9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.20 15:35, Jan Beulich wrote:
> On 06.11.2020 15:05, Juergen Gross wrote:
>> Libxl bitmap operations for single bits (test, set, reset) take the bit
>> number as a signed integer without testing the value to be larger than
>> 0.
>>
>> Correct that by adding the appropriate tests.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Wouldn't it be better to convert the parameter types to unsigned int?

Those are official library interfaces. Can we just change them?


Juergen
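[Editorial note: the class of check being discussed can be sketched as below. The struct and function names are illustrative stand-ins, not the actual libxl_bitmap API; the point is only that a signed bit number must be range-checked on both ends before it is used as an index.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical minimal bitmap: a byte array plus its size in bytes,
 * loosely mirroring the shape of libxl's bitmap type. */
struct bitmap {
    unsigned char *map;
    size_t size;            /* number of bytes in map */
};

/* With a signed 'bit' parameter, a negative value would produce a
 * negative array index; reject it explicitly, along with anything
 * past the end of the map -- the kind of test the patch adds. */
static bool bitmap_test(const struct bitmap *b, int bit)
{
    if (bit < 0 || (size_t)bit >= b->size * 8)
        return false;
    return (b->map[bit / 8] >> (bit & 7)) & 1;
}

static void bitmap_set(struct bitmap *b, int bit)
{
    if (bit >= 0 && (size_t)bit < b->size * 8)
        b->map[bit / 8] |= 1u << (bit & 7);
}
```

Jan's alternative, making the parameter `unsigned int`, removes the negative case at the type level, but as Juergen points out these are exported library interfaces, so changing the signature is an ABI/API question rather than a local fix.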


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:38:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:38:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20860.46900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2so-0004yl-8q; Fri, 06 Nov 2020 14:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20860.46900; Fri, 06 Nov 2020 14:38:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2so-0004ye-5O; Fri, 06 Nov 2020 14:38:30 +0000
Received: by outflank-mailman (input) for mailman id 20860;
 Fri, 06 Nov 2020 14:38:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a09F=EM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kb2sn-0004yX-7a
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:38:29 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 240a7c8c-e2fe-4f23-a95c-3d09eb96a914;
 Fri, 06 Nov 2020 14:38:28 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=a09F=EM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kb2sn-0004yX-7a
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:38:29 +0000
X-Inumbo-ID: 240a7c8c-e2fe-4f23-a95c-3d09eb96a914
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 240a7c8c-e2fe-4f23-a95c-3d09eb96a914;
	Fri, 06 Nov 2020 14:38:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604673507;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=ALjou14VWdHL+2vpwf8hovwAGsZ+zQMp6avJuQGdslg=;
  b=c4PtAoGuElyecqLXOvx5AeFeWAbHsytda0kt3TAMVl1cHdDYrF89NR7I
   bECznCUtEtXi5YfUck5VoB2iUSarj4aQBkcwV3CHZSpfIS6vGL7qrdJGr
   e9guMrwP8MaUDaMPLYMqoLKa5CMzqj/+k8C4BxzkHPcKnfZJ9nEW6Fi7s
   k=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0vu+2bSbkhQednlK5BSi9XGYZxXTAn4+VwYG5t/bokP/w2nPPHu46yLayCQlgqWNLY2Nrhwm2C
 h+OxNN4z0vUMbzBqJUST0DKuQ1zZeSXOatC3GxwExP1zefU7rUrk8bzAwCy8OJCTwACVQ0VqLw
 iuueTF42F9o+pkgGOj4fhutxMlurn+b4beyvKx8Ax2R8WYOIIIxoANfFTOY1uQOYhrq2SYW1lX
 WypAk8IPShlRiFus1+tCVdPSAJ60LgP207NtfJTjy4x8D/xa9pj+yHsYOT8/moDzseJC2IwDOW
 Yog=
X-SBRS: None
X-MesageID: 30668296
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,456,1596513600"; 
   d="scan'208";a="30668296"
Subject: Re: [PATCH] tools/libs/light: correct bitmap operations
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, <xen-devel@lists.xenproject.org>
References: <20201106140504.25488-1-jgross@suse.com>
 <61860ac6-133a-0393-e63c-8de9ea13e5f9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1226ca3c-df70-0ad0-dda9-3de06ca3482e@citrix.com>
Date: Fri, 6 Nov 2020 14:37:44 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <61860ac6-133a-0393-e63c-8de9ea13e5f9@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 06/11/2020 14:35, Jan Beulich wrote:
> On 06.11.2020 15:05, Juergen Gross wrote:
>> Libxl bitmap operations for single bits (test, set, reset) take the bit
>> number as a signed integer without testing the value to be larger than
>> 0.
>>
>> Correct that by adding the appropriate tests.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> Wouldn't it be better to convert the parameter types to unsigned int?

Yes, except they're in the API, so immutable.

(whether they should be in the API is a different question...)

~Andrew
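
[Editorial note: for readers following along, the kind of guard being discussed -- keeping the signed int bit-number parameter for API stability, but rejecting negative and too-large values before indexing the map -- can be sketched as below. The names are hypothetical stand-ins, not the actual libxl implementation.]

```c
#include <stdint.h>
#include <string.h>

#define BITMAP_BITS 128

struct bitmap {
    uint8_t map[BITMAP_BITS / 8];
};

/* The parameter stays a signed int (the "immutable API" point
 * above), so the helpers must validate it themselves. */
static int bit_valid(int bit)
{
    return bit >= 0 && bit < BITMAP_BITS;
}

static int bitmap_test(const struct bitmap *b, int bit)
{
    if (!bit_valid(bit))
        return 0;                      /* invalid bits read as clear */
    return !!(b->map[bit / 8] & (1u << (bit % 8)));
}

static void bitmap_set(struct bitmap *b, int bit)
{
    if (bit_valid(bit))                /* silently ignore invalid bits */
        b->map[bit / 8] |= 1u << (bit % 8);
}

static void bitmap_reset(struct bitmap *b, int bit)
{
    if (bit_valid(bit))
        b->map[bit / 8] &= ~(1u << (bit % 8));
}
```

Without the `bit_valid()` checks, a negative bit number would index before the start of the map, which is the out-of-bounds access the patch closes.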


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 14:41:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 14:41:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20867.46912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2vu-0005nl-OO; Fri, 06 Nov 2020 14:41:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20867.46912; Fri, 06 Nov 2020 14:41:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb2vu-0005ne-LI; Fri, 06 Nov 2020 14:41:42 +0000
Received: by outflank-mailman (input) for mailman id 20867;
 Fri, 06 Nov 2020 14:41:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a9Kh=EM=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kb2vs-0005nZ-NL
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 14:41:40 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.75]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7242ae8e-4d87-41ed-bef2-18bf3d703af1;
 Fri, 06 Nov 2020 14:41:37 +0000 (UTC)
Received: from AM6P195CA0073.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:86::14)
 by DB8PR08MB4106.eurprd08.prod.outlook.com (2603:10a6:10:b2::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 6 Nov
 2020 14:41:34 +0000
Received: from AM5EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:86:cafe::fb) by AM6P195CA0073.outlook.office365.com
 (2603:10a6:209:86::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 6 Nov 2020 14:41:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT044.mail.protection.outlook.com (10.152.17.56) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Fri, 6 Nov 2020 14:41:34 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Fri, 06 Nov 2020 14:41:34 +0000
Received: from 4097bcb96e3b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9976F4D8-8078-425A-A73E-62C4176E9236.1; 
 Fri, 06 Nov 2020 14:41:10 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4097bcb96e3b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 06 Nov 2020 14:41:10 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB8PR08MB5097.eurprd08.prod.outlook.com (2603:10a6:10:38::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 6 Nov
 2020 14:41:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3499.032; Fri, 6 Nov 2020
 14:41:08 +0000
X-Inumbo-ID: 7242ae8e-4d87-41ed-bef2-18bf3d703af1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YfMjNf8TOo4lcQe5ri3R1rXhXlGGZDfS96Gxnu42Xwo=;
 b=ogGcJ0Zl+KkjN4QWPYNHm2bYGcMCF3Ig3mfM87XY5TaUlY52nPp5D1pfXGoU7Syrb+2265VdRxbXp2Kg7DEA2Qx8my+21hUpZnynxjPLrI1O7os2E8mPtFY9hnM9pfCIe2v5GQSX31UICJy4Lrkb3i5DbttINRa3gnI9cEQM0oU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e70a9345d59a9acd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NMbIbbLInBmB0kNcHF9dHrwZAhHv0QkKt064OnsWjJrj9MfmugumNzcQgF0zAV3Ze0FMIcq9TxX/omwbrpF2g1yXyCDSMQnVT3b9F95M+K7BrXJfB9NFNAsrzPpp8MPFO3weP5HbxZ5ZB0ZaQEk2Y8ZUuw0lE/FMIUmHG8fCRT5aQ+2SsgCf08JNGNInoQHMYfe9+ey9xTi+L1mY4plzxJy6bHelkAPKM4XS8LVqub9SVuW18mK9yNWKQc7ygBmkgkBB3pyQDMbA5BrL1+VHOq2PWVsEEffFh3Vz8qNHwDeQh0nW1a69J64S+yzlLTgBxaZ8VTKmfF/OBfshH9otCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YfMjNf8TOo4lcQe5ri3R1rXhXlGGZDfS96Gxnu42Xwo=;
 b=iXT2AX8ugQzUAH0bSvGEexb6SNl8uvjTBYLuBL+mopgz/0NeT6xYBVCAW8R2+/LRKV8SvVli2pIkD2qyJqLGzsLVO6Mpxw926HGNADrZmyi0iscV4thC8juMYWh3cY/ftSsK/sDVaJvkN/RwV/CTQBgd1+0sBF8x7YAQfBzCGH+CvNDlNw6gttLRUX1ePH6x6RKXuAxK+WLajSrEQ7C3LEqDg8ipv9NKTxBUelu9YPf3dFRf/3eWy4q2ZNVN7LbZ2pEpr1W3FBlkudDMa/vVKgILo2elbpTEoxsT+5eElShEVRxRVNywuCSdNypWazZ/123afehHAD6h8bPX7TcZtg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr Andrushchenko
	<andr2000@gmail.com>, Julien Grall <julien@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamwEIWAgABuPwCAA/gngIAAQvMAgAAErACABnTvAIAAA4IAgAAQNoCAAAa2AIAABRmA
Date: Fri, 6 Nov 2020 14:41:08 +0000
Message-ID: <1390C05F-445F-4349-A672-4D7373C301B8@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
 <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
 <2f62f34b-f47d-3472-511f-a89ec1cd36c3@epam.com>
 <20FF6A26-41CF-4888-901A-0FF0ABCC6E64@arm.com>
 <d2eb2db3-7038-3850-310b-4676102e0a55@epam.com>
In-Reply-To: <d2eb2db3-7038-3850-310b-4676102e0a55@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 62d31641-8f96-435c-1911-08d882620f32
x-ms-traffictypediagnostic: DB8PR08MB5097:|DB8PR08MB4106:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB4106C8C17CA0DB70BB2ED451FCED0@DB8PR08MB4106.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 JF+JzPbelXoxoDj+NVN4Uiwrbj6/1E65OmcojqFy6o6ZuqGKxEhaOvDC/VPcDzIH9BwF39Qk9pa983ctlVNKUiRfagqZzuYfO86d0w6p9/bl5naBbCinh31IhylfhjcijgN7kdECRI7NawHlUq7Dfdjqd3/mibX0CIVtYnmzrsaUZNY671e4P/BcX0x1LGzyw4DUhWkwGrjciwChTbDHHRR+EORiHK6hXdHQNQIlNHZWVoSVtZ8z9Egp8bAfcoVmzTizn6lyGsfe76JaJuJKyVdAixbxFAoH4CsmsvEdlfE6a+vJRqUfaD3RWOcU2eqZ+TEzTWOH7CMIq6di5owUWQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(366004)(136003)(396003)(39850400004)(86362001)(83380400001)(6512007)(33656002)(36756003)(54906003)(6486002)(8676002)(186003)(5660300002)(2616005)(8936002)(4326008)(91956017)(66556008)(76116006)(66446008)(64756008)(66476007)(26005)(478600001)(55236004)(6506007)(66946007)(53546011)(6916009)(316002)(71200400001)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 MdG9aGWDPalYA4EiXMHS/MwUh6QlvDvbwKhSYXQXhIjgHDeR8g5KGySrbq3xs4+8RdoUKgjpgVdrn/D8B7P1Rj99r03f4MA6ZdnJzN+IoiQFXAI6oY401cVuuv4FUsNj3EZK6kM9PAnRLQzPd06nmj99e8o434Bgm6jJ7V2IH3VS7GWZ7Rf7dUDFfa9ppXaiOz9M4uxn1r/1wZLxDrXOHkLW7ikEGGKr2p7w/zuZDI59iChLPbha1fZedZbO2EnvygLU5TdbJ27NZnWKiWL4IGmYZaT7CvvuKcn0Cq/MgAdH3M99rK2KTX2CmvspoWGcKokk5Dgn61HKmDcgyZnpZaubdxrq0w/ImubTIfPfrATZPvK2n4v9VDqnQnwuDIv/UVKX/sb1ruuljFrPTOwwP3sRXVKmdhBtQlQu/HkOG8YCkeOgvjiTMZKsEXtKlCbrIjT9VlLtOkAD2JTAczzwrsKOzjJqEs0qCUX621uWLEU5FLaY/R/Pl1Wu+MQb6pYslH0Iadbd1mx/CeWZNQeVxT3ufDFKqPWRjVy/REo9ZM914oFYsLte+BUFDHdo09UviozYriffsGHA9LX5rct8VVsbyd21QCppnNPbp0hGq5GCa8PdcE7uiUzQoGIBlXrmekkVFrq4tW69vNrnKYlCKg==
Content-Type: text/plain; charset="utf-8"
Content-ID: <9FF9AD37F545F04FB4E95CCDBC616729@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5097
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c861c12e-cfb0-44ae-9019-08d88261ff79
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EO1w9pY+soGZj4EY+N7IpsAYtN5BEcm8+IIJzAa7pc0L/2N4Z3/MDOCaoeRfFTSnHgvFvkLcyqxMaW6ZqdniaUNCKNArmGX7PW9Q33MrHXu17b05zvmUb6vzlFaUvkEiaU9yjcK0D0rI0jt4ZMR1d02P0pnesLwFT3lB2pkFpjmVP/ph67UtobJ1CySTnpy6VPeRZvvCTLWI9V5HPnkLB9JfAWI76C/42fSBwT97ivqm+Tz7a7ucNu8d8xzeMLR1Wo2pcmql+b7Ew5gYz1m67ldHqEA6r2e2pqhNbjt0InOl/hABUMp+1LpebYXVVsciNmbikOQ8OU01qWKrrbQYgIQ3wqfi5QuPCvy8qtO8HxqP9LxFqwlLLGWTvxKIVn+gb6rnE+W88tC4z2qJPVx/Dw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39850400004)(346002)(396003)(376002)(136003)(46966005)(70586007)(5660300002)(2906002)(70206006)(47076004)(478600001)(6486002)(356005)(81166007)(86362001)(4326008)(83380400001)(82310400003)(82740400003)(6512007)(107886003)(6862004)(33656002)(2616005)(55236004)(53546011)(336012)(26005)(36756003)(36906005)(316002)(8676002)(8936002)(6506007)(54906003)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Nov 2020 14:41:34.5472
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 62d31641-8f96-435c-1911-08d882620f32
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4106

Hello Oleksandr,

> On 6 Nov 2020, at 2:22 pm, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> Hi, Rahul!
> 
> On 11/6/20 3:58 PM, Rahul Singh wrote:
>> Hello Oleksandr,
>> 
>>> On 6 Nov 2020, at 1:00 pm, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>> 
>>> Hello, Rahul!
>>> 
>>> On 11/6/20 2:48 PM, Rahul Singh wrote:
>>>> Hello Oleksandr,
>>>> 
>>>>> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>>>>>> Hi,
>>>>>> 
>>>>>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>>>>>> 
>>>>>>> Hi, Julien!
>>>>>>> 
>>>>>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>>>>>> Hi Oleksandr,
>>>>>>>> 
>>>>>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>>> 
>>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>>      that supports both Stage-1 and Stage-2 translations.
>>>>>>>>> First of all thank you for the efforts!
>>>>>>>>> 
>>>>>>>>> I tried the patch with QEMU and would like to know if my understanding correct
>>>>>>>>> 
>>>>>>>>> that this combination will not work as of now:
>>>>>>>>> 
>>>>>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>>>>>> 
>>>>>>>>> [snip]
>>>>>>>>> 
>>>>>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>>>>>> 
>>>>>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>>>>>> Just for clarication, you are trying to boot Xen on QEMU, right?
>>>>>>> Exactly
>>>>>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>>>>>> So, it is even more work then
>>>>>> Overall it would make more sense to spend some time adding proper support in Qemu then trying to modify the driver to support Qemu right now.
>>>>>> 
>>>>>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>>>>>> 
>>>>>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>>>>>> I would recommend to get the SMMU supporting supporting stage-2 page-tables.
>>>>>>> You mean in QEMU?
>>>>>> See before.
>>>>>> 
>>>>>>>> Regardless that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>>>>>> Yes, that would be nice
>>>>>> Fully agree and we will look into that.
>>>>>> 
>>>>>> Anything you could share so that we could quickly reproduce your setup would be more then great.
>>>>> Nothing special,
>>>>> 
>>>>> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
>>>>> 
>>>>> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
>>>>> 
>>>>> -nographic -serial mon:stdio [..snip..]
>>>>> 
>>>>> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1
>>>> I just checked and confirmed that QEMU is booting with XEN SMMUv3 patch and XEN is able to say SMMU translation is not supported. As XEN supports Stage-2 translation and QEMU supports Stage-1 only.
>>>> 
>>>> 
>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
>>>> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
>>>> (XEN) I/O virtualisation disabled
>>>> 
>>>> Only difference I observed is that you have to add option "-machine virt,iommu=smmuv3 “ when launching the QEMU.
>>> I do use the option
>> I used "-machine virt,iommu=smmuv3 “ option while creating the virt-dtb and while launching the QEMU.
>> I also observed the same error what you observed if I am not using the "-machine virt,iommu=smmuv3 “ options when launching the QEMU so I thought this might be case for you also but anyways you have use the options it might be other issue.
> 
> Hm, probably that was on my side as now I can see:
> 
> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
> (XEN) I/O virtualisation disabled
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Couldn't configure correctly all the IOMMUs.
> (XEN) ****************************************
> (XEN)
> (XEN) Manual reset required ('noreboot' specified)
> 
> So, sorry for the noise, I might have misconfigured something it seems
> 
> When you say "Xen is booting" do you mean you see the same panic?

Yes, I observe the same.

We have to decide now, when the SMMUv3 has no translation support, whether we print the logs and move forward, or as above return an error to iommu_setup, which will call panic().

Regards,
Rahul

> 
> Thank you,
> 
> Oleksandr
> 
>> 
>>>> Please let me know if it also works for you.
>>> Well, I should have reported that earlier that I do not use the staging Xen at the moment,
>>> 
>>> it is 4.14.0. So, can this be a problem with that Xen version?
>> I don’t think so this is the problem with the XEN version.
>>> Anyways, if it works with the staging then everything looks ok
>>> 
>>> Thank you,
>>> 
>>> Oleksandr
>>> 
>>>>>> Regards
>>>>>> Bertrand
>>>>>> 
>>>>>>>> Cheers,
>>>>>>>> 
>>>>>>> Thank you,
>>>>>>> 
>>>>>>> Oleksandr
>> Regards,
>> Rahul
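
[Editorial note: the trade-off Rahul raises at the end of this message -- print a log and continue booting without I/O virtualisation, or propagate an error so that iommu_setup() ends up in panic() -- can be sketched as below. All names here are hypothetical stand-ins, not the real Xen SMMUv3 driver code.]

```c
#include <stdio.h>

static int iommu_enabled = 1;

/* hw_has_translation stands in for the hardware capability check
 * (e.g. the IDR0 probe); hard_fail selects between the two options
 * discussed in the thread. */
static int smmu_init(int hw_has_translation, int hard_fail)
{
    if (!hw_has_translation) {
        printf("SMMUv3: no translation support!\n");
        if (hard_fail)
            return -1;         /* caller treats this as fatal -> panic() */
        printf("I/O virtualisation disabled\n");
        iommu_enabled = 0;     /* soft option: log and keep booting */
    }
    return 0;
}
```

The soft option matches the "(XEN) I/O virtualisation disabled" log above; the hard option matches the "Couldn't configure correctly all the IOMMUs" panic.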


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:12:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20894.46924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb3PA-0008U4-1c; Fri, 06 Nov 2020 15:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20894.46924; Fri, 06 Nov 2020 15:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb3P9-0008Tx-U6; Fri, 06 Nov 2020 15:11:55 +0000
Received: by outflank-mailman (input) for mailman id 20894;
 Fri, 06 Nov 2020 15:11:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+KtV=EM=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kb3P8-0008Ts-Dj
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:11:54 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be185ddf-45ef-4ad9-afc7-f30327d238c2;
 Fri, 06 Nov 2020 15:11:53 +0000 (UTC)
X-Inumbo-ID: be185ddf-45ef-4ad9-afc7-f30327d238c2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604675513;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=xdbHo+p5BgdyP73xJIcQGHCSKtTjksyd3+brh7VTRP8=;
  b=f9bZiGe0KRujBi3Kmm+q57BifNJB7xgafz49GD0m9vxZgwRFPDzry7OD
   SnWkGIqqmpKx5c5otJQe8yRVHo/vNmTpqErXDEtpEyZTt6zF3eUOWVh5U
   8hLArUlwjfRHOI8c38vMWgqm1YkAs33L7p3CL7yyDmA57t54l8ra0WeKD
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jxpGl2hMit/nW6dP7MUyVjLk8NgXAR/2qZ8hJag6uXV3Ce8daN/Qtil5kziv3tY3M3yD7dWXYQ
 MJ+JNIRjdhUXkqOGUtHqo9b7j2Aih7UnmW76ONnUm65NZEtXzCZfVgzYXyoqVj+oBURa9kcUyF
 YAXh8Y/5sFvUH3l4xyjzdG/+/iR7wtQH+Kb9DiSuZYzKOdEbfOfTWARtfsbjNonKeyUYRdmUeY
 PIdSRrVJseEuT1TZ6VeFz6dmrHrR+lr3zuQqI8vrjcjjPKNc+oekG8PWMowXKRGIsaeuujW9+N
 epM=
X-SBRS: None
X-MesageID: 30976882
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,456,1596513600"; 
   d="scan'208";a="30976882"
Date: Fri, 6 Nov 2020 15:11:46 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: <xen-devel@lists.xenproject.org>, <takahiro.akashi@linaro.org>,
	<alex.bennee@linaro.org>, <masami.hiramatsu@linaro.org>,
	<ian.jackson@eu.citrix.com>, <wl@xen.org>
Subject: Re: [PATCH] libxl: set vuart_gfn in libxl__build_hvm
Message-ID: <20201106151146.GM2214@perard.uk.xensource.com>
References: <alpine.DEB.2.21.2011051312120.2323@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2011051312120.2323@sstabellini-ThinkPad-T480s>

On Thu, Nov 05, 2020 at 01:15:05PM -0800, Stefano Stabellini wrote:
> libxl: set vuart_gfn in libxl__build_hvm

The subject is written two times ;-)

> Setting vuart_gfn was missed when switching ARM guests to the PVH build.
> Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
> dom->vuart_gfn.
> 
> Without this change, xl console cannot connect to the vuart console (-t
> vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index f8661e90d4..36fe8915e7 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
>          LOG(ERROR, "hvm build set params failed");
>          goto out;
>      }
> +    state->vuart_gfn = dom->vuart_gfn;
>  
>      rc = hvm_build_set_xs_values(gc, domid, dom, info);
>      if (rc != 0) {

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:45:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20911.46956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb3w0-0002sk-2Z; Fri, 06 Nov 2020 15:45:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20911.46956; Fri, 06 Nov 2020 15:45:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb3vz-0002sd-Uz; Fri, 06 Nov 2020 15:45:51 +0000
Received: by outflank-mailman (input) for mailman id 20911;
 Fri, 06 Nov 2020 15:45:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DqEO=EM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kb3vy-0002sY-SB
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:45:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a50d871-6b4a-416e-ad4c-002c80b8151e;
 Fri, 06 Nov 2020 15:45:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DBE58ABA2;
 Fri,  6 Nov 2020 15:45:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604677549;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=baG0fcY2nKj2+z89jaqCJLZcP325tXsOE+uQ+OrJOEk=;
	b=V/4zcjY8hNf/woetkroqvx+gxwqRp1BiGXEgXWLQBrWturzvz+hF61MHn2Mf3VO36Mh5mO
	yrmHo0nLFupAzOYZHevZLKEPxx9s3+2PhoM7txSiiuGyPkPUU2ha0qjz67/sbuaSJ7h7wG
	rBxFteneT//ldEiM6moLtmVroYZJl4M=
Subject: Re: [PATCH] tools/libs/light: correct bitmap operations
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201106140504.25488-1-jgross@suse.com>
 <61860ac6-133a-0393-e63c-8de9ea13e5f9@suse.com>
 <22a4dc50-4feb-a934-a58b-7ebcdcf9e3ab@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <908ab536-d5f3-2822-086d-a13c730e6c74@suse.com>
Date: Fri, 6 Nov 2020 16:45:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <22a4dc50-4feb-a934-a58b-7ebcdcf9e3ab@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.11.2020 15:36, Jürgen Groß wrote:
> On 06.11.20 15:35, Jan Beulich wrote:
>> On 06.11.2020 15:05, Juergen Gross wrote:
>>> Libxl bitmap operations for single bits (test, set, reset) take the bit
>>> number as a signed integer without checking that the value is
>>> non-negative.
>>>
>>> Correct that by adding the appropriate tests.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Wouldn't it be better to convert the parameter types to unsigned int?
> 
> Those are official library interfaces. Can we just change them?

Oh, I didn't expect such helpers to be available to users of the
library.

Jan
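The kind of guard under discussion can be sketched as follows. demo_bitmap and these helpers are illustrative, not the real libxl_bitmap API; the actual patch adds equivalent range checks to the libxl bit helpers:

```c
#include <stdint.h>

/* Toy bitmap: a byte array plus its size in bytes. */
typedef struct {
    uint32_t size;   /* number of bytes in map */
    uint8_t *map;
} demo_bitmap;

/* Test a single bit, rejecting negative (and out-of-range) bit numbers
 * instead of indexing the array with them. */
int demo_bitmap_test(const demo_bitmap *bm, int bit)
{
    if (bit < 0 || (uint32_t)bit >= bm->size * 8)
        return 0;                      /* invalid bit: report "not set" */
    return (bm->map[bit / 8] >> (bit % 8)) & 1;
}

/* Set a single bit, silently ignoring invalid bit numbers. */
void demo_bitmap_set(demo_bitmap *bm, int bit)
{
    if (bit < 0 || (uint32_t)bit >= bm->size * 8)
        return;
    bm->map[bit / 8] |= 1u << (bit % 8);
}
```

Without the `bit < 0` check, a negative argument indexes before the start of the map, which is the out-of-bounds access the patch closes.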


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:49:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20915.46967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb3zJ-00033W-I8; Fri, 06 Nov 2020 15:49:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20915.46967; Fri, 06 Nov 2020 15:49:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb3zJ-00033P-FA; Fri, 06 Nov 2020 15:49:17 +0000
Received: by outflank-mailman (input) for mailman id 20915;
 Fri, 06 Nov 2020 15:49:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+KtV=EM=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kb3zH-00033K-L6
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:49:15 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acd985ab-8337-43f6-a668-ebadfcc2b01c;
 Fri, 06 Nov 2020 15:49:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604677753;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=yz5DFGKpaiVNHL3RvlNotFBzT5VQDrG5Fw/aH/1VkuM=;
  b=GtrR9229rKDSd8IeTUrNNxoLfQo5R/7JivDBFMVg65KvUC4TBCJyNoO4
   /n5RnRilDVNNgUJGxcWMgDEBiZwbIDiMBp7jVNuae8aEOfnFSN7SNDLeJ
   3z/ZDrHvSAsbx2JQPES8yRIfmobFVm/D90jWp75rmZjmLE7t1SkWeyu/Z
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kqXxJxKBs1V0rMqwAnGP/wWmjBvus9Rfa5oQBZhWA5ToUBXnxoOgAhgcho7HUqowRQbhjDmoL3
 RY3wDrwsONsSfeTB9JRvFsdwXSstVHkQSmrEg8FwHXIMoAXUvF6uvGrMPaOKUTwwSqdeykgGLM
 yJK0b1BBvd3vNROwKAWc3qTcKWES+DvCZENrPfeyvxh4GgwXL5pk+C3E9R3qEV/RoVdU7yFGM0
 US517zCDPnPXPyaMrtlUWh+dWofnxMI4d2JV8tdbMGH+Ce0vD8SMvTzD7Nf+AuurITlFrVmKA9
 3k4=
X-SBRS: None
X-MesageID: 30674195
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,456,1596513600"; 
   d="scan'208";a="30674195"
Date: Fri, 6 Nov 2020 15:49:08 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Michael Young <m.a.young@durham.ac.uk>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: Xen 4.13.2 released
Message-ID: <20201106154908.GN2214@perard.uk.xensource.com>
References: <ed219f15-479b-5d06-c835-eb4f4c64db3a@suse.com>
 <a391cfd1-be4a-add6-cd36-8bb254f9b43f@austen3.home>
 <a3dfec9d-bb32-c1c5-c00e-ea95c62c9bde@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <a3dfec9d-bb32-c1c5-c00e-ea95c62c9bde@suse.com>

On Wed, Nov 04, 2020 at 08:47:57AM +0100, Jan Beulich wrote:
> On 04.11.2020 00:55, Michael Young wrote:
> > On Tue, 3 Nov 2020, Jan Beulich wrote:
> >> I am pleased to announce the release of Xen 4.13.2. This is available
> >> immediately from its git repository
> >> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.13
> >> (tag RELEASE-4.13.2) or from the XenProject download page
> >> https://xenproject.org/downloads/xen-project-archives/xen-project-4-13-series/xen-project-4-13-2/
> >> (where a list of changes can also be found).
> > 
> > Is the entry for XSA-335 correct on the download page? That was a qemu 
> > patch but I don't think it was included in 4.13.2.
> 
> Interesting, thanks for pointing this out. The qemu-trad part,
> albeit "just" a SUPPORT.md update, didn't even make it into
> staging yet afaics. While this can perhaps be viewed as benign,
> I'm concerned that the qemuu fix also doesn't look to have
> landed in any of the branches yet, despite the version bump on
> the staging/master branches just 5 days ago. Anthony, Stefano?

I've pushed the fix now, to qemu-xen trees.

Maybe George's script could check qemu-xen trees as well? Or someone in
the security team could push the patches or tell me about XSAs involving
QEMU, otherwise that's going to happen again.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:53:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20923.46979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb43f-0003v7-3y; Fri, 06 Nov 2020 15:53:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20923.46979; Fri, 06 Nov 2020 15:53:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb43f-0003v0-0z; Fri, 06 Nov 2020 15:53:47 +0000
Received: by outflank-mailman (input) for mailman id 20923;
 Fri, 06 Nov 2020 15:53:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mYaw=EM=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kb43d-0003uu-EH
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:53:45 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 262090eb-bf42-472f-b220-ae7278bd4259;
 Fri, 06 Nov 2020 15:53:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604678023;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=V07ZJB0BEyifSVhpjP+or0kueU0shHYYGhLPZwoP954=;
  b=EpY9j4ys9J547a7q9ocwYGg6PHbCxY5dSRhJNCDZthpr6x7csGCsbS+U
   /PPDnrv+vEGdB/I+iWsCCrjdMFVoBduaFI3P8zRf3eQ1V1i7mwcvWBXX5
   hrbEoUgcpIWRUe0BM3MdLRzOvaWXz55IJSgjdnUlZ0XFtFvNMQbigy0vr
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: HCgbAj4jitrHQ2VE/poy3CMSJz/54i/jNTPPbiUgzQErZVO4Zxb6Oe56mVwNECEkUHoNKGePII
 0paOZcPgP3MLUBYP0ZT6raXySfPTsmI3U/vwuk4nKu7cdhRZyf3ARKUHlHyg7irN9BbruKRpjc
 mjdfdaEsfQUjE4Lcxq4OW2VacQ8j+v+ViIsujl1yUIPESWo74wHiWQXHw5TRle1xNZEXEx49sM
 qQtc4JI3bFLz+l5bPB20e7uPCspN3GB3Mp9XhRxHC/Q2BBu331EK2A2PwKtPST4L93bbMrgP12
 1bU=
X-SBRS: None
X-MesageID: 30881112
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,456,1596513600"; 
   d="scan'208";a="30881112"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ahSJOP5cQSZOI8RTAjUnvAUYo3KgP2CIDcDyPNeOqRBNtMOY01dEexrIjXJe2a8T/+HzDUe6kHnzXNKzhOittYgZ/AIp9H9HmlOvi2exSrrAyt1BKDuK4ZB/2YJv928IUoShL6AiCk8883FzQJRpeUxrSRVgBQkSX445uqU0oWgoCYEkdNypvRK4kGLcPatqcMSVFmHZQOnOJJo5fhUTCSWICJujJWQh+3c0VmXukr3wQcJ0/lV1BhkZG4I4UNleVB1/pxqlG85GjXfVHrsMKFCXJDkrol9Sel+bKC9FTFJCttbKrDFRcpDi/zzhX5dn4RsxbOujTd1FjNlwAC3sgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V07ZJB0BEyifSVhpjP+or0kueU0shHYYGhLPZwoP954=;
 b=mqA1pt4nqwb7+Oa20MmV8royN9c1uDHY7A4xCKojUdEwp7JTiDPaq9VrGb0wvz2IeSc1iYvWxeKquwnCU63QnwxsOdf/G9ZHcKlCoi+d/54GPgvSwsU4CE+hNySqg0Y2Yy5ARVeddbJ1Zs71b/Xd866sAl7fALwboQ6LYsQ987xBGYXn7fQcPCO7KIAcJKIpEJ/KrbuZTMQkP1civZdIKRZwme5Mqr+QoAqN070PDUQjucPAyqdq7/ktdxVX9erZn0LAqu57RoNRNLA0STjYlbguMkryiLYWFwwlLokziINZTRL74CmxRDvZPiC5Px2wPbou/lquSNe3LwUwAlUuTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V07ZJB0BEyifSVhpjP+or0kueU0shHYYGhLPZwoP954=;
 b=oSmE+2b90IaTfGgmJqF/UpnAFIKnDFPPZ3REDkgcEIugI9LcrhFNG4wIRbiqKVYNK0Hnim2fJDZsiKG6R/VQWNBqPjg+E9/KhQl9FaAmFJuEwjE71lWfgsOrt4yWSHozNOkA6ly/haMrlgnx1yd3J9bRxb6FkzuVXG79kz0OklI=
From: George Dunlap <George.Dunlap@citrix.com>
To: Anthony Perard <anthony.perard@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, Michael Young <m.a.young@durham.ac.uk>,
	Stefano Stabellini <sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: Xen 4.13.2 released
Thread-Topic: Xen 4.13.2 released
Thread-Index: AQHWsn7TXlZVtT/AC0iXjX7nX8Eg4Km7Q7sAgAABPQA=
Date: Fri, 6 Nov 2020 15:53:35 +0000
Message-ID: <4D30BF59-8F11-49EC-8F7B-5822807744BF@citrix.com>
References: <ed219f15-479b-5d06-c835-eb4f4c64db3a@suse.com>
 <a391cfd1-be4a-add6-cd36-8bb254f9b43f@austen3.home>
 <a3dfec9d-bb32-c1c5-c00e-ea95c62c9bde@suse.com>
 <20201106154908.GN2214@perard.uk.xensource.com>
In-Reply-To: <20201106154908.GN2214@perard.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 529ee543-a1bb-465b-bb31-08d8826c1e7a
x-ms-traffictypediagnostic: BYAPR03MB4391:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB439156CBE52D83C15F7A7B8399ED0@BYAPR03MB4391.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: gh9HADmv+rqMjSlFEoX5LDzFYWMRqYlxjE/xWg81Vkr7tBnXPeM/AQ4KngPS1Qa+oqpSQ3R2tIclQpRhJoOyIcZc4OrjQmgmAON2KVoLgRD4o6wKwEqPzubL9rn5ZznKH9rm7BRC7pih17kzY0m54QBECT0yUEVrxAWvd47TuT9jMqCo12SvgEZnA92XaMXjbaH5cAOW/CFoAdKkooAK5tM5zymQYjyFXCPtEKncZ7GqEyxqBUwqADTafRo9aIgTutVu3FWW5sCy5XYHQAD+SBHT6UC8wqAgznR2ssualtIx/us0sXajIIJ4n73e4feevpkCmhBMQnjY98xHyyu00k0XYYjyAEHjKZ3fR1zrm9c14YqBBWWkQn/cKebiEqgTc1AmymfzBg3S+gx7mv4fXQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(39860400002)(366004)(136003)(376002)(396003)(6512007)(54906003)(4326008)(2616005)(66946007)(83380400001)(5660300002)(37006003)(6636002)(71200400001)(186003)(8936002)(8676002)(36756003)(86362001)(53546011)(2906002)(55236004)(66446008)(478600001)(91956017)(6486002)(6506007)(66556008)(33656002)(66476007)(6862004)(7116003)(966005)(26005)(64756008)(76116006)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: SP5wWpGPK0t+cphiXppTBDpfGJOegGB3c4VYzixLve92AHlIGh+YrdIqENGqtUjJELf92FOEAzTaEV93IJpJPGX1uS6Zrf4P4cLfEH1O84BgRFiNvhpVzJwlBjCuwIK6xvs+lE/WzVxtzzrx5eDQQkOx96PqBh3ZGJm4uPS18bSp8Valkv2UEvpwlwKqgBbxBhvrnhPogwLo4on7nsiSKoT5xxGxzRFktcRpC5O7Wd6VpnDWJw1+AMplQ2HOG+0Q3QWARW27LSxoxjTVazrb6QE/eD4kgTgNjKke6IXZPZAapW4DmUMXGw1hwVn6LmyCc08v6sxoN8IJ7YwV5e4AAWnNl4Ekr/XXDi3q6Bqvdxrv1B+QiYdwuLY4lJN4cMJROky+aXQ7UfC6L6KZ4Xohd+AvG15T6CLsQ6qe6/h5Ks/bK6vt3a3e+dwn+SXHYzIYQ1euMSIaAijTNiNsp4n6ohYP9V6o3Kfso++WN7eLsfDt5PJRNOU5NOvgJmX9TrEO2KzSMLC9RhluQRoKYxWfKpibQtrb+OmZ7GOHp04M7pAagHZgx3JopNVuFaPRRmcBhdCDjq8u1VnNxjrszDTo/5RA+uVPhpTHn1eMQ4qnLWPc7/uzzvr483eAboHSDPWy/GaVLxjC6k176yZGgL+lNA==
Content-Type: text/plain; charset="utf-8"
Content-ID: <DA65C41A7131344AA52234DB1D3B2125@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 529ee543-a1bb-465b-bb31-08d8826c1e7a
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Nov 2020 15:53:35.1004
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Ff4HIOt9O5RhnLsd+FWGVIrWLYrlLhIWpLdRVA09pb5ZYoMolS8JNTvsZe+974f/3DAPAJfBBba5lvoHd5rQwHcTA+LrB7EZzA1mzWeTNzM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4391
X-OriginatorOrg: citrix.com

> On Nov 6, 2020, at 3:49 PM, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> On Wed, Nov 04, 2020 at 08:47:57AM +0100, Jan Beulich wrote:
>> On 04.11.2020 00:55, Michael Young wrote:
>>> On Tue, 3 Nov 2020, Jan Beulich wrote:
>>>> I am pleased to announce the release of Xen 4.13.2. This is available
>>>> immediately from its git repository
>>>> http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.13
>>>> (tag RELEASE-4.13.2) or from the XenProject download page
>>>> https://xenproject.org/downloads/xen-project-archives/xen-project-4-13-series/xen-project-4-13-2/
>>>> (where a list of changes can also be found).
>>> 
>>> Is the entry for XSA-335 correct on the download page? That was a qemu
>>> patch but I don't think it was included in 4.13.2.
>> 
>> Interesting, thanks for pointing this out. The qemu-trad part,
>> albeit "just" a SUPPORT.md update, didn't even make it into
>> staging yet afaics. While this can perhaps be viewed as benign,
>> I'm concerned that the qemuu fix also doesn't look to have
>> landed in any of the branches yet, despite the version bump on
>> the staging/master branches just 5 days ago. Anthony, Stefano?
> 
> I've pushed the fix now, to qemu-xen trees.
> 
> Maybe George's script could check qemu-xen trees as well? Or someone in
> the security team could push the patches or tell me about XSAs involving
> QEMU, otherwise that's going to happen again.

Making a script you could put into a cron job to tell you when a QEMU XSA
comes out should be fairly easy. (Although, of course in this case it
wouldn’t have worked because xsa.json didn’t have the correct information.)

Checking that it’s been applied would be a bit more work; it’s on my list
of things to do.

It’s all in Go, so maybe I should publish the packages; then maybe you
could send me a PR. :-)

 -George


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:57:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20930.46992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb47P-000459-LA; Fri, 06 Nov 2020 15:57:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20930.46992; Fri, 06 Nov 2020 15:57:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb47P-000452-Hz; Fri, 06 Nov 2020 15:57:39 +0000
Received: by outflank-mailman (input) for mailman id 20930;
 Fri, 06 Nov 2020 15:57:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rw0I=EM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kb47O-00044x-Gk
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:57:38 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 469502b1-834e-4723-b1b6-49b3829b54d9;
 Fri, 06 Nov 2020 15:57:37 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id 23so1824847wrc.8
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 07:57:37 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u202sm2970776wmu.23.2020.11.06.07.57.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 07:57:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=db7t4GL85dQVBPvR3s9QjpqawTtgAchhP4T6H6x/w0o=;
        b=hkWdthbCf0vkBv1zUwa+mupIUBzQqKLqmG/cZ6B/emk+Flf5Y69rCP1gsyOOLdqXRf
         /LB370TCWGXG+6bwyV6wcsvFWRfF1yyWAQaIX1eA1NzazQYHLTLViiqancxSURJs3kkz
         3HhpDLumF+/UKyVSBL2uFGUgr9qBRFUmr0Mzqbk9Ij2y/VTrewhTlNnD/V1r4Cbzjfow
         WjIuPE21NNA1uyeobtIFCdZkJS1dAW3RbALFKKyunL3cXB85c++xpugwgjesru8M+6y6
         Yczb6bCsNwKDPiqbPsH8EiLxno3tvZIAreTcS1r/OesIGSlp/hdi9EMn7dvGxOIm8F8N
         pYuA==
X-Gm-Message-State: AOAM533vDLdFgd/zqvqYnL09sGYwgowt8n+qZ+T+jc8vLP17DPrQENte
	4HE7ZEBWx1modfCzSeTO+DA=
X-Google-Smtp-Source: ABdhPJwOCSqwoW2/sdZ10gPFr75CUYKJmrnMNX+zzTlT44BCmWAo5hxUOA8//ufM2IHXVxZ+yMxXAQ==
X-Received: by 2002:a5d:6591:: with SMTP id q17mr3399090wru.173.1604678256917;
        Fri, 06 Nov 2020 07:57:36 -0800 (PST)
Date: Fri, 6 Nov 2020 15:57:34 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] tools/libs/light: correct bitmap operations
Message-ID: <20201106155734.kk5hga6wfczrma4v@liuwe-devbox-debian-v2>
References: <20201106140504.25488-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201106140504.25488-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Fri, Nov 06, 2020 at 03:05:04PM +0100, Juergen Gross wrote:
> Libxl bitmap operations for single bits (test, set, reset) take the bit
> number as a signed integer without checking that the value is
> non-negative.
> 
> Correct that by adding the appropriate tests.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:58:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:58:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20933.47003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb47z-0004D3-VF; Fri, 06 Nov 2020 15:58:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20933.47003; Fri, 06 Nov 2020 15:58:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb47z-0004Cw-Rk; Fri, 06 Nov 2020 15:58:15 +0000
Received: by outflank-mailman (input) for mailman id 20933;
 Fri, 06 Nov 2020 15:58:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rw0I=EM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kb47y-0004Cp-UT
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:58:14 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6bead1b-d79b-4be8-ae88-e697c5c35bbc;
 Fri, 06 Nov 2020 15:58:14 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id 10so1140457wml.2
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 07:58:14 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id j9sm2695399wrp.59.2020.11.06.07.58.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 07:58:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=XlizrOFuqlAfbRVAxzGG1DFw9jrD4DJZPaRu188MR24=;
        b=obpdrcLJRheAiEZmFBWHU0JIyicQx/JQosXtgCGI6yseSTSlSAGrUxFt+RtOs8suVD
         EaBjPxClFAUW6eLldnBA59RCp8TqqG1ESsdEHCHWeas5b9+QZ76D1gd6C1MiooolJPlk
         AWJJb9y0OKAh2IPvXci6Tg0+4CC+CLNunDkySQD6UfMxZZIu16U7AGDu4Z8GajKeOBc4
         O/rqemwPomsKsYiif9J3quth1X0HQg1UXhd3nvbNVerG2fY1HjfqH7iYzFBX8t6ggN5W
         YfNxTJP+PFzOiLTtIZ1nyC2BodwU6MU6YXJ3dtAcVx4fsDFwp8qzDhuGy1HC3JHP+oWq
         8rnw==
X-Gm-Message-State: AOAM533vd00bZlT4AK+NphVcL860tzqOFtyvGGPWfxk00L/FkPq2fDVT
	LiIBQjIhqOF47bHiXflJNKw=
X-Google-Smtp-Source: ABdhPJzR48xSbooA4brDVQjvjaagZIsj7lfyyMwsX5NCPmavvUS0Le9UW2w3GcnIkXI7XBNj9Pw+fw==
X-Received: by 2002:a1c:6a11:: with SMTP id f17mr289879wmc.24.1604678293390;
        Fri, 06 Nov 2020 07:58:13 -0800 (PST)
Date: Fri, 6 Nov 2020 15:58:11 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] docs/xl: fix cpupool-cpu-remove
Message-ID: <20201106155811.uwqbjh7byzb4d6d4@liuwe-devbox-debian-v2>
References: <20201106130518.26875-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201106130518.26875-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Fri, Nov 06, 2020 at 02:05:17PM +0100, Olaf Hering wrote:
> The cpu-pool must be specified.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:58:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:58:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20936.47018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb48R-0004JW-9j; Fri, 06 Nov 2020 15:58:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20936.47018; Fri, 06 Nov 2020 15:58:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb48R-0004JP-6i; Fri, 06 Nov 2020 15:58:43 +0000
Received: by outflank-mailman (input) for mailman id 20936;
 Fri, 06 Nov 2020 15:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb48Q-0004II-0U
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:58:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1d9cc84-c811-47e3-b821-edc00ab1128f;
 Fri, 06 Nov 2020 15:58:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb48H-0006Yo-KR; Fri, 06 Nov 2020 15:58:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb48H-0006TC-7b; Fri, 06 Nov 2020 15:58:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb48H-00072X-6s; Fri, 06 Nov 2020 15:58:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kb48Q-0004II-0U
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:58:42 +0000
X-Inumbo-ID: c1d9cc84-c811-47e3-b821-edc00ab1128f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c1d9cc84-c811-47e3-b821-edc00ab1128f;
	Fri, 06 Nov 2020 15:58:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bPFoJIqkx5/ZQ2y482xmKiiwLrSiPRglROYKNCJISSg=; b=29ykh+86RqMMwzdGbXrb/EKW5e
	Hz2f4wmYWOZNrj4n7l+GkuHw/VjP/GU25CO6TDD3Qmt8J45UHR68IkkAxebDdXMoLCVxdr+j4pcvZ
	4XPG1zzJJuA+0zJGwEAYEOqkvD2LFiIDJ+EnVl9MEWDgzjpwTI9i0yWmrFUxWlmNdk9A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb48H-0006Yo-KR; Fri, 06 Nov 2020 15:58:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb48H-0006TC-7b; Fri, 06 Nov 2020 15:58:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb48H-00072X-6s; Fri, 06 Nov 2020 15:58:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156460-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156460: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:build-i386:xen-build:fail:regression
    xen-4.14-testing:test-armhf-armhf-libvirt:host-ping-check-xen:fail:heisenbug
    xen-4.14-testing:test-armhf-armhf-xl-vhd:leak-check/check:fail:heisenbug
    xen-4.14-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fc8fab1bb4d3a16914d8e7f6e288e946e68d5a41
X-Osstest-Versions-That:
    xen=5784d1e9424151adfdc836535489bd068c6c0700
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 15:58:33 +0000

flight 156460 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156460/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build      fail in 156404 REGR. vs. 156394

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt  10 host-ping-check-xen fail in 156404 pass in 156460
 test-armhf-armhf-xl-vhd      20 leak-check/check           fail pass in 156404

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-libvirt       1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-pair          1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 156404 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-migrupgrade   1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 156404 n/a
 test-amd64-i386-xl-raw        1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 156404 n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)        blocked in 156404 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)    blocked in 156404 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)     blocked in 156404 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)    blocked in 156404 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 156404 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)    blocked in 156404 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 156404 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-xl-shadow     1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)    blocked in 156404 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 156404 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)         blocked in 156404 n/a
 test-amd64-i386-xl            1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)     blocked in 156404 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)    blocked in 156404 n/a
 test-amd64-i386-livepatch     1 build-check(1)           blocked in 156404 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 156404 n/a
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 156404 like 156394
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156394
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156394
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156394
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156394
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156394
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156394
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156394
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156394
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fc8fab1bb4d3a16914d8e7f6e288e946e68d5a41
baseline version:
 xen                  5784d1e9424151adfdc836535489bd068c6c0700

Last test of basis   156394  2020-11-04 08:37:48 Z    2 days
Testing same since   156404  2020-11-04 22:36:21 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fc8fab1bb4d3a16914d8e7f6e288e946e68d5a41
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Nov 4 11:02:30 2020 +0100

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has lead to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
    master date: 2020-10-23 18:03:18 +0200

commit 898864c3736338548bc2f684b7e307326e0dd4a5
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Nov 4 11:01:27 2020 +0100

    pci: cleanup MSI interrupts before removing device from IOMMU
    
    Doing the MSI cleanup after removing the device from the IOMMU leads
    to the following panic on AMD hardware:
    
    Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
    ----[ Xen-4.13.1-10.0.3-d  x86_64  debug=y   Not tainted ]----
    CPU:    3
    RIP:    e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
    [...]
    Xen call trace:
       [<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
       [<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
       [<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
       [<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
       [<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
       [<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
       [<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
       [<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
       [<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
       [<ffff82d08038a432>] F lstar_enter+0x112/0x120
    
    That's because the call to iommu_remove_device on AMD hardware will
    remove the per-device interrupt remapping table, and hence the call to
    pci_cleanup_msi done afterwards will find a null intremap table and
    crash.
    
    Reorder the calls so that MSI interrupts are torn down before removing
    the device from the IOMMU.
    
    Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables")
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 710f62cc826bb8c7ead99f9d6b6b269e39ff3e98
    master date: 2020-10-23 10:13:14 +0200

commit 9f954ae7fb1c7dc3b02e5ccac6978c3a8e86086e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Nov 4 11:01:02 2020 +0100

    build: use if_changed more consistently (and correctly) for prelink*.o
    
    Switch to $(call if_changed,ld) where possible; presumably not doing so
    in e321576f4047 ("xen/build: start using if_changed") right away was an
    oversight, as it did for Arm in (just) one case. It failed to add
    prelink.o to $(targets), though, causing - judging from the observed
    behavior on x86 - undue rebuilds of the final binary (because of
    prelink.o getting rebuild for $(cmd_prelink.o) being empty, in turn
    because of .prelink.o.cmd not getting read) during "make install-xen".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    master commit: dd2cfba88c3d0e144ffec07c6b5b86e54a9d98a9
    master date: 2020-09-22 10:19:38 +0200
(qemu changes not included)
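
The $(targets) detail in the commit above can be illustrated with a
simplified Kbuild-style fragment (a sketch, not the actual xen/ Makefile
rule):

```make
# Link prelink.o via if_changed, which compares the command line saved
# in .prelink.o.cmd against the current one and relinks only on change.
prelink.o: $(ALL_OBJS) FORCE
	$(call if_changed,ld)

# Registering the target makes the build read .prelink.o.cmd back in;
# without this, $(cmd_prelink.o) stays empty and if_changed always
# considers prelink.o out of date.
targets += prelink.o
```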


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 15:58:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 15:58:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20937.47031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb48S-0004L1-Lb; Fri, 06 Nov 2020 15:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20937.47031; Fri, 06 Nov 2020 15:58:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb48S-0004Ks-GU; Fri, 06 Nov 2020 15:58:44 +0000
Received: by outflank-mailman (input) for mailman id 20937;
 Fri, 06 Nov 2020 15:58:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rw0I=EM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kb48Q-0004JK-U2
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 15:58:42 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90e85e44-f582-41b4-a4db-27a57d3e8640;
 Fri, 06 Nov 2020 15:58:41 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id p8so1036299wrx.5
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 07:58:41 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id f4sm2790188wrq.54.2020.11.06.07.58.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 07:58:40 -0800 (PST)
X-Inumbo-ID: 90e85e44-f582-41b4-a4db-27a57d3e8640
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=u9/xTj7n0+W7aoDuTIa8HDJqm5R5FIOjh0XtULP6dHU=;
        b=UVvXcJ65fg2mij68JdLHUh+8utERACtQxl/pGIuGZbJh7geWfWq84c60aOrFCDsZf0
         7fuAEoDCCB6bleTTjREz79Aifr6Bnu1EVNPhqDlcbiHgnahcpBgmYS4EHNT+/f1QXlFZ
         vAWeBNW9hSLs9sKPztmPv2JXrx3SMBBeZdeHYVWQvHPK/d3ru3Ljo3MtRbhzfCjfvd6u
         nNDn3pt3q5fZgXmexC9ZgCphzMwH9OUUKpRIxhX6ePlNeZ9rpPmQbHUhCsBZGQ2mE0D2
         77vnQguzV7jHisAQgk9/BK1NKSmuczQbBHlV+zS3bFUCqccCp4nIQvemSEDqtLpvSRdD
         5CCA==
X-Gm-Message-State: AOAM533z7lSQ/LHOOUepcmAG1aJ2QdY9OML4/zMnTnJ7KUp7Esoc8hh2
	qxjUnDf5dlXX++abjRGRwqA=
X-Google-Smtp-Source: ABdhPJxVrNWM590SdX57k+IEurGmDyCe49WRxL2Yto6w97/Z7uA7ZJ5R1xg/IkPSwHDgn7EEKjd2mg==
X-Received: by 2002:adf:fd06:: with SMTP id e6mr3467157wrr.206.1604678321112;
        Fri, 06 Nov 2020 07:58:41 -0800 (PST)
Date: Fri, 6 Nov 2020 15:58:39 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH] libxg: don't use max policy in xc_cpuid_xend_policy()
Message-ID: <20201106155839.vnhdqcptbpkbzfly@liuwe-devbox-debian-v2>
References: <4fa05759-24ac-5ff3-3db9-94537f6be95d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4fa05759-24ac-5ff3-3db9-94537f6be95d@suse.com>
User-Agent: NeoMutt/20180716

On Thu, Nov 05, 2020 at 04:56:53PM +0100, Jan Beulich wrote:
> Using max undermines the separation between default and max. For example,
> turning off AVX512F on an MPX-capable system silently turns on MPX,
> despite MPX no longer being part of the default policy. Since the
> information is used only to determine what to convert 'x' to (but not
> to e.g. validate '1' settings), this change affects guests with
> (suitable) "cpuid=" settings exactly as the earlier changes - separating
> default from max and then moving (e.g.) MPX from default to max-only -
> affected guests without such "cpuid=" settings.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I will defer this to Andrew.
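
As a sketch of the behavioural difference described above (the helper and
feature names are hypothetical stand-ins, not the actual libxc code):
resolving an 'x' (don't-care) bit against the max policy can enable a
feature, such as MPX, that the default policy no longer carries.

```python
# Hypothetical feature sets: MPX is still in max, but no longer in default.
MAX_POLICY = {"avx512f", "mpx"}
DEFAULT_POLICY = {"avx512f"}

def resolve(setting, feature, baseline):
    """Resolve one "cpuid=" bit: '1' on, '0' off, 'x' -> baseline value."""
    if setting == "x":
        return feature in baseline
    return setting == "1"

# A guest config leaving MPX as 'x': against max it is silently turned
# on; against default it stays off, matching the default/max separation.
assert resolve("x", "mpx", MAX_POLICY) is True
assert resolve("x", "mpx", DEFAULT_POLICY) is False
```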


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 16:11:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 16:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20953.47045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb4KD-0006gh-Tz; Fri, 06 Nov 2020 16:10:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20953.47045; Fri, 06 Nov 2020 16:10:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb4KD-0006ga-Qx; Fri, 06 Nov 2020 16:10:53 +0000
Received: by outflank-mailman (input) for mailman id 20953;
 Fri, 06 Nov 2020 16:10:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb4KC-0006fO-Nm
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 16:10:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d59a0b58-1976-47ca-af61-ce377a813679;
 Fri, 06 Nov 2020 16:10:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb4K3-0007Mh-Cd; Fri, 06 Nov 2020 16:10:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb4K3-0006tz-3F; Fri, 06 Nov 2020 16:10:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb4K3-0007Dk-2g; Fri, 06 Nov 2020 16:10:43 +0000
X-Inumbo-ID: d59a0b58-1976-47ca-af61-ce377a813679
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8qg2OKz63+MAXWLd1UrA59MveSppQf2QlOsH5i6BJBI=; b=TK+xkeRD0387POOjuoiGopZTlA
	2OrY0mlwFJTyq05zV7dhkIz2JKlW5kiH9Ue5YpBUBsTZkmFwgy78+Axau9Il/otwt8T4ruPfOuy81
	9v2pmTmBWaRm9nl17vM+HK+la1zNKTbnj3lOVJmsPJHQAaBXUG1D3v0yFZutsIYTjwIM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156509-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156509: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=8f0f6ff08211ace1dabc2c6a93befe60edaa97e1
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 16:10:43 +0000

flight 156509 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156509/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              8f0f6ff08211ace1dabc2c6a93befe60edaa97e1
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  119 days
Failing since        151818  2020-07-11 04:18:52 Z  118 days  113 attempts
Testing same since   156509  2020-11-06 04:19:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 24841 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 16:17:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 16:17:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20959.47057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb4Qb-0006uZ-LF; Fri, 06 Nov 2020 16:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20959.47057; Fri, 06 Nov 2020 16:17:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb4Qb-0006uS-IA; Fri, 06 Nov 2020 16:17:29 +0000
Received: by outflank-mailman (input) for mailman id 20959;
 Fri, 06 Nov 2020 16:17:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rw0I=EM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kb4Qa-0006uN-Q6
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 16:17:28 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36004ef0-1fcb-4bea-a51f-c2840a17d934;
 Fri, 06 Nov 2020 16:17:27 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id g12so1887668wrp.10
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 08:17:27 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id t11sm2980370wmf.35.2020.11.06.08.17.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 08:17:25 -0800 (PST)
X-Inumbo-ID: 36004ef0-1fcb-4bea-a51f-c2840a17d934
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=biCvBdw7KP4PimXIHYN4ShplJWo2GhQTUyxqJpPxgSo=;
        b=PRKIE7yZBjoPqgcf7pLX66QXYimiK/6DUAK2P3obTCWug+BbFLfatFQ3rCkhbuigk4
         nDSiK3eUHfJj7X6ses5mOToSHPaZjVQaWJxYJ8dg28c7Pj2lc4sIWE6bxrlUTxGdSe1f
         iwFhMshgGqk6hLKB1LN296ylrA1elaVjBCCZMcRDZPOCpdO31vngCtn6Qrb62x9QZuxr
         OC3EcjvBzPaR7jSjWKPK26LY6x86i9aE1ImIbixHL/nuUr4lyj5v5qX1KnnV7xzEMJp2
         Dp77jeWkJrbrVaotHdb2XGsXhqso1jjkBEqHZLkNckAbf7RwyyMLVNkr2aVT06bbRDgK
         99LA==
X-Gm-Message-State: AOAM532c/OltVLJR+YC0nJNFqT8y+B19HJdku7wDaNh7QVL2UYciGF/q
	qKFCF/qaaCInGsbHOKDvz78=
X-Google-Smtp-Source: ABdhPJwAv7beCDXu+ZM2PzmG1sBDLR5Q4ArOtbQp/8cB+u3mdFnql+4xDjpxq+5BKIRP6ItNIdilhA==
X-Received: by 2002:a5d:5701:: with SMTP id a1mr3487237wrv.414.1604679446507;
        Fri, 06 Nov 2020 08:17:26 -0800 (PST)
Date: Fri, 6 Nov 2020 16:17:24 +0000
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, takahiro.akashi@linaro.org,
	alex.bennee@linaro.org, masami.hiramatsu@linaro.org,
	ian.jackson@eu.citrix.com, wl@xen.org
Subject: Re: [PATCH] libxl: set vuart_gfn in libxl__build_hvm
Message-ID: <20201106161724.5hot2tzamqhhycck@liuwe-devbox-debian-v2>
References: <alpine.DEB.2.21.2011051312120.2323@sstabellini-ThinkPad-T480s>
 <20201106151146.GM2214@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201106151146.GM2214@perard.uk.xensource.com>
User-Agent: NeoMutt/20180716

On Fri, Nov 06, 2020 at 03:11:46PM +0000, Anthony PERARD wrote:
> On Thu, Nov 05, 2020 at 01:15:05PM -0800, Stefano Stabellini wrote:
> > libxl: set vuart_gfn in libxl__build_hvm
> 
> The subject is written two times ;-)
> 
> > Setting vuart_gfn was missed when switching ARM guests to the PVH build.
> > Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
> > dom->vuart_gfn.
> > 
> > Without this change, xl console cannot connect to the vuart console (-t
> > vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > index f8661e90d4..36fe8915e7 100644
> > --- a/tools/libxl/libxl_dom.c
> > +++ b/tools/libxl/libxl_dom.c
> > @@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
> >          LOG(ERROR, "hvm build set params failed");
> >          goto out;
> >      }
> > +    state->vuart_gfn = dom->vuart_gfn;
> >  
> >      rc = hvm_build_set_xs_values(gc, domid, dom, info);
> >      if (rc != 0) {
> 
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

This patch is based on an old tree. I have ported it to staging. Please
check the code and shout if it is not done correctly.

Wei.

> 
> Thanks,
> 
> -- 
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 16:22:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 16:22:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20966.47081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb4Vn-0007qS-ED; Fri, 06 Nov 2020 16:22:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20966.47081; Fri, 06 Nov 2020 16:22:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb4Vn-0007pr-BI; Fri, 06 Nov 2020 16:22:51 +0000
Received: by outflank-mailman (input) for mailman id 20966;
 Fri, 06 Nov 2020 16:22:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rw0I=EM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kb4Vl-0007pZ-QG
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 16:22:49 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5bef956-f499-4854-80e9-a767dea7f57c;
 Fri, 06 Nov 2020 16:22:48 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id 10so1227524wml.2
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 08:22:48 -0800 (PST)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q7sm2156341wrg.95.2020.11.06.08.22.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 08:22:47 -0800 (PST)
X-Inumbo-ID: a5bef956-f499-4854-80e9-a767dea7f57c
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=KG2zUBmP7kZXhXDNWYXvcoA06zpbsQshPoKcnzmqQRc=;
        b=TFAdEYGNza0esXidyXMUcff/KLc82Oj/mH5NGvy6vFoDkWKGw/KT5UHfN95oq+5Qws
         dx9M1G2el7f6Xy0+d7V72RPOFJnQGJ1f6riWjIeTYVFbV0AQSdF4GoULGopWBeB7sOwF
         iVG5ZvG62thLF7idxOI2knUSvFHi6rVHrLlgrKWaIAlBjKSjq5fr+o9XaD/lpjzGNhOs
         n+pcr8v4SzCAEFNVhKhmc6TSlWD0Pf+3bHT9DaZgwLraPOqDBfnZQeH/07NGPwCX0ZkV
         AT80/zy1vCD8uCABHG1nzwVrLsDh7+j3EsdHCJ0MFzkmeyODW9GRR6s3JuenzwTYwV0S
         ewgg==
X-Gm-Message-State: AOAM531jfpiQDjQz5IUnPuqb6yHepbAkYz1B2XvP7mlIIqYwG9geM8bs
	RwEUmJDUCQl+T/YXCslkjYE=
X-Google-Smtp-Source: ABdhPJwJWPI1M4JHxT5nzYrE0pPJbNbkhTX/x0v/qvfL3kWLS8SQd/HUc1M5Dl/05wptqSIPC2FG1Q==
X-Received: by 2002:a7b:cb46:: with SMTP id v6mr384280wmj.146.1604679767955;
        Fri, 06 Nov 2020 08:22:47 -0800 (PST)
Date: Fri, 6 Nov 2020 16:22:46 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: add readv_exact to xenctrl
Message-ID: <20201106162246.vmehbvecfqu4ewta@liuwe-devbox-debian-v2>
References: <20201028144151.2766-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201028144151.2766-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Wed, Oct 28, 2020 at 03:41:51PM +0100, Olaf Hering wrote:
> Read a batch of iovec's.
> 
> In the common case of short reads, finish individual iov's with read_exact.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

I see your series, so I will drop this one and go over that series
instead.

Wei.
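
For readers following the thread: the behaviour Olaf's commit message describes (one batched readv(), then finishing any short read per-iovec with a read-exact loop) can be sketched roughly as below. This is an illustrative sketch only, not the actual xenctrl patch; the names `readv_exact`/`read_exact` and their signatures are assumptions based on the description.

```c
/* Sketch of a readv_exact() helper as described in the patch message:
 * try readv() once, and on a short read finish the remaining iovecs
 * with a read-exact loop.  Illustrative only, not the xenctrl code. */
#include <errno.h>
#include <sys/uio.h>
#include <unistd.h>

/* Read exactly 'size' bytes into 'buf', retrying short reads and EINTR. */
static int read_exact(int fd, void *buf, size_t size)
{
    size_t done = 0;

    while (done < size) {
        ssize_t r = read(fd, (char *)buf + done, size - done);

        if (r < 0) {
            if (errno == EINTR)
                continue;
            return -1;
        }
        if (r == 0) {           /* unexpected EOF */
            errno = EIO;
            return -1;
        }
        done += r;
    }
    return 0;
}

/* Read a whole batch of iovecs, tolerating a short initial readv(). */
static int readv_exact(int fd, struct iovec *iov, int iovcnt)
{
    ssize_t r = readv(fd, iov, iovcnt);

    if (r < 0)
        return -1;

    for (int i = 0; i < iovcnt; i++) {
        if ((size_t)r >= iov[i].iov_len) {
            /* This iovec was fully satisfied by the initial readv(). */
            r -= iov[i].iov_len;
            continue;
        }
        /* Finish the partially filled (or untouched) iovec exactly. */
        if (read_exact(fd, (char *)iov[i].iov_base + r,
                       iov[i].iov_len - r))
            return -1;
        r = 0;
    }
    return 0;
}
```

In the common case the single readv() returns everything and the loop just walks the iovecs subtracting lengths; only a genuinely short read falls back to per-iovec read_exact() calls.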


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 17:10:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 17:10:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20975.47101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb5FM-00036U-3R; Fri, 06 Nov 2020 17:09:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20975.47101; Fri, 06 Nov 2020 17:09:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb5FL-00036M-Uu; Fri, 06 Nov 2020 17:09:55 +0000
Received: by outflank-mailman (input) for mailman id 20975;
 Fri, 06 Nov 2020 17:09:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb5FL-00035N-L4
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 17:09:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53aab237-3d43-4cb7-8945-2d62725d293d;
 Fri, 06 Nov 2020 17:09:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb5FE-00008h-DV; Fri, 06 Nov 2020 17:09:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb5FE-0001IW-40; Fri, 06 Nov 2020 17:09:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb5FE-0003FW-3X; Fri, 06 Nov 2020 17:09:48 +0000
X-Inumbo-ID: 53aab237-3d43-4cb7-8945-2d62725d293d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1WGR7YjIkRyYDoYKqqtjiyaJWz5v/tRhOT0hMjcOHfo=; b=laN5cohs7+KrkzRGGxRaFtuMB6
	ACEIH35vRkH9K3Q9uUrBhL7aZcLPgFjTV9yyZg1ID/jQ49nPWaazlVXLXHszXL8jdog23TUn5uwbp
	Fbv7oPH4nfmMGIfovSaL46h/Pz0uA9U15x2IJxjILb0rwNsenEv2e+tNRj76tzs4cgQk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156467-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156467: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=09af9bd9be2d3e31bba979f8cf6446017b0b863e
X-Osstest-Versions-That:
    ovmf=8d5708833509ece6ac63084dc07c8ac53c4d4c1a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 17:09:48 +0000

flight 156467 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156467/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 09af9bd9be2d3e31bba979f8cf6446017b0b863e
baseline version:
 ovmf                 8d5708833509ece6ac63084dc07c8ac53c4d4c1a

Last test of basis   156400  2020-11-04 12:10:58 Z    2 days
Testing same since   156407  2020-11-05 09:30:19 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Jeff Brasen <jbrasen@nvidia.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8d57088335..09af9bd9be  09af9bd9be2d3e31bba979f8cf6446017b0b863e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 18:58:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 18:58:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20989.47113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb6vu-00043Z-Ur; Fri, 06 Nov 2020 18:57:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20989.47113; Fri, 06 Nov 2020 18:57:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb6vu-00043S-Ro; Fri, 06 Nov 2020 18:57:58 +0000
Received: by outflank-mailman (input) for mailman id 20989;
 Fri, 06 Nov 2020 18:57:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb6vu-00043G-1u
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 18:57:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5414fc92-5a66-42b6-9352-5204e84776c3;
 Fri, 06 Nov 2020 18:57:55 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb6vq-0002Pr-Je; Fri, 06 Nov 2020 18:57:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb6vq-00074q-6c; Fri, 06 Nov 2020 18:57:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb6vq-0000UU-5t; Fri, 06 Nov 2020 18:57:54 +0000
X-Inumbo-ID: 5414fc92-5a66-42b6-9352-5204e84776c3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B8SNDGbdBoO6tfzywIFkG9Hk1jITStWk/FtI0Htv4hI=; b=yq2y+/s84znbXFhIx0FG75hHBu
	xmKp24nS0BSENv4ufU1tfsETkI98848sk65fQLynqfw6Bz0ld7JmOv5qa60wSQT8fptRerWY3jCbb
	CC5kUydybAO1iRAB+h+QYMSv+2vIUAq1B0c3kT8rAC6AzusjoLcq7NMLXBcQppuKEAcI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156462-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156462: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4ef8451b332662d004df269d4cdeb7d9f31419b5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 18:57:54 +0000

flight 156462 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156462/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 156390 REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install fail in 156390 REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 156402 REGR. vs. 152332
 build-i386-xsm                6 xen-build      fail in 156402 REGR. vs. 152332
 build-i386                    6 xen-build      fail in 156402 REGR. vs. 152332
 build-armhf                   6 xen-build      fail in 156402 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 156390 pass in 156462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 156390 pass in 156462
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 156402 pass in 156462
 test-arm64-arm64-examine      8 reboot           fail in 156402 pass in 156462
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 156390
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156390
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 156402
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156402

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 156402 n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 156402 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)    blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 156402 n/a
 test-amd64-i386-pair          1 build-check(1)           blocked in 156402 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 156402 n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 156402 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)     blocked in 156402 n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)         blocked in 156402 n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl-raw        1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-examine       1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)    blocked in 156402 n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 156402 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)     blocked in 156402 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)    blocked in 156402 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 156402 n/a
 test-armhf-armhf-examine      1 build-check(1)           blocked in 156402 n/a
 build-i386-libvirt            1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl-shadow     1 build-check(1)           blocked in 156402 n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-libvirt       1 build-check(1)           blocked in 156402 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)        blocked in 156402 n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 156402 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 156402 n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 156402 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)    blocked in 156402 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)         blocked in 156402 n/a
 test-armhf-armhf-xl           1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-xl            1 build-check(1)           blocked in 156402 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 156402 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)    blocked in 156402 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 156402 n/a
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156402 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4ef8451b332662d004df269d4cdeb7d9f31419b5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   97 days
Failing since        152366  2020-08-01 20:49:34 Z   96 days  162 attempts
Testing same since   156390  2020-11-04 02:20:15 Z    2 days    3 attempts

------------------------------------------------------------
3422 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 654687 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20995.47128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb71z-0004zf-Qd; Fri, 06 Nov 2020 19:04:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20995.47128; Fri, 06 Nov 2020 19:04:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb71z-0004zY-Nh; Fri, 06 Nov 2020 19:04:15 +0000
Received: by outflank-mailman (input) for mailman id 20995;
 Fri, 06 Nov 2020 19:04:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb71x-0004zS-Ut
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:14 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f568806-9fc9-4ed7-9c94-aa81e8b6f44a;
 Fri, 06 Nov 2020 19:04:11 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71b-0000th-31; Fri, 06 Nov 2020 19:03:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb71x-0004zS-Ut
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:14 +0000
X-Inumbo-ID: 2f568806-9fc9-4ed7-9c94-aa81e8b6f44a
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2f568806-9fc9-4ed7-9c94-aa81e8b6f44a;
	Fri, 06 Nov 2020 19:04:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=GTpuLBfYyo8TaeZo/z07ad7aIfYXt5VjexG6J+Un22Y=; b=kl8AxnoJUKENxdqbmGjtDUBKGr
	4VvvwsWDbIWXpjZUE80AyVmG5Pw1VqJDYsJwtH7NNnxp0mOGVXqwKxHaePcIX2QLEgCS8sFueGPzE
	AiTGABB6Xp+kXJg8ish427RVP/SfCFu0d17IOJ1V1HiYBELRoZFB92ZTq6pCfkW0lsJkkJrifhsmZ
	pmg+34MTaZP2+QSox6FaZ3V/B2bz9YfsUA2FysiawKz4wfLjLQFNhJ856xgqwRX1KDIB3yvZOlRbF
	NcZ4l5avCx+EDEjbrxqOm6DFPEMZDdVeorTynAxTtJLBiF/5eE5hDbP4cAerzfFjspBXGft5tkdz/
	xM4wPyUw==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71b-0000th-31; Fri, 06 Nov 2020 19:03:52 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 02/24] loop: remove loop_set_size
Date: Fri,  6 Nov 2020 20:03:14 +0100
Message-Id: <20201106190337.1973127-3-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just use set_capacity_revalidate_and_notify directly, as this function
can update the block device size as well when the last parameter is set
to true.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 37 +++++++------------------------------
 1 file changed, 7 insertions(+), 30 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index cb1191d6e945f2..86eb7e0691eef5 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -241,23 +241,6 @@ loop_validate_block_size(unsigned short bsize)
 	return 0;
 }
 
-/**
- * loop_set_size() - sets device size and notifies userspace
- * @lo: struct loop_device to set the size for
- * @size: new size of the loop device
- *
- * Callers must validate that the size passed into this function fits into
- * a sector_t, eg using loop_validate_size()
- */
-static void loop_set_size(struct loop_device *lo, loff_t size)
-{
-	struct block_device *bdev = lo->lo_device;
-
-	bd_set_nr_sectors(bdev, size);
-
-	set_capacity_revalidate_and_notify(lo->lo_disk, size, false);
-}
-
 static inline int
 lo_do_transfer(struct loop_device *lo, int cmd,
 	       struct page *rpage, unsigned roffs,
@@ -1076,7 +1059,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	struct address_space *mapping;
 	struct block_device *claimed_bdev = NULL;
 	int		error;
-	loff_t		size;
 	bool		partscan;
 	unsigned short  bsize;
 
@@ -1164,9 +1146,8 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
 
-	size = get_loop_size(lo, file);
-	loop_set_size(lo, size);
-
+	set_capacity_revalidate_and_notify(lo->lo_disk, get_loop_size(lo, file),
+			true);
 	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
 		      block_size(inode->i_bdev) : PAGE_SIZE);
 
@@ -1402,9 +1383,9 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
 	lo->lo_flags |= prev_lo_flags & ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
 
 	if (size_changed) {
-		loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
-					   lo->lo_backing_file);
-		loop_set_size(lo, new_size);
+		set_capacity_revalidate_and_notify(lo->lo_disk,
+				get_size(lo->lo_offset, lo->lo_sizelimit,
+					 lo->lo_backing_file), true);
 	}
 
 	loop_config_discard(lo);
@@ -1580,14 +1561,10 @@ loop_get_status64(struct loop_device *lo, struct loop_info64 __user *arg) {
 
 static int loop_set_capacity(struct loop_device *lo)
 {
-	loff_t size;
-
 	if (unlikely(lo->lo_state != Lo_bound))
 		return -ENXIO;
-
-	size = get_loop_size(lo, lo->lo_backing_file);
-	loop_set_size(lo, size);
-
+	set_capacity_revalidate_and_notify(lo->lo_disk,
+			get_loop_size(lo, lo->lo_backing_file), true);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20996.47140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb724-00051N-3L; Fri, 06 Nov 2020 19:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20996.47140; Fri, 06 Nov 2020 19:04:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb723-00051F-Vg; Fri, 06 Nov 2020 19:04:19 +0000
Received: by outflank-mailman (input) for mailman id 20996;
 Fri, 06 Nov 2020 19:04:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb722-0004zS-R7
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:18 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2eaefe42-f66e-4f13-82d6-e370c42c2d0e;
 Fri, 06 Nov 2020 19:04:11 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71X-0000tW-UC; Fri, 06 Nov 2020 19:03:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb722-0004zS-R7
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:18 +0000
X-Inumbo-ID: 2eaefe42-f66e-4f13-82d6-e370c42c2d0e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2eaefe42-f66e-4f13-82d6-e370c42c2d0e;
	Fri, 06 Nov 2020 19:04:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=stmwFAlkeYayJwqxLAzxAb5snaHT5ewOrHfW4MD4mwk=; b=tB2Wdu/PMT4HaSqbNXLjbJSinK
	5hsrT8s37btBvfcSB9O4uU93SD5PFBnxHXny99OrIualpr25xUxUD6huKCkWn/+mqbBjsPu8GBT2K
	BThvX+OWl/FleZFRHiOhjc346V7c34ia1qBhD1BbATvc5xUrGPzH/rmzYuoB0ZX7+lPL+RKQlCljc
	jG3EgsYXLaMRAjav+9rOo/D7/8GwP4iHme6XgziXy4nXb8g0UObGiuuRw0pjd58Xmnsj883VpUPPn
	/WqbOngIbQjdRi22wAva6IBkP7TmfNzppcbD1VYiYpN4uQufsS9sHdI0GfKYll6bY7LU3TdfwkqXU
	DJQocnLQ==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71X-0000tW-UC; Fri, 06 Nov 2020 19:03:50 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 01/24] block: remove the call to __invalidate_device in check_disk_size_change
Date: Fri,  6 Nov 2020 20:03:13 +0100
Message-Id: <20201106190337.1973127-2-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

__invalidate_device without the kill_dirty parameter just invalidates
various clean entries in caches, which doesn't really help us with
anything, but can cause all kinds of horrible lock orders due to how
it calls into the file system.  The only reason this hasn't been a
major issue is that so many people use partitions, for which no
invalidation was performed anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9e84b1928b9401..66ebf594c97f47 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1334,12 +1334,6 @@ static void check_disk_size_change(struct gendisk *disk,
 		i_size_write(bdev->bd_inode, disk_size);
 	}
 	spin_unlock(&bdev->bd_size_lock);
-
-	if (bdev_size > disk_size) {
-		if (__invalidate_device(bdev, false))
-			pr_warn("VFS: busy inodes on resized disk %s\n",
-				disk->disk_name);
-	}
 }
 
 /**
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20997.47151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb729-00054y-As; Fri, 06 Nov 2020 19:04:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20997.47151; Fri, 06 Nov 2020 19:04:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb729-00054o-7k; Fri, 06 Nov 2020 19:04:25 +0000
Received: by outflank-mailman (input) for mailman id 20997;
 Fri, 06 Nov 2020 19:04:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb727-0004zS-RK
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:23 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6a1d530-ceae-4c28-a0ef-252dc15e9ac8;
 Fri, 06 Nov 2020 19:04:12 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71P-0000t7-34; Fri, 06 Nov 2020 19:03:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb727-0004zS-RK
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:23 +0000
X-Inumbo-ID: d6a1d530-ceae-4c28-a0ef-252dc15e9ac8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d6a1d530-ceae-4c28-a0ef-252dc15e9ac8;
	Fri, 06 Nov 2020 19:04:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=KZOj2ixaxfr+x29An18oeDfaJx3FSv//8uc71Hhn2A0=; b=snjJGRx7qVkoWA28XGyBSN/FU5
	1P5/6U8yQxPbAyQppy+ZGNo7hD7TWcSzoG9wJV5MoXxMDDYieBlcG6uvG1NU21CqTnxt3WKQtAuJf
	UqKqna1ID0lcpQmhrivJzFz3YtmWKpZm0KUamzMFajaTMK5TOilliU8pAMmZ7jcA4xKsEPe9u9FQe
	QLgag4ggR9M8REy4gaAzi95ookXu/+vKsN1gxvlBcIDCNpbUaA+k+bepajweHAsKHRlnIci3X0/gt
	+qiwIBcx7dd+5GmJaUUfQwr2V0s1euix28lr0Y0k8IK6gPjGXKGEZvDvsOwgNI/M1zhmywSH4Q6X6
	FMOFWKCQ==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71P-0000t7-34; Fri, 06 Nov 2020 19:03:47 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: cleanup updating the size of block devices
Date: Fri,  6 Nov 2020 20:03:12 +0100
Message-Id: <20201106190337.1973127-1-hch@lst.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Jens,

this series builds on top of the work that went into the last merge window,
and makes sure we have a single coherent interface for updating the size of
a block device.

Diffstat:
 block/genhd.c                  |   16 +++----
 drivers/block/aoe/aoecmd.c     |   15 +-----
 drivers/block/drbd/drbd_main.c |    6 --
 drivers/block/loop.c           |   36 ++--------------
 drivers/block/nbd.c            |   88 +++++++++++++----------------------------
 drivers/block/pktcdvd.c        |    3 -
 drivers/block/rbd.c            |    3 -
 drivers/block/rnbd/rnbd-clt.c  |    3 -
 drivers/block/virtio_blk.c     |    3 -
 drivers/block/xen-blkfront.c   |    2 
 drivers/block/zram/zram_drv.c  |    7 ---
 drivers/md/dm-raid.c           |    3 -
 drivers/md/dm.c                |    3 -
 drivers/md/md-cluster.c        |    8 ---
 drivers/md/md-linear.c         |    3 -
 drivers/md/md.c                |   24 ++++-------
 drivers/nvme/host/core.c       |   18 --------
 drivers/scsi/sd.c              |    9 +---
 fs/block_dev.c                 |    7 ---
 include/linux/genhd.h          |    3 -
 20 files changed, 76 insertions(+), 184 deletions(-)


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20998.47164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72D-00059B-JD; Fri, 06 Nov 2020 19:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20998.47164; Fri, 06 Nov 2020 19:04:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72D-000594-Fz; Fri, 06 Nov 2020 19:04:29 +0000
Received: by outflank-mailman (input) for mailman id 20998;
 Fri, 06 Nov 2020 19:04:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72C-0004zS-RR
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:28 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a259f972-9fc8-4630-878d-9919c92a4c68;
 Fri, 06 Nov 2020 19:04:15 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71d-0000tx-H2; Fri, 06 Nov 2020 19:03:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72C-0004zS-RR
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:28 +0000
X-Inumbo-ID: a259f972-9fc8-4630-878d-9919c92a4c68
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a259f972-9fc8-4630-878d-9919c92a4c68;
	Fri, 06 Nov 2020 19:04:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=xoeiriVlvBvSZjV6Q4wYdmBzo8f6g+sLoFZ2s4edyk8=; b=NxVhEEHR/C1EYxh35k4vDv/Uce
	dlWSgLAuqe0PKAr557dVmJ6lHVGfcgVFnBUblSiUPXhcm7QnYEpJyfIExsL2RkvO0AWimJ5hCVMid
	98+jZXSuhAiXHDHIp7itgE4sFgG/X1barPzhD8UvMkia+uZ7DGb5lf04+4P3nYn+v8TAMz9uQ9kf9
	7AB5YlkNoWmNCdg+jrCrZV0P85fSt8+cPZJVIKiJuJdYGwSt895sPIIaNicR7kG1wfcYnfRHs/XmY
	WH3KVWDg4MFGBPTPloHXoU/v9MRzD7180eeFlpDdxvrfJdvQ4lqjy6ouInvMW3GmGAOlPSZXFmrve
	l411IOvA==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71d-0000tx-H2; Fri, 06 Nov 2020 19:03:54 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update the bdev size
Date: Fri,  6 Nov 2020 20:03:15 +0100
Message-Id: <20201106190337.1973127-4-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

There is no good reason to call revalidate_disk_size separately.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 376096bfc54a83..4e86c9aafd88a7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 			capacity = 0;
 	}
 
-	set_capacity_revalidate_and_notify(disk, capacity, false);
+	set_capacity_revalidate_and_notify(disk, capacity, true);
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
@@ -2136,7 +2136,6 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
 		blk_stack_limits(&ns->head->disk->queue->limits,
 				 &ns->queue->limits, 0);
 		blk_queue_update_readahead(ns->head->disk->queue);
-		nvme_update_bdev_size(ns->head->disk);
 		blk_mq_unfreeze_queue(ns->head->disk->queue);
 	}
 #endif
@@ -3965,8 +3964,6 @@ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_ids *ids)
 	 */
 	if (ret && ret != -ENOMEM && !(ret > 0 && !(ret & NVME_SC_DNR)))
 		nvme_ns_remove(ns);
-	else
-		revalidate_disk_size(ns->disk, true);
 }
 
 static void nvme_validate_or_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.20999.47176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72I-0005Ev-Td; Fri, 06 Nov 2020 19:04:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 20999.47176; Fri, 06 Nov 2020 19:04:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72I-0005Ek-P8; Fri, 06 Nov 2020 19:04:34 +0000
Received: by outflank-mailman (input) for mailman id 20999;
 Fri, 06 Nov 2020 19:04:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72H-0004zS-Rg
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:33 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dad4b59-1b73-467f-827d-6f2a3aa13aa0;
 Fri, 06 Nov 2020 19:04:17 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71f-0000uK-9c; Fri, 06 Nov 2020 19:03:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72H-0004zS-Rg
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:33 +0000
X-Inumbo-ID: 0dad4b59-1b73-467f-827d-6f2a3aa13aa0
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0dad4b59-1b73-467f-827d-6f2a3aa13aa0;
	Fri, 06 Nov 2020 19:04:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=z4FJvyEBdJUNH2ICfgDhXYhI93Iugb+FUQTixBMdEJs=; b=N/BTEsalf53sWMF1NynFhjnBWQ
	uMEzJhDiUhbdgTn6HR32ujb4p0zRiPS+sOoVdEnyZv+Dij8IJUweK1eMVi2RBr1hXUSEwwavtHExg
	sNr9j1HAVcinwU0zvVZ9QarhyRI4g+YDE9woukNnt8R+toHJE6Yyj0N6xE2RsmnTfWYuqVKiRBcdc
	C+RJ0pjKXC9y7PJrEmbrlFyzinEMuLJP5ArJM2hoiLEKHH5IAxDcyRZUkdM+9P4+gn1Pn36IKFZYO
	AdL3p37VKRLnvBgnAkwYvUIBdzrmEngx2OqoC8PTe/8DN9HA5diR/JaO+uxOByGNXLNxU91/DaxUA
	FUf64JDg==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71f-0000uK-9c; Fri, 06 Nov 2020 19:03:56 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 04/24] sd: update the bdev size in sd_revalidate_disk
Date: Fri,  6 Nov 2020 20:03:16 +0100
Message-Id: <20201106190337.1973127-5-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

This avoids the extra call to revalidate_disk_size in sd_rescan and
is otherwise a no-op because the size did not change, or we are in
the probe path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/scsi/sd.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 656bcf4940d6d1..4a34dd5b153196 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1750,10 +1750,8 @@ static int sd_sync_cache(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr)
 static void sd_rescan(struct device *dev)
 {
 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
-	int ret;
 
-	ret = sd_revalidate_disk(sdkp->disk);
-	revalidate_disk_size(sdkp->disk, ret == 0);
+	sd_revalidate_disk(sdkp->disk);
 }
 
 static int sd_ioctl(struct block_device *bdev, fmode_t mode,
@@ -3266,7 +3264,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	sdkp->first_scan = 0;
 
 	set_capacity_revalidate_and_notify(disk,
-		logical_to_sectors(sdp, sdkp->capacity), false);
+		logical_to_sectors(sdp, sdkp->capacity), true);
 	sd_config_write_same(sdkp);
 	kfree(buffer);
 
@@ -3276,7 +3274,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * capacity to 0.
 	 */
 	if (sd_zbc_revalidate_zones(sdkp))
-		set_capacity_revalidate_and_notify(disk, 0, false);
+		set_capacity_revalidate_and_notify(disk, 0, true);
 
  out:
 	return 0;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21002.47188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72O-0005KL-59; Fri, 06 Nov 2020 19:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21002.47188; Fri, 06 Nov 2020 19:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72O-0005KE-1g; Fri, 06 Nov 2020 19:04:40 +0000
Received: by outflank-mailman (input) for mailman id 21002;
 Fri, 06 Nov 2020 19:04:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72M-0004zS-Rs
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:38 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6d61b0e-4152-440b-9c21-c295b8b86e40;
 Fri, 06 Nov 2020 19:04:18 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71j-0000vL-Sx; Fri, 06 Nov 2020 19:04:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72M-0004zS-Rs
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:38 +0000
X-Inumbo-ID: c6d61b0e-4152-440b-9c21-c295b8b86e40
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c6d61b0e-4152-440b-9c21-c295b8b86e40;
	Fri, 06 Nov 2020 19:04:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=msAxRVgou3+s48HjnTxbv/pHmvKh+OTllxGOwTa9PDs=; b=tH8UdTWnkIuPETR5ERZJsxXqO9
	RuvncvGLMp3eecYVL5xbljvoepHQIy9LJuMEB0q+0oEeVpE399lmWu6TR7di2UcX2jvpnbzMAgO+T
	lVDmVMLdl2vdcru/zg5P/TlhhczznEvHfAs7dYsm2yxmv3ZUvGZmRnMrTXcu7NYlNoXgOCzkHUicc
	AGwDOPnsGK9wvtFNI87/4PdBzDhhP3bDY7qTssXyAvbxU4+VHn4Hlisg3BkNqrDEoze0VUh2Ufbsg
	sLEPDubY8OG8SNMA5E0TVDhcZDYddMUr3C9Qfc/tX9zYtsMYh7soL6M1ihPRkp1F9FRPk6C6a4b5z
	IcLgD9AQ==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71j-0000vL-Sx; Fri, 06 Nov 2020 19:04:00 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 06/24] block: add a return value to set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:18 +0100
Message-Id: <20201106190337.1973127-7-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Return whether the function ended up sending a uevent.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c         | 7 +++++--
 include/linux/genhd.h | 2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index d8d9d6c1c916e1..8c350fecfe8bfe 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -47,9 +47,9 @@ static void disk_release_events(struct gendisk *disk);
 
 /*
  * Set disk capacity and notify if the size is not currently zero and will not
- * be set to zero.
+ * be set to zero.  Returns true if a uevent was sent, otherwise false.
  */
-void set_capacity_and_notify(struct gendisk *disk, sector_t size)
+bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
 {
 	sector_t capacity = get_capacity(disk);
 
@@ -60,7 +60,10 @@ void set_capacity_and_notify(struct gendisk *disk, sector_t size)
 		char *envp[] = { "RESIZE=1", NULL };
 
 		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
+		return true;
 	}
+
+	return false;
 }
 EXPORT_SYMBOL_GPL(set_capacity_and_notify);
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 596f31b5a3e133..4b22bfd9336e1a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -315,7 +315,7 @@ static inline int get_disk_ro(struct gendisk *disk)
 extern void disk_block_events(struct gendisk *disk);
 extern void disk_unblock_events(struct gendisk *disk);
 extern void disk_flush_events(struct gendisk *disk, unsigned int mask);
-void set_capacity_and_notify(struct gendisk *disk, sector_t size);
+bool set_capacity_and_notify(struct gendisk *disk, sector_t size);
 
 /* drivers/char/random.c */
 extern void add_disk_randomness(struct gendisk *disk) __latent_entropy;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21004.47200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72S-0005Q2-Lw; Fri, 06 Nov 2020 19:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21004.47200; Fri, 06 Nov 2020 19:04:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72S-0005Pt-IG; Fri, 06 Nov 2020 19:04:44 +0000
Received: by outflank-mailman (input) for mailman id 21004;
 Fri, 06 Nov 2020 19:04:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72R-0004zS-Ru
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:43 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4ffafb3-da66-4d47-ae07-8f13879f8baa;
 Fri, 06 Nov 2020 19:04:20 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71h-0000us-6I; Fri, 06 Nov 2020 19:03:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72R-0004zS-Ru
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:43 +0000
X-Inumbo-ID: f4ffafb3-da66-4d47-ae07-8f13879f8baa
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f4ffafb3-da66-4d47-ae07-8f13879f8baa;
	Fri, 06 Nov 2020 19:04:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=WRVMysZVU7Ci7qCB6ml6ZPr77ooiCVZTgGNm0z1Kbac=; b=EwEW1dmIzT0KztvNtJRW2Uubcj
	tlurSCjfhNkHrGfPqn0+Kq/EJbeTYzgj/PyWhFzEzqGwDqMIQgTxH+4oNmCQ8LfrXPJFUgGFC+Cq9
	F1IZ+/cmfA2k4ZHSyD2l4TZouA9lJMPSR6Q5sWOPNWLmVZx8CHk9p9r181uX//s3t7DHRcDusqPUa
	9w9+yAWsWRC+aRbLhdpzbhO2y/7jPPlu5VvKDsF8GH0RYI5IXw8u5pwL5aW0jbllMeck6ak0oKTQr
	GwpmBUxE2ovzhZAP8VMIeX9plnE1wp7Omf0GdvOKlmwNyJpTTBo9MvNnbQMgbExiCingPKDwlDSjX
	5fWfThtA==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71h-0000us-6I; Fri, 06 Nov 2020 19:03:59 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 05/24] block: remove the update_bdev parameter from set_capacity_revalidate_and_notify
Date: Fri,  6 Nov 2020 20:03:17 +0100
Message-Id: <20201106190337.1973127-6-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The update_bdev argument is always set to true, so remove it.  Also
rename the function to the slightly less verbose set_capacity_and_notify,
as propagating the disk size to the block device isn't really
revalidation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c                | 13 +++++--------
 drivers/block/loop.c         | 11 +++++------
 drivers/block/virtio_blk.c   |  2 +-
 drivers/block/xen-blkfront.c |  2 +-
 drivers/nvme/host/core.c     |  2 +-
 drivers/scsi/sd.c            |  5 ++---
 include/linux/genhd.h        |  3 +--
 7 files changed, 16 insertions(+), 22 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 0a273211fec283..d8d9d6c1c916e1 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -46,17 +46,15 @@ static void disk_del_events(struct gendisk *disk);
 static void disk_release_events(struct gendisk *disk);
 
 /*
- * Set disk capacity and notify if the size is not currently
- * zero and will not be set to zero
+ * Set disk capacity and notify if the size is not currently zero and will not
+ * be set to zero.
  */
-void set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
-					bool update_bdev)
+void set_capacity_and_notify(struct gendisk *disk, sector_t size)
 {
 	sector_t capacity = get_capacity(disk);
 
 	set_capacity(disk, size);
-	if (update_bdev)
-		revalidate_disk_size(disk, true);
+	revalidate_disk_size(disk, true);
 
 	if (capacity != size && capacity != 0 && size != 0) {
 		char *envp[] = { "RESIZE=1", NULL };
@@ -64,8 +62,7 @@ void set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
 		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
 	}
 }
-
-EXPORT_SYMBOL_GPL(set_capacity_revalidate_and_notify);
+EXPORT_SYMBOL_GPL(set_capacity_and_notify);
 
 /*
  * Format the device name of the indicated disk into the supplied buffer and
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 86eb7e0691eef5..77937b760ee0fc 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1146,8 +1146,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
 
-	set_capacity_revalidate_and_notify(lo->lo_disk, get_loop_size(lo, file),
-			true);
+	set_capacity_and_notify(lo->lo_disk, get_loop_size(lo, file));
 	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
 		      block_size(inode->i_bdev) : PAGE_SIZE);
 
@@ -1383,9 +1382,9 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
 	lo->lo_flags |= prev_lo_flags & ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
 
 	if (size_changed) {
-		set_capacity_revalidate_and_notify(lo->lo_disk,
+		set_capacity_and_notify(lo->lo_disk,
 				get_size(lo->lo_offset, lo->lo_sizelimit,
-					 lo->lo_backing_file), true);
+					 lo->lo_backing_file));
 	}
 
 	loop_config_discard(lo);
@@ -1563,8 +1562,8 @@ static int loop_set_capacity(struct loop_device *lo)
 {
 	if (unlikely(lo->lo_state != Lo_bound))
 		return -ENXIO;
-	set_capacity_revalidate_and_notify(lo->lo_disk,
-			get_loop_size(lo, lo->lo_backing_file), true);
+	set_capacity_and_notify(lo->lo_disk,
+			get_loop_size(lo, lo->lo_backing_file));
 	return 0;
 }
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a314b9382442b6..3e812b4c32e669 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -470,7 +470,7 @@ static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
 		   cap_str_10,
 		   cap_str_2);
 
-	set_capacity_revalidate_and_notify(vblk->disk, capacity, true);
+	set_capacity_and_notify(vblk->disk, capacity);
 }
 
 static void virtblk_config_changed_work(struct work_struct *work)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 48629d3433b4c3..79521e33d30ed5 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2370,7 +2370,7 @@ static void blkfront_connect(struct blkfront_info *info)
 			return;
 		printk(KERN_INFO "Setting capacity to %Lu\n",
 		       sectors);
-		set_capacity_revalidate_and_notify(info->gd, sectors, true);
+		set_capacity_and_notify(info->gd, sectors);
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 4e86c9aafd88a7..aa6e27c2eec945 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 			capacity = 0;
 	}
 
-	set_capacity_revalidate_and_notify(disk, capacity, true);
+	set_capacity_and_notify(disk, capacity);
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 4a34dd5b153196..a2a4f385833d6c 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3263,8 +3263,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 
 	sdkp->first_scan = 0;
 
-	set_capacity_revalidate_and_notify(disk,
-		logical_to_sectors(sdp, sdkp->capacity), true);
+	set_capacity_and_notify(disk, logical_to_sectors(sdp, sdkp->capacity));
 	sd_config_write_same(sdkp);
 	kfree(buffer);
 
@@ -3274,7 +3273,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * capacity to 0.
 	 */
 	if (sd_zbc_revalidate_zones(sdkp))
-		set_capacity_revalidate_and_notify(disk, 0, true);
+		set_capacity_and_notify(disk, 0);
 
  out:
 	return 0;
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 38f23d75701379..596f31b5a3e133 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -315,8 +315,7 @@ static inline int get_disk_ro(struct gendisk *disk)
 extern void disk_block_events(struct gendisk *disk);
 extern void disk_unblock_events(struct gendisk *disk);
 extern void disk_flush_events(struct gendisk *disk, unsigned int mask);
-void set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
-		bool update_bdev);
+void set_capacity_and_notify(struct gendisk *disk, sector_t size);
 
 /* drivers/char/random.c */
 extern void add_disk_randomness(struct gendisk *disk) __latent_entropy;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21008.47212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72Y-0005WJ-39; Fri, 06 Nov 2020 19:04:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21008.47212; Fri, 06 Nov 2020 19:04:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72X-0005W9-VE; Fri, 06 Nov 2020 19:04:49 +0000
Received: by outflank-mailman (input) for mailman id 21008;
 Fri, 06 Nov 2020 19:04:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72W-0004zS-S9
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:48 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6fca161-46f1-4d6e-a553-1005fb0dbc04;
 Fri, 06 Nov 2020 19:04:20 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71l-0000vc-9Z; Fri, 06 Nov 2020 19:04:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72W-0004zS-S9
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:48 +0000
X-Inumbo-ID: b6fca161-46f1-4d6e-a553-1005fb0dbc04
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b6fca161-46f1-4d6e-a553-1005fb0dbc04;
	Fri, 06 Nov 2020 19:04:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=S6R/LHkwABMFjsoJfGZBhUHQ6k4Dx5wFWz0jliJ5rD4=; b=U1It/9VbhOthOZFs+YaVYw6zCF
	hNCoh8Q0VrrRw+fcw+giRCsYx4IzyJqpYjQUkT9hnCEW7vlBjyfGp4xQIQZdpzcZqGIczsa9Rbzeq
	csXqFZUcF409NVlMSBJ6emZObaktQ+PW12dNKE6Wt3cwNbOS6p1R+LMqGZ6N5QYbqrmyAk58AFU7i
	vOL8ZxppeFPE2SWuIK6cM1/GCx9SodNxTSAOH8MIGdsdIVTf6kNR7ak6SkuuXu0m07Pf4gSvqNRxz
	GZKRdWmd7ZGqToecIIzl+SJ6plPlZPGQgVa0wCWrE9dOJ0tigJgjfSwvF9I0uonfZF33Uc7mHc/4w
	bMzYnQig==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71l-0000vc-9Z; Fri, 06 Nov 2020 19:04:02 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 07/24] nbd: remove the call to set_blocksize
Date: Fri,  6 Nov 2020 20:03:19 +0100
Message-Id: <20201106190337.1973127-8-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Block drivers have no business setting the file system concept of a
block size.
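
The separation the commit message draws can be sketched in miniature. This is an illustrative model only; the struct and function names below are invented, not the kernel API. The queue's logical/physical block size describes the hardware and is owned by the driver, while the soft block size used for buffered I/O belongs to the filesystem layer, which is why the driver-side set_blocksize call is dropped:

```c
#include <assert.h>

/* Illustrative only: invented types standing in for the kernel's
 * request queue limits and block-device state. */
struct queue_limits { unsigned int logical_block_size; };
struct bdev_state   { unsigned int fs_block_size; };

/* Driver-owned: describes what the hardware can address. */
static void driver_set_queue_block_size(struct queue_limits *q,
					unsigned int size)
{
	q->logical_block_size = size;
}

/* Filesystem-owned: the soft block size for buffered I/O.  The patch
 * removes the driver's call into this layer; only the fs sets it. */
static void fs_set_block_size(struct bdev_state *b, unsigned int size)
{
	b->fs_block_size = size;
}
```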

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index c4f9ccf5cc2ac5..f618688a196654 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,7 +296,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd, bool start)
+static void nbd_size_update(struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
 	struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -311,11 +311,9 @@ static void nbd_size_update(struct nbd_device *nbd, bool start)
 	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
 	set_capacity(nbd->disk, nr_sectors);
 	if (bdev) {
-		if (bdev->bd_disk) {
+		if (bdev->bd_disk)
 			bd_set_nr_sectors(bdev, nr_sectors);
-			if (start)
-				set_blocksize(bdev, config->blksize);
-		} else
+		else
 			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
 		bdput(bdev);
 	}
@@ -329,7 +327,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
 	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd, false);
+		nbd_size_update(nbd);
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1309,7 +1307,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd, true);
+	nbd_size_update(nbd);
 	return error;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:04:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:04:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21012.47224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72d-0005ck-Ci; Fri, 06 Nov 2020 19:04:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21012.47224; Fri, 06 Nov 2020 19:04:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb72d-0005cb-91; Fri, 06 Nov 2020 19:04:55 +0000
Received: by outflank-mailman (input) for mailman id 21012;
 Fri, 06 Nov 2020 19:04:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72b-0004zS-SG
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:04:53 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3757b5b5-6a3e-4b6b-b529-6201045e196a;
 Fri, 06 Nov 2020 19:04:25 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71m-0000vs-VH; Fri, 06 Nov 2020 19:04:03 +0000
X-Inumbo-ID: 3757b5b5-6a3e-4b6b-b529-6201045e196a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=q/L6cOxJboQ8fMYOH5utFw/if9lMf4COUZKaNb2WGbs=; b=hl460dmzXJBKetE4r6CW7QvNJo
	VkffKRrqjhZF93Vt40enWih+9p+y26GcE/rniPHlvod8GrvgAvfGMGb1Nkm00p5n0DU486jzQ6Yux
	IU7t+sAbqTU8v2t3O7XOblCXjYv+o1HhmiSWaknEofRCQovdBTK06sKpfdmqamVGneGuASJLtpt7t
	2emcPi9cCQ4RXRtcj9eyJpB1gz7JniFo7+170kul2s3VDF3RGNWYQRKeUcCru+xXzC1LXBbryYfhk
	lmt03zbadiUs7kmNfdmnrLCOSYvR4g2mEUOusgnP2pCpH5V4pgsMtfalc6Vmbmq1SGaCwF9K7sSnG
	IQDZ+ZYQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 08/24] nbd: move the task_recv check into nbd_size_update
Date: Fri,  6 Nov 2020 20:03:20 +0100
Message-Id: <20201106190337.1973127-9-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

nbd_size_update is about to acquire a few more callers, so lift the check
into the function.
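
The refactor pattern here — moving a precondition out of the callers and into the callee so future callers cannot forget it — can be sketched with invented names (this is not the nbd code itself):

```c
#include <assert.h>
#include <stdbool.h>

struct dev {
	bool receiving;             /* stand-in for nbd->task_recv */
	unsigned long long sectors;
};

/* After the lift, the callee checks its own precondition; every new
 * caller gets the guard for free instead of duplicating it. */
static void dev_size_update(struct dev *d, unsigned long long sectors)
{
	if (!d->receiving)
		return;
	d->sectors = sectors;
}
```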

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index f618688a196654..58b7090dcbd832 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -299,8 +299,11 @@ static void nbd_size_clear(struct nbd_device *nbd)
 static void nbd_size_update(struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
-	struct block_device *bdev = bdget_disk(nbd->disk, 0);
 	sector_t nr_sectors = config->bytesize >> 9;
+	struct block_device *bdev;
+
+	if (!nbd->task_recv)
+		return;
 
 	if (config->flags & NBD_FLAG_SEND_TRIM) {
 		nbd->disk->queue->limits.discard_granularity = config->blksize;
@@ -309,7 +312,9 @@ static void nbd_size_update(struct nbd_device *nbd)
 	}
 	blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
 	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+
 	set_capacity(nbd->disk, nr_sectors);
+	bdev = bdget_disk(nbd->disk, 0);
 	if (bdev) {
 		if (bdev->bd_disk)
 			bd_set_nr_sectors(bdev, nr_sectors);
@@ -326,8 +331,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	struct nbd_config *config = nbd->config;
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
-	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd);
+	nbd_size_update(nbd);
 }
 
 static void nbd_complete_rq(struct request *req)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21045.47248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Al-0006tN-JS; Fri, 06 Nov 2020 19:13:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21045.47248; Fri, 06 Nov 2020 19:13:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Al-0006tG-GI; Fri, 06 Nov 2020 19:13:19 +0000
Received: by outflank-mailman (input) for mailman id 21045;
 Fri, 06 Nov 2020 19:13:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73F-0004zS-TT
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:33 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7921d658-9789-43de-af17-1d5558d8f5aa;
 Fri, 06 Nov 2020 19:04:36 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb723-00010x-Br; Fri, 06 Nov 2020 19:04:20 +0000
X-Inumbo-ID: 7921d658-9789-43de-af17-1d5558d8f5aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=tNHc+yKxFex2BA3VRJKRfYT+Q/BQGvkbTVBIgZi7Zak=; b=uwbAYUWtYsWNkpwUoogtJngFDI
	X2niLn/0gDbVghXFusMEzeWZ+C5KIQwkGtGxGZ6oxJj5WI+iW+O0SJnD2dR7Qk1cxpxM1Piqqibv1
	W2PHD40tu/vh96I3Z/yauJblfPpdXElTpc73LpnmGWgwgPicld7nbqsXMhQ6te+RWkgdNHLDoexl6
	4ZtBRCP240kPDW2iHuDGSh6PBKkb6TKwmX9M9z5dbJ2wi7u/yKw5Ye3Y1JEeN7/i6H6eJ4opYNCFv
	Z6kB/Jw1bpis23wpaAMdzULdhHjFYROT27VSovcE6cIP+UJFN2KPawN+zY68HvUyNDxCK9BDtcB3r
	KthEYJEA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 16/24] drbd: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:28 +0100
Message-Id: <20201106190337.1973127-17-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.
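
A minimal model of the combined helper (invented types; the real set_capacity_and_notify also emits a KOBJ_CHANGE uevent and updates the bdev inode size):

```c
#include <assert.h>
#include <stdbool.h>

struct disk {
	unsigned long long capacity; /* in 512-byte sectors */
	int uevents;                 /* counts change notifications */
};

/* Sets the capacity and, only when it actually changed, records one
 * notification - so callers no longer pair set_capacity() with a
 * separate revalidate/notify step. */
static bool disk_set_capacity_and_notify(struct disk *d,
					 unsigned long long size)
{
	bool changed = d->capacity != size;

	d->capacity = size;
	if (changed)
		d->uevents++;
	return changed;
}
```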

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/drbd/drbd_main.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 65b95aef8dbc95..1c8c18b2a25f33 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2036,8 +2036,7 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size)
 {
 	char ppb[10];
 
-	set_capacity(device->vdisk, size);
-	revalidate_disk_size(device->vdisk, false);
+	set_capacity_and_notify(device->vdisk, size);
 
 	drbd_info(device, "size = %s (%llu KB)\n",
 		ppsize(ppb, size>>1), (unsigned long long)size>>1);
@@ -2068,8 +2067,7 @@ void drbd_device_cleanup(struct drbd_device *device)
 	}
 	D_ASSERT(device, first_peer_device(device)->connection->net_conf == NULL);
 
-	set_capacity(device->vdisk, 0);
-	revalidate_disk_size(device->vdisk, false);
+	set_capacity_and_notify(device->vdisk, 0);
 	if (device->bitmap) {
 		/* maybe never allocated. */
 		drbd_bm_resize(device, 0, 1);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21044.47236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Aj-0006sC-BR; Fri, 06 Nov 2020 19:13:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21044.47236; Fri, 06 Nov 2020 19:13:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Aj-0006s5-8S; Fri, 06 Nov 2020 19:13:17 +0000
Received: by outflank-mailman (input) for mailman id 21044;
 Fri, 06 Nov 2020 19:13:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73y-0004zS-V0
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:06:18 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bda410e-d641-4b51-98ac-98b7e1d729e6;
 Fri, 06 Nov 2020 19:04:55 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb72G-00016V-3y; Fri, 06 Nov 2020 19:04:32 +0000
X-Inumbo-ID: 2bda410e-d641-4b51-98ac-98b7e1d729e6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=X/fdtbnM6VzLEnbiAcIZtVB4MyztRye1iMhB9bpmGjw=; b=KdFsWVDDKhCb1S0+D0SokRPV+y
	+J4M6xcEJslqQfLhh8x6+66DaWFnczf1aBndVMP3EIIA+96ALRDycvFUpIzf2js2xvrsL5iP6O1+A
	ZKLK8qzUck32sEA469YdsEhBj5KXKjBMfIrXwNzMerCpymh6yL7J+3OCKmpXeH15Tn7bxLuzBCRLz
	hl7TxbtqOEdufrClF29OKAO2fi/5ZmeFIHiUXsY/1CVk/DF7ZGpB+29s9GAZmn5HLwCP/ya6jSA1Q
	muKeYFGA3/E5oI4KqySpaJNW0DC9LvJqAjqN3XUqHjqzBVWPtrxSloZh0rikWjmPlEjklxeW4xqF+
	LXF3M7pQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 22/24] md: remove a spurious call to revalidate_disk_size in update_size
Date: Fri,  6 Nov 2020 20:03:34 +0100
Message-Id: <20201106190337.1973127-23-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

None of the ->resize methods updates the disk size, so calling
revalidate_disk_size here won't do anything.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/md-cluster.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 87442dc59f6ca3..35e2690c1803dd 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -1299,8 +1299,6 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 	} else {
 		/* revert to previous sectors */
 		ret = mddev->pers->resize(mddev, old_dev_sectors);
-		if (!ret)
-			revalidate_disk_size(mddev->gendisk, true);
 		ret = __sendmsg(cinfo, &cmsg);
 		if (ret)
 			pr_err("%s:%d: failed to send METADATA_UPDATED msg\n",
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21046.47260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Ar-0006yH-Rt; Fri, 06 Nov 2020 19:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21046.47260; Fri, 06 Nov 2020 19:13:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Ar-0006yA-OT; Fri, 06 Nov 2020 19:13:25 +0000
Received: by outflank-mailman (input) for mailman id 21046;
 Fri, 06 Nov 2020 19:13:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73P-0004zS-Tw
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:43 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ba95349-51b4-4797-960b-afe7b81c8d69;
 Fri, 06 Nov 2020 19:04:41 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb727-00012z-Nb; Fri, 06 Nov 2020 19:04:24 +0000
X-Inumbo-ID: 8ba95349-51b4-4797-960b-afe7b81c8d69
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=PHuhBTY1ouZHYZght48HNcRnOQ9ETUZCJ1FQTXf/EwQ=; b=SSYTvNNJNXE2eq2vgVj6+p7+Gh
	FYbkDj8JGnpdrmz5g60TezXk1nZ8IK6aSEC63V47GcMNDVRD2b07nZboOupLzX5aEBbXOf1qNk/q1
	85ByLUNa/iIkjS2bTnTwLGX4qg1UIhlVWKfoDdV2GRo/nM9hbu38kygTHxbko2gZVtT2bUrzYPu48
	BSOQzEY1WiSKBkKLQhDgeyv8+72t+hzcZogXCmIx1Wruxn/wT/qwIItixospXcqe+3ImMT6BtwjcV
	45CMr1jmOsAd8Z/iT1swy0pwYtf88g1VIvuMcKaCvasjmy8HQRqOo3zeFNgUjbePxjiAerJHobf4V
	AcdZg19g==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 18/24] rnbd: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:30 +0100
Message-Id: <20201106190337.1973127-19-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/rnbd/rnbd-clt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 8b2411ccbda97c..bb13d7dd195a08 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -100,8 +100,7 @@ static int rnbd_clt_change_capacity(struct rnbd_clt_dev *dev,
 	rnbd_clt_info(dev, "Device size changed from %zu to %zu sectors\n",
 		       dev->nsectors, new_nsectors);
 	dev->nsectors = new_nsectors;
-	set_capacity(dev->gd, dev->nsectors);
-	revalidate_disk_size(dev->gd, true);
+	set_capacity_and_notify(dev->gd, dev->nsectors);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21047.47272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Av-00072n-AY; Fri, 06 Nov 2020 19:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21047.47272; Fri, 06 Nov 2020 19:13:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7Av-00072X-6x; Fri, 06 Nov 2020 19:13:29 +0000
Received: by outflank-mailman (input) for mailman id 21047;
 Fri, 06 Nov 2020 19:13:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73j-0004zS-Uc
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:06:03 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 671ed6c6-ed7f-496a-86e9-f374c89d5dce;
 Fri, 06 Nov 2020 19:04:50 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb72H-00017b-Qy; Fri, 06 Nov 2020 19:04:34 +0000
X-Inumbo-ID: 671ed6c6-ed7f-496a-86e9-f374c89d5dce
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=MupYDI4l/YovZ7RWUiZbIkKTEoIVA+KjloyNnUiTimo=; b=iiGVfdCIPPX+IkfTbBzmxkNClz
	t982+XiX0rLzTMHY8HFo1Fp5Mtn7rK+diWNO+RHxxMdKcJUTti2/wcEDk1JuKfU/o4+K8dKleodTE
	HTiaGtNCpYoo9vb3GGX4aLbLQQjLGGtNWkpDZpYqEwfuTn9hbBczupYDxFhREBpBASbSx6M7Lze9e
	iKaTzCv7Ayt/qABbDvowSzznAcYB/VzFYgYnipTJO4suLkefkpjbVL2Cs7whyJwZIOchXyT0sQa+l
	gQuWBa19/ZwoPZGFHeWQytUlRCgvx6EJq35hta2jU2txOK4i/ZI7bcgxRU7PROSuEa2OkMTwR08Xi
	9io2po4w==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 23/24] virtio-blk: remove a spurious call to revalidate_disk_size
Date: Fri,  6 Nov 2020 20:03:35 +0100
Message-Id: <20201106190337.1973127-24-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

revalidate_disk_size just updates the block device size from the disk
size.  Thus calling it from virtblk_update_cache_mode, which never
changes the disk size, doesn't actually do anything.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/virtio_blk.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 3e812b4c32e669..145606dc52db1e 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -598,7 +598,6 @@ static void virtblk_update_cache_mode(struct virtio_device *vdev)
 	struct virtio_blk *vblk = vdev->priv;
 
 	blk_queue_write_cache(vblk->disk->queue, writeback, false);
-	revalidate_disk_size(vblk->disk, true);
 }
 
 static const char *const virtblk_cache_types[] = {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21049.47284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B0-000795-Lv; Fri, 06 Nov 2020 19:13:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21049.47284; Fri, 06 Nov 2020 19:13:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B0-00078v-Ii; Fri, 06 Nov 2020 19:13:34 +0000
Received: by outflank-mailman (input) for mailman id 21049;
 Fri, 06 Nov 2020 19:13:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73U-0004zS-U7
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:48 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee1a3226-5aa0-467c-9c10-0510578b1254;
 Fri, 06 Nov 2020 19:04:43 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb72A-00013z-Vd; Fri, 06 Nov 2020 19:04:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73U-0004zS-U7
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:48 +0000
X-Inumbo-ID: ee1a3226-5aa0-467c-9c10-0510578b1254
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ee1a3226-5aa0-467c-9c10-0510578b1254;
	Fri, 06 Nov 2020 19:04:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=WS/5HaJpXF8+Aj3VZ2GQ3e6HnWhXjc79y2mDNxe6L1Y=; b=d0t0faqKeh0diZy5D+3oFd7PUF
	TCyCK8sa/DuNKYU0jeT8wTyRvaiEBKDnaS6SEkdUTuybDBsIgoIeRPsAlzl5bKXOe6SmughSQbIEP
	KkykvrhSx4ccDUXIrBKyrD0p4JiXqCPHgPeWfj3mWbknoX/XAnoyioec4TemaKCqv4ah93+QLH5cg
	9/RsZdWQzxsF1SLgUEW0HOn02HqxzKnJlnn0keo28vvDdvLYdDXii4E52+MkqA7lju2awVWXrKJL6
	ATVrFBAnHLMBbHBOSnGGHyMGWAswR83/TC1I7JT5bKbjrfZ4wEjHnEBd8a7RQ1Qnb4mvA0vQ78WTS
	P88M1meg==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb72A-00013z-Vd; Fri, 06 Nov 2020 19:04:27 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 20/24] dm-raid: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:32 +0100
Message-Id: <20201106190337.1973127-21-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and the
block device.  This also gets us the resize uevent notifications for free.
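For readers unfamiliar with the helper, its effect can be sketched in plain userspace C.  All types and names below are simplified mock stand-ins, not the real kernel definitions; the point is only the semantics: one call updates both sizes and reports whether a notification was sent.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Userspace mock of the set_capacity_and_notify pattern: update the
 * gendisk capacity AND the block-device size in one step, and report
 * whether the size actually changed (the kernel helper emits a RESIZE
 * uevent in that case).  Mock types, not kernel definitions. */
struct mock_disk {
	uint64_t capacity;	/* gendisk size, in 512-byte sectors */
	uint64_t bdev_size;	/* block device size, in sectors */
	int uevents_sent;	/* counts simulated resize uevents */
};

static bool mock_set_capacity_and_notify(struct mock_disk *disk,
					 uint64_t size)
{
	bool changed = disk->capacity != size;

	disk->capacity = size;
	disk->bdev_size = size;	/* kept in sync, unlike bare set_capacity */
	if (changed)
		disk->uevents_sent++;	/* kernel: KOBJ_CHANGE uevent */
	return changed;
}
```

Callers like rs_set_capacity above thus drop the separate revalidate step: one call keeps both sizes consistent and notifies userspace only when something changed.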

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm-raid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 9c1f7c4de65b35..294f34d2d61bae 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -700,8 +700,7 @@ static void rs_set_capacity(struct raid_set *rs)
 {
 	struct gendisk *gendisk = dm_disk(dm_table_get_md(rs->ti->table));
 
-	set_capacity(gendisk, rs->md.array_sectors);
-	revalidate_disk_size(gendisk, true);
+	set_capacity_and_notify(gendisk, rs->md.array_sectors);
 }
 
 /*
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21050.47296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B2-0007Bo-1Y; Fri, 06 Nov 2020 19:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21050.47296; Fri, 06 Nov 2020 19:13:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B1-0007Bb-Tb; Fri, 06 Nov 2020 19:13:35 +0000
Received: by outflank-mailman (input) for mailman id 21050;
 Fri, 06 Nov 2020 19:13:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73A-0004zS-TU
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:28 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11ce5153-c32d-4c37-88c2-d52cee2f7f95;
 Fri, 06 Nov 2020 19:04:35 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71t-0000x7-8S; Fri, 06 Nov 2020 19:04:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73A-0004zS-TU
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:28 +0000
X-Inumbo-ID: 11ce5153-c32d-4c37-88c2-d52cee2f7f95
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 11ce5153-c32d-4c37-88c2-d52cee2f7f95;
	Fri, 06 Nov 2020 19:04:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9taUiqSTACcH2Z0NVyYeu1wIyxjTgsg3N9q7Quy2bf8=; b=oAMtLVxzr+OO0PcilibN4J+7cY
	A6aOsonM11li6ilkHUPp8kBC6QK8Ku1WKyvUPQIIKS3Yr6ShhFM9rGgNGOqajADhudPZ+3vFzZY6G
	+awcIimxh/Z01UGkGMBjmo+ldoYOMeGf99KhzTGQ+98goXtxgdupqyVgT6d6XIW8QFrnGYE8B3eI/
	NdxJOwyx/XyOCFGOvQCYVgahj/lrrL/KLrr4bytAMgZd96iY0MMc/Ta+sFWjeoqP7n+5l0TOF6jMP
	v4PEZnAOD042+hSq/ZESdR+3iJ6Oge2OfUHEz6Jei3G3ygfq/wegQ3JrDRIV0DxROiAOfQJF8vFZL
	LUGQzlHA==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71t-0000x7-8S; Fri, 06 Nov 2020 19:04:10 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 11/24] nbd: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:23 +0100
Message-Id: <20201106190337.1973127-12-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to update the disk and block device sizes and
send a RESIZE uevent to userspace.  Note that blktests relies on uevents
being sent even for updates that do not change the device size, so the
explicit kobject_uevent call remains for that case.
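The return-value convention the patch relies on can be illustrated with a small userspace mock (all names here are simplified, hypothetical stand-ins for the kernel helpers, not the real API): notify explicitly only when the combined helper did not already do so.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Userspace sketch of the nbd pattern above: the mock helper returns
 * true only when the capacity changed (and a uevent was already sent);
 * the caller emits an explicit uevent otherwise, so userspace always
 * sees a notification.  Mock types, not kernel definitions. */
struct mock_nbd {
	uint64_t capacity;	/* in 512-byte sectors */
	int uevents;		/* counts simulated uevents */
};

static bool mock_set_capacity_and_notify(struct mock_nbd *nbd,
					 uint64_t size)
{
	bool changed = nbd->capacity != size;

	nbd->capacity = size;
	if (changed)
		nbd->uevents++;		/* resize uevent */
	return changed;
}

static void mock_nbd_set_size(struct mock_nbd *nbd, uint64_t bytesize)
{
	/* Mirror the patched code: notify even when the size did not
	 * change, because blktests expects a uevent in that case too. */
	if (!mock_set_capacity_and_notify(nbd, bytesize >> 9))
		nbd->uevents++;		/* explicit KOBJ_CHANGE */
}
```

Either path produces exactly one notification per nbd_set_size call, matching what blktests observes.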

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 327060e01ad58e..a6f51934391edb 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -299,8 +299,6 @@ static void nbd_size_clear(struct nbd_device *nbd)
 static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
-	struct block_device *bdev;
-
 	if (!blksize)
 		blksize = NBD_DEF_BLKSIZE;
 	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
@@ -320,16 +318,9 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 	blk_queue_logical_block_size(nbd->disk->queue, blksize);
 	blk_queue_physical_block_size(nbd->disk->queue, blksize);
 
-	set_capacity(nbd->disk, bytesize >> 9);
-	bdev = bdget_disk(nbd->disk, 0);
-	if (bdev) {
-		if (bdev->bd_disk)
-			bd_set_nr_sectors(bdev, bytesize >> 9);
-		else
-			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
-		bdput(bdev);
-	}
-	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+	set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
+	if (!set_capacity_and_notify(nbd->disk, bytesize >> 9))
+		kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21051.47302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B2-0007Ci-GJ; Fri, 06 Nov 2020 19:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21051.47302; Fri, 06 Nov 2020 19:13:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B2-0007CU-8h; Fri, 06 Nov 2020 19:13:36 +0000
Received: by outflank-mailman (input) for mailman id 21051;
 Fri, 06 Nov 2020 19:13:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73Z-0004zS-U8
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:53 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01e1f1af-4970-4177-b232-31b43ea69eec;
 Fri, 06 Nov 2020 19:04:43 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb729-00013h-89; Fri, 06 Nov 2020 19:04:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73Z-0004zS-U8
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:53 +0000
X-Inumbo-ID: 01e1f1af-4970-4177-b232-31b43ea69eec
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 01e1f1af-4970-4177-b232-31b43ea69eec;
	Fri, 06 Nov 2020 19:04:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=4bS/eZqoJAQR/DSs9IGqi2g9LCY9L0WWDnWefDh2k5M=; b=tVd8geFSv+/a4ICZqJ4miUgAZr
	bSnQVkuUj9Q1sOtmlqt7w1/fnbDPR0IyuD7NylvEPospNvxVXib/Q1nUDuQNSaWDRFbsRXA9dbw6O
	R/GqSzwPpwI2zGGmixd50oCio/LgvmNAgH3JNK5kktQ8eJb8dFVZtMV8LOVf1+XaUk33UAT4qwsbd
	gO37JazEvqPXopiAzJkFz2g/BDk5ZGRZJMue7aCghVAZN1rAh3Kp+t7apQN+sU6hGH5YlyzTNFcs+
	V0tQaIvP9jr1OAo1S5SMYXLTVLk25tGMl0HLivTuYcN5VAr0bXyzDQ4D/TX6qd2S5zUuUJoLnnNDa
	VVQ+Kqmw==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb729-00013h-89; Fri, 06 Nov 2020 19:04:25 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 19/24] zram: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:31 +0100
Message-Id: <20201106190337.1973127-20-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and the
block device.  This also gets us the resize uevent notifications for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 1b697208d66157..6d15d51cee2b7e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1695,7 +1695,7 @@ static void zram_reset_device(struct zram *zram)
 	disksize = zram->disksize;
 	zram->disksize = 0;
 
-	set_capacity(zram->disk, 0);
+	set_capacity_and_notify(zram->disk, 0);
 	part_stat_set_all(&zram->disk->part0, 0);
 
 	up_write(&zram->init_lock);
@@ -1741,9 +1741,7 @@ static ssize_t disksize_store(struct device *dev,
 
 	zram->comp = comp;
 	zram->disksize = disksize;
-	set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
-
-	revalidate_disk_size(zram->disk, true);
+	set_capacity_and_notify(zram->disk, zram->disksize >> SECTOR_SHIFT);
 	up_write(&zram->init_lock);
 
 	return len;
@@ -1790,7 +1788,6 @@ static ssize_t reset_store(struct device *dev,
 	/* Make sure all the pending I/O are finished */
 	fsync_bdev(bdev);
 	zram_reset_device(zram);
-	revalidate_disk_size(zram->disk, true);
 	bdput(bdev);
 
 	mutex_lock(&bdev->bd_mutex);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21054.47320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B5-0007Jj-Qf; Fri, 06 Nov 2020 19:13:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21054.47320; Fri, 06 Nov 2020 19:13:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B5-0007JV-LP; Fri, 06 Nov 2020 19:13:39 +0000
Received: by outflank-mailman (input) for mailman id 21054;
 Fri, 06 Nov 2020 19:13:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73e-0004zS-UN
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:58 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2d2a0a4-76e6-42f3-b969-4861d3bd38cf;
 Fri, 06 Nov 2020 19:04:44 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb725-00012H-Kj; Fri, 06 Nov 2020 19:04:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73e-0004zS-UN
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:58 +0000
X-Inumbo-ID: e2d2a0a4-76e6-42f3-b969-4861d3bd38cf
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e2d2a0a4-76e6-42f3-b969-4861d3bd38cf;
	Fri, 06 Nov 2020 19:04:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=N8UuGGw9lDirN906bdHNl2lOQB2uewg0ZSFqxQb7Dc4=; b=EbShoq0e9ZdxuHzJ//Stacuw58
	fWC0EbT3bDkc4a76CHYJxCWttwRI59GvoSaEU0SW0M22mNQkH3EhRrAKd+QvVHizyPlJeJ0kMVJ4i
	zHx7ItnO6W/fLLrYnw+S/gSCKqgdFCCws7Nqwvf507VfDEMI+rvO4uJnsRlROtxVrdeIy2E7YL3TL
	S7BmHPSe96rpbKQWzhRBiwcBzWjvB4sisrJVYeefsUF98+rDlF7LunTgLETgKBtlqQ1KRVmOxV937
	0aBRSGfCuluO2qQ6XYWclmFOBtO86wc6DyjOSAZKELdbbXCgn+Cmrf4JQhJv1RVmx1A2l424dcPHJ
	70sIudTA==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb725-00012H-Kj; Fri, 06 Nov 2020 19:04:22 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 17/24] rbd: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:29 +0100
Message-Id: <20201106190337.1973127-18-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and the
block device.  This also gets us the resize uevent notifications for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/rbd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index f84128abade319..b7a194ffda55b4 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4920,8 +4920,7 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
 	    !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
 		size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
 		dout("setting size to %llu sectors", (unsigned long long)size);
-		set_capacity(rbd_dev->disk, size);
-		revalidate_disk_size(rbd_dev->disk, true);
+		set_capacity_and_notify(rbd_dev->disk, size);
 	}
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21055.47327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B6-0007Ln-Ln; Fri, 06 Nov 2020 19:13:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21055.47327; Fri, 06 Nov 2020 19:13:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B6-0007L3-8z; Fri, 06 Nov 2020 19:13:40 +0000
Received: by outflank-mailman (input) for mailman id 21055;
 Fri, 06 Nov 2020 19:13:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73K-0004zS-Tc
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:38 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 779618f2-f1ed-44be-ae80-e963dd96cb3f;
 Fri, 06 Nov 2020 19:04:37 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71v-0000xc-3a; Fri, 06 Nov 2020 19:04:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73K-0004zS-Tc
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:38 +0000
X-Inumbo-ID: 779618f2-f1ed-44be-ae80-e963dd96cb3f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 779618f2-f1ed-44be-ae80-e963dd96cb3f;
	Fri, 06 Nov 2020 19:04:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=bgbs5emPj5p7gTWcmnEuks79StD6TA5roIn47mScHH4=; b=sLqpFn8MNMp2YVoGVzCC1sBIXZ
	scwxQBBRwMeOtLFZWVrnLhmqAhpHB975gQ6WhMbBX7APBR0vehT4+yU9XVRSGJ7txwf7VaLg9o6JD
	n4qb1rjrSe93F3xdo4HwsLfWw1Ux1ZKT9gBfGV4g0JQkTvIgByB0fJieiXaBI+eAOsaBPVbYBZDPK
	moYTJgW59lvKQuEcYEegtN9nvNnkj3NxbJNdENCRUI7j7M4HEcbxon2U3BQHJDMmukuCyiMp5ARdR
	N1W4pYYyu024rCnDN5Y4HDqX8OoRWdoKkyf91EigCA3QKgSODr8ohYGjl37kclee4PDOrPelHnjnO
	+OGLxhGg==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71v-0000xc-3a; Fri, 06 Nov 2020 19:04:11 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 12/24] aoe: don't call set_capacity from irq context
Date: Fri,  6 Nov 2020 20:03:24 +0100
Message-Id: <20201106190337.1973127-13-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Updating the block device size from irq context can lead to torn
writes of the 64-bit value, and prevents us from using normal
process context locking primitives to serialize access to the 64-bit
nr_sectors value.  Defer the set_capacity to the already existing
workqueue handler, where it can be merged with the update of the
block device size by using set_capacity_and_notify.  As an extra
bonus this also adds proper uevent notifications for the resize.
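The irq-to-workqueue handoff described above can be sketched in userspace C.  Everything here is a simplified mock (the struct, the flag value, and both function names are hypothetical stand-ins, not the real aoe code): the interrupt path only records the pending size and sets a flag; the process-context worker applies it later in one combined update.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the deferral pattern in this patch.  Mock
 * names and types, not the real aoe structures. */
#define MOCK_DEVFL_NEWSIZE	0x1

struct mock_aoedev {
	unsigned int flags;
	uint64_t ssize;		/* pending size, in sectors */
	uint64_t capacity;	/* applied size */
	int uevents;		/* counts simulated resize uevents */
};

/* "irq context": no capacity update here, just record and flag; the
 * real driver then calls schedule_work() to run the handler below. */
static void mock_ataid_complete(struct mock_aoedev *d, uint64_t ssize)
{
	d->ssize = ssize;
	d->flags |= MOCK_DEVFL_NEWSIZE;
}

/* "workqueue handler" (process context): safe place for the full
 * 64-bit size update and the uevent notification. */
static void mock_sleepwork(struct mock_aoedev *d)
{
	if (d->flags & MOCK_DEVFL_NEWSIZE) {
		if (d->capacity != d->ssize)
			d->uevents++;	/* resize uevent */
		d->capacity = d->ssize;
		d->flags &= ~MOCK_DEVFL_NEWSIZE;
	}
}
```

The size becomes visible only after the worker runs, which is the point of the patch: the 64-bit store happens in process context where normal locking can serialize it.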

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/aoe/aoecmd.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index 313f0b946fe2b3..ac720bdcd983e7 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -890,19 +890,13 @@ void
 aoecmd_sleepwork(struct work_struct *work)
 {
 	struct aoedev *d = container_of(work, struct aoedev, work);
-	struct block_device *bd;
-	u64 ssize;
 
 	if (d->flags & DEVFL_GDALLOC)
 		aoeblk_gdalloc(d);
 
 	if (d->flags & DEVFL_NEWSIZE) {
-		ssize = get_capacity(d->gd);
-		bd = bdget_disk(d->gd, 0);
-		if (bd) {
-			bd_set_nr_sectors(bd, ssize);
-			bdput(bd);
-		}
+		set_capacity_and_notify(d->gd, d->ssize);
+
 		spin_lock_irq(&d->lock);
 		d->flags |= DEVFL_UP;
 		d->flags &= ~DEVFL_NEWSIZE;
@@ -971,10 +965,9 @@ ataid_complete(struct aoedev *d, struct aoetgt *t, unsigned char *id)
 	d->geo.start = 0;
 	if (d->flags & (DEVFL_GDALLOC|DEVFL_NEWSIZE))
 		return;
-	if (d->gd != NULL) {
-		set_capacity(d->gd, ssize);
+	if (d->gd != NULL)
 		d->flags |= DEVFL_NEWSIZE;
-	} else
+	else
 		d->flags |= DEVFL_GDALLOC;
 	schedule_work(&d->work);
 }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21056.47337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B7-0007Nx-Nu; Fri, 06 Nov 2020 19:13:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21056.47337; Fri, 06 Nov 2020 19:13:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B6-0007N8-TX; Fri, 06 Nov 2020 19:13:40 +0000
Received: by outflank-mailman (input) for mailman id 21056;
 Fri, 06 Nov 2020 19:13:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73t-0004zS-Uk
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:06:13 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bad5270d-f1aa-41c8-8864-9e7ea54c9faa;
 Fri, 06 Nov 2020 19:04:47 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb72C-00014f-Hl; Fri, 06 Nov 2020 19:04:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73t-0004zS-Uk
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:06:13 +0000
X-Inumbo-ID: bad5270d-f1aa-41c8-8864-9e7ea54c9faa
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bad5270d-f1aa-41c8-8864-9e7ea54c9faa;
	Fri, 06 Nov 2020 19:04:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=UWKeto29zpFHQydywgL5QzixpOMgSmKobiU0CaruNos=; b=iOiho4s6xo37hPhvW6hYcv9bIK
	zWiLiVhIrhr5F4NsJ8c3KXd9B+zt1ASrPOOBsYKcc1VGbnC0UET/ozOEP8P3T+bdu3pGccikDcpm6
	PXmfAlB/UGD7xJEYd2rF6RDe4K2VdODkCOrAd4b8zn+p+bFgJ6Xi2JrC9ISbqRQPpWCoapvX4lKUr
	++nP7DGYDo6e5JXDZnda/83Npn93QH0BtjlvHWfHDpqBEHRv/KFW3BB71kHN3P7f2+QAPwmTBI2Tr
	MfallOeFk9txQiXvR1QXetx/1fCHpGSW8gbAWiKHa9A0xi/8YjajRI0m8W2zxnlNA5R2ojgO0PtDX
	mBGU9Gxg==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb72C-00014f-Hl; Fri, 06 Nov 2020 19:04:30 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 21/24] md: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:33 +0100
Message-Id: <20201106190337.1973127-22-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.
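The combined helper's behavior can be modeled in userspace. This is a hypothetical sketch, not the kernel implementation: `struct fake_disk` and the `uevents` counter stand in for `gendisk` and `kobject_uevent(KOBJ_CHANGE)`, showing how the size update and the resize notification become one call:

```c
#include <stdint.h>

/* Userspace model of set_capacity_and_notify(): update the disk size
 * and emit a resize uevent only when the size actually changed. */
struct fake_disk {
	uint64_t nr_sectors;
	int      uevents;    /* counts KOBJ_CHANGE notifications */
};

uint64_t set_capacity_and_notify_sketch(struct fake_disk *d, uint64_t size)
{
	uint64_t old = d->nr_sectors;

	d->nr_sectors = size;
	if (old != size)
		d->uevents++;    /* the "notify" half, free for callers */
	return old;
}
```

Callers that previously paired set_capacity with revalidate_disk_size no longer need the second call, and a no-op resize generates no spurious event.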

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/md-cluster.c |  6 ++----
 drivers/md/md-linear.c  |  3 +--
 drivers/md/md.c         | 24 ++++++++++--------------
 3 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 4aaf4820b6f625..87442dc59f6ca3 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -581,8 +581,7 @@ static int process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
 		process_metadata_update(mddev, msg);
 		break;
 	case CHANGE_CAPACITY:
-		set_capacity(mddev->gendisk, mddev->array_sectors);
-		revalidate_disk_size(mddev->gendisk, true);
+		set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 		break;
 	case RESYNCING:
 		set_bit(MD_RESYNCING_REMOTE, &mddev->recovery);
@@ -1296,8 +1295,7 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 		if (ret)
 			pr_err("%s:%d: failed to send CHANGE_CAPACITY msg\n",
 			       __func__, __LINE__);
-		set_capacity(mddev->gendisk, mddev->array_sectors);
-		revalidate_disk_size(mddev->gendisk, true);
+		set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	} else {
 		/* revert to previous sectors */
 		ret = mddev->pers->resize(mddev, old_dev_sectors);
diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
index 5ab22069b5be9c..98f1b4b2bdcef8 100644
--- a/drivers/md/md-linear.c
+++ b/drivers/md/md-linear.c
@@ -200,9 +200,8 @@ static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
 		"copied raid_disks doesn't match mddev->raid_disks");
 	rcu_assign_pointer(mddev->private, newconf);
 	md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
-	set_capacity(mddev->gendisk, mddev->array_sectors);
+	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	mddev_resume(mddev);
-	revalidate_disk_size(mddev->gendisk, true);
 	kfree_rcu(oldconf, rcu);
 	return 0;
 }
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 98bac4f304ae26..32e375d50fee17 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5355,10 +5355,9 @@ array_size_store(struct mddev *mddev, const char *buf, size_t len)
 
 	if (!err) {
 		mddev->array_sectors = sectors;
-		if (mddev->pers) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
-		}
+		if (mddev->pers)
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 	}
 	mddev_unlock(mddev);
 	return err ?: len;
@@ -6107,8 +6106,7 @@ int do_md_run(struct mddev *mddev)
 	md_wakeup_thread(mddev->thread);
 	md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
 
-	set_capacity(mddev->gendisk, mddev->array_sectors);
-	revalidate_disk_size(mddev->gendisk, true);
+	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	clear_bit(MD_NOT_READY, &mddev->flags);
 	mddev->changed = 1;
 	kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
@@ -6423,10 +6421,9 @@ static int do_md_stop(struct mddev *mddev, int mode,
 			if (rdev->raid_disk >= 0)
 				sysfs_unlink_rdev(mddev, rdev);
 
-		set_capacity(disk, 0);
+		set_capacity_and_notify(disk, 0);
 		mutex_unlock(&mddev->open_mutex);
 		mddev->changed = 1;
-		revalidate_disk_size(disk, true);
 
 		if (mddev->ro)
 			mddev->ro = 0;
@@ -7257,8 +7254,8 @@ static int update_size(struct mddev *mddev, sector_t num_sectors)
 		if (mddev_is_clustered(mddev))
 			md_cluster_ops->update_size(mddev, old_dev_sectors);
 		else if (mddev->queue) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 		}
 	}
 	return rv;
@@ -9035,10 +9032,9 @@ void md_do_sync(struct md_thread *thread)
 		mddev_lock_nointr(mddev);
 		md_set_array_sectors(mddev, mddev->pers->size(mddev, 0, 0));
 		mddev_unlock(mddev);
-		if (!mddev_is_clustered(mddev)) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
-		}
+		if (!mddev_is_clustered(mddev))
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 	}
 
 	spin_lock(&mddev->lock);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21057.47355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B9-0007TV-II; Fri, 06 Nov 2020 19:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21057.47355; Fri, 06 Nov 2020 19:13:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7B9-0007Sw-3R; Fri, 06 Nov 2020 19:13:43 +0000
Received: by outflank-mailman (input) for mailman id 21057;
 Fri, 06 Nov 2020 19:13:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72v-0004zS-Sm
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:13 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14588f48-cf1a-4061-8386-a5acd3f7e2f8;
 Fri, 06 Nov 2020 19:04:32 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71x-0000yK-56; Fri, 06 Nov 2020 19:04:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72v-0004zS-Sm
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:13 +0000
X-Inumbo-ID: 14588f48-cf1a-4061-8386-a5acd3f7e2f8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 14588f48-cf1a-4061-8386-a5acd3f7e2f8;
	Fri, 06 Nov 2020 19:04:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=aTYccYixW8gbHYOBakp7ukQbj8m53SZz7oZSluFWVGc=; b=LnYE9NNrzBcSdLeVcZBzoAaV9A
	uBqspcv2x3EvPgOaxWI5LYyJW/YTXavMBXM3sUYHJRIqR/GBbUv2TMuEif5qVKCdbNs5boXd/tlu1
	i9ZpkVaVjZ95cFBJaS5LHF2B5Qw8ZTXlY24kp5olWZsCnxbatdNJzNq9Z/G1Bfi6VVVBZh8uPvXRL
	QHLDSm+Ihgns2dqqVayBftULu0+xgED+EQMMzCK1KKvCC/AsUyHDQT50m0VDiBnghRP+3/XgY6N/U
	mN5su9mIlSc1rb8vVG2g+AL/GZzogA+3RlBH7EfGt4UNPK9HYdGZ84tPQiw8sFHXONEErgYmD4BQ2
	654UeKyQ==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71x-0000yK-56; Fri, 06 Nov 2020 19:04:13 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 13/24] dm: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:25 +0100
Message-Id: <20201106190337.1973127-14-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c18fc25485186d..62ad44925e73ec 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1971,8 +1971,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
 	if (size != dm_get_size(md))
 		memset(&md->geometry, 0, sizeof(md->geometry));
 
-	set_capacity(md->disk, size);
-	bd_set_nr_sectors(md->bdev, size);
+	set_capacity_and_notify(md->disk, size);
 
 	dm_table_event_callback(t, event_callback, md);
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21066.47368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BG-0007k5-Tb; Fri, 06 Nov 2020 19:13:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21066.47368; Fri, 06 Nov 2020 19:13:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BG-0007ju-NE; Fri, 06 Nov 2020 19:13:50 +0000
Received: by outflank-mailman (input) for mailman id 21066;
 Fri, 06 Nov 2020 19:13:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72l-0004zS-Se
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:03 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09192d21-a0cf-4830-a4c1-27c31cc9f3e3;
 Fri, 06 Nov 2020 19:04:25 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71p-0000wK-4f; Fri, 06 Nov 2020 19:04:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72l-0004zS-Se
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:03 +0000
X-Inumbo-ID: 09192d21-a0cf-4830-a4c1-27c31cc9f3e3
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 09192d21-a0cf-4830-a4c1-27c31cc9f3e3;
	Fri, 06 Nov 2020 19:04:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=WafBEWtlgaGQA6gJMzw9vDCsOJAcghOtJS1v2YawlXs=; b=flOhs9A/3seUHab/L3fPIkhaNZ
	9UswHJ8U+Kj8ED+yShSZu6GM+08fa76LVduUB9dedyXSWnyRdFXAyFvywj+/KCSXIwAgSJSmRp82S
	oR68sdZML/aTnhokGAlqIPYaJ/BHEowfwu0siFs8kVUoyIZ8SrRGZTQ6gzKa8+ZQa1extJCjIvL1i
	g8i5qyKq2lJyPvkyGL2NnsSGowpS7lKDXuUb2uBm6SUkKi22pRV6trECh4UkMxiq1pDU2hAwMhUTo
	b4W8rz5ng0alQbIdzSFfASLY9LdhAIlkIhcvN4JB03FqWcb0jzUIyy8YalqzMOORvloskZfg0OmOB
	NMBFMKAg==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71p-0000wK-4f; Fri, 06 Nov 2020 19:04:06 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 09/24] nbd: refactor size updates
Date: Fri,  6 Nov 2020 20:03:21 +0100
Message-Id: <20201106190337.1973127-10-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Merge nbd_size_set and nbd_size_update into a single function that also
updates the nbd_config fields.  This new function takes the device size
in bytes as the first argument, and the blocksize as the second argument,
simplifying the calculations required in most callers.
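The caller-side arithmetic after the refactor can be sketched as follows. This is an illustrative userspace model (the `_sketch` names are not kernel identifiers): callers hand nbd_set_size() a size in bytes plus the block size, so e.g. the NBD_SET_SIZE_BLOCKS ioctl multiplies instead of the old interface dividing:

```c
#include <stdint.h>

typedef long long loff_t_sketch;   /* stand-in for the kernel's loff_t */

/* NBD_SET_SIZE_BLOCKS passes a block count; convert it to bytes
 * before calling nbd_set_size(nbd, bytes, blksize). */
loff_t_sketch bytes_for_set_size_blocks(unsigned long arg,
					loff_t_sketch blksize)
{
	return (loff_t_sketch)arg * blksize;
}

/* The capacity handed to set_capacity() is in 512-byte sectors. */
loff_t_sketch sectors_from_bytes(loff_t_sketch bytesize)
{
	return bytesize >> 9;
}
```

With the byte count stored directly, the div_s64/div64_u64 calls in the old callers disappear.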

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 44 ++++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 58b7090dcbd832..eb8a5da48ad75a 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,28 +296,30 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+		loff_t blksize)
 {
-	struct nbd_config *config = nbd->config;
-	sector_t nr_sectors = config->bytesize >> 9;
 	struct block_device *bdev;
 
+	nbd->config->bytesize = bytesize;
+	nbd->config->blksize = blksize;
+
 	if (!nbd->task_recv)
 		return;
 
-	if (config->flags & NBD_FLAG_SEND_TRIM) {
-		nbd->disk->queue->limits.discard_granularity = config->blksize;
-		nbd->disk->queue->limits.discard_alignment = config->blksize;
+	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
+		nbd->disk->queue->limits.discard_granularity = blksize;
+		nbd->disk->queue->limits.discard_alignment = blksize;
 		blk_queue_max_discard_sectors(nbd->disk->queue, UINT_MAX);
 	}
-	blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
-	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+	blk_queue_logical_block_size(nbd->disk->queue, blksize);
+	blk_queue_physical_block_size(nbd->disk->queue, blksize);
 
-	set_capacity(nbd->disk, nr_sectors);
+	set_capacity(nbd->disk, bytesize >> 9);
 	bdev = bdget_disk(nbd->disk, 0);
 	if (bdev) {
 		if (bdev->bd_disk)
-			bd_set_nr_sectors(bdev, nr_sectors);
+			bd_set_nr_sectors(bdev, bytesize >> 9);
 		else
 			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
 		bdput(bdev);
@@ -325,15 +327,6 @@ static void nbd_size_update(struct nbd_device *nbd)
 	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
 }
 
-static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
-			 loff_t nr_blocks)
-{
-	struct nbd_config *config = nbd->config;
-	config->blksize = blocksize;
-	config->bytesize = blocksize * nr_blocks;
-	nbd_size_update(nbd);
-}
-
 static void nbd_complete_rq(struct request *req)
 {
 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
@@ -1311,7 +1304,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd);
+	nbd_set_size(nbd, config->bytesize, config->blksize);
 	return error;
 }
 
@@ -1390,15 +1383,14 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 			arg = NBD_DEF_BLKSIZE;
 		if (!nbd_is_valid_blksize(arg))
 			return -EINVAL;
-		nbd_size_set(nbd, arg,
-			     div_s64(config->bytesize, arg));
+		nbd_set_size(nbd, config->bytesize, arg);
 		return 0;
 	case NBD_SET_SIZE:
-		nbd_size_set(nbd, config->blksize,
-			     div_s64(arg, config->blksize));
+		nbd_set_size(nbd, arg, config->blksize);
 		return 0;
 	case NBD_SET_SIZE_BLOCKS:
-		nbd_size_set(nbd, config->blksize, arg);
+		nbd_set_size(nbd, arg * config->blksize,
+			     config->blksize);
 		return 0;
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
@@ -1827,7 +1819,7 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	}
 
 	if (bytes != config->bytesize || bsize != config->blksize)
-		nbd_size_set(nbd, bsize, div64_u64(bytes, bsize));
+		nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21070.47380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BJ-0007pp-Dx; Fri, 06 Nov 2020 19:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21070.47380; Fri, 06 Nov 2020 19:13:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BJ-0007pW-45; Fri, 06 Nov 2020 19:13:53 +0000
Received: by outflank-mailman (input) for mailman id 21070;
 Fri, 06 Nov 2020 19:13:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb72q-0004zS-Se
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:08 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8813d7a3-c631-4e87-bae5-b44cca265c68;
 Fri, 06 Nov 2020 19:04:25 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71r-0000wc-G0; Fri, 06 Nov 2020 19:04:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb72q-0004zS-Se
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:08 +0000
X-Inumbo-ID: 8813d7a3-c631-4e87-bae5-b44cca265c68
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8813d7a3-c631-4e87-bae5-b44cca265c68;
	Fri, 06 Nov 2020 19:04:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=bvDrEMxNDsV0qnNGdlqXQ69LsBhINsh0TucESS2zUH4=; b=n8I2bAP/s2+x6HOHdjRdHDWR3/
	GA/Cs3M+JrFbYS8rdgv0qo6a4A0jrAZBnmY+K1D74/+PV24lfi4IfolPr81Es5J+oT3bNkKw+zeyt
	gzGbvCJ/WLzovUznZrsUBw/c3J9CCUvz+ACD1ZDZABtLuHpP1MIu1EsTtA6gHC13E5VQHVC3M2IPO
	euQ15DdO/iHu2S2Q/EN08UcT6UkTgp+yjc+bc84KvHOaWCsbvd4kqKO3zm38ITtQtkMvvsIlZcQLN
	c065U6oC9P/8yGDRb0naXdAcrUAL3dBvZQFdpezRIgymyXONVwpK8wkcN1XKXY+qgz/GD1N42jXxh
	uh/ciLlg==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71r-0000wc-G0; Fri, 06 Nov 2020 19:04:08 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 10/24] nbd: validate the block size in nbd_set_size
Date: Fri,  6 Nov 2020 20:03:22 +0100
Message-Id: <20201106190337.1973127-11-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the validation of the block size from the callers into nbd_set_size.
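The centralized check can be modeled like this. This is a userspace sketch under stated assumptions: `DEF_BLKSIZE_SKETCH` and a 4096-byte `PAGE_SIZE_SKETCH` are assumed values standing in for NBD_DEF_BLKSIZE and PAGE_SIZE; a zero block size falls back to the default, and anything that is not a power of two in [512, PAGE_SIZE] is rejected:

```c
/* Sketch of the validation now done inside nbd_set_size(). */
#define DEF_BLKSIZE_SKETCH 1024   /* assumed default block size */
#define PAGE_SIZE_SKETCH   4096   /* assumed page size */

long validate_blksize_sketch(long blksize)
{
	if (!blksize)
		blksize = DEF_BLKSIZE_SKETCH;
	if (blksize < 512 || blksize > PAGE_SIZE_SKETCH ||
	    (blksize & (blksize - 1)))        /* not a power of two */
		return -22;                   /* -EINVAL */
	return blksize;                       /* accepted value */
}
```

Because every size-setting path funnels through one function, the ioctl and netlink handlers no longer need their own copies of this check.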

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 47 +++++++++++++++------------------------------
 1 file changed, 15 insertions(+), 32 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index eb8a5da48ad75a..327060e01ad58e 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,16 +296,21 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
 	struct block_device *bdev;
 
+	if (!blksize)
+		blksize = NBD_DEF_BLKSIZE;
+	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
+		return -EINVAL;
+
 	nbd->config->bytesize = bytesize;
 	nbd->config->blksize = blksize;
 
 	if (!nbd->task_recv)
-		return;
+		return 0;
 
 	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
 		nbd->disk->queue->limits.discard_granularity = blksize;
@@ -325,6 +330,7 @@ static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		bdput(bdev);
 	}
 	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+	return 0;
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1304,8 +1310,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_set_size(nbd, config->bytesize, config->blksize);
-	return error;
+	return nbd_set_size(nbd, config->bytesize, config->blksize);
 }
 
 static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
@@ -1347,14 +1352,6 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
 		nbd_config_put(nbd);
 }
 
-static bool nbd_is_valid_blksize(unsigned long blksize)
-{
-	if (!blksize || !is_power_of_2(blksize) || blksize < 512 ||
-	    blksize > PAGE_SIZE)
-		return false;
-	return true;
-}
-
 static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
 {
 	nbd->tag_set.timeout = timeout * HZ;
@@ -1379,19 +1376,12 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 	case NBD_SET_SOCK:
 		return nbd_add_socket(nbd, arg, false);
 	case NBD_SET_BLKSIZE:
-		if (!arg)
-			arg = NBD_DEF_BLKSIZE;
-		if (!nbd_is_valid_blksize(arg))
-			return -EINVAL;
-		nbd_set_size(nbd, config->bytesize, arg);
-		return 0;
+		return nbd_set_size(nbd, config->bytesize, arg);
 	case NBD_SET_SIZE:
-		nbd_set_size(nbd, arg, config->blksize);
-		return 0;
+		return nbd_set_size(nbd, arg, config->blksize);
 	case NBD_SET_SIZE_BLOCKS:
-		nbd_set_size(nbd, arg * config->blksize,
-			     config->blksize);
-		return 0;
+		return nbd_set_size(nbd, arg * config->blksize,
+				    config->blksize);
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
 		return 0;
@@ -1808,18 +1798,11 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	if (info->attrs[NBD_ATTR_SIZE_BYTES])
 		bytes = nla_get_u64(info->attrs[NBD_ATTR_SIZE_BYTES]);
 
-	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]) {
+	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES])
 		bsize = nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
-		if (!bsize)
-			bsize = NBD_DEF_BLKSIZE;
-		if (!nbd_is_valid_blksize(bsize)) {
-			printk(KERN_ERR "Invalid block size %llu\n", bsize);
-			return -EINVAL;
-		}
-	}
 
 	if (bytes != config->bytesize || bsize != config->blksize)
-		nbd_set_size(nbd, bytes, bsize);
+		return nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21073.47392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BK-0007td-TX; Fri, 06 Nov 2020 19:13:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21073.47392; Fri, 06 Nov 2020 19:13:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BK-0007tH-Kj; Fri, 06 Nov 2020 19:13:54 +0000
Received: by outflank-mailman (input) for mailman id 21073;
 Fri, 06 Nov 2020 19:13:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb730-0004zS-T0
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:18 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3194473-3722-401b-8c84-454df2605b44;
 Fri, 06 Nov 2020 19:04:33 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb71y-0000yy-W9; Fri, 06 Nov 2020 19:04:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb730-0004zS-T0
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:18 +0000
X-Inumbo-ID: c3194473-3722-401b-8c84-454df2605b44
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c3194473-3722-401b-8c84-454df2605b44;
	Fri, 06 Nov 2020 19:04:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=lumJq5B2fyOOj2vqJR3cb92lXUnjwfkYYs/CdjVL7AE=; b=eyjkFqW2Lg8XRbrrVkiRkq/qq2
	49XVx9Yxw07JHDacMMOeLzZz8HG1JveigO/MGLL9NyNBjvo9oJC6aYyW+UCA7pyMcIg2mlykFZ8zX
	ty8nCCzVLygdCiZkudRiTdi/W/FqGRVOeMQ9wpUURGDFOeF8z5DD8i/HOXTvwBHN3fV5U4peZe9Pq
	DgD3Jqo41QfIfZOp1cR5hXwFKm1Z/l9ILA6PmCBeYnfhJASEsooVpF4wac7NBiKSpHFXTDdkS/nwj
	fC+l47uqcwWBpLri1fI067m613ULNzIVBLSf1JS15xnnKfEZd/fTWViAQq5S1YviscxPom3cx2ros
	iJRrYOEA==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb71y-0000yy-W9; Fri, 06 Nov 2020 19:04:15 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 14/24] pktcdvd: use set_capacity_and_notify
Date: Fri,  6 Nov 2020 20:03:26 +0100
Message-Id: <20201106190337.1973127-15-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/pktcdvd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 467dbd06b7cdb1..4326401cede445 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2130,8 +2130,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, fmode_t write)
 	}
 
 	set_capacity(pd->disk, lba << 2);
-	set_capacity(pd->bdev->bd_disk, lba << 2);
-	bd_set_nr_sectors(pd->bdev, lba << 2);
+	set_capacity_and_notify(pd->bdev->bd_disk, lba << 2);
 
 	q = bdev_get_queue(pd->bdev);
 	if (write) {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:13:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:13:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21076.47404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BO-00081d-Hj; Fri, 06 Nov 2020 19:13:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21076.47404; Fri, 06 Nov 2020 19:13:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BO-00081H-8d; Fri, 06 Nov 2020 19:13:58 +0000
Received: by outflank-mailman (input) for mailman id 21076;
 Fri, 06 Nov 2020 19:13:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb735-0004zS-TB
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:23 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9747272e-8e42-406c-a2c5-9972a3e883fb;
 Fri, 06 Nov 2020 19:04:35 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb720-0000zf-QY; Fri, 06 Nov 2020 19:04:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb735-0004zS-TB
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:05:23 +0000
X-Inumbo-ID: 9747272e-8e42-406c-a2c5-9972a3e883fb
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9747272e-8e42-406c-a2c5-9972a3e883fb;
	Fri, 06 Nov 2020 19:04:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=OzEVYDGpPhPKcdxmD06WYkOjTwHpY2TedHF0J0O+Oxs=; b=WIWq3qlg4S9I52R1ieKxa7C68n
	aUtvz/Z5VWPIixe1Jk0+iP74DUjRxGTZtczOqr2YZFiLvq0V49tmWt4Egw7piY18IH36MvoZkWHEt
	PEXXyAXapJgmVg+k6zZhVhBvhA8/MglHljbELD7NxRkbX9T5ONdSolc7glsQX2pPULh2BFtNXUSiB
	4S64nBsgEx2IPT2N8Aw5PojQzrUXaOJj9ljSBpygPZ+MvCfesSwKgolBafJYrmDDnT/lsCdBXfr9z
	7zQAfWzCCAKljC20Om2UuMm+cHGAPA54TVvos/MTzS9krZj+Wyhx49Kte2L0nUTmkA+/Fs96B8zhQ
	mbeUumNQ==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb720-0000zf-QY; Fri, 06 Nov 2020 19:04:18 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 15/24] nvme: use set_capacity_and_notify in nvme_set_queue_dying
Date: Fri,  6 Nov 2020 20:03:27 +0100
Message-Id: <20201106190337.1973127-16-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the block layer helper to update both the disk and block device
sizes.  Contrary to the name, no notification is sent in this case,
as a size of 0 is special cased.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index aa6e27c2eec945..223c681d774a6f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -93,16 +93,6 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
 					   unsigned nsid);
 
-static void nvme_update_bdev_size(struct gendisk *disk)
-{
-	struct block_device *bdev = bdget_disk(disk, 0);
-
-	if (bdev) {
-		bd_set_nr_sectors(bdev, get_capacity(disk));
-		bdput(bdev);
-	}
-}
-
 /*
  * Prepare a queue for teardown.
  *
@@ -119,8 +109,7 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 	blk_set_queue_dying(ns->queue);
 	blk_mq_unquiesce_queue(ns->queue);
 
-	set_capacity(ns->disk, 0);
-	nvme_update_bdev_size(ns->disk);
+	set_capacity_and_notify(ns->disk, 0);
 }
 
 static void nvme_queue_scan(struct nvme_ctrl *ctrl)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:14:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:14:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21083.47416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BS-00088e-07; Fri, 06 Nov 2020 19:14:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21083.47416; Fri, 06 Nov 2020 19:14:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7BR-00088P-QI; Fri, 06 Nov 2020 19:14:01 +0000
Received: by outflank-mailman (input) for mailman id 21083;
 Fri, 06 Nov 2020 19:14:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kb73o-0004zS-Uk
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:06:08 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33908fc9-68b3-480f-9414-9ad1dc72db9e;
 Fri, 06 Nov 2020 19:04:52 +0000 (UTC)
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kb72J-00017x-7W; Fri, 06 Nov 2020 19:04:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VpbQ=EM=casper.srs.infradead.org=batv+cc05c5534fc856bb48c0+6284+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kb73o-0004zS-Uk
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:06:08 +0000
X-Inumbo-ID: 33908fc9-68b3-480f-9414-9ad1dc72db9e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 33908fc9-68b3-480f-9414-9ad1dc72db9e;
	Fri, 06 Nov 2020 19:04:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=1TxDi4JZHiUiFIihFWdfLGUjoSsTps8w2H/ZcwkuUgM=; b=Nn+BVDYVvQjkrPNv/cuyIRiZPT
	3lEBXr+QivjMh5e4l/ao1VWygxVyXgb1GaEszoEv7k0V4mDQl4Q558fFjWlIp5zja4skDHErGtVSp
	UazzhRWUeqOaZF3BD6OBDPLbIVX8kVtVQNkhl1fmu2B68Gte2LOkphKfMcyhwK+VcOtW25btBkWV6
	zqDCSn9jzGsf/k9VbPpmVDu9CIfQvBntbCmhNM15v0xLxzXvs5anzeTd73+N0ndrVdf3GlCLjFmvi
	5AEKf4IRVxwZE6cQKKKLk7ce4QfJMaGBbLCFQqB8pdhqsPi3eteFIE5suw4d4lzdD5Ph0CyZ2P7Z4
	CljTjCVw==;
Received: from [2001:4bb8:184:9a8d:9e34:f7f4:e59e:ad6f] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kb72J-00017x-7W; Fri, 06 Nov 2020 19:04:35 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 24/24] block: unexport revalidate_disk_size
Date: Fri,  6 Nov 2020 20:03:36 +0100
Message-Id: <20201106190337.1973127-25-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
References: <20201106190337.1973127-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

revalidate_disk_size is now only called from set_capacity_and_notify,
so drop the export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 66ebf594c97f47..d8664f5c1ff669 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1362,7 +1362,6 @@ void revalidate_disk_size(struct gendisk *disk, bool verbose)
 		bdput(bdev);
 	}
 }
-EXPORT_SYMBOL(revalidate_disk_size);
 
 void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 06 19:41:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 19:41:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21142.47430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7bs-0002v2-EG; Fri, 06 Nov 2020 19:41:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21142.47430; Fri, 06 Nov 2020 19:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kb7bs-0002uv-B9; Fri, 06 Nov 2020 19:41:20 +0000
Received: by outflank-mailman (input) for mailman id 21142;
 Fri, 06 Nov 2020 19:41:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kb7bq-0002uN-SF
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:41:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8937588b-ab38-4eee-8e83-37b137505baf;
 Fri, 06 Nov 2020 19:41:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb7bh-0003NQ-TD; Fri, 06 Nov 2020 19:41:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kb7bh-0001gQ-KT; Fri, 06 Nov 2020 19:41:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kb7bh-0000vs-Jm; Fri, 06 Nov 2020 19:41:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pfqN=EM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kb7bq-0002uN-SF
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 19:41:18 +0000
X-Inumbo-ID: 8937588b-ab38-4eee-8e83-37b137505baf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 8937588b-ab38-4eee-8e83-37b137505baf;
	Fri, 06 Nov 2020 19:41:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sUabvlgalXwe8qbJvv72OWyEUJ0YGL5HtW9/P0+erTw=; b=Etf9hh5XDQvKu/zBdUEY8sZH4r
	SS22iyCnQwZS59Gd/YxjTrgpzU9T9TXpRUBnnkCoxlKpfVtJi55wbsCDR/F38Ucr7mNyVkiyrz2wI
	aq5Rtvh27bgRC8L3yKSAu+/KnF3z1helhkxqGwPtxWjOB8n4CoFD+gKny8+D34mLko2s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb7bh-0003NQ-TD; Fri, 06 Nov 2020 19:41:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb7bh-0001gQ-KT; Fri, 06 Nov 2020 19:41:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kb7bh-0000vs-Jm; Fri, 06 Nov 2020 19:41:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156532: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=2a5f9f6a6932214fda76b9b3c03e024772882d34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 06 Nov 2020 19:41:09 +0000

flight 156532 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156532/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  2a5f9f6a6932214fda76b9b3c03e024772882d34

Last test of basis   156523  2020-11-06 10:00:26 Z    0 days
Testing same since   156532  2020-11-06 17:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2a5f9f6a69..0a5e0ce0fb  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Nov 06 23:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 06 Nov 2020 23:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21168.47443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbAit-00034c-VK; Fri, 06 Nov 2020 23:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21168.47443; Fri, 06 Nov 2020 23:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbAit-00034V-S0; Fri, 06 Nov 2020 23:00:47 +0000
Received: by outflank-mailman (input) for mailman id 21168;
 Fri, 06 Nov 2020 23:00:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6QlO=EM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kbAir-00034Q-T3
 for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 23:00:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f253cf5-1259-4378-902e-0b3bcb88ca41;
 Fri, 06 Nov 2020 23:00:44 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A09B0208C7;
 Fri,  6 Nov 2020 23:00:42 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6QlO=EM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kbAir-00034Q-T3
	for xen-devel@lists.xenproject.org; Fri, 06 Nov 2020 23:00:46 +0000
X-Inumbo-ID: 0f253cf5-1259-4378-902e-0b3bcb88ca41
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0f253cf5-1259-4378-902e-0b3bcb88ca41;
	Fri, 06 Nov 2020 23:00:44 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id A09B0208C7;
	Fri,  6 Nov 2020 23:00:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604703643;
	bh=aB37nOeqQHWmUkfsWTfJ/37tF0GHcQXoLIg2M5HqFCE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=h/sHCxT5tvHTke/HWm1JTLlRAxJvRyRuodr0YaYMinZ4mxKBmKu7bqRNgXlmVzNJa
	 DJ8PGV1sSwH7qMahGHWaiKat81hd0mJWpjQekWQq2QiTYQTzzPR3D3CILft3Ss+MPr
	 QYgquDXHOMeAF1covGpYJJ54DM14wtAEHCPd9+bo=
Date: Fri, 6 Nov 2020 15:00:41 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    George Dunlap <george.dunlap@citrix.com>, 
    Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>
Subject: Re: preparations for 4.14.1
In-Reply-To: <e12e32ca-8d2e-7314-e942-4de77d72ba4a@suse.com>
Message-ID: <alpine.DEB.2.21.2011061459550.2323@sstabellini-ThinkPad-T480s>
References: <5aa0791a-db56-8f5a-51a1-5863748ce7f1@suse.com> <alpine.DEB.2.21.2011051753580.2323@sstabellini-ThinkPad-T480s> <e12e32ca-8d2e-7314-e942-4de77d72ba4a@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 6 Nov 2020, Jan Beulich wrote:
> On 06.11.2020 02:58, Stefano Stabellini wrote:
> > On Wed, 4 Nov 2020, Jan Beulich wrote:
> >> the release is due in a couple of weeks time. Please point out
> >> backports you find missing from the respective staging branch,
> >> but which you consider relevant. (Ian: Please double check
> >> there are indeed no tools side backports needed here.)
> >>
> >> Julien, Stefano, on the Arm side I'd like to ask for
> >>
> >> 5d45ecabe3c0 xen/arm64: force gcc 10+ to always inline generic atomics helpers
> >>
> >> just like I did when sending the respective 4.13.2 / 4.12.4
> >> mail. Is there a particular reason it wasn't put in?
> > 
> > No, I have just backported 5d45ecabe3c0 and a couple of other fixes.
> 
> Thanks.
> 
> > Jan, do you think we should backport the following also?
> > 
> > 8856a914b build: also check for empty .bss.* in .o -> .init.o conversion
> 
> Not having it wasn't causing active problems afaict, so it
> was more to prevent future issues to put it in place. Did
> we gain dependencies on this change which want backporting?

No, that's OK. I was just wondering if it fixed something important
enough to backport. I think we are good on my side then.


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 00:32:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 00:32:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21196.47473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbC9Y-0003AC-HJ; Sat, 07 Nov 2020 00:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21196.47473; Sat, 07 Nov 2020 00:32:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbC9Y-0003A5-EH; Sat, 07 Nov 2020 00:32:24 +0000
Received: by outflank-mailman (input) for mailman id 21196;
 Sat, 07 Nov 2020 00:32:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eQUk=EN=kernel.org=song@srs-us1.protection.inumbo.net>)
 id 1kbC9W-00039y-P8
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 00:32:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bc9f7f5-1e96-4fd1-a3ec-6a14ab5f913c;
 Sat, 07 Nov 2020 00:32:21 +0000 (UTC)
Received: from mail-lf1-f50.google.com (mail-lf1-f50.google.com
 [209.85.167.50])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7101222202
 for <xen-devel@lists.xenproject.org>; Sat,  7 Nov 2020 00:32:20 +0000 (UTC)
Received: by mail-lf1-f50.google.com with SMTP id s30so4378456lfc.4
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 16:32:20 -0800 (PST)
X-Inumbo-ID: 7bc9f7f5-1e96-4fd1-a3ec-6a14ab5f913c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604709140;
	bh=KcpIPe+A58yFNxxgdAdaS3ugXYdHOBtuseeQrVYguHI=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=O7dyOXal7YL7+3TQPR507WAWXuCRa9VXvh3tl/XZ3Wmey5p91Ew7PyDj8k/tZUsUX
	 XY5D3+3MbK8y+gvrbaa+8QamA+h71zwFvUqQhxnEoaqEjETZ+k6/bM4uDXaudNmc1c
	 Llst5cl3Ll69uQ67EfsvLxy48UchXaU9XWGm+pGE=
X-Gm-Message-State: AOAM532YESMNR/zFnEKDhMHvZVwr2yUGSGtuyVQ48LgxmWZ7HdhmH7IQ
	rT6c505zhJZBPIA9fqNwhX8c8XRSrkXnM/APe4A=
X-Google-Smtp-Source: ABdhPJzPKJgvb0YpcBOATticPbEhSqNhkf0gjO34lXWldMg14DmOELWQsnbdEhjrrSsS6KPE2QcSxjJkDLUT8kUmj4w=
X-Received: by 2002:a19:ae13:: with SMTP id f19mr1682538lfc.193.1604709138508;
 Fri, 06 Nov 2020 16:32:18 -0800 (PST)
MIME-Version: 1.0
References: <20201106190337.1973127-1-hch@lst.de> <20201106190337.1973127-22-hch@lst.de>
In-Reply-To: <20201106190337.1973127-22-hch@lst.de>
From: Song Liu <song@kernel.org>
Date: Fri, 6 Nov 2020 16:32:07 -0800
X-Gmail-Original-Message-ID: <CAPhsuW6GuXe_2YKnP5wRHg7ytOxjUzTQZ=fG2RKxs6woNVPFaQ@mail.gmail.com>
Message-ID: <CAPhsuW6GuXe_2YKnP5wRHg7ytOxjUzTQZ=fG2RKxs6woNVPFaQ@mail.gmail.com>
Subject: Re: [PATCH 21/24] md: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>, 
	Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Jason Wang <jasowang@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, dm-devel@redhat.com, 
	linux-block@vger.kernel.org, drbd-dev@lists.linbit.com, nbd@other.debian.org, 
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org, 
	linux-raid <linux-raid@vger.kernel.org>, linux-nvme@lists.infradead.org, 
	linux-scsi@vger.kernel.org, Linux-Fsdevel <linux-fsdevel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Nov 6, 2020 at 11:04 AM Christoph Hellwig <hch@lst.de> wrote:
>
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Song Liu <song@kernel.org>


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 00:35:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 00:35:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21203.47492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbCC5-0003MM-1E; Sat, 07 Nov 2020 00:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21203.47492; Sat, 07 Nov 2020 00:35:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbCC4-0003MF-UC; Sat, 07 Nov 2020 00:35:00 +0000
Received: by outflank-mailman (input) for mailman id 21203;
 Sat, 07 Nov 2020 00:34:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbCC3-0003Lg-P6
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 00:34:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4bd0ac3-04c6-45c0-af30-f374e1b5bcac;
 Sat, 07 Nov 2020 00:34:52 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbCBw-0001eK-Af; Sat, 07 Nov 2020 00:34:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbCBw-0002To-2E; Sat, 07 Nov 2020 00:34:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbCBv-0007nu-Va; Sat, 07 Nov 2020 00:34:51 +0000
X-Inumbo-ID: a4bd0ac3-04c6-45c0-af30-f374e1b5bcac
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HkxlkEPiDugytA/TjR49eLITQrXI2C94N7OcCKqjTL0=; b=BVph/18CGtbot+cCRZDUUZu+/b
	Ujl+PWx1vByb8whdrl8w+4a3HODe2rVyW9CnWP+zWPZKvcHmoJCGHBkfh7KY0usIsa7ZMr0/AKN5H
	n4kX4nNkOySttCsyKV02xg85iG7ljoxMyN83h6Sqcl0ZTC/Qi/c4Xjz2DIoSjlEMjgFA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156515-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156515: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-credit2:guest-localmigrate/x10:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4f9294d21c47415376215d68a0298e88582b8e7a
X-Osstest-Versions-That:
    xen=97b7b5567fba6918a656ad349051b5343b5dea2e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 00:34:51 +0000

flight 156515 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156515/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 18 guest-saverestore.2 fail in 156423 pass in 156515
 test-armhf-armhf-libvirt      8 xen-boot         fail in 156423 pass in 156515
 test-arm64-arm64-xl-thunderx  8 xen-boot                   fail pass in 156423
 test-amd64-amd64-xl-credit2  20 guest-localmigrate/x10     fail pass in 156423
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156423

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 156423 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 156423 never pass
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156358
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156358
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156358
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156358
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156358
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156358
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156358
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156358
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156358
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156358
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4f9294d21c47415376215d68a0298e88582b8e7a
baseline version:
 xen                  97b7b5567fba6918a656ad349051b5343b5dea2e

Last test of basis   156358  2020-11-02 08:38:03 Z    4 days
Testing same since   156398  2020-11-04 09:06:02 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   97b7b5567f..4f9294d21c  4f9294d21c47415376215d68a0298e88582b8e7a -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 00:40:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 00:40:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21210.47504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbCGu-0003a7-P2; Sat, 07 Nov 2020 00:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21210.47504; Sat, 07 Nov 2020 00:40:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbCGu-0003a0-L9; Sat, 07 Nov 2020 00:40:00 +0000
Received: by outflank-mailman (input) for mailman id 21210;
 Sat, 07 Nov 2020 00:39:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eQUk=EN=kernel.org=song@srs-us1.protection.inumbo.net>)
 id 1kbCGt-0003Zv-Ey
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 00:39:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dbf14de-ad00-4b4c-ab54-d49b39ca2c70;
 Sat, 07 Nov 2020 00:39:58 +0000 (UTC)
Received: from mail-lf1-f54.google.com (mail-lf1-f54.google.com
 [209.85.167.54])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A8B4C22202
 for <xen-devel@lists.xenproject.org>; Sat,  7 Nov 2020 00:39:57 +0000 (UTC)
Received: by mail-lf1-f54.google.com with SMTP id l2so4418438lfk.0
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 16:39:57 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=eQUk=EN=kernel.org=song@srs-us1.protection.inumbo.net>)
	id 1kbCGt-0003Zv-Ey
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 00:39:59 +0000
X-Inumbo-ID: 0dbf14de-ad00-4b4c-ab54-d49b39ca2c70
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0dbf14de-ad00-4b4c-ab54-d49b39ca2c70;
	Sat, 07 Nov 2020 00:39:58 +0000 (UTC)
Received: from mail-lf1-f54.google.com (mail-lf1-f54.google.com [209.85.167.54])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id A8B4C22202
	for <xen-devel@lists.xenproject.org>; Sat,  7 Nov 2020 00:39:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604709598;
	bh=yHGYGCtLswTDWsVV8G7btJ2YrfbhRJpoj/2QQHMNku0=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=JaraML33QnlELqv9b1nVGycC8TqrN2CO3mYGqYHHPbzpftbxLrgkNlDLzNuVKBnOA
	 +2GGCnIf2S6SmvnOXWVOrneIGjkyT+jTAQmqF3CZZVfU2qqbBMNpn6QobIOWbu07iq
	 qAPvrSntweEuYizqycf91KNQI82ZEYM27rTtgZAw=
Received: by mail-lf1-f54.google.com with SMTP id l2so4418438lfk.0
        for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 16:39:57 -0800 (PST)
X-Gm-Message-State: AOAM532JSiOqqS9zILphQelElLI0YHC1sQWXruVh0n26ofkk99DwghHI
	ZqyhjDi78rxejhAYWfhBPwLr0Ew8dtI1iGmvz0k=
X-Google-Smtp-Source: ABdhPJxa7Wve2F+59EP4hPqinvAbRhF4pNbwqT8FgL1QaPoOKbDOM27LB1KUwtpCP4KuABwtVBcs7qNXSe7PRXbnm/0=
X-Received: by 2002:a19:4b45:: with SMTP id y66mr1708840lfa.482.1604709595815;
 Fri, 06 Nov 2020 16:39:55 -0800 (PST)
MIME-Version: 1.0
References: <20201106190337.1973127-1-hch@lst.de> <20201106190337.1973127-23-hch@lst.de>
In-Reply-To: <20201106190337.1973127-23-hch@lst.de>
From: Song Liu <song@kernel.org>
Date: Fri, 6 Nov 2020 16:39:44 -0800
X-Gmail-Original-Message-ID: <CAPhsuW4TjGZYpf-Ad4sk5WMq8BLGTpxaCd-FnMfmqo49pX1Z9w@mail.gmail.com>
Message-ID: <CAPhsuW4TjGZYpf-Ad4sk5WMq8BLGTpxaCd-FnMfmqo49pX1Z9w@mail.gmail.com>
Subject: Re: [PATCH 22/24] md: remove a spurious call to revalidate_disk_size
 in update_size
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>, 
	Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Jason Wang <jasowang@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, dm-devel@redhat.com, 
	linux-block@vger.kernel.org, drbd-dev@lists.linbit.com, nbd@other.debian.org, 
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org, 
	linux-raid <linux-raid@vger.kernel.org>, linux-nvme@lists.infradead.org, 
	linux-scsi@vger.kernel.org, Linux-Fsdevel <linux-fsdevel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Nov 6, 2020 at 11:04 AM Christoph Hellwig <hch@lst.de> wrote:
>
> None of the ->resize methods updates the disk size, so calling
> revalidate_disk_size here won't do anything.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Song Liu <song@kernel.org>

> ---
>  drivers/md/md-cluster.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
> index 87442dc59f6ca3..35e2690c1803dd 100644
> --- a/drivers/md/md-cluster.c
> +++ b/drivers/md/md-cluster.c
> @@ -1299,8 +1299,6 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
>         } else {
>                 /* revert to previous sectors */
>                 ret = mddev->pers->resize(mddev, old_dev_sectors);
> -               if (!ret)
> -                       revalidate_disk_size(mddev->gendisk, true);
>                 ret = __sendmsg(cinfo, &cmsg);
>                 if (ret)
>                         pr_err("%s:%d: failed to send METADATA_UPDATED msg\n",
> --
> 2.28.0
>


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 00:44:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 00:44:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21215.47516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbCLQ-0004SJ-B3; Sat, 07 Nov 2020 00:44:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21215.47516; Sat, 07 Nov 2020 00:44:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbCLQ-0004SC-84; Sat, 07 Nov 2020 00:44:40 +0000
Received: by outflank-mailman (input) for mailman id 21215;
 Sat, 07 Nov 2020 00:44:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Tei=EN=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kbCLO-0004S7-Na
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 00:44:38 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfb0ef61-8e95-4c51-95c0-ffd4d2eb40f0;
 Sat, 07 Nov 2020 00:44:36 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A70Z841013792;
 Sat, 7 Nov 2020 00:44:26 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 34hhw33m6q-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 07 Nov 2020 00:44:26 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A70Ue9H195816;
 Sat, 7 Nov 2020 00:44:26 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3020.oracle.com with ESMTP id 34hw0qdmy7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 07 Nov 2020 00:44:25 +0000
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0A70iI4U024046;
 Sat, 7 Nov 2020 00:44:18 GMT
Received: from [10.74.103.192] (/10.74.103.192)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 06 Nov 2020 16:44:17 -0800
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1Tei=EN=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
	id 1kbCLO-0004S7-Na
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 00:44:38 +0000
X-Inumbo-ID: dfb0ef61-8e95-4c51-95c0-ffd4d2eb40f0
Received: from userp2120.oracle.com (unknown [156.151.31.85])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dfb0ef61-8e95-4c51-95c0-ffd4d2eb40f0;
	Sat, 07 Nov 2020 00:44:36 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
	by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A70Z841013792;
	Sat, 7 Nov 2020 00:44:26 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=icpgsybhv1Wj9WQrwswn7c2yra/D3Oe807HPp2ql6Mg=;
 b=GWCllC2zxHNdUwiSF3B4GYyw35a2Jn12rbwdWrB//bEwuYB6tvxLU6MVxV+QVGTJhDR9
 xawOpIIOTzW7rt72RzDXGbaKB0GhTSLzmU/DvF1Kp3WC+ed78ajOEgyEgfeGTjxDPSDo
 27Z+IQRc4aABbQPtaCZtNa0yFqqFKUzFniq/cX8nN+ZUVI9W3ByqzDL/UyJbQtHZNHvk
 ywoYkx7OxNUyVDYhzSrDTL5UfNXWrmBplc/Z/12E/V7uOh1mu+i2x7NJMnHdUNnkQN5a
 jBMar3BdwyrlHkNBqq3770xjUiRIhUv08oJ5jqiZrtTFLekf/+hCiX6Lfsvsa2QZjnWV fA== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
	by userp2120.oracle.com with ESMTP id 34hhw33m6q-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
	Sat, 07 Nov 2020 00:44:26 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
	by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A70Ue9H195816;
	Sat, 7 Nov 2020 00:44:26 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
	by aserp3020.oracle.com with ESMTP id 34hw0qdmy7-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
	Sat, 07 Nov 2020 00:44:25 +0000
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0A70iI4U024046;
	Sat, 7 Nov 2020 00:44:18 GMT
Received: from [10.74.103.192] (/10.74.103.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 06 Nov 2020 16:44:17 -0800
Subject: Re: [PATCH] x86/xen: fix warning when running with nosmt mitigations
To: Brian Masney <bmasney@redhat.com>, jgross@suse.com, sstabellini@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
        hpa@zytor.com, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org, dustymabe@redhat.com
References: <20201106003529.391649-1-bmasney@redhat.com>
 <20201106004743.GA380136@tp-x1>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <a541a64b-aabc-bb62-4cf7-da5004a756d7@oracle.com>
Date: Fri, 6 Nov 2020 19:44:16 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201106004743.GA380136@tp-x1>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9797 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 adultscore=0 bulkscore=0
 mlxscore=0 suspectscore=0 spamscore=0 mlxlogscore=999 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011070001
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9797 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 malwarescore=0 mlxscore=0
 suspectscore=0 clxscore=1011 priorityscore=1501 impostorscore=0
 spamscore=0 lowpriorityscore=0 mlxlogscore=999 phishscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011070001


On 11/5/20 7:47 PM, Brian Masney wrote:
> On Thu, Nov 05, 2020 at 07:35:29PM -0500, Brian Masney wrote:
>> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
>> index 799f4eba0a62..4a052459a08e 100644
>> --- a/arch/x86/xen/spinlock.c
>> +++ b/arch/x86/xen/spinlock.c
>> @@ -93,9 +93,24 @@ void xen_init_lock_cpu(int cpu)
>>  
>>  void xen_uninit_lock_cpu(int cpu)
>>  {
>> +	int irq;
>> +
>>  	if (!xen_pvspin)
>>  		return;
>>  
>> +	/*
>> +	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
>> +	 * CPUs are not activated and only the primary thread on each CPU core
>> +	 * is used. In this situation, xen_hvm_smp_prepare_cpus(), and more
>> +	 * importantly xen_init_lock_cpu(), is not called, so the
>> +	 * lock_kicker_irq is not initialized for the secondary CPUs. Let's
>> +	 * exit early if the irq is not set to avoid a warning in the console
>> +	 * log.
>> +	 */
>> +	irq = per_cpu(lock_kicker_irq, cpu);
>> +	if (irq == -1)
>> +		return;
>> +
>>  	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
> As soon as I saw this on lore, I saw that I should have passed the irq
> variable to unbind_from_irqhandler() rather than doing another per_cpu()
> lookup. I'll wait for feedback about the general approach before posting
> a v2.


This looks good. I'd shorten the comment though: your commit message already describes the scenario. And change the subject to something like "Don't unbind uninitialized lock_kicker_irq".


-boris



From xen-devel-bounces@lists.xenproject.org Sat Nov 07 01:11:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 01:11:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21226.47528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbClV-0003Bo-G2; Sat, 07 Nov 2020 01:11:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21226.47528; Sat, 07 Nov 2020 01:11:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbClV-0003Bh-Bq; Sat, 07 Nov 2020 01:11:37 +0000
Received: by outflank-mailman (input) for mailman id 21226;
 Sat, 07 Nov 2020 01:11:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kTuc=EN=redhat.com=bmasney@srs-us1.protection.inumbo.net>)
 id 1kbClT-0003Bc-Sl
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 01:11:35 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 888e7b8f-2d5a-47e5-9387-8f91592ef49b;
 Sat, 07 Nov 2020 01:11:34 +0000 (UTC)
Received: from mail-qv1-f70.google.com (mail-qv1-f70.google.com
 [209.85.219.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-265-KaIzL_T-O6CZQ9JMKHkH-w-1; Fri, 06 Nov 2020 20:11:32 -0500
Received: by mail-qv1-f70.google.com with SMTP id r11so1776806qvn.1
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 17:11:32 -0800 (PST)
Received: from localhost.localdomain (c-98-239-145-235.hsd1.wv.comcast.net.
 [98.239.145.235])
 by smtp.gmail.com with ESMTPSA id q6sm1758584qtq.53.2020.11.06.17.11.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 06 Nov 2020 17:11:30 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kTuc=EN=redhat.com=bmasney@srs-us1.protection.inumbo.net>)
	id 1kbClT-0003Bc-Sl
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 01:11:35 +0000
X-Inumbo-ID: 888e7b8f-2d5a-47e5-9387-8f91592ef49b
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 888e7b8f-2d5a-47e5-9387-8f91592ef49b;
	Sat, 07 Nov 2020 01:11:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604711493;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=s03eFWh3bvVnssKugy+SfcexCfSnOxFiFxui5ZZE6JM=;
	b=eVZtrEZww1YjvsUVfGezOqx/owmpDb0u9YcL9wkHu1NURBh6wGoInW5APFi6MST0iXrlo1
	avpkU/0uXDrRKwGxpFjjEhDMq5xG7HCwiMBsqLuNYVqlRIomOFHtsdLsojUTS2OtgM8qfO
	YdZeLP6vRoTcEkx3sxtGDxI6PRgF0Do=
Received: from mail-qv1-f70.google.com (mail-qv1-f70.google.com
 [209.85.219.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-265-KaIzL_T-O6CZQ9JMKHkH-w-1; Fri, 06 Nov 2020 20:11:32 -0500
X-MC-Unique: KaIzL_T-O6CZQ9JMKHkH-w-1
Received: by mail-qv1-f70.google.com with SMTP id r11so1776806qvn.1
        for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 17:11:32 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=s03eFWh3bvVnssKugy+SfcexCfSnOxFiFxui5ZZE6JM=;
        b=eCHBFn0iN650TG72XnMC/azAUS8rWNsc8+3ulSRNFnCQTbgPbUKGpzbp8gM7FG7/86
         lWI5dQOSGp6kzS4jqOUsanbQMOgP9oMYT1FFBK4/v2B2e6sAupTcnAjyUGsMo5qWaiB6
         fTbjj4tw+x0q4vcY5evRRsdSrkJrG/kWeQeE0ciwJ/5wHBb60NTvbGMymtoI3uR/4F6S
         m0z1jmAKOM0rC/jxRfxOrnFtiu4HTcw7wiWCk6qpXDcOEcjwSwiBlPyR8dozBDsOUkuO
         v9Y7uuAAvi2bt9UioHuFHlZKsXRuD/uCf2RkTZVJW1U7YO9RtFs+su4Dg/6QCxERJCSa
         UPQw==
X-Gm-Message-State: AOAM530QhsOO6OSsZcCL6EtWRtMjGVPy2AXhNtRo5m1B1bCzwswH5LRx
	J3XElSbWinJUup5dnnjm/LN1Y3JgzEDUYNxYQby+H/2G/j3IraJuTKIMRPjbuB8Oq+SrzA32wqD
	8qP4izDKBAfJtBxPGYVlAUCzNdWY=
X-Received: by 2002:ac8:51cd:: with SMTP id d13mr270204qtn.148.1604711491826;
        Fri, 06 Nov 2020 17:11:31 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxp4oEucNKi98qPC7sJzn96P9bBblqisJwW0aucCPpUta95o5nBu9m7hEOogL969VFDL1oc3Q==
X-Received: by 2002:ac8:51cd:: with SMTP id d13mr270180qtn.148.1604711491516;
        Fri, 06 Nov 2020 17:11:31 -0800 (PST)
Received: from localhost.localdomain (c-98-239-145-235.hsd1.wv.comcast.net. [98.239.145.235])
        by smtp.gmail.com with ESMTPSA id q6sm1758584qtq.53.2020.11.06.17.11.30
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 06 Nov 2020 17:11:30 -0800 (PST)
From: Brian Masney <bmasney@redhat.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Cc: tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	dustymabe@redhat.com
Subject: [PATCH v2] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Fri,  6 Nov 2020 20:11:19 -0500
Message-Id: <20201107011119.631442-1-bmasney@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=bmasney@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), is
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
---
Changes since v1:
- Remove duplicate per_cpu() call and pass in irq variable.
- Changed subject from 'x86/xen: fix warning when running with nosmt
  mitigations'
- Shorten code comment

 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 799f4eba0a62..043c73dfd2c9 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -93,10 +93,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat Nov 07 01:47:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 01:47:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21236.47540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbDJe-0005ui-2r; Sat, 07 Nov 2020 01:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21236.47540; Sat, 07 Nov 2020 01:46:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbDJd-0005ub-Ug; Sat, 07 Nov 2020 01:46:53 +0000
Received: by outflank-mailman (input) for mailman id 21236;
 Sat, 07 Nov 2020 01:46:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yBR3=EN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kbDJc-0005uW-JG
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 01:46:52 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9584305-562f-4080-903a-dca4f5546d69;
 Sat, 07 Nov 2020 01:46:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9B2F32078B;
 Sat,  7 Nov 2020 01:46:50 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=yBR3=EN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kbDJc-0005uW-JG
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 01:46:52 +0000
X-Inumbo-ID: b9584305-562f-4080-903a-dca4f5546d69
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b9584305-562f-4080-903a-dca4f5546d69;
	Sat, 07 Nov 2020 01:46:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 9B2F32078B;
	Sat,  7 Nov 2020 01:46:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604713611;
	bh=XpQerVOIJQI3cZ9WFTK1QdqEleg6Fb7R8S+b6FbnleQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=S9A4oiJGlXfR3eWOe/h8pTXhjB/tAajXZxViRQz8PcqePJLfs6TO4PYPuOjniA61O
	 sRTianwcS9cWtBDjKgvu6582HWO3/nyEUqRouCOYGxOiSZfgWwYFdWP/K9saxePGqt
	 Zh/j3khmT2mS5SLx7xWKUUzAlv9lhbqgbA2G/tjM=
Date: Fri, 6 Nov 2020 17:46:43 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Wei Liu <wl@xen.org>
cc: Anthony PERARD <anthony.perard@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, takahiro.akashi@linaro.org, 
    alex.bennee@linaro.org, masami.hiramatsu@linaro.org, 
    ian.jackson@eu.citrix.com
Subject: Re: [PATCH] libxl: set vuart_gfn in libxl__build_hvm
In-Reply-To: <20201106161724.5hot2tzamqhhycck@liuwe-devbox-debian-v2>
Message-ID: <alpine.DEB.2.21.2011061746340.21307@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011051312120.2323@sstabellini-ThinkPad-T480s> <20201106151146.GM2214@perard.uk.xensource.com> <20201106161724.5hot2tzamqhhycck@liuwe-devbox-debian-v2>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 6 Nov 2020, Wei Liu wrote:
> On Fri, Nov 06, 2020 at 03:11:46PM +0000, Anthony PERARD wrote:
> > On Thu, Nov 05, 2020 at 01:15:05PM -0800, Stefano Stabellini wrote:
> > > libxl: set vuart_gfn in libxl__build_hvm
> > 
> > The subject is written two times ;-)
> > 
> > > Setting vuart_gfn was missed when switching ARM guests to the PVH build.
> > > Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
> > > dom->vuart_gfn.
> > > 
> > > Without this change, xl console cannot connect to the vuart console (-t
> > > vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > 
> > > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > > index f8661e90d4..36fe8915e7 100644
> > > --- a/tools/libxl/libxl_dom.c
> > > +++ b/tools/libxl/libxl_dom.c
> > > @@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
> > >          LOG(ERROR, "hvm build set params failed");
> > >          goto out;
> > >      }
> > > +    state->vuart_gfn = dom->vuart_gfn;
> > >  
> > >      rc = hvm_build_set_xs_values(gc, domid, dom, info);
> > >      if (rc != 0) {
> > 
> > Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> This patch is based on an old tree. I have ported it to staging. Please
> check the code and shout if it is not done correctly.

All good, thanks!


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 03:15:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 03:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21247.47555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbEhL-0005bQ-GE; Sat, 07 Nov 2020 03:15:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21247.47555; Sat, 07 Nov 2020 03:15:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbEhL-0005bJ-Cy; Sat, 07 Nov 2020 03:15:27 +0000
Received: by outflank-mailman (input) for mailman id 21247;
 Sat, 07 Nov 2020 03:15:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbEhJ-0005af-TQ
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 03:15:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52116781-51c8-47da-bdf1-33ba79ad66cb;
 Sat, 07 Nov 2020 03:15:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbEh6-00014m-Qz; Sat, 07 Nov 2020 03:15:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbEh6-00056s-HO; Sat, 07 Nov 2020 03:15:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbEh6-0003yv-F7; Sat, 07 Nov 2020 03:15:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbEhJ-0005af-TQ
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 03:15:25 +0000
X-Inumbo-ID: 52116781-51c8-47da-bdf1-33ba79ad66cb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 52116781-51c8-47da-bdf1-33ba79ad66cb;
	Sat, 07 Nov 2020 03:15:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ke9bViNp5nMcGoHqPNClKM6xBZiNYlwgGzwsFgOF2H8=; b=sVqh9AI/VAwd2ACoEPtCOp9OZ1
	GbxS7WjlOOWOR54pE2woOwCzkAXkHDIz1aZBYw0UdkIg+jtq9uj8wGL6GFpUwbazxChQCLYYohEv8
	mzbuqGDn+lJ2LQ+fA06g5cJJ5HvVqGchRkrSf3XNEOiW0HpplkBaJvhM990TPUKsSh5Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156522-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156522: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=326c9a0eb67672f3d7515fe41e9deaa58fb15227
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 03:15:12 +0000

flight 156522 qemu-mainline real [real]
flight 156535 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156522/
http://logs.test-lab.xenproject.org/osstest/logs/156535/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                326c9a0eb67672f3d7515fe41e9deaa58fb15227
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   78 days
Failing since        152659  2020-08-21 14:07:39 Z   77 days  172 attempts
Testing same since   156522  2020-11-06 09:30:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61578 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 05:36:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 05:36:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21262.47567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbGto-0001Dl-7D; Sat, 07 Nov 2020 05:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21262.47567; Sat, 07 Nov 2020 05:36:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbGto-0001De-3z; Sat, 07 Nov 2020 05:36:28 +0000
Received: by outflank-mailman (input) for mailman id 21262;
 Sat, 07 Nov 2020 04:43:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s4UT=EN=gmail.com=xjtuwjp@srs-us1.protection.inumbo.net>)
 id 1kbG4W-00055N-AF
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 04:43:28 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28e7b8f5-e718-4d47-b4be-ffe546381447;
 Sat, 07 Nov 2020 04:43:27 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id 126so4803694lfi.8
 for <xen-devel@lists.xenproject.org>; Fri, 06 Nov 2020 20:43:26 -0800 (PST)
X-Inumbo-ID: 28e7b8f5-e718-4d47-b4be-ffe546381447
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=3jS9MbGtA3iifbgEcp45wxTOB4OKIZiUtDbS0BnZQ1E=;
        b=uyNTKiRfvsRl7dzkGQkHAdLGXciwNho3GRdQsHy5Sedv+AKGKhYus+bv9h8LXZQq6s
         wwu1nXip45DNnsBIC7dhPaWubGFLm0Gz/fDVyC7b6rkfgRe9QuBpVP/b4vbvyLKHOnWW
         uC1pKsPvhLda4Ffft3XcaG2vAdNhOOksanngAicA0csNFGEGuWvUhPj23BPqcTOwLl38
         DAkAPbMKlIYsr/1dbsR3CuGzzrtu21qpJ+8kuAB33j32phjXczoC3C6B84JWBG4dGfRu
         YL3D+i9k0nMtn1MouB79TjBRkFwf118fz3PSNulmZZubZi3Qf3wK+3GiX8TMC5KqNj4u
         C/Zg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=3jS9MbGtA3iifbgEcp45wxTOB4OKIZiUtDbS0BnZQ1E=;
        b=N3PF4lmgOkp0HQ2s+Xg/rNcTBEk14obbCpj1fQZaYeBbnNvHm9BoDXM8EOY1dtMOTM
         /tl9O+iBHbND3WTKXX33LlcKsW5hEovq+h6qERiNU6cSt0AbqxnXLSRmJDmC7eariv09
         MW3yberGF8RyP9nrSO9E9FGh1gDUtARIx13OHKWoIN0IGdcLCmS5uefrvyY94urQHGex
         Ko2El/VaORl8tz3gYyMcBztUS8FtVwoD8kf10MiUvlgfcGLMr66jYA1lMBJEmKhIBm7p
         /Ej2r8J0WvdbXPKzUQzjA6IbDpYj3u193z1XrBUhMxh18yCX0+p7oabMkeh3ru6GkjV9
         9Q8A==
X-Gm-Message-State: AOAM533X1LM79mekiWPiRJHYkDNuIdniwPMDTSMYq92j1piC74roBkwV
	UhM0/J1a6attaLD/msLJuE0a7WtGwXJb61twlpo=
X-Google-Smtp-Source: ABdhPJzn/pNC05U4kPURn4luKn+squhbFoU/xCSDcnlGrkwpRVWPCp6PcYhGp25nUSiML9AX1uojvU9jJs5PJTDaJxo=
X-Received: by 2002:ac2:5209:: with SMTP id a9mr1446563lfl.86.1604724205658;
 Fri, 06 Nov 2020 20:43:25 -0800 (PST)
MIME-Version: 1.0
References: <20201106190337.1973127-1-hch@lst.de> <20201106190337.1973127-25-hch@lst.de>
In-Reply-To: <20201106190337.1973127-25-hch@lst.de>
From: Jack Wang <xjtuwjp@gmail.com>
Date: Sat, 7 Nov 2020 05:43:14 +0100
Message-ID: <CAD+HZHUaPLB0T2A3vAPq6gSr5gEGK3XLMSAmO0FLhkWaLzPBpg@mail.gmail.com>
Subject: Re: [PATCH 24/24] block: unexport revalidate_disk_size
To: Christoph Hellwig <hch@lst.de>
Cc: Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>, 
	Jason Wang <jasowang@redhat.com>, Jens Axboe <axboe@kernel.dk>, 
	Josef Bacik <josef@toxicpanda.com>, Justin Sanders <justin@coraid.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Martin K. Petersen" <martin.petersen@oracle.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Mike Snitzer <snitzer@redhat.com>, Minchan Kim <minchan@kernel.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Song Liu <song@kernel.org>, Stefan Hajnoczi <stefanha@redhat.com>, ceph-devel@vger.kernel.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, 
	linux-fsdevel@vger.kernel.org, linux-nvme@lists.infradead.org, 
	linux-raid@vger.kernel.org, linux-scsi@vger.kernel.org, nbd@other.debian.org, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000003da75005b37cf8d6"

--0000000000003da75005b37cf8d6
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Christoph Hellwig <hch@lst.de> wrote on Fri, 6 Nov 2020 at 20:15:

> revalidate_disk_size is not only called from set_capacity_and_notify,
> so drop the export.

s/not/now

>
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Thanks!
Jack Wang

>
> ---
>  fs/block_dev.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 66ebf594c97f47..d8664f5c1ff669 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -1362,7 +1362,6 @@ void revalidate_disk_size(struct gendisk *disk, bool verbose)
>                 bdput(bdev);
>         }
>  }
> -EXPORT_SYMBOL(revalidate_disk_size);
>
>  void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
>  {
> --
> 2.28.0
>
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
>


--0000000000003da75005b37cf8d6--


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 06:29:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 06:29:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21273.47581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbHj8-0005fE-DF; Sat, 07 Nov 2020 06:29:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21273.47581; Sat, 07 Nov 2020 06:29:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbHj8-0005f7-9v; Sat, 07 Nov 2020 06:29:30 +0000
Received: by outflank-mailman (input) for mailman id 21273;
 Sat, 07 Nov 2020 06:29:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbHj6-0005eZ-S3
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 06:29:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5687a8a-0ed2-443f-bd99-144e40e2b8fd;
 Sat, 07 Nov 2020 06:29:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbHix-0005Xt-Cj; Sat, 07 Nov 2020 06:29:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbHix-0007i3-0G; Sat, 07 Nov 2020 06:29:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbHiw-0001FG-Um; Sat, 07 Nov 2020 06:29:18 +0000
X-Inumbo-ID: a5687a8a-0ed2-443f-bd99-144e40e2b8fd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SYEowXUNyJMGTvTMkrGqTiYcC1sDtybrUtY+oScWRlQ=; b=nJBDrJmipXvZ94REDSaSfkVTcL
	pX+VACzdgrc9PG18dCOqiTReraCwy/iBCxl4yw+QxBuOq/ca5j4lkitBolUmZsHhFHZQDiw7ZMS3D
	0SHnkFHsIRyE1zPjZjTxgFkRLF2rZnwu6w4V7jf2TiUXAz/yOjWmOLtE5atYYa9SoPyU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156524-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156524: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=2a5f9f6a6932214fda76b9b3c03e024772882d34
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 06:29:18 +0000

flight 156524 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156524/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  2a5f9f6a6932214fda76b9b3c03e024772882d34
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    1 days
Testing same since   156524  2020-11-06 14:22:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in the try variants regarding
    preemption: rwlocks are switching preemption off before testing the
    lock, while spinlocks do so only after the first check.
    
    Modify _spin_trylock() to disable preemption before testing the lock
    to be held in order to be preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen if one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make to have run in libacpi/
    such that the file(s) itself/themselves were generated before
    compilation gets attempted. The same, however, is also necessary for
    generated headers, before source files including them would get
    attempted to be compiled.
    
    The dependency specified in libacpi's Makefile, otoh, is entirely
    pointless nowadays - no compilation happens there anymore (except for
    tools involved in building the generated files). Together with it, the
    rule generating acpi.a also can go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 07:57:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 07:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21283.47597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbJ5y-0004yJ-2N; Sat, 07 Nov 2020 07:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21283.47597; Sat, 07 Nov 2020 07:57:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbJ5x-0004yC-VH; Sat, 07 Nov 2020 07:57:09 +0000
Received: by outflank-mailman (input) for mailman id 21283;
 Sat, 07 Nov 2020 07:57:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=np7k=EN=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kbJ5w-0004y7-6x
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 07:57:08 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d56b3fd-c204-4139-ae8b-ebad8c81c9a5;
 Sat, 07 Nov 2020 07:57:07 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kbJ5s-0002w8-9C; Sat, 07 Nov 2020 07:57:04 +0000
Date: Sat, 7 Nov 2020 07:57:04 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH v2 7/9] x86/p2m: pass old PTE directly to
 write_p2m_entry_pre() hook
Message-ID: <20201107075704.GA11151@deinos.phlegethon.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
 <9d10a4be-e463-0d3d-39c0-0920761537c5@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <9d10a4be-e463-0d3d-39c0-0920761537c5@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:38 +0100 on 06 Nov (1604659085), Jan Beulich wrote:
> In no case is a pointer to non-const needed. Since no pointer arithmetic
> is done by the sole user of the hook, passing in the PTE itself is quite
> fine.
> 
> While doing this adjustment also
> - drop the intermediate sh_write_p2m_entry_pre():
>   sh_unshadow_for_p2m_change() can itself be used as the hook function,
>   moving the conditional there,
> - introduce a local variable holding the flags of the old entry.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>



From xen-devel-bounces@lists.xenproject.org Sat Nov 07 08:00:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 08:00:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21297.47609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbJ95-0006JZ-TZ; Sat, 07 Nov 2020 08:00:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21297.47609; Sat, 07 Nov 2020 08:00:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbJ95-0006JS-PX; Sat, 07 Nov 2020 08:00:23 +0000
Received: by outflank-mailman (input) for mailman id 21297;
 Sat, 07 Nov 2020 08:00:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=np7k=EN=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kbJ94-0006JM-Me
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 08:00:22 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59a544c3-a0c7-4401-a139-63dc7b871b06;
 Sat, 07 Nov 2020 08:00:22 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kbJ93-0002xq-9z; Sat, 07 Nov 2020 08:00:21 +0000
Date: Sat, 7 Nov 2020 08:00:21 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH v2 8/9] x86/shadow: cosmetics to
 sh_unshadow_for_p2m_change()
Message-ID: <20201107080021.GB11151@deinos.phlegethon.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
 <db7e83c8-e40d-3642-4acf-6320b643140f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <db7e83c8-e40d-3642-4acf-6320b643140f@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:38 +0100 on 06 Nov (1604659127), Jan Beulich wrote:
> Besides the adjustments for style
> - use switch(),
> - widen scope of commonly used variables,
> - narrow scope of other variables.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 08:48:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 08:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21309.47621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbJtJ-0001at-63; Sat, 07 Nov 2020 08:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21309.47621; Sat, 07 Nov 2020 08:48:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbJtJ-0001am-2v; Sat, 07 Nov 2020 08:48:09 +0000
Received: by outflank-mailman (input) for mailman id 21309;
 Sat, 07 Nov 2020 08:48:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=np7k=EN=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kbJtH-0001ah-B2
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 08:48:07 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d30c386-f72e-43b5-b65c-2d8fcde62ff4;
 Sat, 07 Nov 2020 08:48:06 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kbJtE-0003Jq-Eq; Sat, 07 Nov 2020 08:48:04 +0000
Date: Sat, 7 Nov 2020 08:48:04 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH v2 9/9] x86/shadow: adjust TLB flushing in
 sh_unshadow_for_p2m_change()
Message-ID: <20201107084804.GC11151@deinos.phlegethon.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
 <76665833-415c-f192-29f6-1340191db7ff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <76665833-415c-f192-29f6-1340191db7ff@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:39 +0100 on 06 Nov (1604659168), Jan Beulich wrote:
> Accumulating transient state of d->dirty_cpumask in a local variable is
> unnecessary here: The flush is fine to make with the dirty set at the
> time of the call. With this, move the invocation to a central place at
> the end of the function.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 09:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 09:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21321.47633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbKPV-0004xS-Se; Sat, 07 Nov 2020 09:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21321.47633; Sat, 07 Nov 2020 09:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbKPV-0004xL-OY; Sat, 07 Nov 2020 09:21:25 +0000
Received: by outflank-mailman (input) for mailman id 21321;
 Sat, 07 Nov 2020 09:21:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbKPU-0004xG-AS
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 09:21:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd02dbd8-ddf2-4716-88b6-52f307481931;
 Sat, 07 Nov 2020 09:21:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbKPR-00018O-MN; Sat, 07 Nov 2020 09:21:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbKPR-0000eG-Bj; Sat, 07 Nov 2020 09:21:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbKPR-0004wQ-AT; Sat, 07 Nov 2020 09:21:21 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2+WIzqKiIv33BCrajFpHRTLby/7IG0d3kzInzeq7+so=; b=HkYuT3i6nKludcAJIFNdWoxKMe
	xPvhkHnmLlkXIIX2uJQlct+pvkKY3yiG1d4ysqYusm91mULXzlbqA9xZov3tTxkpWfNfVEordXXi/
	Opc43eSrpl6CxkjH+PWX+ox9MKpeTDWBk+rBYR4/f0Wr03uc9SN2uGei81ih4F5Prj74=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156525-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156525: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0c96e4297da07944525729ddbe438b0131ab5b7e
X-Osstest-Versions-That:
    xen=5784d1e9424151adfdc836535489bd068c6c0700
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 09:21:21 +0000

flight 156525 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156525/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156394
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156394
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156394
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156394
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156394
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156394
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156394
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156394
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156394
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0c96e4297da07944525729ddbe438b0131ab5b7e
baseline version:
 xen                  5784d1e9424151adfdc836535489bd068c6c0700

Last test of basis   156394  2020-11-04 08:37:48 Z    3 days
Failing since        156404  2020-11-04 22:36:21 Z    2 days    3 attempts
Testing same since   156525  2020-11-06 16:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Laurentiu Tudor <laurentiu.tudor@nxp.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5784d1e942..0c96e4297d  0c96e4297da07944525729ddbe438b0131ab5b7e -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 12:06:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 12:06:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21352.47680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbMzL-00027Z-6Z; Sat, 07 Nov 2020 12:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21352.47680; Sat, 07 Nov 2020 12:06:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbMzL-00027S-2V; Sat, 07 Nov 2020 12:06:35 +0000
Received: by outflank-mailman (input) for mailman id 21352;
 Sat, 07 Nov 2020 12:06:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbMzK-00027N-GC
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 12:06:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8910c0c-b426-460e-94a8-e3ac7e302966;
 Sat, 07 Nov 2020 12:06:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbMzH-0004WV-S1; Sat, 07 Nov 2020 12:06:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbMzH-0007uM-HG; Sat, 07 Nov 2020 12:06:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbMzH-0002TC-Ge; Sat, 07 Nov 2020 12:06:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbMzK-00027N-GC
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 12:06:34 +0000
X-Inumbo-ID: d8910c0c-b426-460e-94a8-e3ac7e302966
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d8910c0c-b426-460e-94a8-e3ac7e302966;
	Sat, 07 Nov 2020 12:06:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZK6dH7KcE4kwAgpKELG+alUcM8zcgNjgphUNrJlMm8k=; b=ooG3tMz4GgHHOhOjiQoUCUzmSZ
	wfVvcRo1RD3Xmz5bYaBxLuZv4sKMSv3M9gpMv8utM5yL9VF32B/mIwHBz9/DrCRLrbXE4FzVA0ajv
	D6BG3PW0fBJb+Ogfou3Ep+u54bA+mjqgHHQ+u2CaMjGmnVcPFaO3FilMy+3uN1xBfzuc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbMzH-0004WV-S1; Sat, 07 Nov 2020 12:06:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbMzH-0007uM-HG; Sat, 07 Nov 2020 12:06:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbMzH-0002TC-Ge; Sat, 07 Nov 2020 12:06:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156526-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 156526: trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:host-install(5):broken:regression
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ea428895af2840d85c524f0bd11a38aac308308
X-Osstest-Versions-That:
    qemuu=677cbe1324c29294bb1d1b8454b3f214725e40fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 12:06:31 +0000

flight 156526 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156526/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-credit2   5 host-install(5)        broken REGR. vs. 156301

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156301
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156301
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156301
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156301
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156301
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156301
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156301
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ea428895af2840d85c524f0bd11a38aac308308
baseline version:
 qemuu                677cbe1324c29294bb1d1b8454b3f214725e40fd

Last test of basis   156301  2020-10-29 17:39:09 Z    8 days
Testing same since   156526  2020-11-06 16:07:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit2 broken
broken-step test-armhf-armhf-xl-credit2 host-install(5)

Not pushing.

------------------------------------------------------------
commit 7ea428895af2840d85c524f0bd11a38aac308308
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Tue Aug 25 07:36:36 2020 +0200

    usb: fix setup_len init (CVE-2020-14364)
    
    Store calculated setup_len in a local variable, verify it, and only
    write it to the struct (USBDevice->setup_len) in case it passed the
    sanity checks.
    
    This prevents other code (do_token_{in,out} functions specifically)
    from working with invalid USBDevice->setup_len values and overrunning
    the USBDevice->setup_buf[] buffer.
    
    Fixes: CVE-2020-14364
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Tested-by: Gonglei <arei.gonglei@huawei.com>
    Reviewed-by: Li Qiang <liq3ea@gmail.com>
    Message-id: 20200825053636.29648-1-kraxel@redhat.com
    (cherry picked from commit b946434f2659a182afc17e155be6791ebfb302eb)
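    The validate-then-commit pattern this commit message describes can be
    sketched as follows. This is an illustrative C fragment, not the actual
    QEMU code: the struct, function name, and buffer size here are
    hypothetical stand-ins for USBDevice and its setup_buf[].

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define SETUP_BUF_SIZE 8  /* hypothetical bound standing in for setup_buf[] */

    struct usb_dev {
        int setup_len;
        uint8_t setup_buf[SETUP_BUF_SIZE];
    };

    /* Compute the length in a local variable, sanity-check it against the
     * buffer bound, and only store it in the device struct once it is known
     * to be safe.  Until the check passes, other code that reads
     * dev->setup_len never sees the unvalidated value.
     * Returns 0 on success, -1 if the requested length is out of range. */
    static int set_setup_len(struct usb_dev *dev, int requested_len)
    {
        int setup_len = requested_len;               /* local copy, not yet published */
        if (setup_len < 0 || setup_len > (int)sizeof(dev->setup_buf))
            return -1;                               /* reject before committing */
        dev->setup_len = setup_len;                  /* commit only after the check */
        return 0;
    }
    ```

    The point of the pattern is ordering: the struct field is only ever
    assigned a value that has already passed the bounds check, so a
    concurrent or later reader of setup_len cannot observe a length that
    would overrun setup_buf[].
    
    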


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 15:34:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 15:34:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21398.47719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbQDw-0003G6-03; Sat, 07 Nov 2020 15:33:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21398.47719; Sat, 07 Nov 2020 15:33:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbQDv-0003Fz-Sn; Sat, 07 Nov 2020 15:33:51 +0000
Received: by outflank-mailman (input) for mailman id 21398;
 Sat, 07 Nov 2020 15:33:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbQDu-0003Fu-Gy
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 15:33:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5490dc66-9c38-4045-bdd4-5ef2bfd1e0d5;
 Sat, 07 Nov 2020 15:33:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbQDr-0000J3-Gg; Sat, 07 Nov 2020 15:33:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbQDr-0001lO-4v; Sat, 07 Nov 2020 15:33:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbQDr-000550-2r; Sat, 07 Nov 2020 15:33:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbQDu-0003Fu-Gy
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 15:33:50 +0000
X-Inumbo-ID: 5490dc66-9c38-4045-bdd4-5ef2bfd1e0d5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5490dc66-9c38-4045-bdd4-5ef2bfd1e0d5;
	Sat, 07 Nov 2020 15:33:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U5vH39VvBOvzjunwCwIyi32/8LHaM3luFMDRUKttup0=; b=sGW+R09xiPKXLbc68sZWzTUBNm
	nspQWMoUa99l5abZxT29ms1nyiwVkwOKsrKK5hHrGTGGaTx0qdPy46yPhea3oUvrufjdTZMHGiSAB
	nSfcBPVtet17T8bwY9T8imxr8yxDbqtXhyWOSlXtui1AjGe8f6lICWeKd0PTg/U71SvA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbQDr-0000J3-Gg; Sat, 07 Nov 2020 15:33:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbQDr-0001lO-4v; Sat, 07 Nov 2020 15:33:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbQDr-000550-2r; Sat, 07 Nov 2020 15:33:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156527-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.13-testing test] 156527: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7269466a5b0c0e89b36dc9a7db0554ae404aa230
X-Osstest-Versions-That:
    qemuu=730e2b1927e7d911bbd5350714054ddd5912f4ed
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 15:33:47 +0000

flight 156527 qemu-upstream-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156527/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 149658
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 149658
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 149658
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 149658
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 149658
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 149658
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 149658
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7269466a5b0c0e89b36dc9a7db0554ae404aa230
baseline version:
 qemuu                730e2b1927e7d911bbd5350714054ddd5912f4ed

Last test of basis   149658  2020-04-14 18:08:38 Z  206 days
Testing same since   156527  2020-11-06 16:09:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   730e2b1927..7269466a5b  7269466a5b0c0e89b36dc9a7db0554ae404aa230 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 18:11:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 18:11:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21438.47734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbSg4-0000YN-El; Sat, 07 Nov 2020 18:11:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21438.47734; Sat, 07 Nov 2020 18:11:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbSg4-0000YG-Aq; Sat, 07 Nov 2020 18:11:04 +0000
Received: by outflank-mailman (input) for mailman id 21438;
 Sat, 07 Nov 2020 18:11:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbSg3-0000Y7-Js
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 18:11:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 897cc67c-b1b0-4210-ae98-89e8612969e1;
 Sat, 07 Nov 2020 18:11:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbSg0-000420-OX; Sat, 07 Nov 2020 18:11:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbSg0-0002WA-Bs; Sat, 07 Nov 2020 18:11:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbSg0-0007Gb-BG; Sat, 07 Nov 2020 18:11:00 +0000
X-Inumbo-ID: 897cc67c-b1b0-4210-ae98-89e8612969e1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dRUPo+YrIRNRL0xotEw9IBef1uGDIAm2WppPmx+XWyU=; b=6eFtcdKSvz0Z6PqrvPQJ4FE3QL
	LlPL7AHSN2DyPcVNQ/hLYJNbbw3hhLsqzfUnRr6T6JQsPFKLnvAYJ7x2eA6z8Kf03zOg/9KVvP6gb
	77iwXXCdDY0JFQBCTpUpgWitNVUGVuAR4FKA3r7D2HQTnymxB/IjROTZMZcaOm1aFdXM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156528-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.11-testing test] 156528: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=cf8d15e2819784ed2360b7b557f72db8f8db5ff6
X-Osstest-Versions-That:
    qemuu=06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 18:11:00 +0000

flight 156528 qemu-upstream-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156528/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 137592
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 137592
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 137592
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 137592
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 137592
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 137592
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 137592
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 137592
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 137592
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                cf8d15e2819784ed2360b7b557f72db8f8db5ff6
baseline version:
 qemuu                06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad

Last test of basis   137592  2019-06-11 00:34:31 Z  515 days
Testing same since   156528  2020-11-06 16:09:15 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   06fbdaf7d6..cf8d15e281  cf8d15e2819784ed2360b7b557f72db8f8db5ff6 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 19:42:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 19:42:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21455.47752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbU5h-0008EA-AW; Sat, 07 Nov 2020 19:41:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21455.47752; Sat, 07 Nov 2020 19:41:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbU5h-0008E3-7f; Sat, 07 Nov 2020 19:41:37 +0000
Received: by outflank-mailman (input) for mailman id 21455;
 Sat, 07 Nov 2020 19:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbU5f-0008Dy-OT
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 19:41:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3f61fd5-358c-475f-ae99-c9dfa7244a65;
 Sat, 07 Nov 2020 19:41:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbU5e-0005tJ-8i; Sat, 07 Nov 2020 19:41:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbU5d-0000UP-VC; Sat, 07 Nov 2020 19:41:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbU5d-0000nC-Ug; Sat, 07 Nov 2020 19:41:33 +0000
X-Inumbo-ID: f3f61fd5-358c-475f-ae99-c9dfa7244a65
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+UWCzJHN6n6NXxK0GzKSPtGb9FRjBIEYifOz9T99NAE=; b=w43dxZVkb34hP8zKhpbyGvyDJA
	JXu/KTqxV8mxDjcV66kn77Fwz9TQ2rmMIKOB+a8xwkdnYoxsqSdpHCOsUcpgf1dnT94GkM1Mga1gz
	ztw2Xu/qHMX7pN6hROVscI2sNOVgpry2yqYsn+qmJmXhA/JkVu/yPpbIblqS78wyXGE8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156531-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 156531: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=f11bbff0c02e3a0b0ca01f0f3458678b7ad5173f
X-Osstest-Versions-That:
    xtf=79d9c62fb0e89dabcda6ba265ed89607be2dedc5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 19:41:33 +0000

flight 156531 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156531/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  f11bbff0c02e3a0b0ca01f0f3458678b7ad5173f
baseline version:
 xtf                  79d9c62fb0e89dabcda6ba265ed89607be2dedc5

Last test of basis   155660  2020-10-10 17:41:44 Z   28 days
Testing same since   156531  2020-11-06 16:10:06 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   79d9c62..f11bbff  f11bbff0c02e3a0b0ca01f0f3458678b7ad5173f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 20:19:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 20:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21468.47772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbUfz-0002eI-7F; Sat, 07 Nov 2020 20:19:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21468.47772; Sat, 07 Nov 2020 20:19:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbUfz-0002eD-1Y; Sat, 07 Nov 2020 20:19:07 +0000
Received: by outflank-mailman (input) for mailman id 21468;
 Sat, 07 Nov 2020 20:19:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbUfy-0002dZ-7u
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 20:19:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65c2c378-15a5-4417-8376-0993d63e1131;
 Sat, 07 Nov 2020 20:18:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbUfq-0006jN-Qf; Sat, 07 Nov 2020 20:18:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbUfq-0001a5-FJ; Sat, 07 Nov 2020 20:18:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbUfq-0002Gy-Er; Sat, 07 Nov 2020 20:18:58 +0000
X-Inumbo-ID: 65c2c378-15a5-4417-8376-0993d63e1131
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EhTbxsr+XIZzXIcNWT5tn7vh9iljIFQ77Ya0u4VvEMk=; b=ZVjLGJEJAcoIeyjz6O4TtAO9GM
	ZsK6/qvA/+axjwA/2kYPqC9VG4aDuwUwaSlbyokfoYcOfunghjFzeLrjpVoujeGCaqnBEiP4f3wge
	CNom4khnOb845jKbCzoZ6OuURYXsMpwN1HEhU3ZRPJ04qkWOxaIU1MyXxvmJwtYGdt6I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156529-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.12-testing test] 156529: regressions - FAIL
X-Osstest-Failures:
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:regression
    qemu-upstream-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=92a78636a51611475f711637c44cafbda3ef9859
X-Osstest-Versions-That:
    qemuu=8023a62081ffbe3f734019076ec1a2b4213142bb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 20:18:58 +0000

flight 156529 qemu-upstream-4.12-testing real [real]
flight 156542 qemu-upstream-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156529/
http://logs.test-lab.xenproject.org/osstest/logs/156542/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail REGR. vs. 136311

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2          fail  like 136047
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 136311
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 136311
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 136311
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 136311
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 136311
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 136311
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 136311
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                92a78636a51611475f711637c44cafbda3ef9859
baseline version:
 qemuu                8023a62081ffbe3f734019076ec1a2b4213142bb

Last test of basis   136311  2019-05-15 16:29:20 Z  542 days
Testing same since   156529  2020-11-06 16:09:16 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 92a78636a51611475f711637c44cafbda3ef9859
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Tue Aug 25 07:36:36 2020 +0200

    usb: fix setup_len init (CVE-2020-14364)
    
    Store calculated setup_len in a local variable, verify it, and only
    write it to the struct (USBDevice->setup_len) in case it passed the
    sanity checks.
    
    This prevents other code (do_token_{in,out} functions specifically)
    from working with invalid USBDevice->setup_len values and overrunning
    the USBDevice->setup_buf[] buffer.
    
    Fixes: CVE-2020-14364
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Tested-by: Gonglei <arei.gonglei@huawei.com>
    Reviewed-by: Li Qiang <liq3ea@gmail.com>
    Message-id: 20200825053636.29648-1-kraxel@redhat.com
    (cherry picked from commit b946434f2659a182afc17e155be6791ebfb302eb)
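    [Editor's note] The pattern this commit describes — validate a
    guest-supplied length in a local variable and only commit it to the
    device struct once it is known to fit the buffer — can be sketched as
    follows. This is an illustrative standalone example, not QEMU's actual
    code; the struct, buffer size, and function name are invented for the
    sketch.

```c
#include <stdint.h>

#define SETUP_BUF_SIZE 8   /* illustrative capacity, not QEMU's value */

struct usb_dev {
    int setup_len;
    uint8_t setup_buf[SETUP_BUF_SIZE];
};

/* Keep the requested length in a local until it passes the sanity
 * check; write it into the struct only afterwards.  Returns 0 on
 * success, -1 if the request would overrun setup_buf. */
int usb_set_setup_len(struct usb_dev *dev, int requested)
{
    int setup_len = requested;               /* local copy, checked first */
    if (setup_len < 0 || setup_len > (int)sizeof(dev->setup_buf))
        return -1;                           /* rejected: dev untouched */
    dev->setup_len = setup_len;              /* commit only after check */
    return 0;
}
```

    Because the struct field is only ever written with a validated value,
    later consumers (the analogue of the do_token_{in,out} functions named
    above) can never observe an out-of-range setup_len.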


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 20:41:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 20:41:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21476.47788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbV1b-00059N-6r; Sat, 07 Nov 2020 20:41:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21476.47788; Sat, 07 Nov 2020 20:41:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbV1b-00059G-3p; Sat, 07 Nov 2020 20:41:27 +0000
Received: by outflank-mailman (input) for mailman id 21476;
 Sat, 07 Nov 2020 20:41:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbV1Z-00058i-LE
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 20:41:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d80ee3f2-9c02-4c63-b888-ff52bc2f6c59;
 Sat, 07 Nov 2020 20:41:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbV1P-0007BP-0R; Sat, 07 Nov 2020 20:41:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbV1O-00032r-KD; Sat, 07 Nov 2020 20:41:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbV1O-00057b-Jf; Sat, 07 Nov 2020 20:41:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbV1Z-00058i-LE
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 20:41:25 +0000
X-Inumbo-ID: d80ee3f2-9c02-4c63-b888-ff52bc2f6c59
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d80ee3f2-9c02-4c63-b888-ff52bc2f6c59;
	Sat, 07 Nov 2020 20:41:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YV4wNHwQ6IRFjYarSFsjo1UJZ291qy2udoyLcGv65C8=; b=KkY0FDoJ53QRw2ov3yLaHYNO+B
	m/1ieK8LS56LJN/Fu3ENiYq47fEuDcK+N28ggrpLxhM3pN5X9VpWOBfLnH6Hz50bBUryflI3OXXTt
	vnR8GiuYhbuJorkdZSxLmXktkp8/jKSF1qaLqpwKhzasWY0nl1SvlN+8aAIae1tQNLcA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbV1P-0007BP-0R; Sat, 07 Nov 2020 20:41:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbV1O-00032r-KD; Sat, 07 Nov 2020 20:41:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbV1O-00057b-Jf; Sat, 07 Nov 2020 20:41:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156533-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156533: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=dc0dae2d18d4b6f904e99e0ef9824d61ca750b3d
X-Osstest-Versions-That:
    ovmf=09af9bd9be2d3e31bba979f8cf6446017b0b863e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 20:41:14 +0000

flight 156533 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156533/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 dc0dae2d18d4b6f904e99e0ef9824d61ca750b3d
baseline version:
 ovmf                 09af9bd9be2d3e31bba979f8cf6446017b0b863e

Last test of basis   156467  2020-11-05 19:39:48 Z    2 days
Testing same since   156533  2020-11-06 17:11:20 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Enze Zhu <zhuenze@byosoft.com.cn>
  fengyunhua <fengyunhua@byosoft.com.cn>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   09af9bd9be..dc0dae2d18  dc0dae2d18d4b6f904e99e0ef9824d61ca750b3d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 22:36:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 22:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21495.47808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbWoT-00069S-I4; Sat, 07 Nov 2020 22:36:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21495.47808; Sat, 07 Nov 2020 22:36:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbWoT-00069L-F6; Sat, 07 Nov 2020 22:36:01 +0000
Received: by outflank-mailman (input) for mailman id 21495;
 Sat, 07 Nov 2020 22:35:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbWoR-00068g-Ob
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 22:35:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76540e6e-0f16-453d-937e-5856f4d1e763;
 Sat, 07 Nov 2020 22:35:52 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbWoK-000139-2e; Sat, 07 Nov 2020 22:35:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbWoJ-0008RY-MW; Sat, 07 Nov 2020 22:35:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbWoJ-0004nn-M3; Sat, 07 Nov 2020 22:35:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbWoR-00068g-Ob
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 22:35:59 +0000
X-Inumbo-ID: 76540e6e-0f16-453d-937e-5856f4d1e763
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 76540e6e-0f16-453d-937e-5856f4d1e763;
	Sat, 07 Nov 2020 22:35:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NpDTSKt1CPLqmYcGPF9U6eRsIeRyzBRlikkvSdPioeA=; b=woRgjawIoMiKyVB92fpMFXPcAK
	zmmcQ2UPgGRj8zBlP0nugutCOzzEUpgBJnL3Mukfqzs+HF9qUmSNqZrlYw1wfKP0u4gYgjQJB+YIU
	7qqWE/RyaPyzD3aHxAKFleunqgyt/q2q4Mi1/iGFqA13p7fOLz90u6S2C9P0tTH2TA2g=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbWoK-000139-2e; Sat, 07 Nov 2020 22:35:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbWoJ-0008RY-MW; Sat, 07 Nov 2020 22:35:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbWoJ-0004nn-M3; Sat, 07 Nov 2020 22:35:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156530-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.10-testing test] 156530: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e6ade47b1e1c4b9f2e49384c68c4edff783433af
X-Osstest-Versions-That:
    qemuu=8acabec966263f90ad493e4af2642947c0c43d23
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 22:35:51 +0000

flight 156530 qemu-upstream-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156530/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate          fail baseline untested
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 137539
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 137539
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 137539
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 137539
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 137539
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 137539
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 137539
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 137539
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 137539
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e6ade47b1e1c4b9f2e49384c68c4edff783433af
baseline version:
 qemuu                8acabec966263f90ad493e4af2642947c0c43d23

Last test of basis   137539  2019-06-09 13:54:31 Z  517 days
Testing same since   156530  2020-11-06 16:09:38 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   8acabec966..e6ade47b1e  e6ade47b1e1c4b9f2e49384c68c4edff783433af -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Sat Nov 07 23:41:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 07 Nov 2020 23:41:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21508.47823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbXpm-0003c6-EI; Sat, 07 Nov 2020 23:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21508.47823; Sat, 07 Nov 2020 23:41:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbXpm-0003bz-B0; Sat, 07 Nov 2020 23:41:26 +0000
Received: by outflank-mailman (input) for mailman id 21508;
 Sat, 07 Nov 2020 23:41:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbXpl-0003bQ-R5
 for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 23:41:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de732ada-4e45-4040-8a8f-f984f4f1ed80;
 Sat, 07 Nov 2020 23:41:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbXpc-0002Mo-MZ; Sat, 07 Nov 2020 23:41:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbXpc-0002dk-Ea; Sat, 07 Nov 2020 23:41:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbXpc-0005rE-E4; Sat, 07 Nov 2020 23:41:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=atuI=EN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbXpl-0003bQ-R5
	for xen-devel@lists.xenproject.org; Sat, 07 Nov 2020 23:41:25 +0000
X-Inumbo-ID: de732ada-4e45-4040-8a8f-f984f4f1ed80
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id de732ada-4e45-4040-8a8f-f984f4f1ed80;
	Sat, 07 Nov 2020 23:41:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y15Z/MNatmtG6SmbRv0/KzUCrQJ8CIJAjMnrsiw22jM=; b=rF4XSVdGc0izySBZ226ZKKXpFm
	cY10LI54qNHO0jc9lwlGWbpLvWu/q0qyW76YdDkKnV5G60ORe/OU5uxaqwc5v711U7OxjLtAEnDhw
	ZRRr9FmqRAR9Nagfmh4RMHlQbMd6pUXUwAwnXMkGi4Aw40qBQ0hNqOv7cCeUKzPM9L88=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbXpc-0002Mo-MZ; Sat, 07 Nov 2020 23:41:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbXpc-0002dk-Ea; Sat, 07 Nov 2020 23:41:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbXpc-0005rE-E4; Sat, 07 Nov 2020 23:41:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156537-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156537: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3113f3d81532d18fbeedebc0c7de8a0e42b771b2
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 07 Nov 2020 23:41:16 +0000

flight 156537 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156537/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3113f3d81532d18fbeedebc0c7de8a0e42b771b2
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  120 days
Failing since        151818  2020-07-11 04:18:52 Z  119 days  114 attempts
Testing same since   156537  2020-11-07 04:19:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25200 lines long.)
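The per-test lines in the report above follow a fixed whitespace-separated layout: job name, step number, step name, then a status token such as `fail REGR. vs. 151777` or `blocked n/a`. A minimal sketch of pulling just the regressing jobs out of a saved copy of such a report; the `/tmp/osstest-report.txt` path is hypothetical, and the sample lines are taken verbatim from this mail:

```shell
# Save two representative result lines from the report (hypothetical path).
cat > /tmp/osstest-report.txt <<'EOF'
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
EOF

# Print the job name (first field) of every line flagged as a regression.
awk '/REGR\./ { print $1 }' /tmp/osstest-report.txt
# → build-amd64-libvirt
```

The same one-liner works on any of these reports, since the `REGR.` token only appears on lines osstest classifies as regressions.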


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 00:26:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 00:26:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21520.47838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbYWy-0007mp-PL; Sun, 08 Nov 2020 00:26:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21520.47838; Sun, 08 Nov 2020 00:26:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbYWy-0007mi-MI; Sun, 08 Nov 2020 00:26:04 +0000
Received: by outflank-mailman (input) for mailman id 21520;
 Sun, 08 Nov 2020 00:26:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbYWx-0007m9-MW
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 00:26:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cc81f1c-9db6-4b4f-9753-f6149f45a29a;
 Sun, 08 Nov 2020 00:25:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbYWr-0003vK-3l; Sun, 08 Nov 2020 00:25:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbYWq-0004E3-S4; Sun, 08 Nov 2020 00:25:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbYWq-0004GN-Rb; Sun, 08 Nov 2020 00:25:56 +0000
X-Inumbo-ID: 0cc81f1c-9db6-4b4f-9753-f6149f45a29a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=izeQwlWS8UvaOMwiy86qNmLXkcV3BdwiHO56Va8Xn9k=; b=LtV3X/2eIlL4mFfkh5zAjqdkF/
	+zZ+yBXQZuy4RQnRWXRS+KSPULd7xVk3sRXugUo3y2pTNcZ6CitcVhfg1RT6euRFsiqBfCbz+EF86
	YkcNIsQEx7wkVWgDmyPdkv9n8rG6gypVAOpriASSsnVrg78xgNWsA43H1yyT3zkGid7k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 156541: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=848c3f2dd42711c4d9fc01839d6630c115daa22f
X-Osstest-Versions-That:
    xtf=f11bbff0c02e3a0b0ca01f0f3458678b7ad5173f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 00:25:56 +0000

flight 156541 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156541/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 xtf                  848c3f2dd42711c4d9fc01839d6630c115daa22f
baseline version:
 xtf                  f11bbff0c02e3a0b0ca01f0f3458678b7ad5173f

Last test of basis   156531  2020-11-06 16:10:06 Z    1 days
Testing same since   156541  2020-11-07 19:44:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xtf.git
   f11bbff..848c3f2  848c3f2dd42711c4d9fc01839d6630c115daa22f -> xen-tested-master
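The two lines above record a fast-forward of the `xen-tested-master` branch in xtf.git from the old baseline (f11bbff) to the newly tested revision (848c3f2). A rough local illustration of that push pattern, using throwaway repositories under hypothetical `/tmp` paths rather than the real xenbits repository:

```shell
# Start clean (hypothetical demo paths, NOT the real xenbits repository).
rm -rf /tmp/xtf-demo-upstream.git /tmp/xtf-demo-clone

# Bare repository standing in for xenbits.xen.org:/home/xen/git/xtf.git
git init --bare --quiet /tmp/xtf-demo-upstream.git

# Clone it, create a commit playing the role of the tested revision,
# and push it to the xen-tested-master branch, as osstest does after
# a flight passes.
git clone --quiet /tmp/xtf-demo-upstream.git /tmp/xtf-demo-clone
cd /tmp/xtf-demo-clone
git -c user.email=osstest@example.org -c user.name=osstest \
    commit --quiet --allow-empty -m "revision that passed the flight"
git push --quiet origin HEAD:xen-tested-master
```

Because `xen-tested-master` only ever advances by fast-forward, consumers tracking that branch always get a revision that has passed the full flight.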


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 01:15:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 01:15:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21528.47855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbZId-0000TZ-Lo; Sun, 08 Nov 2020 01:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21528.47855; Sun, 08 Nov 2020 01:15:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbZId-0000TS-Ix; Sun, 08 Nov 2020 01:15:19 +0000
Received: by outflank-mailman (input) for mailman id 21528;
 Sun, 08 Nov 2020 01:15:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbZIc-0000TL-8z
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 01:15:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7b6cb76-b34e-424c-8b24-dae9477f6940;
 Sun, 08 Nov 2020 01:15:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbZIZ-0000iv-KN; Sun, 08 Nov 2020 01:15:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbZIZ-0006Lo-A5; Sun, 08 Nov 2020 01:15:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbZIZ-0000CP-9c; Sun, 08 Nov 2020 01:15:15 +0000
X-Inumbo-ID: c7b6cb76-b34e-424c-8b24-dae9477f6940
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l4rvmUNpmrch+PAJ7P0BA5XC6eU4hnIHmT1/KpjVGIs=; b=hMngU4orN0N/0GlhQ5l2uc7E01
	4JSZsJiegeErg0q/w87/xZ2gA8vNMKjjkwNWah2VCK/jKYsvimLSHmeLpKTsFFdcRIM9+ALn2O4Yp
	FzZmKm5Sj6e29rgjqeXo52xRIOJaAsjZFHSL+pYl+x+zOFRIKpbz6v8xsWXtcnZPxjq0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156534-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156534: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-xsm:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=521b619acdc8f1f5acdac15b84f81fd9515b2aff
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 01:15:15 +0000

flight 156534 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156534/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm      20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                521b619acdc8f1f5acdac15b84f81fd9515b2aff
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   99 days
Failing since        152366  2020-08-01 20:49:34 Z   98 days  163 attempts
Testing same since   156534  2020-11-06 19:01:39 Z    1 days    1 attempts

------------------------------------------------------------
3429 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 655904 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 06:20:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 06:20:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21557.47889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbe3g-0002V9-JA; Sun, 08 Nov 2020 06:20:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21557.47889; Sun, 08 Nov 2020 06:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbe3g-0002V2-E7; Sun, 08 Nov 2020 06:20:12 +0000
Received: by outflank-mailman (input) for mailman id 21557;
 Sun, 08 Nov 2020 06:20:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbe3f-0002Ux-R4
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 06:20:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f798dba-7222-4876-b013-42a8a90f90f3;
 Sun, 08 Nov 2020 06:20:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbe3c-0007qy-59; Sun, 08 Nov 2020 06:20:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbe3b-0005B0-PI; Sun, 08 Nov 2020 06:20:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbe3b-0000Ot-Om; Sun, 08 Nov 2020 06:20:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbe3f-0002Ux-R4
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 06:20:11 +0000
X-Inumbo-ID: 4f798dba-7222-4876-b013-42a8a90f90f3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4f798dba-7222-4876-b013-42a8a90f90f3;
	Sun, 08 Nov 2020 06:20:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BASCrkqUjWH5wUCK3JjiwbsOrSjfHnjS6Xp/CW+ymlU=; b=AJuf0/k52gPAR50fc5m79GT+iU
	tSArdkegTk+LeAWMt9AWsl07QPi+0OtYzn0GFYQyk9f4e+aA9UIyA3sLt26Bn1Zcgg4pkjjNoRF20
	/luHEgjmuXgq+x7usSHAHTQvHyD86yEa6Z/H5BN/TrSPOzZ+5/i1FRWOHEJkCbXbCivk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbe3c-0007qy-59; Sun, 08 Nov 2020 06:20:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbe3b-0005B0-PI; Sun, 08 Nov 2020 06:20:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbe3b-0000Ot-Om; Sun, 08 Nov 2020 06:20:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156536-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156536: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-coresched-i386-xl:<job status>:broken:regression
    qemu-mainline:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:<job status>:broken:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-coresched-i386-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3493c36f0371777c62d1d72b205b0eb6117e2156
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 06:20:07 +0000

flight 156536 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156536/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-i386-xl    <job status>                 broken
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-i386-xl-pvshim       <job status>                 broken
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-coresched-i386-xl  5 host-install(5)       broken blocked in 152631
 test-amd64-coresched-amd64-xl  5 host-install(5)      broken blocked in 152631
 test-amd64-i386-xl-pvshim     5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3493c36f0371777c62d1d72b205b0eb6117e2156
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   79 days
Failing since        152659  2020-08-21 14:07:39 Z   78 days  173 attempts
Testing same since   156536  2020-11-07 03:17:03 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 broken  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    broken  
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-coresched-i386-xl broken
broken-job test-amd64-coresched-amd64-xl broken
broken-job test-amd64-i386-xl-pvshim broken
broken-step test-amd64-coresched-i386-xl host-install(5)
broken-step test-amd64-coresched-amd64-xl host-install(5)
broken-step test-amd64-i386-xl-pvshim host-install(5)

Not pushing.

(No revision log; it would be 61894 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 09:49:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 09:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21595.47912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbhJs-0003IG-Sk; Sun, 08 Nov 2020 09:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21595.47912; Sun, 08 Nov 2020 09:49:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbhJs-0003I9-Pr; Sun, 08 Nov 2020 09:49:08 +0000
Received: by outflank-mailman (input) for mailman id 21595;
 Sun, 08 Nov 2020 09:49:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbhJr-0003Hb-EM
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 09:49:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbb556df-485a-4e1e-89e2-3c1436823491;
 Sun, 08 Nov 2020 09:49:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbhJl-00046n-0G; Sun, 08 Nov 2020 09:49:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbhJk-0007zC-PS; Sun, 08 Nov 2020 09:49:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbhJk-0001xl-P1; Sun, 08 Nov 2020 09:49:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbhJr-0003Hb-EM
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 09:49:07 +0000
X-Inumbo-ID: fbb556df-485a-4e1e-89e2-3c1436823491
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fbb556df-485a-4e1e-89e2-3c1436823491;
	Sun, 08 Nov 2020 09:49:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wdNmZHdJGqh9mKmK3JSmb5+mQzyJf3id/neSOp9KDho=; b=vr/svparHMKghh1emhTzrdJM05
	JV2YgWO2TNJXu5LzxmxqszOWIzOUgtH+N7IA4w0s80bE1UCAUnOqDVt///Wyb/DLsAWr6lJTfCcAV
	2MSCGcYN6aY8P/JUbD0KQ6vENBJRXKCd8FOFbrFk7WoYFLXR2ya1LQPa3DsChsyWayA8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbhJl-00046n-0G; Sun, 08 Nov 2020 09:49:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbhJk-0007zC-PS; Sun, 08 Nov 2020 09:49:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbhJk-0001xl-P1; Sun, 08 Nov 2020 09:49:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156554-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156554: all pass - PUSHED
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 09:49:00 +0000

flight 156554 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156554/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd

Last test of basis   156344  2020-11-01 09:18:29 Z    7 days
Testing same since   156554  2020-11-08 09:20:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7056f2f89f..0a5e0ce0fb  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 10:08:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 10:08:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21608.47931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbhcQ-0005B3-IR; Sun, 08 Nov 2020 10:08:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21608.47931; Sun, 08 Nov 2020 10:08:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbhcQ-0005Aw-Ew; Sun, 08 Nov 2020 10:08:18 +0000
Received: by outflank-mailman (input) for mailman id 21608;
 Sun, 08 Nov 2020 10:08:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbhcO-0005Ar-GC
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 10:08:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26e94af2-ee8f-4e73-a713-a81684b89c4e;
 Sun, 08 Nov 2020 10:08:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbhcG-0004ah-UY; Sun, 08 Nov 2020 10:08:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbhcG-0000MP-Kr; Sun, 08 Nov 2020 10:08:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbhcG-0003KR-KJ; Sun, 08 Nov 2020 10:08:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbhcO-0005Ar-GC
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 10:08:16 +0000
X-Inumbo-ID: 26e94af2-ee8f-4e73-a713-a81684b89c4e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 26e94af2-ee8f-4e73-a713-a81684b89c4e;
	Sun, 08 Nov 2020 10:08:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NVipG8DE45V5uqZaen98Z47iQU0nzSG8IF/z4hPOosE=; b=DYDV+R1K9xuD9RRW2PLVoMXFhr
	G5OhwWeBomakT37qhO3Pt28gqLVrE470i59NcnKw2duZdJbZRBni1s0o24IK9CI2dpL88i3EfVsim
	qBl7ZGApqsPdGBh79HfBV7ZEIRyB7I9xdc7yuSPOMIrgTHYeFmhjrwsKbDIq0EKGn42E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbhcG-0004ah-UY; Sun, 08 Nov 2020 10:08:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbhcG-0000MP-Kr; Sun, 08 Nov 2020 10:08:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbhcG-0003KR-KJ; Sun, 08 Nov 2020 10:08:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156538-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156538: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 10:08:08 +0000

flight 156538 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156538/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    2 days
Failing since        156524  2020-11-06 14:22:28 Z    1 days    2 attempts
Testing same since   156538  2020-11-07 06:32:07 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 29 15:03:32 2020 -0400

    libxl: Add suppress-vmdesc to QEMU machine
    
    The device model state saved by QMP xen-save-devices-state doesn't
    include the vmdesc json.  When restoring an HVM, xen-load-devices-state
    always triggers "Expected vmdescription section, but got 0".  This is
    not a problem when restore comes from a file.  However, when QEMU runs
    in a Linux stubdom and the state comes over a console, EOF is not
    received.  This delays the restore - though it does complete.
    
    Setting suppress-vmdesc skips looking for the vmdesc during restore and
    avoids the wait.
    
    QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
    sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
    used.
    
    QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
    submission" added suppress-vmdesc in QEMU 2.3.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>

commit cd800ce442eeba5bc0857ade70a075367c01c350
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Nov 6 16:12:56 2020 +0000

    libxl: set vuart_gfn in libxl__build_hvm
    
    Setting vuart_gfn was missed when switching ARM guests to the PVH build.
    Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
    dom->vuart_gfn.
    
    Without this change, xl console cannot connect to the vuart console (-t
    vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 4196b1523aebe0ed929accba318d5e833d7ff6b3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 15:05:04 2020 +0100

    tools/libs/light: correct bitmap operations
    
    Libxl bitmap operations for single bits (test, set, reset) take the bit
    number as a signed integer without checking that the value is
    non-negative.
    
    Correct that by adding the appropriate tests.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
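    The guarded bit test described above can be sketched as follows. This is
    a minimal illustration with hypothetical names, not libxl's actual
    bitmap API: the point is that a negative (or too large) bit number is
    rejected before it is used as an array index.

    ```c
    /* Sketch only: hypothetical helper, not libxl's real implementation. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct bitmap {
        uint8_t *map;
        uint32_t size;   /* size in bytes */
    };

    /* Return 0 (bit not set) for out-of-range indices, including negative ones. */
    static int bitmap_test(const struct bitmap *bm, int bit)
    {
        if (bit < 0 || (uint32_t)bit >= bm->size * 8)
            return 0;
        return !!(bm->map[bit / 8] & (1 << (bit % 8)));
    }

    int main(void)
    {
        uint8_t bytes[2] = { 0x01, 0x80 };          /* bits 0 and 15 set */
        struct bitmap bm = { bytes, sizeof(bytes) };

        assert(bitmap_test(&bm, 0) == 1);
        assert(bitmap_test(&bm, 15) == 1);
        assert(bitmap_test(&bm, 1) == 0);
        assert(bitmap_test(&bm, -1) == 0);          /* previously unchecked */
        assert(bitmap_test(&bm, 16) == 0);
        printf("ok\n");
        return 0;
    }
    ```

    Without the `bit < 0` check, a negative bit number would index before the
    start of the map, which is the class of bug the commit closes.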

commit 8aac8e0ef43a452d0b565d63e4943c275badba3f
Author: Olaf Hering <olaf@aepfle.de>
Date:   Fri Nov 6 14:05:17 2020 +0100

    docs/xl: fix cpupool-cpu-remove
    
    The cpu-pool must be specified.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Wei Liu <wl@xen.org>

commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
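    The consistency check described above can be sketched as follows. This
    is a simplified stand-in, not Xen's actual check_lock(): the first
    acquisition records whether interrupts were enabled, and later
    acquisitions are flagged if they disagree.

    ```c
    /* Sketch only: simplified stand-in for the kind of check check_lock() does. */
    #include <assert.h>
    #include <stdio.h>

    enum irq_mode { MODE_UNKNOWN, MODE_IRQ_ON, MODE_IRQ_OFF };

    struct lock_debug {
        enum irq_mode mode;
    };

    /* Returns 1 if this acquisition is consistent with earlier ones. */
    static int check_lock(struct lock_debug *dbg, int irqs_enabled)
    {
        enum irq_mode mode = irqs_enabled ? MODE_IRQ_ON : MODE_IRQ_OFF;

        if (dbg->mode == MODE_UNKNOWN)
            dbg->mode = mode;             /* first use fixes the expectation */
        return dbg->mode == mode;
    }

    int main(void)
    {
        struct lock_debug dbg = { MODE_UNKNOWN };

        assert(check_lock(&dbg, 1) == 1); /* first use: IRQs on */
        assert(check_lock(&dbg, 1) == 1); /* consistent */
        assert(check_lock(&dbg, 0) == 0); /* inconsistent usage is flagged */
        printf("ok\n");
        return 0;
    }
    ```

    Mixing IRQs-on and IRQs-off acquisitions of the same lock is what can
    lead to deadlock, which is why extending the check to rwlocks is useful.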

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in the try variants regarding
    preemption: rwlocks switch preemption off before testing the lock,
    while spinlocks do so only after the first check.

    Modify _spin_trylock() to disable preemption before testing whether the
    lock is held, in order to be preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
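    The ordering change described above can be sketched as follows, using
    C11 atomics as simplified stand-ins rather than Xen's actual spinlock
    primitives: preemption is disabled before the lock is even tested, and
    re-enabled only on failure or unlock.

    ```c
    /* Sketch only: simplified try-lock ordering, not Xen's _spin_trylock(). */
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static _Thread_local int preempt_count;

    static void preempt_disable(void) { preempt_count++; }
    static void preempt_enable(void)  { preempt_count--; }

    static int spin_trylock(atomic_flag *lock)
    {
        preempt_disable();                /* before touching the lock */
        if (atomic_flag_test_and_set(lock)) {
            preempt_enable();             /* failed: restore preemption */
            return 0;
        }
        return 1;                         /* held, with preemption off */
    }

    static void spin_unlock(atomic_flag *lock)
    {
        atomic_flag_clear(lock);
        preempt_enable();
    }

    int main(void)
    {
        atomic_flag lock = ATOMIC_FLAG_INIT;

        assert(spin_trylock(&lock) == 1);
        assert(preempt_count == 1);       /* preemption off while held */
        assert(spin_trylock(&lock) == 0); /* second attempt fails... */
        assert(preempt_count == 1);       /* ...without leaking a count */
        spin_unlock(&lock);
        assert(preempt_count == 0);
        printf("ok\n");
        return 0;
    }
    ```

    Disabling preemption first means the test itself can never be preempted,
    matching the behaviour the rwlock try variants already had.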

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen if one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
    such that the file(s) themselves are generated before compilation is
    attempted. The same, however, is also necessary for generated headers,
    before source files including them get compiled.
    
    The dependency specified in libacpi's Makefile, on the other hand, is
    entirely pointless nowadays - no compilation happens there anymore
    (except for tools involved in building the generated files). Together
    with it, the rule generating acpi.a can also go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 11:35:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 11:35:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21645.47961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbiyd-0004Q9-50; Sun, 08 Nov 2020 11:35:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21645.47961; Sun, 08 Nov 2020 11:35:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbiyd-0004Q2-23; Sun, 08 Nov 2020 11:35:19 +0000
Received: by outflank-mailman (input) for mailman id 21645;
 Sun, 08 Nov 2020 11:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbiyc-0004PI-5D
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 11:35:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31de02c3-f4ba-4405-a778-d7a939f51df2;
 Sun, 08 Nov 2020 11:35:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbiyU-0006Iq-8j; Sun, 08 Nov 2020 11:35:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbiyU-0004ei-19; Sun, 08 Nov 2020 11:35:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbiyU-0002No-0f; Sun, 08 Nov 2020 11:35:10 +0000
X-Inumbo-ID: 31de02c3-f4ba-4405-a778-d7a939f51df2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KmtoeYGc9AJh8r6Pj5TsJ+dw/xHXgQztPgiJfKpVt5s=; b=1bARQdcyA2ZrgFbdqpNOzBtpAW
	HszT2dk2VG16ZtktUmUvpLuIY3o8JAyLqH9gWpIHIekGczfGWkLCKf9K93jqpluSwyBX0p1TPC1gP
	TcX+pVg0uCCgePsISjjwdezBvPl4CSbFeZ/EixXofo/05pmnMBbewc8e1l2pm+COGvvg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 156540: regressions - FAIL
X-Osstest-Failures:
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ea428895af2840d85c524f0bd11a38aac308308
X-Osstest-Versions-That:
    qemuu=677cbe1324c29294bb1d1b8454b3f214725e40fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 11:35:10 +0000

flight 156540 qemu-upstream-unstable real [real]
flight 156559 qemu-upstream-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156540/
http://logs.test-lab.xenproject.org/osstest/logs/156559/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 156301

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156301
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156301
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156301
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156301
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156301
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156301
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156301
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ea428895af2840d85c524f0bd11a38aac308308
baseline version:
 qemuu                677cbe1324c29294bb1d1b8454b3f214725e40fd

Last test of basis   156301  2020-10-29 17:39:09 Z    9 days
Testing same since   156526  2020-11-06 16:07:44 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7ea428895af2840d85c524f0bd11a38aac308308
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Tue Aug 25 07:36:36 2020 +0200

    usb: fix setup_len init (CVE-2020-14364)
    
    Store calculated setup_len in a local variable, verify it, and only
    write it to the struct (USBDevice->setup_len) in case it passed the
    sanity checks.
    
    This prevents other code (do_token_{in,out} functions specifically)
    from working with invalid USBDevice->setup_len values and overrunning
    the USBDevice->setup_buf[] buffer.
    
    Fixes: CVE-2020-14364
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Tested-by: Gonglei <arei.gonglei@huawei.com>
    Reviewed-by: Li Qiang <liq3ea@gmail.com>
    Message-id: 20200825053636.29648-1-kraxel@redhat.com
    (cherry picked from commit b946434f2659a182afc17e155be6791ebfb302eb)
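    [Editor's note: the pattern the commit message describes — compute
    setup_len into a local, bounds-check it, and only then store it in the
    device struct — can be sketched roughly as below. This is an
    illustrative reconstruction, not the actual QEMU code: the struct,
    buffer size, and function name are hypothetical.]

```c
#include <stdint.h>
#include <stdbool.h>

#define SETUP_BUF_SIZE 8   /* hypothetical size, for illustration only */

struct usb_device {
    int     setup_len;
    uint8_t setup_buf[SETUP_BUF_SIZE];
};

/* Validate the requested length in a local variable first; write it to
 * the struct only after it passes the sanity check.  On rejection the
 * struct is left untouched, so later code (the do_token_{in,out}
 * equivalents) never sees an out-of-range setup_len. */
static bool set_setup_len(struct usb_device *dev, int requested)
{
    int setup_len = requested;                    /* local copy */
    if (setup_len < 0 || setup_len > (int)sizeof(dev->setup_buf)) {
        return false;                             /* reject, no write */
    }
    dev->setup_len = setup_len;                   /* commit after check */
    return true;
}
```

    The point of the pattern is ordering: the unvalidated value never
    reaches the shared struct, so no other code path can race in and use
    it to overrun setup_buf[].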


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 14:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 14:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21674.47994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kblWH-0001K1-HV; Sun, 08 Nov 2020 14:18:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21674.47994; Sun, 08 Nov 2020 14:18:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kblWH-0001Ju-EH; Sun, 08 Nov 2020 14:18:13 +0000
Received: by outflank-mailman (input) for mailman id 21674;
 Sun, 08 Nov 2020 14:18:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kblWG-0001JL-6e
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 14:18:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c59bca0-d539-4ca3-917d-83fc64529f6f;
 Sun, 08 Nov 2020 14:18:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kblW9-0001CD-Hs; Sun, 08 Nov 2020 14:18:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kblW9-0005fL-Am; Sun, 08 Nov 2020 14:18:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kblW9-0001YX-AF; Sun, 08 Nov 2020 14:18:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kblWG-0001JL-6e
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 14:18:12 +0000
X-Inumbo-ID: 8c59bca0-d539-4ca3-917d-83fc64529f6f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8c59bca0-d539-4ca3-917d-83fc64529f6f;
	Sun, 08 Nov 2020 14:18:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XjhYqF8DrZfyMje/FR/eFmGc6sq7LNyYtWG7PnTFyXU=; b=MKnh7ehOxRTZLnrLJPn9TAuNwI
	HwObia3KxRnuHCGKLXcbBEogI1sDRMQEnTIG38LlbFLrg1/ectgl8+yzqrD09vmwvz0uz3IIFPZtL
	2vocrVCzaDJARlGZGk3+8Mk3CQpogspDgqSXiYwAqWyaLsvh8dOpd1bY+/o3ip5KPJlI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kblW9-0001CD-Hs; Sun, 08 Nov 2020 14:18:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kblW9-0005fL-Am; Sun, 08 Nov 2020 14:18:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kblW9-0001YX-AF; Sun, 08 Nov 2020 14:18:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156545-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156545: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1366cd58cd4459f00b4ecf5abed13e77ac4ad06c
X-Osstest-Versions-That:
    ovmf=dc0dae2d18d4b6f904e99e0ef9824d61ca750b3d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 14:18:05 +0000

flight 156545 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156545/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1366cd58cd4459f00b4ecf5abed13e77ac4ad06c
baseline version:
 ovmf                 dc0dae2d18d4b6f904e99e0ef9824d61ca750b3d

Last test of basis   156533  2020-11-06 17:11:20 Z    1 days
Testing same since   156545  2020-11-07 20:41:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   dc0dae2d18..1366cd58cd  1366cd58cd4459f00b4ecf5abed13e77ac4ad06c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 14:22:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 14:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21683.48013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kblaJ-0002Ae-5H; Sun, 08 Nov 2020 14:22:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21683.48013; Sun, 08 Nov 2020 14:22:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kblaJ-0002AX-17; Sun, 08 Nov 2020 14:22:23 +0000
Received: by outflank-mailman (input) for mailman id 21683;
 Sun, 08 Nov 2020 14:22:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kblaH-0002AR-Ss
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 14:22:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d06d981-d4b1-4056-ab5f-351b1d3565a3;
 Sun, 08 Nov 2020 14:22:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kblaF-0001Gg-Hx; Sun, 08 Nov 2020 14:22:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kblaF-00062q-7T; Sun, 08 Nov 2020 14:22:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kblaF-00052E-6z; Sun, 08 Nov 2020 14:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kblaH-0002AR-Ss
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 14:22:21 +0000
X-Inumbo-ID: 6d06d981-d4b1-4056-ab5f-351b1d3565a3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6d06d981-d4b1-4056-ab5f-351b1d3565a3;
	Sun, 08 Nov 2020 14:22:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rO0l12kTzgjNfLa3g5asjyKrvE5D+HEl5o2FVlSZWxw=; b=VucvxOlBgpDE1Il1tjA2h4Z5d6
	23P6uWRlI5huzcvnjPen5MSDiKJKudGlJEMX/Oa0P5PgTAE4TX1HNrjxDwMvDrJfnLjmC8reXMZHS
	Ligtu2sV6FI1r6uqtusDr8T5C/c49uoseWg7Khd6dbpU+XAUKnu+I9aFb1EbRUmVbMkA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kblaF-0001Gg-Hx; Sun, 08 Nov 2020 14:22:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kblaF-00062q-7T; Sun, 08 Nov 2020 14:22:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kblaF-00052E-6z; Sun, 08 Nov 2020 14:22:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156543-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.12-testing test] 156543: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=92a78636a51611475f711637c44cafbda3ef9859
X-Osstest-Versions-That:
    qemuu=8023a62081ffbe3f734019076ec1a2b4213142bb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 14:22:19 +0000

flight 156543 qemu-upstream-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156543/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 136311
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 136311
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 136311
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 136311
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 136311
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 136311
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 136311
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 136311
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                92a78636a51611475f711637c44cafbda3ef9859
baseline version:
 qemuu                8023a62081ffbe3f734019076ec1a2b4213142bb

Last test of basis   136311  2019-05-15 16:29:20 Z  542 days
Testing same since   156529  2020-11-06 16:09:16 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   8023a62081..92a78636a5  92a78636a51611475f711637c44cafbda3ef9859 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 15:00:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 15:00:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21697.48037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbmAi-0004yN-3I; Sun, 08 Nov 2020 15:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21697.48037; Sun, 08 Nov 2020 15:00:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbmAi-0004yG-0C; Sun, 08 Nov 2020 15:00:00 +0000
Received: by outflank-mailman (input) for mailman id 21697;
 Sun, 08 Nov 2020 14:59:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c4CZ=EO=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kbmAg-0004yB-Q6
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 14:59:59 +0000
Received: from wout2-smtp.messagingengine.com (unknown [64.147.123.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81181a72-5ac7-4ec6-831a-ea95dcef0c5f;
 Sun, 08 Nov 2020 14:59:57 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id AB6F454C;
 Sun,  8 Nov 2020 09:59:56 -0500 (EST)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sun, 08 Nov 2020 09:59:57 -0500
Received: from localhost.localdomain (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id EB4F4328005E;
 Sun,  8 Nov 2020 09:59:54 -0500 (EST)
X-Inumbo-ID: 81181a72-5ac7-4ec6-831a-ea95dcef0c5f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-transfer-encoding:content-type
	:date:from:message-id:mime-version:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=Dr/Os4
	7m39uIrlvoJNAUFN0AyThOrdYx72S+jUYJCb8=; b=B0jkmDcLb5IQ2Yv6JtJ1cQ
	VcX2jJZnjA9aeylCKlJeypV36DsIo74oqarEl/GJ5Gadf+BkdcQFhmN6WlT5LRIE
	W+xa3J2esBghgvRsFW/Jg5KjTCiWPcXcuerYhzZIdlx0yh1MU5AVv9OoxcxfpVLX
	6OBkLjE2tbO/sCWID5IpLPvHQwWggbChFZIau2P0SvPWs/cN8aBuHa5PDbZi/L4y
	9R/18YeGqSviaTNCCsBIdeQSfCVnI5zJamTZVsvdvB+V+RQa6anlQNVf571Gtg9W
	93yT8p6nXbwCORjyy5Qt4nz41ERylnzVGkKTJPKU5+qbJQL9yx0JVbcptnfLiLhg
	==
X-ME-Sender: <xms:6weoX8piYKOPBgl3iCTalOdJfyhlSWRWQM7mx9SPIKbhLND_nH6mZw>
    <xme:6weoXyqo-a4q8RnYujbwiVPKL6s0d8kbsiXFRUnGyj5s_qGS7kdgbnU2teh6xuGYx
    LInLu3YnTtLCg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddufedgjeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvufffkffogggtohfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetgeet
    keeukeffhfejueeludehtedtkeeuiedtgffgtdfhveefueeiiefhudehgeenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:6weoXxOuejcEnNQXn9aratvMoQPlKtzDgV6u78ZV0O1siIKHcaoGsQ>
    <xmx:6weoXz6cCBBPXNpYzZhFL_ba8WLhhXUi5gYjLQ9qRyzLAYDNqHFU6A>
    <xmx:6weoX77-dZlZMRTpDpg8g36X-jfuCYtKyWjoANWWSiZZs3TLCMfMkA>
    <xmx:7AeoX1Sxc5xbKd_AVs0lJw2e-mszuLe7I0QP32GsmzRpCoG9mKID1Q>
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] libxl: cleanup remaining backend xs dirs after driver domain
Date: Sun,  8 Nov 2020 15:59:42 +0100
Message-Id: <20201108145942.3089012-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.25.4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Organization: Invisible Things Lab
Content-Transfer-Encoding: 8bit

When a device is removed, the backend domain (which may be a driver
domain) is responsible for removing the backend entries from xenstore.
But a driver domain has no permission to remove all of them -
specifically, the directory named after the frontend-id remains. These
leftover directories may accumulate until they exceed the xenstore quota
of the driver domain, breaking further devices.

Fix this by calling libxl__xs_path_cleanup() on the backend path from
libxl__device_destroy() in the toolstack domain too. Note that
libxl__device_destroy() is called only after the driver domain has
already removed what it can (see
device_destroy_be_watch_cb()->device_hotplug_done()).

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 tools/libs/light/libxl_device.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e081faf9a94e..f131a6c822c6 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -763,6 +763,12 @@ int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
              * from the backend path.
              */
             libxl__xs_path_cleanup(gc, t, be_path);
+        } else if (domid == LIBXL_TOOLSTACK_DOMID && !libxl_only) {
+            /*
+             * The toolstack domain is then in charge of removing the
+             * parent directory, if it is already empty.
+             */
+            libxl__xs_path_cleanup(gc, t, be_path);
         }
 
         rc = libxl__xs_transaction_commit(gc, &t);
-- 
2.25.4
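[Editorial note: for readers unfamiliar with the setup, the quota leak
described in the commit message comes from per-frontend directories that
survive the driver domain's own cleanup. The toy Python sketch below
models xenstore as a plain dict to illustrate the two-step cleanup; the
path layout follows the usual backend convention
(backend/<type>/<frontend-domid>/<devid>/...), but none of these
function names are libxl or xenstore API.]

```python
# Toy model of the two-step xenstore cleanup: the driver domain can
# delete entries under its own device path, and (with this patch) the
# toolstack domain then removes the leftover per-frontend directory.
# A plain dict stands in for xenstore; names are illustrative only.

def remove_subtree(tree, path):
    """What the driver domain can do: delete its device's entries."""
    for key in [k for k in tree if k == path or k.startswith(path + "/")]:
        del tree[key]

def cleanup_empty_parent(tree, parent):
    """What the patch adds on the toolstack side: remove the
    per-frontend directory once nothing remains underneath it."""
    if not any(k.startswith(parent + "/") for k in tree):
        tree.pop(parent, None)

tree = {
    "backend/vif/5": None,            # directory named after frontend-id 5
    "backend/vif/5/0": None,          # device 0 of that frontend
    "backend/vif/5/0/state": "6",
    "backend/vif/5/0/mac": "00:16:3e:00:00:01",
}

remove_subtree(tree, "backend/vif/5/0")      # driver domain's cleanup
cleanup_empty_parent(tree, "backend/vif/5")  # toolstack removes the leftover
```

Without the second step, "backend/vif/5" would remain after every device
removal and count against the driver domain's quota.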



From xen-devel-bounces@lists.xenproject.org Sun Nov 08 15:15:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 15:15:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21708.48049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbmPT-0006ig-G9; Sun, 08 Nov 2020 15:15:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21708.48049; Sun, 08 Nov 2020 15:15:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbmPT-0006iZ-C3; Sun, 08 Nov 2020 15:15:15 +0000
Received: by outflank-mailman (input) for mailman id 21708;
 Sun, 08 Nov 2020 15:15:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P9lC=EO=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kbmPR-0006iN-QS
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 15:15:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8bfef111-cd1c-43f2-aad5-8e067207ac4c;
 Sun, 08 Nov 2020 15:15:11 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-203-SiUou0D8O3uLZ12GtKLeuQ-1; Sun, 08 Nov 2020 10:15:08 -0500
Received: by mail-wr1-f70.google.com with SMTP id q15so3064560wrw.8
 for <xen-devel@lists.xenproject.org>; Sun, 08 Nov 2020 07:15:08 -0800 (PST)
Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e?
 ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e])
 by smtp.gmail.com with ESMTPSA id l3sm11508325wmg.32.2020.11.08.07.15.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 08 Nov 2020 07:15:06 -0800 (PST)
X-Inumbo-ID: 8bfef111-cd1c-43f2-aad5-8e067207ac4c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604848510;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xCiEM7AQz5QZ2RvY5C0nr85g19BsqN3bEHGGJWjun8g=;
	b=REGghAwvDG700XxvFoEua2gT/PxE7PR+OMgmuToJtos0TbUsFc02pLoUFyg5u9TQ6GTd0A
	n2qVLz8dMOc4QT5AipGmGCRZLazwmkyyRNOI5YOh3w3XFNLLFv4FhPz+BkhcnUxXhI8aJz
	0KdtWMnG4Kk3tZ5ava30mnW7p3HOdPk=
X-MC-Unique: SiUou0D8O3uLZ12GtKLeuQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=xCiEM7AQz5QZ2RvY5C0nr85g19BsqN3bEHGGJWjun8g=;
        b=sQ8Zb2abkMf5VSKXJdO/fGGuiJjZmK9Lau44+aQnr/p5VwV9FaGUBuMe2uBHGaxxM9
         0QStuatTi9GkF4rpOLwqGcXv3zn/6yxHan61Xjcs7c9PpOToN+FICwHeTF5CYevrbrwD
         JYKA1c6/C9x2vJF15RojYTIqOlnfDNnsMaedsVztk/fsj6Kw2cpF5h9JmapAsZ4vMf9F
         WK1mkFhVWioCE4j6yDwKneezNx3TfQRQjneKXKTFHYzgtmrvxHdkX+1htk9tqZbq3Su1
         xCrK7jvo57pDyPbvhyB6NaZ9+JQoTKU6ovB/49AlmdD6qZaCKA6uLfF1N6995kY2KqKm
         r5kw==
X-Gm-Message-State: AOAM531QEdSanxLN+LRRCZJVlzQVwCPucxxxXPeCsIa/Q3nKm9n+kcVF
	ia9LI0Ny52mi8pKkL+VcdkTzMJAJGXJyZlYY6Pz8/3q/KnEEy4gkzJAm+yaduWeCylMe/tTpqst
	MNmn3B8gA8ZAAOdVGKU5JijXQoeg=
X-Received: by 2002:a1c:80d3:: with SMTP id b202mr10142607wmd.139.1604848507853;
        Sun, 08 Nov 2020 07:15:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzBz6wtcRAgwxrEGiAL8ElHcOzDdD7qQjz/7QuIFGOGrDkKJazoMrCqH8844xWODBZg8NNGyg==
X-Received: by 2002:a1c:80d3:: with SMTP id b202mr10142589wmd.139.1604848507695;
        Sun, 08 Nov 2020 07:15:07 -0800 (PST)
Subject: Re: [PATCH 23/24] virtio-blk: remove a spurious call to
 revalidate_disk_size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-24-hch@lst.de>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <d23bd50a-7555-438b-9e3b-131414b2d1a5@redhat.com>
Date: Sun, 8 Nov 2020 16:15:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201106190337.1973127-24-hch@lst.de>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06/11/20 20:03, Christoph Hellwig wrote:
> revalidate_disk_size just updates the block device size from the disk 
> size. Thus calling it from revalidate_disk_size doesn't actually do 

s/revalidate_disk_size/virtblk_update_cache_mode/

> anything.



From xen-devel-bounces@lists.xenproject.org Sun Nov 08 15:21:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 15:21:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21718.48063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbmVd-0007cP-BP; Sun, 08 Nov 2020 15:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21718.48063; Sun, 08 Nov 2020 15:21:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbmVd-0007cI-8H; Sun, 08 Nov 2020 15:21:37 +0000
Received: by outflank-mailman (input) for mailman id 21718;
 Sun, 08 Nov 2020 15:21:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbmVb-0007bg-VB
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 15:21:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4c0d675e-a38d-4aea-98aa-3cc1b493da7f;
 Sun, 08 Nov 2020 15:21:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbmVT-0002TQ-Cd; Sun, 08 Nov 2020 15:21:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbmVT-0000Nl-3R; Sun, 08 Nov 2020 15:21:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbmVT-00050s-2s; Sun, 08 Nov 2020 15:21:27 +0000
X-Inumbo-ID: 4c0d675e-a38d-4aea-98aa-3cc1b493da7f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z71aQfdGOMVNT4QYp1rZsQiyxQXaHXeJJrnQVcz1Zzk=; b=5hDktEFzsVLA1SX7amxDE/ogse
	3d4Grhi662cOjW62bXIkO1soLgdGY1hcpMt+jN9VX4i7Favz1sMUeRaGfvWB/SB37irDfrguR7QIm
	T0iey1yShHmCVgd8zT4lEh2aitik3VMLYn73jp2ifF83p9KrxpFeLWthWFj2SUkUr1y0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156549: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3113f3d81532d18fbeedebc0c7de8a0e42b771b2
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 15:21:27 +0000

flight 156549 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156549/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3113f3d81532d18fbeedebc0c7de8a0e42b771b2
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  121 days
Failing since        151818  2020-07-11 04:18:52 Z  120 days  115 attempts
Testing same since   156537  2020-11-07 04:19:10 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25200 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 16:56:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 16:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21746.48091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbnzc-0007UM-G8; Sun, 08 Nov 2020 16:56:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21746.48091; Sun, 08 Nov 2020 16:56:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbnzc-0007UF-D6; Sun, 08 Nov 2020 16:56:40 +0000
Received: by outflank-mailman (input) for mailman id 21746;
 Sun, 08 Nov 2020 16:56:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m9gi=EO=kernel.org=jic23@srs-us1.protection.inumbo.net>)
 id 1kbnzb-0007UA-Av
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 16:56:39 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ccd187a-e52e-4a09-a4da-b5004e2143e7;
 Sun, 08 Nov 2020 16:56:38 +0000 (UTC)
Received: from archlinux (cpc149474-cmbg20-2-0-cust94.5-4.cable.virginm.net
 [82.4.196.95])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 289C720678;
 Sun,  8 Nov 2020 16:56:24 +0000 (UTC)
X-Inumbo-ID: 9ccd187a-e52e-4a09-a4da-b5004e2143e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604854597;
	bh=HmpnTGD+sCqkzYKB+VNNoQPDyYLZ8DVZYqWfmSgpAfA=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=jTr8zTsz0WaJ6nnvT5Eg1Ll2ihGmqcyn+2SF+Wf/mrD2qllK/kFqFW/v7wcgZNefk
	 zS7dIr/q6PaG6DfI7dBacAjjo5nnHikNbE1GU0b2Tz2VVRAwvJAMwi+yrMpSam0vAS
	 yISkL1OxDj/prQqrSSSyaYuB90OjYqoYVe7sJd6Y=
Date: Sun, 8 Nov 2020 16:56:21 +0000
From: Jonathan Cameron <jic23@kernel.org>
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Fabrice Gasnier
 <fabrice.gasnier@st.com>, Linux Doc Mailing List
 <linux-doc@vger.kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier =?UTF-8?B?R29uesOhbGV6?=
 <javier@javigon.com>, Jonathan Corbet <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Felipe Balbi <balbi@kernel.org>,
 Frederic Barrat <fbarrat@linux.ibm.com>, Guenter Roeck
 <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>, Heikki Krogerus
 <heikki.krogerus@linux.intel.com>, Jens Axboe <axboe@kernel.dk>, Johannes
 Thumshirn <johannes.thumshirn@wdc.com>, Juergen Gross <jgross@suse.com>,
 Konstantin Khlebnikov <koct9i@gmail.com>, Kranthi Kuntala
 <kranthi.kuntala@intel.com>, Lakshmi Ramasubramanian
 <nramas@linux.microsoft.com>, Lars-Peter Clausen <lars@metafoo.de>, Len
 Brown <lenb@kernel.org>, Leonid Maksymchuk <leonmaxx@gmail.com>, Ludovic
 Desroches <ludovic.desroches@microchip.com>, Mario Limonciello
 <mario.limonciello@dell.com>, Mark Gross <mgross@linux.intel.com>, Maxime
 Coquelin <mcoquelin.stm32@gmail.com>, Michael Ellerman
 <mpe@ellerman.id.au>, Mika Westerberg <mika.westerberg@linux.intel.com>,
 Mike Kravetz <mike.kravetz@oracle.com>, Mimi Zohar <zohar@linux.ibm.com>,
 Nayna Jain <nayna@linux.ibm.com>, Nicolas Ferre
 <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>, Oded
 Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>, Orson
 Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>, Vaibhav Jain
 <vaibhav@linux.ibm.com>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201108165621.4d0da3f4@archlinux>
In-Reply-To: <20201102154250.45bee17f@coco.lan>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
	<58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
	<5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
	<20201030110925.3e09d59e@coco.lan>
	<cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
	<20201102124641.GA881895@kroah.com>
	<20201102154250.45bee17f@coco.lan>
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 2 Nov 2020 15:42:50 +0100
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> wrote:

> Em Mon, 2 Nov 2020 13:46:41 +0100
> Greg Kroah-Hartman <gregkh@linuxfoundation.org> escreveu:
> 
> > On Mon, Nov 02, 2020 at 12:04:36PM +0100, Fabrice Gasnier wrote:  
> > > On 10/30/20 11:09 AM, Mauro Carvalho Chehab wrote:    
> > > > Em Fri, 30 Oct 2020 10:19:12 +0100
> > > > Fabrice Gasnier <fabrice.gasnier@st.com> escreveu:
> > > >     
> > > >> Hi Mauro,
> > > >>
> > > >> [...]
> > > >>    
> > > >>>  
> > > >>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> > > >>> +KernelVersion:	4.12
> > > >>> +Contact:	benjamin.gaignard@st.com
> > > >>> +Description:
> > > >>> +		Reading returns the list of possible quadrature modes.
> > > >>> +
> > > >>> +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
> > > >>> +KernelVersion:	4.12
> > > >>> +Contact:	benjamin.gaignard@st.com
> > > >>> +Description:
> > > >>> +		Configure the device counter quadrature modes:
> > > >>> +
> > > >>> +		channel_A:
> > > >>> +			Encoder A input serves as the count input and B as
> > > >>> +			the UP/DOWN direction control input.
> > > >>> +
> > > >>> +		channel_B:
> > > >>> +			Encoder B input serves as the count input and A as
> > > >>> +			the UP/DOWN direction control input.
> > > >>> +
> > > >>> +		quadrature:
> > > >>> +			Encoder A and B inputs are mixed to get direction
> > > >>> +			and count with a scale of 0.25.
> > > >>> +      
> > > >>    
> > > > 
> > > > Hi Fabrice,
> > > >     
> > > >> I just noticed that since Jonathan's question in v1.
> > > >>
> > > >> The above ABI was moved in the past, as discussed in [1]. You can take a
> > > >> look at:
> > > >> b299d00 IIO: stm32: Remove quadrature related functions from trigger driver
> > > >>
> > > >> Could you please remove the above chunk ?
> > > >>
> > > >> With that, for the stm32 part:
> > > >> Acked-by: Fabrice Gasnier <fabrice.gasnier@st.com>    
> > > > 
> > > > 
> > > > Hmm... probably those were re-introduced due to a rebase. This
> > > > series was originally written about 1.5 years ago.
> > > > 
> > > > I'll drop those hunks.    
> > > 
> > > Hi Mauro, Greg,
> > > 
> > > I just noticed that this patch has been applied with the above hunk.
> > > 
> > > This should be dropped: is there a fix on its way already?
> > > (I may have missed it)    
> > 
> > Can you send a fix for just this hunk?  
> 
> Hmm...
> 
> 	$ git grep /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> 	Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:What:                /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> 	Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:What:             /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> 	Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:What:               /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> 
> Even re-doing the changes from 
> changeset b299d00420e2 ("IIO: stm32: Remove quadrature related functions from trigger driver")
> at Documentation/ABI/testing/sysfs-bus-iio-timer-stm32, there's still
> a third duplicate of some of those, as reported by the script:
> 
> 	$ ./scripts/get_abi.pl validate 2>&1|grep quadra
> 	Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:117  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:14
> 	Warning: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available is defined 3 times:  Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:111  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:8
> 
> As in_count_quadrature_mode_available is also defined at:
> 	Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2
> 
> The best approach here seems to be a patch that also drops the other
> duplication, probably moving in_count_quadrature_mode_available to a
> generic entry inside Documentation/ABI/testing/sysfs-bus-iio.

In this particular case it may be valid to do that, but in general it
is not possible without losing information - see below.

> 
> Comments?
> 
> Thanks,
> Mauro
> 
> PS: the IIO subsystem is currently the one with the most duplicated
> ABI entries:

That was intentional.  Often these provide more information on the
ABI for a particular device than is present in the base ABI doc.

A bit like when we have additional descriptions for DT binding properties
of a particular device, even though they are standard properties.

Often a standard property allows more values than the device-specific
one.  There can also be obscure coupling between sysfs attributes, due
to hardware restrictions, that we would like to provide some
explanatory info on.

I suppose we could add all this information to the parent doc but
that is pretty ugly and will make that doc very nasty to read.

Jonathan

> 
> $ ./scripts/get_abi.pl validate 2>&1|grep iio
> Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:0  Documentation/ABI/testing/sysfs-bus-iio:394
> Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:1  Documentation/ABI/testing/sysfs-bus-iio:395
> Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:2  Documentation/ABI/testing/sysfs-bus-iio:396
> Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:3  Documentation/ABI/testing/sysfs-bus-iio:397
> Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:4  Documentation/ABI/testing/sysfs-bus-iio:398
> Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:5  Documentation/ABI/testing/sysfs-bus-iio:399
> Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_preset is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:100  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:0
> Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:117  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:14
> Warning: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available is defined 3 times:  Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:111  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:8
> Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:0  Documentation/ABI/testing/sysfs-bus-iio:599
> Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_powerdown is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:36  Documentation/ABI/testing/sysfs-bus-iio:588
> Warning: /sys/bus/iio/devices/iio:deviceX/out_currentY_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-light-lm3533-als:43  Documentation/ABI/testing/sysfs-bus-iio-health-afe440x:38
> Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:0  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:0
> Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw_available is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:1  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:1
> Warning: /sys/bus/iio/devices/iio:deviceX/sensor_sensitivity is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-distance-srf08:0  Documentation/ABI/testing/sysfs-bus-iio-proximity-as3935:8
> Warning: /sys/bus/iio/devices/triggerX/sampling_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:92  Documentation/ABI/testing/sysfs-bus-iio:45
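[Editor's note] The duplicate check that `./scripts/get_abi.pl validate` performs above can be sketched roughly as follows. This is an illustrative approximation only, not the real Perl script; the file names and contents below are made-up stand-ins for the actual Documentation/ABI entries.

```python
# Sketch of duplicate-ABI detection: collect every "What:" path per ABI
# file and flag any path that is defined in more than one file.
from collections import defaultdict

def find_duplicates(abi_files):
    """abi_files: mapping of file name -> file text."""
    seen = defaultdict(list)  # sysfs path -> list of files defining it
    for name, text in abi_files.items():
        for line in text.splitlines():
            if line.startswith("What:"):
                path = line[len("What:"):].strip()
                seen[path].append(name)
    return {p: files for p, files in seen.items() if len(files) > 1}

# Hypothetical contents standing in for the stm32 timer/lptimer docs:
abi = {
    "sysfs-bus-iio-timer-stm32":
        "What:\t/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode\n",
    "sysfs-bus-iio-lptimer-stm32":
        "What:\t/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode\n",
}
dups = find_duplicates(abi)
for path, files in dups.items():
    print(f"Warning: {path} is defined {len(files)} times: {' '.join(files)}")
```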



From xen-devel-bounces@lists.xenproject.org Sun Nov 08 17:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 17:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21768.48119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kboM2-00011k-HU; Sun, 08 Nov 2020 17:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21768.48119; Sun, 08 Nov 2020 17:19:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kboM2-00011d-EL; Sun, 08 Nov 2020 17:19:50 +0000
Received: by outflank-mailman (input) for mailman id 21768;
 Sun, 08 Nov 2020 17:19:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kboM0-00011Y-HG
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 17:19:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bf2dbc0-56b0-4fac-bbca-01d09f77e1fa;
 Sun, 08 Nov 2020 17:19:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kboLx-0005Nc-Pv; Sun, 08 Nov 2020 17:19:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kboLx-0004oz-Ij; Sun, 08 Nov 2020 17:19:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kboLx-0000GS-IH; Sun, 08 Nov 2020 17:19:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kboM0-00011Y-HG
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 17:19:48 +0000
X-Inumbo-ID: 5bf2dbc0-56b0-4fac-bbca-01d09f77e1fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5bf2dbc0-56b0-4fac-bbca-01d09f77e1fa;
	Sun, 08 Nov 2020 17:19:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=eagm83USspADDuJEspbMQvnaSN3Z8+8P3l+xKn2TrCU=; b=L1khJxVLq0TIjCA468hmJKvN5x
	kOZ2fPI1dvWAKtIKwD/1yfA0vqlVJqRuSH1mhke3JAvRnr+xLRUa4Nk6pAVSp01WeGtzHC5FiIX4+
	e2+TeJV7VvyqCzE9/9WRluEl7g/5i6E3g69R9t8sQdu6yKiM93i0awxqCmYWUXSuLxLs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kboLx-0005Nc-Pv; Sun, 08 Nov 2020 17:19:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kboLx-0004oz-Ij; Sun, 08 Nov 2020 17:19:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kboLx-0000GS-IH; Sun, 08 Nov 2020 17:19:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-xsm
Message-Id: <E1kboLx-0000GS-IH@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 17:19:45 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-xsm
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156567/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-xsm.guest-start --summary-out=tmp/156567.bisection-summary --basis-template=156443 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-xsm guest-start
Searching for failure / basis pass:
 156538 fail [host=huxelrebe0] / 156443 [host=fiano0] 156401 [host=albana0] 156389 [host=elbling1] 156373 [host=huxelrebe1] 156354 [host=albana1] 156339 [host=fiano1] 156331 [host=chardonnay1] 156315 [host=chardonnay0] 156291 [host=elbling0] 156268 [host=fiano1] 156254 [host=rimava1] 156248 [host=albana0] 156228 [host=albana1] 156196 [host=huxelrebe1] 156167 [host=pinot1] 156136 ok.
Failure / basis pass flights: 156538 / 156136
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc4\
 37c386260-677cbe1324c29294bb1d1b8454b3f214725e40fd git://xenbits.xen.org/xen.git#6ca70821b59849ad97c3fadc47e63c1a4af1a78c-0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Loaded 41925 nodes in revision graph
Searching for test results:
 156119 [host=pinot0]
 156136 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156167 [host=pinot1]
 156196 [host=huxelrebe1]
 156228 [host=albana1]
 156248 [host=albana0]
 156254 [host=rimava1]
 156268 [host=fiano1]
 156291 [host=elbling0]
 156315 [host=chardonnay0]
 156331 [host=chardonnay1]
 156339 [host=fiano1]
 156354 [host=albana1]
 156373 [host=huxelrebe1]
 156389 [host=elbling1]
 156401 [host=albana0]
 156443 [host=fiano0]
 156550 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd dac867bf9adc1562a4cf9db5f89726597af13ef8
 156524 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156539 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156546 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156548 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 82c0d3d491ccb183cf12c87775086b68531b8444
 156551 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 9ff9705647646aa937b5f5c1426a64c69a62b3bd
 156553 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
 156538 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156555 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156558 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156562 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156564 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156565 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156566 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156567 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
Searching for interesting versions
 Result found: flight 156136 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c, results HASH(0x5621d81642e0) HASH(0x5621d816a4c0) HASH(0x5621d8163770) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe132\
 4c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1, results HASH(0x5621d81686f8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 9ff9705647646aa937b5f5c1426a64c69a62b3bd, results HASH(0x5621d8165728) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd dac867bf9adc1562a4cf9db5f89726597af13ef8, results HASH(0x5621d81672b0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 82c0d3d491ccb183cf12c87775086b68531b8444, results HASH(0x5621d81675b0) Result found: flight 156524 (fail), for basis failure (at\
  ancestor ~32)
 Repro found: flight 156539 (pass), for basis pass
 Repro found: flight 156558 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
No revisions left to test, checking graph state.
 Result found: flight 156555 (pass), for last pass
 Result found: flight 156562 (fail), for first failure
 Repro found: flight 156564 (pass), for last pass
 Repro found: flight 156565 (fail), for first failure
 Repro found: flight 156566 (pass), for last pass
 Repro found: flight 156567 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156567/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
156567: tolerable ALL FAIL

flight 156567 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156567/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start             fail baseline untested


jobs:
 test-amd64-i386-xl-xsm                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
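[Editor's note] The consistency check that the bisected commit adds can be sketched as follows: each lock remembers whether it was first acquired with interrupts disabled, and every later acquisition must match. This is an illustration of the idea only; the real check_lock() is C code in Xen's spinlock implementation and is considerably more involved.

```python
# Toy model of an IRQ-consistency lock check: mixing acquisitions with
# interrupts on and off on the same lock is flagged as an error.
class DemoLock:
    def __init__(self):
        self.irq_safe = None  # unknown until the first acquisition

    def check_lock(self, irqs_disabled):
        """Return True if this acquisition is consistent with history."""
        if self.irq_safe is None:
            self.irq_safe = irqs_disabled  # first use fixes the rule
            return True
        return self.irq_safe == irqs_disabled

lock = DemoLock()
assert lock.check_lock(True)       # first use: IRQs off
assert lock.check_lock(True)       # consistent reuse is fine
assert not lock.check_lock(False)  # mixed IRQ usage is flagged
```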



From xen-devel-bounces@lists.xenproject.org Sun Nov 08 17:45:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 17:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21779.48137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kboko-0003bv-QB; Sun, 08 Nov 2020 17:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21779.48137; Sun, 08 Nov 2020 17:45:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kboko-0003bo-Mz; Sun, 08 Nov 2020 17:45:26 +0000
Received: by outflank-mailman (input) for mailman id 21779;
 Sun, 08 Nov 2020 17:45:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0OR=EO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kbokm-0003bj-Qj
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 17:45:25 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 96ddb8f4-49a5-493a-8b15-31ab39ee9fa3;
 Sun, 08 Nov 2020 17:45:23 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-97-C17h6fKmOOWAQOCwoYEU-Q-1; Sun, 08 Nov 2020 12:45:20 -0500
Received: by mail-wr1-f72.google.com with SMTP id r16so3183054wrw.22
 for <xen-devel@lists.xenproject.org>; Sun, 08 Nov 2020 09:45:20 -0800 (PST)
Received: from [192.168.1.36] (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id i33sm10548606wri.79.2020.11.08.09.45.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 08 Nov 2020 09:45:18 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=q0OR=EO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kbokm-0003bj-Qj
	for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 17:45:25 +0000
X-Inumbo-ID: 96ddb8f4-49a5-493a-8b15-31ab39ee9fa3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604857522;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pKcr5vUXMJh6j6z/E1BNk/wJeVOexwL5vSDM/421TEA=;
	b=Jq2OLlC63QglC4/X9yYUMjCeNiO9TxP9lMfb95AMNxWufrS4ecytbD/o2ofoc7IoKvjLQx
	ndthC5i6xK7ttz6Woxcq87uzOXZiEElG8zIy5/MtV8+QrwvVEFfpxFVIFb2tNIFePfqdm9
	DnDRg36eLepmlmFAqlEM9fG+LHhTNuU=
X-MC-Unique: C17h6fKmOOWAQOCwoYEU-Q-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=pKcr5vUXMJh6j6z/E1BNk/wJeVOexwL5vSDM/421TEA=;
        b=gZC7A/BJzjHwJFw1peV0P3DAVykVtjklmJ2M0tV8ILqAgqoaCBK3wBdN++Ii6BRV46
         FOXp+nfWk4zxNN+zZCK67eZDJl1G8/tIyyWYYoilgxVlQragI6SenkCCvUkpumKDU6mR
         d3BUNuLBp0/cXXfEQYzrwIjev28pKfYByoV7KbDdHlJsCS99pqpN5x5qRtRsStCgxa/n
         ydSKkFuyA6pzi4ND/HJYm5H5+mx4ODrtu79Zfn+1mzRrb8Tu3pPYIFcYmCWa7QN/hkka
         C6uMjLYw7cx/6JV4Pxu5OFhCDJxgQwOcRRJZznFWPA86mMizNjm8iorbySbK+Vhjr7al
         s09g==
X-Gm-Message-State: AOAM532oJCxZoQQxTY+WC6tiDDPot+UwNZAlXMQI9ebrYPBGTUti7DDJ
	nFsJoDDUrgYTH9HhsYx2FGxNPnwHJgWcPfTSVjV7vM9ACVNGBWEtI5tSAOtXBdc9IR//FjpDzsh
	NV5EMVF72ueN5j74QiwL8Ah3ogNA=
X-Received: by 2002:a1c:6302:: with SMTP id x2mr10602933wmb.56.1604857519835;
        Sun, 08 Nov 2020 09:45:19 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx1+MmTs2lallUAly8YG6ttAFAIc9qc4UR8osQBdD9ziPolvVhuWzyWw6bD4azHYpnqWfdQ3g==
X-Received: by 2002:a1c:6302:: with SMTP id x2mr10602917wmb.56.1604857519689;
        Sun, 08 Nov 2020 09:45:19 -0800 (PST)
Subject: Re: --enable-xen on gitlab CI? (was Re: [PATCH 09/36] qdev: Make
 qdev_get_prop_ptr() get Object* arg)
To: Thomas Huth <thuth@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@gmail.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
Cc: Wainer dos Santos Moschetta <wainersm@redhat.com>,
 QEMU <qemu-devel@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>,
 Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 "open list:Block layer core" <qemu-block@nongnu.org>,
 Stefan Berger <stefanb@linux.vnet.ibm.com>,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Alex Williamson <alex.williamson@redhat.com>, John Snow <jsnow@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
 "Daniel P. Berrange" <berrange@redhat.com>, Cornelia Huck
 <cohuck@redhat.com>, Qemu-s390x list <qemu-s390x@nongnu.org>,
 Max Reitz <mreitz@redhat.com>, Igor Mammedov <imammedo@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-10-ehabkost@redhat.com>
 <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
 <20201030113516.GP5733@habkost.net>
 <7645972e-5cad-6511-b057-bd595b91c4aa@redhat.com>
 <e35c50b6-e795-d901-61e4-4879c5eadd61@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <e3e3d331-fd4f-7365-1e2f-bf8bfdb11396@redhat.com>
Date: Sun, 8 Nov 2020 18:45:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <e35c50b6-e795-d901-61e4-4879c5eadd61@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/31/20 11:25 AM, Thomas Huth wrote:
> On 30/10/2020 18.13, Paolo Bonzini wrote:
>> On 30/10/20 12:35, Eduardo Habkost wrote:
>>>
>>> What is necessary to make sure we have a CONFIG_XEN=y job in
>>> gitlab CI?  Maybe just including xen-devel in some of the
>>> container images is enough?
>>
>> Fedora already has it, but build-system-fedora does not include
>> x86_64-softmmu.
> 
> Eduardo, could you try to add xen-devel to the centos8 container? If that
> does not work, we can still move the x86_64-softmmu target to the fedora
> pipeline instead.

On CentOS 8:

  #6 10.70 No match for argument: xen-devel
  #6 10.71 Error: Unable to find a match: xen-devel

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Sun Nov 08 18:52:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 18:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21791.48152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbpmv-00018Y-Ll; Sun, 08 Nov 2020 18:51:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21791.48152; Sun, 08 Nov 2020 18:51:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbpmv-00018R-IR; Sun, 08 Nov 2020 18:51:41 +0000
Received: by outflank-mailman (input) for mailman id 21791;
 Sun, 08 Nov 2020 18:51:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbpmu-00017n-LE
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 18:51:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c8700ea-d021-4e54-8a5e-c215ff350196;
 Sun, 08 Nov 2020 18:51:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbpmm-0007Fo-JB; Sun, 08 Nov 2020 18:51:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbpmm-0008UW-9g; Sun, 08 Nov 2020 18:51:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbpmm-0001MI-8z; Sun, 08 Nov 2020 18:51:32 +0000
X-Inumbo-ID: 4c8700ea-d021-4e54-8a5e-c215ff350196
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wql+QNm2hucj6v8DSJ2mBzPP6jcU0rdLx23YpyixIRU=; b=dgv6mJg3ViRDX0crI8ROysNO9f
	j/cPHctL9gbIUJmMEuYoxwVwWrALv+fhSN0DRIlj7AD6PQ//Q92T8SICL1SBGQ9nQfYa2kZdIY/tZ
	egsq6Tlii7GtcNeUTLVSluXjpPFuP62Q5Xo9RoZ2Tbjtk8yd/v8SAOaf0FtiAHzOESNI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156547: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:debian-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4429f14aeea979b63bcafdcf9f09677fcf8fd475
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 18:51:32 +0000

flight 156547 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156547/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  12 debian-install           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4429f14aeea979b63bcafdcf9f09677fcf8fd475
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   99 days
Failing since        152366  2020-08-01 20:49:34 Z   98 days  164 attempts
Testing same since   156547  2020-11-08 01:18:50 Z    0 days    1 attempts

------------------------------------------------------------
3468 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 661851 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 08 20:48:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 20:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21817.48175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbrbs-0002Id-FG; Sun, 08 Nov 2020 20:48:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21817.48175; Sun, 08 Nov 2020 20:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbrbs-0002IW-CG; Sun, 08 Nov 2020 20:48:24 +0000
Received: by outflank-mailman (input) for mailman id 21817;
 Sun, 08 Nov 2020 20:48:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q0OR=EO=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kbrbr-0002IR-A6
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 20:48:23 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id bea7b5fe-f014-4c0f-b11a-6147d862d0cc;
 Sun, 08 Nov 2020 20:48:20 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-567-iSxbM1-PP6mlzMlhdqhbZA-1; Sun, 08 Nov 2020 15:48:16 -0500
Received: by mail-wr1-f72.google.com with SMTP id e18so3367686wrs.23
 for <xen-devel@lists.xenproject.org>; Sun, 08 Nov 2020 12:48:16 -0800 (PST)
Received: from localhost.localdomain (234.red-83-42-66.dynamicip.rima-tde.net.
 [83.42.66.234])
 by smtp.gmail.com with ESMTPSA id n9sm10582699wmd.4.2020.11.08.12.48.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 08 Nov 2020 12:48:14 -0800 (PST)
X-Inumbo-ID: bea7b5fe-f014-4c0f-b11a-6147d862d0cc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604868500;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JaO3dtFj5WhHKtEHsZu2VQNZlyIz3aXfJ70aU8Z8ugU=;
	b=FMSo9KduK12oOyn0FMV19Wve2+3gMAyPw6N4B5ByRAI3cAjoYvSkjoKLPvyrcVTvOKHwpt
	pC+tfvkZ3EVyNR+uSx+vq7h3r+sTb/AP0qTR9Qie6qqjvryZ5VoyC2xAphPjdI1GbmP43z
	vGP9C4mb1CAvLgilHO2W0zGBeyRXbkk=
X-MC-Unique: iSxbM1-PP6mlzMlhdqhbZA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=JaO3dtFj5WhHKtEHsZu2VQNZlyIz3aXfJ70aU8Z8ugU=;
        b=fdoatJ9rJEOFCrMfYKFLVJ6l+mJAxA5/q8JANfFb5UD1YjE8w6SI69FFxjlN9+W2BN
         iFNW9seuysfnl7ytAnHNJ2zerV93xP0QLZ0ti32VqqrcblcECkbaVwmeXPintVI/nlgv
         Kn+7AgBwmxIz0N9fh7B46R2CNgZb5M+aHGCdaM6xA3ACj/vU7mdrmEu0acfbQE1ZGy5G
         ulLp5RYf53VCugQBb3vyOeiPtWD7p3l+CGdWRqc9yGtgXjnBnkN299wG+ZFw1RJRkJIf
         i8cv0241ixONSpOuMDbyedO98r8ogV81zV7dEVdbfzk2GN+gbSbwGJ/2phQ5Nl3AnTaa
         bMXQ==
X-Gm-Message-State: AOAM532arU5PejqUZ9mMngcXvwgiGlO8cLIYsCqhX8D0CmUwEqoa9hT7
	jQvKK1f3jLVPFaXeFQeoEpmtYoq5cxnZA7XX+c25HLrLDpSySAetvhKda5GRK0v4UmkXCjwUJa/
	6ieOqCZjsi9jDhsy2nc8o+vqYC4s=
X-Received: by 2002:adf:dc4c:: with SMTP id m12mr14577106wrj.177.1604868495165;
        Sun, 08 Nov 2020 12:48:15 -0800 (PST)
X-Google-Smtp-Source: ABdhPJznq1KGLX1Fjb9gaA2+nc6mWhBkui9o+7ISQgnfxHqJ/igupLFrf03tBjTb6p55sq27Y5Ymbg==
X-Received: by 2002:adf:dc4c:: with SMTP id m12mr14577088wrj.177.1604868494995;
        Sun, 08 Nov 2020 12:48:14 -0800 (PST)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Thomas Huth <thuth@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Richard Henderson <rth@twiddle.net>,
	Fam Zheng <fam@euphon.net>,
	"Daniel P . Berrange" <berrange@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH-for-6.0 v4 15/17] gitlab-ci: Add test for Xen (on CentOS 7)
Date: Sun,  8 Nov 2020 21:45:33 +0100
Message-Id: <20201108204535.2319870-16-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201108204535.2319870-1-philmd@redhat.com>
References: <20201108204535.2319870-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Xen packages are available in CentOS 7, but have been
removed from CentOS 8. Use the CentOS 7 container.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
---
 .gitlab-ci.yml | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 2f0da7b3dc1..8e15266c277 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -557,6 +557,27 @@ check-crypto-only-gnutls:
     IMAGE: centos7
     MAKE_CHECK_ARGS: check
 
+build-xen-centos:
+  <<: *native_build_job_definition
+  variables:
+    IMAGE: centos7
+    TARGETS: i386-softmmu x86_64-softmmu
+    CONFIGURE_ARGS: --enable-xen
+    MAKE_CHECK_ARGS: check-build
+  artifacts:
+    paths:
+      - build
+
+check-xen-centos:
+  <<: *native_test_job_definition
+  needs:
+    - job: build-xen-centos
+      artifacts: true
+  variables:
+    IMAGE: centos7
+    MAKE_CHECK_ARGS: check
+
+
 # We don't need to exercise every backend with every front-end
 build-trace-multi-user:
   <<: *native_build_job_definition
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Nov 08 23:23:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 08 Nov 2020 23:23:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21867.48212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbu1u-0007d6-8u; Sun, 08 Nov 2020 23:23:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21867.48212; Sun, 08 Nov 2020 23:23:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbu1u-0007cz-51; Sun, 08 Nov 2020 23:23:26 +0000
Received: by outflank-mailman (input) for mailman id 21867;
 Sun, 08 Nov 2020 23:23:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h6fj=EO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbu1m-0007c9-Sr
 for xen-devel@lists.xenproject.org; Sun, 08 Nov 2020 23:23:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 730df935-ffd0-4e21-ba89-cead76772c55;
 Sun, 08 Nov 2020 23:20:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbtzR-0004I3-39; Sun, 08 Nov 2020 23:20:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbtzQ-0006uO-Qq; Sun, 08 Nov 2020 23:20:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbtzQ-0006Ku-QK; Sun, 08 Nov 2020 23:20:52 +0000
X-Inumbo-ID: 730df935-ffd0-4e21-ba89-cead76772c55
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V93EP9edBxwv/Vk9qnYnlQ0sl0vwQyQuggFP38RzSdw=; b=EKM+y4AxbgVaKamBz4r+ZZhDIe
	JVuqA2uwhdlCsG97i9RPnkISBYWcD/RjgwV6Np2Tb7JYB5WPaAUxF+piSoE9CI+BYeoVSzaOL7l4k
	DqZII5jTPBc6fKZ0EU3SsqcT3SPT6OVY9wK86qa//EAwyrdiJQhPKHjF8ECdBMs4tD9U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156552-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156552: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3493c36f0371777c62d1d72b205b0eb6117e2156
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 08 Nov 2020 23:20:52 +0000

flight 156552 qemu-mainline real [real]
flight 156573 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156552/
http://logs.test-lab.xenproject.org/osstest/logs/156573/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3493c36f0371777c62d1d72b205b0eb6117e2156
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   80 days
Failing since        152659  2020-08-21 14:07:39 Z   79 days  174 attempts
Testing same since   156536  2020-11-07 03:17:03 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61894 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 02:10:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 02:10:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21900.48242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbwdG-00029M-WB; Mon, 09 Nov 2020 02:10:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21900.48242; Mon, 09 Nov 2020 02:10:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbwdG-00029D-Oe; Mon, 09 Nov 2020 02:10:10 +0000
Received: by outflank-mailman (input) for mailman id 21900;
 Mon, 09 Nov 2020 02:10:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbwdF-0001y0-N6
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 02:10:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 235c7b97-a091-4e46-8252-78470a00654b;
 Mon, 09 Nov 2020 02:10:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbwd6-0004Vu-5k; Mon, 09 Nov 2020 02:10:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbwd5-0008Tn-TD; Mon, 09 Nov 2020 02:09:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbwd5-00064b-Mp; Mon, 09 Nov 2020 02:09:59 +0000
X-Inumbo-ID: 235c7b97-a091-4e46-8252-78470a00654b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wDpJVZyC6IVtPbaKkgxDrmPmSN2xmp17rObgmJFZBfc=; b=wbzPDpOirTg5jCaMSwDfAUJ9BT
	6sU+iqWzS+08qzkvBxOCIUL23+XV8cT+kOOQ3+yKrbEx7vjY6aZXdYNKqOYRRTHYNVDoPfeczT5H0
	D5opWe9TtWpk5BNvIi8ZyJmfDExC4prx61gEd5I73zo6bHSFtRNnuyNPCHSGHnkOQUlM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156556-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156556: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 02:09:59 +0000

flight 156556 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156556/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    3 days
Failing since        156524  2020-11-06 14:22:28 Z    2 days    3 attempts
Testing same since   156538  2020-11-07 06:32:07 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 29 15:03:32 2020 -0400

    libxl: Add suppress-vmdesc to QEMU machine
    
    The device model state saved by QMP xen-save-devices-state doesn't
    include the vmdesc json.  When restoring an HVM, xen-load-devices-state
    always triggers "Expected vmdescription section, but got 0".  This is
    not a problem when restore comes from a file.  However, when QEMU runs
    in a Linux stubdom and the state comes over a console, EOF is not
    received.  This causes a delay in restoring - though it does restore.
    
    Setting suppress-vmdesc skips looking for the vmdesc during restore and
    avoids the wait.
    
    QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
    sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
    used.
    
    QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
    submission" added suppress-vmdesc in QEMU 2.3.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>

commit cd800ce442eeba5bc0857ade70a075367c01c350
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Nov 6 16:12:56 2020 +0000

    libxl: set vuart_gfn in libxl__build_hvm
    
    Setting vuart_gfn was missed when switching ARM guests to the PVH build.
    Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
    dom->vuart_gfn.
    
    Without this change, xl console cannot connect to the vuart console (-t
    vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 4196b1523aebe0ed929accba318d5e833d7ff6b3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 15:05:04 2020 +0100

    tools/libs/light: correct bitmap operations
    
    Libxl bitmap operations for single bits (test, set, reset) take the bit
    number as a signed integer without checking that the value is not
    negative.
    
    Correct that by adding the appropriate tests.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8aac8e0ef43a452d0b565d63e4943c275badba3f
Author: Olaf Hering <olaf@aepfle.de>
Date:   Fri Nov 6 14:05:17 2020 +0100

    docs/xl: fix cpupool-cpu-remove
    
    The cpu-pool must be specified.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Wei Liu <wl@xen.org>

commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
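The idea behind check_lock() can be sketched as follows; the `lock_debug` structure and the `fake_irqs_disabled` stand-in are illustrative assumptions, not Xen's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    /* -1 = not yet observed, 0 = taken with irqs on, 1 = irqs off */
    int irq_safe;
} lock_debug;

static bool fake_irqs_disabled;   /* stand-in for the real predicate */

static void check_lock(lock_debug *dbg)
{
    int now = fake_irqs_disabled ? 1 : 0;

    if (dbg->irq_safe == -1)
        dbg->irq_safe = now;          /* record first usage */
    else
        assert(dbg->irq_safe == now); /* flag inconsistent usage */
}
```

A lock taken sometimes with interrupts enabled and sometimes with them disabled can deadlock against its own interrupt handler; recording the first usage and asserting on every later one catches such mixed usage early, and the patch extends this check from spinlocks to rwlocks.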

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in their try variants
    regarding preemption: rwlocks switch preemption off before testing the
    lock, while spinlocks do so only after the first check.
    
    Modify _spin_trylock() to disable preemption before testing whether
    the lock is held, in order to be preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
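The ordering change can be sketched as follows; `preempt_disable()`/`preempt_enable()` and the lock layout are simplified stand-ins, not the actual Xen implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>

static int preempt_count;
static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

typedef struct { atomic_flag held; } demo_spinlock;

static bool demo_spin_trylock(demo_spinlock *l)
{
    /* Disable preemption *before* touching the lock, matching the
     * rwlock try variants, so the code is preemption-ready. */
    preempt_disable();
    if (atomic_flag_test_and_set_explicit(&l->held,
                                          memory_order_acquire)) {
        preempt_enable();   /* lock already held: back out */
        return false;
    }
    return true;            /* acquired; preemption stays off */
}
```

Testing the lock with preemption still enabled would leave a window where the task could be preempted between the check and the disable, which is exactly what the harmonized ordering rules out.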

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen if one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
    such that the file(s) themselves are generated before compilation gets
    attempted. The same, however, is also necessary for generated headers,
    before source files including them can be compiled.
    
    The dependency specified in libacpi's Makefile, on the other hand, is
    entirely pointless nowadays - no compilation happens there anymore
    (except for tools involved in building the generated files). Together
    with it, the rule generating acpi.a can also go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 04:59:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 04:59:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21941.48262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbzGV-000819-Il; Mon, 09 Nov 2020 04:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21941.48262; Mon, 09 Nov 2020 04:58:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbzGV-000812-Ff; Mon, 09 Nov 2020 04:58:51 +0000
Received: by outflank-mailman (input) for mailman id 21941;
 Mon, 09 Nov 2020 04:58:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kbzGT-00080O-Qd
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 04:58:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a66723c0-5b30-46aa-adc9-94571bfe0f30;
 Mon, 09 Nov 2020 04:58:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbzGM-0000AI-EF; Mon, 09 Nov 2020 04:58:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kbzGM-0007El-1b; Mon, 09 Nov 2020 04:58:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kbzGM-0001Pk-14; Mon, 09 Nov 2020 04:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kbzGT-00080O-Qd
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 04:58:49 +0000
X-Inumbo-ID: a66723c0-5b30-46aa-adc9-94571bfe0f30
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a66723c0-5b30-46aa-adc9-94571bfe0f30;
	Mon, 09 Nov 2020 04:58:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VpRnPggMnN8XRGf4/BamEkhAO6yoX1SaLDVrtQkJ4X8=; b=XGQVKdCirn1oRhwi5V8br3Cyt6
	tN7wOwQw8q4BD4zWc+dgXhdjgEf8JB6n5bP3PQ7DtXnBJ8LuDS8Z71Z0rB6LkXQ2zmCpusxZufKZ7
	cBhnkQHpnrQ3AlsMRDl1/lcI7i33KfcdrNoRwDG5ZHHwf7Pmw/wg9HPPa3KcpKZE6MXU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbzGM-0000AI-EF; Mon, 09 Nov 2020 04:58:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbzGM-0007El-1b; Mon, 09 Nov 2020 04:58:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kbzGM-0001Pk-14; Mon, 09 Nov 2020 04:58:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156560-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 156560: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7ea428895af2840d85c524f0bd11a38aac308308
X-Osstest-Versions-That:
    qemuu=677cbe1324c29294bb1d1b8454b3f214725e40fd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 04:58:42 +0000

flight 156560 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156560/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156301
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156301
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156301
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156301
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156301
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156301
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156301
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7ea428895af2840d85c524f0bd11a38aac308308
baseline version:
 qemuu                677cbe1324c29294bb1d1b8454b3f214725e40fd

Last test of basis   156301  2020-10-29 17:39:09 Z   10 days
Testing same since   156526  2020-11-06 16:07:44 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   677cbe1324..7ea428895a  7ea428895af2840d85c524f0bd11a38aac308308 -> master


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 05:33:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 05:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21950.48279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbznu-0003HJ-7x; Mon, 09 Nov 2020 05:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21950.48279; Mon, 09 Nov 2020 05:33:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbznu-0003HC-49; Mon, 09 Nov 2020 05:33:22 +0000
Received: by outflank-mailman (input) for mailman id 21950;
 Mon, 09 Nov 2020 05:33:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kbzns-0003H7-Rm
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 05:33:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd2cfa72-6093-40b5-b8ed-f0a955cb0d64;
 Mon, 09 Nov 2020 05:33:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14BE8ABD1;
 Mon,  9 Nov 2020 05:33:19 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kbzns-0003H7-Rm
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 05:33:20 +0000
X-Inumbo-ID: cd2cfa72-6093-40b5-b8ed-f0a955cb0d64
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cd2cfa72-6093-40b5-b8ed-f0a955cb0d64;
	Mon, 09 Nov 2020 05:33:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604899999;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NOCklL+dKjEBoXv5GNo9qN09vViqY6+JBVmLpt5WKQg=;
	b=V1LysvaHbM6GhlwsChd6yzwtwq5UI+QoF0lFHNURAqWLw62WXoFg9lOn5vAV6s4etOBxDN
	InKdArnbOOGFfo1qoi/dePLp+cj+yXux5N7Urx+uJmUO306HBQymgA5C6vCAfQj7CxA3Qw
	8eWm9iBnuSW4UakiT1aslQTsUVV8iOE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 14BE8ABD1;
	Mon,  9 Nov 2020 05:33:19 +0000 (UTC)
Subject: Re: [xen-unstable test] 156556: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-156556-mainreport@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3660b9c9-8246-ba12-db7a-00b07a85d6d6@suse.com>
Date: Mon, 9 Nov 2020 06:33:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <osstest-156556-mainreport@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="WBlBx1QOj8ghiAb6qb2SbL7Ho1fKaSjap"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--WBlBx1QOj8ghiAb6qb2SbL7Ho1fKaSjap
Content-Type: multipart/mixed; boundary="2rL9Pd7kUwZUmJXWSJPuoKH8SHeO7RyR2";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
Message-ID: <3660b9c9-8246-ba12-db7a-00b07a85d6d6@suse.com>
Subject: Re: [xen-unstable test] 156556: regressions - FAIL
References: <osstest-156556-mainreport@xen.org>
In-Reply-To: <osstest-156556-mainreport@xen.org>

--2rL9Pd7kUwZUmJXWSJPuoKH8SHeO7RyR2
Content-Type: multipart/mixed;
 boundary="------------AD26196B0B9F9DAF7BA22B74"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AD26196B0B9F9DAF7BA22B74
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.11.20 03:09, osstest service owner wrote:
> flight 156556 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156556/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
>   test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
>   test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
>   test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
>   test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
>   test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

The breakage is due to policy_rwlock being taken with interrupts off in
evtchn_send(), hinting at the further need to NOT take the per event
channel lock with the irqsave variant.


Juergen


--------------AD26196B0B9F9DAF7BA22B74--

--2rL9Pd7kUwZUmJXWSJPuoKH8SHeO7RyR2--

--WBlBx1QOj8ghiAb6qb2SbL7Ho1fKaSjap
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+o1J4FAwAAAAAACgkQsN6d1ii/Ey/i
iAf/TrcDkALZf0KQRTSYwVDjNh9nt2GOTTBSs99tBOVNpxumdlWdaadhzu9ZbGFlUZ4YBNlyGKUZ
qCCxKbqhRfKKxkkK1hfl/1em1odnzhDqW65Ta47cftSQdc8lf5oZA2/xEyI2nRZfOhmvlUatCupv
yhk/gkPLJ/fXqHzktpL1pmQzahW1or0XQu0wmpHjR32pWifuzVKRXsI8KpiQeb3jbgpSBjQvHrfo
OeoHiYfo5b1oUQKw4nwM5X3l4IU+lNhwD/9GHNUhcu1CrRw+ShgGFHuUY5shyFTzX1LOAXMxav6M
wqSTgYsACOSzQ9g1fu4b6p1jOH/3+iJu93EDoNO3zA==
=qmbm
-----END PGP SIGNATURE-----

--WBlBx1QOj8ghiAb6qb2SbL7Ho1fKaSjap--


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 05:34:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 05:34:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21956.48293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbzog-0003Np-HX; Mon, 09 Nov 2020 05:34:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21956.48293; Mon, 09 Nov 2020 05:34:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kbzog-0003Ni-Eg; Mon, 09 Nov 2020 05:34:10 +0000
Received: by outflank-mailman (input) for mailman id 21956;
 Mon, 09 Nov 2020 05:34:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kbzof-0003Nd-Sx
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 05:34:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 971652e7-1c2f-4917-bc75-0a2b28e0915f;
 Mon, 09 Nov 2020 05:34:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7131EABAE;
 Mon,  9 Nov 2020 05:34:08 +0000 (UTC)
X-Inumbo-ID: 971652e7-1c2f-4917-bc75-0a2b28e0915f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604900048;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=noiwDpF5h6c55/R3o+eIuWv6Yp6OH0DaU2dC3h8Ij3M=;
	b=Ks60smEgzrqCfdsMA3Cu4ez8mQ/b+lRNUSiTGZd2dzvczdBKhKRw/CT8K9G7IxG1QArf2S
	tejkI/taZVBfcnX+x//mH5eudTrqyMkDyRhtfyu1O+BfC9KQYOu+rEWkOIM6+3MwuwY+/K
	91DIj6/Oq6t0ykacjqnRMEOLTCZGe7M=
Subject: Re: [PATCH v2] x86/xen: don't unbind uninitialized lock_kicker_irq
To: Brian Masney <bmasney@redhat.com>, boris.ostrovsky@oracle.com,
 sstabellini@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
 hpa@zytor.com, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 dustymabe@redhat.com
References: <20201107011119.631442-1-bmasney@redhat.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <5950df5c-79d6-b2bc-4f2b-35624a3c0d1e@suse.com>
Date: Mon, 9 Nov 2020 06:34:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201107011119.631442-1-bmasney@redhat.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="YKXpHhhBagLrE4UFk7e72LVQHqGGgIOur"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--YKXpHhhBagLrE4UFk7e72LVQHqGGgIOur
Content-Type: multipart/mixed; boundary="5v4Fbrt87IaxiF1E3Az2Gieiq5dDG3qgf";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Brian Masney <bmasney@redhat.com>, boris.ostrovsky@oracle.com,
 sstabellini@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
 hpa@zytor.com, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 dustymabe@redhat.com
Message-ID: <5950df5c-79d6-b2bc-4f2b-35624a3c0d1e@suse.com>
Subject: Re: [PATCH v2] x86/xen: don't unbind uninitialized lock_kicker_irq
References: <20201107011119.631442-1-bmasney@redhat.com>
In-Reply-To: <20201107011119.631442-1-bmasney@redhat.com>

--5v4Fbrt87IaxiF1E3Az2Gieiq5dDG3qgf
Content-Type: multipart/mixed;
 boundary="------------6D3294FCE9D5D6953DCD3E3D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6D3294FCE9D5D6953DCD3E3D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 07.11.20 02:11, Brian Masney wrote:
> When booting a hyperthreaded system with the kernel parameter
> 'mitigations=auto,nosmt', the following warning occurs:
>
>      WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
>      ...
>      Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
>      ...
>      Call Trace:
>       xen_uninit_lock_cpu+0x28/0x62
>       xen_hvm_cpu_die+0x21/0x30
>       takedown_cpu+0x9c/0xe0
>       ? trace_suspend_resume+0x60/0x60
>       cpuhp_invoke_callback+0x9a/0x530
>       _cpu_up+0x11a/0x130
>       cpu_up+0x7e/0xc0
>       bringup_nonboot_cpus+0x48/0x50
>       smp_init+0x26/0x79
>       kernel_init_freeable+0xea/0x229
>       ? rest_init+0xaa/0xaa
>       kernel_init+0xa/0x106
>       ret_from_fork+0x35/0x40
>
> The secondary CPUs are not activated with the nosmt mitigations and only
> the primary thread on each CPU core is used. In this situation,
> xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), is
> not called, so the lock_kicker_irq is not initialized for the secondary
> CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
> irq is not set to avoid the warning from above for each secondary CPU.
>
> Signed-off-by: Brian Masney <bmasney@redhat.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------6D3294FCE9D5D6953DCD3E3D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------6D3294FCE9D5D6953DCD3E3D--

--5v4Fbrt87IaxiF1E3Az2Gieiq5dDG3qgf--

--YKXpHhhBagLrE4UFk7e72LVQHqGGgIOur
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+o1M8FAwAAAAAACgkQsN6d1ii/Ey/m
CAf+Lr22R5JRysNR9BuGXcwWkkWxoz2HyxznekTnFTH6WHZs1xTmnrStNcLH2PKYPQQzAs3QPXc9
fTsZc9S3Ev+I594scUDzKCe/teXIldn1qcXoq462o1RrXxBbtAN6Q2J0NszICInMRMfr8z/srOwc
oFtXTq9s4Ib/IzLI3nZQbnDhHSlB0Z8DyVrO0aB/8RJhJu938Fc10p1MWsRlCogWib133r+8piq0
CeF3MwncLlsfGgFfnbSWT+FNB7iBTcovcf657ks4zQHFk3nMd75/EHC2CYtm1RKzoI9izg/H4/lv
w/yfICC0TMUy7mO+bRopDMK13i8w2jSVngg9uPXpIw==
=YR3F
-----END PGP SIGNATURE-----

--YKXpHhhBagLrE4UFk7e72LVQHqGGgIOur--


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 06:30:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 06:30:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21976.48309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0gn-0008Qb-2H; Mon, 09 Nov 2020 06:30:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21976.48309; Mon, 09 Nov 2020 06:30:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0gm-0008QU-UB; Mon, 09 Nov 2020 06:30:04 +0000
Received: by outflank-mailman (input) for mailman id 21976;
 Mon, 09 Nov 2020 06:30:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc0gl-00086Y-2W
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 06:30:03 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef9ecb37-433d-409f-bd3e-b1ec8123681e;
 Mon, 09 Nov 2020 06:30:01 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id s9so7155568ljo.11
 for <xen-devel@lists.xenproject.org>; Sun, 08 Nov 2020 22:30:01 -0800 (PST)
Received: from [192.168.10.4] ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id h4sm1309826lfk.224.2020.11.08.22.29.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 08 Nov 2020 22:29:59 -0800 (PST)
X-Inumbo-ID: ef9ecb37-433d-409f-bd3e-b1ec8123681e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=VN7Jb3b91smoFXeJ7S6Fdaa2uIjAS9heaklmLDe7SnU=;
        b=S6hl7gvs67pY0aLiXZSMpQXLYhcfWb2l3S/9khGCTPqDejLWYLPfEKKfW3zpJ2+73u
         HC3ItfgIOLETc1qu+C3Se3pOOi0mkY0e8HvwvyKp4QK0gFR1Q/NoNjIbrcEok2AqBQ05
         EQgRj/yHbvMSJnVY9Cb2NAN1zQByh9EKlQoYyWQS2CRIeMuvckpzutC2oTF8UfvGW8IY
         ce4x708e28tANaIgHHY2jS3A5hcp9r0QbkKyHfFqzE4xwy4F0nhaYYLyiSEw/mRQjggh
         ozHH4chhF79TByQwQd3FI3dMZydjgg5aYBPHfC5zSQlKp/FjtV6vmsEfn+ledVCH3C6g
         6pJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=VN7Jb3b91smoFXeJ7S6Fdaa2uIjAS9heaklmLDe7SnU=;
        b=MWq4B2W2zR8Lho/KOTqljg4yW0+r6VLqOtETcTiv5SaLZNOQ3xLrPhj7oWYOuZft0L
         TqGbqxwzKJe2jHADM4e1+5a0P9xKqLs4OI/db0RHIVBh6OXghHtfTAiZ0blanSGX7w5q
         t5WOPviPD0k2/e9EUyH2WkLe0fCxOyGqth2TM7e6Yi8KKeV8GpZb3CmEeQpn2ExQZpe6
         t2wGn+wmPeeLicfjR1t/7Uk23T8Tr5MgBFozuhq3eJ/Iy4Zyi/uQ70v0gUHHC+SBC0ry
         updTAgosiZuS8+sD35IYPetA6BMgZFBGLvtmhoCXy3TRZpO6SyuZnNEK0FWw39DYU+ck
         gg9Q==
X-Gm-Message-State: AOAM532OAjaMYhBS3NlA9788cDijcIxNAT0qo/bQ2xa9tecsI10TtWKB
	xZzS+YvIQsetM5ifSYQF850=
X-Google-Smtp-Source: ABdhPJxiu8L9v8GxOyUGr5jM6k+sl/g/RdxlmJX7Aed0o2g5K1Ag8I30b3Fs7EQR2+jkaN0nSAYjgg==
X-Received: by 2002:a05:651c:483:: with SMTP id s3mr5070453ljc.298.1604903400417;
        Sun, 08 Nov 2020 22:30:00 -0800 (PST)
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall
 <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
 <d83f6859-6737-0da8-7c1d-a236e8313869@gmail.com>
 <B8E54A16-8FD4-48E4-82D5-2205EEEB5D2C@arm.com>
 <1001ace5-c6a2-4a81-ba3d-edabeeea9336@epam.com>
 <5F09F481-DC27-4FC3-8CE5-F4F97FDF6DF9@arm.com>
 <2f62f34b-f47d-3472-511f-a89ec1cd36c3@epam.com>
 <20FF6A26-41CF-4888-901A-0FF0ABCC6E64@arm.com>
 <d2eb2db3-7038-3850-310b-4676102e0a55@epam.com>
 <1390C05F-445F-4349-A672-4D7373C301B8@arm.com>
From: Oleksandr Andrushchenko <andr2000@gmail.com>
Message-ID: <ac5414f5-755a-b22c-c413-4b5298f057a7@gmail.com>
Date: Mon, 9 Nov 2020 08:29:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1390C05F-445F-4349-A672-4D7373C301B8@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US

Hello, Rahul!

On 11/6/20 4:41 PM, Rahul Singh wrote:
> Hello Oleksandr,
>
>> On 6 Nov 2020, at 2:22 pm, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>
>> Hi, Rahul!
>>
>> On 11/6/20 3:58 PM, Rahul Singh wrote:
>>> Hello Oleksandr,
>>>
>>>> On 6 Nov 2020, at 1:00 pm, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>>>
>>>> Hello, Rahul!
>>>>
>>>> On 11/6/20 2:48 PM, Rahul Singh wrote:
>>>>> Hello Oleksandr,
>>>>>
>>>>>> On 2 Nov 2020, at 10:12 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 11/2/20 11:55 AM, Bertrand Marquis wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>>> On 2 Nov 2020, at 05:55, Oleksandr Andrushchenko <andr2000@gmail.com> wrote:
>>>>>>>>
>>>>>>>> Hi, Julien!
>>>>>>>>
>>>>>>>> On 10/30/20 7:18 PM, Julien Grall wrote:
>>>>>>>>> Hi Oleksandr,
>>>>>>>>>
>>>>>>>>> On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
>>>>>>>>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>>>>
>>>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>>>       that supports both Stage-1 and Stage-2 translations.
>>>>>>>>>> First of all thank you for the efforts!
>>>>>>>>>>
>>>>>>>>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>>>>>>>>>
>>>>>>>>>> that this combination will not work as of now:
>>>>>>>>>>
>>>>>>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>>>>>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>>>>>>>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>>>>>>>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>>>>>>>>>
>>>>>>>>>> [snip]
>>>>>>>>>>
>>>>>>>>>> If this is expected then is there any plan to make QEMU work as well?
>>>>>>>>>>
>>>>>>>>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.
>>>>>>>>> Just for clarification, you are trying to boot Xen on QEMU, right?
>>>>>>>> Exactly
>>>>>>>>> You might be able to use the stage-1 page-tables to isolate each device in Xen. However, I don't think you will be able to share the P2M because the page-tables layout between stage-1 and stage-2 is different.
>>>>>>>> So, it is even more work, then
>>>>>>> Overall it would make more sense to spend some time adding proper support in Qemu than trying to modify the driver to support Qemu right now.
>>>>>>>
>>>>>>>>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>>>>>>>>>
>>>>>>>>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>>>>>>>> I would recommend to get the SMMU supporting stage-2 page-tables.
>>>>>>>> You mean in QEMU?
>>>>>>> See before.
>>>>>>>
>>>>>>>>> Regardless of that, I think Xen should be able to say the SMMU is not supported rather than crashing.
>>>>>>>> Yes, that would be nice
>>>>>>> Fully agree and we will look into that.
>>>>>>>
>>>>>>> Anything you could share so that we could quickly reproduce your setup would be more than great.
>>>>>> Nothing special,
>>>>>>
>>>>>> qemu/aarch64-softmmu/qemu-system-aarch64 -machine type=virt -machine virt,gic-version=2 \
>>>>>>
>>>>>> -machine virtualization=true -cpu cortex-a57 -smp 4 -m 2048 -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
>>>>>>
>>>>>> -nographic -serial mon:stdio [..snip..]
>>>>>>
>>>>>> I also set iommu to smmuv3 in my tests, QEMU emulator version 4.2.1
>>>>> I just checked and confirmed that QEMU is booting with the XEN SMMUv3 patch and XEN is able to say SMMU translation is not supported, as XEN supports Stage-2 translation and QEMU supports Stage-1 only.
>>>>>
>>>>>
>>>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>>>>> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
>>>>> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
>>>>> (XEN) I/O virtualisation disabled
>>>>>
>>>>> The only difference I observed is that you have to add the option "-machine virt,iommu=smmuv3" when launching QEMU.
>>>> I do use the option
>>> I used the "-machine virt,iommu=smmuv3" option both while creating the virt dtb and while launching QEMU.
>>> I also observed the same error you observed when not using the "-machine virt,iommu=smmuv3" option when launching QEMU, so I thought this might be the case for you as well; but since you did use the option, it might be another issue.
>> Hm, probably that was on my side as now I can see:
>>
>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>> (XEN) SMMUv3: /smmuv3@9050000: IDR0.COHACC overridden by FW configuration (false)
>> (XEN) SMMUv3: /smmuv3@9050000: no translation support!
>> (XEN) I/O virtualisation disabled
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) Couldn't configure correctly all the IOMMUs.
>> (XEN) ****************************************
>> (XEN)
>> (XEN) Manual reset required ('noreboot' specified)
>>
>> So, sorry for the noise, I might have misconfigured something it seems
>>
>> When you say "Xen is booting" do you mean you see the same panic?
> Yes I observe the same.
>
> We have to decide now, if for SMMUv3 there is no translation support, whether we should print the logs and move forward, or, as above, return an error to iommu_setup() that will call panic().
I would allow Xen to boot further
>
> Regards,
> Rahul
>
>> Thank you,
>>
>> Oleksandr
>>
>>>>> Please let me know if it also works for you.
>>>> Well, I should have reported earlier that I do not use the staging Xen at the moment,
>>>>
>>>> it is 4.14.0. So, can this be a problem with that Xen version?
>>> I don't think this is a problem with the XEN version.
>>>> Anyway, if it works with staging, then everything looks ok
>>>>
>>>> Thank you,
>>>>
>>>> Oleksandr
>>>>
>>>>>>> Regards
>>>>>>> Bertrand
>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>
>>>>>>>> Thank you,
>>>>>>>>
>>>>>>>> Oleksandr
>>> Regards,
>>> Rahul


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 06:41:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 06:41:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21983.48321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0ru-00015n-5T; Mon, 09 Nov 2020 06:41:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21983.48321; Mon, 09 Nov 2020 06:41:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0ru-00015g-1u; Mon, 09 Nov 2020 06:41:34 +0000
Received: by outflank-mailman (input) for mailman id 21983;
 Mon, 09 Nov 2020 06:41:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc0rs-00015b-LN
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 06:41:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1ce20d0-6b0a-40bf-8a5d-69d88ac86f38;
 Mon, 09 Nov 2020 06:41:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 792DDABAE;
 Mon,  9 Nov 2020 06:41:30 +0000 (UTC)
X-Inumbo-ID: a1ce20d0-6b0a-40bf-8a5d-69d88ac86f38
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604904090;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=LUdlh+zdBKQghkZPV4VxCqV7/PPrlk7RpBaFpPPoy5c=;
	b=LNFAVwcgUR7cRtdS+etbSv+69rjkjfBx/CcigeIujWHbo+cB82ap0K2mj1CabaII7HJfyh
	96wFNbWf5cAC6S59inBfETdfSPJ8pei1+/OD3Z2M5QfzsFCQTLcBoH3mVgT1hW+N5zfuVZ
	Z4OFZSvXSOhqFO79hS5RBSt3TsgaOkc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 0/2] XSA-343 followup patches
Date: Mon,  9 Nov 2020 07:41:26 +0100
Message-Id: <20201109064128.3908-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches for XSA-343 produced some fallout; especially the event
channel locking has proven to be problematic.

Patch 1 targets fifo event channels, avoiding any races for the
case that the fifo queue has been changed for a specific event channel.

The second patch modifies the per event channel locking scheme in
order to avoid deadlocks and problems due to the event channel lock
having been changed to require IRQs off by the XSA-343 patches.

Changes in V5:
- moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
- used normal read_lock() in some cases (Jan Beulich)

Changes in V4:
- switched to real rwlock

Changes in V3:
- addressed comments

Juergen Gross (2):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock

 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 144 +++++++++++++++++++++----------------
 xen/common/event_fifo.c    |  25 +++++--
 xen/include/xen/event.h    |  32 ++++++---
 xen/include/xen/sched.h    |   8 ++-
 6 files changed, 136 insertions(+), 88 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 06:41:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 06:41:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21985.48345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0ry-00019H-Rj; Mon, 09 Nov 2020 06:41:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21985.48345; Mon, 09 Nov 2020 06:41:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0ry-000194-OA; Mon, 09 Nov 2020 06:41:38 +0000
Received: by outflank-mailman (input) for mailman id 21985;
 Mon, 09 Nov 2020 06:41:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc0rx-00015b-HR
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 06:41:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22e94c58-c6a8-4bc2-8798-f3ed3cb96c7f;
 Mon, 09 Nov 2020 06:41:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEF18ABCC;
 Mon,  9 Nov 2020 06:41:30 +0000 (UTC)
X-Inumbo-ID: 22e94c58-c6a8-4bc2-8798-f3ed3cb96c7f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604904090;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DpSBXBI2OHigxodLIbPPZp3jSMwO4fg1IhybSfwcpsI=;
	b=vHcHmS9uJ5nMjtGus5KwQC5iOMcryUTUzOTi7GicoBnf5dhxpv8DMsx7jhwvCZfXmQj77i
	h2Vfpcvhmye3DCCyUi3ffHjp0+fEPzhcAwKwcBGuhTGN8yC54o3+L9A0SMR7u1xhVL2kS7
	LaxgeFlgZSfsGt1TR3pk2av2fSUQdDk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon,  9 Nov 2020 07:41:27 +0100
Message-Id: <20201109064128.3908-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109064128.3908-1-jgross@suse.com>
References: <20201109064128.3908-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When sending an event it can happen that the event needs to
change queues, and the old queue needs to be identifiable in order to
keep the links between queue elements intact. For this purpose the
event channel contains last_priority and last_vcpu_id values
identifying the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistency
between the two values.
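
The packing scheme can be illustrated in plain C11 (a hypothetical
user-space sketch: the union layout mirrors the evtchn_fifo_lastq union
introduced by the patch, while Xen's read_atomic()/write_atomic() are
stood in for by C11 atomics):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Mirrors the evtchn_fifo_lastq union from the patch: both values share
 * one 32-bit word, so a single atomic access always yields a consistent
 * (last_priority, last_vcpu_id) pair. */
union lastq {
    uint32_t raw;
    struct {
        uint8_t  last_priority;
        uint16_t last_vcpu_id;
    };
};

static _Atomic uint32_t fifo_lastq;   /* stands in for evtchn->fifo_lastq */

/* Writer: build the pair locally, then publish it with one atomic store. */
static void lastq_store(uint8_t priority, uint16_t vcpu_id)
{
    union lastq q = { 0 };

    q.last_priority = priority;
    q.last_vcpu_id  = vcpu_id;
    atomic_store(&fifo_lastq, q.raw);
}

/* Reader: one atomic load; there is no window where only one of the two
 * fields has been updated. */
static union lastq lastq_load(void)
{
    union lastq q;

    q.raw = atomic_load(&fifo_lastq);
    return q;
}
```

With two separate u8/u16 fields a reader could observe a new priority
paired with the old vcpu_id (or vice versa); with the packed word that
window is gone.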

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c6e58d2a1a..79090c04ca 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq = { };
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 06:41:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 06:41:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21984.48333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0rx-00017I-Cz; Mon, 09 Nov 2020 06:41:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21984.48333; Mon, 09 Nov 2020 06:41:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0rx-00017B-9r; Mon, 09 Nov 2020 06:41:37 +0000
Received: by outflank-mailman (input) for mailman id 21984;
 Mon, 09 Nov 2020 06:41:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc0rw-00016k-HT
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 06:41:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 953adf3f-163a-41ed-ba48-99db9d9e161a;
 Mon, 09 Nov 2020 06:41:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED95AABD1;
 Mon,  9 Nov 2020 06:41:30 +0000 (UTC)
X-Inumbo-ID: 953adf3f-163a-41ed-ba48-99db9d9e161a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604904091;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JFoT60bwaPOryKk4GbCzd9C3J92QHjrVIOHIuInzsxo=;
	b=l1K1Qhw2FIcvDIcn0ea80pf0jFMggmKFCvZYhE+2mVUv8cSCZUQ9Nx+9mBhBMjWm99DZRi
	gejAg7PQRTrhmTfvPvtHhCVAgt0RD5KPCswb/freYIP+ohOBHuxygh1EbbD37WyliHAofA
	LuSlOFRBce/zFs3lO2qcaFP+qR6+Zb0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
Date: Mon,  9 Nov 2020 07:41:28 +0100
Message-Id: <20201109064128.3908-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109064128.3908-1-jgross@suse.com>
References: <20201109064128.3908-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking for the case of
sending an event, and remove the need to disable interrupts when
taking the lock.

The lock is needed to avoid races between event channel state changes
(creation, closing, binding) and normal operations (set pending,
[un]masking, priority changes).

Use a rwlock, but with some restrictions:

- normal operations use read_trylock(); if the lock cannot be
  obtained, the operation is omitted or a default state is returned

- closing an event channel uses write_lock(), with an ASSERT()
  verifying that the lock is taken as writer only when the state of
  the event channel is appropriate (either free or unbound), either
  before or after the locked region

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- switch to rwlock
- add ASSERT() to verify correct write_lock() usage

V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 144 +++++++++++++++++++++----------------
 xen/include/xen/event.h    |  32 ++++++---
 xen/include/xen/sched.h    |   5 +-
 5 files changed, 116 insertions(+), 80 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_read_trylock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cd4a2c0501..4a6b6fc5ad 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -50,6 +50,34 @@
 
 #define consumer_is_xen(e) (!!(e)->xen_consumer)
 
+/*
+ * Lock an event channel exclusively. This is allowed only when the channel is
+ * free or unbound either when taking or when releasing the lock, as any
+ * concurrent operation on the event channel using evtchn_read_trylock() will
+ * just assume the event channel is free or unbound at the moment when the
+ * evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}
+
+static inline void evtchn_read_lock(struct evtchn *evtchn)
+{
+    read_lock(&evtchn->lock);
+}
+
 /*
  * The function alloc_unbound_xen_event_channel() allows an arbitrary
  * notifier function to be specified. However, very few unique functions
@@ -133,7 +161,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        rwlock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -255,7 +283,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -271,14 +298,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -291,32 +318,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -326,7 +347,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -362,7 +382,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -379,7 +399,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -402,7 +422,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -440,14 +459,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -464,7 +483,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -476,13 +494,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -526,7 +544,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -559,14 +576,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -587,7 +604,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -688,14 +704,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -703,9 +719,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -725,7 +741,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -738,7 +753,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    evtchn_read_lock(lchn);
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -773,7 +788,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -806,9 +821,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +854,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -849,7 +868,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -864,9 +882,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1068,15 +1088,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1156,15 +1177,17 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     unsigned int port = set_priority->port;
     struct evtchn *chn;
     long ret;
-    unsigned long flags;
 
     if ( !port_is_valid(d, port) )
         return -EINVAL;
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
+
+    evtchn_read_lock(chn);
+
     ret = evtchn_port_set_priority(d, chn, set_priority->priority);
-    spin_unlock_irqrestore(&chn->lock, flags);
+
+    evtchn_read_unlock(chn);
 
     return ret;
 }
@@ -1332,7 +1355,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1345,14 +1367,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1388,7 +1410,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1403,7 +1424,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1413,7 +1435,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 2ed4be78f6..9c84265f53 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,16 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    return read_trylock(&evtchn->lock);
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    read_unlock(&evtchn->lock);
+}
+
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -234,12 +244,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -252,12 +263,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_read_trylock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..66d8f058be 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    rwlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
@@ -114,6 +114,9 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
+#ifndef NDEBUG
+    u8 old_state;      /* State when taking lock in write mode. */
+#endif
     u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 06:46:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 06:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.21999.48356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0wM-0001XK-Eq; Mon, 09 Nov 2020 06:46:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 21999.48356; Mon, 09 Nov 2020 06:46:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc0wM-0001XD-BX; Mon, 09 Nov 2020 06:46:10 +0000
Received: by outflank-mailman (input) for mailman id 21999;
 Mon, 09 Nov 2020 06:46:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s5/2=EP=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kc0wK-0001X7-5x
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 06:46:08 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.88]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 112fd248-a34d-4e6f-88ad-db071b1819d7;
 Mon, 09 Nov 2020 06:46:00 +0000 (UTC)
Received: from MR2P264CA0112.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:33::28)
 by DBBPR08MB4790.eurprd08.prod.outlook.com (2603:10a6:10:f4::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.24; Mon, 9 Nov
 2020 06:45:57 +0000
Received: from VE1EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:33:cafe::86) by MR2P264CA0112.outlook.office365.com
 (2603:10a6:500:33::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Mon, 9 Nov 2020 06:45:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT035.mail.protection.outlook.com (10.152.18.110) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Mon, 9 Nov 2020 06:45:56 +0000
Received: ("Tessian outbound 13ed5f5344c0:v71");
 Mon, 09 Nov 2020 06:45:56 +0000
Received: from 1c311e205a86.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1AB44B6E-1836-4EEB-82FF-4330D7B17D30.1; 
 Mon, 09 Nov 2020 06:45:51 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1c311e205a86.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Nov 2020 06:45:51 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM9PR08MB5940.eurprd08.prod.outlook.com (2603:10a6:20b:281::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Mon, 9 Nov
 2020 06:45:50 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3499.032; Mon, 9 Nov 2020
 06:45:49 +0000
X-Inumbo-ID: 112fd248-a34d-4e6f-88ad-db071b1819d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RkbL8J/KQBGHldUa56Zr1oEDGRUcrvVnBN4KkxRbCcE=;
 b=r9N58VaoHRdGIWy89uciSTfmJ0eceKi5bz8wCltuYbwWMGHOaK4K3hiEZQsaZsvJLztPOtZ0HnV0v89A5AEu8jLDybz39MOMW2orr63Joq6A4STkbgIu4gxnhiqz0nM0PQGwBtvir6ciy2LoCCZiAi0AA4dJWUMqRf8abLRdrvE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d8Syf2d7etnpAVZM0D1OxOWw2D9j3Gv4Q3zq5x4WUmc4h0kZzSjUhCvVj4+gf4wOvO5JjmuWAv4tV/fglzCOLbgQ4hvdU+O4dHfTEOA4DEr1fg3OdK8UuYC2EXZ2GfUNmhZSB1pLS0kyzi1P63AeZSj5fSotaafFYsOxxoBchOsAWsMsd8mbvsnvcOaey1SsnW+Irl9TppuK5XJDLLZZO4oANd1dCDyFOtosqLRcxO8LFTsXmtTL1CrLNIOmMjSKAgfJT23E0mpcNjcrc+IKEJA4MWBwoU7M0sZKicMIuNjuNnhMxZxQeRIfm0csZ5H7AIyneU4vpcKJgqKcruyzRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RkbL8J/KQBGHldUa56Zr1oEDGRUcrvVnBN4KkxRbCcE=;
 b=mI3RcTsEpIM+wcpAqR7i1pewbMQWMVgHRnjgbb+pBBXcADyQk4+JmkOHYPSRRaXTcIEIGINkTk+LuM61lfJ9aXFCfUc4AqIJhSSnAOrL/l5oJeYcr/lFXnUKDc5JifgCIke6S8glY/ARLmhXsuWJ5l4Vfbz6qsLljyHwTvkatUIYQn1OSvTazYa/scYL864/1oIRKAFKVjd0irM4DF7dtrKeYQBy4nh/4DaaNb/qd1z2w+wDJbAeUDrIkGXZta2MMGhh/TrM2lV4bf78d8QPwYhMoYlAcgkn/OM0+BQQxVV8zqNra37+FIX/alg1QH5+FfJqxFTRKXpZ0q4WGBmGow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: RE: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
 configuration
Thread-Topic: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
 configuration
Thread-Index: AQHWoxPDW60DxIWT4kuneyvO4zzD5qm/gHcQ
Date: Mon, 9 Nov 2020 06:45:49 +0000
Message-ID:
 <AM0PR08MB374735F747FFF1A3016F37329EEA0@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 8FA3AA85BED9124986B9E8C6D28AFD61.0
x-checkrecipientchecked: true
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.113]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ca1ef037-3a51-4f61-936e-08d8847b1cb3
x-ms-traffictypediagnostic: AM9PR08MB5940:|DBBPR08MB4790:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB47900E120D1F4A11DCBC28E79EEA0@DBBPR08MB4790.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4502;OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5940
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7cd83cbd-8f5a-4ebc-9021-08d8847b1848
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Nov 2020 06:45:56.8550
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ca1ef037-3a51-4f61-936e-08d8847b1cb3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4790

Hi Oleksandr,

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Oleksandr Tyshchenko
> Sent: October 16, 2020 0:45
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Ian Jackson
> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony PERARD
> <anthony.perard@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk configuration
>
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> This patch adds basic support for configuring and assisting virtio-disk
> backend (emulator) which is intended to run outside of QEMU and could be run
> in any domain.
>
> Xenstore was chosen as a communication interface for the emulator running
> in non-toolstack domain to be able to get configuration either by reading
> Xenstore directly or by receiving command line parameters (an updated 'xl devd'
> running in the same domain would read Xenstore beforehand and call backend
> executable with the required arguments).
>
> An example of domain configuration (two disks are assigned to the guest,
> the latter is in readonly mode):
>
> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
>

Can we keep using the same 'disk' parameter for virtio-disk, but add an option like
"model=virtio-disk"?
For example:
disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ]
Just like what Xen has done for x86 virtio-net.

> Where per-disk Xenstore entries are:
> - filename and readonly flag (configured via "vdisk" property)
> - base and irq (allocated dynamically)
>
> Besides handling 'visible' params described in configuration file,
> patch also allocates virtio-mmio specific ones for each device and
> writes them into Xenstore. virtio-mmio params (irq and base) are
> unique per guest domain, they are allocated at domain creation time
> and passed through to the emulator. Each VirtIO device has at least
> one pair of these params.
>
> TODO:
> 1. An extra "virtio" property could be removed.
> 2. Update documentation.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> ---
> Changes RFC -> V1:
>    - no changes
>
> Changes V1 -> V2:
>    - rebase according to the new location of libxl_virtio_disk.c
>
> Please note, there is a real concern about VirtIO interrupts allocation.
> Just copying here what Stefano said in the RFC thread.
>
> So, if we end up allocating let's say 6 virtio interrupts for a domain,
> the chance of a clash with a physical interrupt of a passthrough device is real.
>
> I am not entirely sure how to solve it, but these are a few ideas:
> - choosing virtio interrupts that are less likely to conflict (maybe > 1000)
> - make the virtio irq (optionally) configurable so that a user could
>   override the default irq and specify one that doesn't conflict
> - implementing support for virq != pirq (even the xl interface doesn't
>   allow to specify the virq number for passthrough devices, see "irqs")
>
> ---
>  tools/libs/light/Makefile                 |   1 +
>  tools/libs/light/libxl_arm.c              |  56 ++++++++++++---
>  tools/libs/light/libxl_create.c           |   1 +
>  tools/libs/light/libxl_internal.h         |   1 +
>  tools/libs/light/libxl_types.idl          |  15 ++++
>  tools/libs/light/libxl_types_internal.idl |   1 +
>  tools/libs/light/libxl_virtio_disk.c      | 109 ++++++++++++++++++++++++++++++
>  tools/xl/Makefile                         |   2 +-
>  tools/xl/xl.h                             |   3 +
>  tools/xl/xl_cmdtable.c                    |  15 ++++
>  tools/xl/xl_parse.c                       | 115 ++++++++++++++++++++++++++++++
>  tools/xl/xl_virtio_disk.c                 |  46 ++++++++++++
>  12 files changed, 354 insertions(+), 11 deletions(-)
>  create mode 100644 tools/libs/light/libxl_virtio_disk.c
>  create mode 100644 tools/xl/xl_virtio_disk.c
>
> diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
> index f58a321..2ee388a 100644
> --- a/tools/libs/light/Makefile
> +++ b/tools/libs/light/Makefile
> @@ -115,6 +115,7 @@ SRCS-y += libxl_genid.c
>  SRCS-y += _libxl_types.c
>  SRCS-y += libxl_flask.c
>  SRCS-y += _libxl_types_internal.c
> +SRCS-y += libxl_virtio_disk.c
>
>  ifeq ($(CONFIG_LIBNL),y)
>  CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index 588ee5a..9eb3022 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -8,6 +8,12 @@
>  #include <assert.h>
>  #include <xen/device_tree_defs.h>
>
> +#ifndef container_of
> +#define container_of(ptr, type, member) ({			\
> +        typeof( ((type *)0)->member ) *__mptr = (ptr);	\
> +        (type *)( (char *)__mptr - offsetof(type,member) );})
> +#endif
> +
>  static const char *gicv_to_string(libxl_gic_version gic_version)
>  {
>      switch (gic_version) {
> @@ -39,14 +45,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>          vuart_enabled = true;
>      }
>
> -    /*
> -     * XXX: Handle properly virtio
> -     * A proper solution would be the toolstack to allocate the interrupts
> -     * used by each virtio backend and let the backend now which one is used
> -     */
>      if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
> -        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
> +        uint64_t virtio_base;
> +        libxl_device_virtio_disk *virtio_disk;
> +
> +        virtio_base = GUEST_VIRTIO_MMIO_BASE;
>          virtio_irq = GUEST_VIRTIO_MMIO_SPI;
> +
> +        if (!d_config->num_virtio_disks) {
> +            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
> +            return ERROR_FAIL;
> +        }
> +        virtio_disk = &d_config->virtio_disks[0];
> +
> +        for (i = 0; i < virtio_disk->num_disks; i++) {
> +            virtio_disk->disks[i].base = virtio_base;
> +            virtio_disk->disks[i].irq = virtio_irq;
> +
> +            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
> +                virtio_irq, virtio_base);
> +
> +            virtio_irq ++;
> +            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
> +        }
> +        virtio_irq --;
> +
> +        nr_spis += (virtio_irq - 32) + 1;
>          virtio_enabled = true;
>      }
>
> @@ -70,8 +94,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>          }
>
>          /* The same check as for vpl011 */
> -        if (virtio_enabled && irq == virtio_irq) {
> -            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
> +        if (virtio_enabled &&
> +           (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) {
> +            LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq);
>              return ERROR_FAIL;
>          }
>
> @@ -1011,8 +1036,19 @@ next_resize:
>          if (info->tee == LIBXL_TEE_TYPE_OPTEE)
>              FDT( make_optee_node(gc, fdt) );
>
> -        if (libxl_defbool_val(info->arch_arm.virtio))
> -            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE,
> GUEST_VIRTIO_MMIO_SPI) );
> +        if (libxl_defbool_val(info->arch_arm.virtio)) {
> +            libxl_domain_config *d_config =
> +                container_of(info, libxl_domain_config, b_info);
> +            libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0];
> +            unsigned int i;
> +
> +            for (i = 0; i < virtio_disk->num_disks; i++) {
> +                uint64_t base = virtio_disk->disks[i].base;
> +                uint32_t irq = virtio_disk->disks[i].irq;
> +
> +                FDT( make_virtio_mmio_node(gc, fdt, base, irq) );
> +            }
> +        }
>
>          if (pfdt)
>              FDT( copy_partial_fdt(gc, fdt, pfdt) );
> diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
> index 321a13e..8da328d 100644
> --- a/tools/libs/light/libxl_create.c
> +++ b/tools/libs/light/libxl_create.c
> @@ -1821,6 +1821,7 @@ const libxl__device_type *device_type_tbl[] = {
>      &libxl__dtdev_devtype,
>      &libxl__vdispl_devtype,
>      &libxl__vsnd_devtype,
> +    &libxl__virtio_disk_devtype,
>      NULL
>  };
>
> diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
> index e26cda9..ea497bb 100644
> --- a/tools/libs/light/libxl_internal.h
> +++ b/tools/libs/light/libxl_internal.h
> @@ -4000,6 +4000,7 @@ extern const libxl__device_type libxl__vdispl_devtype;
>  extern const libxl__device_type libxl__p9_devtype;
>  extern const libxl__device_type libxl__pvcallsif_devtype;
>  extern const libxl__device_type libxl__vsnd_devtype;
> +extern const libxl__device_type libxl__virtio_disk_devtype;
>
>  extern const libxl__device_type *device_type_tbl[];
>
> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index b054bf9..5f8a3ff 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -935,6 +935,20 @@ libxl_device_vsnd = Struct("device_vsnd", [
>      ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
>      ])
>
> +libxl_virtio_disk_param = Struct("virtio_disk_param", [
> +    ("filename", string),
> +    ("readonly", bool),
> +    ("irq", uint32),
> +    ("base", uint64),
> +    ])
> +
> +libxl_device_virtio_disk = Struct("device_virtio_disk", [
> +    ("backend_domid", libxl_domid),
> +    ("backend_domname", string),
> +    ("devid", libxl_devid),
> +    ("disks", Array(libxl_virtio_disk_param, "num_disks")),
> +    ])
> +
>  libxl_domain_config = Struct("domain_config", [
>      ("c_info", libxl_domain_create_info),
>      ("b_info", libxl_domain_build_info),
> @@ -951,6 +965,7 @@ libxl_domain_config = Struct("domain_config", [
>      ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")),
>      ("vdispls", Array(libxl_device_vdispl, "num_vdispls")),
>      ("vsnds", Array(libxl_device_vsnd, "num_vsnds")),
> +    ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")),
>      # a channel manifests as a console with a name,
>      # see docs/misc/channels.txt
>      ("channels", Array(libxl_device_channel, "num_channels")),
> diff --git a/tools/libs/light/libxl_types_internal.idl
> b/tools/libs/light/libxl_types_internal.idl
> index 3593e21..8f71980 100644
> --- a/tools/libs/light/libxl_types_internal.idl
> +++ b/tools/libs/light/libxl_types_internal.idl
> @@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
>      (14, "PVCALLS"),
>      (15, "VSND"),
>      (16, "VINPUT"),
> +    (17, "VIRTIO_DISK"),
>      ])
>
>  libxl__console_backend = Enumeration("console_backend", [
> diff --git a/tools/libs/light/libxl_virtio_disk.c b/tools/libs/light/libxl_virtio_disk.c
> new file mode 100644
> index 0000000..25e7f1a
> --- /dev/null
> +++ b/tools/libs/light/libxl_virtio_disk.c
> @@ -0,0 +1,109 @@
> +/*
> + * Copyright (C) 2020 EPAM Systems Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU Lesser General Public License as published
> + * by the Free Software Foundation; version 2.1 only. with the special
> + * exception on linking described in file LICENSE.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU Lesser General Public License for more details.
> + */
> +
> +#include "libxl_internal.h"
> +
> +static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid,
> +                                                libxl_device_virtio_disk *virtio_disk,
> +                                                bool hotplug)
> +{
> +    return libxl__resolve_domid(gc, virtio_disk->backend_domname,
> +                                &virtio_disk->backend_domid);
> +}
> +
> +static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
> +                                            libxl_devid devid,
> +                                            libxl_device_virtio_disk *virtio_disk)
> +{
> +    const char *be_path;
> +    int rc;
> +
> +    virtio_disk->devid = devid;
> +    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
> +                                  GCSPRINTF("%s/backend", libxl_path),
> +                                  &be_path);
> +    if (rc) return rc;
> +
> +    rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk-
> >backend_domid);
> +    if (rc) return rc;
> +
> +    return 0;
> +}
> +
> +static void libxl__update_config_virtio_disk(libxl__gc *gc,
> +                                             libxl_device_virtio_disk *dst,
> +                                             libxl_device_virtio_disk *src)
> +{
> +    dst->devid = src->devid;
> +}
> +
> +static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1,
> +                                            libxl_device_virtio_disk *d2)
> +{
> +    return COMPARE_DEVID(d1, d2);
> +}
> +
> +static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid,
> +                                          libxl_device_virtio_disk *virtio_disk,
> +                                          libxl__ao_device *aodev)
> +{
> +    libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk,
> aodev);
> +}
> +
> +static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid,
> +                                           libxl_device_virtio_disk *virtio_disk,
> +                                           flexarray_t *back, flexarray_t *front,
> +                                           flexarray_t *ro_front)
> +{
> +    int rc;
> +    unsigned int i;
> +
> +    for (i = 0; i < virtio_disk->num_disks; i++) {
> +        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i),
> +                                   GCSPRINTF("%s", virtio_disk->disks[i].filename));
> +        if (rc) return rc;
> +
> +        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i),
> +                                   GCSPRINTF("%d", virtio_disk->disks[i].readonly));
> +        if (rc) return rc;
> +
> +        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i),
> +                                   GCSPRINTF("%lu", virtio_disk->disks[i].base));
> +        if (rc) return rc;
> +
> +        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i),
> +                                   GCSPRINTF("%u", virtio_disk->disks[i].irq));
> +        if (rc) return rc;
> +    }
> +
> +    return 0;
> +}
> +
> +static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk)
> +static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk)
> +static LIBXL_DEFINE_DEVICES_ADD(virtio_disk)
> +
> +DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK,
> +    .update_config = (device_update_config_fn_t)
> libxl__update_config_virtio_disk,
> +    .from_xenstore = (device_from_xenstore_fn_t)
> libxl__virtio_disk_from_xenstore,
> +    .set_xenstore_config = (device_set_xenstore_config_fn_t)
> libxl__set_xenstore_virtio_disk
> +);
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/tools/xl/Makefile b/tools/xl/Makefile
> index bdf67c8..9d8f2aa 100644
> --- a/tools/xl/Makefile
> +++ b/tools/xl/Makefile
> @@ -23,7 +23,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
>  XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
>  XL_OBJS += xl_info.
byB4bF9jb25zb2xlLm8geGxfbWlzYy5vDQo+ICBYTF9PQkpTICs9IHhsX3ZtY29udHJvbC5vIHhs
X3NhdmVyZXN0b3JlLm8geGxfbWlncmF0ZS5vDQo+IC1YTF9PQkpTICs9IHhsX3ZkaXNwbC5vIHhs
X3ZzbmQubyB4bF92a2Iubw0KPiArWExfT0JKUyArPSB4bF92ZGlzcGwubyB4bF92c25kLm8geGxf
dmtiLm8geGxfdmlydGlvX2Rpc2subw0KPiANCj4gICQoWExfT0JKUyk6IENGTEFHUyArPSAkKENG
TEFHU19saWJ4ZW50b29sbG9nKQ0KPiAgJChYTF9PQkpTKTogQ0ZMQUdTICs9ICQoQ0ZMQUdTX1hM
KQ0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMveGwveGwuaCBiL3Rvb2xzL3hsL3hsLmgNCj4gaW5kZXgg
MDY1NjljNi4uM2QyNmYxOSAxMDA2NDQNCj4gLS0tIGEvdG9vbHMveGwveGwuaA0KPiArKysgYi90
b29scy94bC94bC5oDQo+IEBAIC0xNzgsNiArMTc4LDkgQEAgaW50IG1haW5fdnNuZGRldGFjaChp
bnQgYXJnYywgY2hhciAqKmFyZ3YpOw0KPiAgaW50IG1haW5fdmtiYXR0YWNoKGludCBhcmdjLCBj
aGFyICoqYXJndik7DQo+ICBpbnQgbWFpbl92a2JsaXN0KGludCBhcmdjLCBjaGFyICoqYXJndik7
DQo+ICBpbnQgbWFpbl92a2JkZXRhY2goaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsNCj4gK2ludCBt
YWluX3ZpcnRpb19kaXNrYXR0YWNoKGludCBhcmdjLCBjaGFyICoqYXJndik7DQo+ICtpbnQgbWFp
bl92aXJ0aW9fZGlza2xpc3QoaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsNCj4gK2ludCBtYWluX3Zp
cnRpb19kaXNrZGV0YWNoKGludCBhcmdjLCBjaGFyICoqYXJndik7DQo+ICBpbnQgbWFpbl91c2Jj
dHJsX2F0dGFjaChpbnQgYXJnYywgY2hhciAqKmFyZ3YpOw0KPiAgaW50IG1haW5fdXNiY3RybF9k
ZXRhY2goaW50IGFyZ2MsIGNoYXIgKiphcmd2KTsNCj4gIGludCBtYWluX3VzYmRldl9hdHRhY2go
aW50IGFyZ2MsIGNoYXIgKiphcmd2KTsNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3hsL3hsX2NtZHRh
YmxlLmMgYi90b29scy94bC94bF9jbWR0YWJsZS5jDQo+IGluZGV4IDdkYTZjMWIuLjc0NWFmYWIg
MTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL3hsL3hsX2NtZHRhYmxlLmMNCj4gKysrIGIvdG9vbHMveGwv
eGxfY21kdGFibGUuYw0KPiBAQCAtNDM1LDYgKzQzNSwyMSBAQCBzdHJ1Y3QgY21kX3NwZWMgY21k
X3RhYmxlW10gPSB7DQo+ICAgICAgICAiRGVzdHJveSBhIGRvbWFpbidzIHZpcnR1YWwgc291bmQg
ZGV2aWNlIiwNCj4gICAgICAgICI8RG9tYWluPiA8RGV2SWQ+IiwNCj4gICAgICB9LA0KPiArICAg
IHsgInZpcnRpby1kaXNrLWF0dGFjaCIsDQo+ICsgICAgICAmbWFpbl92aXJ0aW9fZGlza2F0dGFj
aCwgMSwgMSwNCj4gKyAgICAgICJDcmVhdGUgYSBuZXcgdmlydGlvIGJsb2NrIGRldmljZSIsDQo+
ICsgICAgICAiIFRCRFxuIg0KPiArICAgIH0sDQo+ICsgICAgeyAidmlydGlvLWRpc2stbGlzdCIs
DQo+ICsgICAgICAmbWFpbl92aXJ0aW9fZGlza2xpc3QsIDAsIDAsDQo+ICsgICAgICAiTGlzdCB2
aXJ0aW8gYmxvY2sgZGV2aWNlcyBmb3IgYSBkb21haW4iLA0KPiArICAgICAgIjxEb21haW4ocyk+
IiwNCj4gKyAgICB9LA0KPiArICAgIHsgInZpcnRpby1kaXNrLWRldGFjaCIsDQo+ICsgICAgICAm
bWFpbl92aXJ0aW9fZGlza2RldGFjaCwgMCwgMSwNCj4gKyAgICAgICJEZXN0cm95IGEgZG9tYWlu
J3MgdmlydGlvIGJsb2NrIGRldmljZSIsDQo+ICsgICAgICAiPERvbWFpbj4gPERldklkPiIsDQo+
ICsgICAgfSwNCj4gICAgICB7ICJ1cHRpbWUiLA0KPiAgICAgICAgJm1haW5fdXB0aW1lLCAwLCAw
LA0KPiAgICAgICAgIlByaW50IHVwdGltZSBmb3IgYWxsL3NvbWUgZG9tYWlucyIsDQo+IGRpZmYg
LS1naXQgYS90b29scy94bC94bF9wYXJzZS5jIGIvdG9vbHMveGwveGxfcGFyc2UuYw0KPiBpbmRl
eCAxMGFjZjIyLi42Y2YzNTI0IDEwMDY0NA0KPiAtLS0gYS90b29scy94bC94bF9wYXJzZS5jDQo+
ICsrKyBiL3Rvb2xzL3hsL3hsX3BhcnNlLmMNCj4gQEAgLTEyMDQsNiArMTIwNCwxMjAgQEAgb3V0
Og0KPiAgICAgIGlmIChyYykgZXhpdChFWElUX0ZBSUxVUkUpOw0KPiAgfQ0KPiANCj4gKyNkZWZp
bmUgTUFYX1ZJUlRJT19ESVNLUyA0DQo+ICsNCj4gK3N0YXRpYyBpbnQgcGFyc2VfdmlydGlvX2Rp
c2tfY29uZmlnKGxpYnhsX2RldmljZV92aXJ0aW9fZGlzayAqdmlydGlvX2Rpc2ssIGNoYXINCj4g
KnRva2VuKQ0KPiArew0KPiArICAgIGNoYXIgKm9wYXJnOw0KPiArICAgIGxpYnhsX3N0cmluZ19s
aXN0IGRpc2tzID0gTlVMTDsNCj4gKyAgICBpbnQgaSwgcmM7DQo+ICsNCj4gKyAgICBpZiAoTUFU
Q0hfT1BUSU9OKCJiYWNrZW5kIiwgdG9rZW4sIG9wYXJnKSkgew0KPiArICAgICAgICB2aXJ0aW9f
ZGlzay0+YmFja2VuZF9kb21uYW1lID0gc3RyZHVwKG9wYXJnKTsNCj4gKyAgICB9IGVsc2UgaWYg
KE1BVENIX09QVElPTigiZGlza3MiLCB0b2tlbiwgb3BhcmcpKSB7DQo+ICsgICAgICAgIHNwbGl0
X3N0cmluZ19pbnRvX3N0cmluZ19saXN0KG9wYXJnLCAiOyIsICZkaXNrcyk7DQo+ICsNCj4gKyAg
ICAgICAgdmlydGlvX2Rpc2stPm51bV9kaXNrcyA9IGxpYnhsX3N0cmluZ19saXN0X2xlbmd0aCgm
ZGlza3MpOw0KPiArICAgICAgICBpZiAodmlydGlvX2Rpc2stPm51bV9kaXNrcyA+IE1BWF9WSVJU
SU9fRElTS1MpIHsNCj4gKyAgICAgICAgICAgIGZwcmludGYoc3RkZXJyLCAidmRpc2s6IGN1cnJl
bnRseSBvbmx5ICVkIGRpc2tzIGFyZSBzdXBwb3J0ZWQiLA0KPiArICAgICAgICAgICAgICAgICAg
ICBNQVhfVklSVElPX0RJU0tTKTsNCj4gKyAgICAgICAgICAgIHJldHVybiAxOw0KPiArICAgICAg
ICB9DQo+ICsgICAgICAgIHZpcnRpb19kaXNrLT5kaXNrcyA9IHhjYWxsb2ModmlydGlvX2Rpc2st
Pm51bV9kaXNrcywNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzaXpl
b2YoKnZpcnRpb19kaXNrLT5kaXNrcykpOw0KPiArDQo+ICsgICAgICAgIGZvcihpID0gMDsgaSA8
IHZpcnRpb19kaXNrLT5udW1fZGlza3M7IGkrKykgew0KPiArICAgICAgICAgICAgY2hhciAqZGlz
a19vcHQ7DQo+ICsNCj4gKyAgICAgICAgICAgIHJjID0gc3BsaXRfc3RyaW5nX2ludG9fcGFpcihk
aXNrc1tpXSwgIjoiLCAmZGlza19vcHQsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgJnZpcnRpb19kaXNrLT5kaXNrc1tpXS5maWxlbmFtZSk7DQo+ICsgICAgICAg
ICAgICBpZiAocmMpIHsNCj4gKyAgICAgICAgICAgICAgICBmcHJpbnRmKHN0ZGVyciwgInZkaXNr
OiBmYWlsZWQgdG8gc3BsaXQgXCIlc1wiIGludG8gcGFpclxuIiwNCj4gKyAgICAgICAgICAgICAg
ICAgICAgICAgIGRpc2tzW2ldKTsNCj4gKyAgICAgICAgICAgICAgICBnb3RvIG91dDsNCj4gKyAg
ICAgICAgICAgIH0NCj4gKw0KPiArICAgICAgICAgICAgaWYgKCFzdHJjbXAoZGlza19vcHQsICJy
byIpKQ0KPiArICAgICAgICAgICAgICAgIHZpcnRpb19kaXNrLT5kaXNrc1tpXS5yZWFkb25seSA9
IDE7DQo+ICsgICAgICAgICAgICBlbHNlIGlmICghc3RyY21wKGRpc2tfb3B0LCAicnciKSkNCj4g
KyAgICAgICAgICAgICAgICB2aXJ0aW9fZGlzay0+ZGlza3NbaV0ucmVhZG9ubHkgPSAwOw0KPiAr
ICAgICAgICAgICAgZWxzZSB7DQo+ICsgICAgICAgICAgICAgICAgZnByaW50ZihzdGRlcnIsICJ2
ZGlzazogZmFpbGVkIHRvIHBhcnNlIFwiJXNcIiBkaXNrIG9wdGlvblxuIiwNCj4gKyAgICAgICAg
ICAgICAgICAgICAgICAgIGRpc2tfb3B0KTsNCj4gKyAgICAgICAgICAgICAgICByYyA9IDE7DQo+
ICsgICAgICAgICAgICB9DQo+ICsgICAgICAgICAgICBmcmVlKGRpc2tfb3B0KTsNCj4gKw0KPiAr
ICAgICAgICAgICAgaWYgKHJjKSBnb3RvIG91dDsNCj4gKyAgICAgICAgfQ0KPiArICAgIH0gZWxz
ZSB7DQo+ICsgICAgICAgIGZwcmludGYoc3RkZXJyLCAiVW5rbm93biBzdHJpbmcgXCIlc1wiIGlu
IHZkaXNrIHNwZWNcbiIsIHRva2VuKTsNCj4gKyAgICAgICAgcmMgPSAxOyBnb3RvIG91dDsNCj4g
KyAgICB9DQo+ICsNCj4gKyAgICByYyA9IDA7DQo+ICsNCj4gK291dDoNCj4gKyAgICBsaWJ4bF9z
dHJpbmdfbGlzdF9kaXNwb3NlKCZkaXNrcyk7DQo+ICsgICAgcmV0dXJuIHJjOw0KPiArfQ0KPiAr
DQo+ICtzdGF0aWMgdm9pZCBwYXJzZV92aXJ0aW9fZGlza19saXN0KGNvbnN0IFhMVV9Db25maWcg
KmNvbmZpZywNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICBsaWJ4bF9kb21haW5fY29u
ZmlnICpkX2NvbmZpZykNCj4gK3sNCj4gKyAgICBYTFVfQ29uZmlnTGlzdCAqdmlydGlvX2Rpc2tz
Ow0KPiArICAgIGNvbnN0IGNoYXIgKml0ZW07DQo+ICsgICAgY2hhciAqYnVmID0gTlVMTDsNCj4g
KyAgICBpbnQgcmM7DQo+ICsNCj4gKyAgICBpZiAoIXhsdV9jZmdfZ2V0X2xpc3QgKGNvbmZpZywg
InZkaXNrIiwgJnZpcnRpb19kaXNrcywgMCwgMCkpIHsNCj4gKyAgICAgICAgbGlieGxfZG9tYWlu
X2J1aWxkX2luZm8gKmJfaW5mbyA9ICZkX2NvbmZpZy0+Yl9pbmZvOw0KPiArICAgICAgICBpbnQg
ZW50cnkgPSAwOw0KPiArDQo+ICsgICAgICAgIC8qIFhYWCBSZW1vdmUgYW4gZXh0cmEgcHJvcGVy
dHkgKi8NCj4gKyAgICAgICAgbGlieGxfZGVmYm9vbF9zZXRkZWZhdWx0KCZiX2luZm8tPmFyY2hf
YXJtLnZpcnRpbywgZmFsc2UpOw0KPiArICAgICAgICBpZiAoIWxpYnhsX2RlZmJvb2xfdmFsKGJf
aW5mby0+YXJjaF9hcm0udmlydGlvKSkgew0KPiArICAgICAgICAgICAgZnByaW50ZihzdGRlcnIs
ICJWaXJ0aW8gZGV2aWNlIHJlcXVpcmVzIFZpcnRpbyBwcm9wZXJ0eSB0byBiZSBzZXRcbiIpOw0K
PiArICAgICAgICAgICAgZXhpdChFWElUX0ZBSUxVUkUpOw0KPiArICAgICAgICB9DQo+ICsNCj4g
KyAgICAgICAgd2hpbGUgKChpdGVtID0geGx1X2NmZ19nZXRfbGlzdGl0ZW0odmlydGlvX2Rpc2tz
LCBlbnRyeSkpICE9IE5VTEwpIHsNCj4gKyAgICAgICAgICAgIGxpYnhsX2RldmljZV92aXJ0aW9f
ZGlzayAqdmlydGlvX2Rpc2s7DQo+ICsgICAgICAgICAgICBjaGFyICpwOw0KPiArDQo+ICsgICAg
ICAgICAgICB2aXJ0aW9fZGlzayA9IEFSUkFZX0VYVEVORF9JTklUKGRfY29uZmlnLT52aXJ0aW9f
ZGlza3MsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGRf
Y29uZmlnLT5udW1fdmlydGlvX2Rpc2tzLA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBsaWJ4bF9kZXZpY2VfdmlydGlvX2Rpc2tfaW5pdCk7DQo+ICsNCj4g
KyAgICAgICAgICAgIGJ1ZiA9IHN0cmR1cChpdGVtKTsNCj4gKw0KPiArICAgICAgICAgICAgcCA9
IHN0cnRvayAoYnVmLCAiLCIpOw0KPiArICAgICAgICAgICAgd2hpbGUgKHAgIT0gTlVMTCkNCj4g
KyAgICAgICAgICAgIHsNCj4gKyAgICAgICAgICAgICAgICB3aGlsZSAoKnAgPT0gJyAnKSBwKys7
DQo+ICsNCj4gKyAgICAgICAgICAgICAgICByYyA9IHBhcnNlX3ZpcnRpb19kaXNrX2NvbmZpZyh2
aXJ0aW9fZGlzaywgcCk7DQo+ICsgICAgICAgICAgICAgICAgaWYgKHJjKSBnb3RvIG91dDsNCj4g
Kw0KPiArICAgICAgICAgICAgICAgIHAgPSBzdHJ0b2sgKE5VTEwsICIsIik7DQo+ICsgICAgICAg
ICAgICB9DQo+ICsNCj4gKyAgICAgICAgICAgIGVudHJ5Kys7DQo+ICsNCj4gKyAgICAgICAgICAg
IGlmICh2aXJ0aW9fZGlzay0+bnVtX2Rpc2tzID09IDApIHsNCj4gKyAgICAgICAgICAgICAgICBm
cHJpbnRmKHN0ZGVyciwgIkF0IGxlYXN0IG9uZSB2aXJ0aW8gZGlzayBzaG91bGQgYmUgc3BlY2lm
aWVkXG4iKTsNCj4gKyAgICAgICAgICAgICAgICByYyA9IDE7IGdvdG8gb3V0Ow0KPiArICAgICAg
ICAgICAgfQ0KPiArICAgICAgICB9DQo+ICsgICAgfQ0KPiArDQo+ICsgICAgcmMgPSAwOw0KPiAr
DQo+ICtvdXQ6DQo+ICsgICAgZnJlZShidWYpOw0KPiArICAgIGlmIChyYykgZXhpdChFWElUX0ZB
SUxVUkUpOw0KPiArfQ0KPiArDQo+ICB2b2lkIHBhcnNlX2NvbmZpZ19kYXRhKGNvbnN0IGNoYXIg
KmNvbmZpZ19zb3VyY2UsDQo+ICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IGNoYXIgKmNv
bmZpZ19kYXRhLA0KPiAgICAgICAgICAgICAgICAgICAgICAgICBpbnQgY29uZmlnX2xlbiwNCj4g
QEAgLTI3MzQsNiArMjg0OCw3IEBAIHNraXBfdXNiZGV2Og0KPiAgICAgIH0NCj4gDQo+ICAgICAg
cGFyc2VfdmtiX2xpc3QoY29uZmlnLCBkX2NvbmZpZyk7DQo+ICsgICAgcGFyc2VfdmlydGlvX2Rp
c2tfbGlzdChjb25maWcsIGRfY29uZmlnKTsNCj4gDQo+ICAgICAgeGx1X2NmZ19nZXRfZGVmYm9v
bChjb25maWcsICJ4ZW5kX3N1c3BlbmRfZXZ0Y2huX2NvbXBhdCIsDQo+ICAgICAgICAgICAgICAg
ICAgICAgICAgICAmY19pbmZvLT54ZW5kX3N1c3BlbmRfZXZ0Y2huX2NvbXBhdCwgMCk7DQo+IGRp
ZmYgLS1naXQgYS90b29scy94bC94bF92aXJ0aW9fZGlzay5jIGIvdG9vbHMveGwveGxfdmlydGlv
X2Rpc2suYw0KPiBuZXcgZmlsZSBtb2RlIDEwMDY0NA0KPiBpbmRleCAwMDAwMDAwLi44MDhhN2Rh
DQo+IC0tLSAvZGV2L251bGwNCj4gKysrIGIvdG9vbHMveGwveGxfdmlydGlvX2Rpc2suYw0KPiBA
QCAtMCwwICsxLDQ2IEBADQo+ICsvKg0KPiArICogQ29weXJpZ2h0IChDKSAyMDIwIEVQQU0gU3lz
dGVtcyBJbmMuDQo+ICsgKg0KPiArICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlv
dSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkNCj4gKyAqIGl0IHVuZGVyIHRoZSB0
ZXJtcyBvZiB0aGUgR05VIExlc3NlciBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGFzIHB1Ymxpc2hl
ZA0KPiArICogYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbjsgdmVyc2lvbiAyLjEgb25s
eS4gd2l0aCB0aGUgc3BlY2lhbA0KPiArICogZXhjZXB0aW9uIG9uIGxpbmtpbmcgZGVzY3JpYmVk
IGluIGZpbGUgTElDRU5TRS4NCj4gKyAqDQo+ICsgKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0
ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwNCj4gKyAqIGJ1dCBXSVRIT1VU
IEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mDQo+ICsg
KiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBT
ZWUgdGhlDQo+ICsgKiBHTlUgTGVzc2VyIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUg
ZGV0YWlscy4NCj4gKyAqLw0KPiArDQo+ICsjaW5jbHVkZSA8c3RkbGliLmg+DQo+ICsNCj4gKyNp
bmNsdWRlIDxsaWJ4bC5oPg0KPiArI2luY2x1ZGUgPGxpYnhsX3V0aWxzLmg+DQo+ICsjaW5jbHVk
ZSA8bGlieGx1dGlsLmg+DQo+ICsNCj4gKyNpbmNsdWRlICJ4bC5oIg0KPiArI2luY2x1ZGUgInhs
X3V0aWxzLmgiDQo+ICsjaW5jbHVkZSAieGxfcGFyc2UuaCINCj4gKw0KPiAraW50IG1haW5fdmly
dGlvX2Rpc2thdHRhY2goaW50IGFyZ2MsIGNoYXIgKiphcmd2KQ0KPiArew0KPiArICAgIHJldHVy
biAwOw0KPiArfQ0KPiArDQo+ICtpbnQgbWFpbl92aXJ0aW9fZGlza2xpc3QoaW50IGFyZ2MsIGNo
YXIgKiphcmd2KQ0KPiArew0KPiArICAgcmV0dXJuIDA7DQo+ICt9DQo+ICsNCj4gK2ludCBtYWlu
X3ZpcnRpb19kaXNrZGV0YWNoKGludCBhcmdjLCBjaGFyICoqYXJndikNCj4gK3sNCj4gKyAgICBy
ZXR1cm4gMDsNCj4gK30NCj4gKw0KPiArLyoNCj4gKyAqIExvY2FsIHZhcmlhYmxlczoNCj4gKyAq
IG1vZGU6IEMNCj4gKyAqIGMtYmFzaWMtb2Zmc2V0OiA0DQo+ICsgKiBpbmRlbnQtdGFicy1tb2Rl
OiBuaWwNCj4gKyAqIEVuZDoNCj4gKyAqLw0KPiAtLQ0KPiAyLjcuNA0KPiANCg0K


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 06:59:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 06:59:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22019.48369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc196-0002bI-V1; Mon, 09 Nov 2020 06:59:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22019.48369; Mon, 09 Nov 2020 06:59:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc196-0002bB-Qf; Mon, 09 Nov 2020 06:59:20 +0000
Received: by outflank-mailman (input) for mailman id 22019;
 Mon, 09 Nov 2020 06:56:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fWh8=EP=cloud.ionos.com=jinpu.wang@srs-us1.protection.inumbo.net>)
 id 1kc16c-0002WW-4H
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 06:56:46 +0000
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f99bf70e-5509-4ffa-8364-ec172001a3fd;
 Mon, 09 Nov 2020 06:56:44 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id ay21so7542903edb.2
 for <xen-devel@lists.xenproject.org>; Sun, 08 Nov 2020 22:56:44 -0800 (PST)
X-Inumbo-ID: f99bf70e-5509-4ffa-8364-ec172001a3fd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.ionos.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=lhEWojdnEETMpoDmZaKWZITR453k36pmv/sfAr6vN/Q=;
        b=OlOP6e+hTTnzcMIwdaF/T/hnfIBf/9oJMDqvqjYzplHIQ4xMgH25+911Ixq0B3+K58
         5Dj/JmHZRD8Yi8zqOu+1rKF07XS2mrjA0Wv9PMukIV3uX40Qn/weu/uMdAMCHG7RU935
         A8pgkbko6hNqb1nFuLU3A3PiBPYVj8XjATb55mVvvHvMOvvFHanlnCDmyhaMxkXN6DKS
         Bf5AgcB50gTwwmoZvVJaG5dtCbnwCB8M7N5x+uhcjSrbjJF9nKnn2cuHHSz7I/a4akR/
         xE89AMdrexbc31NBbIF2vvHpBBUezGpS6XdWxnMo7c/NgKk9nHTIb/0D5wlGo0gz8gEr
         D1Nw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=lhEWojdnEETMpoDmZaKWZITR453k36pmv/sfAr6vN/Q=;
        b=M4qveyHfqVMjA0F01UJZsEwehImPK4b0g4sjZHPWzegdvew8Y75cWiaueJQzGv5zIn
         CrNZ8NBPc5152frpu6uUQ/AqcojYxIogEisrBrRQKBlS3Ey/lV8FudzLyuQQ1ueh5qSm
         uYBOOOq2Seag2dnw7urHCl7GzyrstISRmKCJ+hAqcOZxHaBfC4yUQmVgd3RhGC6VeY5P
         H8L02sFe8FmV+7dkv0jerHTNJoJo/fU5ZzRq+lTvhAKpKIJTj9APGvtUpr+GHT4KFC1r
         INiiNN9uC76T+dCaU3ObIkfRj2dFFVXaNkkP9SguTqqgBOh378AU32L50gPipqgxI0Na
         ASGg==
X-Gm-Message-State: AOAM533+WinJaJt0xUl6f97WZzfeGwCzEV/Alv4VPLVYTm8tFLc3MS9U
	vPE0QTpm/kWmwcQY5yMv8fi6TGwj7FOxnjtBX2w8OQ==
X-Google-Smtp-Source: ABdhPJy5lMTAmAT8/Ewb7MyKs6V0kAUlIo1q9lM6dl2oB700nsAVEi6e4At1FesCFzDNwPnUu593st+zxY/BthW2KiM=
X-Received: by 2002:a50:fc89:: with SMTP id f9mr14573990edq.89.1604905003672;
 Sun, 08 Nov 2020 22:56:43 -0800 (PST)
MIME-Version: 1.0
References: <20201106190337.1973127-1-hch@lst.de> <20201106190337.1973127-19-hch@lst.de>
In-Reply-To: <20201106190337.1973127-19-hch@lst.de>
From: Jinpu Wang <jinpu.wang@cloud.ionos.com>
Date: Mon, 9 Nov 2020 07:56:32 +0100
Message-ID: <CAMGffEnRgesKniK_X5b2nAoiQ_i6xqL4gnCw7dJxapkD-6Dvwg@mail.gmail.com>
Subject: Re: [PATCH 18/24] rnbd: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, 
	device-mapper development <dm-devel@redhat.com>, linux-block@vger.kernel.org, 
	drbd-dev@lists.linbit.com, nbd@other.debian.org, ceph-devel@vger.kernel.org, 
	xen-devel@lists.xenproject.org, linux-raid <linux-raid@vger.kernel.org>, 
	linux-nvme@lists.infradead.org, 
	Linux SCSI Mailinglist <linux-scsi@vger.kernel.org>, linux-fsdevel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Fri, Nov 6, 2020 at 8:04 PM Christoph Hellwig <hch@lst.de> wrote:
>
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Thanks, Christoph!
Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> ---
>  drivers/block/rnbd/rnbd-clt.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
> index 8b2411ccbda97c..bb13d7dd195a08 100644
> --- a/drivers/block/rnbd/rnbd-clt.c
> +++ b/drivers/block/rnbd/rnbd-clt.c
> @@ -100,8 +100,7 @@ static int rnbd_clt_change_capacity(struct rnbd_clt_dev *dev,
>         rnbd_clt_info(dev, "Device size changed from %zu to %zu sectors\n",
>                        dev->nsectors, new_nsectors);
>         dev->nsectors = new_nsectors;
> -       set_capacity(dev->gd, dev->nsectors);
> -       revalidate_disk_size(dev->gd, true);
> +       set_capacity_and_notify(dev->gd, dev->nsectors);
>         return 0;
>  }
>
> --
> 2.28.0
>


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 07:44:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 07:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22045.48386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc1qx-0006td-FY; Mon, 09 Nov 2020 07:44:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22045.48386; Mon, 09 Nov 2020 07:44:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc1qx-0006tW-Ch; Mon, 09 Nov 2020 07:44:39 +0000
Received: by outflank-mailman (input) for mailman id 22045;
 Mon, 09 Nov 2020 07:44:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kc1qw-0006tR-5z
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 07:44:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d1aa2b9-a900-4c2c-a5ec-2484c974e9eb;
 Mon, 09 Nov 2020 07:44:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc1qt-0003xr-78; Mon, 09 Nov 2020 07:44:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc1qs-0006Cd-TG; Mon, 09 Nov 2020 07:44:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kc1qs-000356-So; Mon, 09 Nov 2020 07:44:34 +0000
X-Inumbo-ID: 6d1aa2b9-a900-4c2c-a5ec-2484c974e9eb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wrMOjfINQr9kHM5xNWmngXEEGAsbAg+G/5J62KAfAC4=; b=u7KaoTWrRSCj+TlEsuBUHv7XPd
	w9XN5l12OY1Pj89z/v+wxjL+AGGeDoJwxn7WMlxrnuMaRUKC9yLcmf2p+8kbE2Diuiy1gty0HkKug
	a3w72lm8d5Qz7H+D71oU5i0BjnN+5BQybxUwz4nB73jTmNsmuyUAPfnUGeyifAfw5wB8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156569-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156569: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9dbc1c03eeb534b82647cccb059aca0685d449a7
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 07:44:34 +0000

flight 156569 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156569/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                9dbc1c03eeb534b82647cccb059aca0685d449a7
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  100 days
Failing since        152366  2020-08-01 20:49:34 Z   99 days  165 attempts
Testing same since   156569  2020-11-08 18:55:02 Z    0 days    1 attempts

------------------------------------------------------------
3472 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 663115 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 07:45:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 07:45:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22052.48405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc1s5-00070z-UI; Mon, 09 Nov 2020 07:45:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22052.48405; Mon, 09 Nov 2020 07:45:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc1s5-00070s-Qh; Mon, 09 Nov 2020 07:45:49 +0000
Received: by outflank-mailman (input) for mailman id 22052;
 Mon, 09 Nov 2020 07:45:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kc1s4-00070B-Gm
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 07:45:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 466a90e9-f123-45b3-b8b4-055bd9874e09;
 Mon, 09 Nov 2020 07:45:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc1rx-0003zZ-CF; Mon, 09 Nov 2020 07:45:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc1rx-0006Ek-3H; Mon, 09 Nov 2020 07:45:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kc1rx-0003uF-2o; Mon, 09 Nov 2020 07:45:41 +0000
X-Inumbo-ID: 466a90e9-f123-45b3-b8b4-055bd9874e09
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/QcRoTz1Zzq8wo+CH3I6cd/PiEd5eX0yOTapdfL01Lo=; b=CmCCH+LsS9SM3QqfL7yqWuFISF
	TEd68J59hXww4nylo2xIrmWPeEGJevA35Xj+E1aeV1qIEzkXXLOXZbm7jRJtajFg7B1HssEAnXdZX
	1U3e2GqbwnQykeH4QFbPgs802Uu1n/aWbwnIdFSGnLL49HGLkug04prP+1GoVv28vfn8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156579-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156579: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3113f3d81532d18fbeedebc0c7de8a0e42b771b2
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 07:45:41 +0000

flight 156579 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156579/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3113f3d81532d18fbeedebc0c7de8a0e42b771b2
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  122 days
Failing since        151818  2020-07-11 04:18:52 Z  121 days  116 attempts
Testing same since   156537  2020-11-07 04:19:10 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25200 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 07:54:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 07:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22061.48417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc202-0007zE-V9; Mon, 09 Nov 2020 07:54:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22061.48417; Mon, 09 Nov 2020 07:54:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc202-0007z7-Q9; Mon, 09 Nov 2020 07:54:02 +0000
Received: by outflank-mailman (input) for mailman id 22061;
 Mon, 09 Nov 2020 07:54:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UZOx=EP=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kc202-0007z2-4x
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 07:54:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id feb78623-4f8a-4a78-888e-d697e64c96ee;
 Mon, 09 Nov 2020 07:54:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E44CAABAE;
 Mon,  9 Nov 2020 07:53:59 +0000 (UTC)
X-Inumbo-ID: feb78623-4f8a-4a78-888e-d697e64c96ee
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update
 the bdev size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-4-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <1d06cdfa-a904-30be-f3ec-08ae2fa85cbd@suse.de>
Date: Mon, 9 Nov 2020 08:53:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201106190337.1973127-4-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/6/20 8:03 PM, Christoph Hellwig wrote:
> There is no good reason to call revalidate_disk_size separately.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/nvme/host/core.c | 5 +----
>   1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 376096bfc54a83..4e86c9aafd88a7 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
>   			capacity = 0;
>   	}
>   
> -	set_capacity_revalidate_and_notify(disk, capacity, false);
> +	set_capacity_revalidate_and_notify(disk, capacity, true);
>   
>   	nvme_config_discard(disk, ns);
>   	nvme_config_write_zeroes(disk, ns);
> @@ -2136,7 +2136,6 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
>   		blk_stack_limits(&ns->head->disk->queue->limits,
>   				 &ns->queue->limits, 0);
>   		blk_queue_update_readahead(ns->head->disk->queue);
> -		nvme_update_bdev_size(ns->head->disk);
>   		blk_mq_unfreeze_queue(ns->head->disk->queue);
>   	}
>   #endif

Hold on.
This, at the very least, should be a separate patch.
With this you are changing the behaviour of nvme multipath.

Originally nvme multipath would update the size of the multipath 
device according to the underlying path devices.
With this patch the size of the multipath device will _not_ change 
when the underlying devices change.

While personally I would _love_ to have this patch, we should at least 
document this by making it a separate patch.
And we should possibly check whether both sizes are the same, and 
think about what to do if they are not.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 08:22:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 08:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22094.48437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc2R2-0002mX-Hu; Mon, 09 Nov 2020 08:21:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22094.48437; Mon, 09 Nov 2020 08:21:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc2R2-0002mQ-Eh; Mon, 09 Nov 2020 08:21:56 +0000
Received: by outflank-mailman (input) for mailman id 22094;
 Mon, 09 Nov 2020 08:21:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U+9y=EP=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1kc2R0-0002mL-KU
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 08:21:54 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.85]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec57454a-4f65-4bf1-8f4e-a04d2adc5431;
 Mon, 09 Nov 2020 08:21:51 +0000 (UTC)
Received: from AS8PR04CA0027.eurprd04.prod.outlook.com (2603:10a6:20b:310::32)
 by VI1PR08MB3600.eurprd08.prod.outlook.com (2603:10a6:803:85::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.24; Mon, 9 Nov
 2020 08:21:49 +0000
Received: from VE1EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:310:cafe::6b) by AS8PR04CA0027.outlook.office365.com
 (2603:10a6:20b:310::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Mon, 9 Nov 2020 08:21:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT030.mail.protection.outlook.com (10.152.18.66) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Mon, 9 Nov 2020 08:21:44 +0000
Received: ("Tessian outbound 39167997cde8:v71");
 Mon, 09 Nov 2020 08:21:40 +0000
Received: from bf27f0ca01c8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 674DD738-A1F7-46C4-83A4-2FA19E6A3CFB.1; 
 Mon, 09 Nov 2020 08:21:35 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bf27f0ca01c8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Nov 2020 08:21:35 +0000
Received: from AM5PR0202CA0013.eurprd02.prod.outlook.com
 (2603:10a6:203:69::23) by VE1PR08MB4957.eurprd08.prod.outlook.com
 (2603:10a6:803:109::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Mon, 9 Nov
 2020 08:21:34 +0000
Received: from VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:69:cafe::30) by AM5PR0202CA0013.outlook.office365.com
 (2603:10a6:203:69::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Mon, 9 Nov 2020 08:21:34 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT003.mail.protection.outlook.com (10.152.18.108) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3541.17 via Frontend Transport; Mon, 9 Nov 2020 08:21:32 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2044.4; Mon, 9 Nov 2020
 08:21:20 +0000
Received: from entos-d05.shanghai.arm.com (10.169.212.212) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2044.4 via Frontend
 Transport; Mon, 9 Nov 2020 08:21:17 +0000
X-Inumbo-ID: ec57454a-4f65-4bf1-8f4e-a04d2adc5431
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YR7dvGtRyqou2OvC4N1znnpIeQY3jFZ7NUMIJKjRUZY=;
 b=n5naFCVBj0dB66s2ywJoomCc3CMTR5CqD8ljFFKBQt6kdMa8iVdzw5dH7xHMeltgeRmRA5zk/9Raq2FwgPf0gmydOR4p6stmy18i1wMXvJCKaVzCK9N22gpcmPfanUKrVz0ZhSs2KhWiss0k0nbXNokEMBnWMQUXl9xmaZg0S1s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 513a9a58bd1043bf
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aTIP7M9y7OWZgU3aByN6KgVxSHQPAyc75pi+J2CA76ybBUL41XEDedg5H5CMLlipZ25SXeH+AO/hXx1e/qPdHQo9HDr+5EfMSqqpp0XOT58aSDxbCV0ERSUJVYH/cpgZRExV194i2HFGjVNZpti0nUMgPOyHPQcGPMyf5wdiLJNjQGXtvatOCW7USOrDKbBCnGaQ1Z41KwRwt0byvYxHgAywmt13Zbgo9bEZc69QS6v2ER7+QwXG/rtmNGU75SOuYI2r/4q8xvd53Mc8RIUZ7hVK9pMD6z32jMWbmVUFNOg2++9E/4lVURNXQYXX03jLBgii/WEttU/hXFdb8LZiGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YR7dvGtRyqou2OvC4N1znnpIeQY3jFZ7NUMIJKjRUZY=;
 b=aNgYvTE4q5E68L+x21cblCWWmkZr1q6ILkEowd9kP2SUstqhrwUCRCXaWN0/qRzXuS5HRBSC4KGjzMssOx3JJPkaczZE0hoeQZENhtqhsvzHvSK7xw7JVIIaOc0p/3FAkOlmoY3X/u8XowXtjWptH5EHJPbwTkPKjncd6wacHypYfnSQnywmXEBG8RiSp0jSoeazImofUmwqOKGOxKb0HO9QibOAF/z7Rnl/SlNR4LnDjngl8efpZaIyHHb/9Awo7GgBfYd5XfpgKkI7aXyyPCL7DTo48fROGIlIMtuYeZu1H+ncDggU+UXAT/BYlxO+xQfH9niMBNoaGJT9lz+erg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Andre.Przywara@arm.com>, <Bertrand.Marquis@arm.com>, <Wei.Chen@arm.com>,
	<Penny.Zheng@arm.com>, <Kaly.Xin@arm.com>, <nd@arm.com>
Subject: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Date: Mon, 9 Nov 2020 16:21:10 +0800
Message-ID: <20201109082110.1133996-1-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 46b081bf-663e-413f-7376-08d884887fb1
X-MS-TrafficTypeDiagnostic: VE1PR08MB4957:|VI1PR08MB3600:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB360056F4DC207F891191F1ECF7EA0@VI1PR08MB3600.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Pvpi18BA+cwFFpAzixI6DERkVouNmeMQTxoRyyQf8S0Cs8gHPJFtloZwvJKfrijtC7+oqldbaVwzsYAz0ALInCuGs/WKpxuMnwZCcW2k677qR/K0049WPnMWeXMbJr+eQD0oHg+zgU7STwrlWpf7VqOk8P9QoHveK7uMcEP5QkYjK83e5YeADm5dE3zxKQr8C9zCGuqti5TXjMTDwWZo3+tYAPPpfgZlXV+RKpsCj/BhgzgkubLOH0y/h+qwB8HJRqjGUD15MFwzB0oX3DtVW1yTYv9jCRCjh+0AqFZHE6gPDNmpRtGkKGLT5n+QJqOXLg+oa7lYgBScswC7h29pWPUG/w79cw3Vzsef5AShfxQnRt8uJ2cOUdpwbwR63SlVysD9KUCqK1y7BABGDZ8BVw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(136003)(39860400002)(346002)(376002)(396003)(46966005)(86362001)(7696005)(82310400003)(5660300002)(26005)(110136005)(54906003)(47076004)(1076003)(82740400003)(36756003)(186003)(81166007)(356005)(336012)(8936002)(426003)(70586007)(83380400001)(2616005)(8676002)(6666004)(478600001)(4326008)(44832011)(2906002)(70206006)(316002);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4957
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e11b6a38-896f-4d4d-8a3a-08d88488775f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+prsIyIW4biMCgYAKJelNIEboGje4QSkOp9LrxNrDrWbqHryk+IRjBADHFl/RyLzzM/mMCPqEaD0JKwk1mlqNBG60C0hlTu3leHOh/w0AlyrS6As+Z0MY7i1Llo2uHa8OTOBFhU25C+K5857i9yELHYxITGBt1yrcdfxYGfHjdbcKSR4euH9hX/18jO4aciq+JGH26KPM3qxEoYvcWcdfrsnhCSJKjskqeTNFe/K9PCVAwTnGdDswIwOqSR2opBQHd6esIlCVqBguCQfbKDMDyQhka0m8XdZgFqMUJ6SZVBetxcOPcMUJDleD0ESbMLxIA+rw8vAtDFV1TpDOHPvuWpQ5HzMkfHS8m9+fj3wnWA5mNWVM1R8lpE44bprK/+SaiTMbTU7ec43OM55qKl89w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(346002)(396003)(136003)(46966005)(8676002)(86362001)(54906003)(4326008)(336012)(83380400001)(110136005)(36906005)(70206006)(2906002)(7696005)(316002)(1076003)(478600001)(186003)(2616005)(26005)(70586007)(82310400003)(82740400003)(81166007)(47076004)(36756003)(8936002)(6666004)(44832011)(5660300002)(426003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Nov 2020 08:21:44.8188
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 46b081bf-663e-413f-7376-08d884887fb1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3600

A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
might return a wrong value when the counter crosses a 32-bit boundary.

Xen itself currently has no case where it accesses CNTVCT_EL0, and
handling that register is the guest OS's responsibility.

CNTPCT, however, is read in several places in Xen, so a workaround is
needed: perform the read twice, and return one value or the other
depending on whether a transition has taken place.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 docs/misc/arm/silicon-errata.txt |  1 +
 xen/arch/arm/Kconfig             | 18 ++++++++++++++++++
 xen/arch/arm/cpuerrata.c         |  8 ++++++++
 xen/arch/arm/vtimer.c            |  2 +-
 xen/include/asm-arm/cpuerrata.h  |  2 ++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 xen/include/asm-arm/time.h       | 22 +++++++++++++++++++++-
 7 files changed, 53 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index 1f18a9df58..552c4151d3 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -51,6 +51,7 @@ stable hypervisors.
 | ARM            | Cortex-A57      | #1319537        | N/A                     |
 | ARM            | Cortex-A72      | #1319367        | N/A                     |
 | ARM            | Cortex-A72      | #853709         | N/A                     |
+| ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
 | ARM            | Cortex-A76      | #1165522        | N/A                     |
 | ARM            | Neoverse-N1     | #1165522        | N/A                     |
 | ARM            | MMU-500         | #842869         | N/A                     |
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..f938dd21bd 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -226,6 +226,24 @@ config ARM64_ERRATUM_834220
 
 	  If unsure, say Y.
 
+config ARM_ERRATUM_858921
+	bool "Cortex-A73: 858921: Possible wrong read value for CNTVCT or CNTPCT"
+	default y
+	help
+	  This option adds an alternative code sequence to work around ARM
+	  erratum 858921 on Cortex-A73 (all versions).
+
+	  Affected Cortex-A73 cores might return a wrong read value for CNTVCT
+	  or CNTPCT when the counter crosses a 32-bit boundary.
+
+	  The workaround involves performing the read twice and returning
+	  one or the other value depending on whether a transition has
+	  taken place. Note that enabling this option does not by itself
+	  activate the workaround: it relies on the alternative framework,
+	  which only patches the hypervisor if an affected CPU is detected.
+
+	  If unsure, say Y.
+
 endmenu
 
 config ARM64_HARDEN_BRANCH_PREDICTOR
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 6731d873e8..567911d380 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -469,6 +469,14 @@ static const struct arm_cpu_capabilities arm_errata[] = {
         .capability = ARM_SSBD,
         .matches = has_ssbd_mitigation,
     },
+#endif
+#ifdef CONFIG_ARM_ERRATUM_858921
+    {
+        /* Cortex-A73 (all versions) */
+        .desc = "ARM erratum 858921",
+        .capability = ARM_WORKAROUND_858921,
+        MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+    },
 #endif
     {
         /* Neoverse r0p0 - r2p0 */
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 6d39fc944f..c2b27915c6 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -62,7 +62,7 @@ static void virt_timer_expired(void *data)
 
 int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
 {
-    d->arch.virt_timer_base.offset = READ_SYSREG64(CNTPCT_EL0);
+    d->arch.virt_timer_base.offset = get_cycles();
     d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
     do_div(d->time_offset.seconds, 1000000000);
 
diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
index 88ef3ca934..8d7e7b9375 100644
--- a/xen/include/asm-arm/cpuerrata.h
+++ b/xen/include/asm-arm/cpuerrata.h
@@ -28,6 +28,8 @@ static inline bool check_workaround_##erratum(void)             \
 CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32)
 CHECK_WORKAROUND_HELPER(834220, ARM64_WORKAROUND_834220, CONFIG_ARM_64)
 CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
+CHECK_WORKAROUND_HELPER(858921, ARM_WORKAROUND_858921,
+                        CONFIG_ARM_ERRATUM_858921)
 
 #undef CHECK_WORKAROUND_HELPER
 
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 10878ead8a..016a9fe203 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -45,8 +45,9 @@
 #define ARM_SSBD 7
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
+#define ARM_WORKAROUND_858921 10
 
-#define ARM_NCAPS           10
+#define ARM_NCAPS           11
 
 #ifndef __ASSEMBLY__
 
diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
index 9cb6f9b0b4..1b2c13614b 100644
--- a/xen/include/asm-arm/time.h
+++ b/xen/include/asm-arm/time.h
@@ -3,6 +3,7 @@
 
 #include <asm/sysregs.h>
 #include <asm/system.h>
+#include <asm/cpuerrata.h>
 
 #define DT_MATCH_TIMER                      \
     DT_MATCH_COMPATIBLE("arm,armv7-timer"), \
@@ -13,7 +14,26 @@ typedef uint64_t cycles_t;
 static inline cycles_t get_cycles (void)
 {
         isb();
-        return READ_SYSREG64(CNTPCT_EL0);
+        /*
+         * ARM_WORKAROUND_858921: Cortex-A73 (all versions) counter read
+         * can return a wrong value when the counter crosses a 32bit boundary.
+         */
+        if ( !check_workaround_858921() )
+            return READ_SYSREG64(CNTPCT_EL0);
+        else
+        {
+            /*
+             * A recommended workaround for erratum 858921 is to:
+             *  1. Read CNTPCT twice.
+             *  2. Compare bit[32] of the two read values.
+             *      - If bit[32] differs, keep the old value.
+             *      - If bit[32] is the same, keep the new value.
+             */
+            cycles_t old, new;
+            old = READ_SYSREG64(CNTPCT_EL0);
+            new = READ_SYSREG64(CNTPCT_EL0);
+            return (((old ^ new) >> 32) & 1) ? old : new;
+        }
 }
 
 /* List of timer's IRQ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 08:54:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 08:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22188.48456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc2vq-0005cn-81; Mon, 09 Nov 2020 08:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22188.48456; Mon, 09 Nov 2020 08:53:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc2vq-0005cg-4s; Mon, 09 Nov 2020 08:53:46 +0000
Received: by outflank-mailman (input) for mailman id 22188;
 Mon, 09 Nov 2020 08:53:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rGOv=EP=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kc2vp-0005cb-6F
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 08:53:45 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id efb7b2d8-e50f-4a67-8ee7-e9905961d348;
 Mon, 09 Nov 2020 08:53:43 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id AB71E6736F; Mon,  9 Nov 2020 09:53:40 +0100 (CET)
X-Inumbo-ID: efb7b2d8-e50f-4a67-8ee7-e9905961d348
Date: Mon, 9 Nov 2020 09:53:40 +0100
From: Christoph Hellwig <hch@lst.de>
To: Hannes Reinecke <hare@suse.de>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify
 update the bdev size
Message-ID: <20201109085340.GB27483@lst.de>
References: <20201106190337.1973127-1-hch@lst.de> <20201106190337.1973127-4-hch@lst.de> <1d06cdfa-a904-30be-f3ec-08ae2fa85cbd@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1d06cdfa-a904-30be-f3ec-08ae2fa85cbd@suse.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Nov 09, 2020 at 08:53:58AM +0100, Hannes Reinecke wrote:
>> index 376096bfc54a83..4e86c9aafd88a7 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
>>   			capacity = 0;
>>   	}
>>   -	set_capacity_revalidate_and_notify(disk, capacity, false);
>> +	set_capacity_revalidate_and_notify(disk, capacity, true);
>>     	nvme_config_discard(disk, ns);
>>   	nvme_config_write_zeroes(disk, ns);
>> @@ -2136,7 +2136,6 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
>>   		blk_stack_limits(&ns->head->disk->queue->limits,
>>   				 &ns->queue->limits, 0);
>>   		blk_queue_update_readahead(ns->head->disk->queue);
>> -		nvme_update_bdev_size(ns->head->disk);
>>   		blk_mq_unfreeze_queue(ns->head->disk->queue);
>>   	}
>>   #endif
>
> Hold on.
> This, at the very least, should be a separate patch.
> With this you are changing the behaviour of nvme multipath.
>
> Originally nvme multipath would update/change the size of the multipath 
> device according to the underlying path devices.
> With this patch the size of the multipath device will _not_ change if there 
> is a change on the underlying devices.

Yes, it will.  Take a close look at nvme_update_disk_info and how it is
called.


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 09:01:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 09:01:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22196.48469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc33g-0006Yn-56; Mon, 09 Nov 2020 09:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22196.48469; Mon, 09 Nov 2020 09:01:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc33g-0006Yg-1f; Mon, 09 Nov 2020 09:01:52 +0000
Received: by outflank-mailman (input) for mailman id 22196;
 Mon, 09 Nov 2020 09:01:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s5/2=EP=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kc33e-0006Yb-PQ
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 09:01:50 +0000
Received: from FRA01-MR2-obe.outbound.protection.outlook.com (unknown
 [40.107.9.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a61f24e5-ae82-4332-8297-c9c2d3f06deb;
 Mon, 09 Nov 2020 09:01:48 +0000 (UTC)
Received: from AM5PR0701CA0068.eurprd07.prod.outlook.com (2603:10a6:203:2::30)
 by PR2PR08MB4906.eurprd08.prod.outlook.com (2603:10a6:101:26::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.22; Mon, 9 Nov
 2020 09:01:45 +0000
Received: from VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::d1) by AM5PR0701CA0068.outlook.office365.com
 (2603:10a6:203:2::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.13 via Frontend
 Transport; Mon, 9 Nov 2020 09:01:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT063.mail.protection.outlook.com (10.152.18.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Mon, 9 Nov 2020 09:01:44 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Mon, 09 Nov 2020 09:01:43 +0000
Received: from 6874ff27115d.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4DF6879C-8B17-412C-B757-A60DA1041E46.1; 
 Mon, 09 Nov 2020 09:01:38 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6874ff27115d.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Nov 2020 09:01:38 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM0PR08MB3137.eurprd08.prod.outlook.com (2603:10a6:208:64::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.23; Mon, 9 Nov
 2020 09:01:32 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3499.032; Mon, 9 Nov 2020
 09:01:32 +0000
X-Inumbo-ID: a61f24e5-ae82-4332-8297-c9c2d3f06deb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ycnj+jXOWTGKLzVgLcpM39cPNIcUh/AdEUqjzwHTte0=;
 b=ebs2Nag7UQiCZDRhb1WOexkP7i44+6XxGpyZhhB2w7uMMuQ3F4BtjmvUoBWvm9Xk3Y/bza6CPE/29OFR3h6TVBjCn8RNjEj9OhwKBMFKTDkD3KC7PTuGJl+rvboz5GNrp82SVCmOEu34GGuTl97LII1XMDCFc6MiCMiQIDvAW2Q=
Received: from AM5PR0701CA0068.eurprd07.prod.outlook.com (2603:10a6:203:2::30)
 by PR2PR08MB4906.eurprd08.prod.outlook.com (2603:10a6:101:26::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.22; Mon, 9 Nov
 2020 09:01:45 +0000
Received: from VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::d1) by AM5PR0701CA0068.outlook.office365.com
 (2603:10a6:203:2::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.13 via Frontend
 Transport; Mon, 9 Nov 2020 09:01:45 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nmYuVR/2+W7xefYpiPgqqZLDGQiYfUSG4/1YubnXIxxYcVI4IrIUcAH9xE8lkrwCVb8w1GFy3mp9XuoBH532sWSX+DdQ3VAY1X7rRbLikU0FOn+vk0nl27iGevWem3nBWhwubYT5ewZxiOrVwVKLqvR1lVe3qBYg1TrxR2/4OOZ7vVJL9W2NJm/yDTw7kFVsWR/wioP4QGSaXxjfHAYdsX53IaVQlw1wcopsKMOeiDYhFD2oAbdzo7cgq/ns9gfs/oMxnJjvLUL3vh1nidiQUpNX2TGSjauIaJbgNClngR9NCwhprtZKp4Krr5Hy+aD59NiNVq8cJLUpIGDI+Rlv1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ycnj+jXOWTGKLzVgLcpM39cPNIcUh/AdEUqjzwHTte0=;
 b=Zcl9kcVgg99au4/0G4fOUdoxJVC/uzjRnmlDaZyXo/v9KHYi+hEYDZ0Q5gQCmdTwLledR41RdsvlEHh3MQmLsangdxjsyS+B25gdd+ZtZN1BqhcBiED4oeiIf8rjTGVSRlYoh1wGLpYgBPBk8JW7ZfaUTu7FRaYb8vbrfMJdlvElwwGIBGKxiSKnXv8zmBWqI2h+l76wj62hIlixnWd481E6TY+E1WVhbCfvCgbxcfmENS6kbPdLH3Mv36UFg3mCl2mOoNJWQbXMyRtk3UHa3J/FMfXqrHjB7sgNuioO3GBlrWdv0VntF+ziOfTAEbTxkryX3pmeaDHauHJfnTxd4g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Penny Zheng <Penny.Zheng@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
CC: Andre Przywara <Andre.Przywara@arm.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Penny Zheng <Penny.Zheng@arm.com>, Kaly Xin
	<Kaly.Xin@arm.com>, nd <nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Topic: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Index: AQHWtnF2jWNIb4RgFU+PRE0mwpdPDam/f5Dw
Date: Mon, 9 Nov 2020 09:01:32 +0000
Message-ID:
 <AM0PR08MB37475BBAE02CAD8EC0F067199EEA0@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
In-Reply-To: <20201109082110.1133996-1-penny.zheng@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A4A5B80817ADDB47AA86EBB75A7CC3C0.0
x-checkrecipientchecked: true
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.112]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 6f045462-a295-48ef-e884-08d8848e1551
x-ms-traffictypediagnostic: AM0PR08MB3137:|PR2PR08MB4906:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PR2PR08MB49065B9708005A2684DA747C9EEA0@PR2PR08MB4906.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 EW6W8UC3hxngEAw7MMIFIiv/hbFX4jF9PZAQW2z+VjrvHULpKYnwzpC4pslPUpZK3XeQq8RhH4/S5Gw2gICCDxPODUtTVmwdOV0kgO0p1gh4n/DxMd8nf5B4XkRDokXusi4RT+/Gz26lI/GCSTyD+gxobZCsn3U7xM942TSnq/WcdzFlI9c7Gd0OT/r6LXFi4mdTBpPuA2KShmJ6yDCFL10Ek7KJTeOp/ZwaPjOZnxQQFGHkvw856Oq/qpWke1814p14Skses8bQyur8wxfx30d+TqjH0aPk8rptkdH67vcYI5/+b6tP3aIlTYkn5EcgBN7RnU5qryt2i1IymxlXJA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3747.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(366004)(39860400002)(396003)(136003)(8936002)(52536014)(26005)(4326008)(2906002)(54906003)(53546011)(6506007)(7696005)(478600001)(83380400001)(76116006)(66946007)(8676002)(66446008)(66556008)(64756008)(66476007)(9686003)(55016002)(86362001)(110136005)(33656002)(316002)(71200400001)(5660300002)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 fK3wiLtIXT0f0o5JH6dwRk5d8jUZ92Jc5mgJnj0GfrEXUKObLg2r05G4ZHQrOFWWuhJaBE2B7NBC9D1gQa/7nGVrjKKsAKoKJvGDTVdO6Be6GXlzUuJxvCB2NRockMt0WAXIDHcCt5ZrnJJyMfct7QDlvIC3yr41krz/5orjtdoMSjCbKN2479NQHBJxdSVCIqCESepSDsEvi7wxu9mAYGhUU8ZieybyF4nx8oJG7GKpuI+aIfh8F8z1usahZMyGDjPhJcpHK7nAdsUBIcWvTX554c1q+c8+BsE37KItWgOZcUVRxjJwHNR7XYg5/KoewQxD/0edsl1wWMRm9GZtjP4tVosPyV6iIjLGv4S1vUivahKMILgzuFYc0uAzO3tqYN7nTHPKGwRo8l1b/l4TuE3sV8M8GZ2aTiESgyytnpWgoIMywsqgaGH0dMPYvRm4J+12ttugmw/ppWM/R/7I8Bmm4aD95JneHAL8ScqdJX+4pl8TELZgtVix5HEp29zCHznMX8WYxA9LkLehLF6PKqbDFSCjIFPAocFrKw4sp3r2TtBTw3dH868DeRZ4VAcsRQ49YYyw5cGYDwJnJIiCzI9CC6cIWejb6ngbl7qvOV80mXx6Ztf3yNcSF73w+xrYekQ2QNkCnhC78IJUr/hyfA==
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3137
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	55348e46-8506-444e-8430-08d8848e0dc7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/1fk8XQwkCxz9ZT7tO1oYlG+aKOEW3gHQtmFAu29KszB0pYJanvNzw7e8PkX+fvCC3uMTAOJqoARbP2i7Q5f7Ytn0RFxkKtbAWBeJPitLsGHU5Pnhjn5Rx+hyF80BJZDXutK11gIz9Jpe80Uflg+lDEG2yavmmJkgLShhJBE9bAzndY/LSeJv7F1AHmUgLnDj7L4F5CbqEZQ3+ZG85kxZLAmcUjkW7On7iixG9PcrzDdOQihXYNxRSo67m5Uuf7cuBjjTsNFYlgeappgoH47FT5/Vu+lpy11k4mRr38+OKTdpn5OBK1mMTDLY8DuxX7Ydu2vIFWpJvlmP0RL7PJ1oIVhMOiEaSHl9n9pgAppgZVv9ww8pOfmate65yZuTGIEx/IwOV5QpsNd5DQ10FaWhQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(376002)(396003)(39860400002)(46966005)(82740400003)(47076004)(82310400003)(4326008)(2906002)(83380400001)(110136005)(52536014)(86362001)(336012)(8936002)(356005)(81166007)(478600001)(8676002)(33656002)(186003)(316002)(70586007)(36906005)(70206006)(5660300002)(7696005)(55016002)(26005)(6506007)(53546011)(54906003)(9686003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Nov 2020 09:01:44.8328
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f045462-a295-48ef-e884-08d8848e1551
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4906

Hi Penny,

> -----Original Message-----
> From: Penny Zheng <penny.zheng@arm.com>
> Sent: 2020年11月9日 16:21
> To: xen-devel@lists.xenproject.org; sstabellini@kernel.org; julien@xen.org
> Cc: Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Wei Chen <Wei.Chen@arm.com>; Penny Zheng
> <Penny.Zheng@arm.com>; Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
> Subject: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
>
> CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
> might return a wrong value when the counter crosses a 32bit boundary.
>
> Until now, there is no case for Xen itself to access CNTVCT_EL0,
> and it also should be the Guest OS's responsibility to deal with
> this part.
>
> But for CNTPCT, there exists several cases in Xen involving reading
> CNTPCT, so a possible workaround is that performing the read twice,
> and to return one or the other depending on whether a transition has
> taken place.
>
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

Reviewed-by: Wei Chen <Wei.Chen@arm.com>

Thanks,
Wei Chen

> ---
>  docs/misc/arm/silicon-errata.txt |  1 +
>  xen/arch/arm/Kconfig             | 18 ++++++++++++++++++
>  xen/arch/arm/cpuerrata.c         |  8 ++++++++
>  xen/arch/arm/vtimer.c            |  2 +-
>  xen/include/asm-arm/cpuerrata.h  |  2 ++
>  xen/include/asm-arm/cpufeature.h |  3 ++-
>  xen/include/asm-arm/time.h       | 22 +++++++++++++++++++++-
>  7 files changed, 53 insertions(+), 3 deletions(-)
>
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 1f18a9df58..552c4151d3 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -51,6 +51,7 @@ stable hypervisors.
>  | ARM            | Cortex-A57      | #1319537        | N/A                     |
>  | ARM            | Cortex-A72      | #1319367        | N/A                     |
>  | ARM            | Cortex-A72      | #853709         | N/A                     |
> +| ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
>  | ARM            | Cortex-A76      | #1165522        | N/A                     |
>  | ARM            | Neoverse-N1     | #1165522        | N/A
>  | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 2777388265..f938dd21bd 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -226,6 +226,24 @@ config ARM64_ERRATUM_834220
>
>  	  If unsure, say Y.
>
> +config ARM_ERRATUM_858921
> +	bool "Cortex-A73: 858921: Possible wrong read value for CNTVCT or CNTPCT"
> +	default y
> +	help
> +	  This option adds an alternative code sequence to work around ARM
> +	  erratum 858921 on Cortex-A73 (all versions).
> +
> +	  Affected Cortex-A73 might return wrong read value for CNTVCT or CNTPCT
> +	  when the counter crosses a 32bit boundary.
> +
> +	  The workaround involves performing the read twice, and to return
> +	  one or the other value depending on whether a transition has taken place.
> +	  Please note that this does not necessarily enable the workaround,
> +	  as it depends on the alternative framework, which will only patch
> +	  the kernel if an affected CPU is detected.
> +
> +	  If unsure, say Y.
> +
>  endmenu
>
>  config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 6731d873e8..567911d380 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -469,6 +469,14 @@ static const struct arm_cpu_capabilities arm_errata[] =
> {
>          .capability = ARM_SSBD,
>          .matches = has_ssbd_mitigation,
>      },
> +#endif
> +#ifdef CONFIG_ARM_ERRATUM_858921
> +    {
> +        /* Cortex-A73 (all versions) */
> +        .desc = "ARM erratum 858921",
> +        .capability = ARM_WORKAROUND_858921,
> +        MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
> +    },
>  #endif
>      {
>          /* Neoverse r0p0 - r2p0 */
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 6d39fc944f..c2b27915c6 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -62,7 +62,7 @@ static void virt_timer_expired(void *data)
>
>  int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
>  {
> -    d->arch.virt_timer_base.offset = READ_SYSREG64(CNTPCT_EL0);
> +    d->arch.virt_timer_base.offset = get_cycles();
>      d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
>      do_div(d->time_offset.seconds, 1000000000);
>
> diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerrata.h
> index 88ef3ca934..8d7e7b9375 100644
> --- a/xen/include/asm-arm/cpuerrata.h
> +++ b/xen/include/asm-arm/cpuerrata.h
> @@ -28,6 +28,8 @@ static inline bool check_workaround_##erratum(void) \
>  CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32)
>  CHECK_WORKAROUND_HELPER(834220, ARM64_WORKAROUND_834220, CONFIG_ARM_64)
>  CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
> +CHECK_WORKAROUND_HELPER(858921, ARM_WORKAROUND_858921,
> +                        CONFIG_ARM_ERRATUM_858921)
>
>  #undef CHECK_WORKAROUND_HELPER
>
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 10878ead8a..016a9fe203 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -45,8 +45,9 @@
>  #define ARM_SSBD 7
>  #define ARM_SMCCC_1_1 8
>  #define ARM64_WORKAROUND_AT_SPECULATE 9
> +#define ARM_WORKAROUND_858921 10
>
> -#define ARM_NCAPS           10
> +#define ARM_NCAPS           11
>
>  #ifndef __ASSEMBLY__
>
> diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
> index 9cb6f9b0b4..1b2c13614b 100644
> --- a/xen/include/asm-arm/time.h
> +++ b/xen/include/asm-arm/time.h
> @@ -3,6 +3,7 @@
>
>  #include <asm/sysregs.h>
>  #include <asm/system.h>
> +#include <asm/cpuerrata.h>
>
>  #define DT_MATCH_TIMER                      \
>      DT_MATCH_COMPATIBLE("arm,armv7-timer"), \
> @@ -13,7 +14,26 @@ typedef uint64_t cycles_t;
>  static inline cycles_t get_cycles (void)
>  {
>          isb();
> -        return READ_SYSREG64(CNTPCT_EL0);
> +        /*
> +         * ARM_WORKAROUND_858921: Cortex-A73 (all versions) counter read
> +         * can return a wrong value when the counter crosses a 32bit boundary.
> +         */
> +        if ( !check_workaround_858921() )
> +            return READ_SYSREG64(CNTPCT_EL0);
> +        else
> +        {
> +            /*
> +             * A recommended workaround for erratum 858921 is to:
> +             *  1- Read twice CNTPCT.
> +             *  2- Compare bit[32] of the two read values.
> +             *      - If bit[32] is different, keep the old value.
> +             *      - If bit[32] is the same, keep the new value.
> +             */
> +            cycles_t old, new;
> +            old = READ_SYSREG64(CNTPCT_EL0);
> +            new = READ_SYSREG64(CNTPCT_EL0);
> +            return (((old ^ new) >> 32) & 1) ? old : new;
> +        }
>  }
>
>  /* List of timer's IRQ */
> --
> 2.25.1


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 09:25:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 09:25:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22246.48511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3Qb-0000RM-FD; Mon, 09 Nov 2020 09:25:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22246.48511; Mon, 09 Nov 2020 09:25:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3Qb-0000RD-7N; Mon, 09 Nov 2020 09:25:33 +0000
Received: by outflank-mailman (input) for mailman id 22246;
 Mon, 09 Nov 2020 09:25:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UZOx=EP=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kc3QZ-0000R7-8Z
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 09:25:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b25ea218-31cd-4d33-84bf-09e3ba447283;
 Mon, 09 Nov 2020 09:25:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CA23CABCC;
 Mon,  9 Nov 2020 09:25:23 +0000 (UTC)
X-Inumbo-ID: b25ea218-31cd-4d33-84bf-09e3ba447283
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update
 the bdev size
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
 Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>,
 Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-4-hch@lst.de>
 <1d06cdfa-a904-30be-f3ec-08ae2fa85cbd@suse.de>
 <20201109085340.GB27483@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <e79f9a96-ef53-d6ea-f6e7-e141bdd2e2d2@suse.de>
Date: Mon, 9 Nov 2020 10:25:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201109085340.GB27483@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/9/20 9:53 AM, Christoph Hellwig wrote:
> On Mon, Nov 09, 2020 at 08:53:58AM +0100, Hannes Reinecke wrote:
>>> index 376096bfc54a83..4e86c9aafd88a7 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
>>>    			capacity = 0;
[ .. ]
>> Originally nvme multipath would update/change the size of the multipath
>> device according to the underlying path devices.
>> With this patch the size of the multipath device will _not_ change if there
>> is a change on the underlying devices.
> 
> Yes, it will.  Take a close look at nvme_update_disk_info and how it is
> called.
> 
Okay, then: What would be the correct way of handling a size update for 
NVMe multipath?
Assuming we're getting an AEN for each path signalling the size change
(or a controller reset leading to a size change).
So if we're updating the size of the multipath device together with the 
path device at the first AEN/reset we'll end up with the other paths 
having a different size than the multipath device (and the path we've 
just been updating).
- Do we care, or cross fingers and hope for the best?
- Shouldn't we detect the case where we won't get a size update for the 
other paths, or, indeed, we have a genuine device size mismatch due to a 
misconfiguration on the target?

I.e. shouldn't we have a flag 'size update pending' for the other paths, 
to take them out of use temporarily until the other AENs/resets have 
been processed?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 09:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 09:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22268.48535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3oh-000342-KF; Mon, 09 Nov 2020 09:50:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22268.48535; Mon, 09 Nov 2020 09:50:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3oh-00033v-Gt; Mon, 09 Nov 2020 09:50:27 +0000
Received: by outflank-mailman (input) for mailman id 22268;
 Mon, 09 Nov 2020 09:50:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc3of-00032y-Ve
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 09:50:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id abd59ebd-efb6-4d95-a147-c7bb5d752dbe;
 Mon, 09 Nov 2020 09:50:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0472AC65;
 Mon,  9 Nov 2020 09:50:23 +0000 (UTC)
X-Inumbo-ID: abd59ebd-efb6-4d95-a147-c7bb5d752dbe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604915424;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GLBR9EmjHCFkPapKcP0kRX3wmjiDirIrvyn/L7SIsBk=;
	b=ZMdQFefCGQLexNWbgcUpKI//N649BSfQt+iiuKYf3V80m/cdf+zyipd2CdYWR5MhtGzZJ8
	TObwY6Pcy9GRx6gO/Ha7bhJA/+fHa35BErQ/FoBRcq8AIVelWvsclOFaUwxLYcPoYonyY+
	1li17/vxGJ+D5DTiosZo/BOzOaGW+cw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 3/3] xen/x86: issue pci_serr error message via NMI continuation
Date: Mon,  9 Nov 2020 10:50:21 +0100
Message-Id: <20201109095021.9897-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109095021.9897-1-jgross@suse.com>
References: <20201109095021.9897-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using a softirq, pci_serr_error() can use NMI continuation
for issuing an error message.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- rework to less generic approach
---
 xen/arch/x86/traps.c          | 21 +++++++++++++++------
 xen/include/asm-x86/softirq.h |  5 ++---
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 7cb7d7e09c..6aeccef32d 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1660,10 +1660,18 @@ void do_general_protection(struct cpu_user_regs *regs)
     panic("GENERAL PROTECTION FAULT\n[error_code=%04x]\n", regs->error_code);
 }
 
-static void pci_serr_softirq(void)
+static bool pci_serr_cont;
+
+static bool pci_serr_nmicont(void)
 {
+    if ( !pci_serr_cont )
+        return false;
+
+    pci_serr_cont = false;
     printk("\n\nNMI - PCI system error (SERR)\n");
     outb(inb(0x61) & 0x0b, 0x61); /* re-enable the PCI SERR error line. */
+
+    return true;
 }
 
 static void nmi_hwdom_report(unsigned int reason_idx)
@@ -1688,9 +1696,9 @@ static void pci_serr_error(const struct cpu_user_regs *regs)
         nmi_hwdom_report(_XEN_NMIREASON_pci_serr);
         /* fallthrough */
     case 'i': /* 'ignore' */
-        /* Would like to print a diagnostic here but can't call printk()
-           from NMI context -- raise a softirq instead. */
-        raise_softirq(PCI_SERR_SOFTIRQ);
+        /* Issue error message in NMI continuation. */
+        pci_serr_cont = true;
+        trigger_nmi_continuation();
         break;
     default:  /* 'fatal' */
         console_force_unlock();
@@ -1808,6 +1816,9 @@ bool nmi_check_continuation(void)
     if ( nmi_oprofile_send_virq() )
         ret = true;
 
+    if ( pci_serr_nmicont() )
+        ret = true;
+
     return ret;
 }
 
@@ -2154,8 +2165,6 @@ void __init trap_init(void)
     percpu_traps_init();
 
     cpu_init();
-
-    open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
 }
 
 void activate_debugregs(const struct vcpu *curr)
diff --git a/xen/include/asm-x86/softirq.h b/xen/include/asm-x86/softirq.h
index 0b7a77f11f..415ee866c7 100644
--- a/xen/include/asm-x86/softirq.h
+++ b/xen/include/asm-x86/softirq.h
@@ -6,9 +6,8 @@
 #define VCPU_KICK_SOFTIRQ      (NR_COMMON_SOFTIRQS + 2)
 
 #define MACHINE_CHECK_SOFTIRQ  (NR_COMMON_SOFTIRQS + 3)
-#define PCI_SERR_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
-#define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 5)
-#define NR_ARCH_SOFTIRQS       6
+#define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
+#define NR_ARCH_SOFTIRQS       5
 
 bool arch_skip_send_event_check(unsigned int cpu);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 09:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 09:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22267.48523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3og-00033A-CX; Mon, 09 Nov 2020 09:50:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22267.48523; Mon, 09 Nov 2020 09:50:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3og-000333-8v; Mon, 09 Nov 2020 09:50:26 +0000
Received: by outflank-mailman (input) for mailman id 22267;
 Mon, 09 Nov 2020 09:50:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc3of-00032t-7a
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 09:50:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b17111e1-d52a-4b74-9f3c-d5d1117e0620;
 Mon, 09 Nov 2020 09:50:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA563AC1F;
 Mon,  9 Nov 2020 09:50:23 +0000 (UTC)
X-Inumbo-ID: b17111e1-d52a-4b74-9f3c-d5d1117e0620
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604915423;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=FbiVz3bZ42jiEwvKXSrzs/ujY9m/oaZbmYcqH7Jl2W8=;
	b=LJ+u9AbRSLGFjuUb5obEL4y/43RnWgWBmG3ijmXb/e8qP06iCqhDaUUPiKOjquPY49bRer
	BjNSX6RpDozboTHDXnWP3KfBtkQjDYj3Jk8vNNnDZtRBWXUJXl924XEWBkbcV3RF5MMNas
	KLTEMlY9VSF2F8azRD5v99Zwwsl2m/M=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 0/3] xen/x86: implement NMI continuation
Date: Mon,  9 Nov 2020 10:50:18 +0100
Message-Id: <20201109095021.9897-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move oprofile's sending of a virq event to the local vcpu from NMI
context to normal interrupt context.

This has been tested with a small test patch using the continuation
framework of patch 1 for all NMIs and printing a message to the console
in the continuation handler.

Version 1 of this small series was previously sent to the security list.

Changes in V3:
- switched to self-IPI instead of softirq
- added patch 3

Changes in V4:
- use less generic approach

Juergen Gross (3):
  xen/x86: add nmi continuation framework
  xen/oprofile: use NMI continuation for sending virq to guest
  xen/x86: issue pci_serr error message via NMI continuation

 xen/arch/x86/apic.c             | 13 +++++++---
 xen/arch/x86/oprofile/nmi_int.c | 20 +++++++++++++--
 xen/arch/x86/traps.c            | 44 ++++++++++++++++++++++++++++-----
 xen/include/asm-x86/nmi.h       | 11 ++++++++-
 xen/include/asm-x86/softirq.h   |  5 ++--
 xen/include/asm-x86/xenoprof.h  |  7 ++++++
 6 files changed, 85 insertions(+), 15 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 09:50:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 09:50:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22269.48547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3om-00037W-S0; Mon, 09 Nov 2020 09:50:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22269.48547; Mon, 09 Nov 2020 09:50:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3om-00037L-P0; Mon, 09 Nov 2020 09:50:32 +0000
Received: by outflank-mailman (input) for mailman id 22269;
 Mon, 09 Nov 2020 09:50:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc3ok-00032y-UC
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 09:50:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 73627980-4f17-4a32-8c9f-f9554f50e3a6;
 Mon, 09 Nov 2020 09:50:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AAA75AC23;
 Mon,  9 Nov 2020 09:50:23 +0000 (UTC)
X-Inumbo-ID: 73627980-4f17-4a32-8c9f-f9554f50e3a6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604915423;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1OcADNeqtn5le1IW034CfnSXJe+7ZXbQ3qogiKRAU7Y=;
	b=U584MBYlyKA0OmO+EmQRpJWo3L6D/lC8U9IOmHtAp+DfmcF7sHcckrOBXW5j73Dj/CNL8H
	JxejBerXqH53Nfby8wCmFQa0d5wqFUjEdmM24QOY1UWJ4OipRn0ZbZGdMqiZFJu6tXsVWV
	MF2SZMlAuDBx0XMRyEWC8P3zomCk7Aw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/3] xen/x86: add nmi continuation framework
Date: Mon,  9 Nov 2020 10:50:19 +0100
Message-Id: <20201109095021.9897-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109095021.9897-1-jgross@suse.com>
References: <20201109095021.9897-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Actions in NMI context are rather limited, as e.g. locking is
fragile there.

Add a framework to continue processing in normal interrupt context
after leaving NMI processing.

This is done via a high-priority interrupt vector triggered by a
self-IPI from NMI context, which will then call the continuation
function specified during NMI handling.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- make the framework less generic

V2:
- add prototype for continuation function (Roger Pau Monné)
- switch from softirq to explicit self-IPI (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/apic.c       | 13 ++++++++++---
 xen/arch/x86/traps.c      | 19 +++++++++++++++++++
 xen/include/asm-x86/nmi.h | 11 ++++++++++-
 3 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 60627fd6e6..7497ddb5da 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -40,6 +40,7 @@
 #include <irq_vectors.h>
 #include <xen/kexec.h>
 #include <asm/guest.h>
+#include <asm/nmi.h>
 #include <asm/time.h>
 
 static bool __read_mostly tdt_enabled;
@@ -1376,16 +1377,22 @@ void spurious_interrupt(struct cpu_user_regs *regs)
 {
     /*
      * Check if this is a vectored interrupt (most likely, as this is probably
-     * a request to dump local CPU state). Vectored interrupts are ACKed;
-     * spurious interrupts are not.
+     * a request to dump local CPU state or to continue NMI handling).
+     * Vectored interrupts are ACKed; spurious interrupts are not.
      */
     if (apic_isr_read(SPURIOUS_APIC_VECTOR)) {
+        bool is_spurious;
+
         ack_APIC_irq();
+        is_spurious = !nmi_check_continuation();
         if (this_cpu(state_dump_pending)) {
             this_cpu(state_dump_pending) = false;
             dump_execstate(regs);
-            return;
+            is_spurious = false;
         }
+
+        if ( !is_spurious )
+            return;
     }
 
     /* see sw-dev-man vol 3, chapter 7.4.13.5 */
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bc5b8f8ea3..5005ac6e6e 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -79,6 +79,7 @@
 #include <public/hvm/params.h>
 #include <asm/cpuid.h>
 #include <xsm/xsm.h>
+#include <asm/mach-default/irq_vectors.h>
 #include <asm/pv/traps.h>
 #include <asm/pv/mm.h>
 
@@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
     nmi_callback = dummy_nmi_callback;
 }
 
+bool nmi_check_continuation(void)
+{
+    bool ret = false;
+
+    return ret;
+}
+
+void trigger_nmi_continuation(void)
+{
+    /*
+     * Issue a self-IPI. Handling is done in spurious_interrupt().
+     * NMI could have happened in IPI sequence, so wait for ICR being idle
+     * again before leaving NMI handler.
+     */
+    send_IPI_self(SPURIOUS_APIC_VECTOR);
+    apic_wait_icr_idle();
+}
+
 void do_device_not_available(struct cpu_user_regs *regs)
 {
 #ifdef CONFIG_PV
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..9a5da14162 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,14 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
 void unset_nmi_callback(void);
 
 DECLARE_PER_CPU(unsigned int, nmi_count);
- 
+
+/**
+ * trigger_nmi_continuation
+ *
+ * Schedule continuation to be started in interrupt context after NMI handling.
+ */
+void trigger_nmi_continuation(void);
+
+/* Check for NMI continuation pending. */
+bool nmi_check_continuation(void);
 #endif /* ASM_NMI_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 09:50:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 09:50:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22270.48559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3or-0003Ba-4y; Mon, 09 Nov 2020 09:50:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22270.48559; Mon, 09 Nov 2020 09:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc3or-0003BR-1M; Mon, 09 Nov 2020 09:50:37 +0000
Received: by outflank-mailman (input) for mailman id 22270;
 Mon, 09 Nov 2020 09:50:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc3op-00032y-UN
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 09:50:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10bdc7d8-87d4-43e9-a1d6-2b01eb3537ff;
 Mon, 09 Nov 2020 09:50:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BEB7BAC55;
 Mon,  9 Nov 2020 09:50:23 +0000 (UTC)
X-Inumbo-ID: 10bdc7d8-87d4-43e9-a1d6-2b01eb3537ff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604915423;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+Ehg+Oe06If61OaHeelhui6nZFa39xSlzfoRaxz8w3I=;
	b=fh9NlGNOOPqmb4A14oeD+pOVfS5y4p5QA6KODCl8vN86moulHHvmfxjKs0FNyuK40MakFd
	Wu6iKHP0Px1fltYULEvPx4jChyZcRVeTL7ldidbPNL3KNJMIsIHaeUKgSF2wwaOGdHNKOB
	r7R0lf+8qiwOfbkNg5MO46JbLuCMWn8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending virq to guest
Date: Mon,  9 Nov 2020 10:50:20 +0100
Message-Id: <20201109095021.9897-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109095021.9897-1-jgross@suse.com>
References: <20201109095021.9897-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of calling send_guest_vcpu_virq() from NMI context, use the
NMI continuation framework for that purpose. This avoids taking locks
in NMI mode.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- rework to less generic approach
---
 xen/arch/x86/oprofile/nmi_int.c | 20 ++++++++++++++++++--
 xen/arch/x86/traps.c            |  4 ++++
 xen/include/asm-x86/xenoprof.h  |  7 +++++++
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 0f103d80a6..5f17cba0b2 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -38,6 +38,8 @@ static unsigned long saved_lvtpc[NR_CPUS];
 
 static char *cpu_type;
 
+static DEFINE_PER_CPU(struct vcpu *, nmi_cont_vcpu);
+
 static int passive_domain_msr_op_checks(unsigned int msr, int *typep, int *indexp)
 {
 	struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
 		model->free_msr(v);
 }
 
+bool nmi_oprofile_send_virq(void)
+{
+	struct vcpu *v = this_cpu(nmi_cont_vcpu);
+
+	if ( v )
+		send_guest_vcpu_virq(v, VIRQ_XENOPROF);
+
+	this_cpu(nmi_cont_vcpu) = NULL;
+
+	return v;
+}
+
 static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 {
 	int xen_mode, ovf;
 
 	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
 	xen_mode = ring_0(regs);
-	if ( ovf && is_active(current->domain) && !xen_mode )
-		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
+	if ( ovf && is_active(current->domain) && !xen_mode ) {
+		this_cpu(nmi_cont_vcpu) = current;
+		trigger_nmi_continuation();
+	}
 
 	if ( ovf == 2 )
 		current->arch.nmi_pending = true;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 5005ac6e6e..7cb7d7e09c 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -65,6 +65,7 @@
 #include <asm/debugger.h>
 #include <asm/msr.h>
 #include <asm/nmi.h>
+#include <asm/xenoprof.h>
 #include <asm/shared.h>
 #include <asm/x86_emulate.h>
 #include <asm/traps.h>
@@ -1804,6 +1805,9 @@ bool nmi_check_continuation(void)
 {
     bool ret = false;
 
+    if ( nmi_oprofile_send_virq() )
+        ret = true;
+
     return ret;
 }
 
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index 1026ba2e1f..cf6af8c5df 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -69,6 +69,8 @@ int passive_domain_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int passive_domain_do_wrmsr(unsigned int msr, uint64_t msr_content);
 void passive_domain_destroy(struct vcpu *v);
 
+bool nmi_oprofile_send_virq(void);
+
 #else
 
 static inline int passive_domain_do_rdmsr(unsigned int msr,
@@ -85,6 +87,11 @@ static inline int passive_domain_do_wrmsr(unsigned int msr,
 
 static inline void passive_domain_destroy(struct vcpu *v) {}
 
+static inline bool nmi_oprofile_send_virq(void)
+{
+    return false;
+}
+
 #endif /* CONFIG_XENOPROF */
 
 #endif /* __ASM_X86_XENOPROF_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:22:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:22:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22307.48571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4Jp-0006AH-C3; Mon, 09 Nov 2020 10:22:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22307.48571; Mon, 09 Nov 2020 10:22:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4Jp-0006AA-8n; Mon, 09 Nov 2020 10:22:37 +0000
Received: by outflank-mailman (input) for mailman id 22307;
 Mon, 09 Nov 2020 10:22:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jc/Q=EP=redhat.com=stefanha@srs-us1.protection.inumbo.net>)
 id 1kc4Jo-0006A5-4l
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:22:36 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6052f51f-325e-4601-9439-fd8f5ad07eed;
 Mon, 09 Nov 2020 10:22:34 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-41-I_sUUytpNla_sXK2O1mUPw-1; Mon, 09 Nov 2020 05:22:29 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C02131084C8F;
 Mon,  9 Nov 2020 10:22:26 +0000 (UTC)
Received: from localhost (ovpn-114-110.ams2.redhat.com [10.36.114.110])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4D8985D9DD;
 Mon,  9 Nov 2020 10:22:10 +0000 (UTC)
X-Inumbo-ID: 6052f51f-325e-4601-9439-fd8f5ad07eed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604917353;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fvP8UJpz4riRKthWscMZ00xlDbEYu80keBJwGqqSMOE=;
	b=aumEm1BQiWsEY/9fPJkWEcC+J3tNGAuuStz1sz061yEn2cbmbE/xFVOJ0/mGqOBUTvLMbj
	XEvSqdnEm9L0mVzFmax2+VHoPXRso+cA4kGUdLuLVB5WgcvvkchfZl6Pv+ab1WvXk2Tb1/
	kSgQDJadu/MUnB7XVvQEB0sbQGrp/mw=
X-MC-Unique: I_sUUytpNla_sXK2O1mUPw-1
Date: Mon, 9 Nov 2020 10:22:09 +0000
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 23/24] virtio-blk: remove a spurious call to
 revalidate_disk_size
Message-ID: <20201109102209.GF783516@stefanha-x1.localdomain>
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-24-hch@lst.de>
MIME-Version: 1.0
In-Reply-To: <20201106190337.1973127-24-hch@lst.de>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=stefanha@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="7cm2iqirTL37Ot+N"
Content-Disposition: inline

--7cm2iqirTL37Ot+N
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, Nov 06, 2020 at 08:03:35PM +0100, Christoph Hellwig wrote:
> revalidate_disk_size just updates the block device size from the disk
> size.  Thus calling it from revalidate_disk_size doesn't actually do
> anything.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/virtio_blk.c | 1 -
>  1 file changed, 1 deletion(-)

Modulo Paolo's comment:

Acked-by: Stefan Hajnoczi <stefanha@redhat.com>

--7cm2iqirTL37Ot+N
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAl+pGFEACgkQnKSrs4Gr
c8hPvggAiEUoB55Y2NWYKmWp20Pqz66o8MfxgXahkbbIj6hWGOJZ5M8cZD5dmb6h
xlDynJx6PzDey/2EstgMWAWpt5QFnKiPDSY+t/UjxpkAXqacgWSnNXhedkDGlczW
4LP5GspHB7zun1KHAcMpcXo6Uet85t5RPsKZqqkp1hRsIMjzKScj4Kan0b65BS0V
J6vUVDQnwVqn8DI38Ebm0r6TWG3PorXZ/SanjhCB9wbuGw3dX6X9aAk2XY8Ybwa6
34P5kZN1RxaqPNFYU0r3gcIWvi8CdCB6XE1Q4OM0ahxmoN4Y4pJcGA0XDZI1N7ei
35TRtk3FCvvJK8X13zk4enPGeLZiYQ==
=JdMU
-----END PGP SIGNATURE-----

--7cm2iqirTL37Ot+N--



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:31:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:31:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22314.48583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4SO-00075t-E0; Mon, 09 Nov 2020 10:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22314.48583; Mon, 09 Nov 2020 10:31:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4SO-00075m-Ax; Mon, 09 Nov 2020 10:31:28 +0000
Received: by outflank-mailman (input) for mailman id 22314;
 Mon, 09 Nov 2020 10:31:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TI6i=EP=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kc4SN-00075h-A6
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:31:27 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.47]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44ce25a6-f2ea-478d-ae24-c6447e774c55;
 Mon, 09 Nov 2020 10:31:24 +0000 (UTC)
Received: from DB6PR0201CA0033.eurprd02.prod.outlook.com (2603:10a6:4:3f::43)
 by AM7PR08MB5382.eurprd08.prod.outlook.com (2603:10a6:20b:108::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Mon, 9 Nov
 2020 10:31:22 +0000
Received: from DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3f:cafe::96) by DB6PR0201CA0033.outlook.office365.com
 (2603:10a6:4:3f::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Mon, 9 Nov 2020 10:31:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT029.mail.protection.outlook.com (10.152.20.131) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Mon, 9 Nov 2020 10:31:21 +0000
Received: ("Tessian outbound d6c201accd3c:v71");
 Mon, 09 Nov 2020 10:31:21 +0000
Received: from 74f22f20d372.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6884EBC1-89D4-4972-9B63-6068B3548238.1; 
 Mon, 09 Nov 2020 10:31:11 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 74f22f20d372.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 09 Nov 2020 10:31:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1912.eurprd08.prod.outlook.com (2603:10a6:4:73::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Mon, 9 Nov
 2020 10:31:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3499.032; Mon, 9 Nov 2020
 10:31:07 +0000
X-Inumbo-ID: 44ce25a6-f2ea-478d-ae24-c6447e774c55
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpLhZKZMuech9rSN31ImuPnFc2gbkYROJlrKIvHDLwE=;
 b=aabSl3kMC2zwWLAi+qD9bWpadRx6qM6xdZtRyklzBTgNnImnbX5vOlhEHzU9x/kvFvIOLCkLy52J6uT7g9130K/TJQxaTxBAodH37Y1hX1ez32nrgdfUJ+XC2dZ+klxn2DnSpYuOmvbj9GG5zc6trTEMUM1W+nU2ksqpWWQivsE=
Received: from DB6PR0201CA0033.eurprd02.prod.outlook.com (2603:10a6:4:3f::43)
 by AM7PR08MB5382.eurprd08.prod.outlook.com (2603:10a6:20b:108::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Mon, 9 Nov
 2020 10:31:22 +0000
Received: from DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3f:cafe::96) by DB6PR0201CA0033.outlook.office365.com
 (2603:10a6:4:3f::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Mon, 9 Nov 2020 10:31:22 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT029.mail.protection.outlook.com (10.152.20.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Mon, 9 Nov 2020 10:31:21 +0000
Received: ("Tessian outbound d6c201accd3c:v71"); Mon, 09 Nov 2020 10:31:21 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0359730d9299b187
X-CR-MTA-TID: 64aa7808
Received: from 74f22f20d372.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 6884EBC1-89D4-4972-9B63-6068B3548238.1;
	Mon, 09 Nov 2020 10:31:11 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 74f22f20d372.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Mon, 09 Nov 2020 10:31:11 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RpXz6yb3yHr5S6oD8/onJWSWxbATVa0U29LH8fd60Vjo0cKBS3ochU4HsUafKuTiSdBr0PDc3GzMsUlYeHvDOoD6y0R4jNSUpFwHyBNYCgKw/EX6lypxUizFHF+kOHaBVF5kSEpjQGBuN3kwwRiOdp+zN6xLqmefzv7zhfs1j5YjVtkJ44fI8Dhef1VknPByc9j0HQEPAFX+EelxXEnKFH+1h4r7baQI3oFS+zGqfcfbWP+jXWtpZ1nzCOcsLitbFAcTWU/ENkq/COnPHdfZv32T+5ZzpWxr3EJNfP13Zg7xeAvHEcjNvADE2Z+YU7JbD6F2L791FRVPL/1WuAbjaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpLhZKZMuech9rSN31ImuPnFc2gbkYROJlrKIvHDLwE=;
 b=CtcHapp+d0poHDB1QIR5NuuQX+eHTnHG9lbRCw5hgm3oICP+bLPBJQoPMwz3psKoFgsx+8SE/AAckZfPaCV8+n4w+QZXgFQhbCvfqJ1KYMlvVsw+AvrwT/jmEWB97bsXOqQeyyo5d2Rj3fQ5BylhtcVoCpbQK8pHDkXb3MlR13QBa2JEMyZs1nPSPK3BcZskn+rOTyHPAUy+BCrrcvavsTZn22hd2sHHewHFABSRxzX1byyRU1jBWLPCT8LsdBoqMZcOZRfTi750lD8NwxqFAV/aCmAUW84/+01zpMmP6yBRJW4A6vSi5xLkDbiOupY75Yqo5v/EP/1uNS2QSNVMlw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpLhZKZMuech9rSN31ImuPnFc2gbkYROJlrKIvHDLwE=;
 b=aabSl3kMC2zwWLAi+qD9bWpadRx6qM6xdZtRyklzBTgNnImnbX5vOlhEHzU9x/kvFvIOLCkLy52J6uT7g9130K/TJQxaTxBAodH37Y1hX1ez32nrgdfUJ+XC2dZ+klxn2DnSpYuOmvbj9GG5zc6trTEMUM1W+nU2ksqpWWQivsE=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1912.eurprd08.prod.outlook.com (2603:10a6:4:73::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Mon, 9 Nov
 2020 10:31:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3499.032; Mon, 9 Nov 2020
 10:31:07 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Penny Zheng <Penny.Zheng@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>, Andre Przywara
	<Andre.Przywara@arm.com>, Wei Chen <Wei.Chen@arm.com>, Kaly Xin
	<Kaly.Xin@arm.com>, nd <nd@arm.com>
Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Topic: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Index: AQHWtnF2aLk0HNycw0Ccp4xeoLYWMam/mfqA
Date: Mon, 9 Nov 2020 10:31:07 +0000
Message-ID: <A6CE7288-A61F-4D99-966E-88A3F7A83EED@arm.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
In-Reply-To: <20201109082110.1133996-1-penny.zheng@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 070dfa4e-3fbb-4e6a-b57c-08d8849a9a28
x-ms-traffictypediagnostic: DB6PR0801MB1912:|AM7PR08MB5382:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM7PR08MB53823B90C84CFD106064821F9DEA0@AM7PR08MB5382.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 y6KbrMQjlUwDTGq0aT9qoPbIE1kcXRr1biGNoKOZTsxWFuJW69UzO7Tk0fig1ijXpz3Ce/RsSC137FzLcAdBBeS5GZBAr2VZ+2wd8Yjo+UQo/zVJT1XncRXs4qyXIgWlvPK/10Y4U6Ixd1oDsM4QQqkisY7dDu4TSHt7xKenobwyWKtWiEWswWB3ryVG8+n59qLh+xZIIorTLXkrz5e2bCBbHCLQ6LYwnEF0+G2p73R8bWZWyKkUdkdz9IQqba3qPWwm1igZjKv9jQiOk6FgIyRkFWXWuMLTMmcNGelizaILdB0kmgoNAq36SkNwV7AtgCTGkfnzXySTu5R8NSaDew==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(366004)(396003)(33656002)(66476007)(316002)(4326008)(37006003)(6512007)(6506007)(53546011)(66556008)(64756008)(66446008)(71200400001)(26005)(6486002)(8676002)(86362001)(66946007)(186003)(54906003)(6636002)(76116006)(91956017)(478600001)(8936002)(2906002)(5660300002)(2616005)(83380400001)(6862004)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 hfE3t5pGAXnp7UTinhRXniwk348Q9MnN0iVA6mEM+7a+nHLYSZ2GoULynKzRmFPdjQXNx3k3YeOzBJHtpxHGUgFkhE4VL7kQOwewKP9MXjHn5rKPstVunUhrwBTfUaxZUkbaNiJZ9Nw8c7xitXzKDyYNBGX+/D9alG8rYFx70Tta2o7ndC0Q6iOs27iQCG1KWhXUI6m4A+oFFiJhq5y8sUp5M87W2959qhPyUTKgd2uUtx/sfID2Gi5iN86KXIedaXA/4R1Fz3YjFvNHcVdzc03znNZIplkWQUVg/yINEFNVv1sZYECQN3tZ+hwZbNEN8pk6MbO7mteaC+2B2cFwXEz6DXEoSN86HzkJFtqLcLQ6CW+bCQClm0XUHuclDu21gngPifRQrxUJ0a4prpZqVBe+qoxDia9gRFDSoDnpGiJsX9aoSKHDy1UdUnf/iXVJIC8+tNs9nCoo0R/CqkdVvK4N+5m458kldXDPtwzmqcfzcPtfgabX850RqpAcv4xgDPURzabzFH/S+0+Xlmowu00zPxX8x1O/if6bit0J9cvtl79mXkAhLYfcnxbpbKSVk678u4Wnja8NkOnP/XW+SPSb3hNEbbwwTU8U6cfMhjfoBNjg1WXcgEawfbNu+Z1VoW2GKNvAUZEV6E9dX0/9Vw==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8B4A54530AB43C45874981F338CB77B3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1912
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d6806492-9388-4bea-a7e5-08d8849a91ca
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	obawSiu9lVHaZXmHxWWPWw5muuQmoYM0O7KNKY8cIOTXZ2VnvS2D2GbOMx3GzYhvrcjUPuYtzI/0Bb9pcinpaYcb689Y5qwtdKLe/mJ/mHV7TZ76ZGWEWvJgtw4TDEbbINctxtok+NnRhEumCol2sr7djkEIlKUrqxJElE0HuGlXv0iPVBRhPQMp5OEJwxct1PfaTubYj3dizaizwDLVQQopql/vIYousdvlwMusciquPpdJqF0ENe02dXCOw6Y2o//k/aKTSZmuPWOYNQTM9MZaIsBSvhI+l6i+a0kI01k+6wCPaRQKPRTUfmMd2sb9kgRzaH0LgGyLoGo/k+Ze0tCcEVJ2W0vfGiGC3XNVJhq/jRTA76eU9DeKYEVa9ASIV5kKS2Tom2fs7xVpreGW5w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(376002)(39860400002)(46966005)(336012)(70206006)(8936002)(53546011)(4326008)(6862004)(81166007)(54906003)(6512007)(2906002)(37006003)(5660300002)(6486002)(82740400003)(47076004)(186003)(2616005)(82310400003)(26005)(36756003)(478600001)(83380400001)(6636002)(86362001)(8676002)(356005)(316002)(70586007)(33656002)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Nov 2020 10:31:21.9044
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 070dfa4e-3fbb-4e6a-b57c-08d8849a9a28
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5382

Hi,

> On 9 Nov 2020, at 08:21, Penny Zheng <Penny.Zheng@arm.com> wrote:
> 
> A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
> might return a wrong value when the counter crosses a 32-bit boundary.
> 
> Until now, there has been no case in which Xen itself accesses
> CNTVCT_EL0, and handling that register is the guest OS's
> responsibility.
> 
> But there are several places where Xen reads CNTPCT, so a possible
> workaround is to perform the read twice and return one value or the
> other depending on whether a transition has taken place.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Regards
Bertrand

> ---
> docs/misc/arm/silicon-errata.txt |  1 +
> xen/arch/arm/Kconfig             | 18 ++++++++++++++++++
> xen/arch/arm/cpuerrata.c         |  8 ++++++++
> xen/arch/arm/vtimer.c            |  2 +-
> xen/include/asm-arm/cpuerrata.h  |  2 ++
> xen/include/asm-arm/cpufeature.h |  3 ++-
> xen/include/asm-arm/time.h       | 22 +++++++++++++++++++++-
> 7 files changed, 53 insertions(+), 3 deletions(-)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 1f18a9df58..552c4151d3 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -51,6 +51,7 @@ stable hypervisors.
> | ARM            | Cortex-A57      | #1319537        | N/A                     |
> | ARM            | Cortex-A72      | #1319367        | N/A                     |
> | ARM            | Cortex-A72      | #853709         | N/A                     |
> +| ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
> | ARM            | Cortex-A76      | #1165522        | N/A                     |
> | ARM            | Neoverse-N1     | #1165522        | N/A
> | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 2777388265..f938dd21bd 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -226,6 +226,24 @@ config ARM64_ERRATUM_834220
> 
> 	  If unsure, say Y.
> 
> +config ARM_ERRATUM_858921
> +	bool "Cortex-A73: 858921: Possible wrong read value for CNTVCT or CNTPCT"
> +	default y
> +	help
> +	  This option adds an alternative code sequence to work around ARM
> +	  erratum 858921 on Cortex-A73 (all versions).
> +
> +	  Affected Cortex-A73 might return wrong read value for CNTVCT or CNTPCT
> +	  when the counter crosses a 32bit boundary.
> +
> +	  The workaround involves performing the read twice, and to return
> +	  one or the other value depending on whether a transition has taken place.
> +	  Please note that this does not necessarily enable the workaround,
> +	  as it depends on the alternative framework, which will only patch
> +	  the kernel if an affected CPU is detected.
> +
> +	  If unsure, say Y.
> +
> endmenu
> 
> config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 6731d873e8..567911d380 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -469,6 +469,14 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>         .capability = ARM_SSBD,
>         .matches = has_ssbd_mitigation,
>     },
> +#endif
> +#ifdef CONFIG_ARM_ERRATUM_858921
> +    {
> +        /* Cortex-A73 (all versions) */
> +        .desc = "ARM erratum 858921",
> +        .capability = ARM_WORKAROUND_858921,
> +        MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
> +    },
> #endif
>     {
>         /* Neoverse r0p0 - r2p0 */
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 6d39fc944f..c2b27915c6 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -62,7 +62,7 @@ static void virt_timer_expired(void *data)
> 
> int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
> {
> -    d->arch.virt_timer_base.offset = READ_SYSREG64(CNTPCT_EL0);
> +    d->arch.virt_timer_base.offset = get_cycles();
>     d->time_offset.seconds = ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
>     do_div(d->time_offset.seconds, 1000000000);
> 
> diff --git a/xen/include/asm-arm/cpuerrata.h b/xen/include/asm-arm/cpuerr=
ata.h
> index 88ef3ca934..8d7e7b9375 100644
> --- a/xen/include/asm-arm/cpuerrata.h
> +++ b/xen/include/asm-arm/cpuerrata.h
> @@ -28,6 +28,8 @@ static inline bool check_workaround_##erratum(void)    =
         \
> CHECK_WORKAROUND_HELPER(766422, ARM32_WORKAROUND_766422, CONFIG_ARM_32)
> CHECK_WORKAROUND_HELPER(834220, ARM64_WORKAROUND_834220, CONFIG_ARM_64)
> CHECK_WORKAROUND_HELPER(ssbd, ARM_SSBD, CONFIG_ARM_SSBD)
> +CHECK_WORKAROUND_HELPER(858921, ARM_WORKAROUND_858921,
> +                        CONFIG_ARM_ERRATUM_858921)
> 
> #undef CHECK_WORKAROUND_HELPER
> 
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufe=
ature.h
> index 10878ead8a..016a9fe203 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -45,8 +45,9 @@
> #define ARM_SSBD 7
> #define ARM_SMCCC_1_1 8
> #define ARM64_WORKAROUND_AT_SPECULATE 9
> +#define ARM_WORKAROUND_858921 10
> 
> -#define ARM_NCAPS           10
> +#define ARM_NCAPS           11
> 
> #ifndef __ASSEMBLY__
> 
> diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
> index 9cb6f9b0b4..1b2c13614b 100644
> --- a/xen/include/asm-arm/time.h
> +++ b/xen/include/asm-arm/time.h
> @@ -3,6 +3,7 @@
> 
> #include <asm/sysregs.h>
> #include <asm/system.h>
> +#include <asm/cpuerrata.h>
> 
> #define DT_MATCH_TIMER                      \
>     DT_MATCH_COMPATIBLE("arm,armv7-timer"), \
> @@ -13,7 +14,26 @@ typedef uint64_t cycles_t;
> static inline cycles_t get_cycles (void)
> {
>         isb();
> -        return READ_SYSREG64(CNTPCT_EL0);
> +        /*
> +         * ARM_WORKAROUND_858921: Cortex-A73 (all versions) counter read
> +         * can return a wrong value when the counter crosses a 32bit boundary.
> +         */
> +        if ( !check_workaround_858921() )
> +            return READ_SYSREG64(CNTPCT_EL0);
> +        else
> +        {
> +            /*
> +             * A recommended workaround for erratum 858921 is to:
> +             *  1- Read twice CNTPCT.
> +             *  2- Compare bit[32] of the two read values.
> +             *      - If bit[32] is different, keep the old value.
> +             *      - If bit[32] is the same, keep the new value.
> +             */
> +            cycles_t old, new;
> +            old = READ_SYSREG64(CNTPCT_EL0);
> +            new = READ_SYSREG64(CNTPCT_EL0);
> +            return (((old ^ new) >> 32) & 1) ? old : new;
> +        }
> }
> 
> /* List of timer's IRQ */
> -- 
> 2.25.1
> 



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:32:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:32:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22326.48607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4Tm-0007GV-2w; Mon, 09 Nov 2020 10:32:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22326.48607; Mon, 09 Nov 2020 10:32:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4Tl-0007GO-Ub; Mon, 09 Nov 2020 10:32:53 +0000
Received: by outflank-mailman (input) for mailman id 22326;
 Mon, 09 Nov 2020 10:32:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jo29=EP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kc4Tk-0007FI-9g
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:32:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d83c79b1-074a-4775-97b5-b1d801e6f370;
 Mon, 09 Nov 2020 10:32:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2DD92AF2F;
 Mon,  9 Nov 2020 10:32:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jo29=EP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kc4Tk-0007FI-9g
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:32:52 +0000
X-Inumbo-ID: d83c79b1-074a-4775-97b5-b1d801e6f370
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d83c79b1-074a-4775-97b5-b1d801e6f370;
	Mon, 09 Nov 2020 10:32:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2DD92AF2F;
	Mon,  9 Nov 2020 10:32:50 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	airlied@linux.ie,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	robdclark@gmail.com,
	sean@poorly.run
Cc: linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	=?UTF-8?q?Christian=20K=C3=B6nig?= <christian.koenig@amd.com>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	Dave Airlie <airlied@redhat.com>,
	Lucas Stach <l.stach@pengutronix.de>,
	Russell King <linux+etnaviv@armlinux.org.uk>,
	Christian Gmeiner <christian.gmeiner@gmail.com>,
	Qiang Yu <yuq825@gmail.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Rob Herring <robh@kernel.org>,
	Tomeu Vizoso <tomeu.vizoso@collabora.com>,
	Steven Price <steven.price@arm.com>,
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Alex Deucher <alexander.deucher@amd.com>,
	Sandy Huang <hjc@rock-chips.com>,
	=?UTF-8?q?Heiko=20St=C3=BCbner?= <heiko@sntech.de>,
	Hans de Goede <hdegoede@redhat.com>,
	Eric Anholt <eric@anholt.net>,
	Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>,
	Melissa Wen <melissa.srw@gmail.com>,
	Haneen Mohammed <hamohammed.sa@gmail.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Emil Velikov <emil.velikov@collabora.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Arunpravin <apaneers@amd.com>,
	Huang Rui <ray.huang@amd.com>,
	Luben Tuikov <luben.tuikov@amd.com>,
	Madhav Chauhan <madhav.chauhan@amd.com>,
	Nirmoy Das <Nirmoy.Das@amd.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Sam Ravnborg <sam@ravnborg.org>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	etnaviv@lists.freedesktop.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 2/2] drm/mediatek: Use struct dma_buf_map in GEM vmap ops
Date: Mon,  9 Nov 2020 11:32:42 +0100
Message-Id: <20201109103242.19544-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201109103242.19544-1-tzimmermann@suse.de>
References: <20201109103242.19544-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fixes a build failure with mediatek.

This change was supposed to be part of commit 49a3f51dfeee ("drm/gem:
Use struct dma_buf_map in GEM vmap ops and convert GEM backends"), but
mediatek was forgotten.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 49a3f51dfeee ("drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends")
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Christian König <christian.koenig@amd.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Qiang Yu <yuq825@gmail.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Sandy Huang <hjc@rock-chips.com>
Cc: "Heiko Stübner" <heiko@sntech.de>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Eric Anholt <eric@anholt.net>
Cc: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
Cc: Melissa Wen <melissa.srw@gmail.com>
Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Emil Velikov <emil.velikov@collabora.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Arunpravin <apaneers@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: Madhav Chauhan <madhav.chauhan@amd.com>
Cc: Nirmoy Das <Nirmoy.Das@amd.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Cc: etnaviv@lists.freedesktop.org
Cc: lima@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: virtualization@lists.linux-foundation.org
Cc: spice-devel@lists.freedesktop.org
Cc: amd-gfx@lists.freedesktop.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-rockchip@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
---
 drivers/gpu/drm/mediatek/mtk_drm_gem.c | 20 ++++++++++++--------
 drivers/gpu/drm/mediatek/mtk_drm_gem.h |  4 ++--
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index cdd1a6e61564..28a2ee1336ef 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -240,23 +240,25 @@ struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
 	return &mtk_gem->base;
 }
 
-void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
+int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
-	struct sg_table *sgt;
+	struct sg_table *sgt = NULL;
 	unsigned int npages;
 
 	if (mtk_gem->kvaddr)
-		return mtk_gem->kvaddr;
+		goto out;
 
 	sgt = mtk_gem_prime_get_sg_table(obj);
 	if (IS_ERR(sgt))
-		return NULL;
+		return PTR_ERR(sgt);
 
 	npages = obj->size >> PAGE_SHIFT;
 	mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL);
-	if (!mtk_gem->pages)
-		goto out;
+	if (!mtk_gem->pages) {
+		kfree(sgt);
+		return -ENOMEM;
+	}
 
 	drm_prime_sg_to_page_addr_arrays(sgt, mtk_gem->pages, NULL, npages);
 
@@ -265,13 +267,15 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
 
 out:
 	kfree(sgt);
+	dma_buf_map_set_vaddr(map, mtk_gem->kvaddr);
 
-	return mtk_gem->kvaddr;
+	return 0;
 }
 
-void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
+	void *vaddr = map->vaddr;
 
 	if (!mtk_gem->pages)
 		return;
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
index ff9f976d9807..6da5ccb4b933 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
@@ -45,7 +45,7 @@ int mtk_drm_gem_mmap_buf(struct drm_gem_object *obj,
 struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
 			struct dma_buf_attachment *attach, struct sg_table *sg);
-void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 #endif
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:32:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:32:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22325.48594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4Tk-0007FU-P4; Mon, 09 Nov 2020 10:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22325.48594; Mon, 09 Nov 2020 10:32:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4Tk-0007FN-M5; Mon, 09 Nov 2020 10:32:52 +0000
Received: by outflank-mailman (input) for mailman id 22325;
 Mon, 09 Nov 2020 10:32:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jo29=EP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kc4Tk-0007FD-5T
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:32:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06e358e3-e257-4d15-ab94-04882179fdd5;
 Mon, 09 Nov 2020 10:32:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D074DAC1F;
 Mon,  9 Nov 2020 10:32:49 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jo29=EP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kc4Tk-0007FD-5T
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:32:52 +0000
X-Inumbo-ID: 06e358e3-e257-4d15-ab94-04882179fdd5
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 06e358e3-e257-4d15-ab94-04882179fdd5;
	Mon, 09 Nov 2020 10:32:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D074DAC1F;
	Mon,  9 Nov 2020 10:32:49 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: daniel@ffwll.ch,
	airlied@linux.ie,
	chunkuang.hu@kernel.org,
	p.zabel@pengutronix.de,
	robdclark@gmail.com,
	sean@poorly.run
Cc: linux-arm-msm@vger.kernel.org,
	freedreno@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	=?UTF-8?q?Christian=20K=C3=B6nig?= <christian.koenig@amd.com>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	Dave Airlie <airlied@redhat.com>,
	Lucas Stach <l.stach@pengutronix.de>,
	Russell King <linux+etnaviv@armlinux.org.uk>,
	Christian Gmeiner <christian.gmeiner@gmail.com>,
	Qiang Yu <yuq825@gmail.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Rob Herring <robh@kernel.org>,
	Tomeu Vizoso <tomeu.vizoso@collabora.com>,
	Steven Price <steven.price@arm.com>,
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Alex Deucher <alexander.deucher@amd.com>,
	Sandy Huang <hjc@rock-chips.com>,
	=?UTF-8?q?Heiko=20St=C3=BCbner?= <heiko@sntech.de>,
	Hans de Goede <hdegoede@redhat.com>,
	Eric Anholt <eric@anholt.net>,
	Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>,
	Melissa Wen <melissa.srw@gmail.com>,
	Haneen Mohammed <hamohammed.sa@gmail.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Emil Velikov <emil.velikov@collabora.com>,
	Luben Tuikov <luben.tuikov@amd.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Arunpravin <apaneers@amd.com>,
	Huang Rui <ray.huang@amd.com>,
	Madhav Chauhan <madhav.chauhan@amd.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Sam Ravnborg <sam@ravnborg.org>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Qinglang Miao <miaoqinglang@huawei.com>,
	etnaviv@lists.freedesktop.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 1/2] drm/msm: Use struct dma_buf_map in GEM vmap ops
Date: Mon,  9 Nov 2020 11:32:41 +0100
Message-Id: <20201109103242.19544-2-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201109103242.19544-1-tzimmermann@suse.de>
References: <20201109103242.19544-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fixes a build failure with msm.

This change was supposed to be part of commit 49a3f51dfeee ("drm/gem:
Use struct dma_buf_map in GEM vmap ops and convert GEM backends"), but
msm was forgotten.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 49a3f51dfeee ("drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends")
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Christian König <christian.koenig@amd.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Qiang Yu <yuq825@gmail.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Sandy Huang <hjc@rock-chips.com>
Cc: "Heiko Stübner" <heiko@sntech.de>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Eric Anholt <eric@anholt.net>
Cc: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
Cc: Melissa Wen <melissa.srw@gmail.com>
Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Emil Velikov <emil.velikov@collabora.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Arunpravin <apaneers@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Madhav Chauhan <madhav.chauhan@amd.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Qinglang Miao <miaoqinglang@huawei.com>
Cc: dri-devel@lists.freedesktop.org
Cc: etnaviv@lists.freedesktop.org
Cc: lima@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
Cc: virtualization@lists.linux-foundation.org
Cc: spice-devel@lists.freedesktop.org
Cc: amd-gfx@lists.freedesktop.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-rockchip@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
---
 drivers/gpu/drm/msm/msm_drv.h       |  4 ++--
 drivers/gpu/drm/msm/msm_gem_prime.c | 13 ++++++++++---
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index c45789f36e48..a6aef687bc6e 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -295,8 +295,8 @@ int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
 int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
 		uint32_t handle, uint64_t *offset);
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *msm_gem_prime_vmap(struct drm_gem_object *obj);
-void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int msm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 515ef80816a0..9880348a4dc7 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -22,12 +22,19 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, msm_obj->pages, npages);
 }
 
-void *msm_gem_prime_vmap(struct drm_gem_object *obj)
+int msm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return msm_gem_get_vaddr(obj);
+	void *vaddr;
+
+	vaddr = msm_gem_get_vaddr(obj);
+	if (IS_ERR(vaddr))
+		return PTR_ERR(vaddr);
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
-void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	msm_gem_put_vaddr(obj);
 }
-- 
2.29.2
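[For context, the calling convention the patch converts to can be sketched in standalone C. The struct and helpers below are simplified stand-ins for the kernel's struct dma_buf_map and dma_buf_map_set_vaddr(), not the real <linux/dma-buf-map.h> definitions: errors now travel through the int return value, and the mapping address through *map instead of a returned pointer.]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct dma_buf_map: it carries
 * the mapping address plus whether the memory is I/O memory. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* A vmap-style callback in the converted style: on failure return a
 * negative errno (what PTR_ERR() would have yielded), on success fill
 * in *map and return 0. */
static int demo_vmap(void *backing, struct dma_buf_map *map)
{
	if (!backing)
		return -ENOMEM;
	dma_buf_map_set_vaddr(map, backing);
	return 0;
}
```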



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:45:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:45:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22345.48619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4fn-0008Nn-DJ; Mon, 09 Nov 2020 10:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22345.48619; Mon, 09 Nov 2020 10:45:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4fn-0008Ng-9j; Mon, 09 Nov 2020 10:45:19 +0000
Received: by outflank-mailman (input) for mailman id 22345;
 Mon, 09 Nov 2020 10:45:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FQmy=EP=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1kc4fm-0008Nb-2Y
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:45:18 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 740bdb85-0db7-485f-bd18-b02b4fd1cc68;
 Mon, 09 Nov 2020 10:45:17 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-58-bZouaZUZO4upuVnTmkfAdA-1; Mon, 09 Nov 2020 05:45:13 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B25E2186840C;
 Mon,  9 Nov 2020 10:45:11 +0000 (UTC)
Received: from gondolin (ovpn-113-28.ams2.redhat.com [10.36.113.28])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D14D075125;
 Mon,  9 Nov 2020 10:44:57 +0000 (UTC)
X-Inumbo-ID: 740bdb85-0db7-485f-bd18-b02b4fd1cc68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604918717;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IvovXvGi96RLS5Xlxa3vEz79xbJKjbaUI17fwHflXc8=;
	b=XbsA+exQ94ZZYCdC/P/uCt2BeIesqg78v3cNrf70OAAc/ogSrJ7z5HCo8lmXw35X1qtW0c
	qZmBqKltzxrajZ5nVjfbHTN0ubdlMK7/p1W3KfIIY5qOgKtDp5SIQ32d8zmJkjbcXwSnjz
	tTy+ZUjt4pScsVB7l48JrdJ26640gD8=
X-MC-Unique: bZouaZUZO4upuVnTmkfAdA-1
Date: Mon, 9 Nov 2020 11:44:55 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Igor Mammedov <imammedo@redhat.com>, "Daniel P.
 Berrange" <berrange@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Stefan Berger <stefanb@linux.vnet.ibm.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, Paul
 Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>, Max Reitz
 <mreitz@redhat.com>, Richard Henderson <rth@twiddle.net>, David Hildenbrand
 <david@redhat.com>, Thomas Huth <thuth@redhat.com>, Halil Pasic
 <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
 Matthew Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, Mark Cave-Ayland
 <mark.cave-ayland@ilande.co.uk>, Artyom Tarasenko <atar4qemu@gmail.com>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 qemu-s390x@nongnu.org
Subject: Re: [PATCH 4/7] qom: Replace void* parameter with Property* on
 field getters/setters
Message-ID: <20201109114455.2eb0fabd.cohuck@redhat.com>
In-Reply-To: <20201104172512.2381656-5-ehabkost@redhat.com>
References: <20201104172512.2381656-1-ehabkost@redhat.com>
	<20201104172512.2381656-5-ehabkost@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=cohuck@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed,  4 Nov 2020 12:25:09 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> All field property getters and setters must interpret the fourth
> argument as Property*.  Change the function signature of field
> property getters and setters to indicate that.
> 
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  include/qom/field-property-internal.h |   8 +-
>  include/qom/field-property.h          |  26 ++++---
>  backends/tpm/tpm_util.c               |  11 ++-
>  hw/block/xen-block.c                  |   6 +-
>  hw/core/qdev-properties-system.c      |  86 +++++++++-------------
>  hw/s390x/css.c                        |   6 +-
>  hw/s390x/s390-pci-bus.c               |   6 +-
>  hw/vfio/pci-quirks.c                  |  10 +--
>  qom/property-types.c                  | 102 +++++++++-----------------
>  target/sparc/cpu.c                    |   4 +-
>  10 files changed, 105 insertions(+), 160 deletions(-)

Acked-by: Cornelia Huck <cohuck@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:48:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:48:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22351.48631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4iQ-00007Q-Qx; Mon, 09 Nov 2020 10:48:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22351.48631; Mon, 09 Nov 2020 10:48:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4iQ-00007F-Nz; Mon, 09 Nov 2020 10:48:02 +0000
Received: by outflank-mailman (input) for mailman id 22351;
 Mon, 09 Nov 2020 10:48:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FQmy=EP=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1kc4iQ-000074-0T
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:48:02 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 173e2d7d-7bd5-4887-a9ae-09e503516baa;
 Mon, 09 Nov 2020 10:48:00 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-276-4D-q5n4QNqulbb6oiE2qdQ-1; Mon, 09 Nov 2020 05:47:57 -0500
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 3B2481074653;
 Mon,  9 Nov 2020 10:47:55 +0000 (UTC)
Received: from gondolin (ovpn-113-28.ams2.redhat.com [10.36.113.28])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C3AF175125;
 Mon,  9 Nov 2020 10:47:43 +0000 (UTC)
X-Inumbo-ID: 173e2d7d-7bd5-4887-a9ae-09e503516baa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604918880;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5xhQO0ta8Zk7YojdbHZFj2vc+4a/lBAf0TXONvmSYBs=;
	b=Y57XVgQ30rv+NqboQG+U0Z76cL3+V5FWalC/3pedE5VdUg7KwlCtevkyksHjfJrelxxihS
	c1tfQC3NIzUHXZlwz8ZgmH2eSMed/qCH+Y9oGnJ3jsj+6iA2UWxiHf3wl2MJuAvSAL19pc
	EqPZrXejAGuTuO3eyXpJO3myRUbcFIQ=
X-MC-Unique: 4D-q5n4QNqulbb6oiE2qdQ-1
Date: Mon, 9 Nov 2020 11:47:40 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Igor Mammedov <imammedo@redhat.com>, "Daniel P.
 Berrange" <berrange@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Stefan Berger <stefanb@linux.vnet.ibm.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, Paul
 Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>, Max Reitz
 <mreitz@redhat.com>, Thomas Huth <thuth@redhat.com>, Halil Pasic
 <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
 Richard Henderson <rth@twiddle.net>, David Hildenbrand <david@redhat.com>,
 Matthew Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-block@nongnu.org, qemu-s390x@nongnu.org
Subject: Re: [PATCH 6/7] qom: Add FIELD_PTR, a type-safe wrapper for
 object_field_prop_ptr()
Message-ID: <20201109114740.19e727d9.cohuck@redhat.com>
In-Reply-To: <20201104172512.2381656-7-ehabkost@redhat.com>
References: <20201104172512.2381656-1-ehabkost@redhat.com>
	<20201104172512.2381656-7-ehabkost@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=cohuck@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Wed,  4 Nov 2020 12:25:11 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> Introduce a FIELD_PTR macro that will ensure the size of the area
> we are accessing has the correct size, and will return a pointer
> of the correct type.
> 
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  include/qom/field-property.h     | 21 ++++++++++-
>  backends/tpm/tpm_util.c          |  6 ++--
>  hw/block/xen-block.c             |  4 +--
>  hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
>  hw/s390x/css.c                   |  4 +--
>  hw/s390x/s390-pci-bus.c          |  4 +--
>  hw/vfio/pci-quirks.c             |  4 +--
>  qom/field-property.c             |  3 +-
>  qom/property-types.c             | 60 +++++++++++++++++---------------
>  9 files changed, 89 insertions(+), 67 deletions(-)

Acked-by: Cornelia Huck <cohuck@redhat.com>



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:53:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:53:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22361.48643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4np-00011g-D2; Mon, 09 Nov 2020 10:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22361.48643; Mon, 09 Nov 2020 10:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4np-00011Z-AE; Mon, 09 Nov 2020 10:53:37 +0000
Received: by outflank-mailman (input) for mailman id 22361;
 Mon, 09 Nov 2020 10:53:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kc4no-00011U-7P
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:53:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61f1e0b8-92dd-477f-babd-6d197ab52dc3;
 Mon, 09 Nov 2020 10:53:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE761AD6D;
 Mon,  9 Nov 2020 10:53:33 +0000 (UTC)
X-Inumbo-ID: 61f1e0b8-92dd-477f-babd-6d197ab52dc3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604919214;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=dIi1qxwUZQ0BiQH3jv2IR7dBQoVEGhHSp8IsqYfVSKM=;
	b=fQfb6yJenalkGZ055BOtocobl1eskuM76KJd0zx+u3QDvnNu/oZNxGwfvYxTcK1LwT71db
	qDga/k39vGWF8dP53fODKKX9YNOSKrH9YKBL7ZIYSG38Mx2SoRx3W5tLcvQB5VQmXQK8Ay
	fQ3mbQ6HZibD2Tr+u6AuIArQVOL1Bhg=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/CPUID: don't use UB shift when library is built as 32-bit
Message-ID: <b579bde8-6cd2-0407-f098-c44e1c9ff814@suse.com>
Date: Mon, 9 Nov 2020 11:53:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

At least the insn emulator test harness will continue to be buildable
(and ought to remain usable) as a 32-bit binary as well. (Right now the
CPU policy test harness is, too, but there it may be less relevant to
keep it functional, just like e.g. we don't support fuzzing the insn
emulator in 32-bit mode.) Hence the library code needs to cope with
being built 32-bit, where unsigned long is only 32 bits wide and
shifting 1ul by 32 or more is undefined behaviour.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -165,7 +165,7 @@ void x86_cpuid_policy_fill_native(struct
         for ( i = 2; i < min_t(unsigned int, 63,
                                ARRAY_SIZE(p->xstate.raw)); ++i )
         {
-            if ( xstates & (1ul << i) )
+            if ( xstates & (1ull << i) )
                 cpuid_count_leaf(0xd, i, &p->xstate.raw[i]);
         }
     }
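[To illustrate why the one-character change matters: on a 32-bit build
unsigned long is 32 bits wide, so 1ul << i is undefined for the loop's
upper iterations (i can reach 62), while 1ull guarantees a 64-bit shift.
A minimal standalone sketch of the fixed form; the helper name is mine,
not from xen/lib/x86:]

```c
#include <assert.h>
#include <stdint.h>

/* Check bit i of a 64-bit xstates mask.  Using 1ull keeps the shift
 * well-defined for any i in [0, 63] even where unsigned long is only
 * 32 bits wide (a 32-bit build, where 1ul << 40 would be undefined
 * behaviour). */
static int xstate_bit_set(uint64_t xstates, unsigned int i)
{
	return (xstates & (1ull << i)) != 0;
}
```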


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:54:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22365.48655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4oM-00017q-Mu; Mon, 09 Nov 2020 10:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22365.48655; Mon, 09 Nov 2020 10:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4oM-00017j-Iz; Mon, 09 Nov 2020 10:54:10 +0000
Received: by outflank-mailman (input) for mailman id 22365;
 Mon, 09 Nov 2020 10:54:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kc4oL-00017c-5L
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:54:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b0b5df8-e315-4844-aceb-5b5334fa445d;
 Mon, 09 Nov 2020 10:54:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9BF44AD77;
 Mon,  9 Nov 2020 10:54:06 +0000 (UTC)
X-Inumbo-ID: 6b0b5df8-e315-4844-aceb-5b5334fa445d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604919246;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=J+KNAgfWHHkbYSPigebmygyg//op4KQC2wP1eeaKmCo=;
	b=U8/5xU2Ne6GXgfhWbNxl2mAHoynn+iHR0mSLFBKZE/9aFDaKiutvu8L/0oe6NaRi12wx9Z
	YdRh+FoA2cT5gLOU32X+QRksStUJYGYsZYetyxfhPNWRXqCcJcMt92rRjZbKrjQDQ81gIJ
	m0DN05hybDtwWuD3HdeCUHEeM8AP8Cs=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/CPUID: suppress IOMMU related hypervisor leaf data
Message-ID: <c640463a-d088-aaf5-0c3c-d82b1c98ee4f@suse.com>
Date: Mon, 9 Nov 2020 11:54:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Now that the IOMMU for guests can't be enabled "on demand" anymore,
there's also no reason to expose the related CPUID bit "just in case".

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1050,7 +1050,8 @@ void cpuid_hypervisor_leaves(const struc
          * Indicate that memory mapped from other domains (either grants or
          * foreign pages) has valid IOMMU entries.
          */
-        res->a |= XEN_HVM_CPUID_IOMMU_MAPPINGS;
+        if ( is_iommu_enabled(d) )
+            res->a |= XEN_HVM_CPUID_IOMMU_MAPPINGS;
 
         /* Indicate presence of vcpu id and set it in ebx */
         res->a |= XEN_HVM_CPUID_VCPU_ID_PRESENT;
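[A minimal model of the gating this hunk introduces: the IOMMU-mappings
feature bit becomes conditional on the domain's IOMMU state, while the
vcpu-id bit stays unconditional. The flag values and function names
below are illustrative stand-ins, not copied from Xen's public headers:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag values; see xen/include/public/arch-x86/cpuid.h
 * for the real XEN_HVM_CPUID_* definitions. */
#define DEMO_CPUID_IOMMU_MAPPINGS  (1u << 2)
#define DEMO_CPUID_VCPU_ID_PRESENT (1u << 3)

/* Build the hypervisor-leaf feature word for a domain: advertise the
 * IOMMU-mappings bit only when the domain's IOMMU is enabled. */
static uint32_t demo_hv_leaf_eax(bool iommu_enabled)
{
	uint32_t a = 0;

	if (iommu_enabled)
		a |= DEMO_CPUID_IOMMU_MAPPINGS;
	a |= DEMO_CPUID_VCPU_ID_PRESENT;
	return a;
}
```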


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:54:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22368.48667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4oc-0001Cy-W1; Mon, 09 Nov 2020 10:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22368.48667; Mon, 09 Nov 2020 10:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4oc-0001Cr-ST; Mon, 09 Nov 2020 10:54:26 +0000
Received: by outflank-mailman (input) for mailman id 22368;
 Mon, 09 Nov 2020 10:54:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2NS2=EP=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kc4ob-0001CW-Km
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:54:25 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0207ecf-dd8f-4568-b69b-aab8e27ed41b;
 Mon, 09 Nov 2020 10:54:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604919263;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=K0393i6vKcAjDPstlMWXCVBlFDlN8IgGJ1fo3abEXHc=;
  b=X/rLMMJIcIQJx83i37ZqvgfuAiM7L4BizObwHWLA4l2Z1N81vzFmSTHz
   5qfgv48G+zjBms8t6x2XKPIpiuShhFWtZDYXVeSoGVwmwRdUkxaVTy3G3
   5dqnBfI2Amgn1ZJmWBUCLOP2xJvXkHQ4N2GSc6MajKnoDsyxPK7xwY4Cn
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ctql2X4OGiMxAzz13OyWDhYIQkPJMF5wMecpH0QPoeoRuiFG/O3FIlWkP+7OFLvoiuiRSCf0gp
 Fna0lvPCEyGtzpj1tZQEUxsDZOhzWMEsgV+HMuBK6VcSLBPkeM8cvozfLv//0J0Rwwoze81F+2
 nSxlu3n+tVeiosxy5bIr4Hmfm9FIm2WMicdMcJVt92W4u/M9XqSjmSC4BJTJcQ4TUsnOzQvPrQ
 TdKRZlvfvfUSNO64LqZFZb6uTjuabauGpgvOJ4qw1FhWo5fatqYWKaK4FjwnW3Us4c5pfJb4mh
 3/Q=
X-SBRS: None
X-MesageID: 30713828
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,463,1596513600"; 
   d="scan'208";a="30713828"
Subject: Re: [PATCH] x86/CPUID: don't use UB shift when library is built as
 32-bit
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <b579bde8-6cd2-0407-f098-c44e1c9ff814@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <21f2020b-f74b-3726-69ff-fa54f4a209c0@citrix.com>
Date: Mon, 9 Nov 2020 10:54:17 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b579bde8-6cd2-0407-f098-c44e1c9ff814@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 09/11/2020 10:53, Jan Beulich wrote:
> At least the insn emulator test harness will continue to be buildable
> (and ought to continue to be usable) also as a 32-bit binary. (Right now
> the CPU policy test harness is, too, but there it may be less relevant
> to keep it functional, just like e.g. we don't support fuzzing the insn
> emulator in 32-bit mode.) Hence the library code needs to cope with
> this.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
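The undefined behaviour in question: C leaves a shift undefined when the count is greater than or equal to the width of the (promoted) left operand, so an expression like `1ul << bit` is fine on LP64 builds but UB for bit >= 32 once `unsigned long` is 32 bits wide, as in the 32-bit test harness build. A minimal sketch of the portable form (the function name is illustrative, not the actual library code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Shifting by >= the width of the (promoted) left operand is undefined
 * behaviour in C.  On a 32-bit build, unsigned long is 32 bits wide, so
 * "1ul << bit" would be UB for bit >= 32.  Casting to a type with a
 * fixed 64-bit width keeps the expression defined on both 32- and
 * 64-bit builds.
 */
static uint64_t feature_mask(unsigned int bit)
{
    assert(bit < 64);            /* defined for 0 <= bit <= 63 */
    return (uint64_t)1 << bit;
}
```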


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 10:54:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 10:54:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22374.48679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4p6-0001LL-90; Mon, 09 Nov 2020 10:54:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22374.48679; Mon, 09 Nov 2020 10:54:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc4p6-0001LD-5s; Mon, 09 Nov 2020 10:54:56 +0000
Received: by outflank-mailman (input) for mailman id 22374;
 Mon, 09 Nov 2020 10:54:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2NS2=EP=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kc4p5-0001L3-0i
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 10:54:55 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b021e9b6-823d-4a40-b579-9e6f8ed9c1d9;
 Mon, 09 Nov 2020 10:54:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604919294;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=DsODFVnxWQ5oJjDWjhOlPQGXBIFr2Rz/lhW84/4NfHk=;
  b=Dkez3Z/Wl1Ip/Fj3OCxMZGQJkfrbDcNyFsfYgTl7Z1c9rfrxlJv1y0p6
   4X8AZRxo4+IOzVBQt+p2cNPRGYdZyuBD7BFGAsSmVYfvSeKD96gIUZfxy
   ONcCwDzEBKTedRKnwKJfGZDwXpvH2KTfW26zXRNie3EmVv8JZtB2pchVh
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: tkidY/lwLE/119KhANGf2f5ppRW3wYyjl9PGLlz8N5tCmf04SdS8sLJefn0juoMOpb8JZuhy5x
 t9S9mDX2Crq7hTjVgZCk+46t0y9bSfp9EOPoDWdSPrwiCW1bKw8RdGt5rX/U6jBTplqvGEdvlR
 AJ8CoS/Q9bFAqPGpSLfnvEo+grLMYt9CpveP3kppREgxeIyii5IZZrdD9OCz8xNrj77jFR58qb
 idmFts0ZzhNOxdaGe/pF3b79dswyY0sW4C8CZINZ2b4TaTQTRQ9eWdJGECeEV+KyaoT/+8bDVi
 al4=
X-SBRS: None
X-MesageID: 31850263
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,463,1596513600"; 
   d="scan'208";a="31850263"
Subject: Re: [PATCH] x86/CPUID: suppress IOMMU related hypervisor leaf data
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <c640463a-d088-aaf5-0c3c-d82b1c98ee4f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <84272c0c-f9af-6853-e997-4f1bd11be1cb@citrix.com>
Date: Mon, 9 Nov 2020 10:54:48 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c640463a-d088-aaf5-0c3c-d82b1c98ee4f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 09/11/2020 10:54, Jan Beulich wrote:
> Now that the IOMMU for guests can't be enabled "on demand" anymore,
> there's also no reason to expose the related CPUID bit "just in case".
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 11:30:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 11:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22402.48697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5NR-0004pe-8L; Mon, 09 Nov 2020 11:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22402.48697; Mon, 09 Nov 2020 11:30:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5NR-0004pX-5K; Mon, 09 Nov 2020 11:30:25 +0000
Received: by outflank-mailman (input) for mailman id 22402;
 Mon, 09 Nov 2020 11:30:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RfH0=EP=redhat.com=mst@srs-us1.protection.inumbo.net>)
 id 1kc5NP-0004pS-P1
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 11:30:23 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6fb7b241-17ff-490c-82b9-b0dcd1ae93b5;
 Mon, 09 Nov 2020 11:30:21 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-495-r8JCfLz1M1mLEP_3X3BFuQ-1; Mon, 09 Nov 2020 06:30:20 -0500
Received: by mail-wm1-f71.google.com with SMTP id k128so778437wme.7
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 03:30:20 -0800 (PST)
Received: from redhat.com (bzq-79-181-34-244.red.bezeqint.net. [79.181.34.244])
 by smtp.gmail.com with ESMTPSA id 35sm10972366wro.71.2020.11.09.03.30.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 03:30:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604921421;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qJPHU79Z9aFII+RXhVTkpUcHO9yLNqjEBCGqyvXY310=;
	b=V995cmrSEl54zDxw55HTc5LkSVBjkP2bV0TQekN9COsw8YV5x1PRZoXTiOQWYGIU7qeB/F
	S4Jatc89SsaTrYA2XyRpHprinR8Gwv4zs3Vt/5nvJ4nLkVnzBzM/M3xEYu8X8hmCleZFtk
	XB3dlRLouPrOoTfOtSq1W/8EfJ7ljA0=
X-MC-Unique: r8JCfLz1M1mLEP_3X3BFuQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=qJPHU79Z9aFII+RXhVTkpUcHO9yLNqjEBCGqyvXY310=;
        b=PifFVATXj5hQjm5ltdmK7YCX6TH3cMPn7zmuH1EDoLyxUWOU7dsEW6q9/khvSKkpgl
         qfNpaqiN5rQf0NGW4r2wMmuA8gHNXRG1pqyvvFSJanovwQdhwG1KJ7U30yfErzxW+i8V
         8QpA4//KjPYys3ii23TUPxaEVy12J+VjuRb5o6IOpJ888yCJ3L5b/XGECE35QaCijZn2
         FB0kpQiW+NZCdSzjrfWW+H6F+Aru4G+KvcVx/NDReuE4GrUojr5tmFKSXsEbXHNV3Wzx
         9epchv4J3luEHdVT3OMcBBKzZlcjDTNzbEwSyqcbQ5leMRPuVC5n6y4e5kiLTA85M6c4
         93dw==
X-Gm-Message-State: AOAM533ee9up/tAw/5XmMVc9emn0Th3tYCkI997lHIc71+WGheaItELd
	8E4CCxkn68h66CkUITTxNCn1wedDx9DBAACoPU/ZjvaBlkVg63J50GL+CW0PIeduTqv7hcO1pob
	T5hkcfNGCSiohfbyfV8qIwLB0Nm8=
X-Received: by 2002:a05:6000:1005:: with SMTP id a5mr10320355wrx.425.1604921419107;
        Mon, 09 Nov 2020 03:30:19 -0800 (PST)
X-Google-Smtp-Source: ABdhPJxfMfmWa9Cv9HNLSs0dLMrckBnIhQ+LX5gsNBfqeNBoHRKN7zV/cjWgtINNh6h724r5DlDOEA==
X-Received: by 2002:a05:6000:1005:: with SMTP id a5mr10320328wrx.425.1604921418961;
        Mon, 09 Nov 2020 03:30:18 -0800 (PST)
Date: Mon, 9 Nov 2020 06:30:10 -0500
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 23/24] virtio-blk: remove a spurious call to
 revalidate_disk_size
Message-ID: <20201109063004-mutt-send-email-mst@kernel.org>
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-24-hch@lst.de>
MIME-Version: 1.0
In-Reply-To: <20201106190337.1973127-24-hch@lst.de>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Nov 06, 2020 at 08:03:35PM +0100, Christoph Hellwig wrote:
> revalidate_disk_size just updates the block device size from the disk
> size.  Thus calling it from virtblk_update_cache_mode doesn't actually do
> anything.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  drivers/block/virtio_blk.c | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 3e812b4c32e669..145606dc52db1e 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -598,7 +598,6 @@ static void virtblk_update_cache_mode(struct virtio_device *vdev)
>  	struct virtio_blk *vblk = vdev->priv;
>  
>  	blk_queue_write_cache(vblk->disk->queue, writeback, false);
> -	revalidate_disk_size(vblk->disk, true);
>  }
>  
>  static const char *const virtblk_cache_types[] = {
> -- 
> 2.28.0
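The reasoning behind the removal: revalidate_disk_size only propagates the gendisk capacity into the block device size, and updating the cache mode never touches the capacity, so calling it there is a no-op. A toy model of that relationship (names and structures simplified, not the actual kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel structures involved. */
struct gendisk { uint64_t capacity; bool writeback; };
struct block_device { uint64_t size; };

/* Copies the gendisk capacity into the block device size -- the only
 * thing the real revalidate_disk_size() does that matters here. */
static void revalidate_disk_size(struct block_device *bdev,
                                 const struct gendisk *disk)
{
    bdev->size = disk->capacity;
}

/* Changing the cache mode leaves the capacity untouched, so a
 * follow-up revalidate has nothing to propagate. */
static void update_cache_mode(struct gendisk *disk, bool writeback)
{
    disk->writeback = writeback;
}
```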



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 11:49:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 11:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22414.48709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5fe-0005xg-0D; Mon, 09 Nov 2020 11:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22414.48709; Mon, 09 Nov 2020 11:49:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5fd-0005xZ-Rz; Mon, 09 Nov 2020 11:49:13 +0000
Received: by outflank-mailman (input) for mailman id 22414;
 Mon, 09 Nov 2020 11:49:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kc5fc-0005xU-AG
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 11:49:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10e06f6b-30d2-4f88-9b8f-afa2b7f8f2f1;
 Mon, 09 Nov 2020 11:49:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc5fY-0003TZ-Gf; Mon, 09 Nov 2020 11:49:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc5fY-0003gG-5i; Mon, 09 Nov 2020 11:49:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kc5fY-0003yS-5H; Mon, 09 Nov 2020 11:49:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/x1rpaU6tbeYwI9DOlTYZ3d7Gp62KolBsrlwC7QzDKo=; b=toqiTjvkwKUJURGYO9fk4e94rd
	PujRZzuwK4/9cL+Lldz1Fl4ZD/dxN7WIOGUTwzRl9wkHNJIt+7e9G6Nl3Di+1kiJqtV4+HBa8doIM
	eYFzwE/n+FqsoZ6fJf1ZWZetInoF6gtuzTOx/L98viDMi1pKCdjR5JVs7HshhFy8M8Mw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156575-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156575: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3493c36f0371777c62d1d72b205b0eb6117e2156
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 11:49:08 +0000

flight 156575 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156575/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1   8 xen-boot                   fail pass in 156552

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 156552 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 156552 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3493c36f0371777c62d1d72b205b0eb6117e2156
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   81 days
Failing since        152659  2020-08-21 14:07:39 Z   79 days  175 attempts
Testing same since   156536  2020-11-07 03:17:03 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 61894 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 11:58:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 11:58:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22422.48724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5ok-0006uJ-TJ; Mon, 09 Nov 2020 11:58:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22422.48724; Mon, 09 Nov 2020 11:58:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5ok-0006uC-QE; Mon, 09 Nov 2020 11:58:38 +0000
Received: by outflank-mailman (input) for mailman id 22422;
 Mon, 09 Nov 2020 11:58:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kc5oj-0006u7-Uz
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 11:58:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08cfcfc7-c043-4b0c-b0dc-f6b3ac316f51;
 Mon, 09 Nov 2020 11:58:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4995ABDE;
 Mon,  9 Nov 2020 11:58:34 +0000 (UTC)
X-Inumbo-ID: 08cfcfc7-c043-4b0c-b0dc-f6b3ac316f51
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604923115;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=z6SX85wqVND53dpcJa/GHF2mK9WMNe5pyTVY7T8E/fo=;
	b=tdRkG1H7+zTt48OA/wek3SfbxKStGo5pDqrWxhWm+YU2toS9paCGJDJ+J0E52Ybp6Wwccw
	I0hirOCmHLnIczjTw+FqN33/Ruj4mEQAFLzMzO2ik5GaLZrZe2SztzrqPiQRsqqYhB9MMM
	ynf6hFh7EtZhHJHcw4kgIDeh4KqEx0o=
Subject: Re: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109064128.3908-1-jgross@suse.com>
 <20201109064128.3908-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <df9737a4-f90a-0498-b67d-ce254b835287@suse.com>
Date: Mon, 9 Nov 2020 12:58:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109064128.3908-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 07:41, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event and removing the need for disabling interrupts for
> taking the lock.
> 
> The lock is needed for avoiding races between event channel state
> changes (creation, closing, binding) against normal operations (set
> pending, [un]masking, priority changes).
> 
> Use a rwlock, but with some restrictions:
> 
> - normal operations use read_trylock(), in case of not obtaining the
>   lock the operation is omitted or a default state is returned
> 
> - closing an event channel is using write_lock(), with ASSERT()ing that
>   the lock is taken as writer only when the state of the event channel
>   is either before or after the locked region appropriate (either free
>   or unbound).

This has become stale, and may have been incomplete already before:
- Normal operations now may use two different approaches. Which one
is to be used when should be written down here.
- write_lock() use goes beyond closing.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
>                  pirq = domain_irq_to_pirq(d, irq);
>                  info = pirq_info(d, pirq);
>                  evtchn = evtchn_from_port(d, info->evtchn);
> -                local_irq_disable();
> -                if ( spin_trylock(&evtchn->lock) )
> +                if ( evtchn_read_trylock(evtchn) )
>                  {
>                      pending = evtchn_is_pending(d, evtchn);
>                      masked = evtchn_is_masked(d, evtchn);
> -                    spin_unlock(&evtchn->lock);
> +                    evtchn_read_unlock(evtchn);
>                  }
> -                local_irq_enable();
>                  printk("d%d:%3d(%c%c%c)%c",
>                         d->domain_id, pirq, "-P?"[pending],
>                         "-M?"[masked], info->masked ? 'M' : '-',

Using trylock here has a reason different from that in sending
functions, aiui. Please say so in the description, to justify
exposure of evtchn_read_lock().

> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>      if ( port_is_valid(guest, port) )
>      {
>          struct evtchn *chn = evtchn_from_port(guest, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&chn->lock, flags);
> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> -        spin_unlock_irqrestore(&chn->lock, flags);
> +        if ( evtchn_read_trylock(chn) )
> +        {
> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> +            evtchn_read_unlock(chn);
> +        }

Does this need trylock?

> @@ -1068,15 +1088,16 @@ int evtchn_unmask(unsigned int port)
>  {
>      struct domain *d = current->domain;
>      struct evtchn *evtchn;
> -    unsigned long flags;
>  
>      if ( unlikely(!port_is_valid(d, port)) )
>          return -EINVAL;
>  
>      evtchn = evtchn_from_port(d, port);
> -    spin_lock_irqsave(&evtchn->lock, flags);
> -    evtchn_port_unmask(d, evtchn);
> -    spin_unlock_irqrestore(&evtchn->lock, flags);
> +    if ( evtchn_read_trylock(evtchn) )
> +    {
> +        evtchn_port_unmask(d, evtchn);
> +        evtchn_read_unlock(evtchn);
> +    }

I think this one could as well use plain read_lock().

> @@ -234,12 +244,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
>  static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
>  {
>      struct evtchn *evtchn = evtchn_from_port(d, port);
> -    bool rc;
> -    unsigned long flags;
> +    bool rc = true;
>  
> -    spin_lock_irqsave(&evtchn->lock, flags);
> -    rc = evtchn_is_masked(d, evtchn);
> -    spin_unlock_irqrestore(&evtchn->lock, flags);
> +    if ( evtchn_read_trylock(evtchn) )
> +    {
> +        rc = evtchn_is_masked(d, evtchn);
> +        evtchn_read_unlock(evtchn);
> +    }
>  
>      return rc;
>  }
> @@ -252,12 +263,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>      if ( port_is_valid(d, port) )
>      {
>          struct evtchn *evtchn = evtchn_from_port(d, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&evtchn->lock, flags);
> -        if ( evtchn_usable(evtchn) )
> -            rc = evtchn_is_pending(d, evtchn);
> -        spin_unlock_irqrestore(&evtchn->lock, flags);
> +        if ( evtchn_read_trylock(evtchn) )
> +        {
> +            if ( evtchn_usable(evtchn) )
> +                rc = evtchn_is_pending(d, evtchn);
> +            evtchn_read_unlock(evtchn);
> +        }
>      }
>  
>      return rc;

At least for the latter I suppose it should also be plain read_lock().
We ought to keep the exceptions to where they're actually needed, I
think.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:00:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22432.48736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5qg-0007of-Lc; Mon, 09 Nov 2020 12:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22432.48736; Mon, 09 Nov 2020 12:00:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5qg-0007oY-If; Mon, 09 Nov 2020 12:00:38 +0000
Received: by outflank-mailman (input) for mailman id 22432;
 Mon, 09 Nov 2020 12:00:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kc5qf-0007oS-DG
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:00:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b195b27e-0cb2-406d-8028-a2793837a169;
 Mon, 09 Nov 2020 12:00:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A2F3EABF4;
 Mon,  9 Nov 2020 12:00:35 +0000 (UTC)
X-Inumbo-ID: b195b27e-0cb2-406d-8028-a2793837a169
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604923235;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6O/AVKJogNb2qz7BW4zQQMiP0+DrfWq/tB9FKD/VZn4=;
	b=HhOXk9vFP7qqyNkvuk2EoY2KXhHEGANfR/4XNwhsWyb1CkWpbV7TSu1kBwPfiFOsZ1Gaet
	MmdamZXoxSADwL201C3bGfLZxCC6TbNKYFwq/EPiSVLHBFDRccW/tj5XieU1rBEfkPXDvp
	azZqvk7dctoD8w+ASTrFtyrl90M6LAw=
Subject: Re: [PATCH v5 0/2] XSA-343 followup patches
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20201109064128.3908-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e9326609-0808-8f0b-6cb9-3ed4401b9f1d@suse.com>
Date: Mon, 9 Nov 2020 13:00:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109064128.3908-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 07:41, Juergen Gross wrote:
> The patches for XSA-343 produced some fallout, especially the event
> channel locking has shown to be problematic.
> 
> Patch 1 is targeting fifo event channels for avoiding any races for the
> case that the fifo queue has been changed for a specific event channel.
> 
> The second patch is modifying the per event channel locking scheme in
> order to avoid deadlocks and problems due to the event channel lock
> having been changed to require IRQs off by the XSA-343 patches.
> 
> Changes in V5:
> - moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
> - used normal read_lock() in some cases (Jan Beulich)
> 
> Changes in V4:
> - switched to real rwlock
> 
> Changes in V3:
> - addressed comments
> 
> Juergen Gross (2):
>   xen/events: access last_priority and last_vcpu_id together
>   xen/evtchn: rework per event channel lock

Didn't you mean to add a 3rd patch here to drop the 2nd call to
xsm_evtchn_send() again?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:04:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:04:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22442.48748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5uS-000815-5T; Mon, 09 Nov 2020 12:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22442.48748; Mon, 09 Nov 2020 12:04:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc5uS-00080y-2d; Mon, 09 Nov 2020 12:04:32 +0000
Received: by outflank-mailman (input) for mailman id 22442;
 Mon, 09 Nov 2020 12:04:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wB/=EP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kc5uQ-00080t-UQ
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:04:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3aa2d892-0ce4-4532-8e64-ea4ae77dfc23;
 Mon, 09 Nov 2020 12:04:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kc5uP-0003ow-PA; Mon, 09 Nov 2020 12:04:29 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kc5uP-0004Gd-Ha; Mon, 09 Nov 2020 12:04:29 +0000
X-Inumbo-ID: 3aa2d892-0ce4-4532-8e64-ea4ae77dfc23
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=UdtPmwE9gqG0N+a8baZ0Ckhctdo1DjlL50gRRn6LQIk=; b=gPnVfG4m8BamYNYvctZijbMowS
	xHSDKehEGPzpMZHG9y/CWOczPtJbbMZP+q9z7pcvQjCTagQTYgRg3NCdkAcV0XhYvzguE9MS6TsuW
	kQnQpewrA9NCpKyr2UNbN3JaMNl0KKlklDaIbf/sm7EGnSl8BLmcEtRwfJKa/qb/ST94=;
Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Andre.Przywara@arm.com, Bertrand.Marquis@arm.com, Wei.Chen@arm.com,
 Kaly.Xin@arm.com, nd@arm.com
References: <20201109082110.1133996-1-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
Date: Mon, 9 Nov 2020 12:04:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201109082110.1133996-1-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 09/11/2020 08:21, Penny Zheng wrote:
> A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
> might return a wrong value when the counter crosses a 32-bit boundary.
> 
> Until now there has been no case of Xen itself accessing CNTVCT_EL0,
> and handling that register should in any case be the guest OS's
> responsibility.
> 
> For CNTPCT, however, there are several cases in Xen that read it, so
> the workaround is to perform the read twice and return one value or
> the other depending on whether a transition has taken place.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>
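
For reference, the double-read trick described in the quoted commit
message can be modelled in plain C. The helper below is only a sketch of
the selection logic (the name pick_stable_count and the host-side shape
are illustrative, not Xen's actual implementation): given two
back-to-back reads of the counter, it returns the value whose upper
32 bits can be trusted.

```c
#include <stdint.h>

/*
 * Sketch of the erratum 858921 selection logic (illustrative, not the
 * actual Xen code). Given two back-to-back reads of CNTPCT, if bit 32
 * differs between them the counter crossed a 32-bit boundary between
 * the reads, so only the second read is trustworthy; otherwise the
 * first read is fine.
 */
static inline uint64_t pick_stable_count(uint64_t first, uint64_t second)
{
    return (((first ^ second) >> 32) & 1) ? second : first;
}
```

In the hypervisor the two arguments would be two successive CNTPCT_EL0
reads; on a rollover such as 0xffffffff -> 0x100000000 the helper picks
the second value.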

On a related topic, do we need a fix similar to Linux commit 
75a19a0202db "arm64: arch_timer: Ensure counter register reads occur 
with seqlock held"?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:50:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22467.48782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6d9-0003vi-3H; Mon, 09 Nov 2020 12:50:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22467.48782; Mon, 09 Nov 2020 12:50:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6d8-0003vY-VG; Mon, 09 Nov 2020 12:50:42 +0000
Received: by outflank-mailman (input) for mailman id 22467;
 Mon, 09 Nov 2020 12:50:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6d7-0003tf-G8
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:50:41 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65cd7055-ea58-471e-ab4b-c97dca5b8bec;
 Mon, 09 Nov 2020 12:50:36 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id a25so3369148lfb.2
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:35 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:34 -0800 (PST)
X-Inumbo-ID: 65cd7055-ea58-471e-ab4b-c97dca5b8bec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=1rzuTCzz2KF+DRZQjteexLeZTdzBj+314YEg3qH31eU=;
        b=S6tl/cpgNDnUIxTskwzefT2OcCf0wYW5IaEF+pGqwIKBOuIcCUH2BcvRYHPqlqI1aR
         1gNG2n/XcW+n1z1aieFzT7m+uHrSmtUL13xhONepAHj5uGGxGOo5mSulraPtUUGNXdk4
         PkaDGOrmV9iy8I8+zqBUlZ8uffd8lGDHyWmlB5LicP9KJIgbexggybZvxyCKvMe4NvKW
         h6ckFRAlbFgjOKxl68B2wYdQuyJm5BpX+zER0tvHBh1EaurrT9hMLlGYFxiKM1v2m2f+
         XyTGI8XpTj5SSzvLIYWYAz6+MRpvmi1QXJ6wD/el3bR9kVwg7HK8WzMlYWRyBQGmsgHg
         sxjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=1rzuTCzz2KF+DRZQjteexLeZTdzBj+314YEg3qH31eU=;
        b=mYeJFBoBR5FDMmsimYRnYPmz70vxnEueiAZRQjD9rF8x3F72OmzqYTwS4sXnSNItOf
         Rw7wQPI6D8AXk+e5ReKoqYAUocvJxylww7l/1Jlbt9Yvr+h93sqwS5Zz/vwExiw2BEk7
         Ei4E/MByNMuiy6qvWqcTcLPmZ2RcyxGfYloo8wrqZTwA8ng1I171NlMUmZtAdsQfpO3O
         Wv3RjC9co1Yc5g/KunQV3nEWht5fmoKU19rJK6brHloXFQh+8dAgKzAi3EaEBrcI0K0K
         FHghbKkXY8aOdlM+xYn7BieSQS1m+WTvVVGurUY/HV1qt11uWozVGGiWvrcsVG/q4vGk
         75Ug==
X-Gm-Message-State: AOAM5311aL+pNemyRvPcJnREPY0u2j9du3lTfzsQMiwLCMMVhpJTvUT/
	xTTZC1nYmE16kwZhvAY9Lf4T/fSsxaSd7j7B
X-Google-Smtp-Source: ABdhPJxDFoHXaz68fLZTaNjkSD8UMocF3QfpIF4hXTbn9tp8pmsusBJgEQvGxDLl+4M/wzGd4N7iMA==
X-Received: by 2002:a19:90b:: with SMTP id 11mr6026088lfj.316.1604926234923;
        Mon, 09 Nov 2020 04:50:34 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 01/10] pci/pvh: Allow PCI toolstack code to run with PVH domains on ARM
Date: Mon,  9 Nov 2020 14:50:22 +0200
Message-Id: <20201109125031.26409-2-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

According to https://wiki.xenproject.org/wiki/Linux_PVH:

Items not supported by PVH
 - PCI pass through (as of Xen 4.10)

Allow the PCI remove code to run on ARM and do not assert for PVH domains.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 tools/libxl/Makefile    | 4 ++++
 tools/libxl/libxl_pci.c | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 241da7fff6f4..f3806aafcb4e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -130,6 +130,10 @@ endif
 
 LIBXL_LIBS += -lyajl
 
+ifeq ($(CONFIG_ARM),y)
+CFLAGS += -DCONFIG_ARM
+endif
+
 LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index bc5843b13701..b93cf976642b 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
             goto out_fail;
         }
     } else {
+        /* PCI passthrough can also run on ARM PVH */
+#ifndef CONFIG_ARM
         assert(type == LIBXL_DOMAIN_TYPE_PV);
-
+#endif
         char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
                                      pcidev->bus, pcidev->dev, pcidev->func);
         FILE *f = fopen(sysfs_path, "r");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:50:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22468.48794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dE-0003yZ-C7; Mon, 09 Nov 2020 12:50:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22468.48794; Mon, 09 Nov 2020 12:50:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dE-0003yS-8F; Mon, 09 Nov 2020 12:50:48 +0000
Received: by outflank-mailman (input) for mailman id 22468;
 Mon, 09 Nov 2020 12:50:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dC-0003tf-GG
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:50:46 +0000
Received: from mail-lj1-x22a.google.com (unknown [2a00:1450:4864:20::22a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3ff9ada-bce0-421f-b7dd-51132a6a6695;
 Mon, 09 Nov 2020 12:50:37 +0000 (UTC)
Received: by mail-lj1-x22a.google.com with SMTP id h23so5662951ljg.13
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:37 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:35 -0800 (PST)
X-Inumbo-ID: d3ff9ada-bce0-421f-b7dd-51132a6a6695
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=0raAmTH7axJ0YbWj1RALaYgyVMCqkwur6fbc6S2x+mQ=;
        b=PMkxzd3meXfxMFb+8V9v7umIklIQVDgU3Yej5GLf8NU7kAmMU2+X+iujw4xkvoPWDZ
         Fm/28ucP6ws+K2JsF3+RpNyd7b6UwSmSOHJh85uxzWU994fDaFsECOoIbCHjR1prI4yU
         64eHPDI4OgRvuOqOUEqXAzGgTRhri8UvuIRYqKmWb2PUtZrH4WabsDma9liElDpwz7as
         j4D8EPtr5S2d5pa4JGAhP3pVYZ8xGTwjkK0lbRPgJI/mqdvu9ywBmCcLBjSbegicuWlt
         r9myYOftERyVCLaQPMnDbSM29lZW/loDtF+lNO+cwK27Yf3Iql1Qak7N2Ca/XgdZzDZE
         LqUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=0raAmTH7axJ0YbWj1RALaYgyVMCqkwur6fbc6S2x+mQ=;
        b=mUuxg36vtPtT76VLpOey1FIxB3yqOt6KeFX7knzis4E7rBJJ7PvoFFVH7vwn/rU1E5
         LIpOYonDKLj0cD6Uh8/mbRVv//gojnXjCIaACX+V457vvp++ScGth8UNXAe7d/D+3474
         bgIudVKR5rXGU8hZXLaKR3ZQCFELjepYNgpC939+ZQ8IdBwSrdFNZfo220kQtdzs284s
         dFd3lrm07pHJsH97uYyE80kMTRajkm0mOoo66oBSa44TV5FFD8zhwycp0tdMcvqzsfQK
         BOJr54VoK7a15foXsswtIFpCoU9ExktFtPZRy8J6DZVOgR/M6UYemnHqinh+dbzTPfwO
         ZVQg==
X-Gm-Message-State: AOAM531pz4c5Iv3Vg/bt6bipIwoxVS7Sdx9dsxeYfo5Rx51QtErdLhcC
	rimt9U27DvEtBZbKFJp9DkU=
X-Google-Smtp-Source: ABdhPJzP9fZHRdiYOs6Ll6Bu1EWUvCyMJ+eVtThjJRLcocKF8qx9PQqAen8FQrTxH96C4NWdi/wfBA==
X-Received: by 2002:a2e:9694:: with SMTP id q20mr139176lji.279.1604926236043;
        Mon, 09 Nov 2020 04:50:36 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Date: Mon,  9 Nov 2020 14:50:23 +0200
Message-Id: <20201109125031.26409-3-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

The original code depends on pciback to manage the assignable device list.
The functionality implemented by pciback and the toolstack which is
relevant/missing/needed for ARM:

1. pciback is used as a database for assignable PCI devices, e.g. xl
   pci-assignable-{add|remove|list} manipulates that list. So, whenever the
   toolstack needs to know which PCI devices can be passed through, it reads
   that from the relevant sysfs entries of pciback.

2. pciback is used to hold the unbound PCI devices, e.g. when passing through
   a PCI device it needs to be unbound from the relevant device driver and
   bound to pciback (strictly speaking it is not required that the device be
   bound to pciback, but pciback is again used as a database of the
   passed-through PCI devices, so we can re-bind the devices to their
   original drivers when the guest domain shuts down).

As ARM doesn't use pciback, implement the above with:

1. Additional sysctls:
   - XEN_SYSCTL_pci_device_set_assigned
   - XEN_SYSCTL_pci_device_get_assigned
   - XEN_SYSCTL_pci_device_enum_assigned
2. An extension of struct pci_dev to hold the assignment state.
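
The enumeration protocol above (index in, device out, EINVAL once
exhausted) can be modelled on the host as follows. This is a sketch:
struct fake_pdev and enum_assigned are illustrative stand-ins for Xen's
pci_dev list walk, not actual Xen code.

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for Xen's struct pci_dev. */
struct fake_pdev {
    uint32_t sbdf;
    bool assigned;
};

/*
 * Model of XEN_SYSCTL_pci_device_enum_assigned: the caller passes a
 * cursor index; we return the idx-th device matching the requested
 * assignment state, or -EINVAL once the enumeration is exhausted.
 */
static int enum_assigned(const struct fake_pdev *devs, size_t n,
                         bool report_not_assigned, uint32_t from_idx,
                         uint32_t *machine_sbdf)
{
    const bool want = !report_not_assigned;
    uint32_t cur = 0;

    for (size_t i = 0; i < n; i++) {
        if (devs[i].assigned != want)
            continue;
        if (cur++ == from_idx) {
            *machine_sbdf = devs[i].sbdf;
            return 0;
        }
    }
    return -EINVAL; /* no more matching devices: enumeration finished */
}
```

The toolstack side then simply loops, bumping the index after each
successful call, as the libxl changes in this patch do with
xc_pci_device_enum_assigned().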

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 tools/libxc/include/xenctrl.h |   9 +++
 tools/libxc/xc_domain.c       |   1 +
 tools/libxc/xc_misc.c         |  46 +++++++++++++++
 tools/libxl/Makefile          |   4 ++
 tools/libxl/libxl_pci.c       | 105 ++++++++++++++++++++++++++++++++--
 xen/arch/arm/sysctl.c         |  66 ++++++++++++++++++++-
 xen/drivers/passthrough/pci.c |  93 ++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h   |  40 +++++++++++++
 xen/include/xen/pci.h         |  12 ++++
 9 files changed, 370 insertions(+), 6 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 4c89b7294c4f..77029013da7d 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2652,6 +2652,15 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
 
+typedef xen_sysctl_pci_device_enum_assigned_t xc_pci_device_enum_assigned_t;
+
+int xc_pci_device_set_assigned(xc_interface *xch, uint32_t machine_sbdf,
+                               bool assigned);
+int xc_pci_device_get_assigned(xc_interface *xch, uint32_t machine_sbdf);
+
+int xc_pci_device_enum_assigned(xc_interface *xch,
+                                xc_pci_device_enum_assigned_t *e);
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 71829c2bce3e..d515191e9243 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -2321,6 +2321,7 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = domid;
     return do_domctl(xch, &domctl);
 }
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 3820394413a9..d439c4ba1019 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -988,6 +988,52 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
     return _xc_livepatch_action(xch, name, LIVEPATCH_ACTION_REPLACE, timeout, flags);
 }
 
+int xc_pci_device_set_assigned(
+    xc_interface *xch,
+    uint32_t machine_sbdf,
+    bool assigned)
+{
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_pci_device_set_assigned;
+    sysctl.u.pci_set_assigned.machine_sbdf = machine_sbdf;
+    sysctl.u.pci_set_assigned.assigned = assigned;
+
+    return do_sysctl(xch, &sysctl);
+}
+
+int xc_pci_device_get_assigned(
+    xc_interface *xch,
+    uint32_t machine_sbdf)
+{
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_pci_device_get_assigned;
+    sysctl.u.pci_get_assigned.machine_sbdf = machine_sbdf;
+
+    return do_sysctl(xch, &sysctl);
+}
+
+int xc_pci_device_enum_assigned(xc_interface *xch,
+                                xc_pci_device_enum_assigned_t *e)
+{
+    int ret;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_pci_device_enum_assigned;
+    sysctl.u.pci_enum_assigned.idx = e->idx;
+    sysctl.u.pci_enum_assigned.report_not_assigned = e->report_not_assigned;
+    ret = do_sysctl(xch, &sysctl);
+    if ( ret )
+        errno = EINVAL;
+    else
+    {
+        e->domain = sysctl.u.pci_enum_assigned.domain;
+        e->machine_sbdf = sysctl.u.pci_enum_assigned.machine_sbdf;
+    }
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index f3806aafcb4e..6f76ba35aec7 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -130,6 +130,10 @@ endif
 
 LIBXL_LIBS += -lyajl
 
+ifeq ($(CONFIG_X86),y)
+CFLAGS += -DCONFIG_PCIBACK
+endif
+
 ifeq ($(CONFIG_ARM),y)
 CFLAGS += -DCONFIG_ARM
 endif
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index b93cf976642b..41f89b8aae10 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -319,6 +319,7 @@ retry_transaction2:
 
 static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
 {
+#ifdef CONFIG_PCIBACK
     char **domlist;
     unsigned int nd = 0, i;
 
@@ -356,6 +357,33 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
             }
         }
     }
+#else
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    int ret;
+    xc_pci_device_enum_assigned_t e;
+
+    *list = NULL;
+    *num = 0;
+
+    memset(&e, 0, sizeof(e));
+    do {
+        ret = xc_pci_device_enum_assigned(ctx->xch, &e);
+        if (ret && errno == EINVAL)
+            break;
+        *list = realloc(*list, sizeof(libxl_device_pci) * (e.idx + 1));
+        if (*list == NULL)
+            return ERROR_NOMEM;
+
+        pcidev_struct_fill(*list + e.idx,
+                           e.domain,
+                           e.machine_sbdf >> 8 & 0xff,
+                           PCI_SLOT(e.machine_sbdf),
+                           PCI_FUNC(e.machine_sbdf),
+                           0 /*vdevfn*/);
+        e.idx++;
+    } while (!ret);
+    *num = e.idx;
+#endif
     libxl__ptr_add(gc, *list);
 
     return 0;
@@ -411,13 +439,20 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcidevs = NULL, *new;
+    int r;
+#ifdef CONFIG_PCIBACK
+    libxl_device_pci *assigned;
+    int num_assigned;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
+#else
+    xc_pci_device_enum_assigned_t e;
+#endif
 
     *num = 0;
 
+#ifdef CONFIG_PCIBACK
     r = get_all_assigned_devices(gc, &assigned, &num_assigned);
     if (r) goto out;
 
@@ -453,6 +488,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
     closedir(dir);
 out:
+#else
+    memset(&e, 0, sizeof(e));
+    e.report_not_assigned = 1;
+    do {
+        r = xc_pci_device_enum_assigned(ctx->xch, &e);
+        if (r && errno == EINVAL)
+            break;
+        new = realloc(pcidevs, (e.idx + 1) * sizeof(*new));
+        if (new == NULL)
+            break;
+
+        pcidevs = new;
+        new = pcidevs + e.idx;
+
+        memset(new, 0, sizeof(*new));
+
+        pcidev_struct_fill(new,
+                           e.domain,
+                           e.machine_sbdf >> 8 & 0xff,
+                           PCI_SLOT(e.machine_sbdf),
+                           PCI_FUNC(e.machine_sbdf),
+                           0 /*vdevfn*/);
+        e.idx++;
+    } while (!r);
+    *num = e.idx;
+#endif
     GC_FREE;
     return pcidevs;
 }
@@ -606,6 +667,7 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     return false;
 }
 
+#ifdef CONFIG_PCIBACK
 /*
  * A brief comment about slots.  I don't know what slots are for; however,
  * I have by experimentation determined:
@@ -648,11 +710,13 @@ out:
     fclose(f);
     return rc;
 }
+#endif
 
 static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
 {
-    char * spath;
     int rc;
+#ifdef CONFIG_PCIBACK
+    char * spath;
     struct stat st;
 
     if ( access(SYSFS_PCIBACK_DRIVER, F_OK) < 0 ) {
@@ -663,22 +727,27 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
         }
         return -1;
     }
-
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
                       pcidev->domain, pcidev->bus,
                       pcidev->dev, pcidev->func);
     rc = lstat(spath, &st);
-
     if( rc == 0 )
         return 1;
     if ( rc < 0 && errno == ENOENT )
         return 0;
     LOGE(ERROR, "Accessing %s", spath);
     return -1;
+#else
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    rc = xc_pci_device_get_assigned(ctx->xch, pcidev_encode_bdf(pcidev));
+    return rc == 0 ? 1 : 0;
+#endif
 }
 
 static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
 {
+#ifdef CONFIG_PCIBACK
     int rc;
 
     if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
@@ -697,10 +766,17 @@ static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
     return 0;
+#else
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    return xc_pci_device_set_assigned(ctx->xch, pcidev_encode_bdf(pcidev),
+                                      true);
+#endif
 }
 
 static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 {
+#ifdef CONFIG_PCIBACK
     /* Remove from pciback */
     if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
@@ -716,6 +792,12 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
         }
     }
     return 0;
+#else
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    return xc_pci_device_set_assigned(ctx->xch, pcidev_encode_bdf(pcidev),
+                                      false);
+#endif
 }
 
 #define PCIBACK_INFO_PATH "/libxl/pciback"
@@ -780,10 +862,15 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
+#ifdef CONFIG_PCI_SYSFS_DOM0
     if ( lstat(spath, &st) ) {
         LOGE(ERROR, "Couldn't lstat %s", spath);
         return ERROR_FAIL;
     }
+#else
+    (void)st;
+    printf("IMPLEMENT_ME: %s lstat %s\n", __func__, spath);
+#endif
 
     /* Check to see if it's already assigned to pciback */
     rc = pciback_dev_is_assigned(gc, pcidev);
@@ -1350,8 +1437,12 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+#ifdef CONFIG_PCI_SYSFS_DOM0
         rc = ERROR_FAIL;
         goto out;
+#else
+        goto out_no_irq;
+#endif
     }
     for (i = 0; i < PROC_PCI_NUM_RESOURCES; i++) {
         if (fscanf(f, "0x%llx 0x%llx 0x%llx\n", &start, &end, &flags) != 3)
@@ -1522,7 +1613,11 @@ static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
             break;
     }
     free(pcidevs);
+#ifdef CONFIG_PCIBACK
     return i != num;
+#else
+    return 1;
+#endif
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index f87944e8473c..84e933b2eb45 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -10,6 +10,7 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
+#include <xen/guest_access.h>
 #include <public/sysctl.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
@@ -20,7 +21,70 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 long arch_do_sysctl(struct xen_sysctl *sysctl,
                     XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool copyback = false;
+
+    switch ( sysctl->cmd )
+    {
+    case XEN_SYSCTL_pci_device_set_assigned:
+    {
+        u16 seg;
+        u8 bus, devfn;
+        uint32_t machine_sbdf;
+
+        machine_sbdf = sysctl->u.pci_set_assigned.machine_sbdf;
+
+#if 0
+        ret = xsm_pci_device_set_assigned(XSM_HOOK, d);
+        if ( ret )
+            break;
+#endif
+
+        seg = machine_sbdf >> 16;
+        bus = PCI_BUS(machine_sbdf);
+        devfn = PCI_DEVFN2(machine_sbdf);
+
+        pcidevs_lock();
+        ret = pci_device_set_assigned(seg, bus, devfn,
+                                      !!sysctl->u.pci_set_assigned.assigned);
+        pcidevs_unlock();
+        break;
+    }
+    case XEN_SYSCTL_pci_device_get_assigned:
+    {
+        u16 seg;
+        u8 bus, devfn;
+        uint32_t machine_sbdf;
+
+        machine_sbdf = sysctl->u.pci_get_assigned.machine_sbdf;
+
+        seg = machine_sbdf >> 16;
+        bus = PCI_BUS(machine_sbdf);
+        devfn = PCI_DEVFN2(machine_sbdf);
+
+        pcidevs_lock();
+        ret = pci_device_get_assigned(seg, bus, devfn);
+        pcidevs_unlock();
+        break;
+    }
+    case XEN_SYSCTL_pci_device_enum_assigned:
+    {
+        ret = pci_device_enum_assigned(sysctl->u.pci_enum_assigned.report_not_assigned,
+                                       sysctl->u.pci_enum_assigned.idx,
+                                       &sysctl->u.pci_enum_assigned.domain,
+                                       &sysctl->u.pci_enum_assigned.machine_sbdf);
+        copyback = true;
+        break;
+    }
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+    if ( copyback && (!ret || copyback > 0) &&
+         __copy_to_guest(u_sysctl, sysctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }
 
 /*
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 98e8a2fade60..49b4279c63bd 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -879,6 +879,43 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     return ret;
 }
 
+#ifdef CONFIG_ARM
+int pci_device_set_assigned(u16 seg, u8 bus, u8 devfn, bool assigned)
+{
+    struct pci_dev *pdev;
+
+    pdev = pci_get_pdev(seg, bus, devfn);
+    if ( !pdev )
+    {
+        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
+               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return -ENODEV;
+    }
+
+    pdev->assigned = assigned;
+    printk(XENLOG_INFO "pciback %sassign PCI device %04x:%02x:%02x.%u\n",
+           assigned ? "" : "de-",
+           seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+    return 0;
+}
+
+int pci_device_get_assigned(u16 seg, u8 bus, u8 devfn)
+{
+    struct pci_dev *pdev;
+
+    pdev = pci_get_pdev(seg, bus, devfn);
+    if ( !pdev )
+    {
+        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
+               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return -ENODEV;
+    }
+
+    return pdev->assigned ? 0 : -ENODEV;
+}
+#endif
+
 #ifndef CONFIG_ARM
 /*TODO :Implement MSI support for ARM  */
 static int pci_clean_dpci_irq(struct domain *d,
@@ -1821,6 +1858,62 @@ int iommu_do_pci_domctl(
     return ret;
 }
 
+#ifdef CONFIG_ARM
+struct list_assigned {
+    uint32_t cur_idx;
+    uint32_t from_idx;
+    bool assigned;
+    domid_t *domain;
+    uint32_t *machine_sbdf;
+};
+
+static int _enum_assigned_pci_devices(struct pci_seg *pseg, void *arg)
+{
+    struct list_assigned *ctxt = arg;
+    struct pci_dev *pdev;
+
+    list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
+    {
+        if ( pdev->assigned == ctxt->assigned )
+        {
+            if ( ctxt->cur_idx == ctxt->from_idx )
+            {
+                *ctxt->domain = pdev->domain->domain_id;
+                *ctxt->machine_sbdf = pdev->sbdf.sbdf;
+                return 1;
+            }
+            ctxt->cur_idx++;
+        }
+    }
+    return 0;
+}
+
+int pci_device_enum_assigned(bool report_not_assigned,
+                             uint32_t from_idx, domid_t *domain,
+                             uint32_t *machine_sbdf)
+{
+    struct list_assigned ctxt = {
+        .assigned = !report_not_assigned,
+        .cur_idx = 0,
+        .from_idx = from_idx,
+        .domain = domain,
+        .machine_sbdf = machine_sbdf,
+    };
+    int ret;
+
+    pcidevs_lock();
+    ret = pci_segments_iterate(_enum_assigned_pci_devices, &ctxt);
+    pcidevs_unlock();
+    /*
+     * If no matching device is found, report -EINVAL to mark the
+     * enumeration process as finished.
+     */
+    if ( !ret )
+        return -EINVAL;
+    return 0;
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index a07364711794..5ca73c538688 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1062,6 +1062,40 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
 #endif
 
+/*
+ * These emulate the pciback device (de-)assignment used by the tools
+ * to track current device assignments: all the PCI devices that can
+ * be passed through must be assigned to pciback to mark them
+ * as such. On ARM we do not run pci{back|front} and emulate the
+ * PCI host bridge in Xen, so we need to maintain the assignments
+ * in Xen itself.
+ *
+ * Note on xen_sysctl_pci_device_get_assigned: ENOENT is used to report
+ * that there are no assigned devices left.
+ */
+struct xen_sysctl_pci_device_set_assigned {
+    /* IN */
+    /* FIXME: is this really a machine SBDF or as Domain-0 sees it? */
+    uint32_t machine_sbdf;
+    uint8_t assigned;
+};
+
+struct xen_sysctl_pci_device_get_assigned {
+    /* IN */
+    uint32_t machine_sbdf;
+};
+
+struct xen_sysctl_pci_device_enum_assigned {
+    /* IN */
+    uint32_t idx;
+    uint8_t report_not_assigned;
+    /* OUT */
+    domid_t domain;
+    uint32_t machine_sbdf;
+};
+typedef struct xen_sysctl_pci_device_enum_assigned xen_sysctl_pci_device_enum_assigned_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_pci_device_enum_assigned_t);
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -1092,6 +1126,9 @@ struct xen_sysctl {
 #define XEN_SYSCTL_livepatch_op                  27
 /* #define XEN_SYSCTL_set_parameter              28 */
 #define XEN_SYSCTL_get_cpu_policy                29
+#define XEN_SYSCTL_pci_device_set_assigned       30
+#define XEN_SYSCTL_pci_device_get_assigned       31
+#define XEN_SYSCTL_pci_device_enum_assigned      32
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -1122,6 +1159,9 @@ struct xen_sysctl {
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
+        struct xen_sysctl_pci_device_set_assigned pci_set_assigned;
+        struct xen_sysctl_pci_device_get_assigned pci_get_assigned;
+        struct xen_sysctl_pci_device_enum_assigned pci_enum_assigned;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 2bc4aaf4530c..7bf439de4de0 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -132,6 +132,13 @@ struct pci_dev {
 
     /* Data for vPCI. */
     struct vpci *vpci;
+#ifdef CONFIG_ARM
+    /*
+     * Set if this PCI device is eligible for passthrough,
+     * i.e. as if it were assigned to the pciback driver.
+     */
+    bool assigned;
+#endif
 };
 
 #define for_each_pdev(domain, pdev) \
@@ -168,6 +175,11 @@ const unsigned long *pci_get_ro_map(u16 seg);
 int pci_add_device(u16 seg, u8 bus, u8 devfn,
                    const struct pci_dev_info *, nodeid_t node);
 int pci_remove_device(u16 seg, u8 bus, u8 devfn);
+int pci_device_set_assigned(u16 seg, u8 bus, u8 devfn, bool assigned);
+int pci_device_get_assigned(u16 seg, u8 bus, u8 devfn);
+int pci_device_enum_assigned(bool report_not_assigned,
+                             uint32_t from_idx, domid_t *domain,
+                             uint32_t *machine_sbdf);
 int pci_ro_device(int seg, int bus, int devfn);
 int pci_hide_device(unsigned int seg, unsigned int bus, unsigned int devfn);
 struct pci_dev *pci_get_pdev(int seg, int bus, int devfn);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:50:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:50:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22466.48769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6d3-0003tr-Pl; Mon, 09 Nov 2020 12:50:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22466.48769; Mon, 09 Nov 2020 12:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6d3-0003tk-Mh; Mon, 09 Nov 2020 12:50:37 +0000
Received: by outflank-mailman (input) for mailman id 22466;
 Mon, 09 Nov 2020 12:50:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6d2-0003tf-It
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:50:36 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dcd4d05-209e-4199-bb38-9a78a5bd1e58;
 Mon, 09 Nov 2020 12:50:35 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id f11so5952999lfs.3
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:35 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:33 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 00/10] [RFC] ARM PCI passthrough configuration and vPCI
Date: Mon,  9 Nov 2020 14:50:21 +0200
Message-Id: <20201109125031.26409-1-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Hello, all!

This is an RFC and an attempt to work out the best way to progress with ARM
PCI passthrough configuration. This includes, but is not limited to,
configuration of assignable PCI devices (legacy IRQs; MSI/MSI-X are not yet
supported), MMIO regions, etc.

This is based on the original RFC from ARM [1], and bits of the possible
configuration approaches were discussed before [2]. So, I tried to implement
something concrete that we can discuss in more detail. (Rahul, Bertrand: if
you are interested we can discuss all this in detail, so we can use it as a
part of the ARM PCI passthrough solution.)

This is all work in progress, so without some other minor patches one won't
be able to run it, but the patches still show the direction, which should be
fine for an RFC. For those interested in a full working example, I have
created a branch [3]; please note that it was fully tested only on the R-Car
Gen3 platform, which has a non-ECAM PCI host controller, and only partially
tested on QEMU (Domain-0 only, without running guest domains).

In this RFC I only submit the patches which I would like to get the
community's view on. I will highlight some of them below; the rest is
documented in their commit messages:

1. [WORKAROUND] xen/arm: Update hwdom's p2m to trap ECAM space

This is a workaround to be able to trap the ECAM address space in Domain-0,
which is normally not possible because the PCI host bridge is mapped into
Domain-0, so such devices need some special handling. I have discussed this
with Julien on IRC, but haven't implemented anything of production quality
yet.

2. arm/pci: Maintain PCI assignable list

This patch needs a decision on pci-back use. As of now, what is not covered
is the assignment of legacy IRQs (MMIOs and MSIs are handled by Xen without
the toolstack's help - am I right here?); MMIOs are assigned by the vPCI
code. We discussed [2] the possibility of running a "limited" version of the
pci-back driver on ARM, but I'd like to bring that discussion back here, as
it seems that only some bits of pci-back may now be used, so the benefit of
having pci-back in the picture is not clear.
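For illustration, the index-based enumeration contract used here (the
toolstack asks for index 0, 1, 2, ... until the hypervisor reports -EINVAL)
can be sketched as a standalone simulation. All names below are invented for
the example and are not the Xen or libxc API:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct fake_pdev {
    uint32_t sbdf;
    bool assigned;
};

/* Stand-in for the hypervisor's global PCI device list. */
static const struct fake_pdev devs[] = {
    { 0x00080300, true  },
    { 0x00080400, false },
    { 0x00080500, true  },
};

/*
 * Mirrors the contract of pci_device_enum_assigned(): fill *sbdf with the
 * from_idx-th device matching the requested state and return 0, or return
 * -EINVAL once the enumeration is exhausted.
 */
static int enum_assigned(bool report_not_assigned, uint32_t from_idx,
                         uint32_t *sbdf)
{
    uint32_t cur_idx = 0;
    size_t i;

    for ( i = 0; i < sizeof(devs) / sizeof(devs[0]); i++ )
    {
        if ( devs[i].assigned == !report_not_assigned )
        {
            if ( cur_idx == from_idx )
            {
                *sbdf = devs[i].sbdf;
                return 0;
            }
            cur_idx++;
        }
    }
    return -EINVAL;
}

/* Toolstack-style loop: collect all assigned devices until -EINVAL. */
static size_t collect_assigned(uint32_t *out, size_t max)
{
    size_t n = 0;
    uint32_t sbdf;

    while ( n < max && enum_assigned(false, n, &sbdf) == 0 )
        out[n++] = sbdf;
    return n;
}
```

Note that the caller never needs to know how many matching devices exist up
front; -EINVAL is the only termination condition.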

3. vpci: Make every domain handle its own BARs

This is a big change in the vPCI code which allows non-identity mappings for
guest domains. It also handles MMIO configuration for guests without using
the toolstack, which does the same by reading PCI bus sysfs entries in
Domain-0. (Thank you, Roger, for making lots of things clear to me.) This
implements per-domain PCI headers.
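As a rough illustration of what non-identity BAR handling means, here is a
standalone sketch of translating a guest-visible BAR address to the machine
one. The types and the translation helper are invented for the example and do
not reflect the actual vPCI code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One emulated BAR: the guest sees guest_addr, hardware uses machine_addr. */
struct bar_map {
    uint64_t guest_addr;
    uint64_t machine_addr;
    uint64_t size;
};

/*
 * Translate a guest MMIO address to the machine address, or return 0 if the
 * access does not hit any emulated BAR. With identity mappings (the hardware
 * domain case) guest_addr == machine_addr and this is a no-op.
 */
static uint64_t bar_translate(const struct bar_map *maps, size_t n,
                              uint64_t gaddr)
{
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( gaddr >= maps[i].guest_addr &&
             gaddr < maps[i].guest_addr + maps[i].size )
            return maps[i].machine_addr + (gaddr - maps[i].guest_addr);
    return 0;
}
```

Each guest domain would hold its own table of such mappings, which is the
essence of "every domain handles its own BARs".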

4. vpci/arm: Allow updating BAR's header for non-ECAM bridges

This allows non-ECAM bridges, which are not trapped by vPCI for
Domain-0/hwdom, to update vPCI's view of the real values of the BARs. The
assumption here is that Domain-0/hwdom won't relocate BARs, which is usually
the case.


5. Some code is specific to the R-Car Gen3, which is not ECAM compatible. It
demonstrates where the generic ARM PCI framework needs to be changed to
support such controllers.

Thank you,
Oleksandr Andrushchenko

P.S. I would like to thank Roger, Julien and Jan for their attention
and time.

[1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg77422.html
[2] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84452.html
[3] https://github.com/andr2000/xen/tree/vpci_rfc

Oleksandr Andrushchenko (10):
  pci/pvh: Allow PCI toolstack code run with PVH domains on ARM
  arm/pci: Maintain PCI assignable list
  xen/arm: Setup MMIO range trap handlers for hardware domain
  [WORKAROUND] xen/arm: Update hwdom's p2m to trap ECAM space
  xen/arm: Process pending vPCI map/unmap operations
  vpci: Make every domain handle its own BARs
  xen/arm: Do not hardcode physical PCI device addresses
  vpci/arm: Allow updating BAR's header for non-ECAM bridges
  vpci/rcar: Implement vPCI.update_bar_header callback
  [HACK] vpci/rcar: Make vPCI know DomD is hardware domain

 tools/libxc/include/xenctrl.h         |   9 +
 tools/libxc/xc_domain.c               |   1 +
 tools/libxc/xc_misc.c                 |  46 ++++
 tools/libxl/Makefile                  |   8 +
 tools/libxl/libxl_pci.c               | 109 +++++++++-
 xen/arch/arm/domain_build.c           |  10 +-
 xen/arch/arm/pci/pci-host-common.c    |  44 ++++
 xen/arch/arm/pci/pci-host-generic.c   |  43 +++-
 xen/arch/arm/pci/pci-host-rcar-gen3.c |  69 ++++++
 xen/arch/arm/sysctl.c                 |  66 +++++-
 xen/arch/arm/traps.c                  |   6 +
 xen/arch/arm/vpci.c                   |  16 +-
 xen/drivers/passthrough/pci.c         |  93 +++++++++
 xen/drivers/vpci/header.c             | 289 +++++++++++++++++++++++---
 xen/drivers/vpci/vpci.c               |   1 +
 xen/include/asm-arm/pci.h             |  17 ++
 xen/include/public/arch-arm.h         |   9 +-
 xen/include/public/sysctl.h           |  40 ++++
 xen/include/xen/pci.h                 |  12 ++
 xen/include/xen/vpci.h                |  24 ++-
 20 files changed, 857 insertions(+), 55 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:50:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22469.48806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dI-00042q-SL; Mon, 09 Nov 2020 12:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22469.48806; Mon, 09 Nov 2020 12:50:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dI-00042f-NV; Mon, 09 Nov 2020 12:50:52 +0000
Received: by outflank-mailman (input) for mailman id 22469;
 Mon, 09 Nov 2020 12:50:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dH-0003tf-GO
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:50:51 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a51ad763-8346-45f2-a425-f8c9c9eb7af3;
 Mon, 09 Nov 2020 12:50:38 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id y25so9221562lja.9
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:38 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:36 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 03/10] xen/arm: Setup MMIO range trap handlers for hardware domain
Date: Mon,  9 Nov 2020 14:50:24 +0200
Message-Id: <20201109125031.26409-4-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

In order for vPCI to work, all accesses to the PCI configuration space must
be synchronized among all entities, e.g. the hardware domain and guests. For
that, implement PCI host bridge specific callbacks to properly set up those
ranges depending on the host bridge implementation.

This callback is optional and may not be used by non-ECAM host bridges.
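The optional-callback pattern used here can be sketched as a standalone
illustration; the struct and function names below are invented for the
example and are not Xen's:

```c
#include <assert.h>
#include <stddef.h>

struct bridge;

struct bridge_ops {
    /* Optional: non-ECAM bridges may simply leave this NULL. */
    int (*register_mmio_handler)(struct bridge *b);
};

struct bridge {
    const struct bridge_ops *ops;
    int registered; /* test hook: records whether the callback ran */
};

static int do_register(struct bridge *b)
{
    b->registered = 1;
    return 0;
}

static int setup_bridge(struct bridge *b)
{
    /* The callback is optional: skip bridges that do not provide it. */
    if ( b->ops->register_mmio_handler )
        return b->ops->register_mmio_handler(b);
    return 0;
}

/* Iterate all bridges, stopping at the first error, as the patch does. */
static int setup_all(struct bridge **bridges, size_t n)
{
    size_t i;
    int err;

    for ( i = 0; i < n; i++ )
    {
        err = setup_bridge(bridges[i]);
        if ( err )
            return err;
    }
    return 0;
}
```

A bridge that leaves the hook NULL is silently skipped rather than treated as
an error, which is what makes the callback optional.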

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/arch/arm/pci/pci-host-common.c  | 16 ++++++++++++++++
 xen/arch/arm/pci/pci-host-generic.c | 15 +++++++++++++--
 xen/arch/arm/vpci.c                 | 16 +++++++++++++++-
 xen/include/asm-arm/pci.h           |  7 +++++++
 4 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index b011c7eff3c8..b81184d34980 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -219,6 +219,22 @@ struct device *pci_find_host_bridge_device(struct device *dev)
     }
     return dt_to_dev(bridge->dt_node);
 }
+
+int pci_host_iterate_bridges(struct domain *d,
+                             int (*clb)(struct domain *d,
+                                        struct pci_host_bridge *bridge))
+{
+    struct pci_host_bridge *bridge;
+    int err;
+
+    list_for_each_entry( bridge, &pci_host_bridges, node )
+    {
+        err = clb(d, bridge);
+        if ( err )
+            return err;
+    }
+    return 0;
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
index 54dd123e95c7..469df3da0116 100644
--- a/xen/arch/arm/pci/pci-host-generic.c
+++ b/xen/arch/arm/pci/pci-host-generic.c
@@ -85,12 +85,23 @@ int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
     return 0;
 }
 
+static int pci_ecam_register_mmio_handler(struct domain *d,
+                                          struct pci_host_bridge *bridge,
+                                          const struct mmio_handler_ops *ops)
+{
+    struct pci_config_window *cfg = bridge->sysdata;
+
+    register_mmio_handler(d, ops, cfg->phys_addr, cfg->size, NULL);
+    return 0;
+}
+
 /* ECAM ops */
 struct pci_ecam_ops pci_generic_ecam_ops = {
     .bus_shift  = 20,
     .pci_ops    = {
-        .read       = pci_ecam_config_read,
-        .write      = pci_ecam_config_write,
+        .read                  = pci_ecam_config_read,
+        .write                 = pci_ecam_config_write,
+        .register_mmio_handler = pci_ecam_register_mmio_handler,
     }
 };
 
diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
index 49e473ab0d10..2b9bf34c8fe6 100644
--- a/xen/arch/arm/vpci.c
+++ b/xen/arch/arm/vpci.c
@@ -80,11 +80,25 @@ static const struct mmio_handler_ops vpci_mmio_handler = {
     .write = vpci_mmio_write,
 };
 
+static int vpci_setup_mmio_handler(struct domain *d,
+                                   struct pci_host_bridge *bridge)
+{
+    if ( bridge->ops->register_mmio_handler )
+        return bridge->ops->register_mmio_handler(d, bridge,
+                                                  &vpci_mmio_handler);
+    return 0;
+}
+
+
 int domain_vpci_init(struct domain *d)
 {
-    if ( !has_vpci(d) || is_hardware_domain(d) )
+    if ( !has_vpci(d) )
         return 0;
 
+    if ( is_hardware_domain(d) )
+        return pci_host_iterate_bridges(d, vpci_setup_mmio_handler);
+
+    /* Guest domains use what is programmed in their device tree. */
     register_mmio_handler(d, &vpci_mmio_handler,
             GUEST_VPCI_ECAM_BASE,GUEST_VPCI_ECAM_SIZE,NULL);
 
diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
index ba23178f67ab..e3a02429b8d4 100644
--- a/xen/include/asm-arm/pci.h
+++ b/xen/include/asm-arm/pci.h
@@ -27,6 +27,7 @@
 #include <xen/pci.h>
 #include <xen/device_tree.h>
 #include <asm/device.h>
+#include <asm/mmio.h>
 
 #ifdef CONFIG_ARM_PCI
 
@@ -64,6 +65,9 @@ struct pci_ops {
                     uint32_t sbdf, int where, int size, u32 *val);
     int (*write)(struct pci_host_bridge *bridge,
                     uint32_t sbdf, int where, int size, u32 val);
+    int (*register_mmio_handler)(struct domain *d,
+                                 struct pci_host_bridge *bridge,
+                                 const struct mmio_handler_ops *ops);
 };
 
 /*
@@ -101,6 +105,9 @@ void pci_init(void);
 bool dt_pci_parse_bus_range(struct dt_device_node *dev,
                             struct pci_config_window *cfg);
 
+int pci_host_iterate_bridges(struct domain *d,
+                             int (*clb)(struct domain *d,
+                                        struct pci_host_bridge *bridge));
 #else   /*!CONFIG_ARM_PCI*/
 struct arch_pci_dev { };
 static inline void  pci_init(void) { }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:50:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:50:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22471.48818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dO-00048k-64; Mon, 09 Nov 2020 12:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22471.48818; Mon, 09 Nov 2020 12:50:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dO-00048c-2K; Mon, 09 Nov 2020 12:50:58 +0000
Received: by outflank-mailman (input) for mailman id 22471;
 Mon, 09 Nov 2020 12:50:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dM-0003tf-GV
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:50:56 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7f8e413-9a77-403b-9adb-a160bd28eaa2;
 Mon, 09 Nov 2020 12:50:39 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id 11so10138049ljf.2
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:39 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:38 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 04/10] [WORKAROUND] xen/arm: Update hwdom's p2m to trap ECAM space
Date: Mon,  9 Nov 2020 14:50:25 +0200
Message-Id: <20201109125031.26409-5-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

The host bridge controller's ECAM space is mapped into Domain-0's p2m, so it
is not possible to trap accesses to it for vPCI via MMIO handlers. For this
to work we need to remove those mappings from the p2m.

TODO (Julien): It would be best if we avoid the map/unmap operation
altogether. So, maybe we want to introduce another way to avoid the mapping,
e.g. by changing the type of the controller to "PCI_HOSTCONTROLLER" and
skipping the mapping when the device is a PCI host controller.
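The idea of the workaround can be illustrated with a standalone sketch (not
the actual p2m code; all names are invented): MMIO traps only fire for
addresses that are not mapped in the p2m, so the ECAM window has to be
removed from the mapped set first.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct range {
    uint64_t base, size;
    bool mapped;
};

/* Mark any range overlapping the ECAM window as unmapped, so accesses to it
 * fault into the hypervisor and can be handled by vPCI. */
static void unmap_ecam(struct range *map, size_t n,
                       uint64_t ecam_base, uint64_t ecam_size)
{
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( map[i].base < ecam_base + ecam_size &&
             ecam_base < map[i].base + map[i].size )
            map[i].mapped = false;
}

/* Does an access to addr trap, or go straight to hardware? */
static bool traps(const struct range *map, size_t n, uint64_t addr)
{
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( map[i].mapped && addr >= map[i].base &&
             addr < map[i].base + map[i].size )
            return false; /* mapped: access goes straight through */
    return true;          /* unmapped: access traps into Xen */
}
```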

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/arch/arm/domain_build.c         | 10 +++++++++-
 xen/arch/arm/pci/pci-host-common.c  | 15 +++++++++++++++
 xen/arch/arm/pci/pci-host-generic.c | 28 ++++++++++++++++++++++++++++
 xen/include/asm-arm/pci.h           |  2 ++
 4 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 1f83f9048146..3f696d2a6672 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2566,7 +2566,15 @@ int __init construct_dom0(struct domain *d)
     if ( rc < 0 )
         return rc;
 
-    return construct_domain(d, &kinfo);
+    rc = construct_domain(d, &kinfo);
+    if ( rc < 0 )
+        return rc;
+
+#ifdef CONFIG_HAS_PCI
+    if ( has_vpci(d) )
+        rc = pci_host_bridge_update_mappings(d);
+#endif
+    return rc;
 }
 
 /*
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index b81184d34980..b6c4d7b636b1 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -235,6 +235,21 @@ int pci_host_iterate_bridges(struct domain *d,
     }
     return 0;
 }
+
+static int pci_host_bridge_update_mapping(struct domain *d,
+                                          struct pci_host_bridge *bridge)
+{
+    if ( !bridge->ops->update_mappings )
+        return 0;
+
+    return bridge->ops->update_mappings(d, bridge);
+}
+
+int pci_host_bridge_update_mappings(struct domain *d)
+{
+    return pci_host_iterate_bridges(d, pci_host_bridge_update_mapping);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
index 469df3da0116..772c53c881bc 100644
--- a/xen/arch/arm/pci/pci-host-generic.c
+++ b/xen/arch/arm/pci/pci-host-generic.c
@@ -21,6 +21,8 @@
 #include <asm/device.h>
 #include <asm/io.h>
 #include <xen/pci.h>
+#include <xen/sched.h>
+#include <asm/p2m.h>
 #include <asm/pci.h>
 
 /*
@@ -85,6 +87,31 @@ int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
     return 0;
 }
 
+/*
+ * TODO: This is called late on domain creation to adjust the p2m if needed:
+ * for an ECAM host controller, MMIO region traps for Domain-0 only work
+ * once the existing mappings have been removed from the p2m.
+ * This is WIP:
+ * julieng: I think it would be best if we avoided the map/unmap operation.
+ * So maybe we want to introduce another way to avoid the mapping,
+ * e.g. by changing the type of the controller to "PCI_HOSTCONTROLLER"
+ * and skipping the mapping when the device is a PCI host controller.
+ */
+static int pci_ecam_update_mappings(struct domain *d,
+                                    struct pci_host_bridge *bridge)
+{
+    struct pci_config_window *cfg = bridge->sysdata;
+    int ret;
+
+    /* Only for control domain which owns this PCI host bridge. */
+    if ( !is_control_domain(d) )
+        return 0;
+
+    ret = unmap_regions_p2mt(d, gaddr_to_gfn(cfg->phys_addr),
+                             cfg->size >> PAGE_SHIFT, INVALID_MFN);
+    return ret;
+}
+
 static int pci_ecam_register_mmio_handler(struct domain *d,
                                           struct pci_host_bridge *bridge,
                                           const struct mmio_handler_ops *ops)
@@ -101,6 +128,7 @@ struct pci_ecam_ops pci_generic_ecam_ops = {
     .pci_ops    = {
         .read                  = pci_ecam_config_read,
         .write                 = pci_ecam_config_write,
+        .update_mappings       = pci_ecam_update_mappings,
         .register_mmio_handler = pci_ecam_register_mmio_handler,
     }
 };
diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
index e3a02429b8d4..d94e8a6628de 100644
--- a/xen/include/asm-arm/pci.h
+++ b/xen/include/asm-arm/pci.h
@@ -65,6 +65,7 @@ struct pci_ops {
                     uint32_t sbdf, int where, int size, u32 *val);
     int (*write)(struct pci_host_bridge *bridge,
                     uint32_t sbdf, int where, int size, u32 val);
+    int (*update_mappings)(struct domain *d, struct pci_host_bridge *bridge);
     int (*register_mmio_handler)(struct domain *d,
                                  struct pci_host_bridge *bridge,
                                  const struct mmio_handler_ops *ops);
@@ -108,6 +109,7 @@ bool dt_pci_parse_bus_range(struct dt_device_node *dev,
 int pci_host_iterate_bridges(struct domain *d,
                              int (*clb)(struct domain *d,
                                         struct pci_host_bridge *bridge));
+int pci_host_bridge_update_mappings(struct domain *d);
 #else   /*!CONFIG_ARM_PCI*/
 struct arch_pci_dev { };
 static inline void  pci_init(void) { }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:51:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22472.48830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dT-0004EK-FF; Mon, 09 Nov 2020 12:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22472.48830; Mon, 09 Nov 2020 12:51:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dT-0004EC-BS; Mon, 09 Nov 2020 12:51:03 +0000
Received: by outflank-mailman (input) for mailman id 22472;
 Mon, 09 Nov 2020 12:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dR-0003tf-Gi
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:01 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8ab388e-186e-4278-85b1-ae6cd3f5dd61;
 Mon, 09 Nov 2020 12:50:40 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id v18so10142740ljc.3
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:40 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:39 -0800 (PST)
X-Inumbo-ID: c8ab388e-186e-4278-85b1-ae6cd3f5dd61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=elscKxOwLIVI2A3WOH0c4hFjfpu+AwWsnBd8zoJ5TOA=;
        b=tcBYbtRH2U39Q1/VskmmFKup9IujI+FTY0QpLU5U/Yh07iYlW+oqc2viPjbmGpZahi
         r4yzVmyJBqSCQ+G+mjsVmM2o/9WqseNIo20QzBFJkj8ETEKmLQSec4Eg10kgzjaocY87
         VAMGGUMrPrqy4LUCnoKGM1v48QgSrYcgvBWeokYCMJdJ1Xz+8zZKiECPhs/pZ3yMOXml
         AWUkDBaLJfnEHcWrFdm7+I6bMTbFfT5bkxReyUDS7yITWvsiD55px+35Z1vjh4IqlHzV
         DiFLMCqIZ+tAtySx9n8FN3jVRl3oqVn8QU/xXFlvQKTlngXL9YOvCP8KE1lgMCyVk9jA
         tUQA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=elscKxOwLIVI2A3WOH0c4hFjfpu+AwWsnBd8zoJ5TOA=;
        b=g4kfgPFmAUOCQoi3Dk1Apr6uFRSLccSFgV9WyC08IRRaCKCRsLsJX0SgMJgeYvMkPu
         xVfTFOr4DQzywa8PX4hisQZ7uy2FriZ8KMrwnqgIH+cEmbDZidukk134AZvdm/6Li7+g
         nS6fflZbG8U/sFABCPBr86e/uFs8NLmhdVLcn/ydFpcjeJZgO4UFEPgdj04xsQIfgJsY
         QGsEwScVLUJFLmsXTd1luNydk1SBfLW5LX9fcC118qu0e0u77Jf+r7jKmgIC7kM/mphQ
         9FsXSRmv7RqUWowWmgQ6boAjoVhuGlKMCgOB/7sEl/ZshlHoCK1xY2gFWqQi158wKsB/
         dotA==
X-Gm-Message-State: AOAM532s+JcMK00HURZdCAghgQVG1rjJPFgrKGkACMRPxXWTeVuOxKsp
	q2HxWrQnbING5VmRiVy2TOo=
X-Google-Smtp-Source: ABdhPJx5rXKcbnC2dQR2fqaqdXLLpOHbJpRTETelBk1X2W2GGtWTbGTJqdBYK60sg1nIaSGbU2Z2TA==
X-Received: by 2002:a2e:5d4:: with SMTP id 203mr6214273ljf.137.1604926239612;
        Mon, 09 Nov 2020 04:50:39 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 05/10] xen/arm: Process pending vPCI map/unmap operations
Date: Mon,  9 Nov 2020 14:50:26 +0200
Message-Id: <20201109125031.26409-6-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

vPCI may map and unmap the memory (BARs) of PCI devices being passed
through, which can take a long time. These operations may therefore be
deferred and performed later, so that they can be safely preempted.
Run the corresponding vPCI code from check_for_vcpu_work(), i.e. before
returning to a vCPU.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/arch/arm/traps.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8f40d0e0b6b1..1c54dc0cdd51 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -33,6 +33,7 @@
 #include <xen/symbols.h>
 #include <xen/version.h>
 #include <xen/virtual_region.h>
+#include <xen/vpci.h>
 
 #include <public/sched.h>
 #include <public/xen.h>
@@ -2253,6 +2254,11 @@ static void check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+    local_irq_enable();
+    if ( has_vpci(v->domain) && vpci_process_pending(v) )
+        raise_softirq(SCHEDULE_SOFTIRQ);
+    local_irq_disable();
+
     if ( likely(!v->arch.need_flush_to_ram) )
         return;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:51:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22475.48842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dX-0004JH-PX; Mon, 09 Nov 2020 12:51:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22475.48842; Mon, 09 Nov 2020 12:51:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dX-0004J9-Lg; Mon, 09 Nov 2020 12:51:07 +0000
Received: by outflank-mailman (input) for mailman id 22475;
 Mon, 09 Nov 2020 12:51:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dW-0003tf-Gq
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:06 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9df39a4f-6d66-4634-bf58-8d1c09321705;
 Mon, 09 Nov 2020 12:50:41 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id r17so4485213ljg.5
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:41 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:40 -0800 (PST)
X-Inumbo-ID: 9df39a4f-6d66-4634-bf58-8d1c09321705
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=4uUYf7LbdujIXi0hlvWg2yt1O+62hgozBNucQm/XVxE=;
        b=nuwE9B9bn+tdWxbabj0uNIsC6wr3xTqxw523yvetQHGv9ujFCwj5IpveMzBLiEuxW5
         CzBHsvYtNJNSfdoCNVPN2gibjCnBV8cmoZ/G182Pq7LZ/B6bSc4p46PfxWc20qhTq4bf
         yJ5H6rcqP5nWWjUHaiFILNhnbohX21o/MgrRAUU9q3byvwB4QQlNfCcalA2T4plp6Mry
         czXMqb2Yt3f0tjy9mMUWuTWC8I8AFmbkZoRFT/pprF5e0ye9ZfPZedRyyewF7DGHSvz+
         xFNpUVJW6vsMfeXp/zN/ln613ZS9gXC5/BwunyyBjkN6wxXaIA1M1qDBwisLr3i3yREW
         L5fg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=4uUYf7LbdujIXi0hlvWg2yt1O+62hgozBNucQm/XVxE=;
        b=tGA1NsK2O635NrFwhsWnHTHJKbbs6b067pV9K8s+DGJoSdby3N54iy9wlncmhtK+PT
         +IhrmiHmDEKhkHFXc/cUYhxTC4OPhdYpu0hqFZ0pmoLX4G29iwpZ2JFMqt7ig/Jxd6j/
         DbRQISp0HZ3eIxfSLXfUKSlQkmcAjK8ZjGVXQVe/o1urdo35vq74upQ1WhytXkcMMWQn
         58vKLkwSPGAsZ4jx5TqgU9QcPPuaNBJenqSPl2HFWLxzfwOXUUCBwu5ieP5I/GKSWXF7
         62+P5b2YwNNv9uVRzj5vGof7Talwc9YU4lY9TgjuYVjdFjWS1fopqhK+O4xFjzJr3Fth
         VC6A==
X-Gm-Message-State: AOAM5339VLqtFA/TXl2oWsuZcwg+1rO2EbTCqMc3LLkGUI0RogmCRcAW
	xDNsld7jwa/3h8v0K6gN+eM=
X-Google-Smtp-Source: ABdhPJzEeHBOgXtCrM/P1zLK3bmDqq8tzFO+IcwdqhxWfb94Tao7VH7AAbtiw041BJADLaam0L9ihA==
X-Received: by 2002:a2e:b54a:: with SMTP id a10mr5764786ljn.139.1604926240752;
        Mon, 09 Nov 2020 04:50:40 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 06/10] vpci: Make every domain handle its own BARs
Date: Mon,  9 Nov 2020 14:50:27 +0200
Message-Id: <20201109125031.26409-7-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

At the moment there is an identity mapping between how a guest sees its
BARs and how they are programmed into the guest domain's p2m. This is not
going to work, as guest domains need to have their own view of the BARs.
Extend the existing vPCI BAR handling to allow every domain its own view:
only the hardware domain sees physical memory addresses; for all other
domains BAR accesses are emulated, including the logic guests need to
detect BAR sizes and properties.

While emulating BAR access for the guests, create a link between the
virtual BAR address and the physical one: use the full memory address
when creating the range sets used to map/unmap the corresponding address
spaces, and exploit the fact that a memory BAR leaves its low bits unused
for addressing (they hold the type/prefetch flags). Use those bits to
pass the physical BAR's index, so we can build/remove the proper p2m
mappings.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/drivers/vpci/header.c | 276 ++++++++++++++++++++++++++++++++++----
 xen/drivers/vpci/vpci.c   |   1 +
 xen/include/xen/vpci.h    |  24 ++--
 3 files changed, 265 insertions(+), 36 deletions(-)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index f74f728884c0..7dc7c70e24f2 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -31,14 +31,87 @@
 struct map_data {
     struct domain *d;
     bool map;
+    struct pci_dev *pdev;
 };
 
+static struct vpci_header *get_vpci_header(struct domain *d,
+                                           const struct pci_dev *pdev);
+
+static struct vpci_header *get_hwdom_vpci_header(const struct pci_dev *pdev)
+{
+    if ( unlikely(list_empty(&pdev->vpci->headers)) )
+        return get_vpci_header(hardware_domain, pdev);
+
+    /* hwdom's header is always the very first entry. */
+    return list_first_entry(&pdev->vpci->headers, struct vpci_header, node);
+}
+
+static struct vpci_header *get_vpci_header(struct domain *d,
+                                           const struct pci_dev *pdev)
+{
+    struct list_head *prev;
+    struct vpci_header *header;
+    struct vpci *vpci = pdev->vpci;
+
+    list_for_each( prev, &vpci->headers )
+    {
+        struct vpci_header *this = list_entry(prev, struct vpci_header, node);
+
+        if ( this->domain_id == d->domain_id )
+            return this;
+    }
+    printk(XENLOG_DEBUG
+           "Adding new vPCI BAR headers for domain %d: " PRI_pci "\n",
+           d->domain_id, pdev->sbdf.seg, pdev->sbdf.bus,
+           pdev->sbdf.dev, pdev->sbdf.fn);
+    header = xzalloc(struct vpci_header);
+    if ( !header )
+    {
+        printk(XENLOG_ERR
+               "Failed to add new vPCI BAR headers for domain %d: " PRI_pci "\n",
+               d->domain_id, pdev->sbdf.seg, pdev->sbdf.bus,
+               pdev->sbdf.dev, pdev->sbdf.fn);
+        return NULL;
+    }
+
+    if ( !is_hardware_domain(d) )
+    {
+        struct vpci_header *hwdom_header = get_hwdom_vpci_header(pdev);
+
+        /* Make a copy of the hwdom's BARs as the initial state for vBARs. */
+        memcpy(header, hwdom_header, sizeof(*header));
+    }
+
+    header->domain_id = d->domain_id;
+    list_add_tail(&header->node, &vpci->headers);
+    return header;
+}
+
+static struct vpci_bar *get_vpci_bar(struct domain *d,
+                                     const struct pci_dev *pdev,
+                                     int bar_idx)
+{
+    struct vpci_header *vheader;
+
+    vheader = get_vpci_header(d, pdev);
+    if ( !vheader )
+        return NULL;
+
+    return &vheader->bars[bar_idx];
+}
+
 static int map_range(unsigned long s, unsigned long e, void *data,
                      unsigned long *c)
 {
     const struct map_data *map = data;
-    int rc;
-
+    mfn_t mfn;
+    int rc, bar_idx;
+    struct vpci_header *header = get_hwdom_vpci_header(map->pdev);
+
+    bar_idx = s & ~PCI_BASE_ADDRESS_MEM_MASK;
+    s = PFN_DOWN(s);
+    e = PFN_DOWN(e);
+    mfn = _mfn(PFN_DOWN(header->bars[bar_idx].addr));
     for ( ; ; )
     {
         unsigned long size = e - s + 1;
@@ -52,11 +125,15 @@ static int map_range(unsigned long s, unsigned long e, void *data,
          * - {un}map_mmio_regions doesn't support preemption.
          */
 
-        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
-                      : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
+        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, mfn)
+                      : unmap_mmio_regions(map->d, _gfn(s), size, mfn);
         if ( rc == 0 )
         {
-            *c += size;
+            /*
+             * The rangeset is expressed in byte addresses while size is
+             * a number of frames, so scale the consumed amount accordingly.
+             */
+            *c += size << PAGE_SHIFT;
             break;
         }
         if ( rc < 0 )
@@ -67,8 +144,9 @@ static int map_range(unsigned long s, unsigned long e, void *data,
             break;
         }
         ASSERT(rc < size);
-        *c += rc;
+        *c += rc << PAGE_SHIFT;
         s += rc;
+        mfn = mfn_add(mfn, rc);
         if ( general_preempt_check() )
                 return -ERESTART;
     }
@@ -84,7 +162,7 @@ static int map_range(unsigned long s, unsigned long e, void *data,
 static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
                             bool rom_only)
 {
-    struct vpci_header *header = &pdev->vpci->header;
+    struct vpci_header *header = get_hwdom_vpci_header(pdev);
     bool map = cmd & PCI_COMMAND_MEMORY;
     unsigned int i;
 
@@ -136,6 +214,7 @@ bool vpci_process_pending(struct vcpu *v)
         struct map_data data = {
             .d = v->domain,
             .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
+            .pdev = v->vpci.pdev,
         };
         int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
 
@@ -168,7 +247,8 @@ bool vpci_process_pending(struct vcpu *v)
 static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
                             struct rangeset *mem, uint16_t cmd)
 {
-    struct map_data data = { .d = d, .map = true };
+    struct map_data data = { .d = d, .map = true,
+        .pdev = (struct pci_dev *)pdev };
     int rc;
 
     while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
@@ -205,7 +285,7 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
 
 static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
 {
-    struct vpci_header *header = &pdev->vpci->header;
+    struct vpci_header *header;
     struct rangeset *mem = rangeset_new(NULL, NULL, 0);
     struct pci_dev *tmp, *dev = NULL;
 #ifdef CONFIG_X86
@@ -217,6 +297,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
     if ( !mem )
         return -ENOMEM;
 
+    if ( is_hardware_domain(current->domain) )
+        header = get_hwdom_vpci_header(pdev);
+    else
+        header = get_vpci_header(current->domain, pdev);
+
     /*
      * Create a rangeset that represents the current device BARs memory region
      * and compare it against all the currently active BAR memory regions. If
@@ -225,12 +310,15 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
      * First fill the rangeset with all the BARs of this device or with the ROM
      * BAR only, depending on whether the guest is toggling the memory decode
      * bit of the command register, or the enable bit of the ROM BAR register.
+     *
+     * Use the PCI reserved bits of the BAR to pass BAR's index.
      */
     for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
     {
         const struct vpci_bar *bar = &header->bars[i];
-        unsigned long start = PFN_DOWN(bar->addr);
-        unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
+        unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
+        unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) +
+            bar->size - 1;
 
         if ( !MAPPABLE_BAR(bar) ||
              (rom_only ? bar->type != VPCI_BAR_ROM
@@ -251,9 +339,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
     /* Remove any MSIX regions if present. */
     for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
     {
-        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
-        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
-                                     vmsix_table_size(pdev->vpci, i) - 1);
+        unsigned long start = (vmsix_table_addr(pdev->vpci, i) &
+                               PCI_BASE_ADDRESS_MEM_MASK) | i;
+        unsigned long end = (vmsix_table_addr(pdev->vpci, i) &
+                             PCI_BASE_ADDRESS_MEM_MASK ) +
+                             vmsix_table_size(pdev->vpci, i) - 1;
 
         rc = rangeset_remove_range(mem, start, end);
         if ( rc )
@@ -273,6 +363,8 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
      */
     for_each_pdev ( pdev->domain, tmp )
     {
+        struct vpci_header *header;
+
         if ( tmp == pdev )
         {
             /*
@@ -289,11 +381,14 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
                 continue;
         }
 
-        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
+        header = get_vpci_header(tmp->domain, tmp);
+
+        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
         {
-            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
-            unsigned long start = PFN_DOWN(bar->addr);
-            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
+            const struct vpci_bar *bar = &header->bars[i];
+            unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
+            unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK)
+                + bar->size - 1;
 
             if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
                  /*
@@ -357,7 +452,7 @@ static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
         pci_conf_write16(pdev->sbdf, reg, cmd);
 }
 
-static void bar_write(const struct pci_dev *pdev, unsigned int reg,
+static void bar_write_hwdom(const struct pci_dev *pdev, unsigned int reg,
                       uint32_t val, void *data)
 {
     struct vpci_bar *bar = data;
@@ -377,14 +472,17 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
     {
         /* If the value written is the current one avoid printing a warning. */
         if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
+        {
+            struct vpci_header *header = get_hwdom_vpci_header(pdev);
+
             gprintk(XENLOG_WARNING,
                     "%04x:%02x:%02x.%u: ignored BAR %lu write with memory decoding enabled\n",
                     pdev->seg, pdev->bus, slot, func,
-                    bar - pdev->vpci->header.bars + hi);
+                    bar - header->bars + hi);
+        }
         return;
     }
 
-
     /*
      * Update the cached address, so that when memory decoding is enabled
      * Xen can map the BAR into the guest p2m.
@@ -403,10 +501,89 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
     pci_conf_write32(pdev->sbdf, reg, val);
 }
 
+static uint32_t bar_read_hwdom(const struct pci_dev *pdev, unsigned int reg,
+                               void *data)
+{
+    return vpci_hw_read32(pdev, reg, data);
+}
+
+static void bar_write_guest(const struct pci_dev *pdev, unsigned int reg,
+                            uint32_t val, void *data)
+{
+    struct vpci_bar *vbar = data;
+    bool hi = false;
+
+    if ( vbar->type == VPCI_BAR_MEM64_HI )
+    {
+        ASSERT(reg > PCI_BASE_ADDRESS_0);
+        vbar--;
+        hi = true;
+    }
+    vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
+    vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
+}
+
+static uint32_t bar_read_guest(const struct pci_dev *pdev, unsigned int reg,
+                               void *data)
+{
+    struct vpci_bar *vbar = data;
+    uint32_t val;
+    bool hi = false;
+
+    if ( vbar->type == VPCI_BAR_MEM64_HI )
+    {
+        ASSERT(reg > PCI_BASE_ADDRESS_0);
+        vbar--;
+        hi = true;
+    }
+
+    if ( vbar->type == VPCI_BAR_MEM64_LO || vbar->type == VPCI_BAR_MEM64_HI )
+    {
+        if ( hi )
+            val = vbar->addr >> 32;
+        else
+            val = vbar->addr & 0xffffffff;
+        if ( val == ~0 )
+        {
+            /* Guest is detecting the BAR's properties and size. */
+            if ( !hi )
+            {
+                val = 0xffffffff & ~(vbar->size - 1);
+                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
+                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
+                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
+            }
+            else
+                val = vbar->size >> 32;
+            vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
+            vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
+        }
+    }
+    else if ( vbar->type == VPCI_BAR_MEM32 )
+    {
+        val = vbar->addr;
+        if ( val == ~0 )
+        {
+            if ( !hi )
+            {
+                val = 0xffffffff & ~(vbar->size - 1);
+                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
+                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
+                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
+            }
+        }
+    }
+    else
+    {
+        val = vbar->addr;
+    }
+    return val;
+}
+
 static void rom_write(const struct pci_dev *pdev, unsigned int reg,
                       uint32_t val, void *data)
 {
-    struct vpci_header *header = &pdev->vpci->header;
+    struct vpci_header *header = get_hwdom_vpci_header(pdev);
     struct vpci_bar *rom = data;
     uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
@@ -452,15 +629,56 @@ static void rom_write(const struct pci_dev *pdev, unsigned int reg,
         rom->addr = val & PCI_ROM_ADDRESS_MASK;
 }
 
+static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
+                                  void *data)
+{
+    struct vpci_bar *vbar, *bar = data;
+
+    if ( is_hardware_domain(current->domain) )
+        return bar_read_hwdom(pdev, reg, data);
+
+    vbar = get_vpci_bar(current->domain, pdev, bar->index);
+    if ( !vbar )
+        return ~0;
+
+    return bar_read_guest(pdev, reg, vbar);
+}
+
+static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
+                               uint32_t val, void *data)
+{
+    struct vpci_bar *bar = data;
+
+    if ( is_hardware_domain(current->domain) )
+        bar_write_hwdom(pdev, reg, val, data);
+    else
+    {
+        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
+
+        if ( !vbar )
+            return;
+        bar_write_guest(pdev, reg, val, vbar);
+    }
+}
+
+/*
+ * FIXME: This is called early while adding vPCI handlers which is done
+ * by and for hwdom.
+ */
 static int init_bars(struct pci_dev *pdev)
 {
     uint16_t cmd;
     uint64_t addr, size;
     unsigned int i, num_bars, rom_reg;
-    struct vpci_header *header = &pdev->vpci->header;
-    struct vpci_bar *bars = header->bars;
+    struct vpci_header *header;
+    struct vpci_bar *bars;
     int rc;
 
+    header = get_hwdom_vpci_header(pdev);
+    if ( !header )
+        return -ENOMEM;
+    bars = header->bars;
+
     switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
     {
     case PCI_HEADER_TYPE_NORMAL:
@@ -496,11 +714,12 @@ static int init_bars(struct pci_dev *pdev)
         uint8_t reg = PCI_BASE_ADDRESS_0 + i * 4;
         uint32_t val;
 
+        bars[i].index = i;
         if ( i && bars[i - 1].type == VPCI_BAR_MEM64_LO )
         {
             bars[i].type = VPCI_BAR_MEM64_HI;
-            rc = vpci_add_register(pdev->vpci, vpci_hw_read32, bar_write, reg,
-                                   4, &bars[i]);
+            rc = vpci_add_register(pdev->vpci, bar_read_dispatch,
+                                   bar_write_dispatch, reg, 4, &bars[i]);
             if ( rc )
             {
                 pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
@@ -540,8 +759,8 @@ static int init_bars(struct pci_dev *pdev)
         bars[i].size = size;
         bars[i].prefetchable = val & PCI_BASE_ADDRESS_MEM_PREFETCH;
 
-        rc = vpci_add_register(pdev->vpci, vpci_hw_read32, bar_write, reg, 4,
-                               &bars[i]);
+        rc = vpci_add_register(pdev->vpci, bar_read_dispatch,
+                               bar_write_dispatch, reg, 4, &bars[i]);
         if ( rc )
         {
             pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
@@ -558,6 +777,7 @@ static int init_bars(struct pci_dev *pdev)
         rom->type = VPCI_BAR_ROM;
         rom->size = size;
         rom->addr = addr;
+        rom->index = num_bars;
         header->rom_enabled = pci_conf_read32(pdev->sbdf, rom_reg) &
                               PCI_ROM_ADDRESS_ENABLE;
 
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index a5293521a36a..728029da3e9c 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -69,6 +69,7 @@ int __hwdom_init vpci_add_handlers(struct pci_dev *pdev)
         return -ENOMEM;
 
     INIT_LIST_HEAD(&pdev->vpci->handlers);
+    INIT_LIST_HEAD(&pdev->vpci->headers);
     spin_lock_init(&pdev->vpci->lock);
 
     for ( i = 0; i < NUM_VPCI_INIT; i++ )
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index c3501e9ec010..54423bc6556d 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -55,16 +55,14 @@ uint32_t vpci_hw_read32(const struct pci_dev *pdev, unsigned int reg,
  */
 bool __must_check vpci_process_pending(struct vcpu *v);
 
-struct vpci {
-    /* List of vPCI handlers for a device. */
-    struct list_head handlers;
-    spinlock_t lock;
-
 #ifdef __XEN__
-    /* Hide the rest of the vpci struct from the user-space test harness. */
     struct vpci_header {
+    struct list_head node;
+    /* Domain that owns this view of the BARs. */
+    domid_t domain_id;
         /* Information about the PCI BARs of this device. */
         struct vpci_bar {
+            int index;
             uint64_t addr;
             uint64_t size;
             enum {
@@ -88,8 +86,18 @@ struct vpci {
          * is mapped into guest p2m) if there's a ROM BAR on the device.
          */
         bool rom_enabled      : 1;
-        /* FIXME: currently there's no support for SR-IOV. */
-    } header;
+};
+#endif
+
+struct vpci {
+    /* List of vPCI handlers for a device. */
+    struct list_head handlers;
+    spinlock_t lock;
+
+#ifdef __XEN__
+    /* Hide the rest of the vpci struct from the user-space test harness. */
+    /* List of vPCI headers for all domains. */
+    struct list_head headers;
 
 #ifdef CONFIG_X86
     /* MSI data. */
-- 
2.17.1
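The `bar_read_dispatch`/`bar_write_dispatch` helpers added by this patch implement a simple routing scheme: accesses from the hardware domain hit its (physical) view of the BARs, while guest accesses are redirected to a per-domain virtual view, returning all-ones when no such view exists. A standalone sketch of the idea (the domain and BAR types here are simplified stand-ins for Xen's `struct vpci_bar` and the `get_vpci_bar()` lookup, not the actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct bar {
    uint64_t addr;
};

/* One physical (hwdom) view plus a per-guest virtual view of the same BAR. */
struct device {
    struct bar hwdom_bar;
    struct bar guest_bar;   /* would be looked up per-domain in Xen */
    bool guest_has_view;
};

/* Mirrors bar_read_dispatch(): hwdom reads real state, guests read the vBAR. */
static uint64_t bar_read_dispatch(const struct device *dev, bool is_hwdom)
{
    if ( is_hwdom )
        return dev->hwdom_bar.addr;

    if ( !dev->guest_has_view )
        return ~(uint64_t)0;    /* no view for this domain: all-ones */

    return dev->guest_bar.addr;
}
```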



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:51:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22480.48854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dd-0004QU-CV; Mon, 09 Nov 2020 12:51:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22480.48854; Mon, 09 Nov 2020 12:51:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dd-0004QF-5l; Mon, 09 Nov 2020 12:51:13 +0000
Received: by outflank-mailman (input) for mailman id 22480;
 Mon, 09 Nov 2020 12:51:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6db-0003tf-H3
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:11 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc348a48-5985-497a-b14c-cb2c93794339;
 Mon, 09 Nov 2020 12:50:43 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id 23so10119541ljv.7
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:42 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:41 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
	id 1kc6db-0003tf-H3
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:11 +0000
X-Inumbo-ID: dc348a48-5985-497a-b14c-cb2c93794339
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id dc348a48-5985-497a-b14c-cb2c93794339;
	Mon, 09 Nov 2020 12:50:43 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id 23so10119541ljv.7
        for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=eTzgC8wjlZZ5dW1kDk0Apl54Fj76f3KrpZyQEpkVwKo=;
        b=Q+1kyGk5oARHjB8whVHpRnlcQGZvtUwLqYbMqdS2JgNepCo6HOCtrkgLU4lRbnd8Yh
         HiwTsu7E+ujayTEFUS5+o8cbaTGs5LrUWHtG8VOtipdlzsAgSrFWEGZTqySzl6Xpr82Q
         ILuyttgBKhz0I8XuKJgP0/RHWouG8xbM/KEMucYHrWTmTpk3XLk0oxxKJfjqSZ3MsOaP
         w1F2b3JLfnFvA8OdZObmhzlJfBUkmeIyQciEUQaNYChFd7vCou/T1NEBFjGUNiRItvT7
         sAjBFf/ixmlga0rH4j6IGQCEKR8S43/jU9w9FSDawd0FME2xd9VysITBFGikF6FrXXKH
         OJfA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=eTzgC8wjlZZ5dW1kDk0Apl54Fj76f3KrpZyQEpkVwKo=;
        b=FwAEvUhu6vBOIcoF8r9MpUnpLFiyM3W6u0GDQMZvU39JAPHIifQlcmkHXnUslyQu+I
         08jCxlV8YDbxeiwtvaCmxkbOiCEjbUjkYPGVdEqHdfQJgyFlBSH8//Olj0KE3e5XcVxz
         FTZs7ZkA5LemYgjzi+Xpn6mfyuPQb0quQHal9BcCNoRnVpeyb4g7YeXKFLR27DDAfJpR
         ZpkE8XJYB1IirGKx+amHYdyjT/2GsSi5RCiJADuY29fp+bbzqf3c4T4mU4By/sP2fxl9
         ZW88nxNvXI6iRvX9TPxsd6Yp+JZ7o838J5hsNKP/S8v/9tzkuZBvlACYhgJBm2NAgV6e
         6KlQ==
X-Gm-Message-State: AOAM533eFebnC6GcK4evJ7WnGOfagamGOQnIFc25IuJ60BsIljpJnZ5G
	LuKljgnzfAwYBGbEYSnGMjBe1r3DpfZdD5c7
X-Google-Smtp-Source: ABdhPJwcPThVD8W/eKCaLs5uWrQq++GkIkgz3jehLAFccyEViZGB673ffnnnOCit0z69w08y6AN6Vg==
X-Received: by 2002:a2e:b701:: with SMTP id j1mr6333703ljo.242.1604926241939;
        Mon, 09 Nov 2020 04:50:41 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
        by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.40
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 09 Nov 2020 04:50:41 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 07/10] xen/arm: Do not hardcode physical PCI device addresses
Date: Mon,  9 Nov 2020 14:50:28 +0200
Message-Id: <20201109125031.26409-8-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

As vPCI now takes care of the proper p2m mappings for PCI devices, there
is no longer a need to hardcode the physical addresses.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/include/public/arch-arm.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 2411ac9f7b0a..59baf1014fe3 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -444,15 +444,8 @@ typedef uint64_t xen_callback_t;
 #define GUEST_VPCI_MEM_CPU_ADDR           xen_mk_ullong(0x04020000)
 #define GUEST_VPCI_IO_CPU_ADDR            xen_mk_ullong(0xC0200800)
 
-/*
- * This is hardcoded values for the real PCI physical addresses.
- * This will be removed once we read the real PCI-PCIe physical
- * addresses form the config space and map to the guest memory map
- * when assigning the device to guest via VPCI.
- *
- */
 #define GUEST_VPCI_PREFETCH_MEM_PCI_ADDR  xen_mk_ullong(0x4000000000)
-#define GUEST_VPCI_MEM_PCI_ADDR           xen_mk_ullong(0x50000000)
+#define GUEST_VPCI_MEM_PCI_ADDR           xen_mk_ullong(0x04020000)
 #define GUEST_VPCI_IO_PCI_ADDR            xen_mk_ullong(0x00000000)
 
 #define GUEST_VPCI_PREFETCH_MEM_SIZE      xen_mk_ullong(0x100000000)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:51:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:51:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22485.48866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6di-0004XL-Mf; Mon, 09 Nov 2020 12:51:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22485.48866; Mon, 09 Nov 2020 12:51:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6di-0004X6-IX; Mon, 09 Nov 2020 12:51:18 +0000
Received: by outflank-mailman (input) for mailman id 22485;
 Mon, 09 Nov 2020 12:51:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dg-0003tf-HB
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:16 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9373c888-deb2-488e-818e-fa0740b97367;
 Mon, 09 Nov 2020 12:50:44 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id f11so5953619lfs.3
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:44 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:42 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
	id 1kc6dg-0003tf-HB
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:16 +0000
X-Inumbo-ID: 9373c888-deb2-488e-818e-fa0740b97367
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9373c888-deb2-488e-818e-fa0740b97367;
	Mon, 09 Nov 2020 12:50:44 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id f11so5953619lfs.3
        for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=nvNmRBs0yMdCUG+5+G3i3M5jvG9+T7PAVH/ULtYwZd8=;
        b=BhLucB5unsgmpqCexxIviXoItX2knYBdsgr+yYELfc6dr8nME0oNQOEI3yc8ktBgNQ
         j5qG9gXFioPOtAz9b1XK70s8Q7w3Qo+on1JDA/vcCoN1xAwNuClbRiCFKnIpiKdJ/2bK
         fGyPd6VUEqYTvU9skirT/6Fp6HSM6ToAyr52K+IoSd8ASPbwjfS4EiYUIrBFa6m9ZA4B
         Reext1joqqooXwsl7kXl7+z2os2td3jxJkZlcBWSrQcGrT2tX8xo18fauv0twLpimoY8
         WojJROH0jW7I9TJnPUy8gNbladV7e9Oz9GkwPe5VVe+QHLcX6aw+rG7/46lOqMPmbeY8
         SN5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=nvNmRBs0yMdCUG+5+G3i3M5jvG9+T7PAVH/ULtYwZd8=;
        b=WXirXJ3Slfo/KxpylhnAgvXlFO8qIn479DhhaM7jHHnU0nNOsecY5K+i1FHRLeDbqd
         iFnPIJ1cGSTANWq6CEz1Exe1TYdGQ8wC1udjsbQ1kILx3DwhQ6aDxsNR7GWcwRNpsiO9
         kExnRtwJ3+latMRh6r8no/agu965hLcq9xH8XooQWq2ybfbkzJ/j9AuEK+OQfkYxUiU5
         MbUldKT9zoJt/FunsRiFt8v2u9udiBhy6Y02pG88MWt/JoakDTuUiied51TA9ex1e9Uy
         H77rR01TPYPH5zYx4/YPiDPlxhfLeaADKelyQNzARgalDz/nz6sFC8zmkx3jIXLg7KsW
         1LOg==
X-Gm-Message-State: AOAM531q+C3dP2JhgSn+vzeohBvBKv0XXmPlxjAccgirkgbnbN893Vzm
	9yd1TCEZVtMKx5X5m1BNKnA=
X-Google-Smtp-Source: ABdhPJwIxNZ+eFf7k5nHNhhjoeffUH3DJiF6DgEDs7OPTcsBJLtqcKWT+tf2/kVSMShVjyHxOj/DWw==
X-Received: by 2002:ac2:57d3:: with SMTP id k19mr6188423lfo.386.1604926243125;
        Mon, 09 Nov 2020 04:50:43 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
        by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.42
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 09 Nov 2020 04:50:42 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM bridges
Date: Mon,  9 Nov 2020 14:50:29 +0200
Message-Id: <20201109125031.26409-9-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Non-ECAM host bridges in hwdom access PCI config space directly, not
through vPCI (they use bridge-specific methods, e.g. dedicated registers,
for accessing PCI configuration). Thus hwdom's vPCI BARs are never
updated via the vPCI MMIO handlers. Implement a dedicated callback for a
PCI host bridge, so it has a chance to update the initial state of the
device BARs.

Note that we rely on the fact that the control/hardware domain will not
relocate the physical BARs of the given devices.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/arch/arm/pci/pci-host-common.c | 13 +++++++++++++
 xen/drivers/vpci/header.c          |  9 ++++++++-
 xen/include/asm-arm/pci.h          |  8 ++++++++
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index b6c4d7b636b1..5f4239afa41f 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -250,6 +250,19 @@ int pci_host_bridge_update_mappings(struct domain *d)
     return pci_host_iterate_bridges(d, pci_host_bridge_update_mapping);
 }
 
+void pci_host_bridge_update_bar_header(const struct pci_dev *pdev,
+                                       struct vpci_header *header)
+{
+    struct pci_host_bridge *bridge;
+
+    bridge = pci_find_host_bridge(pdev->seg, pdev->bus);
+    if ( unlikely(!bridge) )
+        return;
+
+    if ( bridge->ops->update_bar_header )
+        bridge->ops->update_bar_header(pdev, header);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 7dc7c70e24f2..1f326c894d16 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -77,7 +77,14 @@ static struct vpci_header *get_vpci_header(struct domain *d,
     if ( !is_hardware_domain(d) )
     {
         struct vpci_header *hwdom_header = get_hwdom_vpci_header(pdev);
-
+#ifdef CONFIG_ARM
+        /*
+         * Non-ECAM host bridges in hwdom go directly to PCI
+         * config space, not through vpci. Thus hwdom's vpci BARs are
+         * never updated.
+         */
+        pci_host_bridge_update_bar_header(pdev, hwdom_header);
+#endif
         /* Make a copy of the hwdom's BARs as the initial state for vBARs. */
         memcpy(header, hwdom_header, sizeof(*header));
     }
diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
index d94e8a6628de..723b2a99b6e1 100644
--- a/xen/include/asm-arm/pci.h
+++ b/xen/include/asm-arm/pci.h
@@ -60,6 +60,9 @@ struct pci_config_window {
 /* Forward declaration as pci_host_bridge and pci_ops depend on each other. */
 struct pci_host_bridge;
 
+struct pci_dev;
+struct vpci_header;
+
 struct pci_ops {
     int (*read)(struct pci_host_bridge *bridge,
                     uint32_t sbdf, int where, int size, u32 *val);
@@ -69,6 +72,8 @@ struct pci_ops {
     int (*register_mmio_handler)(struct domain *d,
                                  struct pci_host_bridge *bridge,
                                  const struct mmio_handler_ops *ops);
+    void (*update_bar_header)(const struct pci_dev *pdev,
+                              struct vpci_header *header);
 };
 
 /*
@@ -110,6 +115,9 @@ int pci_host_iterate_bridges(struct domain *d,
                              int (*clb)(struct domain *d,
                                         struct pci_host_bridge *bridge));
 int pci_host_bridge_update_mappings(struct domain *d);
+void pci_host_bridge_update_bar_header(const struct pci_dev *pdev,
+                                       struct vpci_header *header);
+
 #else   /*!CONFIG_ARM_PCI*/
 struct arch_pci_dev { };
 static inline void  pci_init(void) { }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:51:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:51:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22489.48878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dn-0004d0-0P; Mon, 09 Nov 2020 12:51:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22489.48878; Mon, 09 Nov 2020 12:51:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6dm-0004cp-Su; Mon, 09 Nov 2020 12:51:22 +0000
Received: by outflank-mailman (input) for mailman id 22489;
 Mon, 09 Nov 2020 12:51:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dl-0003tf-HP
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:21 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aabbabcf-60b4-4731-b85c-19f807f3690c;
 Mon, 09 Nov 2020 12:50:45 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id o24so3432507ljj.6
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:45 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:43 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
	id 1kc6dl-0003tf-HP
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:21 +0000
X-Inumbo-ID: aabbabcf-60b4-4731-b85c-19f807f3690c
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id aabbabcf-60b4-4731-b85c-19f807f3690c;
	Mon, 09 Nov 2020 12:50:45 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id o24so3432507ljj.6
        for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=eL99YnEMb+Fyva1t76PgaT1/+PMUISpx/gf+bW+r/cI=;
        b=HIz0YaOhkTYT9bi7q3vjfOFzs5nAscZaPl/ok3a0KcURNAkpcnAiB3ZSQlhsq94anU
         8pcS45urcHNlE+SWTJ0ceIR/qq5Ap5XZjYN2mGXHgu8WxpgtxzKUygvzDM/rX7vg5fJ3
         B8gahy9qZ0YZYpztX6gD1R1fHu4UKOXdBQE3VFJEU81+YCH2Di3atrwl0TMwAPJ/dhQl
         oNqcHsI2aiS/wtwHpxkWtq0Y7V9OoE6Wx8fktrP9tuvfzgmMlF1LgklQy2Z/EyDdb+D6
         +ly2E/EHTcQmmmUjMwzhSzyMj9hdv2saHe6ey0FFT9txiM63byBTE/ydvFb9NVMYmVTT
         gTGg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=eL99YnEMb+Fyva1t76PgaT1/+PMUISpx/gf+bW+r/cI=;
        b=cXzlTRHJBu0qvY5ZK9A83FGA23XaMQptoUeCV2rUQk89dwkmvE593A3/2z2WNGh3bk
         W8d7ZbOrwikR0ck+P2tHhEoG4Fw2kGt6TbgBqXN+Ie39QGh/0lcUHNWR0l53DL+PRiJq
         sHkhtCbK7pivfZPA8qQvVAp6kM4eItKU5ACg2B3lY5qoyW/y9bu7W3Ya4Mz0EMo57dwQ
         FhUC6K6O8DIkR4Xs7O+SX8YTwMdJCFVrWOqd4iPMdq3ctPtW/D3FJpRFo0nCnwUaTBVA
         JZEIxxOl1YCaUdr9aF8/LsnkSGQP9oC1EL7YL91JN8UHvx8QYWILQySL7ymtae/3IL9S
         ZMMA==
X-Gm-Message-State: AOAM532eP1DoZq3dCQQjlMAg5yYpphmETjbP3VbNeDHHj598LMWNRYrL
	DG9A76mt76B4hQs7MJ6pADk=
X-Google-Smtp-Source: ABdhPJxgeKI9dEyPqiYGy3kiXhmC5roPNZKfR08jh66HPMOL1bYYFLCoIBl0CtpPpcDIcSW8vJH91w==
X-Received: by 2002:a2e:9207:: with SMTP id k7mr1622007ljg.71.1604926244200;
        Mon, 09 Nov 2020 04:50:44 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
        by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.43
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 09 Nov 2020 04:50:43 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 09/10] vpci/rcar: Implement vPCI.update_bar_header callback
Date: Mon,  9 Nov 2020 14:50:30 +0200
Message-Id: <20201109125031.26409-10-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Update the hardware domain's BAR header: R-Car Gen3 is a non-ECAM host
controller, so the vPCI MMIO handlers never see its config space
accesses in hwdom.
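The core of reading a hardware 64-bit memory BAR back into the vPCI view, as the callback below does for `VPCI_BAR_MEM64_LO`, is combining the two consecutive 32-bit config-space reads. A minimal sketch of that combination (the `conf_read32` helper and the fake register layout are simplified stand-ins for Xen's `pci_conf_read32` and real config space):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for pci_conf_read32(): reads a dword from a
 * fake config space indexed by byte offset. */
static uint32_t conf_read32(const uint32_t *cfg, unsigned int reg)
{
    return cfg[reg / 4];
}

/* Combine the lo/hi halves of a 64-bit memory BAR, as the hwbar
 * init callback does for VPCI_BAR_MEM64_LO entries. */
static uint64_t read_bar64(const uint32_t *cfg, unsigned int reg)
{
    return (uint64_t)conf_read32(cfg, reg + 4) << 32 |
           conf_read32(cfg, reg);
}
```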

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/arch/arm/pci/pci-host-rcar-gen3.c | 69 +++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/xen/arch/arm/pci/pci-host-rcar-gen3.c b/xen/arch/arm/pci/pci-host-rcar-gen3.c
index ec14bb29a38b..353ac2bfd6e6 100644
--- a/xen/arch/arm/pci/pci-host-rcar-gen3.c
+++ b/xen/arch/arm/pci/pci-host-rcar-gen3.c
@@ -23,6 +23,7 @@
 #include <xen/pci.h>
 #include <asm/pci.h>
 #include <xen/vmap.h>
+#include <xen/vpci.h>
 
 /* Error values that may be returned by PCI functions */
 #define PCIBIOS_SUCCESSFUL		0x00
@@ -307,12 +308,80 @@ int pci_rcar_gen3_config_write(struct pci_host_bridge *bridge, uint32_t _sbdf,
     return ret;
 }
 
+static void pci_rcar_gen3_hwbar_init(const struct pci_dev *pdev,
+                                     struct vpci_header *header)
+
+{
+    static bool once = true;
+    struct vpci_bar *bars = header->bars;
+    unsigned int num_bars;
+    int i;
+
+    /* Run only once. */
+    if (!once)
+        return;
+    once = false;
+
+    printk("\n\n ------------------------ %s -------------------\n", __func__);
+    switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
+    {
+    case PCI_HEADER_TYPE_NORMAL:
+        num_bars = PCI_HEADER_NORMAL_NR_BARS;
+        break;
+
+    case PCI_HEADER_TYPE_BRIDGE:
+        num_bars = PCI_HEADER_BRIDGE_NR_BARS;
+        break;
+
+    default:
+        return;
+    }
+
+    for ( i = 0; i < num_bars; i++ )
+    {
+        uint8_t reg = PCI_BASE_ADDRESS_0 + i * 4;
+
+        if ( bars[i].type == VPCI_BAR_MEM64_HI )
+        {
+            /*
+             * Skip hi part of the 64-bit register: it is read
+             * together with the lower part.
+             */
+            continue;
+        }
+
+        if ( bars[i].type == VPCI_BAR_IO )
+        {
+            /* Skip IO. */
+            continue;
+        }
+
+        if ( bars[i].type == VPCI_BAR_MEM64_LO )
+        {
+            /* Read both hi and lo parts of the 64-bit BAR. */
+            bars[i].addr =
+                (uint64_t)pci_conf_read32(pdev->sbdf, reg + 4) << 32 |
+                pci_conf_read32(pdev->sbdf, reg);
+        }
+        else if ( bars[i].type == VPCI_BAR_MEM32 )
+        {
+            bars[i].addr = pci_conf_read32(pdev->sbdf, reg);
+        }
+        else
+        {
+            /* Expansion ROM? */
+            continue;
+        }
+    }
+}
+
 /* R-Car Gen3 ops */
 static struct pci_ecam_ops pci_rcar_gen3_ops = {
     .bus_shift  = 20, /* FIXME: this is not used by RCar */
     .pci_ops    = {
         .read       = pci_rcar_gen3_config_read,
         .write      = pci_rcar_gen3_config_write,
+        .update_bar_header = pci_rcar_gen3_hwbar_init,
     }
 };
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 12:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 12:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22530.48890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6lq-0005HW-Oi; Mon, 09 Nov 2020 12:59:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22530.48890; Mon, 09 Nov 2020 12:59:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6lq-0005HP-Lf; Mon, 09 Nov 2020 12:59:42 +0000
Received: by outflank-mailman (input) for mailman id 22530;
 Mon, 09 Nov 2020 12:59:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1kc6dq-0003tf-HT
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:26 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82cd251f-0a00-4bda-afad-5c275205e25e;
 Mon, 09 Nov 2020 12:50:46 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id s9so8391861ljo.11
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:46 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 09 Nov 2020 04:50:44 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zl3J=EP=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
	id 1kc6dq-0003tf-HT
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 12:51:26 +0000
X-Inumbo-ID: 82cd251f-0a00-4bda-afad-5c275205e25e
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 82cd251f-0a00-4bda-afad-5c275205e25e;
	Mon, 09 Nov 2020 12:50:46 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id s9so8391861ljo.11
        for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 04:50:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=LgLl6GM036W1KRVdmwK/U4Ej5BMvrSeEvhXdIuom+ac=;
        b=iVAQBM0OfQj5dQiwPcEeoDLByL/4ourFSArcjdgeei6AGbR9xIJ9A6b1V7HKm4Sa1G
         6CFmgvdfXlktq0S1j7o3FB+nxuUnNv13LHnW7nWGaD7A37gu5gowSHhDfeP6x5TwXooG
         h8DGnR36m1NMJUxH74sENUrm9yHZenghpc8Sbe87gsrDe0s984czdwWuN3wxA7/nRtMU
         eMYusiu0i0FP63MKqiNSFKmnZdTZWL465Ie/ggj2VLUR28u0HXtd+LFWQmX8Aqr32yJH
         svYFxvWVec3cKbaOJI46lOvsWbfsZkaC+8LWH9DEYGsOGHjkGXH2W7E3XZe/B4EsRbmS
         cwqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=LgLl6GM036W1KRVdmwK/U4Ej5BMvrSeEvhXdIuom+ac=;
        b=g8f2/DTfwYzBF6jXtYvfbksz3V9YKuMmDvfPphjeXHG6lD8xHNVy0kxhIbbTLrOMUh
         K1jqBQHYEpmo0jlyyaEe9BQuxxGBwK9teGRyudV7zx+WZFCPuBYuXHspO+kaYWq5BwHy
         EPvC3BmCc+huB5Kz7aEcXtGx89zyHgXGDXCiqnCLQgePOY2iyJ40j5Q/cwjTgIbbpPMm
         vNDcjtYqanp9+biC3RpsnTZjhbJ1PIDJLeJUZ9ZGpl5NdMOwKo0WJGZu4KdT8qvqAYyz
         LM9D+GykJOk9AYvwvaPdAj2OaTzW+l7Vao8+9mF0NiYq6lUamzWmwmGn5/32OfW3HBgU
         rgtw==
X-Gm-Message-State: AOAM530GcDFmANHKeM8zbGaOsOcyzLYaqUnwf0fu2u/hW/FMxHUtC1mj
	o5Q/lNhaE5LzqBACtCtT4y4=
X-Google-Smtp-Source: ABdhPJy6R+2G+Qrko7GXDmhFwZ5TybTy9d9/cnVEZZCiAFf7wsNOj+yFYPCBcswfB0YeugnM8l8ntg==
X-Received: by 2002:a2e:9ad0:: with SMTP id p16mr1347028ljj.424.1604926245382;
        Mon, 09 Nov 2020 04:50:45 -0800 (PST)
Received: from a2klaptop.localdomain ([185.199.97.5])
        by smtp.gmail.com with ESMTPSA id i1sm1736447lfr.176.2020.11.09.04.50.44
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 09 Nov 2020 04:50:44 -0800 (PST)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: Rahul.Singh@arm.com,
	Bertrand.Marquis@arm.com,
	julien.grall@arm.com,
	jbeulich@suse.com,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Cc: iwj@xenproject.org,
	wl@xen.org,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: [PATCH 10/10] [HACK] vpci/rcar: Make vPCI know DomD is hardware domain
Date: Mon,  9 Nov 2020 14:50:31 +0200
Message-Id: <20201109125031.26409-11-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201109125031.26409-1-andr2000@gmail.com>
References: <20201109125031.26409-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/drivers/vpci/header.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 1f326c894d16..d5738ecca93d 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -34,13 +34,19 @@ struct map_data {
     struct pci_dev *pdev;
 };
 
+static bool is_hardware_domain_DomD(const struct domain *d)
+{
+    return d->domain_id == 1;
+}
+
 static struct vpci_header *get_vpci_header(struct domain *d,
                                            const struct pci_dev *pdev);
 
 static struct vpci_header *get_hwdom_vpci_header(const struct pci_dev *pdev)
 {
+    /* TODO: this should be for the hardware_domain, not current->domain. */
     if ( unlikely(list_empty(&pdev->vpci->headers)) )
-        return get_vpci_header(hardware_domain, pdev);
+        return get_vpci_header(current->domain, pdev);
 
     /* hwdom's header is always the very first entry. */
     return list_first_entry(&pdev->vpci->headers, struct vpci_header, node);
@@ -74,7 +80,7 @@ static struct vpci_header *get_vpci_header(struct domain *d,
         return NULL;
     }
 
-    if ( !is_hardware_domain(d) )
+    if ( !is_hardware_domain_DomD(d) )
     {
         struct vpci_header *hwdom_header = get_hwdom_vpci_header(pdev);
 #ifdef CONFIG_ARM
@@ -304,7 +310,7 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
     if ( !mem )
         return -ENOMEM;
 
-    if ( is_hardware_domain(current->domain) )
+    if ( is_hardware_domain_DomD(current->domain) )
         header = get_hwdom_vpci_header(pdev);
     else
         header = get_vpci_header(current->domain, pdev);
@@ -641,7 +647,7 @@ static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
 {
     struct vpci_bar *vbar, *bar = data;
 
-    if ( is_hardware_domain(current->domain) )
+    if ( is_hardware_domain_DomD(current->domain) )
         return bar_read_hwdom(pdev, reg, data);
 
     vbar = get_vpci_bar(current->domain, pdev, bar->index);
@@ -656,7 +662,7 @@ static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
 {
     struct vpci_bar *bar = data;
 
-    if ( is_hardware_domain(current->domain) )
+    if ( is_hardware_domain_DomD(current->domain) )
         bar_write_hwdom(pdev, reg, val, data);
     else
     {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 13:05:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 13:05:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22540.48902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6ro-0006C7-El; Mon, 09 Nov 2020 13:05:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22540.48902; Mon, 09 Nov 2020 13:05:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6ro-0006C0-Ao; Mon, 09 Nov 2020 13:05:52 +0000
Received: by outflank-mailman (input) for mailman id 22540;
 Mon, 09 Nov 2020 13:05:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc6rn-0006Bv-Br
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:05:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d56a37ee-06c8-4ee2-ab5c-ea2b9d3598b3;
 Mon, 09 Nov 2020 13:05:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CE9CABAE;
 Mon,  9 Nov 2020 13:05:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kc6rn-0006Bv-Br
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:05:51 +0000
X-Inumbo-ID: d56a37ee-06c8-4ee2-ab5c-ea2b9d3598b3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d56a37ee-06c8-4ee2-ab5c-ea2b9d3598b3;
	Mon, 09 Nov 2020 13:05:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604927149;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=U/55AuPuvYJH5yF3KbYVD4llOq1t8Ht16pu3so/uKXk=;
	b=GibwIyrKtTw9XEt5e0zI8B+KXYUT1NJUu5Db4H1Bcj/l7RXv3ktSSwVYnz+RUubMyJgBRO
	N7c6QZgF2PdlCPtLhiapRX/OZHJuDSwXMVo4EFd9Qorpfx9NGM2hvFJCthJjbUqsoYyofI
	psIaEVGbKbcBqcdnmqZTags2KguKs8Q=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8CE9CABAE;
	Mon,  9 Nov 2020 13:05:49 +0000 (UTC)
Subject: Re: [PATCH v5 0/2] XSA-343 followup patches
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20201109064128.3908-1-jgross@suse.com>
 <e9326609-0808-8f0b-6cb9-3ed4401b9f1d@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <54840a9d-8fc4-e97e-8c97-3fdad8d273fd@suse.com>
Date: Mon, 9 Nov 2020 14:05:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e9326609-0808-8f0b-6cb9-3ed4401b9f1d@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="128oblLcpeljJaao10Lw3Se9IcigBrpp3"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--128oblLcpeljJaao10Lw3Se9IcigBrpp3
Content-Type: multipart/mixed; boundary="OF7iSyQ0yH2W8xlIizqcMkaWZtRX83APi";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Message-ID: <54840a9d-8fc4-e97e-8c97-3fdad8d273fd@suse.com>
Subject: Re: [PATCH v5 0/2] XSA-343 followup patches
References: <20201109064128.3908-1-jgross@suse.com>
 <e9326609-0808-8f0b-6cb9-3ed4401b9f1d@suse.com>
In-Reply-To: <e9326609-0808-8f0b-6cb9-3ed4401b9f1d@suse.com>

--OF7iSyQ0yH2W8xlIizqcMkaWZtRX83APi
Content-Type: multipart/mixed;
 boundary="------------4F25DD51BE6063AE65733C2F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4F25DD51BE6063AE65733C2F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.11.20 13:00, Jan Beulich wrote:
> On 09.11.2020 07:41, Juergen Gross wrote:
>> The patches for XSA-343 produced some fallout, especially the event
>> channel locking has shown to be problematic.
>>
>> Patch 1 is targeting fifo event channels for avoiding any races for the
>> case that the fifo queue has been changed for a specific event channel.
>>
>> The second patch is modifying the per event channel locking scheme in
>> order to avoid deadlocks and problems due to the event channel lock
>> having been changed to require IRQs off by the XSA-343 patches.
>>
>> Changes in V5:
>> - moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
>> - used normal read_lock() in some cases (Jan Beulich)
>>
>> Changes in V4:
>> - switched to real rwlock
>>
>> Changes in V3:
>> - addressed comments
>>
>> Juergen Gross (2):
>>    xen/events: access last_priority and last_vcpu_id together
>>    xen/evtchn: rework per event channel lock
>
> Didn't you mean to add a 3rd patch here to drop the 2nd call to
> xsm_evtchn_send() again?

I wanted to do that after this series has been accepted, but I can include
it here, of course.


Juergen


--------------4F25DD51BE6063AE65733C2F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4F25DD51BE6063AE65733C2F--

--OF7iSyQ0yH2W8xlIizqcMkaWZtRX83APi--

--128oblLcpeljJaao10Lw3Se9IcigBrpp3
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+pPqwFAwAAAAAACgkQsN6d1ii/Ey8w
jwf/adhJcRo5OAbOsxNU1KwhFbrwEJNDhYjXptg1my4woYWfw1XlAB/XzhcFTlBdksYumdWm/uz9
uN6HuWHsiuvcGDB8p2/TwcWEk9RNbnUdy4BcpjT76uFHWmmnrivL7lOAmphGortLLO7fddGt+8bd
fOAHx9rtxf+2EqG5Tpw5/BJJz7NxG8ZP8CjPmzF3dc8ZEx/HYJaqX4X2DJPtb4rM9z2PverZPCci
KX5s25sNWYWzXe0BvQdzExClqQTNw+1oNcxJhUI1a34U8R9V1ycoqeJ8HqYF2u4d+loThiyN3E6m
fkt/+D6K5/m0C7p9JEbVGzN16MDbsMp8LCRqLIo5Jw==
=pl9p
-----END PGP SIGNATURE-----

--128oblLcpeljJaao10Lw3Se9IcigBrpp3--


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 13:12:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 13:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22548.48914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6yR-00076A-Ar; Mon, 09 Nov 2020 13:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22548.48914; Mon, 09 Nov 2020 13:12:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc6yR-000763-7w; Mon, 09 Nov 2020 13:12:43 +0000
Received: by outflank-mailman (input) for mailman id 22548;
 Mon, 09 Nov 2020 13:12:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ha9Y=EP=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kc6yP-00075X-Q9
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:12:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cce20758-5045-419d-ba7a-143e4f354415;
 Mon, 09 Nov 2020 13:12:40 +0000 (UTC)
Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 19B5A2083B;
 Mon,  9 Nov 2020 13:12:39 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ha9Y=EP=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
	id 1kc6yP-00075X-Q9
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:12:41 +0000
X-Inumbo-ID: cce20758-5045-419d-ba7a-143e4f354415
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cce20758-5045-419d-ba7a-143e4f354415;
	Mon, 09 Nov 2020 13:12:40 +0000 (UTC)
Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 19B5A2083B;
	Mon,  9 Nov 2020 13:12:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604927560;
	bh=ZmrKTGum/f0zyhlnH6HxBmTD0MYV6ADqfkZ8bDYKB+8=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Dlq6OFsSWTQWfVvSNS0UPXJFig2/W6633sPsrMMMu0dvxyL5UlzzDaCuhoyNaiLtZ
	 HYCS215Gl6a2494Cd9gbf3Qifa9urfKF7JTpKkvKDA1ZpDks4h1ktadsM+X3xD0xt+
	 CAO1XnZXdSBl6qM9w8P31hGLCvjvo+BzH4Snfk54=
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org,
	Jiri Slaby <jslaby@suse.cz>,
	Borislav Petkov <bp@suse.de>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Len Brown <len.brown@intel.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-arch@vger.kernel.org,
	linux-doc@vger.kernel.org,
	linux-pm@vger.kernel.org,
	Mark Rutland <mark.rutland@arm.com>,
	Pavel Machek <pavel@ucw.cz>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Thomas Gleixner <tglx@linutronix.de>,
	Will Deacon <will@kernel.org>,
	x86-ml <x86@kernel.org>,
	xen-devel@lists.xenproject.org,
	Jian Cai <jiancai@google.com>
Subject: [PATCH 5.4 05/85] linkage: Introduce new macros for assembler symbols
Date: Mon,  9 Nov 2020 13:55:02 +0100
Message-Id: <20201109125022.876784566@linuxfoundation.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201109125022.614792961@linuxfoundation.org>
References: <20201109125022.614792961@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Jiri Slaby <jslaby@suse.cz>

commit ffedeeb780dc554eff3d3b16e6a462a26a41d7ec upstream.

Introduce new C macros for annotations of functions and data in
assembly. There is a long-standing mess in macros like ENTRY, END,
ENDPROC and similar. They are used in different manners and sometimes
incorrectly.

So introduce macros with clear use to annotate assembly as follows:

a) Support macros for the ones below
   SYM_T_FUNC -- type used by assembler to mark functions
   SYM_T_OBJECT -- type used by assembler to mark data
   SYM_T_NONE -- type used by assembler to mark entries of unknown type

   They are defined as STT_FUNC, STT_OBJECT, and STT_NOTYPE
   respectively. According to the gas manual, this is the most portable
   way. I am not sure about other assemblers, so this can be switched
   back to %function and %object if this turns into a problem.
   Architectures can also override them by something like ", @function"
   if they need to.

   SYM_A_ALIGN, SYM_A_NONE -- align the symbol?
   SYM_L_GLOBAL, SYM_L_WEAK, SYM_L_LOCAL -- linkage of symbols

b) Mostly internal annotations, used by the ones below
   SYM_ENTRY -- use only if you have to (for non-paired symbols)
   SYM_START -- use only if you have to (for paired symbols)
   SYM_END -- use only if you have to (for paired symbols)

c) Annotations for code
   SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code
   SYM_INNER_LABEL -- only for labels in the middle of code

   SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for
	one function
   SYM_FUNC_START_ALIAS -- use where there are two global names for one
	function
   SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function

   SYM_FUNC_START -- use for global functions
   SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment
   SYM_FUNC_START_LOCAL -- use for local functions
   SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o
	alignment
   SYM_FUNC_START_WEAK -- use for weak functions
   SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment
   SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
	SYM_FUNC_START_WEAK, ...

   For functions with special (non-C) calling conventions:
   SYM_CODE_START -- use for non-C (special) functions
   SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o
	alignment
   SYM_CODE_START_LOCAL -- use for local non-C (special) functions
   SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special)
	functions, w/o alignment
   SYM_CODE_END -- the end of SYM_CODE_START_LOCAL or SYM_CODE_START

d) For data
   SYM_DATA_START -- global data symbol
   SYM_DATA_START_LOCAL -- local data symbol
   SYM_DATA_END -- the end of the SYM_DATA_START symbol
   SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol
   SYM_DATA -- start+end wrapper around simple global data
   SYM_DATA_LOCAL -- start+end wrapper around simple local data

==========

The macros make it possible to pair starts and ends of functions and to mark
functions correctly in the output ELF objects.

All users of the old macros in x86 are converted to use these in further
patches.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: x86-ml <x86@kernel.org>
Cc: xen-devel@lists.xenproject.org
Link: https://lkml.kernel.org/r/20191011115108.12392-2-jslaby@suse.cz
Cc: Jian Cai <jiancai@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 Documentation/asm-annotations.rst |  216 +++++++++++++++++++++++++++++++++
 Documentation/index.rst           |    8 +
 arch/x86/include/asm/linkage.h    |   10 +
 include/linux/linkage.h           |  245 ++++++++++++++++++++++++++++++++++++--
 4 files changed, 468 insertions(+), 11 deletions(-)

--- /dev/null
+++ b/Documentation/asm-annotations.rst
@@ -0,0 +1,216 @@
+Assembler Annotations
+=====================
+
+Copyright (c) 2017-2019 Jiri Slaby
+
+This document describes the new macros for annotation of data and code in
+assembly. In particular, it contains information about ``SYM_FUNC_START``,
+``SYM_FUNC_END``, ``SYM_CODE_START``, and similar.
+
+Rationale
+---------
+Some code like entries, trampolines, or boot code needs to be written in
+assembly. The same as in C, such code is grouped into functions and
+accompanied by data. Standard assemblers do not force users into precisely
+marking these pieces as code, data, or even specifying their length.
+Nevertheless, assemblers provide developers with such annotations to aid
+debuggers throughout assembly. On top of that, developers also want to mark
+some functions as *global* in order to be visible outside of their translation
+units.
+
+Over time, the Linux kernel has adopted macros from various projects (like
+``binutils``) to facilitate such annotations. So for historic reasons,
+developers have been using ``ENTRY``, ``END``, ``ENDPROC``, and other
+annotations in assembly.  Due to the lack of documentation, the macros are
+used in the wrong contexts in some places. Clearly, ``ENTRY`` was
+intended to denote the beginning of global symbols (be it data or code).
+``END`` used to mark the end of data or end of special functions with
+*non-standard* calling convention. In contrast, ``ENDPROC`` should annotate
+only ends of *standard* functions.
+
+When these macros are used correctly, they help assemblers generate a nice
+object with both sizes and types set correctly. For example, the result of
+``arch/x86/lib/putuser.S``::
+
+   Num:    Value          Size Type    Bind   Vis      Ndx Name
+    25: 0000000000000000    33 FUNC    GLOBAL DEFAULT    1 __put_user_1
+    29: 0000000000000030    37 FUNC    GLOBAL DEFAULT    1 __put_user_2
+    32: 0000000000000060    36 FUNC    GLOBAL DEFAULT    1 __put_user_4
+    35: 0000000000000090    37 FUNC    GLOBAL DEFAULT    1 __put_user_8
+
+This is not only important for debugging purposes. When there are properly
+annotated objects like this, tools can be run on them to generate more useful
+information. In particular, on properly annotated objects, ``objtool`` can be
+run to check and fix the object if needed. Currently, ``objtool`` can report
+missing frame pointer setup/destruction in functions. It can also
+automatically generate annotations for :doc:`ORC unwinder <x86/orc-unwinder>`
+for most code. Both of these are especially important to support reliable
+stack traces which are in turn necessary for :doc:`Kernel live patching
+<livepatch/livepatch>`.
+
+Caveat and Discussion
+---------------------
+As one might realize, there were only three macros previously. That is indeed
+insufficient to cover all the combinations of cases:
+
+* standard/non-standard function
+* code/data
+* global/local symbol
+
+There was a discussion_ and, instead of extending the current ``ENTRY/END*``
+macros, it was decided that brand new macros should be introduced::
+
+    So how about using macro names that actually show the purpose, instead
+    of importing all the crappy, historic, essentially randomly chosen
+    debug symbol macro names from the binutils and older kernels?
+
+.. _discussion: https://lkml.kernel.org/r/20170217104757.28588-1-jslaby@suse.cz
+
+Macros Description
+------------------
+
+The new macros are prefixed with the ``SYM_`` prefix and can be divided into
+three main groups:
+
+1. ``SYM_FUNC_*`` -- to annotate C-like functions. This means functions with
+   standard C calling conventions, i.e. the stack contains a return address at
+   the predefined place and a return from the function can happen in a
+   standard way. When frame pointers are enabled, save/restore of frame
+   pointer shall happen at the start/end of a function, respectively, too.
+
+   Checking tools like ``objtool`` should ensure such marked functions conform
+   to these rules. The tools can also easily annotate these functions with
+   debugging information (like *ORC data*) automatically.
+
+2. ``SYM_CODE_*`` -- special functions called with special stack. Be it
+   interrupt handlers with special stack content, trampolines, or startup
+   functions.
+
+   Checking tools mostly ignore checking of these functions. But some debug
+   information still can be generated automatically. For correct debug data,
+   this code needs hints like ``UNWIND_HINT_REGS`` provided by developers.
+
+3. ``SYM_DATA*`` -- obviously data belonging to ``.data`` sections and not to
+   ``.text``. Data do not contain instructions, so they have to be treated
+   specially by the tools: they should not treat the bytes as instructions,
+   nor assign any debug information to them.
+
+Instruction Macros
+~~~~~~~~~~~~~~~~~~
+This section covers ``SYM_FUNC_*`` and ``SYM_CODE_*`` enumerated above.
+
+* ``SYM_FUNC_START`` and ``SYM_FUNC_START_LOCAL`` are supposed to be **the
+  most frequent markings**. They are used for functions with standard calling
+  conventions -- global and local. Like in C, they both align the functions to
+  architecture specific ``__ALIGN`` bytes. There are also ``_NOALIGN`` variants
+  for special cases where developers do not want this implicit alignment.
+
+  ``SYM_FUNC_START_WEAK`` and ``SYM_FUNC_START_WEAK_NOALIGN`` markings are
+  also offered as an assembler counterpart to the *weak* attribute known from
+  C.
+
+  All of these **shall** be coupled with ``SYM_FUNC_END``. First, it marks
+  the sequence of instructions as a function and records its size in the
+  generated object file. Second, it also eases checking and processing such
+  object files as the tools can trivially find exact function boundaries.
+
+  So in most cases, developers should write something like in the following
+  example, having some asm instructions in between the macros, of course::
+
+    SYM_FUNC_START(function_hook)
+        ... asm insns ...
+    SYM_FUNC_END(function_hook)
+
+  In fact, this kind of annotation corresponds to the now deprecated ``ENTRY``
+  and ``ENDPROC`` macros.
+
+* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` serve for those
+  who decided to have two or more names for one function. The typical use is::
+
+    SYM_FUNC_START_ALIAS(__memset)
+    SYM_FUNC_START(memset)
+        ... asm insns ...
+    SYM_FUNC_END(memset)
+    SYM_FUNC_END_ALIAS(__memset)
+
+  In this example, one can call ``__memset`` or ``memset`` with the same
+  result, except the debug information for the instructions is generated to
+  the object file only once -- for the non-``ALIAS`` case.
+
+* ``SYM_CODE_START`` and ``SYM_CODE_START_LOCAL`` should be used only in
+  special cases -- if you know what you are doing. This is used exclusively
+  for interrupt handlers and similar where the calling convention is not the C
+  one. ``_NOALIGN`` variants exist too. The use is the same as for the ``FUNC``
+  category above::
+
+    SYM_CODE_START_LOCAL(bad_put_user)
+        ... asm insns ...
+    SYM_CODE_END(bad_put_user)
+
+  Again, every ``SYM_CODE_START*`` **shall** be coupled with ``SYM_CODE_END``.
+
+  To some extent, this category corresponds to the deprecated ``ENTRY`` and
+  ``END``, except that ``END`` had several other meanings too.
+
+* ``SYM_INNER_LABEL*`` is used to denote a label between some
+  ``SYM_{CODE,FUNC}_START`` and ``SYM_{CODE,FUNC}_END`` pair. Such labels are
+  very similar to C labels, except they can be made global. An example of
+  use::
+
+    SYM_CODE_START(ftrace_caller)
+        /* save_mcount_regs fills in first two parameters */
+        ...
+
+    SYM_INNER_LABEL(ftrace_caller_op_ptr, SYM_L_GLOBAL)
+        /* Load the ftrace_ops into the 3rd parameter */
+        ...
+
+    SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
+        call ftrace_stub
+        ...
+        retq
+    SYM_CODE_END(ftrace_caller)
+
+Data Macros
+~~~~~~~~~~~
+Similar to instructions, there are a couple of macros to describe data in
+assembly.
+
+* ``SYM_DATA_START`` and ``SYM_DATA_START_LOCAL`` mark the start of some data
+  and shall be used in conjunction with either ``SYM_DATA_END`` or
+  ``SYM_DATA_END_LABEL``. The latter also adds a label to the end, so that
+  people can use both ``lstack`` and (local) ``lstack_end``, as in the
+  following example::
+
+    SYM_DATA_START_LOCAL(lstack)
+        .skip 4096
+    SYM_DATA_END_LABEL(lstack, SYM_L_LOCAL, lstack_end)
+
+* ``SYM_DATA`` and ``SYM_DATA_LOCAL`` are variants for simple, mostly one-line
+  data::
+
+    SYM_DATA(HEAP,     .long rm_heap)
+    SYM_DATA(heap_end, .long rm_stack)
+
+  In the end, they expand to ``SYM_DATA_START`` and ``SYM_DATA_END``
+  internally.
+
+Support Macros
+~~~~~~~~~~~~~~
+All of the above ultimately reduce to some invocation of ``SYM_START``,
+``SYM_END``, or ``SYM_ENTRY``. Normally, developers should avoid using
+these directly.
+
+Further, in the above examples, one could see ``SYM_L_LOCAL``. There are also
+``SYM_L_GLOBAL`` and ``SYM_L_WEAK``. All of them denote the linkage of the
+symbol they mark. They are used either in the ``_LABEL`` variants of the
+earlier macros, or in ``SYM_START``.
+
+
+Overriding Macros
+~~~~~~~~~~~~~~~~~
+An architecture can also override any of the macros in its own
+``asm/linkage.h``, including the macros specifying the type of a symbol
+(``SYM_T_FUNC``, ``SYM_T_OBJECT``, and ``SYM_T_NONE``).  As every macro
+described in this file is surrounded by ``#ifdef`` + ``#endif``, it is enough
+to define the macros differently in the aforementioned architecture-dependent
+header.
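+
+For example, an architecture that wants every function entry to start with a
+special instruction (say, a branch-target landing pad) could define, in its
+``asm/linkage.h``, something like the following sketch (``landing_pad_insn``
+is a placeholder, not a real instruction)::
+
+  #define SYM_FUNC_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
+	landing_pad_insn ;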
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -135,6 +135,14 @@ needed).
    mic/index
    scheduler/index
 
+Architecture-agnostic documentation
+-----------------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   asm-annotations
+
 Architecture-specific documentation
 -----------------------------------
 
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -13,9 +13,13 @@
 
 #ifdef __ASSEMBLY__
 
-#define GLOBAL(name)	\
-	.globl name;	\
-	name:
+/*
+ * GLOBAL is DEPRECATED
+ *
+ * use SYM_DATA_START, SYM_FUNC_START, SYM_INNER_LABEL, SYM_CODE_START, or
+ * similar
+ */
+#define GLOBAL(name)	SYM_ENTRY(name, SYM_L_GLOBAL, SYM_A_NONE)
 
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
 #define __ALIGN		.p2align 4, 0x90
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -75,32 +75,58 @@
 
 #ifdef __ASSEMBLY__
 
+/* SYM_T_FUNC -- type used by assembler to mark functions */
+#ifndef SYM_T_FUNC
+#define SYM_T_FUNC				STT_FUNC
+#endif
+
+/* SYM_T_OBJECT -- type used by assembler to mark data */
+#ifndef SYM_T_OBJECT
+#define SYM_T_OBJECT				STT_OBJECT
+#endif
+
+/* SYM_T_NONE -- type used by assembler to mark entries of unknown type */
+#ifndef SYM_T_NONE
+#define SYM_T_NONE				STT_NOTYPE
+#endif
+
+/* SYM_A_* -- align the symbol? */
+#define SYM_A_ALIGN				ALIGN
+#define SYM_A_NONE				/* nothing */
+
+/* SYM_L_* -- linkage of symbols */
+#define SYM_L_GLOBAL(name)			.globl name
+#define SYM_L_WEAK(name)			.weak name
+#define SYM_L_LOCAL(name)			/* nothing */
+
 #ifndef LINKER_SCRIPT
 #define ALIGN __ALIGN
 #define ALIGN_STR __ALIGN_STR
 
+/* === DEPRECATED annotations === */
+
 #ifndef GLOBAL
+/* deprecated, use SYM_DATA*, SYM_ENTRY, or similar */
 #define GLOBAL(name) \
 	.globl name ASM_NL \
 	name:
 #endif
 
 #ifndef ENTRY
+/* deprecated, use SYM_FUNC_START */
 #define ENTRY(name) \
-	.globl name ASM_NL \
-	ALIGN ASM_NL \
-	name:
+	SYM_FUNC_START(name)
 #endif
 #endif /* LINKER_SCRIPT */
 
 #ifndef WEAK
+/* deprecated, use SYM_FUNC_START_WEAK* */
 #define WEAK(name)	   \
-	.weak name ASM_NL   \
-	ALIGN ASM_NL \
-	name:
+	SYM_FUNC_START_WEAK(name)
 #endif
 
 #ifndef END
+/* deprecated, use SYM_FUNC_END, SYM_DATA_END, or SYM_END */
 #define END(name) \
 	.size name, .-name
 #endif
@@ -110,11 +136,214 @@
  * static analysis tools such as stack depth analyzer.
  */
 #ifndef ENDPROC
+/* deprecated, use SYM_FUNC_END */
 #define ENDPROC(name) \
-	.type name, @function ASM_NL \
-	END(name)
+	SYM_FUNC_END(name)
+#endif
+
+/* === generic annotations === */
+
+/* SYM_ENTRY -- use only if you have to for non-paired symbols */
+#ifndef SYM_ENTRY
+#define SYM_ENTRY(name, linkage, align...)		\
+	linkage(name) ASM_NL				\
+	align ASM_NL					\
+	name:
+#endif
+
+/* SYM_START -- use only if you have to */
+#ifndef SYM_START
+#define SYM_START(name, linkage, align...)		\
+	SYM_ENTRY(name, linkage, align)
+#endif
+
+/* SYM_END -- use only if you have to */
+#ifndef SYM_END
+#define SYM_END(name, sym_type)				\
+	.type name sym_type ASM_NL			\
+	.size name, .-name
+#endif
+
+/* === code annotations === */
+
+/*
+ * FUNC -- C-like functions (proper stack frame etc.)
+ * CODE -- non-C code (e.g. irq handlers with different, special stack etc.)
+ *
+ * Objtool validates stack for FUNC, but not for CODE.
+ * Objtool generates debug info for both FUNC & CODE, but needs special
+ * annotations for each CODE's start (to describe the actual stack frame).
+ *
+ * ALIAS -- does not generate debug info -- the aliased function will
+ */
+
+/* SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL_ALIGN
+#define SYM_INNER_LABEL_ALIGN(name, linkage)	\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_ALIGN)
+#endif
+
+/* SYM_INNER_LABEL -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL
+#define SYM_INNER_LABEL(name, linkage)		\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_NONE)
+#endif
+
+/*
+ * SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for one
+ * function
+ */
+#ifndef SYM_FUNC_START_LOCAL_ALIAS
+#define SYM_FUNC_START_LOCAL_ALIAS(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_FUNC_START_ALIAS -- use where there are two global names for one
+ * function
+ */
+#ifndef SYM_FUNC_START_ALIAS
+#define SYM_FUNC_START_ALIAS(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START -- use for global functions */
+#ifndef SYM_FUNC_START
+/*
+ * The same as SYM_FUNC_START_ALIAS, but we will need to distinguish these two
+ * later.
+ */
+#define SYM_FUNC_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
 #endif
 
+/* SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment */
+#ifndef SYM_FUNC_START_NOALIGN
+#define SYM_FUNC_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
 #endif
 
+/* SYM_FUNC_START_LOCAL -- use for local functions */
+#ifndef SYM_FUNC_START_LOCAL
+/* the same as SYM_FUNC_START_LOCAL_ALIAS, see comment near SYM_FUNC_START */
+#define SYM_FUNC_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
 #endif
+
+/* SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o alignment */
+#ifndef SYM_FUNC_START_LOCAL_NOALIGN
+#define SYM_FUNC_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START_WEAK -- use for weak functions */
+#ifndef SYM_FUNC_START_WEAK
+#define SYM_FUNC_START_WEAK(name)			\
+	SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment */
+#ifndef SYM_FUNC_START_WEAK_NOALIGN
+#define SYM_FUNC_START_WEAK_NOALIGN(name)		\
+	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function */
+#ifndef SYM_FUNC_END_ALIAS
+#define SYM_FUNC_END_ALIAS(name)			\
+	SYM_END(name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
+ * SYM_FUNC_START_WEAK, ...
+ */
+#ifndef SYM_FUNC_END
+/* the same as SYM_FUNC_END_ALIAS, see comment near SYM_FUNC_START */
+#define SYM_FUNC_END(name)				\
+	SYM_END(name, SYM_T_FUNC)
+#endif
+
+/* SYM_CODE_START -- use for non-C (special) functions */
+#ifndef SYM_CODE_START
+#define SYM_CODE_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o alignment */
+#ifndef SYM_CODE_START_NOALIGN
+#define SYM_CODE_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_START_LOCAL -- use for local non-C (special) functions */
+#ifndef SYM_CODE_START_LOCAL
+#define SYM_CODE_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special) functions,
+ * w/o alignment
+ */
+#ifndef SYM_CODE_START_LOCAL_NOALIGN
+#define SYM_CODE_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_END -- the end of SYM_CODE_START_LOCAL, SYM_CODE_START, ... */
+#ifndef SYM_CODE_END
+#define SYM_CODE_END(name)				\
+	SYM_END(name, SYM_T_NONE)
+#endif
+
+/* === data annotations === */
+
+/* SYM_DATA_START -- global data symbol */
+#ifndef SYM_DATA_START
+#define SYM_DATA_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_START_LOCAL -- local data symbol */
+#ifndef SYM_DATA_START_LOCAL
+#define SYM_DATA_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_END -- the end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END
+#define SYM_DATA_END(name)				\
+	SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END_LABEL
+#define SYM_DATA_END_LABEL(name, linkage, label)	\
+	linkage(label) ASM_NL				\
+	.type label SYM_T_OBJECT ASM_NL			\
+	label:						\
+	SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA -- start+end wrapper around simple global data */
+#ifndef SYM_DATA
+#define SYM_DATA(name, data...)				\
+	SYM_DATA_START(name) ASM_NL				\
+	data ASM_NL						\
+	SYM_DATA_END(name)
+#endif
+
+/* SYM_DATA_LOCAL -- start+end wrapper around simple local data */
+#ifndef SYM_DATA_LOCAL
+#define SYM_DATA_LOCAL(name, data...)			\
+	SYM_DATA_START_LOCAL(name) ASM_NL			\
+	data ASM_NL						\
+	SYM_DATA_END(name)
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _LINUX_LINKAGE_H */




From xen-devel-bounces@lists.xenproject.org Mon Nov 09 13:16:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 13:16:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22556.48925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc728-0007Hi-Rp; Mon, 09 Nov 2020 13:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22556.48925; Mon, 09 Nov 2020 13:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc728-0007Hb-Ou; Mon, 09 Nov 2020 13:16:32 +0000
Received: by outflank-mailman (input) for mailman id 22556;
 Mon, 09 Nov 2020 13:16:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kc728-0007HW-36
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:16:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3bb77c4-a158-459c-bf0d-eaff2031d350;
 Mon, 09 Nov 2020 13:16:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55BFFAC53;
 Mon,  9 Nov 2020 13:16:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604927789;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Gl1mL6XKk1PigKKL1qPSLAB+qTnJhKQoMk3cDGCzrRw=;
	b=RwFVdgDDXeLCWcwe0GAjCI7XVWoff9HjuoSIIXBSGxLLC0fj3xBe96Xz64/4tr6pKhmXXC
	rtKc01EhehI/xhLT4zGlGEhmJ7vULHkpyXmyKz0UhqTNn7ggX/VMq28nSjPb66DJXHgqxP
	nlJjRfALSXPU/HWbpDJG9A25Uc0aU7s=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109064128.3908-1-jgross@suse.com>
 <20201109064128.3908-3-jgross@suse.com>
 <df9737a4-f90a-0498-b67d-ce254b835287@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
Message-ID: <7fa67b64-4114-736b-660c-2ec5be8f7da1@suse.com>
Date: Mon, 9 Nov 2020 14:16:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <df9737a4-f90a-0498-b67d-ce254b835287@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ujqyC0L7ukV05M7it7xm6tKO1HlH9PU0H"

On 09.11.20 12:58, Jan Beulich wrote:
> On 09.11.2020 07:41, Juergen Gross wrote:
>> Currently the lock for a single event channel needs to be taken with
>> interrupts off, which causes deadlocks in some cases.
>>
>> Rework the per event channel lock to be non-blocking for the case of
>> sending an event and removing the need for disabling interrupts for
>> taking the lock.
>>
>> The lock is needed for avoiding races between event channel state
>> changes (creation, closing, binding) against normal operations (set
>> pending, [un]masking, priority changes).
>>
>> Use a rwlock, but with some restrictions:
>>
>> - normal operations use read_trylock(), in case of not obtaining the
>>    lock the operation is omitted or a default state is returned
>>
>> - closing an event channel is using write_lock(), with ASSERT()ing that
>>    the lock is taken as writer only when the state of the event channel
>>    is either before or after the locked region appropriate (either free
>>    or unbound).
>
> This has become stale, and may have been incomplete already before:
> - Normal operations now may use two different approaches. Which one
> is to be used when would want writing down here.
> - write_lock() use goes beyond closing.

Yes, you are right.

>
>> --- a/xen/arch/x86/irq.c
>> +++ b/xen/arch/x86/irq.c
>> @@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
>>                   pirq = domain_irq_to_pirq(d, irq);
>>                   info = pirq_info(d, pirq);
>>                   evtchn = evtchn_from_port(d, info->evtchn);
>> -                local_irq_disable();
>> -                if ( spin_trylock(&evtchn->lock) )
>> +                if ( evtchn_read_trylock(evtchn) )
>>                   {
>>                       pending = evtchn_is_pending(d, evtchn);
>>                       masked = evtchn_is_masked(d, evtchn);
>> -                    spin_unlock(&evtchn->lock);
>> +                    evtchn_read_unlock(evtchn);
>>                   }
>> -                local_irq_enable();
>>                   printk("d%d:%3d(%c%c%c)%c",
>>                          d->domain_id, pirq, "-P?"[pending],
>>                          "-M?"[masked], info->masked ? 'M' : '-',
>
> Using trylock here has a reason different from that in sending
> functions, aiui. Please say so in the description, to justify
> exposure of evtchn_read_lock().

Okay.

>
>> --- a/xen/arch/x86/pv/shim.c
>> +++ b/xen/arch/x86/pv/shim.c
>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>       if ( port_is_valid(guest, port) )
>>       {
>>           struct evtchn *chn = evtchn_from_port(guest, port);
>> -        unsigned long flags;
>>
>> -        spin_lock_irqsave(&chn->lock, flags);
>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>> -        spin_unlock_irqrestore(&chn->lock, flags);
>> +        if ( evtchn_read_trylock(chn) )
>> +        {
>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>> +            evtchn_read_unlock(chn);
>> +        }
>
> Does this need trylock?

It is called directly from the event upcall, so interrupts should be
off here. Without trylock this would result in check_lock() triggering.

>
>> @@ -1068,15 +1088,16 @@ int evtchn_unmask(unsigned int port)
>>   {
>>       struct domain *d = current->domain;
>>       struct evtchn *evtchn;
>>
>>       if ( unlikely(!port_is_valid(d, port)) )
>>           return -EINVAL;
>>
>>       evtchn = evtchn_from_port(d, port);
>> -    spin_lock_irqsave(&evtchn->lock, flags);
>> -    evtchn_port_unmask(d, evtchn);
>> -    spin_unlock_irqrestore(&evtchn->lock, flags);
>> +    if ( evtchn_read_trylock(evtchn) )
>> +    {
>> +        evtchn_port_unmask(d, evtchn);
>> +        evtchn_read_unlock(evtchn);
>> +    }
>
> I think this one could as well use plain read_lock().

Oh, indeed.

>
>> @@ -234,12 +244,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
>>   static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
>>   {
>>       struct evtchn *evtchn = evtchn_from_port(d, port);
>> -    bool rc;
>> -    unsigned long flags;
>> +    bool rc = true;
>>
>> -    spin_lock_irqsave(&evtchn->lock, flags);
>> -    rc = evtchn_is_masked(d, evtchn);
>> -    spin_unlock_irqrestore(&evtchn->lock, flags);
>> +    if ( evtchn_read_trylock(evtchn) )
>> +    {
>> +        rc = evtchn_is_masked(d, evtchn);
>> +        evtchn_read_unlock(evtchn);
>> +    }
>>
>>       return rc;
>>   }
>> @@ -252,12 +263,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>>       if ( port_is_valid(d, port) )
>>       {
>>           struct evtchn *evtchn = evtchn_from_port(d, port);
>> -        unsigned long flags;
>>
>> -        spin_lock_irqsave(&evtchn->lock, flags);
>> -        if ( evtchn_usable(evtchn) )
>> -            rc = evtchn_is_pending(d, evtchn);
>> -        spin_unlock_irqrestore(&evtchn->lock, flags);
>> +        if ( evtchn_read_trylock(evtchn) )
>> +        {
>> +            if ( evtchn_usable(evtchn) )
>> +                rc = evtchn_is_pending(d, evtchn);
>> +            evtchn_read_unlock(evtchn);
>> +        }
>>       }
>>
>>       return rc;
>
> At least for the latter I suppose it should also be plain read_lock().
> We ought to keep the exceptions to where they're actually needed, I
> think.

I think both instances can be switched.


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 13:25:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 13:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22565.48937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc7AI-0008EN-Qc; Mon, 09 Nov 2020 13:24:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22565.48937; Mon, 09 Nov 2020 13:24:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc7AI-0008EG-Np; Mon, 09 Nov 2020 13:24:58 +0000
Received: by outflank-mailman (input) for mailman id 22565;
 Mon, 09 Nov 2020 13:24:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=caUz=EP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kc7AH-0008E9-6O
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:24:57 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bd53ffd-6d33-413b-a7f2-a60cf05eec33;
 Mon, 09 Nov 2020 13:24:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604928295;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=mxx0fLOA0uXdvrCL1KCXe5MIQGok5t7yZxpb49blu+k=;
  b=Gz4GukSmjgljcIKJWljrQP+4F6FDui+5QmRaDxTgc0mgkGHF9q40Yg9d
   mD6jWtpk9gEP5dwYY2SLvBG8OW/nJkgajZFXSUVlfBfxMo/riXxtBXS6Z
   tH0COklG2aJatfSHZnkS4QlOio6oS5M04F7CScT5fpgOzrNUCn4f2pe4M
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: N1S5wNWIcahSkRSRiCn/eK/QO1S8f+saEtpl1JuIhiF21FEWe4hVpFPQLjc3BEZzQiUWZH8YIx
 vSP1TCYxUz8k6atWCJVve3yrMHhn30w0QAxEiyrRWjTAfdX4uv0WMbnxo5kE2/Z3TGX3gj7T6v
 IeKpHuzgsAG4YrcQR9atcHaPgRNCAwPqH/gdqr147qVSgdEorUxNwzpjJKpCro1cZk4VBX3yZ+
 J8jzSnnUaYEfpK7dbBXRpmCI4QFZdY2c+vwt1mEklxTYSxWqh+x8qLGBjWKWuXR2pcko/dqia3
 flo=
X-SBRS: None
X-MesageID: 30754426
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,463,1596513600"; 
   d="scan'208";a="30754426"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gkzvC8YAG8M8LhI96Bfw3EaYc0nSrpkiY6ZHI2kUiJXAXXvtUVIgqYlsbsK9CfZP8Hcbcw6kA/MzDLekADJfPLJRSlVrgFs6w884RCRy2tZM3ETN6F8WOtQYV/dFh5O/RFgK3SHEwLtxcsjHQX1El0ifhCNtkCMykqGPsfqHWGy2oBUoHExoHOGEWXz+GWvOgHkjskUxusIaZyy8lPEh5gmdzQR+2JKTGNcB7Dg0nb6Ta07s4Z5zwb/ezSu8K6bxnsdI7QmbOmZXPLzSLBn4TRlaAt2cjin++V1KX4/t+4bGrkFofltiJKhU0oGALkC4+CWJzzK+2m7aXUL1F0w4hQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bRvQ7DYZZl9FUqe+EmSipHERghk/4Z1blarS5+Y0UCI=;
 b=K9m1VQSm+DSZTLv10D1lw3APUZMaI3KRkng3ar0XwsqlojCFh7nVNpWxikrhfvDYu3kMpWKq7NlNNRN8TRFUS1gp4mRvpvO60pcC6loYncpBnJOOFa3YcE3dSct/Y3L40PyjSaxUkytGhnhTR3E2NohQSlYDr/oc6PfjOnZc2+UcLqM9wgau7BBfH4py6aUKddOnhtPeJJ5BcPXbL78J5I2/kHHVrMDA85gIcRmzYWwYpq4+CD1u0tdMWHRQ38E90HNzvhTLAnMSb8TRtCmzB6niE48qoDyaK/4AUoXP1yMjq3sDSH2bwmB6V6bw3lISzbHEqRcNvZTFqrX8GPt1bg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bRvQ7DYZZl9FUqe+EmSipHERghk/4Z1blarS5+Y0UCI=;
 b=DFMHKozHkGcmWzA/tl3WnWDgMJcH8XRSHsyWbABvxpULqAmWexo+raUxQ37wMKuDZTZKIETY9gME8OgoQok8LovBryl9dVJNH7tU6efo/D1HeKzt/gMjVHPzZ9dxPMjPKGbbWSpltvQaqvoKBbvKxFho3R4wBMWQnQKgdXkpjvE=
Date: Mon, 9 Nov 2020 14:21:45 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Message-ID: <20201109132145.lwjh3i6msx3ltxlw@Air-de-Roger>
References: <20201006162327.93055-1-roger.pau@citrix.com>
 <a98d6cb1-0b1d-8fb8-8718-c65e02e448bb@citrix.com>
 <20201007164117.GH19254@Air-de-Roger> <20201015133412.GC68032@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201015133412.GC68032@Air-de-Roger>
X-ClientProxiedBy: LO2P265CA0509.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c54fae49-a572-4876-36f4-08d884b26b45
X-MS-TrafficTypeDiagnostic: DM5PR03MB2841:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2841A3A1C6B98A1109AE3F198FEA0@DM5PR03MB2841.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Rl8ixv5OUerlvOY7fJQZnVQtiDap11yOQmD2Wsh3pPaA/BCKC9aHKvaUEpndwh6qKXD72H4NNNo6gqXH0wbBdGttGcZLTNdLKiCQOrjzC6GuCjBJYQpZm3PQ0qnvXOp2+gTgHSAZP3ZVh9wTH6ZkVOWYRDBm+YVB7RO0k5R2+v9z7ecchjjU0YzsG7jNZOzTiwcXvACgOXyY5LvsB7GMk94+htVErxZ9PB/ACrYfVoVcIPqOf/JTVGBoYkH0xAsMcmtZpS18Tkez/KGGlFihZ/mFEyla1Opb7o8wxiw70lm9fphBMjxYJpPKYuchdgJgrKp8D6H12X0H0Vr9mg3fVE73dsjL9itYP9A5GFSTaiNt78TSXjfDiHevh3c92yfa4AJM3lC6nY/hKAmuf3ICTw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(39860400002)(376002)(366004)(346002)(396003)(4326008)(8936002)(956004)(6862004)(6496006)(2906002)(6666004)(33716001)(9686003)(16526019)(316002)(54906003)(26005)(186003)(86362001)(5660300002)(66476007)(6486002)(478600001)(1076003)(6636002)(966005)(85182001)(8676002)(83380400001)(66946007)(66556008)(53546011);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: vpR1D0D5ZJ1qCtf5X9bg3S2sgu8PvTZgXIPPsBDrSIg6Lg3qAZP1RPEek8d/98PI9NnMMcKHTSvSwsx4oD9unRiiG4sCyhEhe+FxKAzbT+Bn/UoFMTHEi5rn2wRfqx1VW0Ke4VJyokltwcQK/XP7MIbH9SOsdgPt6kW6CtLXi61EZX4XVIoCBto7coS94VPyc9Hlji/pmGSu3Xk1Ty01HPComLwj65dOAj+ThSv+/rMCwsukF/PJTzQrAPDh8uoXsYPHdMAxzfE9hj9CTpq/4PiAz96vGXUcTrjTxEWL4FsO5EWnUTgF067pCf4MxJFd4aflxwCsYifklPb9W0HCIu8itpcTj6YQdckZu1wjzsmrG4q2YzWn/D/Zg/Dt+J6rfTtZR4wFAxiqr2C33Qk7syZG0yaW7eSMDZCtjQvaOxKsBqE0vpu94yCS5wH9+8DFsjIWDF0/hrKw8DreOKYkzTD2d9euS1gK/l4WuepPeOVR466GvxuU87zOoIExGDoO4MWc48SWXaY8knhhLHjv8ewsdqfrWKroPoFIeNAyvszpxbQLu1gMcr/F7uE4MybH4SD0MZLR27dl4phamx8g6OxzKcW7R1ag5WjMT5dmYfuA4g0qoxgOnqjhl6DaTRuRPzLdHx3bkkCvLEIuPBgGzA==
X-MS-Exchange-CrossTenant-Network-Message-Id: c54fae49-a572-4876-36f4-08d884b26b45
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Nov 2020 13:21:51.4376
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LLJ6f+mD6Wsb+ddQalY8M5XakbWfIRFOKX5QYRH2MW8j0UCsW9e2bhZCIxyQctXCO+j2jPjChc4n/HtahNLzBg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2841
X-OriginatorOrg: citrix.com

Ping?

On Thu, Oct 15, 2020 at 03:34:12PM +0200, Roger Pau Monné wrote:
> On Wed, Oct 07, 2020 at 06:41:17PM +0200, Roger Pau Monné wrote:
> > On Wed, Oct 07, 2020 at 01:06:08PM +0100, Andrew Cooper wrote:
> > > On 06/10/2020 17:23, Roger Pau Monne wrote:
> > > > Currently a PV hardware domain can also be given control over the CPU
> > > > frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
> > > 
> > > This might be how the current logic "works", but it's straight up broken.
> > > 
> > > PERF_CTL is thread scope, so unless dom0 is identity pinned and has one
> > > vcpu for every pcpu, it cannot use the interface correctly.
> > 
> > Selecting cpufreq=dom0-kernel will force vCPU pinning. However, I
> > can't see anything that would force dom0 vCPUs == pCPUs.
> > 
> > > > However since commit 322ec7c89f6 the default behavior has been changed
> > > > to reject accesses to not explicitly handled MSRs, preventing PV
> > > > guests that manage CPU frequency from reading
> > > > MSR_IA32_PERF_{STATUS/CTL}.
> > > >
> > > > Additionally some HVM guests (Windows at least) will attempt to read
> > > > MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
> > > >
> > > > vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
> > > > d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
> > > >
> > > > Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> > > > handling shared between HVM and PV guests, and add an explicit case
> > > > for reads to MSR_IA32_PERF_{STATUS/CTL}.
> > > 
> > > OTOH, PERF_CTL does have a seemingly architectural "please disable turbo
> > > for me" bit, which is supposed to be for calibration loops.  I wonder if
> > > anyone uses this, and whether we ought to honour it (probably not).
> > 
> > If we let guests play with this we would have to save/restore the
> > guest value on context switch. Unless there's a strong case for this,
> > I would say no.
> > 
> > > > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> > > > index d8ed83f869..41baa3b7a1 100644
> > > > --- a/xen/include/xen/sched.h
> > > > +++ b/xen/include/xen/sched.h
> > > > @@ -1069,6 +1069,12 @@ extern enum cpufreq_controller {
> > > >      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
> > > >  } cpufreq_controller;
> > > >  
> > > > +static inline bool is_cpufreq_controller(const struct domain *d)
> > > > +{
> > > > +    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
> > > > +            is_hardware_domain(d));
> > > 
> > > This won't compile on !CONFIG_X86, due to CONFIG_HAS_CPUFREQ
> > 
> > It does seem to build on Arm, because this is only used in x86 code:
> > 
> > https://gitlab.com/xen-project/people/royger/xen/-/jobs/778207412
> > 
> > The extern declaration of cpufreq_controller is just above, so if you
> > tried to use is_cpufreq_controller on Arm you would get a link-time
> > error; otherwise it builds fine. The compiler drops the function on
> > Arm because it has the inline attribute and is unused.
> > 
> > Alternatively I could look into moving cpufreq_controller (and
> > is_cpufreq_controller) out of sched.h into somewhere else, I haven't
> > looked at why it needs to live there.
> > 
> > > Honestly - I don't see any point to this code.  It's opt-in via the
> > > command line only, and doesn't provide adequate checks for enablement.
> > > (It's not as if we're lacking complexity or moving parts when it comes
> > > to power/frequency management).
> > 
> > Right, I could do a pre-patch to remove this, but I also don't think
> > we should block this fix on removing FREQCTL_dom0_kernel, so I would
> > rather fix the regression and then remove the feature if we agree it
> > can be removed.
> 
> Can we get some consensus on what to do next?
> 
> I think I've provided replies to all the points above, and I'm not sure
> what to do next in order to proceed with this patch.
> 
> Thanks, Roger.
> 


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 14:25:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 14:25:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22581.48950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc86c-00052U-EB; Mon, 09 Nov 2020 14:25:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22581.48950; Mon, 09 Nov 2020 14:25:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc86c-00052N-Ao; Mon, 09 Nov 2020 14:25:14 +0000
Received: by outflank-mailman (input) for mailman id 22581;
 Mon, 09 Nov 2020 13:52:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pbKN=EP=gmail.com=idryomov@srs-us1.protection.inumbo.net>)
 id 1kc7aa-0002Nb-NH
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 13:52:08 +0000
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbf31a08-1e92-4de7-83df-b791fa8008ea;
 Mon, 09 Nov 2020 13:52:08 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id g7so8304442ilr.12
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 05:52:08 -0800 (PST)
X-Inumbo-ID: dbf31a08-1e92-4de7-83df-b791fa8008ea
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=kU2Lz6FCVrZiuEoHabGbCYI8mZi4aQygf2Kr0fLfGG4=;
        b=Lb/BkUcVIcd6xhv8oVONfECjW/ZB+eIGRHnh0dm3V/Dcve1CHMxGnjqr8op+1v76K7
         BboZLuIC9lFoWJc8zRN7ckG8JYkUCEMrBXYH3LeqwWaeQw6FiksBZKkC4/rY+nMcbUOP
         mcyx+jHFLBCXxcHO2RHq4/pyA2nmEefVK9oOy9OsQ2fr8vLqz43v1yvxxZclHh7D+k2N
         Qw7b/fXaJ5trjyicYsioOneOxQN4Qzfe8MAWER1MvLiN60BX2gzHW6baJIDiu8cGcvYG
         BbC7br0UMOuvYPQzgHxuAO2xrcgSSEORe8jj2L4/gatFyG+wbhw19VFJICOjyYa/QKXf
         TWnw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=kU2Lz6FCVrZiuEoHabGbCYI8mZi4aQygf2Kr0fLfGG4=;
        b=QEXn6pV9Ghqq9xvC0wewmKcCsFxs/qqTMBjh45D+YJIMIk7Ai1ET7mO2CF4NOfm4s8
         qvQYfiYT3m4DyQvVQ4KmtyZXjorfXGqnHOyjopzOTWeJe9TX/iV7RHtw5d9P29zgLoJu
         Hrgl0qDboD96AEzGF5dprhz6IBeySEH+QUxF/jeK3V4PrtwXJJ2eZoCK5glqaGHuHbKY
         4wR7sdsZWmgEdMCoAHY/e+2PhmdPm/NG9VGi/AMpVW6Row2FCeWDbM6XBU8hamBaBvTx
         rMr3nYouumGHaSJRk16GnpXrkQzF0BCG8gUTYDA2HwKFL+/l9yUJhXOnax/5eITymtmJ
         xcuA==
X-Gm-Message-State: AOAM530pCitRxlqtFIszk+LihMS83RTfRfjqOzP/BRR+zFSCUdm2oh0G
	OXiq1CfcCWIZzWvqq7N0RCPKpQF5NqcRLiHKd/0=
X-Google-Smtp-Source: ABdhPJyXm5ijaiShQDd7Xj0k0fsOvCm3U5j3JctcQpIY73kNN30YAUsNc9VfJ5DemzskInCCw4oOffcooZ/r5dSiKD8=
X-Received: by 2002:a05:6e02:c:: with SMTP id h12mr10623495ilr.177.1604929927708;
 Mon, 09 Nov 2020 05:52:07 -0800 (PST)
MIME-Version: 1.0
References: <20201106190337.1973127-1-hch@lst.de> <20201106190337.1973127-18-hch@lst.de>
In-Reply-To: <20201106190337.1973127-18-hch@lst.de>
From: Ilya Dryomov <idryomov@gmail.com>
Date: Mon, 9 Nov 2020 14:52:08 +0100
Message-ID: <CAOi1vP83cOt_FOFLXQmgBpDgmaq8o8OQcUYWOb97jzkgOw6r4A@mail.gmail.com>
Subject: Re: [PATCH 17/24] rbd: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Jack Wang <jinpu.wang@cloud.ionos.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, dm-devel@redhat.com, 
	linux-block <linux-block@vger.kernel.org>, Lars Ellenberg <drbd-dev@lists.linbit.com>, 
	nbd@other.debian.org, Ceph Development <ceph-devel@vger.kernel.org>, 
	xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org, 
	linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, 
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Nov 6, 2020 at 8:04 PM Christoph Hellwig <hch@lst.de> wrote:
>
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/rbd.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index f84128abade319..b7a194ffda55b4 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -4920,8 +4920,7 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
>             !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
>                 size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
>                 dout("setting size to %llu sectors", (unsigned long long)size);
> -               set_capacity(rbd_dev->disk, size);
> -               revalidate_disk_size(rbd_dev->disk, true);
> +               set_capacity_and_notify(rbd_dev->disk, size);
>         }
>  }
>
> --
> 2.28.0
>

Acked-by: Ilya Dryomov <idryomov@gmail.com>

Thanks,

                Ilya


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 14:40:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 14:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22600.48965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8Kw-0006Cu-PI; Mon, 09 Nov 2020 14:40:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22600.48965; Mon, 09 Nov 2020 14:40:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8Kw-0006C9-LD; Mon, 09 Nov 2020 14:40:02 +0000
Received: by outflank-mailman (input) for mailman id 22600;
 Mon, 09 Nov 2020 14:40:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kc8Kv-00064X-B7
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 14:40:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 665bd766-ef2c-4c4f-b9f7-b5499f128556;
 Mon, 09 Nov 2020 14:39:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc8Kn-00077m-6V; Mon, 09 Nov 2020 14:39:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc8Km-0002aP-SG; Mon, 09 Nov 2020 14:39:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kc8Km-0006sF-PL; Mon, 09 Nov 2020 14:39:52 +0000
X-Inumbo-ID: 665bd766-ef2c-4c4f-b9f7-b5499f128556
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=qrQXHKWlLc9Y0uMR1SQosLiMSilYhY6+tB2LkaYGl84=; b=g/vNhpwnWsrp++PlpvsB3wJPfL
	erHZ0Nuxlnut7zG02/DRlRV1Pr5qBO1++xCBfWQ0NSU0TBoUai1I0xjhvdg8+KuHXfGLNbtIQVqLH
	C34nQEbcgf1+wUGC+giCja9DxxowwW/BjnURzaiZOkZdk+cbODTpZw7U9QIICLvrfq/U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-libvirt-xsm
Message-Id: <E1kc8Km-0006sF-PL@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 14:39:52 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156586/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-libvirt-xsm.guest-start --summary-out=tmp/156586.bisection-summary --basis-template=156443 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-libvirt-xsm guest-start
Searching for failure / basis pass:
 156556 fail [host=chardonnay0] / 156443 [host=elbling1] 156401 [host=huxelrebe1] 156389 [host=chardonnay1] 156373 [host=rimava1] 156354 [host=rimava1] 156339 [host=elbling0] 156331 [host=albana0] 156315 [host=pinot0] 156291 [host=pinot1] 156268 [host=fiano1] 156254 [host=fiano0] 156248 [host=albana1] 156228 [host=albana0] 156196 [host=chardonnay1] 156167 [host=chardonnay1] 156136 ok.
Failure / basis pass flights: 156556 / 156136
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-677cbe1324c29294bb1d1b8454b3f214725e40fd git://xenbits.xen.org/xen.git#6ca70821b59849ad97c3fadc47e63c1a4af1a78c-0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Loaded 41925 nodes in revision graph
Searching for test results:
 156119 [host=huxelrebe0]
 156136 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156167 [host=chardonnay1]
 156196 [host=chardonnay1]
 156228 [host=albana0]
 156248 [host=albana1]
 156254 [host=fiano0]
 156268 [host=fiano1]
 156291 [host=pinot1]
 156315 [host=pinot0]
 156331 [host=albana0]
 156339 [host=elbling0]
 156354 [host=rimava1]
 156373 [host=rimava1]
 156389 [host=chardonnay1]
 156401 [host=huxelrebe1]
 156443 [host=elbling1]
 156524 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156538 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156568 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156570 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156571 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 82c0d3d491ccb183cf12c87775086b68531b8444
 156578 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156572 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd dac867bf9adc1562a4cf9db5f89726597af13ef8
 156574 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 9ff9705647646aa937b5f5c1426a64c69a62b3bd
 156556 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156576 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
 156580 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156581 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156583 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156584 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156586 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
Searching for interesting versions
 Result found: flight 156136 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c, results HASH(0x55aee1f71b90) HASH(0x55aedeea4e50) HASH(0x55aedee060f0) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef8\
 28bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1, results HASH(0x55aee1f71928) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98\
 c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 9ff9705647646aa937b5f5c1426a64c69a62b3bd, results HASH(0x55aee1f6e298) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd dac867bf9adc1562a4cf9db5f89726597af13ef8, results HASH(0x55aee1f71268) For basis\
  failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 82c0d3d491ccb183cf12c87775086b68531b8444, results HASH(0x55aee1f71bf0) Result found: flight 156524 (fail), for basis failure (at ancestor ~32)
 Repro found: flight 156568 (pass), for basis pass
 Repro found: flight 156570 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
No revisions left to test, checking graph state.
 Result found: flight 156578 (pass), for last pass
 Result found: flight 156580 (fail), for first failure
 Repro found: flight 156581 (pass), for last pass
 Repro found: flight 156583 (fail), for first failure
 Repro found: flight 156584 (pass), for last pass
 Repro found: flight 156586 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156586/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
156586: tolerable FAIL

flight 156586 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156586/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-libvirt-xsm  14 guest-start             fail baseline untested


jobs:
 build-i386-libvirt                                           pass    
 test-amd64-i386-libvirt-xsm                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 14:41:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 14:41:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22602.48977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8Ma-0006tC-BL; Mon, 09 Nov 2020 14:41:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22602.48977; Mon, 09 Nov 2020 14:41:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8Ma-0006t5-6p; Mon, 09 Nov 2020 14:41:44 +0000
Received: by outflank-mailman (input) for mailman id 22602;
 Mon, 09 Nov 2020 14:40:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Ks2=EP=sec.uni-passau.de=hr@srs-us1.protection.inumbo.net>)
 id 1kc8Le-0006r3-JJ
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 14:40:46 +0000
Received: from tom.rz.uni-passau.de (unknown [132.231.51.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 951758a5-3521-4c70-95e7-72cfc1d99a91;
 Mon, 09 Nov 2020 14:40:43 +0000 (UTC)
Received: from puremessage.rz.uni-passau.de (puremessage.rz.uni-passau.de
 [132.231.51.54])
 by tom.rz.uni-passau.de (Postfix) with ESMTP id 0657311A6EBCC
 for <xen-devel@lists.xenproject.org>; Mon,  9 Nov 2020 15:40:42 +0100 (CET)
Received: from puremessage.rz.uni-passau.de (localhost.localdomain [127.0.0.1])
 by localhost (Email Security Appliance) with SMTP id E328010709E_FA954E9B
 for <xen-devel@lists.xenproject.org>; Mon,  9 Nov 2020 14:40:41 +0000 (GMT)
Received: from tom.rz.uni-passau.de (tom.rz.uni-passau.de [132.231.51.4])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by puremessage.rz.uni-passau.de (Sophos Email Appliance) with ESMTPS id
 DA9A61047DC_FA954E9F
 for <xen-devel@lists.xenproject.org>; Mon,  9 Nov 2020 14:40:41 +0000 (GMT)
Received: from localhost (localhost [127.0.0.1])
 by tom.rz.uni-passau.de (Postfix) with ESMTP id D4D9B11A6EBC8
 for <xen-devel@lists.xenproject.org>; Mon,  9 Nov 2020 15:40:41 +0100 (CET)
Received: from [132.231.11.13] (helo=smith.sec.uni-passau.de)
 by mail.uni-passau.de with ESMTPS (eXpurgate 4.15.1)
 (envelope-from <hr@sec.uni-passau.de>)
 id 5fa954e3-53de-84e733040019-84e70b0d91fc-3
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 15:40:41 +0100
Received: from smith.sec.uni-passau.de (132.231.11.13) by
 smith.sec.uni-passau.de (132.231.11.13) with Microsoft SMTP Server (TLS) id
 15.0.1497.2; Mon, 9 Nov 2020 15:36:12 +0100
Received: from smith.sec.uni-passau.de ([fe80::55d2:6c14:41:dff9]) by
 smith.sec.uni-passau.de ([fe80::55d2:6c14:41:dff9%12]) with mapi id
 15.00.1497.007; Mon, 9 Nov 2020 15:36:12 +0100
From: "Reiser, Hans" <hr@sec.uni-passau.de>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: infinite loop in xenstat_qmp.c
Thread-Topic: infinite loop in xenstat_qmp.c
Thread-Index: AQHWtqWrP5ss4mCWiE+RSioLt66eLw==
Date: Mon, 9 Nov 2020 14:36:11 +0000
Message-ID: <9E9AC979-4A93-4527-B36C-BA96EEF190D1@sec.uni-passau.de>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [178.27.71.78]
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2F081905460E1242B1244DC38F45A0B9@sec.uni-passau.de>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-purgate-ID: 151291::1604932841-000053DE-A95F8489/0/0
X-purgate-type: clean
X-purgate-size: 1483
X-purgate-Ad: Categorized by eleven eXpurgate (R) http://www.eleven.de
X-purgate: This mail is considered clean (visit http://www.eleven.de for further information)
X-purgate: clean

Hi,

I have seen several occasions with "dead" xentop processes consuming 100%
CPU time, and tracked this down to the following problem:

When the QEMU process the qmp_read function is communicating with
terminates, qmp_read may enter an infinite loop: poll signals EOF (POLLIN
and POLLHUP set), the subsequent read() call returns 0, and then the
function calls poll again, which still sees the EOF condition and again
returns immediately with POLLIN and POLLHUP set, repeating ad infinitum.

A simple fix is to terminate the loop when read returns 0 (under "normal"
circumstances, poll returns with POLLIN set only if there is data to read,
so read will always read >0 bytes, except if the socket has been closed).

Cheers, Hans

diff --git a/tools/xenstat/libxenstat/src/xenstat_qmp.c b/tools/xenstat/libxenstat/src/xenstat_qmp.c
index 19b236e7b6..0c5748ba68 100644
--- a/tools/xenstat/libxenstat/src/xenstat_qmp.c
+++ b/tools/xenstat/libxenstat/src/xenstat_qmp.c
@@ -298,7 +298,7 @@ static int qmp_read(int qfd, unsigned char **qstats)
        pfd[0].events = POLLIN;
        while ((n = poll(pfd, 1, 10)) > 0) {
                if (pfd[0].revents & POLLIN) {
-                       if ((n = read(qfd, buf, sizeof(buf))) < 0) {
+                       if ((n = read(qfd, buf, sizeof(buf))) <= 0) {
                                free(*qstats);
                                return 0;
                        }



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 14:56:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 14:56:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22622.48994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8b7-0007yA-Jr; Mon, 09 Nov 2020 14:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22622.48994; Mon, 09 Nov 2020 14:56:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8b7-0007y3-Gr; Mon, 09 Nov 2020 14:56:45 +0000
Received: by outflank-mailman (input) for mailman id 22622;
 Mon, 09 Nov 2020 14:56:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kc8b6-0007wx-07
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 14:56:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4ab1b25-306a-437b-88d6-c3d930045670;
 Mon, 09 Nov 2020 14:56:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc8ay-0007Uq-SA; Mon, 09 Nov 2020 14:56:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kc8ay-0003ay-CZ; Mon, 09 Nov 2020 14:56:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kc8ay-0007Ec-C6; Mon, 09 Nov 2020 14:56:36 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T2ju/AjLFff+BQ1UT43zZe9slEtTfMZmHfgTwBYk62k=; b=BdERxg8YMR4T9DZnYW2s9p3Ld/
	9VCE4syMjm7EPbA7p+NKtPC8yLsfQwgCQkI+RulWd6FTnfxjKicG6ZtXSBRRZzXof1YawtM2OtDa7
	wauZWOEsz4eiH2vpufV2gg2ZCBk+YQQwdmiGyR07lr0T+Vc/zSQ65PxiCkY0KNdgWuPs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156577-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156577: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 14:56:36 +0000

flight 156577 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156577/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      17 guest-start/debian.repeat  fail pass in 156556

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    3 days
Failing since        156524  2020-11-06 14:22:28 Z    3 days    4 attempts
Testing same since   156538  2020-11-07 06:32:07 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 29 15:03:32 2020 -0400

    libxl: Add suppress-vmdesc to QEMU machine
    
    The device model state saved by QMP xen-save-devices-state doesn't
    include the vmdesc json.  When restoring an HVM, xen-load-devices-state
    always triggers "Expected vmdescription section, but got 0".  This is
    not a problem when restore comes from a file.  However, when QEMU runs
    in a Linux stubdom and the state comes over a console, EOF is not
    received.  This causes a delay when restoring, though the restore does
    complete.
    
    Setting suppress-vmdesc skips looking for the vmdesc during restore and
    avoids the wait.
    
    QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
    sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
    used.
    
    QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
    submission" added suppress-vmdesc in QEMU 2.3.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
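
    For reference, the machine property named above can also be passed to
    QEMU directly; an illustrative, abbreviated invocation (not the exact
    command line libxl constructs) would look like:

    ```
    qemu-system-x86_64 -machine xenfv,suppress-vmdesc=on ...
    ```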

commit cd800ce442eeba5bc0857ade70a075367c01c350
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Nov 6 16:12:56 2020 +0000

    libxl: set vuart_gfn in libxl__build_hvm
    
    Setting vuart_gfn was missed when switching ARM guests to the PVH build.
    Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
    dom->vuart_gfn.
    
    Without this change, xl console cannot connect to the vuart console (-t
    vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 4196b1523aebe0ed929accba318d5e833d7ff6b3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 15:05:04 2020 +0100

    tools/libs/light: correct bitmap operations
    
    Libxl bitmap operations for single bits (test, set, reset) take the bit
    number as a signed integer, without checking that the value is
    non-negative.
    
    Correct that by adding the appropriate tests.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
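
    As an illustration of the class of fix described above, here is a
    minimal sketch of range-checked single-bit helpers; the names and
    signatures are hypothetical, not the real libxl API:

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /*
     * A negative bit number would index before the start of the map, so
     * every helper validates the bit number first and treats an invalid
     * one as a no-op (or "bit not set" for the test).
     */
    static bool bit_valid(unsigned int size_bytes, int bit)
    {
        return bit >= 0 && (unsigned int)bit < size_bytes * 8;
    }

    static bool bitmap_test_sketch(const uint8_t *map, unsigned int size_bytes,
                                   int bit)
    {
        return bit_valid(size_bytes, bit) && (map[bit / 8] & (1u << (bit % 8)));
    }

    static void bitmap_set_sketch(uint8_t *map, unsigned int size_bytes, int bit)
    {
        if ( bit_valid(size_bytes, bit) )
            map[bit / 8] |= 1u << (bit % 8);
    }

    static void bitmap_reset_sketch(uint8_t *map, unsigned int size_bytes, int bit)
    {
        if ( bit_valid(size_bytes, bit) )
            map[bit / 8] &= ~(1u << (bit % 8));
    }
    ```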

commit 8aac8e0ef43a452d0b565d63e4943c275badba3f
Author: Olaf Hering <olaf@aepfle.de>
Date:   Fri Nov 6 14:05:17 2020 +0100

    docs/xl: fix cpupool-cpu-remove
    
    The cpu-pool must be specified.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Wei Liu <wl@xen.org>

commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
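
    A hypothetical userspace model of what a check_lock()-style consistency
    check does (this is an illustrative sketch, not the actual Xen code):
    remember, on first acquisition, whether the lock was taken with
    interrupts disabled, and flag any later acquisition that differs, since
    mixing the two patterns on one lock is what can deadlock.

    ```c
    #include <assert.h>
    #include <stdbool.h>

    struct lock_debug {
        int irq_safe;   /* -1: not yet known, 0: taken with IRQs on, 1: off */
    };

    #define LOCK_DEBUG_INIT { .irq_safe = -1 }

    /* Returns false when the usage is inconsistent with the first use. */
    static bool check_lock_model(struct lock_debug *dbg, bool irqs_disabled)
    {
        if ( dbg->irq_safe < 0 )
            dbg->irq_safe = irqs_disabled;   /* record the first-seen mode */

        return dbg->irq_safe == (int)irqs_disabled;
    }
    ```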

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in their try variants regarding
    preemption: rwlocks switch preemption off before testing the lock,
    while spinlocks do so only after the first check.
    
    Modify _spin_trylock() to disable preemption before testing whether the
    lock is held, in order to be preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
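
    A minimal userspace model of the ordering change (preemption is modeled
    here as a simple per-thread counter; this is an illustrative sketch,
    not the actual Xen spinlock code):

    ```c
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Model: preempt_count > 0 means preemption is disabled. */
    static _Thread_local unsigned int preempt_count;

    static void preempt_disable(void) { preempt_count++; }
    static void preempt_enable(void)  { preempt_count--; }

    struct lock { atomic_flag held; };

    /*
     * Harmonized ordering: disable preemption *before* probing the lock,
     * as the rwlock try variant already does, rather than only after a
     * successful first check.
     */
    static bool trylock(struct lock *l)
    {
        preempt_disable();                      /* before touching the lock */
        if ( atomic_flag_test_and_set(&l->held) )
        {
            preempt_enable();                   /* contended: undo */
            return false;
        }
        return true;                            /* success: stay non-preemptible */
    }

    static void unlock(struct lock *l)
    {
        atomic_flag_clear(&l->held);
        preempt_enable();
    }
    ```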

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen when one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
    such that the file(s) themselves are generated before compilation gets
    attempted.  The same, however, is also necessary for generated headers:
    source files including them cannot be compiled until the headers exist.
    
    The dependency specified in libacpi's Makefile, on the other hand, is
    entirely pointless nowadays - no compilation happens there anymore
    (except for tools involved in building the generated files).  Together
    with it, the rule generating acpi.a can also go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 15:07:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 15:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22635.49006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8lk-0000Yo-OC; Mon, 09 Nov 2020 15:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22635.49006; Mon, 09 Nov 2020 15:07:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8lk-0000Yh-LB; Mon, 09 Nov 2020 15:07:44 +0000
Received: by outflank-mailman (input) for mailman id 22635;
 Mon, 09 Nov 2020 15:07:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kc8lj-0000Yc-AI
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 15:07:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b910015-2f54-4ac7-8034-067ef38d7d67;
 Mon, 09 Nov 2020 15:07:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 78254ABAE;
 Mon,  9 Nov 2020 15:07:41 +0000 (UTC)
X-Inumbo-ID: 9b910015-2f54-4ac7-8034-067ef38d7d67
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604934461;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uk0rdmTBIxSJxNqONHLje+CAA9BWzyn2FGHgBU+NhL8=;
	b=TXhbGNsrcHvRmUPxIYucDI6loUyKFEEUyhEc1pljVw+s/n6RKM0esLKi+E6gmP2q5I6ZAl
	3TE55YOHxx5SpYAsmOD2ZdBqp+mjOgUlYU2EJlJ4iMts5EpRegdbVtR1mYrwpkU/h/LhYy
	Bgpk8UN/eRk0dlSGQFkj0ZfjSxQXpOQ=
Subject: Re: [PATCH v5 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109064128.3908-1-jgross@suse.com>
 <20201109064128.3908-3-jgross@suse.com>
 <df9737a4-f90a-0498-b67d-ce254b835287@suse.com>
 <7fa67b64-4114-736b-660c-2ec5be8f7da1@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2add23c5-8fae-5637-a132-b25b2d856a7b@suse.com>
Date: Mon, 9 Nov 2020 16:07:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <7fa67b64-4114-736b-660c-2ec5be8f7da1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.11.2020 14:16, Jürgen Groß wrote:
> On 09.11.20 12:58, Jan Beulich wrote:
>> On 09.11.2020 07:41, Juergen Gross wrote:
>>> --- a/xen/arch/x86/pv/shim.c
>>> +++ b/xen/arch/x86/pv/shim.c
>>> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>>>       if ( port_is_valid(guest, port) )
>>>       {
>>>           struct evtchn *chn = evtchn_from_port(guest, port);
>>> -        unsigned long flags;
>>>   
>>> -        spin_lock_irqsave(&chn->lock, flags);
>>> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>> -        spin_unlock_irqrestore(&chn->lock, flags);
>>> +        if ( evtchn_read_trylock(chn) )
>>> +        {
>>> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
>>> +            evtchn_read_unlock(chn);
>>> +        }
>>
>> Does this need trylock?
> 
> It is called directly from the event upcall, so interrupts should be
> off here. Without trylock this would result in check_lock() triggering.

Hmm, right. Since the trylock function needs exposing anyway, this
isn't much of a problem, especially since it fits the pattern of being
a send.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 15:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 15:21:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22659.49038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8z0-0002Mn-58; Mon, 09 Nov 2020 15:21:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22659.49038; Mon, 09 Nov 2020 15:21:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc8z0-0002Mg-1q; Mon, 09 Nov 2020 15:21:26 +0000
Received: by outflank-mailman (input) for mailman id 22659;
 Mon, 09 Nov 2020 15:21:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kc8yy-0002Mb-Q1
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 15:21:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0dd5f93-1f3b-4e24-992b-ffdf11c15443;
 Mon, 09 Nov 2020 15:21:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 32826ABCC;
 Mon,  9 Nov 2020 15:21:23 +0000 (UTC)
X-Inumbo-ID: f0dd5f93-1f3b-4e24-992b-ffdf11c15443
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604935283;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ompXXUS9kSTZFi9Rj5W7qk62f2/bOTFujCaDXGDgiRw=;
	b=t6nR34upa4AV05T+1AnA70lqyY1PbEmHZVt1Hn5V4GipiHwus+Hj+AWbeDubAZBWgqG/eG
	4Bnpz33tNRIq+FKpOHVKwYaX+d5sT2opkuTORcOY/83FTyJfkPxUM/3ziuWfj7/CFQ8GV+
	2C4COzo0mcHDmVcffxAyrf7JftQjUAQ=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/CPUID: also check leaf 7 max subleaf to be compatible
Message-ID: <a59790df-cc2c-30a3-fdf8-7c56472f00c2@suse.com>
Date: Mon, 9 Nov 2020 16:21:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Just like is done for basic and extended major leaves.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -18,6 +18,9 @@ int x86_cpu_policies_are_compatible(cons
     if ( guest->cpuid->basic.max_leaf > host->cpuid->basic.max_leaf )
         FAIL_CPUID(0, NA);
 
+    if ( guest->cpuid->feat.max_subleaf > host->cpuid->feat.max_subleaf )
+        FAIL_CPUID(7, 0);
+
     if ( guest->cpuid->extd.max_leaf > host->cpuid->extd.max_leaf )
         FAIL_CPUID(0x80000000, NA);
 


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 15:38:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 15:38:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22665.49051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc9F7-0003Q6-Gx; Mon, 09 Nov 2020 15:38:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22665.49051; Mon, 09 Nov 2020 15:38:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kc9F7-0003Pz-Dt; Mon, 09 Nov 2020 15:38:05 +0000
Received: by outflank-mailman (input) for mailman id 22665;
 Mon, 09 Nov 2020 15:38:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2NS2=EP=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kc9F6-0003Pu-GE
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 15:38:04 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 072a0080-e124-459f-8307-958c58b47770;
 Mon, 09 Nov 2020 15:38:02 +0000 (UTC)
X-Inumbo-ID: 072a0080-e124-459f-8307-958c58b47770
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604936282;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=h8MDBQiES9vqsk/VNWiI5tsZVeX14okfUoJ5BWeel7Y=;
  b=Zbd+feVIOyiQzvfZ07XgR0RnY06am78XXiTpaESUkOWxSIf0Acrcg/FK
   NGRbdCZ4gS2oeyDIejPRWhlPgx6c7Qvzp7deypHc4NnBVnxKG9amvJxnT
   2YbKFdjTh22ykBjO+wUMk/Uku2lJw9KAI5w2DcdzGfd4GstNLLiUo2Z5p
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: c6aLOzjjUzrg04/4+tsgI7sD+ZAfDtDe3sdT4feJNOHFlRW+hhVHU5qv+r+lSLcXeNyZ3oDsVL
 Ru5gbHzQEVd0aCiT4sz9hOeW7N9pDhNTffJQK1cQYIyx9gLJiQdTVE25GAT3UUc3fILmJjSBeS
 u0VUn5jUUbbhF+V4MbZKqC5/52b9h96cMSInYQ1CPSvaI2jelGuqYhQqFAPEuSRdKM+QR5rtQs
 3keusS7pF8jRePvaUH9p+z6rhBUOmkE05hpCRmkd66dJ0V1ENwpddxDmUW2QjymG6NFeJnKGoC
 Pvk=
X-SBRS: None
X-MesageID: 30738214
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,463,1596513600"; 
   d="scan'208";a="30738214"
Subject: Re: [PATCH] x86/CPUID: also check leaf 7 max subleaf to be compatible
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <a59790df-cc2c-30a3-fdf8-7c56472f00c2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3be88229-ac3c-37fa-ce15-d6c48654cd74@citrix.com>
Date: Mon, 9 Nov 2020 15:30:58 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a59790df-cc2c-30a3-fdf8-7c56472f00c2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 09/11/2020 15:21, Jan Beulich wrote:
> Just like is done for basic and extended major leaves.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 16:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 16:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22721.49135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABh-0000ym-Ir; Mon, 09 Nov 2020 16:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22721.49135; Mon, 09 Nov 2020 16:38:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABh-0000ye-AZ; Mon, 09 Nov 2020 16:38:37 +0000
Received: by outflank-mailman (input) for mailman id 22721;
 Mon, 09 Nov 2020 16:38:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcABg-0000vD-HK
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:38:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92da756d-819c-4ff9-883c-5d9172814c7c;
 Mon, 09 Nov 2020 16:38:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79FB2AD0F;
 Mon,  9 Nov 2020 16:38:29 +0000 (UTC)
X-Inumbo-ID: 92da756d-819c-4ff9-883c-5d9172814c7c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604939909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zA2xJswA/ZiZycp0tBhOJN4sbHBbwAVz3EYB9fQC5Xw=;
	b=i5LB/RS0DMA2RLw7ZpEoGG04PunrSQBzWvQTssLI5OmCcuvvu7HuPpKEH9bmpEr0PgwXFi
	Ngv+nGZDsRuRtsk1MoYi6t9fVk+L1iLgE1EFV8huyEHVXXbV5+0rZ+TmyvSAYrW1/MUemQ
	+caAUpSJ6ypDa8BRtoIWXY6rGMCL8Fw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
Date: Mon,  9 Nov 2020 17:38:25 +0100
Message-Id: <20201109163826.13035-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking for the case of
sending an event, and remove the need to disable interrupts when taking
the lock.

The lock is needed for avoiding races between event channel state
changes (creation, closing, binding) against normal operations (set
pending, [un]masking, priority changes).

Use a rwlock, but with some restrictions:

- Changing the state of an event channel (creation, closing, binding)
  needs to use write_lock(), with an ASSERT() verifying that the lock is
  only taken as writer when the event channel's state before or after
  the locked region is appropriate (either free or unbound).

- Sending an event mostly needs to use read_trylock(); if the lock
  cannot be obtained, the operation is omitted.  This is needed because
  sending an event can happen with interrupts off (at least in some
  cases).

- Dumping the event channel state for debug purposes uses
  read_trylock(), too, in order to avoid blocking in case the lock is
  held as writer for a long time.

- All other cases can use read_lock().

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V4:
- switch to rwlock
- add ASSERT() to verify correct write_lock() usage

V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 138 +++++++++++++++++++++----------------
 xen/include/xen/event.h    |  29 ++++++--
 xen/include/xen/sched.h    |   5 +-
 5 files changed, 112 insertions(+), 75 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_read_trylock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index cd4a2c0501..43e3520df6 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -50,6 +50,29 @@
 
 #define consumer_is_xen(e) (!!(e)->xen_consumer)
 
+/*
+ * Lock an event channel exclusively. This is allowed only when the channel is
+ * free or unbound either when taking or when releasing the lock, as any
+ * concurrent operation on the event channel using evtchn_read_trylock() will
+ * just assume the event channel is free or unbound at the moment when the
+ * evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}
+
 /*
  * The function alloc_unbound_xen_event_channel() allows an arbitrary
  * notifier function to be specified. However, very few unique functions
@@ -133,7 +156,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        rwlock_init(&chn[i].lock);
     }
     return chn;
 }
@@ -255,7 +278,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -271,14 +293,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -291,32 +313,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -326,7 +342,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -362,7 +377,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -379,7 +394,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -402,7 +417,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -440,14 +454,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -464,7 +478,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -476,13 +489,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -526,7 +539,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -559,14 +571,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -587,7 +599,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -688,14 +699,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -703,9 +714,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -725,7 +736,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -738,7 +748,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    evtchn_read_lock(lchn);
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -773,7 +783,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -806,9 +816,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +849,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -849,7 +863,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -864,9 +877,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1068,15 +1083,17 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
+
+    evtchn_read_lock(evtchn);
+
     evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+
+    evtchn_read_unlock(evtchn);
 
     return 0;
 }
@@ -1156,15 +1173,17 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     unsigned int port = set_priority->port;
     struct evtchn *chn;
     long ret;
-    unsigned long flags;
 
     if ( !port_is_valid(d, port) )
         return -EINVAL;
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
+
+    evtchn_read_lock(chn);
+
     ret = evtchn_port_set_priority(d, chn, set_priority->priority);
-    spin_unlock_irqrestore(&chn->lock, flags);
+
+    evtchn_read_unlock(chn);
 
     return ret;
 }
@@ -1332,7 +1351,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1345,14 +1363,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1388,7 +1406,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1403,7 +1420,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1413,7 +1431,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 2ed4be78f6..a0a85cdda8 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,21 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+static inline void evtchn_read_lock(struct evtchn *evtchn)
+{
+    read_lock(&evtchn->lock);
+}
+
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    return read_trylock(&evtchn->lock);
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    read_unlock(&evtchn->lock);
+}
+
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -235,11 +250,12 @@ static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
     bool rc;
-    unsigned long flags;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
+    evtchn_read_lock(evtchn);
+
     rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+
+    evtchn_read_unlock(evtchn);
 
     return rc;
 }
@@ -252,12 +268,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
+        evtchn_read_lock(evtchn);
+
         if ( evtchn_usable(evtchn) )
             rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+
+        evtchn_read_unlock(evtchn);
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..66d8f058be 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    rwlock_t lock;
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
@@ -114,6 +114,9 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
+#ifndef NDEBUG
+    u8 old_state;      /* State when taking lock in write mode. */
+#endif
     u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 16:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 16:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22719.49117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABd-0000wH-SQ; Mon, 09 Nov 2020 16:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22719.49117; Mon, 09 Nov 2020 16:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABd-0000wA-PB; Mon, 09 Nov 2020 16:38:33 +0000
Received: by outflank-mailman (input) for mailman id 22719;
 Mon, 09 Nov 2020 16:38:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcABb-0000vD-LY
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:38:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40bef7d9-7384-4f85-bd59-998c1815a481;
 Mon, 09 Nov 2020 16:38:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B0655AD11;
 Mon,  9 Nov 2020 16:38:29 +0000 (UTC)
X-Inumbo-ID: 40bef7d9-7384-4f85-bd59-998c1815a481
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604939909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1eDlo6s9/bQ6o2hmRgAhuAfISfO0llaHbAtq+TidIvI=;
	b=eIMoxiIGdcdaTDNok1VTLO3NcneGW58cITfTdFyjf9tOW7srpPEHzfmzJVYSA6dNFd05XZ
	nllz6wrLVbNeok3AsrdXbF0efN3/+3CvdFg8rNR2shRj06FWvHv0K6Re/JGtSrLDMREZoA
	GG+UCjHa7Gdj+S5pCD6XGm6SGfYmbn8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v6 3/3] xen/evtchn: revert 52e1fc47abc3a0123
Date: Mon,  9 Nov 2020 17:38:26 +0100
Message-Id: <20201109163826.13035-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With the event channel lock no longer disabling interrupts, commit
52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
be reverted again.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_channel.c  |  6 ---
 xen/include/xsm/xsm.h       |  1 -
 xen/xsm/flask/avc.c         | 78 ++++---------------------------------
 xen/xsm/flask/hooks.c       | 10 -----
 xen/xsm/flask/include/avc.h |  2 -
 5 files changed, 7 insertions(+), 90 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 43e3520df6..eacd96b92f 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -740,12 +740,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
 
-    /*
-     * As the call further down needs to avoid allocations (due to running
-     * with IRQs off), give XSM a chance to pre-allocate if needed.
-     */
-    xsm_evtchn_send(XSM_HOOK, ld, NULL);
-
     lchn = evtchn_from_port(ld, lport);
 
     evtchn_read_lock(lchn);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 358ec13ba8..7bd03d8817 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -59,7 +59,6 @@ struct xsm_operations {
     int (*evtchn_interdomain) (struct domain *d1, struct evtchn *chn1,
                                         struct domain *d2, struct evtchn *chn2);
     void (*evtchn_close_post) (struct evtchn *chn);
-    /* Note: Next hook may be called with 'chn' set to NULL. See call site. */
     int (*evtchn_send) (struct domain *d, struct evtchn *chn);
     int (*evtchn_status) (struct domain *d, struct evtchn *chn);
     int (*evtchn_reset) (struct domain *d1, struct domain *d2);
diff --git a/xen/xsm/flask/avc.c b/xen/xsm/flask/avc.c
index 2dfa1f4295..87ea38b7a0 100644
--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -24,9 +24,7 @@
 #include <xen/prefetch.h>
 #include <xen/kernel.h>
 #include <xen/sched.h>
-#include <xen/cpu.h>
 #include <xen/init.h>
-#include <xen/percpu.h>
 #include <xen/rcupdate.h>
 #include <asm/atomic.h>
 #include <asm/current.h>
@@ -343,79 +341,17 @@ static inline int avc_reclaim_node(void)
     return ecx;
 }
 
-static struct avc_node *new_node(void)
-{
-    struct avc_node *node = xzalloc(struct avc_node);
-
-    if ( node )
-    {
-        INIT_RCU_HEAD(&node->rhead);
-        INIT_HLIST_NODE(&node->list);
-        avc_cache_stats_incr(allocations);
-    }
-
-    return node;
-}
-
-/*
- * avc_has_perm_noaudit() may consume up to two nodes, which we may not be
- * able to obtain from the allocator at that point. Since the is merely
- * about caching earlier decisions, allow for (just) one pre-allocated node.
- */
-static DEFINE_PER_CPU(struct avc_node *, prealloc_node);
-
-void avc_prealloc(void)
-{
-    struct avc_node **prealloc = &this_cpu(prealloc_node);
-
-    if ( !*prealloc )
-        *prealloc = new_node();
-}
-
-static int cpu_callback(struct notifier_block *nfb, unsigned long action,
-                        void *hcpu)
-{
-    unsigned int cpu = (unsigned long)hcpu;
-    struct avc_node **prealloc = &per_cpu(prealloc_node, cpu);
-
-    if ( action == CPU_DEAD && *prealloc )
-    {
-        xfree(*prealloc);
-        *prealloc = NULL;
-        avc_cache_stats_incr(frees);
-    }
-
-    return NOTIFY_DONE;
-}
-
-static struct notifier_block cpu_nfb = {
-    .notifier_call = cpu_callback,
-    .priority = 99
-};
-
-static int __init cpu_nfb_init(void)
-{
-    register_cpu_notifier(&cpu_nfb);
-    return 0;
-}
-__initcall(cpu_nfb_init);
-
 static struct avc_node *avc_alloc_node(void)
 {
-    struct avc_node *node, **prealloc = &this_cpu(prealloc_node);
+    struct avc_node *node;
 
-    node = *prealloc;
-    *prealloc = NULL;
+    node = xzalloc(struct avc_node);
+    if ( !node )
+        goto out;
 
-    if ( !node )
-    {
-        /* Must not call xmalloc() & Co with IRQs off. */
-        if ( !local_irq_is_enabled() )
-            goto out;
-        node = new_node();
-        if ( !node )
-            goto out;
-    }
+    INIT_RCU_HEAD(&node->rhead);
+    INIT_HLIST_NODE(&node->list);
+    avc_cache_stats_incr(allocations);
 
     atomic_inc(&avc_cache.active_nodes);
     if ( atomic_read(&avc_cache.active_nodes) > avc_cache_threshold )
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de050cc9fe..19b0d9e3eb 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -281,16 +281,6 @@ static int flask_evtchn_send(struct domain *d, struct evtchn *chn)
 {
     int rc;
 
-    /*
-     * When called with non-NULL chn, memory allocation may not be permitted.
-     * Allow AVC to preallocate nodes as necessary upon early notification.
-     */
-    if ( !chn )
-    {
-        avc_prealloc();
-        return 0;
-    }
-
     switch ( chn->state )
     {
     case ECS_INTERDOMAIN:
diff --git a/xen/xsm/flask/include/avc.h b/xen/xsm/flask/include/avc.h
index 722919b762..c14bd07a2b 100644
--- a/xen/xsm/flask/include/avc.h
+++ b/xen/xsm/flask/include/avc.h
@@ -91,8 +91,6 @@ int avc_has_perm_noaudit(u32 ssid, u32 tsid, u16 tclass, u32 requested,
 int avc_has_perm(u32 ssid, u32 tsid, u16 tclass, u32 requested,
                                              struct avc_audit_data *auditdata);
 
-void avc_prealloc(void);
-
 /* Exported to selinuxfs */
 struct xen_flask_hash_stats;
 int avc_get_hash_stats(struct xen_flask_hash_stats *arg);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 16:38:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 16:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22720.49128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABh-0000yA-4E; Mon, 09 Nov 2020 16:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22720.49128; Mon, 09 Nov 2020 16:38:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABh-0000y2-0z; Mon, 09 Nov 2020 16:38:37 +0000
Received: by outflank-mailman (input) for mailman id 22720;
 Mon, 09 Nov 2020 16:38:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcABf-0000v8-RJ
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:38:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2849798a-e04d-47c6-a55b-07f6f9a5605f;
 Mon, 09 Nov 2020 16:38:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3D9B7AD07;
 Mon,  9 Nov 2020 16:38:29 +0000 (UTC)
X-Inumbo-ID: 2849798a-e04d-47c6-a55b-07f6f9a5605f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604939909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DpSBXBI2OHigxodLIbPPZp3jSMwO4fg1IhybSfwcpsI=;
	b=mdNDxUawjIsq4LTthLx/DVa/uts2f/PQ0R+BflxXhbQ8CoXQShaIrahQInkijFeozpq6AY
	vg94KekXePLAYQJSpJQz2fc+O+rLhSWsIS1PvoSzq5xgs7mQkd+JswsKJBZYQr5CuwUlG4
	SIMK80I89bVcO8tudmJD3H5tUsik97w=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 1/3] xen/events: access last_priority and last_vcpu_id together
Date: Mon,  9 Nov 2020 17:38:24 +0100
Message-Id: <20201109163826.13035-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When an event is sent it may need to change queues, and the
old queue needs to be remembered in order to keep the links between
queue elements intact. For this purpose the event channel contains
last_priority and last_vcpu_id values identifying the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistencies.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c6e58d2a1a..79090c04ca 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq = { };
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 16:38:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 16:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22718.49104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABc-0000vP-Ji; Mon, 09 Nov 2020 16:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22718.49104; Mon, 09 Nov 2020 16:38:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcABc-0000vI-Gp; Mon, 09 Nov 2020 16:38:32 +0000
Received: by outflank-mailman (input) for mailman id 22718;
 Mon, 09 Nov 2020 16:38:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sEkb=EP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcABb-0000v8-2G
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:38:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b4ea960-041b-44e3-96b3-3216b006e393;
 Mon, 09 Nov 2020 16:38:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 10915AC23;
 Mon,  9 Nov 2020 16:38:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604939909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=QU7Z3Z0bBjMIFarM5iEftjQYoXJqTzDIt0wRsAEwtIo=;
	b=e8UAv2S+dcrq+HObe5o8R9YRbIM7bntu3ww9Qwm6FODAhPXWvzaMJYfOcTTOB3V6YpJd1q
	jqwkqyaf9LIrj/6oGIKLBrE4jzkhpWzozAFDE1DSeI1XOeSwqfaFVv+cEEoi8pK9Iry2b6
	CAtgkgihAUT1v9DRu8lygSlkLFd/h1g=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v6 0/3] XSA-343 followup patches
Date: Mon,  9 Nov 2020 17:38:23 +0100
Message-Id: <20201109163826.13035-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches for XSA-343 produced some fallout; especially the event
channel locking has proven to be problematic.

Patch 1 targets fifo event channels, avoiding races in case the fifo
queue of a specific event channel has been changed.

The second patch modifies the per event channel locking scheme in
order to avoid deadlocks and problems resulting from the XSA-343
patches having changed the event channel lock to require IRQs off.

Changes in V6:
- added patch 3 (Jan Beulich)
- switched some more read_trylock() cases to read_lock() (Jan Beulich)

Changes in V5:
- moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
- used normal read_lock() in some cases (Jan Beulich)

Changes in V4:
- switched to real rwlock

Changes in V3:
- addressed comments

Juergen Gross (3):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock
  xen/evtchn: revert 52e1fc47abc3a0123

 xen/arch/x86/irq.c          |   6 +-
 xen/arch/x86/pv/shim.c      |   9 ++-
 xen/common/event_channel.c  | 144 +++++++++++++++++++-----------------
 xen/common/event_fifo.c     |  25 +++++--
 xen/include/xen/event.h     |  29 ++++++--
 xen/include/xen/sched.h     |   8 +-
 xen/include/xsm/xsm.h       |   1 -
 xen/xsm/flask/avc.c         |  78 ++-----------------
 xen/xsm/flask/hooks.c       |  10 ---
 xen/xsm/flask/include/avc.h |   2 -
 10 files changed, 139 insertions(+), 173 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 16:40:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 16:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22740.49153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcADl-000264-6T; Mon, 09 Nov 2020 16:40:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22740.49153; Mon, 09 Nov 2020 16:40:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcADl-00025x-2m; Mon, 09 Nov 2020 16:40:45 +0000
Received: by outflank-mailman (input) for mailman id 22740;
 Mon, 09 Nov 2020 16:40:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcADj-00025o-52
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:40:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9afd9b1e-6cc2-4d20-8141-d35fd72f0d64;
 Mon, 09 Nov 2020 16:40:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2ADF7ACF1;
 Mon,  9 Nov 2020 16:40:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604940041;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3aNMIrVfZWlG53c/2LnwOwCn4F3/iAvTdqD7F2c9kWE=;
	b=kyl6PeK3jfvDHOK75Tw7BKxFO9QjgigYw/k7vPIu0gvf+xyc6IP3Tcz80IGuFxyhHa988B
	qwjhbCUhUGP7+fDBBnPgxYwubuGnE6Lc+FKm2MWNNF2Ksah64CrSmMkb9wwe3w1lEA3RWM
	XMcZ4QtJRH8CqVvKRVlr8UkflT/8JNw=
Subject: Re: [PATCH v6 3/3] xen/evtchn: revert 52e1fc47abc3a0123
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5cabf9dc-6227-7985-b150-2452808496aa@suse.com>
Date: Mon, 9 Nov 2020 17:40:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109163826.13035-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 17:38, Juergen Gross wrote:
> With the event channel lock no longer disabling interrupts commit
> 52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
> be reverted again.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 16:48:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 16:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22754.49168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcAKy-0002NU-0N; Mon, 09 Nov 2020 16:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22754.49168; Mon, 09 Nov 2020 16:48:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcAKx-0002NN-Sv; Mon, 09 Nov 2020 16:48:11 +0000
Received: by outflank-mailman (input) for mailman id 22754;
 Mon, 09 Nov 2020 16:48:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a0OZ=EP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcAKw-0002NI-Qm
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 16:48:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e94f90a8-404d-42dc-83fb-ffa3a7ee0cbb;
 Mon, 09 Nov 2020 16:48:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8800CAD07;
 Mon,  9 Nov 2020 16:48:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604940488;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZeeLWfSAPUv67W5P3EPdo1W/Zp3+Po4OXwB6EjUj6+4=;
	b=qZ5jUuQcLgS0Df44dPaDkPnBlEVdEBAi+3uwAnQLLNrwMdZy8u4wIlZjxm/k8K8BRRXItW
	aYuON8rfuixrjLGpGIuojSg+lrvWwd81v77Z6+afBD0G0fdZ4qkIUj3E0i53U7hHWM+mKm
	aDdsqca57fBeGx6Drr4kkDWlZ7lfbB0=
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
Date: Mon, 9 Nov 2020 17:48:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109163826.13035-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 17:38, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event, and remove the need to disable interrupts for
> taking the lock.
> 
> The lock is needed for avoiding races between event channel state
> changes (creation, closing, binding) against normal operations (set
> pending, [un]masking, priority changes).
> 
> Use a rwlock, but with some restrictions:
> 
> - Changing the state of an event channel (creation, closing, binding)
>   needs to use write_lock(), with an ASSERT() that the lock is taken as
>   writer only when the state of the event channel either before or
>   after the locked region is appropriate (either free or unbound).
> 
> - Sending an event mostly needs to use read_trylock(); if the lock
>   cannot be obtained, the operation is omitted. This is needed as
>   sending an event can happen with interrupts off (at least in some
>   cases).
> 
> - Dumping the event channel state for debug purposes uses
>   read_trylock(), too, in order to avoid blocking in case the lock is
>   taken as writer for a long time.
> 
> - All other cases can use read_lock().
> 
> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V4:
> - switch to rwlock
> - add ASSERT() to verify correct write_lock() usage
> 
> V3:
> - corrected a copy-and-paste error (Jan Beulich)
> - corrected unlocking in two cases (Jan Beulich)
> - renamed evtchn_read_trylock() (Jan Beulich)
> - added some comments and an ASSERT() for evtchn_write_lock()
> - set EVENT_WRITE_LOCK_INC to INT_MIN
> 
> V2:
> - added needed barriers
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I'll give it overnight for others to possibly comment, but I'd
like to get this in tomorrow if at all possible.

I also think this will want backporting beyond just the fully
maintained branches.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 17:39:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22776.49189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcB7v-0006lY-Uw; Mon, 09 Nov 2020 17:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22776.49189; Mon, 09 Nov 2020 17:38:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcB7v-0006lR-RC; Mon, 09 Nov 2020 17:38:47 +0000
Received: by outflank-mailman (input) for mailman id 22776;
 Mon, 09 Nov 2020 17:38:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2NS2=EP=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcB7u-0006lM-5s
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 17:38:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe5324bc-429e-413a-8850-5a522ac91c23;
 Mon, 09 Nov 2020 17:38:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604943525;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Xi+YP2gii5z/qrVuc/xwb4NhqKRSsN0i+5+j9bZiYH4=;
  b=g3gr0F2/G0gvNU4JiJpLR6L8U568W/Bmu7uuUgk3cdEQzvHf6aP7nNlA
   U9LeN8T3PM+M0HNWKcX94OYiHHDMzyEqI5zBO1IqDYoJ+AcCS7qLpbpRv
   87IBQk1JM3sPLuAk7P1udFG7Nocr8zLi9aKRpPJOkbPBVtRS3sAqn7mkZ
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 4Oz/y88grkHPEUvsMUtbkrMf/0c9qbhY5dNSvopmyQh50I3iPcfbl/lwoLSHd/vXTOvnFBWHpN
 IBBBbxFLyIDVGTuVVhKSHckX4KKaiOI+Qlrw8v8IVS7cfbXWT/apuIaImIlbTzUh2/AZjDv9i/
 mLZdoxqUuke8IQmweMxYYuuVFzl+gVdbSaVjKUX8Oh9HQpnAxwCZsRYp+vQ/DL7h+bplwCfiAy
 KxkJIxD2bMvfdqwokkFgO1AhofyvelwJM6CL9xi8fqGea4DoV4PZvU5DxvCZ0dTJcR5r/Ak3M3
 1+U=
X-SBRS: None
X-MesageID: 30781346
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,463,1596513600"; 
   d="scan'208";a="30781346"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Date: Mon, 9 Nov 2020 17:38:19 +0000
Message-ID: <20201109173819.7817-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

From: Roger Pau Monné <roger.pau@citrix.com>

Currently a PV hardware domain can also be given control over the CPU
frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
However, since commit 322ec7c89f6 the default behavior has been to
reject accesses to MSRs that are not explicitly handled, preventing PV
guests that manage the CPU frequency from reading
MSR_IA32_PERF_{STATUS/CTL}.

Additionally some HVM guests (Windows at least) will attempt to read
MSR_IA32_PERF_CTL and will panic if given back a #GP fault:

  vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
  d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0

Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
handling shared between HVM and PV guests, and add an explicit case
for reads to MSR_IA32_PERF_{STATUS/CTL}.

Restore the previous behavior and allow PV guests with the required
permissions to read the contents of the mentioned MSRs. Non-privileged
guests will get 0 when trying to read those registers, just as writes
to MSR_IA32_PERF_CTL by such guests are already silently dropped.

Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * fix is_cpufreq_controller() to exclude PVH dom0, and collapse to nothing in
   !CONFIG_PV builds
 * Drop the cross-vendor checks.  It isn't possible to configure dom0 as
   cross-vendor, and anyone using is_cpufreq_controller() without an exact
   model match has far bigger problems.
 * At least Centaur implements these MSRs.  Add access.
---
 xen/arch/x86/msr.c             | 34 ++++++++++++++++++++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 14 --------------
 xen/include/xen/sched.h        | 17 +++++++++++++++++
 3 files changed, 51 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 9c69ef8792..0a8ae4d22c 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -242,6 +242,25 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
             goto gp_fault;
         break;
 
+        /*
+         * These MSRs are not enumerated in CPUID.  They have been around
+         * since the Pentium 4, and implemented by other vendors.
+         *
+         * Some versions of Windows try reading these before setting up a #GP
+         * handler, and Linux has several unguarded reads as well.  Provide
+         * RAZ semantics, in general, but permit a cpufreq controller dom0 to
+         * have full access.
+         */
+    case MSR_IA32_PERF_STATUS:
+    case MSR_IA32_PERF_CTL:
+        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
+            goto gp_fault;
+
+        *val = 0;
+        if ( likely(!is_cpufreq_controller(d)) || rdmsr_safe(msr, *val) == 0 )
+            break;
+        goto gp_fault;
+
     case MSR_IA32_THERM_STATUS:
         if ( cp->x86_vendor != X86_VENDOR_INTEL )
             goto gp_fault;
@@ -448,6 +467,21 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             goto gp_fault;
         break;
 
+        /*
+         * This MSR is not enumerated in CPUID.  It has been around since the
+         * Pentium 4, and implemented by other vendors.
+         *
+         * To match the RAZ semantics, implement as write-discard, except for
+         * a cpufreq controller dom0 which has full access.
+         */
+    case MSR_IA32_PERF_CTL:
+        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
+            goto gp_fault;
+
+        if ( likely(!is_cpufreq_controller(d)) || wrmsr_safe(msr, val) == 0 )
+            break;
+        goto gp_fault;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 7cc16d6eda..dbceed8a05 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -849,12 +849,6 @@ static inline uint64_t guest_misc_enable(uint64_t val)
     return val;
 }
 
-static inline bool is_cpufreq_controller(const struct domain *d)
-{
-    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
-            is_hardware_domain(d));
-}
-
 static uint64_t guest_efer(const struct domain *d)
 {
     uint64_t val;
@@ -1121,14 +1115,6 @@ static int write_msr(unsigned int reg, uint64_t val,
             return X86EMUL_OKAY;
         break;
 
-    case MSR_IA32_PERF_CTL:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
-            break;
-        if ( likely(!is_cpufreq_controller(currd)) ||
-             wrmsr_safe(reg, val) == 0 )
-            return X86EMUL_OKAY;
-        break;
-
     case MSR_IA32_THERM_CONTROL:
     case MSR_IA32_ENERGY_PERF_BIAS:
         if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..b4d3e53310 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
     FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
 } cpufreq_controller;
 
+static always_inline bool is_cpufreq_controller(const struct domain *d)
+{
+    /*
+     * A PV dom0 can be nominated as the cpufreq controller, instead of using
+     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
+     * MSRs.
+     *
+     * This interface only works when dom0 is identity pinned and has the same
+     * number of vCPUs as pCPUs on the system.
+     *
+     * It would be far better to paravirtualise the interface.
+     */
+    return (IS_ENABLED(CONFIG_PV) &&
+            (cpufreq_controller == FREQCTL_dom0_kernel) &&
+            is_pv_domain(d) && is_hardware_domain(d));
+}
+
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 17:44:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 17:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22782.49201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBDH-0007e6-J6; Mon, 09 Nov 2020 17:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22782.49201; Mon, 09 Nov 2020 17:44:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBDH-0007dz-Fo; Mon, 09 Nov 2020 17:44:19 +0000
Received: by outflank-mailman (input) for mailman id 22782;
 Mon, 09 Nov 2020 17:44:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wB/=EP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kcBDG-0007du-7f
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 17:44:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a36829f3-757d-4a32-815a-b7bced826d62;
 Mon, 09 Nov 2020 17:44:17 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcBDC-00036g-Rq; Mon, 09 Nov 2020 17:44:14 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcBDC-0006OZ-Kg; Mon, 09 Nov 2020 17:44:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=P2gXSBFISjxCjNa9a0EzzOOqarUXYUyO3s6dL9m0hMY=; b=bwVlSRH8g7XF+rr7TYKzE+6VMe
	9uCDk5zqNh79/ag3Ytm8kPlVN7TmWzs0OPAUwrVmmJdfIL9ZAjFQpekrYIy+7pl93jB/dyWG5vQs7
	LW5rrCkXe1jwDlHiJQtq+qSe3ClDRS/tPoYY/6UIUbScWkaZzvmJeDrZuVROGTm/TBSU=;
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bf53714c-ad49-13f5-147c-de225b775d42@xen.org>
Date: Mon, 9 Nov 2020 17:44:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201109163826.13035-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 09/11/2020 16:38, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event, and to remove the need to disable interrupts when
> taking the lock.
> 
> The lock is needed for avoiding races between event channel state
> changes (creation, closing, binding) against normal operations (set
> pending, [un]masking, priority changes).
> 
> Use a rwlock, but with some restrictions:
> 
> - Changing the state of an event channel (creation, closing, binding)
>    needs to use write_lock(), ASSERT()ing that the lock is taken as
>    writer only when the event channel's state before or after the
>    locked region is appropriate (either free or unbound).
> 
> - Sending an event mostly needs to use read_trylock(); if the lock
>    cannot be obtained, the operation is omitted. This is needed as
>    sending an event can happen with interrupts off (at least in some
>    cases).
> 
> - Dumping the event channel state for debug purposes uses
>    read_trylock(), too, to avoid blocking in case the lock is
>    held as writer for a long time.
> 
> - All other cases can use read_lock().
> 
> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 17:47:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 17:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22790.49212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBFu-0007nZ-1P; Mon, 09 Nov 2020 17:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22790.49212; Mon, 09 Nov 2020 17:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBFt-0007nS-Uk; Mon, 09 Nov 2020 17:47:01 +0000
Received: by outflank-mailman (input) for mailman id 22790;
 Mon, 09 Nov 2020 17:47:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wB/=EP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kcBFr-0007nN-Ux
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 17:46:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06fa5825-c4d3-4a73-8ca7-2194af36eb00;
 Mon, 09 Nov 2020 17:46:59 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcBFq-0003At-69; Mon, 09 Nov 2020 17:46:58 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcBFp-0006f1-Ve; Mon, 09 Nov 2020 17:46:58 +0000
X-Inumbo-ID: 06fa5825-c4d3-4a73-8ca7-2194af36eb00
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ij9A6sVLzDlVzXD0GE/WhDIn4hZvFt9y8rauKAsMoR8=; b=lm3X5KRIsCTsQ6aIT+oHMXaiA+
	PMol+3GEmD9gd+VnwnJIdlVodlulcPc5h5yHEjtGh5JyP/sC00DxAmmoPllNkubOq+vHQIw7NnQq/
	51TlGjWwynC5qqmM2d4AiGh1y8wXdOYf1/IGBLnxURBHPYKTXRwTvc6mZHf+vrvvwsUQ=;
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
Date: Mon, 9 Nov 2020 17:46:55 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 09/11/2020 16:48, Jan Beulich wrote:
> On 09.11.2020 17:38, Juergen Gross wrote:
>> Currently the lock for a single event channel needs to be taken with
>> interrupts off, which causes deadlocks in some cases.
>>
>> Rework the per event channel lock to be non-blocking for the case of
>> sending an event, and to remove the need to disable interrupts when
>> taking the lock.
>>
>> The lock is needed for avoiding races between event channel state
>> changes (creation, closing, binding) against normal operations (set
>> pending, [un]masking, priority changes).
>>
>> Use a rwlock, but with some restrictions:
>>
>> - Changing the state of an event channel (creation, closing, binding)
>>    needs to use write_lock(), ASSERT()ing that the lock is taken as
>>    writer only when the event channel's state before or after the
>>    locked region is appropriate (either free or unbound).
>>
>> - Sending an event mostly needs to use read_trylock(); if the lock
>>    cannot be obtained, the operation is omitted. This is needed as
>>    sending an event can happen with interrupts off (at least in some
>>    cases).
>>
>> - Dumping the event channel state for debug purposes uses
>>    read_trylock(), too, to avoid blocking in case the lock is
>>    held as writer for a long time.
>>
>> - All other cases can use read_lock().
>>
>> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V4:
>> - switch to rwlock
>> - add ASSERT() to verify correct write_lock() usage
>>
>> V3:
>> - corrected a copy-and-paste error (Jan Beulich)
>> - corrected unlocking in two cases (Jan Beulich)
>> - renamed evtchn_read_trylock() (Jan Beulich)
>> - added some comments and an ASSERT() for evtchn_write_lock()
>> - set EVENT_WRITE_LOCK_INC to INT_MIN
>>
>> V2:
>> - added needed barriers
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> I'll give it overnight for others to possibly comment, but I'd
> like to get this in tomorrow if at all possible.

IIRC, Citrix originally reported the issues. I would like to give them 
an opportunity to test the patches and confirm this effectively fixes 
their issues.

@Roger, @Andrew, could you give this a try and confirm it unblocks the 
VM event issue?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 17:52:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 17:52:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22798.49227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBL2-0000GK-Mr; Mon, 09 Nov 2020 17:52:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22798.49227; Mon, 09 Nov 2020 17:52:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBL2-0000G9-Ju; Mon, 09 Nov 2020 17:52:20 +0000
Received: by outflank-mailman (input) for mailman id 22798;
 Mon, 09 Nov 2020 17:52:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcBL1-0000FG-Jk
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 17:52:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 057c4481-bade-42f4-a8c5-450dd49b55d6;
 Mon, 09 Nov 2020 17:52:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcBKt-0003Hz-Gp; Mon, 09 Nov 2020 17:52:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcBKt-0003Ng-2b; Mon, 09 Nov 2020 17:52:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcBKt-00049t-26; Mon, 09 Nov 2020 17:52:11 +0000
X-Inumbo-ID: 057c4481-bade-42f4-a8c5-450dd49b55d6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YQvENOjLIep56eOUD+JoEtJYEG+hy/+HmctiY/9WCNk=; b=ac1g3oIdTMMFMMEzSqj/Fr1wn5
	aozHeu2q+UWZSL0rmKn89E1ad+F59zhHgEQJqlM3gto0prd6ciLH4zGV7I92CgE+i/SK0Jyygf3fE
	JNyCx1McQ22WmJ8x5Cy8p6Ld7ww35iSN0v2eSMK2J/sWECNxhJzcvWFzNKk5j5hNmVBM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156582-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156582: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f8394f232b1eab649ce2df5c5f15b0e528c92091
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 17:52:11 +0000

flight 156582 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156582/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f8394f232b1eab649ce2df5c5f15b0e528c92091
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  100 days
Failing since        152366  2020-08-01 20:49:34 Z   99 days  166 attempts
Testing same since   156582  2020-11-09 07:47:25 Z    0 days    1 attempts

------------------------------------------------------------
3477 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 663854 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 18:03:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 18:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22825.49315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBVl-0001jd-NT; Mon, 09 Nov 2020 18:03:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22825.49315; Mon, 09 Nov 2020 18:03:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBVl-0001jW-KW; Mon, 09 Nov 2020 18:03:25 +0000
Received: by outflank-mailman (input) for mailman id 22825;
 Mon, 09 Nov 2020 18:03:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yezh=EP=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kcBVk-0001jR-KF
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 18:03:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d25f45fb-6bde-4d35-a489-bdf4e043fe40;
 Mon, 09 Nov 2020 18:03:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kcBVj-0003dv-JB
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 18:03:23 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kcBVj-0000Bh-IP
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 18:03:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kcBVd-0006yE-9B; Mon, 09 Nov 2020 18:03:17 +0000
X-Inumbo-ID: d25f45fb-6bde-4d35-a489-bdf4e043fe40
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=VFQkw8XPMUDgit2Y0G+rDzVc4Kx4UWsCyEr4ryH0CVA=; b=elGjwDSaNWe/uacG0JNvwPm5J9
	VreVSueqZfo8ZJs2l0uiMuun1Q30Fu3zrM2pd9c5geX/DgoVE9PgBcEWF3E/LEekcO3AnkFI2PFYj
	SVtRF3aKjuideiBCmXVkT75i0qubXrpnKHz0LWZrPty4oqODbEKIxIrGKdLGvnl7+U6g=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24489.33893.98470.334969@mariner.uk.xensource.com>
Date: Mon, 9 Nov 2020 18:03:17 +0000
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Julien Grall <julien@xen.org>,
    Jan Beulich <jbeulich@suse.com>,
    "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Bertrand Marquis <bertrand.marquis@arm.com>,
    ba1020@protonmail.ch
Subject: Re: Tools backport request for Xen 4.14
In-Reply-To: <20201009184930.GA65219@mattapan.m5p.com>
References: <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>
	<20201009184930.GA65219@mattapan.m5p.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Elliott Mitchell writes ("Re: Tools backport request for Xen 4.14"):
> On Fri, Oct 09, 2020 at 06:47:22PM +0100, Julien Grall wrote:
> > Would it be possible to consider backporting to 4.14 the following tools 
> > commit:
> > 
> > d25cc3ec93eb "libxl: workaround gcc 10.2 maybe-uninitialized warning"
> > 
> > This would help build the Xen tools on Debian Testing with GCC 10. I
> > haven't built it myself, so I can't promise this is the only one :).
> 
> From Debian's repository:
> https://salsa.debian.org/xen-team/debian-xen.git
> 
> The master and knorrie/4.14 branches include that commit.  They will
> hopefully soon include all the Debian-specific bits for cross-building
> too.

I have now backported all of the GCC10 fixes to all the supported Xen
branches (i.e. back to 4.12).  Upstream staging-* now builds for me on
Debian sid.  Sorry for not getting to this earlier.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 18:31:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 18:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22848.49370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBwi-0004WP-Cp; Mon, 09 Nov 2020 18:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22848.49370; Mon, 09 Nov 2020 18:31:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcBwi-0004WI-9F; Mon, 09 Nov 2020 18:31:16 +0000
Received: by outflank-mailman (input) for mailman id 22848;
 Mon, 09 Nov 2020 18:31:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=caUz=EP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcBwh-0004WD-Cc
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 18:31:15 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbff16a5-9f17-431f-b8b7-dc86d1f980ac;
 Mon, 09 Nov 2020 18:31:13 +0000 (UTC)
X-Inumbo-ID: fbff16a5-9f17-431f-b8b7-dc86d1f980ac
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604946673;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=EuxrYwKQ6NdxoHdWOE4LN5YptLu811gjirE/rRY+3T8=;
  b=DUV01Rlpfs/B0IGuQbtaPg8c9dMaB1onWBZ+G4dC218zfeBhnBbo5a1r
   b9SeqAoPWh55YH1AB1Oin86n2ZScfpJapWldzBwop4R4rLPet1Tz9Jwdc
   HHLiOVscaCQlLGX3n1TgAuUDtE+GBnbSPEKmahJX+v4V0EAX2L7u+Co4e
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s0+yVDksQT/XRSH0K1kEyjAZEbPfgL9se/j8rA87gtavcL9sdDEwlHPPjNSHnbZf9YfwFF4dtX
 s3ZrlM8ho0dsyxGpRxqMfiFa9frK/Vmcw2F7fcfISRXednhZ+vfppCEqG9Cn5cnk/6PiFeVznz
 EDonx88a3saOk8WfDRfFaUMAlADSL1FZlx+T2q5J2fqodhtroZjlKWIJhGGNyh8ylByx6+68/s
 oP5Ex4KM6fkvve1nYJbUFDMK62WALPI0cz/i+cFbiSIuCB6FLVnStk7MWyqUvRSJSQP2FG4d8J
 H6g=
X-SBRS: None
X-MesageID: 30787529
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,464,1596513600"; 
   d="scan'208";a="30787529"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MQIrBKCeXhL3o7m9xS8VctTS9WTW1KF5OcWs478xxrQEZx8AwnGKZ7r/D6zp4HUA5Eeoua/Uee4XqATTtoC6JggFiJOjXh33xVuUhWgdgE0VrWDHQLOKYr0Jef0bnHHb1LYQzanZriza4tUWI46/pyXDm/3Sof5kPET4uG33JV5ohF0EBu56c4u4ymh+tiGU7LWKjrKCnG1XdmtvXpNS7WodL79/z8VjPfc0fESnGFGlvI9wAKs2DZcCI9929DfvrQ/6EKCVqfWOf5E02FMP8FmkZP05g5eFTk525vQVORWANS58vH5nNMpJ3R+dSMV/IJoHM8qPe/k7We7uj+cN2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Tkv/RO8/ivEQBrckg2t8lPlq1vIMiZWaZKDqTuP15TU=;
 b=TVX5irkqcuB4kZvSf1M7OQoRo+y8l1JsVg33sDJDv1TfuBZwtWfchEuiEWXv0xH5+V4u3vQtLSJy2WHxaF0d+FUfYDYdxKtKexr+NugPNoy20OiyAGZJuUOQAnLIRWXRLLH+6QcfvL6/R1R9LX+tnl9uxmtXz+/tz2IUTNxBVXGGOy6wp2Qd/y+RA3ChFWV7l/Uav5qXQzJbruh2Mg+5PHPTkVAzjJYDUct36lVVrSDkRTn1snv3mri+LmssIVHQpl+qWQnoHScvd5POQsovqr6/hDeKq2PJjjw86OcHrAN7eJO5JbjjiH0S4aIbS6vKgJN6sFub2STJ/4UI7vQhXw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Tkv/RO8/ivEQBrckg2t8lPlq1vIMiZWaZKDqTuP15TU=;
 b=GtGIGvchfrWyYeCsnQuNp12idhoqNMzKI0UirdQWtu0WQxGJ2nUY8aZOj5Ou84oxWRqAG/4axmM+j4RRq1USBV0kUsolLPQgZShnb7Ta/gNE5c9m0/mStauAbS1MqJD/wXy7I1SV1pm1c5tSTZcmGMVx06bOPNwLw2fPs1/sM2s=
Date: Mon, 9 Nov 2020 19:31:02 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Message-ID: <20201109183102.mrqklmpqyka5u6bt@Air-de-Roger>
References: <20201109173819.7817-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201109173819.7817-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO2P265CA0039.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ff2561ed-af50-42cb-80b7-08d884dda026
X-MS-TrafficTypeDiagnostic: DS7PR03MB5525:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB552505829E1175AEEF572A2F8FEA0@DS7PR03MB5525.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: V3WDXQa1RooheRLI1doKLm5mUW8V/n/6GMV9VzCPuqVmEeabk3mq/ZqLkNv/ctNMWJFaUbGvRW2kcL2nszSQtpJr6Q/eJu3k6wQx0cIGHWzp6pO0bAvJOqoEqTa4hQeW3I6lZLIm9y+8da8FFR7wy0L0JBSgt5DUMriddYhTCTcq7K/r+4PFjrvgMeSrx+Or83dl/SMsX8R3RTW3/a9a+glUIp29xL1yTUOVmcV8wpdU2g5nAAQGBLdtxsxCa41EbgdkjOLixZEuyCoQge/edjj/v9y0oNgJkKGEGuyUx/LNpVRlK10cF0uryn/+F/1L
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(396003)(136003)(376002)(346002)(366004)(8936002)(2906002)(8676002)(6496006)(956004)(4326008)(83380400001)(6862004)(86362001)(66556008)(6666004)(85182001)(9686003)(5660300002)(316002)(1076003)(54906003)(33716001)(66946007)(66476007)(478600001)(186003)(6486002)(6636002)(16526019)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: gee9EqPQwPiY0efA+/djNqQ7JlNcmIMuW8K8UGFJtypDgWlkWUGYGUKMPgV+Qy7BAWS7KMWcs8khI+6/OXLFXetOn0jwhviHi0HTd4x+ObOwtkHYxDxzOXu6Vrgj8LInibUwW5WuslBLXOt/MK6YjAoJUreP0a8u6LPhgohI9VTXSOXRxIWRveAdAON8458T319DVC8mgEsYKoqRI2W2tm82EPOXGRP8iA1ZURE+Y9MMy9YMaqlNoOJPYPrB5yttJeX7l1m6BvsjkMMtAPixbX4ie9UrhT7NzEsVwKyrz2tZNcoxpMt68VZJIUGCMfl+lLjfxr/ukflWMUW31TSzCLJINdTGR1EN+hOvvw4pXdCFCu1Fm/gpoxHZKoSjg9kgszEXaZN3TNDq5b+Uvz0yu1y5PSjrMjyP6IkPoRFmz4czFZ7UDaoz6w1tTZqHxv47Z3A43rnF6qCAcKxae7aQYpR4xxxp6F2Ezpr6F0eAJ+lwl61xF5yixKkyofbMo/qChL2d9VW1WIQg56zXvAfLGcJuocSnMi6VKQ1xEDWtNzGA/+LFBaWKXCBMCPyCH6GCfl+ZOGutEowf4nxdBUXmFJ/j0qpY3ELtRRXNVZiMwd53lMehMMGcyQR6nxzPw281hkNvqjTkvMHTplho1W2IIpcy/lzE0ckyYd9pgibAhxDhIxpw5cuoXl1bwcTc7IjWJPQf8+dsRkVR42T/vc6HAyIlL5MZoZlMMHKtuR8M2WbIOKpsO48uPbyCulLFSCcMH7FUDTCrAnOzch3aj8797vosNsjo+xRWy4KUwHdV8Ssqg5gyn+UHofjrPjv4PXXU4Nrx2tum/6/EcPCi3JPIzTnoyK+6ilrYven53Vx/w3hbblYZmzkWL7GDZ0hrGNyUuhG9k4/tBIYl3C7q7VplaA==
X-MS-Exchange-CrossTenant-Network-Message-Id: ff2561ed-af50-42cb-80b7-08d884dda026
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Nov 2020 18:31:08.4463
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: o3ZEfEkTtQQfBZ+Da00J+4jQFP1iSEhMMpdSgYhSNK8PgzLYipUSFHblnxDt3w/SPuRuQjSFSO3LT1ZWKc5T5Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5525
X-OriginatorOrg: citrix.com

On Mon, Nov 09, 2020 at 05:38:19PM +0000, Andrew Cooper wrote:
> From: Roger Pau Monné <roger.pau@citrix.com>
> 
> Currently a PV hardware domain can also be given control over the CPU
> frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
> However, since commit 322ec7c89f6 the default behavior has been changed
> to reject accesses to MSRs that are not explicitly handled, preventing
> PV guests that manage the CPU frequency from reading
> MSR_IA32_PERF_{STATUS/CTL}.
> 
> Additionally some HVM guests (Windows at least) will attempt to read
> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
> 
>   vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
>   d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
> 
> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> handling shared between HVM and PV guests, and add an explicit case
> for reads to MSR_IA32_PERF_{STATUS/CTL}.
> 
> Restore the previous behavior and allow PV guests with the required
> permissions to read the contents of the mentioned MSRs. Non-privileged
> guests will get 0 when trying to read those registers, as writes to
> MSR_IA32_PERF_CTL by such guests are already silently dropped.
> 
> Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> v2:
>  * fix is_cpufreq_controller() to exclude PVH dom0, and collapse to nothing in
>    !CONFIG_PV builds
>  * Drop the cross-vendor checks.  It isn't possible to configure dom0 as
>    cross-vendor, and anyone using is_cpufreq_controller() without an exact
>    model match has far bigger problems.

I was on the verge of doing this in v1, but wasn't really sure whether
there was any use case to change the vendor for dom0 cpuid.

>  * At least Centaur implements these MSRs.  Add access.
> ---
>  xen/arch/x86/msr.c             | 34 ++++++++++++++++++++++++++++++++++
>  xen/arch/x86/pv/emul-priv-op.c | 14 --------------
>  xen/include/xen/sched.h        | 17 +++++++++++++++++
>  3 files changed, 51 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index 9c69ef8792..0a8ae4d22c 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -242,6 +242,25 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>              goto gp_fault;
>          break;
>  
> +        /*
> +         * These MSRs are not enumerated in CPUID.  They have been around
> +         * since the Pentium 4, and are implemented by other vendors.
> +         *
> +         * Some versions of Windows try reading these before setting up a #GP
> +         * handler, and Linux has several unguarded reads as well.  Provide
> +         * RAZ semantics, in general, but permit a cpufreq controller dom0 to
> +         * have full access.
> +         */
> +    case MSR_IA32_PERF_STATUS:
> +    case MSR_IA32_PERF_CTL:
> +        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
> +            goto gp_fault;
> +
> +        *val = 0;
> +        if ( likely(!is_cpufreq_controller(d)) || rdmsr_safe(msr, *val) == 0 )
> +            break;
> +        goto gp_fault;
> +
>      case MSR_IA32_THERM_STATUS:
>          if ( cp->x86_vendor != X86_VENDOR_INTEL )
>              goto gp_fault;
> @@ -448,6 +467,21 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>              goto gp_fault;
>          break;
>  
> +        /*
> +         * This MSR is not enumerated in CPUID.  It has been around since
> +         * the Pentium 4, and is implemented by other vendors.
> +         *
> +         * To match the RAZ semantics, implement as write-discard, except for
> +         * a cpufreq controller dom0 which has full access.
> +         */
> +    case MSR_IA32_PERF_CTL:
> +        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
> +            goto gp_fault;
> +
> +        if ( likely(!is_cpufreq_controller(d)) || wrmsr_safe(msr, val) == 0 )
> +            break;
> +        goto gp_fault;
> +
>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>          if ( !is_hvm_domain(d) || v != curr )
>              goto gp_fault;
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> index 7cc16d6eda..dbceed8a05 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -849,12 +849,6 @@ static inline uint64_t guest_misc_enable(uint64_t val)
>      return val;
>  }
>  
> -static inline bool is_cpufreq_controller(const struct domain *d)
> -{
> -    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
> -            is_hardware_domain(d));
> -}
> -
>  static uint64_t guest_efer(const struct domain *d)
>  {
>      uint64_t val;
> @@ -1121,14 +1115,6 @@ static int write_msr(unsigned int reg, uint64_t val,
>              return X86EMUL_OKAY;
>          break;
>  
> -    case MSR_IA32_PERF_CTL:
> -        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
> -            break;
> -        if ( likely(!is_cpufreq_controller(currd)) ||
> -             wrmsr_safe(reg, val) == 0 )
> -            return X86EMUL_OKAY;
> -        break;
> -
>      case MSR_IA32_THERM_CONTROL:
>      case MSR_IA32_ENERGY_PERF_BIAS:
>          if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index d8ed83f869..b4d3e53310 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>  } cpufreq_controller;
>  
> +static always_inline bool is_cpufreq_controller(const struct domain *d)
> +{
> +    /*
> +     * A PV dom0 can be nominated as the cpufreq controller, instead of using
> +     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
> +     * MSRs.
> +     *
> +     * This interface only works when dom0 is identity pinned and has the same
> +     * number of vCPUs as pCPUs on the system.
> +     *
> +     * It would be far better to paravirtualise the interface.
> +     */

Would it be useful to add an assert here that the domain cpuid vendor
and the BSP cpuid vendor match?

Is it even possible to fake a different cpuid vendor for dom0?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 18:48:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 18:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22857.49382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcCCu-0005Zw-PJ; Mon, 09 Nov 2020 18:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22857.49382; Mon, 09 Nov 2020 18:48:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcCCu-0005Zp-M2; Mon, 09 Nov 2020 18:48:00 +0000
Received: by outflank-mailman (input) for mailman id 22857;
 Mon, 09 Nov 2020 18:47:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+c5I=EP=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kcCCs-0005Zk-Jp
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 18:47:58 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.22])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fef92ffc-035f-4ebb-a4bb-96c49e6fd660;
 Mon, 09 Nov 2020 18:47:56 +0000 (UTC)
Received: from aepfle.de by smtp.strato.de (RZmta 47.3.3 DYNA|AUTH)
 with ESMTPSA id j03b7dwA9IltCiO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 9 Nov 2020 19:47:55 +0100 (CET)
X-Inumbo-ID: fef92ffc-035f-4ebb-a4bb-96c49e6fd660
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1604947676;
	s=strato-dkim-0002; d=aepfle.de;
	h=In-Reply-To:References:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=Yb6nN3Nh6cDmNZ8Fkmuf6nIsjsFWSPeytv34C6bY18E=;
	b=LOqBJ/2iZCweu9+SYbb6lXXeo/ymrw9TgvKZa2itnsf3YAO1SgGhWH/8Z/RU5gRtyd
	ffhTfw+ksnaa7dUxNV3anDE3nC1btC4NEV90tqZpQCDFIK/0vbuaCyyfWHfI3ibQxL7E
	Sv+p2F6m09qMdkYHWe3W84n6XSGAn3yGR00WBgbmVMgN6plTuJeWdvU7gJaBx0Sn8A9R
	IlhcsHYq3rMT/o2HBgRVrWspTgX/20b7DHURS3z4y11RP/aVc+FomTMBGCWVBDLpFbjh
	cVTo1pTylllAis+N2QKDZbC3mryKpSFJs9bOKo9OaeLXuKP6fyYMACfKjIrCeM04Rj1S
	YErA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3GhJjw=="
X-RZG-CLASS-ID: mo00
Date: Mon, 9 Nov 2020 19:47:50 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Tools backport request for Xen 4.14
Message-ID: <20201109184750.GA15148@aepfle.de>
References: <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>
 <20201009184930.GA65219@mattapan.m5p.com>
 <24489.33893.98470.334969@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="sm4nu43k4a2Rpi4c"
Content-Disposition: inline
In-Reply-To: <24489.33893.98470.334969@mariner.uk.xensource.com>


--sm4nu43k4a2Rpi4c
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Mon, Nov 09, Ian Jackson wrote:

> I have now backported all of the GCC10 fixes to all the supported Xen
> branches (ie back to 4.12).

Please also consider these commits:

55ab292c42 stubdom/vtpm: include stdio.h for declaration of printf
c3999835df libxl: Offer API versions 0x040700 and 0x040800
f1d376a825 stubdom/vtpm: add extern to function declarations

Olaf

--sm4nu43k4a2Rpi4c
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+pjtEACgkQ86SN7mm1
DoD1BA/9Gg9b0zo6Qi5EPG7FDNvel6pTPwzUxNgrGXPWntBLyD+dUar7R8Eo48FP
M6QKT/IeqGoyV1CHhS0R2I+npCzYYzxb79/g1Rrt8Hc5BflyLb7ucpHJjwQj00KA
piIwpAeT1cgJRkZsVEewQT9E5ehQOO3xpYX1SV6zXMswFJlZdU8aAwzq3gBREciW
YX7V5M0sgtzOsp3cXl2NCEHa6BN7xnah90DKTDW2qJ7IBFeD2WnkXWbD45ivxJH2
6xfalCFf6tyzNlwQKG6ij3R5RTwAlxHUlbCz2DMqK4e8lY+FP3y0d/ZhZnAXV4RA
6y3UFhVg7MYK+4C7s7FWqWQgLOeNyUVJ1SrHJScfIK0bctrVT5k3fo+lo26V6U9n
tP8Oi/pP1kFJ8pj05g1hul3xUjqO4J23yLrvBF4Mc2YaRotlM74zhbP2fLw1R7au
kFO7Ge1JlUJV6OwVU6D4nPOFERgaWG5u2sdkdI9xe3TVxeKPST2PuqMkrjRBWA3j
tuFyqZ0VqSgB1ZEchqH9O55gR4AhY4TQz9T29z0vp7I7QdMAo93EJQJbC1I5cRIJ
7rSX6bOCGCTDqCV9UORqa1e9ZmQuwt1hwzcsroFgn/3kRG97aTaCQWOSpkevrQeu
q54bIkoMawShunhRhH8bAMJWDQC6uQ8eICbTYcIZTjaj3Ul7a/Q=
=3BMs
-----END PGP SIGNATURE-----

--sm4nu43k4a2Rpi4c--


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 21:58:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 21:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22908.49427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcFAY-0004yj-4T; Mon, 09 Nov 2020 21:57:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22908.49427; Mon, 09 Nov 2020 21:57:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcFAY-0004yc-14; Mon, 09 Nov 2020 21:57:46 +0000
Received: by outflank-mailman (input) for mailman id 22908;
 Mon, 09 Nov 2020 21:57:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcFAX-0004y4-2m
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 21:57:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2dcf191a-e2eb-4194-a170-eb0c15ace95c;
 Mon, 09 Nov 2020 21:57:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcFAM-0008W3-Pg; Mon, 09 Nov 2020 21:57:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcFAM-0000O6-GS; Mon, 09 Nov 2020 21:57:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcFAM-0005Nl-Fv; Mon, 09 Nov 2020 21:57:34 +0000
X-Inumbo-ID: 2dcf191a-e2eb-4194-a170-eb0c15ace95c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZYzp5xCMUGhQUAezx83G3h5YNymwDbZmdf/g6Q8PfVk=; b=yXWsYC+w7mt2YqFIzyDek5dOvQ
	w+CPphHEr1SCMM9GICCSyd74DMBs7f7h14gQoUoAADLCLtAdq2LG2mKr2zayt15f8aZrwXDbsGA0X
	7jbc430xaw/cOw2LjPEyK9F2QcjV/groTHKLRTJhu8+3YSIZbuQR6zI6VMqds6uX5f6s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156585-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156585: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=193f51ddcf1d87d725f1dfd51b8a95351c910e8f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 21:57:34 +0000

flight 156585 qemu-mainline real [real]
flight 156602 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156585/
http://logs.test-lab.xenproject.org/osstest/logs/156602/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                193f51ddcf1d87d725f1dfd51b8a95351c910e8f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   81 days
Failing since        152659  2020-08-21 14:07:39 Z   80 days  176 attempts
Testing same since   156585  2020-11-09 11:50:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 62007 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 09 21:59:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 21:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22915.49439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcFCO-00058j-Kg; Mon, 09 Nov 2020 21:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22915.49439; Mon, 09 Nov 2020 21:59:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcFCO-00058c-Hb; Mon, 09 Nov 2020 21:59:40 +0000
Received: by outflank-mailman (input) for mailman id 22915;
 Mon, 09 Nov 2020 21:59:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZjvN=EP=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kcFCN-00058S-C0
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 21:59:39 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab8d546c-15ec-4a61-b6aa-e0724f2b22e4;
 Mon, 09 Nov 2020 21:59:38 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A9LE6TI082669;
 Mon, 9 Nov 2020 21:59:27 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2130.oracle.com with ESMTP id 34nh3arqj8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 09 Nov 2020 21:59:27 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0A9LFHn1119274;
 Mon, 9 Nov 2020 21:59:26 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3020.oracle.com with ESMTP id 34p5br4kcu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 09 Nov 2020 21:59:26 +0000
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0A9LxJ0U027159;
 Mon, 9 Nov 2020 21:59:20 GMT
Received: from [10.74.103.185] (/10.74.103.185)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 09 Nov 2020 13:59:19 -0800
X-Inumbo-ID: ab8d546c-15ec-4a61-b6aa-e0724f2b22e4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=iudd/pWhUkjLp/Fl5sIWyMMOmTzZfRPFExPcv41Jvgw=;
 b=qWXC8FRu7L7SesUBbDQz8EvTZ4cCbyhLuaovb9eAOJYFy6TlNOz/OhUfq/72lHwgEqCt
 Jca2rK2h/8E2vTvrV14slrUZk+Vh5uA7HbIM7se26y9UxlDNXqGYUhFQRPBPZgif0lEK
 hegUPs/8y8x/1jndLVLicIudA+X5Di7PSJdHxOwz9tq9Un1OokWdvPJC9QmBotkYl3bi
 FgXWmSsZc4NBefpT8aWK6SOkb6UwoGPhdzzufvD8A5F9ZHFwRV47NBlWOcjZfc2TLWug
 XKSTn+Uk2IVAS/wTZkSJ4CziX9E3K9n2F5rZ+JvB4i7ByrP+xipFvW7iZA6dYraREs67 Lw== 
Subject: Re: [PATCH v2] x86/xen: don't unbind uninitialized lock_kicker_irq
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
        Brian Masney <bmasney@redhat.com>, sstabellini@kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
        hpa@zytor.com, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org, dustymabe@redhat.com
References: <20201107011119.631442-1-bmasney@redhat.com>
 <5950df5c-79d6-b2bc-4f2b-35624a3c0d1e@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <87d1122a-ca5a-786b-5b25-4caaaeaf386a@oracle.com>
Date: Mon, 9 Nov 2020 16:59:14 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <5950df5c-79d6-b2bc-4f2b-35624a3c0d1e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9800 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 malwarescore=0
 phishscore=0 spamscore=0 mlxlogscore=999 bulkscore=0 suspectscore=0
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011090139
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9800 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 priorityscore=1501
 clxscore=1015 malwarescore=0 mlxscore=0 spamscore=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 phishscore=0 adultscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011090139


On 11/9/20 12:34 AM, Jürgen Groß wrote:
> On 07.11.20 02:11, Brian Masney wrote:
>> When booting a hyperthreaded system with the kernel parameter
>> 'mitigations=auto,nosmt', the following warning occurs:
>>
>>      WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
>>      ...
>>      Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
>>      ...
>>      Call Trace:
>>       xen_uninit_lock_cpu+0x28/0x62
>>       xen_hvm_cpu_die+0x21/0x30
>>       takedown_cpu+0x9c/0xe0
>>       ? trace_suspend_resume+0x60/0x60
>>       cpuhp_invoke_callback+0x9a/0x530
>>       _cpu_up+0x11a/0x130
>>       cpu_up+0x7e/0xc0
>>       bringup_nonboot_cpus+0x48/0x50
>>       smp_init+0x26/0x79
>>       kernel_init_freeable+0xea/0x229
>>       ? rest_init+0xaa/0xaa
>>       kernel_init+0xa/0x106
>>       ret_from_fork+0x35/0x40
>>
>> With the nosmt mitigations, the secondary CPU threads are not activated
>> and only the primary thread on each core is used. In this situation,
>> xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(),
>> are not called, so lock_kicker_irq is not initialized for the secondary
>> CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
>> irq is not set, to avoid the warning above for each secondary CPU.
>>
>> Signed-off-by: Brian Masney <bmasney@redhat.com>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>



Applied to for-linus-5.10b.


-boris



From xen-devel-bounces@lists.xenproject.org Mon Nov 09 22:15:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 09 Nov 2020 22:15:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22931.49453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcFRs-0006uO-2A; Mon, 09 Nov 2020 22:15:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22931.49453; Mon, 09 Nov 2020 22:15:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcFRr-0006uH-VX; Mon, 09 Nov 2020 22:15:39 +0000
Received: by outflank-mailman (input) for mailman id 22931;
 Mon, 09 Nov 2020 22:15:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcFRp-0006td-VS
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 22:15:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cf0d240-05ae-4469-82a2-183f59073b7e;
 Mon, 09 Nov 2020 22:15:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcFRl-0000U7-47; Mon, 09 Nov 2020 22:15:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcFRk-0001ak-U3; Mon, 09 Nov 2020 22:15:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcFRk-00085M-TY; Mon, 09 Nov 2020 22:15:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=47vU=EP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcFRp-0006td-VS
	for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 22:15:37 +0000
X-Inumbo-ID: 9cf0d240-05ae-4469-82a2-183f59073b7e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9cf0d240-05ae-4469-82a2-183f59073b7e;
	Mon, 09 Nov 2020 22:15:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=fMmWneM6nFvqAHlp41GqrknYVPZT8aF8Mo8bxYb1h1A=; b=19/o28MRhFbUsmp6bp72+M7syF
	oBUaXfIvi9fU10bgaQDtEXt7YsCTgpmpy0OiKm53uYieX/AHVWN41HQ7ZZSXs6kz1CaViqNFoMxfm
	T0nu4uu92c4aRP6ypWId95/Eai/3UX4uAElaO0rqAc9Zb0+PXNsR0Q7lEU3tijxWDaqA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcFRl-0000U7-47; Mon, 09 Nov 2020 22:15:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcFRk-0001ak-U3; Mon, 09 Nov 2020 22:15:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcFRk-00085M-TY; Mon, 09 Nov 2020 22:15:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-amd64-xl-xsm
Message-Id: <E1kcFRk-00085M-TY@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 09 Nov 2020 22:15:32 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-xsm
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156601/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-xsm.guest-start --summary-out=tmp/156601.bisection-summary --basis-template=156443 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-amd64-xl-xsm guest-start
Searching for failure / basis pass:
 156577 fail [host=godello0] / 156443 [host=pinot1] 156401 [host=huxelrebe1] 156389 [host=fiano0] 156373 [host=albana1] 156354 [host=godello1] 156339 ok.
Failure / basis pass flights: 156577 / 156339
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#677cbe1324c29294bb1d1b8454b3f214725e40fd-677cbe1324c29294bb1d1b8454b3f214725e40fd git://xenbits.xen.org/xen.git#7056f2f89f03f2f804ac7e776c7b2b000cd716cd-0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Loaded 5001 nodes in revision graph
Searching for test results:
 156331 [host=elbling0]
 156339 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd
 156354 [host=godello1]
 156373 [host=albana1]
 156389 [host=fiano0]
 156401 [host=huxelrebe1]
 156443 [host=pinot1]
 156524 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156538 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156556 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156577 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156587 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd
 156589 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156590 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2b8314a3c354d04545700c80ff5a5f86799b79c7
 156591 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
 156596 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156597 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156598 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156599 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156600 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156601 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
Searching for interesting versions
 Result found: flight 156339 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c, results HASH(0x5654388d1fe0) HASH(0x5654388bee38) HASH(0x5654388e4f78)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1, results HASH(0x5654388e0ae8)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2b8314a3c354d04545700c80ff5a5f86799b79c7, results HASH(0x5654388c1440)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd, results HASH(0x5654388cedb0) HASH(0x5654388cbb20)
 Result found: flight 156524 (fail), for basis failure (at ancestor ~523)
 Repro found: flight 156587 (pass), for basis pass
 Repro found: flight 156589 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
No revisions left to test, checking graph state.
 Result found: flight 156596 (pass), for last pass
 Result found: flight 156597 (fail), for first failure
 Repro found: flight 156598 (pass), for last pass
 Repro found: flight 156599 (fail), for first failure
 Repro found: flight 156600 (pass), for last pass
 Repro found: flight 156601 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156601/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
156601: tolerable ALL FAIL

flight 156601 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156601/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      14 guest-start             fail baseline untested


jobs:
 test-amd64-amd64-xl-xsm                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 00:51:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 00:51:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22962.49478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcHsf-0004R6-Pf; Tue, 10 Nov 2020 00:51:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22962.49478; Tue, 10 Nov 2020 00:51:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcHsf-0004Qz-MP; Tue, 10 Nov 2020 00:51:29 +0000
Received: by outflank-mailman (input) for mailman id 22962;
 Tue, 10 Nov 2020 00:51:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcHse-0004QK-IT
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 00:51:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68454e2c-2008-459b-9852-0d7f1f6890f0;
 Tue, 10 Nov 2020 00:51:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcHsW-0004En-0G; Tue, 10 Nov 2020 00:51:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcHsV-0000vD-Nq; Tue, 10 Nov 2020 00:51:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcHsV-0007Vz-NB; Tue, 10 Nov 2020 00:51:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcHse-0004QK-IT
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 00:51:28 +0000
X-Inumbo-ID: 68454e2c-2008-459b-9852-0d7f1f6890f0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 68454e2c-2008-459b-9852-0d7f1f6890f0;
	Tue, 10 Nov 2020 00:51:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lM2APx8yKlK4KJULEGDoBYU9M68lao+wKPnWEIImphA=; b=HfZwaZAXv6tfe3uGlukPrGGqVd
	PLfHEj61YcqBJ6inti1DsLLvdnfhr52Zx4DM0PVIpmSqHdyX2f9C0ari7eLe9MEWPrVcfezu3+M0L
	gGF/8EkrjFxFC6EdtdkTZVZIUYWuyQEzv6663NwfNbpjL9CWWq5mP6hCEnBTJIiOzaaI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcHsW-0004En-0G; Tue, 10 Nov 2020 00:51:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcHsV-0000vD-Nq; Tue, 10 Nov 2020 00:51:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcHsV-0007Vz-NB; Tue, 10 Nov 2020 00:51:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156588-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156588: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 00:51:19 +0000

flight 156588 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156588/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    4 days
Failing since        156524  2020-11-06 14:22:28 Z    3 days    5 attempts
Testing same since   156538  2020-11-07 06:32:07 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 29 15:03:32 2020 -0400

    libxl: Add suppress-vmdesc to QEMU machine
    
    The device model state saved by QMP xen-save-devices-state doesn't
    include the vmdesc json.  When restoring an HVM, xen-load-devices-state
    always triggers "Expected vmdescription section, but got 0".  This is
    not a problem when restore comes from a file.  However, when QEMU runs
    in a Linux stubdom and comes over a console, EOF is not received.  This
    delays the restore, though it does complete.
    
    Setting suppress-vmdesc skips looking for the vmdesc during restore and
    avoids the wait.
    
    QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
    sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
    used.
    
    QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
    submission" added suppress-vmdesc in QEMU 2.3.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>
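
    For illustration only (the helper name below is hypothetical, not the
    actual libxl code): the net effect of the patch is to append
    suppress-vmdesc=on to the -machine value QEMU is started with, e.g.
    -machine xenfv,suppress-vmdesc=on.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical sketch, not the actual libxl implementation: compose
     * the QEMU -machine value, appending suppress-vmdesc=on so that
     * xen-load-devices-state does not wait for a vmdesc section that
     * xen-save-devices-state never emits. */
    static void build_machine_arg(char *buf, size_t len, int xenfv)
    {
        snprintf(buf, len, "%s,suppress-vmdesc=on", xenfv ? "xenfv" : "pc");
    }

    int main(void)
    {
        char arg[64];

        build_machine_arg(arg, sizeof(arg), 1);
        assert(strcmp(arg, "xenfv,suppress-vmdesc=on") == 0);

        build_machine_arg(arg, sizeof(arg), 0);   /* xen_platform_pci=0 case */
        assert(strcmp(arg, "pc,suppress-vmdesc=on") == 0);

        puts("machine arg ok");
        return 0;
    }
    ```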

commit cd800ce442eeba5bc0857ade70a075367c01c350
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Nov 6 16:12:56 2020 +0000

    libxl: set vuart_gfn in libxl__build_hvm
    
    Setting vuart_gfn was missed when switching ARM guests to the PVH build.
    Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
    dom->vuart_gfn.
    
    Without this change, xl console cannot connect to the vuart console (-t
    vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 4196b1523aebe0ed929accba318d5e833d7ff6b3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 15:05:04 2020 +0100

    tools/libs/light: correct bitmap operations
    
    Libxl bitmap operations for single bits (test, set, reset) take the bit
    number as a signed integer without checking that the value is
    non-negative.
    
    Correct that by adding the appropriate tests.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
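
    A minimal standalone model of the fix (illustrative only, not the
    libxl code; the names here are made up): reject negative or
    out-of-range bit numbers before indexing the map.

    ```c
    #include <assert.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy bitmap with the bounds check the patch adds: single-bit
     * helpers take a signed bit number, so a negative value must be
     * rejected before it is used as an array index. */
    struct bitmap { uint8_t *map; int size; /* bytes */ };

    static int bit_valid(const struct bitmap *b, int bit)
    {
        return bit >= 0 && bit < b->size * CHAR_BIT;
    }

    static int bitmap_test(const struct bitmap *b, int bit)
    {
        if (!bit_valid(b, bit))
            return 0;                 /* out-of-range reads as clear */
        return (b->map[bit / CHAR_BIT] >> (bit % CHAR_BIT)) & 1;
    }

    static void bitmap_set(struct bitmap *b, int bit)
    {
        if (!bit_valid(b, bit))
            return;                   /* ignore bad indices */
        b->map[bit / CHAR_BIT] |= 1u << (bit % CHAR_BIT);
    }

    int main(void)
    {
        uint8_t store[2] = { 0 };
        struct bitmap b = { store, sizeof(store) };

        bitmap_set(&b, -1);           /* previously walked off the map */
        bitmap_set(&b, 3);
        assert(bitmap_test(&b, 3));
        assert(!bitmap_test(&b, -1));
        assert(!bitmap_test(&b, 99));
        puts("bitmap ok");
        return 0;
    }
    ```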

commit 8aac8e0ef43a452d0b565d63e4943c275badba3f
Author: Olaf Hering <olaf@aepfle.de>
Date:   Fri Nov 6 14:05:17 2020 +0100

    docs/xl: fix cpupool-cpu-remove
    
    The cpu-pool must be specified.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Wei Liu <wl@xen.org>

commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
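
    A userspace model of the check_lock() idea (not the Xen code): the
    first acquisition records whether it happened with interrupts
    disabled, and every later acquisition must match, flagging mixed
    irq-safe/irq-unsafe use of the same lock.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Simplified model: a lock remembers the irq context of its first
     * use; any later use in the other context is inconsistent. */
    struct checked_lock { int initialized; int irq_safe; };

    static int check_lock(struct checked_lock *l, int irqs_disabled)
    {
        if (!l->initialized) {
            l->initialized = 1;
            l->irq_safe = irqs_disabled;
            return 1;
        }
        return l->irq_safe == irqs_disabled;  /* 0 = inconsistent use */
    }

    int main(void)
    {
        struct checked_lock rw = { 0, 0 };

        assert(check_lock(&rw, 1));   /* first use: irqs off */
        assert(check_lock(&rw, 1));   /* consistent */
        assert(!check_lock(&rw, 0));  /* mixed usage would be flagged */
        puts("check_lock ok");
        return 0;
    }
    ```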

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in the try variants regarding
    preemption: rwlocks are switching preemption off before testing the
    lock, while spinlocks do so only after the first check.
    
    Modify _spin_trylock() to disable preemption before testing whether the
    lock is held, making it preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
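
    The harmonized ordering can be modelled in userspace like this (a
    sketch, not the Xen implementation; the preempt counter stands in
    for preempt_disable()/preempt_enable()):

    ```c
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Disable "preemption" *before* the lock test, matching what the
     * rwlock try variants already do; on failure re-enable it. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static int preempt_count;

    static int spin_trylock(void)
    {
        ++preempt_count;                    /* preempt_disable() first */
        if (atomic_flag_test_and_set(&lock)) {
            --preempt_count;                /* failed: re-enable */
            return 0;
        }
        return 1;                           /* held, preemption off */
    }

    static void spin_unlock(void)
    {
        atomic_flag_clear(&lock);
        --preempt_count;
    }

    int main(void)
    {
        assert(spin_trylock() && preempt_count == 1);
        assert(!spin_trylock() && preempt_count == 1);  /* contended */
        spin_unlock();
        assert(preempt_count == 0);
        puts("trylock ok");
        return 0;
    }
    ```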

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen when one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
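
    The behavioural change, sketched in plain C (not the actual Xen ARM
    trap handler; the function and message are illustrative):

    ```c
    #include <stdio.h>

    /* An unknown guest debug trap is now logged and ignored rather
     * than bringing the whole hypervisor down. */
    static void handle_debug_trap(unsigned int code, int known)
    {
        if (!known) {
            /* was: panic("unhandled debug trap %u", code); */
            printf("d0: unhandled debug trap %u, ignoring\n", code);
            return;
        }
        /* ... handle the known traps ... */
    }

    int main(void)
    {
        handle_debug_trap(0xf003, 0);
        return 0;
    }
    ```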

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make to have run in libacpi/
    such that the file(s) itself/themselves were generated before
    compilation gets attempted. The same, however, is also necessary for
    the generated headers, which must exist before any source file
    including them is compiled.
    
    The dependency specified in libacpi's Makefile, otoh, is entirely
    pointless nowadays - no compilation happens there anymore (except for
    tools involved in building the generated files). Together with it, the
    rule generating acpi.a also can go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 01:14:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 01:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22971.49495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcIEU-0002RZ-Pp; Tue, 10 Nov 2020 01:14:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22971.49495; Tue, 10 Nov 2020 01:14:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcIEU-0002RS-My; Tue, 10 Nov 2020 01:14:02 +0000
Received: by outflank-mailman (input) for mailman id 22971;
 Tue, 10 Nov 2020 01:14:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ywAc=EQ=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kcIET-0002QG-26
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 01:14:01 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02ad943a-5243-4351-a500-5dfd80e08e76;
 Tue, 10 Nov 2020 01:13:58 +0000 (UTC)
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1604970836;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UkwosGD4CNwpr5O4mn7FToWEjHCGounmVIqa5vjwzhQ=;
	b=DRjXCHfaWTV+37wSriT1BjBgrVPE6jdQ5QSgYOIapfxJkFIwaFOAyxB9axkmnF7jwFoYyj
	IwVALzvZ0PuA7RxNph9leDCS6B9rTa4rL3nKcIBy/Am24PTMh01e/A2SQWp6LTkOQfBaj6
	TPNYDFf29DCqNiv3sgD8nE6enYeIlsYkCGyKDzwB1FGN01/Y3vKoJQtpchwkYWrvNb9W70
	4PCnBA8S0oYM/XwkwwZotKzR9vmT9oC6FdGvh5lMI1bLUaS0qdB+ZPTGprUeJSBjnvH0Xv
	xyRk7zOTS6b0OUuDBNt+X1xYcMB1KSB7gGowiPoSA5hPR5omsqQ5g20Kt4+uKg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1604970836;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UkwosGD4CNwpr5O4mn7FToWEjHCGounmVIqa5vjwzhQ=;
	b=acv/dHWHYYmtT1/z0brDNSAmBtlvEk8c07ItXmS/RUDap25yEm8biO9jTMWItRsTlNRTeF
	x/KI3sqvslL8SkDg==
To: ira.weiny@intel.com, Andrew Morton <akpm@linux-foundation.org>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>, Randy Dunlap <rdunlap@infradead.org>,
 x86@kernel.org, Dave Hansen <dave.hansen@linux.intel.com>, Dan Williams
 <dan.j.williams@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
 netdev@vger.kernel.org, bpf@vger.kernel.org, kexec@lists.infradead.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 devel@driverdev.osuosl.org, linux-efi@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org,
 target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
 ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org,
 linux-aio@kvack.org, io-uring@vger.kernel.org,
 linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org,
 linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
 cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
 linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
 linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-cachefs@redhat.com, samba-technical@lists.samba.org,
 intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
In-Reply-To: <20201009195033.3208459-6-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com> <20201009195033.3208459-6-ira.weiny@intel.com>
Date: Tue, 10 Nov 2020 02:13:56 +0100
Message-ID: <87h7pyhv3f.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

Ira,

On Fri, Oct 09 2020 at 12:49, ira weiny wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> To correctly support the semantics of kmap() with Kernel protection keys
> (PKS), kmap() may be required to set the protections on multiple
> processors (globally).  Enabling PKS globally can be very expensive
> depending on the requested operation.  Furthermore, enabling a domain
> globally reduces the protection afforded by PKS.
>
> Most kmap() (Approx. 209 of 229) callers use the map within a single thread and
> have no need for the protection domain to be enabled globally.  However, the
> remaining callers do not follow this pattern and, as best I can tell, expect
> the mapping to be 'global' and available to any thread who may access the
> mapping.[1]
>
> We don't anticipate global mappings to pmem, however in general there is a
> danger in changing the semantics of kmap().  Effectively, this would cause an
> unresolved page fault with little to no information about why the failure
> occurred.
>
> To resolve this a number of options were considered.
>
> 1) Attempt to change all the thread local kmap() calls to kmap_atomic()[2]
> 2) Introduce a flags parameter to kmap() to indicate if the mapping should be
>    global or not
> 3) Change ~20 call sites to 'kmap_global()' to indicate that they require a
>    global enablement of the pages.
> 4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping is to
>    be used within that thread of execution only
>
> Option 1 is simply not feasible.  Option 2 would require all of the call sites
> of kmap() to change.  Option 3 seems like a good minimal change but there is a
> danger that new code may miss the semantic change of kmap() and not get the
> behavior the developer intended.  Therefore, #4 was chosen.

There is Option #5:

Convert the thread local kmap() invocations to the proposed kmap_local()
interface which is coming along [1].

That solves a couple of issues:

 1) It relieves the current kmap_atomic() usage sites from the implicit
    pagefault/preempt disable semantics which apply even when
    CONFIG_HIGHMEM is disabled. kmap_local() still can be invoked from
    atomic context.

 2) Due to #1 it allows replacing the conditional usage of kmap() and
    kmap_atomic() for purely thread local mappings.

 3) It puts the burden on the HIGHMEM inflicted systems

 4) It is actually more efficient for most of the pure thread local use
    cases on HIGHMEM inflicted systems because it avoids the overhead of
    the global lock and the potential kmap slot exhaustion. A potential
    preemption will be more expensive, but that's not really the case we
    want to optimize for.

 5) It solves the RT issue vs. kmap_atomic()

So instead of creating yet another variety of kmap() which is just
scratching the particular PKRS itch, can we please consolidate all of
that on the wider reaching kmap_local() approach?

Thanks,

        tglx
     
[1] https://lore.kernel.org/lkml/20201103092712.714480842@linutronix.de/



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 05:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 05:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23001.49520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcLlD-0005Gl-69; Tue, 10 Nov 2020 05:00:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23001.49520; Tue, 10 Nov 2020 05:00:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcLlD-0005Fq-0U; Tue, 10 Nov 2020 05:00:03 +0000
Received: by outflank-mailman (input) for mailman id 23001;
 Tue, 10 Nov 2020 05:00:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UHDj=EQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kcLlB-00050T-Px
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 05:00:01 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68bccc73-eb04-47a8-aab5-b48abcf54fb1;
 Tue, 10 Nov 2020 04:59:58 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Nov 2020 20:59:55 -0800
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Nov 2020 20:59:54 -0800
IronPort-SDR: k336O8cOJji5+GTaMOdyb4xTebDXuySbCXlqcK9MtBhrXtbk9XOxGTnCmVgcpK8b5s/HdwOdzz
 2W2AmJLU1mMA==
X-IronPort-AV: E=McAfee;i="6000,8403,9800"; a="234085725"
X-IronPort-AV: E=Sophos;i="5.77,465,1596524400"; 
   d="scan'208";a="234085725"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Nov 2020 20:59:55 -0800
IronPort-SDR: hhtGbVHiYEduO62f/jUORNlQE29+PA61IlKuvPXTtZvvOlHxHT75TW+y4i0J9B1O2lrgZ7K2Sp
 gskiAxl5TCew==
X-IronPort-AV: E=Sophos;i="5.77,465,1596524400"; 
   d="scan'208";a="531063331"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Nov 2020 20:59:54 -0800
Date: Mon, 9 Nov 2020 20:59:54 -0800
From: Ira Weiny <ira.weiny@intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Randy Dunlap <rdunlap@infradead.org>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
Message-ID: <20201110045954.GL3976735@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-6-ira.weiny@intel.com>
 <87h7pyhv3f.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87h7pyhv3f.fsf@nanos.tec.linutronix.de>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Tue, Nov 10, 2020 at 02:13:56AM +0100, Thomas Gleixner wrote:
> Ira,
> 
> On Fri, Oct 09 2020 at 12:49, ira weiny wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > To correctly support the semantics of kmap() with Kernel protection keys
> > (PKS), kmap() may be required to set the protections on multiple
> > processors (globally).  Enabling PKS globally can be very expensive
> > depending on the requested operation.  Furthermore, enabling a domain
> > globally reduces the protection afforded by PKS.
> >
> > Most kmap() callers (approximately 209 of 229) use the map within a single
> > thread and have no need for the protection domain to be enabled globally.
> > However, the remaining callers do not follow this pattern and, as best I can
> > tell, expect the mapping to be 'global' and available to any thread that may
> > access the mapping.[1]
> >
> > We don't anticipate global mappings to pmem; however, in general there is a
> > danger in changing the semantics of kmap().  Effectively, this would cause an
> > unresolved page fault with little to no information about why the failure
> > occurred.
> >
> > To resolve this a number of options were considered.
> >
> > 1) Attempt to change all the thread local kmap() calls to kmap_atomic()[2]
> > 2) Introduce a flags parameter to kmap() to indicate if the mapping should be
> >    global or not
> > 3) Change ~20 call sites to 'kmap_global()' to indicate that they require a
> >    global enablement of the pages.
> > 4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping is to
> >    be used within that thread of execution only
> >
> > Option 1 is simply not feasible.  Option 2 would require all of the call sites
> > of kmap() to change.  Option 3 seems like a good minimal change but there is a
> > danger that new code may miss the semantic change of kmap() and not get the
> > behavior the developer intended.  Therefore, #4 was chosen.
> 
> There is Option #5:

There is now, yes.  :-D

> 
> Convert the thread local kmap() invocations to the proposed kmap_local()
> interface which is coming along [1].

I've been trying to follow that thread.

> 
> That solves a couple of issues:
> 
>  1) It relieves the current kmap_atomic() usage sites from the implicit
>     pagefault/preempt disable semantics which apply even when
>     CONFIG_HIGHMEM is disabled. kmap_local() still can be invoked from
>     atomic context.
> 
>  2) Due to #1 it allows replacing the conditional usage of kmap() and
>     kmap_atomic() for purely thread local mappings.
> 
>  3) It puts the burden on the HIGHMEM inflicted systems
> 
>  4) It is actually more efficient for most of the pure thread local use
>     cases on HIGHMEM inflicted systems because it avoids the overhead of
>     the global lock and the potential kmap slot exhaustion. A potential
>     preemption will be more expensive, but that's not really the case we
>     want to optimize for.
> 
>  5) It solves the RT issue vs. kmap_atomic()
> 
> So instead of creating yet another variety of kmap() which is just
> scratching the particular PKRS itch, can we please consolidate all of
> that on the wider reaching kmap_local() approach?

Yes, I agree.  We absolutely don't want more kmap*() calls, and I was hoping to
dovetail into your kmap_local() work.[2]

I've pivoted away from this work a bit to clean up all the
kmap()/memcpy*()/kunmaps() as discussed elsewhere in the thread first.[3]  I
was hoping your work would land and then I could s/kmap_thread()/kmap_local()/
on all of these patches.

Also, we can convert the new memcpy_*_page() calls to kmap_local() as well.
[For now my patch just uses kmap_atomic().]
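To make the shape of those memcpy_*_page() conversions concrete, here is a
minimal userspace sketch of the map/copy/unmap pattern they wrap. The `struct
page`, `map_page()`, and `unmap_page()` names are illustrative stand-ins (in
the kernel they would be a real page and kmap_local_page()/kunmap_local() or
kmap_thread()); this is a model of the control flow, not the kernel code:

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Illustrative stand-in for a kernel page. */
struct page { unsigned char data[PAGE_SIZE]; };

/* Stand-ins for kmap_local_page()/kunmap_local(): the mapping (and any
 * PKS enable that backs it) is valid only between these two calls, and
 * only in the calling thread. */
static void *map_page(struct page *p)  { return p->data; }
static void unmap_page(void *addr)     { (void)addr; }

/* The helper the thread-local mapping is scoped inside: map, copy, unmap.
 * Call sites then never see a raw mapping at all. */
static void memcpy_to_page_model(struct page *p, size_t off,
                                 const void *src, size_t len)
{
    void *vaddr = map_page(p);
    memcpy((unsigned char *)vaddr + off, src, len);
    unmap_page(vaddr);
}
```

Because the mapping never escapes the helper, swapping the backing primitive
(kmap_atomic() today, kmap_local() later) touches one function instead of
every call site.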

I've not looked at all of the patches in your latest version.  Have you
included converting any of the kmap() call sites?  I thought you were more
focused on converting the kmap_atomic() calls to kmap_local()?

Ira

> 
> Thanks,
> 
>         tglx
>      
> [1] https://lore.kernel.org/lkml/20201103092712.714480842@linutronix.de/

[2] https://lore.kernel.org/lkml/20201012195354.GC2046448@iweiny-DESK2.sc.intel.com/
[3] https://lore.kernel.org/lkml/20201009213434.GA839@sol.localdomain/
    https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 05:14:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 05:14:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22831.49532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcLz3-0006il-IN; Tue, 10 Nov 2020 05:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22831.49532; Tue, 10 Nov 2020 05:14:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcLz3-0006ie-D3; Tue, 10 Nov 2020 05:14:21 +0000
Received: by outflank-mailman (input) for mailman id 22831;
 Mon, 09 Nov 2020 18:07:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/yET=EP=toxicpanda.com=josef@srs-us1.protection.inumbo.net>)
 id 1kcBa9-0001u5-HZ
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 18:07:57 +0000
Received: from mail-qt1-x831.google.com (unknown [2607:f8b0:4864:20::831])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d17f325-28b7-460b-a0fb-c09e3ea6acab;
 Mon, 09 Nov 2020 18:07:56 +0000 (UTC)
Received: by mail-qt1-x831.google.com with SMTP id t5so6649800qtp.2
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 10:07:56 -0800 (PST)
Received: from [192.168.1.45] (cpe-174-109-172-136.nc.res.rr.com.
 [174.109.172.136])
 by smtp.gmail.com with ESMTPSA id z2sm6588768qkl.22.2020.11.09.10.07.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 09 Nov 2020 10:07:55 -0800 (PST)
Subject: Re: cleanup updating the size of block devices
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Ilya Dryomov <idryomov@gmail.com>,
 Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201106190337.1973127-1-hch@lst.de>
From: Josef Bacik <josef@toxicpanda.com>
Message-ID: <7ddd60ce-f588-028f-7e47-2df4d52e22d5@toxicpanda.com>
Date: Mon, 9 Nov 2020 13:07:53 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201106190337.1973127-1-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/6/20 2:03 PM, Christoph Hellwig wrote:
> Hi Jens,
> 
> this series builds on top of the work that went into the last merge window,
and makes sure we have a single coherent interface for updating the size of a
> block device.
> 

You can add

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

for the nbd bits, thanks,

Josef


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 05:14:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 05:14:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.22945.49540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcLz3-0006jI-TI; Tue, 10 Nov 2020 05:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 22945.49540; Tue, 10 Nov 2020 05:14:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcLz3-0006j5-Li; Tue, 10 Nov 2020 05:14:21 +0000
Received: by outflank-mailman (input) for mailman id 22945;
 Mon, 09 Nov 2020 23:28:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zSbr=EP=gmail.com=sagigrim@srs-us1.protection.inumbo.net>)
 id 1kcGao-0004dR-1H
 for xen-devel@lists.xenproject.org; Mon, 09 Nov 2020 23:28:58 +0000
Received: from mail-wm1-f50.google.com (unknown [209.85.128.50])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed551f5a-5b92-48c9-ad0b-2ab809b51429;
 Mon, 09 Nov 2020 23:28:54 +0000 (UTC)
Received: by mail-wm1-f50.google.com with SMTP id v5so1250725wmh.1
 for <xen-devel@lists.xenproject.org>; Mon, 09 Nov 2020 15:28:53 -0800 (PST)
Received: from ?IPv6:2601:647:4802:9070:f26a:270b:f54c:37eb?
 ([2601:647:4802:9070:f26a:270b:f54c:37eb])
 by smtp.gmail.com with ESMTPSA id c17sm6900728wro.19.2020.11.09.15.28.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 09 Nov 2020 15:28:51 -0800 (PST)
Subject: Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update
 the bdev size
To: Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
 Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>,
 Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-4-hch@lst.de>
 <1d06cdfa-a904-30be-f3ec-08ae2fa85cbd@suse.de>
 <20201109085340.GB27483@lst.de>
 <e79f9a96-ef53-d6ea-f6e7-e141bdd2e2d2@suse.de>
From: Sagi Grimberg <sagi@grimberg.me>
Message-ID: <d28042e3-3123-5dfc-d0a2-aab0012150c8@grimberg.me>
Date: Mon, 9 Nov 2020 15:28:44 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e79f9a96-ef53-d6ea-f6e7-e141bdd2e2d2@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit


> [ .. ]
>>> Originally nvme multipath would update/change the size of the multipath
>>> device according to the underlying path devices.
>>> With this patch the size of the multipath device will _not_ change if
>>> there is a change on the underlying devices.
>>
>> Yes, it will.  Take a close look at nvme_update_disk_info and how it is
>> called.
>>
> Okay, then: what would be the correct way of handling a size update for
> NVMe multipath?
> Assuming we're getting an AEN for each path signalling the size change
> (or a controller reset leading to a size change).
> So if we're updating the size of the multipath device together with the
> path device at the first AEN/reset, we'll end up with the other paths
> having a different size than the multipath device (and the path we've
> just been updating).
> - Do we care, or do we cross fingers and hope for the best?
> - Shouldn't we detect the case where we won't get a size update for the
> other paths, or, indeed, where we have a genuine device size mismatch
> due to a misconfiguration on the target?
> 
> IE shouldn't we have a flag 'size update pending' for the other paths,
> to take them out of use temporarily until the other AENs/resets have
> been processed?

The mpath device will take the minimum size across all the paths; that is
what blk_stack_limits does. When the AENs for all the paths arrive, the
mpath size will be updated.

Not sure how this is different from what we have today...
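A small illustration of that stacking behaviour (the function name and the
sector arrays are made up for the example; in the kernel this falls out of
blk_stack_limits rather than an explicit loop): the multipath device exposes
the minimum capacity across its paths, so a path whose AEN has not arrived
yet simply pins the mpath size at the old value until every path has updated.

```c
#include <assert.h>
#include <stddef.h>

/* Model of the size-stacking rule: the stacked (mpath) capacity is the
 * minimum of the per-path capacities, recomputed whenever any path's
 * size changes. Illustrative only, not the kernel implementation. */
static unsigned long long min_path_sectors(const unsigned long long *paths,
                                           size_t nr_paths)
{
    unsigned long long min = paths[0];
    for (size_t i = 1; i < nr_paths; i++)
        if (paths[i] < min)
            min = paths[i];
    return min;
}
```

So a grow operation only becomes visible on the mpath device once the last
path has processed its AEN, while a shrink becomes visible as soon as the
first path reports it.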


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 06:14:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 06:14:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23023.49562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcMuY-0003cP-6j; Tue, 10 Nov 2020 06:13:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23023.49562; Tue, 10 Nov 2020 06:13:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcMuY-0003cI-3h; Tue, 10 Nov 2020 06:13:46 +0000
Received: by outflank-mailman (input) for mailman id 23023;
 Tue, 10 Nov 2020 06:13:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n6mT=EQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcMuW-0003cD-9a
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 06:13:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7057ce77-f7f2-4c2f-8f92-ac323c81e03f;
 Tue, 10 Nov 2020 06:13:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54B7DABCC;
 Tue, 10 Nov 2020 06:13:40 +0000 (UTC)
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
 <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <55a10d06-190b-0647-28ac-05aef2ef29a6@suse.com>
Date: Tue, 10 Nov 2020 07:13:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="PPVfdByhVypL8nldDacm7BGdJuae8xDGk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--PPVfdByhVypL8nldDacm7BGdJuae8xDGk
Content-Type: multipart/mixed; boundary="RmjyHdAK7vtsYXhFx5DAC6m4boeHL7jnJ";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Message-ID: <55a10d06-190b-0647-28ac-05aef2ef29a6@suse.com>
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
 <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
In-Reply-To: <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>

--RmjyHdAK7vtsYXhFx5DAC6m4boeHL7jnJ
Content-Type: multipart/mixed;
 boundary="------------08E2B106380132A6A21E391D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------08E2B106380132A6A21E391D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 09.11.20 18:46, Julien Grall wrote:
> Hi,
> 
> On 09/11/2020 16:48, Jan Beulich wrote:
>> On 09.11.2020 17:38, Juergen Gross wrote:
>>> Currently the lock for a single event channel needs to be taken with
>>> interrupts off, which causes deadlocks in some cases.
>>>
>>> Rework the per event channel lock to be non-blocking for the case of
>>> sending an event and removing the need for disabling interrupts for
>>> taking the lock.
>>>
>>> The lock is needed for avoiding races between event channel state
>>> changes (creation, closing, binding) against normal operations (set
>>> pending, [un]masking, priority changes).
>>>
>>> Use a rwlock, but with some restrictions:
>>>
>>> - Changing the state of an event channel (creation, closing, binding)
>>>   needs to use write_lock(), with ASSERT()ing that the lock is taken as
>>>   writer only when the state of the event channel is either before or
>>>   after the locked region appropriate (either free or unbound).
>>>
>>> - Sending an event needs to use read_trylock() mostly; in case of not
>>>   obtaining the lock the operation is omitted. This is needed as
>>>   sending an event can happen with interrupts off (at least in some
>>>   cases).
>>>
>>> - Dumping the event channel state for debug purposes is using
>>>   read_trylock(), too, in order to avoid blocking in case the lock is
>>>   taken as writer for a long time.
>>>
>>> - All other cases can use read_lock().
>>>
>>> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V4:
>>> - switch to rwlock
>>> - add ASSERT() to verify correct write_lock() usage
>>>
>>> V3:
>>> - corrected a copy-and-paste error (Jan Beulich)
>>> - corrected unlocking in two cases (Jan Beulich)
>>> - renamed evtchn_read_trylock() (Jan Beulich)
>>> - added some comments and an ASSERT() for evtchn_write_lock()
>>> - set EVENT_WRITE_LOCK_INC to INT_MIN
>>>
>>> V2:
>>> - added needed barriers
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> I'll give it overnight for others to possibly comment, but I'd
>> like to get this in tomorrow if at all possible.
> 
> IIRC, Citrix originally reported the issues. I would like to give them
> an opportunity to test the patches and confirm this effectively fixed
> their issues.
> 
> @Roger, @Andrew, could you give it a try to confirm this unblocks the vm
> event issue?

It should be noted that this series ought to fix current unstable
failures of XSM tests, as those are triggering the advanced lock checks
of rwlocks due to the per-event channel lock disabling interrupts.
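For readers following along, a single-file model of the counter-based scheme
the changelog hints at (EVENT_WRITE_LOCK_INC set to INT_MIN): readers bump a
counter, the writer adds INT_MIN so any reader sees a negative value and backs
off without blocking. This is a sketch of the idea only, not Xen's actual
lock; function names mirror the changelog but the bodies are assumptions:

```c
#include <assert.h>
#include <limits.h>
#include <stdatomic.h>

/* Writer bias: adding INT_MIN makes the counter negative for any
 * number of concurrent readers. C11 atomic arithmetic wraps, so the
 * add/sub pairs below are well defined. */
#define EVENT_WRITE_LOCK_INC INT_MIN

static atomic_int evtchn_lock;  /* 0 = free */

/* Send path: must never block (it can run with interrupts off), so it
 * only tries the lock and omits the operation on contention. */
static int evtchn_read_trylock(void)
{
    if (atomic_fetch_add(&evtchn_lock, 1) < 0) {  /* writer holds it */
        atomic_fetch_sub(&evtchn_lock, 1);
        return 0;
    }
    return 1;
}

static void evtchn_read_unlock(void)  { atomic_fetch_sub(&evtchn_lock, 1); }

/* State changes (create/close/bind) take the lock as writer. */
static void evtchn_write_lock(void)   { atomic_fetch_add(&evtchn_lock, EVENT_WRITE_LOCK_INC); }
static void evtchn_write_unlock(void) { atomic_fetch_sub(&evtchn_lock, EVENT_WRITE_LOCK_INC); }
```

The key property is visible in the trylock: a sender observing a writer backs
off immediately instead of spinning with interrupts disabled, which is what
removes the deadlock the patch describes.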


Juergen

--------------08E2B106380132A6A21E391D--

--RmjyHdAK7vtsYXhFx5DAC6m4boeHL7jnJ--

--PPVfdByhVypL8nldDacm7BGdJuae8xDGk
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+qL5MFAwAAAAAACgkQsN6d1ii/Ey85
twf/fs+46wJm0f7jShKJQUQXgmNsfYhLf/mq9TR4W+V/aDQcn6yG4EXwSoGea+6XkwZlhyrWTEq6
Hc+s2WSJgNcSIJLeFqsXdq0cHW7Gw2HxGp7lU4Al3+roXR6Eub7c1xapX1cNNPT/GjRCSvdkOQ3c
xBpBf/ntoFTfo+RbJEuwLEJcbe5LlbrDmjR0LDRHBVj1lZJegQP246j8uyCaZwvm1ieZUfHFtD54
OQLv78iGRkUwNxEw35oUx9t14HuwmYll6IUlR5MLGUUsdYPcgI1ZZ1C36wuDkGojVrxGE2jFfRkU
KnAN5XMmCMNDudZEqdBVxzrDjciIAa5bjGya27Xa7A==
=heia
-----END PGP SIGNATURE-----

--PPVfdByhVypL8nldDacm7BGdJuae8xDGk--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 07:00:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 07:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23034.49580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcNdp-0007st-VZ; Tue, 10 Nov 2020 07:00:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23034.49580; Tue, 10 Nov 2020 07:00:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcNdp-0007sm-Sd; Tue, 10 Nov 2020 07:00:33 +0000
Received: by outflank-mailman (input) for mailman id 23034;
 Tue, 10 Nov 2020 07:00:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=O/Cu=EQ=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcNdo-0007sh-Do
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:00:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79a9254e-10c7-4ff6-91e6-5324dba2946c;
 Tue, 10 Nov 2020 07:00:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4ADE6ABCC;
 Tue, 10 Nov 2020 07:00:29 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=O/Cu=EQ=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kcNdo-0007sh-Do
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:00:32 +0000
X-Inumbo-ID: 79a9254e-10c7-4ff6-91e6-5324dba2946c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 79a9254e-10c7-4ff6-91e6-5324dba2946c;
	Tue, 10 Nov 2020 07:00:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 4ADE6ABCC;
	Tue, 10 Nov 2020 07:00:29 +0000 (UTC)
Subject: Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update
 the bdev size
To: Sagi Grimberg <sagi@grimberg.me>, Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
 Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>,
 Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201106190337.1973127-1-hch@lst.de>
 <20201106190337.1973127-4-hch@lst.de>
 <1d06cdfa-a904-30be-f3ec-08ae2fa85cbd@suse.de>
 <20201109085340.GB27483@lst.de>
 <e79f9a96-ef53-d6ea-f6e7-e141bdd2e2d2@suse.de>
 <d28042e3-3123-5dfc-d0a2-aab0012150c8@grimberg.me>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <c883475d-c154-a123-521e-4723b87534cd@suse.de>
Date: Tue, 10 Nov 2020 08:00:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d28042e3-3123-5dfc-d0a2-aab0012150c8@grimberg.me>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/10/20 12:28 AM, Sagi Grimberg wrote:
> 
>> [ .. ]
>>>> Originally nvme multipath would update/change the size of the multipath
>>>> device according to the underlying path devices.
>>>> With this patch the size of the multipath device will _not_ change 
>>>> if there
>>>> is a change on the underlying devices.
>>>
>>> Yes, it will.  Take a close look at nvme_update_disk_info and how it is
>>> called.
>>>
>> Okay, then: What would be the correct way of handling a size update 
>> for NVMe multipath?
>> Assuming we're getting an AEN for each path signalling the size change
>> (or a controller reset leading to a size change).
>> So if we're updating the size of the multipath device together with 
>> the path device at the first AEN/reset, we'll end up with the other 
>> paths having a different size than the multipath device (and the path 
>> we've just been updating).
>> - Do we care, or cross fingers and hope for the best?
>> - Shouldn't we detect the case where we won't get a size update for 
>> the other paths, or, indeed, we have a genuine device size mismatch 
>> due to a misconfiguration on the target?
>>
>> I.e. shouldn't we have a flag 'size update pending' for the other 
>> paths, to take them out of use temporarily until the other 
>> AENs/resets have been processed?
> 
> the mpath device will take the minimum size of all the paths; that is
> what blk_stack_limits does. When the AENs for all the paths have
> arrived, the mpath size will update.
> 
But that's precisely my point; there won't be a single AEN covering _all_ 
paths, but rather one AEN per path, each of which will be processed 
separately, leading to the issue described above.

> Not sure how this is different than what we have today...

Oh, that is a problem even today.
So we should probably move it to a different thread...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 07:27:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 07:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23045.49592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcO3o-0001M9-4K; Tue, 10 Nov 2020 07:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23045.49592; Tue, 10 Nov 2020 07:27:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcO3o-0001M2-1O; Tue, 10 Nov 2020 07:27:24 +0000
Received: by outflank-mailman (input) for mailman id 23045;
 Tue, 10 Nov 2020 07:27:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qxi4=EQ=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
 id 1kcO3m-0001Lx-Rd
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:27:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad09f86c-f71b-443d-9c8c-92ee4d9e29ec;
 Tue, 10 Nov 2020 07:27:22 +0000 (UTC)
Received: from coco.lan (ip5f5ad5a8.dynamic.kabel-deutschland.de
 [95.90.213.168])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DB0E3206E3;
 Tue, 10 Nov 2020 07:27:03 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qxi4=EQ=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
	id 1kcO3m-0001Lx-Rd
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:27:22 +0000
X-Inumbo-ID: ad09f86c-f71b-443d-9c8c-92ee4d9e29ec
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ad09f86c-f71b-443d-9c8c-92ee4d9e29ec;
	Tue, 10 Nov 2020 07:27:22 +0000 (UTC)
Received: from coco.lan (ip5f5ad5a8.dynamic.kabel-deutschland.de [95.90.213.168])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id DB0E3206E3;
	Tue, 10 Nov 2020 07:27:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604993241;
	bh=S1o4qj80GoZoWAAm6f5WIdZgKYNgEyeuOMYKs01dOuI=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=TUH5+tNazhJ/XJzAl/Wd+cQEsRpNMchPxEmOCv92clZjPMpMMVpFOqsyNGQ8qQlVK
	 vt54VFkbwQr8J7ito6H7c5ALBnYlEng06bYHpBTDo7H35Y6tiYwfUYlrrC4pXGTnLL
	 ZqjZgHifCGhcsQ6FCeSyPXsZrR60Wop4+ME3S5hM=
Date: Tue, 10 Nov 2020 08:26:58 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Jonathan Cameron <jic23@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Fabrice Gasnier
 <fabrice.gasnier@st.com>, Linux Doc Mailing List
 <linux-doc@vger.kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier =?UTF-8?B?R29uesOhbGV6?=
 <javier@javigon.com>, Jonathan Corbet <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Felipe Balbi <balbi@kernel.org>,
 Frederic Barrat <fbarrat@linux.ibm.com>, Guenter Roeck
 <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>, Heikki Krogerus
 <heikki.krogerus@linux.intel.com>, Jens Axboe <axboe@kernel.dk>, Johannes
 Thumshirn <johannes.thumshirn@wdc.com>, Juergen Gross <jgross@suse.com>,
 Konstantin Khlebnikov <koct9i@gmail.com>, Kranthi Kuntala
 <kranthi.kuntala@intel.com>, Lakshmi Ramasubramanian
 <nramas@linux.microsoft.com>, Lars-Peter Clausen <lars@metafoo.de>, Len
 Brown <lenb@kernel.org>, Leonid Maksymchuk <leonmaxx@gmail.com>, Ludovic
 Desroches <ludovic.desroches@microchip.com>, Mario Limonciello
 <mario.limonciello@dell.com>, Mark Gross <mgross@linux.intel.com>, Maxime
 Coquelin <mcoquelin.stm32@gmail.com>, Michael Ellerman
 <mpe@ellerman.id.au>, Mika Westerberg <mika.westerberg@linux.intel.com>,
 Mike Kravetz <mike.kravetz@oracle.com>, Mimi Zohar <zohar@linux.ibm.com>,
 Nayna Jain <nayna@linux.ibm.com>, Nicolas Ferre
 <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>, Oded
 Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>, Orson
 Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>, Vaibhav Jain
 <vaibhav@linux.ibm.com>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>
Subject: Duplicated ABI entries - Was: Re: [PATCH v2 20/39] docs: ABI:
 testing: make the files compatible with ReST output
Message-ID: <20201110082658.2edc1ab5@coco.lan>
In-Reply-To: <20201108165621.4d0da3f4@archlinux>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
	<58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
	<5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
	<20201030110925.3e09d59e@coco.lan>
	<cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
	<20201102124641.GA881895@kroah.com>
	<20201102154250.45bee17f@coco.lan>
	<20201108165621.4d0da3f4@archlinux>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Hi Jonathan,

Em Sun, 8 Nov 2020 16:56:21 +0000
Jonathan Cameron <jic23@kernel.org> escreveu:

> > PS.: the IIO subsystem is the one that currently has the most duplicated
> > ABI entries:  
> > $ ./scripts/get_abi.pl validate 2>&1|grep iio
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:0  Documentation/ABI/testing/sysfs-bus-iio:394
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:1  Documentation/ABI/testing/sysfs-bus-iio:395
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:2  Documentation/ABI/testing/sysfs-bus-iio:396
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:3  Documentation/ABI/testing/sysfs-bus-iio:397
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:4  Documentation/ABI/testing/sysfs-bus-iio:398
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:5  Documentation/ABI/testing/sysfs-bus-iio:399
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_preset is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:100  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:0
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:117  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:14
> > Warning: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available is defined 3 times:  Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:111  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:8
> > Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:0  Documentation/ABI/testing/sysfs-bus-iio:599
> > Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_powerdown is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:36  Documentation/ABI/testing/sysfs-bus-iio:588
> > Warning: /sys/bus/iio/devices/iio:deviceX/out_currentY_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-light-lm3533-als:43  Documentation/ABI/testing/sysfs-bus-iio-health-afe440x:38
> > Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:0  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:0
> > Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw_available is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:1  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:1
> > Warning: /sys/bus/iio/devices/iio:deviceX/sensor_sensitivity is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-distance-srf08:0  Documentation/ABI/testing/sysfs-bus-iio-proximity-as3935:8
> > Warning: /sys/bus/iio/devices/triggerX/sampling_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:92  Documentation/ABI/testing/sysfs-bus-iio:45  

> 
> That was intentional.  Often these provide more information on the
> ABI for a particular device than is present in the base ABI doc.

FYI, right now, there are 20 duplicated entries, 16 of them
from IIO, in those files:

	$ ./scripts/get_abi.pl validate 2>&1|perl -ne 'if (m,(Documentation/\S+)\:,g) { print "$1\n" }'|sort|uniq
	Documentation/ABI/stable/sysfs-driver-w1_ds28e04
	Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8
	Documentation/ABI/testing/sysfs-bus-iio-distance-srf08
	Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371
	Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010
	Documentation/ABI/testing/sysfs-bus-iio-icm42600
	Documentation/ABI/testing/sysfs-bus-iio-light-lm3533-als
	Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
	Documentation/ABI/testing/sysfs-class-backlight-adp8860
	Documentation/ABI/testing/sysfs-class-led-trigger-pattern
	Documentation/ABI/testing/sysfs-kernel-iommu_groups

> 
> A bit like when we have additional description for dt binding properties
> for a particular device, even though they are standard properties.
> 
> Often a standard property allows for more values than the specific
> one for a particular device.  There can also be obscure coupling
> between sysfs attributes due to hardware restrictions that we would
> like to provide some explanatory info on.
> 
> I suppose we could add all this information to the parent doc but
> that is pretty ugly and will make that doc very nasty to read.

I understand what you meant to do, but right now, it is actually
a lot uglier than merging into a single entry ;-)

Let's view the ABI from the PoV of a system admin who doesn't yet
know about a certain ABI symbol.

They'll try to search for the symbol, most likely using the HTML
documentation. Only very senior system admins might try to take
a look at the kernel sources.

This is what happens when one searches for a duplicated symbol
via the command line:

	$ ./scripts/get_abi.pl search /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency$
	
	/sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency
	----------------------------------------------------------
	
	Kernel version:		3.4.0
	Contact:		linux-iio@vger.kernel.org
	Defined on file(s):	Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371 Documentation/ABI/testing/sysfs-bus-iio
	
	Description:
	
	Stores the PLL frequency in Hz for channel Y.
	Reading returns the actual frequency in Hz.
	The ADF4371 has an integrated VCO with fundamental output
	frequency ranging from 4000000000 Hz to 8000000000 Hz.
	
	out_altvoltage0_frequency:
	        A divide by 1, 2, 4, 8, 16, 32 or 64 circuit generates
	        frequencies from 62500000 Hz to 8000000000 Hz.
	out_altvoltage1_frequency:
	        This channel duplicates the channel 0 frequency
	out_altvoltage2_frequency:
	        A frequency doubler generates frequencies from
	        8000000000 Hz to 16000000000 Hz.
	out_altvoltage3_frequency:
	        A frequency quadrupler generates frequencies from
	        16000000000 Hz to 32000000000 Hz.
	
	Note: writes to one of the channels will affect the frequency of
	all the other channels, since it involves changing the VCO
	fundamental output frequency.
	
	Output frequency for channel Y in Hz. The number must always be
	specified and unique if the output corresponds to a single
	channel.

As the "What:" field is identical in both sysfs-bus-iio-frequency-adf4371
and sysfs-bus-iio, those entries are merged, producing ABI
documentation that mixes the generic entry and the device-specific one
in a single output.

Worse than that, the "generic" content is at the end.

The same happens when generating the HTML output.

See, entries in the HTML output are ordered by the What: field,
which the script treats as a unique key, as it is
unique (except for IIO and a couple of other cases).

-

As I commented on an e-mail I sent to Greg, I see a few ways
to solve it.

The most trivial one (which I used to solve a few conflicts in
other places) is to place driver-specific details in a separate
file under Documentation/driver-api, and mention it in the
generic entries. The docs build system will generate cross-
references for Documentation/.../foo.rst files, so everything
should be OK.

The second alternative, which I also used in a couple of places,
is to modify the generic entry so that it contains the generic
definition first, followed by per-device details.

There is a third possible alternative: add a new optional field
(something like Scope:) which would be part of the unique key,
if present. Implementing support for it could be tricky, as the
produced output would likely need to create cross-references
between the generic field (if present) and the per-device details.
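The third alternative can be sketched as a composite lookup key. Note that the Scope: field is hypothetical; it does not exist in today's ABI files, and the field names below are only illustrative:

```c
#include <string.h>

/* Sketch of the proposed composite key: the documentation key becomes
 * the pair (What:, Scope:) instead of What: alone.  Scope: is a
 * hypothetical new field; "" stands for the generic (scope-less) entry. */
struct abi_key {
    const char *what;   /* e.g. "/sys/bus/iio/devices/iio:deviceX/..." */
    const char *scope;  /* e.g. "adf4371", or "" for the generic entry */
};

/* Two entries collide (and would be merged) only if both parts match,
 * so a device-specific entry no longer shadows the generic one. */
static int abi_key_equal(const struct abi_key *a, const struct abi_key *b)
{
    return strcmp(a->what, b->what) == 0 &&
           strcmp(a->scope, b->scope) == 0;
}
```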

Thanks,
Mauro

PS.: I'm taking a few days of PTO during this week. So, it
could take a while for me to reply again to this thread.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 07:39:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 07:39:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23051.49603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOFs-0002Nz-9L; Tue, 10 Nov 2020 07:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23051.49603; Tue, 10 Nov 2020 07:39:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOFs-0002Ns-6J; Tue, 10 Nov 2020 07:39:52 +0000
Received: by outflank-mailman (input) for mailman id 23051;
 Tue, 10 Nov 2020 07:39:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcOFr-0002Nh-DR
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:39:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4baaee4d-5979-4ae7-bf05-fc1b7b705e92;
 Tue, 10 Nov 2020 07:39:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 56DB8ABCC;
 Tue, 10 Nov 2020 07:39:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcOFr-0002Nh-DR
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:39:51 +0000
X-Inumbo-ID: 4baaee4d-5979-4ae7-bf05-fc1b7b705e92
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4baaee4d-5979-4ae7-bf05-fc1b7b705e92;
	Tue, 10 Nov 2020 07:39:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604993989;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=11Eat1GBxVk81WcVBoKQDb7ycMBDADa+6Y7+6MYaXKA=;
	b=b5MeNwLYgCX1cSA0BQJIpWkgtPjmx9L3FTfc7IfiUgS1xF57ASNEdQq/O/ZIshRwpFsn/x
	/WR0hzlfoSX7aPuxZjE065p1a/5UTSSN6yN9guUaPoLPnzK+F6kcqtecwfuWiDLshLjGUf
	3icnnm5wEfOZIAKoI10WQG7d4D0ZVE8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 56DB8ABCC;
	Tue, 10 Nov 2020 07:39:49 +0000 (UTC)
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Julien Grall <julien@xen.org>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
 <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c1e02f02-dfdd-33f1-0881-8632ddcba158@suse.com>
Date: Tue, 10 Nov 2020 08:39:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 18:46, Julien Grall wrote:
> Hi,
> 
> On 09/11/2020 16:48, Jan Beulich wrote:
>> On 09.11.2020 17:38, Juergen Gross wrote:
>>> Currently the lock for a single event channel needs to be taken with
>>> interrupts off, which causes deadlocks in some cases.
>>>
>>> Rework the per event channel lock to be non-blocking for the case of
>>> sending an event, and remove the need to disable interrupts for
>>> taking the lock.
>>>
>>> The lock is needed for avoiding races between event channel state
>>> changes (creation, closing, binding) against normal operations (set
>>> pending, [un]masking, priority changes).
>>>
>>> Use a rwlock, but with some restrictions:
>>>
>>> - Changing the state of an event channel (creation, closing, binding)
>>>    needs to use write_lock(), ASSERT()ing that the lock is taken as
>>>    writer only when the state of the event channel either before or
>>>    after the locked region is appropriate (either free or unbound).
>>>
>>> - Sending an event needs to use read_trylock() mostly; if the lock
>>>    cannot be obtained, the operation is omitted. This is needed as
>>>    sending an event can happen with interrupts off (at least in some
>>>    cases).
>>>
>>> - Dumping the event channel state for debug purposes is using
>>>    read_trylock(), too, in order to avoid blocking in case the lock is
>>>    taken as writer for a long time.
>>>
>>> - All other cases can use read_lock().
>>>
>>> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V4:
>>> - switch to rwlock
>>> - add ASSERT() to verify correct write_lock() usage
>>>
>>> V3:
>>> - corrected a copy-and-paste error (Jan Beulich)
>>> - corrected unlocking in two cases (Jan Beulich)
>>> - renamed evtchn_read_trylock() (Jan Beulich)
>>> - added some comments and an ASSERT() for evtchn_write_lock()
>>> - set EVENT_WRITE_LOCK_INC to INT_MIN
>>>
>>> V2:
>>> - added needed barriers
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> I'll give it overnight for others to possibly comment, but I'd
>> like to get this in tomorrow if at all possible.
> 
> IIRC, Citrix originally reported the issues. I would like to give them 
> an opportunity to test the patches and confirm this effectively fixed 
> their issues.

I would consider waiting longer, but I'd like to get staging unblocked.
Just like with 52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on
send path") we can always decide to revert / fix up afterwards. The
patch here surely is an improvement, even if it were to turn out that
it doesn't fix all issues yet. I'd appreciate it if you would confirm
you don't object to the patch going in - I'll wait a few more hours,
perhaps until around noon.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 07:47:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 07:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23059.49616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOMu-0003H2-2T; Tue, 10 Nov 2020 07:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23059.49616; Tue, 10 Nov 2020 07:47:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOMt-0003Gv-VZ; Tue, 10 Nov 2020 07:47:07 +0000
Received: by outflank-mailman (input) for mailman id 23059;
 Tue, 10 Nov 2020 07:47:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcOMs-0003Gq-Q3
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:47:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae7978aa-92ac-41df-a13f-da2be41ea121;
 Tue, 10 Nov 2020 07:47:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E47F3ABD1;
 Tue, 10 Nov 2020 07:47:03 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcOMs-0003Gq-Q3
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 07:47:06 +0000
X-Inumbo-ID: ae7978aa-92ac-41df-a13f-da2be41ea121
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ae7978aa-92ac-41df-a13f-da2be41ea121;
	Tue, 10 Nov 2020 07:47:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604994424;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4grvy2Vd+XHhnHogizLT8Lx1VH01gIHDPhnAecCkrks=;
	b=fTNc12UHVdBYeUrKwnUK0ecM6ZU87oTpw3snqC/auXwDo5Wun5TfvO0qInzxH+htVXfK3E
	SJKev/f6nHsGfry4yf0wMnueXTEMVAg9StOQ9gX+3oJOVM88fRxwFGQTgjzaLwZ891WyW9
	/VhCLL6LcPKICFIAvIdStiuStPd/050=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E47F3ABD1;
	Tue, 10 Nov 2020 07:47:03 +0000 (UTC)
Subject: Re: Tools backport request for Xen 4.14
To: Ian Jackson <iwj@xenproject.org>
Cc: Julien Grall <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, ba1020@protonmail.ch,
 Elliott Mitchell <ehem+xen@m5p.com>
References: <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>
 <20201009184930.GA65219@mattapan.m5p.com>
 <24489.33893.98470.334969@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0b675d88-9586-e590-2ea4-18dfe931d638@suse.com>
Date: Tue, 10 Nov 2020 08:47:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <24489.33893.98470.334969@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 19:03, Ian Jackson wrote:
> Elliott Mitchell writes ("Re: Tools backport request for Xen 4.14"):
>> On Fri, Oct 09, 2020 at 06:47:22PM +0100, Julien Grall wrote:
>>> Would it be possible to consider backporting to 4.14 the following tools 
>>> commit:
>>>
>>> d25cc3ec93eb "libxl: workaround gcc 10.2 maybe-uninitialized warning"
>>>
>>> This would help to build Xen tools on Debian Testing with GCC 10. I
>>> haven't built it myself, so I can't promise this is the only one :).
>>
>> From Debian's repository:
>> https://salsa.debian.org/xen-team/debian-xen.git
>>
>> The master and knorrie/4.14 branches include that commit.  They will
>> hopefully soon include all the Debian-specific bits for cross-building
>> too.
> 
> I have now backported all of the GCC10 fixes to all the supported Xen
> branches (ie back to 4.12).

Just to clarify: 4.12 is only security-supported at this point, as
far as I can tell.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 08:03:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 08:03:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23074.49627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOcx-0005ZZ-PF; Tue, 10 Nov 2020 08:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23074.49627; Tue, 10 Nov 2020 08:03:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOcx-0005ZS-MN; Tue, 10 Nov 2020 08:03:43 +0000
Received: by outflank-mailman (input) for mailman id 23074;
 Tue, 10 Nov 2020 08:03:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcOcw-0005ZN-Bi
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 08:03:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 431f4e2c-2b5a-4328-a7d8-cd09321a3592;
 Tue, 10 Nov 2020 08:03:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A5EFAB95;
 Tue, 10 Nov 2020 08:03:40 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcOcw-0005ZN-Bi
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 08:03:42 +0000
X-Inumbo-ID: 431f4e2c-2b5a-4328-a7d8-cd09321a3592
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 431f4e2c-2b5a-4328-a7d8-cd09321a3592;
	Tue, 10 Nov 2020 08:03:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604995420;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sI7MSEonKur0nPgBfGqEMV2xnU2Wo991o44qzOu9Sm0=;
	b=DD4ssLPp+eqtV2KiUNXzOX/ajfpWd0yvIsd84euDfPUFbnmqRcJGz+3HJN9BWs7zc+E2JA
	558W/8KXbyrPpDgli8MCxXWTt9KHvIguIlLV7PrI/eka170ErpOWu1axf/5KZ9zerOzDq7
	IXAoFna3mVJXB/J6gPpqnBMWg8jo+vg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7A5EFAB95;
	Tue, 10 Nov 2020 08:03:40 +0000 (UTC)
Subject: Re: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201109173819.7817-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <681e03f5-86fd-43bb-5347-c526def9ffcb@suse.com>
Date: Tue, 10 Nov 2020 09:03:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109173819.7817-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.11.2020 18:38, Andrew Cooper wrote:
> From: Roger Pau Monné <roger.pau@citrix.com>
> 
> Currently a PV hardware domain can also be given control over the CPU
> frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
> However, since commit 322ec7c89f6 the default behavior has been changed
> to reject accesses to MSRs that are not explicitly handled, preventing
> PV guests that manage CPU frequency from reading
> MSR_IA32_PERF_{STATUS/CTL}.
> 
> Additionally some HVM guests (Windows at least) will attempt to read
> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
> 
>   vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
>   d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
> 
> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> handling shared between HVM and PV guests, and add an explicit case
> for reads to MSR_IA32_PERF_{STATUS/CTL}.
> 
> Restore the previous behavior and allow PV guests with the required
> permissions to read the contents of the mentioned MSRs. Non-privileged
> guests will get 0 when trying to read those registers, as writes to
> MSR_IA32_PERF_CTL by such guests are already silently dropped.
> 
> Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with a nit, a minor adjustment request, and a question:

> @@ -448,6 +467,21 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>              goto gp_fault;
>          break;
>  
> +        /*
> +         * This MSR are not enumerated in CPUID.  It has been around since the

s/are/is/

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>  } cpufreq_controller;
>  
> +static always_inline bool is_cpufreq_controller(const struct domain *d)
> +{
> +    /*
> +     * A PV dom0 can be nominated as the cpufreq controller, instead of using
> +     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
> +     * MSRs.
> +     *
> +     * This interface only works when dom0 is identity pinned and has the same
> +     * number of vCPUs as pCPUs on the system.
> +     *
> +     * It would be far better to paravirtualise the interface.
> +     */
> +    return (IS_ENABLED(CONFIG_PV) &&
> +            (cpufreq_controller == FREQCTL_dom0_kernel) &&
> +            is_pv_domain(d) && is_hardware_domain(d));
> +}

IS_ENABLED(CONFIG_PV) is already part of is_pv_domain() and hence
imo shouldn't be used explicitly here.
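As a compilable sketch of that simplification (using stand-in definitions for the Xen predicates and Kconfig macros, which can't be reproduced here), dropping the explicit check still leaves the whole expression folding to false in !CONFIG_PV builds:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins: in Xen, IS_ENABLED() comes from Kconfig machinery and the
 * domain predicates live in xen/sched.h. */
#define IS_ENABLED(option) (option)
#define CONFIG_PV 1

enum cpufreq_controller { FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen };
static enum cpufreq_controller cpufreq_controller = FREQCTL_dom0_kernel;

struct domain { bool pv; bool hardware; };

/* is_pv_domain() already contains IS_ENABLED(CONFIG_PV), so a !PV build
 * compiles it (and anything &&-ed with it) down to false. */
static bool is_pv_domain(const struct domain *d)
{
    return IS_ENABLED(CONFIG_PV) && d->pv;
}

static bool is_hardware_domain(const struct domain *d)
{
    return d->hardware;
}

/* The helper with its redundant IS_ENABLED(CONFIG_PV) dropped. */
static bool is_cpufreq_controller(const struct domain *d)
{
    return (cpufreq_controller == FREQCTL_dom0_kernel) &&
           is_pv_domain(d) && is_hardware_domain(d);
}
```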

Also, this being an x86 concept, is it really a good idea to put
it in xen/sched.h? I guess this builds on Arm only because of the
dead-code elimination resulting from IS_ENABLED(CONFIG_PV) (where,
afaict, the one in is_pv_domain() would still suffice). (But yes, I
do realize that cpufreq_controller itself gets declared in this
file, so it may be better to do some subsequent cleanup.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 08:04:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 08:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23079.49639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOe9-0005hI-8d; Tue, 10 Nov 2020 08:04:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23079.49639; Tue, 10 Nov 2020 08:04:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcOe9-0005hB-5i; Tue, 10 Nov 2020 08:04:57 +0000
Received: by outflank-mailman (input) for mailman id 23079;
 Tue, 10 Nov 2020 08:04:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcOe8-0005h6-5t
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 08:04:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa950df0-5faf-4e20-9204-d3c35737d64c;
 Tue, 10 Nov 2020 08:04:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2DFCEADEE;
 Tue, 10 Nov 2020 08:04:54 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcOe8-0005h6-5t
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 08:04:56 +0000
X-Inumbo-ID: fa950df0-5faf-4e20-9204-d3c35737d64c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id fa950df0-5faf-4e20-9204-d3c35737d64c;
	Tue, 10 Nov 2020 08:04:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604995494;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G+pRUw8/CTl2vkEDDB9rqxMOasKAKUGMRrmlIbJgMFI=;
	b=kuRaT9xAfUBFaH64EUC1FX5xRKyMeGnWHe0ArDO29bH5wnPWlHXUonaL4VOOPQeEnlaw22
	9IfEHFTQIJSU4QLrJOAESjXte6KdkSllmnA50ZXqEgPDhj2SK4ZmsvwUn6mKFpmxZnH76Y
	btOUgBvBtHlKYJohDwFECT6mDCkaPRk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2DFCEADEE;
	Tue, 10 Nov 2020 08:04:54 +0000 (UTC)
Subject: Re: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20201109173819.7817-1-andrew.cooper3@citrix.com>
 <20201109183102.mrqklmpqyka5u6bt@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d76806b4-2302-bee9-d977-ecfe29089fd7@suse.com>
Date: Tue, 10 Nov 2020 09:04:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109183102.mrqklmpqyka5u6bt@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 09.11.2020 19:31, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 05:38:19PM +0000, Andrew Cooper wrote:
>> From: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Currently a PV hardware domain can also be given control over the CPU
>> frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
>> However, since commit 322ec7c89f6 the default behavior has been changed
>> to reject accesses to MSRs that are not explicitly handled, preventing
>> PV guests that manage CPU frequency from reading
>> MSR_IA32_PERF_{STATUS/CTL}.
>>
>> Additionally some HVM guests (Windows at least) will attempt to read
>> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
>>
>>   vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
>>   d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
>>
>> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
>> handling shared between HVM and PV guests, and add an explicit case
>> for reads to MSR_IA32_PERF_{STATUS/CTL}.
>>
>> Restore the previous behavior and allow PV guests with the required
>> permissions to read the contents of the mentioned MSRs. Non-privileged
>> guests will get 0 when trying to read those registers, as writes to
>> MSR_IA32_PERF_CTL by such guests are already silently dropped.
>>
>> Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
>> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> v2:
>>  * fix is_cpufreq_controller() to exclude PVH dom0, and collapse to nothing in
>>    !CONFIG_PV builds
>>  * Drop the cross-vendor checks.  It isn't possible to configure dom0 as
>>    cross-vendor, and anyone using is_cpufreq_controller() without an exact
>>    model match has far bigger problems.

This already answers ...

>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
>>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>>  } cpufreq_controller;
>>  
>> +static always_inline bool is_cpufreq_controller(const struct domain *d)
>> +{
>> +    /*
>> +     * A PV dom0 can be nominated as the cpufreq controller, instead of using
>> +     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
>> +     * MSRs.
>> +     *
>> +     * This interface only works when dom0 is identity pinned and has the same
>> +     * number of vCPUs as pCPUs on the system.
>> +     *
>> +     * It would be far better to paravirtualise the interface.
>> +     */
> 
> Would it be useful to add an assert here that the domain cpuid vendor
> and the BSP cpuid vendor match?
> 
> Is it even possible to fake a different cpuid vendor for dom0?

... your question here.
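For illustration, the check being asked about could look like the toy sketch below; the enum, field, and function names are hypothetical stand-ins (in Xen the two values would come from the domain's CPUID policy and the identified boot CPU), and per the v2 changelog it is dropped because dom0 cannot be configured cross-vendor anyway:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the vendor identification plumbing. */
enum x86_vendor { X86_VENDOR_INTEL, X86_VENDOR_AMD };

struct domain { enum x86_vendor cpuid_vendor; };
static const enum x86_vendor boot_cpu_vendor = X86_VENDOR_INTEL;

/* A cpufreq-controller dom0 only makes sense when its CPUID vendor
 * matches the host's, since the PERF MSR semantics are vendor-specific;
 * this predicate is what an assert at the call site would evaluate. */
static bool domain_vendor_matches_host(const struct domain *d)
{
    return d->cpuid_vendor == boot_cpu_vendor;
}
```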

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 08:48:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 08:48:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23095.49658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcPKN-0000ob-PG; Tue, 10 Nov 2020 08:48:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23095.49658; Tue, 10 Nov 2020 08:48:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcPKN-0000oU-KC; Tue, 10 Nov 2020 08:48:35 +0000
Received: by outflank-mailman (input) for mailman id 23095;
 Tue, 10 Nov 2020 08:48:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ywAc=EQ=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kcPKL-0000oP-UO
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 08:48:33 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a5526d0-a1a7-4887-9989-79d6b1c40017;
 Tue, 10 Nov 2020 08:48:32 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ywAc=EQ=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
	id 1kcPKL-0000oP-UO
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 08:48:33 +0000
X-Inumbo-ID: 5a5526d0-a1a7-4887-9989-79d6b1c40017
Received: from galois.linutronix.de (unknown [193.142.43.55])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5a5526d0-a1a7-4887-9989-79d6b1c40017;
	Tue, 10 Nov 2020 08:48:32 +0000 (UTC)
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1604998111;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+lFM+Dy1hrzRRrrrA3qnbGkxEtyVqIh0g4peYrriLeU=;
	b=iuo8f+7vFglLggVRQElkiZ2WEkqCMbTG8xoMGtVyBk3DBfZOlzK9bn4iYE9n5TJlXEeyiw
	mK2AsUoeE727uJ+eyVgbEeyt2qz1CsngbkfMTC30zg6BSGbxrFxVJV/nTlcmtj9NHSMsJn
	sU38ljGJ30NJ8ooIZ53QTax6dO6NfnLLpRxklxBphTMVejdacYZZqkmCK8e4gkxhfN2Hq9
	zuGNw+h8VUH3NFZO14JlYgbkNPH833xVYFQ2lmqEAC35a4/baTqfi6uG7ey36+HQyygrEi
	Jy4I41umlX4stejJrRBLu7awPrfWhbLcelHNMzqQQSqRZrKTpO34TDntXU+02A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1604998111;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+lFM+Dy1hrzRRrrrA3qnbGkxEtyVqIh0g4peYrriLeU=;
	b=y7osrUd/437dzM5/Hc5G9cQ/HuZ2jh7vgX8EDHSswmJuPLkHyLG6iEX8rrCl2sg3XohELe
	1aQhUu8PTNjq+DAw==
To: Ira Weiny <ira.weiny@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, Ingo Molnar
 <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Andy Lutomirski
 <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Randy Dunlap
 <rdunlap@infradead.org>, x86@kernel.org, Dave Hansen
 <dave.hansen@linux.intel.com>, Dan Williams <dan.j.williams@intel.com>,
 Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
 kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
 linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
 linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
 linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
 linux-ext4@vger.kernel.org, linux-aio@kvack.org, io-uring@vger.kernel.org,
 linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org,
 linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
 cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
 linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
 linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-cachefs@redhat.com, samba-technical@lists.samba.org,
 intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
In-Reply-To: <20201110045954.GL3976735@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com> <20201009195033.3208459-6-ira.weiny@intel.com> <87h7pyhv3f.fsf@nanos.tec.linutronix.de> <20201110045954.GL3976735@iweiny-DESK2.sc.intel.com>
Date: Tue, 10 Nov 2020 09:48:31 +0100
Message-ID: <87eel1iom8.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, Nov 09 2020 at 20:59, Ira Weiny wrote:
> On Tue, Nov 10, 2020 at 02:13:56AM +0100, Thomas Gleixner wrote:
> Also, we can convert the new memcpy_*_page() calls to kmap_local() as well.
> [For now my patch just uses kmap_atomic().]
>
> I've not looked at all of the patches in your latest version.  Have you
> included converting any of the kmap() call sites?  I thought you were more
> focused on converting the kmap_atomic() to kmap_local()?

I did not touch any of those yet, but the logical next step is to
convert over all kmap() instances which do _not_ create a global
mapping.
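A sketch of the conversion pattern under discussion, with trivial stand-ins for the kernel interfaces (highmem semantics can't be reproduced outside the kernel; both stubs just return the page buffer):

```c
#include <assert.h>
#include <string.h>

/* Stand-ins: in Linux, kmap() creates a global highmem mapping, while
 * kmap_local_page()/kunmap_local() create a short-lived, CPU-local one. */
struct page { char data[4096]; };

static void *kmap(struct page *p)            { return p->data; }
static void  kunmap(struct page *p)          { (void)p; }
static void *kmap_local_page(struct page *p) { return p->data; }
static void  kunmap_local(void *addr)        { (void)addr; }

/* Before: a short, single-context access done via kmap(). */
static void fill_via_kmap(struct page *p, const char *src)
{
    char *va = kmap(p);
    strcpy(va, src);
    kunmap(p);
}

/* After: the same access via the local variant, as suggested for every
 * kmap() call site that does not need a global mapping.  Note the
 * unmap takes the virtual address, not the page. */
static void fill_via_kmap_local(struct page *p, const char *src)
{
    char *va = kmap_local_page(p);
    strcpy(va, src);
    kunmap_local(va);
}
```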

Thanks,

        tglx



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 09:14:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 09:14:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23101.49670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcPjP-0003N4-RE; Tue, 10 Nov 2020 09:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23101.49670; Tue, 10 Nov 2020 09:14:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcPjP-0003Mx-Mr; Tue, 10 Nov 2020 09:14:27 +0000
Received: by outflank-mailman (input) for mailman id 23101;
 Tue, 10 Nov 2020 09:14:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5gKK=EQ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kcPjO-0003Ms-2a
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:14:26 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7885558-6f25-4fd1-90cd-264bd6d40d02;
 Tue, 10 Nov 2020 09:14:24 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 352EF6736F; Tue, 10 Nov 2020 10:14:22 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5gKK=EQ=lst.de=hch@srs-us1.protection.inumbo.net>)
	id 1kcPjO-0003Ms-2a
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:14:26 +0000
X-Inumbo-ID: b7885558-6f25-4fd1-90cd-264bd6d40d02
Received: from verein.lst.de (unknown [213.95.11.211])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b7885558-6f25-4fd1-90cd-264bd6d40d02;
	Tue, 10 Nov 2020 09:14:24 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
	id 352EF6736F; Tue, 10 Nov 2020 10:14:22 +0100 (CET)
Date: Tue, 10 Nov 2020 10:14:21 +0100
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>, xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-5.10] swiotlb: remove the tbl_dma_addr argument to
 swiotlb_tbl_map_single
Message-ID: <20201110091421.GA23707@lst.de>
References: <20201023063309.3472987-1-hch@lst.de> <20201103094643.GA18936@lst.de> <20201104140438.GA16892@char.us.oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201104140438.GA16892@char.us.oracle.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Nov 04, 2020 at 09:04:38AM -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Nov 03, 2020 at 10:46:43AM +0100, Christoph Hellwig wrote:
> > ping?
> 
> Hopefully this goes through. I am in the process of testing it but ran
> into testing issues that I believe are unrelated.

Did you manage to make any progress?  This fixes an issue with the
new support for systems with DMA offsets in 5.10.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 09:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 09:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23115.49686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQ0G-00057K-C2; Tue, 10 Nov 2020 09:31:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23115.49686; Tue, 10 Nov 2020 09:31:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQ0G-00057D-8m; Tue, 10 Nov 2020 09:31:52 +0000
Received: by outflank-mailman (input) for mailman id 23115;
 Tue, 10 Nov 2020 09:31:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcQ0F-000578-8H
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:31:51 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5b55895-8899-4afe-b034-9212727a23d7;
 Tue, 10 Nov 2020 09:31:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kcQ0F-000578-8H
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:31:51 +0000
X-Inumbo-ID: f5b55895-8899-4afe-b034-9212727a23d7
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f5b55895-8899-4afe-b034-9212727a23d7;
	Tue, 10 Nov 2020 09:31:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605000710;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=+T+QAPDe8sIL4BaSXGHfzPQAR/jSS1vNs6ir8MsEOBI=;
  b=E0Bg1kTj0zGoc/whWgdhGEW7KhA3mVoFOd9BJFCTwAf5q0GQLudZzK7D
   L68pF9lAXY5hXx2j/KLSnZiE9rjfWWK9IxsE8XD1oEbhxjoM2O+nM4bkN
   GsiNNQ955iCakbCEB7z+qpQ8stB/f6kdbcdvgj1vHEmP7vT3Ar+S1giTJ
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: b4Oc9sYOlA7OPfP5KugD1DplPah5xUojux4VVDKaZ0ZjAH/H2P+a1OnXpOIAmMxuC9/9oxdxnD
 ZbcXrk1XmNjQj9lcgIA/H1UfrUzGE9g58jWoLJ235SKI/3OYPmKXCg9lHyLho5AVSfk8HM8N/S
 bkmbsbLSwXUeAloxWrS16nM0TejFb1jgPF10P505Fh8GziT/kiaF/1R4yLK6qm5mk80mV07j0y
 Aod0ora0yaVIm1+hGoWONTarc0fLBoA+kzu3R3Cw60ztR1Gfnm4v1qSCR85nnNU4Cnx/SuGu4O
 cjc=
X-SBRS: None
X-MesageID: 31066316
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31066316"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N0QdhoLJvdk+wrxWHoyWoNMEXn3MrKWWzwTVkCXtEjqE3pV0SFifYAcLVgc4CPt8qetmm/vgJtNBB1qdlLjOjip/nxNIgJt6toR2GzDfISsvkfwbtWKitUCSg+xAAmXT+XQt8OIpSlD60/mdN224cMKxlviFYaRijadSTDQ6LrB5C3FSOOxxa42c9TFvU1uBJ4Vco5e8hujq9jHEGzxxv0zrjt3nz+ilZU4Hvu8LC+Ot4u8LNyKMJMrt2tgbhhzocrSDFyXCZ505tmZ1bAZ3Fghzyu+QoeQFakjSaj4MwOE86oLpYWWUYNSEF2uKhxFk44t6L1az4xtk1xwzv61SZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CY6pf+Of+ze4JW9l14slxpn+ssZAYMU0Um5kRS/63n4=;
 b=ofX6xA/gznKZsZo0am89Je6RDV1VUE2UvPGLf3tkX+TsOaZw46ijS5c3MhIsFI4LH1OnwAxgzdgKjGJ/22wtEL6CsOryXb9gzX2Ha/3Jc+ceIu1VvAIkUHFawSV/KddE8oQw/HohWXzvHLJGZl1cnYgU2jWe2Hb4UO3dTaf99GyzOHrm+sGy1F/p+XYNxAxl0Pg6BFfuHPO2pdpiWEQTRCiwoBf1GN3XqcWibUGPSJHxopn5Uxr61I3SZ6HtrTGbnDtvFMDQ42ey8rKHcS1mV8ccR9oA0vGymQ+RXsyhqSmXpFJfHc3P2M05eHUYb0I6rR8zUVFU2yUCluS5/kP//Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CY6pf+Of+ze4JW9l14slxpn+ssZAYMU0Um5kRS/63n4=;
 b=rlybPT93ilqhQLDz3dly15inHoWyXOUxIosAmb3nqaJWB30hfDvRHLr6jf+fKI4JbqTVIfgz+C/RgYPtOOlWR7HnhPC1zCtxAlEnPb58norgyBxjZMvZ8U8dCU22Jo0MGv5QavNZHCA56+ycgksURvJnKmFzEIfnGY6fSDsSugk=
Date: Tue, 10 Nov 2020 10:31:42 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201110093142.hkufamaepn67gv43@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
X-ClientProxiedBy: LO3P265CA0007.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::12) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e4dc072e-55a4-400b-91dd-08d8855b718b
X-MS-TrafficTypeDiagnostic: DM6PR03MB3833:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3833EFB775F0436FFE883CB78FE90@DM6PR03MB3833.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: LaDLTZvWUThqjiahwl7dJycw8HnBXlu4dxJtRySElHqnAOiiTQajRA6a3sDH3pIswxLCMYp5EZWmEMPp9CF04RnA3vx258Chd4UWCrLAn+AK6S00Ma6txJCWAHZ8RPnwKIiN/mjTHNlI9ks/3hIw9nxNNGV859ztMPJvYL2tFde3ziQMxQZ438QikGMI4f4sVMGdMO6uBfS2vbtkipAOT06y7OtIey7vyAmxPG37MHklJRTPKLc9aF6tb6Jh0IxlFRWBXoOws2mqFfICdhthxy8PbWAjRqAVPv4QPynT6//+cdmtwWl64hJuRRqiFe1eF/ANx77xbj+vFHgpu6Q3CA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(376002)(396003)(366004)(39860400002)(136003)(478600001)(316002)(5660300002)(8936002)(85182001)(33716001)(6496006)(1076003)(9686003)(6486002)(54906003)(66946007)(86362001)(66476007)(2906002)(16526019)(956004)(26005)(186003)(6916009)(8676002)(66556008)(6666004)(4326008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: KpoOW2kFXNq8MvuAUTtryC0NyNoCULgQBKv8edv5n3r4q7mzjrOibt24w22JQoHnbNfpyvGNYqOR48+mi85Ch/h+qQ3Ir9he0Z6KzoyvKtwMg/7CCQEPTVSeX0z3BJIgZSPYKO1WK8NsyOPNujberhUqRo/i73ZB4dUQuJAYtSS52j6UxOjfLqV0VZFqXZ7/B/w/Oj7rwYjGn+dQ/iGqX29/okjB8UdDZo185wT0JIUku5Hp7eSu1GBrDe/xMqeGIve3xyI6Pzv/b7cGlvojNarg2xj4siztXEUdd3MywkK51Wf54MwsjWNcwLB5Mk9tBgYUPo405Ov7xiZeoiSb7J0HrtB3GQgB2OiNkrgfIyd8DC4Ssd85MmMW7KLUN+0nF6BMyipVf+/Ym5kqsGyh5vzFsG45Q5oLojo6yw8j/0sG5MNIkMPcd8ZkAi0va5U1AV9hRIQmjksLyHY6OtJCaGA4jZCnaXp40+6ms/3dgE0PC6h+2uCZjD33mblkq1vkWyxMAjTl/H/NkhP0DRsfrTJj112/WR3wezXMNhyA7TLPuWBr+KRIh3qnJvX+T7aqhtmyB+tHlXYgfb8E6soO86JUE1Q1ERrNoMs3bo4FfWqrc7xpQCk+AyvgPwahZv7fj7zxIxRpZVBAzQI4m5W9+EGmWcvdBrpmhcNqUgmTw9HXIAiS09WGN2TlPLwsucZkSaiVTh8Ckeqk+8hZv4ZjqwrRz7pK7hVrLY2RFQ8XSX2aOZ56yrJY2RS7yUAEUywoifNkG+N63yEh4efrskK0T7oVimkPSVlB/jxSDEGpkfxHzbuiotVdazWZxWVNonp+5NgY1rjTXaE8HDUr6ArljmFeDF9h0Ucc5mfFRFT+CuaLQvqcJPP8knR93gva4PqGW5DCepLAHou+T89CyFhK0g==
X-MS-Exchange-CrossTenant-Network-Message-Id: e4dc072e-55a4-400b-91dd-08d8855b718b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 09:31:46.7570
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UabrwSISwisq2+yQ4QLHW3EamLWIcRCaj57pBZG//EiIdtjUDx4korbj1ZBMwSLoUeuZBIA6d+7oz2l/kLoAcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3833
X-OriginatorOrg: citrix.com

On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> Under certain conditions CPUs can speculate into the instruction stream
> past a RET instruction. Guard against this just like 3b7dab93f240
> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> achieve this that differ: A set of macros gets introduced to post-
> process RET insns issued by the compiler (or living in assembly files).
> 
> Unfortunately for clang this requires further features their built-in
> assembler doesn't support: We need to be able to override insn mnemonics
> produced by the compiler (which may be impossible, if internally
> assembly mnemonics never get generated), and we want to use \(text)
> escaping / quoting in the auxiliary macro.

Could this have an option to enable/disable it at build time?

FreeBSD will drop GNU as from its base system quite soon, and although it
can be installed as a package, I would like to be able to build Xen using
a toolchain based exclusively on LLVM.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 09:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 09:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23116.49698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQ0L-00058k-Lp; Tue, 10 Nov 2020 09:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23116.49698; Tue, 10 Nov 2020 09:31:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQ0L-00058d-Hw; Tue, 10 Nov 2020 09:31:57 +0000
Received: by outflank-mailman (input) for mailman id 23116;
 Tue, 10 Nov 2020 09:31:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8v0f=EQ=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1kcQ0K-000578-2K
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:31:56 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.74]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 236309b9-9dd4-4055-bb2a-f3e2adceed66;
 Tue, 10 Nov 2020 09:31:50 +0000 (UTC)
Received: from MR2P264CA0150.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501:1::13) by
 VI1PR0801MB1919.eurprd08.prod.outlook.com (2603:10a6:800:89::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Tue, 10 Nov
 2020 09:31:48 +0000
Received: from VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:501:1:cafe::4c) by MR2P264CA0150.outlook.office365.com
 (2603:10a6:501:1::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Tue, 10 Nov 2020 09:31:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT008.mail.protection.outlook.com (10.152.18.75) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3541.17 via Frontend Transport; Tue, 10 Nov 2020 09:31:46 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71");
 Tue, 10 Nov 2020 09:31:45 +0000
Received: from 6ba7799d1be7.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B2DDF338-D78D-404A-B160-587F118C2D72.1; 
 Tue, 10 Nov 2020 09:31:40 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6ba7799d1be7.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 10 Nov 2020 09:31:40 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB3584.eurprd08.prod.outlook.com (2603:10a6:803:88::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.29; Tue, 10 Nov
 2020 09:31:38 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::68c2:f9e0:49c5:7e18]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::68c2:f9e0:49c5:7e18%3]) with mapi id 15.20.3541.025; Tue, 10 Nov 2020
 09:31:38 +0000
X-Inumbo-ID: 236309b9-9dd4-4055-bb2a-f3e2adceed66
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G9IZsptPU/gTSzuYimH3onRmy2KghqqHJrBnEnUhsOQ=;
 b=zNEMX5wtoIR2xzMmrzUdd+75dMIqtIHlZ4iJ45DHhXyh7XX19Oc8aMgsN/rW7oKzl+Kr27YtVVEBZDOrfq5YoFOEAOziVZLP/J5qQ2MKAlGjepKZVhhIiw/Md3R3sROPErroKKYoSHpPdkm6/FYCITH5GFlyqs540WGn9lnImoA=
X-MS-Exchange-Authentication-Results: spf=temperror (sender IP is
 63.35.35.123) smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass
 (signature was verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=temperror action=none header.from=arm.com;
Received-SPF: TempError (protection.outlook.com: error in processing during
 lookup of arm.com: DNS Timeout)
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P55FmaBW4vOANJHuWDOjw2RlcVhicLAlBp9nxufjO/KjeyBitgPQKEVHBN6nkiVW/o+7ZjbTyvzTCfBHl4zWDCm2V6vsuW1dH/bUAdY5kiqbGyLiNlF8CNbwPZ2OWajPhMWAaB7zL5JpG0jHNPT9vKMED2hKRD9yATBBxev+0XhwEe/3JQ8SrYpgSO9aP+t1J7ZlpLMequE3zt0PQc6ZTt/PNCqlgWp+OXEm7k+HoL26QO3fDRuvkHid5NujQ5eaml0tCe5iU0qvqT2Jt1dm2lg4HD5Jwm0zGlVGSMs+ZDbqa56s8PwbqS78AZ2GYHwBHYEpEaJYWDNSAInfaCwTHA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G9IZsptPU/gTSzuYimH3onRmy2KghqqHJrBnEnUhsOQ=;
 b=lmoH92UWeS487QAwi2yst0/njd10We5Z7NQ65aFZB2YoRChzoKwCkWMZD4JK+z+PaMxsPvM8Kp82IlSaisyxaZmPUgkyYU1qh6VAtgvYG3zd0oykRvS4j5wpWqFEbRlIdN3Yb+4OBrjw2QbbaIgTcW7avDz/Y/SoXHFFoQnn/QjJ5rDVanerarxtAKRqnQhP/y3Xxe8OxlWYocA4Lhwkz81yihnYAK77zM1OfdJTDodF3vnQRc0s2dH48QayzIRyLox2goDWsUB/ItlTUT6J3JMky6iGAxNPMBqzHX/G0knyOlfS/aSub9hOhX+otJuCQQCFM4v4k0BBJPM7Y+Bkaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Andre Przywara <Andre.Przywara@arm.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Kaly Xin
	<Kaly.Xin@arm.com>, nd <nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Topic: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Index: AQHWtnF2vBhY4+4fg0iJYr/0WqjXm6m/tA6AgAFmnRA=
Date: Tue, 10 Nov 2020 09:31:37 +0000
Message-ID:
 <VE1PR08MB521542C55E70B3280B51175DF7E90@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
In-Reply-To: <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 701A1A5E6FAD9B44A481112128DBD013.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fcc157b2-05cc-4eb4-6524-08d8855b7141
x-ms-traffictypediagnostic: VI1PR08MB3584:|VI1PR0801MB1919:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0801MB1919ABAD0540629D386B613EF7E90@VI1PR0801MB1919.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2276;OLM:2276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 HsIf1m3Nu0bQipYOw6A+I3j+3nBzNBYdqFotC1+GdYd1cFbK/expmIntfLZAPMSbWSQrnt2kERpYC3/LspjN5PJr6Yii/cFAiAK/lMgml2DxsvR2gkibQ+8CaL6YoJOguSzsQn+eJxZ5GiE1gRe5GO67PrxSGk0bqH0w6IGStoI4yMP2fhk9j7rMwHwmyoU/ObcOs3q9sW0alYo9trblv+BXMJCMv3nztOBocp1CtMffmfbvZlQCybGAlkJhouV4guUUL3VX3my5BQbb9jRUKBbcylYv76mCNxZPgZ25nQkrIyjhVkq6SCJJfq9j6dxLQzrhLFGKJ+t/ViiiVS/oTA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5215.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(366004)(39860400002)(346002)(376002)(53546011)(64756008)(8936002)(26005)(66946007)(6506007)(186003)(83380400001)(76116006)(66556008)(86362001)(66476007)(5660300002)(7696005)(8676002)(52536014)(33656002)(9686003)(54906003)(55016002)(2906002)(478600001)(4326008)(316002)(71200400001)(110136005)(66446008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 7EDylir6ILKl5J93DpwZtBL/FJEkcAF4qYYEZoz7yQDrGECW62XS+xtXKilWSSETg/omkhTEAqRZUg5qCpFXyiVy7H15GBkVOdLj8JqNYC0zSxWyjZF0oQMVSYh3JKSfwJrGUe0vkrL1/SFHjWjJ9GuzWez5QNzR74EaorAmfP9EpeI2YRMA7hxg/TqluoDxSpGjdYkidHqTlkvQgnYZtcSOhlMYkx8DYSn3wl2d09LHN5tNkkgHoIf8CWlCDTl6P1E+Y8Yxwwz96PVrb83zhCfHW7FXIuY5yr3E8WuFUGeXcAiv+uNKOoVzE4Tm7FbkQBst46A54qfuy4eqHTi6C9PwNkcxFpFqXpHpUFhoVa0yoQM3HMnAkpYl+p7aHh4QYpnlBHZkp1hPBZp/zWcX7cOgs+fYHxb8d1UKbMPVPdS5CG+6alscGTLxdB0IaL2On6wUza9PTyq+wspRDPvDWCpqp1FqyGvPmr6XrYsTlaymap46iK8igRcoqzl7m5geOk2wxvW3Zpq5m0isThHwbHAvr2Kd5J8cvD/XmiaIlqAT6npJQpuur1IlN+9shxRfrWKg8x0kTXGa75nTyvlBPJPOO1QRriEnBbm+/xg15VHUadmpepXM+kRNLL+rNOCfYxNKmCj2xGozKIijHyoO3xphb/XY7sVVIZVctagv19Nps0OtmM32VcdyCgzMGwF8tJGuYsc9gLCCXWx/U1ZkOZLyUHNgh1N6ND2eduFi8PXHsEV8QGu1sqi5KvZXtleQ9sUaEDRry64NEMEauQGbvs3n0xuxFg0WRgob5cw4JqaqHBJ82pYX2QBzRAFJkkXHkCKb3brA+CqOrh5ywKiHrtKo/3O+lG0xWoRkTdoY31bdJaPxn6MeeUtFVT+FELp1nnKpqWJkMScPGNiOG+IgmA==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3584
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	36952f97-3d06-4bee-aa55-08d8855b6c83
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0H1vbCtHx5/pGjA0nEFncp7Y7DAh7aLcG1kb9YsL2PlZFMSIV+xYPENJKTE6kXnkOOSigDuaV31u0PVBbnPqIYDCRv195Dej2oWMhriyHWs5MuSGNfrzeQDper4TiTLl0jrpfa+b4Rl5U8XpJ5GtjAB+C4q1srdn9n1uaZEVtttnayW0Na0WOYDlzN9G/XCOrlZN6s2jz+CgiZy2EX14CDkA+mHl86ax9JqoqgRV8VTM2cMeNNinmicXoEK8YPqrrY4G7g4UJqHq5uQzIA0JXsw6cIfWAJN1fJm0sTLpFjVyvKTpbh/j6Fsa8WXd/yO8xAr2f2iCFs3SN6uGrHc6GYUxpKmC3A6T0Kv8yyYWHoa2u8XF0uQuH2riC7TuM/rD2pWreYBES/JuaQHA/sb9Ew==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(39860400002)(136003)(46966005)(336012)(53546011)(81166007)(6506007)(186003)(26005)(7696005)(70206006)(356005)(4326008)(55016002)(83380400001)(5660300002)(70586007)(8936002)(86362001)(2906002)(63370400001)(63350400001)(8676002)(9686003)(316002)(110136005)(82310400003)(54906003)(36906005)(47076004)(82740400003)(478600001)(33656002)(52536014);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 09:31:46.0255
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fcc157b2-05cc-4eb4-6524-08d8855b7141
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1919



> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, November 9, 2020 8:04 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Wei Chen <Wei.Chen@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
> 
> Hi,
> 
> On 09/11/2020 08:21, Penny Zheng wrote:
> > CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
> > might return a wrong value when the counter crosses a 32bit boundary.
> >
> > Until now, there is no case for Xen itself to access CNTVCT_EL0, and
> > it also should be the Guest OS's responsibility to deal with this
> > part.
> >
> > But for CNTPCT, there exists several cases in Xen involving reading
> > CNTPCT, so a possible workaround is that performing the read twice,
> > and to return one or the other depending on whether a transition has
> > taken place.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
>
Thank you. 😉

> On a related topic, do we need a fix similar to Linux commit 75a19a0202db
> "arm64: arch_timer: Ensure counter register reads occur with seqlock held"?
> 
Sure, I'll check this commit and talk with my teams for further work.

Cheers

--
Penny Zheng
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 09:47:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 09:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23141.49709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQFX-0006In-91; Tue, 10 Nov 2020 09:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23141.49709; Tue, 10 Nov 2020 09:47:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQFX-0006Ig-67; Tue, 10 Nov 2020 09:47:39 +0000
Received: by outflank-mailman (input) for mailman id 23141;
 Tue, 10 Nov 2020 09:47:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qU6Q=EQ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kcQFV-0006Ib-KM
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:47:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7135ed00-cc6c-4785-acdd-9b7cfc3a15e4;
 Tue, 10 Nov 2020 09:47:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcQFT-0004EV-89; Tue, 10 Nov 2020 09:47:35 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcQFT-00078u-0R; Tue, 10 Nov 2020 09:47:35 +0000
X-Inumbo-ID: 7135ed00-cc6c-4785-acdd-9b7cfc3a15e4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=J+q3yBi9QfKTGC/JZyJJRHhvRikw6AzeLtP36BVNy/Y=; b=I46xqANe8awyC+w49LaWoPptLf
	NdEKWbS9d1JMD5gcLEqd2FAQLqWF1DINuGKmJFXz5VdBcMWk4pY6Bnt4QPQ8uxWshorgo9tbaVDPw
	6lUX/qViS1T8a5/uh4DDz79qIcZg9VKx1vZrkZ7t1p6DUr7EVUvsgE1klfFb9cOzD3ho=;
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Roger Pau Monné <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
 <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
 <c1e02f02-dfdd-33f1-0881-8632ddcba158@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f32f583e-2bc8-a064-fc16-0f8661149357@xen.org>
Date: Tue, 10 Nov 2020 09:47:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <c1e02f02-dfdd-33f1-0881-8632ddcba158@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 10/11/2020 07:39, Jan Beulich wrote:
> On 09.11.2020 18:46, Julien Grall wrote:
>> Hi,
>>
>> On 09/11/2020 16:48, Jan Beulich wrote:
>>> On 09.11.2020 17:38, Juergen Gross wrote:
>>>> Currently the lock for a single event channel needs to be taken with
>>>> interrupts off, which causes deadlocks in some cases.
>>>>
> >>>> Rework the per event channel lock to be non-blocking for the case of
> >>>> sending an event, and remove the need to disable interrupts when
> >>>> taking the lock.
>>>>
>>>> The lock is needed for avoiding races between event channel state
>>>> changes (creation, closing, binding) against normal operations (set
>>>> pending, [un]masking, priority changes).
>>>>
>>>> Use a rwlock, but with some restrictions:
>>>>
> >>>> - Changing the state of an event channel (creation, closing, binding)
> >>>>     needs to use write_lock(), with an ASSERT() verifying that the lock
> >>>>     is taken as writer only when the event channel's state either before
> >>>>     or after the locked region is appropriate (either free or unbound).
>>>>
> >>>> - Sending an event mostly needs to use read_trylock(); if the lock
> >>>>     cannot be obtained, the operation is omitted. This is needed as
> >>>>     sending an event can happen with interrupts off (at least in some
> >>>>     cases).
>>>>
>>>> - Dumping the event channel state for debug purposes is using
>>>>     read_trylock(), too, in order to avoid blocking in case the lock is
>>>>     taken as writer for a long time.
>>>>
>>>> - All other cases can use read_lock().
>>>>
>>>> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V4:
>>>> - switch to rwlock
>>>> - add ASSERT() to verify correct write_lock() usage
>>>>
>>>> V3:
>>>> - corrected a copy-and-paste error (Jan Beulich)
>>>> - corrected unlocking in two cases (Jan Beulich)
>>>> - renamed evtchn_read_trylock() (Jan Beulich)
>>>> - added some comments and an ASSERT() for evtchn_write_lock()
>>>> - set EVENT_WRITE_LOCK_INC to INT_MIN
>>>>
>>>> V2:
>>>> - added needed barriers
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> I'll give it overnight for others to possibly comment, but I'd
>>> like to get this in tomorrow if at all possible.
>>
>> IIRC, Citrix originally reported the issues. I would like to give them
>> an opportunity to test the patches and confirm this effectively fixes
>> their issues.
> 
> I would consider waiting longer, but I'd like to get staging unblocked.

So the plan is to have the patches sitting in staging for a few days and 
then backport?

> Just like with 52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on
> send path") we can always decide to revert / fix up afterwards. The
> patch here surely is an improvement, even if it was to turn out it
> doesn't fix all issues yes. I'd appreciate if you would confirm you
> don't object to the patch going in - I'll wait a few more hours,
> perhaps until around noon.
I would suggest waiting until noon and, if you don't get any answer, then
merging it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 09:53:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 09:53:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23150.49722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQLA-0007Ch-W5; Tue, 10 Nov 2020 09:53:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23150.49722; Tue, 10 Nov 2020 09:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQLA-0007Ca-SD; Tue, 10 Nov 2020 09:53:28 +0000
Received: by outflank-mailman (input) for mailman id 23150;
 Tue, 10 Nov 2020 09:53:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcQLA-0007CV-Bv
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 09:53:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7271080d-aca9-4905-a532-2d23383e0a6c;
 Tue, 10 Nov 2020 09:53:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcQL6-0004LI-FY; Tue, 10 Nov 2020 09:53:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcQL6-00088A-4q; Tue, 10 Nov 2020 09:53:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcQL6-0001yg-4N; Tue, 10 Nov 2020 09:53:24 +0000
X-Inumbo-ID: 7271080d-aca9-4905-a532-2d23383e0a6c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=do39fy9ZKrWFC90uIWh8uTlCZl2PJFqwObbqRGZDNng=; b=eAQpnAsUw3/jEj/EjnfKx++IIK
	m5oJdLXtlKyRGwIxw9djV7yizL4sHhah+jDnPhYUujl5Aah4nglO6XEftoc0Z2b/DrlndF/YPycoi
	euptiEBQ/pH11MYfTel1SVYjNP/I9xaZoLqDw54B0j7BL12Xv9ue3EWpzUXFtcWb0PEg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156594-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156594: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:debian-fixup:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a38060ece699f22cd317219bec53c0d27279155b
X-Osstest-Versions-That:
    xen=0c96e4297da07944525729ddbe438b0131ab5b7e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 09:53:24 +0000

flight 156594 xen-4.14-testing real [real]
flight 156615 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156594/
http://logs.test-lab.xenproject.org/osstest/logs/156615/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 13 debian-fixup            fail REGR. vs. 156525
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156525

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156525
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156525
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156525
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156525
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156525
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156525
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a38060ece699f22cd317219bec53c0d27279155b
baseline version:
 xen                  0c96e4297da07944525729ddbe438b0131ab5b7e

Last test of basis   156525  2020-11-06 16:01:25 Z    3 days
Testing same since   156594  2020-11-09 18:08:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a38060ece699f22cd317219bec53c0d27279155b
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 14:57:01 2020 +0100

    tools/libs/stat: use memcpy instead of strncpy in getBridge
    
    Use memcpy in getBridge to prevent gcc warnings about truncated
    strings. We know that we might truncate, so the gcc warning here
    is spurious.
    Revert the previous change that enlarged the buffer sizes, as
    bigger buffers are not needed.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 40fe714ca4245a9716264fcb3ee8df42c3650cf6)

commit 78a53f0ee04bfd14dc21dabd3a0d79343c3b642f
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 14:57:02 2020 +0100

    tools/libs/light: Fix libxenlight gcc warning
    
    Fix gcc10 compilation warning about uninitialized variable by setting
    it to 0.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 0241809bf838875615797f52af34222e5ab8e98f)

commit 89ae1b185a193fea8e86840c48a2711f04042415
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 0d8d289af7a679c028462c4ed5d98586f9ef9648)

commit 7398a44e8636c99f6b06fb16d05a64ee06980d10
Author: Juergen Gross <jgross@suse.com>
Date:   Sat Sep 12 15:08:36 2020 +0200

    tools/libs/stat: fix broken build
    
    Making getBridge() static triggered a build error with some gcc versions:
    
    error: 'strncpy' output may be truncated copying 15 bytes from a string of
    length 255 [-Werror=stringop-truncation]
    
    Fix that by using a 256-byte buffer instead.
    
    Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit c8099e48c3dbb8c7399551a265756ecf354eece2)

commit 59b83663f925092f60f699147390c6cb77b55e43
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Mar 19 20:40:24 2020 +0000

    tools/xenstore: Do not abort xenstore-ls if a node disappears while iterating
    
    The do_ls() function has somewhat inconsistent handling of errors.
    
    If reading the node's contents with xs_read() fails, then do_ls() will
    just quietly not display the contents.
    
    If reading the node's permissions with xs_get_permissions() fails, then
    do_ls() will print a warning, continue, and ultimately won't exit with
    an error code (unless another error happens).
    
    If recursing into the node with xs_directory() fails, then do_ls() will
    abort immediately, not printing any further nodes.
    
    For persistent failure modes — such as ENOENT because a node has been
    removed, or EACCES because it has had its permissions changed since the
    xs_directory() on the parent directory returned its name — it's
    obviously quite likely that if either of the first two errors occur for
    a given node, then so will the third and thus xenstore-ls will abort.
    
    The ENOENT one is actually a fairly common case, and has caused tools to
    fail to clean up a network device because it *apparently* already
    doesn't exist in xenstore.
    
    There is a school of thought that says, "Well, xenstore-ls returned an
    error. So the tools should not trust its output."
    
    The natural corollary of this would surely be that the tools must re-run
    xenstore-ls as many times as is necessary until it manages to exit
    without hitting the race condition. I am not keen on that conclusion.
    
    For the specific case of ENOENT it seems reasonable to declare that,
    but for the timing, we might as well just not have seen that node at
    all when calling xs_directory() for the parent. By ignoring the error,
    we give acceptable output.
    
    The issue can be reproduced as follows:
    
    (dom0) # for a in `seq 1 1000` ; do
                  xenstore-write /local/domain/2/foo/$a $a ;
             done
    
    Now simultaneously:
    
    (dom0) # for a in `seq 1 999` ; do
                  xenstore-rm /local/domain/2/foo/$a ;
             done
    (dom2) # while true ; do
                  ./xenstore-ls -p /local/domain/2/foo | grep -c 1000 ;
             done
    
    We should expect to see node 1000 in the output, every time.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit beb105af19cc3e58e2ed49224cfe190a736e5fa0)

commit 1f9f1cb3a0216a7d41ded3090b4bb2735cc8a8e6
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Thu Oct 15 10:16:09 2020 +0100

    tools/xenpmd: Fix gcc10 snprintf warning
    
    Add a check of snprintf's return code and ignore the entry if we get
    an error. This should in fact never happen and is mostly a trick to
    keep gcc happy and prevent compilation errors.
    
    This is solving the following gcc warning when compiling for arm32 host
    platforms with optimization activated:
    xenpmd.c:92:37: error: '%s' directive output may be truncated writing
    between 4 and 2147483645 bytes into a region of size 271
    [-Werror=format-truncation=]
    
    This is also solving the following Debian bug:
    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 0dfddb2116e3757f77a691a3fe335173088d69dc)

commit f728b2d69f426258f37709de9efac5245a597d0d
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Aug 19 04:00:36 2020 +0200

    libxl: fix -Werror=stringop-truncation in libxl__prepare_sockaddr_un
    
    In file included from /usr/include/string.h:495,
                     from libxl_internal.h:38,
                     from libxl_utils.c:20:
    In function 'strncpy',
        inlined from 'libxl__prepare_sockaddr_un' at libxl_utils.c:1262:5:
    /usr/include/bits/string_fortified.h:106:10: error: '__builtin_strncpy' specified bound 108 equals destination size [-Werror=stringop-truncation]
      106 |   return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
          |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    cc1: all warnings being treated as errors
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit fff1b7f50e75ad9535c86f3fcf425b4945c50a1c)

commit 71a12a97988c516a17aba40bb5df9274133e9e92
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Aug 19 04:00:35 2020 +0200

    libxl: workaround gcc 10.2 maybe-uninitialized warning
    
    It seems xlu_pci_parse_bdf has a state machine that is too complex for
    gcc to understand. The build fails with:
    
        libxlu_pci.c: In function 'xlu_pci_parse_bdf':
        libxlu_pci.c:32:18: error: 'func' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           32 |     pcidev->func = func;
              |     ~~~~~~~~~~~~~^~~~~~
        libxlu_pci.c:51:29: note: 'func' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |                             ^~~~
        libxlu_pci.c:31:17: error: 'dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           31 |     pcidev->dev = dev;
              |     ~~~~~~~~~~~~^~~~~
        libxlu_pci.c:51:24: note: 'dev' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |                        ^~~
        libxlu_pci.c:30:17: error: 'bus' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           30 |     pcidev->bus = bus;
              |     ~~~~~~~~~~~~^~~~~
        libxlu_pci.c:51:19: note: 'bus' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |                   ^~~
        libxlu_pci.c:29:20: error: 'dom' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           29 |     pcidev->domain = domain;
              |     ~~~~~~~~~~~~~~~^~~~~~~~
        libxlu_pci.c:51:14: note: 'dom' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |              ^~~
        cc1: all warnings being treated as errors
    
    Work around it by setting the initial value to an invalid one
    (0xffffffff) and then asserting on each value being set. This way we
    mute the gcc warning while still detecting bugs in the parse code.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit d25cc3ec93ebda030349045d2c7fa14ffde07ed7)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:06:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:06:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23171.49746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQY5-0008L0-DK; Tue, 10 Nov 2020 10:06:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23171.49746; Tue, 10 Nov 2020 10:06:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQY5-0008Kt-A1; Tue, 10 Nov 2020 10:06:49 +0000
Received: by outflank-mailman (input) for mailman id 23171;
 Tue, 10 Nov 2020 10:06:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcQY4-0008Ko-Cp
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:06:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85ac3371-cc11-4ec1-b75b-0b7c54f50505;
 Tue, 10 Nov 2020 10:06:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 958D1AC1F;
 Tue, 10 Nov 2020 10:06:46 +0000 (UTC)
X-Inumbo-ID: 85ac3371-cc11-4ec1-b75b-0b7c54f50505
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605002806;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PNEEhRnNUojdjEzjHxl/YUclwf0hv0h1quOPdcSrjd8=;
	b=sJ1Hu1b1dDOjVYV+1DOobazfB64uvKzDO3whjAQdNIyzUIBfTqlxrhlu1sUqkm/KolZOfg
	G6znXDMvEgosURd3rjQS+ddb7anFIl53J4lWuCAGDyb9QsG6NYKs8d+OSCwljXmpUHQe+Z
	RuLmJFGC5qQBMxHOVBSlTGCCvvTuezY=
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201110093142.hkufamaepn67gv43@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
Date: Tue, 10 Nov 2020 11:06:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110093142.hkufamaepn67gv43@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 10:31, Roger Pau Monné wrote:
> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
>> Under certain conditions CPUs can speculate into the instruction stream
>> past a RET instruction. Guard against this just like 3b7dab93f240
>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>> achieve this that differ: A set of macros gets introduced to post-
>> process RET insns issued by the compiler (or living in assembly files).
>>
>> Unfortunately for clang this requires further features their built-in
>> assembler doesn't support: We need to be able to override insn mnemonics
>> produced by the compiler (which may be impossible, if internally
>> assembly mnemonics never get generated), and we want to use \(text)
>> escaping / quoting in the auxiliary macro.
> 
> Could this have an option to enable/disable at build time?

Well, a subsequent patch adds a config option for this, which in
the worst case could be turned off. I'm afraid though I'm not
clear about the question, because ...

> FreeBSD will drop GNU as quite soon from base, and albeit it can be
> installed as a package I would like to be able to build Xen using a
> toolchain based on LLVM exclusively.

... it's not clear to me what the implications here are: Are you
saying -no-integrated-as is not going to function anymore, unless
people explicitly install gas? If that's not what you meant to
indicate, then I don't see how building would become impossible.

Depending on the situation as a whole, we might then be in need
of a 2nd config option...

And btw, good that you pointed me back at this: The v3 change
wasn't quite complete, since we now don't need to check for
proper \(text) handling anymore in our logic to establish
CLANG_FLAGS. I've dropped that part for v4.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:09:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23178.49761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQaN-0008VP-Rj; Tue, 10 Nov 2020 10:09:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23178.49761; Tue, 10 Nov 2020 10:09:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQaN-0008VI-Nx; Tue, 10 Nov 2020 10:09:11 +0000
Received: by outflank-mailman (input) for mailman id 23178;
 Tue, 10 Nov 2020 10:09:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcQaM-0008Uh-Is
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:09:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f618d08-5cbc-43d3-9fba-9ada8afcd985;
 Tue, 10 Nov 2020 10:09:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcQaE-0004lI-JW; Tue, 10 Nov 2020 10:09:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcQaE-0000JR-7z; Tue, 10 Nov 2020 10:09:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcQaE-0001vj-7T; Tue, 10 Nov 2020 10:09:02 +0000
X-Inumbo-ID: 7f618d08-5cbc-43d3-9fba-9ada8afcd985
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ex22ujBJVjJABWN/SaVE7xJSbOZYwwSaMP+IlyaTfkg=; b=lV8XxI8qDHtZEDNJsLq425u+27
	tHUOpyR6EDFXztX3oPsyzbhDfEfGQfdzQVp8lxc+q49exExqnUl6I8oUynqWJqU5zhtffvLHvlxvU
	ZQ6jU68+uQvtEJhkX57HluZ7uWJ2Eu+Ph6sWhUQkCho7ZZIQ2mkXX5ctp+8PxrPxMaRc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156593-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156593: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=971a9d14667448427ddea1cf15fd7fd409205c58
X-Osstest-Versions-That:
    xen=83115491d4b3dbcb7c8dbe74ce3e59cdfac69b03
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 10:09:02 +0000

flight 156593 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156593/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156399

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156399
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156399
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156399
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156399
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156399
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156399
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156399
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156399
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156399
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156399
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156399
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  971a9d14667448427ddea1cf15fd7fd409205c58
baseline version:
 xen                  83115491d4b3dbcb7c8dbe74ce3e59cdfac69b03

Last test of basis   156399  2020-11-04 09:06:15 Z    6 days
Testing same since   156593  2020-11-09 18:08:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   83115491d4..971a9d1466  971a9d14667448427ddea1cf15fd7fd409205c58 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:11:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:11:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23184.49773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQcB-0000t3-70; Tue, 10 Nov 2020 10:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23184.49773; Tue, 10 Nov 2020 10:11:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQcB-0000sw-3j; Tue, 10 Nov 2020 10:11:03 +0000
Received: by outflank-mailman (input) for mailman id 23184;
 Tue, 10 Nov 2020 10:11:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcQc9-0000sq-60
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:11:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 913e560a-6b85-47dc-abcc-5016591c3cc0;
 Tue, 10 Nov 2020 10:11:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3C6B4AF21;
 Tue, 10 Nov 2020 10:10:59 +0000 (UTC)
X-Inumbo-ID: 913e560a-6b85-47dc-abcc-5016591c3cc0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605003059;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+Rr7pXcq/NxzmYm2LYglg7vbyHz1ZwB4ioLa2yxqp08=;
	b=fcbjvv4pyzWi9GMwhS8yPGwWzYc2+yASnVbZYEbnvPcSPgiXPyk4UWy/vcq1FF3/UNguUs
	LavCSLPmItBq7RN4HG8pAK+qj134NuCEnZIhCr649dpJgBvKujVhA/zhdURlPGynwz/+IK
	NMu9xcKUDlltKcLPqUjA5HBUdYpKw1I=
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Julien Grall <julien@xen.org>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <f64158af-3468-053f-7cbe-d52ab01b8bfc@suse.com>
 <7858513b-2938-5bad-0c9e-167a8472656f@xen.org>
 <c1e02f02-dfdd-33f1-0881-8632ddcba158@suse.com>
 <f32f583e-2bc8-a064-fc16-0f8661149357@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5c4b9d03-1e25-3338-46cd-c038f9c7cffb@suse.com>
Date: Tue, 10 Nov 2020 11:10:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <f32f583e-2bc8-a064-fc16-0f8661149357@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.11.2020 10:47, Julien Grall wrote:
> Hi,
> 
> On 10/11/2020 07:39, Jan Beulich wrote:
>> On 09.11.2020 18:46, Julien Grall wrote:
>>> Hi,
>>>
>>> On 09/11/2020 16:48, Jan Beulich wrote:
>>>> On 09.11.2020 17:38, Juergen Gross wrote:
>>>>> Currently the lock for a single event channel needs to be taken with
>>>>> interrupts off, which causes deadlocks in some cases.
>>>>>
>>>>> Rework the per event channel lock to be non-blocking for the case of
>>>>> sending an event, and remove the need to disable interrupts for
>>>>> taking the lock.
>>>>>
>>>>> The lock is needed for avoiding races between event channel state
>>>>> changes (creation, closing, binding) against normal operations (set
>>>>> pending, [un]masking, priority changes).
>>>>>
>>>>> Use a rwlock, but with some restrictions:
>>>>>
>>>>> - Changing the state of an event channel (creation, closing, binding)
>>>>>     needs to use write_lock(), with an ASSERT() verifying that the lock
>>>>>     is taken as writer only when the event channel's state before or
>>>>>     after the locked region is appropriate (either free or unbound).
>>>>>
>>>>> - Sending an event mostly needs to use read_trylock(); if the lock
>>>>>     cannot be obtained, the operation is omitted. This is needed as
>>>>>     sending an event can happen with interrupts off (at least in some
>>>>>     cases).
>>>>>
>>>>> - Dumping the event channel state for debug purposes uses
>>>>>     read_trylock(), too, in order to avoid blocking in case the lock is
>>>>>     taken as writer for a long time.
>>>>>
>>>>> - All other cases can use read_lock().
>>>>>
>>>>> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>> V4:
>>>>> - switch to rwlock
>>>>> - add ASSERT() to verify correct write_lock() usage
>>>>>
>>>>> V3:
>>>>> - corrected a copy-and-paste error (Jan Beulich)
>>>>> - corrected unlocking in two cases (Jan Beulich)
>>>>> - renamed evtchn_read_trylock() (Jan Beulich)
>>>>> - added some comments and an ASSERT() for evtchn_write_lock()
>>>>> - set EVENT_WRITE_LOCK_INC to INT_MIN
>>>>>
>>>>> V2:
>>>>> - added needed barriers
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> I'll give it overnight for others to possibly comment, but I'd
>>>> like to get this in tomorrow if at all possible.
>>>
>>> IIRC, Citrix originally reported the issues. I would like to give them
>>> an opportunity to test the patches and confirm this effectively fixed
>>> their issues.
>>
>> I would consider waiting longer, but I'd like to get staging unblocked.
> 
> So the plan is to have the patches sitting in staging for a few days and 
> then backport?

Right, the usual thing - wait at least until a push has happened.
Unless we learn of problems with the change, I definitely want to
have this in for 4.14.1.

>> Just like with 52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on
>> send path") we can always decide to revert / fix up afterwards. The
>> patch here surely is an improvement, even if it were to turn out it
>> doesn't fix all issues yet. I'd appreciate it if you would confirm you
>> don't object to the patch going in - I'll wait a few more hours,
>> perhaps until around noon.
> I would suggest waiting until noon and, if you don't get any answer,
> then merging it.

Okay.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:24:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:24:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23205.49829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQou-00028g-2O; Tue, 10 Nov 2020 10:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23205.49829; Tue, 10 Nov 2020 10:24:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQot-00028Z-Vc; Tue, 10 Nov 2020 10:24:11 +0000
Received: by outflank-mailman (input) for mailman id 23205;
 Tue, 10 Nov 2020 10:24:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=psMK=EQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcQos-00028U-Vp
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:24:11 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3bac013f-0e47-4d2c-aa1c-d4f5f72fd4bc;
 Tue, 10 Nov 2020 10:24:08 +0000 (UTC)
X-Inumbo-ID: 3bac013f-0e47-4d2c-aa1c-d4f5f72fd4bc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605003848;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Z6N2GG1EhgHfaBdcjuBZsBm/E6lBZ5//MluDHr3a23o=;
  b=QJgxgFJ+EHHKiWtigytTume6m1e6ZqPTz96KfYwq3SElMQKcG9HRHMUF
   NKsDho4sLZ2m14QF2gzhtTCutfDaus/f3oSF7A1Flz4inTPpURRhWU2XK
   dz6D66NaWYatWFZARB6HpFGplGuoHr/S4+43hzBS3CtXnrvVC3nffsNLC
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: H6iMV0iPQI+EUuavz/ExCq0936GOK0IgnvXyHcEFZ+0wKH1RBLW0/V50gK0U1+RH3XdfHu+OA5
 IWetp/ardF97cJUeH3YcYqd22SzBmzqihl9OvnttyN9oUe6EQQf2/0TJSIvsQ0cNqLMID/+0Gw
 CekVzk9emSB2dL3xWhDPFj8H/ZUiguHya98qjLshqSKqW2ESPhm48IGBWfM/E2TBn3rfx6kSdZ
 gTKzs+9qutywRdBIFLtFufAsVKkC5eof2ZfkpQ7SNJrAYDe30Lfw7Jxxp5oi54xEAj+KJJv4m9
 HgQ=
X-SBRS: None
X-MesageID: 31069019
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31069019"
Subject: Re: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
References: <20201109173819.7817-1-andrew.cooper3@citrix.com>
 <20201109183102.mrqklmpqyka5u6bt@Air-de-Roger>
 <d76806b4-2302-bee9-d977-ecfe29089fd7@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b4171eb9-6f47-e401-4049-d8b2a67c6f97@citrix.com>
Date: Tue, 10 Nov 2020 10:24:02 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d76806b4-2302-bee9-d977-ecfe29089fd7@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 10/11/2020 08:04, Jan Beulich wrote:
> On 09.11.2020 19:31, Roger Pau Monné wrote:
>> On Mon, Nov 09, 2020 at 05:38:19PM +0000, Andrew Cooper wrote:
>>> From: Roger Pau Monné <roger.pau@citrix.com>
>>>
>>> Currently a PV hardware domain can also be given control over the CPU
>>> frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
>>> However, since commit 322ec7c89f6 the default behavior has been changed
>>> to reject accesses to MSRs that are not explicitly handled, preventing PV
>>> guests that manage CPU frequency from reading
>>> MSR_IA32_PERF_{STATUS/CTL}.
>>>
>>> Additionally some HVM guests (Windows at least) will attempt to read
>>> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
>>>
>>>   vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
>>>   d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
>>>
>>> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
>>> handling shared between HVM and PV guests, and add an explicit case
>>> for reads to MSR_IA32_PERF_{STATUS/CTL}.
>>>
>>> Restore previous behavior and allow PV guests with the required
>>> permissions to read the contents of the mentioned MSRs. Non-privileged
>>> guests will get 0 when trying to read those registers, as writes to
>>> MSR_IA32_PERF_CTL by such guests are already silently dropped.
>>>
>>> Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
>>> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>>
>>> v2:
>>>  * fix is_cpufreq_controller() to exclude PVH dom0, and collapse to nothing in
>>>    !CONFIG_PV builds
>>>  * Drop the cross-vendor checks.  It isn't possible to configure dom0 as
>>>    cross-vendor, and anyone using is_cpufreq_controller() without an exact
>>>    model match has far bigger problems.
> This already answers ...
>
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
>>>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>>>  } cpufreq_controller;
>>>  
>>> +static always_inline bool is_cpufreq_controller(const struct domain *d)
>>> +{
>>> +    /*
>>> +     * A PV dom0 can be nominated as the cpufreq controller, instead of using
>>> +     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
>>> +     * MSRs.
>>> +     *
>>> +     * This interface only works when dom0 is identity pinned and has the same
>>> +     * number of vCPUs as pCPUs on the system.
>>> +     *
>>> +     * It would be far better to paravirtualise the interface.
>>> +     */
>> Would it be useful to add an assert here that the domain cpuid vendor
>> and the BSP cpuid vendor match?
>>
>> Is it even possible to fake a different cpuid vendor for dom0?
> ... your question here.

Technically at the moment it is possible to configure a non-dom0
hardware domain as cross-vendor, but anyone doing so can keep all the
resulting pieces.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:27:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23211.49840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQsJ-0002IX-I6; Tue, 10 Nov 2020 10:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23211.49840; Tue, 10 Nov 2020 10:27:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQsJ-0002IQ-F9; Tue, 10 Nov 2020 10:27:43 +0000
Received: by outflank-mailman (input) for mailman id 23211;
 Tue, 10 Nov 2020 10:27:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcQsI-0002IK-86
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:27:42 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8d6a73b-187d-43be-b090-a40008456ef5;
 Tue, 10 Nov 2020 10:27:40 +0000 (UTC)
X-Inumbo-ID: f8d6a73b-187d-43be-b090-a40008456ef5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605004060;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=jbg2vOJ/SIP5Oqtija1BYZDEJ0QFHzzIQCi5QGEzKy4=;
  b=GZddmFikHMzedDBJTpmXPzTwtC+Nxsn7oczDbGjzfGwgLaa+aOQq0sdm
   KD+5Am5GoaoRyL3pDuJPeSvYg8xnDV1DHQwHZrplJnguTQKO1ZL/5U4kd
   iK/lvhRTQCGRecXO/XfK6WGucd1aQM60ogCcj4GWDGUwfDOE5elylQC64
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 9CMnBvkB3nS5SERjSiZiM/b1Q1wAAYQivzCwHPaD/ilZJ/TtFY29cufTfBN5xzDNNANptJzSNx
 VQ90qYdgQCa0IB/wtFjusewtM/857eeJyu+iIiqOUA4zbq7N5ps0V+TsZzNc9NgNamsHXkW5ao
 lyg0ngomqVwY4Lk7l6HLo06WbYUcHcITgfORCwRW7Ts6wkqGfceS48etpJ4ir/EJrCcFvIvAHt
 xn4FQsWuGnSgjA3gAT1TXcTdvrHGB0RwmYm+DXIJApAcpHBZeUVK2JVesalstygTHhVQr7/RtZ
 9S8=
X-SBRS: None
X-MesageID: 30807467
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="30807467"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QZe74/IWbP75wB4tRvOJfPn5kkRfNOVLiOmCz/KNHaD1Uim1+ApXXtt+k4pKQM2891GRbihqiqivO8cOt8xCmcDCiL/eWyPKx4t5xhenbXL7s286K2Bc1SEWDOPn2aZkwgmGwPbZthDAK4vuqQIhnrO67epRjKJW6qCbBa0TXgjyyG9D6w6V/6SryJJK5AXPGiObfqan7MWNUwTHIzH0fb6YL6FH2lShbTiEpW189s5ATZlrwJuC00m+DwKPVmZnhU8VXzKVzWemQ7o+9joTXFNmWKSBvT54oyRfvzClyODZ5NZQ4TeRel3dy7FS0CYZix1GmaGsDNkO8UZmjTIEnw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6aSv34bPjibLtGTL3U+lwCgf+nz8m/0NbKVn4O3bRa4=;
 b=h+z8EjG7ck1t0bg55XIFKFEO8zi4jiQJH78Z+l1QR+uDqj0n9KvWxIKorNu73VJvylCdVMwJlh+uJpzypG4U3e8n2lLZyhvuG+ecBg1ILX38AOYs72Hk1Di3QRrSkQzsymK2dox4oGjmL1FvhqtRkXro5TrvrUhGqKCBEdVzB5bXfYoQK4IvGVq8e8wlOyrTOJeIZ/rCu0aWyy/xnvhV3mDBFzx+31c/6AEMBv1OQIje3xFc2bgV+aRO2qhgwGsUQip7LBadN7RjPuIfSXwUMXOnwa/fpJzENA9Vr4Duiq2lE2ofj3/uJFWDBBJVLtgB2l8K/0GqGnw1IcOKCmSfsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6aSv34bPjibLtGTL3U+lwCgf+nz8m/0NbKVn4O3bRa4=;
 b=rbEzGOK+W2+F3K8Roi6rah+BOevJjsNQddTJqEGFQ6ToEng6n+JOb6ktS01riJmQQZYghQP/p0cwg8NRCTLCgmkGGoOsmfSFJtyC4E+13E6wI+EUQbItRIK4aw1LxdUjTxYGm0T8pxnJC0/s3i2KqYIktI3gvSrneM7xO/hDl1o=
Date: Tue, 10 Nov 2020 11:27:31 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 1/5] x86/p2m: paging_write_p2m_entry() is a private
 function
Message-ID: <20201110102731.6eg6u7mxefoihmfq@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <1fab241b-3969-9ce5-2388-bcdbe3be6079@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1fab241b-3969-9ce5-2388-bcdbe3be6079@suse.com>
X-ClientProxiedBy: LNXP265CA0049.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 49da6026-4762-4514-be3d-08d885633dc0
X-MS-TrafficTypeDiagnostic: DS7PR03MB5590:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB5590673576FC1B75064392B98FE90@DS7PR03MB5590.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 69lXt6c2ts/3p7XT0RxzRrOOKO9ClqUOc9//ozWiPXEyDIR/TrB1as6gLbAGaOOmkEo2z06lfOaDDchvo0onz/H4tou+DTOybfedr7FkthVc3/wzT70iiaXbnsw5hcyhzIL07MbObenOWRPJAXk3OYv8hwQaCfkKhMdYNS5iimrqZa1+m5ax6NZYa12uUmD+b+VqvNPJVsFFLMJm5RgQyXMwRrgJgkBM/jo40jLpvjcJ71dNkHiho3IOv4NTZf8t5C0kI/JbX9jr1Vk5fUo6ivYS1tzLgM7Pr927AykjEZwsQEdsHbcD+3aNc8Sou9qXAf9mm0hueu57fvSGVaF9qx95TNJi6wM0Po5QheISDgCUEqAqW6Dt53FZzffpCW4W
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(136003)(366004)(396003)(346002)(376002)(26005)(6916009)(85182001)(316002)(9686003)(6496006)(86362001)(186003)(83380400001)(2906002)(6666004)(1076003)(16526019)(8676002)(4326008)(5660300002)(956004)(6486002)(33716001)(478600001)(54906003)(66946007)(66476007)(66556008)(8936002)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 3f01DDJoWYToZnaokPgrws13+6GxDLzpA3FOHnY2+7teYX8iD+gEAcFYDTakPkY5qIWrlw5r6JuTPw9JdFjcUISPoJytR6Y8NPDa5wFmsHgeLnz0Kt4B8GdVDP+j9gg1nK7Fa2XFg0qn82xYklnEwRE80sOLpA4w9ORWfuJvgD92Yhq/ogHvPpy3A9DFvvgvc4MPeFD2VdlxpKAhL+CxQqKSr+5OwkEffHuVpRub98m01EyX5+hmX/9nW7ZVHdXc3/b4/3axDK2lWGHk8K1svtB/ugBHbLXYrC0hXQP4emWRcRNxJ6Ei8L7qSrchswrPgg3pvRGdhwa+B+1jVAbgKsC1fdeWpveW3/QdbI+P4uOZ5en3fNMLunj6jkHC+mmBHeLhJRYg+OU4S95pfjsVYNDk3OMWZ3RSM11I/k98VMG61TWVDf4mw/uKykqhCtR2vDKWWry2H1sc8dvIsyLsA9OT5mtpQx5iUxwWQgq/cVefqUOgXea1XWcfI511T+/diLfx0Lk7rkXqDJu7kTIcaEMsggLYXg9C3dtyJFXWWyExVgbw+DCMspQzn4rPkcjf+zeTaZc7U2M5DxphuqLSuQDiu49c3hVPebbdeJXxA6Lyy+B6gVg2ivFA1JrLzJ7EULZfjAPZFWItAvs9M93/B5SBSIVlBP0GGdIRHgQr57ZaVIMqsPzdsBiZo1uxlydUL9RZ7mduvLXurGOi6Upb30/UDPaWwQNHnese1KogdzLA0vYfCxZroby2yGhNxGBHwJI7PD++MZfiEJlxtfEN9uRhORPnGZ9j9UU0pzFKElThx3tFRHToHKet3G867wgJpt5xFVAvBpAQoxJ33O/YVkkrzaS2yxfQlHq0bjqo6KtOb4V6bPY2GK+PuOKV8apasQY0JvdQNRw1BKGPRNVW7A==
X-MS-Exchange-CrossTenant-Network-Message-Id: 49da6026-4762-4514-be3d-08d885633dc0
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 10:27:35.9264
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9O7szrjOYCDL+e+YLX02cxGKn/Ai8oEq4GoW+6+PURVmk+1xO648ihfACtjO4sL2HnK7rPHwc4cJ862hX7ZDPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5590
X-OriginatorOrg: citrix.com

On Wed, Oct 28, 2020 at 10:22:04AM +0100, Jan Beulich wrote:
> As it gets installed by p2m_pt_init(), it doesn't need to live in
> paging.c. That the function works in terms of l1_pgentry_t further
> indicates its non-paging-generic nature. Move it and drop its
> paging_ prefix, not adding any new one now that it's static.
> 
> This also makes it more obvious that in the EPT case we wouldn't
> risk mistakenly calling through the NULL hook pointer.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -108,6 +108,31 @@ static unsigned long p2m_type_to_flags(c
>      }
>  }
>  
> +/*
> + * Atomically write a P2M entry and update the paging-assistance state
> + * appropriately.
> + * Arguments: the domain in question, the GFN whose mapping is being updated,
> + * a pointer to the entry to be written, the MFN in which the entry resides,
> + * the new contents of the entry, and the level in the p2m tree at which
> + * we are writing.
> + */
> +static int write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> +                           l1_pgentry_t *p, l1_pgentry_t new,
> +                           unsigned int level)
> +{
> +    struct domain *d = p2m->domain;
> +    struct vcpu *v = current;

I think you could constify both?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:32:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23220.49852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQwX-00039e-4J; Tue, 10 Nov 2020 10:32:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23220.49852; Tue, 10 Nov 2020 10:32:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQwX-00039X-1E; Tue, 10 Nov 2020 10:32:05 +0000
Received: by outflank-mailman (input) for mailman id 23220;
 Tue, 10 Nov 2020 10:32:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcQwW-00039S-4R
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:32:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a9835cf-2743-4bbe-b8cb-932f69ab58f1;
 Tue, 10 Nov 2020 10:32:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 90F2DABCC;
 Tue, 10 Nov 2020 10:32:02 +0000 (UTC)
X-Inumbo-ID: 5a9835cf-2743-4bbe-b8cb-932f69ab58f1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605004322;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=suPULunDBoceBK+KSIm9ml4LeFOfKA+UxEUDxa+REu8=;
	b=MMxGgVuvFeE5CNBX/QVvn85eBCqGYKx3okmH/ZxmEYahFuYH87MFlEMUFLZJDVr50BWdoX
	ElH7nGNrdu0EaACS4Bsqm1rzcOlEq9pcvRtF+LkEto9qEWng2C0P7L/Ru2v6bXTcB8LXML
	cdopS+sPSbERN9NHvFtA3cHJmoYZAbI=
Subject: Re: [PATCH 1/5] x86/p2m: paging_write_p2m_entry() is a private
 function
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <1fab241b-3969-9ce5-2388-bcdbe3be6079@suse.com>
 <20201110102731.6eg6u7mxefoihmfq@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1cc350f6-4b91-b56a-1891-cb60a5275af8@suse.com>
Date: Tue, 10 Nov 2020 11:32:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110102731.6eg6u7mxefoihmfq@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 11:27, Roger Pau Monné wrote:
> On Wed, Oct 28, 2020 at 10:22:04AM +0100, Jan Beulich wrote:
>> As it gets installed by p2m_pt_init(), it doesn't need to live in
>> paging.c. That the function works in terms of l1_pgentry_t further
>> indicates its non-paging-generic nature. Move it and drop its
>> paging_ prefix, not adding any new one now that it's static.
>>
>> This also makes it more obvious that in the EPT case we wouldn't
>> risk mistakenly calling through the NULL hook pointer.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> --- a/xen/arch/x86/mm/p2m-pt.c
>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> @@ -108,6 +108,31 @@ static unsigned long p2m_type_to_flags(c
>>      }
>>  }
>>  
>> +/*
>> + * Atomically write a P2M entry and update the paging-assistance state
>> + * appropriately.
>> + * Arguments: the domain in question, the GFN whose mapping is being updated,
>> + * a pointer to the entry to be written, the MFN in which the entry resides,
>> + * the new contents of the entry, and the level in the p2m tree at which
>> + * we are writing.
>> + */
>> +static int write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
>> +                           l1_pgentry_t *p, l1_pgentry_t new,
>> +                           unsigned int level)
>> +{
>> +    struct domain *d = p2m->domain;
>> +    struct vcpu *v = current;
> 
> I think you could constify both?

For v it looks like I could. For d a subsequent patch would then
need to undo it, so I'd prefer to keep it this way here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 10:32:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 10:32:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23226.49865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQxL-0003GN-E0; Tue, 10 Nov 2020 10:32:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23226.49865; Tue, 10 Nov 2020 10:32:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcQxL-0003GG-Ah; Tue, 10 Nov 2020 10:32:55 +0000
Received: by outflank-mailman (input) for mailman id 23226;
 Tue, 10 Nov 2020 10:32:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=psMK=EQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcQxJ-0003GA-IQ
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 10:32:53 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5b84ab0-4272-4ae2-ac2e-18ac3ac5404a;
 Tue, 10 Nov 2020 10:32:52 +0000 (UTC)
X-Inumbo-ID: e5b84ab0-4272-4ae2-ac2e-18ac3ac5404a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605004372;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=uhQye2TkyUFDzIYdrR0IO5dPfzS+NBxOun3In7QlsxU=;
  b=U3ABf+9ognqk7hVistdX76H016abFt0btpIOSjn9fHZB3xRLDo0sby06
   x7ZKNAeN4YCWDBg4D0Mf+XLkaCZKHSy+vq2GrQVVy9sRYbV0bcIkgRa2y
   PUZVjFaadedshxpnbGbViyLlFr+lltKWqBXd2gbwWHkblB4qT5KqgDhTp
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Z002tLpBFVgr1ecjQPdNc6X2+/6drGo5e2c7bBMwRImT5DA2vPkn4y3/1U5/zWutdXJlAafe+g
 UIuG6b9nuxC4zY768T+Ft3g3DPeSW1sbgV66rLZOkAqUkq5lmXWwtne+xqUtAPVSuLFd3fRz4r
 XgQWgaxcq5SKnKRo+g1EdpUYUH8Osiw2MahvNFIbhD7qfcibeHYLZI6hu26Yo2wvImJz+ODang
 UZKpDuOgeT1uJW/3IeLDM8N5+w5Umz9frewLD0agA2OyEI6R7EPKaQOHKvGFJ7fd+HzeV5ly6w
 o38=
X-SBRS: None
X-MesageID: 31174526
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31174526"
Subject: Re: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201109173819.7817-1-andrew.cooper3@citrix.com>
 <681e03f5-86fd-43bb-5347-c526def9ffcb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <083f46c0-07fb-7b22-4e49-d2a0df87164c@citrix.com>
Date: Tue, 10 Nov 2020 10:32:46 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <681e03f5-86fd-43bb-5347-c526def9ffcb@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 10/11/2020 08:03, Jan Beulich wrote:
> On 09.11.2020 18:38, Andrew Cooper wrote:
>> From: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Currently a PV hardware domain can also be given control over the CPU
>> frequency, and such guest is allowed to write to MSR_IA32_PERF_CTL.
>> However since commit 322ec7c89f6 the default behavior has been changed
>> to reject accesses to not explicitly handled MSRs, preventing PV
>> guests that manage CPU frequency from reading
>> MSR_IA32_PERF_{STATUS/CTL}.
>>
>> Additionally some HVM guests (Windows at least) will attempt to read
>> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
>>
>>   vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
>>   d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
>>
>> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
>> handling shared between HVM and PV guests, and add an explicit case
>> for reads to MSR_IA32_PERF_{STATUS/CTL}.
>>
>> Restore previous behavior and allow PV guests with the required
>> permissions to read the contents of the mentioned MSRs. Non privileged
>> guests will get 0 when trying to read those registers, as writes to
>> MSR_IA32_PERF_CTL by such guest will already be silently dropped.
>>
>> Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
>> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

> with a nit, a minor adjustment request, and a question:
>
>> @@ -448,6 +467,21 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>              goto gp_fault;
>>          break;
>>  
>> +        /*
>> +         * This MSR are not enumerated in CPUID.  It has been around since the
> s/are/is/

Oops, yes.

>
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
>>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>>  } cpufreq_controller;
>>  
>> +static always_inline bool is_cpufreq_controller(const struct domain *d)
>> +{
>> +    /*
>> +     * A PV dom0 can be nominated as the cpufreq controller, instead of using
>> +     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
>> +     * MSRs.
>> +     *
>> +     * This interface only works when dom0 is identity pinned and has the same
>> +     * number of vCPUs as pCPUs on the system.
>> +     *
>> +     * It would be far better to paravirtualise the interface.
>> +     */
>> +    return (IS_ENABLED(CONFIG_PV) &&
>> +            (cpufreq_controller == FREQCTL_dom0_kernel) &&
>> +            is_pv_domain(d) && is_hardware_domain(d));
>> +}
> IS_ENABLED(CONFIG_PV) is already part of is_pv_domain() and hence
> imo shouldn't be used explicitly here.

Ah yes.  Will drop.
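For reference, a minimal self-contained sketch of the helper with that change applied. The struct and predicate definitions below are stand-ins so the snippet compiles on its own; the real ones live in xen/sched.h and the domain handling code:

```c
#include <stdbool.h>

/* Stand-in definitions, purely so this sketch is self-contained. */
struct domain { bool is_pv; bool is_hardware; };

enum cpufreq_controller {
    FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
} cpufreq_controller = FREQCTL_dom0_kernel;

static bool is_pv_domain(const struct domain *d) { return d->is_pv; }
static bool is_hardware_domain(const struct domain *d) { return d->is_hardware; }

/*
 * A PV dom0 nominated as the cpufreq controller gets direct access to
 * certain MSRs.  Note the explicit IS_ENABLED(CONFIG_PV) is gone: in the
 * real code is_pv_domain() already evaluates to false in !CONFIG_PV
 * builds, so the same dead-code elimination still happens.
 */
static bool is_cpufreq_controller(const struct domain *d)
{
    return (cpufreq_controller == FREQCTL_dom0_kernel) &&
           is_pv_domain(d) && is_hardware_domain(d);
}
```

Only the hardware domain, and only when it is PV and nominated via FREQCTL_dom0_kernel, passes the check.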

> Also, this being an x86 concept, is it really a good idea to put
> in xen/sched.h? I guess this builds on Arm just because of DCE
> from the IS_ENABLED(CONFIG_PV) (where afaict the one in
> is_pv_domain() will still do). (But yes, I do realize that
> cpufreq_controller itself gets declared in this file, so maybe
> better to do some subsequent cleanup.)

I can't spot anywhere obviously better for it to live.  We don't have a
common cpufreq.h, and I'm not sure cpuidle.h would be an appropriate home
either.

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 11:06:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 11:06:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23244.49885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRTp-00061n-50; Tue, 10 Nov 2020 11:06:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23244.49885; Tue, 10 Nov 2020 11:06:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRTp-00061g-1G; Tue, 10 Nov 2020 11:06:29 +0000
Received: by outflank-mailman (input) for mailman id 23244;
 Tue, 10 Nov 2020 11:06:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcRTo-00061b-6A
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 11:06:28 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7b9a9a1-76e4-4aec-b330-5cf24c955b51;
 Tue, 10 Nov 2020 11:06:26 +0000 (UTC)
X-Inumbo-ID: d7b9a9a1-76e4-4aec-b330-5cf24c955b51
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605006386;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=IoeKuqYSK5jA+7hLi8BBP01Ooa4QsWCp7IH9OBg61IU=;
  b=N+YbqqVrVyKvEjW2F7D7h1XTalRs6n3u+NdxK34V2ULHOUQFM/VLMJn7
   ArJfE4ukLBv+VKNhn/40sxiRlNxoCVsE8KQ/vq2692J4kVQCquiaRgwQ1
   7Vo3NLlhbCkSEQ78kvoCou70u696Y0uENNE2e0+aC7IEO6pqu8eNT3YDN
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 4tyJ30fWxdKDPOogYFRirC82FqLHUxjtxFwwHK36xTsQt5NlgEdCnN2+0p9kcoQxTDLA971PFb
 9L6+QCo4Kqg9lYQVNGetrNrN2hK+EusKvUFiSWJbtZtRkdQtsWSQMoB+ro3tFRa7Qt8ax59pRG
 heTsdQf8ZQV1buBDQoQHwkFhDXdjWYAvRy21OQ1mRw19EreBa+ODxwnkGbj8VGKaSwuxmdLaqD
 //hQeXNLmt7Lcqe0ELQn6aa2NiDptht1blB+JjzEm1ag/YPZmoWvYCZr5OzHrc1/eGQHkCgjgS
 8Dc=
X-SBRS: None
X-MesageID: 31071122
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31071122"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FHiG0TsDCKl3rnbO6tjBVUm0RcgdQSKoYgWxWw77NeutuMpbASTm9yagXlH8AdMo2GY6pIcltuYJ1nHGXX9n60mbWOiOogeMI5VLZO0O+1i0YKocgVjso14Sfxk9SjLtzyw4bRlVExURItOLpFuNGYECe8qdGKSseHUybTCAOTG6EvwB328nk+ZB6koKgCNKNcDO/Or9cZ6iEq7BmbnMgunggpmugmV41Re9EoALy9nyrjgPJQSLFKf5UF5RJ7vw/oYadQx4/tY/tCtS58Rx2ZsiyWCaaAa7lVWlzT8EgyTC4cuFBxqgOYp9k3pQq1f5ucrNQ0KD0YPIU1HIqVCz5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=e0jYr58tjfamTxcEsgCw9lsAJ5QTh9ksPUArderUhmM=;
 b=hx8HaokcvfZxirMLaLg0a5Usv9f2BQBBBU6Uweuq/INXvz1tsZqH2JRYxdivM2vZ4WlPCVopfSMizGr+14rSMQl9LQ25uWkWW+07/Ox8kR7LJxzg+AcesStWLjq3n+oYiKQ0WzsYbu27d028+2AkLuZId/U8qivoHatxvfMOBi90ThsjK8QcV3pU09lOwiuXHAbzoGdVGWgmUtJCoHBgsccmOn2BDMtEWDb+lWX5SQg9KWMQ56FFYGoAEVEgLimDVnQGT4KDxmBYe4b2w3GEUTtUIx7ThPUjEYpmwf6NntlEwZWU+QJ8qMVLK9FKGxW3zOj7Gjl9O0gWBCC9Pt0NAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=e0jYr58tjfamTxcEsgCw9lsAJ5QTh9ksPUArderUhmM=;
 b=KGdJxfVArkRIZv8/IgYrNwQxJwUElgvwiuKTh1tF6ssqN5PxVkPlO+y2aComTXAD95CRFSVELovD1zBs5yk/ZyX2dkwy7AID3Ovya40Xnlp9cNpvAfkC4nlH29nRyYxmXM8t8sFnl6xVI0y3ywlon9WrRPzMMl8uNOCp0Ywqa2M=
Date: Tue, 10 Nov 2020 12:06:11 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 2/5] x86/p2m: collapse the two ->write_p2m_entry() hooks
Message-ID: <20201110110611.p3twf6rmy7qdlxa7@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
X-ClientProxiedBy: LO3P123CA0006.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4e3d672d-7e21-4200-0cc8-08d88568a890
X-MS-TrafficTypeDiagnostic: DS7PR03MB5463:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB5463C7C2BD0ACA18BE2E24778FE90@DS7PR03MB5463.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Z5yWdSSgUgIrKQSIFe+YLVkZ6+OGTk7te1EbRNZPvuKLHX+1BE63E6fglUNJwiIR8psAMT3yY58m9bIuXu9oOSo55Q1fpCU8Rxhsm5y2bqv0UthEGx6p5bnLwBzKeh6RGfp2fmdWtDNLDY1szXw9t0AAK7GPXZHM+DPTRlPppJ0bXypNa0yJKKpjA6+z7rpOXWx6/v/sx7/WSPvYPx5zxug/UnhauaH7k5kiNxhIapHVzKk4NCg4jjMrFvMcH5mYg2CpHcarVaZFKWzmfsrlhBQAK7O1/mw+M+r3I3Cfay6691hJUvmsWiPSXtCe89rPxRe+3RSQ/UrfNj28fIag9ar6sBtFg5EdNQHs1xQkXX4j1AXu/iXvlJ5MKNO4uTHS
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(39860400002)(366004)(346002)(136003)(396003)(2906002)(4326008)(6496006)(1076003)(5660300002)(8676002)(66946007)(9686003)(66476007)(8936002)(83380400001)(86362001)(85182001)(186003)(956004)(6916009)(16526019)(478600001)(26005)(6666004)(54906003)(316002)(33716001)(6486002)(66556008)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: jhvOG2WAcSB//Ok/SdhjC4sj9ogQUOa35lozhyGuHT5gaLEPiUTt8q3RAtzU3H2SBlwMY3RU5NqU9SnMF7efGQOJbY81UHHaISIx5GCYBZDOy5LVo06ot6cuUyXg+pbxJMT1zyX/IyPbT+Hh6Hs8vGZYkxKz+ilZssvnxn9+4/d6ArNMsapiOmCd/+BhGcV+s2Nnayjmd9w8HH4odM8xXL3bHCmtz3pHOzYp7/WhY23LAyfEJceI5OGa+LodbeukVkF+uHuUvRBoKlUFcof4jGcr+cyeKveraBbAmwoqbfe/dkWW035ICeqp0GLQ/bZI0/2XeLoxONSbqb2ah2HTKn52knoBu1+cc0ejI151uxwXhVZz1xPPirQF4OORCpqVRm6+5D6hfMhps9lB7kNAKqGigj61rcECGbKeAmg992lbqKZ55oPttNYUY35q0b/zXdFY0E05/1qYPeRt7dmeFj1I5q/VWef6kv4pnAkfj32Xq8A8T7Veqay6KVRJEPH71ogt30OJoYKOUKO69cBTQg5FEojtMut5H+3Y2iFAhkJSLA0p4NGnDmxPaX0BiRE06yNnhW7RNOaJZB2+S8U7LMcxeVHcUSz+9HfGPg0RCDmi0jV2+BTmLOErSVFTa2/sS1/XL8l/dfjO+swDgvW5HcoUuDAtbNsW+4x7UuHDXrqpFeFj6FQeQPNqkSL1Z/J1SKHtZAZTVr5UcQsW1t3YgPGcv35CNxaIriaXbl+TCrtYYu+xMD9xzP29iuVBKrARUqnvFlQW0ylD+4fWM+GAdRs4GWr0pH43Ty3shhyaJv6+WbCE0rpHRRUKEsk1u5woiR/BQE0SgWyUDXBHkYwrZy/Icdu424b+eYbAbMq1VO/mBsUAzM147A5Qv/gHIH/YO6m3QKNf5H65H95GIL3jng==
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e3d672d-7e21-4200-0cc8-08d88568a890
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 11:06:22.6081
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yKxOld4fbT8KRQknB+nXQ04QDxH+MJYPfRpC9mLsax0km9NKyDFjIDvCwQU29ng2fSL0GWhQsl9pmPJRUmAjOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5463
X-OriginatorOrg: citrix.com

On Wed, Oct 28, 2020 at 10:22:58AM +0100, Jan Beulich wrote:
> The struct paging_mode instances get set to the same functions
> regardless of mode by both HAP and shadow code, hence there's no point
> having this hook there. The hook also doesn't need moving elsewhere - we
> can directly use struct p2m_domain's. This merely requires (from a
> strictly formal pov; in practice this may not even be needed) making
> sure we don't end up using safe_write_pte() for nested P2Ms.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Like for the possibly unnecessary p2m_is_nestedp2m() I'm not really sure
> the paging_get_hostmode() check there is still needed either. But I
> didn't want to alter more aspects than necessary here.
> 
> Of course with the p2m_is_nestedp2m() check there and with all three of
> {hap,nestedp2m,shadow}_write_p2m_entry() now globally accessible, it's
> certainly an option to do away with the indirect call there altogether.
> In fact we may even be able to go further and fold the three functions:
> They're relatively similar, and this would "seamlessly" address the
> apparent bug of nestedp2m_write_p2m_entry() not making use of
> p2m_entry_modify().
> 
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -823,6 +823,11 @@ hap_write_p2m_entry(struct p2m_domain *p
>      return 0;
>  }
>  
> +void hap_p2m_init(struct p2m_domain *p2m)
> +{
> +    p2m->write_p2m_entry = hap_write_p2m_entry;
> +}
> +
>  static unsigned long hap_gva_to_gfn_real_mode(
>      struct vcpu *v, struct p2m_domain *p2m, unsigned long gva, uint32_t *pfec)
>  {
> @@ -846,7 +851,6 @@ static const struct paging_mode hap_pagi
>      .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_real_mode,
>      .update_cr3             = hap_update_cr3,
>      .update_paging_modes    = hap_update_paging_modes,
> -    .write_p2m_entry        = hap_write_p2m_entry,
>      .flush_tlb              = flush_tlb,
>      .guest_levels           = 1
>  };
> @@ -858,7 +862,6 @@ static const struct paging_mode hap_pagi
>      .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_2_levels,
>      .update_cr3             = hap_update_cr3,
>      .update_paging_modes    = hap_update_paging_modes,
> -    .write_p2m_entry        = hap_write_p2m_entry,
>      .flush_tlb              = flush_tlb,
>      .guest_levels           = 2
>  };
> @@ -870,7 +873,6 @@ static const struct paging_mode hap_pagi
>      .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_3_levels,
>      .update_cr3             = hap_update_cr3,
>      .update_paging_modes    = hap_update_paging_modes,
> -    .write_p2m_entry        = hap_write_p2m_entry,
>      .flush_tlb              = flush_tlb,
>      .guest_levels           = 3
>  };
> @@ -882,7 +884,6 @@ static const struct paging_mode hap_pagi
>      .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_4_levels,
>      .update_cr3             = hap_update_cr3,
>      .update_paging_modes    = hap_update_paging_modes,
> -    .write_p2m_entry        = hap_write_p2m_entry,
>      .flush_tlb              = flush_tlb,
>      .guest_levels           = 4
>  };
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -126,8 +126,9 @@ static int write_p2m_entry(struct p2m_do
>  
>      if ( v->domain != d )
>          v = d->vcpu ? d->vcpu[0] : NULL;
> -    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) )
> -        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
> +    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
> +         p2m_is_nestedp2m(p2m) )
> +        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
>      else
>          safe_write_pte(p, new);
>  
> @@ -209,7 +210,7 @@ p2m_next_level(struct p2m_domain *p2m, v
>  
>          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>  
> -        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        rc = write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>          if ( rc )
>              goto error;
>      }
> @@ -251,7 +252,7 @@ p2m_next_level(struct p2m_domain *p2m, v
>          {
>              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>                                       flags);
> -            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +            rc = write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
>              if ( rc )
>              {
>                  unmap_domain_page(l1_entry);
> @@ -262,8 +263,7 @@ p2m_next_level(struct p2m_domain *p2m, v
>          unmap_domain_page(l1_entry);
>  
>          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> -        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
> -                                  level + 1);
> +        rc = write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>          if ( rc )
>              goto error;
>      }
> @@ -335,7 +335,7 @@ static int p2m_pt_set_recalc_range(struc
>              if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
>              {
>                  set_recalc(l1, e);
> -                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
> +                err = write_p2m_entry(p2m, first_gfn, pent, e, level);
>                  if ( err )
>                  {
>                      ASSERT_UNREACHABLE();
> @@ -412,8 +412,8 @@ static int do_recalc(struct p2m_domain *
>                       !needs_recalc(l1, ent) )
>                  {
>                      set_recalc(l1, ent);
> -                    err = p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
> -                                               ent, level);
> +                    err = write_p2m_entry(p2m, gfn - remainder, &ptab[i], ent,
> +                                          level);
>                      if ( err )
>                      {
>                          ASSERT_UNREACHABLE();
> @@ -426,7 +426,7 @@ static int do_recalc(struct p2m_domain *
>              if ( !err )
>              {
>                  clear_recalc(l1, e);
> -                err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
> +                err = write_p2m_entry(p2m, gfn, pent, e, level + 1);
>                  ASSERT(!err);
>  
>                  recalc_done = true;
> @@ -474,7 +474,7 @@ static int do_recalc(struct p2m_domain *
>          }
>          else
>              clear_recalc(l1, e);
> -        err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
> +        err = write_p2m_entry(p2m, gfn, pent, e, level + 1);
>          ASSERT(!err);
>  
>          recalc_done = true;
> @@ -618,7 +618,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
>              : l3e_empty();
>          entry_content.l1 = l3e_content.l3;
>  
> -        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
> +        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
>          /* NB: write_p2m_entry() handles tlb flushes properly */
>          if ( rc )
>              goto out;
> @@ -655,7 +655,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
>              entry_content = l1e_empty();
>  
>          /* level 1 entry */
> -        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
> +        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
>          /* NB: write_p2m_entry() handles tlb flushes properly */
>          if ( rc )
>              goto out;
> @@ -690,7 +690,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
>              : l2e_empty();
>          entry_content.l1 = l2e_content.l2;
>  
> -        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
> +        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
>          /* NB: write_p2m_entry() handles tlb flushes properly */
>          if ( rc )
>              goto out;
> @@ -914,7 +914,7 @@ static void p2m_pt_change_entry_type_glo
>              int rc;
>  
>              set_recalc(l1, e);
> -            rc = p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
> +            rc = write_p2m_entry(p2m, gfn, &tab[i], e, 4);
>              if ( rc )
>              {
>                  ASSERT_UNREACHABLE();
> @@ -1132,7 +1132,13 @@ void p2m_pt_init(struct p2m_domain *p2m)
>      p2m->recalc = do_recalc;
>      p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
>      p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
> -    p2m->write_p2m_entry = write_p2m_entry;
> +
> +    /* Still too early to use paging_mode_hap(). */
> +    if ( hap_enabled(p2m->domain) )
> +        hap_p2m_init(p2m);
> +    else if ( IS_ENABLED(CONFIG_SHADOW_PAGING) )
> +        shadow_p2m_init(p2m);

There's already some logic in p2m_initialise that checks for
hap_enabled for EPT specific initialization. Do you think you could
move this there so that it's more contained?

I think having the initialization condition sprinkled all over the
different functions makes the logic more complicated to follow.

Also, should hap_p2m_init be limited to HAP with PT, as opposed to HAP
with EPT, which doesn't use the helper AFAICT?

Maybe it would be clearer to unify shadow_write_p2m_entry with
hap_write_p2m_entry and call it p2m_pt_write_p2m_entry to match the
rest of the p2m PT helpers?
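As a rough illustration only (every name and type below is a stand-in, not the real Xen definition), the shape of that suggestion would be a single unified hook, assigned in one place alongside the other PT hooks rather than in separate hap/shadow init paths:

```c
/* Self-contained model of the suggested consolidation; the real
 * p2m_domain, hooks and their signatures differ. */
struct p2m_domain;
typedef int (*write_p2m_entry_t)(struct p2m_domain *p2m, unsigned long gfn);

struct p2m_domain {
    write_p2m_entry_t write_p2m_entry;
};

/* One hook in place of hap_write_p2m_entry / shadow_write_p2m_entry;
 * any genuinely mode-specific work would branch inside it. */
static int p2m_pt_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn)
{
    (void)p2m;
    (void)gfn;
    return 0;
}

/* Hook selection done once during PT init, so the initialization
 * condition isn't sprinkled across several functions. */
static void p2m_pt_init(struct p2m_domain *p2m)
{
    p2m->write_p2m_entry = p2m_pt_write_p2m_entry;
}
```

The point of the model is the single assignment site: callers always go through p2m->write_p2m_entry, and the choice of implementation is visible in one place.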

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 11:16:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 11:16:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23264.49915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRdH-00073q-FE; Tue, 10 Nov 2020 11:16:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23264.49915; Tue, 10 Nov 2020 11:16:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRdH-00073j-CB; Tue, 10 Nov 2020 11:16:15 +0000
Received: by outflank-mailman (input) for mailman id 23264;
 Tue, 10 Nov 2020 11:16:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcRdG-00073e-3s
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 11:16:14 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2d39282-37fa-4aee-9921-54f575537241;
 Tue, 10 Nov 2020 11:16:12 +0000 (UTC)
X-Inumbo-ID: a2d39282-37fa-4aee-9921-54f575537241
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605006972;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=cqQem3HhAF2O33HayaRBvE+pd3V5gZFLS317N09pocw=;
  b=dmKRlGsWX0/wDkGLoaDzeyEYAyV/VbE9jXOGAmVf64FnMgs0Ike9P5JG
   lsUpTIwPSWhclS3q2JC4neEtnibzfT8GXliamCQ7CLx1lpo4+iCQAIih8
   hlwCTl1hxydvGLy7wRM0Rd8fZB777fhFR8gVItFk+vGGYbG039/xOy4EC
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: FzvriLHXav0THI47ZVQwxKykduWLJfdUIW95aSXpBTXPW7nPDFZgCevwYkq8/o1yPnUP/aFnJa
 RJ+iDMAYiSdOaO16E7ERwG19kLvndhmNm3oJwOh1HTue515d1D7qwcMMVpq6EauTyowtDnBhjt
 aw/aXoLcJX2mmzJ4DFOifKZP9CMYMOP6snSgnmvrjDWZVzSK8JCLYZNUzBjXtJT8tHMVPjqFO1
 pBe/8z35yGVVpFP3cm+Q5jGu0rkFKpoMGEAh+zvyIE8lrTGI+Axw645XIjrbtDjPDO1DG/9QEs
 vhc=
X-SBRS: None
X-MesageID: 30809968
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="30809968"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J6fya2vX62CndgJoDBnstFv8r65WYuxWEEwYE/CR/B/cfBmhMaI+rvwJKIkM4Zp5kNl0w7aMRDVAVwF01VDCCfo48e9PIJxjaGJGaOwC4NT4r5Al0nCAjksb5IkuACYLmZSgWbLLDf9z5fHXZMqs8xPfnxpJdjXegjiTQqmS3uNt3aJCYTs0zlb0AXVrrjxcOVPbRn8+0xF+zq4VY3cxhAfT0sRMjf5zoubmq1M7WtWEGkTLtlCgf++VzAnliiUKqjzRtvjWCqqKNVs5FcQWtEuMgVz4IcXeJwcPMr3R1MGncnENxAs3XHyOUfaUm4JGMvI9boRssto7/L1f4kGTwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XnTB1J1VK+kzRdZpv40WckXOrhwiJ028YdQrgs5DGVk=;
 b=UyMDJyQo4uGY8JZkziAXZAaTvtkya9qws3x/VqySFR83uiK32H0ml+shcW6b64jGHpkgwNySRI1MiQB+bB8F29tdTmX7kD6nRdgt1cfb5RIrMbcLefQKuPdHJ9WKLHbHh368gkTUs19sqMhGz/my0cFfYn1PBCXy6rw7ZBvzpG1uiiQdT0WC9djjo23QbMzKdcp5e21/B93FlpBfSey5zKMBs3jODhSZYGEzZNWMObSEmbAcqLjEBTWcYaJtdd6QOiiKiWUyP8//W72BOIYRxOHqGhrfqOl91QasCFGrO2Dlm+mPVR8EyOkhGgN3d5Y5AUKXnZ9PQitCOqesltufZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XnTB1J1VK+kzRdZpv40WckXOrhwiJ028YdQrgs5DGVk=;
 b=hEPlbda1CiyCh/r90KGrVLojCorpQIDZ7aILgMlOPSPtiEimOpav99JRbHSjElJA6O5yXYc3j7ABwj2CZwhxpIgfUwZHjZfByRV9W0ue2vqwcBUqGIarML59+5yFdH2+psOrgO/jec1tAWU6nmQq8mLD3buTzqE1TOSCWkS0luQ=
Date: Tue, 10 Nov 2020 12:16:03 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201110111603.rarf7ncddrkswlxs@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201110093142.hkufamaepn67gv43@Air-de-Roger>
 <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
X-ClientProxiedBy: LO2P265CA0259.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 972414f6-9620-4f50-3890-08d8856a0606
X-MS-TrafficTypeDiagnostic: DM6PR03MB3483:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB3483C9B45219B40AAA411EB48FE90@DM6PR03MB3483.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: hZJZFys9KloDYAIKMARVm2tUH8FNDKavYu6ibvyOQ+TknzBb7QEsjmm+j3pND8b/NY9Z/OTIJs4a0qwyPS8tDX5cA2fplts2dD7GhxM6mftvgKaEuKwVcSZuy983L8TRKDlMT4Zu8mW2zn3KfuhxB0BxgAvyYHADLRB+ZX8boKwT1TFkY8N1nByZPU4A0DFo2DZxD7vJiX1fiWKHuop2r7vdrSZAQQrBWHQS54VOOxAhLnKCjrwlT/gJyQwXunILfh/De9CoO8a9q34+3c1P9DCg4a6oivYxZ5mC9W+xuRlTtyQBkAzoDR7DesZIavtO7tNxPqYfjlbtdnpMODqz/A==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(136003)(346002)(396003)(376002)(366004)(6486002)(8676002)(2906002)(478600001)(54906003)(53546011)(33716001)(316002)(26005)(66556008)(6496006)(83380400001)(956004)(66476007)(8936002)(6666004)(1076003)(86362001)(16526019)(4326008)(186003)(66946007)(9686003)(6916009)(5660300002)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: G/UIrY9Pnb3dxJ2AFfBZZFpgI3SuqRy3O3XQhE9r9V6YlKbZLddl7zvZPX6tjM56nZ4s8viGCzesR0PtWTogoWxkPGNwh0miIwes3XunC+eAv8i1VnNAzHLOw5v0oKEwLa0ea9ih7ec4M0KuMKExnzOGcKowLQ+fTThiPMEXVNJMUDksMMiALueSsVdjsKyMJ3Tv8liyx/i3AYjnVcGPjQ+jKmZ2bA7WyPFjqSnGt5UTUL2XE1DK4jOcTHGkbbeaT8iQJcpLxBR2MUVjWAWBb1BwDMv6HrJ5D0j9lTgbzmZe6ubl37YHwk8C7yWHtk2BaOrsy/WGB6FONQ1OW1llakGtyXTWeTCaDqFh1m9Wb6/q0z3DeAvw33sl/ydB9/J3YO1KRlmLRW0ooMyjdFN1akaALglWeR+UhnpsBUHHSZ+4MziT405NSsDTkzQMo5aRumHztO4tQeaY254E21IbuzDjXZbTXFph5fgSNi/aLGc8bURHzbUNs3FZlvJN9AIG8ztDqVtA8hWYXSRxBIu6/ysWooxADdNog80bJK0G9zMSeZVe8WhkoYaqAWH4bc0W0KWJWBmPN5lDfKlR61h2W6KLvMIOXN/oVko78QJKRpJwrXF+2sxMeWAqCi8/rwi2dGSDxP33E7HKzhcSr4lXNVSkWY1HMoeulM3Pe9AQwlLCI8TFwmWarRBSJIJxlcKyxUQqRgpZ+Gq8rnD7z+anqmzRRVnq+KQQAhmI6hR/Znpeu0Yc+0U0s2mrdlwAAKA7AwQhoBgH7+FOtYIXgebykfCSlJsUHoaKin4C9s5kcTQEAfec/BUAtJq3GQQ2Ye2XekcXufKag0kHoqD2P3PCKdNRGVfOcmz2X+e8tZOFbl7QS+mib3IUefP5h5uVVw4YK5ZveN6vxGEQmco1/FteYQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 972414f6-9620-4f50-3890-08d8856a0606
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 11:16:08.9064
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GxIT6S3Wr7wIIliiQhQSJBtD3LU5rIQy+psIVpJinoolmlKAclGPsTMSVxc5CBAwh6COxDE1cyieqb8GBJzdRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3483
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 11:06:46AM +0100, Jan Beulich wrote:
> On 10.11.2020 10:31, Roger Pau Monné wrote:
> > On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> >> Under certain conditions CPUs can speculate into the instruction stream
> >> past a RET instruction. Guard against this just like 3b7dab93f240
> >> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> >> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> >> achieve this that differ: A set of macros gets introduced to post-
> >> process RET insns issued by the compiler (or living in assembly files).
> >>
> >> Unfortunately for clang this requires further features their built-in
> >> assembler doesn't support: We need to be able to override insn mnemonics
> >> produced by the compiler (which may be impossible, if internally
> >> assembly mnemonics never get generated), and we want to use \(text)
> >> escaping / quoting in the auxiliary macro.
> > 
> > Could this have an option to enable/disable at build time?
> 
> Well, a subsequent patch adds a config option for this, which in
> the worst case could be turned off. I'm afraid though I'm not
> clear about the question, because ...
> 
> > FreeBSD will drop GNU as quite soon from base, and albeit it can be
> > installed as a package I would like to be able to build Xen using a
> > toolchain based on LLVM exclusively.
> 
> ... it's not clear to me what the implications here are: Are you
> saying -no-integrated-as is not going to function anymore, unless
> people explicitly install gas? If that's not what you meant to
> indicate, then I don't see how building would become impossible.

I'm still inquiring about this, but I would say that when gas is
removed from FreeBSD the 'as' command would be mapped to llvm-as,
and thus -no-integrated-as would hit the same issues as the
integrated assembler. So far in Xen we have assumed that
-no-integrated-as would fall back to an as capable of doing what the
integrated clang assembler doesn't support, but that might not be the
case.
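
As a purely illustrative sketch (none of these commands come from the
thread, and the exact version strings are assumptions), the question of
which assembler -no-integrated-as would actually reach on a given
system can be probed like this:

```shell
# Sketch, assumption: illustrative only.  With -no-integrated-as, clang
# execs whatever 'as' is first in PATH, so the fallback only helps if
# that binary is really GNU gas rather than an LLVM-based assembler.
as_path=$(command -v as || true)
if [ -n "$as_path" ]; then
    # GNU gas identifies itself as "GNU assembler" here; an
    # LLVM-based 'as' reports something else.
    "$as_path" --version 2>/dev/null | head -n 1
else
    echo "no external as in PATH: -no-integrated-as cannot work"
fi
```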

Ideally we would re-run the tests with -no-integrated-as, in order to
assert that the external as is really capable of what the internal
one is not.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 11:21:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 11:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23270.49926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRiT-0007wx-3o; Tue, 10 Nov 2020 11:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23270.49926; Tue, 10 Nov 2020 11:21:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRiT-0007wq-0q; Tue, 10 Nov 2020 11:21:37 +0000
Received: by outflank-mailman (input) for mailman id 23270;
 Tue, 10 Nov 2020 11:21:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcRiR-0007wl-QU
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 11:21:35 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3517021-0cb2-4b29-977d-6736856d513e;
 Tue, 10 Nov 2020 11:21:32 +0000 (UTC)
X-Inumbo-ID: c3517021-0cb2-4b29-977d-6736856d513e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605007292;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=20QXgotj5x48Q6AvWXydhWzP6NnSp3hXwDVA7/Nfln0=;
  b=J5L1625tE4Ry4Vq6Ng+gEVMObi7wfn0X3wcGjWlqnEJ50HCbHEqGaI0k
   gff+U9IxNwfzxNhpuCjktFmEczyyftAZdn2ceJAJPEV1K08F8iZQI8Fbm
   CnKIqfTim15GQZd24rGN3rDZcFhnIsplVZzv3NbNgFOK2J+QV9EiTnczQ
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: rY8dTgtPw1dSAE2RV36RZyLKPoltQCfrWr83Ca+GEp/iGkuwWLKwrbDAohPVxaGBef1n0XNqFB
 YisKQsuzNCFMvR+0GW7297Hh0x62QS1XFvLLSxdG8QtFJsMYX12+wRP6MXO6Os1Y33T8jk7NuD
 0+7xo9HPfKlZrNAHvU27zNZttLCz71YRvwRKpl1+qmhm96wsUM3qC/VvefXmex0FF7+7M/mG9I
 pxWMrB8u4w6FYcbN+D6MqRR1Prt9H2c2bvO6QmxxElWsfAkmgOVIevejfUWyjsGtwWhj269buI
 VL8=
X-SBRS: None
X-MesageID: 30866721
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="30866721"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZZYqYnqHQMEAbcdWtZuN2LplM1suIqrgijIOMaIoFTHd5MWOYFfi9AUmbN5+ITxIqdZwhD5eqY9cPHgOSgUclNfgtveUZ4Y9EQpAlfed1n9eB6L2z+t33fL4mygr7WxM7rntAnGTG9YUVus11I36ojNbqf23uwdPOgVK9FdUexCU2pWBITMu8T8MZ+9L3AqZ02zViq/yNdCCAod0zXihVmgIvATaS3tfUEN4GgSPNw92YO/xttvfayYyM4nuWQWnNat6/mugZyjasmekzTPkQyp2IJfFXS0UfuUGHoBDgj09b03ku5oBQCiwwfohW6l77f8Mze9vGl5CUCbMEMx1HQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zev643dNXnzkIxRBXaETPFQbhApcmzn+jO9AJuSQBoc=;
 b=cINt2kjwED8dBxKNTpT+Y7ELGB5kpMo53UsC0tMLUo5BctMLB7t2IlvVi7RVu9+qiW0GXwJM723Y5VorXS1bG7DhkqF5CqDRir09WJK9W0flNrlZUEgQEymWJxcqbAi8S550I6GJ44DSiGV/jouDQ55qfHxGpRBWs1tmREnM6f7xeJZC4e/XunlhDSUNdNE4724wewvW9LMhBaDhHZvHc3gwM0BFvUvo4buqOaAgU8aJVkNAYJuaODIWzWloxh6Dba7KVjiXzLm0srRJxyhKNm+ZSATovpyRxCwEYauxUcUNPoZ1RwxKRqHIuXKM48Cfe8+aPXQCv22BtuIl8niv/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zev643dNXnzkIxRBXaETPFQbhApcmzn+jO9AJuSQBoc=;
 b=AA5ze/ISY2b5QccFYhV3LkLIBhBOzgkECoZfoQjUR4zFEiSSawlJ0TwUPDdirRcKzN824tPvxeGC9vRR9VmZp6WTJo6Q63X8yTLrDFz7bZ/IMW0L3vOLt2uMTZYV1OdTRNU1d0T5GgK/v4ikyQIudV9iFocmWE34nDpiq7Wt+6A=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] docs: fix documentation to notice credit2 is the default
Date: Tue, 10 Nov 2020 12:21:18 +0100
Message-ID: <20201110112118.99960-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P265CA0005.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::10) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0a7952bb-8d6d-4656-a37c-08d8856ac375
X-MS-TrafficTypeDiagnostic: DM6PR03MB3483:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB34839044CD0453E51A4848BE8FE90@DM6PR03MB3483.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3826;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: DYGzYNC01QXPpSBNsFuybWAiW+j34PN0AMswtntlZKWqKP/SMvxHENTkkv1NNswnR5yaVGKvWLIhUsVFAIkavR75bI543Rf7NETZLnUB7Hq6a6gIxi1xcX40Lw+Ii8gLRN78lh/ZtspSfAb1X+LPyvIhIGx85IcAq/PS1/kGjSMdk5z8rZiLdJQzyEcwV1uRqHgaksKi+w0EGcWQbZy/DgM5y4L0DyTFB9Aqzh/8CpDHWvsJDEAcFANc+d4zIxcWTD6j8UkR3nIvVgjQQdXJGEj0zifAyL7/m9xpH4qBflObxhyFR1pcHO7L/+XaNVAA
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(346002)(396003)(376002)(366004)(6486002)(8676002)(2906002)(478600001)(54906003)(2616005)(316002)(26005)(66556008)(6496006)(83380400001)(956004)(66476007)(8936002)(36756003)(6666004)(15650500001)(1076003)(86362001)(16526019)(4326008)(186003)(4744005)(66946007)(6916009)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 5ULcHBEE77M7FAYE7U/qQN8csKnSUcPzDo5UQK0kGJ82Dw5YiO7ZWMDuAsc+Pkfub9x59XWZGer+RFsJYJ2Omiy/j6NHqr8ColuZNu3iqaJtz158CYBQ4WSmPWMV8YfwTJYWX9JSN6YFxglgW8TgePgzftaP7wH615tHg6kC7fVbUWGkEhLV6TMgc1iB6gUcWyAfBchuJBjfBO1jLkgP+XTJCpPyHjv1QSKVFFGPP86EDsbv8xZQVrzF/G6zjxJiIjmrRAQQKnA1GUrVCujT/QZrKopo0EaVu44TMdM49Ym/R4/y8Wv7DjJ6+Rn1mdvdfIW39Xfm46ABCgn0KDvU11PKF2eVKUB2XtAqCQgjesjZ5FbWNcvfpg8y7EIe6P0vp6G5assklNiXDWyj5mLwLvA+PX7aVyN5GcM32AAiXoJpR8wamfNBcC8Yi5zN2tqMLacaucUvJ9rn4RP8Jkhd8HHVwOPKI4WVNXzprJ8KohuSjAD+icIXTrph6CzBmuZGdk+y/ou9Uh8vEBCRqquyhZytKh0e1/vhhLlRSHEPGRd8c9kk4M+cMWGNy1d6ZibdAfAvPu2qKb0BGTUhR+Jhtx0xG+v5QCgzIAV48EntIYg//deDkAuJJDMlYOlz+RXrpf/b+fkx2ZoNmH/o6I6fl+3i/nRxJss9/UKrHprdE1rBSRaIqzhE8J+PDMue7Fz4UNONsL9j7kffSqrPEJIoR5iD6PShSP9GtpkbBcoAhomR90rC/TDHqRQTqZD6edGy7hVz/gy6fwN1/CyPg5dAM3No/VeMla0GbkzRI5hXgtiAjhHECeahUn2llR0l0rfqoyj31mfAFS81mt+mcKteiDnxQEP0RkdAwMSdPJTfaq3AsZQegrgEj4y8SavQVh69/ysrjMNaGWjH1CJdev9iNw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a7952bb-8d6d-4656-a37c-08d8856ac375
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 11:21:26.6254
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: T56s3huhVtp3PgjpeGURjQnM4QiDlX6s6ra+TmgsCcWBwhNa3TRz6DrESXes8fhgPW4zx1B/3Xe2JEiyVBAwfQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3483
X-OriginatorOrg: citrix.com

Fix the command line document to account for credit2 now being the
default scheduler.

Fixes: dafd936dddbd ('Make credit2 the default scheduler')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 docs/misc/xen-command-line.pandoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..789aead148 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1876,7 +1876,7 @@ with read and write permissions.
 ### sched
 > `= credit | credit2 | arinc653 | rtds | null`
 
-> Default: `sched=credit`
+> Default: `sched=credit2`
 
 Choose the default scheduler.
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 11:30:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 11:30:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23279.49939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRqn-0000SJ-VW; Tue, 10 Nov 2020 11:30:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23279.49939; Tue, 10 Nov 2020 11:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRqn-0000SC-Rw; Tue, 10 Nov 2020 11:30:13 +0000
Received: by outflank-mailman (input) for mailman id 23279;
 Tue, 10 Nov 2020 11:30:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcRqm-0000S7-Fn
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 11:30:12 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e372d28-f65c-42b7-bc12-48b90cd970d1;
 Tue, 10 Nov 2020 11:30:11 +0000 (UTC)
X-Inumbo-ID: 3e372d28-f65c-42b7-bc12-48b90cd970d1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605007811;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=DswWUBFVhefM0AiWW5aZVrzExhrh8izShQ7k4bwFu2c=;
  b=NtaZUL4Salgjy5blYQMnYcvZln3BFEeokc7JizgE52l04QNID5JTlDTt
   VdrWw9cIdQMK3+NjG8IqarGLDXzgP3DoT8QHyRcMxOZ8V/UTrLSOs23Hz
   /qNIfj+i0fnh6wpwlWMDpHflq0sSDMc215Q3zf1tj9pVSPg7hrHyPJRLN
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: NltpuTUiRVMSwgsnUyRDxCAVv5qRsF21vhXMhIY/q6oKvMsCatopih6MsV9T6bBWmxXIXHfGkP
 gGpqPC7sn7a/P70fZnuCTdrbTufMm6fYgQKdpGWoOXbCxUOQfIB59g05fOsKMUlgeT3xkXSAoZ
 czmSIL8car2H+B56/x5VVFLDn6bQiZQLcHNUl+KtLpmwAjtiEtJ6RnWc/jsSoLEZF67TZ9uziJ
 s7kU85CKooSSu4DTq/rvioxlwpsRhb4NBQqaIGOkZWGB2ISrJcU3CVELfYaTJnDdg1I/uExhGn
 FEc=
X-SBRS: None
X-MesageID: 30839964
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="30839964"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YZ+awRR7yQ1b4INNuRBAM7vUdGifA7kyvVmtv6MHIkobcMaGGBpNvxTAvNsIquoFyZQ4s8Afbap7vN3pPEqlHgCVnTRfuIcQlEGjaK0PFXRBA5bin8eBEJAMxRwT4t2AdEkGrZ1XCHJwP8mQfMjsWsseIrE5Y7GA8EQ1ElnWXn4HgyCOsr/ECwptyBx02mcssOpolREzZAEChFG6umiQ3Z+3TSLsE7QFE0Kj+/tUDhklJ2hG0/5e0x46cLjSglgMQDeLk8yjErycZZG2JSG1G6ZcLQUEpFLEvo0TIqiq4q2HVngRBjBWf3iZ6GxU/MsfCO/2hTC7rc32g+1z3buOyw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GmFRSLA9nNldh/+JrLsL8VNAG5HwvNv1XF129AwYSVQ=;
 b=iBD/+bt2aTjbSSY7pu3Z0NXkEjQ5PIsDRbWRfNXMT1xgHcGD2bJLWDLMu8tPS33c9vdVoJVuGI4+3LTNhwDwK+r2IhhjEDIAjG8ELQ4qN8AH20I4pMXGk4brYiV4ysQZPYphxNVEM5HRBw/PFDdFoM9BTs3eVHDW9ZJMk5v2Vyhe/R971hKTUWObgStZ5wisINoNeO+iecMD6x31k70/xeGVvTn40ApxOnw9JOV+mRQN772w6MC2F+X5Z968qDQAKd/F/xg5rqtf1FLtdZHbQNaDomQxHv+SPMdJv019ZJzUxWVRd/2mY25pWKf8bva5sHJGoyHh38sObpY/wUD9pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GmFRSLA9nNldh/+JrLsL8VNAG5HwvNv1XF129AwYSVQ=;
 b=RhSPNFbPzyVbgmOXuqS/jyf7prnoP7MiQvPSVImZWp+uQg81We7Jgty+if9MZJDa/wj49WpvuOT7ljOkEK/uowbSrDqgHjRlAGvpe9Ys7HeRMK7Iz6vPSJNtRIeRS2nigfQz9FXcbWDgJCKGjToJjRAMak0rx0b0ljiO7lUcxtI=
Date: Tue, 10 Nov 2020 12:30:02 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 3/5] x86/p2m: suppress audit_p2m hook when possible
Message-ID: <20201110113002.maox2v2w6om4lmik@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <722cf75e-da6a-49c5-472a-898796c9030e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <722cf75e-da6a-49c5-472a-898796c9030e@suse.com>
X-ClientProxiedBy: LO2P265CA0365.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cac03f18-fe99-4974-2944-08d8856bfa08
X-MS-TrafficTypeDiagnostic: DM6PR03MB5338:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB533811B284361C1830DCA5828FE90@DM6PR03MB5338.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3513;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: h00qtaTWA0QD6Exz77iwApjiCojiQPwelqQye4UggmWF0tJ24qW+ZxoIfGuElSToTCbp+wvCAcbflPWvhzLt8k69ZpPbqgKZdZkPtBo/PL9Tw/nXda7gXEUMmsklubE1sUpYs3CU+EAhmwUl4KMYihvWhZ8OjmlN5w4faReuGAc5yGScRkXcoBuX+FSFBabOBNk761GfPTtMCQYo8/PrP5sS93bqfet7h/XjtM5/hrWFNjUQGJjkxnav7EPHfL7QiZcnn73gumsAD1CCDQUVXGazaS7tpslm1OsQo047a7J3q758e/y2ZgqYgY1xVC4gBEioSfNT6W/dW9OH31d3Lw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(396003)(376002)(346002)(136003)(366004)(1076003)(6486002)(186003)(4744005)(5660300002)(6496006)(54906003)(8936002)(26005)(107886003)(16526019)(6916009)(316002)(956004)(2906002)(85182001)(66476007)(8676002)(478600001)(86362001)(4326008)(6666004)(9686003)(66946007)(33716001)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: GWmaIT7Kql9Wy8O5sfqpJJZWS+mb2m7BRcvQvlQrsXeeDJPNUwZCUwVIPTODXqmB2z5kImfOs40ZOUkRXxI5KS1mbTbZUZnRrJ0QhhsrjQolxUV6YVCSg6ehW0b+LnK+kWW/UepmJD/qIUVuH+DxZ8XioWstoKzfdthw+3SV6A2hVJTUfhu/MUcyISCtAymetFdrwQrdrMeD/r0ShDv7AXx99x9UCV4b8Rh5LenxrZRlqTp9mtb8j2wVeDzuu7rRLwPbMMZda9YOHLzWI3vKYly2RqC3Dj8m1Y0nixJaAQgbCpjXFFCjgSukyCZIWW0k50HnmgBZsXYzLG7l1C8FXEYoX82y41f2QNUAPOP0I3UWRYj6O348Et+6sjpNols6fdk3TTTo7GolhE4nGXn51tFpx7BZyKWpvHukqxPNzPgkdd834JQfqNSyDlre1Q52IHrHkGlGhLKYlNJHoHxRPZOIAqZRYUC/uaUFL0c1wIjvS6hyxNLBGHw8DjIuWmTjw/GjuFS+auIvm40mmw0djXZkjv780LnLBH5mG3o2WmH2oLxGcqesgDbxoPt0jFGu5gzTpgy83JdO6HZPFU9cg1NnixFzulHvBIKgmYOVXQ3E9Je7uYr/Y3Kkr8kmD9K+GwgDK2xEdYtdC8Ax2ZXT8tQAqgSipQ3iI/kBMGnMXynd5f56bkjinwgl0TbgB0QW/62ajxEOSalKQ5W9gEseSUVqo+INkv1Vdp4lm6E1dwyKc9KHxZ/f2SfmIWaA0ZxprLVtFjIRtl4saOXIJGAoPMAIwVyS5WhTJzm2h2Xi738fO42w93VNe9Bl9OiIJU6HWUWoxVvbqTLjmT4v6Ic2FBPwqkx9cntmosSO8qoUAHkBhdGFW4vrLed0uLoFeeejZInvpGo9bfebGtCGdSWqkA==
X-MS-Exchange-CrossTenant-Network-Message-Id: cac03f18-fe99-4974-2944-08d8856bfa08
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 11:30:07.6854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lJcNMBOT7o5sOseQxhbbmdxL6BZghzHbGtA78GtevX7+7X73wWBsdoYI2534OyvQoVKU+8StTwm1bXxVMKvpCA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5338
X-OriginatorOrg: citrix.com

On Wed, Oct 28, 2020 at 10:23:42AM +0100, Jan Beulich wrote:
> When P2M_AUDIT is false, it's unused, so instead of having a dangling
> NULL pointer sit there, omit the field altogether.
> 
> Instead of adding "#if P2M_AUDIT && defined(CONFIG_HVM)" in even more
> places, fold the latter part right into the definition of P2M_AUDIT.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 11:31:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 11:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23285.49951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRrq-0000Zp-8k; Tue, 10 Nov 2020 11:31:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23285.49951; Tue, 10 Nov 2020 11:31:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRrq-0000Zi-5h; Tue, 10 Nov 2020 11:31:18 +0000
Received: by outflank-mailman (input) for mailman id 23285;
 Tue, 10 Nov 2020 11:31:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n6mT=EQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcRrp-0000Zc-NA
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 11:31:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d607260-069f-4c6d-8f13-6f7f8d925cf4;
 Tue, 10 Nov 2020 11:31:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7CE90AC75;
 Tue, 10 Nov 2020 11:31:15 +0000 (UTC)
X-Inumbo-ID: 0d607260-069f-4c6d-8f13-6f7f8d925cf4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605007875;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hjO7Yaf/heKc7IBFQGJi+fADdiiJj6eRoTK9rtO1azw=;
	b=ZUU7JBue+5xyus8bO+BZ5jnZ9hP4e7YcndmxGUk5ygsvygJlXvBGykhLBKMdJsvXb3Na7X
	FXMesQh11Vt435ns0NjkI5TPEQyiVf/kc0FtJiNI/qZN1PU50iSlg1l5fuq2r8NqKCnfaN
	DKrL0IQ0/bV37fdSxlUFnkQTI2+EzBo=
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201110112118.99960-1-roger.pau@citrix.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
Date: Tue, 10 Nov 2020 12:31:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201110112118.99960-1-roger.pau@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="8Hze3JGygH3N8sanomTXGRLCmj9nE6PnQ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--8Hze3JGygH3N8sanomTXGRLCmj9nE6PnQ
Content-Type: multipart/mixed; boundary="8iTMXHV9319z5ZGXoSjF3Lwq3GldUssRO";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
References: <20201110112118.99960-1-roger.pau@citrix.com>
In-Reply-To: <20201110112118.99960-1-roger.pau@citrix.com>

--8iTMXHV9319z5ZGXoSjF3Lwq3GldUssRO
Content-Type: multipart/mixed;
 boundary="------------51E75D92BD6E66B618018713"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------51E75D92BD6E66B618018713
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10.11.20 12:21, Roger Pau Monne wrote:
> Fix the command line document to account for credit2 now being the
> default scheduler.
> 
> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>   docs/misc/xen-command-line.pandoc | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index 4ae9391fcd..789aead148 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1876,7 +1876,7 @@ with read and write permissions.
>   ### sched
>   > `= credit | credit2 | arinc653 | rtds | null`
>   
> -> Default: `sched=credit`
> +> Default: `sched=credit2`
>   
>   Choose the default scheduler.
> 

Tried that before:

https://lists.xen.org/archives/html/xen-devel/2019-01/msg01097.html

And Andrew didn't like it...


Juergen

--------------51E75D92BD6E66B618018713
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------51E75D92BD6E66B618018713--

--8iTMXHV9319z5ZGXoSjF3Lwq3GldUssRO--

--8Hze3JGygH3N8sanomTXGRLCmj9nE6PnQ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+qegIFAwAAAAAACgkQsN6d1ii/Ey8y
kQgAkvIg6/0OJNaFgvqmLUCxo+pw3XZq5h4icI8yEFmwV3A6F5qSEWKU6F42aslZuGS+3s28CgaU
9rUWDGCPonG2XdtrGuGMpnOuMo8NoTY6RXa6s532NqcHYbpWCw5O24Vt6oEQE2bBoeof/fk6KOQy
aY7ycH2sIef5FWkZecmUT2opqE/pdnn7R1sSZn3ImtD4wrunk2GkDVMRKc0vZLolsO8iZ5Xelqzu
mz8wLkFxS0eDAkp6x4AzDf90sN9s1j/7db5qP7ORbDh+g+gS2aG6oVNlpAyUXGeStsPS1kFts7yP
s4PKq2uzLR02A/OXPLwK8N+VZ4AR0d+tIFHeEowFvw==
=Enob
-----END PGP SIGNATURE-----

--8Hze3JGygH3N8sanomTXGRLCmj9nE6PnQ--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 11:38:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 11:38:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23295.49962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRz3-0000rs-4U; Tue, 10 Nov 2020 11:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23295.49962; Tue, 10 Nov 2020 11:38:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcRz3-0000rl-1c; Tue, 10 Nov 2020 11:38:45 +0000
Received: by outflank-mailman (input) for mailman id 23295;
 Tue, 10 Nov 2020 11:38:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcRz1-0000rg-Kp
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 11:38:43 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cacae8a-02c7-4b23-974f-18927ec6a886;
 Tue, 10 Nov 2020 11:38:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605008321;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=40PxImIy7N+dTs+m91Fee03wILsi2HFIaT9+AwryQAA=;
  b=C2Bgo9wyPZUcEm0hAtuipocoOijt/cfeU5pnRNfzS+Gxi+GbOTuJmdkM
   GrkDenA/paeH69P6AZ+/iqE5trj7x12/iJMHwsxjrzD0NwwyCauV5Req7
   tb7iHTnp3pg2KJZ6QJfOOzt3fClSGtBYFkY8u7zzajNcP3GIgT3mWQtzB
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: k5Lz4GGEf2jgghS13W3KNJFrkJppOQwP5tPWYYvmg3plpgQZMeqVMaSKkUpuQSECz0SAfqwFRD
 YYLGgProERLplWctzgargJHr9+8XQBWnlw3SDDtIycJxMP1LF2fAOYeA07hxU8UbD0LDiapbHO
 jyg0EAAQ1BXSZ6lYz5RNGa95+oZ0kbGbPBW9nE9nFSE/npezKZInaDFvKElANMiEo+YA+FfrZ/
 H3iU17hBV7fNzOdUBS1Sz7O1sXxpnLrJqsXzdo3kuW0icgstyD9dqFJZ/AoM2o7ge9OasiRsIy
 nTE=
X-SBRS: None
X-MesageID: 30840389
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="30840389"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iPodGjUZihmn4BuFGd4pJrPiTzTdOSt4OgDwPIpNcz7M8qlNIV5C1EFAhQao6mkAkakz6V1VgkapJLBN8dXzAJpK7ckePOEnF97wV6YSSj83R+opkkHzKeaDTqvQjnNQw460MD5MrMV7jwH2HH6GOuFiJdubqbrY2eT1MkFya3uZZeuZCmZMDvjBaw1c71TwdoRk2vVC/SfyMAMZFf5kzbm0b/4Q3+djmlB+MpOVDN52X0vAbJT+3Hg+bBagAGcwKVArBx3/EU95KYzmedu4ROpcOYvtMnEv+IrhlUTcQ/Y8mhMtX0B24VLdkpAGOQUCPITSq3WfKZ5uIuL5I4Pv6Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=my46kdy4q3mUYkveQ4tkBogWiaNXK1LTx/T3ptmh0bg=;
 b=izA6y4zcxrjPcIFG7ejcVfDplV90pHAlxek3IaCiYLtjkQfd+RGKiIeii537STElU1wrXoKpq6XjWhXPHwox/1XqTxo1qzTko2gkAj+cGBbp5ss4vbDE1dtzFw6sUX/VmL7QXFeqHfq0IHxOZh1t662bSoObVxMREvScTFn0/zL4NFCEyNbx+/Yrp3DT1QX7oGKKmwuQYKFbedLwzXAWQ1mBA7fzOOCaaefympF9se7uASdd3pDgMxHLhLdIRIE/5hgV5GWGW+OYzdVG3QF93CrviVJTj0uDS1CfUUmRfWAnlvPeYlzvUoXkDv77vdmog0lz1/BumHlpbtGfUGR1ew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=my46kdy4q3mUYkveQ4tkBogWiaNXK1LTx/T3ptmh0bg=;
 b=os1MgH2Z6lmGURZQVNAVVue2oCY4scJITLoGBOu9w9ic/Ko3hYooY3PtsPi3T2qpZM5P8UcSOMyB2chFpqcACHSD3HmXbePmDOi1rph3f0dP4wVdiTYpjePmoBqA1I6dlnCHzw1F1Byt9CkA5zwLv/zJl9HFfjqhNv16LcHuy2s=
Date: Tue, 10 Nov 2020 12:38:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 4/5] x86/HAP: move nested-P2M flush calculations out of
 locked region
Message-ID: <20201110113830.ttnvs74ud2fo3bxn@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <551dc0a2-f5ef-a646-26eb-8a67ae428745@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <551dc0a2-f5ef-a646-26eb-8a67ae428745@suse.com>
X-ClientProxiedBy: LO2P123CA0097.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:139::12) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1ce278f9-fd56-4e9d-20e4-08d8856d29aa
X-MS-TrafficTypeDiagnostic: DM6PR03MB4396:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4396FB5D3F96CFC46FFC8AE28FE90@DM6PR03MB4396.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: J5dFXGN9N4K6z9yaGQ+2zpkbfg8bdemuxsFxAkcOcmeXRkv9mGrvWIpFQcITatFuIACQz8sLBQq9ewqssPNwITvSVDTgpDjwnHC2UGGPnSCP5DWzATPQx5MLuyOlPdqzCpKWMmGbkFRmFpBf3pqXzSMaC2hI0pjUBgiIt5DOLc0dpaYuop/R1w9KG3iS0LReTw3rPABNokb9BWmgqP8oDGuHk2aou1K0WfVj8WrELmBH/NgBl4m542miNOYH+L0BExLwh7jLqWooCGRo14e4UBvQrjmVLhpJBuLl/oCnoGMTA1N9KZBRO1HL/Np1v01x
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(346002)(366004)(136003)(39860400002)(376002)(4744005)(54906003)(6486002)(85182001)(4326008)(1076003)(9686003)(86362001)(66476007)(66556008)(33716001)(478600001)(2906002)(66946007)(316002)(6666004)(186003)(26005)(83380400001)(6496006)(6916009)(107886003)(8676002)(956004)(8936002)(5660300002)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: wO8ecIKtG7y+HVaOt5Bnkwz9eOt8jMJhtN6iZrUiTlbtZwLfB4QTXSSToyXbJ8Y2CRkh9Dj4k1nCpkEWisn+gholsc9W2DtOvUoEhxmD2TIc5jxgMFB0oTyBfvqeJYdryB6swCNFGvnkDCWHVgWSvwlXtOZSdST7WYA3Yrt52cn+SX+l0FiKcu4aqmWJmdA6KyhyZH+YFxtYPM7wE4+kwTM5GYjVEruH+Hhdgkr/1BG0GywLDQPzFFNoHz8oKmvnJnaf5ePtGUNWLGoErMXCaGpjXiYNqiwPzLTkDC32iKYOgttprh2a5U/v4XbxtbnI2ptxj7thGKY5N6qvPCYRW8a3gcc9MHVFmhM+ah9hSFwyW/WvTRFf9q2PD+2p2KQM+QgjxSco/DKozCj6SZiq3oD2V3hMSRbrg+JA7M9OPiReNL9ERu1IiRxgapN0ERDOuopc8ZSIyGYZ+lG66iAOo2AGsoXCpm/QC8Eah/kTGHhXOVajjemxhBehICerFtdqF6ahIemgyxoShytjI5Ns6UgUZuotiYMMSBaGXN8N+niUbHCa/Oqb4I2ELEVigToYopBPx+F7B9Op27iy5theM2i/8sbHRrFF3RG7mxjiBln/dMF3Pf6Eqcm+2LFkdGcc29wRDv0kn4IVvEF+ytcXwfWdlXWQXfVGL7BWoENLxheSE+zXv5ezmLd7U/mhWtqUfDtKIFatRL/TU+Eum3jJcoGW1RnbNHTih5GkrOX+3iH6sq5z1Bfwpkdc8ls+6SIgMMa6nwRiqISaF69zAlpB6H2wdfEclXZwaujLixZd9TEfC8ggyhPz7Ib6LZuLkVcq5JHPumxiA4kuCPOn0zaxM8jvXVgA4LUUehqwJrSMZ4EWUZDIBcGt45eEadjpfa/4octJzDiTMFYYhcCDgK190A==
X-MS-Exchange-CrossTenant-Network-Message-Id: 1ce278f9-fd56-4e9d-20e4-08d8856d29aa
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 11:38:37.1084
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NPAIYXQhS7L2nikBG718SmDB+zmc2HQkuDPu5SxSNLaI6kHK/jNMaHctE0vD7Cs6faUB0zJCN1ry4JRUwWC3RA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4396
X-OriginatorOrg: citrix.com

On Wed, Oct 28, 2020 at 10:24:12AM +0100, Jan Beulich wrote:
> By latching the old MFN into a local variable, these calculations don't
> depend on anything but local variables anymore. Hence the point in time
> when they get performed doesn't matter anymore, so they can be moved
> past the locked region.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
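[Editorial note: the transformation the commit message describes — latch the old value into a local variable while the lock is held, so the dependent calculation reads only locals and can run after the unlock — can be sketched as follows. This is a hypothetical illustration, not the actual Xen HAP code; names and the trivial lock stubs are invented for the sketch.]

```c
#include <stdbool.h>
#include <stdint.h>

static uint64_t shared_mfn;   /* state normally protected by the p2m lock */

static void lock(void)   { /* take the lock (elided in this sketch) */ }
static void unlock(void) { /* drop the lock (elided in this sketch) */ }

/* Returns true when the stored MFN changed, i.e. a flush would be needed. */
static bool update_mfn(uint64_t new_mfn)
{
    lock();
    uint64_t old_mfn = shared_mfn;   /* latched under the lock */
    shared_mfn = new_mfn;
    unlock();

    /*
     * From here on only local variables are read, so the point in time at
     * which this comparison runs no longer matters: it is safe to perform
     * it past the locked region, shrinking the critical section.
     */
    return old_mfn != new_mfn;
}
```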


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 12:12:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 12:12:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23315.49983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcSVi-0004Hp-6x; Tue, 10 Nov 2020 12:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23315.49983; Tue, 10 Nov 2020 12:12:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcSVi-0004Hi-3F; Tue, 10 Nov 2020 12:12:30 +0000
Received: by outflank-mailman (input) for mailman id 23315;
 Tue, 10 Nov 2020 12:12:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcSVh-0004Hd-3O
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 12:12:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5458695e-253b-42aa-9abd-e47c127b6146;
 Tue, 10 Nov 2020 12:12:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2BBA4ABCC;
 Tue, 10 Nov 2020 12:12:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605010347;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0YfaoJeSASk43AydLI7V2l3mT3osg6VPabsZj92csws=;
	b=ieXVHdl+U0MiIDOAQBIlAqWzbq4T2QMuaLCXgqyJkB6mE528pY0KDi9Cog/3+xw49PqsUv
	yLnsl3GhffmaOEBlCi+dq0B63DrR8GtXIDCQP372QMPMt0nzMgpxo8poz+13V4HZKN9RC1
	2UWfhGLd3Jqc8TEkPD7t8jTwDQUqyPY=
Subject: Re: [PATCH v2] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201109173819.7817-1-andrew.cooper3@citrix.com>
 <681e03f5-86fd-43bb-5347-c526def9ffcb@suse.com>
 <083f46c0-07fb-7b22-4e49-d2a0df87164c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <60ac8b66-7d36-0c4e-85c4-f2d867201fd5@suse.com>
Date: Tue, 10 Nov 2020 13:12:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <083f46c0-07fb-7b22-4e49-d2a0df87164c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 11:32, Andrew Cooper wrote:
> On 10/11/2020 08:03, Jan Beulich wrote:
>> On 09.11.2020 18:38, Andrew Cooper wrote:
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -1069,6 +1069,23 @@ extern enum cpufreq_controller {
>>>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>>>  } cpufreq_controller;
>>>  
>>> +static always_inline bool is_cpufreq_controller(const struct domain *d)
>>> +{
>>> +    /*
>>> +     * A PV dom0 can be nominated as the cpufreq controller, instead of using
>>> +     * Xen's cpufreq driver, at which point dom0 gets direct access to certain
>>> +     * MSRs.
>>> +     *
>>> +     * This interface only works when dom0 is identity pinned and has the same
>>> +     * number of vCPUs as pCPUs on the system.
>>> +     *
>>> +     * It would be far better to paravirtualise the interface.
>>> +     */
>>> +    return (IS_ENABLED(CONFIG_PV) &&
>>> +            (cpufreq_controller == FREQCTL_dom0_kernel) &&
>>> +            is_pv_domain(d) && is_hardware_domain(d));
>>> +}
>> IS_ENABLED(CONFIG_PV) is already part of is_pv_domain() and hence
>> imo shouldn't be used explicitly here.
> 
> Ah yes.  Will drop.
> 
>> Also, this being an x86 concept, is it really a good idea to put
>> in xen/sched.h? I guess this builds on Arm just because of DCE
>> from the IS_ENABLED(CONFIG_PV) (where afaict the one in
>> is_pv_domain() will still do). (But yes, I do realize that
>> cpufreq_controller itself gets declared in this file, so maybe
>> better to do some subsequent cleanup.)
> 
> I can't spot anywhere obviously better for it to live.  We don't have a
> common cpufreq.h,

Not the most obvious place for it to live, but what about
xen/include/acpi/cpufreq/cpufreq.h?

Jan
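[Editorial note: the review point above — that an IS_ENABLED(CONFIG_PV)-style compile-time constant already folded into is_pv_domain() makes a repeated check in callers redundant, and lets the compiler dead-code-eliminate the rest on a !PV build — can be sketched as below. The macro, struct, and function bodies are illustrative stand-ins, not the real Xen definitions.]

```c
#include <stdbool.h>

#define PV_ENABLED 0   /* stands in for IS_ENABLED(CONFIG_PV) on e.g. Arm */

struct domain {
    bool pv;       /* domain runs in PV mode */
    bool hwdom;    /* domain is the hardware domain (dom0) */
};

static bool is_pv_domain(const struct domain *d)
{
    /* The compile-time check already lives here... */
    return PV_ENABLED && d->pv;
}

static bool is_cpufreq_controller(const struct domain *d)
{
    /*
     * ...so no separate IS_ENABLED() is needed here: when PV is compiled
     * out, is_pv_domain() is constant-false and this whole predicate
     * (and code guarded by it) can be eliminated by the compiler.
     */
    return is_pv_domain(d) && d->hwdom;
}
```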


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 12:42:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 12:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23338.50005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcSyQ-0006wS-Ny; Tue, 10 Nov 2020 12:42:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23338.50005; Tue, 10 Nov 2020 12:42:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcSyQ-0006wL-K2; Tue, 10 Nov 2020 12:42:10 +0000
Received: by outflank-mailman (input) for mailman id 23338;
 Tue, 10 Nov 2020 12:42:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P7kS=EQ=gmail.com=lambert.olivier@srs-us1.protection.inumbo.net>)
 id 1kcSyP-0006wG-8X
 for xen-devel@lists.xen.org; Tue, 10 Nov 2020 12:42:09 +0000
Received: from mail-ua1-x92b.google.com (unknown [2607:f8b0:4864:20::92b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e87db3c-004f-4bc0-b247-c349a910c9ea;
 Tue, 10 Nov 2020 12:42:08 +0000 (UTC)
Received: by mail-ua1-x92b.google.com with SMTP id q68so3881704uaq.3
 for <xen-devel@lists.xen.org>; Tue, 10 Nov 2020 04:42:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=DwhMxMNv67jD8/9WZVTPnyEUhMbKmpNtaev6tL7Ou+A=;
        b=smyBCrvyXNXYkhIChQPMTH5Y5yquJ5Om+PvG1se+rgAlwtxwoDe6RRHJF4aCyf3s0o
         e5GuSA1D4q+IM+jNS1iTN8SD7nC6wKADmmuuFXmULJarTBu6GgRAQ9FKcn6adZD5p6HE
         TZEg5+4aJQdWXos3Djpu8W/nTOJRQ0rr+kIai/5uqWMQoEsaWfOoV+X3SbjMvTfPXh/H
         mmzQtT1mskOeXqpnl+aVgRzE1GWjLZPkcKOkWFynkYtMbB37Nq/LImQeKf72/urZT2b8
         S8sK/e8pILIoTBTJbG0nahqBjStiLDBpELWtALSdYwHBXRbPE2IVNFlKztS3/oHB29CC
         5uAg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=DwhMxMNv67jD8/9WZVTPnyEUhMbKmpNtaev6tL7Ou+A=;
        b=O+Y6beMHwMZF9Nqzd+fpr1Iy8PvarC4YqIrbGjvD3rlrvRLWKtDuxTMS3jkEO+fgY/
         daWEbDzgMew7XQ2apWylyUNYfgEmSj1wyxhfZO4Y90fF/FZzUl1SKwsGKGqRQIdKeCUx
         LEIsFm2OsouQrggtVn56UZ7kuC+WumGAUgwVRCepOaSz79EgIJjZUuvg3x9rdsAZlxe5
         Lql28TbtpmtHyt2O1kVBz+XxN36xn2KAMRq2+GpMKcqjUBIHSta9QjOQz812VvJPRkNz
         babjiiuYyhAEjqNBS868k5HV2p0b/5IJ8Qm1b40IX9k70px2L4IGuvyBxoEIFqekBZ58
         HogQ==
X-Gm-Message-State: AOAM5311brbxYCl2jGMl2Ru1KvLcuq+mbVRVbWVkZfpqcEhXMi8VKA+Z
	wspALuvqBXhm3uSKV1DrpOTFsNPehEb3MURw5sSlxHyFFKOosgDf
X-Google-Smtp-Source: ABdhPJxmEKHo5PET9IHJLFqBUKf6tTe4Q9QDAsoNpXv96duiC6eojqmszS8/OplvilIRcREyUxuUTY8XZ8WNwjDERHE=
X-Received: by 2002:ab0:e04:: with SMTP id g4mr3955792uak.68.1605012127520;
 Tue, 10 Nov 2020 04:42:07 -0800 (PST)
MIME-Version: 1.0
From: Olivier Lambert <lambert.olivier@gmail.com>
Date: Tue, 10 Nov 2020 13:41:56 +0100
Message-ID: <CACJ1ZNuJCgDkRHvH2gXqC5gWTJHdUQ9J4G-HBNFwKYZFaWpWuw@mail.gmail.com>
Subject: Schedule for OpenPOWER/Xen meeting
To: "<xen-devel@lists.xen.org>" <xen-devel@lists.xen.org>
Content-Type: multipart/alternative; boundary="000000000000b89e1705b3c001c1"

--000000000000b89e1705b3c001c1
Content-Type: text/plain; charset="UTF-8"

Hi everyone,

We have 2 potential dates for the initial tech meeting with at least one
OpenPOWER expert, so we can discuss the effort needed to port Xen to this
architecture.

Because of time zones (on the OpenPOWER side, there's one person in
Australia), we have 2 possible schedules in November:

1. 3pm CT on this Thursday the 12th (! this week)
2. Or next week Thursday the 19th

I made a doodle-like poll so everyone can vote on their preferred schedule:
https://framadate.org/QQu5rYEOEYr4ZHc4

Note: 3pm CT would mean 9pm UTC, 10pm UTC+1 (CET). But correct me if I'm
wrong.

Reminder: the Cryptpad of the last Xen Community meeting contains the list
of people interested. If you know someone interested who might miss this
email on the devel list, feel free to forward it. Cryptpad link:
https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/

Thank you and see you soon!

Olivier.

--000000000000b89e1705b3c001c1--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 12:49:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 12:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23345.50017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcT5E-0007CH-H5; Tue, 10 Nov 2020 12:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23345.50017; Tue, 10 Nov 2020 12:49:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcT5E-0007CA-D7; Tue, 10 Nov 2020 12:49:12 +0000
Received: by outflank-mailman (input) for mailman id 23345;
 Tue, 10 Nov 2020 12:49:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcT5C-0007C5-MS
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 12:49:10 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6b0d1ac-4652-4e48-accb-bcab033c7f66;
 Tue, 10 Nov 2020 12:49:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605012549;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=kGOAcPyvw+4dMkrB/343daA5zT95B08sLkeXyo9XKVQ=;
  b=bd9u02HjDWCBT2GqZxi2/xWY01Y1XDuM5BSNAAOa329NUMwH25H5JsPa
   F2yzId+A1ThbO0o5oSOKf9NhrOMWOgA3ao1XFEn+vq96xL8kcgaeUvxrn
   NdEru934QG40457Q8vjy6sprua1Pxm02FNCGkj+s7/JEmKQ0zz8DHGWfv
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: SIu95eLLXhVgsoRszmQdX7MT47r5+U5X9aQO1+sOVkGDgw905QyQCpCjtusg0CL8i77Lf75twL
 8De/Y6rK4fg3OMOhbWhq8pZ3zL+8JPEXtucOcJw3+PWI9a2OijBw/hyuzCPs8LCzA/GvFAj7BV
 /9SNxP/cFzqM/Jww1DbhDyGz7dkyJxruZnsAtpZoD15qIx/aYSi9Rk/4Jphz8/BKSDeimgF/DY
 k4BkObu1L/1T5Wc415GUTAJEiIVhBnnK27E+Hs8Vt5vFEckiP3jrCWXl30hWQiOQytyERfxJXF
 Q44=
X-SBRS: None
X-MesageID: 31955429
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31955429"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O9PsT0qrAliX4oNyvxLOIAvYYqlcYse8YzvnYOANOkXe4qmYvQxArVdxINXx0NcMuHY1yAVO3BgJNnmDDJcpXaNYfDo0w8fIsCe8T9W2W1X44G4VGLFabkz8hF/t0SEdn35wqveSXnWKMmPgtbBqr4AE3KCTtttgbcnWWGpOBbYDKPqfw8hTVnlkMsOTYKISDRTHbSQ1cwzWgGTiumNqJ9MEhoERn7z5LfnCWM01FXuTCDcmUSUMXmKGgTqs7XL6zbbEmeb0/7NH+9hynlrdO/dKlSVrkn9Pf05UajUU8UvGAMZN+6ZVELm5kRzJl3vz7mbNpBpHLGM8rDXVtWe/FA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PJW52Z4umjhNmHjsR3iz/CLYwuqpkZqVSsMAAX9G8sg=;
 b=k4gAVCrln5TMyoQa9H0P9npKDbShjAEVU0XuLNNCCleV+ciqNwmgVv09mQTMJyQbh6FJM/1wwrj6XALfkB8TTuugv0fzCStbZL+lEpICXhDz910n4UeOg6XGq33sbaZRoxejd0TR5HWsnzmu5/AlFA3Q4t7ddgS2S1AHh+q8tTSwbXw10yIWOJLGZeDFM5xT9kxeUio803xSxAkmlmF/L7So4MPWD94aFRFM7yxOHBTk2JKoCiDVCq7BMZ/mo8Cp2lDqQHWwRd7AHGU5Wr9wXcCQL8l1EmyBGQKlw9GL5GK+1Av6xjgDcZnJhbHufxNtCaKOdXIrD6JG4vLbg67uWw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PJW52Z4umjhNmHjsR3iz/CLYwuqpkZqVSsMAAX9G8sg=;
 b=qM/ybuf/Rk4nIjPuaqqmSKa56TPyOcMg7yplonNgC//Irccpzh1H4W4PmkVgpTXdj5rf1NbVNUIhimmQBw9GWg+U9UQPtHasjXPGqONASFEwvfNX2Yow5dUaUC0Di9ROfQqBCQOqUimigR86U51rnRiL7q1rIUMHvmWiPcivYRE=
Date: Tue, 10 Nov 2020 13:49:00 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
Message-ID: <20201110124900.2hjgn45i7ynf33p3@Air-de-Roger>
References: <20201110112118.99960-1-roger.pau@citrix.com>
 <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
X-ClientProxiedBy: LO2P265CA0338.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::14) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 26e02c4d-19e9-4d83-4d3a-08d8857701fc
X-MS-TrafficTypeDiagnostic: DM6PR03MB3481:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB34812AB491B7A9AF4164C32B8FE90@DM6PR03MB3481.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: pV8/UV5mXZlpkTYbmTcsCv7cuwjYa5xIQojUIg49D3qAfPcVGEJLS+8g1etBIxNS1ELVidlaz9xMS7IQmpb55RqAzjhfLRU8sVPYl/fbJrofznk1ofccF1tiDE7VHDt5tZQqYx8rxpZwe7vza9+w0GlxirvvU2/89gvoVug6Cl7kwGVf55CbfSUDCCcrzW+HQNJPSBHxZEUquCscgXA9AtMQ1VnlQFBpgRCA22bmnKqKBPHfIhbr8/PN/gHRIA1pc+Oy9go06ayRy5DJB0l42ryprvgbrdFrQfDO2LcLW3Brzd0NaE8wmUl9aupdrdeG6qULz6LqOxQTEvL26+cCoBAIVSCMm4/Z3LoxF8IsrDYLENXM+jfy2iYQmhsh0xFS29UDrK4bZct2RBLbn86w9g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(376002)(346002)(396003)(366004)(39860400002)(186003)(16526019)(26005)(66476007)(33716001)(6486002)(66556008)(6916009)(86362001)(66946007)(6496006)(5660300002)(53546011)(66574015)(316002)(966005)(478600001)(83380400001)(85182001)(8936002)(1076003)(2906002)(54906003)(8676002)(6666004)(4326008)(9686003)(15650500001)(956004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: ciy0zDrI3FofNreUEbyfb6XDfqg/SGQ6WaD/jn9Zvvc257Bu25j0MhsUKVN3xXq+7tV/xW3bDSaW6PPEMykUtzcbzm4DpVgkvVTcNbgo0Xdh/Ck+wlZf+cVBMwdrEPWyYGfGhe3APWwIa/PT+EeALKO1vzpO+r6GO9Irtj4dC06/LSgV+po/xY9L7kcvRTxMkNWBlNdBad+GGE9gsHHnH8kgCmddgH+/6HcnVDQod8s/apzBmKOnyw609KOeQ0SrYi7SLIXv9CmiAbXFcK3J9FpDkqQAyzzDbf9Mt+pxvMhjn4krvXaeZspnWl/l26jjZSaFkoJIHfRxSGSWMc/X4E99nwtaVjHE7t2M7EV6jqM1rYASnX94kJJsg4/nahoWkbcZcIDJPpZ3pvArN794hfYD+nOAJZ1MSvX0VzjhziZKlQGveIMmIzuK2v1rbX5cn6MkGK4FomrBmUGhCeLGap8u7UTLD3DzVC0z2u02MM/cWUWlvWlR80aKXL8/e2n0WLgSVjjPLsCkUarbraDouFFYl7w0lgfuxtakT4U6j/kVdNlUwJZwdxH9Ti4WIsx43lzRGf7dQ34VNwxAEyd3Lzq7RK+qIkRq+mLNCUfGtfiFKg2gEwQpJ1h5uRb3BpUs8QKPqQaQLNQepHmxHijE/DW8oi4UBvJSUjy4Y1o1hRSWIbHbjoZkWezWhy4hkUo5OiMfF7HmcSPWSqYSxq1q+zaEzqXF29kVZk6kCoXfpKIswvhsxndhXdXRr4XdaVUtiapCd7HF8pa9YeLUOg+4Wfbpei3qK4j6wvWdEZQEgbUCo6RoLCOTW4afgA426kjydkCuuvrJ+niBCo59B7b4wxgRHRZWPVlHUGcECgQwXrA6YdqPLOPseQAhzgYpAihNZgeK01jrjNQ7gmJI1GdEQA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 26e02c4d-19e9-4d83-4d3a-08d8857701fc
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 12:49:05.7005
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yKlz8+4NW+3tp8Y3JRTlZv8PFB2Mo6fbfzixmfqhPL9Lthu34Qzk6tuEOmoh8PAW+uOaHrGV2KCg0jBOFddEuA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3481
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 12:31:14PM +0100, Jürgen Groß wrote:
> On 10.11.20 12:21, Roger Pau Monne wrote:
> > Fix the command line document to account for credit2 now being the
> > default scheduler.
> > 
> > Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >   docs/misc/xen-command-line.pandoc | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > index 4ae9391fcd..789aead148 100644
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -1876,7 +1876,7 @@ with read and write permissions.
> >   ### sched
> >   > `= credit | credit2 | arinc653 | rtds | null`
> > -> Default: `sched=credit`
> > +> Default: `sched=credit2`
> >   Choose the default scheduler.
> > 
> 
> Tried that before:
> 
> https://lists.xen.org/archives/html/xen-devel/2019-01/msg01097.html
> 
> And Andrew didn't like it...

One way or another we need to get this fixed in the document. Listing
credit as still being the default is wrong.

I think there are several places in xen-command-line.pandoc that just
contain the default values set in Kconfig, so IMO if we want to
change this it should be done wholesale rather than picking on every
default value change. Opinions?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:07:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23355.50029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTN6-0000Xw-2I; Tue, 10 Nov 2020 13:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23355.50029; Tue, 10 Nov 2020 13:07:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTN5-0000Xp-VR; Tue, 10 Nov 2020 13:07:39 +0000
Received: by outflank-mailman (input) for mailman id 23355;
 Tue, 10 Nov 2020 13:07:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=psMK=EQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcTN4-0000Xk-89
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:07:38 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b906a52-2d4c-4448-a113-4dcf810a92a9;
 Tue, 10 Nov 2020 13:07:36 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=psMK=EQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kcTN4-0000Xk-89
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:07:38 +0000
X-Inumbo-ID: 2b906a52-2d4c-4448-a113-4dcf810a92a9
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2b906a52-2d4c-4448-a113-4dcf810a92a9;
	Tue, 10 Nov 2020 13:07:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605013656;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=9QUsY448JzdfbL4MYcbZiBz1JoYM1idYzU6UardXy74=;
  b=EY/SbnNmQafDBKRmVDzYgbXebsZvAdSAiJD1JdX1kG+1AHmh66cHXuIX
   uMO9e08EwKKdcCgAVm4MasdxRnsAyQXjG1xfW3JqAzTDyxf9ulZAApp0K
   awBTu0lfYGREQrGL0Q81reCOMqFOthk0am98fG0GOVT5p9871vO5w6bX5
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CMAC4MVosbgD0FXxof+7NZ1/M7N36WEXdpSRgL7Jj+Ynsk/wAWYVi6wPSLHCpnpzzWpJD4RPdD
 rPC/bbefh+wL0Lj/H7rHFLC2oMvim6LZxBd6/H3JI2nO2qwRp0c0jDQrwJLJWeoM4jfYV7WDqp
 G6+rLNnqa0p4teSrbFDUMyNQ1+bUiI5MDUr3/JL+fzFjaWnxxbN4TlBTzQjD1e38/0SXQtRiNk
 nlvWKwzakxgf7JiNa+c7Swwu1paIpbZReWw7cizhCnj4sy4MCeWMw0QsGFZfkXrt5xf0xgaDmq
 WyY=
X-SBRS: None
X-MesageID: 30846913
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="30846913"
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	=?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201110112118.99960-1-roger.pau@citrix.com>
 <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
 <20201110124900.2hjgn45i7ynf33p3@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <035d10c0-2774-8d1c-b55f-e075f04344e7@citrix.com>
Date: Tue, 10 Nov 2020 13:07:26 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201110124900.2hjgn45i7ynf33p3@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 10/11/2020 12:49, Roger Pau Monné wrote:
> On Tue, Nov 10, 2020 at 12:31:14PM +0100, Jürgen Groß wrote:
>> On 10.11.20 12:21, Roger Pau Monne wrote:
>>> Fix the command line document to account for credit2 now being the
>>> default scheduler.
>>>
>>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>>   docs/misc/xen-command-line.pandoc | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>>> index 4ae9391fcd..789aead148 100644
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>>>   ### sched
>>>   > `= credit | credit2 | arinc653 | rtds | null`
>>> -> Default: `sched=credit`
>>> +> Default: `sched=credit2`
>>>   Choose the default scheduler.
>>>
>> Tried that before:
>>
>> https://lists.xen.org/archives/html/xen-devel/2019-01/msg01097.html
>>
>> And Andrew didn't like it...
> One way or another we need to get this fixed in the document. Listing
> credit as still being the default is wrong.

I agree that what is there is wrong, but so is saying credit2.

This documentation is for users, because developers know exactly how they
configured their schedulers, and won't actually need to refer to it.

As a consequence, it depends heavily on what a specific
distro/downstream chose, config-wise.

> I think there are several places in xen-command-line.pandoc that just
> contain the default values set in Kconfig, so IMO if we want to
> change this it should be done wholesale rather than picking on every
> default value change. Opinions?

xen-command-line.pandoc wholly predates Kconfig.  We're only in this
mess because previous patches haven't been appropriately reviewed.

Let's fix it up to be correct, but let's not delay fixing this to look for
other potential cases.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:13:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23362.50041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTSk-0001Rx-O1; Tue, 10 Nov 2020 13:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23362.50041; Tue, 10 Nov 2020 13:13:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTSk-0001Rq-Km; Tue, 10 Nov 2020 13:13:30 +0000
Received: by outflank-mailman (input) for mailman id 23362;
 Tue, 10 Nov 2020 13:13:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n6mT=EQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcTSj-0001Rl-A9
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:13:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2b86a3d-b048-4bb1-a0af-cb7d293332c6;
 Tue, 10 Nov 2020 13:13:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 77FD1ABCC;
 Tue, 10 Nov 2020 13:13:27 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=n6mT=EQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kcTSj-0001Rl-A9
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:13:29 +0000
X-Inumbo-ID: a2b86a3d-b048-4bb1-a0af-cb7d293332c6
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a2b86a3d-b048-4bb1-a0af-cb7d293332c6;
	Tue, 10 Nov 2020 13:13:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605014007;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=kyqV/IU3pTGPWdSEZlbImOB47IBl1mGTQFOc9zMGDOY=;
	b=JRXWO0xO753KI62IJT9BJMYgV4Pz0YcDcM6YlGlCdmM5j6UZhOUNpfzIoZagRFaRW7LTjZ
	RFObhSINHDqZCQePHChUXPGQcFtI6cv3WqPRVljsF/OhgfNqZJ4zW4WTzFmqqpJJJXfyUm
	/dDRsigQYb86/DGRWgPsFZB86m0E8uE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 77FD1ABCC;
	Tue, 10 Nov 2020 13:13:27 +0000 (UTC)
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201110112118.99960-1-roger.pau@citrix.com>
 <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
 <20201110124900.2hjgn45i7ynf33p3@Air-de-Roger>
 <035d10c0-2774-8d1c-b55f-e075f04344e7@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <20d3ee40-950f-e4f9-00d0-a5274c17771f@suse.com>
Date: Tue, 10 Nov 2020 14:13:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <035d10c0-2774-8d1c-b55f-e075f04344e7@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="UBcp8WDq5VoO9yRZgYr5x4iBik254Vd9l"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--UBcp8WDq5VoO9yRZgYr5x4iBik254Vd9l
Content-Type: multipart/mixed; boundary="ykSdU5tvKgx4hnlICDlPwc9psFT0nBMOp";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Message-ID: <20d3ee40-950f-e4f9-00d0-a5274c17771f@suse.com>
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
References: <20201110112118.99960-1-roger.pau@citrix.com>
 <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
 <20201110124900.2hjgn45i7ynf33p3@Air-de-Roger>
 <035d10c0-2774-8d1c-b55f-e075f04344e7@citrix.com>
In-Reply-To: <035d10c0-2774-8d1c-b55f-e075f04344e7@citrix.com>

--ykSdU5tvKgx4hnlICDlPwc9psFT0nBMOp
Content-Type: multipart/mixed;
 boundary="------------CB5577FAF3A37CBDE15CA46E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------CB5577FAF3A37CBDE15CA46E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10.11.20 14:07, Andrew Cooper wrote:
> On 10/11/2020 12:49, Roger Pau Monné wrote:
>> On Tue, Nov 10, 2020 at 12:31:14PM +0100, Jürgen Groß wrote:
>>> On 10.11.20 12:21, Roger Pau Monne wrote:
>>>> Fix the command line document to account for credit2 now being the
>>>> default scheduler.
>>>>
>>>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>> ---
>>>>    docs/misc/xen-command-line.pandoc | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>>>> index 4ae9391fcd..789aead148 100644
>>>> --- a/docs/misc/xen-command-line.pandoc
>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>>>>    ### sched
>>>>    > `= credit | credit2 | arinc653 | rtds | null`
>>>> -> Default: `sched=credit`
>>>> +> Default: `sched=credit2`
>>>>    Choose the default scheduler.
>>>>
>>> Tried that before:
>>>
>>> https://lists.xen.org/archives/html/xen-devel/2019-01/msg01097.html
>>>
>>> And Andrew didn't like it...
>> One way or another we need to get this fixed in the document. Listing
>> credit as still being the default is wrong.
>
> I agree that what is there is wrong, but so is saying credit2.
>
> This documentation is for users, because developers know exactly how they
> configured their schedulers, and won't actually need to refer to it.
>
> As a consequence, it depends heavily on what a specific
> distro/downstream chose, config-wise.
>
>> I think there are several places in xen-command-line.pandoc that just
>> contain the default values set in Kconfig, so IMO if we want to
>> change this it should be done wholesale rather than picking on every
>> default value change. Opinions?
>
> xen-command-line.pandoc wholly predates Kconfig.  We're only in this
> mess because previous patches haven't been appropriately reviewed.
>
> Let's fix it up to be correct, but let's not delay fixing this to look for
> other potential cases.

The ultimate fix would be to generate this document according to
Kconfig settings. :-D


Juergen

--------------CB5577FAF3A37CBDE15CA46E
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------CB5577FAF3A37CBDE15CA46E--

--ykSdU5tvKgx4hnlICDlPwc9psFT0nBMOp--

--UBcp8WDq5VoO9yRZgYr5x4iBik254Vd9l
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+qkfYFAwAAAAAACgkQsN6d1ii/Ey9d
jggAnsipR5PeH3fRnnhb8V7V8Hq/LTvapnzETCbawd1XTg8eHfprxnyDT6+nlbWu6/cqy9lXVVGs
Czj+fk+CdKnlKbzPh6grFB3S9qmJbcb+pbkzFlCY18gw2e993tOBBpPe1uBGShwIvntPClO8ANRQ
XkZ07YgDjqoAFWD0vRuRmivLcdoKoIgFB4p84JGV4VC/oY6LS2G8ygzjEuI8ky+oLbv4L/BdNO/B
01Fg4Gt+DSmdH1bCkEOdtNjNeUPsgy2P2A/viRz0NTQKgaezM/n0913/wVytNCJswPNHGiI2cUN+
HscOk/xU75iVkqCkj7a+w+LhFKJhP19ESZpqHzO6XQ==
=am7L
-----END PGP SIGNATURE-----

--UBcp8WDq5VoO9yRZgYr5x4iBik254Vd9l--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23369.50055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTUL-0001aX-8V; Tue, 10 Nov 2020 13:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23369.50055; Tue, 10 Nov 2020 13:15:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTUL-0001aQ-5U; Tue, 10 Nov 2020 13:15:09 +0000
Received: by outflank-mailman (input) for mailman id 23369;
 Tue, 10 Nov 2020 13:15:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcTUJ-0001aI-Qp
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:15:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b251ff40-e675-4d8f-a5bc-bc5231568bde;
 Tue, 10 Nov 2020 13:15:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EEF06ABD6;
 Tue, 10 Nov 2020 13:15:03 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcTUJ-0001aI-Qp
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:15:07 +0000
X-Inumbo-ID: b251ff40-e675-4d8f-a5bc-bc5231568bde
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b251ff40-e675-4d8f-a5bc-bc5231568bde;
	Tue, 10 Nov 2020 13:15:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605014104;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9si9W8u3F5evKhqgh7QdWPPNH2bY3QcqUEHMhYPU1/E=;
	b=jD0NgWRKHWzNdaoTjpLMEkQomSF1Pu/l5zZyhOX7n9ElgTP4siIFKr2a4rt0rR4kqrlH9o
	FhytsiO8yAID36HRAG3b8BU6RsyghqVvL/2eBHdMFyGDmXh0lIAqzqb8liXYMj3DlfUPpj
	X8CNeHK8pzeG/oy5q4SELJT8SUmTazk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EEF06ABD6;
	Tue, 10 Nov 2020 13:15:03 +0000 (UTC)
Subject: Re: [PATCH] docs: fix documentation to notice credit2 is the default
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20201110112118.99960-1-roger.pau@citrix.com>
 <b9ca219d-b6d7-9f59-3ede-9b4c9225e01b@suse.com>
 <20201110124900.2hjgn45i7ynf33p3@Air-de-Roger>
 <035d10c0-2774-8d1c-b55f-e075f04344e7@citrix.com>
 <20d3ee40-950f-e4f9-00d0-a5274c17771f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1e44b5b1-7406-c459-eac5-78af76ccbf34@suse.com>
Date: Tue, 10 Nov 2020 14:15:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20d3ee40-950f-e4f9-00d0-a5274c17771f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 14:13, Jürgen Groß wrote:
> On 10.11.20 14:07, Andrew Cooper wrote:
>> On 10/11/2020 12:49, Roger Pau Monné wrote:
>>> On Tue, Nov 10, 2020 at 12:31:14PM +0100, Jürgen Groß wrote:
>>>> On 10.11.20 12:21, Roger Pau Monne wrote:
>>>>> Fix the command line document to account for credit2 now being the
>>>>> default scheduler.
>>>>>
>>>>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>>> ---
>>>>>    docs/misc/xen-command-line.pandoc | 2 +-
>>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>>>>> index 4ae9391fcd..789aead148 100644
>>>>> --- a/docs/misc/xen-command-line.pandoc
>>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>>>>>    ### sched
>>>>>    > `= credit | credit2 | arinc653 | rtds | null`
>>>>> -> Default: `sched=credit`
>>>>> +> Default: `sched=credit2`
>>>>>    Choose the default scheduler.
>>>>>
>>>> Tried that before:
>>>>
>>>> https://lists.xen.org/archives/html/xen-devel/2019-01/msg01097.html
>>>>
>>>> And Andrew didn't like it...
>>> One way or another we need to get this fixed in the document. Listing
>>> credit as still being the default is wrong.
>>
>> I agree that what is there is wrong, but so is saying credit2.
>>
>> This documentation is for users, because developers know exactly how they
>> configured their schedulers, and won't actually need to refer to it.
>>
>> As a consequence, it depends heavily on what a specific
>> distro/downstream chose, config-wise.
>>
>>> I think there are several places in xen-command-line.pandoc that just
>>> contain the default values set in Kconfig, so IMO if we want to
>>> change this it should be done wholesale rather than picking on every
>>> default value change. Opinions?
>>
>> xen-command-line.pandoc wholly predates Kconfig.  We're only in this
>> mess because previous patches haven't been appropriately reviewed.
>>
>> Let's fix it up to be correct, but let's not delay fixing this to look for
>> other potential cases.
> 
> The ultimate fix would be to generate this document according to
> Kconfig settings. :-D

Except that's not suitable for putting up as a web page for
everyone to use as the "canonical" reference. Every distro could
do so, sure.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:19:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:19:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23378.50071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTYn-0001oH-T7; Tue, 10 Nov 2020 13:19:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23378.50071; Tue, 10 Nov 2020 13:19:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTYn-0001oA-Q8; Tue, 10 Nov 2020 13:19:45 +0000
Received: by outflank-mailman (input) for mailman id 23378;
 Tue, 10 Nov 2020 13:19:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcTYm-0001o0-Do
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:19:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca5d3cd4-0a20-4a0a-865d-a307583a7e8a;
 Tue, 10 Nov 2020 13:19:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0453ABCC;
 Tue, 10 Nov 2020 13:19:40 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcTYm-0001o0-Do
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:19:44 +0000
X-Inumbo-ID: ca5d3cd4-0a20-4a0a-865d-a307583a7e8a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ca5d3cd4-0a20-4a0a-865d-a307583a7e8a;
	Tue, 10 Nov 2020 13:19:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605014380;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xjIxpzoJoK2RSmU+/Sp56xcGMtt7dE1k677M7iQdhr0=;
	b=Sf/jqmcV09prnR3VZfsTw7JOT4nFDEAmgPie/XOceYuIAy+77v+rzSsKqHSD48N2nAvvpc
	0CKh2O+eheI3BSXGn/BgeDG5MdYGUNgBH1qh4cdRrurbo8eTt1mOR4SjaIFOCJaVPTIBpq
	Qpjh8ticDZuHqkFwQH0emVb1uuGkHlc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D0453ABCC;
	Tue, 10 Nov 2020 13:19:40 +0000 (UTC)
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201110093142.hkufamaepn67gv43@Air-de-Roger>
 <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
 <20201110111603.rarf7ncddrkswlxs@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <586bb9e5-bb90-bb27-3010-e702d65e301c@suse.com>
Date: Tue, 10 Nov 2020 14:19:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110111603.rarf7ncddrkswlxs@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 12:16, Roger Pau Monné wrote:
> On Tue, Nov 10, 2020 at 11:06:46AM +0100, Jan Beulich wrote:
>> On 10.11.2020 10:31, Roger Pau Monné wrote:
>>> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
>>>> Under certain conditions CPUs can speculate into the instruction stream
>>>> past a RET instruction. Guard against this just like 3b7dab93f240
>>>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>>>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>>>> achieve this that differ: A set of macros gets introduced to post-
>>>> process RET insns issued by the compiler (or living in assembly files).
>>>>
>>>> Unfortunately for clang this requires further features their built-in
>>>> assembler doesn't support: We need to be able to override insn mnemonics
>>>> produced by the compiler (which may be impossible, if internally
>>>> assembly mnemonics never get generated), and we want to use \(text)
>>>> escaping / quoting in the auxiliary macro.
>>>
>>> Could this have an option to enable/disable at build time?
>>
>> Well, a subsequent patch adds a config option for this, which in
>> the worst case could be turned off. I'm afraid though I'm not
>> clear about the question, because ...
>>
>>> FreeBSD will drop GNU as quite soon from base, and albeit it can be
>>> installed as a package I would like to be able to build Xen using a
>>> toolchain based on LLVM exclusively.
>>
>> ... it's not clear to me what the implications here are: Are you
>> saying -no-integrated-as is not going to function anymore, unless
>> people explicitly install gas? If that's not what you meant to
>> indicate, then I don't see how building would become impossible.
> 
> I'm still inquiring about this, but I would say that when gas is
> removed from FreeBSD then the 'as' command would be mapped to llvm-as,
> and thus -no-integrated-as would hit the same issues as the integrated
> as. So far in Xen we have assumed that -no-integrated-as would
> fall back to an as capable of doing what the integrated clang as
> doesn't support, but that might not be the case.

At which point Xen couldn't be built anyway, I expect. If llvm-as
isn't sufficiently gas-compatible, we've lost (right now at least).

> Ideally we would have to re-run the tests with -no-integrated-as, in
> order to assert that the external as is really capable of what the
> internal one is not.

And if it doesn't, what would we do other than failing the build
(which it would also if we didn't do the 2nd round of checks)?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:21:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23384.50083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTaw-0002bw-9C; Tue, 10 Nov 2020 13:21:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23384.50083; Tue, 10 Nov 2020 13:21:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTaw-0002bp-6N; Tue, 10 Nov 2020 13:21:58 +0000
Received: by outflank-mailman (input) for mailman id 23384;
 Tue, 10 Nov 2020 13:21:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcTau-0002bj-Dq
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:21:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6fe35e8-bd66-4a21-8fa6-d65cb2ad7555;
 Tue, 10 Nov 2020 13:21:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93BD4ABCC;
 Tue, 10 Nov 2020 13:21:43 +0000 (UTC)
X-Inumbo-ID: b6fe35e8-bd66-4a21-8fa6-d65cb2ad7555
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605014503;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MVUdvBPE8v73QV9es/7ggX0+OQLQnLELoXD9zmWU8Po=;
	b=nKmv5C6LRPUzQpuMNcVzUHsMVtlIgOwKdBS+00Lgv6CsHcOScHOviRG/rVdrW897LxTKc3
	qbx6PhO8fsmC/h7spnZ5Hr1UF77VeTYleIIc3j74cfEHQLZ93FmS9B7PctXA8acSN/tzp0
	f6Cbz2zL6Ic7mxIyrTmCp1KgsNJ1td0=
Subject: Re: [PATCH 3/5] x86/p2m: suppress audit_p2m hook when possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <722cf75e-da6a-49c5-472a-898796c9030e@suse.com>
 <20201110113002.maox2v2w6om4lmik@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae87c1d8-0b4b-8a6a-156d-9f596d73a7bf@suse.com>
Date: Tue, 10 Nov 2020 14:21:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110113002.maox2v2w6om4lmik@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 12:30, Roger Pau Monné wrote:
> On Wed, Oct 28, 2020 at 10:23:42AM +0100, Jan Beulich wrote:
>> When P2M_AUDIT is false, it's unused, so instead of having a dangling
>> NULL pointer sit there, omit the field altogether.
>>
>> Instead of adding "#if P2M_AUDIT && defined(CONFIG_HVM)" in even more
>> places, fold the latter part right into the definition of P2M_AUDIT.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks. Since I see you keep replying to v1, are you aware that
there is a v2? For this particular patch there actually is a
clang-related fix in v2, which may be of interest to you.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:26:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:26:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23391.50095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTez-0002nw-QK; Tue, 10 Nov 2020 13:26:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23391.50095; Tue, 10 Nov 2020 13:26:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTez-0002np-N8; Tue, 10 Nov 2020 13:26:09 +0000
Received: by outflank-mailman (input) for mailman id 23391;
 Tue, 10 Nov 2020 13:26:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcTez-0002nk-85
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:26:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1cda6e8-b040-4c24-941c-0e4732f8d19e;
 Tue, 10 Nov 2020 13:26:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 95DCBABD1;
 Tue, 10 Nov 2020 13:26:03 +0000 (UTC)
X-Inumbo-ID: f1cda6e8-b040-4c24-941c-0e4732f8d19e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605014763;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=jjUdDj8RZpSQepLOmH3M8F5TnSNPzxKb8DoMJBUyFlg=;
	b=ZX5QyjLNzK7WW6rHhgf1oBdMikvNGxEJeX3hv6V9YPUiArak2yhuOAN/Z/Onxf98jLNRK1
	KdEyeR1OuK3Vb7qE4+ULoSexpCGkbqN0v68BAicUiCVcv9ltFzAKa+H1wB0t338v8WaIB1
	wVOknrAtpDI3ey98a3s/OdULRTkINdk=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: de-duplicate scatters to the same linear address
Message-ID: <6064996d-943f-1be3-9bfd-e872149da2a1@suse.com>
Date: Tue, 10 Nov 2020 14:26:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The SDM specifically allows for earlier writes to fully overlapping
ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
would crash it if varying data was written to the same address. Detect
overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
be quite a bit more difficult.

Note that due to cache slot use being linear address based, there's no
similar issue with multiple writes to the same physical address (mapped
through different linear addresses).

Since this requires an adjustment to the EVEX Disp8 scaling test,
correct a comment there at the same time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: The SDM isn't entirely unambiguous about the faulting behavior in
     this case: If a fault would need delivering on the earlier slot
     despite the write getting squashed, we'd have to call ops->write()
     with size set to zero for the earlier write(s). However,
     hvm/emulate.c's handling of zero-byte accesses extends only to the
     virtual-to-linear address conversions (and raising of involved
     faults), so in order to also observe #PF changes to that logic
     would then also be needed. Can we live with a possible misbehavior
     here?

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -647,8 +647,8 @@ static const uint16_t imm0f[16] = {
 static struct x86_emulate_ops emulops;
 
 /*
- * Access tracking (by granular) is used on the first 64 bytes of address
- * space. Instructions get encode with a raw Disp8 value of 1, which then
+ * Access tracking (byte granular) is used on the first bytes of address
+ * space. Instructions get encoded with a raw Disp8 value of 1, which then
  * gets scaled accordingly. Hence accesses below the address <scaling factor>
  * as well as at or above 2 * <scaling factor> are indications of bugs. To
  * aid diagnosis / debugging, track all accesses below 3 * <scaling factor>.
@@ -804,6 +804,31 @@ static void test_one(const struct test *
 
     asm volatile ( "kxnorw %k1, %k1, %k1" );
     asm volatile ( "vxorps %xmm4, %xmm4, %xmm4" );
+    if ( sg && (test->opc | 3) == 0xa3 )
+    {
+        /*
+         * Non-prefetch scatters need handling specially, due to the
+         * overlapped write elimination done by the emulator. The order of
+         * indexes chosen is somewhat arbitrary, within the constraints
+         * imposed by the various different uses.
+         */
+        static const struct {
+            int32_t _[16];
+        } off32 = {{ 1, 0, 2, 3, 7, 6, 5, 4, 8, 10, 12, 14, 9, 11, 13, 15 }};
+
+        if ( test->opc & 1 )
+        {
+            asm volatile ( "vpmovsxdq %0, %%zmm4" :: "m" (off32) );
+            vsz >>= !evex.w;
+        }
+        else
+            asm volatile ( "vmovdqu32 %0, %%zmm4" :: "m" (off32) );
+
+        /* Scale by element size. */
+        instr[6] |= (evex.w | 2) << 6;
+
+        sg = false;
+    }
 
     ctxt->regs->eip = (unsigned long)&instr[0];
     ctxt->regs->edx = 0;
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -10032,25 +10032,47 @@ x86_emulate(
 
         for ( i = 0; op_mask; ++i )
         {
-            long idx = b & 1 ? index.qw[i] : index.dw[i];
+            long idx = (b & 1 ? index.qw[i]
+                              : index.dw[i]) * (1 << state->sib_scale);
+            unsigned long offs = truncate_ea(ea.mem.off + idx);
+            unsigned int j;
 
             if ( !(op_mask & (1 << i)) )
                 continue;
 
-            rc = ops->write(ea.mem.seg,
-                            truncate_ea(ea.mem.off +
-                                        idx * (1 << state->sib_scale)),
-                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
-            if ( rc != X86EMUL_OKAY )
+            /*
+             * hvmemul_linear_mmio_access() will find a cache slot based on
+             * linear address. hvmemul_phys_mmio_access() will crash the
+             * domain if observing varying data getting written to the same
+             * address within a cache slot. Utilize that squashing earlier
+             * writes to fully overlapping addresses is permitted by the spec.
+             */
+            for ( j = i + 1; j < n; ++j )
             {
-                /* See comment in gather emulation. */
-                if ( rc != X86EMUL_EXCEPTION && done )
-                    rc = X86EMUL_RETRY;
-                break;
+                long idx2 = (b & 1 ? index.qw[j]
+                                   : index.dw[j]) * (1 << state->sib_scale);
+
+                if ( (op_mask & (1 << j)) &&
+                     truncate_ea(ea.mem.off + idx2) == offs )
+                    break;
+            }
+
+            if ( j >= n )
+            {
+                rc = ops->write(ea.mem.seg, offs,
+                                (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
+                if ( rc != X86EMUL_OKAY )
+                {
+                    /* See comment in gather emulation. */
+                    if ( rc != X86EMUL_EXCEPTION && done )
+                        rc = X86EMUL_RETRY;
+                    break;
+                }
+
+                done = true;
             }
 
             op_mask &= ~(1 << i);
-            done = true;
 
 #ifdef __XEN__
             if ( op_mask && local_events_need_delivery() )


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:43:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23400.50107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTvI-0004WR-AE; Tue, 10 Nov 2020 13:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23400.50107; Tue, 10 Nov 2020 13:43:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcTvI-0004WK-6O; Tue, 10 Nov 2020 13:43:00 +0000
Received: by outflank-mailman (input) for mailman id 23400;
 Tue, 10 Nov 2020 13:42:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcTvG-0004WF-Kf
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:42:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46cb0cff-b73b-454b-bf23-04ccb13b576b;
 Tue, 10 Nov 2020 13:42:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcTvE-0000nd-7A; Tue, 10 Nov 2020 13:42:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcTvD-00040u-SQ; Tue, 10 Nov 2020 13:42:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcTvD-0006gf-Ry; Tue, 10 Nov 2020 13:42:55 +0000
X-Inumbo-ID: 46cb0cff-b73b-454b-bf23-04ccb13b576b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Iq1MNgFDRJH0oivcZdxfFDUl5i8S4CZlUYTjWkbKta8=; b=grn8qLY4MJp3Z0ZgvyArkB/wER
	oR4RiVNtdzboyYWARozApg5SWvPQidd6A5Qb/Fnqqh4/g5/JFZKYdY/QWF2MmR/NiqLc5uAKqFAUr
	AvCMMZzkon4FqummNr6eU2ET9unkEFGmAYpOdkyqO64oJwxm4pnbJgJETAcPWDSpd3fU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156592-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156592: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-armhf-armhf-xl-rtds:debian-fixup:fail:allowable
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=46ad8841ac4da8fc2a128e49b8406966bf5d3601
X-Osstest-Versions-That:
    xen=4f9294d21c47415376215d68a0298e88582b8e7a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 13:42:55 +0000

flight 156592 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156592/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     13 debian-fixup             fail REGR. vs. 156515

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156515
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156515
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156515
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156515
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156515
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156515
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156515
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156515
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156515
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156515
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156515
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156515
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  46ad8841ac4da8fc2a128e49b8406966bf5d3601
baseline version:
 xen                  4f9294d21c47415376215d68a0298e88582b8e7a

Last test of basis   156515  2020-11-06 05:10:43 Z    4 days
Testing same since   156592  2020-11-09 18:08:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4f9294d21c..46ad8841ac  46ad8841ac4da8fc2a128e49b8406966bf5d3601 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 13:51:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 13:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23423.50177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcU3G-0005lD-VY; Tue, 10 Nov 2020 13:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23423.50177; Tue, 10 Nov 2020 13:51:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcU3G-0005l6-S1; Tue, 10 Nov 2020 13:51:14 +0000
Received: by outflank-mailman (input) for mailman id 23423;
 Tue, 10 Nov 2020 13:51:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcU3G-0005l1-83
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:51:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4187f26b-cc3c-4f0e-943c-1917eea4e1ff;
 Tue, 10 Nov 2020 13:51:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13AD0ABCC;
 Tue, 10 Nov 2020 13:51:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605016272;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=69BB1CsLHAFZWYWWFWN0GZb+tOjRcN4OnsOFVzu2VUA=;
	b=jEJEpGnZ8ao3sXW8By32WyYe1SJQJPSGQrYM626MJbZOXUeKLL715UsR4kFXvSxbaIpVa8
	qTdY7okSfB7iTMZpjj7RHkqfvGsOSvQmON8fT+ovdPRLpOrHe9w1xb3I8so8kqaycnyE1H
	Q0nh2j4s8hstwMnzd4kTO+M5OhNObaE=
Subject: Re: [PATCH 2/5] x86/p2m: collapse the two ->write_p2m_entry() hooks
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
 <20201110110611.p3twf6rmy7qdlxa7@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b4932c75-c9c4-1da0-2218-fe3cb959e2e2@suse.com>
Date: Tue, 10 Nov 2020 14:51:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110110611.p3twf6rmy7qdlxa7@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 12:06, Roger Pau Monné wrote:
> On Wed, Oct 28, 2020 at 10:22:58AM +0100, Jan Beulich wrote:
>> @@ -1132,7 +1132,13 @@ void p2m_pt_init(struct p2m_domain *p2m)
>>      p2m->recalc = do_recalc;
>>      p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
>>      p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
>> -    p2m->write_p2m_entry = write_p2m_entry;
>> +
>> +    /* Still too early to use paging_mode_hap(). */
>> +    if ( hap_enabled(p2m->domain) )
>> +        hap_p2m_init(p2m);
>> +    else if ( IS_ENABLED(CONFIG_SHADOW_PAGING) )
>> +        shadow_p2m_init(p2m);
> 
> There's already some logic in p2m_initialise() that checks for
> hap_enabled for EPT-specific initialization. Do you think you could
> move this there so that it's more contained?
> 
> I think having the initialization condition sprinkled all over the
> different functions makes the logic more complicated to follow.
> 
> Also, should hap_p2m_init be limited to HAP and PT, as opposed to HAP
> and EPT which doesn't use the helper AFAICT?

It is limited to HAP and PT - we're in p2m_pt_init() here. This is
also why I don't want to move it to e.g. p2m_initialise(), as that
would be the wrong layer.

> Maybe it would be clearer to unify shadow_write_p2m_entry with
> hap_write_p2m_entry and call it p2m_pt_write_p2m_entry to match the
> rest of the p2m PT helpers?

This goes along the lines of what I'd put up as a post-
commit-message remark in "x86/p2m: collapse the two
->write_p2m_entry() hooks". The nested handler is perhaps the
bigger problem with such merging, and it would also feel a little
like a layering violation (which is why I put up the question
instead of doing it right away).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 14:00:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 14:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23431.50190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUBh-00061l-R8; Tue, 10 Nov 2020 13:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23431.50190; Tue, 10 Nov 2020 13:59:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUBh-00061e-Nu; Tue, 10 Nov 2020 13:59:57 +0000
Received: by outflank-mailman (input) for mailman id 23431;
 Tue, 10 Nov 2020 13:59:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcUBg-00061X-4b
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 13:59:56 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7183c7c1-02f4-42f6-9a50-451e64f28a20;
 Tue, 10 Nov 2020 13:59:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605016794;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=r5iiOjedk2JU9fzX4kP+66vTVAhWk/gH5xKq5Rr3854=;
  b=NWt88M9cwhwb9TsY/qADV5XdO8A1d5BGzcg9qtpMDyw7Zw37UzJu4KrM
   dvv/50YaZeD67s9cQsAJs5rOjNv8vQntCzYMIigREwCj2R/qt3N82YHqE
   ikZtzGbRr0QkbMuHJB28q8hqWjSlvw47JTOWa7mfZww1xD/6NnL8DYD77
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: KvQvBNbeciOL82CpdT0xg7nsO9hLVQSgof/K0mswKEe/yAegLlzp4AutzKX8VYPD64rygkkZzj
 c/YBxXK8EW2rxEiemaswfYK1R3QanENqvPsHnqM0OOAnArmoEUQdWTNJstCqKric/iXajtdLa+
 HzE6gkAcygjdNEQgx0vtks1kOA5p+ThQd+BT6tw8WAbwSJ5zc8aJZigNX460Vy7ND0D6IgWI69
 CJbwkowdgIPZgXMKN+w8e5pJmLO/YFSuFHwpnbuQPlByR85v83UbjoCldmP2FlUtiSA5SRw868
 tHI=
X-SBRS: None
X-MesageID: 31189355
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31189355"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TfLZsaR4G0yfBJkl0nYw2l9BePWqFyJ9ENIIfnAOHacNvsMIAWkIebk0Sx1zxYZJ7g11cWPCa2B1NO+VtAzUFzSZSugLL6acsec9ZKPErHp89EFkukaybIcq/lbVPuNrltrar9tZQk8Sxn8G2WreYul/mr7HUj5ZLiWFtWxJSuhV2wxnUyY0jY/enMYsvVLIVM0c1h4rTw7FF/PbF5u23HVxczcDLKqMvLF2U413WL3A2gXuiIGhJQXa7VxcIHBybi1Z8hRJBmmu7Nz15fvl9gtDiO9nD1Q03rUkgaHPL+SsT/tTNzS6Fn/5VUFMs8nkIK5JPyq7wy6kDM5HJAFGsQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=riQl1Ui0MG+1X4/c0TWUuj7C8+7QwXUUp/MQS5CEwXY=;
 b=dxBICnYxtQk0MW1NJzIakfLRjRNXPkHzKaLr77dkC6hN83nIaynCX6Yvp4BsNK1/yt3ia2dygfz5zykKgyzprS/0/bOtpPYf12FhGNFVHbesiegc41dg2lEYtB5pzdMbnDctPEuutEGphY5fTygth8uuZRII8YqNoNjW62izvGzIndVus1d9G8l4r0+pMgxi8b3nef8W/OTMHTRbvSY/Cx1qHmTBoty3FIseu6lA8sPeIiJ8p5svrc1xCp47DGv6Q8qv4z7cJpbjcfJVnAzYDSPM6IvFJa2wP66OaFEfvpH630zIq2IT4DJhGUd7ZKI2lyV+1Zc7ZQl0oZ+JJQTN1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=riQl1Ui0MG+1X4/c0TWUuj7C8+7QwXUUp/MQS5CEwXY=;
 b=lFdEofygpb/cpMLPyKw/f70o0M3W8acWNSV12CRDM4zwLXZ1V6i+2zNrQeqFRn5tturadZRKXsQvPzbmp5rHi+ylsqaOUrvgFl7tjJeFe+XT0CynEdp8N/npMDRDmkMvOTd1Ef04ru9ORTBEUXUjU/eyLDFao0oam+ounGgO60A=
Date: Tue, 10 Nov 2020 14:59:44 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
X-ClientProxiedBy: LO4P123CA0066.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:153::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d7128cb9-3ab1-4c54-6555-08d88580e3b0
X-MS-TrafficTypeDiagnostic: DM6PR03MB4476:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4476E0382A5ABB223DEED7A78FE90@DM6PR03MB4476.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: rnymQO5L3s0c1GKola8AVTexNd2nH4CMiSfpemwovjplEAZ63kYvS+VAy0Ok7aHC4Mz5x9DcSsdLiExcJ/RrGWHgPUYZ71mKMUSd2Iljp7XtsSnYa0sBDfaKa9AiHTVqCDSyxJ8fu+CfHeTtL9UHqEVRfQYn/1I2STP6ViM/oBwPCtpfa3zMJzAVyuIYp9toSDLne6+cfHn1FChclMGBIbqyKcDjCSInJTEzsNIGlV70cjhE1Vb3MBAXEMwBRLvfyn0DULIU6/tfNbmVNP+50Dt4ZmCX6q5asJ022jbwIVeamzx5J6stbD8FvCzsANCyLWljfbQxmsEP8CkLCSlY+NPVkzcjngjP0Yh1lVdo9oWESxTjQfAIdsHzp+wejt5d
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(376002)(366004)(136003)(346002)(396003)(66946007)(2906002)(186003)(83380400001)(6666004)(66556008)(6486002)(6916009)(26005)(6496006)(54906003)(4326008)(85182001)(5660300002)(1076003)(66476007)(316002)(478600001)(86362001)(16526019)(956004)(8936002)(33716001)(9686003)(8676002)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: En949OIrW0cEShQRYvey+lfjYHPxi6Kldkcvz4rwpW6aLPIEmByTdKlY299bXC4qsaoXxRmVgcfApmsmuBTUX2FXhte90+1u8A+yeyN6ZdgBiD3n6aKBWXoTqUrB2PipYv1zrgKs55sW8L/BeIhCtRcgcCJq0Vwv6UWRnCJGLtQLcwq8fUqF1I7ID5IE96ChoDG5dTgh2rN6MfjHpg9py/AE8Q8QjvTwh+4mi1+PwIn+t6nkr2cEuvbaZlukwY8f7FJ3iCIm+TZRcx0gwevfUSfX+oVrHJNl1oFVJWon5r3yTYple9cEV0Q9DzKIplcn8LsaIIkOI+nwBAAyu9wJLzTae/iKpa/tGi1Ze9brNg42IJQktSAOLtmA9Cp4+bVYgezMTTz91GobohJHAYKyUfNJJX5T5+I4NWgzaRPD6tsZ3/K5dy34jDByu0hTy8vitJG2ulvESstKh01M7V+xy8fW2+TUt6IXmGgd5VUl/gTQed8XIkUpKay5/g5C97JfFV5xcnEvT+IK9hxu3ZwjL5/Nh7Muu5V+Gkc+gKFqJpaIbKd6+p3ResSLG6kKvCeTmLVirR+IX6UU7/wwSOGd027P6Xbyk4julurRkDIYwhzyDMuULVYonUys0tIqONysvpGhRjebubDqEUlqbTl30H1yM71uU6HfeWH7eSnftNrUArre+KgiAWlFq4ZDDHxUoGHSOosr2udzgVjzNgE4nbV/V7B8HfLuu04KojEf/yBHWjA2cElSWy11iHMY2NwQ096iIDQeZJw6JYN0eGOQ53zBBRLgacw8luS5SR511DNNOhcN329mFEUI+6IfbBLiU1mA8Ex5i+XVHmeHBiNJstdpcHJJMlsJUzU2pmTIuQ9hoRnQXnO2N9zfKLqNrhiQ6zLFaw/NESZv2/JZezcZ3Q==
X-MS-Exchange-CrossTenant-Network-Message-Id: d7128cb9-3ab1-4c54-6555-08d88580e3b0
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 13:59:49.6605
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: r3XNpl9f+rqsyT+UuQdSNFczFQpk47beL5ZkPRsmvNX1JacJ4m7z3LIIaRT3O4PH9EqxwAC8s3PrBOOe+Zebqg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4476
X-OriginatorOrg: citrix.com

On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
> Fair parts of the present handlers are identical; in fact
> nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
> common parts right into write_p2m_entry(), splitting the hooks into a
> "pre" one (needed just by shadow code) and a "post" one.
> 
> For the common parts moved, I think the p2m_flush_nestedp2m() call is,
> at least from an abstract perspective, also applicable in the shadow
> case. Hence it doesn't get a 3rd hook put in place.
> 
> The initial comment that was in hap_write_p2m_entry() gets dropped: Its
> placement was bogus, and looking back at the commit introducing it
> (dd6de3ab9985 "Implement Nested-on-Nested") I can't see what use of a
> p2m it was meant to be associated with either.

Is there any performance implication of moving from one hook to two
hooks? Since this shouldn't be a hot path I assume it's fine.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: This is effectively the alternative to the suggestion in an earlier
>      patch that we might do away with the hook altogether. Of course a
>      hybrid approach would also be possible, by using direct calls here
>      instead of splitting the hook into two.
> ---
> I'm unsure whether p2m_init_nestedp2m() zapping the "pre" hook is
> actually correct, or whether previously the sh_unshadow_for_p2m_change()
> invocation was wrongly skipped.
> 
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -774,55 +774,18 @@ static void hap_update_paging_modes(stru
>      put_gfn(d, cr3_gfn);
>  }
>  
> -static int
> -hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
> -                    l1_pgentry_t new, unsigned int level)
> +static void
> +hap_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
>  {
>      struct domain *d = p2m->domain;
> -    uint32_t old_flags;
> -    mfn_t omfn;
> -    int rc;
>  
> -    /* We know always use the host p2m here, regardless if the vcpu
> -     * is in host or guest mode. The vcpu can be in guest mode by
> -     * a hypercall which passes a domain and chooses mostly the first
> -     * vcpu. */
> -
> -    paging_lock(d);
> -    old_flags = l1e_get_flags(*p);
> -    omfn = l1e_get_mfn(*p);
> -
> -    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
> -                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
> -                          omfn, level);
> -    if ( rc )
> -    {
> -        paging_unlock(d);
> -        return rc;
> -    }
> -
> -    safe_write_pte(p, new);
> -    if ( old_flags & _PAGE_PRESENT )
> +    if ( oflags & _PAGE_PRESENT )
>          guest_flush_tlb_mask(d, d->dirty_cpumask);
> -
> -    paging_unlock(d);
> -
> -    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) &&
> -         !p2m_get_hostp2m(d)->defer_nested_flush &&
> -         /*
> -          * We are replacing a valid entry so we need to flush nested p2ms,
> -          * unless the only change is an increase in access rights.
> -          */
> -         (!mfn_eq(omfn, l1e_get_mfn(new)) ||
> -          !perms_strictly_increased(old_flags, l1e_get_flags(new))) )
> -        p2m_flush_nestedp2m(d);
> -
> -    return 0;
>  }
>  
>  void hap_p2m_init(struct p2m_domain *p2m)
>  {
> -    p2m->write_p2m_entry = hap_write_p2m_entry;
> +    p2m->write_p2m_entry_post = hap_write_p2m_entry_post;
>  }
>  
>  static unsigned long hap_gva_to_gfn_real_mode(
> --- a/xen/arch/x86/mm/hap/nested_hap.c
> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> @@ -71,24 +71,11 @@
>  /*        NESTED VIRT P2M FUNCTIONS         */
>  /********************************************/
>  
> -int
> -nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> -    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
> +void
> +nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
>  {
> -    struct domain *d = p2m->domain;
> -    uint32_t old_flags;
> -
> -    paging_lock(d);
> -
> -    old_flags = l1e_get_flags(*p);
> -    safe_write_pte(p, new);
> -
> -    if (old_flags & _PAGE_PRESENT)
> -        guest_flush_tlb_mask(d, p2m->dirty_cpumask);
> -
> -    paging_unlock(d);
> -
> -    return 0;
> +    if ( oflags & _PAGE_PRESENT )
> +        guest_flush_tlb_mask(p2m->domain, p2m->dirty_cpumask);
>  }

This is a verbatim copy of hap_write_p2m_entry_post. I assume there's
a reason why we need both, but I'm missing it.

>  
>  /********************************************/
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
>  {
>      struct domain *d = p2m->domain;
>      struct vcpu *v = current;
> -    int rc = 0;
>  
>      if ( v->domain != d )
>          v = d->vcpu ? d->vcpu[0] : NULL;
>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
>           p2m_is_nestedp2m(p2m) )
> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
> +    {
> +        unsigned int oflags;
> +        mfn_t omfn;
> +        int rc;
> +
> +        paging_lock(d);
> +
> +        if ( p2m->write_p2m_entry_pre )
> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
> +
> +        oflags = l1e_get_flags(*p);
> +        omfn = l1e_get_mfn(*p);
> +
> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
> +                              omfn, level);
> +        if ( rc )
> +        {
> +            paging_unlock(d);
> +            return rc;
> +        }
> +
> +        safe_write_pte(p, new);
> +
> +        if ( p2m->write_p2m_entry_post )
> +            p2m->write_p2m_entry_post(p2m, oflags);
> +
> +        paging_unlock(d);
> +
> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
> +             (oflags & _PAGE_PRESENT) &&
> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
> +             /*
> +              * We are replacing a valid entry so we need to flush nested p2ms,
> +              * unless the only change is an increase in access rights.
> +              */
> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
> +            p2m_flush_nestedp2m(d);

It feels slightly weird to have a nested p2m post hook, and yet have
nested-specific code here.

Have you considered whether the post hook could be moved outside of the
locked region, so that we could put this chunk there in the nested p2m
case?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 14:01:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 14:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23436.50202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUD5-0006vs-5e; Tue, 10 Nov 2020 14:01:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23436.50202; Tue, 10 Nov 2020 14:01:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUD5-0006vl-2f; Tue, 10 Nov 2020 14:01:23 +0000
Received: by outflank-mailman (input) for mailman id 23436;
 Tue, 10 Nov 2020 14:01:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcUD3-0006ve-19
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 14:01:21 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a996ad91-ebbe-49d6-ae82-38f33ce2576f;
 Tue, 10 Nov 2020 14:01:19 +0000 (UTC)
X-Inumbo-ID: a996ad91-ebbe-49d6-ae82-38f33ce2576f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605016879;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=4nQbDztq7vUsnFcGfi22P7BrxkrZLow7DXw6fyKcMSw=;
  b=DkYLCIZBgmNFqrTKBgpt0RbtHz8drVRZHYaJQOETRorgBjMB+eolSKMp
   hZ7KZxCuKR+UIMOslblUp8+xZoZXDsq83ogoenUBVIA/UjT4k8UpVisix
   V+he0GSP6ezYrfG8L81mZGPL/y1iVBdphZbU3473fbKTEdmMfC8CmIv/L
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: uZTPLRCQA/o+O+5aULFK3NqufAa+I9YeBF+7vAn0ftWn36OoTHbizTHOlzhbESKstgP5zxGwcb
 Njq+OC4vDJxgPeDxUOSbR7/ZXAMDhcsMzcKQCSMwWKcs6/+jlvgCas7V3n3+amGfBHRGLJY7ED
 BV7WaDaWnkvkbXrZERuYNYEd6hJ091ri8jyYuD43QOAFfjc3y8sb64JEChRxeN6gnQdsg1j6Kh
 LvKBlczJpk6mS3b3mHpdtEKFxxFCQ/BeSPcS2yYk4mZBSwFT/l05aqOmYSUHR05XTEyD19RCYg
 e7s=
X-SBRS: None
X-MesageID: 31083993
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31083993"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OKDrMRU3QGqRyo0BurZIqKsX67wtz75p4glmdcKtCfEqQaGo8JOAQ09uB67caW9cFspcatRkgvPGugtysMi45A1IEDtk7BqWZAmQ9m9yhgJT6X5wbeB1cnJRMeWcqgo6enjsmiJGb//xD7NOMR69CDpd+L3PZ0D9zrA8zjARtuhqf87/cPzyTm1Ng7LlH9qwKKsoNDp9SV8rqk7Xmd9MOSETp8uBM4nvkgup/xOqRd4AdFZ/j8hnYfhvOQPKt0E7vUKDD7tfrQJOZnYrlrK1QMKQ23Lq/KKAA1vQ70gCelG0CcmwPmuesoc43WUsndvmRz6x2/u0/7CPf4Ql6iDrVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d7owhZAVtvNdWNQB+tGU55PWMNYvbI7Tg2m0AILIEfM=;
 b=OvQOA3BaVj/R6uF7QhQjEMcWp0Wq5Zmg8iPS5lCGbeNtSBW5HqJec2WIbx9VkwJsLQoSDSIy8vjrSlS6oAwumObea7mzJ0haEl923WV4h/oGlc4JFspZTK6eZibZ8vwpp63MAmhFRoii0MQxHgw8XN+65u/Eq0pjTw7suBX3HI5DK0TE/Rvn6JOvuJ4Mou/ZmK8ZkMt5TLh3xfoAfOSW0Ik2QDmcsV+1p0nbf+DS5V+zS2qbUtRHNWuc6Reu0ArI0xii1asjt0j2a6A3qePishs7bAHzILX2/i4iAZmRRB5phbhjzLRpZOrmYJP23jc+AFykfI09WgE+xCKvaEH4MQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d7owhZAVtvNdWNQB+tGU55PWMNYvbI7Tg2m0AILIEfM=;
 b=pbZkiEJIo9foUB7Nf4g7cQ9sKmerUPEWmnwHw0RY2CF/nTeT5RzTUG1pLHFTU8goI0O7H3D3LeXIFZjm4RzUa5YvU2H4il8LcGuQgIcZDupr2l2BQE4upCUmD45BgxHvw9X6jyMfD4AhOkHFp9S0lvk57SMlLDSWhYDj1evOFIo=
Date: Tue, 10 Nov 2020 15:01:03 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 3/5] x86/p2m: suppress audit_p2m hook when possible
Message-ID: <20201110140103.6yalbcdgxl76ydzu@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <722cf75e-da6a-49c5-472a-898796c9030e@suse.com>
 <20201110113002.maox2v2w6om4lmik@Air-de-Roger>
 <ae87c1d8-0b4b-8a6a-156d-9f596d73a7bf@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ae87c1d8-0b4b-8a6a-156d-9f596d73a7bf@suse.com>
X-ClientProxiedBy: LO2P265CA0037.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::25) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e0cde2f3-df72-4e27-51b6-08d885811267
X-MS-TrafficTypeDiagnostic: DM6PR03MB3580:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB35805FEA60D9A3811D2F94308FE90@DM6PR03MB3580.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: UGaUs60mY5IjDD42J2bnTAT4N+wNwPctVWbmg1OfS/VFOE4Bfq9q0IL7rxm/r1jgotxO/85GnEs5qhUCclSZoz/XYC5uWd9ZIXSSl1I3ges6ppvy0Xr0YBOWE2tVhPgEWAlBkUV9IRFo3R6hjdJMJOBUYFTMzaPTpqJRPhW/djyYLQGf65+ErJRYLvlnV74jstsBI9hmslx1LNHDBraR3O3R7GhNm+QKBRzP3NNKKTUA76AOE2BTZeOD6QLj/bX6LCEzddoIckkZnYLN7elBk0xydN6+i1f8bfIOOs2V5KMto6ixh5/e9Br/B0FIwaC519T7VlDf5BPeMb9mZV7RwA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(136003)(396003)(346002)(366004)(376002)(6666004)(107886003)(54906003)(6916009)(66476007)(4744005)(33716001)(66946007)(66556008)(956004)(8676002)(4326008)(316002)(86362001)(478600001)(85182001)(16526019)(9686003)(26005)(5660300002)(186003)(6486002)(1076003)(8936002)(2906002)(53546011)(6496006);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: hxhi5e8KhcqVWSqy52hBhYhSW3Kydv1aIfFtL9UJkipDa8FPaRBIN9a7r2dlxdfKCBZEiZ0TpWott7Dc0TK1gipp34Z0Ev9eYu5N8g64Y9JHQOhuFxVWRh0iMh1ldF8j7tDOYFIh58s8ddPlEhQYHe3Dan1zARd8WjG3qd8T1LHg6t6MuVorYLaO4Cw7eH5v4TtJkyKFzV87NWK8sw4R0FpwFfK2rqry4dGosbpcyuoMMrV5o7NFG3FYgLggHrLDHyQ0P5YG5gpmxQO4bpgIjDuhwI1LlS5OEITtY+moqUYEKlciNg+CbWCHNOMzL5aMHREkKk1JQHnAU+qBFFwfL+FyB7DOhyzAlxTupF21Avok9jZlhCnN1vopb+tks17rVGqH/SVhYmLdmwgR+1cgLhE3h3WkILTI7fZDr/Ssrh91Dp6R5eoLV3E6dWRhS5mU4LidN5KmrQ/22d/19CBHuiejvyCwfb7Bz3ioJBDuPFhk3ZOXhJ71UFvgGs4u9bbSKY904/u8tphZNQOROFTgNzZVyCKSmyy0W+qhJt+gGWtOtayYNuWSBUBtgGPVXcvXAgA3HugqRkAux5d74RdisxifNjf3NW3FK+3X/DCqNhbCxS6kpnyBH9FlffSBxD2JZTWi/plobALdxQJJo1iTSywJ07T32YVn30e29BV3j7gxlmqKF/5CFWTjH6US/KY3gNN4cKvyj8tdvBWViIw4ftv5ZwBC3sshlIndMJbH2VA3e4dp/n39UakimQdlWhyWHL3yd+t2WNJn9NzTeo0J7J/g49czOFQ0jS3tOQZtlh5/e1sjCl1coCll8CI7SqX9XNaTc/nb3QUnIWrb8I6QXbTQdwrNo5AEPzSg0TPBZ8/zIPxmIQ4e6sjAz7jpgMKX1Ly79jWNdZvruvmMEYyqzA==
X-MS-Exchange-CrossTenant-Network-Message-Id: e0cde2f3-df72-4e27-51b6-08d885811267
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 14:01:07.9688
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9q/NksdBIN6K0LEq1fhYEYh4kMzlmBrw6h79Z6Vx+X8HkJiQlaAXHmnwXBIiV66i8kcblswIasv/ImBkZFh7Cw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3580
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 02:21:43PM +0100, Jan Beulich wrote:
> On 10.11.2020 12:30, Roger Pau Monné wrote:
> > On Wed, Oct 28, 2020 at 10:23:42AM +0100, Jan Beulich wrote:
> >> When P2M_AUDIT is false, it's unused, so instead of having a dangling
> >> NULL pointer sit there, omit the field altogether.
> >>
> >> Instead of adding "#if P2M_AUDIT && defined(CONFIG_HVM)" in even more
> >> places, fold the latter part right into the definition of P2M_AUDIT.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks. Since I see you keep replying to v1, are you aware of
> there being v2? For this particular patch there actually is a
> clang related fix in v2, which may be of interest to you.

Urg, no sorry. I was just going over my mail queue and didn't realize
there was a v2. Will take a look.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 14:10:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 14:10:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23445.50214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcULh-0007t7-39; Tue, 10 Nov 2020 14:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23445.50214; Tue, 10 Nov 2020 14:10:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcULg-0007t0-VO; Tue, 10 Nov 2020 14:10:16 +0000
Received: by outflank-mailman (input) for mailman id 23445;
 Tue, 10 Nov 2020 14:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcULg-0007sv-6C
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 14:10:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51d9cb08-52d9-45d3-a0a9-039ade9a789c;
 Tue, 10 Nov 2020 14:10:08 +0000 (UTC)
X-Inumbo-ID: 51d9cb08-52d9-45d3-a0a9-039ade9a789c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605017408;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=W9kR6cuGuPuO59b0WBsQShBKvzrrQm5UGbFW1DW/7uc=;
  b=Oznqxa08v/m58jYuZcBEz7QbImCWpne47d3/4XwccjznBZpdE7i04Yw2
   e6V1cEBbtHNRCC0Zkj3+GM2ZgKoc1QNrSDjPVbuTWmT3Uc0j5ya1gozR/
   r6SPd7M6BzMRFixiKLzhVXY/DOgotKNC9bUbSK4FaLu+/Cf69FkEeoAQX
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: G7VR7kDP5V3iIPLn0XsrBiu3nbzshPs6kHk5H3eeTo6pZ5YQ3rdaROV24AnaEo6HqppRULRd1c
 NnS7USd+dypmyWQxqaEUXBOcB8UV3QgjswUmCFBq96PtoAst5p8c5uSkceuQIfgMEonIqvWqHe
 PezWCCX05E93u1oUOksIE8GDBNx+GWp+TySVXR92XO76cIjf7doBwULX31hmwMp1VZ9zR8xH0P
 uZHfpsGnlBNX3pt32CVslSxT0pozyzB5aFD7etHmAyHTmiHpA4QUIowNDWle7i6lNqdEMVL1/L
 vTQ=
X-SBRS: None
X-MesageID: 31191703
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,466,1596513600"; 
   d="scan'208";a="31191703"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BciKau0nzy4nz0z6qMVZLBIwWYc+iqAQ/LsWVONVzM5/CrvCnYBFGzAVgyWli5+7+Uva2HuXort5mtL3u9k7d/oSJ9Kn/QExMwDO0KXE32cJS+T/+zAs2qtyJYFxlj8Q+nv3RbC3EWwe9il8yHz+8hU13d9yJ3Hr8rWiWRLN/gQEIrKOZkUs9CjPkYUg7dbMB6xt/xIJdmHt6b6kvkHPkakfalv0+PmQ8dtxPQKKPTo6IT5oyxlgabradJoacOCXKTZw2nwdGVa8URYNMCvT6XJATl50aBQGQRXgbdXvdoHr5F8crdNmyb1x0OiPNgpEqDMe6XD2U/yQLExYjmNkZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8IOmeltT7hFatd+i1U39dua0XWlKFgIOmapkPc/V650=;
 b=Z/WP+oZthT3YgxyXjvwAwrsDo6R3KmynGzJfh1v335q1y4pSMtyA9q71DzlZRqfEfuPcLkdei1cwjvn9JIbf8dRJAtm1bOrG99ft1ZcKMIUkbGXi9oLfqiHIwiZ212tWr1KK87PLbmP5McWReQJZ2x8t+3zt4RiGy0ID22XQIseRFQ6rz2yxjYqh1ZcIUd9sn9ucsKrlL3VROsDu4sSzU/128gL5pGv12jupJJsJZDWjH9lD99fv2N+7hQCJJ16ir02gJo/SCIygJ5Rp3ko6kGXJ0Kk1MRZTzsf53dC5F3HgXcyLVufLtU0/iZok6BaXUT6OBLDwDEQAWY0d5W1qrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8IOmeltT7hFatd+i1U39dua0XWlKFgIOmapkPc/V650=;
 b=Qgg/FN5lpuJt8vSM/MHXGAyFQDWA0BCTHiRiheS7qotx63UQlFlrJ2b0KITp6iTlNK0ILA3XULtJZU3lQFp39HjtTIR1YX23hmsJep4EpmgBvq+2zyndpPi4x3SpV7h2qh+V9rLsDkPI7lNTen1DljzRcnzUGjiWJIhiqoqGXf4=
Date: Tue, 10 Nov 2020 15:08:56 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201110140856.dtdql7lkwzwijko2@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201110093142.hkufamaepn67gv43@Air-de-Roger>
 <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
 <20201110111603.rarf7ncddrkswlxs@Air-de-Roger>
 <586bb9e5-bb90-bb27-3010-e702d65e301c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <586bb9e5-bb90-bb27-3010-e702d65e301c@suse.com>
X-ClientProxiedBy: LO4P123CA0009.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::14) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f27cd11f-2d5a-48ec-91db-08d885822cda
X-MS-TrafficTypeDiagnostic: DM6PR03MB5340:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5340B4CC90F6A0DB5CEBB5938FE90@DM6PR03MB5340.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: rdRNDHmQiY8j5wek53spxHQlimsy52f7pIIqaylJ9PyKIt7CREf39fBQzn8PphhCwHwh4k6BIgIVyjGUVSwAZiZ2+XHuOxXKvrBR+UmHiQvFV4osS8b45r/8ZxbgIBArM1ivaEwnplfINuBeN/8XnYU+20mbAd4jvCHHrOKOLURgd2g4tc39e4krXZOdorSC8zN1B5UdSLu4a1Q4ZYzSyrok0Xex4bj5XmP2HalBnjU7JGVJ4BQNpG2oa+i9xLqfcswDpE3SMMe5sU1dlcyLya6yNFVzC8deXzLj9IV03Z86vSkhlz58B/eXIY/Atfci/omi5/qNsAD1Beb1V5Ifkg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(376002)(396003)(346002)(39860400002)(366004)(8676002)(6496006)(54906003)(2906002)(6916009)(956004)(8936002)(16526019)(316002)(6666004)(478600001)(186003)(6486002)(26005)(66556008)(66476007)(66946007)(9686003)(1076003)(53546011)(85182001)(4326008)(33716001)(83380400001)(5660300002)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: H1kSfY8gjyZeBQ+ShnzRjq5dEKwPi/UK+L26EipoDz5qLZIW2ALoPzcyhJl1jzNU5Y7ym+bAZv/5x+UWaRHFdvlued6v8h1JN7My1VC/h0aowpmvM0eg1kZqxw3iVTYylpZha4TV8ypsTLr2oceDW9/lQF4OZfjCKTw2W3WhgS1jeqN2oOnlsUkTMn+mkAyITj8oWu0xy+XyK3iglyAz6A0biUGBJlmef6fjwOZs/yI3k2DLszurIg22YB29+sCr5HWFiP5gws4L3hXKbuOrQi6cDNW/wlrBLCt8ZGNpJx82Hqv2x7LWiA3ZdR9BXpac/64OIYQE7sdVfew9qBl3MqEbCWdVc9NSLgPLei/+KeezjsZ3CpvqYeS3gbY2hjukBlbB+IhvVX4XK72n/glNOIKPb6GkhQvYy6ORbnh39VV4mUc+GSuVvVdPFcYJFgdMj4egNwvT2Zkm/DhuRIOV3gDVYemyHy/jzJGTAHVCFa33F+oVwQxGtkBYF5itxXESzuePZWfUEAipD7ft8XepnJdBnj6CmZpkj1+Ehor/xfZfjxFmeto62T0rFdWzv0mA/gGC//9UK7FALCuYuAN10hUWkjHMFOAkTmmNVP8LxG9IUgdO6vHqE7WtkKxQaGws24YypRNWmMj1P7iyKUEhj7GHbYobA/MWC0xctABsLgoH72dkWDrFQchQb5JBWMDsoI1OYLGNrbRGuDj+oF7sb8rlx6Zj6+jt7HYgHwZndm1mSuBgLLFXhZta2GepGEBiWr2DaLfTO6MmYq9TmxmW0ZQQinCnwr7Z8IXUvtWmVUJUnosXlsWWsEYgAn9qDKRlE5vsRJ9v9DJYpdTo0JZ9A9G8VNtGHYhDkWQ8C+3S2gWqU5Ux3OGIYF7Rt/NW4E0as7+EnWKiZXn771mG7/Gh6g==
X-MS-Exchange-CrossTenant-Network-Message-Id: f27cd11f-2d5a-48ec-91db-08d885822cda
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 14:09:01.9800
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9WIlELdmmMpZqXAPFRYeFOXhTlnMs6x/BDWw06kE9+xQG7cXnY3lxbohToiOsvdGGvg02OJCuRoG0Wpx25cENw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5340
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 02:19:40PM +0100, Jan Beulich wrote:
> On 10.11.2020 12:16, Roger Pau Monné wrote:
> > On Tue, Nov 10, 2020 at 11:06:46AM +0100, Jan Beulich wrote:
> >> On 10.11.2020 10:31, Roger Pau Monné wrote:
> >>> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> >>>> Under certain conditions CPUs can speculate into the instruction stream
> >>>> past a RET instruction. Guard against this just like 3b7dab93f240
> >>>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> >>>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> >>>> achieve this that differ: A set of macros gets introduced to post-
> >>>> process RET insns issued by the compiler (or living in assembly files).
> >>>>
> >>>> Unfortunately for clang this requires further features their built-in
> >>>> assembler doesn't support: We need to be able to override insn mnemonics
> >>>> produced by the compiler (which may be impossible, if internally
> >>>> assembly mnemonics never get generated), and we want to use \(text)
> >>>> escaping / quoting in the auxiliary macro.
> >>>
> >>> Could this have an option to enable/disable at build time?
> >>
> >> Well, a subsequent patch adds a config option for this, which in
> >> the worst case could be turned off. I'm afraid though I'm not
> >> clear about the question, because ...
> >>
> >>> FreeBSD will drop GNU as from base quite soon, and although it can be
> >>> installed as a package, I would like to be able to build Xen using a
> >>> toolchain based on LLVM exclusively.
> >>
> >> ... it's not clear to me what the implications here are: Are you
> >> saying -no-integrated-as is not going to function anymore, unless
> >> people explicitly install gas? If that's not what you meant to
> >> indicate, then I don't see how building would become impossible.
> > 
> > I'm still inquiring about this, but I would say that when gas is
> > removed from FreeBSD then the 'as' command would be mapped to llvm-as,
> > and thus -no-integrated-as would hit the same issues as the integrated
> > as. So far in Xen we have assumed that -no-integrated-as would
> > fall back to an as capable of doing what the integrated clang as
> > doesn't support, but that might not be the case.
> 
> At which point Xen couldn't be built anyway, I expect. If llvm-as
> isn't sufficiently gas-compatible, we've lost (right now at least).
> 
> > Ideally we would have to re-run the tests with -no-integrated-as, in
> > order to assert that the external as is really capable of what the
> > internal one is not.
> 
> And if it doesn't, what would we do other than failing the build
> (which it would also if we didn't do the 2nd round of checks)?

I would always prefer a clear message (i.e., "your toolstack is not
capable of building Xen") rather than a weird build-time failure.

Also we could maybe disable certain options by default if the
toolstack doesn't have the required support to build them?

Has anyone reported this shortcoming to upstream llvm, so they are
aware and can work on this or maybe provide an alternative way to
achieve the same result?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 14:33:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 14:33:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23463.50239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUhV-0001Ku-8T; Tue, 10 Nov 2020 14:32:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23463.50239; Tue, 10 Nov 2020 14:32:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUhV-0001Kn-56; Tue, 10 Nov 2020 14:32:49 +0000
Received: by outflank-mailman (input) for mailman id 23463;
 Tue, 10 Nov 2020 14:32:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcUhT-0001Ke-JE
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 14:32:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b34f6fe6-3aa9-4c25-9083-0a50c71bcb2f;
 Tue, 10 Nov 2020 14:32:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2097ABCC;
 Tue, 10 Nov 2020 14:32:43 +0000 (UTC)
X-Inumbo-ID: b34f6fe6-3aa9-4c25-9083-0a50c71bcb2f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605018763;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SMu4wIQkPJZ7C9X9ZeDrNY0IAJqareRqvRzMcZmSHHY=;
	b=tqRRY8Zc/TQb1Scy6g5lq74iRbrHCri8mbf8Y4Hhg0B7ysgeaiKrxht6Pl71+MuTZ+ZJlF
	a54WC2SUq/OXviXK6Ep3LTyY2LO7CjmQxB0ufaHQNvuMmopX6TPBh9yMpKekt46Ukr8fcW
	e09zLIEuBbLaMKnftHtlRX/NbYERu5E=
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201110093142.hkufamaepn67gv43@Air-de-Roger>
 <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
 <20201110111603.rarf7ncddrkswlxs@Air-de-Roger>
 <586bb9e5-bb90-bb27-3010-e702d65e301c@suse.com>
 <20201110140856.dtdql7lkwzwijko2@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <63ac07fc-1a71-b765-007e-571550970833@suse.com>
Date: Tue, 10 Nov 2020 15:32:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110140856.dtdql7lkwzwijko2@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 15:08, Roger Pau Monné wrote:
> On Tue, Nov 10, 2020 at 02:19:40PM +0100, Jan Beulich wrote:
>> On 10.11.2020 12:16, Roger Pau Monné wrote:
>>> On Tue, Nov 10, 2020 at 11:06:46AM +0100, Jan Beulich wrote:
>>>> On 10.11.2020 10:31, Roger Pau Monné wrote:
>>>>> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
>>>>>> Under certain conditions CPUs can speculate into the instruction stream
>>>>>> past a RET instruction. Guard against this just like 3b7dab93f240
>>>>>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>>>>>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>>>>>> achieve this that differ: A set of macros gets introduced to post-
>>>>>> process RET insns issued by the compiler (or living in assembly files).
>>>>>>
>>>>>> Unfortunately for clang this requires further features their built-in
>>>>>> assembler doesn't support: We need to be able to override insn mnemonics
>>>>>> produced by the compiler (which may be impossible, if internally
>>>>>> assembly mnemonics never get generated), and we want to use \(text)
>>>>>> escaping / quoting in the auxiliary macro.
>>>>>
>>>>> Could this have an option to enable/disable at build time?
>>>>
>>>> Well, a subsequent patch adds a config option for this, which in
>>>> the worst case could be turned off. I'm afraid though I'm not
>>>> clear about the question, because ...
>>>>
>>>>> FreeBSD will drop GNU as from base quite soon, and although it can be
>>>>> installed as a package, I would like to be able to build Xen using a
>>>>> toolchain based on LLVM exclusively.
>>>>
>>>> ... it's not clear to me what the implications here are: Are you
>>>> saying -no-integrated-as is not going to function anymore, unless
>>>> people explicitly install gas? If that's not what you meant to
>>>> indicate, then I don't see how building would become impossible.
>>>
>>> I'm still inquiring about this, but I would say that when gas is
>>> removed from FreeBSD then the 'as' command would be mapped to llvm-as,
>>> and thus -no-integrated-as would hit the same issues as the integrated
>>> as. So far in Xen we have assumed that -no-integrated-as would
>>> fall back to an as capable of doing what the integrated clang as
>>> doesn't support, but that might not be the case.
>>
>> At which point Xen couldn't be built anyway, I expect. If llvm-as
>> isn't sufficiently gas-compatible, we've lost (right now at least).
>>
>>> Ideally we would have to re-run the tests with -no-integrated-as, in
>>> order to assert that the external as is really capable of what the
>>> internal one is not.
>>
>> And if it doesn't, what would we do other than failing the build
>> (which it would also if we didn't do the 2nd round of checks)?
> 
> I would always prefer a clear message (i.e., "your toolstack is not
> capable of building Xen") rather than a weird build-time failure.

Fair point in general.

> Also we could maybe disable certain options by default if the
> toolstack doesn't have the required support to build them?

We could, but I'm afraid this will go down the route of embedding
tool chain capabilities in xen/.config, which I continue to not
consider a good idea (and the thread got stalled, as expected).

In fact (also to Andrew and Anthony), recently I've become aware
of another shortcoming of this model: Our kernel packages contain
.config files for the various architectures and specific per-
architecture flavors. It used to be easy to update them on any
system, until the tool chain capability checks got introduced.
Now, in order to update them, one has to use the precise versions
of the various tool chain parts that will be used on the build
hosts, or else an error may result (due to unexpected changes to
the file), or one may unknowingly turn off options that are
expected to be enabled.

Put more generally - if I handed someone a specific .config, I'd
expect their resulting binary to contain what I did set up. Or
for them to report back that they can't build the thing. But it
should not be the case that the .config got silently changed and
certain functionality disabled just because they use a different
tool chain.
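The concern above is about capability probes of roughly the following shape (a hypothetical fragment in Linux-kbuild Kconfig style; the symbol names are invented, not actual Xen options). The probed value is computed from whichever assembler is present when .config is generated, so a .config handed to another host can silently flip the dependent option:

```kconfig
# Hypothetical sketch of a tool-chain capability probe.  The symbol's
# value is computed from the *local* assembler at configuration time,
# so a .config produced on one host may silently disable the feature
# when reused on a host with a lesser tool chain.
config CC_HAS_RET_GUARD
	def_bool $(as-instr,int3)

config SPECULATIVE_HARDEN_RET
	bool "Guard RETs against straight-line speculation"
	depends on CC_HAS_RET_GUARD
	default y
```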

> Has anyone reported this shortcoming to upstream llvm, so they are
> aware and can work on this or maybe provide an alternative way to
> achieve the same result?

I didn't and I'm unaware of anyone else possibly having done so.
That said, I consider it sort of obvious that the goal of
replacing the GNU tool chain implies being fully compatible (and
presumably better in certain areas).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 14:36:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 14:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23475.50253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUl0-0001Xp-Um; Tue, 10 Nov 2020 14:36:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23475.50253; Tue, 10 Nov 2020 14:36:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUl0-0001Xi-Ri; Tue, 10 Nov 2020 14:36:26 +0000
Received: by outflank-mailman (input) for mailman id 23475;
 Tue, 10 Nov 2020 14:36:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8xyM=EQ=gmail.com=bestofboston9@srs-us1.protection.inumbo.net>)
 id 1kcUkz-0001Xc-MC
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 14:36:25 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a92610d-1f55-4427-a4c4-ca89f3cd2b82;
 Tue, 10 Nov 2020 14:36:25 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id c17so12888720wrc.11
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 06:36:25 -0800 (PST)
X-Inumbo-ID: 3a92610d-1f55-4427-a4c4-ca89f3cd2b82
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3a92610d-1f55-4427-a4c4-ca89f3cd2b82;
	Tue, 10 Nov 2020 14:36:25 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id c17so12888720wrc.11
        for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 06:36:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=+38AGrc6bohEGufZoH3Iz9R98zqgUWm9z5uK3ylOrBU=;
        b=OzazuIJlVm5vyLp79aGJ/ENznoDl/jAHzTKkrgznLKHZ+KKG0T4q7VH1YddboO4jYQ
         6tXAEth411wHsdEZUzNLS7eV1h3bTTp89wohT42zDN0k5qopzBhwgmfhdLOfQtms1lPP
         tgiAUN9zjjsLKgyv6yFIQ0sKN1CqkJkEYK2noKq2/QDtYLVCu/vO3APMBHmUArqcPRU7
         Q1jI2HQoy52tUOyzTz5XFouDndZZBiZGUKhXzmJ3I3Pz71VdMQO3Ff9lEFr/6BFwxx1N
         uzoDcrcXZIqaksrjoOnwLss6DXzkIuEi90xWLljaN8yHlumS2EgtWzRmgq7b26G68h5h
         Olng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=+38AGrc6bohEGufZoH3Iz9R98zqgUWm9z5uK3ylOrBU=;
        b=VPljCJUL12uMX+BPnUSAGT4IoBNPLVNHNoaJm8AfnOx1XXnT/hBcKvm9CQp7BkPeZT
         iAZCkhSNR1Z2d0Vc5n2d6POr7C9g65a4zgMJ5LawIffzoe5kV9ClfqHaVCv8D8uKqmjY
         OJ3EHXNDhakkDAnYqgUI7zvwXRNoTT/8xpkV0p/NtRDHZQ+WERqjJDfCqLNgeBftae8G
         ejFhGBW0JwsEwzRh0YMyBNNlNAFT3svnwgIa4CnU1ZBCOIJov1+Jm0ED/DZku8NbRw64
         /e5UujeWPwqgIndtwISzEFW1kPi8L6s4p5lyxl+Nst/tC4kxE0IHTxuQZ3Ni+AyHmRTv
         UzGA==
X-Gm-Message-State: AOAM531Q2jmETwBaf4n6VO3JczeyDw+p/eZdnzhhazC504mltwCKoc4e
	iNVm/+dsvR4TF1yu/FJwByDMc515smhHBj7U+1qjV5Kb+XIrimLj
X-Google-Smtp-Source: ABdhPJzofyWMbsWT/f5njWX710solCjUoTqAcRJOxw3B1RNIay6GG6XbrFYvy9awqvxDTonHTYk96AD237GMIfKyack=
X-Received: by 2002:adf:e388:: with SMTP id e8mr24106900wrm.65.1605018984034;
 Tue, 10 Nov 2020 06:36:24 -0800 (PST)
MIME-Version: 1.0
From: Scott Davis <bestofboston9@gmail.com>
Date: Tue, 10 Nov 2020 09:36:13 -0500
Message-ID: <CAPjWd+wmFc=OGLpmxeZiW5euvByO=fiecRk9UR3Wsre6Y+j3qQ@mail.gmail.com>
Subject: UNSUB
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="00000000000066b6d705b3c19a95"

--00000000000066b6d705b3c19a95
Content-Type: text/plain; charset="UTF-8"

UNSUB xen-devel

--00000000000066b6d705b3c19a95
Content-Type: text/html; charset="UTF-8"

<div dir="ltr">UNSUB xen-devel</div>

--00000000000066b6d705b3c19a95--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 14:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 14:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23485.50266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUyu-0003Ev-8A; Tue, 10 Nov 2020 14:50:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23485.50266; Tue, 10 Nov 2020 14:50:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcUyu-0003Eo-54; Tue, 10 Nov 2020 14:50:48 +0000
Received: by outflank-mailman (input) for mailman id 23485;
 Tue, 10 Nov 2020 14:50:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcUys-0003Ej-Rt
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 14:50:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2411ae47-7c1c-42bd-942f-fa6fdc7e244e;
 Tue, 10 Nov 2020 14:50:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85BECACF5;
 Tue, 10 Nov 2020 14:50:44 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kcUys-0003Ej-Rt
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 14:50:46 +0000
X-Inumbo-ID: 2411ae47-7c1c-42bd-942f-fa6fdc7e244e
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2411ae47-7c1c-42bd-942f-fa6fdc7e244e;
	Tue, 10 Nov 2020 14:50:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605019844;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mcM2MSsUdFJePUZru7Sp5fyBCMz/gtP77qxAlImoS2M=;
	b=rxm/Vr7UoCtc/hIBqRNSRbMvDmFFRNW7xbJDEg5Y+DDyXFm2iD/icK8IjHXY70gpHUwOrF
	iaD2U0hnb6rf2uw5fVHozFas+R85UWFQWfNKhHdqDaOvNNulYFpDdc/2VwMX4YRGGGon3U
	AzsN+Yu14xT33qRYxji9ov0qEq/Bmfo=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 85BECACF5;
	Tue, 10 Nov 2020 14:50:44 +0000 (UTC)
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
Date: Tue, 10 Nov 2020 15:50:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 14:59, Roger Pau Monné wrote:
> On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
>> Fair parts of the present handlers are identical; in fact
>> nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
>> common parts right into write_p2m_entry(), splitting the hooks into a
>> "pre" one (needed just by shadow code) and a "post" one.
>>
>> For the common parts moved I think that the p2m_flush_nestedp2m() is,
>> at least from an abstract perspective, also applicable in the shadow
>> case. Hence it doesn't get a 3rd hook put in place.
>>
>> The initial comment that was in hap_write_p2m_entry() gets dropped: Its
>> placement was bogus, and looking back at the commit introducing it
>> (dd6de3ab9985 "Implement Nested-on-Nested") I can't see either what use
>> of a p2m it was meant to be associated with.
> 
> Is there any performance implication of moving from one hook to two
> hooks? Since this shouldn't be a hot path I assume it's fine.

Well, first of all just a couple of patches ago two indirect
calls were folded into one, so it's at least not getting worse
compared to where we started from. And then both HAP and nested
install just one of the two hooks.

As per the remark in an earlier patch, referred to ...

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: This is effectively the alternative to the suggestion in an earlier
>>      patch that we might do away with the hook altogether. Of course a
>>      hybrid approach would also be possible, by using direct calls here
>>      instead of splitting the hook into two.

... here, there is the option of doing away with the indirect
calls altogether, but - as said earlier - at the price of at
least coming close to a layering violation.

>> --- a/xen/arch/x86/mm/hap/nested_hap.c
>> +++ b/xen/arch/x86/mm/hap/nested_hap.c
>> @@ -71,24 +71,11 @@
>>  /*        NESTED VIRT P2M FUNCTIONS         */
>>  /********************************************/
>>  
>> -int
>> -nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
>> -    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
>> +void
>> +nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
>>  {
>> -    struct domain *d = p2m->domain;
>> -    uint32_t old_flags;
>> -
>> -    paging_lock(d);
>> -
>> -    old_flags = l1e_get_flags(*p);
>> -    safe_write_pte(p, new);
>> -
>> -    if (old_flags & _PAGE_PRESENT)
>> -        guest_flush_tlb_mask(d, p2m->dirty_cpumask);
>> -
>> -    paging_unlock(d);
>> -
>> -    return 0;
>> +    if ( oflags & _PAGE_PRESENT )
>> +        guest_flush_tlb_mask(p2m->domain, p2m->dirty_cpumask);
>>  }
> 
> This is a verbatim copy of hap_write_p2m_entry_post. I assume there's
> a reason why we need both, but I'm missing it.

Only almost, since HAP has

    if ( oflags & _PAGE_PRESENT )
        guest_flush_tlb_mask(d, d->dirty_cpumask);

instead (i.e. they differ in which dirty_cpumask gets used).

>> --- a/xen/arch/x86/mm/p2m-pt.c
>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
>>  {
>>      struct domain *d = p2m->domain;
>>      struct vcpu *v = current;
>> -    int rc = 0;
>>  
>>      if ( v->domain != d )
>>          v = d->vcpu ? d->vcpu[0] : NULL;
>>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
>>           p2m_is_nestedp2m(p2m) )
>> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
>> +    {
>> +        unsigned int oflags;
>> +        mfn_t omfn;
>> +        int rc;
>> +
>> +        paging_lock(d);
>> +
>> +        if ( p2m->write_p2m_entry_pre )
>> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
>> +
>> +        oflags = l1e_get_flags(*p);
>> +        omfn = l1e_get_mfn(*p);
>> +
>> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
>> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
>> +                              omfn, level);
>> +        if ( rc )
>> +        {
>> +            paging_unlock(d);
>> +            return rc;
>> +        }
>> +
>> +        safe_write_pte(p, new);
>> +
>> +        if ( p2m->write_p2m_entry_post )
>> +            p2m->write_p2m_entry_post(p2m, oflags);
>> +
>> +        paging_unlock(d);
>> +
>> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
>> +             (oflags & _PAGE_PRESENT) &&
>> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
>> +             /*
>> +              * We are replacing a valid entry so we need to flush nested p2ms,
>> +              * unless the only change is an increase in access rights.
>> +              */
>> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
>> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
>> +            p2m_flush_nestedp2m(d);
> 
> It feels slightly weird to have a nested p2m hook post, and yet have
> nested specific code here.
> 
> Have you considered if the post hook could be moved outside of the
> locked region, so that we could put this chunk there in the nested p2m
> case?

Yes, I did, but I don't think the post hook can be moved out. The
only alternative therefore would be a 3rd hook. And this hook would
then need to be installed on the host p2m for nested guests, as
opposed to nestedp2m_write_p2m_entry_post, which gets installed in
the nested p2m-s. As said in the description, the main reason I
decided against a 3rd hook is that I suppose the code here isn't
HAP-specific (while prior to this patch it was).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 15:38:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 15:38:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23496.50277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcViQ-0006nN-TB; Tue, 10 Nov 2020 15:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23496.50277; Tue, 10 Nov 2020 15:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcViQ-0006nG-QN; Tue, 10 Nov 2020 15:37:50 +0000
Received: by outflank-mailman (input) for mailman id 23496;
 Tue, 10 Nov 2020 15:37:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcViP-0006nB-Gr
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 15:37:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0baa1a42-7dd6-47e9-a025-6f74b6476aff;
 Tue, 10 Nov 2020 15:37:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcViK-0003Hg-Nd; Tue, 10 Nov 2020 15:37:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcViK-0008Hm-GB; Tue, 10 Nov 2020 15:37:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcViK-0007fp-Fj; Tue, 10 Nov 2020 15:37:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcViP-0006nB-Gr
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 15:37:49 +0000
X-Inumbo-ID: 0baa1a42-7dd6-47e9-a025-6f74b6476aff
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0baa1a42-7dd6-47e9-a025-6f74b6476aff;
	Tue, 10 Nov 2020 15:37:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=Hf86lN04BWqqhdsd+NxrfgVpMXAq54xsQCzaR07XM1s=; b=gz8rzfIx1SRlV5mkF+zub8c8DM
	xwSOp1hVrYzI4FCtQ61IyXWr/a7QeyOIUdVN0FQ/6VJetABjwrmntcUBinNz3DqFI0sXyiAVKialM
	jaAJ20ZTWmfk379AMSeYzIasiBFX/VRSt9OArWiYoUPXQFXfj7cZhLXA/zXaCLinCIVw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcViK-0003Hg-Nd; Tue, 10 Nov 2020 15:37:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcViK-0008Hm-GB; Tue, 10 Nov 2020 15:37:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcViK-0007fp-Fj; Tue, 10 Nov 2020 15:37:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-amd64-libvirt-xsm
Message-Id: <E1kcViK-0007fp-Fj@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 15:37:44 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156624/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-libvirt-xsm.guest-start --summary-out=tmp/156624.bisection-summary --basis-template=156443 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-amd64-libvirt-xsm guest-start
Searching for failure / basis pass:
 156588 fail [host=chardonnay0] / 156443 [host=elbling0] 156401 [host=fiano0] 156389 [host=albana0] 156373 [host=godello0] 156354 [host=godello1] 156339 [host=godello1] 156331 [host=huxelrebe0] 156315 [host=albana1] 156291 [host=pinot0] 156268 [host=elbling1] 156254 [host=godello0] 156248 [host=pinot1] 156228 [host=huxelrebe1] 156196 [host=fiano1] 156167 [host=godello1] 156136 [host=chardonnay1] 156119 [host=albana1] 156112 [host=godello0] 156093 ok.
Failure / basis pass flights: 156588 / 156093
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 3b49791e4cc2f38dd84bf331b75217adaef636e3
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#3b49791e4cc2f38dd84bf331b75217adaef636e3-0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Loaded 41874 nodes in revision graph
Searching for test results:
 156050 [host=elbling0]
 156085 []
 156093 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 3b49791e4cc2f38dd84bf331b75217adaef636e3
 156112 [host=godello0]
 156119 [host=albana1]
 156136 [host=chardonnay1]
 156167 [host=godello1]
 156196 [host=fiano1]
 156228 [host=huxelrebe1]
 156248 [host=pinot1]
 156254 [host=godello0]
 156268 [host=elbling1]
 156291 [host=pinot0]
 156315 [host=albana1]
 156331 [host=huxelrebe0]
 156339 [host=godello1]
 156354 [host=godello1]
 156373 [host=godello0]
 156389 [host=albana0]
 156401 [host=fiano0]
 156443 [host=elbling0]
 156524 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156538 fail irrelevant
 156556 fail irrelevant
 156577 fail irrelevant
 156623 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156604 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 3b49791e4cc2f38dd84bf331b75217adaef636e3
 156588 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156605 fail irrelevant
 156607 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156609 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74
 156610 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd ca56b06043bb4241eeb0a41a60daffb1408a08d5
 156612 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 5816d327e44ab37ae08730f4c54a80835998f31f
 156613 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e006b2e3be72e502b86bd9e1405417abd87bdfed
 156614 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156618 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156619 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156621 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156624 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
Searching for interesting versions
 Result found: flight 156093 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c, results HASH(0x5637915ab618) HASH(0x5637915b1ac8) HASH(0x5637915af300) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef8\
 28bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e006b2e3be72e502b86bd9e1405417abd87bdfed, results HASH(0x56378e467658) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98\
 c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 5816d327e44ab37ae08730f4c54a80835998f31f, results HASH(0x5637915ab6d8) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd ca56b06043bb4241eeb0a41a60daffb1408a08d5, results HASH(0x56378e45d118) Result fo\
 und: flight 156524 (fail), for basis failure (at ancestor ~33)
 Repro found: flight 156604 (pass), for basis pass
 Repro found: flight 156607 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
No revisions left to test, checking graph state.
 Result found: flight 156614 (pass), for last pass
 Result found: flight 156618 (fail), for first failure
 Repro found: flight 156619 (pass), for last pass
 Repro found: flight 156621 (fail), for first failure
 Repro found: flight 156623 (pass), for last pass
 Repro found: flight 156624 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156624/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
156624: tolerable FAIL

flight 156624 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156624/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm 14 guest-start             fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-xsm                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 15:50:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 15:50:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23509.50299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcVus-00005O-90; Tue, 10 Nov 2020 15:50:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23509.50299; Tue, 10 Nov 2020 15:50:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcVus-00005H-61; Tue, 10 Nov 2020 15:50:42 +0000
Received: by outflank-mailman (input) for mailman id 23509;
 Tue, 10 Nov 2020 15:50:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xL7T=EQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcVuq-00005B-H5
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 15:50:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59ae3b8d-8070-4964-9580-eba431b7172e;
 Tue, 10 Nov 2020 15:50:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C145ABD1;
 Tue, 10 Nov 2020 15:50:38 +0000 (UTC)
X-Inumbo-ID: 59ae3b8d-8070-4964-9580-eba431b7172e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605023438;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=YlOWlGaIVwjiekIPN3oYInbRbb4q51d2Wrgr9lUiRAY=;
	b=h/p4GKGdiIER9zmLfolzr5Qg1UNfO2UlCybLQm3u3nXZaT5JUGuuZdZow2xktpGDQDtK6p
	Spy6uylIq9Oic4SjoEDK2cVJSCQHfeLJ10nWMflBaWN+tUdXqoV2xYCjShDq8YrOCVIGTM
	9eE0qLgxRAglqWEjecyXsb+WojNppGM=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/CPUID: adjust extended leaves out of range clearing
Message-ID: <c40f524a-b991-b26d-317e-7faaa9b9e23c@suse.com>
Date: Tue, 10 Nov 2020 16:50:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

A maximum extended leaf input value with the high half different from
0x8000 should not be considered valid; all extended leaves should be
cleared in this case.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -516,11 +516,22 @@ static void test_cpuid_out_of_range_clea
             },
         },
         {
+            .name = "no extd",
+            .nr_markers = 0,
+            .p = {
+                /* Clears all markers. */
+                .extd.max_leaf = 0,
+
+                .extd.vendor_ebx = 0xc2,
+                .extd.raw_fms = 0xc2,
+            },
+        },
+        {
             .name = "extd",
             .nr_markers = 1,
             .p = {
                 /* Retains marker in leaf 0.  Clears others. */
-                .extd.max_leaf = 0,
+                .extd.max_leaf = 0x80000000,
                 .extd.vendor_ebx = 0xc2,
 
                 .extd.raw_fms = 0xc2,
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -232,7 +232,9 @@ void x86_cpuid_policy_clear_out_of_range
                     ARRAY_SIZE(p->xstate.raw) - 1);
     }
 
-    zero_leaves(p->extd.raw, (p->extd.max_leaf & 0xffff) + 1,
+    zero_leaves(p->extd.raw,
+                ((p->extd.max_leaf >> 16) == 0x8000
+                 ? (p->extd.max_leaf & 0xffff) + 1 : 0),
                 ARRAY_SIZE(p->extd.raw) - 1);
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 16:05:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 16:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23520.50312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcW9J-0001fc-Jx; Tue, 10 Nov 2020 16:05:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23520.50312; Tue, 10 Nov 2020 16:05:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcW9J-0001fV-GT; Tue, 10 Nov 2020 16:05:37 +0000
Received: by outflank-mailman (input) for mailman id 23520;
 Tue, 10 Nov 2020 16:05:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcW9H-0001fQ-J5
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 16:05:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d27df046-f4f7-459f-be6b-4fb446d587c7;
 Tue, 10 Nov 2020 16:05:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcW9F-0004Me-Ok; Tue, 10 Nov 2020 16:05:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcW9F-0000m0-HF; Tue, 10 Nov 2020 16:05:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcW9F-00021K-Gh; Tue, 10 Nov 2020 16:05:33 +0000
X-Inumbo-ID: d27df046-f4f7-459f-be6b-4fb446d587c7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FyI95Iwy1Mpq3hdwv02l/wa63dzxB31bWCFREHzxQF0=; b=ZZLGmM1AAoGg9WmPuhB2IJWnbZ
	VOp32tMiN0uWN2UNMphEz7R9Q4oSQ/tniuUmirhPb1NxeMMzkfvBBw4HT1542Dirc/m1h8nlxCusB
	ukJqDls+3n099GfT0sV/HaD6ppPPNWH7M0wzJC3Hbw46MyS7TQGk/caDaODipt1l/fl8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156622-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156622: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
X-Osstest-Versions-That:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 16:05:33 +0000

flight 156622 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156622/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc
baseline version:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258

Last test of basis   156532  2020-11-06 17:01:35 Z    3 days
Testing same since   156622  2020-11-10 13:01:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0a5e0ce0fb..3059178798  3059178798a23ba870ff86ff54d442a07e6651fc -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 16:45:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 16:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23540.50337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcWm6-0005A4-NI; Tue, 10 Nov 2020 16:45:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23540.50337; Tue, 10 Nov 2020 16:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcWm6-00059x-Jw; Tue, 10 Nov 2020 16:45:42 +0000
Received: by outflank-mailman (input) for mailman id 23540;
 Tue, 10 Nov 2020 16:45:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcWm5-00059s-Il
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 16:45:41 +0000
Received: from mail-lf1-x132.google.com (unknown [2a00:1450:4864:20::132])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99d50f3c-ae01-477f-bb11-31fbc2aab6de;
 Tue, 10 Nov 2020 16:45:40 +0000 (UTC)
Received: by mail-lf1-x132.google.com with SMTP id e27so18505263lfn.7
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 08:45:40 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id w4sm1464132ljd.28.2020.11.10.08.45.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 08:45:38 -0800 (PST)
X-Inumbo-ID: 99d50f3c-ae01-477f-bb11-31fbc2aab6de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=0EakeXW0nW2lxgIyroEMqDbYM47xx3ZrgkBt9RF9HRM=;
        b=UUS9bQTuZXoYxGLo99q3A6eAbyMMpUEER7CDodxzfzpPx6DUr4iQmdfxvO3mZdKGnh
         qU8V0oxmnXhqPexmURckZ3UNDB+zTmI3tJy16r07BRSMHcuAMdHOZKFtJzS/8xXoYMnD
         HjUUTXuUhHKmtHsbrHWEbubXK27cR5eiXNodd9j6OC9atghYHbqT6gtiXvBVhQGsbJxO
         /bPGqr6cvx9f6KhvhEHnRlNPE8oG9fsJWmuPu6HMaZwobmBKhG1YPBgOWUqK3bVhlo0f
         9Q/ufEfB0tbq0XI8ZZfrQoqD5Hn8O37Pj9AaTQOXN+ICypAdIc4y4wW0oileY07TdieA
         F6EA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=0EakeXW0nW2lxgIyroEMqDbYM47xx3ZrgkBt9RF9HRM=;
        b=j+ThfBybIg72VoBsqowxuGg8RtQWFgnDeJlIb3kl4BSFYpI3UNNp2hg9CczrKJ/jq/
         4WhDTxt7vtshfX+m4dYIqk8OMvIyu7pRn9EyoZm3aS7CflyBXzI5lZIo17YoQ3ukacGK
         D0PlLH8vy59/31flWOpDmyRk4TpecPVDTbt72/ybFwDA8WB0R1kGBNBrWWQ/6vekugN5
         rbliFiL0i6p9u5BCQp6rABDFMY3kSN0GQDA+9T66HzXdBzFNeJ042bzcTBv9QEbTsbcB
         EvVCNY3lZ9qPqCMjnOV6AodwXx4lmeVpjatempR46sF457gFaPnHgrKpgPM50qXJTR8j
         aIfQ==
X-Gm-Message-State: AOAM532yswFCVKTJGus637agtbx6a2hXBSP3X6nT0hT0QvfkZHmQS0RO
	P7WLOTOfCGXHcndRHvs5pu8=
X-Google-Smtp-Source: ABdhPJz0Bi4KZoE4J6+oKYmtlbFpxEDmvh3gIm5i+hUi8Rfy30XzO/CHDeEN/RzFte7BRKcifxt32Q==
X-Received: by 2002:a19:7b06:: with SMTP id w6mr8556787lfc.260.1605026739532;
        Tue, 10 Nov 2020 08:45:39 -0800 (PST)
Subject: Re: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Tim Deegan' <tim@xen.org>, 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
 <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <436143ea-609f-f6c3-4952-19fcf410fe8f@gmail.com>
Date: Tue, 10 Nov 2020 18:45:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.10.20 10:57, Paul Durrant wrote:

Hi Paul

Sorry for the late response.

>> -----Original Message-----
>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>> Sent: 15 October 2020 17:44
>> To: xen-devel@lists.xenproject.org
>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
>> George Dunlap <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Jan Beulich
>> <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei
>> Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>; Tim Deegan
>> <tim@xen.org>; Julien Grall <julien.grall@arm.com>
>> Subject: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> As a lot of x86 code can be re-used on Arm later on, this patch
>> moves previously prepared x86/hvm/ioreq.c to the common code.
>>
>> The common IOREQ feature is supposed to be built with IOREQ_SERVER
>> option enabled, which is selected for x86's config HVM for now.
>>
>> In order to avoid having a gigantic patch here, the subsequent
>> patches will update remaining bits in the common code step by step:
>> - Make IOREQ related structs/materials common
>> - Drop the "hvm" prefixes and infixes
> FWIW you could tackle the naming changes in patch #1.
Unfortunately, there are a lot of places that need touching (in order to 
replace/drop the hvm prefixes), so I decided to keep that in a separate 
patch. The review comments on the previous series indicated that the 
renaming could be done either before or after the move, but within a 
single series, so I chose the latter.

> The 'legacy' mechanism of mapping magic pages for ioreq servers should remain x86 specific; I think that aspect of the code needs to remain behind and not get moved into common code. You could do that via arch-specific calls in hvm_ioreq_server_enable/disable() and hvm_get_ioreq_server_info().
Well, if the legacy mechanism is not going to be used for other 
architectures and should remain x86 specific, I will try to investigate 
what should be left in the x86 code and rework the series.
As a side note, I am afraid we won't get 100% code movement (which I 
managed to achieve here) in the next version of this patch, as we will 
still need arch/x86/hvm/ioreq.c.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:44:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:44:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23555.50364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXgP-0001sq-A2; Tue, 10 Nov 2020 17:43:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23555.50364; Tue, 10 Nov 2020 17:43:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXgP-0001sj-5P; Tue, 10 Nov 2020 17:43:53 +0000
Received: by outflank-mailman (input) for mailman id 23555;
 Tue, 10 Nov 2020 17:43:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcXgN-0001s5-Bq
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:43:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 375ed7d6-430c-4fad-8ff5-23cc9b3d901f;
 Tue, 10 Nov 2020 17:43:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcXgF-0006NZ-HY; Tue, 10 Nov 2020 17:43:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcXgF-0006BT-8W; Tue, 10 Nov 2020 17:43:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcXgF-00089z-82; Tue, 10 Nov 2020 17:43:43 +0000
X-Inumbo-ID: 375ed7d6-430c-4fad-8ff5-23cc9b3d901f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=klYhO//FKmCEInXLw5lMWN6jcDv/oLou3OLsVJscWqE=; b=WHcJGFI9o6HbJlOAXpw4HzUWKI
	bHd85+gumGeL6Jz/Nmzq3PVnLr3WjMeeEoQD9wXmbJO7RLFbNk90fFjufdYmfrW80bKO6YLAYYoGg
	PwI+Y0vFquno27OAaj0hA24JmfpNwLfXcp9AyHAfyUxwzBnHggp2AyATPGcbEcvTbmao=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156595-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156595: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit2:<job status>:broken:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f8394f232b1eab649ce2df5c5f15b0e528c92091
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 17:43:43 +0000

flight 156595 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156595/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2     <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot       fail in 156582 REGR. vs. 152332
 test-arm64-arm64-xl-xsm 10 host-ping-check-xen fail in 156582 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 156582 pass in 156595
 test-arm64-arm64-examine      8 reboot           fail in 156582 pass in 156595
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156582
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156582

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156582 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f8394f232b1eab649ce2df5c5f15b0e528c92091
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  101 days
Failing since        152366  2020-08-01 20:49:34 Z  100 days  167 attempts
Testing same since   156582  2020-11-09 07:47:25 Z    1 days    2 attempts

------------------------------------------------------------
3477 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  broken  
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit2 broken
broken-step test-arm64-arm64-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 663854 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:51:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23563.50376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXnu-0002pl-68; Tue, 10 Nov 2020 17:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23563.50376; Tue, 10 Nov 2020 17:51:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXnu-0002pc-30; Tue, 10 Nov 2020 17:51:38 +0000
Received: by outflank-mailman (input) for mailman id 23563;
 Tue, 10 Nov 2020 17:51:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcXns-0002pX-CF
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da43741a-5327-4bfc-8f35-e00968d183e7;
 Tue, 10 Nov 2020 17:51:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcXnq-0006WV-P7; Tue, 10 Nov 2020 17:51:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcXnq-0006gR-Gy; Tue, 10 Nov 2020 17:51:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcXnq-0003LF-GW; Tue, 10 Nov 2020 17:51:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcXns-0002pX-CF
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:36 +0000
X-Inumbo-ID: da43741a-5327-4bfc-8f35-e00968d183e7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id da43741a-5327-4bfc-8f35-e00968d183e7;
	Tue, 10 Nov 2020 17:51:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5QoJWVefmJg9JGEni6go5uGLbQytKU8t4dqGrNpBIbk=; b=63EHJP1NzsaPpMSixuMawr8BEg
	F6ovdKdA6QoheyzSrTxn3d8eaZ9dV8ZrDXhMCCvwRJkuSP/Vd3rIhrgaCv0LZX6erhW2pRHMawwg6
	Prfh+zWWpRNHWtonn5dia0NWj9gJh8kooLzAGCKysDv/cv3GyZT1Ocw3wogqVdSEI2Ws=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcXnq-0006WV-P7; Tue, 10 Nov 2020 17:51:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcXnq-0006gR-Gy; Tue, 10 Nov 2020 17:51:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcXnq-0003LF-GW; Tue, 10 Nov 2020 17:51:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156606-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156606: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0af7f8e6a9253960ba820cd6ddfd8c36543d30cb
X-Osstest-Versions-That:
    ovmf=1366cd58cd4459f00b4ecf5abed13e77ac4ad06c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 17:51:34 +0000

flight 156606 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156606/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0af7f8e6a9253960ba820cd6ddfd8c36543d30cb
baseline version:
 ovmf                 1366cd58cd4459f00b4ecf5abed13e77ac4ad06c

Last test of basis   156545  2020-11-07 20:41:45 Z    2 days
Testing same since   156606  2020-11-10 00:39:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Mingyue Liang <mingyuex.liang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1366cd58cd..0af7f8e6a9  0af7f8e6a9253960ba820cd6ddfd8c36543d30cb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:51:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23566.50391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoC-0002ua-G5; Tue, 10 Nov 2020 17:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23566.50391; Tue, 10 Nov 2020 17:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoC-0002uT-D9; Tue, 10 Nov 2020 17:51:56 +0000
Received: by outflank-mailman (input) for mailman id 23566;
 Tue, 10 Nov 2020 17:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoB-0002u0-2N
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 729d886a-8552-4cd6-8f9c-64283fde34a4;
 Tue, 10 Nov 2020 17:51:54 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo9-0006Yn-Ar; Tue, 10 Nov 2020 17:51:53 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo9-0007RC-3F; Tue, 10 Nov 2020 17:51:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXoB-0002u0-2N
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:55 +0000
X-Inumbo-ID: 729d886a-8552-4cd6-8f9c-64283fde34a4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 729d886a-8552-4cd6-8f9c-64283fde34a4;
	Tue, 10 Nov 2020 17:51:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=cosKl6EG8MD5F2R40iXfQB/FbWXbninM7U6ZXHr119k=; b=uUVc7HdwpmMGC+VhMEfaxno2UN
	cXqhkT1NKfbsyPJfh+xmnEzdhtmt7ZowgpKR/IflV1I4wGREFvcGPtvuGXlXn7jlxzcRQpJjM+sX+
	MpeX9AE2+PtFDO1SJ2O2LiPVxol4A53cN84N4ElKyUc+i6M0UudX0rsDpjtPOI1mAuPQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXo9-0006Yn-Ar; Tue, 10 Nov 2020 17:51:53 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXo9-0007RC-3F; Tue, 10 Nov 2020 17:51:53 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 03/24] libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
Date: Tue, 10 Nov 2020 17:51:26 +0000
Message-Id: <20201110175147.7067-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Remove open-coded definitions of libxl_device_nic_list() and
libxl_device_nic_list_free().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>

This patch is slightly tangential. I just happened to notice the inefficiency
while looking at code for various device types.
---
 tools/libs/light/libxl_nic.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/tools/libs/light/libxl_nic.c b/tools/libs/light/libxl_nic.c
index 0e5d120ae9a4..a44058f92951 100644
--- a/tools/libs/light/libxl_nic.c
+++ b/tools/libs/light/libxl_nic.c
@@ -403,24 +403,6 @@ static int libxl__nic_from_xenstore(libxl__gc *gc, const char *libxl_path,
     return rc;
 }
 
-libxl_device_nic *libxl_device_nic_list(libxl_ctx *ctx, uint32_t domid, int *num)
-{
-    libxl_device_nic *r;
-
-    GC_INIT(ctx);
-
-    r = libxl__device_list(gc, &libxl__nic_devtype, domid, num);
-
-    GC_FREE;
-
-    return r;
-}
-
-void libxl_device_nic_list_free(libxl_device_nic* list, int num)
-{
-    libxl__device_list_free(&libxl__nic_devtype, list, num);
-}
-
 int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
                               const libxl_device_nic *nic,
                               libxl_nicinfo *nicinfo)
@@ -527,6 +509,7 @@ LIBXL_DEFINE_DEVID_TO_DEVICE(nic)
 LIBXL_DEFINE_DEVICE_ADD(nic)
 LIBXL_DEFINE_DEVICES_ADD(nic)
 LIBXL_DEFINE_DEVICE_REMOVE(nic)
+LIBXL_DEFINE_DEVICE_LIST(nic)
 
 DEFINE_DEVICE_TYPE_STRUCT(nic, VIF,
     .update_config = libxl_device_nic_update_config,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:51:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:51:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23567.50399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoC-0002vG-VR; Tue, 10 Nov 2020 17:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23567.50399; Tue, 10 Nov 2020 17:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoC-0002uz-Mt; Tue, 10 Nov 2020 17:51:56 +0000
Received: by outflank-mailman (input) for mailman id 23567;
 Tue, 10 Nov 2020 17:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoB-0002u0-Qj
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cff68d35-046c-4438-a355-f7501af07ffe;
 Tue, 10 Nov 2020 17:51:55 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoA-0006Yw-7p; Tue, 10 Nov 2020 17:51:54 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoA-0007RC-03; Tue, 10 Nov 2020 17:51:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXoB-0002u0-Qj
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:55 +0000
X-Inumbo-ID: cff68d35-046c-4438-a355-f7501af07ffe
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cff68d35-046c-4438-a355-f7501af07ffe;
	Tue, 10 Nov 2020 17:51:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=+1oww7hSWyiptMhSf735m1JHsetRhBPwidp8laVrHmc=; b=Uf+xfIOpQIQsHhNkRVt5EZ0XL9
	xspdGWmlw7UT+NBg2EHeQ7rU67YGMWuOCh8xAcoFz/N/IvV/zyW0jt9AaZzZpPsfzFiBBGUyPwj7O
	l/5i92qUtOn2F02DC0yWp/j4kxPsbyHAQFz60vfWJeO4PtIp1oTDppXJOHzLhIB7s2z8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoA-0006Yw-7p; Tue, 10 Nov 2020 17:51:54 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoA-0007RC-03; Tue, 10 Nov 2020 17:51:54 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 04/24] libxl: s/detatched/detached in libxl_pci.c
Date: Tue, 10 Nov 2020 17:51:27 +0000
Message-Id: <20201110175147.7067-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Simple spelling correction; purely cosmetic fix.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 515e74fe5aae..52feac651c70 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1861,7 +1861,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
 static void pci_remove_timeout(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
-static void pci_remove_detatched(libxl__egc *egc,
+static void pci_remove_detached(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
@@ -1975,7 +1975,7 @@ skip1:
 skip_irq:
     rc = 0;
 out_fail:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1999,7 +1999,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del(libxl__egc *egc,
@@ -2025,7 +2025,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
@@ -2048,7 +2048,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
@@ -2064,7 +2064,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
@@ -2124,7 +2124,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     }
 
 out:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
@@ -2143,12 +2143,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
      * error */
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
-static void pci_remove_detatched(libxl__egc *egc,
-                                 pci_remove_state *prs,
-                                 int rc)
+static void pci_remove_detached(libxl__egc *egc,
+                                pci_remove_state *prs,
+                                int rc)
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:51:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:51:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23569.50415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoF-0002yV-5O; Tue, 10 Nov 2020 17:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23569.50415; Tue, 10 Nov 2020 17:51:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoF-0002yM-17; Tue, 10 Nov 2020 17:51:59 +0000
Received: by outflank-mailman (input) for mailman id 23569;
 Tue, 10 Nov 2020 17:51:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoD-0002tQ-1a
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15803b96-49c3-4874-8997-a74a4a0e91db;
 Tue, 10 Nov 2020 17:51:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo6-0006YX-Cq; Tue, 10 Nov 2020 17:51:50 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo6-0007RC-2V; Tue, 10 Nov 2020 17:51:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXoD-0002tQ-1a
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:51:57 +0000
X-Inumbo-ID: 15803b96-49c3-4874-8997-a74a4a0e91db
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 15803b96-49c3-4874-8997-a74a4a0e91db;
	Tue, 10 Nov 2020 17:51:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=qVlFu9hM29FvShCiNe/TR6XZRyXI3nvUfa1dpCTnNDA=; b=sAFef9
	lU/G2ebAMJaNLqsHsjWUWgUH40+t7mKY2kLHL4MZVSrmrXzkKbsAjXMl0EIfWpkGMGlMjA2MVYtMn
	yJkw+h5zX9JfUIvHI5gPIr4Fw34EVDmXnw7PM8ajAmAevLMHMnOjJ/TGmX3T+OlptRECKh0yaZzbu
	mD6z6mzQXmU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXo6-0006YX-Cq; Tue, 10 Nov 2020 17:51:50 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXo6-0007RC-2V; Tue, 10 Nov 2020 17:51:50 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 00/24] xl / libxl: named PCI pass-through devices
Date: Tue, 10 Nov 2020 17:51:23 +0000
Message-Id: <20201110175147.7067-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (24):
  xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
  libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  libxl: modify
    libxl_device_pci_assignable_add/remove/list/list_free()...
  docs/man: modify xl(1) in preparation for naming of assignable devices
  xl / libxl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  xl / libxl: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 ++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +-
 tools/golang/xenlight/helpers.gen.go |   77 +-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   67 +-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_create.c      |    6 +-
 tools/libs/light/libxl_dm.c          |   18 +-
 tools/libs/light/libxl_internal.h    |   53 +-
 tools/libs/light/libxl_nic.c         |   19 +-
 tools/libs/light/libxl_pci.c         | 1030 ++++++++++++++------------
 tools/libs/light/libxl_types.idl     |   19 +-
 tools/libs/util/libxlu_pci.c         |  379 +++++-----
 tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   28 +-
 tools/xl/xl_pci.c                    |  159 ++--
 tools/xl/xl_sxp.c                    |   12 +-
 19 files changed, 1308 insertions(+), 935 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod
---
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23571.50427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoH-00031R-Dq; Tue, 10 Nov 2020 17:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23571.50427; Tue, 10 Nov 2020 17:52:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoH-00031F-AB; Tue, 10 Nov 2020 17:52:01 +0000
Received: by outflank-mailman (input) for mailman id 23571;
 Tue, 10 Nov 2020 17:52:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoG-0002u0-0v
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1941d183-0ac8-4c0c-a88a-5605a1033cb7;
 Tue, 10 Nov 2020 17:51:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoB-0006Z3-5q; Tue, 10 Nov 2020 17:51:55 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoA-0007RC-T5; Tue, 10 Nov 2020 17:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXoG-0002u0-0v
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:00 +0000
X-Inumbo-ID: 1941d183-0ac8-4c0c-a88a-5605a1033cb7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1941d183-0ac8-4c0c-a88a-5605a1033cb7;
	Tue, 10 Nov 2020 17:51:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=FwZuYGPiT+jEv/21q2vsmSJAK0lBtK+hfoOhsdKiXmA=; b=29Y97Bc00pRw/M4CVS2L+zFM6n
	EW+CB+L7+QL+3zHl3Z+n/QzixfB+oc/rS0lWsSMyeEB6CjvRW8rZECBg3DWWcps+/GqnuJopvumQQ
	/zONnclTFKgUeLmItrWF7VrHpzr9J39WuNybmHFgWY0mSfOpi8fifmTOLoXEJrWVFcdE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoB-0006Z3-5q; Tue, 10 Nov 2020 17:51:55 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoA-0007RC-T5; Tue, 10 Nov 2020 17:51:55 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 05/24] libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
Date: Tue, 10 Nov 2020 17:51:28 +0000
Message-Id: <20201110175147.7067-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Both 'domid' and 'pci' are available in 'pci_remove_state', so there is no
need to also pass them as separate arguments.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 52feac651c70..0abc679c3958 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1868,14 +1868,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
-static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pci, int force,
-                          pci_remove_state *prs)
+static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_device_pci *assigned;
+    uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
+    libxl_device_pci *pci = prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
@@ -2272,7 +2272,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_domid domid = prs->domid;
     libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
@@ -2290,7 +2289,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
             } else {
                 pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pci, prs->force, prs);
+            do_pci_remove(egc, prs);
             return;
         }
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23572.50439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoI-000344-Q3; Tue, 10 Nov 2020 17:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23572.50439; Tue, 10 Nov 2020 17:52:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoI-00033s-LV; Tue, 10 Nov 2020 17:52:02 +0000
Received: by outflank-mailman (input) for mailman id 23572;
 Tue, 10 Nov 2020 17:52:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoI-0002tQ-1o
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e143eab-38e6-4c54-bf75-18c9c219581d;
 Tue, 10 Nov 2020 17:51:53 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo8-0006Yf-FX; Tue, 10 Nov 2020 17:51:52 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo8-0007RC-6T; Tue, 10 Nov 2020 17:51:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXoI-0002tQ-1o
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:02 +0000
X-Inumbo-ID: 4e143eab-38e6-4c54-bf75-18c9c219581d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4e143eab-38e6-4c54-bf75-18c9c219581d;
	Tue, 10 Nov 2020 17:51:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jy1w5XNZuUTCc1TfiWPlVdgsiUjSwSTRmbob0Mo3M8w=; b=vGmIm5oNBfIZPyG5jJ9ygAV5ek
	UsqOfobUUr8bxoI16S9Duq9sTq0yYu+i6bCXMpg97ioiw9faD79DIV5BP+bTamW21Hvc4abEGfZWI
	Fnz0hBnO5RHCHSJ3gv970qmjYPBHa+vKQZCiq2Q5xhobxZsnhrglyYrNWkPoTUj9hfSc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXo8-0006Yf-FX; Tue, 10 Nov 2020 17:51:52 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXo8-0007RC-6T; Tue, 10 Nov 2020 17:51:52 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 02/24] libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
Date: Tue, 10 Nov 2020 17:51:25 +0000
Message-Id: <20201110175147.7067-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Remove the open-coded definition of libxl_device_pci_list().

NOTE: Using the macro also defines libxl_device_pci_list_free(), so a prototype
      for it is added. Subsequent patches will make use of it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/include/libxl.h        |  7 +++++++
 tools/libs/light/libxl_pci.c | 27 ++-------------------------
 2 files changed, 9 insertions(+), 25 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index fbe4c81ba511..ee52d3cf7e7e 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -451,6 +451,12 @@
  */
 #define LIBXL_HAVE_CONFIG_PCIS 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_LIST_FREE indicates that the
+ * libxl_device_pci_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2321,6 +2327,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
 /*
  * Turns the current process into a backend device service daemon
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2ff1c64a3144..515e74fe5aae 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -2393,31 +2393,6 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
     return rc;
 }
 
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
-{
-    GC_INIT(ctx);
-    char *be_path;
-    unsigned int n, i;
-    libxl_device_pci *pcis = NULL;
-
-    *num = 0;
-
-    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
-                                                LIBXL__DEVICE_KIND_PCI);
-    if (libxl__device_pci_get_num(gc, be_path, &n))
-        goto out;
-
-    pcis = calloc(n, sizeof(libxl_device_pci));
-
-    for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
-
-    *num = n;
-out:
-    GC_FREE;
-    return pcis;
-}
-
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
@@ -2492,6 +2467,8 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
     return COMPARE_PCI(d1, d2);
 }
 
+LIBXL_DEFINE_DEVICE_LIST(pci)
+
 #define libxl__device_pci_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23573.50451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoM-0003A9-HB; Tue, 10 Nov 2020 17:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23573.50451; Tue, 10 Nov 2020 17:52:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoM-00039z-AT; Tue, 10 Nov 2020 17:52:06 +0000
Received: by outflank-mailman (input) for mailman id 23573;
 Tue, 10 Nov 2020 17:52:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoL-0002u0-0u
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 211ddd09-1534-4663-b83c-5ff3fc69c787;
 Tue, 10 Nov 2020 17:51:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoD-0006ZM-RD; Tue, 10 Nov 2020 17:51:57 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoD-0007RC-JQ; Tue, 10 Nov 2020 17:51:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXoL-0002u0-0u
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:05 +0000
X-Inumbo-ID: 211ddd09-1534-4663-b83c-5ff3fc69c787
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 211ddd09-1534-4663-b83c-5ff3fc69c787;
	Tue, 10 Nov 2020 17:51:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=9Gg5nhEEC5gWYbJbNF9Cn78tfQpb6QS4wTBLfSwSCiY=; b=5somhVGtlDPmd6swL+YBqjBTs7
	YDIJCX0JIztUus0a59dW3RPGcatS+P2swvxwhI5MEMftFMGiD/Q3yAYU+Nd0lkTPtlWqIMvgGHEL7
	j8/uK4j7Y7qv5+4BRljlWhfBmmb+lkNRcPO6ILH5RbxJYH5wgKf0hl8yPNnm+j4/a1/I=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoD-0006ZM-RD; Tue, 10 Nov 2020 17:51:57 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoD-0007RC-JQ; Tue, 10 Nov 2020 17:51:57 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 08/24] libxl: remove unnecessary check from libxl__device_pci_add()
Date: Tue, 10 Nov 2020 17:51:31 +0000
Message-Id: <20201110175147.7067-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The code currently checks explicitly whether the device is already assigned,
but this is unnecessary: assigned devices do not form part of the list
returned by libxl_device_pci_assignable_list(), so the libxl_pci_assignable()
test would already have failed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index fdafd2c9bafb..f9f8374d7d36 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1536,8 +1536,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 {
     STATE_AO_GC(aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
-    int num_assigned, rc;
+    int rc;
     int stubdomid = 0;
     pci_add_state *pas;
 
@@ -1576,19 +1575,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
-    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if ( rc ) {
-        LOGD(ERROR, domid,
-             "cannot determine if device is assigned, refusing to continue");
-        goto out;
-    }
-    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
-                         pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domid, "PCI device already attached to a domain");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23574.50463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoN-0003DM-UV; Tue, 10 Nov 2020 17:52:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23574.50463; Tue, 10 Nov 2020 17:52:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoN-0003D8-Og; Tue, 10 Nov 2020 17:52:07 +0000
Received: by outflank-mailman (input) for mailman id 23574;
 Tue, 10 Nov 2020 17:52:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoN-0002tQ-2F
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 104bb508-a8f3-488c-bf27-69795e3c0db8;
 Tue, 10 Nov 2020 17:51:53 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo7-0006YZ-I4; Tue, 10 Nov 2020 17:51:51 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXo7-0007RC-2g; Tue, 10 Nov 2020 17:51:51 +0000
X-Inumbo-ID: 104bb508-a8f3-488c-bf27-69795e3c0db8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=aXM543KSwo9/OhomqYdTCI4BiVTgrWZjPqCSeNef20s=; b=iWMx4KEGF9uLre9TlsOIzCzoNH
	piXUymsHyY2yMVYac17MHpkb+V5SrPc2nGjictb7Zdm1pZg+M8EZ7fcxCIuAEFZFKC/3gqSZgIfYE
	Z/5KVuSL7SgPZFCqCaspjT+yFnPN5ZJ3uf23WuHgxJ+Ry3wcDscyCLSzy6hvzUCZ0X6Q=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 01/24] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Tue, 10 Nov 2020 17:51:24 +0000
Message-Id: <20201110175147.7067-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The seemingly arbitrary mix of 'pci' and 'pcidev' in the code in libxl_pci.c
is confusing and also compromises the use of some macros that are shared with
other device types. Indeed, it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists
solely because of this duality.

This patch purges 'pcidev' from the libxl code, allowing the invocation of
DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
and hence allowing the former to be removed.

For consistency, the xl and libs/util code is also modified, but in that case
the change is purely cosmetic.

NOTE: Some of the grosser formatting errors (such as missing spaces after
      keywords) that came into context have been fixed in libxl_pci.c.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h             |  17 +-
 tools/libs/light/libxl_create.c   |   6 +-
 tools/libs/light/libxl_dm.c       |  18 +-
 tools/libs/light/libxl_internal.h |  45 ++-
 tools/libs/light/libxl_pci.c      | 582 +++++++++++++++---------------
 tools/libs/light/libxl_types.idl  |   2 +-
 tools/libs/util/libxlu_pci.c      |  36 +-
 tools/xl/xl_parse.c               |  26 +-
 tools/xl/xl_pci.c                 |  68 ++--
 tools/xl/xl_sxp.c                 |  12 +-
 10 files changed, 408 insertions(+), 404 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..fbe4c81ba511 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_CONFIG_PCIS indicates that the 'pcidevs' and 'num_pcidevs'
+ * fields in libxl_domain_config have been renamed to 'pcis' and 'num_pcis'
+ * respectively.
+ */
+#define LIBXL_HAVE_CONFIG_PCIS 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2300,15 +2307,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
 
 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
@@ -2352,8 +2359,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519b5..1f5052c52033 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1100,7 +1100,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
         goto error_out;
     }
 
-    bool need_pt = d_config->num_pcidevs || d_config->num_dtdevs;
+    bool need_pt = d_config->num_pcis || d_config->num_dtdevs;
     if (c_info->passthrough == LIBXL_PASSTHROUGH_DEFAULT) {
         c_info->passthrough = need_pt
             ? LIBXL_PASSTHROUGH_ENABLED : LIBXL_PASSTHROUGH_DISABLED;
@@ -1141,7 +1141,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
      * assignment when PoD is enabled.
      */
     if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV &&
-        d_config->num_pcidevs && pod_enabled) {
+        d_config->num_pcis && pod_enabled) {
         ret = ERROR_INVAL;
         LOGD(ERROR, domid,
              "PCI device assignment for HVM guest failed due to PoD enabled");
@@ -1817,7 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__vtpm_devtype,
     &libxl__usbctrl_devtype,
     &libxl__usbdev_devtype,
-    &libxl__pcidev_devtype,
+    &libxl__pci_devtype,
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3da83259c08e..8ebe1b60c9d7 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -442,7 +442,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
 
     /* Might not expose rdm. */
     if (strategy == LIBXL_RDM_RESERVE_STRATEGY_IGNORE &&
-        !d_config->num_pcidevs)
+        !d_config->num_pcis)
         return 0;
 
     /* Query all RDM entries in this platform */
@@ -469,13 +469,13 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     }
 
     /* Query RDM entries per-device */
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcidevs[i].domain;
-        bus = d_config->pcidevs[i].bus;
-        devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                          d_config->pcidevs[i].func);
+        seg = d_config->pcis[i].domain;
+        bus = d_config->pcis[i].bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].dev,
+                          d_config->pcis[i].func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
@@ -488,7 +488,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
         assert(xrdm);
 
         rc = libxl__device_pci_setdefault(gc, DOMID_INVALID,
-                                          &d_config->pcidevs[i], false);
+                                          &d_config->pcis[i], false);
         if (rc)
             goto out;
 
@@ -516,7 +516,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                      * global policy in this case.
                      */
                     d_config->rdms[j].policy
-                        = d_config->pcidevs[i].rdm_policy;
+                        = d_config->pcis[i].rdm_policy;
                     new = false;
                     break;
                 }
@@ -526,7 +526,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                 add_rdm_entry(gc, d_config,
                               pfn_to_paddr(xrdm[n].start_pfn),
                               pfn_to_paddr(xrdm[n].nr_pages),
-                              d_config->pcidevs[i].rdm_policy);
+                              d_config->pcis[i].rdm_policy);
         }
     }
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9b5045..3e70ff639b3c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1709,7 +1709,7 @@ _hidden int libxl__pci_topology_init(libxl__gc *gc,
 /* from libxl_pci */
 
 _hidden void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                                   libxl_device_pci *pcidev, bool starting,
+                                   libxl_device_pci *pci, bool starting,
                                    libxl__ao_device *aodev);
 _hidden void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                            libxl__multidev *);
@@ -3945,30 +3945,27 @@ struct libxl__device_type {
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
 
-#define DEFINE_DEVICE_TYPE_STRUCT_X(name, sname, kind, ...)                    \
-    const libxl__device_type libxl__ ## name ## _devtype = {                   \
-        .type          = LIBXL__DEVICE_KIND_ ## kind,                       \
-        .ptr_offset    = offsetof(libxl_domain_config, name ## s),             \
-        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),     \
-        .dev_elem_size = sizeof(libxl_device_ ## sname),                       \
-        .add           = libxl__add_ ## name ## s,                             \
-        .set_default   = (device_set_default_fn_t)                             \
-                         libxl__device_ ## sname ## _setdefault,               \
-        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name,   \
-        .init          = (device_init_fn_t)libxl_device_ ## sname ## _init,    \
-        .copy          = (device_copy_fn_t)libxl_device_ ## sname ## _copy,    \
-        .dispose       = (device_dispose_fn_t)                                 \
-                         libxl_device_ ## sname ## _dispose,                   \
-        .compare       = (device_compare_fn_t)                                 \
-                         libxl_device_ ## sname ## _compare,                   \
-        .update_devid  = (device_update_devid_fn_t)                            \
-                         libxl__device_ ## sname ## _update_devid,             \
-        __VA_ARGS__                                                            \
+#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                           \
+    const libxl__device_type libxl__ ## name ## _devtype = {                 \
+        .type          = LIBXL__DEVICE_KIND_ ## kind,                        \
+        .ptr_offset    = offsetof(libxl_domain_config, name ## s),           \
+        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),   \
+        .dev_elem_size = sizeof(libxl_device_ ## name),                      \
+        .add           = libxl__add_ ## name ## s,                           \
+        .set_default   = (device_set_default_fn_t)                           \
+                         libxl__device_ ## name ## _setdefault,              \
+        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name, \
+        .init          = (device_init_fn_t)libxl_device_ ## name ## _init,   \
+        .copy          = (device_copy_fn_t)libxl_device_ ## name ## _copy,   \
+        .dispose       = (device_dispose_fn_t)                               \
+                         libxl_device_ ## name ## _dispose,                  \
+        .compare       = (device_compare_fn_t)                               \
+                         libxl_device_ ## name ## _compare,                  \
+        .update_devid  = (device_update_devid_fn_t)                          \
+                         libxl__device_ ## name ## _update_devid,            \
+        __VA_ARGS__                                                          \
     }
 
-#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                             \
-    DEFINE_DEVICE_TYPE_STRUCT_X(name, name, kind, __VA_ARGS__)
-
 static inline void **libxl__device_type_get_ptr(
     const libxl__device_type *dt, const libxl_domain_config *d_config)
 {
@@ -3995,7 +3992,7 @@ extern const libxl__device_type libxl__nic_devtype;
 extern const libxl__device_type libxl__vtpm_devtype;
 extern const libxl__device_type libxl__usbctrl_devtype;
 extern const libxl__device_type libxl__usbdev_devtype;
-extern const libxl__device_type libxl__pcidev_devtype;
+extern const libxl__device_type libxl__pci_devtype;
 extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bc5843b13701..2ff1c64a3144 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,51 +25,51 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pcidev_encode_bdf(libxl_device_pci *pcidev)
+static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pcidev->domain << 16;
-    value |= (pcidev->bus & 0xff) << 8;
-    value |= (pcidev->dev & 0x1f) << 3;
-    value |= (pcidev->func & 0x7);
+    value = pci->domain << 16;
+    value |= (pci->bus & 0xff) << 8;
+    value |= (pci->dev & 0x1f) << 3;
+    value |= (pci->func & 0x7);
 
     return value;
 }
 
-static void pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                            unsigned int bus, unsigned int dev,
+                            unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
 }
 
 static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             flexarray_t *back,
                                             int num,
-                                            const libxl_device_pci *pcidev)
+                                            const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
-    if (pcidev->vdevfn)
-        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pcidev->vdevfn));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    if (pci->vdevfn)
+        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
               GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pcidev->msitranslate, pcidev->power_mgmt,
-                             pcidev->permissive));
+                             pci->msitranslate, pci->power_mgmt,
+                             pci->permissive));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
-static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
-                                      const libxl_device_pci *pcidev,
-                                      libxl__device *device)
+static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
+                                   const libxl_device_pci *pci,
+                                   libxl__device *device)
 {
     device->backend_devid = 0;
     device->backend_domid = 0;
@@ -80,7 +80,7 @@ static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
 }
 
 static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pcidev,
+                                     const libxl_device_pci *pci,
                                      int num)
 {
     flexarray_t *front = NULL;
@@ -94,15 +94,15 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
     LOGD(DEBUG, domid, "Creating pci backend");
 
     /* add pci device */
-    libxl__device_from_pcidev(gc, domid, pcidev, &device);
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
     flexarray_append_pair(back, "online", "1");
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pcidev++)
-        libxl_create_pci_backend_device(gc, back, i, pcidev);
+    for (i = 0; i < num; i++, pci++)
+        libxl_create_pci_backend_device(gc, back, i, pci);
 
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
@@ -116,7 +116,7 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           uint32_t domid,
-                                          const libxl_device_pci *pcidev,
+                                          const libxl_device_pci *pci,
                                           bool starting)
 {
     flexarray_t *back;
@@ -136,7 +136,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pcidev, 1);
+        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -151,7 +151,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
     num = atoi(num_devs);
-    libxl_create_pci_backend_device(gc, back, num, pcidev);
+    libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
     if (!starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
@@ -170,8 +170,8 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
-        device_add_domain_config(gc, &d_config, &libxl__pcidev_devtype,
-                                 pcidev);
+        device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
+                                 pci);
 
         rc = libxl__dm_check_start(gc, &d_config, domid);
         if (rc) goto out;
@@ -201,7 +201,7 @@ out:
     return rc;
 }
 
-static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev)
+static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *be_path, *num_devs_path, *num_devs, *xsdev, *tmp, *tmppath;
@@ -231,8 +231,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pcidev->domain && bus == pcidev->bus &&
-            pcidev->dev == dev && pcidev->func == func) {
+        if (domain == pci->domain && bus == pci->bus &&
+            pci->dev == dev && pci->func == func) {
             break;
         }
     }
@@ -350,7 +350,7 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
                     *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
                     if (*list == NULL)
                         return ERROR_NOMEM;
-                    pcidev_struct_fill(*list + *num, dom, bus, dev, func, 0);
+                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
                     (*num)++;
                 }
             }
@@ -361,8 +361,8 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
     return 0;
 }
 
-static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
-                       int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
+                           int dom, int bus, int dev, int func)
 {
     int i;
 
@@ -383,7 +383,7 @@ static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
 static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pcidev)
+                           libxl_device_pci *pci)
 {
     int rc, fd;
     char *buf;
@@ -394,8 +394,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus,
-                    pcidev->dev, pcidev->func);
+    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
+                    pci->dev, pci->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -411,7 +411,7 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new, *assigned;
     struct dirent *de;
     DIR *dir;
     int r, num_assigned;
@@ -436,40 +436,40 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pcidev_in_array(assigned, num_assigned, dom, bus, dev, func))
+        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
             continue;
 
-        new = realloc(pcidevs, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcidevs = new;
-        new = pcidevs + *num;
+        pcis = new;
+        new = pcis + *num;
 
         memset(new, 0, sizeof(*new));
-        pcidev_struct_fill(new, dom, bus, dev, func, 0);
+        pci_struct_fill(new, dom, bus, dev, func, 0);
         (*num)++;
     }
 
     closedir(dir);
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func);
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -483,7 +483,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pcidev) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -495,11 +495,11 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
     return 0;
 }
 
-static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -507,7 +507,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -515,18 +515,18 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_vendor;
 }
 
-static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -534,7 +534,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -542,25 +542,25 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_device;
 }
 
-static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                     pci->domain, pci->bus, pci->dev, pci->func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -569,7 +569,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
     }
 
@@ -588,16 +588,16 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     uint16_t pt_vendor, pt_device;
     unsigned long class;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
-        pt_vendor = sysfs_dev_get_vendor(gc, pcidev);
-        pt_device = sysfs_dev_get_device(gc, pcidev);
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
+        libxl_device_pci *pci = &d_config->pcis[i];
+        pt_vendor = sysfs_dev_get_vendor(gc, pci);
+        pt_device = sysfs_dev_get_device(gc, pci);
 
         if (pt_vendor == 0xffff || pt_device == 0xffff ||
             pt_vendor != 0x8086)
             continue;
 
-        if (sysfs_dev_get_class(gc, pcidev, &class))
+        if (sysfs_dev_get_class(gc, pci, &class))
             continue;
         if (class == 0x030000)
             return true;
@@ -621,8 +621,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
+/* Scan through /sys/.../pciback/slots looking for pci's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
 {
     FILE *f;
     int rc = 0;
@@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
 
-    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if(dom == pcidev->domain
-           && bus == pcidev->bus
-           && dev == pcidev->dev
-           && func == pcidev->func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
+        if (dom == pci->domain
+            && bus == pci->bus
+            && dev == pci->dev
+            && func == pci->func) {
             rc = 1;
             goto out;
         }
@@ -649,7 +649,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
     int rc;
@@ -665,8 +665,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pcidev->domain, pcidev->bus,
-                      pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus,
+                      pci->dev, pci->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -677,40 +677,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
 {
     int rc;
 
-    if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcidev) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pcidev) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -721,49 +721,49 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 #define PCIBACK_INFO_PATH "/libxl/pciback"
 
 static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             char *driver_path)
 {
     char *path;
 
     path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pcidev->domain,
-                     pcidev->bus,
-                     pcidev->dev,
-                     pcidev->func);
+                     pci->domain,
+                     pci->bus,
+                     pci->dev,
+                     pci->func);
     if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
         LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
     }
 }
 
 static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     return libxl__xs_read(gc, XBT_NULL,
                           GCSPRINTF(
                            PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func));
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func));
 }
 
 static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL,
           GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pcidev->domain,
-                    pcidev->bus,
-                    pcidev->dev,
-                    pcidev->func) );
+                    pci->domain,
+                    pci->bus,
+                    pci->dev,
+                    pci->func) );
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -773,10 +773,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pcidev->domain;
-    bus = pcidev->bus;
-    dev = pcidev->dev;
-    func = pcidev->func;
+    dom = pci->domain;
+    bus = pci->bus;
+    dev = pci->dev;
+    func = pci->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -786,7 +786,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pcidev);
+    rc = pciback_dev_is_assigned(gc, pci);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -796,7 +796,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pcidev, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -805,9 +805,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pcidev, driver_path);
+            pci_assignable_driver_path_write(gc, pci, driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pcidev)) != NULL ) {
+                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,10 +815,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pcidev);
+        pci_assignable_driver_path_remove(gc, pci);
     }
 
-    if ( pciback_dev_assign(gc, pcidev) ) {
+    if ( pciback_dev_assign(gc, pci) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -829,7 +829,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -840,7 +840,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pcidev,
+                                               libxl_device_pci *pci,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -848,24 +848,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcidev->domain, pcidev->bus,
-            pcidev->dev, pcidev->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc=pciback_dev_is_assigned(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pcidev);
+        pciback_dev_unassign(gc, pci);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pcidev);
+    driver_path = pci_assignable_driver_path_read(gc, pci);
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -873,12 +873,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pcidev) < 0 ) {
+                                 pci) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pcidev);
+            pci_assignable_driver_path_remove(gc, pci);
         }
     } else {
         if ( rebind ) {
@@ -890,26 +890,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
 
     GC_FREE;
     return rc;
@@ -920,7 +920,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
  * driver. It also initialises a bit-mask of which function numbers are present
  * on that device.
 */
-static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsigned int *func_mask)
+static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigned int *func_mask)
 {
     struct dirent *de;
     DIR *dir;
@@ -940,11 +940,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pcidev->domain != dom )
+        if ( pci->domain != dom )
             continue;
-        if ( pcidev->bus != bus )
+        if ( pci->bus != bus )
             continue;
-        if ( pcidev->dev != dev )
+        if ( pci->dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -979,7 +979,7 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 }
 
 static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
-                                 libxl_device_pci *pcidev)
+                                 libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc = 0;
@@ -991,15 +991,15 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    if (pcidev->vdevfn) {
+    if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pcidev->domain, pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->vdevfn, pcidev->msitranslate,
-                         pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pcidev->domain,  pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->msitranslate, pcidev->power_mgmt);
+                         pci->domain,  pci->bus, pci->dev,
+                         pci->func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1010,7 +1010,7 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     if ( rc < 0 )
         LOGD(ERROR, domid, "qemu refused to add device: %s", vdevfn);
-    else if ( sscanf(vdevfn, "0x%x", &pcidev->vdevfn) != 1 ) {
+    else if ( sscanf(vdevfn, "0x%x", &pci->vdevfn) != 1 ) {
         LOGD(ERROR, domid, "wrong format for the vdevfn: '%s'", vdevfn);
         rc = -1;
     }
@@ -1054,7 +1054,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     int pci_domid;
 } pci_add_state;
 
@@ -1072,7 +1072,7 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pcidev,
+                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1082,7 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pcidev = pcidev;
+    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1128,7 +1128,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1136,7 +1136,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_add_xenstore(gc, domid, pcidev);
+    rc = qemu_pci_add_xenstore(gc, domid, pci);
 out:
     pci_add_dm_done(egc, pas, rc); /* must be last */
 }
@@ -1149,7 +1149,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1160,14 +1160,14 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
-    if (pcidev->vdevfn) {
+                           "%04x:%02x:%02x.%01x", pci->domain,
+                           pci->bus, pci->dev, pci->func);
+    if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
-                               PCI_SLOT(pcidev->vdevfn),
-                               PCI_FUNC(pcidev->vdevfn));
+                               PCI_SLOT(pci->vdevfn),
+                               PCI_FUNC(pci->vdevfn));
     }
     /*
      * Version of QEMU prior to the XSA-131 fix did not support
@@ -1179,7 +1179,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
      * set the permissive flag if it is true. Users of older QEMU
      * have no reason to set the flag so this is ok.
      */
-    if (pcidev->permissive)
+    if (pci->permissive)
         libxl__qmp_param_add_bool(gc, &args, "permissive", true);
 
     qmp->ao = pas->aodev->ao;
@@ -1230,7 +1230,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1251,7 +1251,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1283,7 +1283,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
              }
              dev_func = libxl__json_object_get_integer(o);
 
-             pcidev->vdevfn = PCI_DEVFN(dev_slot, dev_func);
+             pci->vdevfn = PCI_DEVFN(dev_slot, dev_func);
 
              rc = 0;
              goto out;
@@ -1331,7 +1331,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1342,8 +1342,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                           pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1383,8 +1383,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                                pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                                pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1411,9 +1411,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
-    if (pcidev->permissive) {
+    if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1422,14 +1422,14 @@ static void pci_add_dm_done(libxl__egc *egc,
 
 out_no_irq:
     if (!isstubdom) {
-        if (pcidev->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
+        if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
-        } else if (pcidev->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
+        } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
             LOGED(ERROR, domainid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1438,7 +1438,7 @@ out_no_irq:
     }
 
     if (!starting && !libxl_get_stubdom_id(CTX, domid))
-        rc = libxl__device_pci_add_xenstore(gc, domid, pcidev, starting);
+        rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
 out:
@@ -1493,7 +1493,7 @@ int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -1504,24 +1504,24 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_ADD;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_add(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_add(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
-static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
+static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
     for (i = 0; i < num; i++) {
-        if (pcidevs[i].domain == pcidev->domain &&
-            pcidevs[i].bus == pcidev->bus &&
-            pcidevs[i].dev == pcidev->dev &&
-            pcidevs[i].func == pcidev->func)
+        if (pcis[i].domain == pci->domain &&
+            pcis[i].bus == pci->bus &&
+            pcis[i].dev == pci->dev &&
+            pcis[i].func == pci->func)
             break;
     }
-    free(pcidevs);
+    free(pcis);
     return i != num;
 }
 
@@ -1535,7 +1535,7 @@ static void device_pci_add_done(libxl__egc *egc,
     pci_add_state *, int rc);
 
 void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                           libxl_device_pci *pcidev, bool starting,
+                           libxl_device_pci *pci, bool starting,
                            libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
@@ -1545,9 +1545,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pcidev to be used by callbacks */
-    aodev->device_config = pcidev;
-    aodev->device_type = &libxl__pcidev_devtype;
+    /* Store *pci to be used by callbacks */
+    aodev->device_config = pci;
+    aodev->device_type = &libxl__pci_devtype;
 
     GCNEW(pas);
     pas->aodev = aodev;
@@ -1556,29 +1556,29 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+                 pci->domain, pci->bus, pci->dev, pci->func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
         }
     }
 
-    rc = libxl__device_pci_setdefault(gc, domid, pcidev, !starting);
+    rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pcidev->seize && !pciback_dev_is_assigned(gc, pcidev)) {
-        rc = libxl__device_pci_assignable_add(gc, pcidev, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
+        rc = libxl__device_pci_assignable_add(gc, pci, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pcidev_assignable(ctx, pcidev)) {
+    if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1589,25 +1589,25 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
              "cannot determine if device is assigned, refusing to continue");
         goto out;
     }
-    if ( is_pcidev_in_array(assigned, num_assigned, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
+                         pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domid, "PCI device already attached to a domain");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pcidev_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
         return;
     }
 
@@ -1664,42 +1664,42 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     /* Convenience aliases */
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) goto out;
 
-    orig_vdev = pcidev->vdevfn & ~7U;
+    orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( !(pcidev->vdevfn >> 3) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( !(pci->vdevfn >> 3) ) {
             LOGD(ERROR, domid, "Must specify a v-slot for multi-function devices");
             rc = ERROR_INVAL;
             goto out;
         }
-        if ( pci_multifunction_check(gc, pcidev, &pfunc_mask) ) {
+        if ( pci_multifunction_check(gc, pci, &pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= pfunc_mask;
+        pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
-    }else{
+    } else {
-        pfunc_mask = (1 << pcidev->func);
+        pfunc_mask = (1 << pci->func);
     }
 
-    for(rc = 0, i = 7; i >= 0; --i) {
+    for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
                 /* if not passing through multiple devices in a block make
                  * sure that virtual function number 0 is always used otherwise
                  * guest won't see the device
                  */
-                pcidev->vdevfn = orig_vdev;
+                pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pcidev, pas); /* must be last */
+            do_pci_add(egc, domid, pci, pas); /* must be last */
             return;
         }
     }
@@ -1715,13 +1715,13 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) {
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+             pci->domain, pci->bus, pci->dev, pci->func,
              rc);
     }
     aodev->rc = rc;
@@ -1733,16 +1733,16 @@ typedef struct {
     libxl__ao_device *outer_aodev;
     libxl_domain_config *d_config;
     libxl_domid domid;
-} add_pcidevs_state;
+} add_pcis_state;
 
-static void add_pcidevs_done(libxl__egc *, libxl__multidev *, int rc);
+static void add_pcis_done(libxl__egc *, libxl__multidev *, int rc);
 
-static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                               libxl_domain_config *d_config,
-                               libxl__multidev *multidev)
+static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
+                            libxl_domain_config *d_config,
+                            libxl__multidev *multidev)
 {
     AO_GC;
-    add_pcidevs_state *apds;
+    add_pcis_state *apds;
     int i;
 
     /* We need to start a new multidev in order to be able to execute
@@ -1752,23 +1752,23 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     apds->outer_aodev = libxl__multidev_prepare(multidev);
     apds->d_config = d_config;
     apds->domid = domid;
-    apds->multidev.callback = add_pcidevs_done;
+    apds->multidev.callback = add_pcis_done;
     libxl__multidev_begin(ao, &apds->multidev);
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         libxl__ao_device *aodev = libxl__multidev_prepare(&apds->multidev);
-        libxl__device_pci_add(egc, domid, &d_config->pcidevs[i],
+        libxl__device_pci_add(egc, domid, &d_config->pcis[i],
                               true, aodev);
     }
 
     libxl__multidev_prepared(egc, &apds->multidev, 0);
 }
 
-static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
+static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
-                             int rc)
+                          int rc)
 {
     EGC_GC;
-    add_pcidevs_state *apds = CONTAINER_OF(multidev, *apds, multidev);
+    add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
 
     /* Convenience aliases */
     libxl_domain_config *d_config = apds->d_config;
@@ -1777,9 +1777,9 @@ static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
 
     if (rc) goto out;
 
-    if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
+    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
+        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
+                                       d_config->num_pcis);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
@@ -1792,7 +1792,7 @@ out:
 }
 
 static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
-                                    libxl_device_pci *pcidev, int force)
+                                    libxl_device_pci *pci, int force)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *state;
@@ -1804,12 +1804,12 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
+                     pci->bus, pci->dev, pci->func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
-    if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
+    if ( !force && (pci->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
         if (libxl__wait_for_device_model_deprecated(gc, domid, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
@@ -1830,7 +1830,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1844,7 +1844,7 @@ typedef struct pci_remove_state {
 } pci_remove_state;
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
-    uint32_t domid, libxl_device_pci *pcidev, bool force,
+    uint32_t domid, libxl_device_pci *pci, bool force,
     libxl__ao_device *aodev);
 static void device_pci_remove_common_next(libxl__egc *egc,
     pci_remove_state *prs, int rc);
@@ -1869,7 +1869,7 @@ static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
 static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pcidev, int force,
+                          libxl_device_pci *pci, int force,
                           pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
@@ -1887,8 +1887,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl__ptr_add(gc, assigned);
 
     rc = ERROR_INVAL;
-    if ( !is_pcidev_in_array(assigned, num, pcidev->domain,
-                      pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( !is_pci_in_array(assigned, num, pci->domain,
+                          pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1917,8 +1917,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                                     pcidev->bus, pcidev->dev, pcidev->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                                     pci->bus, pci->dev, pci->func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1953,8 +1953,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                               pcidev->bus, pcidev->dev, pcidev->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                               pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1988,7 +1988,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1996,7 +1996,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_remove_xenstore(gc, domid, pcidev, prs->force);
+    rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
     pci_remove_detatched(egc, prs, rc);
@@ -2010,7 +2010,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2018,7 +2018,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2080,14 +2080,14 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     if (rc) goto out;
 
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2135,10 +2135,10 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2156,7 +2156,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2170,30 +2170,30 @@ static void pci_remove_detatched(libxl__egc *egc,
     isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
 
     /* don't do multiple resets while some functions are still passed through */
-    if ( (pcidev->vdevfn & 0x7) == 0 ) {
-        libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    if ((pci->vdevfn & 0x7) == 0) {
+        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
         libxl__ao_device *const stubdom_aodev = &prs->stubdom_aodev;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
 
         libxl__prepare_ao_device(ao, stubdom_aodev);
         stubdom_aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
         stubdom_aodev->callback = pci_remove_stubdom_done;
         stubdom_aodev->update_json = prs->aodev->update_json;
-        libxl__device_pci_remove_common(egc, stubdomid, pcidev_s,
+        libxl__device_pci_remove_common(egc, stubdomid, pci_s,
                                         prs->force, stubdom_aodev);
         return;
     }
@@ -2219,14 +2219,14 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
                                             uint32_t domid,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             bool force,
                                             libxl__ao_device *aodev)
 {
@@ -2237,7 +2237,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pcidev = pcidev;
+    prs->pci = pci;
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2247,16 +2247,16 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
-    prs->orig_vdev = pcidev->vdevfn & ~7U;
+    prs->orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( pci_multifunction_check(gc, pcidev, &prs->pfunc_mask) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( pci_multifunction_check(gc, pci, &prs->pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= prs->pfunc_mask;
-    }else{
-        prs->pfunc_mask = (1 << pcidev->func);
+        pci->vfunc_mask &= prs->pfunc_mask;
+    } else {
+        prs->pfunc_mask = (1 << pci->func);
     }
 
     rc = 0;
@@ -2273,7 +2273,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2284,13 +2284,13 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         const int i = prs->next_func;
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
-                pcidev->vdevfn = orig_vdev;
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
+                pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pcidev, prs->force, prs);
+            do_pci_remove(egc, domid, pci, prs->force, prs);
             return;
         }
     }
@@ -2306,7 +2306,7 @@ out:
 }
 
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
 
 {
@@ -2318,12 +2318,12 @@ int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -2334,7 +2334,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, true, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, true, aodev);
     return AO_INPROGRESS;
 }
 
@@ -2353,7 +2353,7 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
     if (s)
         vdevfn = strtol(s, (char **) NULL, 16);
 
-    pcidev_struct_fill(pci, domain, bus, dev, func, vdevfn);
+    pci_struct_fill(pci, domain, bus, dev, func, vdevfn);
 
     s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/opts-%d", be_path, nr));
     if (s) {
@@ -2398,7 +2398,7 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     GC_INIT(ctx);
     char *be_path;
     unsigned int n, i;
-    libxl_device_pci *pcidevs = NULL;
+    libxl_device_pci *pcis = NULL;
 
     *num = 0;
 
@@ -2407,28 +2407,28 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     if (libxl__device_pci_get_num(gc, be_path, &n))
         goto out;
 
-    pcidevs = calloc(n, sizeof(libxl_device_pci));
+    pcis = calloc(n, sizeof(libxl_device_pci));
 
     for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcidevs + i);
+        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
 
     *num = n;
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
     STATE_AO_GC(multidev->ao);
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(CTX, domid, &num);
-    if ( pcidevs == NULL )
+    pcis = libxl_device_pci_list(CTX, domid, &num);
+    if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcidevs);
+    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2436,7 +2436,7 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
          * devices by the time we even get here!
          */
         libxl__ao_device *aodev = libxl__multidev_prepare(multidev);
-        libxl__device_pci_remove_common(egc, domid, pcidevs + i, true,
+        libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
 }
@@ -2449,13 +2449,13 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
     if (!libxl_defbool_val(d_config->b_info.u.hvm.gfx_passthru))
         return 0;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
         uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
         uint32_t stubdom_domid;
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
+        libxl_device_pci *pci = &d_config->pcis[i];
         unsigned long pci_device_class;
 
-        if (sysfs_dev_get_class(gc, pcidev, &pci_device_class))
+        if (sysfs_dev_get_class(gc, pci, &pci_device_class))
             continue;
         if (pci_device_class != 0x030000) /* VGA class */
             continue;
@@ -2494,7 +2494,7 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
 
 #define libxl__device_pci_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT_X(pcidev, pci, PCI,
+DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..20f8dd7cfa5d 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -940,7 +940,7 @@ libxl_domain_config = Struct("domain_config", [
 
     ("disks", Array(libxl_device_disk, "num_disks")),
     ("nics", Array(libxl_device_nic, "num_nics")),
-    ("pcidevs", Array(libxl_device_pci, "num_pcidevs")),
+    ("pcis", Array(libxl_device_pci, "num_pcis")),
     ("rdms", Array(libxl_device_rdm, "num_rdms")),
     ("dtdevs", Array(libxl_device_dtdev, "num_dtdevs")),
     ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 12fc0b3a7fdc..1d38fffce357 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -23,15 +23,15 @@ static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
     return 0;
 }
 
-static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                           unsigned int bus, unsigned int dev,
+                           unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
     return 0;
 }
 
@@ -47,7 +47,7 @@ static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
 #define STATE_RDM_STRATEGY      10
 #define STATE_RESERVE_POLICY    11
 #define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str)
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
 {
     unsigned state = STATE_DOMAIN;
     unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
@@ -110,11 +110,11 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 }
                 *ptr = '\0';
                 if ( !strcmp(tok, "*") ) {
-                    pcidev->vfunc_mask = LIBXL_PCI_FUNC_ALL;
+                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
                 }else{
                     if ( hex_convert(tok, &func, 0x7) )
                         goto parse_error;
-                    pcidev->vfunc_mask = (1 << 0);
+                    pci->vfunc_mask = (1 << 0);
                 }
                 tok = ptr + 1;
             }
@@ -141,18 +141,18 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
                 *ptr = '\0';
                 if ( !strcmp(optkey, "msitranslate") ) {
-                    pcidev->msitranslate = atoi(tok);
+                    pci->msitranslate = atoi(tok);
                 }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pcidev->power_mgmt = atoi(tok);
+                    pci->power_mgmt = atoi(tok);
                 }else if ( !strcmp(optkey, "permissive") ) {
-                    pcidev->permissive = atoi(tok);
+                    pci->permissive = atoi(tok);
                 }else if ( !strcmp(optkey, "seize") ) {
-                    pcidev->seize = atoi(tok);
+                    pci->seize = atoi(tok);
                 } else if (!strcmp(optkey, "rdm_policy")) {
                     if (!strcmp(tok, "strict")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
                     } else if (!strcmp(tok, "relaxed")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
                     } else {
                         XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
                                           " policy: 'strict' or 'relaxed'.",
@@ -175,7 +175,7 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
     assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
 
     /* Just a pretty way to fill in the values */
-    pcidev_struct_fill(pcidev, dom, bus, dev, func, vslot << 3);
+    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
 
     free(buf2);
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c5a..0765780d9f0a 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1470,24 +1470,24 @@ void parse_config_data(const char *config_source,
     }
 
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
+        d_config->num_pcis = 0;
+        d_config->pcis = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
-            libxl_device_pci *pcidev;
+            libxl_device_pci *pci;
 
-            pcidev = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
-                                               d_config->num_pcidevs,
-                                               libxl_device_pci_init);
-            pcidev->msitranslate = pci_msitranslate;
-            pcidev->power_mgmt = pci_power_mgmt;
-            pcidev->permissive = pci_permissive;
-            pcidev->seize = pci_seize;
+            pci = ARRAY_EXTEND_INIT_NODEVID(d_config->pcis,
+                                            d_config->num_pcis,
+                                            libxl_device_pci_init);
+            pci->msitranslate = pci_msitranslate;
+            pci->power_mgmt = pci_power_mgmt;
+            pci->permissive = pci_permissive;
+            pci->seize = pci_seize;
             /*
              * Like other pci option, the per-device policy always follows
              * the global policy by default.
              */
-            pcidev->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pcidev, buf);
+            pci->rdm_policy = b_info->u.hvm.rdm.policy;
+            e = xlu_pci_parse_bdf(config, pci, buf);
             if (e) {
                 fprintf(stderr,
                         "unable to parse PCI BDF `%s' for passthrough\n",
@@ -1495,7 +1495,7 @@ void parse_config_data(const char *config_source,
                 exit(-e);
             }
         }
-        if (d_config->num_pcidevs && c_info->type == LIBXL_DOMAIN_TYPE_PV)
+        if (d_config->num_pcis && c_info->type == LIBXL_DOMAIN_TYPE_PV)
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 58345bdae213..34fcf5a4fadf 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -24,20 +24,20 @@
 
 static void pcilist(uint32_t domid)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(ctx, domid, &num);
-    if (pcidevs == NULL)
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (pcis == NULL)
         return;
     printf("Vdev Device\n");
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
-               (pcidevs[i].vdevfn >> 3) & 0x1f, pcidevs[i].vdevfn & 0x7,
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pcilist(int argc, char **argv)
@@ -57,28 +57,28 @@ int main_pcilist(int argc, char **argv)
 
 static int pcidetach(uint32_t domid, const char *bdf, int force)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
     if (force) {
-        if (libxl_device_pci_destroy(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
     } else {
-        if (libxl_device_pci_remove(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_remove(ctx, domid, &pci, 0))
             r = 1;
     }
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -108,24 +108,24 @@ int main_pcidetach(int argc, char **argv)
 
 static int pciattach(uint32_t domid, const char *bdf, const char *vs)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_add(ctx, domid, &pcidev, 0))
+    if (libxl_device_pci_add(ctx, domid, &pci, 0))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -155,19 +155,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcidevs == NULL )
+    if ( pcis == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -184,24 +184,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -226,24 +226,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a0015709e..b03e348ffb9a 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -190,16 +190,16 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t)\n");
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcidevs[i].domain, d_config->pcidevs[i].bus,
-               d_config->pcidevs[i].dev, d_config->pcidevs[i].func,
-               d_config->pcidevs[i].vdevfn);
+               d_config->pcis[i].domain, d_config->pcis[i].bus,
+               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
-               d_config->pcidevs[i].msitranslate,
-               d_config->pcidevs[i].power_mgmt);
+               d_config->pcis[i].msitranslate,
+               d_config->pcis[i].power_mgmt);
         fprintf(fh, "\t\t)\n");
         fprintf(fh, "\t)\n");
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23576.50474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoS-0003NP-SR; Tue, 10 Nov 2020 17:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23576.50474; Tue, 10 Nov 2020 17:52:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoS-0003NC-NV; Tue, 10 Nov 2020 17:52:12 +0000
Received: by outflank-mailman (input) for mailman id 23576;
 Tue, 10 Nov 2020 17:52:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoS-0002tQ-2M
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94410a4f-55d2-45bd-9fc6-ddca36a6213a;
 Tue, 10 Nov 2020 17:51:57 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoC-0006Z8-15; Tue, 10 Nov 2020 17:51:56 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoB-0007RC-Pq; Tue, 10 Nov 2020 17:51:56 +0000
X-Inumbo-ID: 94410a4f-55d2-45bd-9fc6-ddca36a6213a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=BsdChYzNHsC0mIvRqh9QjeS15LyK+GbGCkAXV4LjPl8=; b=DE66vFvuT2mUJCzcG8qF0a8R+K
	Ob6x7obXVMkfdd+PxX1Rt263uUsLuh3br1DQr/7tpFN5y11Cm3qyr4cqqbNKli8L7FOSpgMeNpGVP
	c3NTdQ4Q2wEq5EWEgNyxmNzL37+cBOavwkqkOLlvm5IoAORe93N6ADtPXP9Log9EHLGA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 06/24] libxl: stop using aodev->device_config in libxl__device_pci_add()...
Date: Tue, 10 Nov 2020 17:51:29 +0000
Message-Id: <20201110175147.7067-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to hold a pointer to the device.

There is already a 'pci' field in 'pci_add_state' so simply use that from
the start. This also allows the 'pci' (#3) argument to be dropped from
do_pci_add().

NOTE: This patch also changes the type of the 'pci_domid' field in
      'pci_add_state' from 'int' to 'libxl_domid', which is more
      appropriate given what the field is used for.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0abc679c3958..c9f062fc2d8b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1055,7 +1055,7 @@ typedef struct pci_add_state {
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
     libxl_device_pci *pci;
-    int pci_domid;
+    libxl_domid pci_domid;
 } pci_add_state;
 
 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1072,7 +1072,6 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1081,6 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1545,13 +1543,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pci to be used by callbacks */
-    aodev->device_config = pci;
-    aodev->device_type = &libxl__pci_devtype;
-
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
+    pas->pci = pci;
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1605,9 +1600,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         GCNEW(pci_s);
         libxl_device_pci_init(pci_s);
         libxl_device_pci_copy(CTX, pci_s, pci);
+        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pas); /* must be last */
         return;
     }
 
@@ -1662,9 +1658,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     int i;
 
     /* Convenience aliases */
-    libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1699,7 +1694,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
                 pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pci, pas); /* must be last */
+            do_pci_add(egc, domid, pas); /* must be last */
             return;
         }
     }
@@ -1715,7 +1710,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23587.50487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoY-0003WS-6D; Tue, 10 Nov 2020 17:52:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23587.50487; Tue, 10 Nov 2020 17:52:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXoY-0003WK-2i; Tue, 10 Nov 2020 17:52:18 +0000
Received: by outflank-mailman (input) for mailman id 23587;
 Tue, 10 Nov 2020 17:52:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoX-0002tQ-2b
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26e9022a-6a30-41d8-a850-79199f96f771;
 Tue, 10 Nov 2020 17:51:57 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoC-0006ZE-U9; Tue, 10 Nov 2020 17:51:56 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoC-0007RC-Md; Tue, 10 Nov 2020 17:51:56 +0000
X-Inumbo-ID: 26e9022a-6a30-41d8-a850-79199f96f771
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=LyPpHUyMe6aEgoIkauLXqbCOzMhm1Lwmubu6y3KL/nk=; b=GcwxQJwuPDlxQYv/6Jq6F+VUV8
	j0vufZHutPnK0Q2vzQ7qtPuHcsj14FZJmQsheM3tjODpyOAACeIgXnX1crLSIsP1u59uI81SsGqw6
	jogtSQvAIJnl2Jo9uZFMXOU82tdanJtuuiBXknjC10HrRYsjuvq/6clz0JDulkxTvSyI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 07/24] libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
Date: Tue, 10 Nov 2020 17:51:30 +0000
Message-Id: <20201110175147.7067-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

For the purposes of re-binding a device to its previous driver,
libxl__device_pci_assignable_add() writes the driver path into xenstore.
This path is then read back in libxl__device_pci_assignable_remove().

The functions that support this writing to and reading from xenstore are
currently dedicated to this purpose, and hence the node name 'driver_path'
is hard-coded. This patch generalises these utility functions and passes
'driver_path' in as an argument. Subsequent patches will invoke them to
access other nodes.

NOTE: Because the functions will have a broader use (other than storing a
      driver path in lieu of pciback), the base xenstore path is also
      changed from '/libxl/pciback' to '/libxl/pci'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 66 +++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index c9f062fc2d8b..fdafd2c9bafb 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -718,48 +718,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCIBACK_INFO_PATH "/libxl/pciback"
+#define PCI_INFO_PATH "/libxl/pci"
 
-static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            char *driver_path)
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    char *path;
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
 
-    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pci->domain,
-                     pci->bus,
-                     pci->dev,
-                     pci->func);
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
+
+static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
+        LOGE(WARN, "Write of %s to node %s failed.", val, path);
     }
 }
 
-static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    return libxl__xs_read(gc, XBT_NULL,
-                          GCSPRINTF(
-                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func));
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
 {
+    char *path = pci_info_xs_path(gc, pci, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL,
-          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pci->domain,
-                    pci->bus,
-                    pci->dev,
-                    pci->func) );
+    xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
@@ -805,9 +803,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pci, driver_path);
+            pci_info_xs_write(gc, pci, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
+                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,7 +813,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pci);
+        pci_info_xs_remove(gc, pci, "driver_path");
     }
 
     if ( pciback_dev_assign(gc, pci) ) {
@@ -865,7 +863,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pci);
+    driver_path = pci_info_xs_read(gc, pci, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -878,7 +876,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pci);
+            pci_info_xs_remove(gc, pci, "driver_path");
         }
     } else {
         if ( rebind ) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 17:52:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 17:52:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23596.50499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXod-0003e3-Ki; Tue, 10 Nov 2020 17:52:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23596.50499; Tue, 10 Nov 2020 17:52:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXod-0003dt-Ei; Tue, 10 Nov 2020 17:52:23 +0000
Received: by outflank-mailman (input) for mailman id 23596;
 Tue, 10 Nov 2020 17:52:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXoc-0002tQ-2i
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 17:52:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e9f988b-aa1d-455a-882d-e9707216c62c;
 Tue, 10 Nov 2020 17:51:59 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoE-0006ZS-OF; Tue, 10 Nov 2020 17:51:58 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoE-0007RC-GC; Tue, 10 Nov 2020 17:51:58 +0000
X-Inumbo-ID: 0e9f988b-aa1d-455a-882d-e9707216c62c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=O8xz+gNA+F9oWyzdxZ8KFUGWd7gPCv/wYmwNsj3czZk=; b=g1THw3DQ/lv3kKPgEL3ZwTxTKc
	ZFG1VfmnMlfkANh01hJNrUauEqSYlIA+RoiikFuYNvDem+JPld10mSJxMonf5cOdmncijyN2VrBDF
	B7X7YiMl2amsD2DO27wtUvd6LA+Mx5u8svih8QAvUoNr0F5oaGV6pHmLG1QiK3dZccO4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 09/24] libxl: remove get_all_assigned_devices() from libxl_pci.c
Date: Tue, 10 Nov 2020 17:51:32 +0000
Message-Id: <20201110175147.7067-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Use of this function is a very inefficient way to check whether a device
has already been assigned.

This patch adds code that saves the domain id in xenstore at the point of
assignment, and removes it again when the device is de-assigned (or the
domain is destroyed). It is then straightforward to check whether a device
has been assigned by checking whether it has a saved domain id.

NOTE: To facilitate the xenstore check it is necessary to move the
      definition of pci_info_xs_read() earlier in libxl_pci.c. To keep
      related functions together, the rest of the pci_info_xs_XXX()
      functions are moved too.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 149 +++++++++++++----------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index f9f8374d7d36..ff37dc5b5921 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -317,50 +317,6 @@ retry_transaction2:
     return 0;
 }
 
-static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
-{
-    char **domlist;
-    unsigned int nd = 0, i;
-
-    *list = NULL;
-    *num = 0;
-
-    domlist = libxl__xs_directory(gc, XBT_NULL, "/local/domain", &nd);
-    for(i = 0; i < nd; i++) {
-        char *path, *num_devs;
-
-        path = GCSPRINTF("/local/domain/0/backend/%s/%s/0/num_devs",
-                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                         domlist[i]);
-        num_devs = libxl__xs_read(gc, XBT_NULL, path);
-        if ( num_devs ) {
-            int ndev = atoi(num_devs), j;
-            char *devpath, *bdf;
-
-            for(j = 0; j < ndev; j++) {
-                devpath = GCSPRINTF("/local/domain/0/backend/%s/%s/0/dev-%u",
-                                    libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                                    domlist[i], j);
-                bdf = libxl__xs_read(gc, XBT_NULL, devpath);
-                if ( bdf ) {
-                    unsigned dom, bus, dev, func;
-                    if ( sscanf(bdf, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
-                        continue;
-
-                    *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
-                    if (*list == NULL)
-                        return ERROR_NOMEM;
-                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
-                    (*num)++;
-                }
-            }
-        }
-    }
-    libxl__ptr_add(gc, *list);
-
-    return 0;
-}
-
 static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
                            int dom, int bus, int dev, int func)
 {
@@ -408,19 +364,58 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
     return 0;
 }
 
+#define PCI_INFO_PATH "/libxl/pci"
+
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
 
     *num = 0;
 
-    r = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if (r) goto out;
-
     dir = opendir(SYSFS_PCIBACK_DRIVER);
     if (NULL == dir) {
         if (errno == ENOENT) {
@@ -436,9 +431,6 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
-            continue;
-
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
@@ -448,6 +440,10 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
         memset(new, 0, sizeof(*new));
         pci_struct_fill(new, dom, bus, dev, func, 0);
+
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+            continue;
+
         (*num)++;
     }
 
@@ -718,48 +714,6 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCI_INFO_PATH "/libxl/pci"
-
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    return node ?
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
-                  node) :
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
-}
-
-
-static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node, const char *val)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", val, path);
-    }
-}
-
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    return libxl__xs_read(gc, XBT_NULL, path);
-}
-
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
-                               const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-
-    /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL, path);
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
@@ -1575,6 +1529,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
+    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    if (rc) goto out;
+
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
@@ -1702,6 +1659,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->domain, pci->bus, pci->dev, pci->func,
              rc);
+        pci_info_xs_remove(gc, pci, "domid");
     }
     aodev->rc = rc;
     aodev->callback(egc, aodev);
@@ -2279,6 +2237,9 @@ out:
     libxl__xswait_stop(gc, &prs->xswait);
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
+
+    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23646.50519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwk-0005AB-Rd; Tue, 10 Nov 2020 18:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23646.50519; Tue, 10 Nov 2020 18:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwk-00059z-KR; Tue, 10 Nov 2020 18:00:46 +0000
Received: by outflank-mailman (input) for mailman id 23646;
 Tue, 10 Nov 2020 18:00:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwi-00059I-MY
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f94b7b5-cbcb-446c-98ae-4a5335c88778;
 Tue, 10 Nov 2020 18:00:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwg-0006qR-To; Tue, 10 Nov 2020 18:00:42 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoM-0007RC-3v; Tue, 10 Nov 2020 17:52:06 +0000
X-Inumbo-ID: 2f94b7b5-cbcb-446c-98ae-4a5335c88778
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=dE8muqjMtVe9zxnJuQ+Eu3D+iUwfNz4L9IpDVriPtNY=; b=rOmNqggLotwZkwnZUI8lV0EjF6
	DV9PVTLgnkfv00EHLXPxBYixIbO06eI16znPNRWtzSDCoesp1AWYDUDm06d5wILxN26U9A2YCEK10
	CUoj3eIqJEAQ8Qx8X1l1N3NXht3pihSU1oOlc0bqVQ9A5kzjP3sjvEHEVZiXeFssE9jk=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 17/24] docs/man: fix xl(1) documentation for 'pci' operations
Date: Tue, 10 Nov 2020 17:51:40 +0000
Message-Id: <20201110175147.7067-18-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently the documentation completely fails to mention the existence of
PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
'pci-assignable-add/remove' take <BDF> arguments whereas 'pci-attach/detach'
take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
patch).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index f92bacfa7277..c5fbce3b5c4b 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1597,14 +1597,18 @@ List virtual network interfaces for a domain.
 
 =item B<pci-assignable-list>
 
-List all the assignable PCI devices.
+List all the B<BDF> of assignable PCI devices. See
+L<xl-pci-configuration(5)> for more information.
+
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
 =item B<pci-assignable-add> I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF assignable to guests.
+Make the device at B<BDF> assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
 first be unbound, and the original driver stored so that it can be
@@ -1620,8 +1624,10 @@ being used.
 
 =item B<pci-assignable-remove> [I<-r>] I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF not assignable to
-guests.  This will at least unbind the device from pciback, and
+Make the device at B<BDF> not assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
+This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
 option is specified, it will also attempt to re-bind the device to its
 original driver, making it usable by Domain 0 again.  If the device is
@@ -1637,15 +1643,15 @@ As always, this should only be done if you trust the guest, or are
 confident that the particular device you're re-assigning to dom0 will
 cancel all in-flight DMA on FLR.
 
-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-plug a new pass-through pci device to the specified domain.
-B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+Hot-plug a new pass-through pci device to the specified domain. See
+L<xl-pci-configuration(5)> for more information.
 
-=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
+=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
-Bus/Device/Function of the physical device to be removed from the guest domain.
+Hot-unplug a pci device that was previously passed through to a domain. See
+L<xl-pci-configuration(5)> for more information.
 
 B<OPTIONS>
 
@@ -1660,7 +1666,7 @@ even without guest domain's collaboration.
 
 =item B<pci-list> I<domain-id>
 
-List pass-through pci devices for a domain.
+List the B<BDF> of pci devices passed through to a domain.
 
 =back
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23645.50511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwk-00059f-GH; Tue, 10 Nov 2020 18:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23645.50511; Tue, 10 Nov 2020 18:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwk-00059Y-B3; Tue, 10 Nov 2020 18:00:46 +0000
Received: by outflank-mailman (input) for mailman id 23645;
 Tue, 10 Nov 2020 18:00:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwi-00059I-Ig
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a66384d9-06e1-4c39-98cb-d06930aa6aec;
 Tue, 10 Nov 2020 18:00:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qT-02; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoH-0007RC-Jy; Tue, 10 Nov 2020 17:52:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXwi-00059I-Ig
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:44 +0000
X-Inumbo-ID: a66384d9-06e1-4c39-98cb-d06930aa6aec
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a66384d9-06e1-4c39-98cb-d06930aa6aec;
	Tue, 10 Nov 2020 18:00:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=PtqM9YxtE81a3JL1qMtcUPTsYA1jEFS78iQFWeC5eZA=; b=cLGh52QeUqzLm/p2O6ihQl3rSd
	JTtdIwqy8pIvY1KJC+JiDMemyI6uGenYnO6+QoURLOWa7WIN7oo98hVXMeCTc66mzRqZ4brOq6gTz
	5xWWROFpfMQUKV6El+mpBJw2ILeqq2t0waul1FExyIYkBwlQ+4JlxSE2yRHhawRx8E6U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXwh-0006qT-02; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoH-0007RC-Jy; Tue, 10 Nov 2020 17:52:01 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 12/24] libxl: use COMPARE_PCI() macro in is_pci_in_array()...
Date: Tue, 10 Nov 2020 17:51:35 +0000
Message-Id: <20201110175147.7067-13-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... rather than an open-coded equivalent.

This patch tidies up the is_pci_in_array() function, making it take a single
'libxl_device_pci' argument rather than separate domain, bus, device and
function arguments. The already-available COMPARE_PCI() macro can then be
used, and the function is also modified to return 'bool' rather than 'int'.

The patch also modifies libxl_pci_assignable() to use is_pci_in_array() rather
than a separate open-coded equivalent, and also modifies it to return a
'bool' rather than an 'int'.

NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
      comparison, which should always have been the case.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_internal.h |  7 +++---
 tools/libs/light/libxl_pci.c      | 38 +++++++++++--------------------
 2 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 3e70ff639b3c..80d798862229 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4744,9 +4744,10 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->func == (b)->func &&    \
-                           (a)->bus == (b)->bus &&      \
-                           (a)->dev == (b)->dev)
+#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+                           (a)->bus == (b)->bus &&       \
+                           (a)->dev == (b)->dev &&       \
+                           (a)->func == (b)->func)
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index b87e121c4d5c..278ebd9f561b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -317,24 +317,17 @@ retry_transaction2:
     return 0;
 }
 
-static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
-                           int dom, int bus, int dev, int func)
+static bool is_pci_in_array(libxl_device_pci *pcis, int num,
+                            libxl_device_pci *pci)
 {
     int i;
 
-    for(i = 0; i < num_assigned; i++) {
-        if ( assigned[i].domain != dom )
-            continue;
-        if ( assigned[i].bus != bus )
-            continue;
-        if ( assigned[i].dev != dev )
-            continue;
-        if ( assigned[i].func != func )
-            continue;
-        return 1;
+    for (i = 0; i < num; i++) {
+        if (COMPARE_PCI(pci, &pcis[i]))
+            break;
     }
 
-    return 0;
+    return i < num;
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
@@ -1468,21 +1461,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
     libxl_device_pci *pcis;
-    int num, i;
+    int num;
+    bool assignable;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    for (i = 0; i < num; i++) {
-        if (pcis[i].domain == pci->domain &&
-            pcis[i].bus == pci->bus &&
-            pcis[i].dev == pci->dev &&
-            pcis[i].func == pci->func)
-            break;
-    }
+    assignable = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_assignable_list_free(pcis, num);
-    return i != num;
+
+    return assignable;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1831,8 +1820,7 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         goto out_fail;
     }
 
-    attached = is_pci_in_array(pcis, num, pci->domain,
-                               pci->bus, pci->dev, pci->func);
+    attached = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23647.50531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwl-0005BR-A1; Tue, 10 Nov 2020 18:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23647.50531; Tue, 10 Nov 2020 18:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwl-0005B4-40; Tue, 10 Nov 2020 18:00:47 +0000
Received: by outflank-mailman (input) for mailman id 23647;
 Tue, 10 Nov 2020 18:00:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwj-00059T-Mi
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 974cb0d8-4297-46b9-9b09-d374a24b83e5;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwg-0006qP-Pr; Tue, 10 Nov 2020 18:00:42 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoF-0007RC-GK; Tue, 10 Nov 2020 17:51:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXwj-00059T-Mi
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:45 +0000
X-Inumbo-ID: 974cb0d8-4297-46b9-9b09-d374a24b83e5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 974cb0d8-4297-46b9-9b09-d374a24b83e5;
	Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=oJvtVeJyKav8GhwUPtA/ZF/25xkdgeM2zKGBF6Bli9k=; b=U2q6I8EpNhvvQfSY4et0RwcqMm
	ZSruS/HlMk7Kx71HHmm2s3Rg4KoXJW4qN+/wC+FmkmeRfU2v7UMGkkWSjX7eDl9nbhd79ILJRowf1
	7LIN3v4CnMK2beo9M00GiPD8V795CAv8zjXDCXhnki6CMWnJcs7xWoGr1fTEAgFgJqSk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXwg-0006qP-Pr; Tue, 10 Nov 2020 18:00:42 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoF-0007RC-GK; Tue, 10 Nov 2020 17:51:59 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 10/24] libxl: make sure callers of libxl_device_pci_list() free the list after use
Date: Tue, 10 Nov 2020 17:51:33 +0000
Message-Id: <20201110175147.7067-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced libxl_device_pci_list_free() which should be used
by callers of libxl_device_pci_list() to properly dispose of the exported
'libxl_device_pci' types and then free the memory holding them. Whilst all
current callers do ensure the memory is freed, only the code in xl's
pcilist() function actually calls libxl_device_pci_dispose(). As it stands
this laxity does not lead to any memory leaks, but the simple addition of
e.g. a 'string' into the idl definition of 'libxl_device_pci' would lead
to leaks.

This patch makes sure all callers of libxl_device_pci_list() can call
libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
sure these are properly disposed at the end of the operations) rather
than keeping pointers to the structures returned by libxl_device_pci_list().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_pci.c | 68 ++++++++++++++++++++----------------
 tools/xl/xl_pci.c            |  3 +-
 2 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index ff37dc5b5921..a69cba793583 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1006,7 +1006,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     libxl_domid pci_domid;
 } pci_add_state;
 
@@ -1078,7 +1078,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1099,7 +1099,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1180,7 +1180,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1281,7 +1281,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1497,7 +1497,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
-    pas->pci = pci;
+
+    libxl_device_pci_copy(CTX, &pas->pci, pci);
+    pci = &pas->pci;
+
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1536,12 +1539,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pci_s;
-
-        GCNEW(pci_s);
-        libxl_device_pci_init(pci_s);
-        libxl_device_pci_copy(CTX, pci_s, pci);
-        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
         do_pci_add(egc, stubdomid, pas); /* must be last */
@@ -1600,7 +1597,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1651,7 +1648,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
@@ -1661,6 +1658,7 @@ static void device_pci_add_done(libxl__egc *egc,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -1767,7 +1765,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1809,23 +1807,26 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
+    libxl_device_pci *pcis;
+    bool attached;
     uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
-    libxl_device_pci *pci = prs->pci;
+    libxl_device_pci *pci = &prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
-    assigned = libxl_device_pci_list(ctx, domid, &num);
-    if (assigned == NULL) {
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (!pcis) {
         rc = ERROR_FAIL;
         goto out_fail;
     }
-    libxl__ptr_add(gc, assigned);
+
+    attached = is_pci_in_array(pcis, num, pci->domain,
+                               pci->bus, pci->dev, pci->func);
+    libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-    if ( !is_pci_in_array(assigned, num, pci->domain,
-                          pci->bus, pci->dev, pci->func) ) {
+    if (!attached) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1925,7 +1926,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1947,7 +1948,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2017,7 +2018,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     if (rc) goto out;
 
@@ -2072,7 +2073,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
          PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
@@ -2093,7 +2094,7 @@ static void pci_remove_detached(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2156,7 +2157,7 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, &prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
@@ -2174,7 +2175,10 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pci = pci;
+
+    libxl_device_pci_copy(CTX, &prs->pci, pci);
+    pci = &prs->pci;
+
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2209,7 +2213,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2240,6 +2244,7 @@ out:
 
     if (!rc) pci_info_xs_remove(gc, pci, "domid");
 
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -2342,7 +2347,6 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
     pcis = libxl_device_pci_list(CTX, domid, &num);
     if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2353,6 +2357,8 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
         libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
+
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 34fcf5a4fadf..7c0f102ac7b7 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -35,9 +35,8 @@ static void pcilist(uint32_t domid)
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int main_pcilist(int argc, char **argv)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23648.50547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwo-0005Ea-Ki; Tue, 10 Nov 2020 18:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23648.50547; Tue, 10 Nov 2020 18:00:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwo-0005EJ-Go; Tue, 10 Nov 2020 18:00:50 +0000
Received: by outflank-mailman (input) for mailman id 23648;
 Tue, 10 Nov 2020 18:00:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwn-00059I-HB
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe5ac561-eec2-4642-9b50-85772df026f8;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qd-7R; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoS-0007RC-Gk; Tue, 10 Nov 2020 17:52:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXwn-00059I-HB
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:49 +0000
X-Inumbo-ID: fe5ac561-eec2-4642-9b50-85772df026f8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fe5ac561-eec2-4642-9b50-85772df026f8;
	Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/V+Xsx5DyAJ7DrcbKg6+KBwsTjqitxu0fA7ODPAM4l8=; b=SSkY6Wm+6XNV8mzsO0jRUlugIB
	hWwAqvjfiM6g7mefDHPA9wKQ41EOnrQizDfX8PhGEdROla/sUXxwK+eQNoP8S/9u4jgYNxFrJNeHM
	6ajdsvN+jXf4qHNyOuF8kcl6Li4AE3OyFHcXvPvFJbeiNIge2hj5cXeRAWIbAuodZSMU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXwh-0006qd-7R; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoS-0007RC-Gk; Tue, 10 Nov 2020 17:52:12 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 23/24] docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
Date: Tue, 10 Nov 2020 17:51:46 +0000
Message-Id: <20201110175147.7067-24-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Since assignable devices can be named, a subsequent patch will support use
of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
this case the name will be used to look up the 'bdf' in the list of assignable
(or assigned) devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 4dd73bc498d6..db3360307cbd 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -51,7 +51,7 @@ is not specified, or if it is specified with an empty value (whether
 positionally or explicitly).
 
 B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
-B<bdf> will be ignored.
+B<bdf> or B<name> will be ignored.
 
 =head1 Positional Parameters
 
@@ -70,7 +70,11 @@ B<*> to indicate all functions of a multi-function device.
 
 =item Default Value
 
-None. This parameter is mandatory as it identifies the device.
+None. This parameter is mandatory in its positional form. As a non-positional
+parameter it is also mandatory unless a B<name> parameter is present, in
+which case B<bdf> must not be present since the B<name> will be used to find
+the B<bdf> in the list of assignable devices. See L<xl(1)> for more information
+on naming assignable devices.
 
 =back
 
@@ -194,4 +198,21 @@ B<NOTE>: This overrides the global B<rdm> option.
 
 =back
 
+=item B<name>=I<STRING>
+
+=over 4
+
+=item Description
+
+This is the name given when the B<BDF> was made assignable. See L<xl(1)> for
+more information on naming assignable devices.
+
+=item Default Value
+
+None. This parameter must not be present if a B<bdf> parameter is present.
+If a B<bdf> parameter is not present then B<name> is mandatory as it is
+required to look up the B<BDF> in the list of assignable devices.
+
+=back
+
 =back
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23649.50559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwp-0005HF-Uy; Tue, 10 Nov 2020 18:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23649.50559; Tue, 10 Nov 2020 18:00:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwp-0005H6-RI; Tue, 10 Nov 2020 18:00:51 +0000
Received: by outflank-mailman (input) for mailman id 23649;
 Tue, 10 Nov 2020 18:00:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwo-00059T-IV
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3bbcddc3-b58c-4341-adc9-b13c0e5f0e0c;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qZ-5C; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoK-0007RC-AO; Tue, 10 Nov 2020 17:52:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXwo-00059T-IV
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:50 +0000
X-Inumbo-ID: 3bbcddc3-b58c-4341-adc9-b13c0e5f0e0c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 3bbcddc3-b58c-4341-adc9-b13c0e5f0e0c;
	Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=a+z/TBEK7kf9VGtg2RqANtVzeEXmtQ7752DahaSIvAU=; b=RJeMbiwl9PPWEFvHWxNjMOKI19
	Qj/LDNpFY9/RhQZdvy6/qRwGAj4twYJwN6H0ojnswr1k1L1iq+lN3UAH+jTWLHqkdpr6cNHDie5SR
	XwnbxZCItwpdyN0tVe5lLYa0KqFLNXUhjg6M5hydTu2Q7np3uXTiSPZLIX6nOst6Jrfg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXwh-0006qZ-5C; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoK-0007RC-AO; Tue, 10 Nov 2020 17:52:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 15/24] docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
Date: Tue, 10 Nov 2020 17:51:38 +0000
Message-Id: <20201110175147.7067-16-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and put it into a new xl-pci-configuration(5) manpage, akin to the
xl-network-configuration(5) and xl-disk-configuration(5) manpages.

This patch moves the content of the section verbatim. A subsequent patch
will improve the documentation, once it is in its new location.
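
For reference, the moved section documents B<PCI_SPEC_STRING> entries of the form C<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,...>; a typical entry (values invented for illustration) looks like:

```
# xl.cfg fragment: pass through host device 03:0a.0, place it at
# virtual slot 7 in the guest, and allow permissive config-space writes.
pci = [ '03:0a.0@7,permissive=1' ]
```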

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 78 +++++++++++++++++++++++++++++
 docs/man/xl.cfg.5.pod.in            | 68 +------------------------
 2 files changed, 79 insertions(+), 67 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
new file mode 100644
index 000000000000..72a27bd95dec
--- /dev/null
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -0,0 +1,78 @@
+=encoding utf8
+
+=head1 NAME
+
+xl-pci-configuration - XL PCI Configuration Syntax
+
+=head1 SYNTAX
+
+This document specifies the format for B<PCI_SPEC_STRING> which is used by
+the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+
+Each B<PCI_SPEC_STRING> has the form of
+B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<[DDDD:]BB:DD.F>
+
+Identifies the PCI device from the host perspective in the domain
+(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
+the same scheme as used in the output of B<lspci(1)> for the device in
+question.
+
+Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
+is zero and it is optional here also. You may specify the function
+(B<F>) as B<*> to indicate all functions.
+
+=item B<@VSLOT>
+
+Specifies the virtual slot where the guest will see this
+device. This is equivalent to the B<DD> which the guest sees. In a
+guest B<DDDD> and B<BB> are C<0000:00>.
+
+=item B<permissive=BOOLEAN>
+
+By default pciback only allows PV guests to write "known safe" values
+into PCI configuration space, likewise QEMU (both qemu-xen and
+qemu-xen-traditional) imposes the same constraint on HVM guests.
+However, many devices require writes to other areas of the configuration space
+in order to operate properly.  This option tells the backend (pciback or QEMU)
+to allow all writes to the PCI configuration space of this device by this
+domain.
+
+B<This option should be enabled with caution:> it gives the guest much
+more control over the device, which may have security or stability
+implications.  It is recommended to only enable this option for
+trusted VMs under administrator's control.
+
+=item B<msitranslate=BOOLEAN>
+
+Specifies that MSI-INTx translation should be turned on for the PCI
+device. When enabled, MSI-INTx translation will always enable MSI on
+the PCI device regardless of whether the guest uses INTx or MSI. Some
+device drivers, such as NVIDIA's, detect an inconsistency and do not
+function when this option is enabled. Therefore the default is false (0).
+
+=item B<seize=BOOLEAN>
+
+Tells B<xl> to automatically attempt to re-assign a device to
+pciback if it is not already assigned.
+
+B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+system device, such as a network or a disk controller being used by
+dom0 without confirmation.  Please use with care.
+
+=item B<power_mgmt=BOOLEAN>
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device. The default is false (0).
+
+=item B<rdm_policy=STRING>
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option but just specific to a given device. The default is "relaxed".
+
+Note: this would override global B<rdm> option.
+
+=back
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..b00644e852f9 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1101,73 +1101,7 @@ option is valid only when the B<controller> option is specified.
 =item B<pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]>
 
 Specifies the host PCI devices to passthrough to this guest.
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
-
-=over 4
-
-=item B<[DDDD:]BB:DD.F>
-
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
-
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
-
-=item B<@VSLOT>
-
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
-
-=item B<permissive=BOOLEAN>
-
-By default pciback only allows PV guests to write "known safe" values
-into PCI configuration space, likewise QEMU (both qemu-xen and
-qemu-xen-traditional) imposes the same constraint on HVM guests.
-However, many devices require writes to other areas of the configuration space
-in order to operate properly.  This option tells the backend (pciback or QEMU)
-to allow all writes to the PCI configuration space of this device by this
-domain.
-
-B<This option should be enabled with caution:> it gives the guest much
-more control over the device, which may have security or stability
-implications.  It is recommended to only enable this option for
-trusted VMs under administrator's control.
-
-=item B<msitranslate=BOOLEAN>
-
-Specifies that MSI-INTx translation should be turned on for the PCI
-device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
-function when this option is enabled. Therefore the default is false (0).
-
-=item B<seize=BOOLEAN>
-
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
-
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
-system device, such as a network or a disk controller being used by
-dom0 without confirmation.  Please use with care.
-
-=item B<power_mgmt=BOOLEAN>
-
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
-
-=back
+See L<xl-pci-configuration(5)> for more details.
 
 =item B<pci_permissive=BOOLEAN>
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23650.50571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwu-0005NV-Du; Tue, 10 Nov 2020 18:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23650.50571; Tue, 10 Nov 2020 18:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwu-0005NL-9K; Tue, 10 Nov 2020 18:00:56 +0000
Received: by outflank-mailman (input) for mailman id 23650;
 Tue, 10 Nov 2020 18:00:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXws-00059I-HE
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ebb0dff-749e-4c82-9d0c-1cd728dfa451;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qv-JA; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoQ-0007RC-Dc; Tue, 10 Nov 2020 17:52:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXws-00059I-HE
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:54 +0000
X-Inumbo-ID: 0ebb0dff-749e-4c82-9d0c-1cd728dfa451
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0ebb0dff-749e-4c82-9d0c-1cd728dfa451;
	Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uc5JF4Aw3pRQS5p/Kqsvv3JLTlI3UZvcoVyjRuBT19E=; b=zj6e0f7fy52Y1xBC/zGW7mf+Y1
	GNZm5L6ykgc9s4kMtsaF3VKO26F25xfyQUVSeBINvPOnGgwfLfCzM8xCGq/xM9AJeOZsL9yiVSfSc
	3uBgCNI0WZiQwsqxSIBYTd2Qh2pJ9kKMhtdEnNkmn93CCoTAOnCyJP6CNQZtcvvDWd14=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXwh-0006qv-JA; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoQ-0007RC-Dc; Tue, 10 Nov 2020 17:52:10 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 21/24] docs/man: modify xl(1) in preparation for naming of assignable devices
Date: Tue, 10 Nov 2020 17:51:44 +0000
Message-Id: <20201110175147.7067-22-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to allow a name to be specified to
'xl pci-assignable-add' such that the assignable device may be referred to
by that name in subsequent operations.
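
An illustrative session using the new options (the name 'mynic' is hypothetical; the commands and flags are those documented in the diff below):

```
# Make a device assignable and give it a name.
xl pci-assignable-add -n mynic 0000:01:00.0

# List assignable devices; -n also shows any supplied names.
xl pci-assignable-list -n

# The device can later be removed by name instead of BDF.
xl pci-assignable-remove -r mynic
```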

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index c5fbce3b5c4b..0822a5842835 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1595,19 +1595,23 @@ List virtual network interfaces for a domain.
 
 =over 4
 
-=item B<pci-assignable-list>
+=item B<pci-assignable-list> [I<-n>]
 
 List all the B<BDF> of assignable PCI devices. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+specified then any name supplied when the device was made assignable
+will also be displayed.
 
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
-=item B<pci-assignable-add> I<BDF>
+=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
 
 Make the device at B<BDF> assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+supplied then the assignable device entry will be named with the
+given B<NAME>.
 
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
@@ -1622,10 +1626,11 @@ not to do this on a device critical to domain 0's operation, such as
 storage controllers, network interfaces, or GPUs that are currently
 being used.
 
-=item B<pci-assignable-remove> [I<-r>] I<BDF>
+=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
 
-Make the device at B<BDF> not assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+Make a device non-assignable to guests. The device may be identified
+either by its B<BDF> or the B<NAME> supplied when the device was made
+assignable. See L<xl-pci-configuration(5)> for more information.
 
 This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:00:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23651.50583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwv-0005QT-OT; Tue, 10 Nov 2020 18:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23651.50583; Tue, 10 Nov 2020 18:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwv-0005QH-Ju; Tue, 10 Nov 2020 18:00:57 +0000
Received: by outflank-mailman (input) for mailman id 23651;
 Tue, 10 Nov 2020 18:00:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwt-00059T-IL
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f24967f7-6e80-447f-810f-d2a081aa45d8;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006ql-Dh; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoR-0007RC-Jv; Tue, 10 Nov 2020 17:52:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kcXwt-00059T-IL
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:55 +0000
X-Inumbo-ID: f24967f7-6e80-447f-810f-d2a081aa45d8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f24967f7-6e80-447f-810f-d2a081aa45d8;
	Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jW0SM/9setCxRvCCKF+uNweMj12crmD9b/aY8yS6dQw=; b=Mj/kh4X1TAA5iAQpP0OapDqTdD
	d9I/wDtERC56U22amvKHv3la4ywh2HEI/kSLBVqQEZlclBbDFrZhJH7p0aysfdL29pW2d9uvEh42E
	ax9fkbDWsUuIa2egyID3jaqShOWWPx72LDpohUqkZBT705a1G12agqbHsMfEKmf1q/Lc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXwh-0006ql-Dh; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kcXoR-0007RC-Jv; Tue, 10 Nov 2020 17:52:11 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 22/24] xl / libxl: support naming of assignable devices
Date: Tue, 10 Nov 2020 17:51:45 +0000
Message-Id: <20201110175147.7067-23-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch modifies libxl_device_pci_assignable_add() to take an optional
'name' argument, which (if supplied) is saved into xenstore and can hence be
used to refer to the now-assignable BDF in subsequent operations. To
facilitate this, a new libxl_device_pci_assignable_name2bdf() function is
added.

The xl code is modified to allow a name to be specified in the
'pci-assignable-add' operation and also allow an option to be specified to
'pci-assignable-list' requesting that names be displayed. The latter is
facilitated by a new libxl_device_pci_assignable_bdf2name() function. Finally
xl 'pci-assignable-remove' is modified so that either a name or BDF can be
supplied. The supplied 'identifier' is first assumed to be a name, but if
libxl_device_pci_assignable_name2bdf() fails to find a matching BDF the
identifier itself will be parsed as a BDF. Names may only include printable
characters and may not include whitespace.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                | 19 +++++-
 tools/libs/light/libxl_pci.c         | 86 ++++++++++++++++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +-
 tools/xl/xl_cmdtable.c               | 12 ++--
 tools/xl/xl_pci.c                    | 80 ++++++++++++++++++--------
 5 files changed, 164 insertions(+), 36 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5703fdf367c5..4025d3a3d437 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -476,6 +476,14 @@
  */
 #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
 
+/*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_NAME indicates that the
+ * libxl_device_pci_assignable_add() function takes a 'name' argument
+ * and that the libxl_device_pci_assignable_name2bdf() and
+ * libxl_device_pci_assignable_bdf2name() functions are defined.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2385,11 +2393,18 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    const char *name, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                       int rebind);
 libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name);
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf);
+
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
 int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index d436e0a42b0c..a789ade036fa 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -713,6 +713,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_pci_bdf *pcibdf,
+                                            const char *name,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -721,6 +722,23 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     int rc;
     struct stat st;
 
+    /* Sanitise any name that was passed */
+    if (name) {
+        unsigned int i, n = strlen(name);
+
+        if (n > 64) { /* Reasonable upper bound on name length */
+            LOG(ERROR, "Name too long");
+            return ERROR_FAIL;
+        }
+
+        for (i = 0; i < n; i++) {
+            if (!isgraph(name[i])) {
+                LOG(ERROR, "Names may only include printable characters");
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     /* Local copy for convenience */
     dom = pcibdf->domain;
     bus = pcibdf->bus;
@@ -741,7 +759,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
     if ( rc ) {
         LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto quarantine;
+        goto name;
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
@@ -772,7 +790,12 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-quarantine:
+name:
+    if (name)
+        pci_info_xs_write(gc, pcibdf, "name", name);
+    else
+        pci_info_xs_remove(gc, pcibdf, "name");
+
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -836,16 +859,18 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
         }
     }
 
+    pci_info_xs_remove(gc, pcibdf, "name");
+
     return 0;
 }
 
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
-                                    int rebind)
+                                    const char *name, int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, name, rebind);
 
     GC_FREE;
     return rc;
@@ -864,6 +889,57 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
     return rc;
 }
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name)
+{
+    GC_INIT(ctx);
+    char **bdfs;
+    libxl_pci_bdf *pcibdf;
+    unsigned int i, n;
+
+    bdfs = libxl__xs_directory(gc, XBT_NULL, PCI_INFO_PATH, &n);
+    if (!n)
+        goto out;
+
+    pcibdf = calloc(1, sizeof(*pcibdf));
+    if (!pcibdf)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        unsigned dom, bus, dev, func;
+        const char *tmp;
+
+        if (sscanf(bdfs[i], PCI_BDF_XSPATH, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        pcibdf_struct_fill(pcibdf, dom, bus, dev, func);
+
+        tmp = pci_info_xs_read(gc, pcibdf, "name");
+        if (tmp && !strcmp(tmp, name))
+            goto out;
+    }
+
+    free(pcibdf);
+    pcibdf = NULL;
+
+out:
+    GC_FREE;
+    return pcibdf;
+}
+
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf)
+{
+    GC_INIT(ctx);
+    char *name = NULL, *tmp = pci_info_xs_read(gc, pcibdf, "name");
+
+    if (tmp)
+        name = strdup(tmp);
+
+    GC_FREE;
+    return name;
+}
+
 /*
  * This function checks that all functions of a device are bound to pciback
  * driver. It also initialises a bit-mask of which function numbers are present
@@ -1528,7 +1604,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     if (rc) goto out;
 
     if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
-        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, NULL, 1);
         if ( rc )
             goto out;
     }
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2388f238697c..96bb4655e0b1 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,8 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, NULL,
+					      c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 2ee0c4967334..9e9aa448e2a6 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -105,21 +105,25 @@ struct cmd_spec cmd_table[] = {
     { "pci-assignable-add",
       &main_pciassignable_add, 0, 1,
       "Make a device assignable for pci-passthru",
-      "<BDF>",
+      "[options] <BDF>",
+      "-n NAME, --name=NAME    Name the assignable device.\n"
       "-h                      Print this help.\n"
     },
     { "pci-assignable-remove",
       &main_pciassignable_remove, 0, 1,
       "Remove a device from being assignable",
-      "[options] <BDF>",
+      "[options] <BDF>|NAME",
       "-h                      Print this help.\n"
       "-r                      Attempt to re-assign the device to the\n"
-      "                        original driver"
+      "                        original driver."
     },
     { "pci-assignable-list",
       &main_pciassignable_list, 0, 0,
       "List all the assignable pci devices",
-      "",
+      "[options]",
+      "-h                      Print this help.\n"
+      "-n, --show-names        Display assignable device names where\n"
+      "                        supplied.\n"
     },
     { "pause",
       &main_pause, 0, 1,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 37708b4eb14d..f1b58b39762a 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -152,7 +152,7 @@ int main_pciattach(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
-static void pciassignable_list(void)
+static void pciassignable_list(bool show_names)
 {
     libxl_pci_bdf *pcibdfs;
     int num, i;
@@ -162,9 +162,15 @@ static void pciassignable_list(void)
     if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
-        printf("%04x:%02x:%02x.%01x\n",
-               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
-               pcibdfs[i].func);
+        libxl_pci_bdf *pcibdf = &pcibdfs[i];
+        char *name = show_names ?
+            libxl_device_pci_assignable_bdf2name(ctx, pcibdf) : NULL;
+
+        printf("%04x:%02x:%02x.%01x %s\n",
+               pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
+               name ?: "");
+
+        free(name);
     }
     libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
@@ -172,16 +178,23 @@ static void pciassignable_list(void)
 int main_pciassignable_list(int argc, char **argv)
 {
     int opt;
+    static struct option opts[] = {
+        {"show-names", 0, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    bool show_names = false;
 
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-        /* No options */
+    SWITCH_FOREACH_OPT(opt, "n", opts, "pci-assignable-list", 0) {
+    case 'n':
+        show_names = true;
+        break;
     }
 
-    pciassignable_list();
+    pciassignable_list(show_names);
     return 0;
 }
 
-static int pciassignable_add(const char *bdf, int rebind)
+static int pciassignable_add(const char *bdf, const char *name, int rebind)
 {
     libxl_pci_bdf pcibdf;
     XLU_Config *config;
@@ -197,7 +210,7 @@ static int pciassignable_add(const char *bdf, int rebind)
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, name, rebind))
         r = 1;
 
     libxl_pci_bdf_dispose(&pcibdf);
@@ -210,39 +223,58 @@ int main_pciassignable_add(int argc, char **argv)
 {
     int opt;
     const char *bdf = NULL;
+    static struct option opts[] = {
+        {"name", 1, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    const char *name = NULL;
 
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-        /* No options */
+    SWITCH_FOREACH_OPT(opt, "n:", opts, "pci-assignable-add", 1) {
+    case 'n':
+        name = optarg;
+        break;
     }
 
     bdf = argv[optind];
 
-    if (pciassignable_add(bdf, 1))
+    if (pciassignable_add(bdf, name, 1))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciassignable_remove(const char *bdf, int rebind)
+static int pciassignable_remove(const char *ident, int rebind)
 {
-    libxl_pci_bdf pcibdf;
+    libxl_pci_bdf *pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_pci_bdf_init(&pcibdf);
-
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
-        exit(2);
+    pcibdf = libxl_device_pci_assignable_name2bdf(ctx, ident);
+    if (!pcibdf) {
+        pcibdf = calloc(1, sizeof(*pcibdf));
+
+        if (!pcibdf) {
+            fprintf(stderr,
+                    "pci-assignable-remove: failed to allocate memory\n");
+            exit(2);
+        }
+
+        libxl_pci_bdf_init(pcibdf);
+        if (xlu_pci_parse_bdf(config, pcibdf, ident)) {
+            fprintf(stderr,
+                    "pci-assignable-remove: malformed BDF '%s'\n", ident);
+            exit(2);
+        }
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, pcibdf, rebind))
         r = 1;
 
-    libxl_pci_bdf_dispose(&pcibdf);
+    libxl_pci_bdf_dispose(pcibdf);
+    free(pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -251,7 +283,7 @@ static int pciassignable_remove(const char *bdf, int rebind)
 int main_pciassignable_remove(int argc, char **argv)
 {
     int opt;
-    const char *bdf = NULL;
+    const char *ident = NULL;
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
@@ -260,9 +292,9 @@ int main_pciassignable_remove(int argc, char **argv)
         break;
     }
 
-    bdf = argv[optind];
+    ident = argv[optind];
 
-    if (pciassignable_remove(bdf, rebind))
+    if (pciassignable_remove(ident, rebind))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23652.50595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwz-0005Wv-6Z; Tue, 10 Nov 2020 18:01:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23652.50595; Tue, 10 Nov 2020 18:01:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXwz-0005Wi-1Q; Tue, 10 Nov 2020 18:01:01 +0000
Received: by outflank-mailman (input) for mailman id 23652;
 Tue, 10 Nov 2020 18:00:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwx-00059I-HU
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:00:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db2b885e-7a28-4694-bd02-816bfa1cda17;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qt-H9; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoL-0007RC-7A; Tue, 10 Nov 2020 17:52:05 +0000
X-Inumbo-ID: db2b885e-7a28-4694-bd02-816bfa1cda17
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=9oBaH3TvR/OzX4i2SsbbWQZTQ5vUhqb0tdWkNXX9QLM=; b=CMdcstQWqJOC3kVza/V0xPHD28
	nEjdzSsTU4SssfxdTNZjpp42CEwXVOgL1dkPs9/KoRB7l/3Uhvyu2MR01AlEgbAGC8MsHq3jEuwM0
	MFbozyXgqNRp8HbCyKbHfqM4cUEndJwvDUVMIl24OYprWlxSQd0wNTtgaB62Dw3uFl5g=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 16/24] docs/man: improve documentation of PCI_SPEC_STRING...
Date: Tue, 10 Nov 2020 17:51:39 +0000
Message-Id: <20201110175147.7067-17-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and prepare for adding support for non-positional parsing of 'bdf' and
'vslot' in a subsequent patch.

Also document 'BDF' as a first-class parameter type and fix the documentation
to state that the default value of 'rdm_policy' is actually 'strict', not
'relaxed', as can be seen in libxl__device_pci_setdefault().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 187 +++++++++++++++++++++++-----
 1 file changed, 153 insertions(+), 34 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 72a27bd95dec..4dd73bc498d6 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -6,32 +6,105 @@ xl-pci-configuration - XL PCI Configuration Syntax
 
 =head1 SYNTAX
 
-This document specifies the format for B<PCI_SPEC_STRING> which is used by
-the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+This document specifies the format for B<BDF> and B<PCI_SPEC_STRING>, which are
+used by the L<xl.cfg(5)> pci configuration option, and related L<xl(1)>
+commands.
 
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+A B<BDF> has the following form:
+
+    [DDDD:]BB:SS.F
+
+B<DDDD> is the domain number, B<BB> is the bus number, B<SS> is the device (or
+slot) number, and B<F> is the function number. This is the same scheme as
+used in the output of L<lspci(1)> for the device in question. By default,
+L<lspci(1)> will omit the domain (B<DDDD>) if it is zero, and hence a zero
+value for domain may also be omitted when specifying a B<BDF>.
+
+Each B<PCI_SPEC_STRING> has one of the following forms:
 
 =over 4
 
-=item B<[DDDD:]BB:DD.F>
+    [<bdf>[@<vslot>]],[<key>=<value>,]*
+    [<key>=<value>,]*
 
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
+=back
 
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
+For example, these strings are equivalent:
 
-=item B<@VSLOT>
+=over 4
 
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
+    36:00.0@20,seize=1
+    36:00.0,vslot=20,seize=1
+    bdf=36:00.0,vslot=20,seize=1
 
-=item B<permissive=BOOLEAN>
+=back
+
+More formally, the string is a series of comma-separated keyword/value
+pairs, flags and positional parameters.  Parameters which are not bare
+keywords and which do not contain "=" symbols are assigned to the
+positional parameters, in the order specified below.  The positional
+parameters may also be specified by name.
+
+Each parameter may be specified at most once, either as a positional
+parameter or a named parameter.  Default values apply if the parameter
+is not specified, or if it is specified with an empty value (whether
+positionally or explicitly).
+
+B<NOTE>: In the context of B<xl pci-detach> (see L<xl(1)>), parameters other than
+B<bdf> will be ignored.
+
+=head1 Positional Parameters
+
+=over 4
+
+=item B<bdf>=I<BDF>
+
+=over 4
+
+=item Description
+
+This identifies the PCI device from the host perspective.
+
+In the context of a B<PCI_SPEC_STRING> you may specify the function (B<F>) as
+B<*> to indicate all functions of a multi-function device.
+
+=item Default Value
+
+None. This parameter is mandatory as it identifies the device.
+
+=back
+
+=item B<vslot>=I<NUMBER>
+
+=over 4
+
+=item Description
+
+Specifies the virtual slot (device) number where the guest will see this
+device. For example, running L<lspci(1)> in a Linux guest where B<vslot>
+was specified as C<8> would identify the device as C<00:08.0>. Virtual domain
+and bus numbers are always 0.
+
+B<NOTE:> This parameter is always parsed as a hexadecimal value.
+
+=item Default Value
+
+None. This parameter is not mandatory. An available B<vslot> will be selected
+if this parameter is not specified.
+
+=back
+
+=back
+
+=head1 Other Parameters and Flags
+
+=over 4
+
+=item B<permissive>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 By default pciback only allows PV guests to write "known safe" values
 into PCI configuration space, likewise QEMU (both qemu-xen and
@@ -46,33 +119,79 @@ more control over the device, which may have security or stability
 implications.  It is recommended to only enable this option for
 trusted VMs under administrator's control.
 
-=item B<msitranslate=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<msitranslate>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 Specifies that MSI-INTx translation should be turned on for the PCI
 device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
+the PCI device regardless of whether the guest uses INTx or MSI.
+
+=item Default Value
+
+Some device drivers, such as NVIDIA's, detect an inconsistency and do not
 function when this option is enabled. Therefore the default is false (0).
 
-=item B<seize=BOOLEAN>
+=back
 
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
+=item B<seize>=I<BOOLEAN>
 
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+=over 4
+
+=item Description
+
+Tells L<xl(1)> to automatically attempt to make the device assignable to
+guests if that has not already been done by the B<pci-assignable-add>
+command.
+
+B<WARNING:> If you set this option, L<xl(1)> will gladly re-assign a critical
 system device, such as a network or a disk controller being used by
 dom0 without confirmation.  Please use with care.
 
-=item B<power_mgmt=BOOLEAN>
+=item Default Value
 
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
+0
+
+=back
+
+=item B<power_mgmt>=I<BOOLEAN>
+
+=over 4
+
+=item Description
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device.
+
+=item Default Value
+
+0
+
+=back
+
+=item B<rdm_policy>=I<STRING>
+
+=over 4
+
+=item Description
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option in L<xl.cfg(5)> but just specific to a given device.
+
+B<NOTE>: This overrides the global B<rdm> option.
+
+=item Default Value
+
+"strict"
+
+=back
 
 =back
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23653.50607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx0-0005b7-RJ; Tue, 10 Nov 2020 18:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23653.50607; Tue, 10 Nov 2020 18:01:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx0-0005ao-MO; Tue, 10 Nov 2020 18:01:02 +0000
Received: by outflank-mailman (input) for mailman id 23653;
 Tue, 10 Nov 2020 18:01:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXwy-00059T-IO
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13b82948-4a68-4d26-96a5-4c50f9399bd0;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qy-Kc; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoT-0007RC-Gn; Tue, 10 Nov 2020 17:52:13 +0000
X-Inumbo-ID: 13b82948-4a68-4d26-96a5-4c50f9399bd0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=f7b9R9L0EjwySNnJBfVklW8PEfUZPW/pk87BXU1BZ8k=; b=RpTtxFVq7hjo8fLU4vT4GWTfwx
	WLEBXj6vfdbHqwU7xyB9dtJILMcuIJ5dLko4yFnpd5057KfhCrsFA9QOkZDoPmizKY3jhGMaF52k/
	TrlVUTt0MCvqU+li4LKTdba/9A9+rfh1BirEWL/VOxP11dCVvqrrFbOvAp4T0n4J53MQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 24/24] xl / libxl: support 'xl pci-attach/detach' by name
Date: Tue, 10 Nov 2020 17:51:47 +0000
Message-Id: <20201110175147.7067-25-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a 'name' field to the idl for 'libxl_device_pci', and
libxlu_pci_parse_spec_string() is modified to parse the new 'name'
parameter of PCI_SPEC_STRING, as detailed in the updated documentation
in xl-pci-configuration(5).

If the 'name' field is non-NULL then both libxl_device_pci_add() and
libxl_device_pci_remove() will use it to look up the device BDF in
the list of assignable devices.
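The lookup described above can be sketched against an in-memory stand-in for the per-device records; the struct layout and table below are hypothetical scaffolding rather than libxl code, but the scan-and-compare shape matches what libxl_device_pci_assignable_name2bdf() does over the xenstore entries:

```c
#include <stddef.h>
#include <string.h>

struct assignable {          /* stand-in for one assignable-device record */
    const char *name;        /* may be NULL if no name was supplied */
    unsigned int domain, bus, dev, func;
};

/* Return the record whose 'name' entry matches, or NULL if none does:
 * records without a name are simply skipped during the scan. */
static const struct assignable *
name2bdf(const struct assignable *tbl, size_t n, const char *name)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (tbl[i].name && !strcmp(tbl[i].name, name))
            return &tbl[i];
    }

    return NULL;
}
```

A NULL result lets the caller fall back to treating the identifier as a literal BDF, which is exactly how 'xl pci-assignable-remove' accepts either form.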

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h            |  6 +++
 tools/libs/light/libxl_pci.c     | 67 +++++++++++++++++++++++++++++---
 tools/libs/light/libxl_types.idl |  1 +
 tools/libs/util/libxlu_pci.c     |  7 +++-
 4 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4025d3a3d437..5b55a2015533 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -484,6 +484,12 @@
  */
 #define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_NAME indicates that the 'name' field of
+ * libxl_device_pci is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_NAME 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index a789ade036fa..8dfbcf8d390d 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -60,6 +60,10 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             int num,
                                             const libxl_device_pci *pci)
 {
+    if (pci->name) {
+        flexarray_append(back, GCSPRINTF("name-%d", num));
+        flexarray_append(back, GCSPRINTF("%s", pci->name));
+    }
     flexarray_append(back, GCSPRINTF("key-%d", num));
     flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
@@ -252,6 +256,7 @@ retry_transaction:
 
 retry_transaction2:
     t = xs_transaction_start(ctx->xsh);
+    xs_rm(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/state-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/key-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/dev-%d", be_path, i));
@@ -290,6 +295,12 @@ retry_transaction2:
             xs_write(ctx->xsh, t, GCSPRINTF("%s/vdevfn-%d", be_path, j - 1), tmp, strlen(tmp));
             xs_rm(ctx->xsh, t, tmppath);
         }
+        tmppath = GCSPRINTF("%s/name-%d", be_path, j);
+        tmp = libxl__xs_read(gc, t, tmppath);
+        if (tmp) {
+            xs_write(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, j - 1), tmp, strlen(tmp));
+            xs_rm(ctx->xsh, t, tmppath);
+        }
     }
     if (!xs_transaction_end(ctx->xsh, t, 0))
         if (errno == EAGAIN)
@@ -1587,6 +1598,23 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &pci->bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
         rc = xc_test_assign_device(ctx->xch, domid,
                                    pci_encode_bdf(&pci->bdf));
@@ -1735,11 +1763,19 @@ static void device_pci_add_done(libxl__egc *egc,
     libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
-        LOGD(ERROR, domid,
-             "libxl__device_pci_add  failed for "
-             "PCI device %x:%x:%x.%x (rc %d)",
-             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
-             rc);
+        if (pci->name) {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device '%s' (rc %d)",
+                 pci->name,
+                 rc);
+        } else {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device %x:%x:%x.%x (rc %d)",
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                 rc);
+        }
         pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
@@ -2256,6 +2292,23 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &prs->pci.bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     prs->orig_vdev = pci->vdevfn & ~7U;
 
     if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
@@ -2390,6 +2443,10 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
 
+    s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/name-%d", be_path, nr));
+    if (s)
+        pci->name = strdup(s);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2c441142fba6..44bad36f1c4c 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -778,6 +778,7 @@ libxl_pci_bdf = Struct("pci_bdf", [
 
 libxl_device_pci = Struct("device_pci", [
     ("bdf", libxl_pci_bdf),
+    ("name", string),
     ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index a8b6ce542736..543a1f80e99e 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -151,6 +151,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
 {
     const char *ptr = str;
     bool bdf_present = false;
+    bool name_present = false;
     int ret;
 
     /* Attempt to parse 'bdf' as positional parameter */
@@ -193,6 +194,10 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             pcidev->power_mgmt = atoi(val);
         } else if (!strcmp(key, "rdm_policy")) {
             ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else if (!strcmp(key, "name")) {
+            name_present = true;
+            pcidev->name = strdup(val);
+            if (!pcidev->name) ret = ERROR_NOMEM;
         } else {
             XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
             ret = ERROR_INVAL;
@@ -205,7 +210,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             return ret;
     }
 
-    if (!bdf_present)
+    if (!(bdf_present ^ name_present))
         return ERROR_INVAL;
 
     return 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23656.50618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx3-0005hD-DG; Tue, 10 Nov 2020 18:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23656.50618; Tue, 10 Nov 2020 18:01:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx3-0005gt-4d; Tue, 10 Nov 2020 18:01:05 +0000
Received: by outflank-mailman (input) for mailman id 23656;
 Tue, 10 Nov 2020 18:01:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXx2-00059I-He
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ad73c0e-dabd-4023-b44b-84f7a4810b05;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qj-Aw; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoG-0007RC-Mt; Tue, 10 Nov 2020 17:52:00 +0000
X-Inumbo-ID: 6ad73c0e-dabd-4023-b44b-84f7a4810b05
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=IDUE3Ze2GM3LtewxaGuPMMotFNB9D82rFfhrQDyrHpw=; b=CjeDoxgGGn6CoVOACVLTtkla6M
	PHVBkU4R4UW4bIDYK0etIAzmEqkhe/2gLMbW1Rt01Cqe5ivtAZewheANpJXfdr4jUEZbQFB1QPxdu
	wjX0DyBhe6V6hvS1FRZahgWnPb4SzisMp0Kz376nHCKETLrxBVD4kfIr1kGWnWTRn7JQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 11/24] libxl: add libxl_device_pci_assignable_list_free()...
Date: Tue, 10 Nov 2020 17:51:34 +0000
Message-Id: <20201110175147.7067-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to be used by callers of libxl_device_pci_assignable_list().

Currently there is no API for callers of libxl_device_pci_assignable_list()
to free the list. The xl function pciassignable_list() calls
libxl_device_pci_dispose() on each element of the returned list, but
libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
of libxl_device_pci_assignable_list() call libxl_device_pci_init().

This patch adds the new API function, makes sure it is used everywhere and
also modifies libxl_device_pci_assignable_list() to initialize list
entries rather than just zeroing them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl_pci.c         | 14 ++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +--
 tools/xl/xl_pci.c                    |  3 +--
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ee52d3cf7e7e..8225809d94a8 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -457,6 +457,12 @@
  */
 #define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
 
+/*
+ * LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE indicates that the
+ * libxl_device_pci_assignable_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2369,6 +2375,7 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index a69cba793583..b87e121c4d5c 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -438,7 +438,7 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         pcis = new;
         new = pcis + *num;
 
-        memset(new, 0, sizeof(*new));
+        libxl_device_pci_init(new);
         pci_struct_fill(new, dom, bus, dev, func, 0);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
@@ -453,6 +453,16 @@ out:
     return pcis;
 }
 
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    int i;
+
+    for (i = 0; i < num; i++)
+        libxl_device_pci_dispose(&list[i]);
+
+    free(list);
+}
+
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
 static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
@@ -1471,7 +1481,7 @@ static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
             pcis[i].func == pci->func)
             break;
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
     return i != num;
 }
 
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 1181971da4e7..352a00134d70 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -894,9 +894,8 @@ value stub_xl_device_pci_assignable_list(value ctx)
 		Field(list, 1) = temp;
 		temp = list;
 		Store_field(list, 0, Val_device_pci(&c_list[i]));
-		libxl_device_pci_dispose(&c_list[i]);
 	}
-	free(c_list);
+	libxl_device_pci_assignable_list_free(c_list, nb);
 
 	CAMLreturn(list);
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 7c0f102ac7b7..f71498cbb570 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -164,9 +164,8 @@ static void pciassignable_list(void)
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23658.50631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx5-0005n1-W7; Tue, 10 Nov 2020 18:01:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23658.50631; Tue, 10 Nov 2020 18:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx5-0005mn-Ql; Tue, 10 Nov 2020 18:01:07 +0000
Received: by outflank-mailman (input) for mailman id 23658;
 Tue, 10 Nov 2020 18:01:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXx3-00059T-If
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 847e99e8-5812-424b-891d-c75b46627675;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006r6-Nf; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoJ-0007RC-Df; Tue, 10 Nov 2020 17:52:03 +0000
X-Inumbo-ID: 847e99e8-5812-424b-891d-c75b46627675
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=aZN6k9Rot7P+r2V0A51Rm8r3Sa0jlfLO7FycFTaQjfw=; b=bACpfY/tsdE8HZM4KVctTnyLDU
	vVkhYHDa10mzhXlA/tSMzMEMz27X2U9WuusJVKu7ZjIlM3/HXJ1wm5RDpC4lDmOXCGfboRwCb7BTO
	Q4wdIqq/G8sdFVOV0iVL9sq+GxH3j1U59ej5/hD6hKRSpPVcZTXGibYB7grIcVwKNA9c=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 14/24] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Tue, 10 Nov 2020 17:51:37 +0000
Message-Id: <20201110175147.7067-15-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently libxl__device_pci_add_xenstore() is broken in that it does not
update the domain's configuration for the first device added (which causes
creation of the overall backend area in xenstore). This can be easily observed
by running 'xl list -l' after adding a single device: the device will be
missing.

This patch fixes the problem and adds a DEBUG log line to allow easy
verification that the domain configuration is being modified.

NOTE: This patch also includes a whitespace fix in add_pcis_done().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>

v2:
 - Avoid having two completely different ways of adding devices into xenstore
---
 tools/libs/light/libxl_pci.c | 71 +++++++++++-------------------------
 1 file changed, 21 insertions(+), 50 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 06fdc50ce10c..29a450f7e649 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -79,39 +79,18 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
     device->kind = LIBXL__DEVICE_KIND_PCI;
 }
 
-static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pci,
-                                     int num)
+static void libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
+                                      flexarray_t *front, flexarray_t *back)
 {
-    flexarray_t *front = NULL;
-    flexarray_t *back = NULL;
-    libxl__device device;
-    int i;
-
-    front = flexarray_make(gc, 16, 1);
-    back = flexarray_make(gc, 16, 1);
-
     LOGD(DEBUG, domid, "Creating pci backend");
 
-    /* add pci device */
-    libxl__device_from_pci(gc, domid, pci, &device);
-
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
-    flexarray_append_pair(back, "online", "1");
+    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pci++)
-        libxl_create_pci_backend_device(gc, back, i, pci);
-
-    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
     flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
-
-    return libxl__device_generic_add(gc, XBT_NULL, &device,
-                                     libxl__xs_kvs_of_flexarray(gc, back),
-                                     libxl__xs_kvs_of_flexarray(gc, front),
-                                     NULL);
 }
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
@@ -119,7 +98,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           const libxl_device_pci *pci,
                                           bool starting)
 {
-    flexarray_t *back;
+    flexarray_t *front, *back;
     char *num_devs, *be_path;
     int num = 0;
     xs_transaction_t t = XBT_NULL;
@@ -127,16 +106,22 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     libxl_domain_config d_config;
     libxl__flock *lock = NULL;
     bool is_stubdomain = libxl_is_stubdom(CTX, domid, NULL);
+    libxl__device device;
+
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     /* Stubdomain doesn't have own config. */
     if (!is_stubdomain)
         libxl_domain_config_init(&d_config);
 
+    front = flexarray_make(gc, 16, 1);
+    back = flexarray_make(gc, 16, 1);
+
     be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pci, 1);
+        libxl__create_pci_backend(gc, domid, front, back);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -147,20 +132,18 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             return ERROR_FAIL;
     }
 
-    back = flexarray_make(gc, 16, 1);
-
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
-    num = atoi(num_devs);
+    num = num_devs ? atoi(num_devs) : 0;
     libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
-    if (!starting)
+    if (num && !starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
 
     /*
      * Stubdomin config is derived from its target domain, it doesn't have
      * its own file.
      */
-    if (!is_stubdomain) {
+    if (!is_stubdomain && !starting) {
         lock = libxl__lock_domain_userdata(gc, domid);
         if (!lock) {
             rc = ERROR_LOCK_FAIL;
@@ -170,6 +153,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
+        LOGD(DEBUG, domid, "Adding new pci device to config");
         device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
                                  pci);
 
@@ -186,7 +170,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             if (rc) goto out;
         }
 
-        libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
+        libxl__device_generic_add(gc, t, &device,
+                                  libxl__xs_kvs_of_flexarray(gc, back),
+                                  libxl__xs_kvs_of_flexarray(gc, front),
+                                  NULL);
 
         rc = libxl__xs_transaction_commit(gc, &t);
         if (!rc) break;
@@ -1390,7 +1377,7 @@ out_no_irq:
         }
     }
 
-    if (!starting && !libxl_get_stubdom_id(CTX, domid))
+    if (!libxl_get_stubdom_id(CTX, domid))
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
@@ -1699,28 +1686,12 @@ static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
 }
 
 static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
-                             int rc)
+                          int rc)
 {
     EGC_GC;
     add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
-
-    /* Convenience aliases */
-    libxl_domain_config *d_config = apds->d_config;
-    libxl_domid domid = apds->domid;
     libxl__ao_device *aodev = apds->outer_aodev;
 
-    if (rc) goto out;
-
-    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
-                                       d_config->num_pcis);
-        if (rc < 0) {
-            LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
-            goto out;
-        }
-    }
-
-out:
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23665.50643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx8-0005sY-Fm; Tue, 10 Nov 2020 18:01:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23665.50643; Tue, 10 Nov 2020 18:01:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXx8-0005sI-9G; Tue, 10 Nov 2020 18:01:10 +0000
Received: by outflank-mailman (input) for mailman id 23665;
 Tue, 10 Nov 2020 18:01:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXx7-00059I-Hx
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98f21cd0-723e-4b31-82e5-fc74da31e0ed;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006rB-P1; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoI-0007RC-Gp; Tue, 10 Nov 2020 17:52:02 +0000
X-Inumbo-ID: 98f21cd0-723e-4b31-82e5-fc74da31e0ed
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=QafHuUbOEu8fc/Otpqr5mCK9z+T4MUpc57SIzfCCCl0=; b=EVSCFrioLg9TBeLVH7XPvnV3eU
	97GDeyezdb60SiFHiscWZcaA8dLy5krSbH19LGTbsAZEn1wUokUy8U9TdWOnH9diEjAxOQLV8S46x
	ypQtDU2pGN0SEmWZDYWOW6zUNT9E4uxy0T37nknwQvNySw6s9Yq858nFvwgzRERe2GKA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 13/24] libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
Date: Tue, 10 Nov 2020 17:51:36 +0000
Message-Id: <20201110175147.7067-14-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Other parameters, such as 'msitranslate' and 'permissive', are dealt with,
but 'rdm_policy' appears to have been completely missed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 278ebd9f561b..06fdc50ce10c 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
-              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pci->msitranslate, pci->power_mgmt,
-                             pci->permissive));
+              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
+                        pci->msitranslate, pci->power_mgmt,
+                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
@@ -2313,6 +2313,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
             } else if (!strcmp(p, "permissive")) {
                 p = strtok_r(NULL, ",=", &saveptr);
                 pci->permissive = atoi(p);
+            } else if (!strcmp(p, "rdm_policy")) {
+                p = strtok_r(NULL, ",=", &saveptr);
+                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
             }
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23667.50655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXxA-0005xY-Fr; Tue, 10 Nov 2020 18:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23667.50655; Tue, 10 Nov 2020 18:01:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXxA-0005xI-B2; Tue, 10 Nov 2020 18:01:12 +0000
Received: by outflank-mailman (input) for mailman id 23667;
 Tue, 10 Nov 2020 18:01:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXx8-00059T-JO
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1d4a63e-4762-443f-bff6-3bd064cdb104;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qh-95; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoO-0007RC-AP; Tue, 10 Nov 2020 17:52:08 +0000
X-Inumbo-ID: b1d4a63e-4762-443f-bff6-3bd064cdb104
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=y1RGS4HyjvmtOi0l/sT6C5YNGct5R5yiMakE1fxQSLM=; b=eJte9hi6VYTKYWkiSsTOCVX3x8
	x/AEOmMj+VQNbJqLLikEglA7M0f4VmH/GtczPnoX2ah6y3ojI321A4YiAPp/Btwg6sAf+znO7z6pv
	55czTxWacWjOCpgESD8LCVgXLzX8Vk+tMi4OV4i5u3T2s8TmcjF7E7DbQCj5KW+WMqkY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 19/24] libxlu: introduce xlu_pci_parse_spec_string()
Date: Tue, 10 Nov 2020 17:51:42 +0000
Message-Id: <20201110175147.7067-20-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch largely re-writes the code to parse a PCI_SPEC_STRING, which is
now entered via the newly introduced function. The new parser also deals
with 'bdf' and 'vslot' as non-positional parameters, as per the
documentation in xl-pci-configuration(5).

The existing xlu_pci_parse_bdf() function remains, but now strictly parses
BDF values. Some existing callers of xlu_pci_parse_bdf() are
modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).

NOTE: Usage text in xl_cmdtable.c and error messages are also modified
      appropriately.

Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxlutil.h    |   8 +-
 tools/libs/util/libxlu_pci.c | 374 +++++++++++++++++++----------------
 tools/xl/xl_cmdtable.c       |   4 +-
 tools/xl/xl_parse.c          |   4 +-
 tools/xl/xl_pci.c            |  37 ++--
 5 files changed, 230 insertions(+), 197 deletions(-)

diff --git a/tools/include/libxlutil.h b/tools/include/libxlutil.h
index 92e35c546278..cdd6aab4f816 100644
--- a/tools/include/libxlutil.h
+++ b/tools/include/libxlutil.h
@@ -108,10 +108,16 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
    * resulting disk struct is used with libxl.
    */
 
+/*
+ * PCI BDF
+ */
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str);
+
 /*
  * PCI specification parsing
  */
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pci,
+                              const char *str);
 
 /*
  * RDM parsing
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 5c107f264260..a8b6ce542736 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -1,5 +1,7 @@
 #define _GNU_SOURCE
 
+#include <ctype.h>
+
 #include "libxlu_internal.h"
 #include "libxlu_disk_l.h"
 #include "libxlu_disk_i.h"
@@ -9,185 +11,213 @@
 #define XLU__PCI_ERR(_c, _x, _a...) \
     if((_c) && (_c)->report) fprintf((_c)->report, _x, ##_a)
 
-static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
+static int parse_bdf(libxl_pci_bdf *bdfp, uint32_t *vfunc_maskp,
+                     const char *str, const char **endp)
 {
-    unsigned long ret;
-    char *end;
+    const char *ptr = str;
+    unsigned int colons = 0;
+    unsigned int domain, bus, dev, func;
+    int n;
 
-    ret = strtoul(str, &end, 16);
-    if ( end == str || *end != '\0' )
-        return -1;
-    if ( ret & ~mask )
-        return -1;
-    *val = (unsigned int)ret & mask;
-    return 0;
-}
-
-static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
-                           unsigned int bus, unsigned int dev,
-                           unsigned int func, unsigned int vdevfn)
-{
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
-    pci->vdevfn = vdevfn;
-    return 0;
-}
-
-#define STATE_DOMAIN    0
-#define STATE_BUS       1
-#define STATE_DEV       2
-#define STATE_FUNC      3
-#define STATE_VSLOT     4
-#define STATE_OPTIONS_K 6
-#define STATE_OPTIONS_V 7
-#define STATE_TERMINAL  8
-#define STATE_TYPE      9
-#define STATE_RDM_STRATEGY      10
-#define STATE_RESERVE_POLICY    11
-#define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
-{
-    unsigned state = STATE_DOMAIN;
-    unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
-    char *buf2, *tok, *ptr, *end, *optkey = NULL;
-
-    if ( NULL == (buf2 = ptr = strdup(str)) )
-        return ERROR_NOMEM;
-
-    for(tok = ptr, end = ptr + strlen(ptr) + 1; ptr < end; ptr++) {
-        switch(state) {
-        case STATE_DOMAIN:
-            if ( *ptr == ':' ) {
-                state = STATE_BUS;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dom, 0xffff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_BUS:
-            if ( *ptr == ':' ) {
-                state = STATE_DEV;
-                *ptr = '\0';
-                if ( hex_convert(tok, &bus, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }else if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( dom & ~0xff )
-                    goto parse_error;
-                bus = dom;
-                dom = 0;
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_DEV:
-            if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_FUNC:
-            if ( *ptr == '\0' || *ptr == '@' || *ptr == ',' ) {
-                switch( *ptr ) {
-                case '\0':
-                    state = STATE_TERMINAL;
-                    break;
-                case '@':
-                    state = STATE_VSLOT;
-                    break;
-                case ',':
-                    state = STATE_OPTIONS_K;
-                    break;
-                }
-                *ptr = '\0';
-                if ( !strcmp(tok, "*") ) {
-                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
-                }else{
-                    if ( hex_convert(tok, &func, 0x7) )
-                        goto parse_error;
-                    pci->vfunc_mask = (1 << 0);
-                }
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_VSLOT:
-            if ( *ptr == '\0' || *ptr == ',' ) {
-                state = ( *ptr == ',' ) ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( hex_convert(tok, &vslot, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_K:
-            if ( *ptr == '=' ) {
-                state = STATE_OPTIONS_V;
-                *ptr = '\0';
-                optkey = tok;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_V:
-            if ( *ptr == ',' || *ptr == '\0' ) {
-                state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( !strcmp(optkey, "msitranslate") ) {
-                    pci->msitranslate = atoi(tok);
-                }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pci->power_mgmt = atoi(tok);
-                }else if ( !strcmp(optkey, "permissive") ) {
-                    pci->permissive = atoi(tok);
-                }else if ( !strcmp(optkey, "seize") ) {
-                    pci->seize = atoi(tok);
-                } else if (!strcmp(optkey, "rdm_policy")) {
-                    if (!strcmp(tok, "strict")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                    } else if (!strcmp(tok, "relaxed")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                    } else {
-                        XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
-                                          " policy: 'strict' or 'relaxed'.",
-                                     tok);
-                        goto parse_error;
-                    }
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown PCI BDF option: %s", optkey);
-                }
-                tok = ptr + 1;
-            }
-        default:
-            break;
-        }
+    /* Count occurrences of ':' to determine presence/absence of the 'domain' */
+    while (isxdigit(*ptr) || *ptr == ':') {
+        if (*ptr == ':')
+            colons++;
+        ptr++;
     }
 
-    if ( tok != ptr || state != STATE_TERMINAL )
-        goto parse_error;
+    ptr = str;
+    switch (colons) {
+    case 1:
+        domain = 0;
+        if (sscanf(ptr, "%x:%x.%n", &bus, &dev, &n) != 2)
+            return ERROR_INVAL;
+        break;
+    case 2:
+        if (sscanf(ptr, "%x:%x:%x.%n", &domain, &bus, &dev, &n) != 3)
+            return ERROR_INVAL;
+        break;
+    default:
+        return ERROR_INVAL;
+    }
 
-    assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
+    if (domain > 0xffff || bus > 0xff || dev > 0x1f)
+        return ERROR_INVAL;
 
-    /* Just a pretty way to fill in the values */
-    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
+    ptr += n;
+    if (*ptr == '*') {
+        if (!vfunc_maskp)
+            return ERROR_INVAL;
+        *vfunc_maskp = LIBXL_PCI_FUNC_ALL;
+        func = 0;
+        ptr++;
+    } else {
+        if (sscanf(ptr, "%x%n", &func, &n) != 1)
+            return ERROR_INVAL;
+        if (func > 7)
+            return ERROR_INVAL;
+        if (vfunc_maskp)
+            *vfunc_maskp = 1;
+        ptr += n;
+    }
 
-    free(buf2);
+    bdfp->domain = domain;
+    bdfp->bus = bus;
+    bdfp->dev = dev;
+    bdfp->func = func;
+
+    if (endp)
+        *endp = ptr;
 
     return 0;
+}
 
-parse_error:
-    free(buf2);
-    return ERROR_INVAL;
+static int parse_vslot(uint32_t *vdevfnp, const char *str, const char **endp)
+{
+    const char *ptr = str;
+    unsigned int val;
+    int n;
+
+    if (sscanf(ptr, "%x%n", &val, &n) != 1)
+        return ERROR_INVAL;
+
+    if (val > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+
+    *vdevfnp = val << 3;
+
+    if (endp)
+        *endp = ptr;
+
+    return 0;
+}
+
+static int parse_key_val(char **keyp, char **valp, const char *str,
+                         const char **endp)
+{
+    const char *ptr = str;
+    char *key, *val;
+
+    while (*ptr != '=' && *ptr != '\0')
+        ptr++;
+
+    if (*ptr == '\0')
+        return ERROR_INVAL;
+
+    key = strndup(str, ptr - str);
+    if (!key)
+        return ERROR_NOMEM;
+
+    str = ++ptr; /* skip '=' */
+    while (*ptr != ',' && *ptr != '\0')
+        ptr++;
+
+    val = strndup(str, ptr - str);
+    if (!val) {
+        free(key);
+        return ERROR_NOMEM;
+    }
+
+    if (*ptr == ',')
+        ptr++;
+
+    *keyp = key;
+    *valp = val;
+    *endp = ptr;
+
+    return 0;
+}
+
+static int parse_rdm_policy(XLU_Config *cfg, libxl_rdm_reserve_policy *policy,
+                            const char *str)
+{
+    int ret = libxl_rdm_reserve_policy_from_string(str, policy);
+
+    if (ret)
+        XLU__PCI_ERR(cfg, "Unknown RDM policy: %s", str);
+
+    return ret;
+}
+
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str)
+{
+    return parse_bdf(bdf, NULL, str, NULL);
+}
+
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
+                              const char *str)
+{
+    const char *ptr = str;
+    bool bdf_present = false;
+    int ret;
+
+    /* Attempt to parse 'bdf' as a positional parameter */
+    ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, ptr, &ptr);
+    if (!ret) {
+        bdf_present = true;
+
+        /* Check whether 'vslot' is present */
+        if (*ptr == '@') {
+            ret = parse_vslot(&pcidev->vdevfn, ++ptr, &ptr);
+            if (ret)
+                return ret;
+        }
+        if (*ptr == ',')
+            ptr++;
+        else if (*ptr != '\0')
+            return ERROR_INVAL;
+    }
+
+    /* Parse the rest as 'key=val' pairs */
+    while (*ptr != '\0') {
+        char *key, *val;
+
+        ret = parse_key_val(&key, &val, ptr, &ptr);
+        if (ret)
+            return ret;
+
+        if (!strcmp(key, "bdf")) {
+            ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, val, NULL);
+            bdf_present = !ret;
+        } else if (!strcmp(key, "vslot")) {
+            ret = parse_vslot(&pcidev->vdevfn, val, NULL);
+        } else if (!strcmp(key, "permissive")) {
+            pcidev->permissive = atoi(val);
+        } else if (!strcmp(key, "msitranslate")) {
+            pcidev->msitranslate = atoi(val);
+        } else if (!strcmp(key, "seize")) {
+            pcidev->seize = atoi(val);
+        } else if (!strcmp(key, "power_mgmt")) {
+            pcidev->power_mgmt = atoi(val);
+        } else if (!strcmp(key, "rdm_policy")) {
+            ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else {
+            XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
+            ret = ERROR_INVAL;
+        }
+
+        free(key);
+        free(val);
+
+        if (ret)
+            return ret;
+    }
+
+    if (!bdf_present)
+        return ERROR_INVAL;
+
+    return 0;
 }
 
 int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 {
+#define STATE_TYPE           0
+#define STATE_RDM_STRATEGY   1
+#define STATE_RESERVE_POLICY 2
+#define STATE_TERMINAL       3
+
     unsigned state = STATE_TYPE;
     char *buf2, *tok, *ptr, *end;
 
@@ -227,15 +257,8 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
             if (*ptr == ',' || *ptr == '\0') {
                 state = *ptr == ',' ? STATE_TYPE : STATE_TERMINAL;
                 *ptr = '\0';
-                if (!strcmp(tok, "strict")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                } else if (!strcmp(tok, "relaxed")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown RDM property policy value: %s",
-                                 tok);
+                if (parse_rdm_policy(cfg, &rdm->policy, tok))
                     goto parse_error;
-                }
                 tok = ptr + 1;
             }
         default:
@@ -253,6 +276,11 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 parse_error:
     free(buf2);
     return ERROR_INVAL;
+
+#undef STATE_TYPE
+#undef STATE_RDM_STRATEGY
+#undef STATE_RESERVE_POLICY
+#undef STATE_TERMINAL
 }
 
 /*
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927bb..2ee0c4967334 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -90,12 +90,12 @@ struct cmd_spec cmd_table[] = {
     { "pci-attach",
       &main_pciattach, 0, 1,
       "Insert a new pass-through pci device",
-      "<Domain> <BDF> [Virtual Slot]",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-detach",
       &main_pcidetach, 0, 1,
       "Remove a domain's pass-through pci device",
-      "<Domain> <BDF>",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-list",
       &main_pcilist, 0, 0,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 0765780d9f0a..6a4703e745c9 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1487,10 +1487,10 @@ void parse_config_data(const char *config_source,
              * the global policy by default.
              */
             pci->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pci, buf);
+            e = xlu_pci_parse_spec_string(config, pci, buf);
             if (e) {
                 fprintf(stderr,
-                        "unable to parse PCI BDF `%s' for passthrough\n",
+                        "unable to parse PCI_SPEC_STRING `%s' for passthrough\n",
                         buf);
                 exit(-e);
             }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index b6dc7c28401c..9c24496cb2dd 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -55,7 +55,7 @@ int main_pcilist(int argc, char **argv)
     return 0;
 }
 
-static int pcidetach(uint32_t domid, const char *bdf, int force)
+static int pcidetach(uint32_t domid, const char *spec_string, int force)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -66,8 +66,9 @@ static int pcidetach(uint32_t domid, const char *bdf, int force)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-detach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
     if (force) {
@@ -89,7 +90,7 @@ int main_pcidetach(int argc, char **argv)
     uint32_t domid;
     int opt;
     int force = 0;
-    const char *bdf = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
     case 'f':
@@ -98,15 +99,15 @@ int main_pcidetach(int argc, char **argv)
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (pcidetach(domid, bdf, force))
+    if (pcidetach(domid, spec_string, force))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciattach(uint32_t domid, const char *bdf, const char *vs)
+static int pciattach(uint32_t domid, const char *spec_string)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -117,8 +118,9 @@ static int pciattach(uint32_t domid, const char *bdf, const char *vs)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-attach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
 
@@ -135,19 +137,16 @@ int main_pciattach(int argc, char **argv)
 {
     uint32_t domid;
     int opt;
-    const char *bdf = NULL, *vs = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
         /* No options */
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (optind + 1 < argc)
-        vs = argv[optind + 2];
-
-    if (pciattach(domid, bdf, vs))
+    if (pciattach(domid, spec_string))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
@@ -193,8 +192,8 @@ static int pciassignable_add(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
@@ -235,8 +234,8 @@ static int pciassignable_remove(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23676.50667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXxF-00067X-2z; Tue, 10 Nov 2020 18:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23676.50667; Tue, 10 Nov 2020 18:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXxE-000676-RI; Tue, 10 Nov 2020 18:01:16 +0000
Received: by outflank-mailman (input) for mailman id 23676;
 Tue, 10 Nov 2020 18:01:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXxD-00059T-JR
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea4fa62f-4a96-41f1-825d-1f2284041132;
 Tue, 10 Nov 2020 18:00:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006qV-21; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoP-0007RC-Gp; Tue, 10 Nov 2020 17:52:09 +0000
X-Inumbo-ID: ea4fa62f-4a96-41f1-825d-1f2284041132
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=nxLu3+VFZvHeARLYl8MQlR0efkG/dxOmK0XHYLg8RvM=; b=nOiZtMJT8iqRpG/1gSL2eNYbTG
	EykmOXMWPkVswa6gxJyVfydmPxH9mQt2CMLblgysHrYPK/1ot97b/RDzaOv7JMtJN+3lnvvAe0yCo
	XnC8HuhqUwx0a6OONWE4cOplzkes4woUFNvkZZjcchXEqAwRCBOGpbE8JvE19L4b7zOI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 20/24] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
Date: Tue, 10 Nov 2020 17:51:43 +0000
Message-Id: <20201110175147.7067-21-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.

This patch modifies the API and callers accordingly. It also modifies
several internal functions in libxl_pci.c that support the API to also use
'libxl_pci_bdf'.

NOTE: The OCaml bindings are adjusted to accommodate the interface change. It
      should therefore not affect compatibility with OCaml-based utilities.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  15 +-
 tools/libs/light/libxl_pci.c         | 215 +++++++++++++++------------
 tools/ocaml/libs/xl/xenlight_stubs.c |  15 +-
 tools/xl/xl_pci.c                    |  32 ++--
 4 files changed, 157 insertions(+), 120 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5edacccbd1da..5703fdf367c5 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -469,6 +469,13 @@
  */
 #define LIBXL_HAVE_PCI_BDF 1
 
+/*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
+ * libxl_device_pci_assignable_add/remove/list/list_free() functions all
+ * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
+
 /*
  * libxl ABI compatibility
  *
@@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 382c10674313..d436e0a42b0c 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,26 +25,33 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pci_encode_bdf(libxl_device_pci *pci)
+static unsigned int pci_encode_bdf(libxl_pci_bdf *pcibdf)
 {
     unsigned int value;
 
-    value = pci->bdf.domain << 16;
-    value |= (pci->bdf.bus & 0xff) << 8;
-    value |= (pci->bdf.dev & 0x1f) << 3;
-    value |= (pci->bdf.func & 0x7);
+    value = pcibdf->domain << 16;
+    value |= (pcibdf->bus & 0xff) << 8;
+    value |= (pcibdf->dev & 0x1f) << 3;
+    value |= (pcibdf->func & 0x7);
 
     return value;
 }
 
+static void pcibdf_struct_fill(libxl_pci_bdf *pcibdf, unsigned int domain,
+                               unsigned int bus, unsigned int dev,
+                               unsigned int func)
+{
+    pcibdf->domain = domain;
+    pcibdf->bus = bus;
+    pcibdf->dev = dev;
+    pcibdf->func = func;
+}
+
 static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
+    pcibdf_struct_fill(&pci->bdf, domain, bus, dev, func);
     pci->vdevfn = vdevfn;
 }
 
@@ -318,8 +325,8 @@ static bool is_pci_in_array(libxl_device_pci *pcis, int num,
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
+static int sysfs_write_bdf(libxl__gc *gc, const char *sysfs_path,
+                           libxl_pci_bdf *pcibdf)
 {
     int rc, fd;
     char *buf;
@@ -330,8 +337,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-                    pci->bdf.dev, pci->bdf.func);
+    buf = GCSPRINTF(PCI_BDF, pcibdf->domain, pcibdf->bus,
+                    pcibdf->dev, pcibdf->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -346,22 +353,22 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 
 #define PCI_INFO_PATH "/libxl/pci"
 
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_path(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
 }
 
 
-static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+static int pci_info_xs_write(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node, const char *val)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
 
     if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
@@ -369,28 +376,28 @@ static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
     return rc;
 }
 
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_read(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
 
     return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                                const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new;
+    libxl_pci_bdf *pcibdfs = NULL, *new;
     struct dirent *de;
     DIR *dir;
 
@@ -411,15 +418,15 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcibdfs, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcis = new;
-        new = pcis + *num;
+        pcibdfs = new;
+        new = pcibdfs + *num;
 
-        libxl_device_pci_init(new);
-        pci_struct_fill(new, dom, bus, dev, func, 0);
+        libxl_pci_bdf_init(new);
+        pcibdf_struct_fill(new, dom, bus, dev, func);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
             continue;
@@ -430,32 +437,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
     closedir(dir);
 out:
     GC_FREE;
-    return pcis;
+    return pcibdfs;
 }
 
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num)
 {
     int i;
 
     for (i = 0; i < num; i++)
-        libxl_device_pci_dispose(&list[i]);
+        libxl_pci_bdf_dispose(&list[i]);
 
     free(list);
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->bdf.domain,
-                           pci->bdf.bus,
-                           pci->bdf.dev,
-                           pci->bdf.func);
+                           pcibdf->domain,
+                           pcibdf->bus,
+                           pcibdf->dev,
+                           pcibdf->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -469,7 +476,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -607,8 +614,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
+/* Scan through /sys/.../pciback/slots looking for BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     FILE *f;
     int rc = 0;
@@ -621,11 +628,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
         return ERROR_FAIL;
     }
 
-    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->bdf.domain
-            && bus == pci->bdf.bus
-            && dev == pci->bdf.dev
-            && func == pci->bdf.func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
+        if (dom == pcibdf->domain
+            && bus == pcibdf->bus
+            && dev == pcibdf->dev
+            && func == pcibdf->func) {
             rc = 1;
             goto out;
         }
@@ -635,7 +642,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     char * spath;
     int rc;
@@ -651,8 +658,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->bdf.domain, pci->bdf.bus,
-                      pci->bdf.dev, pci->bdf.func);
+                      pcibdf->domain, pcibdf->bus,
+                      pcibdf->dev, pcibdf->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -663,40 +670,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_assign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     int rc;
 
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pcibdf)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcibdf) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pcibdf) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -705,7 +712,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pci,
+                                            libxl_pci_bdf *pcibdf,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -715,10 +722,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->bdf.domain;
-    bus = pci->bdf.bus;
-    dev = pci->bdf.dev;
-    func = pci->bdf.func;
+    dom = pcibdf->domain;
+    bus = pcibdf->bus;
+    dev = pcibdf->dev;
+    func = pcibdf->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -728,7 +735,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
+    rc = pciback_dev_is_assigned(gc, pcibdf);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -738,7 +745,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, &driver_path) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -747,9 +754,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
+            pci_info_xs_write(gc, pcibdf, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
+                     pci_info_xs_read(gc, pcibdf, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -757,10 +764,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
+        pci_info_xs_remove(gc, pcibdf, "driver_path");
     }
 
-    if ( pciback_dev_assign(gc, pci) ) {
+    if ( pciback_dev_assign(gc, pcibdf) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -771,7 +778,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -782,7 +789,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pci,
+                                               libxl_pci_bdf *pcibdf,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -790,24 +797,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-            pci->bdf.dev, pci->bdf.func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcibdf->domain,
+            pcibdf->bus, pcibdf->dev, pcibdf->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pcibdf)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
+        pciback_dev_unassign(gc, pcibdf);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    driver_path = pci_info_xs_read(gc, pcibdf, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -815,12 +822,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
+                                 pcibdf) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_info_xs_remove(gc, pci, "driver_path");
+            pci_info_xs_remove(gc, pcibdf, "driver_path");
         }
     } else {
         if ( rebind ) {
@@ -832,26 +839,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
@@ -1353,7 +1360,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+                             &pci->bdf) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1369,7 +1376,8 @@ out_no_irq:
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(&pci->bdf),
+                             flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1448,15 +1456,28 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static int is_bdf_in_array(libxl_pci_bdf *pcibdfs, int num,
+                           libxl_pci_bdf *pcibdf)
 {
-    libxl_device_pci *pcis;
+    int i;
+
+    for (i = 0; i < num; i++) {
+        if (COMPARE_BDF(pcibdf, &pcibdfs[i]))
+            break;
+    }
+
+    return i < num;
+}
+
+static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
+{
+    libxl_pci_bdf *pcibdfs;
     int num;
     bool assignable;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
-    assignable = is_pci_in_array(pcis, num, pci);
-    libxl_device_pci_assignable_list_free(pcis, num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
+    assignable = is_bdf_in_array(pcibdfs, num, pcibdf);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 
     return assignable;
 }
@@ -1491,7 +1512,8 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
+        rc = xc_test_assign_device(ctx->xch, domid,
+                                   pci_encode_bdf(&pci->bdf));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
@@ -1505,20 +1527,20 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
-        rc = libxl__device_pci_assignable_add(gc, pci, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pci_assignable(ctx, pci)) {
+    if (!is_bdf_assignable(ctx, &pci->bdf)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    rc = pci_info_xs_write(gc, &pci->bdf, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
     libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
@@ -1642,7 +1664,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
-        pci_info_xs_remove(gc, pci, "domid");
+        pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
@@ -2082,7 +2104,8 @@ static void pci_remove_detached(libxl__egc *egc,
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
+        rc = xc_deassign_device(CTX->xch, domid,
+                                pci_encode_bdf(&pci->bdf));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
@@ -2211,7 +2234,7 @@ out:
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
 
-    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+    if (!rc) pci_info_xs_remove(gc, &pci->bdf, "domid");
 
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d70..2388f238697c 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,7 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -861,7 +861,7 @@ value stub_xl_device_pci_assignable_remove(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_remove(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_remove(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -876,7 +876,7 @@ value stub_xl_device_pci_assignable_list(value ctx)
 {
 	CAMLparam1(ctx);
 	CAMLlocal2(list, temp);
-	libxl_device_pci *c_list;
+	libxl_pci_bdf *c_list;
 	int i, nb;
 	uint32_t c_domid;
 
@@ -889,11 +889,18 @@ value stub_xl_device_pci_assignable_list(value ctx)
 
 	list = temp = Val_emptylist;
 	for (i = 0; i < nb; i++) {
+		libxl_device_pci pci;
+
+		libxl_device_pci_init(&pci);
+		libxl_pci_bdf_copy(CTX, &pci.bdf, &c_list[i]);
+
 		list = caml_alloc_small(2, Tag_cons);
 		Field(list, 0) = Val_int(0);
 		Field(list, 1) = temp;
 		temp = list;
-		Store_field(list, 0, Val_device_pci(&c_list[i]));
+		Store_field(list, 0, Val_device_pci(&pci));
+
+		libxl_device_pci_dispose(&pci);
 	}
 	libxl_device_pci_assignable_list_free(c_list, nb);
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 9c24496cb2dd..37708b4eb14d 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -154,19 +154,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcis == NULL )
+    if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
-               pcis[i].bdf.func);
+               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
+               pcibdfs[i].func);
     }
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -183,24 +183,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -225,24 +225,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:01:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:01:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23686.50679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXxJ-0006HJ-MT; Tue, 10 Nov 2020 18:01:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23686.50679; Tue, 10 Nov 2020 18:01:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcXxJ-0006HA-H1; Tue, 10 Nov 2020 18:01:21 +0000
Received: by outflank-mailman (input) for mailman id 23686;
 Tue, 10 Nov 2020 18:01:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l7/2=EQ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kcXxI-00059T-JP
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:01:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f8d5528-962b-432c-b445-876e8bacca13;
 Tue, 10 Nov 2020 18:00:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXwh-0006r2-MD; Tue, 10 Nov 2020 18:00:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcXoN-0007RC-AM; Tue, 10 Nov 2020 17:52:07 +0000
X-Inumbo-ID: 2f8d5528-962b-432c-b445-876e8bacca13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=XnrhltWwemKj7GI9ItMK8J3j+kAN8barQ+gmIHpNfGQ=; b=OHzciZqSa4hFytsCMInMmmww0G
	IeSxxXJktiW7LSrSXbCEcwOFS5TTgI77/MLnVwe8E1qOmPUTEvv9pXKchknjnol6XyyqxbxsMRSVl
	4s7YH1zKM6YJL98/zWF4I4wrqzMVdcs5IkBO1fkTEnaIqr752MLd84N6WP/b5B+WCMLI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 18/24] libxl: introduce 'libxl_pci_bdf' in the idl...
Date: Tue, 10 Nov 2020 17:51:41 +0000
Message-Id: <20201110175147.7067-19-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
References: <20201110175147.7067-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and use in 'libxl_device_pci'

This patch is preparatory work for restricting the type passed to functions
that only require BDF information, rather than passing a 'libxl_device_pci'
structure that is only partially filled. Only the minimal mechanical changes
needed to cope with the structural change are made in this patch; subsequent
patches will adjust the code to make better use of the new type.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/helpers.gen.go |  77 ++++++++++----
 tools/golang/xenlight/types.gen.go   |   8 +-
 tools/include/libxl.h                |   6 ++
 tools/libs/light/libxl_dm.c          |   8 +-
 tools/libs/light/libxl_internal.h    |   3 +-
 tools/libs/light/libxl_pci.c         | 148 +++++++++++++--------------
 tools/libs/light/libxl_types.idl     |  16 +--
 tools/libs/util/libxlu_pci.c         |   8 +-
 tools/xl/xl_pci.c                    |   6 +-
 tools/xl/xl_sxp.c                    |   4 +-
 10 files changed, 167 insertions(+), 117 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index c8605994e7fe..b7230f693c69 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1999,6 +1999,41 @@ xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
  return nil
  }
 
+// NewPciBdf returns an instance of PciBdf initialized with defaults.
+func NewPciBdf() (*PciBdf, error) {
+var (
+x PciBdf
+xc C.libxl_pci_bdf)
+
+C.libxl_pci_bdf_init(&xc)
+defer C.libxl_pci_bdf_dispose(&xc)
+
+if err := x.fromC(&xc); err != nil {
+return nil, err }
+
+return &x, nil}
+
+func (x *PciBdf) fromC(xc *C.libxl_pci_bdf) error {
+ x.Func = byte(xc._func)
+x.Dev = byte(xc.dev)
+x.Bus = byte(xc.bus)
+x.Domain = int(xc.domain)
+
+ return nil}
+
+func (x *PciBdf) toC(xc *C.libxl_pci_bdf) (err error){defer func(){
+if err != nil{
+C.libxl_pci_bdf_dispose(xc)}
+}()
+
+xc._func = C.uint8_t(x.Func)
+xc.dev = C.uint8_t(x.Dev)
+xc.bus = C.uint8_t(x.Bus)
+xc.domain = C.int(x.Domain)
+
+ return nil
+ }
+
 // NewDevicePci returns an instance of DevicePci initialized with defaults.
 func NewDevicePci() (*DevicePci, error) {
 var (
@@ -2014,10 +2049,9 @@ return nil, err }
 return &x, nil}
 
 func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
- x.Func = byte(xc._func)
-x.Dev = byte(xc.dev)
-x.Bus = byte(xc.bus)
-x.Domain = int(xc.domain)
+ if err := x.Bdf.fromC(&xc.bdf);err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 x.Vdevfn = uint32(xc.vdevfn)
 x.VfuncMask = uint32(xc.vfunc_mask)
 x.Msitranslate = bool(xc.msitranslate)
@@ -2033,10 +2067,9 @@ if err != nil{
 C.libxl_device_pci_dispose(xc)}
 }()
 
-xc._func = C.uint8_t(x.Func)
-xc.dev = C.uint8_t(x.Dev)
-xc.bus = C.uint8_t(x.Bus)
-xc.domain = C.int(x.Domain)
+if err := x.Bdf.toC(&xc.bdf); err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 xc.vdevfn = C.uint32_t(x.Vdevfn)
 xc.vfunc_mask = C.uint32_t(x.VfuncMask)
 xc.msitranslate = C.bool(x.Msitranslate)
@@ -2766,13 +2799,13 @@ if err := x.Nics[i].fromC(&v); err != nil {
 return fmt.Errorf("converting field Nics: %v", err) }
 }
 }
-x.Pcidevs = nil
-if n := int(xc.num_pcidevs); n > 0 {
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
-x.Pcidevs = make([]DevicePci, n)
-for i, v := range cPcidevs {
-if err := x.Pcidevs[i].fromC(&v); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err) }
+x.Pcis = nil
+if n := int(xc.num_pcis); n > 0 {
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:n:n]
+x.Pcis = make([]DevicePci, n)
+for i, v := range cPcis {
+if err := x.Pcis[i].fromC(&v); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err) }
 }
 }
 x.Rdms = nil
@@ -2922,13 +2955,13 @@ return fmt.Errorf("converting field Nics: %v", err)
 }
 }
 }
-if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
-xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs)*C.sizeof_libxl_device_pci))
-xc.num_pcidevs = C.int(numPcidevs)
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
-for i,v := range x.Pcidevs {
-if err := v.toC(&cPcidevs[i]); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err)
+if numPcis := len(x.Pcis); numPcis > 0 {
+xc.pcis = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcis)*C.sizeof_libxl_device_pci))
+xc.num_pcis = C.int(numPcis)
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:numPcis:numPcis]
+for i,v := range x.Pcis {
+if err := v.toC(&cPcis[i]); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err)
 }
 }
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b4c5df0f2c5c..bc62ae8ce9d1 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -707,11 +707,15 @@ ColoCheckpointHost string
 ColoCheckpointPort string
 }
 
-type DevicePci struct {
+type PciBdf struct {
 Func byte
 Dev byte
 Bus byte
 Domain int
+}
+
+type DevicePci struct {
+Bdf PciBdf
 Vdevfn uint32
 VfuncMask uint32
 Msitranslate bool
@@ -896,7 +900,7 @@ CInfo DomainCreateInfo
 BInfo DomainBuildInfo
 Disks []DeviceDisk
 Nics []DeviceNic
-Pcidevs []DevicePci
+Pcis []DevicePci
 Rdms []DeviceRdm
 Dtdevs []DeviceDtdev
 Vfbs []DeviceVfb
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 8225809d94a8..5edacccbd1da 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -463,6 +463,12 @@
  */
 #define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
 
+/*
+ * LIBXL_HAVE_PCI_BDF indicates that the 'libxl_pci_bdf' type is defined
+ * and is embedded in the 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_BDF 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 8ebe1b60c9d7..a25bf23834a2 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -472,10 +472,10 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcis[i].domain;
-        bus = d_config->pcis[i].bus;
-        devfn = PCI_DEVFN(d_config->pcis[i].dev,
-                          d_config->pcis[i].func);
+        seg = d_config->pcis[i].bdf.domain;
+        bus = d_config->pcis[i].bdf.bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].bdf.dev,
+                          d_config->pcis[i].bdf.func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 80d798862229..e1cb8404abab 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4744,10 +4744,11 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+#define COMPARE_BDF(a, b) ((a)->domain == (b)->domain && \
                            (a)->bus == (b)->bus &&       \
                            (a)->dev == (b)->dev &&       \
                            (a)->func == (b)->func)
+#define COMPARE_PCI(a, b) COMPARE_BDF(&((a)->bdf), &((b)->bdf))
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 29a450f7e649..382c10674313 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,10 +29,10 @@ static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pci->domain << 16;
-    value |= (pci->bus & 0xff) << 8;
-    value |= (pci->dev & 0x1f) << 3;
-    value |= (pci->func & 0x7);
+    value = pci->bdf.domain << 16;
+    value |= (pci->bdf.bus & 0xff) << 8;
+    value |= (pci->bdf.dev & 0x1f) << 3;
+    value |= (pci->bdf.func & 0x7);
 
     return value;
 }
@@ -41,10 +41,10 @@ static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
 }
 
@@ -54,9 +54,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     if (pci->vdevfn)
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
@@ -218,8 +218,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pci->domain && bus == pci->bus &&
-            pci->dev == dev && pci->func == func) {
+        if (domain == pci->bdf.domain && bus == pci->bdf.bus &&
+            pci->bdf.dev == dev && pci->bdf.func == func) {
             break;
         }
     }
@@ -330,8 +330,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
+    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+                    pci->bdf.dev, pci->bdf.func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -351,10 +351,10 @@ static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 }
 
 
@@ -452,10 +452,10 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
+                           pci->bdf.domain,
+                           pci->bdf.bus,
+                           pci->bdf.dev,
+                           pci->bdf.func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -485,7 +485,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -493,7 +493,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -501,7 +501,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -512,7 +512,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -520,7 +520,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -528,7 +528,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -539,14 +539,14 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pci->domain, pci->bus, pci->dev, pci->func);
+                     pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -555,7 +555,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
     }
 
@@ -622,10 +622,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
+        if (dom == pci->bdf.domain
+            && bus == pci->bdf.bus
+            && dev == pci->bdf.dev
+            && func == pci->bdf.func) {
             rc = 1;
             goto out;
         }
@@ -651,8 +651,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus,
+                      pci->bdf.dev, pci->bdf.func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -715,10 +715,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
+    dom = pci->bdf.domain;
+    bus = pci->bdf.bus;
+    dev = pci->bdf.dev;
+    func = pci->bdf.func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -792,8 +792,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
-            pci->dev, pci->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+            pci->bdf.dev, pci->bdf.func);
         return ERROR_FAIL;
     }
 
@@ -882,11 +882,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigne
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pci->domain != dom )
+        if ( pci->bdf.domain != dom )
             continue;
-        if ( pci->bus != bus )
+        if ( pci->bdf.bus != bus )
             continue;
-        if ( pci->dev != dev )
+        if ( pci->bdf.dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -935,13 +935,13 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
     if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pci->domain, pci->bus, pci->dev,
-                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->bdf.domain, pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->vdevfn, pci->msitranslate,
                          pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pci->domain,  pci->bus, pci->dev,
-                         pci->func, pci->msitranslate, pci->power_mgmt);
+                         pci->bdf.domain,  pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1100,10 +1100,10 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+                           "%04x:%02x:%02x.%01x", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
                                PCI_SLOT(pci->vdevfn),
@@ -1191,7 +1191,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1282,8 +1282,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1323,8 +1323,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                                pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1495,7 +1495,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pci->domain, pci->bus, pci->dev, pci->func,
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
@@ -1513,7 +1513,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1521,7 +1521,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
-    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+    libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
@@ -1602,13 +1602,13 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
         pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pci->func);
+        pfunc_mask = (1 << pci->bdf.func);
     }
 
     for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 /* if not passing through multiple devices in a block make
@@ -1640,7 +1640,7 @@ static void device_pci_add_done(libxl__egc *egc,
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pci->domain, pci->bus, pci->dev, pci->func,
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
@@ -1709,8 +1709,8 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
-                     pci->bus, pci->dev, pci->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->bdf.domain,
+                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
@@ -1824,8 +1824,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                                     pci->bus, pci->dev, pci->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1860,8 +1860,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                               pci->bus, pci->dev, pci->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                               pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1925,7 +1925,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -1994,7 +1994,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2045,7 +2045,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
+         PCI_PT_QDEV_ID, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2078,7 +2078,7 @@ static void pci_remove_detached(libxl__egc *egc,
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
-        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+        libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     }
 
     if (!isstubdom) {
@@ -2166,7 +2166,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
         }
         pci->vfunc_mask &= prs->pfunc_mask;
     } else {
-        prs->pfunc_mask = (1 << pci->func);
+        prs->pfunc_mask = (1 << pci->bdf.func);
     }
 
     rc = 0;
@@ -2194,7 +2194,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 pci->vdevfn = orig_vdev;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 20f8dd7cfa5d..2c441142fba6 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -769,18 +769,22 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_pci_bdf = Struct("pci_bdf", [
+    ("func", uint8),
+    ("dev", uint8),
+    ("bus", uint8),
+    ("domain", integer),
+    ])
+
 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
     ("power_mgmt", bool),
     ("permissive", bool),
     ("seize", bool),
-    ("rdm_policy",      libxl_rdm_reserve_policy),
+    ("rdm_policy", libxl_rdm_reserve_policy),
     ])
 
 libxl_device_rdm = Struct("device_rdm", [
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 1d38fffce357..5c107f264260 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -27,10 +27,10 @@ static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                            unsigned int bus, unsigned int dev,
                            unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
     return 0;
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f71498cbb570..b6dc7c28401c 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -34,7 +34,8 @@ static void pcilist(uint32_t domid)
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_list_free(pcis, num);
 }
@@ -163,7 +164,8 @@ static void pciassignable_list(void)
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_assignable_list_free(pcis, num);
 }
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index b03e348ffb9a..95180b60df5b 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -194,8 +194,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcis[i].domain, d_config->pcis[i].bus,
-               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].bdf.domain, d_config->pcis[i].bdf.bus,
+               d_config->pcis[i].bdf.dev, d_config->pcis[i].bdf.func,
                d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
                d_config->pcis[i].msitranslate,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:19:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:19:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23784.50761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYEo-0008WJ-AR; Tue, 10 Nov 2020 18:19:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23784.50761; Tue, 10 Nov 2020 18:19:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYEo-0008WC-76; Tue, 10 Nov 2020 18:19:26 +0000
Received: by outflank-mailman (input) for mailman id 23784;
 Tue, 10 Nov 2020 18:19:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJgt=EQ=infradead.org=rdunlap@srs-us1.protection.inumbo.net>)
 id 1kcYEl-0008W7-OU
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:19:25 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f09500a4-dd4a-40d3-b43a-4b66aa72fd20;
 Tue, 10 Nov 2020 18:19:20 +0000 (UTC)
Received: from [2601:1c0:6280:3f0::662d]
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kcYES-0001S3-4s; Tue, 10 Nov 2020 18:19:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description;
	bh=Ct6aE0Bm8FKKbzyR0cHse7bKKnoXTwBFYl7S9+AHa7M=; b=RFBBplucvygNQ6kLC7amRs2zQ2
	jtuZiTjE7qGN10yc9ZPOrch35q8ioF7bOSvZrVS3CaHAbmRER2RiMQ05VaJEbNKHvh/5BR0O541V2
	Gca/C1mlMqiRD0ABv+Mep2UJ4C1tdOAmjsGmAFXtgHMbf/ZnmKYTa/b/tE++H7sZr3i6NPxdJiFLc
	zZXogcHmMGNFpI3oq7mSneRkgpFUNihyVjwMuW9UXJ6iN7LW82vA/iOo7qS8rBiRYbxtO9DblFakc
	GUdAR8qdE3T1a6EbsoAshklApxvl/ps5eNoUhe4tNlZDdG1fN96lpd4jIJXkVB/fkaHCdXIfsK4xm
	+2iXQ7/A==;
Subject: Re: Duplicated ABI entries - Was: Re: [PATCH v2 20/39] docs: ABI:
 testing: make the files compatible with ReST output
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
 Jonathan Cameron <jic23@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Fabrice Gasnier <fabrice.gasnier@st.com>,
 Linux Doc Mailing List <linux-doc@vger.kernel.org>,
 "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, =?UTF-8?Q?Javier_Gonz=c3=a1lez?=
 <javier@javigon.com>, Jonathan Corbet <corbet@lwn.net>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 "Rafael J. Wysocki" <rjw@rjwysocki.net>,
 Alexander Shishkin <alexander.shishkin@linux.intel.com>,
 Alexandre Belloni <alexandre.belloni@bootlin.com>,
 Alexandre Torgue <alexandre.torgue@st.com>,
 Andrew Donnellan <ajd@linux.ibm.com>,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Baolin Wang <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Bruno Meneguele <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>,
 Dan Murphy <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>,
 Enric Balletbo i Serra <enric.balletbo@collabora.com>,
 Felipe Balbi <balbi@kernel.org>, Frederic Barrat <fbarrat@linux.ibm.com>,
 Guenter Roeck <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>,
 Heikki Krogerus <heikki.krogerus@linux.intel.com>,
 Jens Axboe <axboe@kernel.dk>, Johannes Thumshirn
 <johannes.thumshirn@wdc.com>, Juergen Gross <jgross@suse.com>,
 Konstantin Khlebnikov <koct9i@gmail.com>,
 Kranthi Kuntala <kranthi.kuntala@intel.com>,
 Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
 Lars-Peter Clausen <lars@metafoo.de>, Len Brown <lenb@kernel.org>,
 Leonid Maksymchuk <leonmaxx@gmail.com>,
 Ludovic Desroches <ludovic.desroches@microchip.com>,
 Mario Limonciello <mario.limonciello@dell.com>,
 Mark Gross <mgross@linux.intel.com>,
 Maxime Coquelin <mcoquelin.stm32@gmail.com>,
 Michael Ellerman <mpe@ellerman.id.au>,
 Mika Westerberg <mika.westerberg@linux.intel.com>,
 Mike Kravetz <mike.kravetz@oracle.com>, Mimi Zohar <zohar@linux.ibm.com>,
 Nayna Jain <nayna@linux.ibm.com>, Nicolas Ferre
 <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>,
 Oded Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>,
 Orson Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>,
 Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
 Peter Meerwald-Stadler <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>,
 Petr Mladek <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>,
 Richard Cochran <richardcochran@gmail.com>,
 Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
 Thomas Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>,
 Vaibhav Jain <vaibhav@linux.ibm.com>,
 Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
 Vishal Verma <vishal.l.verma@intel.com>, linux-acpi@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-iio@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org,
 Jonathan Cameron <Jonathan.Cameron@huawei.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
 <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
 <5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
 <20201030110925.3e09d59e@coco.lan>
 <cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
 <20201102124641.GA881895@kroah.com> <20201102154250.45bee17f@coco.lan>
 <20201108165621.4d0da3f4@archlinux> <20201110082658.2edc1ab5@coco.lan>
From: Randy Dunlap <rdunlap@infradead.org>
Message-ID: <aa855d9a-4fc6-2b64-b6b7-69409af3f9d0@infradead.org>
Date: Tue, 10 Nov 2020 10:18:48 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201110082658.2edc1ab5@coco.lan>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/9/20 11:26 PM, Mauro Carvalho Chehab wrote:
> Hi Jonathan,
> 
> Let's view ABI from the PoV of a system admin who doesn't yet know
> about a certain ABI symbol.
> 
> He'll try to search for the symbol, most likely using the HTML
> documentation. Only very senior system admins might try to take
> a look at the Kernel.

FWIW, I think that the likely search methods are $search_engine
and 'grep'.

Have a good few days off.

-- 
~Randy



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:49:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:49:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23800.50773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYi5-0002vG-QY; Tue, 10 Nov 2020 18:49:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23800.50773; Tue, 10 Nov 2020 18:49:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYi5-0002v9-Lm; Tue, 10 Nov 2020 18:49:41 +0000
Received: by outflank-mailman (input) for mailman id 23800;
 Tue, 10 Nov 2020 18:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DtfH=EQ=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kcYi4-0002v1-2X
 for xen-devel@lists.xen.org; Tue, 10 Nov 2020 18:49:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd7d08b3-6399-4b24-9c5c-205d0cb12edd;
 Tue, 10 Nov 2020 18:01:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kcXxR-0006sg-TN; Tue, 10 Nov 2020 18:01:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kcXxR-0000JO-SU; Tue, 10 Nov 2020 18:01:29 +0000
X-Inumbo-ID: fd7d08b3-6399-4b24-9c5c-205d0cb12edd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=BtKbQ6ISTsFEXn7C1Ku+rIn2zxX1S7UJ7B2b2oEnYSM=; b=Fml14f3740oyjSS2Jopq9kzmf2
	SycV3/wl3jg5IoYbOvFR32woQZ79N6RpwVMSX0t4ge/gr7aesZ6x03bY2uAfc0cnunrvub8hj5Bqn
	bSUBXG2hzU2FomHrmLaaz3zUyYulz1rL8V3H3RSgu6PVI2tGxE+Y9Pv9xjkXOlHGyTBE=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 351 v1 - Information leak via power sidechannel
Message-Id: <E1kcXxR-0000JO-SU@xenbits.xenproject.org>
Date: Tue, 10 Nov 2020 18:01:29 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-351

                 Information leak via power sidechannel

ISSUE DESCRIPTION
=================

Researchers have demonstrated that software power/energy monitoring
interfaces can be used to create covert channels, and to infer the
operations and data used by other contexts within the system.

Access to these interfaces should be restricted to privileged software,
but Xen was found not to restrict access suitably, leaving the
interfaces accessible to all guests.

For more information, see:
  https://platypusattack.com
  https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00389.html

IMPACT
======

An unprivileged guest administrator can sample platform power/energy
data.  This may be used to infer the operations and data used by other
contexts within the system.

The research demonstrates the use of this sidechannel to leak AES keys
used elsewhere in the system.

VULNERABLE SYSTEMS
==================

Power/energy monitoring interfaces are platform and architecture
specific.  Consult your hardware vendor to ascertain what power feedback
interfaces are available.

For ARM systems, all versions of Xen are vulnerable.  The fix restricts
access to the AMU (Activity Monitors Unit) interface, introduced in
Armv8.4.

For x86 systems, Xen 4.14 and earlier are vulnerable - master is not
vulnerable, as these issues have been addressed in a more general
fashion.

The x86 fixes restrict access to:
 * Intel RAPL interface, introduced in SandyBridge CPUs.
 * Intel platform energy interface.
 * Intel perf_ctl interface, introduced in Pentium 4 CPUs and also
   implemented by other vendors.
 * AMD RAPL interface, introduced in Ryzen/EPYC CPUs.
 * AMD compute unit energy interface, present in Fam15/16 CPUs.

MITIGATION
==========

There are no mitigations available.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa351-arm.patch             Xen unstable - 4.12.x [ARM]
xsa351-arm-4.11.patch        Xen 4.11.x - 4.10.x   [ARM]
xsa351-x86-4.14-?.patch      Xen 4.14.x            [x86]
xsa351-x86-4.13-?.patch      Xen 4.13.x            [x86]
xsa351-x86-4.12-?.patch      Xen 4.12.x            [x86]
xsa351-x86-4.11-?.patch      Xen 4.11.x - 4.10.x   [x86]

$ sha256sum xsa351*
cad287981a870f13484834fa2364ffee68178517e906f55d2889304a4a9eae06  xsa351.meta
70ebd0e93af240af2680374dcfd8ff4a5dd3eefccf670f1cb9b546d763d6a554  xsa351-arm.patch
49b52a1366912a29e184e3014a9f1f579e8a0dd8a36f01d38d995d2c8ed81928  xsa351-arm-4.11.patch
2e7b7c2b98625d70c8b10047a9f668372f3ccede167344dedb712312606acbca  xsa351-x86-4.11-1.patch
ab9e2cb7d5e3e0c3a916f006c697495f4f01146e09df60ece59ce0a8f7aa5ed0  xsa351-x86-4.11-2.patch
bb68f6e6905bc1566156cafab058cbaf02a17c197385c33a83b7f73885913c1c  xsa351-x86-4.12-1.patch
53f464269f59498f8a9a614f10a47cfb1d81c666f0d684346e28005015de962c  xsa351-x86-4.12-2.patch
67a29d66230faafd9a8047ac80ec18130b5659e80a38c3a412cb2be6d3288a8f  xsa351-x86-4.13-1.patch
f7d8717dec33ee7484b36490402d113f1e7e168e7541bcf193fef620df299f08  xsa351-x86-4.13-2.patch
7d4fbe11a766226d7f1b93c5bf34664d8855deee09d1feebc76f11e49f2aa9c9  xsa351-x86-4.14-1.patch
41df825deafe3ef28e8594ec956033689af69f84a4a6dd92f97d1071e925203d  xsa351-x86-4.14-2.patch
$

NOTE REGARDING LACK OF EMBARGO
==============================

Despite an attempt to organise predisclosure, the discoverers
ultimately did not authorise one.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+q1WwMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZANkH+wf8pft4t9KoC9HFxd96DfCjZ+FQnD0hMp+890cY
ztNJM4+o+SBP2ytEMZLIoN1oJeTSQqyNgQh2sXNm7/WpseklOTR6s8zw4LWATEfz
rqF8G2xIN8ka7AAqAwOzkzj6qlxuWbiXKm4ENd5ocRxVvF1A2PYyEX88uCPgmupg
dqfufhYQF7hrz8VKDRDYtLsMrRaIFCWqGdOdQfVF64pHGHLvGZkANGN8yva8mBfC
uavwvX+O3CdVMENS4AA3TNo6p2nnWp1iQJCiBwLGCRbTQaRtRucV4Q/eSLC3pHLp
NO26OxieT4tLJN7Ox4ex43KZIsyweZSaUl18rfg0J8MB3FM=
=/6Fo
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa351.meta"
Content-Disposition: attachment; filename="xsa351.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTEsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI3OGQ5MDNlOTVlZmM1YjAxNjZiMzkzZDI4OWE2ODdjNjQw
MTZlOGVmIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1MS14ODYtNC4xMS0/LnBh
dGNoIiwKICAgICAgICAgICAgInhzYTM1MS1hcm0tNC4xMS5wYXRjaCIKICAg
ICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6
IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAg
ICAgICJTdGFibGVSZWYiOiAiZTI3NGM4YmRjMTJlYjU5NmU1NTIzMzA0MGU4
YjQ5ZGEyNzE1MGYzMSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNTEteDg2LTQu
MTEtPy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzNTEtYXJtLTQuMTEucGF0
Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAg
IjQuMTIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7
CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjk3YjdiNTU2N2ZiYTY5MThhNjU2
YWQzNDkwNTFiNTM0M2I1ZGVhMmUiLAogICAgICAgICAgIlByZXJlcXMiOiBb
XSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzUx
LXg4Ni00LjEyLT8ucGF0Y2giLAogICAgICAgICAgICAieHNhMzUxLWFybS5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMyI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDA2MGFjMjliY2JkYjc2ZDQ5
ZDJlMjQ4ZGRmY2I3YWZhMjM0NTQ0MCIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NTEteDg2LTQuMTMtPy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzNTEtYXJt
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwK
ICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVu
IjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxMGJiNjNjMjAzZjQyZDkz
MWZhMWZhN2RiYmFlN2NlMTc2NWNlY2YyIiwKICAgICAgICAgICJQcmVyZXFz
IjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhz
YTM1MS14ODYtNC4xNC0/LnBhdGNoIiwKICAgICAgICAgICAgInhzYTM1MS1h
cm0ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNzA1NmYyZjg5ZjAz
ZjJmODA0YWM3ZTc3NmM3YjJiMDAwY2Q3MTZjZCIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCgkgICAgICAgICAg
ICAgICJ4c2EzNTEtYXJtLnBhdGNoIgoJCSAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa351-arm.patch"
Content-Disposition: attachment; filename="xsa351-arm.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KU3ViamVj
dDogeGVuL2FybTogQWx3YXlzIHRyYXAgQU1VIHN5c3RlbSByZWdpc3RlcnMK
ClRoZSBBY3Rpdml0eSBNb25pdG9ycyBVbml0IChBTVUpIGhhcyBiZWVuIGlu
dHJvZHVjZWQgYnkgQVJNdjguNC4gSXQgaXMKY29uc2lkZXJlZCB0byBiZSB1
bnNhZmUgdG8gYmUgZXhwb3NlIHRvIGd1ZXN0cyBhcyB0aGV5IG1pZ2h0IGV4
cG9zZQppbmZvcm1hdGlvbiBhYm91dCBjb2RlIGV4ZWN1dGVkIGJ5IG90aGVy
IGd1ZXN0cyBvciB0aGUgaG9zdC4KCkFybSBwcm92aWRlZCBhIHdheSB0byB0
cmFwIGFsbCB0aGUgQU1VIHN5c3RlbSByZWdpc3RlcnMgYnkgc2V0dGluZwpD
UFRSX0VMMi5UQU0gdG8gMS4KClVuZm9ydHVuYXRlbHksIG9uIG9sZGVyIHJl
dmlzaW9uIG9mIHRoZSBzcGVjaWZpY2F0aW9uLCB0aGUgYml0IDMwIChub3cK
Q1BUUl9FTDEuVEFNKSB3YXMgUkVTMC4gQmVjYXVzZSBvZiB0aGF0LCBYZW4g
aXMgc2V0dGluZyBpdCB0byAwIGFuZAp0aGVyZWZvcmUgdGhlIHN5c3RlbSBy
ZWdpc3RlcnMgd291bGQgYmUgZXhwb3NlZCB0byB0aGUgZ3Vlc3Qgd2hlbiBp
dCBpcwpydW4gb24gcHJvY2Vzc29ycyB3aXRoIEFNVS4KCkFzIHRoZSBiaXQg
aXMgbWFyayBhcyBVTktOT1dOIGF0IGJvb3QgaW4gQXJtdjguNCwgdGhlIG9u
bHkgc2FmZSBzb2x1dGlvbgpmb3IgdXMgaXMgdG8gYWx3YXlzIHNldCBDUFRS
X0VMMS5UQU0gdG8gMS4KCkd1ZXN0IHRyeWluZyB0byBhY2Nlc3MgdGhlIEFN
VSBzeXN0ZW0gcmVnaXN0ZXJzIHdpbGwgbm93IHJlY2VpdmUgYW4KdW5kZWZp
bmVkIGluc3RydWN0aW9uLiBVbmZvcnR1bmF0ZWx5LCB0aGlzIG1lYW5zIHRo
YXQgZXZlbiB3ZWxsLWJlaGF2ZWQKZ3Vlc3QgbWF5IGZhaWwgdG8gYm9vdCBi
ZWNhdXNlIHdlIGRvbid0IHNhbml0aXplIHRoZSBJRCByZWdpc3RlcnMuCgpU
aGlzIGlzIGEga25vd24gaXNzdWVzIHdpdGggb3RoZXIgQXJtdjguMCsgZmVh
dHVyZXMgKGUuZy4gU1ZFLCBQb2ludGVyCkF1dGgpLiBUaGlzIHdpbGwgdGFr
ZW4gY2FyZSBzZXBhcmF0ZWx5LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNTEg
KG9yIFhTQS05MyByZS1ib3JuKS4KClNpZ25lZC1vZmYtYnk6IEp1bGllbiBH
cmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBBbmRyZSBQ
cnp5d2FyYSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT4KUmV2aWV3ZWQtYnk6
IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4K
UmV2aWV3ZWQtYnk6IEJlcnRyYW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1
aXNAYXJtLmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vdHJhcHMu
YyBiL3hlbi9hcmNoL2FybS90cmFwcy5jCmluZGV4IGEzNmYxNDVlNjcuLjIy
YmQxYmQ0YzYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL2FybS90cmFwcy5jCisr
KyBiL3hlbi9hcmNoL2FybS90cmFwcy5jCkBAIC0xNTEsNyArMTUxLDggQEAg
dm9pZCBpbml0X3RyYXBzKHZvaWQpCiAgICAgICogT24gQVJNNjQgdGhlIFRD
UHggYml0cyB3aGljaCB3ZSBzZXQgaGVyZSAoMC4uOSwxMiwxMykgYXJlIGFs
bAogICAgICAqIFJFUzEsIGkuZS4gdGhleSB3b3VsZCB0cmFwIHdoZXRoZXIg
d2UgZGlkIHRoaXMgd3JpdGUgb3Igbm90LgogICAgICAqLwotICAgIFdSSVRF
X1NZU1JFRygoSENQVFJfQ1BfTUFTSyAmIH4oSENQVFJfQ1AoMTApIHwgSENQ
VFJfQ1AoMTEpKSkgfCBIQ1BUUl9UVEEsCisgICAgV1JJVEVfU1lTUkVHKChI
Q1BUUl9DUF9NQVNLICYgfihIQ1BUUl9DUCgxMCkgfCBIQ1BUUl9DUCgxMSkp
KSB8CisgICAgICAgICAgICAgICAgIEhDUFRSX1RUQSB8IEhDUFRSX1RBTSwK
ICAgICAgICAgICAgICAgICAgQ1BUUl9FTDIpOwogCiAgICAgLyoKZGlmZiAt
LWdpdCBhL3hlbi9pbmNsdWRlL2FzbS1hcm0vcHJvY2Vzc29yLmggYi94ZW4v
aW5jbHVkZS9hc20tYXJtL3Byb2Nlc3Nvci5oCmluZGV4IDNjYTY3ZjgxNTcu
LmQzZDEyYTlkMTkgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS1hcm0v
cHJvY2Vzc29yLmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLWFybS9wcm9jZXNz
b3IuaApAQCAtMzUxLDYgKzM1MSw3IEBACiAjZGVmaW5lIFZUQ1JfUkVTMSAg
ICAgICAoX0FDKDEsVUwpPDwzMSkKIAogLyogSENQVFIgSHlwLiBDb3Byb2Nl
c3NvciBUcmFwIFJlZ2lzdGVyICovCisjZGVmaW5lIEhDUFRSX1RBTSAgICAg
ICAoKF9BQygxLFUpPDwzMCkpCiAjZGVmaW5lIEhDUFRSX1RUQSAgICAgICAo
KF9BQygxLFUpPDwyMCkpICAgICAgICAvKiBUcmFwIHRyYWNlIHJlZ2lzdGVy
cyAqLwogI2RlZmluZSBIQ1BUUl9DUCh4KSAgICAgKChfQUMoMSxVKTw8KHgp
KSkgICAgICAgLyogVHJhcCBDb3Byb2Nlc3NvciB4ICovCiAjZGVmaW5lIEhD
UFRSX0NQX01BU0sgICAoKF9BQygxLFUpPDwxNCktMSkK

--=separator
Content-Type: application/octet-stream; name="xsa351-arm-4.11.patch"
Content-Disposition: attachment; filename="xsa351-arm-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbSBiZGJkNjZjYjliYTE3ZGQxYTcyMjFmMmE1NjFmNDVhODM2ZjEyZjY0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBUdWUsIDEwIE5vdiAyMDIwIDE3
OjA4OjMyICswMDAwClN1YmplY3Q6IFtQQVRDSF0geGVuL2FybTogQWx3YXlz
IHRyYXAgQU1VIHN5c3RlbSByZWdpc3RlcnMKClRoZSBBY3Rpdml0eSBNb25p
dG9ycyBVbml0IChBTVUpIGhhcyBiZWVuIGludHJvZHVjZWQgYnkgQVJNdjgu
NC4gSXQgaXMKY29uc2lkZXJlZCB0byBiZSB1bnNhZmUgdG8gYmUgZXhwb3Nl
IHRvIGd1ZXN0cyBhcyB0aGV5IG1pZ2h0IGV4cG9zZQppbmZvcm1hdGlvbiBh
Ym91dCBjb2RlIGV4ZWN1dGVkIGJ5IG90aGVyIGd1ZXN0cyBvciB0aGUgaG9z
dC4KCkFybSBwcm92aWRlZCBhIHdheSB0byB0cmFwIGFsbCB0aGUgQU1VIHN5
c3RlbSByZWdpc3RlcnMgYnkgc2V0dGluZwpDUFRSX0VMMi5UQU0gdG8gMS4K
ClVuZm9ydHVuYXRlbHksIG9uIG9sZGVyIHJldmlzaW9uIG9mIHRoZSBzcGVj
aWZpY2F0aW9uLCB0aGUgYml0IDMwIChub3cKQ1BUUl9FTDEuVEFNKSB3YXMg
UkVTMC4gQmVjYXVzZSBvZiB0aGF0LCBYZW4gaXMgc2V0dGluZyBpdCB0byAw
IGFuZAp0aGVyZWZvcmUgdGhlIHN5c3RlbSByZWdpc3RlcnMgd291bGQgYmUg
ZXhwb3NlZCB0byB0aGUgZ3Vlc3Qgd2hlbiBpdCBpcwpydW4gb24gcHJvY2Vz
c29ycyB3aXRoIEFNVS4KCkFzIHRoZSBiaXQgaXMgbWFyayBhcyBVTktOT1dO
IGF0IGJvb3QgaW4gQXJtdjguNCwgdGhlIG9ubHkgc2FmZSBzb2x1dGlvbgpm
b3IgdXMgaXMgdG8gYWx3YXlzIHNldCBDUFRSX0VMMS5UQU0gdG8gMS4KCkd1
ZXN0IHRyeWluZyB0byBhY2Nlc3MgdGhlIEFNVSBzeXN0ZW0gcmVnaXN0ZXJz
IHdpbGwgbm93IHJlY2VpdmUgYW4KdW5kZWZpbmVkIGluc3RydWN0aW9uLiBV
bmZvcnR1bmF0ZWx5LCB0aGlzIG1lYW5zIHRoYXQgZXZlbiB3ZWxsLWJlaGF2
ZWQKZ3Vlc3QgbWF5IGZhaWwgdG8gYm9vdCBiZWNhdXNlIHdlIGRvbid0IHNh
bml0aXplIHRoZSBJRCByZWdpc3RlcnMuCgpUaGlzIGlzIGEga25vd24gaXNz
dWVzIHdpdGggb3RoZXIgQXJtdjguMCsgZmVhdHVyZXMgKGUuZy4gU1ZFLCBQ
b2ludGVyCkF1dGgpLiBUaGlzIHdpbGwgdGFrZW4gY2FyZSBzZXBhcmF0ZWx5
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNTEgKG9yIFhTQS05MyByZS1ib3Ju
KS4KClNpZ25lZC1vZmYtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClJldmlld2VkLWJ5OiBBbmRyZSBQcnp5d2FyYSA8YW5kcmUucHJ6
eXdhcmFAYXJtLmNvbT4KUmV2aWV3ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KUmV2aWV3ZWQtYnk6IEJlcnRy
YW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1aXNAYXJtLmNvbT4KLS0tCiB4
ZW4vYXJjaC9hcm0vdHJhcHMuYyAgICAgICAgICAgIHwgMyArKy0KIHhlbi9p
bmNsdWRlL2FzbS1hcm0vcHJvY2Vzc29yLmggfCAxICsKIDIgZmlsZXMgY2hh
bmdlZCwgMyBpbnNlcnRpb25zKCspLCAxIGRlbGV0aW9uKC0pCgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gvYXJtL3RyYXBzLmMgYi94ZW4vYXJjaC9hcm0vdHJh
cHMuYwppbmRleCBlOTMwNTg1YWQ2ZDQuLmMxMjAxMGE3MjJiNSAxMDA2NDQK
LS0tIGEveGVuL2FyY2gvYXJtL3RyYXBzLmMKKysrIGIveGVuL2FyY2gvYXJt
L3RyYXBzLmMKQEAgLTE3OSw3ICsxNzksOCBAQCB2b2lkIGluaXRfdHJhcHMo
dm9pZCkKICAgICAgKiBPbiBBUk02NCB0aGUgVENQeCBiaXRzIHdoaWNoIHdl
IHNldCBoZXJlICgwLi45LDEyLDEzKSBhcmUgYWxsCiAgICAgICogUkVTMSwg
aS5lLiB0aGV5IHdvdWxkIHRyYXAgd2hldGhlciB3ZSBkaWQgdGhpcyB3cml0
ZSBvciBub3QuCiAgICAgICovCi0gICAgV1JJVEVfU1lTUkVHKChIQ1BUUl9D
UF9NQVNLICYgfihIQ1BUUl9DUCgxMCkgfCBIQ1BUUl9DUCgxMSkpKSB8IEhD
UFRSX1RUQSwKKyAgICBXUklURV9TWVNSRUcoKEhDUFRSX0NQX01BU0sgJiB+
KEhDUFRSX0NQKDEwKSB8IEhDUFRSX0NQKDExKSkpIHwKKyAgICAgICAgICAg
ICAgICAgSENQVFJfVFRBIHwgSENQVFJfVEFNLAogICAgICAgICAgICAgICAg
ICBDUFRSX0VMMik7CiAKICAgICAvKiBTZXR1cCBoeXBlcnZpc29yIHRyYXBz
ICovCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20tYXJtL3Byb2Nlc3Nv
ci5oIGIveGVuL2luY2x1ZGUvYXNtLWFybS9wcm9jZXNzb3IuaAppbmRleCAy
MjJhMDJkZDk5MzUuLjU3NTVjYzY0MzQ0YSAxMDA2NDQKLS0tIGEveGVuL2lu
Y2x1ZGUvYXNtLWFybS9wcm9jZXNzb3IuaAorKysgYi94ZW4vaW5jbHVkZS9h
c20tYXJtL3Byb2Nlc3Nvci5oCkBAIC0yOTEsNiArMjkxLDcgQEAKICNkZWZp
bmUgVlRDUl9SRVMxICAgICAgIChfQUMoMSxVTCk8PDMxKQogCiAvKiBIQ1BU
UiBIeXAuIENvcHJvY2Vzc29yIFRyYXAgUmVnaXN0ZXIgKi8KKyNkZWZpbmUg
SENQVFJfVEFNICAgICAgICgoX0FDKDEsVSk8PDMwKSkKICNkZWZpbmUgSENQ
VFJfVFRBICAgICAgICgoX0FDKDEsVSk8PDIwKSkgICAgICAgIC8qIFRyYXAg
dHJhY2UgcmVnaXN0ZXJzICovCiAjZGVmaW5lIEhDUFRSX0NQKHgpICAgICAo
KF9BQygxLFUpPDwoeCkpKSAgICAgICAvKiBUcmFwIENvcHJvY2Vzc29yIHgg
Ki8KICNkZWZpbmUgSENQVFJfQ1BfTUFTSyAgICgoX0FDKDEsVSk8PDE0KS0x
KQotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.11-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCAyNTZlNThkODJiLi4zNDk1YWM5ZjRhIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0xNDEsNiArMTQxLDcgQEAgaW50IGluaXRfdmNwdV9t
c3JfcG9saWN5KHN0cnVjdCB2Y3B1ICp2KQogCiBpbnQgZ3Vlc3RfcmRtc3Io
Y29uc3Qgc3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3Qg
KnZhbCkKIHsKKyAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpkID0gdi0+ZG9t
YWluOwogICAgIGNvbnN0IHN0cnVjdCBjcHVpZF9wb2xpY3kgKmNwID0gdi0+
ZG9tYWluLT5hcmNoLmNwdWlkOwogICAgIGNvbnN0IHN0cnVjdCBtc3JfZG9t
YWluX3BvbGljeSAqZHAgPSB2LT5kb21haW4tPmFyY2gubXNyOwogICAgIGNv
bnN0IHN0cnVjdCBtc3JfdmNwdV9wb2xpY3kgKnZwID0gdi0+YXJjaC5tc3I7
CkBAIC0yMTIsNiArMjEzLDI1IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBz
dHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQog
ICAgICAgICBicmVhazsKIAogICAgICAgICAvKgorICAgICAgICAgKiBUaGVz
ZSBNU1JzIGFyZSBub3QgZW51bWVyYXRlZCBpbiBDUFVJRC4gIFRoZXkgaGF2
ZSBiZWVuIGFyb3VuZAorICAgICAgICAgKiBzaW5jZSB0aGUgUGVudGl1bSA0
LCBhbmQgaW1wbGVtZW50ZWQgYnkgb3RoZXIgdmVuZG9ycy4KKyAgICAgICAg
ICoKKyAgICAgICAgICogU29tZSB2ZXJzaW9ucyBvZiBXaW5kb3dzIHRyeSBy
ZWFkaW5nIHRoZXNlIGJlZm9yZSBzZXR0aW5nIHVwIGEgI0dQCisgICAgICAg
ICAqIGhhbmRsZXIsIGFuZCBMaW51eCBoYXMgc2V2ZXJhbCB1bmd1YXJkZWQg
cmVhZHMgYXMgd2VsbC4gIFByb3ZpZGUKKyAgICAgICAgICogUkFaIHNlbWFu
dGljcywgaW4gZ2VuZXJhbCwgYnV0IHBlcm1pdCBhIGNwdWZyZXEgY29udHJv
bGxlciBkb20wIHRvCisgICAgICAgICAqIGhhdmUgZnVsbCBhY2Nlc3MuCisg
ICAgICAgICAqLworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9TVEFUVVM6Cisg
ICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAgaWYgKCAhKGNw
LT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBYODZfVkVORE9S
X0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2ZhdWx0OworCisg
ICAgICAgICp2YWwgPSAwOworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1
ZnJlcV9jb250cm9sbGVyKGQpKSB8fCByZG1zcl9zYWZlKG1zciwgKnZhbCkg
PT0gMCApCisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgZ290byBncF9m
YXVsdDsKKworICAgICAgICAvKgogICAgICAgICAgKiBUT0RPOiBJbXBsZW1l
bnQgd2hlbiB3ZSBoYXZlIGJldHRlciB0b3BvbG9neSByZXByZXNlbnRhdGlv
bi4KICAgICBjYXNlIE1TUl9JTlRFTF9DT1JFX1RIUkVBRF9DT1VOVDoKICAg
ICAgICAgICovCkBAIC0yNDEsNiArMjYxLDcgQEAgaW50IGd1ZXN0X3dybXNy
KHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkK
ICAgICBjYXNlIE1TUl9JTlRFTF9DT1JFX1RIUkVBRF9DT1VOVDoKICAgICBj
YXNlIE1TUl9JTlRFTF9QTEFURk9STV9JTkZPOgogICAgIGNhc2UgTVNSX0FS
Q0hfQ0FQQUJJTElUSUVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9TVEFU
VVM6CiAgICAgICAgIC8qIFJlYWQtb25seSAqLwogICAgIGNhc2UgTVNSX1RT
WF9GT1JDRV9BQk9SVDoKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKQEAgLTM0
NSw2ICszNjYsMjEgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2
LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICAgICAgYnJlYWs7
CiAgICAgfQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgTVNSIGlz
IG5vdCBlbnVtZXJhdGVkIGluIENQVUlELiAgSXQgaGFzIGJlZW4gYXJvdW5k
IHNpbmNlIHRoZQorICAgICAgICAgKiBQZW50aXVtIDQsIGFuZCBpbXBsZW1l
bnRlZCBieSBvdGhlciB2ZW5kb3JzLgorICAgICAgICAgKgorICAgICAgICAg
KiBUbyBtYXRjaCB0aGUgUkFaIHNlbWFudGljcywgaW1wbGVtZW50IGFzIHdy
aXRlLWRpc2NhcmQsIGV4Y2VwdCBmb3IKKyAgICAgICAgICogYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB3aGljaCBoYXMgZnVsbCBhY2Nlc3MuCisgICAg
ICAgICAqLworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAg
IGlmICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwg
WDg2X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9m
YXVsdDsKKworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250
cm9sbGVyKGQpKSB8fCB3cm1zcl9zYWZlKG1zciwgdmFsKSA9PSAwICkKKyAg
ICAgICAgICAgIGJyZWFrOworICAgICAgICBnb3RvIGdwX2ZhdWx0OworCiAg
ICAgZGVmYXVsdDoKICAgICAgICAgcmV0dXJuIFg4NkVNVUxfVU5IQU5ETEVB
QkxFOwogICAgIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVs
LXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYwpp
bmRleCA4MTIwZGVkMzMwLi43NTVmMDBkYjMzIDEwMDY0NAotLS0gYS94ZW4v
YXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKKysrIGIveGVuL2FyY2gveDg2
L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC04MTYsMTIgKzgxNiw2IEBAIHN0YXRp
YyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlzY19lbmFibGUodWludDY0X3Qg
dmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9v
bCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVxX2NvbnRyb2xsZXIgPT0gRlJF
UUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAgICAgICAgIGlzX2hhcmR3YXJl
X2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBpbnQgcmVhZF9tc3IodW5zaWdu
ZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwKICAgICAgICAgICAgICAgICAg
ICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQgKmN0eHQpCiB7CkBAIC0xMDk2
LDE0ICsxMDkwLDYgQEAgc3RhdGljIGludCB3cml0ZV9tc3IodW5zaWduZWQg
aW50IHJlZywgdWludDY0X3QgdmFsLAogICAgICAgICAgICAgcmV0dXJuIFg4
NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7CiAKLSAgICBjYXNlIE1TUl9J
QTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAoIGJvb3RfY3B1X2RhdGEueDg2
X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVMICkKLSAgICAgICAgICAgIGJy
ZWFrOwotICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250cm9s
bGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAgICB3cm1zcl9zYWZlKHJlZywg
dmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJldHVybiBYODZFTVVMX09LQVk7
Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2FzZSBNU1JfSUEzMl9USEVSTV9D
T05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJfRU5FUkdZX1BFUkZfQklBUzoK
ICAgICAgICAgaWYgKCBib290X2NwdV9kYXRhLng4Nl92ZW5kb3IgIT0gWDg2
X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCmluZGV4IGMwY2M1
ZDkzMzYuLjdlNGFkNWQ1MWIgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL3hl
bi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCkBAIC05
MjAsNiArOTIwLDIyIEBAIGV4dGVybiBlbnVtIGNwdWZyZXFfY29udHJvbGxl
ciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVRQ1RMX2RvbTBfa2VybmVsLCBG
UkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRyb2xsZXI7CiAKK3N0YXRpYyBh
bHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9jb250cm9sbGVyKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAgLyoKKyAgICAgKiBBIFBWIGRv
bTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUgY3B1ZnJlcSBjb250cm9sbGVy
LCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICogWGVuJ3MgY3B1ZnJlcSBkcml2
ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0cyBkaXJlY3QgYWNjZXNzIHRv
IGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAgICAqCisgICAgICogVGhpcyBp
bnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRvbTAgaXMgaWRlbnRpdHkgcGlu
bmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAgKiBudW1iZXIgb2YgdkNQVXMg
YXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAgICAgKgorICAgICAqIEl0IHdv
dWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZpcnR1YWxpc2UgdGhlIGludGVy
ZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4gKGlzX3B2X2RvbWFpbihkKSAm
JiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYKKyAgICAgICAgICAgIGNwdWZy
ZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2RvbTBfa2VybmVsKTsKK30KKwog
I2RlZmluZSBDUFVQT09MSURfTk9ORSAgICAtMQogCiBzdHJ1Y3QgY3B1cG9v
bCAqY3B1cG9vbF9nZXRfYnlfaWQoaW50IHBvb2xpZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.11-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCAz
NDk1YWM5ZjRhLi45OWM4NDhmZjQxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTYsNiAr
MTU2LDE1IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBzdHJ1Y3QgdmNwdSAq
diwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNS
X1RTWF9GT1JDRV9BQk9SVDoKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAg
ICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CisgICAgY2FzZSBNU1JfUkFQTF9Q
T1dFUl9VTklUOgorICAgIGNhc2UgTVNSX1BLR19QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QS0dfUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9EUkFNX1BPV0VS
X0xJTUlUIC4uLiBNU1JfRFJBTV9QT1dFUl9JTkZPOgorICAgIGNhc2UgTVNS
X1BQMF9QT1dFUl9MSU1JVCAgLi4uIE1TUl9QUDBfUE9MSUNZOgorICAgIGNh
c2UgTVNSX1BQMV9QT1dFUl9MSU1JVCAgLi4uIE1TUl9QUDFfUE9MSUNZOgor
ICAgIGNhc2UgTVNSX1BMQVRGT1JNX0VORVJHWV9DT1VOVEVSOgorICAgIGNh
c2UgTVNSX1BMQVRGT1JNX1BPV0VSX0xJTUlUOgorICAgIGNhc2UgTVNSX0Yx
NUhfQ1VfUE9XRVIgLi4uIE1TUl9GMTVIX0NVX01BWF9QT1dFUjoKKyAgICBj
YXNlIE1TUl9BTURfUkFQTF9QT1dFUl9VTklUIC4uLiBNU1JfQU1EX1BLR19F
TkVSR1lfU1RBVFVTOgogICAgICAgICAvKiBOb3Qgb2ZmZXJlZCB0byBndWVz
dHMuICovCiAgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CiAKQEAgLTI2Niw2ICsy
NzUsMTUgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50
MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfRk9S
Q0VfQUJPUlQ6CiAgICAgY2FzZSBNU1JfVFNYX0NUUkw6CiAgICAgY2FzZSBN
U1JfTUNVX09QVF9DVFJMOgorICAgIGNhc2UgTVNSX1JBUExfUE9XRVJfVU5J
VDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBNU1JfUEtH
X1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1JVCAu
Li4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9QUDBfUE9X
RVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNlIE1TUl9Q
UDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAgICBjYXNl
IE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNlIE1TUl9Q
TEFURk9STV9QT1dFUl9MSU1JVDoKKyAgICBjYXNlIE1TUl9GMTVIX0NVX1BP
V0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAgY2FzZSBNU1Jf
QU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0dfRU5FUkdZX1NU
QVRVUzoKICAgICAgICAgLyogTm90IG9mZmVyZWQgdG8gZ3Vlc3RzLiAqLwog
ICAgICAgICBnb3RvIGdwX2ZhdWx0OwogCmRpZmYgLS1naXQgYS94ZW4vaW5j
bHVkZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tc3ItaW5kZXguaAppbmRleCA0ODBkMWQ4MTAyLi5hNjg1ZGNkY2NhIDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCisr
KyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgKQEAgLTk2LDYg
Kzk2LDM4IEBACiAvKiBMb3dlciA2IGJpdHMgZGVmaW5lIHRoZSBmb3JtYXQg
b2YgdGhlIGFkZHJlc3MgaW4gdGhlIExCUiBzdGFjayAqLwogI2RlZmluZSBN
U1JfSUEzMl9QRVJGX0NBUF9MQlJfRk9STUFUCTB4M2YKIAorLyoKKyAqIElu
dGVsIFJ1bnRpbWUgQXZlcmFnZSBQb3dlciBMaW1pdGluZyAoUkFQTCkgaW50
ZXJmYWNlLiAgUG93ZXIgcGxhbmUgYmFzZQorICogYWRkcmVzc2VzIChNU1Jf
Kl9QT1dFUl9MSU1JVCkgYXJlIG1vZGVsIHNwZWNpZmljLCBidXQgaGF2ZSBz
by1mYXIgYmVlbgorICogY29uc2lzdGVudCBzaW5jZSB0aGVpciBpbnRyb2R1
Y3Rpb24gaW4gU2FuZHlCcmlkZ2UuCisgKgorICogT2Zmc2V0cyBvZiBmdW5j
dGlvbmFsaXR5IGZyb20gdGhlIHBvd2VyIHBsYW5lIGJhc2UgaXMgYXJjaGl0
ZWN0dXJhbCwgYnV0CisgKiBub3QgYWxsIHBvd2VyIHBsYW5lcyBzdXBwb3J0
IGFsbCBmdW5jdGlvbmFsaXR5LgorICovCisjZGVmaW5lIE1TUl9SQVBMX1BP
V0VSX1VOSVQJCTB4MDAwMDA2MDYKKworI2RlZmluZSBNU1JfUEtHX1BPV0VS
X0xJTUlUCQkweDAwMDAwNjEwCisjZGVmaW5lIE1TUl9QS0dfRU5FUkdZX1NU
QVRVUwkJMHgwMDAwMDYxMQorI2RlZmluZSBNU1JfUEtHX1BFUkZfU1RBVFVT
CQkweDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJfSU5GTwkJMHgw
MDAwMDYxNAorCisjZGVmaW5lIE1TUl9EUkFNX1BPV0VSX0xJTUlUCQkweDAw
MDAwNjE4CisjZGVmaW5lIE1TUl9EUkFNX0VORVJHWV9TVEFUVVMJCTB4MDAw
MDA2MTkKKyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMJCTB4MDAwMDA2
MWIKKyNkZWZpbmUgTVNSX0RSQU1fUE9XRVJfSU5GTwkJMHgwMDAwMDYxYwor
CisjZGVmaW5lIE1TUl9QUDBfUE9XRVJfTElNSVQJCTB4MDAwMDA2MzgKKyNk
ZWZpbmUgTVNSX1BQMF9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjM5CisjZGVm
aW5lIE1TUl9QUDBfUE9MSUNZCQkJMHgwMDAwMDYzYQorCisjZGVmaW5lIE1T
Ul9QUDFfUE9XRVJfTElNSVQJCTB4MDAwMDA2NDAKKyNkZWZpbmUgTVNSX1BQ
MV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjQxCisjZGVmaW5lIE1TUl9QUDFf
UE9MSUNZCQkJMHgwMDAwMDY0MgorCisvKiBJbnRlbCBQbGF0Zm9ybS13aWRl
IHBvd2VyIGludGVyZmFjZS4gKi8KKyNkZWZpbmUgTVNSX1BMQVRGT1JNX0VO
RVJHWV9DT1VOVEVSCTB4MDAwMDA2NGQKKyNkZWZpbmUgTVNSX1BMQVRGT1JN
X1BPV0VSX0xJTUlUCTB4MDAwMDA2NWMKKwogI2RlZmluZSBNU1JfSUEzMl9C
TkRDRkdTCQkweDAwMDAwZDkwCiAjZGVmaW5lIElBMzJfQk5EQ0ZHU19FTkFC
TEUJCTB4MDAwMDAwMDEKICNkZWZpbmUgSUEzMl9CTkRDRkdTX1BSRVNFUlZF
CQkweDAwMDAwMDAyCkBAIC0yMTgsNiArMjUwLDggQEAKICNkZWZpbmUgTVNS
X0s4X1ZNX0NSCQkJMHhjMDAxMDExNAogI2RlZmluZSBNU1JfSzhfVk1fSFNB
VkVfUEEJCTB4YzAwMTAxMTcKIAorI2RlZmluZSBNU1JfRjE1SF9DVV9QT1dF
UgkJMHhjMDAxMDA3YQorI2RlZmluZSBNU1JfRjE1SF9DVV9NQVhfUE9XRVIJ
CTB4YzAwMTAwN2IKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDAJ
CTB4YzAwMTAyMDAKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfUEVSRkNUUjAJ
CTB4YzAwMTAyMDEKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDEJ
CTB4YzAwMTAyMDIKQEAgLTIzMSw2ICsyNjUsMTAgQEAKICNkZWZpbmUgTVNS
X0FNRF9GQU0xNUhfRVZOVFNFTDUJCTB4YzAwMTAyMGEKICNkZWZpbmUgTVNS
X0FNRF9GQU0xNUhfUEVSRkNUUjUJCTB4YzAwMTAyMGIKIAorI2RlZmluZSBN
U1JfQU1EX1JBUExfUE9XRVJfVU5JVAkJMHhjMDAxMDI5OQorI2RlZmluZSBN
U1JfQU1EX0NPUkVfRU5FUkdZX1NUQVRVUwkweGMwMDEwMjlhCisjZGVmaW5l
IE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5YgorCiAjZGVm
aW5lIE1TUl9BTURfTDdTMF9GRUFUVVJFX01BU0sJMHhjMDAxMTAwMgogI2Rl
ZmluZSBNU1JfQU1EX1RIUk1fRkVBVFVSRV9NQVNLCTB4YzAwMTEwMDMKICNk
ZWZpbmUgTVNSX0s4X0ZFQVRVUkVfTUFTSwkJMHhjMDAxMTAwNAo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.12-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCA0Njc3MjIyYzQwLi5hNDI3ODI2YmEwIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yMDYsNiArMjA2LDI1IEBAIGludCBndWVzdF9yZG1z
cihjb25zdCBzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRf
dCAqdmFsKQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19l
bmFibGVzLnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAg
ICAgICAgICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BV
SUQuICBUaGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2Ug
dGhlIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRv
cnMuCisgICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2Yg
V2luZG93cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBh
ICNHUAorICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVy
YWwgdW5ndWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAg
ICAqIFJBWiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBj
cHVmcmVxIGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1
bGwgYWNjZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BF
UkZfU1RBVFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAg
ICAgIGlmICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVM
IHwgWDg2X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBn
cF9mYXVsdDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBs
aWtlbHkoIWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2Fm
ZShtc3IsICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAg
ICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklS
U1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZt
X2RvbWFpbihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBn
cF9mYXVsdDsKQEAgLTI5MCw2ICszMDksNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Io
c3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQog
ICAgIGNhc2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNh
c2UgTVNSX0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJD
SF9DQVBBQklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRV
UzoKICAgICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVFNY
X0ZPUkNFX0FCT1JUOgogICAgIGNhc2UgTVNSX1RTWF9DVFJMOgpAQCAtMzk0
LDYgKzQxNCwyMSBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYs
IHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgICAgICBicmVhazsK
ICAgICB9CiAKKyAgICAgICAgLyoKKyAgICAgICAgICogVGhpcyBNU1IgaXMg
bm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBJdCBoYXMgYmVlbiBhcm91bmQg
c2luY2UgdGhlCisgICAgICAgICAqIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVu
dGVkIGJ5IG90aGVyIHZlbmRvcnMuCisgICAgICAgICAqCisgICAgICAgICAq
IFRvIG1hdGNoIHRoZSBSQVogc2VtYW50aWNzLCBpbXBsZW1lbnQgYXMgd3Jp
dGUtZGlzY2FyZCwgZXhjZXB0IGZvcgorICAgICAgICAgKiBhIGNwdWZyZXEg
Y29udHJvbGxlciBkb20wIHdoaWNoIGhhcyBmdWxsIGFjY2Vzcy4KKyAgICAg
ICAgICovCisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAg
aWYgKCAhKGNwLT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBY
ODZfVkVORE9SX0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2Zh
dWx0OworCisgICAgICAgIGlmICggbGlrZWx5KCFpc19jcHVmcmVxX2NvbnRy
b2xsZXIoZCkpIHx8IHdybXNyX3NhZmUobXNyLCB2YWwpID09IDAgKQorICAg
ICAgICAgICAgYnJlYWs7CisgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAg
ICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoK
ICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCB2ICE9IGN1cnIg
KQogICAgICAgICAgICAgZ290byBncF9mYXVsdDsKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9w
di9lbXVsLXByaXYtb3AuYwppbmRleCAzMjRhMjMzNGEyLi45MzMwMzZlYTM0
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMK
KysrIGIveGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC03OTks
MTIgKzc5OSw2IEBAIHN0YXRpYyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlz
Y19lbmFibGUodWludDY0X3QgdmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAK
LXN0YXRpYyBpbmxpbmUgYm9vbCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29u
c3Qgc3RydWN0IGRvbWFpbiAqZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVx
X2NvbnRyb2xsZXIgPT0gRlJFUUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAg
ICAgICAgIGlzX2hhcmR3YXJlX2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBp
bnQgcmVhZF9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwK
ICAgICAgICAgICAgICAgICAgICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQg
KmN0eHQpCiB7CkBAIC0xMDQ3LDE0ICsxMDQxLDYgQEAgc3RhdGljIGludCB3
cml0ZV9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgdmFsLAogICAg
ICAgICAgICAgcmV0dXJuIFg4NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7
CiAKLSAgICBjYXNlIE1TUl9JQTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAo
IGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVM
ICkKLSAgICAgICAgICAgIGJyZWFrOwotICAgICAgICBpZiAoIGxpa2VseSgh
aXNfY3B1ZnJlcV9jb250cm9sbGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAg
ICB3cm1zcl9zYWZlKHJlZywgdmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJl
dHVybiBYODZFTVVMX09LQVk7Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2Fz
ZSBNU1JfSUEzMl9USEVSTV9DT05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJf
RU5FUkdZX1BFUkZfQklBUzoKICAgICAgICAgaWYgKCBib290X2NwdV9kYXRh
Lng4Nl92ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9z
Y2hlZC5oCmluZGV4IDgxOWY2ZWRlMmIuLmI5MTg2MjQzMjcgMTAwNjQ0Ci0t
LSBhL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRl
L3hlbi9zY2hlZC5oCkBAIC05OTMsNiArOTkzLDIyIEBAIGV4dGVybiBlbnVt
IGNwdWZyZXFfY29udHJvbGxlciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVR
Q1RMX2RvbTBfa2VybmVsLCBGUkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRy
b2xsZXI7CiAKK3N0YXRpYyBhbHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJl
cV9jb250cm9sbGVyKGNvbnN0IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAg
LyoKKyAgICAgKiBBIFBWIGRvbTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUg
Y3B1ZnJlcSBjb250cm9sbGVyLCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICog
WGVuJ3MgY3B1ZnJlcSBkcml2ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0
cyBkaXJlY3QgYWNjZXNzIHRvIGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAg
ICAqCisgICAgICogVGhpcyBpbnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRv
bTAgaXMgaWRlbnRpdHkgcGlubmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAg
KiBudW1iZXIgb2YgdkNQVXMgYXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAg
ICAgKgorICAgICAqIEl0IHdvdWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZp
cnR1YWxpc2UgdGhlIGludGVyZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4g
KGlzX3B2X2RvbWFpbihkKSAmJiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYK
KyAgICAgICAgICAgIGNwdWZyZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2Rv
bTBfa2VybmVsKTsKK30KKwogI2RlZmluZSBDUFVQT09MSURfTk9ORSAgICAt
MQogCiBzdHJ1Y3QgY3B1cG9vbCAqY3B1cG9vbF9nZXRfYnlfaWQoaW50IHBv
b2xpZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.12-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCBh
NDI3ODI2YmEwLi45MjdlZDYyNWRmIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTEsOSAr
MTUxLDE4IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBzdHJ1Y3QgdmNwdSAq
diwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNS
X1RTWF9DVFJMOgogICAgIGNhc2UgTVNSX01DVV9PUFRfQ1RSTDoKICAgICBj
YXNlIE1TUl9SVElUX09VVFBVVF9CQVNFIC4uLiBNU1JfUlRJVF9BRERSX0Io
Nyk6CisgICAgY2FzZSBNU1JfUkFQTF9QT1dFUl9VTklUOgorICAgIGNhc2Ug
TVNSX1BLR19QT1dFUl9MSU1JVCAgLi4uIE1TUl9QS0dfUE9XRVJfSU5GTzoK
KyAgICBjYXNlIE1TUl9EUkFNX1BPV0VSX0xJTUlUIC4uLiBNU1JfRFJBTV9Q
T1dFUl9JTkZPOgorICAgIGNhc2UgTVNSX1BQMF9QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QUDBfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BQMV9QT1dFUl9MSU1J
VCAgLi4uIE1TUl9QUDFfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BMQVRGT1JN
X0VORVJHWV9DT1VOVEVSOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX1BPV0VS
X0xJTUlUOgogICAgIGNhc2UgTVNSX1VfQ0VUOgogICAgIGNhc2UgTVNSX1Nf
Q0VUOgogICAgIGNhc2UgTVNSX1BMMF9TU1AgLi4uIE1TUl9JTlRFUlJVUFRf
U1NQX1RBQkxFOgorICAgIGNhc2UgTVNSX0YxNUhfQ1VfUE9XRVIgLi4uIE1T
Ul9GMTVIX0NVX01BWF9QT1dFUjoKKyAgICBjYXNlIE1TUl9BTURfUkFQTF9Q
T1dFUl9VTklUIC4uLiBNU1JfQU1EX1BLR19FTkVSR1lfU1RBVFVTOgogICAg
ICAgICAvKiBOb3Qgb2ZmZXJlZCB0byBndWVzdHMuICovCiAgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CiAKQEAgLTMxNSw5ICszMjQsMTggQEAgaW50IGd1ZXN0
X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90
IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAgICBjYXNlIE1TUl9N
Q1VfT1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9PVVRQVVRfQkFTRSAu
Li4gTVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2UgTVNSX1JBUExfUE9X
RVJfVU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBN
U1JfUEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9M
SU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9Q
UDBfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNl
IE1TUl9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAg
ICBjYXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNl
IE1TUl9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBjYXNlIE1TUl9VX0NF
VDoKICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNlIE1TUl9QTDBfU1NQ
IC4uLiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKKyAgICBjYXNlIE1TUl9G
MTVIX0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAg
Y2FzZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0df
RU5FUkdZX1NUQVRVUzoKICAgICAgICAgLyogTm90IG9mZmVyZWQgdG8gZ3Vl
c3RzLiAqLwogICAgICAgICBnb3RvIGdwX2ZhdWx0OwogCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAppbmRleCAwZWI2ODU1NjE0Li5iYTll
OTBhZjIxIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1p
bmRleC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgK
QEAgLTk2LDYgKzk2LDM4IEBACiAvKiBMb3dlciA2IGJpdHMgZGVmaW5lIHRo
ZSBmb3JtYXQgb2YgdGhlIGFkZHJlc3MgaW4gdGhlIExCUiBzdGFjayAqLwog
I2RlZmluZSBNU1JfSUEzMl9QRVJGX0NBUF9MQlJfRk9STUFUCTB4M2YKIAor
LyoKKyAqIEludGVsIFJ1bnRpbWUgQXZlcmFnZSBQb3dlciBMaW1pdGluZyAo
UkFQTCkgaW50ZXJmYWNlLiAgUG93ZXIgcGxhbmUgYmFzZQorICogYWRkcmVz
c2VzIChNU1JfKl9QT1dFUl9MSU1JVCkgYXJlIG1vZGVsIHNwZWNpZmljLCBi
dXQgaGF2ZSBzby1mYXIgYmVlbgorICogY29uc2lzdGVudCBzaW5jZSB0aGVp
ciBpbnRyb2R1Y3Rpb24gaW4gU2FuZHlCcmlkZ2UuCisgKgorICogT2Zmc2V0
cyBvZiBmdW5jdGlvbmFsaXR5IGZyb20gdGhlIHBvd2VyIHBsYW5lIGJhc2Ug
aXMgYXJjaGl0ZWN0dXJhbCwgYnV0CisgKiBub3QgYWxsIHBvd2VyIHBsYW5l
cyBzdXBwb3J0IGFsbCBmdW5jdGlvbmFsaXR5LgorICovCisjZGVmaW5lIE1T
Ul9SQVBMX1BPV0VSX1VOSVQJCTB4MDAwMDA2MDYKKworI2RlZmluZSBNU1Jf
UEtHX1BPV0VSX0xJTUlUCQkweDAwMDAwNjEwCisjZGVmaW5lIE1TUl9QS0df
RU5FUkdZX1NUQVRVUwkJMHgwMDAwMDYxMQorI2RlZmluZSBNU1JfUEtHX1BF
UkZfU1RBVFVTCQkweDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJf
SU5GTwkJMHgwMDAwMDYxNAorCisjZGVmaW5lIE1TUl9EUkFNX1BPV0VSX0xJ
TUlUCQkweDAwMDAwNjE4CisjZGVmaW5lIE1TUl9EUkFNX0VORVJHWV9TVEFU
VVMJCTB4MDAwMDA2MTkKKyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMJ
CTB4MDAwMDA2MWIKKyNkZWZpbmUgTVNSX0RSQU1fUE9XRVJfSU5GTwkJMHgw
MDAwMDYxYworCisjZGVmaW5lIE1TUl9QUDBfUE9XRVJfTElNSVQJCTB4MDAw
MDA2MzgKKyNkZWZpbmUgTVNSX1BQMF9FTkVSR1lfU1RBVFVTCQkweDAwMDAw
NjM5CisjZGVmaW5lIE1TUl9QUDBfUE9MSUNZCQkJMHgwMDAwMDYzYQorCisj
ZGVmaW5lIE1TUl9QUDFfUE9XRVJfTElNSVQJCTB4MDAwMDA2NDAKKyNkZWZp
bmUgTVNSX1BQMV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjQxCisjZGVmaW5l
IE1TUl9QUDFfUE9MSUNZCQkJMHgwMDAwMDY0MgorCisvKiBJbnRlbCBQbGF0
Zm9ybS13aWRlIHBvd2VyIGludGVyZmFjZS4gKi8KKyNkZWZpbmUgTVNSX1BM
QVRGT1JNX0VORVJHWV9DT1VOVEVSCTB4MDAwMDA2NGQKKyNkZWZpbmUgTVNS
X1BMQVRGT1JNX1BPV0VSX0xJTUlUCTB4MDAwMDA2NWMKKwogI2RlZmluZSBN
U1JfSUEzMl9CTkRDRkdTCQkweDAwMDAwZDkwCiAjZGVmaW5lIElBMzJfQk5E
Q0ZHU19FTkFCTEUJCTB4MDAwMDAwMDEKICNkZWZpbmUgSUEzMl9CTkRDRkdT
X1BSRVNFUlZFCQkweDAwMDAwMDAyCkBAIC0yMzYsNiArMjY4LDggQEAKICNk
ZWZpbmUgTVNSX0s4X1ZNX0NSCQkJMHhjMDAxMDExNAogI2RlZmluZSBNU1Jf
SzhfVk1fSFNBVkVfUEEJCTB4YzAwMTAxMTcKIAorI2RlZmluZSBNU1JfRjE1
SF9DVV9QT1dFUgkJMHhjMDAxMDA3YQorI2RlZmluZSBNU1JfRjE1SF9DVV9N
QVhfUE9XRVIJCTB4YzAwMTAwN2IKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
RVZOVFNFTDAJCTB4YzAwMTAyMDAKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
UEVSRkNUUjAJCTB4YzAwMTAyMDEKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
RVZOVFNFTDEJCTB4YzAwMTAyMDIKQEAgLTI0OSw2ICsyODMsMTAgQEAKICNk
ZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDUJCTB4YzAwMTAyMGEKICNk
ZWZpbmUgTVNSX0FNRF9GQU0xNUhfUEVSRkNUUjUJCTB4YzAwMTAyMGIKIAor
I2RlZmluZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVAkJMHhjMDAxMDI5OQor
I2RlZmluZSBNU1JfQU1EX0NPUkVfRU5FUkdZX1NUQVRVUwkweGMwMDEwMjlh
CisjZGVmaW5lIE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5
YgorCiAjZGVmaW5lIE1TUl9BTURfTDdTMF9GRUFUVVJFX01BU0sJMHhjMDAx
MTAwMgogI2RlZmluZSBNU1JfQU1EX1RIUk1fRkVBVFVSRV9NQVNLCTB4YzAw
MTEwMDMKICNkZWZpbmUgTVNSX0s4X0ZFQVRVUkVfTUFTSwkJMHhjMDAxMTAw
NAo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.13-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCA4NzVhYzM5ZDMwLi44Yzk2OTE5N2FhIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yMDgsNiArMjA4LDI1IEBAIGludCBndWVzdF9yZG1z
cihzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFs
KQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19lbmFibGVz
LnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAgICAgICAg
ICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBU
aGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2UgdGhlIFBl
bnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRvcnMuCisg
ICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2YgV2luZG93
cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBhICNHUAor
ICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVyYWwgdW5n
dWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAgICAqIFJB
WiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1bGwgYWNj
ZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BFUkZfU1RB
VFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlm
ICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2
X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBsaWtlbHko
IWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2FmZShtc3Is
ICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4u
IE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFp
bihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKQEAgLTMwNSw2ICszMjQsNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0
IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgIGNh
c2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNhc2UgTVNS
X0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJDSF9DQVBB
QklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRVUzoKICAg
ICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVFNYX0ZPUkNF
X0FCT1JUOgogICAgIGNhc2UgTVNSX1RTWF9DVFJMOgpAQCAtNDExLDYgKzQz
MSwyMSBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYsIHVpbnQz
Ml90IG1zciwgdWludDY0X3QgdmFsKQogICAgICAgICBicmVhazsKICAgICB9
CiAKKyAgICAgICAgLyoKKyAgICAgICAgICogVGhpcyBNU1IgaXMgbm90IGVu
dW1lcmF0ZWQgaW4gQ1BVSUQuICBJdCBoYXMgYmVlbiBhcm91bmQgc2luY2Ug
dGhlCisgICAgICAgICAqIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5
IG90aGVyIHZlbmRvcnMuCisgICAgICAgICAqCisgICAgICAgICAqIFRvIG1h
dGNoIHRoZSBSQVogc2VtYW50aWNzLCBpbXBsZW1lbnQgYXMgd3JpdGUtZGlz
Y2FyZCwgZXhjZXB0IGZvcgorICAgICAgICAgKiBhIGNwdWZyZXEgY29udHJv
bGxlciBkb20wIHdoaWNoIGhhcyBmdWxsIGFjY2Vzcy4KKyAgICAgICAgICov
CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAgaWYgKCAh
KGNwLT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBYODZfVkVO
RE9SX0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2ZhdWx0Owor
CisgICAgICAgIGlmICggbGlrZWx5KCFpc19jcHVmcmVxX2NvbnRyb2xsZXIo
ZCkpIHx8IHdybXNyX3NhZmUobXNyLCB2YWwpID09IDAgKQorICAgICAgICAg
ICAgYnJlYWs7CisgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAgICBjYXNl
IE1TUl9YMkFQSUNfRklSU1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoKICAgICAg
ICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCB2ICE9IGN1cnIgKQogICAg
ICAgICAgICAgZ290byBncF9mYXVsdDsKZGlmZiAtLWdpdCBhL3hlbi9hcmNo
L3g4Ni9wdi9lbXVsLXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9wdi9lbXVs
LXByaXYtb3AuYwppbmRleCA0MjI1OGM2YmYxLi42ZGM0ZjkyYTg0IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKKysrIGIv
eGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC03NzYsMTIgKzc3
Niw2IEBAIHN0YXRpYyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlzY19lbmFi
bGUodWludDY0X3QgdmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAKLXN0YXRp
YyBpbmxpbmUgYm9vbCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29uc3Qgc3Ry
dWN0IGRvbWFpbiAqZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVxX2NvbnRy
b2xsZXIgPT0gRlJFUUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAgICAgICAg
IGlzX2hhcmR3YXJlX2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBpbnQgcmVh
ZF9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwKICAgICAg
ICAgICAgICAgICAgICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQgKmN0eHQp
CiB7CkBAIC0xMDI2LDE0ICsxMDIwLDYgQEAgc3RhdGljIGludCB3cml0ZV9t
c3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgdmFsLAogICAgICAgICAg
ICAgcmV0dXJuIFg4NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7CiAKLSAg
ICBjYXNlIE1TUl9JQTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAoIGJvb3Rf
Y3B1X2RhdGEueDg2X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVMICkKLSAg
ICAgICAgICAgIGJyZWFrOwotICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1
ZnJlcV9jb250cm9sbGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAgICB3cm1z
cl9zYWZlKHJlZywgdmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJldHVybiBY
ODZFTVVMX09LQVk7Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2FzZSBNU1Jf
SUEzMl9USEVSTV9DT05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJfRU5FUkdZ
X1BFUkZfQklBUzoKICAgICAgICAgaWYgKCBib290X2NwdV9kYXRhLng4Nl92
ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQgYS94ZW4v
aW5jbHVkZS94ZW4vc2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5o
CmluZGV4IGQ2ZTI3ZmM0YjguLjhiYjViZDdiMzggMTAwNjQ0Ci0tLSBhL3hl
bi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9z
Y2hlZC5oCkBAIC0xMDU3LDYgKzEwNTcsMjIgQEAgZXh0ZXJuIGVudW0gY3B1
ZnJlcV9jb250cm9sbGVyIHsKICAgICBGUkVRQ1RMX25vbmUsIEZSRVFDVExf
ZG9tMF9rZXJuZWwsIEZSRVFDVExfeGVuCiB9IGNwdWZyZXFfY29udHJvbGxl
cjsKIAorc3RhdGljIGFsd2F5c19pbmxpbmUgYm9vbCBpc19jcHVmcmVxX2Nv
bnRyb2xsZXIoY29uc3Qgc3RydWN0IGRvbWFpbiAqZCkKK3sKKyAgICAvKgor
ICAgICAqIEEgUFYgZG9tMCBjYW4gYmUgbm9taW5hdGVkIGFzIHRoZSBjcHVm
cmVxIGNvbnRyb2xsZXIsIGluc3RlYWQgb2YgdXNpbmcKKyAgICAgKiBYZW4n
cyBjcHVmcmVxIGRyaXZlciwgYXQgd2hpY2ggcG9pbnQgZG9tMCBnZXRzIGRp
cmVjdCBhY2Nlc3MgdG8gY2VydGFpbgorICAgICAqIE1TUnMuCisgICAgICoK
KyAgICAgKiBUaGlzIGludGVyZmFjZSBvbmx5IHdvcmtzIHdoZW4gZG9tMCBp
cyBpZGVudGl0eSBwaW5uZWQgYW5kIGhhcyB0aGUgc2FtZQorICAgICAqIG51
bWJlciBvZiB2Q1BVcyBhcyBwQ1BVcyBvbiB0aGUgc3lzdGVtLgorICAgICAq
CisgICAgICogSXQgd291bGQgYmUgZmFyIGJldHRlciB0byBwYXJhdmlydHVh
bGlzZSB0aGUgaW50ZXJmYWNlLgorICAgICAqLworICAgIHJldHVybiAoaXNf
cHZfZG9tYWluKGQpICYmIGlzX2hhcmR3YXJlX2RvbWFpbihkKSAmJgorICAg
ICAgICAgICAgY3B1ZnJlcV9jb250cm9sbGVyID09IEZSRVFDVExfZG9tMF9r
ZXJuZWwpOworfQorCiAjZGVmaW5lIENQVVBPT0xJRF9OT05FICAgIC0xCiAK
IHN0cnVjdCBjcHVwb29sICpjcHVwb29sX2dldF9ieV9pZChpbnQgcG9vbGlk
KTsK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.13-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCA4
Yzk2OTE5N2FhLi44YWI2OTQ5YThlIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTIsMTEg
KzE1MiwyMCBAQCBpbnQgZ3Vlc3RfcmRtc3Ioc3RydWN0IHZjcHUgKnYsIHVp
bnQzMl90IG1zciwgdWludDY0X3QgKnZhbCkKICAgICBjYXNlIE1TUl9UU1hf
Q1RSTDoKICAgICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CiAgICAgY2FzZSBN
U1JfUlRJVF9PVVRQVVRfQkFTRSAuLi4gTVNSX1JUSVRfQUREUl9CKDcpOgor
ICAgIGNhc2UgTVNSX1JBUExfUE9XRVJfVU5JVDoKKyAgICBjYXNlIE1TUl9Q
S0dfUE9XRVJfTElNSVQgIC4uLiBNU1JfUEtHX1BPV0VSX0lORk86CisgICAg
Y2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJf
SU5GTzoKKyAgICBjYXNlIE1TUl9QUDBfUE9XRVJfTElNSVQgIC4uLiBNU1Jf
UFAwX1BPTElDWToKKyAgICBjYXNlIE1TUl9QUDFfUE9XRVJfTElNSVQgIC4u
LiBNU1JfUFAxX1BPTElDWToKKyAgICBjYXNlIE1TUl9QTEFURk9STV9FTkVS
R1lfQ09VTlRFUjoKKyAgICBjYXNlIE1TUl9QTEFURk9STV9QT1dFUl9MSU1J
VDoKICAgICBjYXNlIE1TUl9VX0NFVDoKICAgICBjYXNlIE1TUl9TX0NFVDoK
ICAgICBjYXNlIE1TUl9QTDBfU1NQIC4uLiBNU1JfSU5URVJSVVBUX1NTUF9U
QUJMRToKICAgICBjYXNlIE1TUl9BTUQ2NF9MV1BfQ0ZHOgogICAgIGNhc2Ug
TVNSX0FNRDY0X0xXUF9DQkFERFI6CisgICAgY2FzZSBNU1JfRjE1SF9DVV9Q
T1dFUiAuLi4gTVNSX0YxNUhfQ1VfTUFYX1BPV0VSOgorICAgIGNhc2UgTVNS
X0FNRF9SQVBMX1BPV0VSX1VOSVQgLi4uIE1TUl9BTURfUEtHX0VORVJHWV9T
VEFUVVM6CiAgICAgICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8K
ICAgICAgICAgZ290byBncF9mYXVsdDsKIApAQCAtMzMwLDExICszMzksMjAg
QEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBt
c3IsIHVpbnQ2NF90IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAg
ICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9P
VVRQVVRfQkFTRSAuLi4gTVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2Ug
TVNSX1JBUExfUE9XRVJfVU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJf
TElNSVQgIC4uLiBNU1JfUEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1Jf
RFJBTV9QT1dFUl9MSU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAg
ICBjYXNlIE1TUl9QUDBfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElD
WToKKyAgICBjYXNlIE1TUl9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAx
X1BPTElDWToKKyAgICBjYXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRF
UjoKKyAgICBjYXNlIE1TUl9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBj
YXNlIE1TUl9VX0NFVDoKICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNl
IE1TUl9QTDBfU1NQIC4uLiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKICAg
ICBjYXNlIE1TUl9BTUQ2NF9MV1BfQ0ZHOgogICAgIGNhc2UgTVNSX0FNRDY0
X0xXUF9DQkFERFI6CisgICAgY2FzZSBNU1JfRjE1SF9DVV9QT1dFUiAuLi4g
TVNSX0YxNUhfQ1VfTUFYX1BPV0VSOgorICAgIGNhc2UgTVNSX0FNRF9SQVBM
X1BPV0VSX1VOSVQgLi4uIE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVM6CiAg
ICAgICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8KICAgICAgICAg
Z290byBncF9mYXVsdDsKIApkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tc3ItaW5kZXguaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWlu
ZGV4LmgKaW5kZXggMGViNjg1NTYxNC4uYmE5ZTkwYWYyMSAxMDA2NDQKLS0t
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC05Niw2ICs5NiwzOCBA
QAogLyogTG93ZXIgNiBiaXRzIGRlZmluZSB0aGUgZm9ybWF0IG9mIHRoZSBh
ZGRyZXNzIGluIHRoZSBMQlIgc3RhY2sgKi8KICNkZWZpbmUgTVNSX0lBMzJf
UEVSRl9DQVBfTEJSX0ZPUk1BVAkweDNmCiAKKy8qCisgKiBJbnRlbCBSdW50
aW1lIEF2ZXJhZ2UgUG93ZXIgTGltaXRpbmcgKFJBUEwpIGludGVyZmFjZS4g
IFBvd2VyIHBsYW5lIGJhc2UKKyAqIGFkZHJlc3NlcyAoTVNSXypfUE9XRVJf
TElNSVQpIGFyZSBtb2RlbCBzcGVjaWZpYywgYnV0IGhhdmUgc28tZmFyIGJl
ZW4KKyAqIGNvbnNpc3RlbnQgc2luY2UgdGhlaXIgaW50cm9kdWN0aW9uIGlu
IFNhbmR5QnJpZGdlLgorICoKKyAqIE9mZnNldHMgb2YgZnVuY3Rpb25hbGl0
eSBmcm9tIHRoZSBwb3dlciBwbGFuZSBiYXNlIGlzIGFyY2hpdGVjdHVyYWws
IGJ1dAorICogbm90IGFsbCBwb3dlciBwbGFuZXMgc3VwcG9ydCBhbGwgZnVu
Y3Rpb25hbGl0eS4KKyAqLworI2RlZmluZSBNU1JfUkFQTF9QT1dFUl9VTklU
CQkweDAwMDAwNjA2CisKKyNkZWZpbmUgTVNSX1BLR19QT1dFUl9MSU1JVAkJ
MHgwMDAwMDYxMAorI2RlZmluZSBNU1JfUEtHX0VORVJHWV9TVEFUVVMJCTB4
MDAwMDA2MTEKKyNkZWZpbmUgTVNSX1BLR19QRVJGX1NUQVRVUwkJMHgwMDAw
MDYxMworI2RlZmluZSBNU1JfUEtHX1BPV0VSX0lORk8JCTB4MDAwMDA2MTQK
KworI2RlZmluZSBNU1JfRFJBTV9QT1dFUl9MSU1JVAkJMHgwMDAwMDYxOAor
I2RlZmluZSBNU1JfRFJBTV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjE5Cisj
ZGVmaW5lIE1TUl9EUkFNX1BFUkZfU1RBVFVTCQkweDAwMDAwNjFiCisjZGVm
aW5lIE1TUl9EUkFNX1BPV0VSX0lORk8JCTB4MDAwMDA2MWMKKworI2RlZmlu
ZSBNU1JfUFAwX1BPV0VSX0xJTUlUCQkweDAwMDAwNjM4CisjZGVmaW5lIE1T
Ul9QUDBfRU5FUkdZX1NUQVRVUwkJMHgwMDAwMDYzOQorI2RlZmluZSBNU1Jf
UFAwX1BPTElDWQkJCTB4MDAwMDA2M2EKKworI2RlZmluZSBNU1JfUFAxX1BP
V0VSX0xJTUlUCQkweDAwMDAwNjQwCisjZGVmaW5lIE1TUl9QUDFfRU5FUkdZ
X1NUQVRVUwkJMHgwMDAwMDY0MQorI2RlZmluZSBNU1JfUFAxX1BPTElDWQkJ
CTB4MDAwMDA2NDIKKworLyogSW50ZWwgUGxhdGZvcm0td2lkZSBwb3dlciBp
bnRlcmZhY2UuICovCisjZGVmaW5lIE1TUl9QTEFURk9STV9FTkVSR1lfQ09V
TlRFUgkweDAwMDAwNjRkCisjZGVmaW5lIE1TUl9QTEFURk9STV9QT1dFUl9M
SU1JVAkweDAwMDAwNjVjCisKICNkZWZpbmUgTVNSX0lBMzJfQk5EQ0ZHUwkJ
MHgwMDAwMGQ5MAogI2RlZmluZSBJQTMyX0JORENGR1NfRU5BQkxFCQkweDAw
MDAwMDAxCiAjZGVmaW5lIElBMzJfQk5EQ0ZHU19QUkVTRVJWRQkJMHgwMDAw
MDAwMgpAQCAtMjM2LDYgKzI2OCw4IEBACiAjZGVmaW5lIE1TUl9LOF9WTV9D
UgkJCTB4YzAwMTAxMTQKICNkZWZpbmUgTVNSX0s4X1ZNX0hTQVZFX1BBCQkw
eGMwMDEwMTE3CiAKKyNkZWZpbmUgTVNSX0YxNUhfQ1VfUE9XRVIJCTB4YzAw
MTAwN2EKKyNkZWZpbmUgTVNSX0YxNUhfQ1VfTUFYX1BPV0VSCQkweGMwMDEw
MDdiCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX0VWTlRTRUwwCQkweGMwMDEw
MjAwCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX1BFUkZDVFIwCQkweGMwMDEw
MjAxCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX0VWTlRTRUwxCQkweGMwMDEw
MjAyCkBAIC0yNDksNiArMjgzLDEwIEBACiAjZGVmaW5lIE1TUl9BTURfRkFN
MTVIX0VWTlRTRUw1CQkweGMwMDEwMjBhCiAjZGVmaW5lIE1TUl9BTURfRkFN
MTVIX1BFUkZDVFI1CQkweGMwMDEwMjBiCiAKKyNkZWZpbmUgTVNSX0FNRF9S
QVBMX1BPV0VSX1VOSVQJCTB4YzAwMTAyOTkKKyNkZWZpbmUgTVNSX0FNRF9D
T1JFX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5YQorI2RlZmluZSBNU1JfQU1E
X1BLR19FTkVSR1lfU1RBVFVTCTB4YzAwMTAyOWIKKwogI2RlZmluZSBNU1Jf
QU1EX0w3UzBfRkVBVFVSRV9NQVNLCTB4YzAwMTEwMDIKICNkZWZpbmUgTVNS
X0FNRF9USFJNX0ZFQVRVUkVfTUFTSwkweGMwMDExMDAzCiAjZGVmaW5lIE1T
Ul9LOF9GRUFUVVJFX01BU0sJCTB4YzAwMTEwMDQK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.14-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCBkNzJhYjBmYTFmLi4zZGIyNmZhZjA4IDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yNDUsNiArMjQ1LDI1IEBAIGludCBndWVzdF9yZG1z
cihzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFs
KQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19lbmFibGVz
LnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAgICAgICAg
ICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBU
aGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2UgdGhlIFBl
bnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRvcnMuCisg
ICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2YgV2luZG93
cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBhICNHUAor
ICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVyYWwgdW5n
dWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAgICAqIFJB
WiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1bGwgYWNj
ZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BFUkZfU1RB
VFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlm
ICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2
X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBsaWtlbHko
IWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2FmZShtc3Is
ICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4u
IE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFp
bihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKQEAgLTM0Myw2ICszNjIsNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0
IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgIGNh
c2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNhc2UgTVNS
X0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJDSF9DQVBB
QklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRVUzoKICAg
ICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVEVTVF9DVFJM
OgogICAgIGNhc2UgTVNSX1RTWF9GT1JDRV9BQk9SVDoKQEAgLTQ1NCw2ICs0
NzQsMjEgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50
MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICAgICAgYnJlYWs7CiAgICAg
fQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgTVNSIGlzIG5vdCBl
bnVtZXJhdGVkIGluIENQVUlELiAgSXQgaGFzIGJlZW4gYXJvdW5kIHNpbmNl
IHRoZQorICAgICAgICAgKiBQZW50aXVtIDQsIGFuZCBpbXBsZW1lbnRlZCBi
eSBvdGhlciB2ZW5kb3JzLgorICAgICAgICAgKgorICAgICAgICAgKiBUbyBt
YXRjaCB0aGUgUkFaIHNlbWFudGljcywgaW1wbGVtZW50IGFzIHdyaXRlLWRp
c2NhcmQsIGV4Y2VwdCBmb3IKKyAgICAgICAgICogYSBjcHVmcmVxIGNvbnRy
b2xsZXIgZG9tMCB3aGljaCBoYXMgZnVsbCBhY2Nlc3MuCisgICAgICAgICAq
LworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlmICgg
IShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2X1ZF
TkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVsdDsK
KworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250cm9sbGVy
KGQpKSB8fCB3cm1zcl9zYWZlKG1zciwgdmFsKSA9PSAwICkKKyAgICAgICAg
ICAgIGJyZWFrOworICAgICAgICBnb3RvIGdwX2ZhdWx0OworCiAgICAgY2Fz
ZSBNU1JfWDJBUElDX0ZJUlNUIC4uLiBNU1JfWDJBUElDX0xBU1Q6CiAgICAg
ICAgIGlmICggIWlzX2h2bV9kb21haW4oZCkgfHwgdiAhPSBjdXJyICkKICAg
ICAgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CmRpZmYgLS1naXQgYS94ZW4vYXJj
aC94ODYvcHYvZW11bC1wcml2LW9wLmMgYi94ZW4vYXJjaC94ODYvcHYvZW11
bC1wcml2LW9wLmMKaW5kZXggODVhOWZkNDc2Ny4uNWM3YjkxMTdhZSAxMDA2
NDQKLS0tIGEveGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYwpAQCAtODIwLDEyICs4
MjAsNiBAQCBzdGF0aWMgaW5saW5lIHVpbnQ2NF90IGd1ZXN0X21pc2NfZW5h
YmxlKHVpbnQ2NF90IHZhbCkKICAgICByZXR1cm4gdmFsOwogfQogCi1zdGF0
aWMgaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9jb250cm9sbGVyKGNvbnN0IHN0
cnVjdCBkb21haW4gKmQpCi17Ci0gICAgcmV0dXJuICgoY3B1ZnJlcV9jb250
cm9sbGVyID09IEZSRVFDVExfZG9tMF9rZXJuZWwpICYmCi0gICAgICAgICAg
ICBpc19oYXJkd2FyZV9kb21haW4oZCkpOwotfQotCiBzdGF0aWMgaW50IHJl
YWRfbXNyKHVuc2lnbmVkIGludCByZWcsIHVpbnQ2NF90ICp2YWwsCiAgICAg
ICAgICAgICAgICAgICAgIHN0cnVjdCB4ODZfZW11bGF0ZV9jdHh0ICpjdHh0
KQogewpAQCAtMTA3MCwxNCArMTA2NCw2IEBAIHN0YXRpYyBpbnQgd3JpdGVf
bXNyKHVuc2lnbmVkIGludCByZWcsIHVpbnQ2NF90IHZhbCwKICAgICAgICAg
ICAgIHJldHVybiBYODZFTVVMX09LQVk7CiAgICAgICAgIGJyZWFrOwogCi0g
ICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKLSAgICAgICAgaWYgKCBib290
X2NwdV9kYXRhLng4Nl92ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCi0g
ICAgICAgICAgICBicmVhazsKLSAgICAgICAgaWYgKCBsaWtlbHkoIWlzX2Nw
dWZyZXFfY29udHJvbGxlcihjdXJyZCkpIHx8Ci0gICAgICAgICAgICAgd3Jt
c3Jfc2FmZShyZWcsIHZhbCkgPT0gMCApCi0gICAgICAgICAgICByZXR1cm4g
WDg2RU1VTF9PS0FZOwotICAgICAgICBicmVhazsKLQogICAgIGNhc2UgTVNS
X0lBMzJfVEhFUk1fQ09OVFJPTDoKICAgICBjYXNlIE1TUl9JQTMyX0VORVJH
WV9QRVJGX0JJQVM6CiAgICAgICAgIGlmICggYm9vdF9jcHVfZGF0YS54ODZf
dmVuZG9yICE9IFg4Nl9WRU5ET1JfSU5URUwgKQpkaWZmIC0tZ2l0IGEveGVu
L2luY2x1ZGUveGVuL3NjaGVkLmggYi94ZW4vaW5jbHVkZS94ZW4vc2NoZWQu
aAppbmRleCBhMGQ4N2VmOWQwLi45N2JhOGUwNzk1IDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaApAQCAtMTA3MSw2ICsxMDcxLDIyIEBAIGV4dGVybiBlbnVtIGNw
dWZyZXFfY29udHJvbGxlciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVRQ1RM
X2RvbTBfa2VybmVsLCBGUkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRyb2xs
ZXI7CiAKK3N0YXRpYyBhbHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9j
b250cm9sbGVyKGNvbnN0IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAgLyoK
KyAgICAgKiBBIFBWIGRvbTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUgY3B1
ZnJlcSBjb250cm9sbGVyLCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICogWGVu
J3MgY3B1ZnJlcSBkcml2ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0cyBk
aXJlY3QgYWNjZXNzIHRvIGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAgICAq
CisgICAgICogVGhpcyBpbnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRvbTAg
aXMgaWRlbnRpdHkgcGlubmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAgKiBu
dW1iZXIgb2YgdkNQVXMgYXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAgICAg
KgorICAgICAqIEl0IHdvdWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZpcnR1
YWxpc2UgdGhlIGludGVyZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4gKGlz
X3B2X2RvbWFpbihkKSAmJiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYKKyAg
ICAgICAgICAgIGNwdWZyZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2RvbTBf
a2VybmVsKTsKK30KKwogaW50IGNwdXBvb2xfbW92ZV9kb21haW4oc3RydWN0
IGRvbWFpbiAqZCwgc3RydWN0IGNwdXBvb2wgKmMpOwogaW50IGNwdXBvb2xf
ZG9fc3lzY3RsKHN0cnVjdCB4ZW5fc3lzY3RsX2NwdXBvb2xfb3AgKm9wKTsK
IGludCBjcHVwb29sX2dldF9pZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.14-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCAz
ZGIyNmZhZjA4Li5hYTEwNzgyM2FjIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xODUsNiAr
MTg1LDEzIEBAIGludCBndWVzdF9yZG1zcihzdHJ1Y3QgdmNwdSAqdiwgdWlu
dDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNSX1RTWF9D
VFJMOgogICAgIGNhc2UgTVNSX01DVV9PUFRfQ1RSTDoKICAgICBjYXNlIE1T
Ul9SVElUX09VVFBVVF9CQVNFIC4uLiBNU1JfUlRJVF9BRERSX0IoNyk6Cisg
ICAgY2FzZSBNU1JfUkFQTF9QT1dFUl9VTklUOgorICAgIGNhc2UgTVNSX1BL
R19QT1dFUl9MSU1JVCAgLi4uIE1TUl9QS0dfUE9XRVJfSU5GTzoKKyAgICBj
YXNlIE1TUl9EUkFNX1BPV0VSX0xJTUlUIC4uLiBNU1JfRFJBTV9QT1dFUl9J
TkZPOgorICAgIGNhc2UgTVNSX1BQMF9QT1dFUl9MSU1JVCAgLi4uIE1TUl9Q
UDBfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BQMV9QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QUDFfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX0VORVJH
WV9DT1VOVEVSOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX1BPV0VSX0xJTUlU
OgogICAgIGNhc2UgTVNSX1VfQ0VUOgogICAgIGNhc2UgTVNSX1NfQ0VUOgog
ICAgIGNhc2UgTVNSX1BMMF9TU1AgLi4uIE1TUl9JTlRFUlJVUFRfU1NQX1RB
QkxFOgpAQCAtMTkyLDYgKzE5OSw4IEBAIGludCBndWVzdF9yZG1zcihzdHJ1
Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAg
IGNhc2UgTVNSX0FNRDY0X0xXUF9DQkFERFI6CiAgICAgY2FzZSBNU1JfUFBJ
Tl9DVEw6CiAgICAgY2FzZSBNU1JfUFBJTjoKKyAgICBjYXNlIE1TUl9GMTVI
X0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAgY2Fz
ZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0dfRU5F
UkdZX1NUQVRVUzoKICAgICBjYXNlIE1TUl9BTURfUFBJTl9DVEw6CiAgICAg
Y2FzZSBNU1JfQU1EX1BQSU46CiAgICAgICAgIC8qIE5vdCBvZmZlcmVkIHRv
IGd1ZXN0cy4gKi8KQEAgLTM2OSw2ICszNzgsMTMgQEAgaW50IGd1ZXN0X3dy
bXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZh
bCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAgICBjYXNlIE1TUl9NQ1Vf
T1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9PVVRQVVRfQkFTRSAuLi4g
TVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2UgTVNSX1JBUExfUE9XRVJf
VU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBNU1Jf
UEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1J
VCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9QUDBf
UE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNlIE1T
Ul9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAgICBj
YXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNlIE1T
Ul9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBjYXNlIE1TUl9VX0NFVDoK
ICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNlIE1TUl9QTDBfU1NQIC4u
LiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKQEAgLTM3Niw2ICszOTIsOCBA
QCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1z
ciwgdWludDY0X3QgdmFsKQogICAgIGNhc2UgTVNSX0FNRDY0X0xXUF9DQkFE
RFI6CiAgICAgY2FzZSBNU1JfUFBJTl9DVEw6CiAgICAgY2FzZSBNU1JfUFBJ
TjoKKyAgICBjYXNlIE1TUl9GMTVIX0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9D
VV9NQVhfUE9XRVI6CisgICAgY2FzZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5J
VCAuLi4gTVNSX0FNRF9QS0dfRU5FUkdZX1NUQVRVUzoKICAgICBjYXNlIE1T
Ul9BTURfUFBJTl9DVEw6CiAgICAgY2FzZSBNU1JfQU1EX1BQSU46CiAgICAg
ICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8KZGlmZiAtLWdpdCBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmggYi94ZW4vaW5jbHVk
ZS9hc20teDg2L21zci1pbmRleC5oCmluZGV4IDBmZTk4YWY5MjMuLjVlNjRl
Y2ZmOTEgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWlu
ZGV4LmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaApA
QCAtNzcsNiArNzcsMzggQEAKICNkZWZpbmUgTVNSX1JUSVRfQUREUl9BKG4p
ICAgICAgICAgICAgICAgICAoMHgwMDAwMDU4MCArIChuKSAqIDIpCiAjZGVm
aW5lIE1TUl9SVElUX0FERFJfQihuKSAgICAgICAgICAgICAgICAgKDB4MDAw
MDA1ODEgKyAobikgKiAyKQogCisvKgorICogSW50ZWwgUnVudGltZSBBdmVy
YWdlIFBvd2VyIExpbWl0aW5nIChSQVBMKSBpbnRlcmZhY2UuICBQb3dlciBw
bGFuZSBiYXNlCisgKiBhZGRyZXNzZXMgKE1TUl8qX1BPV0VSX0xJTUlUKSBh
cmUgbW9kZWwgc3BlY2lmaWMsIGJ1dCBoYXZlIHNvLWZhciBiZWVuCisgKiBj
b25zaXN0ZW50IHNpbmNlIHRoZWlyIGludHJvZHVjdGlvbiBpbiBTYW5keUJy
aWRnZS4KKyAqCisgKiBPZmZzZXRzIG9mIGZ1bmN0aW9uYWxpdHkgZnJvbSB0
aGUgcG93ZXIgcGxhbmUgYmFzZSBpcyBhcmNoaXRlY3R1cmFsLCBidXQKKyAq
IG5vdCBhbGwgcG93ZXIgcGxhbmVzIHN1cHBvcnQgYWxsIGZ1bmN0aW9uYWxp
dHkuCisgKi8KKyNkZWZpbmUgTVNSX1JBUExfUE9XRVJfVU5JVCAgICAgICAg
ICAgICAgICAgMHgwMDAwMDYwNgorCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJf
TElNSVQgICAgICAgICAgICAgICAgIDB4MDAwMDA2MTAKKyNkZWZpbmUgTVNS
X1BLR19FTkVSR1lfU1RBVFVTICAgICAgICAgICAgICAgMHgwMDAwMDYxMQor
I2RlZmluZSBNU1JfUEtHX1BFUkZfU1RBVFVTICAgICAgICAgICAgICAgICAw
eDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJfSU5GTyAgICAgICAg
ICAgICAgICAgIDB4MDAwMDA2MTQKKworI2RlZmluZSBNU1JfRFJBTV9QT1dF
Ul9MSU1JVCAgICAgICAgICAgICAgICAweDAwMDAwNjE4CisjZGVmaW5lIE1T
Ul9EUkFNX0VORVJHWV9TVEFUVVMgICAgICAgICAgICAgIDB4MDAwMDA2MTkK
KyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMgICAgICAgICAgICAgICAg
MHgwMDAwMDYxYgorI2RlZmluZSBNU1JfRFJBTV9QT1dFUl9JTkZPICAgICAg
ICAgICAgICAgICAweDAwMDAwNjFjCisKKyNkZWZpbmUgTVNSX1BQMF9QT1dF
Ul9MSU1JVCAgICAgICAgICAgICAgICAgMHgwMDAwMDYzOAorI2RlZmluZSBN
U1JfUFAwX0VORVJHWV9TVEFUVVMgICAgICAgICAgICAgICAweDAwMDAwNjM5
CisjZGVmaW5lIE1TUl9QUDBfUE9MSUNZICAgICAgICAgICAgICAgICAgICAg
IDB4MDAwMDA2M2EKKworI2RlZmluZSBNU1JfUFAxX1BPV0VSX0xJTUlUICAg
ICAgICAgICAgICAgICAweDAwMDAwNjQwCisjZGVmaW5lIE1TUl9QUDFfRU5F
UkdZX1NUQVRVUyAgICAgICAgICAgICAgIDB4MDAwMDA2NDEKKyNkZWZpbmUg
TVNSX1BQMV9QT0xJQ1kgICAgICAgICAgICAgICAgICAgICAgMHgwMDAwMDY0
MgorCisvKiBJbnRlbCBQbGF0Zm9ybS13aWRlIHBvd2VyIGludGVyZmFjZS4g
Ki8KKyNkZWZpbmUgTVNSX1BMQVRGT1JNX0VORVJHWV9DT1VOVEVSICAgICAg
ICAgMHgwMDAwMDY0ZAorI2RlZmluZSBNU1JfUExBVEZPUk1fUE9XRVJfTElN
SVQgICAgICAgICAgICAweDAwMDAwNjVjCisKICNkZWZpbmUgTVNSX1VfQ0VU
ICAgICAgICAgICAgICAgICAgICAgICAgICAgMHgwMDAwMDZhMAogI2RlZmlu
ZSBNU1JfU19DRVQgICAgICAgICAgICAgICAgICAgICAgICAgICAweDAwMDAw
NmEyCiAjZGVmaW5lICBDRVRfU0hTVEtfRU4gICAgICAgICAgICAgICAgICAg
ICAgIChfQUMoMSwgVUxMKSA8PCAgMCkKQEAgLTkyLDYgKzEyNCwxMyBAQAog
I2RlZmluZSAgUEFTSURfUEFTSURfTUFTSyAgICAgICAgICAgICAgICAgICAw
eDAwMGZmZmZmCiAjZGVmaW5lICBQQVNJRF9WQUxJRCAgICAgICAgICAgICAg
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAzMSkKIAorI2RlZmluZSBNU1Jf
RjE1SF9DVV9QT1dFUiAgICAgICAgICAgICAgICAgICAweGMwMDEwMDdhCisj
ZGVmaW5lIE1TUl9GMTVIX0NVX01BWF9QT1dFUiAgICAgICAgICAgICAgIDB4
YzAwMTAwN2IKKworI2RlZmluZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAg
ICAgICAgICAgICAweGMwMDEwMjk5CisjZGVmaW5lIE1TUl9BTURfQ09SRV9F
TkVSR1lfU1RBVFVTICAgICAgICAgIDB4YzAwMTAyOWEKKyNkZWZpbmUgTVNS
X0FNRF9QS0dfRU5FUkdZX1NUQVRVUyAgICAgICAgICAgMHhjMDAxMDI5Ygor
CiAvKgogICogTGVnYWN5IE1TUiBjb25zdGFudHMgaW4gbmVlZCBvZiBjbGVh
bnVwLiAgTm8gbmV3IE1TUnMgYmVsb3cgdGhpcyBjb21tZW50LgogICovCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 18:51:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 18:51:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23805.50785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYk3-0003jy-BU; Tue, 10 Nov 2020 18:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23805.50785; Tue, 10 Nov 2020 18:51:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYk3-0003jr-8K; Tue, 10 Nov 2020 18:51:43 +0000
Received: by outflank-mailman (input) for mailman id 23805;
 Tue, 10 Nov 2020 18:51:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hJ2u=EQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcYk1-0003jl-PL
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:51:41 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a889c701-fdee-4cf5-8140-dd9cb0268063;
 Tue, 10 Nov 2020 18:51:40 +0000 (UTC)
X-Inumbo-ID: a889c701-fdee-4cf5-8140-dd9cb0268063
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605034300;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=0P6BlN8n/8fhTncKKRPXaKVHd+tjYWSua7+ozKfPa80=;
  b=QetTIpN3BSW4tRPa9qogqRS4wNsS0lTUBDWyHCASK/JSmqClIBiZwv++
   4nGecsPPHqnLD/bMdMAGhyHCmYNmO7mb4KZBpWu4AQwD4efRJ08YzotID
   OPzpHARHk4Px6TyskIP/l8m87ssPDHeTt2GVh0HRgTeSuL+4SJRX1MvsO
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: o17djuEm1RRlhW49kXB4Avl8jukOcbz0XEhvBFu6/qsTfYnb34wTy1KSM3OaH95Rk667b84q74
 l81XlFX0utWJ6kpseFje0VdOpZp7kPTVUflePAg2v9Ri7S1PT/bvCXhYDAFP3qfDMLzsduu1Sb
 xvnt4GmBx1K6z20fLJRlf8PzDU5yd/MAuyBKzQHHrNwlP3io/BXL5gXlIH3ams7BTTa0ofH/1m
 ieiAQ67imvTb2BHigUh9ASiGo8HWUKSeGYeAdvI7pdNp18+MJVBvRPXk7gh+P32Sk32mz3YDdR
 Udw=
X-SBRS: None
X-MesageID: 30888593
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,467,1596513600"; 
   d="scan'208";a="30888593"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kTbNWarrxeOz001mIWs5lLmekFL+sUz+CsRW0Eb+opNsV/n/P+u/xDKz5dk0umn+6nhYERW1oUb++ZGVNciFBOD+0beOYLMLuTXTGavvCsDmOqpcuFIfoaTHmN0ypmAdz279lu70rI5VYg7qFY109teyJ5b/y5WYzJLFjQo2ZiVAeibLMT4wW1ABC1cXISlTFRg7lOKth+VnogduGuWtL5k2KwB4SEEsezowhH4vG1IpIrXg5JG/MA9Kqn4LgqHzusK1ALPQsv0+LXgiK0UJ+X1rfONSUjmHszCnjss1g0clsdvqNl5PzoNDtzRPS8zPXV5yqwsExzgDhZUtCXXHuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8YHXVbQzTETfPdfk2vrhXNMgP+PXS8KsCn+gPtsESqI=;
 b=PeMLWKFsVFl3+mVuyumBrZ+r+R2YlzDKIXLoRAPBJjCoCp2KpDA7PTq5spYiMtAgZu9R2YPpd9PWs1RMODGH3Ht8IUskOr2I4MbRN9jXGBkZ93NBldjQY1K/zrLRcowYGqgTQSNsaDG89sUyBliwuWfYfZy3B8oqzt4Ko3WjB3b4X4WJqUy5P7uxkI7kozEOfPn8bIJNAtZgi3PBuDmBM4zHZHRN3r0WYpf6M2q3rEbzGbIYSC0C6aapoGF+cFBGmYDaHr1y/Ek5CRrw0AY756Du2NsKz5KeTdM0UnVKg1ty2mejYvNLHa9YBhHBTTTSyU6lkMRnf286I2pNZH4Q1w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8YHXVbQzTETfPdfk2vrhXNMgP+PXS8KsCn+gPtsESqI=;
 b=t9xhyWI7JP3tGcHXH2NifljWsfjoQsDpeTQIAMey2UgdeyuAlZq9EzpjovQPy9pyu8XnXci4YgYBFyf0dhwd696SILyV1S1zCMZNc5lRWqveUcQJk3WnIv2RaYeH0XKPERsS564tJVruczJHd3EF+6ukkN6iotk+lbpE7JjHvJk=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] docs: fix documentation about default scheduler
Date: Tue, 10 Nov 2020 19:51:29 +0100
Message-ID: <20201110185129.5951-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0068.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 324a52ff-fe34-452a-cb82-08d885a9a614
X-MS-TrafficTypeDiagnostic: DM6PR03MB3579:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB35791762AE47DABE5E5D03878FE90@DM6PR03MB3579.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: KWPZa/JkDcc+DdTdqgv/WEal2jsJcofeY4SBfkep00AhXQSWuvnSS+vFwEG4j35bljllY91vJwghRyuGTqX0FxFn1aqZEdFupAxdSQO5/FQEhMMDIzvh+H+/DmIkuUR4/Z18fMQz0mGGr82N7N8NbHZuJmGL0qUfi9y96wgLnuQx0CVgg78vBjdpNhmF7NXmEuroGOpc3TPvv6jMZL/0XUVBLxRij2gTm6XFtpmd4vpnRyPpZ/Wupaer5fbA93jFEAEVuiLzAy1aD6tDX1HJK1A5fRpYUqbO4Zt3xiEFIzHALTO3yQHZQVEKu5KQvju7
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(39860400002)(366004)(83380400001)(6496006)(186003)(2906002)(956004)(8936002)(2616005)(66476007)(66556008)(316002)(6666004)(1076003)(86362001)(6486002)(4326008)(66946007)(5660300002)(6916009)(36756003)(26005)(8676002)(478600001)(54906003)(16526019);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: dl/LrfQEFZxLik3NabR8y8UtpI5X3yYZEYR1QiUEJ110JLHbEZsZwAD17mvZRwAZkgVXVUio7aj+uCMhJg1kcLOufmb7OB4jwcLDdmHMMpBfEJinw22g5e4MGwrMuRV1r2+lfz+KqUK63zsoxE/ghFbcKOjTcr3k6BfjK/6s4QI6jZ5AL+RWuzE2knt1DsexzU2DmNFVgv4BYwjJz9jw4mkErOV7sVkYhnMzu5QF2tIMYkia9HjOWtnPSiJhFghWqlka9AB2+o3hA+DMwhuDLsVOLf3aNmvk3vvJk9NJ+z03OzVh+o/p/cweWGkuNEiqBKx91IEU67E2BgoMffES5HSWUSBvvTwDGZOHFYraFG34iOJrZVM53tqyi/YcXTR1AvHKBUWu3zO9Ee8ojeTnRIMCkplB2RajZfLnuAzxUywN02Pvr+Oqitxlu1iTyymxDNfc3Agvn8NEpc4s9Mh1ZTPzeah0V0XdWLUVQRrysMeXTJM0lt38/4UsUeE0NKvspB2BqqcLpULFSZTLVdr/B9J3gBtDCI6pateDO+eVF8RcQhV5+7SN04kpHGmFv+7G4PtbuvjFPcBvWS0gxpf5Lp6W0UdpYTxyw5i+TfSoqjD2ymnJyNbU/HndZXFo+N/RDeXd6qKamGF9XMFMon9L8NPaRB2O06mjKg3Kx6KTLe3YsF+uEf1kuh0OwIBJ+07Cv7xHYw8Z28WFumdT5HKyN4HP+F4lOMqUheVojakQl0S5noY7Tn8lQzWn9iF6RojIkGn9IgVnw9CHEme4nnUAWloOYWhzh09GMwBOJPoTN3V7sFb4UpRuHr10oq5v1c6LsGQPu7Fek35VTKWdKv7w6ksHV0GZc1fOQE9PKAlLZi0+kltHUhf7Tcs7PDZH8aTeECVWhGrxUqXmYtqIEhbWQA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 324a52ff-fe34-452a-cb82-08d885a9a614
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Nov 2020 18:51:35.7423
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /EQlvZ0C15LTNEXTAoRZWslypuLCRUzuLPwHf5mCDsYm2IhzwYpTkaRrmeBxguvFj3wP0yGBSvU63l5ejpbILw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3579
X-OriginatorOrg: citrix.com

Fix the command line documentation to account for the default scheduler
likely no longer being credit, and for the fact that it is selectable
via Kconfig, so different builds could end up with different default
schedulers.

Fixes: dafd936dddbd ('Make credit2 the default scheduler')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Point out that the default scheduler is selected via Kconfig; don't
   mention the default Kconfig selection.
---
 docs/misc/xen-command-line.pandoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..eb1db25f92 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1876,7 +1876,7 @@ with read and write permissions.
 ### sched
 > `= credit | credit2 | arinc653 | rtds | null`
 
-> Default: `sched=credit`
+> Default: selectable via Kconfig.  Depends on enabled schedulers.
 
 Choose the default scheduler.
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 19:00:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 19:00:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23824.50815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYsY-0004sa-OJ; Tue, 10 Nov 2020 19:00:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23824.50815; Tue, 10 Nov 2020 19:00:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYsY-0004sT-Kt; Tue, 10 Nov 2020 19:00:30 +0000
Received: by outflank-mailman (input) for mailman id 23824;
 Tue, 10 Nov 2020 19:00:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>) id 1kcYsX-0004sH-Br
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:00:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kcYsX-0007eJ-Ar; Tue, 10 Nov 2020 19:00:29 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-devel@lists.xenproject.org
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 351 v1 - Information leak via power sidechannel
Message-Id: <E1kcYsX-0007eJ-Ar@xenbits.xenproject.org>
Date: Tue, 10 Nov 2020 19:00:29 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

(Copy of advisory)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-351

                 Information leak via power sidechannel

ISSUE DESCRIPTION
=================

Researchers have demonstrated using software power/energy monitoring
interfaces to create covert channels, and infer the operations/data used
by other contexts within the system.

Access to these interfaces should be restricted to privileged software,
but it was found that Xen doesn't restrict access suitably, and the
interfaces are accessible to all guests.

For more information, see:
  https://platypusattack.com
  https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00389.html

IMPACT
======

An unprivileged guest administrator can sample platform power/energy
data.  This may be used to infer the operations/data used by other
contexts within the system.

The research demonstrates using this sidechannel to leak the AES keys
used elsewhere in the system.

VULNERABLE SYSTEMS
==================

Power/energy monitoring interfaces are platform and architecture
specific.  Consult your hardware vendor to ascertain what power feedback
interfaces are available.

For ARM systems, all versions of Xen are vulnerable.  The fix restricts
access to the AMU (Activity Monitors Unit) interface, introduced in
Armv8.4.

For x86 systems, Xen 4.14 and earlier are vulnerable - master is not
vulnerable, as these issues have been addressed in a more general
fashion.

The x86 fixes restrict access to:
 * Intel RAPL interface, introduced in SandyBridge CPUs.
 * Intel platform energy interface.
 * Intel perf_ctl interface, introduced in Pentium 4 CPUs and also
   implemented by other vendors.
 * AMD RAPL interface, introduced in Ryzen/EPYC CPUs.
 * AMD compute unit energy interface, present in Fam15/16 CPUs.

MITIGATION
==========

There are no mitigations available.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa351-arm.patch             Xen unstable - 4.10.x [ARM]
xsa351-x86-4.14-?.patch      Xen 4.14.x            [x86]
xsa351-x86-4.13-?.patch      Xen 4.13.x            [x86]
xsa351-x86-4.12-?.patch      Xen 4.12.x            [x86]
xsa351-x86-4.11-?.patch      Xen 4.11.x - 4.10.x   [x86]

$ sha256sum xsa351*
cad287981a870f13484834fa2364ffee68178517e906f55d2889304a4a9eae06  xsa351.meta
70ebd0e93af240af2680374dcfd8ff4a5dd3eefccf670f1cb9b546d763d6a554  xsa351-arm.patch
49b52a1366912a29e184e3014a9f1f579e8a0dd8a36f01d38d995d2c8ed81928  xsa351-arm-4.11.patch
2e7b7c2b98625d70c8b10047a9f668372f3ccede167344dedb712312606acbca  xsa351-x86-4.11-1.patch
ab9e2cb7d5e3e0c3a916f006c697495f4f01146e09df60ece59ce0a8f7aa5ed0  xsa351-x86-4.11-2.patch
bb68f6e6905bc1566156cafab058cbaf02a17c197385c33a83b7f73885913c1c  xsa351-x86-4.12-1.patch
53f464269f59498f8a9a614f10a47cfb1d81c666f0d684346e28005015de962c  xsa351-x86-4.12-2.patch
67a29d66230faafd9a8047ac80ec18130b5659e80a38c3a412cb2be6d3288a8f  xsa351-x86-4.13-1.patch
f7d8717dec33ee7484b36490402d113f1e7e168e7541bcf193fef620df299f08  xsa351-x86-4.13-2.patch
7d4fbe11a766226d7f1b93c5bf34664d8855deee09d1feebc76f11e49f2aa9c9  xsa351-x86-4.14-1.patch
41df825deafe3ef28e8594ec956033689af69f84a4a6dd92f97d1071e925203d  xsa351-x86-4.14-2.patch
$

NOTE REGARDING LACK OF EMBARGO
==============================

Despite an attempt to organise predisclosure, the discoverers ultimately
did not authorise a predisclosure.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+q1WwMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZANkH+wf8pft4t9KoC9HFxd96DfCjZ+FQnD0hMp+890cY
ztNJM4+o+SBP2ytEMZLIoN1oJeTSQqyNgQh2sXNm7/WpseklOTR6s8zw4LWATEfz
rqF8G2xIN8ka7AAqAwOzkzj6qlxuWbiXKm4ENd5ocRxVvF1A2PYyEX88uCPgmupg
dqfufhYQF7hrz8VKDRDYtLsMrRaIFCWqGdOdQfVF64pHGHLvGZkANGN8yva8mBfC
uavwvX+O3CdVMENS4AA3TNo6p2nnWp1iQJCiBwLGCRbTQaRtRucV4Q/eSLC3pHLp
NO26OxieT4tLJN7Ox4ex43KZIsyweZSaUl18rfg0J8MB3FM=
=/6Fo
-----END PGP SIGNATURE-----
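For illustration, the kind of read the x86 fixes now block can be sketched
against the Intel RAPL package-energy counter (MSR 0x611, per the patch's
MSR_PKG_ENERGY_STATUS define). This is a minimal sketch assuming a Linux
dom0 with root access and the `msr` driver loaded (`modprobe msr`);
`read_msr` and `energy_delta` are hypothetical helper names, not part of
any Xen or advisory tooling:

```python
import os
import struct

# From the Intel RAPL layout added by the xsa351 patch.
MSR_PKG_ENERGY_STATUS = 0x611

def read_msr(msr: int, cpu: int = 0) -> int:
    """Read one 64-bit MSR via the Linux 'msr' driver.

    Requires root and a loaded 'msr' module; raises OSError otherwise.
    """
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        os.lseek(fd, msr, os.SEEK_SET)
        return struct.unpack("<Q", os.read(fd, 8))[0]
    finally:
        os.close(fd)

def energy_delta(before: int, after: int) -> int:
    """RAPL energy counters are 32-bit and wrap; mask the difference."""
    return (after - before) & 0xFFFFFFFF
```

Sampling `read_msr(MSR_PKG_ENERGY_STATUS)` twice and taking
`energy_delta` over a short interval yields the fine-grained power signal
that the PLATYPUS research correlates with computation on other cores,
which is why guest access to these registers is withdrawn.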

--=separator
Content-Type: application/octet-stream; name="xsa351.meta"
Content-Disposition: attachment; filename="xsa351.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTEsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI3OGQ5MDNlOTVlZmM1YjAxNjZiMzkzZDI4OWE2ODdjNjQw
MTZlOGVmIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1MS14ODYtNC4xMS0/LnBh
dGNoIiwKICAgICAgICAgICAgInhzYTM1MS1hcm0tNC4xMS5wYXRjaCIKICAg
ICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6
IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAg
ICAgICJTdGFibGVSZWYiOiAiZTI3NGM4YmRjMTJlYjU5NmU1NTIzMzA0MGU4
YjQ5ZGEyNzE1MGYzMSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNTEteDg2LTQu
MTEtPy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzNTEtYXJtLTQuMTEucGF0
Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAg
IjQuMTIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7
CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjk3YjdiNTU2N2ZiYTY5MThhNjU2
YWQzNDkwNTFiNTM0M2I1ZGVhMmUiLAogICAgICAgICAgIlByZXJlcXMiOiBb
XSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzUx
LXg4Ni00LjEyLT8ucGF0Y2giLAogICAgICAgICAgICAieHNhMzUxLWFybS5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMyI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDA2MGFjMjliY2JkYjc2ZDQ5
ZDJlMjQ4ZGRmY2I3YWZhMjM0NTQ0MCIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NTEteDg2LTQuMTMtPy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzNTEtYXJt
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwK
ICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVu
IjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxMGJiNjNjMjAzZjQyZDkz
MWZhMWZhN2RiYmFlN2NlMTc2NWNlY2YyIiwKICAgICAgICAgICJQcmVyZXFz
IjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhz
YTM1MS14ODYtNC4xNC0/LnBhdGNoIiwKICAgICAgICAgICAgInhzYTM1MS1h
cm0ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNzA1NmYyZjg5ZjAz
ZjJmODA0YWM3ZTc3NmM3YjJiMDAwY2Q3MTZjZCIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCgkgICAgICAgICAg
ICAgICJ4c2EzNTEtYXJtLnBhdGNoIgoJCSAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa351-arm.patch"
Content-Disposition: attachment; filename="xsa351-arm.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KU3ViamVj
dDogeGVuL2FybTogQWx3YXlzIHRyYXAgQU1VIHN5c3RlbSByZWdpc3RlcnMK
ClRoZSBBY3Rpdml0eSBNb25pdG9ycyBVbml0IChBTVUpIGhhcyBiZWVuIGlu
dHJvZHVjZWQgYnkgQVJNdjguNC4gSXQgaXMKY29uc2lkZXJlZCB0byBiZSB1
bnNhZmUgdG8gYmUgZXhwb3NlIHRvIGd1ZXN0cyBhcyB0aGV5IG1pZ2h0IGV4
cG9zZQppbmZvcm1hdGlvbiBhYm91dCBjb2RlIGV4ZWN1dGVkIGJ5IG90aGVy
IGd1ZXN0cyBvciB0aGUgaG9zdC4KCkFybSBwcm92aWRlZCBhIHdheSB0byB0
cmFwIGFsbCB0aGUgQU1VIHN5c3RlbSByZWdpc3RlcnMgYnkgc2V0dGluZwpD
UFRSX0VMMi5UQU0gdG8gMS4KClVuZm9ydHVuYXRlbHksIG9uIG9sZGVyIHJl
dmlzaW9uIG9mIHRoZSBzcGVjaWZpY2F0aW9uLCB0aGUgYml0IDMwIChub3cK
Q1BUUl9FTDEuVEFNKSB3YXMgUkVTMC4gQmVjYXVzZSBvZiB0aGF0LCBYZW4g
aXMgc2V0dGluZyBpdCB0byAwIGFuZAp0aGVyZWZvcmUgdGhlIHN5c3RlbSBy
ZWdpc3RlcnMgd291bGQgYmUgZXhwb3NlZCB0byB0aGUgZ3Vlc3Qgd2hlbiBp
dCBpcwpydW4gb24gcHJvY2Vzc29ycyB3aXRoIEFNVS4KCkFzIHRoZSBiaXQg
aXMgbWFyayBhcyBVTktOT1dOIGF0IGJvb3QgaW4gQXJtdjguNCwgdGhlIG9u
bHkgc2FmZSBzb2x1dGlvbgpmb3IgdXMgaXMgdG8gYWx3YXlzIHNldCBDUFRS
X0VMMS5UQU0gdG8gMS4KCkd1ZXN0IHRyeWluZyB0byBhY2Nlc3MgdGhlIEFN
VSBzeXN0ZW0gcmVnaXN0ZXJzIHdpbGwgbm93IHJlY2VpdmUgYW4KdW5kZWZp
bmVkIGluc3RydWN0aW9uLiBVbmZvcnR1bmF0ZWx5LCB0aGlzIG1lYW5zIHRo
YXQgZXZlbiB3ZWxsLWJlaGF2ZWQKZ3Vlc3QgbWF5IGZhaWwgdG8gYm9vdCBi
ZWNhdXNlIHdlIGRvbid0IHNhbml0aXplIHRoZSBJRCByZWdpc3RlcnMuCgpU
aGlzIGlzIGEga25vd24gaXNzdWVzIHdpdGggb3RoZXIgQXJtdjguMCsgZmVh
dHVyZXMgKGUuZy4gU1ZFLCBQb2ludGVyCkF1dGgpLiBUaGlzIHdpbGwgdGFr
ZW4gY2FyZSBzZXBhcmF0ZWx5LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNTEg
KG9yIFhTQS05MyByZS1ib3JuKS4KClNpZ25lZC1vZmYtYnk6IEp1bGllbiBH
cmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBBbmRyZSBQ
cnp5d2FyYSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT4KUmV2aWV3ZWQtYnk6
IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4K
UmV2aWV3ZWQtYnk6IEJlcnRyYW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1
aXNAYXJtLmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vdHJhcHMu
YyBiL3hlbi9hcmNoL2FybS90cmFwcy5jCmluZGV4IGEzNmYxNDVlNjcuLjIy
YmQxYmQ0YzYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL2FybS90cmFwcy5jCisr
KyBiL3hlbi9hcmNoL2FybS90cmFwcy5jCkBAIC0xNTEsNyArMTUxLDggQEAg
dm9pZCBpbml0X3RyYXBzKHZvaWQpCiAgICAgICogT24gQVJNNjQgdGhlIFRD
UHggYml0cyB3aGljaCB3ZSBzZXQgaGVyZSAoMC4uOSwxMiwxMykgYXJlIGFs
bAogICAgICAqIFJFUzEsIGkuZS4gdGhleSB3b3VsZCB0cmFwIHdoZXRoZXIg
d2UgZGlkIHRoaXMgd3JpdGUgb3Igbm90LgogICAgICAqLwotICAgIFdSSVRF
X1NZU1JFRygoSENQVFJfQ1BfTUFTSyAmIH4oSENQVFJfQ1AoMTApIHwgSENQ
VFJfQ1AoMTEpKSkgfCBIQ1BUUl9UVEEsCisgICAgV1JJVEVfU1lTUkVHKChI
Q1BUUl9DUF9NQVNLICYgfihIQ1BUUl9DUCgxMCkgfCBIQ1BUUl9DUCgxMSkp
KSB8CisgICAgICAgICAgICAgICAgIEhDUFRSX1RUQSB8IEhDUFRSX1RBTSwK
ICAgICAgICAgICAgICAgICAgQ1BUUl9FTDIpOwogCiAgICAgLyoKZGlmZiAt
LWdpdCBhL3hlbi9pbmNsdWRlL2FzbS1hcm0vcHJvY2Vzc29yLmggYi94ZW4v
aW5jbHVkZS9hc20tYXJtL3Byb2Nlc3Nvci5oCmluZGV4IDNjYTY3ZjgxNTcu
LmQzZDEyYTlkMTkgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS1hcm0v
cHJvY2Vzc29yLmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLWFybS9wcm9jZXNz
b3IuaApAQCAtMzUxLDYgKzM1MSw3IEBACiAjZGVmaW5lIFZUQ1JfUkVTMSAg
ICAgICAoX0FDKDEsVUwpPDwzMSkKIAogLyogSENQVFIgSHlwLiBDb3Byb2Nl
c3NvciBUcmFwIFJlZ2lzdGVyICovCisjZGVmaW5lIEhDUFRSX1RBTSAgICAg
ICAoKF9BQygxLFUpPDwzMCkpCiAjZGVmaW5lIEhDUFRSX1RUQSAgICAgICAo
KF9BQygxLFUpPDwyMCkpICAgICAgICAvKiBUcmFwIHRyYWNlIHJlZ2lzdGVy
cyAqLwogI2RlZmluZSBIQ1BUUl9DUCh4KSAgICAgKChfQUMoMSxVKTw8KHgp
KSkgICAgICAgLyogVHJhcCBDb3Byb2Nlc3NvciB4ICovCiAjZGVmaW5lIEhD
UFRSX0NQX01BU0sgICAoKF9BQygxLFUpPDwxNCktMSkK

--=separator
Content-Type: application/octet-stream; name="xsa351-arm-4.11.patch"
Content-Disposition: attachment; filename="xsa351-arm-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbSBiZGJkNjZjYjliYTE3ZGQxYTcyMjFmMmE1NjFmNDVhODM2ZjEyZjY0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBUdWUsIDEwIE5vdiAyMDIwIDE3
OjA4OjMyICswMDAwClN1YmplY3Q6IFtQQVRDSF0geGVuL2FybTogQWx3YXlz
IHRyYXAgQU1VIHN5c3RlbSByZWdpc3RlcnMKClRoZSBBY3Rpdml0eSBNb25p
dG9ycyBVbml0IChBTVUpIGhhcyBiZWVuIGludHJvZHVjZWQgYnkgQVJNdjgu
NC4gSXQgaXMKY29uc2lkZXJlZCB0byBiZSB1bnNhZmUgdG8gYmUgZXhwb3Nl
IHRvIGd1ZXN0cyBhcyB0aGV5IG1pZ2h0IGV4cG9zZQppbmZvcm1hdGlvbiBh
Ym91dCBjb2RlIGV4ZWN1dGVkIGJ5IG90aGVyIGd1ZXN0cyBvciB0aGUgaG9z
dC4KCkFybSBwcm92aWRlZCBhIHdheSB0byB0cmFwIGFsbCB0aGUgQU1VIHN5
c3RlbSByZWdpc3RlcnMgYnkgc2V0dGluZwpDUFRSX0VMMi5UQU0gdG8gMS4K
ClVuZm9ydHVuYXRlbHksIG9uIG9sZGVyIHJldmlzaW9uIG9mIHRoZSBzcGVj
aWZpY2F0aW9uLCB0aGUgYml0IDMwIChub3cKQ1BUUl9FTDEuVEFNKSB3YXMg
UkVTMC4gQmVjYXVzZSBvZiB0aGF0LCBYZW4gaXMgc2V0dGluZyBpdCB0byAw
IGFuZAp0aGVyZWZvcmUgdGhlIHN5c3RlbSByZWdpc3RlcnMgd291bGQgYmUg
ZXhwb3NlZCB0byB0aGUgZ3Vlc3Qgd2hlbiBpdCBpcwpydW4gb24gcHJvY2Vz
c29ycyB3aXRoIEFNVS4KCkFzIHRoZSBiaXQgaXMgbWFyayBhcyBVTktOT1dO
IGF0IGJvb3QgaW4gQXJtdjguNCwgdGhlIG9ubHkgc2FmZSBzb2x1dGlvbgpm
b3IgdXMgaXMgdG8gYWx3YXlzIHNldCBDUFRSX0VMMS5UQU0gdG8gMS4KCkd1
ZXN0IHRyeWluZyB0byBhY2Nlc3MgdGhlIEFNVSBzeXN0ZW0gcmVnaXN0ZXJz
IHdpbGwgbm93IHJlY2VpdmUgYW4KdW5kZWZpbmVkIGluc3RydWN0aW9uLiBV
bmZvcnR1bmF0ZWx5LCB0aGlzIG1lYW5zIHRoYXQgZXZlbiB3ZWxsLWJlaGF2
ZWQKZ3Vlc3QgbWF5IGZhaWwgdG8gYm9vdCBiZWNhdXNlIHdlIGRvbid0IHNh
bml0aXplIHRoZSBJRCByZWdpc3RlcnMuCgpUaGlzIGlzIGEga25vd24gaXNz
dWVzIHdpdGggb3RoZXIgQXJtdjguMCsgZmVhdHVyZXMgKGUuZy4gU1ZFLCBQ
b2ludGVyCkF1dGgpLiBUaGlzIHdpbGwgdGFrZW4gY2FyZSBzZXBhcmF0ZWx5
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNTEgKG9yIFhTQS05MyByZS1ib3Ju
KS4KClNpZ25lZC1vZmYtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClJldmlld2VkLWJ5OiBBbmRyZSBQcnp5d2FyYSA8YW5kcmUucHJ6
eXdhcmFAYXJtLmNvbT4KUmV2aWV3ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KUmV2aWV3ZWQtYnk6IEJlcnRy
YW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1aXNAYXJtLmNvbT4KLS0tCiB4
ZW4vYXJjaC9hcm0vdHJhcHMuYyAgICAgICAgICAgIHwgMyArKy0KIHhlbi9p
bmNsdWRlL2FzbS1hcm0vcHJvY2Vzc29yLmggfCAxICsKIDIgZmlsZXMgY2hh
bmdlZCwgMyBpbnNlcnRpb25zKCspLCAxIGRlbGV0aW9uKC0pCgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gvYXJtL3RyYXBzLmMgYi94ZW4vYXJjaC9hcm0vdHJh
cHMuYwppbmRleCBlOTMwNTg1YWQ2ZDQuLmMxMjAxMGE3MjJiNSAxMDA2NDQK
LS0tIGEveGVuL2FyY2gvYXJtL3RyYXBzLmMKKysrIGIveGVuL2FyY2gvYXJt
L3RyYXBzLmMKQEAgLTE3OSw3ICsxNzksOCBAQCB2b2lkIGluaXRfdHJhcHMo
dm9pZCkKICAgICAgKiBPbiBBUk02NCB0aGUgVENQeCBiaXRzIHdoaWNoIHdl
IHNldCBoZXJlICgwLi45LDEyLDEzKSBhcmUgYWxsCiAgICAgICogUkVTMSwg
aS5lLiB0aGV5IHdvdWxkIHRyYXAgd2hldGhlciB3ZSBkaWQgdGhpcyB3cml0
ZSBvciBub3QuCiAgICAgICovCi0gICAgV1JJVEVfU1lTUkVHKChIQ1BUUl9D
UF9NQVNLICYgfihIQ1BUUl9DUCgxMCkgfCBIQ1BUUl9DUCgxMSkpKSB8IEhD
UFRSX1RUQSwKKyAgICBXUklURV9TWVNSRUcoKEhDUFRSX0NQX01BU0sgJiB+
KEhDUFRSX0NQKDEwKSB8IEhDUFRSX0NQKDExKSkpIHwKKyAgICAgICAgICAg
ICAgICAgSENQVFJfVFRBIHwgSENQVFJfVEFNLAogICAgICAgICAgICAgICAg
ICBDUFRSX0VMMik7CiAKICAgICAvKiBTZXR1cCBoeXBlcnZpc29yIHRyYXBz
ICovCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20tYXJtL3Byb2Nlc3Nv
ci5oIGIveGVuL2luY2x1ZGUvYXNtLWFybS9wcm9jZXNzb3IuaAppbmRleCAy
MjJhMDJkZDk5MzUuLjU3NTVjYzY0MzQ0YSAxMDA2NDQKLS0tIGEveGVuL2lu
Y2x1ZGUvYXNtLWFybS9wcm9jZXNzb3IuaAorKysgYi94ZW4vaW5jbHVkZS9h
c20tYXJtL3Byb2Nlc3Nvci5oCkBAIC0yOTEsNiArMjkxLDcgQEAKICNkZWZp
bmUgVlRDUl9SRVMxICAgICAgIChfQUMoMSxVTCk8PDMxKQogCiAvKiBIQ1BU
UiBIeXAuIENvcHJvY2Vzc29yIFRyYXAgUmVnaXN0ZXIgKi8KKyNkZWZpbmUg
SENQVFJfVEFNICAgICAgICgoX0FDKDEsVSk8PDMwKSkKICNkZWZpbmUgSENQ
VFJfVFRBICAgICAgICgoX0FDKDEsVSk8PDIwKSkgICAgICAgIC8qIFRyYXAg
dHJhY2UgcmVnaXN0ZXJzICovCiAjZGVmaW5lIEhDUFRSX0NQKHgpICAgICAo
KF9BQygxLFUpPDwoeCkpKSAgICAgICAvKiBUcmFwIENvcHJvY2Vzc29yIHgg
Ki8KICNkZWZpbmUgSENQVFJfQ1BfTUFTSyAgICgoX0FDKDEsVSk8PDE0KS0x
KQotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.11-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCAyNTZlNThkODJiLi4zNDk1YWM5ZjRhIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0xNDEsNiArMTQxLDcgQEAgaW50IGluaXRfdmNwdV9t
c3JfcG9saWN5KHN0cnVjdCB2Y3B1ICp2KQogCiBpbnQgZ3Vlc3RfcmRtc3Io
Y29uc3Qgc3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3Qg
KnZhbCkKIHsKKyAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpkID0gdi0+ZG9t
YWluOwogICAgIGNvbnN0IHN0cnVjdCBjcHVpZF9wb2xpY3kgKmNwID0gdi0+
ZG9tYWluLT5hcmNoLmNwdWlkOwogICAgIGNvbnN0IHN0cnVjdCBtc3JfZG9t
YWluX3BvbGljeSAqZHAgPSB2LT5kb21haW4tPmFyY2gubXNyOwogICAgIGNv
bnN0IHN0cnVjdCBtc3JfdmNwdV9wb2xpY3kgKnZwID0gdi0+YXJjaC5tc3I7
CkBAIC0yMTIsNiArMjEzLDI1IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBz
dHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQog
ICAgICAgICBicmVhazsKIAogICAgICAgICAvKgorICAgICAgICAgKiBUaGVz
ZSBNU1JzIGFyZSBub3QgZW51bWVyYXRlZCBpbiBDUFVJRC4gIFRoZXkgaGF2
ZSBiZWVuIGFyb3VuZAorICAgICAgICAgKiBzaW5jZSB0aGUgUGVudGl1bSA0
LCBhbmQgaW1wbGVtZW50ZWQgYnkgb3RoZXIgdmVuZG9ycy4KKyAgICAgICAg
ICoKKyAgICAgICAgICogU29tZSB2ZXJzaW9ucyBvZiBXaW5kb3dzIHRyeSBy
ZWFkaW5nIHRoZXNlIGJlZm9yZSBzZXR0aW5nIHVwIGEgI0dQCisgICAgICAg
ICAqIGhhbmRsZXIsIGFuZCBMaW51eCBoYXMgc2V2ZXJhbCB1bmd1YXJkZWQg
cmVhZHMgYXMgd2VsbC4gIFByb3ZpZGUKKyAgICAgICAgICogUkFaIHNlbWFu
dGljcywgaW4gZ2VuZXJhbCwgYnV0IHBlcm1pdCBhIGNwdWZyZXEgY29udHJv
bGxlciBkb20wIHRvCisgICAgICAgICAqIGhhdmUgZnVsbCBhY2Nlc3MuCisg
ICAgICAgICAqLworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9TVEFUVVM6Cisg
ICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAgaWYgKCAhKGNw
LT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBYODZfVkVORE9S
X0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2ZhdWx0OworCisg
ICAgICAgICp2YWwgPSAwOworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1
ZnJlcV9jb250cm9sbGVyKGQpKSB8fCByZG1zcl9zYWZlKG1zciwgKnZhbCkg
PT0gMCApCisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgZ290byBncF9m
YXVsdDsKKworICAgICAgICAvKgogICAgICAgICAgKiBUT0RPOiBJbXBsZW1l
bnQgd2hlbiB3ZSBoYXZlIGJldHRlciB0b3BvbG9neSByZXByZXNlbnRhdGlv
bi4KICAgICBjYXNlIE1TUl9JTlRFTF9DT1JFX1RIUkVBRF9DT1VOVDoKICAg
ICAgICAgICovCkBAIC0yNDEsNiArMjYxLDcgQEAgaW50IGd1ZXN0X3dybXNy
KHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkK
ICAgICBjYXNlIE1TUl9JTlRFTF9DT1JFX1RIUkVBRF9DT1VOVDoKICAgICBj
YXNlIE1TUl9JTlRFTF9QTEFURk9STV9JTkZPOgogICAgIGNhc2UgTVNSX0FS
Q0hfQ0FQQUJJTElUSUVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9TVEFU
VVM6CiAgICAgICAgIC8qIFJlYWQtb25seSAqLwogICAgIGNhc2UgTVNSX1RT
WF9GT1JDRV9BQk9SVDoKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKQEAgLTM0
NSw2ICszNjYsMjEgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2
LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICAgICAgYnJlYWs7
CiAgICAgfQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgTVNSIGlz
IG5vdCBlbnVtZXJhdGVkIGluIENQVUlELiAgSXQgaGFzIGJlZW4gYXJvdW5k
IHNpbmNlIHRoZQorICAgICAgICAgKiBQZW50aXVtIDQsIGFuZCBpbXBsZW1l
bnRlZCBieSBvdGhlciB2ZW5kb3JzLgorICAgICAgICAgKgorICAgICAgICAg
KiBUbyBtYXRjaCB0aGUgUkFaIHNlbWFudGljcywgaW1wbGVtZW50IGFzIHdy
aXRlLWRpc2NhcmQsIGV4Y2VwdCBmb3IKKyAgICAgICAgICogYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB3aGljaCBoYXMgZnVsbCBhY2Nlc3MuCisgICAg
ICAgICAqLworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAg
IGlmICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwg
WDg2X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9m
YXVsdDsKKworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250
cm9sbGVyKGQpKSB8fCB3cm1zcl9zYWZlKG1zciwgdmFsKSA9PSAwICkKKyAg
ICAgICAgICAgIGJyZWFrOworICAgICAgICBnb3RvIGdwX2ZhdWx0OworCiAg
ICAgZGVmYXVsdDoKICAgICAgICAgcmV0dXJuIFg4NkVNVUxfVU5IQU5ETEVB
QkxFOwogICAgIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVs
LXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYwpp
bmRleCA4MTIwZGVkMzMwLi43NTVmMDBkYjMzIDEwMDY0NAotLS0gYS94ZW4v
YXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKKysrIGIveGVuL2FyY2gveDg2
L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC04MTYsMTIgKzgxNiw2IEBAIHN0YXRp
YyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlzY19lbmFibGUodWludDY0X3Qg
dmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9v
bCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVxX2NvbnRyb2xsZXIgPT0gRlJF
UUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAgICAgICAgIGlzX2hhcmR3YXJl
X2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBpbnQgcmVhZF9tc3IodW5zaWdu
ZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwKICAgICAgICAgICAgICAgICAg
ICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQgKmN0eHQpCiB7CkBAIC0xMDk2
LDE0ICsxMDkwLDYgQEAgc3RhdGljIGludCB3cml0ZV9tc3IodW5zaWduZWQg
aW50IHJlZywgdWludDY0X3QgdmFsLAogICAgICAgICAgICAgcmV0dXJuIFg4
NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7CiAKLSAgICBjYXNlIE1TUl9J
QTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAoIGJvb3RfY3B1X2RhdGEueDg2
X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVMICkKLSAgICAgICAgICAgIGJy
ZWFrOwotICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250cm9s
bGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAgICB3cm1zcl9zYWZlKHJlZywg
dmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJldHVybiBYODZFTVVMX09LQVk7
Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2FzZSBNU1JfSUEzMl9USEVSTV9D
T05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJfRU5FUkdZX1BFUkZfQklBUzoK
ICAgICAgICAgaWYgKCBib290X2NwdV9kYXRhLng4Nl92ZW5kb3IgIT0gWDg2
X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCmluZGV4IGMwY2M1
ZDkzMzYuLjdlNGFkNWQ1MWIgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL3hl
bi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCkBAIC05
MjAsNiArOTIwLDIyIEBAIGV4dGVybiBlbnVtIGNwdWZyZXFfY29udHJvbGxl
ciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVRQ1RMX2RvbTBfa2VybmVsLCBG
UkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRyb2xsZXI7CiAKK3N0YXRpYyBh
bHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9jb250cm9sbGVyKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAgLyoKKyAgICAgKiBBIFBWIGRv
bTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUgY3B1ZnJlcSBjb250cm9sbGVy
LCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICogWGVuJ3MgY3B1ZnJlcSBkcml2
ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0cyBkaXJlY3QgYWNjZXNzIHRv
IGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAgICAqCisgICAgICogVGhpcyBp
bnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRvbTAgaXMgaWRlbnRpdHkgcGlu
bmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAgKiBudW1iZXIgb2YgdkNQVXMg
YXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAgICAgKgorICAgICAqIEl0IHdv
dWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZpcnR1YWxpc2UgdGhlIGludGVy
ZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4gKGlzX3B2X2RvbWFpbihkKSAm
JiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYKKyAgICAgICAgICAgIGNwdWZy
ZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2RvbTBfa2VybmVsKTsKK30KKwog
I2RlZmluZSBDUFVQT09MSURfTk9ORSAgICAtMQogCiBzdHJ1Y3QgY3B1cG9v
bCAqY3B1cG9vbF9nZXRfYnlfaWQoaW50IHBvb2xpZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.11-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCAz
NDk1YWM5ZjRhLi45OWM4NDhmZjQxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTYsNiAr
MTU2LDE1IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBzdHJ1Y3QgdmNwdSAq
diwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNS
X1RTWF9GT1JDRV9BQk9SVDoKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAg
ICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CisgICAgY2FzZSBNU1JfUkFQTF9Q
T1dFUl9VTklUOgorICAgIGNhc2UgTVNSX1BLR19QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QS0dfUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9EUkFNX1BPV0VS
X0xJTUlUIC4uLiBNU1JfRFJBTV9QT1dFUl9JTkZPOgorICAgIGNhc2UgTVNS
X1BQMF9QT1dFUl9MSU1JVCAgLi4uIE1TUl9QUDBfUE9MSUNZOgorICAgIGNh
c2UgTVNSX1BQMV9QT1dFUl9MSU1JVCAgLi4uIE1TUl9QUDFfUE9MSUNZOgor
ICAgIGNhc2UgTVNSX1BMQVRGT1JNX0VORVJHWV9DT1VOVEVSOgorICAgIGNh
c2UgTVNSX1BMQVRGT1JNX1BPV0VSX0xJTUlUOgorICAgIGNhc2UgTVNSX0Yx
NUhfQ1VfUE9XRVIgLi4uIE1TUl9GMTVIX0NVX01BWF9QT1dFUjoKKyAgICBj
YXNlIE1TUl9BTURfUkFQTF9QT1dFUl9VTklUIC4uLiBNU1JfQU1EX1BLR19F
TkVSR1lfU1RBVFVTOgogICAgICAgICAvKiBOb3Qgb2ZmZXJlZCB0byBndWVz
dHMuICovCiAgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CiAKQEAgLTI2Niw2ICsy
NzUsMTUgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50
MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfRk9S
Q0VfQUJPUlQ6CiAgICAgY2FzZSBNU1JfVFNYX0NUUkw6CiAgICAgY2FzZSBN
U1JfTUNVX09QVF9DVFJMOgorICAgIGNhc2UgTVNSX1JBUExfUE9XRVJfVU5J
VDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBNU1JfUEtH
X1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1JVCAu
Li4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9QUDBfUE9X
RVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNlIE1TUl9Q
UDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAgICBjYXNl
IE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNlIE1TUl9Q
TEFURk9STV9QT1dFUl9MSU1JVDoKKyAgICBjYXNlIE1TUl9GMTVIX0NVX1BP
V0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAgY2FzZSBNU1Jf
QU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0dfRU5FUkdZX1NU
QVRVUzoKICAgICAgICAgLyogTm90IG9mZmVyZWQgdG8gZ3Vlc3RzLiAqLwog
ICAgICAgICBnb3RvIGdwX2ZhdWx0OwogCmRpZmYgLS1naXQgYS94ZW4vaW5j
bHVkZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tc3ItaW5kZXguaAppbmRleCA0ODBkMWQ4MTAyLi5hNjg1ZGNkY2NhIDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCisr
KyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgKQEAgLTk2LDYg
Kzk2LDM4IEBACiAvKiBMb3dlciA2IGJpdHMgZGVmaW5lIHRoZSBmb3JtYXQg
b2YgdGhlIGFkZHJlc3MgaW4gdGhlIExCUiBzdGFjayAqLwogI2RlZmluZSBN
U1JfSUEzMl9QRVJGX0NBUF9MQlJfRk9STUFUCTB4M2YKIAorLyoKKyAqIElu
dGVsIFJ1bnRpbWUgQXZlcmFnZSBQb3dlciBMaW1pdGluZyAoUkFQTCkgaW50
ZXJmYWNlLiAgUG93ZXIgcGxhbmUgYmFzZQorICogYWRkcmVzc2VzIChNU1Jf
Kl9QT1dFUl9MSU1JVCkgYXJlIG1vZGVsIHNwZWNpZmljLCBidXQgaGF2ZSBz
by1mYXIgYmVlbgorICogY29uc2lzdGVudCBzaW5jZSB0aGVpciBpbnRyb2R1
Y3Rpb24gaW4gU2FuZHlCcmlkZ2UuCisgKgorICogT2Zmc2V0cyBvZiBmdW5j
dGlvbmFsaXR5IGZyb20gdGhlIHBvd2VyIHBsYW5lIGJhc2UgaXMgYXJjaGl0
ZWN0dXJhbCwgYnV0CisgKiBub3QgYWxsIHBvd2VyIHBsYW5lcyBzdXBwb3J0
IGFsbCBmdW5jdGlvbmFsaXR5LgorICovCisjZGVmaW5lIE1TUl9SQVBMX1BP
V0VSX1VOSVQJCTB4MDAwMDA2MDYKKworI2RlZmluZSBNU1JfUEtHX1BPV0VS
X0xJTUlUCQkweDAwMDAwNjEwCisjZGVmaW5lIE1TUl9QS0dfRU5FUkdZX1NU
QVRVUwkJMHgwMDAwMDYxMQorI2RlZmluZSBNU1JfUEtHX1BFUkZfU1RBVFVT
CQkweDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJfSU5GTwkJMHgw
MDAwMDYxNAorCisjZGVmaW5lIE1TUl9EUkFNX1BPV0VSX0xJTUlUCQkweDAw
MDAwNjE4CisjZGVmaW5lIE1TUl9EUkFNX0VORVJHWV9TVEFUVVMJCTB4MDAw
MDA2MTkKKyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMJCTB4MDAwMDA2
MWIKKyNkZWZpbmUgTVNSX0RSQU1fUE9XRVJfSU5GTwkJMHgwMDAwMDYxYwor
CisjZGVmaW5lIE1TUl9QUDBfUE9XRVJfTElNSVQJCTB4MDAwMDA2MzgKKyNk
ZWZpbmUgTVNSX1BQMF9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjM5CisjZGVm
aW5lIE1TUl9QUDBfUE9MSUNZCQkJMHgwMDAwMDYzYQorCisjZGVmaW5lIE1T
Ul9QUDFfUE9XRVJfTElNSVQJCTB4MDAwMDA2NDAKKyNkZWZpbmUgTVNSX1BQ
MV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjQxCisjZGVmaW5lIE1TUl9QUDFf
UE9MSUNZCQkJMHgwMDAwMDY0MgorCisvKiBJbnRlbCBQbGF0Zm9ybS13aWRl
IHBvd2VyIGludGVyZmFjZS4gKi8KKyNkZWZpbmUgTVNSX1BMQVRGT1JNX0VO
RVJHWV9DT1VOVEVSCTB4MDAwMDA2NGQKKyNkZWZpbmUgTVNSX1BMQVRGT1JN
X1BPV0VSX0xJTUlUCTB4MDAwMDA2NWMKKwogI2RlZmluZSBNU1JfSUEzMl9C
TkRDRkdTCQkweDAwMDAwZDkwCiAjZGVmaW5lIElBMzJfQk5EQ0ZHU19FTkFC
TEUJCTB4MDAwMDAwMDEKICNkZWZpbmUgSUEzMl9CTkRDRkdTX1BSRVNFUlZF
CQkweDAwMDAwMDAyCkBAIC0yMTgsNiArMjUwLDggQEAKICNkZWZpbmUgTVNS
X0s4X1ZNX0NSCQkJMHhjMDAxMDExNAogI2RlZmluZSBNU1JfSzhfVk1fSFNB
VkVfUEEJCTB4YzAwMTAxMTcKIAorI2RlZmluZSBNU1JfRjE1SF9DVV9QT1dF
UgkJMHhjMDAxMDA3YQorI2RlZmluZSBNU1JfRjE1SF9DVV9NQVhfUE9XRVIJ
CTB4YzAwMTAwN2IKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDAJ
CTB4YzAwMTAyMDAKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfUEVSRkNUUjAJ
CTB4YzAwMTAyMDEKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDEJ
CTB4YzAwMTAyMDIKQEAgLTIzMSw2ICsyNjUsMTAgQEAKICNkZWZpbmUgTVNS
X0FNRF9GQU0xNUhfRVZOVFNFTDUJCTB4YzAwMTAyMGEKICNkZWZpbmUgTVNS
X0FNRF9GQU0xNUhfUEVSRkNUUjUJCTB4YzAwMTAyMGIKIAorI2RlZmluZSBN
U1JfQU1EX1JBUExfUE9XRVJfVU5JVAkJMHhjMDAxMDI5OQorI2RlZmluZSBN
U1JfQU1EX0NPUkVfRU5FUkdZX1NUQVRVUwkweGMwMDEwMjlhCisjZGVmaW5l
IE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5YgorCiAjZGVm
aW5lIE1TUl9BTURfTDdTMF9GRUFUVVJFX01BU0sJMHhjMDAxMTAwMgogI2Rl
ZmluZSBNU1JfQU1EX1RIUk1fRkVBVFVSRV9NQVNLCTB4YzAwMTEwMDMKICNk
ZWZpbmUgTVNSX0s4X0ZFQVRVUkVfTUFTSwkJMHhjMDAxMTAwNAo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.12-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCA0Njc3MjIyYzQwLi5hNDI3ODI2YmEwIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yMDYsNiArMjA2LDI1IEBAIGludCBndWVzdF9yZG1z
cihjb25zdCBzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRf
dCAqdmFsKQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19l
bmFibGVzLnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAg
ICAgICAgICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BV
SUQuICBUaGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2Ug
dGhlIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRv
cnMuCisgICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2Yg
V2luZG93cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBh
ICNHUAorICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVy
YWwgdW5ndWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAg
ICAqIFJBWiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBj
cHVmcmVxIGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1
bGwgYWNjZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BF
UkZfU1RBVFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAg
ICAgIGlmICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVM
IHwgWDg2X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBn
cF9mYXVsdDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBs
aWtlbHkoIWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2Fm
ZShtc3IsICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAg
ICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklS
U1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZt
X2RvbWFpbihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBn
cF9mYXVsdDsKQEAgLTI5MCw2ICszMDksNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Io
c3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQog
ICAgIGNhc2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNh
c2UgTVNSX0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJD
SF9DQVBBQklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRV
UzoKICAgICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVFNY
X0ZPUkNFX0FCT1JUOgogICAgIGNhc2UgTVNSX1RTWF9DVFJMOgpAQCAtMzk0
LDYgKzQxNCwyMSBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYs
IHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgICAgICBicmVhazsK
ICAgICB9CiAKKyAgICAgICAgLyoKKyAgICAgICAgICogVGhpcyBNU1IgaXMg
bm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBJdCBoYXMgYmVlbiBhcm91bmQg
c2luY2UgdGhlCisgICAgICAgICAqIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVu
dGVkIGJ5IG90aGVyIHZlbmRvcnMuCisgICAgICAgICAqCisgICAgICAgICAq
IFRvIG1hdGNoIHRoZSBSQVogc2VtYW50aWNzLCBpbXBsZW1lbnQgYXMgd3Jp
dGUtZGlzY2FyZCwgZXhjZXB0IGZvcgorICAgICAgICAgKiBhIGNwdWZyZXEg
Y29udHJvbGxlciBkb20wIHdoaWNoIGhhcyBmdWxsIGFjY2Vzcy4KKyAgICAg
ICAgICovCisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAg
aWYgKCAhKGNwLT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBY
ODZfVkVORE9SX0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2Zh
dWx0OworCisgICAgICAgIGlmICggbGlrZWx5KCFpc19jcHVmcmVxX2NvbnRy
b2xsZXIoZCkpIHx8IHdybXNyX3NhZmUobXNyLCB2YWwpID09IDAgKQorICAg
ICAgICAgICAgYnJlYWs7CisgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAg
ICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoK
ICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCB2ICE9IGN1cnIg
KQogICAgICAgICAgICAgZ290byBncF9mYXVsdDsKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9w
di9lbXVsLXByaXYtb3AuYwppbmRleCAzMjRhMjMzNGEyLi45MzMwMzZlYTM0
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMK
KysrIGIveGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC03OTks
MTIgKzc5OSw2IEBAIHN0YXRpYyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlz
Y19lbmFibGUodWludDY0X3QgdmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAK
LXN0YXRpYyBpbmxpbmUgYm9vbCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29u
c3Qgc3RydWN0IGRvbWFpbiAqZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVx
X2NvbnRyb2xsZXIgPT0gRlJFUUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAg
ICAgICAgIGlzX2hhcmR3YXJlX2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBp
bnQgcmVhZF9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwK
ICAgICAgICAgICAgICAgICAgICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQg
KmN0eHQpCiB7CkBAIC0xMDQ3LDE0ICsxMDQxLDYgQEAgc3RhdGljIGludCB3
cml0ZV9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgdmFsLAogICAg
ICAgICAgICAgcmV0dXJuIFg4NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7
CiAKLSAgICBjYXNlIE1TUl9JQTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAo
IGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVM
ICkKLSAgICAgICAgICAgIGJyZWFrOwotICAgICAgICBpZiAoIGxpa2VseSgh
aXNfY3B1ZnJlcV9jb250cm9sbGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAg
ICB3cm1zcl9zYWZlKHJlZywgdmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJl
dHVybiBYODZFTVVMX09LQVk7Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2Fz
ZSBNU1JfSUEzMl9USEVSTV9DT05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJf
RU5FUkdZX1BFUkZfQklBUzoKICAgICAgICAgaWYgKCBib290X2NwdV9kYXRh
Lng4Nl92ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9z
Y2hlZC5oCmluZGV4IDgxOWY2ZWRlMmIuLmI5MTg2MjQzMjcgMTAwNjQ0Ci0t
LSBhL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRl
L3hlbi9zY2hlZC5oCkBAIC05OTMsNiArOTkzLDIyIEBAIGV4dGVybiBlbnVt
IGNwdWZyZXFfY29udHJvbGxlciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVR
Q1RMX2RvbTBfa2VybmVsLCBGUkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRy
b2xsZXI7CiAKK3N0YXRpYyBhbHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJl
cV9jb250cm9sbGVyKGNvbnN0IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAg
LyoKKyAgICAgKiBBIFBWIGRvbTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUg
Y3B1ZnJlcSBjb250cm9sbGVyLCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICog
WGVuJ3MgY3B1ZnJlcSBkcml2ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0
cyBkaXJlY3QgYWNjZXNzIHRvIGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAg
ICAqCisgICAgICogVGhpcyBpbnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRv
bTAgaXMgaWRlbnRpdHkgcGlubmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAg
KiBudW1iZXIgb2YgdkNQVXMgYXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAg
ICAgKgorICAgICAqIEl0IHdvdWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZp
cnR1YWxpc2UgdGhlIGludGVyZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4g
KGlzX3B2X2RvbWFpbihkKSAmJiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYK
KyAgICAgICAgICAgIGNwdWZyZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2Rv
bTBfa2VybmVsKTsKK30KKwogI2RlZmluZSBDUFVQT09MSURfTk9ORSAgICAt
MQogCiBzdHJ1Y3QgY3B1cG9vbCAqY3B1cG9vbF9nZXRfYnlfaWQoaW50IHBv
b2xpZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.12-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCBh
NDI3ODI2YmEwLi45MjdlZDYyNWRmIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTEsOSAr
MTUxLDE4IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBzdHJ1Y3QgdmNwdSAq
diwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNS
X1RTWF9DVFJMOgogICAgIGNhc2UgTVNSX01DVV9PUFRfQ1RSTDoKICAgICBj
YXNlIE1TUl9SVElUX09VVFBVVF9CQVNFIC4uLiBNU1JfUlRJVF9BRERSX0Io
Nyk6CisgICAgY2FzZSBNU1JfUkFQTF9QT1dFUl9VTklUOgorICAgIGNhc2Ug
TVNSX1BLR19QT1dFUl9MSU1JVCAgLi4uIE1TUl9QS0dfUE9XRVJfSU5GTzoK
KyAgICBjYXNlIE1TUl9EUkFNX1BPV0VSX0xJTUlUIC4uLiBNU1JfRFJBTV9Q
T1dFUl9JTkZPOgorICAgIGNhc2UgTVNSX1BQMF9QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QUDBfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BQMV9QT1dFUl9MSU1J
VCAgLi4uIE1TUl9QUDFfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BMQVRGT1JN
X0VORVJHWV9DT1VOVEVSOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX1BPV0VS
X0xJTUlUOgogICAgIGNhc2UgTVNSX1VfQ0VUOgogICAgIGNhc2UgTVNSX1Nf
Q0VUOgogICAgIGNhc2UgTVNSX1BMMF9TU1AgLi4uIE1TUl9JTlRFUlJVUFRf
U1NQX1RBQkxFOgorICAgIGNhc2UgTVNSX0YxNUhfQ1VfUE9XRVIgLi4uIE1T
Ul9GMTVIX0NVX01BWF9QT1dFUjoKKyAgICBjYXNlIE1TUl9BTURfUkFQTF9Q
T1dFUl9VTklUIC4uLiBNU1JfQU1EX1BLR19FTkVSR1lfU1RBVFVTOgogICAg
ICAgICAvKiBOb3Qgb2ZmZXJlZCB0byBndWVzdHMuICovCiAgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CiAKQEAgLTMxNSw5ICszMjQsMTggQEAgaW50IGd1ZXN0
X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90
IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAgICBjYXNlIE1TUl9N
Q1VfT1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9PVVRQVVRfQkFTRSAu
Li4gTVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2UgTVNSX1JBUExfUE9X
RVJfVU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBN
U1JfUEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9M
SU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9Q
UDBfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNl
IE1TUl9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAg
ICBjYXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNl
IE1TUl9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBjYXNlIE1TUl9VX0NF
VDoKICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNlIE1TUl9QTDBfU1NQ
IC4uLiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKKyAgICBjYXNlIE1TUl9G
MTVIX0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAg
Y2FzZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0df
RU5FUkdZX1NUQVRVUzoKICAgICAgICAgLyogTm90IG9mZmVyZWQgdG8gZ3Vl
c3RzLiAqLwogICAgICAgICBnb3RvIGdwX2ZhdWx0OwogCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAppbmRleCAwZWI2ODU1NjE0Li5iYTll
OTBhZjIxIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1p
bmRleC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgK
QEAgLTk2LDYgKzk2LDM4IEBACiAvKiBMb3dlciA2IGJpdHMgZGVmaW5lIHRo
ZSBmb3JtYXQgb2YgdGhlIGFkZHJlc3MgaW4gdGhlIExCUiBzdGFjayAqLwog
I2RlZmluZSBNU1JfSUEzMl9QRVJGX0NBUF9MQlJfRk9STUFUCTB4M2YKIAor
LyoKKyAqIEludGVsIFJ1bnRpbWUgQXZlcmFnZSBQb3dlciBMaW1pdGluZyAo
UkFQTCkgaW50ZXJmYWNlLiAgUG93ZXIgcGxhbmUgYmFzZQorICogYWRkcmVz
c2VzIChNU1JfKl9QT1dFUl9MSU1JVCkgYXJlIG1vZGVsIHNwZWNpZmljLCBi
dXQgaGF2ZSBzby1mYXIgYmVlbgorICogY29uc2lzdGVudCBzaW5jZSB0aGVp
ciBpbnRyb2R1Y3Rpb24gaW4gU2FuZHlCcmlkZ2UuCisgKgorICogT2Zmc2V0
cyBvZiBmdW5jdGlvbmFsaXR5IGZyb20gdGhlIHBvd2VyIHBsYW5lIGJhc2Ug
aXMgYXJjaGl0ZWN0dXJhbCwgYnV0CisgKiBub3QgYWxsIHBvd2VyIHBsYW5l
cyBzdXBwb3J0IGFsbCBmdW5jdGlvbmFsaXR5LgorICovCisjZGVmaW5lIE1T
Ul9SQVBMX1BPV0VSX1VOSVQJCTB4MDAwMDA2MDYKKworI2RlZmluZSBNU1Jf
UEtHX1BPV0VSX0xJTUlUCQkweDAwMDAwNjEwCisjZGVmaW5lIE1TUl9QS0df
RU5FUkdZX1NUQVRVUwkJMHgwMDAwMDYxMQorI2RlZmluZSBNU1JfUEtHX1BF
UkZfU1RBVFVTCQkweDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJf
SU5GTwkJMHgwMDAwMDYxNAorCisjZGVmaW5lIE1TUl9EUkFNX1BPV0VSX0xJ
TUlUCQkweDAwMDAwNjE4CisjZGVmaW5lIE1TUl9EUkFNX0VORVJHWV9TVEFU
VVMJCTB4MDAwMDA2MTkKKyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMJ
CTB4MDAwMDA2MWIKKyNkZWZpbmUgTVNSX0RSQU1fUE9XRVJfSU5GTwkJMHgw
MDAwMDYxYworCisjZGVmaW5lIE1TUl9QUDBfUE9XRVJfTElNSVQJCTB4MDAw
MDA2MzgKKyNkZWZpbmUgTVNSX1BQMF9FTkVSR1lfU1RBVFVTCQkweDAwMDAw
NjM5CisjZGVmaW5lIE1TUl9QUDBfUE9MSUNZCQkJMHgwMDAwMDYzYQorCisj
ZGVmaW5lIE1TUl9QUDFfUE9XRVJfTElNSVQJCTB4MDAwMDA2NDAKKyNkZWZp
bmUgTVNSX1BQMV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjQxCisjZGVmaW5l
IE1TUl9QUDFfUE9MSUNZCQkJMHgwMDAwMDY0MgorCisvKiBJbnRlbCBQbGF0
Zm9ybS13aWRlIHBvd2VyIGludGVyZmFjZS4gKi8KKyNkZWZpbmUgTVNSX1BM
QVRGT1JNX0VORVJHWV9DT1VOVEVSCTB4MDAwMDA2NGQKKyNkZWZpbmUgTVNS
X1BMQVRGT1JNX1BPV0VSX0xJTUlUCTB4MDAwMDA2NWMKKwogI2RlZmluZSBN
U1JfSUEzMl9CTkRDRkdTCQkweDAwMDAwZDkwCiAjZGVmaW5lIElBMzJfQk5E
Q0ZHU19FTkFCTEUJCTB4MDAwMDAwMDEKICNkZWZpbmUgSUEzMl9CTkRDRkdT
X1BSRVNFUlZFCQkweDAwMDAwMDAyCkBAIC0yMzYsNiArMjY4LDggQEAKICNk
ZWZpbmUgTVNSX0s4X1ZNX0NSCQkJMHhjMDAxMDExNAogI2RlZmluZSBNU1Jf
SzhfVk1fSFNBVkVfUEEJCTB4YzAwMTAxMTcKIAorI2RlZmluZSBNU1JfRjE1
SF9DVV9QT1dFUgkJMHhjMDAxMDA3YQorI2RlZmluZSBNU1JfRjE1SF9DVV9N
QVhfUE9XRVIJCTB4YzAwMTAwN2IKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
RVZOVFNFTDAJCTB4YzAwMTAyMDAKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
UEVSRkNUUjAJCTB4YzAwMTAyMDEKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
RVZOVFNFTDEJCTB4YzAwMTAyMDIKQEAgLTI0OSw2ICsyODMsMTAgQEAKICNk
ZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDUJCTB4YzAwMTAyMGEKICNk
ZWZpbmUgTVNSX0FNRF9GQU0xNUhfUEVSRkNUUjUJCTB4YzAwMTAyMGIKIAor
I2RlZmluZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVAkJMHhjMDAxMDI5OQor
I2RlZmluZSBNU1JfQU1EX0NPUkVfRU5FUkdZX1NUQVRVUwkweGMwMDEwMjlh
CisjZGVmaW5lIE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5
YgorCiAjZGVmaW5lIE1TUl9BTURfTDdTMF9GRUFUVVJFX01BU0sJMHhjMDAx
MTAwMgogI2RlZmluZSBNU1JfQU1EX1RIUk1fRkVBVFVSRV9NQVNLCTB4YzAw
MTEwMDMKICNkZWZpbmUgTVNSX0s4X0ZFQVRVUkVfTUFTSwkJMHhjMDAxMTAw
NAo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.13-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCA4NzVhYzM5ZDMwLi44Yzk2OTE5N2FhIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yMDgsNiArMjA4LDI1IEBAIGludCBndWVzdF9yZG1z
cihzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFs
KQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19lbmFibGVz
LnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAgICAgICAg
ICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBU
aGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2UgdGhlIFBl
bnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRvcnMuCisg
ICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2YgV2luZG93
cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBhICNHUAor
ICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVyYWwgdW5n
dWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAgICAqIFJB
WiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1bGwgYWNj
ZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BFUkZfU1RB
VFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlm
ICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2
X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBsaWtlbHko
IWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2FmZShtc3Is
ICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4u
IE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFp
bihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKQEAgLTMwNSw2ICszMjQsNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0
IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgIGNh
c2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNhc2UgTVNS
X0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJDSF9DQVBB
QklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRVUzoKICAg
ICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVFNYX0ZPUkNF
X0FCT1JUOgogICAgIGNhc2UgTVNSX1RTWF9DVFJMOgpAQCAtNDExLDYgKzQz
MSwyMSBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYsIHVpbnQz
Ml90IG1zciwgdWludDY0X3QgdmFsKQogICAgICAgICBicmVhazsKICAgICB9
CiAKKyAgICAgICAgLyoKKyAgICAgICAgICogVGhpcyBNU1IgaXMgbm90IGVu
dW1lcmF0ZWQgaW4gQ1BVSUQuICBJdCBoYXMgYmVlbiBhcm91bmQgc2luY2Ug
dGhlCisgICAgICAgICAqIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5
IG90aGVyIHZlbmRvcnMuCisgICAgICAgICAqCisgICAgICAgICAqIFRvIG1h
dGNoIHRoZSBSQVogc2VtYW50aWNzLCBpbXBsZW1lbnQgYXMgd3JpdGUtZGlz
Y2FyZCwgZXhjZXB0IGZvcgorICAgICAgICAgKiBhIGNwdWZyZXEgY29udHJv
bGxlciBkb20wIHdoaWNoIGhhcyBmdWxsIGFjY2Vzcy4KKyAgICAgICAgICov
CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAgaWYgKCAh
KGNwLT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBYODZfVkVO
RE9SX0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2ZhdWx0Owor
CisgICAgICAgIGlmICggbGlrZWx5KCFpc19jcHVmcmVxX2NvbnRyb2xsZXIo
ZCkpIHx8IHdybXNyX3NhZmUobXNyLCB2YWwpID09IDAgKQorICAgICAgICAg
ICAgYnJlYWs7CisgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAgICBjYXNl
IE1TUl9YMkFQSUNfRklSU1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoKICAgICAg
ICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCB2ICE9IGN1cnIgKQogICAg
ICAgICAgICAgZ290byBncF9mYXVsdDsKZGlmZiAtLWdpdCBhL3hlbi9hcmNo
L3g4Ni9wdi9lbXVsLXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9wdi9lbXVs
LXByaXYtb3AuYwppbmRleCA0MjI1OGM2YmYxLi42ZGM0ZjkyYTg0IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKKysrIGIv
eGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC03NzYsMTIgKzc3
Niw2IEBAIHN0YXRpYyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlzY19lbmFi
bGUodWludDY0X3QgdmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAKLXN0YXRp
YyBpbmxpbmUgYm9vbCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29uc3Qgc3Ry
dWN0IGRvbWFpbiAqZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVxX2NvbnRy
b2xsZXIgPT0gRlJFUUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAgICAgICAg
IGlzX2hhcmR3YXJlX2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBpbnQgcmVh
ZF9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwKICAgICAg
ICAgICAgICAgICAgICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQgKmN0eHQp
CiB7CkBAIC0xMDI2LDE0ICsxMDIwLDYgQEAgc3RhdGljIGludCB3cml0ZV9t
c3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgdmFsLAogICAgICAgICAg
ICAgcmV0dXJuIFg4NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7CiAKLSAg
ICBjYXNlIE1TUl9JQTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAoIGJvb3Rf
Y3B1X2RhdGEueDg2X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVMICkKLSAg
ICAgICAgICAgIGJyZWFrOwotICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1
ZnJlcV9jb250cm9sbGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAgICB3cm1z
cl9zYWZlKHJlZywgdmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJldHVybiBY
ODZFTVVMX09LQVk7Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2FzZSBNU1Jf
SUEzMl9USEVSTV9DT05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJfRU5FUkdZ
X1BFUkZfQklBUzoKICAgICAgICAgaWYgKCBib290X2NwdV9kYXRhLng4Nl92
ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQgYS94ZW4v
aW5jbHVkZS94ZW4vc2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5o
CmluZGV4IGQ2ZTI3ZmM0YjguLjhiYjViZDdiMzggMTAwNjQ0Ci0tLSBhL3hl
bi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9z
Y2hlZC5oCkBAIC0xMDU3LDYgKzEwNTcsMjIgQEAgZXh0ZXJuIGVudW0gY3B1
ZnJlcV9jb250cm9sbGVyIHsKICAgICBGUkVRQ1RMX25vbmUsIEZSRVFDVExf
ZG9tMF9rZXJuZWwsIEZSRVFDVExfeGVuCiB9IGNwdWZyZXFfY29udHJvbGxl
cjsKIAorc3RhdGljIGFsd2F5c19pbmxpbmUgYm9vbCBpc19jcHVmcmVxX2Nv
bnRyb2xsZXIoY29uc3Qgc3RydWN0IGRvbWFpbiAqZCkKK3sKKyAgICAvKgor
ICAgICAqIEEgUFYgZG9tMCBjYW4gYmUgbm9taW5hdGVkIGFzIHRoZSBjcHVm
cmVxIGNvbnRyb2xsZXIsIGluc3RlYWQgb2YgdXNpbmcKKyAgICAgKiBYZW4n
cyBjcHVmcmVxIGRyaXZlciwgYXQgd2hpY2ggcG9pbnQgZG9tMCBnZXRzIGRp
cmVjdCBhY2Nlc3MgdG8gY2VydGFpbgorICAgICAqIE1TUnMuCisgICAgICoK
KyAgICAgKiBUaGlzIGludGVyZmFjZSBvbmx5IHdvcmtzIHdoZW4gZG9tMCBp
cyBpZGVudGl0eSBwaW5uZWQgYW5kIGhhcyB0aGUgc2FtZQorICAgICAqIG51
bWJlciBvZiB2Q1BVcyBhcyBwQ1BVcyBvbiB0aGUgc3lzdGVtLgorICAgICAq
CisgICAgICogSXQgd291bGQgYmUgZmFyIGJldHRlciB0byBwYXJhdmlydHVh
bGlzZSB0aGUgaW50ZXJmYWNlLgorICAgICAqLworICAgIHJldHVybiAoaXNf
cHZfZG9tYWluKGQpICYmIGlzX2hhcmR3YXJlX2RvbWFpbihkKSAmJgorICAg
ICAgICAgICAgY3B1ZnJlcV9jb250cm9sbGVyID09IEZSRVFDVExfZG9tMF9r
ZXJuZWwpOworfQorCiAjZGVmaW5lIENQVVBPT0xJRF9OT05FICAgIC0xCiAK
IHN0cnVjdCBjcHVwb29sICpjcHVwb29sX2dldF9ieV9pZChpbnQgcG9vbGlk
KTsK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.13-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCA4
Yzk2OTE5N2FhLi44YWI2OTQ5YThlIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTIsMTEg
KzE1MiwyMCBAQCBpbnQgZ3Vlc3RfcmRtc3Ioc3RydWN0IHZjcHUgKnYsIHVp
bnQzMl90IG1zciwgdWludDY0X3QgKnZhbCkKICAgICBjYXNlIE1TUl9UU1hf
Q1RSTDoKICAgICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CiAgICAgY2FzZSBN
U1JfUlRJVF9PVVRQVVRfQkFTRSAuLi4gTVNSX1JUSVRfQUREUl9CKDcpOgor
ICAgIGNhc2UgTVNSX1JBUExfUE9XRVJfVU5JVDoKKyAgICBjYXNlIE1TUl9Q
S0dfUE9XRVJfTElNSVQgIC4uLiBNU1JfUEtHX1BPV0VSX0lORk86CisgICAg
Y2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJf
SU5GTzoKKyAgICBjYXNlIE1TUl9QUDBfUE9XRVJfTElNSVQgIC4uLiBNU1Jf
UFAwX1BPTElDWToKKyAgICBjYXNlIE1TUl9QUDFfUE9XRVJfTElNSVQgIC4u
LiBNU1JfUFAxX1BPTElDWToKKyAgICBjYXNlIE1TUl9QTEFURk9STV9FTkVS
R1lfQ09VTlRFUjoKKyAgICBjYXNlIE1TUl9QTEFURk9STV9QT1dFUl9MSU1J
VDoKICAgICBjYXNlIE1TUl9VX0NFVDoKICAgICBjYXNlIE1TUl9TX0NFVDoK
ICAgICBjYXNlIE1TUl9QTDBfU1NQIC4uLiBNU1JfSU5URVJSVVBUX1NTUF9U
QUJMRToKICAgICBjYXNlIE1TUl9BTUQ2NF9MV1BfQ0ZHOgogICAgIGNhc2Ug
TVNSX0FNRDY0X0xXUF9DQkFERFI6CisgICAgY2FzZSBNU1JfRjE1SF9DVV9Q
T1dFUiAuLi4gTVNSX0YxNUhfQ1VfTUFYX1BPV0VSOgorICAgIGNhc2UgTVNS
X0FNRF9SQVBMX1BPV0VSX1VOSVQgLi4uIE1TUl9BTURfUEtHX0VORVJHWV9T
VEFUVVM6CiAgICAgICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8K
ICAgICAgICAgZ290byBncF9mYXVsdDsKIApAQCAtMzMwLDExICszMzksMjAg
QEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBt
c3IsIHVpbnQ2NF90IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAg
ICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9P
VVRQVVRfQkFTRSAuLi4gTVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2Ug
TVNSX1JBUExfUE9XRVJfVU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJf
TElNSVQgIC4uLiBNU1JfUEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1Jf
RFJBTV9QT1dFUl9MSU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAg
ICBjYXNlIE1TUl9QUDBfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElD
WToKKyAgICBjYXNlIE1TUl9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAx
X1BPTElDWToKKyAgICBjYXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRF
UjoKKyAgICBjYXNlIE1TUl9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBj
YXNlIE1TUl9VX0NFVDoKICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNl
IE1TUl9QTDBfU1NQIC4uLiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKICAg
ICBjYXNlIE1TUl9BTUQ2NF9MV1BfQ0ZHOgogICAgIGNhc2UgTVNSX0FNRDY0
X0xXUF9DQkFERFI6CisgICAgY2FzZSBNU1JfRjE1SF9DVV9QT1dFUiAuLi4g
TVNSX0YxNUhfQ1VfTUFYX1BPV0VSOgorICAgIGNhc2UgTVNSX0FNRF9SQVBM
X1BPV0VSX1VOSVQgLi4uIE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVM6CiAg
ICAgICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8KICAgICAgICAg
Z290byBncF9mYXVsdDsKIApkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tc3ItaW5kZXguaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWlu
ZGV4LmgKaW5kZXggMGViNjg1NTYxNC4uYmE5ZTkwYWYyMSAxMDA2NDQKLS0t
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC05Niw2ICs5NiwzOCBA
QAogLyogTG93ZXIgNiBiaXRzIGRlZmluZSB0aGUgZm9ybWF0IG9mIHRoZSBh
ZGRyZXNzIGluIHRoZSBMQlIgc3RhY2sgKi8KICNkZWZpbmUgTVNSX0lBMzJf
UEVSRl9DQVBfTEJSX0ZPUk1BVAkweDNmCiAKKy8qCisgKiBJbnRlbCBSdW50
aW1lIEF2ZXJhZ2UgUG93ZXIgTGltaXRpbmcgKFJBUEwpIGludGVyZmFjZS4g
IFBvd2VyIHBsYW5lIGJhc2UKKyAqIGFkZHJlc3NlcyAoTVNSXypfUE9XRVJf
TElNSVQpIGFyZSBtb2RlbCBzcGVjaWZpYywgYnV0IGhhdmUgc28tZmFyIGJl
ZW4KKyAqIGNvbnNpc3RlbnQgc2luY2UgdGhlaXIgaW50cm9kdWN0aW9uIGlu
IFNhbmR5QnJpZGdlLgorICoKKyAqIE9mZnNldHMgb2YgZnVuY3Rpb25hbGl0
eSBmcm9tIHRoZSBwb3dlciBwbGFuZSBiYXNlIGlzIGFyY2hpdGVjdHVyYWws
IGJ1dAorICogbm90IGFsbCBwb3dlciBwbGFuZXMgc3VwcG9ydCBhbGwgZnVu
Y3Rpb25hbGl0eS4KKyAqLworI2RlZmluZSBNU1JfUkFQTF9QT1dFUl9VTklU
CQkweDAwMDAwNjA2CisKKyNkZWZpbmUgTVNSX1BLR19QT1dFUl9MSU1JVAkJ
MHgwMDAwMDYxMAorI2RlZmluZSBNU1JfUEtHX0VORVJHWV9TVEFUVVMJCTB4
MDAwMDA2MTEKKyNkZWZpbmUgTVNSX1BLR19QRVJGX1NUQVRVUwkJMHgwMDAw
MDYxMworI2RlZmluZSBNU1JfUEtHX1BPV0VSX0lORk8JCTB4MDAwMDA2MTQK
KworI2RlZmluZSBNU1JfRFJBTV9QT1dFUl9MSU1JVAkJMHgwMDAwMDYxOAor
I2RlZmluZSBNU1JfRFJBTV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjE5Cisj
ZGVmaW5lIE1TUl9EUkFNX1BFUkZfU1RBVFVTCQkweDAwMDAwNjFiCisjZGVm
aW5lIE1TUl9EUkFNX1BPV0VSX0lORk8JCTB4MDAwMDA2MWMKKworI2RlZmlu
ZSBNU1JfUFAwX1BPV0VSX0xJTUlUCQkweDAwMDAwNjM4CisjZGVmaW5lIE1T
Ul9QUDBfRU5FUkdZX1NUQVRVUwkJMHgwMDAwMDYzOQorI2RlZmluZSBNU1Jf
UFAwX1BPTElDWQkJCTB4MDAwMDA2M2EKKworI2RlZmluZSBNU1JfUFAxX1BP
V0VSX0xJTUlUCQkweDAwMDAwNjQwCisjZGVmaW5lIE1TUl9QUDFfRU5FUkdZ
X1NUQVRVUwkJMHgwMDAwMDY0MQorI2RlZmluZSBNU1JfUFAxX1BPTElDWQkJ
CTB4MDAwMDA2NDIKKworLyogSW50ZWwgUGxhdGZvcm0td2lkZSBwb3dlciBp
bnRlcmZhY2UuICovCisjZGVmaW5lIE1TUl9QTEFURk9STV9FTkVSR1lfQ09V
TlRFUgkweDAwMDAwNjRkCisjZGVmaW5lIE1TUl9QTEFURk9STV9QT1dFUl9M
SU1JVAkweDAwMDAwNjVjCisKICNkZWZpbmUgTVNSX0lBMzJfQk5EQ0ZHUwkJ
MHgwMDAwMGQ5MAogI2RlZmluZSBJQTMyX0JORENGR1NfRU5BQkxFCQkweDAw
MDAwMDAxCiAjZGVmaW5lIElBMzJfQk5EQ0ZHU19QUkVTRVJWRQkJMHgwMDAw
MDAwMgpAQCAtMjM2LDYgKzI2OCw4IEBACiAjZGVmaW5lIE1TUl9LOF9WTV9D
UgkJCTB4YzAwMTAxMTQKICNkZWZpbmUgTVNSX0s4X1ZNX0hTQVZFX1BBCQkw
eGMwMDEwMTE3CiAKKyNkZWZpbmUgTVNSX0YxNUhfQ1VfUE9XRVIJCTB4YzAw
MTAwN2EKKyNkZWZpbmUgTVNSX0YxNUhfQ1VfTUFYX1BPV0VSCQkweGMwMDEw
MDdiCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX0VWTlRTRUwwCQkweGMwMDEw
MjAwCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX1BFUkZDVFIwCQkweGMwMDEw
MjAxCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX0VWTlRTRUwxCQkweGMwMDEw
MjAyCkBAIC0yNDksNiArMjgzLDEwIEBACiAjZGVmaW5lIE1TUl9BTURfRkFN
MTVIX0VWTlRTRUw1CQkweGMwMDEwMjBhCiAjZGVmaW5lIE1TUl9BTURfRkFN
MTVIX1BFUkZDVFI1CQkweGMwMDEwMjBiCiAKKyNkZWZpbmUgTVNSX0FNRF9S
QVBMX1BPV0VSX1VOSVQJCTB4YzAwMTAyOTkKKyNkZWZpbmUgTVNSX0FNRF9D
T1JFX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5YQorI2RlZmluZSBNU1JfQU1E
X1BLR19FTkVSR1lfU1RBVFVTCTB4YzAwMTAyOWIKKwogI2RlZmluZSBNU1Jf
QU1EX0w3UzBfRkVBVFVSRV9NQVNLCTB4YzAwMTEwMDIKICNkZWZpbmUgTVNS
X0FNRF9USFJNX0ZFQVRVUkVfTUFTSwkweGMwMDExMDAzCiAjZGVmaW5lIE1T
Ul9LOF9GRUFUVVJFX01BU0sJCTB4YzAwMTEwMDQK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.14-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCBkNzJhYjBmYTFmLi4zZGIyNmZhZjA4IDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yNDUsNiArMjQ1LDI1IEBAIGludCBndWVzdF9yZG1z
cihzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFs
KQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19lbmFibGVz
LnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAgICAgICAg
ICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBU
aGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2UgdGhlIFBl
bnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRvcnMuCisg
ICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2YgV2luZG93
cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBhICNHUAor
ICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVyYWwgdW5n
dWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAgICAqIFJB
WiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1bGwgYWNj
ZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BFUkZfU1RB
VFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlm
ICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2
X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBsaWtlbHko
IWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2FmZShtc3Is
ICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4u
IE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFp
bihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKQEAgLTM0Myw2ICszNjIsNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0
IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgIGNh
c2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNhc2UgTVNS
X0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJDSF9DQVBB
QklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRVUzoKICAg
ICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVEVTVF9DVFJM
OgogICAgIGNhc2UgTVNSX1RTWF9GT1JDRV9BQk9SVDoKQEAgLTQ1NCw2ICs0
NzQsMjEgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50
MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICAgICAgYnJlYWs7CiAgICAg
fQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgTVNSIGlzIG5vdCBl
bnVtZXJhdGVkIGluIENQVUlELiAgSXQgaGFzIGJlZW4gYXJvdW5kIHNpbmNl
IHRoZQorICAgICAgICAgKiBQZW50aXVtIDQsIGFuZCBpbXBsZW1lbnRlZCBi
eSBvdGhlciB2ZW5kb3JzLgorICAgICAgICAgKgorICAgICAgICAgKiBUbyBt
YXRjaCB0aGUgUkFaIHNlbWFudGljcywgaW1wbGVtZW50IGFzIHdyaXRlLWRp
c2NhcmQsIGV4Y2VwdCBmb3IKKyAgICAgICAgICogYSBjcHVmcmVxIGNvbnRy
b2xsZXIgZG9tMCB3aGljaCBoYXMgZnVsbCBhY2Nlc3MuCisgICAgICAgICAq
LworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlmICgg
IShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2X1ZF
TkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVsdDsK
KworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250cm9sbGVy
KGQpKSB8fCB3cm1zcl9zYWZlKG1zciwgdmFsKSA9PSAwICkKKyAgICAgICAg
ICAgIGJyZWFrOworICAgICAgICBnb3RvIGdwX2ZhdWx0OworCiAgICAgY2Fz
ZSBNU1JfWDJBUElDX0ZJUlNUIC4uLiBNU1JfWDJBUElDX0xBU1Q6CiAgICAg
ICAgIGlmICggIWlzX2h2bV9kb21haW4oZCkgfHwgdiAhPSBjdXJyICkKICAg
ICAgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CmRpZmYgLS1naXQgYS94ZW4vYXJj
aC94ODYvcHYvZW11bC1wcml2LW9wLmMgYi94ZW4vYXJjaC94ODYvcHYvZW11
bC1wcml2LW9wLmMKaW5kZXggODVhOWZkNDc2Ny4uNWM3YjkxMTdhZSAxMDA2
NDQKLS0tIGEveGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYwpAQCAtODIwLDEyICs4
MjAsNiBAQCBzdGF0aWMgaW5saW5lIHVpbnQ2NF90IGd1ZXN0X21pc2NfZW5h
YmxlKHVpbnQ2NF90IHZhbCkKICAgICByZXR1cm4gdmFsOwogfQogCi1zdGF0
aWMgaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9jb250cm9sbGVyKGNvbnN0IHN0
cnVjdCBkb21haW4gKmQpCi17Ci0gICAgcmV0dXJuICgoY3B1ZnJlcV9jb250
cm9sbGVyID09IEZSRVFDVExfZG9tMF9rZXJuZWwpICYmCi0gICAgICAgICAg
ICBpc19oYXJkd2FyZV9kb21haW4oZCkpOwotfQotCiBzdGF0aWMgaW50IHJl
YWRfbXNyKHVuc2lnbmVkIGludCByZWcsIHVpbnQ2NF90ICp2YWwsCiAgICAg
ICAgICAgICAgICAgICAgIHN0cnVjdCB4ODZfZW11bGF0ZV9jdHh0ICpjdHh0
KQogewpAQCAtMTA3MCwxNCArMTA2NCw2IEBAIHN0YXRpYyBpbnQgd3JpdGVf
bXNyKHVuc2lnbmVkIGludCByZWcsIHVpbnQ2NF90IHZhbCwKICAgICAgICAg
ICAgIHJldHVybiBYODZFTVVMX09LQVk7CiAgICAgICAgIGJyZWFrOwogCi0g
ICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKLSAgICAgICAgaWYgKCBib290
X2NwdV9kYXRhLng4Nl92ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCi0g
ICAgICAgICAgICBicmVhazsKLSAgICAgICAgaWYgKCBsaWtlbHkoIWlzX2Nw
dWZyZXFfY29udHJvbGxlcihjdXJyZCkpIHx8Ci0gICAgICAgICAgICAgd3Jt
c3Jfc2FmZShyZWcsIHZhbCkgPT0gMCApCi0gICAgICAgICAgICByZXR1cm4g
WDg2RU1VTF9PS0FZOwotICAgICAgICBicmVhazsKLQogICAgIGNhc2UgTVNS
X0lBMzJfVEhFUk1fQ09OVFJPTDoKICAgICBjYXNlIE1TUl9JQTMyX0VORVJH
WV9QRVJGX0JJQVM6CiAgICAgICAgIGlmICggYm9vdF9jcHVfZGF0YS54ODZf
dmVuZG9yICE9IFg4Nl9WRU5ET1JfSU5URUwgKQpkaWZmIC0tZ2l0IGEveGVu
L2luY2x1ZGUveGVuL3NjaGVkLmggYi94ZW4vaW5jbHVkZS94ZW4vc2NoZWQu
aAppbmRleCBhMGQ4N2VmOWQwLi45N2JhOGUwNzk1IDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaApAQCAtMTA3MSw2ICsxMDcxLDIyIEBAIGV4dGVybiBlbnVtIGNw
dWZyZXFfY29udHJvbGxlciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVRQ1RM
X2RvbTBfa2VybmVsLCBGUkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRyb2xs
ZXI7CiAKK3N0YXRpYyBhbHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9j
b250cm9sbGVyKGNvbnN0IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAgLyoK
KyAgICAgKiBBIFBWIGRvbTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUgY3B1
ZnJlcSBjb250cm9sbGVyLCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICogWGVu
J3MgY3B1ZnJlcSBkcml2ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0cyBk
aXJlY3QgYWNjZXNzIHRvIGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAgICAq
CisgICAgICogVGhpcyBpbnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRvbTAg
aXMgaWRlbnRpdHkgcGlubmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAgKiBu
dW1iZXIgb2YgdkNQVXMgYXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAgICAg
KgorICAgICAqIEl0IHdvdWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZpcnR1
YWxpc2UgdGhlIGludGVyZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4gKGlz
X3B2X2RvbWFpbihkKSAmJiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYKKyAg
ICAgICAgICAgIGNwdWZyZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2RvbTBf
a2VybmVsKTsKK30KKwogaW50IGNwdXBvb2xfbW92ZV9kb21haW4oc3RydWN0
IGRvbWFpbiAqZCwgc3RydWN0IGNwdXBvb2wgKmMpOwogaW50IGNwdXBvb2xf
ZG9fc3lzY3RsKHN0cnVjdCB4ZW5fc3lzY3RsX2NwdXBvb2xfb3AgKm9wKTsK
IGludCBjcHVwb29sX2dldF9pZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.14-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCAz
ZGIyNmZhZjA4Li5hYTEwNzgyM2FjIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xODUsNiAr
MTg1LDEzIEBAIGludCBndWVzdF9yZG1zcihzdHJ1Y3QgdmNwdSAqdiwgdWlu
dDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNSX1RTWF9D
VFJMOgogICAgIGNhc2UgTVNSX01DVV9PUFRfQ1RSTDoKICAgICBjYXNlIE1T
Ul9SVElUX09VVFBVVF9CQVNFIC4uLiBNU1JfUlRJVF9BRERSX0IoNyk6Cisg
ICAgY2FzZSBNU1JfUkFQTF9QT1dFUl9VTklUOgorICAgIGNhc2UgTVNSX1BL
R19QT1dFUl9MSU1JVCAgLi4uIE1TUl9QS0dfUE9XRVJfSU5GTzoKKyAgICBj
YXNlIE1TUl9EUkFNX1BPV0VSX0xJTUlUIC4uLiBNU1JfRFJBTV9QT1dFUl9J
TkZPOgorICAgIGNhc2UgTVNSX1BQMF9QT1dFUl9MSU1JVCAgLi4uIE1TUl9Q
UDBfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BQMV9QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QUDFfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX0VORVJH
WV9DT1VOVEVSOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX1BPV0VSX0xJTUlU
OgogICAgIGNhc2UgTVNSX1VfQ0VUOgogICAgIGNhc2UgTVNSX1NfQ0VUOgog
ICAgIGNhc2UgTVNSX1BMMF9TU1AgLi4uIE1TUl9JTlRFUlJVUFRfU1NQX1RB
QkxFOgpAQCAtMTkyLDYgKzE5OSw4IEBAIGludCBndWVzdF9yZG1zcihzdHJ1
Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAg
IGNhc2UgTVNSX0FNRDY0X0xXUF9DQkFERFI6CiAgICAgY2FzZSBNU1JfUFBJ
Tl9DVEw6CiAgICAgY2FzZSBNU1JfUFBJTjoKKyAgICBjYXNlIE1TUl9GMTVI
X0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAgY2Fz
ZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0dfRU5F
UkdZX1NUQVRVUzoKICAgICBjYXNlIE1TUl9BTURfUFBJTl9DVEw6CiAgICAg
Y2FzZSBNU1JfQU1EX1BQSU46CiAgICAgICAgIC8qIE5vdCBvZmZlcmVkIHRv
IGd1ZXN0cy4gKi8KQEAgLTM2OSw2ICszNzgsMTMgQEAgaW50IGd1ZXN0X3dy
bXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZh
bCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAgICBjYXNlIE1TUl9NQ1Vf
T1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9PVVRQVVRfQkFTRSAuLi4g
TVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2UgTVNSX1JBUExfUE9XRVJf
VU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBNU1Jf
UEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1J
VCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9QUDBf
UE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNlIE1T
Ul9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAgICBj
YXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNlIE1T
Ul9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBjYXNlIE1TUl9VX0NFVDoK
ICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNlIE1TUl9QTDBfU1NQIC4u
LiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKQEAgLTM3Niw2ICszOTIsOCBA
QCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1z
ciwgdWludDY0X3QgdmFsKQogICAgIGNhc2UgTVNSX0FNRDY0X0xXUF9DQkFE
RFI6CiAgICAgY2FzZSBNU1JfUFBJTl9DVEw6CiAgICAgY2FzZSBNU1JfUFBJ
TjoKKyAgICBjYXNlIE1TUl9GMTVIX0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9D
VV9NQVhfUE9XRVI6CisgICAgY2FzZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5J
VCAuLi4gTVNSX0FNRF9QS0dfRU5FUkdZX1NUQVRVUzoKICAgICBjYXNlIE1T
Ul9BTURfUFBJTl9DVEw6CiAgICAgY2FzZSBNU1JfQU1EX1BQSU46CiAgICAg
ICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8KZGlmZiAtLWdpdCBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmggYi94ZW4vaW5jbHVk
ZS9hc20teDg2L21zci1pbmRleC5oCmluZGV4IDBmZTk4YWY5MjMuLjVlNjRl
Y2ZmOTEgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWlu
ZGV4LmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaApA
QCAtNzcsNiArNzcsMzggQEAKICNkZWZpbmUgTVNSX1JUSVRfQUREUl9BKG4p
ICAgICAgICAgICAgICAgICAoMHgwMDAwMDU4MCArIChuKSAqIDIpCiAjZGVm
aW5lIE1TUl9SVElUX0FERFJfQihuKSAgICAgICAgICAgICAgICAgKDB4MDAw
MDA1ODEgKyAobikgKiAyKQogCisvKgorICogSW50ZWwgUnVudGltZSBBdmVy
YWdlIFBvd2VyIExpbWl0aW5nIChSQVBMKSBpbnRlcmZhY2UuICBQb3dlciBw
bGFuZSBiYXNlCisgKiBhZGRyZXNzZXMgKE1TUl8qX1BPV0VSX0xJTUlUKSBh
cmUgbW9kZWwgc3BlY2lmaWMsIGJ1dCBoYXZlIHNvLWZhciBiZWVuCisgKiBj
b25zaXN0ZW50IHNpbmNlIHRoZWlyIGludHJvZHVjdGlvbiBpbiBTYW5keUJy
aWRnZS4KKyAqCisgKiBPZmZzZXRzIG9mIGZ1bmN0aW9uYWxpdHkgZnJvbSB0
aGUgcG93ZXIgcGxhbmUgYmFzZSBpcyBhcmNoaXRlY3R1cmFsLCBidXQKKyAq
IG5vdCBhbGwgcG93ZXIgcGxhbmVzIHN1cHBvcnQgYWxsIGZ1bmN0aW9uYWxp
dHkuCisgKi8KKyNkZWZpbmUgTVNSX1JBUExfUE9XRVJfVU5JVCAgICAgICAg
ICAgICAgICAgMHgwMDAwMDYwNgorCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJf
TElNSVQgICAgICAgICAgICAgICAgIDB4MDAwMDA2MTAKKyNkZWZpbmUgTVNS
X1BLR19FTkVSR1lfU1RBVFVTICAgICAgICAgICAgICAgMHgwMDAwMDYxMQor
I2RlZmluZSBNU1JfUEtHX1BFUkZfU1RBVFVTICAgICAgICAgICAgICAgICAw
eDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJfSU5GTyAgICAgICAg
ICAgICAgICAgIDB4MDAwMDA2MTQKKworI2RlZmluZSBNU1JfRFJBTV9QT1dF
Ul9MSU1JVCAgICAgICAgICAgICAgICAweDAwMDAwNjE4CisjZGVmaW5lIE1T
Ul9EUkFNX0VORVJHWV9TVEFUVVMgICAgICAgICAgICAgIDB4MDAwMDA2MTkK
KyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMgICAgICAgICAgICAgICAg
MHgwMDAwMDYxYgorI2RlZmluZSBNU1JfRFJBTV9QT1dFUl9JTkZPICAgICAg
ICAgICAgICAgICAweDAwMDAwNjFjCisKKyNkZWZpbmUgTVNSX1BQMF9QT1dF
Ul9MSU1JVCAgICAgICAgICAgICAgICAgMHgwMDAwMDYzOAorI2RlZmluZSBN
U1JfUFAwX0VORVJHWV9TVEFUVVMgICAgICAgICAgICAgICAweDAwMDAwNjM5
CisjZGVmaW5lIE1TUl9QUDBfUE9MSUNZICAgICAgICAgICAgICAgICAgICAg
IDB4MDAwMDA2M2EKKworI2RlZmluZSBNU1JfUFAxX1BPV0VSX0xJTUlUICAg
ICAgICAgICAgICAgICAweDAwMDAwNjQwCisjZGVmaW5lIE1TUl9QUDFfRU5F
UkdZX1NUQVRVUyAgICAgICAgICAgICAgIDB4MDAwMDA2NDEKKyNkZWZpbmUg
TVNSX1BQMV9QT0xJQ1kgICAgICAgICAgICAgICAgICAgICAgMHgwMDAwMDY0
MgorCisvKiBJbnRlbCBQbGF0Zm9ybS13aWRlIHBvd2VyIGludGVyZmFjZS4g
Ki8KKyNkZWZpbmUgTVNSX1BMQVRGT1JNX0VORVJHWV9DT1VOVEVSICAgICAg
ICAgMHgwMDAwMDY0ZAorI2RlZmluZSBNU1JfUExBVEZPUk1fUE9XRVJfTElN
SVQgICAgICAgICAgICAweDAwMDAwNjVjCisKICNkZWZpbmUgTVNSX1VfQ0VU
ICAgICAgICAgICAgICAgICAgICAgICAgICAgMHgwMDAwMDZhMAogI2RlZmlu
ZSBNU1JfU19DRVQgICAgICAgICAgICAgICAgICAgICAgICAgICAweDAwMDAw
NmEyCiAjZGVmaW5lICBDRVRfU0hTVEtfRU4gICAgICAgICAgICAgICAgICAg
ICAgIChfQUMoMSwgVUxMKSA8PCAgMCkKQEAgLTkyLDYgKzEyNCwxMyBAQAog
I2RlZmluZSAgUEFTSURfUEFTSURfTUFTSyAgICAgICAgICAgICAgICAgICAw
eDAwMGZmZmZmCiAjZGVmaW5lICBQQVNJRF9WQUxJRCAgICAgICAgICAgICAg
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAzMSkKIAorI2RlZmluZSBNU1Jf
RjE1SF9DVV9QT1dFUiAgICAgICAgICAgICAgICAgICAweGMwMDEwMDdhCisj
ZGVmaW5lIE1TUl9GMTVIX0NVX01BWF9QT1dFUiAgICAgICAgICAgICAgIDB4
YzAwMTAwN2IKKworI2RlZmluZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAg
ICAgICAgICAgICAweGMwMDEwMjk5CisjZGVmaW5lIE1TUl9BTURfQ09SRV9F
TkVSR1lfU1RBVFVTICAgICAgICAgIDB4YzAwMTAyOWEKKyNkZWZpbmUgTVNS
X0FNRF9QS0dfRU5FUkdZX1NUQVRVUyAgICAgICAgICAgMHhjMDAxMDI5Ygor
CiAvKgogICogTGVnYWN5IE1TUiBjb25zdGFudHMgaW4gbmVlZCBvZiBjbGVh
bnVwLiAgTm8gbmV3IE1TUnMgYmVsb3cgdGhpcyBjb21tZW50LgogICovCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 19:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 19:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23833.50828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYtK-00053N-8M; Tue, 10 Nov 2020 19:01:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23833.50828; Tue, 10 Nov 2020 19:01:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcYtK-00053G-40; Tue, 10 Nov 2020 19:01:18 +0000
Received: by outflank-mailman (input) for mailman id 23833;
 Tue, 10 Nov 2020 19:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcYtJ-000538-HK
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:01:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97dbbf6a-8abd-49c1-b171-50c38eb63e74;
 Tue, 10 Nov 2020 19:01:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcYtF-0008ES-H6; Tue, 10 Nov 2020 19:01:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcYtF-0001xb-9J; Tue, 10 Nov 2020 19:01:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcYtF-0004do-8o; Tue, 10 Nov 2020 19:01:13 +0000
X-Inumbo-ID: 97dbbf6a-8abd-49c1-b171-50c38eb63e74
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tVbSzxaZJuPrwVudZ2yYVlvbjb+kOzDVQoorx2K4dq4=; b=1Q8Jc49LGkFgfgR/hE09OfJ/3M
	4Ytbm9KTpQaROSQVGYT0E6LnOY9kJ4kNZaagFZM1QQDJgCUTySXu/eAdUjzKcVGzc5KoU6xyuysF8
	NxBuxBdLvfEh/rYxCbgjSeCF6aaQq1/kIE8/9KkgSKiJdDjPaNKxxS4oclPvNGlH1h0M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156611-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156611: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=43ee7c6db119add9591357b643378d090769307b
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 19:01:13 +0000

flight 156611 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156611/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              43ee7c6db119add9591357b643378d090769307b
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  123 days
Failing since        151818  2020-07-11 04:18:52 Z  122 days  117 attempts
Testing same since   156611  2020-11-10 04:18:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25623 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 19:23:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 19:23:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23887.50864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZEs-0007Hq-Di; Tue, 10 Nov 2020 19:23:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23887.50864; Tue, 10 Nov 2020 19:23:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZEs-0007Hh-Ab; Tue, 10 Nov 2020 19:23:34 +0000
Received: by outflank-mailman (input) for mailman id 23887;
 Tue, 10 Nov 2020 19:23:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bdNS=EQ=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kcZEr-0007Fc-3o
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:23:33 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4fc5809-74db-4bb1-8e36-1f8220cffab0;
 Tue, 10 Nov 2020 19:23:28 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id p8so13183696wrx.5
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 11:23:28 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id p13sm18180232wrt.73.2020.11.10.11.23.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Nov 2020 11:23:24 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id B762B1FF91;
 Tue, 10 Nov 2020 19:23:16 +0000 (GMT)
X-Inumbo-ID: c4fc5809-74db-4bb1-8e36-1f8220cffab0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=OhJqzhnruESxyQBsuUBkTpXlj1KG2mpYOvcyz9G88Zg=;
        b=BYj3pLZXVoexXGwdJoXo1ydbNurkgF3mKNmPDG3Is3roLOOYY61mRZaAeKYdfR2SjW
         KxSfs6RQULqqvPzSFB6PN1DtWt5qXqNyLLD5MZlJHlewRYGyEPll6BUdFyVkwmtctZHJ
         Yq6WfWMDUmfhzZXQGnMOV0Mr5xZQDi3gXQHPL86mry8dGE8K0QCsj+F2vDaaVgKpDWOv
         gYGBrfXitnqfpKOWTZERUH6IU7koDoj4/oQf2QIKlCNkjqh6I3A1nvV4K1mGZ50cEUSX
         5zNzMZi7Y2Y+0Kch7/mDjmE/EyZ5cDSI9Zd+nUqgSxMUxDW963aHU9n0xlCSohwJr2hQ
         DBrQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=OhJqzhnruESxyQBsuUBkTpXlj1KG2mpYOvcyz9G88Zg=;
        b=X+laSz/4O1OvcQgI6sr/JpjttAIpfJqA/addNj5asxeGw5FUX49a54Fs5GRMTSGvrr
         oGNGTw0F2dbfpPl6XaOSP2yhtQmTwmFs0LnSbLXPVQAVZxufJ2WEb87p5SUwnGcv2d0h
         o3+RXgACnFD1Z3Dqwb3RDjftQrJffm1sp3fyDsFhdiuOYVkhX40OtJHxmudhO3434Awf
         WQW2w6GaYjNwxJ5Dvd0ZWi4mfMEp7FnCUbbY5zySABOYi4k+9U7faw2zVGRMfds39dsg
         5AbLQWAxpzU7XOVDrYSOyovH2NS+KKYBNu05R4dKIRr5VB5pHPkIY/4q4jGuJHWxmRw5
         NiFg==
X-Gm-Message-State: AOAM532nQHIK5dozZHa9FQZl0eoTAlwBwhsRY5/HKTwuO4s4kwLQdYav
	n6gdXQJ9oFoEFBQAAbUk7/2psw==
X-Google-Smtp-Source: ABdhPJzVSbE+5KXb+q3qWvL34oqGwQ/Ri7FBsgcn0thcQllMwFwg5o7MK5c47Opz1q4GH2WYIiwDzQ==
X-Received: by 2002:a5d:4d86:: with SMTP id b6mr25370577wru.369.1605036207888;
        Tue, 10 Nov 2020 11:23:27 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PATCH  v1 05/10] stubs/xen-hw-stub: drop xenstore_store_pv_console_info stub
Date: Tue, 10 Nov 2020 19:23:11 +0000
Message-Id: <20201110192316.26397-6-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110192316.26397-1-alex.bennee@linaro.org>
References: <20201110192316.26397-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We should never build something that calls this without having it.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201105175153.30489-13-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 stubs/xen-hw-stub.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 2ea8190921..15f3921a76 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -10,10 +10,6 @@
 #include "hw/xen/xen.h"
 #include "hw/xen/xen-x86.h"
 
-void xenstore_store_pv_console_info(int i, Chardev *chr)
-{
-}
-
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
     return -1;
-- 
2.20.1
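[Editorial note: the one-line rationale above ("we should never build something that calls this without having it") refers to the stub-linking pattern. The sketch below illustrates that pattern under assumed names; `xen_pirq_sketch` and the `CONFIG_XEN_REAL` flag are illustrative stand-ins, not QEMU's actual symbols.]

```c
/* Sketch of the stub-linking pattern: a stub translation unit supplies
 * do-nothing definitions so that code compiled without a subsystem still
 * links.  Deleting a stub, as the patch above does, asserts that no
 * remaining caller can be built without the real implementation. */

#ifdef CONFIG_XEN_REAL
/* Real implementation, only built when the subsystem is enabled. */
int xen_pirq_sketch(int slot, int irq_num)
{
    return slot * 4 + irq_num;   /* pretend pin-to-pirq mapping */
}
#else
/* Stub: keeps the link working while reporting "no pirq" to callers. */
int xen_pirq_sketch(int slot, int irq_num)
{
    (void)slot;
    (void)irq_num;
    return -1;
}
#endif
```

With `CONFIG_XEN_REAL` undefined, callers link against the stub and get -1; once every caller is guarded by the build system, the stub becomes dead weight and can be dropped.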



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 19:23:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 19:23:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23886.50851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZEn-0007GH-5U; Tue, 10 Nov 2020 19:23:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23886.50851; Tue, 10 Nov 2020 19:23:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZEn-0007GA-2Z; Tue, 10 Nov 2020 19:23:29 +0000
Received: by outflank-mailman (input) for mailman id 23886;
 Tue, 10 Nov 2020 19:23:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bdNS=EQ=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kcZEm-0007Fc-6C
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:23:28 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c8ec26c-d7cf-4a93-beb5-ec25070f65b1;
 Tue, 10 Nov 2020 19:23:27 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id b8so13989111wrn.0
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 11:23:27 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id c62sm3929964wme.22.2020.11.10.11.23.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 10 Nov 2020 11:23:22 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id A08201FF90;
 Tue, 10 Nov 2020 19:23:16 +0000 (GMT)
X-Inumbo-ID: 5c8ec26c-d7cf-4a93-beb5-ec25070f65b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=gBkEgjV7awvtEg/RUlUyLYVtc42f18uJSH1j4TZVeBA=;
        b=eiSuFCpkFUCvuaKxPzO1vnOi1gud/W7G7GTv6WfIN6CALMklhYzvmAaHZGhY9HIZf5
         WqKRHudZ8IL+U8FIve9wdULV/KdmA1+R64lV9664NK5ozYDA3vYtGs8eBmpJm0u6D1Zr
         E841WUtTp0iCQNT0qvBF+bVffGVzntKBWuWdM05af4wouBq0nvSPL7rR76rYmsVQU8Rk
         KqrZGfdDQcegSk4rz/jnXjh+dcwIYVznj0IKuKwYwDgLN1dhNVZTiTCKFNeFnByFGfec
         ESHxG7w8B1UAdhKgGRN56Dh4C61M3dcDUFBuWsiDvOK6+AeCdnlxAVOwaBuPrGl99mvW
         SxQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=gBkEgjV7awvtEg/RUlUyLYVtc42f18uJSH1j4TZVeBA=;
        b=qh4CveKIHfrHJfCXBQWeg+HmDsKHnxdBXBb4kw66P5WwJd1zvRtEC8cYHLXfjFDbW9
         U2G50ixLjlx6QSDCDgWX6OJf4pBo5DDtYsUgqneUqkCgsF71Yk9H9IHpSo23U4Goyi++
         Cvohh+QCdTCRk0jOvWL72ijLT7L0hOo7Ud+EsWD/18efCNfa6dM+aweMezB1C/QexTt1
         vulWyqFMBt2AyBGxDk56IaR3as81gEn9dgaPFkC3oFLfyrvjlsWusSLOMu0m5mYymJw0
         rKo7c4Y99ayv05hd2oFUcb3W9tD3qpBXy0B6ae2LqVTrSO/wQXjQJOD0ZgTcRSKxtG29
         JKUg==
X-Gm-Message-State: AOAM5339mDZ3a3lnIe7s7CdP5IlJzJLDjKdE04+DMO76l8BT6oDD2jr8
	ch5+mFxFX0C2eJy/FGAskSYJRw==
X-Google-Smtp-Source: ABdhPJxa3cy975GPrMS66E+XZOpqdG7rl0e9DLYbMS13M4jWqT5KLVU3E0q/DBVsCPGl9Li+10Sakw==
X-Received: by 2002:a5d:474f:: with SMTP id o15mr24752225wrs.377.1605036206523;
        Tue, 10 Nov 2020 11:23:26 -0800 (PST)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PATCH  v1 04/10] include/hw/xen.h: drop superfluous struct
Date: Tue, 10 Nov 2020 19:23:10 +0000
Message-Id: <20201110192316.26397-5-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201110192316.26397-1-alex.bennee@linaro.org>
References: <20201110192316.26397-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Chardev is already a typedef'ed struct.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201105175153.30489-12-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/hw/xen/xen.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 1406648ca5..0f9962b1c1 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -28,7 +28,7 @@ int xen_is_pirq_msi(uint32_t msi_data);
 
 qemu_irq *xen_interrupt_controller_init(void);
 
-void xenstore_store_pv_console_info(int i, struct Chardev *chr);
+void xenstore_store_pv_console_info(int i, Chardev *chr);
 
 void xen_register_framebuffer(struct MemoryRegion *mr);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 19:44:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 19:44:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23911.50886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZZS-0000uK-8G; Tue, 10 Nov 2020 19:44:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23911.50886; Tue, 10 Nov 2020 19:44:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZZS-0000uD-4z; Tue, 10 Nov 2020 19:44:50 +0000
Received: by outflank-mailman (input) for mailman id 23911;
 Tue, 10 Nov 2020 19:44:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcZZP-0000u8-V6
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:44:48 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d402c991-56f3-4249-882a-27b27dcd0b60;
 Tue, 10 Nov 2020 19:44:46 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id o24so9439933ljj.6
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 11:44:46 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id a1sm2199762lfg.282.2020.11.10.11.44.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 11:44:45 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kcZZP-0000u8-V6
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:44:48 +0000
X-Inumbo-ID: d402c991-56f3-4249-882a-27b27dcd0b60
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d402c991-56f3-4249-882a-27b27dcd0b60;
	Tue, 10 Nov 2020 19:44:46 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id o24so9439933ljj.6
        for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 11:44:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=Ijt0C6F19hg6tRiJ0CIaXs4iO2mhbcftXqOZOihG6X4=;
        b=Ou3d8sk7kgCk4cWeFoZO2+Nd72Qw0VXLPPuGQ8Pg2WjRzJwaFf9yXYZiBp6xUFJHiw
         FTDlHPmMnJ6+QVBYxv6Pf+vfFc1rm8UchkZ4WweOusXxs7En8GB1qNSl+FDMLuiBTde5
         AZhW6+8VLKtX+ks3xF/lu5e0RLE+CNbmatGGbuHwsIzoZR5RGmH6NltDRQM710zDTP33
         padiT1EQscePqN8Yc/bIQ81FF9WEnpESJnSp1gmlOHlM4Rg8KD+VQYCSJugbOJYY3oXP
         3hsGPtwkUTU28Z8HJepRlwxSzjCgW4YKUrgLp7iQt9G2IZJTxdPmHa71lkGdja0j+rTz
         rIoA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=Ijt0C6F19hg6tRiJ0CIaXs4iO2mhbcftXqOZOihG6X4=;
        b=MKRcAL/Un1Twf/2e9TGdvhVISqCBdwybTqnkGeoC5SUEh5LCb1kxLxoxGk8y5cu6js
         Se1M2UNyJgjsodVnPaS1QIFJDw1QJZfum38JkP8laNO/PgnZnjUii8tbWy2LRea46zsB
         JwjjB3mYP621zFaAJcCcbARDNizYb+zf3+LqgvOE1Qz9O1sNjpYQU4rpRDL68r2E5chV
         7+zLrQ1Plwm8B6FxiPuxWA/lDE0AuJDFR+6dSaGIeyjjiYN7ljXG64MXGRZ5aXN6aVNg
         i4kGCp+JPiy79LhjODQZW3EIi6tryDZZWrvgCWmWvClSeU2VLPYNxVmc/Jj9AwFTVZb7
         xRWQ==
X-Gm-Message-State: AOAM530SUIVw3wBBo/KvsQ8BgI3vp0257vewprX5Ge3UAtppkXuRyY2/
	vWjevhjRB+OXCrfLyP+4/nw=
X-Google-Smtp-Source: ABdhPJyFb67nL+yYu4yKth8EwwTZ3tLUl4+VzTos7cV+FXWjm573Pdkd/GIQ40QKk+nXTEOzLZstEw==
X-Received: by 2002:a2e:8e7b:: with SMTP id t27mr7352667ljk.129.1605037485820;
        Tue, 10 Nov 2020 11:44:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id a1sm2199762lfg.282.2020.11.10.11.44.44
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Tue, 10 Nov 2020 11:44:45 -0800 (PST)
Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
To: paul@xen.org, 'Jan Beulich' <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
 <004701d6a6c1$6c09f860$441de920$@xen.org>
 <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
 <004a01d6a6cd$1f4684b0$5dd38e10$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <fab8e4b0-e3b2-fb74-76d4-42753ac88367@gmail.com>
Date: Tue, 10 Nov 2020 21:44:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004a01d6a6cd$1f4684b0$5dd38e10$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.10.20 13:38, Paul Durrant wrote:

Hi Jan, Paul

Sorry for the late response.

>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 20 October 2020 11:05
>> To: paul@xen.org
>> Cc: 'Oleksandr Tyshchenko' <olekstysh@gmail.com>; xen-devel@lists.xenproject.org; 'Oleksandr
>> Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Roger Pau
>> Monné' <roger.pau@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Julien Grall' <julien@xen.org>; 'Stefano
>> Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien.grall@arm.com>
>> Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
>>
>> On 20.10.2020 11:14, Paul Durrant wrote:
>>>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
>>>> Sent: 15 October 2020 17:44
>>>>
>>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>>> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>>>>   #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>>>>   #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
>>>>
>>>> +#define ioreq_complete_mmio   handle_mmio
>>>> +
>>> A #define? Really? Can we not have a static inline?
>> I guess this would require further shuffling: handle_mmio() is
>> an inline function in hvm/emulate.h, and hvm/ioreq.h has no
>> need to include the former (and imo it also shouldn't have).
>>
> I see. I think we need an x86 ioreq.c anyway, to deal with the legacy use of magic pages, so it could be dealt with there instead.
I am afraid I don't entirely understand the required changes. Could you 
please clarify where the "inline(?)" ioreq_complete_mmio() should
live? I included hvm/emulate.h here not only for handle_mmio(), but 
also for "struct hvm_emulate_ctxt" (see arch_io_completion()).


But if we bring an x86 ioreq.c back, I can move arch_io_completion() 
into it, along with a non-inline ioreq_complete_mmio().
That would avoid including hvm/emulate.h here. Or have I missed something?

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 19:45:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 19:45:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23916.50897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZaK-00010q-IN; Tue, 10 Nov 2020 19:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23916.50897; Tue, 10 Nov 2020 19:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZaK-00010j-FJ; Tue, 10 Nov 2020 19:45:44 +0000
Received: by outflank-mailman (input) for mailman id 23916;
 Tue, 10 Nov 2020 19:45:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcZaJ-00010c-MK
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:45:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee755c8b-0585-4e03-84d8-605fbe0ea29c;
 Tue, 10 Nov 2020 19:45:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcZaH-0000gs-JY; Tue, 10 Nov 2020 19:45:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcZaH-0005El-Ch; Tue, 10 Nov 2020 19:45:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcZaH-0003cc-CD; Tue, 10 Nov 2020 19:45:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcZaJ-00010c-MK
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 19:45:43 +0000
X-Inumbo-ID: ee755c8b-0585-4e03-84d8-605fbe0ea29c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ee755c8b-0585-4e03-84d8-605fbe0ea29c;
	Tue, 10 Nov 2020 19:45:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iejRnIAKA1QY4FZVL/mv+9MF3RW6AVBTBb7wVsnc8D8=; b=pguw+9SQ2IzLGxzmgGUoVO/eF3
	THW8Jy7jWgP+jtjdNZvozdAvrTIdJeHK4peo67QS/OSIFNEX86YGr+sGBKGFOi35zSISX/3DTVxch
	I/+QYuXMW0GFMVu8IJhhq79MvIU2cVSqAKF7b53ncTQeozFP206xo0WfHO/F5kxMXpng=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcZaH-0000gs-JY; Tue, 10 Nov 2020 19:45:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcZaH-0005El-Ch; Tue, 10 Nov 2020 19:45:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcZaH-0003cc-CD; Tue, 10 Nov 2020 19:45:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156628-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156628: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e6e85b662be9eab96f4cfc58e9945580cce8b2bb
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 19:45:41 +0000

flight 156628 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156628/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156622

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e6e85b662be9eab96f4cfc58e9945580cce8b2bb
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z    0 days
Testing same since   156628  2020-11-10 17:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit e6e85b662be9eab96f4cfc58e9945580cce8b2bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:40:09 2020 +0100

    x86/CPUID: also check leaf 7 max subleaf to be compatible
    
    Just like is done for basic and extended major leaves.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f5cfa09856732b1d78ff6a21ca3dc33a010da951
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:30 2020 +0100

    x86/CPUID: suppress IOMMU related hypervisor leaf data
    
    Now that the IOMMU for guests can't be enabled "on demand" anymore,
    there's also no reason to expose the related CPUID bit "just in case".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit db1a9fdd554cb1d8a7099af7925318fc06c6875b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:03 2020 +0100

    x86/CPUID: don't use UB shift when library is built as 32-bit
    
    At least the insn emulator test harness will continue to be buildable
    (and ought to continue to be usable) also as a 32-bit binary. (Right now
    the CPU policy test harness is, too, but there it may be less relevant
    to keep it functional, just like e.g. we don't support fuzzing the insn
    emulator in 32-bit mode.) Hence the library code needs to cope with
    this.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit b5ad37f8e9284cc147218f7a5193d739ae7b956f
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:37:15 2020 +0100

    xen/evtchn: revert 52e1fc47abc3a0123
    
    With the event channel lock no longer disabling interrupts, commit
    52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
    be reverted again.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 5f2df45ead7c1195142f68b7923047a1e9479d54
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:36:15 2020 +0100

    xen/evtchn: rework per event channel lock
    
    Currently the lock for a single event channel needs to be taken with
    interrupts off, which causes deadlocks in some cases.
    
    Rework the per event channel lock to be non-blocking for the case of
    sending an event and removing the need for disabling interrupts for
    taking the lock.
    
    The lock is needed for avoiding races between event channel state
    changes (creation, closing, binding) against normal operations (set
    pending, [un]masking, priority changes).
    
    Use a rwlock, but with some restrictions:
    
    - Changing the state of an event channel (creation, closing, binding)
      needs to use write_lock(), with an ASSERT() that the lock is taken
      as writer only when the event channel's state before or after the
      locked region is appropriate (either free or unbound).

    - Sending an event mostly needs to use read_trylock(); if the lock
      cannot be obtained, the operation is omitted. This is needed
      because sending an event can happen with interrupts off (at least
      in some cases).
    
    - Dumping the event channel state for debug purposes is using
      read_trylock(), too, in order to avoid blocking in case the lock is
      taken as writer for a long time.
    
    - All other cases can use read_lock().
    
    Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 20:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 20:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23927.50915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZp5-0002uk-2T; Tue, 10 Nov 2020 20:00:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23927.50915; Tue, 10 Nov 2020 20:00:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcZp4-0002ud-Vs; Tue, 10 Nov 2020 20:00:58 +0000
Received: by outflank-mailman (input) for mailman id 23927;
 Tue, 10 Nov 2020 20:00:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcZp3-0002uY-Ha
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:00:57 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6cc7351-0dda-4849-a152-07abc18202e9;
 Tue, 10 Nov 2020 20:00:56 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id e27so19367032lfn.7
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 12:00:56 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id n20sm2165361ljj.85.2020.11.10.12.00.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 12:00:55 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kcZp3-0002uY-Ha
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:00:57 +0000
X-Inumbo-ID: e6cc7351-0dda-4849-a152-07abc18202e9
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e6cc7351-0dda-4849-a152-07abc18202e9;
	Tue, 10 Nov 2020 20:00:56 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id e27so19367032lfn.7
        for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 12:00:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=qtAwTdnEJSWx+IgWSM4arTxkF9Jv4KcG+2RumSEPQcU=;
        b=gCXOExftDjEN2W2GVE7V2r0zF5kix4KtrByDpuCqsv/oVfn0wPUq8gm9WsZWRiV8ER
         VF1uABxYgH0q5u5jhyZKUADpsr5MeDlZhUGWeK7UnaNtSjvn+1Xu/WInMd9ShYYytRKq
         HV4MbTklYoQWE7BmyC3JN3bveHtjp00/7/efBy564mMhM8ESgQg2wF3jQ3DAnMhx/vKz
         zZd9bpSM0tbKJ+IFPnWbJzJ/f16Fet71KXreT0IphDl8uXjvj/IYb1baG23G62zg6ZGB
         N3U+ndnP7ShDbdjzh5OhOEqQLHNoxadLHjap1VkxDyKZpGdBSazWcav44KVGUxFjeWv0
         3YKA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=qtAwTdnEJSWx+IgWSM4arTxkF9Jv4KcG+2RumSEPQcU=;
        b=twVhHrGvP3CcGK80zggmhkO6NgKbR9L2rblYIrkM8sZHSXgtWZVPz5cvuNi7mBjO0f
         lot1FHHWrUZEnRxmfZT4bbdwpW0cVXlltf/7IUVV0sQTxhO03dbyTnwm7CcS2X0pmZtE
         Pp5UBnoxYiMMpucECtOO28X7dYB+r81iISjJdt8AONuKXlbSl8k6bFZfu19ORUcszZbS
         oEn0bhnKa/BvO8HCwjQRTx1oM3OL00VE06xzy3ag5LrBxntlaDGx/KeGhRQs0BDLvm4u
         AhBnKnPp+UgoYPE2YZC1O/AhBCpTlgWTFRzutlWkVrg/sXGpZJxNAQJuu9QKCV/ZjfG7
         14hw==
X-Gm-Message-State: AOAM533dS7LGIKPojiK9s7NRvmUjbNH1xduaXzU0HlEcCpYYfOcKS//g
	vsFC7nIRVW/0nWRG/gxntu4=
X-Google-Smtp-Source: ABdhPJxkaf5F/iurLZTZMh9REo5fgRnyhe7/n1eFT7FkCpVWD0XFfJ1uVoq7vITsKMfwSHEcJsXjoA==
X-Received: by 2002:a19:c94:: with SMTP id 142mr7704702lfm.284.1605038455625;
        Tue, 10 Nov 2020 12:00:55 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id n20sm2165361ljj.85.2020.11.10.12.00.54
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Tue, 10 Nov 2020 12:00:55 -0800 (PST)
Subject: Re: [PATCH V2 08/23] xen/ioreq: Introduce ioreq_params to abstract
 accesses to arch.hvm.params
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Jan Beulich' <jbeulich@suse.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-9-git-send-email-olekstysh@gmail.com>
 <004c01d6a6cd$8b3e22e0$a1ba68a0$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <6b23967b-7517-477f-6923-ce530e877480@gmail.com>
Date: Tue, 10 Nov 2020 22:00:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004c01d6a6cd$8b3e22e0$a1ba68a0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.10.20 13:41, Paul Durrant wrote:

Hi Paul

Sorry for the late response.


>> -----Original Message-----
>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>> Sent: 15 October 2020 17:44
>> To: xen-devel@lists.xenproject.org
>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
>> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
>> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
>> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>
>> Subject: [PATCH V2 08/23] xen/ioreq: Introduce ioreq_params to abstract accesses to arch.hvm.params
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> We don't want to move HVM params field out of *arch.hvm* in this particular
>> case as although it stores a few IOREQ params, it is not a (completely)
>> IOREQ stuff and is specific to the architecture. Instead, abstract
>> accesses by the proposed macro.
>>
>> This is a follow up action to reduce layering violation in the common code.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
> Keeping the 'legacy' magic page code under an x86 ioreq.c would avoid the need for this patch.

In that case, yes, agree.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 20:53:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 20:53:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23946.50944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcadv-0007YJ-8L; Tue, 10 Nov 2020 20:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23946.50944; Tue, 10 Nov 2020 20:53:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcadv-0007YC-4t; Tue, 10 Nov 2020 20:53:31 +0000
Received: by outflank-mailman (input) for mailman id 23946;
 Tue, 10 Nov 2020 20:53:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcadt-0007Xe-LA
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:53:29 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03b5462d-70c6-47dd-9da1-3c117ee88eda;
 Tue, 10 Nov 2020 20:53:28 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id b17so3716217ljf.12
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 12:53:28 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id u25sm2221609lfo.198.2020.11.10.12.53.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 12:53:26 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kcadt-0007Xe-LA
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:53:29 +0000
X-Inumbo-ID: 03b5462d-70c6-47dd-9da1-3c117ee88eda
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 03b5462d-70c6-47dd-9da1-3c117ee88eda;
	Tue, 10 Nov 2020 20:53:28 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id b17so3716217ljf.12
        for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 12:53:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=EQd+dYIPIqLBD0R0ERmRb49QsS0Y9wO0s924FzzJ6iU=;
        b=X/fCK5wWC7fdVVo4CsJp+uUeJV+zwiDTJnzoGOG0kgUJZOhb8qlTjG1fMktJQx/yPc
         h9Dqo/2Fz6iXgC5Vgp+2dL66b6E+MSAQRGeT5zy6GE71IXmoKpdR5TLhirY2hHmtA94T
         Co+9tHochKdQoGfNDqXXMZ8w34Of5lBo0VfZn3vRUQj9O/FtmDiKCL0y3Go7aWnMa/MX
         LE7szJcoaxbrhwm14fSvhaVMqbWWr2QR9Wk2rpfv2k4GFFTqPPTrcxHQB9bwv3KnqJ9Q
         gZppJE/23SJLYyYcgfFioEjZWIcmeFjMWTjGls2WVU5u1TBZU2PUfW8sw6vhYlVD3bIw
         YLuQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=EQd+dYIPIqLBD0R0ERmRb49QsS0Y9wO0s924FzzJ6iU=;
        b=IzEwe1EoDRRsJepv33zuNUk7SUIEhH31y5DkJ1JRwVq/4O5sBbOcrJWQJyo5QqJsdr
         w72acEzSK/I+SfTcw8BJrJPPGXiZyVSO2uEQjtWvePnYqCaZuoYVMlWohgjVYJg8oYTZ
         mZoNC/cgsPt837H2eC/cnC6Lx9o1Nm3jUSVmRJSW+MlrZXur2ZLCqo0ej9/fIFss5w9X
         g1eg+nez/soMyGXF57s6Tx3hwaxqk1WjY4PSxnhb2thEga9AqEJhgZZELAY2A72M5hFB
         xVdr05TA6kbpjZjng7PxKRsLvEe1fKp+7gTK0PKUNICd9IXCjZgv1O/ZXdr5F6XL3USR
         IrBg==
X-Gm-Message-State: AOAM5313NBbIvzxSG29i3nIw0tPHJQ7d3G8/RQY9T3OKrPGEr8FomWB1
	6OuN5NMDJ6Hf6ZJt8Y47fQ8=
X-Google-Smtp-Source: ABdhPJwZ2DXqBsM8loS36kNafC7esUpda78IhylZ24t3HgcJPnmGqsUna78ykHuUkfZn0SDYihGSNw==
X-Received: by 2002:a2e:b536:: with SMTP id z22mr9368485ljm.177.1605041607388;
        Tue, 10 Nov 2020 12:53:27 -0800 (PST)
Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
 <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
Date: Tue, 10 Nov 2020 22:53:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.10.20 13:51, Paul Durrant wrote:

Hi Paul.

Sorry for the late response.

>
>> -----Original Message-----
>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>> Sent: 15 October 2020 17:44
>> To: xen-devel@lists.xenproject.org
>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano Stabellini <sstabellini@kernel.org>;
>> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
>> <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Paul Durrant
>> <paul@xen.org>; Julien Grall <julien.grall@arm.com>
>> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch introduces a helper whose main purpose is to check
>> whether a domain is using IOREQ server(s).
>>
>> On Arm the current benefit is to avoid calling handle_io_completion()
>> (which implies iterating over all possible IOREQ servers anyway)
>> on every return in leave_hypervisor_to_guest() if there are no active
>> servers for the particular domain.
>> Also this helper will be used by one of the subsequent patches on Arm.
>>
>> This involves adding an extra per-domain variable to store the count
>> of servers in use.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> Changes RFC -> V1:
>>     - new patch
>>
>> Changes V1 -> V2:
>>     - update patch description
>>     - guard helper with CONFIG_IOREQ_SERVER
>>     - remove "hvm" prefix
>>     - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>>     - put suitable ASSERT()s
>>     - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>>     - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
>> ---
>>   xen/arch/arm/traps.c    | 15 +++++++++------
>>   xen/common/ioreq.c      |  7 ++++++-
>>   xen/include/xen/ioreq.h | 14 ++++++++++++++
>>   xen/include/xen/sched.h |  1 +
>>   4 files changed, 30 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 507c095..a8f5fdf 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
>>       struct vcpu *v = current;
>>
>>   #ifdef CONFIG_IOREQ_SERVER
>> -    bool handled;
>> +    if ( domain_has_ioreq_server(v->domain) )
>> +    {
>> +        bool handled;
>>
>> -    local_irq_enable();
>> -    handled = handle_io_completion(v);
>> -    local_irq_disable();
>> +        local_irq_enable();
>> +        handled = handle_io_completion(v);
>> +        local_irq_disable();
>>
>> -    if ( !handled )
>> -        return true;
>> +        if ( !handled )
>> +            return true;
>> +    }
>>   #endif
>>
>>       if ( likely(!v->arch.need_flush_to_ram) )
>> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
>> index bcd4961..a72bc0e 100644
>> --- a/xen/common/ioreq.c
>> +++ b/xen/common/ioreq.c
>> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>>                                struct ioreq_server *s)
>>   {
>>       ASSERT(id < MAX_NR_IOREQ_SERVERS);
>> -    ASSERT(!s || !d->ioreq_server.server[id]);
>> +    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
> That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?

OK, that looks like it will work.


>    Paul
>
>>       d->ioreq_server.server[id] = s;
>> +
>> +    if ( s )
>> +        d->ioreq_server.nr_servers++;
>> +    else
>> +        d->ioreq_server.nr_servers--;
>>   }
>>
>>   #define GET_IOREQ_SERVER(d, id) \
>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>> index 7b03ab5..0679fef 100644
>> --- a/xen/include/xen/ioreq.h
>> +++ b/xen/include/xen/ioreq.h
>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>       uint8_t                bufioreq_handling;
>>   };
>>
>> +#ifdef CONFIG_IOREQ_SERVER
>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>> +{
>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
>> +
> This seems like an odd place to put such an assertion.

I might have missed something or interpreted it incorrectly, but these
ASSERT()s are the result of how I understood the review comment on the
previous version [1].

I will copy the comment here for convenience:
"This is safe only when d == current->domain and it's not paused,
or when they're distinct and d is paused. Otherwise the result is
stale before the caller can inspect it. This wants documenting by
at least a comment, but perhaps better by suitable ASSERT()s."


>
>> +    return d->ioreq_server.nr_servers;
>> +}
>> +#else
>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>> +{
>> +    return false;
>> +}
>> +#endif
>> +
> Can this be any more compact? E.g.
>
> return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;
>
> ?
I get a compilation error this way (if CONFIG_IOREQ_SERVER is
disabled):

...xen/4.14.0+gitAUTOINC+ee22110219-r0/git/xen/include/xen/ioreq.h:62:48: 
error: ‘const struct domain’ has no member named ‘ioreq_server’
      return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;
                                                 ^
since the domain's ioreq_server struct is guarded by CONFIG_IOREQ_SERVER as well.


[1] 
https://patchwork.kernel.org/project/xen-devel/patch/1599769330-17656-12-git-send-email-olekstysh@gmail.com/#23618623

Thank you.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 20:57:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 20:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23953.50956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcaho-0007jT-Pb; Tue, 10 Nov 2020 20:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23953.50956; Tue, 10 Nov 2020 20:57:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcaho-0007jM-MP; Tue, 10 Nov 2020 20:57:32 +0000
Received: by outflank-mailman (input) for mailman id 23953;
 Tue, 10 Nov 2020 20:57:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcahn-0007jH-PN
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:57:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e046904b-df22-47b4-a9e7-851f6535f853;
 Tue, 10 Nov 2020 20:57:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcahi-0002Hn-5B; Tue, 10 Nov 2020 20:57:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcahh-0001uG-Tv; Tue, 10 Nov 2020 20:57:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcahh-0000cp-TQ; Tue, 10 Nov 2020 20:57:25 +0000
X-Inumbo-ID: e046904b-df22-47b4-a9e7-851f6535f853
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=09dsGboer2hxiv8kPQuav287hXmj/Q7XoNPHXHiVyDg=; b=JGEhJoKSEh08yERvfhzjO7nKvN
	O764KyHrV4TZWH7WViQKEZ6wWt8zjlUAvBCHQN37HftLmZcKosKUGjvrjWM3KxlADlojbX6jZXlK2
	0zsZTLRH4f1Yq7SgtfyHqPxVVySPDPzVxVKspJxPKgN+ErXfGTmCsey+idwIu08Zzefg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156603-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156603: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=43afbbd9fea1b255cc81f5f4bfd0b6a88826c735
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 20:57:25 +0000

flight 156603 qemu-mainline real [real]
flight 156645 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156603/
http://logs.test-lab.xenproject.org/osstest/logs/156645/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-saverestore.2 fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                43afbbd9fea1b255cc81f5f4bfd0b6a88826c735
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   82 days
Failing since        152659  2020-08-21 14:07:39 Z   81 days  177 attempts
Testing same since   156603  2020-11-09 22:07:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 62569 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 20:59:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 20:59:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23961.50971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcajz-0007wo-Bw; Tue, 10 Nov 2020 20:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23961.50971; Tue, 10 Nov 2020 20:59:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcajz-0007wh-8p; Tue, 10 Nov 2020 20:59:47 +0000
Received: by outflank-mailman (input) for mailman id 23961;
 Tue, 10 Nov 2020 20:59:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcajy-0007wc-9A
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:59:46 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68fa3c18-1121-4a52-9232-4f59f4fd4986;
 Tue, 10 Nov 2020 20:59:45 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id 11so16376779ljf.2
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 12:59:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id w10sm2708051ljo.130.2020.11.10.12.59.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 12:59:43 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BlNO=EQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kcajy-0007wc-9A
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 20:59:46 +0000
X-Inumbo-ID: 68fa3c18-1121-4a52-9232-4f59f4fd4986
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 68fa3c18-1121-4a52-9232-4f59f4fd4986;
	Tue, 10 Nov 2020 20:59:45 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id 11so16376779ljf.2
        for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 12:59:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=WCEMbK6VpTWfnhbVRjPF5UIFvs6FkjhGHnAeW326f3g=;
        b=iMpcgfzuICrq0q3iPYKyyIXfq5f3majc74oak7l+oqjHYFf17I3Ozia8LBmmHbN1/r
         w2qNdUJexnEdGwpFBvSbBW7lu4ZVPJ5I/Kl9Qv4YNKXPAYZ8LEYASJbBbV6l0ISkHzHZ
         nmqH56DC9asEKFNyl/mitq/QP8cxo6PhpBRvTrwDUcCpGfuqOLNoaskPZBazaHaHV6B4
         C6/8d3QV4YzDbCB6rLYuN6/6AFv5KQAqM1uvKsi8ACspb+oABuubO/V4Qv+wcDUiCkg6
         F1Zgx7Fh3aL+lOfY0IHZSusJpI0fUqrXWV4EdcpRzUQC1T6063rYNOYZGoGhKJJERlKj
         qkSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=WCEMbK6VpTWfnhbVRjPF5UIFvs6FkjhGHnAeW326f3g=;
        b=kZ6LNBmpOLvt3u6hK5FRI5Tzo9GS5ilUgu0kdcVC2hw/cLsLAWrUEB74Kp7jcE+BsN
         gKWQ7Zx0ry5aimahxdZdVd1te0oJ+sJV7JBFqVaCIy0Bcbv7o+5qYfC/DGAOEhsKoev6
         JSeHKHnBl+ZVZ+U5gWq1n96ILXN7Zil2vKw4H4AjhNCQV1mhjKu7QnhjJU2svm96ECjv
         HcNnntQa+m8B8dghK9Kh4+1iVW8aW4hsfSvGTpYs75q0atBD6kjTzVFfOKq02+2hbO2r
         nt49pBNt9tSGc1sk0R1aLjlaUrXgZVCQevRhXd15AB6EbwZZ0PtWlO+MdoiDsv7gJUj9
         tyrg==
X-Gm-Message-State: AOAM530mG76vM9G9fyQKsXWuHTnNwNwRmxQ6pqhLbrybfcf5903aRQ4c
	6F2I1ggDToxFO1btbAkS7vE=
X-Google-Smtp-Source: ABdhPJx0FoQ+N0R4gfe08ybC6gVKP0QNYk6QszL75pnmu1zu1JpGal/VPT307fMrZF3epemW2PT3kQ==
X-Received: by 2002:a2e:93cf:: with SMTP id p15mr9850563ljh.141.1605041984083;
        Tue, 10 Nov 2020 12:59:44 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id w10sm2708051ljo.130.2020.11.10.12.59.42
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Tue, 10 Nov 2020 12:59:43 -0800 (PST)
Subject: Re: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Jan Beulich' <jbeulich@suse.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Kevin Tian'
 <kevin.tian@intel.com>, 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
 <004f01d6a6cf$79d0daa0$6d728fe0$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <51b7fe12-46a2-bd0e-da10-f753ecb0453c@gmail.com>
Date: Tue, 10 Nov 2020 22:59:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004f01d6a6cf$79d0daa0$6d728fe0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 20.10.20 13:55, Paul Durrant wrote:

Hi Paul.

Sorry for the late response.

>> -----Original Message-----
>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>> Sent: 15 October 2020 17:44
>> To: xen-devel@lists.xenproject.org
>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
>> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
>> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
>> <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Jun
>> Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Julien Grall
>> <julien.grall@arm.com>
>> Subject: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The IOREQ is a common feature now and these fields will be used
>> on Arm as is. Move them to common struct vcpu as a part of new
>> struct vcpu_io. Also move enum hvm_io_completion to xen/sched.h
>> and remove "hvm" prefixes.
>>
>> This patch completely removes the layering violation in the common code.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> ***
>> I was thinking that it may be better to place these two fields
>> into struct vcpu directly (without an intermediate "io" struct).
>> I think this way the code which operates on these fields would
>> become cleaner. Another possible option would be either to rename
>> the "io" struct (I failed to think of a better name) or to drop
>> (replace?) the duplicated "io" prefixes from these fields.
> Just drop the 'io_' prefix from the field names.

Will drop. Indeed, that would look better.


Thank you.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 10 21:26:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 21:26:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23984.51009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcb9Z-0002J0-P6; Tue, 10 Nov 2020 21:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23984.51009; Tue, 10 Nov 2020 21:26:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcb9Z-0002It-MC; Tue, 10 Nov 2020 21:26:13 +0000
Received: by outflank-mailman (input) for mailman id 23984;
 Tue, 10 Nov 2020 21:26:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5eud=EQ=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1kcb9Y-0002Im-40
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 21:26:12 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca59eb76-203d-4c95-9139-27ae57475992;
 Tue, 10 Nov 2020 21:26:11 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AALNsqL062598;
 Tue, 10 Nov 2020 21:26:08 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2130.oracle.com with ESMTP id 34nh3ax8mk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 10 Nov 2020 21:26:07 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AALOqFA194731;
 Tue, 10 Nov 2020 21:26:07 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 34p5g0w6vj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 10 Nov 2020 21:26:07 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0AALPx9D001394;
 Tue, 10 Nov 2020 21:26:01 GMT
Received: from char.us.oracle.com (/10.152.32.25)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 10 Nov 2020 13:25:59 -0800
Received: by char.us.oracle.com (Postfix, from userid 1000)
 id 650FC6A0109; Tue, 10 Nov 2020 16:27:51 -0500 (EST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5eud=EQ=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
	id 1kcb9Y-0002Im-40
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 21:26:12 +0000
X-Inumbo-ID: ca59eb76-203d-4c95-9139-27ae57475992
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ca59eb76-203d-4c95-9139-27ae57475992;
	Tue, 10 Nov 2020 21:26:11 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
	by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AALNsqL062598;
	Tue, 10 Nov 2020 21:26:08 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=9Nj2D+m80kZDJqrKbN/4/Is2vNJ6uqoCfmj84gGzeI0=;
 b=KQwSFc1LoGQ1VzaMRiggNXYjNGFOkFbtnIpoHJ63adZBDFwbB47VyCf6DN+tnopZ6tC+
 Ys3WhPIB7S4McHQMXEQ/TepCswb5N1BFbWjHrxa03JjMciQwnjLGGGyrlV53U9+z0dhb
 f3F77ISWqoiBoSQLLZ/m8/dJ3McQblbRZ9l/mnOEBiiEDYqqJ9Ns9zBPT1S6rci/9SBu
 QNDx37w7jRZCsXJG4ySSpIDFhgsG1sNZe/fNCo+y6IMeCl3O6VL0jWad0tcMUszF1FTg
 Ueya8AfybJ2HPRbcLnuZJBCKzrfT6slqP45OamdgZvhtcAVS1LgCt3H+vS4V2OPQqEF2 Lw== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
	by aserp2130.oracle.com with ESMTP id 34nh3ax8mk-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
	Tue, 10 Nov 2020 21:26:07 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
	by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AALOqFA194731;
	Tue, 10 Nov 2020 21:26:07 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
	by aserp3020.oracle.com with ESMTP id 34p5g0w6vj-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
	Tue, 10 Nov 2020 21:26:07 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0AALPx9D001394;
	Tue, 10 Nov 2020 21:26:01 GMT
Received: from char.us.oracle.com (/10.152.32.25)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 10 Nov 2020 13:25:59 -0800
Received: by char.us.oracle.com (Postfix, from userid 1000)
	id 650FC6A0109; Tue, 10 Nov 2020 16:27:51 -0500 (EST)
Date: Tue, 10 Nov 2020 16:27:51 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
        Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-5.10] swiotlb: remove the tbl_dma_addr argument to
 swiotlb_tbl_map_single
Message-ID: <20201110212751.GA16458@char.us.oracle.com>
References: <20201023063309.3472987-1-hch@lst.de>
 <20201103094643.GA18936@lst.de>
 <20201104140438.GA16892@char.us.oracle.com>
 <20201110091421.GA23707@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201110091421.GA23707@lst.de>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9801 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 malwarescore=0
 adultscore=0 phishscore=0 bulkscore=0 mlxlogscore=999 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011100146
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9801 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 priorityscore=1501
 clxscore=1015 malwarescore=0 mlxscore=0 spamscore=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 phishscore=0 adultscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011100146

On Tue, Nov 10, 2020 at 10:14:21AM +0100, Christoph Hellwig wrote:
> On Wed, Nov 04, 2020 at 09:04:38AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Tue, Nov 03, 2020 at 10:46:43AM +0100, Christoph Hellwig wrote:
> > > ping?
> > 
> > Hopefully this goes through. I am in the process of testing it but ran
> > into testing issues that I believe are unrelated.
> 
> Did you manage to make any progress?  This fixes an issue with the

YES!!
> new support for systems with DMA offsets in 5.10..

OK. Sending the git pull request in a minute or two.


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 22:36:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 22:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24048.51084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kccFS-0000bK-R9; Tue, 10 Nov 2020 22:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24048.51084; Tue, 10 Nov 2020 22:36:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kccFS-0000bD-Ny; Tue, 10 Nov 2020 22:36:22 +0000
Received: by outflank-mailman (input) for mailman id 24048;
 Tue, 10 Nov 2020 22:36:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kccFR-0000aW-BO
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 22:36:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 828e765a-a845-422f-a405-4f66ffd3797c;
 Tue, 10 Nov 2020 22:36:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kccFJ-0004KO-Ve; Tue, 10 Nov 2020 22:36:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kccFJ-0006O9-OI; Tue, 10 Nov 2020 22:36:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kccFJ-0002wW-Nm; Tue, 10 Nov 2020 22:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kccFR-0000aW-BO
	for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 22:36:21 +0000
X-Inumbo-ID: 828e765a-a845-422f-a405-4f66ffd3797c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 828e765a-a845-422f-a405-4f66ffd3797c;
	Tue, 10 Nov 2020 22:36:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R6qXlTM8un0T/SFP7gCdTqWp5vWHW/qcnbytq+M52o8=; b=4gKUAJbmE/g/MWiJKPJj7uoikb
	8kFj1OWZlc6Ap7OxOU2036Jq9CT6Dd1F/iZhPindBTL/5tXNMZDvJqTel4PJJg4cHqXxX6ILn/TV9
	np8cU1ZJ7XiwMVxOufqzeqhx6hbIHCBPcVw6Is9/pT/CXKabL673g5UbiL1h4z6DRUtk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kccFJ-0004KO-Ve; Tue, 10 Nov 2020 22:36:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kccFJ-0006O9-OI; Tue, 10 Nov 2020 22:36:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kccFJ-0002wW-Nm; Tue, 10 Nov 2020 22:36:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156642-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156642: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=628e1becb6fb121475a6ce68e3f1cb4499851255
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 22:36:13 +0000

flight 156642 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156642/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156622

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  628e1becb6fb121475a6ce68e3f1cb4499851255
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z    0 days
Failing since        156628  2020-11-10 17:00:28 Z    0 days    2 attempts
Testing same since   156642  2020-11-10 20:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 628e1becb6fb121475a6ce68e3f1cb4499851255
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Nov 9 20:28:59 2020 +0000

    xen/arm: Always trap AMU system registers
    
    The Activity Monitors Unit (AMU) was introduced by Armv8.4. It is
    considered unsafe to expose it to guests, as it might leak
    information about code executed by other guests or the host.
    
    Arm provided a way to trap all the AMU system registers by setting
    CPTR_EL2.TAM to 1.
    
    Unfortunately, on older revisions of the specification, bit 30 (now
    CPTR_EL2.TAM) was RES0. Because of that, Xen sets it to 0, and
    therefore the system registers would be exposed to a guest when it
    runs on processors with AMU.
    
    As the bit is marked UNKNOWN at boot in Armv8.4, the only safe
    solution for us is to always set CPTR_EL2.TAM to 1.
    
    A guest trying to access the AMU system registers will now receive
    an undefined instruction exception. Unfortunately, this means that
    even well-behaved guests may fail to boot because we don't sanitize
    the ID registers.
    
    This is a known issue with other Armv8.0+ features (e.g. SVE,
    Pointer Auth). It will be taken care of separately.
    
    This is part of XSA-351 (or XSA-93 re-born).
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andre Przywara <andre.przywara@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit e6e85b662be9eab96f4cfc58e9945580cce8b2bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:40:09 2020 +0100

    x86/CPUID: also check leaf 7 max subleaf to be compatible
    
    Just as is done for the basic and extended major leaves.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f5cfa09856732b1d78ff6a21ca3dc33a010da951
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:30 2020 +0100

    x86/CPUID: suppress IOMMU related hypervisor leaf data
    
    Now that the IOMMU for guests can't be enabled "on demand" anymore,
    there's also no reason to expose the related CPUID bit "just in case".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit db1a9fdd554cb1d8a7099af7925318fc06c6875b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:03 2020 +0100

    x86/CPUID: don't use UB shift when library is built as 32-bit
    
    At least the insn emulator test harness will continue to be buildable
    (and ought to continue to be usable) also as a 32-bit binary. (Right now
    the CPU policy test harness is, too, but there it may be less relevant
    to keep it functional, just like e.g. we don't support fuzzing the insn
    emulator in 32-bit mode.) Hence the library code needs to cope with
    this.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
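
The class of issue being fixed can be illustrated with a minimal sketch
(the helper name is made up; this is not Xen's actual code). On a 32-bit
build, "unsigned long" is 32 bits wide, so "1UL << nr" is undefined
behaviour for nr >= 32, while a fixed-width 64-bit type keeps the shift
well-defined on both 32-bit and 64-bit builds:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only. The bad variant would be "1UL << nr", which is
 * undefined behaviour for nr >= 32 when unsigned long is 32 bits.
 * Widening first keeps the shift well-defined for any nr < 64. */
static uint64_t bit_mask(unsigned int nr)
{
    return (uint64_t)1 << nr;
}
```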

commit b5ad37f8e9284cc147218f7a5193d739ae7b956f
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:37:15 2020 +0100

    xen/evtchn: revert 52e1fc47abc3a0123
    
    With the event channel lock no longer disabling interrupts commit
    52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
    be reverted again.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 5f2df45ead7c1195142f68b7923047a1e9479d54
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:36:15 2020 +0100

    xen/evtchn: rework per event channel lock
    
    Currently the lock for a single event channel needs to be taken with
    interrupts off, which causes deadlocks in some cases.
    
    Rework the per event channel lock to be non-blocking for the case of
    sending an event, removing the need to disable interrupts when
    taking the lock.
    
    The lock is needed for avoiding races between event channel state
    changes (creation, closing, binding) against normal operations (set
    pending, [un]masking, priority changes).
    
    Use a rwlock, but with some restrictions:
    
    - Changing the state of an event channel (creation, closing, binding)
      needs to use write_lock(), ASSERT()ing that the lock is only taken
      as writer when the state of the event channel either before or
      after the locked region is appropriate (either free or unbound).
    
    - Sending an event mostly needs to use read_trylock(); if the lock
      cannot be obtained, the operation is omitted. This is needed
      because sending an event can happen with interrupts off (at least
      in some cases).
    
    - Dumping the event channel state for debug purposes also uses
      read_trylock(), to avoid blocking in case the lock is held as
      writer for a long time.
    
    - All other cases can use read_lock().
    
    Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)
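
The locking rules in the commit message above can be sketched with a toy,
single-threaded model (purely illustrative: the toy_rwlock/demo_* names
are made up, and Xen's real implementation uses its own rwlock with
atomic operations):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy rwlock, just enough to show try-read vs. write semantics. */
struct toy_rwlock {
    int readers;
    bool writer;
};

struct evtchn_demo {
    struct toy_rwlock lock;
    bool bound;
    bool pending;
};

/* State changes (creation, closing, binding) take the lock as writer. */
static void demo_bind(struct evtchn_demo *chn)
{
    chn->lock.writer = true;    /* write_lock() */
    chn->bound = true;
    chn->lock.writer = false;   /* write_unlock() */
}

/* Sending uses a trylock: if the lock is currently held as writer
 * (the channel's state is changing), the event is simply dropped
 * rather than blocking, which is what makes this path safe to run
 * with interrupts off. Returns whether the event is now pending. */
static bool demo_send(struct evtchn_demo *chn)
{
    if ( chn->lock.writer )     /* read_trylock() failed */
        return false;           /* omit the operation */
    chn->lock.readers++;        /* read lock taken */
    if ( chn->bound )
        chn->pending = true;
    chn->lock.readers--;        /* read_unlock() */
    return chn->pending;
}
```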


From xen-devel-bounces@lists.xenproject.org Tue Nov 10 23:25:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 23:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24066.51121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcd0w-0005CW-Pm; Tue, 10 Nov 2020 23:25:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24066.51121; Tue, 10 Nov 2020 23:25:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcd0w-0005CP-Mb; Tue, 10 Nov 2020 23:25:26 +0000
Received: by outflank-mailman (input) for mailman id 24066;
 Tue, 10 Nov 2020 23:25:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZSK1=EQ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kcd0v-0005CI-Tj
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 23:25:25 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a377acb8-f98e-4998-8919-e2203896ffd4;
 Tue, 10 Nov 2020 23:25:24 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AANPBFC043559
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 10 Nov 2020 18:25:17 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0AANPAQB043558;
 Tue, 10 Nov 2020 15:25:10 -0800 (PST) (envelope-from ehem)
Date: Tue, 10 Nov 2020 15:25:10 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jürgen Groß <jgross@suse.com>, Julien Grall <julien@xen.org>,
        roman@zededa.com, xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201110232510.GA43420@mattapan.m5p.com>
References: <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com>
 <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
 <20201028015423.GA33407@mattapan.m5p.com>
 <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
 <e885b2a9-f6ea-e224-b906-125936cfe550@suse.com>
 <alpine.DEB.2.21.2010291255070.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010291255070.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Oct 29, 2020 at 12:57:58PM -0700, Stefano Stabellini wrote:
> On Thu, 29 Oct 2020, Jürgen Groß wrote:
> > What about having a small domain parsing the ACPI booting first and use
> > that information for booting dom0?
> > 
> > That dom would be part of the Xen build and the hypervisor wouldn't need
> > to gain all the ACPI AML logic.
> 
> That could work, but in practice we don't have such a domain today --
> the infrastructure is missing. I wonder whether the bootloader (uboot or
> grub) would know about the platform and might be able to pass that
> information to Xen somehow.

How long would such a project likely take to implement?  This reads like
a complicated project, and is likely to take a while...


Then there would be the issue of efifb.



I've been pondering allocate_memory_11() and have come up with a rather
complicated potential problem.  ACPI appears to allow for
non-power-of-2 DMA ranges; I'm unaware of any such device, but the code
should allow for them.

I can imagine a device which has multiple DMA ranges.  The ranges could
be fully contained within each other, the ranges could partially overlap,
or the ranges could be disjoint.

Someone might wish to allocate all DMA-capable memory to domain 0,
someone might wish to allocate less.  Additionally if all DMA-capable
memory is allocated to domain 0, some non-DMA-capable memory could be
desired.

Ideally Xen would move to non-DMA memory.  This would protect Xen against
a malicious domain 0 and allow allocating more DMA-capable memory to
domain 0.

This interacts with ballooning.  If memory is removed from domain 0,
non-DMA memory should be removed first.  If domain 0 is allocated more
memory, DMA memory should be added first (if any isn't allocated to
domain 0).
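
The cases above (nested, overlapping, or disjoint ranges, not
necessarily power-of-2 sized) amount to a membership check over a list
of ranges. A hedged sketch, with all names hypothetical rather than
taken from Xen:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical DMA range descriptor: ranges may be nested,
 * overlapping or disjoint, and need not be power-of-2 sized. */
struct dma_range {
    uint64_t base;
    uint64_t size;
};

/* An address is DMA-capable if it falls inside any of the ranges.
 * Written as "addr - base < size" to avoid overflow at the top of
 * the address space. */
static bool addr_is_dma_capable(uint64_t addr,
                                const struct dma_range *r, size_t n)
{
    for ( size_t i = 0; i < n; i++ )
        if ( addr >= r[i].base && addr - r[i].base < r[i].size )
            return true;
    return false;
}
```

A ballooning policy could then prefer releasing pages for which this
predicate is false first, and adding pages for which it is true first.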

Then again I may be severely overthinking things.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Nov 10 23:25:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 10 Nov 2020 23:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24068.51136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcd14-0005Ew-2X; Tue, 10 Nov 2020 23:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24068.51136; Tue, 10 Nov 2020 23:25:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcd13-0005Eo-Vj; Tue, 10 Nov 2020 23:25:33 +0000
Received: by outflank-mailman (input) for mailman id 24068;
 Tue, 10 Nov 2020 23:25:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pxmX=EQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcd12-0005DH-Kf
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 23:25:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c22de40b-8a65-4f86-9a5a-dc77ccd63f83;
 Tue, 10 Nov 2020 23:25:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcd0v-0005L9-8B; Tue, 10 Nov 2020 23:25:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcd0u-00017r-W5; Tue, 10 Nov 2020 23:25:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcd0u-00047b-VZ; Tue, 10 Nov 2020 23:25:24 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=qjZiNs0NbSv8avHIh6NYmUJqmgIKuoTb5tfm40rH9cM=; b=bGIAKvozFN4iRf6t/OP8JHgiSu
	eoRZRWb9zvvZ8JVthRc23TDen3KhhChHtjckbzcp8+7hRDI7wyv9+1Df2lbewp6a1E0ymoJ9IB+Qs
	0FtQo2MtD7sF0PPwWMmBK+DITBNuv6Ev5ML1V3aUCeTVrjLt2ZtQtQy8BGWTjetLHFQM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1kcd0u-00047b-VZ@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 10 Nov 2020 23:25:24 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  5f2df45ead7c1195142f68b7923047a1e9479d54
  Bug not present: 3059178798a23ba870ff86ff54d442a07e6651fc
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156660/


  commit 5f2df45ead7c1195142f68b7923047a1e9479d54
  Author: Juergen Gross <jgross@suse.com>
  Date:   Tue Nov 10 14:36:15 2020 +0100
  
      xen/evtchn: rework per event channel lock
      
      Currently the lock for a single event channel needs to be taken with
      interrupts off, which causes deadlocks in some cases.
      
      Rework the per event channel lock to be non-blocking for the case of
      sending an event and removing the need for disabling interrupts for
      taking the lock.
      
      The lock is needed for avoiding races between event channel state
      changes (creation, closing, binding) against normal operations (set
      pending, [un]masking, priority changes).
      
      Use a rwlock, but with some restrictions:
      
      - Changing the state of an event channel (creation, closing, binding)
        needs to use write_lock(), with ASSERT()ing that the lock is taken as
        writer only when the state of the event channel is either before or
        after the locked region appropriate (either free or unbound).
      
      - Sending an event needs to use read_trylock() mostly, in case of not
        obtaining the lock the operation is omitted. This is needed as
        sending an event can happen with interrupts off (at least in some
        cases).
      
      - Dumping the event channel state for debug purposes is using
        read_trylock(), too, in order to avoid blocking in case the lock is
        taken as writer for a long time.
      
      - All other cases can use read_lock().
      
      Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/156660.bisection-summary --basis-template=156622 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 156642 fail [host=himrod1] / 156622 [host=himrod2] 156532 [host=himrod2] 156523 ok.
Failure / basis pass flights: 156642 / 156523
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 628e1becb6fb121475a6ce68e3f1cb4499851255
Basis pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#677cbe1324c29294bb1d1b8454b3f214725e40fd-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#2a5f9f6a6932214fda76b9b3c03e024772882d34-628e1becb6fb121475a6ce68e3f1cb4499851255
Loaded 10007 nodes in revision graph
Searching for test results:
 156523 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156532 [host=himrod2]
 156622 [host=himrod2]
 156628 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e6e85b662be9eab96f4cfc58e9945580cce8b2bb
 156641 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156644 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 e6e85b662be9eab96f4cfc58e9945580cce8b2bb
 156647 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156650 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 b5ad37f8e9284cc147218f7a5193d739ae7b956f
 156652 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
 156654 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5f2df45ead7c1195142f68b7923047a1e9479d54
 156642 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 628e1becb6fb121475a6ce68e3f1cb4499851255
 156655 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
 156657 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5f2df45ead7c1195142f68b7923047a1e9479d54
 156658 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
 156660 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 5f2df45ead7c1195142f68b7923047a1e9479d54
Searching for interesting versions
 Result found: flight 156523 (pass), for basis pass
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc, results HASH(0x55985838a5c8) HASH(0x559858388d40) HASH(0x55985838df78) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258, results HASH(0x559858385030) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34, results HASH(0x55985837a3b8) HASH(0x559858372e78) Result found: flight 156628 (fail), for basis failure (at ancestor ~531)
 Repro found: flight 156641 (pass), for basis pass
 Repro found: flight 156642 (fail), for basis failure
 0 revisions at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
No revisions left to test, checking graph state.
 Result found: flight 156652 (pass), for last pass
 Result found: flight 156654 (fail), for first failure
 Repro found: flight 156655 (pass), for last pass
 Repro found: flight 156657 (fail), for first failure
 Repro found: flight 156658 (pass), for last pass
 Repro found: flight 156660 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  5f2df45ead7c1195142f68b7923047a1e9479d54
  Bug not present: 3059178798a23ba870ff86ff54d442a07e6651fc
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156660/


  commit 5f2df45ead7c1195142f68b7923047a1e9479d54
  Author: Juergen Gross <jgross@suse.com>
  Date:   Tue Nov 10 14:36:15 2020 +0100
  
      xen/evtchn: rework per event channel lock
      
      Currently the lock for a single event channel needs to be taken with
      interrupts off, which causes deadlocks in some cases.
      
      Rework the per event channel lock to be non-blocking for the case of
      sending an event and removing the need for disabling interrupts for
      taking the lock.
      
      The lock is needed for avoiding races between event channel state
      changes (creation, closing, binding) against normal operations (set
      pending, [un]masking, priority changes).
      
      Use a rwlock, but with some restrictions:
      
      - Changing the state of an event channel (creation, closing, binding)
        needs to use write_lock(), with ASSERT()ing that the lock is taken as
        writer only when the state of the event channel is either before or
        after the locked region appropriate (either free or unbound).
      
      - Sending an event needs to use read_trylock() mostly, in case of not
        obtaining the lock the operation is omitted. This is needed as
        sending an event can happen with interrupts off (at least in some
        cases).
      
      - Dumping the event channel state for debug purposes is using
        read_trylock(), too, in order to avoid blocking in case the lock is
        taken as writer for a long time.
      
      - All other cases can use read_lock().
      
      Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
156660: tolerable ALL FAIL

flight 156660 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156660/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 00:04:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 00:04:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24110.51182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcdcH-0001K6-0m; Wed, 11 Nov 2020 00:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24110.51182; Wed, 11 Nov 2020 00:04:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcdcG-0001Jz-Tv; Wed, 11 Nov 2020 00:04:00 +0000
Received: by outflank-mailman (input) for mailman id 24110;
 Wed, 11 Nov 2020 00:04:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcdcG-0001Ju-42
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 00:04:00 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42cb2559-726b-46e6-be0c-6ba568fe2fe4;
 Wed, 11 Nov 2020 00:03:59 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id s9so15139008ljo.11
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 16:03:59 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id k21sm30013ljb.43.2020.11.10.16.03.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 16:03:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=76xm+qijDBcEjInIg3M0U+JEDRMn7Wif8SEw0tdElMk=;
        b=ThrRLsnZXo/mU1BCaVwghtoSRultTp31XKN2HkvyxH+lB11mkDr7QXN7GQeJnp9fo8
         zvyDtSKl7Q3DqcE9VgJUGle1ajLq5iv/ccQva+K1mrLsvbGxpzuJr6Qp+BjBLyF2MoHr
         hSbAIaPRufDIEVcMlkcJEMuRhJu+ERlCM0HbT/6O1NR6jwoP1CVOSea3LHXX+CQB4d6o
         ngsUMdGf2jPhnKI4p8qqS0GeK16CDZZDsTBg5JmtoM5h9hQxFdToiduPtEa9w4sUsCDP
         qs2SsAcoL5lRKRESv8FuuNeDstqjngLJ5nDqH0eiv0ujF/kILRcqQERf5S0aopAEYr35
         0VOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=76xm+qijDBcEjInIg3M0U+JEDRMn7Wif8SEw0tdElMk=;
        b=UUnTs/ta69JXzKm0ZoZpp0ndCCa8P2ZXE4v1vYROdBQpxIENAHIN2xY2NbTNwR75+v
         Cu/2nvuj1wAjX30lBIfPyaqWObU0FzM0ey6/dmRYXWVRFnJ9R0Z3qEbt2OGogT9KuOM3
         MU2ZQheIiCezdMp3Ue1Smto73RsJXhBkBhiNGlYlVj59qjbGHCDExlhBVAPseqdYbzk6
         /JqkK1U0gp1ywxsf1HVjTQa3Fw4wTa8sgLJdeoD63xbOlF4a6Of4j04vic4Aull+jai0
         DN63gw/fS7AqIl59iQ8IvxOEM6jBUSudl9DYKN8AwPiBSMKISfj8fGYZN5qQNYO4knx7
         cpGg==
X-Gm-Message-State: AOAM530RCq8GXfJihCMSxO6ZuuKaYiEeqsJa4afj4V7PWD5Uf5O0l20I
	/FA/vgvjyPicx5DcbcWE5r0=
X-Google-Smtp-Source: ABdhPJzuw9qiO4fqJ5NLkzharGPNppmAG0y9E9/wbed9HJP8IqF51XGEf9VrM9/I3qmcrXeAARhY9A==
X-Received: by 2002:a2e:9811:: with SMTP id a17mr9046177ljj.164.1605053038083;
        Tue, 10 Nov 2020 16:03:58 -0800 (PST)
Subject: Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
 <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
 <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <ecd5c3a5-e889-4fff-8145-3c129f619979@gmail.com>
Date: Wed, 11 Nov 2020 02:03:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 16.10.20 11:41, Julien Grall wrote:

Hi Jan, Julien

Sorry for the late response.

> Hi Jan,
>
> On 16/10/2020 07:29, Jan Beulich wrote:
>> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>>> @@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>>>        */
>>>       if ( p2m_is_valid(orig_pte) &&
>>>            !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
>>> +    {
>>> +#ifdef CONFIG_IOREQ_SERVER
>>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>>> +             (p2m->domain == current->domain) && p2m_is_ram(orig_pte.p2m.type) )
>>> +            p2m->domain->qemu_mapcache_invalidate = true;
>>> +#endif
>>>           p2m_free_entry(p2m, orig_pte, level);
>>> +    }
>>
>> For all I have to say here, please bear in mind that I don't know
>> the internals of Arm memory management.
>>
>> The first odd thing here is the merely MFN-based condition. It may
>> well be that's sufficient, if there's no way to get a "not present"
>> entry with an MFN matching any valid MFN. (This isn't just with
>> your addition, but was the case even before.)
> Invalid entries are always zeroed. So in theory the problem could 
> arise if MFN 0 is used in the guest. It should not be possible on 
> staging, but I agree this should be fixed.
>
>>
>> Given how p2m_free_entry() works (or is supposed to work in the
>> long run), is the new code you add guaranteed to only alter leaf
>> entries?
>
> This path may also be called with tables. I think we want to move the 
> check into p2m_free_entry() so we can find the correct leaf type.

Well, inside p2m_free_entry() we don't have the new entry, so we can't 
check whether the new MFN actually differs from the old one. The only 
option that comes to mind is an extra argument...

[Call sites are not updated yet, and this is untested]

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5bb23df..4001f46 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -739,7 +739,7 @@ static void p2m_put_l3_page(const lpae_t pte)

  /* Free lpae sub-tree behind an entry */
  static void p2m_free_entry(struct p2m_domain *p2m,
-                           lpae_t entry, unsigned int level)
+                           lpae_t entry, lpae_t new_entry, unsigned int level)
  {
      unsigned int i;
      lpae_t *table;
@@ -750,17 +750,19 @@ static void p2m_free_entry(struct p2m_domain *p2m,
      if ( !p2m_is_valid(entry) )
          return;

-    /* Nothing to do but updating the stats if the entry is a super-page. */
-    if ( p2m_is_superpage(entry, level) )
+    if ( p2m_is_superpage(entry, level) || (level == 3) )
      {
-        p2m->stats.mappings[level]--;
-        return;
-    }
+#ifdef CONFIG_IOREQ_SERVER
+        if ( domain_has_ioreq_server(p2m->domain) &&
+             (p2m->domain == current->domain) &&
+             !mfn_eq(lpae_get_mfn(new_entry), lpae_get_mfn(entry)) &&
+             p2m_is_ram(entry.p2m.type) )
+            p2m->domain->qemu_mapcache_invalidate = true;
+#endif

-    if ( level == 3 )
-    {
          p2m->stats.mappings[level]--;
-        p2m_put_l3_page(entry);
+        if ( level == 3 )
+            p2m_put_l3_page(entry);
          return;
      }

(END)

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 00:09:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 00:09:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24118.51194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcdhM-0001Xw-M6; Wed, 11 Nov 2020 00:09:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24118.51194; Wed, 11 Nov 2020 00:09:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcdhM-0001Xp-HJ; Wed, 11 Nov 2020 00:09:16 +0000
Received: by outflank-mailman (input) for mailman id 24118;
 Wed, 11 Nov 2020 00:09:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcdhL-0001Xk-1J
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 00:09:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15ab6c0c-2f07-4010-aec9-d69ef99af8de;
 Wed, 11 Nov 2020 00:09:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcdhH-0006qj-6V; Wed, 11 Nov 2020 00:09:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcdhG-0003xn-Qt; Wed, 11 Nov 2020 00:09:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcdhG-0005ga-Q2; Wed, 11 Nov 2020 00:09:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcdhL-0001Xk-1J
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 00:09:15 +0000
X-Inumbo-ID: 15ab6c0c-2f07-4010-aec9-d69ef99af8de
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 15ab6c0c-2f07-4010-aec9-d69ef99af8de;
	Wed, 11 Nov 2020 00:09:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L2Geh5Yc5rFD65o0WuXo+JzUjduKJv8ROa2yWSJBcO8=; b=6yUpV7E/r/l5DyQuJYqKd8/7GK
	HaKl6pgcmNFETq9IEzK2SeF/bmQWLN1Qvm3IEQBBD4SraCy1RfASGDR4t5UEAUGf1ZeJs2wjcEIks
	1dIH5nRlzdx1AVLS5ivb0dPgLeTQ+dTp3yj9K9CjfqiscQBtrEuvQRqhSOgMA72Hu+u8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcdhH-0006qj-6V; Wed, 11 Nov 2020 00:09:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcdhG-0003xn-Qt; Wed, 11 Nov 2020 00:09:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcdhG-0005ga-Q2; Wed, 11 Nov 2020 00:09:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156608-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156608: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 00:09:10 +0000

flight 156608 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156608/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156588
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 156588

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    5 days
Failing since        156524  2020-11-06 14:22:28 Z    4 days    6 attempts
Testing same since   156538  2020-11-07 06:32:07 Z    3 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 29 15:03:32 2020 -0400

    libxl: Add suppress-vmdesc to QEMU machine
    
    The device model state saved by QMP xen-save-devices-state doesn't
    include the vmdesc json.  When restoring an HVM, xen-load-devices-state
    always triggers "Expected vmdescription section, but got 0".  This is
    not a problem when restore comes from a file.  However, when QEMU runs
    in a linux stubdom and comes over a console, EOF is not received.  This
    causes a delay restoring - though it does restore.
    
    Setting suppress-vmdesc skips looking for the vmdesc during restore and
    avoids the wait.
    
    QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
    sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
    used.
    
    QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
    submission" added suppress-vmdesc in QEMU 2.3.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>

commit cd800ce442eeba5bc0857ade70a075367c01c350
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Nov 6 16:12:56 2020 +0000

    libxl: set vuart_gfn in libxl__build_hvm
    
    Setting vuart_gfn was missed when switching ARM guests to the PVH build.
    Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
    dom->vuart_gfn.
    
    Without this change, xl console cannot connect to the vuart console (-t
    vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 4196b1523aebe0ed929accba318d5e833d7ff6b3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 15:05:04 2020 +0100

    tools/libs/light: correct bitmap operations
    
    Libxl bitmap operations for single bits (test, set, reset) take the bit
    number as a signed integer without testing the value to be larger than
    0.
    
    Correct that by adding the appropriate tests.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8aac8e0ef43a452d0b565d63e4943c275badba3f
Author: Olaf Hering <olaf@aepfle.de>
Date:   Fri Nov 6 14:05:17 2020 +0100

    docs/xl: fix cpupool-cpu-remove
    
    The cpu-pool must be specified.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Wei Liu <wl@xen.org>

commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in their try variants regarding
    preemption: rwlocks switch preemption off before testing the lock,
    while spinlocks do so only after the first check.
    
    Modify _spin_trylock() to disable preemption before testing whether the
    lock is held, in order to be preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
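
The harmonized ordering can be sketched as below, with simplified stand-ins for Xen's preempt_disable()/preempt_enable() and spinlock primitives (the real ones differ). The point is that preemption goes off before the lock word is first examined, so a successful probe cannot be preempted between the check and the acquisition.

```c
/* Toy model: preemption state is a plain counter, the lock an
 * atomic_flag.  Names mirror, but do not reproduce, Xen's API. */
#include <stdatomic.h>

static int preempt_count;
static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

struct spinlock { atomic_flag held; };

static int spin_trylock(struct spinlock *l)
{
    preempt_disable();                       /* before the first check */
    if (atomic_flag_test_and_set(&l->held)) {
        preempt_enable();                    /* lock busy: back out */
        return 0;
    }
    return 1;                    /* acquired, preemption stays off */
}

static void spin_unlock(struct spinlock *l)
{
    atomic_flag_clear(&l->held);
    preempt_enable();
}
```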

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen if one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
    such that the file(s) themselves are generated before compilation
    gets attempted. The same, however, is also necessary for generated
    headers, before source files including them are compiled.
    
    The dependency specified in libacpi's Makefile, on the other hand, is
    entirely pointless nowadays - no compilation happens there anymore
    (except for tools involved in building the generated files). Together
    with it, the rule generating acpi.a can also go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 00:53:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 00:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24129.51214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kceNt-0005vg-10; Wed, 11 Nov 2020 00:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24129.51214; Wed, 11 Nov 2020 00:53:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kceNs-0005vZ-UL; Wed, 11 Nov 2020 00:53:12 +0000
Received: by outflank-mailman (input) for mailman id 24129;
 Wed, 11 Nov 2020 00:53:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kceNr-0005vU-76
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 00:53:11 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f9dbde4-1312-4efd-a783-4c30960e8620;
 Wed, 11 Nov 2020 00:53:10 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id w142so808082lff.8
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 16:53:08 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id b13sm40632ljf.107.2020.11.10.16.53.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 10 Nov 2020 16:53:06 -0800 (PST)
X-Inumbo-ID: 3f9dbde4-1312-4efd-a783-4c30960e8620
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=LRw/QZu5dIL2xriEbomndRVnVkLZPwRTAU/cv/U3GFA=;
        b=Q8oAW52+w8xZ5KFDsBl6QpJY5bOAhZjwrdnFiIQhxfR51ZJrBAVb+FlgCee6bK5aGh
         5FxjtHJ1f78FVzBvLYWffHNeTdDjfrLnTlq60+P4XknMcZnIbe36vrChf2B3XGrP2TNw
         7kf088AjIw+WzMCOQc22JkTtZW6e+DJQdxg6T0bFGPs6EUEcQQffIi8+wKTJCMxG0mdf
         4B1GnYe1N8VMGlVbncgi7xrW5zzGzX8lwyHrwsHipmu5nHLRBFzoQINWEjtsjo1hyNKH
         P65WIOXgCVpGjFnoEWIKic2lMndGLXg+33+E4ez/ASR2lI6sOk16FVJrGGb8+MPuZzT5
         qpFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=LRw/QZu5dIL2xriEbomndRVnVkLZPwRTAU/cv/U3GFA=;
        b=NZG6UKr3YNrw8tq3Eel/CZGB121bx5MbVcqmIQr2ytKhz0S3xxa7VnFfMySxzNQRZw
         sSppMN171wxSf8Q9R/maOLsRuTq/fWo4zlv2z0xvRL1tjLy/KmBpRcCv5buCiwtqyqlb
         lcVSeEF3fpF+ehiZ1MHfO0eTBng17PmmrCkeqPV+bji0ccC5gEsfBF8ZZ01R3HA0iVKq
         HDrDhL0oR4+x+BLLxZ1Nc4llBvUAygKbST7jOK8DTxRZzLTjyyWlJiH6U2T9pwGrr4rc
         Nyhqp3d21UHRxaZxygfk1GXj/8tvY89K/tTRmrtdyFx6/vEwxq2+x0+aU2kwsYWlSZGD
         B9GQ==
X-Gm-Message-State: AOAM533pJrWPidO9Gz1bvqJutWYMSuySydrwEjwpn0os3Ww7XVfvMyTM
	fWLMhOfVE3entviaQdSqO6k=
X-Google-Smtp-Source: ABdhPJztHAicBw9PTCIV7025j4tNDaE4GHIrIs8fqFUivrnDBmVL+vbA5i/4FE1JJHaleIw3eTM+GQ==
X-Received: by 2002:a19:c60c:: with SMTP id w12mr1445530lff.244.1605055987491;
        Tue, 10 Nov 2020 16:53:07 -0800 (PST)
Subject: Re: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
 configuration
To: Wei Chen <Wei.Chen@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
 <AM0PR08MB374735F747FFF1A3016F37329EEA0@AM0PR08MB3747.eurprd08.prod.outlook.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <99636dd8-4c90-bb84-b96f-6c7a9ad31b63@gmail.com>
Date: Wed, 11 Nov 2020 02:53:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <AM0PR08MB374735F747FFF1A3016F37329EEA0@AM0PR08MB3747.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 09.11.20 08:45, Wei Chen wrote:
> Hi Oleksandr,

Hi Wei,


>
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
>> Oleksandr Tyshchenko
>> Sent: 16 October 2020 0:45
>> To: xen-devel@lists.xenproject.org
>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Ian Jackson
>> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony PERARD
>> <anthony.perard@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
>> <sstabellini@kernel.org>
>> Subject: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk configuration
>>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch adds basic support for configuring and assisting the
>> virtio-disk backend (emulator), which is intended to run outside of QEMU
>> and can run in any domain.
>>
>> Xenstore was chosen as the communication interface so that the emulator,
>> running in a non-toolstack domain, can get its configuration either by
>> reading Xenstore directly or by receiving command line parameters (an
>> updated 'xl devd' running in the same domain would read Xenstore
>> beforehand and call the backend executable with the required arguments).
>>
>> An example of domain configuration (two disks are assigned to the guest,
>> the latter in read-only mode):
>>
>> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
>>
> Can we keep using the same 'disk' parameter for virtio-disk, but add an
> option like "model=virtio-disk"?
> For example:
> disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ]
> Just like what Xen has done for x86 virtio-net.

I think this needs additional investigation. In general I agree with you
that it would be nice to reuse the existing 'disk' parameter somehow
rather than introducing a new one for the same purpose (handling virtual
block devices).


One note: although both are used for the same purpose, they differ in at
least one important option.

For example:
1. disk = [ 'backend=DomD, phy:/dev/mmcblk0p3, xvda1' ]
2. vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3' ]
As you can see, the existing "disk" parameter contains xvda1, which means
that a new device /dev/xvda1 will appear on the guest side, but "vdisk"
doesn't contain anything similar. So with the Xen PV driver (xen-blkfront)
we are able to configure a device name, but with the VirtIO solution
(virtio-blk) we aren't (at least I don't know how).





-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 01:31:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 01:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24139.51233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kceyg-0005VK-4Q; Wed, 11 Nov 2020 01:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24139.51233; Wed, 11 Nov 2020 01:31:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kceyf-0005VB-U2; Wed, 11 Nov 2020 01:31:13 +0000
Received: by outflank-mailman (input) for mailman id 24139;
 Wed, 11 Nov 2020 01:31:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kceye-0005V6-LW
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 01:31:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a824303-b16b-43c0-8559-c0a1efeccb4e;
 Wed, 11 Nov 2020 01:31:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kceyb-0004EK-Jp; Wed, 11 Nov 2020 01:31:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kceyb-0000gw-4X; Wed, 11 Nov 2020 01:31:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kceyb-0007wy-42; Wed, 11 Nov 2020 01:31:09 +0000
X-Inumbo-ID: 7a824303-b16b-43c0-8559-c0a1efeccb4e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=RkxAee3Pfj/VGx/Ts8l8FqUpizg+8ZCH07GD00r9v0k=; b=QAFcjxtzEtcRur/zllODYd9w5u
	ocwnJ/eZAs0WTQNmVmhvDHsH5GJ+do+14F9/FtXXLP6+peqjga4NoeJ3QpiJiPADTo6HxDM3EBVCx
	dS08/P3Aq9/bnGjKM9yJV8lgiDIJkEL1fa6LvxLbTTp+9p7sjynQgwZ/p6b4sDxWmUto=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm
Message-Id: <E1kceyb-0007wy-42@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 01:31:09 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156664/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/156664.bisection-summary --basis-template=156443 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 156608 fail [host=godello0] / 156443 [host=albana0] 156401 [host=huxelrebe1] 156389 [host=godello1] 156373 [host=albana1] 156354 [host=fiano1] 156339 ok.
Failure / basis pass flights: 156608 / 156339
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#677cbe1324c29294bb1d1b8454b3f21\
 4725e40fd-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#7056f2f89f03f2f804ac7e776c7b2b000cd716cd-0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Loaded 10007 nodes in revision graph
Searching for test results:
 156331 [host=elbling0]
 156339 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd
 156354 [host=fiano1]
 156373 [host=albana1]
 156389 [host=godello1]
 156401 [host=huxelrebe1]
 156443 [host=albana0]
 156524 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156538 fail irrelevant
 156556 fail irrelevant
 156577 fail irrelevant
 156588 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156626 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd
 156627 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156629 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2b8314a3c354d04545700c80ff5a5f86799b79c7
 156637 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
 156638 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156643 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156651 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156608 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156661 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156664 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
Searching for interesting versions
 Result found: flight 156339 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c, results HASH(0x55cd7b11cd80) HASH(0x55cd7b1270d0) HASH(0x55cd7b134050) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe132\
 4c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1, results HASH(0x55cd7b118748) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2b8314a3c354d04545700c80ff5a5f86799b79c7, results HASH(0x55cd7b12a780) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 7056f2f89f03f2f804ac7e776c7b2b000cd716cd, results HASH(0x55cd799a5ae0) HASH(0x55cd7b11d380) Result found: flight 156524 (fail), for basis failure (at ancestor ~524)
 Repro found: flight 156626 (pass), for basis pass
 Repro found: flight 156627 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
No revisions left to test, checking graph state.
 Result found: flight 156638 (pass), for last pass
 Result found: flight 156643 (fail), for first failure
 Repro found: flight 156651 (pass), for last pass
 Repro found: flight 156656 (fail), for first failure
 Repro found: flight 156661 (pass), for last pass
 Repro found: flight 156664 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156664/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
156664: tolerable ALL FAIL

flight 156664 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156664/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 01:32:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 01:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24145.51251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcf07-0005d1-Gc; Wed, 11 Nov 2020 01:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24145.51251; Wed, 11 Nov 2020 01:32:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcf07-0005cu-Br; Wed, 11 Nov 2020 01:32:43 +0000
Received: by outflank-mailman (input) for mailman id 24145;
 Wed, 11 Nov 2020 01:32:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcf06-0005ba-0p
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 01:32:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eae9071f-b328-49ab-8fca-b688f95980af;
 Wed, 11 Nov 2020 01:32:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcezy-0004Gi-F8; Wed, 11 Nov 2020 01:32:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcezx-0000nv-Vy; Wed, 11 Nov 2020 01:32:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcezx-0000YU-VT; Wed, 11 Nov 2020 01:32:33 +0000
X-Inumbo-ID: eae9071f-b328-49ab-8fca-b688f95980af
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LnLLcXEAKkvUuGu08hSuZDHJz3zOD/cTHQB9SlBqqRI=; b=qu/CY1Q5MLs+XTbx6Xp0h3JDk4
	ze3ZivmQXtX3JsGbddZhg7LM3L6HNwwANUIGYq7+dFvUhfOSzGWWqRTwG4FMPb08Sz43SUWiUsTrC
	CTIQxKGrZ6dHFRi2SVvT2b6fvmp0Wo2RZH7c13miY+C11uvrVtNG2DBxHg1JrCKTDOg0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156659-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156659: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=628e1becb6fb121475a6ce68e3f1cb4499851255
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 01:32:33 +0000

flight 156659 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156659/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156622

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  628e1becb6fb121475a6ce68e3f1cb4499851255
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z    0 days
Failing since        156628  2020-11-10 17:00:28 Z    0 days    3 attempts
Testing same since   156642  2020-11-10 20:00:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 628e1becb6fb121475a6ce68e3f1cb4499851255
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Nov 9 20:28:59 2020 +0000

    xen/arm: Always trap AMU system registers
    
    The Activity Monitors Unit (AMU) was introduced by ARMv8.4. It is
    considered unsafe to expose to guests, as it might leak information
    about code executed by other guests or the host.
    
    Arm provided a way to trap all the AMU system registers by setting
    CPTR_EL2.TAM to 1.
    
    Unfortunately, on older revisions of the specification, bit 30 (now
    CPTR_EL1.TAM) was RES0. Because of that, Xen is setting it to 0, and
    therefore the system registers would be exposed to the guest when it
    is run on processors with AMU.
    
    As the bit is marked as UNKNOWN at boot in Armv8.4, the only safe
    solution for us is to always set CPTR_EL1.TAM to 1.
    
    A guest trying to access the AMU system registers will now receive an
    undefined instruction. Unfortunately, this means that even a
    well-behaved guest may fail to boot because we don't sanitize the ID
    registers.
    
    This is a known issue with other Armv8.0+ features (e.g. SVE, Pointer
    Auth). This will be taken care of separately.
    
    This is part of XSA-351 (or XSA-93 re-born).
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andre Przywara <andre.przywara@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit e6e85b662be9eab96f4cfc58e9945580cce8b2bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:40:09 2020 +0100

    x86/CPUID: also check leaf 7 max subleaf to be compatible
    
    Just as is done for the basic and extended major leaves.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f5cfa09856732b1d78ff6a21ca3dc33a010da951
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:30 2020 +0100

    x86/CPUID: suppress IOMMU related hypervisor leaf data
    
    Now that the IOMMU for guests can't be enabled "on demand" anymore,
    there's also no reason to expose the related CPUID bit "just in case".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit db1a9fdd554cb1d8a7099af7925318fc06c6875b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:03 2020 +0100

    x86/CPUID: don't use UB shift when library is built as 32-bit
    
    At least the insn emulator test harness will continue to be buildable
    (and ought to continue to be usable) also as a 32-bit binary. (Right now
    the CPU policy test harness is, too, but there it may be less relevant
    to keep it functional, just like e.g. we don't support fuzzing the insn
    emulator in 32-bit mode.) Hence the library code needs to cope with
    this.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit b5ad37f8e9284cc147218f7a5193d739ae7b956f
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:37:15 2020 +0100

    xen/evtchn: revert 52e1fc47abc3a0123
    
    With the event channel lock no longer disabling interrupts, commit
    52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
    be reverted again.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 5f2df45ead7c1195142f68b7923047a1e9479d54
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:36:15 2020 +0100

    xen/evtchn: rework per event channel lock
    
    Currently the lock for a single event channel needs to be taken with
    interrupts off, which causes deadlocks in some cases.
    
    Rework the per event channel lock to be non-blocking for the case of
    sending an event, removing the need to disable interrupts when taking
    the lock.
    
    The lock is needed to avoid races between event channel state changes
    (creation, closing, binding) and normal operations (set pending,
    [un]masking, priority changes).
    
    Use a rwlock, but with some restrictions:
    
    - Changing the state of an event channel (creation, closing, binding)
      needs to use write_lock(), ASSERT()ing that the lock is only taken
      as writer when the state of the event channel either before or after
      the locked region is appropriate (either free or unbound).
    
    - Sending an event mostly needs to use read_trylock(); if the lock
      cannot be obtained, the operation is omitted. This is needed as
      sending an event can happen with interrupts off (at least in some
      cases).
    
    - Dumping the event channel state for debug purposes uses
      read_trylock(), too, in order to avoid blocking in case the lock is
      held as writer for a long time.
    
    - All other cases can use read_lock().
    
    Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
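
    The send-path restriction above can be illustrated with a small
    sketch. This is an illustrative Python analogue, not Xen code: the
    names EventChannelLock and send_event are invented for the example,
    and the rwlock is modeled with a plain mutex plus a reader count.

```python
# Illustrative analogue (not Xen code) of the locking scheme described in
# the commit message: state changes take the lock as writer, while the
# send path only try-acquires it as reader and drops the event if the
# lock is unavailable, so sending never blocks (interrupts may be off).
import threading

class EventChannelLock:
    """Minimal rwlock with a non-blocking reader path (sketch only)."""
    def __init__(self):
        self._mutex = threading.Lock()
        self._readers = 0
        self._writer = False

    def write_lock(self):
        # Spin until no reader or writer holds the lock (sketch only).
        while True:
            with self._mutex:
                if not self._writer and self._readers == 0:
                    self._writer = True
                    return

    def write_unlock(self):
        with self._mutex:
            self._writer = False

    def read_trylock(self):
        # Never blocks: mirrors read_trylock() on the send path.
        with self._mutex:
            if self._writer:
                return False
            self._readers += 1
            return True

    def read_unlock(self):
        with self._mutex:
            self._readers -= 1

def send_event(lock, deliver):
    # If the channel is being created/closed/rebound (writer held),
    # the send is simply omitted rather than blocking.
    if not lock.read_trylock():
        return False
    try:
        deliver()
        return True
    finally:
        lock.read_unlock()
```

    With an uncontended lock the event is delivered; while a writer holds
    the lock (e.g. during closing), send_event() returns False and the
    event is dropped instead of deadlocking.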
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 03:00:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 03:00:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24162.51272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcgMj-0005IW-25; Wed, 11 Nov 2020 03:00:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24162.51272; Wed, 11 Nov 2020 03:00:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcgMi-0005IP-TS; Wed, 11 Nov 2020 03:00:08 +0000
Received: by outflank-mailman (input) for mailman id 24162;
 Wed, 11 Nov 2020 03:00:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qFwf=ER=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1kcgMh-0005IK-47
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 03:00:07 +0000
Received: from mail-oi1-x234.google.com (unknown [2607:f8b0:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ab605cd-4c71-4ab4-a227-ac092ff6790c;
 Wed, 11 Nov 2020 03:00:05 +0000 (UTC)
Received: by mail-oi1-x234.google.com with SMTP id d9so555624oib.3
 for <xen-devel@lists.xenproject.org>; Tue, 10 Nov 2020 19:00:05 -0800 (PST)
X-Inumbo-ID: 7ab605cd-4c71-4ab4-a227-ac092ff6790c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=beEpBTWagLBOc/O+uVfh7++man2OBoWOZC896ZVA0ZQ=;
        b=pHl011s1kqxQ4UQY/epoecoMeoxXwARF14NhVxLqOHiLSa8zBUo/Z84FUKwy91Dcx1
         UsJfq2q1sXnqoMtz3/QV9o+9hpkXWM6rjlgkbMCUUf/GmhHcG6N/S+5qTPX2z2ALzPfu
         otxYJsDaKvBmI2McDSaayk/RQ0Bwor09M/kdLyxJg7AeT3wyNV5hOn3vgfCKLsw/tyc4
         GBXkkOuOn0o/Xq42hzeiNfutAkLIfE/f0RClIyctmCeBYCaeCrcFrnG4ZrgiF9qLKz5j
         yDc9RTZCKk95AHl1V9I5lMUXN+p+ldatDwVXofdLSZ9DdYh4MdRncJrzhhZXoplsUtjz
         aS6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=beEpBTWagLBOc/O+uVfh7++man2OBoWOZC896ZVA0ZQ=;
        b=q5G2KsUacttEyjnNz0s68hM4N/pII4GuzYVqYBEpXb1YqkrEX7QvSOIKPyxIqpbQSr
         gg1ujPbIGRF4Bk0dBu2M+CftGWRMhSnF6iWelqbWvGWYTW7hkYdDbFpNDhtyv08Epuko
         WDf0zth3y/m952RMYgtRJdrOEb34OZ34eUhQeqHcXSbheRdtbd5rcxH3g7rwV1FvYeNh
         6PWCv+KRDRrHyvWv5qRfUYJc/OYuavwD7wKWkDNZZ/oCErvdSEPdmw1XOyK4zItzkkil
         iAy0aoyBxe4E37wXnagcu/WNEEzdPjhAm8O1nYqDOxwfGqKhghhM6uNJSuyqcnDloCOg
         dZCQ==
X-Gm-Message-State: AOAM532O6iH4y9O2AvRGuz9SrwBed++izFlQ5RaAVmH9+MlMHffYVaym
	RIhGKZtDBse1PbAc8cT/YMvHkHSTUdBABPgnO/eRFIhG8aZEBA==
X-Google-Smtp-Source: ABdhPJxovtd3/lcT+kwqCBUQ1hvE1nmjWnP6LQrdvfLuhTvqvKS3vzVGQm4sIONRRYiA9oxY/0CNVpo88hJjoBdwb1Y=
X-Received: by 2002:aca:d941:: with SMTP id q62mr844936oig.33.1605063605402;
 Tue, 10 Nov 2020 19:00:05 -0800 (PST)
MIME-Version: 1.0
References: <20201022021655.GA74011@mattapan.m5p.com> <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com> <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com> <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com> <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com> <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
 <20201028015423.GA33407@mattapan.m5p.com> <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
 <e885b2a9-f6ea-e224-b906-125936cfe550@suse.com> <alpine.DEB.2.21.2010291255070.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010291255070.12247@sstabellini-ThinkPad-T480s>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 10 Nov 2020 18:59:53 -0800
Message-ID: <CACMJ4GZfXkJhGi5uGhA=Ke4y5d4ukfoaAh9r-BSOs84QBX-Ztw@mail.gmail.com>
Subject: Re: Xen on RP4
To: Stefano Stabellini <sstabellini@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	=?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, Julien Grall <julien@xen.org>, 
	Roman Shaposhnik <roman@zededa.com>, Daniel Smith <dpsmith@apertussolutions.com>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Rich Persaud <persaur@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 29, 2020 at 12:58 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> > On Thu, 29 Oct 2020, Jürgen Groß wrote:
> > On 29.10.20 01:37, Stefano Stabellini wrote:
> > > On Tue, 27 Oct 2020, Elliott Mitchell wrote:
> > > > On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> > > > > On 26/10/2020 16:03, Elliott Mitchell wrote:
> > > > > > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> > > > > > > On 24/10/2020 06:35, Elliott Mitchell wrote:
> > > > > > > > ACPI has a distinct
> > > > > > > > means of specifying a limited DMA-width; the above fails, because
> > > > > > > > it assumes a *device-tree*.
> > > > > > >
> > > > > > > Do you know if it would be possible to infer from the ACPI static
> > > > > > > table the DMA-width?
> > > > > >
> > > > > > Yes, and it is.  Due to not knowing much about ACPI tables I don't
> > > > > > know what the C code would look like though (problem is which
> > > > > > documentation should I be looking at first?).
> > > > >
> > > > > What you provided below is an excerpt of the DSDT. AFAIK, DSDT content
> > > > > is written in AML. So far the shortest implementation I have seen for
> > > > > the AML parser is around 5000 lines (see [1]). It might be possible to
> > > > > strip some of the code, although I think this will still probably be
> > > > > too big for a single workaround.
> > > > >
> > > > > What I meant by "static table" is a table that looks like a structure
> > > > > and can be parsed in a few lines. If we can't find one containing the
> > > > > DMA window, then the next best solution is to find a way to identify
> > > > > the platform.
> > > > >
> > > > > I don't know enough ACPI to know if this solution is possible. A good
> > > > > starter would probably be the ACPI spec [2].
> > > >
> > > > Okay, that is worse than I had thought (okay, ACPI is impressively
> > > > complex for something nominally firmware-level).
> > > >
> > > > There are strings in the present Tianocore implementation for Raspberry
> > > > PI 4B which could be targeted.  Notably included in the output during
> > > > boot listing the tables: "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4" (I'm
> > > > unsure how kosher these are as this wasn't implemented nor blessed by
> > > > the Raspberry PI Foundation).
> > > >
> > > > I strongly dislike this approach as you soon turn the Xen project into
> > > > a database of hardware.  This is already occurring with
> > > > xen/arch/arm/platforms and I would love to do something about this.  On
> > > > that thought, how about utilizing Xen's command-line for this purpose?
> > >
> > > I don't think that a command line option is a good idea: basically it is
> > > punting to users the task of platform detection. Also, it means that
> > > users will be necessarily forced to edit the uboot script or grub
> > > configuration file to boot.
> > >
> > > Note that even if we introduced a new command line, we wouldn't take
> > > away the need for xen/arch/arm/platforms anyway.
> > >
> > > It would be far better for Xen to autodetect the platform if we can.
> > >
> > >
> > > > Have a procedure where, during installation/updates, DMA limitation
> > > > information is retrieved from the running OS, and on the following
> > > > boot Xen will follow the appropriate setup.  By its nature, Domain 0
> > > > will have the information needed; it just becomes an issue of how
> > > > hard that is to retrieve...
> > >
> > > Historically that is what we used to do for many things related to ACPI,
> > > but unfortunately it leads to a pretty bad architecture where Xen
> > > depends on Dom0 for booting rather than the other way around. (Dom0
> > > should be the one requiring Xen for booting, given that Xen is higher
> > > privilege and boots first.)
> > >
> > >
> > > I think the best compromise is still to use an ACPI string to detect the
> > > platform. For instance, would it be possible to use the OEMID fields in
> > > RSDT, XSDT, FADT?  Possibly even a combination of them?
> > >
> > > Another option might be to get the platform name from UEFI somehow.
> >
> > What about having a small domain that boots first, parses the ACPI, and
> > uses that information for booting dom0?
> >
> > That dom would be part of the Xen build and the hypervisor wouldn't need
> > to gain all the ACPI AML logic.
>
> That could work, but in practice we don't have such a domain today --
> the infrastructure is missing. I wonder whether the bootloader (uboot or
> grub) would know about the platform and might be able to pass that
> information to Xen somehow.

This is one of the functions envisioned for the Boot Domain in the
proposal to add a DomB mode to dom0less[1], in the work developed by
Daniel Smith and me.

With DomB, as much or as little of platform detection and initial
domain configuration as needed can be performed from the Boot Domain.
ACPI was specifically discussed in the second DomB Design Session at
this year's Xen Summit[2].

Further information is available on the Xen wiki[3], thanks to Rich
Persaud, including links to the Summit presentation on DomB for Xen
System Boot and the discussion that followed in the Design Session[4];
specific questions can be raised in this thread.

thanks,

Christopher

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg00233.html
[2] https://www.youtube.com/watch?v=jlyWfdW6D-E&t=15m26s
[3] https://wiki.xenproject.org/wiki/DomB_mode_of_dom0less
[4] https://lists.archive.carbon60.com/xen/devel/592140
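
As an aside on the OEMID idea raised in this thread: every ACPI System
Description Table (RSDT, XSDT, FADT, ...) begins with a common 36-byte
header whose OEMID field sits at byte offset 10, so matching on it needs
no AML parsing. Below is a minimal Python sketch of that check; the
"RPIFDN" value, the helper names, and the hand-built table bytes are
illustrative assumptions, not verified firmware output.

```python
# Sketch of OEMID-based platform detection: parse the common ACPI table
# header (Signature, Length, Revision, Checksum, OEMID, OEM Table ID,
# OEM Revision, Creator ID, Creator Revision -- 36 bytes total, per the
# ACPI specification) and compare the OEMID against a known string.
import struct

ACPI_HEADER = struct.Struct("<4sIBB6s8sI4sI")  # 36-byte common header

def table_oem_id(table: bytes) -> str:
    (sig, length, rev, checksum,
     oem_id, oem_table_id, oem_rev,
     creator_id, creator_rev) = ACPI_HEADER.unpack_from(table)
    return oem_id.rstrip(b" \x00").decode("ascii")

def looks_like_rpi4(table: bytes) -> bool:
    # "RPIFDN" is only an assumed example value for illustration.
    return table_oem_id(table) == "RPIFDN"

# Hypothetical FADT: a hand-built header plus zero padding up to Length.
fadt = ACPI_HEADER.pack(b"FACP", 276, 6, 0, b"RPIFDN", b"RPI4    ",
                        1, b"RPI ", 1) + b"\x00" * 240
```

Checking several tables' OEMIDs (RSDT, XSDT, FADT) and requiring them to
agree, as suggested above, would just repeat this check per table.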


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 04:03:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 04:03:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24171.51290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchLi-0002Bw-FQ; Wed, 11 Nov 2020 04:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24171.51290; Wed, 11 Nov 2020 04:03:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchLi-0002Bp-CP; Wed, 11 Nov 2020 04:03:10 +0000
Received: by outflank-mailman (input) for mailman id 24171;
 Wed, 11 Nov 2020 04:03:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kchLg-0002AO-Vc
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 04:03:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3837a018-7ac8-4956-9d23-ee9535cdcd37;
 Wed, 11 Nov 2020 04:03:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kchLd-0007qc-6f; Wed, 11 Nov 2020 04:03:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kchLc-0000pc-Uw; Wed, 11 Nov 2020 04:03:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kchLc-0008B5-MC; Wed, 11 Nov 2020 04:03:04 +0000
X-Inumbo-ID: 3837a018-7ac8-4956-9d23-ee9535cdcd37
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GRGovCJQf24Rcfoqh40eeEpl1dXLNBFBObQnA6gPNIo=; b=xmcYTCw/DNnkrgH41YnPvWmadE
	/ZzVUiBSYA811KE1rOLyxGvlf4s4o0u8ulb7yVi/Fu+aiR2jWwFExTcf8/iWmH5TC6jLMPk3n1jEc
	+NLLds1ECD/Ymcivm4+RoHjrwnhvTgIS9irJFEYz5ikAeq8WmhPaLCz/L9YFWhlbyys8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156617-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156617: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:debian-fixup:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a38060ece699f22cd317219bec53c0d27279155b
X-Osstest-Versions-That:
    xen=0c96e4297da07944525729ddbe438b0131ab5b7e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 04:03:04 +0000

flight 156617 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156617/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156525

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu 13 debian-fixup    fail in 156594 pass in 156617
 test-amd64-i386-xl-qemuu-debianhvm-amd64 18 guest-localmigrate/x10 fail pass in 156594

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156525
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156525
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156525
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156525
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156525
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156525
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a38060ece699f22cd317219bec53c0d27279155b
baseline version:
 xen                  0c96e4297da07944525729ddbe438b0131ab5b7e

Last test of basis   156525  2020-11-06 16:01:25 Z    4 days
Testing same since   156594  2020-11-09 18:08:08 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a38060ece699f22cd317219bec53c0d27279155b
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 14:57:01 2020 +0100

    tools/libs/stat: use memcpy instead of strncpy in getBridge
    
    Use memcpy in getBridge to prevent gcc warnings about truncated
    strings. We know that the string might be truncated, so the gcc
    warning here is wrong.
    Revert the previous change that increased the buffer sizes, as
    bigger buffers are not needed.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 40fe714ca4245a9716264fcb3ee8df42c3650cf6)
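
    An illustrative sketch of the pattern this commit describes: copy a
    computed number of bytes with memcpy and NUL-terminate explicitly when
    truncation is deliberate, instead of using strncpy with the full buffer
    size (which trips gcc's -Wstringop-truncation). All names here are
    invented; this is not the actual getBridge() code.

    ```c
    /* Bounded copy with intentional truncation: memcpy avoids gcc's
     * -Wstringop-truncation warning that strncpy(dst, src, sizeof(dst))
     * would trigger. copy_truncated() is a hypothetical helper. */
    #include <assert.h>
    #include <string.h>

    static void copy_truncated(char *dst, size_t dstsize, const char *src)
    {
        size_t len = strlen(src);

        if (len >= dstsize)
            len = dstsize - 1;      /* truncation is intended here */
        memcpy(dst, src, len);
        dst[len] = '\0';            /* always NUL-terminate */
    }

    int main(void)
    {
        char buf[16];

        copy_truncated(buf, sizeof(buf), "xenbr0");
        assert(strcmp(buf, "xenbr0") == 0);

        copy_truncated(buf, sizeof(buf), "a-much-too-long-bridge-name");
        assert(strlen(buf) == sizeof(buf) - 1);
        return 0;
    }
    ```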

commit 78a53f0ee04bfd14dc21dabd3a0d79343c3b642f
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 14:57:02 2020 +0100

    tools/libs/light: Fix libxenlight gcc warning
    
    Fix gcc10 compilation warning about uninitialized variable by setting
    it to 0.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 0241809bf838875615797f52af34222e5ab8e98f)

commit 89ae1b185a193fea8e86840c48a2711f04042415
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 0d8d289af7a679c028462c4ed5d98586f9ef9648)

commit 7398a44e8636c99f6b06fb16d05a64ee06980d10
Author: Juergen Gross <jgross@suse.com>
Date:   Sat Sep 12 15:08:36 2020 +0200

    tools/libs/stat: fix broken build
    
    Making getBridge() static triggered a build error with some gcc versions:
    
    error: 'strncpy' output may be truncated copying 15 bytes from a string of
    length 255 [-Werror=stringop-truncation]
    
    Fix that by using a buffer with 256 bytes instead.
    
    Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit c8099e48c3dbb8c7399551a265756ecf354eece2)

commit 59b83663f925092f60f699147390c6cb77b55e43
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Thu Mar 19 20:40:24 2020 +0000

    tools/xenstore: Do not abort xenstore-ls if a node disappears while iterating
    
    The do_ls() function has somewhat inconsistent handling of errors.
    
    If reading the node's contents with xs_read() fails, then do_ls() will
    just quietly not display the contents.
    
    If reading the node's permissions with xs_get_permissions() fails, then
    do_ls() will print a warning, continue, and ultimately won't exit with
    an error code (unless another error happens).
    
    If recursing into the node with xs_directory() fails, then do_ls() will
    abort immediately, not printing any further nodes.
    
    For persistent failure modes — such as ENOENT because a node has been
    removed, or EACCES because it has had its permissions changed since the
    xs_directory() on the parent directory returned its name — it's
    obviously quite likely that if either of the first two errors occur for
    a given node, then so will the third and thus xenstore-ls will abort.
    
    The ENOENT one is actually a fairly common case, and has caused tools to
    fail to clean up a network device because it *apparently* already
    doesn't exist in xenstore.
    
    There is a school of thought that says, "Well, xenstore-ls returned an
    error. So the tools should not trust its output."
    
    The natural corollary of this would surely be that the tools must re-run
    xenstore-ls as many times as is necessary until it manages to exit
    without hitting the race condition. I am not keen on that conclusion.
    
    For the specific case of ENOENT it seems reasonable to declare that,
    but for the timing, we might as well just not have seen that node at
    all when calling xs_directory() for the parent. By ignoring the error,
    we give acceptable output.
    
    The issue can be reproduced as follows:
    
    (dom0) # for a in `seq 1 1000` ; do
                  xenstore-write /local/domain/2/foo/$a $a ;
             done
    
    Now simultaneously:
    
    (dom0) # for a in `seq 1 999` ; do
                  xenstore-rm /local/domain/2/foo/$a ;
             done
    (dom2) # while true ; do
                  ./xenstore-ls -p /local/domain/2/foo | grep -c 1000 ;
             done
    
    We should expect to see node 1000 in the output, every time.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit beb105af19cc3e58e2ed49224cfe190a736e5fa0)
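
    The error-handling policy described above can be sketched as follows.
    This is a hedged illustration (do_ls_node() and the callback type are
    invented, not the real xenstore-ls code): ENOENT during recursion is
    treated as "the node was never there" instead of aborting the listing.

    ```c
    /* Tolerate nodes vanishing between listing a directory and reading
     * its children, as the commit message describes. */
    #include <assert.h>
    #include <errno.h>
    #include <stddef.h>

    /* Returns NULL and sets errno on failure, like xs_directory(). */
    typedef char **(*list_fn)(const char *path, unsigned *num);

    static int do_ls_node(const char *path, list_fn ls)
    {
        unsigned num;
        char **children = ls(path, &num);

        if (children == NULL) {
            if (errno == ENOENT)
                return 0;     /* node vanished under us: skip it quietly */
            return -1;        /* other errors still abort */
        }
        /* ... print the node and recurse into children ... */
        return 0;
    }

    /* Test stub simulating a node removed by a concurrent xenstore-rm. */
    static char **fail_enoent(const char *path, unsigned *num)
    {
        (void)path; (void)num;
        errno = ENOENT;
        return NULL;
    }

    int main(void)
    {
        assert(do_ls_node("/local/domain/2/foo/42", fail_enoent) == 0);
        return 0;
    }
    ```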

commit 1f9f1cb3a0216a7d41ded3090b4bb2735cc8a8e6
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Thu Oct 15 10:16:09 2020 +0100

    tools/xenpmd: Fix gcc10 snprintf warning
    
    Add a check for the snprintf return code and ignore the entry if we get
    an error. This should in fact never happen; it is mostly a trick to make
    gcc happy and prevent compilation errors.
    
    This is solving the following gcc warning when compiling for arm32 host
    platforms with optimization activated:
    xenpmd.c:92:37: error: '%s' directive output may be truncated writing
    between 4 and 2147483645 bytes into a region of size 271
    [-Werror=format-truncation=]
    
    This is also solving the following Debian bug:
    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
    (cherry picked from commit 0dfddb2116e3757f77a691a3fe335173088d69dc)
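
    A minimal sketch of the check being added: snprintf returns a negative
    value on an encoding error and a value >= the buffer size on truncation,
    and either case should make the caller skip the entry. The path format
    and function name below are invented for illustration, not the actual
    xenpmd.c code.

    ```c
    /* Check snprintf's return value instead of assuming success. */
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    static int format_battery_path(char *buf, size_t size, const char *name)
    {
        int n = snprintf(buf, size, "/proc/acpi/battery/%s/state", name);

        /* n < 0: encoding error; n >= size: output was truncated. */
        if (n < 0 || (size_t)n >= size)
            return -1;    /* ignore this entry */
        return 0;
    }

    int main(void)
    {
        char path[64];

        assert(format_battery_path(path, sizeof(path), "BAT0") == 0);
        assert(strcmp(path, "/proc/acpi/battery/BAT0/state") == 0);

        char tiny[8];
        assert(format_battery_path(tiny, sizeof(tiny), "BAT0") == -1);
        return 0;
    }
    ```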

commit f728b2d69f426258f37709de9efac5245a597d0d
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Aug 19 04:00:36 2020 +0200

    libxl: fix -Werror=stringop-truncation in libxl__prepare_sockaddr_un
    
    In file included from /usr/include/string.h:495,
                     from libxl_internal.h:38,
                     from libxl_utils.c:20:
    In function 'strncpy',
        inlined from 'libxl__prepare_sockaddr_un' at libxl_utils.c:1262:5:
    /usr/include/bits/string_fortified.h:106:10: error: '__builtin_strncpy' specified bound 108 equals destination size [-Werror=stringop-truncation]
      106 |   return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
          |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    cc1: all warnings being treated as errors
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit fff1b7f50e75ad9535c86f3fcf425b4945c50a1c)
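
    The warning fires when strncpy's bound equals the destination size,
    because the result may then lack a NUL terminator. A hedged sketch of
    the usual fix (copy at most size-1 bytes and terminate explicitly); the
    struct below merely stands in for sockaddr_un's sun_path and is not the
    real libxl__prepare_sockaddr_un code:

    ```c
    /* Bound strncpy by size-1 so the buffer is always NUL-terminated,
     * which also silences -Wstringop-truncation. */
    #include <assert.h>
    #include <string.h>

    struct fake_addr {
        char path[108];   /* same size as sun_path on Linux */
    };

    static int set_path(struct fake_addr *a, const char *src)
    {
        if (strlen(src) >= sizeof(a->path))
            return -1;    /* refuse rather than truncate a socket path */
        strncpy(a->path, src, sizeof(a->path) - 1);
        a->path[sizeof(a->path) - 1] = '\0';
        return 0;
    }

    int main(void)
    {
        struct fake_addr a;

        assert(set_path(&a, "/var/run/xen/sock") == 0);
        assert(strcmp(a.path, "/var/run/xen/sock") == 0);
        return 0;
    }
    ```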

commit 71a12a97988c516a17aba40bb5df9274133e9e92
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Aug 19 04:00:35 2020 +0200

    libxl: workaround gcc 10.2 maybe-uninitialized warning
    
    It seems xlu_pci_parse_bdf has a state machine that is too complex for
    gcc to understand. The build fails with:
    
        libxlu_pci.c: In function 'xlu_pci_parse_bdf':
        libxlu_pci.c:32:18: error: 'func' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           32 |     pcidev->func = func;
              |     ~~~~~~~~~~~~~^~~~~~
        libxlu_pci.c:51:29: note: 'func' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |                             ^~~~
        libxlu_pci.c:31:17: error: 'dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           31 |     pcidev->dev = dev;
              |     ~~~~~~~~~~~~^~~~~
        libxlu_pci.c:51:24: note: 'dev' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |                        ^~~
        libxlu_pci.c:30:17: error: 'bus' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           30 |     pcidev->bus = bus;
              |     ~~~~~~~~~~~~^~~~~
        libxlu_pci.c:51:19: note: 'bus' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |                   ^~~
        libxlu_pci.c:29:20: error: 'dom' may be used uninitialized in this function [-Werror=maybe-uninitialized]
           29 |     pcidev->domain = domain;
              |     ~~~~~~~~~~~~~~~^~~~~~~~
        libxlu_pci.c:51:14: note: 'dom' was declared here
           51 |     unsigned dom, bus, dev, func, vslot = 0;
              |              ^~~
        cc1: all warnings being treated as errors
    
    Work around it by setting the initial values to an invalid value
    (0xffffffff) and then asserting on each value being set. This way we
    mute the gcc warning while still detecting bugs in the parse code.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit d25cc3ec93ebda030349045d2c7fa14ffde07ed7)
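
    A toy sketch of that workaround (parse_func() is an invented stand-in
    for the relevant part of xlu_pci_parse_bdf, not the real libxlu parser):
    initialise to an impossible sentinel, then assert the value was set
    before use.

    ```c
    /* Sentinel + assert: mutes -Wmaybe-uninitialized while still catching
     * genuine parser bugs at runtime. */
    #include <assert.h>

    #define INVALID_VALUE 0xffffffffU

    static unsigned parse_func(int matched, unsigned value)
    {
        unsigned func = INVALID_VALUE;  /* sentinel, not uninitialised */

        if (matched)
            func = value;

        /* Detects bugs in the parse logic, and tells gcc the value
         * is always set on the paths that reach here. */
        assert(func != INVALID_VALUE);
        return func;
    }

    int main(void)
    {
        assert(parse_func(1, 7) == 7);
        return 0;
    }
    ```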
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 04:03:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 04:03:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24177.51308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchMV-0002Jt-0g; Wed, 11 Nov 2020 04:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24177.51308; Wed, 11 Nov 2020 04:03:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchMU-0002Jm-To; Wed, 11 Nov 2020 04:03:58 +0000
Received: by outflank-mailman (input) for mailman id 24177;
 Wed, 11 Nov 2020 04:03:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kchMU-0002J7-Bq
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 04:03:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d719f77-802c-47ec-8f2e-819c603de2e9;
 Wed, 11 Nov 2020 04:03:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kchMM-0007qp-Tr; Wed, 11 Nov 2020 04:03:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kchMM-0000uF-Nb; Wed, 11 Nov 2020 04:03:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kchMM-0000Se-N2; Wed, 11 Nov 2020 04:03:50 +0000
X-Inumbo-ID: 4d719f77-802c-47ec-8f2e-819c603de2e9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aAKEr5PsH+DQ9qkJSPEU1slkhGj2OKjC7oujSPa0q+k=; b=Hm4yc6pyvu4AHDpn8Fp9ioctL9
	AAl3qboDXjeg2kjvlbKE9pyvs4U3E+31GlSTwz0qfiyChpcInxtokLG/h9WWX2282BlkE08brn76E
	FwiCoRixobM1DtqMMfqwesPnIpBgfTVfjFgN6jjpkR1TCs8lt6yjKtMblDdfSv/fgkUg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156667-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156667: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=628e1becb6fb121475a6ce68e3f1cb4499851255
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 04:03:50 +0000

flight 156667 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156667/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156622

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  628e1becb6fb121475a6ce68e3f1cb4499851255
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z    0 days
Failing since        156628  2020-11-10 17:00:28 Z    0 days    4 attempts
Testing same since   156642  2020-11-10 20:00:30 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 628e1becb6fb121475a6ce68e3f1cb4499851255
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Nov 9 20:28:59 2020 +0000

    xen/arm: Always trap AMU system registers
    
    The Activity Monitors Unit (AMU) was introduced by ARMv8.4. It is
    considered unsafe to expose to guests, as it might leak
    information about code executed by other guests or the host.
    
    Arm provided a way to trap all the AMU system registers by setting
    CPTR_EL2.TAM to 1.
    
    Unfortunately, on older revisions of the specification, bit 30 (now
    CPTR_EL2.TAM) was RES0. Because of that, Xen sets it to 0, and
    therefore the system registers would be exposed to the guest when it
    runs on processors with AMU.
    
    As the bit is marked UNKNOWN at boot in Armv8.4, the only safe solution
    for us is to always set CPTR_EL2.TAM to 1.
    
    A guest trying to access the AMU system registers will now receive an
    undefined instruction exception. Unfortunately, this means that even
    well-behaved guests may fail to boot because we don't sanitize the ID
    registers.
    
    This is a known issue with other Armv8.0+ features (e.g. SVE, Pointer
    Auth). It will be taken care of separately.
    
    This is part of XSA-351 (or XSA-93 re-born).
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andre Przywara <andre.przywara@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit e6e85b662be9eab96f4cfc58e9945580cce8b2bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:40:09 2020 +0100

    x86/CPUID: also check leaf 7 max subleaf to be compatible
    
    Just like is done for basic and extended major leaves.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f5cfa09856732b1d78ff6a21ca3dc33a010da951
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:30 2020 +0100

    x86/CPUID: suppress IOMMU related hypervisor leaf data
    
    Now that the IOMMU for guests can't be enabled "on demand" anymore,
    there's also no reason to expose the related CPUID bit "just in case".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit db1a9fdd554cb1d8a7099af7925318fc06c6875b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:03 2020 +0100

    x86/CPUID: don't use UB shift when library is built as 32-bit
    
    At least the insn emulator test harness will continue to be buildable
    (and ought to continue to be usable) also as a 32-bit binary. (Right now
    the CPU policy test harness is, too, but there it may be less relevant
    to keep it functional, just like e.g. we don't support fuzzing the insn
    emulator in 32-bit mode.) Hence the library code needs to cope with
    this.
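The hazard can be illustrated with a minimal sketch (a hypothetical helper, not the actual xen/lib/x86 code): shifting a 32-bit type by 32 or more bit positions is undefined behaviour in C, so an expression that is fine when unsigned long is 64 bits wide breaks in a 32-bit build. Shifting a fixed-width 64-bit operand instead keeps the code well-defined in both builds.

```c
#include <stdint.h>

/* Hypothetical helper, not the actual library code: build a mask of
 * the low n bits (n <= 64).  Writing "(1ul << n) - 1" would be
 * undefined behaviour for n >= 32 when unsigned long is 32 bits wide,
 * so shift a 64-bit constant instead, and special-case n == 64, which
 * would otherwise shift by the full width of the type (also UB). */
static inline uint64_t low_mask(unsigned int n)
{
    return n >= 64 ? UINT64_MAX : (UINT64_C(1) << n) - 1;
}
```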
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit b5ad37f8e9284cc147218f7a5193d739ae7b956f
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:37:15 2020 +0100

    xen/evtchn: revert 52e1fc47abc3a0123
    
    With the event channel lock no longer disabling interrupts, commit
    52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
    be reverted again.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 5f2df45ead7c1195142f68b7923047a1e9479d54
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:36:15 2020 +0100

    xen/evtchn: rework per event channel lock
    
    Currently the lock for a single event channel needs to be taken with
    interrupts off, which causes deadlocks in some cases.
    
    Rework the per event channel lock to be non-blocking for the case of
    sending an event, removing the need to disable interrupts when taking
    the lock.
    
    The lock is needed for avoiding races between event channel state
    changes (creation, closing, binding) against normal operations (set
    pending, [un]masking, priority changes).
    
    Use a rwlock, but with some restrictions:
    
    - Changing the state of an event channel (creation, closing, binding)
      needs to use write_lock(), with an ASSERT() that the lock is taken
      as writer only when the state of the event channel before or after
      the locked region is appropriate (either free or unbound).
    
    - Sending an event mostly needs to use read_trylock(); if the lock
      cannot be obtained, the operation is omitted. This is needed
      because sending an event can happen with interrupts off (at least
      in some cases).
    
    - Dumping the event channel state for debug purposes uses
      read_trylock(), too, in order to avoid blocking in case the lock
      is held as writer for a long time.
    
    - All other cases can use read_lock().
    
    Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 04:18:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 04:18:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24185.51320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchae-0003NO-9T; Wed, 11 Nov 2020 04:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24185.51320; Wed, 11 Nov 2020 04:18:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchae-0003NH-6a; Wed, 11 Nov 2020 04:18:36 +0000
Received: by outflank-mailman (input) for mailman id 24185;
 Wed, 11 Nov 2020 04:18:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nWKd=ER=oracle.com=martin.petersen@srs-us1.protection.inumbo.net>)
 id 1kchad-0003NC-4A
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 04:18:35 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8eb0979-0d4d-4e67-89c5-c86ef86b693d;
 Wed, 11 Nov 2020 04:18:33 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AB49Tbn187079;
 Wed, 11 Nov 2020 04:18:21 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 34p72en941-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 11 Nov 2020 04:18:21 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AB4Arf2138947;
 Wed, 11 Nov 2020 04:18:20 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3020.oracle.com with ESMTP id 34p5g179j7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 11 Nov 2020 04:18:20 +0000
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0AB4IBdx000879;
 Wed, 11 Nov 2020 04:18:11 GMT
Received: from ca-mkp.ca.oracle.com (/10.159.214.123)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 10 Nov 2020 20:18:09 -0800
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to : cc : subject :
 from : message-id : references : date : in-reply-to : mime-version :
 content-type; s=corp-2020-01-29;
 bh=EwuOibuX72u5GW/vr3IZET1mbk4Qqp2f1lpQhsZMo3U=;
 b=rdxx8bfCoz66U7/JxdRXyqosHLyk6104gocou3/pxKr+MBaHN9z+JG22xyBjdFrs/toj
 u8kyE9Mnx40f+onFhQWnASdg9eewdpqao7JjD8N8PDe0bDGwb4iBUgCzzxwXH2tPfd4b
 ZewD9ZDiq4ttMBKnV5hFCYBDOG9JBlIzwIQL8q1J/Y6q7a+0GcQKjASc2OGvEo13qqVS
 YW2PdJ4GC1fZvqx8aM04P1FnQraTmrtlR84Q48lCpCKfWOIJkugsvGV1CkUq/whUbI2D
 eu9b4yGyWVKNklkWbsL8Eg4w7p+siPP69BiWWD0dUAtW6doZkGUQe3KqPOb7wn/WWPSd sQ== 
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
        Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>,
        Jack Wang <jinpu.wang@cloud.ionos.com>,
        "Michael S. Tsirkin" <mst@redhat.com>,
        Jason Wang <jasowang@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
        Stefan Hajnoczi <stefanha@redhat.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Roger Pau Monné <roger.pau@citrix.com>,
        Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
        Song Liu <song@kernel.org>,
        "Martin K. Petersen" <martin.petersen@oracle.com>,
        dm-devel@redhat.com, linux-block@vger.kernel.org,
        drbd-dev@tron.linbit.com, nbd@other.debian.org,
        ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
        linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
        linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 04/24] sd: update the bdev size in sd_revalidate_disk
From: "Martin K. Petersen" <martin.petersen@oracle.com>
Organization: Oracle
Message-ID: <yq1tutwedcb.fsf@ca-mkp.ca.oracle.com>
References: <20201106190337.1973127-1-hch@lst.de>
	<20201106190337.1973127-5-hch@lst.de>
Date: Tue, 10 Nov 2020 23:18:05 -0500
In-Reply-To: <20201106190337.1973127-5-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 6 Nov 2020 20:03:16 +0100")
MIME-Version: 1.0
Content-Type: text/plain
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9801 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 malwarescore=0
 adultscore=0 phishscore=0 bulkscore=0 mlxlogscore=887 suspectscore=1
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011110019
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9801 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxlogscore=877 mlxscore=0
 malwarescore=0 suspectscore=1 lowpriorityscore=0 adultscore=0 phishscore=0
 priorityscore=1501 spamscore=0 impostorscore=0 clxscore=1011
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011110019


Christoph,

> This avoids the extra call to revalidate_disk_size in sd_rescan and
> is otherwise a no-op because the size did not change, or we are in
> the probe path.

Acked-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 04:29:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 04:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24196.51332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchkp-0004M8-Az; Wed, 11 Nov 2020 04:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24196.51332; Wed, 11 Nov 2020 04:29:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kchkp-0004M1-6V; Wed, 11 Nov 2020 04:29:07 +0000
Received: by outflank-mailman (input) for mailman id 24196;
 Wed, 11 Nov 2020 04:29:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/uh=ER=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kchkn-0004Lw-On
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 04:29:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.71]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39a8fe24-abbc-4427-b42f-ff739c71b4b9;
 Wed, 11 Nov 2020 04:29:02 +0000 (UTC)
Received: from AM6P193CA0098.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::39)
 by AM9PR08MB6177.eurprd08.prod.outlook.com (2603:10a6:20b:283::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Wed, 11 Nov
 2020 04:28:59 +0000
Received: from AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::76) by AM6P193CA0098.outlook.office365.com
 (2603:10a6:209:88::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Wed, 11 Nov 2020 04:28:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT006.mail.protection.outlook.com (10.152.16.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Wed, 11 Nov 2020 04:28:58 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Wed, 11 Nov 2020 04:28:58 +0000
Received: from 82b69ba6d817.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A57435CF-82EB-43AD-B4BC-275437473E78.1; 
 Wed, 11 Nov 2020 04:28:53 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 82b69ba6d817.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 11 Nov 2020 04:28:53 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM0PR08MB4131.eurprd08.prod.outlook.com (2603:10a6:208:129::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Wed, 11 Nov
 2020 04:28:52 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3499.032; Wed, 11 Nov 2020
 04:28:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QS7xoR716HkevH42CvyZCMeN8quVhoRGF5711cjYfKk=;
 b=Pdv2VdfbvVCm0gh5QSmv+marAB5hcM83rtqDUKi4VnDj1PyqRNz4d3xbp2fdgqeYY38Z+Z3OxGhBG5chvmtyhcpjUuQ+7gFfEHdwcl12UEU0mP4oIsl4Xg2yIUB6j2IQ7/R8mFzgYv4dxlieKvg2jc1gR4AvKubKqg8pLg9LHKs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gecXuaWo07v9AA1MeaqH7jFzplMtQiBfcY/pWIoRITt8LSBzcfjk34QTrr7Vu127aPQxbFaD8BPfxlKgmj6bnXYdevMY9fAwjDsFKnYq1e4AyjFq18SDNHMG9n8Cwdp3kImgq4anIYcBZtzVWccm7DBa0jz8ttoU3zHAfEIQ+zvjLAg5EodEvg2F0qfzEn3pAfLQYProIX8da4alKkmmQxEbrR941rt72pepDd9xaxorjkCk/t5UCVq57+Jq7nFGAQtjuhstQ1u9Z43a9ZQkAQrVI3P5hMz0mEQ+QmAcvxuM4W4KZDrk4VvJK8eRiumnEJqKaPSK/rDiGkacaEBe8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QS7xoR716HkevH42CvyZCMeN8quVhoRGF5711cjYfKk=;
 b=HgggEgP6ajZjsl9yDGjtdoKHjcz1RQWKE6AiWjJbWqQfdKDzILoNmahbFLXPQrdbE1/tpvDxCELzPAfD+xDK9oWoLbQ+TtDrUBEb6BzNBum2f5gAu+bmCbPMjNULuRY2s2kakh++Ep2Sn/hR0do1ECrRckmE3jqXRm0O8jhBbPv7+jq7Yp0PEUrs8WUJVuswM8KyJxk4yaKFqAsSnFh0MuEFnthDgE5UpXOciSr8KcLika2URuPNbWNX27wofZ9tWYgc9FSn4qM9Jn7TeIqqR3AFX44ogeiEgu3u97ZfQZB0s3cdyOaLop4mu278Qc73ZX5G3vPAGvXHCzYhIr3F7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Oleksandr <olekstysh@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: RE: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
 configuration
Thread-Topic: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
 configuration
Thread-Index: AQHWoxPDW60DxIWT4kuneyvO4zzD5qm/gHcQgALDY4CAAC0d4A==
Date: Wed, 11 Nov 2020 04:28:51 +0000
Message-ID:
 <AM0PR08MB3747DB347ED64AF7DF99C5A19EE80@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
 <AM0PR08MB374735F747FFF1A3016F37329EEA0@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <99636dd8-4c90-bb84-b96f-6c7a9ad31b63@gmail.com>
In-Reply-To: <99636dd8-4c90-bb84-b96f-6c7a9ad31b63@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 59D392A2A9911740BCC0CA5B760E3456.0
x-checkrecipientchecked: true
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [218.82.77.130]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 029ad72d-21e2-4ca3-456c-08d885fa4f1d
x-ms-traffictypediagnostic: AM0PR08MB4131:|AM9PR08MB6177:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB6177F705FB0CB115B7385A099EE80@AM9PR08MB6177.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 SEz3YLvgmPPcuDNfIQBu7c62diE7KNEe8NN42vjja4W/KUTFrJrPzuyJaTKMW+Mt4/4/uc8UQ6hViTQIDdzuReK2xy3WIJ/6VIPEsnsoDcBAUwIj/yiTy4Xqqbep2hMsvzms5FQz2irdp0ISVFypLnmo5wAYuMKow3GwxZ15aAmMre2VzaxtiYIapVAJvSlOghCy/fYaSAP3VRP3jecLR1AHhwb+w9zQEqCwIFxurqfVkmfAkUtvqIuU2C4ThaUgh0OhIqPllN1ZWQbZagcx6ka5oMjMYITZN9B/yImHqNfsQXvrKBzAo3ZljobZLGYkpo2rsFLoEH4D01JW4IBXhA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3747.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(366004)(39850400004)(376002)(346002)(186003)(55016002)(66446008)(64756008)(66556008)(66476007)(53546011)(9686003)(76116006)(26005)(7696005)(5660300002)(6506007)(52536014)(66946007)(83380400001)(2906002)(478600001)(54906003)(110136005)(316002)(86362001)(8936002)(4326008)(8676002)(33656002)(71200400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 pumtL2/frRmLbSOGsvQyR8v+rizyfJkKzVPQaF+xHo/xhWfLShak5T4sTfA/fE8F1uqQx64jHcy9KzBPXfp3mfg2uICQ+wyuTxmQAQiwpC0KkWVv8Pp05eYOTsnY8qryL9qjnh5PBtNPKkRiIAk6vCJoT/Tllv5f20J0omaOe/4vSan0wIF4hFQdfntzvqSYLHvRCwLl8I0QgaGpVmg4bJmMeiiTVt18j4ZP2PIg1OKu2wPEPkOy3K95dHoKdSHj+U1Rzlwx/WUSkzWMm9sUUBkrbEtZp/EMGD2jnLIHUhBMcoPOgcBtypOabVP5Xdd96qbiMAMyNNVXtnrvxZjx4JpSB1wzBOaeOHuXiJ05z33VJ5z+6nYVhnMmmXTKELHtcTRhGgI18+MXUBADxLTOmIZEtYvg3jDo7mDpTi/L/HFjYEyXkrvw4iZNYy+AdL6aBRkQIaontAjVFtVTfydxYevA7a6dH1sY1FPmgvm1gGDBUoY7j5r36nIdR73ey/EhexmTT4Zq6Po1+OaCCHuWSeDPXWjfruRk/zZKfvYyXYvhc3TdhRcgJ62yWGujYHN2vNwElw1EaSaDNcRZsM62iwTdWCSzXq/HKtdOy1i+DcLY6Ldny9zCadH6AMc8MYMtL3s+v+kN3S4S+XkIbia80g==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4131
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9f92384c-e403-4c40-7b9e-08d885fa4b01
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8L8sTMNtcpW1wTnHH96/1oYwBOm9N5mJsqCkOY9nUdn4dEWL5goFnCcrrx3efNTyp9BkdaSRao2OzkE4cExJ3oJSUDR6GCJJvr+AFEoduKfj9u9cujvUQLPS2wBFjuLE4eYJuPia/Wk7JPI1Gw395a2LtA4EC+GvaMNtuoqm3tNIlYSLRcHL0975+KVN60GeUyX9l7qss8VeL3T5Iq/LPa0SjKVv+lPX2ymk5OqozEaHWgnGx1jBC8h/DP1dr4GP+9Bkc5tTnq1OfQLxX8qm+hq6XKHwCQb/SgY6mjRE+QLOQe2Ri+I/lxg/IWepVOHwo+2H+cBvlvkJH8rf3MdkxIzp7hEVDCAsnGDRlmRwD9IN432rq/31oHyn1Is6/h48822qKZMiO+NgyCAVgA93oQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(39850400004)(376002)(396003)(46966005)(70206006)(70586007)(356005)(110136005)(54906003)(336012)(82310400003)(478600001)(26005)(186003)(316002)(36906005)(6506007)(107886003)(53546011)(8676002)(81166007)(33656002)(8936002)(2906002)(7696005)(9686003)(82740400003)(47076004)(5660300002)(52536014)(55016002)(4326008)(86362001)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 04:28:58.7812
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 029ad72d-21e2-4ca3-456c-08d885fa4f1d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6177

Hi Oleksandr,

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 11 November 2020 8:53
> To: Wei Chen <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Ian Jackson
> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony PERARD
> <anthony.perard@citrix.com>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>
> Subject: Re: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
> configuration
>
>
> On 09.11.20 08:45, Wei Chen wrote:
> > Hi Oleksandr,
>
> Hi Wei
>
>
> >
> >> -----Original Message-----
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> >> Oleksandr Tyshchenko
> >> Sent: 16 October 2020 0:45
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Ian Jackson
> >> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Anthony PERARD
> >> <anthony.perard@citrix.com>; Julien Grall <julien@xen.org>; Stefano
> >> Stabellini <sstabellini@kernel.org>
> >> Subject: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
> >> configuration
> >>
> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >>
> >> This patch adds basic support for configuring and assisting the
> >> virtio-disk backend (emulator) which is intended to run outside of
> >> Qemu and could be run in any domain.
> >>
> >> Xenstore was chosen as a communication interface for the emulator
> >> running in a non-toolstack domain to be able to get its configuration
> >> either by reading Xenstore directly or by receiving command line
> >> parameters (an updated 'xl devd' running in the same domain would
> >> read Xenstore beforehand and call the backend executable with the
> >> required arguments).
> >>
> >> An example of domain configuration (two disks are assigned to the
> >> guest, the latter in readonly mode):
> >>
> >> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
> >>
> > Can we keep using the same 'disk' parameter for virtio-disk, but add
> > an option like "model=virtio-disk"?
> > For example:
> > disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ]
> > Just like what Xen has done for x86 virtio-net.
>
> I think this needs additional investigation. In general I agree with
> you that it would be nice to reuse the existing 'disk' parameter
> somehow rather than introducing a new one for the same purpose (to
> handle virtual block device(s)).
>
> One note: although both are used for the same purpose, they differ in
> at least one important option.
>
> For example:
> 1. disk = [ 'backend=DomD, phy:/dev/mmcblk0p3, xvda1' ]
> 2. vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3' ]
> As you can see, the existing "disk" parameter contains xvda1; this
> means that a new device /dev/xvda1 will appear at the guest side, but
> "vdisk" doesn't contain anything similar. So with the Xen PV driver
> (xen-blkfront)

Yes, I understand your concerns. But I think specifying a device name
for a virtio disk is not a mandatory requirement. Even if we're using
physical disks on a bare metal machine, we can't guarantee that the
slot #1 disk is always 'sda'. So most modern OSes prefer to use blkid
to mount filesystems.

> we are able to configure a device name, but with the VirtIO solution
> (virtio-blk) we aren't (at least I don't know how exactly).
>

Just my understanding, not exactly accurate:
virtio-blk cannot get VDEV information from a bus the way Xen-bus can,
so the disk ID is allocated dynamically during bus probing. The first
probed disk will get ID 'a', and then the ID keeps increasing. If we
want to specify the disk ID for a virtio disk, one possible way to do
this is to construct a reasonable position on the bus (fdt node
position of the virtio mmio device, PCI function ID of the virtio pci
block) for the virtio disk. But it is not always successful; we can't
skip 'vda' to specify a virtio disk as 'vdb'.

Regards,
Wei Chen
>
>
>
>
> --
> Regards,
>
> Oleksandr Tyshchenko


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 05:24:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 05:24:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.23821.51344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcibh-0001Zu-JM; Wed, 11 Nov 2020 05:23:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 23821.51344; Wed, 11 Nov 2020 05:23:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcibh-0001Zn-Fq; Wed, 11 Nov 2020 05:23:45 +0000
Received: by outflank-mailman (input) for mailman id 23821;
 Tue, 10 Nov 2020 18:59:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xenproject.org>) id 1kcYrZ-00045J-P8
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:59:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xenproject.org>) id 1kcYrZ-0007Lw-OM
 for xen-devel@lists.xenproject.org; Tue, 10 Nov 2020 18:59:29 +0000
Subject: test
To: <xen-devel@lists.xenproject.org>
X-Mailer: mail (GNU Mailutils 3.5)
Message-Id: <E1kcYrZ-0007Lw-OM@xenbits.xenproject.org>
From: Ian Jackson <iwj@xenbits.xenproject.org>
Date: Tue, 10 Nov 2020 18:59:29 +0000

Please don't reply.
.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 05:49:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 05:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24235.51359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcj10-0003Va-Np; Wed, 11 Nov 2020 05:49:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24235.51359; Wed, 11 Nov 2020 05:49:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcj10-0003VT-Kf; Wed, 11 Nov 2020 05:49:54 +0000
Received: by outflank-mailman (input) for mailman id 24235;
 Wed, 11 Nov 2020 05:49:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcj0z-0003VO-U5
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 05:49:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cdbb153a-9754-4576-bc1f-b1305e7789a5;
 Wed, 11 Nov 2020 05:49:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6101DAD66;
 Wed, 11 Nov 2020 05:49:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605073788;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=8HUl3sIsB7tj41whqo0AC809SMSfM+7DMkhJUHNhwxE=;
	b=aqXSvtmZIjbQ8QC9UXLDMOAC6MUfCxxoSwXcCFQgbEfsuVyvFeiu4aQ9OGyg/i00pFO2OK
	QDSJJTnNAcFJvhDsPh79F6GQZi2UVvH/pOfnbMZWhHJusf6L8sbIYVHXkOi1jGSaOhQWf/
	QBRAw2XNUV8nv1jxcXE6BniXb4t5A+g=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/events: fix build
Date: Wed, 11 Nov 2020 06:49:46 +0100
Message-Id: <20201111054946.3229-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock")
introduced a build failure for NDEBUG builds.

Fixes: 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_channel.c | 2 ++
 xen/include/xen/sched.h    | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index eacd96b92f..da85d536f4 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -61,7 +61,9 @@ static inline void evtchn_write_lock(struct evtchn *evtchn)
 {
     write_lock(&evtchn->lock);
 
+#ifndef NDEBUG
     evtchn->old_state = evtchn->state;
+#endif
 }
 
 static inline void evtchn_write_unlock(struct evtchn *evtchn)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7251b3ae3e..95f96e7a33 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,9 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-#ifndef NDEBUG
     u8 old_state;      /* State when taking lock in write mode. */
-#endif
     u8 last_priority;
     u16 last_vcpu_id;
 #ifdef CONFIG_XSM
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:16:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:16:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24247.51380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckMw-0002vt-EH; Wed, 11 Nov 2020 07:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24247.51380; Wed, 11 Nov 2020 07:16:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckMw-0002vm-A9; Wed, 11 Nov 2020 07:16:38 +0000
Received: by outflank-mailman (input) for mailman id 24247;
 Wed, 11 Nov 2020 07:16:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kckMu-0002vh-MA
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:16:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5bcddc47-2eac-4c4a-bd42-deb6dd489783;
 Wed, 11 Nov 2020 07:16:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kckMr-0003ox-JF; Wed, 11 Nov 2020 07:16:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kckMr-0004Ll-AI; Wed, 11 Nov 2020 07:16:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kckMr-0003P9-9d; Wed, 11 Nov 2020 07:16:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cMrInPk1mafvBwaDh960KqyVg0JoRORFnnASYxgAdtk=; b=6K1/uuGJJWXzivX1qKxNIvrraS
	gop0WG2Wx4+qzg+L8ImONdO2vEl8RzRkNjhCvC5r5JZP/NX4Ke6+P1HSkXym3YY0NXyp/YdYh62Mz
	yLSRIzxeXbLP6Ws93dxkn7ekYHA0s38WHVi8xLZxKK3QBnAzl5lGlLdF+l6xVWV1AL8o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156672-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156672: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=628e1becb6fb121475a6ce68e3f1cb4499851255
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 07:16:33 +0000

flight 156672 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156672/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156622

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  628e1becb6fb121475a6ce68e3f1cb4499851255
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z    0 days
Failing since        156628  2020-11-10 17:00:28 Z    0 days    5 attempts
Testing same since   156642  2020-11-10 20:00:30 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 628e1becb6fb121475a6ce68e3f1cb4499851255
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Nov 9 20:28:59 2020 +0000

    xen/arm: Always trap AMU system registers
    
    The Activity Monitors Unit (AMU) was introduced by ARMv8.4. It is
    considered unsafe to expose to guests, as the registers might reveal
    information about code executed by other guests or the host.
    
    Arm provided a way to trap all the AMU system registers by setting
    CPTR_EL2.TAM to 1.
    
    Unfortunately, on older revisions of the specification, bit 30 (now
    CPTR_EL2.TAM) was RES0. Because of that, Xen was setting it to 0, and
    therefore the system registers would be exposed to a guest running on
    processors with AMU.
    
    As the bit is marked as UNKNOWN at boot on Armv8.4, the only safe
    solution for us is to always set CPTR_EL2.TAM to 1.
    
    A guest trying to access the AMU system registers will now receive an
    undefined instruction exception. Unfortunately, this means that even a
    well-behaved guest may fail to boot because we don't sanitize the ID
    registers.
    
    This is a known issue with other Armv8.0+ features (e.g. SVE, Pointer
    Auth). This will be taken care of separately.
    
    This is part of XSA-351 (or XSA-93 re-born).
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andre Przywara <andre.przywara@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit e6e85b662be9eab96f4cfc58e9945580cce8b2bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:40:09 2020 +0100

    x86/CPUID: also check leaf 7 max subleaf to be compatible
    
    Just like is done for basic and extended major leaves.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f5cfa09856732b1d78ff6a21ca3dc33a010da951
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:30 2020 +0100

    x86/CPUID: suppress IOMMU related hypervisor leaf data
    
    Now that the IOMMU for guests can't be enabled "on demand" anymore,
    there's also no reason to expose the related CPUID bit "just in case".
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit db1a9fdd554cb1d8a7099af7925318fc06c6875b
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Nov 10 14:39:03 2020 +0100

    x86/CPUID: don't use UB shift when library is built as 32-bit
    
    At least the insn emulator test harness will continue to be buildable
    (and ought to continue to be usable) also as a 32-bit binary. (Right now
    the CPU policy test harness is, too, but there it may be less relevant
    to keep it functional, just like e.g. we don't support fuzzing the insn
    emulator in 32-bit mode.) Hence the library code needs to cope with
    this.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit b5ad37f8e9284cc147218f7a5193d739ae7b956f
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:37:15 2020 +0100

    xen/evtchn: revert 52e1fc47abc3a0123
    
    With the event channel lock no longer disabling interrupts, commit
    52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
    be reverted again.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 5f2df45ead7c1195142f68b7923047a1e9479d54
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Nov 10 14:36:15 2020 +0100

    xen/evtchn: rework per event channel lock
    
    Currently the lock for a single event channel needs to be taken with
    interrupts off, which causes deadlocks in some cases.
    
    Rework the per event channel lock to be non-blocking for the case of
    sending an event, and remove the need to disable interrupts when
    taking the lock.
    
    The lock is needed for avoiding races between event channel state
    changes (creation, closing, binding) against normal operations (set
    pending, [un]masking, priority changes).
    
    Use a rwlock, but with some restrictions:
    
    - Changing the state of an event channel (creation, closing, binding)
      needs to use write_lock(), ASSERT()ing that the lock is taken as
      writer only when the event channel's state either before or after the
      locked region is appropriate (either free or unbound).
    
    - Sending an event mostly needs to use read_trylock(); if the lock
      cannot be obtained, the operation is omitted. This is needed as
      sending an event can happen with interrupts off (at least in some
      cases).
    
    - Dumping the event channel state for debug purposes uses
      read_trylock(), too, in order to avoid blocking in case the lock is
      taken as writer for a long time.
    
    - All other cases can use read_lock().
    
    Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:20:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24254.51395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckR7-0003pF-1I; Wed, 11 Nov 2020 07:20:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24254.51395; Wed, 11 Nov 2020 07:20:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckR6-0003p8-UT; Wed, 11 Nov 2020 07:20:56 +0000
Received: by outflank-mailman (input) for mailman id 24254;
 Wed, 11 Nov 2020 07:20:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kckR5-0003p1-Fs
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:20:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ecff3b3-488a-4d31-a58a-5cdedda3078b;
 Wed, 11 Nov 2020 07:20:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 70590AC82;
 Wed, 11 Nov 2020 07:20:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605079253;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LQd77mdpS/ziSZ6ksGaT/WYsWU2bwhmwIchfymJzMg8=;
	b=lzUI1NRnUV/q3O1SxW1ftm/OyQBgyKQCIY+Jv8zgfHw6uNJGah4BlTe4cGMrfNdjKwSWcI
	1OWP240VxNoKVIpVFlWhhFlNEeYQymSvfjytb9ahI2gl1zignWnAJilNpE0FxXyOFWln7s
	g7H+BgPjs1top8oy5s60Lruou5kHcFM=
Subject: Re: [PATCH] xen/events: fix build
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201111054946.3229-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
Date: Wed, 11 Nov 2020 08:20:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111054946.3229-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 06:49, Juergen Gross wrote:
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -61,7 +61,9 @@ static inline void evtchn_write_lock(struct evtchn *evtchn)
>  {
>      write_lock(&evtchn->lock);
>  
> +#ifndef NDEBUG
>      evtchn->old_state = evtchn->state;
> +#endif
>  }
>  
>  static inline void evtchn_write_unlock(struct evtchn *evtchn)
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 7251b3ae3e..95f96e7a33 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -114,9 +114,7 @@ struct evtchn
>          u16 virq;      /* state == ECS_VIRQ */
>      } u;
>      u8 priority;
> -#ifndef NDEBUG
>      u8 old_state;      /* State when taking lock in write mode. */
> -#endif
>      u8 last_priority;
>      u16 last_vcpu_id;
>  #ifdef CONFIG_XSM

Did you mean just either of the two changes (and then the former),
not both? If so
Reviewed-by: Jan Beulich <jbeulich@suse.com>
and I'll be happy to drop the other half for committing.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:27:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24267.51413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckXT-00043g-PG; Wed, 11 Nov 2020 07:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24267.51413; Wed, 11 Nov 2020 07:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckXT-00043Z-MM; Wed, 11 Nov 2020 07:27:31 +0000
Received: by outflank-mailman (input) for mailman id 24267;
 Wed, 11 Nov 2020 07:27:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kckXS-000431-9Z
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:27:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71df10ff-c2b0-4345-a46b-2560ca1c2ac4;
 Wed, 11 Nov 2020 07:27:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B774DABD6;
 Wed, 11 Nov 2020 07:27:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605079645;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=cPFJXwJv1wkhJGQLQYh0ic4x7lI8xpLasYkn9Cp0j7o=;
	b=WvjaUVirCfVLEW1A4BkTpSuiqeTz/E6Fwwro+KgPsnD3zCg3dR10iVYihfEAnM+2haXg3l
	bhOS7s9vZpYfG5ybYmL+PYcMfot/DYLbQ7EdS5uMMCbcpcMMA3TBjZNDPpg+bc1JGEEKtx
	e61jkdL7Uu1o/7M+mTEwoUtHMC+hTME=
Subject: Re: [PATCH] xen/events: fix build
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201111054946.3229-1-jgross@suse.com>
 <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dbd01178-d925-b01c-8624-377a00270a22@suse.com>
Date: Wed, 11 Nov 2020 08:27:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="fzT1OxlTufCMm9OaemJpr6ybTbbdOMJlC"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--fzT1OxlTufCMm9OaemJpr6ybTbbdOMJlC
Content-Type: multipart/mixed; boundary="KcXin2GkqMtv87soc2AYB1RRwuuzgPlKX";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <dbd01178-d925-b01c-8624-377a00270a22@suse.com>
Subject: Re: [PATCH] xen/events: fix build
References: <20201111054946.3229-1-jgross@suse.com>
 <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
In-Reply-To: <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>

--KcXin2GkqMtv87soc2AYB1RRwuuzgPlKX
Content-Type: multipart/mixed;
 boundary="------------635302E3A5732C3E0E88207E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------635302E3A5732C3E0E88207E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 08:20, Jan Beulich wrote:
> On 11.11.2020 06:49, Juergen Gross wrote:
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -61,7 +61,9 @@ static inline void evtchn_write_lock(struct evtchn *evtchn)
>>   {
>>       write_lock(&evtchn->lock);
>>  
>> +#ifndef NDEBUG
>>       evtchn->old_state = evtchn->state;
>> +#endif
>>   }
>>  
>>   static inline void evtchn_write_unlock(struct evtchn *evtchn)
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index 7251b3ae3e..95f96e7a33 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -114,9 +114,7 @@ struct evtchn
>>           u16 virq;      /* state == ECS_VIRQ */
>>       } u;
>>       u8 priority;
>> -#ifndef NDEBUG
>>       u8 old_state;      /* State when taking lock in write mode. */
>> -#endif
>>       u8 last_priority;
>>       u16 last_vcpu_id;
>>   #ifdef CONFIG_XSM
> 
> Did you mean just either of the two changes (and then the former),
> not both? If so
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> and I'll be happy to drop the other half for committing.

The header fix is required for NDEBUG builds, while the other one
removes a write with no accompanying read in NDEBUG builds.


Juergen


--------------635302E3A5732C3E0E88207E
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------635302E3A5732C3E0E88207E--

--KcXin2GkqMtv87soc2AYB1RRwuuzgPlKX--

--fzT1OxlTufCMm9OaemJpr6ybTbbdOMJlC
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+rklwFAwAAAAAACgkQsN6d1ii/Ey9e
8QgAgt3b+lL5jeqVJooeeBqI3s8t7hl2KtOGkuwGE/DFzAjWEXsFkKkCpyy4+Z7YoDqkfnlTG++8
QHTUgSIX7naEh8tjD+8AEgJsQ9hV/lt+fZ++6Yatj1GJd6ezKPapkww74m7kqqueFFaP2v7XyB4N
fOkiynFgRtEqWpDWHefYMqL/ONXOsRZugpNAZ/hz9E+Vlm0OkpvX5T4+Zd5tfaeBVuvQiLLQnJ5Z
lrkH4Z9PL6UFhx+Ko/VuzwWOO/e024lpy37migHgQ98bRGerFKsqklOUVZIs8GOi0zru4kX8njrG
tn9aDThS8Kwvhiudtfi2Q+9rEA25VmmusnMvMnj6iA==
=Gqn0
-----END PGP SIGNATURE-----

--fzT1OxlTufCMm9OaemJpr6ybTbbdOMJlC--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:27:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24268.51425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckXe-00046Y-25; Wed, 11 Nov 2020 07:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24268.51425; Wed, 11 Nov 2020 07:27:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckXd-00046R-VA; Wed, 11 Nov 2020 07:27:41 +0000
Received: by outflank-mailman (input) for mailman id 24268;
 Wed, 11 Nov 2020 07:27:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kckXc-000463-5x
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:27:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a050a9db-cfec-4dd8-9573-a4b834d0c333;
 Wed, 11 Nov 2020 07:27:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1AA2EABD6;
 Wed, 11 Nov 2020 07:27:38 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kckXc-000463-5x
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:27:40 +0000
X-Inumbo-ID: a050a9db-cfec-4dd8-9573-a4b834d0c333
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a050a9db-cfec-4dd8-9573-a4b834d0c333;
	Wed, 11 Nov 2020 07:27:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605079658;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pmceScSgf0CVIuUojV+Ta0HKWnBwoHHZb6EIyKPlsz4=;
	b=mF7BZdwkZlN3gbq9PervHFqqGvpqdwjnpa1tfQGPIXD48fodWkbrS/8hfNy3aXUil9zcpv
	8YpVmzcYsslgXc82vd2L5xfWR3dzh/TswwCqfb+iSDNa6xIbAI6uOMaSESYWLfw14TmD1g
	8CdwbMI9lTN+eRVNnEpcH9dKU4H1bNI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1AA2EABD6;
	Wed, 11 Nov 2020 07:27:38 +0000 (UTC)
Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
To: Oleksandr <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
 <004701d6a6c1$6c09f860$441de920$@xen.org>
 <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
 <004a01d6a6cd$1f4684b0$5dd38e10$@xen.org>
 <fab8e4b0-e3b2-fb74-76d4-42753ac88367@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f2d1cbef-09ec-86e4-bfc5-20320f78be6b@suse.com>
Date: Wed, 11 Nov 2020 08:27:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <fab8e4b0-e3b2-fb74-76d4-42753ac88367@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 20:44, Oleksandr wrote:
> 
> On 20.10.20 13:38, Paul Durrant wrote:
> 
> Hi Jan, Paul
> 
> Sorry for the late response.
> 
>>> -----Original Message-----
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 20 October 2020 11:05
>>> To: paul@xen.org
>>> Cc: 'Oleksandr Tyshchenko' <olekstysh@gmail.com>; xen-devel@lists.xenproject.org; 'Oleksandr
>>> Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Roger Pau
>>> Monné' <roger.pau@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Julien Grall' <julien@xen.org>; 'Stefano
>>> Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien.grall@arm.com>
>>> Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
>>>
>>> On 20.10.2020 11:14, Paul Durrant wrote:
>>>>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
>>>>> Sent: 15 October 2020 17:44
>>>>>
>>>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>>>> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>>>>>   #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>>>>>   #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
>>>>>
>>>>> +#define ioreq_complete_mmio   handle_mmio
>>>>> +
>>>> A #define? Really? Can we not have a static inline?
>>> I guess this would require further shuffling: handle_mmio() is
>>> an inline function in hvm/emulate.h, and hvm/ioreq.h has no
>>> need to include the former (and imo it also shouldn't have).
>>>
>> I see. I think we need an x86 ioreq.c anyway, to deal with the legacy use of magic pages, so it could be dealt with there instead.
> I am afraid I don't entirely understand the required changes. Could you 
> please clarify where the "inline(?)" ioreq_complete_mmio() should
> live? I included hvm/emulate.h here not only because of handle_mmio(),
> but also for "struct hvm_emulate_ctxt" (see arch_io_completion()).

I'm sorry, but in the context of this patch there's no use of any
struct hvm_emulate_ctxt instance. I'm not going to wade through 23
patches to find what you mean.

> But if we bring x86 ioreq.c back, I can move arch_io_completion() to it
> as well as a non-inline ioreq_complete_mmio().
> This would avoid including hvm/emulate.h here. Or did I miss something?

I suppose an out-of-line function, as a kind of last-resort solution,
is what Paul had in mind. To be honest, I'd prefer to avoid the
extra call layer, though, if possible.
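As a sketch of the trade-off being discussed (everything below is a hypothetical stand-in; Xen's real handle_mmio() is declared in hvm/emulate.h and has a different signature):

```c
#include <stdbool.h>

/* Hypothetical stand-in for Xen's handle_mmio(); the real function lives
 * in xen/include/asm-x86/hvm/emulate.h and performs MMIO emulation. */
static inline bool handle_mmio(void)
{
    return true;
}

/* Instead of "#define ioreq_complete_mmio handle_mmio", a wrapper like
 * this keeps the alias type-checked. As a static inline it compiles to
 * the same code; an out-of-line version (e.g. in an x86 ioreq.c) would
 * introduce the extra call layer mentioned above. */
static inline bool ioreq_complete_mmio(void)
{
    return handle_mmio();
}
```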

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:37:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:37:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24282.51437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckgl-00059s-7c; Wed, 11 Nov 2020 07:37:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24282.51437; Wed, 11 Nov 2020 07:37:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckgl-00059l-3B; Wed, 11 Nov 2020 07:37:07 +0000
Received: by outflank-mailman (input) for mailman id 24282;
 Wed, 11 Nov 2020 07:37:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kckgk-00059g-72
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:37:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac37dd4e-7cdc-43fb-9c04-76361172de2b;
 Wed, 11 Nov 2020 07:37:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E99EEACB5;
 Wed, 11 Nov 2020 07:37:00 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kckgk-00059g-72
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:37:06 +0000
X-Inumbo-ID: ac37dd4e-7cdc-43fb-9c04-76361172de2b
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ac37dd4e-7cdc-43fb-9c04-76361172de2b;
	Wed, 11 Nov 2020 07:37:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605080221;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lZGPpPQWd/RqUsbCF9Ic6LnkUqyzWBAcp5xJxKjrV3I=;
	b=jlCGgKLFhIMZYgb98L4XiR6oWNdHpMOkOEUX/murXyIP3nJDz0v7RqdlBIBgcAVzNtKc14
	LD0II9m3Krlw89HRIDUqdKUbus+iCXcWk9iTY+0gY0XrTZDOaqPBSdyXJoy4JANJ1yC1+o
	CBD3GeilIBuY/DFHV1YfihrUFlTdtwI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E99EEACB5;
	Wed, 11 Nov 2020 07:37:00 +0000 (UTC)
Subject: Re: [PATCH] xen/events: fix build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201111054946.3229-1-jgross@suse.com>
 <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
 <dbd01178-d925-b01c-8624-377a00270a22@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8882494b-0da5-306e-ba2b-e1b7588973ad@suse.com>
Date: Wed, 11 Nov 2020 08:37:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <dbd01178-d925-b01c-8624-377a00270a22@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 08:27, Jürgen Groß wrote:
> On 11.11.20 08:20, Jan Beulich wrote:
>> On 11.11.2020 06:49, Juergen Gross wrote:
>>> --- a/xen/common/event_channel.c
>>> +++ b/xen/common/event_channel.c
>>> @@ -61,7 +61,9 @@ static inline void evtchn_write_lock(struct evtchn *evtchn)
>>>   {
>>>       write_lock(&evtchn->lock);
>>>   
>>> +#ifndef NDEBUG
>>>       evtchn->old_state = evtchn->state;
>>> +#endif
>>>   }
>>>   
>>>   static inline void evtchn_write_unlock(struct evtchn *evtchn)
>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>> index 7251b3ae3e..95f96e7a33 100644
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -114,9 +114,7 @@ struct evtchn
>>>           u16 virq;      /* state == ECS_VIRQ */
>>>       } u;
>>>       u8 priority;
>>> -#ifndef NDEBUG
>>>       u8 old_state;      /* State when taking lock in write mode. */
>>> -#endif
>>>       u8 last_priority;
>>>       u16 last_vcpu_id;
>>>   #ifdef CONFIG_XSM
>>
>> Did you mean just either of the two changes (and then the former),
>> not both? If so
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> and I'll be happy to drop the other half for committing.
> 
> The header fix is required for NDEBUG builds, while the other one is
> removing a write with no accompanying read for NDEBUG builds.

Oh, that's because of our absurd ASSERT() implementation in the
NDEBUG case. I stand by my position that the field should not be
there in the first place for release builds. Even more so with
the original patch having got re-based ahead of what was patch 1
of the series, which I did not least because I want that one
backported urgently, while I continue to be hesitant altogether
about the other one.

I guess the course of action is then to put #ifndef NDEBUG
around the ASSERT() itself, however strange this may look. Or
introduce an evtchn_old_state() wrapper, perhaps returning
ECS_RESERVED in the NDEBUG case. I guess it'll be quicker if I
take your patch and massage it before throwing in, than to have
you make a v2.
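A rough sketch of that evtchn_old_state() idea, on a hypothetical, heavily reduced struct evtchn (only the fields from the patch; the ECS_RESERVED value is illustrative):

```c
#define ECS_RESERVED 1   /* illustrative value; Xen defines this in an enum */

/* Hypothetical cut-down struct evtchn, keeping only the fields the
 * patch under discussion touches. */
struct evtchn {
    unsigned char state;
#ifndef NDEBUG
    unsigned char old_state;   /* State when taking lock in write mode. */
#endif
};

/* Wrapper as suggested: debug builds return the recorded state, NDEBUG
 * (release) builds a fixed ECS_RESERVED, so callers such as an ASSERT()
 * need no #ifndef of their own. */
static inline unsigned char evtchn_old_state(const struct evtchn *evtchn)
{
#ifndef NDEBUG
    return evtchn->old_state;
#else
    return ECS_RESERVED;
#endif
}
```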

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:39:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24288.51448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckjE-0005LG-Ke; Wed, 11 Nov 2020 07:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24288.51448; Wed, 11 Nov 2020 07:39:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckjE-0005L2-Hd; Wed, 11 Nov 2020 07:39:40 +0000
Received: by outflank-mailman (input) for mailman id 24288;
 Wed, 11 Nov 2020 07:39:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kckjC-0005Jg-Uu
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:39:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7eff94fc-6cae-49ea-8bf2-9f6b13a49567;
 Wed, 11 Nov 2020 07:39:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CE605ACB5;
 Wed, 11 Nov 2020 07:39:32 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kckjC-0005Jg-Uu
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:39:38 +0000
X-Inumbo-ID: 7eff94fc-6cae-49ea-8bf2-9f6b13a49567
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7eff94fc-6cae-49ea-8bf2-9f6b13a49567;
	Wed, 11 Nov 2020 07:39:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605080373;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=f2Df8ZiXE7mehe9z7eXmbBCGGVz7kKA5f9r8mGMDEzA=;
	b=in/kuocUZ3ZXjWhQ2trCb268597gfSea3A5kJDl7NAhQtqW+FrzatI4fjOwuF3rvRHSiib
	wwMzO//y47lPkzydImvocRgu2adoxGVGkpX4WiNo5MgTj1iB37lnsb7g5e5mFXB7hK7d1L
	WWxUQAe1aIxgBgPBcbOHPfTx10vllEg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id CE605ACB5;
	Wed, 11 Nov 2020 07:39:32 +0000 (UTC)
Subject: Re: [PATCH] xen/events: fix build
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201111054946.3229-1-jgross@suse.com>
 <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
 <dbd01178-d925-b01c-8624-377a00270a22@suse.com>
 <8882494b-0da5-306e-ba2b-e1b7588973ad@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2627d9f1-a392-951e-5511-4058445d9f3d@suse.com>
Date: Wed, 11 Nov 2020 08:39:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <8882494b-0da5-306e-ba2b-e1b7588973ad@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="nyGfzY1Jg7rMWLqlQOBokNIKJjemetkww"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--nyGfzY1Jg7rMWLqlQOBokNIKJjemetkww
Content-Type: multipart/mixed; boundary="joIYApioSDXdMmY0qfT3ObG81m6R0vlPY";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <2627d9f1-a392-951e-5511-4058445d9f3d@suse.com>
Subject: Re: [PATCH] xen/events: fix build
References: <20201111054946.3229-1-jgross@suse.com>
 <8feafd17-851f-9bb2-0fe0-2d6f8bed4d4c@suse.com>
 <dbd01178-d925-b01c-8624-377a00270a22@suse.com>
 <8882494b-0da5-306e-ba2b-e1b7588973ad@suse.com>
In-Reply-To: <8882494b-0da5-306e-ba2b-e1b7588973ad@suse.com>

--joIYApioSDXdMmY0qfT3ObG81m6R0vlPY
Content-Type: multipart/mixed;
 boundary="------------096B23860B27495166121185"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------096B23860B27495166121185
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.11.20 08:37, Jan Beulich wrote:
> On 11.11.2020 08:27, Jürgen Groß wrote:
>> On 11.11.20 08:20, Jan Beulich wrote:
>>> On 11.11.2020 06:49, Juergen Gross wrote:
>>>> --- a/xen/common/event_channel.c
>>>> +++ b/xen/common/event_channel.c
>>>> @@ -61,7 +61,9 @@ static inline void evtchn_write_lock(struct evtchn *evtchn)
>>>>    {
>>>>        write_lock(&evtchn->lock);
>>>>   
>>>> +#ifndef NDEBUG
>>>>        evtchn->old_state = evtchn->state;
>>>> +#endif
>>>>    }
>>>>   
>>>>    static inline void evtchn_write_unlock(struct evtchn *evtchn)
>>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>>> index 7251b3ae3e..95f96e7a33 100644
>>>> --- a/xen/include/xen/sched.h
>>>> +++ b/xen/include/xen/sched.h
>>>> @@ -114,9 +114,7 @@ struct evtchn
>>>>            u16 virq;      /* state == ECS_VIRQ */
>>>>        } u;
>>>>        u8 priority;
>>>> -#ifndef NDEBUG
>>>>        u8 old_state;      /* State when taking lock in write mode. */
>>>> -#endif
>>>>        u8 last_priority;
>>>>        u16 last_vcpu_id;
>>>>    #ifdef CONFIG_XSM
>>>
>>> Did you mean just either of the two changes (and then the former),
>>> not both? If so
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> and I'll be happy to drop the other half for committing.
>>
>> The header fix is required for NDEBUG builds, while the other one is
>> removing a write with no accompanying read for NDEBUG builds.
> 
> Oh, that's because of our absurd ASSERT() implementation in the
> NDEBUG case. I stand by my position that the field should not be
> there in the first place for release builds. Even more so with
> the original patch having got re-based ahead of what was patch 1
> of the series, which I did not least because I want that one
> backported urgently, while I continue to be hesitant altogether
> about the other one.
> 
> I guess the course of action is then to put #ifndef NDEBUG
> around the ASSERT() itself, however strange this may look. Or
> introduce an evtchn_old_state() wrapper, perhaps returning
> ECS_RESERVED in the NDEBUG case. I guess it'll be quicker if I
> take your patch and massage it before throwing in, than to have
> you make a v2.

Fine with me.


Juergen

--------------096B23860B27495166121185--

--joIYApioSDXdMmY0qfT3ObG81m6R0vlPY--

--nyGfzY1Jg7rMWLqlQOBokNIKJjemetkww
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+rlTQFAwAAAAAACgkQsN6d1ii/Ey/+
lAf/YVD3rawgrnQ44NlDmhouP7M1Qh7vs99p2FChEhfqkRQJfTHc1jL2DiOIUt/2sQkYNEdePMNL
9/9Y4e/TSJMjSk12vl17YU46xHTQeA7A/3/y7jPulLB7A0LbNdP9iWmXmivqhCJBop1kQndqDh14
iO4LPhfexWPBPZlhCR+NleZPgdWMvOP5AjDlVbqIyQs+S+EjnoTrlyGA8EOxA4O4vYEtrCzzMrko
i7U4zuRPy0kursJ4DZ1TLRqXx1zvdHeDnbfxz/g/lwoY45ySc8Bl3eYfMTU6gpSoHhBocbp8Xp9Z
11wZZm7X/zaWH50I5Mtswh4BT1GjXHh/lBolUY49Bw==
=f0AW
-----END PGP SIGNATURE-----

--nyGfzY1Jg7rMWLqlQOBokNIKJjemetkww--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 07:47:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 07:47:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24298.51464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckqs-0006Fz-Et; Wed, 11 Nov 2020 07:47:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24298.51464; Wed, 11 Nov 2020 07:47:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kckqs-0006Fs-Bp; Wed, 11 Nov 2020 07:47:34 +0000
Received: by outflank-mailman (input) for mailman id 24298;
 Wed, 11 Nov 2020 07:47:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kckqq-0006FE-K4
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 07:47:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbf64ef7-23a3-42b1-b354-43f03c121306;
 Wed, 11 Nov 2020 07:47:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kckqg-0004U6-Lu; Wed, 11 Nov 2020 07:47:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kckqg-0006dt-Ce; Wed, 11 Nov 2020 07:47:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kckqg-0004dg-C4; Wed, 11 Nov 2020 07:47:22 +0000
X-Inumbo-ID: dbf64ef7-23a3-42b1-b354-43f03c121306
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aKlyfz+CFQH3VDXP8H9av21IZ/ZVZmqZDMODh2jikJU=; b=JERaT8JTW3evejV72Uf0nCuBZU
	RUwt2vXV8KCP6ue+Atd6ZwMovQz/9yVXIfX6wsxEyPUaxnp10U59ZUns6ugM56RsHnrjPc25jmWET
	MXDfu6YXSdY3VwVUjwz+8poEeh1O9buGtmemlp+AjOicDgOhaJfQ8GE6UlcJwzHpKYno=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156620-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156620: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-pvhv2-amd:guest-localmigrate/x10:fail:regression
    linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ec9c6b417e271ee76d1430d2b197794858238d3b
X-Osstest-Versions-That:
    linux=6e97ed6efa701db070da0054b055c085895aba86
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 07:47:22 +0000

flight 156620 linux-5.4 real [real]
flight 156676 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156620/
http://logs.test-lab.xenproject.org/osstest/logs/156676/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-amd 20 guest-localmigrate/x10  fail REGR. vs. 156412

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 156412

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156412
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156412
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156412
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156412
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156412
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156412
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156412
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156412
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156412
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156412
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156412
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ec9c6b417e271ee76d1430d2b197794858238d3b
baseline version:
 linux                6e97ed6efa701db070da0054b055c085895aba86

Last test of basis   156412  2020-11-05 11:13:59 Z    5 days
Testing same since   156620  2020-11-10 12:10:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "kiyin(尹亮)" <kiyin@tencent.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Sverdlin <alexander.sverdlin@nokia.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Strohman <astroh@amazon.com>
  Artem Lapkin <art@khadas.com>
  Baurzhan Ismagulov <ibr@radix50.net>
  Ben Skeggs <bskeggs@redhat.com>
  Bjørn Mork <bjorn@mork.no>
  Boris Brezillon <boris.brezillon@collabora.com>
  Borislav Petkov <bp@suse.de>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Claire Chang <tientzu@chromium.org>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Clément Péron <peron.clem@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Daniele Palmas <dnlplm@gmail.com>
  Dany Madden <drt@linux.ibm.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  David S. Miller <davem@davemloft.net>
  Eddy Wu <eddy_wu@trendmicro.com>
  Eddy Wu <itseddy0402@gmail.com>
  Fangrui Song <maskray@google.com>
  Felipe Balbi <balbi@kernel.org>
  Frank Slotta <frank.slotta@posteo.de>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Geoffrey D. Bennett <g@b4.vu>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Guenter Roeck <linux@roeck-us.net>
  Harald Freudenberger <freude@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com>
  Hoang Huu Le <hoang.h.le@dektech.com.au>
  Hoegeun Kwon <hoegeun.kwon@samsung.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Vander Stoep <jeffv@google.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@siol.net>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Kailang Yang <kailang@realtek.com>
  Kairui Song <kasong@redhat.com>
  Karol Herbst <kherbst@redhat.com>
  Keith Winstein <keithw@cs.stanford.edu>
  Kevin Hilman <khilman@baylibre.com>
  kiyin(尹亮) <kiyin@tencent.com>
  Lee Jones <lee.jones@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Brown <broonie@kernel.org>
  Mark Deneen <mdeneen@saucontech.com>
  Martin Hundebøll <martin@geanix.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Gorski <mateusz.gorski@linux.intel.com>
  Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
  Maxime Ripard <maxime@cerno.tech>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Walle <michael@walle.cc>
  Michal Hocko <mhocko@suse.com>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Mike Galbraith <efault@gmx.de>
  Ming Lei <ming.lei@redhat.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Oleg Nesterov <oleg@redhat.com>
  Ondřej Jirman <megous@megous.com>
  Pali Rohár <pali@kernel.org>
  Paul E. McKenney <paulmck@kernel.org>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Chen <peter.chen@nxp.com>
  Petr Malat <oss@malat.biz>
  Qian Cai <cai@redhat.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qiujun Huang <hqjagain@gmail.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ralph Campbell <rcampbell@nvidia.com>
  Rob Herring <robh@kernel.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sami Tolvanen <samitolvanen@google.com>
  Sasha Levin <sashal@kernel.org>
  Scott K Logan <logans@cottsay.net>
  Shannon Nelson <snelson@pensando.io>
  Shijie Luo <luoshijie1@huawei.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sukadev Bhattiprolu <sukadev@linux.ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tianci.Yin <tianci.yin@amd.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vignesh Raghavendra <vigneshr@ti.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  wenxu <wenxu@ucloud.cn>
  Will Deacon <will@kernel.org>
  Xiang Chen <chenxiang66@hisilicon.com>
  YueHaibing <yuehaibing@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Ziyi Cao <kernel@septs.pw>
  Zqiang <qiang.zhang@windriver.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2653 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:04:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:04:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24323.51487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcl7Q-0000GY-Nh; Wed, 11 Nov 2020 08:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24323.51487; Wed, 11 Nov 2020 08:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcl7Q-0000GR-Kf; Wed, 11 Nov 2020 08:04:40 +0000
Received: by outflank-mailman (input) for mailman id 24323;
 Wed, 11 Nov 2020 08:04:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcl7P-0000GM-GJ
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:04:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 895e710c-8494-4249-83f1-c5a5ebf9f53e;
 Wed, 11 Nov 2020 08:04:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8DC17AC75;
 Wed, 11 Nov 2020 08:04:32 +0000 (UTC)
X-Inumbo-ID: 895e710c-8494-4249-83f1-c5a5ebf9f53e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605081872;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RKlZesGlo/Jod+lByjm7c3cskP1uxr5yBJcFPgvX8L0=;
	b=IrZl5j10II7JvVlgqMEEsr9j/nxZPlsYpP10mdPvevcD3pR5vNXV8GygOxdZ4jj4C3ZpVh
	BX3EH7pqFCuhfxlkFLoEjMctlpLdP2xy9MTiFyfqlUbSw8ARBgaQe1wdTApoJzlhNvJLWK
	cY5tpFOlTTQ67y9xooRQborAgURzvc8=
Subject: Re: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0582af84-6c2c-9f3d-e056-2f828dd8666a@suse.com>
Date: Wed, 11 Nov 2020 09:04:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -143,6 +143,19 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
>  
>  struct waitqueue_vcpu;
>  
> +enum io_completion {
> +    IO_no_completion,
> +    IO_mmio_completion,
> +    IO_pio_completion,
> +    IO_realmode_completion
> +};

May I suggest wrapping at least the last one in #ifdef CONFIG_X86?

Also please add a trailing comma.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:08:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:08:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24330.51499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclBW-0000Tl-8e; Wed, 11 Nov 2020 08:08:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24330.51499; Wed, 11 Nov 2020 08:08:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclBW-0000Te-5h; Wed, 11 Nov 2020 08:08:54 +0000
Received: by outflank-mailman (input) for mailman id 24330;
 Wed, 11 Nov 2020 08:08:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kclBU-0000TZ-W7
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:08:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0749748-52be-40a5-8971-338cf1005c26;
 Wed, 11 Nov 2020 08:08:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9AB63AC82;
 Wed, 11 Nov 2020 08:08:49 +0000 (UTC)
X-Inumbo-ID: f0749748-52be-40a5-8971-338cf1005c26
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605082129;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vmp8pwriy0PxUYkKMYXfKsdmcXzIzJkX96FtNs/m6+Q=;
	b=ihT04QN2Nk4f0MgBINwa04ctsgoBy0kMacSXYYQvtpGymwLnLrvIpx/DAXWWM41XaC1DC8
	OTmmWxxsXEcJpMpSw/yh+j+fa5/PWRiTFjosdzzJUOjzStf3bZ33kxFDLGmQOz1VcFPqsz
	cjVUhqJe10PeaekcJ5gHZnhUYMjfZJg=
Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Oleksandr <olekstysh@gmail.com>
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
 <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
 <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com>
Date: Wed, 11 Nov 2020 09:08:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.11.2020 21:53, Oleksandr wrote:
> 
> On 20.10.20 13:51, Paul Durrant wrote:
> 
> Hi Paul.
> 
> Sorry for the late response.
> 
>>
>>> -----Original Message-----
>>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>>> Sent: 15 October 2020 17:44
>>> To: xen-devel@lists.xenproject.org
>>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano Stabellini <sstabellini@kernel.org>;
>>> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
>>> <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Paul Durrant
>>> <paul@xen.org>; Julien Grall <julien.grall@arm.com>
>>> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>>
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> This patch introduces a helper whose main purpose is to check
>>> whether a domain is using IOREQ server(s).
>>>
>>> On Arm the current benefit is to avoid calling handle_io_completion()
>>> (which implies iterating over all possible IOREQ servers anyway)
>>> on every return in leave_hypervisor_to_guest() if there are no active
>>> servers for the particular domain.
>>> Also this helper will be used by one of the subsequent patches on Arm.
>>>
>>> This involves adding an extra per-domain variable to store the count
>>> of servers in use.
>>>
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> CC: Julien Grall <julien.grall@arm.com>
>>>
>>> ---
>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>> "Add support for Guest IO forwarding to a device emulator"
>>>
>>> Changes RFC -> V1:
>>>     - new patch
>>>
>>> Changes V1 -> V2:
>>>     - update patch description
>>>     - guard helper with CONFIG_IOREQ_SERVER
>>>     - remove "hvm" prefix
>>>     - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>>>     - put suitable ASSERT()s
>>>     - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>>>     - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
>>> ---
>>>   xen/arch/arm/traps.c    | 15 +++++++++------
>>>   xen/common/ioreq.c      |  7 ++++++-
>>>   xen/include/xen/ioreq.h | 14 ++++++++++++++
>>>   xen/include/xen/sched.h |  1 +
>>>   4 files changed, 30 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>> index 507c095..a8f5fdf 100644
>>> --- a/xen/arch/arm/traps.c
>>> +++ b/xen/arch/arm/traps.c
>>> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
>>>       struct vcpu *v = current;
>>>
>>>   #ifdef CONFIG_IOREQ_SERVER
>>> -    bool handled;
>>> +    if ( domain_has_ioreq_server(v->domain) )
>>> +    {
>>> +        bool handled;
>>>
>>> -    local_irq_enable();
>>> -    handled = handle_io_completion(v);
>>> -    local_irq_disable();
>>> +        local_irq_enable();
>>> +        handled = handle_io_completion(v);
>>> +        local_irq_disable();
>>>
>>> -    if ( !handled )
>>> -        return true;
>>> +        if ( !handled )
>>> +            return true;
>>> +    }
>>>   #endif
>>>
>>>       if ( likely(!v->arch.need_flush_to_ram) )
>>> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
>>> index bcd4961..a72bc0e 100644
>>> --- a/xen/common/ioreq.c
>>> +++ b/xen/common/ioreq.c
>>> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>>>                                struct ioreq_server *s)
>>>   {
>>>       ASSERT(id < MAX_NR_IOREQ_SERVERS);
>>> -    ASSERT(!s || !d->ioreq_server.server[id]);
>>> +    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
>> That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?
> 
> ok, looks like it will work.
> 
> 
>>    Paul
>>
>>>       d->ioreq_server.server[id] = s;
>>> +
>>> +    if ( s )
>>> +        d->ioreq_server.nr_servers++;
>>> +    else
>>> +        d->ioreq_server.nr_servers--;
>>>   }
>>>
>>>   #define GET_IOREQ_SERVER(d, id) \
>>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>>> index 7b03ab5..0679fef 100644
>>> --- a/xen/include/xen/ioreq.h
>>> +++ b/xen/include/xen/ioreq.h
>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>       uint8_t                bufioreq_handling;
>>>   };
>>>
>>> +#ifdef CONFIG_IOREQ_SERVER
>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>> +{
>>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
>>> +
>> This seems like an odd place to put such an assertion.
> 
> I might be missing something or have interpreted it incorrectly, but these
> asserts are the result of how I understood the review comment on the previous version [1].
> 
> I will copy a comment here for the convenience:
> "This is safe only when d == current->domain and it's not paused,
> or when they're distinct and d is paused. Otherwise the result is
> stale before the caller can inspect it. This wants documenting by
> at least a comment, but perhaps better by suitable ASSERT()s."

The way his reply was worded, I think Paul was wondering about the
place where you put the assertion, not what you actually assert. 

>>> +    return d->ioreq_server.nr_servers;
>>> +}
>>> +#else
>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>> +{
>>> +    return false;
>>> +}
>>> +#endif
>>> +
>> Can this be any more compact? E.g.
>>
>> return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;
>>
>> ?
> I have got a compilation error this way (if CONFIG_IOREQ_SERVER is 
> disabled):
> 
> ...xen/4.14.0+gitAUTOINC+ee22110219-r0/git/xen/include/xen/ioreq.h:62:48: 
> error: ‘const struct domain’ has no member named ‘ioreq_server’
>       return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;
>                                                  ^
> as the domain's ioreq_server struct is guarded by CONFIG_IOREQ_SERVER as well.

The #ifdef is unavoidable here, I agree, but it should be inside
the function's body. There's no need to duplicate the rest of it.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24334.51511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclC5-0000ZT-IE; Wed, 11 Nov 2020 08:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24334.51511; Wed, 11 Nov 2020 08:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclC5-0000ZL-FI; Wed, 11 Nov 2020 08:09:29 +0000
Received: by outflank-mailman (input) for mailman id 24334;
 Wed, 11 Nov 2020 08:09:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kclC4-0000ZF-T5
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:09:28 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25d1b008-ddb5-4e02-8e40-8e42dc2daef5;
 Wed, 11 Nov 2020 08:09:27 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id h23so1022611ljg.13
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 00:09:27 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 8sm145087lfk.246.2020.11.11.00.09.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Nov 2020 00:09:26 -0800 (PST)
X-Inumbo-ID: 25d1b008-ddb5-4e02-8e40-8e42dc2daef5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=Y86yd3DlAxaOfu2nLRPPa9Zkd4hYQHr84dhPBKcH5nM=;
        b=SqodPKMu2+icN1uGiwh/82nc8NErYW61swqce9Tb24u/S3m87QLPQ47F9ArUMmXwfH
         RO1erTklhQaIdw+sXXUvd4dDZY87YUGJzeGtJwxCOJ6+JE/GolF7brGc47XSbokYz5wf
         q11RN2dQ+dh6GJLiFotWiEsnEv55Wc+5iiE5MwuE3enTQYroeGlqGwZSYmOAFRC9r7LN
         dbTsDCjqlFerB4/lQ5v+T8rbhINaAjCTSQUGzNSHJMkxP/93Yf5wfmxIDyPjEOKijRgO
         /zrWF8y4jxWuDRE7+SPPzFwRwEbTYyNBaGsYjk0N/D+CShfzsX+BVwdZOgPMYTbYf+Ky
         WPMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=Y86yd3DlAxaOfu2nLRPPa9Zkd4hYQHr84dhPBKcH5nM=;
        b=BysGbQPO2O1BTWMGOtmbZBnA5XLjBYrAP7zWQC9XuKOr4T3C24ct+GaCJuCro59tO6
         u7OWW7GiRrkRHKhds+vjrxJJ+/wHDf+ANXiPpMPJCU4QaWeUJZXBQqt6S8rkqB1jyJyt
         XGUcesuKYwVLDdelgOeTijFQilOzgldV5f1iNYfJqeqiawLaQSnNPDUjZpzhyCqLn8P0
         GKTSNVxt8R1BLWRoHpJD0s+YvCZNI/LyAkGpQ8J0OAg0DmEw5KaAe5bu0FTcjqJrk+6d
         h+c3svt32lyHZULum7NV80gBYdzw2YCGhadXL8saA2+CbaIRMv2x8oOVlVq/rTxJ640m
         b2Dg==
X-Gm-Message-State: AOAM530Wrsl/WnJcTRd6grSYOkunbMXFQp3CROeO9wm713rmGDmQCPEp
	Sjdy2ks7U8Zw9UZDHRdZwac=
X-Google-Smtp-Source: ABdhPJwOVO901jbF+6fbxZ66qUCsA7j8iV1RoruRGKhyn/IzYteke5ORW0vyQ8sTlyQfiqjTx5dr0w==
X-Received: by 2002:a2e:6c14:: with SMTP id h20mr3541390ljc.458.1605082166621;
        Wed, 11 Nov 2020 00:09:26 -0800 (PST)
Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
 <004701d6a6c1$6c09f860$441de920$@xen.org>
 <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
 <004a01d6a6cd$1f4684b0$5dd38e10$@xen.org>
 <fab8e4b0-e3b2-fb74-76d4-42753ac88367@gmail.com>
 <f2d1cbef-09ec-86e4-bfc5-20320f78be6b@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <35d38051-129e-333d-ec94-aa36e68a3814@gmail.com>
Date: Wed, 11 Nov 2020 10:09:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f2d1cbef-09ec-86e4-bfc5-20320f78be6b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 11.11.20 09:27, Jan Beulich wrote:

Hi Jan

> On 10.11.2020 20:44, Oleksandr wrote:
>> On 20.10.20 13:38, Paul Durrant wrote:
>>
>> Hi Jan, Paul
>>
>> Sorry for the late response.
>>
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 20 October 2020 11:05
>>>> To: paul@xen.org
>>>> Cc: 'Oleksandr Tyshchenko' <olekstysh@gmail.com>; xen-devel@lists.xenproject.org; 'Oleksandr
>>>> Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Roger Pau
>>>> Monné' <roger.pau@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Julien Grall' <julien@xen.org>; 'Stefano
>>>> Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien.grall@arm.com>
>>>> Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
>>>>
>>>> On 20.10.2020 11:14, Paul Durrant wrote:
>>>>>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
>>>>>> Sent: 15 October 2020 17:44
>>>>>>
>>>>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>>>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>>>>> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>>>>>>    #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>>>>>>    #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
>>>>>>
>>>>>> +#define ioreq_complete_mmio   handle_mmio
>>>>>> +
>>>>> A #define? Really? Can we not have a static inline?
>>>> I guess this would require further shuffling: handle_mmio() is
>>>> an inline function in hvm/emulate.h, and hvm/ioreq.h has no
>>>> need to include the former (and imo it also shouldn't have).
>>>>
>>> I see. I think we need an x86 ioreq.c anyway, to deal with the legacy use of magic pages, so it could be dealt with there instead.
>> I am afraid I don't entirely understand the required changes. Could you
>> please clarify where the "inline(?)" ioreq_complete_mmio() should
>> live? I included hvm/emulate.h here not for the "handle_mmio()" reason
>> only, but for "struct hvm_emulate_ctxt" also (see arch_io_completion()).
> I'm sorry, but in the context of this patch there's no use of any
> struct hvm_emulate_ctxt instance. I'm not going to wade through 23
> patches to find what you mean.

Sorry for not being precise here. I meant arch_io_completion() added at [1]


[1] 
https://patchwork.kernel.org/project/xen-devel/patch/1602780274-29141-2-git-send-email-olekstysh@gmail.com/

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:14:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24344.51523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclH7-0001Ud-6k; Wed, 11 Nov 2020 08:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24344.51523; Wed, 11 Nov 2020 08:14:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclH7-0001UW-35; Wed, 11 Nov 2020 08:14:41 +0000
Received: by outflank-mailman (input) for mailman id 24344;
 Wed, 11 Nov 2020 08:14:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kclH6-0001UR-94
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:14:40 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27034e53-888e-4918-8821-4e374cac7470;
 Wed, 11 Nov 2020 08:14:39 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id b17so1036895ljf.12
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 00:14:39 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id f2sm150899ljc.118.2020.11.11.00.14.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Nov 2020 00:14:37 -0800 (PST)
X-Inumbo-ID: 27034e53-888e-4918-8821-4e374cac7470
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=zqxgfoKuAyK4lLVvrlnmm4wF+ekwP0LtLglPa8tjDF8=;
        b=tW0RfZtQ6wRSIsm9wDUnGUx8zx4VuUyeqYoP9adR+sVni0E1/yhfFtrYf/Olt7aDKY
         vdHFgdWWFj0XTUdTh8TpATO/aMCOQsyVnFnHmEbMttx9NNZ1CLr7YOXWTaRXgBXL9Lo9
         qzaTUufjgdIrQZ/e3/NAtKUAImWczKWrf1IXt4RM2iCgWc91aBX9EoSsYp0WPhLMMZob
         L9biI3eaJJdwqCGUprOB3P9lRuMC7TkWqTWa9+VVWqSLGdjJs+QekiFH00RYxQ8XVrJl
         msfar9l2rrysqlzSsrR08qnsQchoR/KHT9u4UeH0SrUU9djNZsOFl+h6JvaXNxwuokgj
         q6EQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=zqxgfoKuAyK4lLVvrlnmm4wF+ekwP0LtLglPa8tjDF8=;
        b=jOB4MHG5MmGtwSTCXLY4TWwtR0ifqs38FJHHJnuV3jh0Zei7n2RFvFxFSSksn0MMEi
         IgVSSZ76lUemB64scMwk4Rt+LFGHwgDLNn7mouhisgfToFamvM6YMzxoYPhqjp2pBfxk
         um6qj6anIJo/DLb1eeOEA37FLc//sOfnN5kLchdkbzldsBKJeQEnkW8RN/czBVu4fP0z
         MiS8BxNcQIFMlTGSeOsU/WE0QgQd97ZEKzhpJhpR+tQWUgDTb1wKjkW4Tu9A+fxsnqgV
         NGa3po43KMVF/2W/Wjj51j8fQfn4ADvN6491KMNbk7g9vPA5LBtA1e80HNvIKlVDMXwE
         t+4Q==
X-Gm-Message-State: AOAM5309i9knU3Keiv/aT2tL2Zdndtr1/FW/FEWItdXK3XHxUm4RbSGV
	lRFIN4zoHmgLoixcGDfckST+ss4bI/tpGA==
X-Google-Smtp-Source: ABdhPJzj/FaQLNO4vt6jZCmRSqQlNdaHCCuYMJVdFWKSCVJcuQJYe3Vlk25s9Syk4SlCxJyW134xAw==
X-Received: by 2002:a2e:5450:: with SMTP id y16mr10579585ljd.288.1605082478044;
        Wed, 11 Nov 2020 00:14:38 -0800 (PST)
Subject: Re: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req
 fields to struct vcpu
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
 <0582af84-6c2c-9f3d-e056-2f828dd8666a@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <bae3cd0e-f949-f29e-05f7-a2f13b78c8b2@gmail.com>
Date: Wed, 11 Nov 2020 10:14:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0582af84-6c2c-9f3d-e056-2f828dd8666a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11.11.20 10:04, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -143,6 +143,19 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
>>   
>>   struct waitqueue_vcpu;
>>   
>> +enum io_completion {
>> +    IO_no_completion,
>> +    IO_mmio_completion,
>> +    IO_pio_completion,
>> +    IO_realmode_completion
>> +};
> May I suggest wrapping at least the last one in #ifdef CONFIG_X86?
>
> Also please add a trailing comma.

Yes, will do.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:16:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24350.51535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclIk-0001co-Jp; Wed, 11 Nov 2020 08:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24350.51535; Wed, 11 Nov 2020 08:16:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclIk-0001ch-Fb; Wed, 11 Nov 2020 08:16:22 +0000
Received: by outflank-mailman (input) for mailman id 24350;
 Wed, 11 Nov 2020 08:16:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kclIj-0001cc-HM
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:16:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 718587b1-4f72-4325-b1d1-ce623a635394;
 Wed, 11 Nov 2020 08:16:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C45E8AC75;
 Wed, 11 Nov 2020 08:16:19 +0000 (UTC)
X-Inumbo-ID: 718587b1-4f72-4325-b1d1-ce623a635394
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605082579;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F0UVQ7XfQ1QsHhWC89QH2H73gajMtQD6fWVY+G9oIhg=;
	b=c1qX3CpMxMfJ+Kem2oOiaIpbOac7ZOl3HiFq1pF6nd1R1TFqX+AeCK5NJlxBmXIyai3gKj
	QJGA2wvyUwgeKMWSikvW5BWoVtI83YpQKBW5bY7kp4xac++v3YYq9IYFDVnDzktFuO2vgR
	YTOp+wslXaFu8tUGnT+C4FN/YmkWVGA=
Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
To: Oleksandr <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
 <004701d6a6c1$6c09f860$441de920$@xen.org>
 <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
 <004a01d6a6cd$1f4684b0$5dd38e10$@xen.org>
 <fab8e4b0-e3b2-fb74-76d4-42753ac88367@gmail.com>
 <f2d1cbef-09ec-86e4-bfc5-20320f78be6b@suse.com>
 <35d38051-129e-333d-ec94-aa36e68a3814@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2e677d01-bf2f-e569-95a2-490dc05102d9@suse.com>
Date: Wed, 11 Nov 2020 09:16:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <35d38051-129e-333d-ec94-aa36e68a3814@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 09:09, Oleksandr wrote:
> 
> On 11.11.20 09:27, Jan Beulich wrote:
> 
> Hi Jan
> 
>> On 10.11.2020 20:44, Oleksandr wrote:
>>> On 20.10.20 13:38, Paul Durrant wrote:
>>>
>>> Hi Jan, Paul
>>>
>>> Sorry for the late response.
>>>
>>>>> -----Original Message-----
>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>> Sent: 20 October 2020 11:05
>>>>> To: paul@xen.org
>>>>> Cc: 'Oleksandr Tyshchenko' <olekstysh@gmail.com>; xen-devel@lists.xenproject.org; 'Oleksandr
>>>>> Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Roger Pau
>>>>> Monné' <roger.pau@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Julien Grall' <julien@xen.org>; 'Stefano
>>>>> Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien.grall@arm.com>
>>>>> Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
>>>>>
>>>>> On 20.10.2020 11:14, Paul Durrant wrote:
>>>>>>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
>>>>>>> Sent: 15 October 2020 17:44
>>>>>>>
>>>>>>> --- a/xen/include/asm-x86/hvm/ioreq.h
>>>>>>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>>>>>>> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>>>>>>>    #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>>>>>>>    #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
>>>>>>>
>>>>>>> +#define ioreq_complete_mmio   handle_mmio
>>>>>>> +
>>>>>> A #define? Really? Can we not have a static inline?
>>>>> I guess this would require further shuffling: handle_mmio() is
>>>>> an inline function in hvm/emulate.h, and hvm/ioreq.h has no
>>>>> need to include the former (and imo it also shouldn't have).
>>>>>
>>>> I see. I think we need an x86 ioreq.c anyway, to deal with the legacy use of magic pages, so it could be dealt with there instead.
>>> I am afraid I don't entirely understand the required changes. Could you
>>> please clarify where the "inline(?)" ioreq_complete_mmio() should
>>> live? I included hvm/emulate.h here not for the "handle_mmio()" reason
>>> only, but for "struct hvm_emulate_ctxt" also (see arch_io_completion()).
>> I'm sorry, but in the context of this patch there's no use of any
>> struct hvm_emulate_ctxt instance. I'm not going to wade through 23
>> patches to find what you mean.
> 
> Sorry for not being precise here. I meant arch_io_completion() added at [1]

At least some of the inlines you add there are way too large to be
inline functions, imo. But consensus appears to be now to retain a
per-arch ioreq.c anyway.

Jan

> [1] 
> https://patchwork.kernel.org/project/xen-devel/patch/1602780274-29141-2-git-send-email-olekstysh@gmail.com/
> 



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24361.51571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTd-0002hM-7b; Wed, 11 Nov 2020 08:27:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24361.51571; Wed, 11 Nov 2020 08:27:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTd-0002hE-4I; Wed, 11 Nov 2020 08:27:37 +0000
Received: by outflank-mailman (input) for mailman id 24361;
 Wed, 11 Nov 2020 08:27:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTc-0002dF-GK
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dca9e242-ad8a-4336-8862-9c7cc41af2ad;
 Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT5-0007Zq-BZ; Wed, 11 Nov 2020 08:27:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTc-0002dF-GK
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:36 +0000
X-Inumbo-ID: dca9e242-ad8a-4336-8862-9c7cc41af2ad
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id dca9e242-ad8a-4336-8862-9c7cc41af2ad;
	Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=39LqvDrxdGPNtriZMOTjrvhaX6hK5pd+kVbHzj2b5RQ=; b=rB3xS2xKLy3oYNqoQbHUPfTNTa
	r+rNmbYaYrD2dlKjzOxYT9xmEBJsRgUQJcGy66NpIJpzIjMxQizb3i/YvjT4iTdbGigdHKJcenQpb
	F71+M1qmF3OUb37QXvlyPi35EALUT1xpODwUhpDlDborQS+zX63b6P4Ls+4IgD3/rbnslNs6ulVdG
	uiLutWtBfp3TghBBkbaPN3ds7YR09C0o6XQkZdD+ueG3VvtoxpF1pGzKs+I2bINVH1LBvE8L9wwjT
	jQwz44tV+riOP8/WouLdnWaQco2wRAS6pwBfPpT4gA6tmr4NW5hEh60r9spxv7eHvAXz6SCt21dPc
	YXmRlIKg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT5-0007Zq-BZ; Wed, 11 Nov 2020 08:27:03 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update the bdev size
Date: Wed, 11 Nov 2020 09:26:37 +0100
Message-Id: <20201111082658.3401686-4-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

There is no good reason to call revalidate_disk_size separately.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 40ca71b29bb91a..66129b86e97bed 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 			capacity = 0;
 	}
 
-	set_capacity_revalidate_and_notify(disk, capacity, false);
+	set_capacity_revalidate_and_notify(disk, capacity, true);
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
@@ -2136,7 +2136,6 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
 		blk_stack_limits(&ns->head->disk->queue->limits,
 				 &ns->queue->limits, 0);
 		blk_queue_update_readahead(ns->head->disk->queue);
-		nvme_update_bdev_size(ns->head->disk);
 		blk_mq_unfreeze_queue(ns->head->disk->queue);
 	}
 #endif
@@ -3965,8 +3964,6 @@ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_ids *ids)
 	 */
 	if (ret && ret != -ENOMEM && !(ret > 0 && !(ret & NVME_SC_DNR)))
 		nvme_ns_remove(ns);
-	else
-		revalidate_disk_size(ns->disk, true);
 }
 
 static void nvme_validate_or_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24359.51547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTV-0002dR-NM; Wed, 11 Nov 2020 08:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24359.51547; Wed, 11 Nov 2020 08:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTV-0002dK-KP; Wed, 11 Nov 2020 08:27:29 +0000
Received: by outflank-mailman (input) for mailman id 24359;
 Wed, 11 Nov 2020 08:27:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTS-0002dF-J8
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:27 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ccb2d29-f0d0-4874-8e7e-1d45f2d6a14d;
 Wed, 11 Nov 2020 08:27:24 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT1-0007Z4-Lb; Wed, 11 Nov 2020 08:27:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTS-0002dF-J8
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:27 +0000
X-Inumbo-ID: 8ccb2d29-f0d0-4874-8e7e-1d45f2d6a14d
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8ccb2d29-f0d0-4874-8e7e-1d45f2d6a14d;
	Wed, 11 Nov 2020 08:27:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=JAKJjNQpsrrbjmsxnYVVJRTgGHrM34ycZR/npT/boCU=; b=b1ddQnGWc2jtHoKywAuScN8Zdh
	JpUZaUqXCtKuY0r1wvcQJho9I5LlxQ+Jf1PeuV9MT8HozEqz08MjPlQ2bflOzs5jN6AtnzX9UZLvR
	TP+ksdGoX+zMrKqAYf6UV/230oaeX+lnludtn/sFHHGEZUrrgeMSlNkGj8LNwkKSIL8M1BtpRD8QW
	JHnmwoGSdoYRlX13Qq80Wbx/PB+8AMxck1eRfIb4ZyOL7mf/BnfhPUzDrVsEYY2/hN12f5r+V4/aB
	CJTa0r5tvV0A5l4JwwqEcBTVEhomL9IDTnuaOQ3CyD1joSKU5ImqC0esiM6bTSJ+Tq3LxQEN03v+h
	/wjoDc/w==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT1-0007Z4-Lb; Wed, 11 Nov 2020 08:27:00 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: cleanup updating the size of block devices v2
Date: Wed, 11 Nov 2020 09:26:34 +0100
Message-Id: <20201111082658.3401686-1-hch@lst.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Jens,

this series builds on top of the work that went into the last merge window,
and makes sure we have a single coherent interface for updating the size of a
block device.

Changes since v1:
 - minor spelling fixes

Diffstat:
 block/genhd.c                  |   16 +++----
 drivers/block/aoe/aoecmd.c     |   15 +-----
 drivers/block/drbd/drbd_main.c |    6 --
 drivers/block/loop.c           |   36 ++--------------
 drivers/block/nbd.c            |   88 +++++++++++++----------------------------
 drivers/block/pktcdvd.c        |    3 -
 drivers/block/rbd.c            |    3 -
 drivers/block/rnbd/rnbd-clt.c  |    3 -
 drivers/block/virtio_blk.c     |    3 -
 drivers/block/xen-blkfront.c   |    2 
 drivers/block/zram/zram_drv.c  |    7 ---
 drivers/md/dm-raid.c           |    3 -
 drivers/md/dm.c                |    3 -
 drivers/md/md-cluster.c        |    8 ---
 drivers/md/md-linear.c         |    3 -
 drivers/md/md.c                |   24 ++++-------
 drivers/nvme/host/core.c       |   18 --------
 drivers/scsi/sd.c              |    9 +---
 fs/block_dev.c                 |    7 ---
 include/linux/genhd.h          |    3 -
 20 files changed, 76 insertions(+), 184 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24360.51559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTX-0002eN-VZ; Wed, 11 Nov 2020 08:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24360.51559; Wed, 11 Nov 2020 08:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTX-0002eG-SK; Wed, 11 Nov 2020 08:27:31 +0000
Received: by outflank-mailman (input) for mailman id 24360;
 Wed, 11 Nov 2020 08:27:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTX-0002dF-G2
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53d43f0c-1765-4d95-9bc9-7811b7c25870;
 Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT4-0007Zh-93; Wed, 11 Nov 2020 08:27:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTX-0002dF-G2
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:31 +0000
X-Inumbo-ID: 53d43f0c-1765-4d95-9bc9-7811b7c25870
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 53d43f0c-1765-4d95-9bc9-7811b7c25870;
	Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=GTpuLBfYyo8TaeZo/z07ad7aIfYXt5VjexG6J+Un22Y=; b=J7buGI+/FIE7Z9SkwKpEv6Dzao
	H6Jb7XY7aLyNJQXzdjS8KYFAIVquuUqCDHM7hyICJO7/G/MCyKCn0iuzxejTcLdTDO2OcEOx0dcDS
	xhDrtjFYbZTHZdcrjKzLNs7zeWZH6NtDmauWmpSq/4neZW0FKJUBESEe2290E+/sWMUiz2KhrZVe1
	zSq1a/6hxWMD2sjWBneREarqtdUM/v40hRmP7cNWbUvCbhYM5UtDwGuEnfrwRt02CysygTP9nvqBk
	hiXdS7b3AluL/i5ahJdcWqTTM/kEXp3nGDMSv/NxeIEiOg6Xnf7cpb5dGWOuvMZ+I/jr2gTe4ccJr
	Dzzb4ntg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT4-0007Zh-93; Wed, 11 Nov 2020 08:27:02 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 02/24] loop: remove loop_set_size
Date: Wed, 11 Nov 2020 09:26:36 +0100
Message-Id: <20201111082658.3401686-3-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just use set_capacity_revalidate_and_notify directly, as this function
can update the block device size as well when the last parameter is set
to true.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 37 +++++++------------------------------
 1 file changed, 7 insertions(+), 30 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index cb1191d6e945f2..86eb7e0691eef5 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -241,23 +241,6 @@ loop_validate_block_size(unsigned short bsize)
 	return 0;
 }
 
-/**
- * loop_set_size() - sets device size and notifies userspace
- * @lo: struct loop_device to set the size for
- * @size: new size of the loop device
- *
- * Callers must validate that the size passed into this function fits into
- * a sector_t, eg using loop_validate_size()
- */
-static void loop_set_size(struct loop_device *lo, loff_t size)
-{
-	struct block_device *bdev = lo->lo_device;
-
-	bd_set_nr_sectors(bdev, size);
-
-	set_capacity_revalidate_and_notify(lo->lo_disk, size, false);
-}
-
 static inline int
 lo_do_transfer(struct loop_device *lo, int cmd,
 	       struct page *rpage, unsigned roffs,
@@ -1076,7 +1059,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	struct address_space *mapping;
 	struct block_device *claimed_bdev = NULL;
 	int		error;
-	loff_t		size;
 	bool		partscan;
 	unsigned short  bsize;
 
@@ -1164,9 +1146,8 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
 
-	size = get_loop_size(lo, file);
-	loop_set_size(lo, size);
-
+	set_capacity_revalidate_and_notify(lo->lo_disk, get_loop_size(lo, file),
+			true);
 	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
 		      block_size(inode->i_bdev) : PAGE_SIZE);
 
@@ -1402,9 +1383,9 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
 	lo->lo_flags |= prev_lo_flags & ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
 
 	if (size_changed) {
-		loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
-					   lo->lo_backing_file);
-		loop_set_size(lo, new_size);
+		set_capacity_revalidate_and_notify(lo->lo_disk,
+				get_size(lo->lo_offset, lo->lo_sizelimit,
+					 lo->lo_backing_file), true);
 	}
 
 	loop_config_discard(lo);
@@ -1580,14 +1561,10 @@ loop_get_status64(struct loop_device *lo, struct loop_info64 __user *arg) {
 
 static int loop_set_capacity(struct loop_device *lo)
 {
-	loff_t size;
-
 	if (unlikely(lo->lo_state != Lo_bound))
 		return -ENXIO;
-
-	size = get_loop_size(lo, lo->lo_backing_file);
-	loop_set_size(lo, size);
-
+	set_capacity_revalidate_and_notify(lo->lo_disk,
+			get_loop_size(lo, lo->lo_backing_file), true);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24362.51583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTi-0002lm-Hw; Wed, 11 Nov 2020 08:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24362.51583; Wed, 11 Nov 2020 08:27:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTi-0002lf-Dv; Wed, 11 Nov 2020 08:27:42 +0000
Received: by outflank-mailman (input) for mailman id 24362;
 Wed, 11 Nov 2020 08:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTh-0002dF-GV
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7266bc0-fca1-44ed-9190-ec7016c2a723;
 Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT7-0007a6-HW; Wed, 11 Nov 2020 08:27:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTh-0002dF-GV
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:41 +0000
X-Inumbo-ID: d7266bc0-fca1-44ed-9190-ec7016c2a723
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d7266bc0-fca1-44ed-9190-ec7016c2a723;
	Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Dvn0FaNo/8PBixGEN3RzjNI2ye67KoyGrHfPj2XKGOY=; b=WfHCfS71T0TlhdvdbiXY0IUJ8U
	bFjui3wewAtFhnUKYMrqWtx+i0W8oU/rkJ68nfC84r/sgjSLKesqYl+LT9MxjkfIhV2YTB/lNyksS
	j/Ma+k+L39dMRnJ4DnC0BBG1vNY6MsJRcjXsUNa3IEC5qM8G5omocTWx7s3nxSQr/YVEyCYzhXbju
	AJV3R/DCKbNdD66opVe6ukpbs82VppTy5jpH2yHVSitj5R3kIMaDir+nLwKfkxE4rWI320Dyx/PCA
	fL//0AN3krkEY9dv/bES4MDMy4OGu4aGoO5akaYEh2hPkUKoi/NBucV8JuvZjnti7SqFRNyGBHBY8
	vw4CPzkg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT7-0007a6-HW; Wed, 11 Nov 2020 08:27:05 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 05/24] block: remove the update_bdev parameter from set_capacity_revalidate_and_notify
Date: Wed, 11 Nov 2020 09:26:39 +0100
Message-Id: <20201111082658.3401686-6-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The update_bdev argument is always set to true, so remove it.  Also
rename the function to the slightly less verbose set_capacity_and_notify,
as propagating the disk size to the block device isn't really
revalidation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c                | 13 +++++--------
 drivers/block/loop.c         | 11 +++++------
 drivers/block/virtio_blk.c   |  2 +-
 drivers/block/xen-blkfront.c |  2 +-
 drivers/nvme/host/core.c     |  2 +-
 drivers/scsi/sd.c            |  5 ++---
 include/linux/genhd.h        |  3 +--
 7 files changed, 16 insertions(+), 22 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 0a273211fec283..d8d9d6c1c916e1 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -46,17 +46,15 @@ static void disk_del_events(struct gendisk *disk);
 static void disk_release_events(struct gendisk *disk);
 
 /*
- * Set disk capacity and notify if the size is not currently
- * zero and will not be set to zero
+ * Set disk capacity and notify if the size is not currently zero and will not
+ * be set to zero.
  */
-void set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
-					bool update_bdev)
+void set_capacity_and_notify(struct gendisk *disk, sector_t size)
 {
 	sector_t capacity = get_capacity(disk);
 
 	set_capacity(disk, size);
-	if (update_bdev)
-		revalidate_disk_size(disk, true);
+	revalidate_disk_size(disk, true);
 
 	if (capacity != size && capacity != 0 && size != 0) {
 		char *envp[] = { "RESIZE=1", NULL };
@@ -64,8 +62,7 @@ void set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
 		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
 	}
 }
-
-EXPORT_SYMBOL_GPL(set_capacity_revalidate_and_notify);
+EXPORT_SYMBOL_GPL(set_capacity_and_notify);
 
 /*
  * Format the device name of the indicated disk into the supplied buffer and
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 86eb7e0691eef5..77937b760ee0fc 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1146,8 +1146,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
 
-	set_capacity_revalidate_and_notify(lo->lo_disk, get_loop_size(lo, file),
-			true);
+	set_capacity_and_notify(lo->lo_disk, get_loop_size(lo, file));
 	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
 		      block_size(inode->i_bdev) : PAGE_SIZE);
 
@@ -1383,9 +1382,9 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
 	lo->lo_flags |= prev_lo_flags & ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
 
 	if (size_changed) {
-		set_capacity_revalidate_and_notify(lo->lo_disk,
+		set_capacity_and_notify(lo->lo_disk,
 				get_size(lo->lo_offset, lo->lo_sizelimit,
-					 lo->lo_backing_file), true);
+					 lo->lo_backing_file));
 	}
 
 	loop_config_discard(lo);
@@ -1563,8 +1562,8 @@ static int loop_set_capacity(struct loop_device *lo)
 {
 	if (unlikely(lo->lo_state != Lo_bound))
 		return -ENXIO;
-	set_capacity_revalidate_and_notify(lo->lo_disk,
-			get_loop_size(lo, lo->lo_backing_file), true);
+	set_capacity_and_notify(lo->lo_disk,
+			get_loop_size(lo, lo->lo_backing_file));
 	return 0;
 }
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a314b9382442b6..3e812b4c32e669 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -470,7 +470,7 @@ static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
 		   cap_str_10,
 		   cap_str_2);
 
-	set_capacity_revalidate_and_notify(vblk->disk, capacity, true);
+	set_capacity_and_notify(vblk->disk, capacity);
 }
 
 static void virtblk_config_changed_work(struct work_struct *work)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 48629d3433b4c3..79521e33d30ed5 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2370,7 +2370,7 @@ static void blkfront_connect(struct blkfront_info *info)
 			return;
 		printk(KERN_INFO "Setting capacity to %Lu\n",
 		       sectors);
-		set_capacity_revalidate_and_notify(info->gd, sectors, true);
+		set_capacity_and_notify(info->gd, sectors);
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 66129b86e97bed..445274b28518fb 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 			capacity = 0;
 	}
 
-	set_capacity_revalidate_and_notify(disk, capacity, true);
+	set_capacity_and_notify(disk, capacity);
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 4a34dd5b153196..a2a4f385833d6c 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3263,8 +3263,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 
 	sdkp->first_scan = 0;
 
-	set_capacity_revalidate_and_notify(disk,
-		logical_to_sectors(sdp, sdkp->capacity), true);
+	set_capacity_and_notify(disk, logical_to_sectors(sdp, sdkp->capacity));
 	sd_config_write_same(sdkp);
 	kfree(buffer);
 
@@ -3274,7 +3273,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * capacity to 0.
 	 */
 	if (sd_zbc_revalidate_zones(sdkp))
-		set_capacity_revalidate_and_notify(disk, 0, true);
+		set_capacity_and_notify(disk, 0);
 
  out:
 	return 0;
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 38f23d75701379..596f31b5a3e133 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -315,8 +315,7 @@ static inline int get_disk_ro(struct gendisk *disk)
 extern void disk_block_events(struct gendisk *disk);
 extern void disk_unblock_events(struct gendisk *disk);
 extern void disk_flush_events(struct gendisk *disk, unsigned int mask);
-void set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
-		bool update_bdev);
+void set_capacity_and_notify(struct gendisk *disk, sector_t size);
 
 /* drivers/char/random.c */
 extern void add_disk_randomness(struct gendisk *disk) __latent_entropy;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24364.51595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTn-0002re-R7; Wed, 11 Nov 2020 08:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24364.51595; Wed, 11 Nov 2020 08:27:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTn-0002rU-Mw; Wed, 11 Nov 2020 08:27:47 +0000
Received: by outflank-mailman (input) for mailman id 24364;
 Wed, 11 Nov 2020 08:27:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTm-0002dF-Gp
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0190187-1c80-456f-85ae-a5289f70094e;
 Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT9-0007aX-Sw; Wed, 11 Nov 2020 08:27:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTm-0002dF-Gp
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:46 +0000
X-Inumbo-ID: c0190187-1c80-456f-85ae-a5289f70094e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c0190187-1c80-456f-85ae-a5289f70094e;
	Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=DxpFRHQFxTQEJeN4GgGJQ0kD3jbey5oDwaxlpVZafNQ=; b=vJB9c0ufLsJ+1Xo5f8NEg4RHaC
	98XDL7CsYrrMKs9Q11bCNE34XEPD2IqwS7KKo5742pBrutf+nBOji8TuquAbexo/jmfAH5rN6yLvg
	vynvG1kS12vAEnRM0qCOkqzj4K6wx65p08QSCU6KweYUO41Ky9CBvAueWqKzfwAL2Izvf1mxdkMSH
	CF7dcunxgYm95R69mW8FLbnawc67+JRWKMKFSvpCi7sea2728l+t/4wD3D6d8xlKV/lA1rlhsloEC
	MoxWwOngvmgMlyXhXRA8eKa8tBI6ITu5Prbwd5gH9fsErYkZCfFYsvFdRkWPc5VOc0QL8QZblikj5
	EJ2PRWpw==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT9-0007aX-Sw; Wed, 11 Nov 2020 08:27:08 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 07/24] nbd: remove the call to set_blocksize
Date: Wed, 11 Nov 2020 09:26:41 +0100
Message-Id: <20201111082658.3401686-8-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Block drivers have no business setting the file system concept of a
block size.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index c4f9ccf5cc2ac5..f618688a196654 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,7 +296,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd, bool start)
+static void nbd_size_update(struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
 	struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -311,11 +311,9 @@ static void nbd_size_update(struct nbd_device *nbd, bool start)
 	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
 	set_capacity(nbd->disk, nr_sectors);
 	if (bdev) {
-		if (bdev->bd_disk) {
+		if (bdev->bd_disk)
 			bd_set_nr_sectors(bdev, nr_sectors);
-			if (start)
-				set_blocksize(bdev, config->blksize);
-		} else
+		else
 			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
 		bdput(bdev);
 	}
@@ -329,7 +327,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
 	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd, false);
+		nbd_size_update(nbd);
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1309,7 +1307,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd, true);
+	nbd_size_update(nbd);
 	return error;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24370.51607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTs-0002xG-Ao; Wed, 11 Nov 2020 08:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24370.51607; Wed, 11 Nov 2020 08:27:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTs-0002x8-70; Wed, 11 Nov 2020 08:27:52 +0000
Received: by outflank-mailman (input) for mailman id 24370;
 Wed, 11 Nov 2020 08:27:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTr-0002dF-HG
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1d9a267-22c6-41c1-b11d-53cc22540c4e;
 Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT6-0007Zx-Ef; Wed, 11 Nov 2020 08:27:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTr-0002dF-HG
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:51 +0000
X-Inumbo-ID: e1d9a267-22c6-41c1-b11d-53cc22540c4e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e1d9a267-22c6-41c1-b11d-53cc22540c4e;
	Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=5qPdl0lgCEFGJP5lmM9cIt5xVilmdrR4jnFYjPe6Rv8=; b=Ls58rLr6Jmuq6v/NWUCE8TiQve
	QBjPwhQnxk5CSZDABDmV5Iyu5nLZjZdDmarW2RJ3+wQ5CV5i02GTCtgGza5/Yikfz5DAuG4zguCCG
	EgI2R68wska0Fvawliu6ZPQh4vTvOEmDa4t/Qtue0sps/HKOLz+4l8CHVpEuyJgxKspc63ZIHdpKU
	Uq3vr9AJQ5geGGAUVhvBJa5nv2zB+UeNiCzfSwqeiFVJbNutC+jduzyz0hsShH64P+ui1tcelTUh9
	yk7kRWHvmVGXLbg/oeCZFpx13CQpxXc9aqGPjNAocrsQQUXFrFu2FbaPQL06RNU64LcD6XqJk4iCo
	SOHcJEpw==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT6-0007Zx-Ef; Wed, 11 Nov 2020 08:27:04 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 04/24] sd: update the bdev size in sd_revalidate_disk
Date: Wed, 11 Nov 2020 09:26:38 +0100
Message-Id: <20201111082658.3401686-5-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

This avoids the extra call to revalidate_disk_size in sd_rescan, and
is otherwise a no-op because either the size did not change or we are
in the probe path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
---
 drivers/scsi/sd.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 656bcf4940d6d1..4a34dd5b153196 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1750,10 +1750,8 @@ static int sd_sync_cache(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr)
 static void sd_rescan(struct device *dev)
 {
 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
-	int ret;
 
-	ret = sd_revalidate_disk(sdkp->disk);
-	revalidate_disk_size(sdkp->disk, ret == 0);
+	sd_revalidate_disk(sdkp->disk);
 }
 
 static int sd_ioctl(struct block_device *bdev, fmode_t mode,
@@ -3266,7 +3264,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	sdkp->first_scan = 0;
 
 	set_capacity_revalidate_and_notify(disk,
-		logical_to_sectors(sdp, sdkp->capacity), false);
+		logical_to_sectors(sdp, sdkp->capacity), true);
 	sd_config_write_same(sdkp);
 	kfree(buffer);
 
@@ -3276,7 +3274,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * capacity to 0.
 	 */
 	if (sd_zbc_revalidate_zones(sdkp))
-		set_capacity_revalidate_and_notify(disk, 0, false);
+		set_capacity_revalidate_and_notify(disk, 0, true);
 
  out:
 	return 0;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:27:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:27:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24373.51619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTx-00033N-Mg; Wed, 11 Nov 2020 08:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24373.51619; Wed, 11 Nov 2020 08:27:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclTx-00033B-Iv; Wed, 11 Nov 2020 08:27:57 +0000
Received: by outflank-mailman (input) for mailman id 24373;
 Wed, 11 Nov 2020 08:27:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclTw-0002dF-HY
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d3112db-b4f2-48cc-9d40-273a2ad080a5;
 Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT3-0007Ze-1h; Wed, 11 Nov 2020 08:27:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclTw-0002dF-HY
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:27:56 +0000
X-Inumbo-ID: 3d3112db-b4f2-48cc-9d40-273a2ad080a5
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3d3112db-b4f2-48cc-9d40-273a2ad080a5;
	Wed, 11 Nov 2020 08:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=stmwFAlkeYayJwqxLAzxAb5snaHT5ewOrHfW4MD4mwk=; b=WGdyaw45qmIaK2SmatQbzEUT8g
	sEQj1d8rAjkxLN2VkCVTmqZPll/LHGpxSV/Ir/IILaWgwYuFt2ICDMLJQCEBXEhKh5VnaPge+eYYF
	IOkQ1MFG821grDHDC93JDllTw4BY36AXV/yFqmryZ3vHvbukA3hGqUm74A8voHvu45uxjEGSyanL7
	IphCk+rTO/osj709X1NFm11SrN6uOCMdFEiCn4af1goiKlpL+OEwTfsYuGysBGdA57IRinYRl+Iz0
	iyGlhXels8NLzlL13cFL+YI5aSIARz+l1EkZoUoEk2NSPyid+0DXdQDDf19mhfxK6pCa6I7Rf7UkB
	L8AvM4RA==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT3-0007Ze-1h; Wed, 11 Nov 2020 08:27:01 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 01/24] block: remove the call to __invalidate_device in check_disk_size_change
Date: Wed, 11 Nov 2020 09:26:35 +0100
Message-Id: <20201111082658.3401686-2-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

__invalidate_device without the kill_dirty parameter set just
invalidates various clean entries in caches, which doesn't really help
us with anything, but can cause all kinds of horrible lock ordering
problems due to how it calls into the file system.  The only reason
this hasn't been a major issue is that so many people use partitions,
for which no invalidation was performed anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9e84b1928b9401..66ebf594c97f47 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1334,12 +1334,6 @@ static void check_disk_size_change(struct gendisk *disk,
 		i_size_write(bdev->bd_inode, disk_size);
 	}
 	spin_unlock(&bdev->bd_size_lock);
-
-	if (bdev_size > disk_size) {
-		if (__invalidate_device(bdev, false))
-			pr_warn("VFS: busy inodes on resized disk %s\n",
-				disk->disk_name);
-	}
 }
 
 /**
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24380.51631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclU2-00039z-Vl; Wed, 11 Nov 2020 08:28:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24380.51631; Wed, 11 Nov 2020 08:28:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclU2-00039q-RP; Wed, 11 Nov 2020 08:28:02 +0000
Received: by outflank-mailman (input) for mailman id 24380;
 Wed, 11 Nov 2020 08:28:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclU1-0002dF-Hc
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd2c7935-bf16-4057-943c-e01baae3032f;
 Wed, 11 Nov 2020 08:27:27 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclT8-0007aM-Pa; Wed, 11 Nov 2020 08:27:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclU1-0002dF-Hc
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:01 +0000
X-Inumbo-ID: bd2c7935-bf16-4057-943c-e01baae3032f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bd2c7935-bf16-4057-943c-e01baae3032f;
	Wed, 11 Nov 2020 08:27:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=msAxRVgou3+s48HjnTxbv/pHmvKh+OTllxGOwTa9PDs=; b=HtE3Jf5Vbg9BUyHuOP4LioCkGM
	D5RSbkrHUpDFKIQjTw/EK4TzE+8HC7HKkeSRDQ5TzvYLeD176nt7vURqsS/VEBAOA0ZGVp2Hfk4BA
	FS2uNGpdT5zytQL6JfJub2kZpw2LjaJ9Qgw8NxRTfmPK2IU9L78Jw1VK21pEcdjuzr8xdUwVC1AG8
	FZzij4ATCs/rSb1EzfUVneXPwEq9nYX5eNo+i9hTfvU1x/iEwjUiJQP7bXoR4lZCu9eKsU3Nd9Bq0
	Kq770Jj2OBzqFQPHPMIVFh4h4EqCewYZ87onZqxTquG/UH1fcZ9aZrou8dZ3Mf5neqm7MQBxdmEx8
	6E4xnYtg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclT8-0007aM-Pa; Wed, 11 Nov 2020 08:27:06 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 06/24] block: add a return value to set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:40 +0100
Message-Id: <20201111082658.3401686-7-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Return whether the function ended up sending a uevent.
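As a rough standalone sketch of the notification rule this return value
reports (the struct and function names below are made up for illustration,
not the kernel's gendisk API): a RESIZE uevent is only worth sending when
the capacity actually changes and neither the old nor the new value is
zero.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for a gendisk; capacity is in 512-byte sectors. */
struct fake_disk {
	unsigned long long capacity;
};

/* Mirrors the decision logic: update the capacity, then report whether
 * a RESIZE uevent would have been emitted.  printf() stands in for
 * kobject_uevent_env(). */
static bool set_capacity_and_notify_sketch(struct fake_disk *disk,
					   unsigned long long size)
{
	unsigned long long capacity = disk->capacity;

	disk->capacity = size;
	if (size == capacity || size == 0 || capacity == 0)
		return false;	/* unchanged, or a zero endpoint: no uevent */
	printf("KOBJ_CHANGE RESIZE=1\n");
	return true;
}
```

Callers that previously had to track the old size themselves can now just
branch on the boolean.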

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c         | 7 +++++--
 include/linux/genhd.h | 2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index d8d9d6c1c916e1..8c350fecfe8bfe 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -47,9 +47,9 @@ static void disk_release_events(struct gendisk *disk);
 
 /*
  * Set disk capacity and notify if the size is not currently zero and will not
- * be set to zero.
+ * be set to zero.  Returns true if a uevent was sent, otherwise false.
  */
-void set_capacity_and_notify(struct gendisk *disk, sector_t size)
+bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
 {
 	sector_t capacity = get_capacity(disk);
 
@@ -60,7 +60,10 @@ void set_capacity_and_notify(struct gendisk *disk, sector_t size)
 		char *envp[] = { "RESIZE=1", NULL };
 
 		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
+		return true;
 	}
+
+	return false;
 }
 EXPORT_SYMBOL_GPL(set_capacity_and_notify);
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 596f31b5a3e133..4b22bfd9336e1a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -315,7 +315,7 @@ static inline int get_disk_ro(struct gendisk *disk)
 extern void disk_block_events(struct gendisk *disk);
 extern void disk_unblock_events(struct gendisk *disk);
 extern void disk_flush_events(struct gendisk *disk, unsigned int mask);
-void set_capacity_and_notify(struct gendisk *disk, sector_t size);
+bool set_capacity_and_notify(struct gendisk *disk, sector_t size);
 
 /* drivers/char/random.c */
 extern void add_disk_randomness(struct gendisk *disk) __latent_entropy;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24384.51643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclU7-0003Fm-Bd; Wed, 11 Nov 2020 08:28:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24384.51643; Wed, 11 Nov 2020 08:28:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclU7-0003Fc-6d; Wed, 11 Nov 2020 08:28:07 +0000
Received: by outflank-mailman (input) for mailman id 24384;
 Wed, 11 Nov 2020 08:28:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclU6-0002dF-Hr
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b393bd16-cbd1-42b7-81a8-8567ef6fbc9d;
 Wed, 11 Nov 2020 08:27:27 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTC-0007ay-Br; Wed, 11 Nov 2020 08:27:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclU6-0002dF-Hr
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:06 +0000
X-Inumbo-ID: b393bd16-cbd1-42b7-81a8-8567ef6fbc9d
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b393bd16-cbd1-42b7-81a8-8567ef6fbc9d;
	Wed, 11 Nov 2020 08:27:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=cYLQe7HGubqAnJ/tUfc/v2PvIuWje8/yv3ctpiDBjdw=; b=Dx9jgyUlGTkZ3ko/6qDjgaYNEL
	FCotyOTHxSBkohOv1KlRMKIA/vk+h6ba0YSuNKfdgKwITSY3WbFIeiaXAvhkT7vVv7+9378vhN1Ij
	DelopFtwZbjUPDOsV1O+z/w+CEUOSMJQ/W7sjff+rwqTAw4uvMZcObE1/BARoib4CyM50iogDhaMr
	fs91RYCyjpQx5tslTYTBBmHAMwFytBX3sgtylRwNvkoLbr0kjqr6hZmnMXYJ86f2xcMPVY92BWXcm
	/Y26861aP/1IYIXo/3qkKNLEMc+J9Op78G/hwC8/OBs5SfP+Tbx8gL7P+DIZnEwkOoqeVtoBklWu1
	j5w6NGdg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTC-0007ay-Br; Wed, 11 Nov 2020 08:27:10 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 09/24] nbd: refactor size updates
Date: Wed, 11 Nov 2020 09:26:43 +0100
Message-Id: <20201111082658.3401686-10-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Merge nbd_size_set and nbd_size_update into a single function that also
updates the nbd_config fields.  This new function takes the device size
in bytes as the first argument, and the blocksize as the second argument,
simplifying the calculations required in most callers.
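The simplification comes from centralizing the byte-to-sector conversion:
callers pass a size in bytes and the helper shifts by 9 (2^9 == 512) once,
instead of each call site multiplying a blocksize by a block count.  A
trivial sketch of that conversion (a hypothetical helper, not the kernel's
own):

```c
#include <stdint.h>

/* Convert a device size in bytes to 512-byte sectors, as the
 * "bytesize >> 9" expressions in nbd_set_size() do.  Bytes below a
 * full sector are truncated, matching the shift's semantics. */
static inline uint64_t bytes_to_sectors(uint64_t bytesize)
{
	return bytesize >> 9;	/* same as bytesize / 512 */
}
```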

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 44 ++++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 58b7090dcbd832..eb8a5da48ad75a 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,28 +296,30 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+		loff_t blksize)
 {
-	struct nbd_config *config = nbd->config;
-	sector_t nr_sectors = config->bytesize >> 9;
 	struct block_device *bdev;
 
+	nbd->config->bytesize = bytesize;
+	nbd->config->blksize = blksize;
+
 	if (!nbd->task_recv)
 		return;
 
-	if (config->flags & NBD_FLAG_SEND_TRIM) {
-		nbd->disk->queue->limits.discard_granularity = config->blksize;
-		nbd->disk->queue->limits.discard_alignment = config->blksize;
+	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
+		nbd->disk->queue->limits.discard_granularity = blksize;
+		nbd->disk->queue->limits.discard_alignment = blksize;
 		blk_queue_max_discard_sectors(nbd->disk->queue, UINT_MAX);
 	}
-	blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
-	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+	blk_queue_logical_block_size(nbd->disk->queue, blksize);
+	blk_queue_physical_block_size(nbd->disk->queue, blksize);
 
-	set_capacity(nbd->disk, nr_sectors);
+	set_capacity(nbd->disk, bytesize >> 9);
 	bdev = bdget_disk(nbd->disk, 0);
 	if (bdev) {
 		if (bdev->bd_disk)
-			bd_set_nr_sectors(bdev, nr_sectors);
+			bd_set_nr_sectors(bdev, bytesize >> 9);
 		else
 			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
 		bdput(bdev);
@@ -325,15 +327,6 @@ static void nbd_size_update(struct nbd_device *nbd)
 	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
 }
 
-static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
-			 loff_t nr_blocks)
-{
-	struct nbd_config *config = nbd->config;
-	config->blksize = blocksize;
-	config->bytesize = blocksize * nr_blocks;
-	nbd_size_update(nbd);
-}
-
 static void nbd_complete_rq(struct request *req)
 {
 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
@@ -1311,7 +1304,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd);
+	nbd_set_size(nbd, config->bytesize, config->blksize);
 	return error;
 }
 
@@ -1390,15 +1383,14 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 			arg = NBD_DEF_BLKSIZE;
 		if (!nbd_is_valid_blksize(arg))
 			return -EINVAL;
-		nbd_size_set(nbd, arg,
-			     div_s64(config->bytesize, arg));
+		nbd_set_size(nbd, config->bytesize, arg);
 		return 0;
 	case NBD_SET_SIZE:
-		nbd_size_set(nbd, config->blksize,
-			     div_s64(arg, config->blksize));
+		nbd_set_size(nbd, arg, config->blksize);
 		return 0;
 	case NBD_SET_SIZE_BLOCKS:
-		nbd_size_set(nbd, config->blksize, arg);
+		nbd_set_size(nbd, arg * config->blksize,
+			     config->blksize);
 		return 0;
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
@@ -1827,7 +1819,7 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	}
 
 	if (bytes != config->bytesize || bsize != config->blksize)
-		nbd_size_set(nbd, bsize, div64_u64(bytes, bsize));
+		nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:28:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:28:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24390.51654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclUC-0003NY-Mb; Wed, 11 Nov 2020 08:28:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24390.51654; Wed, 11 Nov 2020 08:28:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclUC-0003NO-I8; Wed, 11 Nov 2020 08:28:12 +0000
Received: by outflank-mailman (input) for mailman id 24390;
 Wed, 11 Nov 2020 08:28:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUB-0002dF-I2
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3eb9d60-f4a1-4bfb-b144-892c9bd479f3;
 Wed, 11 Nov 2020 08:27:28 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTB-0007aj-13; Wed, 11 Nov 2020 08:27:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUB-0002dF-I2
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:11 +0000
X-Inumbo-ID: b3eb9d60-f4a1-4bfb-b144-892c9bd479f3
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b3eb9d60-f4a1-4bfb-b144-892c9bd479f3;
	Wed, 11 Nov 2020 08:27:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=sleWhQz8pOnJuBnS8tyNBOSxS3LFjGP3TCWYI+Y9Xwo=; b=SnGVq269/UTvePhazhsGEqe1g4
	+Sdxfj0JE1XKc8hE5k0k3fff848q4NtXnaEE9VLg/C2I0lFuBfyREt4J81h7TQHcYmvdbPYYZExEc
	rvsv8rKBFRclcRvxaov26S6VkAKG56QaJUn2wD/mTm4/DYl8qdkAvKZ4bO76kHjl21mZEwgM+L3W8
	B0dk20R4wCMbx4pzaYw5R6GsJ3eFFwEWgkikMnu+2X9+jrMgSEIR2I0zoFfL2v0SAjEba5l3GfvGJ
	VVKXne4B7f2m79dzqfoTvI53FLRJZJnCKRBE5HBGzRzPBkuT4Wg+DmexI9Xj8bzcPW8BIeky07GCR
	w5/puTXA==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTB-0007aj-13; Wed, 11 Nov 2020 08:27:09 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 08/24] nbd: move the task_recv check into nbd_size_update
Date: Wed, 11 Nov 2020 09:26:42 +0100
Message-Id: <20201111082658.3401686-9-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

nbd_size_update is about to acquire a few more callers, so lift the check
into the function.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index f618688a196654..58b7090dcbd832 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -299,8 +299,11 @@ static void nbd_size_clear(struct nbd_device *nbd)
 static void nbd_size_update(struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
-	struct block_device *bdev = bdget_disk(nbd->disk, 0);
 	sector_t nr_sectors = config->bytesize >> 9;
+	struct block_device *bdev;
+
+	if (!nbd->task_recv)
+		return;
 
 	if (config->flags & NBD_FLAG_SEND_TRIM) {
 		nbd->disk->queue->limits.discard_granularity = config->blksize;
@@ -309,7 +312,9 @@ static void nbd_size_update(struct nbd_device *nbd)
 	}
 	blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
 	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+
 	set_capacity(nbd->disk, nr_sectors);
+	bdev = bdget_disk(nbd->disk, 0);
 	if (bdev) {
 		if (bdev->bd_disk)
 			bd_set_nr_sectors(bdev, nr_sectors);
@@ -326,8 +331,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	struct nbd_config *config = nbd->config;
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
-	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd);
+	nbd_size_update(nbd);
 }
 
 static void nbd_complete_rq(struct request *req)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24416.51672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVe-0003or-C3; Wed, 11 Nov 2020 08:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24416.51672; Wed, 11 Nov 2020 08:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVe-0003od-7b; Wed, 11 Nov 2020 08:29:42 +0000
Received: by outflank-mailman (input) for mailman id 24416;
 Wed, 11 Nov 2020 08:29:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUp-0002dF-JS
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8891f72b-1465-4c5b-b8d1-26dbe9cb2862;
 Wed, 11 Nov 2020 08:27:40 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTN-0007dE-Rp; Wed, 11 Nov 2020 08:27:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUp-0002dF-JS
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:51 +0000
X-Inumbo-ID: 8891f72b-1465-4c5b-b8d1-26dbe9cb2862
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8891f72b-1465-4c5b-b8d1-26dbe9cb2862;
	Wed, 11 Nov 2020 08:27:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=WnN/7srg5pLReznCKKgcsF3AnXJUcZlbQO74sw0EfcM=; b=Yn9ZJJkFPKY+e/NLCTNecyPGTK
	il0irebR1t1HO/8eJQn4fbMXYeP8mJXUek8GZzwX4pJgWyXrPl2bszTSxZ11r9soxMYwduEgujiGA
	X5SuddMPNCCFAoWdlddCDN/5gXkyf48GJYtL9OOOBNVOfNJhfwKRc3vMXYv39ISngvdGZZ9qpscdU
	AAJZ8mdncp//thuUaUj0lxPvvkPCJyExBd5wbS0H7c6UR2fYb5ZxUpZQH4S5OO2eDLwnVsXgO8EC4
	/1K1AFxavSBN/H461PJ4OR0/zQpmT35t91FrNDjKal2wDZL90UO4VAADrInrBUoiXiOYnHJ5kHoNj
	G44xLvFQ==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTN-0007dE-Rp; Wed, 11 Nov 2020 08:27:22 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 17/24] rbd: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:51 +0100
Message-Id: <20201111082658.3401686-18-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/rbd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index f84128abade319..b7a194ffda55b4 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4920,8 +4920,7 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
 	    !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
 		size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
 		dout("setting size to %llu sectors", (unsigned long long)size);
-		set_capacity(rbd_dev->disk, size);
-		revalidate_disk_size(rbd_dev->disk, true);
+		set_capacity_and_notify(rbd_dev->disk, size);
 	}
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24417.51683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVe-0003pZ-SJ; Wed, 11 Nov 2020 08:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24417.51683; Wed, 11 Nov 2020 08:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVe-0003pG-Hk; Wed, 11 Nov 2020 08:29:42 +0000
Received: by outflank-mailman (input) for mailman id 24417;
 Wed, 11 Nov 2020 08:29:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclV9-0002dF-K5
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d5f442b-70e7-48e9-8b55-ee48768ad375;
 Wed, 11 Nov 2020 08:27:44 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTS-0007eJ-94; Wed, 11 Nov 2020 08:27:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclV9-0002dF-K5
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:11 +0000
X-Inumbo-ID: 6d5f442b-70e7-48e9-8b55-ee48768ad375
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6d5f442b-70e7-48e9-8b55-ee48768ad375;
	Wed, 11 Nov 2020 08:27:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=WS/5HaJpXF8+Aj3VZ2GQ3e6HnWhXjc79y2mDNxe6L1Y=; b=f4V0HAWekMYcBSj6o8XWWUTCFm
	T5BclxNb2e1/yCArOCAZtaHpaI5MIvAMBAlmQ09k0Vs5C7H5osSMi/yyi2ooqWwolZX/hl/HmmgsB
	D+jP8pA5ScU2KctsL6oRx5X33ddFNfX2rqUIjTaXwq9GiEa355k5oV92nC5tQ+6+9kGxVSh/UNgqs
	aOBS/lcTMjIHcJ1f2lfO4I99IxF6tvVGFNOs4sU6QXK89CfJsoeXu3Sdp0W7hScHrIw+mif8uzQfL
	3ybKz/y66nVaw99jpra4/f6IwWze87BLSvE7QXpx7MYzrjqlbJlkDiYUTnrhStUMf69bg/OhD0sEJ
	FL2tuymg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTS-0007eJ-94; Wed, 11 Nov 2020 08:27:26 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 20/24] dm-raid: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:54 +0100
Message-Id: <20201111082658.3401686-21-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm-raid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 9c1f7c4de65b35..294f34d2d61bae 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -700,8 +700,7 @@ static void rs_set_capacity(struct raid_set *rs)
 {
 	struct gendisk *gendisk = dm_disk(dm_table_get_md(rs->ti->table));
 
-	set_capacity(gendisk, rs->md.array_sectors);
-	revalidate_disk_size(gendisk, true);
+	set_capacity_and_notify(gendisk, rs->md.array_sectors);
 }
 
 /*
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24415.51667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVe-0003oO-2V; Wed, 11 Nov 2020 08:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24415.51667; Wed, 11 Nov 2020 08:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVd-0003oH-Vb; Wed, 11 Nov 2020 08:29:41 +0000
Received: by outflank-mailman (input) for mailman id 24415;
 Wed, 11 Nov 2020 08:29:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUL-0002dF-Id
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b00f643d-29c9-436a-94eb-402ae29bce23;
 Wed, 11 Nov 2020 08:27:28 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTE-0007bJ-Mw; Wed, 11 Nov 2020 08:27:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUL-0002dF-Id
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:21 +0000
X-Inumbo-ID: b00f643d-29c9-436a-94eb-402ae29bce23
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b00f643d-29c9-436a-94eb-402ae29bce23;
	Wed, 11 Nov 2020 08:27:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=v2gXXjqIndtMF2Cf5P694LBcUky2akZ2+CMnUKb8IcU=; b=h/ZYOi0vPQIdcMW1X52epW5iIP
	h9Zyh9xPBuvnBtDR9rJcVsymjFWgk2TUgP4FFoLLdSCTXe7ntGcJ5XG69rsQgx37axiExy9vtBsT1
	WBPGFF1X/JoJyNnoKqxYrl7ac50v8NuDAsyON6Auh+N53il+fFZBpHHFKORtkmvUwR/cJitnZpnUl
	qpV6JtDzWquk9bGQKBeU5PsoxpY5zIRfl9bEs8/f166ebvAy9I0T0F3CoN03bOJTOZl0KUK74BFDy
	p+OLpVJpLwXYsam4cN6EaXZ7O9sv6RwyX3Fruby3X96wcAyNGRRY8EPu1u1sug1AC5oK4U/Qi4bOK
	QmB+Zi2A==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTE-0007bJ-Mw; Wed, 11 Nov 2020 08:27:13 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 11/24] nbd: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:45 +0100
Message-Id: <20201111082658.3401686-12-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to update the disk and block device sizes and
send a RESIZE uevent to userspace.  Note that blktests relies on uevents
being sent even for updates that do not change the device size, so the
explicit kobject_uevent call remains for that case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 327060e01ad58e..a6f51934391edb 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -299,8 +299,6 @@ static void nbd_size_clear(struct nbd_device *nbd)
 static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
-	struct block_device *bdev;
-
 	if (!blksize)
 		blksize = NBD_DEF_BLKSIZE;
 	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
@@ -320,16 +318,9 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 	blk_queue_logical_block_size(nbd->disk->queue, blksize);
 	blk_queue_physical_block_size(nbd->disk->queue, blksize);
 
-	set_capacity(nbd->disk, bytesize >> 9);
-	bdev = bdget_disk(nbd->disk, 0);
-	if (bdev) {
-		if (bdev->bd_disk)
-			bd_set_nr_sectors(bdev, bytesize >> 9);
-		else
-			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
-		bdput(bdev);
-	}
-	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+	set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
+	if (!set_capacity_and_notify(nbd->disk, bytesize >> 9))
+		kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24418.51703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVl-0003zf-9o; Wed, 11 Nov 2020 08:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24418.51703; Wed, 11 Nov 2020 08:29:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVl-0003zU-60; Wed, 11 Nov 2020 08:29:49 +0000
Received: by outflank-mailman (input) for mailman id 24418;
 Wed, 11 Nov 2020 08:29:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclV4-0002dF-Jx
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75d75fb8-b899-4531-a163-2f2e28c96bd6;
 Wed, 11 Nov 2020 08:27:43 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTT-0007ei-Ov; Wed, 11 Nov 2020 08:27:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclV4-0002dF-Jx
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:06 +0000
X-Inumbo-ID: 75d75fb8-b899-4531-a163-2f2e28c96bd6
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 75d75fb8-b899-4531-a163-2f2e28c96bd6;
	Wed, 11 Nov 2020 08:27:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=c57pfSGVIl7drxDQpyo1OuK6dxor1mcv+7J/1mq5sAE=; b=QHBfDWzXAKk1skY+lIGbw93ckc
	gKqhO8I1MtrdA0LPeGiDCiNED15Azy7tBvKn9a/NXdjgwUWOEXCSOtkTdoRLcIExCuz/KX9/OW7A0
	bwrrG0vavHuuIg73K4l6GtezLeYH07NZEzdRSXpJq8JeEltsUALPW2xV3YV/xy7BpOCwvxxf+U/q5
	GFEDIhUfCtm2C6IeQS+uxCKl4zMoR4QAV/xgxKk2J9Inb2CSTlyu2LwxeTFDfvBkuCj1sNBh2EO26
	M3kJUrKqmDYCtWbxJHee6dbwHQl9Imy4/bbjBDScuK54nYX+3bHWyilHFhYhNPvThld3SE2wmmiyY
	w61ZJVLQ==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTT-0007ei-Ov; Wed, 11 Nov 2020 08:27:28 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 21/24] md: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:55 +0100
Message-Id: <20201111082658.3401686-22-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
---
 drivers/md/md-cluster.c |  6 ++----
 drivers/md/md-linear.c  |  3 +--
 drivers/md/md.c         | 24 ++++++++++--------------
 3 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 4aaf4820b6f625..87442dc59f6ca3 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -581,8 +581,7 @@ static int process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
 		process_metadata_update(mddev, msg);
 		break;
 	case CHANGE_CAPACITY:
-		set_capacity(mddev->gendisk, mddev->array_sectors);
-		revalidate_disk_size(mddev->gendisk, true);
+		set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 		break;
 	case RESYNCING:
 		set_bit(MD_RESYNCING_REMOTE, &mddev->recovery);
@@ -1296,8 +1295,7 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 		if (ret)
 			pr_err("%s:%d: failed to send CHANGE_CAPACITY msg\n",
 			       __func__, __LINE__);
-		set_capacity(mddev->gendisk, mddev->array_sectors);
-		revalidate_disk_size(mddev->gendisk, true);
+		set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	} else {
 		/* revert to previous sectors */
 		ret = mddev->pers->resize(mddev, old_dev_sectors);
diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
index 5ab22069b5be9c..98f1b4b2bdcef8 100644
--- a/drivers/md/md-linear.c
+++ b/drivers/md/md-linear.c
@@ -200,9 +200,8 @@ static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
 		"copied raid_disks doesn't match mddev->raid_disks");
 	rcu_assign_pointer(mddev->private, newconf);
 	md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
-	set_capacity(mddev->gendisk, mddev->array_sectors);
+	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	mddev_resume(mddev);
-	revalidate_disk_size(mddev->gendisk, true);
 	kfree_rcu(oldconf, rcu);
 	return 0;
 }
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 98bac4f304ae26..32e375d50fee17 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5355,10 +5355,9 @@ array_size_store(struct mddev *mddev, const char *buf, size_t len)
 
 	if (!err) {
 		mddev->array_sectors = sectors;
-		if (mddev->pers) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
-		}
+		if (mddev->pers)
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 	}
 	mddev_unlock(mddev);
 	return err ?: len;
@@ -6107,8 +6106,7 @@ int do_md_run(struct mddev *mddev)
 	md_wakeup_thread(mddev->thread);
 	md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
 
-	set_capacity(mddev->gendisk, mddev->array_sectors);
-	revalidate_disk_size(mddev->gendisk, true);
+	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	clear_bit(MD_NOT_READY, &mddev->flags);
 	mddev->changed = 1;
 	kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
@@ -6423,10 +6421,9 @@ static int do_md_stop(struct mddev *mddev, int mode,
 			if (rdev->raid_disk >= 0)
 				sysfs_unlink_rdev(mddev, rdev);
 
-		set_capacity(disk, 0);
+		set_capacity_and_notify(disk, 0);
 		mutex_unlock(&mddev->open_mutex);
 		mddev->changed = 1;
-		revalidate_disk_size(disk, true);
 
 		if (mddev->ro)
 			mddev->ro = 0;
@@ -7257,8 +7254,8 @@ static int update_size(struct mddev *mddev, sector_t num_sectors)
 		if (mddev_is_clustered(mddev))
 			md_cluster_ops->update_size(mddev, old_dev_sectors);
 		else if (mddev->queue) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 		}
 	}
 	return rv;
@@ -9035,10 +9032,9 @@ void md_do_sync(struct md_thread *thread)
 		mddev_lock_nointr(mddev);
 		md_set_array_sectors(mddev, mddev->pers->size(mddev, 0, 0));
 		mddev_unlock(mddev);
-		if (!mddev_is_clustered(mddev)) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
-		}
+		if (!mddev_is_clustered(mddev))
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 	}
 
 	spin_lock(&mddev->lock);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24419.51709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVl-00040c-Q7; Wed, 11 Nov 2020 08:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24419.51709; Wed, 11 Nov 2020 08:29:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVl-000406-GE; Wed, 11 Nov 2020 08:29:49 +0000
Received: by outflank-mailman (input) for mailman id 24419;
 Wed, 11 Nov 2020 08:29:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclVY-0002dF-LF
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1cfb7a9c-a23b-4275-8c7d-eddb810445ce;
 Wed, 11 Nov 2020 08:27:48 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTY-0007gX-LB; Wed, 11 Nov 2020 08:27:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclVY-0002dF-LF
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:36 +0000
X-Inumbo-ID: 1cfb7a9c-a23b-4275-8c7d-eddb810445ce
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1cfb7a9c-a23b-4275-8c7d-eddb810445ce;
	Wed, 11 Nov 2020 08:27:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=qTvXuSKxmUERuRhycJyctBDf7GzVXaeOz2hEmG1jnCY=; b=StdRHq9eqGBzFFXpMKfN8ot2yd
	jPC1U+22bkuUiJFL3H0mb/9/1vlelGLQvG1hCH467eSdVW15dn8Pibm3EIxJwicffJfGSFU2WZAlL
	tM76FQA8Mhz7J0Dt012NqxGVof6CBWaiq+71e7SWBXxgz9PGD/HSjyzrq5laC3bHahSJjAdt/YgSg
	VK5FG4/KAjlBjCF1Mxk5ErE82Aw6rkXyhTwsy/FUzDELWaBKzEkjqbZ7oRMK+Qd8mrTJBHvgsXqEQ
	R94YnaXqfp4e3S0Qk0xduyFHl/C3X5yAr/DgMPQvmpUp8si6GBSW2z68gBmwcJiLLaIwE1WsQpe7C
	3Of9U5JA==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTY-0007gX-LB; Wed, 11 Nov 2020 08:27:33 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 24/24] block: unexport revalidate_disk_size
Date: Wed, 11 Nov 2020 09:26:58 +0100
Message-Id: <20201111082658.3401686-25-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

revalidate_disk_size is now only called from set_capacity_and_notify,
so drop the export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 66ebf594c97f47..d8664f5c1ff669 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1362,7 +1362,6 @@ void revalidate_disk_size(struct gendisk *disk, bool verbose)
 		bdput(bdev);
 	}
 }
-EXPORT_SYMBOL(revalidate_disk_size);
 
 void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24420.51717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVm-00041v-DB; Wed, 11 Nov 2020 08:29:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24420.51717; Wed, 11 Nov 2020 08:29:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVm-00041M-28; Wed, 11 Nov 2020 08:29:50 +0000
Received: by outflank-mailman (input) for mailman id 24420;
 Wed, 11 Nov 2020 08:29:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclVT-0002dF-Kw
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eee93955-29b4-48e0-b36c-4b024b419472;
 Wed, 11 Nov 2020 08:27:48 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTW-0007fT-TE; Wed, 11 Nov 2020 08:27:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclVT-0002dF-Kw
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:31 +0000
X-Inumbo-ID: eee93955-29b4-48e0-b36c-4b024b419472
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id eee93955-29b4-48e0-b36c-4b024b419472;
	Wed, 11 Nov 2020 08:27:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=0pOGuZ16rreJ8RZ4ivF8AiAM7803r01swtmMxrdYqs4=; b=RTU5UMhdpsjtChFCSIwnQCiNZ0
	nLMhYrquzojaCWKfYUYpIV8wG55z/9uW3YWLeoQ1rUIpnE//ou/YhU/DPlXAapy4YewrFNGqBo2FO
	NBMfZ9QZ5CbAIfIgfBRWxY3KHi6Jzjx2slAcrlK/u6FEmKnmHNFQ3/hYrKTa3Bv1kBZpb1GR/mJBU
	RrVIalCfa+Zm+i+qCLA8uu6mcb1MtochkntXC+ZhYnh8VC4mv7MvHqnbR+NuSytrJJQpp7KhMHXlF
	ySOGQCEIpF2KZxWJUEOmTknhZ1d00cwIeytUy1JVjpL77pAaI+ZcDeJgM9rthHxW+G1//Kr4qHv8B
	dlmVJQfQ==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTW-0007fT-TE; Wed, 11 Nov 2020 08:27:31 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 23/24] virtio-blk: remove a spurious call to revalidate_disk_size
Date: Wed, 11 Nov 2020 09:26:57 +0100
Message-Id: <20201111082658.3401686-24-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

revalidate_disk_size just updates the block device size from the disk
size.  Thus calling it from virtblk_update_cache_mode doesn't actually
do anything.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/block/virtio_blk.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 3e812b4c32e669..145606dc52db1e 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -598,7 +598,6 @@ static void virtblk_update_cache_mode(struct virtio_device *vdev)
 	struct virtio_blk *vblk = vdev->priv;
 
 	blk_queue_write_cache(vblk->disk->queue, writeback, false);
-	revalidate_disk_size(vblk->disk, true);
 }
 
 static const char *const virtblk_cache_types[] = {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24422.51738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVo-00046o-9g; Wed, 11 Nov 2020 08:29:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24422.51738; Wed, 11 Nov 2020 08:29:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVn-00046K-Qy; Wed, 11 Nov 2020 08:29:51 +0000
Received: by outflank-mailman (input) for mailman id 24422;
 Wed, 11 Nov 2020 08:29:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUV-0002dF-JE
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 221d66e4-11d5-4821-a163-918ee6d1d23e;
 Wed, 11 Nov 2020 08:27:32 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTI-0007c0-C4; Wed, 11 Nov 2020 08:27:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUV-0002dF-JE
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:31 +0000
X-Inumbo-ID: 221d66e4-11d5-4821-a163-918ee6d1d23e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 221d66e4-11d5-4821-a163-918ee6d1d23e;
	Wed, 11 Nov 2020 08:27:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=aTYccYixW8gbHYOBakp7ukQbj8m53SZz7oZSluFWVGc=; b=u/ozI6WBA8REfAwdRvhywExuNr
	JLlsRRFbzxUvhpbOhxUj8/yRTolc2kh0ts2NzCUCtvtIGjt7R+mL86s4iNKKy19DxPdRahEdhs7pR
	VWtl3HgZF75ovljLu1zA3T/5X+fFn3rmuS8w2OpsYFgzn7J0YuXm5T5t2THdt8scAYc5+kG47mtie
	yNOj6g1hD19a3zBv5h1AdD75GWTJEyW2t6w6Ar1OufkGiHlajaAgfjYMc64Hrs99tQQwSRtuKeGPk
	C900mkX+WRHnV1MWMEPhCqV0DdkzBlCFAXPcCILEVO5ocQBjNPNvC67NrkhDk+Rw1ZN2Zzateo5JG
	8ZYIrJAA==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTI-0007c0-C4; Wed, 11 Nov 2020 08:27:16 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 13/24] dm: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:47 +0100
Message-Id: <20201111082658.3401686-14-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c18fc25485186d..62ad44925e73ec 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1971,8 +1971,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
 	if (size != dm_get_size(md))
 		memset(&md->geometry, 0, sizeof(md->geometry));
 
-	set_capacity(md->disk, size);
-	bd_set_nr_sectors(md->bdev, size);
+	set_capacity_and_notify(md->disk, size);
 
 	dm_table_event_callback(t, event_callback, md);
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24423.51751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVp-0004B4-Vd; Wed, 11 Nov 2020 08:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24423.51751; Wed, 11 Nov 2020 08:29:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVp-0004AX-HJ; Wed, 11 Nov 2020 08:29:53 +0000
Received: by outflank-mailman (input) for mailman id 24423;
 Wed, 11 Nov 2020 08:29:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUQ-0002dF-Ia
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa1f05d0-9967-49ed-bbbe-693c00e77d6c;
 Wed, 11 Nov 2020 08:27:30 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTG-0007bi-W8; Wed, 11 Nov 2020 08:27:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUQ-0002dF-Ia
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:26 +0000
X-Inumbo-ID: fa1f05d0-9967-49ed-bbbe-693c00e77d6c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fa1f05d0-9967-49ed-bbbe-693c00e77d6c;
	Wed, 11 Nov 2020 08:27:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=bgbs5emPj5p7gTWcmnEuks79StD6TA5roIn47mScHH4=; b=fBt6QhZHrv17/Y1zEBZjou8nag
	/S9UF+dPlT5bEmXhZ0AkX3Des+z44TDZwhlLf8UZwm1bya/Tw1ihVWQ1eo0/wpa9jcmRthywUSjkC
	a5WCNVJjuCks8OG28CqM1W4KLfPlPhhxQn2JdcTB6we015wUZWKRrDnDkhahK+xL2QOV8g1e7VDFb
	R8OvQOpZICV1BZy/htkG1su/uosUSegQuLRvZZFjPgBFD8n7wOMK+yVOakCbK2S5b4ZCm7OCaoLMu
	rE0BnJF3Xca/1n6MGkXXTyg53AWbOj3IBct9gJs8ljT/56L365ZklV7NkghnJYSHb0UpMIe95v8XT
	ZTecf9FA==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTG-0007bi-W8; Wed, 11 Nov 2020 08:27:15 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 12/24] aoe: don't call set_capacity from irq context
Date: Wed, 11 Nov 2020 09:26:46 +0100
Message-Id: <20201111082658.3401686-13-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Updating the block device size from irq context can lead to torn
writes of the 64-bit value, and prevents us from using normal
process context locking primitives to serialize access to the 64-bit
nr_sectors value.  Defer the set_capacity to the already existing
workqueue handler, where it can be merged with the update of the
block device size by using set_capacity_and_notify.  As an extra
bonus this also adds proper uevent notifications for the resize.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/aoe/aoecmd.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index 313f0b946fe2b3..ac720bdcd983e7 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -890,19 +890,13 @@ void
 aoecmd_sleepwork(struct work_struct *work)
 {
 	struct aoedev *d = container_of(work, struct aoedev, work);
-	struct block_device *bd;
-	u64 ssize;
 
 	if (d->flags & DEVFL_GDALLOC)
 		aoeblk_gdalloc(d);
 
 	if (d->flags & DEVFL_NEWSIZE) {
-		ssize = get_capacity(d->gd);
-		bd = bdget_disk(d->gd, 0);
-		if (bd) {
-			bd_set_nr_sectors(bd, ssize);
-			bdput(bd);
-		}
+		set_capacity_and_notify(d->gd, d->ssize);
+
 		spin_lock_irq(&d->lock);
 		d->flags |= DEVFL_UP;
 		d->flags &= ~DEVFL_NEWSIZE;
@@ -971,10 +965,9 @@ ataid_complete(struct aoedev *d, struct aoetgt *t, unsigned char *id)
 	d->geo.start = 0;
 	if (d->flags & (DEVFL_GDALLOC|DEVFL_NEWSIZE))
 		return;
-	if (d->gd != NULL) {
-		set_capacity(d->gd, ssize);
+	if (d->gd != NULL)
 		d->flags |= DEVFL_NEWSIZE;
-	} else
+	else
 		d->flags |= DEVFL_GDALLOC;
 	schedule_work(&d->work);
 }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24424.51755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVq-0004Cc-IS; Wed, 11 Nov 2020 08:29:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24424.51755; Wed, 11 Nov 2020 08:29:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVq-0004By-32; Wed, 11 Nov 2020 08:29:54 +0000
Received: by outflank-mailman (input) for mailman id 24424;
 Wed, 11 Nov 2020 08:29:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUa-0002dF-Js
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68fa3683-2cec-43fc-9f2a-0c43189e80e9;
 Wed, 11 Nov 2020 08:27:33 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTJ-0007cE-Po; Wed, 11 Nov 2020 08:27:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUa-0002dF-Js
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:36 +0000
X-Inumbo-ID: 68fa3683-2cec-43fc-9f2a-0c43189e80e9
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 68fa3683-2cec-43fc-9f2a-0c43189e80e9;
	Wed, 11 Nov 2020 08:27:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=lumJq5B2fyOOj2vqJR3cb92lXUnjwfkYYs/CdjVL7AE=; b=uDrlBIhMK6T1RnrpgvD/pKf6sG
	QV1Sp6TudbRl1mWifO0nZnLE7QfG2iw/OPgIhCSja3Blj4/cJ1bCXgyh4CGVU8X7dkAknvMZodl4b
	2ynoRHYB66K72oZd5MYRpOB7ObAcIAq2+HyK+1Hr2mJ91JL6kHn0bBGu17bSqY+tlRCoT0t+bpnhq
	rRarLOFqySYCDRZZr894GHJCn0WJl3x59Oz4tbjlcJPFDeufbZN7qJ20G6JYSqW+cnntCFsKfgQj/
	oMJ4UPFDhi1LuO4RUOy54Y+XqGoQnfQDGHb+DJOLJYIoYZZrMsBLtPkmlQJdVt/k6Enfm2GLnM3Zj
	Ld3eXzpQ==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTJ-0007cE-Po; Wed, 11 Nov 2020 08:27:18 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 14/24] pktcdvd: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:48 +0100
Message-Id: <20201111082658.3401686-15-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.
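As a rough userspace sketch of the helper's contract (the struct, the mock_ names, and the page constant below are illustrative stand-ins, not the actual kernel API): the helper updates the capacity and reports whether a resize uevent would be emitted, which only happens when the size actually changed and neither the old nor the new size is zero.

```c
#include <stdbool.h>

/* Illustrative stand-in for struct gendisk; the real kernel type and
 * the uevent machinery are considerably more involved. */
typedef unsigned long long sector_t;

struct mock_disk {
	sector_t capacity;	/* size in 512-byte sectors */
};

/*
 * Sketch of the set_capacity_and_notify() behaviour described above:
 * store the new capacity and return true when a RESIZE uevent would
 * be sent, i.e. when the size changed and neither the old nor the new
 * size is zero (a zero size marks a disk being torn down).
 */
static bool mock_set_capacity_and_notify(struct mock_disk *disk, sector_t size)
{
	sector_t old = disk->capacity;

	disk->capacity = size;
	if (size == old || size == 0 || old == 0)
		return false;
	return true;	/* caller would emit the uevent here */
}
```

A caller such as a resize path would simply replace its back-to-back set_capacity()/bd_set_nr_sectors() pair with one call to the helper.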

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/pktcdvd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 467dbd06b7cdb1..4326401cede445 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2130,8 +2130,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, fmode_t write)
 	}
 
 	set_capacity(pd->disk, lba << 2);
-	set_capacity(pd->bdev->bd_disk, lba << 2);
-	bd_set_nr_sectors(pd->bdev, lba << 2);
+	set_capacity_and_notify(pd->bdev->bd_disk, lba << 2);
 
 	q = bdev_get_queue(pd->bdev);
 	if (write) {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24427.51775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVt-0004Lu-UP; Wed, 11 Nov 2020 08:29:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24427.51775; Wed, 11 Nov 2020 08:29:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVt-0004LH-FF; Wed, 11 Nov 2020 08:29:57 +0000
Received: by outflank-mailman (input) for mailman id 24427;
 Wed, 11 Nov 2020 08:29:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclVO-0002dF-Kn
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80988304-fd99-4571-9f7c-807a6f36633b;
 Wed, 11 Nov 2020 08:27:46 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTV-0007f7-7D; Wed, 11 Nov 2020 08:27:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclVO-0002dF-Kn
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:26 +0000
X-Inumbo-ID: 80988304-fd99-4571-9f7c-807a6f36633b
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 80988304-fd99-4571-9f7c-807a6f36633b;
	Wed, 11 Nov 2020 08:27:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=VpoGqglFmdO7rnCbfTvzOvXddLcpMeI5Oi6kPi7JzMI=; b=fh6dXrMtrp6rcPPLQiDw9sObnB
	efyUG413uhXCb/QlAb9v8YRflPAq5yVp8u+GdJ3bhDBmbClXQ1kCpqH+iSdq//kveTtc0aIkz9/t1
	NlOAJ4US+n4Craqy3YLcf0d/UIIueyp+te++vbLGVABk4rzZkClrL41F56aqypppQdeDo+E3XqxqJ
	i+R8ENt7qFBaf1eEpWEC2Ev9uhlpwv3fW3tkYO79IfCJ8e8m8V20ceNDASmk3zIUqgUWaiXo3hZ8t
	x6L5b2uRc0iF0V0qiQuq2BgZgpQ3K0E2zlRsdDc30ePD0TrZzD8SAnDEx2V9S0Ja3dbtv/5M2xPH2
	2ywmdMQw==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTV-0007f7-7D; Wed, 11 Nov 2020 08:27:29 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 22/24] md: remove a spurious call to revalidate_disk_size in update_size
Date: Wed, 11 Nov 2020 09:26:56 +0100
Message-Id: <20201111082658.3401686-23-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

None of the ->resize methods updates the disk size, so calling
revalidate_disk_size here won't do anything.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
---
 drivers/md/md-cluster.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 87442dc59f6ca3..35e2690c1803dd 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -1299,8 +1299,6 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 	} else {
 		/* revert to previous sectors */
 		ret = mddev->pers->resize(mddev, old_dev_sectors);
-		if (!ret)
-			revalidate_disk_size(mddev->gendisk, true);
 		ret = __sendmsg(cinfo, &cmsg);
 		if (ret)
 			pr_err("%s:%d: failed to send METADATA_UPDATED msg\n",
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:29:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24431.51786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVv-0004Qi-GS; Wed, 11 Nov 2020 08:29:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24431.51786; Wed, 11 Nov 2020 08:29:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVv-0004QJ-5q; Wed, 11 Nov 2020 08:29:59 +0000
Received: by outflank-mailman (input) for mailman id 24431;
 Wed, 11 Nov 2020 08:29:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUf-0002dF-J3
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35e22f0f-d65f-46c2-9d30-353efc961803;
 Wed, 11 Nov 2020 08:27:37 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTL-0007cU-2G; Wed, 11 Nov 2020 08:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUf-0002dF-J3
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:41 +0000
X-Inumbo-ID: 35e22f0f-d65f-46c2-9d30-353efc961803
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 35e22f0f-d65f-46c2-9d30-353efc961803;
	Wed, 11 Nov 2020 08:27:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=7eiDjQ8ZjozXig6+m+V2Nb2yhyFiCm1kEOIxzxYYgzo=; b=INZhS6jek7sGzfAIL6nzYXyeXs
	SsycI2Wi/bh4Pu9HbXbPFzDNdHlSwDosG0AcMwQMMm1UOxdqMl11eyvFrWr8b8z0IrJsEn0H42AYn
	qK9iedvHnlhvSRdKVleqXJmjQGFSSkaIdxmnABppoi7MGvBdw70K6g4OEqXr8kkf1cCEWQ716tM5I
	V+Aj8IGwwwfmE/a75KkaqAmHB6T6imPjEjCW+u7gRuoO7i2YL9fXBfMlWqCYSOkvaP4efehVbn86F
	I/pA5x8OW7h9U9vg1qyAB+518eTm0anOOLvxsYmd8D9ABdtIXycMKu8rH3KJ32rwjG+GfPfEUHYEs
	jyKDZgiw==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTL-0007cU-2G; Wed, 11 Nov 2020 08:27:19 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 15/24] nvme: use set_capacity_and_notify in nvme_set_queue_dying
Date: Wed, 11 Nov 2020 09:26:49 +0100
Message-Id: <20201111082658.3401686-16-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the block layer helper to update both the disk and block device
sizes.  Contrary to the name, no notification is sent in this case,
as a size of 0 is special-cased.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 445274b28518fb..cc771c70047a96 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -93,16 +93,6 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
 					   unsigned nsid);
 
-static void nvme_update_bdev_size(struct gendisk *disk)
-{
-	struct block_device *bdev = bdget_disk(disk, 0);
-
-	if (bdev) {
-		bd_set_nr_sectors(bdev, get_capacity(disk));
-		bdput(bdev);
-	}
-}
-
 /*
  * Prepare a queue for teardown.
  *
@@ -119,8 +109,7 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 	blk_set_queue_dying(ns->queue);
 	blk_mq_unquiesce_queue(ns->queue);
 
-	set_capacity(ns->disk, 0);
-	nvme_update_bdev_size(ns->disk);
+	set_capacity_and_notify(ns->disk, 0);
 }
 
 static void nvme_queue_scan(struct nvme_ctrl *ctrl)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:30:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:30:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24435.51798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVy-0004ZM-8x; Wed, 11 Nov 2020 08:30:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24435.51798; Wed, 11 Nov 2020 08:30:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVy-0004Ya-0R; Wed, 11 Nov 2020 08:30:02 +0000
Received: by outflank-mailman (input) for mailman id 24435;
 Wed, 11 Nov 2020 08:30:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUk-0002dF-JD
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c92b6225-f1bb-424b-9c4c-c50a5614609f;
 Wed, 11 Nov 2020 08:27:39 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTM-0007cn-Hp; Wed, 11 Nov 2020 08:27:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUk-0002dF-JD
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:46 +0000
X-Inumbo-ID: c92b6225-f1bb-424b-9c4c-c50a5614609f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c92b6225-f1bb-424b-9c4c-c50a5614609f;
	Wed, 11 Nov 2020 08:27:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=nVz8RkNtpZwARdADh/JLCUJA7/hL1Ru40zDyIK26ET0=; b=tinmFTQPUUV98QgHlYKWX8YU4L
	4qD5VD7H0dD3uFNSDokl4KoFGi+j9gbRq7IRLld2eCm6EcqPRS6TWJuVKJhRvlBT4k8G49ePxnn5y
	IJdwH0Tkv70a7xu934IYCUXQlwrqMdxTy/SOG5XZrYyyc8yRDIsMxf9cnhcVLrtzz5C+wb1IO24Zf
	AqNi1dVWI5Z0ckM3+Q2ktx3U+FU4QA8GqzJow7/V+uWu26C7iqg4HEhWS44W5/9cZSIyjuIofOa4K
	kaEY3fE06TS463cWbBOQ4b28ewRycc05JB8+7U97q79iJYL/Y9Nk16FoT28Ku03ykhf7QO82EmBlE
	Y3KW2tZQ==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTM-0007cn-Hp; Wed, 11 Nov 2020 08:27:20 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 16/24] drbd: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:50 +0100
Message-Id: <20201111082658.3401686-17-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ilya Dryomov <idryomov@gmail.com>
---
 drivers/block/drbd/drbd_main.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 65b95aef8dbc95..1c8c18b2a25f33 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2036,8 +2036,7 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size)
 {
 	char ppb[10];
 
-	set_capacity(device->vdisk, size);
-	revalidate_disk_size(device->vdisk, false);
+	set_capacity_and_notify(device->vdisk, size);
 
 	drbd_info(device, "size = %s (%llu KB)\n",
 		ppsize(ppb, size>>1), (unsigned long long)size>>1);
@@ -2068,8 +2067,7 @@ void drbd_device_cleanup(struct drbd_device *device)
 	}
 	D_ASSERT(device, first_peer_device(device)->connection->net_conf == NULL);
 
-	set_capacity(device->vdisk, 0);
-	revalidate_disk_size(device->vdisk, false);
+	set_capacity_and_notify(device->vdisk, 0);
 	if (device->bitmap) {
 		/* maybe never allocated. */
 		drbd_bm_resize(device, 0, 1);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:30:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:30:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24437.51805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVz-0004cl-Fh; Wed, 11 Nov 2020 08:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24437.51805; Wed, 11 Nov 2020 08:30:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclVy-0004cD-Su; Wed, 11 Nov 2020 08:30:02 +0000
Received: by outflank-mailman (input) for mailman id 24437;
 Wed, 11 Nov 2020 08:30:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUu-0002dF-KE
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 638e2d1b-5587-4b82-987d-ef092004841e;
 Wed, 11 Nov 2020 08:27:41 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTP-0007dc-EI; Wed, 11 Nov 2020 08:27:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUu-0002dF-KE
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:56 +0000
X-Inumbo-ID: 638e2d1b-5587-4b82-987d-ef092004841e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 638e2d1b-5587-4b82-987d-ef092004841e;
	Wed, 11 Nov 2020 08:27:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=PHuhBTY1ouZHYZght48HNcRnOQ9ETUZCJ1FQTXf/EwQ=; b=m5MWiiWzcpaSYXwS8kDVrHOGAw
	92iCzGTWlSpt7Br9adIXKJfa7+N+xudiux0bUtzBV0XOA8sCFw4A+F+21N6k6DSKovVJVpX0hyRne
	XeIBsiJ1g7Ruf7j/9oBvg5u8uE72Evjw5CNmDZkyoFT44EDwPVXjfy+qvbBMZwYJ4ZM30mXDzKACM
	sbMnAv+VpKqU/y2BICE63kA6Ex/Ct7GtegULS3KTE0zTRAvu/6wpo59jG/nz2JK3HbrnIKVuEf9X9
	L91CAfTn1sj5/yjS/WJVSnqinqVMqn7XYadu2gBkPI9KyrNPK+vEVaQZyACPmx4+EtdIlPpHqXZDG
	leis3YSg==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTP-0007dc-EI; Wed, 11 Nov 2020 08:27:23 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 18/24] rnbd: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:52 +0100
Message-Id: <20201111082658.3401686-19-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/rnbd/rnbd-clt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 8b2411ccbda97c..bb13d7dd195a08 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -100,8 +100,7 @@ static int rnbd_clt_change_capacity(struct rnbd_clt_dev *dev,
 	rnbd_clt_info(dev, "Device size changed from %zu to %zu sectors\n",
 		       dev->nsectors, new_nsectors);
 	dev->nsectors = new_nsectors;
-	set_capacity(dev->gd, dev->nsectors);
-	revalidate_disk_size(dev->gd, true);
+	set_capacity_and_notify(dev->gd, dev->nsectors);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:30:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24455.51823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclW8-0005EV-2e; Wed, 11 Nov 2020 08:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24455.51823; Wed, 11 Nov 2020 08:30:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclW7-0005Dm-T6; Wed, 11 Nov 2020 08:30:11 +0000
Received: by outflank-mailman (input) for mailman id 24455;
 Wed, 11 Nov 2020 08:30:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUG-0002dF-IE
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba0bac8c-b5c4-4394-97d4-91a256199013;
 Wed, 11 Nov 2020 08:27:28 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTD-0007b7-Eu; Wed, 11 Nov 2020 08:27:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUG-0002dF-IE
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:28:16 +0000
X-Inumbo-ID: ba0bac8c-b5c4-4394-97d4-91a256199013
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ba0bac8c-b5c4-4394-97d4-91a256199013;
	Wed, 11 Nov 2020 08:27:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=7BWzzM1ONnukKsMJYvFCtkh7Po4ojUIcRAoXEXZQpls=; b=RCXh8hHVKHOdlbTgZudBE0UgxQ
	OdZbQcCl+Yq46gbMhWR8GVkt7ot75syMIyMmRVs70SjQ1O8I3Qml7eCC8UgjmLw+eZKo3Y+wA2Jn3
	swdtiCKDiqLib1ChlnI43270zWz+HESqHy3CFg19CGNjMaZmfyCASYpR/6tWH4C8l/mB79/1BSWgs
	puiFVbzfVm1so5b0mSiZ9Ei1//wgG5uEkaKOW5zYLA6lZYmZLWg1vlPvHI0aYLKiQl40RWCE1w1Is
	2SNvkcNq6XAtQw8IczEREzWr5hvM924WJzBVkrt1lqmXha4587UsPlSaCJVXmHuHTxzWvnOewG4Qb
	h8NmpJFw==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTD-0007b7-Eu; Wed, 11 Nov 2020 08:27:11 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 10/24] nbd: validate the block size in nbd_set_size
Date: Wed, 11 Nov 2020 09:26:44 +0100
Message-Id: <20201111082658.3401686-11-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the validation of the block size from the callers into nbd_set_size.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 47 +++++++++++++++------------------------------
 1 file changed, 15 insertions(+), 32 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index eb8a5da48ad75a..327060e01ad58e 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,16 +296,21 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
 	struct block_device *bdev;
 
+	if (!blksize)
+		blksize = NBD_DEF_BLKSIZE;
+	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
+		return -EINVAL;
+
 	nbd->config->bytesize = bytesize;
 	nbd->config->blksize = blksize;
 
 	if (!nbd->task_recv)
-		return;
+		return 0;
 
 	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
 		nbd->disk->queue->limits.discard_granularity = blksize;
@@ -325,6 +330,7 @@ static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		bdput(bdev);
 	}
 	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+	return 0;
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1304,8 +1310,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_set_size(nbd, config->bytesize, config->blksize);
-	return error;
+	return nbd_set_size(nbd, config->bytesize, config->blksize);
 }
 
 static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
@@ -1347,14 +1352,6 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
 		nbd_config_put(nbd);
 }
 
-static bool nbd_is_valid_blksize(unsigned long blksize)
-{
-	if (!blksize || !is_power_of_2(blksize) || blksize < 512 ||
-	    blksize > PAGE_SIZE)
-		return false;
-	return true;
-}
-
 static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
 {
 	nbd->tag_set.timeout = timeout * HZ;
@@ -1379,19 +1376,12 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 	case NBD_SET_SOCK:
 		return nbd_add_socket(nbd, arg, false);
 	case NBD_SET_BLKSIZE:
-		if (!arg)
-			arg = NBD_DEF_BLKSIZE;
-		if (!nbd_is_valid_blksize(arg))
-			return -EINVAL;
-		nbd_set_size(nbd, config->bytesize, arg);
-		return 0;
+		return nbd_set_size(nbd, config->bytesize, arg);
 	case NBD_SET_SIZE:
-		nbd_set_size(nbd, arg, config->blksize);
-		return 0;
+		return nbd_set_size(nbd, arg, config->blksize);
 	case NBD_SET_SIZE_BLOCKS:
-		nbd_set_size(nbd, arg * config->blksize,
-			     config->blksize);
-		return 0;
+		return nbd_set_size(nbd, arg * config->blksize,
+				    config->blksize);
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
 		return 0;
@@ -1808,18 +1798,11 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	if (info->attrs[NBD_ATTR_SIZE_BYTES])
 		bytes = nla_get_u64(info->attrs[NBD_ATTR_SIZE_BYTES]);
 
-	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]) {
+	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES])
 		bsize = nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
-		if (!bsize)
-			bsize = NBD_DEF_BLKSIZE;
-		if (!nbd_is_valid_blksize(bsize)) {
-			printk(KERN_ERR "Invalid block size %llu\n", bsize);
-			return -EINVAL;
-		}
-	}
 
 	if (bytes != config->bytesize || bsize != config->blksize)
-		nbd_set_size(nbd, bytes, bsize);
+		return nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:30:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24456.51827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclW8-0005GD-HG; Wed, 11 Nov 2020 08:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24456.51827; Wed, 11 Nov 2020 08:30:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclW8-0005FW-AH; Wed, 11 Nov 2020 08:30:12 +0000
Received: by outflank-mailman (input) for mailman id 24456;
 Wed, 11 Nov 2020 08:30:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kclUz-0002dF-Jm
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bce83519-a651-44d3-9271-d9d667d58d90;
 Wed, 11 Nov 2020 08:27:41 +0000 (UTC)
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kclTQ-0007du-T4; Wed, 11 Nov 2020 08:27:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJD9=ER=casper.srs.infradead.org=batv+33c89f8a75624a8d62ce+6289+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kclUz-0002dF-Jm
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:29:01 +0000
X-Inumbo-ID: bce83519-a651-44d3-9271-d9d667d58d90
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bce83519-a651-44d3-9271-d9d667d58d90;
	Wed, 11 Nov 2020 08:27:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=4bS/eZqoJAQR/DSs9IGqi2g9LCY9L0WWDnWefDh2k5M=; b=nQ0hySiZUaY9fLZcgrMawm9KxL
	DbgEDe1kMMO1r2VbjG/Ha8Omw2jZ9OcL4L35p38O5YLec9s6gZFqZDsBvRcEaVBQf5ShLXgduUfPD
	87lEv7hQ4w6SMvk5725iCpwRkLYeP85Upi9ENHtST0lvS3ZK2A6pldL0+TZelBmo7KQDRUcRCGdYO
	HuuIuddxS5+s4wDf5RBRG5NdNwdHC1R5zqSzn4cXlzM+in9/z/RQvc4S+cvVNMv8ufLGQQWHoc4+c
	HTfKvbfu5a0af+lNakwx6fOj7Gy+7QKzkLBa496tldsKLleDl5gXZEux1D0rN8XatRpBkUJfWHfC2
	G0ns064g==;
Received: from [2001:4bb8:180:6600:bcde:334f:863c:27b8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kclTQ-0007du-T4; Wed, 11 Nov 2020 08:27:25 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 19/24] zram: use set_capacity_and_notify
Date: Wed, 11 Nov 2020 09:26:53 +0100
Message-Id: <20201111082658.3401686-20-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201111082658.3401686-1-hch@lst.de>
References: <20201111082658.3401686-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 1b697208d66157..6d15d51cee2b7e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1695,7 +1695,7 @@ static void zram_reset_device(struct zram *zram)
 	disksize = zram->disksize;
 	zram->disksize = 0;
 
-	set_capacity(zram->disk, 0);
+	set_capacity_and_notify(zram->disk, 0);
 	part_stat_set_all(&zram->disk->part0, 0);
 
 	up_write(&zram->init_lock);
@@ -1741,9 +1741,7 @@ static ssize_t disksize_store(struct device *dev,
 
 	zram->comp = comp;
 	zram->disksize = disksize;
-	set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
-
-	revalidate_disk_size(zram->disk, true);
+	set_capacity_and_notify(zram->disk, zram->disksize >> SECTOR_SHIFT);
 	up_write(&zram->init_lock);
 
 	return len;
@@ -1790,7 +1788,6 @@ static ssize_t reset_store(struct device *dev,
 	/* Make sure all the pending I/O are finished */
 	fsync_bdev(bdev);
 	zram_reset_device(zram);
-	revalidate_disk_size(zram->disk, true);
 	bdput(bdev);
 
 	mutex_lock(&bdev->bd_mutex);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 08:41:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 08:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24536.51849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclgt-0007I1-DS; Wed, 11 Nov 2020 08:41:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24536.51849; Wed, 11 Nov 2020 08:41:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kclgt-0007Hu-AR; Wed, 11 Nov 2020 08:41:19 +0000
Received: by outflank-mailman (input) for mailman id 24536;
 Wed, 11 Nov 2020 08:41:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kclgr-0007Ho-Og
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:41:17 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id daf066f2-9763-4215-8554-842d4a90ff37;
 Wed, 11 Nov 2020 08:41:16 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id s30so2101480lfc.4
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 00:41:16 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id m4sm158479ljg.137.2020.11.11.00.41.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Nov 2020 00:41:14 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kclgr-0007Ho-Og
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 08:41:17 +0000
X-Inumbo-ID: daf066f2-9763-4215-8554-842d4a90ff37
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id daf066f2-9763-4215-8554-842d4a90ff37;
	Wed, 11 Nov 2020 08:41:16 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id s30so2101480lfc.4
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 00:41:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=1p1YoNJhcui0Z5VPptfzLr+hDu0PusmRVdK0otCW7k8=;
        b=kr/h8xwb/xjyw/IWBfiguYjTJfOIEoCy7vdm5fxvxu89dKxUqrOFCYTpXyUyG54d70
         hv8+NCcPV0QlFIQFh5uxu2gMRysRjd3ri3KUqG4zqlvEUfKvLpZ4NxjO38Q6XZkenlrF
         ujdICmZocYDWeT3ODf5Gg22k5mfQzi5QPrXIDD58vFWMg6JHqwG1P2054JELYFVs/klq
         hpmNdVKuuNaUltI2uc8bO/Rnhwb1pbUmie9EGuccqDe6f31CmvUqkRLn0shtxop1Ztg7
         V1Y2dhdu6OQYCssGzaz0Aj3PQmnVTYDzSxf+4mOGmnr2yQt2dnF8BqfdDIf3b4SdzbLc
         rnnA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=1p1YoNJhcui0Z5VPptfzLr+hDu0PusmRVdK0otCW7k8=;
        b=umJStj9zUdV/stkt+JFaoZZqCFBIk8fcsCWgUaFIg8kfSmiRY0Lns0uDYrr/UMPnNz
         U6UaX3X62A36qBAovnXlL6BACFDuNUkCZ4vIW6Ha1UAYjWzN3bvQ8lsBxo9TMZl252M/
         6z1Nh/wVuvncduVqv6ErRZU7cr06xxy8iQhXlCUDfbinbxW6VhBqBwrbrjOIivqMUuep
         YOOtNFNFdycWPHr1TA08C2gG4rPM051ukPXZ7n5HjPBI7Rt23AuKF9vSxRwUsg3/RHmO
         DRu/TrCGcHtk44ReVOk3VjjMh5xaQeQX6DpQ+ocPBpJ1vXY0j2vHcfNhODHzd8PUl9SB
         D1rw==
X-Gm-Message-State: AOAM5335Zze4EQgbO3d+C4bWszVecR1p9BQAjbOFtYBnWRp08Yq29ifP
	HpeCtDPk2ZaSOtQveVkN5agf3DJkqRvXpg==
X-Google-Smtp-Source: ABdhPJx11/VEpWCkJL0WT5d3Fj6kLmKPnyKWioQwREy44p7HPWJvRDTTUYTqy2Bw+IZCdFyNsVfkFw==
X-Received: by 2002:a19:8487:: with SMTP id g129mr8761852lfd.285.1605084074763;
        Wed, 11 Nov 2020 00:41:14 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id m4sm158479ljg.137.2020.11.11.00.41.13
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 11 Nov 2020 00:41:14 -0800 (PST)
Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Jan Beulich <jbeulich@suse.com>
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
 <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
 <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
 <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <ed9defbe-b6bf-bd1f-cd88-64d1b0e135c1@gmail.com>
Date: Wed, 11 Nov 2020 10:41:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 11.11.20 10:08, Jan Beulich wrote:

Hi Jan

> On 10.11.2020 21:53, Oleksandr wrote:
>> On 20.10.20 13:51, Paul Durrant wrote:
>>
>> Hi Paul.
>>
>> Sorry for the late response.
>>
>>>> -----Original Message-----
>>>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>>>> Sent: 15 October 2020 17:44
>>>> To: xen-devel@lists.xenproject.org
>>>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano Stabellini <sstabellini@kernel.org>;
>>>> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
>>>> <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Paul Durrant
>>>> <paul@xen.org>; Julien Grall <julien.grall@arm.com>
>>>> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>>>
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> This patch introduces a helper whose main purpose is to check
>>>> whether a domain is using IOREQ server(s).
>>>>
>>>> On Arm the current benefit is to avoid calling handle_io_completion()
>>>> (which implies iterating over all possible IOREQ servers anyway)
>>>> on every return in leave_hypervisor_to_guest() if there are no active
>>>> servers for the particular domain.
>>>> Also this helper will be used by one of the subsequent patches on Arm.
>>>>
>>>> This involves adding an extra per-domain variable to store the count
>>>> of servers in use.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>
>>>> ---
>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>>> "Add support for Guest IO forwarding to a device emulator"
>>>>
>>>> Changes RFC -> V1:
>>>>      - new patch
>>>>
>>>> Changes V1 -> V2:
>>>>      - update patch description
>>>>      - guard helper with CONFIG_IOREQ_SERVER
>>>>      - remove "hvm" prefix
>>>>      - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>>>>      - put suitable ASSERT()s
>>>>      - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>>>>      - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
>>>> ---
>>>>    xen/arch/arm/traps.c    | 15 +++++++++------
>>>>    xen/common/ioreq.c      |  7 ++++++-
>>>>    xen/include/xen/ioreq.h | 14 ++++++++++++++
>>>>    xen/include/xen/sched.h |  1 +
>>>>    4 files changed, 30 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 507c095..a8f5fdf 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
>>>>        struct vcpu *v = current;
>>>>
>>>>    #ifdef CONFIG_IOREQ_SERVER
>>>> -    bool handled;
>>>> +    if ( domain_has_ioreq_server(v->domain) )
>>>> +    {
>>>> +        bool handled;
>>>>
>>>> -    local_irq_enable();
>>>> -    handled = handle_io_completion(v);
>>>> -    local_irq_disable();
>>>> +        local_irq_enable();
>>>> +        handled = handle_io_completion(v);
>>>> +        local_irq_disable();
>>>>
>>>> -    if ( !handled )
>>>> -        return true;
>>>> +        if ( !handled )
>>>> +            return true;
>>>> +    }
>>>>    #endif
>>>>
>>>>        if ( likely(!v->arch.need_flush_to_ram) )
>>>> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
>>>> index bcd4961..a72bc0e 100644
>>>> --- a/xen/common/ioreq.c
>>>> +++ b/xen/common/ioreq.c
>>>> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>>>>                                 struct ioreq_server *s)
>>>>    {
>>>>        ASSERT(id < MAX_NR_IOREQ_SERVERS);
>>>> -    ASSERT(!s || !d->ioreq_server.server[id]);
>>>> +    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
>>> That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?
>> ok, looks like it will work.
>>
>>
>>>     Paul
>>>
>>>>        d->ioreq_server.server[id] = s;
>>>> +
>>>> +    if ( s )
>>>> +        d->ioreq_server.nr_servers++;
>>>> +    else
>>>> +        d->ioreq_server.nr_servers--;
>>>>    }
>>>>
>>>>    #define GET_IOREQ_SERVER(d, id) \
>>>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>>>> index 7b03ab5..0679fef 100644
>>>> --- a/xen/include/xen/ioreq.h
>>>> +++ b/xen/include/xen/ioreq.h
>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>        uint8_t                bufioreq_handling;
>>>>    };
>>>>
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>>> +{
>>>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
>>>> +
>>> This seems like an odd place to put such an assertion.
>> I might miss something or interpreted incorrectly but these asserts are
>> the result of how I understood the review comment on previous version [1].
>>
>> I will copy a comment here for the convenience:
>> "This is safe only when d == current->domain and it's not paused,
>> or when they're distinct and d is paused. Otherwise the result is
>> stale before the caller can inspect it. This wants documenting by
>> at least a comment, but perhaps better by suitable ASSERT()s."
> The way his reply was worded, I think Paul was wondering about the
> place where you put the assertion, not what you actually assert.

Shall I put the assertion at the call sites of this helper instead?


>   
>
>>>> +    return d->ioreq_server.nr_servers;
>>>> +}
>>>> +#else
>>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>>> +{
>>>> +    return false;
>>>> +}
>>>> +#endif
>>>> +
>>> Can this be any more compact? E.g.
>>>
>>> return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;
>>>
>>> ?
>> I have got a compilation error this way (if CONFIG_IOREQ_SERVER is
>> disabled):
>>
>> ...xen/4.14.0+gitAUTOINC+ee22110219-r0/git/xen/include/xen/ioreq.h:62:48:
>> error: ‘const struct domain’ has no member named ‘ioreq_server’
>>        return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;
>>                                                   ^
>> as domain's ioreq_server struct is guarded by CONFIG_IOREQ_SERVER as well.
> The #ifdef is unavoidable here, I agree, but it should be inside
> the function's body. There's no need to duplicate the rest of it.


Got it, will do.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 09:56:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 09:56:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24571.51870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcmqx-0005ef-4r; Wed, 11 Nov 2020 09:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24571.51870; Wed, 11 Nov 2020 09:55:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcmqx-0005eY-1u; Wed, 11 Nov 2020 09:55:47 +0000
Received: by outflank-mailman (input) for mailman id 24571;
 Wed, 11 Nov 2020 09:55:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bkR8=ER=gmail.com=idryomov@srs-us1.protection.inumbo.net>)
 id 1kcmqw-0005eT-Ar
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 09:55:46 +0000
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78c4218d-4f87-4165-a964-dc4f714c7bb2;
 Wed, 11 Nov 2020 09:55:45 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id y17so1444714ilg.4
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 01:55:45 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=bkR8=ER=gmail.com=idryomov@srs-us1.protection.inumbo.net>)
	id 1kcmqw-0005eT-Ar
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 09:55:46 +0000
X-Inumbo-ID: 78c4218d-4f87-4165-a964-dc4f714c7bb2
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 78c4218d-4f87-4165-a964-dc4f714c7bb2;
	Wed, 11 Nov 2020 09:55:45 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id y17so1444714ilg.4
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 01:55:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=qfEf7xRtwJh6vel7ndE10SMbjF6zTEFOWV8RA9xu3eI=;
        b=Flu0kG/G8H5tmdzF22ns+x1T3WixC4FBs0RbA9YeGx20Pw3IL573gomPOrRSgLhidi
         jxm4xXdraWCqe23Uz/lTRF5Gzul+neu0ZW5SRlL8rLtuRaQOMWWnKLr+e1J5E2f/IB4R
         iaO9iEipyz2F49FzVbspMmUTamyYBNigOi7Hq45S5vTkNhsbMM69J9bWbbawzfmKvyaE
         prBgh+sprAAWuhDKQzcpJRm9c4LoRmjW0z3OGufy4OEk4vgw7impUerADJ8+P1l4Evs/
         gUeZGg6Qzu0m4oCwYhIr6na+Ay9Hq4ATHKFwzo1XQC1pmqP6NOhCJ7ijtefRPRR/Tb4r
         FFHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=qfEf7xRtwJh6vel7ndE10SMbjF6zTEFOWV8RA9xu3eI=;
        b=CXYE1MdLuUdSFrvEij2himZ1IWbumtTsROc58EoPTqpoPBNgOWyzSRxkWzVGbnNn8S
         fUMLwUXDKykaSDpPUgVIYiMAwXF959D8uiFc0NNv0Rmvtci19Ndfz649W5h1+5uPXmrs
         DuN7Zf2+EZ/YhqW7Iuf6wLwgru74TCxT/fs0TzHIkkphOphaoHrrW3/yC1qaFkmbq7vi
         L++MI1ILOj3X38BO/pWV3Ar1jplwWqmkpnpLRW40SptBO2sF1n7yCbG0kD8karTn6Q/6
         EI/MF/MM/7sIGYfjPcko//OEp1SBn1T1uoTqFNnWOoZ80pxaBWAd/3gAxDaspW6k0eKQ
         iagA==
X-Gm-Message-State: AOAM530OpRB8G73nvRWiXxVBCT+xG51FD1Tj/wtV+px1AbjjVGkFG8KK
	N+6BMKblwMBCA5+xtD7pWZa/YEh28wfohCFn5xo=
X-Google-Smtp-Source: ABdhPJy9ZPoku1YjH4P5V9sTrbXGNc0TDLR8E4ST1Mn4KA5kvu+HvwpbOUC5MO/5tCHlTeWvzQ658Y1GM8zrKYsupSY=
X-Received: by 2002:a92:1f43:: with SMTP id i64mr17516886ile.281.1605088545285;
 Wed, 11 Nov 2020 01:55:45 -0800 (PST)
MIME-Version: 1.0
References: <20201111082658.3401686-1-hch@lst.de> <20201111082658.3401686-18-hch@lst.de>
In-Reply-To: <20201111082658.3401686-18-hch@lst.de>
From: Ilya Dryomov <idryomov@gmail.com>
Date: Wed, 11 Nov 2020 10:55:45 +0100
Message-ID: <CAOi1vP-JjnNdAUqd9Gy6YdFgi8Ev4_Jt3zcB9DhAmdAvQhG7Eg@mail.gmail.com>
Subject: Re: [PATCH 17/24] rbd: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Jack Wang <jinpu.wang@cloud.ionos.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, dm-devel@redhat.com, 
	linux-block <linux-block@vger.kernel.org>, Lars Ellenberg <drbd-dev@lists.linbit.com>, 
	nbd@other.debian.org, Ceph Development <ceph-devel@vger.kernel.org>, 
	xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org, 
	linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, 
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 11, 2020 at 9:27 AM Christoph Hellwig <hch@lst.de> wrote:
>
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> ---
>  drivers/block/rbd.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index f84128abade319..b7a194ffda55b4 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -4920,8 +4920,7 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
>             !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
>                 size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
>                 dout("setting size to %llu sectors", (unsigned long long)size);
> -               set_capacity(rbd_dev->disk, size);
> -               revalidate_disk_size(rbd_dev->disk, true);
> +               set_capacity_and_notify(rbd_dev->disk, size);
>         }
>  }
>
> --
> 2.28.0
>

Hi Christoph,

The Acked-by is wrong here.  I acked this patch (17/24, rbd), and Jack
acked the next one (18/24, rnbd).

Thanks,

                Ilya


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 09:58:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 09:58:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24577.51883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcmtS-0005p2-Ig; Wed, 11 Nov 2020 09:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24577.51883; Wed, 11 Nov 2020 09:58:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcmtS-0005ov-FQ; Wed, 11 Nov 2020 09:58:22 +0000
Received: by outflank-mailman (input) for mailman id 24577;
 Wed, 11 Nov 2020 09:58:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcmtR-0005oq-OR
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 09:58:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 55b48ee9-b115-4306-a927-e410cc40e570;
 Wed, 11 Nov 2020 09:58:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcmtO-0007l4-LL; Wed, 11 Nov 2020 09:58:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcmtO-0004P7-CK; Wed, 11 Nov 2020 09:58:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcmtO-0006pf-Br; Wed, 11 Nov 2020 09:58:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kcmtR-0005oq-OR
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 09:58:21 +0000
X-Inumbo-ID: 55b48ee9-b115-4306-a927-e410cc40e570
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 55b48ee9-b115-4306-a927-e410cc40e570;
	Wed, 11 Nov 2020 09:58:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WaxgwJEGqmVlw3XbgZmBHyGcqij0ZC6YIAztjmIioXM=; b=7BAlBW4/qd3KBIb0n8KVTaBwZ/
	RIpCO/qyoVN4zbm4lRMXuIgBx8Y1VDG/bY1LeaVZHQulFQyR+Ke9RLerwsnaIs/7lvalEMJyU+guO
	IrAaNaRCajQrsZFdh7rbBeJahFmLl/jnKwKeF9GfvF2EhA4cggers0CbeKtlv0Lkthj4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcmtO-0007l4-LL; Wed, 11 Nov 2020 09:58:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcmtO-0004P7-CK; Wed, 11 Nov 2020 09:58:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kcmtO-0006pf-Br; Wed, 11 Nov 2020 09:58:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156681-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156681: all pass - PUSHED
X-Osstest-Versions-This:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
X-Osstest-Versions-That:
    xen=0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 09:58:18 +0000

flight 156681 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156681/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc
baseline version:
 xen                  0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258

Last test of basis   156554  2020-11-08 09:20:27 Z    3 days
Testing same since   156681  2020-11-11 09:19:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0a5e0ce0fb..3059178798  3059178798a23ba870ff86ff54d442a07e6651fc -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 10:01:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 10:01:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24586.51897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcmwn-0006nP-15; Wed, 11 Nov 2020 10:01:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24586.51897; Wed, 11 Nov 2020 10:01:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcmwm-0006nI-UO; Wed, 11 Nov 2020 10:01:48 +0000
Received: by outflank-mailman (input) for mailman id 24586;
 Wed, 11 Nov 2020 10:01:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcmwl-0006nD-EF
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:01:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c8eb5f3-ef88-44dc-b668-8c19ca45da39;
 Wed, 11 Nov 2020 10:01:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B3A5FAC23;
 Wed, 11 Nov 2020 10:01:45 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kcmwl-0006nD-EF
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:01:47 +0000
X-Inumbo-ID: 0c8eb5f3-ef88-44dc-b668-8c19ca45da39
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0c8eb5f3-ef88-44dc-b668-8c19ca45da39;
	Wed, 11 Nov 2020 10:01:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605088905;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=KZhvr+aHJsAIl4S2AH81gc6u7VXj74Lri8KHt68rkDg=;
	b=XJBwr9YOkcmTzxprZy6Ug8GISMnzb+t/rr35/nudQCWCPCqa7sTa1Kpcus/IjmGXrlz+7E
	0PC77OTnsOo0R4R4s8DJvo0V+EBVHrgteESMiN7t2VHbbDIq+lFZnOsVA6piHrVL+U+FCA
	BYMbEVMYx8RdUJH2extVX8lLm3zccKU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B3A5FAC23;
	Wed, 11 Nov 2020 10:01:45 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools/libs/ctrl: fix dumping of ballooned guest
Date: Wed, 11 Nov 2020 11:01:43 +0100
Message-Id: <20201111100143.13820-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today a guest with memory < maxmem often can't be dumped via
xl dump-core without an error message:

xc: info: exceeded nr_pages (262144) losing pages

If the last page of the guest isn't allocated, the loop in
xc_domain_dumpcore_via_callback() will always emit this message,
because the number of already dumped pages is tested before the next
page is checked for validity.

The guest's p2m_size might be lower than expected, so this should be
tested in order to avoid reading past the end of it.

The guest might use high bits in p2m entries to flag special cases like
foreign mappings. Entries with an MFN larger than the highest MFN of
the host should be skipped.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/ctrl/xc_core.c | 42 +++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
index e8c6fb96f9..b47ab2f6d8 100644
--- a/tools/libs/ctrl/xc_core.c
+++ b/tools/libs/ctrl/xc_core.c
@@ -439,6 +439,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
     unsigned long i;
     unsigned long j;
     unsigned long nr_pages;
+    unsigned long max_mfn;
 
     xc_core_memory_map_t *memory_map = NULL;
     unsigned int nr_memory_map;
@@ -577,6 +578,10 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                                    &p2m, &dinfo->p2m_size);
         if ( sts != 0 )
             goto out;
+
+        sts = xc_maximum_ram_page(xch, &max_mfn);
+        if ( sts != 0 )
+            goto out;
     }
     else
     {
@@ -818,19 +823,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         {
             uint64_t gmfn;
             void *vaddr;
-            
-            if ( j >= nr_pages )
-            {
-                /*
-                 * When live dump-mode (-L option) is specified,
-                 * guest domain may increase memory.
-                 */
-                IPRINTF("exceeded nr_pages (%ld) losing pages", nr_pages);
-                goto copy_done;
-            }
 
             if ( !auto_translated_physmap )
             {
+                if ( i >= dinfo->p2m_size )
+                    break;
+
                 if ( dinfo->guest_width >= sizeof(unsigned long) )
                 {
                     if ( dinfo->guest_width == sizeof(unsigned long) )
@@ -846,6 +844,14 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                     if ( gmfn == (uint32_t)INVALID_PFN )
                        continue;
                 }
+                if ( gmfn > max_mfn )
+                    continue;
+
+                if ( j >= nr_pages )
+                {
+                    j++;
+                    continue;
+                }
 
                 p2m_array[j].pfn = i;
                 p2m_array[j].gmfn = gmfn;
@@ -855,6 +861,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                 if ( !xc_core_arch_gpfn_may_present(&arch_ctxt, i) )
                     continue;
 
+                if ( j >= nr_pages )
+                {
+                    j++;
+                    continue;
+                }
+
                 gmfn = i;
                 pfn_array[j] = i;
             }
@@ -879,7 +891,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         }
     }
 
-copy_done:
+    if ( j > nr_pages )
+    {
+        /*
+         * When live dump-mode (-L option) is specified,
+         * guest domain may increase memory.
+         */
+        IPRINTF("exceeded nr_pages (%ld) losing %ld pages", nr_pages, j - nr_pages);
+    }
+
     sts = dump_rtn(xch, args, dump_mem_start, dump_mem - dump_mem_start);
     if ( sts != 0 )
         goto out;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 10:07:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 10:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24594.51910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcn1m-00070G-LM; Wed, 11 Nov 2020 10:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24594.51910; Wed, 11 Nov 2020 10:06:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcn1m-000709-IH; Wed, 11 Nov 2020 10:06:58 +0000
Received: by outflank-mailman (input) for mailman id 24594;
 Wed, 11 Nov 2020 10:06:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DN9B=ER=cloud.ionos.com=jinpu.wang@srs-us1.protection.inumbo.net>)
 id 1kcn1k-000704-F2
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:06:56 +0000
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31937f84-febf-464e-a45e-c4a4ffc3bf9a;
 Wed, 11 Nov 2020 10:06:54 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id za3so1980655ejb.5
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 02:06:54 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DN9B=ER=cloud.ionos.com=jinpu.wang@srs-us1.protection.inumbo.net>)
	id 1kcn1k-000704-F2
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:06:56 +0000
X-Inumbo-ID: 31937f84-febf-464e-a45e-c4a4ffc3bf9a
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 31937f84-febf-464e-a45e-c4a4ffc3bf9a;
	Wed, 11 Nov 2020 10:06:54 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id za3so1980655ejb.5
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 02:06:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.ionos.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=8BVYeCUTuKi3mlr+D1JejV3WQ4DReZqKao2un+osJdI=;
        b=EWvaurrQLE5Q4yXlmvtPWeVGE0dkpLilBhFACYcpJh5SLgD4k2LHFrfA3fg/phe+gw
         pOXsWhTko+6vmsPMUys1ydyafND8sfafc/BhJ0cLDqzJH5wv7CFdS4PBi4SJJirndEDo
         I3sopJwAoMSpcvKBGOBbJDfAHPo19nxrr5mYxmaUjZAtoeSEinoAPgcqf0tp4SGruWUm
         s+puyVNd/9QM59nJCHGxgwiF4Uhv/qkw3Bh4E6NoIJN/jPJN6bl5siKGHhBBHVdEhBTl
         i9cN1zoK+m/ZDFXKRJbDKDgS4Hb/xEaLU1cJNAhSo0nBcfuCpt/4xbdL/4qVB2KjPelC
         haxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=8BVYeCUTuKi3mlr+D1JejV3WQ4DReZqKao2un+osJdI=;
        b=svGWB8hkvuTCnCQjuMwmsuzmr/iYoDvbK+qKuT0DBajRsrMWmO019xkfyS8oyeYz+g
         fnCo+tpYXma7mtOD/gbsHCYzcUvbgv6209WEHS4Fn97pozfj3VaYzFXaTu3qwMYM0L4y
         AKW/sMqzDWzou6TsTwRWYHDrQYkB5d8Qtg9s9AG5OX6pXRwq3SXk1CFyJpAJOGF/KCHB
         G5rQ/V1cpbjhifE5Kr1zqYDTp2uHAERVQz2HENkbpoMxebw8KgG0f/cbK30HufqgzOga
         rQyUFoz2P9Sre2Pp/0si1k6Pb0PmJSc+VsrVGqxfgtd70Q2MLKXTXKEjmXm+Ts8rJgbl
         rQSQ==
X-Gm-Message-State: AOAM531/7pY4UTLla6yOE6S5fCnFVVSJwzjdOtSdqyCak4HT4p64ojh1
	oq9XIF79xBbZ2K38YkepjWlWy+2/9wOIrW/cTsBkTg==
X-Google-Smtp-Source: ABdhPJzsSG4n+ejxAi7LIm+JtRF2XZ0Zmfdo/Dim1Bn2zjoT1vntwc6k1ciTHUd7GwTXoyYPdOeDF7UWMmfnel/IKPk=
X-Received: by 2002:a17:907:c05:: with SMTP id ga5mr19170455ejc.212.1605089213881;
 Wed, 11 Nov 2020 02:06:53 -0800 (PST)
MIME-Version: 1.0
References: <20201111082658.3401686-1-hch@lst.de> <20201111082658.3401686-18-hch@lst.de>
 <CAOi1vP-JjnNdAUqd9Gy6YdFgi8Ev4_Jt3zcB9DhAmdAvQhG7Eg@mail.gmail.com>
In-Reply-To: <CAOi1vP-JjnNdAUqd9Gy6YdFgi8Ev4_Jt3zcB9DhAmdAvQhG7Eg@mail.gmail.com>
From: Jinpu Wang <jinpu.wang@cloud.ionos.com>
Date: Wed, 11 Nov 2020 11:06:43 +0100
Message-ID: <CAMGffEmU1ezUo68zF8DS4CRZZMosqhmDw3h7uiWzh2nL8tUs9g@mail.gmail.com>
Subject: Re: [PATCH 17/24] rbd: use set_capacity_and_notify
To: Ilya Dryomov <idryomov@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, 
	device-mapper development <dm-devel@redhat.com>, linux-block <linux-block@vger.kernel.org>, 
	Lars Ellenberg <drbd-dev@lists.linbit.com>, nbd@other.debian.org, 
	Ceph Development <ceph-devel@vger.kernel.org>, xen-devel@lists.xenproject.org, 
	linux-raid <linux-raid@vger.kernel.org>, linux-nvme@lists.infradead.org, 
	Linux SCSI Mailinglist <linux-scsi@vger.kernel.org>, linux-fsdevel <linux-fsdevel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 11, 2020 at 10:55 AM Ilya Dryomov <idryomov@gmail.com> wrote:
>
> On Wed, Nov 11, 2020 at 9:27 AM Christoph Hellwig <hch@lst.de> wrote:
> >
> > Use set_capacity_and_notify to set the size of both the disk and block
> > device.  This also gets the uevent notifications for the resize for free.
> >
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> > ---
> >  drivers/block/rbd.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> > index f84128abade319..b7a194ffda55b4 100644
> > --- a/drivers/block/rbd.c
> > +++ b/drivers/block/rbd.c
> > @@ -4920,8 +4920,7 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
> >             !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
> >                 size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
> >                 dout("setting size to %llu sectors", (unsigned long long)size);
> > -               set_capacity(rbd_dev->disk, size);
> > -               revalidate_disk_size(rbd_dev->disk, true);
> > +               set_capacity_and_notify(rbd_dev->disk, size);
> >         }
> >  }
> >
> > --
> > 2.28.0
> >
>
> Hi Christoph,
>
> The Acked-by is wrong here.  I acked this patch (17/24, rbd), and Jack
> acked the next one (18/24, rnbd).
right. :)
>
> Thanks,
>
>                 Ilya


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 10:12:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 10:12:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24600.51922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcn7J-0007vT-8T; Wed, 11 Nov 2020 10:12:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24600.51922; Wed, 11 Nov 2020 10:12:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcn7J-0007vM-4y; Wed, 11 Nov 2020 10:12:41 +0000
Received: by outflank-mailman (input) for mailman id 24600;
 Wed, 11 Nov 2020 10:12:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TQiX=ER=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kcn7H-0007vH-8i
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:12:39 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d0cdd09-96b4-48b0-b930-45a64c7c7dbf;
 Wed, 11 Nov 2020 10:12:33 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TQiX=ER=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
	id 1kcn7H-0007vH-8i
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:12:39 +0000
X-Inumbo-ID: 7d0cdd09-96b4-48b0-b930-45a64c7c7dbf
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7d0cdd09-96b4-48b0-b930-45a64c7c7dbf;
	Wed, 11 Nov 2020 10:12:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605089553;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=nzeKJm5vfRcS4dUFNR5EwXhpyFRAMD5LO0ZM1LgeYmg=;
  b=hH+6dynciQYXeYvjvdFSmlKjfg8z2Uslo+7mjf6fKd2NXWpjqIrc+PcV
   djHJCMmgEqFx69h+fySvafb2Coqn+e4XRvw1brKVyQ07N4k3NY/niETGP
   qLaiB/OW1M4sKcaBnXn0RTzQ41FtDvxFM4gQ8s5GjBxZkIk24rqTITX5U
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: p3Dwa8KDl9lOfTDLWNthFNjxV7oSt5Tj+/+8eD2hOKQb6IZ2qfJiU1K3tzWupcUWH1F+2x9pPU
 T0Ju46dFZOV5aYqWtbugZq2rFBPugsQK6iYB/1v/9Az9b2IoXoVC0oIpnnaMvVLg5ISptgk4SH
 nPz4+9CzkgAXeNLz2lEiY26+GpuG9nGd2P0tqq+X6VORGd6btk2f0UleH6acUDfM4KSliYTFZ+
 TNTs8PxBIuPRbmn2sKok9z1f1rT0HucJ9DiEn5SxtbndXJc4SzVgCUhnvWVuShJI5d97t8UYst
 TZc=
X-SBRS: None
X-MesageID: 32053252
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="32053252"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Am87dsUPgFD/FbLPdQEJj2hLs3+74IQOOKCcexNY7Rzclh/JWV7zlFtY8xPi98DwVS7PlFb/lKoi/5hk7XljEBhYQwRTqobwx/SWsmAXKyYvIsvONwHqMYJrEQDRl57e7eVpCiFsrTDxI5Va6eNagDML86bM6NzuEtPd13Z9nW/WBMM78qUYeX4cNuKXA1W2qWWWYQpVHI63s8Wft4lAhKXhIIBgWB29zckAuYXplI4jQO7kl3uMhtfAN/rsCpNZLankS+5n3iPdTMXJIpBJ1eQMNz6gyorUG/HEDORKFcX8kDwWvoJz1tcg6sAkaPq9RqSGxAxxSkHB8BRZRSGE/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzeKJm5vfRcS4dUFNR5EwXhpyFRAMD5LO0ZM1LgeYmg=;
 b=I+2AaU2hooyM3qrPRj8GfhO6QOwLkOjSOF1I8/iMTu0OptKtlx7hY3+p38Oo512DeBc81GUXmbw7Y8Uruzv/1otG7JL5h8XJIesDSlgfsjDOGdHAAteR3QqxQ/qkbzZQV08ZFl3DuDwC8tmSxxIzHPOzq0bHrfa9SPOczfHiroCn+dF0aw/7KEtGk/vTctpsFWfFFxX1/qH8bzzi9szr9hyk5UzRqdfF50uFHsd711+Bn/Ft9XxpZ4RpF1Vo0/WcAEwKhVIkctXCebGWpvkuRxvpqxvrKfK+PRYvCL/FKL/phtMAQh2pfwE/RoYU9rRWImoU076Eg+tD7NempxtlEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzeKJm5vfRcS4dUFNR5EwXhpyFRAMD5LO0ZM1LgeYmg=;
 b=QFxyJfeR5t9cPQh6P52Qg/W47luB/zsDcmd6M30yTIzKdRjy/CI/N9AGIn/cIq9i0cGr5WXHXa8DC/9G3b5ao6xJzrrt5YQAVtKVvkmNRaOSc2owvdbqsXSD8HhQjo00QU1tzSs+GC8fS5XxcIvVM5HLzPrEVD8Svv4kYYsNeww=
From: George Dunlap <George.Dunlap@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] docs: fix documentation about default scheduler
Thread-Topic: [PATCH] docs: fix documentation about default scheduler
Thread-Index: AQHWt5KEiHxEu0uY0Eat6hiwYn2hl6nCtyuA
Date: Wed, 11 Nov 2020 10:12:28 +0000
Message-ID: <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
References: <20201110185129.5951-1-roger.pau@citrix.com>
In-Reply-To: <20201110185129.5951-1-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0064d171-0e17-4545-b450-08d8862a4bcf
x-ms-traffictypediagnostic: BY5PR03MB5094:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BY5PR03MB509473581C7C1EEB1D7D482399E80@BY5PR03MB5094.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: KcnNhK4wtpaKDLnN31usdp3t1WO67K7zfi+csm7dh2cJVPl2lW9E10xV2HsIoGbAJWnm/E+D94ms8Y4gziEx8kDQQV+YdXt/Olk9LvX38koTx9c8ZAHlEGaOBbVPfnBgzssn23zRYuo6Z+z3iWRfYkT1jd7tDppoI/Dt8AXHjKUY6AMD/fSWFq31iJ0aUOlzkLLFpSgkbH+b/Qg87+cr1xRkTsgsZNEbGJZYS8GPSc3V01Ms+cIZTrUERx0+1JLXvyxFG8535U99aSjWidxtFNtEfJVbifuwnosZGsAszfPeP6yDh29DL5q5jWMPssR8
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(396003)(376002)(136003)(346002)(26005)(2616005)(36756003)(186003)(6506007)(55236004)(6636002)(4326008)(71200400001)(66476007)(6486002)(2906002)(6512007)(83380400001)(37006003)(6862004)(66446008)(66556008)(66946007)(316002)(64756008)(76116006)(91956017)(5660300002)(53546011)(33656002)(86362001)(8676002)(8936002)(478600001)(54906003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: c09hj9OIEvuleHOfVTibLH6ESsEo9HjG8bwQxYainXgSSnyEvzWVLV5OW29bPJUpbKwaCV89njKV6CotNSr0VfIy96tRgcEG4KtjizEhHzmKft6Z7krgZKhC4XwXrsEkrTjg8RZ9V8UPwCRlHlbmpr4DuthMsA2TOVf/QcTVlbN7UaD9sy21TwYvX+r4y9l7gk2Q61wprW51ngUO+lQAYXjetWh4qAzwUqX4iwZEew+Lby8pvWrdzCGEfNi1aGEoiXfxX2xn/3xJVVFyKRXi5ufX7kZ5nFskpOVBAJWirkOkEdvADMJBBsdXMOm3hkkXT87gIVRwwxB6gwYZJOEpxkYQMMtabGcb0egGix373n61Nk9Vb5CsyAAkIfeaZ3FfjKxo4ups/NQ8pZHJu2XFFJauoTzBpYQBJ1RvhtEgd3n/CVHAtCAo3FIVI30fsG8TpywpLRZFchS0CNnySg8CItejfY3vDCcZqDPJiuX/eqe5uyK7GeuXY0mSw+5SsfodhqYf2Shntc0rfafj00bkjs1N8rErsYrOiKW5osDl5tN42RPdLU2847gq5hokuzTN/W+12WHc3crczwHt43VvliFWf11VJAXMgZurL96fzamwUaxu/6zTYVC4QmOqKoXHAQCb1+b5+kzoF1RSimDfmZyQEbqki8H7iKMB4FDiK5ln7UaOxDuOCtd85MiCVC3PZTupDLn/otn42GBYxP5vB/8sIkpAaIj1C6VHzDIpO7T2PLx8W39T3T0typRYGXJ3XTzwoc1ro+lrVL7XcIzGZ+siL1zuoSWQCvajidajDasyar0CErAr0uuQfyu3Dm8qVQE6mnuDkM3pBhDC0QtASOFuOms3G4ZguVC8J+FO+CgQC7PiAg6Y//LpU+3/RHkydenONP4rkwVXVny/L1yLCQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <803934165E1AE24A8802D9AA795B6A3B@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0064d171-0e17-4545-b450-08d8862a4bcf
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 10:12:29.0027
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: UDYT2qKcyc+3VkMseT4MdAAhgmB04JKK0nTQV5QWIaN0OQktvRz9HMzTTVldVlXR7cN4S1CgYwyI91okchyIhqBQAS5iVHPzw1k4oxHIJlI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5094
X-OriginatorOrg: citrix.com



> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
> 
> Fix the command line document to account for the default scheduler not
> being credit anymore likely, and the fact that it's selectable from
> Kconfig and thus different builds could end up with different default
> schedulers.
> 
> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
> - Point that the default scheduler is being selected by Kconfig,
>   don't mention the default Kconfig selection.
> ---
> docs/misc/xen-command-line.pandoc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index 4ae9391fcd..eb1db25f92 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1876,7 +1876,7 @@ with read and write permissions.
> ### sched
>> `= credit | credit2 | arinc653 | rtds | null`
> 
> -> Default: `sched=credit`
> +> Default: selectable via Kconfig.  Depends on enabled schedulers.

Sorry for not weighing in earlier; but this basically makes this documentation useless.

I'd rather say:

-->8

= credit | credit2 ...

Default: sched=credit2

NB that default scheduler and schedulers available can be modified via Kconfig.

8<----

 -George


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 10:13:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 10:13:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24603.51937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcn7g-00081O-J3; Wed, 11 Nov 2020 10:13:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24603.51937; Wed, 11 Nov 2020 10:13:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcn7g-00081H-FD; Wed, 11 Nov 2020 10:13:04 +0000
Received: by outflank-mailman (input) for mailman id 24603;
 Wed, 11 Nov 2020 10:13:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcn7f-00080b-PA
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 10:13:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32481d4a-0a80-485d-a7fc-ca82d624c381;
 Wed, 11 Nov 2020 10:13:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcn7d-00089B-Cs; Wed, 11 Nov 2020 10:13:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcn7d-0004vf-5p; Wed, 11 Nov 2020 10:13:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcn7d-00068t-5M; Wed, 11 Nov 2020 10:13:01 +0000
X-Inumbo-ID: 32481d4a-0a80-485d-a7fc-ca82d624c381
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=563XLd+IDZGMNNFz3o2MvTuufWa43/SJiohlc7Nc8Mk=; b=n9sVbVhMucZHWcSn0iw16cDUVO
	fKWHgIQVgZJnKhzkHJ1VKtbeD8Yl3dNTax3tarcOaX6G6Xor98KxScQyNNR4IcXZivdAQCD9Q6Kyw
	3VLJgLbQRFr0GENm3i74PoC5HnkQBJNrD73Ffsi4YbjT+vcy40uIA4ETHwWBnmfdjCmk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156679-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156679: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=69634224afaf84474f04e1ab050f216d66bcda68
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 10:13:01 +0000

flight 156679 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156679/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  69634224afaf84474f04e1ab050f216d66bcda68
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156622  2020-11-10 13:01:19 Z    0 days
Failing since        156628  2020-11-10 17:00:28 Z    0 days    6 attempts
Testing same since   156679  2020-11-11 08:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3059178798..69634224af  69634224afaf84474f04e1ab050f216d66bcda68 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:10:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24622.51958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kco18-0004zB-W0; Wed, 11 Nov 2020 11:10:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24622.51958; Wed, 11 Nov 2020 11:10:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kco18-0004z4-Sm; Wed, 11 Nov 2020 11:10:22 +0000
Received: by outflank-mailman (input) for mailman id 24622;
 Wed, 11 Nov 2020 11:10:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ttLz=ER=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kco17-0004yz-Tc
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:10:21 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bede7193-360a-48b6-8584-b428ba45975a;
 Wed, 11 Nov 2020 11:10:20 +0000 (UTC)
X-Inumbo-ID: bede7193-360a-48b6-8584-b428ba45975a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605093020;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=FrUrfzQ1PWZCL6L9TISjWH7ECa/n8mftd8rRa8EUYe8=;
  b=R1Gc2H4DNQFeD+x8M/7uSKGl4DwM0uDaeU8rCf4oB/ylJzA2hjUfH9GW
   C3Wi2XTeFd+g/RhnRaIV1N3bf8KIxSXbeK5jVTbAMWI6VNWQf1yYoAe0G
   KuMP5u7VJxnPokRHt+/i04z1ho+9CniWKPSNG04rX5b+ya7ZS4pblugjM
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: None
X-MesageID: 32056078
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="32056078"
Subject: Re: [PATCH] docs: fix documentation about default scheduler
To: George Dunlap <George.Dunlap@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20201110185129.5951-1-roger.pau@citrix.com>
 <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
Date: Wed, 11 Nov 2020 11:10:13 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 11/11/2020 10:12, George Dunlap wrote:
>
>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>
>> Fix the command line document to account for the default scheduler
>> likely not being credit anymore, and the fact that it's selectable from
>> Kconfig and thus different builds could end up with different default
>> schedulers.
>>
>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Changes since v1:
>> - Point that the default scheduler is being selected by Kconfig,
>>   don't mention the default Kconfig selection.
>> ---
>> docs/misc/xen-command-line.pandoc | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>> index 4ae9391fcd..eb1db25f92 100644
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>> ### sched
>>> `= credit | credit2 | arinc653 | rtds | null`
>> -> Default: `sched=credit`
>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
> Sorry for not weighing in earlier; but this basically makes this documentation useless.

No.  It is the only halfway-usable version, because it is the only
version which isn't misleading.

It would, however, be far better to name the CONFIG_ variable in
question (as we do elsewhere in this doc), so people can actually figure
out what they've got in front of them.

~Andrew
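
For illustration, an entry naming the Kconfig option might look
something like the following sketch (the wording is hypothetical, and
the specific `CONFIG_SCHED_*` symbol naming is an assumption for the
example, not taken from either patch):

```
### sched
> `= credit | credit2 | arinc653 | rtds | null`

> Default: `sched=credit2`

Note that both the default scheduler and the set of schedulers built in
are chosen at build time via Kconfig (the `CONFIG_SCHED_*` options), so
different builds may have different defaults.
```

This keeps the common-case default visible while still warning readers
that their particular build may differ.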


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:15:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24629.51970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kco5t-0005Cs-MF; Wed, 11 Nov 2020 11:15:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24629.51970; Wed, 11 Nov 2020 11:15:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kco5t-0005Cl-I8; Wed, 11 Nov 2020 11:15:17 +0000
Received: by outflank-mailman (input) for mailman id 24629;
 Wed, 11 Nov 2020 11:15:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kco5s-0005Cg-BF
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:15:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b00f005a-604b-46f2-acb1-7d14fe048649;
 Wed, 11 Nov 2020 11:15:13 +0000 (UTC)
X-Inumbo-ID: b00f005a-604b-46f2-acb1-7d14fe048649
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605093314;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=ZBbY9I2LeAb8lHantSX4XXM+wh0OxsXIi0G3DhcUL0I=;
  b=UgGvdwLUjVRE6cUOG2X190cZn+AIFfuMZ5HeGBaNoG1LyBfF7/uUeZr9
   FywQAdzdKT8Z+hmln/rV/z/Qk4JnC6k3VmVHC0kIygS0zzg6ctwqWGhGz
   7skozSRFCLBtVCxpTreWUYCdLcqRAXrztNzW7kXnakJh3c9Wu9T9dGltE
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 31277148
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="31277148"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/8NKIoQS2S86uhyFZ2/dOmcBEiEleM/wLw3QW8a8L9U=;
 b=Jq008qv7j5zYChIPoCE73C6baETvccKyvH0SNjAPnUWD94HJDg9OCv5oFXqauTEw1MqeHC0iDV7sdbjx+s7HD97Di+VimdVqKli/Efff2Kz9J0n8dk9HEz9P+xwPnejEBxgBw1fFdyGz1f0ExZywH5DE0cWkjwMxlh7qnoSqLVQ=
Date: Wed, 11 Nov 2020 12:15:04 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201111111504.r4k7a53spsy7pzjq@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
X-ClientProxiedBy: LO2P265CA0094.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::34) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 94c67ec4-6d28-4d10-9c22-08d886330cc6
X-MS-TrafficTypeDiagnostic: DM6PR03MB3737:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB373783BF7412E4E82A9D48C48FE80@DM6PR03MB3737.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 94c67ec4-6d28-4d10-9c22-08d886330cc6
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 11:15:09.2236
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZiHepMZSZTYCVMQAl5mX+ai1M0Mwef6h+Lgow6VwzOJkA9Ll/2MqTeHqFzM7uV5PHOFREyxBxqoz/96acMFFXg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3737
X-OriginatorOrg: citrix.com

On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> Under certain conditions CPUs can speculate into the instruction stream
> past a RET instruction. Guard against this just like 3b7dab93f240
> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> achieve this that differ: A set of macros gets introduced to post-
> process RET insns issued by the compiler (or living in assembly files).
> 
> Unfortunately for clang this requires further features their built-in
> assembler doesn't support: We need to be able to override insn mnemonics
> produced by the compiler (which may be impossible, if internally
> assembly mnemonics never get generated), and we want to use \(text)
> escaping / quoting in the auxiliary macro.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> TBD: Would be nice to avoid the additions in .init.text, but a query to
>      the binutils folks regarding the ability to identify the section
>      stuff is in (by Peter Zijlstra over a year ago:
>      https://sourceware.org/pipermail/binutils/2019-July/107528.html)
>      has been left without helpful replies.
> ---
> v3: Use .byte 0xc[23] instead of the nested macros.
> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
> 
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
>  
> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
> +# Check whether \(text) escaping in macro bodies is supported.
> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
> +
> +# Check whether macros can override insn mnemonics in inline assembly.
> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)

I was going over this to post a bug report to LLVM, but it seems like
gcc also doesn't override ret when using the above snippet:

https://godbolt.org/z/oqsPTv

Roger.
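
As an aside, the mnemonic-override technique the t5 probe tests for can
be sketched as a standalone gas fragment (a simplified illustration of
the intended behaviour only, not the actual Xen macros from the patch):

```asm
/* Sketch: redefine the "ret"/"retq" mnemonics so every return is
 * followed by an INT3, blocking straight-line speculation past the
 * RET.  The raw opcode .byte 0xc3 is emitted inside the body instead
 * of "ret" so the macro cannot recursively invoke itself (cf. the v3
 * note above about using .byte 0xc[23]). */
.macro ret
    .byte 0xc3          /* RET */
    int3                /* speculation barrier */
.endm

.macro retq
    .byte 0xc3
    int3
.endm
```

Whether a given assembler actually lets a macro shadow an instruction
mnemonic in compiler-generated output is exactly what the t5 probe (and
the godbolt link above) is checking.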


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:25:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:25:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24640.51982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoFG-0006Db-Om; Wed, 11 Nov 2020 11:24:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24640.51982; Wed, 11 Nov 2020 11:24:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoFG-0006DU-LT; Wed, 11 Nov 2020 11:24:58 +0000
Received: by outflank-mailman (input) for mailman id 24640;
 Wed, 11 Nov 2020 11:24:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TQiX=ER=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kcoFE-0006DP-MR
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:24:56 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 97fcedf4-ba99-4046-9e0c-4a820942c744;
 Wed, 11 Nov 2020 11:24:55 +0000 (UTC)
X-Inumbo-ID: 97fcedf4-ba99-4046-9e0c-4a820942c744
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605093896;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=tnN7HrCCdRQHe2t+v+fKDZzXpzM3DS1XbhOKQxdhHTY=;
  b=RfBG30MU0S2VgEwd5efYL/Q9uC2CEz7L/ShnS7pnltKzCAp4krOCGJpR
   AyBVeOZ7HagMdYNN2pi3PG9yvUAuUKkWkk39c/HYa2YhDY1dgrTPCNp61
   jbab7QajL3rtXx05Kvha8hMP3LTLpb57IwAJ/HcRePDo0XcmFOKJnV4E9
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 30910178
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30910178"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tnN7HrCCdRQHe2t+v+fKDZzXpzM3DS1XbhOKQxdhHTY=;
 b=P10C9M3g5VIPNA96vFihcwVpfuY+R3TP9uVKPNBZ8mlpOO3Nio35bNRuDBXSbZGrlacOkV+I1FOJ9M4zn+yWYwefhH5hz6zF8Bd1dUdxNhjaHesWRFzmX4hbehXMHopiF3pjJV5CDy9kXAdL1EmMLtaSqj8LmFfU9wCFjFAsvw4=
From: George Dunlap <George.Dunlap@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Roger Pau Monne <roger.pau@citrix.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] docs: fix documentation about default scheduler
Thread-Topic: [PATCH] docs: fix documentation about default scheduler
Thread-Index: AQHWt5KEiHxEu0uY0Eat6hiwYn2hl6nCtyuAgAAQI4CAAAQVgA==
Date: Wed, 11 Nov 2020 11:24:50 +0000
Message-ID: <19A67843-4667-46EB-8F11-109D8989BB71@citrix.com>
References: <20201110185129.5951-1-roger.pau@citrix.com>
 <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
 <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
In-Reply-To: <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 60cb569a-d1f0-42a7-77e0-08d8863467d0
x-ms-traffictypediagnostic: SJ0PR03MB5536:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <SJ0PR03MB5536A4BB2705A4CE95C4F26699E80@SJ0PR03MB5536.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: I6zxWIz3b1SJMDFrw62XGDDdiIQ45R/rY7QFa2m1YCFqVOEzzbXv5rkJbqJmfUzEZInRuUYMKuBYvNTvsTM7uBxGG3vZFPHBjMY+KxYpeNjkOej6qgF9IHxomY/ZIlM5ujr/2ORnCh46rb/C/IpbVmTYydbJnOFvs6ZsEhBlne9hC/3x+Zd0fyblEnQfsnbM/W/Eq5Fhk4os/a7HCyRuIvLb6sehE5tplt3kWOdD7JGULMGfHiH5GvP32rmm6b2F5qUtfKBHRfBU2exZTFAFBOoEqMp657j6WhudonjAfOYlQgfZMwC9+pbZgwApZywCjE7Etji1OtvViTxWh/qv/g==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(376002)(366004)(346002)(478600001)(6486002)(86362001)(66574015)(71200400001)(2616005)(83380400001)(37006003)(36756003)(6512007)(316002)(53546011)(2906002)(6506007)(8936002)(8676002)(4326008)(6862004)(54906003)(26005)(186003)(55236004)(66946007)(76116006)(91956017)(66446008)(5660300002)(33656002)(66556008)(64756008)(66476007)(6636002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: t9qDSUoxlUMHHwxYlyLyWbjZxES9QHM5JP++pnVfwDehYXSVyPjySwSqpsVH9QhM4y2KqKVmzp31IpB8kYzAe3G+iULFcEJfnmJJk09eaAV0fejjzQCMidgHkPo9xVS3bGxvdgkzsu9hRiLa5AT/R1HwG++JlNEfnxM8/9teBf4G/3dzoZCGitwg+8jHGTUQn0ZTMksUm5IhQfAdAQstxzKvS/SOFYimwtpFnMnyM+b45wBiuSyt7A/bhPG5YXSWhBYS3ak/+rmwSK0fRcSzNrVcDKWSHX4yFVGjYDKYjcFovdwmpgI80fzOxyfyrHsVZLZuqRBdfQPH72whVuDvBSZnHhwspg/4/9YNmIZtUxPFx/cicx78EDJWnaoHzTurY0lBOZ0W2LH7y+rtI2Qyqy0M+f3d7tJQ8rMn6LfjZ86iktFzYFcqmyXifTWEr2sW7C/nZMjzj8hDeewihlF0ubfg7/Ec2TbTppXvUxzLkPlkjmKDif1tXqflVEMmrQ7tdCWVRCZg+9FNDzJAYLfbswe4Ow71uSS6RXr74I6lVxfzGiN2p7RDn8nBJeJtMPrqqpI5vO/+cXEmNnDmW0bvjzX4008XPmYmKANfyE0TVku129PauIZNnzgXvelpUmqyE9ZfcS9MFDkiteshiaqk1lKvbrYtZV88/3CB4CajBOBPk9XFMuagrpQA36qOUOwjfUDsIA3CmxGUONPvTDdfrTyZAREjGKcj7tef5ztDcM37S9NqdAv1BA91Yi2F0bzOhdSG06SJrsf5zopA5PJAjb5M5XqCoCzUohMXbHfdELcQSY61nJwXe6KHjBa0uM4AFWaImiZY2mOW2/80Ab+uUqWbN8RXDlU6MI+BcVrQWCbxjsV3opDxN3ThpjWydZBtu1hY2PsLDNFwzCmPgMHv5g==
Content-Type: text/plain; charset="utf-8"
Content-ID: <366EFE78D0368F43B97EB7A3B09690FB@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 60cb569a-d1f0-42a7-77e0-08d8863467d0
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 11:24:51.0162
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: reC1OQ8GzK4220emLYkAMRhucYLsq8nrwllfa+flCvtAvJuvO+VVFwTqqlCu6dSJv3MkpJbShTufCDz+3gMXUFcvmHnw+h0u+laQd4igJYw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5536
X-OriginatorOrg: citrix.com



> On Nov 11, 2020, at 11:10 AM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> 
> On 11/11/2020 10:12, George Dunlap wrote:
>> 
>>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>> 
>>> Fix the command line document to account for the default scheduler not
>>> being credit anymore likely, and the fact that it's selectable from
>>> Kconfig and thus different builds could end up with different default
>>> schedulers.
>>> 
>>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> Changes since v1:
>>> - Point that the default scheduler is being selected by Kconfig,
>>>  don't mention the default Kconfig selection.
>>> ---
>>> docs/misc/xen-command-line.pandoc | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>> 
>>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>>> index 4ae9391fcd..eb1db25f92 100644
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>>> ### sched
>>>> `= credit | credit2 | arinc653 | rtds | null`
>>> -> Default: `sched=credit`
>>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
>> Sorry for not weighing in earlier; but this basically makes this documentation useless.
> 
> No.  It is the only half useable version, by being the only version
> which isn't misleading.

The version in this patch essentially says “This has some options, it also has a default, but we’re not going to tell you any of them, nor what your default most likely is.  Go read KConfig to find out.”  This is completely useless to the person trying to figure out what the default is and what potential alternate values they can put here.

The vast majority of people who search for this on the internet will have that list of options, and have credit2=default.  As long as we tell them that a local configuration can override the available options and the default, people are smart enough to figure out what their system is doing.

> It would however be far better to name the CONFIG_ variable (we do
> elsewhere in this doc) in question so people can actually figure out
> what they've got in front of them.

Something like that would be even better, if Roger (or someone) wants to write it up.

 -George


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:29:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24649.51993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoJ8-0006QI-9b; Wed, 11 Nov 2020 11:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24649.51993; Wed, 11 Nov 2020 11:28:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoJ8-0006QB-6Y; Wed, 11 Nov 2020 11:28:58 +0000
Received: by outflank-mailman (input) for mailman id 24649;
 Wed, 11 Nov 2020 11:28:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcoJ7-0006Q6-3Y
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:28:57 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ec4a146-47f9-41de-a98e-3d311d852838;
 Wed, 11 Nov 2020 11:28:56 +0000 (UTC)
X-Inumbo-ID: 0ec4a146-47f9-41de-a98e-3d311d852838
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605094136;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=gDYiAWA+L6Lto9L4umyrZ7aJVJyp4NQe/dnX4E3RDFY=;
  b=ZJKbGz4HCVkIYJX0CpzbtPhCZ+h/XIrYeYFannyAOL3JmZok1QdFgP8r
   h649087PBitIJJVICEdvmgCzS0jKeGJZR56IxVQPqmld/4agq+5zl5wzw
   uDRnjVeBc63J7uXMKTU0jA4Fb3GtUeY3Kwe0ubPN1EsSvtaTpwiqNO8s5
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: hkgMNrnbn96rItb/nLHEjudiwP1BGSZnB+zJBugojKSo6FQhXg6xZIfwuGJB8MI3a0qg0M5/Wc
 7SuDmK8pc0SAhk6Xfq143K8K/8GPKYqJAVfHLezSNZ3oHaD9QfNlDUuio4SElULEWJNIqFWP55
 SFIOIr7pAcjmQNpHGCGR0WSt9KCuYaNgDZQTQaN6KAwDL23VFcZ6WVT53VV+a0f9aMNkuOK8qN
 0doKF21wEgdP81lhqEpmznujmVOC/CuVj2F8eQjkUbAzph+Jxwqy3wXaJcB153/n5tYFyM1JJp
 /2A=
X-SBRS: None
X-MesageID: 30940680
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30940680"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FzfbTeqAu9ianjGOjaI9qsKJknYi5A+8IZ+zu0SxYem4upDtna4+XZpaDcS0gsawpjuII5wA38NFnYGzoLcM8ByESnbgZIV+Ay4lCdfzZyULWr6p3Vqj5BI8fhHvEqer8x+4VkeEYMxa8iY5SAYM6UE0xrPo9NuxReYBJbUsbsX70Oferqgq7Q/Vc/t49ZrRk4cSgg6KjYOFuQnf/HfaQJPTalGipaVjv5xNPDZ/DnBlDDC2w03oFEa4Ai15AU4V/wwksiiwffRi9O666rYg+cGHTufy9Q7MCnBvu/VroI4sxAXJbv1F458GiO3O5Mb+azd8jDrfQC1jeXOoc8gV3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oj4XlYtKAzzvwluGDPPEuGdNj6BGyfej6B+oN8iNW+Q=;
 b=BEeiqtzsC98sBVlZZxw58OjfdZfv8416FHcqY2WJaUPE8IXTbBjUxbclV8eWG9s9zAikvlxCAqbB7jrInvoChGbH7YM/q5aHG7jKK8qRlT04WjuFZrKqILy1xXW1Bb1/Wr/eLvqeOnL3Z4pzaYV5LFJ3a0Co74qSt2Yd9+kMWdMNHz+hr0XLcNrZVHoDFpciTDyizHg9dZdRbvemKm8oD2bRfP6dS4mRnWwDGewNICmvw5fAKv9YoAievl0U1+waMekhe+MigeWo/vEATBW/noNT6NbXRe8fu4d4mVoCyz+peiqyjaej/a6vD4uOHBXjUX9XJFg9CNZi7mQNe27f9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oj4XlYtKAzzvwluGDPPEuGdNj6BGyfej6B+oN8iNW+Q=;
 b=JwYhNTqXEEL2iYRtovMgWkQdsDsWPs/IG9h7MkDmSRqoCZtNLAILAkVZvOSnexFzXA2/bSbLVZGI2GxHK80WYU/45iQe1lT17ZUxQOaK/Fjn7UZhtIek0O0oQCH0jYAJG52ly8m482qVDGSSdeA8QmsIq4b01gUeY6PwkhlH+fg=
Date: Wed, 11 Nov 2020 12:28:48 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Anthony Perard
	<anthony.perard@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201111112848.bzbqdqthsq2sm4nx@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201110093142.hkufamaepn67gv43@Air-de-Roger>
 <92e58ff0-e6a4-f92f-1ad6-06db7751762a@suse.com>
 <20201110111603.rarf7ncddrkswlxs@Air-de-Roger>
 <586bb9e5-bb90-bb27-3010-e702d65e301c@suse.com>
 <20201110140856.dtdql7lkwzwijko2@Air-de-Roger>
 <63ac07fc-1a71-b765-007e-571550970833@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <63ac07fc-1a71-b765-007e-571550970833@suse.com>
X-ClientProxiedBy: LNXP265CA0088.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::28) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7e2d4ba1-80fa-4646-c774-08d88634f7d8
X-MS-TrafficTypeDiagnostic: DM6PR03MB4764:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4764573E1572A8412B356FBA8FE80@DM6PR03MB4764.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Y9Qgf6jF1n4w4KyhZFeHDh6p3mZn7Cmm/+9RW0dkyW86bKgpb6kEA9LPJbwNmZWbYCKPYZ6GDobpISbGAcun8sb0NM25Q545TQyD1qshsrniNlS4UC2SAr2lWRB3DSO0kT5AElxjlw36nb3waSAQR6dB0oOlECYQwmkNQ5TB31HKhfKrLP6rF4n2IBkFh3jlKEZ7AR0IEZqAxZMdB5a9U8Hr/jWa1izQ/oU2KAsUz7C4I5O6mBRznnDWP2h2/NjurTmZrnYU5M1p4ytzzc3NorqYm8c6JB/5fQLWV/w+4Rq6Mkbs6Budsue6zKEj+DMk3Yfz8vCmUJm72e/6/G8xPA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(376002)(346002)(366004)(136003)(396003)(66946007)(1076003)(4326008)(66476007)(83380400001)(5660300002)(6666004)(316002)(86362001)(54906003)(186003)(26005)(16526019)(6486002)(53546011)(478600001)(6916009)(956004)(8936002)(6496006)(9686003)(8676002)(33716001)(85182001)(66556008)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: EthS5pvSvrt+/M9F/l6yToTxyUVEVOgsdEGE4jydEPwQ7ikt2XXd0khI0gP9o2Ou0RcFGSpVH5BLEABf6JCI41AHfpWFpgpzjnEG+4fFGkoXvWjiWI+LufyR9vNddBjXuZcXb0uzdbrG2HkePFi4H4LkgUgXtbmdUoO/Zh1CHnwLVOW/llux7Gai4ui2WjgqEK7OrDpig3NcMHj7CPVsOKzlJQztXqBRdhQ1740SCqAKY+Zbt9Y/tjXoTxL5vaOwpnQv7pY+lx6pKFhI8peDyKq/ZNX+qNNYYZcXv1l8+JajCqQ23+Y8OKXn/vfYhq1qSNquIgV1i7G2mpSdg5FyEZyKRK98kR7skpii4rd/GcvCtwiGC5U+E3OGto4Kfz5A6s1HaImUvyUhcXd0pYGxAihzzadBpnO22lhVHm4Pfw8C/ZrWuDpvCqhsp+YGZIqIadPetFtogINADRhwsXn5WB1FUcjoTp8pj6PvVdYrXlUiTayOuOMcws4mn+NpAOkaj3VONiUNohZVew8b5b+GH+6M+B4Pgz7KbmaibxwXHEeBXSiBwJ9yFwz0K51E0o4vL7mJrVGT0iP8vXvAyy8OJZjXlhRvm6yv8F0T1oVZ5D/y9j/OKTofdP2B4/l8E47sWLhwHOYMs+dgoB9Nwm22uM2DlZIlSJ62DOPGHLQkQMxE06XiYlpkNk0FWcwzQvbq4RCIHyoeoR308kZGiVTyLLkDrFwRC9ZTMdbfXIp/3Dn4QFufT1ezAz00G8kfx7A6iw7WptHqyiVzbl+phQ/mNeLFNS1ZceP8Mr9iEHGpAVjaLOXiVT9FdCNZ4ta4J0ZYg/HcC2jVLnTo+IkY28Qf5JNHetm+XlB4am75iLtFeUiIpNHUU475PpbOyrkcZ6VASiJPDBpDYVh/2q+9zILQyA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e2d4ba1-80fa-4646-c774-08d88634f7d8
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 11:28:52.9009
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6knLT5f+iMDcpBFya5HiOs8V6kOCP3AllKtnIjq0YY7pt03IlSCFdA6KjtXsxq3Gip84VvGeG1RqZ9eP/Wkknw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4764
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 03:32:43PM +0100, Jan Beulich wrote:
> On 10.11.2020 15:08, Roger Pau Monné wrote:
> > On Tue, Nov 10, 2020 at 02:19:40PM +0100, Jan Beulich wrote:
> >> On 10.11.2020 12:16, Roger Pau Monné wrote:
> >>> On Tue, Nov 10, 2020 at 11:06:46AM +0100, Jan Beulich wrote:
> >>>> On 10.11.2020 10:31, Roger Pau Monné wrote:
> >>>>> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> >>>>>> Under certain conditions CPUs can speculate into the instruction stream
> >>>>>> past a RET instruction. Guard against this just like 3b7dab93f240
> >>>>>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> >>>>>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> >>>>>> achieve this that differ: A set of macros gets introduced to post-
> >>>>>> process RET insns issued by the compiler (or living in assembly files).
> >>>>>>
> >>>>>> Unfortunately for clang this requires further features their built-in
> >>>>>> assembler doesn't support: We need to be able to override insn mnemonics
> >>>>>> produced by the compiler (which may be impossible, if internally
> >>>>>> assembly mnemonics never get generated), and we want to use \(text)
> >>>>>> escaping / quoting in the auxiliary macro.
> >>>>>
> >>>>> Could this have an option to enable/disable at build time?
> >>>>
> >>>> Well, a subsequent patch adds a config option for this, which in
> >>>> the worst case could be turned off. I'm afraid though I'm not
> >>>> clear about the question, because ...
> >>>>
> >>>>> FreeBSD will drop GNU as quite soon from base, and albeit it can be
> >>>>> installed as a package I would like to be able to build Xen using a
> >>>>> toolchain based on LLVM exclusively.
> >>>>
> >>>> ... it's not clear to me what the implications here are: Are you
> >>>> saying -no-integrated-as is not going to function anymore, unless
> >>>> people explicitly install gas? If that's not what you meant to
> >>>> indicate, then I don't see how building would become impossible.
> >>>
> >>> I'm still inquiring about this, but I would say that when gas is
> >>> removed from FreeBSD then the 'as' command would be mapped to llvm-as,
> >>> and thus -no-integrated-as would hit the same issues as the integrated
> >>> as. So far in Xen we have assumed that -no-integrated-as would
> >>> fallback to an as capable of doing what the integrated clang as
> >>> doesn't support, but that might not be the case.
> >>
> >> At which point Xen couldn't be built anyway, I expect. If llvm-as
> >> isn't sufficiently gas-compatible, we've lost (right now at least).
> >>
> >>> Ideally we would have to re-run the tests with -no-integrated-as, in
> >>> order to assert that the external as is really capable of what the
> >>> internal one is not.
> >>
> >> And if it doesn't, what would we do other than failing the build
> >> (which it would also if we didn't do the 2nd round of checks)?
> > 
> > I would always prefer a clear message (ie: your toolstack is not
> > capable of building Xen) rather than a weird build time failure.
> 
> Fair point in general.
> 
> > Also we could maybe disable certain options by default if the
> > toolstack doesn't have the required support to build them?
> 
> We could, but I'm afraid this will go down the route of embedding
> tool chain capabilities in xen/.config, which I continue to not
> consider a good idea (and the thread got stalled, as expected).
> 
> In fact (also to Andrew and Anthony), recently I've become aware
> of another shortcoming of this model: Our kernel packages contain
> .config files for the various architectures and specific per-
> architecture flavors. It used to be easy to update them on any
> system, until the tool chain capability checks got introduced.
> Now, in order to update them, one has to use the precise versions
> of the various tool chain parts that will be used on the build
> hosts, or else an error may result (for unexpected changes to
> the file), or one may unknowingly turn off options that are
> expected to be enabled.

I think the options should only be set based on toolchain capabilities
when there's no .config. If there's an existing .config we should just
check whether the toolchain is capable of building the selected set of
options, and report an error if it is not.

I guess this would apply to defconfig selecting options based on
toolchain capabilities.
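
A tiny sketch of the behaviour proposed above (all option and command
names here are hypothetical, purely for illustration):

```shell
# Hypothetical sketch: with an existing .config, verify the toolchain can
# build every selected option instead of silently dropping it.
config=$(mktemp)
printf 'CONFIG_FANCY_FEATURE=y\n' > "$config"

# Stand-in for a real probe such as "does $CC accept the needed flag?"
toolchain_supports_fancy_feature() { false; }

if grep -q '^CONFIG_FANCY_FEATURE=y' "$config" && \
   ! toolchain_supports_fancy_feature; then
    verdict='error: toolchain cannot build CONFIG_FANCY_FEATURE'
else
    verdict='ok'
fi
echo "$verdict"
rm -f "$config"
```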

> Put more generally - if I handed someone a specific .config, I'd
> expect their resulting binary to contain what I did set up. Or
> for them to report back that they can't build the thing. But it
> should not be the case that the .config got silently changed and
> certain functionality disabled just because they use a different
> tool chain.

Yes, I agree with this.

> > Has anyone reported this shortcoming to upstream llvm, so they are
> > aware and can work on this or maybe provide an alternative way to
> > achieve the same result?
> 
> I didn't and I'm unaware of anyone else possibly having done so.
> That said, I consider it sort of obvious though that the goal of
> replacing the GNU tool chain implies being fully compatible (and
> presumably better in certain areas).

Well, I think we have to keep in mind that the usage of the compiler
and the linker by Xen is far more advanced than what most applications
do, and we are likely to hit corner cases. I bet the LLVM people
weren't even aware of such usage.

Thanks, Roger.
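For readers following along, the mechanism under discussion -- stopping
speculation past a RET by placing a trapping instruction after it -- can
be sketched roughly as below. This is an illustrative stand-alone x86-64
sketch, not the actual Xen macros from the patch:

```c
/* Illustrative only -- not the actual Xen implementation.  The idea, as
 * in 3b7dab93f240 for CALL/JMP, is that every RET is followed by an INT3
 * so the CPU cannot usefully speculate into the bytes after it. */
#define GUARDED_RET "ret\n\tint3\n\t"

/* An assembly-coded identity function using the guarded return
 * sequence (System V AMD64 calling convention). */
long guarded_identity(long x);
__asm__(".globl guarded_identity\n"
        "guarded_identity:\n\t"
        "mov %rdi, %rax\n\t"
        GUARDED_RET);
```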


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:37:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24658.52005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoRW-0007Mp-3D; Wed, 11 Nov 2020 11:37:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24658.52005; Wed, 11 Nov 2020 11:37:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoRW-0007Mi-0C; Wed, 11 Nov 2020 11:37:38 +0000
Received: by outflank-mailman (input) for mailman id 24658;
 Wed, 11 Nov 2020 11:37:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcoRU-0007Md-Of
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:37:36 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 364fe789-e6ea-4c93-bd99-5a249932279b;
 Wed, 11 Nov 2020 11:37:35 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kcoRU-0007Md-Of
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:37:36 +0000
X-Inumbo-ID: 364fe789-e6ea-4c93-bd99-5a249932279b
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 364fe789-e6ea-4c93-bd99-5a249932279b;
	Wed, 11 Nov 2020 11:37:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605094656;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=r4CEbq3N7zmXBLL5E7Y/EU2TQNHFkbVKI4fNR4dUkPw=;
  b=TfRgk7349urMCuzHE0u82LPmjdR/jvDKJfYQt713IoGfp/iNsnvnjpnl
   Bsbysl/s+Abd3FAXTyXlc2BP3Yir4zXhEIKSs4Q3Q3wuhWW8tRHB06xgI
   +FMi1BHOrP+pMXPsq/+e6Gv4IEyhz8t4o/LPmy139m0oXZ3K8gTYuxF6f
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: T/gZ5yTe7OUWbCeyld5+RaT1yWeTwowFhXbl2SiM5drIaW+ANDMxt+xXJlECdO3EfBlIUGvjON
 iitNxc6pUVBB26SmUN/+JCKcOeue/yY9+G3CqR4fWdaSZL4ZUs3eXEfq+iKzpfGka51fMAInwt
 6iU6Bs/xZAbvp6ya/IgAtOgMgPG0xvj966dvdcFtxhkKOJfBx76km2uMmDRCQLOZQsOr87vWoj
 I3cWWb/3aKp1Gn99Dr2AzAGWbxLDvk2r1YYAFW3jRbi8Z936A2qFJUFOIwJw1bH0xZJr3zmF37
 Bmk=
X-SBRS: None
X-MesageID: 31278426
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="31278426"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VSmrSAlk0uB+c8FlQnXi0Wvp9iOPAg2oWfeQGqgQZLxq2a3OWrC4AJ6edUiKVBUcjepqsChOfUTbHVPjSm/2EZmoz/yp2ftkWjQqwjAkEq4mzrwfaUqcpMG3VNRZTFub34HCzqERS5jEdtfRE68c8Kv/sGR47GYGE1cCsaxemweAfEoQOW0LBfLVPubr81BAj0bVvrnCiCe6OT1vkt/4HdLBAxsA6um5X9wNMH30zDuX/iHYcfL/nB3lG8EPdcXZvpiBll6Sfey0jTICmYSvJ2iXx04I1gPp87Rf8UIdgcwnHYINeaBc8mSmT4auUBYP82a4kzzIS8rgiimcspwoZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5kZWfWIhx2RjfP2NAAjZM9KwgA88SUFpP0h7VcqMtMY=;
 b=C1c1ujbnyjwKrOZghFivP/wvRuS0gMS56FERva75rZser86xUFmthTCXEMf8a57By8YhHgi8CGH/lSSgvSuvX+V2FwTtK4S5d08lN4Yt5JfH6olh88EYGnzEsRV2uX+W60ok64pv7I4Wv4pcD9LPcAb+vxrIOaB/IDmchTU8m1d/FG7I9zdzzv3byj/XGM4OhUubhAGHUKoFy7VJXkyfltfk4CZvWJx3PrMnz8hi/ZL/shHALwNEm0oCOwvVx4ECZJ20zbEJKtca5jbxHbTzSSnm5WloAyaWG2QKHUO6AYQQOGjXzKND/dM+QLZX+o+5IzbwuxBS1PThGnkMm6MNoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5kZWfWIhx2RjfP2NAAjZM9KwgA88SUFpP0h7VcqMtMY=;
 b=oUSgXbM/rvKbaajhr0Mev1zdBD7KWMjy8BTiPCN1eUdAlac/fN20nmEtcKmYXwjVxjg4aCn/R04Pf4m6CV4VFLDpUIQhnGFK/4K94wdEhfzOfgSlpwk8e+6luAVUxvdESNqPtK5iuV3nuRO50d5on7NuPk9YUvy5ovypynwZGnw=
Date: Wed, 11 Nov 2020 12:37:26 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <George.Dunlap@citrix.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] docs: fix documentation about default scheduler
Message-ID: <20201111113726.iqpzf64sgxpnl3gc@Air-de-Roger>
References: <20201110185129.5951-1-roger.pau@citrix.com>
 <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
 <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
 <19A67843-4667-46EB-8F11-109D8989BB71@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <19A67843-4667-46EB-8F11-109D8989BB71@citrix.com>
X-ClientProxiedBy: LO2P265CA0416.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e22014d1-d454-4d2d-b247-08d886362c9a
X-MS-TrafficTypeDiagnostic: DM6PR03MB3835:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB38350B0D84EA3D5CCFD30A748FE80@DM6PR03MB3835.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: tzKpmXaSWobJDMMEAJAQNQoyfh0NZrIBM75+ZrfTHlTN4OqGGjX5Ubl4S0QKVq/UBJn0hL8AGk/0KARDjsHrQ03AsOfgDxYkXm/6agqxk294RuEmTxsCvHcfaoPPe8TARjtPDAWC7GoWsKkCILZM19x6b5iVDwOCYIqvnHjG3Y2CuE4slW17i6PVjJwd1qUpsqoS692bwP1grnBJKLArk7ZWVIt6CtW96H+ThUIoWCX4uhkI2h+8aX8BTbMLtsI0PjG29OMYslPdJfvSWRpdHy6T1pJiJME2haBneQPF93r2BwBa+XOkGZMvXiOCxsQo
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(39860400002)(396003)(136003)(346002)(376002)(6636002)(86362001)(66556008)(5660300002)(2906002)(53546011)(83380400001)(9686003)(26005)(1076003)(956004)(85182001)(4326008)(478600001)(54906003)(33716001)(8936002)(6862004)(6496006)(66946007)(6666004)(16526019)(66574015)(66476007)(316002)(186003)(8676002)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 8EfdEtG9L4DT1a6iZMgZZv9MALImFEb/xWcA+nPanM9jBeZXtwKPoHLVL8I29WD0JBV5I3nIPGbSx30Q/hirhVAhcY60T7HcNaWMvmGbT8j4VotJki/F0luIL16ldxaQUgcnmruTLhRIxWoRR//wSU/5hfWrQjplxGSNlZ/v1lau190aWwQLP3NGjStRZRRL6w7Hzzj2t26zp9M15j6U2Kd+/sSMES/MuubQVpJsq6OBiZKSeS+PdxURwNMEjNiaJRq3OLjDEzVHH5gGH+on7zir79R7eaynfB40uLJrO0sgP0vN5MNHQ6Cmz5LGCJFoJIkw0Gb4BliYmIPVUo/ALtr159zbiqXo9XZc4XhgcnBRCkf+BRGMsatxotCyNZa7avUQ9A5vCD3syOyHApudl1PC/JIB+q/AGnqdXajeP7h3tMFtu1IVpdLhGWDCsBW5EI+IpRaIKJ/B8wcpHatbqVfJThW96QXmJWErW+lvo/dKQz2JdWugQeztZBGrdk6DbCemE0s7O43XdlE+P9xUux83IS6PFLSxcT9Hms7kuuz77INKrR2DG3/DRAZM/poIJOpqpa3FLsv1da9WJ1dcBNc00iIx+67g0FWgxxwp7PKFvtmWnha8U0dE1jW7jcUPqAniVe7MSImbZcaX7/8Pd56U0ZShfyCKe3omHAfxwKKR/OsNMmjo+QGn29pvc7TqBeayh0sit0nPr30c3+YlNXV+jlIjxb1kGI6qw9Ka6xwKUL8yMbVDyIe6gj9rV7wOckE4Nmsef1ww1SQdCxdbAfJqddoAD7OWlt+KvFChMl8pvRzHuECdpc5gTAh2zPDzu5gBryp1Y0sX+Mzx8bkoH8iid0AP66gfhAbjCKyZAw0vKUJRucbTEby/yXFSkrv8xKbnF59SEaDhggeKClmrow==
X-MS-Exchange-CrossTenant-Network-Message-Id: e22014d1-d454-4d2d-b247-08d886362c9a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 11:37:31.0085
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zksifzw+cadGQfJdVqqOwZDAGL2QaZQw1YydrPWDExd4Dwh3eG1zbbdlap1GJbwhbDJa4Mgw1EvxmIXd3LrIQQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3835
X-OriginatorOrg: citrix.com

On Wed, Nov 11, 2020 at 12:24:50PM +0100, George Dunlap wrote:
> 
> 
> > On Nov 11, 2020, at 11:10 AM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> > 
> > On 11/11/2020 10:12, George Dunlap wrote:
> >> 
> >>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
> >>> 
> >>> Fix the command line document to account for the default scheduler not
> >>> being credit anymore likely, and the fact that it's selectable from
> >>> Kconfig and thus different builds could end up with different default
> >>> schedulers.
> >>> 
> >>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>> ---
> >>> Changes since v1:
> >>> - Point that the default scheduler is being selected by Kconfig,
> >>>  don't mention the default Kconfig selection.
> >>> ---
> >>> docs/misc/xen-command-line.pandoc | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>> 
> >>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> >>> index 4ae9391fcd..eb1db25f92 100644
> >>> --- a/docs/misc/xen-command-line.pandoc
> >>> +++ b/docs/misc/xen-command-line.pandoc
> >>> @@ -1876,7 +1876,7 @@ with read and write permissions.
> >>> ### sched
> >>>> `= credit | credit2 | arinc653 | rtds | null`
> >>> -> Default: `sched=credit`
> >>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
> >> Sorry for not weighing in earlier; but this basically makes this documentation useless.
> > 
> > No.  It is the only half useable version, by being the only version
> > which isn't misleading.
> 
> The version in this patch essentially says “This has some options, it also has a default, but we’re not going to tell you any of them, nor what your default most likely is.  Go read KConfig to find out.”  This is completely useless to the person trying to figure out what the default is and what potential alternate values they can put here.
> 
> The vast majority of people who search for this on the internet will have that list of options, and have credit2=default.  As long as we tell them that a local configuration can override the available options and the default, people are smart enough to figure out what their system is doing.
> 
> > It would however be far better to name the CONFIG_ variable (we do
> > elsewhere in this doc) in question so people can actually figure out
> > what they've got in front of them.
> 
> Something like that would be even better, if Roger (or someone) wants to write it up.

I'm happy to send an updated version, but would like to have some
agreement before doing so. Is the text below acceptable to everyone?

### sched
> `= credit | credit2 | arinc653 | rtds | null`

> Default: `sched=credit2`

Choose the default scheduler. Note that the default scheduler is selectable via
Kconfig and depends on which schedulers are enabled. Check
`CONFIG_SCHED_{scheduler_name}_DEFAULT` when building Xen to adjust the
default scheduler.
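As a worked example of what the proposed text asks readers to do, the sketch below greps a build configuration for the relevant symbols. The `sample-xen-config` file and its contents are illustrative assumptions for the example, not output from a real build:

```shell
# Illustrative excerpt of a Xen build config; the symbol values shown
# here are assumptions for the example, not taken from a real tree.
cat > sample-xen-config <<'EOF'
CONFIG_SCHED_CREDIT=y
CONFIG_SCHED_CREDIT2=y
CONFIG_SCHED_CREDIT2_DEFAULT=y
CONFIG_SCHED_DEFAULT="credit2"
EOF

# Show which schedulers are enabled and which one is the built-in default.
grep '^CONFIG_SCHED' sample-xen-config
```

On a real tree the same grep against the generated `xen/.config` should show both the enabled schedulers and the `*_DEFAULT` choice the hypervisor was built with.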

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:45:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:45:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24665.52018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoZK-0008Lm-1W; Wed, 11 Nov 2020 11:45:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24665.52018; Wed, 11 Nov 2020 11:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoZJ-0008Lf-Ub; Wed, 11 Nov 2020 11:45:41 +0000
Received: by outflank-mailman (input) for mailman id 24665;
 Wed, 11 Nov 2020 11:45:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcoZI-0008La-QF
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:45:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f080068-5ca6-4397-9dc9-f10b2db1ee91;
 Wed, 11 Nov 2020 11:45:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4AA8ABDE;
 Wed, 11 Nov 2020 11:45:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605095139;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uhly8ilgY2xEIQDMNyWtxnrDuhegOHSjMb1/Oy5d2Dg=;
	b=ExVavCl7hZiaGVbbvYMf6eYBBCZuI6aa9y7M4iFkAUYMoveXl2QynKQGLqQeJIpTtS08j0
	+Z0HxMMNDiPvuJZ8VzjVrfEb7wi013g7WbSxAhd8nJLAHiR8ye37tBAqDoLk2P7r9ZeIKw
	VRXXeQ12hQ1vyzqDYu4ShsAsLkiGVnE=
Subject: Re: [PATCH] docs: fix documentation about default scheduler
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201110185129.5951-1-roger.pau@citrix.com>
 <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
 <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
 <19A67843-4667-46EB-8F11-109D8989BB71@citrix.com>
 <20201111113726.iqpzf64sgxpnl3gc@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d13b3920-1301-c345-ac6c-6f6ad7af9b8d@suse.com>
Date: Wed, 11 Nov 2020 12:45:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111113726.iqpzf64sgxpnl3gc@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="7J1sQLY4kN27lRr05FopQBEm4ytSmTHfp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--7J1sQLY4kN27lRr05FopQBEm4ytSmTHfp
Content-Type: multipart/mixed; boundary="KS7qOA1v6E5kZbiZkZ2LUVM0ZG5z44eEl";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Message-ID: <d13b3920-1301-c345-ac6c-6f6ad7af9b8d@suse.com>
Subject: Re: [PATCH] docs: fix documentation about default scheduler
References: <20201110185129.5951-1-roger.pau@citrix.com>
 <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
 <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
 <19A67843-4667-46EB-8F11-109D8989BB71@citrix.com>
 <20201111113726.iqpzf64sgxpnl3gc@Air-de-Roger>
In-Reply-To: <20201111113726.iqpzf64sgxpnl3gc@Air-de-Roger>

--KS7qOA1v6E5kZbiZkZ2LUVM0ZG5z44eEl
Content-Type: multipart/mixed;
 boundary="------------CC63A13D7E118BD369DD5934"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------CC63A13D7E118BD369DD5934
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 12:37, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 12:24:50PM +0100, George Dunlap wrote:
>>
>>
>>> On Nov 11, 2020, at 11:10 AM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
>>>
>>> On 11/11/2020 10:12, George Dunlap wrote:
>>>>
>>>>> On Nov 10, 2020, at 6:51 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>>>>
>>>>> Fix the command line document to account for the default scheduler not
>>>>> being credit anymore likely, and the fact that it's selectable from
>>>>> Kconfig and thus different builds could end up with different default
>>>>> schedulers.
>>>>>
>>>>> Fixes: dafd936dddbd ('Make credit2 the default scheduler')
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>>> ---
>>>>> Changes since v1:
>>>>> - Point that the default scheduler is being selected by Kconfig,
>>>>>   don't mention the default Kconfig selection.
>>>>> ---
>>>>> docs/misc/xen-command-line.pandoc | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>>>>> index 4ae9391fcd..eb1db25f92 100644
>>>>> --- a/docs/misc/xen-command-line.pandoc
>>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>>> @@ -1876,7 +1876,7 @@ with read and write permissions.
>>>>> ### sched
>>>>>> `= credit | credit2 | arinc653 | rtds | null`
>>>>> -> Default: `sched=credit`
>>>>> +> Default: selectable via Kconfig.  Depends on enabled schedulers.
>>>> Sorry for not weighing in earlier; but this basically makes this documentation useless.
>>>
>>> No.  It is the only half useable version, by being the only version
>>> which isn't misleading.
>>
>> The version in this patch essentially says “This has some options, it also has a default, but we’re not going to tell you any of them, nor what your default most likely is.  Go read KConfig to find out.”  This is completely useless to the person trying to figure out what the default is and what potential alternate values they can put here.
>>
>> The vast majority of people who search for this on the internet will have that list of options, and have credit2=default.  As long as we tell them that a local configuration can override the available options and the default, people are smart enough to figure out what their system is doing.
>>
>>> It would however be far better to name the CONFIG_ variable (we do
>>> elsewhere in this doc) in question so people can actually figure out
>>> what they've got in front of them.
>>
>> Something like that would be even better, if Roger (or someone) wants to write it up.
>
> I'm happy to send an updated version, but would like to have some
> agreement before doing so. Is the text below acceptable to everyone?
>
> ### sched
>> `= credit | credit2 | arinc653 | rtds | null`
>
>> Default: `sched=credit2`
>
> Choose the default scheduler. Note the default scheduler is selectable via
> Kconfig and depends on enabled schedulers. Check

... CONFIG_SCHED_DEFAULT to see which scheduler is the default.

CONFIG_SCHED_{scheduler_name} specify which schedulers are available.


Juergen

--------------CC63A13D7E118BD369DD5934--

--KS7qOA1v6E5kZbiZkZ2LUVM0ZG5z44eEl--

--7J1sQLY4kN27lRr05FopQBEm4ytSmTHfp--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 11:51:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 11:51:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24673.52030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcofE-0000rF-Ny; Wed, 11 Nov 2020 11:51:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24673.52030; Wed, 11 Nov 2020 11:51:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcofE-0000r8-Ku; Wed, 11 Nov 2020 11:51:48 +0000
Received: by outflank-mailman (input) for mailman id 24673;
 Wed, 11 Nov 2020 11:51:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcofC-0000r3-Io
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 11:51:46 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3fcff635-fff1-4ba3-8ef0-ad401212c46b;
 Wed, 11 Nov 2020 11:51:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605095505;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=6a+ryRQx52NXhthr7G6dA/SjXzSMTcht3I+EUDtCUjA=;
  b=K4VFZAUEKsXiU1DY+H5nUqLVhPtv2UpQhufBV45Fzs4kKUUjOweL9CDh
   qlK/DaD5mYq4X9+yRmm9VzE2nG/ZwHRKZalFX0qwPP8K24vu4UTqnfIMT
   0lXRKoafxZtEzGTnEG0R2OmxkkMRrEgq8l6UuSScfTaUWqmA1Ptj+roYS
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Zv2hMvC0pxucPVXuv42m5YYsJvDxU+ajSHtGFw788G19WKf1OVoILWUXgD/yhETYkLoPFk/T46
 DR7ftaucMHfTMxabEEqzzMSaPmEXJPSRgkQHVyUK7IHmciP6LuPsC1/qRnAViDInuGZ5RF6Zzx
 zBwFli+KtCEl0oTF+GC8ylckptZ6ypY+kZUyPIXwiOV4MZbX/vnR7k6YzLRcanJJSDrHv8lsJg
 bD+n6+GavcxPxSS61hxrwnNhucLWS4KQPJK/8t0UnLSAgofvb1tUPeKPMFGhAS2zJn2hOaIads
 wfA=
X-SBRS: None
X-MesageID: 31279159
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="31279159"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ru6jQkQFQoedfeQVxxij9pJYOym5bDy/X8ubiBiOgjkFTXwLICjTODtH9CUvsMkYKQwjqMqO+eHJQYvA0JfIspLGLe+QH9Ysf/W9l2nI9WLjItMusgBSzC55pJE1aRr0Q9byGkHXKLfcPxbnctsB+EUayp6boP/g4dWCX5B0SHbQUOTzV7RnLSPc6VpaKyDbxDNQBEIowHnKImB9BLUGSrTcZhxslNwT1OUeKHUXbCMQlDlZQDCLgHUMFQ1Pv/HDGkHtzaKBubmnpkO3aEXJbq9IqrRnWoX03u+vF9+30UuFliMdKXuzxCX3X3ViDu2FHYSEUkcrysSu0acP8T1jIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yFxMpIp5YG4oK4PaPjZIjXLyU/zXgZkFczggbSmDzl8=;
 b=YgcpfBWo5SAdyIW+rHJ3h89TyHuC17nxyy5NMrYUQrBQ+JE6LL/TvKzBbVIuKo768j9+66Nw/VNfE2FFLjRm31CddTcm+VT484K1xQ4Vc10tTgEEeoL/7EklVdnXA1tDPiyLovK7hf6ZDCvwJBnaaXemHOR06qUtMQse5N5RgYMZ3Qwc1Fwc4B6SuiQhu1D4s4/dcxjOyxAo87cPMkgpDttyafxTKl9QbC7f51GEbqRSDY9Brng4CYCkiMRgxsAEqTxvKMFA8cs3vxuUH8aizW2JVssIg2Kqf9NVpgb/RDjs870vn7gUOfYWbIbl7C3HWR+mOdRJn6/z0qahvmPnvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yFxMpIp5YG4oK4PaPjZIjXLyU/zXgZkFczggbSmDzl8=;
 b=EB60PY+btwHLB4b8rxXUWTgoYoi9liNzENyJA4NPRASHnGks1tk4LORnl6gE3UAPWYPnw3z4XwuisU6IHXHC08EdfkFbCuWAeAnmCAV9mB66MF2kiZFnRsvgU1l+UH7JLw9jzMtv6ENG5lE4Krlue4PiuEZnU2y5pfxvz0SWlSk=
Date: Wed, 11 Nov 2020 12:51:37 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: George Dunlap <George.Dunlap@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] docs: fix documentation about default scheduler
Message-ID: <20201111115137.ojf7e5j2iq6zcfxe@Air-de-Roger>
References: <20201110185129.5951-1-roger.pau@citrix.com>
 <9A8ADF64-4D76-4BEE-8E1C-4E23E77B9112@citrix.com>
 <e24211db-7ab5-d950-df56-669b90fda041@citrix.com>
 <19A67843-4667-46EB-8F11-109D8989BB71@citrix.com>
 <20201111113726.iqpzf64sgxpnl3gc@Air-de-Roger>
 <d13b3920-1301-c345-ac6c-6f6ad7af9b8d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d13b3920-1301-c345-ac6c-6f6ad7af9b8d@suse.com>
X-ClientProxiedBy: LO2P265CA0039.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3f2d4445-ab60-41a8-b578-08d88638277b
X-MS-TrafficTypeDiagnostic: DM5PR03MB3371:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3371D08787EA1441E75BB09C8FE80@DM5PR03MB3371.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: xIjhVkeqd/UuAux+wtN3W46JtBwnKUzlHNKGyKg8ullC+WmaZer9NyMoxJgBiPSfml86qF1ZCYuz2kVRnIJ0C4JlHcapPd72VPD+pjI65kT6EkeIFArqYrWLoKeCW9fTNR8SR9a8gHTswUYPG126wKyFzNrta+yfR02ro0j4+gb18pIIeJenNKTafdIgWjJTNmQk9dIQpEU+9IxkoZrNt6SzUP0zfMjfBQfsci9Pu138sGbwld/QwPvq678jj3Ye5IenEW5D7Lx53qzNtxFXlV94tiC+5kp7CTYtfug2hSeAlCfjntsho9gHlrlKLTE0
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(396003)(346002)(136003)(376002)(366004)(66574015)(4326008)(6916009)(26005)(6486002)(53546011)(9686003)(5660300002)(83380400001)(16526019)(1076003)(186003)(956004)(85182001)(66476007)(8936002)(86362001)(33716001)(66556008)(54906003)(316002)(2906002)(478600001)(8676002)(66946007)(6666004)(6496006);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: H2/9pMNSmDsIr8ivbJKXF6CHPdwd6mqjbL+l6/z3XdlRnsNxHtEvBRpKL6F0ylr8t498pQ9UWq9+Aqr+iEj9cKFPojZ1+pG4yMexizKvGGDoZVRsospWnp5JOq4bOihg06cl9i9Wy1sr1h9M+23tfdShO5BLl1HEm8Z5CdeooImigGM1o57GiFJK5hFrPwB/1gaM6TxcrOmWj3CVcDMTt8rqvOKdhzCe7kBpTY1OJdEpSQl3Ip5De2LqQ/p6yvooTOoA6J7ef6V8pz1dRIL/rWd5pSWJyj38B5y3xFPdPSvsDZW2kY8qT1fRmTdiJ6+o9mDI/DtgWQaTUYzhUHHpb0h5+HPotkYjoBYt3DsF3SQ1s+ODUU2i9FN0nAVV3RFkwmLoqnN9wORjpTwkYPN5xnaCK9V+TMFnW5royMUrJP6Sk8hJR6hcVe6gsYHCAPlSQlgqaDTexFqQEZIpoxCD0A46dCwNY+5feJQPRvphX1EZ1kdUKpo5okuxnt9EklCSoPVNG9hfrBcnTKXCCYMhjt/Bce496fac9f3nDGLqckrenHREk5Q4qJDyJfB+J3I+0L9JP3VFCSw9hRG6WX1U78Nv8pTEAgEKVNsgyp1DgfKTdOXHmFxe4OKNks2CTmXvvd6/LuVzB+JmYwiXUIh0VE0sBzioSqRaSGnWCDSQ1glw66STOzf4ID2sIqU2QHcRrxplV8DujJ8KKW6ZBF26YCMiA5ilJE1E0i04430uNkyhY2TyIiC6nEBXlnp3dbBupqMeF1PnT8v9pT7446zt7TEPianm9axw1sUZJygtrwmCUsMje/RSkbcL4GDP4W8q9+I3QjxvfOzFD9zJJnk9E1jq7HsrU1an8/BH5NyWn+uvoGOmGhT2We9XPrjVS4WdQtcisviC2z0dvoV8cKj8Yw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f2d4445-ab60-41a8-b578-08d88638277b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 11:51:41.2769
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: E+390UHWhSVMzOG5LZ+FPdBRT+u4TOdBnfmKjGW7v843Lec/0f8EWLxvnEnLadTbApSJrqKy37U+vXzdMSbgPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3371
X-OriginatorOrg: citrix.com

On Wed, Nov 11, 2020 at 12:45:38PM +0100, Jürgen Groß wrote:
> On 11.11.20 12:37, Roger Pau Monné wrote:
> > On Wed, Nov 11, 2020 at 12:24:50PM +0100, George Dunlap wrote:
> > > 
> > > 
> > > > On Nov 11, 2020, at 11:10 AM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> > > > 
> > > > On 11/11/2020 10:12, George Dunlap wrote:
> > > > > 
> > > > > > On Nov 10, 2020, at 6:51 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
> > > > > > 
> > > > > > Fix the command line document to account for the default scheduler not
> > > > > > being credit anymore likely, and the fact that it's selectable from
> > > > > > Kconfig and thus different builds could end up with different default
> > > > > > schedulers.
> > > > > > 
> > > > > > Fixes: dafd936dddbd ('Make credit2 the default scheduler')
> > > > > > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > > > > > ---
> > > > > > Changes since v1:
> > > > > > - Point that the default scheduler is being selected by Kconfig,
> > > > > >   don't mention the default Kconfig selection.
> > > > > > ---
> > > > > > docs/misc/xen-command-line.pandoc | 2 +-
> > > > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > > > > > index 4ae9391fcd..eb1db25f92 100644
> > > > > > --- a/docs/misc/xen-command-line.pandoc
> > > > > > +++ b/docs/misc/xen-command-line.pandoc
> > > > > > @@ -1876,7 +1876,7 @@ with read and write permissions.
> > > > > > ### sched
> > > > > > > `= credit | credit2 | arinc653 | rtds | null`
> > > > > > -> Default: `sched=credit`
> > > > > > +> Default: selectable via Kconfig.  Depends on enabled schedulers.
> > > > > Sorry for not weighing in earlier; but this basically makes this documentation useless.
> > > > 
> > > > No.  It is the only half useable version, by being the only version
> > > > which isn't misleading.
> > > 
> > > The version in this patch essentially says “This has some options, it also has a default, but we’re not going to tell you any of them, nor what your default most likely is.  Go read KConfig to find out.”  This is completely useless to the person trying to figure out what the default is and what potential alternate values they can put here.
> > > 
> > > The vast majority of people who search for this on the internet will have that list of options, and have credit2=default.  As long as we tell them that a local configuration can override the available options and the default, people are smart enough to figure out what their system is doing.
> > > 
> > > > It would however be far better to name the CONFIG_ variable (we do
> > > > elsewhere in this doc) in question so people can actually figure out
> > > > what they've got in front of them.
> > > 
> > > Something like that would be even better, if Roger (or someone) wants to write it up.
> > 
> > I'm happy to send an updated version, but would like to have some
> > agreement before doing so. Is the text below acceptable to everyone?
> > 
> > ### sched
> > > `= credit | credit2 | arinc653 | rtds | null`
> > 
> > > Default: `sched=credit2`
> > 
> > Choose the default scheduler. Note the default scheduler is selectable via
> > Kconfig and depends on enabled schedulers. Check
> 
> ... CONFIG_SCHED_DEFAULT to see which scheduler is the default.
> 
> CONFIG_SCHED_{scheduler_name} specify which schedulers are available.

Hm, that's weird. When I hit help in menuconfig for the default
scheduler selection it reports the option is named
SCHED_{name}_DEFAULT. Will change it.
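For reference, the two namings are consistent with each other: the per-scheduler `SCHED_{name}_DEFAULT` symbols form a Kconfig `choice`, from which a derived `SCHED_DEFAULT` string is set. A rough sketch of that shape (an illustration of the mechanism, not the literal contents of `xen/common/Kconfig`):

```kconfig
choice
	prompt "Default Scheduler?"
	default SCHED_CREDIT2_DEFAULT

	config SCHED_CREDIT_DEFAULT
		bool "Credit Scheduler" if SCHED_CREDIT
	config SCHED_CREDIT2_DEFAULT
		bool "Credit2 Scheduler" if SCHED_CREDIT2
endchoice

# Derived string symbol naming the chosen default scheduler.
config SCHED_DEFAULT
	string
	default "credit" if SCHED_CREDIT_DEFAULT
	default "credit2" if SCHED_CREDIT2_DEFAULT
```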

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:12:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:12:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24689.52045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoyT-0002py-VV; Wed, 11 Nov 2020 12:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24689.52045; Wed, 11 Nov 2020 12:11:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcoyT-0002pr-Rj; Wed, 11 Nov 2020 12:11:41 +0000
Received: by outflank-mailman (input) for mailman id 24689;
 Wed, 11 Nov 2020 12:11:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcoyS-0002pI-84
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:11:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8921d63d-99d5-457a-a797-23a592f214e0;
 Wed, 11 Nov 2020 12:11:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcoyJ-00029N-Lh; Wed, 11 Nov 2020 12:11:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcoyJ-0003DI-Co; Wed, 11 Nov 2020 12:11:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcoyJ-0001Zh-CJ; Wed, 11 Nov 2020 12:11:31 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dmwONrOnj5fYR3r9uev5iqDPzTK2VQfHMJO/Ndqn9i4=; b=5YRWyfCUbmZLPaqLvSQO/BA040
	Z1EhMMPRgZJtqJG4GYoLCIPN3dffKm7/nG5zps5ThnIZecXnm9g4x3t9GbfRsqj+jhpTvvcv+0TPn
	uoItgz7S3xek4zu95v2f+Dv2pLEq4301QY3Bf2t7MgCc+ZeYgy60XrwGKIc3KoT6Dw/U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156632-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156632: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8c610e6075f2a200400970698a810a57ad49220e
X-Osstest-Versions-That:
    ovmf=0af7f8e6a9253960ba820cd6ddfd8c36543d30cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 12:11:31 +0000

flight 156632 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156632/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8c610e6075f2a200400970698a810a57ad49220e
baseline version:
 ovmf                 0af7f8e6a9253960ba820cd6ddfd8c36543d30cb

Last test of basis   156606  2020-11-10 00:39:48 Z    1 days
Testing same since   156632  2020-11-10 17:53:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Liming Gao <gaoliming@byosoft.com.cn>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Mingyue Liang <mingyuex.liang@intel.com>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0af7f8e6a9..8c610e6075  8c610e6075f2a200400970698a810a57ad49220e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:17:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:17:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24697.52057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcp4S-000332-K5; Wed, 11 Nov 2020 12:17:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24697.52057; Wed, 11 Nov 2020 12:17:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcp4S-00032v-Gx; Wed, 11 Nov 2020 12:17:52 +0000
Received: by outflank-mailman (input) for mailman id 24697;
 Wed, 11 Nov 2020 12:17:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcp4Q-00032q-Un
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:17:51 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 172adf74-d758-4783-b8bd-cdf42a36cec7;
 Wed, 11 Nov 2020 12:17:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kcp4Q-00032q-Un
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:17:51 +0000
X-Inumbo-ID: 172adf74-d758-4783-b8bd-cdf42a36cec7
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 172adf74-d758-4783-b8bd-cdf42a36cec7;
	Wed, 11 Nov 2020 12:17:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605097069;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=MmxoU/uey2FSuyUgnmrRfeZ8zz1oa7egMr57xGb4FSw=;
  b=eoS53BAoTVw5BHtbnk/WLJV1a5RBPiXeULjnLnvn1clVa8YKQQ2BdH1X
   u6rTP/dGJl360iU4K4/dGQfadCIQU6Y3E/UQHSaI02qzNlHzCg2fAAjbR
   /8ExV/lFzAP0R6PtbYnJQPXxI85lumAdvGqSmnaYR4EryjVZK3jR5mnZZ
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dI6IVFT3ur6TmEjU/XDFu0eIZ9ELEfAXIJyVmO0LWb1vAoZjulBVVmhQRN9iNcFUC3Zvf3akAw
 9WDE+SXS/slexF5iWS7rboRro6pK5MMiWSzrmpNxB/46HvHz4v7OCTP6O/McNhnfy0kiqBp0kY
 dnKzaW6mM5uSw/0CuWpRAcEEgr7fRSjOZYBGELUC6WaJEeHV1ENLTtDDwiIq1Bop3vOtA4AuO0
 tQRQ/kl6vMjydhEnAt1bqpUjKvf6PdDbqqYswqECp3rTXVRYlzgAguvl8V1Co0Bo83qhn2lJAS
 CNc=
X-SBRS: None
X-MesageID: 32060624
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="32060624"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BwBmJzZaYKhO7RXq+T61hDCtO8nrnXfNSvrqMc2pTJ4E+VCqemO8y1tq8Ek/L+mT/fhunNm/8c7uzlZjN6GfurafJUdosJ1mngx1qCuYbner7+tZxALkYL/Se9RFqp2spyiNsufGHr0xnxhbJs96OtC+AGnYlsqghQSi9cjj8WmLXi+KgxOFfnvq5aA10PJjKbWywzWlCrLTuJ1W+c/PVpdPSY1P1NSOFO8/uhc1BecTW9xVqjn0zvGIsCPLoXrVfHOdvnF5CyFJkEA/rC3BMln+8cUrwwTiyHnmkWb07f1xeLF8waznGVLcGVP7xXmXlweSS5u7RcXiLRc0ywJ+Ag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+d4x0ILteW7/99uP/TrVBWxyFMFgdB1qbWMO9GlMOjI=;
 b=HsDNnPsg5dHt/2Cj5OCAXrdwk9s5ulV0HPLB3gQq3dM9n0C1EC2NR0eqkAUWoAkzpTSvJinhOL7ODXx5FBLOWZA8YLJdexqLP+x/V7fPeVjN25mxvkIVRdGFN4mmkKig2AT43Aj/RYlEWVxSDQbIxHCdbXJUY6zVO8ViYTG3DSIHmpzX7K5ZRJ++FjIb24T3VUukUfjMFccDy+B1OEArxvBgLV2OBqCCom+XP/TkIRM1BHqFFk1hexseL5Gm1I6Otceqkf9uAzRZP2QgFQkFGtHdfXUZ9EjxOpzpzl6lusRb0IxHLeF58cUok3zSzcw30UGxwfoNYiDkj0lV5CDSSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+d4x0ILteW7/99uP/TrVBWxyFMFgdB1qbWMO9GlMOjI=;
 b=FNo/Gt7SU4gt8Z0Uq3X6NJbbHtmWaLBXqilu+qE3S3Ltl18tMGh0P24c6kE4pMawMnpdTWPu8rhHjhB28CT2pcSuLbrohzyZYBm7QBypIZ9ccuMGRsov6O6xrxcGpUYR6p8k56vnUgr5+kDokHxfnzyMbY5PATSBqV+qd0EfhTc=
Date: Wed, 11 Nov 2020 13:17:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
X-ClientProxiedBy: LO2P265CA0493.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: effa972b-3ef6-46a7-9c30-08d8863bc5cd
X-MS-TrafficTypeDiagnostic: DM6PR03MB3946:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB39463A3C82CAE98681B230658FE80@DM6PR03MB3946.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Eaa7MLPELrP06C+EQCG2Yxzj3R7IQSgkeOVAAvsLa0yMFI2QV9j0pLrrk+ZG+/NSYMl7ahCnnUSPKVYTaaOUPWquscjLUTwZUfTckuDA86DxCEI3OzjmvR07mt8d+sGyNdKaKGDbijrIDevnD9WY6h2oy4D4sCR78rTJgihbnKBnwpeq+E8NrAf8uoLbcPqGawkoWwjWnSMX8f7Q0C705Y4SMTxMeeHNBSuimV9poXzcFSneU0XZxWLcYhnEGG0wy4jNiPN12Nc1lhKsdMupUqub4Em/HDVXmuCAQHCIu44KEK9zePVo7E3pH+tLG+jssqOC4Z/M/eJU6XKc3MO3+v5VWdSvjMEotYJqcfkoOSC/HarYiH4qgonpM+ID4JK/
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(396003)(366004)(376002)(39860400002)(26005)(478600001)(83380400001)(66476007)(66556008)(33716001)(6666004)(6916009)(5660300002)(9686003)(16526019)(85182001)(186003)(66946007)(6486002)(54906003)(86362001)(4326008)(2906002)(8676002)(956004)(8936002)(6496006)(316002)(53546011)(1076003)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: NqtCM3tzncCB1vYa3CMobWde4TIgCywWPZBxxRNW65MFqahS/VO1jasgTGH9CboB2LG/8SmyrYWrZqFD1KjthtVhz3kwUiranTvEOl6af8APHfq22RH7geyTwTZRtCxWfVSZiFECj6CmQ3qgyDS/oD+T76PiW4bLLhZHT3ivj5oRt/stjk0iEKF/292AosCP9e1zQ1nzGe04083EJ3CWT3scMADqNDLqWuivfUp5OalbwXJVfJpAds6r6rIS7eDKio80wilfzft5elyvGCV3VJiY/JCLXsmYmK12DggMhqSFZrRYwqbs+XkB3jFdf0/2HpGrauggWCjRszxRkV+T0PKCNIC5nl4J3P9iZYZSAIlUw4M4JTcJu8Wa5SwuO4Yb+7T8GTvhdmKGrUGthZueqblGGQbNLdD7zz0mGNCrBpszgLKCSaFOEK0BaTDGumcuVDY78NJRAx4NmsxRxZCyfX+b2c03/zccKDSTcekaBCbG49OypzcOusubfSslfEDWKlJVozWUKl5cKi7fj70jX3i8x9dxDgvgbEnFhE9OkTIGzpT4dAvn5MjI7nTFKdaqFSW2xpK794h0dAXon31vefNBNYv8tBGdjJxAZieS7K+dhLi2VncYm16u8ouTpnqFfUGKY8wn6vPlTY4SWTsuKGgzVFwceEBeRTYkyQ0XIE5XCoKwtp2cA35UG750dYDxfQSrGYG4xTjDCQvnMXghhqzYjiM9lEDGwTOm+ggxWtbF8r1iHObUsK3LbKyFFUbBFGC5VnQV4u+3Qe5uMid4ZhAhSh8d2VNZLb78ikjnDR089zp0QAJfpJmGtbQ6QdzEiYFkCoBb3lJa71mokGtg7dWtK+fdtAQkb3ud3zMVz5nTnPxKZEKk4St0LFROWTcelUdo6A21Rw/5bleKwlJMrA==
X-MS-Exchange-CrossTenant-Network-Message-Id: effa972b-3ef6-46a7-9c30-08d8863bc5cd
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 12:17:35.4068
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TMPBFxZssIo09xXIs2pk6f/RHu8ZIx5my2QZ8s8CE6Htt7TilSPNEmwQZkutV53BXHcYPT71p3cccIj7S6kRxA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3946
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 03:50:44PM +0100, Jan Beulich wrote:
> On 10.11.2020 14:59, Roger Pau Monné wrote:
> > On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
> >> --- a/xen/arch/x86/mm/hap/nested_hap.c
> >> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> >> @@ -71,24 +71,11 @@
> >>  /*        NESTED VIRT P2M FUNCTIONS         */
> >>  /********************************************/
> >>  
> >> -int
> >> -nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> >> -    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
> >> +void
> >> +nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
> >>  {
> >> -    struct domain *d = p2m->domain;
> >> -    uint32_t old_flags;
> >> -
> >> -    paging_lock(d);
> >> -
> >> -    old_flags = l1e_get_flags(*p);
> >> -    safe_write_pte(p, new);
> >> -
> >> -    if (old_flags & _PAGE_PRESENT)
> >> -        guest_flush_tlb_mask(d, p2m->dirty_cpumask);
> >> -
> >> -    paging_unlock(d);
> >> -
> >> -    return 0;
> >> +    if ( oflags & _PAGE_PRESENT )
> >> +        guest_flush_tlb_mask(p2m->domain, p2m->dirty_cpumask);
> >>  }
> > 
> > This is a verbatim copy of hap_write_p2m_entry_post. I assume there's
> > a reason why we need both, but I'm missing it.
> 
> Only almost, since HAP has
> 
>     if ( oflags & _PAGE_PRESENT )
>         guest_flush_tlb_mask(d, d->dirty_cpumask);
> 
> instead (i.e. they differ in which dirty_cpumask gets used).

Sorry, missed that bit.

> >> --- a/xen/arch/x86/mm/p2m-pt.c
> >> +++ b/xen/arch/x86/mm/p2m-pt.c
> >> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
> >>  {
> >>      struct domain *d = p2m->domain;
> >>      struct vcpu *v = current;
> >> -    int rc = 0;
> >>  
> >>      if ( v->domain != d )
> >>          v = d->vcpu ? d->vcpu[0] : NULL;
> >>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
> >>           p2m_is_nestedp2m(p2m) )
> >> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
> >> +    {
> >> +        unsigned int oflags;
> >> +        mfn_t omfn;
> >> +        int rc;
> >> +
> >> +        paging_lock(d);
> >> +
> >> +        if ( p2m->write_p2m_entry_pre )
> >> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
> >> +
> >> +        oflags = l1e_get_flags(*p);
> >> +        omfn = l1e_get_mfn(*p);
> >> +
> >> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
> >> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
> >> +                              omfn, level);
> >> +        if ( rc )
> >> +        {
> >> +            paging_unlock(d);
> >> +            return rc;
> >> +        }
> >> +
> >> +        safe_write_pte(p, new);
> >> +
> >> +        if ( p2m->write_p2m_entry_post )
> >> +            p2m->write_p2m_entry_post(p2m, oflags);
> >> +
> >> +        paging_unlock(d);
> >> +
> >> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
> >> +             (oflags & _PAGE_PRESENT) &&
> >> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
> >> +             /*
> >> +              * We are replacing a valid entry so we need to flush nested p2ms,
> >> +              * unless the only change is an increase in access rights.
> >> +              */
> >> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
> >> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
> >> +            p2m_flush_nestedp2m(d);
> > 
> > It feels slightly weird to have a nested p2m post hook, and yet have
> > nested specific code here.
> > 
> > Have you considered if the post hook could be moved outside of the
> > locked region, so that we could put this chunk there in the nested p2m
> > case?
> 
> Yes, I did, but I don't think the post hook can be moved out. The
> only alternative therefore would be a 3rd hook. And this hook would
> then need to be installed on the host p2m for nested guests, as
> opposed to nestedp2m_write_p2m_entry_post, which gets installed in
> the nested p2m-s. As said in the description, the main reason I
> decided against a 3rd hook is that I suppose the code here isn't
> HAP-specific (while prior to this patch it was).

I'm not convinced the guest TLB flush needs to be performed while
holding the paging lock. The point of such a flush is to invalidate any
intermediate guest-visible translations that might have become invalid
as a result of the p2m change, but the paging lock doesn't affect the
guest in any way.

It's true that the dirty_cpumask might change, but I think all we care
about is that, when returning from the function, there are no stale TLB
entries containing the now-invalid translation, and that can be achieved
equally well by doing the flush outside of the locked region.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:32:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24705.52069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpI9-0004ph-Ta; Wed, 11 Nov 2020 12:32:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24705.52069; Wed, 11 Nov 2020 12:32:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpI9-0004pa-QV; Wed, 11 Nov 2020 12:32:01 +0000
Received: by outflank-mailman (input) for mailman id 24705;
 Wed, 11 Nov 2020 12:32:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcpI9-0004pV-36
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:32:01 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62660d43-5c8a-417c-9e3a-7d820735da1f;
 Wed, 11 Nov 2020 12:32:00 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kcpI9-0004pV-36
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:32:01 +0000
X-Inumbo-ID: 62660d43-5c8a-417c-9e3a-7d820735da1f
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 62660d43-5c8a-417c-9e3a-7d820735da1f;
	Wed, 11 Nov 2020 12:32:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605097919;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=ya5B27h6RQZoTfphR52MIu7TwVXHQLFStcdcBHExyXg=;
  b=QSJo05GK/ZKFlv9QD7cdyhrvEyqkFIJ0Z/UYTMyxjYaaoZ13GXHnAdEC
   fllrq7hn1jBKOMXzCF5ONKAZ7ys/A6fPsSqgg8fG4wV3u4uu3oE7Ibpz5
   pGvCrw++8XMlpFBGsuvwkg2SV7QRvg7lWXYUzUpbvl4Qvh03LpBpWggfY
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: rDylXtWvm8lU2NfmA01NuS0AWhKrgxt/oyU1iodSuT0ZLAME3b3zXvCEkBxJZYPmsxqDf7o9TS
 mHnUaZlmuVvuYiSfHiYvz6f/7eCKdI916HfxHGndD9H1As+1HGOvYecAQtgZ0WKrGveKfvqm7W
 famIo/Hjhv+ka0gmdwr/0anRaSmOWTnQLDIqPD4PMzIeg6PNPy81b3BiFtvlMg8bo8zX7mYwqh
 c3CKXjVhQ8gVjOOcrCtvWRWnwTi8+raQgkGRZcadcnkXbdZe9Dmjib9TTiCmFn21AxbG9f/5FZ
 gvc=
X-SBRS: None
X-MesageID: 31177693
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="31177693"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MBOFs+GzVKGxWfgbMgoKvb5cu7YP7CQjxTLA/Ac5lSv8KXuafzODJcDDFGLacBPrAslJGzekQq04CELy5OlO0yrXAogrzSAeZDVgA+NlFn0bPch+nm0Mm8Uezf9UHv9lyI6nI8xFiV83TdCZ0ytURDZ2aM0bBgzMJadBjPKWgnHJ/bh5+LevomUhPUSK6y46U3oW5g6ljbFgSNdiwygYEhGAVfT1u+1j2XFgKX7Q0V63qZu0DmZ28SA2rz5iRA/w5EBnrngFwlpqVX//3yQDGW6ZrxV6EHnzxzq0cxTstj2cKl93vGOuy9af4JdpBUBZ+GJBHOn+AQ2fC4xdd44q2A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UMSgqEL7BYtggw2p1pxICEx1c2/JyeBlvGnY2tMPgu8=;
 b=jKKLO5sz4x50jstD09X+u04iyeAiFw/VmrZhThkA+S5qXgNygqnNWHOswhgnJk2mdOaEqdJZR65m/P9HORS6HbamTJTnMbqwzNaGEUCvso/An7yGCPjjB+T39MWIquhxjHbkizTQ77hJKk6+qD2f3+uGVmiBl29ei22GtcWiTuCqvkeIjPlbAn6KJrrMzXwDsKzqsdlJ6Do5+OGnFLS3DH6Mjugi7pPzjFPNAhaGEcN0tub+UFTdZ4vS5cjb4uWUZ/VFeAS2Ea3u+8B2gu8TRQJa1e/n6uTAjVG5Zjkv6z6p1GX52ykwnNza+z+3FfM8/2r9ESzFl6zRrk2YeMlw+A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UMSgqEL7BYtggw2p1pxICEx1c2/JyeBlvGnY2tMPgu8=;
 b=o+JmtyE9MgOOcUuvpR4F19C1DceD/L63cHj1w7ko47ebOgfPbsfmOim9dsNerUmPk88wEI88CTzqqxAVICAqrx1AkAXLisJzlVPz4mFbKPpCxtZuntprXk7KfJvk5JB0x43WdMKp5IRqdH/rPB+joP3lOgmFVsMhn3i+2BWMoo0=
Date: Wed, 11 Nov 2020 13:31:50 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Message-ID: <20201111123150.7lkabdo3wix7jkdk@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-2-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-2-andr2000@gmail.com>
X-ClientProxiedBy: LNXP265CA0078.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b42d3b0f-2530-40cc-ab96-08d8863dc672
X-MS-TrafficTypeDiagnostic: DM5PR03MB3068:
X-Microsoft-Antispam-PRVS: <DM5PR03MB30681296A87B7E823A97DA5B8FE80@DM5PR03MB3068.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: VL6RDgx4SLqR9Mo9f5fXqD/k33wpsAxoZCt9kS1rGW3hlg4cYNZRpVJDkEdTSNLDEw0BIpYlk/nonRJ3GByqGgt/xYmxb2mxZknIw7U0LB7qAk2FyUVHnxYvSjVML1eYs77BE2niMLxykFQ2s15lh6E1ONEU34E/GKOKvT2T37CLcaBu+Rmb+NCyDFKV+9DZbCxWknXiIPEQ/wWk+28zZC+yyF6xbzsjvkoDV02ONZyq2zApN5m5uBCC820yi8dmfS8ziKdEso8NSX2TNRz6c8A3GRIfjOF347a9jw9vXVLIHJW3eDc4CWYEo7uOn4k5MMjMnEfbhDMpsJAgqGUYgyADM1Jlh2Ozp1o+9Y2wEo4RIzBcZ434rA15XnCjMam6hn8p5NC6pdLUfaXQbtzegg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(376002)(346002)(366004)(396003)(136003)(956004)(1076003)(4326008)(2906002)(33716001)(9686003)(6486002)(8936002)(86362001)(16526019)(66476007)(186003)(6496006)(66946007)(66556008)(26005)(5660300002)(316002)(6666004)(7416002)(83380400001)(966005)(6916009)(478600001)(85182001)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 8feHkHG2sNYzFeTgDXAQmfzRdLX4aYnzkz8sTJaf7rhcjvlRYYJ3EDtzZf1hk3eEuXSXsjQjbzJkqylACTChSMvLwd/ufLmDHCIpNgZFQbIgLIjCz7qpyBcCfbrOlXH61hRY/5RG36cWukGORzx2Isa3uZIP1Cdc3lXkyz/TL8orW73s5js/IacsqxjrYGb4XVplYiBEILiuIc/SGKBlP/bh3xJ+h1a1sAgeKwuITpU9HlyanaiuoKnvyU6hd0wg1iKmSC+y9k7Dt+jKbC/k5CQ4nB80xgJgkswyyfQUqN7qS/HZ2BniXP4xQTM88puqsE/F/7N/AFYs8/j47oHX+JRdY+N/Ocy4yoaYlUM3r8Yw7AOJbvZBSsnAcwoHO7EHPlkO+7+QGEjA38antXCOQRiJEUmyYU4cSYuqkSkp77UjHERc6eT/pOnWsaWO6kQzTA+LoC7027WKbrLaDp6QkQkN+FCSSj2m2ArYmCz6FKB3fU/kADm1kilm4f/3dXk2TU0gRU7iyuleKHgUpxrb/saNOJmF7cTKvXbpF89oeh1trOrUfHDlp2mCeb0gF/1EYCugWvXgQ3ud3NtgaORcFhfqgm3lMq7YW2jKxjOpoS6syN7aQsKJtHwNPB9cHa8/TaTq/OjFqtR1IbGwAU8RVTJFqafOwwlnn1qr339TslevIeh+fZ+Yug8upiHkzLUhNYvVPwTmpPt+o1lZIdSBPlc+P7yR9s/k2V6MevCFoViaNCUPQqEBPeTO8vjf9qdUsakFW7vs+E4tcj249x5xLQ6PhK4bvkL5GhSWgK2pkeeTlsjQm3P4RAT9d68G7pPNxtkST+oIWk1u/b/Ylr6yTC7lN0m2uNp8APuyeWgIiE30reRkkjHo3PYn5W5GPITHuqGDtQ49hLCzAB2ggPEa/g==
X-MS-Exchange-CrossTenant-Network-Message-Id: b42d3b0f-2530-40cc-ab96-08d8863dc672
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 12:31:55.5357
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hgs4OgcYxl8wtvOIx3Mn7erDcDjzTmY6G4MmjwkTdQg8o7/kVVyBDUfivaYiRdYMYjHiEVtgs1iKkWHECwFmlg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3068
X-OriginatorOrg: citrix.com

On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> According to https://wiki.xenproject.org/wiki/Linux_PVH:
> 
> Items not supported by PVH
>  - PCI pass through (as of Xen 4.10)
> 
> Allow running PCI remove code on ARM and do not assert for PVH domains.
> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>  tools/libxl/Makefile    | 4 ++++
>  tools/libxl/libxl_pci.c | 4 +++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 241da7fff6f4..f3806aafcb4e 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -130,6 +130,10 @@ endif
>  
>  LIBXL_LIBS += -lyajl
>  
> +ifeq ($(CONFIG_ARM),y)
> +CFLAGS += -DCONFIG_ARM
> +endif
> +
>  LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>  			libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
>  			libxl_internal.o libxl_utils.o libxl_uuid.o \
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index bc5843b13701..b93cf976642b 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
>              goto out_fail;
>          }
>      } else {
> +        /* PCI passthrough can also run on ARM PVH */
> +#ifndef CONFIG_ARM
>          assert(type == LIBXL_DOMAIN_TYPE_PV);
> -
> +#endif

I would just remove the assert now if this is to be used by Arm; that
way you don't need to fork the code for Arm at all.
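Read as a hunk, the suggestion would look something like this (a sketch only — context lines are approximate, and the replacement comment is illustrative, not taken from the tree):

```diff
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
             goto out_fail;
         }
     } else {
-        assert(type == LIBXL_DOMAIN_TYPE_PV);
-
+        /* Reached for PV domains on x86 and, with passthrough, PVH on Arm. */
```

This also drops the need for the `CONFIG_ARM` CFLAGS plumbing in the Makefile hunk above.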

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:45:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:45:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24713.52080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpVG-0005sv-4H; Wed, 11 Nov 2020 12:45:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24713.52080; Wed, 11 Nov 2020 12:45:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpVG-0005so-1G; Wed, 11 Nov 2020 12:45:34 +0000
Received: by outflank-mailman (input) for mailman id 24713;
 Wed, 11 Nov 2020 12:45:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ttLz=ER=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcpVE-0005sj-Ei
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:45:32 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fac7159-7aec-46a3-80ae-c5c7bb35efa8;
 Wed, 11 Nov 2020 12:45:29 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ttLz=ER=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kcpVE-0005sj-Ei
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:45:32 +0000
X-Inumbo-ID: 1fac7159-7aec-46a3-80ae-c5c7bb35efa8
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1fac7159-7aec-46a3-80ae-c5c7bb35efa8;
	Wed, 11 Nov 2020 12:45:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605098729;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=wOOaUGGr8yuGb3kdKrSs9X8NrLohjLcCYBJmLWeB+Rc=;
  b=C7OHBe3m2xDAK4QHIoTDBEmVJrtq1ZDSG+7MwkJrXvV6iiR7luFbG0gg
   Mv+RUp9csSqo/wXgNBTsMTyCwx3MiYXwfqAc3qZ9eNWZ683ahfhJinMdm
   qpuZVIw26GSi7vCc1zRbyCXXA44Wd7LylLyvG8K6GI9anNDQjqAekcD50
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7hJHVumaeqUa2QZfLZhaNCM5d21SmxemD0EJmWzkQyRsAM4wxfe6x5X+IJb2AqOMwyDA1906FM
 nL67wMsPi0GEUjm/a+aCkiEV4rku92rxYSNgMp11D0LAILztepA/S8Dj74OIAMtnEnkQXWwyE6
 uYBp8WS/cKY7U4F6VZsTF9wjCANaRVq8IM3Mh5NnIEHBVdIg6d06hKEWsz8r+ndyhR8hyt6prp
 nXLlWOGtUS/EtYFG9RK0xfALhOH9kMfEZk2Ucal91VOAQR+y44U8yo1Ld321/Do4eeXdM7bGwc
 h4I=
X-SBRS: None
X-MesageID: 30970913
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30970913"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] xen/x86: Work around Clang code generation bug with asm parameters
Date: Wed, 11 Nov 2020 12:45:12 +0000
Message-ID: <20201111124512.2268-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Clang 9 and later don't handle the clobber of %r10 correctly in
_hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122

Rework the logic to use "+r"-style read-write operands, rather than separate
output operands tied back to inputs with matching constraints; this works
around the issue.  For consistency, adjust all operand handling the same way.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/include/asm-x86/guest/xen-hcall.h | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/xen/include/asm-x86/guest/xen-hcall.h b/xen/include/asm-x86/guest/xen-hcall.h
index 03d5868a9e..3240d9807b 100644
--- a/xen/include/asm-x86/guest/xen-hcall.h
+++ b/xen/include/asm-x86/guest/xen-hcall.h
@@ -39,53 +39,50 @@
 
 #define _hypercall64_1(type, hcall, a1)                                 \
     ({                                                                  \
-        long res, tmp__;                                                \
+        long res, _a1 = (long)(a1);                                     \
         asm volatile (                                                  \
             "call hypercall_page + %c[offset]"                          \
-            : "=a" (res), "=D" (tmp__) ASM_CALL_CONSTRAINT              \
-            : [offset] "i" (hcall * 32),                                \
-              "1" ((long)(a1))                                          \
+            : "=a" (res), "+D" (_a1) ASM_CALL_CONSTRAINT                \
+            : [offset] "i" (hcall * 32)                                 \
             : "memory" );                                               \
         (type)res;                                                      \
     })
 
 #define _hypercall64_2(type, hcall, a1, a2)                             \
     ({                                                                  \
-        long res, tmp__;                                                \
+        long res, _a1 = (long)(a1), _a2 = (long)(a2);                   \
         asm volatile (                                                  \
             "call hypercall_page + %c[offset]"                          \
-            : "=a" (res), "=D" (tmp__), "=S" (tmp__)                    \
+            : "=a" (res), "+D" (_a1), "+S" (_a2)                        \
               ASM_CALL_CONSTRAINT                                       \
-            : [offset] "i" (hcall * 32),                                \
-              "1" ((long)(a1)), "2" ((long)(a2))                        \
+            : [offset] "i" (hcall * 32)                                 \
             : "memory" );                                               \
         (type)res;                                                      \
     })
 
 #define _hypercall64_3(type, hcall, a1, a2, a3)                         \
     ({                                                                  \
-        long res, tmp__;                                                \
+        long res, _a1 = (long)(a1), _a2 = (long)(a2),                   \
+            _a3 = (long)(a3);                                           \
         asm volatile (                                                  \
             "call hypercall_page + %c[offset]"                          \
-            : "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__)      \
+            : "=a" (res), "+D" (_a1), "+S" (_a2), "+d" (_a3)            \
               ASM_CALL_CONSTRAINT                                       \
-            : [offset] "i" (hcall * 32),                                \
-              "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3))      \
+            : [offset] "i" (hcall * 32)                                 \
             : "memory" );                                               \
         (type)res;                                                      \
     })
 
 #define _hypercall64_4(type, hcall, a1, a2, a3, a4)                     \
     ({                                                                  \
-        long res, tmp__;                                                \
-        register long _a4 asm ("r10") = ((long)(a4));                   \
+        long res, _a1 = (long)(a1), _a2 = (long)(a2),                   \
+            _a3 = (long)(a3);                                           \
+        register long _a4 asm ("r10") = (long)(a4);                     \
         asm volatile (                                                  \
             "call hypercall_page + %c[offset]"                          \
-            : "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__),     \
-              "=&r" (tmp__) ASM_CALL_CONSTRAINT                         \
-            : [offset] "i" (hcall * 32),                                \
-              "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)),     \
-              "4" (_a4)                                                 \
+            : "=a" (res), "+D" (_a1), "+S" (_a2), "+d" (_a3)            \
+              "+r" (_a4) ASM_CALL_CONSTRAINT                            \
+            : [offset] "i" (hcall * 32)                                 \
             : "memory" );                                               \
         (type)res;                                                      \
     })
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:46:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:46:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24718.52092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpWC-00060K-J0; Wed, 11 Nov 2020 12:46:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24718.52092; Wed, 11 Nov 2020 12:46:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpWC-00060D-G1; Wed, 11 Nov 2020 12:46:32 +0000
Received: by outflank-mailman (input) for mailman id 24718;
 Wed, 11 Nov 2020 12:46:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcpWB-000607-7V
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:46:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a971d993-8040-4323-9086-e4a5e5286556;
 Wed, 11 Nov 2020 12:46:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45EACABD6;
 Wed, 11 Nov 2020 12:46:28 +0000 (UTC)
X-Inumbo-ID: a971d993-8040-4323-9086-e4a5e5286556
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 01/24] block: remove the call to __invalidate_device in
 check_disk_size_change
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-2-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <1e4539ab-199f-1d15-863d-052b0b2d3946@suse.de>
Date: Wed, 11 Nov 2020 13:46:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-2-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> __invalidate_device without the kill_dirty parameter just invalidates
> various clean entries in caches, which doesn't really help us with
> anything, but can cause all kinds of horrible lock orders due to how
> it calls into the file system.  The only reason this hasn't been a
> major issue is because so many people use partitions, for which no
> invalidation was performed anyway.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   fs/block_dev.c | 6 ------
>   1 file changed, 6 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:47:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:47:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24724.52105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpX2-00068M-U4; Wed, 11 Nov 2020 12:47:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24724.52105; Wed, 11 Nov 2020 12:47:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpX2-00068F-Qf; Wed, 11 Nov 2020 12:47:24 +0000
Received: by outflank-mailman (input) for mailman id 24724;
 Wed, 11 Nov 2020 12:47:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcpX2-00068A-BJ
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:47:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5a529f2-eb1b-445a-9fe8-3cfba4aa9938;
 Wed, 11 Nov 2020 12:47:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9D22ABD6;
 Wed, 11 Nov 2020 12:47:22 +0000 (UTC)
X-Inumbo-ID: c5a529f2-eb1b-445a-9fe8-3cfba4aa9938
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 02/24] loop: remove loop_set_size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-3-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <e98e9de2-826a-6a1e-cbf4-605557e846fa@suse.de>
Date: Wed, 11 Nov 2020 13:47:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-3-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> Just use set_capacity_revalidate_and_notify directly, as this function
> can update the block device size as well when the last parameter is set
> to true.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 37 +++++++------------------------------
>   1 file changed, 7 insertions(+), 30 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 12:58:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 12:58:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24734.52116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpi7-0007CW-UW; Wed, 11 Nov 2020 12:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24734.52116; Wed, 11 Nov 2020 12:58:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpi7-0007CP-Rc; Wed, 11 Nov 2020 12:58:51 +0000
Received: by outflank-mailman (input) for mailman id 24734;
 Wed, 11 Nov 2020 12:58:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcpi6-0007CK-LN
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 12:58:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7dfe74a-8d03-4c8a-b641-e807418b4eee;
 Wed, 11 Nov 2020 12:58:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B587ABDE;
 Wed, 11 Nov 2020 12:58:49 +0000 (UTC)
X-Inumbo-ID: c7dfe74a-8d03-4c8a-b641-e807418b4eee
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605099529;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Zkk+pbEE53vnjmjtTGwel4X8ilqn/4qeogMt4MuCuV4=;
	b=i/VxFAHF4orPHRuLpY3tFUSidkNTj2DAdUHpwki28CmkWJyw6dTHeH5+SIwASZnPY5XVCW
	zsrnxLaizh0vzPftzTKTwdV3WXGdcUh+0Lj+HhQTweeP8TbVpDxPidvw3Fy9Xj8O5oQ7FF
	k1ks5iwwdpvuiq+I2AZ4uoFZ1mZnC4Q=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vpt: fix build with old gcc
Message-ID: <b345a4ed-dd6d-42a2-f114-6e6393640be5@suse.com>
Date: Wed, 11 Nov 2020 13:58:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

I believe it was the XSA-336 fix (42fcdd42328f "x86/vpt: fix race when
migrating timers between vCPUs") which has unmasked a bogus
uninitialized variable warning. This is observable with gcc 4.3.4, but
only on 4.13 and older; it's hidden on newer versions apparently due to
the addition to _read_unlock() done by 12509bbeb9e3 ("rwlocks: call
preempt_disable() when taking a rwlock").

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course we could decide to only work around this on the older
branches. But I think it's better to have the fix everywhere (as long as
we still support such old gcc), as further changes may, effectively at
random, unhide the warning again.

--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -401,7 +401,7 @@ int pt_update_irq(struct vcpu *v)
                  * associated with the timer.
                  */
                 time_cb *cb = NULL;
-                void *cb_priv;
+                void *cb_priv = NULL;
 
                 pt_vcpu_lock(v);
                 /* Make sure the timer is still on the list. */


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:06:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:06:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24743.52129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcppJ-0008BC-OF; Wed, 11 Nov 2020 13:06:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24743.52129; Wed, 11 Nov 2020 13:06:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcppJ-0008B5-L6; Wed, 11 Nov 2020 13:06:17 +0000
Received: by outflank-mailman (input) for mailman id 24743;
 Wed, 11 Nov 2020 13:06:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcppI-0008B0-9c
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:06:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d4d6d3b-31ca-4443-9afb-958254c9e92c;
 Wed, 11 Nov 2020 13:06:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B341AFC6;
 Wed, 11 Nov 2020 13:06:14 +0000 (UTC)
X-Inumbo-ID: 1d4d6d3b-31ca-4443-9afb-958254c9e92c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 03/24] nvme: let set_capacity_revalidate_and_notify update
 the bdev size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-4-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <92fdb64d-b6b1-a934-b32b-b3ce565fadea@suse.de>
Date: Wed, 11 Nov 2020 14:06:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-4-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> There is no good reason to call revalidate_disk_size separately.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/nvme/host/core.c | 5 +----
>   1 file changed, 1 insertion(+), 4 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:07:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24747.52141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpqI-0008I8-2G; Wed, 11 Nov 2020 13:07:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24747.52141; Wed, 11 Nov 2020 13:07:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpqH-0008I1-Vc; Wed, 11 Nov 2020 13:07:17 +0000
Received: by outflank-mailman (input) for mailman id 24747;
 Wed, 11 Nov 2020 13:07:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcpqG-0008Hv-S6
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:07:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e955726d-4b3e-459d-917f-8f9d0f91a727;
 Wed, 11 Nov 2020 13:07:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 68451AD77;
 Wed, 11 Nov 2020 13:07:14 +0000 (UTC)
X-Inumbo-ID: e955726d-4b3e-459d-917f-8f9d0f91a727
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 04/24] sd: update the bdev size in sd_revalidate_disk
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-5-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <59f175cf-4a67-7ebc-6f72-cfd28fafa2a9@suse.de>
Date: Wed, 11 Nov 2020 14:07:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-5-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> This avoids the extra call to revalidate_disk_size in sd_rescan and
> is otherwise a no-op because the size did not change, or we are in
> the probe path.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
> ---
>   drivers/scsi/sd.c | 8 +++-----
>   1 file changed, 3 insertions(+), 5 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:07:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:07:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24751.52153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpqf-0008Nq-C3; Wed, 11 Nov 2020 13:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24751.52153; Wed, 11 Nov 2020 13:07:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpqf-0008Nj-81; Wed, 11 Nov 2020 13:07:41 +0000
Received: by outflank-mailman (input) for mailman id 24751;
 Wed, 11 Nov 2020 13:07:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ttLz=ER=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcpqd-0008N4-UF
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:07:39 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 797f297d-fbb9-4c56-b237-f505a4617ac4;
 Wed, 11 Nov 2020 13:07:35 +0000 (UTC)
X-Inumbo-ID: 797f297d-fbb9-4c56-b237-f505a4617ac4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605100055;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=jTs8Zr1UEJJ+HqAfKraLLnswhhUH2wLtQwV/5mjasAk=;
  b=NA8NarIIPkzb3ByX3y0FTCQZ4SR7hLxWHZzuuQpLoOTgoY4UqvB9Hflt
   13jHXq9I3ugnIaCBPWWpOiBaP2EDsbq+7nD0O50+G1i3LyA46aH+bvn00
   1dTSKzjuxWPEhx4VuOT/zLJVO0MFxCSmSvZ875FZK3iyGVjNf5traq4ea
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: XwCwG/3cCEY3R262RXsJvddRrMasfNx/uxqHcbSVReuFIPAVI1C5MQA1L1E7B/bgsyWKLHK+yZ
 KPaNhw0OM5AsyUdwk5bVmdr9rpmm9Rxk2Gxg/CJvZruHbH2pQieSNFI1JR/AtQJm1RHCpatT1J
 bCKDsVlhHRlCVkw1cdxksitAnL3OBITv0MO0itiTzxPAM7eWJIeYUEfIR+39u8BvmhiGXDV7Cg
 /bLJc3NNKM3pGXZgr17/BzY2polPTPbFFY4RiWL91ItqYDqeEpz6Tm8OTFHQiIRj0i+B5L4sRv
 410=
X-SBRS: None
X-MesageID: 30916781
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30916781"
Subject: Re: [PATCH] x86/vpt: fix build with old gcc
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <b345a4ed-dd6d-42a2-f114-6e6393640be5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0f4a5a4e-39ea-c598-d832-333587e137f0@citrix.com>
Date: Wed, 11 Nov 2020 13:07:28 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b345a4ed-dd6d-42a2-f114-6e6393640be5@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 11/11/2020 12:58, Jan Beulich wrote:
> I believe it was the XSA-336 fix (42fcdd42328f "x86/vpt: fix race when
> migrating timers between vCPUs") which has unmasked a bogus
> uninitialized variable warning. This is observable with gcc 4.3.4, but
> only on 4.13 and older; it's hidden on newer versions apparently due to
> the addition to _read_unlock() done by 12509bbeb9e3 ("rwlocks: call
> preempt_disable() when taking a rwlock").
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:07:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24753.52165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpqn-0008R3-KM; Wed, 11 Nov 2020 13:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24753.52165; Wed, 11 Nov 2020 13:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpqn-0008Qv-HM; Wed, 11 Nov 2020 13:07:49 +0000
Received: by outflank-mailman (input) for mailman id 24753;
 Wed, 11 Nov 2020 13:07:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcpqm-0008Qb-AA
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:07:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31516a66-d3e2-4728-b3eb-5226600d9b37;
 Wed, 11 Nov 2020 13:07:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5E19FAD45;
 Wed, 11 Nov 2020 13:07:46 +0000 (UTC)
X-Inumbo-ID: 31516a66-d3e2-4728-b3eb-5226600d9b37
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 05/24] block: remove the update_bdev parameter from
 set_capacity_revalidate_and_notify
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-6-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <a8416510-e3ca-a43f-a3e6-23fb1c20c34f@suse.de>
Date: Wed, 11 Nov 2020 14:07:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-6-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> The update_bdev argument is always set to true, so remove it.  Also
> rename the function to the slightly less verbose set_capacity_and_notify,
> as propagating the disk size to the block device isn't really
> revalidation.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/genhd.c                | 13 +++++--------
>   drivers/block/loop.c         | 11 +++++------
>   drivers/block/virtio_blk.c   |  2 +-
>   drivers/block/xen-blkfront.c |  2 +-
>   drivers/nvme/host/core.c     |  2 +-
>   drivers/scsi/sd.c            |  5 ++---
>   include/linux/genhd.h        |  3 +--
>   7 files changed, 16 insertions(+), 22 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:08:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:08:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24762.52177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcprJ-0000CE-Tb; Wed, 11 Nov 2020 13:08:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24762.52177; Wed, 11 Nov 2020 13:08:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcprJ-0000C7-QB; Wed, 11 Nov 2020 13:08:21 +0000
Received: by outflank-mailman (input) for mailman id 24762;
 Wed, 11 Nov 2020 13:08:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcprJ-0000Bz-76
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:08:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4b2dded-e74d-4b32-ab9a-9e7be49819af;
 Wed, 11 Nov 2020 13:08:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ABAFFAD45;
 Wed, 11 Nov 2020 13:08:19 +0000 (UTC)
X-Inumbo-ID: b4b2dded-e74d-4b32-ab9a-9e7be49819af
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 06/24] block: add a return value to
 set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-7-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <7d892cf3-70bc-7d5d-81f0-07b87e49f6ae@suse.de>
Date: Wed, 11 Nov 2020 14:08:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-7-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> Return if the function ended up sending an uevent or not.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/genhd.c         | 7 +++++--
>   include/linux/genhd.h | 2 +-
>   2 files changed, 6 insertions(+), 3 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:10:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24775.52188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpt6-000180-8k; Wed, 11 Nov 2020 13:10:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24775.52188; Wed, 11 Nov 2020 13:10:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcpt6-00017t-5s; Wed, 11 Nov 2020 13:10:12 +0000
Received: by outflank-mailman (input) for mailman id 24775;
 Wed, 11 Nov 2020 13:10:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pxcb=ER=epam.com=prvs=9584594409=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kcpt4-00017o-Kp
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:10:10 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cdbdfe3-7850-435b-8dd9-6bdfdd72e5a9;
 Wed, 11 Nov 2020 13:10:09 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ABD51YV020758; Wed, 11 Nov 2020 13:10:03 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2051.outbound.protection.outlook.com [104.47.12.51])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf8089t5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 11 Nov 2020 13:10:03 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6851.eurprd03.prod.outlook.com (2603:10a6:20b:2d9::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Wed, 11 Nov
 2020 13:10:01 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 13:10:01 +0000
X-Inumbo-ID: 8cdbdfe3-7850-435b-8dd9-6bdfdd72e5a9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bdKxhatwjQbcs4qJUfqhmBwy1QdRQvXCiAgEvYD2t9qumg9x5SU8odIB0DWwpX0z19rIKJwWBVrS7KGaXeZuSytBQTWwTgQvenVXAF+65fWl7EeaK25ESuVJNIQJAXz2EK3o/dbMTO2kjdaUc0GnATisQYy8nFGWPhv8hQHlefa4ftgZIFV1bJPnKrAueWfDxyUQ0mJStFBBT5h2NfuhhpHPzwc4nPuG8ddnziWRb/UfR75ebBRHlSHer9smPWFp0qGWSJzhf8gER7a284hes9aDTKL9NpTBQvhDRRdVekOO1uA8bp2UR5WUX772XoKuS76RyppSFjNJM2SXcBoPdg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mIDlB9PJGPtCwbyk43lUH/M8WeJL4oZx74brkwSsy7s=;
 b=Mqp1Db4p0wBO8BIxb4hqCt0Jf+9BUYVgE6h1K6jDjYelbs3wyXkwP/pSmnvTBVvT1FeuEpqyuVAbLTeoAEh/1ix6FuU/cUAuje4+rDnj0Q1Ez7NNTAmFQdFSPpG3FcwZ27QzSZnIWgRnX0u8zFlAml/azfYX8CXu+E8saiI2zaczNJ+rBjmEjrx18AdiHk2wknUo8i99bvB9fIZWJRvIWR6l1frFvHEWr7qalOkeeLESfd9C78t4DuM+CYLX/JBb0rJOk52U/9SGhTWAWtFot1arC+XhX7hHPmWB1e57JNUO327KqTNS1IKc5EV6HxBzMEWF6hSRbm97Zh7zh/bLcw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mIDlB9PJGPtCwbyk43lUH/M8WeJL4oZx74brkwSsy7s=;
 b=E32vrHc1NsBI8nPB3G8T0cKMvWCYxSj+dLI8F3Jk5Q36/3/v+Ne7pwTgTBr47IgfcxAibcfBhRuQ6l+YdQMRs1yMLvC2KVtAs+Wp/4lGXwajMSzn7G0NpTcW4cHYqF54tKFo/HlZ+g4kV36Rd2zuhC+Pm6CVjlPU+PzCBRaywbwfRQtz3SDmIhMVDmLxlPOdtyEn8X/uqOSf4/hDg86KEoLvxpiJOdJO1KTBYBz2hfSED3kCfGD42Fi+Of5ChPTDZqlxKEO3nkeXQL8LLxAibhuwzMX3wzSrLV99fdYphIQalA0ViqCoTLf40ZfTHjAQ2iprYDGRc5atpRZeH/e0BQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6851.eurprd03.prod.outlook.com (2603:10a6:20b:2d9::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Wed, 11 Nov
 2020 13:10:01 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 13:10:01 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        Oleksandr Andrushchenko <andr2000@gmail.com>
Subject: Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Thread-Topic: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Thread-Index: AQHWtpbtW65RafIiXE60uDMyi8KYhqnC4BMAgAAKqgA=
Date: Wed, 11 Nov 2020 13:10:01 +0000
Message-ID: <57fefaee-9684-4c67-662e-f4c57313886e@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-2-andr2000@gmail.com>
 <20201111123150.7lkabdo3wix7jkdk@Air-de-Roger>
In-Reply-To: <20201111123150.7lkabdo3wix7jkdk@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d498b15a-0e77-4e9a-b5f6-08d8864318fa
x-ms-traffictypediagnostic: AM9PR03MB6851:
x-microsoft-antispam-prvs: 
 <AM9PR03MB6851069DCBF8751A2F4A2414E7E80@AM9PR03MB6851.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 pWtFbRP7ZlntdVvSVjCIMPHCLUcltTZFE5u3TrphO8AsGXsAGs1ZEH17nBVZdC7CrCoSPZEukYTjdcZ17ZQW09O2vqQ+RCehxULjIhhk8L+gg6CgQGtkpFDy9TuBzimEuUIqizWG51ermSZYeX2vX6ZhvykcG131xrIC5KoxbyrQqEPss4JGSTbvQyjrpELuAeyyoa9aarVpbFQHODHNXX4qHjVeT1DdAQ84bj9qRcuqsY1zt9w90xZUHfXwXs4K/Uc9Morp3wZQYW7BZ42RZ67Q1BhNMGox+Q1maNg6TJSO3SFbD4awy3uJilNE6OROh36jvm+hcwkzkbMQcUO2N23j1oa9OdMUwyoin4Bnb3krnO9Ubr9b5S2BqMXTXEbCj75windFoyfyz5MAkizYhw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39860400002)(136003)(366004)(346002)(396003)(66446008)(66476007)(86362001)(478600001)(31686004)(53546011)(2906002)(26005)(316002)(83380400001)(66946007)(36756003)(64756008)(6506007)(7416002)(4326008)(6512007)(8936002)(5660300002)(71200400001)(8676002)(76116006)(54906003)(2616005)(66556008)(91956017)(186003)(6486002)(6916009)(966005)(31696002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 3XlwRl8tQgjwPb2Wg/DARkWayD6AVuTSVwb3oCpgKaHUUZCjEybTCyZ0yyeZ8ugB8rHafYF8CnZ66QwVky1Cw0j5GUfETFuILIdQ6+UQwCAISMcXn7etEI9ErBI/VkA8S0L8ea4+ZOzWrpcznW8oDyV2O/rw0mlxhyOR7pByfErTbhwzeN/yNkOfCEGFo4kFAI4x/03o84FASS7C49EG66c8eUqc6I4SlzILrmNU0rMuJrVI746YCplCposXsZfvgda6Jdo0xEJ5UrIUUDziJiZbZRfGs2l1hPSni9c+a2c5fPmXcqjEdvWfHUh9WRsWVnh12ZlJdk6OKSyZEN0XcOv0vGOFwSk0CrzCKdlN+qAjHLOSkSRxIdow8l7r/jHi4OAwqD/5yFMJ3T3pJ1smT1UE6ODP/ugchdBz4o/Eu12lLnaZXpRmHbnWtqkAgdU5xZfFgjq8e9DYP5peuaCsqCgMuxYVMo6LB3yanyVVKXNqRcS+dmWC9s+S8TQGT+8iQLw6MKqJI2augNuq8XYR3b8QWrJ0NlNSlzW17l4WoI4fjbb8kzpGRDhLOO49sBzh3HaDT3O6RdpuHSOY+COktukgpu5U7l95lFQp1W2EmQtA6bSF72aSW2t32BH5hDxfJzwSYWkk4qaoql3zV16h/w==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <A68B4FC0FCBC424097840CFD1A57D24B@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d498b15a-0e77-4e9a-b5f6-08d8864318fa
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 13:10:01.1306
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: TbpSoITRLHCdCkkYnhDBbGtxHMtGvinUF1scyb6GumoOMkpd5C+Yy8S8giebpFqYS/U/bTX25GX5KJqEoe62RwApXrXfWSLoOI776tXwOw1zGbcfLw3HF9IdfsXj+aWO
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6851
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-11_06:2020-11-10,2020-11-11 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1011 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011110076

On 11/11/20 2:31 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> According to https://wiki.xenproject.org/wiki/Linux_PVH:
>>
>> Items not supported by PVH
>>   - PCI pass through (as of Xen 4.10)
>>
>> Allow running PCI remove code on ARM and do not assert for PVH domains.
>>
>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>> ---
>>   tools/libxl/Makefile    | 4 ++++
>>   tools/libxl/libxl_pci.c | 4 +++-
>>   2 files changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>> index 241da7fff6f4..f3806aafcb4e 100644
>> --- a/tools/libxl/Makefile
>> +++ b/tools/libxl/Makefile
>> @@ -130,6 +130,10 @@ endif
>>   
>>   LIBXL_LIBS += -lyajl
>>   
>> +ifeq ($(CONFIG_ARM),y)
>> +CFALGS += -DCONFIG_ARM
>> +endif
>> +
>>   LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>>   			libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
>>   			libxl_internal.o libxl_utils.o libxl_uuid.o \
>> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
>> index bc5843b13701..b93cf976642b 100644
>> --- a/tools/libxl/libxl_pci.c
>> +++ b/tools/libxl/libxl_pci.c
>> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
>>               goto out_fail;
>>           }
>>       } else {
>> +        /* PCI passthrough can also run on ARM PVH */
>> +#ifndef CONFIG_ARM
>>           assert(type == LIBXL_DOMAIN_TYPE_PV);
>> -
>> +#endif
> I would just remove the assert now if this is to be used by Arm and
> you don't need to fork the file for Arm.

Sounds good, I will drop then

But what would be the right explanation then? I mean why there was an ASSERT

and now it is safe (for x86) to remove that?

>
> Roger.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:21:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24788.52204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcq3l-0002CT-Fv; Wed, 11 Nov 2020 13:21:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24788.52204; Wed, 11 Nov 2020 13:21:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcq3l-0002CM-C1; Wed, 11 Nov 2020 13:21:13 +0000
Received: by outflank-mailman (input) for mailman id 24788;
 Wed, 11 Nov 2020 13:21:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcq3j-0002Bo-K6
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:21:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bcaf503-017d-480d-9a91-094f4b132b19;
 Wed, 11 Nov 2020 13:21:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcq3a-0003au-Sm; Wed, 11 Nov 2020 13:21:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcq3a-00078B-ED; Wed, 11 Nov 2020 13:21:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcq3a-0005CX-Da; Wed, 11 Nov 2020 13:21:02 +0000
X-Inumbo-ID: 9bcaf503-017d-480d-9a91-094f4b132b19
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SZ2h7mFJz8iKfYQHn8b1nhZXUERr7c39Xe6wJsW1xpI=; b=Lnj1Mv97OEjQrsqyHgBK5lq/Fi
	lLsFZ3t9zkZDWgRDcicBohYbUgTZjIHo3lHb5HgoXc6D3XyYoRevTXKkYAquEgxT231EOd9SlOOqa
	qbSNZVoj+GHmNLAxFWXgArbmQOHOELhL24SH6MvyQSiWEs7Y22d/o4qd5zQZtPP8jLME=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156631-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156631: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=407ab579637ced6dc32cfb2295afb7259cca4b22
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 13:21:02 +0000

flight 156631 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156631/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                407ab579637ced6dc32cfb2295afb7259cca4b22
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  102 days
Failing since        152366  2020-08-01 20:49:34 Z  101 days  168 attempts
Testing same since   156631  2020-11-10 17:46:07 Z    0 days    1 attempts

------------------------------------------------------------
3479 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 665172 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24796.52216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqAK-0002QN-6z; Wed, 11 Nov 2020 13:28:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24796.52216; Wed, 11 Nov 2020 13:28:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqAK-0002QG-3t; Wed, 11 Nov 2020 13:28:00 +0000
Received: by outflank-mailman (input) for mailman id 24796;
 Wed, 11 Nov 2020 13:27:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcqAJ-0002QB-6Z
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:27:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4fa58e88-1205-4687-b3b4-4fb55443bad2;
 Wed, 11 Nov 2020 13:27:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1375AD77;
 Wed, 11 Nov 2020 13:27:52 +0000 (UTC)
X-Inumbo-ID: 4fa58e88-1205-4687-b3b4-4fb55443bad2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605101273;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pcAbsGSJL2UCV1X4kyA6b8DatfGT+Hwdmz0hU91LRyw=;
	b=NmILe0FPzgPRdjcHY+te5ITOlLmc/afa8LaxhlJbfngiQFEHmLJoE6P3NHibo4WwcDf61+
	IFwQ/8Ph2B5YhoJZ5eEOIUMpgqyDknB3MeOcKU+3SRtM1kkUhNWqGifUirpZyltFFhfccd
	zIrsXU2Yn+xTKbWSRB4j/5LSq6+z+4g=
Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Oleksandr <olekstysh@gmail.com>
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
 <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
 <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
 <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com>
 <ed9defbe-b6bf-bd1f-cd88-64d1b0e135c1@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0ab03a33-5056-0de8-e5f7-b54a661a09c5@suse.com>
Date: Wed, 11 Nov 2020 14:27:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <ed9defbe-b6bf-bd1f-cd88-64d1b0e135c1@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 09:41, Oleksandr wrote:
> 
> On 11.11.20 10:08, Jan Beulich wrote:
> 
> Hi Jan
> 
>> On 10.11.2020 21:53, Oleksandr wrote:
>>> On 20.10.20 13:51, Paul Durrant wrote:
>>>
>>> Hi Paul.
>>>
>>> Sorry for the late response.
>>>
>>>>> -----Original Message-----
>>>>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
>>>>> Sent: 15 October 2020 17:44
>>>>> To: xen-devel@lists.xenproject.org
>>>>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano Stabellini <sstabellini@kernel.org>;
>>>>> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
>>>>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
>>>>> <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Paul Durrant
>>>>> <paul@xen.org>; Julien Grall <julien.grall@arm.com>
>>>>> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>>>>>
>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>>
>>>>> This patch introduces a helper whose main purpose is to check
>>>>> whether a domain is using IOREQ server(s).
>>>>>
>>>>> On Arm the current benefit is to avoid calling handle_io_completion()
>>>>> (which implies iterating over all possible IOREQ servers anyway)
>>>>> on every return in leave_hypervisor_to_guest() if there are no active
>>>>> servers for the particular domain.
>>>>> This helper will also be used by one of the subsequent patches on Arm.
>>>>>
>>>>> This involves adding an extra per-domain variable to store the count
>>>>> of servers in use.
>>>>>
>>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>> CC: Julien Grall <julien.grall@arm.com>
>>>>>
>>>>> ---
>>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>>>>> "Add support for Guest IO forwarding to a device emulator"
>>>>>
>>>>> Changes RFC -> V1:
>>>>>      - new patch
>>>>>
>>>>> Changes V1 -> V2:
>>>>>      - update patch description
>>>>>      - guard helper with CONFIG_IOREQ_SERVER
>>>>>      - remove "hvm" prefix
>>>>>      - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>>>>>      - put suitable ASSERT()s
>>>>>      - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>>>>>      - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
>>>>> ---
>>>>>    xen/arch/arm/traps.c    | 15 +++++++++------
>>>>>    xen/common/ioreq.c      |  7 ++++++-
>>>>>    xen/include/xen/ioreq.h | 14 ++++++++++++++
>>>>>    xen/include/xen/sched.h |  1 +
>>>>>    4 files changed, 30 insertions(+), 7 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>>> index 507c095..a8f5fdf 100644
>>>>> --- a/xen/arch/arm/traps.c
>>>>> +++ b/xen/arch/arm/traps.c
>>>>> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
>>>>>        struct vcpu *v = current;
>>>>>
>>>>>    #ifdef CONFIG_IOREQ_SERVER
>>>>> -    bool handled;
>>>>> +    if ( domain_has_ioreq_server(v->domain) )
>>>>> +    {
>>>>> +        bool handled;
>>>>>
>>>>> -    local_irq_enable();
>>>>> -    handled = handle_io_completion(v);
>>>>> -    local_irq_disable();
>>>>> +        local_irq_enable();
>>>>> +        handled = handle_io_completion(v);
>>>>> +        local_irq_disable();
>>>>>
>>>>> -    if ( !handled )
>>>>> -        return true;
>>>>> +        if ( !handled )
>>>>> +            return true;
>>>>> +    }
>>>>>    #endif
>>>>>
>>>>>        if ( likely(!v->arch.need_flush_to_ram) )
>>>>> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
>>>>> index bcd4961..a72bc0e 100644
>>>>> --- a/xen/common/ioreq.c
>>>>> +++ b/xen/common/ioreq.c
>>>>> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>>>>>                                 struct ioreq_server *s)
>>>>>    {
>>>>>        ASSERT(id < MAX_NR_IOREQ_SERVERS);
>>>>> -    ASSERT(!s || !d->ioreq_server.server[id]);
>>>>> +    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
>>>> That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?
>>> ok, looks like it will work.
>>>
>>>
>>>>     Paul
>>>>
>>>>>        d->ioreq_server.server[id] = s;
>>>>> +
>>>>> +    if ( s )
>>>>> +        d->ioreq_server.nr_servers++;
>>>>> +    else
>>>>> +        d->ioreq_server.nr_servers--;
>>>>>    }
>>>>>
>>>>>    #define GET_IOREQ_SERVER(d, id) \
>>>>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>>>>> index 7b03ab5..0679fef 100644
>>>>> --- a/xen/include/xen/ioreq.h
>>>>> +++ b/xen/include/xen/ioreq.h
>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>        uint8_t                bufioreq_handling;
>>>>>    };
>>>>>
>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>>>> +{
>>>>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
>>>>> +
>>>> This seems like an odd place to put such an assertion.
>>> I might have missed something or interpreted it incorrectly, but these
>>> asserts are the result of how I understood the review comment on the
>>> previous version [1].
>>>
>>> I will copy the comment here for convenience:
>>> "This is safe only when d == current->domain and it's not paused,
>>> or when they're distinct and d is paused. Otherwise the result is
>>> stale before the caller can inspect it. This wants documenting by
>>> at least a comment, but perhaps better by suitable ASSERT()s."
>> The way his reply was worded, I think Paul was wondering about the
>> place where you put the assertion, not what you actually assert.
> 
> Shall I put the assertion at the call sites of this helper instead?

Since Paul raised the question, I expect this is a question to him
rather than me?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:33:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:33:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24803.52228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqFq-0003Nx-0N; Wed, 11 Nov 2020 13:33:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24803.52228; Wed, 11 Nov 2020 13:33:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqFp-0003Nq-Ss; Wed, 11 Nov 2020 13:33:41 +0000
Received: by outflank-mailman (input) for mailman id 24803;
 Wed, 11 Nov 2020 13:33:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcqFo-0003Nl-UP
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:33:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c45bd74-1b41-4a82-8308-a1db739d048a;
 Wed, 11 Nov 2020 13:33:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C96A1AD77;
 Wed, 11 Nov 2020 13:33:34 +0000 (UTC)
X-Inumbo-ID: 2c45bd74-1b41-4a82-8308-a1db739d048a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605101614;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HTyaMP2eYC4AgPHdAHAXnPBqQiD7GG97cXDLyIP2Sps=;
	b=pPB/FynxFuKpSytfRaAudrr5MvhFKk402DBXEkIZKxTmDvmrE5ui+JucUfT8FIRJMqrFiA
	Pq7M75bF0IcnLaWLTL1Ew436M2ri3NtqPsxWhVfriSiUwZSly0iZab4AliRsayQsoxFV1P
	6Z2gGWyYLNk+cGxOKI9z4fx19TMOWrg=
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201111111504.r4k7a53spsy7pzjq@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8ab3658f-8b69-455e-74b3-462f89f1cfe4@suse.com>
Date: Wed, 11 Nov 2020 14:33:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111111504.r4k7a53spsy7pzjq@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 12:15, Roger Pau Monné wrote:
> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
>> Under certain conditions CPUs can speculate into the instruction stream
>> past a RET instruction. Guard against this just like 3b7dab93f240
>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>> achieve this that differ: A set of macros gets introduced to post-
>> process RET insns issued by the compiler (or living in assembly files).
>>
>> Unfortunately for clang this requires further features their built-in
>> assembler doesn't support: We need to be able to override insn mnemonics
>> produced by the compiler (which may be impossible, if internally
>> assembly mnemonics never get generated), and we want to use \(text)
>> escaping / quoting in the auxiliary macro.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> TBD: Would be nice to avoid the additions in .init.text, but a query to
>>      the binutils folks regarding the ability to identify the section
>>      stuff is in (by Peter Zijlstra over a year ago:
>>      https://sourceware.org/pipermail/binutils/2019-July/107528.html)
>>      has been left without helpful replies.
>> ---
>> v3: Use .byte 0xc[23] instead of the nested macros.
>> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
>>
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
>>  
>> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
>> +# Check whether \(text) escaping in macro bodies is supported.
>> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
>> +
>> +# Check whether macros can override insn mnemonics in inline assembly.
>> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
> 
> I was going over this to post a bug report to LLVM, but it seems like
> gcc also doesn't override ret when using the above snippet:
> 
> https://godbolt.org/z/oqsPTv

I can't see what's different from

void test(void) {
	asm volatile (".macro ret; .error; .endm; .macro retq; .error; .endm");
}

but this one produces "Error: .error directive invoked in source file"
for me with both old and new gcc.
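
The net effect the build test probes for is simple to state at the byte
level: after the patch's post-processing, every near-return opcode
(0xc3 "ret", 0xc2 "ret $imm16") should be immediately followed by 0xcc
("int3"), so straight-line speculation past the RET hits a trapping
instruction. A rough C sketch of that invariant follows — a toy checker
over hand-built byte sequences only, since without a real instruction
decoder a 0xc2/0xc3 byte inside an immediate or ModRM would be misread:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Toy model of the guarantee the RET macros provide: each near-return
 * opcode in a flat byte sequence must be immediately followed by the
 * int3 opcode (0xcc).  Only valid for hand-built opcode streams; real
 * code would need a proper x86 decoder.
 */
static int rets_are_guarded(const uint8_t *code, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (code[i] == 0xc3) {                  /* ret */
            if (i + 1 >= len || code[i + 1] != 0xcc)
                return 0;
        } else if (code[i] == 0xc2) {           /* ret $imm16 */
            if (i + 3 >= len || code[i + 3] != 0xcc)
                return 0;
            i += 2;                             /* skip the immediate */
        }
    }
    return 1;
}
```

With the guard in place, speculation that runs straight past the RET
executes the int3 and traps instead of continuing into whatever bytes
happen to follow.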

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:53:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24813.52240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqZC-0005H1-L6; Wed, 11 Nov 2020 13:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24813.52240; Wed, 11 Nov 2020 13:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqZC-0005Gu-I1; Wed, 11 Nov 2020 13:53:42 +0000
Received: by outflank-mailman (input) for mailman id 24813;
 Wed, 11 Nov 2020 13:53:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcqZA-0005Gp-Fj
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:53:40 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ffca2e2-c609-4480-a404-f064441d832a;
 Wed, 11 Nov 2020 13:53:37 +0000 (UTC)
X-Inumbo-ID: 1ffca2e2-c609-4480-a404-f064441d832a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605102817;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=s7GADPMuX+5dnG3XHhS/bZMNGeV5Q+BVd7ateJZ9dIE=;
  b=gjjhBjcVdRyuweG5ROYFG8x+1vgxK/t1SMBlviujWbMKuwbonuqemtBR
   aWLDVe3xWvUXsxjeTOUC/aKyd1DiFx+ujvXK87844nksBB81i1sy/Y+aQ
   c5cNY6gsn1WuMHU+9q7ZD3xRvGQqw2R72lNSpZv//jKUuUEhKQKn0vjWv
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ArzrObgGqgom4xBXLD0HJo0NY9E0b/Qn2b8/+qcKqT8NDa46k38wD1/XF5TDu4Zku8VTSD+U5y
 FpXbTetvAef2I3tJINvtA3iT8aXJMhuLziKWU7fZMXKeWG9RfMRrS3YD1lQIMkRaYSQEIH/5Hs
 eD+NnF9oW/kgVblymbmoXvZhVpxvOCFKXEANLdsp2wEdXcI2dA8OszEQP+vdXxV2B+UAstR/HQ
 W2cefDc1BMyC1W3qhXTeyCu7iNMMNRolZyEYCst3K8sbcdQbXWbofye9lBSHvebxkoZAIX8DbJ
 Uog=
X-SBRS: None
X-MesageID: 30950451
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30950451"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ewhhqSNFD4j6N5fZFDKOt8Cr5T3A1BlSNAstsFtP5yE76++Zu4ynlg+C23nLKza5c0PWVab7tNDrBtthqmoUSZ1+QZfvy6L9T63a8lFWSt8deg2S8BnnCicGiLM3kg8w+wK2J9ZZ78JYtdHMRaSBsUu4y9skGXhrtyOReMNW+fbK7CHetD85tC6ftABzO3NV0VoSZONmXVsz0KASWIJG95/23oJod4bOwF4j3LFx/isGaEhGampxFA6k0agnzc65/p40+EFYfqDtyRyMu3McW6+5COtCi2ePuWN63EpViogU+Lof9KJN6qNB7NCm3fOBbmB2dBXc+clHXoxHX3FaKA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x9cJz6XwcErQ0xHf7S8M9fazWnhdd+c0LFstrN7FQhU=;
 b=hNZ6Bfe7kLHYOiT5MvxQI2xMvvgwTFA3nKJqV+pwCwNo8wPMTlqrGnDnuqYzJ2OgKRfY3XGsNmTcd4fOjRKVOxY2fE23nryhw9w4sXEoMvcV69YVm8sqwhpsB2yboNfcPrRHk2FBpmSxSxYMg0vGotjoQW4Vj+8OIEc9Q/eEoMZrQI5w9A4mfRjNluPzg5qR1G4BcBdu/XqDwAslGmHcEVZ3QWESR34X/T57mGMMOHTjbBdEGNhld5pfEFWlLBuoShkkj64J63yWsPPzxIucDG2F5X2yDDK+hn8WI3kifDrhBJ6RiSeaiOY3Ia1TKVlSNm0X4XEcOcQ4vQ1dZ3zKZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x9cJz6XwcErQ0xHf7S8M9fazWnhdd+c0LFstrN7FQhU=;
 b=sja55hRu5T8An/IigONAQV9blt9D+jqyN+CCZhC1P16vNVluTNK85zoVOHhjEHT06xthAoN83QdSaW4+dYk0ads+ZDY4KdaYCMcdum2wtPp+18s+1SPhyVt3WSswjuuy3jLX9gqDvJAbdgebKBfINS5VTUQQJhRuJITPKobkE1w=
Date: Wed, 11 Nov 2020 14:53:11 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Message-ID: <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-3-andr2000@gmail.com>
X-ClientProxiedBy: LO2P265CA0467.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 409dcb7e-5184-4593-b6b4-08d8864923e7
X-MS-TrafficTypeDiagnostic: DM6PR03MB4393:
X-Microsoft-Antispam-PRVS: <DM6PR03MB43930809E9D0CDC2D397C9AD8FE80@DM6PR03MB4393.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: iQ2jObB//qMYfiqExDyXwjkxV5RpTl/bHAko1s61KCTKOrt35XPEhpD8H6ey9GSBsl51zMhSN10pDS0NzWmbXnrUtBi27akvVQ9jN8g+J+5u0lVBOU5hAVzFfUPPCMCfOmgoduMLYN+AfLtYUz6//A/pgcuvKJ33lEZaiGq8PHbO7Nx9bi9eIRglnIzPK+EK563Zxwh1me+youZt/0gfZz94Wh5rxA7yFD7pcRQJILufNEhfOjdB4rgzKVYcuXXlEEaxYTsDafR4RyhewXfZE/GX6ZhjM2jKJ8TR5AU03qazVdTcfJMAoZ35aCGdcthLpaABqC2/CAqjLHU+KF2sUfwDVwD+s+1erPMGJ4QfyCEBh5XJEtU/OO5EILyEADAx
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(376002)(366004)(346002)(136003)(396003)(8936002)(66946007)(1076003)(7416002)(6496006)(8676002)(316002)(83380400001)(6486002)(4326008)(5660300002)(30864003)(6666004)(2906002)(6916009)(33716001)(9686003)(16526019)(66476007)(26005)(66556008)(86362001)(186003)(956004)(85182001)(478600001)(309714004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: tACZ8qX7+RinWfvFc5EHCJFvXt8uPKWabKjd7++k4WmqkoHQb4ic7dXLiIS5Qqow8v0+EvkD6c4lMQC0JfAYnIDnukUPUG1C4Qy93LQMkrT2ZWUNTFTgniqGb11jd69qAhgD2vrhU6moQL/XRrH/1n62Dud+vLmt5KpjBvkZhp/hnlmGmPmtmyO6MPlrLHW115HJ4ol042o/IcCuLH93KiOo5PS85NZN5f12oJ6HjXCuShUoXv3oxI9y8JxJivOlg7oI/bhjdI8Txy0wJ3SZUfFx1DyQ4aKFD0RjBpF+HeVTXFHZ89MA/TCl1diJBLx9p+ZvtDyUafuYUBRCI0ELVdKUaOEsQhaXRsvtTLmaIimm5p4monnhGvuIp+ZN1usQVtpMLvoxbi+/wbS9v39CziGhmBQWOIVzyDcz6rBNMj1AWgcrC1j6fNde04Hoa+t7KuU8cobNOUJrD4MKni+NaMDbrxxSITEmoIOJ+dlFWoVo2yaPpv/E0poeTN0rBy4Ae6m9QloOUKpME6BlIAc0b2v98cU+fgZzV49L29cstgSNyYioHc74pcFjOOTIuAaffAa3bg+uJCYn6mE60iG3ekidp7xetmoKAJuPenL3dc+fdmWaAUeFjV2vuHyD5oFx+i2+XJwdWfZIkH9pezdMvR30PT0W5xCdkcc/hM/t0E5Q4SX6h3iuacaO2agpivw0XL+EirFpgQtpBvdnyJM7YdM9X++MSajbt8wwRVzWgtJWicLMYvhxfgXefHRFpd3a6APiAf9ow5HlV1YvzVN1z9vyBHn2ZKdDTtYeD0vTFNZBMM7b6wRtilXPI+wNnPQkhFLZcYVo4Bt0SuN3HjhyJYjlfglGN3PWvbyBwwJZBlR2pc482FRRX2muCXMJxns5/VPPHbZ1TSQQr/ax+hLyPg==
X-MS-Exchange-CrossTenant-Network-Message-Id: 409dcb7e-5184-4593-b6b4-08d8864923e7
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 13:53:16.8556
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pCd57n63qsyd/HwfLS1drd4057++KFMlcE5Y6J0bNLBO8MetUGo8cHYchp1IFrWMYr5sRsNdBQWP3Kaw1flPDw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4393
X-OriginatorOrg: citrix.com

On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> The original code depends on pciback to manage the assignable device list.
> The functionality which is implemented by pciback and the toolstack,
> and which is relevant/missing/needed for ARM:
> 
> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>    pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>    toolstack needs to know which PCI devices can be passed through it reads
>    that from the relevant sysfs entries of the pciback.
> 
> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>    a PCI device it needs to be unbound from the relevant device driver and bound
>    to pciback (strictly speaking it is not required that the device is bound to
>    pciback, but pciback is again used as a database of the passed through PCI
>    devices, so we can re-bind the devices back to their original drivers when
>    the guest domain shuts down)
> 
> 1. As ARM doesn't use pciback, implement the above with additional sysctls:
>  - XEN_SYSCTL_pci_device_set_assigned

I don't see the point in having this sysctl: Xen already knows when a
device is assigned because the XEN_DOMCTL_assign_device hypercall is
used.

>  - XEN_SYSCTL_pci_device_get_assigned
>  - XEN_SYSCTL_pci_device_enum_assigned
> 2. Extend struct pci_dev to hold assignment state.

I'm not really fond of this: the hypervisor is no place to store a
database like this, unless it's strictly needed.

IMO the right implementation here would be to split Linux pciback into
two different drivers:

 - The pv-pci backend for doing passthrough to classic PV guests.
 - The rest of pciback: device reset, hand-holding driver for devices
   to be assigned and database.

I think there must be something similar in KVM that performs the tasks
in my last point; maybe we could piggyback on it?

If we want to go the route proposed by this patch, i.e. Xen performing
the functions of pciback, you would also have to move the PCI reset
code to Xen, so that you can fully manage the PCI devices from Xen.

> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>  tools/libxc/include/xenctrl.h |   9 +++
>  tools/libxc/xc_domain.c       |   1 +
>  tools/libxc/xc_misc.c         |  46 +++++++++++++++
>  tools/libxl/Makefile          |   4 ++
>  tools/libxl/libxl_pci.c       | 105 ++++++++++++++++++++++++++++++++--
>  xen/arch/arm/sysctl.c         |  66 ++++++++++++++++++++-
>  xen/drivers/passthrough/pci.c |  93 ++++++++++++++++++++++++++++++
>  xen/include/public/sysctl.h   |  40 +++++++++++++
>  xen/include/xen/pci.h         |  12 ++++
>  9 files changed, 370 insertions(+), 6 deletions(-)

I've done some light review below given my questions above.

> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 4c89b7294c4f..77029013da7d 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2652,6 +2652,15 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
>  int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
>                           xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
>  
> +typedef xen_sysctl_pci_device_enum_assigned_t xc_pci_device_enum_assigned_t;
> +
> +int xc_pci_device_set_assigned(xc_interface *xch, uint32_t machine_sbdf,
> +                               bool assigned);
> +int xc_pci_device_get_assigned(xc_interface *xch, uint32_t machine_sbdf);
> +
> +int xc_pci_device_enum_assigned(xc_interface *xch,
> +                                xc_pci_device_enum_assigned_t *e);
> +
>  /* Compat shims */
>  #include "xenctrl_compat.h"
>  
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index 71829c2bce3e..d515191e9243 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -2321,6 +2321,7 @@ int xc_domain_soft_reset(xc_interface *xch,
>      domctl.domain = domid;
>      return do_domctl(xch, &domctl);
>  }
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
> index 3820394413a9..d439c4ba1019 100644
> --- a/tools/libxc/xc_misc.c
> +++ b/tools/libxc/xc_misc.c
> @@ -988,6 +988,52 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
>      return _xc_livepatch_action(xch, name, LIVEPATCH_ACTION_REPLACE, timeout, flags);
>  }
>  
> +int xc_pci_device_set_assigned(
> +    xc_interface *xch,
> +    uint32_t machine_sbdf,
> +    bool assigned)
> +{
> +    DECLARE_SYSCTL;
> +
> +    sysctl.cmd = XEN_SYSCTL_pci_device_set_assigned;
> +    sysctl.u.pci_set_assigned.machine_sbdf = machine_sbdf;
> +    sysctl.u.pci_set_assigned.assigned = assigned;
> +
> +    return do_sysctl(xch, &sysctl);
> +}
> +
> +int xc_pci_device_get_assigned(
> +    xc_interface *xch,
> +    uint32_t machine_sbdf)
> +{
> +    DECLARE_SYSCTL;
> +
> +    sysctl.cmd = XEN_SYSCTL_pci_device_get_assigned;
> +    sysctl.u.pci_get_assigned.machine_sbdf = machine_sbdf;
> +
> +    return do_sysctl(xch, &sysctl);
> +}
> +
> +int xc_pci_device_enum_assigned(xc_interface *xch,
> +                                xc_pci_device_enum_assigned_t *e)
> +{
> +    int ret;
> +    DECLARE_SYSCTL;
> +
> +    sysctl.cmd = XEN_SYSCTL_pci_device_enum_assigned;
> +    sysctl.u.pci_enum_assigned.idx = e->idx;
> +    sysctl.u.pci_enum_assigned.report_not_assigned = e->report_not_assigned;
> +    ret = do_sysctl(xch, &sysctl);
> +    if ( ret )
> +        errno = EINVAL;
> +    else
> +    {
> +        e->domain = sysctl.u.pci_enum_assigned.domain;
> +        e->machine_sbdf = sysctl.u.pci_enum_assigned.machine_sbdf;
> +    }
> +    return ret;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index f3806aafcb4e..6f76ba35aec7 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -130,6 +130,10 @@ endif
>  
>  LIBXL_LIBS += -lyajl
>  
> +ifeq ($(CONFIG_X86),y)
> +CFLAGS += -DCONFIG_PCIBACK
> +endif
> +
>  ifeq ($(CONFIG_ARM),y)
>  CFLAGS += -DCONFIG_ARM
>  endif
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index b93cf976642b..41f89b8aae10 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -319,6 +319,7 @@ retry_transaction2:
>  
>  static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
>  {
> +#ifdef CONFIG_PCIBACK
>      char **domlist;
>      unsigned int nd = 0, i;
>  
> @@ -356,6 +357,33 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
>              }
>          }
>      }
> +#else
> +    libxl_ctx *ctx = libxl__gc_owner(gc);
> +    int ret;
> +    xc_pci_device_enum_assigned_t e;
> +
> +    *list = NULL;
> +    *num = 0;
> +
> +    memset(&e, 0, sizeof(e));
> +    do {
> +        ret = xc_pci_device_enum_assigned(ctx->xch, &e);
> +        if ( ret && errno == EINVAL )
> +            break;
> +        *list = realloc(*list, sizeof(libxl_device_pci) * (e.idx + 1));
> +        if (*list == NULL)
> +            return ERROR_NOMEM;
> +
> +        pcidev_struct_fill(*list + e.idx,
> +                           e.domain,
> +                           e.machine_sbdf >> 8 & 0xff,
> +                           PCI_SLOT(e.machine_sbdf),
> +                           PCI_FUNC(e.machine_sbdf),
> +                           0 /*vdevfn*/);
> +        e.idx++;
> +    } while (!ret);
> +    *num = e.idx;
> +#endif

I don't think the amount of ifdefs added to this file is acceptable.
If we have to go that route, this needs to be split into a different
file, and maybe some of the common bits abstracted together to prevent
code repetition.
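
One possible shape for such an abstraction — a hedged sketch with made-up
names (enum_assigned_to_array(), struct fake_dev, demo_enum()), not
libxl's actual API — would be to hoist the enumerate/realloc/fill loop
that both call sites duplicate into a single helper, so each #ifdef
branch shrinks to one call with a different enumerate callback:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

struct fake_dev { uint16_t domain; uint32_t sbdf; };

/* Stand-in for xc_pci_device_enum_assigned(); nonzero means "no more". */
typedef int (*enum_fn)(uint32_t idx, struct fake_dev *out);

/* Shared loop: grow the array one entry at a time until the enumerator
 * reports the EINVAL-style end-of-enumeration marker. */
static int enum_assigned_to_array(enum_fn enumerate,
                                  struct fake_dev **list, int *num)
{
    uint32_t idx = 0;
    struct fake_dev *arr = NULL, *tmp;

    for ( ; ; idx++ )
    {
        struct fake_dev d;

        if ( enumerate(idx, &d) )
            break;
        tmp = realloc(arr, (idx + 1) * sizeof(*arr));
        if ( !tmp )
        {
            free(arr);
            return -ENOMEM;
        }
        arr = tmp;
        arr[idx] = d;
    }
    *list = arr;
    *num = idx;
    return 0;
}

/* Demo enumerator: pretends three devices exist (values illustrative). */
static int demo_enum(uint32_t idx, struct fake_dev *out)
{
    if ( idx >= 3 )
        return -EINVAL;
    out->domain = 0;
    out->sbdf = 0x800 + idx;
    return 0;
}
```

The pciback/sysfs variant and the sysctl variant would then differ only
in the callback passed in, keeping the allocation and error handling in
one place.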

>      libxl__ptr_add(gc, *list);
>  
>      return 0;
> @@ -411,13 +439,20 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
>  libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
>  {
>      GC_INIT(ctx);
> -    libxl_device_pci *pcidevs = NULL, *new, *assigned;
> +    libxl_device_pci *pcidevs = NULL, *new;
> +    int r;
> +#ifdef CONFIG_PCIBACK
> +    libxl_device_pci *assigned;
> +    int num_assigned;
>      struct dirent *de;
>      DIR *dir;
> -    int r, num_assigned;
> +#else
> +    xc_pci_device_enum_assigned_t e;
> +#endif
>  
>      *num = 0;
>  
> +#ifdef CONFIG_PCIBACK
>      r = get_all_assigned_devices(gc, &assigned, &num_assigned);
>      if (r) goto out;
>  
> @@ -453,6 +488,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
>  
>      closedir(dir);
>  out:
> +#else
> +    memset(&e, 0, sizeof(e));
> +    e.report_not_assigned = 1;
> +    do {
> +        r = xc_pci_device_enum_assigned(ctx->xch, &e);
> +        if ( r && errno == EINVAL )
> +            break;
> +        new = realloc(pcidevs, (e.idx + 1) * sizeof(*new));
> +        if (NULL == new)
> +            continue;
> +
> +        pcidevs = new;
> +        new = pcidevs + e.idx;
> +
> +        memset(new, 0, sizeof(*new));
> +
> +        pcidev_struct_fill(new,
> +                           e.domain,
> +                           e.machine_sbdf >> 8 & 0xff,
> +                           PCI_SLOT(e.machine_sbdf),
> +                           PCI_FUNC(e.machine_sbdf),
> +                           0 /*vdevfn*/);
> +        e.idx++;
> +    } while (!r);
> +    *num = e.idx;
> +#endif
>      GC_FREE;
>      return pcidevs;
>  }
> @@ -606,6 +667,7 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
>      return false;
>  }
>  
> +#ifdef CONFIG_PCIBACK
>  /*
>   * A brief comment about slots.  I don't know what slots are for; however,
>   * I have by experimentation determined:
> @@ -648,11 +710,13 @@ out:
>      fclose(f);
>      return rc;
>  }
> +#endif
>  
>  static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
>  {
> -    char * spath;
>      int rc;
> +#ifdef CONFIG_PCIBACK
> +    char * spath;
>      struct stat st;
>  
>      if ( access(SYSFS_PCIBACK_DRIVER, F_OK) < 0 ) {
> @@ -663,22 +727,27 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
>          }
>          return -1;
>      }
> -
>      spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
>                        pcidev->domain, pcidev->bus,
>                        pcidev->dev, pcidev->func);
>      rc = lstat(spath, &st);
> -
>      if( rc == 0 )
>          return 1;
>      if ( rc < 0 && errno == ENOENT )
>          return 0;
>      LOGE(ERROR, "Accessing %s", spath);
>      return -1;
> +#else
> +    libxl_ctx *ctx = libxl__gc_owner(gc);
> +
> +    rc = xc_pci_device_get_assigned(ctx->xch, pcidev_encode_bdf(pcidev));
> +    return rc == 0 ? 1 : 0;
> +#endif
>  }
>  
>  static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
>  {
> +#ifdef CONFIG_PCIBACK
>      int rc;
>  
>      if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
> @@ -697,10 +766,17 @@ static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
>          return ERROR_FAIL;
>      }
>      return 0;
> +#else
> +    libxl_ctx *ctx = libxl__gc_owner(gc);
> +
> +    return xc_pci_device_set_assigned(ctx->xch, pcidev_encode_bdf(pcidev),
> +                                      true);
> +#endif
>  }
>  
>  static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
>  {
> +#ifdef CONFIG_PCIBACK
>      /* Remove from pciback */
>      if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
>          LOG(ERROR, "Couldn't unbind device!");
> @@ -716,6 +792,12 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
>          }
>      }
>      return 0;
> +#else
> +    libxl_ctx *ctx = libxl__gc_owner(gc);
> +
> +    return xc_pci_device_set_assigned(ctx->xch, pcidev_encode_bdf(pcidev),
> +                                      false);
> +#endif
>  }
>  
>  #define PCIBACK_INFO_PATH "/libxl/pciback"
> @@ -780,10 +862,15 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
>  
>      /* See if the device exists */
>      spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
> +#ifdef CONFIG_PCI_SYSFS_DOM0
>      if ( lstat(spath, &st) ) {
>          LOGE(ERROR, "Couldn't lstat %s", spath);
>          return ERROR_FAIL;
>      }
> +#else
> +    (void)st;
> +    printf("IMPLEMENT_ME: %s lstat %s\n", __func__, spath);
> +#endif
>  
>      /* Check to see if it's already assigned to pciback */
>      rc = pciback_dev_is_assigned(gc, pcidev);
> @@ -1350,8 +1437,12 @@ static void pci_add_dm_done(libxl__egc *egc,
>  
>      if (f == NULL) {
>          LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
> +#ifdef CONFIG_PCI_SYSFS_DOM0
>          rc = ERROR_FAIL;
>          goto out;
> +#else
> +        goto out_no_irq;
> +#endif
>      }
>      for (i = 0; i < PROC_PCI_NUM_RESOURCES; i++) {
>          if (fscanf(f, "0x%llx 0x%llx 0x%llx\n", &start, &end, &flags) != 3)
> @@ -1522,7 +1613,11 @@ static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
>              break;
>      }
>      free(pcidevs);
> +#ifdef CONFIG_PCIBACK
>      return i != num;
> +#else
> +    return 1;
> +#endif
>  }
>  
>  static void device_pci_add_stubdom_wait(libxl__egc *egc,
> diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
> index f87944e8473c..84e933b2eb45 100644
> --- a/xen/arch/arm/sysctl.c
> +++ b/xen/arch/arm/sysctl.c
> @@ -10,6 +10,7 @@
>  #include <xen/lib.h>
>  #include <xen/errno.h>
>  #include <xen/hypercall.h>
> +#include <xen/guest_access.h>
>  #include <public/sysctl.h>
>  
>  void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
> @@ -20,7 +21,70 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
>  long arch_do_sysctl(struct xen_sysctl *sysctl,
>                      XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
> -    return -ENOSYS;
> +    long ret = 0;
> +    bool copyback = 0;
> +
> +    switch ( sysctl->cmd )
> +    {
> +    case XEN_SYSCTL_pci_device_set_assigned:
> +    {
> +        u16 seg;
> +        u8 bus, devfn;
> +        uint32_t machine_sbdf;
> +
> +        machine_sbdf = sysctl->u.pci_set_assigned.machine_sbdf;
> +
> +#if 0
> +        ret = xsm_pci_device_set_assigned(XSM_HOOK, d);
> +        if ( ret )
> +            break;
> +#endif
> +
> +        seg = machine_sbdf >> 16;
> +        bus = PCI_BUS(machine_sbdf);
> +        devfn = PCI_DEVFN2(machine_sbdf);
> +
> +        pcidevs_lock();
> +        ret = pci_device_set_assigned(seg, bus, devfn,
> +                                      !!sysctl->u.pci_set_assigned.assigned);
> +        pcidevs_unlock();
> +        break;
> +    }
> +    case XEN_SYSCTL_pci_device_get_assigned:
> +    {
> +        u16 seg;
> +        u8 bus, devfn;
> +        uint32_t machine_sbdf;
> +
> +        machine_sbdf = sysctl->u.pci_get_assigned.machine_sbdf;
> +
> +        seg = machine_sbdf >> 16;
> +        bus = PCI_BUS(machine_sbdf);
> +        devfn = PCI_DEVFN2(machine_sbdf);
> +
> +        pcidevs_lock();
> +        ret = pci_device_get_assigned(seg, bus, devfn);
> +        pcidevs_unlock();
> +        break;
> +    }
> +    case XEN_SYSCTL_pci_device_enum_assigned:
> +    {
> +        ret = pci_device_enum_assigned(sysctl->u.pci_enum_assigned.report_not_assigned,
> +                                       sysctl->u.pci_enum_assigned.idx,
> +                                       &sysctl->u.pci_enum_assigned.domain,
> +                                       &sysctl->u.pci_enum_assigned.machine_sbdf);
> +        copyback = 1;
> +        break;
> +    }
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +    if ( copyback && (!ret || copyback > 0) &&
> +         __copy_to_guest(u_sysctl, sysctl, 1) )
> +        ret = -EFAULT;
> +
> +    return ret;
>  }
>  
>  /*
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 98e8a2fade60..49b4279c63bd 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -879,6 +879,43 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> +#ifdef CONFIG_ARM
> +int pci_device_set_assigned(u16 seg, u8 bus, u8 devfn, bool assigned)
> +{
> +    struct pci_dev *pdev;
> +
> +    pdev = pci_get_pdev(seg, bus, devfn);
> +    if ( !pdev )
> +    {
> +        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
> +               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));

Take a look at pci_sbdf_t; you should use it as the parameter type,
and you can then print the SBDF with the %pp format.
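
For reference, the packing that pci_sbdf_t models can be sketched as
follows (a simplified stand-in, not the real union from Xen's headers,
which also exposes bitfields and the %pp printk format): segment in
bits 31..16, bus in 15..8, devfn in 7..0, with devfn split into a
5-bit slot and a 3-bit function.

```c
#include <stdint.h>

/* Simplified stand-in for pci_sbdf_t: unpack a 32-bit SBDF into its
 * segment / bus / devfn components. */
typedef struct {
    uint16_t seg;
    uint8_t  bus;
    uint8_t  devfn;
} sbdf_parts;

static sbdf_parts sbdf_unpack(uint32_t sbdf)
{
    sbdf_parts p = {
        .seg   = sbdf >> 16,
        .bus   = (sbdf >> 8) & 0xff,
        .devfn = sbdf & 0xff,
    };
    return p;
}

/* devfn itself is 5 bits of slot and 3 bits of function, matching the
 * PCI_SLOT()/PCI_FUNC() helpers used elsewhere in the patch. */
static unsigned int sbdf_slot(uint8_t devfn) { return devfn >> 3; }
static unsigned int sbdf_func(uint8_t devfn) { return devfn & 7; }
```

So e.g. 0x00010328 decodes as segment 0001, bus 03, slot 05, function 0,
i.e. device 0001:03:05.0.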

> +        return -ENODEV;
> +    }
> +
> +    pdev->assigned = assigned;
> +    printk(XENLOG_ERR "pciback %sassign PCI device %04x:%02x:%02x.%u\n",
> +           assigned ? "" : "de-",
> +           seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +
> +    return 0;
> +}
> +
> +int pci_device_get_assigned(u16 seg, u8 bus, u8 devfn)
> +{
> +    struct pci_dev *pdev;
> +
> +    pdev = pci_get_pdev(seg, bus, devfn);
> +    if ( !pdev )
> +    {
> +        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
> +               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +        return -ENODEV;
> +    }
> +
> +    return pdev->assigned ? 0 : -ENODEV;
> +}
> +#endif
> +
>  #ifndef CONFIG_ARM
>  /*TODO :Implement MSI support for ARM  */
>  static int pci_clean_dpci_irq(struct domain *d,
> @@ -1821,6 +1858,62 @@ int iommu_do_pci_domctl(
>      return ret;
>  }
>  
> +#ifdef CONFIG_ARM
> +struct list_assigned {
> +    uint32_t cur_idx;
> +    uint32_t from_idx;
> +    bool assigned;
> +    domid_t *domain;
> +    uint32_t *machine_sbdf;
> +};
> +
> +static int _enum_assigned_pci_devices(struct pci_seg *pseg, void *arg)
> +{
> +    struct list_assigned *ctxt = arg;
> +    struct pci_dev *pdev;
> +
> +    list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> +    {
> +        if ( pdev->assigned == ctxt->assigned )
> +        {
> +            if ( ctxt->cur_idx == ctxt->from_idx )
> +            {
> +                *ctxt->domain = pdev->domain->domain_id;
> +                *ctxt->machine_sbdf = pdev->sbdf.sbdf;
> +                return 1;
> +            }
> +            ctxt->cur_idx++;
> +        }
> +    }
> +    return 0;
> +}
> +
> +int pci_device_enum_assigned(bool report_not_assigned,
> +                             uint32_t from_idx, domid_t *domain,
> +                             uint32_t *machine_sbdf)
> +{
> +    struct list_assigned ctxt = {
> +        .assigned = !report_not_assigned,
> +        .cur_idx = 0,
> +        .from_idx = from_idx,
> +        .domain = domain,
> +        .machine_sbdf = machine_sbdf,
> +    };
> +    int ret;
> +
> +    pcidevs_lock();
> +    ret = pci_segments_iterate(_enum_assigned_pci_devices, &ctxt);
> +    pcidevs_unlock();
> +    /*
> +     * If not found then report -EINVAL to mark the
> +     * enumeration process as finished.
> +     */
> +    if ( !ret )
> +        return -EINVAL;
> +    return 0;
> +}
> +#endif
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index a07364711794..5ca73c538688 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -1062,6 +1062,40 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
>  #endif
>  
> +/*
> + * These emulate the pciback device (de-)assignment used by the tools
> + * to track current device assignments: all the PCI devices that can
> + * be passed through must be assigned to pciback to mark them as such.
> + * As on ARM we do not run pci{back|front} and emulate the PCI host
> + * bridge in Xen itself, we need to maintain the assignments on our
> + * own in Xen.
> + *
> + * Note on xen_sysctl_pci_device_enum_assigned: EINVAL is used to report
> + * that there are no assigned devices left.
> + */
> +struct xen_sysctl_pci_device_set_assigned {
> +    /* IN */
> +    /* FIXME: is this really a machine SBDF or as Domain-0 sees it? */
> +    uint32_t machine_sbdf;

I think you need to make it clear that, when running on Xen, dom0 (or
the hardware domain) should _never_ change the enumeration of devices,
or else none of this will work.

> +    uint8_t assigned;
> +};
> +
> +struct xen_sysctl_pci_device_get_assigned {
> +    /* IN */
> +    uint32_t machine_sbdf;
> +};
> +
> +struct xen_sysctl_pci_device_enum_assigned {
> +    /* IN */
> +    uint32_t idx;
> +    uint8_t report_not_assigned;
> +    /* OUT */
> +    domid_t domain;
> +    uint32_t machine_sbdf;
> +};
> +typedef struct xen_sysctl_pci_device_enum_assigned xen_sysctl_pci_device_enum_assigned_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_sysctl_pci_device_enum_assigned_t);
> +
>  struct xen_sysctl {
>      uint32_t cmd;
>  #define XEN_SYSCTL_readconsole                    1
> @@ -1092,6 +1126,9 @@ struct xen_sysctl {
>  #define XEN_SYSCTL_livepatch_op                  27
>  /* #define XEN_SYSCTL_set_parameter              28 */
>  #define XEN_SYSCTL_get_cpu_policy                29
> +#define XEN_SYSCTL_pci_device_set_assigned       30
> +#define XEN_SYSCTL_pci_device_get_assigned       31
> +#define XEN_SYSCTL_pci_device_enum_assigned      32
>      uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
>      union {
>          struct xen_sysctl_readconsole       readconsole;
> @@ -1122,6 +1159,9 @@ struct xen_sysctl {
>  #if defined(__i386__) || defined(__x86_64__)
>          struct xen_sysctl_cpu_policy        cpu_policy;
>  #endif
> +        struct xen_sysctl_pci_device_set_assigned pci_set_assigned;
> +        struct xen_sysctl_pci_device_get_assigned pci_get_assigned;
> +        struct xen_sysctl_pci_device_enum_assigned pci_enum_assigned;
>          uint8_t                             pad[128];
>      } u;
>  };
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 2bc4aaf4530c..7bf439de4de0 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -132,6 +132,13 @@ struct pci_dev {
>  
>      /* Data for vPCI. */
>      struct vpci *vpci;
> +#ifdef CONFIG_ARM
> +    /*
> +     * Set if this PCI device is eligible for passthrough,
> +     * i.e. as if it were assigned to the pciback driver.
> +     */
> +    bool assigned;

You can see whether a device is assigned or not by looking at the
domain field AFAICT.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:57:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:57:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24820.52252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqdB-0005S0-BY; Wed, 11 Nov 2020 13:57:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24820.52252; Wed, 11 Nov 2020 13:57:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqdB-0005Rt-84; Wed, 11 Nov 2020 13:57:49 +0000
Received: by outflank-mailman (input) for mailman id 24820;
 Wed, 11 Nov 2020 13:57:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcqd9-0005Ro-VC
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:57:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ceb05317-6698-4dad-9199-110c59416cb6;
 Wed, 11 Nov 2020 13:57:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E144AABD1;
 Wed, 11 Nov 2020 13:57:45 +0000 (UTC)
X-Inumbo-ID: ceb05317-6698-4dad-9199-110c59416cb6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 13/24] dm: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-14-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <1327a2b4-d912-799d-ac94-4f11bf071e15@suse.de>
Date: Wed, 11 Nov 2020 14:57:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-14-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/md/dm.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24825.52264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqeD-0005ac-Mf; Wed, 11 Nov 2020 13:58:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24825.52264; Wed, 11 Nov 2020 13:58:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqeD-0005aV-Ia; Wed, 11 Nov 2020 13:58:53 +0000
Received: by outflank-mailman (input) for mailman id 24825;
 Wed, 11 Nov 2020 13:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcqeC-0005aP-FA
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:58:52 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffb27dc9-86d4-4db1-9d1b-fb4f5e92096c;
 Wed, 11 Nov 2020 13:58:51 +0000 (UTC)
X-Inumbo-ID: ffb27dc9-86d4-4db1-9d1b-fb4f5e92096c
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 31289205
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="31289205"
Date: Wed, 11 Nov 2020 14:55:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>, "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>, "julien.grall@arm.com" <julien.grall@arm.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"wl@xen.org" <wl@xen.org>, Oleksandr Andrushchenko <andr2000@gmail.com>
Subject: Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Message-ID: <20201111135513.yvqfe4xongnhtjcq@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-2-andr2000@gmail.com>
 <20201111123150.7lkabdo3wix7jkdk@Air-de-Roger>
 <57fefaee-9684-4c67-662e-f4c57313886e@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <57fefaee-9684-4c67-662e-f4c57313886e@epam.com>
X-ClientProxiedBy: LO2P265CA0516.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 99d6be5d-e16c-4a04-865f-08d886496ce1
X-MS-TrafficTypeDiagnostic: DM5PR03MB2491:
X-Microsoft-Antispam-PRVS: <DM5PR03MB249121C89A474F9AB1A224CF8FE80@DM5PR03MB2491.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 99d6be5d-e16c-4a04-865f-08d886496ce1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 13:55:19.2330
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ck0wm77rXVOTyPinflbKFsRbHse9uDfqSnjJM1fpihqBt6KtxbrPq71EfscCa7LPfriMLVVwRZto1DoIeI+DCw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2491
X-OriginatorOrg: citrix.com

On Wed, Nov 11, 2020 at 01:10:01PM +0000, Oleksandr Andrushchenko wrote:
> 
> On 11/11/20 2:31 PM, Roger Pau Monné wrote:
> > On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>
> >> According to https://wiki.xenproject.org/wiki/Linux_PVH:
> >>
> >> Items not supported by PVH
> >>   - PCI pass through (as of Xen 4.10)
> >>
> >> Allow running PCI remove code on ARM and do not assert for PVH domains.
> >>
> >> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >> ---
> >>   tools/libxl/Makefile    | 4 ++++
> >>   tools/libxl/libxl_pci.c | 4 +++-
> >>   2 files changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> >> index 241da7fff6f4..f3806aafcb4e 100644
> >> --- a/tools/libxl/Makefile
> >> +++ b/tools/libxl/Makefile
> >> @@ -130,6 +130,10 @@ endif
> >>   
> >>   LIBXL_LIBS += -lyajl
> >>   
> >> +ifeq ($(CONFIG_ARM),y)
> >> +CFLAGS += -DCONFIG_ARM
> >> +endif
> >> +
> >>   LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
> >>   			libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
> >>   			libxl_internal.o libxl_utils.o libxl_uuid.o \
> >> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> >> index bc5843b13701..b93cf976642b 100644
> >> --- a/tools/libxl/libxl_pci.c
> >> +++ b/tools/libxl/libxl_pci.c
> >> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
> >>               goto out_fail;
> >>           }
> >>       } else {
> >> +        /* PCI passthrough can also run on ARM PVH */
> >> +#ifndef CONFIG_ARM
> >>           assert(type == LIBXL_DOMAIN_TYPE_PV);
> >> -
> >> +#endif
> > I would just remove the assert now if this is to be used by Arm and
> > you don't need to fork the file for Arm.
> 
> Sounds good, I will drop then
> 
> But what would be the right explanation then? I mean, why was there an
> ASSERT, and why is it now safe (for x86) to remove it?

An assert is just a safety belt; the expectation is that it's never
hit by actual code. Given that this path will now also be used by PVH
(even if only on Arm), I don't see the point in keeping the assert,
and making it conditional on != Arm seems worse than just dropping it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 13:59:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 13:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24830.52276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqey-0005lX-Vj; Wed, 11 Nov 2020 13:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24830.52276; Wed, 11 Nov 2020 13:59:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqey-0005lP-Sm; Wed, 11 Nov 2020 13:59:40 +0000
Received: by outflank-mailman (input) for mailman id 24830;
 Wed, 11 Nov 2020 13:59:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcqex-0005hc-9F
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 13:59:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a728b7d-f5bb-4eda-8835-d0f1462d3ee6;
 Wed, 11 Nov 2020 13:59:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8ABF3ABD6;
 Wed, 11 Nov 2020 13:59:37 +0000 (UTC)
X-Inumbo-ID: 7a728b7d-f5bb-4eda-8835-d0f1462d3ee6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 15/24] nvme: use set_capacity_and_notify in
 nvme_set_queue_dying
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-16-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <878d9852-4b8d-c5a7-36d4-0fda80fd74c4@suse.de>
Date: Wed, 11 Nov 2020 14:59:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-16-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> Use the block layer helper to update both the disk and block device
> sizes.  Contrary to the name, no notification is sent in this case,
> as a size of 0 is special-cased.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/nvme/host/core.c | 13 +------------
>   1 file changed, 1 insertion(+), 12 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:00:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:00:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24837.52288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqfm-0006e0-8e; Wed, 11 Nov 2020 14:00:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24837.52288; Wed, 11 Nov 2020 14:00:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqfm-0006dt-5e; Wed, 11 Nov 2020 14:00:30 +0000
Received: by outflank-mailman (input) for mailman id 24837;
 Wed, 11 Nov 2020 14:00:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Mnv=ER=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kcqfk-0006do-H6
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:00:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60114471-f7db-4118-9909-ee19c27bd8d9;
 Wed, 11 Nov 2020 14:00:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB5D2ABD6;
 Wed, 11 Nov 2020 14:00:26 +0000 (UTC)
X-Inumbo-ID: 60114471-f7db-4118-9909-ee19c27bd8d9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 20/24] dm-raid: use set_capacity_and_notify
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-21-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <a9b7c6bc-5496-9489-95f9-f86f63ea2b14@suse.de>
Date: Wed, 11 Nov 2020 15:00:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111082658.3401686-21-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/11/20 9:26 AM, Christoph Hellwig wrote:
> Use set_capacity_and_notify to set the size of both the disk and block
> device.  This also gets the uevent notifications for the resize for free.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/md/dm-raid.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:13:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:13:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24850.52299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqsB-0007k5-FT; Wed, 11 Nov 2020 14:13:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24850.52299; Wed, 11 Nov 2020 14:13:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqsB-0007jy-CD; Wed, 11 Nov 2020 14:13:19 +0000
Received: by outflank-mailman (input) for mailman id 24850;
 Wed, 11 Nov 2020 14:13:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pxcb=ER=epam.com=prvs=9584594409=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kcqsA-0007jt-6e
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:13:18 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 763c3517-2d88-454c-a8c7-4bbdedde1955;
 Wed, 11 Nov 2020 14:13:16 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ABED9Rq003303; Wed, 11 Nov 2020 14:13:09 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2051.outbound.protection.outlook.com [104.47.12.51])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf808h8s-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 11 Nov 2020 14:13:05 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5012.eurprd03.prod.outlook.com (2603:10a6:208:106::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.23; Wed, 11 Nov
 2020 14:12:56 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 14:12:56 +0000
X-Inumbo-ID: 763c3517-2d88-454c-a8c7-4bbdedde1955
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lrUYWvfxJLUcNC9jMdpW7vHQt26MvivY+Y7kpf6VsHt1nrG440uB2kYLpwHSmChXSluZQZkVRWvDNjkkSiVe0Q8yCnnIPbaWm1AuzJAZYSNSOQSGAfQOf4Cr3EMWLB+sFgXky7m7EhHIvIlulYxYxPx14PvLdFndC8O/qzX8TwdwafEprhfj8hTLpYHWVOCVi9Viq97QF6niaUB1AKlPPuqzBO522CR8+paqX3nwv6lu1T0z0U7MVx8cXBPCKxeCooYzQ4L+Efgvh1K2jgL5bLZcyNgZqBCfhz4I7Db3JSmK8neupQs9LUPES6NS4yFQ7t+KHBd4YFWTvQxjJaTDpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iG75p0ZMhhX6mbehzPC/+pblUTXNaQ9WSXon7+B8TqQ=;
 b=nsNkNPoqaAIJz8s1IbHwM5Wg1b3uN6OxHHRAeO2/epd10g37pzu/7VNE2vqClMeHGayCZIqOYsZByqs9pURiQzPlcPrc1bGNXDcd8SPtbWU4D3SdCEilnnQrr0yTgOS3OSEmrCPJnA7EVBJc8B/Li3qnMCqeNah0fmzBHP3cEKPvLt2Xr3QV77IvVVw936QGr/hqbWskqR+ed7VuZQSeVH6xh3i2pd/Viyq6rrSRM3GLwz+SKBZbiUrUX0uCd7yodyqETSkZ5LwExTlxr2DkjICcV97GvfPmy/UctQQgfhrWtderBXUKavS4ixxgn1WAM3wprCGjeVFeDYK/Ulmp5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iG75p0ZMhhX6mbehzPC/+pblUTXNaQ9WSXon7+B8TqQ=;
 b=Cb1rcjcOMyna1o67l+zcyZyugAqiAlGGmaL0X9Th4J7ZOjokXkVrenoM6xzDhq+MnvhMCl7UKXC62rgC2LSvV6WlOOuMed7b5wistGUHDTggsBirMJ8YE7ClviOATkEqXM4BspjfyHGzazUYLDnUBJjbcUtuZCyotaxXv1pGsBiqOfUgk4y9XxVWJcUTrMaDgLR9xaGkAP9gsXK8wj/+z6FwjLpdlheUPRNKTT08SIB1ZOteOQiKAdo8rpGWXxda0gfP5+6uGqiNVgyYCBBU15McY8x/LsNyxhkJ5KaNNF2hfz/FsWwSSjlRGays1Rn/RBTBp80ft9+ya3l0M2Pt6g==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        Oleksandr Andrushchenko <andr2000@gmail.com>
Subject: Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Thread-Topic: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Thread-Index: AQHWtpbtW65RafIiXE60uDMyi8KYhqnC4BMAgAAKqgCAAAyigIAABPKA
Date: Wed, 11 Nov 2020 14:12:56 +0000
Message-ID: <fc3a73cc-824b-d941-8ec8-dfcae8ee1756@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-2-andr2000@gmail.com>
 <20201111123150.7lkabdo3wix7jkdk@Air-de-Roger>
 <57fefaee-9684-4c67-662e-f4c57313886e@epam.com>
 <20201111135513.yvqfe4xongnhtjcq@Air-de-Roger>
In-Reply-To: <20201111135513.yvqfe4xongnhtjcq@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 725eafb7-30fd-43cc-6053-08d8864be31c
x-ms-traffictypediagnostic: AM0PR03MB5012:
x-microsoft-antispam-prvs: 
 <AM0PR03MB50125236C00D07ABF2470183E7E80@AM0PR03MB5012.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 R6OXV+Tgb6nziHTGKbdh+nK7Yxa0Ho5jwdw3rWPw3Q9dOlEb5QLhyv79yvVBKmULBN/r33D1lPGlCZLIPCG+wAUJSriD5rg1AI0Dd9+iPx4VhvLeK4Db/P2HHObdwtwGcO53yjnyf+wQGulelWafYESBmzAyvv62MMzhbOOAAdrI+QbbmNeNiFD4priwkQVqHRZxTpv0ZK4Bd9OQ5d5DMNu2D3z/7/C9Av2kzbGM/0EF+nmID+NeskRP/A2itJ9+4NXC6b2cIT10yuM2r14QKWD67NJNtz5Eu52wIGgrhjGGl5K4D6shr5st8491MLJ0AUIHLVEGzPvJSGNk20K05xiB8761xusl7W32qYl640yvqE6++L845i9f5iJQ0VxKJDSLioe45XKgkG0lzMw/Bg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(136003)(376002)(39860400002)(396003)(86362001)(26005)(6506007)(4326008)(54906003)(8676002)(316002)(186003)(2906002)(31696002)(7416002)(8936002)(6486002)(31686004)(71200400001)(966005)(36756003)(6512007)(53546011)(83380400001)(478600001)(6916009)(76116006)(91956017)(5660300002)(2616005)(66946007)(66446008)(66476007)(66556008)(64756008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 LyHnL/5DLqX8Sel5gc5ir9piO3WEAf70KsuVPWbOkBNUn4U80s8/GV8kZtxDrTTEPDGM6x//KEON1OMEvC1FUPrcmvPaGsObvMrMqPVHtckl/tE8LVqcy8mvTVpLGwVxjruY/lV+PsNNZZGGxevC0BrKLjMe1N5E/tAcDKmvpiWbLL8SJr5MU41ORUA5Cm2t0Vgbh0RZ2wW0qQYSTyyWeJmQuMRwAkitFuB8rS1Kb7kn02LraAarMyM2bT3VBZo4i8mpg7cHtzDfuTQdQcdjM/7t03/zpmjU/QMxTOZnDJci7IGZCetcmyzEZ4iPRH1EKQWCUDoKeWqgeN8CX2NAndEi/abHwVlDePswADb1hIGhSTMr1sUvq5LHTSG+7TH35RFvc4nGqZLO3XjsehPejdxhBIkzVa53l8CPwBECF/d4Bg1yJxTvcSSuAWlJr+gPOqIh/nji9ZdmqGsqkvzlPtUjRurOyhf0VpSOdCdbpT7StyKV+zbHOOzbjHC4WyjoUn/v9PzrsBDxVEjB89XLA2NswMmqWOpSamQKaQnRXObTzrYtJBhSewA7VB7nOqmW12XJOQMtpjINfaxaZcccWb0izEAOSICWjE/tCBQTbZqLz59YBh+3T/5x2QKGv+BpvG47Nrl5t4/h3HJH9c0buw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <4A9CE19DBA30604081B156018D65F3B5@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 725eafb7-30fd-43cc-6053-08d8864be31c
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 14:12:56.2856
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 6NqABm3e2i2P7oXl/FqnHNj2gnUbrCmLPfBZxnP6oJAUxmxtR/YiHUEMQbtgXNWKV1AaFZc+Tc3vvpi9FnK7OQTwwWTIlkKC6UjZOuUEqYzT6E+MLkBKEZSWERWNt/bS
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5012
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-11_06:2020-11-10,2020-11-11 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011110083


On 11/11/20 3:55 PM, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 01:10:01PM +0000, Oleksandr Andrushchenko wrote:
>> On 11/11/20 2:31 PM, Roger Pau Monné wrote:
>>> On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>
>>>> According to https://urldefense.com/v3/__https://wiki.xenproject.org/wiki/Linux_PVH__;!!GF_29dbcQIUBPA!nEHd6eivmqtdJxtrhO-3x2Mz9F50JsKUoV7WTEJd_D1N01DrBOJXzGW1QAqwshZ9AMxywbUhOA$ [wiki[.]xenproject[.]org]:
>>>>
>>>> Items not supported by PVH
>>>>    - PCI pass through (as of Xen 4.10)
>>>>
>>>> Allow running PCI remove code on ARM and do not assert for PVH domains.
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>> ---
>>>>    tools/libxl/Makefile    | 4 ++++
>>>>    tools/libxl/libxl_pci.c | 4 +++-
>>>>    2 files changed, 7 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>>>> index 241da7fff6f4..f3806aafcb4e 100644
>>>> --- a/tools/libxl/Makefile
>>>> +++ b/tools/libxl/Makefile
>>>> @@ -130,6 +130,10 @@ endif
>>>>    
>>>>    LIBXL_LIBS += -lyajl
>>>>    
>>>> +ifeq ($(CONFIG_ARM),y)
>>>> +CFALGS += -DCONFIG_ARM
>>>> +endif
>>>> +
>>>>    LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>>>>    			libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
>>>>    			libxl_internal.o libxl_utils.o libxl_uuid.o \
>>>> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
>>>> index bc5843b13701..b93cf976642b 100644
>>>> --- a/tools/libxl/libxl_pci.c
>>>> +++ b/tools/libxl/libxl_pci.c
>>>> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
>>>>                goto out_fail;
>>>>            }
>>>>        } else {
>>>> +        /* PCI passthrough can also run on ARM PVH */
>>>> +#ifndef CONFIG_ARM
>>>>            assert(type == LIBXL_DOMAIN_TYPE_PV);
>>>> -
>>>> +#endif
>>> I would just remove the assert now if this is to be used by Arm and
>>> you don't need to fork the file for Arm.
>> Sounds good, I will drop then
>>
>> But what would be the right explanation then? I mean why there was an ASSERT
>>
>> and now it is safe (for x86) to remove that?
> An assert is just a safe belt, the expectation is that it's never hit
> by actual code. Given that this path will now also be used by PVH
> (even if only on Arm) I don't see the point in keeping the assert, and
> making it conditional to != Arm seems worse than just dropping it.

Ok, so I can write in the patch description something like:

"this path is now used by PVH, so the assert is no longer valid"

Does it sound ok?

> Thanks, Roger.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:21:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24860.52324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcr02-0000JO-M9; Wed, 11 Nov 2020 14:21:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24860.52324; Wed, 11 Nov 2020 14:21:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcr02-0000J2-J5; Wed, 11 Nov 2020 14:21:26 +0000
Received: by outflank-mailman (input) for mailman id 24860;
 Wed, 11 Nov 2020 14:21:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcr01-0000HG-5h
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:21:25 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1477b381-4151-4a07-995a-22faa2ba7360;
 Wed, 11 Nov 2020 14:21:19 +0000 (UTC)
X-Inumbo-ID: 1477b381-4151-4a07-995a-22faa2ba7360
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605104479;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=euGk2UDOuVDTVqJAkajzJT3pF1DGqy1mjJlYnONO37w=;
  b=Q+QzypHjGhqNcigSEbRT75/+8MCaS1K4d/RNXW2k/WzRHu2GTQ/Rr5Vo
   c1GVNUuJn5QuXXdQMfjlwMcX0UTgfZZoEe7FjvU/VMeo0eTpzvpC1AZgY
   BtwqysRT1wrmUGfVkpiDzfbiF3SPICvA0nP90ZMu6ghcfXx3rP7m/mw9j
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 7Cb9gf1f6/YcEob2H6at6B3acm3+S9ayQdtKyXGaNv/hfpD9x62dCJ35V5xPK27RmoB6rdZ15Y
 k++Xjr/6tBec/Jsis0+GSV8cLgHUZV8U/C802h1rii6q+Ff0PaoYJSmt3E3drCcnWFfjCDVjHM
 W9qMqThW54WxtUWgTolKwVdWdc4E3Oqp2gKbHTAZb/hP1p0Rl6GO35UPWbz8K4ZT8wfgG5MRIl
 WUi5DG8k61rKXFvn7XhGt0N5i9ldyiUAfWSQkXae5x1WrPXIKdzRu4p9KZrlJVjHb0A2JLKDwh
 Ics=
X-SBRS: None
X-MesageID: 31186527
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="31186527"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U9o7dnobV0AO/vy6gVegeSCYWJDFlNdiq+5ScPFcIzr5rc8JVjFMXji1kusA0pn5/uifFktS9ay+1qZCWpSE6lm7l+njtk3aHSjmStTSLQu4ECY7yYUevWtCiCkba2qo8qRqAXmiFzxMpSM6MSvdlRShPY0lhI6ArnioD1Val6k2hZkx/xz4KU/baHd/mo8nLe1f4D6N74ZminRvDWdythSPMwM+YPnCFTmYfAUN6gSBx3tq+jdwqQP6SrB185HD4cUG2wxCgqC+49G295pqMfp1Z53t0khgMdMQM/y8jlsJk4msZmtDfimRMHYajAs+1HSCsmXMNEEukBqdjFtI4w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uLgc+bWB+b/0Dvw3ibOvwF8NjO3fjE2arC95H/R78UM=;
 b=njGSyA85bseLVmWIfcotqRELnVDVdlXKsAVU3e0OKjfVG36U1ZLedRbnCajh9OmDflHAE2cw73nRGs+nbJ8Xqn4zRAYzG+U+wJKbUYv0NhSgI5ks0LlcskPkVogYAlm49mcMzi3gno12hrz4baRFOJShknAytr3fZ8n+tdmccxJzcGvffpYAxXyriXmfFSockbhQ6J17ZpaMFWuKyLynJ/FI+hRAFUA9Iolh6Oadfp5i8ZfOjCuoEe4ZGxSp2ShgC8r77a2YrVrvBZtmPJCaHbQFNwj8Wy3+VXCB1WV4LHJzChlEb6EvjX6isyvPZOyR5yH2fFJRO/LmSx/YN0/qIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uLgc+bWB+b/0Dvw3ibOvwF8NjO3fjE2arC95H/R78UM=;
 b=bJ+WtfAgPnIpjn9+3OrIOTCNT6f7Hx7sxeTlq7GL8/ifv7zbhRITbEEFOWi77Wp29Qn+tJRRloIXjte7v4ukV50cwiUEMH4HRoxjOLSLyXpInaM/XhWeeYTkeyKzSZjiQn7UlaV00AtUa9M7Ojs3Dv54yjvMnsjvppBJc74O2h4=
Date: Wed, 11 Nov 2020 15:21:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>, "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>, "julien.grall@arm.com" <julien.grall@arm.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"wl@xen.org" <wl@xen.org>, Oleksandr Andrushchenko <andr2000@gmail.com>
Subject: Re: [SUSPECTED SPAM][PATCH 01/10] pci/pvh: Allow PCI toolstack code
 run with PVH domains on ARM
Message-ID: <20201111142109.cumpeziv4xruy6t2@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-2-andr2000@gmail.com>
 <20201111123150.7lkabdo3wix7jkdk@Air-de-Roger>
 <57fefaee-9684-4c67-662e-f4c57313886e@epam.com>
 <20201111135513.yvqfe4xongnhtjcq@Air-de-Roger>
 <fc3a73cc-824b-d941-8ec8-dfcae8ee1756@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <fc3a73cc-824b-d941-8ec8-dfcae8ee1756@epam.com>
X-ClientProxiedBy: LO2P265CA0460.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::16) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b31d1ac2-015f-4820-b3e9-08d8864d0c6c
X-MS-TrafficTypeDiagnostic: DM5PR03MB2636:
X-Microsoft-Antispam-PRVS: <DM5PR03MB2636A6930DDCD81CA011EBDB8FE80@DM5PR03MB2636.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: bfHN0B9SG2hNbfxoPfXJxQghrTXej6+kxoz2UtJgHU1QilytnMpqI5lM3VLzhWfCBlHEuGmO0tH3u8d3eonMTf5cxTPHWXm919ko88RDagYmp9e1w4tLiBxM1Z/zp4qEj2QGl+rkFObARGNBn5+RcWyUAImIcLJkJ0B3LU++cLumCCoQAk1RkjvRBqam6f+Ui4dqSYg73IeYTAaYW2pkef+rTr73y/z/+hFfr10a92u/OrFjxxDzua2l1NtaazT7FiRzlqf10mz/0Npa/WyhSL7CzM14TSt7zvf2fsXcyWZiXLfrARe+r5p5s7mbnu7Hz20LW4BcmmV/2//YOEJkl5Tpo2Bvp7omlYcltSeqFJN4+Sb7IxgaE1YmZM/3Xd9EHPaScBZ9/ZT2xVaRrBIjEg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(136003)(39860400002)(376002)(346002)(396003)(6486002)(66946007)(66556008)(9686003)(478600001)(66476007)(2906002)(5660300002)(83380400001)(186003)(6496006)(6666004)(16526019)(8676002)(85182001)(966005)(33716001)(6916009)(86362001)(956004)(53546011)(54906003)(316002)(4326008)(1076003)(26005)(8936002)(7416002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: xol+fdf4arqho64v1pyu2kA0PC0BrlID0MQueLGpQ7L6+zGZSrgFL5Wu2QagGTtFAhF5Vq3c5i+1ui7UZZPWiLvZ70owGzRbjspBvDv1ReuZwickCcVSW6iTkRuxr2FusmM/d+IDjqb6DyjLGttPfVOZ0utPu8msvykh+LhV2nlJdAJ5eorOIWaGMxrjQFmcJuz078uK3xvSHAybvTtQSvJBVaRIJGln5Pks2Bqva+s57A1QaTBQVzDZymhUKTtcPDpzG6rEiCLPjTxINHz0sGyAz2Nv8Ih4Y9jkFcR18swOXmeYmdr2KbiHT0HgqpTKEVfdjGhQdNqcObnWNNFbqk9RoGdqbRXarp8h5GrOyQeWQ4R0aiD0c8SCT9+iAOqfFG8Dn6Ur+DssIYyyRR3/3CYyBYoUBNHx7QMMmwRwvDBlwUrbjihgsWhpCNTGLUd3PRNqsm8lKg//BTW8DimyJdiVL19E6drwBpkd4jtbxqfL83iJpgGxefvkzUBKFHef6pWrktXJGd+X0bYIBT42PYotSLbX7wJdy97IKlICjAgkfdApB+wtyGCTCxrc/aFLoA7ObaUZzmMcxArulBh2FrhTpRz8uSvrC5a70ebhROdQ4WwHTP8nqYOFnhzL2MnbdNRnZhrafxjtDRNz4e+bm4bX8kI98r+rFZYzXX1oW8QaKl+hFgevTwbXQo+w3KGk2TJavjtLbDL/rG+022/HXqQzP168bdSWHGZ07+m+q+gumPAPQ5Vu1UVvQJ+k58N16laUWA7Lq1bus9vWYC+NC/AUUyFYPFQBYiwFFbO8RTdIbYofhQygr2Q6dNlcBh+qc/p0T+beRCk/NYPMZxKAXfmFMP7xsu+luITLK65+3gjaDT1lBJY6GiraIzakS4IVYZHsz0nZckKoA/ElNbnUcw==
X-MS-Exchange-CrossTenant-Network-Message-Id: b31d1ac2-015f-4820-b3e9-08d8864d0c6c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 14:21:15.3482
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: X94TMd4UTRNBmqTT4xJZU+w6lOSOWi91M2eor6qinZOS7FzexZG/UuOx3vNl9LpN/MYFfgCumWBySyMnkJOllg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2636
X-OriginatorOrg: citrix.com

On Wed, Nov 11, 2020 at 02:12:56PM +0000, Oleksandr Andrushchenko wrote:
> 
> On 11/11/20 3:55 PM, Roger Pau Monné wrote:
> > On Wed, Nov 11, 2020 at 01:10:01PM +0000, Oleksandr Andrushchenko wrote:
> >> On 11/11/20 2:31 PM, Roger Pau Monné wrote:
> >>> On Mon, Nov 09, 2020 at 02:50:22PM +0200, Oleksandr Andrushchenko wrote:
> >>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>>>
> >>>> According to https://urldefense.com/v3/__https://wiki.xenproject.org/wiki/Linux_PVH__;!!GF_29dbcQIUBPA!nEHd6eivmqtdJxtrhO-3x2Mz9F50JsKUoV7WTEJd_D1N01DrBOJXzGW1QAqwshZ9AMxywbUhOA$ [wiki[.]xenproject[.]org]:
> >>>>
> >>>> Items not supported by PVH
> >>>>    - PCI pass through (as of Xen 4.10)
> >>>>
> >>>> Allow running PCI remove code on ARM and do not assert for PVH domains.
> >>>>
> >>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>>> ---
> >>>>    tools/libxl/Makefile    | 4 ++++
> >>>>    tools/libxl/libxl_pci.c | 4 +++-
> >>>>    2 files changed, 7 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> >>>> index 241da7fff6f4..f3806aafcb4e 100644
> >>>> --- a/tools/libxl/Makefile
> >>>> +++ b/tools/libxl/Makefile
> >>>> @@ -130,6 +130,10 @@ endif
> >>>>    
> >>>>    LIBXL_LIBS += -lyajl
> >>>>    
> >>>> +ifeq ($(CONFIG_ARM),y)
> >>>> +CFALGS += -DCONFIG_ARM
> >>>> +endif
> >>>> +
> >>>>    LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
> >>>>    			libxl_dom.o libxl_exec.o libxl_xshelp.o libxl_device.o \
> >>>>    			libxl_internal.o libxl_utils.o libxl_uuid.o \
> >>>> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> >>>> index bc5843b13701..b93cf976642b 100644
> >>>> --- a/tools/libxl/libxl_pci.c
> >>>> +++ b/tools/libxl/libxl_pci.c
> >>>> @@ -1915,8 +1915,10 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
> >>>>                goto out_fail;
> >>>>            }
> >>>>        } else {
> >>>> +        /* PCI passthrough can also run on ARM PVH */
> >>>> +#ifndef CONFIG_ARM
> >>>>            assert(type == LIBXL_DOMAIN_TYPE_PV);
> >>>> -
> >>>> +#endif
> >>> I would just remove the assert now if this is to be used by Arm and
> >>> you don't need to fork the file for Arm.
> >> Sounds good, I will drop then
> >>
> >> But what would be the right explanation then? I mean why there was an ASSERT
> >>
> >> and now it is safe (for x86) to remove that?
> > An assert is just a safe belt, the expectation is that it's never hit
> > by actual code. Given that this path will now also be used by PVH
> > (even if only on Arm) I don't see the point in keeping the assert, and
> > making it conditional to != Arm seems worse than just dropping it.
> 
> Ok, so I can write in the patch description something like:
> 
> "this path is now used by PVH, so the assert is no longer valid"
> 
> Does it sound ok?

LGTM.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:21:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24859.52312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqzx-0000HS-ET; Wed, 11 Nov 2020 14:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24859.52312; Wed, 11 Nov 2020 14:21:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcqzx-0000HL-Ag; Wed, 11 Nov 2020 14:21:21 +0000
Received: by outflank-mailman (input) for mailman id 24859;
 Wed, 11 Nov 2020 14:21:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcqzw-0000HG-AN
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:21:20 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 80ecf6f6-b88a-4a52-be21-a36ed6aad28f;
 Wed, 11 Nov 2020 14:21:18 +0000 (UTC)
X-Inumbo-ID: 80ecf6f6-b88a-4a52-be21-a36ed6aad28f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605104478;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Qyf4EkU3HoZ4llq6faj93sshsSNbe4kG0PN62CehT4o=;
  b=ZM4nfzlannVfQmMPbO89CICQbZ37m0p/TEMDLJoKHLpSkkErBP8obJmo
   zTS9iYahgZWuW/Xl3gQ0WwjdPAWYCm3IOopMTAUmKlCkgKE7bGdukdPng
   Ie3CDeNtGIrN7VVuvYPB2TWNjombuxzV+lrnAGpKL9GkG45y8qYNmzqBK
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qhiZWmqMcktUGflrEsoQv5rNE7TL6FUVRRe3KjwCkMkv9xIFld5SguRitlv0gM7ad/BUvV+H8s
 gR5jitW1eKq5xdi7wBuqknORRhqZyaFu1LGrUwkOKTmzU+swncgBhBdATTnuofvxHqANf/S1Ib
 7u9t4i8mRu9Qb6oaABLWtUM4Ku477rcZDlQixFe6mgpW89Hd3sf/wG2xS8x8tFLrv9TkyOznPT
 dEuXe8SDUDe6AsqF+5gZ6OrLxKtF6Ct36JajuyzROACoUF4Y0Khaepo0DY7DtKKShQ28URyqO9
 Ew4=
X-SBRS: None
X-MesageID: 30953259
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30953259"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oFaQkV0V9i+gZ2jMmjOK+P3QOJvk/vrPuKG9rHL+vCDIxaJEN5SB7yw6rEDOtJijewriRJCfRX74ldx/OQOEUVCmitvyf/pjyI7kSiH+15j7VY2QMl6CgckaEC8t7C2MUKyYatrnQR5svW2fmpk0GjyhIZb7LjDiOCgBW6li1i1XTsqbUzmXO0fHh0fYTCDYRU3rssmB0kT9z9pNrYSx3Od+ql43hcSdZBd6Ivuwbd9NJgYSrKefdAgC0Ql7FSu9hWRoy8DOgXWXn+FA39OX25rj4wFvOneRx971JhgTLghIdVGSitgWpk1KF8K/YvgY/m7xwRm0Kg2mnkKGokFj3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zIuRIBYUodhU7YjxRg4obkkTosdRqYr++JOcSlXvKcs=;
 b=nUZ/xWzB5cKdsdMkSGmJhw0elw0OO8hhjfTyhZHgfBknK5c9/mnFUlONI8YuT6eDEPIzLg6PQfeBfhhgmd7T7utupLp7iorQ6ODsWEkez2NxPNAn0CeqnvGXKmxsMQz65z1MNLYtF07nxlJIGIgw5vlbYu7G/+JMyzXKtAVtwC0/rVjtuGf9MO5MMhHxenRwH2Ke9cQgPP3jp4RFzOTVLegIYxGACVF2Vtkhl20SNw1tAgzYDI6pQZ5NNkAySh6XApMnc67ehkJUrYvDFXVzS9Ft6RvMuaFzz3xikebPhelR7V22BAV3OGBQz03iBNHT0Xg+VfzgyqU55J1WCcZ6LQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zIuRIBYUodhU7YjxRg4obkkTosdRqYr++JOcSlXvKcs=;
 b=qNhk6YiWekkV1kE63iOtndnDIo3kFn4HX5wv/QKb42KK3S5ctk67DK+warurH8eouUVQGjwFE41lkNA8ezyf4Eq2GeHui8D6O0V/HHlA9SfxGk2fHHeUCqN/jpYpeSaowt0cjtaP+pEZS0SFrTfxI1HgRLjIEjvPivQh/+EOkrU=
Date: Wed, 11 Nov 2020 15:19:50 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201111141950.3a4blschvpcyexw4@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201111111504.r4k7a53spsy7pzjq@Air-de-Roger>
 <8ab3658f-8b69-455e-74b3-462f89f1cfe4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8ab3658f-8b69-455e-74b3-462f89f1cfe4@suse.com>
X-ClientProxiedBy: LO2P123CA0010.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:a6::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b4fe150e-3c0b-411c-3ac3-08d8864cdc6e
X-MS-TrafficTypeDiagnostic: DM6PR03MB4297:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4297CEA839AF69BAE5B557AC8FE80@DM6PR03MB4297.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: b4fe150e-3c0b-411c-3ac3-08d8864cdc6e
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 14:19:54.7967
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FVa/XDJFkn1ETrUZJGRJ/IsNbOwz6tHgAt7/7kgb3EfILqJGyvlYgpCJzt/qiPGHz2AAjU4XThJ2kSuQ5ZukOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4297
X-OriginatorOrg: citrix.com

On Wed, Nov 11, 2020 at 02:33:34PM +0100, Jan Beulich wrote:
> On 11.11.2020 12:15, Roger Pau Monné wrote:
> > On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> >> Under certain conditions CPUs can speculate into the instruction stream
> >> past a RET instruction. Guard against this just like 3b7dab93f240
> >> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> >> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> >> achieve this that differ: A set of macros gets introduced to post-
> >> process RET insns issued by the compiler (or living in assembly files).
> >>
> >> Unfortunately for clang this requires further features their built-in
> >> assembler doesn't support: We need to be able to override insn mnemonics
> >> produced by the compiler (which may be impossible, if internally
> >> assembly mnemonics never get generated), and we want to use \(text)
> >> escaping / quoting in the auxiliary macro.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> >> ---
> >> TBD: Would be nice to avoid the additions in .init.text, but a query to
> >>      the binutils folks regarding the ability to identify the section
> >>      stuff is in (by Peter Zijlstra over a year ago:
> >>      https://sourceware.org/pipermail/binutils/2019-July/107528.html)
> >>      has been left without helpful replies.
> >> ---
> >> v3: Use .byte 0xc[23] instead of the nested macros.
> >> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
> >>
> >> --- a/xen/Makefile
> >> +++ b/xen/Makefile
> >> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
> >>  # https://bugs.llvm.org/show_bug.cgi?id=36110
> >>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
> >>  
> >> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
> >> +# Check whether \(text) escaping in macro bodies is supported.
> >> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
> >> +
> >> +# Check whether macros can override insn mnemonics in inline assembly.
> >> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
> > 
> > I was going over this to post a bug report to LLVM, but it seems like
> > gcc also doesn't override ret when using the above snippet:
> > 
> > https://godbolt.org/z/oqsPTv
> 
> I can't see what's different from
> 
> void test(void) {
> 	asm volatile (".macro ret; .error; .endm; .macro retq; .error; .endm");
> }
> 
> but this one produces "Error: .error directive invoked in source file"
> for me with both old and new gcc.

You are right, I think godbolt is somehow busted?

I can reproduce your results with my version of gcc, so will just
report to LLVM.

Roger.
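For reference, the mnemonic-override mechanism that the t5 probe tests for can be sketched in plain GNU as syntax as below. The names are illustrative only, not the exact Xen macros; emitting the RET opcode as a raw byte keeps the macro from expanding recursively, matching the v3 note about using `.byte 0xc[23]`:

```asm
# Illustrative sketch (GNU as syntax), not the exact macros from the patch:
# redefine the `ret` mnemonic so every RET is followed by an INT3.
.macro ret
	.byte 0xc3	# the real RET, emitted as a byte so the macro cannot recurse
	int3		# traps any straight-line speculation past the RET
.endm

demo:
	ret		# now assembles to: c3 cc
```

The actual patch also overrides `retq` and must cover `ret imm16` (hence `.byte 0xc[23]`); this sketch shows only the near-return case.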


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:24:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:24:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24873.52336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcr2l-0000aV-7s; Wed, 11 Nov 2020 14:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24873.52336; Wed, 11 Nov 2020 14:24:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcr2l-0000aO-4l; Wed, 11 Nov 2020 14:24:15 +0000
Received: by outflank-mailman (input) for mailman id 24873;
 Wed, 11 Nov 2020 14:24:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcr2j-0000aI-Re
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:24:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba971cec-dee2-4066-88e1-011a7314ef3b;
 Wed, 11 Nov 2020 14:24:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 769C2AC91;
 Wed, 11 Nov 2020 14:24:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605104651;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JVYfm9gqzJLW08Rw/dwdme1Aci4M+0lqMEWy94gvz+Q=;
	b=Lk6OW2M4OmvmPjoFGuYhv2i/mA3dNB6iBiRjDB6+f8UtDLdeFbK3k2RoeqZhoc9vgWqHuH
	DbaszzXr5OCH8xcedSqcMlxySPz5aQhbjoMgy+HeB10yqXiqAmmJjsbDNMuLfgVnnEmcrl
	TUchuoYx1E9tIcIw9NqXpJVe3Pm75QU=
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
 <20201111111504.r4k7a53spsy7pzjq@Air-de-Roger>
 <8ab3658f-8b69-455e-74b3-462f89f1cfe4@suse.com>
 <20201111141950.3a4blschvpcyexw4@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6265e163-6564-97d0-07c8-2876b1951058@suse.com>
Date: Wed, 11 Nov 2020 15:24:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111141950.3a4blschvpcyexw4@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 15:19, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 02:33:34PM +0100, Jan Beulich wrote:
>> On 11.11.2020 12:15, Roger Pau Monné wrote:
>>> On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
>>>> Under certain conditions CPUs can speculate into the instruction stream
>>>> past a RET instruction. Guard against this just like 3b7dab93f240
>>>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>>>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>>>> achieve this that differ: A set of macros gets introduced to post-
>>>> process RET insns issued by the compiler (or living in assembly files).
>>>>
>>>> Unfortunately for clang this requires further features their built-in
>>>> assembler doesn't support: We need to be able to override insn mnemonics
>>>> produced by the compiler (which may be impossible, if internally
>>>> assembly mnemonics never get generated), and we want to use \(text)
>>>> escaping / quoting in the auxiliary macro.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
>>>> ---
>>>> TBD: Would be nice to avoid the additions in .init.text, but a query to
>>>>      the binutils folks regarding the ability to identify the section
>>>>      stuff is in (by Peter Zijlstra over a year ago:
>>>>      https://sourceware.org/pipermail/binutils/2019-July/107528.html)
>>>>      has been left without helpful replies.
>>>> ---
>>>> v3: Use .byte 0xc[23] instead of the nested macros.
>>>> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
>>>>
>>>> --- a/xen/Makefile
>>>> +++ b/xen/Makefile
>>>> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>>>>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>>>>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
>>>>  
>>>> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
>>>> +# Check whether \(text) escaping in macro bodies is supported.
>>>> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
>>>> +
>>>> +# Check whether macros can override insn mnemonics in inline assembly.
>>>> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
>>>
>>> I was going over this to post a bug report to LLVM, but it seems like
>>> gcc also doesn't override ret when using the above snippet:
>>>
>>> https://godbolt.org/z/oqsPTv
>>
>> I can't see what's different from
>>
>> void test(void) {
>> 	asm volatile (".macro ret; .error; .endm; .macro retq; .error; .endm");
>> }
>>
>> but this one produces "Error: .error directive invoked in source file"
>> for me with both old and new gcc.
> 
> You are right, I think godbolt is somehow busted?

Or maybe they really only compile to assembly, while the error results
from the assembler?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:26:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:26:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24871.52347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcr4w-0000jK-Lu; Wed, 11 Nov 2020 14:26:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24871.52347; Wed, 11 Nov 2020 14:26:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcr4w-0000jD-Ij; Wed, 11 Nov 2020 14:26:30 +0000
Received: by outflank-mailman (input) for mailman id 24871;
 Wed, 11 Nov 2020 14:23:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6kc=ER=kernel.org=chunkuang.hu@srs-us1.protection.inumbo.net>)
 id 1kcr26-0000ZB-4f
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:23:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6d02d55-163b-4e82-a3cc-ba86804685ca;
 Wed, 11 Nov 2020 14:23:33 +0000 (UTC)
Received: from mail-wr1-f44.google.com (mail-wr1-f44.google.com
 [209.85.221.44])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3600422203
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 14:23:32 +0000 (UTC)
Received: by mail-wr1-f44.google.com with SMTP id k2so2722953wrx.2
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 06:23:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605104612;
	bh=xm3rQeDIh2FuZjYEyhCI9+5vLDwhs0MB53yet3dsIaE=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=dPmto1xzJ4FwUTGL7+SCYcrJcJWW52ez+kxbCClZr+loHMAapvqz8LQz+Evkc9Vuu
	 +KAHrYvHOZwkw2La6X7uex32BeyKimqmO8Ofmcfvrnpb1stNfWTSlBY+XS1QVzjIoA
	 Z6DKu/gGPoh0EgAddBIc+BpDBved4jgG3COWBuvg=
X-Gm-Message-State: AOAM5332CMdijnd7IETA1sDKf6o1XCBmtmSkkib8suzRd1/AcuFiVNl+
	NNgU8gvK8QO1rij2kqG/A1+35LIXSmjl1DkqwQ==
X-Google-Smtp-Source: ABdhPJxh+77oiphOx3YcIcxiUEmfWiMYRwROdlpWqzLsSf75JddHNGi9QfuS9fuMmuFcPm/rgVNfg169e9VDB275Qg8=
X-Received: by 2002:a50:f0d4:: with SMTP id a20mr5290937edm.303.1605104609593;
 Wed, 11 Nov 2020 06:23:29 -0800 (PST)
MIME-Version: 1.0
References: <20201109103242.19544-1-tzimmermann@suse.de> <20201109103242.19544-3-tzimmermann@suse.de>
In-Reply-To: <20201109103242.19544-3-tzimmermann@suse.de>
From: Chun-Kuang Hu <chunkuang.hu@kernel.org>
Date: Wed, 11 Nov 2020 22:23:15 +0800
X-Gmail-Original-Message-ID: <CAAOTY__QuyEA7c4H+eSvrSdFTZttB4DXbjr6HLWoH8WovOD1eQ@mail.gmail.com>
Message-ID: <CAAOTY__QuyEA7c4H+eSvrSdFTZttB4DXbjr6HLWoH8WovOD1eQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] drm/mediatek: Use struct dma_buf_map in GEM vmap ops
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Daniel Vetter <daniel@ffwll.ch>, David Airlie <airlied@linux.ie>, 
	Chun-Kuang Hu <chunkuang.hu@kernel.org>, Philipp Zabel <p.zabel@pengutronix.de>, robdclark@gmail.com, 
	sean@poorly.run, linux-arm-msm@vger.kernel.org, 
	freedreno@lists.freedesktop.org, 
	DRI Development <dri-devel@lists.freedesktop.org>, 
	Christian König <christian.koenig@amd.com>, 
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Maxime Ripard <mripard@kernel.org>, 
	Dave Airlie <airlied@redhat.com>, Lucas Stach <l.stach@pengutronix.de>, 
	Russell King <linux+etnaviv@armlinux.org.uk>, 
	Christian Gmeiner <christian.gmeiner@gmail.com>, Qiang Yu <yuq825@gmail.com>, 
	Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>, 
	Tomeu Vizoso <tomeu.vizoso@collabora.com>, Steven Price <steven.price@arm.com>, 
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, Gerd Hoffmann <kraxel@redhat.com>, 
	Alex Deucher <alexander.deucher@amd.com>, Sandy Huang <hjc@rock-chips.com>, 
	Heiko Stübner <heiko@sntech.de>, 
	Hans de Goede <hdegoede@redhat.com>, Eric Anholt <eric@anholt.net>, 
	Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>, Melissa Wen <melissa.srw@gmail.com>, 
	Haneen Mohammed <hamohammed.sa@gmail.com>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Sumit Semwal <sumit.semwal@linaro.org>, 
	Emil Velikov <emil.velikov@collabora.com>, Marek Szyprowski <m.szyprowski@samsung.com>, 
	Arunpravin <apaneers@amd.com>, Huang Rui <ray.huang@amd.com>, 
	Luben Tuikov <luben.tuikov@amd.com>, Madhav Chauhan <madhav.chauhan@amd.com>, 
	Nirmoy Das <Nirmoy.Das@amd.com>, Jason Gunthorpe <jgg@ziepe.ca>, Sam Ravnborg <sam@ravnborg.org>, 
	Chris Wilson <chris@chris-wilson.co.uk>, etnaviv@lists.freedesktop.org, 
	lima@lists.freedesktop.org, nouveau@lists.freedesktop.org, 
	virtualization@lists.linux-foundation.org, spice-devel@lists.freedesktop.org, 
	amd-gfx@lists.freedesktop.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-rockchip@lists.infradead.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi, Thomas:

Thomas Zimmermann <tzimmermann@suse.de> wrote on Monday, November 9, 2020 at 6:32 PM:
>
> Fixes a build failure with mediatek.
>
> This change was supposed to be part of commit 49a3f51dfeee ("drm/gem:
> Use struct dma_buf_map in GEM vmap ops and convert GEM backends"), but
> mediatek was forgotten.

Acked-by: Chun-Kuang Hu <chunkuang.hu@kernel.org>

>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Fixes: 49a3f51dfeee ("drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends")
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Lucas Stach <l.stach@pengutronix.de>
> Cc: Russell King <linux+etnaviv@armlinux.org.uk>
> Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
> Cc: Qiang Yu <yuq825@gmail.com>
> Cc: Ben Skeggs <bskeggs@redhat.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: Sandy Huang <hjc@rock-chips.com>
> Cc: "Heiko Stübner" <heiko@sntech.de>
> Cc: Hans de Goede <hdegoede@redhat.com>
> Cc: Sean Paul <sean@poorly.run>
> Cc: Eric Anholt <eric@anholt.net>
> Cc: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
> Cc: Melissa Wen <melissa.srw@gmail.com>
> Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Emil Velikov <emil.velikov@collabora.com>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Arunpravin <apaneers@amd.com>
> Cc: Huang Rui <ray.huang@amd.com>
> Cc: Luben Tuikov <luben.tuikov@amd.com>
> Cc: Madhav Chauhan <madhav.chauhan@amd.com>
> Cc: Nirmoy Das <Nirmoy.Das@amd.com>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Sam Ravnborg <sam@ravnborg.org>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: dri-devel@lists.freedesktop.org
> Cc: etnaviv@lists.freedesktop.org
> Cc: lima@lists.freedesktop.org
> Cc: nouveau@lists.freedesktop.org
> Cc: virtualization@lists.linux-foundation.org
> Cc: spice-devel@lists.freedesktop.org
> Cc: amd-gfx@lists.freedesktop.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-rockchip@lists.infradead.org
> Cc: xen-devel@lists.xenproject.org
> ---
>  drivers/gpu/drm/mediatek/mtk_drm_gem.c | 20 ++++++++++++--------
>  drivers/gpu/drm/mediatek/mtk_drm_gem.h |  4 ++--
>  2 files changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> index cdd1a6e61564..28a2ee1336ef 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
> @@ -240,23 +240,25 @@ struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
>         return &mtk_gem->base;
>  }
>
> -void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
> +int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>         struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
> -       struct sg_table *sgt;
> +       struct sg_table *sgt = NULL;
>         unsigned int npages;
>
>         if (mtk_gem->kvaddr)
> -               return mtk_gem->kvaddr;
> +               goto out;
>
>         sgt = mtk_gem_prime_get_sg_table(obj);
>         if (IS_ERR(sgt))
> -               return NULL;
> +               return PTR_ERR(sgt);
>
>         npages = obj->size >> PAGE_SHIFT;
>         mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL);
> -       if (!mtk_gem->pages)
> -               goto out;
> +       if (!mtk_gem->pages) {
> +               kfree(sgt);
> +               return -ENOMEM;
> +       }
>
>         drm_prime_sg_to_page_addr_arrays(sgt, mtk_gem->pages, NULL, npages);
>
> @@ -265,13 +267,15 @@ void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj)
>
>  out:
>         kfree(sgt);
> +       dma_buf_map_set_vaddr(map, mtk_gem->kvaddr);
>
> -       return mtk_gem->kvaddr;
> +       return 0;
>  }
>
> -void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>         struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
> +       void *vaddr = map->vaddr;
>
>         if (!mtk_gem->pages)
>                 return;
> diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
> index ff9f976d9807..6da5ccb4b933 100644
> --- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
> +++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
> @@ -45,7 +45,7 @@ int mtk_drm_gem_mmap_buf(struct drm_gem_object *obj,
>  struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
>                         struct dma_buf_attachment *attach, struct sg_table *sg);
> -void *mtk_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>
>  #endif
> --
> 2.29.2
>


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:32:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:32:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24892.52360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrAP-0001gj-F1; Wed, 11 Nov 2020 14:32:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24892.52360; Wed, 11 Nov 2020 14:32:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrAP-0001gc-Bv; Wed, 11 Nov 2020 14:32:09 +0000
Received: by outflank-mailman (input) for mailman id 24892;
 Wed, 11 Nov 2020 14:32:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcrAN-0001gX-Md
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:32:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90c6ca9c-ff9c-4f63-921f-7a306a835a1d;
 Wed, 11 Nov 2020 14:32:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F1734ABD6;
 Wed, 11 Nov 2020 14:32:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605105122;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jiyAOm34YBlkt8TRW7xKyZI32hhUfjZX9Kt8C3ZWV2w=;
	b=dg1h30rehaUqxj4F3fTEdPPYo7i/4m4IJKJt92LOcbBGA09qzKZ4bbkhpMrz75YfJFe/I9
	T16fMa09UDbENQUStb1OYiB9bycFwxX/6cLP9/REcW42kE41w3UPyAUypQq7PUKV61xVVP
	u6uKPfNpAQiK0bgL21XrORLcyQ3B/GY=
Message-ID: <34527bbdeef138454b6a555c236b2289643b3d6b.camel@suse.com>
Subject: Re: [PATCH 01/12] xen/cpupool: add cpu to sched_res_mask when
 removing it from cpupool
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Wed, 11 Nov 2020 15:32:00 +0100
In-Reply-To: <20201026091316.25680-2-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-2-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-1+R5tQwXK3XdNFiQEMxf"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-1+R5tQwXK3XdNFiQEMxf
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> When a cpu is removed from a cpupool and added to the free cpus it
> should be added to sched_res_mask, too.
>
> The related removal from sched_res_mask in case of core scheduling
> is already done in schedule_cpu_add().
>
> As long as all cpupools share the same scheduling granularity there
> is nothing going wrong with the missing removal,
>
This patch adds the CPU to sched_res_mask, an addition which was
missing... So shouldn't the above read "there is nothing going wrong
with the missing addition", or something like that?

Or, if it really is a missing removal that is being referred to here,
then it should be clarified which removal that is.

> but this will change
> when per-cpupool granularity is fully supported.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
With the above fixed or clarified:

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-1+R5tQwXK3XdNFiQEMxf
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+r9eAACgkQFkJ4iaW4
c+5U7g/+KABC/HNMdXDMUPS+YwrioXpWKk1NMzHt+7L9fx+qmmeSM/a7u2cG9sCE
0w6AKc/q6mOmVCX8LmzymFYQbWYv3LNpbWd1ULBKrmPc/8EeJ+WRgOrWJamYNYGm
ZDzoT2avN0VBJ7rlvz9oRKG5YOIyRNkdV599FH9q9UEwbliBpDjFTHX9F/OpzMGb
S541Eh7GrL1KAiP7wc4ex5FXvp82mWwApS9FgtS0WcXBH4khXd0aMBSDAq79v6kn
sDuxjAdHK6ovYzwf2gh3FxFHv+2Fv6ZgPzard9214e65XQ7PmCbQxtkd+74+UGWW
Fw8IEYA7OfIXkLdfI7A065ZNLspJAZzi1sjwJyVmGpJAn/57jW3kxIUWvwKRR1BG
JPMJTfYHFnxXQYWHvTuvMpdLeB+hAgEa+BjtoZpygLSa+VAyqgGrvpkTqK0W/+Eb
8Nm6Ra3X0NEkKcr65PihVvPlq4w/mo+Ld0wAnx1Yd0Rl/nQWwlB3o9Ny4MgO+FuS
YwnEzC+vMT/ePr2o5f07e/WavvRJNEts4iboRhsMi2KjBZhDf0x+0R7cbEzFg/ye
EmZut5RvH7gaq7OydWOUIMvmOVBEPHQ7tRAn+5zNmU+PGAFMORePAU+tA+y5RQUL
Iv6qk1BwJQEZs1fFUREsDxyzUQY9mi+XnrVVVuB4x5IV7P8w5I8=
=ydfU
-----END PGP SIGNATURE-----

--=-1+R5tQwXK3XdNFiQEMxf--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:39:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:39:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24913.52390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrH1-00028B-Kb; Wed, 11 Nov 2020 14:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24913.52390; Wed, 11 Nov 2020 14:38:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrH1-000284-GZ; Wed, 11 Nov 2020 14:38:59 +0000
Received: by outflank-mailman (input) for mailman id 24913;
 Wed, 11 Nov 2020 14:38:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pxcb=ER=epam.com=prvs=9584594409=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kcrH0-00027t-Cb
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:38:58 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85fdd092-ddbb-456c-b36c-5d71683b801b;
 Wed, 11 Nov 2020 14:38:57 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ABEVT4t012513; Wed, 11 Nov 2020 14:38:51 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2105.outbound.protection.outlook.com [104.47.17.105])
 by mx0b-0039f301.pphosted.com with ESMTP id 34rf80rmc0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 11 Nov 2020 14:38:51 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0302MB2804.eurprd03.prod.outlook.com (2603:10a6:200:94::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.28; Wed, 11 Nov
 2020 14:38:47 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 14:38:47 +0000
X-Inumbo-ID: 85fdd092-ddbb-456c-b36c-5d71683b801b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oQi/MRe6uYJJQiRxjM5eKeaiCux6IuhivRfA5kQAIUKNXwMGSSzon5E8+Wd8WwD4eriIgX92LgfWwzCvGyfUEUavS8+QzObOiR1O60Vzod3AiR4wn16XgySUcMM08k774e67gNN6cSOwyIbGS6NPlTmAn1arqIy8O1e/UBoCXskkJcOQjPhixirdmU8Hr32q5FCzNWRF7eNIbvOHXpWnMHemoy28Y3wNniTWG9Uny5QA7MoTBiiCr4BapTxD25pMOtFZ+8DItnEU2MorMB6C7jzesFVeJGK1Wtyeu2l1v4enVdutrgiqNtv995bJjsl9TvvvnEYIfYPZWROOB5qUxw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3qoWBtXRI8LyAe+qCmeGB5z5cb+ZyA3oeWE92XfobDo=;
 b=RkRkRQKmn3SMv77F/TaYO+MbqO0DEa2RgbPMOY/MSmX7RbOyB2s815HXGNR7G9nr9TjbpXHnwh8DWhqh76u1V3bc4+At9mND06NAFwYlMMrCw/CGlwfaMi9+/Z6H2cQmBtElUHv1zQpuUsIOaHvWBB+6GJBx3oEF5vPdzn6LvGMS/q9Rg1LlLy+o0a3seNoFVooBRUIcXD+rd3GZbUbAA/Kslo8QfNyy1sSjdj0tu9IUqv5pm5QP6jDsrcal9S65bn7ZVUtYsKighzgb6yNz4FoEka0JiZuUFAN4XTPbYBaKwf352Tp7XNY3/wjLPG3DGo5fGIho6GL7LQ0OOfifaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3qoWBtXRI8LyAe+qCmeGB5z5cb+ZyA3oeWE92XfobDo=;
 b=3rkLjx1ewdajGJ50EcPLya0BvBUWZODIZ5iybselFDt3joTLtDjlZdH/yylziet9RHRz4ECyzDh8vWuyUzTVcGtN/dV7UDU7p445FyKhx2P5Gd5VCdOWRKR+aytjwgOuPo2K7gvY9oWJPBZZI2VIELqd8NiYVdc6WiKY8HM6Kga0FsNV1OFfUWI/3KTx1QLBNqfkA+ljlWSgolkbFMHICXXFlqXsmAWSE1/HR79rZmHLtqmWiiaYCvDgg8Eeew7lN5lhELngV5ZDPHyZBuJeexBv4atsf9oAX12Mu8QE1xiwf9ZZxWjYytAa3IRe/gU1EuYqQiLdTbrRTT4TEuBbiw==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Roger Pau Monné <roger.pau@citrix.com>,
 Oleksandr Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Topic: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Index: AQHWtpbuKiRgL1HiZkyNC2yoNcmnl6nC9s6AgAAMvAA=
Date: Wed, 11 Nov 2020 14:38:47 +0000
Message-ID: <03d6a75c-075d-6c57-1d66-2514ef1d0cb8@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
 <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
In-Reply-To: <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: dbb48689-6e1b-43c6-1f69-08d8864f7f6c
x-ms-traffictypediagnostic: AM4PR0302MB2804:
x-microsoft-antispam-prvs: 
 <AM4PR0302MB2804B35ACB5CFCCDB38ACE21E7E80@AM4PR0302MB2804.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 C2GP5r7LcciTPbtzCqEttfZbZ5vLKqMjWH5tOrkzETPLOSpU7skzqUxj4DYBQZVoLRucqqagVdsJbMySOe4j701ZaiypZSuh5ccMBK3kHGhUaY9n80qLv2HlVp/RszRhxdevmY8KrhgNpDHMnfuSVTGFd+5CLvEss/dwgVmIWjZitr9jAZ3MzC5QBoOFQhvEmfUL1uZwZx/R1BqA3hcJAo5v5ddgwl68XVCRCxpO1aH4OTTdGb8Gid+xg95N3X+NSAvklX3jacWcqH+dv0ZqeqPbLSJCxAy+eoBLmJL9wUwrZzz4zFKqoTjVI5Yw2JBo9VvPKjsZIUggvZxrBtrvqV2ws+Q1PZ2vU5Ov+RbJFXNUewMv0CtuGKfOb1psQSSJbzo6LFG2cxTVM87cTPzCPvL7u3PqpN19JjzuWwH+UXpOfUcpn21/AOzYQ5p8q7Fbp/7DgkX8NaESvm2wMNpALg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(366004)(76116006)(54906003)(6512007)(110136005)(66556008)(83380400001)(5660300002)(66446008)(186003)(91956017)(4326008)(31686004)(66476007)(64756008)(66946007)(966005)(6486002)(7416002)(2906002)(316002)(31696002)(36756003)(6506007)(26005)(8676002)(478600001)(2616005)(8936002)(53546011)(30864003)(86362001)(71200400001)(559001)(579004)(309714004);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 5GJcCioyYl0D8eESd38pgdvGRdV+z5w3WKnwPRmzN5Myy+lVCuYTpExA4cSpFeeJj4toxOfE5RP/afxhbUgw52+5WNeE3bxjJk9unAtxQfEsCv03UkLVXVBM2rZfKhIH0O6l172nJbQekS4hG7Gx+70qsf6X2KltTl/PdbFZsPbn0UYF4F3YYcpwBX31d6QJk2DYICAVtqOyGjkiZVll2GSHR4wfD/66rX63G2uvcpasazaRnJ391l46g3BBl4ZWm9r1kXlRStAbxz445keCVb2q50XC2TCTaujKZ/HrctDRC23CzPefXWo8Tr+4Il1alvkls6bhzXimDfRDEwC12orWseiOAEKVi/wgjt6LyOGsvEaCnZOg9jRlyBbJWzE9Vx2sQe3tzcEKh+f5tbgeLUxgE6m9cK1ybsXU1wrVjQXPpDKWhbOWKS59aT9FuXx3myABQdHvtVngLpVROPb9ArpAqJkpgW3tKG0/anT/rmLl3gBXpD733/5QjjhvnR9DcEeebohxn3SRdyBdlvXdGe9Icwjw7AGKAZvP2exy1S+cQEGdl+2bhnJezlN5VqMth/lV0UHkqan/NOrmdIAvKSLaGGO/+KJKD18Q145ga2l6uFaC7Vfh7rdcyXo2dKP0vH0641lXTnZGFoxFxzp93w==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <EC78157399A598409F00728F20ED7195@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dbb48689-6e1b-43c6-1f69-08d8864f7f6c
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 14:38:47.0160
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: FCkJbYqWNMOzIyFO9CJIPDY5wfKC8IFCxHs5l2BQj5A2TVmebUkktsiik+zk4/sNXh+WemoqdmUWcsEUJJPKd5xbsGXIxYHI7KvtD7zvNtrL2XcyQ+LnOO31zUbIUeq/
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0302MB2804
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-11_07:2020-11-10,2020-11-11 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 mlxlogscore=999
 adultscore=0 clxscore=1015 spamscore=0 priorityscore=1501 impostorscore=0
 malwarescore=0 phishscore=0 mlxscore=0 lowpriorityscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011110087

T24gMTEvMTEvMjAgMzo1MyBQTSwgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToNCj4gT24gTW9uLCBO
b3YgMDksIDIwMjAgYXQgMDI6NTA6MjNQTSArMDIwMCwgT2xla3NhbmRyIEFuZHJ1c2hjaGVua28g
d3JvdGU6DQo+PiBGcm9tOiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRyX2FuZHJ1
c2hjaGVua29AZXBhbS5jb20+DQo+Pg0KPj4gVGhlIG9yaWdpbmFsIGNvZGUgZGVwZW5kcyBvbiBw
Y2liYWNrIHRvIG1hbmFnZSBhc3NpZ25hYmxlIGRldmljZSBsaXN0Lg0KPj4gVGhlIGZ1bmN0aW9u
YWxpdHkgd2hpY2ggaXMgaW1wbGVtZW50ZWQgYnkgdGhlIHBjaWJhY2sgYW5kIHRoZSB0b29sc3Rh
Y2sNCj4+IGFuZCB3aGljaCBpcyByZWxldmFudC9taXNzaW5nL25lZWRlZCBmb3IgQVJNOg0KPj4N
Cj4+IDEuIHBjaWJhY2sgaXMgdXNlZCBhcyBhIGRhdGFiYXNlIGZvciBhc3NpZ25hYmxlIFBDSSBk
ZXZpY2VzLCBlLmcuIHhsDQo+PiAgICAgcGNpLWFzc2lnbmFibGUte2FkZHxyZW1vdmV8bGlzdH0g
bWFuaXB1bGF0ZXMgdGhhdCBsaXN0LiBTbywgd2hlbmV2ZXIgdGhlDQo+PiAgICAgdG9vbHN0YWNr
IG5lZWRzIHRvIGtub3cgd2hpY2ggUENJIGRldmljZXMgY2FuIGJlIHBhc3NlZCB0aHJvdWdoIGl0
IHJlYWRzDQo+PiAgICAgdGhhdCBmcm9tIHRoZSByZWxldmFudCBzeXNmcyBlbnRyaWVzIG9mIHRo
ZSBwY2liYWNrLg0KPj4NCj4+IDIuIHBjaWJhY2sgaXMgdXNlZCB0byBob2xkIHRoZSB1bmJvdW5k
IFBDSSBkZXZpY2VzLCBlLmcuIHdoZW4gcGFzc2luZyB0aHJvdWdoDQo+PiAgICAgYSBQQ0kgZGV2
aWNlIGl0IG5lZWRzIHRvIGJlIHVuYm91bmQgZnJvbSB0aGUgcmVsZXZhbnQgZGV2aWNlIGRyaXZl
ciBhbmQgYm91bmQNCj4+ICAgICB0byBwY2liYWNrIChzdHJpY3RseSBzcGVha2luZyBpdCBpcyBu
b3QgcmVxdWlyZWQgdGhhdCB0aGUgZGV2aWNlIGlzIGJvdW5kIHRvDQo+PiAgICAgcGNpYmFjaywg
YnV0IHBjaWJhY2sgaXMgYWdhaW4gdXNlZCBhcyBhIGRhdGFiYXNlIG9mIHRoZSBwYXNzZWQgdGhy
b3VnaCBQQ0kNCj4+ICAgICBkZXZpY2VzLCBzbyB3ZSBjYW4gcmUtYmluZCB0aGUgZGV2aWNlcyBi
YWNrIHRvIHRoZWlyIG9yaWdpbmFsIGRyaXZlcnMgd2hlbg0KPj4gICAgIGd1ZXN0IGRvbWFpbiBz
aHV0cyBkb3duKQ0KPj4NCj4+IDEuIEFzIEFSTSBkb2Vzbid0IHVzZSBwY2liYWNrIGltcGxlbWVu
dCB0aGUgYWJvdmUgd2l0aCBhZGRpdGlvbmFsIHN5c2N0bHM6DQo+PiAgIC0gWEVOX1NZU0NUTF9w
Y2lfZGV2aWNlX3NldF9hc3NpZ25lZA0KPiBJIGRvbid0IHNlZSB0aGUgcG9pbnQgaW4gaGF2aW5n
IHRoaXMgc3lzZnMsIFhlbiBhbHJlYWR5IGtub3dzIHdoZW4gYQ0KPiBkZXZpY2UgaXMgYXNzaWdu
ZWQgYmVjYXVzZSB0aGUgWEVOX0RPTUNUTF9hc3NpZ25fZGV2aWNlIGh5cGVyY2FsbCBpcw0KPiB1
c2VkLg0KDQpCdXQgaG93IGRvZXMgdGhlIHRvb2xzdGFjayBrbm93IGFib3V0IHRoYXQ/IFdoZW4g
dGhlIHRvb2xzdGFjayBuZWVkcyB0bw0KDQpsaXN0L2tub3cgYWxsIGFzc2lnbmVkIGRldmljZXMg
aXQgcXVlcmllcyBwY2liYWNrJ3Mgc3lzZnMgZW50cmllcy4gU28sIHdpdGgNCg0KWEVOX0RPTUNU
TF9hc3NpZ25fZGV2aWNlIHdlIG1ha2UgdGhhdCBrbm93bGVkZ2UgYXZhaWxhYmxlIHRvIFhlbiwN
Cg0KYnV0IHRoZXJlIGFyZSBubyBtZWFucyBmb3IgdGhlIHRvb2xzdGFjayB0byBnZXQgaXQgYmFj
ay4NCg0KPg0KPj4gICAtIFhFTl9TWVNDVExfcGNpX2RldmljZV9nZXRfYXNzaWduZWQNCj4+ICAg
LSBYRU5fU1lTQ1RMX3BjaV9kZXZpY2VfZW51bV9hc3NpZ25lZA0KPj4gMi4gRXh0ZW5kIHN0cnVj
dCBwY2lfZGV2IHRvIGhvbGQgYXNzaWdubWVudCBzdGF0ZS4NCj4gSSdtIG5vdCByZWFsbHkgZm91
bmQgb2YgdGhpcywgdGhlIGh5cGVydmlzb3IgaXMgbm8gcGxhY2UgdG8gc3RvcmUgYQ0KPiBkYXRh
YmFzZSBsaWtlIHRoaXMsIHVubGVzcyBpdCdzIHN0cmljdGx5IG5lZWRlZC4NCkkgZG8gYWdyZWUg
YW5kIGl0IHdhcyBwcmV2aW91c2x5IGRpc2N1c3NlZCBhIGJpdA0KPg0KPiBJTU8gdGhlIHJpZ2h0
IGltcGxlbWVudGF0aW9uIGhlcmUgd291bGQgYmUgdG8gc3BsaXQgTGludXggcGNpYmFjayBpbnRv
DQo+IHR3byBkaWZmZXJlbnQgZHJpdmVyczoNCj4NCj4gICAtIFRoZSBwdi1wY2kgYmFja2VuZCBm
b3IgZG9pbmcgcGFzc3Rocm91Z2ggdG8gY2xhc3NpYyBQViBndWVzdHMuDQpPaw0KPiAgIC0gVGhl
IHJlc3Qgb2YgcGNpYmFjazogZGV2aWNlIHJlc2V0LCBoYW5kLWhvbGRpbmcgZHJpdmVyIGZvciBk
ZXZpY2VzDQo+ICAgICB0byBiZSBhc3NpZ25lZCBhbmQgZGF0YWJhc2UuDQoNClRoZXNlIGFuZCBh
c3NpZ25lZCBkZXZpY2VzIGxpc3Qgc2VlbSB0byBiZSB0aGUgY29tcGxldGUgc2V0IHdoaWNoDQoN
CmlzIG5lZWRlZCBieSB0aGUgdG9vbHN0YWNrIG9uIEFSTS4gQWxsIG90aGVyIGZ1bmN0aW9uYWxp
dHkgcHJvdmlkZWQgYnkNCg0KcGNpYmFjayBpcyBub3QgbmVlZGVkIGZvciBBUk0uDQoNCkphbiB3
YXMgc2F5aW5nIFsxXSB0aGF0IHdlIG1pZ2h0IHN0aWxsIHVzZSBwY2liYWNrIGFzIGlzLCBidXQg
c2ltcGx5IHVzZSBvbmx5DQoNCnRoZSBmdW5jdGlvbmFsaXR5IHdlIG5lZWQuDQoNCj4NCj4gSSB0
aGluayB0aGVyZSBtdXN0IGJlIHNvbWV0aGluZyBzaW1pbGFyIGluIEtWTSB0aGF0IHBlcmZvcm1z
IHRoZSB0YXNrcw0KPiBvZiBteSBsYXN0IHBvaW50LCBtYXliZSB3ZSBjb3VsZCBwaWdneWJhY2sg
b24gaXQ/DQpJIHByb21pc2VkIHRvIGxvb2sgYXQgaXQuIEkgb3dlIHRoaXMNCj4NCj4gSWYgd2Ug
d2FudCB0byBnbyB0aGUgcm91dGUgcHJvcG9zZWQgYnkgdGhpcyBwYXRjaCwgaWU6IFhlbiBwZXJm
b3JtaW5nDQo+IHRoZSBmdW5jdGlvbnMgb2YgcGNpYmFjayB5b3Ugd291bGQgYWxzbyBoYXZlIHRv
IG1vdmUgdGhlIFBDSSByZXNldA0KPiBjb2RlIHRvIFhlbiwgc28gdGhhdCB5b3UgY2FuIGZ1bGx5
IG1hbmFnZSB0aGUgUENJIGRldmljZXMgZnJvbSBYZW4uDQpJbiBjYXNlIG9mIGRvbTBsZXNzIHRo
aXMgd291bGQgYmUgdGhlIGNhc2U6IG5vIHBjaWJhY2ssIG5vIERvbWFpbi0wDQo+DQo+PiBTaWdu
ZWQtb2ZmLWJ5OiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRyX2FuZHJ1c2hjaGVu
a29AZXBhbS5jb20+DQo+PiAtLS0NCj4+ICAgdG9vbHMvbGlieGMvaW5jbHVkZS94ZW5jdHJsLmgg
fCAgIDkgKysrDQo+PiAgIHRvb2xzL2xpYnhjL3hjX2RvbWFpbi5jICAgICAgIHwgICAxICsNCj4+
ICAgdG9vbHMvbGlieGMveGNfbWlzYy5jICAgICAgICAgfCAgNDYgKysrKysrKysrKysrKysrDQo+
PiAgIHRvb2xzL2xpYnhsL01ha2VmaWxlICAgICAgICAgIHwgICA0ICsrDQo+PiAgIHRvb2xzL2xp
YnhsL2xpYnhsX3BjaS5jICAgICAgIHwgMTA1ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrLS0NCj4+ICAgeGVuL2FyY2gvYXJtL3N5c2N0bC5jICAgICAgICAgfCAgNjYgKysrKysrKysr
KysrKysrKysrKystDQo+PiAgIHhlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3BjaS5jIHwgIDkzICsr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKw0KPj4gICB4ZW4vaW5jbHVkZS9wdWJsaWMvc3lz
Y3RsLmggICB8ICA0MCArKysrKysrKysrKysrDQo+PiAgIHhlbi9pbmNsdWRlL3hlbi9wY2kuaCAg
ICAgICAgIHwgIDEyICsrKysNCj4+ICAgOSBmaWxlcyBjaGFuZ2VkLCAzNzAgaW5zZXJ0aW9ucygr
KSwgNiBkZWxldGlvbnMoLSkNCj4gSSd2ZSBkb25lIHNvbWUgbGlnaHQgcmV2aWV3IGJlbG93IGdp
dmVuIG15IHF1ZXN0aW9ucyBhYm92ZS4NCg0KVGhpcyBpcyBtb3JlIHRoYW4gSSBleHBlY3RlZCBm
b3IgYW4gUkZDIHNlcmllcw0KDQpUaGFuayB5b3UhDQoNCj4NCj4+IGRpZmYgLS1naXQgYS90b29s
cy9saWJ4Yy9pbmNsdWRlL3hlbmN0cmwuaCBiL3Rvb2xzL2xpYnhjL2luY2x1ZGUveGVuY3RybC5o
DQo+PiBpbmRleCA0Yzg5YjcyOTRjNGYuLjc3MDI5MDEzZGE3ZCAxMDA2NDQNCj4+IC0tLSBhL3Rv
b2xzL2xpYnhjL2luY2x1ZGUveGVuY3RybC5oDQo+PiArKysgYi90b29scy9saWJ4Yy9pbmNsdWRl
L3hlbmN0cmwuaA0KPj4gQEAgLTI2NTIsNiArMjY1MiwxNSBAQCBpbnQgeGNfbGl2ZXBhdGNoX3Jl
cGxhY2UoeGNfaW50ZXJmYWNlICp4Y2gsIGNoYXIgKm5hbWUsIHVpbnQzMl90IHRpbWVvdXQsIHVp
bnQzMg0KPj4gICBpbnQgeGNfZG9tYWluX2NhY2hlZmx1c2goeGNfaW50ZXJmYWNlICp4Y2gsIHVp
bnQzMl90IGRvbWlkLA0KPj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgeGVuX3Bmbl90IHN0
YXJ0X3BmbiwgeGVuX3Bmbl90IG5yX3BmbnMpOw0KPj4gICANCj4+ICt0eXBlZGVmIHhlbl9zeXNj
dGxfcGNpX2RldmljZV9lbnVtX2Fzc2lnbmVkX3QgeGNfcGNpX2RldmljZV9lbnVtX2Fzc2lnbmVk
X3Q7DQo+PiArDQo+PiAraW50IHhjX3BjaV9kZXZpY2Vfc2V0X2Fzc2lnbmVkKHhjX2ludGVyZmFj
ZSAqeGNoLCB1aW50MzJfdCBtYWNoaW5lX3NiZGYsDQo+PiArICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGJvb2wgYXNzaWduZWQpOw0KPj4gK2ludCB4Y19wY2lfZGV2aWNlX2dldF9hc3Np
Z25lZCh4Y19pbnRlcmZhY2UgKnhjaCwgdWludDMyX3QgbWFjaGluZV9zYmRmKTsNCj4+ICsNCj4+
ICtpbnQgeGNfcGNpX2RldmljZV9lbnVtX2Fzc2lnbmVkKHhjX2ludGVyZmFjZSAqeGNoLA0KPj4g
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgeGNfcGNpX2RldmljZV9lbnVtX2Fzc2ln
bmVkX3QgKmUpOw0KPj4gKw0KPj4gICAvKiBDb21wYXQgc2hpbXMgKi8NCj4+ICAgI2luY2x1ZGUg
InhlbmN0cmxfY29tcGF0LmgiDQo+PiAgIA0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhjL3hj
X2RvbWFpbi5jIGIvdG9vbHMvbGlieGMveGNfZG9tYWluLmMNCj4+IGluZGV4IDcxODI5YzJiY2Uz
ZS4uZDUxNTE5MWU5MjQzIDEwMDY0NA0KPj4gLS0tIGEvdG9vbHMvbGlieGMveGNfZG9tYWluLmMN
Cj4+ICsrKyBiL3Rvb2xzL2xpYnhjL3hjX2RvbWFpbi5jDQo+PiBAQCAtMjMyMSw2ICsyMzIxLDcg
QEAgaW50IHhjX2RvbWFpbl9zb2Z0X3Jlc2V0KHhjX2ludGVyZmFjZSAqeGNoLA0KPj4gICAgICAg
ZG9tY3RsLmRvbWFpbiA9IGRvbWlkOw0KPj4gICAgICAgcmV0dXJuIGRvX2RvbWN0bCh4Y2gsICZk
b21jdGwpOw0KPj4gICB9DQo+PiArDQo+PiAgIC8qDQo+PiAgICAqIExvY2FsIHZhcmlhYmxlczoN
Cj4+ICAgICogbW9kZTogQw0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhjL3hjX21pc2MuYyBi
L3Rvb2xzL2xpYnhjL3hjX21pc2MuYw0KPj4gaW5kZXggMzgyMDM5NDQxM2E5Li5kNDM5YzRiYTEw
MTkgMTAwNjQ0DQo+PiAtLS0gYS90b29scy9saWJ4Yy94Y19taXNjLmMNCj4+ICsrKyBiL3Rvb2xz
L2xpYnhjL3hjX21pc2MuYw0KPj4gQEAgLTk4OCw2ICs5ODgsNTIgQEAgaW50IHhjX2xpdmVwYXRj
aF9yZXBsYWNlKHhjX2ludGVyZmFjZSAqeGNoLCBjaGFyICpuYW1lLCB1aW50MzJfdCB0aW1lb3V0
LCB1aW50MzINCj4+ICAgICAgIHJldHVybiBfeGNfbGl2ZXBhdGNoX2FjdGlvbih4Y2gsIG5hbWUs
IExJVkVQQVRDSF9BQ1RJT05fUkVQTEFDRSwgdGltZW91dCwgZmxhZ3MpOw0KPj4gICB9DQo+PiAg
IA0KPj4gK2ludCB4Y19wY2lfZGV2aWNlX3NldF9hc3NpZ25lZCgNCj4+ICsgICAgeGNfaW50ZXJm
YWNlICp4Y2gsDQo+PiArICAgIHVpbnQzMl90IG1hY2hpbmVfc2JkZiwNCj4+ICsgICAgYm9vbCBh
c3NpZ25lZCkNCj4+ICt7DQo+PiArICAgIERFQ0xBUkVfU1lTQ1RMOw0KPj4gKw0KPj4gKyAgICBz
eXNjdGwuY21kID0gWEVOX1NZU0NUTF9wY2lfZGV2aWNlX3NldF9hc3NpZ25lZDsNCj4+ICsgICAg
c3lzY3RsLnUucGNpX3NldF9hc3NpZ25lZC5tYWNoaW5lX3NiZGYgPSBtYWNoaW5lX3NiZGY7DQo+
PiArICAgIHN5c2N0bC51LnBjaV9zZXRfYXNzaWduZWQuYXNzaWduZWQgPSBhc3NpZ25lZDsNCj4+
ICsNCj4+ICsgICAgcmV0dXJuIGRvX3N5c2N0bCh4Y2gsICZzeXNjdGwpOw0KPj4gK30NCj4+ICsN
Cj4+ICtpbnQgeGNfcGNpX2RldmljZV9nZXRfYXNzaWduZWQoDQo+PiArICAgIHhjX2ludGVyZmFj
ZSAqeGNoLA0KPj4gKyAgICB1aW50MzJfdCBtYWNoaW5lX3NiZGYpDQo+PiArew0KPj4gKyAgICBE
RUNMQVJFX1NZU0NUTDsNCj4+ICsNCj4+ICsgICAgc3lzY3RsLmNtZCA9IFhFTl9TWVNDVExfcGNp
X2RldmljZV9nZXRfYXNzaWduZWQ7DQo+PiArICAgIHN5c2N0bC51LnBjaV9nZXRfYXNzaWduZWQu
bWFjaGluZV9zYmRmID0gbWFjaGluZV9zYmRmOw0KPj4gKw0KPj4gKyAgICByZXR1cm4gZG9fc3lz
Y3RsKHhjaCwgJnN5c2N0bCk7DQo+PiArfQ0KPj4gKw0KPj4gK2ludCB4Y19wY2lfZGV2aWNlX2Vu
dW1fYXNzaWduZWQoeGNfaW50ZXJmYWNlICp4Y2gsDQo+PiArICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICB4Y19wY2lfZGV2aWNlX2VudW1fYXNzaWduZWRfdCAqZSkNCj4+ICt7DQo+PiAr
ICAgIGludCByZXQ7DQo+PiArICAgIERFQ0xBUkVfU1lTQ1RMOw0KPj4gKw0KPj4gKyAgICBzeXNj
dGwuY21kID0gWEVOX1NZU0NUTF9wY2lfZGV2aWNlX2VudW1fYXNzaWduZWQ7DQo+PiArICAgIHN5
c2N0bC51LnBjaV9lbnVtX2Fzc2lnbmVkLmlkeCA9IGUtPmlkeDsNCj4+ICsgICAgc3lzY3RsLnUu
cGNpX2VudW1fYXNzaWduZWQucmVwb3J0X25vdF9hc3NpZ25lZCA9IGUtPnJlcG9ydF9ub3RfYXNz
aWduZWQ7DQo+PiArICAgIHJldCA9IGRvX3N5c2N0bCh4Y2gsICZzeXNjdGwpOw0KPj4gKyAgICBp
ZiAoIHJldCApDQo+PiArICAgICAgICBlcnJubyA9IEVJTlZBTDsNCj4+ICsgICAgZWxzZQ0KPj4g
KyAgICB7DQo+PiArICAgICAgICBlLT5kb21haW4gPSBzeXNjdGwudS5wY2lfZW51bV9hc3NpZ25l
ZC5kb21haW47DQo+PiArICAgICAgICBlLT5tYWNoaW5lX3NiZGYgPSBzeXNjdGwudS5wY2lfZW51
bV9hc3NpZ25lZC5tYWNoaW5lX3NiZGY7DQo+PiArICAgIH0NCj4+ICsgICAgcmV0dXJuIHJldDsN
Cj4+ICt9DQo+PiArDQo+PiAgIC8qDQo+PiAgICAqIExvY2FsIHZhcmlhYmxlczoNCj4+ICAgICog
bW9kZTogQw0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhsL01ha2VmaWxlIGIvdG9vbHMvbGli
eGwvTWFrZWZpbGUNCj4+IGluZGV4IGYzODA2YWFmY2I0ZS4uNmY3NmJhMzVhZWM3IDEwMDY0NA0K
Pj4gLS0tIGEvdG9vbHMvbGlieGwvTWFrZWZpbGUNCj4+ICsrKyBiL3Rvb2xzL2xpYnhsL01ha2Vm
aWxlDQo+PiBAQCAtMTMwLDYgKzEzMCwxMCBAQCBlbmRpZg0KPj4gICANCj4+ICAgTElCWExfTElC
UyArPSAtbHlhamwNCj4+ICAgDQo+PiAraWZlcSAoJChDT05GSUdfWDg2KSx5KQ0KPj4gK0NGQUxH
UyArPSAtRENPTkZJR19QQ0lCQUNLDQo+PiArZW5kaWYNCj4+ICsNCj4+ICAgaWZlcSAoJChDT05G
SUdfQVJNKSx5KQ0KPj4gICBDRkFMR1MgKz0gLURDT05GSUdfQVJNDQo+PiAgIGVuZGlmDQo+PiBk
aWZmIC0tZ2l0IGEvdG9vbHMvbGlieGwvbGlieGxfcGNpLmMgYi90b29scy9saWJ4bC9saWJ4bF9w
Y2kuYw0KPj4gaW5kZXggYjkzY2Y5NzY2NDJiLi40MWY4OWI4YWFlMTAgMTAwNjQ0DQo+PiAtLS0g
YS90b29scy9saWJ4bC9saWJ4bF9wY2kuYw0KPj4gKysrIGIvdG9vbHMvbGlieGwvbGlieGxfcGNp
LmMNCj4+IEBAIC0zMTksNiArMzE5LDcgQEAgcmV0cnlfdHJhbnNhY3Rpb24yOg0KPj4gICANCj4+
ICAgc3RhdGljIGludCBnZXRfYWxsX2Fzc2lnbmVkX2RldmljZXMobGlieGxfX2djICpnYywgbGli
eGxfZGV2aWNlX3BjaSAqKmxpc3QsIGludCAqbnVtKQ0KPj4gICB7DQo+PiArI2lmZGVmIENPTkZJ
R19QQ0lCQUNLDQo+PiAgICAgICBjaGFyICoqZG9tbGlzdDsNCj4+ICAgICAgIHVuc2lnbmVkIGlu
dCBuZCA9IDAsIGk7DQo+PiAgIA0KPj4gQEAgLTM1Niw2ICszNTcsMzMgQEAgc3RhdGljIGludCBn
ZXRfYWxsX2Fzc2lnbmVkX2RldmljZXMobGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAq
Kmxpc3QsIGludA0KPj4gICAgICAgICAgICAgICB9DQo+PiAgICAgICAgICAgfQ0KPj4gICAgICAg
fQ0KPj4gKyNlbHNlDQo+PiArICAgIGxpYnhsX2N0eCAqY3R4ID0gbGlieGxfX2djX293bmVyKGdj
KTsNCj4+ICsgICAgaW50IHJldDsNCj4+ICsgICAgeGNfcGNpX2RldmljZV9lbnVtX2Fzc2lnbmVk
X3QgZTsNCj4+ICsNCj4+ICsgICAgKmxpc3QgPSBOVUxMOw0KPj4gKyAgICAqbnVtID0gMDsNCj4+
ICsNCj4+ICsgICAgbWVtc2V0KCZlLCAwLCBzaXplb2YoZSkpOw0KPj4gKyAgICBkbyB7DQo+PiAr
ICAgICAgICByZXQgPSB4Y19wY2lfZGV2aWNlX2VudW1fYXNzaWduZWQoY3R4LT54Y2gsICZlKTsN
Cj4+ICsgICAgICAgIGlmICggcmV0ICYmIGVycm5vID09IEVJTlZBTCApDQo+PiArICAgICAgICAg
ICAgYnJlYWs7DQo+PiArICAgICAgICAqbGlzdCA9IHJlYWxsb2MoKmxpc3QsIHNpemVvZihsaWJ4
bF9kZXZpY2VfcGNpKSAqIChlLmlkeCArIDEpKTsNCj4+ICsgICAgICAgIGlmICgqbGlzdCA9PSBO
VUxMKQ0KPj4gKyAgICAgICAgICAgIHJldHVybiBFUlJPUl9OT01FTTsNCj4+ICsNCj4+ICsgICAg
ICAgIHBjaWRldl9zdHJ1Y3RfZmlsbCgqbGlzdCArIGUuaWR4LA0KPj4gKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGUuZG9tYWluLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgIGUu
bWFjaGluZV9zYmRmID4+IDggJiAweGZmLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAg
IFBDSV9TTE9UKGUubWFjaGluZV9zYmRmKSwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAg
ICBQQ0lfRlVOQyhlLm1hY2hpbmVfc2JkZiksDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAg
ICAgMCAvKnZkZXZmbiovKTsNCj4+ICsgICAgICAgIGUuaWR4Kys7DQo+PiArICAgIH0gd2hpbGUg
KCFyZXQpOw0KPj4gKyAgICAqbnVtID0gZS5pZHg7DQo+PiArI2VuZGlmDQo+IEkgZG9uJ3QgdGhp
bmsgdGhlIGFtb3VudCBvZiBpZmRlZnMgYWRkZWQgdG8gdGhpcyBmaWxlIGlzIGFjY2VwdGFibGUu
DQo+IElmIHdlIGhhdmUgdG8gZ28gdGhhdCByb3V0ZSB0aGlzIG5lZWRzIHRvIGJlIHNwbGl0IGlu
dG8gYSBkaWZmZXJlbnQNCj4gZmlsZSwgYW5kIG1heWJlIHNvbWUgb2YgdGhlIGNvbW1vbiBiaXRz
IGFic3RyYWN0ZWQgdG9nZXRoZXIgdG8gcHJldmVudA0KPiBjb2RlIHJlcGV0aXRpb24uDQoNCldl
IGFsc28gYnJpZWZseSBkaXNjdXNzZWQgdGhhdCBhbmQgd2VyZSB0YWxraW5nIGFib3V0IGlmIHRo
ZSBhcmNoIHNwZWNpZmljDQoNCmZpbGVzIHNob3VsZCBiZSBzb21ldGluZyBsaWtlIGxpYnhsX3Bj
aV94ODZfbGludXguYyBldGMuDQoNCj4NCj4+ICAgICAgIGxpYnhsX19wdHJfYWRkKGdjLCAqbGlz
dCk7DQo+PiAgIA0KPj4gICAgICAgcmV0dXJuIDA7DQo+PiBAQCAtNDExLDEzICs0MzksMjAgQEAg
c3RhdGljIGludCBzeXNmc193cml0ZV9iZGYobGlieGxfX2djICpnYywgY29uc3QgY2hhciAqIHN5
c2ZzX3BhdGgsDQo+PiAgIGxpYnhsX2RldmljZV9wY2kgKmxpYnhsX2RldmljZV9wY2lfYXNzaWdu
YWJsZV9saXN0KGxpYnhsX2N0eCAqY3R4LCBpbnQgKm51bSkNCj4+ICAgew0KPj4gICAgICAgR0Nf
SU5JVChjdHgpOw0KPj4gLSAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2lkZXZzID0gTlVMTCwgKm5l
dywgKmFzc2lnbmVkOw0KPj4gKyAgICBsaWJ4bF9kZXZpY2VfcGNpICpwY2lkZXZzID0gTlVMTCwg
Km5ldzsNCj4+ICsgICAgaW50IHI7DQo+PiArI2lmZGVmIENPTkZJR19QQ0lCQUNLDQo+PiArICAg
IGxpYnhsX2RldmljZV9wY2kgKmFzc2lnbmVkOw0KPj4gKyAgICBpbnQgbnVtX2Fzc2lnbmVkOw0K
Pj4gICAgICAgc3RydWN0IGRpcmVudCAqZGU7DQo+PiAgICAgICBESVIgKmRpcjsNCj4+IC0gICAg
aW50IHIsIG51bV9hc3NpZ25lZDsNCj4+ICsjZWxzZQ0KPj4gKyAgICB4Y19wY2lfZGV2aWNlX2Vu
dW1fYXNzaWduZWRfdCBlOw0KPj4gKyNlbmRpZg0KPj4gICANCj4+ICAgICAgICpudW0gPSAwOw0K
Pj4gICANCj4+ICsjaWZkZWYgQ09ORklHX1BDSUJBQ0sNCj4+ICAgICAgIHIgPSBnZXRfYWxsX2Fz
c2lnbmVkX2RldmljZXMoZ2MsICZhc3NpZ25lZCwgJm51bV9hc3NpZ25lZCk7DQo+PiAgICAgICBp
ZiAocikgZ290byBvdXQ7DQo+PiAgIA0KPj4gQEAgLTQ1Myw2ICs0ODgsMzIgQEAgbGlieGxfZGV2
aWNlX3BjaSAqbGlieGxfZGV2aWNlX3BjaV9hc3NpZ25hYmxlX2xpc3QobGlieGxfY3R4ICpjdHgs
IGludCAqbnVtKQ0KPj4gICANCj4+ICAgICAgIGNsb3NlZGlyKGRpcik7DQo+PiAgIG91dDoNCj4+
ICsjZWxzZQ0KPj4gKyAgICBtZW1zZXQoJmUsIDAsIHNpemVvZihlKSk7DQo+PiArICAgIGUucmVw
b3J0X25vdF9hc3NpZ25lZCA9IDE7DQo+PiArICAgIGRvIHsNCj4+ICsgICAgICAgIHIgPSB4Y19w
Y2lfZGV2aWNlX2VudW1fYXNzaWduZWQoY3R4LT54Y2gsICZlKTsNCj4+ICsgICAgICAgIGlmICgg
ciAmJiBlcnJubyA9PSBFSU5WQUwgKQ0KPj4gKyAgICAgICAgICAgIGJyZWFrOw0KPj4gKyAgICAg
ICAgbmV3ID0gcmVhbGxvYyhwY2lkZXZzLCAoZS5pZHggKyAxKSAqIHNpemVvZigqbmV3KSk7DQo+
PiArICAgICAgICBpZiAoTlVMTCA9PSBuZXcpDQo+PiArICAgICAgICAgICAgY29udGludWU7DQo+
PiArDQo+PiArICAgICAgICBwY2lkZXZzID0gbmV3Ow0KPj4gKyAgICAgICAgbmV3ID0gcGNpZGV2
cyArIGUuaWR4Ow0KPj4gKw0KPj4gKyAgICAgICAgbWVtc2V0KG5ldywgMCwgc2l6ZW9mKCpuZXcp
KTsNCj4+ICsNCj4+ICsgICAgICAgIHBjaWRldl9zdHJ1Y3RfZmlsbChuZXcsDQo+PiArICAgICAg
ICAgICAgICAgICAgICAgICAgICAgZS5kb21haW4sDQo+PiArICAgICAgICAgICAgICAgICAgICAg
ICAgICAgZS5tYWNoaW5lX3NiZGYgPj4gOCAmIDB4ZmYsDQo+PiArICAgICAgICAgICAgICAgICAg
ICAgICAgICAgUENJX1NMT1QoZS5tYWNoaW5lX3NiZGYpLA0KPj4gKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgIFBDSV9GVU5DKGUubWFjaGluZV9zYmRmKSwNCj4+ICsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAwIC8qdmRldmZuKi8pOw0KPj4gKyAgICAgICAgZS5pZHgrKzsNCj4+ICsgICAg
fSB3aGlsZSAoIXIpOw0KPj4gKyAgICAqbnVtID0gZS5pZHg7DQo+PiArI2VuZGlmDQo+PiAgICAg
ICBHQ19GUkVFOw0KPj4gICAgICAgcmV0dXJuIHBjaWRldnM7DQo+PiAgIH0NCj4+IEBAIC02MDYs
NiArNjY3LDcgQEAgYm9vbCBsaWJ4bF9faXNfaWdkX3ZnYV9wYXNzdGhydShsaWJ4bF9fZ2MgKmdj
LA0KPj4gICAgICAgcmV0dXJuIGZhbHNlOw0KPj4gICB9DQo+PiAgIA0KPj4gKyNpZmRlZiBDT05G
SUdfUENJQkFDSw0KPj4gICAvKg0KPj4gICAgKiBBIGJyaWVmIGNvbW1lbnQgYWJvdXQgc2xvdHMu
ICBJIGRvbid0IGtub3cgd2hhdCBzbG90cyBhcmUgZm9yOyBob3dldmVyLA0KPj4gICAgKiBJIGhh
dmUgYnkgZXhwZXJpbWVudGF0aW9uIGRldGVybWluZWQ6DQo+PiBAQCAtNjQ4LDExICs3MTAsMTMg
QEAgb3V0Og0KPj4gICAgICAgZmNsb3NlKGYpOw0KPj4gICAgICAgcmV0dXJuIHJjOw0KPj4gICB9
DQo+PiArI2VuZGlmDQo+PiAgIA0KPj4gICBzdGF0aWMgaW50IHBjaWJhY2tfZGV2X2lzX2Fzc2ln
bmVkKGxpYnhsX19nYyAqZ2MsIGxpYnhsX2RldmljZV9wY2kgKnBjaWRldikNCj4+ICAgew0KPj4g
LSAgICBjaGFyICogc3BhdGg7DQo+PiAgICAgICBpbnQgcmM7DQo+PiArI2lmZGVmIENPTkZJR19Q
Q0lCQUNLDQo+PiArICAgIGNoYXIgKiBzcGF0aDsNCj4+ICAgICAgIHN0cnVjdCBzdGF0IHN0Ow0K
Pj4gICANCj4+ICAgICAgIGlmICggYWNjZXNzKFNZU0ZTX1BDSUJBQ0tfRFJJVkVSLCBGX09LKSA8
IDAgKSB7DQo+PiBAQCAtNjYzLDIyICs3MjcsMjcgQEAgc3RhdGljIGludCBwY2liYWNrX2Rldl9p
c19hc3NpZ25lZChsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9kZXZpY2VfcGNpICpwY2lkZXYpDQo+PiAg
ICAgICAgICAgfQ0KPj4gICAgICAgICAgIHJldHVybiAtMTsNCj4+ICAgICAgIH0NCj4+IC0NCj4+
ICAgICAgIHNwYXRoID0gR0NTUFJJTlRGKFNZU0ZTX1BDSUJBQ0tfRFJJVkVSIi8iUENJX0JERiwN
Cj4+ICAgICAgICAgICAgICAgICAgICAgICAgIHBjaWRldi0+ZG9tYWluLCBwY2lkZXYtPmJ1cywN
Cj4+ICAgICAgICAgICAgICAgICAgICAgICAgIHBjaWRldi0+ZGV2LCBwY2lkZXYtPmZ1bmMpOw0K
Pj4gICAgICAgcmMgPSBsc3RhdChzcGF0aCwgJnN0KTsNCj4+IC0NCj4+ICAgICAgIGlmKCByYyA9
PSAwICkNCj4+ICAgICAgICAgICByZXR1cm4gMTsNCj4+ICAgICAgIGlmICggcmMgPCAwICYmIGVy
cm5vID09IEVOT0VOVCApDQo+PiAgICAgICAgICAgcmV0dXJuIDA7DQo+PiAgICAgICBMT0dFKEVS
Uk9SLCAiQWNjZXNzaW5nICVzIiwgc3BhdGgpOw0KPj4gICAgICAgcmV0dXJuIC0xOw0KPj4gKyNl
bHNlDQo+PiArICAgIGxpYnhsX2N0eCAqY3R4ID0gbGlieGxfX2djX293bmVyKGdjKTsNCj4+ICsN
Cj4+ICsgICAgcmMgPSB4Y19wY2lfZGV2aWNlX2dldF9hc3NpZ25lZChjdHgtPnhjaCwgcGNpZGV2
X2VuY29kZV9iZGYocGNpZGV2KSk7DQo+PiArICAgIHJldHVybiByYyA9PSAwID8gMSA6IDA7DQo+
PiArI2VuZGlmDQo+PiAgIH0NCj4+ICAgDQo+PiAgIHN0YXRpYyBpbnQgcGNpYmFja19kZXZfYXNz
aWduKGxpYnhsX19nYyAqZ2MsIGxpYnhsX2RldmljZV9wY2kgKnBjaWRldikNCj4+ICAgew0KPj4g
KyNpZmRlZiBDT05GSUdfUENJQkFDSw0KPj4gICAgICAgaW50IHJjOw0KPj4gICANCj4+ICAgICAg
IGlmICggKHJjPXBjaWJhY2tfZGV2X2hhc19zbG90KGdjLCBwY2lkZXYpKSA8IDAgKSB7DQo+PiBA
QCAtNjk3LDEwICs3NjYsMTcgQEAgc3RhdGljIGludCBwY2liYWNrX2Rldl9hc3NpZ24obGlieGxf
X2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqcGNpZGV2KQ0KPj4gICAgICAgICAgIHJldHVybiBF
UlJPUl9GQUlMOw0KPj4gICAgICAgfQ0KPj4gICAgICAgcmV0dXJuIDA7DQo+PiArI2Vsc2UNCj4+
ICsgICAgbGlieGxfY3R4ICpjdHggPSBsaWJ4bF9fZ2Nfb3duZXIoZ2MpOw0KPj4gKw0KPj4gKyAg
ICByZXR1cm4geGNfcGNpX2RldmljZV9zZXRfYXNzaWduZWQoY3R4LT54Y2gsIHBjaWRldl9lbmNv
ZGVfYmRmKHBjaWRldiksDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICB0cnVlKTsNCj4+ICsjZW5kaWYNCj4+ICAgfQ0KPj4gICANCj4+ICAgc3RhdGljIGludCBwY2li
YWNrX2Rldl91bmFzc2lnbihsaWJ4bF9fZ2MgKmdjLCBsaWJ4bF9kZXZpY2VfcGNpICpwY2lkZXYp
DQo+PiAgIHsNCj4+ICsjaWZkZWYgQ09ORklHX1BDSUJBQ0sNCj4+ICAgICAgIC8qIFJlbW92ZSBm
cm9tIHBjaWJhY2sgKi8NCj4+ICAgICAgIGlmICggc3lzZnNfZGV2X3VuYmluZChnYywgcGNpZGV2
LCBOVUxMKSA8IDAgKSB7DQo+PiAgICAgICAgICAgTE9HKEVSUk9SLCAiQ291bGRuJ3QgdW5iaW5k
IGRldmljZSEiKTsNCj4+IEBAIC03MTYsNiArNzkyLDEyIEBAIHN0YXRpYyBpbnQgcGNpYmFja19k
ZXZfdW5hc3NpZ24obGlieGxfX2djICpnYywgbGlieGxfZGV2aWNlX3BjaSAqcGNpZGV2KQ0KPj4g
ICAgICAgICAgIH0NCj4+ICAgICAgIH0NCj4+ICAgICAgIHJldHVybiAwOw0KPj4gKyNlbHNlDQo+
PiArICAgIGxpYnhsX2N0eCAqY3R4ID0gbGlieGxfX2djX293bmVyKGdjKTsNCj4+ICsNCj4+ICsg
ICAgcmV0dXJuIHhjX3BjaV9kZXZpY2Vfc2V0X2Fzc2lnbmVkKGN0eC0+eGNoLCBwY2lkZXZfZW5j
b2RlX2JkZihwY2lkZXYpLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgZmFsc2UpOw0KPj4gKyNlbmRpZg0KPj4gICB9DQo+PiAgIA0KPj4gICAjZGVmaW5lIFBDSUJB
Q0tfSU5GT19QQVRIICIvbGlieGwvcGNpYmFjayINCj4+IEBAIC03ODAsMTAgKzg2MiwxNSBAQCBz
dGF0aWMgaW50IGxpYnhsX19kZXZpY2VfcGNpX2Fzc2lnbmFibGVfYWRkKGxpYnhsX19nYyAqZ2Ms
DQo+PiAgIA0KPj4gICAgICAgLyogU2VlIGlmIHRoZSBkZXZpY2UgZXhpc3RzICovDQo+PiAgICAg
ICBzcGF0aCA9IEdDU1BSSU5URihTWVNGU19QQ0lfREVWIi8iUENJX0JERiwgZG9tLCBidXMsIGRl
diwgZnVuYyk7DQo+PiArI2lmZGVmIENPTkZJR19QQ0lfU1lTRlNfRE9NMA0KPj4gICAgICAgaWYg
KCBsc3RhdChzcGF0aCwgJnN0KSApIHsNCj4+ICAgICAgICAgICBMT0dFKEVSUk9SLCAiQ291bGRu
J3QgbHN0YXQgJXMiLCBzcGF0aCk7DQo+PiAgICAgICAgICAgcmV0dXJuIEVSUk9SX0ZBSUw7DQo+
PiAgICAgICB9DQo+PiArI2Vsc2UNCj4+ICsgICAgKHZvaWQpc3Q7DQo+PiArICAgIHByaW50Zigi
SU1QTEVNRU5UX01FOiAlcyBsc3RhdCAlc1xuIiwgX19mdW5jX18sIHNwYXRoKTsNCj4+ICsjZW5k
aWYNCj4+ICAgDQo+PiAgICAgICAvKiBDaGVjayB0byBzZWUgaWYgaXQncyBhbHJlYWR5IGFzc2ln
bmVkIHRvIHBjaWJhY2sgKi8NCj4+ICAgICAgIHJjID0gcGNpYmFja19kZXZfaXNfYXNzaWduZWQo
Z2MsIHBjaWRldik7DQo+PiBAQCAtMTM1MCw4ICsxNDM3LDEyIEBAIHN0YXRpYyB2b2lkIHBjaV9h
ZGRfZG1fZG9uZShsaWJ4bF9fZWdjICplZ2MsDQo+PiAgIA0KPj4gICAgICAgaWYgKGYgPT0gTlVM
TCkgew0KPj4gICAgICAgICAgIExPR0VEKEVSUk9SLCBkb21haW5pZCwgIkNvdWxkbid0IG9wZW4g
JXMiLCBzeXNmc19wYXRoKTsNCj4+ICsjaWZkZWYgQ09ORklHX1BDSV9TWVNGU19ET00wDQo+PiAg
ICAgICAgICAgcmMgPSBFUlJPUl9GQUlMOw0KPj4gICAgICAgICAgIGdvdG8gb3V0Ow0KPj4gKyNl
bHNlDQo+PiArICAgICAgICBnb3RvIG91dF9ub19pcnE7DQo+PiArI2VuZGlmDQo+PiAgICAgICB9
DQo+PiAgICAgICBmb3IgKGkgPSAwOyBpIDwgUFJPQ19QQ0lfTlVNX1JFU09VUkNFUzsgaSsrKSB7
DQo+PiAgICAgICAgICAgaWYgKGZzY2FuZihmLCAiMHglbGx4IDB4JWxseCAweCVsbHhcbiIsICZz
dGFydCwgJmVuZCwgJmZsYWdzKSAhPSAzKQ0KPj4gQEAgLTE1MjIsNyArMTYxMywxMSBAQCBzdGF0
aWMgaW50IGxpYnhsX3BjaWRldl9hc3NpZ25hYmxlKGxpYnhsX2N0eCAqY3R4LCBsaWJ4bF9kZXZp
Y2VfcGNpICpwY2lkZXYpDQo+PiAgICAgICAgICAgICAgIGJyZWFrOw0KPj4gICAgICAgfQ0KPj4g
ICAgICAgZnJlZShwY2lkZXZzKTsNCj4+ICsjaWZkZWYgQ09ORklHX1BDSUJBQ0sNCj4+ICAgICAg
IHJldHVybiBpICE9IG51bTsNCj4+ICsjZWxzZQ0KPj4gKyAgICByZXR1cm4gMTsNCj4+ICsjZW5k
aWYNCj4+ICAgfQ0KPj4gICANCj4+ICAgc3RhdGljIHZvaWQgZGV2aWNlX3BjaV9hZGRfc3R1YmRv
bV93YWl0KGxpYnhsX19lZ2MgKmVnYywNCj4+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vc3lz
Y3RsLmMgYi94ZW4vYXJjaC9hcm0vc3lzY3RsLmMNCj4+IGluZGV4IGY4Nzk0NGU4NDczYy4uODRl
OTMzYjJlYjQ1IDEwMDY0NA0KPj4gLS0tIGEveGVuL2FyY2gvYXJtL3N5c2N0bC5jDQo+PiArKysg
Yi94ZW4vYXJjaC9hcm0vc3lzY3RsLmMNCj4+IEBAIC0xMCw2ICsxMCw3IEBADQo+PiAgICNpbmNs
dWRlIDx4ZW4vbGliLmg+DQo+PiAgICNpbmNsdWRlIDx4ZW4vZXJybm8uaD4NCj4+ICAgI2luY2x1
ZGUgPHhlbi9oeXBlcmNhbGwuaD4NCj4+ICsjaW5jbHVkZSA8eGVuL2d1ZXN0X2FjY2Vzcy5oPg0K
Pj4gICAjaW5jbHVkZSA8cHVibGljL3N5c2N0bC5oPg0KPj4gICANCj4+ICAgdm9pZCBhcmNoX2Rv
X3BoeXNpbmZvKHN0cnVjdCB4ZW5fc3lzY3RsX3BoeXNpbmZvICpwaSkNCj4+IEBAIC0yMCw3ICsy
MSw3MCBAQCB2b2lkIGFyY2hfZG9fcGh5c2luZm8oc3RydWN0IHhlbl9zeXNjdGxfcGh5c2luZm8g
KnBpKQ0KPj4gICBsb25nIGFyY2hfZG9fc3lzY3RsKHN0cnVjdCB4ZW5fc3lzY3RsICpzeXNjdGws
DQo+PiAgICAgICAgICAgICAgICAgICAgICAgWEVOX0dVRVNUX0hBTkRMRV9QQVJBTSh4ZW5fc3lz
Y3RsX3QpIHVfc3lzY3RsKQ0KPj4gICB7DQo+PiAtICAgIHJldHVybiAtRU5PU1lTOw0KPj4gKyAg
ICBsb25nIHJldCA9IDA7DQo+PiArICAgIGJvb2wgY29weWJhY2sgPSAwOw0KPj4gKw0KPj4gKyAg
ICBzd2l0Y2ggKCBzeXNjdGwtPmNtZCApDQo+PiArICAgIHsNCj4+ICsgICAgY2FzZSBYRU5fU1lT
Q1RMX3BjaV9kZXZpY2Vfc2V0X2Fzc2lnbmVkOg0KPj4gKyAgICB7DQo+PiArICAgICAgICB1MTYg
c2VnOw0KPj4gKyAgICAgICAgdTggYnVzLCBkZXZmbjsNCj4+ICsgICAgICAgIHVpbnQzMl90IG1h
Y2hpbmVfc2JkZjsNCj4+ICsNCj4+ICsgICAgICAgIG1hY2hpbmVfc2JkZiA9IHN5c2N0bC0+dS5w
Y2lfc2V0X2Fzc2lnbmVkLm1hY2hpbmVfc2JkZjsNCj4+ICsNCj4+ICsjaWYgMA0KPj4gKyAgICAg
ICAgcmV0ID0geHNtX3BjaV9kZXZpY2Vfc2V0X2Fzc2lnbmVkKFhTTV9IT09LLCBkKTsNCj4+ICsg
ICAgICAgIGlmICggcmV0ICkNCj4+ICsgICAgICAgICAgICBicmVhazsNCj4+ICsjZW5kaWYNCj4+
ICsNCj4+ICsgICAgICAgIHNlZyA9IG1hY2hpbmVfc2JkZiA+PiAxNjsNCj4+ICsgICAgICAgIGJ1
cyA9IFBDSV9CVVMobWFjaGluZV9zYmRmKTsNCj4+ICsgICAgICAgIGRldmZuID0gUENJX0RFVkZO
MihtYWNoaW5lX3NiZGYpOw0KPj4gKw0KPj4gKyAgICAgICAgcGNpZGV2c19sb2NrKCk7DQo+PiAr
ICAgICAgICByZXQgPSBwY2lfZGV2aWNlX3NldF9hc3NpZ25lZChzZWcsIGJ1cywgZGV2Zm4sDQo+
PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAhIXN5c2N0bC0+dS5wY2lf
c2V0X2Fzc2lnbmVkLmFzc2lnbmVkKTsNCj4+ICsgICAgICAgIHBjaWRldnNfdW5sb2NrKCk7DQo+
PiArICAgICAgICBicmVhazsNCj4+ICsgICAgfQ0KPj4gKyAgICBjYXNlIFhFTl9TWVNDVExfcGNp
X2RldmljZV9nZXRfYXNzaWduZWQ6DQo+PiArICAgIHsNCj4+ICsgICAgICAgIHUxNiBzZWc7DQo+
PiArICAgICAgICB1OCBidXMsIGRldmZuOw0KPj4gKyAgICAgICAgdWludDMyX3QgbWFjaGluZV9z
YmRmOw0KPj4gKw0KPj4gKyAgICAgICAgbWFjaGluZV9zYmRmID0gc3lzY3RsLT51LnBjaV9zZXRf
YXNzaWduZWQubWFjaGluZV9zYmRmOw0KPj4gKw0KPj4gKyAgICAgICAgc2VnID0gbWFjaGluZV9z
YmRmID4+IDE2Ow0KPj4gKyAgICAgICAgYnVzID0gUENJX0JVUyhtYWNoaW5lX3NiZGYpOw0KPj4g
KyAgICAgICAgZGV2Zm4gPSBQQ0lfREVWRk4yKG1hY2hpbmVfc2JkZik7DQo+PiArDQo+PiArICAg
ICAgICBwY2lkZXZzX2xvY2soKTsNCj4+ICsgICAgICAgIHJldCA9IHBjaV9kZXZpY2VfZ2V0X2Fz
c2lnbmVkKHNlZywgYnVzLCBkZXZmbik7DQo+PiArICAgICAgICBwY2lkZXZzX3VubG9jaygpOw0K
Pj4gKyAgICAgICAgYnJlYWs7DQo+PiArICAgIH0NCj4+ICsgICAgY2FzZSBYRU5fU1lTQ1RMX3Bj
aV9kZXZpY2VfZW51bV9hc3NpZ25lZDoNCj4+ICsgICAgew0KPj4gKyAgICAgICAgcmV0ID0gcGNp
X2RldmljZV9lbnVtX2Fzc2lnbmVkKHN5c2N0bC0+dS5wY2lfZW51bV9hc3NpZ25lZC5yZXBvcnRf
bm90X2Fzc2lnbmVkLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHN5c2N0bC0+dS5wY2lfZW51bV9hc3NpZ25lZC5pZHgsDQo+PiArICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgJnN5c2N0bC0+dS5wY2lfZW51bV9hc3NpZ25lZC5kb21haW4s
DQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgJnN5c2N0bC0+dS5w
Y2lfZW51bV9hc3NpZ25lZC5tYWNoaW5lX3NiZGYpOw0KPj4gKyAgICAgICAgY29weWJhY2sgPSAx
Ow0KPj4gKyAgICAgICAgYnJlYWs7DQo+PiArICAgIH0NCj4+ICsgICAgZGVmYXVsdDoNCj4+ICsg
ICAgICAgIHJldCA9IC1FTk9TWVM7DQo+PiArICAgICAgICBicmVhazsNCj4+ICsgICAgfQ0KPj4g
KyAgICBpZiAoIGNvcHliYWNrICYmICghcmV0IHx8IGNvcHliYWNrID4gMCkgJiYNCj4+ICsgICAg
ICAgICBfX2NvcHlfdG9fZ3Vlc3QodV9zeXNjdGwsIHN5c2N0bCwgMSkgKQ0KPj4gKyAgICAgICAg
cmV0ID0gLUVGQVVMVDsNCj4+ICsNCj4+ICsgICAgcmV0dXJuIHJldDsNCj4+ICAgfQ0KPj4gICAN
Cj4+ICAgLyoNCj4+IGRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9wY2kuYyBi
L3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3BjaS5jDQo+PiBpbmRleCA5OGU4YTJmYWRlNjAuLjQ5
YjQyNzljNjNiZCAxMDA2NDQNCj4+IC0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3BjaS5j
DQo+PiArKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9wY2kuYw0KPj4gQEAgLTg3OSw2ICs4
NzksNDMgQEAgaW50IHBjaV9yZW1vdmVfZGV2aWNlKHUxNiBzZWcsIHU4IGJ1cywgdTggZGV2Zm4p
DQo+PiAgICAgICByZXR1cm4gcmV0Ow0KPj4gICB9DQo+PiAgIA0KPj4gKyNpZmRlZiBDT05GSUdf
QVJNDQo+PiAraW50IHBjaV9kZXZpY2Vfc2V0X2Fzc2lnbmVkKHUxNiBzZWcsIHU4IGJ1cywgdTgg
ZGV2Zm4sIGJvb2wgYXNzaWduZWQpDQo+PiArew0KPj4gKyAgICBzdHJ1Y3QgcGNpX2RldiAqcGRl
djsNCj4+ICsNCj4+ICsgICAgcGRldiA9IHBjaV9nZXRfcGRldihzZWcsIGJ1cywgZGV2Zm4pOw0K
Pj4gKyAgICBpZiAoICFwZGV2ICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgcHJpbnRrKFhFTkxP
R19FUlIgIkNhbid0IGZpbmQgUENJIGRldmljZSAlMDR4OiUwMng6JTAyeC4ldVxuIiwNCj4+ICsg
ICAgICAgICAgICAgICBzZWcsIGJ1cywgUENJX1NMT1QoZGV2Zm4pLCBQQ0lfRlVOQyhkZXZmbikp
Ow0KPiBUYWtlIGEgbG9vayBhdCBwY2lfc2JkZl90LCB5b3Ugc2hvdWxkIHVzZSBpdCBhcyB0aGUg
cGFyYW1ldGVyIGFuZCBpbg0KPiBvcmRlciB0byBwcmludCB0aGUgU0JERiAoJXBwKS4NCkkgd2ls
bCwgdGhhbmsgeW91DQo+DQo+PiArICAgICAgICByZXR1cm4gLUVOT0RFVjsNCj4+ICsgICAgfQ0K
Pj4gKw0KPj4gKyAgICBwZGV2LT5hc3NpZ25lZCA9IGFzc2lnbmVkOw0KPj4gKyAgICBwcmludGso
WEVOTE9HX0VSUiAicGNpYmFjayAlc2Fzc2lnbiBQQ0kgZGV2aWNlICUwNHg6JTAyeDolMDJ4LiV1
XG4iLA0KPj4gKyAgICAgICAgICAgYXNzaWduZWQgPyAiIiA6ICJkZS0iLA0KPj4gKyAgICAgICAg
ICAgc2VnLCBidXMsIFBDSV9TTE9UKGRldmZuKSwgUENJX0ZVTkMoZGV2Zm4pKTsNCj4+ICsNCj4+
ICsgICAgcmV0dXJuIDA7DQo+PiArfQ0KPj4gKw0KPj4gK2ludCBwY2lfZGV2aWNlX2dldF9hc3Np
Z25lZCh1MTYgc2VnLCB1OCBidXMsIHU4IGRldmZuKQ0KPj4gK3sNCj4+ICsgICAgc3RydWN0IHBj
aV9kZXYgKnBkZXY7DQo+PiArDQo+PiArICAgIHBkZXYgPSBwY2lfZ2V0X3BkZXYoc2VnLCBidXMs
IGRldmZuKTsNCj4+ICsgICAgaWYgKCAhcGRldiApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHBy
aW50ayhYRU5MT0dfRVJSICJDYW4ndCBmaW5kIFBDSSBkZXZpY2UgJTA0eDolMDJ4OiUwMnguJXVc
biIsDQo+PiArICAgICAgICAgICAgICAgc2VnLCBidXMsIFBDSV9TTE9UKGRldmZuKSwgUENJX0ZV
TkMoZGV2Zm4pKTsNCj4+ICsgICAgICAgIHJldHVybiAtRU5PREVWOw0KPj4gKyAgICB9DQo+PiAr
DQo+PiArICAgIHJldHVybiBwZGV2LT5hc3NpZ25lZCA/IDAgOiAtRU5PREVWOw0KPj4gK30NCj4+
ICsjZW5kaWYNCj4+ICsNCj4+ICAgI2lmbmRlZiBDT05GSUdfQVJNDQo+PiAgIC8qVE9ETyA6SW1w
bGVtZW50IE1TSSBzdXBwb3J0IGZvciBBUk0gICovDQo+PiAgIHN0YXRpYyBpbnQgcGNpX2NsZWFu
X2RwY2lfaXJxKHN0cnVjdCBkb21haW4gKmQsDQo+PiBAQCAtMTgyMSw2ICsxODU4LDYyIEBAIGlu
dCBpb21tdV9kb19wY2lfZG9tY3RsKA0KPj4gICAgICAgcmV0dXJuIHJldDsNCj4+ICAgfQ0KPj4g
ICANCj4+ICsjaWZkZWYgQ09ORklHX0FSTQ0KPj4gK3N0cnVjdCBsaXN0X2Fzc2lnbmVkIHsNCj4+
ICsgICAgdWludDMyX3QgY3VyX2lkeDsNCj4+ICsgICAgdWludDMyX3QgZnJvbV9pZHg7DQo+PiAr
ICAgIGJvb2wgYXNzaWduZWQ7DQo+PiArICAgIGRvbWlkX3QgKmRvbWFpbjsNCj4+ICsgICAgdWlu
dDMyX3QgKm1hY2hpbmVfc2JkZjsNCj4+ICt9Ow0KPj4gKw0KPj4gK3N0YXRpYyBpbnQgX2VudW1f
YXNzaWduZWRfcGNpX2RldmljZXMoc3RydWN0IHBjaV9zZWcgKnBzZWcsIHZvaWQgKmFyZykNCj4+
ICt7DQo+PiArICAgIHN0cnVjdCBsaXN0X2Fzc2lnbmVkICpjdHh0ID0gYXJnOw0KPj4gKyAgICBz
dHJ1Y3QgcGNpX2RldiAqcGRldjsNCj4+ICsNCj4+ICsgICAgbGlzdF9mb3JfZWFjaF9lbnRyeSAo
IHBkZXYsICZwc2VnLT5hbGxkZXZzX2xpc3QsIGFsbGRldnNfbGlzdCApDQo+PiArICAgIHsNCj4+
ICsgICAgICAgIGlmICggcGRldi0+YXNzaWduZWQgPT0gY3R4dC0+YXNzaWduZWQgKQ0KPj4gKyAg
ICAgICAgew0KPj4gKyAgICAgICAgICAgIGlmICggY3R4dC0+Y3VyX2lkeCA9PSBjdHh0LT5mcm9t
X2lkeCApDQo+PiArICAgICAgICAgICAgew0KPj4gKyAgICAgICAgICAgICAgICAqY3R4dC0+ZG9t
YWluID0gcGRldi0+ZG9tYWluLT5kb21haW5faWQ7DQo+PiArICAgICAgICAgICAgICAgICpjdHh0
LT5tYWNoaW5lX3NiZGYgPSBwZGV2LT5zYmRmLnNiZGY7DQo+PiArICAgICAgICAgICAgICAgIHJl
dHVybiAxOw0KPj4gKyAgICAgICAgICAgIH0NCj4+ICsgICAgICAgICAgICBjdHh0LT5jdXJfaWR4
Kys7DQo+PiArICAgICAgICB9DQo+PiArICAgIH0NCj4+ICsgICAgcmV0dXJuIDA7DQo+PiArfQ0K
Pj4gKw0KPj4gK2ludCBwY2lfZGV2aWNlX2VudW1fYXNzaWduZWQoYm9vbCByZXBvcnRfbm90X2Fz
c2lnbmVkLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdWludDMyX3QgZnJvbV9p
ZHgsIGRvbWlkX3QgKmRvbWFpbiwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVp
bnQzMl90ICptYWNoaW5lX3NiZGYpDQo+PiArew0KPj4gKyAgICBzdHJ1Y3QgbGlzdF9hc3NpZ25l
ZCBjdHh0ID0gew0KPj4gKyAgICAgICAgLmFzc2lnbmVkID0gIXJlcG9ydF9ub3RfYXNzaWduZWQs
DQo+PiArICAgICAgICAuY3VyX2lkeCA9IDAsDQo+PiArICAgICAgICAuZnJvbV9pZHggPSBmcm9t
X2lkeCwNCj4+ICsgICAgICAgIC5kb21haW4gPSBkb21haW4sDQo+PiArICAgICAgICAubWFjaGlu
ZV9zYmRmID0gbWFjaGluZV9zYmRmLA0KPj4gKyAgICB9Ow0KPj4gKyAgICBpbnQgcmV0Ow0KPj4g
Kw0KPj4gKyAgICBwY2lkZXZzX2xvY2soKTsNCj4+ICsgICAgcmV0ID0gcGNpX3NlZ21lbnRzX2l0
ZXJhdGUoX2VudW1fYXNzaWduZWRfcGNpX2RldmljZXMsICZjdHh0KTsNCj4+ICsgICAgcGNpZGV2
c191bmxvY2soKTsNCj4+ICsgICAgLyoNCj4+ICsgICAgICogSWYgbm90IGZvdW5kIHRoZW4gcmVw
b3J0IGFzIEVJTlZBTCB0byBtYXJrDQo+PiArICAgICAqIGVudW1lcmF0aW9uIHByb2Nlc3MgZmlu
aXNoZWQuDQo+PiArICAgICAqLw0KPj4gKyAgICBpZiAoICFyZXQgKQ0KPj4gKyAgICAgICAgcmV0
dXJuIC1FSU5WQUw7DQo+PiArICAgIHJldHVybiAwOw0KPj4gK30NCj4+ICsjZW5kaWYNCj4+ICsN
Cj4+ICAgLyoNCj4+ICAgICogTG9jYWwgdmFyaWFibGVzOg0KPj4gICAgKiBtb2RlOiBDDQo+PiBk
aWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvcHVibGljL3N5c2N0bC5oIGIveGVuL2luY2x1ZGUvcHVi
bGljL3N5c2N0bC5oDQo+PiBpbmRleCBhMDczNjQ3MTE3OTQuLjVjYTczYzUzODY4OCAxMDA2NDQN
Cj4+IC0tLSBhL3hlbi9pbmNsdWRlL3B1YmxpYy9zeXNjdGwuaA0KPj4gKysrIGIveGVuL2luY2x1
ZGUvcHVibGljL3N5c2N0bC5oDQo+PiBAQCAtMTA2Miw2ICsxMDYyLDQwIEBAIHR5cGVkZWYgc3Ry
dWN0IHhlbl9zeXNjdGxfY3B1X3BvbGljeSB4ZW5fc3lzY3RsX2NwdV9wb2xpY3lfdDsNCj4+ICAg
REVGSU5FX1hFTl9HVUVTVF9IQU5ETEUoeGVuX3N5c2N0bF9jcHVfcG9saWN5X3QpOw0KPj4gICAj
ZW5kaWYNCj4+ICAgDQo+PiArLyoNCj4+ICsgKiBUaGVzZSBhcmUgdG8gZW11bGF0ZSBwY2liYWNr
IGRldmljZSAoZGUtKWFzc2lnbm1lbnQgdXNlZCBieSB0aGUgdG9vbHMNCj4+ICsgKiB0byB0cmFj
ayBjdXJyZW50IGRldmljZSBhc3NpZ25tZW50czogYWxsIHRoZSBQQ0kgZGV2aWNlcyB0aGF0IGNh
bg0KPj4gKyAqIGJlIHBhc3NlZCB0aHJvdWdoIG11c3QgYmUgYXNzaWduZWQgdG8gdGhlIHBjaWJh
Y2sgdG8gbWFyayB0aGVtDQo+PiArICogYXMgc3VjaC4gQXMgb24gQVJNIHdlIGRvIG5vdCBydW4g
cGNpe2JhY2t8ZnJvbnR9IGFuZCBhcmUgZW11bGF0aW5nDQo+PiArICogUENJIGhvc3QgYnJpZGdl
IGluIFhlbiwgc28gd2UgbmVlZCB0byBtYWludGFpbiB0aGUgYXNzaWdubWVudHMgb24gb3VyDQo+
PiArICogb3duIGluIFhlbiBpdHNlbGYuDQo+PiArICoNCj4+ICsgKiBOb3RlIG9uIHhlbl9zeXNj
dGxfcGNpX2RldmljZV9nZXRfYXNzaWduZWQ6IEVOT0VOVCBpcyB1c2VkIHRvIHJlcG9ydA0KPj4g
KyAqIHRoYXQgdGhlcmUgYXJlIG5vIGFzc2lnbmVkIGRldmljZXMgbGVmdC4NCj4+ICsgKi8NCj4+
ICtzdHJ1Y3QgeGVuX3N5c2N0bF9wY2lfZGV2aWNlX3NldF9hc3NpZ25lZCB7DQo+PiArICAgIC8q
IElOICovDQo+PiArICAgIC8qIEZJWE1FOiBpcyB0aGlzIHJlYWxseSBhIG1hY2hpbmUgU0JERiBv
ciBhcyBEb21haW4tMCBzZWVzIGl0PyAqLw0KPj4gKyAgICB1aW50MzJfdCBtYWNoaW5lX3NiZGY7
DQo+IEkgdGhpbmsgeW91IG5lZWQgdG8gbWFrZSBpdCBjbGVhciB0aGF0IHdoZW4gcnVubmluZyBv
biBYZW4gZG9tMCAob3INCj4gdGhlIGhhcmR3YXJlIGRvbWFpbikgc2hvdWxkIF9uZXZlcl8gY2hh
bmdlIHRoZSBlbnVtZXJhdGlvbiBvZiBkZXZpY2VzLA0KPiBvciBlbHNlIG5vbmUgb2YgdGhpcyB3
aWxsIHdvcmsuDQpJIHdpbGwNCj4NCj4+ICsgICAgdWludDhfdCBhc3NpZ25lZDsNCj4+ICt9Ow0K
Pj4gKw0KPj4gK3N0cnVjdCB4ZW5fc3lzY3RsX3BjaV9kZXZpY2VfZ2V0X2Fzc2lnbmVkIHsNCj4+
ICsgICAgLyogSU4gKi8NCj4+ICsgICAgdWludDMyX3QgbWFjaGluZV9zYmRmOw0KPj4gK307DQo+
PiArDQo+PiArc3RydWN0IHhlbl9zeXNjdGxfcGNpX2RldmljZV9lbnVtX2Fzc2lnbmVkIHsNCj4+
ICsgICAgLyogSU4gKi8NCj4+ICsgICAgdWludDMyX3QgaWR4Ow0KPj4gKyAgICB1aW50OF90IHJl
cG9ydF9ub3RfYXNzaWduZWQ7DQo+PiArICAgIC8qIE9VVCAqLw0KPj4gKyAgICBkb21pZF90IGRv
bWFpbjsNCj4+ICsgICAgdWludDMyX3QgbWFjaGluZV9zYmRmOw0KPj4gK307DQo+PiArdHlwZWRl
ZiBzdHJ1Y3QgeGVuX3N5c2N0bF9wY2lfZGV2aWNlX2VudW1fYXNzaWduZWQgeGVuX3N5c2N0bF9w
Y2lfZGV2aWNlX2VudW1fYXNzaWduZWRfdDsNCj4+ICtERUZJTkVfWEVOX0dVRVNUX0hBTkRMRSh4
ZW5fc3lzY3RsX3BjaV9kZXZpY2VfZW51bV9hc3NpZ25lZF90KTsNCj4+ICsNCj4+ICAgc3RydWN0
IHhlbl9zeXNjdGwgew0KPj4gICAgICAgdWludDMyX3QgY21kOw0KPj4gICAjZGVmaW5lIFhFTl9T
WVNDVExfcmVhZGNvbnNvbGUgICAgICAgICAgICAgICAgICAgIDENCj4+IEBAIC0xMDkyLDYgKzEx
MjYsOSBAQCBzdHJ1Y3QgeGVuX3N5c2N0bCB7DQo+PiAgICNkZWZpbmUgWEVOX1NZU0NUTF9saXZl
cGF0Y2hfb3AgICAgICAgICAgICAgICAgICAyNw0KPj4gICAvKiAjZGVmaW5lIFhFTl9TWVNDVExf
c2V0X3BhcmFtZXRlciAgICAgICAgICAgICAgMjggKi8NCj4+ICAgI2RlZmluZSBYRU5fU1lTQ1RM
X2dldF9jcHVfcG9saWN5ICAgICAgICAgICAgICAgIDI5DQo+PiArI2RlZmluZSBYRU5fU1lTQ1RM
X3BjaV9kZXZpY2Vfc2V0X2Fzc2lnbmVkICAgICAgIDMwDQo+PiArI2RlZmluZSBYRU5fU1lTQ1RM
X3BjaV9kZXZpY2VfZ2V0X2Fzc2lnbmVkICAgICAgIDMxDQo+PiArI2RlZmluZSBYRU5fU1lTQ1RM
X3BjaV9kZXZpY2VfZW51bV9hc3NpZ25lZCAgICAgIDMyDQo+PiAgICAgICB1aW50MzJfdCBpbnRl
cmZhY2VfdmVyc2lvbjsgLyogWEVOX1NZU0NUTF9JTlRFUkZBQ0VfVkVSU0lPTiAqLw0KPj4gICAg
ICAgdW5pb24gew0KPj4gICAgICAgICAgIHN0cnVjdCB4ZW5fc3lzY3RsX3JlYWRjb25zb2xlICAg
ICAgIHJlYWRjb25zb2xlOw0KPj4gQEAgLTExMjIsNiArMTE1OSw5IEBAIHN0cnVjdCB4ZW5fc3lz
Y3RsIHsNCj4+ICAgI2lmIGRlZmluZWQoX19pMzg2X18pIHx8IGRlZmluZWQoX194ODZfNjRfXykN
Cj4+ICAgICAgICAgICBzdHJ1Y3QgeGVuX3N5c2N0bF9jcHVfcG9saWN5ICAgICAgICBjcHVfcG9s
aWN5Ow0KPj4gICAjZW5kaWYNCj4+ICsgICAgICAgIHN0cnVjdCB4ZW5fc3lzY3RsX3BjaV9kZXZp
Y2Vfc2V0X2Fzc2lnbmVkIHBjaV9zZXRfYXNzaWduZWQ7DQo+PiArICAgICAgICBzdHJ1Y3QgeGVu
X3N5c2N0bF9wY2lfZGV2aWNlX2dldF9hc3NpZ25lZCBwY2lfZ2V0X2Fzc2lnbmVkOw0KPj4gKyAg
ICAgICAgc3RydWN0IHhlbl9zeXNjdGxfcGNpX2RldmljZV9lbnVtX2Fzc2lnbmVkIHBjaV9lbnVt
X2Fzc2lnbmVkOw0KPj4gICAgICAgICAgIHVpbnQ4X3QgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHBhZFsxMjhdOw0KPj4gICAgICAgfSB1Ow0KPj4gICB9Ow0KPj4gZGlmZiAtLWdpdCBhL3hl
bi9pbmNsdWRlL3hlbi9wY2kuaCBiL3hlbi9pbmNsdWRlL3hlbi9wY2kuaA0KPj4gaW5kZXggMmJj
NGFhZjQ1MzBjLi43YmY0MzlkZTRkZTAgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vaW5jbHVkZS94ZW4v
cGNpLmgNCj4+ICsrKyBiL3hlbi9pbmNsdWRlL3hlbi9wY2kuaA0KPj4gQEAgLTEzMiw2ICsxMzIs
MTMgQEAgc3RydWN0IHBjaV9kZXYgew0KPj4gICANCj4+ICAgICAgIC8qIERhdGEgZm9yIHZQQ0ku
ICovDQo+PiAgICAgICBzdHJ1Y3QgdnBjaSAqdnBjaTsNCj4+ICsjaWZkZWYgQ09ORklHX0FSTQ0K
Pj4gKyAgICAvKg0KPj4gKyAgICAgKiBTZXQgaWYgdGhpcyBQQ0kgZGV2aWNlIGlzIGVsaWdpYmxl
IGZvciBwYXNzIHRocm91Z2gsDQo+PiArICAgICAqIGUuZy4ganVzdCBsaWtlIGl0IHdhcyBhc3Np
Z25lZCB0byBwY2liYWNrIGRyaXZlci4NCj4+ICsgICAgICovDQo+PiArICAgIGJvb2wgYXNzaWdu
ZWQ7DQo+IFlvdSBjYW4gc2VlIHdoZXRoZXIgYSBkZXZpY2UgaXMgYXNzaWduZWQgb3Igbm90IGJ5
IGxvb2tpbmcgYXQgdGhlDQo+IGRvbWFpbiBmaWVsZCBBRkFJQ1QuDQoNCkhtLCBkb21haW4gZmll
bGQgY291bGQgYmUgZG9tX2lvLCBzbyB3ZSBuZWVkIHRvIHB1dCBhbiBleHRyYSBsb2dpYyBoZXJl
DQoNCnRvIHVuZGVyc3RhbmQgdGhlIGRldmljZSBpcyByZWFsbHkgYXNzaWduZWQgdG8gdGhlIGRv
bWFpbg0KDQo+DQo+IFRoYW5rcywgUm9nZXIuDQoNClRoYW5rIHlvdSENCg0KT2xla3NhbmRyDQoN
ClsxXSBodHRwczovL3d3dy5tYWlsLWFyY2hpdmUuY29tL3hlbi1kZXZlbEBsaXN0cy54ZW5wcm9q
ZWN0Lm9yZy9tc2c3NzQyMi5odG1s


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:39:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:39:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24914.52396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrH1-00028h-Uj; Wed, 11 Nov 2020 14:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24914.52396; Wed, 11 Nov 2020 14:38:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrH1-00028V-Pf; Wed, 11 Nov 2020 14:38:59 +0000
Received: by outflank-mailman (input) for mailman id 24914;
 Wed, 11 Nov 2020 14:38:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcrH0-00027y-Mc
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:38:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 401e0563-ef10-4f46-8bbf-f43382260f44;
 Wed, 11 Nov 2020 14:38:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2E461ABD6;
 Wed, 11 Nov 2020 14:38:57 +0000 (UTC)
X-Inumbo-ID: 401e0563-ef10-4f46-8bbf-f43382260f44
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605105537;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pOng3H3SkACgPUo/cvkAnhlDFfl0u6MHRepjUierqGw=;
	b=swETySJW/7Et6CJ2EFtxVQpdbfCibRmOsmZNXAHPoJn3RoWbqlgb3Akl2VUK2923DC6pzH
	OuzGR9A4gVTt6fpSAzSfsbe9AIIeUkS7JHVpKSvemnHVNumjoLGYiBKhIa+hRVZtz3NmsP
	xhmro3TJ+tpgviXTEvCU47dsxRiMEEY=
Message-ID: <75b3865855dfc3e1771956b0be02e436fbebd71c.camel@suse.com>
Subject: Re: [PATCH 02/12] xen/cpupool: add missing bits for per-cpupool
 scheduling granularity
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Wed, 11 Nov 2020 15:38:56 +0100
In-Reply-To: <20201026091316.25680-3-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-3-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-aBqizUb7ttVW8cIAewUU"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-aBqizUb7ttVW8cIAewUU
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Even with storing the scheduling granularity in struct cpupool there
> are still a few bits missing for being able to have cpupools with
> different granularity (apart from the missing interface for setting
> the individual granularities): the number of cpus in a scheduling
> unit is always taken from the global sched_granularity variable.
>
> So store the value in struct cpupool and use that instead of
> sched_granularity.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-aBqizUb7ttVW8cIAewUU
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+r94AACgkQFkJ4iaW4
c+4w9Q/9FcB2E9jZohGyFTWalQ6aigUZ44bBHWsECfU8o2quZ5O633m4r6pGcu1a
7x+L5/gLYZJElpR0qXck3NTn2Q3H0EG/hTXdUdn1qGStTqagxJsr2lduotaGOob4
esBN7Z0yYOUulrXAbvXPh3ofAscHzohf8bMrHFjpvz3LtME16rCYgxeJLfVIaTHp
2B1SlgqfppbmocGHsY2W23o7inVf/pMR+OV2TYReoeFMpKyNvjREieqdaE5Fof1B
+2Kk6IswIytdNz/kqs1tzDmr/DHai91pwDzVR69anPwM83X5VF36I3j9Q57B+AzN
ud5T2boJSvZaH6DAFGr2bvImZIFsY0NGEtEwgGHb18AV0enzLZQonOP23t6WmY0l
lvaqaihjeDk5RERHVD5JEKCnOJOCs1PLSky+gDcHdQ4I9K6ikBB9o3PnJe2GqnDC
xvDRi49zsnlnkPae8ZQob0Zq7qfFTY4HB8I1ON7bG6DwnLYEkOjvJwdmxN4b3ghe
Tmh9uAjK/w7hRaQN+LiJLkc603Axlyd+Cskr7CWblwaZUe3pAsKHq7RL/UfwvnP1
pxhMi49A/5RYnaWFGzYytpkCC/orsJwPRNtHkmT3L/XNT9sh1wYpOhnZrarVKqS/
DGJ0fk6/9bdgnrwcyk/F4BS65YmRvzv8L7WRpnqa70XpyZYSVXE=
=92Hf
-----END PGP SIGNATURE-----

--=-aBqizUb7ttVW8cIAewUU--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:39:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24921.52413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrHd-0002L8-Db; Wed, 11 Nov 2020 14:39:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24921.52413; Wed, 11 Nov 2020 14:39:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrHd-0002L1-Ak; Wed, 11 Nov 2020 14:39:37 +0000
Received: by outflank-mailman (input) for mailman id 24921;
 Wed, 11 Nov 2020 14:39:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcrHb-0002Kf-Hb
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:39:35 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13297285-51e0-4aa6-9880-40b15e9b741c;
 Wed, 11 Nov 2020 14:39:33 +0000 (UTC)
X-Inumbo-ID: 13297285-51e0-4aa6-9880-40b15e9b741c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605105573;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=sxEWwR0pkrK3UjjVPCFVZMXh4ZKwOjVQmcSyHDFMzCY=;
  b=Hqyms0UkCQLFoRRcJUR04nhMMZ0l2u4r0WIlcNRWSG0H8Ab7ZmFRXm8M
   mh6L20xmgld8QQrYORkxRY4q9Kn7HiyF5S+J5KWdsUGQ6ZNx1Qxqen+Bv
   Iug27Bo/9vQ6SbazfMWyY2DLAVw1aKxA1JFuwY+9AyxHRvDjjT4sATPD3
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: KKogBYjuG5u/BkfdO+U4huYaSDyclunV2tfJ1ZnVlLU7v2YxJFuM9bM06lH2wyBgfKjynfzej4
 nXmBuMfVWwv93hw5ODSR9VOC7pOhh0mDfxUoP4T9paSmjz9Mq0BnpKoO3EzGgte/uYt7taNoqB
 pLGokVEW8CFSEAb3PV1S8xAYXb3ydmJ+GS2H0l5NL2wfCKDYKj07iAXFN24o+PFTZ75FV23okG
 y+ycNF69k2uiYhGoJmOzanvrcZQeywXgi38cKLIf1joYf9wf1SqJyhSmX9pusSoiTDL7rD73X5
 TbA=
X-SBRS: None
X-MesageID: 32072412
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="32072412"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U2MKtt8WpHn2LRCqCIZQAi9sdRoz4hmKdQ9awo1zhcDo1aErovbb0WMzzgFNRLJXriZMz867fUzNIONsfqlNGNyshVs+3PsZZbvsxsoLW8OMv03OPbL0bvnZPUU0cjze+aEOtPbon93vbZuqaTSwqX+VV/lCOs/5L14kbMqRIlU8HnA0U2d43j3XIDUn9hVOiiB1hDCKAPWSVBvAKgDL5dzYspl3D7ZZi8SMDuqRMePPZXfHrV6Knbjizgjs8dK/480T8kN0b1EMW/DxOpIgoN84OM396nXOgyLXezyExXDWzUoZulc4vhLHkQW4QwzU9KLkOkkUVHQ8AurMFwbNOA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yO7TGZgN90SZW5hMpbC3lULCYSmiKnhPVqr/y3SukLA=;
 b=DNRvG7LJgBfkXgV5/ajlOByrRKjMIeySGBe1IUDxtirjsQ+r80e+V1xm7d8epa9ZVOjjsAFhuPnVOp5eLXCfQ2IQJ9zMn9+4/ZYmJpd9kHk3bp/YDAPnCweyFX5zkkGxTfpz6J91pjL3K7oBha1APfvJZEqZTuAIlmLJyH64dvquDm7LSc9bE7t/eCRF9Mm0mpj8D4ctcK1RYSTMWxXHzPr73Fek1P3IBH5yqZdM/JxzmOCqGjwLFVFbEXgj/Noa3pfQ0gIUqBShDRYS0pVkOegZ/9kgQrOojux1hax0kfKvCSFcjKY5enw3kO/I98WHsFhyOaSglZwPQoXaBkyV1A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yO7TGZgN90SZW5hMpbC3lULCYSmiKnhPVqr/y3SukLA=;
 b=J2BO8NLGshPdwl/FgMMkfFE1SRGlg3PPR5SRiwBa5Z6gMoPOjOdOIzXGKfXz7WXycAhIpBwNGVdDQudLsF7AGPYAQtD2Le05qXEO7X8CIZRzppzEzf+/0kR2yOApr9JS6LE8siNxDUwoBbn1p3pmcZxTJIFdEAVKiZype9WSj/M=
Date: Wed, 11 Nov 2020 15:39:24 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH 03/10] xen/arm: Setup MMIO range trap handlers for
 hardware domain
Message-ID: <20201111143924.gxbi5oopfeammgti@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-4-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-4-andr2000@gmail.com>
X-ClientProxiedBy: LO2P123CA0102.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:139::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4a1def7c-4bb0-48c3-2131-08d8864f987b
X-MS-TrafficTypeDiagnostic: DS7PR03MB5542:
X-Microsoft-Antispam-PRVS: <DS7PR03MB554245EACEE1CB7355F6142D8FE80@DS7PR03MB5542.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: a1BI6+KEr/MU6bzWh/38buImRZpiVcsIX9e4Q3QMQKzaYaItg3f6MyzbH6MH+C2wwuilTX8KOAJ/3Lajj1U+KKA/j+xUwnN0rbVQ+1xJzuRFI6zFTwreUnUYnsr2oa5otHIFIDURWz3B9NLnEKY6g7aJynWVC+ungfvPO29ksECdy/Coo+YqL638YamqpwvOj4JYRGAFDBGlJ3H1dwon59hPYR6n2mN34ZaRDfjwSa5Vsre9DQoLV9g4cLmZHLyuUuqG+FcyzOkGPBqnyZR2fFaTtgxsP+zbTm5eU84JYaLN7XjviRaGq5y+eRJ6XQn8Y5prTyXn6TfJKxJmGPos0A==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(346002)(136003)(376002)(366004)(39860400002)(33716001)(26005)(9686003)(186003)(66946007)(66476007)(956004)(8676002)(316002)(16526019)(6916009)(8936002)(4326008)(2906002)(86362001)(83380400001)(7416002)(66556008)(6496006)(6486002)(6666004)(478600001)(85182001)(1076003)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: EsN2YkxJ0jcrO9EtMVNNVD4n15PRx6EveQxB7sNWDGLwy0BLH9Ep2tuUJnTe7CHKO9RpS1ed8sK1pxAR8zOrv4L41lPpA45+rb3jBFSZ0qttJVsTwzZC5A0+IzX7bp0Y3Ea9/Lr7K/LNK2GNsTu6vfs/sCHOcu6bmKHnSLUQ+vGnfN1bSqbEcVLKDXfYxQL0cKHG0+m55hxAtKynPLUq1TuW16xbO2BbwMRi+ZSAQ1TdAWbA0KxyglrDSo+xqD9Scz6oGOMYg8w62/+VWjuUKSdg7hJnT1p6omDqM4l7uNgj1crAouah1oCe+xBviaWEEMnPQCMmw0QH7nwZRyfRMV1MeKLiJsYCeph3ecYuNTRuGoIbzlKsEYk+Hcpeb9uV2fQJw3+fQh0JPgaGlNAgspR5hmR+n7Kfs0efoTdqUza3iUGNKsbJqmh0BrE4BO48CMnXuGJNh4fC+MDzuzaRFqU3fAg9e5Pc1D1v49QsDzoGT9ExdLeUbW54NjuuPsh2BgufDR1iFjSL4hM8YoBqcny63Adao3aPlot58vtXC+zR/Vzs0Ntr5LX3+IWv0SZsBfEOhTrFuIESHfL9/qap1sVZoliGmc4lm4R733ceZwXJ9Kj/QsDXRicb1uLx3K45PSPBZ4Rp1UXX53Zp6e+VXhzFQ1CYKMWM/LZYSUFyTbQMmdj4DvWJHUfqeP4+OpNwx4OjLoFHhwoHdoQWd3Ikex7g9d6tt8Q2sGFySrlTP0sdHF4YnFjq/TYUs5MWIC6MPAAKu44JO0Kkb2RCLk6D2vIbHA8glOZESVXOXNaiknre4PmUSFDL5p8hO+7qxbvVAUJWn7a16POSK+cKA331RL7x3OEAJoGmfY3u6gVvmGPnl/gT2H2DdS2z7p/dR2tdw116EVxO8GXgiFjzW3EGyA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 4a1def7c-4bb0-48c3-2131-08d8864f987b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 14:39:29.3461
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pqRTw7SptQVVHDLzUpXS2amCfHkcCVJnoLitnsFfexnJLNSEFRY8SCaKII6GqJMGES1njxkQMCdjxYofH4ylRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5542
X-OriginatorOrg: citrix.com

On Mon, Nov 09, 2020 at 02:50:24PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> In order for vPCI to work, all accesses to the PCI configuration space
> need to be synchronized among all entities, e.g. the hardware domain and
> guests. For that, implement PCI host bridge specific callbacks to
> properly set up those ranges depending on the host bridge implementation.
> 
> This callback is optional and may not be used by non-ECAM host bridges.
> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>  xen/arch/arm/pci/pci-host-common.c  | 16 ++++++++++++++++
>  xen/arch/arm/pci/pci-host-generic.c | 15 +++++++++++++--
>  xen/arch/arm/vpci.c                 | 16 +++++++++++++++-

So this is based on top of another series, maybe it would make sense
to post those together, or else it's hard to get the right context.

>  xen/include/asm-arm/pci.h           |  7 +++++++
>  4 files changed, 51 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
> index b011c7eff3c8..b81184d34980 100644
> --- a/xen/arch/arm/pci/pci-host-common.c
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -219,6 +219,22 @@ struct device *pci_find_host_bridge_device(struct device *dev)
>      }
>      return dt_to_dev(bridge->dt_node);
>  }
> +
> +int pci_host_iterate_bridges(struct domain *d,
> +                             int (*clb)(struct domain *d,
> +                                        struct pci_host_bridge *bridge))
> +{
> +    struct pci_host_bridge *bridge;
> +    int err;
> +
> +    list_for_each_entry( bridge, &pci_host_bridges, node )
> +    {
> +        err = clb(d, bridge);
> +        if ( err )
> +            return err;
> +    }
> +    return 0;
> +}
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
> index 54dd123e95c7..469df3da0116 100644
> --- a/xen/arch/arm/pci/pci-host-generic.c
> +++ b/xen/arch/arm/pci/pci-host-generic.c
> @@ -85,12 +85,23 @@ int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
>      return 0;
>  }
>  
> +static int pci_ecam_register_mmio_handler(struct domain *d,
> +                                          struct pci_host_bridge *bridge,

I think you can also constify bridge here.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:40:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:40:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24926.52426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrI4-0002yT-OT; Wed, 11 Nov 2020 14:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24926.52426; Wed, 11 Nov 2020 14:40:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrI4-0002xt-KP; Wed, 11 Nov 2020 14:40:04 +0000
Received: by outflank-mailman (input) for mailman id 24926;
 Wed, 11 Nov 2020 14:40:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcrI4-0002u4-3L
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:40:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 61aaa258-2a7e-4bfe-904c-61d76a0fb549;
 Wed, 11 Nov 2020 14:40:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4C213ABD1;
 Wed, 11 Nov 2020 14:40:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605105601;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=52A7j24ZYNVDVDL+Ly+n0GjKUaYBmcHxz7aOgwt+UZ8=;
	b=LAEOAxkTvohz43y+0/Ddm2t9AS2IotAPyI0GgDyeQWR4l9ggF25J6qQch+qV7l/PMR+U5i
	H2wN/odK9YnI72E5LJxq+pdFj9h6x6nqazvT6CxwXaKj1sH1pDLVXdEyX5/nqw/eJLHIA6
	Zn8Nx6LCS+rvAtymv/jUw07iNTWSW98=
Message-ID: <42fff7efc4c7233dccc16e8770ca4f00524a20c4.camel@suse.com>
Subject: Re: [PATCH 04/12] xen/sched: sort included headers in cpupool.c
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Wed, 11 Nov 2020 15:40:00 +0100
In-Reply-To: <20201026091316.25680-5-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-5-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-RMoH63XT9zHnsAhav4EH"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-RMoH63XT9zHnsAhav4EH
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Common style is to include header files in alphabetical order. Sort
> the #include statements in cpupool.c accordingly.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-RMoH63XT9zHnsAhav4EH
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+r98AACgkQFkJ4iaW4
c+4C7hAAn3lRHR+V5orIxlqGqPGr++U0iabbxRmgWTNMTTI6yTS63GhcBclrUB2/
vxnAuCjUxFdK/RfSRz1K7aYCvyF2jkT/Ce19nA2+Xs3aqPqz6fPbUQoM6jw0NE6j
LmgkHuBvBMfnKk72jykgNyWJBRsUwIHDZD2YSAoZatyQBBhpGFf64h3CiQm3x83y
6yyw0EtVsfkZZWZ5UvDO7lBs/rMrYgep7sXcAaZU9BAADsA12nGmUx4PGZz41but
KQC2d/C/cUYNUS6OvkIqV/m7LPUjaYOuozzyVws/T2ij5ODRpL/Fry5pGgOmy4a4
szRJbESVRY+WNImox75cLZC91UDYAYD29nHkX9jw33V68P6evW129T4c914ORs4Z
nZN5ja1Gne6fi/O/zsTd1047YzBCbfTQTsQWiHIA0ltYhENmYpJP6LQPCy5GewZR
b56jAOsdNauQ5nJK9T5+tgVxnAY8Uo4GMtvbzqa9TVsEjaaGvFBzYUj+Dtymcu3X
h6UoOQ2VrGRWaOHhU71KcbwTW8fbdn3h6Pkhe24gz/k+ufepRCkbmBhXuIFe4CpN
HTDGWDfH9gL4qBWlAs4nM75gSV5vX8ZTiVZ5Xt2S9SCYDoNPON+UYaBU60xUHHBA
14jAWD3TdQE9iahlmLljVQpZi16Y+DXiAM57LXWgPdaWHFqLK0U=
=ihQQ
-----END PGP SIGNATURE-----

--=-RMoH63XT9zHnsAhav4EH--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24942.52437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrL4-0003RS-8J; Wed, 11 Nov 2020 14:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24942.52437; Wed, 11 Nov 2020 14:43:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrL4-0003RH-5F; Wed, 11 Nov 2020 14:43:10 +0000
Received: by outflank-mailman (input) for mailman id 24942;
 Wed, 11 Nov 2020 14:43:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pxcb=ER=epam.com=prvs=9584594409=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kcrL2-0003QR-QU
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:43:08 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5a00dbc-5f87-4b7f-9eff-f674d25fc607;
 Wed, 11 Nov 2020 14:43:07 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ABEfRjn032405; Wed, 11 Nov 2020 14:42:59 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2053.outbound.protection.outlook.com [104.47.14.53])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80gmcf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 11 Nov 2020 14:42:59 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Wed, 11 Nov
 2020 14:42:56 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 14:42:56 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c5gKiV3f67by+uzKkzAM053lDP1BQSDnpHD+AD0zNqLVWCfgCcm7EU/sH+/sYpsKVfBNNc5fB8PGj1dFe1fqG2W6OY0LvHdpF3FuDARGKlrLUbTqJOTpbavMEFZpbz0DMt3moGLPGNn6Ji7HiSkBrjQD3i2vbJfk47nBxClVCuuJv5d1DHcKt9V96hgMOwFlZn/JsOYeocoyXsf48K5PEAcVXMWRDblpWpsc3K2gGdFae5uZsgEBg0Y2/e/6A7utdTVin9b/Fqi+An1BkvmvGskZJWXocOOS4PJ2DrKSKC3RwBYr2ANKTE1Ra+UPl59taxOVjJpnRg2qDtts8gwEJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CdpnGa4rbLp/JBpsamr4pRiPirSjNwVGSJB0QN2k7LU=;
 b=AXXLkO3L6U+JdWILxpe/jGU3OIO9tGNeYgiXlHOwhiHvUAx5+JJpcXHleM5Il8vB1WACuOAfZexEv0JaQuVkXE4fZJBRn6E3UfcurcOZwTkz20HyKv+gLSwvyYMwf8Vo2y2v5HntIjDyqDZT8z3hdg+erGdazqMcOKlu+7uhKmwc5N6K7Q4GREs2fKWttvnHOYv8FWY7CuS6y/tUKiIAE2yICNpvuyKfighT5GGYsaT/D545h6ZqY+7QwP/Zj6+IUmwZp61twxpj88+WaGEqlqzSqbitnJstGsRrhcu9Q+vK0ao++/fC23aVBSYcTUkio4hzmaeCB4zyVWNo6+bfyA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CdpnGa4rbLp/JBpsamr4pRiPirSjNwVGSJB0QN2k7LU=;
 b=i5LUPrQOjONEKtt8ypDROljJULbI08J4vfn1wTJ2sg26+mjnPsVqDnIW9ldCye4Dc0NvTIajIXaOF7wWCdfzD2OGpzLTT+bjQLVl3QS23CDoU0bNrGX4xoJTnwnmWkWTyigxkPQ3ynOL/86yb5hjaQjj2mIDBGz8n7cAt6YGh+pxn6FkzJAtziaog8pwHTvoVRbUs5Y6MHL+QDyOvh/Hxw1R3/68vtRFWGwLULG+R3yURmq2oVufNwyvOAAbtktzuQ7J39bmjBFAh0HCEHuBC4xoYJdCRbznTxYb13Q2mFRpl1b6pfe+5FzriNX5dbm7aqtgyfIhf6NB/Aqin6d5Kw==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Oleksandr
 Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 03/10] xen/arm: Setup MMIO range trap handlers for
 hardware domain
Thread-Topic: [PATCH 03/10] xen/arm: Setup MMIO range trap handlers for
 hardware domain
Thread-Index: AQHWtpb07GsqhunhOkybCrJJI4gm+6nDA7cAgAAA/IA=
Date: Wed, 11 Nov 2020 14:42:56 +0000
Message-ID: <064408c9-6585-0aaf-aaed-f67910b27bf7@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-4-andr2000@gmail.com>
 <20201111143924.gxbi5oopfeammgti@Air-de-Roger>
In-Reply-To: <20201111143924.gxbi5oopfeammgti@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 19105fac-a7b9-432d-136e-08d8865013e6
x-ms-traffictypediagnostic: AM0PR03MB6324:
x-microsoft-antispam-prvs: 
 <AM0PR03MB63243896545CB4B753D167BFE7E80@AM0PR03MB6324.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <EB02184318E9C740A18F00BBB8AD8F47@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 19105fac-a7b9-432d-136e-08d8865013e6
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 14:42:56.1648
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lVehEcMDumNMwfVu9mJ8mk8x4Y5/xjUlmae1zoGU5fquX/zyvX2X1hFVij88hbQKAkopp1DRz3B00/so2OZ+tawBUq3YGFXsIgE75dQ114hsNFtcQckyGFx/z6Cw5qll
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6324
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-11_07:2020-11-10,2020-11-11 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0
 priorityscore=1501 adultscore=0 phishscore=0 impostorscore=0 spamscore=0
 malwarescore=0 suspectscore=0 mlxlogscore=999 mlxscore=0
 lowpriorityscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011110088

On 11/11/20 4:39 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:24PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> In order for vPCI to work, all accesses to the PCI configuration space
>> need to be synchronized among all entities, e.g. the hardware domain
>> and guests. For that, implement PCI host bridge specific callbacks to
>> properly set up those ranges depending on the host bridge implementation.
>>
>> This callback is optional and may not be used by non-ECAM host bridges.
>>
>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>> ---
>>   xen/arch/arm/pci/pci-host-common.c  | 16 ++++++++++++++++
>>   xen/arch/arm/pci/pci-host-generic.c | 15 +++++++++++++--
>>   xen/arch/arm/vpci.c                 | 16 +++++++++++++++-
> So this is based on top of another series, maybe it would make sense
> to post those together, or else it's hard to get the right context.

This is based on ARM's PCI passthrough RFC series [1]

You can also see the whole picture at [2]

>
>>   xen/include/asm-arm/pci.h           |  7 +++++++
>>   4 files changed, 51 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
>> index b011c7eff3c8..b81184d34980 100644
>> --- a/xen/arch/arm/pci/pci-host-common.c
>> +++ b/xen/arch/arm/pci/pci-host-common.c
>> @@ -219,6 +219,22 @@ struct device *pci_find_host_bridge_device(struct device *dev)
>>       }
>>       return dt_to_dev(bridge->dt_node);
>>   }
>> +
>> +int pci_host_iterate_bridges(struct domain *d,
>> +                             int (*clb)(struct domain *d,
>> +                                        struct pci_host_bridge *bridge))
>> +{
>> +    struct pci_host_bridge *bridge;
>> +    int err;
>> +
>> +    list_for_each_entry( bridge, &pci_host_bridges, node )
>> +    {
>> +        err = clb(d, bridge);
>> +        if ( err )
>> +            return err;
>> +    }
>> +    return 0;
>> +}
>>   /*
>>    * Local variables:
>>    * mode: C
>> diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
>> index 54dd123e95c7..469df3da0116 100644
>> --- a/xen/arch/arm/pci/pci-host-generic.c
>> +++ b/xen/arch/arm/pci/pci-host-generic.c
>> @@ -85,12 +85,23 @@ int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
>>       return 0;
>>   }
>>
>> +static int pci_ecam_register_mmio_handler(struct domain *d,
>> +                                          struct pci_host_bridge *bridge,
> I think you can also constify bridge here.
Makes sense
>
> Thanks, Roger.

Thank you,

Oleksandr

[1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg84452.html

[2] https://github.com/andr2000/xen/tree/vpci_rfc


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:44:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:44:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24949.52450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrLs-0003a9-Lr; Wed, 11 Nov 2020 14:44:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24949.52450; Wed, 11 Nov 2020 14:44:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrLs-0003a2-Il; Wed, 11 Nov 2020 14:44:00 +0000
Received: by outflank-mailman (input) for mailman id 24949;
 Wed, 11 Nov 2020 14:43:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcrLr-0003Zv-KJ
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:43:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d2dc119-6099-4d60-8080-c29b450e4b7b;
 Wed, 11 Nov 2020 14:43:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE7E3AD11;
 Wed, 11 Nov 2020 14:43:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605105838;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=s4DJLyWnN2lYtnW/1PMpH8EielXkomSTZEp/9XlNGlc=;
	b=W0BAY2VkfQoBcUclQ+gNWZOJ/hFKSev4rhL4PmjI/wwTZVutN1p7PPJKPe7fhC0uiPuNSj
	eqRQMzjaskO7QsaLQRiDFB1egJ0599wxhHV1zOZ5y3AB00JLrJeD96lXJLEUU/9Hm8tx1j
	AZqwCYWnilJHNF2RA40uU2iewaANu04=
Subject: Re: [PATCH 01/12] xen/cpupool: add cpu to sched_res_mask when
 removing it from cpupool
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-2-jgross@suse.com>
 <34527bbdeef138454b6a555c236b2289643b3d6b.camel@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <acef7969-4f9c-bb29-257f-d191f9527b0b@suse.com>
Date: Wed, 11 Nov 2020 15:43:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <34527bbdeef138454b6a555c236b2289643b3d6b.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="2tcs4pEbRSfCc4d9blLruWyh9nTxCYOCV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--2tcs4pEbRSfCc4d9blLruWyh9nTxCYOCV
Content-Type: multipart/mixed; boundary="VPrrTkA5R0g4h7dTyGWaHAuzacgvP01IH";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Message-ID: <acef7969-4f9c-bb29-257f-d191f9527b0b@suse.com>
Subject: Re: [PATCH 01/12] xen/cpupool: add cpu to sched_res_mask when
 removing it from cpupool
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-2-jgross@suse.com>
 <34527bbdeef138454b6a555c236b2289643b3d6b.camel@suse.com>
In-Reply-To: <34527bbdeef138454b6a555c236b2289643b3d6b.camel@suse.com>

--VPrrTkA5R0g4h7dTyGWaHAuzacgvP01IH
Content-Type: multipart/mixed;
 boundary="------------7C7BF1FBB8A15192A31BC6FC"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------7C7BF1FBB8A15192A31BC6FC
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 15:32, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> When a cpu is removed from a cpupool and added to the free cpus it
>> should be added to sched_res_mask, too.
>>
>> The related removal from sched_res_mask in case of core scheduling
>> is already done in schedule_cpu_add().
>>
>> As long as all cpupools share the same scheduling granularity there
>> is nothing going wrong with the missing removal,
>>
> This patch is adding an addition of the CPU to sched_res_mask, which
> was missing... So isn't the above "there is nothing going wrong with
> the missing addition", or something like that?

Oh yes, of course.

Will fix that.

>
> Or, if it's an actual missing removal that we are referring to here,
> then it must be clarified which one.
>
>> but this will change
>> when per-cpupool granularity is fully supported.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> With the above fixed or clarified:
>
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks,


Juergen

--------------7C7BF1FBB8A15192A31BC6FC
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7C7BF1FBB8A15192A31BC6FC--

--VPrrTkA5R0g4h7dTyGWaHAuzacgvP01IH--

--2tcs4pEbRSfCc4d9blLruWyh9nTxCYOCV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+r+K0FAwAAAAAACgkQsN6d1ii/Ey8e
ZAf/dIxjakESFjPctT0Os4twDxyAWBEBjQeOX6KJD3EGhLkBr6ogZKtRNzykDvU9RD960UB5bVKx
wUZsQjxXZb1P2gUHAyMRT3oBmkLIXbk2haMCBBd1BoyNSMt5TECdDxbKnKtD7XeBuKm9tIo6tpC4
LixLA6JroVdAhZAbSTr6a0ElCP1UXmanEC6iRCVBVFb/CFPpCzNajdaQfz3uUvopecCp+DpdArG5
FvAtt3tCc2t54o4QQxDeNOyDlqBJn6Ulo8u4aXKh4rzhOObUXsK+DClcNCfeQNJnUt/iKtROAeJ0
WvnsZIZBEbBbIXYthEnCCIGW4nZnwFb5Jutm1cwG2A==
=CwOM
-----END PGP SIGNATURE-----

--2tcs4pEbRSfCc4d9blLruWyh9nTxCYOCV--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:44:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24953.52462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrMP-0003g6-Vk; Wed, 11 Nov 2020 14:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24953.52462; Wed, 11 Nov 2020 14:44:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrMP-0003fz-SD; Wed, 11 Nov 2020 14:44:33 +0000
Received: by outflank-mailman (input) for mailman id 24953;
 Wed, 11 Nov 2020 14:44:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcrMO-0003fp-2u
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:44:32 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d9f2595-7548-459f-a994-9696dac03b6d;
 Wed, 11 Nov 2020 14:44:31 +0000 (UTC)
X-Inumbo-ID: 0d9f2595-7548-459f-a994-9696dac03b6d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605105871;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=eFY5+F/7N20RmK3+pOEUrqAFyG5ehkW1hcSCG++Ekr4=;
  b=ei9a/jWevsbTwxDv9UrhGilm9X5gqVPUCSuBYr/EZbwj8dt9XsPEBXaZ
   axugeQpKGwKIG4gigSxlI1dCsOUseTs0CNxg1j8rXfPppoBuf44dnvsYB
   Z+2zXyNpkwRh9qHbeePU1y9VNN/veyF0g6nNnO3GrctSqf53dWhBu5k8j
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: K2TBFFo/OQqKXBj4EfP4tiJj6nMW5SIEDA3FN5dAknz9A1T6kRvEL4qWlERx41C//F9cW/dKf/
 qmdv9amc0Zpo0w0EU0jJWRSs0onhjhlykjqNZ7PSrRF/NdtA15PEuQf4BJJ3tYe0lp+ifD8mMW
 9gqiD2/jxISxN7gvbLWnHJgT0Qi9dEjUmpO7A9iFIAJSeXbNuq3XWNDCky56Fo547z1TMOcVZo
 Fe+swKCXziPhR+6TYxqdbmDf92gzby+BlvP6boNfNrLWFdeE4yCFdsRlqoYL+UDoWjq/TUaIa2
 +zc=
X-SBRS: None
X-MesageID: 30925166
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30925166"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E1CQoCWuba7yxKQFkxA0VnrPbGJ7DdDhvou47i1Brinml51I+zhb3kUTWXqZwP3yiY+z5xbWcCIA8HXOEOooMLjD0VhniSAtb9MTJ75gjGjvlAoW4ztF0+7LxOhePpAdUWq2b2x6pkN0hhF51WLaFprTVqySDadGOYYD/P/hohX41TULbUJC529l0rc40M7P5IOSQneXHcLMUacZNfrcQAIfijDIrFSfDzksBChMBEVYzXJa+Z0OoS0g5AZPa6TeYxb8TQSocpMes7/lpby+ZkPd+sixJz4G3Hl5SN5qzs0YCOQZ0owqS1onR6RWcKXEumdBVHwcpu1r1ddw+mtDPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Qre77Y/POXr/+bLKSBeDaxaJFslTGf37Y4ufQovFKxA=;
 b=HqTxC+rZkeXzD+amTZDTYxw9c8P6MMMaCqcTRE5TtBM0xCY2xpc+GUUnoWXTOLYvERozrRsPYNla2JQT1DX0xVCvPOCio0k9WJXobWO8hqpmhXp+qzkfk3dl5gTJ2+7kTGVZ1eZcN5aCh39hIwytGaXsLd22EB9iRQvxrRq3oMdFS49jV8DrsmgYQd1lW9AVqpP1ASV5bdSp5GF7K4CeOSg+TJmICWngl8eA1L+A3jvTcBpI/CI+VJ5CMOS5SqTC8Lsc5t9Sv5D+OWGQEurJfUdBLML9jPrCbQz684iYZhifxQvD3NBkVkQxnHN3MUR4rOVV8v/gqC4T05D1rAkmXA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Qre77Y/POXr/+bLKSBeDaxaJFslTGf37Y4ufQovFKxA=;
 b=W27n8a++NeW4Ohug2ssmNtVvtv2nJiEwFk5aU8ir9tmdAKvS+3kTwKYNTcEIiIRhnlBcsEKBct00XDXmmqLvwYJHyImT3uDKETr6QINPbxkynIlxfVFIlnUxnCag0j61oOruDWtZNcY+xB+l5dy0DuEz6j3if0yhdysilMfWdJw=
Date: Wed, 11 Nov 2020 15:44:22 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH 04/10] [WORKAROUND] xen/arm: Update hwdom's p2m to trap
 ECAM space
Message-ID: <20201111144422.z2hi3ineg6qwbxi4@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-5-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-5-andr2000@gmail.com>
X-ClientProxiedBy: LO2P265CA0465.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8f78b034-64e9-4a16-125a-08d886504a49
X-MS-TrafficTypeDiagnostic: DS7PR03MB5542:
X-Microsoft-Antispam-PRVS: <DS7PR03MB55422D3BFE209EFEC63F93108FE80@DS7PR03MB5542.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ECBFxeuuP4qBcDV7UQOtDAEeth3GLxxh8fZfK5U1fe9/Z6g86ljXHO558vxhQ6l09VSkobo3/Dhf5Xa965OuYF4bGeHvf5az2m7OD8w0r7sScp9u9Ce2qOnd649ywfEiKmOJJ/kEXIIiEiG/IrZXN3mTqEw/ve6hKSrntiJyV0fYkhr+bR5+I0c9MbfoMVnNTL+P18ONuAyOlRftUH3U2gB1hyZoEw06NSB4kiNN4rIbbwr5Xumzvp2DLzzMxT+OdKbj3N9tRP9jJoKd3T3W3I9vEhA7HM2y56yPOf8uAVd2ENpiMBpDDdizTh4Q0WRz
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(346002)(136003)(376002)(366004)(39860400002)(33716001)(26005)(9686003)(186003)(66946007)(66476007)(956004)(8676002)(316002)(16526019)(6916009)(8936002)(4326008)(2906002)(86362001)(7416002)(66556008)(6496006)(6486002)(6666004)(4744005)(478600001)(85182001)(1076003)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: swmHlA/vMTIFLz4qJq+nJ+VxyptKmT7bxYV4UR+f41U5/UrHmROOH7oQn2twFIADTLeka98Y5BSGInBubf5pSiobg97zG8HHid92kph+ROSDO6B3tpW7Ap80V1ASCOVbQ0/RAFuhmBCfuDmUQ2L/VOYenBuNwFAEaboApTw4MnhBSgIorh6QlU7ReAwJHCaub0KRS8os/rlptuox/KyEWqTmu7ygq/ms1bOuUKQ2LPn25n9VuQWeXskismN+fs+emWbeZVZR4SGhLX8jqhOI9oPxAa73tmrDZslsKdpLDPAhndjEHMRO71ts3z7PBdbe7UDx+qqTrBwx59Q9imFLCAIPuUuRMK2cr9HTZDgtXqWGZCRfTtG/RVe3SEu/vAePUEfuKy0Qv8JBn32RBaScaxZ2jly56ZShQzaDhUF4MkHPBRNTIcxo0dz0ArlLKPf8GIQuDtEcjqZlznn8KjurDwxi42lIBshG8eITREnL1itL3n5Tm1mzV6NM0QLv7HOVuKLc+E2FHc1rXnGL0jV97TzSRLYt+faI4SMdPeYhxqHlw5aNcGlwPWFJrcy273EegtrMMp4RJV/aGOb/SgafssyIV8IBO6HIgOYKLg5Z0JC3SfxPtUZS8hQKK6IXDY6ENYNkvHIJ0DxHx5RV0hylFKICMnQmJy+hNBAhPBV6KH2opfN0di8yxoe4Kfw3P13bRa0JsxDmGbbAL/vDfltYRcYZ+VH60/YUth4XiNKA8bcX/TeN5sDiFA6F/o5pcidXyvmH9Pogg2WWb1oLibxyC4E/NEoI35jwL/j2o6FzNcIHN2PsylFE3L/DLDEbN7c0IHvxjrneStvqs+ZiW/4XIhjK6/JEoXlX8R84WFl8d8nA/hRWwMIawCOk2Pry0PVGDc2Q+GwlMqufvQabf6yThA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f78b034-64e9-4a16-125a-08d886504a49
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 14:44:27.5729
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tDZUiaL/s/leiSApQTQGiwQBXegh5yq9smIN9yIrOgD+ST4PkO/OKU6F5qXnupQ5YTEdbfxcDyJnB6mBL5080Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5542
X-OriginatorOrg: citrix.com

On Mon, Nov 09, 2020 at 02:50:25PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> The host bridge controller's ECAM space is mapped into Domain-0's p2m,
> so it is not possible to trap accesses to it for vPCI via MMIO handlers.
> For this to work we need to remove those mappings from the p2m.
> 
> TODO (Julien): It would be best if we could avoid the map/unmap
> operation entirely, so maybe we want to introduce another way to avoid
> the mapping, e.g. by changing the type of the controller to
> "PCI_HOSTCONTROLLER" and skipping the mapping for PCI host controllers.

I know too little about Arm to be able to provide meaningful comments
here. I agree that creating the mappings just to remove them afterwards
is not the right approach; we should instead prevent those mappings from
being created in the first place.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:47:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:47:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24968.52477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrPT-0003t4-Gw; Wed, 11 Nov 2020 14:47:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24968.52477; Wed, 11 Nov 2020 14:47:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrPT-0003sx-D2; Wed, 11 Nov 2020 14:47:43 +0000
Received: by outflank-mailman (input) for mailman id 24968;
 Wed, 11 Nov 2020 14:47:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcrPS-0003sC-DY
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:47:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57fda3a5-95f2-462c-a9fa-945d04aac860;
 Wed, 11 Nov 2020 14:47:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcrPK-0005XG-Q6; Wed, 11 Nov 2020 14:47:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcrPK-0005Je-Ee; Wed, 11 Nov 2020 14:47:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcrPK-0003s3-DW; Wed, 11 Nov 2020 14:47:34 +0000
X-Inumbo-ID: 57fda3a5-95f2-462c-a9fa-945d04aac860
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sQzVs9Vw/HXMt4p8AYEOCSp4dUYTYY9fopyC4iiu2UY=; b=RdazMFBuO0UJNvHDTJOFmLgf0M
	Yl8ElHzJkZFYeBwNCiwHvnXeOb2gTVox8YAv4xJg9ttfpK9DEExGwbHflc0K9fKwna3/qcp7jSYym
	eX5XLif7s3+npdcJozxSGkacfhMfGsC9M+dlXatLRKNS0x1WCulryYc0lqgvaSq8RZrk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156633-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 156633: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=15b298097289f1c11b981454a3dc912b95e2f65b
X-Osstest-Versions-That:
    xen=7a4ec792d12d58d14e4d0a9cb569be4fd4fe9cf5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 14:47:34 +0000

flight 156633 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156633/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 156396
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156396
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156396
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156396
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156396
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156396
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156396
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156396
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156396
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156396
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156396
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156396
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156396
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156396
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  15b298097289f1c11b981454a3dc912b95e2f65b
baseline version:
 xen                  7a4ec792d12d58d14e4d0a9cb569be4fd4fe9cf5

Last test of basis   156396  2020-11-04 09:05:41 Z    7 days
Testing same since   156633  2020-11-10 18:05:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a4ec792d1..15b2980972  15b298097289f1c11b981454a3dc912b95e2f65b -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:52:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:52:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24977.52488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrTY-0004pN-5J; Wed, 11 Nov 2020 14:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24977.52488; Wed, 11 Nov 2020 14:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrTY-0004pG-28; Wed, 11 Nov 2020 14:51:56 +0000
Received: by outflank-mailman (input) for mailman id 24977;
 Wed, 11 Nov 2020 14:51:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcrTW-0004pB-SR
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:51:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47ba78ac-5a81-4ae2-9a08-3dde48a67035;
 Wed, 11 Nov 2020 14:51:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7AF06ADE1;
 Wed, 11 Nov 2020 14:51:52 +0000 (UTC)
X-Inumbo-ID: 47ba78ac-5a81-4ae2-9a08-3dde48a67035
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605106312;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UNjrdRt/ztAhXZ8xI+j0cukdzNTlJhIIGI4KDIJIORI=;
	b=pAkFazOWexMPr/3uoMhrLk4UV61wuzBQEqME4fwl+DlCnkwU8qE/hTk7dx1R706EIjkEKb
	z+qd10v5hcfm142W4MpHTlmlQlgK+8tOnYAdmTqyjSZqIOeWATjSUhGbjHpD+hQkk4IcOZ
	r3qYlCa4rEG7mOOPstHB0K8DXVtHHGI=
Message-ID: <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	 <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	 <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Date: Wed, 11 Nov 2020 15:51:51 +0100
In-Reply-To: <20201026091316.25680-11-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-11-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-94siXGnfjUpn5qHrf8Yi"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-94siXGnfjUpn5qHrf8Yi
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
> dynamic, so the related hypfs access functions need to be
> implemented.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
So, I'm almost sold... Just one comment:

> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
> 
>      cpupool_gran_init();
> 
> +#ifdef CONFIG_HYPFS
> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
> +#endif
> +
What would you think about doing this in a helper function
(hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
empty stub if !CONFIG_HYPFS?

That will save us from having the #ifdef-s again here.

I'm asking because it's certainly not critical and I don't have a very
strong opinion about it. But I do think the code would look better.

>      cpupool0 = cpupool_create(0, 0, &err);
>      BUG_ON(cpupool0 == NULL);
>      cpupool_put(cpupool0);

--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-94siXGnfjUpn5qHrf8Yi
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+r+ocACgkQFkJ4iaW4
c+7f6Q//cDF51AAtUXs+fcwLE2i1VIeWooYluhioIgx4qPjp9b6hcVZSQXYpbjeh
f7QQ2J8dxs7XIqWZc0TlZ6L8BswscTlNoIE2CRxK7YFSHkPFaXHmX0DJjzVZGBh5
R6JMhMXZoauIr+EKa2GZXdXoTumuGPyUs+fbmTr8iQQrY4Umbcwwy6R3oUqUIzNg
8+yLwceE1P7ry0pfbujN2dWqOK5TaqDMZwGimT9jwFVxlDnWEoENBbrIZ7lzPSET
YYcJi0hTDa2NsywWCEKd2EuBsrx7nt61a1TZyKggGCvtP4JFuUbEsem5Te1Xf07J
WRwz5kctL8xN+SknZDO+5zTWQm2rYKMOrF55z2QWKnDjGbB35AgHOO90XakQRlKu
1kLKogFm+C4R715K0gkFwRJM9aCM2z6c1y627ZxtrrlbWs++xewdiEz9qKQ1KWRx
U7757LrkgefDG5AAbBLldpxKMF69U7djqq3pE5vK4afBbqJkum1Jc6q2+QavAlPf
7YP0CAFfNTHv8aiL/SVFQPAJ6I1F2y7Sy/9SYOozgRhUZxO4661co6VS51XwZbZS
LdUXthv1uA3SznJEiRhjilqpWjNlP8ENMxiLbzuhLbKRsholpuvPI82x+ET2vNtb
okyiS2T/2CNoWRU29MP0ZAS7XiWbLpptEfcQakktCLqlF6/irlA=
=wo9T
-----END PGP SIGNATURE-----

--=-94siXGnfjUpn5qHrf8Yi--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:54:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24983.52501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrVp-0004zv-JV; Wed, 11 Nov 2020 14:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24983.52501; Wed, 11 Nov 2020 14:54:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrVp-0004zo-GF; Wed, 11 Nov 2020 14:54:17 +0000
Received: by outflank-mailman (input) for mailman id 24983;
 Wed, 11 Nov 2020 14:54:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcrVo-0004zj-0L
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:54:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bcd39f8c-0d95-47b7-a7c5-62464082190c;
 Wed, 11 Nov 2020 14:54:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 136EBABD1;
 Wed, 11 Nov 2020 14:54:14 +0000 (UTC)
X-Inumbo-ID: bcd39f8c-0d95-47b7-a7c5-62464082190c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605106454;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9zhveKXFN05PgIsHJThMQR3aVRjiUTFRaKeo/Uh9GU8=;
	b=nnjJrY2qBLWk9sD76JVFpWBqQMu9zeRAb5XIgB2LfDvGOOF6ux/ScHzw/8x6OXPvWH2fsc
	crWR0wAgApDe0ZH5Guvi3p+l8fAYJBz91ZVRHBEkgRzYTR35bNk1vxfUTvi1Gwy6j8Qd/E
	UgY/+hpKTl5ycm1ghIVV/67jVk6AdSQ=
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
To: Oleksandr Andrushchenko <andr2000@gmail.com>
Cc: iwj@xenproject.org, wl@xen.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com,
 julien.grall@arm.com, sstabellini@kernel.org, roger.pau@citrix.com,
 Rahul.Singh@arm.com
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6a2dc03e-202b-b8f9-46a5-9c90d9de8a6d@suse.com>
Date: Wed, 11 Nov 2020 15:54:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109125031.26409-3-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -879,6 +879,43 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> +#ifdef CONFIG_ARM
> +int pci_device_set_assigned(u16 seg, u8 bus, u8 devfn, bool assigned)
> +{
> +    struct pci_dev *pdev;
> +
> +    pdev = pci_get_pdev(seg, bus, devfn);
> +    if ( !pdev )
> +    {
> +        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
> +               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +        return -ENODEV;
> +    }
> +
> +    pdev->assigned = assigned;
> +    printk(XENLOG_ERR "pciback %sassign PCI device %04x:%02x:%02x.%u\n",
> +           assigned ? "" : "de-",
> +           seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +
> +    return 0;
> +}
> +
> +int pci_device_get_assigned(u16 seg, u8 bus, u8 devfn)
> +{
> +    struct pci_dev *pdev;
> +
> +    pdev = pci_get_pdev(seg, bus, devfn);
> +    if ( !pdev )
> +    {
> +        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
> +               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
> +        return -ENODEV;
> +    }
> +
> +    return pdev->assigned ? 0 : -ENODEV;
> +}
> +#endif
> +
>  #ifndef CONFIG_ARM
>  /*TODO :Implement MSI support for ARM  */
>  static int pci_clean_dpci_irq(struct domain *d,
> @@ -1821,6 +1858,62 @@ int iommu_do_pci_domctl(
>      return ret;
>  }
>  
> +#ifdef CONFIG_ARM
> +struct list_assigned {
> +    uint32_t cur_idx;
> +    uint32_t from_idx;
> +    bool assigned;
> +    domid_t *domain;
> +    uint32_t *machine_sbdf;
> +};
> +
> +static int _enum_assigned_pci_devices(struct pci_seg *pseg, void *arg)
> +{
> +    struct list_assigned *ctxt = arg;
> +    struct pci_dev *pdev;
> +
> +    list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> +    {
> +        if ( pdev->assigned == ctxt->assigned )
> +        {
> +            if ( ctxt->cur_idx == ctxt->from_idx )
> +            {
> +                *ctxt->domain = pdev->domain->domain_id;
> +                *ctxt->machine_sbdf = pdev->sbdf.sbdf;
> +                return 1;
> +            }
> +            ctxt->cur_idx++;
> +        }
> +    }
> +    return 0;
> +}
> +
> +int pci_device_enum_assigned(bool report_not_assigned,
> +                             uint32_t from_idx, domid_t *domain,
> +                             uint32_t *machine_sbdf)
> +{
> +    struct list_assigned ctxt = {
> +        .assigned = !report_not_assigned,
> +        .cur_idx = 0,
> +        .from_idx = from_idx,
> +        .domain = domain,
> +        .machine_sbdf = machine_sbdf,
> +    };
> +    int ret;
> +
> +    pcidevs_lock();
> +    ret = pci_segments_iterate(_enum_assigned_pci_devices, &ctxt);
> +    pcidevs_unlock();
> +    /*
> +     * If not found then report as EINVAL to mark
> +     * enumeration process finished.
> +     */
> +    if ( !ret )
> +        return -EINVAL;
> +    return 0;
> +}
> +#endif

Just in case the earlier comments you've got don't lead to removal
of this code - unless there's a real need for it to be put here,
under #ifdef, please add a new xen/drivers/passthrough/arm/pci.c
instead. Even if only for part of the code, this would also help
with clearer maintainership of this Arm-specific code.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:56:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24994.52525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrY1-0005CU-4P; Wed, 11 Nov 2020 14:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24994.52525; Wed, 11 Nov 2020 14:56:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrY1-0005CN-12; Wed, 11 Nov 2020 14:56:33 +0000
Received: by outflank-mailman (input) for mailman id 24994;
 Wed, 11 Nov 2020 14:56:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcrY0-0005CI-3d
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:56:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 731c2158-0509-4b89-b917-84741edeb113;
 Wed, 11 Nov 2020 14:56:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8763BAC82;
 Wed, 11 Nov 2020 14:56:30 +0000 (UTC)
X-Inumbo-ID: 731c2158-0509-4b89-b917-84741edeb113
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605106590;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BuBmGDUwTqtZ2R7oR1kUVhRcvTDdyhHiiXws3ny2u/A=;
	b=Vf7jnXs8apupPg07yG1/5CRecJnZle5MCJMZPIkI3TRTRlNHN+V0qnCHTmjc++9mzDnol5
	/Tg3Sy5hJASkjnWhF+moTnYcjKQdYsPoL2PgZu/K9HmfRvbXF32HEszec6Vd3GLfc8puQU
	Ks1xeMDZNWzl/vsY/PGSjgd16J+oJek=
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
To: Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6b2cf5d0-9c6d-07bd-51d3-9fd34cd8d1a5@suse.com>
Date: Wed, 11 Nov 2020 15:56:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 15:51, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be
>> implemented.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> So, I'm almost sold... Just one comment:
> 
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>  
>>      cpupool_gran_init();
>>  
>> +#ifdef CONFIG_HYPFS
>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> +#endif
>> +
> What would you think about doing this in a helper function
> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
> empty stub if !CONFIG_HYPFS ?
> 
> That will save us from having the #ifdef-s again here.

Having a hypfs_add_dir() stub would also achieve this and, going
forward, would perhaps help elsewhere too.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:56:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:56:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.24995.52537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrY6-0005Ex-Ct; Wed, 11 Nov 2020 14:56:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 24995.52537; Wed, 11 Nov 2020 14:56:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrY6-0005Ep-9e; Wed, 11 Nov 2020 14:56:38 +0000
Received: by outflank-mailman (input) for mailman id 24995;
 Wed, 11 Nov 2020 14:56:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcrY5-0005CI-0e
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:56:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bed9251-079b-4cb9-bf72-b107d1c3d6a7;
 Wed, 11 Nov 2020 14:56:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1FAFAAC82;
 Wed, 11 Nov 2020 14:56:33 +0000 (UTC)
X-Inumbo-ID: 3bed9251-079b-4cb9-bf72-b107d1c3d6a7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605106593;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=w1Ti3l+//IWgLg5qzYZiRkyadrCGQYCy2ELsNH03+RQ=;
	b=XH+hUgTDyk/Etlb18FFMFzihJ0w+UJlz6yU8dNJxxBH0vdYANpT4LaJACqdbsYzhuzjQ4l
	xn28ZAZUoNOWBt7tNmlKyWxNuxofFN/wnt3lvlGRA88oVU1+xOaKDI1wWNkZ19bQMuTbz9
	7b0nyK9Z6LdDUMBmDfzJkFgyL0jwxf8=
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <ef89463b-f93c-c098-4829-0fc430fc650a@suse.com>
Date: Wed, 11 Nov 2020 15:56:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qbwbkJr2CUc33n1vuqaX4Pq2nnY3dnT0F"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qbwbkJr2CUc33n1vuqaX4Pq2nnY3dnT0F
Content-Type: multipart/mixed; boundary="TdYcB7WeigExtybkQqYWGWIPQ53smgnv6";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <ef89463b-f93c-c098-4829-0fc430fc650a@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
In-Reply-To: <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>

--TdYcB7WeigExtybkQqYWGWIPQ53smgnv6
Content-Type: multipart/mixed;
 boundary="------------A3D213F23AF9C1E2657D97F5"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------A3D213F23AF9C1E2657D97F5
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 15:51, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>> dynamic, so the related hypfs access functions need to be
>> implemented.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> So, I'm almost sold... Just one comment:
>
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>  
>>      cpupool_gran_init();
>>  
>> +#ifdef CONFIG_HYPFS
>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> +#endif
>> +
> What would you think about doing this in a helper function
> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
> empty stub if !CONFIG_HYPFS?
>
> That will save us from having the #ifdef-s again here.
>
> I'm asking because it's certainly not critical and I don't have a very
> strong opinion about it. But I do think the code would look better.

I'm fine either way.

In case nobody objects I'll change it.


Juergen

--------------A3D213F23AF9C1E2657D97F5
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------A3D213F23AF9C1E2657D97F5--

--TdYcB7WeigExtybkQqYWGWIPQ53smgnv6--

--qbwbkJr2CUc33n1vuqaX4Pq2nnY3dnT0F
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+r+6AFAwAAAAAACgkQsN6d1ii/Ey9u
Agf+KSVs3f8pijju/IO4OjvX2tsEcMCn5vgggS4LZbEysTN+lQMMPc8uC4JVvmaWThHcKcJlBWkS
CEwuYc4oCIuJ2fukWxl2pf/B42vfuw5cfRKW9/xXfccoIJ5RvhfBeGWRZMSTGLw4r1ILv10d4sOu
KOvfytxckemdYZNOWJi7Isk+jz2O3leheB8Nabk0lTbh/eeAqoiqn2I9+lmGJJaGtsQNhnbJgqwn
g5BxIDzqq5FaJv3G/E7s/gOBxg5Zlq07iOyNx8xZtGHWIUmaFDHZlRMBBI0dzuywDk6KFpsLnZTh
F86ZAlgC3jNbIwB8GmmKBiBf9DZTdV9LG2X2SgipQw==
=6F5Q
-----END PGP SIGNATURE-----

--qbwbkJr2CUc33n1vuqaX4Pq2nnY3dnT0F--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 14:59:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 14:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25007.52548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcraR-0005Ua-QY; Wed, 11 Nov 2020 14:59:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25007.52548; Wed, 11 Nov 2020 14:59:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcraR-0005UT-Nc; Wed, 11 Nov 2020 14:59:03 +0000
Received: by outflank-mailman (input) for mailman id 25007;
 Wed, 11 Nov 2020 14:59:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcraQ-0005UO-Gs
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:59:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d22b90c7-4e80-4be6-bcf0-320df311ce6c;
 Wed, 11 Nov 2020 14:59:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86A87ACB5;
 Wed, 11 Nov 2020 14:59:00 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
	id 1kcraQ-0005UO-Gs
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 14:59:02 +0000
X-Inumbo-ID: d22b90c7-4e80-4be6-bcf0-320df311ce6c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d22b90c7-4e80-4be6-bcf0-320df311ce6c;
	Wed, 11 Nov 2020 14:59:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605106740;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=D3/Zq17neIFpSSH5k12guAKQYUSdcYEQKVSHAdgq8t8=;
	b=dn4upLqIUtfzxPhEu3B6PGexhiT4tm6bZVF0ytY65876hLX3oiZhg9w+sJxsvWfB6t5cwr
	bTSQCJzcOfgt9HgIK146ZPvVEPLFNmADoFNcaVmZ67iO49GHsgC5IDAteXQ9EtutR8/zXK
	SIDyM2qgkpfwX5rzwNyV4K6JtnNnm90=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 86A87ACB5;
	Wed, 11 Nov 2020 14:59:00 +0000 (UTC)
Message-ID: <5f1f99784b91f1d44bb1907889191ca631261f3f.camel@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, 
	xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	 <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	 <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Date: Wed, 11 Nov 2020 15:58:59 +0100
In-Reply-To: <ef89463b-f93c-c098-4829-0fc430fc650a@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-11-jgross@suse.com>
	 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
	 <ef89463b-f93c-c098-4829-0fc430fc650a@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-1dZlH0eB9hY5nPmF3yQd"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-1dZlH0eB9hY5nPmF3yQd
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-11-11 at 15:56 +0100, Jürgen Groß wrote:
> On 11.11.20 15:51, Dario Faggioli wrote:
> > 
> > What would you think about doing this in a helper function
> > (hypfs_cpupool_init() ?), implemented inside the above #ifdef and
> > as an empty stub if !CONFIG_HYPFS ?
> > 
> > That will save us from having the #ifdef-s again here.
> > 
> > I'm asking because it's certainly not critical and I don't have
> > too strong an opinion about it. But I do think the code would look
> > better.
> 
> I'm fine either way.
> 
> In case nobody objects I'll change it.
> 
Ok, cool. If you do that and resend, and that's the only change, you
can add:

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-1dZlH0eB9hY5nPmF3yQd
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+r/DMACgkQFkJ4iaW4
c+7wrA//QnxRaExx3sJoXmefB6DbdcjWzuCo3MAVTTZNOVni61y3dRTmmxLVsUsH
v83QbM4uIt3irFyT4YXPl+dr6B5AttppaxDjkphyYfKVP2GUgjy1hAO2y/2GZRkP
kDX5jWIrqDBN/t3bkYYtW2D9g52NbnXuYOLspAlP567rzzTJrvGApfTL6qhMh0Yr
K1LOTTKDmDswTC2YTiCUJFitaB5UwMm0+rSk5ZFRx9h+rH/zdXbF9J1RsibH0IG8
+PV0q+93jdBLLPmuqAiYcrrMFKUKwfihu4C5R3TcG4//Au8Q7OQfJ//I4p38HrhR
mI5U86Yip8kJqGJejs164GfZm402MGeFQmzkAIgu5Dz7cPgPK3VrMNfiGRmoiEsq
EdP6VTw0tXEfpBY9w8U3fsqEucSJqePVTXx1CnL05dpGHtS5r6wFmtEzl92ssefR
2yxbNXRUhUkA4e9eza83ugApq+lt5u5lALQR9T+ufpLPM7OlinFwjcvqni4ybSrx
2vk7P9J5LkWezRtNI0td98nWvtMKtIRjsQVKxeIYpixV4iGAGrkSSxAFTK+2ns24
iWuaEytHJItcdRpYaTVcNTEWMXZVY+GbhNuQ+9c4IqczjEV+iPMyiu1drlXpQPHB
Qgs+OFOcHxlZCGECLnrYqGEb3d9wU3Or334gRXRFMD5GrsuxVGU=
=pOLf
-----END PGP SIGNATURE-----

--=-1dZlH0eB9hY5nPmF3yQd--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:01:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25015.52561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrcK-0006Po-7Y; Wed, 11 Nov 2020 15:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25015.52561; Wed, 11 Nov 2020 15:01:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrcK-0006Ph-3j; Wed, 11 Nov 2020 15:01:00 +0000
Received: by outflank-mailman (input) for mailman id 25015;
 Wed, 11 Nov 2020 15:00:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcrcI-0006Pa-KC
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:00:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ed46d09-9bff-4607-a13b-508f8f990254;
 Wed, 11 Nov 2020 15:00:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7985AC82;
 Wed, 11 Nov 2020 15:00:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kcrcI-0006Pa-KC
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:00:58 +0000
X-Inumbo-ID: 2ed46d09-9bff-4607-a13b-508f8f990254
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2ed46d09-9bff-4607-a13b-508f8f990254;
	Wed, 11 Nov 2020 15:00:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605106856;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=OF669ofiJecTVgQi3KBql6TEQF3bqXpo9qIem+avUXw=;
	b=Q1ZaoTtqxvSNnBNlcwTL3+tJqdlQNxs4fxc+3mwV2ZNsV7RhmaaT85yjTAs2cPqQxQJ8G5
	Zd867kqkYajXvBc7uuoQkX0rcmRG2MevuBVWlSyBxDcV+h0bX7Tb4ArgZLgMJRg+0586uH
	ST+ySef/WDN0ZuqVu6a8JQuWgrYgXaE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B7985AC82;
	Wed, 11 Nov 2020 15:00:56 +0000 (UTC)
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
To: Jan Beulich <jbeulich@suse.com>, Dario Faggioli <dfaggioli@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
 <6b2cf5d0-9c6d-07bd-51d3-9fd34cd8d1a5@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9aba12c7-f8ec-7670-2661-f82b05adf649@suse.com>
Date: Wed, 11 Nov 2020 16:00:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <6b2cf5d0-9c6d-07bd-51d3-9fd34cd8d1a5@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="UFfRQk51QwmHYfK2ztisLzaFZePB3Kwdu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--UFfRQk51QwmHYfK2ztisLzaFZePB3Kwdu
Content-Type: multipart/mixed; boundary="30TUk66TA6sHbf3dfEr1pQcOfMLqQbsZ6";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Dario Faggioli <dfaggioli@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <9aba12c7-f8ec-7670-2661-f82b05adf649@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
 <6b2cf5d0-9c6d-07bd-51d3-9fd34cd8d1a5@suse.com>
In-Reply-To: <6b2cf5d0-9c6d-07bd-51d3-9fd34cd8d1a5@suse.com>

--30TUk66TA6sHbf3dfEr1pQcOfMLqQbsZ6
Content-Type: multipart/mixed;
 boundary="------------03BBD9441982CAFB9FB35B1D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------03BBD9441982CAFB9FB35B1D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 15:56, Jan Beulich wrote:
> On 11.11.2020 15:51, Dario Faggioli wrote:
>> On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
>>> Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
>>> dynamic, so the related hypfs access functions need to be
>>> implemented.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>> So, I'm almost sold... Just one comment:
>>
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
>>>  
>>>      cpupool_gran_init();
>>>  
>>> +#ifdef CONFIG_HYPFS
>>> +    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>>> +#endif
>>> +
>> What would you think about doing this in a helper function
>> (hypfs_cpupool_init() ?), implemented inside the above #ifdef and as an
>> empty stub if !CONFIG_HYPFS ?
>>
>> That will save us from having the #ifdef-s again here.
> 
> Having a hypfs_add_dir() stub would also allow to achieve this, and
> then, going forward, perhaps also elsewhere.

I thought about that. It would require the stub to be a macro, but I
don't think that is a major problem.

Currently there are no other places requiring a stub, but in the future
this might change.


Juergen

--------------03BBD9441982CAFB9FB35B1D
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBycWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8Of8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDAQIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyThpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCCQoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7DrWf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJCAcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+lotu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1EvmV2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88NEaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLkpEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARAQAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylWsvi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDXzXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------03BBD9441982CAFB9FB35B1D--

--30TUk66TA6sHbf3dfEr1pQcOfMLqQbsZ6--

--UFfRQk51QwmHYfK2ztisLzaFZePB3Kwdu
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+r/KgFAwAAAAAACgkQsN6d1ii/Ey8X
yAgAmFY4vaqgfiRdstt+cTjhjcnl1omNrcAkLJs/KP5walDsGG8gLND5turAu2Cf7JEg2mHLifW5
PYO3FbOnSd9HUi6OSt/FEajEySf/8wu7B4oBtZm4oiiE8slo9VT7RWu3eyoZhTo4GHnaf4Ia38xc
SVGqO4w53x6cxzGm/oS65wn+/fffNikIPGUrSZSg0LlhYW58u13qMGsuTdm0lug6lHP6CkmAtBw7
ilSq3ZmdjLQgQ9bqXQl/ChdOTC8tmblV/wkeImXXMmYGol4hnD0xTfvQp7FTR2rIJ+WgoQwl/j4U
Ma0I7m12EFnphQADXOT0uk+IscrpiU61evH6ItRwFQ==
=q4jW
-----END PGP SIGNATURE-----

--UFfRQk51QwmHYfK2ztisLzaFZePB3Kwdu--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:03:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25025.52573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcret-0006bg-Po; Wed, 11 Nov 2020 15:03:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25025.52573; Wed, 11 Nov 2020 15:03:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcret-0006bZ-Lk; Wed, 11 Nov 2020 15:03:39 +0000
Received: by outflank-mailman (input) for mailman id 25025;
 Wed, 11 Nov 2020 15:03:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kcrer-0006bU-LV
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:03:37 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 20067034-93a5-4f23-8e31-ea0f002f5abc;
 Wed, 11 Nov 2020 15:03:36 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nKbA=ER=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kcrer-0006bU-LV
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:03:37 +0000
X-Inumbo-ID: 20067034-93a5-4f23-8e31-ea0f002f5abc
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 20067034-93a5-4f23-8e31-ea0f002f5abc;
	Wed, 11 Nov 2020 15:03:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605107016;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=+G4UFofyIJQuFMHtvIO7Gci3X1f9UHSZ1LVC+e/3zpQ=;
  b=iRvYWBybUbwhB18fB5M4M72JC/VTk/w8ZVp1sS+OOh3jCO2ZNBbRndcu
   Q22esWVZMik5OCgpGo1ioTDVEZ7oDzG2Az3BmmMWUf8pjKX/XiTYWGuW7
   K8JsSkt6grAjFPcyeHNfCWWi+R6hgozNraKjuMK3mFcnohVtanepWN7mA
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 4h6IwsyPdVDMcNBgZCEKOqENnunstA9BpifI2XHLgNhYyASY6d5ovV+c1rAfQ8MlAAdgJiJ+jt
 y1Sxlb3Q3yCcn3Y+nZ0pIlWqAF095xtvKVSXif/kY3VgcCX2SGbTwmbY8J3ZhmJPv4GecK+vT5
 QH8pkmpAmOB2f5j3KNycQxsrP8ygTETh32cgkmRedLQDKUfLj2Jf8XrdVy62bvfMhxwxX4k4P7
 9dCya7Q9Z9yQrvTB5fwd3+7qTISfeqlGVgiq+5lQ3ARfV7v/gsZwFkrcBKAyCqGNIuzrBYHLd/
 E2Q=
X-SBRS: None
X-MesageID: 30981712
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,469,1596513600"; 
   d="scan'208";a="30981712"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jyvan3vHa4UaqiObbABMYi0o4eJJxgMcsiUFG2ZSUHC83KEHcyBi04TXZjwZfuAX7nTJF82HhKLTrUKrfAazRnDecs8RVh7XUzetGdsA+Xy1n3ZK9PsGqT2UyE83cNCcY1Y0vM+Ti074qTpUbDOcgsA69VUBhrpXwevkThGLm9G9BetWXuGZtw1ytDJNSWimbm7GbCgH4f0JWfvgiOoJeE33huvP5zGiqNZqonkXgcCmczaEGJyUNjZ+jUKa1eUcE4LvSBX/o9/abPsGaLCgx62OElfBH4f91mnp6CoF42V9NHcfA6WnvF8qB0RNQIWkdsAEJuP6laxMqPnD5/8eHg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xXVqu6XMKSzoC832hSQGLGFgOtmCssn910dE3tCovKc=;
 b=f48dXWMII3VkkZ7XHgO5Viap5w09nZ1JF8cDA6opKNsZqLPxiTqANQUBJ4RWnVA/raUXib/lfMdma4l5K13YMFs5ojet5zcCbN/h8FHSzveJ7e/sajtQ9+nH33Et3DQ1HLi3NuU8sbamQERayvjKV4Ynj4X/WeYTgcuqJFQjmJVQMgdMXLyaEdE9bUBw0qsHD9xyfo2NpX7Xk6eVNFMsuQq0sWipQPtvQwV1GYQlzBOZHBFWW8rQ/FmWlcCO8eFYxS42J+Y/9cgMBNcqqAxKulRyZqQ2rc3igkF9zNqdp4PTztjQHMARJiwQMzNiE2M7/lT55dJxrs+12//CdD/ceA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xXVqu6XMKSzoC832hSQGLGFgOtmCssn910dE3tCovKc=;
 b=hc0/GkWIArGK0DU45F3IQ04BhuiNe/1viLNQd/ToZzGAtYSBtIT4M30VvZNHi77sC8QyGSPvXd6Lorhu4nkAS1Gu4RYZUES34ArOHEUZ65ByF00JNIGUCFVfts84gsKFm7Pr+DAc6isO2BaFJTGJ+EkE3M2W9DSAXPxHqsPEcxw=
Date: Wed, 11 Nov 2020 16:03:10 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>, "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>, "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
	"julien.grall@arm.com" <julien.grall@arm.com>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Message-ID: <20201111150310.2wo33lr3f5xrd6sj@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
 <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
 <03d6a75c-075d-6c57-1d66-2514ef1d0cb8@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <03d6a75c-075d-6c57-1d66-2514ef1d0cb8@epam.com>
X-ClientProxiedBy: LO2P265CA0486.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::11) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 51eaec14-9e88-4f04-c1bc-08d88652ec17
X-MS-TrafficTypeDiagnostic: DM6PR03MB3673:
X-Microsoft-Antispam-PRVS: <DM6PR03MB3673D4C709F235840DB9D73F8FE80@DM6PR03MB3673.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: dUPnXZq1OdZz4TovXiHKjriuSemv//DuSSf5Edqc0z6VAOqNdPzwHQXFi4Y6+VIMkeFSmd6eHbYHYSGCYBQnO9JqotrUJHoj7Pnhyxg5PWix2jw3I6FClhMQ41mz/LSEW4ehO+fam+YtUxby/JhRq7PwVkVjKSsIDPG2wOQmgQQ63GqXfaRSZ6t9nUDYP5KXbVmmSSGvI/IGynmXVXOhv1gvOjrEYHUAwpKiPo99izRIbesMsjfXdhgucadcSWhlOR+EX4m+V0nOzKQ/bjkue1KkQSpHTGIyS2bLO81Awfpu3f4QQt8pvXxqMGgBUEsr7x1jgnPGGV46d0CI2f2b7Q==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(396003)(366004)(376002)(136003)(39860400002)(9686003)(1076003)(85182001)(2906002)(5660300002)(86362001)(6666004)(54906003)(66946007)(66556008)(66476007)(6496006)(956004)(7416002)(316002)(6916009)(53546011)(6486002)(8936002)(478600001)(186003)(16526019)(33716001)(8676002)(26005)(4326008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: zTKJzh3iUeWb6CLprWKXIkfd8m+0iiouCfPyf1LVjQ0oeqImY8sotPJ9okUYMYWMK+OkLp02zKJUKMFxenUjXxZSHkSQ2LsayO3AbUnaaL7Jyt8q7nXW1lTS+p3jOJA5fZLlQ4v7tnCma84HqfeoKeDkUTqAysezCRpOVMVMew2OmwmSgNcezhbqMjTrFR5dzWKsQ6eFt6UbEbg2+FxPo64xiUlbKORgKsJ5+72mphMCG/b+G01Snlvc6MZJzjTitv55jQL0BDxi3rwnGaTX2KibGKQajCCCQwByi1GCtCLxcrX4irm4vdGUdcHBe4t0LwvB0dDpH53eAVBG5A9psGo1v/NOkJD1XNaJJYAHHaNxQYvQz0L6LFdtV74JFF3rtjgaqpfVmnCRNCaEhaui3j491LZpM8gX5Gcx3+FrO2ogXkVRTKlu23TSV4kx/LMBuwP8womlJl6s7PsACUgd6ZbHKR8Mo7idBbbGdJR62TE6QDtG2vBtvsspagIRC5h2vKAhKFXvYK9LtzNNKvWN1liQhMaltw91hcAUQEr3OVd03lCEHSnpJIM/VkL3Q+43WqRGWnGGe7lGgzI08KByAqG8hrO6e5jPkkccqIr8YSrGxauUPyxMXkFygTcfDDECYjIaNgWAk6XMveAkPsfo2toxF5WauL071H038oITzwaBx4U4t/Rhz4Pq5w6hour19eD2SfvsC4E+HURbk7X10ze4A4w5hgEaBMTsIkH/9c32PMPGcddvvxXXrl57QtAME1XxRZ25tcLh+4K8kVhgWrSXl4d3vxNhNaKltMoX2V1gqqMLYEsdaSdARcpfQHysVUV5QtjIg9fQ9g7QMD0V17lWn1UJWI0aHTys+c0VbbngFKdyVw1z9oukN/HVJqRYAysRUYJrEgScehEH1ZTe7Q==
X-MS-Exchange-CrossTenant-Network-Message-Id: 51eaec14-9e88-4f04-c1bc-08d88652ec17
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Nov 2020 15:03:18.1485
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k0Y9ED9ixZgCQc6ew62ZCKuYAWimS2FuOMSA5ydtCud8f57RfQqbxDYVQkSDpd7vuVvb3Rq6rQqholvs67XZyg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3673
X-OriginatorOrg: citrix.com

On Wed, Nov 11, 2020 at 02:38:47PM +0000, Oleksandr Andrushchenko wrote:
> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
> > On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>
> >> The original code depends on pciback to manage the assignable device list.
> >> The functionality which is implemented by the pciback and the toolstack
> >> and which is relevant/missing/needed for ARM:
> >>
> >> 1. pciback is used as a database for assignable PCI devices, e.g. xl
> >>     pci-assignable-{add|remove|list} manipulates that list. So, whenever the
> >>     toolstack needs to know which PCI devices can be passed through it reads
> >>     that from the relevant sysfs entries of the pciback.
> >>
> >> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
> >>     a PCI device it needs to be unbound from the relevant device driver and bound
> >>     to pciback (strictly speaking it is not required that the device is bound to
> >>     pciback, but pciback is again used as a database of the passed through PCI
> >>     devices, so we can re-bind the devices back to their original drivers when
> >>     guest domain shuts down)
> >>
> >> 1. As ARM doesn't use pciback, implement the above with additional sysctls:
> >>   - XEN_SYSCTL_pci_device_set_assigned
> > I don't see the point in having this sysfs, Xen already knows when a
> > device is assigned because the XEN_DOMCTL_assign_device hypercall is
> > used.
> 
> But how does the toolstack know about that? When the toolstack needs to
> list/know all assigned devices it queries pciback's sysfs entries. So,
> with XEN_DOMCTL_assign_device we make that knowledge available to Xen,
> but there are no means for the toolstack to get it back.

But the toolstack will figure out whether a device is assigned or
not by using
XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?

AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
assigned, but Xen should already know it because
XEN_DOMCTL_assign_device would have been used to assign the device?

> >
> >>   - XEN_SYSCTL_pci_device_get_assigned
> >>   - XEN_SYSCTL_pci_device_enum_assigned
> >> 2. Extend struct pci_dev to hold assignment state.
> > I'm not really fond of this; the hypervisor is no place to store a
> > database like this, unless it's strictly needed.
> I do agree and it was previously discussed a bit
> >
> > IMO the right implementation here would be to split Linux pciback into
> > two different drivers:
> >
> >   - The pv-pci backend for doing passthrough to classic PV guests.
> Ok
> >   - The rest of pciback: device reset, hand-holding driver for devices
> >     to be assigned and database.
> 
> These and the assigned-devices list seem to be the complete set which
> is needed by the toolstack on ARM. All other functionality provided by
> pciback is not needed for ARM.
> 
> Jan was saying [1] that we might still use pciback as is, but simply
> use only the functionality we need.
> 
> >
> > I think there must be something similar in KVM that performs the tasks
> > of my last point, maybe we could piggyback on it?
> I promised to look at it. I owe this
> >
> > If we want to go the route proposed by this patch, ie: Xen performing
> > the functions of pciback you would also have to move the PCI reset
> > code to Xen, so that you can fully manage the PCI devices from Xen.
> In case of dom0less this would be the case: no pciback, no Domain-0

But for dom0less there's no need for any database of assignable
devices, nor the need to perform pci device resets, as it's all
assigned at boot time and then never modified?

Roger.
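For context, the pciback-backed assignable-device flow described in the quoted text is driven from the toolstack roughly as below. This is a sketch: the BDF 0000:03:00.0 is made up, a Xen dom0 with xl is assumed, and the guard only makes the snippet harmless on a non-Xen machine.

```shell
# Sketch of the xl/pciback assignable-device flow discussed above.
if command -v xl >/dev/null 2>&1; then
    xl pci-assignable-add 0000:03:00.0        # unbind from its driver, bind to pciback
    xl pci-assignable-list                    # pciback's sysfs acts as the "database"
    xl pci-assignable-remove -r 0000:03:00.0  # -r rebinds the original driver
else
    echo "xl not available; commands shown for illustration only"
fi
```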


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:11:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25033.52585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrme-0007a4-MJ; Wed, 11 Nov 2020 15:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25033.52585; Wed, 11 Nov 2020 15:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrme-0007Zx-I0; Wed, 11 Nov 2020 15:11:40 +0000
Received: by outflank-mailman (input) for mailman id 25033;
 Wed, 11 Nov 2020 15:11:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcrmd-0007Zs-R2
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:11:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c14d34c5-a5a1-4335-8f81-1d25083b6ad6;
 Wed, 11 Nov 2020 15:11:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B42F0AC82;
 Wed, 11 Nov 2020 15:11:37 +0000 (UTC)
X-Inumbo-ID: c14d34c5-a5a1-4335-8f81-1d25083b6ad6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605107497;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=D25xFq44o+ROSq7PtmfH57k0SQUUt62DuVfai3k9VCg=;
	b=d0waCxHO7tk55pPhck7LiWSWY87tM0XreYEYGdcsf2A9eJYDLb8oP0pgPLyqJQGJ7v5OA4
	IrFTN912Djc/wiq7/2hyx5as+z7+y+GGIUROqw6tr/bVG9mbHkMrqn8uhtpNcH3Y/mq/Kz
	XgDZ7huIBciRGexFHVO7gkAWa2CIhoA=
Subject: Re: [PATCH] xen/x86: Work around Clang code generation bug with asm
 parameters
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201111124512.2268-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8282790a-a0bd-1d33-d992-9d194766254e@suse.com>
Date: Wed, 11 Nov 2020 16:11:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111124512.2268-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 13:45, Andrew Cooper wrote:
> Clang 9 and later don't handle the clobber of %r10 correctly in
> _hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122

Are you sure this is a bug? With ...

>  #define _hypercall64_4(type, hcall, a1, a2, a3, a4)                     \
>      ({                                                                  \
> -        long res, tmp__;                                                \
> -        register long _a4 asm ("r10") = ((long)(a4));                   \
> +        long res, _a1 = (long)(a1), _a2 = (long)(a2),                   \
> +            _a3 = (long)(a3);                                           \
> +        register long _a4 asm ("r10") = (long)(a4);                     \
>          asm volatile (                                                  \
>              "call hypercall_page + %c[offset]"                          \
> -            : "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__),     \
> -              "=&r" (tmp__) ASM_CALL_CONSTRAINT                         \

... this we've requested "any register", while with ...

> -            : [offset] "i" (hcall * 32),                                \
> -              "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)),     \
> -              "4" (_a4)                                                 \

... this we've asked for that specific register to be initialized
from r10 (and without telling the compiler that r10 is going to
change).

Also, by what I would have hoped had become convention by now, the
new macro-local variables' names shouldn't start with an underscore,
but instead perhaps end in one. But to be honest, despite knowing
of the latent (albeit highly hypothetical) issue, each time I
find myself making such a comment I'm one tiny step closer to
giving up.

Anyway, with at least title and description changed (or your view
clarified verbally)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:11:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:11:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25034.52597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrmt-0007dm-Ux; Wed, 11 Nov 2020 15:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25034.52597; Wed, 11 Nov 2020 15:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrmt-0007de-QW; Wed, 11 Nov 2020 15:11:55 +0000
Received: by outflank-mailman (input) for mailman id 25034;
 Wed, 11 Nov 2020 15:11:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcrms-0007dI-Ll
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:11:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0740b1c6-e2f5-4db0-84ab-b9693fc39eb6;
 Wed, 11 Nov 2020 15:11:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59824ABD1;
 Wed, 11 Nov 2020 15:11:52 +0000 (UTC)
X-Inumbo-ID: 0740b1c6-e2f5-4db0-84ab-b9693fc39eb6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605107512;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uSPXpLEG8GOSyda/5PVaeleBjocKL/tOq+aKFv7VeqM=;
	b=dKI0r7JK35LFwj0ilmh6RRxVlyv0k0p9yWd/5A7bDxu99/X9F2xkY81YGoznX91IPgeIci
	cA6JgJ85xkK+lf61TMNMaech5tRRywFUGzHZQt6fQdX5SHyg6pzdElhQ9j0p6bfe71VxIe
	1mp1AZhrLbQjo7TfHyeYu2gMUrhte24=
Message-ID: <7012360df5addb6e7737e421562e7f6a1dfe3f75.camel@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, Jan Beulich
	 <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	 <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	 <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	 <wl@xen.org>, xen-devel@lists.xenproject.org
Date: Wed, 11 Nov 2020 16:11:51 +0100
In-Reply-To: <9aba12c7-f8ec-7670-2661-f82b05adf649@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-11-jgross@suse.com>
	 <c5b12f33b4e3feb0d6f6bc51d5474b36fa42d881.camel@suse.com>
	 <6b2cf5d0-9c6d-07bd-51d3-9fd34cd8d1a5@suse.com>
	 <9aba12c7-f8ec-7670-2661-f82b05adf649@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-T7tmYYHPs2F+VFT9IuPY"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-T7tmYYHPs2F+VFT9IuPY
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-11-11 at 16:00 +0100, Jürgen Groß wrote:
> On 11.11.20 15:56, Jan Beulich wrote:
> > On 11.11.2020 15:51, Dario Faggioli wrote:
> >
> >
> > Having a hypfs_add_dir() stub would also allow to achieve this, and
> > then, going forward, perhaps also elsewhere.
>
> I thought about that. This would require the stub to be a macro,
> but I don't think this is a major problem.
>
Yes, I thought about stubbing at the hypfs_add_dir() level, but we
need a macro for that, for dealing with the parameters.

Also, it's not like having that stub would prevent having #ifdef-s at
all in this file. So that's why I thought about a local stub.

But sure, if you go for the generic macro stub, I'm fine with that too.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-T7tmYYHPs2F+VFT9IuPY
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+r/zcACgkQFkJ4iaW4
c+4ydA//XGZ9rclNfsBWzdxmDng1m9P+PQNtkMiiDaPbIfybirqVR5KiNT1awYBB
O0dsdAmuJpIjWTJ1GGim04t+V4OCqVFTLqSMmeshncKmDffEJzNIA7I7jDQz+vnb
lqsttsom97nQ7+sfVFo9xzXkWKvCtQVIwqkdWgsN6+rIRtf9cy8pJqFn7Q7j168L
urmWZ8Go66UoDrLkZ2Sk4bDKrjZpdav/f3GBR2Lmghz28tL81X0IuFR7C9G5rUsI
nHnH1KAzQrlNCWvD+8cwUW2QmpG9E92gtaWrArpbcM0O62hJu4ZMwZxs6G/k0FwX
u4xnAsTZrpAK8BxUynPh9iEJ71iVJY+jHB5hY21GcbMtbCk8QhWUFdqbi9HZSc2O
ILRRyx9A1Z7PYxvGxcpw8zaVAdV61a9JcFgJC8FYLKkZZOpzmGxd8OUNqoRz3Lwq
pZ0+CFeoudLtX6Sh7z4s5mrCOEGdY+HoDJsBCz20mX6FXJ8kqggFKJQGuR8tGBeZ
/qbELZNTixOL5Lp1ZeLjRLzjq1UqIE+tWZPLyj6FXWZ24Xu54wDipoCLOdJqjZva
Grl4bnUmWqEJXbE7xR8Nk0IyDpmDkWcVGdAlaDAXeHENmt1oVt2HcxgeR/jkCiAV
8fPt5ZQoTC+NpGuqzBrgJJHVVD37H2ls+yE2/m6rF3fWE8wR/zs=
=NqJT
-----END PGP SIGNATURE-----

--=-T7tmYYHPs2F+VFT9IuPY--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:14:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:14:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25046.52609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrox-0007s0-At; Wed, 11 Nov 2020 15:14:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25046.52609; Wed, 11 Nov 2020 15:14:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrox-0007rt-7U; Wed, 11 Nov 2020 15:14:03 +0000
Received: by outflank-mailman (input) for mailman id 25046;
 Wed, 11 Nov 2020 15:14:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pxcb=ER=epam.com=prvs=9584594409=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kcrov-0007rn-9z
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:14:01 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80dcbc8f-5fd9-4181-bd7d-93a3a546ccab;
 Wed, 11 Nov 2020 15:14:00 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ABFBW7I012119; Wed, 11 Nov 2020 15:13:54 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106])
 by mx0b-0039f301.pphosted.com with ESMTP id 34rf80rrex-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 11 Nov 2020 15:13:54 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4434.eurprd03.prod.outlook.com (2603:10a6:208:c7::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Wed, 11 Nov
 2020 15:13:52 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 15:13:52 +0000
X-Inumbo-ID: 80dcbc8f-5fd9-4181-bd7d-93a3a546ccab
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MGca0Oi92dsQ+dpELevReIcnpRn3Gp5AifshGjTnnv6ljaiSyVZTfHjA4MxjEykY5IYGxptHjzD4w+qSSjlGUESfKnfm+0I0c9N7O8MrHSmCKGAvOy/jd4MYDPifz74LDKN922qSClbV+dwIW25TZrW4g0LggP/D7utJ2/Nxiqz2Mie8OSeOaRaOl5j7pHOx4yjBM5lugIXrTJsLrDOSQGkRo99HEko8upjzy7SxnOfvoV7yV60Pv2sQscYg3kyaH8N3WANu6L4UCU71YrN91abNc5JkjvLgZAOwexVwrihlH+c/fFvx5hInXCgQoVuIq4wb9wJscmEPEJjaK74DmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IhN9hHLd1SCyp79g9d0bOlLuqe506bMtBCVVE3ev64g=;
 b=iyfs+EfoPVzk1pghP2FNFApiXD/O66pCUCDxfhGFeGp0P73Vgf1ClaPwqokm0HP5v1u5J9kKJCX/3pCFS9iLIAzvRZdFMmqdrajPn6Mo0rinUxyLSM9v8XQHB71jngwX6fVRS0D8IWlLacynvfIpFYdq4iKIKDGSwNvVdqP3N0mN6FAMZSgGdMRa8po+C1h7ICykuD33Lsp/e6RSpy5uMmGtlPJgLVKQE3nTLDf18IcBxJqDuv/2mudAq7Xpk3aznv4Z7fUFLmbB+z5MnIWIoYous07zxMsO5qowdIvrNEWYlJf/DKl+hDu4vY0IdEEghUxlnaSuFmVYZoRbTjFOSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IhN9hHLd1SCyp79g9d0bOlLuqe506bMtBCVVE3ev64g=;
 b=hx7MdOzW6Sa2JRABlDgszRT1COIAKYtAMd0qqcj4Dr7FTm5xmKUNRcJY6ZN2V0it/4fElfFQDGFBkLh2tr/eLLbn3kop2cmyX8C/lBOiRHtv/0crS1YPZrdNekO6yQQUoWP08vzKnRJ9ffpyTXyl2OleIQy6vpHyF/yOTHGAN9gzIpEWRd+XRLlZ/YRvzwaBnmScmxJE5o5hQQ/HDRBHFXM7SWAvGXqfW4raeAQJS1sH+kB0kKK/38PKIp2sHi+NI2XhUork5ohoCxfXP6ZPV3KkThePicKmdVgSJLkLzOW3oye/pEkPe36M6SkPk7YTR9Lk6HEbDozMsaV2v55NZw==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Oleksandr
 Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Topic: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Index: AQHWtpbuKiRgL1HiZkyNC2yoNcmnl6nC9s6AgAAMvACAAAbRAIAAAv4A
Date: Wed, 11 Nov 2020 15:13:52 +0000
Message-ID: <d94e37a3-d8b3-7d3d-0ba5-c7885a5ef512@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
 <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
 <03d6a75c-075d-6c57-1d66-2514ef1d0cb8@epam.com>
 <20201111150310.2wo33lr3f5xrd6sj@Air-de-Roger>
In-Reply-To: <20201111150310.2wo33lr3f5xrd6sj@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f4efdecf-598a-441b-15a2-08d886546668
x-ms-traffictypediagnostic: AM0PR03MB4434:
x-microsoft-antispam-prvs: 
 <AM0PR03MB4434F715A4B745913BCCA46FE7E80@AM0PR03MB4434.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 IdVgiCe3ttLeQRlWTxBKWJUo1IICRAL/sL44w6LOjZ61N7bNDSuD+p2pUyBLeitn7ltNxD8JmkJDq641CHWGKc2I9MjQGmGVbGyPQ5bvl6mlqFiNTCZ1KczWE9IwFU7fXXqtwiOaFRO4Bf/M3Kp8o1wE1OVFPdEEiK4/hfD0HaEcbBOFOEIA69eQ2S3xbQUrefiNezQBmomqAyjg0ixXdfmkvJ+Ifi5LhTVRANp+7kenTqsbRtvziSubWLyuFn3i6tljPWLHsEEKCgztzTkdBJWdaSKHVcomfmHbAn0ilXKISeKczAyoHd6I276WIVqOkYF25dQxfIqeABzsEu0o+A==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(376002)(39860400002)(366004)(346002)(478600001)(31686004)(31696002)(66446008)(86362001)(71200400001)(26005)(66476007)(2616005)(6512007)(66946007)(66556008)(64756008)(91956017)(4326008)(6486002)(5660300002)(8676002)(7416002)(36756003)(316002)(6506007)(8936002)(54906003)(76116006)(2906002)(110136005)(53546011)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 mebQqzF4SpErUVoZ2iADBlSaq2pp898lyKIromIjMu8vXlXGFIWYh0g5ymfA0ipnHccZjD/CJqP5tFBdDsEkGl0kcfnZz6VXEn/Z++IYTp+HWD8ENmYzIQByTQ7PVUbQoctkD3Gq6Mq9LATWuVj5ZC7u0PT557Rau81MNSuh/eGvuttiCI+98kKbrkUqOQ4oapeihmdLpCMJXjug8KeSbC4hpdFYcblOIAzI8LODjK5Rb5h4htDSG6iF71AsZt2b0S+4QeaUL/4ncJH1qMsEX5xmizy0I7erRluoY9a44YS5jvjpH8UGhi7wCVA1ag5TBTSY0thOU+J8D0EQ/53+cWutVAR5CKn4AORKzUGoBtkZJKrhpW2Q2fVX/sIUeqC51ACINT+qSyKLNh3h7/sj2PT+8yBVChO96i95G/48BctHZPjlyByivZmuiK9g0qVMAf3qaZah90qaX53RptHLRZeHmi3HRFD3H+FbCmpEfHF7EJqz53D61kBr47/BTRebKqsxO+wHsBzWAZFhP2LOgKNyFrigdaCb4idNVygdSJn5J5g8hnKBLGd0paOFGsRQ5ijklKyAfCSXsckVJ7oPXHBwEIFefbToa/ViS3eagavw60cdMxjaf6Yvu/GkY/wDI8oLO2cP0mwM0IJSij02tw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F94A906D5F079B43B9BDB597E424184F@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f4efdecf-598a-441b-15a2-08d886546668
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 15:13:52.5033
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 06XPNJtMb9o7wr+VLjSmE83HBIm7EbCJ75N9QUrP1tD0ZXTdokjIZfbBma7aV3StR8gXwKTkv8DWrYMXn7lW4S9nCoF5FMo9ycsReUm0cTS5ZtWTIThgM6BbZBhuSrB/
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4434
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-11_07:2020-11-10,2020-11-11 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 mlxlogscore=999
 adultscore=0 clxscore=1015 spamscore=0 priorityscore=1501 impostorscore=0
 malwarescore=0 phishscore=0 mlxscore=0 lowpriorityscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011110092

On 11/11/20 5:03 PM, Roger Pau Monné wrote:
> On Wed, Nov 11, 2020 at 02:38:47PM +0000, Oleksandr Andrushchenko wrote:
>> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
>>> On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>
>>>> The original code depends on pciback to manage assignable device list.
>>>> The functionality which is implemented by the pciback and the toolstack
>>>> and which is relevant/missing/needed for ARM:
>>>>
>>>> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>>>>      pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>>>>      toolstack needs to know which PCI devices can be passed through it reads
>>>>      that from the relevant sysfs entries of the pciback.
>>>>
>>>> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>>>>      a PCI device it needs to be unbound from the relevant device driver and bound
>>>>      to pciback (strictly speaking it is not required that the device is bound to
>>>>      pciback, but pciback is again used as a database of the passed through PCI
>>>>      devices, so we can re-bind the devices back to their original drivers when
>>>>      guest domain shuts down)
>>>>
>>>> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>>>>    - XEN_SYSCTL_pci_device_set_assigned
>>> I don't see the point in having this sysfs, Xen already knows when a
>>> device is assigned because the XEN_DOMCTL_assign_device hypercall is
>>> used.
>> But how does the toolstack know about that? When the toolstack needs to
>>
>> list/know all assigned devices it queries pciback's sysfs entries. So, with
>>
>> XEN_DOMCTL_assign_device we make that knowledge available to Xen,
>>
>> but there are no means for the toolstack to get it back.
> But the toolstack will figure out whether a device is assigned or
> not by using
> XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?
>
> AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
> assigned, but Xen should already know it because
> XEN_DOMCTL_assign_device would have been used to assign the device?

Ah, I misunderstood you then. So, we only want to drop XEN_DOMCTL_assign_device

and keep the rest.

>
>>>>    - XEN_SYSCTL_pci_device_get_assigned
>>>>    - XEN_SYSCTL_pci_device_enum_assigned
>>>> 2. Extend struct pci_dev to hold assignment state.
>>> I'm not really fond of this, the hypervisor is no place to store a
>>> database like this, unless it's strictly needed.
>> I do agree and it was previously discussed a bit
>>> IMO the right implementation here would be to split Linux pciback into
>>> two different drivers:
>>>
>>>    - The pv-pci backend for doing passthrough to classic PV guests.
>> Ok
>>>    - The rest of pciback: device reset, hand-holding driver for devices
>>>      to be assigned and database.
>> These and assigned devices list seem to be the complete set which
>>
>> is needed by the toolstack on ARM. All other functionality provided by
>>
>> pciback is not needed for ARM.
>>
>> Jan was saying [1] that we might still use pciback as is, but simply use only
>>
>> the functionality we need.
>>
>>> I think there must be something similar in KVM that performs the tasks
>>> of my last point, maybe we could piggyback on it?
>> I promised to look at it. I owe this
>>> If we want to go the route proposed by this patch, ie: Xen performing
>>> the functions of pciback you would also have to move the PCI reset
>>> code to Xen, so that you can fully manage the PCI devices from Xen.
>> In case of dom0less this would be the case: no pciback, no Domain-0
> But for dom0less there's no need for any database of assignable
> devices, nor the need to perform pci device resets, as it's all
> assigned at boot time and then never modified?
You are right
>
> Roger.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:22:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:22:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25056.52621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrwX-0000Qo-73; Wed, 11 Nov 2020 15:21:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25056.52621; Wed, 11 Nov 2020 15:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrwX-0000Qh-3u; Wed, 11 Nov 2020 15:21:53 +0000
Received: by outflank-mailman (input) for mailman id 25056;
 Wed, 11 Nov 2020 15:21:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MbyH=ER=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kcrwV-0000Qc-Qg
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:21:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2fbb3ae-22ba-45c8-8de5-2fc9cd435877;
 Wed, 11 Nov 2020 15:21:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4D379ABD6;
 Wed, 11 Nov 2020 15:21:50 +0000 (UTC)
X-Inumbo-ID: a2fbb3ae-22ba-45c8-8de5-2fc9cd435877
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605108110;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=C2tzLP1jGK1O/DkKtMWgOwpHBBkfzLcFObx5rJsZWCA=;
	b=irXMa44ydziTpzt1nw2uB0Xr4C8Upc32cRGasy9nnjYLtytP2pPB8euK558KP6qyj1lP6B
	5W/YpEcOg4f06WhqzoQgTO5cdalZv8uTKqsZ8XnEmZ5xfeg+CzOhEhcCoDaVcUftxzpd6h
	F40ou3oWHTrp7vJkONTmJikK20dy1gY=
Message-ID: <aa3c1df3fcc5e815a6d652211316f0f196a718cb.camel@suse.com>
Subject: Re: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to
 cpupool entries
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	 <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	 <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Date: Wed, 11 Nov 2020 16:21:49 +0100
In-Reply-To: <20201026091316.25680-12-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
	 <20201026091316.25680-12-jgross@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-1LrBIHAIuR7ko6p+dcz+"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-1LrBIHAIuR7ko6p+dcz+
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 10:13 +0100, Juergen Gross wrote:
> Add a "sched-gran" entry to the per-cpupool hypfs directories.
>
> For now make this entry read-only and let it contain one of the
> strings "cpu", "core" or "socket".
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-1LrBIHAIuR7ko6p+dcz+
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+sAY0ACgkQFkJ4iaW4
c+7LMRAAvwNbWlmIUNyzL3kBxZHL7K+YsrFiHqjcZYagKTXmdQ2m54D9+q56IIvE
nKajH1DPcaFbDAJFQse9gI6pNzvqSzO9ICJs+TKVzx05SbCEYRndcjMa8shMxtXp
pOhut+knIwWFX7XJo21g4Hp63X2r685d0yuBD0zf5zAVMwU4R8Lu5zlSCzZoO3/E
JP+l3axGw/EW5xfssOh8lpBbGHUzqjtOE0YqlaVmN9mB84dRM5MYOT2b65OaK8F5
UN+Ilw8LR0L2B+UXFWlxwpOhNgeZWfKu+zajpTJpvU0+039+70qXKEy7tj82d1zw
LrfcW7H74b5V+L5+ZVgiyb+593Id5yiaXLDwIH4SwoYjZl04eET5byH9qHJmvkwi
Yw01xogf19Nz8ZwaFQqY4QcVOCnUnd8dbU1CpxI1JErjI90/qIEzo8ycsRJLRqX6
FT/qmpn4ZxLJeQAVtPh0WHZbIiy96mXlACvcXPt+u4BMKbX5o3EEmURYAztGbfni
E1jfOnwgDnqA0GoySfLAZnlmkRfrzZAl1R1ln2qt9PvJ04r9i8aB5UmeceX5g82Y
efVTX7lXhW50jtWACHBc7QIQhawRV+e38sqo0MKgTziiNuv1r3VD7cxA1e2Zc45h
dd12wudci1y1hlEUXD3X5VQf6YG++1JjvTr+C85KpdTYeRaEX+Q=
=zMF9
-----END PGP SIGNATURE-----

--=-1LrBIHAIuR7ko6p+dcz+--



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:25:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:25:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25064.52632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrzw-0000cH-NM; Wed, 11 Nov 2020 15:25:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25064.52632; Wed, 11 Nov 2020 15:25:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcrzw-0000cA-KI; Wed, 11 Nov 2020 15:25:24 +0000
Received: by outflank-mailman (input) for mailman id 25064;
 Wed, 11 Nov 2020 15:25:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcrzv-0000c5-90
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:25:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf0e269d-3cfa-4302-bc78-2f25d6066d3d;
 Wed, 11 Nov 2020 15:25:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E8A4BABD1;
 Wed, 11 Nov 2020 15:25:20 +0000 (UTC)
X-Inumbo-ID: cf0e269d-3cfa-4302-bc78-2f25d6066d3d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605108321;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lu5eTPLiqZpzCAzhiYptqh5t6WfH2MvKfYhLWaBD1y8=;
	b=K+6sUYmExTm3uN3JONkxpVKnVkWckmzsUfysolKZYBmguQPBZdW4DjCx5l+FMtVNP76DPl
	hFTwp/Amq8e2VnfjA46p2otn5ySBOCfHZXuZ4hZgL+xPcQCOTrFPN+my4nJpiNLxK5IEd4
	GlKn9YuNuJxiEtEv8f2CS6lyCJbGJbc=
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Oleksandr Andrushchenko <andr2000@gmail.com>
Cc: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
 <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
 <03d6a75c-075d-6c57-1d66-2514ef1d0cb8@epam.com>
 <20201111150310.2wo33lr3f5xrd6sj@Air-de-Roger>
 <d94e37a3-d8b3-7d3d-0ba5-c7885a5ef512@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4619f6b5-6050-6beb-ced7-f7d6da9782d0@suse.com>
Date: Wed, 11 Nov 2020 16:25:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <d94e37a3-d8b3-7d3d-0ba5-c7885a5ef512@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 16:13, Oleksandr Andrushchenko wrote:
> On 11/11/20 5:03 PM, Roger Pau Monné wrote:
>> On Wed, Nov 11, 2020 at 02:38:47PM +0000, Oleksandr Andrushchenko wrote:
>>> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
>>>> On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>
>>>>> The original code depends on pciback to manage assignable device list.
>>>>> The functionality which is implemented by the pciback and the toolstack
>>>>> and which is relevant/missing/needed for ARM:
>>>>>
>>>>> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>>>>>      pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>>>>>      toolstack needs to know which PCI devices can be passed through it reads
>>>>>      that from the relevant sysfs entries of the pciback.
>>>>>
>>>>> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>>>>>      a PCI device it needs to be unbound from the relevant device driver and bound
>>>>>      to pciback (strictly speaking it is not required that the device is bound to
>>>>>      pciback, but pciback is again used as a database of the passed through PCI
>>>>>      devices, so we can re-bind the devices back to their original drivers when
>>>>>      guest domain shuts down)
>>>>>
>>>>> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>>>>>    - XEN_SYSCTL_pci_device_set_assigned
>>>> I don't see the point in having this sysfs, Xen already knows when a
>>>> device is assigned because the XEN_DOMCTL_assign_device hypercall is
>>>> used.
>>> But how does the toolstack know about that? When the toolstack needs to
>>>
>>> list/know all assigned devices it queries pciback's sysfs entries. So, with
>>>
>>> XEN_DOMCTL_assign_device we make that knowledge available to Xen,
>>>
>>> but there are no means for the toolstack to get it back.
>> But the toolstack will figure out whether a device is assigned or
>> not by using
>> XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?
>>
>> AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
>> assigned, but Xen should already know it because
>> XEN_DOMCTL_assign_device would have been used to assign the device?
> 
> Ah, I misunderstood you then. So, we only want to drop XEN_DOMCTL_assign_device
> 
> and keep the rest.

Was this a typo? Why would you want to drop XEN_DOMCTL_assign_device?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:28:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25070.52645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcs2l-0000mu-6S; Wed, 11 Nov 2020 15:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25070.52645; Wed, 11 Nov 2020 15:28:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcs2l-0000mn-30; Wed, 11 Nov 2020 15:28:19 +0000
Received: by outflank-mailman (input) for mailman id 25070;
 Wed, 11 Nov 2020 15:28:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pxcb=ER=epam.com=prvs=9584594409=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kcs2k-0000mi-CR
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:28:18 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb8d3f40-deb9-48dd-83eb-4314b7c1a9a9;
 Wed, 11 Nov 2020 15:28:17 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ABFMlq6016077; Wed, 11 Nov 2020 15:28:12 GMT
Received: from eur03-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur03lp2057.outbound.protection.outlook.com [104.47.9.57])
 by mx0b-0039f301.pphosted.com with ESMTP id 34rf800sva-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 11 Nov 2020 15:28:12 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6322.eurprd03.prod.outlook.com (2603:10a6:20b:15e::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Wed, 11 Nov
 2020 15:28:10 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Wed, 11 Nov 2020
 15:28:10 +0000
X-Inumbo-ID: eb8d3f40-deb9-48dd-83eb-4314b7c1a9a9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dDkZmIj9AYjOsJ4vO7ltv3EAZbjUUumH/IW0GuDhZ41zXDHs9ZFmU+zNFUT9zXh2hopdvw2dFKUJrXgC3fB1su4rwGlBpvj+f1Mrf4Pqr3NyL2beUCfjmTr47evZEkg5a8LOZFhw44I5x7MNyr4AI2FOYfZIBYWmQ9XRWDw1IQ+fDjr67D/EQgybea5bf6Im3kLQL5w6NGZnyIOI0chdQym0cGJcHHRM/PBwINO1CreJk1xorHTSM4Xq44xzvTTEIPc/WPCq6fYjNY4tV879LfTdcSWf1HGpBKKhkicDRiyo8RRyS6CalxDIMk+blv/cVhhi80IDX9qDZzgOYtW3Lw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o7yX6/AQXuuo70d0FtI7GkAPxZz5dsPz2NaD4N/5R3c=;
 b=Skw2zAp+D54lRSKW4fUyo/XzzVvTnIN6R9GwB+5NEnkBAMPCt4B6O+Qi3FKM0ObuUPQFmrgRwJEKe62HudfXEN3sV8ifSzeM2NkdqEkQd36p3jpYdJptrSDQd/yWHRouRjTFgX8bMH2aELZEfYTSf0FHXGZ56QME7txn0wAmxyx2JGORCefqGzjb2T2l76ovsBxOec9TFvu7Z9bbkehixCCPOVR0LM4ManNms7iZ3xT6Njw27jVcu6buobwDIkQfQWbVuFYE0UHL8TPP0Xt6/vU/lLQjM1n+cOirWv2PP7+eeqUY8YMnjqJNv1id/XVWrfcGvZFMRR04OQ8s3BwifQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o7yX6/AQXuuo70d0FtI7GkAPxZz5dsPz2NaD4N/5R3c=;
 b=rAgiOiAc3DrSQ56bPUpDQmowOqq207vEpRYwTLIHJrU+pgctkqLXLpA3ola91b2BShDEUV2LxLhHKYmsVWx/hcPseCritH7BS2N05DqfXyksonZxnZz/qBT0KSLBOmdD3pcoqlUS/8PDdFAWJs47VqHvhriGoDonDiVSqVKdqK8joo3D2aditY/7cXfkHtUfhqVax6sH4yIGh2+9Ar06V+PgRhoBgLT3fNNOw7Da6bmn+SXIDFmrgZGfDfN8vGODhxG6b61UPjsvyGs5RZGRCbSYL+1otkF3cATjzoEHxn0N7PGuqLyC9NZaMEb3MzFcsQFhAxt5rcvEOrkIxQKquQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Topic: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Index: 
 AQHWtpbuKiRgL1HiZkyNC2yoNcmnl6nC9s6AgAAMvACAAAbRAIAAAv4AgAADNACAAADJgA==
Date: Wed, 11 Nov 2020 15:28:09 +0000
Message-ID: <4cfff7a6-6d7b-a6be-b31a-98e4eccd3fb2@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
 <20201111135311.6jhskiss2qswm3zp@Air-de-Roger>
 <03d6a75c-075d-6c57-1d66-2514ef1d0cb8@epam.com>
 <20201111150310.2wo33lr3f5xrd6sj@Air-de-Roger>
 <d94e37a3-d8b3-7d3d-0ba5-c7885a5ef512@epam.com>
 <4619f6b5-6050-6beb-ced7-f7d6da9782d0@suse.com>
In-Reply-To: <4619f6b5-6050-6beb-ced7-f7d6da9782d0@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 52cd1c40-3747-4a1d-5f04-08d886566581
x-ms-traffictypediagnostic: AM0PR03MB6322:
x-microsoft-antispam-prvs: 
 <AM0PR03MB63225985A885ADDE836D215CE7E80@AM0PR03MB6322.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 ItxCTAD8GWMOUcpeS7AdKMv4VF01ckNhxfA3D1BtMLP3JhYs54n4yfRa07nBW555UhkmCmghSs6vA+A585xnE2BBrh3Bxi5fcdW511eRyEKZrmdrBQW7M/NQGBw0kCTSMwIX2dlG6vAjHnb6JjhqyFEFa304eNVWRqJug2wsMJAIbExDyhK3cD6DT8zYeZagkdOzNlum2fl+ZQSFv3udX58Uc7BqCsTiaqLJgcXzzQT2YVLR5Vhkm+6v9KoDeSPdBWbj0D30rYITDMz5SItFIqf7fJN+FSSkSpYOukI5eRu1FQ5MHV5QOXnkhKCHLyGzUTTjysjI12rrlvrzJ33lEA==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(396003)(136003)(39860400002)(376002)(186003)(4326008)(31686004)(7416002)(66556008)(316002)(6486002)(110136005)(71200400001)(54906003)(66476007)(91956017)(66946007)(64756008)(76116006)(66446008)(26005)(8936002)(2906002)(86362001)(478600001)(8676002)(31696002)(5660300002)(2616005)(53546011)(6506007)(36756003)(6512007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 ahJT4cC+Z3LOorbl78dKfyEnTZM9kQ/bsgd/qfdTrd71fvK+frW6UlpzToZThWk9bs/SrnQVEu7CmDdyNV6TQFwHox4C2HHY4DfD8XR4PapKKdTjgLFpGLvEXyG2j/RdsWdd4KhUPE8SB1MjUbMLyiLhKAPBJcDsnZPMDskseIlyejiS+lcD83vuVWjDKpWdbT5LrxAr81bfBJoxbWzo08vqLgtrEst9nArxJwsjCnhSYq2t+GQNJE9PtyAdq864M7MV/8A1awyu4KYQJaIglOf9ragpuynWPjAMJbs8Al//nlZRfBX8uyTPFPBKYNK/G/IiAWrek1QUG5t4oMuMvPRqXc0iuI67R/ghjrfJ/+Yl703uNG62o1ixe3CUq5BZfbsXtlstQwx8XODRxYX/HJbUEnnNjtujjpYNZMWfwUyqo8S8pYO0KY1nIalpTgKeDCO9bIy/EInSD6FGcjw6L97GyscGwTYKyr3uWgb4jOutu9Y/dTlomxKoJtNHXuP3kxPiKPCbHQH4wnQSOp/tDDZ6pVBRGwr+Vi02GluYuZKqHHz6/GdhS2nlU+VGCcGnAElC4wZdXxLCrWg6sxh6Emq9HXKKKf7JTA36XyCqH5r1I1flcVP+KNbf0muFU+ge4F8zS2LP4ZxfHEVhgyDuOw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <5EF5FFD1856418438F6E9D603B27AA21@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 52cd1c40-3747-4a1d-5f04-08d886566581
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Nov 2020 15:28:10.0007
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: hLEVZ+LSFwfSwRrOvTXEnMCEb4wIyydL/dYYudVqCe8WuW8NgGViE5tyIE+Z5k26gL4wNXwRMem5RyNmhjAxgE2cKlBTEpxSscqf4rkgqwYZr+7fNWL/sNlOiwQAmf5f
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6322
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-11_07:2020-11-10,2020-11-11 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 bulkscore=0
 impostorscore=0 spamscore=0 lowpriorityscore=0 mlxscore=0 phishscore=0
 suspectscore=0 mlxlogscore=890 adultscore=0 malwarescore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011110092

On 11/11/20 5:25 PM, Jan Beulich wrote:
> On 11.11.2020 16:13, Oleksandr Andrushchenko wrote:
>> On 11/11/20 5:03 PM, Roger Pau Monné wrote:
>>> On Wed, Nov 11, 2020 at 02:38:47PM +0000, Oleksandr Andrushchenko wrote:
>>>> On 11/11/20 3:53 PM, Roger Pau Monné wrote:
>>>>> On Mon, Nov 09, 2020 at 02:50:23PM +0200, Oleksandr Andrushchenko wrote:
>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>
>>>>>> The original code depends on pciback to manage assignable device list.
>>>>>> The functionality which is implemented by the pciback and the toolstack
>>>>>> and which is relevant/missing/needed for ARM:
>>>>>>
>>>>>> 1. pciback is used as a database for assignable PCI devices, e.g. xl
>>>>>>      pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>>>>>>      toolstack needs to know which PCI devices can be passed through it reads
>>>>>>      that from the relevant sysfs entries of the pciback.
>>>>>>
>>>>>> 2. pciback is used to hold the unbound PCI devices, e.g. when passing through
>>>>>>      a PCI device it needs to be unbound from the relevant device driver and bound
>>>>>>      to pciback (strictly speaking it is not required that the device is bound to
>>>>>>      pciback, but pciback is again used as a database of the passed through PCI
>>>>>>      devices, so we can re-bind the devices back to their original drivers when
>>>>>>      guest domain shuts down)
>>>>>>
>>>>>> 1. As ARM doesn't use pciback implement the above with additional sysctls:
>>>>>>     - XEN_SYSCTL_pci_device_set_assigned
>>>>> I don't see the point in having this sysfs, Xen already knows when a
>>>>> device is assigned because the XEN_DOMCTL_assign_device hypercall is
>>>>> used.
>>>> But how does the toolstack know about that? When the toolstack needs to
>>>>
>>>> list/know all assigned devices it queries pciback's sysfs entries. So, with
>>>>
>>>> XEN_DOMCTL_assign_device we make that knowledge available to Xen,
>>>>
>>>> but there are no means for the toolstack to get it back.
>>> But the toolstack will figure out whether a device is assigned or
>>> not by using
>>> XEN_SYSCTL_pci_device_get_assigned/XEN_SYSCTL_pci_device_enum_assigned?
>>>
>>> AFAICT XEN_SYSCTL_pci_device_set_assigned tells Xen a device has been
>>> assigned, but Xen should already know it because
>>> XEN_DOMCTL_assign_device would have been used to assign the device?
>> Ah, I misunderstood you then. So, we only want to drop XEN_DOMCTL_assign_device
>>
>> and keep the rest.
> Was this a typo? Why would you want to drop XEN_DOMCTL_assign_device?

Indeed it was: s/XEN_DOMCTL_assign_device/XEN_SYSCTL_pci_device_set_assigned

Sorry for confusion

>
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:37:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25079.52657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsBK-0001mN-6J; Wed, 11 Nov 2020 15:37:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25079.52657; Wed, 11 Nov 2020 15:37:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsBK-0001mG-1A; Wed, 11 Nov 2020 15:37:10 +0000
Received: by outflank-mailman (input) for mailman id 25079;
 Wed, 11 Nov 2020 15:37:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcsBI-0001mB-R5
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:37:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c56a35c8-3d31-410c-b3a9-fd46b13f719f;
 Wed, 11 Nov 2020 15:37:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2C438AC75;
 Wed, 11 Nov 2020 15:37:06 +0000 (UTC)
X-Inumbo-ID: c56a35c8-3d31-410c-b3a9-fd46b13f719f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605109026;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2huT31oFavBxTtGDAAuTREwbyBE6/Nk6AUYAiJly5aw=;
	b=rTADTqkya1JyXiVEGrw1dLL7Hyj84/3Q4DkHDdF8tv0alYlZENi033yFA8VL6KJAL2RJxb
	plHpsFjNqlW59OU/HrvEn+6D4uFmN2bAiKxMq2MjZarOdBFaE6g09WIbBssfwh806azgk2
	MmnKtXtSKLfFip1l46Cy3svIzgrc/d0=
Subject: Re: [PATCH v4 1/3] xen/x86: add nmi continuation framework
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <13e71151-e395-4da9-ce1e-60f02247c6b7@suse.com>
Date: Wed, 11 Nov 2020 16:37:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109095021.9897-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 10:50, Juergen Gross wrote:
> Actions in NMI context are rather limited as e.g. locking is rather
> fragile.
> 
> Add a framework to continue processing in normal interrupt context
> after leaving NMI processing.
> 
> This is done by a high priority interrupt vector triggered via a
> self IPI from NMI context, which will then call the continuation
> function specified during NMI handling.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one further adjustment request:

> @@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
>      nmi_callback = dummy_nmi_callback;
>  }
>  
> +bool nmi_check_continuation(void)
> +{
> +    bool ret = false;
> +
> +    return ret;
> +}
> +
> +void trigger_nmi_continuation(void)
> +{
> +    /*
> +     * Issue a self-IPI. Handling is done in spurious_interrupt().
> +     * NMI could have happened in IPI sequence, so wait for ICR being idle
> +     * again before leaving NMI handler.
> +     */
> +    send_IPI_self(SPURIOUS_APIC_VECTOR);
> +    apic_wait_icr_idle();
> +}

This additionally relies on send_IPI_self_legacy() calling
send_IPI_shortcut(), rather than e.g. resolving the local CPU
number to a destination ID. I think this wants saying maybe
here, but more importantly in that function.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:42:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:42:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25088.52669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsFz-0002jD-O7; Wed, 11 Nov 2020 15:41:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25088.52669; Wed, 11 Nov 2020 15:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsFz-0002j6-L2; Wed, 11 Nov 2020 15:41:59 +0000
Received: by outflank-mailman (input) for mailman id 25088;
 Wed, 11 Nov 2020 15:41:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcsFy-0002ia-37
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:41:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df4fbc96-11cd-45ee-ba55-dd01125eceb2;
 Wed, 11 Nov 2020 15:41:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E81ECAD2D;
 Wed, 11 Nov 2020 15:41:55 +0000 (UTC)
X-Inumbo-ID: df4fbc96-11cd-45ee-ba55-dd01125eceb2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605109316;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=WTdcv23mN42Sf8leYa9YjTx+k3M5XiMjv661+oaQwrQ=;
	b=I5vPuXuNxmHc4k9wdfKRiFjcHlAaydUl17z5VCb4Vrkz6Tl08jW4u3MKDGh9GvFV+Xji5Z
	W38Ky8xvRPw9LcFeLKaYKOimb3hMXHc75Xt/K1VXga2Zp5duj9Q17QzN5Q0DEsXKY8m24V
	jAH1APGzTAVSQXUibJjYQKGg3Zs9Y2Q=
Subject: Re: [PATCH v4 1/3] xen/x86: add nmi continuation framework
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-2-jgross@suse.com>
 <13e71151-e395-4da9-ce1e-60f02247c6b7@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f943ab23-6194-edbb-591d-8e9912f2c77c@suse.com>
Date: Wed, 11 Nov 2020 16:41:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <13e71151-e395-4da9-ce1e-60f02247c6b7@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="QV5hD0Of1wLymUj9aQGJxeFqwbYkFQnAd"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--QV5hD0Of1wLymUj9aQGJxeFqwbYkFQnAd
Content-Type: multipart/mixed; boundary="IVj8jKFGIUUAl3xT8xtBm1IxmH5Y44CQ1";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <f943ab23-6194-edbb-591d-8e9912f2c77c@suse.com>
Subject: Re: [PATCH v4 1/3] xen/x86: add nmi continuation framework
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-2-jgross@suse.com>
 <13e71151-e395-4da9-ce1e-60f02247c6b7@suse.com>
In-Reply-To: <13e71151-e395-4da9-ce1e-60f02247c6b7@suse.com>

--IVj8jKFGIUUAl3xT8xtBm1IxmH5Y44CQ1
Content-Type: multipart/mixed;
 boundary="------------214B6131AA4A084F972617E0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------214B6131AA4A084F972617E0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 16:37, Jan Beulich wrote:
> On 09.11.2020 10:50, Juergen Gross wrote:
>> Actions in NMI context are rather limited as e.g. locking is rather
>> fragile.
>>
>> Add a framework to continue processing in normal interrupt context
>> after leaving NMI processing.
>>
>> This is done by a high priority interrupt vector triggered via a
>> self IPI from NMI context, which will then call the continuation
>> function specified during NMI handling.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one further adjustment request:
>
>> @@ -1799,6 +1800,24 @@ void unset_nmi_callback(void)
>>       nmi_callback = dummy_nmi_callback;
>>   }
>>
>> +bool nmi_check_continuation(void)
>> +{
>> +    bool ret = false;
>> +
>> +    return ret;
>> +}
>> +
>> +void trigger_nmi_continuation(void)
>> +{
>> +    /*
>> +     * Issue a self-IPI. Handling is done in spurious_interrupt().
>> +     * NMI could have happened in IPI sequence, so wait for ICR being idle
>> +     * again before leaving NMI handler.
>> +     */
>> +    send_IPI_self(SPURIOUS_APIC_VECTOR);
>> +    apic_wait_icr_idle();
>> +}
>
> This additionally relies on send_IPI_self_legacy() calling
> send_IPI_shortcut(), rather than e.g. resolving the local CPU
> number to a destination ID. I think this wants saying maybe
> here, but more importantly in that function.

Okay.


Juergen


--------------214B6131AA4A084F972617E0
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------214B6131AA4A084F972617E0--

--IVj8jKFGIUUAl3xT8xtBm1IxmH5Y44CQ1--

--QV5hD0Of1wLymUj9aQGJxeFqwbYkFQnAd
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+sBkMFAwAAAAAACgkQsN6d1ii/Ey8R
KAf+MBlHqLO/qrWVySBdnpxhslkG5Hx4X6E+QhBPjXkDnzkbv012fvD0fNu1LU5PIZldFekSd/PU
Vks1RP5bkdgCwsJDMCk77I1IGOEZd+Bd40vOuj0YxqSfNb0VMvdfT+hWAw/lF20XJ1u8FjFQkpP1
GfuQngMy+PxV+w9PoqMnpXHhkz2G6hs2OguRMxFYjy4RCBMn1AXrapUmQdMerbUpW/3OPyidsKQV
OzHbPeWAgfvbOkrp0apOu4M/B/y4pvZkyqM7fjyp10zWpTjvBNTKOpBrC1RGxNrHL9DNl6xZAwvl
14Iw3E5RwoCoRXm3tGtwVJZVhXunWFHHuiSX7+TyiA==
=vXXe
-----END PGP SIGNATURE-----

--QV5hD0Of1wLymUj9aQGJxeFqwbYkFQnAd--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:45:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25095.52680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsJk-0002vB-Cl; Wed, 11 Nov 2020 15:45:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25095.52680; Wed, 11 Nov 2020 15:45:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsJk-0002v4-9p; Wed, 11 Nov 2020 15:45:52 +0000
Received: by outflank-mailman (input) for mailman id 25095;
 Wed, 11 Nov 2020 15:45:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cwX6=ER=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kcsJj-0002uz-2M
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:45:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id afb35e10-add8-453a-a9e0-eaf69e57f0d9;
 Wed, 11 Nov 2020 15:45:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 382BAABD1;
 Wed, 11 Nov 2020 15:45:49 +0000 (UTC)
X-Inumbo-ID: afb35e10-add8-453a-a9e0-eaf69e57f0d9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605109549;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A0dqzBW+IYP4V9/iTL5/5jfwxtPBI0peAops5iyFIYQ=;
	b=MS1WrbXiAZsJDnFXJuqY7+LBxsoGXNqAX0IcqFZB2px1hEMKBmNzh+xpBayJXp1qxJDhju
	JI0F6izCHBsr+OThgbc7+k1F940r0JT63TRibGRO+DTitfnbooXr7ghLNqKSzUI1NJh0We
	seexAWMGt1nkzcuNmNEbpzX8S5ESBKo=
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
Date: Wed, 11 Nov 2020 16:45:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109095021.9897-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 10:50, Juergen Gross wrote:
> @@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
>  		model->free_msr(v);
>  }
>  
> +bool nmi_oprofile_send_virq(void)
> +{
> +	struct vcpu *v = this_cpu(nmi_cont_vcpu);
> +
> +	if ( v )
> +		send_guest_vcpu_virq(v, VIRQ_XENOPROF);
> +
> +	this_cpu(nmi_cont_vcpu) = NULL;

What if, by the time we make it here, a 2nd NMI has arrived? I
agree the next overflow interrupt shouldn't arrive this
quickly, but I also think you want to zap the per-CPU variable
first here, and ...

> +
> +	return v;
> +}
> +
>  static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>  {
>  	int xen_mode, ovf;
>  
>  	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>  	xen_mode = ring_0(regs);
> -	if ( ovf && is_active(current->domain) && !xen_mode )
> -		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
> +	if ( ovf && is_active(current->domain) && !xen_mode ) {
> +		this_cpu(nmi_cont_vcpu) = current;

... avoid overwriting any non-NULL value here. That's then of
course still not closing the window, but has (imo) overall
better behavior.

Also, style-wise, going through the file it looks to be mainly
Linux style, so may I suggest your additions / changes to be
done that way, rather than extending use of this funny mixed
style?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 15:49:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 15:49:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25102.52693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsMk-00036I-SI; Wed, 11 Nov 2020 15:48:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25102.52693; Wed, 11 Nov 2020 15:48:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsMk-00036B-OW; Wed, 11 Nov 2020 15:48:58 +0000
Received: by outflank-mailman (input) for mailman id 25102;
 Wed, 11 Nov 2020 15:48:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GpG1=ER=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kcsMj-000365-0V
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 15:48:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 091058a1-ae73-46f4-b1d5-bd7e33d42aad;
 Wed, 11 Nov 2020 15:48:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C0D3AC75;
 Wed, 11 Nov 2020 15:48:55 +0000 (UTC)
X-Inumbo-ID: 091058a1-ae73-46f4-b1d5-bd7e33d42aad
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605109735;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jz5MMH+/kNLXTkt7JMK/JwIUuoIbxXH/8fWLmC0GjLc=;
	b=cvFGggEw//hx0ac+SNz5leTJBv3tzfAVujKVMc04LFR6W4iktBS57ju7sVQgHLzxThVI35
	TrcZQ4nSySsWSBtPae/Q9gMdrmWA5PotH5gGv0rd+74zhZi+WvGXfqv0eEzfZAK7+ErMdL
	FLDNUir0pBYU6WsnssTxgtiHnhpAH0k=
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
Date: Wed, 11 Nov 2020 16:48:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="E8OuMYTm4FgzkMITDBAncpbr4MXDYchTP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--E8OuMYTm4FgzkMITDBAncpbr4MXDYchTP
Content-Type: multipart/mixed; boundary="GvgwHDRtHjTpUpUIg6PyYNzaUrqi9jFrc";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
In-Reply-To: <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>

--GvgwHDRtHjTpUpUIg6PyYNzaUrqi9jFrc
Content-Type: multipart/mixed;
 boundary="------------1E54A406AECAA0E3A49B8A0D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1E54A406AECAA0E3A49B8A0D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 11.11.20 16:45, Jan Beulich wrote:
> On 09.11.2020 10:50, Juergen Gross wrote:
>> @@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
>>   		model->free_msr(v);
>>   }
>>
>> +bool nmi_oprofile_send_virq(void)
>> +{
>> +	struct vcpu *v = this_cpu(nmi_cont_vcpu);
>> +
>> +	if ( v )
>> +		send_guest_vcpu_virq(v, VIRQ_XENOPROF);
>> +
>> +	this_cpu(nmi_cont_vcpu) = NULL;
>
> What if, by the time we make it here, a 2nd NMI has arrived? I
> agree the next overflow interrupt shouldn't arrive this
> quickly, but I also think you want to zap the per-CPU variable
> first here, and ...

How could that happen? This function is activated only from NMI
context in case the NMI happened in guest mode. And it will be
executed with higher priority than any guest, so there is a zero
chance another NMI in guest mode can happen in between.

I can add a comment in this regard if you want.

>
>> +
>> +	return v;
>> +}
>> +
>>   static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>   {
>>   	int xen_mode, ovf;
>>
>>   	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>   	xen_mode = ring_0(regs);
>> -	if ( ovf && is_active(current->domain) && !xen_mode )
>> -		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
>> +	if ( ovf && is_active(current->domain) && !xen_mode ) {
>> +		this_cpu(nmi_cont_vcpu) = current;
>
> ... avoid overwriting any non-NULL value here. That's then of
> course still not closing the window, but has (imo) overall
> better behavior.
>
> Also, style-wise, going through the file it looks to be mainly
> Linux style, so may I suggest your additions / changes to be
> done that way, rather than extending use of this funny mixed
> style?

Works for me.


Juergen

--------------1E54A406AECAA0E3A49B8A0D--

--GvgwHDRtHjTpUpUIg6PyYNzaUrqi9jFrc--

--E8OuMYTm4FgzkMITDBAncpbr4MXDYchTP
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+sB+YFAwAAAAAACgkQsN6d1ii/Ey/J
BQf/Vrk8vFJq2wLu3DntrmxqTtJLT9PILw3e2pz3Ms63vtkURlHQJ9vTRdCcdY5GM6sxWnpcGOsF
Dcqisej3TJMp9/eKzpXAd3gIIm56mwXKNzm44yVeH+iGQomZhxYw5edRHdYXDIu19fPR288UA3t7
Sr+fJoTw6uErJjU/Fmn6aVO4euLHziJ2jWfHCnRBYWvgkNC+WslhDvVvApn/W59BR6JwRoTTzh4D
qOUGmIXoqArXrwR1iq01D/e6oKVc5gRF+w/FcZNCTVrbYEE3cPul2vEfhs/LJhRGzYaXJAaCELPQ
l9+fix/ct2zIwgbBU1lBjJsNEY5e9jdsJDGJbWCkuw==
=8qug
-----END PGP SIGNATURE-----

--E8OuMYTm4FgzkMITDBAncpbr4MXDYchTP--


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 16:28:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 16:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25119.52705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsyh-0007Q9-VO; Wed, 11 Nov 2020 16:28:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25119.52705; Wed, 11 Nov 2020 16:28:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcsyh-0007Q2-SH; Wed, 11 Nov 2020 16:28:11 +0000
Received: by outflank-mailman (input) for mailman id 25119;
 Wed, 11 Nov 2020 16:28:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uOJX=ER=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kcsyg-0007P8-9A
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 16:28:10 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb9d95d7-3253-4738-83cf-4b2ad341fa15;
 Wed, 11 Nov 2020 16:28:08 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id 23so3099494wrc.8
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 08:28:08 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id e6sm3186276wme.27.2020.11.11.08.28.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 08:28:06 -0800 (PST)
X-Inumbo-ID: eb9d95d7-3253-4738-83cf-4b2ad341fa15
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=N5p7Cgdfa5RSEogVv2IhlZPMjrTH7i/2ipSIxDfwHno=;
        b=JZGmWQ/Dag44YRrUdwXAX7erRcU4kUaev3uuG3M1JnxOhlwPwqxc4b0tq4qeVe5ujw
         l1588ep7zMuSsMSGDacFo4Pq8wQ88czblqdcjMIEXyW2wDB4gqhoPYk6ZfNr5El4J6Zm
         K5qgEeMkxrBwhtiUj9j3+lzvDgdlzYZUAY0oMGtCBNZeNrlgfVEwIhrjUoT9pRW6w8mD
         It0AFaJDHEGQT90cPZudsyEH3YwWnATRzMvn4FaKMZiuxcSFXTzoqNJiHr76ukLKn1fy
         K2ClTCg3xWz34McHL/4mUENVsPf9+1TS7RjgLY4/tXXQlZ+BBybPGd+IzeGIoHJRy3K7
         oipA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=N5p7Cgdfa5RSEogVv2IhlZPMjrTH7i/2ipSIxDfwHno=;
        b=aIPmHNprc0iGfwGcHL9FqHjFZF9YhqMOi+UDodyalCgy7XJHqSJwQrgXvjkDBEoNTA
         J/gEya+E2qa9hF4olEfw4hw5DPNd/nR3y31A3NDk7nN3JFNVGZwiaihIfGp/g3M31l+P
         XwUavY8nlKf2KYoLawh9IxJaK9IWc4HeWzjn+owKxKBpI+shAFB3RsEr+1efP+FuzayF
         2LZUJzh7wXYDFDcJuPf0V4r7KRw4f6MtIWZYG1S8pVhI7jjAp2wnkcxbKQ27v/5ikoGy
         +wZwthPn4H+BD1z1/CwpDs/p9IZidsiUF2IxmIy2OC/4KirKP8ljqvWgY9aMCgZXQ6pt
         tHJw==
X-Gm-Message-State: AOAM530BieWZzWDYFGLNzh0cg2e4sEYoZceBlnYtnfr4NgkUP56c2v7B
	Ade8wodNJsW/yn8T+GTOjLk=
X-Google-Smtp-Source: ABdhPJxGAYgnU/s3RV2jUuLSakYYGxqOOzQJb9zHZpDeCBxXFmLd/Y5IOyN0C5+41N8rmWthj75bzg==
X-Received: by 2002:a5d:4f07:: with SMTP id c7mr32225833wru.296.1605112087504;
        Wed, 11 Nov 2020 08:28:07 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Oleksandr'" <olekstysh@gmail.com>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-18-git-send-email-olekstysh@gmail.com> <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org> <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com> <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com> <ed9defbe-b6bf-bd1f-cd88-64d1b0e135c1@gmail.com> <0ab03a33-5056-0de8-e5f7-b54a661a09c5@suse.com>
In-Reply-To: <0ab03a33-5056-0de8-e5f7-b54a661a09c5@suse.com>
Subject: RE: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Wed, 11 Nov 2020 16:28:05 -0000
Message-ID: <003401d6b847$a2d9f470$e88ddd50$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAGr5WYaAb7RyogBgA8OqwGUDZBmArIItvgB82Dp96pCHWKw
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 11 November 2020 13:28
> To: Oleksandr <olekstysh@gmail.com>
> Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Stefano Stabellini'
> <sstabellini@kernel.org>; 'Julien Grall' <julien@xen.org>; 'Volodymyr Babchuk'
> <Volodymyr_Babchuk@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
> <george.dunlap@citrix.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Wei Liu' <wl@xen.org>; 'Julien Grall'
> <julien.grall@arm.com>; paul@xen.org; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
>
> On 11.11.2020 09:41, Oleksandr wrote:
> >
> > On 11.11.20 10:08, Jan Beulich wrote:
> >
> > Hi Jan
> >
> >> On 10.11.2020 21:53, Oleksandr wrote:
> >>> On 20.10.20 13:51, Paul Durrant wrote:
> >>>
> >>> Hi Paul.
> >>>
> >>> Sorry for the late response.
> >>>
> >>>>> -----Original Message-----
> >>>>> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> >>>>> Sent: 15 October 2020 17:44
> >>>>> To: xen-devel@lists.xenproject.org
> >>>>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano Stabellini
> <sstabellini@kernel.org>;
> >>>>> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> >>>>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> >>>>> <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Paul Durrant
> >>>>> <paul@xen.org>; Julien Grall <julien.grall@arm.com>
> >>>>> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> >>>>>
> >>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >>>>>
> >>>>> This patch introduces a helper the main purpose of which is to check
> >>>>> if a domain is using IOREQ server(s).
> >>>>>
> >>>>> On Arm the current benefit is to avoid calling handle_io_completion()
> >>>>> (which implies iterating over all possible IOREQ servers anyway)
> >>>>> on every return in leave_hypervisor_to_guest() if there is no active
> >>>>> servers for the particular domain.
> >>>>> Also this helper will be used by one of the subsequent patches on Arm.
> >>>>>
> >>>>> This involves adding an extra per-domain variable to store the count
> >>>>> of servers in use.
> >>>>>
> >>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >>>>> CC: Julien Grall <julien.grall@arm.com>
> >>>>>
> >>>>> ---
> >>>>> Please note, this is a split/cleanup/hardening of Julien's PoC:
> >>>>> "Add support for Guest IO forwarding to a device emulator"
> >>>>>
> >>>>> Changes RFC -> V1:
> >>>>>      - new patch
> >>>>>
> >>>>> Changes V1 -> V2:
> >>>>>      - update patch description
> >>>>>      - guard helper with CONFIG_IOREQ_SERVER
> >>>>>      - remove "hvm" prefix
> >>>>>      - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
> >>>>>      - put suitable ASSERT()s
> >>>>>      - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
> >>>>>      - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
> >>>>> ---
> >>>>>    xen/arch/arm/traps.c    | 15 +++++++++------
> >>>>>    xen/common/ioreq.c      |  7 ++++++-
> >>>>>    xen/include/xen/ioreq.h | 14 ++++++++++++++
> >>>>>    xen/include/xen/sched.h |  1 +
> >>>>>    4 files changed, 30 insertions(+), 7 deletions(-)
> >>>>>
> >>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> >>>>> index 507c095..a8f5fdf 100644
> >>>>> --- a/xen/arch/arm/traps.c
> >>>>> +++ b/xen/arch/arm/traps.c
> >>>>> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
> >>>>>        struct vcpu *v = current;
> >>>>>
> >>>>>    #ifdef CONFIG_IOREQ_SERVER
> >>>>> -    bool handled;
> >>>>> +    if ( domain_has_ioreq_server(v->domain) )
> >>>>> +    {
> >>>>> +        bool handled;
> >>>>>
> >>>>> -    local_irq_enable();
> >>>>> -    handled = handle_io_completion(v);
> >>>>> -    local_irq_disable();
> >>>>> +        local_irq_enable();
> >>>>> +        handled = handle_io_completion(v);
> >>>>> +        local_irq_disable();
> >>>>>
> >>>>> -    if ( !handled )
> >>>>> -        return true;
> >>>>> +        if ( !handled )
> >>>>> +            return true;
> >>>>> +    }
> >>>>>    #endif
> >>>>>
> >>>>>        if ( likely(!v->arch.need_flush_to_ram) )
> >>>>> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> >>>>> index bcd4961..a72bc0e 100644
> >>>>> --- a/xen/common/ioreq.c
> >>>>> +++ b/xen/common/ioreq.c
> >>>>> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
> >>>>>                                 struct ioreq_server *s)
> >>>>>    {
> >>>>>        ASSERT(id < MAX_NR_IOREQ_SERVERS);
> >>>>> -    ASSERT(!s || !d->ioreq_server.server[id]);
> >>>>> +    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
> >>>> That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?
> >>> ok, looks like it will work.
> >>>
> >>>
> >>>>     Paul
> >>>>
> >>>>>        d->ioreq_server.server[id] = s;
> >>>>> +
> >>>>> +    if ( s )
> >>>>> +        d->ioreq_server.nr_servers++;
> >>>>> +    else
> >>>>> +        d->ioreq_server.nr_servers--;
> >>>>>    }
> >>>>>
> >>>>>    #define GET_IOREQ_SERVER(d, id) \
> >>>>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> >>>>> index 7b03ab5..0679fef 100644
> >>>>> --- a/xen/include/xen/ioreq.h
> >>>>> +++ b/xen/include/xen/ioreq.h
> >>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
> >>>>>        uint8_t                bufioreq_handling;
> >>>>>    };
> >>>>>
> >>>>> +#ifdef CONFIG_IOREQ_SERVER
> >>>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
> >>>>> +{
> >>>>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
> >>>>> +
> >>>> This seems like an odd place to put such an assertion.
> >>> I might miss something or interpreted incorrectly but these asserts are
> >>> the result of how I understood the review comment on previous version [1].
> >>>
> >>> I will copy a comment here for the convenience:
> >>> "This is safe only when d == current->domain and it's not paused,
> >>> or when they're distinct and d is paused. Otherwise the result is
> >>> stale before the caller can inspect it. This wants documenting by
> >>> at least a comment, but perhaps better by suitable ASSERT()s."
> >> The way his reply was worded, I think Paul was wondering about the
> >> place where you put the assertion, not what you actually assert.
> >
> > Shall I put the assertion at the call sites of this helper instead?
>
> Since Paul raised the question, I expect this is a question to him
> rather than me?

If it is indeed a question for me then yes, put the assertion where it is clear why it is needed. domain_has_ioreq_server() is essentially a trivial accessor function; it's not the appropriate place.

  Paul

>
> Jan



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 17:32:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 17:32:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25133.52723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kctyG-0005NM-JW; Wed, 11 Nov 2020 17:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25133.52723; Wed, 11 Nov 2020 17:31:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kctyG-0005NF-Fm; Wed, 11 Nov 2020 17:31:48 +0000
Received: by outflank-mailman (input) for mailman id 25133;
 Wed, 11 Nov 2020 17:31:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kctyE-0005NA-Vw
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 17:31:47 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96299019-09b2-466d-9a3c-1ed40d640f09;
 Wed, 11 Nov 2020 17:31:45 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id 11so2998969ljf.2
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 09:31:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id r12sm288509lfc.80.2020.11.11.09.31.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Nov 2020 09:31:43 -0800 (PST)
X-Inumbo-ID: 96299019-09b2-466d-9a3c-1ed40d640f09
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=9lJcUz8QfXwF8vJaUpaGnd9A5ljqxkWw2E3euLf1b2E=;
        b=faVKNxWR3DZbJ6ORB0FoEGO2o7fyd6xSnWLASggYQdI90I3vCsPLm2MqySvFEfk/JE
         sY9AOJGdbaqlnTNNRaUOaR3NM0ZRbh/4CJndsSY/Q2NIyXys4ci4AjiTEyXrQDeZaT26
         X1gZlJ9gLkyn7/BNWvPmGtdjFBskMXnKrxxnJjIekXkXdj2lLswSP9MP96wM8ha5Ui8a
         wmzpPj0A14TRgrNAMnLHzUEVFDIKIfjNv2NIgmyuzgZ3mGh5mRpu5KH7VeG/gMCZm2Gm
         4Fxb9TuX08u7G6UXUL2ZT+1ziMHU83tz5ryy2QfK0bP9OTlNA/MTHvHQSSAfT1T4QBMj
         G9KA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=9lJcUz8QfXwF8vJaUpaGnd9A5ljqxkWw2E3euLf1b2E=;
        b=SHNy/u5bouKKucPJwotmMq8oyJOjSIxl5nr/V1nj7hlDCa32Oh/KqF1gzkUgpETsr5
         4xyRP8YGzM/GZ/upFfct4VYL5bXR6ppcT6wBUojnFz1KW0knOCYSowRM8luKuKIZehEp
         Yr7HKQ8eA7QLMrqh9hSOFMTdR1o6IqjLXL3rq6Wyxe73238A20JncYnCXOVSFccYi3Kf
         05xfQSpb2eAQYIDNgNT0Sx8NM5OsaM9DiCH25wwRiboUax/NVAZx13Va8t2GCcqRFdQy
         qN3vhpFLIOFd5nMs6pUwbLvDidGhaO8ywVYquNoEVzS03OHKs5WduLkQRXhK9de4yfEJ
         F2mQ==
X-Gm-Message-State: AOAM531n5f58urQW/qWCjvr+zc6n7QeUq9fNCpDMs5irZV3yHsyMO/66
	7w/EVCdUUOt8kN5Kx40Uwm65aDUw83LDzw==
X-Google-Smtp-Source: ABdhPJwqsr5xF26n/tPUD6Z4pRNeNivTmqj6ig/i4bu+22DhlBd1aMreC2lH1lbgk57ruKYkPvxlGQ==
X-Received: by 2002:a2e:8e3b:: with SMTP id r27mr8614266ljk.466.1605115904037;
        Wed, 11 Nov 2020 09:31:44 -0800 (PST)
Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: Jan Beulich <jbeulich@suse.com>
Cc: 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, paul@xen.org,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
 <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
 <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
 <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com>
 <ed9defbe-b6bf-bd1f-cd88-64d1b0e135c1@gmail.com>
 <0ab03a33-5056-0de8-e5f7-b54a661a09c5@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <fcf79de6-ce12-24db-0a32-5ce1f657d699@gmail.com>
Date: Wed, 11 Nov 2020 19:31:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0ab03a33-5056-0de8-e5f7-b54a661a09c5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11.11.20 15:27, Jan Beulich wrote:

Hi Jan.

>
>>>>>>     }
>>>>>>
>>>>>>     #define GET_IOREQ_SERVER(d, id) \
>>>>>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>>>>>> index 7b03ab5..0679fef 100644
>>>>>> --- a/xen/include/xen/ioreq.h
>>>>>> +++ b/xen/include/xen/ioreq.h
>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>>         uint8_t                bufioreq_handling;
>>>>>>     };
>>>>>>
>>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>>>>> +{
>>>>>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
>>>>>> +
>>>>> This seems like an odd place to put such an assertion.
>>>> I might miss something or interpreted incorrectly but these asserts are
>>>> the result of how I understood the review comment on previous version [1].
>>>>
>>>> I will copy a comment here for the convenience:
>>>> "This is safe only when d == current->domain and it's not paused,
>>>> or when they're distinct and d is paused. Otherwise the result is
>>>> stale before the caller can inspect it. This wants documenting by
>>>> at least a comment, but perhaps better by suitable ASSERT()s."
>>> The way his reply was worded, I think Paul was wondering about the
>>> place where you put the assertion, not what you actually assert.
>> Shall I put the assertion at the call sites of this helper instead?
> Since Paul raised the question, I expect this is a question to him
> rather than me?
Yes, it was primarily a question to Paul, but I wanted to hear your 
opinion as well. Sorry for the confusion.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 17:33:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 17:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25139.52735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kctzy-0005XN-4U; Wed, 11 Nov 2020 17:33:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25139.52735; Wed, 11 Nov 2020 17:33:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kctzx-0005XG-W6; Wed, 11 Nov 2020 17:33:33 +0000
Received: by outflank-mailman (input) for mailman id 25139;
 Wed, 11 Nov 2020 17:33:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kctzw-0005X9-J8
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 17:33:32 +0000
Received: from mail-lj1-x230.google.com (unknown [2a00:1450:4864:20::230])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d00b3402-5166-4e33-9c8f-29c5817c8b6d;
 Wed, 11 Nov 2020 17:33:31 +0000 (UTC)
Received: by mail-lj1-x230.google.com with SMTP id 11so3005603ljf.2
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 09:33:31 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id j11sm294773ljh.26.2020.11.11.09.33.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Nov 2020 09:33:29 -0800 (PST)
X-Inumbo-ID: d00b3402-5166-4e33-9c8f-29c5817c8b6d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=Nd5jEW0QzDuwykLHNKZ4lrMPbVGUqkgVK6JGfBtXzws=;
        b=aKpKP8sfVMbpyRkFgGYsZdrh6QeyDHLKPh/GDqvzgsb0jMUzU50zZNsQBSMh5qmPJO
         grY9SCLhn4Qjyrep0GPZG2qvzzmKRIzNQzh+JPHXv+4WWogG69XqqYDGowSOiQCDNi6a
         yJIl75S9cA901gtrWZU9Csy9Lst2emRvboN2q+vK0gz+pCumRzL4WNtCMFMIJ1hXf5oc
         +1rFJy4zo7PbSWzmWRYQ+e/c6L1myM8Fptv3pyNhPHQbsVx7vtrZ/Vv+qieY9TMighOW
         ZJMR2ZYFtGiKmHWMmad8DS5humT+yMxe2nih0dHB8ZnAA4KN+uEICu2/Ozybwt4CbhHn
         jyXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=Nd5jEW0QzDuwykLHNKZ4lrMPbVGUqkgVK6JGfBtXzws=;
        b=PH/xH2VUvTqraDLzEEuukpkuRirpHi8k2OwRZ0U0UJnNOJfAWRHMJD8wSCSQhcZxtv
         n3cyGNmZkUEvWWdEnVPmA5eFBq/AQspwMhKsGukix0kwRO4XXyIwou9l9l9e+1IA49Un
         falY8QaZYUo6p2npkw+iifhagn1gw6qNwQuKFkXeKniSWP2xZw8TSmcMzSsi28mNd7x3
         S+Twae4J0o3s5ZxAb1ANvjFc5DE7qTwJprJxUCbFz+bCldO2uq+YuiUf+RHPGtfHdd8j
         lsj3xG7TBQpBoZH4W3Cn4aPbrDeW98PsqaeNOkwHkSkmlYQiUgJL7X89glCXnMpEINuY
         34VA==
X-Gm-Message-State: AOAM533W9jb711FWaIlTyTPm4+ETAotRTt/qADEAcyAof6vyhQVMP5j1
	QW/zGWAL/JGN1MyPjgivYO60CqxorBU8ww==
X-Google-Smtp-Source: ABdhPJzktMhzISNMR0atIKvY/55l+KaH2X4GFlKFIwD5uERKcvEWMeiyd2N8rv+O6KnNMW/meVeyMg==
X-Received: by 2002:a2e:9798:: with SMTP id y24mr10142909lji.341.1605116010210;
        Wed, 11 Nov 2020 09:33:30 -0800 (PST)
Subject: Re: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
To: paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
 <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
 <700a643e-641e-c243-cb2d-7ad8b5a9b8ad@gmail.com>
 <d4088e1b-1a50-d2fd-29b0-0f8a2cf4e7d4@suse.com>
 <ed9defbe-b6bf-bd1f-cd88-64d1b0e135c1@gmail.com>
 <0ab03a33-5056-0de8-e5f7-b54a661a09c5@suse.com>
 <003401d6b847$a2d9f470$e88ddd50$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <6472d0b7-d23f-a908-65b9-0cee64ca8e2a@gmail.com>
Date: Wed, 11 Nov 2020 19:33:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <003401d6b847$a2d9f470$e88ddd50$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11.11.20 18:28, Paul Durrant wrote:

Hi Paul.

>>
>>>>>>>         d->ioreq_server.server[id] = s;
>>>>>>> +
>>>>>>> +    if ( s )
>>>>>>> +        d->ioreq_server.nr_servers++;
>>>>>>> +    else
>>>>>>> +        d->ioreq_server.nr_servers--;
>>>>>>>     }
>>>>>>>
>>>>>>>     #define GET_IOREQ_SERVER(d, id) \
>>>>>>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>>>>>>> index 7b03ab5..0679fef 100644
>>>>>>> --- a/xen/include/xen/ioreq.h
>>>>>>> +++ b/xen/include/xen/ioreq.h
>>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
>>>>>>>         uint8_t                bufioreq_handling;
>>>>>>>     };
>>>>>>>
>>>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>>>> +static inline bool domain_has_ioreq_server(const struct domain *d)
>>>>>>> +{
>>>>>>> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
>>>>>>> +
>>>>>> This seems like an odd place to put such an assertion.
>>>>> I might miss something or interpreted incorrectly but these asserts are
>>>>> the result of how I understood the review comment on previous version [1].
>>>>>
>>>>> I will copy a comment here for the convenience:
>>>>> "This is safe only when d == current->domain and it's not paused,
>>>>> or when they're distinct and d is paused. Otherwise the result is
>>>>> stale before the caller can inspect it. This wants documenting by
>>>>> at least a comment, but perhaps better by suitable ASSERT()s."
>>>> The way his reply was worded, I think Paul was wondering about the
>>>> place where you put the assertion, not what you actually assert.
>>> Shall I put the assertion at the call sites of this helper instead?
>> Since Paul raised the question, I expect this is a question to him
>> rather than me?
> If it is indeed a question for me then yes, put the assertion where it is clear why it is needed. domain_has_ioreq_server() is essentially a trivial accessor function; it's not the appropriate place.

Got it. Thank you for the clarification.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 17:46:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 17:46:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25150.52750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcuCc-0006dR-9V; Wed, 11 Nov 2020 17:46:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25150.52750; Wed, 11 Nov 2020 17:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcuCc-0006dK-66; Wed, 11 Nov 2020 17:46:38 +0000
Received: by outflank-mailman (input) for mailman id 25150;
 Wed, 11 Nov 2020 17:46:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcuCa-0006cm-Uv
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 17:46:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce443ac3-4418-4fc0-be5e-3d9cfac26a02;
 Wed, 11 Nov 2020 17:46:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcuCS-0001Ov-FI; Wed, 11 Nov 2020 17:46:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcuCS-0006d5-6y; Wed, 11 Nov 2020 17:46:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcuCS-00041g-4n; Wed, 11 Nov 2020 17:46:28 +0000
X-Inumbo-ID: ce443ac3-4418-4fc0-be5e-3d9cfac26a02
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P0ft0nh5IDeEjN+jWD/Ewla2g4aNpZ1kSL6x/ny+wxc=; b=mLpQYxPuApHlbFDN9z8v0DjlHt
	RzO+aXfoper4aaxI6RnoejZKJo97wFr4jkedzqR+7+vyu+IpTth3Z5zmtqdZ/P3us/WNu7ULYXSwM
	qy8kd7w93TyonG0FD7GfEwhmScRzqdVcwsbzKUwIzx75Gri6kYieTkxKrkPKg6MBZeNA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156634-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156634: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl:debian-fixup:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1447d449fab7e48c85faf83951842bb60d7dabe5
X-Osstest-Versions-That:
    xen=b5eb4956e1d2d73546f8cfdef635b6819ed7b527
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 17:46:28 +0000

flight 156634 xen-4.11-testing real [real]
flight 156686 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156634/
http://logs.test-lab.xenproject.org/osstest/logs/156686/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          13 debian-fixup             fail REGR. vs. 156397
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail REGR. vs. 156397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156397
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156397
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156397
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156397
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156397
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156397
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156397
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156397
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156397
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156397
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156397
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156397
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156397
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1447d449fab7e48c85faf83951842bb60d7dabe5
baseline version:
 xen                  b5eb4956e1d2d73546f8cfdef635b6819ed7b527

Last test of basis   156397  2020-11-04 09:05:50 Z    7 days
Testing same since   156634  2020-11-10 18:06:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1447d449fab7e48c85faf83951842bb60d7dabe5
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Apr 23 13:05:46 2020 +0100

    x86/msr: Disallow guest access to the RAPL MSRs
    
    Researchers have demonstrated using the RAPL interface to perform a
    differential power analysis attack to recover AES keys used by other cores in
    the system.
    
    Furthermore, even privileged guests cannot use this interface correctly, due
    to MSR scope and vcpu scheduling issues.  The interface would want to be
    paravirtualised to be used sensibly.
    
    Disallow access to the RAPL MSRs completely, as well as to other MSRs
    which potentially expose fine-grained power information.
    
    This is part of XSA-351.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 3b5de119f0399cbe745502cb6ebd5e6633cc139c
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Oct 6 18:23:27 2020 +0200

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
    
    Currently a PV hardware domain can also be given control over the CPU
    frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
    However, since commit 322ec7c89f6 the default behavior has been changed
    to reject accesses to MSRs that are not explicitly handled, preventing
    PV guests that manage CPU frequency from reading
    MSR_IA32_PERF_{STATUS/CTL}.
    
    Additionally some HVM guests (Windows at least) will attempt to read
    MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
    
      vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
      d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
    
    Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
    handling shared between HVM and PV guests, and add an explicit case
    for reads to MSR_IA32_PERF_{STATUS/CTL}.
    
    Restore the previous behavior and allow PV guests with the required
    permissions to read the contents of the mentioned MSRs. Non-privileged
    guests will get 0 when trying to read those registers, as writes to
    MSR_IA32_PERF_CTL by such guests are already silently dropped.
    
    Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
    Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 3059178798a23ba870ff86ff54d442a07e6651fc)

commit 65fad0a426db082fa4989b7c4567438bfa93b596
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Nov 9 20:28:59 2020 +0000

    xen/arm: Always trap AMU system registers
    
    The Activity Monitors Unit (AMU) was introduced by Armv8.4. It is
    considered unsafe to expose to guests, as it might leak information
    about code executed by other guests or the host.
    
    Arm provided a way to trap all the AMU system registers by setting
    CPTR_EL2.TAM to 1.
    
    Unfortunately, on older revisions of the specification, bit 30 (now
    CPTR_EL1.TAM) was RES0. Because of that, Xen sets it to 0, and
    therefore the system registers would be exposed to the guest when it
    runs on processors with AMU.
    
    As the bit is marked as UNKNOWN at boot in Armv8.4, the only safe
    solution for us is to always set CPTR_EL1.TAM to 1.
    
    A guest trying to access the AMU system registers will now receive an
    undefined instruction exception. Unfortunately, this means that even a
    well-behaved guest may fail to boot, because we don't sanitize the ID
    registers.
    
    This is a known issue with other Armv8.0+ features (e.g. SVE, Pointer
    Auth). It will be taken care of separately.
    
    This is part of XSA-351 (or XSA-93 re-born).
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andre Przywara <andre.przywara@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    (cherry picked from commit 628e1becb6fb121475a6ce68e3f1cb4499851255)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 19:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 19:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25176.52765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcvmR-0007Ya-Tc; Wed, 11 Nov 2020 19:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25176.52765; Wed, 11 Nov 2020 19:27:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcvmR-0007YT-QF; Wed, 11 Nov 2020 19:27:43 +0000
Received: by outflank-mailman (input) for mailman id 25176;
 Wed, 11 Nov 2020 19:27:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kcvmQ-0007YO-RW
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 19:27:42 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kcvmQ-0005rp-AP; Wed, 11 Nov 2020 19:27:42 +0000
Subject: Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
To: Oleksandr <olekstysh@gmail.com>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
 <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
 <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
 <ecd5c3a5-e889-4fff-8145-3c129f619979@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2ff5e744-b48b-77b0-4e59-faa82951242b@xen.org>
Date: Wed, 11 Nov 2020 19:27:40 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.2
MIME-Version: 1.0
In-Reply-To: <ecd5c3a5-e889-4fff-8145-3c129f619979@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Oleksandr,

On 11/11/2020 00:03, Oleksandr wrote:
> 
> On 16.10.20 11:41, Julien Grall wrote:
>> On 16/10/2020 07:29, Jan Beulich wrote:
>>> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>>>> @@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain 
>>>> *p2m,
>>>>        */
>>>>       if ( p2m_is_valid(orig_pte) &&
>>>>            !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
>>>> +    {
>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>>>> +             (p2m->domain == current->domain) && 
>>>> p2m_is_ram(orig_pte.p2m.type) )
>>>> +            p2m->domain->qemu_mapcache_invalidate = true;
>>>> +#endif
>>>>           p2m_free_entry(p2m, orig_pte, level);
>>>> +    }
>>>
>>> For all I have to say here, please bear in mind that I don't know
>>> the internals of Arm memory management.
>>>
>>> The first odd thing here is the merely MFN-based condition. It may
>>> well be that's sufficient, if there's no way to get a "not present"
>>> entry with an MFN matching any valid MFN. (This isn't just with
>>> your addition, but was the case even before.)
>> Invalid entries are always zeroed. So in theory the problem could 
>> arise if MFN 0 is used in the guest. It should not be possible on 
>> staging, but I agree this should be fixed.
>>
>>>
>>> Given how p2m_free_entry() works (or is supposed to work in the
>>> long run), is the new code you add guaranteed to only alter leaf
>>> entries?
>>
>> This path may also be called with tables. I think we want to move the 
>> check into p2m_free_entry() so we can find the correct leaf type.
> 
> Well, but inside p2m_free_entry() we don't have the new entry with 
> which to check whether the new MFN is actually different. An extra arg 
> only comes to mind...
Aside from the recursive call, there are two users of p2m_free_entry():
   - When we fail to shatter a superpage on OOM
   - When the entry is replaced by an entry with a different base

I wouldn't be too concerned about sending a spurious mapcache 
invalidation in an error path. So I don't think you need to know the 
new entry.

What you need to know is the type of the leaf.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 19:43:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 19:43:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25186.52776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcw1J-0000zM-8o; Wed, 11 Nov 2020 19:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25186.52776; Wed, 11 Nov 2020 19:43:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcw1J-0000zF-5g; Wed, 11 Nov 2020 19:43:05 +0000
Received: by outflank-mailman (input) for mailman id 25186;
 Wed, 11 Nov 2020 19:43:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OH4G=ER=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kcw1I-0000zA-DP
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 19:43:04 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39b7e4b0-8fc1-4956-812d-093ce639c010;
 Wed, 11 Nov 2020 19:43:03 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id l2so4846974lfk.0
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 11:43:03 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id k11sm311002lfd.3.2020.11.11.11.43.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 11 Nov 2020 11:43:01 -0800 (PST)
X-Inumbo-ID: 39b7e4b0-8fc1-4956-812d-093ce639c010
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=vDP1cmeoiwTxokwoBpDIlL4fryW2mihfwGfG8UdiUHE=;
        b=lUOARjLtm2aeRDKxpoR+zdeH3oSDzzfJagaZsmhfQusKJgyD2nQ9TvtcN0gmoxwuMD
         /YbI6205pypbv9XyMD0CQnNOZJwtH6tN+bWep++mqXmqTjTvs9FNL1wcFvHmMzo7KWyB
         GlDxnm/04OYRrJe1MxpAaVaAHmsU2G1X/OF4ZUF8J8PNcTdxMQsE3MBBpukC65xKu7Jb
         JQr2Cu/klK7OgS36lU8JEOSjuVr2VhEiejNnaoGiuE3jAzUN3vJKJyqu3az9sSl0n2Wj
         s6QxyAwN+IfFosnkGeSKrg8rYTfHG1e8SAVj7Ol37fBND9azIWgl50NRD5ZlEm5xpENH
         A6hQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=vDP1cmeoiwTxokwoBpDIlL4fryW2mihfwGfG8UdiUHE=;
        b=bkqKrTp5KzAG7YsvvBeP1Mc1VPHGVV+9NYSlCepYQbJkixsVdjteEH5XhAu4R5Blhp
         zNIBd8tdsVl/EiUv/HxeUfTDTbh3/ilxxQF4SJinKvlvoG5LmIb9bu80GN5PdvNfOO/C
         +EVN/8loAmdx3oQhhURO+PIj9kHyNm6jn7d7J3HHsbG+Ivem6SPpPrDmiTRbEhFoPT+X
         YfnDUUUd+I7LK3cvSpU1H7g3pliEWBQ3R9U51GgD42jUs7S7AUWOViougdXTGyAMmHwq
         xNGrFWjqrAwaA0tOL+7reJ68MdZySzAZzm0VPRhNy3pVxQZyV30dKGO1YfK+vvQlgPco
         HOWA==
X-Gm-Message-State: AOAM532iQ/bi2zCLDsLRAX/tc0w/J094yqK8jru4FMNX5gqB/Nb2Vdm1
	go2lxUjWEcYHfhT5SUQyl3Q=
X-Google-Smtp-Source: ABdhPJxx1PmCCuzXxZ5EBPKvMp4Shjwf6RfBeAn9zC446i3FFVoijmnjJ5fe/PM2TvJSZXtDLLXQag==
X-Received: by 2002:ac2:598e:: with SMTP id w14mr6560281lfn.187.1605123782091;
        Wed, 11 Nov 2020 11:43:02 -0800 (PST)
Subject: Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
 <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
 <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
 <ecd5c3a5-e889-4fff-8145-3c129f619979@gmail.com>
 <2ff5e744-b48b-77b0-4e59-faa82951242b@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <c8234cc2-0fb6-3ad7-7149-de829d1b46af@gmail.com>
Date: Wed, 11 Nov 2020 21:42:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2ff5e744-b48b-77b0-4e59-faa82951242b@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 11.11.20 21:27, Julien Grall wrote:
> Hi Oleksandr,

Hi Julien.


>
> On 11/11/2020 00:03, Oleksandr wrote:
>>
>> On 16.10.20 11:41, Julien Grall wrote:
>>> On 16/10/2020 07:29, Jan Beulich wrote:
>>>> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>>>>> @@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct 
>>>>> p2m_domain *p2m,
>>>>>        */
>>>>>       if ( p2m_is_valid(orig_pte) &&
>>>>>            !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
>>>>> +    {
>>>>> +#ifdef CONFIG_IOREQ_SERVER
>>>>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>>>>> +             (p2m->domain == current->domain) && 
>>>>> p2m_is_ram(orig_pte.p2m.type) )
>>>>> +            p2m->domain->qemu_mapcache_invalidate = true;
>>>>> +#endif
>>>>>           p2m_free_entry(p2m, orig_pte, level);
>>>>> +    }
>>>>
>>>> For all I have to say here, please bear in mind that I don't know
>>>> the internals of Arm memory management.
>>>>
>>>> The first odd thing here is the merely MFN-based condition. It may
>>>> well be that's sufficient, if there's no way to get a "not present"
>>>> entry with an MFN matching any valid MFN. (This isn't just with
>>>> your addition, but even before.)
>>> Invalid entries are always zeroed. So in theory the problem could 
>>> arise if MFN 0 is used in the guest. It should not be possible on 
>>> staging, but I agree this should be fixed.
>>>
>>>>
>>>> Given how p2m_free_entry() works (or is supposed to work in the
>>>> long run), is the new code you add guaranteed to only alter leaf
>>>> entries?
>>>
>>> This path may also be called with tables. I think we want to move 
>>> the check into p2m_free_entry() so we can find the correct leaf type.
>>
>> Well, but inside p2m_free_entry() we don't have a new entry in order 
>> to check whether the new MFN is actually different. An extra arg only 
>> comes to mind...
> Aside from the recursive call, there are two users of p2m_free_entry():
>   - When we fail to shatter a superpage due to OOM
>   - When the entry is replaced by an entry with a different base
>
> I wouldn't be too concerned about sending a spurious mapcache invalidation 
> in an error path. So I don't think you need to know the new entry.

Thank you for the clarification, sounds reasonable.


>
> What you need to know is the type of the leaf.

Yes, to check whether it is a RAM page.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:01:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25196.52789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwJG-0002zc-RM; Wed, 11 Nov 2020 20:01:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25196.52789; Wed, 11 Nov 2020 20:01:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwJG-0002zV-O6; Wed, 11 Nov 2020 20:01:38 +0000
Received: by outflank-mailman (input) for mailman id 25196;
 Wed, 11 Nov 2020 20:01:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ttLz=ER=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kcwJF-0002zP-O2
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:01:37 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2670222-2c8e-4430-a224-6d20b4da8eb6;
 Wed, 11 Nov 2020 20:01:36 +0000 (UTC)
X-Inumbo-ID: a2670222-2c8e-4430-a224-6d20b4da8eb6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605124896;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=iq9RG7cYXuJPMV4JVw65VgwwPfC0VZdgzpX0O+JaaIc=;
  b=OS3TIfCYV49E+ihFCiAPMQlVnJX4dzkx+0DRyzc8rYVXPOSypqYPmmF8
   53pD/ihxS6tuNhrIGHf03sEONAr5gDsY2RaYOfYisMlrRdNOZX7SQ5mvX
   ntPGC/6dE4+vS7ERtZ2kXSorHltET2vkE7D/PI3xehZcq2kTdTzmBtWP4
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: JYCJ24TFRiuqmBsZRN/qus5yCmdw0+xq9XjTp6UbK2hJNB0zy2UAfFIpZcNKmXalrU2/u/3B0U
 /IvQt1kDl5UaZzM1z5KVtzwQWZE/cpZWF2UDAIpTTvrRYCWr9WcvCGhTSM/5b49DfHn29kK9jq
 3aU/acap2Tg4+GYAsfVW631l+X8Wo+W7LqVXwUJm+6+5b4bnew2cHRjsyPyvxLuxJzznTeOyvK
 iV8YNPjSzbWc9KX9RjBRbm8Q9qWiaicaEKq05na2AOW47xMYERTH/+YdwrgBTtAmCa8kFRf6qj
 jx4=
X-SBRS: None
X-MesageID: 31320261
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,470,1596513600"; 
   d="scan'208";a="31320261"
Subject: Re: [PATCH] xen/x86: Work around Clang code generation bug with asm
 parameters
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201111124512.2268-1-andrew.cooper3@citrix.com>
 <8282790a-a0bd-1d33-d992-9d194766254e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3ecb8469-8504-054a-078d-4bf32f8f82c4@citrix.com>
Date: Wed, 11 Nov 2020 20:01:29 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8282790a-a0bd-1d33-d992-9d194766254e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 11/11/2020 15:11, Jan Beulich wrote:
> On 11.11.2020 13:45, Andrew Cooper wrote:
>> Clang 9 and later don't handle the clobber of %r10 correctly in
>> _hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122
> Are you sure this is a bug?

Yes.

>  With ...
>
>>  #define _hypercall64_4(type, hcall, a1, a2, a3, a4)                     \
>>      ({                                                                  \
>> -        long res, tmp__;                                                \
>> -        register long _a4 asm ("r10") = ((long)(a4));                   \
>> +        long res, _a1 = (long)(a1), _a2 = (long)(a2),                   \
>> +            _a3 = (long)(a3);                                           \
>> +        register long _a4 asm ("r10") = (long)(a4);                     \
>>          asm volatile (                                                  \
>>              "call hypercall_page + %c[offset]"                          \
>> -            : "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__),     \
>> -              "=&r" (tmp__) ASM_CALL_CONSTRAINT                         \
> ... this we've requested "any register", while with ...
>
>> -            : [offset] "i" (hcall * 32),                                \
>> -              "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)),     \
>> -              "4" (_a4)                                                 \
> ... this we've asked for that specific register to be initialized
> from r10 (and without telling the compiler that r10 is going to
> change).

Consider applying that same reasoning to "1" instead of "4".  In that
case, a1 would no longer be bound to %rdi.

The use of "4" explicitly binds the input and the output, which includes
requiring them to be the same register.

Furthermore, LLVM tends to consider "not behaving in the same way as
GCC" a bug.

> Also, by what I would have hoped is convention meanwhile, the new
> macro local variables' names shouldn't start with an underscore,
> but instead perhaps end in one. But to be honest, despite knowing
> of the latent (albeit highly hypothetical) issue, each time I
> find myself making such a comment I'm one tiny step closer to
> giving up.

Convention is not created by you getting irritated at others getting
irritated at you for requesting bizarre and unusual changes in submitted
code, or by continuing to point things out, knowing that others
specifically disagree with your reasoning in this case.

Convention is created when there is general agreement over the technical
merits of writing code in one particular way vs the alternatives, *and*
it's written down somewhere, so this argument does not continue any further.

There are no restrictions or guidance in the coding style preferring one
form over another, therefore either is acceptable.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25205.52801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOq-0003D3-CZ; Wed, 11 Nov 2020 20:07:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25205.52801; Wed, 11 Nov 2020 20:07:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOq-0003Cw-8h; Wed, 11 Nov 2020 20:07:24 +0000
Received: by outflank-mailman (input) for mailman id 25205;
 Wed, 11 Nov 2020 20:07:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOp-0003Cr-Ae
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:23 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOp-0000pY-0d; Wed, 11 Nov 2020 20:07:23 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH 00/10] viridian: add support for ExProcessorMasks
Date: Wed, 11 Nov 2020 20:07:11 +0000
Message-Id: <20201111200721.30551-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (10):
  viridian: move flush hypercall implementation into separate function
  viridian: move IPI hypercall implementation into separate function
  viridian: introduce a per-cpu hypercall_vpmask and accessor
    functions...
  viridian: use hypercall_vpmask in hvcall_ipi()
  viridian: use softirq batching in hvcall_ipi()
  viridian: add ExProcessorMasks variants of the flush hypercalls
  viridian: add ExProcessorMasks variant of the IPI hypercall
  viridian: log initial invocation of each type of hypercall
  viridian: add a new '_HVMPV_ex_processor_masks' bit into
    HVM_PARAM_VIRIDIAN...
  xl / libxl: add 'ex_processor_mask' into
    'libxl_viridian_enlightenment'

 docs/man/xl.cfg.5.pod.in             |   8 +
 tools/include/libxl.h                |   7 +
 tools/libs/light/libxl_types.idl     |   1 +
 tools/libs/light/libxl_x86.c         |   3 +
 xen/arch/x86/hvm/viridian/viridian.c | 595 +++++++++++++++++++++------
 xen/include/asm-x86/hvm/viridian.h   |   8 +
 xen/include/public/hvm/params.h      |   7 +-
 7 files changed, 506 insertions(+), 123 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25206.52813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOr-0003ED-Lo; Wed, 11 Nov 2020 20:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25206.52813; Wed, 11 Nov 2020 20:07:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOr-0003Dz-GW; Wed, 11 Nov 2020 20:07:25 +0000
Received: by outflank-mailman (input) for mailman id 25206;
 Wed, 11 Nov 2020 20:07:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOq-0003DB-DZ
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:24 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOq-0000pY-45; Wed, 11 Nov 2020 20:07:24 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 01/10] viridian: move flush hypercall implementation into separate function
Date: Wed, 11 Nov 2020 20:07:12 +0000
Message-Id: <20201111200721.30551-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.

The new function returns Xen error values which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ++++++++++++++++-----------
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..f1a1b6a8af82 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
     return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+    uint64_t raw;
+    struct {
+        uint16_t call_code;
+        uint16_t fast:1;
+        uint16_t rsvd1:15;
+        uint16_t rep_count:12;
+        uint16_t rsvd2:4;
+        uint16_t rep_start:12;
+        uint16_t rsvd3:4;
+    };
+};
+
+union hypercall_output {
+    uint64_t raw;
+    struct {
+        uint16_t result;
+        uint16_t rsvd1;
+        uint32_t rep_complete:12;
+        uint32_t rsvd2:20;
+    };
+};
+
+static int hvcall_flush(union hypercall_input *input,
+                        union hypercall_output *output,
+                        unsigned long input_params_gpa,
+                        unsigned long output_params_gpa)
+{
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        uint64_t vcpu_mask;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    /*
+     * It is not clear from the spec. if we are supposed to
+     * include current virtual CPU in the set or not in this case,
+     * so err on the safe side.
+     */
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        input_params.vcpu_mask = ~0ul;
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     uint16_t status = HV_STATUS_SUCCESS;
-
-    union hypercall_input {
-        uint64_t raw;
-        struct {
-            uint16_t call_code;
-            uint16_t fast:1;
-            uint16_t rsvd1:15;
-            uint16_t rep_count:12;
-            uint16_t rsvd2:4;
-            uint16_t rep_start:12;
-            uint16_t rsvd3:4;
-        };
-    } input;
-
-    union hypercall_output {
-        uint64_t raw;
-        struct {
-            uint16_t result;
-            uint16_t rsvd1;
-            uint32_t rep_complete:12;
-            uint32_t rsvd2:20;
-        };
-    } output = { 0 };
+    union hypercall_input input;
+    union hypercall_output output = {};
 
     ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
     {
-        struct {
-            uint64_t address_space;
-            uint64_t flags;
-            uint64_t vcpu_mask;
-        } input_params;
+        int rc = hvcall_flush(&input, &output, input_params_gpa,
+                              output_params_gpa);
 
-        /* These hypercalls should never use the fast-call convention. */
-        status = HV_STATUS_INVALID_PARAMETER;
-        if ( input.fast )
+        switch ( rc )
+        {
+        case 0:
             break;
 
-        /* Get input parameters. */
-        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) !=
-             HVMTRANS_okay )
-            break;
-
-        /*
-         * It is not clear from the spec. if we are supposed to
-         * include current virtual CPU in the set or not in this case,
-         * so err on the safe side.
-         */
-        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-            input_params.vcpu_mask = ~0ul;
-
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        case -ERESTART:
             return HVM_HCALL_preempted;
 
-        output.rep_complete = input.rep_count;
+        default:
+            ASSERT_UNREACHABLE();
+            /* Fallthrough */
+        case -EINVAL:
+            status = HV_STATUS_INVALID_PARAMETER;
+            break;
+        }
 
-        status = HV_STATUS_SUCCESS;
         break;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25207.52825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOs-0003Fm-TH; Wed, 11 Nov 2020 20:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25207.52825; Wed, 11 Nov 2020 20:07:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOs-0003Ff-Q6; Wed, 11 Nov 2020 20:07:26 +0000
Received: by outflank-mailman (input) for mailman id 25207;
 Wed, 11 Nov 2020 20:07:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOr-0003E0-Fl
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:25 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOr-0000pY-7R; Wed, 11 Nov 2020 20:07:25 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 02/10] viridian: move IPI hypercall implementation into separate function
Date: Wed, 11 Nov 2020 20:07:13 +0000
Message-Id: <20201111200721.30551-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.

The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.

NOTE: The formatting of the 'out' label is also corrected as per CODING_STYLE
      and the code is adjusted to avoid a register copy-back if 'mode' is
      neither 8 nor 4.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 170 ++++++++++++++-------------
 1 file changed, 87 insertions(+), 83 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index f1a1b6a8af82..c4f720f58d6d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -581,13 +581,72 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi(union hypercall_input *input,
+                      union hypercall_output *output,
+                      unsigned long input_params_gpa,
+                      unsigned long output_params_gpa)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    uint32_t vector;
+    uint64_t vcpu_mask;
+
+    /* Get input parameters. */
+    if ( input->fast )
+    {
+        if ( input_params_gpa >> 32 )
+            return -EINVAL;
+
+        vector = input_params_gpa;
+        vcpu_mask = output_params_gpa;
+    }
+    else
+    {
+        struct {
+            uint32_t vector;
+            uint8_t target_vtl;
+            uint8_t reserved_zero[3];
+            uint64_t vcpu_mask;
+        } input_params;
+
+        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                      sizeof(input_params)) != HVMTRANS_okay )
+            return -EINVAL;
+
+        if ( input_params.target_vtl ||
+             input_params.reserved_zero[0] ||
+             input_params.reserved_zero[1] ||
+             input_params.reserved_zero[2] )
+            return -EINVAL;
+
+        vector = input_params.vector;
+        vcpu_mask = input_params.vcpu_mask;
+    }
+
+    if ( vector < 0x10 || vector > 0xff )
+        return -EINVAL;
+
+    for_each_vcpu ( currd, v )
+    {
+        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+            return -EINVAL;
+
+        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+            continue;
+
+        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+    }
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
-    uint16_t status = HV_STATUS_SUCCESS;
+    int rc = 0;
     union hypercall_input input;
     union hypercall_output output = {};
 
@@ -616,92 +675,18 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * See section 14.5.1 of the specification.
          */
         do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
-        status = HV_STATUS_SUCCESS;
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-    {
-        int rc = hvcall_flush(&input, &output, input_params_gpa,
-                              output_params_gpa);
-
-        switch ( rc )
-        {
-        case 0:
-            break;
-
-        case -ERESTART:
-            return HVM_HCALL_preempted;
-
-        default:
-            ASSERT_UNREACHABLE();
-            /* Fallthrough */
-        case -EINVAL:
-            status = HV_STATUS_INVALID_PARAMETER;
-            break;
-        }
-
+        rc = hvcall_flush(&input, &output, input_params_gpa,
+                          output_params_gpa);
         break;
-    }
 
     case HVCALL_SEND_IPI:
-    {
-        struct vcpu *v;
-        uint32_t vector;
-        uint64_t vcpu_mask;
-
-        status = HV_STATUS_INVALID_PARAMETER;
-
-        /* Get input parameters. */
-        if ( input.fast )
-        {
-            if ( input_params_gpa >> 32 )
-                break;
-
-            vector = input_params_gpa;
-            vcpu_mask = output_params_gpa;
-        }
-        else
-        {
-            struct {
-                uint32_t vector;
-                uint8_t target_vtl;
-                uint8_t reserved_zero[3];
-                uint64_t vcpu_mask;
-            } input_params;
-
-            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                          sizeof(input_params)) !=
-                 HVMTRANS_okay )
-                break;
-
-            if ( input_params.target_vtl ||
-                 input_params.reserved_zero[0] ||
-                 input_params.reserved_zero[1] ||
-                 input_params.reserved_zero[2] )
-                break;
-
-            vector = input_params.vector;
-            vcpu_mask = input_params.vcpu_mask;
-        }
-
-        if ( vector < 0x10 || vector > 0xff )
-            break;
-
-        for_each_vcpu ( currd, v )
-        {
-            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-                break;
-
-            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-                continue;
-
-            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-        }
-
-        status = HV_STATUS_SUCCESS;
+        rc = hvcall_ipi(&input, &output, input_params_gpa,
+                        output_params_gpa);
         break;
-    }
 
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
@@ -714,17 +699,36 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * Given that return a status of 'invalid code' has not so far
          * caused any problems it's not worth logging.
          */
-        status = HV_STATUS_INVALID_HYPERCALL_CODE;
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+ out:
+    switch ( rc )
+    {
+    case 0:
+        break;
+
+    case -ERESTART:
+        return HVM_HCALL_preempted;
+
+    case -EOPNOTSUPP:
+        output.result = HV_STATUS_INVALID_HYPERCALL_CODE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
+        output.result = HV_STATUS_INVALID_PARAMETER;
         break;
     }
 
-out:
-    output.result = status;
     switch (mode) {
     case 8:
         regs->rax = output.raw;
         break;
-    default:
+    case 4:
         regs->rdx = output.raw >> 32;
         regs->rax = (uint32_t)output.raw;
         break;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25208.52837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOu-0003IT-Dr; Wed, 11 Nov 2020 20:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25208.52837; Wed, 11 Nov 2020 20:07:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOu-0003IL-9V; Wed, 11 Nov 2020 20:07:28 +0000
Received: by outflank-mailman (input) for mailman id 25208;
 Wed, 11 Nov 2020 20:07:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOs-0003FT-IO
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:26 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOs-0000pY-Ai; Wed, 11 Nov 2020 20:07:26 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
Date: Wed, 11 Nov 2020 20:07:14 +0000
Message-Id: <20201111200721.30551-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and make use of them in hvcall_flush()/need_flush().

Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.
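
For illustration only, the pattern reduces to a per-cpu bitmap wide enough
for the maximum VP count. The following standalone C sketch uses
hypothetical names (MAX_VPS standing in for HVM_MAX_VCPUS) and open-coded
bit operations in place of Xen's bitmap helpers; it is not the actual
hypervisor code:

```c
#include <limits.h>
#include <stdbool.h>
#include <string.h>

#define MAX_VPS 128                          /* stand-in for HVM_MAX_VCPUS */
#define BPW     (sizeof(unsigned long) * CHAR_BIT)

struct vpmask {
    unsigned long mask[(MAX_VPS + BPW - 1) / BPW];
};

/* Open-coded equivalent of bitmap_zero(). */
static void vpmask_empty(struct vpmask *m)
{
    memset(m->mask, 0, sizeof(m->mask));
}

/* Open-coded equivalent of __set_bit(). */
static void vpmask_set(struct vpmask *m, unsigned int vp)
{
    m->mask[vp / BPW] |= 1UL << (vp % BPW);
}

/* Open-coded equivalent of test_bit(). */
static bool vpmask_test(const struct vpmask *m, unsigned int vp)
{
    return m->mask[vp / BPW] & (1UL << (vp % BPW));
}
```

Because the mask lives in per-cpu data rather than on the stack, widening
MAX_VPS later costs nothing in stack footprint.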

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 51 +++++++++++++++++++++++++---
 1 file changed, 47 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index c4f720f58d6d..4ab1f14b2248 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
     XFREE(d->arch.hvm.viridian);
 }
 
+struct hypercall_vpmask {
+    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
+
+static void vpmask_empty(struct hypercall_vpmask *vpmask)
+{
+    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+    __set_bit(vp, vpmask->mask);
+}
+
+static void vpmask_fill(struct hypercall_vpmask *vpmask)
+{
+    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+    return test_bit(vp, vpmask->mask);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
  */
 static bool need_flush(void *ctxt, struct vcpu *v)
 {
-    uint64_t vcpu_mask = *(uint64_t *)ctxt;
+    struct hypercall_vpmask *vpmask = ctxt;
 
-    return vcpu_mask & (1ul << v->vcpu_id);
+    return vpmask_test(vpmask, v->vcpu_id);
 }
 
 union hypercall_input {
@@ -546,6 +572,7 @@ static int hvcall_flush(union hypercall_input *input,
                         unsigned long input_params_gpa,
                         unsigned long output_params_gpa)
 {
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     struct {
         uint64_t address_space;
         uint64_t flags;
@@ -567,13 +594,29 @@ static int hvcall_flush(union hypercall_input *input,
      * so err on the safe side.
      */
     if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-        input_params.vcpu_mask = ~0ul;
+        vpmask_fill(vpmask);
+    else
+    {
+        unsigned int vp;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0; vp < 64; vp++ )
+        {
+            if ( !input_params.vcpu_mask )
+                break;
+
+            if ( input_params.vcpu_mask & 1 )
+                vpmask_set(vpmask, vp);
+
+            input_params.vcpu_mask >>= 1;
+        }
+    }
 
     /*
      * A false return means that another vcpu is currently trying
      * a similar operation, so back off.
      */
-    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+    if ( !paging_flush_tlb(need_flush, vpmask) )
         return -ERESTART;
 
     output->rep_complete = input->rep_count;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25209.52849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOv-0003LB-Ps; Wed, 11 Nov 2020 20:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25209.52849; Wed, 11 Nov 2020 20:07:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOv-0003Kz-LZ; Wed, 11 Nov 2020 20:07:29 +0000
Received: by outflank-mailman (input) for mailman id 25209;
 Wed, 11 Nov 2020 20:07:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOt-0003Hk-MB
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:27 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOt-0000pY-Dw; Wed, 11 Nov 2020 20:07:27 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 04/10] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Wed, 11 Nov 2020 20:07:15 +0000
Message-Id: <20201111200721.30551-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced a per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.
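
For illustration, the semantics of such an iterator can be sketched in
standalone C. The names here (next_vp, NBITS) are hypothetical; the real
for_each_vp() is built on Xen's find_first_bit()/find_next_bit(), whose
contract next_vp() imitates:

```c
#include <stdint.h>

#define NBITS 128                    /* stand-in for HVM_MAX_VCPUS */

/*
 * Return the index of the first set bit at or after 'start', or NBITS if
 * there is none: the same contract as find_next_bit().
 */
static unsigned int next_vp(const uint64_t mask[NBITS / 64],
                            unsigned int start)
{
    unsigned int vp;

    for ( vp = start; vp < NBITS; vp++ )
        if ( mask[vp / 64] & ((uint64_t)1 << (vp % 64)) )
            return vp;

    return NBITS;
}

/* Hypothetical analogue of the for_each_vp() macro in the patch. */
#define for_each_vp(mask, vp) \
    for ( (vp) = next_vp(mask, 0); \
          (vp) < NBITS; \
          (vp) = next_vp(mask, (vp) + 1) )
```

The loop visits only set bits, so iteration cost scales with the number of
targeted VPs rather than with the mask width.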

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 43 ++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 4ab1f14b2248..63f63093a513 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -533,6 +533,21 @@ static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
     return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
+{
+    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+	for ((vp) = vpmask_first(vpmask); \
+	     (vp) < HVM_MAX_VCPUS; \
+	     (vp) = vpmask_next(vpmask, vp))
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -624,15 +639,24 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+    struct domain *currd = current->domain;
+    unsigned int vp;
+
+    for_each_vp ( vpmask, vp )
+        vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(union hypercall_input *input,
                       union hypercall_output *output,
                       unsigned long input_params_gpa,
                       unsigned long output_params_gpa)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     uint32_t vector;
     uint64_t vcpu_mask;
+    unsigned int vp;
 
     /* Get input parameters. */
     if ( input->fast )
@@ -669,17 +693,20 @@ static int hvcall_ipi(union hypercall_input *input,
     if ( vector < 0x10 || vector > 0xff )
         return -EINVAL;
 
-    for_each_vcpu ( currd, v )
+    vpmask_empty(vpmask);
+    for ( vp = 0; vp < 64; vp++ )
     {
-        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-            return -EINVAL;
+        if ( !vcpu_mask )
+            break;
 
-        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-            continue;
+        if ( vcpu_mask & 1 )
+            vpmask_set(vpmask, vp);
 
-        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+        vcpu_mask >>= 1;
     }
 
+    send_ipi(vpmask, vector);
+
     return 0;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25210.52856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOw-0003MT-E8; Wed, 11 Nov 2020 20:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25210.52856; Wed, 11 Nov 2020 20:07:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOw-0003Ly-2q; Wed, 11 Nov 2020 20:07:30 +0000
Received: by outflank-mailman (input) for mailman id 25210;
 Wed, 11 Nov 2020 20:07:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOu-0003JZ-PB
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:28 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOu-0000pY-HB; Wed, 11 Nov 2020 20:07:28 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 05/10] viridian: use softirq batching in hvcall_ipi()
Date: Wed, 11 Nov 2020 20:07:16 +0000
Message-Id: <20201111200721.30551-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of the
mechanism when there are multiple bits set in the hypercall_vpmask. Hence a
`nr` field is added to the structure to track the number of set bits.
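
For illustration, the counting logic can be sketched in standalone C. All
names here are hypothetical stand-ins (vpmask_set() mirrors the
test_and_set_bit() pattern in the patch, want_batch() the `nr > 1` check in
send_ipi()):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NBITS 128                    /* stand-in for HVM_MAX_VCPUS */

struct vpmask {
    uint64_t mask[NBITS / 64];
    unsigned int nr;                 /* count of set bits, kept incrementally */
};

static void vpmask_empty(struct vpmask *m)
{
    memset(m, 0, sizeof(*m));
}

/*
 * Bump the counter only on a 0 -> 1 transition, as the patch does via
 * test_and_set_bit(), so repeated sets cannot over-count.
 */
static void vpmask_set(struct vpmask *m, unsigned int vp)
{
    uint64_t bit = (uint64_t)1 << (vp % 64);

    if ( !(m->mask[vp / 64] & bit) )
    {
        m->mask[vp / 64] |= bit;
        m->nr++;
    }
}

/* Softirq batching only pays off when more than one target is set. */
static bool want_batch(const struct vpmask *m)
{
    return m->nr > 1;
}
```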

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 63f63093a513..765d53016c02 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -11,6 +11,7 @@
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
+#include <xen/softirq.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
@@ -509,6 +510,7 @@ void viridian_domain_deinit(struct domain *d)
 
 struct hypercall_vpmask {
     DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+    unsigned int nr;
 };
 
 static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
@@ -516,21 +518,24 @@ static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
 static void vpmask_empty(struct hypercall_vpmask *vpmask)
 {
     bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+    vpmask->nr = 0;
 }
 
 static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
 {
-    __set_bit(vp, vpmask->mask);
+    if ( !test_and_set_bit(vp, vpmask->mask) )
+        vpmask->nr++;
 }
 
 static void vpmask_fill(struct hypercall_vpmask *vpmask)
 {
     bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+    vpmask->nr = HVM_MAX_VCPUS;
 }
 
 static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
 {
-    return test_bit(vp, vpmask->mask);
+    return vpmask->nr && test_bit(vp, vpmask->mask);
 }
 
 static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
@@ -644,8 +649,17 @@ static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
     struct domain *currd = current->domain;
     unsigned int vp;
 
+    if ( !vpmask->nr )
+        return;
+
+    if ( vpmask->nr > 1 )
+        cpu_raise_softirq_batch_begin();
+
     for_each_vp ( vpmask, vp )
         vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+
+    if ( vpmask->nr > 1 )
+        cpu_raise_softirq_batch_finish();
 }
 
 static int hvcall_ipi(union hypercall_input *input,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25211.52872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOx-0003QL-Nz; Wed, 11 Nov 2020 20:07:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25211.52872; Wed, 11 Nov 2020 20:07:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOx-0003Q0-I0; Wed, 11 Nov 2020 20:07:31 +0000
Received: by outflank-mailman (input) for mailman id 25211;
 Wed, 11 Nov 2020 20:07:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOv-0003LR-SL
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:29 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOv-0000pY-KP; Wed, 11 Nov 2020 20:07:29 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 06/10] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Wed, 11 Nov 2020 20:07:17 +0000
Message-Id: <20201111200721.30551-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
two new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so that it can be copied
from guest memory) and to parse it into a hypercall_vpmask, respectively.

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.
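
For illustration, the sizing step can be sketched in standalone C. The
struct layout here is a simplified, hypothetical model of the sparse
('SPARSE_4K') set: a header followed by one 64-bit bank of VP bits per set
bit in valid_bank_mask, so the number of bytes to copy from the guest is
not known until the header has been read:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified sparse Virtual Processor Set header (assumed layout). */
struct vpset_header {
    uint64_t format;
    uint64_t valid_bank_mask;
};

/* Population count of the bank mask, as in hv_vpset_nr_banks(). */
static unsigned int nr_banks(uint64_t valid_bank_mask)
{
    unsigned int nr = 0;

    for ( ; valid_bank_mask; valid_bank_mask >>= 1 )
        nr += valid_bank_mask & 1;

    return nr;
}

/* Total number of bytes to copy from guest memory for a sparse set. */
static size_t vpset_size(const struct vpset_header *h)
{
    return sizeof(*h) + nr_banks(h->valid_bank_mask) * sizeof(uint64_t);
}
```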

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 147 +++++++++++++++++++++++++++
 1 file changed, 147 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 765d53016c02..1226e1596a1c 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -553,6 +553,83 @@ static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp
 	     (vp) < HVM_MAX_VCPUS; \
 	     (vp) = vpmask_next(vpmask, vp))
 
+struct hypercall_vpset {
+    struct hv_vpset set;
+    uint64_t __bank_contents[64];
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    uint64_t bank_mask;
+    unsigned int nr = 0;
+
+    for ( bank_mask = vpset->valid_bank_mask; bank_mask; bank_mask >>= 1 )
+        if ( bank_mask & 1 )
+            nr++;
+
+    return nr;
+}
+
+static int hv_vpset_to_vpmask(struct hv_vpset *set, size_t size,
+                              struct hypercall_vpmask *vpmask)
+{
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int bank = 0, vp = 0;
+
+        vpmask_empty(vpmask);
+        for ( bank_mask = set->valid_bank_mask; bank_mask; bank_mask >>= 1 )
+        {
+            /* Make sure we won't dereference past the end of the array */
+            if ( (void *)(set->bank_contents + bank) >=
+                 (void *)set + size )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+                unsigned int i;
+
+                for ( i = 0; i < 64; i++, vp++ )
+                {
+                    if ( mask & 1 )
+                    {
+                        if ( vp >= HVM_MAX_VCPUS )
+                            return -EINVAL;
+
+                        vpmask_set(vpmask, vp);
+                    }
+
+                    mask >>= 1;
+                }
+
+                bank++;
+            }
+            else
+                vp += 64;
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -644,6 +721,70 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(union hypercall_input *input,
+                           union hypercall_output *output,
+                           unsigned long input_params_gpa,
+                           unsigned long output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+            if ( hvm_copy_from_guest_phys(&set->bank_contents,
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay )
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, size, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -767,6 +908,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25212.52885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOz-0003UL-As; Wed, 11 Nov 2020 20:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25212.52885; Wed, 11 Nov 2020 20:07:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOz-0003Ty-1d; Wed, 11 Nov 2020 20:07:33 +0000
Received: by outflank-mailman (input) for mailman id 25212;
 Wed, 11 Nov 2020 20:07:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOw-0003OZ-VU
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:30 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOw-0000pY-Ng; Wed, 11 Nov 2020 20:07:30 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 07/10] viridian: add ExProcessorMasks variant of the IPI hypercall
Date: Wed, 11 Nov 2020 20:07:18 +0000
Message-Id: <20201111200721.30551-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).

NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
      not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
      'ExProcessorMasks' is not yet advertised via CPUID.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 66 ++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 1226e1596a1c..e899dd1e9f55 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -865,6 +865,67 @@ static int hvcall_ipi(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi_ex(union hypercall_input *input,
+                         union hypercall_output *output,
+                         unsigned long input_params_gpa,
+                         unsigned long output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint32_t vector;
+        uint8_t target_vtl;
+        uint8_t reserved_zero[3];
+        struct hv_vpset set;
+    } input_params;
+    struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+    struct hv_vpset *set = &vpset->set;
+    size_t size;
+    int rc;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.target_vtl ||
+         input_params.reserved_zero[0] ||
+         input_params.reserved_zero[1] ||
+         input_params.reserved_zero[2] )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    if ( input_params.vector < 0x10 || input_params.vector > 0xff )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    *set = input_params.set;
+    if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+    {
+        unsigned long offset = offsetof(typeof(input_params),
+                                        set.bank_contents);
+
+        size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+        if ( hvm_copy_from_guest_phys(&set->bank_contents,
+                                      input_params_gpa + offset,
+                                      size) != HVMTRANS_okay )
+            return -EINVAL;
+
+        size += sizeof(*set);
+    }
+    else
+        size = sizeof(*set);
+
+    rc = hv_vpset_to_vpmask(set, size, vpmask);
+    if ( rc )
+        return rc;
+
+    send_ipi(vpmask, input_params.vector);
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -919,6 +980,11 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                         output_params_gpa);
         break;
 
+    case HVCALL_SEND_IPI_EX:
+        rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
+                           output_params_gpa);
+        break;
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25213.52891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwP0-0003VP-7c; Wed, 11 Nov 2020 20:07:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25213.52891; Wed, 11 Nov 2020 20:07:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwOz-0003Uy-HF; Wed, 11 Nov 2020 20:07:33 +0000
Received: by outflank-mailman (input) for mailman id 25213;
 Wed, 11 Nov 2020 20:07:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOy-0003RR-2w
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:32 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOx-0000pY-Qt; Wed, 11 Nov 2020 20:07:32 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 08/10] viridian: log initial invocation of each type of hypercall
Date: Wed, 11 Nov 2020 20:07:19 +0000
Message-Id: <20201111200721.30551-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 21 +++++++++++++++++++++
 xen/include/asm-x86/hvm/viridian.h   |  8 ++++++++
 2 files changed, 29 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index e899dd1e9f55..670fec3a4870 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -930,6 +930,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
+    struct viridian_domain *vd = currd->arch.hvm.viridian;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     int rc = 0;
@@ -957,6 +958,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     switch ( input.call_code )
     {
     case HVCALL_NOTIFY_LONG_SPIN_WAIT:
+        if ( !test_and_set_bit(_HCALL_spin_wait, &vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_NOTIFY_LONG_SPIN_WAIT\n",
+                   currd);
+
         /*
          * See section 14.5.1 of the specification.
          */
@@ -965,22 +970,38 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+        if ( !test_and_set_bit(_HCALL_flush, &vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST\n",
+                   currd);
+
         rc = hvcall_flush(&input, &output, input_params_gpa,
                           output_params_gpa);
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        if ( !test_and_set_bit(_HCALL_flush_ex, &vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX\n",
+                   currd);
+
         rc = hvcall_flush_ex(&input, &output, input_params_gpa,
                              output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI:
+        if ( !test_and_set_bit(_HCALL_ipi, &vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI\n",
+                   currd);
+
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI_EX:
+        if ( !test_and_set_bit(_HCALL_ipi_ex, &vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI_EX\n",
+                   currd);
+
         rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
                            output_params_gpa);
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index cbf77d9c760b..d176c4b9153b 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -59,6 +59,14 @@ struct viridian_domain
 {
     union hv_guest_os_id guest_os_id;
     union hv_vp_assist_page_msr hypercall_gpa;
+    unsigned long hypercall_flags;
+
+#define _HCALL_spin_wait 0
+#define _HCALL_flush 1
+#define _HCALL_flush_ex 2
+#define _HCALL_ipi 3
+#define _HCALL_ipi_ex 4
+
     struct viridian_time_ref_count time_ref_count;
     struct viridian_page reference_tsc;
 };
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:07:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25214.52909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwP2-0003cv-C5; Wed, 11 Nov 2020 20:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25214.52909; Wed, 11 Nov 2020 20:07:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwP1-0003c3-VR; Wed, 11 Nov 2020 20:07:35 +0000
Received: by outflank-mailman (input) for mailman id 25214;
 Wed, 11 Nov 2020 20:07:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwOz-0003VL-Jv
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:07:33 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwOz-0000pY-Ai; Wed, 11 Nov 2020 20:07:33 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 09/10] viridian: add a new '_HVMPV_ex_processor_masks' bit into HVM_PARAM_VIRIDIAN...
Date: Wed, 11 Nov 2020 20:07:20 +0000
Message-Id: <20201111200721.30551-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and advertise ExProcessorMasks support if it is set.

Support is advertised by setting bit 11 in CPUID:40000004:EAX.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/viridian/viridian.c | 3 +++
 xen/include/public/hvm/params.h      | 7 ++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 670fec3a4870..4d570adc21c0 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -84,6 +84,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
 #define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
+#define CPUID4A_EX_PROCESSOR_MASKS     (1 << 11)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -197,6 +198,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
+        if ( viridian_feature_mask(d) & HVMPV_ex_processor_masks )
+            res->a |= CPUID4A_EX_PROCESSOR_MASKS;
 
         /*
          * This value is the recommended number of attempts to try to
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 0e3fdca09646..3b0a0f45da53 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -164,6 +164,10 @@
 #define _HVMPV_hcall_ipi 9
 #define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
 
+/* Enable ExProcessorMasks */
+#define _HVMPV_ex_processor_masks 10
+#define HVMPV_ex_processor_masks (1 << _HVMPV_ex_processor_masks)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -174,7 +178,8 @@
          HVMPV_crash_ctl | \
          HVMPV_synic | \
          HVMPV_stimer | \
-         HVMPV_hcall_ipi)
+         HVMPV_hcall_ipi | \
+         HVMPV_ex_processor_masks)
 
 #endif
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 20:31:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 20:31:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25282.52920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwlP-00076L-Bn; Wed, 11 Nov 2020 20:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25282.52920; Wed, 11 Nov 2020 20:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcwlP-00076E-8u; Wed, 11 Nov 2020 20:30:43 +0000
Received: by outflank-mailman (input) for mailman id 25282;
 Wed, 11 Nov 2020 20:30:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kcwlO-000769-Na
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 20:30:42 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kcwP0-0000pY-Al; Wed, 11 Nov 2020 20:07:34 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 10/10] xl / libxl: add 'ex_processor_mask' into 'libxl_viridian_enlightenment'
Date: Wed, 11 Nov 2020 20:07:21 +0000
Message-Id: <20201111200721.30551-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Adding the new value into the enumeration makes it immediately available
to xl, so this patch adjusts the xl.cfg(5) documentation accordingly.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 docs/man/xl.cfg.5.pod.in         | 8 ++++++++
 tools/include/libxl.h            | 7 +++++++
 tools/libs/light/libxl_types.idl | 1 +
 tools/libs/light/libxl_x86.c     | 3 +++
 4 files changed, 19 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..3f0f8de1e988 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2318,6 +2318,14 @@ This set incorporates use of a hypercall for interprocessor interrupts.
 This enlightenment may improve performance of Windows guests with multiple
 virtual CPUs.
 
+=item B<ex_processor_masks>
+
+This set enables new hypercall variants taking a variably-sized sparse
+B<Virtual Processor Set> as an argument, rather than a simple 64-bit
+mask. Hence this enlightenment must be specified for guests with more
+than 64 vCPUs if B<hcall_remote_tlb_flush> and/or B<hcall_ipi> are also
+specified.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..eaffccb30f37 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS indicates that the
+ * 'ex_processor_masks' value is present in the viridian enlightenment
+ * enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..05324736b744 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -238,6 +238,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (7, "synic"),
     (8, "stimer"),
     (9, "hcall_ipi"),
+    (10, "ex_processor_masks"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index e18274cc10e2..86d272999d67 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -366,6 +366,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
         mask |= HVMPV_hcall_ipi;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_EX_PROCESSOR_MASKS))
+        mask |= HVMPV_ex_processor_masks;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25293.52957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2t-00069d-VB; Wed, 11 Nov 2020 21:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25293.52957; Wed, 11 Nov 2020 21:52:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2t-00069U-Rv; Wed, 11 Nov 2020 21:52:51 +0000
Received: by outflank-mailman (input) for mailman id 25293;
 Wed, 11 Nov 2020 21:52:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy2r-00064v-Ok
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:49 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2483e5c9-4c86-4ff1-9b5f-9efaf2ce6fcd;
 Wed, 11 Nov 2020 21:52:40 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id k2so3998943wrx.2
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:40 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.38
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=CODXTx/qAJFmAOdmnrKmfVlPhSYEzSAm6huhIA2Mo1I=;
        b=oQNVr3As2iNhoTTgUw08EzCuvEeUJGdh+BijVSabFb5qFpbq5mcVhXfZK8uabpWOud
         gQpUHRAaRFa620hoH7jmrUdB1CSM3GcXaHjhv7grkduuprEhqRzK1EidJxWkjM9td51q
         v2xaE6eozqeLg0M+qdJE3hm6fl6YCeQpO8KRtd+WTHFur/s6IaEJk8p7AoOXX40b/WNI
         s8auOJusZejoyZdyPLc58iepBaX2ulQds/oeP/hqXdOXklyllHvoUvwG5l68mmTnkFX6
         EHppP93AqRDWliZnx89JN0lYN7oEHseaVL4zwNHrVwZ9Sb8tKznlrHRmlMmyqRNS92hd
         nlug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=CODXTx/qAJFmAOdmnrKmfVlPhSYEzSAm6huhIA2Mo1I=;
        b=r/VhnlazKEFtz3S66anfAgLolYgH9ry7G4kv3uNarUL95XTUqwI3QWE/jAnXr/2N1c
         Lh8Clax9FeNpeRVqI8HBYau6O9dN9wBrQArz/tymcUysoIgFLe+r35VOECQvzRlonQwH
         i0i36B38erK+iaAGMIzgbczEeMbAsyjGlOM6tmBEu2XyPDbNDQfVI8Viju37kCbl9vr8
         jRg42m7HzDkrtB7DaSEFmax4zLikJqiDBpGDxdX/bT7+StwmjN7Yqgc124JxJnHzM5+T
         pkkdN5k+sBiyd2oEieYKtmqtlhzRkSlNuxnLkggAibSIs26UxeiZHIIqUtqi0sQP0O4A
         gz3w==
X-Gm-Message-State: AOAM531+pEkYZxZQxVFdNzCAvEk/VnF78WQz3JxYhcAXk1IOFCxKzBsP
	6JtEwRYLLP4u5JGaV439k4p/yPq6oyY=
X-Google-Smtp-Source: ABdhPJzBDSOrdcdgHWz1ZLiR2a2Q81rePOml0YZw0Qnuq0gjXwXFCubF2rtCDjJm1Bwj3aDpBXXauA==
X-Received: by 2002:adf:f607:: with SMTP id t7mr5085487wrp.169.1605131559247;
        Wed, 11 Nov 2020 13:52:39 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.38
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:38 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 02/15] xen/arm: Add detection of Armv8.1-LSE atomic instructions
Date: Wed, 11 Nov 2020 21:51:50 +0000
Message-Id: <20201111215203.80336-3-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

Use the new infrastructure for detecting CPU features in other ID
registers to detect the presence of Armv8.1-LSE atomic instructions,
as reported by ID_AA64ISAR0_EL1.Atomic.

While we're here, print detection of these instructions in setup.c's
processor_id().

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/setup.c             |  5 +++--
 xen/include/asm-arm/cpufeature.h | 10 +++++++++-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5121f06fc5..138e1957c5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -128,10 +128,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_lse_atomics ? " LSE-Atomics" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index f9281ea343..2366926e82 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -15,6 +15,7 @@
 #define cpu_has_fp              (boot_cpu_feature(pfr64, fp) < 8)
 #define cpu_has_simd            (boot_cpu_feature(pfr64, simd) < 8)
 #define cpu_has_gicv3           (boot_cpu_feature(pfr64, gic) == 1)
+#define cpu_has_lse_atomics     (boot_cpu_feature(isa64, atomic) == 2)
 #endif
 
 #define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
@@ -187,8 +188,15 @@ struct cpuinfo_arm {
         };
     } mm64;
 
-    struct {
+    union {
         uint64_t bits[2];
+        struct {
+            unsigned long __res0 : 20;
+            unsigned long atomic : 4;
+            unsigned long __res1 : 40;
+
+            unsigned long __res2 : 64;
+        };
     } isa64;
 
 #endif
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25292.52945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2o-00066S-NP; Wed, 11 Nov 2020 21:52:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25292.52945; Wed, 11 Nov 2020 21:52:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2o-00066K-K2; Wed, 11 Nov 2020 21:52:46 +0000
Received: by outflank-mailman (input) for mailman id 25292;
 Wed, 11 Nov 2020 21:52:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy2m-00064v-Oj
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:44 +0000
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a17d2cbb-8852-4648-a661-1c53bca5fd55;
 Wed, 11 Nov 2020 21:52:39 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id c9so3559012wml.5
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:39 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.37
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:37 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy2m-00064v-Oj
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:44 +0000
X-Inumbo-ID: a17d2cbb-8852-4648-a661-1c53bca5fd55
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a17d2cbb-8852-4648-a661-1c53bca5fd55;
	Wed, 11 Nov 2020 21:52:39 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id c9so3559012wml.5
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=u7aluI1nuZMrv7pbrDJ9eHy15M26syg8r4F17Forexw=;
        b=AaXUZNDndVWbWZNl/rKaq8I01YExK21+vqGN9RXrjEsbsDX58L0Ql0iQ0dJPdqvMl5
         IeljEHP5YJFKhWPrgIU1Me2d5/adn2sNpj6cHk9NJ+coqLF+s1+ggGoYGgMLzjtrzDZd
         elBF2488JIz4wIzzfkQn9dUUlkj/2RH+LG2TTep0KDMy18KJ7mRrZ4ArNVu9+0m7Vl4O
         KfWi2nIKFgGBvAtBSHRZhBl+jTnPeXuWk1qN+HTlrqiEpXanJZhAen1SBlLL5Z7bFOER
         pSr/p1Mh/vMihla1PSLl7eLrDfOOYLR3Wz/vV2SoOfEKnMIBkqcieN+3bokSU2kycoDa
         TODw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=u7aluI1nuZMrv7pbrDJ9eHy15M26syg8r4F17Forexw=;
        b=NLz8bj7YjkTPWQpJPg+Dpzx52ohyZy0A9/ckc+vSIN3IIvDVDin9qglbKANet1mEle
         Z2k+juyT691bVeKBrY5Gn755h0bnyZD5Gq0t8tZ/LDW+BRLWfhFm7Mz/G461WhHJ/ogB
         3FQhp0SRWWFXmLt2MG11BpTlqvMWK8lNBtXGsOqOZ5zzT/9GhxLWzlco6JyFozGlqe4d
         JmY2rzxXSCRS9pj0au4B0pzdfsyNyxFbHNoMhU8UeU2b4mY1xoNymDyFXi7aSVZOj+A2
         DJuWaCnN01ZOkodmrmvvxctZWhtI3W97S+8O+wwJWB7G5q97O5nrJwNqpoJT7Vx5QvFv
         r6xQ==
X-Gm-Message-State: AOAM5311oYe7Bw8um+OgnvRxipZ+rH2DCT+pVQvmaDdRC+OpVdFfRw7p
	6aMwD6/VG7Qq5reUcadBmxkUKk2hBKA=
X-Google-Smtp-Source: ABdhPJxqxeloMs0OpqsobCRlKw8L4J9/KoX+2kcXY1LPtgscBNYgLhXT7DvlMqN+lXX38yWPulhAMA==
X-Received: by 2002:a1c:5585:: with SMTP id j127mr6489117wmb.90.1605131558299;
        Wed, 11 Nov 2020 13:52:38 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.37
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:37 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 01/15] xen/arm: Support detection of CPU features in other ID registers
Date: Wed, 11 Nov 2020 21:51:49 +0000
Message-Id: <20201111215203.80336-2-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

The current Arm boot_cpu_feature64() and boot_cpu_feature32() macros
are hardcoded to only detect features in ID_AA64PFR{0,1}_EL1 and
ID_PFR{0,1} respectively.

This patch replaces these macros with a new macro, boot_cpu_feature(),
which takes an explicit ID register name as an argument.

While we're here, cull cpu_feature64() and cpu_feature32() as they
have no callers (we only ever use the boot CPU features), and update
the printk() messages in setup.c to use the new macro.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/setup.c             |  8 +++---
 xen/include/asm-arm/cpufeature.h | 44 +++++++++++++++-----------------
 2 files changed, 24 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a..5121f06fc5 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -134,16 +134,16 @@ static void __init processor_id(void)
            cpu_has_gicv3 ? " GICv3-SysReg" : "");
 
     /* Warn user if we find unknown floating-point features */
-    if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
+    if ( cpu_has_fp && (boot_cpu_feature(pfr64, fp) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown Floating-point ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(fp));
+               boot_cpu_feature(pfr64, fp));
 
     /* Warn user if we find unknown AdvancedSIMD features */
-    if ( cpu_has_simd && (boot_cpu_feature64(simd) >= 2) )
+    if ( cpu_has_simd && (boot_cpu_feature(pfr64, simd) >= 2) )
         printk(XENLOG_WARNING "WARNING: Unknown AdvancedSIMD ID:%d, "
                "this may result in corruption on the platform\n",
-               boot_cpu_feature64(simd));
+               boot_cpu_feature(pfr64, simd));
 
     printk("  Debug Features: %016"PRIx64" %016"PRIx64"\n",
            boot_cpu_data.dbg64.bits[0], boot_cpu_data.dbg64.bits[1]);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 10878ead8a..f9281ea343 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -1,39 +1,35 @@
 #ifndef __ASM_ARM_CPUFEATURE_H
 #define __ASM_ARM_CPUFEATURE_H
 
+#define boot_cpu_feature(idreg, feat) (boot_cpu_data.idreg.feat)
+
 #ifdef CONFIG_ARM_64
-#define cpu_feature64(c, feat)         ((c)->pfr64.feat)
-#define boot_cpu_feature64(feat)       (boot_cpu_data.pfr64.feat)
-
-#define cpu_has_el0_32    (boot_cpu_feature64(el0) == 2)
-#define cpu_has_el0_64    (boot_cpu_feature64(el0) >= 1)
-#define cpu_has_el1_32    (boot_cpu_feature64(el1) == 2)
-#define cpu_has_el1_64    (boot_cpu_feature64(el1) >= 1)
-#define cpu_has_el2_32    (boot_cpu_feature64(el2) == 2)
-#define cpu_has_el2_64    (boot_cpu_feature64(el2) >= 1)
-#define cpu_has_el3_32    (boot_cpu_feature64(el3) == 2)
-#define cpu_has_el3_64    (boot_cpu_feature64(el3) >= 1)
-#define cpu_has_fp        (boot_cpu_feature64(fp) < 8)
-#define cpu_has_simd      (boot_cpu_feature64(simd) < 8)
-#define cpu_has_gicv3     (boot_cpu_feature64(gic) == 1)
+#define cpu_has_el0_32          (boot_cpu_feature(pfr64, el0) == 2)
+#define cpu_has_el0_64          (boot_cpu_feature(pfr64, el0) >= 1)
+#define cpu_has_el1_32          (boot_cpu_feature(pfr64, el1) == 2)
+#define cpu_has_el1_64          (boot_cpu_feature(pfr64, el1) >= 1)
+#define cpu_has_el2_32          (boot_cpu_feature(pfr64, el2) == 2)
+#define cpu_has_el2_64          (boot_cpu_feature(pfr64, el2) >= 1)
+#define cpu_has_el3_32          (boot_cpu_feature(pfr64, el3) == 2)
+#define cpu_has_el3_64          (boot_cpu_feature(pfr64, el3) >= 1)
+#define cpu_has_fp              (boot_cpu_feature(pfr64, fp) < 8)
+#define cpu_has_simd            (boot_cpu_feature(pfr64, simd) < 8)
+#define cpu_has_gicv3           (boot_cpu_feature(pfr64, gic) == 1)
 #endif
 
-#define cpu_feature32(c, feat)         ((c)->pfr32.feat)
-#define boot_cpu_feature32(feat)       (boot_cpu_data.pfr32.feat)
-
-#define cpu_has_arm       (boot_cpu_feature32(arm) == 1)
-#define cpu_has_thumb     (boot_cpu_feature32(thumb) >= 1)
-#define cpu_has_thumb2    (boot_cpu_feature32(thumb) >= 3)
-#define cpu_has_jazelle   (boot_cpu_feature32(jazelle) > 0)
-#define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
+#define cpu_has_arm       (boot_cpu_feature(pfr32, arm) == 1)
+#define cpu_has_thumb     (boot_cpu_feature(pfr32, thumb) >= 1)
+#define cpu_has_thumb2    (boot_cpu_feature(pfr32, thumb) >= 3)
+#define cpu_has_jazelle   (boot_cpu_feature(pfr32, jazelle) > 0)
+#define cpu_has_thumbee   (boot_cpu_feature(pfr32, thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
 #ifdef CONFIG_ARM_32
-#define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
+#define cpu_has_gentimer  (boot_cpu_feature(pfr32, gentimer) == 1)
 #else
 #define cpu_has_gentimer  (1)
 #endif
-#define cpu_has_security  (boot_cpu_feature32(security) > 0)
+#define cpu_has_security  (boot_cpu_feature(pfr32, security) > 0)
 
 #define ARM64_WORKAROUND_CLEAN_CACHE    0
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE    1
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25291.52933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2j-000657-Eq; Wed, 11 Nov 2020 21:52:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25291.52933; Wed, 11 Nov 2020 21:52:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2j-000650-Bd; Wed, 11 Nov 2020 21:52:41 +0000
Received: by outflank-mailman (input) for mailman id 25291;
 Wed, 11 Nov 2020 21:52:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy2h-00064v-T3
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:39 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ed17be0-ef64-47aa-9111-901f5964348e;
 Wed, 11 Nov 2020 21:52:38 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id a3so3668627wmb.5
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:38 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.35
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:36 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy2h-00064v-T3
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:39 +0000
X-Inumbo-ID: 7ed17be0-ef64-47aa-9111-901f5964348e
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7ed17be0-ef64-47aa-9111-901f5964348e;
	Wed, 11 Nov 2020 21:52:38 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id a3so3668627wmb.5
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=C3MiOpINH1JUMh9U2LTEPR+PiAsExGF06kWdgI0RNB8=;
        b=Yf6WuB6eY5V4AOeOmHhQIsdq9R0KIUEIHf6zV3mR7vdKK8TaaU3KMmOHdqF6n8YK76
         jmrx3ORr7GQGSgrT50nLhStX1Uijf1T+gZGYMAU1dMVhFRn0m6mcxcQSgXBnN05ED6AZ
         xBomaJB5ymu+eE8nHdoaifPeazXd/5MOoHqkH0x7XhDN70YM4MQ1/rLkKT3S5x5qRkYI
         u1NYm2ZRidMAHZWRCIiH9QQfKNef6EZDLsAOfEN5b60jHfY9Nmw48W14nBqxfKzHA6HY
         J1xfqM84mp/gT4+xMByQ8cU7JSm2ahwBpWMG8kLldYzp97Rdb4Hz2clpscqmsYDwAnd0
         vScA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=C3MiOpINH1JUMh9U2LTEPR+PiAsExGF06kWdgI0RNB8=;
        b=XPXCZ+GlFbGUg8xf6asnMTcMmNmnE64OQtFRXEyyLg4uslvOH0i8vlDLCiI0d/rL/r
         1R2139iFcBYY6qA1NZ95vUj5TtM5KwQg0M8ChFKZC8YkkMOsfBwNTPamyNbIAMnHMfqr
         IG1CdMRlovBJptb/0/f3xy5rhdGUl6kpQAK4YU7iq3Irlbqy+n5pG1fVK6iu9RvpTSqV
         Yh3rHeRMdRwBq+qIfUZvJnRWn8R8T8ianSWT8AO8L+bAmZgrT/tmypnArpXB9GUjdemW
         Bw+x89xwXMa8sieItbwFb5YS/QZ684fM2gVUAA09BBEt7n2epZED5Kxuoq7oiRjTywUV
         a4qg==
X-Gm-Message-State: AOAM531liykeDGh+A0HUNLjN2ELeakgQM1L6nwCahHwgGXMtxkXiKBUk
	dfFh3awOrGbN3g6EwAcbDUJmP6SuaPM=
X-Google-Smtp-Source: ABdhPJy7dYCbVtRGUl0azTrQJsb0BfTc7fWZ5x0q979F7T0J1FOIu9/2EBT0HpN7c3xNyilHDfcN4w==
X-Received: by 2002:a1c:4b18:: with SMTP id y24mr6552423wma.154.1605131557181;
        Wed, 11 Nov 2020 13:52:37 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.35
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:36 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 00/15] xen/arm: port Linux LL/SC and LSE atomics helpers to Xen.
Date: Wed, 11 Nov 2020 21:51:48 +0000
Message-Id: <20201111215203.80336-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>


Hey,

I've found some time to improve this series a bit:


Changes since RFC v1
====================

 - Broke the patches up into smaller chunks to aid readability.

 - As per Julien's feedback I've also introduced intermediary patches
   that first remove Xen's existing headers, then pull in the current
   Linux versions as-is. This means we only need to review the changes
   made while porting to Xen, rather than reviewing the existing Linux
   code.

 - Pull in Linux's <asm-generic/rwonce.h> as <xen/rwonce.h> for
   __READ_ONCE()/__WRITE_ONCE() instead of putting these in Xen's
   <xen/compiler.h>. While doing this, provide justification for
   dropping Linux's <linux/compiler_types.h> and relaxing the
   __READ_ONCE() usage of __unqual_scalar_typeof() to just typeof()
   (see patches #5 and #6).



Keeping the rest of the cover-letter unchanged as it would still be
good to discuss these things:


Arguments in favour of doing this
=================================

    - Lets SMMUv3 driver switch to using <asm/atomic.h> rather than
      maintaining its own implementation of the helpers.

    - Provides mitigation against XSA-295 [2], which affects both arm32
      and arm64, across all versions of Xen, and may allow a domU to
      maliciously or erroneously DoS the hypervisor.

    - All Armv8-A core implementations since ~2017 implement LSE, so
      there is an argument that support in Xen is long overdue. This is
      compounded by LSE atomics being more performant than LL/SC atomics
      in most real-world applications, especially at high core counts.

    - We may also see improved performance with LL/SC, as Linux provides
      helpers with relaxed ordering requirements that are currently not
      available in Xen. For this we would need to audit existing code to
      see where the more relaxed versions can safely be used.

    - Anything else?


Arguments against doing this
============================

    - Limited testing infrastructure in place to ensure use of new
      atomics helpers does not introduce new bugs and regressions. This
      is a particularly strong argument given how difficult it can be to
      identify and debug malfunctioning atomics. The old adage applies,
      "If it ain't broke, don't fix it."

    - Anything else?


Disclaimers
===========

 - This is a very rough first-pass effort intended to spur the
   discussions along.

 - Only build-tested on arm64 and arm32, *not* run-tested. I also built
   for x86_64 just to make sure I didn't inadvertently break that.

 - This version only tackles atomics and cmpxchg; I've not yet had a
   chance to look at locks so those are still using LL/SC.

 - The timeout variants of cmpxchg() (used by guest atomics) still use
   LL/SC regardless of LSE availability, as Linux does not provide these
   helpers; I just copied over the existing Xen ones.


Any further comments, guidance, and discussion on how to improve this 
approach and get LSE atomics support merged into Xen would be greatly
appreciated.

Thanks!
Ash.

[1] https://lore.kernel.org/xen-devel/13baac40-8b10-0def-4e44-0d8f655fcaf1@xen.org/
[2] https://xenbits.xen.org/xsa/advisory-295.txt

Ash Wilding (15):
  xen/arm: Support detection of CPU features in other ID registers
  xen/arm: Add detection of Armv8.1-LSE atomic instructions
  xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
  xen/arm: Delete Xen atomics helpers
  xen/arm: pull in Linux atomics helpers and dependencies
  xen: port Linux <asm-generic/rwonce.h> to Xen
  xen/arm: prepare existing Xen headers for Linux atomics
  xen/arm64: port Linux LL/SC atomics helpers to Xen
  xen/arm64: port Linux LSE atomics helpers to Xen
  xen/arm64: port Linux's arm64 cmpxchg.h to Xen
  xen/arm64: port Linux's arm64 atomic.h to Xen
  xen/arm64: port Linux's arm64 lse.h to Xen
  xen/arm32: port Linux's arm32 atomic.h to Xen
  xen/arm32: port Linux's arm32 cmpxchg.h to Xen
  xen/arm: remove dependency on gcc built-in __sync_fetch_and_add()

 xen/arch/arm/Kconfig                     |  11 +
 xen/arch/arm/Makefile                    |   1 +
 xen/arch/arm/lse.c                       |  13 +
 xen/arch/arm/setup.c                     |  13 +-
 xen/include/asm-arm/arm32/atomic.h       | 253 +++++++-----
 xen/include/asm-arm/arm32/cmpxchg.h      | 388 +++++++++++-------
 xen/include/asm-arm/arm32/system.h       |   2 +-
 xen/include/asm-arm/arm64/atomic.h       | 236 +++++------
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 231 +++++++++++
 xen/include/asm-arm/arm64/atomic_lse.h   | 246 +++++++++++
 xen/include/asm-arm/arm64/cmpxchg.h      | 497 ++++++++++++++++-------
 xen/include/asm-arm/arm64/lse.h          |  48 +++
 xen/include/asm-arm/arm64/system.h       |   2 +-
 xen/include/asm-arm/atomic.h             |  15 +-
 xen/include/asm-arm/cpufeature.h         |  57 +--
 xen/include/asm-arm/system.h             |   9 +-
 xen/include/xen/rwonce.h                 |  21 +
 17 files changed, 1469 insertions(+), 574 deletions(-)
 create mode 100644 xen/arch/arm/lse.c
 create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h
 create mode 100644 xen/include/asm-arm/arm64/lse.h
 create mode 100644 xen/include/xen/rwonce.h

-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:52:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:52:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25294.52969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2x-0006DY-GR; Wed, 11 Nov 2020 21:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25294.52969; Wed, 11 Nov 2020 21:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy2x-0006DO-BE; Wed, 11 Nov 2020 21:52:55 +0000
Received: by outflank-mailman (input) for mailman id 25294;
 Wed, 11 Nov 2020 21:52:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy2w-00064v-Ov
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:54 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f491eefb-b04a-4d17-a8d0-0fde8a7e6418;
 Wed, 11 Nov 2020 21:52:41 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id w24so3718977wmi.0
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:41 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:39 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy2w-00064v-Ov
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:54 +0000
X-Inumbo-ID: f491eefb-b04a-4d17-a8d0-0fde8a7e6418
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f491eefb-b04a-4d17-a8d0-0fde8a7e6418;
	Wed, 11 Nov 2020 21:52:41 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id w24so3718977wmi.0
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ILeuG5HDLbKOJMGFigti9pLkxVcIcfbMYnxrhvicuhs=;
        b=An+8CipOTaoiE77z9aL89YeTyY7pLgZ2kaLDHIAXOkU1pGPFomBfw0AajX8cqN2V62
         ZFsXaQMUSrZbGWHIBCSze3xG7u6Y0Dgq8Hd8oyTqG+w5t0juFbp5ch0ULIiluky58wg1
         mZqq4mr+fruNareInBW0rw3GhzIwD6JhPlpdsbEkhsbUxP9OxE5LS8Q4PX3fYiOBPlyM
         gtti3xPSoN9XJyt+w8viVVe5r6YgJ/k1aw+g8T0WKinw1Gz91Y0aXxbEVElpJLC0hJtK
         GNLy5f8jwq+2qGLIyl9olQLp/kgvP2zFoOMjdeBAFgccghfQnfcFX5jYsMplpqrBrD7N
         DRFQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ILeuG5HDLbKOJMGFigti9pLkxVcIcfbMYnxrhvicuhs=;
        b=U6/alKj2phDC3EdYPbxNA7PIstQ7AOmeSEOOrM83Hfdxziomgd/Ccs04Tbx32zOiJa
         RafwvdppsKJE/xqOj8PxYWCBZPKbs41MVCgEvYz5ycqPLr91HpuZat43mHI6RLbBoz+w
         jXyLxubL7k4Dvu99NuSsV5bLkujDlnTCFD6aVjaNDUZKvZpPLejkAGkuBuOvK9UjzQWt
         91TrBetBmJ2QOmB11tXUkeVCLCwwStBZkmGOyxZbObYIiQ2khOoOn89oUbwY+K9kP5KH
         kLAU2hwjXQsDv2FBoveAE5xl1gzIB3K47xE9pDv7V6hkHG0a4HcukodjMPUZGIF0PfJG
         rmxA==
X-Gm-Message-State: AOAM533IMiVxrNjlgC/jqJdi9OaE4EihlzElDxXFduXV7UGtXpDtRbqB
	lSmPGa/MVy0+/EtAW7N1g1I5d5mhj1c=
X-Google-Smtp-Source: ABdhPJwv9av7Lz91Xd/usJaADjr8A2KccXvtsZlJPdPKY01xUJM6J8xIOnuQYFnTLXt8u5BtGyH1lQ==
X-Received: by 2002:a1c:1906:: with SMTP id 6mr6054100wmz.87.1605131560190;
        Wed, 11 Nov 2020 13:52:40 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.39
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:39 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 03/15] xen/arm: Add ARM64_HAS_LSE_ATOMICS hwcap
Date: Wed, 11 Nov 2020 21:51:51 +0000
Message-Id: <20201111215203.80336-4-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

This patch introduces the ARM64_HAS_LSE_ATOMICS hwcap.

While doing this, CONFIG_ARM64_LSE_ATOMICS is added to control whether
the hwcap is actually detected and set at runtime. Without this Kconfig
option set, we will always fall back on LL/SC atomics using Armv8.0
exclusive accesses.

Note this patch does not actually add the ALTERNATIVE() switching based
on the hwcap being detected and set; that comes later in the series.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/arch/arm/Kconfig             | 11 +++++++++++
 xen/arch/arm/Makefile            |  1 +
 xen/arch/arm/lse.c               | 13 +++++++++++++
 xen/include/asm-arm/cpufeature.h |  3 ++-
 4 files changed, 27 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/lse.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..febc41e492 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -78,6 +78,17 @@ config SBSA_VUART_CONSOLE
 	  Allows a guest to use SBSA Generic UART as a console. The
 	  SBSA Generic UART implements a subset of ARM PL011 UART.
 
+config ARM64_LSE_ATOMICS
+	bool "Armv8.1-LSE Atomics"
+	depends on ARM_64 && HAS_ALTERNATIVE
+	default y
+	---help---
+	When set, dynamically patch Xen at runtime to use Armv8.1-LSE
+	atomics when supported by the system.
+
+	When unset, or when Armv8.1-LSE atomics are not supported by the
+	system, fall back on LL/SC atomics using Armv8.0 exclusive accesses.
+
 config ARM_SSBD
 	bool "Speculative Store Bypass Disable" if EXPERT
 	depends on HAS_ALTERNATIVE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bb..cadd0ad253 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -63,6 +63,7 @@ obj-y += vsmc.o
 obj-y += vpsci.o
 obj-y += vuart.o
 extra-y += $(TARGET_SUBARCH)/head.o
+obj-$(CONFIG_ARM64_LSE_ATOMICS) += lse.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/lse.c b/xen/arch/arm/lse.c
new file mode 100644
index 0000000000..8274dac671
--- /dev/null
+++ b/xen/arch/arm/lse.c
@@ -0,0 +1,13 @@
+
+#include <asm/cpufeature.h>
+#include <xen/init.h>
+
+static int __init update_lse_caps(void)
+{
+    if ( cpu_has_lse_atomics )
+        cpus_set_cap(ARM64_HAS_LSE_ATOMICS);
+
+    return 0;
+}
+
+__initcall(update_lse_caps);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 2366926e82..48c172ee29 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -42,8 +42,9 @@
 #define ARM_SSBD 7
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
+#define ARM64_HAS_LSE_ATOMICS 10
 
-#define ARM_NCAPS           10
+#define ARM_NCAPS           11
 
 #ifndef __ASSEMBLY__
 
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:53:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:53:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25295.52981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy32-0006K8-QZ; Wed, 11 Nov 2020 21:53:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25295.52981; Wed, 11 Nov 2020 21:53:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy32-0006Jz-ME; Wed, 11 Nov 2020 21:53:00 +0000
Received: by outflank-mailman (input) for mailman id 25295;
 Wed, 11 Nov 2020 21:52:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy31-00064v-P1
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:59 +0000
Received: from mail-wm1-x334.google.com (unknown [2a00:1450:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0773842-facd-4d42-8abf-566e5bfed63a;
 Wed, 11 Nov 2020 21:52:43 +0000 (UTC)
Received: by mail-wm1-x334.google.com with SMTP id c9so3559143wml.5
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:43 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.40
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:41 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy31-00064v-P1
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:52:59 +0000
X-Inumbo-ID: e0773842-facd-4d42-8abf-566e5bfed63a
Received: from mail-wm1-x334.google.com (unknown [2a00:1450:4864:20::334])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e0773842-facd-4d42-8abf-566e5bfed63a;
	Wed, 11 Nov 2020 21:52:43 +0000 (UTC)
Received: by mail-wm1-x334.google.com with SMTP id c9so3559143wml.5
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3+/74+RXl1H91X5ZF9JOoW4t9e/MjX4Wl/UlElI//SE=;
        b=YNtR/mlRT3gzJELs4HDv+CjYlKzeA9m1SJjlGWxGo37smGmcx1ICeSJKDNHwWkGzJl
         TPapNwU+9WDbOBWUhoXtVU0w76Eub/Vjv15ma6WNTE41bo0Gsyw3vH8uP/J03PTi4FZm
         1dd7C1ak5dtWlarbez6hrpuYS7BaARZjrxDsnUvr18LpO6sNvhNfuQppy8GBidR7hBNI
         /8SjYpQ32yIkEjM9duRVd9KExhHcm7MC5jIK/aqxCcEvxMixJU/UutXyRHfORF+Xudbu
         q+owN+ghJ7KKRohwSU2ONwWesOkGC3L/eRvt9QMa05Xa9sFkEURd/6Omz8FQtuGroIcq
         jBhw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3+/74+RXl1H91X5ZF9JOoW4t9e/MjX4Wl/UlElI//SE=;
        b=g472sGGJojv2gsoOEmaNS+vAERiATuAo/OQg6q0v1Dv00d6N/l4+xIJkPdfrjtsuAy
         4w1UVaT62HM3+OKCJA7hpT0i1p0HQNf/DzTFpBd+2PvSBgGTKDZ2zcGlbz1FkWkAjCzB
         s0hCUyF+Rvzs6T/YzzP0uRRGi2cQ3sNxm9FB8/NlPY7oWnQzjtjt+TZvJ7vgH/HLZNLU
         AvRxcNltzeGot4eYtAywsk/7wUqxfUNKZaHzDna0GYnpqFqOujrcIpdohwHBIJvA7Ebh
         uOi6y34CdUzXJf8bFDbraXfNF+pqiSFmD402DpxGHegwqJ4Zc0fQJlLHGsF3fiq+9iiN
         UP6g==
X-Gm-Message-State: AOAM530+jSKluZzvuDdQWCX5hRt3KmxItOYDQlovQrDj37S4kouTJck2
	Jeb1HxPlPFJVVY/heZOMqQr4Qe3D/Zs=
X-Google-Smtp-Source: ABdhPJxjbmQePSEubjeQTycD3G62AK4kgy1YGoyzSFvMK+1rR81b72aa16dQZoW5PugXxPajldPhBQ==
X-Received: by 2002:a05:600c:2319:: with SMTP id 25mr6471524wmo.102.1605131561937;
        Wed, 11 Nov 2020 13:52:41 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.40
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:41 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 04/15] xen/arm: Delete Xen atomics helpers
Date: Wed, 11 Nov 2020 21:51:52 +0000
Message-Id: <20201111215203.80336-5-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

To keep the diffs clean and the series bisectable, let's delete the
existing Xen atomics helpers before pulling in the Linux versions.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h  | 175 ---------------------
 xen/include/asm-arm/arm32/cmpxchg.h | 229 ----------------------------
 xen/include/asm-arm/arm64/atomic.h  | 148 ------------------
 xen/include/asm-arm/arm64/cmpxchg.h | 183 ----------------------
 4 files changed, 735 deletions(-)
 delete mode 100644 xen/include/asm-arm/arm32/atomic.h
 delete mode 100644 xen/include/asm-arm/arm32/cmpxchg.h
 delete mode 100644 xen/include/asm-arm/arm64/atomic.h
 delete mode 100644 xen/include/asm-arm/arm64/cmpxchg.h

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
deleted file mode 100644
index 2832a72792..0000000000
--- a/xen/include/asm-arm/arm32/atomic.h
+++ /dev/null
@@ -1,175 +0,0 @@
-/*
- *  arch/arm/include/asm/atomic.h
- *
- *  Copyright (C) 1996 Russell King.
- *  Copyright (C) 2002 Deep Blue Solutions Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-#ifndef __ARCH_ARM_ARM32_ATOMIC__
-#define __ARCH_ARM_ARM32_ATOMIC__
-
-/*
- * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
- * store exclusive to ensure that these are atomic.  We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_add\n"
-"1:	ldrex	%0, [%3]\n"
-"	add	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-}
-
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic_add_return\n"
-"1:	ldrex	%0, [%3]\n"
-"	add	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-
-	smp_mb();
-
-	return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_sub\n"
-"1:	ldrex	%0, [%3]\n"
-"	sub	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-}
-
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic_sub_return\n"
-"1:	ldrex	%0, [%3]\n"
-"	sub	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (i)
-	: "cc");
-
-	smp_mb();
-
-	return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic_and\n"
-"1:	ldrex	%0, [%3]\n"
-"	and	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "Ir" (m)
-	: "cc");
-}
-
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-	int oldval;
-	unsigned long res;
-
-	smp_mb();
-	prefetchw(&ptr->counter);
-
-	do {
-		__asm__ __volatile__("@ atomic_cmpxchg\n"
-		"ldrex	%1, [%3]\n"
-		"mov	%0, #0\n"
-		"teq	%1, %4\n"
-		"strexeq %0, %5, [%3]\n"
-		    : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
-		    : "r" (&ptr->counter), "Ir" (old), "r" (new)
-		    : "cc");
-	} while (res);
-
-	smp_mb();
-
-	return oldval;
-}
-
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-	int oldval, newval;
-	unsigned long tmp;
-
-	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__ ("@ atomic_add_unless\n"
-"1:	ldrex	%0, [%4]\n"
-"	teq	%0, %5\n"
-"	beq	2f\n"
-"	add	%1, %0, %6\n"
-"	strex	%2, %1, [%4]\n"
-"	teq	%2, #0\n"
-"	bne	1b\n"
-"2:"
-	: "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "r" (u), "r" (a)
-	: "cc");
-
-	if (oldval != u)
-		smp_mb();
-
-	return oldval;
-}
-
-#endif /* __ARCH_ARM_ARM32_ATOMIC__ */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
deleted file mode 100644
index b0bd1d8b68..0000000000
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ /dev/null
@@ -1,229 +0,0 @@
-#ifndef __ASM_ARM32_CMPXCHG_H
-#define __ASM_ARM32_CMPXCHG_H
-
-#include <xen/prefetch.h>
-
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-	unsigned long ret;
-	unsigned int tmp;
-
-	smp_mb();
-	prefetchw((const void *)ptr);
-
-	switch (size) {
-	case 1:
-		asm volatile("@	__xchg1\n"
-		"1:	ldrexb	%0, [%3]\n"
-		"	strexb	%1, %2, [%3]\n"
-		"	teq	%1, #0\n"
-		"	bne	1b"
-			: "=&r" (ret), "=&r" (tmp)
-			: "r" (x), "r" (ptr)
-			: "memory", "cc");
-		break;
-	case 4:
-		asm volatile("@	__xchg4\n"
-		"1:	ldrex	%0, [%3]\n"
-		"	strex	%1, %2, [%3]\n"
-		"	teq	%1, #0\n"
-		"	bne	1b"
-			: "=&r" (ret), "=&r" (tmp)
-			: "r" (x), "r" (ptr)
-			: "memory", "cc");
-		break;
-	default:
-		__bad_xchg(ptr, size), ret = 0;
-		break;
-	}
-	smp_mb();
-
-	return ret;
-}
-
-#define xchg(ptr,x) \
-	((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-
-/*
- * Atomic compare and exchange.  Compare OLD with MEM, if identical,
- * store NEW in MEM.  Return the initial value in MEM.  Success is
- * indicated by comparing RETURN with OLD.
- */
-
-extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
-
-#define __CMPXCHG_CASE(sz, name)					\
-static inline bool __cmpxchg_case_##name(volatile void *ptr,		\
-					 unsigned long *old,		\
-					 unsigned long new,		\
-					 bool timeout,			\
-					 unsigned int max_try)		\
-{									\
-	unsigned long oldval;						\
-	unsigned long res;						\
-									\
-	do {								\
-		asm volatile("@ __cmpxchg_case_" #name "\n"		\
-		"	ldrex" #sz "	%1, [%2]\n"			\
-		"	mov	%0, #0\n"				\
-		"	teq	%1, %3\n"				\
-		"	strex" #sz "eq %0, %4, [%2]\n"			\
-		: "=&r" (res), "=&r" (oldval)				\
-		: "r" (ptr), "Ir" (*old), "r" (new)			\
-		: "memory", "cc");					\
-									\
-		if (!res)						\
-			break;						\
-	} while (!timeout || ((--max_try) > 0));			\
-									\
-	*old = oldval;							\
-									\
-	return !res;							\
-}
-
-__CMPXCHG_CASE(b, 1)
-__CMPXCHG_CASE(h, 2)
-__CMPXCHG_CASE( , 4)
-
-static inline bool __cmpxchg_case_8(volatile uint64_t *ptr,
-			 	    uint64_t *old,
-			 	    uint64_t new,
-			 	    bool timeout,
-				    unsigned int max_try)
-{
-	uint64_t oldval;
-	uint64_t res;
-
-	do {
-		asm volatile(
-		"	ldrexd		%1, %H1, [%3]\n"
-		"	teq		%1, %4\n"
-		"	teqeq		%H1, %H4\n"
-		"	movne		%0, #0\n"
-		"	movne		%H0, #0\n"
-		"	bne		2f\n"
-		"	strexd		%0, %5, %H5, [%3]\n"
-		"2:"
-		: "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
-		: "r" (ptr), "r" (*old), "r" (new)
-		: "memory", "cc");
-		if (!res)
-			break;
-	} while (!timeout || ((--max_try) > 0));
-
-	*old = oldval;
-
-	return !res;
-}
-
-static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-					unsigned long new, int size,
-					bool timeout, unsigned int max_try)
-{
-	prefetchw((const void *)ptr);
-
-	switch (size) {
-	case 1:
-		return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-	case 2:
-		return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-	case 4:
-		return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-	default:
-		return __bad_cmpxchg(ptr, size);
-	}
-
-	ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-					     unsigned long old,
-					     unsigned long new,
-					     int size)
-{
-	smp_mb();
-	if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
-}
-
-/*
- * The helper may fail to update the memory if the action takes too long.
- *
- * @old: On call the value pointed contains the expected old value. It will be
- * updated to the actual old value.
- * @max_try: Maximum number of iterations
- *
- * The helper will return true when the update has succeeded (i.e no
- * timeout) and false if the update has failed.
- */
-static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-					    unsigned long *old,
-					    unsigned long new,
-					    int size,
-					    unsigned int max_try)
-{
-	bool ret;
-
-	smp_mb();
-	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-	smp_mb();
-
-	return ret;
-}
-
-/*
- * The helper may fail to update the memory if the action takes too long.
- *
- * @old: On call the value pointed contains the expected old value. It will be
- * updated to the actual old value.
- * @max_try: Maximum number of iterations
- *
- * The helper will return true when the update has succeeded (i.e no
- * timeout) and false if the update has failed.
- */
-static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr,
-					      uint64_t *old,
-					      uint64_t new,
-					      unsigned int max_try)
-{
-	bool ret;
-
-	smp_mb();
-	ret = __cmpxchg_case_8(ptr, old, new, true, max_try);
-	smp_mb();
-
-	return ret;
-}
-
-#define cmpxchg(ptr,o,n)						\
-	((__typeof__(*(ptr)))__cmpxchg((ptr),				\
-				       (unsigned long)(o),		\
-				       (unsigned long)(n),		\
-				       sizeof(*(ptr))))
-
-static inline uint64_t cmpxchg64(volatile uint64_t *ptr,
-				 uint64_t old,
-				 uint64_t new)
-{
-	smp_mb();
-	if (!__cmpxchg_case_8(ptr, &old, new, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
deleted file mode 100644
index 2d42567866..0000000000
--- a/xen/include/asm-arm/arm64/atomic.h
+++ /dev/null
@@ -1,148 +0,0 @@
-/*
- * Based on arch/arm64/include/asm/atomic.h
- * which in turn is
- * Based on arch/arm/include/asm/atomic.h
- *
- * Copyright (C) 1996 Russell King.
- * Copyright (C) 2002 Deep Blue Solutions Ltd.
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-#ifndef __ARCH_ARM_ARM64_ATOMIC
-#define __ARCH_ARM_ARM64_ATOMIC
-
-/*
- * AArch64 UP and SMP safe atomic ops.  We use load exclusive and
- * store exclusive to ensure that these are atomic.  We may loop
- * to ensure that the update happens.
- */
-static inline void atomic_add(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_add\n"
-"1:	ldxr	%w0, %2\n"
-"	add	%w0, %w0, %w3\n"
-"	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i));
-}
-
-static inline int atomic_add_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_add_return\n"
-"1:	ldxr	%w0, %2\n"
-"	add	%w0, %w0, %w3\n"
-"	stlxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i)
-	: "memory");
-
-	smp_mb();
-	return result;
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_sub\n"
-"1:	ldxr	%w0, %2\n"
-"	sub	%w0, %w0, %w3\n"
-"	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i));
-}
-
-static inline int atomic_sub_return(int i, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_sub_return\n"
-"1:	ldxr	%w0, %2\n"
-"	sub	%w0, %w0, %w3\n"
-"	stlxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (i)
-	: "memory");
-
-	smp_mb();
-	return result;
-}
-
-static inline void atomic_and(int m, atomic_t *v)
-{
-	unsigned long tmp;
-	int result;
-
-	asm volatile("// atomic_and\n"
-"1:	ldxr	%w0, %2\n"
-"	and	%w0, %w0, %w3\n"
-"	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 1b"
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	: "Ir" (m));
-}
-
-static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
-{
-	unsigned long tmp;
-	int oldval;
-
-	smp_mb();
-
-	asm volatile("// atomic_cmpxchg\n"
-"1:	ldxr	%w1, %2\n"
-"	cmp	%w1, %w3\n"
-"	b.ne	2f\n"
-"	stxr	%w0, %w4, %2\n"
-"	cbnz	%w0, 1b\n"
-"2:"
-	: "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter)
-	: "Ir" (old), "r" (new)
-	: "cc");
-
-	smp_mb();
-	return oldval;
-}
-
-static inline int __atomic_add_unless(atomic_t *v, int a, int u)
-{
-	int c, old;
-
-	c = atomic_read(v);
-	while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
-		c = old;
-	return c;
-}
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
deleted file mode 100644
index 10e4edc022..0000000000
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ /dev/null
@@ -1,183 +0,0 @@
-#ifndef __ASM_ARM64_CMPXCHG_H
-#define __ASM_ARM64_CMPXCHG_H
-
-extern void __bad_xchg(volatile void *, int);
-
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
-	unsigned long ret, tmp;
-
-	switch (size) {
-	case 1:
-		asm volatile("//	__xchg1\n"
-		"1:	ldxrb	%w0, %2\n"
-		"	stlxrb	%w1, %w3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u8 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	case 2:
-		asm volatile("//	__xchg2\n"
-		"1:	ldxrh	%w0, %2\n"
-		"	stlxrh	%w1, %w3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u16 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	case 4:
-		asm volatile("//	__xchg4\n"
-		"1:	ldxr	%w0, %2\n"
-		"	stlxr	%w1, %w3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u32 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	case 8:
-		asm volatile("//	__xchg8\n"
-		"1:	ldxr	%0, %2\n"
-		"	stlxr	%w1, %3, %2\n"
-		"	cbnz	%w1, 1b\n"
-			: "=&r" (ret), "=&r" (tmp), "+Q" (*(u64 *)ptr)
-			: "r" (x)
-			: "memory");
-		break;
-	default:
-		__bad_xchg(ptr, size), ret = 0;
-		break;
-	}
-
-	smp_mb();
-	return ret;
-}
-
-#define xchg(ptr,x) \
-({ \
-	__typeof__(*(ptr)) __ret; \
-	__ret = (__typeof__(*(ptr))) \
-		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \
-	__ret; \
-})
-
-extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
-
-#define __CMPXCHG_CASE(w, sz, name)					\
-static inline bool __cmpxchg_case_##name(volatile void *ptr,		\
-					 unsigned long *old,		\
-					 unsigned long new,		\
-					 bool timeout,			\
-					 unsigned int max_try)		\
-{									\
-	unsigned long oldval;						\
-	unsigned long res;						\
-									\
-	do {								\
-		asm volatile("// __cmpxchg_case_" #name "\n"		\
-		"	ldxr" #sz "	%" #w "1, %2\n"			\
-		"	mov	%w0, #0\n"				\
-		"	cmp	%" #w "1, %" #w "3\n"			\
-		"	b.ne	1f\n"					\
-		"	stxr" #sz "	%w0, %" #w "4, %2\n"		\
-		"1:\n"							\
-		: "=&r" (res), "=&r" (oldval),				\
-		  "+Q" (*(unsigned long *)ptr)				\
-		: "Ir" (*old), "r" (new)				\
-		: "cc");						\
-									\
-		if (!res)						\
-			break;						\
-	} while (!timeout || ((--max_try) > 0));			\
-									\
-	*old = oldval;							\
-									\
-	return !res;							\
-}
-
-__CMPXCHG_CASE(w, b, 1)
-__CMPXCHG_CASE(w, h, 2)
-__CMPXCHG_CASE(w,  , 4)
-__CMPXCHG_CASE( ,  , 8)
-
-static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
-					unsigned long new, int size,
-					bool timeout, unsigned int max_try)
-{
-	switch (size) {
-	case 1:
-		return __cmpxchg_case_1(ptr, old, new, timeout, max_try);
-	case 2:
-		return __cmpxchg_case_2(ptr, old, new, timeout, max_try);
-	case 4:
-		return __cmpxchg_case_4(ptr, old, new, timeout, max_try);
-	case 8:
-		return __cmpxchg_case_8(ptr, old, new, timeout, max_try);
-	default:
-		return __bad_cmpxchg(ptr, size);
-	}
-
-	ASSERT_UNREACHABLE();
-}
-
-static always_inline unsigned long __cmpxchg(volatile void *ptr,
-					     unsigned long old,
-					     unsigned long new,
-					     int size)
-{
-	smp_mb();
-	if (!__int_cmpxchg(ptr, &old, new, size, false, 0))
-		ASSERT_UNREACHABLE();
-	smp_mb();
-
-	return old;
-}
-
-/*
- * The helper may fail to update the memory if the action takes too long.
- *
- * @old: On call the value pointed contains the expected old value. It will be
- * updated to the actual old value.
- * @max_try: Maximum number of iterations
- *
- * The helper will return true when the update has succeeded (i.e no
- * timeout) and false if the update has failed.
- */
-static always_inline bool __cmpxchg_timeout(volatile void *ptr,
-					    unsigned long *old,
-					    unsigned long new,
-					    int size,
-					    unsigned int max_try)
-{
-	bool ret;
-
-	smp_mb();
-	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
-	smp_mb();
-
-	return ret;
-}
-
-#define cmpxchg(ptr, o, n) \
-({ \
-	__typeof__(*(ptr)) __ret; \
-	__ret = (__typeof__(*(ptr))) \
-		__cmpxchg((ptr), (unsigned long)(o), (unsigned long)(n), \
-			  sizeof(*(ptr))); \
-	__ret; \
-})
-
-#define cmpxchg64(ptr, o, n) cmpxchg(ptr, o, n)
-
-#define __cmpxchg64_timeout(ptr, old, new, max_try)	\
-	__cmpxchg_timeout(ptr, old, new, 8, max_try)
-
-#endif
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 8
- * indent-tabs-mode: t
- * End:
- */
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25298.52993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy38-0006PT-7r; Wed, 11 Nov 2020 21:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25298.52993; Wed, 11 Nov 2020 21:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy38-0006PJ-2X; Wed, 11 Nov 2020 21:53:06 +0000
Received: by outflank-mailman (input) for mailman id 25298;
 Wed, 11 Nov 2020 21:53:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy36-00064v-PL
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:04 +0000
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 842d170a-abf2-4a60-8904-60b84d13b358;
 Wed, 11 Nov 2020 21:52:47 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id 23so4924528wmg.1
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:47 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.45
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:45 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy36-00064v-PL
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:04 +0000
X-Inumbo-ID: 842d170a-abf2-4a60-8904-60b84d13b358
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 842d170a-abf2-4a60-8904-60b84d13b358;
	Wed, 11 Nov 2020 21:52:47 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id 23so4924528wmg.1
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=sGzJpFoTaZnCaM6xOC0KlI8sjb8W5MHW3wWMiououtA=;
        b=lzrosJwBjf2JJ1R6hDwZGAfEDibeHSG4jsEPUGW4dyTwr9h7yHFIWuauR9MfE/W0Z7
         kCLZ54Z/GachrPhKRopStg6qckWSh/m3aK56rgkZprKQeLZ2M1devLrjrHt+rxHD3Wil
         v+MwST5ssBLkCAaoKR7eRZRPGgMJYExDtRg8PrSMuPZMoGL+t6qORpwd6IaUbR14k3VM
         05deiW3/TcX+UPafP9k0EiRK3dIIj9JK3wLZGssvDHwhb58dmV7CrM82ed2KPrTV4X7w
         4ivCG/OzQPQL1prmrU1tnUqoWhE6TA1mrromhM60WpdfA5Zn4IkON/AlnxcuUeSIWsXa
         ZU8A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=sGzJpFoTaZnCaM6xOC0KlI8sjb8W5MHW3wWMiououtA=;
        b=p4Zp7ciQqTPLZrPeZwXl/g1SR6dOtuO01Yzj0VxTeX6hwT3itMx7iabZT9cDSbmVcH
         Iaioqft0ZkHI4wwbUcmN5LzLmfvUOIlvcetgahrVVGr8U+/ZqVZ8jOEWzhY8IPas7Hxz
         kFaKmVYAzHOvWH+jne0hCTfOyy35F8PSdRU47D1nVlh678sHaLDMplHoAw6mVzjysqrj
         GCcZCOCoYxOF9IzSswRdapoyMkuqIAHd9aRTAcOHELm54rTzl0uQjvmLwEQwoPKetiYG
         8EWeGytf9iqdepgEndKmnJDQkix4joCCoAyS83H17kPlN9lkgbZtXb+Re854eygz29E1
         SF9Q==
X-Gm-Message-State: AOAM53127UzvnZaYcboeaHN4+Fcq/wDBh5vrv3ewGGMWayOOAlXOOgHQ
	bU4Ytv/z7nuZiXUxiwNSjzKJjOgVSKI=
X-Google-Smtp-Source: ABdhPJwpE+zpDXpZX3INJNNQfvF/71AxIIOCQ6cvj9NkP66u4kzrVa6Nr6tLpWEWx5Vkl/8qZyMo1w==
X-Received: by 2002:a7b:c3d2:: with SMTP id t18mr6509009wmj.112.1605131566282;
        Wed, 11 Nov 2020 13:52:46 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.45
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:45 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 06/15] xen: port Linux <asm-generic/rwonce.h> to Xen
Date: Wed, 11 Nov 2020 21:51:54 +0000
Message-Id: <20201111215203.80336-7-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

 - Drop kasan related helpers.

 - Drop READ_ONCE() and WRITE_ONCE(); the __* versions are fine for now,
   as the only callers in Xen are the arm32 atomics helpers, which
   always access an atomic_t's counter member, which is an int. This
   means we can switch the arm32 atomics helpers over to using the __*
   versions, as the arm64 code does, removing a dependency on
   <linux/compiler_types.h> for __native_word().

 - Relax __unqual_scalar_typeof() in __READ_ONCE() to just typeof().
   As above, the only callers in Xen are the arm32/arm64 atomics
   helpers, which always access an atomic_t's counter member as a
   regular (int *) that doesn't need unqualifying. This means we can
   remove the other dependency on <linux/compiler_types.h>.

Please see the previous patch in this series for the expanded rationale
on why not having to port <linux/compiler_types.h> to Xen makes life
easier.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/xen/rwonce.h | 79 +++-------------------------------------
 1 file changed, 5 insertions(+), 74 deletions(-)

diff --git a/xen/include/xen/rwonce.h b/xen/include/xen/rwonce.h
index 6b47392d1c..d001e7e41e 100644
--- a/xen/include/xen/rwonce.h
+++ b/xen/include/xen/rwonce.h
@@ -1,90 +1,21 @@
-/* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Prevent the compiler from merging or refetching reads or writes. The
- * compiler is also forbidden from reordering successive instances of
- * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
- * particular ordering. One way to make the compiler aware of ordering is to
- * put the two invocations of READ_ONCE or WRITE_ONCE in different C
- * statements.
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * These two macros will also work on aggregate data types like structs or
- * unions.
- *
- * Their two major use cases are: (1) Mediating communication between
- * process-level code and irq/NMI handlers, all running on the same CPU,
- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
- * mutilate accesses that either do not require ordering or that interact
- * with an explicit memory barrier or atomic instruction that provides the
- * required ordering.
+ * SPDX-License-Identifier: GPL-2.0
  */
+
 #ifndef __ASM_GENERIC_RWONCE_H
 #define __ASM_GENERIC_RWONCE_H
 
 #ifndef __ASSEMBLY__
 
-#include <linux/compiler_types.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-/*
- * Yes, this permits 64-bit accesses on 32-bit architectures. These will
- * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
- * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
- * (e.g. a virtual address) and a strong prevailing wind.
- */
-#define compiletime_assert_rwonce_type(t)					\
-	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),	\
-		"Unsupported access size for {READ,WRITE}_ONCE().")
-
-/*
- * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
- * atomicity. Note that this may result in tears!
- */
-#ifndef __READ_ONCE
-#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
-#endif
-
-#define READ_ONCE(x)							\
-({									\
-	compiletime_assert_rwonce_type(x);				\
-	__READ_ONCE(x);							\
-})
+#define __READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))
 
 #define __WRITE_ONCE(x, val)						\
 do {									\
 	*(volatile typeof(x) *)&(x) = (val);				\
 } while (0)
 
-#define WRITE_ONCE(x, val)						\
-do {									\
-	compiletime_assert_rwonce_type(x);				\
-	__WRITE_ONCE(x, val);						\
-} while (0)
-
-static __no_sanitize_or_inline
-unsigned long __read_once_word_nocheck(const void *addr)
-{
-	return __READ_ONCE(*(unsigned long *)addr);
-}
-
-/*
- * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN/KCSAN. This is
- * usually used by unwinding code when walking the stack of a running process.
- */
-#define READ_ONCE_NOCHECK(x)						\
-({									\
-	compiletime_assert(sizeof(x) == sizeof(unsigned long),		\
-		"Unsupported access size for READ_ONCE_NOCHECK().");	\
-	(typeof(x))__read_once_word_nocheck(&(x));			\
-})
-
-static __no_kasan_or_inline
-unsigned long read_word_at_a_time(const void *addr)
-{
-	kasan_check_read(addr, 1);
-	return *(unsigned long *)addr;
-}
 
 #endif /* __ASSEMBLY__ */
-#endif	/* __ASM_GENERIC_RWONCE_H */
\ No newline at end of file
+#endif	/* __ASM_GENERIC_RWONCE_H */
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:53:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25300.53005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3C-0006X1-QU; Wed, 11 Nov 2020 21:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25300.53005; Wed, 11 Nov 2020 21:53:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3C-0006Wo-Kh; Wed, 11 Nov 2020 21:53:10 +0000
Received: by outflank-mailman (input) for mailman id 25300;
 Wed, 11 Nov 2020 21:53:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3B-00064v-PU
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:09 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61dd892c-4a6a-4f81-9b53-1d30211448c1;
 Wed, 11 Nov 2020 21:52:48 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id j7so3995880wrp.3
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:48 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.46
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:46 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3B-00064v-PU
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:09 +0000
X-Inumbo-ID: 61dd892c-4a6a-4f81-9b53-1d30211448c1
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 61dd892c-4a6a-4f81-9b53-1d30211448c1;
	Wed, 11 Nov 2020 21:52:48 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id j7so3995880wrp.3
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=lKBI14xArPA18L2e3mixfdhgDGBWzfLh6HwsMuwD7s4=;
        b=CBI9vPn64FgxZuhJIGgbZ3efbwKrz8NpUktTKCWRjraau29LdgEClES3irHVLpGUm1
         PH9SNGgWKskLHy7zAA2Kjx+f91g2KNDKIDtuiqP8qYLuKzShz0dT4Q7vUS6NTcODUEd7
         y874QrlFnnZ977xY7QVX3xrXv2FmvnYODQDijgqFBj4uX6Y+NmZDL5u6L3ZAD/U6IsY9
         +H0KpYzeFk7jMdwO45yQ32wFe+Zf+kflZxzBEAgMhgQLaUuCe4HiNiCr45HiOVJGR4IC
         ECH4u/Huk+/B1zRTwJk6af7hrBZIkXcam1P2jnhORqvMqz+WsdK4uiow7aPmlQ/hJ/6t
         VymA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=lKBI14xArPA18L2e3mixfdhgDGBWzfLh6HwsMuwD7s4=;
        b=sewc1Z+wtYJqdyUkZL5kMdzjyeDKYvVKxXvx5wEjcMDG2R/RSz+6RJsnabvwTHup5m
         YkvPaVkg84LEU6viguu4IIa/wHNWBAteetAQtDVyggvRu+93GPQ9OAyGzyjQv5MTECxb
         Es9YZxyfWl42XDVJbQKoJv2QcHnkFOU3Wog2mcOeGD5qSSSdM7fnPUBRnvf+crJCip0C
         OdVhnrwwpai7iNej2ARykg3KzJt3L2dzhwnGpjbEdkGDPGDlJtwDCGmC/JsJaWh/3QlS
         WPqth4cKVd8S1WqnFA+7OORnMfO7C2SRPnVwr4Yocls3cXYP3kyPyp3KcaGWYTcBX49P
         StlA==
X-Gm-Message-State: AOAM531HR7nC/Lc1eWMWVpiO13FmQkwIoPcFag3wAa07K+W7W1XwYuwO
	txTW4f9Qrur1Yn9e2gEyPcdAuogRjug=
X-Google-Smtp-Source: ABdhPJy/UaGSodvWycXJfP8lDDsD6AfWRUcQrVgiracEK6FwIOITn2EXSSWznnSwQQjRw7T4lAgMzA==
X-Received: by 2002:adf:e44f:: with SMTP id t15mr29024528wrm.380.1605131567174;
        Wed, 11 Nov 2020 13:52:47 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.46
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:46 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 07/15] xen/arm: prepare existing Xen headers for Linux atomics
Date: Wed, 11 Nov 2020 21:51:55 +0000
Message-Id: <20201111215203.80336-8-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

This small patch prepares <asm-arm/atomic.h> and the arm32/arm64
specific system.h headers to play nicely with the Linux atomics helpers:

 - We don't need the indirection around atomic_add_unless() anymore so
   let's just pull up the old Xen arm64 definition into here and use it
   for both arm32 and arm64.

 - We don't need an atomic_xchg() in <asm-arm/atomic.h> as the arm32/arm64
   specific cmpxchg.h from Linux defines it for us.

 - We drop the includes of <asm/system.h> and <xen/prefetch.h> from
   <asm-arm/atomic.h> as they're not needed.

 - We replace the include of the arm32/arm64 specific cmpxchg.h in the
   corresponding system.h headers with an include of atomic.h; this
   works around build issues caused by cmpxchg.h trying to use the
   __ll_sc_lse_body() macro before it is ready.
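For reference, the cmpxchg-based loop pulled up in the first bullet can
be modeled in plain, single-threaded C. model_cmpxchg() below is a
hypothetical stand-in for the real atomic_cmpxchg() (which is atomic;
this model is not), and the helper names are illustrative only:

```c
/* Hypothetical stand-in for atomic_cmpxchg(): compare *v with old
 * and, if they match, store new; always return the value previously
 * held in *v. Single-threaded model only. */
static int model_cmpxchg(int *v, int old, int new)
{
    int prev = *v;

    if (prev == old)
        *v = new;
    return prev;
}

/* Add a to *v unless *v == u; return the value observed before any
 * update, mirroring the atomic_add_unless() loop in the patch. */
static int model_add_unless(int *v, int a, int u)
{
    int c = *v, old;

    while (c != u && (old = model_cmpxchg(v, c, c + a)) != c)
        c = old;
    return c;
}
```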

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/system.h |  2 +-
 xen/include/asm-arm/arm64/system.h |  2 +-
 xen/include/asm-arm/atomic.h       | 15 +++++++++++----
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index ab57abfbc5..88798d11db 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM32_SYSTEM_H
 #define __ASM_ARM32_SYSTEM_H
 
-#include <asm/arm32/cmpxchg.h>
+#include <asm/atomic.h>
 
 #define local_irq_disable() asm volatile ( "cpsid i @ local_irq_disable\n" : : : "cc" )
 #define local_irq_enable()  asm volatile ( "cpsie i @ local_irq_enable\n" : : : "cc" )
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 2e36573ac6..dfbbe4b87d 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-#include <asm/arm64/cmpxchg.h>
+#include <asm/atomic.h>
 
 /* Uses uimm4 as a bitmask to select the clearing of one or more of
  * the DAIF exception mask bits:
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index ac2798d095..866f54d03c 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -2,8 +2,6 @@
 #define __ARCH_ARM_ATOMIC__
 
 #include <xen/atomic.h>
-#include <xen/prefetch.h>
-#include <asm/system.h>
 
 #define build_atomic_read(name, size, width, type) \
 static inline type name(const volatile type *addr) \
@@ -220,10 +218,19 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 
 static inline int atomic_add_unless(atomic_t *v, int a, int u)
 {
-    return __atomic_add_unless(v, a, u);
+	int c, old;
+
+	c = atomic_read(v);
+	while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c)
+		c = old;
+
+	return c;
 }
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return cmpxchg(&((v)->counter), (old), (new));
+}
 
 #endif /* __ARCH_ARM_ATOMIC__ */
 /*
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:53:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:53:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25303.53017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3I-0006dP-7R; Wed, 11 Nov 2020 21:53:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25303.53017; Wed, 11 Nov 2020 21:53:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3I-0006dB-25; Wed, 11 Nov 2020 21:53:16 +0000
Received: by outflank-mailman (input) for mailman id 25303;
 Wed, 11 Nov 2020 21:53:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3G-00064v-Pm
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:14 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88650103-95b8-4958-818b-9ac90130e790;
 Wed, 11 Nov 2020 21:52:49 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id 23so3964827wrc.8
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:49 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:47 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3G-00064v-Pm
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:14 +0000
X-Inumbo-ID: 88650103-95b8-4958-818b-9ac90130e790
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 88650103-95b8-4958-818b-9ac90130e790;
	Wed, 11 Nov 2020 21:52:49 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id 23so3964827wrc.8
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=7RO1fbeEEFsgd7lyDUOy1fpofsP4z9rSI0cQwzw9Kvk=;
        b=KO+deDdjmc/oIfnFnkk3fe+ng2FQbdeGxiyUjiaBsxgdLFHgEZfW0cBBc135s2SxXl
         1+IUMKZ+bZCMZgHcX6+nOdZRGaRP5pooOtM+7omdkkXli0MzIp/Un1DJiwX1huQXRmcX
         aUbENgTHvbXOKP1nDXajqDDAoQvPjMeAlQdfreEpiexg+zIQ6YseOWaXyJ+Edj2r/ja8
         cY+5WDdHi4w4ugZEJGjizRia200NjurxQf/P1CheLezL7usvcI+CV2GHRHc+tOVnmlze
         0x3lC1yZDjH6FiPaGGBP53gcMIjlFD0yUVTcciGnsrjqMorwWt56ULf/DCb7bGhy+2bp
         rwLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=7RO1fbeEEFsgd7lyDUOy1fpofsP4z9rSI0cQwzw9Kvk=;
        b=mlPOxeKLlU2MzjVntzXUs80VxXseK6kouLhVS7pCAMmmQs8BDYydpsFZU/Zw7jn6G/
         maDJ3TpaauaiSOXO0cugar0yPMl7LndnC+qLq6YsGFAP5plsGqOmVZnWpxor+F0NSCBw
         ZAInjPM+7ZJUDqaKJtvFQ3/A4QX2QNHF9LQzjjnXJOdnXC7/ulw6zpDWDSo7Xmud6P0x
         ae0KLtkAIXgThY9HH2wSpFhPWy7gR4eiiCmvlncUMEJhXVjMQ4npHO2nFAomejeNjSc8
         o9oC6pxWbDqGnOwwvdCQA/rNYRhrrcpnWcSrWeNUHF/1xYlIbEw9ltrLvgHws+lHvAYA
         tyUQ==
X-Gm-Message-State: AOAM530NjQwd6uB4woP1+4nhZAX4/TEQnKWr+SlRz7kw2zoewhbM/Ea0
	+bE/5hinp+U+5s30l3XmdXmIQVkTV0I=
X-Google-Smtp-Source: ABdhPJzYqOUBmnW7c4C8WSAck44Yq++828fMpKee3j+Rd+usbytDU7wpgVgP8fBYmcQw2YmLWOtZwQ==
X-Received: by 2002:adf:d84b:: with SMTP id k11mr15840677wrl.305.1605131568233;
        Wed, 11 Nov 2020 13:52:48 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.47
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:47 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 08/15] xen/arm64: port Linux's arm64 atomic_ll_sc.h to Xen
Date: Wed, 11 Nov 2020 21:51:56 +0000
Message-Id: <20201111215203.80336-9-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

Most of the "work" here is simply deleting the atomic64_t helper
definitions as we don't have an atomic64_t type in Xen.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 134 +----------------------
 1 file changed, 6 insertions(+), 128 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
index e1009c0f94..20b0cb174e 100644
--- a/xen/include/asm-arm/arm64/atomic_ll_sc.h
+++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
@@ -1,16 +1,16 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
 
-#ifndef __ASM_ATOMIC_LL_SC_H
-#define __ASM_ATOMIC_LL_SC_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_LL_SC_H
+#define __ASM_ARM_ARM64_ATOMIC_LL_SC_H
 
-#include <linux/stringify.h>
+#include <xen/stringify.h>
 
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 #define __LL_SC_FALLBACK(asm_ops)					\
@@ -134,128 +134,6 @@ ATOMIC_OPS(andnot, bic, )
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define ATOMIC64_OP(op, asm_op, constraint)				\
-static inline void							\
-__ll_sc_atomic64_##op(s64 i, atomic64_t *v)				\
-{									\
-	s64 result;							\
-	unsigned long tmp;						\
-									\
-	asm volatile("// atomic64_" #op "\n"				\
-	__LL_SC_FALLBACK(						\
-"	prfm	pstl1strm, %2\n"					\
-"1:	ldxr	%0, %2\n"						\
-"	" #asm_op "	%0, %0, %3\n"					\
-"	stxr	%w1, %0, %2\n"						\
-"	cbnz	%w1, 1b")						\
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
-	: __stringify(constraint) "r" (i));				\
-}
-
-#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
-static inline long							\
-__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v)		\
-{									\
-	s64 result;							\
-	unsigned long tmp;						\
-									\
-	asm volatile("// atomic64_" #op "_return" #name "\n"		\
-	__LL_SC_FALLBACK(						\
-"	prfm	pstl1strm, %2\n"					\
-"1:	ld" #acq "xr	%0, %2\n"					\
-"	" #asm_op "	%0, %0, %3\n"					\
-"	st" #rel "xr	%w1, %0, %2\n"					\
-"	cbnz	%w1, 1b\n"						\
-"	" #mb )								\
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
-	: __stringify(constraint) "r" (i)				\
-	: cl);								\
-									\
-	return result;							\
-}
-
-#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
-static inline long							\
-__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v)		\
-{									\
-	s64 result, val;						\
-	unsigned long tmp;						\
-									\
-	asm volatile("// atomic64_fetch_" #op #name "\n"		\
-	__LL_SC_FALLBACK(						\
-"	prfm	pstl1strm, %3\n"					\
-"1:	ld" #acq "xr	%0, %3\n"					\
-"	" #asm_op "	%1, %0, %4\n"					\
-"	st" #rel "xr	%w2, %1, %3\n"					\
-"	cbnz	%w2, 1b\n"						\
-"	" #mb )								\
-	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
-	: __stringify(constraint) "r" (i)				\
-	: cl);								\
-									\
-	return result;							\
-}
-
-#define ATOMIC64_OPS(...)						\
-	ATOMIC64_OP(__VA_ARGS__)					\
-	ATOMIC64_OP_RETURN(, dmb ish,  , l, "memory", __VA_ARGS__)	\
-	ATOMIC64_OP_RETURN(_relaxed,,  ,  ,         , __VA_ARGS__)	\
-	ATOMIC64_OP_RETURN(_acquire,, a,  , "memory", __VA_ARGS__)	\
-	ATOMIC64_OP_RETURN(_release,,  , l, "memory", __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (, dmb ish,  , l, "memory", __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (_relaxed,,  ,  ,         , __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (_acquire,, a,  , "memory", __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (_release,,  , l, "memory", __VA_ARGS__)
-
-ATOMIC64_OPS(add, add, I)
-ATOMIC64_OPS(sub, sub, J)
-
-#undef ATOMIC64_OPS
-#define ATOMIC64_OPS(...)						\
-	ATOMIC64_OP(__VA_ARGS__)					\
-	ATOMIC64_FETCH_OP (, dmb ish,  , l, "memory", __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (_relaxed,,  ,  ,         , __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (_acquire,, a,  , "memory", __VA_ARGS__)	\
-	ATOMIC64_FETCH_OP (_release,,  , l, "memory", __VA_ARGS__)
-
-ATOMIC64_OPS(and, and, L)
-ATOMIC64_OPS(or, orr, L)
-ATOMIC64_OPS(xor, eor, L)
-/*
- * GAS converts the mysterious and undocumented BIC (immediate) alias to
- * an AND (immediate) instruction with the immediate inverted. We don't
- * have a constraint for this, so fall back to register.
- */
-ATOMIC64_OPS(andnot, bic, )
-
-#undef ATOMIC64_OPS
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_OP_RETURN
-#undef ATOMIC64_OP
-
-static inline s64
-__ll_sc_atomic64_dec_if_positive(atomic64_t *v)
-{
-	s64 result;
-	unsigned long tmp;
-
-	asm volatile("// atomic64_dec_if_positive\n"
-	__LL_SC_FALLBACK(
-"	prfm	pstl1strm, %2\n"
-"1:	ldxr	%0, %2\n"
-"	subs	%0, %0, #1\n"
-"	b.lt	2f\n"
-"	stlxr	%w1, %0, %2\n"
-"	cbnz	%w1, 1b\n"
-"	dmb	ish\n"
-"2:")
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
-	:
-	: "cc", "memory");
-
-	return result;
-}
-
 #define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint)	\
 static inline u##sz							\
 __ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,			\
@@ -350,4 +228,4 @@ __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
 #undef __CMPXCHG_DBL
 #undef K
 
-#endif	/* __ASM_ATOMIC_LL_SC_H */
\ No newline at end of file
+#endif	/* __ASM_ARM_ARM64_ATOMIC_LL_SC_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25307.53029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3M-0006jQ-Hh; Wed, 11 Nov 2020 21:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25307.53029; Wed, 11 Nov 2020 21:53:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3M-0006jH-DK; Wed, 11 Nov 2020 21:53:20 +0000
Received: by outflank-mailman (input) for mailman id 25307;
 Wed, 11 Nov 2020 21:53:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3L-00064v-Py
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:19 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca30c0ad-830d-40c8-b73b-4dacfcc7c5d6;
 Wed, 11 Nov 2020 21:52:47 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id 10so3584954wml.2
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:47 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.42
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:44 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3L-00064v-Py
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:19 +0000
X-Inumbo-ID: ca30c0ad-830d-40c8-b73b-4dacfcc7c5d6
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ca30c0ad-830d-40c8-b73b-4dacfcc7c5d6;
	Wed, 11 Nov 2020 21:52:47 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id 10so3584954wml.2
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=stsGgsmPbfxHthK3Am66kcnwhe4wDikUxtnO5WZxAiU=;
        b=kyBB746iC2H+sCtyhNfe3jdl7s85sBgieXlHnVn30CG9UlKN9LTYdp0LcUKgWzOCMn
         NvWpVAMOROBaRQUD0pwEX7qy1lNZt9X4j6EF/Y939OOJ30YCC6NgSSXQfebgM+dpQC88
         3saZGesFj9Ynj5AcQzHmdATWiHoKdnFKPbPemHVV6e55ir56v77eg7e2flzEADZKZ2W9
         OwEb83hFwxX1iRyQK2Na43iSWolCTtUKodxbx+9bMlKp3BR/UFcGEr65+Hke5v2W5h94
         3gTMDMg2HThq3ygOJW9VbxOv+iVW+diW79jTYytAD3h+q+KedzWgHfmQVA1SPAAIJVFw
         ReRw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=stsGgsmPbfxHthK3Am66kcnwhe4wDikUxtnO5WZxAiU=;
        b=g1l6e6AgV8YGidbNLeKbYA44hrntVtMPn76K/q5NLIlnk8LOZL8cgWc7kXApmig0Ce
         2zzzSsfW5vlRgRBTc6RL5b0bhwZ6NMyCEuIFh90+cZMtRfGGKMSdyLaGSxB7FvBIkynE
         pallaOzWIX9pKMLXUnZESKTuEVJ3i+aT8hQmNN5f0nHudMhvCyd+67/sjR0lnPb5dU/J
         RrNXgOlr+fWP55GTUvHOgOJCBvEJhJ6xaGP6yLDI4nmyS80Xy3g+P9wJI+yjghUWXd5n
         ZxblLLoTDsB9MqxiTfd5k7c6S4SAyO/dEtTaNQymjnU28HdN9TfGg52mV27/2AoI1M8W
         rF7w==
X-Gm-Message-State: AOAM531aA/uRVjzeqf2mW2naNKApZyx7cx2Q10g5/jApONtM9Z2oRIt3
	LMbVA28mn0tV1T/901JutkFSwWN77Ho=
X-Google-Smtp-Source: ABdhPJyW39UzUt7ed5ebZOO8NTkvWiogc3vf3WA3Pa8enHq6kgxZm2mt25CWSTTFbfqeOptnsCiA6w==
X-Received: by 2002:a1c:f31a:: with SMTP id q26mr6175980wmq.178.1605131565244;
        Wed, 11 Nov 2020 13:52:45 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.42
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:44 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 05/15] xen/arm: pull in Linux atomics helpers and dependencies
Date: Wed, 11 Nov 2020 21:51:53 +0000
Message-Id: <20201111215203.80336-6-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

This patch pulls in Linux's atomics helpers for arm32 and arm64, and
their dependencies, as-is.

Note that Linux's arm32 atomics helpers use the READ_ONCE() and
WRITE_ONCE() macros defined in <asm-generic/rwonce.h>, while Linux's
arm64 atomics helpers use __READ_ONCE() and __WRITE_ONCE().

The only difference is that the __* versions skip checking whether the
object being accessed is the same size as a native C type (char, int,
long, etc.). Given that the arm32 helpers use the macros to access an
atomic_t's counter member, which is an int, we can skip this check by
using the __* versions like the arm64 code does.

I mention this here because it forms the first part of my justification
for *not* copying Linux's <linux/compiler_types.h> to Xen; the size
check described above is performed by __native_word() defined in that
header.
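The size check being skipped can be sketched in a few lines; the macro
name below is illustrative, not Linux's exact __native_word()
definition:

```c
/* Sketch of the "native word" size check that READ_ONCE() performs
 * via __native_word() and that the __* variants skip: the accessed
 * object must be the size of a basic C scalar type. */
#define is_native_word(x)                                        \
    (sizeof(x) == sizeof(char)  || sizeof(x) == sizeof(short) || \
     sizeof(x) == sizeof(int)   || sizeof(x) == sizeof(long))

/* An atomic_t's counter is a plain int, so it always passes;
 * a two-word structure would not. */
struct two_words { long a, b; };
```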

The second part of my justification may be more contentious; as you'll
see in the next patch, I intend to drop the __unqual_scalar_typeof()
calls in __READ_ONCE() and __WRITE_ONCE(). This is because the pointer
to the atomic_t's counter member is always a plain (int *), so the
qualifier-stripping is unnecessary, and dropping it means we can avoid
having to port Linux's <linux/compiler_types.h>.

If people are against this approach, please bear in mind that the
current version of __unqual_scalar_typeof() in <linux/compiler_types.h>
was actually the reason Linux bumped its minimum required GCC version
to 4.9 earlier this year, so that it could guarantee C11 _Generic
support [1].
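The C11 feature in question can be illustrated with a small sketch.
The macro below is illustrative only, not Linux's actual
__unqual_scalar_typeof(); it just shows the compile-time type
dispatch that _Generic provides (and that GCC only supports from
4.9 onwards):

```c
/* C11 _Generic selects one association based on the type of its
 * controlling expression, resolved entirely at compile time. This is
 * the building block __unqual_scalar_typeof() is constructed from. */
#define scalar_kind(x) _Generic((x), \
    char:    "char",                 \
    int:     "int",                  \
    long:    "long",                 \
    default: "other")
```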

So if we do want to take Linux's <linux/compiler_types.h> we'll either
need to:

   A) bump up the minimum required version of GCC to 4.9 to match
      that required by Linux; in the Xen README we currently state
      GCC 4.8 for Arm and GCC 4.1.2_20070115 for x86.

or:

   B) mix-and-match an older version of Linux's <linux/compiler_types.h>
      with the other headers taken from a newer version of Linux.

Thoughts?

[1] https://lkml.org/lkml/2020/7/8/1308

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h       | 507 +++++++++++++++++++++++
 xen/include/asm-arm/arm32/cmpxchg.h      | 279 +++++++++++++
 xen/include/asm-arm/arm64/atomic.h       | 228 ++++++++++
 xen/include/asm-arm/arm64/atomic_ll_sc.h | 353 ++++++++++++++++
 xen/include/asm-arm/arm64/atomic_lse.h   | 419 +++++++++++++++++++
 xen/include/asm-arm/arm64/cmpxchg.h      | 285 +++++++++++++
 xen/include/asm-arm/arm64/lse.h          |  48 +++
 xen/include/xen/rwonce.h                 |  90 ++++
 8 files changed, 2209 insertions(+)
 create mode 100644 xen/include/asm-arm/arm32/atomic.h
 create mode 100644 xen/include/asm-arm/arm32/cmpxchg.h
 create mode 100644 xen/include/asm-arm/arm64/atomic.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_ll_sc.h
 create mode 100644 xen/include/asm-arm/arm64/atomic_lse.h
 create mode 100644 xen/include/asm-arm/arm64/cmpxchg.h
 create mode 100644 xen/include/asm-arm/arm64/lse.h
 create mode 100644 xen/include/xen/rwonce.h

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
new file mode 100644
index 0000000000..ac6338dd9b
--- /dev/null
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -0,0 +1,507 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *  arch/arm/include/asm/atomic.h
+ *
+ *  Copyright (C) 1996 Russell King.
+ *  Copyright (C) 2002 Deep Blue Solutions Ltd.
+ */
+#ifndef __ASM_ARM_ATOMIC_H
+#define __ASM_ARM_ATOMIC_H
+
+#include <linux/compiler.h>
+#include <linux/prefetch.h>
+#include <linux/types.h>
+#include <linux/irqflags.h>
+#include <asm/barrier.h>
+#include <asm/cmpxchg.h>
+
+#ifdef __KERNEL__
+
+/*
+ * On ARM, ordinary assignment (str instruction) doesn't clear the local
+ * strex/ldrex monitor on some implementations. The reason we can use it for
+ * atomic_set() is the clrex or dummy strex done on every exception return.
+ */
+#define atomic_read(v)	READ_ONCE((v)->counter)
+#define atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
+
+#if __LINUX_ARM_ARCH__ >= 6
+
+/*
+ * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
+ * store exclusive to ensure that these are atomic.  We may loop
+ * to ensure that the update happens.
+ */
+
+#define ATOMIC_OP(op, c_op, asm_op)					\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	prefetchw(&v->counter);						\
+	__asm__ __volatile__("@ atomic_" #op "\n"			\
+"1:	ldrex	%0, [%3]\n"						\
+"	" #asm_op "	%0, %0, %4\n"					\
+"	strex	%1, %0, [%3]\n"						\
+"	teq	%1, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
+	: "r" (&v->counter), "Ir" (i)					\
+	: "cc");							\
+}									\
+
+#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	prefetchw(&v->counter);						\
+									\
+	__asm__ __volatile__("@ atomic_" #op "_return\n"		\
+"1:	ldrex	%0, [%3]\n"						\
+"	" #asm_op "	%0, %0, %4\n"					\
+"	strex	%1, %0, [%3]\n"						\
+"	teq	%1, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
+	: "r" (&v->counter), "Ir" (i)					\
+	: "cc");							\
+									\
+	return result;							\
+}
+
+#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
+static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
+{									\
+	unsigned long tmp;						\
+	int result, val;						\
+									\
+	prefetchw(&v->counter);						\
+									\
+	__asm__ __volatile__("@ atomic_fetch_" #op "\n"			\
+"1:	ldrex	%0, [%4]\n"						\
+"	" #asm_op "	%1, %0, %5\n"					\
+"	strex	%2, %1, [%4]\n"						\
+"	teq	%2, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter)	\
+	: "r" (&v->counter), "Ir" (i)					\
+	: "cc");							\
+									\
+	return result;							\
+}
+
+#define atomic_add_return_relaxed	atomic_add_return_relaxed
+#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
+#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
+#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
+
+#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
+#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
+#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
+#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
+
+static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
+{
+	int oldval;
+	unsigned long res;
+
+	prefetchw(&ptr->counter);
+
+	do {
+		__asm__ __volatile__("@ atomic_cmpxchg\n"
+		"ldrex	%1, [%3]\n"
+		"mov	%0, #0\n"
+		"teq	%1, %4\n"
+		"strexeq %0, %5, [%3]\n"
+		    : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
+		    : "r" (&ptr->counter), "Ir" (old), "r" (new)
+		    : "cc");
+	} while (res);
+
+	return oldval;
+}
+#define atomic_cmpxchg_relaxed		atomic_cmpxchg_relaxed
+
+static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+	int oldval, newval;
+	unsigned long tmp;
+
+	smp_mb();
+	prefetchw(&v->counter);
+
+	__asm__ __volatile__ ("@ atomic_add_unless\n"
+"1:	ldrex	%0, [%4]\n"
+"	teq	%0, %5\n"
+"	beq	2f\n"
+"	add	%1, %0, %6\n"
+"	strex	%2, %1, [%4]\n"
+"	teq	%2, #0\n"
+"	bne	1b\n"
+"2:"
+	: "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter), "r" (u), "r" (a)
+	: "cc");
+
+	if (oldval != u)
+		smp_mb();
+
+	return oldval;
+}
+#define atomic_fetch_add_unless		atomic_fetch_add_unless
+
+#else /* ARM_ARCH_6 */
+
+#ifdef CONFIG_SMP
+#error SMP not supported on pre-ARMv6 CPUs
+#endif
+
+#define ATOMIC_OP(op, c_op, asm_op)					\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long flags;						\
+									\
+	raw_local_irq_save(flags);					\
+	v->counter c_op i;						\
+	raw_local_irq_restore(flags);					\
+}									\
+
+#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+static inline int atomic_##op##_return(int i, atomic_t *v)		\
+{									\
+	unsigned long flags;						\
+	int val;							\
+									\
+	raw_local_irq_save(flags);					\
+	v->counter c_op i;						\
+	val = v->counter;						\
+	raw_local_irq_restore(flags);					\
+									\
+	return val;							\
+}
+
+#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
+static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long flags;						\
+	int val;							\
+									\
+	raw_local_irq_save(flags);					\
+	val = v->counter;						\
+	v->counter c_op i;						\
+	raw_local_irq_restore(flags);					\
+									\
+	return val;							\
+}
+
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	int ret;
+	unsigned long flags;
+
+	raw_local_irq_save(flags);
+	ret = v->counter;
+	if (likely(ret == old))
+		v->counter = new;
+	raw_local_irq_restore(flags);
+
+	return ret;
+}
+
+#define atomic_fetch_andnot		atomic_fetch_andnot
+
+#endif /* __LINUX_ARM_ARCH__ */
+
+#define ATOMIC_OPS(op, c_op, asm_op)					\
+	ATOMIC_OP(op, c_op, asm_op)					\
+	ATOMIC_OP_RETURN(op, c_op, asm_op)				\
+	ATOMIC_FETCH_OP(op, c_op, asm_op)
+
+ATOMIC_OPS(add, +=, add)
+ATOMIC_OPS(sub, -=, sub)
+
+#define atomic_andnot atomic_andnot
+
+#undef ATOMIC_OPS
+#define ATOMIC_OPS(op, c_op, asm_op)					\
+	ATOMIC_OP(op, c_op, asm_op)					\
+	ATOMIC_FETCH_OP(op, c_op, asm_op)
+
+ATOMIC_OPS(and, &=, and)
+ATOMIC_OPS(andnot, &= ~, bic)
+ATOMIC_OPS(or,  |=, orr)
+ATOMIC_OPS(xor, ^=, eor)
+
+#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+
+#ifndef CONFIG_GENERIC_ATOMIC64
+typedef struct {
+	s64 counter;
+} atomic64_t;
+
+#define ATOMIC64_INIT(i) { (i) }
+
+#ifdef CONFIG_ARM_LPAE
+static inline s64 atomic64_read(const atomic64_t *v)
+{
+	s64 result;
+
+	__asm__ __volatile__("@ atomic64_read\n"
+"	ldrd	%0, %H0, [%1]"
+	: "=&r" (result)
+	: "r" (&v->counter), "Qo" (v->counter)
+	);
+
+	return result;
+}
+
+static inline void atomic64_set(atomic64_t *v, s64 i)
+{
+	__asm__ __volatile__("@ atomic64_set\n"
+"	strd	%2, %H2, [%1]"
+	: "=Qo" (v->counter)
+	: "r" (&v->counter), "r" (i)
+	);
+}
+#else
+static inline s64 atomic64_read(const atomic64_t *v)
+{
+	s64 result;
+
+	__asm__ __volatile__("@ atomic64_read\n"
+"	ldrexd	%0, %H0, [%1]"
+	: "=&r" (result)
+	: "r" (&v->counter), "Qo" (v->counter)
+	);
+
+	return result;
+}
+
+static inline void atomic64_set(atomic64_t *v, s64 i)
+{
+	s64 tmp;
+
+	prefetchw(&v->counter);
+	__asm__ __volatile__("@ atomic64_set\n"
+"1:	ldrexd	%0, %H0, [%2]\n"
+"	strexd	%0, %3, %H3, [%2]\n"
+"	teq	%0, #0\n"
+"	bne	1b"
+	: "=&r" (tmp), "=Qo" (v->counter)
+	: "r" (&v->counter), "r" (i)
+	: "cc");
+}
+#endif
+
+#define ATOMIC64_OP(op, op1, op2)					\
+static inline void atomic64_##op(s64 i, atomic64_t *v)			\
+{									\
+	s64 result;							\
+	unsigned long tmp;						\
+									\
+	prefetchw(&v->counter);						\
+	__asm__ __volatile__("@ atomic64_" #op "\n"			\
+"1:	ldrexd	%0, %H0, [%3]\n"					\
+"	" #op1 " %Q0, %Q0, %Q4\n"					\
+"	" #op2 " %R0, %R0, %R4\n"					\
+"	strexd	%1, %0, %H0, [%3]\n"					\
+"	teq	%1, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
+	: "r" (&v->counter), "r" (i)					\
+	: "cc");							\
+}									\
+
+#define ATOMIC64_OP_RETURN(op, op1, op2)				\
+static inline s64							\
+atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
+{									\
+	s64 result;							\
+	unsigned long tmp;						\
+									\
+	prefetchw(&v->counter);						\
+									\
+	__asm__ __volatile__("@ atomic64_" #op "_return\n"		\
+"1:	ldrexd	%0, %H0, [%3]\n"					\
+"	" #op1 " %Q0, %Q0, %Q4\n"					\
+"	" #op2 " %R0, %R0, %R4\n"					\
+"	strexd	%1, %0, %H0, [%3]\n"					\
+"	teq	%1, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
+	: "r" (&v->counter), "r" (i)					\
+	: "cc");							\
+									\
+	return result;							\
+}
+
+#define ATOMIC64_FETCH_OP(op, op1, op2)					\
+static inline s64							\
+atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
+{									\
+	s64 result, val;						\
+	unsigned long tmp;						\
+									\
+	prefetchw(&v->counter);						\
+									\
+	__asm__ __volatile__("@ atomic64_fetch_" #op "\n"		\
+"1:	ldrexd	%0, %H0, [%4]\n"					\
+"	" #op1 " %Q1, %Q0, %Q5\n"					\
+"	" #op2 " %R1, %R0, %R5\n"					\
+"	strexd	%2, %1, %H1, [%4]\n"					\
+"	teq	%2, #0\n"						\
+"	bne	1b"							\
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter)	\
+	: "r" (&v->counter), "r" (i)					\
+	: "cc");							\
+									\
+	return result;							\
+}
+
+#define ATOMIC64_OPS(op, op1, op2)					\
+	ATOMIC64_OP(op, op1, op2)					\
+	ATOMIC64_OP_RETURN(op, op1, op2)				\
+	ATOMIC64_FETCH_OP(op, op1, op2)
+
+ATOMIC64_OPS(add, adds, adc)
+ATOMIC64_OPS(sub, subs, sbc)
+
+#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
+#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
+#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
+#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
+
+#undef ATOMIC64_OPS
+#define ATOMIC64_OPS(op, op1, op2)					\
+	ATOMIC64_OP(op, op1, op2)					\
+	ATOMIC64_FETCH_OP(op, op1, op2)
+
+#define atomic64_andnot atomic64_andnot
+
+ATOMIC64_OPS(and, and, and)
+ATOMIC64_OPS(andnot, bic, bic)
+ATOMIC64_OPS(or,  orr, orr)
+ATOMIC64_OPS(xor, eor, eor)
+
+#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
+#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot_relaxed
+#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
+#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
+
+#undef ATOMIC64_OPS
+#undef ATOMIC64_FETCH_OP
+#undef ATOMIC64_OP_RETURN
+#undef ATOMIC64_OP
+
+static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
+{
+	s64 oldval;
+	unsigned long res;
+
+	prefetchw(&ptr->counter);
+
+	do {
+		__asm__ __volatile__("@ atomic64_cmpxchg\n"
+		"ldrexd		%1, %H1, [%3]\n"
+		"mov		%0, #0\n"
+		"teq		%1, %4\n"
+		"teqeq		%H1, %H4\n"
+		"strexdeq	%0, %5, %H5, [%3]"
+		: "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
+		: "r" (&ptr->counter), "r" (old), "r" (new)
+		: "cc");
+	} while (res);
+
+	return oldval;
+}
+#define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
+
+static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
+{
+	s64 result;
+	unsigned long tmp;
+
+	prefetchw(&ptr->counter);
+
+	__asm__ __volatile__("@ atomic64_xchg\n"
+"1:	ldrexd	%0, %H0, [%3]\n"
+"	strexd	%1, %4, %H4, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (ptr->counter)
+	: "r" (&ptr->counter), "r" (new)
+	: "cc");
+
+	return result;
+}
+#define atomic64_xchg_relaxed		atomic64_xchg_relaxed
+
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+{
+	s64 result;
+	unsigned long tmp;
+
+	smp_mb();
+	prefetchw(&v->counter);
+
+	__asm__ __volatile__("@ atomic64_dec_if_positive\n"
+"1:	ldrexd	%0, %H0, [%3]\n"
+"	subs	%Q0, %Q0, #1\n"
+"	sbc	%R0, %R0, #0\n"
+"	teq	%R0, #0\n"
+"	bmi	2f\n"
+"	strexd	%1, %0, %H0, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b\n"
+"2:"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter)
+	: "cc");
+
+	smp_mb();
+
+	return result;
+}
+#define atomic64_dec_if_positive atomic64_dec_if_positive
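The `subs`/`sbc`/`bmi` sequence above decrements only while the result stays non-negative; a hedged C11 sketch (an illustration, not the real implementation) of the same contract:

```c
#include <stdatomic.h>

/* Hypothetical C11 sketch of atomic64_dec_if_positive(): returns the
 * decremented value; if that would be negative, the store is skipped
 * and the (negative) result is returned unchanged. */
static inline long long dec_if_positive_sketch(_Atomic long long *v)
{
	long long old = atomic_load_explicit(v, memory_order_relaxed);
	long long dec;

	do {
		dec = old - 1;
		if (dec < 0)
			break;		/* would go negative: skip the store */
	} while (!atomic_compare_exchange_weak(v, &old, dec));

	return dec;
}
```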
+
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	s64 oldval, newval;
+	unsigned long tmp;
+
+	smp_mb();
+	prefetchw(&v->counter);
+
+	__asm__ __volatile__("@ atomic64_add_unless\n"
+"1:	ldrexd	%0, %H0, [%4]\n"
+"	teq	%0, %5\n"
+"	teqeq	%H0, %H5\n"
+"	beq	2f\n"
+"	adds	%Q1, %Q0, %Q6\n"
+"	adc	%R1, %R0, %R6\n"
+"	strexd	%2, %1, %H1, [%4]\n"
+"	teq	%2, #0\n"
+"	bne	1b\n"
+"2:"
+	: "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
+	: "r" (&v->counter), "r" (u), "r" (a)
+	: "cc");
+
+	if (oldval != u)
+		smp_mb();
+
+	return oldval;
+}
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+
+#endif /* !CONFIG_GENERIC_ATOMIC64 */
+#endif
+#endif
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
new file mode 100644
index 0000000000..638ae84afb
--- /dev/null
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ARM_CMPXCHG_H
+#define __ASM_ARM_CMPXCHG_H
+
+#include <linux/irqflags.h>
+#include <linux/prefetch.h>
+#include <asm/barrier.h>
+
+#if defined(CONFIG_CPU_SA1100) || defined(CONFIG_CPU_SA110)
+/*
+ * On the StrongARM, "swp" is terminally broken since it bypasses the
+ * cache totally.  This means that the cache becomes inconsistent, and,
+ * since we use normal loads/stores as well, this is really bad.
+ * Typically, this causes oopsen in filp_close, but could have other,
+ * more disastrous effects.  There are two work-arounds:
+ *  1. Disable interrupts and emulate the atomic swap
+ *  2. Clean the cache, perform atomic swap, flush the cache
+ *
+ * We choose (1) since it's the "easiest" to achieve here and is not
+ * We choose (1) since it's the "easiest" to achieve here and is not
+ * dependent on the processor type.
+ *
+ * NOTE that this solution won't work on an SMP system, so explicitly
+ * forbid it here.
+ */
+#define swp_is_buggy
+#endif
+
+static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+	extern void __bad_xchg(volatile void *, int);
+	unsigned long ret;
+#ifdef swp_is_buggy
+	unsigned long flags;
+#endif
+#if __LINUX_ARM_ARCH__ >= 6
+	unsigned int tmp;
+#endif
+
+	prefetchw((const void *)ptr);
+
+	switch (size) {
+#if __LINUX_ARM_ARCH__ >= 6
+#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
+	case 1:
+		asm volatile("@	__xchg1\n"
+		"1:	ldrexb	%0, [%3]\n"
+		"	strexb	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+	case 2:
+		asm volatile("@	__xchg2\n"
+		"1:	ldrexh	%0, [%3]\n"
+		"	strexh	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#endif
+	case 4:
+		asm volatile("@	__xchg4\n"
+		"1:	ldrex	%0, [%3]\n"
+		"	strex	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#elif defined(swp_is_buggy)
+#ifdef CONFIG_SMP
+#error SMP is not supported on this platform
+#endif
+	case 1:
+		raw_local_irq_save(flags);
+		ret = *(volatile unsigned char *)ptr;
+		*(volatile unsigned char *)ptr = x;
+		raw_local_irq_restore(flags);
+		break;
+
+	case 4:
+		raw_local_irq_save(flags);
+		ret = *(volatile unsigned long *)ptr;
+		*(volatile unsigned long *)ptr = x;
+		raw_local_irq_restore(flags);
+		break;
+#else
+	case 1:
+		asm volatile("@	__xchg1\n"
+		"	swpb	%0, %1, [%2]"
+			: "=&r" (ret)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+	case 4:
+		asm volatile("@	__xchg4\n"
+		"	swp	%0, %1, [%2]"
+			: "=&r" (ret)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#endif
+	default:
+		/* Cause a link-time error, the xchg() size is not supported */
+		__bad_xchg(ptr, size), ret = 0;
+		break;
+	}
+
+	return ret;
+}
+
+#define xchg_relaxed(ptr, x) ({						\
+	(__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr),		\
+				   sizeof(*(ptr)));			\
+})
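Across all the size cases, `__xchg()` implements the same contract: store the new value and return whatever was there before, atomically. A minimal C11 sketch of that contract (an assumption for illustration only):

```c
#include <stdatomic.h>

/* Hypothetical C11 sketch of xchg_relaxed() for one operand size. */
static inline unsigned long xchg_relaxed_sketch(_Atomic unsigned long *ptr,
						unsigned long x)
{
	/* Store x and hand back the previous contents in one atomic step. */
	return atomic_exchange_explicit(ptr, x, memory_order_relaxed);
}
```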
+
+#include <asm-generic/cmpxchg-local.h>
+
+#if __LINUX_ARM_ARCH__ < 6
+/* min ARCH < ARMv6 */
+
+#ifdef CONFIG_SMP
+#error "SMP is not supported on this platform"
+#endif
+
+#define xchg xchg_relaxed
+
+/*
+ * cmpxchg_local and cmpxchg64_local are atomic with respect to the
+ * current CPU. Always make them available.
+ */
+#define cmpxchg_local(ptr, o, n) ({					\
+	(__typeof(*ptr))__cmpxchg_local_generic((ptr),			\
+					        (unsigned long)(o),	\
+					        (unsigned long)(n),	\
+					        sizeof(*(ptr)));	\
+})
+
+#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+
+#include <asm-generic/cmpxchg.h>
+
+#else	/* min ARCH >= ARMv6 */
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
+
+/*
+ * cmpxchg only supports 32-bit operands on ARMv6.
+ */
+
+static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+				      unsigned long new, int size)
+{
+	unsigned long oldval, res;
+
+	prefetchw((const void *)ptr);
+
+	switch (size) {
+#ifndef CONFIG_CPU_V6	/* min ARCH >= ARMv6K */
+	case 1:
+		do {
+			asm volatile("@ __cmpxchg1\n"
+			"	ldrexb	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexbeq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+	case 2:
+		do {
+			asm volatile("@ __cmpxchg2\n"
+			"	ldrexh	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexheq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+#endif
+	case 4:
+		do {
+			asm volatile("@ __cmpxchg4\n"
+			"	ldrex	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexeq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+	default:
+		__bad_cmpxchg(ptr, size);
+		oldval = 0;
+	}
+
+	return oldval;
+}
+
+#define cmpxchg_relaxed(ptr,o,n) ({					\
+	(__typeof__(*(ptr)))__cmpxchg((ptr),				\
+				      (unsigned long)(o),		\
+				      (unsigned long)(n),		\
+				      sizeof(*(ptr)));			\
+})
+
+static inline unsigned long __cmpxchg_local(volatile void *ptr,
+					    unsigned long old,
+					    unsigned long new, int size)
+{
+	unsigned long ret;
+
+	switch (size) {
+#ifdef CONFIG_CPU_V6	/* min ARCH == ARMv6 */
+	case 1:
+	case 2:
+		ret = __cmpxchg_local_generic(ptr, old, new, size);
+		break;
+#endif
+	default:
+		ret = __cmpxchg(ptr, old, new, size);
+	}
+
+	return ret;
+}
+
+#define cmpxchg_local(ptr, o, n) ({					\
+	(__typeof(*ptr))__cmpxchg_local((ptr),				\
+				        (unsigned long)(o),		\
+				        (unsigned long)(n),		\
+				        sizeof(*(ptr)));		\
+})
+
+static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
+					     unsigned long long old,
+					     unsigned long long new)
+{
+	unsigned long long oldval;
+	unsigned long res;
+
+	prefetchw(ptr);
+
+	__asm__ __volatile__(
+"1:	ldrexd		%1, %H1, [%3]\n"
+"	teq		%1, %4\n"
+"	teqeq		%H1, %H4\n"
+"	bne		2f\n"
+"	strexd		%0, %5, %H5, [%3]\n"
+"	teq		%0, #0\n"
+"	bne		1b\n"
+"2:"
+	: "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
+	: "r" (ptr), "r" (old), "r" (new)
+	: "cc");
+
+	return oldval;
+}
+
+#define cmpxchg64_relaxed(ptr, o, n) ({					\
+	(__typeof__(*(ptr)))__cmpxchg64((ptr),				\
+					(unsigned long long)(o),	\
+					(unsigned long long)(n));	\
+})
+
+#define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
+
+#endif	/* __LINUX_ARM_ARCH__ >= 6 */
+
+#endif /* __ASM_ARM_CMPXCHG_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
new file mode 100644
index 0000000000..a2eab9f091
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -0,0 +1,228 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Based on arch/arm/include/asm/atomic.h
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+#ifndef __ASM_ATOMIC_H
+#define __ASM_ATOMIC_H
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+#include <asm/barrier.h>
+#include <asm/cmpxchg.h>
+#include <asm/lse.h>
+
+#define ATOMIC_OP(op)							\
+static inline void arch_##op(int i, atomic_t *v)			\
+{									\
+	__lse_ll_sc_body(op, i, v);					\
+}
+
+ATOMIC_OP(atomic_andnot)
+ATOMIC_OP(atomic_or)
+ATOMIC_OP(atomic_xor)
+ATOMIC_OP(atomic_add)
+ATOMIC_OP(atomic_and)
+ATOMIC_OP(atomic_sub)
+
+#undef ATOMIC_OP
+
+#define ATOMIC_FETCH_OP(name, op)					\
+static inline int arch_##op##name(int i, atomic_t *v)			\
+{									\
+	return __lse_ll_sc_body(op##name, i, v);			\
+}
+
+#define ATOMIC_FETCH_OPS(op)						\
+	ATOMIC_FETCH_OP(_relaxed, op)					\
+	ATOMIC_FETCH_OP(_acquire, op)					\
+	ATOMIC_FETCH_OP(_release, op)					\
+	ATOMIC_FETCH_OP(        , op)
+
+ATOMIC_FETCH_OPS(atomic_fetch_andnot)
+ATOMIC_FETCH_OPS(atomic_fetch_or)
+ATOMIC_FETCH_OPS(atomic_fetch_xor)
+ATOMIC_FETCH_OPS(atomic_fetch_add)
+ATOMIC_FETCH_OPS(atomic_fetch_and)
+ATOMIC_FETCH_OPS(atomic_fetch_sub)
+ATOMIC_FETCH_OPS(atomic_add_return)
+ATOMIC_FETCH_OPS(atomic_sub_return)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_FETCH_OPS
+
+#define ATOMIC64_OP(op)							\
+static inline void arch_##op(long i, atomic64_t *v)			\
+{									\
+	__lse_ll_sc_body(op, i, v);					\
+}
+
+ATOMIC64_OP(atomic64_andnot)
+ATOMIC64_OP(atomic64_or)
+ATOMIC64_OP(atomic64_xor)
+ATOMIC64_OP(atomic64_add)
+ATOMIC64_OP(atomic64_and)
+ATOMIC64_OP(atomic64_sub)
+
+#undef ATOMIC64_OP
+
+#define ATOMIC64_FETCH_OP(name, op)					\
+static inline long arch_##op##name(long i, atomic64_t *v)		\
+{									\
+	return __lse_ll_sc_body(op##name, i, v);			\
+}
+
+#define ATOMIC64_FETCH_OPS(op)						\
+	ATOMIC64_FETCH_OP(_relaxed, op)					\
+	ATOMIC64_FETCH_OP(_acquire, op)					\
+	ATOMIC64_FETCH_OP(_release, op)					\
+	ATOMIC64_FETCH_OP(        , op)
+
+ATOMIC64_FETCH_OPS(atomic64_fetch_andnot)
+ATOMIC64_FETCH_OPS(atomic64_fetch_or)
+ATOMIC64_FETCH_OPS(atomic64_fetch_xor)
+ATOMIC64_FETCH_OPS(atomic64_fetch_add)
+ATOMIC64_FETCH_OPS(atomic64_fetch_and)
+ATOMIC64_FETCH_OPS(atomic64_fetch_sub)
+ATOMIC64_FETCH_OPS(atomic64_add_return)
+ATOMIC64_FETCH_OPS(atomic64_sub_return)
+
+#undef ATOMIC64_FETCH_OP
+#undef ATOMIC64_FETCH_OPS
+
+static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+{
+	return __lse_ll_sc_body(atomic64_dec_if_positive, v);
+}
+
+#define arch_atomic_read(v)			__READ_ONCE((v)->counter)
+#define arch_atomic_set(v, i)			__WRITE_ONCE(((v)->counter), (i))
+
+#define arch_atomic_add_return_relaxed		arch_atomic_add_return_relaxed
+#define arch_atomic_add_return_acquire		arch_atomic_add_return_acquire
+#define arch_atomic_add_return_release		arch_atomic_add_return_release
+#define arch_atomic_add_return			arch_atomic_add_return
+
+#define arch_atomic_sub_return_relaxed		arch_atomic_sub_return_relaxed
+#define arch_atomic_sub_return_acquire		arch_atomic_sub_return_acquire
+#define arch_atomic_sub_return_release		arch_atomic_sub_return_release
+#define arch_atomic_sub_return			arch_atomic_sub_return
+
+#define arch_atomic_fetch_add_relaxed		arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_add_acquire		arch_atomic_fetch_add_acquire
+#define arch_atomic_fetch_add_release		arch_atomic_fetch_add_release
+#define arch_atomic_fetch_add			arch_atomic_fetch_add
+
+#define arch_atomic_fetch_sub_relaxed		arch_atomic_fetch_sub_relaxed
+#define arch_atomic_fetch_sub_acquire		arch_atomic_fetch_sub_acquire
+#define arch_atomic_fetch_sub_release		arch_atomic_fetch_sub_release
+#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and_relaxed		arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_and_acquire		arch_atomic_fetch_and_acquire
+#define arch_atomic_fetch_and_release		arch_atomic_fetch_and_release
+#define arch_atomic_fetch_and			arch_atomic_fetch_and
+
+#define arch_atomic_fetch_andnot_relaxed	arch_atomic_fetch_andnot_relaxed
+#define arch_atomic_fetch_andnot_acquire	arch_atomic_fetch_andnot_acquire
+#define arch_atomic_fetch_andnot_release	arch_atomic_fetch_andnot_release
+#define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
+
+#define arch_atomic_fetch_or_relaxed		arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_or_acquire		arch_atomic_fetch_or_acquire
+#define arch_atomic_fetch_or_release		arch_atomic_fetch_or_release
+#define arch_atomic_fetch_or			arch_atomic_fetch_or
+
+#define arch_atomic_fetch_xor_relaxed		arch_atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_xor_acquire		arch_atomic_fetch_xor_acquire
+#define arch_atomic_fetch_xor_release		arch_atomic_fetch_xor_release
+#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
+
+#define arch_atomic_xchg_relaxed(v, new) \
+	arch_xchg_relaxed(&((v)->counter), (new))
+#define arch_atomic_xchg_acquire(v, new) \
+	arch_xchg_acquire(&((v)->counter), (new))
+#define arch_atomic_xchg_release(v, new) \
+	arch_xchg_release(&((v)->counter), (new))
+#define arch_atomic_xchg(v, new) \
+	arch_xchg(&((v)->counter), (new))
+
+#define arch_atomic_cmpxchg_relaxed(v, old, new) \
+	arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
+#define arch_atomic_cmpxchg_acquire(v, old, new) \
+	arch_cmpxchg_acquire(&((v)->counter), (old), (new))
+#define arch_atomic_cmpxchg_release(v, old, new) \
+	arch_cmpxchg_release(&((v)->counter), (old), (new))
+#define arch_atomic_cmpxchg(v, old, new) \
+	arch_cmpxchg(&((v)->counter), (old), (new))
+
+#define arch_atomic_andnot			arch_atomic_andnot
+
+/*
+ * 64-bit arch_atomic operations.
+ */
+#define ATOMIC64_INIT				ATOMIC_INIT
+#define arch_atomic64_read			arch_atomic_read
+#define arch_atomic64_set			arch_atomic_set
+
+#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
+#define arch_atomic64_add_return_acquire	arch_atomic64_add_return_acquire
+#define arch_atomic64_add_return_release	arch_atomic64_add_return_release
+#define arch_atomic64_add_return		arch_atomic64_add_return
+
+#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
+#define arch_atomic64_sub_return_acquire	arch_atomic64_sub_return_acquire
+#define arch_atomic64_sub_return_release	arch_atomic64_sub_return_release
+#define arch_atomic64_sub_return		arch_atomic64_sub_return
+
+#define arch_atomic64_fetch_add_relaxed		arch_atomic64_fetch_add_relaxed
+#define arch_atomic64_fetch_add_acquire		arch_atomic64_fetch_add_acquire
+#define arch_atomic64_fetch_add_release		arch_atomic64_fetch_add_release
+#define arch_atomic64_fetch_add			arch_atomic64_fetch_add
+
+#define arch_atomic64_fetch_sub_relaxed		arch_atomic64_fetch_sub_relaxed
+#define arch_atomic64_fetch_sub_acquire		arch_atomic64_fetch_sub_acquire
+#define arch_atomic64_fetch_sub_release		arch_atomic64_fetch_sub_release
+#define arch_atomic64_fetch_sub			arch_atomic64_fetch_sub
+
+#define arch_atomic64_fetch_and_relaxed		arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_and_acquire		arch_atomic64_fetch_and_acquire
+#define arch_atomic64_fetch_and_release		arch_atomic64_fetch_and_release
+#define arch_atomic64_fetch_and			arch_atomic64_fetch_and
+
+#define arch_atomic64_fetch_andnot_relaxed	arch_atomic64_fetch_andnot_relaxed
+#define arch_atomic64_fetch_andnot_acquire	arch_atomic64_fetch_andnot_acquire
+#define arch_atomic64_fetch_andnot_release	arch_atomic64_fetch_andnot_release
+#define arch_atomic64_fetch_andnot		arch_atomic64_fetch_andnot
+
+#define arch_atomic64_fetch_or_relaxed		arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_or_acquire		arch_atomic64_fetch_or_acquire
+#define arch_atomic64_fetch_or_release		arch_atomic64_fetch_or_release
+#define arch_atomic64_fetch_or			arch_atomic64_fetch_or
+
+#define arch_atomic64_fetch_xor_relaxed		arch_atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_xor_acquire		arch_atomic64_fetch_xor_acquire
+#define arch_atomic64_fetch_xor_release		arch_atomic64_fetch_xor_release
+#define arch_atomic64_fetch_xor			arch_atomic64_fetch_xor
+
+#define arch_atomic64_xchg_relaxed		arch_atomic_xchg_relaxed
+#define arch_atomic64_xchg_acquire		arch_atomic_xchg_acquire
+#define arch_atomic64_xchg_release		arch_atomic_xchg_release
+#define arch_atomic64_xchg			arch_atomic_xchg
+
+#define arch_atomic64_cmpxchg_relaxed		arch_atomic_cmpxchg_relaxed
+#define arch_atomic64_cmpxchg_acquire		arch_atomic_cmpxchg_acquire
+#define arch_atomic64_cmpxchg_release		arch_atomic_cmpxchg_release
+#define arch_atomic64_cmpxchg			arch_atomic_cmpxchg
+
+#define arch_atomic64_andnot			arch_atomic64_andnot
+
+#define arch_atomic64_dec_if_positive		arch_atomic64_dec_if_positive
+
+#define ARCH_ATOMIC
+
+#endif /* __ASM_ATOMIC_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/atomic_ll_sc.h b/xen/include/asm-arm/arm64/atomic_ll_sc.h
new file mode 100644
index 0000000000..e1009c0f94
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic_ll_sc.h
@@ -0,0 +1,353 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Based on arch/arm/include/asm/atomic.h
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_ATOMIC_LL_SC_H
+#define __ASM_ATOMIC_LL_SC_H
+
+#include <linux/stringify.h>
+
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+#define __LL_SC_FALLBACK(asm_ops)					\
+"	b	3f\n"							\
+"	.subsection	1\n"						\
+"3:\n"									\
+asm_ops "\n"								\
+"	b	4f\n"							\
+"	.previous\n"							\
+"4:\n"
+#else
+#define __LL_SC_FALLBACK(asm_ops) asm_ops
+#endif
+
+#ifndef CONFIG_CC_HAS_K_CONSTRAINT
+#define K
+#endif
+
+/*
+ * AArch64 UP and SMP safe atomic ops.  We use load exclusive and
+ * store exclusive to ensure that these are atomic.  We may loop
+ * to ensure that the update happens.
+ */
+
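The retry pattern the comment above describes maps loosely onto C11's weak compare-exchange, which may fail spuriously much as a store-exclusive can (this analogy is an assumption for illustration, not the kernel API):

```c
#include <stdatomic.h>

/* Hypothetical C11 sketch of the LDXR/STXR retry loop used by the
 * ATOMIC_OP() macros below, shown for an add. */
static inline void atomic_add_sketch(_Atomic int *v, int i)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	/* compare_exchange_weak may fail spuriously, like STXR, so the
	 * loop retries until the exclusive store succeeds. */
	while (!atomic_compare_exchange_weak_explicit(v, &old, old + i,
						      memory_order_relaxed,
						      memory_order_relaxed))
		;
}
```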
+#define ATOMIC_OP(op, asm_op, constraint)				\
+static inline void							\
+__ll_sc_atomic_##op(int i, atomic_t *v)					\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	asm volatile("// atomic_" #op "\n"				\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %2\n"					\
+"1:	ldxr	%w0, %2\n"						\
+"	" #asm_op "	%w0, %w0, %w3\n"				\
+"	stxr	%w1, %w0, %2\n"						\
+"	cbnz	%w1, 1b\n")						\
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+	: __stringify(constraint) "r" (i));				\
+}
+
+#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
+static inline int							\
+__ll_sc_atomic_##op##_return##name(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+	int result;							\
+									\
+	asm volatile("// atomic_" #op "_return" #name "\n"		\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %2\n"					\
+"1:	ld" #acq "xr	%w0, %2\n"					\
+"	" #asm_op "	%w0, %w0, %w3\n"				\
+"	st" #rel "xr	%w1, %w0, %2\n"					\
+"	cbnz	%w1, 1b\n"						\
+"	" #mb )								\
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+	: __stringify(constraint) "r" (i)				\
+	: cl);								\
+									\
+	return result;							\
+}
+
+#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint) \
+static inline int							\
+__ll_sc_atomic_fetch_##op##name(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+	int val, result;						\
+									\
+	asm volatile("// atomic_fetch_" #op #name "\n"			\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %3\n"					\
+"1:	ld" #acq "xr	%w0, %3\n"					\
+"	" #asm_op "	%w1, %w0, %w4\n"				\
+"	st" #rel "xr	%w2, %w1, %3\n"					\
+"	cbnz	%w2, 1b\n"						\
+"	" #mb )								\
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
+	: __stringify(constraint) "r" (i)				\
+	: cl);								\
+									\
+	return result;							\
+}
+
+#define ATOMIC_OPS(...)							\
+	ATOMIC_OP(__VA_ARGS__)						\
+	ATOMIC_OP_RETURN(        , dmb ish,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_OP_RETURN(_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+	ATOMIC_OP_RETURN(_acquire,        , a,  , "memory", __VA_ARGS__)\
+	ATOMIC_OP_RETURN(_release,        ,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)
+
+ATOMIC_OPS(add, add, I)
+ATOMIC_OPS(sub, sub, J)
+
+#undef ATOMIC_OPS
+#define ATOMIC_OPS(...)							\
+	ATOMIC_OP(__VA_ARGS__)						\
+	ATOMIC_FETCH_OP (        , dmb ish,  , l, "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_relaxed,        ,  ,  ,         , __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_acquire,        , a,  , "memory", __VA_ARGS__)\
+	ATOMIC_FETCH_OP (_release,        ,  , l, "memory", __VA_ARGS__)
+
+ATOMIC_OPS(and, and, K)
+ATOMIC_OPS(or, orr, K)
+ATOMIC_OPS(xor, eor, K)
+/*
+ * GAS converts the mysterious and undocumented BIC (immediate) alias to
+ * an AND (immediate) instruction with the immediate inverted. We don't
+ * have a constraint for this, so fall back to register.
+ */
+ATOMIC_OPS(andnot, bic, )
+
+#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define ATOMIC64_OP(op, asm_op, constraint)				\
+static inline void							\
+__ll_sc_atomic64_##op(s64 i, atomic64_t *v)				\
+{									\
+	s64 result;							\
+	unsigned long tmp;						\
+									\
+	asm volatile("// atomic64_" #op "\n"				\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %2\n"					\
+"1:	ldxr	%0, %2\n"						\
+"	" #asm_op "	%0, %0, %3\n"					\
+"	stxr	%w1, %0, %2\n"						\
+"	cbnz	%w1, 1b")						\
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+	: __stringify(constraint) "r" (i));				\
+}
+
+#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op, constraint)\
+static inline long							\
+__ll_sc_atomic64_##op##_return##name(s64 i, atomic64_t *v)		\
+{									\
+	s64 result;							\
+	unsigned long tmp;						\
+									\
+	asm volatile("// atomic64_" #op "_return" #name "\n"		\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %2\n"					\
+"1:	ld" #acq "xr	%0, %2\n"					\
+"	" #asm_op "	%0, %0, %3\n"					\
+"	st" #rel "xr	%w1, %0, %2\n"					\
+"	cbnz	%w1, 1b\n"						\
+"	" #mb )								\
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
+	: __stringify(constraint) "r" (i)				\
+	: cl);								\
+									\
+	return result;							\
+}
+
+#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op, constraint)\
+static inline long							\
+__ll_sc_atomic64_fetch_##op##name(s64 i, atomic64_t *v)		\
+{									\
+	s64 result, val;						\
+	unsigned long tmp;						\
+									\
+	asm volatile("// atomic64_fetch_" #op #name "\n"		\
+	__LL_SC_FALLBACK(						\
+"	prfm	pstl1strm, %3\n"					\
+"1:	ld" #acq "xr	%0, %3\n"					\
+"	" #asm_op "	%1, %0, %4\n"					\
+"	st" #rel "xr	%w2, %1, %3\n"					\
+"	cbnz	%w2, 1b\n"						\
+"	" #mb )								\
+	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
+	: __stringify(constraint) "r" (i)				\
+	: cl);								\
+									\
+	return result;							\
+}
+
+#define ATOMIC64_OPS(...)						\
+	ATOMIC64_OP(__VA_ARGS__)					\
+	ATOMIC64_OP_RETURN(, dmb ish,  , l, "memory", __VA_ARGS__)	\
+	ATOMIC64_OP_RETURN(_relaxed,,  ,  ,         , __VA_ARGS__)	\
+	ATOMIC64_OP_RETURN(_acquire,, a,  , "memory", __VA_ARGS__)	\
+	ATOMIC64_OP_RETURN(_release,,  , l, "memory", __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (, dmb ish,  , l, "memory", __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (_relaxed,,  ,  ,         , __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (_acquire,, a,  , "memory", __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (_release,,  , l, "memory", __VA_ARGS__)
+
+ATOMIC64_OPS(add, add, I)
+ATOMIC64_OPS(sub, sub, J)
+
+#undef ATOMIC64_OPS
+#define ATOMIC64_OPS(...)						\
+	ATOMIC64_OP(__VA_ARGS__)					\
+	ATOMIC64_FETCH_OP (, dmb ish,  , l, "memory", __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (_relaxed,,  ,  ,         , __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (_acquire,, a,  , "memory", __VA_ARGS__)	\
+	ATOMIC64_FETCH_OP (_release,,  , l, "memory", __VA_ARGS__)
+
+ATOMIC64_OPS(and, and, L)
+ATOMIC64_OPS(or, orr, L)
+ATOMIC64_OPS(xor, eor, L)
+/*
+ * GAS converts the mysterious and undocumented BIC (immediate) alias to
+ * an AND (immediate) instruction with the immediate inverted. We don't
+ * have a constraint for this, so fall back to register.
+ */
+ATOMIC64_OPS(andnot, bic, )
+
+#undef ATOMIC64_OPS
+#undef ATOMIC64_FETCH_OP
+#undef ATOMIC64_OP_RETURN
+#undef ATOMIC64_OP
+
+static inline s64
+__ll_sc_atomic64_dec_if_positive(atomic64_t *v)
+{
+	s64 result;
+	unsigned long tmp;
+
+	asm volatile("// atomic64_dec_if_positive\n"
+	__LL_SC_FALLBACK(
+"	prfm	pstl1strm, %2\n"
+"1:	ldxr	%0, %2\n"
+"	subs	%0, %0, #1\n"
+"	b.lt	2f\n"
+"	stlxr	%w1, %0, %2\n"
+"	cbnz	%w1, 1b\n"
+"	dmb	ish\n"
+"2:")
+	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
+	:
+	: "cc", "memory");
+
+	return result;
+}
+
+#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl, constraint)	\
+static inline u##sz							\
+__ll_sc__cmpxchg_case_##name##sz(volatile void *ptr,			\
+					 unsigned long old,		\
+					 u##sz new)			\
+{									\
+	unsigned long tmp;						\
+	u##sz oldval;							\
+									\
+	/*								\
+	 * Sub-word sizes require explicit casting so that the compare  \
+	 * part of the cmpxchg doesn't end up interpreting non-zero	\
+	 * upper bits of the register containing "old".			\
+	 */								\
+	if (sz < 32)							\
+		old = (u##sz)old;					\
+									\
+	asm volatile(							\
+	__LL_SC_FALLBACK(						\
+	"	prfm	pstl1strm, %[v]\n"				\
+	"1:	ld" #acq "xr" #sfx "\t%" #w "[oldval], %[v]\n"		\
+	"	eor	%" #w "[tmp], %" #w "[oldval], %" #w "[old]\n"	\
+	"	cbnz	%" #w "[tmp], 2f\n"				\
+	"	st" #rel "xr" #sfx "\t%w[tmp], %" #w "[new], %[v]\n"	\
+	"	cbnz	%w[tmp], 1b\n"					\
+	"	" #mb "\n"						\
+	"2:")								\
+	: [tmp] "=&r" (tmp), [oldval] "=&r" (oldval),			\
+	  [v] "+Q" (*(u##sz *)ptr)					\
+	: [old] __stringify(constraint) "r" (old), [new] "r" (new)	\
+	: cl);								\
+									\
+	return oldval;							\
+}
+
+/*
+ * Earlier versions of GCC (no later than 8.1.0) appear to incorrectly
+ * handle the 'K' constraint for the value 4294967295 - thus we use no
+ * constraint for 32 bit operations.
+ */
+__CMPXCHG_CASE(w, b,     ,  8,        ,  ,  ,         , K)
+__CMPXCHG_CASE(w, h,     , 16,        ,  ,  ,         , K)
+__CMPXCHG_CASE(w,  ,     , 32,        ,  ,  ,         , K)
+__CMPXCHG_CASE( ,  ,     , 64,        ,  ,  ,         , L)
+__CMPXCHG_CASE(w, b, acq_,  8,        , a,  , "memory", K)
+__CMPXCHG_CASE(w, h, acq_, 16,        , a,  , "memory", K)
+__CMPXCHG_CASE(w,  , acq_, 32,        , a,  , "memory", K)
+__CMPXCHG_CASE( ,  , acq_, 64,        , a,  , "memory", L)
+__CMPXCHG_CASE(w, b, rel_,  8,        ,  , l, "memory", K)
+__CMPXCHG_CASE(w, h, rel_, 16,        ,  , l, "memory", K)
+__CMPXCHG_CASE(w,  , rel_, 32,        ,  , l, "memory", K)
+__CMPXCHG_CASE( ,  , rel_, 64,        ,  , l, "memory", L)
+__CMPXCHG_CASE(w, b,  mb_,  8, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE(w, h,  mb_, 16, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE(w,  ,  mb_, 32, dmb ish,  , l, "memory", K)
+__CMPXCHG_CASE( ,  ,  mb_, 64, dmb ish,  , l, "memory", L)
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name, mb, rel, cl)				\
+static inline long							\
+__ll_sc__cmpxchg_double##name(unsigned long old1,			\
+				      unsigned long old2,		\
+				      unsigned long new1,		\
+				      unsigned long new2,		\
+				      volatile void *ptr)		\
+{									\
+	unsigned long tmp, ret;						\
+									\
+	asm volatile("// __cmpxchg_double" #name "\n"			\
+	__LL_SC_FALLBACK(						\
+	"	prfm	pstl1strm, %2\n"				\
+	"1:	ldxp	%0, %1, %2\n"					\
+	"	eor	%0, %0, %3\n"					\
+	"	eor	%1, %1, %4\n"					\
+	"	orr	%1, %0, %1\n"					\
+	"	cbnz	%1, 2f\n"					\
+	"	st" #rel "xp	%w0, %5, %6, %2\n"			\
+	"	cbnz	%w0, 1b\n"					\
+	"	" #mb "\n"						\
+	"2:")								\
+	: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)	\
+	: "r" (old1), "r" (old2), "r" (new1), "r" (new2)		\
+	: cl);								\
+									\
+	return ret;							\
+}
+
+__CMPXCHG_DBL(   ,        ,  ,         )
+__CMPXCHG_DBL(_mb, dmb ish, l, "memory")
+
+#undef __CMPXCHG_DBL
+#undef K
+
+#endif	/* __ASM_ATOMIC_LL_SC_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/arm64/atomic_lse.h
new file mode 100644
index 0000000000..b3b0d43a7d
--- /dev/null
+++ b/xen/include/asm-arm/arm64/atomic_lse.h
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Based on arch/arm/include/asm/atomic.h
+ *
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_ATOMIC_LSE_H
+#define __ASM_ATOMIC_LSE_H
+
+#define ATOMIC_OP(op, asm_op)						\
+static inline void __lse_atomic_##op(int i, atomic_t *v)			\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+"	" #asm_op "	%w[i], %[v]\n"					\
+	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v));							\
+}
+
+ATOMIC_OP(andnot, stclr)
+ATOMIC_OP(or, stset)
+ATOMIC_OP(xor, steor)
+ATOMIC_OP(add, stadd)
+
+#undef ATOMIC_OP
+
+#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...)			\
+static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+"	" #asm_op #mb "	%w[i], %w[i], %[v]"				\
+	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+#define ATOMIC_FETCH_OPS(op, asm_op)					\
+	ATOMIC_FETCH_OP(_relaxed,   , op, asm_op)			\
+	ATOMIC_FETCH_OP(_acquire,  a, op, asm_op, "memory")		\
+	ATOMIC_FETCH_OP(_release,  l, op, asm_op, "memory")		\
+	ATOMIC_FETCH_OP(        , al, op, asm_op, "memory")
+
+ATOMIC_FETCH_OPS(andnot, ldclr)
+ATOMIC_FETCH_OPS(or, ldset)
+ATOMIC_FETCH_OPS(xor, ldeor)
+ATOMIC_FETCH_OPS(add, ldadd)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_FETCH_OPS
+
+#define ATOMIC_OP_ADD_RETURN(name, mb, cl...)				\
+static inline int __lse_atomic_add_return##name(int i, atomic_t *v)	\
+{									\
+	u32 tmp;							\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	ldadd" #mb "	%w[i], %w[tmp], %[v]\n"			\
+	"	add	%w[i], %w[i], %w[tmp]"				\
+	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_OP_ADD_RETURN(_relaxed,   )
+ATOMIC_OP_ADD_RETURN(_acquire,  a, "memory")
+ATOMIC_OP_ADD_RETURN(_release,  l, "memory")
+ATOMIC_OP_ADD_RETURN(        , al, "memory")
+
+#undef ATOMIC_OP_ADD_RETURN
+
+static inline void __lse_atomic_and(int i, atomic_t *v)
+{
+	asm volatile(
+	__LSE_PREAMBLE
+	"	mvn	%w[i], %w[i]\n"
+	"	stclr	%w[i], %[v]"
+	: [i] "+&r" (i), [v] "+Q" (v->counter)
+	: "r" (v));
+}
+
+#define ATOMIC_FETCH_OP_AND(name, mb, cl...)				\
+static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	mvn	%w[i], %w[i]\n"					\
+	"	ldclr" #mb "	%w[i], %w[i], %[v]"			\
+	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_FETCH_OP_AND(_relaxed,   )
+ATOMIC_FETCH_OP_AND(_acquire,  a, "memory")
+ATOMIC_FETCH_OP_AND(_release,  l, "memory")
+ATOMIC_FETCH_OP_AND(        , al, "memory")
+
+#undef ATOMIC_FETCH_OP_AND
+
+static inline void __lse_atomic_sub(int i, atomic_t *v)
+{
+	asm volatile(
+	__LSE_PREAMBLE
+	"	neg	%w[i], %w[i]\n"
+	"	stadd	%w[i], %[v]"
+	: [i] "+&r" (i), [v] "+Q" (v->counter)
+	: "r" (v));
+}
+
+#define ATOMIC_OP_SUB_RETURN(name, mb, cl...)				\
+static inline int __lse_atomic_sub_return##name(int i, atomic_t *v)	\
+{									\
+	u32 tmp;							\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	neg	%w[i], %w[i]\n"					\
+	"	ldadd" #mb "	%w[i], %w[tmp], %[v]\n"			\
+	"	add	%w[i], %w[i], %w[tmp]"				\
+	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_OP_SUB_RETURN(_relaxed,   )
+ATOMIC_OP_SUB_RETURN(_acquire,  a, "memory")
+ATOMIC_OP_SUB_RETURN(_release,  l, "memory")
+ATOMIC_OP_SUB_RETURN(        , al, "memory")
+
+#undef ATOMIC_OP_SUB_RETURN
+
+#define ATOMIC_FETCH_OP_SUB(name, mb, cl...)				\
+static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	neg	%w[i], %w[i]\n"					\
+	"	ldadd" #mb "	%w[i], %w[i], %[v]"			\
+	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC_FETCH_OP_SUB(_relaxed,   )
+ATOMIC_FETCH_OP_SUB(_acquire,  a, "memory")
+ATOMIC_FETCH_OP_SUB(_release,  l, "memory")
+ATOMIC_FETCH_OP_SUB(        , al, "memory")
+
+#undef ATOMIC_FETCH_OP_SUB
+
+#define ATOMIC64_OP(op, asm_op)						\
+static inline void __lse_atomic64_##op(s64 i, atomic64_t *v)		\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+"	" #asm_op "	%[i], %[v]\n"					\
+	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v));							\
+}
+
+ATOMIC64_OP(andnot, stclr)
+ATOMIC64_OP(or, stset)
+ATOMIC64_OP(xor, steor)
+ATOMIC64_OP(add, stadd)
+
+#undef ATOMIC64_OP
+
+#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...)			\
+static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+"	" #asm_op #mb "	%[i], %[i], %[v]"				\
+	: [i] "+r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+#define ATOMIC64_FETCH_OPS(op, asm_op)					\
+	ATOMIC64_FETCH_OP(_relaxed,   , op, asm_op)			\
+	ATOMIC64_FETCH_OP(_acquire,  a, op, asm_op, "memory")		\
+	ATOMIC64_FETCH_OP(_release,  l, op, asm_op, "memory")		\
+	ATOMIC64_FETCH_OP(        , al, op, asm_op, "memory")
+
+ATOMIC64_FETCH_OPS(andnot, ldclr)
+ATOMIC64_FETCH_OPS(or, ldset)
+ATOMIC64_FETCH_OPS(xor, ldeor)
+ATOMIC64_FETCH_OPS(add, ldadd)
+
+#undef ATOMIC64_FETCH_OP
+#undef ATOMIC64_FETCH_OPS
+
+#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...)				\
+static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
+{									\
+	unsigned long tmp;						\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	ldadd" #mb "	%[i], %x[tmp], %[v]\n"			\
+	"	add	%[i], %[i], %x[tmp]"				\
+	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC64_OP_ADD_RETURN(_relaxed,   )
+ATOMIC64_OP_ADD_RETURN(_acquire,  a, "memory")
+ATOMIC64_OP_ADD_RETURN(_release,  l, "memory")
+ATOMIC64_OP_ADD_RETURN(        , al, "memory")
+
+#undef ATOMIC64_OP_ADD_RETURN
+
+static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
+{
+	asm volatile(
+	__LSE_PREAMBLE
+	"	mvn	%[i], %[i]\n"
+	"	stclr	%[i], %[v]"
+	: [i] "+&r" (i), [v] "+Q" (v->counter)
+	: "r" (v));
+}
+
+#define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
+static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	mvn	%[i], %[i]\n"					\
+	"	ldclr" #mb "	%[i], %[i], %[v]"			\
+	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC64_FETCH_OP_AND(_relaxed,   )
+ATOMIC64_FETCH_OP_AND(_acquire,  a, "memory")
+ATOMIC64_FETCH_OP_AND(_release,  l, "memory")
+ATOMIC64_FETCH_OP_AND(        , al, "memory")
+
+#undef ATOMIC64_FETCH_OP_AND
+
+static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
+{
+	asm volatile(
+	__LSE_PREAMBLE
+	"	neg	%[i], %[i]\n"
+	"	stadd	%[i], %[v]"
+	: [i] "+&r" (i), [v] "+Q" (v->counter)
+	: "r" (v));
+}
+
+#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...)				\
+static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)	\
+{									\
+	unsigned long tmp;						\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	neg	%[i], %[i]\n"					\
+	"	ldadd" #mb "	%[i], %x[tmp], %[v]\n"			\
+	"	add	%[i], %[i], %x[tmp]"				\
+	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC64_OP_SUB_RETURN(_relaxed,   )
+ATOMIC64_OP_SUB_RETURN(_acquire,  a, "memory")
+ATOMIC64_OP_SUB_RETURN(_release,  l, "memory")
+ATOMIC64_OP_SUB_RETURN(        , al, "memory")
+
+#undef ATOMIC64_OP_SUB_RETURN
+
+#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
+static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v)	\
+{									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	neg	%[i], %[i]\n"					\
+	"	ldadd" #mb "	%[i], %[i], %[v]"			\
+	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
+	: "r" (v)							\
+	: cl);								\
+									\
+	return i;							\
+}
+
+ATOMIC64_FETCH_OP_SUB(_relaxed,   )
+ATOMIC64_FETCH_OP_SUB(_acquire,  a, "memory")
+ATOMIC64_FETCH_OP_SUB(_release,  l, "memory")
+ATOMIC64_FETCH_OP_SUB(        , al, "memory")
+
+#undef ATOMIC64_FETCH_OP_SUB
+
+static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
+{
+	unsigned long tmp;
+
+	asm volatile(
+	__LSE_PREAMBLE
+	"1:	ldr	%x[tmp], %[v]\n"
+	"	subs	%[ret], %x[tmp], #1\n"
+	"	b.lt	2f\n"
+	"	casal	%x[tmp], %[ret], %[v]\n"
+	"	sub	%x[tmp], %x[tmp], #1\n"
+	"	sub	%x[tmp], %x[tmp], %[ret]\n"
+	"	cbnz	%x[tmp], 1b\n"
+	"2:"
+	: [ret] "+&r" (v), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)
+	:
+	: "cc", "memory");
+
+	return (long)v;
+}
+
+#define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...)			\
+static __always_inline u##sz						\
+__lse__cmpxchg_case_##name##sz(volatile void *ptr,			\
+					      u##sz old,		\
+					      u##sz new)		\
+{									\
+	register unsigned long x0 asm ("x0") = (unsigned long)ptr;	\
+	register u##sz x1 asm ("x1") = old;				\
+	register u##sz x2 asm ("x2") = new;				\
+	unsigned long tmp;						\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	mov	%" #w "[tmp], %" #w "[old]\n"			\
+	"	cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n"	\
+	"	mov	%" #w "[ret], %" #w "[tmp]"			\
+	: [ret] "+r" (x0), [v] "+Q" (*(unsigned long *)ptr),		\
+	  [tmp] "=&r" (tmp)						\
+	: [old] "r" (x1), [new] "r" (x2)				\
+	: cl);								\
+									\
+	return x0;							\
+}
+
+__CMPXCHG_CASE(w, b,     ,  8,   )
+__CMPXCHG_CASE(w, h,     , 16,   )
+__CMPXCHG_CASE(w,  ,     , 32,   )
+__CMPXCHG_CASE(x,  ,     , 64,   )
+__CMPXCHG_CASE(w, b, acq_,  8,  a, "memory")
+__CMPXCHG_CASE(w, h, acq_, 16,  a, "memory")
+__CMPXCHG_CASE(w,  , acq_, 32,  a, "memory")
+__CMPXCHG_CASE(x,  , acq_, 64,  a, "memory")
+__CMPXCHG_CASE(w, b, rel_,  8,  l, "memory")
+__CMPXCHG_CASE(w, h, rel_, 16,  l, "memory")
+__CMPXCHG_CASE(w,  , rel_, 32,  l, "memory")
+__CMPXCHG_CASE(x,  , rel_, 64,  l, "memory")
+__CMPXCHG_CASE(w, b,  mb_,  8, al, "memory")
+__CMPXCHG_CASE(w, h,  mb_, 16, al, "memory")
+__CMPXCHG_CASE(w,  ,  mb_, 32, al, "memory")
+__CMPXCHG_CASE(x,  ,  mb_, 64, al, "memory")
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name, mb, cl...)					\
+static __always_inline long						\
+__lse__cmpxchg_double##name(unsigned long old1,				\
+					 unsigned long old2,		\
+					 unsigned long new1,		\
+					 unsigned long new2,		\
+					 volatile void *ptr)		\
+{									\
+	unsigned long oldval1 = old1;					\
+	unsigned long oldval2 = old2;					\
+	register unsigned long x0 asm ("x0") = old1;			\
+	register unsigned long x1 asm ("x1") = old2;			\
+	register unsigned long x2 asm ("x2") = new1;			\
+	register unsigned long x3 asm ("x3") = new2;			\
+	register unsigned long x4 asm ("x4") = (unsigned long)ptr;	\
+									\
+	asm volatile(							\
+	__LSE_PREAMBLE							\
+	"	casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\
+	"	eor	%[old1], %[old1], %[oldval1]\n"			\
+	"	eor	%[old2], %[old2], %[oldval2]\n"			\
+	"	orr	%[old1], %[old1], %[old2]"			\
+	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
+	  [v] "+Q" (*(unsigned long *)ptr)				\
+	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
+	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)		\
+	: cl);								\
+									\
+	return x0;							\
+}
+
+__CMPXCHG_DBL(   ,   )
+__CMPXCHG_DBL(_mb, al, "memory")
+
+#undef __CMPXCHG_DBL
+
+#endif	/* __ASM_ATOMIC_LSE_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
new file mode 100644
index 0000000000..c51388216e
--- /dev/null
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -0,0 +1,285 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Based on arch/arm/include/asm/cmpxchg.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ */
+#ifndef __ASM_CMPXCHG_H
+#define __ASM_CMPXCHG_H
+
+#include <linux/build_bug.h>
+#include <linux/compiler.h>
+
+#include <asm/barrier.h>
+#include <asm/lse.h>
+
+/*
+ * We need separate acquire parameters for ll/sc and lse, since the full
+ * barrier case is generated as release+dmb for the former and
+ * acquire+release for the latter.
+ */
+#define __XCHG_CASE(w, sfx, name, sz, mb, nop_lse, acq, acq_lse, rel, cl)	\
+static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr)		\
+{										\
+	u##sz ret;								\
+	unsigned long tmp;							\
+										\
+	asm volatile(ARM64_LSE_ATOMIC_INSN(					\
+	/* LL/SC */								\
+	"	prfm	pstl1strm, %2\n"					\
+	"1:	ld" #acq "xr" #sfx "\t%" #w "0, %2\n"				\
+	"	st" #rel "xr" #sfx "\t%w1, %" #w "3, %2\n"			\
+	"	cbnz	%w1, 1b\n"						\
+	"	" #mb,								\
+	/* LSE atomics */							\
+	"	swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n"		\
+		__nops(3)							\
+	"	" #nop_lse)							\
+	: "=&r" (ret), "=&r" (tmp), "+Q" (*(u##sz *)ptr)			\
+	: "r" (x)								\
+	: cl);									\
+										\
+	return ret;								\
+}
+
+__XCHG_CASE(w, b,     ,  8,        ,    ,  ,  ,  ,         )
+__XCHG_CASE(w, h,     , 16,        ,    ,  ,  ,  ,         )
+__XCHG_CASE(w,  ,     , 32,        ,    ,  ,  ,  ,         )
+__XCHG_CASE( ,  ,     , 64,        ,    ,  ,  ,  ,         )
+__XCHG_CASE(w, b, acq_,  8,        ,    , a, a,  , "memory")
+__XCHG_CASE(w, h, acq_, 16,        ,    , a, a,  , "memory")
+__XCHG_CASE(w,  , acq_, 32,        ,    , a, a,  , "memory")
+__XCHG_CASE( ,  , acq_, 64,        ,    , a, a,  , "memory")
+__XCHG_CASE(w, b, rel_,  8,        ,    ,  ,  , l, "memory")
+__XCHG_CASE(w, h, rel_, 16,        ,    ,  ,  , l, "memory")
+__XCHG_CASE(w,  , rel_, 32,        ,    ,  ,  , l, "memory")
+__XCHG_CASE( ,  , rel_, 64,        ,    ,  ,  , l, "memory")
+__XCHG_CASE(w, b,  mb_,  8, dmb ish, nop,  , a, l, "memory")
+__XCHG_CASE(w, h,  mb_, 16, dmb ish, nop,  , a, l, "memory")
+__XCHG_CASE(w,  ,  mb_, 32, dmb ish, nop,  , a, l, "memory")
+__XCHG_CASE( ,  ,  mb_, 64, dmb ish, nop,  , a, l, "memory")
+
+#undef __XCHG_CASE
+
+#define __XCHG_GEN(sfx)							\
+static __always_inline  unsigned long __xchg##sfx(unsigned long x,	\
+					volatile void *ptr,		\
+					int size)			\
+{									\
+	switch (size) {							\
+	case 1:								\
+		return __xchg_case##sfx##_8(x, ptr);			\
+	case 2:								\
+		return __xchg_case##sfx##_16(x, ptr);			\
+	case 4:								\
+		return __xchg_case##sfx##_32(x, ptr);			\
+	case 8:								\
+		return __xchg_case##sfx##_64(x, ptr);			\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+									\
+	unreachable();							\
+}
+
+__XCHG_GEN()
+__XCHG_GEN(_acq)
+__XCHG_GEN(_rel)
+__XCHG_GEN(_mb)
+
+#undef __XCHG_GEN
+
+#define __xchg_wrapper(sfx, ptr, x)					\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__ret = (__typeof__(*(ptr)))					\
+		__xchg##sfx((unsigned long)(x), (ptr), sizeof(*(ptr))); \
+	__ret;								\
+})
+
+/* xchg */
+#define arch_xchg_relaxed(...)	__xchg_wrapper(    , __VA_ARGS__)
+#define arch_xchg_acquire(...)	__xchg_wrapper(_acq, __VA_ARGS__)
+#define arch_xchg_release(...)	__xchg_wrapper(_rel, __VA_ARGS__)
+#define arch_xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
+
+#define __CMPXCHG_CASE(name, sz)			\
+static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr,	\
+					      u##sz old,		\
+					      u##sz new)		\
+{									\
+	return __lse_ll_sc_body(_cmpxchg_case_##name##sz,		\
+				ptr, old, new);				\
+}
+
+__CMPXCHG_CASE(    ,  8)
+__CMPXCHG_CASE(    , 16)
+__CMPXCHG_CASE(    , 32)
+__CMPXCHG_CASE(    , 64)
+__CMPXCHG_CASE(acq_,  8)
+__CMPXCHG_CASE(acq_, 16)
+__CMPXCHG_CASE(acq_, 32)
+__CMPXCHG_CASE(acq_, 64)
+__CMPXCHG_CASE(rel_,  8)
+__CMPXCHG_CASE(rel_, 16)
+__CMPXCHG_CASE(rel_, 32)
+__CMPXCHG_CASE(rel_, 64)
+__CMPXCHG_CASE(mb_,  8)
+__CMPXCHG_CASE(mb_, 16)
+__CMPXCHG_CASE(mb_, 32)
+__CMPXCHG_CASE(mb_, 64)
+
+#undef __CMPXCHG_CASE
+
+#define __CMPXCHG_DBL(name)						\
+static inline long __cmpxchg_double##name(unsigned long old1,		\
+					 unsigned long old2,		\
+					 unsigned long new1,		\
+					 unsigned long new2,		\
+					 volatile void *ptr)		\
+{									\
+	return __lse_ll_sc_body(_cmpxchg_double##name, 			\
+				old1, old2, new1, new2, ptr);		\
+}
+
+__CMPXCHG_DBL(   )
+__CMPXCHG_DBL(_mb)
+
+#undef __CMPXCHG_DBL
+
+#define __CMPXCHG_GEN(sfx)						\
+static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
+					   unsigned long old,		\
+					   unsigned long new,		\
+					   int size)			\
+{									\
+	switch (size) {							\
+	case 1:								\
+		return __cmpxchg_case##sfx##_8(ptr, old, new);		\
+	case 2:								\
+		return __cmpxchg_case##sfx##_16(ptr, old, new);		\
+	case 4:								\
+		return __cmpxchg_case##sfx##_32(ptr, old, new);		\
+	case 8:								\
+		return __cmpxchg_case##sfx##_64(ptr, old, new);		\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+									\
+	unreachable();							\
+}
+
+__CMPXCHG_GEN()
+__CMPXCHG_GEN(_acq)
+__CMPXCHG_GEN(_rel)
+__CMPXCHG_GEN(_mb)
+
+#undef __CMPXCHG_GEN
+
+#define __cmpxchg_wrapper(sfx, ptr, o, n)				\
+({									\
+	__typeof__(*(ptr)) __ret;					\
+	__ret = (__typeof__(*(ptr)))					\
+		__cmpxchg##sfx((ptr), (unsigned long)(o),		\
+				(unsigned long)(n), sizeof(*(ptr)));	\
+	__ret;								\
+})
+
+/* cmpxchg */
+#define arch_cmpxchg_relaxed(...)	__cmpxchg_wrapper(    , __VA_ARGS__)
+#define arch_cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
+#define arch_cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
+#define arch_cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
+#define arch_cmpxchg_local		arch_cmpxchg_relaxed
+
+/* cmpxchg64 */
+#define arch_cmpxchg64_relaxed		arch_cmpxchg_relaxed
+#define arch_cmpxchg64_acquire		arch_cmpxchg_acquire
+#define arch_cmpxchg64_release		arch_cmpxchg_release
+#define arch_cmpxchg64			arch_cmpxchg
+#define arch_cmpxchg64_local		arch_cmpxchg_local
+
+/* cmpxchg_double */
+#define system_has_cmpxchg_double()     1
+
+#define __cmpxchg_double_check(ptr1, ptr2)					\
+({										\
+	if (sizeof(*(ptr1)) != 8)						\
+		BUILD_BUG();							\
+	VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1);	\
+})
+
+#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)				\
+({										\
+	int __ret;								\
+	__cmpxchg_double_check(ptr1, ptr2);					\
+	__ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2),	\
+				     (unsigned long)(n1), (unsigned long)(n2),	\
+				     ptr1);					\
+	__ret;									\
+})
+
+#define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2)			\
+({										\
+	int __ret;								\
+	__cmpxchg_double_check(ptr1, ptr2);					\
+	__ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2),	\
+				  (unsigned long)(n1), (unsigned long)(n2),	\
+				  ptr1);					\
+	__ret;									\
+})
+
+#define __CMPWAIT_CASE(w, sfx, sz)					\
+static inline void __cmpwait_case_##sz(volatile void *ptr,		\
+				       unsigned long val)		\
+{									\
+	unsigned long tmp;						\
+									\
+	asm volatile(							\
+	"	sevl\n"							\
+	"	wfe\n"							\
+	"	ldxr" #sfx "\t%" #w "[tmp], %[v]\n"			\
+	"	eor	%" #w "[tmp], %" #w "[tmp], %" #w "[val]\n"	\
+	"	cbnz	%" #w "[tmp], 1f\n"				\
+	"	wfe\n"							\
+	"1:"								\
+	: [tmp] "=&r" (tmp), [v] "+Q" (*(unsigned long *)ptr)		\
+	: [val] "r" (val));						\
+}
+
+__CMPWAIT_CASE(w, b, 8);
+__CMPWAIT_CASE(w, h, 16);
+__CMPWAIT_CASE(w,  , 32);
+__CMPWAIT_CASE( ,  , 64);
+
+#undef __CMPWAIT_CASE
+
+#define __CMPWAIT_GEN(sfx)						\
+static __always_inline void __cmpwait##sfx(volatile void *ptr,		\
+				  unsigned long val,			\
+				  int size)				\
+{									\
+	switch (size) {							\
+	case 1:								\
+		return __cmpwait_case##sfx##_8(ptr, (u8)val);		\
+	case 2:								\
+		return __cmpwait_case##sfx##_16(ptr, (u16)val);		\
+	case 4:								\
+		return __cmpwait_case##sfx##_32(ptr, val);		\
+	case 8:								\
+		return __cmpwait_case##sfx##_64(ptr, val);		\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+									\
+	unreachable();							\
+}
+
+__CMPWAIT_GEN()
+
+#undef __CMPWAIT_GEN
+
+#define __cmpwait_relaxed(ptr, val) \
+	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
+
+#endif	/* __ASM_CMPXCHG_H */
\ No newline at end of file
diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/lse.h
new file mode 100644
index 0000000000..704be3e4e4
--- /dev/null
+++ b/xen/include/asm-arm/arm64/lse.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_LSE_H
+#define __ASM_LSE_H
+
+#include <asm/atomic_ll_sc.h>
+
+#ifdef CONFIG_ARM64_LSE_ATOMICS
+
+#define __LSE_PREAMBLE	".arch_extension lse\n"
+
+#include <linux/compiler_types.h>
+#include <linux/export.h>
+#include <linux/jump_label.h>
+#include <linux/stringify.h>
+#include <asm/alternative.h>
+#include <asm/atomic_lse.h>
+#include <asm/cpucaps.h>
+
+extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
+extern struct static_key_false arm64_const_caps_ready;
+
+static inline bool system_uses_lse_atomics(void)
+{
+	return (static_branch_likely(&arm64_const_caps_ready)) &&
+		static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
+}
+
+#define __lse_ll_sc_body(op, ...)					\
+({									\
+	system_uses_lse_atomics() ?					\
+		__lse_##op(__VA_ARGS__) :				\
+		__ll_sc_##op(__VA_ARGS__);				\
+})
+
+/* In-line patching at runtime */
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse)				\
+	ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
+
+#else	/* CONFIG_ARM64_LSE_ATOMICS */
+
+static inline bool system_uses_lse_atomics(void) { return false; }
+
+#define __lse_ll_sc_body(op, ...)		__ll_sc_##op(__VA_ARGS__)
+
+#define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
+
+#endif	/* CONFIG_ARM64_LSE_ATOMICS */
+#endif	/* __ASM_LSE_H */
\ No newline at end of file
diff --git a/xen/include/xen/rwonce.h b/xen/include/xen/rwonce.h
new file mode 100644
index 0000000000..6b47392d1c
--- /dev/null
+++ b/xen/include/xen/rwonce.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Prevent the compiler from merging or refetching reads or writes. The
+ * compiler is also forbidden from reordering successive instances of
+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
+ * particular ordering. One way to make the compiler aware of ordering is to
+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C
+ * statements.
+ *
+ * These two macros will also work on aggregate data types like structs or
+ * unions.
+ *
+ * Their two major use cases are: (1) Mediating communication between
+ * process-level code and irq/NMI handlers, all running on the same CPU,
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * mutilate accesses that either do not require ordering or that interact
+ * with an explicit memory barrier or atomic instruction that provides the
+ * required ordering.
+ */
+#ifndef __ASM_GENERIC_RWONCE_H
+#define __ASM_GENERIC_RWONCE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/compiler_types.h>
+#include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+
+/*
+ * Yes, this permits 64-bit accesses on 32-bit architectures. These will
+ * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
+ * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
+ * (e.g. a virtual address) and a strong prevailing wind.
+ */
+#define compiletime_assert_rwonce_type(t)					\
+	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),	\
+		"Unsupported access size for {READ,WRITE}_ONCE().")
+
+/*
+ * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
+ * atomicity. Note that this may result in tears!
+ */
+#ifndef __READ_ONCE
+#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#endif
+
+#define READ_ONCE(x)							\
+({									\
+	compiletime_assert_rwonce_type(x);				\
+	__READ_ONCE(x);							\
+})
+
+#define __WRITE_ONCE(x, val)						\
+do {									\
+	*(volatile typeof(x) *)&(x) = (val);				\
+} while (0)
+
+#define WRITE_ONCE(x, val)						\
+do {									\
+	compiletime_assert_rwonce_type(x);				\
+	__WRITE_ONCE(x, val);						\
+} while (0)
+
+static __no_sanitize_or_inline
+unsigned long __read_once_word_nocheck(const void *addr)
+{
+	return __READ_ONCE(*(unsigned long *)addr);
+}
+
+/*
+ * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
+ * word from memory atomically but without telling KASAN/KCSAN. This is
+ * usually used by unwinding code when walking the stack of a running process.
+ */
+#define READ_ONCE_NOCHECK(x)						\
+({									\
+	compiletime_assert(sizeof(x) == sizeof(unsigned long),		\
+		"Unsupported access size for READ_ONCE_NOCHECK().");	\
+	(typeof(x))__read_once_word_nocheck(&(x));			\
+})
+
+static __no_kasan_or_inline
+unsigned long read_word_at_a_time(const void *addr)
+{
+	kasan_check_read(addr, 1);
+	return *(unsigned long *)addr;
+}
+
+#endif /* __ASSEMBLY__ */
+#endif	/* __ASM_GENERIC_RWONCE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:53:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:53:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25309.53041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3S-0006qP-51; Wed, 11 Nov 2020 21:53:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25309.53041; Wed, 11 Nov 2020 21:53:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy3R-0006qD-VJ; Wed, 11 Nov 2020 21:53:25 +0000
Received: by outflank-mailman (input) for mailman id 25309;
 Wed, 11 Nov 2020 21:53:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3Q-00064v-QQ
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:24 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8535338f-947b-459b-80e8-fe6ceaf732ff;
 Wed, 11 Nov 2020 21:52:50 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id 33so3976715wrl.7
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:50 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.48
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:49 -0800 (PST)
X-Inumbo-ID: 8535338f-947b-459b-80e8-fe6ceaf732ff
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=+QHRVqSHfJTTG2Ygl57V4sYwrHfvgFCCMFL3g+IeQj8=;
        b=BZHrOBFOy/rx7QL8h3AKVZcmsqI9x8YiW8BYuJ4AfRMF07n2AqP9f1ySpuTIrbsgFL
         46ZB7u58zy5tEvCQZtJjo40QVdnxOlApbZYC2oPNE9ghAQr4XuVvVfEZ1SAv6H4crlHJ
         3X1/AnVsElMG0s4aJ3w0VnLhD5oNlzXe1y2GsaI3bBvF7Vhch9cMzx/w5S6D8vgQpYjS
         ROLsPAssDgJnfiYxS81SwuhiUY87EyZeMuFoHq+uZ8kaYZcekvLpQO08HAcpPUv9v5YV
         QRJS2/9g/6zhHWmh17IkhuG3Fpdncn8Pv54tYwo3ZqyaDUdZ7F2LJAtQ+1J0VTgEu/X1
         Gy/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=+QHRVqSHfJTTG2Ygl57V4sYwrHfvgFCCMFL3g+IeQj8=;
        b=TkxHe9hL8nTjcdHkbhXMm9aWuatYGK8TOzHYGyUE6+kDHdLfPjXTF0I60oJp0HT+s0
         Zuizc0f7CJnKI0TIH/O3z+gaFQtuOpTq+sF2TfTxwnCaIPljSktLnJDT7wPqKULWnRWX
         SouXnS3Jasp4uhxiFImtvBOpZ06I7U2t2cQgB7nWAoJElrxn94U/WDt8A1A2JCK4kxNe
         BPTbyFviVLHQURJSIQiZkAJWbItHH4cwlKuVfkJ2LUhLxfv9UR7YjNO+gWRGEKdzBPrc
         euzHrNNszN3E8xV3PBP43a1l9KqiEMPLMtpg6aKw/hEvnc4HuAe9K2YUh0XZVGDRm0TF
         bFQA==
X-Gm-Message-State: AOAM5324XN1TAgN5oSL5VEI6CzZ3tJ7pUzfPTdWNxyrubKjMOhVnIN7w
	oJHdYIVdF1MtJLQuarqVr0+gvZTeS24=
X-Google-Smtp-Source: ABdhPJzrjndt6cTML74tYem+23oe1Hq/1DfFe8NBK2vLCL6LPZBEZP0LJRX73ZzEZWEitTpSD1gDAw==
X-Received: by 2002:a05:6000:182:: with SMTP id p2mr22389539wrx.116.1605131569763;
        Wed, 11 Nov 2020 13:52:49 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 09/15] xen/arm64: port Linux's arm64 atomic_lse.h to Xen
Date: Wed, 11 Nov 2020 21:51:57 +0000
Message-Id: <20201111215203.80336-10-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

As with the LL/SC atomics helpers, most of the "work" here is simply
deleting the atomic64_t helper definitions, as we don't have an
atomic64_t type in Xen.

We also need to s/__always_inline/always_inline/ to match the
qualifier name used by Xen.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/atomic_lse.h | 189 ++-----------------------
 1 file changed, 8 insertions(+), 181 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic_lse.h b/xen/include/asm-arm/arm64/atomic_lse.h
index b3b0d43a7d..81613f7250 100644
--- a/xen/include/asm-arm/arm64/atomic_lse.h
+++ b/xen/include/asm-arm/arm64/atomic_lse.h
@@ -1,14 +1,15 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
+
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
 
-#ifndef __ASM_ATOMIC_LSE_H
-#define __ASM_ATOMIC_LSE_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_LSE_H
+#define __ASM_ARM_ARM64_ATOMIC_LSE_H
 
 #define ATOMIC_OP(op, asm_op)						\
 static inline void __lse_atomic_##op(int i, atomic_t *v)			\
@@ -163,182 +164,8 @@ ATOMIC_FETCH_OP_SUB(        , al, "memory")
 
 #undef ATOMIC_FETCH_OP_SUB
 
-#define ATOMIC64_OP(op, asm_op)						\
-static inline void __lse_atomic64_##op(s64 i, atomic64_t *v)		\
-{									\
-	asm volatile(							\
-	__LSE_PREAMBLE							\
-"	" #asm_op "	%[i], %[v]\n"					\
-	: [i] "+r" (i), [v] "+Q" (v->counter)				\
-	: "r" (v));							\
-}
-
-ATOMIC64_OP(andnot, stclr)
-ATOMIC64_OP(or, stset)
-ATOMIC64_OP(xor, steor)
-ATOMIC64_OP(add, stadd)
-
-#undef ATOMIC64_OP
-
-#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...)			\
-static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
-{									\
-	asm volatile(							\
-	__LSE_PREAMBLE							\
-"	" #asm_op #mb "	%[i], %[i], %[v]"				\
-	: [i] "+r" (i), [v] "+Q" (v->counter)				\
-	: "r" (v)							\
-	: cl);								\
-									\
-	return i;							\
-}
-
-#define ATOMIC64_FETCH_OPS(op, asm_op)					\
-	ATOMIC64_FETCH_OP(_relaxed,   , op, asm_op)			\
-	ATOMIC64_FETCH_OP(_acquire,  a, op, asm_op, "memory")		\
-	ATOMIC64_FETCH_OP(_release,  l, op, asm_op, "memory")		\
-	ATOMIC64_FETCH_OP(        , al, op, asm_op, "memory")
-
-ATOMIC64_FETCH_OPS(andnot, ldclr)
-ATOMIC64_FETCH_OPS(or, ldset)
-ATOMIC64_FETCH_OPS(xor, ldeor)
-ATOMIC64_FETCH_OPS(add, ldadd)
-
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_FETCH_OPS
-
-#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...)				\
-static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
-{									\
-	unsigned long tmp;						\
-									\
-	asm volatile(							\
-	__LSE_PREAMBLE							\
-	"	ldadd" #mb "	%[i], %x[tmp], %[v]\n"			\
-	"	add	%[i], %[i], %x[tmp]"				\
-	: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
-	: "r" (v)							\
-	: cl);								\
-									\
-	return i;							\
-}
-
-ATOMIC64_OP_ADD_RETURN(_relaxed,   )
-ATOMIC64_OP_ADD_RETURN(_acquire,  a, "memory")
-ATOMIC64_OP_ADD_RETURN(_release,  l, "memory")
-ATOMIC64_OP_ADD_RETURN(        , al, "memory")
-
-#undef ATOMIC64_OP_ADD_RETURN
-
-static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
-{
-	asm volatile(
-	__LSE_PREAMBLE
-	"	mvn	%[i], %[i]\n"
-	"	stclr	%[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
-}
-
-#define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
-static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v)	\
-{									\
-	asm volatile(							\
-	__LSE_PREAMBLE							\
-	"	mvn	%[i], %[i]\n"					\
-	"	ldclr" #mb "	%[i], %[i], %[v]"			\
-	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
-	: "r" (v)							\
-	: cl);								\
-									\
-	return i;							\
-}
-
-ATOMIC64_FETCH_OP_AND(_relaxed,   )
-ATOMIC64_FETCH_OP_AND(_acquire,  a, "memory")
-ATOMIC64_FETCH_OP_AND(_release,  l, "memory")
-ATOMIC64_FETCH_OP_AND(        , al, "memory")
-
-#undef ATOMIC64_FETCH_OP_AND
-
-static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
-{
-	asm volatile(
-	__LSE_PREAMBLE
-	"	neg	%[i], %[i]\n"
-	"	stadd	%[i], %[v]"
-	: [i] "+&r" (i), [v] "+Q" (v->counter)
-	: "r" (v));
-}
-
-#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...)				\
-static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v)	\
-{									\
-	unsigned long tmp;						\
-									\
-	asm volatile(							\
-	__LSE_PREAMBLE							\
-	"	neg	%[i], %[i]\n"					\
-	"	ldadd" #mb "	%[i], %x[tmp], %[v]\n"			\
-	"	add	%[i], %[i], %x[tmp]"				\
-	: [i] "+&r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)	\
-	: "r" (v)							\
-	: cl);								\
-									\
-	return i;							\
-}
-
-ATOMIC64_OP_SUB_RETURN(_relaxed,   )
-ATOMIC64_OP_SUB_RETURN(_acquire,  a, "memory")
-ATOMIC64_OP_SUB_RETURN(_release,  l, "memory")
-ATOMIC64_OP_SUB_RETURN(        , al, "memory")
-
-#undef ATOMIC64_OP_SUB_RETURN
-
-#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
-static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v)	\
-{									\
-	asm volatile(							\
-	__LSE_PREAMBLE							\
-	"	neg	%[i], %[i]\n"					\
-	"	ldadd" #mb "	%[i], %[i], %[v]"			\
-	: [i] "+&r" (i), [v] "+Q" (v->counter)				\
-	: "r" (v)							\
-	: cl);								\
-									\
-	return i;							\
-}
-
-ATOMIC64_FETCH_OP_SUB(_relaxed,   )
-ATOMIC64_FETCH_OP_SUB(_acquire,  a, "memory")
-ATOMIC64_FETCH_OP_SUB(_release,  l, "memory")
-ATOMIC64_FETCH_OP_SUB(        , al, "memory")
-
-#undef ATOMIC64_FETCH_OP_SUB
-
-static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
-{
-	unsigned long tmp;
-
-	asm volatile(
-	__LSE_PREAMBLE
-	"1:	ldr	%x[tmp], %[v]\n"
-	"	subs	%[ret], %x[tmp], #1\n"
-	"	b.lt	2f\n"
-	"	casal	%x[tmp], %[ret], %[v]\n"
-	"	sub	%x[tmp], %x[tmp], #1\n"
-	"	sub	%x[tmp], %x[tmp], %[ret]\n"
-	"	cbnz	%x[tmp], 1b\n"
-	"2:"
-	: [ret] "+&r" (v), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)
-	:
-	: "cc", "memory");
-
-	return (long)v;
-}
-
 #define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...)			\
-static __always_inline u##sz						\
+static always_inline u##sz						\
 __lse__cmpxchg_case_##name##sz(volatile void *ptr,			\
 					      u##sz old,		\
 					      u##sz new)		\
@@ -381,7 +208,7 @@ __CMPXCHG_CASE(x,  ,  mb_, 64, al, "memory")
 #undef __CMPXCHG_CASE
 
 #define __CMPXCHG_DBL(name, mb, cl...)					\
-static __always_inline long						\
+static always_inline long						\
 __lse__cmpxchg_double##name(unsigned long old1,				\
 					 unsigned long old2,		\
 					 unsigned long new1,		\
@@ -416,4 +243,4 @@ __CMPXCHG_DBL(_mb, al, "memory")
 
 #undef __CMPXCHG_DBL
 
-#endif	/* __ASM_ATOMIC_LSE_H */
\ No newline at end of file
+#endif	/* __ASM_ARM_ARM64_ATOMIC_LSE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25370.53094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Z-0007ir-OX; Wed, 11 Nov 2020 21:59:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25370.53094; Wed, 11 Nov 2020 21:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Z-0007iW-FQ; Wed, 11 Nov 2020 21:59:45 +0000
Received: by outflank-mailman (input) for mailman id 25370;
 Wed, 11 Nov 2020 21:59:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3f-00064v-Qk
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:39 +0000
Received: from mail-wm1-x32e.google.com (unknown [2a00:1450:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5127b06-50ef-41e0-b499-a0baf39ee347;
 Wed, 11 Nov 2020 21:52:53 +0000 (UTC)
Received: by mail-wm1-x32e.google.com with SMTP id p22so3574027wmg.3
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:53 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.52
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:52 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3f-00064v-Qk
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:39 +0000
X-Inumbo-ID: e5127b06-50ef-41e0-b499-a0baf39ee347
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=DEXuW1ye/R98bU3FkkVyzjawl7M6F4lMcGyQWz2Hrt0=;
        b=BDCjgexRF/nkz3zayZWKWBFaJk9GsuImSgpzdNEtS12TjZz80LiOSrhUXbkNafK57w
         h5ecFabJtWNoue+CmBqaNr98EBM69CmIS1yS6hr+UHuv/vjGglyeur3hHD9UyFGHzFJQ
         6a3LQUiDXa6LOL76y18d3fwq8XG1BjquH5kBGiPQ1YO2RJnve8LLUrrhMusuS0tDOkgI
         qrcCe2Ri2jMXID6Q5ZyNZdESrQXKGR8yn28pw4TfYfIeaKAjldMeYiX5IiG7DqJePoHq
         SVPu43W2CSIxYh2KW17g5p2ATcPEOes+tAGHF4BLuzwOqI9svz4MZxTfcicYCaor4Tqi
         QB7Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=DEXuW1ye/R98bU3FkkVyzjawl7M6F4lMcGyQWz2Hrt0=;
        b=MSpChR8uVD9xMruncQ0QQVp8gdvZoHvOGSXYffGy+4oA1DceTWugVO3PxUV56UtKxc
         UzBD5EeichEUnOKxMJY9mQ5bE8M+rA0WnR3HuZ/Cl8SFzr0zWlyGuixfjU6PICwbxWR5
         Sw9ic2/nBkvi/19fc90zswB767gQpNFHv/sT0VpHS87hDhXlZcNNGXwmpMMW6Sex6EdB
         qV5XHDt56gH1AJByighYeBRS4OjZuZ0LfOCymcGuSTNISNAy3U6eMIiQwOTGozz+gnXD
         LllWvZtLP7NqqZm5Fi+oeFukFbavEwm9xWxG08rgr1RGntR96SWdA/H9cVT09Ct5+1w4
         dsyg==
X-Gm-Message-State: AOAM533GdcgzfjTgpaA9fkwn3FyRcbB+3/zulIZFkYcmwOT1HD6b5RSU
	9tMrObWNNiYQvWq2I6C/BFXRfKnNmJM=
X-Google-Smtp-Source: ABdhPJykiIWBidlXLzHt7P7UzHcuzyRNwaZ4CUeVlycKkFPOa7EMnevqGj7uV7xjzomLA9nrmSItfw==
X-Received: by 2002:a1c:5a06:: with SMTP id o6mr6406841wmb.181.1605131572888;
        Wed, 11 Nov 2020 13:52:52 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 12/15] xen/arm64: port Linux's arm64 lse.h to Xen
Date: Wed, 11 Nov 2020 21:52:00 +0000
Message-Id: <20201111215203.80336-13-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

This just involves making system_uses_lse_atomics() call cpus_have_cap()
instead of looking the capability up directly in cpu_hwcap_keys.

I'm not 100% sure this is a valid transformation; it needs a run test
to confirm.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/lse.h | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/xen/include/asm-arm/arm64/lse.h b/xen/include/asm-arm/arm64/lse.h
index 704be3e4e4..847727f219 100644
--- a/xen/include/asm-arm/arm64/lse.h
+++ b/xen/include/asm-arm/arm64/lse.h
@@ -1,28 +1,28 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_LSE_H
-#define __ASM_LSE_H
+/*
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+#ifndef __ASM_ARM_ARM64_LSE_H
+#define __ASM_ARM_ARM64_LSE_H
 
-#include <asm/atomic_ll_sc.h>
+#include "atomic_ll_sc.h"
 
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 
 #define __LSE_PREAMBLE	".arch_extension lse\n"
 
-#include <linux/compiler_types.h>
-#include <linux/export.h>
-#include <linux/jump_label.h>
-#include <linux/stringify.h>
+#include <xen/compiler.h>
+#include <xen/stringify.h>
+#include <xen/types.h>
+
 #include <asm/alternative.h>
-#include <asm/atomic_lse.h>
-#include <asm/cpucaps.h>
 
-extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
-extern struct static_key_false arm64_const_caps_ready;
+#include "atomic_lse.h"
 
 static inline bool system_uses_lse_atomics(void)
 {
-	return (static_branch_likely(&arm64_const_caps_ready)) &&
-		static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
+	return cpus_have_cap(ARM64_HAS_LSE_ATOMICS);
 }
 
 #define __lse_ll_sc_body(op, ...)					\
@@ -45,4 +45,4 @@ static inline bool system_uses_lse_atomics(void) { return false; }
 #define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
 
 #endif	/* CONFIG_ARM64_LSE_ATOMICS */
-#endif	/* __ASM_LSE_H */
\ No newline at end of file
+#endif	/* __ASM_ARM_ARM64_LSE_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25365.53057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9X-0007e7-U9; Wed, 11 Nov 2020 21:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25365.53057; Wed, 11 Nov 2020 21:59:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9X-0007e3-Po; Wed, 11 Nov 2020 21:59:43 +0000
Received: by outflank-mailman (input) for mailman id 25365;
 Wed, 11 Nov 2020 21:59:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3a-00064v-Qa
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:34 +0000
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8572d65-a88e-4d6d-9e12-c7b7451f14b3;
 Wed, 11 Nov 2020 21:52:53 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id a3so3669141wmb.5
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:52 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:51 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3a-00064v-Qa
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:34 +0000
X-Inumbo-ID: e8572d65-a88e-4d6d-9e12-c7b7451f14b3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Df9Xh6fAL7MEGN4PPLSLvsKAz4yhwBBWrhg9iGXHQa0=;
        b=uF84L1YgWpK/16Gee/yPHZGbKLt/KXZASJ7oPFGrzoDBMo9l5AYMJJhAJ9qsF5H2BY
         cO3x/E//asW6nbzdufmOh9GmOJR/N/paqbsGcR+xPJfvsVfob471OJYsuA0V2l5YPyhq
         IfBYCvgrDPHpL2bd/xm5RJF/3DvmJ4J1Ou/u+5lDEZ38bffyPIP5ceVeviSbmv/oFd2K
         G+30yIhQ6ZrqQWbYIGRJ9vQeXelVwSZIVsvQWnVT/aHk8yZqAMGI1yBZiQ5F6BWJQ7M+
         /HSp2vvFd5Ekd8WMtEB0FRsCndmsl8YqIwGEyrBgQvkoLMx8vJT0ozbW5ZM4VPWn4mDd
         X8zg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Df9Xh6fAL7MEGN4PPLSLvsKAz4yhwBBWrhg9iGXHQa0=;
        b=t04dM7dcHtzBXMoWlPkbqfh55S3bwr/zk2sfSzsXCSsJqzlcP6gUJyqW6U2K4E8Hf8
         KCgq8/aKDbgfCLu8s2VWyQpFRPILjeSe3k1K6CIs27KU0ZcjhEKaQv4MnKhAPgqAGjrX
         UvvfYXLi3CHNEurR2ALXvUVV0m/dKLOURAWewlbkbePx8rHyHvXqjusXlwvP+gna+ypR
         tDR5cJCJmetRFD5Qi7eMM/o3MNZK0qKgkiUZHAHVmmy1PIk3Px6hKWU06bVZBoTyiD2i
         LXjcP+37SNXGoeQBmdRxmpwcwpQkAvFBFGG99D/FN26+A1v2b39ypfNUuIicOZJ14URW
         RsNA==
X-Gm-Message-State: AOAM5302I364Xht1AmHnleotcy0jeUSqmWxAKCB4BiY925lO9ICmOtMs
	6zkJn886CMwK+MpgwX4qVS2rvHZUrpc=
X-Google-Smtp-Source: ABdhPJz998xdMQgMVAfS5d5KafAZkwOa43oizKaiuZiuVOvTHxZwM7rWKfd7OzHDt3uJAVh9K0k1XA==
X-Received: by 2002:a7b:c77a:: with SMTP id x26mr6304609wmk.63.1605131571835;
        Wed, 11 Nov 2020 13:52:51 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 11/15] xen/arm64: port Linux's arm64 atomic.h to Xen
Date: Wed, 11 Nov 2020 21:51:59 +0000
Message-Id: <20201111215203.80336-12-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

 - Drop the atomic64_t helper declarations, as we don't currently have
   an atomic64_t type in Xen.

 - Drop the arch_* prefixes.

 - Swap the include of <linux/compiler.h> for <xen/rwonce.h>.

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/atomic.h | 256 ++++++++---------------------
 1 file changed, 73 insertions(+), 183 deletions(-)

diff --git a/xen/include/asm-arm/arm64/atomic.h b/xen/include/asm-arm/arm64/atomic.h
index a2eab9f091..b695cc6e09 100644
--- a/xen/include/asm-arm/arm64/atomic.h
+++ b/xen/include/asm-arm/arm64/atomic.h
@@ -1,23 +1,23 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
+
 /*
- * Based on arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 1996 Russell King.
  * Copyright (C) 2002 Deep Blue Solutions Ltd.
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_ATOMIC_H
-#define __ASM_ATOMIC_H
+#ifndef __ASM_ARM_ARM64_ATOMIC_H
+#define __ASM_ARM_ARM64_ATOMIC_H
 
-#include <linux/compiler.h>
-#include <linux/types.h>
+#include <xen/rwonce.h>
+#include <xen/types.h>
 
-#include <asm/barrier.h>
-#include <asm/cmpxchg.h>
-#include <asm/lse.h>
+#include "lse.h"
+#include "cmpxchg.h"
 
 #define ATOMIC_OP(op)							\
-static inline void arch_##op(int i, atomic_t *v)			\
+static inline void op(int i, atomic_t *v)			\
 {									\
 	__lse_ll_sc_body(op, i, v);					\
 }
@@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub)
 #undef ATOMIC_OP
 
 #define ATOMIC_FETCH_OP(name, op)					\
-static inline int arch_##op##name(int i, atomic_t *v)			\
+static inline int op##name(int i, atomic_t *v)			\
 {									\
 	return __lse_ll_sc_body(op##name, i, v);			\
 }
@@ -54,175 +54,65 @@ ATOMIC_FETCH_OPS(atomic_sub_return)
 
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_FETCH_OPS
-
-#define ATOMIC64_OP(op)							\
-static inline void arch_##op(long i, atomic64_t *v)			\
-{									\
-	__lse_ll_sc_body(op, i, v);					\
-}
-
-ATOMIC64_OP(atomic64_andnot)
-ATOMIC64_OP(atomic64_or)
-ATOMIC64_OP(atomic64_xor)
-ATOMIC64_OP(atomic64_add)
-ATOMIC64_OP(atomic64_and)
-ATOMIC64_OP(atomic64_sub)
-
-#undef ATOMIC64_OP
-
-#define ATOMIC64_FETCH_OP(name, op)					\
-static inline long arch_##op##name(long i, atomic64_t *v)		\
-{									\
-	return __lse_ll_sc_body(op##name, i, v);			\
-}
-
-#define ATOMIC64_FETCH_OPS(op)						\
-	ATOMIC64_FETCH_OP(_relaxed, op)					\
-	ATOMIC64_FETCH_OP(_acquire, op)					\
-	ATOMIC64_FETCH_OP(_release, op)					\
-	ATOMIC64_FETCH_OP(        , op)
-
-ATOMIC64_FETCH_OPS(atomic64_fetch_andnot)
-ATOMIC64_FETCH_OPS(atomic64_fetch_or)
-ATOMIC64_FETCH_OPS(atomic64_fetch_xor)
-ATOMIC64_FETCH_OPS(atomic64_fetch_add)
-ATOMIC64_FETCH_OPS(atomic64_fetch_and)
-ATOMIC64_FETCH_OPS(atomic64_fetch_sub)
-ATOMIC64_FETCH_OPS(atomic64_add_return)
-ATOMIC64_FETCH_OPS(atomic64_sub_return)
-
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_FETCH_OPS
-
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
-{
-	return __lse_ll_sc_body(atomic64_dec_if_positive, v);
-}
-
-#define arch_atomic_read(v)			__READ_ONCE((v)->counter)
-#define arch_atomic_set(v, i)			__WRITE_ONCE(((v)->counter), (i))
-
-#define arch_atomic_add_return_relaxed		arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire		arch_atomic_add_return_acquire
-#define arch_atomic_add_return_release		arch_atomic_add_return_release
-#define arch_atomic_add_return			arch_atomic_add_return
-
-#define arch_atomic_sub_return_relaxed		arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire		arch_atomic_sub_return_acquire
-#define arch_atomic_sub_return_release		arch_atomic_sub_return_release
-#define arch_atomic_sub_return			arch_atomic_sub_return
-
-#define arch_atomic_fetch_add_relaxed		arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire		arch_atomic_fetch_add_acquire
-#define arch_atomic_fetch_add_release		arch_atomic_fetch_add_release
-#define arch_atomic_fetch_add			arch_atomic_fetch_add
-
-#define arch_atomic_fetch_sub_relaxed		arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire		arch_atomic_fetch_sub_acquire
-#define arch_atomic_fetch_sub_release		arch_atomic_fetch_sub_release
-#define arch_atomic_fetch_sub			arch_atomic_fetch_sub
-
-#define arch_atomic_fetch_and_relaxed		arch_atomic_fetch_and_relaxed
-#define arch_atomic_fetch_and_acquire		arch_atomic_fetch_and_acquire
-#define arch_atomic_fetch_and_release		arch_atomic_fetch_and_release
-#define arch_atomic_fetch_and			arch_atomic_fetch_and
-
-#define arch_atomic_fetch_andnot_relaxed	arch_atomic_fetch_andnot_relaxed
-#define arch_atomic_fetch_andnot_acquire	arch_atomic_fetch_andnot_acquire
-#define arch_atomic_fetch_andnot_release	arch_atomic_fetch_andnot_release
-#define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
-
-#define arch_atomic_fetch_or_relaxed		arch_atomic_fetch_or_relaxed
-#define arch_atomic_fetch_or_acquire		arch_atomic_fetch_or_acquire
-#define arch_atomic_fetch_or_release		arch_atomic_fetch_or_release
-#define arch_atomic_fetch_or			arch_atomic_fetch_or
-
-#define arch_atomic_fetch_xor_relaxed		arch_atomic_fetch_xor_relaxed
-#define arch_atomic_fetch_xor_acquire		arch_atomic_fetch_xor_acquire
-#define arch_atomic_fetch_xor_release		arch_atomic_fetch_xor_release
-#define arch_atomic_fetch_xor			arch_atomic_fetch_xor
-
-#define arch_atomic_xchg_relaxed(v, new) \
-	arch_xchg_relaxed(&((v)->counter), (new))
-#define arch_atomic_xchg_acquire(v, new) \
-	arch_xchg_acquire(&((v)->counter), (new))
-#define arch_atomic_xchg_release(v, new) \
-	arch_xchg_release(&((v)->counter), (new))
-#define arch_atomic_xchg(v, new) \
-	arch_xchg(&((v)->counter), (new))
-
-#define arch_atomic_cmpxchg_relaxed(v, old, new) \
-	arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_acquire(v, old, new) \
-	arch_cmpxchg_acquire(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_release(v, old, new) \
-	arch_cmpxchg_release(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg(v, old, new) \
-	arch_cmpxchg(&((v)->counter), (old), (new))
-
-#define arch_atomic_andnot			arch_atomic_andnot
-
-/*
- * 64-bit arch_atomic operations.
- */
-#define ATOMIC64_INIT				ATOMIC_INIT
-#define arch_atomic64_read			arch_atomic_read
-#define arch_atomic64_set			arch_atomic_set
-
-#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
-#define arch_atomic64_add_return_acquire	arch_atomic64_add_return_acquire
-#define arch_atomic64_add_return_release	arch_atomic64_add_return_release
-#define arch_atomic64_add_return		arch_atomic64_add_return
-
-#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
-#define arch_atomic64_sub_return_acquire	arch_atomic64_sub_return_acquire
-#define arch_atomic64_sub_return_release	arch_atomic64_sub_return_release
-#define arch_atomic64_sub_return		arch_atomic64_sub_return
-
-#define arch_atomic64_fetch_add_relaxed		arch_atomic64_fetch_add_relaxed
-#define arch_atomic64_fetch_add_acquire		arch_atomic64_fetch_add_acquire
-#define arch_atomic64_fetch_add_release		arch_atomic64_fetch_add_release
-#define arch_atomic64_fetch_add			arch_atomic64_fetch_add
-
-#define arch_atomic64_fetch_sub_relaxed		arch_atomic64_fetch_sub_relaxed
-#define arch_atomic64_fetch_sub_acquire		arch_atomic64_fetch_sub_acquire
-#define arch_atomic64_fetch_sub_release		arch_atomic64_fetch_sub_release
-#define arch_atomic64_fetch_sub			arch_atomic64_fetch_sub
-
-#define arch_atomic64_fetch_and_relaxed		arch_atomic64_fetch_and_relaxed
-#define arch_atomic64_fetch_and_acquire		arch_atomic64_fetch_and_acquire
-#define arch_atomic64_fetch_and_release		arch_atomic64_fetch_and_release
-#define arch_atomic64_fetch_and			arch_atomic64_fetch_and
-
-#define arch_atomic64_fetch_andnot_relaxed	arch_atomic64_fetch_andnot_relaxed
-#define arch_atomic64_fetch_andnot_acquire	arch_atomic64_fetch_andnot_acquire
-#define arch_atomic64_fetch_andnot_release	arch_atomic64_fetch_andnot_release
-#define arch_atomic64_fetch_andnot		arch_atomic64_fetch_andnot
-
-#define arch_atomic64_fetch_or_relaxed		arch_atomic64_fetch_or_relaxed
-#define arch_atomic64_fetch_or_acquire		arch_atomic64_fetch_or_acquire
-#define arch_atomic64_fetch_or_release		arch_atomic64_fetch_or_release
-#define arch_atomic64_fetch_or			arch_atomic64_fetch_or
-
-#define arch_atomic64_fetch_xor_relaxed		arch_atomic64_fetch_xor_relaxed
-#define arch_atomic64_fetch_xor_acquire		arch_atomic64_fetch_xor_acquire
-#define arch_atomic64_fetch_xor_release		arch_atomic64_fetch_xor_release
-#define arch_atomic64_fetch_xor			arch_atomic64_fetch_xor
-
-#define arch_atomic64_xchg_relaxed		arch_atomic_xchg_relaxed
-#define arch_atomic64_xchg_acquire		arch_atomic_xchg_acquire
-#define arch_atomic64_xchg_release		arch_atomic_xchg_release
-#define arch_atomic64_xchg			arch_atomic_xchg
-
-#define arch_atomic64_cmpxchg_relaxed		arch_atomic_cmpxchg_relaxed
-#define arch_atomic64_cmpxchg_acquire		arch_atomic_cmpxchg_acquire
-#define arch_atomic64_cmpxchg_release		arch_atomic_cmpxchg_release
-#define arch_atomic64_cmpxchg			arch_atomic_cmpxchg
-
-#define arch_atomic64_andnot			arch_atomic64_andnot
-
-#define arch_atomic64_dec_if_positive		arch_atomic64_dec_if_positive
-
-#define ARCH_ATOMIC
-
-#endif /* __ASM_ATOMIC_H */
\ No newline at end of file
+#define atomic_read(v)			__READ_ONCE((v)->counter)
+#define atomic_set(v, i)			__WRITE_ONCE(((v)->counter), (i))
+
+#define atomic_add_return_relaxed		atomic_add_return_relaxed
+#define atomic_add_return_acquire		atomic_add_return_acquire
+#define atomic_add_return_release		atomic_add_return_release
+#define atomic_add_return			atomic_add_return
+
+#define atomic_sub_return_relaxed		atomic_sub_return_relaxed
+#define atomic_sub_return_acquire		atomic_sub_return_acquire
+#define atomic_sub_return_release		atomic_sub_return_release
+#define atomic_sub_return			atomic_sub_return
+
+#define atomic_fetch_add_relaxed		atomic_fetch_add_relaxed
+#define atomic_fetch_add_acquire		atomic_fetch_add_acquire
+#define atomic_fetch_add_release		atomic_fetch_add_release
+#define atomic_fetch_add			atomic_fetch_add
+
+#define atomic_fetch_sub_relaxed		atomic_fetch_sub_relaxed
+#define atomic_fetch_sub_acquire		atomic_fetch_sub_acquire
+#define atomic_fetch_sub_release		atomic_fetch_sub_release
+#define atomic_fetch_sub			atomic_fetch_sub
+
+#define atomic_fetch_and_relaxed		atomic_fetch_and_relaxed
+#define atomic_fetch_and_acquire		atomic_fetch_and_acquire
+#define atomic_fetch_and_release		atomic_fetch_and_release
+#define atomic_fetch_and			atomic_fetch_and
+
+#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
+#define atomic_fetch_andnot_acquire	atomic_fetch_andnot_acquire
+#define atomic_fetch_andnot_release	atomic_fetch_andnot_release
+#define atomic_fetch_andnot		atomic_fetch_andnot
+
+#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
+#define atomic_fetch_or_acquire		atomic_fetch_or_acquire
+#define atomic_fetch_or_release		atomic_fetch_or_release
+#define atomic_fetch_or			atomic_fetch_or
+
+#define atomic_fetch_xor_relaxed		atomic_fetch_xor_relaxed
+#define atomic_fetch_xor_acquire		atomic_fetch_xor_acquire
+#define atomic_fetch_xor_release		atomic_fetch_xor_release
+#define atomic_fetch_xor			atomic_fetch_xor
+
+#define atomic_xchg_relaxed(v, new) \
+	xchg_relaxed(&((v)->counter), (new))
+#define atomic_xchg_acquire(v, new) \
+	xchg_acquire(&((v)->counter), (new))
+#define atomic_xchg_release(v, new) \
+	xchg_release(&((v)->counter), (new))
+#define atomic_xchg(v, new) \
+	xchg(&((v)->counter), (new))
+
+#define atomic_cmpxchg_relaxed(v, old, new) \
+	cmpxchg_relaxed(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_acquire(v, old, new) \
+	cmpxchg_acquire(&((v)->counter), (old), (new))
+#define atomic_cmpxchg_release(v, old, new) \
+	cmpxchg_release(&((v)->counter), (old), (new))
+
+#define atomic_andnot			atomic_andnot
+
+#endif /* __ASM_ARM_ARM64_ATOMIC_H */
\ No newline at end of file
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25369.53086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Z-0007hn-9k; Wed, 11 Nov 2020 21:59:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25369.53086; Wed, 11 Nov 2020 21:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Y-0007hN-WF; Wed, 11 Nov 2020 21:59:45 +0000
Received: by outflank-mailman (input) for mailman id 25369;
 Wed, 11 Nov 2020 21:59:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3V-00064v-QI
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:29 +0000
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5224aac-a27f-435e-829f-b96da6d16fe3;
 Wed, 11 Nov 2020 21:52:52 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id p8so3986703wrx.5
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:52 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.49
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:50 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3V-00064v-QI
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:29 +0000
X-Inumbo-ID: c5224aac-a27f-435e-829f-b96da6d16fe3
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c5224aac-a27f-435e-829f-b96da6d16fe3;
	Wed, 11 Nov 2020 21:52:52 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id p8so3986703wrx.5
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=BBJwuz9SQvVyH51XqrVftJIZ0i+2M/l+kiSJyhKCl9k=;
        b=XlzDxZVRka1KzvX5UJrwjhiosi0LTKxfDC1uz5BD3yOlL+I42WiaVPwKI9+aqVKXql
         5YBd4vgOHXP4dK47MDifcU3g2YSqLpxdvo+pAaGTLsPTj9NdIaphWCTKGSnUlkHMg+4S
         fs3zpJKk77/1AfGsFX1E6KRnBYP8uBpkPLtVasjZFSJL/6lKpmiPA/JBKEOBapTUFiKH
         cRr9WFfgk1InFZPGSUHkKJ3hV/fV9APjaw2abeMHb8vo1ONFxu19mPPmmDBAdI3r5gR+
         cJjpO++2/qhasb2xm057E8xKo7UjRoRC81aXvxvDmiJ8avbDfS/65UFyiKgfOPYl+Wkb
         ZM4g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=BBJwuz9SQvVyH51XqrVftJIZ0i+2M/l+kiSJyhKCl9k=;
        b=in/N5AYoKBXnrx+nnFqfsoTaB+4rdiDjvzRM0ECjc2WxxIM3Lgu6g1u/c7u5//9pt0
         0eYyyAsEU6HfMaiumAxxHQUA52M/3WPL7+p++cb7Es0+WrJFm7XyDgwY3LOrMJUzRmco
         lnhewLYFqGRM6asiyE9gQ+v2KVATzZKycgXvSAWwpd0dpi0NCZ/6Kx5w+kxT1XbKv2Ub
         E7DTFTf3f/xB2Nj2v7GD456fUBuAldalxHdV+zocH6obWb9diIutQ01XYpyY1SnYqk80
         Gbt21wH2ZMvcKu17CHA6NowtFKUdbRLT6SKO1p0G8kNyu2oJTFUGZukkfr6As1SlRhoe
         XvyA==
X-Gm-Message-State: AOAM533EmdtH0NrUAYDgn15R73o0nmMsD6XysvkcvM1qb/14IWZKe6vH
	DyMkWr5DqUJeLHJAl6nXoyLafJZ8TO0=
X-Google-Smtp-Source: ABdhPJzCLdv2deDdC5HyuuTbfyEyMVsfKt4vC1Cap1tpREd6zqOgCLxp6yQlpIo2fvH7a8l43IhQhA==
X-Received: by 2002:adf:f246:: with SMTP id b6mr3825186wrp.238.1605131570813;
        Wed, 11 Nov 2020 13:52:50 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.49
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:50 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 10/15] xen/arm64: port Linux's arm64 cmpxchg.h to Xen
Date: Wed, 11 Nov 2020 21:51:58 +0000
Message-Id: <20201111215203.80336-11-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

 - s/arch_xchg/xchg/

 - s/arch_cmpxchg/cmpxchg/

 - Replace calls to BUILD_BUG() with calls to __bad_cmpxchg() as we
   don't currently have a BUILD_BUG() macro in Xen and this will
   equivalently cause a link-time error.

 - Replace calls to VM_BUG_ON() with BUG_ON() as we don't currently
   have a VM_BUG_ON() macro in Xen.

 - Pull in the timeout variants of cmpxchg from the original Xen
   arm64 cmpxchg.h as these are required for guest atomics and are
   not provided by Linux. Note these always use LL/SC, so we
   should ideally write LSE versions at some point.
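
For context on the __bad_cmpxchg() trick in the bullets above: an extern
function that is declared but never defined turns any call that survives
dead-code elimination into a link-time failure, which is roughly what
BUILD_BUG() achieves at compile time. A stand-alone sketch of the
size-dispatch pattern follows, using GCC/Clang __atomic builtins; the names
my_xchg/__bad_xchg are illustrative, and __bad_xchg is given a body here
purely so the sketch links and runs (in the real header it is deliberately
left undefined):

```c
#include <assert.h>
#include <stdint.h>

/* In the real header this would be a declaration only; a call reaching
 * the linker would then fail, emulating BUILD_BUG(). Defined here so
 * this sketch links. */
static unsigned long __bad_xchg(volatile void *ptr, int size)
{
    (void)ptr;
    (void)size;
    assert(0 && "unsupported xchg size");
    return 0;
}

/* Size-dispatching exchange, mirroring the shape of the __xchg##sfx()
 * generator in the patch. */
static inline unsigned long my_xchg(unsigned long x, volatile void *ptr,
                                    int size)
{
    switch (size) {
    case 4:
        return __atomic_exchange_n((volatile uint32_t *)ptr, (uint32_t)x,
                                   __ATOMIC_SEQ_CST);
    case 8:
        return __atomic_exchange_n((volatile uint64_t *)ptr, (uint64_t)x,
                                   __ATOMIC_SEQ_CST);
    default:
        return __bad_xchg(ptr, size); /* link-time error in the real header */
    }
}
```

Because the size argument is a compile-time constant at every call site,
the invalid branch is normally eliminated and the undefined symbol never
reaches the linker unless a bad size is actually used.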

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm64/cmpxchg.h | 165 ++++++++++++++++++++++------
 1 file changed, 131 insertions(+), 34 deletions(-)

diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
index c51388216e..a5282cf66e 100644
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -1,17 +1,16 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Based on arch/arm/include/asm/cmpxchg.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
  * Copyright (C) 2012 ARM Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_CMPXCHG_H
-#define __ASM_CMPXCHG_H
+#ifndef __ASM_ARM_ARM64_CMPXCHG_H
+#define __ASM_ARM_ARM64_CMPXCHG_H
 
-#include <linux/build_bug.h>
-#include <linux/compiler.h>
+#include <asm/bug.h>
+#include "lse.h"
 
-#include <asm/barrier.h>
-#include <asm/lse.h>
+extern unsigned long __bad_cmpxchg(volatile void *ptr, int size);
 
 /*
  * We need separate acquire parameters for ll/sc and lse, since the full
@@ -33,7 +32,9 @@ static inline u##sz __xchg_case_##name##sz(u##sz x, volatile void *ptr)		\
 	"	" #mb,								\
 	/* LSE atomics */							\
 	"	swp" #acq_lse #rel #sfx "\t%" #w "3, %" #w "0, %2\n"		\
-		__nops(3)							\
+		"nop\n"							\
+		"nop\n"							\
+		"nop\n"							\
 	"	" #nop_lse)							\
 	: "=&r" (ret), "=&r" (tmp), "+Q" (*(u##sz *)ptr)			\
 	: "r" (x)								\
@@ -62,7 +63,7 @@ __XCHG_CASE( ,  ,  mb_, 64, dmb ish, nop,  , a, l, "memory")
 #undef __XCHG_CASE
 
 #define __XCHG_GEN(sfx)							\
-static __always_inline  unsigned long __xchg##sfx(unsigned long x,	\
+static always_inline  unsigned long __xchg##sfx(unsigned long x,	\
 					volatile void *ptr,		\
 					int size)			\
 {									\
@@ -76,7 +77,7 @@ static __always_inline  unsigned long __xchg##sfx(unsigned long x,	\
 	case 8:								\
 		return __xchg_case##sfx##_64(x, ptr);			\
 	default:							\
-		BUILD_BUG();						\
+		return __bad_cmpxchg(ptr, size);			\
 	}								\
 									\
 	unreachable();							\
@@ -98,10 +99,10 @@ __XCHG_GEN(_mb)
 })
 
 /* xchg */
-#define arch_xchg_relaxed(...)	__xchg_wrapper(    , __VA_ARGS__)
-#define arch_xchg_acquire(...)	__xchg_wrapper(_acq, __VA_ARGS__)
-#define arch_xchg_release(...)	__xchg_wrapper(_rel, __VA_ARGS__)
-#define arch_xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
+#define xchg_relaxed(...)	__xchg_wrapper(    , __VA_ARGS__)
+#define xchg_acquire(...)	__xchg_wrapper(_acq, __VA_ARGS__)
+#define xchg_release(...)	__xchg_wrapper(_rel, __VA_ARGS__)
+#define xchg(...)		__xchg_wrapper( _mb, __VA_ARGS__)
 
 #define __CMPXCHG_CASE(name, sz)			\
 static inline u##sz __cmpxchg_case_##name##sz(volatile void *ptr,	\
@@ -148,7 +149,7 @@ __CMPXCHG_DBL(_mb)
 #undef __CMPXCHG_DBL
 
 #define __CMPXCHG_GEN(sfx)						\
-static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
+static always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
 					   unsigned long old,		\
 					   unsigned long new,		\
 					   int size)			\
@@ -163,7 +164,7 @@ static __always_inline unsigned long __cmpxchg##sfx(volatile void *ptr,	\
 	case 8:								\
 		return __cmpxchg_case##sfx##_64(ptr, old, new);		\
 	default:							\
-		BUILD_BUG();						\
+		return __bad_cmpxchg(ptr, size);			\
 	}								\
 									\
 	unreachable();							\
@@ -186,18 +187,18 @@ __CMPXCHG_GEN(_mb)
 })
 
 /* cmpxchg */
-#define arch_cmpxchg_relaxed(...)	__cmpxchg_wrapper(    , __VA_ARGS__)
-#define arch_cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
-#define arch_cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
-#define arch_cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
-#define arch_cmpxchg_local		arch_cmpxchg_relaxed
+#define cmpxchg_relaxed(...)	__cmpxchg_wrapper(    , __VA_ARGS__)
+#define cmpxchg_acquire(...)	__cmpxchg_wrapper(_acq, __VA_ARGS__)
+#define cmpxchg_release(...)	__cmpxchg_wrapper(_rel, __VA_ARGS__)
+#define cmpxchg(...)		__cmpxchg_wrapper( _mb, __VA_ARGS__)
+#define cmpxchg_local		cmpxchg_relaxed
 
 /* cmpxchg64 */
-#define arch_cmpxchg64_relaxed		arch_cmpxchg_relaxed
-#define arch_cmpxchg64_acquire		arch_cmpxchg_acquire
-#define arch_cmpxchg64_release		arch_cmpxchg_release
-#define arch_cmpxchg64			arch_cmpxchg
-#define arch_cmpxchg64_local		arch_cmpxchg_local
+#define cmpxchg64_relaxed		cmpxchg_relaxed
+#define cmpxchg64_acquire		cmpxchg_acquire
+#define cmpxchg64_release		cmpxchg_release
+#define cmpxchg64			cmpxchg
+#define cmpxchg64_local		cmpxchg_local
 
 /* cmpxchg_double */
 #define system_has_cmpxchg_double()     1
@@ -205,11 +206,11 @@ __CMPXCHG_GEN(_mb)
 #define __cmpxchg_double_check(ptr1, ptr2)					\
 ({										\
 	if (sizeof(*(ptr1)) != 8)						\
-		BUILD_BUG();							\
-	VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1);	\
+		__bad_cmpxchg((volatile void *)(ptr1), sizeof(*(ptr1)));	\
+	BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1);	\
 })
 
-#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)				\
+#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)				\
 ({										\
 	int __ret;								\
 	__cmpxchg_double_check(ptr1, ptr2);					\
@@ -219,7 +220,7 @@ __CMPXCHG_GEN(_mb)
 	__ret;									\
 })
 
-#define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2)			\
+#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2)			\
 ({										\
 	int __ret;								\
 	__cmpxchg_double_check(ptr1, ptr2);					\
@@ -255,7 +256,7 @@ __CMPWAIT_CASE( ,  , 64);
 #undef __CMPWAIT_CASE
 
 #define __CMPWAIT_GEN(sfx)						\
-static __always_inline void __cmpwait##sfx(volatile void *ptr,		\
+static always_inline void __cmpwait##sfx(volatile void *ptr,		\
 				  unsigned long val,			\
 				  int size)				\
 {									\
@@ -269,7 +270,7 @@ static __always_inline void __cmpwait##sfx(volatile void *ptr,		\
 	case 8:								\
 		return __cmpwait_case##sfx##_64(ptr, val);		\
 	default:							\
-		BUILD_BUG();						\
+		__bad_cmpxchg(ptr, size);				\
 	}								\
 									\
 	unreachable();							\
@@ -282,4 +283,100 @@ __CMPWAIT_GEN()
 #define __cmpwait_relaxed(ptr, val) \
 	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
 
-#endif	/* __ASM_CMPXCHG_H */
\ No newline at end of file
+/*
+ * This code is from the original Xen arm64 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over. The only changes
+ * here are renaming the macros and functions to explicitly use
+ * "timeout" in their names so that they don't clash with the above.
+ *
+ * We need this here for guest atomics (the only user of the timeout
+ * variants).
+ */
+
+#define __CMPXCHG_TIMEOUT_CASE(w, sz, name)                             \
+static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,    \
+                                         unsigned long *old,            \
+                                         unsigned long new,             \
+                                         bool timeout,                  \
+                                         unsigned int max_try)          \
+{                                                                       \
+        unsigned long oldval;                                           \
+        unsigned long res;                                              \
+                                                                        \
+        do {                                                            \
+                asm volatile("// __cmpxchg_timeout_case_" #name "\n"    \
+                "       ldxr" #sz "     %" #w "1, %2\n"                 \
+                "       mov     %w0, #0\n"                              \
+                "       cmp     %" #w "1, %" #w "3\n"                   \
+                "       b.ne    1f\n"                                   \
+                "       stxr" #sz "     %w0, %" #w "4, %2\n"            \
+                "1:\n"                                                  \
+                : "=&r" (res), "=&r" (oldval),                          \
+                  "+Q" (*(unsigned long *)ptr)                          \
+                : "Ir" (*old), "r" (new)                                \
+                : "cc");                                                \
+                                                                        \
+                if (!res)                                               \
+                        break;                                          \
+        } while (!timeout || ((--max_try) > 0));                        \
+                                                                        \
+        *old = oldval;                                                  \
+                                                                        \
+        return !res;                                                    \
+}
+
+__CMPXCHG_TIMEOUT_CASE(w, b, 1)
+__CMPXCHG_TIMEOUT_CASE(w, h, 2)
+__CMPXCHG_TIMEOUT_CASE(w,  , 4)
+__CMPXCHG_TIMEOUT_CASE( ,  , 8)
+
+static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
+                                        unsigned long new, int size,
+                                        bool timeout, unsigned int max_try)
+{
+        switch (size) {
+        case 1:
+                return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
+        case 2:
+                return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
+        case 4:
+                return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
+        case 8:
+                return __cmpxchg_timeout_case_8(ptr, old, new, timeout, max_try);
+        default:
+                return __bad_cmpxchg(ptr, size);
+        }
+
+        ASSERT_UNREACHABLE();
+}
+
+/*
+ * The helper may fail to update the memory if the action takes too long.
+ *
+ * @old: On call the value pointed contains the expected old value. It will be
+ * updated to the actual old value.
+ * @max_try: Maximum number of iterations
+ *
+ * The helper will return true when the update has succeeded (i.e no
+ * timeout) and false if the update has failed.
+ */
+static always_inline bool __cmpxchg_timeout(volatile void *ptr,
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
+{
+        bool ret;
+
+        smp_mb();
+        ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+        smp_mb();
+
+        return ret;
+}
+
+#define __cmpxchg64_timeout(ptr, old, new, max_try)     \
+        __cmpxchg_timeout(ptr, old, new, 8, max_try)
+
+
+#endif	/* __ASM_ARM_ARM64_CMPXCHG_H */
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25367.53062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Y-0007f3-7j; Wed, 11 Nov 2020 21:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25367.53062; Wed, 11 Nov 2020 21:59:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Y-0007en-2Y; Wed, 11 Nov 2020 21:59:44 +0000
Received: by outflank-mailman (input) for mailman id 25367;
 Wed, 11 Nov 2020 21:59:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3p-00064v-RB
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:49 +0000
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a7def3d-c819-4d01-8499-8cff76a05d08;
 Wed, 11 Nov 2020 21:52:55 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id h2so3607180wmm.0
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:55 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.53
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:53 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3p-00064v-RB
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:49 +0000
X-Inumbo-ID: 5a7def3d-c819-4d01-8499-8cff76a05d08
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5a7def3d-c819-4d01-8499-8cff76a05d08;
	Wed, 11 Nov 2020 21:52:55 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id h2so3607180wmm.0
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=IXFCViiQlQAZEJbjZ19ZkmNy8WD0OQFKlmdpW+wsKU8=;
        b=RnBL36HD997IHMhccuSQNZSZDDpVAU8KLGq0R59vlQGn7wBebSGfrp0mJtXAovT4c6
         EJyWLq3sYPTsA4cm9P6lMf1fuAHGLWzS3A+qX8gthR6F01Xm4VRPqXgXrgdGbyc7Xoa9
         YMFMZKI9GfeWIYWyAriVKNSjXBea1Q6/vsaUfU0fqy5AgYLUMjMZiLQOkVRhg8tVwIpL
         VA9UUlNQS4UWUDJZb5+5a/EYByAH/12OhCaBzDoP7Md6j6x0k5mLdrU/fG7Yj/mfMn3v
         BIO6R9T3RS0RBGllNLCW0QBzTLnfQzEl2V2ie/Mws1oaVW9aLYI5apmW91PUjAzYrAis
         qs+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=IXFCViiQlQAZEJbjZ19ZkmNy8WD0OQFKlmdpW+wsKU8=;
        b=L4dzUbspzMXM7zQGaTVt7haalWANjKBqoXGs5cWpp4BiCNI73DfMVAPJliHY6jTXmf
         yljL5gQ9KvL3fEBt2cf95CoT3vcSiXtuZImwFu3prLOuV9mW/OPASdOW5JNcOboMb64V
         inaeRSADdAzz5caIHGt1GtiicXkGSndbvc1YbFvHAzJ6QaOGyz7toegDuu5NdlV58SNh
         WhxhTVy48IWfuZt2SG5byxtQprZafcF0HT2IQVASMGS9micRVmM/nV74Qxl62p7aQrUS
         BwGX2g0l12iqTA54xS2FgaIpFzhqGILlbczGTRS1t9pvhblPt6/Y2KIPEqFGKRkeq4i9
         khAQ==
X-Gm-Message-State: AOAM533K6tUSIqAMLIgb0RNSaG6syDVpgVJvL4fDHVN+pSq1Af0ghWdL
	sbSg130APpndTpGFZJySPCvm5tOtZ24=
X-Google-Smtp-Source: ABdhPJxQF6Pq2fJkYjLdTtlSw3AwUxsPPRK7RpLtDF2xcsJ1YDjaoI17FxqchSrlgwSxgtpfRKuXiw==
X-Received: by 2002:a1c:dd41:: with SMTP id u62mr6067891wmg.78.1605131574031;
        Wed, 11 Nov 2020 13:52:54 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.53
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:53 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 13/15] xen/arm32: port Linux's arm32 atomic.h to Xen
Date: Wed, 11 Nov 2020 21:52:01 +0000
Message-Id: <20201111215203.80336-14-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

 - Drop arch_* prefixes.

 - Redirect include of <xen/compiler.h> to <xen/rwonce.h>, and swap
   usage of READ_ONCE()/WRITE_ONCE() to the __* versions accordingly.
   As discussed earlier in the series, we can do this because we're
   accessing an atomic_t's counter member, which is an int, so the
   extra checks performed by READ_ONCE()/WRITE_ONCE() are redundant,
   and this actually matches the Linux arm64 code which already uses
   the __* variants.

 - Drop support for pre-Armv7 systems.

 - Drop atomic64_t helper definitions as we don't currently have an
   atomic64_t in Xen.

 - Add explicit strict variants of atomic_{add,sub}_return() as Linux
   does not define these for arm32 and they're needed for Xen. These
   strict variants are just wrappers that sandwich a call to the
   relaxed variants between two smp_mb() calls.
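
The "sandwich" construction described in the last bullet can be sketched
in portable C11, with atomic_thread_fence(memory_order_seq_cst) standing
in for smp_mb(); the names here are illustrative rather than the patch's
own:

```c
#include <assert.h>
#include <stdatomic.h>

/* Relaxed add-and-return, as provided by Linux's arm32
 * atomic_add_return_relaxed(). */
static inline int add_return_relaxed(atomic_int *v, int i)
{
    return atomic_fetch_add_explicit(v, i, memory_order_relaxed) + i;
}

/* Strict (fully ordered) variant: the relaxed op sandwiched between two
 * full barriers, mirroring the wrappers the patch adds for Xen. */
static inline int add_return(atomic_int *v, int i)
{
    int ret;

    atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */
    ret = add_return_relaxed(v, i);
    atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */
    return ret;
}
```

The fences give the strict variant full-barrier semantics on both sides
of the update, at the cost of ordering more than strictly required
compared with native acquire/release instructions.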

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/atomic.h | 357 +++--------------------------
 1 file changed, 28 insertions(+), 329 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index ac6338dd9b..2d8cd3c586 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -1,31 +1,26 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- *  arch/arm/include/asm/atomic.h
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- *  Copyright (C) 1996 Russell King.
- *  Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * Copyright (C) 1996 Russell King.
+ * Copyright (C) 2002 Deep Blue Solutions Ltd.
+ * SPDX-License-Identifier: GPL-2.0-only
  */
-#ifndef __ASM_ARM_ATOMIC_H
-#define __ASM_ARM_ATOMIC_H
+#ifndef __ASM_ARM_ARM32_ATOMIC_H
+#define __ASM_ARM_ARM32_ATOMIC_H
 
-#include <linux/compiler.h>
-#include <linux/prefetch.h>
-#include <linux/types.h>
-#include <linux/irqflags.h>
-#include <asm/barrier.h>
-#include <asm/cmpxchg.h>
-
-#ifdef __KERNEL__
+#include <xen/rwonce.h>
+#include <xen/prefetch.h>
+#include <xen/types.h>
+#include "system.h"
+#include "cmpxchg.h"
 
 /*
  * On ARM, ordinary assignment (str instruction) doesn't clear the local
  * strex/ldrex monitor on some implementations. The reason we can use it for
  * atomic_set() is the clrex or dummy strex done on every exception return.
  */
-#define atomic_read(v)	READ_ONCE((v)->counter)
-#define atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
-
-#if __LINUX_ARM_ARCH__ >= 6
+#define atomic_read(v)	__READ_ONCE((v)->counter)
+#define atomic_set(v,i)	__WRITE_ONCE(((v)->counter), (i))
 
 /*
  * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
@@ -153,68 +148,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #define atomic_fetch_add_unless		atomic_fetch_add_unless
 
-#else /* ARM_ARCH_6 */
-
-#ifdef CONFIG_SMP
-#error SMP not supported on pre-ARMv6 CPUs
-#endif
-
-#define ATOMIC_OP(op, c_op, asm_op)					\
-static inline void atomic_##op(int i, atomic_t *v)			\
-{									\
-	unsigned long flags;						\
-									\
-	raw_local_irq_save(flags);					\
-	v->counter c_op i;						\
-	raw_local_irq_restore(flags);					\
-}									\
-
-#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
-{									\
-	unsigned long flags;						\
-	int val;							\
-									\
-	raw_local_irq_save(flags);					\
-	v->counter c_op i;						\
-	val = v->counter;						\
-	raw_local_irq_restore(flags);					\
-									\
-	return val;							\
-}
-
-#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
-{									\
-	unsigned long flags;						\
-	int val;							\
-									\
-	raw_local_irq_save(flags);					\
-	val = v->counter;						\
-	v->counter c_op i;						\
-	raw_local_irq_restore(flags);					\
-									\
-	return val;							\
-}
-
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-	int ret;
-	unsigned long flags;
-
-	raw_local_irq_save(flags);
-	ret = v->counter;
-	if (likely(ret == old))
-		v->counter = new;
-	raw_local_irq_restore(flags);
-
-	return ret;
-}
-
-#define atomic_fetch_andnot		atomic_fetch_andnot
-
-#endif /* __LINUX_ARM_ARCH__ */
-
 #define ATOMIC_OPS(op, c_op, asm_op)					\
 	ATOMIC_OP(op, c_op, asm_op)					\
 	ATOMIC_OP_RETURN(op, c_op, asm_op)				\
@@ -242,266 +175,32 @@ ATOMIC_OPS(xor, ^=, eor)
 
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 
-#ifndef CONFIG_GENERIC_ATOMIC64
-typedef struct {
-	s64 counter;
-} atomic64_t;
+/*
+ * Linux doesn't define strict atomic_add_return() or atomic_sub_return()
+ * for arch/arm, so define them manually for Xen.
+ */
 
-#define ATOMIC64_INIT(i) { (i) }
-
-#ifdef CONFIG_ARM_LPAE
-static inline s64 atomic64_read(const atomic64_t *v)
-{
-	s64 result;
-
-	__asm__ __volatile__("@ atomic64_read\n"
-"	ldrd	%0, %H0, [%1]"
-	: "=&r" (result)
-	: "r" (&v->counter), "Qo" (v->counter)
-	);
-
-	return result;
-}
-
-static inline void atomic64_set(atomic64_t *v, s64 i)
-{
-	__asm__ __volatile__("@ atomic64_set\n"
-"	strd	%2, %H2, [%1]"
-	: "=Qo" (v->counter)
-	: "r" (&v->counter), "r" (i)
-	);
-}
-#else
-static inline s64 atomic64_read(const atomic64_t *v)
-{
-	s64 result;
-
-	__asm__ __volatile__("@ atomic64_read\n"
-"	ldrexd	%0, %H0, [%1]"
-	: "=&r" (result)
-	: "r" (&v->counter), "Qo" (v->counter)
-	);
-
-	return result;
-}
-
-static inline void atomic64_set(atomic64_t *v, s64 i)
+static inline int atomic_add_return(int i, atomic_t *v)
 {
-	s64 tmp;
-
-	prefetchw(&v->counter);
-	__asm__ __volatile__("@ atomic64_set\n"
-"1:	ldrexd	%0, %H0, [%2]\n"
-"	strexd	%0, %3, %H3, [%2]\n"
-"	teq	%0, #0\n"
-"	bne	1b"
-	: "=&r" (tmp), "=Qo" (v->counter)
-	: "r" (&v->counter), "r" (i)
-	: "cc");
-}
-#endif
-
-#define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(s64 i, atomic64_t *v)			\
-{									\
-	s64 result;							\
-	unsigned long tmp;						\
-									\
-	prefetchw(&v->counter);						\
-	__asm__ __volatile__("@ atomic64_" #op "\n"			\
-"1:	ldrexd	%0, %H0, [%3]\n"					\
-"	" #op1 " %Q0, %Q0, %Q4\n"					\
-"	" #op2 " %R0, %R0, %R4\n"					\
-"	strexd	%1, %0, %H0, [%3]\n"					\
-"	teq	%1, #0\n"						\
-"	bne	1b"							\
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
-	: "r" (&v->counter), "r" (i)					\
-	: "cc");							\
-}									\
-
-#define ATOMIC64_OP_RETURN(op, op1, op2)				\
-static inline s64							\
-atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
-{									\
-	s64 result;							\
-	unsigned long tmp;						\
-									\
-	prefetchw(&v->counter);						\
-									\
-	__asm__ __volatile__("@ atomic64_" #op "_return\n"		\
-"1:	ldrexd	%0, %H0, [%3]\n"					\
-"	" #op1 " %Q0, %Q0, %Q4\n"					\
-"	" #op2 " %R0, %R0, %R4\n"					\
-"	strexd	%1, %0, %H0, [%3]\n"					\
-"	teq	%1, #0\n"						\
-"	bne	1b"							\
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
-	: "r" (&v->counter), "r" (i)					\
-	: "cc");							\
-									\
-	return result;							\
-}
-
-#define ATOMIC64_FETCH_OP(op, op1, op2)					\
-static inline s64							\
-atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
-{									\
-	s64 result, val;						\
-	unsigned long tmp;						\
-									\
-	prefetchw(&v->counter);						\
-									\
-	__asm__ __volatile__("@ atomic64_fetch_" #op "\n"		\
-"1:	ldrexd	%0, %H0, [%4]\n"					\
-"	" #op1 " %Q1, %Q0, %Q5\n"					\
-"	" #op2 " %R1, %R0, %R5\n"					\
-"	strexd	%2, %1, %H1, [%4]\n"					\
-"	teq	%2, #0\n"						\
-"	bne	1b"							\
-	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Qo" (v->counter)	\
-	: "r" (&v->counter), "r" (i)					\
-	: "cc");							\
-									\
-	return result;							\
-}
-
-#define ATOMIC64_OPS(op, op1, op2)					\
-	ATOMIC64_OP(op, op1, op2)					\
-	ATOMIC64_OP_RETURN(op, op1, op2)				\
-	ATOMIC64_FETCH_OP(op, op1, op2)
-
-ATOMIC64_OPS(add, adds, adc)
-ATOMIC64_OPS(sub, subs, sbc)
-
-#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
-#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
-#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
-#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
-
-#undef ATOMIC64_OPS
-#define ATOMIC64_OPS(op, op1, op2)					\
-	ATOMIC64_OP(op, op1, op2)					\
-	ATOMIC64_FETCH_OP(op, op1, op2)
-
-#define atomic64_andnot atomic64_andnot
-
-ATOMIC64_OPS(and, and, and)
-ATOMIC64_OPS(andnot, bic, bic)
-ATOMIC64_OPS(or,  orr, orr)
-ATOMIC64_OPS(xor, eor, eor)
-
-#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
-#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
-#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
-
-#undef ATOMIC64_OPS
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_OP_RETURN
-#undef ATOMIC64_OP
-
-static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
-{
-	s64 oldval;
-	unsigned long res;
-
-	prefetchw(&ptr->counter);
-
-	do {
-		__asm__ __volatile__("@ atomic64_cmpxchg\n"
-		"ldrexd		%1, %H1, [%3]\n"
-		"mov		%0, #0\n"
-		"teq		%1, %4\n"
-		"teqeq		%H1, %H4\n"
-		"strexdeq	%0, %5, %H5, [%3]"
-		: "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
-		: "r" (&ptr->counter), "r" (old), "r" (new)
-		: "cc");
-	} while (res);
-
-	return oldval;
-}
-#define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
-
-static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
-{
-	s64 result;
-	unsigned long tmp;
-
-	prefetchw(&ptr->counter);
-
-	__asm__ __volatile__("@ atomic64_xchg\n"
-"1:	ldrexd	%0, %H0, [%3]\n"
-"	strexd	%1, %4, %H4, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (ptr->counter)
-	: "r" (&ptr->counter), "r" (new)
-	: "cc");
-
-	return result;
-}
-#define atomic64_xchg_relaxed		atomic64_xchg_relaxed
-
-static inline s64 atomic64_dec_if_positive(atomic64_t *v)
-{
-	s64 result;
-	unsigned long tmp;
+	int ret;
 
 	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic64_dec_if_positive\n"
-"1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%Q0, %Q0, #1\n"
-"	sbc	%R0, %R0, #0\n"
-"	teq	%R0, #0\n"
-"	bmi	2f\n"
-"	strexd	%1, %0, %H0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b\n"
-"2:"
-	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter)
-	: "cc");
-
+	ret = atomic_add_return_relaxed(i, v);
 	smp_mb();
 
-	return result;
+	return ret;
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+#define atomic_fetch_add(i, v) atomic_add_return(i, v)
 
-static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static inline int atomic_sub_return(int i, atomic_t *v)
 {
-	s64 oldval, newval;
-	unsigned long tmp;
+	int ret;
 
 	smp_mb();
-	prefetchw(&v->counter);
-
-	__asm__ __volatile__("@ atomic64_add_unless\n"
-"1:	ldrexd	%0, %H0, [%4]\n"
-"	teq	%0, %5\n"
-"	teqeq	%H0, %H5\n"
-"	beq	2f\n"
-"	adds	%Q1, %Q0, %Q6\n"
-"	adc	%R1, %R0, %R6\n"
-"	strexd	%2, %1, %H1, [%4]\n"
-"	teq	%2, #0\n"
-"	bne	1b\n"
-"2:"
-	: "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
-	: "r" (&v->counter), "r" (u), "r" (a)
-	: "cc");
-
-	if (oldval != u)
-		smp_mb();
+	ret = atomic_sub_return_relaxed(i, v);
+	smp_mb();
 
-	return oldval;
+	return ret;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
 
-#endif /* !CONFIG_GENERIC_ATOMIC64 */
-#endif
-#endif
\ No newline at end of file
+#endif /* __ASM_ARM_ARM32_ATOMIC_H */
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:59:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25368.53073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Y-0007gC-NY; Wed, 11 Nov 2020 21:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25368.53073; Wed, 11 Nov 2020 21:59:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9Y-0007fs-CS; Wed, 11 Nov 2020 21:59:44 +0000
Received: by outflank-mailman (input) for mailman id 25368;
 Wed, 11 Nov 2020 21:59:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3z-00064v-RZ
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:59 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e89902fb-7dcf-49c9-809d-f4bae8f5d314;
 Wed, 11 Nov 2020 21:52:56 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id 23so3965030wrc.8
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:56 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.54
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:54 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3z-00064v-RZ
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:59 +0000
X-Inumbo-ID: e89902fb-7dcf-49c9-809d-f4bae8f5d314
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e89902fb-7dcf-49c9-809d-f4bae8f5d314;
	Wed, 11 Nov 2020 21:52:56 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id 23so3965030wrc.8
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fD74UoudjvYdXmC0ErN+hXn9USTuY6/Q/RnD/8M+iRI=;
        b=Ng2wzGaRjSGtlmnnkCYihoMFHpZwADCLk1XYrol1yaUdmD12gN2MY9krmJLEMObL6M
         qeRfXzyFGXGEnelMKh4vUMrud0s6/iA/EUwfhCibNgEjEP566X9zJmiQTKX1q1ktTCLJ
         sjpCDE3p9TQ9wjWCnQQ3DZEZFqTHUKS7+uoqw8NhBfgxNhX4AeY/83a5/TN26GG57/m8
         BrqKON3pdQMW3y/pn0D9ZbfBJFvC4peOqNTzmfSMbnEyNsmhLAokvxy33FcYsY2rYPV4
         ETxMzPHoSjCSaulv2OL7l/j9EEjzSZM9JTc9q3ziGcwrZVFNQsQolZadfomxkWugAKcJ
         wqcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fD74UoudjvYdXmC0ErN+hXn9USTuY6/Q/RnD/8M+iRI=;
        b=JCsgIbj02b2RowGtJ38pbdY+pQMjqSqjzi66CKL2A+lTIiB3sBb58+g/VetooaBwe9
         MfT5rdeTH1PLJlJx/bqI2G4vMxABM8sjMOTcnRHqkFiDGdEMXMZZqcRgb/iOR5AMqFGK
         6EI9+a1BtXFxtKO2U2qgz6yXlBXlAol8hO/kocR3cxr06pTH+xxPvSpabr0imzTA6A1l
         M/KaujIioWNYg/eObYvbCq5u7TCMKY5/VNwFREgRVr7bOZXBqCYL7eSb2lsBt+IcWVlC
         sTm+z9Ak07XUTqnolSA6HSOtjg8+B8bcND9DPXJ/Ij9pOPrPtOLWf9YtJB6Yv7ucxypJ
         RFeQ==
X-Gm-Message-State: AOAM530yv2AmksfNyxIGpWZ2AQdJZ6nya4NeHR32DKNkEOn+fR7WM6N1
	f4aWdU5xJgjiqUavkTa9NDt4Hid6aDw=
X-Google-Smtp-Source: ABdhPJyolI2is12QJH+ntYHmZaUJPBkLe6jN9v0sIQCa3sPAyKR5IWMzFSm2OnjNj32FgK5Am2o3aw==
X-Received: by 2002:a5d:69d1:: with SMTP id s17mr21879105wrw.104.1605131574983;
        Wed, 11 Nov 2020 13:52:54 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.54
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:54 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 14/15] xen/arm32: port Linux's arm32 cmpxchg.h to Xen
Date: Wed, 11 Nov 2020 21:52:02 +0000
Message-Id: <20201111215203.80336-15-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

 - Drop support for pre-Armv7 systems, including the workarounds for
   the swp instruction being broken on StrongARM.

 - Drop the _local variants, as they have no callers in Xen.

 - Keep the compiler happy by changing __cmpxchg64()'s ptr arg to be
   volatile, and casting ptr to (const void *) in the call to
   prefetchw().

 - Add explicit strict variants of xchg(), cmpxchg(), and cmpxchg64(),
   as the Linux arm32 cmpxchg.h doesn't define these and they're needed
   for Xen. These strict variants are just wrappers that sandwich calls
   to the relaxed variants between two smp_mb()'s.

 - Pull in the timeout variants of cmpxchg from the original Xen
   arm32 cmpxchg.h as these are required for guest atomics and are
   not provided by Linux.
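   The smp_mb() sandwich pattern used for the strict variants can be
   sketched in portable C11. This is an illustrative stand-in only
   (C11 fences and a hypothetical strict_xchg() name substitute for
   Xen's smp_mb() and the real xchg()/cmpxchg() macros):

```c
#include <stdatomic.h>

/*
 * Illustrative sketch of a "strict" (fully ordered) exchange built by
 * sandwiching a relaxed exchange between two full memory barriers.
 * atomic_thread_fence(memory_order_seq_cst) stands in for smp_mb();
 * the name strict_xchg() is hypothetical, not Xen's real macro.
 */
static inline long strict_xchg(_Atomic long *ptr, long x)
{
    long ret;

    atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */
    ret = atomic_exchange_explicit(ptr, x, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */

    return ret;
}
```

   The same shape applies to cmpxchg() and cmpxchg64(): only the
   relaxed operation in the middle changes.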

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/arm32/cmpxchg.h | 322 ++++++++++++++++------------
 1 file changed, 188 insertions(+), 134 deletions(-)

diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
index 638ae84afb..d7189984d0 100644
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -1,46 +1,24 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_ARM_CMPXCHG_H
-#define __ASM_ARM_CMPXCHG_H
-
-#include <linux/irqflags.h>
-#include <linux/prefetch.h>
-#include <asm/barrier.h>
-
-#if defined(CONFIG_CPU_SA1100) || defined(CONFIG_CPU_SA110)
 /*
- * On the StrongARM, "swp" is terminally broken since it bypasses the
- * cache totally.  This means that the cache becomes inconsistent, and,
- * since we use normal loads/stores as well, this is really bad.
- * Typically, this causes oopsen in filp_close, but could have other,
- * more disastrous effects.  There are two work-arounds:
- *  1. Disable interrupts and emulate the atomic swap
- *  2. Clean the cache, perform atomic swap, flush the cache
+ * Taken from Linux 5.10-rc2 (last commit 3cea11cd5)
  *
- * We choose (1) since its the "easiest" to achieve here and is not
- * dependent on the processor type.
- *
- * NOTE that this solution won't work on an SMP system, so explcitly
- * forbid it here.
+ * SPDX-License-Identifier: GPL-2.0
  */
-#define swp_is_buggy
-#endif
+#ifndef __ASM_ARM_ARM32_CMPXCHG_H
+#define __ASM_ARM_ARM32_CMPXCHG_H
+
+#include <xen/prefetch.h>
+#include <xen/types.h>
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 {
-	extern void __bad_xchg(volatile void *, int);
 	unsigned long ret;
-#ifdef swp_is_buggy
-	unsigned long flags;
-#endif
-#if __LINUX_ARM_ARCH__ >= 6
 	unsigned int tmp;
-#endif
 
 	prefetchw((const void *)ptr);
 
 	switch (size) {
-#if __LINUX_ARM_ARCH__ >= 6
-#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
 	case 1:
 		asm volatile("@	__xchg1\n"
 		"1:	ldrexb	%0, [%3]\n"
@@ -61,7 +39,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
-#endif
 	case 4:
 		asm volatile("@	__xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
@@ -72,42 +49,10 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
-#elif defined(swp_is_buggy)
-#ifdef CONFIG_SMP
-#error SMP is not supported on this platform
-#endif
-	case 1:
-		raw_local_irq_save(flags);
-		ret = *(volatile unsigned char *)ptr;
-		*(volatile unsigned char *)ptr = x;
-		raw_local_irq_restore(flags);
-		break;
 
-	case 4:
-		raw_local_irq_save(flags);
-		ret = *(volatile unsigned long *)ptr;
-		*(volatile unsigned long *)ptr = x;
-		raw_local_irq_restore(flags);
-		break;
-#else
-	case 1:
-		asm volatile("@	__xchg1\n"
-		"	swpb	%0, %1, [%2]"
-			: "=&r" (ret)
-			: "r" (x), "r" (ptr)
-			: "memory", "cc");
-		break;
-	case 4:
-		asm volatile("@	__xchg4\n"
-		"	swp	%0, %1, [%2]"
-			: "=&r" (ret)
-			: "r" (x), "r" (ptr)
-			: "memory", "cc");
-		break;
-#endif
 	default:
-		/* Cause a link-time error, the xchg() size is not supported */
-		__bad_xchg(ptr, size), ret = 0;
+		/* Cause a link-time error, the size is not supported */
+		__bad_cmpxchg(ptr, size), ret = 0;
 		break;
 	}
 
@@ -119,40 +64,6 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 				   sizeof(*(ptr)));			\
 })
 
-#include <asm-generic/cmpxchg-local.h>
-
-#if __LINUX_ARM_ARCH__ < 6
-/* min ARCH < ARMv6 */
-
-#ifdef CONFIG_SMP
-#error "SMP is not supported on this platform"
-#endif
-
-#define xchg xchg_relaxed
-
-/*
- * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
- * them available.
- */
-#define cmpxchg_local(ptr, o, n) ({					\
-	(__typeof(*ptr))__cmpxchg_local_generic((ptr),			\
-					        (unsigned long)(o),	\
-					        (unsigned long)(n),	\
-					        sizeof(*(ptr)));	\
-})
-
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
-
-#include <asm-generic/cmpxchg.h>
-
-#else	/* min ARCH >= ARMv6 */
-
-extern void __bad_cmpxchg(volatile void *ptr, int size);
-
-/*
- * cmpxchg only support 32-bits operands on ARMv6.
- */
-
 static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 				      unsigned long new, int size)
 {
@@ -161,7 +72,6 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	prefetchw((const void *)ptr);
 
 	switch (size) {
-#ifndef CONFIG_CPU_V6	/* min ARCH >= ARMv6K */
 	case 1:
 		do {
 			asm volatile("@ __cmpxchg1\n"
@@ -186,7 +96,6 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 				: "memory", "cc");
 		} while (res);
 		break;
-#endif
 	case 4:
 		do {
 			asm volatile("@ __cmpxchg4\n"
@@ -199,6 +108,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 				: "memory", "cc");
 		} while (res);
 		break;
+
 	default:
 		__bad_cmpxchg(ptr, size);
 		oldval = 0;
@@ -214,41 +124,14 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 				      sizeof(*(ptr)));			\
 })
 
-static inline unsigned long __cmpxchg_local(volatile void *ptr,
-					    unsigned long old,
-					    unsigned long new, int size)
-{
-	unsigned long ret;
-
-	switch (size) {
-#ifdef CONFIG_CPU_V6	/* min ARCH == ARMv6 */
-	case 1:
-	case 2:
-		ret = __cmpxchg_local_generic(ptr, old, new, size);
-		break;
-#endif
-	default:
-		ret = __cmpxchg(ptr, old, new, size);
-	}
-
-	return ret;
-}
-
-#define cmpxchg_local(ptr, o, n) ({					\
-	(__typeof(*ptr))__cmpxchg_local((ptr),				\
-				        (unsigned long)(o),		\
-				        (unsigned long)(n),		\
-				        sizeof(*(ptr)));		\
-})
-
-static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
+static inline unsigned long long __cmpxchg64(volatile unsigned long long *ptr,
 					     unsigned long long old,
 					     unsigned long long new)
 {
 	unsigned long long oldval;
 	unsigned long res;
 
-	prefetchw(ptr);
+	prefetchw((const void *)ptr);
 
 	__asm__ __volatile__(
 "1:	ldrexd		%1, %H1, [%3]\n"
@@ -272,8 +155,179 @@ static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
 					(unsigned long long)(n));	\
 })
 
-#define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
 
-#endif	/* __LINUX_ARM_ARCH__ >= 6 */
+/*
+ * Linux doesn't provide strict versions of xchg(), cmpxchg(), and cmpxchg64(),
+ * so manually define these for Xen as smp_mb() wrappers around the relaxed
+ * variants.
+ */
 
-#endif /* __ASM_ARM_CMPXCHG_H */
\ No newline at end of file
+#define xchg(ptr, x) ({ \
+	long ret; \
+	smp_mb(); \
+	ret = xchg_relaxed(ptr, x); \
+	smp_mb(); \
+	ret; \
+})
+
+#define cmpxchg(ptr, o, n) ({ \
+	long ret; \
+	smp_mb(); \
+	ret = cmpxchg_relaxed(ptr, o, n); \
+	smp_mb(); \
+	ret; \
+})
+
+#define cmpxchg64(ptr, o, n) ({ \
+	long long ret; \
+	smp_mb(); \
+	ret = cmpxchg64_relaxed(ptr, o, n); \
+	smp_mb(); \
+	ret; \
+})
+
+/*
+ * This code is from the original Xen arm32 cmpxchg.h, from before the
+ * Linux 5.10-rc2 atomics helpers were ported over. The only changes
+ * here are renaming the macros and functions to explicitly use
+ * "timeout" in their names so that they don't clash with the above.
+ *
+ * We need this here for guest atomics (the only user of the timeout
+ * variants).
+ */
+
+#define __CMPXCHG_TIMEOUT_CASE(sz, name)                                        \
+static inline bool __cmpxchg_timeout_case_##name(volatile void *ptr,            \
+                                         unsigned long *old,            \
+                                         unsigned long new,             \
+                                         bool timeout,                  \
+                                         unsigned int max_try)          \
+{                                                                       \
+        unsigned long oldval;                                           \
+        unsigned long res;                                              \
+                                                                        \
+        do {                                                            \
+                asm volatile("@ __cmpxchg_timeout_case_" #name "\n"             \
+                "       ldrex" #sz "    %1, [%2]\n"                     \
+                "       mov     %0, #0\n"                               \
+                "       teq     %1, %3\n"                               \
+                "       strex" #sz "eq %0, %4, [%2]\n"                  \
+                : "=&r" (res), "=&r" (oldval)                           \
+                : "r" (ptr), "Ir" (*old), "r" (new)                     \
+                : "memory", "cc");                                      \
+                                                                        \
+                if (!res)                                               \
+                        break;                                          \
+        } while (!timeout || ((--max_try) > 0));                        \
+                                                                        \
+        *old = oldval;                                                  \
+                                                                        \
+        return !res;                                                    \
+}
+
+__CMPXCHG_TIMEOUT_CASE(b, 1)
+__CMPXCHG_TIMEOUT_CASE(h, 2)
+__CMPXCHG_TIMEOUT_CASE( , 4)
+
+static inline bool __cmpxchg_timeout_case_8(volatile uint64_t *ptr,
+                                    uint64_t *old,
+                                    uint64_t new,
+                                    bool timeout,
+                                    unsigned int max_try)
+{
+        uint64_t oldval;
+        uint64_t res;
+
+        do {
+                asm volatile(
+                "       ldrexd          %1, %H1, [%3]\n"
+                "       teq             %1, %4\n"
+                "       teqeq           %H1, %H4\n"
+                "       movne           %0, #0\n"
+                "       movne           %H0, #0\n"
+                "       bne             2f\n"
+                "       strexd          %0, %5, %H5, [%3]\n"
+                "2:"
+                : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
+                : "r" (ptr), "r" (*old), "r" (new)
+                : "memory", "cc");
+                if (!res)
+                        break;
+        } while (!timeout || ((--max_try) > 0));
+
+        *old = oldval;
+
+        return !res;
+}
+
+static always_inline bool __int_cmpxchg(volatile void *ptr, unsigned long *old,
+                                        unsigned long new, int size,
+                                        bool timeout, unsigned int max_try)
+{
+        prefetchw((const void *)ptr);
+
+        switch (size) {
+        case 1:
+                return __cmpxchg_timeout_case_1(ptr, old, new, timeout, max_try);
+        case 2:
+                return __cmpxchg_timeout_case_2(ptr, old, new, timeout, max_try);
+        case 4:
+                return __cmpxchg_timeout_case_4(ptr, old, new, timeout, max_try);
+        default:
+                __bad_cmpxchg(ptr, size);
+                return false;
+        }
+
+        ASSERT_UNREACHABLE();
+}
+
+/*
+ * The helper may fail to update the memory if the action takes too long.
+ *
+ * @old: On call the value pointed contains the expected old value. It will be
+ * updated to the actual old value.
+ * @max_try: Maximum number of iterations
+ *
+ * The helper will return true when the update has succeeded (i.e. no
+ * timeout) and false if the update has failed.
+ */
+static always_inline bool __cmpxchg_timeout(volatile void *ptr,
+                                            unsigned long *old,
+                                            unsigned long new,
+                                            int size,
+                                            unsigned int max_try)
+{
+        bool ret;
+
+        smp_mb();
+        ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+        smp_mb();
+
+        return ret;
+}
+
+/*
+ * The helper may fail to update the memory if the action takes too long.
+ *
+ * @old: On call the value pointed contains the expected old value. It will be
+ * updated to the actual old value.
+ * @max_try: Maximum number of iterations
+ *
+ * The helper will return true when the update has succeeded (i.e. no
+ * timeout) and false if the update has failed.
+ */
+static always_inline bool __cmpxchg64_timeout(volatile uint64_t *ptr,
+                                              uint64_t *old,
+                                              uint64_t new,
+                                              unsigned int max_try)
+{
+        bool ret;
+
+        smp_mb();
+        ret = __cmpxchg_timeout_case_8(ptr, old, new, true, max_try);
+        smp_mb();
+
+        return ret;
+}
+
+#endif /* __ASM_ARM_ARM32_CMPXCHG_H */
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 21:59:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 21:59:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25371.53109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9a-0007kq-GE; Wed, 11 Nov 2020 21:59:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25371.53109; Wed, 11 Nov 2020 21:59:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcy9a-0007kJ-5G; Wed, 11 Nov 2020 21:59:46 +0000
Received: by outflank-mailman (input) for mailman id 25371;
 Wed, 11 Nov 2020 21:59:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kcy3u-00064v-RK
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:54 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id faaf8b8e-c396-4fb1-8e30-4a2829c5ebe4;
 Wed, 11 Nov 2020 21:52:57 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id k2so3999500wrx.2
 for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:57 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.55
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 11 Nov 2020 13:52:55 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	id 1kcy3u-00064v-RK
	for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 21:53:54 +0000
X-Inumbo-ID: faaf8b8e-c396-4fb1-8e30-4a2829c5ebe4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id faaf8b8e-c396-4fb1-8e30-4a2829c5ebe4;
	Wed, 11 Nov 2020 21:52:57 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id k2so3999500wrx.2
        for <xen-devel@lists.xenproject.org>; Wed, 11 Nov 2020 13:52:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=rfNmZKkQ9GXWazOVDjrBrED1XtwfKtN3kTzNIukG8jo=;
        b=FDG3HI4ZHFQ1tpEoeHroWB7sKALJuSzLT2s0Wk3a6dcimJwgnnhb2UTGDN+N9mjXQS
         SNNcOsCdSrOPtErpo7GBQpILhO+UCkLidYqp1hCn0m3xW1ueJEFc8Gsy2cYhcXUl0a9c
         1LHRczZ5NQ4SGLJ6F7rIYvTc/KzYutIQgWm9qjbqmd3iczlFY1pE6MZC0hiYzRwvWgcz
         tmbTBnSWc56V5mVcyuL4BkaYuReTjl82qf+pB/HVzKCZcGtEdDO+gkYy5KSGdeZiVB2J
         1j/lw3HPSiaV5DspnC6Q5waZ1eW6LSPjUWtbgH/GpsuN9EJGgTqXLboTaHDVV/coxTXp
         iuOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=rfNmZKkQ9GXWazOVDjrBrED1XtwfKtN3kTzNIukG8jo=;
        b=d7I9Btx34alxAC0eQ1b4zsl+MmGoTQvadrgsnyWrQjS0eRpSXBpSQ0CMiva8NkK8Cf
         me36xbOaE7RsjVfPflx5eos7uUgyKJcgFRFcru+0OJYOWClJYI0brP5P9f54sh2NaUXc
         TViDvzD2VkrP6eTAltUKu+/yFhfXZ0ChARzm9rwEjUknKITMKZ3ndHMn/vzPuh9fzYy+
         oTNszf2vA3LKtz3ZY92nO8ZpzMxXhuVJozh9EleZq/JqxVAfZ9ZZrBnvezeorifFzoW2
         u1qVOUHdDy6hfPzsGdNCQwahmpD0r8ZGBkK5XJhogj6zvAhwHTDchaV1QnET36VrgYH0
         AEzQ==
X-Gm-Message-State: AOAM531SpxrKFSp2bEXylTt4Tbjs5CjCdWw9JBBYIEf2X/K0b3PpQd9A
	28J02lhvhiQLNLzONvMUFz8rqrBT5MQ=
X-Google-Smtp-Source: ABdhPJxsDoo8botbsPJoNsZZSEN0xqfshqIiL0ER+EsuBlcdAQUToZNdg9NG7mwG5vyj0hFiyppkwQ==
X-Received: by 2002:adf:9cc6:: with SMTP id h6mr32447024wre.341.1605131576028;
        Wed, 11 Nov 2020 13:52:56 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com. [5.71.162.151])
        by smtp.gmail.com with ESMTPSA id u23sm4096078wmc.32.2020.11.11.13.52.55
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Wed, 11 Nov 2020 13:52:55 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: xen-devel@lists.xenproject.org
Cc: Ash Wilding <ash.j.wilding@gmail.com>,
	julien@xen.org,
	bertrand.marquis@arm.com,
	rahul.singh@arm.com
Subject: [RFC PATCH v2 15/15] xen/arm: remove dependency on gcc built-in __sync_fetch_and_add()
Date: Wed, 11 Nov 2020 21:52:03 +0000
Message-Id: <20201111215203.80336-16-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20201111215203.80336-1-ash.j.wilding@gmail.com>
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ash Wilding <ash.j.wilding@gmail.com>

Now that we have explicit implementations of LL/SC and LSE atomics
helpers after porting Linux's versions to Xen, we can drop the reference
to gcc's built-in __sync_fetch_and_add().

This requires some fudging using container_of() because the users of
__sync_fetch_and_add(), namely xen/spinlock.c, expect the pointer to
point directly at the u32 being modified, while the atomics helpers
expect a pointer to an atomic_t and then access that atomic_t's counter
member.

By using container_of() we can create a "fake" (atomic_t *) pointer and
pass that to the atomic_fetch_add() that we ported from Linux.

NOTE: spinlock.c uses u32 for the value being added, while the atomics
helpers use int for their counter member. This shouldn't actually matter
because the addition is done in assembly and the compiler isn't smart
enough to detect potential signed integer overflow in inline assembly,
but I thought it worth calling out in the commit message.
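The container_of() fudge described above reduces to pointer arithmetic
with offsetof(). A minimal, hypothetical sketch follows: atomic_t is
modeled with Xen's { int counter; } layout, and a plain (non-atomic)
add stands in for atomic_fetch_add() purely for illustration:

```c
#include <stddef.h>

/* Stand-in for Xen's atomic_t: a single int counter member. */
typedef struct { int counter; } atomic_t;

/* Minimal container_of(): recover the enclosing struct pointer from a
 * pointer to one of its members by subtracting the member's offset. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Illustrative fetch-and-add through the "fake" atomic_t pointer.
 * Since counter is the first (and only) member, the fake pointer
 * aliases loc directly; the add here is non-atomic for simplicity. */
static int demo_fetch_and_add(unsigned int *loc, int i)
{
    atomic_t *v = container_of((int *)loc, atomic_t, counter);
    int old = v->counter;   /* stand-in for atomic_fetch_add(i, v) */

    v->counter += i;
    return old;
}
```

In the real patch the add is performed by the ported atomic_fetch_add()
helper; only the pointer construction is shown faithfully here.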

Signed-off-by: Ash Wilding <ash.j.wilding@gmail.com>
---
 xen/include/asm-arm/system.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..0326e3ade4 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -58,7 +58,14 @@ static inline int local_abort_is_enabled(void)
     return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                   \
+    int ret;                                                            \
+                                                                        \
+    atomic_t *tmp = container_of((int *)(ptr), atomic_t, counter);      \
+    ret = atomic_fetch_add(x, tmp);                                     \
+                                                                        \
+    ret;                                                                \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);
 
-- 
2.24.3 (Apple Git-128)



From xen-devel-bounces@lists.xenproject.org Wed Nov 11 22:38:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 11 Nov 2020 22:38:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25420.53137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcyki-0003cG-1A; Wed, 11 Nov 2020 22:38:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25420.53137; Wed, 11 Nov 2020 22:38:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kcykh-0003c9-Tz; Wed, 11 Nov 2020 22:38:07 +0000
Received: by outflank-mailman (input) for mailman id 25420;
 Wed, 11 Nov 2020 22:38:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iq8f=ER=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kcykg-0003bV-Uh
 for xen-devel@lists.xenproject.org; Wed, 11 Nov 2020 22:38:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6d9b322-bbfd-4fea-966e-e61b0d36537a;
 Wed, 11 Nov 2020 22:37:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcykZ-0007bV-7w; Wed, 11 Nov 2020 22:37:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kcykY-0008Ie-RP; Wed, 11 Nov 2020 22:37:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kcykY-0004HJ-Qh; Wed, 11 Nov 2020 22:37:58 +0000
X-Inumbo-ID: c6d9b322-bbfd-4fea-966e-e61b0d36537a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PoU9MoLG1DcKjmrneziJromhLcmq8Byu/Pw1Qz+WMTU=; b=yQkGjVdpAIN9Qc0AKtO1jQOrky
	Caw3iaePnhdk3+Z22KUTRvExra51SLUUAwIuU6NFemmmjc99QNtwRMi3SFnb/6MeW6jciZOkmHSQb
	cvW5lvvedUkxpJKOp0wqC+/6w2WPGes1aLXXfVbBHE4DcwmOevv9W//ZRcwY4MYCrKhU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156635-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156635: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=14c9c0fceae92a18dedc3f280ebf8b9f52e39de5
X-Osstest-Versions-That:
    xen=46ad8841ac4da8fc2a128e49b8406966bf5d3601
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 11 Nov 2020 22:37:58 +0000

flight 156635 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156635/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156592
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156592
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156592
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156592
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156592
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156592
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156592
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156592
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156592
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156592
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156592
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156592
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  14c9c0fceae92a18dedc3f280ebf8b9f52e39de5
baseline version:
 xen                  46ad8841ac4da8fc2a128e49b8406966bf5d3601

Last test of basis   156592  2020-11-09 18:08:09 Z    2 days
Testing same since   156635  2020-11-10 18:06:22 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   46ad8841ac..14c9c0fcea  14c9c0fceae92a18dedc3f280ebf8b9f52e39de5 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 00:32:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 00:32:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25437.53174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd0X5-00071q-Td; Thu, 12 Nov 2020 00:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25437.53174; Thu, 12 Nov 2020 00:32:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd0X5-00071j-QB; Thu, 12 Nov 2020 00:32:11 +0000
Received: by outflank-mailman (input) for mailman id 25437;
 Thu, 12 Nov 2020 00:32:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kd0X4-00071e-0O
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 00:32:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 465bd2f2-df79-4747-9d36-fcdea700ac3e;
 Thu, 12 Nov 2020 00:32:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd0X1-00026T-99; Thu, 12 Nov 2020 00:32:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd0X0-00059n-TH; Thu, 12 Nov 2020 00:32:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kd0X0-0000UF-Sn; Thu, 12 Nov 2020 00:32:06 +0000
X-Inumbo-ID: 465bd2f2-df79-4747-9d36-fcdea700ac3e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8KMRDUEDJYP9K5onCo28dn3OESC0QWqFOa8T/fBx0Js=; b=QRHUV2Nch3uwVkrqfq2ORWreFk
	HGxIZAnulxZL5zRXaa+wFt0Jfst1aJQl/eDcRFcC4l3rUFbssS+mW++OP3S9YQYDRD4gZzRz9HyJ3
	jtaC+Up+MA9UbU/yfUMoTIFFJg7aVeNQu47i9BaL2qTVPVTjy2yn1MmH6kLv7VC41ano=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156636-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156636: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d4c0483c0b87768cd9b95542e98111e4c098d57f
X-Osstest-Versions-That:
    xen=971a9d14667448427ddea1cf15fd7fd409205c58
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 00:32:06 +0000

flight 156636 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156636/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156593
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156593
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156593
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156593
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156593
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156593
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156593
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156593
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156593
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156593
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156593
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  d4c0483c0b87768cd9b95542e98111e4c098d57f
baseline version:
 xen                  971a9d14667448427ddea1cf15fd7fd409205c58

Last test of basis   156593  2020-11-09 18:08:08 Z    2 days
Testing same since   156636  2020-11-10 18:06:32 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   971a9d1466..d4c0483c0b  d4c0483c0b87768cd9b95542e98111e4c098d57f -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 03:50:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 03:50:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25467.53231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd3cJ-0004la-M1; Thu, 12 Nov 2020 03:49:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25467.53231; Thu, 12 Nov 2020 03:49:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd3cJ-0004lR-EP; Thu, 12 Nov 2020 03:49:47 +0000
Received: by outflank-mailman (input) for mailman id 25467;
 Thu, 12 Nov 2020 03:49:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kd3cH-0004fh-Gx
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 03:49:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8ce2302-b450-46cc-b9e8-2f99f6070a48;
 Thu, 12 Nov 2020 03:49:38 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd3cA-0002H4-F4; Thu, 12 Nov 2020 03:49:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd3cA-00067y-2z; Thu, 12 Nov 2020 03:49:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kd3cA-0001PA-2W; Thu, 12 Nov 2020 03:49:38 +0000
X-Inumbo-ID: d8ce2302-b450-46cc-b9e8-2f99f6070a48
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dxLvxyA8wHPXkrb8V5v4vCxpSMnPxNl54JS6rgdz4o0=; b=lMda1o6MLXypVxDQQ0siWyVBtc
	5CYWSMzzAxc3INLYduKU8dLAG7JHsSh4ywLqpn/8iMfcQ/3vlfodEs42nl7lu+olCXMzX02/yvfLX
	wM80qm/72LDsCmkzBRssD6bnN2wMlyZhq2dsdHdAZfGpXT7Kg3TmuCOybK1zIFNTeMEY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156689-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156689: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=69634224afaf84474f04e1ab050f216d66bcda68
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 03:49:38 +0000

flight 156689 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156689/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  69634224afaf84474f04e1ab050f216d66bcda68

Last test of basis   156679  2020-11-11 08:00:30 Z    0 days
Testing same since   156689  2020-11-11 22:01:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Penny Zheng <penny.zheng@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   69634224af..5505f5f8e7  5505f5f8e7e805365cfe70b6a4af6115940bb749 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 04:47:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 04:47:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25482.53266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd4WG-00029a-0v; Thu, 12 Nov 2020 04:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25482.53266; Thu, 12 Nov 2020 04:47:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd4WF-00029T-UG; Thu, 12 Nov 2020 04:47:35 +0000
Received: by outflank-mailman (input) for mailman id 25482;
 Thu, 12 Nov 2020 04:47:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kd4WF-00028p-G1
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 04:47:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba7104cd-ef6d-4799-9aa2-e1623d77a5e0;
 Thu, 12 Nov 2020 04:47:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd4W6-0003f9-2D; Thu, 12 Nov 2020 04:47:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd4W5-000181-RF; Thu, 12 Nov 2020 04:47:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kd4W5-000340-Ql; Thu, 12 Nov 2020 04:47:25 +0000
X-Inumbo-ID: ba7104cd-ef6d-4799-9aa2-e1623d77a5e0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O7wo9dyBEKo635nsHL/QVteNeLs0X0h3fDSzasYbk8Y=; b=Pmf9D1D6JjyQ4PNLy8DAr3+GGb
	chVRRMTKV9i+bCNNZjrYPRhyXOXavejro6X+Kq6TdR7QdORCv4rfgFviMkdCCAByV5o4QmqYLGuEA
	QhsRdHtxR9BUeIPizigeJBxzGwnSNz7MspFtWsEcnECJgmmVfT9/SV5plN6Ft+DRrveo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156671-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156671: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d44a8203e7d3be30daf6a95c1fcbe67e38e8b475
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 04:47:25 +0000

flight 156671 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d44a8203e7d3be30daf6a95c1fcbe67e38e8b475
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  125 days
Failing since        151818  2020-07-11 04:18:52 Z  124 days  118 attempts
Testing same since   156671  2020-11-11 04:19:17 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25782 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 05:33:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 05:33:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25491.53282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd5EJ-00079b-JV; Thu, 12 Nov 2020 05:33:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25491.53282; Thu, 12 Nov 2020 05:33:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd5EJ-00079U-GM; Thu, 12 Nov 2020 05:33:07 +0000
Received: by outflank-mailman (input) for mailman id 25491;
 Thu, 12 Nov 2020 05:33:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kd5EI-00079P-Ak
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 05:33:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9489288f-0da2-42b1-8950-d6479b960d43;
 Thu, 12 Nov 2020 05:32:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd5E9-0004sI-5F; Thu, 12 Nov 2020 05:32:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd5E8-0003MS-RS; Thu, 12 Nov 2020 05:32:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kd5E8-0005Ks-Qx; Thu, 12 Nov 2020 05:32:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kd5EI-00079P-Ak
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 05:33:06 +0000
X-Inumbo-ID: 9489288f-0da2-42b1-8950-d6479b960d43
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9489288f-0da2-42b1-8950-d6479b960d43;
	Thu, 12 Nov 2020 05:32:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/V/5DuYFgXgwusxogMoRdlwvZSTvKdcx55yxX3Ke5wA=; b=EIDdZ3xdcpcMR7IGebNOPZWEW1
	j8nV9fRPigvjkyxwPgQMIboJbXAG4c2wYtXPJbHxhUVE/sJs6PhneqnfcY2gfHQVck+U3kSPCaNhS
	pHmRhTpZI0+LZ/fTQm3uGQsjp2aq9BeQTxUJu2BsGnXcFbxYxYMDd/A75fB/IlrnsqfM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kd5E9-0004sI-5F; Thu, 12 Nov 2020 05:32:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kd5E8-0003MS-RS; Thu, 12 Nov 2020 05:32:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kd5E8-0005Ks-Qx; Thu, 12 Nov 2020 05:32:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156646-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156646: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:guest-localmigrate/x10:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=879860ca706fa1ef47ba511c49a6e2b1b49be9b7
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 05:32:56 +0000

flight 156646 qemu-mainline real [real]
flight 156704 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156646/
http://logs.test-lab.xenproject.org/osstest/logs/156704/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-xsm      20 guest-localmigrate/x10   fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                879860ca706fa1ef47ba511c49a6e2b1b49be9b7
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   83 days
Failing since        152659  2020-08-21 14:07:39 Z   82 days  178 attempts
Testing same since   156646  2020-11-10 20:59:15 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 63195 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:15:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25522.53324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd7kt-00067O-V2; Thu, 12 Nov 2020 08:14:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25522.53324; Thu, 12 Nov 2020 08:14:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd7kt-00067H-RV; Thu, 12 Nov 2020 08:14:55 +0000
Received: by outflank-mailman (input) for mailman id 25522;
 Thu, 12 Nov 2020 08:14:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd7ks-00067C-Hx
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 08:14:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5d513f8-27e5-4e09-87ff-0ca5ae1e1e6c;
 Thu, 12 Nov 2020 08:14:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A0D8AAC48;
 Thu, 12 Nov 2020 08:14:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605168892;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4Lrw4MaCanCjK64q0hqvuM3XkTxYJuV94P34RGjf6/I=;
	b=mOw4tHw3T9PyIBN7qdPZEQXzwaZtmSnMZ4fy9i8D3aqqlQ+8Wpph/HRkMTUCdp6wPizcIW
	fz0Ypg0bh4gyvjjaa0DTfgciaGYS2wiSTl3+VKuW+UJLwobFCH7vK2rqJmbXIv2qXg4xJP
	aTfOlb+vzQcD7fKrI3TH0pe+pCDOjXk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A0D8AAC48;
	Thu, 12 Nov 2020 08:14:52 +0000 (UTC)
Subject: Re: [PATCH] xen/x86: Work around Clang code generation bug with asm
 parameters
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201111124512.2268-1-andrew.cooper3@citrix.com>
 <8282790a-a0bd-1d33-d992-9d194766254e@suse.com>
 <3ecb8469-8504-054a-078d-4bf32f8f82c4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cfc7ad85-22b3-701f-f1d8-5009e5262b92@suse.com>
Date: Thu, 12 Nov 2020 09:14:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <3ecb8469-8504-054a-078d-4bf32f8f82c4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 21:01, Andrew Cooper wrote:
> On 11/11/2020 15:11, Jan Beulich wrote:
>> On 11.11.2020 13:45, Andrew Cooper wrote:
>>> Clang 9 and later don't handle the clobber of %r10 correctly in
>>> _hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122
>> Are you sure this is a bug?
> 
> Yes.
> 
>>  With ...
>>
>>>  #define _hypercall64_4(type, hcall, a1, a2, a3, a4)                     \
>>>      ({                                                                  \
>>> -        long res, tmp__;                                                \
>>> -        register long _a4 asm ("r10") = ((long)(a4));                   \
>>> +        long res, _a1 = (long)(a1), _a2 = (long)(a2),                   \
>>> +            _a3 = (long)(a3);                                           \
>>> +        register long _a4 asm ("r10") = (long)(a4);                     \
>>>          asm volatile (                                                  \
>>>              "call hypercall_page + %c[offset]"                          \
>>> -            : "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__),     \
>>> -              "=&r" (tmp__) ASM_CALL_CONSTRAINT                         \
>> ... this we've requested "any register", while with ...
>>
>>> -            : [offset] "i" (hcall * 32),                                \
>>> -              "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)),     \
>>> -              "4" (_a4)                                                 \
>> ... this we've asked for that specific register to be initialized
>> from r10 (and without telling the compiler that r10 is going to
>> change).
> 
> Consider applying that same reasoning to "1" instead of "4".  In that
> case, a1 would no longer be bound to %rdi.

That's different: "=D" specifies the register for the output, and the
input's "1" says "use the same register as output operand 1". Whereas,
as said, "=&r" says "use any register", with the input's "4" saying
"use the same register as output operand 4" and (_a4) specifying where
the value is to come from.

> The use of "4" explicitly binds the input and the output, which includes
> requiring them to be the same register.
> 
> Furthermore, LLVM tends to consider "not behaving in the same way as
> GCC" a bug.

That's a fair statement, but then still the description wants
re-wording. Plus of course future gcc is free to change their
behavior to that currently observed with clang.

Consider the following example (on an arch where "f" is a
floating point register and there are ways to copy directly
between GPRs and floating point registers):

   int i;
   register float f asm("f7") = <input>;
   asm("..." : "=r" (i) : "0" (f));

In this case obviously f7 can't be used for i (as it doesn't
match "r"). It's merely that the initial value of i is to come
from f7. In fact for Arm64 this

extern float flt;

int test(void) {
	int i;
	register float f asm("s7") = flt;
	asm("add %0,%0,5" : "=r" (i) : "0" (f));
	return i;
}

behaves exactly as described:

test:
        adrp    x0, flt
        ldr     s7, [x0, :lo12:flt]
        fmov    w0, s7
        add     x0, x0, #5
        ret

(Whether fmov is a sensible choice here is a different question;
I'd have expected some fcvt*.)

Similarly a constant initial value would demonstrate this (or
one necessarily coming from memory), albeit in a less obvious
way: It doesn't say "this constant lives in the register", but
"this constant is to be loaded into the register".

>> Also, by what I would have hoped had become convention by now, the new
>> macro local variables' names shouldn't start with an underscore,
>> but instead perhaps end in one. But to be honest, despite knowing
>> of the latent (albeit highly hypothetical) issue, each time I
>> find myself making such a comment I'm one tiny step closer to
>> giving up.
> 
> Convention is not created by you getting irritated at others getting
> irritated at you for requesting bizarre and unusual changes in submitted
> code, or by continuing to point things out, knowing that others
> specifically disagree with your reasoning in this case.
> 
> Convention is created when there is general agreement over the technical
> merits of writing code in one particular way vs the alternatives, *and*
> it's written down somewhere, so this argument does not continue any further.
> 
> There are no restrictions or guidance in the coding style to prefer one
> form over another, therefore anything is acceptable.

Our coding style document imo ought to describe only things not already
specified by the standard. The scopes in which certain identifiers are
to be considered reserved are, however, already written down there.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:26:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:26:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25530.53339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd7wH-0007CX-6n; Thu, 12 Nov 2020 08:26:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25530.53339; Thu, 12 Nov 2020 08:26:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd7wH-0007CQ-3r; Thu, 12 Nov 2020 08:26:41 +0000
Received: by outflank-mailman (input) for mailman id 25530;
 Thu, 12 Nov 2020 08:26:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AYe4=ES=gmail.com=lambert.olivier@srs-us1.protection.inumbo.net>)
 id 1kd7wG-0007CK-7t
 for xen-devel@lists.xen.org; Thu, 12 Nov 2020 08:26:40 +0000
Received: from mail-vs1-xe2e.google.com (unknown [2607:f8b0:4864:20::e2e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca3e4887-efc0-4377-8209-46c2b5ed791b;
 Thu, 12 Nov 2020 08:26:39 +0000 (UTC)
Received: by mail-vs1-xe2e.google.com with SMTP id t8so2767880vsr.2
 for <xen-devel@lists.xen.org>; Thu, 12 Nov 2020 00:26:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=KxYhA8URFOUVcC0n9xmCpT8QbAW4X/j6MTL50w9Xv+Y=;
        b=JyazXarXJkKXpgjS4XKdjHJi3hq/cbDovM1p7LULp/Xl7trXAiqDBEHiCRaF+5R9QH
         K+BLQ+LvljnFxFbho8s0NqBsGIZS4aCq0vjSJsqyvkiycRqdXmV1MrNWxUmjb8jMe+Hb
         FDIVoRxI1GgljYxcPAHD3d9uaIu8uqC6EAC/ncerHoSBsC6CmUfgLVtiGbvWzGQ2chtP
         v/S037TMDRUBvDnhVSyhby6aq+ZzA3nwaM57nyKXo6EVihAvz5EkZVAOW6lsbGeH9+uZ
         OCp4ocGP20R148Z/1n7HKy2UJQPtjYN8lAAJz9+zuA9Yk08BY1uwGa+yIhw/1q8RIAhq
         UC2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=KxYhA8URFOUVcC0n9xmCpT8QbAW4X/j6MTL50w9Xv+Y=;
        b=KMt9abQCb1XDXJRxFCZoZR94ZnFe18TPmFc0ylILfzOXlmFWanXrJzjAsl7Fa5lgul
         o3hZWzo6rt2rGV9wC9309Q/67ht6iIC4dpKeCgKqUByyi+pajtNfIl2bLNHscdKi0Usq
         jMrp23QtKvagrH1Tu48zaFvgmeaqs6wjVAIf6feVGkIwEeLRTgTlHa0COWEaSnkRim6/
         AMvdd7H39rXAfCOVVInYBtmLrvQFyQru393vYjpCOOug1eVxhP5ARiVFg6cUyRlr3YEP
         XBWHXTwociMTejXqS6vv6uo4yHYtrdDPvIx791d9FdVqZ9eBTiRMIVLOi+hULJWf5LN3
         IuQQ==
X-Gm-Message-State: AOAM530Q//dHNPVwcOah0HoZT2qcyZzNB7zq/3zBtCXq1/9XfwxuqjEz
	IK1kcRB2KbO7zttFuIVM0LyXkNQZmSANbhdWoGoN9KGksh+J+g==
X-Google-Smtp-Source: ABdhPJzQE3tiJQZNmAjPnQ1wb6Lraey1Do5/fs85bFNEmek/z6CTzHURxnPJveTvm6etSr1EU8zVxPqUtPv+qFdGFkw=
X-Received: by 2002:a67:ff10:: with SMTP id v16mr17617590vsp.40.1605169598624;
 Thu, 12 Nov 2020 00:26:38 -0800 (PST)
MIME-Version: 1.0
References: <CACJ1ZNuJCgDkRHvH2gXqC5gWTJHdUQ9J4G-HBNFwKYZFaWpWuw@mail.gmail.com>
In-Reply-To: <CACJ1ZNuJCgDkRHvH2gXqC5gWTJHdUQ9J4G-HBNFwKYZFaWpWuw@mail.gmail.com>
From: Olivier Lambert <lambert.olivier@gmail.com>
Date: Thu, 12 Nov 2020 09:26:27 +0100
Message-ID: <CACJ1ZNupvRX_fcGPWn3mm+3Lm4gT38M088tUc_sSUu8JeQg3Fg@mail.gmail.com>
Subject: Re: Schedule for OpenPOWER/Xen meeting
To: "<xen-devel@lists.xen.org>" <xen-devel@lists.xen.org>
Content-Type: multipart/alternative; boundary="000000000000baf5ef05b3e4aba9"

--000000000000baf5ef05b3e4aba9
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Thanks to everyone who participated in the poll. Given the limited number
of answers, I think it's wiser to go for the second option (Thursday the
19th), since everyone who has answered so far seems available that day.
I'll confirm that with OpenPOWER; once it's confirmed, I'll post a recap
here, ideally with the meeting place.

Thanks,

Olivier.


On Tue, 10 Nov 2020 at 13:41, Olivier Lambert <lambert.olivier@gmail.com>
wrote:

> Hi everyone,
>
> We got 2 potential dates for the initial tech meeting with at least one
> OpenPOWER expert, so we can discuss the effort needed to port Xen to this
> architecture.
>
> Because of time zones (on OpenPower side, there's one guy in Australia),
> we got 2 possible schedules in November:
>
> 1. 3pm CT on this Thursday the 12th (! this week)
> 2. Or next week Thursday the 19th
>
> I made a doodle-like poll so everyone can vote on their preferred schedule:
> https://framadate.org/QQu5rYEOEYr4ZHc4
>
> Note: 3pm CT would mean 9pm UTC, 10pm UTC+1 (CET). But correct me if I'm
> wrong.
>
> Reminder: the Cryptpad of the last Xen Community meeting contains the list
> of people interested. If you are aware of someone interested that could
> miss this email on this devel list, feel free to forward it. Cryptpad link:
> https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/
>
> Thank you and see you soon!
>
> Olivier.
>


--000000000000baf5ef05b3e4aba9--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:31:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25537.53351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd80v-0008A7-Rm; Thu, 12 Nov 2020 08:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25537.53351; Thu, 12 Nov 2020 08:31:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd80v-0008A0-Np; Thu, 12 Nov 2020 08:31:29 +0000
Received: by outflank-mailman (input) for mailman id 25537;
 Thu, 12 Nov 2020 08:31:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd80t-00089v-Tt
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 08:31:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9da97217-b18a-4f16-99f5-fddbde9e71e4;
 Thu, 12 Nov 2020 08:31:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6A65AB95;
 Thu, 12 Nov 2020 08:31:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605169885;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O1dPkWLjtvBC5R6zavbQ4rFJdkZlkirqroDdoeHfwBo=;
	b=XwWmUh6g146sj0GLPdIBM3OxRfxzgIGhM95VW7EVxi8NwThrCdmp29fCpE6f0ZQSR65ir1
	L7ufwD1rjQkwj7fYfAeJXfmFz4OLmT4OQku4IAGX0E9WzSrhmX2m9adBIhFhUWUzc3If+Q
	u52o5AjpOZbC0WNeVXSxeiuKL1RksOo=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B6A65AB95;
	Thu, 12 Nov 2020 08:31:25 +0000 (UTC)
Subject: Re: [RFC PATCH v2 05/15] xen/arm: pull in Linux atomics helpers and
 dependencies
To: Ash Wilding <ash.j.wilding@gmail.com>
Cc: julien@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com,
 xen-devel@lists.xenproject.org
References: <20201111215203.80336-1-ash.j.wilding@gmail.com>
 <20201111215203.80336-6-ash.j.wilding@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e5603684-1f4b-83f3-8b80-6c9d045912cc@suse.com>
Date: Thu, 12 Nov 2020 09:31:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111215203.80336-6-ash.j.wilding@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 22:51, Ash Wilding wrote:
> From: Ash Wilding <ash.j.wilding@gmail.com>
> 
> This patch pulls in Linux's atomics helpers for arm32 and arm64, and
> their dependencies, as-is.
> 
> Note that Linux's arm32 atomics helpers use the READ_ONCE() and
> WRITE_ONCE() macros defined in <asm-generic/rwonce.h>, while Linux's
> arm64 atomics helpers use __READ_ONCE() and __WRITE_ONCE().

And our ACCESS_ONCE() can't be used, or be made usable? I don't think
we want a 3rd variant when we're already in the process of discussing
how to fold the two we have right now.

> The only difference is that the __* versions skip checking whether the
> object being accessed is the same size as a native C type (e.g. char,
> int, long, etc.). Given that the arm32 helpers use the macros to access
> an atomic_t's counter member, which is an int, we can skip this check by
> using the __* versions, as the arm64 code does.
> 
> I mention this here because it forms the first part of my justification
> for *not* copying Linux's <linux/compiler_types.h> to Xen; the size
> check described above is performed by __native_word() defined in that
> header.
> 
> The second part of my justification may be more contentious; as you'll
> see in the next patch, I intend to drop the __unqual_scalar_typeof()
> calls in __READ_ONCE() and __WRITE_ONCE(). This is because the pointer
> to the atomic_t's counter member is always a basic (int *) so we don't
> need this, and dropping it means we can avoid having to port Linux's
> <linux/compiler_types.h>.

I don't think weakening the checking is a good idea when the macros
are being made available for general use. If they were renamed to
private flavors for use just in Arm's atomics, that would be a
different matter.

> If people are against this approach, please bear in mind that the
> current version of __unqual_scalar_typeof() in <linux/compiler_types.h>
> was actually the reason for Linux upgrading the minimum required GCC
> version to 4.9 earlier this year, so that they could guarantee C11
> _Generic support [1].
> 
> So if we do want to take Linux's <linux/compiler_types.h> we'll either
> need to:
> 
>    A) bump up the minimum required version of GCC to 4.9 to match
>       that required by Linux; in the Xen README we currently state
>       GCC 4.8 for Arm and GCC 4.1.2_20070115 for x86.

Discussion about bumping the baseline has started, but is progressing
rather slowly. I'm not an Arm maintainer, but it would seem to me
that going from 4.8 to 4.9 just for Arm at this point may be easier
than deciding what the new baseline ought to be for x86.

> --- /dev/null
> +++ b/xen/include/xen/rwonce.h
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Prevent the compiler from merging or refetching reads or writes. The
> + * compiler is also forbidden from reordering successive instances of
> + * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
> + * particular ordering. One way to make the compiler aware of ordering is to
> + * put the two invocations of READ_ONCE or WRITE_ONCE in different C
> + * statements.
> + *
> + * These two macros will also work on aggregate data types like structs or
> + * unions.
> + *
> + * Their two major use cases are: (1) Mediating communication between
> + * process-level code and irq/NMI handlers, all running on the same CPU,
> + * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
> + * mutilate accesses that either do not require ordering or that interact
> + * with an explicit memory barrier or atomic instruction that provides the
> + * required ordering.
> + */
> +#ifndef __ASM_GENERIC_RWONCE_H
> +#define __ASM_GENERIC_RWONCE_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#include <linux/compiler_types.h>
> +#include <linux/kasan-checks.h>
> +#include <linux/kcsan-checks.h>
> +
> +/*
> + * Yes, this permits 64-bit accesses on 32-bit architectures. These will
> + * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
> + * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
> + * (e.g. a virtual address) and a strong prevailing wind.
> + */
> +#define compiletime_assert_rwonce_type(t)					\
> +	compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),	\
> +		"Unsupported access size for {READ,WRITE}_ONCE().")
> +
> +/*
> + * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
> + * atomicity. Note that this may result in tears!
> + */
> +#ifndef __READ_ONCE
> +#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
> +#endif
> +
> +#define READ_ONCE(x)							\
> +({									\
> +	compiletime_assert_rwonce_type(x);				\
> +	__READ_ONCE(x);							\
> +})
> +
> +#define __WRITE_ONCE(x, val)						\
> +do {									\
> +	*(volatile typeof(x) *)&(x) = (val);				\
> +} while (0)
> +
> +#define WRITE_ONCE(x, val)						\
> +do {									\
> +	compiletime_assert_rwonce_type(x);				\
> +	__WRITE_ONCE(x, val);						\
> +} while (0)
> +
> +static __no_sanitize_or_inline
> +unsigned long __read_once_word_nocheck(const void *addr)
> +{
> +	return __READ_ONCE(*(unsigned long *)addr);
> +}
> +
> +/*
> + * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
> + * word from memory atomically but without telling KASAN/KCSAN. This is
> + * usually used by unwinding code when walking the stack of a running process.
> + */
> +#define READ_ONCE_NOCHECK(x)						\
> +({									\
> +	compiletime_assert(sizeof(x) == sizeof(unsigned long),		\
> +		"Unsupported access size for READ_ONCE_NOCHECK().");	\
> +	(typeof(x))__read_once_word_nocheck(&(x));			\
> +})
> +
> +static __no_kasan_or_inline
> +unsigned long read_word_at_a_time(const void *addr)
> +{
> +	kasan_check_read(addr, 1);
> +	return *(unsigned long *)addr;
> +}
> +
> +#endif /* __ASSEMBLY__ */
> +#endif	/* __ASM_GENERIC_RWONCE_H */
> \ No newline at end of file

This wants taking care of in any event - there are multiple instances
in this patch (in fact it looks as if all of the new files have this
issue), and I didn't check the other patches.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:38:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:38:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25545.53362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd87A-0008N2-HW; Thu, 12 Nov 2020 08:37:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25545.53362; Thu, 12 Nov 2020 08:37:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd87A-0008Mv-EV; Thu, 12 Nov 2020 08:37:56 +0000
Received: by outflank-mailman (input) for mailman id 25545;
 Thu, 12 Nov 2020 08:37:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd879-0008Mq-BS
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 08:37:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f739ebd6-2c27-44d8-9f3f-ede26ee4e126;
 Thu, 12 Nov 2020 08:37:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8ABD5AE49;
 Thu, 12 Nov 2020 08:37:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605170273;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mKtBw0yrqCVPrbGQ5riNB4AdQHefK1Z3wzKKVyz7SEY=;
	b=OAJie9wqKmWRzmP9One/YxJWpWmp1Y+NWz5psVvjIjXZlijrz8t/hh2s+ItJAetvKwMBnc
	bOh0P3qAMJFOxXKj1HxftCfCvJ8dXIJhZx7NtZiVboJj8f+vcYWlbEfknOPpHZC0/jKuiv
	NX5r9PSrBpx+FHLN8GsiHOlXE46cT7k=
Subject: Re: [PATCH 02/10] viridian: move IPI hypercall implementation into
 separate function
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-3-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <37655f80-2f72-5069-6de4-0b2c8dce47bf@suse.com>
Date: Thu, 12 Nov 2020 09:37:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111200721.30551-3-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 21:07, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> This patch moves the implementation of HVCALL_SEND_IPI that is currently
> inline in viridian_hypercall() into a new hvcall_ipi() function.
> 
> The new function returns Xen errno values similarly to hvcall_flush(). Hence
> the errno translation code in viridian_hypercall() is generalized, removing
> the need for the local 'status' variable.
> 
> NOTE: The formatting of the 'out' label is also corrected as per CODING_STYLE

How about correcting the adjacent switch() at the same time as well?

>       and the code is adjusted to avoid a register copy-back if 'mode' is
>       neither 8 nor 4.

While you mention it here, isn't this an unrelated change wanting
its own justification?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:45:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:45:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25552.53374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8En-0000yc-9H; Thu, 12 Nov 2020 08:45:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25552.53374; Thu, 12 Nov 2020 08:45:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8En-0000yV-6L; Thu, 12 Nov 2020 08:45:49 +0000
Received: by outflank-mailman (input) for mailman id 25552;
 Thu, 12 Nov 2020 08:45:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd8El-0000yQ-Ta
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 08:45:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b0b4d1e-b984-4dbb-8d56-0307d0361a0c;
 Thu, 12 Nov 2020 08:45:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33C99ABCC;
 Thu, 12 Nov 2020 08:45:46 +0000 (UTC)
X-Inumbo-ID: 3b0b4d1e-b984-4dbb-8d56-0307d0361a0c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605170746;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QRCwYyPANFATSzVlviCvRFHicDrIgqCXcxGzuGrNEu8=;
	b=NfExwE9PLMPcmwWLUNnuNCK3INVfxk6jN63n4M/TRT4CrfcnW2J6zMtLFyOxNIFqW6rYuf
	qI88UtHuXIqo1fgfydi9dcp8mvBrDOAtntRPVwjSKIzctrqQy5vaIsDyksSvIVXBPuRHB8
	lgKCSXVRshYlJpW0dgmlffaNf40Q35U=
Subject: Re: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and
 accessor functions...
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-4-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <01c7747e-70d0-e32b-45a6-afc1688c1741@suse.com>
Date: Thu, 12 Nov 2020 09:45:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111200721.30551-4-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 21:07, Paul Durrant wrote:
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
>      XFREE(d->arch.hvm.viridian);
>  }
>  
> +struct hypercall_vpmask {
> +    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
> +};
> +
> +static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
> +
> +static void vpmask_empty(struct hypercall_vpmask *vpmask)

const?

> +{
> +    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
> +}
> +
> +static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
> +{
> +    __set_bit(vp, vpmask->mask);

Perhaps assert vp in range here and ...

> +}
> +
> +static void vpmask_fill(struct hypercall_vpmask *vpmask)
> +{
> +    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
> +}
> +
> +static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
> +{
> +    return test_bit(vp, vpmask->mask);

... here?

This also wants const again.

> @@ -567,13 +594,29 @@ static int hvcall_flush(union hypercall_input *input,
>       * so err on the safe side.
>       */
>      if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
> -        input_params.vcpu_mask = ~0ul;
> +        vpmask_fill(vpmask);
> +    else
> +    {
> +        unsigned int vp;
> +
> +        vpmask_empty(vpmask);
> +        for (vp = 0; vp < 64; vp++)

Nit: Missing blanks.

> +        {
> +            if ( !input_params.vcpu_mask )
> +                break;
> +
> +            if ( input_params.vcpu_mask & 1 )
> +                vpmask_set(vpmask, vp);
> +
> +            input_params.vcpu_mask >>= 1;
> +        }

Wouldn't bitmap_zero() + bitmap_copy() (in a suitable wrapper)
be quite a bit cheaper a way to achieve the same effect?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:49:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:49:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25558.53386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8IS-0001AF-QE; Thu, 12 Nov 2020 08:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25558.53386; Thu, 12 Nov 2020 08:49:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8IS-0001A8-Mr; Thu, 12 Nov 2020 08:49:36 +0000
Received: by outflank-mailman (input) for mailman id 25558;
 Thu, 12 Nov 2020 08:49:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd8IR-0001A3-Tr
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 08:49:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e39b27df-e42f-449c-8ba9-fcd5ac637c51;
 Thu, 12 Nov 2020 08:49:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B0CEABCC;
 Thu, 12 Nov 2020 08:49:34 +0000 (UTC)
X-Inumbo-ID: e39b27df-e42f-449c-8ba9-fcd5ac637c51
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605170974;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SE7Ym9UkD6rPteXr8NUdV8uDoT5CG6+FuC5A7lvBV6M=;
	b=DmgoFPl2xsXU1re2PBv0URmrx5Mgz3W03SMmM7zngWMGjPowU1AAUheH6N84ClsEn8hjtT
	moLPzf6GKFUvC3GKFlDp56u+wzoavdBAI+Zn2vgO7N/kPX/GBEfAyWY+Ug4zAfoMJ4wAac
	W8RvPICtDgUoIGzdQPPGLoMQSMURLek=
Subject: Re: [PATCH 04/10] viridian: use hypercall_vpmask in hvcall_ipi()
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-5-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0e05cc16-c7b9-34a1-8862-a4e7d581189f@suse.com>
Date: Thu, 12 Nov 2020 09:49:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111200721.30551-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 21:07, Paul Durrant wrote:
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -533,6 +533,21 @@ static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
>      return test_bit(vp, vpmask->mask);
>  }
>  
> +static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
> +{
> +    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
> +}
> +
> +static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)
> +{
> +    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);

Perhaps again assert on vp's range?

> +}
> +
> +#define for_each_vp(vpmask, vp) \
> +	for ((vp) = vpmask_first(vpmask); \
> +	     (vp) < HVM_MAX_VCPUS; \
> +	     (vp) = vpmask_next(vpmask, vp))

Nit again: Missing blanks here and ...

> @@ -669,17 +693,20 @@ static int hvcall_ipi(union hypercall_input *input,
>      if ( vector < 0x10 || vector > 0xff )
>          return -EINVAL;
>  
> -    for_each_vcpu ( currd, v )
> +    vpmask_empty(vpmask);
> +    for (vp = 0; vp < 64; vp++)

... here. I also wonder if the literal 64 couldn't be expressed in
some suitable way.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 08:53:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 08:53:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25565.53399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8Li-000257-9G; Thu, 12 Nov 2020 08:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25565.53399; Thu, 12 Nov 2020 08:52:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8Li-000250-68; Thu, 12 Nov 2020 08:52:58 +0000
Received: by outflank-mailman (input) for mailman id 25565;
 Thu, 12 Nov 2020 08:52:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd8Lh-00024r-Ec
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 08:52:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bfb13eb-c43e-4598-9622-b5447e75571c;
 Thu, 12 Nov 2020 08:52:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 00881AEEF;
 Thu, 12 Nov 2020 08:52:54 +0000 (UTC)
X-Inumbo-ID: 0bfb13eb-c43e-4598-9622-b5447e75571c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605171175;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b1Bm1ngOP1WkNoKcvlHcwd/62nXHP9dqBvCLvVeWuzo=;
	b=EbkgWtodszxUAm6SrXLf/67WERKrmz+4wZhstM5BBskO74mVdsu7O6rjui9Hejqa1b7jMN
	yRHpXo8ngGvwagw9VwuqQ9TzH6jnGyzc+U0WMvAuvuwZBeb+vOq4fKysOkdtmANoZsxsIj
	ZI92zeQiFMCOdZ5lLfZl6XRDvl76sOU=
Subject: Re: [PATCH 05/10] viridian: use softirq batching in hvcall_ipi()
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-6-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <701b30f9-9552-84c0-63fa-0837b7939f4d@suse.com>
Date: Thu, 12 Nov 2020 09:52:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111200721.30551-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 21:07, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
> sending IPIs to a large number of processors. This patch modifies send_ipi()
> (the worker function called by hvcall_ipi()) to also make use of the
> mechanism when there are multiple bits set in the hypercall_vpmask. Hence a `nr`
> field is added to the structure to track the number of set bits.

This is kind of unusual, i.e. we don't do so elsewhere. I take it the
assumption is that using bitmap_weight() is too much overhead?

> @@ -509,6 +510,7 @@ void viridian_domain_deinit(struct domain *d)
>  
>  struct hypercall_vpmask {
>      DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
> +    unsigned int nr;
>  };
>  
>  static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
> @@ -516,21 +518,24 @@ static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
>  static void vpmask_empty(struct hypercall_vpmask *vpmask)
>  {
>      bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
> +    vpmask->nr = 0;
>  }
>  
>  static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
>  {
> -    __set_bit(vp, vpmask->mask);
> +    if ( !test_and_set_bit(vp, vpmask->mask) )
> +        vpmask->nr++;

If test_and_set_bit() is the correct thing to use here (rather
than __test_and_set_bit()), the counter also needs updating
atomically.

>  }
>  
>  static void vpmask_fill(struct hypercall_vpmask *vpmask)
>  {
>      bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
> +    vpmask->nr = HVM_MAX_VCPUS;
>  }
>  
>  static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
>  {
> -    return test_bit(vp, vpmask->mask);
> +    return vpmask->nr && test_bit(vp, vpmask->mask);

Is this in fact an improvement?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:04:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 09:04:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25572.53411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8WT-0003DA-E7; Thu, 12 Nov 2020 09:04:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25572.53411; Thu, 12 Nov 2020 09:04:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8WT-0003D3-AY; Thu, 12 Nov 2020 09:04:05 +0000
Received: by outflank-mailman (input) for mailman id 25572;
 Thu, 12 Nov 2020 09:04:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kd8WR-0003Cy-QQ
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 09:04:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41b6050b-4a17-4d49-bd36-4bfd7e569084;
 Thu, 12 Nov 2020 09:04:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd8WO-0001Q6-2B; Thu, 12 Nov 2020 09:04:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kd8WN-0007Bu-QU; Thu, 12 Nov 2020 09:03:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kd8WN-00066N-Pk; Thu, 12 Nov 2020 09:03:59 +0000
X-Inumbo-ID: 41b6050b-4a17-4d49-bd36-4bfd7e569084
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AQUi34N+Jt7qfDSKD73Jv6KkS5ZViRVsLqPM0BZxB/E=; b=wk14eygpJqQ50DC9qCGWM4PLjP
	+Fl1TPb/XyK0doPfr6zjp4+DwTpC2VQxKZ7+Cbb9sAlDi1CquNZ6r8XhABi8GwCz1WsVlpKHxPAh3
	ZHCX4+dKQGgr+SNXLb3wpswiORZATkk+82l7vSZB0ThdIvkaO1CGFj3KNreT9HBP+di4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156663-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156663: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 09:03:59 +0000

flight 156663 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156663/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-xsm      14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 156443
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156443

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     19 guest-start.2            fail REGR. vs. 156443

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    6 days
Failing since        156524  2020-11-06 14:22:28 Z    5 days    7 attempts
Testing same since   156663  2020-11-11 00:11:12 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3059178798a23ba870ff86ff54d442a07e6651fc
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Oct 6 18:23:27 2020 +0200

    x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
    
    Currently a PV hardware domain can also be given control over the CPU
    frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
    However, since commit 322ec7c89f6 the default behavior has been to
    reject accesses to MSRs that are not explicitly handled, preventing PV
    guests that manage CPU frequency from reading
    MSR_IA32_PERF_{STATUS/CTL}.
    
    Additionally some HVM guests (Windows at least) will attempt to read
    MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
    
      vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
      d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
    
    Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
    handling shared between HVM and PV guests, and add an explicit case
    for reads to MSR_IA32_PERF_{STATUS/CTL}.
    
    Restore the previous behavior and allow PV guests with the required
    permissions to read the contents of the mentioned MSRs. Non-privileged
    guests will get 0 when trying to read those registers, as writes to
    MSR_IA32_PERF_CTL by such guests are already silently dropped.
    
    Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
    Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
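
The read-side behaviour described above can be sketched as follows. This is
an illustrative stand-alone model, not Xen's actual guest_rdmsr() code: the
helper name, the "may control frequency" flag, and the sample MSR value are
all made up for the sketch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_PERF_STATUS 0x198
#define MSR_IA32_PERF_CTL    0x199

/* Made-up sample value standing in for the hardware MSR contents. */
static const uint64_t real_msr_value = 0x0000082400000824ULL;

/*
 * Sketch of the common read path: a guest permitted to manage CPU
 * frequency sees the real value; any other guest reads back 0 rather
 * than taking a #GP fault.  Unhandled MSRs still fall through so the
 * caller can inject #GP.
 */
static bool guest_rdmsr_perf(bool may_control_freq, uint32_t msr,
                             uint64_t *val)
{
    switch ( msr )
    {
    case MSR_IA32_PERF_STATUS:
    case MSR_IA32_PERF_CTL:
        *val = may_control_freq ? real_msr_value : 0;
        return true;   /* handled: no #GP */
    default:
        return false;  /* unhandled: caller injects #GP */
    }
}
```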

commit 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 29 15:03:32 2020 -0400

    libxl: Add suppress-vmdesc to QEMU machine
    
    The device model state saved by QMP xen-save-devices-state doesn't
    include the vmdesc json.  When restoring an HVM, xen-load-devices-state
    always triggers "Expected vmdescription section, but got 0".  This is
    not a problem when the restore comes from a file.  However, when QEMU
    runs in a Linux stubdom and the state comes over a console, EOF is not
    received.  This delays the restore - though it does eventually complete.
    
    Setting suppress-vmdesc skips looking for the vmdesc during restore and
    avoids the wait.
    
    QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
    sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
    used.
    
    QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
    submission" added suppress-vmdesc in QEMU 2.3.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Anthony PERARD <anthony.perard@citrix.com>

commit cd800ce442eeba5bc0857ade70a075367c01c350
Author: Stefano Stabellini <sstabellini@kernel.org>
Date:   Fri Nov 6 16:12:56 2020 +0000

    libxl: set vuart_gfn in libxl__build_hvm
    
    Setting vuart_gfn was missed when switching ARM guests to the PVH build.
    Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
    dom->vuart_gfn.
    
    Without this change, xl console cannot connect to the vuart console (-t
    vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

commit 4196b1523aebe0ed929accba318d5e833d7ff6b3
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 15:05:04 2020 +0100

    tools/libs/light: correct bitmap operations
    
    Libxl bitmap operations for single bits (test, set, reset) take the bit
    number as a signed integer, without checking that the value is not
    negative.
    
    Correct that by adding the appropriate tests.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
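
The kind of guard being added can be sketched like this. The structure and
helper below are illustrative only (simplified types, hypothetical names),
not libxl's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for libxl's bitmap type. */
typedef struct {
    uint32_t size;   /* number of bytes in the map */
    uint8_t *map;
} bitmap_sketch;

/*
 * A bit-test helper that rejects out-of-range bit numbers, including
 * negative ones, instead of indexing the array with them.
 */
static bool bitmap_test_sketch(const bitmap_sketch *bm, int bit)
{
    if ( bit < 0 || (uint32_t)bit >= bm->size * 8 )
        return false;

    return bm->map[bit / 8] & (1u << (bit % 8));
}
```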

commit 8aac8e0ef43a452d0b565d63e4943c275badba3f
Author: Olaf Hering <olaf@aepfle.de>
Date:   Fri Nov 6 14:05:17 2020 +0100

    docs/xl: fix cpupool-cpu-remove
    
    The cpu-pool must be specified.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Wei Liu <wl@xen.org>

commit 2a5f9f6a6932214fda76b9b3c03e024772882d34
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:44 2020 +0100

    PCI: remove unused pcidevs_trylock()
    
    pcidevs_trylock() is used nowhere, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:48:07 2020 +0100

    xen/rwlock: add check_lock() handling to rwlocks
    
    Checking whether a lock is consistently used regarding interrupts on
    or off is beneficial for rwlocks, too.
    
    So add check_lock() calls to rwlock functions. For this purpose make
    check_lock() globally accessible.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c3453a23f7905d24f2404787543e26ec7d02301c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Nov 6 10:47:09 2020 +0100

    xen/locking: harmonize spinlocks and rwlocks regarding preemption
    
    Spinlocks and rwlocks behave differently in the try variants regarding
    preemption: rwlocks switch preemption off before testing the lock,
    while spinlocks do so only after the first check.
    
    Modify _spin_trylock() to disable preemption before testing whether the
    lock is held, in order to be preemption-ready.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
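
The ordering change can be sketched as follows. This is a simplified model:
a C11 atomic_flag stands in for the spinlock word, and the preempt_disable()/
preempt_enable() counters stand in for Xen's actual preemption hooks.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static _Thread_local int preempt_count;
static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

static atomic_flag lock_bit = ATOMIC_FLAG_INIT;

/*
 * Preemption is now disabled *before* the lock is probed, matching the
 * rwlock try variants; on failure it is re-enabled on the way out.
 */
static bool spin_trylock_sketch(void)
{
    preempt_disable();
    if ( atomic_flag_test_and_set(&lock_bit) )
    {
        preempt_enable();  /* lock was held: back out */
        return false;
    }
    return true;           /* lock taken, preemption stays off */
}

static void spin_unlock_sketch(void)
{
    atomic_flag_clear(&lock_bit);
    preempt_enable();
}
```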

commit 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 5 22:31:06 2020 +0000

    xen/arm: traps: Don't panic when receiving an unknown debug trap
    
    Even if debug traps are only meant for debugging purposes, it is quite
    harsh to crash Xen if one of the traps sent by the guest is not handled.
    
    So switch from a panic() to a printk().
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit e006b2e3be72e502b86bd9e1405417abd87bdfed
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:55 2020 +0100

    libxl: fix libacpi dependency
    
    $(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
    so that the file(s) are generated before compilation is attempted. The
    same, however, is also necessary for generated headers, before source
    files including them get compiled.
    
    The dependency specified in libacpi's Makefile, otoh, is entirely
    pointless nowadays - no compilation happens there anymore (except for
    tools involved in building the generated files). Together with it, the
    rule generating acpi.a also can go away.
    
    Reported-by: Olaf Hering <olaf@aepfle.de>
    Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2b8314a3c354d04545700c80ff5a5f86799b79c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Nov 5 16:48:37 2020 +0100

    tools/python: pass more -rpath-link options to ld
    
    With the split of libraries, I've observed a number of warnings from
    (old?) ld.
    
    Instead of duplicating the additions in two places, introduce a setup.py
    make variable holding all the common parts of the invocations.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 09:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25579.53426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8l6-0004NJ-Va; Thu, 12 Nov 2020 09:19:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25579.53426; Thu, 12 Nov 2020 09:19:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8l6-0004NC-SN; Thu, 12 Nov 2020 09:19:12 +0000
Received: by outflank-mailman (input) for mailman id 25579;
 Thu, 12 Nov 2020 09:19:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd8l5-0004N7-Vw
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 09:19:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0ce09ef-d003-417e-a37d-68226d539bb3;
 Thu, 12 Nov 2020 09:19:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61A75AE63;
 Thu, 12 Nov 2020 09:19:09 +0000 (UTC)
X-Inumbo-ID: e0ce09ef-d003-417e-a37d-68226d539bb3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605172749;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=THVK7YJvyekPziTrS0KK+/aNAHFbZzt2KVExjj9El6E=;
	b=CHexR3QHU2gVtD/oRNcF4EQl3K2PeFEdLzxVwMrmjtlIBMmzkyWCtchBvEWf8C9lbmk5UM
	QcfU9PBk+l5NUXyZeuiiGmRDxGM2SvASvBxJnRv7gefWKVQYwPfxoYdHJmW8s0/dYefWUn
	CzVoCOFWEpbk9QfMP1FGx7DXjM79lTU=
Subject: Re: [PATCH 06/10] viridian: add ExProcessorMasks variants of the
 flush hypercalls
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-7-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dd6c4a0d-f611-7b81-8c95-72786891f311@suse.com>
Date: Thu, 12 Nov 2020 10:19:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111200721.30551-7-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 21:07, Paul Durrant wrote:
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -553,6 +553,83 @@ static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp
>  	     (vp) < HVM_MAX_VCPUS; \
>  	     (vp) = vpmask_next(vpmask, vp))
>  
> +struct hypercall_vpset {
> +        struct hv_vpset set;
> +        uint64_t __bank_contents[64];

gcc documents this to be supported as an extension; did you check
clang supports this, too? (I'd also prefer if the leading
underscores could be dropped, but as you know I'm not the maintainer
of this code.)

> +static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
> +{
> +    uint64_t bank_mask;
> +    unsigned int nr = 0;
> +
> +    for ( bank_mask = vpset->valid_bank_mask; bank_mask; bank_mask >>= 1 )
> +        if ( bank_mask & 1 )
> +            nr++;
> +
> +    return nr;
> +}

Simply use hweight64()?
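
For reference, the loop being replaced is a plain population count. Xen's
hweight64() computes exactly that; the self-contained sketch below uses the
GCC/clang builtin __builtin_popcountll() in its place so the equivalence can
be shown standalone:

```c
#include <assert.h>
#include <stdint.h>

/* The open-coded loop from the patch ... */
static unsigned int count_banks_loop(uint64_t valid_bank_mask)
{
    unsigned int nr = 0;

    for ( ; valid_bank_mask; valid_bank_mask >>= 1 )
        if ( valid_bank_mask & 1 )
            nr++;

    return nr;
}

/*
 * ... is equivalent to a popcount.  Xen's hweight64() provides this;
 * the compiler builtin is used here to keep the sketch standalone.
 */
static unsigned int count_banks_popcount(uint64_t valid_bank_mask)
{
    return __builtin_popcountll(valid_bank_mask);
}
```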

> +static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set, size_t size,
> +                                   struct hypercall_vpmask *vpmask)
> +{
> +    switch ( set->format )
> +    {
> +    case HV_GENERIC_SET_ALL:
> +        vpmask_fill(vpmask);
> +        return 0;
> +
> +    case HV_GENERIC_SET_SPARSE_4K:
> +    {
> +        uint64_t bank_mask;
> +        unsigned int bank = 0, vp = 0;
> +
> +        vpmask_empty(vpmask);
> +        for ( bank_mask = set->valid_bank_mask; bank_mask; bank_mask >>= 1 )
> +        {
> +            /* Make sure we won't dereference past the end of the array */
> +            if ( (void *)(set->bank_contents + bank) >=
> +                 (void *)set + size )
> +            {
> +                ASSERT_UNREACHABLE();
> +                return -EINVAL;
> +            }

Doesn't this come too late? I.e. don't you want to check instead
(or also) that you won't overrun the space when copying in from
the guest? And for the specific purpose here, doesn't it come too
early, as you won't access any memory when the low bit of bank_mask
isn't set?
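
One way to restructure along the lines of this comment is to perform the
bound check only when a bank is actually about to be consumed. The sketch
below uses simplified stand-in types and a made-up consumer (summing bank
contents); the copy-in validation Jan mentions would additionally be needed
where the structure is read from the guest:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct hv_vpset (sparse-4k format). */
struct vpset_sketch {
    uint64_t valid_bank_mask;
    uint64_t bank_contents[];   /* one entry per set mask bit */
};

/*
 * Walk only the banks whose mask bit is set, checking the bound right
 * before each access rather than once per mask bit (set or not).
 */
static bool sum_banks(const struct vpset_sketch *set, size_t size,
                      uint64_t *sum)
{
    uint64_t bank_mask;
    unsigned int bank = 0;

    *sum = 0;
    for ( bank_mask = set->valid_bank_mask; bank_mask; bank_mask >>= 1 )
    {
        if ( bank_mask & 1 )
        {
            if ( (const char *)(set->bank_contents + bank + 1) >
                 (const char *)set + size )
                return false;   /* would read past the buffer */
            *sum += set->bank_contents[bank];
            bank++;
        }
    }

    return true;
}
```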

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:22:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 09:22:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25585.53438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8oe-0005IV-Gu; Thu, 12 Nov 2020 09:22:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25585.53438; Thu, 12 Nov 2020 09:22:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8oe-0005IO-D9; Thu, 12 Nov 2020 09:22:52 +0000
Received: by outflank-mailman (input) for mailman id 25585;
 Thu, 12 Nov 2020 09:22:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd8od-0005IJ-3s
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 09:22:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e446899a-2220-4caf-bd5d-f0a3c41e1558;
 Thu, 12 Nov 2020 09:22:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A7082AE66;
 Thu, 12 Nov 2020 09:22:49 +0000 (UTC)
X-Inumbo-ID: e446899a-2220-4caf-bd5d-f0a3c41e1558
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605172969;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wUVe30jhrN16I887Oxd7GAdRutQg7nm4zVYC1ATqy0c=;
	b=lpEXR6k+2UNYSCz3xNDVXUMLSuIylaho811iZ9xj7Hs5ipNyM1Z5E6NveSYgNg1HvKgo2L
	d2Cn6vIH3IcI6WkLQCp9e2O7qzAOv+SS4BSMOKRRUssM5/h8IpG8ZiLIsWhawYIiUFuu/a
	8bFTcUFGqodhE3gQd0sVfYxD+HN2MQw=
Subject: Re: [PATCH 08/10] viridian: log initial invocation of each type of
 hypercall
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-9-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c9a9f41a-871c-65c3-74b6-e5063261210b@suse.com>
Date: Thu, 12 Nov 2020 10:22:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111200721.30551-9-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.11.2020 21:07, Paul Durrant wrote:
> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -59,6 +59,14 @@ struct viridian_domain
>  {
>      union hv_guest_os_id guest_os_id;
>      union hv_vp_assist_page_msr hypercall_gpa;
> +    unsigned long hypercall_flags;
> +
> +#define _HCALL_spin_wait 0
> +#define _HCALL_flush 1
> +#define _HCALL_flush_ex 2
> +#define _HCALL_ipi 3
> +#define _HCALL_ipi_ex 4

I'd suggest this either be unsigned int until more than 32 bits
are needed, or use DECLARE_BITMAP() right away.
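For illustration, a minimal sketch of the second option; the DECLARE_BITMAP()/BITS_TO_LONGS() definitions below are modeled on what Xen/Linux provide, not copied from them, and the helpers are simplified stand-ins for set_bit()/test_bit():

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(bits) (((bits) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

#define HCALL_NR_TYPES 5 /* spin_wait, flush, flush_ex, ipi, ipi_ex */

/*
 * DECLARE_BITMAP() sizes the array from the bit count, so the declaration
 * keeps working unchanged once more than BITS_PER_LONG flags exist.
 */
struct viridian_flags {
    DECLARE_BITMAP(hypercall_flags, HCALL_NR_TYPES);
};

static void set_flag(struct viridian_flags *v, unsigned int bit)
{
    v->hypercall_flags[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

static int test_flag(const struct viridian_flags *v, unsigned int bit)
{
    return !!(v->hypercall_flags[bit / BITS_PER_LONG] &
              (1UL << (bit % BITS_PER_LONG)));
}
```

With only five flags this occupies a single unsigned long, same as the patch's field, but it won't need reworking if the flag count ever grows.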

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:29:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 09:29:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25592.53450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8uq-0005XF-6h; Thu, 12 Nov 2020 09:29:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25592.53450; Thu, 12 Nov 2020 09:29:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8uq-0005X8-3D; Thu, 12 Nov 2020 09:29:16 +0000
Received: by outflank-mailman (input) for mailman id 25592;
 Thu, 12 Nov 2020 09:29:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd8uo-0005X3-Ax
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 09:29:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8577514-23ea-4141-affe-e5f1acd462c2;
 Thu, 12 Nov 2020 09:29:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 74D5AAB95;
 Thu, 12 Nov 2020 09:29:12 +0000 (UTC)
X-Inumbo-ID: a8577514-23ea-4141-affe-e5f1acd462c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605173352;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Diy9BvmhrDv7viINU44D0LNZ6gMKAbC6xnj9rEMO5Mo=;
	b=FT7/AXFRfA/woCQ4yqc9muBOd3pT1bNqhq/gQGouyVG/Aa4FDPXMFym0+xlBjxXXqoyOTS
	Tw9WIpkxRxUJMKXh/P5R6y7j9NQ/+cS6r5inOsvvN9RLS7n62UYgDiqmQWiqXd1BJujBk+
	26uF038fRUZ5waq7A4AhzlPzH1NacRc=
Subject: Re: [PATCH v4 3/3] xen/x86: issue pci_serr error message via NMI
 continuation
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4f660245-8b3b-fe8b-f4f9-66f59597042a@suse.com>
Date: Thu, 12 Nov 2020 10:29:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201109095021.9897-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 10:50, Juergen Gross wrote:
> Instead of using a softirq pci_serr_error() can use NMI continuation
> for issuing an error message.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one minor change to be considered:

> @@ -1808,6 +1816,9 @@ bool nmi_check_continuation(void)
>      if ( nmi_oprofile_send_virq() )
>          ret = true;
>  
> +    if ( pci_serr_nmicont() )
> +        ret = true;
> +
>      return ret;
>  }

As the likely more important part, wouldn't it be better to insert
this ahead of the oprofile check?
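A sketch of the ordering being suggested; the two handlers here are stand-in stubs for illustration, not the real Xen implementations:

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for the real handlers: each reports (and clears)
 * whether its NMI continuation work was pending. */
static bool pci_serr_pending;
static bool oprofile_pending;

static bool pci_serr_nmicont(void)
{
    bool p = pci_serr_pending;
    pci_serr_pending = false;
    return p;
}

static bool nmi_oprofile_send_virq(void)
{
    bool p = oprofile_pending;
    oprofile_pending = false;
    return p;
}

/* Ordering per the review comment: handle the (likely more important)
 * SERR continuation before the oprofile one. */
bool nmi_check_continuation(void)
{
    bool ret = false;

    if ( pci_serr_nmicont() )
        ret = true;

    if ( nmi_oprofile_send_virq() )
        ret = true;

    return ret;
}
```

Note both checks still run unconditionally; only their relative order changes, so no pending work is lost either way.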

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 09:30:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25597.53461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8vr-0006Pu-Gt; Thu, 12 Nov 2020 09:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25597.53461; Thu, 12 Nov 2020 09:30:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd8vr-0006Pm-DQ; Thu, 12 Nov 2020 09:30:19 +0000
Received: by outflank-mailman (input) for mailman id 25597;
 Thu, 12 Nov 2020 09:30:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OBBx=ES=amazon.co.uk=prvs=578ce244f=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kd8vp-0006Pg-JO
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 09:30:17 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c1671864-d7ea-44e8-bd2f-c92a481e3590;
 Thu, 12 Nov 2020 09:30:17 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-55156cd4.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 12 Nov 2020 09:30:05 +0000
Received: from EX13D32EUC002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-55156cd4.us-west-2.amazon.com (Postfix) with ESMTPS
 id 11429A1EB6; Thu, 12 Nov 2020 09:30:05 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 12 Nov 2020 09:30:03 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Thu, 12 Nov 2020 09:30:03 +0000
X-Inumbo-ID: c1671864-d7ea-44e8-bd2f-c92a481e3590
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1605173417; x=1636709417;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=23Oe8AETpKPh3SVX1ugGRGv+HDiKQYeXhI5QXpGkVVw=;
  b=OVk9mfNInHFt1vkOgQxkCaIaqNjraHs9RMTmmCBP6dL6Wn5ve3nwGQnf
   Tak9Y8WBkCtJjv2ISFme4cIfinhpWbV/zbqDq7S1+efqGchgy/DUfZeY3
   R+/O03C0BxJj1Tr0IyOl0oaUo+zVHOsH/HU5XWAO+eEBRWwVD7ZMUdkZy
   Q=;
X-IronPort-AV: E=Sophos;i="5.77,471,1596499200"; 
   d="scan'208";a="63736465"
Subject: RE: [PATCH 02/10] viridian: move IPI hypercall implementation into separate
 function
Thread-Topic: [PATCH 02/10] viridian: move IPI hypercall implementation into separate
 function
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQHWuM8ka4kmeZEn5kq2cFxSC3hucanEOquw
Date: Thu, 12 Nov 2020 09:30:03 +0000
Message-ID: <d6efef7bcf4d4c40bf87071fb26096bf@EX13D32EUC003.ant.amazon.com>
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-3-paul@xen.org>
 <37655f80-2f72-5069-6de4-0b2c8dce47bf@suse.com>
In-Reply-To: <37655f80-2f72-5069-6de4-0b2c8dce47bf@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 08:38
> To: Paul Durrant <paul@xen.org>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH 02/10] viridian: move IPI hypercall implementation into separate
> function
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 11.11.2020 21:07, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > This patch moves the implementation of HVCALL_SEND_IPI that is currently
> > inline in viridian_hypercall() into a new hvcall_ipi() function.
> >
> > The new function returns Xen errno values similarly to hvcall_flush(). Hence
> > the errno translation code in viridial_hypercall() is generalized, removing
> > the need for the local 'status' variable.
> >
> > NOTE: The formatting of the 'out' label also corrected as per CODING_STYLE
> 
> How about correcting the adjacent switch() at the same time as well?
> 

Sure.

> >       and the code is adjusted to avoid a register copy-back if 'mode' is
> >       neither 8 nor 4.
> 
> While you mention it here, isn't this an unrelated change wanting
> its own justification?
> 

It was such small mod that I folded it but maybe it would be best to break it out into a separate patch and also do the format adjustment there.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 09:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25607.53477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd95W-0007UM-G9; Thu, 12 Nov 2020 09:40:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25607.53477; Thu, 12 Nov 2020 09:40:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd95W-0007UF-D4; Thu, 12 Nov 2020 09:40:18 +0000
Received: by outflank-mailman (input) for mailman id 25607;
 Thu, 12 Nov 2020 09:40:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kd95U-0007UA-NN
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 09:40:17 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3764a262-55cf-44f6-b19c-6724659a8f5d;
 Thu, 12 Nov 2020 09:40:13 +0000 (UTC)
X-Inumbo-ID: 3764a262-55cf-44f6-b19c-6724659a8f5d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605174012;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=TV+J106vbCP05Jfh2VTPOPChPRP/wm8xStzRmUyYG1E=;
  b=eR3fIeuPDmXFp0ssQVmlQDgdspJ/WsL1bDNzLXSkWD10aPWLSOeQHeU6
   E3WdYNc10gXIiEFzZcGj1X8YkfgyG7dySu/XeaCX9inS+A7mpQ4yYpWmT
   EW+On0veIpf+flOmOfrlQ2JAJkFH7q6Nlv+eJZnK5Q/tzNP1qOKPO+UB5
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: aqbbyYh7qYje18++vylt0q0OgitcAyjLl/DS9q10P4hrjqDOuxrn5LPrNchc9yXEEGf5I5+JCC
 KNhco36FVI0yMy/ofxIsMzsNVF499X2yTuZ6xVXJiRC3teQ++H222sS4iesiDSycL6JbICxlOM
 5+By6Gr4/GTjzECIHvda3mHcjO3rjV1ZWsPCSI568spHvHqLsyfytPM16tNeDoJlXsCJOQpJ7d
 CKxO1FtpKIOSU9sTmhClNI2t4/9chJyG/P08d+cqu8uG0vIR7CkNomO8fbKcDT0QmA9d9KoxUI
 kdg=
X-SBRS: None
X-MesageID: 31245969
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,471,1596513600"; 
   d="scan'208";a="31245969"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OgoClDx7c6NM+0zlOfvHg6dEMzBnaLvLsMSNyRnWjDB14blZJstk3eVVm63oCQu7UQqZGxc1wa2Y+l6o0co5Uv6Gpb+4u5gqymWriE+tOko95Pz+Ax35WNtDz2WhqLV+KTzCzLhchlvuWznCVbk6BwSgnUr6PBzAaJEKUVJkFjw9itMGFl0UCLQ5xsgIl1d90br0IUV0nMskIum9o1kR1rV+Y50KNMJBy/Rtf+oGruc7wdC+0gkKIF1xlMWF4FPcWLlCoOrgsIKBdmob6KRItXcKXpGDqYbk7Nu8V0qgjmES4NRfgiF3wU3hfh+xM1DJQy6r1Ev/8piZh9Yxyy73jA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EbzmE2kxRSKUNyPPI7O4G6Gu00y4rOiPrH3KBINLnb4=;
 b=W+6e4fq71kLiCgyBkgJbvYDAsucfcSdw30q8F2MH9RdFa4vd1duR92HxWE0ZDOX3VvDyQ1RrSgESqtnsZxa1UGS+KSV1QY38Yx1re4nxEHrNtLBSlLeYYVbCobKMrGxv1JJRRKBNFHmWszNWjy76sd+InMWB543HroaP6LNNjwdSVSwiC3dV8lq21BgeKCyqgtnEiFJkGt6wvpSbfgGr4vTkY9u7DJmBkU2of0pFuvMZO/3DeFbZzRdWoA9VZGQq+PqUHa4eQVTT06oKtKNnBCZOfuJN/4bUt0yNRvACIjSICh3xPewOSQUSdnJdZ5s3TV8QwzFFyXdX1rp5pd2BTQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EbzmE2kxRSKUNyPPI7O4G6Gu00y4rOiPrH3KBINLnb4=;
 b=lWP3RO/BDrt2rI33GIbivfShSfqHZEIwPCyBZPDZMaLAa0Fn79KemnKYXQ0EPtccpIR2d01vdVsYZWQhYv6q9QWfm4yOl56iziHvwBiekJURo0AGPuuEkYGgIuQ/295UVrf7b7rAsxV+TXLhkA6t/hjbzD6kThY1DIyKZhuSGv4=
Date: Thu, 12 Nov 2020 10:40:02 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Message-ID: <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-7-andr2000@gmail.com>
X-ClientProxiedBy: LO2P265CA0254.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::26) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2fdb6a0d-4877-44ab-9f32-08d886eef153
X-MS-TrafficTypeDiagnostic: SN6PR03MB4304:
X-Microsoft-Antispam-PRVS: <SN6PR03MB4304E1A4393965682FE542C58FE70@SN6PR03MB4304.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1360;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mSI2Uo08IyPgPyc/vqsR6u0919MeVfGZSgqOo17O0wxiUveI5gtqapVyd5kSUOZ3Ycc0G6nHX2f4hLRfuXI+26NDUxOdeCm046YLx2G6op8rfnnRGS7EmjUnzMaqPxYm8tJ1h5ojoqBkv7geyS1/v1b0r3i++y/G0K1IxTU8eSB70b+xfYFtC4tqogvF5DgPAt6TIHJ/NS2Bf76qJjant4LUCJkXkc2t4zRNDsnQ0rqRdyYJDPrtmdc4Wp+6ah/mKVHSF34NUq/rqKmXf6vbBU9wMCTbZATsoEE2qHON1CMCB3FHbFdwaa3KsBA3TUuk
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(396003)(346002)(366004)(136003)(376002)(33716001)(16526019)(956004)(6496006)(7416002)(316002)(186003)(8936002)(478600001)(83380400001)(5660300002)(6916009)(6486002)(4326008)(86362001)(66946007)(66556008)(85182001)(66476007)(1076003)(6666004)(2906002)(8676002)(26005)(9686003)(30864003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: BWd6OTlN9v37JzwrvGy1tGS24l/0Zd9EtREPUL/okrwOWtzNhhCkKTJdAO4iN4xxjgqKV4Qd3pEn9PF9qeWhxwMfYN4Gc2mTAQZX7N02jNhQB573JQlYY50lpuRG7g+i4E5hasatyYqfJ20uOLHDQUP8ZYRtlJzcBB3GTXvfKZGTqQi/R7xW//c0ZT0cyisrKE3Pw1NZbDW/eJhsZZo/oTG/aF6nxXUSw+zXwqvCpPRtkPrgsT6P7WpoQtNckesHXs+l8dhL3p4Y/KaV9hM91QYEzl3DcntAylYGsZY/GmG+H6uxmYhQxHOv5ybzlAVEkcudhwhk1wIE/PLCQZa9GPnupRrqIOxI42iOw7jrLtiHvJwiTPCc31uAvk4KJrVPr1Fuu/LRTUaWKSZV3D1/ENULw+qvIWDwA6cWWZR4GB7hBF5BkF/ndFmtiG/r4RfkmldOeZtJEjo0NnPoToyaAlACKmh7/mjTCLdEtI1ageThZHj1h90mIXhcwOM7VHxriwXobSKEJGPxc6hj6oFHtY/wIcAc4k3O0sp97p52JL/O9QNnDgJ/QLarfy1QVOT7VPpWK86PUyPeVQD6vZ7S5uEZi6Z762OeJYiYN0/1CmC7F27PzpYqg8llqOpdFaQz7529AxN2+fUjPZOlIMMV9g==
X-MS-Exchange-CrossTenant-Network-Message-Id: 2fdb6a0d-4877-44ab-9f32-08d886eef153
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Nov 2020 09:40:08.5069
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VnmZN+4igRLHEiZZeACw0LyJ5ysr0rK46vfLnpPCYM2maBqU7jvwzO0BfzMnQUhcti0uxaFb3YqVjGn/Ua5x7w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4304
X-OriginatorOrg: citrix.com

On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> At the moment there is an identity mapping between how a guest sees its
> BARs and how they are programmed into guest domain's p2m. This is not
> going to work as guest domains have their own view on the BARs.
> Extend existing vPCI BAR handling to allow every domain to have its own
> view of the BARs: only hardware domain sees physical memory addresses in
> this case and for the rest those are emulated, including logic required
> for the guests to detect memory sizes and properties.
> 
> While emulating BAR access for the guests create a link between the
> virtual BAR address and physical one: use full memory address while
> creating range sets used to map/unmap corresponding address spaces and
> exploit the fact that PCI BAR value doesn't use 8 lower bits of the

I think you mean the low 4 bits rather than the low 8 bits?
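The low-bit layout of a memory BAR is what makes 4 the right number: bits 0-3 carry type/prefetch flags rather than address bits, which is why Xen and Linux define PCI_BASE_ADDRESS_MEM_MASK as ~0x0fUL. A small sketch (the stash/unstash helpers are illustrative, not code from the patch):

```c
#include <assert.h>

/* Low bits of a PCI memory BAR per the PCI spec; they are flags,
 * not address bits. */
#define PCI_BASE_ADDRESS_SPACE_IO     0x01UL /* bit 0: 0 = memory, 1 = I/O */
#define PCI_BASE_ADDRESS_MEM_TYPE_64  0x04UL /* bits 2:1: BAR width */
#define PCI_BASE_ADDRESS_MEM_PREFETCH 0x08UL /* bit 3: prefetchable */
#define PCI_BASE_ADDRESS_MEM_MASK     (~0x0fUL)

/* Stash a BAR index in the 4 reserved bits: idx must be 0..15, which
 * covers the 6 possible BARs plus the ROM BAR. */
static unsigned long stash_bar_idx(unsigned long addr, unsigned int idx)
{
    return (addr & PCI_BASE_ADDRESS_MEM_MASK) | idx;
}

static unsigned int unstash_bar_idx(unsigned long addr)
{
    return addr & ~PCI_BASE_ADDRESS_MEM_MASK;
}
```

So only 4 bits, not 8, are available for this trick; an index wider than 15 would corrupt address bits.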

> memory address. Use those bits to pass physical BAR's index, so we can
> build/remove proper p2m mappings.

I find this quite hard to review, given it's a fairly big and
complicated patch. Do you think you could split it into smaller chunks?

Maybe you could split it into smaller patches that add bits towards the
end goal but still keep the identity mappings?

I've tried to do some review below, but I would appreciate it if you
could split this.

> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>  xen/drivers/vpci/header.c | 276 ++++++++++++++++++++++++++++++++++----
>  xen/drivers/vpci/vpci.c   |   1 +
>  xen/include/xen/vpci.h    |  24 ++--
>  3 files changed, 265 insertions(+), 36 deletions(-)
> 
> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> index f74f728884c0..7dc7c70e24f2 100644
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -31,14 +31,87 @@
>  struct map_data {
>      struct domain *d;
>      bool map;
> +    struct pci_dev *pdev;

If the field is required please place it after the domain one.

>  };
>  
> +static struct vpci_header *get_vpci_header(struct domain *d,
> +                                           const struct pci_dev *pdev);
> +
> +static struct vpci_header *get_hwdom_vpci_header(const struct pci_dev *pdev)
> +{
> +    if ( unlikely(list_empty(&pdev->vpci->headers)) )
> +        return get_vpci_header(hardware_domain, pdev);

I'm not sure I understand why you need a list here: each device can
only be owned by a single guest, and thus there shouldn't be multiple
views of the BARs (or the header).

> +
> +    /* hwdom's header is always the very first entry. */
> +    return list_first_entry(&pdev->vpci->headers, struct vpci_header, node);
> +}
> +
> +static struct vpci_header *get_vpci_header(struct domain *d,
> +                                           const struct pci_dev *pdev)
> +{
> +    struct list_head *prev;
> +    struct vpci_header *header;
> +    struct vpci *vpci = pdev->vpci;
> +
> +    list_for_each( prev, &vpci->headers )
> +    {
> +        struct vpci_header *this = list_entry(prev, struct vpci_header, node);
> +
> +        if ( this->domain_id == d->domain_id )
> +            return this;
> +    }
> +    printk(XENLOG_DEBUG "--------------------------------------" \
> +           "Adding new vPCI BAR headers for domain %d: " PRI_pci" \n",
> +           d->domain_id, pdev->sbdf.seg, pdev->sbdf.bus,
> +           pdev->sbdf.dev, pdev->sbdf.fn);
> +    header = xzalloc(struct vpci_header);
> +    if ( !header )
> +    {
> +        printk(XENLOG_ERR
> +               "Failed to add new vPCI BAR headers for domain %d: " PRI_pci" \n",
> +               d->domain_id, pdev->sbdf.seg, pdev->sbdf.bus,
> +               pdev->sbdf.dev, pdev->sbdf.fn);
> +        return NULL;
> +    }
> +
> +    if ( !is_hardware_domain(d) )
> +    {
> +        struct vpci_header *hwdom_header = get_hwdom_vpci_header(pdev);
> +
> +        /* Make a copy of the hwdom's BARs as the initial state for vBARs. */
> +        memcpy(header, hwdom_header, sizeof(*header));
> +    }
> +
> +    header->domain_id = d->domain_id;
> +    list_add_tail(&header->node, &vpci->headers);

Same here, I think you want a single header, and then some fields
would be read-only for domUs (like the position of the BARs on the
physmap).

> +    return header;
> +}
> +
> +static struct vpci_bar *get_vpci_bar(struct domain *d,
> +                                     const struct pci_dev *pdev,
> +                                     int bar_idx)

unsigned

> +{
> +    struct vpci_header *vheader;
> +
> +    vheader = get_vpci_header(d, pdev);
> +    if ( !vheader )
> +        return NULL;
> +
> +    return &vheader->bars[bar_idx];
> +}
> +
>  static int map_range(unsigned long s, unsigned long e, void *data,
>                       unsigned long *c)
>  {
>      const struct map_data *map = data;
> -    int rc;
> -
> +    unsigned long mfn;
> +    int rc, bar_idx;
> +    struct vpci_header *header = get_hwdom_vpci_header(map->pdev);
> +
> +    bar_idx = s & ~PCI_BASE_ADDRESS_MEM_MASK;

I'm not sure it's fine to stash the BAR index in the low bits of the
address, what about a device having concatenated BARs?

The rangeset would normally join them into a single range, and then
you won't be able to notice whether a region in the rangeset belongs
to one BAR or another.

IMO it might be easier to just have a rangeset for each BAR and
structure the pending work as a linked list of BARs, that will contain
the physical addresses of each BAR (the real phymap one and the guest
physmap view) plus the rangeset of memory regions to map.
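An illustrative sketch of that alternative layout; all names and fields are hypothetical, with stub types standing in for Xen's rangeset and list_head:

```c
#include <assert.h>
#include <stddef.h>

struct rangeset;                        /* opaque stand-in for Xen's rangeset */
struct list_head { struct list_head *next, *prev; };

/* One pending-work record per BAR: both physmap positions plus the
 * regions of that BAR still left to (un)map. Keeping the two addresses
 * here means nothing needs to be stashed in the rangeset entries, and
 * adjacent BARs can no longer be merged into one ambiguous range. */
struct bar_map_work {
    unsigned long hwdom_addr;  /* where the BAR sits in the host physmap */
    unsigned long guest_addr;  /* where the guest sees it */
    struct rangeset *mem;      /* regions of this BAR left to process */
    struct list_head node;     /* linked into the vCPU's pending list */
};
```

The deferred-mapping loop would then walk the list and consume each BAR's private rangeset in turn.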

> +    s = PFN_DOWN(s);
> +    e = PFN_DOWN(e);

Changing the rangeset to store memory addresses instead of frames
could for example be split into a separate patch.

I think you are doing the calculation of the end pfn wrong here; you
should use PFN_UP instead in case the address is not aligned.
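For reference, the two macros differ exactly on unaligned addresses: PFN_DOWN() truncates to the containing frame, PFN_UP() rounds up, so PFN_DOWN() of an unaligned end silently drops the trailing partial page. A minimal sketch with the usual definitions:

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Truncates: frame containing the address. */
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
/* Rounds up: first frame at or above the address. */
#define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
```

For an address like 0x1234 the two differ by one frame; they agree only when the address is page-aligned.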

> +    mfn = _mfn(PFN_DOWN(header->bars[bar_idx].addr));
>      for ( ; ; )
>      {
>          unsigned long size = e - s + 1;
> @@ -52,11 +125,15 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>           * - {un}map_mmio_regions doesn't support preemption.
>           */
>  
> -        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
> -                      : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
> +        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, mfn)
> +                      : unmap_mmio_regions(map->d, _gfn(s), size, mfn);
>          if ( rc == 0 )
>          {
> -            *c += size;
> +            /*
> +             * Range set is not expressed in frame numbers and the size
> +             * is the number of frames, so update accordingly.
> +             */
> +            *c += size << PAGE_SHIFT;
>              break;
>          }
>          if ( rc < 0 )
> @@ -67,8 +144,9 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>              break;
>          }
>          ASSERT(rc < size);
> -        *c += rc;
> +        *c += rc << PAGE_SHIFT;
>          s += rc;
> +        mfn += rc;
>          if ( general_preempt_check() )
>                  return -ERESTART;
>      }
> @@ -84,7 +162,7 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>  static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
>                              bool rom_only)
>  {
> -    struct vpci_header *header = &pdev->vpci->header;
> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
>      bool map = cmd & PCI_COMMAND_MEMORY;
>      unsigned int i;
>  
> @@ -136,6 +214,7 @@ bool vpci_process_pending(struct vcpu *v)
>          struct map_data data = {
>              .d = v->domain,
>              .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
> +            .pdev = v->vpci.pdev,
>          };
>          int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
>  
> @@ -168,7 +247,8 @@ bool vpci_process_pending(struct vcpu *v)
>  static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
>                              struct rangeset *mem, uint16_t cmd)
>  {
> -    struct map_data data = { .d = d, .map = true };
> +    struct map_data data = { .d = d, .map = true,
> +        .pdev = (struct pci_dev *)pdev };

Dropping the const here is not fine. It either needs to be dropped
from apply_map and further up, or this needs to also be made const.

>      int rc;
>  
>      while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
> @@ -205,7 +285,7 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
>  
>  static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>  {
> -    struct vpci_header *header = &pdev->vpci->header;
> +    struct vpci_header *header;
>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
>  #ifdef CONFIG_X86
> @@ -217,6 +297,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>      if ( !mem )
>          return -ENOMEM;
>  
> +    if ( is_hardware_domain(current->domain) )
> +        header = get_hwdom_vpci_header(pdev);
> +    else
> +        header = get_vpci_header(current->domain, pdev);
> +
>      /*
>       * Create a rangeset that represents the current device BARs memory region
>       * and compare it against all the currently active BAR memory regions. If
> @@ -225,12 +310,15 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>       * First fill the rangeset with all the BARs of this device or with the ROM
>       * BAR only, depending on whether the guest is toggling the memory decode
>       * bit of the command register, or the enable bit of the ROM BAR register.
> +     *
> +     * Use the PCI reserved bits of the BAR to pass BAR's index.
>       */
>      for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>      {
>          const struct vpci_bar *bar = &header->bars[i];
> -        unsigned long start = PFN_DOWN(bar->addr);
> -        unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> +        unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
> +        unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) +
> +            bar->size - 1;

Will this work fine on Arm 32-bit with LPAE? It's my understanding
that in that case unsigned long is 32 bits, but the physical address
space is 44 bits, in which case this won't work.

I think you need to keep the usage of frame numbers here.
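
To illustrate the point: with a 32-bit unsigned long a byte address
above 4GiB is truncated, while its frame number still fits. A
standalone sketch (a PAGE_SHIFT of 12 is assumed):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* A 44-bit physical address, as possible with LPAE. */
static const uint64_t paddr = 0x0FFFFFFFF000ULL;

/* Truncating to 32 bits loses the high bits of the byte address... */
static uint32_t as_ulong32(uint64_t a) { return (uint32_t)a; }

/* ...but the frame number of any 44-bit address fits in 32 bits. */
static uint32_t pfn32(uint64_t a) { return (uint32_t)PFN_DOWN(a); }
```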

>  
>          if ( !MAPPABLE_BAR(bar) ||
>               (rom_only ? bar->type != VPCI_BAR_ROM
> @@ -251,9 +339,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>      /* Remove any MSIX regions if present. */
>      for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
>      {
> -        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
> -        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
> -                                     vmsix_table_size(pdev->vpci, i) - 1);
> +        unsigned long start = (vmsix_table_addr(pdev->vpci, i) &
> +                               PCI_BASE_ADDRESS_MEM_MASK) | i;
> +        unsigned long end = (vmsix_table_addr(pdev->vpci, i) &
> +                             PCI_BASE_ADDRESS_MEM_MASK ) +
> +                             vmsix_table_size(pdev->vpci, i) - 1;
>  
>          rc = rangeset_remove_range(mem, start, end);
>          if ( rc )
> @@ -273,6 +363,8 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>       */
>      for_each_pdev ( pdev->domain, tmp )
>      {
> +        struct vpci_header *header;
> +
>          if ( tmp == pdev )
>          {
>              /*
> @@ -289,11 +381,14 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>                  continue;
>          }
>  
> -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
> +        header = get_vpci_header(tmp->domain, pdev);
> +
> +        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>          {
> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
> -            unsigned long start = PFN_DOWN(bar->addr);
> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> +            const struct vpci_bar *bar = &header->bars[i];
> +            unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
> +            unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK)
> +                + bar->size - 1;
>  
>              if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
>                   /*
> @@ -357,7 +452,7 @@ static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
>          pci_conf_write16(pdev->sbdf, reg, cmd);
>  }
>  
> -static void bar_write(const struct pci_dev *pdev, unsigned int reg,
> +static void bar_write_hwdom(const struct pci_dev *pdev, unsigned int reg,
>                        uint32_t val, void *data)
>  {
>      struct vpci_bar *bar = data;
> @@ -377,14 +472,17 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>      {
>          /* If the value written is the current one avoid printing a warning. */
>          if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
> +        {
> +            struct vpci_header *header = get_hwdom_vpci_header(pdev);
> +
>              gprintk(XENLOG_WARNING,
>                      "%04x:%02x:%02x.%u: ignored BAR %lu write with memory decoding enabled\n",
>                      pdev->seg, pdev->bus, slot, func,
> -                    bar - pdev->vpci->header.bars + hi);
> +                    bar - header->bars + hi);
> +        }
>          return;
>      }
>  
> -
>      /*
>       * Update the cached address, so that when memory decoding is enabled
>       * Xen can map the BAR into the guest p2m.
> @@ -403,10 +501,89 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>      pci_conf_write32(pdev->sbdf, reg, val);
>  }
>  
> +static uint32_t bar_read_hwdom(const struct pci_dev *pdev, unsigned int reg,
> +                               void *data)
> +{
> +    return vpci_hw_read32(pdev, reg, data);
> +}
> +
> +static void bar_write_guest(const struct pci_dev *pdev, unsigned int reg,
> +                            uint32_t val, void *data)
> +{
> +    struct vpci_bar *vbar = data;
> +    bool hi = false;
> +
> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
> +    {
> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
> +        vbar--;
> +        hi = true;
> +    }
> +    vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
> +    vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
> +}
> +
> +static uint32_t bar_read_guest(const struct pci_dev *pdev, unsigned int reg,
> +                               void *data)
> +{
> +    struct vpci_bar *vbar = data;
> +    uint32_t val;
> +    bool hi = false;
> +
> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
> +    {
> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
> +        vbar--;
> +        hi = true;
> +    }
> +
> +    if ( vbar->type == VPCI_BAR_MEM64_LO || vbar->type == VPCI_BAR_MEM64_HI )

I think this would be clearer using a switch statement.
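
For illustration, roughly how the type dispatch could look as a switch
(standalone sketch with simplified types, not the actual vPCI
structures):

```c
#include <assert.h>
#include <stdint.h>

enum bar_type { BAR_MEM32, BAR_MEM64_LO, BAR_MEM64_HI, BAR_IO };

struct vbar { enum bar_type type; uint64_t addr; };

/* Read back one 32-bit register of a (possibly 64-bit) BAR. */
static uint32_t bar_read_sketch(const struct vbar *vbar)
{
    int hi = 0;

    if ( vbar->type == BAR_MEM64_HI )
    {
        vbar--;          /* The low half holds the full 64-bit address. */
        hi = 1;
    }

    switch ( vbar->type )
    {
    case BAR_MEM64_LO:
        return hi ? vbar->addr >> 32 : (uint32_t)vbar->addr;
    case BAR_MEM32:
    default:
        return (uint32_t)vbar->addr;
    }
}

static const struct vbar sample[2] = {
    { BAR_MEM64_LO, 0x123456789ULL },
    { BAR_MEM64_HI, 0 },
};
```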

> +    {
> +        if ( hi )
> +            val = vbar->addr >> 32;
> +        else
> +            val = vbar->addr & 0xffffffff;
> +        if ( val == ~0 )

Strictly speaking I think you are not forced to write 1s to the
reserved 4 bits in the low register (and likewise in the 32-bit case).
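
For context, BAR sizing only requires writing 1s to the writable
address bits; the size mask can be recovered regardless of what lands
in the low 4 type/prefetch bits. A standalone sketch (BAR_MEM_MASK
mirrors PCI_BASE_ADDRESS_MEM_MASK):

```c
#include <assert.h>
#include <stdint.h>

#define BAR_MEM_MASK 0xfffffff0u

/* Value a 32-bit memory BAR of the given power-of-two size reads back
 * after the sizing write, ignoring the low type/prefetch bits. */
static uint32_t sizing_readback(uint32_t size)
{
    return 0xffffffffu & ~(size - 1);
}

/* The size the guest recovers from that readback: mask off the low
 * bits, invert, add one. */
static uint32_t decoded_size(uint32_t readback)
{
    return ~(readback & BAR_MEM_MASK) + 1;
}
```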

> +        {
> +            /* Guests detects BAR's properties and sizes. */
> +            if ( !hi )
> +            {
> +                val = 0xffffffff & ~(vbar->size - 1);
> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
> +            }
> +            else
> +                val = vbar->size >> 32;
> +            vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
> +            vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
> +        }
> +    }
> +    else if ( vbar->type == VPCI_BAR_MEM32 )
> +    {
> +        val = vbar->addr;
> +        if ( val == ~0 )
> +        {
> +            if ( !hi )

There's no way hi can be true at this point AFAICT.

> +            {
> +                val = 0xffffffff & ~(vbar->size - 1);
> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
> +            }
> +        }
> +    }
> +    else
> +    {
> +        val = vbar->addr;
> +    }
> +    return val;
> +}
> +
>  static void rom_write(const struct pci_dev *pdev, unsigned int reg,
>                        uint32_t val, void *data)
>  {
> -    struct vpci_header *header = &pdev->vpci->header;
> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
>      struct vpci_bar *rom = data;
>      uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
>      uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
> @@ -452,15 +629,56 @@ static void rom_write(const struct pci_dev *pdev, unsigned int reg,
>          rom->addr = val & PCI_ROM_ADDRESS_MASK;
>  }

Don't you need to also protect a domU from writing to the ROM BAR
register?

>  
> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
> +                                  void *data)
> +{
> +    struct vpci_bar *vbar, *bar = data;
> +
> +    if ( is_hardware_domain(current->domain) )
> +        return bar_read_hwdom(pdev, reg, data);
> +
> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
> +    if ( !vbar )
> +        return ~0;
> +
> +    return bar_read_guest(pdev, reg, vbar);
> +}
> +
> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
> +                               uint32_t val, void *data)
> +{
> +    struct vpci_bar *bar = data;
> +
> +    if ( is_hardware_domain(current->domain) )
> +        bar_write_hwdom(pdev, reg, val, data);
> +    else
> +    {
> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
> +
> +        if ( !vbar )
> +            return;
> +        bar_write_guest(pdev, reg, val, vbar);
> +    }
> +}

You should assign different handlers based on whether the domain that
has the device assigned is a domU or the hardware domain, rather than
doing the selection here.
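
That is, pick the handler pair once at registration time based on the
owning domain, so the per-access path has no branch. A rough standalone
sketch (function names hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

typedef uint32_t (*read_fn)(void *data);

static uint32_t bar_read_hwdom_sk(void *data) { (void)data; return 1; }
static uint32_t bar_read_guest_sk(void *data) { (void)data; return 2; }

/* Selection happens once, when the register handler is added, rather
 * than on every config-space access. */
static read_fn pick_bar_read(bool hwdom)
{
    return hwdom ? bar_read_hwdom_sk : bar_read_guest_sk;
}
```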

> +
> +/*
> + * FIXME: This is called early while adding vPCI handlers which is done
> + * by and for hwdom.
> + */
>  static int init_bars(struct pci_dev *pdev)
>  {
>      uint16_t cmd;
>      uint64_t addr, size;
>      unsigned int i, num_bars, rom_reg;
> -    struct vpci_header *header = &pdev->vpci->header;
> -    struct vpci_bar *bars = header->bars;
> +    struct vpci_header *header;
> +    struct vpci_bar *bars;
>      int rc;
>  
> +    header = get_hwdom_vpci_header(pdev);
> +    if ( !header )
> +        return -ENOMEM;
> +    bars = header->bars;
> +
>      switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
>      {
>      case PCI_HEADER_TYPE_NORMAL:
> @@ -496,11 +714,12 @@ static int init_bars(struct pci_dev *pdev)
>          uint8_t reg = PCI_BASE_ADDRESS_0 + i * 4;
>          uint32_t val;
>  
> +        bars[i].index = i;
>          if ( i && bars[i - 1].type == VPCI_BAR_MEM64_LO )
>          {
>              bars[i].type = VPCI_BAR_MEM64_HI;
> -            rc = vpci_add_register(pdev->vpci, vpci_hw_read32, bar_write, reg,
> -                                   4, &bars[i]);
> +            rc = vpci_add_register(pdev->vpci, bar_read_dispatch,
> +                                   bar_write_dispatch, reg, 4, &bars[i]);
>              if ( rc )
>              {
>                  pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
> @@ -540,8 +759,8 @@ static int init_bars(struct pci_dev *pdev)
>          bars[i].size = size;
>          bars[i].prefetchable = val & PCI_BASE_ADDRESS_MEM_PREFETCH;
>  
> -        rc = vpci_add_register(pdev->vpci, vpci_hw_read32, bar_write, reg, 4,
> -                               &bars[i]);
> +        rc = vpci_add_register(pdev->vpci, bar_read_dispatch,
> +                               bar_write_dispatch, reg, 4, &bars[i]);
>          if ( rc )
>          {
>              pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
> @@ -558,6 +777,7 @@ static int init_bars(struct pci_dev *pdev)
>          rom->type = VPCI_BAR_ROM;
>          rom->size = size;
>          rom->addr = addr;
> +        rom->index = num_bars;
>          header->rom_enabled = pci_conf_read32(pdev->sbdf, rom_reg) &
>                                PCI_ROM_ADDRESS_ENABLE;
>  
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index a5293521a36a..728029da3e9c 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -69,6 +69,7 @@ int __hwdom_init vpci_add_handlers(struct pci_dev *pdev)
>          return -ENOMEM;
>  
>      INIT_LIST_HEAD(&pdev->vpci->handlers);
> +    INIT_LIST_HEAD(&pdev->vpci->headers);
>      spin_lock_init(&pdev->vpci->lock);
>  
>      for ( i = 0; i < NUM_VPCI_INIT; i++ )
> diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
> index c3501e9ec010..54423bc6556d 100644
> --- a/xen/include/xen/vpci.h
> +++ b/xen/include/xen/vpci.h
> @@ -55,16 +55,14 @@ uint32_t vpci_hw_read32(const struct pci_dev *pdev, unsigned int reg,
>   */
>  bool __must_check vpci_process_pending(struct vcpu *v);
>  
> -struct vpci {
> -    /* List of vPCI handlers for a device. */
> -    struct list_head handlers;
> -    spinlock_t lock;
> -
>  #ifdef __XEN__
> -    /* Hide the rest of the vpci struct from the user-space test harness. */
>      struct vpci_header {
> +    struct list_head node;
> +    /* Domain that owns this view of the BARs. */
> +    domid_t domain_id;

Indentation seems screwed here.

>          /* Information about the PCI BARs of this device. */
>          struct vpci_bar {
> +            int index;

unsigned

>              uint64_t addr;
>              uint64_t size;
>              enum {
> @@ -88,8 +86,18 @@ struct vpci {
>           * is mapped into guest p2m) if there's a ROM BAR on the device.
>           */
>          bool rom_enabled      : 1;
> -        /* FIXME: currently there's no support for SR-IOV. */

Unless you are also adding support for SR-IOV, I would keep the
comment.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 09:56:39 2020
Date: Thu, 12 Nov 2020 10:56:16 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Message-ID: <20201112095616.5ps37pm6p52zsa33@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-9-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-9-andr2000@gmail.com>

On Mon, Nov 09, 2020 at 02:50:29PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> Non-ECAM host bridges in hwdom go directly to PCI config space,
> not through vpci (they use their specific method for accessing PCI
> configuration, e.g. dedicated registers etc.). Thus hwdom's vpci BARs are
> never updated via vPCI MMIO handlers, so implement a dedicated method
> for a PCI host bridge, so it has a chance to update the initial state of
> the device BARs.
> 
> Note, we rely on the fact that control/hardware domain will not update
> physical BAR locations for the given devices.

This is quite ugly.

I'm looking at the commit that implements the hook for R-Car and I'm
having trouble seeing how that's different from the way we would
normally read the BAR addresses.

I think this should likely be paired with the actual implementation of
a hook, or else it's hard to tell whether it's really needed or not.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:01:37 2020
Date: Thu, 12 Nov 2020 11:00:54 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
CC: <Rahul.Singh@arm.com>, <Bertrand.Marquis@arm.com>, <julien.grall@arm.com>,
	<jbeulich@suse.com>, <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>, <iwj@xenproject.org>, <wl@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Subject: Re: [PATCH 09/10] vpci/rcar: Implement vPCI.update_bar_header
 callback
Message-ID: <20201112100054.z46hjf2qzcag6sv7@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-10-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201109125031.26409-10-andr2000@gmail.com>

On Mon, Nov 09, 2020 at 02:50:30PM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> Update hardware domain's BAR header as R-Car Gen3 is a non-ECAM host
> controller, so vPCI MMIO handlers do not work for it in hwdom.
> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>  xen/arch/arm/pci/pci-host-rcar-gen3.c | 69 +++++++++++++++++++++++++++
>  1 file changed, 69 insertions(+)
> 
> diff --git a/xen/arch/arm/pci/pci-host-rcar-gen3.c b/xen/arch/arm/pci/pci-host-rcar-gen3.c
> index ec14bb29a38b..353ac2bfd6e6 100644
> --- a/xen/arch/arm/pci/pci-host-rcar-gen3.c
> +++ b/xen/arch/arm/pci/pci-host-rcar-gen3.c
> @@ -23,6 +23,7 @@
>  #include <xen/pci.h>
>  #include <asm/pci.h>
>  #include <xen/vmap.h>
> +#include <xen/vpci.h>
>  
>  /* Error values that may be returned by PCI functions */
>  #define PCIBIOS_SUCCESSFUL		0x00
> @@ -307,12 +308,80 @@ int pci_rcar_gen3_config_write(struct pci_host_bridge *bridge, uint32_t _sbdf,
>      return ret;
>  }
>  
> +static void pci_rcar_gen3_hwbar_init(const struct pci_dev *pdev,
> +                                     struct vpci_header *header)
> +
> +{
> +    static bool once = true;
> +    struct vpci_bar *bars = header->bars;
> +    unsigned int num_bars;
> +    int i;

unsigned.

> +
> +    /* Run only once. */
> +    if (!once)

Missing spaces.

> +        return;
> +    once = false;
> +
> +    printk("\n\n ------------------------ %s -------------------\n", __func__);
> +    switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
> +    {
> +    case PCI_HEADER_TYPE_NORMAL:
> +        num_bars = PCI_HEADER_NORMAL_NR_BARS;
> +        break;
> +
> +    case PCI_HEADER_TYPE_BRIDGE:
> +        num_bars = PCI_HEADER_BRIDGE_NR_BARS;
> +        break;
> +
> +    default:
> +        return;
> +    }
> +
> +    for ( i = 0; i < num_bars; i++ )
> +    {
> +        uint8_t reg = PCI_BASE_ADDRESS_0 + i * 4;
> +
> +        if ( bars[i].type == VPCI_BAR_MEM64_HI )
> +        {
> +            /*
> +             * Skip hi part of the 64-bit register: it is read
> +             * together with the lower part.
> +             */
> +            continue;
> +        }
> +
> +        if ( bars[i].type == VPCI_BAR_IO )
> +        {
> +            /* Skip IO. */
> +            continue;
> +        }
> +
> +        if ( bars[i].type == VPCI_BAR_MEM64_LO )
> +        {
> +            /* Read both hi and lo parts of the 64-bit BAR. */
> +            bars[i].addr =
> +                (uint64_t)pci_conf_read32(pdev->sbdf, reg + 4) << 32 |
> +                pci_conf_read32(pdev->sbdf, reg);
> +        }
> +        else if ( bars[i].type == VPCI_BAR_MEM32 )
> +        {
> +            bars[i].addr = pci_conf_read32(pdev->sbdf, reg);
> +        }
> +        else
> +        {
> +            /* Expansion ROM? */
> +            continue;
> +        }

Wouldn't this be much simpler as:

bars[i].addr = 0;
switch ( bars[i].type )
{
case VPCI_BAR_MEM64_LO:
    bars[i].addr = (uint64_t)pci_conf_read32(pdev->sbdf, reg + 4) << 32;
    /* fallthrough. */
case VPCI_BAR_MEM32:
    bars[i].addr |= pci_conf_read32(pdev->sbdf, reg);
    break;

default:
    break;
}

I also wonder why you only care about the address but not the size of
the BAR.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:23:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 10:23:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25634.53522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd9lS-0003Jn-LO; Thu, 12 Nov 2020 10:23:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25634.53522; Thu, 12 Nov 2020 10:23:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd9lS-0003Jg-HT; Thu, 12 Nov 2020 10:23:38 +0000
Received: by outflank-mailman (input) for mailman id 25634;
 Thu, 12 Nov 2020 10:23:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kd9lQ-0003Jb-Kz
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:23:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c66a8743-060f-450d-8df7-38d74687e05a;
 Thu, 12 Nov 2020 10:23:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6EC7ABCC;
 Thu, 12 Nov 2020 10:23:33 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kd9lQ-0003Jb-Kz
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:23:36 +0000
X-Inumbo-ID: c66a8743-060f-450d-8df7-38d74687e05a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c66a8743-060f-450d-8df7-38d74687e05a;
	Thu, 12 Nov 2020 10:23:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605176614;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1NH8x4vk4jgxILtPHWfT8lREPrKPQ5W1ZL/eyg12DyI=;
	b=Mw+986j6+xJZOBktWfdFEtGvihH5vN1Z/3QJfNzpweoLYXzNEBx7x1GJtzUUqYjqALerno
	E2xXEi043gbr8Bb9WYTkodIeT54X4Zp+xvKvPX1I74xFhWGO6v5U19x3TeoRNAoIWX2KHQ
	QAnyACWNGyDiuLPUerrKojz2eG2oOVk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E6EC7ABCC;
	Thu, 12 Nov 2020 10:23:33 +0000 (UTC)
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
Date: Thu, 12 Nov 2020 11:23:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 16:48, Jürgen Groß wrote:
> On 11.11.20 16:45, Jan Beulich wrote:
>> On 09.11.2020 10:50, Juergen Gross wrote:
>>> @@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
>>>   		model->free_msr(v);
>>>   }
>>>   
>>> +bool nmi_oprofile_send_virq(void)
>>> +{
>>> +	struct vcpu *v = this_cpu(nmi_cont_vcpu);
>>> +
>>> +	if ( v )
>>> +		send_guest_vcpu_virq(v, VIRQ_XENOPROF);
>>> +
>>> +	this_cpu(nmi_cont_vcpu) = NULL;
>>
>> What if, by the time we make it here, a 2nd NMI has arrived? I
>> agree the next overflow interrupt shouldn't arrive this
>> quickly, but I also think you want to zap the per-CPU variable
>> first here, and ...
> 
> How could that happen? This function is activated only from NMI
> context in case the NMI happened in guest mode. And it will be
> executed with higher priority than any guest, so there is a zero
> chance another NMI in guest mode can happen in between.

While I'll admit I didn't pay attention to the bogus (as far as
HVM is concerned) xen_mode check, my understanding is that the
self-IPI will be delivered once we're back in guest mode, as
that's the first time IRQs would be on again (even event checking
gets deferred by sending a self-IPI). If another NMI was latched
by that time, it would take precedence over the IRQ and would
also be delivered on the guest mode insn that the IRET returned
to.

I agree though that this is benign, as the vCPU wouldn't have
been context switched out yet, i.e. current is still the same
and there'll then merely be two NMI instances folded into one.

However, I still think the ordering would better be changed, to
set a good precedent.

>>>   static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>>   {
>>>   	int xen_mode, ovf;
>>>   
>>>   	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>>   	xen_mode = ring_0(regs);

Unrelated to the patch here (i.e. just as an observation), this
use of ring_0() looks bogus when the NMI occurred in HVM guest
mode.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:35:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 10:35:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25642.53534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd9wp-0004OA-O4; Thu, 12 Nov 2020 10:35:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25642.53534; Thu, 12 Nov 2020 10:35:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kd9wp-0004O3-K0; Thu, 12 Nov 2020 10:35:23 +0000
Received: by outflank-mailman (input) for mailman id 25642;
 Thu, 12 Nov 2020 10:35:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kd9wo-0004Ny-Ch
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:35:22 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9c8b0d01-f763-4355-85ed-8fa784ddcfb3;
 Thu, 12 Nov 2020 10:35:21 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kd9wo-0004Ny-Ch
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:35:22 +0000
X-Inumbo-ID: 9c8b0d01-f763-4355-85ed-8fa784ddcfb3
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9c8b0d01-f763-4355-85ed-8fa784ddcfb3;
	Thu, 12 Nov 2020 10:35:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605177321;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=RqmP5mYY4erSSWm5+MM0WefqxEmGhuyBO3vreM3YtSs=;
  b=gV9VGO4USDR6WqgGwPleBdzD99EXtAfbGczYJ1EUnP0QHOFH+TWvbTtl
   VV85po3BGGVppDj3uLOk8DGaErLuDF0mho/D+BBB3g4u0UqagZNz2q9qT
   ENIboEMSxZEGVMhchmc8p3FKbZ7ZX27xNkKNHovwytwI+m3h1ectIzuBd
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: PfHtFi9M8d6PVVc/pKF0mnJL5DeLgMzvdIMgemYX32ldDMSygW934npku/4OH4wvsNXItT1XwA
 R4X0twJ9LTACiwefnGlvIUG2gLDxiBp2t0bQxKPW/pNiaRxsFamFY6zVWh7Nk7rUyqaWu0q8b+
 KHIFbSJrJox6b94raLgBrR7vuGxGl1nq0yD+tWdqmQCO7G2D/OHfqBPacrRDwzWue02irSN3EB
 vPYihzQ5QAyvcq2m2vqYWGFBr6y2pDZ66URZMiI0EqeOQxo2cv0RubHTAwYj2PCtHrPEpbQjNV
 vC4=
X-SBRS: None
X-MesageID: 31015255
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,471,1596513600"; 
   d="scan'208";a="31015255"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iidulBiQGbH+/hrHNTE4OLWNW4XUszONW0mRE5X+zzl9V+WSWMZupQxATzykrPWuAZDIJfAsB+I4Xv6P+GaqMpi9EIUHx5IK4xXqKXfKRAW+jbUGs/k8+dTN3Kbs7rWnRrvlaSLh2HC55uyaHPxhX/zqQSzOmjFTXjr3CxhSmL0BbZg1YaPdXlxwTI6gEEWcHjvbMDm0RPiFZvrcl7j54PqdQFEDcVF8QVyuT+99aA8O9wADw/7R37ii+/QUmlE6Elqn7gNTJ15ZANIZ2oogF53vRnPGCVGZ2Rai737SdWrg9ybAFGm66yfDh8CUYhfZt9QqPn793i4H+rOFQzoMew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MEEjLEP/zHA/yXqIeWbl2pBjd23k2UPx3qQL4Jl/nVw=;
 b=mh720namB5e++RjelzlqR7rOtG1BX1iL8ws3g5diGHVP/9foPasWkhAFtf7rCIL3d0ymPpXc5g4Zy4X5WP5tEXx6rhTrSf7yqFKv9buUcsMYFqANJpkA8EKeVnW1qe6IQhQk4alGfHVV+qG5FweIGxJRl16L3SJOzcLo5kl8iGEE6qF+v60Kb4fc+oxrDGttPSWffamBJoJTDy9KTYT7l7/QtxECZ+zqhRXPKbZehqz/h+kQf2lkwbxoWpG2PknhrF42yg6mgxU1yi+QIy80QdPXF37PL+HtCWJZNI66b/LqRdWzeJZxGo4N8VSBZyTpiFgl7cMv+hfh+pZ4thtq0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MEEjLEP/zHA/yXqIeWbl2pBjd23k2UPx3qQL4Jl/nVw=;
 b=UJmd6XFekZ3H4EB7VzXlX6MgnzJFBjVc5TKmy7R0FhDlV6EXbq8EbrRrGjN/J60ILaz5V7e+Ai/2EmwRXAwzzJF9z/2tprIjPAVsSErHU/40mNCYFGKEFl2tda0M7Tq2+9t8ve4nSQqgTkBW44yoUBbadhD/otzkiCNYNV7xoBo=
Date: Thu, 12 Nov 2020 11:35:08 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 5/7] x86: guard against straight-line speculation past
 RET
Message-ID: <20201112103508.f4egyghvdhxuyo3t@Air-de-Roger>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
 <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
X-ClientProxiedBy: LO2P265CA0053.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::17) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1d988a6c-29ef-4f77-4301-08d886f6a322
X-MS-TrafficTypeDiagnostic: SN6PR03MB4094:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SN6PR03MB4094F5EC1902D95B5695B50D8FE70@SN6PR03MB4094.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Jdf5tM19LJFGM2P45F2kC67KfY+2dAZ523MV7VzhsFE864OB48SYlOOXhmq4+HRvgnSGDDG7jE+TUOlpkZb7hyGfnTntfUJ33AbKeeh0aztPNYz/wAxAtHY2xL5skV2zIDh7EZ0ZnjLdcuotaNQQh2EKQ2WhyOGaMA43Onwy6LIMhi9sxucEoa7hIFXPjQVpRQFyDJ5XjVnJ+mQxQbmjJ8QXnK4U02SNabOBWBXx4YUAvrBHjYddaAG4xxX6isSGhvqNbG7eJ/lYUrWWjddqzZKiEJ1iL2loufYKeWgAMOQYx6+zBsQKIuQKz/DZrQeT2IG0k3vwd2jWeh0IdKkqiRNgdhQCCddfaJ5hoHM84g41bKnjmExRdvfEXO3hGEG9LPsOc544FNqoOZ9BxEgD4w==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(366004)(39860400002)(136003)(396003)(376002)(956004)(966005)(8936002)(316002)(478600001)(16526019)(8676002)(26005)(4326008)(6666004)(4744005)(54906003)(85182001)(186003)(5660300002)(1076003)(6486002)(66556008)(33716001)(9686003)(86362001)(66946007)(2906002)(66476007)(6496006)(6916009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: pKAfx7GzNzFs5J85jOaimb+9C5B/yXA/b5H/cMfQZPZlcS+Db4q29s7xgDFPbxDtiPXdbS828jaYs+9n9R/gI4vPkQOMEXuMpt8faob80PzpGqe4zD83eWPGpUhqT8s9ZbiqTpdzmh4Aw7IkaD+63wy6ID+J1Te7ioAjcizHRhejwwzAfg/BHCMtriPW3XjlOg8DyIWoD8Oengw6mTBlQrWBseQSwuV2qtbU3gUiIKL2bR4RRNBqdVZkghKfNKw42wsiXszQ30x1jX+dguqP7azipDx8LrPX/ZSq8m1f5p17SLHwNDAC7uM4qYJ9pIW3hPLC+EqpkF9xw44VKkfWLMO16maXFRhPKJFSASe/VyV2dTuHYU5637C9vK58d1lPgQHUwAYa4RzQdE/8GwPhdHllJn1n8JVVDRbuBvAs+SMfl2haOPkBpVKAeDOnrKcnqvNp4PNab9pO5ftMERF5iq17dZwKqxwAjj5SDCaCXiF7plsfjFo4pfEJanB5waGlsfonqz54ikbjZg6NRhIsIPDpvVIH0zLuEMwvzsdHmVOsM6/rKa8i150z+LVwLBOn0HEN01kXq9xzluausTPZoiJ/NLP4O14sTieMSpBIpC4e+t43mG5s+/4s8tdoDBOPt1jLm8ZcZbKu26KweioWEA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d988a6c-29ef-4f77-4301-08d886f6a322
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Nov 2020 10:35:13.2104
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IsT++5rBv+7BWQJA8sZ/Lx4HnYzC5q7wGPbBeNw6JljdtPaDw7h4O+wS1cSQwxVxyHY1SBk7zFg40HIAwiQ0Jw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4094
X-OriginatorOrg: citrix.com

On Fri, Oct 23, 2020 at 10:38:04AM +0200, Jan Beulich wrote:
> Under certain conditions CPUs can speculate into the instruction stream
> past a RET instruction. Guard against this just like 3b7dab93f240
> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> achieve this that differ: A set of macros gets introduced to post-
> process RET insns issued by the compiler (or living in assembly files).
> 
> Unfortunately for clang this requires further features their built-in
> assembler doesn't support: We need to be able to override insn mnemonics
> produced by the compiler (which may be impossible, if internally
> assembly mnemonics never get generated)

FTR I've reported this to LLVM upstream:

https://bugs.llvm.org/show_bug.cgi?id=48159

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:48:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 10:48:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25649.53545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdA9r-0005WC-35; Thu, 12 Nov 2020 10:48:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25649.53545; Thu, 12 Nov 2020 10:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdA9r-0005W5-00; Thu, 12 Nov 2020 10:48:51 +0000
Received: by outflank-mailman (input) for mailman id 25649;
 Thu, 12 Nov 2020 10:48:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdA9p-0005W0-UL
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:48:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f298ee87-ce4d-4428-9a43-4a435f0e3e19;
 Thu, 12 Nov 2020 10:48:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 71CB8AB95;
 Thu, 12 Nov 2020 10:48:47 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kdA9p-0005W0-UL
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:48:49 +0000
X-Inumbo-ID: f298ee87-ce4d-4428-9a43-4a435f0e3e19
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f298ee87-ce4d-4428-9a43-4a435f0e3e19;
	Thu, 12 Nov 2020 10:48:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605178127;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+TeHY1R13WRoyhHo3LRKmu8LfoLlih3gwVHvO8daaCM=;
	b=ct/CIOsVJhrQGqTJfRH3J903tSyFhXryWj2f7Q+4s7K8H4tiVZ7Os99dwtfWmfpAHlCpy2
	YUsEemZM/jDffy8YoQ4TaQ8ZYL7dA4P89QJLl62uHZaHoGxT+RpGh9VcKKekEx53ODeWB4
	WZLtKoXOJlhI6xk5AGexBGR+E5NTAmk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 71CB8AB95;
	Thu, 12 Nov 2020 10:48:47 +0000 (UTC)
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
Message-ID: <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
Date: Thu, 12 Nov 2020 11:48:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="z5UMM1DSH3UQrECgOEAdM8MWgEganrny8"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--z5UMM1DSH3UQrECgOEAdM8MWgEganrny8
Content-Type: multipart/mixed; boundary="og8ozsOwXYiS8ZgFuLDlghulTUxTHDiE9";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
In-Reply-To: <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>

--og8ozsOwXYiS8ZgFuLDlghulTUxTHDiE9
Content-Type: multipart/mixed;
 boundary="------------828759602BE4AD8DB3314E1F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------828759602BE4AD8DB3314E1F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.11.20 11:23, Jan Beulich wrote:
> On 11.11.2020 16:48, Jürgen Groß wrote:
>> On 11.11.20 16:45, Jan Beulich wrote:
>>> On 09.11.2020 10:50, Juergen Gross wrote:
>>>> @@ -83,14 +85,28 @@ void passive_domain_destroy(struct vcpu *v)
>>>>    		model->free_msr(v);
>>>>    }
>>>>
>>>> +bool nmi_oprofile_send_virq(void)
>>>> +{
>>>> +	struct vcpu *v = this_cpu(nmi_cont_vcpu);
>>>> +
>>>> +	if ( v )
>>>> +		send_guest_vcpu_virq(v, VIRQ_XENOPROF);
>>>> +
>>>> +	this_cpu(nmi_cont_vcpu) = NULL;
>>>
>>> What if, by the time we make it here, a 2nd NMI has arrived? I
>>> agree the next overflow interrupt shouldn't arrive this
>>> quickly, but I also think you want to zap the per-CPU variable
>>> first here, and ...
>>
>> How could that happen? This function is activated only from NMI
>> context in case the NMI happened in guest mode. And it will be
>> executed with higher priority than any guest, so there is a zero
>> chance another NMI in guest mode can happen in between.
> 
> While I'll admit I didn't pay attention to the bogus (as far as
> HVM is concerned) xen_mode check, my understanding is that the
> self-IPI will be delivered once we're back in guest mode, as
> that's the first time IRQs would be on again (even event checking
> gets deferred by sending a self-IPI). If another NMI was latched
> by that time, it would take precedence over the IRQ and would
> also be delivered on the guest mode insn that the IRET returned
> to.
> 
> I agree though that this is benign, as the vCPU wouldn't have
> been context switched out yet, i.e. current is still the same
> and there'll then merely be two NMI instances folded into one.

Correct.

> 
> However, I still think the ordering would better be changed, to
> set a good precedent.

Okay, if you want that.

> 
>>>>    static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>>>    {
>>>>    	int xen_mode, ovf;
>>>>
>>>>    	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>>>    	xen_mode = ring_0(regs);
> 
> Unrelated to the patch here (i.e. just as an observation), this
> use of ring_0() looks bogus when the NMI occurred in HVM guest
> mode.

An NMI in an HVM guest due to oprofile would be a VMEXIT with NMI
reason, or just be handled completely inside the guest, right?

I don't see how this test should ever result in xen_mode being
false for an HVM guest.


Juergen

--------------828759602BE4AD8DB3314E1F--

--og8ozsOwXYiS8ZgFuLDlghulTUxTHDiE9--


--z5UMM1DSH3UQrECgOEAdM8MWgEganrny8--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:50:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 10:50:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25655.53558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdABZ-0006Q6-F5; Thu, 12 Nov 2020 10:50:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25655.53558; Thu, 12 Nov 2020 10:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdABZ-0006Pz-C5; Thu, 12 Nov 2020 10:50:37 +0000
Received: by outflank-mailman (input) for mailman id 25655;
 Thu, 12 Nov 2020 10:50:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdABX-0006Pu-Ne
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:50:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63866550-6e08-4d66-a02f-beacc5a13d4d;
 Thu, 12 Nov 2020 10:50:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23835AB95;
 Thu, 12 Nov 2020 10:50:34 +0000 (UTC)
X-Inumbo-ID: 63866550-6e08-4d66-a02f-beacc5a13d4d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605178234;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=X+kjA95DUOwqZc2hxSm2eQhQsLJUZnBEVfZX03j5Yz8=;
	b=r/LrAfxIbQcF0s+9esGknZwrWMIbOCgrFndQFe5LsSAbs3JyeiZoqXWl0D7XgWfj3fscH9
	4sFKjpd2AHQujhJJyKdvFxOy7GNqhpMef4EpgTr5gYipYM3AlDlcl+4Oz4be2bnNPHbSZX
	iraky/g3NSUe2ZQDZ2bz1czikivWaTc=
Subject: Re: [PATCH v4 3/3] xen/x86: issue pci_serr error message via NMI
 continuation
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-4-jgross@suse.com>
 <4f660245-8b3b-fe8b-f4f9-66f59597042a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b104a386-f35f-7d4c-c2ac-430d9777e4b2@suse.com>
Date: Thu, 12 Nov 2020 11:50:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4f660245-8b3b-fe8b-f4f9-66f59597042a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="tLQzYBsXFoMbX0iHNBeVYIg0ewRJaI0Jr"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--tLQzYBsXFoMbX0iHNBeVYIg0ewRJaI0Jr
Content-Type: multipart/mixed; boundary="hTqbB6E6aKjjGCIDD4uGaUTh5QbuUqSIQ";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <b104a386-f35f-7d4c-c2ac-430d9777e4b2@suse.com>
Subject: Re: [PATCH v4 3/3] xen/x86: issue pci_serr error message via NMI
 continuation
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-4-jgross@suse.com>
 <4f660245-8b3b-fe8b-f4f9-66f59597042a@suse.com>
In-Reply-To: <4f660245-8b3b-fe8b-f4f9-66f59597042a@suse.com>

--hTqbB6E6aKjjGCIDD4uGaUTh5QbuUqSIQ
Content-Type: multipart/mixed;
 boundary="------------B3CBF0D70FEDD8629B448A8A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------B3CBF0D70FEDD8629B448A8A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.11.20 10:29, Jan Beulich wrote:
> On 09.11.2020 10:50, Juergen Gross wrote:
>> Instead of using a softirq pci_serr_error() can use NMI continuation
>> for issuing an error message.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one minor change to be considered:
>
>> @@ -1808,6 +1816,9 @@ bool nmi_check_continuation(void)
>>       if ( nmi_oprofile_send_virq() )
>>           ret = true;
>>
>> +    if ( pci_serr_nmicont() )
>> +        ret = true;
>> +
>>       return ret;
>>   }
>
> As the likely more important part, wouldn't it be better to insert
> this ahead of the oprofile check?

Fine with me.
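[Editor's note: the agreed reordering amounts to the following sketch. The stub predicates stand in for the real Xen helpers; only the shape of nmi_check_continuation() follows the quoted hunk, everything else is illustrative.]

```c
#include <stdbool.h>

/* Illustrative stand-ins for the real Xen predicates. */
static bool pci_serr_nmicont(void)       { return true;  }
static bool nmi_oprofile_send_virq(void) { return false; }

/* Sketch of nmi_check_continuation() with the pci_serr check moved
 * ahead of the oprofile check, as suggested in the review. */
static bool nmi_check_continuation(void)
{
    bool ret = false;

    if ( pci_serr_nmicont() )
        ret = true;

    if ( nmi_oprofile_send_virq() )
        ret = true;

    return ret;
}
```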


Juergen

--------------B3CBF0D70FEDD8629B448A8A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------B3CBF0D70FEDD8629B448A8A--

--hTqbB6E6aKjjGCIDD4uGaUTh5QbuUqSIQ--

--tLQzYBsXFoMbX0iHNBeVYIg0ewRJaI0Jr
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+tE3kFAwAAAAAACgkQsN6d1ii/Ey/m
8Qf9G1RWrEw3r+oXFSvY19vKNBJ0bZ3f5yKzAJ+bLrTM+BWYOebtHh5NELKRw1RhUX030Z+cFe6b
O/W0FS9oQmpZXcXKWzAJk+YbrrQPTdNoCaBuTQ/QXBbGv+PVXNsjGAEJtG6e3sfJFWkEW5t/EASF
uxXA3fqyM1FtjigrB0+9oFrQIGsrKAp5FBx4FMhpw8j83Ltx1Oakndx9y1pfryZxaC+1rE8NxYH+
y0UXJIBReT4OVbqSEE5woXNBhSprVhaA0FDn55gmF9xLvNY87HQDJ0ePnpZRc/mbnikN3l+8vVGa
pIILuoeRQsABqmoBl2nyCaVaGQXFjRrXCdCuhO+X1g==
=yFfT
-----END PGP SIGNATURE-----

--tLQzYBsXFoMbX0iHNBeVYIg0ewRJaI0Jr--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:52:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 10:52:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25663.53569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdADJ-0006YY-RG; Thu, 12 Nov 2020 10:52:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25663.53569; Thu, 12 Nov 2020 10:52:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdADJ-0006YR-OB; Thu, 12 Nov 2020 10:52:25 +0000
Received: by outflank-mailman (input) for mailman id 25663;
 Thu, 12 Nov 2020 10:52:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kdADH-0006YM-Rt
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:52:23 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 882f9bd5-84ee-487a-bf7e-d8d18ff35b73;
 Thu, 12 Nov 2020 10:52:23 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id h62so5070369wme.3
 for <xen-devel@lists.xenproject.org>; Thu, 12 Nov 2020 02:52:23 -0800 (PST)
Received: from C02ZJ1BNLVDN.emea.arm.com (0547a297.skybroadband.com.
 [5.71.162.151])
 by smtp.gmail.com with ESMTPSA id m22sm6384756wrb.97.2020.11.12.02.52.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 12 Nov 2020 02:52:21 -0800 (PST)
X-Inumbo-ID: 882f9bd5-84ee-487a-bf7e-d8d18ff35b73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Ujz+MgNk7p2PobWoZs75Nsm661znwucKZOQhhTTVSGA=;
        b=ggAew42lUOduzbiiuyepthV3vbIvjyrD6kcy+8O5viZ84B3mfBo0qgoLgljdRGg3CT
         SfLuvuujiP7qvF7sJEX90TScIvIW+/RSIPq6o1sSrl4HISrrg5iF6AUCNzegjDxm7tkd
         sfSp+AU7kJIlYnm1oxmnOg1kHWt3v6Ee8EOzFg3Mb+JuPTxgYaoqKXnskoUZz+A3rQIJ
         OkJBcd6D1AyZNByNojaSIaNc3WR8gc879ZHKty8i9KkRws/ehpm8HnzpxzuzhYGM7Yux
         GxOJvMyxq3RG1HQmGQlM8/pFFy5pF+w/MOEH8FKyO+cx2O+4iKuB7PTNlImkVSjn1wNG
         7Jow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Ujz+MgNk7p2PobWoZs75Nsm661znwucKZOQhhTTVSGA=;
        b=NuernRAYUufERvGVUuV8UvvunSjJGVwO+OJ176lUVaJKxPWxpoW9l24KEt4DuYvH22
         P9AHLGuhPhF2CcWeV2SkX3gEgAFM/apbjfUVuHXSWNndh19eVi37SSAOWfFmrGMYLBPE
         M9hrC0i/v7FpHcBeBbbp9G7sObr2O+Nw/XgQN6aZuAPAt0zD87DVx2pNrjtd2GUoX0u1
         sz97WulHdojJPYpb6b1lD6O6WOlP7RXLFtChBvboHXyjDzXpGWV4pC6O70KghXTQ4kY+
         SGP/SZC49txmlA6V4uk25yuxbGJRSiTf9oQZAFRFmfNNQp9l2LXgCXh+CiPGipZe20r2
         RcBw==
X-Gm-Message-State: AOAM530KDK0o8Bbxvn2WYuPQxMF89p8uqZzaCw+pCBXqHZm9qx9tL2jy
	9lnfGJDo/d4Xal07mVKw5IA=
X-Google-Smtp-Source: ABdhPJzBXxhwnm0NWSmupEa6hYQm/lw3ahPsKfu6ri+3AFz4eVzK1gtzvNCMJ3eGTP3eSXNCJVVzbw==
X-Received: by 2002:a05:600c:288:: with SMTP id 8mr9014103wmk.106.1605178342201;
        Thu, 12 Nov 2020 02:52:22 -0800 (PST)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: jbeulich@suse.com
Cc: ash.j.wilding@gmail.com,
	bertrand.marquis@arm.com,
	julien@xen.org,
	rahul.singh@arm.com,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH v2 05/15] xen/arm: pull in Linux atomics helpers and dependencies
Date: Thu, 12 Nov 2020 10:52:20 +0000
Message-Id: <20201112105220.22799-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <e5603684-1f4b-83f3-8b80-6c9d045912cc@suse.com>
References: <e5603684-1f4b-83f3-8b80-6c9d045912cc@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hey Jan,


>>
>> Note that Linux's arm32 atomics helpers use the READ_ONCE() and
>> WRITE_ONCE() macros defined in <asm-generic/rwonce.h>, while Linux's
>> arm64 atomics helpers use __READ_ONCE() and __WRITE_ONCE().
>
> And our ACCESS_ONCE() can't be used, or be made usable? I don't think
> we want a 3rd variant when we're already in the process of discussing
> how to fold the two ones we have right now.

Many thanks for the pointer; I'm still familiarising myself with Xen's
codebase and wasn't aware of ACCESS_ONCE(). Yes, that's exactly what we
need, which means we can drop Linux's <asm-generic/rwonce.h> completely.
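[Editor's note: for readers unfamiliar with the macro, ACCESS_ONCE() is conventionally built around a volatile cast, roughly as below. This is a sketch of the common pattern, not a verbatim copy of Xen's definition; the helper around it is purely illustrative.]

```c
/* Classic ACCESS_ONCE() pattern: force the compiler to emit exactly
 * one access by going through a volatile-qualified lvalue. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static unsigned int shared_flag;

static unsigned int read_flag_once(void)
{
    /* The compiler must perform a single load here and may not
     * re-read or cache shared_flag across this point. */
    return ACCESS_ONCE(shared_flag);
}
```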


That also means:

>
> I don't think weakening the checking is a good idea when the macros
> are being made available for general use. If they'd be renamed to be
> private flavors for use just in Arm's atomics, this would be a
> different thing.

This problem goes away, as does the need/desire to bump the minimum GCC
version up to 4.9 for xen/arm just to support the usage of C11 _Generic
in Linux's <linux/compiler_types.h>.

That said, agreed: I did think the way I'd done it was a tad suspect,
hence the "possibly contentious" disclaimer :-) I'll keep this in mind
for similar porting work in the future.


>>
>> \ No newline at end of file
>
> This wants taking care of in any event - there are multiple instances
> in this patch (in fact it looks as if all of the new files had this
> issue), and I didn't check others.

Ack, will fix.


Thanks for taking the time to provide feedback!

Cheers,
Ash.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 10:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 10:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25670.53582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAJZ-0006ns-HO; Thu, 12 Nov 2020 10:58:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25670.53582; Thu, 12 Nov 2020 10:58:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAJZ-0006nl-Df; Thu, 12 Nov 2020 10:58:53 +0000
Received: by outflank-mailman (input) for mailman id 25670;
 Thu, 12 Nov 2020 10:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAJY-0006ng-LF
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 10:58:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1b17b4a-365b-49f7-9673-1faeafe396c6;
 Thu, 12 Nov 2020 10:58:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B281AABCC;
 Thu, 12 Nov 2020 10:58:50 +0000 (UTC)
X-Inumbo-ID: e1b17b4a-365b-49f7-9673-1faeafe396c6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605178730;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DlfkP7rECkN0bJWq4bNw6g0KKy/ouqpGPSstB0Hv+CA=;
	b=p0MgpB6j0DbGDwri+i7bDjJUGmt4326j1OBO3zH109x4TEjbDNjWxQUe7vdqRF/LjAmqLe
	RTeRZ/0qGuv+shp4rb5I7s0yN8dwRW/TgPzQSPLu1MIsV7Hap2sTcX8UsRg9xPVMhTqs33
	0xTmi+o0ATCKn0veNKtM7DYK5iYKRrI=
Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <61ea02e0-bdd4-5a0a-dd6f-b22e806e6d1e@suse.com>
Date: Thu, 12 Nov 2020 11:58:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this
> patch makes some preparation to x86/hvm/ioreq.c before moving
> to the common code. This way we will get a verbatim copy for
> a code movement in subsequent patch (arch/x86/hvm/ioreq.c
> will be *just* renamed to common/ioreq).
> 
> This patch does the following:
> 1. Introduce *inline* arch_hvm_ioreq_init(), arch_hvm_ioreq_destroy(),
>    arch_hvm_io_completion(), arch_hvm_destroy_ioreq_server() and
>    hvm_ioreq_server_get_type_addr() to abstract arch specific materials.
> 2  Make hvm_map_mem_type_to_ioreq_server() *inline*. It is not going
>    to be called from the common code.

As already indicated on another sub-thread, I think some of these
are too large to be inline functions. Additionally, considering
their single-use purpose, I don't think they should be placed in
a header consumed by more than the producer and the sole consumer.

> 3. Make get_ioreq_server() global. It is going to be called from
>    a few places.

And with this its name ought to change, to fit the general naming
model of global functions of this subsystem.

> 4. Add IOREQ_STATUS_* #define-s and update candidates for moving.

This, it seems to me, could be a separate patch.

> @@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>  
>      domain_pause(d);
>  
> -    p2m_set_ioreq_server(d, 0, s);
> +    arch_hvm_destroy_ioreq_server(s);

Iirc there are plans to rename hvm_destroy_ioreq_server() in the
course of the generalization. If so, this arch hook would imo
better be named following the new scheme right away.

> @@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>      struct hvm_ioreq_server *s;
>      unsigned int id;
>  
> -    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> +    if ( !arch_hvm_ioreq_destroy(d) )

There's no ioreq being destroyed here, so I think this wants
renaming (and again ideally right away following the planned
new scheme).

> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
> +                                                   ioservid_t id,
> +                                                   uint32_t type,
> +                                                   uint32_t flags)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    if ( type != HVMMEM_ioreq_server )
> +        return -EINVAL;
> +
> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
> +        return -EINVAL;
> +
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    s = get_ioreq_server(d, id);
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto out;
> +
> +    rc = -EPERM;
> +    if ( s->emulator != current->domain )
> +        goto out;
> +
> +    rc = p2m_set_ioreq_server(d, flags, s);
> +
> + out:
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    if ( rc == 0 && flags == 0 )
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);

I realize I may be asking too much, but would it be possible if,
while moving code, you made simple and likely uncontroversial
adjustments like adding const here? (Such adjustments would be
less desirable to make if they increased the size of the patch,
e.g. if you were touching only nearby code.)
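[Editor's note: the kind of mechanical adjustment being asked for looks like this. The cut-down structure and zero-argument p2m_get_hostp2m() are hypothetical simplifications for illustration; only the added const mirrors the review comment.]

```c
/* Hypothetical, simplified stand-ins for the real Xen types. */
struct p2m_domain {
    unsigned int entry_count;
};

static struct p2m_domain host_p2m = { .entry_count = 1 };

static struct p2m_domain *p2m_get_hostp2m(void)
{
    return &host_p2m;
}

static unsigned int query_entry_count(void)
{
    /* const documents that the pointer is only used for reading,
     * which is the uncontroversial adjustment suggested above. */
    const struct p2m_domain *p2m = p2m_get_hostp2m();

    return p2m->entry_count;
}
```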

> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> +    }
> +
> +    return rc;
> +}
> +
> +static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
> +                                                 const ioreq_t *p,
> +                                                 uint8_t *type,
> +                                                 uint64_t *addr)
> +{
> +    uint32_t cf8 = d->arch.hvm.pci_cf8;

Similarly, for example, neither this nor ...

> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> +        return -EINVAL;
> +
> +    if ( p->type == IOREQ_TYPE_PIO &&
> +         (p->addr & ~3) == 0xcfc &&
> +         CF8_ENABLED(cf8) )
> +    {
> +        uint32_t x86_fam;

... this really need to use a fixed width type - unsigned int is
going to be quite fine. But since you're only moving this code,
I guess I'm not going to insist.

> +static inline bool arch_hvm_ioreq_destroy(struct domain *d)
> +{
> +    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> +        return false;
> +
> +    return true;

Any reason this cannot simply be

    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);

?
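[Editor's note: this is the standard collapse of an if/return-bool pattern; with an illustrative stub in place of the real relocate_portio_handler(), the equivalence of the two forms can be checked directly.]

```c
#include <stdbool.h>

/* Illustrative stub: the real function relocates the 0xcf8 port I/O
 * handler and reports whether one was registered. */
static bool relocate_portio_handler_stub(bool registered)
{
    return registered;
}

/* The verbose form from the quoted hunk. */
static bool verbose_form(bool registered)
{
    if ( !relocate_portio_handler_stub(registered) )
        return false;

    return true;
}

/* Jan's suggested simplification: return the result directly. */
static bool simplified_form(bool registered)
{
    return relocate_portio_handler_stub(registered);
}
```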

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:05:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25677.53594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAPt-0007pI-BR; Thu, 12 Nov 2020 11:05:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25677.53594; Thu, 12 Nov 2020 11:05:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAPt-0007pB-80; Thu, 12 Nov 2020 11:05:25 +0000
Received: by outflank-mailman (input) for mailman id 25677;
 Thu, 12 Nov 2020 11:05:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAPs-0007p6-Ab
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:05:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 919ee73a-df10-4f3b-bf6a-e21156d39799;
 Thu, 12 Nov 2020 11:05:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B840ABCC;
 Thu, 12 Nov 2020 11:05:22 +0000 (UTC)
X-Inumbo-ID: 919ee73a-df10-4f3b-bf6a-e21156d39799
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605179122;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LoC3XLDkSXVJXP1Wpbl+ZGHQmW5PS9QAoG46786pMMI=;
	b=GmEI2We6fK40nhkC15Bv60BeeKHJbNT/MnDEP9BOCcGdYbyet1O2XSY0JyPiFCsE2C9kHK
	JEJfw9XxbqotkrJUbrqyAO3npFylAK/OcI6miSV7Tpg6m2jNQA9mwECIKy3+aqcPhzcb18
	iAN3nOX3XXoDG+/kFKAMZyeP+eQ65HA=
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
 <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2fe880fb-43d6-8479-278f-a2a38c5b3a9f@suse.com>
Date: Thu, 12 Nov 2020 12:05:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 11:48, Jürgen Groß wrote:
> On 12.11.20 11:23, Jan Beulich wrote:
>> On 11.11.2020 16:48, Jürgen Groß wrote:
>>> On 11.11.20 16:45, Jan Beulich wrote:
>>>> On 09.11.2020 10:50, Juergen Gross wrote:
>>>>>    static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>>>>    {
>>>>>    	int xen_mode, ovf;
>>>>>    
>>>>>    	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>>>>    	xen_mode = ring_0(regs);
>>
>> Unrelated to the patch here (i.e. just as an observation), this
>> use of ring_0() looks bogus when the NMI occurred in HVM guest
>> mode.
> 
> An NMI in an HVM guest due to oprofile would be a VMEXIT with NMI
> reason, or just be handled completely inside the guest, right?

Yes, and in the former case for VMX it would be handed on to do_nmi(),
with the guest register state. For SVM it would get handled on the
next STGI, i.e. would indeed never surface from HVM guest mode.

> I don't see how this test should ever result in xen_mode being
> false for an HVM guest.

I think, because of hvm_invalidate_regs_fields(), on VMX it would be
consistently true in release builds and consistently false in debug
ones.
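[Editor's note: for context, ring_0() inspects the privilege level recorded in the saved CS selector, along the lines of the sketch below. This is a simplified illustration; the real Xen macro operates on struct cpu_user_regs, and the point above is that for an HVM guest the saved CS may not reflect guest state reliably.]

```c
/* Simplified stand-in for struct cpu_user_regs. */
struct regs_sketch {
    unsigned int cs;  /* saved code segment selector */
};

/* The low two bits of a selector encode its privilege level;
 * ring 0 means those bits are clear. */
static int ring_0_sketch(const struct regs_sketch *r)
{
    return (r->cs & 3) == 0;
}
```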

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:12:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:12:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25684.53606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAWF-0000OW-3N; Thu, 12 Nov 2020 11:11:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25684.53606; Thu, 12 Nov 2020 11:11:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAWF-0000OP-0P; Thu, 12 Nov 2020 11:11:59 +0000
Received: by outflank-mailman (input) for mailman id 25684;
 Thu, 12 Nov 2020 11:11:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAWD-0000OK-7c
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:11:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96784ba8-c6e5-4de1-b085-637cc44b1891;
 Thu, 12 Nov 2020 11:11:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3AF7AAB95;
 Thu, 12 Nov 2020 11:11:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605179515;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RVhyz9+tgsb1IKzvh1NSweZs4qIqQYJPqSFCQwY3nqA=;
	b=WmgFpNHOG11ilsGTXFE50a93DQ4Qvaij2VGFe3CjESUrc86m6NbaeoUUsDAJbxJ8G5H+g/
	uG5HMxvkgbpj1u+2dw6XKioAQUbvMBNDxgY1UFZKP5RJA8P47Y405woJvCcg/hqB5lroZx
	ZXOar975N0V23kfO1RZQOEbIWippdFA=
Subject: Re: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5aa67969-0571-ee1b-bbe1-0b936a85acd2@suse.com>
Date: Thu, 12 Nov 2020 12:11:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this patch
> moves previously prepared x86/hvm/ioreq.c to the common code.
> 
> The common IOREQ feature is supposed to be built with IOREQ_SERVER
> option enabled, which is selected for x86's config HVM for now.
> 
> In order to avoid having a gigantic patch here, the subsequent
> patches will update remaining bits in the common code step by step:
> - Make IOREQ related structs/materials common
> - Drop the "hvm" prefixes and infixes
> - Remove layering violation by moving corresponding fields
>   out of *arch.hvm* or abstracting away accesses to them
> 
> This support is going to be used on Arm to be able to run a device
> emulator outside of the Xen hypervisor.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> ***
> Please note, this patch depends on the following which is
> on review:
> https://patchwork.kernel.org/patch/11816689/
> ***
> 
> Changes RFC -> V1:
>    - was split into three patches:
>      - x86/ioreq: Prepare IOREQ feature for making it common
>      - xen/ioreq: Make x86's IOREQ feature common
>      - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
>    - update MAINTAINERS file
>    - do not use a separate subdir for the IOREQ stuff, move it to:
>      - xen/common/ioreq.c
>      - xen/include/xen/ioreq.h
>    - update x86's files to include xen/ioreq.h
>    - remove unneeded headers in arch/x86/hvm/ioreq.c
>    - re-order the headers alphabetically in common/ioreq.c
>    - update common/ioreq.c according to the newly introduced arch functions:
>      arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
> 
> Changes V1 -> V2:
>    - update patch description
>    - make everything needed in the previous patch to achieve
>      a true rename here
>    - don't include unnecessary headers from asm-x86/hvm/ioreq.h
>      and xen/ioreq.h
>    - use __XEN_IOREQ_H__ instead of __IOREQ_H__
>    - move get_ioreq_server() to common/ioreq.c
> ---
>  MAINTAINERS                     |    8 +-
>  xen/arch/x86/Kconfig            |    1 +
>  xen/arch/x86/hvm/Makefile       |    1 -
>  xen/arch/x86/hvm/ioreq.c        | 1422 ---------------------------------------
>  xen/arch/x86/mm.c               |    2 +-
>  xen/arch/x86/mm/shadow/common.c |    2 +-
>  xen/common/Kconfig              |    3 +
>  xen/common/Makefile             |    1 +
>  xen/common/ioreq.c              | 1422 +++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/ioreq.h |   39 +-
>  xen/include/xen/ioreq.h         |   71 ++
>  11 files changed, 1509 insertions(+), 1463 deletions(-)
>  delete mode 100644 xen/arch/x86/hvm/ioreq.c
>  create mode 100644 xen/common/ioreq.c
>  create mode 100644 xen/include/xen/ioreq.h

Iirc I've previously asked to make sure the diff here gets created with
git's rename detection enabled, so we wouldn't see 1422 lines deleted
and 1422 lines added, _hoping_ they're all the same (or going through
the extra steps needed to compare old and new code), but instead seeing
just the diff between old and new files (which in the ideal case would
then be empty).
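What Jan is asking for can be reproduced on a toy repository; the sketch below (paths are illustrative stand-ins for the real tree) shows how `-M` makes git record the move as a rename rather than a 1422-line deletion plus a 1422-line addition:

```shell
# Demonstrate git rename detection on a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.org
git config user.name demo
mkdir -p xen/arch/x86/hvm
printf 'line one\nline two\n' > xen/arch/x86/hvm/ioreq.c
git add -A && git commit -qm 'add ioreq.c'
mkdir -p xen/common
git mv xen/arch/x86/hvm/ioreq.c xen/common/ioreq.c
git commit -qm 'move ioreq.c to common'
# With -M the patch contains "rename from"/"rename to" lines plus only
# the content delta, instead of the full file body twice:
git format-patch -M -1 --stdout | grep 'rename from'
```

Setting `git config diff.renames true` (or `--find-renames` on the diff/format-patch command line) makes this the default, so reviewers only have to read the actual differences between the old and new files.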

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:21:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25694.53621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAfp-0001TX-2J; Thu, 12 Nov 2020 11:21:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25694.53621; Thu, 12 Nov 2020 11:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAfo-0001TQ-VH; Thu, 12 Nov 2020 11:21:52 +0000
Received: by outflank-mailman (input) for mailman id 25694;
 Thu, 12 Nov 2020 11:21:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAfn-0001TL-1G
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:21:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01339730-7f6e-4006-89dd-8786c823e55b;
 Thu, 12 Nov 2020 11:21:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 439E6AE95;
 Thu, 12 Nov 2020 11:21:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605180109;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0FRUkI2aKuPl0W4Z27gLbGQAPZoWkrly1NUEDWgBoYc=;
	b=Zo5FCnsUzQDElAfW0Mn7K0cwcL8C9VnKMDF2djGFg7KMMGaNLTg1Esqc1vAJgqYO2Z8qHy
	mTqcXuLW7ZLrKA4qOxMNmUo1ntiTugXFbJFW2Ol2+TsWDyUnR4jBjlutmLeGpVdvoNVM7A
	5q2+A02XwhDbEpK2BDa+v+V4L36xmZo=
Subject: Re: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to
 struct domain
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-8-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <00f9046f-5c77-cee5-b201-aa01f880d4e7@suse.com>
Date: Thu, 12 Nov 2020 12:21:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-8-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -77,7 +77,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>      if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>          return -EINVAL;
>  
> -    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> +    spin_lock_recursive(&d->ioreq_server.lock);
>  
>      s = get_ioreq_server(d, id);
>  
> @@ -92,7 +92,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>      rc = p2m_set_ioreq_server(d, flags, s);
>  
>   out:
> -    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> +    spin_unlock_recursive(&d->ioreq_server.lock);
>  
>      if ( rc == 0 && flags == 0 )
>      {


Does this build at this point, when !CONFIG_IOREQ_SERVER? Patch 1
moves the code here without guards, and patch 2, when introducing
the Kconfig symbol, doesn't add guards here. I admit I didn't
check further intermediate patches.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:27:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:27:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25702.53636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAkt-0001gB-NL; Thu, 12 Nov 2020 11:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25702.53636; Thu, 12 Nov 2020 11:27:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAkt-0001g4-KA; Thu, 12 Nov 2020 11:27:07 +0000
Received: by outflank-mailman (input) for mailman id 25702;
 Thu, 12 Nov 2020 11:27:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdAks-0001fz-EZ
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:27:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cca748ae-3dfb-4d3f-b7cb-67c4ad1a6f81;
 Thu, 12 Nov 2020 11:27:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A720FAEAA;
 Thu, 12 Nov 2020 11:27:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605180424;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=I4oUzltFxwA59RzdgNlxU5/IpLu8/XoDGRR8AhR94Yc=;
	b=kkJcAwysLIbGVYoMQpmoIekw6Cg4EtEBWcTSBn/N+QYemem7Z9K0jeQZtE7zQk9QSHH1v7
	4lw78I0/6uhm6wPGB6tCC4wqKTjzrEawluBwCD8MP75iStIeztmVDPivfhPEys1J/5eLxY
	+nRTvSZD07570GUohU68aOlM2YivhVM=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
 <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
 <2fe880fb-43d6-8479-278f-a2a38c5b3a9f@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
Message-ID: <f52adcb9-cff7-3bf9-ab98-881e471b0c9a@suse.com>
Date: Thu, 12 Nov 2020 12:27:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <2fe880fb-43d6-8479-278f-a2a38c5b3a9f@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="tQ1oNRwa0wLCtycf8pkR1Vp9PBclxpKc4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--tQ1oNRwa0wLCtycf8pkR1Vp9PBclxpKc4
Content-Type: multipart/mixed; boundary="Ocmh6EFEaXOMpHEONC1BSinVfnL5gXfYk";
 protected-headers="v1"

--Ocmh6EFEaXOMpHEONC1BSinVfnL5gXfYk
Content-Type: multipart/mixed;
 boundary="------------6F881C63212A999AAC6AE94D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6F881C63212A999AAC6AE94D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.11.20 12:05, Jan Beulich wrote:
> On 12.11.2020 11:48, Jürgen Groß wrote:
>> On 12.11.20 11:23, Jan Beulich wrote:
>>> On 11.11.2020 16:48, Jürgen Groß wrote:
>>>> On 11.11.20 16:45, Jan Beulich wrote:
>>>>> On 09.11.2020 10:50, Juergen Gross wrote:
>>>>>>     static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>>>>>     {
>>>>>>     	int xen_mode, ovf;
>>>>>>     
>>>>>>     	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>>>>>     	xen_mode = ring_0(regs);
>>>
>>> Unrelated to the patch here (i.e. just as an observation), this
>>> use of ring_0() looks bogus when the NMI occurred in HVM guest
>>> mode.
>>
>> An NMI in an HVM guest due to oprofile would be a VMEXIT with NMI
>> reason, or just be handled completely inside the guest, right?
> 
> Yes, and in the former case for VMX it would be handed on to do_nmi(),
> with the guest register state. For SVM it would get handled on the
> next STGI, i.e. would indeed never surface from HVM guest mode.
> 
>> I don't see how this test should ever result in xen_mode being
>> false for an HVM guest.
> 
> I think, because of hvm_invalidate_regs_fields(), on VMX it would be
> consistently true in release builds and consistently false in debug
> ones.

Ah, okay. I searched for do_nmi(), but the vmx code uses the exception
table instead.

So I guess this should be:

xen_mode = !guest_mode(regs);


Juergen

--------------6F881C63212A999AAC6AE94D--

--Ocmh6EFEaXOMpHEONC1BSinVfnL5gXfYk--

--tQ1oNRwa0wLCtycf8pkR1Vp9PBclxpKc4--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:32:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:32:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25709.53648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAqM-0002f0-Ce; Thu, 12 Nov 2020 11:32:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25709.53648; Thu, 12 Nov 2020 11:32:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAqM-0002et-8Y; Thu, 12 Nov 2020 11:32:46 +0000
Received: by outflank-mailman (input) for mailman id 25709;
 Thu, 12 Nov 2020 11:32:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAqK-0002eo-T6
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:32:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea525d32-fe24-4fbe-b9a4-989e21023fea;
 Thu, 12 Nov 2020 11:32:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 496C5AE95;
 Thu, 12 Nov 2020 11:32:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605180763;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+6eocP+7M7OPePQ+4EPQUhWrSB+LV7BUOWte+//Gnzs=;
	b=g5WHe1JwBz5hLH4mf7r2B9V0NrHuqLJ+NmUT1pXP1UB/OIcn9QfKjh97v+kFDpJJEKls7q
	hV4phG0iYfX4ZSj3GE4AsjPJ6PQ9phY6OtU0kb1Y8rKQputFPQzLvf7kUL8PmmEkwdR/90
	x4UcwQSb0JesVVkJQvwTKIX4OAvF6m8=
Subject: Re: [PATCH V2 09/23] xen/dm: Make x86's DM feature common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-10-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3f432fdb-0625-4803-3a16-62200a6264ca@suse.com>
Date: Thu, 12 Nov 2020 12:32:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-10-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this patch
> splits devicemodel support into common and arch specific parts.
> 
> The common DM feature is supposed to be built with IOREQ_SERVER
> option enabled (as well as the IOREQ feature), which is selected
> for x86's config HVM for now.

Did you consider doing it the other way around? It would seem
more natural to have the top level dm-op handling arch-specific
and call into e.g. ioreq_server_dm_op() for otherwise unhandled
ops, just like e.g. do_domctl() calls into iommu_do_domctl()
(indirectly via arch_do_domctl()). I ask because in the long
run I expect the ioreq server sub-ops to only be a small part
of the overall set of dm-ops; already now it's 7 out of 18 if
I got the counting right.

This would then also leave compat_dm_op() in x86 code.

But yes, there are also downsides with this alternative.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:36:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:36:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25716.53660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAu8-0002qh-TD; Thu, 12 Nov 2020 11:36:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25716.53660; Thu, 12 Nov 2020 11:36:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAu8-0002qa-Pz; Thu, 12 Nov 2020 11:36:40 +0000
Received: by outflank-mailman (input) for mailman id 25716;
 Thu, 12 Nov 2020 11:36:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAu7-0002qU-7H
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:36:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7de6145b-45c5-4000-b620-a398842d3708;
 Thu, 12 Nov 2020 11:36:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F2EA1AF24;
 Thu, 12 Nov 2020 11:36:36 +0000 (UTC)
X-Inumbo-ID: 7de6145b-45c5-4000-b620-a398842d3708
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605180997;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vLKbMWaFXCMsz44OKSt0fQ8NlYkTUyvX8Xnk6olrnVQ=;
	b=q5+nxDMv4uYqyb1tHwcG4SMMWzlZQdfA/PQoXLvAejzIHfEeUbj2nnW8lVQGtiI4DVR18P
	FQzfTfai51SA8rnlfzihwp+JJsIRVtilOZpzY0V34Bxz1wAOTy+Z4Zx26VI7gia2WGaEii
	n1k6nbLpQCoQO+47oKYCty/ZRc4V1uU=
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
 <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
 <2fe880fb-43d6-8479-278f-a2a38c5b3a9f@suse.com>
 <f52adcb9-cff7-3bf9-ab98-881e471b0c9a@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <11ead195-f3a9-f9ba-58b6-9ae96650cf07@suse.com>
Date: Thu, 12 Nov 2020 12:36:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <f52adcb9-cff7-3bf9-ab98-881e471b0c9a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 12:27, Jürgen Groß wrote:
> On 12.11.20 12:05, Jan Beulich wrote:
>> On 12.11.2020 11:48, Jürgen Groß wrote:
>>> On 12.11.20 11:23, Jan Beulich wrote:
>>>> On 11.11.2020 16:48, Jürgen Groß wrote:
>>>>> On 11.11.20 16:45, Jan Beulich wrote:
>>>>>> On 09.11.2020 10:50, Juergen Gross wrote:
>>>>>>>     static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>>>>>>     {
>>>>>>>     	int xen_mode, ovf;
>>>>>>>     
>>>>>>>     	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>>>>>>     	xen_mode = ring_0(regs);
>>>>
>>>> Unrelated to the patch here (i.e. just as an observation), this
>>>> use of ring_0() looks bogus when the NMI occurred in HVM guest
>>>> mode.
>>>
>>> An NMI in an HVM guest due to oprofile would be a VMEXIT with NMI
>>> reason, or just be handled completely inside the guest, right?
>>
>> Yes, and in the former case for VMX it would be handed on to do_nmi(),
>> with the guest register state. For SVM it would get handled on the
>> next STGI, i.e. would indeed never surface from HVM guest mode.
>>
>>> I don't see how this test should ever result in xen_mode being
>>> false for an HVM guest.
>>
>> I think, because of hvm_invalidate_regs_fields(), on VMX it would be
>> consistently true in release builds and consistently false in debug
>> ones.
> 
> Ah, okay. I searched for do_nmi(), but the vmx code uses the exception
> table instead.
> 
> So I guess this should be:
> 
> xen_mode = !guest_mode(regs);

Yes, I think so. Just that guest_mode() also has its issues (my patch
"x86: refine guest_mode()" improving it at least somewhat is still
pending Andrew's go / no-go / improvement suggestions), so whether it's
suitable to use here may need some careful evaluation.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:40:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:40:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25722.53672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAy8-0003q0-KT; Thu, 12 Nov 2020 11:40:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25722.53672; Thu, 12 Nov 2020 11:40:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdAy8-0003pt-Fs; Thu, 12 Nov 2020 11:40:48 +0000
Received: by outflank-mailman (input) for mailman id 25722;
 Thu, 12 Nov 2020 11:40:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdAy6-0003pn-HZ
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:40:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c176a9f9-fb50-402c-b4bb-5db4a57142bd;
 Thu, 12 Nov 2020 11:40:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AD2AEAE95;
 Thu, 12 Nov 2020 11:40:44 +0000 (UTC)
X-Inumbo-ID: c176a9f9-fb50-402c-b4bb-5db4a57142bd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605181244;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XuwcXG213xTdGdTodiKLD0mDII90UwD8c4iBHrJURdE=;
	b=FAmAkevN9mi3x9x+eR90XYQbWeY444+lrWDaaPDdd7Ep9h4G0mcPd1lnMdy+j4Qx8A7DPL
	dCsGGsCTzx/lZYj9OkOEsxQeQ6NxQWyAeTV70SY8rNz6AedhuxAQ6SLmHx6I4V1d3k6OFF
	IGNdDfVzRiltD/WUz45GJ9P1dPEeSWU=
Subject: Re: [PATCH V2 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server
 handling common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-11-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1c9bc1aa-b622-fb8e-e5d5-3e27567354c0@suse.com>
Date: Thu, 12 Nov 2020 12:40:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-11-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -30,6 +30,10 @@
>  #include <public/memory.h>
>  #include <xsm/xsm.h>
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +#include <xen/ioreq.h>
> +#endif

Preferably #ifdef-s would not be needed here. If any, they'd better
live in xen/ioreq.h itself then.

> @@ -1045,6 +1049,38 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>      return 0;
>  }
>  
> +#ifdef CONFIG_IOREQ_SERVER

To limit the number of #ifdef-s, could this be moved ...

> +static int acquire_ioreq_server(struct domain *d,
> +                                unsigned int id,
> +                                unsigned long frame,
> +                                unsigned int nr_frames,
> +                                xen_pfn_t mfn_list[])
> +{

... here such that ...

> @@ -1103,9 +1139,14 @@ static int acquire_resource(
>                                   mfn_list);
>          break;
>  
> +#ifdef CONFIG_IOREQ_SERVER
> +    case XENMEM_resource_ioreq_server:
> +        rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
> +                                  mfn_list);
> +        break;
> +#endif

... the ones here then can be dropped?

>      default:

Also you'll want to add a blank line between the new case statement and
the "default:".

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:45:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:45:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25729.53684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdB2g-00041z-6K; Thu, 12 Nov 2020 11:45:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25729.53684; Thu, 12 Nov 2020 11:45:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdB2g-00041s-34; Thu, 12 Nov 2020 11:45:30 +0000
Received: by outflank-mailman (input) for mailman id 25729;
 Thu, 12 Nov 2020 11:45:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdB2f-00041n-1d
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:45:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1746a42-f8f6-4254-9aa5-f0a225258117;
 Thu, 12 Nov 2020 11:45:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 84525ACC5;
 Thu, 12 Nov 2020 11:45:27 +0000 (UTC)
X-Inumbo-ID: e1746a42-f8f6-4254-9aa5-f0a225258117
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605181527;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YFJm6E5HGuad0fA181svJ/18xIJWlypV9xdVgzVR5Lc=;
	b=RY51nvZDc2/5azrSsHSumcFNTniZw3dC1G4TwWzOXFiYJOyfiU53XdHGH3LGYeQ+3bT4BD
	xLYI5qifGiunAJ2eroXSgiIrkKBPqvteKOz/HH/RFtOtZJKPH5rxbDzYIUQQ3cqNhJK6B9
	eGlTYq+k7eIoTLGX7GVVrE20dY6cmfI=
Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: Oleksandr Tyshchenko <olekstysh@gmail.com>, Paul Durrant <paul@xen.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
Date: Thu, 12 Nov 2020 12:45:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch removes "hvm" prefixes and infixes from IOREQ
> related function names in the common code.

At least some of the functions touched here would be nice to be
moved to a more consistent new naming scheme right away, to
avoid having to touch all the same places again. I guess ioreq
server functions would be nice to all start with ioreq_server_
and ioreq functions to all start with ioreq_. E.g. ioreq_send()
and ioreq_server_select().

Paul, thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:48:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25734.53696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdB5o-0004Dk-Ms; Thu, 12 Nov 2020 11:48:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25734.53696; Thu, 12 Nov 2020 11:48:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdB5o-0004Dd-J0; Thu, 12 Nov 2020 11:48:44 +0000
Received: by outflank-mailman (input) for mailman id 25734;
 Thu, 12 Nov 2020 11:48:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdB5n-0004DY-26
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:48:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a629766d-c4c1-41a6-bf44-c45a77d767ad;
 Thu, 12 Nov 2020 11:48:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 77777AC54;
 Thu, 12 Nov 2020 11:48:41 +0000 (UTC)
X-Inumbo-ID: a629766d-c4c1-41a6-bf44-c45a77d767ad
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605181721;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ze8U6I0h6oEwKiD0d7/M+wn/PW2QPVzQJ0CwfPSWjS4=;
	b=lqWeSJRpsjrb0RpFITkgr+loL3yMNWJd23UhlDR2yGcn03Rq7qTHZ/xV9uOag1JqproC+i
	T33Hf+TSDw98/GipOGIuaogQcjOxnat3v/sY/S/thjvmnqY4hNgTtgIfH9212mKkRjqDT8
	eReVKqUAj/F8BuzRA6naeNDHH2wQEdY=
Subject: Re: [PATCH V2 14/23] arm/ioreq: Introduce arch specific bits for
 IOREQ/DM features
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Paul Durrant <paul@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-15-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2c36a380-633b-1e3f-3f99-014bc315e75f@suse.com>
Date: Thu, 12 Nov 2020 12:48:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-15-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -18,6 +18,7 @@
>  
>  #include <xen/ctype.h>
>  #include <xen/domain.h>
> +#include <xen/domain_page.h>
>  #include <xen/event.h>
>  #include <xen/init.h>
>  #include <xen/irq.h>

There preferably wouldn't be a need to touch non-Arm code in this
patch. I suppose the added #include could easily be introduced
e.g. while moving the file?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 11:55:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 11:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25743.53707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBCT-0005Dq-Al; Thu, 12 Nov 2020 11:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25743.53707; Thu, 12 Nov 2020 11:55:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBCT-0005Dj-7t; Thu, 12 Nov 2020 11:55:37 +0000
Received: by outflank-mailman (input) for mailman id 25743;
 Thu, 12 Nov 2020 11:55:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdBCR-0005De-My
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 11:55:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3a41be7-38b3-45f1-85c1-6dce4b1ca034;
 Thu, 12 Nov 2020 11:55:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2CCCAEAA;
 Thu, 12 Nov 2020 11:55:32 +0000 (UTC)
X-Inumbo-ID: b3a41be7-38b3-45f1-85c1-6dce4b1ca034
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605182133;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pdla3Ap0GWpOtbianQP0foKR4T1rYZELNePwWVVMuUE=;
	b=LUhTA6s76UzYJmayPPjgRmRP512fvJcISkJUB9/Sb0uYxbZix/rDZT1VPGAgz9SgcX7WxZ
	NgHP4yo9aADw0rsd2WmaAa9xmxsoVw7TzC1Iy6YqWQ+hWpRH2pxVYdFuT+LAPaSqZoUwZp
	c9HT0nwhCMsMHjMuscSWWiDnyYp2g7c=
Subject: Re: [PATCH V2 20/23] xen/ioreq: Make x86's send_invalidate_req()
 common
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-21-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <86ac139b-4dfd-fc45-ea77-3bd51acaea15@suse.com>
Date: Thu, 12 Nov 2020 12:55:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-21-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/include/asm-x86/hvm/io.h
> +++ b/xen/include/asm-x86/hvm/io.h
> @@ -97,7 +97,6 @@ bool relocate_portio_handler(
>      unsigned int size);
>  
>  void send_timeoffset_req(unsigned long timeoff);
> -void send_invalidate_req(void);
>  bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
>                                    struct npfec);
>  bool handle_pio(uint16_t port, unsigned int size, int dir);
> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> index 0679fef..aad682f 100644
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -126,6 +126,7 @@ struct ioreq_server *select_ioreq_server(struct domain *d,
>  int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
>                 bool buffered);
>  unsigned int broadcast_ioreq(ioreq_t *p, bool buffered);
> +void send_invalidate_ioreq(void);

Again, while renaming this function anyway, could we see about giving
it a suitable and consistent name? Maybe
ioreq_request_mapcache_invalidate() or (to avoid the double "request")
ioreq_signal_mapcache_invalidate()? Maybe even ioreq_server_...().

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:14:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:14:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25754.53720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBUy-0007K0-Ax; Thu, 12 Nov 2020 12:14:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25754.53720; Thu, 12 Nov 2020 12:14:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBUy-0007Jt-7q; Thu, 12 Nov 2020 12:14:44 +0000
Received: by outflank-mailman (input) for mailman id 25754;
 Thu, 12 Nov 2020 12:14:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PCk2=ES=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kdBUx-0007Jo-IM
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:14:43 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1db68c24-a6eb-4e17-9281-ceddc30c2aac;
 Thu, 12 Nov 2020 12:14:42 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id c16so5307332wmd.2
 for <xen-devel@lists.xenproject.org>; Thu, 12 Nov 2020 04:14:42 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-236.amazon.com. [54.240.197.236])
 by smtp.gmail.com with ESMTPSA id p12sm6503968wrw.28.2020.11.12.04.14.40
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 12 Nov 2020 04:14:41 -0800 (PST)
X-Inumbo-ID: 1db68c24-a6eb-4e17-9281-ceddc30c2aac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=8gKvgTUq7aERno8X4hu3TUb6HJz/We0kzcdYXJnLv4A=;
        b=MPlm1XobuUv47pgRn5B5nADpNNRWf+wZKcZQfb0VZmnFa05L9xB7SLVqCC3s7uqHd8
         8GZlmafSIiH0/wX7MmTgl2rsRfVSTKgqLBNnTJKF17kxyVCObmdRSTpr8qcsm+c7lwu2
         uw/3iMI1uxrTasOo+tj50PypYKrmGo+PSbzykCAoFtZcFgNX+mNk/j66zkcwkEWagSCI
         fxYrx2hYu/+u0GER7XQdEJ0tQyziYMsq++7aNUA69k/yt/f78Dv7Jb60+Ti42/4O9Xmo
         IHYv2jU47M+x1u1i2ZOywiqeUGO6ZJLOJ/qBinnGgPthlk299Rl9pklVXJ8/i1Gv36lQ
         y/kw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=8gKvgTUq7aERno8X4hu3TUb6HJz/We0kzcdYXJnLv4A=;
        b=tufPTovLlCDtW015gkgfnVyj6l/o5t9slWGYIDR5Dpcveqze/AGdJ8kcM/NJa+qvfd
         dDGoxZEebsUYsFOhRS3p0QsAujuokbSGi9XZxhq8iXLcwxZpRVF7TNk9Oa17CcidtXZB
         QKBNc9gTkz2yjDvRSJsIjRpu+vLZ61pl3C3qXecpNzsezpYP+7Fb4HibII0ldT+zOg72
         N6Q5DhSvS1k8+ui6O4lGdCS8ZRiMiA0lDNlpEr7V1Dk0Kpam/Bdlbixe+hG0TsTd0+es
         gVje0oN9iQ7sE4kDGL+As/nbXMQ38k8TRiJRSJ8nwl75tsnuRvg7AzpR1/r7qAIzRm1e
         0x5w==
X-Gm-Message-State: AOAM5322TalKMW2o/qu0qKn0/VKPUQPC4jKh/zYobiLmv7KKlqNdEQXP
	7NxXBEfcV/ZN3eGmoFY9t9U=
X-Google-Smtp-Source: ABdhPJyLBbTgVW3PpMXhd7hyZvYTE6/gltc8GRmoaUR0t1uErm+ORstyVH5rxKM3yPH0j5GG7h512A==
X-Received: by 2002:a1c:bcd6:: with SMTP id m205mr9141560wmf.47.1605183281976;
        Thu, 12 Nov 2020 04:14:41 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Oleksandr Tyshchenko'" <olekstysh@gmail.com>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-13-git-send-email-olekstysh@gmail.com> <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
In-Reply-To: <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
Subject: RE: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Thu, 12 Nov 2020 12:14:40 -0000
Message-ID: <004801d6b8ed$6620e4c0$3262ae40$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAHGHN6+A53Zm3aqcWt+kA==
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 11:45
> To: Oleksandr Tyshchenko <olekstysh@gmail.com>; Paul Durrant <paul@xen.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian
> <kevin.tian@intel.com>; Julien Grall <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
>
> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> >
> > This patch removes "hvm" prefixes and infixes from IOREQ
> > related function names in the common code.
>
> At least some of the functions touched here would be nice to be
> moved to a more consistent new naming scheme right away, to
> avoid having to touch all the same places again. I guess ioreq
> server functions would be nice to all start with ioreq_server_
> and ioreq functions to all start with ioreq_. E.g. ioreq_send()
> and ioreq_server_select().
>
> Paul, thoughts?
>

Yes, that sounds like a good idea. Obviously the code has grown a little
organically, so some naming cleanup is welcome.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:16:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:16:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25761.53734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBWI-0007Rg-Mp; Thu, 12 Nov 2020 12:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25761.53734; Thu, 12 Nov 2020 12:16:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBWI-0007RZ-Js; Thu, 12 Nov 2020 12:16:06 +0000
Received: by outflank-mailman (input) for mailman id 25761;
 Thu, 12 Nov 2020 12:16:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdBWH-0007Qn-46
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:16:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2045614a-6bf4-4540-9dae-8e51ef900a45;
 Thu, 12 Nov 2020 12:15:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdBW9-0005Sn-8C; Thu, 12 Nov 2020 12:15:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdBW8-0001pu-Tz; Thu, 12 Nov 2020 12:15:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdBW8-0006uI-TR; Thu, 12 Nov 2020 12:15:56 +0000
X-Inumbo-ID: 2045614a-6bf4-4540-9dae-8e51ef900a45
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mFnm/Ro3NRcm0jHaeMg8NNizzwV4fFmJXqRmvkxbnRQ=; b=FCJluSo83A+/fKaUlNRuqPWPjx
	/zLKhJAJDOT9tFKyVMoeaLiz1YAxzvF9Y+L+2vzbvdSHwtiAhTum9p231J8vDo2q8AC7AYPk5qDzx
	4DUxL9FdZUmaH3tWhpjKjRXVsRpiRENYgUKgtwsSyfnv/Vnjf4S7sw/X2t4dhXTfXiYU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156670-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156670: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-xl-raw:guest-localmigrate/x10:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d101b417b784a26326fc7800a79cc539ba570b79
X-Osstest-Versions-That:
    xen=0c96e4297da07944525729ddbe438b0131ab5b7e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 12:15:56 +0000

flight 156670 xen-4.14-testing real [real]
flight 156715 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156670/
http://logs.test-lab.xenproject.org/osstest/logs/156715/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-raw       19 guest-localmigrate/x10   fail REGR. vs. 156525
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156525

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156525
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156525
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156525
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156525
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156525
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156525
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d101b417b784a26326fc7800a79cc539ba570b79
baseline version:
 xen                  0c96e4297da07944525729ddbe438b0131ab5b7e

Last test of basis   156525  2020-11-06 16:01:25 Z    5 days
Failing since        156594  2020-11-09 18:08:08 Z    2 days    3 attempts
Testing same since   156670  2020-11-11 04:05:07 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 303 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:18:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:18:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25769.53747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBYY-0007cl-4V; Thu, 12 Nov 2020 12:18:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25769.53747; Thu, 12 Nov 2020 12:18:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBYY-0007ce-1J; Thu, 12 Nov 2020 12:18:26 +0000
Received: by outflank-mailman (input) for mailman id 25769;
 Thu, 12 Nov 2020 12:18:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdBYW-0007cZ-6M
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:18:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 718c445e-2b73-43e2-b952-ce836629ad12;
 Thu, 12 Nov 2020 12:18:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3F9E8AC0C;
 Thu, 12 Nov 2020 12:18:22 +0000 (UTC)
X-Inumbo-ID: 718c445e-2b73-43e2-b952-ce836629ad12
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605183502;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dGzg1J/ExGX5i1+EK2FTdN5yOYTV85/Pz6cwQFj9vcg=;
	b=XBodNZm7a0eBsL6jMg6AU3NBsKpKtduv/lhbwGPz5HPaqLOwdKhMTU6TKl+1ty2ZzIseTL
	mfeM/LXFdss4MSitMCxW5KMVD7fzhPuOJWJdHlDSoErGmk4x/iZW6fGreNTuIhLatBMNGw
	i3VznvGbN+10majBLIDrTeu8ozknC+Y=
Subject: Re: [PATCH V2 16/23] xen/mm: Handle properly reference in
 set_foreign_p2m_entry() on Arm
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-17-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e4c04492-3e99-4578-8f00-e7b35aeb26c5@suse.com>
Date: Thu, 12 Nov 2020 13:18:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-17-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1380,6 +1380,27 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
>      return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>  }
>  
> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
> +                          unsigned long gfn, mfn_t mfn)
> +{
> +    struct page_info *page = mfn_to_page(mfn);
> +    int rc;
> +
> +    if ( !get_page(page, fd) )
> +        return -EINVAL;
> +
> +    /*
> +     * It is always valid to use p2m_map_foreign_rw here because, if
> +     * this gets called, d != fd: the case d == fd would have been
> +     * rejected by rcu_lock_remote_domain_by_id() earlier.
> +     */

Are you describing things here on the assumption that no new
callers may surface later on? To catch any that do, I'd recommend
adding at least a respective ASSERT().

> +    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
> +    if ( rc )
> +        put_page(page);
> +
> +    return 0;

I can't imagine it's correct to not signal the error to the
caller(s).
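
The expected pattern, as a sketch (the function name and the dummy rc
source are illustrative, not the actual Xen code):

```c
#include <assert.h>

/*
 * Sketch of the error-propagation pattern being asked for: on failure,
 * drop the page reference taken earlier and return the error code to
 * the caller, rather than returning 0 unconditionally.
 */
static int add_entry_sketch(int rc_from_add, int *put_page_called)
{
    int rc = rc_from_add;       /* rc = guest_physmap_add_entry(...); */

    if ( rc )
        *put_page_called = 1;   /* put_page(page); */

    return rc;                  /* not "return 0;" */
}
```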

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1099,7 +1099,8 @@ static int acquire_resource(
>       *        reference counted, it is unsafe to allow mapping of
>       *        resource pages unless the caller is the hardware domain.
>       */
> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
> +         !arch_refcounts_p2m() )
>          return -EACCES;

I guess the hook may want naming differently, as both prior parts of
the condition should be needed only on the x86 side, where (for PV)
there's no p2m involved in the refcounting. Maybe
arch_acquire_resource_check()? And then the comment wants moving into
the x86 hook as well. You may want to consider leaving a more generic
one here...
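
A sketch of what such a split might look like (hypothetical names and
signatures, modelled only on the condition quoted above):

```c
#include <stdbool.h>

/*
 * Sketch: Arm flavour of the hook. Foreign p2m entries are reference
 * counted there, so resource mapping can always be permitted.
 */
static bool arch_acquire_resource_check_arm(void)
{
    return true;
}

/*
 * Sketch: x86 flavour, keeping the existing condition, which refuses
 * mapping for translated guests other than the hardware domain.
 */
static bool arch_acquire_resource_check_x86(bool paging_translate,
                                            bool is_hwdom)
{
    return !paging_translate || is_hwdom;
}
```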

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:29:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:29:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25779.53762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBjL-0000Jd-Bw; Thu, 12 Nov 2020 12:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25779.53762; Thu, 12 Nov 2020 12:29:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBjL-0000JW-8E; Thu, 12 Nov 2020 12:29:35 +0000
Received: by outflank-mailman (input) for mailman id 25779;
 Thu, 12 Nov 2020 12:29:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdBjJ-0000JJ-Et
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:29:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5743c89b-a1b2-4708-a152-d44f2f10c914;
 Thu, 12 Nov 2020 12:29:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdBjG-0005jx-Of; Thu, 12 Nov 2020 12:29:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdBjG-0002hr-IJ; Thu, 12 Nov 2020 12:29:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdBjG-0000zT-Hs; Thu, 12 Nov 2020 12:29:30 +0000
X-Inumbo-ID: 5743c89b-a1b2-4708-a152-d44f2f10c914
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=L7ol6jzeWrBjugfIuJHYn1DZfX+GA1RjLEt7EBYTktQ=; b=B+f0IW2SZXqqfcZwEWOiBQLEtv
	UxJF1kRyz16dX6LqowuO7x+YhmWFDwr2ktqxt7bFDKNVENWsEbqmFQ5cc8m16Ionb5L5tb0ny9JEV
	0V6A9UJieGWtBfyhir4yYxIBA0549GsVe1W3atYXZQ3hHo6g4WQLdNp3M4NYg6U4DJlk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm
Message-Id: <E1kdBjG-0000zT-Hs@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 12:29:30 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156714/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/156714.bisection-summary --basis-template=156443 --blessings=real,real-bisect,real-retry xen-unstable test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 156663 fail [host=huxelrebe0] / 156443 [host=fiano0] 156401 [host=albana0] 156389 [host=elbling1] 156373 [host=huxelrebe1] 156354 [host=albana1] 156339 [host=fiano1] 156331 [host=chardonnay1] 156315 [host=chardonnay0] 156291 [host=elbling0] 156268 [host=fiano1] 156254 [host=rimava1] 156248 [host=albana0] 156228 [host=albana1] 156196 [host=huxelrebe1] 156167 [host=pinot1] 156136 ok.
Failure / basis pass flights: 156663 / 156136
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc4\
 37c386260-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#6ca70821b59849ad97c3fadc47e63c1a4af1a78c-3059178798a23ba870ff86ff54d442a07e6651fc
Loaded 41918 nodes in revision graph
Searching for test results:
 156119 [host=pinot0]
 156136 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156167 [host=pinot1]
 156196 [host=huxelrebe1]
 156228 [host=albana1]
 156248 [host=albana0]
 156254 [host=rimava1]
 156268 [host=fiano1]
 156291 [host=elbling0]
 156315 [host=chardonnay0]
 156331 [host=chardonnay1]
 156339 [host=fiano1]
 156354 [host=albana1]
 156373 [host=huxelrebe1]
 156389 [host=elbling1]
 156401 [host=albana0]
 156443 [host=fiano0]
 156524 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 2a5f9f6a6932214fda76b9b3c03e024772882d34
 156538 fail irrelevant
 156556 fail irrelevant
 156577 fail irrelevant
 156588 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156608 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156666 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156691 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 0a5e0ce0fb7e5a3b5dfdc936058d2c0e04e5e258
 156694 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 82c0d3d491ccb183cf12c87775086b68531b8444
 156696 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd dac867bf9adc1562a4cf9db5f89726597af13ef8
 156697 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 9ff9705647646aa937b5f5c1426a64c69a62b3bd
 156698 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1
 156701 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156706 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156707 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156709 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
 156663 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
 156710 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
 156712 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7ea428895af2840d85c524f0bd11a38aac308308 3059178798a23ba870ff86ff54d442a07e6651fc
 156714 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd e19bcb626f50a652fb1854a8b2f2c9c371687a11
Searching for interesting versions
 Result found: flight 156136 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c, results HASH(0x56279bb52f80) HASH(0x56279bb48278) HASH(0x56279e97e650) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe132\
 4c29294bb1d1b8454b3f214725e40fd 957708c2d1ae25d7375abd5e5e70c3043d64f1f1, results HASH(0x56279bb411e8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 9ff9705647646aa937b5f5c1426a64c69a62b3bd, results HASH(0x56279e97f918) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd dac867bf9adc1562a4cf9db5f89726597af13ef8, results HASH(0x56279e981200) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd 82c0d3d491ccb183cf12c87775086b68531b8444, results HASH(0x56279e97ca98) Result found: flight 156524 (fail), for basis failure (at\
  ancestor ~34)
 Repro found: flight 156666 (pass), for basis pass
 Repro found: flight 156712 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 677cbe1324c29294bb1d1b8454b3f214725e40fd c3453a23f7905d24f2404787543e26ec7d02301c
No revisions left to test, checking graph state.
 Result found: flight 156701 (pass), for last pass
 Result found: flight 156706 (fail), for first failure
 Repro found: flight 156707 (pass), for last pass
 Repro found: flight 156709 (fail), for first failure
 Repro found: flight 156710 (pass), for last pass
 Repro found: flight 156714 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Bug not present: c3453a23f7905d24f2404787543e26ec7d02301c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156714/


  commit e19bcb626f50a652fb1854a8b2f2c9c371687a11
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Nov 6 10:48:07 2020 +0100
  
      xen/rwlock: add check_lock() handling to rwlocks
      
      Checking whether a lock is consistently used regarding interrupts on
      or off is beneficial for rwlocks, too.
      
      So add check_lock() calls to rwlock functions. For this purpose make
      check_lock() globally accessible.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
156714: tolerable ALL FAIL

flight 156714 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156714/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 12 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25780.53767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBjL-0000K5-KQ; Thu, 12 Nov 2020 12:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25780.53767; Thu, 12 Nov 2020 12:29:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdBjL-0000Jy-G0; Thu, 12 Nov 2020 12:29:35 +0000
Received: by outflank-mailman (input) for mailman id 25780;
 Thu, 12 Nov 2020 12:29:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdBjK-0000JR-DQ
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:29:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1940ffff-c486-4335-b7f2-8132f7008442;
 Thu, 12 Nov 2020 12:29:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 824B0AC0C;
 Thu, 12 Nov 2020 12:29:32 +0000 (UTC)
X-Inumbo-ID: 1940ffff-c486-4335-b7f2-8132f7008442
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605184172;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LtCOfv/NSuK78+U+IYPviVQ/E5IuH95oN3FovdEDX3s=;
	b=JRltZRuONpSpSPZ8w886nSUYrfdA3MxxYem9qXJ7sq55uC87n2bv0cSoLaJseiQp1/Kzj8
	ZIfS0AKIwgsUa3Xy1ZdCz0UxX90n8WT+v+DKKKzLbbtUqkOHiajH1x3XRY9DFWPkvkaEzy
	LtTVhI5j4vPWEn3lKnd9dCeJ8dZb8AU=
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
 <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
Date: Thu, 12 Nov 2020 13:29:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.11.2020 13:17, Roger Pau Monné wrote:
> On Tue, Nov 10, 2020 at 03:50:44PM +0100, Jan Beulich wrote:
>> On 10.11.2020 14:59, Roger Pau Monné wrote:
>>> On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
>>>>  {
>>>>      struct domain *d = p2m->domain;
>>>>      struct vcpu *v = current;
>>>> -    int rc = 0;
>>>>  
>>>>      if ( v->domain != d )
>>>>          v = d->vcpu ? d->vcpu[0] : NULL;
>>>>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
>>>>           p2m_is_nestedp2m(p2m) )
>>>> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
>>>> +    {
>>>> +        unsigned int oflags;
>>>> +        mfn_t omfn;
>>>> +        int rc;
>>>> +
>>>> +        paging_lock(d);
>>>> +
>>>> +        if ( p2m->write_p2m_entry_pre )
>>>> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
>>>> +
>>>> +        oflags = l1e_get_flags(*p);
>>>> +        omfn = l1e_get_mfn(*p);
>>>> +
>>>> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
>>>> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
>>>> +                              omfn, level);
>>>> +        if ( rc )
>>>> +        {
>>>> +            paging_unlock(d);
>>>> +            return rc;
>>>> +        }
>>>> +
>>>> +        safe_write_pte(p, new);
>>>> +
>>>> +        if ( p2m->write_p2m_entry_post )
>>>> +            p2m->write_p2m_entry_post(p2m, oflags);
>>>> +
>>>> +        paging_unlock(d);
>>>> +
>>>> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
>>>> +             (oflags & _PAGE_PRESENT) &&
>>>> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
>>>> +             /*
>>>> +              * We are replacing a valid entry so we need to flush nested p2ms,
>>>> +              * unless the only change is an increase in access rights.
>>>> +              */
>>>> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
>>>> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
>>>> +            p2m_flush_nestedp2m(d);
>>>
>>> It feels slightly weird to have a nested p2m post hook, and yet have
>>> nested-specific code here.
>>>
>>> Have you considered if the post hook could be moved outside of the
>>> locked region, so that we could put this chunk there in the nested p2m
>>> case?
>>
>> Yes, I did, but I don't think the post hook can be moved out. The
>> only alternative therefore would be a 3rd hook. And this hook would
>> then need to be installed on the host p2m for nested guests, as
>> opposed to nestedp2m_write_p2m_entry_post, which gets installed in
>> the nested p2m-s. As said in the description, the main reason I
>> decided against a 3rd hook is that I suppose the code here isn't
>> HAP-specific (while prior to this patch it was).
> 
> I'm not convinced the guest TLB flush needs to be performed while
> holding the paging lock. The point of such flush is to invalidate any
> intermediate guest visible translations that might now be invalid as a
> result of the p2m change, but the paging lock doesn't affect the guest
> in any way.
> 
> It's true that the dirty_cpumask might change, but I think we only
> care that when returning from the function there are no stale cache
> entries that contain the now invalid translation, and this can be
> achieved equally by doing the flush outside of the locked region.

I agree with all this. If only it were merely about TLB flushes. In
the shadow case, shadow_blow_all_tables() gets invoked, and that one,
judging by the other call sites, wants the paging lock held.

Additionally, moving this code out of the locked region wouldn't
allow us to achieve the goal of moving the nested flush into the
hook, unless we introduced further hook handlers to be installed
on the host p2m-s of nested guests.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:51:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:51:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25797.53798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC46-0003G8-EW; Thu, 12 Nov 2020 12:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25797.53798; Thu, 12 Nov 2020 12:51:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC46-0003G1-BO; Thu, 12 Nov 2020 12:51:02 +0000
Received: by outflank-mailman (input) for mailman id 25797;
 Thu, 12 Nov 2020 12:51:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdC45-0003Fu-4F
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:51:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 02c56eee-d1b1-4f41-b0f0-bbb3db933b94;
 Thu, 12 Nov 2020 12:50:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82276AC75;
 Thu, 12 Nov 2020 12:50:58 +0000 (UTC)
X-Inumbo-ID: 02c56eee-d1b1-4f41-b0f0-bbb3db933b94
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605185458;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aaQOEo1oc8aBjYCLuIdNVr/b3Z9v3z3JUDjlzDpRiI4=;
	b=Dqn2jrv1nDnfNTx5b/eO20IZVf5UtTiYQOUJWRf2356niPvdAlUYNbp7bLGAs/X4XQoOls
	jWGr3aHfoa8CL6EFUhr+RCAKwmn+hEhK29oYqgrRfIlZ+VnuV1m0lbTWOMiVF/G9A45tsS
	jtWPJSAZYeJ+1ggwRtBJF8vF5dRxXy4=
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
 <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
Date: Thu, 12 Nov 2020 13:50:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="hM1ECeJX8mgiaB888nnp36HVgRctACYfd"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--hM1ECeJX8mgiaB888nnp36HVgRctACYfd
Content-Type: multipart/mixed; boundary="bwoMxFaVyrSneugGhwez8ExgAeq6BklI3";
 protected-headers="v1"

--bwoMxFaVyrSneugGhwez8ExgAeq6BklI3
Content-Type: multipart/mixed;
 boundary="------------4818A1C4ED0F81B33A768B73"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4818A1C4ED0F81B33A768B73
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 29.10.20 15:49, Jan Beulich wrote:
> On 29.10.2020 15:40, Jürgen Groß wrote:
>> On 29.10.20 15:22, Jan Beulich wrote:
>>> On 22.10.2020 16:39, Juergen Gross wrote:
>>>> @@ -507,6 +509,41 @@ void __init initialize_keytable(void)
>>>>        }
>>>>    }
>>>>
>>>> +#define CRASHACTION_SIZE  32
>>>> +static char crash_debug_panic[CRASHACTION_SIZE];
>>>> +static char crash_debug_hwdom[CRASHACTION_SIZE];
>>>> +static char crash_debug_watchdog[CRASHACTION_SIZE];
>>>> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
>>>> +static char crash_debug_debugkey[CRASHACTION_SIZE];
>>>> +
>>>> +static char *crash_action[CRASHREASON_N] = {
>>>> +    [CRASHREASON_PANIC] = crash_debug_panic,
>>>> +    [CRASHREASON_HWDOM] = crash_debug_hwdom,
>>>> +    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
>>>> +    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
>>>> +    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
>>>> +};
>>>> +
>>>> +string_runtime_param("crash-debug-panic", crash_debug_panic);
>>>> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
>>>> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
>>>> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
>>>> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
>>>
>>> In general I'm not in favor of this (or similar) needing
>>> five new command line options, instead of just one. The problem
>>> with e.g.
>>>
>>> crash-debug=panic:rq,watchdog:0d
>>>
>>> is that ',' (or any other separator chosen) could in principle
>>> also be a debug key. It would still be possible to use
>>>
>>> crash-debug=panic:rq crash-debug=watchdog:0d
>>>
>>> though. Thoughts?
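The single-option form suggested above could be split on ':'/',' roughly as below. This is a standalone sketch, not part of the patch: parse_crash_debug() and the two-entry reason table are hypothetical, and, as noted, a literal ',' debug key would defeat such a parser.

```c
#include <string.h>

/* Mirrors the patch's buffer size; the parser itself is hypothetical. */
#define CRASHACTION_SIZE 32

static char crash_debug_panic[CRASHACTION_SIZE];
static char crash_debug_watchdog[CRASHACTION_SIZE];

static const struct {
    const char *name;
    char *buf;
} reasons[] = {
    { "panic", crash_debug_panic },
    { "watchdog", crash_debug_watchdog },
};

/* Parse "reason:keys[,reason:keys]*"; 0 on success, -1 on malformed input. */
static int parse_crash_debug(const char *arg)
{
    while ( *arg )
    {
        const char *colon = strchr(arg, ':');
        const char *comma = strchr(arg, ',');
        size_t nlen, vlen, i;

        /* A ',' before the ':' (or no ':') is malformed - and note a
         * literal ',' debug key could not be expressed at all. */
        if ( !colon || (comma && comma < colon) )
            return -1;
        nlen = colon - arg;
        vlen = comma ? (size_t)(comma - colon - 1) : strlen(colon + 1);
        for ( i = 0; i < sizeof(reasons) / sizeof(reasons[0]); i++ )
            if ( strlen(reasons[i].name) == nlen &&
                 !strncmp(arg, reasons[i].name, nlen) )
                break;
        if ( i == sizeof(reasons) / sizeof(reasons[0]) ||
             vlen >= CRASHACTION_SIZE )
            return -1;
        memcpy(reasons[i].buf, colon + 1, vlen);
        reasons[i].buf[vlen] = '\0';
        arg = comma ? comma + 1 : colon + 1 + vlen;
    }
    return 0;
}
```

With `parse_crash_debug("panic:rq,watchdog:0d")` the two buffers end up holding "rq" and "0d" respectively.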
>>
>> OTOH the runtime parameters are much easier addressable that way.
>
> Ah yes, I can see this as a reason. Would make me wonder whether
> command line and runtime handling may want disconnecting in this
> specific case then. (But I can also see the argument of this
> being too much overhead then.)
>
>>>> +void keyhandler_crash_action(enum crash_reason reason)
>>>> +{
>>>> +    const char *action = crash_action[reason];
>>>> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
>>>> +
>>>> +    while ( *action ) {
>>>> +        if ( *action == '.' )
>>>> +            mdelay(1000);
>>>> +        else
>>>> +            handle_keypress(*action, regs);
>>>> +        action++;
>>>> +    }
>>>> +}
>>>
>>> I think only diagnostic keys should be permitted here.
>>
>> While I understand that other keys could produce nonsense or do harm,
>> I'm not sure we should really prohibit them. Allowing them would e.g.
>> allow doing just a reboot without kdump for one type of crashes.
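For illustration, the quoted loop's semantics ('.' delays one second, anything else is fed to handle_keypress()) can be modelled outside the hypervisor. handle_key(), run_crash_action() and the record buffer below are hypothetical stand-ins for Xen's handle_keypress()/mdelay(); only the walking of the action string is the same.

```c
#include <string.h>

static char record[64];                    /* keys "pressed", in order */

static void handle_key(char key)           /* stands in for handle_keypress() */
{
    size_t n = strlen(record);

    if ( n + 1 < sizeof(record) )
        record[n] = key;
}

/* Walk the action string as keyhandler_crash_action() does; return the
 * number of one-second delays that would have been taken. */
static unsigned int run_crash_action(const char *action)
{
    unsigned int delays = 0;

    while ( *action )
    {
        if ( *action == '.' )
            delays++;                      /* would be mdelay(1000) in Xen */
        else
            handle_key(*action);
        action++;
    }
    return delays;
}
```

E.g. the action string "0d..R" would hand the keys '0', 'd' and 'R' to the key handler and count two one-second delays in between.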
>
> Ah yes, that's a fair point.
>
>>>> --- a/xen/include/xen/kexec.h
>>>> +++ b/xen/include/xen/kexec.h
>>>> @@ -1,6 +1,8 @@
>>>>    #ifndef __XEN_KEXEC_H__
>>>>    #define __XEN_KEXEC_H__
>>>>
>>>> +#include <xen/keyhandler.h>
>>>
>>> Could we go without this, utilizing the gcc extension of forward
>>> declared enums? Otoh ...
>>>
>>>> @@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
>>>>    #define kexecing 0
>>>>
>>>>    static inline void kexec_early_calculations(void) {}
>>>> -static inline void kexec_crash(void) {}
>>>> +static inline void kexec_crash(enum crash_reason reason)
>>>> +{
>>>> +    keyhandler_crash_action(reason);
>>>> +}
>>>
>>> ... if this is to be an inline function and not just a #define,
>>> it'll need the declaration of the function to have been seen.
>>
>> And even being a #define all users of kexec_crash() would need to
>> #include keyhandler.h (and I'm not sure there are any source files
>> including kexec.h which don't use kexec_crash()).
>
> About as many which do as ones which don't. But there's no
> generally accessible header which includes xen/kexec.h, so perhaps
> the extra dependency indeed isn't all this problematic.

Any further comments, or even better, Acks?


Juergen


--------------4818A1C4ED0F81B33A768B73
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBycWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8Of8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDAQIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyThpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCCQoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7DrWf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJCAcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+lotu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1EvmV2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88NEaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLkpEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARAQAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylWsvi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDXzXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4818A1C4ED0F81B33A768B73--

--bwoMxFaVyrSneugGhwez8ExgAeq6BklI3--

--hM1ECeJX8mgiaB888nnp36HVgRctACYfd
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+tL7EFAwAAAAAACgkQsN6d1ii/Ey9Z
oAgAnrBT9BDrZN9Nv4LYT/EOflIVmzHiiAPSHVn61h7a2s3Xew8erTXFZuzdaYzdvvzwyW5ovr+v
fYcBfCD9s0Nc7tIMJRQ8X4tPjM3owaDHkT2qzAFnHXaiVbF1pCvR1Vc7yUNSNaaGWyMnrbx+Kml6
ls8ln7vczjAbFEbL6dM8NiXN4LbKJ2jDEIFJaJS5sAAz3AZmLI1+EspBwRiAfHTrlm/E4gslCI21
cLYVf0agdk1IgU7C3uyHgX/TvZb0H+AoBlinnM/bblbg1CkZUdcCZka1QldX4yOOMrUPhQEDTxkE
+wU6SR/I4xXwtDjEClqKvy2JMguOiUDWwGGO+4GkCQ==
=bRa8
-----END PGP SIGNATURE-----

--hM1ECeJX8mgiaB888nnp36HVgRctACYfd--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25803.53810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC6x-0003Rq-11; Thu, 12 Nov 2020 12:53:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25803.53810; Thu, 12 Nov 2020 12:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC6w-0003Rj-U3; Thu, 12 Nov 2020 12:53:58 +0000
Received: by outflank-mailman (input) for mailman id 25803;
 Thu, 12 Nov 2020 12:53:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81XN=ES=epam.com=prvs=9585825f72=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdC6v-0003Re-N5
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:53:57 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7c94627-d6ad-43c6-8249-1da5099a67dc;
 Thu, 12 Nov 2020 12:53:56 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ACCVRTo008957; Thu, 12 Nov 2020 12:53:51 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2050.outbound.protection.outlook.com [104.47.0.50])
 by mx0b-0039f301.pphosted.com with ESMTP id 34rf80ur4p-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 12 Nov 2020 12:53:50 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6883.eurprd03.prod.outlook.com (2603:10a6:20b:282::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Thu, 12 Nov
 2020 12:53:47 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Thu, 12 Nov 2020
 12:53:47 +0000
X-Inumbo-ID: b7c94627-d6ad-43c6-8249-1da5099a67dc
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RtmHfC5BMTrN+OKh6X09/JYWiowjpqGtK2Teoz47X0TI5KHgSyjWq7pADimbC1lBZ0ipRut/RfdX+gNejnjWS+qPkKXkD+bSEx12lhgOOCMKyQtD/FcvzZcqNvup9sxIoiWkOD0F1IJ33Ds6u5B14Yq5HA2WbXnvl9ohp6AC+rUrM2X37LO3DYjBCmVPchioEgZbcgSCPj7zee/OPlLv5s6XtDdqa1Z8AqC83EhJsAKG2FxoFK2muxcx8ORpDdrnMKJzXcSLDQJi/vKBOk7p2nc2p/vOsuSD34T6AXUxSS4OGk4to3KDtzoMFlbznad4Jmqw5fwNFZBbZuEnN5rXsQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hNloKH2E3e+RoItDDf/WzumfowLq0jgvU8+B2joGuxw=;
 b=aHRw1tDuXFo3PmC1QUo4m5J/3UU9rPTNJ90CZOsGCzR+r+6Xs0bC4eyrsNS+40iugAHdMX00fJHA9Sc6yP3iTpqT/sbzt7dGamYQPBoeGomSbTLSwfy989IkrasLZ3rFfW1r5VpZeD4633dtFc9QVm+VGCw6NU5owdZzv1HxBOjzViVJcNiauVEsPOpd4vVs9Kz+ex3yOHao4J8CjSxr3hf+U687itdtCMHYZAGi6TbOzr1nWsqGMAkd4nuoVwwyOEv3VwCp41HTJ3Dg9owR6uoiS8LB1YxDOaGL0abXZz3s1o7GAh6YWqDeqkZUz6zRj2zcsXfzXEGET6vuesD9Og==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hNloKH2E3e+RoItDDf/WzumfowLq0jgvU8+B2joGuxw=;
 b=ZltFrQrvik9V/OytNu0SxR+A7C00v/xQ17fSZ0Ad4UBZLz0ji+5P7O9eyFnyykBdeS9XYhvaBF272jo4mBwjE8KOCfW3Lbx2sCMvAlLZAogdedNWlYe3+az+GFMCws/FZB1gmdoGN3DgdKWyuFcopGXpQC2jGFkfwCZ2NndV8RUKp1KqF/upm2O+ESbVIdRUOV1ttybQ/6xbGoIq4kKCpX4uJ/SyU92+L8DZeT4/VvcyiY0BDTt7bQK+lXRy6VdKiiU3/qziXXUpo6aaQ3vxXzYX2fgCY0kHqtrx0RMfaegUKUNzY5u61jGqVA22SyQ75/TBbEOrGRVeqPTNkI0bfw==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>
CC: "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com"
	<julien.grall@arm.com>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>
Subject: Re: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Topic: [PATCH 02/10] arm/pci: Maintain PCI assignable list
Thread-Index: AQHWtpbuKiRgL1HiZkyNC2yoNcmnl6nDB9uAgAFwr4A=
Date: Thu, 12 Nov 2020 12:53:47 +0000
Message-ID: <eb50b7f2-a2fb-a7fb-7937-511e5b26f7a0@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-3-andr2000@gmail.com>
 <6a2dc03e-202b-b8f9-46a5-9c90d9de8a6d@suse.com>
In-Reply-To: <6a2dc03e-202b-b8f9-46a5-9c90d9de8a6d@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 8d80fe6d-4369-4ae7-6fea-08d88709ff36
x-ms-traffictypediagnostic: AM9PR03MB6883:
x-microsoft-antispam-prvs: 
 <AM9PR03MB688340CF3793A3731938A77AE7E70@AM9PR03MB6883.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 9n4yyALzw3SpINwl5CHT67G702I2j+PMOIm0ydeuma/nh1s/51GDHLKhAQnmGYP6wfMQ7ooYTWc/7iBRqjzNSJ4vCV1lefy1Rb+CCdero+hXdxdZ8KlTf1Pkf4dbe/cPj1w4/2MmOlma1+pIqiJJsrVq+Q1ID+2FdODOiYZ5KykbvjkCN4U7dWIZWhH+sc0AMzVQ6XOQxPUfGkEbq3a3oUBjb/en3lZF9ciFiTiJBZUlCa/Q2lNe4LjJWlbTID1Y2gN7Fnz3Df7+MBiuX7K3Tfhz/8Uas/fNYN56XZDHYlO0i8gpHBEswyWUAvbWykjXk90FcUTbN6oJ+808Exetow==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(396003)(346002)(39860400002)(376002)(2616005)(83380400001)(6512007)(36756003)(2906002)(6486002)(8676002)(8936002)(31696002)(53546011)(71200400001)(6506007)(26005)(7416002)(54906003)(31686004)(186003)(316002)(66446008)(76116006)(64756008)(66476007)(66946007)(5660300002)(86362001)(110136005)(478600001)(66556008)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 h1pe5e/UnR1YXLq7CTHSOUxq6evYxfWZTxvsJetxmdbyVOvmks13Gg2RRkPQXSzMShfDFMgsyJ5w+frEhUHQ/hIBZNDapQLQnkhaye6OX3AHFAmP79dVVxf6Rvsvo87wIT4YbJu1C/J7ezxaCiMdlvw/967eFc1mBZQxbmMfvO/+mFq5NlJmXadMhxqIbJcqMgGuj2XZ6a7Zy1Nh6hO5alf+YFnRdI0Bb2hghq8XGjmamGp4Kla3TGOwca/NcOzXznfCjLGhqd4Q4+ZZHzRHeeEcFWmgRWGeJiXXblgmNj6wl1Eg3Vsqeey3NUmyDNHj9KT6qgNQs98QWEosQMurkJ1j/gHuaMg1dgizoTB4gVJ/QkW+Img9PMgivhszOH4zGrJobqJkWXLXb5s6KMiFK5bjVKpdnVrog/znr/nGJ1mkuMeNgbKqSxguxQxeYa0wugRdRdiYcFyJWv/2Kt6x7uqidpzYixs1vSUX5zy8GxYBiQ5Ui/bwNfn/nQPZB9l8cFtfltHcu1gHQMDzf/3f3M3hkMIT17mjYb9YiumJV2HeO2xwHxIP0zZmBeE1+TaRhMMFz2Ix/pio6BqgwUAvXl/q9urNcmv+rUI3HzPtTKwLNmBk6+/dQi3PSP6UV8E/gUIYLqe14MFbOqat6mTsyw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6F88741EE7FE1042B15BD4DD03898A88@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d80fe6d-4369-4ae7-6fea-08d88709ff36
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Nov 2020 12:53:47.7774
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: XRB+v3VwZZux3k/GhLHLD6dSmMQN+n7Aq+MYvSn+bK9lPNGzwULqupeXtoyZv37mEYpv9mbyIrDWhCs90xFkhm02d68bCJCuCUsje+4Ij6D0V0CHNHwb11YilUVFaTAq
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6883
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-12_05:2020-11-12,2020-11-12 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 clxscore=1015
 mlxscore=0 suspectscore=0 bulkscore=0 lowpriorityscore=0 impostorscore=0
 mlxlogscore=999 malwarescore=0 priorityscore=1501 adultscore=0 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011120075

On 11/11/20 4:54 PM, Jan Beulich wrote:
> On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -879,6 +879,43 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>>       return ret;
>>   }
>>
>> +#ifdef CONFIG_ARM
>> +int pci_device_set_assigned(u16 seg, u8 bus, u8 devfn, bool assigned)
>> +{
>> +    struct pci_dev *pdev;
>> +
>> +    pdev = pci_get_pdev(seg, bus, devfn);
>> +    if ( !pdev )
>> +    {
>> +        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
>> +               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
>> +        return -ENODEV;
>> +    }
>> +
>> +    pdev->assigned = assigned;
>> +    printk(XENLOG_ERR "pciback %sassign PCI device %04x:%02x:%02x.%u\n",
>> +           assigned ? "" : "de-",
>> +           seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
>> +
>> +    return 0;
>> +}
>> +
>> +int pci_device_get_assigned(u16 seg, u8 bus, u8 devfn)
>> +{
>> +    struct pci_dev *pdev;
>> +
>> +    pdev = pci_get_pdev(seg, bus, devfn);
>> +    if ( !pdev )
>> +    {
>> +        printk(XENLOG_ERR "Can't find PCI device %04x:%02x:%02x.%u\n",
>> +               seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
>> +        return -ENODEV;
>> +    }
>> +
>> +    return pdev->assigned ? 0 : -ENODEV;
>> +}
>> +#endif
>> +
>>   #ifndef CONFIG_ARM
>>   /*TODO :Implement MSI support for ARM  */
>>   static int pci_clean_dpci_irq(struct domain *d,
>> @@ -1821,6 +1858,62 @@ int iommu_do_pci_domctl(
>>       return ret;
>>   }
>>
>> +#ifdef CONFIG_ARM
>> +struct list_assigned {
>> +    uint32_t cur_idx;
>> +    uint32_t from_idx;
>> +    bool assigned;
>> +    domid_t *domain;
>> +    uint32_t *machine_sbdf;
>> +};
>> +
>> +static int _enum_assigned_pci_devices(struct pci_seg *pseg, void *arg)
>> +{
>> +    struct list_assigned *ctxt = arg;
>> +    struct pci_dev *pdev;
>> +
>> +    list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>> +    {
>> +        if ( pdev->assigned == ctxt->assigned )
>> +        {
>> +            if ( ctxt->cur_idx == ctxt->from_idx )
>> +            {
>> +                *ctxt->domain = pdev->domain->domain_id;
>> +                *ctxt->machine_sbdf = pdev->sbdf.sbdf;
>> +                return 1;
>> +            }
>> +            ctxt->cur_idx++;
>> +        }
>> +    }
>> +    return 0;
>> +}
>> +
>> +int pci_device_enum_assigned(bool report_not_assigned,
>> +                             uint32_t from_idx, domid_t *domain,
>> +                             uint32_t *machine_sbdf)
>> +{
>> +    struct list_assigned ctxt = {
>> +        .assigned = !report_not_assigned,
>> +        .cur_idx = 0,
>> +        .from_idx = from_idx,
>> +        .domain = domain,
>> +        .machine_sbdf = machine_sbdf,
>> +    };
>> +    int ret;
>> +
>> +    pcidevs_lock();
>> +    ret = pci_segments_iterate(_enum_assigned_pci_devices, &ctxt);
>> +    pcidevs_unlock();
>> +    /*
>> +     * If not found then report as EINVAL to mark
>> +     * enumeration process finished.
>> +     */
>> +    if ( !ret )
>> +        return -EINVAL;
>> +    return 0;
>> +}
>> +#endif
> Just in case the earlier comments you've got don't lead to removal
> of this code - unless there's a real need for them to be put here,
> under #ifdef, please add a new xen/drivers/passthrough/arm/pci.c
> instead. Even if for just part of the code, this would then also
> help with more clear maintainership of this Arm specific code.
Yes, does make sense to move all ARM specifics into a dedicated file
>
> Jan
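The enumeration pattern in pci_device_enum_assigned() discussed in this message (walk all devices, return the from_idx-th one in the requested assignment state, or an error once past the end) can be modelled standalone. fake_pdev and enum_assigned() below are illustrative stand-ins for Xen's pci_dev lists and pci_segments_iterate(); they are not part of the patch.

```c
#include <stddef.h>

struct fake_pdev {
    unsigned int sbdf;      /* segment/bus/device/function, packed */
    int assigned;           /* is the device assigned to a guest? */
};

/* Store the sbdf of the from_idx-th device with the wanted assignment
 * state into *sbdf and return 0, or return -1 when enumeration is past
 * the end (the caller treats that like the patch's -EINVAL). */
static int enum_assigned(const struct fake_pdev *devs, size_t n,
                         int assigned, unsigned int from_idx,
                         unsigned int *sbdf)
{
    unsigned int cur_idx = 0;
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        if ( devs[i].assigned != assigned )
            continue;
        if ( cur_idx == from_idx )
        {
            *sbdf = devs[i].sbdf;
            return 0;
        }
        cur_idx++;
    }
    return -1; /* no more matching entries: enumeration finished */
}
```

A caller enumerates by invoking this with from_idx = 0, 1, 2, ... until the error return, mirroring how the toolstack would drive the patch's domctl.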


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:55:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25809.53822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC7x-0003Z3-BO; Thu, 12 Nov 2020 12:55:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25809.53822; Thu, 12 Nov 2020 12:55:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC7x-0003Yw-8L; Thu, 12 Nov 2020 12:55:01 +0000
Received: by outflank-mailman (input) for mailman id 25809;
 Thu, 12 Nov 2020 12:54:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81XN=ES=epam.com=prvs=9585825f72=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdC7v-0003Yq-AZ
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:54:59 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92cf2c9d-cb20-491f-8f96-a2156131ccb9;
 Thu, 12 Nov 2020 12:54:58 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ACCZCSO001011; Thu, 12 Nov 2020 12:54:53 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80bqqf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 12 Nov 2020 12:54:52 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0302MB2657.eurprd03.prod.outlook.com (2603:10a6:200:90::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Thu, 12 Nov
 2020 12:54:49 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Thu, 12 Nov 2020
 12:54:49 +0000
X-Inumbo-ID: 92cf2c9d-cb20-491f-8f96-a2156131ccb9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gS3K8xc2xEL9gZ7W1s2N4ITwagcAKI5LAAbyEtnbUYdnVOx1je8eRaSGASjdYLLfp0ckcA4RWQ25Kan1UKLYWGlfl1CEU9cnlJDb7pxceA8i4cuSfUm86YvV9CfzHeGWJYTvYZMWIj7xA7YNu5D6BkySvoTv4DBCBwKPPlZ20qXBmYkLv+2BfgMh5K1x7X0SAD4TA6ZJLgbZMKxvgAw5oDZ1zU421a+g0UnL5b5+Az41XN2PPMPXqfEymsKqj72efWUuCMVvoO5IjeXHkc4jmrYvMPylqLB6x9fAQCT4hB5AxUfMG8qoe8gjEEodD8AQF7YGbUFm512FqiUfrHnoQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7aB8W4OWIlVdg7LOVbs3Kmr8dlOKwhPxhSaGqZv+ueA=;
 b=mACXOkNqE9Fza/xj52QP5GlQNgaCxyqNXj2o2C3B4085wDnPgV9S4C3fxk6d6HX8hExNXSZ/6212NRlOMcz0jmbtv/RhAEeHH7ep5eqE9snoQkTSis0IioTbAoUFWaXjPOvshfpE8bLRcB2bgB4qeAlzXEZGlLC1d5yiwZyn/qsMYrS+4xE0rgw3J4z49nwPKqFeRYgyCHWOPJKkzP5TNeTuArt2wo82LFkwEJLPGMKt1964kfRnrneQDQh+z4Miv+Skv5dBOZ4m5QJa/dcTs+Ty+lrvqv4i4WxiUjSt1EDXDYIaHaEPm3iAveCzqQZUjvb+cn2i9YUmIfsJIHdrgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7aB8W4OWIlVdg7LOVbs3Kmr8dlOKwhPxhSaGqZv+ueA=;
 b=OVyBEKH7giPjgBd/dXWsP7pz5lJk7GwERK9Uv2Ee2krF/Esa6KYIDtzPR7BFgE2mgNWIQJA95rUnmG5mEvj4As5B0I9nncxLLLTRJP45nIsmdE7nHxbyWSrYCVsMsj97PD7rNWYfCq6Jy+6pmoOJPA+N6B8Yzv2cUJMeS7XkqylpetEAn0rhVWyfFpW8hKmpGaG6HNUbF7u0mnO6zUgwVTveB87mRUF/pgIyAQmJJ3GuRftEIof6ZMy3SRdY+s7L8MaFN8PLwjDsyVhh7VSjH7F7TnUPKDmllh2HKbZ4C5dNP4Xgk7PCVY8VxKlgLDx0OMB3r4lDs1Qd9cMwKqYpSg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Oleksandr
 Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 04/10] [WORKAROUND] xen/arm: Update hwdom's p2m to trap
 ECAM space
Thread-Topic: [PATCH 04/10] [WORKAROUND] xen/arm: Update hwdom's p2m to trap
 ECAM space
Thread-Index: AQHWtpbxgvcOGSsv3UquxFkS+bVZEanDBRsAgAFzuAA=
Date: Thu, 12 Nov 2020 12:54:49 +0000
Message-ID: <6a4b279c-5efe-a47a-5b71-c5bb7531ddb1@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-5-andr2000@gmail.com>
 <20201111144422.z2hi3ineg6qwbxi4@Air-de-Roger>
In-Reply-To: <20201111144422.z2hi3ineg6qwbxi4@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: e03747c2-3a9f-4584-04b4-08d8870a23bc
x-ms-traffictypediagnostic: AM4PR0302MB2657:
x-microsoft-antispam-prvs: 
 <AM4PR0302MB26574CE435B4F04F2361E3C4E7E70@AM4PR0302MB2657.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 n4BdoapUm5mc31K3qsVoZxXLHIbzvF+WfFb7oud83XxQJZ4EuIKEx/dAtPn0GsyUKjePk1klh2f0uIPWxIUccV33riORvk8Sc/SRI8tRZKGPTuiGtP4Trpzgd+3t6s1TI+C73Jt9gvs2ef6aCoQLUqfTMS0VvPjko4YaPgH8XExC2kOtfGlY93uRFRDQPJOZnbMhYkzg0QUsSZYL4jeUOH6x87DJkQ8UUDVVz7M/uN8TA/OPcAsg8cqOMRB1hw2UO1YB0l/OE5n/feFl1cRzzKFtRs29tJtCLLGsUbjsf9OuO9VkfxvYkv9PXljft2uN
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(366004)(39860400002)(376002)(136003)(86362001)(76116006)(6486002)(110136005)(8676002)(6512007)(6506007)(66476007)(54906003)(64756008)(316002)(31686004)(4326008)(31696002)(7416002)(2906002)(26005)(478600001)(36756003)(2616005)(66446008)(53546011)(186003)(4744005)(5660300002)(66946007)(71200400001)(66556008)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 wPbKeebVZ1X3J3qsTHV3hnRdGVEgE17ISEySXFkwRuGzT+WGC1ozMJhfHLBzbNN4ET+6VjjsNFL1GO0BYfrCn0SW5HKtilS1lnrLawHfaGX8S7fVXaxTeNwM31I+KZkwTnKgPAldPN6ypdeeODPIr4SL6wRtnsND+TseVbqlysKJ31+kRyoybmPmscGbPQ5bnIJmXAGdkgWrrk0i6x6Q5ZP9cVCpN/gp/9zsZG4zdgDaemAcwlPQcRsEXBKZwSeNR1iljEcnqhyPI4e8GY+Y/uvM+a3Xfz1yZuoksbFPirC6No8j2246c1Ju5NcHGDxpnmgcWZGqcPljzl1NYmTd0U3mUKsxQgPQ8xR85yUgaO1AX2ZCGh8+n6scYp32jJksFdGTdW7vFyr14mVDrAiLvqbknsZqHzclmsq/QcnGuai6/an969c8WiGuOvZrOTvRggDgl5FJAAMAI1wFISLszrrhvKbbVfyOkzDQhyObTPbdAgBjD1L9f4rhBFvVGnMycEbBxqGUv/zP/sfhY1Yt42TZFkv221CVfHQPbtu6GR7Z8d9kqbsLMmUveZsJ/46woOFJGoz2n91mr5j2seWns1HxQnIxvwuE5vwhTCIZFncZYUo94h8XHiz9mR/HXxlUsV8EYmD/BJdq212nqtnjXQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <ADA9D3444E50854CA83A8398444926B0@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e03747c2-3a9f-4584-04b4-08d8870a23bc
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Nov 2020 12:54:49.0727
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 05lJj3vThiqfbd6cUmb36dgpwU7pM7KXwjsX1QdGtMb7pFFf2OL8Sq1s9tWmCz1z2dGzbAE7GDYIsaAFq0SAAEAjEwO3Uu7gdwt3RH1Un0BWMw13BlAFyqeZKZiQ2igI
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0302MB2657
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-12_05:2020-11-12,2020-11-12 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011120075

On 11/11/20 4:44 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:25PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> Host bridge controller's ECAM space is mapped into Domain-0's p2m,
>> thus it is not possible to trap the same for vPCI via MMIO handlers.
>> For this to work we need to unmap those mappings in p2m.
>>
>> TODO (Julien): It would be best if we avoid the map/unmap operation.
>> So, maybe we want to introduce another way to avoid the mapping.
>> Maybe by changing the type of the controller to "PCI_HOSTCONTROLLER"
>> and checking if this is a PCI hostcontroller avoid the mapping.
> I know very little about Arm to be able to provide meaningful comments
> here. I agree that creating the maps just to remove them afterwards is
> not the right approach, we should instead avoid those mappings from
> being created in the first place.
Agreed, we'll need to find an acceptable way of doing so
> Roger.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 12:55:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 12:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25814.53833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC8n-0003hG-Lq; Thu, 12 Nov 2020 12:55:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25814.53833; Thu, 12 Nov 2020 12:55:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdC8n-0003h9-Id; Thu, 12 Nov 2020 12:55:53 +0000
Received: by outflank-mailman (input) for mailman id 25814;
 Thu, 12 Nov 2020 12:55:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s91y=ES=nvidia.com=jgg@srs-us1.protection.inumbo.net>)
 id 1kdC8m-0003h2-Ml
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 12:55:52 +0000
Received: from nat-hk.nvidia.com (unknown [203.18.50.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28044785-ecc8-40f8-b6ff-b288ba880ad0;
 Thu, 12 Nov 2020 12:55:51 +0000 (UTC)
Received: from HKMAIL102.nvidia.com (Not Verified[10.18.92.9]) by
 nat-hk.nvidia.com (using TLS: TLSv1.2, AES256-SHA)
 id <B5fad30d30000>; Thu, 12 Nov 2020 20:55:47 +0800
Received: from HKMAIL104.nvidia.com (10.18.16.13) by HKMAIL102.nvidia.com
 (10.18.16.11) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Thu, 12 Nov
 2020 12:55:36 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (104.47.66.48) by
 HKMAIL104.nvidia.com (10.18.16.13) with Microsoft SMTP Server (TLS)
 id
 15.0.1473.3 via Frontend Transport; Thu, 12 Nov 2020 12:55:36 +0000
Received: from DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12)
 by DM5PR12MB1881.namprd12.prod.outlook.com (2603:10b6:3:10f::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Thu, 12 Nov
 2020 12:55:33 +0000
Received: from DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::cdbe:f274:ad65:9a78]) by DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::cdbe:f274:ad65:9a78%7]) with mapi id 15.20.3499.032; Thu, 12 Nov 2020
 12:55:33 +0000
Received: from mlx.ziepe.ca (156.34.48.30) by
 MN2PR15CA0022.namprd15.prod.outlook.com (2603:10b6:208:1b4::35) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Thu, 12 Nov 2020 12:55:32 +0000
Received: from jgg by mlx with local (Exim 4.94)	(envelope-from
 <jgg@nvidia.com>)	id 1kdC8R-003fG5-Cp; Thu, 12 Nov 2020 08:55:31 -0400
X-Inumbo-ID: 28044785-ecc8-40f8-b6ff-b288ba880ad0
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KyIW7IOM0+++D4M++knuxpj7alhdYeaG22sc6dnPQJvngD6iTQvoKuFK3btnmgz2x5yqp2KUXW8gpJDEBdEvtEjNVEU+Zg1UHlyDVRTAXy/CRaAAshslESeA8XtvD9CNUSi5fQw1W/emZlcqGsBZni9eCp7dwc2n18TqXLtRVzzftvL8iAHZnki7/0gMua+yf1LU6I2Vi9Ryc9MtQ3bYLON/ejRZETiohSE8rkOel8ShqC5HGlLy5WVj5K49GrvhTwQ9bnRTURwNdW9PHm+9+Lm3Ds27QwQru7/BDW33MSTzo8ZlWlhMymu3rYrEycSfJsaw8+nCgsb+WHf+Hx7KEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=odkGYsTueGG1Oj1E5+/FEqJBoN5zCA2wnbKnOv3SBSE=;
 b=joeixh4hu5ooTwctsGDAchHhtySQy59tD6tksNFsMczPvsiyyWuzacJauPFFKPZyCHv0IzmOqJaVJq3IwPU71DE7W5hi4kyQEyR6pJIxlSE2XuQcPvjQ4yLf3y6AVksI30sPG4CH/p0/c42VC6zzDIaNOS3iyIN7a4GQHtRk45P0ZWoJCF7P/gyZpqD1uxRRmTdun+D7uF0L7uTvzzcjoMqR8cOI8o/r2s2Lw7FgyBfaTwc4uBJ21Wt1CdBu5vAuD3ZVNz5IjO6f4b1uqtB6rxz5Wi0BUV7nwqfmviq43rTvsR0bX8YHYEt8oDBApe84NJk4YnGpc9atCfbvSXNNhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;
 dkim=pass header.d=nvidia.com; arc=none
Received: from DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12)
 by DM5PR12MB1881.namprd12.prod.outlook.com (2603:10b6:3:10f::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Thu, 12 Nov
 2020 12:55:33 +0000
Received: from DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::cdbe:f274:ad65:9a78]) by DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::cdbe:f274:ad65:9a78%7]) with mapi id 15.20.3499.032; Thu, 12 Nov 2020
 12:55:33 +0000
Date: Thu, 12 Nov 2020 08:55:31 -0400
From: Jason Gunthorpe <jgg@nvidia.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ziyad Atiyyeh <ziyadat@nvidia.com>,
	Itay Aveksis <itayav@nvidia.com>, Moshe Shemesh <moshe@nvidia.com>
CC: LKML <linux-kernel@vger.kernel.org>, <x86@kernel.org>, Joerg Roedel
	<joro@8bytes.org>, <iommu@lists.linux-foundation.org>,
	<linux-hyperv@vger.kernel.org>, Haiyang Zhang <haiyangz@microsoft.com>, "Jon
 Derrick" <jonathan.derrick@intel.com>, Lu Baolu <baolu.lu@linux.intel.com>,
	Wei Liu <wei.liu@kernel.org>, "K. Y. Srinivasan" <kys@microsoft.com>, Stephen
 Hemminger <sthemmin@microsoft.com>, Steve Wahl <steve.wahl@hpe.com>, Dimitri
 Sivanich <sivanich@hpe.com>, Russ Anderson <rja@hpe.com>,
	<linux-pci@vger.kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, Lorenzo
 Pieralisi <lorenzo.pieralisi@arm.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, <xen-devel@lists.xenproject.org>, Juergen Gross
	<jgross@suse.com>, "Boris Ostrovsky" <boris.ostrovsky@oracle.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Marc Zyngier <maz@kernel.org>, Greg
 Kroah-Hartman <gregkh@linuxfoundation.org>, "Rafael J. Wysocki"
	<rafael@kernel.org>, "Megha Dey" <megha.dey@intel.com>, Dave Jiang
	<dave.jiang@intel.com>, Alex Williamson <alex.williamson@redhat.com>, Jacob
 Pan <jacob.jun.pan@intel.com>, Baolu Lu <baolu.lu@intel.com>, Kevin Tian
	<kevin.tian@intel.com>, Dan Williams <dan.j.williams@intel.com>
Subject: REGRESSION: Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare
 for device MSI
Message-ID: <20201112125531.GA873287@nvidia.com>
References: <20200826111628.794979401@linutronix.de>
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
In-Reply-To: <20200826111628.794979401@linutronix.de>
X-ClientProxiedBy: MN2PR15CA0022.namprd15.prod.outlook.com
 (2603:10b6:208:1b4::35) To DM6PR12MB3834.namprd12.prod.outlook.com
 (2603:10b6:5:14a::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from mlx.ziepe.ca (156.34.48.30) by MN2PR15CA0022.namprd15.prod.outlook.com (2603:10b6:208:1b4::35) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend Transport; Thu, 12 Nov 2020 12:55:32 +0000
Received: from jgg by mlx with local (Exim 4.94)	(envelope-from <jgg@nvidia.com>)	id 1kdC8R-003fG5-Cp; Thu, 12 Nov 2020 08:55:31 -0400
X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1;
	t=1605185747; bh=eQ25xNlY3rVx34V0CM4Bnq31qnmUxB4lNkLgF/NTCTU=;
	h=ARC-Seal:ARC-Message-Signature:ARC-Authentication-Results:Date:
	 From:To:CC:Subject:Message-ID:References:Content-Type:
	 Content-Disposition:Content-Transfer-Encoding:In-Reply-To:
	 X-ClientProxiedBy:MIME-Version:
	 X-MS-Exchange-MessageSentRepresentingType:X-LD-Processed;
	b=VsWoaT5gF6t9jiuouhEpEGN7Qg7rtC2IuOTe3e6QAc+/J5kvbsUGMZeN9Kt4v340c
	 KVbRoCH9TR2N9HqJF4VZ84B6k+51zu6l3inP6l0HJqan+cR7kEEn/+Cqpds/KwjSEr
	 BJf+0jPEtmRpSIm1ToZML8tn4g1PO1iVyJXs6pHwlKZwUso5Cn8iCe6LGFBRQ1tEUP
	 95Nt8mKzj2PmKuTTS2AowvWwB9kBFIAxLC7TFNHrpa0/KKU1hbM20l9NiG3hCradK7
	 hY1uGGNICjBrlcYzyYVoaC3ibvjfkHv7m8xxytVcHuVd6aNpu8YGZow8Vv5rsi+u+9
	 YDb6R/lLzsueQ==

On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)
> based devices in a halfways architecture independent way.

Hi Thomas,

Our test team has been struggling with a regression on bare metal
SRIOV VFs since -rc1 that they were able to bisect to this series

This commit tests good:

5712c3ed549e ("Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc")

This commit tests bad:

981aa1d366bf ("PCI: MSI: Fix Kconfig dependencies for PCI_MSI_ARCH_FALLBACKS")

They were unable to bisect further into the series because some of the
interior commits don't boot :(

When we try to load the mlx5 driver on a bare metal VF it gets this:

[Thu Oct 22 08:54:51 2020] DMAR: DRHD: handling fault status reg 2
[Thu Oct 22 08:54:51 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault index 1600 [fault reason 37] Blocked a compatibility format interrupt request
[Thu Oct 22 08:55:04 2020] mlx5_core 0000:42:00.1 eth4: Link down
[Thu Oct 22 08:55:11 2020] mlx5_core 0000:42:00.1 eth4: Link up
[Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: mlx5_cmd_eq_recover:264:(pid 3390): Recovered 1 EQEs on cmd_eq
[Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: wait_func_handle_exec_timeout:1051:(pid 3390): cmd0: CREATE_EQ(0x301) recovered after timeout
[Thu Oct 22 08:55:54 2020] DMAR: DRHD: handling fault status reg 102
[Thu Oct 22 08:55:54 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault index 1600 [fault reason 37] Blocked a compatibility format interrupt request

If you have any idea Ziyad and Itay can run any debugging you like.

I suppose it is because this series is handing out compatibility
addr/data pairs while the IOMMU is set up to only accept remapped ones
from SRIOV VFs?

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:03:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25828.53846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCG7-0004nc-Ig; Thu, 12 Nov 2020 13:03:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25828.53846; Thu, 12 Nov 2020 13:03:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCG7-0004nT-Fn; Thu, 12 Nov 2020 13:03:27 +0000
Received: by outflank-mailman (input) for mailman id 25828;
 Thu, 12 Nov 2020 13:03:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdCG6-0004nO-GL
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:03:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cddbc6f8-4918-4650-8bc8-abf385dff8af;
 Thu, 12 Nov 2020 13:03:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 94E31AC0C;
 Thu, 12 Nov 2020 13:03:23 +0000 (UTC)
X-Inumbo-ID: cddbc6f8-4918-4650-8bc8-abf385dff8af
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605186203;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UAxGVC/47tuxg1fEph32w2VpULUuKSdgkSzc7H9SKys=;
	b=USJLlEGKClafSE1a4DM4yzHea/I0CPOdzaZbufFrSRlSM9yJcFrNTt6QMTjYFAfP6U2Z7Q
	ijbtiGgrYuYA7wTjBlkqWZerZ3z/v5WnlL+CjuoaHUkNpWlRaoB1WtFKfjUWvFSOTKsITD
	3jjAq7laHibAcLX/3RpBvQNzH1EK13g=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 94E31AC0C;
	Thu, 12 Nov 2020 13:03:23 +0000 (UTC)
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
 <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
 <2fe880fb-43d6-8479-278f-a2a38c5b3a9f@suse.com>
 <f52adcb9-cff7-3bf9-ab98-881e471b0c9a@suse.com>
 <11ead195-f3a9-f9ba-58b6-9ae96650cf07@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <236b34e1-01c8-f4e9-01dc-33e8d99a239e@suse.com>
Date: Thu, 12 Nov 2020 14:03:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <11ead195-f3a9-f9ba-58b6-9ae96650cf07@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Yn4wmR8JTfOGmxUAzhi5ckx4vnY3VTsL3"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Yn4wmR8JTfOGmxUAzhi5ckx4vnY3VTsL3
Content-Type: multipart/mixed; boundary="ZBafHWdhoy3n3s6GmaZYUPgzlGBxbR6jl";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <236b34e1-01c8-f4e9-01dc-33e8d99a239e@suse.com>
Subject: Re: [PATCH v4 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
References: <20201109095021.9897-1-jgross@suse.com>
 <20201109095021.9897-3-jgross@suse.com>
 <d55adbc0-8a98-dd5c-c204-2ec11955c356@suse.com>
 <288804e4-75e6-6600-9634-8c0ea7a06c22@suse.com>
 <b84d687e-0aab-d48f-c068-1852cc1075b2@suse.com>
 <6229914c-bc76-2670-a272-ab0603f612cc@suse.com>
 <2fe880fb-43d6-8479-278f-a2a38c5b3a9f@suse.com>
 <f52adcb9-cff7-3bf9-ab98-881e471b0c9a@suse.com>
 <11ead195-f3a9-f9ba-58b6-9ae96650cf07@suse.com>
In-Reply-To: <11ead195-f3a9-f9ba-58b6-9ae96650cf07@suse.com>

--ZBafHWdhoy3n3s6GmaZYUPgzlGBxbR6jl
Content-Type: multipart/mixed;
 boundary="------------779065727E6F664CDC070B70"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------779065727E6F664CDC070B70
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.11.20 12:36, Jan Beulich wrote:
> On 12.11.2020 12:27, Jürgen Groß wrote:
>> On 12.11.20 12:05, Jan Beulich wrote:
>>> On 12.11.2020 11:48, Jürgen Groß wrote:
>>>> On 12.11.20 11:23, Jan Beulich wrote:
>>>>> On 11.11.2020 16:48, Jürgen Groß wrote:
>>>>>> On 11.11.20 16:45, Jan Beulich wrote:
>>>>>>> On 09.11.2020 10:50, Juergen Gross wrote:
>>>>>>>>      static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
>>>>>>>>      {
>>>>>>>>      	int xen_mode, ovf;
>>>>>>>>
>>>>>>>>      	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
>>>>>>>>      	xen_mode = ring_0(regs);
>>>>>
>>>>> Unrelated to the patch here (i.e. just as an observation), this
>>>>> use of ring_0() looks bogus when the NMI occurred in HVM guest
>>>>> mode.
>>>>
>>>> An NMI in an HVM guest due to oprofile would be a VMEXIT with NMI
>>>> reason, or just be handled completely inside the guest, right?
>>>
>>> Yes, and in the former case for VMX it would be handed on to do_nmi(),
>>> with the guest register state. For SVM it would get handled on the
>>> next STGI, i.e. would indeed never surface from HVM guest mode.
>>>
>>>> I don't see how this test should ever result in xen_mode being
>>>> false for an HVM guest.
>>>
>>> I think, because of hvm_invalidate_regs_fields(), on VMX it would be
>>> consistently true in release builds and consistently false in debug
>>> ones.
>>
>> Ah, okay. I searched for do_nmi(), but the vmx code uses the exception
>> table instead.
>>
>> So I guess this should be:
>>
>> xen_mode = !guest_mode(regs);
>
> Yes, I think so. Just that guest_mode() also has its issues (my patch
> "x86: refine guest_mode()" improving it at least some is still pending
> Andrew's go / no-go / improvement suggestions), so whether it's
> suitable to use here may need some careful evaluation.

I'll leave the test as is for now.

We can revisit it when your patch has been committed.


Juergen


--------------779065727E6F664CDC070B70--

--ZBafHWdhoy3n3s6GmaZYUPgzlGBxbR6jl--


--Yn4wmR8JTfOGmxUAzhi5ckx4vnY3VTsL3--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25835.53858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCJx-0004xj-58; Thu, 12 Nov 2020 13:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25835.53858; Thu, 12 Nov 2020 13:07:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCJx-0004xc-1S; Thu, 12 Nov 2020 13:07:25 +0000
Received: by outflank-mailman (input) for mailman id 25835;
 Thu, 12 Nov 2020 13:07:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kdCJv-0004xX-Lf
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:07:23 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 072e60ef-78ad-47bb-8ebd-fb63d5d443c4;
 Thu, 12 Nov 2020 13:07:22 +0000 (UTC)
X-Inumbo-ID: 072e60ef-78ad-47bb-8ebd-fb63d5d443c4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605186441;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=7Gv/6FDaDrPRhhJAoYiZ+Db9VnDK/9SfH6tR5sfFb8c=;
  b=edjlq4yrgw0nlYn9weeDbUiGzIczBTDYiRnPjwd7VmPBsxeGLldH7lsw
   di0X+CqXhQDR/rkFCItqF4LLaSe4fZ9t7tOnpPQaZGQJ7nBV4gFDkq6qH
   Y2qZNEQnGUnHebhjX4kjjjy2t70TAXf4vcItiOFKAvCHhx0KgfSWGjyTO
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Zah3bmuYtVEbuoqncFXydUjpN0X6F0qAXfIYcb9LHx7RMSHZW47weVpRHq61zBe3ul81ipZOmW
 v50V9CDu8mk84vd7ehaYy/3GmowXOVCocQey1kPPjjCxwEYu3ASOguasxGTJwe+Q2ROTOz1o8x
 3OwIqknsQl1AkyV5rCDYIzYWc2abLOPe9H2f7KJE+Ty9PKAFCqLjgM2WErhy3S0b5xPn1MVbX8
 X6JFY+XIxWltrU1cLQ8yx6T9xMkHfpopO+tIXGNyJ6hKd5Y8Tn7pjBMM53YjvFGZ3aCJUnXT5d
 DLY=
X-SBRS: None
X-MesageID: 32147879
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,472,1596513600"; 
   d="scan'208";a="32147879"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WPp5iWTULhmpssm2pqz3w2vZjCSKKLEICwrMKxXeOjOy6xBe0Erj4JSKZCWNxn3pzseX9/qeJ9i2vTstXPBW7xBV3ixxGRugt5J7WbRVc9MKbAE7Be5x2f7tTKoPagX+CiI978ufNsMK5E/7JqBxdjVXYZvwWNCViEBvUD+EwMVaK5A/FDDDiG9I8l+ZdzSzNZFytl6YpdMU4+Lm3pRMSfE74F5EXPow0toFNtiMY+1akAhqDp2Cpy1o3oBLqpyTPRLGejNOqCb9pB+y2mFuPbni8+/lLRVOzJ0lQdqrEAX8TEqXVvkNR0XUwLXbtCNCGda8CuPhI9j1bhjUrmHw8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k7KauaaH7z6ZEv3zdHOFzAM00pLUAq41via9BBBSiBY=;
 b=kpMEGplTLqyczvEOZj6hzDpWbMVwrvj3MOQwq0AcA4AKqr/mCx/h2bEENEk0whBehiNbpMnJeBygwl7mlVke/IWjTTuibBaXDzqGKC7eV2oFyb7aeUVcylmXegL974iOakITRRzcbboLIlBrcSBMBv1nob/7ZCj1mNbbCfctELhFL1q7NUb33+LCn+5qQo+Fsb9gy3Dic/y3u6aCvU+YE6uhGxczRLmxxVkelNue2h9SJOVMpXP9u+RnKqKl6vL4nhFJFLkcWRst10uJFJ3zjhK2W3PX9hAvQz8fKR6HjoZLMqaKfrTggUN4ezpwtagrUjF4C2oUV3IQSRoKzFY1lw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k7KauaaH7z6ZEv3zdHOFzAM00pLUAq41via9BBBSiBY=;
 b=Snvs1cQrrtQ1yyldqoPmXJst99FDr8l9/9nH1g/+r9PMbk5wQnOUAqy1V9eIaiMdmlA82MYf4XxTfEfFNCfj6wGD+ekaM+kGknVPHm4UhVmr2b03wXotv/qsecSvmeZPesu4vIj8t4PQpWHyCu0OYMiyunNGI/PBHgCDpo9T1BI=
Date: Thu, 12 Nov 2020 14:07:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201112130709.r3acpkrkyck6arul@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
 <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
 <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
X-ClientProxiedBy: LO4P123CA0027.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:151::14) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8f9b5eac-1ee2-431a-1eff-08d8870be16b
X-MS-TrafficTypeDiagnostic: SA0PR03MB5657:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SA0PR03MB565728422DE1F2A5E22DD0B78FE70@SA0PR03MB5657.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: fc6jrdO1TV3u/yiJwnzXHSxF2fmugeI2WjHZPTI+qwYb0NFPEireVRppOnfMnP/3EV7F8D+0rFPNnX8qdYKrR0z9jVFgOX+3RML+VgfAU6yApXBElMHYZ+ZBtZXAbKEPui9MAg4lgu7GnVcCVFFbNfrQMtXv2ay7vFR1MNQxsj/i/UTVIxk04C38tAYlHfTMx/xw/ubFy+UkjsuDNXTi4Uky0d+56zNoo4L5BdxlqmsSRR9z8puGfdbmMqPo5zDs+/c9PDA8tPG9vtZci/9KXk6M5hZnEYQ4GUgnIOj3aubW8si8frCpqm6Kz1IZLfsXok5VABf40hOCV+4QTca3yqFvKf47LL1TV2zGL//yfB2Yk7YDhZh4NyfqU8J1G+bA
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(376002)(346002)(396003)(136003)(39860400002)(66946007)(54906003)(66556008)(66476007)(4326008)(478600001)(1076003)(2906002)(83380400001)(85182001)(5660300002)(33716001)(8676002)(9686003)(8936002)(86362001)(6916009)(6486002)(186003)(53546011)(26005)(6496006)(6666004)(956004)(16526019)(316002)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: l5bQvJ2k2JPhyuKREPVc5vQKqc5OFoRSrhsfpg5DG37XKARInfw+I51xZ6NIoRUgqLVsM62o5xE5v+snBlpIvDbqrP36j6z7ZdP5lyY/K8/U15HrMsChLjQmog1CgEskVnB6VrrjDoA1yanmllwqGkP2OFtKmcdP5bmMdZrLVrUNOukeOlPOzG17dniFHp0f5UmawTAdPd94+BYI7gTl8F2JH/wlUNTyGJDv6lqTfWlrl6N6d1vwBhDqYdDt2rjmR4YqQZCdGpzHt11McR4Yq4uwCgkVsm0lJuXdcI8pJ83cMEW1H8PqsHhgayALC9YfXmrt9iV7UvWJQzlS8HvsCtMwvs++sf4WO04e44iXET3JKoTPE7pa2qw6Shov1pBwFtrkaHgYrCJ61QLzrOdBLd7d+4RKZZ1QvQLKrKBGWHJ3S7ms1MD4dYt9WMxRYHa5jquwTNjXbWfLK7j+BR/C0308axNWeYB4TyTQuKvzMSEHQ/f7+nRbJ1s8B+pYV6u3KG05OpAdsch57FwpZZRWxFnpLK5DhiVl8+mRZ3CSv3lBi5Cq4vZZlj3U4FJWMcrbltpPAUSq7bW/7XwhaJCL/mJjrXb8FVKXGr73PFwvYxwPuTpb4+KiFE0d8r1EAA+qKeZqQT/rxOFOrqg1aktL5w==
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f9b5eac-1ee2-431a-1eff-08d8870be16b
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Nov 2020 13:07:17.1698
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OVmI+qdT1HjkmM1yRa+oEUdiTiWfJ9jzK+k2M7f28k1j61aAcBxOlLULm1qipos47V/1f77PIqzLpQqaROELFQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5657
X-OriginatorOrg: citrix.com

On Thu, Nov 12, 2020 at 01:29:33PM +0100, Jan Beulich wrote:
> On 11.11.2020 13:17, Roger Pau Monné wrote:
> > On Tue, Nov 10, 2020 at 03:50:44PM +0100, Jan Beulich wrote:
> >> On 10.11.2020 14:59, Roger Pau Monné wrote:
> >>> On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
> >>>> --- a/xen/arch/x86/mm/p2m-pt.c
> >>>> +++ b/xen/arch/x86/mm/p2m-pt.c
> >>>> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
> >>>>  {
> >>>>      struct domain *d = p2m->domain;
> >>>>      struct vcpu *v = current;
> >>>> -    int rc = 0;
> >>>>  
> >>>>      if ( v->domain != d )
> >>>>          v = d->vcpu ? d->vcpu[0] : NULL;
> >>>>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
> >>>>           p2m_is_nestedp2m(p2m) )
> >>>> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
> >>>> +    {
> >>>> +        unsigned int oflags;
> >>>> +        mfn_t omfn;
> >>>> +        int rc;
> >>>> +
> >>>> +        paging_lock(d);
> >>>> +
> >>>> +        if ( p2m->write_p2m_entry_pre )
> >>>> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
> >>>> +
> >>>> +        oflags = l1e_get_flags(*p);
> >>>> +        omfn = l1e_get_mfn(*p);
> >>>> +
> >>>> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
> >>>> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
> >>>> +                              omfn, level);
> >>>> +        if ( rc )
> >>>> +        {
> >>>> +            paging_unlock(d);
> >>>> +            return rc;
> >>>> +        }
> >>>> +
> >>>> +        safe_write_pte(p, new);
> >>>> +
> >>>> +        if ( p2m->write_p2m_entry_post )
> >>>> +            p2m->write_p2m_entry_post(p2m, oflags);
> >>>> +
> >>>> +        paging_unlock(d);
> >>>> +
> >>>> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
> >>>> +             (oflags & _PAGE_PRESENT) &&
> >>>> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
> >>>> +             /*
> >>>> +              * We are replacing a valid entry so we need to flush nested p2ms,
> >>>> +              * unless the only change is an increase in access rights.
> >>>> +              */
> >>>> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
> >>>> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
> >>>> +            p2m_flush_nestedp2m(d);
> >>>
> >>> It feels slightly weird to have a nested p2m hook post, and yet have
> >>> nested specific code here.
> >>>
> >>> Have you considered if the post hook could be moved outside of the
> >>> locked region, so that we could put this chunk there in the nested p2m
> >>> case?
> >>
> >> Yes, I did, but I don't think the post hook can be moved out. The
> >> only alternative therefore would be a 3rd hook. And this hook would
> >> then need to be installed on the host p2m for nested guests, as
> >> opposed to nestedp2m_write_p2m_entry_post, which gets installed in
> >> the nested p2m-s. As said in the description, the main reason I
> >> decided against a 3rd hook is that I suppose the code here isn't
> >> HAP-specific (while prior to this patch it was).
> > 
> > I'm not convinced the guest TLB flush needs to be performed while
> > holding the paging lock. The point of such flush is to invalidate any
> > intermediate guest visible translations that might now be invalid as a
> > result of the p2m change, but the paging lock doesn't affect the guest
> > in any way.
> > 
> > It's true that the dirty_cpumask might change, but I think we only
> > care that when returning from the function there are no stale cache
> > entries that contain the now invalid translation, and this can be
> > achieved equally by doing the flush outside of the locked region.
> 
> I agree with all this. If only it was merely about TLB flushes. In
> the shadow case, shadow_blow_all_tables() gets invoked, and that
> one - looking at the other call sites - wants the paging lock held.

You got me confused here; I think you meant shadow_blow_tables?

The post hook for shadow could take the lock again, as I don't think
the removal of the tables strictly needs to be done inside the same
locked region?

Something to consider from a performance PoV.

> Additionally moving the stuff out of the locked region wouldn't
> allow us to achieve the goal of moving the nested flush into the
> hook, unless we introduced further hook handlers to be installed
> on the host p2m-s of nested guests.

Right, or else we would need to add that chunk to the
non-nestedp2m hook as well?

Maybe you could join both the nested and non-nested hooks and use a
different dirty bitmap for the flush?
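The flush condition in the quoted hunk ("flush nested p2ms unless the
only change is an increase in access rights") can be illustrated with a
small stand-alone sketch. This is a simplification, not Xen's actual
perms_strictly_increased() (which masks the relevant PTE permission
bits, including NX); the flag values and need_nested_flush() name here
are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_PRESENT 0x1u
#define PAGE_RW      0x2u

/*
 * Simplified: true when every permission granted by the old flags is
 * still granted by the new ones, and at least one was added -- i.e.
 * access rights strictly increased.
 */
static bool perms_strictly_increased(uint32_t of, uint32_t nf)
{
    return (of | nf) == nf && of != nf;
}

/*
 * Policy from the quoted hunk: replacing a present entry needs a
 * nested flush unless the MFN is unchanged and permissions only grew,
 * in which case existing cached translations remain valid.
 */
static bool need_nested_flush(uint64_t omfn, uint32_t of,
                              uint64_t nmfn, uint32_t nf)
{
    if ( !(of & PAGE_PRESENT) )
        return false;              /* no valid entry to invalidate */
    return omfn != nmfn || !perms_strictly_increased(of, nf);
}
```

For instance, widening a read-only mapping to read-write in place needs
no flush, while shrinking permissions or remapping to another MFN does.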

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:14:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:14:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25845.53877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQp-00060L-9L; Thu, 12 Nov 2020 13:14:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25845.53877; Thu, 12 Nov 2020 13:14:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQp-00060A-1m; Thu, 12 Nov 2020 13:14:31 +0000
Received: by outflank-mailman (input) for mailman id 25845;
 Thu, 12 Nov 2020 13:14:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdCQm-0005zc-S4
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:14:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23a42ec6-0214-4142-b0a1-782f1d295f5f;
 Thu, 12 Nov 2020 13:14:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CC8ECAC0C;
 Thu, 12 Nov 2020 13:14:26 +0000 (UTC)
X-Inumbo-ID: 23a42ec6-0214-4142-b0a1-782f1d295f5f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605186866;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=/NumeyEIq/GYgmAInoQJWuElqUDY9EWBkwA0ORU3Hug=;
	b=MPp3qQnDiV9oqyNZqnoKSYdeGjLbMaaYWIblM2YWnpD7mCsCn19g9bG2R5GMeKH2znFXHt
	48AVpaNjql/Krfyb9YFGE3iRQ6N6HrtY6/+sC4r2ftlZDkwN5JOU/wBZwtJS1nSwf4wSjU
	qOujLawr0Dg0hXpmZL+iInWbltmZaz0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 0/3] xen/x86: implement NMI continuation
Date: Thu, 12 Nov 2020 14:14:21 +0100
Message-Id: <20201112131424.9930-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move sending of a virq event for oprofile to the local vcpu from NMI
to normal interrupt context.

This has been tested with a small test patch using the continuation
framework of patch 1 for all NMIs and doing a print to console in
the continuation handler.

Version 1 of this small series was sent to the security list before.

Changes in V3:
- switched to self-IPI instead of softirq
- added patch 3

Changes in V4:
- use less generic approach

Changes in V5:
- addressed comments

Juergen Gross (3):
  xen/x86: add nmi continuation framework
  xen/oprofile: use NMI continuation for sending virq to guest
  xen/x86: issue pci_serr error message via NMI continuation

 xen/arch/x86/apic.c             | 13 +++++++---
 xen/arch/x86/genapic/x2apic.c   |  1 +
 xen/arch/x86/oprofile/nmi_int.c | 19 ++++++++++++--
 xen/arch/x86/smp.c              |  1 +
 xen/arch/x86/traps.c            | 46 ++++++++++++++++++++++++++++-----
 xen/include/asm-x86/nmi.h       | 11 +++++++-
 xen/include/asm-x86/softirq.h   |  5 ++--
 xen/include/asm-x86/xenoprof.h  |  7 +++++
 8 files changed, 88 insertions(+), 15 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:14:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25844.53870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQo-0005zt-Ub; Thu, 12 Nov 2020 13:14:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25844.53870; Thu, 12 Nov 2020 13:14:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQo-0005zm-Pt; Thu, 12 Nov 2020 13:14:30 +0000
Received: by outflank-mailman (input) for mailman id 25844;
 Thu, 12 Nov 2020 13:14:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdCQm-0005zd-Rb
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:14:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a20110c-d835-4432-bc95-9ad6e4a819ca;
 Thu, 12 Nov 2020 13:14:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0639EAD31;
 Thu, 12 Nov 2020 13:14:27 +0000 (UTC)
X-Inumbo-ID: 4a20110c-d835-4432-bc95-9ad6e4a819ca
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605186867;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fMXlwXyF+4tXesHMIFZzu/eTSNLM+Yoxq+N7JqzPdL0=;
	b=JDE5qnZR4u+Zq1WJB6ZqJyBKZJFybw6R6w8kazehxa0oN32V0eBsZPGb/WEeyFqkeVMZuM
	Bm63Y9xjhWJpJ/YsMXTckP/rKNzlySF4lrbHvUE1jGM/FBvcgJZOHdMhlaoueHAJA7GQ11
	kjb3KXmNwwdUxAb92B06K8zAeJjrIT4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 2/3] xen/oprofile: use NMI continuation for sending virq to guest
Date: Thu, 12 Nov 2020 14:14:23 +0100
Message-Id: <20201112131424.9930-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201112131424.9930-1-jgross@suse.com>
References: <20201112131424.9930-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of calling send_guest_vcpu_virq() from NMI context, use the
NMI continuation framework for that purpose. This avoids taking locks
in NMI mode.
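The handoff in the diff below (NMI handler records the target vcpu per
CPU, the continuation consumes it exactly once via xchg) can be
sketched in portable C11 atomics. This is an illustrative stand-in, not
Xen code: struct vcpu, nmi_record() and the virqs_sent counter are
hypothetical, and the real patch uses a plain per-CPU check-and-store
on the producer side since the NMI cannot race with itself:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct vcpu { int id; };                     /* hypothetical stand-in */

static _Atomic(struct vcpu *) nmi_cont_vcpu; /* one slot per "CPU" */
static int virqs_sent;

/* "NMI" side: queue the vcpu only if no request is already pending. */
static int nmi_record(struct vcpu *v)
{
    struct vcpu *expected = NULL;

    return atomic_compare_exchange_strong(&nmi_cont_vcpu, &expected, v);
}

/* Continuation side: atomically take and clear the pending vcpu, so
 * the virq is sent at most once per recorded event. */
static int nmi_oprofile_send_virq(void)
{
    struct vcpu *v = atomic_exchange(&nmi_cont_vcpu, NULL);

    if ( v )
        virqs_sent++;           /* stands in for send_guest_vcpu_virq() */

    return v != NULL;
}
```

The exchange-with-NULL mirrors the patch's xchg(): whichever path runs
first wins, and a second continuation finds the slot empty.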

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- use Linux coding style (Jan Beulich)
- assume races could happen (Jan Beulich)

V4:
- rework to less generic approach

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/oprofile/nmi_int.c | 19 +++++++++++++++++--
 xen/arch/x86/traps.c            |  4 ++++
 xen/include/asm-x86/xenoprof.h  |  7 +++++++
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 0f103d80a6..a13bd82915 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -38,6 +38,8 @@ static unsigned long saved_lvtpc[NR_CPUS];
 
 static char *cpu_type;
 
+static DEFINE_PER_CPU(struct vcpu *, nmi_cont_vcpu);
+
 static int passive_domain_msr_op_checks(unsigned int msr, int *typep, int *indexp)
 {
 	struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -83,14 +85,27 @@ void passive_domain_destroy(struct vcpu *v)
 		model->free_msr(v);
 }
 
+bool nmi_oprofile_send_virq(void)
+{
+	struct vcpu *v = xchg(&this_cpu(nmi_cont_vcpu), NULL);
+
+	if (v)
+		send_guest_vcpu_virq(v, VIRQ_XENOPROF);
+
+	return v;
+}
+
 static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 {
 	int xen_mode, ovf;
 
 	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
 	xen_mode = ring_0(regs);
-	if ( ovf && is_active(current->domain) && !xen_mode )
-		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
+	if (ovf && is_active(current->domain) && !xen_mode &&
+	    !this_cpu(nmi_cont_vcpu)) {
+		this_cpu(nmi_cont_vcpu) = current;
+		trigger_nmi_continuation();
+	}
 
 	if ( ovf == 2 )
 		current->arch.nmi_pending = true;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 5cbaa49031..240fd1b089 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -65,6 +65,7 @@
 #include <asm/debugger.h>
 #include <asm/msr.h>
 #include <asm/nmi.h>
+#include <asm/xenoprof.h>
 #include <asm/shared.h>
 #include <asm/x86_emulate.h>
 #include <asm/traps.h>
@@ -1805,6 +1806,9 @@ bool nmi_check_continuation(void)
 {
     bool ret = false;
 
+    if ( nmi_oprofile_send_virq() )
+        ret = true;
+
     return ret;
 }
 
diff --git a/xen/include/asm-x86/xenoprof.h b/xen/include/asm-x86/xenoprof.h
index 1026ba2e1f..cf6af8c5df 100644
--- a/xen/include/asm-x86/xenoprof.h
+++ b/xen/include/asm-x86/xenoprof.h
@@ -69,6 +69,8 @@ int passive_domain_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int passive_domain_do_wrmsr(unsigned int msr, uint64_t msr_content);
 void passive_domain_destroy(struct vcpu *v);
 
+bool nmi_oprofile_send_virq(void);
+
 #else
 
 static inline int passive_domain_do_rdmsr(unsigned int msr,
@@ -85,6 +87,11 @@ static inline int passive_domain_do_wrmsr(unsigned int msr,
 
 static inline void passive_domain_destroy(struct vcpu *v) {}
 
+static inline bool nmi_oprofile_send_virq(void)
+{
+    return false;
+}
+
 #endif /* CONFIG_XENOPROF */
 
 #endif /* __ASM_X86_XENOPROF_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:14:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25846.53894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQs-00063e-KP; Thu, 12 Nov 2020 13:14:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25846.53894; Thu, 12 Nov 2020 13:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQs-00063U-HJ; Thu, 12 Nov 2020 13:14:34 +0000
Received: by outflank-mailman (input) for mailman id 25846;
 Thu, 12 Nov 2020 13:14:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdCQr-0005zc-LT
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:14:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1999fd1d-427f-4341-a99f-98b0cbee6c58;
 Thu, 12 Nov 2020 13:14:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30214AD35;
 Thu, 12 Nov 2020 13:14:27 +0000 (UTC)
X-Inumbo-ID: 1999fd1d-427f-4341-a99f-98b0cbee6c58
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605186867;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=o5mPHjtNvM0yH04ZefCYBX4buH74RoSJz6jycMDPprc=;
	b=oEpRvoOdLB5fo5jNnw8aZ55Gx118FqotuNyKj1o63IPHzGeRyZ50h08TtRhLR+wld+sa4U
	oAIoOwFGnaVnUIiQn3V4qkt82GXaKTYtagwaMYDntKAbBTpXNHCiMdVWkRr2HNiCcz5+Lq
	6w2nE6GX/UCdUCzhmy+oSN7sOtFiqus=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 3/3] xen/x86: issue pci_serr error message via NMI continuation
Date: Thu, 12 Nov 2020 14:14:24 +0100
Message-Id: <20201112131424.9930-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201112131424.9930-1-jgross@suse.com>
References: <20201112131424.9930-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using a softirq, pci_serr_error() can use NMI continuation
to issue an error message.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V5:
- give PCI SERR higher priority than oprofile (Jan Beulich)
V4:
- rework to less generic approach
---
 xen/arch/x86/traps.c          | 21 +++++++++++++++------
 xen/include/asm-x86/softirq.h |  5 ++---
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 240fd1b089..0459cee9fb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1661,10 +1661,18 @@ void do_general_protection(struct cpu_user_regs *regs)
     panic("GENERAL PROTECTION FAULT\n[error_code=%04x]\n", regs->error_code);
 }
 
-static void pci_serr_softirq(void)
+static bool pci_serr_cont;
+
+static bool pci_serr_nmicont(void)
 {
+    if ( !pci_serr_cont )
+        return false;
+
+    pci_serr_cont = false;
     printk("\n\nNMI - PCI system error (SERR)\n");
     outb(inb(0x61) & 0x0b, 0x61); /* re-enable the PCI SERR error line. */
+
+    return true;
 }
 
 static void nmi_hwdom_report(unsigned int reason_idx)
@@ -1689,9 +1697,9 @@ static void pci_serr_error(const struct cpu_user_regs *regs)
         nmi_hwdom_report(_XEN_NMIREASON_pci_serr);
         /* fallthrough */
     case 'i': /* 'ignore' */
-        /* Would like to print a diagnostic here but can't call printk()
-           from NMI context -- raise a softirq instead. */
-        raise_softirq(PCI_SERR_SOFTIRQ);
+        /* Issue error message in NMI continuation. */
+        pci_serr_cont = true;
+        trigger_nmi_continuation();
         break;
     default:  /* 'fatal' */
         console_force_unlock();
@@ -1806,6 +1814,9 @@ bool nmi_check_continuation(void)
 {
     bool ret = false;
 
+    if ( pci_serr_nmicont() )
+        ret = true;
+
     if ( nmi_oprofile_send_virq() )
         ret = true;
 
@@ -2157,8 +2168,6 @@ void __init trap_init(void)
     percpu_traps_init();
 
     cpu_init();
-
-    open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
 }
 
 void activate_debugregs(const struct vcpu *curr)
diff --git a/xen/include/asm-x86/softirq.h b/xen/include/asm-x86/softirq.h
index 0b7a77f11f..415ee866c7 100644
--- a/xen/include/asm-x86/softirq.h
+++ b/xen/include/asm-x86/softirq.h
@@ -6,9 +6,8 @@
 #define VCPU_KICK_SOFTIRQ      (NR_COMMON_SOFTIRQS + 2)
 
 #define MACHINE_CHECK_SOFTIRQ  (NR_COMMON_SOFTIRQS + 3)
-#define PCI_SERR_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
-#define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 5)
-#define NR_ARCH_SOFTIRQS       6
+#define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
+#define NR_ARCH_SOFTIRQS       5
 
 bool arch_skip_send_event_check(unsigned int cpu);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:14:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:14:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25847.53906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQx-00068s-U2; Thu, 12 Nov 2020 13:14:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25847.53906; Thu, 12 Nov 2020 13:14:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCQx-00068j-PT; Thu, 12 Nov 2020 13:14:39 +0000
Received: by outflank-mailman (input) for mailman id 25847;
 Thu, 12 Nov 2020 13:14:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdCQw-0005zc-Lh
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:14:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8df1d582-b3fc-4503-9774-fc9fc284423b;
 Thu, 12 Nov 2020 13:14:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E34EDAC75;
 Thu, 12 Nov 2020 13:14:26 +0000 (UTC)
X-Inumbo-ID: 8df1d582-b3fc-4503-9774-fc9fc284423b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605186867;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zktMZTHwnyh8KDaewloiOYrp8NVJ08f3acWn1GTzuRQ=;
	b=bwAv/CKwTJZpEl6+vBsDSeyO95J4zYnx4pm7vVM4JqM0x8OBmK9YXUTY2gKvlwrZFQy+MR
	OQQSn4+piHpPWQPoy2QctIrKrU/Xv3JMn1xXvdj7gPdEnc6IlWZHQoM7tpytZaUeZrjPh6
	BHC4IJtPNJ7lTEJY6jKRTFiczBDcpU8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 1/3] xen/x86: add nmi continuation framework
Date: Thu, 12 Nov 2020 14:14:22 +0100
Message-Id: <20201112131424.9930-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201112131424.9930-1-jgross@suse.com>
References: <20201112131424.9930-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Actions in NMI context are rather limited, as e.g. locking is fragile
there.

Add a framework to continue processing in normal interrupt context
after leaving NMI processing.

This is done via a high-priority interrupt vector triggered by a
self-IPI from NMI context; the vector's handler then calls the
continuation function specified during NMI handling.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V5:
- add comment (Jan Beulich)

V4:
- make the framework less generic

V2:
- add prototype for continuation function (Roger Pau Monné)
- switch from softirq to explicit self-IPI (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/apic.c           | 13 ++++++++++---
 xen/arch/x86/genapic/x2apic.c |  1 +
 xen/arch/x86/smp.c            |  1 +
 xen/arch/x86/traps.c          | 21 +++++++++++++++++++++
 xen/include/asm-x86/nmi.h     | 11 ++++++++++-
 5 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 60627fd6e6..7497ddb5da 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -40,6 +40,7 @@
 #include <irq_vectors.h>
 #include <xen/kexec.h>
 #include <asm/guest.h>
+#include <asm/nmi.h>
 #include <asm/time.h>
 
 static bool __read_mostly tdt_enabled;
@@ -1376,16 +1377,22 @@ void spurious_interrupt(struct cpu_user_regs *regs)
 {
     /*
      * Check if this is a vectored interrupt (most likely, as this is probably
-     * a request to dump local CPU state). Vectored interrupts are ACKed;
-     * spurious interrupts are not.
+     * a request to dump local CPU state or to continue NMI handling).
+     * Vectored interrupts are ACKed; spurious interrupts are not.
      */
     if (apic_isr_read(SPURIOUS_APIC_VECTOR)) {
+        bool is_spurious;
+
         ack_APIC_irq();
+        is_spurious = !nmi_check_continuation();
         if (this_cpu(state_dump_pending)) {
             this_cpu(state_dump_pending) = false;
             dump_execstate(regs);
-            return;
+            is_spurious = false;
         }
+
+        if ( !is_spurious )
+            return;
     }
 
     /* see sw-dev-man vol 3, chapter 7.4.13.5 */
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index 077a576a7f..40284b70d1 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -89,6 +89,7 @@ static unsigned int cpu_mask_to_apicid_x2apic_cluster(const cpumask_t *cpumask)
 
 static void send_IPI_self_x2apic(uint8_t vector)
 {
+    /* NMI continuation handling relies on using a shorthand here. */
     apic_wrmsr(APIC_SELF_IPI, vector);
 }
 
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 14aa355a6b..eef0f9c6cb 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -163,6 +163,7 @@ void send_IPI_self(int vector)
 
 void send_IPI_self_legacy(uint8_t vector)
 {
+    /* NMI continuation handling relies on using a shorthand here. */
     send_IPI_shortcut(APIC_DEST_SELF, vector, APIC_DEST_PHYSICAL);
 }
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index c27dd4cd43..5cbaa49031 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -79,6 +79,7 @@
 #include <public/hvm/params.h>
 #include <asm/cpuid.h>
 #include <xsm/xsm.h>
+#include <asm/mach-default/irq_vectors.h>
 #include <asm/pv/traps.h>
 #include <asm/pv/mm.h>
 
@@ -1800,6 +1801,26 @@ void unset_nmi_callback(void)
     nmi_callback = dummy_nmi_callback;
 }
 
+bool nmi_check_continuation(void)
+{
+    bool ret = false;
+
+    return ret;
+}
+
+void trigger_nmi_continuation(void)
+{
+    /*
+     * Issue a self-IPI. Handling is done in spurious_interrupt().
+     * NMI could have happened in IPI sequence, so wait for ICR being idle
+     * again before leaving NMI handler.
+     * This relies on self-IPI using a simple shorthand, thus avoiding any
+     * use of locking or percpu cpumasks.
+     */
+    send_IPI_self(SPURIOUS_APIC_VECTOR);
+    apic_wait_icr_idle();
+}
+
 void do_device_not_available(struct cpu_user_regs *regs)
 {
 #ifdef CONFIG_PV
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..9a5da14162 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,14 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
 void unset_nmi_callback(void);
 
 DECLARE_PER_CPU(unsigned int, nmi_count);
- 
+
+/**
+ * trigger_nmi_continuation
+ *
+ * Schedule continuation to be started in interrupt context after NMI handling.
+ */
+void trigger_nmi_continuation(void);
+
+/* Check for NMI continuation pending. */
+bool nmi_check_continuation(void);
 #endif /* ASM_NMI_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:16:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:16:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25867.53918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCSc-0006Tr-BL; Thu, 12 Nov 2020 13:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25867.53918; Thu, 12 Nov 2020 13:16:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCSc-0006Tk-7h; Thu, 12 Nov 2020 13:16:22 +0000
Received: by outflank-mailman (input) for mailman id 25867;
 Thu, 12 Nov 2020 13:16:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81XN=ES=epam.com=prvs=9585825f72=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdCSb-0006Te-Qq
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:16:21 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f39804b8-a6cf-4446-be1d-197c8b5bab9f;
 Thu, 12 Nov 2020 13:16:20 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ACDFKst020597; Thu, 12 Nov 2020 13:16:14 GMT
Received: from eur02-he1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2051.outbound.protection.outlook.com [104.47.5.51])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80bt98-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 12 Nov 2020 13:16:13 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4436.eurprd03.prod.outlook.com (2603:10a6:208:c4::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Thu, 12 Nov
 2020 13:16:10 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Thu, 12 Nov 2020
 13:16:10 +0000
X-Inumbo-ID: f39804b8-a6cf-4446-be1d-197c8b5bab9f
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c7vXQws5Saua4xfXN4A5VNC8GTR2fH+RUrAoIo7AIV5wqxeRcZkX9D6j6H3qE0xS0oaAoW/tkpYYGgdX4E5Q8hl+SZbU7DTfY6WQqFt/8uAq07VJzQkyryRgXUUuE1mu0lMhIwdj0UK1kCQPHj+O3e72IYGtcnp7cTU/pTBWUd7VqXzwRCQkWmmjLF9p1LqXzeP12aDuhjdnrix+m7lw/ru1HCfR7PufwtFQjfpXexGvfYuHNDpA4sNtG8/cLCdgKGbrLa55EQ5ld40NlLO9S5Zh2n8460Ge7fjjB8BxPzuhftwHCcDVYaiY6qP3m+NDLo7g7xH8tsFMBWDqDJ8ycQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zruRk307CKdKqD8phz+vf6oNxYfg8jIfM8qjUT7uttw=;
 b=AOrwBQDg3zMs/vNPlMFF7CQ47qM+RBJUXojr4Xp2V20K1dHD1j9+cvEAjQfQxVEjWl/5Wi650vXHFMwAOAYgRPgjUh7joABsP9iX0eSqCjMJwqo5NElCIOsE/4gz6S+Lv3RwQrK5PeUfm9yBzzIArmnO8+3JV+6Nx0lXHJz8krHpBrnqlIWXD9jQV1DuU1P0E0B/HWEtjoMESJ/oTLgDbEJij9AMX58lta/pyLCKXeJD6V05ypakHKxNL+BJcCBlPoZ6JN8rXf+guEvED6+T1nYDTaF7+6Ia/I9Ep5e9iMMmzk/UgPhM8RmsSqjpfwm+OqXGy3Uqe1GGersR1+usKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zruRk307CKdKqD8phz+vf6oNxYfg8jIfM8qjUT7uttw=;
 b=YKS0Ms8XPjXly6NjiEfmeVrbJHdKHVrTOzI1+wiaRmmqCmtlgYS9Ioby4QfRvT/tmk80+wK4Tq+cezKrQ/dvPLqnVq6RKGeCmRk+F8TgCdxOQ855h8GoU9snfdBimUoRvu1xAyzkp4o3pi7O1BJ7n7C1MjBfmo2eNBHxsQY4vbbytb/ctN+PPQqLAwagrzzp3bcVH2NgL5oVkdpk1iVcLUe0OLUcpA0PcSC5yXVOUZ+1SQPi/c0X2ByNbVElHUj2ML9PGxhWPM04CYyOaa7cS3TTfK/zHg5tCzO+33+gDmoLkGify+ZELd9DFYKAE8bCseINxu0dFpLw71a71FDKvA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Oleksandr
 Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoA=
Date: Thu, 12 Nov 2020 13:16:10 +0000
Message-ID: <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
In-Reply-To: <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 16625198-82bc-4a76-da91-08d8870d1f6b
x-ms-traffictypediagnostic: AM0PR03MB4436:
x-microsoft-antispam-prvs: 
 <AM0PR03MB4436E63376EF97277B166DF7E7E70@AM0PR03MB4436.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:826;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 1uQbfXBPmLPWDazx9h7zK7ze8ootVpIQXJ/rC9Jqvr4r/1KjPbMc/BFwMoo5LkAQd3G0Dqea5OCDC5VBzEfjm8JnvRiKAE7Ax0qcG22s4EnbPjYpykzfDzBqvuEh6JQU9RoyU3Dd3+2I8QEfYgb8Xqz3XAxfs4lZmkda9BZRWZteVCwlnChJXzh+vjbyHHbmpK/vF8WRq9O9tbnDz2b3njtbyua6ON3U8xr81eh6OESzgfGHhsKMz8HHt4l8aABNY2AvLNKR5dSY46xsyHtU+jvUCrVvQd2/r8UgaFN2P48VNoVu6jrIqu2fIauGeQ41tpTrhI7jNUI2Lv4a4yh+2qBi/2FE0febolUtorfTCM+B3TSQApWA3MXfnTp9DpVEyOM0RP3ZqetfY5Lodcol1Q==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(39860400002)(346002)(136003)(376002)(2906002)(4326008)(8936002)(30864003)(478600001)(31696002)(64756008)(83380400001)(316002)(66476007)(36756003)(66556008)(2616005)(6506007)(53546011)(86362001)(71200400001)(31686004)(6512007)(66446008)(5660300002)(110136005)(7416002)(76116006)(966005)(6486002)(26005)(8676002)(54906003)(186003)(66946007)(559001)(579004);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 ANmxV3VeT9qxqGfE3XRY62wXCj6rSDuAFTcmVqPQCiY4IGIFMbz3SYtFlSRnMVSvjvMVcxof1z7qXDA20CWM+dEaFW2XujPAleT6HBIDyFXFBNiSP2lmk4UTsfIkNqp6mZ/gsKZ8SH1CKEq7MVK6Vxgz8X0n/4G+2lxwrS2eEgs5iL4spux/974f2m0CiMAj/dXVnIuaUYIDByC3HgKHrp3EMF/QRf8Ap8hwUtWJ443bVVd3HJJe9Yc2/eS/2GfR97NRjMvpTuNPtL2fzuC0ksvoq8qtxBA6RGa4UdgBWUxRSiXKh9ALGJWnwK/P2EijG9gufYkePyCmqEl9sUCzxhuz8XRgaX+97f12MXccqGcTScY9ZLN2kKGFvpgMEO6Q7BZ5pyeyKrQWkBNSbRveH3Y9F12il3Kp384UzNRGel/fI7+T0UBhlizm+CkyTboyAyG9Le6qdX0yMH0zGbnbC+iFYa2Vfv+XIsj7Y7YHvJXWDxeEQSzHV7bJTHIiyRb4xMtD20YoTTSXNEvppWsZSYoWF8OsAMHjbjBXLMgKLbhkJCLjtIQ1GZ8iwqGpjtd8FNDbe8hEKD60BSkzv9w5wzO4T3nJM927lctavKc9OImU2mahH0533ImqFYc26OXoDkNBG+zlcO25fRYEi/ZrIw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <26E229045BFE5A4E9B66E620A598A481@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 16625198-82bc-4a76-da91-08d8870d1f6b
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Nov 2020 13:16:10.3465
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: yhUJbmACxoMS+Jn/ey8aeWFO+65XzilQhsdKwb/13LaJH4mWr/kLErAe6flRDFasWMWyGkgnxY0hXt2xpUwspqvUGd/qP7UZtdKQGRp5eCQLGELxnx2pgq250126Mkg7
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4436
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-12_05:2020-11-12,2020-11-12 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011120078

DQpPbiAxMS8xMi8yMCAxMTo0MCBBTSwgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToNCj4gT24gTW9u
LCBOb3YgMDksIDIwMjAgYXQgMDI6NTA6MjdQTSArMDIwMCwgT2xla3NhbmRyIEFuZHJ1c2hjaGVu
a28gd3JvdGU6DQo+PiBGcm9tOiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRyX2Fu
ZHJ1c2hjaGVua29AZXBhbS5jb20+DQo+Pg0KPj4gQXQgdGhlIG1vbWVudCB0aGVyZSBpcyBhbiBp
ZGVudGl0eSBtYXBwaW5nIGJldHdlZW4gaG93IGEgZ3Vlc3Qgc2VlcyBpdHMNCj4+IEJBUnMgYW5k
IGhvdyB0aGV5IGFyZSBwcm9ncmFtbWVkIGludG8gZ3Vlc3QgZG9tYWluJ3MgcDJtLiBUaGlzIGlz
IG5vdA0KPj4gZ29pbmcgdG8gd29yayBhcyBndWVzdCBkb21haW5zIGhhdmUgdGhlaXIgb3duIHZp
ZXcgb24gdGhlIEJBUnMuDQo+PiBFeHRlbmQgZXhpc3RpbmcgdlBDSSBCQVIgaGFuZGxpbmcgdG8g
YWxsb3cgZXZlcnkgZG9tYWluIHRvIGhhdmUgaXRzIG93bg0KPj4gdmlldyBvZiB0aGUgQkFSczog
b25seSBoYXJkd2FyZSBkb21haW4gc2VlcyBwaHlzaWNhbCBtZW1vcnkgYWRkcmVzc2VzIGluDQo+
PiB0aGlzIGNhc2UgYW5kIGZvciB0aGUgcmVzdCB0aG9zZSBhcmUgZW11bGF0ZWQsIGluY2x1ZGlu
ZyBsb2dpYyByZXF1aXJlZA0KPj4gZm9yIHRoZSBndWVzdHMgdG8gZGV0ZWN0IG1lbW9yeSBzaXpl
cyBhbmQgcHJvcGVydGllcy4NCj4+DQo+PiBXaGlsZSBlbXVsYXRpbmcgQkFSIGFjY2VzcyBmb3Ig
dGhlIGd1ZXN0cyBjcmVhdGUgYSBsaW5rIGJldHdlZW4gdGhlDQo+PiB2aXJ0dWFsIEJBUiBhZGRy
ZXNzIGFuZCBwaHlzaWNhbCBvbmU6IHVzZSBmdWxsIG1lbW9yeSBhZGRyZXNzIHdoaWxlDQo+PiBj
cmVhdGluZyByYW5nZSBzZXRzIHVzZWQgdG8gbWFwL3VubWFwIGNvcnJlc3BvbmRpbmcgYWRkcmVz
cyBzcGFjZXMgYW5kDQo+PiBleHBsb2l0IHRoZSBmYWN0IHRoYXQgUENJIEJBUiB2YWx1ZSBkb2Vz
bid0IHVzZSA4IGxvd2VyIGJpdHMgb2YgdGhlDQo+IEkgdGhpbmsgeW91IG1lYW4gdGhlIGxvdyA0
Yml0cyByYXRoZXIgdGhhbiB0aGUgbG93IDhiaXRzPw0KWWVzLCB5b3UgYXJlIHJpZ2h0LiBXaWxs
IGZpeCB0aGF0DQo+DQo+PiBtZW1vcnkgYWRkcmVzcy4gVXNlIHRob3NlIGJpdHMgdG8gcGFzcyBw
aHlzaWNhbCBCQVIncyBpbmRleCwgc28gd2UgY2FuDQo+PiBidWlsZC9yZW1vdmUgcHJvcGVyIHAy
bSBtYXBwaW5ncy4NCj4gSSBmaW5kIHRoaXMgcXVpdGUgaGFyZCB0byByZXZpZXcsIGdpdmVuIGl0
J3MgYSBmYWlybHkgYmlnIGFuZA0KPiBjb21wbGljYXRlZCBwYXRjaC4gRG8geW91IHRoaW5rIHlv
dSBjb3VsZCBzcGxpdCBpbnRvIHNtYWxsZXIgY2h1bmtzPw0KDQpJJ2xsIHRyeS4gQnV0IGF0IHRo
ZSBtb21lbnQgdGhpcyBjb2RlIGlzbid0IG1lYW50IHRvIGJlIHByb2R1Y3Rpb24gcXVhbGl0eSB5
ZXQNCg0KYXMgd2hhdCBJJ2QgbGlrZSB0byBhY2hpZXZlIGlzIHRvIGNvbGxlY3QgY29tbXVuaXR5
J3MgdmlldyBvbiB0aGUgc3ViamVjdA0KDQpPbmNlIGFsbCBxdWVzdGlvbnMgYXJlIHJlc29sdmVk
IEknbGwgc3RhcnQgd29ya2luZyBvbiB0aGUgYWdyZWVkIHNvbHV0aW9uDQoNCndoaWNoIEkgZXhw
ZWN0IHRvIGJlIHByb2R1Y3Rpb24gcXVhbGl0eSB0aGVuLg0KDQo+DQo+IE1heWJlIHlvdSBjb3Vs
ZCBzcGxpdCBpbnRvIHNtYWxsZXIgcGF0Y2hlcyB0aGF0IGFkZCBiaXRzIHRvd2FyZHMgdGhlDQo+
IGVuZCBnb2FsIGJ1dCBzdGlsbCBrZWVwIHRoZSBpZGVudGl0eSBtYXBwaW5ncz8NCj4NCj4gSSd2
ZSB0cmllZCB0byBkbyBzb21lIHJldmlldyBiZWxvdywgYnV0IEkgd291bGQgYXBwcmVjaWF0ZSBp
ZiB5b3UNCj4gY291bGQgc3BsaXQgdGhpcy4NClRoYW5rIHlvdSBzbyBtdWNoIGZvciBkb2luZyBz
byENCj4NCj4+IFNpZ25lZC1vZmYtYnk6IE9sZWtzYW5kciBBbmRydXNoY2hlbmtvIDxvbGVrc2Fu
ZHJfYW5kcnVzaGNoZW5rb0BlcGFtLmNvbT4NCj4+IC0tLQ0KPj4gICB4ZW4vZHJpdmVycy92cGNp
L2hlYWRlci5jIHwgMjc2ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tDQo+
PiAgIHhlbi9kcml2ZXJzL3ZwY2kvdnBjaS5jICAgfCAgIDEgKw0KPj4gICB4ZW4vaW5jbHVkZS94
ZW4vdnBjaS5oICAgIHwgIDI0ICsrLS0NCj4+ICAgMyBmaWxlcyBjaGFuZ2VkLCAyNjUgaW5zZXJ0
aW9ucygrKSwgMzYgZGVsZXRpb25zKC0pDQo+Pg0KPj4gZGlmZiAtLWdpdCBhL3hlbi9kcml2ZXJz
L3ZwY2kvaGVhZGVyLmMgYi94ZW4vZHJpdmVycy92cGNpL2hlYWRlci5jDQo+PiBpbmRleCBmNzRm
NzI4ODg0YzAuLjdkYzdjNzBlMjRmMiAxMDA2NDQNCj4+IC0tLSBhL3hlbi9kcml2ZXJzL3ZwY2kv
aGVhZGVyLmMNCj4+ICsrKyBiL3hlbi9kcml2ZXJzL3ZwY2kvaGVhZGVyLmMNCj4+IEBAIC0zMSwx
NCArMzEsODcgQEANCj4+ICAgc3RydWN0IG1hcF9kYXRhIHsNCj4+ICAgICAgIHN0cnVjdCBkb21h
aW4gKmQ7DQo+PiAgICAgICBib29sIG1hcDsNCj4+ICsgICAgc3RydWN0IHBjaV9kZXYgKnBkZXY7
DQo+IElmIHRoZSBmaWVsZCBpcyByZXF1aXJlZCBwbGVhc2UgcGxhY2UgaXQgYWZ0ZXIgdGhlIGRv
bWFpbiBvbmUuDQpJIHdpbGwsIGJ1dCBtYXkgSSBhc2sgd2h5Pw0KPg0KPj4gICB9Ow0KPj4gICAN
Cj4+ICtzdGF0aWMgc3RydWN0IHZwY2lfaGVhZGVyICpnZXRfdnBjaV9oZWFkZXIoc3RydWN0IGRv
bWFpbiAqZCwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Y29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYpOw0KPj4gKw0KPj4gK3N0YXRpYyBzdHJ1Y3QgdnBj
aV9oZWFkZXIgKmdldF9od2RvbV92cGNpX2hlYWRlcihjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRl
dikNCj4+ICt7DQo+PiArICAgIGlmICggdW5saWtlbHkobGlzdF9lbXB0eSgmcGRldi0+dnBjaS0+
aGVhZGVycykpICkNCj4+ICsgICAgICAgIHJldHVybiBnZXRfdnBjaV9oZWFkZXIoaGFyZHdhcmVf
ZG9tYWluLCBwZGV2KTsNCj4gSSdtIG5vdCBzdXJlIEkgdW5kZXJzdGFuZCB3aHkgeW91IG5lZWQg
YSBsaXN0IGhlcmU6IGVhY2ggZGV2aWNlIGNhbg0KPiBvbmx5IGJlIG93bmVkIGJ5IGEgc2luZ2xl
IGd1ZXN0LCBhbmQgdGh1cyB0aGVyZSBzaG91bGRuJ3QgYmUgbXVsdGlwbGUNCj4gdmlld3Mgb2Yg
dGhlIEJBUnMgKG9yIHRoZSBoZWFkZXIpLg0KDQpUaGF0IGlzIGJlY2F1c2Ugb2YgdGhlIG92ZXIt
ZW5naW5lZXJpbmcgaGFwcGVuaW5nIGhlcmU6IHlvdSBhcmUgMTAwJSByaWdodA0KDQphbmQgdGhp
cyBhbGwgbXVzdCBiZSBtYWRlIHdheSBzaW1wbGVyIHdpdGhvdXQgbGlzdHMgYW5kIGFsbCB0aGF0
LiBJIGp1c3QNCg0KYmxpbmRseSB0aG91Z2h0IHRoYXQgd2UgY291bGQgaGF2ZSBtdWx0aXBsZSBn
dWVzdHMsIGJ1dCBJIGhhdmUgb3ZlcnNlZW4NCg0KdGhlIHNpbXBsZSBmYWN0IHlvdSBtZW50aW9u
ZWQ6IHBoeXNpY2FsIEJBUnMgYXJlIGZvciBod2RvbSBhbmQgdmlydHVhbCBhcmUNCg0KZm9yICph
IHNpbmdsZSogZ3Vlc3QgYXMgd2UgY2FuJ3QgcGFzc3Rocm91Z2ggdGhlIHNhbWUgZGV2aWNlIHRv
IG11bHRpcGxlDQoNCmd1ZXN0cyBhdCBhIHRpbWUuDQoNCj4NCj4+ICsNCj4+ICsgICAgLyogaHdk
b20ncyBoZWFkZXIgaXMgYWx3YXlzIHRoZSB2ZXJ5IGZpcnN0IGVudHJ5LiAqLw0KPj4gKyAgICBy
ZXR1cm4gbGlzdF9maXJzdF9lbnRyeSgmcGRldi0+dnBjaS0+aGVhZGVycywgc3RydWN0IHZwY2lf
aGVhZGVyLCBub2RlKTsNCj4+ICt9DQo+PiArDQo+PiArc3RhdGljIHN0cnVjdCB2cGNpX2hlYWRl
ciAqZ2V0X3ZwY2lfaGVhZGVyKHN0cnVjdCBkb21haW4gKmQsDQo+PiArICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHN0cnVjdCBwY2lfZGV2ICpwZGV2KQ0K
Pj4gK3sNCj4+ICsgICAgc3RydWN0IGxpc3RfaGVhZCAqcHJldjsNCj4+ICsgICAgc3RydWN0IHZw
Y2lfaGVhZGVyICpoZWFkZXI7DQo+PiArICAgIHN0cnVjdCB2cGNpICp2cGNpID0gcGRldi0+dnBj
aTsNCj4+ICsNCj4+ICsgICAgbGlzdF9mb3JfZWFjaCggcHJldiwgJnZwY2ktPmhlYWRlcnMgKQ0K
Pj4gKyAgICB7DQo+PiArICAgICAgICBzdHJ1Y3QgdnBjaV9oZWFkZXIgKnRoaXMgPSBsaXN0X2Vu
dHJ5KHByZXYsIHN0cnVjdCB2cGNpX2hlYWRlciwgbm9kZSk7DQo+PiArDQo+PiArICAgICAgICBp
ZiAoIHRoaXMtPmRvbWFpbl9pZCA9PSBkLT5kb21haW5faWQgKQ0KPj4gKyAgICAgICAgICAgIHJl
dHVybiB0aGlzOw0KPj4gKyAgICB9DQo+PiArICAgIHByaW50ayhYRU5MT0dfREVCVUcgIi0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tIiBcDQo+PiArICAgICAgICAgICAiQWRk
aW5nIG5ldyB2UENJIEJBUiBoZWFkZXJzIGZvciBkb21haW4gJWQ6ICIgUFJJX3BjaSIgXG4iLA0K
Pj4gKyAgICAgICAgICAgZC0+ZG9tYWluX2lkLCBwZGV2LT5zYmRmLnNlZywgcGRldi0+c2JkZi5i
dXMsDQo+PiArICAgICAgICAgICBwZGV2LT5zYmRmLmRldiwgcGRldi0+c2JkZi5mbik7DQo+PiAr
ICAgIGhlYWRlciA9IHh6YWxsb2Moc3RydWN0IHZwY2lfaGVhZGVyKTsNCj4+ICsgICAgaWYgKCAh
aGVhZGVyICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgcHJpbnRrKFhFTkxPR19FUlINCj4+ICsg
ICAgICAgICAgICAgICAiRmFpbGVkIHRvIGFkZCBuZXcgdlBDSSBCQVIgaGVhZGVycyBmb3IgZG9t
YWluICVkOiAiIFBSSV9wY2kiIFxuIiwNCj4+ICsgICAgICAgICAgICAgICBkLT5kb21haW5faWQs
IHBkZXYtPnNiZGYuc2VnLCBwZGV2LT5zYmRmLmJ1cywNCj4+ICsgICAgICAgICAgICAgICBwZGV2
LT5zYmRmLmRldiwgcGRldi0+c2JkZi5mbik7DQo+PiArICAgICAgICByZXR1cm4gTlVMTDsNCj4+
ICsgICAgfQ0KPj4gKw0KPj4gKyAgICBpZiAoICFpc19oYXJkd2FyZV9kb21haW4oZCkgKQ0KPj4g
KyAgICB7DQo+PiArICAgICAgICBzdHJ1Y3QgdnBjaV9oZWFkZXIgKmh3ZG9tX2hlYWRlciA9IGdl
dF9od2RvbV92cGNpX2hlYWRlcihwZGV2KTsNCj4+ICsNCj4+ICsgICAgICAgIC8qIE1ha2UgYSBj
b3B5IG9mIHRoZSBod2RvbSdzIEJBUnMgYXMgdGhlIGluaXRpYWwgc3RhdGUgZm9yIHZCQVJzLiAq
Lw0KPj4gKyAgICAgICAgbWVtY3B5KGhlYWRlciwgaHdkb21faGVhZGVyLCBzaXplb2YoKmhlYWRl
cikpOw0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIGhlYWRlci0+ZG9tYWluX2lkID0gZC0+ZG9t
YWluX2lkOw0KPj4gKyAgICBsaXN0X2FkZF90YWlsKCZoZWFkZXItPm5vZGUsICZ2cGNpLT5oZWFk
ZXJzKTsNCj4gU2FtZSBoZXJlLCBJIHRoaW5rIHlvdSB3YW50IGEgc2luZ2xlIGhlYWRlciwgYW5k
IHRoZW4gc29tZSBmaWVsZHMNCj4gd291bGQgYmUgcmVhZC1vbmx5IGZvciBkb21VcyAobGlrZSB0
aGUgcG9zaXRpb24gb2YgdGhlIEJBUnMgb24gdGhlDQo+IHBoeXNtYXApLg0KZGl0dG8sIHdpbGwg
cmVtb3ZlIHRoZSBsaXN0DQo+DQo+PiArICAgIHJldHVybiBoZWFkZXI7DQo+PiArfQ0KPj4gKw0K
Pj4gK3N0YXRpYyBzdHJ1Y3QgdnBjaV9iYXIgKmdldF92cGNpX2JhcihzdHJ1Y3QgZG9tYWluICpk
LA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3Qg
cGNpX2RldiAqcGRldiwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
aW50IGJhcl9pZHgpDQo+IHVuc2lnbmVkDQpvaw0KPg0KPj4gK3sNCj4+ICsgICAgc3RydWN0IHZw
Y2lfaGVhZGVyICp2aGVhZGVyOw0KPj4gKw0KPj4gKyAgICB2aGVhZGVyID0gZ2V0X3ZwY2lfaGVh
ZGVyKGQsIHBkZXYpOw0KPj4gKyAgICBpZiAoICF2aGVhZGVyICkNCj4+ICsgICAgICAgIHJldHVy
biBOVUxMOw0KPj4gKw0KPj4gKyAgICByZXR1cm4gJnZoZWFkZXItPmJhcnNbYmFyX2lkeF07DQo+
PiArfQ0KPj4gKw0KPj4gICBzdGF0aWMgaW50IG1hcF9yYW5nZSh1bnNpZ25lZCBsb25nIHMsIHVu
c2lnbmVkIGxvbmcgZSwgdm9pZCAqZGF0YSwNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgdW5z
aWduZWQgbG9uZyAqYykNCj4+ICAgew0KPj4gICAgICAgY29uc3Qgc3RydWN0IG1hcF9kYXRhICpt
YXAgPSBkYXRhOw0KPj4gLSAgICBpbnQgcmM7DQo+PiAtDQo+PiArICAgIHVuc2lnbmVkIGxvbmcg
bWZuOw0KPj4gKyAgICBpbnQgcmMsIGJhcl9pZHg7DQo+PiArICAgIHN0cnVjdCB2cGNpX2hlYWRl
ciAqaGVhZGVyID0gZ2V0X2h3ZG9tX3ZwY2lfaGVhZGVyKG1hcC0+cGRldik7DQo+PiArDQo+PiAr
ICAgIGJhcl9pZHggPSBzICYgflBDSV9CQVNFX0FERFJFU1NfTUVNX01BU0s7DQo+IEknbSBub3Qg
c3VyZSBpdCdzIGZpbmUgdG8gc3Rhc2ggdGhlIEJBUiBpbmRleCBpbiB0aGUgbG93IGJpdHMgb2Yg
dGhlDQo+IGFkZHJlc3MsIHdoYXQgYWJvdXQgYSBkZXZpY2UgaGF2aW5nIGNvbmNhdGVuYXRlZCBC
QVJzPw0KDQpIbSwgSSBhbSBub3QgYW4gZXhwZXJ0IGluIFBDSSwgc28gZGlkbid0IHRoaW5rIGFi
b3V0IHRoYXQuDQoNClByb2JhYmx5IG5vdGhpbmcgc3RvcHMgYSBQQ0kgZGV2aWNlIGZyb20gc3Bs
aXR0aW5nIHRoZSBzYW1lIG1lbW9yeQ0KDQpyZWdpb24gaW50byBtdWx0aXBsZSBvbmVzLi4uDQoN
Cj4NCj4gVGhlIHJhbmdlc2V0IHdvdWxkIG5vcm1hbGx5IGpvaW4gdGhlbSBpbnRvIGEgc2luZ2xl
IHJhbmdlLCBhbmQgdGhlbg0KPiB5b3Ugd29uJ3QgYmUgYWJsZSB0byBub3RpY2Ugd2hldGhlciBh
IHJlZ2lvbiBpbiB0aGUgcmFuZ2VzZXQgYmVsb25ncw0KPiB0byBvbmUgQkFSIG9yIGFub3RoZXIu
DQpZZXMsIEkgc2VlLiBWZXJ5IGdvb2QgY2F0Y2gsIHRoYW5rIHlvdQ0KPg0KPiBJTU8gaXQgbWln
aHQgYmUgZWFzaWVyIHRvIGp1c3QgaGF2ZSBhIHJhbmdlc2V0IGZvciBlYWNoIEJBUiBhbmQNCj4g
c3RydWN0dXJlIHRoZSBwZW5kaW5nIHdvcmsgYXMgYSBsaW5rZWQgbGlzdCBvZiBCQVJzLCB0aGF0
IHdpbGwgY29udGFpbg0KPiB0aGUgcGh5c2ljYWwgYWRkcmVzc2VzIG9mIGVhY2ggQkFSICh0aGUg
cmVhbCBwaHltYXAgb25lIGFuZCB0aGUgZ3Vlc3QNCj4gcGh5c21hcCB2aWV3KSBwbHVzIHRoZSBy
YW5nZXNldCBvZiBtZW1vcnkgcmVnaW9ucyB0byBtYXAuDQpJJ2xsIHRyeSB0byB0aGluayBob3cg
dG8gZG8gdGhhdCwgdGhhbmsgeW91DQo+DQo+PiArICAgIHMgPSBQRk5fRE9XTihzKTsNCj4+ICsg
ICAgZSA9IFBGTl9ET1dOKGUpOw0KPiBDaGFuZ2luZyB0aGUgcmFuZ2VzZXQgdG8gc3RvcmUgbWVt
b3J5IGFkZHJlc3NlcyBpbnN0ZWFkIG9mIGZyYW1lcw0KPiBjb3VsZCBmb3IgZXhhbXBsZSBiZSBz
cGxpdCBpbnRvIGEgc2VwYXJhdGUgcGF0Y2guDQpPaw0KPg0KPiBJIHRoaW5rIHlvdSBhcmUgZG9p
bmcgdGhlIGNhbGN1bGF0aW9uIG9mIHRoZSBlbmQgcGZuIHdyb25nIGhlcmUsIHlvdQ0KPiBzaG91
bGQgdXNlIFBGTl9VUCBpbnN0ZWFkIGluIGNhc2UgdGhlIGFkZHJlc3MgaXMgbm90IGFsaWduZWQu
DQoNClBGTl9ET1dOIGZvciB0aGUgc3RhcnQgc2VlbXMgdG8gYmUgb2sgaWYgdGhlIGFkZHJlc3Mg
aXMgbm90IGFsaWduZWQNCg0Kd2hpY2ggaXMgdGhlIGNhc2UgaWYgSSBwYXNzIGJhcl9pbmRleCBp
biB0aGUgbG93ZXIgYml0czogUENJIG1lbW9yeSBoYXMNCg0KUEFHRV9TSVpFIGdyYW51bGFyaXR5
LCBzbyBiZXNpZGVzIHRoZSBmYWN0IHRoYXQgSSB1c2UgYmFyX2luZGV4IHRoZSBhZGRyZXNzDQoN
Cm11c3QgYmUgcGFnZSBhbGlnbmVkLg0KDQpUaGUgZW5kIGFkZHJlc3MgaXMgZXhwcmVzc2VkIGlu
IChzaXplIC0gMSkgZm9ybSwgYWdhaW4gcGFnZSBhbGlnbmVkLA0KDQpzbyB0byBnZXQgdGhlIGxh
c3QgcGFnZSB0byBiZSBtYXBwZWQgUEZOX0RPV04gYWxzbyBzZWVtcyB0byBiZSBhcHByb3ByaWF0
ZS4NCg0KRG8gSSBtaXNzIHNvbWV0aGluZyBoZXJlPw0KDQo+DQo+PiArICAgIG1mbiA9IF9tZm4o
UEZOX0RPV04oaGVhZGVyLT5iYXJzW2Jhcl9pZHhdLmFkZHIpKTsNCj4+ICAgICAgIGZvciAoIDsg
OyApDQo+PiAgICAgICB7DQo+PiAgICAgICAgICAgdW5zaWduZWQgbG9uZyBzaXplID0gZSAtIHMg
KyAxOw0KPj4gQEAgLTUyLDExICsxMjUsMTUgQEAgc3RhdGljIGludCBtYXBfcmFuZ2UodW5zaWdu
ZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUsIHZvaWQgKmRhdGEsDQo+PiAgICAgICAgICAgICog
LSB7dW59bWFwX21taW9fcmVnaW9ucyBkb2Vzbid0IHN1cHBvcnQgcHJlZW1wdGlvbi4NCj4+ICAg
ICAgICAgICAgKi8NCj4+ICAgDQo+PiAtICAgICAgICByYyA9IG1hcC0+bWFwID8gbWFwX21taW9f
cmVnaW9ucyhtYXAtPmQsIF9nZm4ocyksIHNpemUsIF9tZm4ocykpDQo+PiAtICAgICAgICAgICAg
ICAgICAgICAgIDogdW5tYXBfbW1pb19yZWdpb25zKG1hcC0+ZCwgX2dmbihzKSwgc2l6ZSwgX21m
bihzKSk7DQo+PiArICAgICAgICByYyA9IG1hcC0+bWFwID8gbWFwX21taW9fcmVnaW9ucyhtYXAt
PmQsIF9nZm4ocyksIHNpemUsIG1mbikNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgOiB1bm1h
cF9tbWlvX3JlZ2lvbnMobWFwLT5kLCBfZ2ZuKHMpLCBzaXplLCBtZm4pOw0KPj4gICAgICAgICAg
IGlmICggcmMgPT0gMCApDQo+PiAgICAgICAgICAgew0KPj4gLSAgICAgICAgICAgICpjICs9IHNp
emU7DQo+PiArICAgICAgICAgICAgLyoNCj4+ICsgICAgICAgICAgICAgKiBSYW5nZSBzZXQgaXMg
bm90IGV4cHJlc3NlZCBpbiBmcmFtZSBudW1iZXJzIGFuZCB0aGUgc2l6ZQ0KPj4gKyAgICAgICAg
ICAgICAqIGlzIHRoZSBudW1iZXIgb2YgZnJhbWVzLCBzbyB1cGRhdGUgYWNjb3JkaW5nbHkuDQo+
PiArICAgICAgICAgICAgICovDQo+PiArICAgICAgICAgICAgKmMgKz0gc2l6ZSA8PCBQQUdFX1NI
SUZUOw0KPj4gICAgICAgICAgICAgICBicmVhazsNCj4+ICAgICAgICAgICB9DQo+PiAgICAgICAg
ICAgaWYgKCByYyA8IDAgKQ0KPj4gQEAgLTY3LDggKzE0NCw5IEBAIHN0YXRpYyBpbnQgbWFwX3Jh
bmdlKHVuc2lnbmVkIGxvbmcgcywgdW5zaWduZWQgbG9uZyBlLCB2b2lkICpkYXRhLA0KPj4gICAg
ICAgICAgICAgICBicmVhazsNCj4+ICAgICAgICAgICB9DQo+PiAgICAgICAgICAgQVNTRVJUKHJj
IDwgc2l6ZSk7DQo+PiAtICAgICAgICAqYyArPSByYzsNCj4+ICsgICAgICAgICpjICs9IHJjIDw8
IFBBR0VfU0hJRlQ7DQo+PiAgICAgICAgICAgcyArPSByYzsNCj4+ICsgICAgICAgIG1mbiArPSBy
YzsNCj4+ICAgICAgICAgICBpZiAoIGdlbmVyYWxfcHJlZW1wdF9jaGVjaygpICkNCj4+ICAgICAg
ICAgICAgICAgICAgIHJldHVybiAtRVJFU1RBUlQ7DQo+PiAgICAgICB9DQo+PiBAQCAtODQsNyAr
MTYyLDcgQEAgc3RhdGljIGludCBtYXBfcmFuZ2UodW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBs
b25nIGUsIHZvaWQgKmRhdGEsDQo+PiAgIHN0YXRpYyB2b2lkIG1vZGlmeV9kZWNvZGluZyhjb25z
dCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwgdWludDE2X3QgY21kLA0KPj4gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgYm9vbCByb21fb25seSkNCj4+ICAgew0KPj4gLSAgICBzdHJ1Y3QgdnBj
aV9oZWFkZXIgKmhlYWRlciA9ICZwZGV2LT52cGNpLT5oZWFkZXI7DQo+PiArICAgIHN0cnVjdCB2
cGNpX2hlYWRlciAqaGVhZGVyID0gZ2V0X2h3ZG9tX3ZwY2lfaGVhZGVyKHBkZXYpOw0KPj4gICAg
ICAgYm9vbCBtYXAgPSBjbWQgJiBQQ0lfQ09NTUFORF9NRU1PUlk7DQo+PiAgICAgICB1bnNpZ25l
ZCBpbnQgaTsNCj4+ICAgDQo+PiBAQCAtMTM2LDYgKzIxNCw3IEBAIGJvb2wgdnBjaV9wcm9jZXNz
X3BlbmRpbmcoc3RydWN0IHZjcHUgKnYpDQo+PiAgICAgICAgICAgc3RydWN0IG1hcF9kYXRhIGRh
dGEgPSB7DQo+PiAgICAgICAgICAgICAgIC5kID0gdi0+ZG9tYWluLA0KPj4gICAgICAgICAgICAg
ICAubWFwID0gdi0+dnBjaS5jbWQgJiBQQ0lfQ09NTUFORF9NRU1PUlksDQo+PiArICAgICAgICAg
ICAgLnBkZXYgPSB2LT52cGNpLnBkZXYsDQo+PiAgICAgICAgICAgfTsNCj4+ICAgICAgICAgICBp
bnQgcmMgPSByYW5nZXNldF9jb25zdW1lX3Jhbmdlcyh2LT52cGNpLm1lbSwgbWFwX3JhbmdlLCAm
ZGF0YSk7DQo+PiAgIA0KPj4gQEAgLTE2OCw3ICsyNDcsOCBAQCBib29sIHZwY2lfcHJvY2Vzc19w
ZW5kaW5nKHN0cnVjdCB2Y3B1ICp2KQ0KPj4gICBzdGF0aWMgaW50IF9faW5pdCBhcHBseV9tYXAo
c3RydWN0IGRvbWFpbiAqZCwgY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYsDQo+PiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgcmFuZ2VzZXQgKm1lbSwgdWludDE2X3QgY21k
KQ0KPj4gICB7DQo+PiAtICAgIHN0cnVjdCBtYXBfZGF0YSBkYXRhID0geyAuZCA9IGQsIC5tYXAg
PSB0cnVlIH07DQo+PiArICAgIHN0cnVjdCBtYXBfZGF0YSBkYXRhID0geyAuZCA9IGQsIC5tYXAg
PSB0cnVlLA0KPj4gKyAgICAgICAgLnBkZXYgPSAoc3RydWN0IHBjaV9kZXYgKilwZGV2IH07DQo+
IERyb3BwaW5nIHRoZSBjb25zdCBoZXJlIGlzIG5vdCBmaW5lLiBJVCBlaXRoZXIgbmVlZHMgdG8g
YmUgZHJvcHBlZA0KPiBmcm9tIGFwcGx5X21hcCBhbmQgZnVydGhlciB1cCwgb3IgdGhpcyBuZWVk
cyB0byBhbHNvIGJlIG1hZGUgY29uc3QuDQpPaywgSSdsbCB0cnkgdG8ga2VlcCBpdCBjb25zdA0K
Pg0KPj4gICAgICAgaW50IHJjOw0KPj4gICANCj4+ICAgICAgIHdoaWxlICggKHJjID0gcmFuZ2Vz
ZXRfY29uc3VtZV9yYW5nZXMobWVtLCBtYXBfcmFuZ2UsICZkYXRhKSkgPT0gLUVSRVNUQVJUICkN
Cj4+IEBAIC0yMDUsNyArMjg1LDcgQEAgc3RhdGljIHZvaWQgZGVmZXJfbWFwKHN0cnVjdCBkb21h
aW4gKmQsIHN0cnVjdCBwY2lfZGV2ICpwZGV2LA0KPj4gICANCj4+ICAgc3RhdGljIGludCBtb2Rp
ZnlfYmFycyhjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwgdWludDE2X3QgY21kLCBib29sIHJv
bV9vbmx5KQ0KPj4gICB7DQo+PiAtICAgIHN0cnVjdCB2cGNpX2hlYWRlciAqaGVhZGVyID0gJnBk
ZXYtPnZwY2ktPmhlYWRlcjsNCj4+ICsgICAgc3RydWN0IHZwY2lfaGVhZGVyICpoZWFkZXI7DQo+
PiAgICAgICBzdHJ1Y3QgcmFuZ2VzZXQgKm1lbSA9IHJhbmdlc2V0X25ldyhOVUxMLCBOVUxMLCAw
KTsNCj4+ICAgICAgIHN0cnVjdCBwY2lfZGV2ICp0bXAsICpkZXYgPSBOVUxMOw0KPj4gICAjaWZk
ZWYgQ09ORklHX1g4Ng0KPj4gQEAgLTIxNyw2ICsyOTcsMTEgQEAgc3RhdGljIGludCBtb2RpZnlf
YmFycyhjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwgdWludDE2X3QgY21kLCBib29sIHJvbV9v
bmx5KQ0KPj4gICAgICAgaWYgKCAhbWVtICkNCj4+ICAgICAgICAgICByZXR1cm4gLUVOT01FTTsN
Cj4+ICAgDQo+PiArICAgIGlmICggaXNfaGFyZHdhcmVfZG9tYWluKGN1cnJlbnQtPmRvbWFpbikg
KQ0KPj4gKyAgICAgICAgaGVhZGVyID0gZ2V0X2h3ZG9tX3ZwY2lfaGVhZGVyKHBkZXYpOw0KPj4g
KyAgICBlbHNlDQo+PiArICAgICAgICBoZWFkZXIgPSBnZXRfdnBjaV9oZWFkZXIoY3VycmVudC0+
ZG9tYWluLCBwZGV2KTsNCj4+ICsNCj4+ICAgICAgIC8qDQo+PiAgICAgICAgKiBDcmVhdGUgYSBy
YW5nZXNldCB0aGF0IHJlcHJlc2VudHMgdGhlIGN1cnJlbnQgZGV2aWNlIEJBUnMgbWVtb3J5IHJl
Z2lvbg0KPj4gICAgICAgICogYW5kIGNvbXBhcmUgaXQgYWdhaW5zdCBhbGwgdGhlIGN1cnJlbnRs
eSBhY3RpdmUgQkFSIG1lbW9yeSByZWdpb25zLiBJZg0KPj4gQEAgLTIyNSwxMiArMzEwLDE1IEBA
IHN0YXRpYyBpbnQgbW9kaWZ5X2JhcnMoY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYsIHVpbnQx
Nl90IGNtZCwgYm9vbCByb21fb25seSkNCj4+ICAgICAgICAqIEZpcnN0IGZpbGwgdGhlIHJhbmdl
c2V0IHdpdGggYWxsIHRoZSBCQVJzIG9mIHRoaXMgZGV2aWNlIG9yIHdpdGggdGhlIFJPTQ0KPj4g
ICAgICAgICogQkFSIG9ubHksIGRlcGVuZGluZyBvbiB3aGV0aGVyIHRoZSBndWVzdCBpcyB0b2dn
bGluZyB0aGUgbWVtb3J5IGRlY29kZQ0KPj4gICAgICAgICogYml0IG9mIHRoZSBjb21tYW5kIHJl
Z2lzdGVyLCBvciB0aGUgZW5hYmxlIGJpdCBvZiB0aGUgUk9NIEJBUiByZWdpc3Rlci4NCj4+ICsg
ICAgICoNCj4+ICsgICAgICogVXNlIHRoZSBQQ0kgcmVzZXJ2ZWQgYml0cyBvZiB0aGUgQkFSIHRv
IHBhc3MgQkFSJ3MgaW5kZXguDQo+PiAgICAgICAgKi8NCj4+ICAgICAgIGZvciAoIGkgPSAwOyBp
IDwgQVJSQVlfU0laRShoZWFkZXItPmJhcnMpOyBpKysgKQ0KPj4gICAgICAgew0KPj4gICAgICAg
ICAgIGNvbnN0IHN0cnVjdCB2cGNpX2JhciAqYmFyID0gJmhlYWRlci0+YmFyc1tpXTsNCj4+IC0g
ICAgICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQgPSBQRk5fRE9XTihiYXItPmFkZHIpOw0KPj4gLSAg
ICAgICAgdW5zaWduZWQgbG9uZyBlbmQgPSBQRk5fRE9XTihiYXItPmFkZHIgKyBiYXItPnNpemUg
LSAxKTsNCj4+ICsgICAgICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQgPSAoYmFyLT5hZGRyICYgUENJ
X0JBU0VfQUREUkVTU19NRU1fTUFTSykgfCBpOw0KPj4gKyAgICAgICAgdW5zaWduZWQgbG9uZyBl
bmQgPSAoYmFyLT5hZGRyICYgUENJX0JBU0VfQUREUkVTU19NRU1fTUFTSykgKw0KPj4gKyAgICAg
ICAgICAgIGJhci0+c2l6ZSAtIDE7DQo+IFdpbGwgdGhpcyB3b3JrIGZpbmUgb24gQXJtIDMyYml0
cyB3aXRoIExQQUU/IEl0J3MgbXkgdW5kZXJzdGFuZGluZw0KPiB0aGF0IGluIHRoYXQgY2FzZSB1
bnNpZ25lZCBsb25nIGlzIDMyYml0cywgYnV0IHRoZSBwaHlzaWNhbCBhZGRyZXNzDQo+IHNwYWNl
IGlzIDQ0Yml0cywgaW4gd2hpY2ggY2FzZSB0aGlzIHdvbid0IHdvcmsuDQpIbSwgZ29vZCBxdWVz
dGlvbg0KPg0KPiBJIHRoaW5rIHlvdSBuZWVkIHRvIGtlZXAgdGhlIHVzYWdlIG9mIGZyYW1lIG51
bWJlcnMgaGVyZS4NCklmIEkgcmUtd29yayB0aGUgZ2ZuIDwtPiBtZm4gbWFwcGluZyB0aGVuIHll
cywgSSBjYW4gdXNlIGZyYW1lIG51bWJlcnMgaGVyZSBhbmQgZWxzZXdoZXJlDQo+DQo+PiAgIA0K
Pj4gICAgICAgICAgIGlmICggIU1BUFBBQkxFX0JBUihiYXIpIHx8DQo+PiAgICAgICAgICAgICAg
ICAocm9tX29ubHkgPyBiYXItPnR5cGUgIT0gVlBDSV9CQVJfUk9NDQo+PiBAQCAtMjUxLDkgKzMz
OSwxMSBAQCBzdGF0aWMgaW50IG1vZGlmeV9iYXJzKGNvbnN0IHN0cnVjdCBwY2lfZGV2ICpwZGV2
LCB1aW50MTZfdCBjbWQsIGJvb2wgcm9tX29ubHkpDQo+PiAgICAgICAvKiBSZW1vdmUgYW55IE1T
SVggcmVnaW9ucyBpZiBwcmVzZW50LiAqLw0KPj4gICAgICAgZm9yICggaSA9IDA7IG1zaXggJiYg
aSA8IEFSUkFZX1NJWkUobXNpeC0+dGFibGVzKTsgaSsrICkNCj4+ICAgICAgIHsNCj4+IC0gICAg
ICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQgPSBQRk5fRE9XTih2bXNpeF90YWJsZV9hZGRyKHBkZXYt
PnZwY2ksIGkpKTsNCj4+IC0gICAgICAgIHVuc2lnbmVkIGxvbmcgZW5kID0gUEZOX0RPV04odm1z
aXhfdGFibGVfYWRkcihwZGV2LT52cGNpLCBpKSArDQo+PiAtICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHZtc2l4X3RhYmxlX3NpemUocGRldi0+dnBjaSwgaSkgLSAxKTsNCj4+
ICsgICAgICAgIHVuc2lnbmVkIGxvbmcgc3RhcnQgPSAodm1zaXhfdGFibGVfYWRkcihwZGV2LT52
cGNpLCBpKSAmDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFBDSV9CQVNFX0FE
RFJFU1NfTUVNX01BU0spIHwgaTsNCj4+ICsgICAgICAgIHVuc2lnbmVkIGxvbmcgZW5kID0gKHZt
c2l4X3RhYmxlX2FkZHIocGRldi0+dnBjaSwgaSkgJg0KPj4gKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgUENJX0JBU0VfQUREUkVTU19NRU1fTUFTSyApICsNCj4+ICsgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHZtc2l4X3RhYmxlX3NpemUocGRldi0+dnBjaSwgaSkgLSAxOw0KPj4g
ICANCj4+ICAgICAgICAgICByYyA9IHJhbmdlc2V0X3JlbW92ZV9yYW5nZShtZW0sIHN0YXJ0LCBl
bmQpOw0KPj4gICAgICAgICAgIGlmICggcmMgKQ0KPj4gQEAgLTI3Myw2ICszNjMsOCBAQCBzdGF0
aWMgaW50IG1vZGlmeV9iYXJzKGNvbnN0IHN0cnVjdCBwY2lfZGV2ICpwZGV2LCB1aW50MTZfdCBj
bWQsIGJvb2wgcm9tX29ubHkpDQo+PiAgICAgICAgKi8NCj4+ICAgICAgIGZvcl9lYWNoX3BkZXYg
KCBwZGV2LT5kb21haW4sIHRtcCApDQo+PiAgICAgICB7DQo+PiArICAgICAgICBzdHJ1Y3QgdnBj
aV9oZWFkZXIgKmhlYWRlcjsNCj4+ICsNCj4+ICAgICAgICAgICBpZiAoIHRtcCA9PSBwZGV2ICkN
Cj4+ICAgICAgICAgICB7DQo+PiAgICAgICAgICAgICAgIC8qDQo+PiBAQCAtMjg5LDExICszODEs
MTQgQEAgc3RhdGljIGludCBtb2RpZnlfYmFycyhjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwg
dWludDE2X3QgY21kLCBib29sIHJvbV9vbmx5KQ0KPj4gICAgICAgICAgICAgICAgICAgY29udGlu
dWU7DQo+PiAgICAgICAgICAgfQ0KPj4gICANCj4+IC0gICAgICAgIGZvciAoIGkgPSAwOyBpIDwg
QVJSQVlfU0laRSh0bXAtPnZwY2ktPmhlYWRlci5iYXJzKTsgaSsrICkNCj4+ICsgICAgICAgIGhl
YWRlciA9IGdldF92cGNpX2hlYWRlcih0bXAtPmRvbWFpbiwgcGRldik7DQo+PiArDQo+PiArICAg
ICAgICBmb3IgKCBpID0gMDsgaSA8IEFSUkFZX1NJWkUoaGVhZGVyLT5iYXJzKTsgaSsrICkNCj4+
ICAgICAgICAgICB7DQo+PiAtICAgICAgICAgICAgY29uc3Qgc3RydWN0IHZwY2lfYmFyICpiYXIg
PSAmdG1wLT52cGNpLT5oZWFkZXIuYmFyc1tpXTsNCj4+IC0gICAgICAgICAgICB1bnNpZ25lZCBs
b25nIHN0YXJ0ID0gUEZOX0RPV04oYmFyLT5hZGRyKTsNCj4+IC0gICAgICAgICAgICB1bnNpZ25l
ZCBsb25nIGVuZCA9IFBGTl9ET1dOKGJhci0+YWRkciArIGJhci0+c2l6ZSAtIDEpOw0KPj4gKyAg
ICAgICAgICAgIGNvbnN0IHN0cnVjdCB2cGNpX2JhciAqYmFyID0gJmhlYWRlci0+YmFyc1tpXTsN
Cj4+ICsgICAgICAgICAgICB1bnNpZ25lZCBsb25nIHN0YXJ0ID0gKGJhci0+YWRkciAmIFBDSV9C
QVNFX0FERFJFU1NfTUVNX01BU0spIHwgaTsNCj4+ICsgICAgICAgICAgICB1bnNpZ25lZCBsb25n
IGVuZCA9IChiYXItPmFkZHIgJiBQQ0lfQkFTRV9BRERSRVNTX01FTV9NQVNLKQ0KPj4gKyAgICAg
ICAgICAgICAgICArIGJhci0+c2l6ZSAtIDE7DQo+PiAgIA0KPj4gICAgICAgICAgICAgICBpZiAo
ICFiYXItPmVuYWJsZWQgfHwgIXJhbmdlc2V0X292ZXJsYXBzX3JhbmdlKG1lbSwgc3RhcnQsIGVu
ZCkgfHwNCj4+ICAgICAgICAgICAgICAgICAgICAvKg0KPj4gQEAgLTM1Nyw3ICs0NTIsNyBAQCBz
dGF0aWMgdm9pZCBjbWRfd3JpdGUoY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYsIHVuc2lnbmVk
IGludCByZWcsDQo+PiAgICAgICAgICAgcGNpX2NvbmZfd3JpdGUxNihwZGV2LT5zYmRmLCByZWcs
IGNtZCk7DQo+PiAgIH0NCj4+ICAgDQo+PiAtc3RhdGljIHZvaWQgYmFyX3dyaXRlKGNvbnN0IHN0
cnVjdCBwY2lfZGV2ICpwZGV2LCB1bnNpZ25lZCBpbnQgcmVnLA0KPj4gK3N0YXRpYyB2b2lkIGJh
cl93cml0ZV9od2RvbShjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwgdW5zaWduZWQgaW50IHJl
ZywNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHZhbCwgdm9pZCAqZGF0YSkN
Cj4+ICAgew0KPj4gICAgICAgc3RydWN0IHZwY2lfYmFyICpiYXIgPSBkYXRhOw0KPj4gQEAgLTM3
NywxNCArNDcyLDE3IEBAIHN0YXRpYyB2b2lkIGJhcl93cml0ZShjb25zdCBzdHJ1Y3QgcGNpX2Rl
diAqcGRldiwgdW5zaWduZWQgaW50IHJlZywNCj4+ICAgICAgIHsNCj4+ICAgICAgICAgICAvKiBJ
ZiB0aGUgdmFsdWUgd3JpdHRlbiBpcyB0aGUgY3VycmVudCBvbmUgYXZvaWQgcHJpbnRpbmcgYSB3
YXJuaW5nLiAqLw0KPj4gICAgICAgICAgIGlmICggdmFsICE9ICh1aW50MzJfdCkoYmFyLT5hZGRy
ID4+IChoaSA/IDMyIDogMCkpICkNCj4+ICsgICAgICAgIHsNCj4+ICsgICAgICAgICAgICBzdHJ1
Y3QgdnBjaV9oZWFkZXIgKmhlYWRlciA9IGdldF9od2RvbV92cGNpX2hlYWRlcihwZGV2KTsNCj4+
ICsNCj4+ICAgICAgICAgICAgICAgZ3ByaW50ayhYRU5MT0dfV0FSTklORywNCj4+ICAgICAgICAg
ICAgICAgICAgICAgICAiJTA0eDolMDJ4OiUwMnguJXU6IGlnbm9yZWQgQkFSICVsdSB3cml0ZSB3
aXRoIG1lbW9yeSBkZWNvZGluZyBlbmFibGVkXG4iLA0KPj4gICAgICAgICAgICAgICAgICAgICAg
IHBkZXYtPnNlZywgcGRldi0+YnVzLCBzbG90LCBmdW5jLA0KPj4gLSAgICAgICAgICAgICAgICAg
ICAgYmFyIC0gcGRldi0+dnBjaS0+aGVhZGVyLmJhcnMgKyBoaSk7DQo+PiArICAgICAgICAgICAg
ICAgICAgICBiYXIgLSBoZWFkZXItPmJhcnMgKyBoaSk7DQo+PiArICAgICAgICB9DQo+PiAgICAg
ICAgICAgcmV0dXJuOw0KPj4gICAgICAgfQ0KPj4gICANCj4+IC0NCj4+ICAgICAgIC8qDQo+PiAg
ICAgICAgKiBVcGRhdGUgdGhlIGNhY2hlZCBhZGRyZXNzLCBzbyB0aGF0IHdoZW4gbWVtb3J5IGRl
Y29kaW5nIGlzIGVuYWJsZWQNCj4+ICAgICAgICAqIFhlbiBjYW4gbWFwIHRoZSBCQVIgaW50byB0
aGUgZ3Vlc3QgcDJtLg0KPj4gQEAgLTQwMywxMCArNTAxLDg5IEBAIHN0YXRpYyB2b2lkIGJhcl93
cml0ZShjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwgdW5zaWduZWQgaW50IHJlZywNCj4+ICAg
ICAgIHBjaV9jb25mX3dyaXRlMzIocGRldi0+c2JkZiwgcmVnLCB2YWwpOw0KPj4gICB9DQo+PiAg
IA0KPj4gK3N0YXRpYyB1aW50MzJfdCBiYXJfcmVhZF9od2RvbShjb25zdCBzdHJ1Y3QgcGNpX2Rl
diAqcGRldiwgdW5zaWduZWQgaW50IHJlZywNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgdm9pZCAqZGF0YSkNCj4+ICt7DQo+PiArICAgIHJldHVybiB2cGNpX2h3X3JlYWQzMihw
ZGV2LCByZWcsIGRhdGEpOw0KPj4gK30NCj4+ICsNCj4+ICtzdGF0aWMgdm9pZCBiYXJfd3JpdGVf
Z3Vlc3QoY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYsIHVuc2lnbmVkIGludCByZWcsDQo+PiAr
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHZhbCwgdm9pZCAqZGF0YSkNCj4+
ICt7DQo+PiArICAgIHN0cnVjdCB2cGNpX2JhciAqdmJhciA9IGRhdGE7DQo+PiArICAgIGJvb2wg
aGkgPSBmYWxzZTsNCj4+ICsNCj4+ICsgICAgaWYgKCB2YmFyLT50eXBlID09IFZQQ0lfQkFSX01F
TTY0X0hJICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgQVNTRVJUKHJlZyA+IFBDSV9CQVNFX0FE
RFJFU1NfMCk7DQo+PiArICAgICAgICB2YmFyLS07DQo+PiArICAgICAgICBoaSA9IHRydWU7DQo+
PiArICAgIH0NCj4+ICsgICAgdmJhci0+YWRkciAmPSB+KDB4ZmZmZmZmZmZ1bGwgPDwgKGhpID8g
MzIgOiAwKSk7DQo+PiArICAgIHZiYXItPmFkZHIgfD0gKHVpbnQ2NF90KXZhbCA8PCAoaGkgPyAz
MiA6IDApOw0KPj4gK30NCj4+ICsNCj4+ICtzdGF0aWMgdWludDMyX3QgYmFyX3JlYWRfZ3Vlc3Qo
Y29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYsIHVuc2lnbmVkIGludCByZWcsDQo+PiArICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHZvaWQgKmRhdGEpDQo+PiArew0KPj4gKyAgICBzdHJ1
Y3QgdnBjaV9iYXIgKnZiYXIgPSBkYXRhOw0KPj4gKyAgICB1aW50MzJfdCB2YWw7DQo+PiArICAg
IGJvb2wgaGkgPSBmYWxzZTsNCj4+ICsNCj4+ICsgICAgaWYgKCB2YmFyLT50eXBlID09IFZQQ0lf
QkFSX01FTTY0X0hJICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgQVNTRVJUKHJlZyA+IFBDSV9C
QVNFX0FERFJFU1NfMCk7DQo+PiArICAgICAgICB2YmFyLS07DQo+PiArICAgICAgICBoaSA9IHRy
dWU7DQo+PiArICAgIH0NCj4+ICsNCj4+ICsgICAgaWYgKCB2YmFyLT50eXBlID09IFZQQ0lfQkFS
X01FTTY0X0xPIHx8IHZiYXItPnR5cGUgPT0gVlBDSV9CQVJfTUVNNjRfSEkgKQ0KPiBJIHRoaW5r
IHRoaXMgd291bGQgYmUgY2xlYXJlciB1c2luZyBhIHN3aXRjaCBzdGF0ZW1lbnQuDQpJJ2xsIHRo
aW5rIGFib3V0DQo+DQo+PiArICAgIHsNCj4+ICsgICAgICAgIGlmICggaGkgKQ0KPj4gKyAgICAg
ICAgICAgIHZhbCA9IHZiYXItPmFkZHIgPj4gMzI7DQo+PiArICAgICAgICBlbHNlDQo+PiArICAg
ICAgICAgICAgdmFsID0gdmJhci0+YWRkciAmIDB4ZmZmZmZmZmY7DQo+PiArICAgICAgICBpZiAo
IHZhbCA9PSB+MCApDQo+IFN0cmljdGx5IHNwZWFraW5nIEkgdGhpbmsgeW91IGFyZSBub3QgZm9y
Y2VkIHRvIHdyaXRlIDFzIHRvIHRoZQ0KPiByZXNlcnZlZCA0IGJpdHMgaW4gdGhlIGxvdyByZWdp
c3RlciAoYW5kIGluIHRoZSAzMmJpdCBjYXNlKS4NCg0KQWgsIHNvIExpbnV4IGtlcm5lbCwgZm9y
IGluc3RhbmNlLCBjb3VsZCBoYXZlIHdyaXR0ZW4gMHhmZmZmZmYwIHdoaWxlDQoNCkkgZXhwZWN0
IDB4ZmZmZmZmZmY/DQoNCj4NCj4+ICsgICAgICAgIHsNCj4+ICsgICAgICAgICAgICAvKiBHdWVz
dHMgZGV0ZWN0cyBCQVIncyBwcm9wZXJ0aWVzIGFuZCBzaXplcy4gKi8NCj4+ICsgICAgICAgICAg
ICBpZiAoICFoaSApDQo+PiArICAgICAgICAgICAgew0KPj4gKyAgICAgICAgICAgICAgICB2YWwg
PSAweGZmZmZmZmZmICYgfih2YmFyLT5zaXplIC0gMSk7DQo+PiArICAgICAgICAgICAgICAgIHZh
bCB8PSB2YmFyLT50eXBlID09IFZQQ0lfQkFSX01FTTMyID8gUENJX0JBU0VfQUREUkVTU19NRU1f
VFlQRV8zMg0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICA6IFBDSV9CQVNFX0FERFJFU1NfTUVNX1RZUEVfNjQ7DQo+PiArICAgICAgICAgICAg
ICAgIHZhbCB8PSB2YmFyLT5wcmVmZXRjaGFibGUgPyBQQ0lfQkFTRV9BRERSRVNTX01FTV9QUkVG
RVRDSCA6IDA7DQo+PiArICAgICAgICAgICAgfQ0KPj4gKyAgICAgICAgICAgIGVsc2UNCj4+ICsg
ICAgICAgICAgICAgICAgdmFsID0gdmJhci0+c2l6ZSA+PiAzMjsNCj4+ICsgICAgICAgICAgICB2
YmFyLT5hZGRyICY9IH4oMHhmZmZmZmZmZnVsbCA8PCAoaGkgPyAzMiA6IDApKTsNCj4+ICsgICAg
ICAgICAgICB2YmFyLT5hZGRyIHw9ICh1aW50NjRfdCl2YWwgPDwgKGhpID8gMzIgOiAwKTsNCj4+
ICsgICAgICAgIH0NCj4+ICsgICAgfQ0KPj4gKyAgICBlbHNlIGlmICggdmJhci0+dHlwZSA9PSBW
UENJX0JBUl9NRU0zMiApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHZhbCA9IHZiYXItPmFkZHI7
DQo+PiArICAgICAgICBpZiAoIHZhbCA9PSB+MCApDQo+PiArICAgICAgICB7DQo+PiArICAgICAg
ICAgICAgaWYgKCAhaGkgKQ0KPiBUaGVyZSdzIG5vIHdheSBoaSBjYW4gYmUgdHJ1ZSBhdCB0aGlz
IHBvaW50IEFGQUlDVC4NClN1cmUsIHRoYW5rIHlvdQ0KPg0KPj4gKyAgICAgICAgICAgIHsNCj4+
ICsgICAgICAgICAgICAgICAgdmFsID0gMHhmZmZmZmZmZiAmIH4odmJhci0+c2l6ZSAtIDEpOw0K
Pj4gKyAgICAgICAgICAgICAgICB2YWwgfD0gdmJhci0+dHlwZSA9PSBWUENJX0JBUl9NRU0zMiA/
IFBDSV9CQVNFX0FERFJFU1NfTUVNX1RZUEVfMzINCj4+ICsgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgOiBQQ0lfQkFTRV9BRERSRVNTX01FTV9UWVBF
XzY0Ow0KPj4gKyAgICAgICAgICAgICAgICB2YWwgfD0gdmJhci0+cHJlZmV0Y2hhYmxlID8gUENJ
X0JBU0VfQUREUkVTU19NRU1fUFJFRkVUQ0ggOiAwOw0KPj4gKyAgICAgICAgICAgIH0NCj4+ICsg
ICAgICAgIH0NCj4+ICsgICAgfQ0KPj4gKyAgICBlbHNlDQo+PiArICAgIHsNCj4+ICsgICAgICAg
IHZhbCA9IHZiYXItPmFkZHI7DQo+PiArICAgIH0NCj4+ICsgICAgcmV0dXJuIHZhbDsNCj4+ICt9
DQo+PiArDQo+PiAgIHN0YXRpYyB2b2lkIHJvbV93cml0ZShjb25zdCBzdHJ1Y3QgcGNpX2RldiAq
cGRldiwgdW5zaWduZWQgaW50IHJlZywNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQz
Ml90IHZhbCwgdm9pZCAqZGF0YSkNCj4+ICAgew0KPj4gLSAgICBzdHJ1Y3QgdnBjaV9oZWFkZXIg
KmhlYWRlciA9ICZwZGV2LT52cGNpLT5oZWFkZXI7DQo+PiArICAgIHN0cnVjdCB2cGNpX2hlYWRl
ciAqaGVhZGVyID0gZ2V0X2h3ZG9tX3ZwY2lfaGVhZGVyKHBkZXYpOw0KPj4gICAgICAgc3RydWN0
IHZwY2lfYmFyICpyb20gPSBkYXRhOw0KPj4gICAgICAgdWludDhfdCBzbG90ID0gUENJX1NMT1Qo
cGRldi0+ZGV2Zm4pLCBmdW5jID0gUENJX0ZVTkMocGRldi0+ZGV2Zm4pOw0KPj4gICAgICAgdWlu
dDE2X3QgY21kID0gcGNpX2NvbmZfcmVhZDE2KHBkZXYtPnNiZGYsIFBDSV9DT01NQU5EKTsNCj4+
IEBAIC00NTIsMTUgKzYyOSw1NiBAQCBzdGF0aWMgdm9pZCByb21fd3JpdGUoY29uc3Qgc3RydWN0
IHBjaV9kZXYgKnBkZXYsIHVuc2lnbmVkIGludCByZWcsDQo+PiAgICAgICAgICAgcm9tLT5hZGRy
ID0gdmFsICYgUENJX1JPTV9BRERSRVNTX01BU0s7DQo+PiAgIH0NCj4gRG9uJ3QgeW91IG5lZWQg
dG8gYWxzbyBwcm90ZWN0IGEgZG9tVSBmcm9tIHdyaXRpbmcgdG8gdGhlIFJPTSBCQVINCj4gcmVn
aXN0ZXI/DQoNClJPTSB3YXMgbm90IGEgdGFyZ2V0IG9mIHRoaXMgUkZDIGFzIEkgaGF2ZSBubyBI
VyB0byB0ZXN0IHRoYXQsIGJ1dCBmaW5hbCBjb2RlIG11c3QNCg0KYWxzbyBoYW5kbGUgUk9NIGFz
IHdlbGwsIHlvdSBhcmUgcmlnaHQNCg0KPg0KPj4gICANCj4+ICtzdGF0aWMgdWludDMyX3QgYmFy
X3JlYWRfZGlzcGF0Y2goY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYsIHVuc2lnbmVkIGludCBy
ZWcsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHZvaWQgKmRhdGEpDQo+
PiArew0KPj4gKyAgICBzdHJ1Y3QgdnBjaV9iYXIgKnZiYXIsICpiYXIgPSBkYXRhOw0KPj4gKw0K
Pj4gKyAgICBpZiAoIGlzX2hhcmR3YXJlX2RvbWFpbihjdXJyZW50LT5kb21haW4pICkNCj4+ICsg
ICAgICAgIHJldHVybiBiYXJfcmVhZF9od2RvbShwZGV2LCByZWcsIGRhdGEpOw0KPj4gKw0KPj4g
KyAgICB2YmFyID0gZ2V0X3ZwY2lfYmFyKGN1cnJlbnQtPmRvbWFpbiwgcGRldiwgYmFyLT5pbmRl
eCk7DQo+PiArICAgIGlmICggIXZiYXIgKQ0KPj4gKyAgICAgICAgcmV0dXJuIH4wOw0KPj4gKw0K
Pj4gKyAgICByZXR1cm4gYmFyX3JlYWRfZ3Vlc3QocGRldiwgcmVnLCB2YmFyKTsNCj4+ICt9DQo+
PiArDQo+PiArc3RhdGljIHZvaWQgYmFyX3dyaXRlX2Rpc3BhdGNoKGNvbnN0IHN0cnVjdCBwY2lf
ZGV2ICpwZGV2LCB1bnNpZ25lZCBpbnQgcmVnLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICB1aW50MzJfdCB2YWwsIHZvaWQgKmRhdGEpDQo+PiArew0KPj4gKyAgICBzdHJ1Y3Qg
dnBjaV9iYXIgKmJhciA9IGRhdGE7DQo+PiArDQo+PiArICAgIGlmICggaXNfaGFyZHdhcmVfZG9t
YWluKGN1cnJlbnQtPmRvbWFpbikgKQ0KPj4gKyAgICAgICAgYmFyX3dyaXRlX2h3ZG9tKHBkZXYs
IHJlZywgdmFsLCBkYXRhKTsNCj4+ICsgICAgZWxzZQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBz
dHJ1Y3QgdnBjaV9iYXIgKnZiYXIgPSBnZXRfdnBjaV9iYXIoY3VycmVudC0+ZG9tYWluLCBwZGV2
LCBiYXItPmluZGV4KTsNCj4+ICsNCj4+ICsgICAgICAgIGlmICggIXZiYXIgKQ0KPj4gKyAgICAg
ICAgICAgIHJldHVybjsNCj4+ICsgICAgICAgIGJhcl93cml0ZV9ndWVzdChwZGV2LCByZWcsIHZh
bCwgdmJhcik7DQo+PiArICAgIH0NCj4+ICt9DQo+IFlvdSBzaG91bGQgYXNzaWduIGRpZmZlcmVu
dCBoYW5kbGVycyBiYXNlZCBvbiB3aGV0aGVyIHRoZSBkb21haW4gdGhhdA0KPiBoYXMgdGhlIGRl
dmljZSBhc3NpZ25lZCBpcyBhIGRvbVUgb3IgdGhlIGhhcmR3YXJlIGRvbWFpbiwgcmF0aGVyIHRo
YW4NCj4gZG9pbmcgdGhlIHNlbGVjdGlvbiBoZXJlLg0KDQpIbSwgaGFuZGxlcnMgYXJlIGFzc2ln
bmVkIG9uY2UgaW4gaW5pdF9iYXJzIGFuZCB0aGlzIGZ1bmN0aW9uIGlzIG9ubHkgY2FsbGVkDQoN
CmZvciBod2RvbSwgc28gdGhlcmUgaXMgbm8gd2F5IEkgY2FuIGRvIHRoYXQgZm9yIHRoZSBndWVz
dHMuIEhlbmNlLCB0aGUgZGlzcGF0Y2hlcg0KDQo+DQo+PiArDQo+PiArLyoNCj4+ICsgKiBGSVhN
RTogVGhpcyBpcyBjYWxsZWQgZWFybHkgd2hpbGUgYWRkaW5nIHZQQ0kgaGFuZGxlcnMgd2hpY2gg
aXMgZG9uZQ0KPj4gKyAqIGJ5IGFuZCBmb3IgaHdkb20uDQo+PiArICovDQo+PiAgIHN0YXRpYyBp
bnQgaW5pdF9iYXJzKHN0cnVjdCBwY2lfZGV2ICpwZGV2KQ0KPj4gICB7DQo+PiAgICAgICB1aW50
MTZfdCBjbWQ7DQo+PiAgICAgICB1aW50NjRfdCBhZGRyLCBzaXplOw0KPj4gICAgICAgdW5zaWdu
ZWQgaW50IGksIG51bV9iYXJzLCByb21fcmVnOw0KPj4gLSAgICBzdHJ1Y3QgdnBjaV9oZWFkZXIg
KmhlYWRlciA9ICZwZGV2LT52cGNpLT5oZWFkZXI7DQo+PiAtICAgIHN0cnVjdCB2cGNpX2JhciAq
YmFycyA9IGhlYWRlci0+YmFyczsNCj4+ICsgICAgc3RydWN0IHZwY2lfaGVhZGVyICpoZWFkZXI7
DQo+PiArICAgIHN0cnVjdCB2cGNpX2JhciAqYmFyczsNCj4+ICAgICAgIGludCByYzsNCj4+ICAg
DQo+PiArICAgIGhlYWRlciA9IGdldF9od2RvbV92cGNpX2hlYWRlcihwZGV2KTsNCj4+ICsgICAg
aWYgKCAhaGVhZGVyICkNCj4+ICsgICAgICAgIHJldHVybiAtRU5PTUVNOw0KPj4gKyAgICBiYXJz
ID0gaGVhZGVyLT5iYXJzOw0KPj4gKw0KPj4gICAgICAgc3dpdGNoICggcGNpX2NvbmZfcmVhZDgo
cGRldi0+c2JkZiwgUENJX0hFQURFUl9UWVBFKSAmIDB4N2YgKQ0KPj4gICAgICAgew0KPj4gICAg
ICAgY2FzZSBQQ0lfSEVBREVSX1RZUEVfTk9STUFMOg0KPj4gQEAgLTQ5NiwxMSArNzE0LDEyIEBA
IHN0YXRpYyBpbnQgaW5pdF9iYXJzKHN0cnVjdCBwY2lfZGV2ICpwZGV2KQ0KPj4gICAgICAgICAg
IHVpbnQ4X3QgcmVnID0gUENJX0JBU0VfQUREUkVTU18wICsgaSAqIDQ7DQo+PiAgICAgICAgICAg
dWludDMyX3QgdmFsOw0KPj4gICANCj4+ICsgICAgICAgIGJhcnNbaV0uaW5kZXggPSBpOw0KPj4g
ICAgICAgICAgIGlmICggaSAmJiBiYXJzW2kgLSAxXS50eXBlID09IFZQQ0lfQkFSX01FTTY0X0xP
ICkNCj4+ICAgICAgICAgICB7DQo+PiAgICAgICAgICAgICAgIGJhcnNbaV0udHlwZSA9IFZQQ0lf
QkFSX01FTTY0X0hJOw0KPj4gLSAgICAgICAgICAgIHJjID0gdnBjaV9hZGRfcmVnaXN0ZXIocGRl
di0+dnBjaSwgdnBjaV9od19yZWFkMzIsIGJhcl93cml0ZSwgcmVnLA0KPj4gLSAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgNCwgJmJhcnNbaV0pOw0KPj4gKyAgICAgICAgICAgIHJj
ID0gdnBjaV9hZGRfcmVnaXN0ZXIocGRldi0+dnBjaSwgYmFyX3JlYWRfZGlzcGF0Y2gsDQo+PiAr
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBiYXJfd3JpdGVfZGlzcGF0Y2gsIHJl
ZywgNCwgJmJhcnNbaV0pOw0KPj4gICAgICAgICAgICAgICBpZiAoIHJjICkNCj4+ICAgICAgICAg
ICAgICAgew0KPj4gICAgICAgICAgICAgICAgICAgcGNpX2NvbmZfd3JpdGUxNihwZGV2LT5zYmRm
LCBQQ0lfQ09NTUFORCwgY21kKTsNCj4+IEBAIC01NDAsOCArNzU5LDggQEAgc3RhdGljIGludCBp
bml0X2JhcnMoc3RydWN0IHBjaV9kZXYgKnBkZXYpDQo+PiAgICAgICAgICAgYmFyc1tpXS5zaXpl
ID0gc2l6ZTsNCj4+ICAgICAgICAgICBiYXJzW2ldLnByZWZldGNoYWJsZSA9IHZhbCAmIFBDSV9C
QVNFX0FERFJFU1NfTUVNX1BSRUZFVENIOw0KPj4gICANCj4+IC0gICAgICAgIHJjID0gdnBjaV9h
ZGRfcmVnaXN0ZXIocGRldi0+dnBjaSwgdnBjaV9od19yZWFkMzIsIGJhcl93cml0ZSwgcmVnLCA0
LA0KPj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAmYmFyc1tpXSk7DQo+PiArICAg
ICAgICByYyA9IHZwY2lfYWRkX3JlZ2lzdGVyKHBkZXYtPnZwY2ksIGJhcl9yZWFkX2Rpc3BhdGNo
LA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBiYXJfd3JpdGVfZGlzcGF0Y2gs
IHJlZywgNCwgJmJhcnNbaV0pOw0KPj4gICAgICAgICAgIGlmICggcmMgKQ0KPj4gICAgICAgICAg
IHsNCj4+ICAgICAgICAgICAgICAgcGNpX2NvbmZfd3JpdGUxNihwZGV2LT5zYmRmLCBQQ0lfQ09N
TUFORCwgY21kKTsNCj4+IEBAIC01NTgsNiArNzc3LDcgQEAgc3RhdGljIGludCBpbml0X2JhcnMo
c3RydWN0IHBjaV9kZXYgKnBkZXYpDQo+PiAgICAgICAgICAgcm9tLT50eXBlID0gVlBDSV9CQVJf
Uk9NOw0KPj4gICAgICAgICAgIHJvbS0+c2l6ZSA9IHNpemU7DQo+PiAgICAgICAgICAgcm9tLT5h
ZGRyID0gYWRkcjsNCj4+ICsgICAgICAgIHJvbS0+aW5kZXggPSBudW1fYmFyczsNCj4+ICAgICAg
ICAgICBoZWFkZXItPnJvbV9lbmFibGVkID0gcGNpX2NvbmZfcmVhZDMyKHBkZXYtPnNiZGYsIHJv
bV9yZWcpICYNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgUENJX1JPTV9BRERS
RVNTX0VOQUJMRTsNCj4+ICAgDQo+PiBkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvdnBjaS92cGNp
LmMgYi94ZW4vZHJpdmVycy92cGNpL3ZwY2kuYw0KPj4gaW5kZXggYTUyOTM1MjFhMzZhLi43Mjgw
MjlkYTNlOWMgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vZHJpdmVycy92cGNpL3ZwY2kuYw0KPj4gKysr
IGIveGVuL2RyaXZlcnMvdnBjaS92cGNpLmMNCj4+IEBAIC02OSw2ICs2OSw3IEBAIGludCBfX2h3
ZG9tX2luaXQgdnBjaV9hZGRfaGFuZGxlcnMoc3RydWN0IHBjaV9kZXYgKnBkZXYpDQo+PiAgICAg
ICAgICAgcmV0dXJuIC1FTk9NRU07DQo+PiAgIA0KPj4gICAgICAgSU5JVF9MSVNUX0hFQUQoJnBk
ZXYtPnZwY2ktPmhhbmRsZXJzKTsNCj4+ICsgICAgSU5JVF9MSVNUX0hFQUQoJnBkZXYtPnZwY2kt
PmhlYWRlcnMpOw0KPj4gICAgICAgc3Bpbl9sb2NrX2luaXQoJnBkZXYtPnZwY2ktPmxvY2spOw0K
Pj4gICANCj4+ICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTlVNX1ZQQ0lfSU5JVDsgaSsrICkNCj4+
IGRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4vdnBjaS5oIGIveGVuL2luY2x1ZGUveGVuL3Zw
Y2kuaA0KPj4gaW5kZXggYzM1MDFlOWVjMDEwLi41NDQyM2JjNjU1NmQgMTAwNjQ0DQo+PiAtLS0g
YS94ZW4vaW5jbHVkZS94ZW4vdnBjaS5oDQo+PiArKysgYi94ZW4vaW5jbHVkZS94ZW4vdnBjaS5o
DQo+PiBAQCAtNTUsMTYgKzU1LDE0IEBAIHVpbnQzMl90IHZwY2lfaHdfcmVhZDMyKGNvbnN0IHN0
cnVjdCBwY2lfZGV2ICpwZGV2LCB1bnNpZ25lZCBpbnQgcmVnLA0KPj4gICAgKi8NCj4+ICAgYm9v
bCBfX211c3RfY2hlY2sgdnBjaV9wcm9jZXNzX3BlbmRpbmcoc3RydWN0IHZjcHUgKnYpOw0KPj4g
ICANCj4+IC1zdHJ1Y3QgdnBjaSB7DQo+PiAtICAgIC8qIExpc3Qgb2YgdlBDSSBoYW5kbGVycyBm
b3IgYSBkZXZpY2UuICovDQo+PiAtICAgIHN0cnVjdCBsaXN0X2hlYWQgaGFuZGxlcnM7DQo+PiAt
ICAgIHNwaW5sb2NrX3QgbG9jazsNCj4+IC0NCj4+ICAgI2lmZGVmIF9fWEVOX18NCj4+IC0gICAg
LyogSGlkZSB0aGUgcmVzdCBvZiB0aGUgdnBjaSBzdHJ1Y3QgZnJvbSB0aGUgdXNlci1zcGFjZSB0
ZXN0IGhhcm5lc3MuICovDQo+PiAgICAgICBzdHJ1Y3QgdnBjaV9oZWFkZXIgew0KPj4gKyAgICBz
dHJ1Y3QgbGlzdF9oZWFkIG5vZGU7DQo+PiArICAgIC8qIERvbWFpbiB0aGF0IG93bnMgdGhpcyB2
aWV3IG9mIHRoZSBCQVJzLiAqLw0KPj4gKyAgICBkb21pZF90IGRvbWFpbl9pZDsNCj4gSW5kZW50
YXRpb24gc2VlbXMgc2NyZXdlZCBoZXJlLg0KSXQgZGlkIDspDQo+DQo+PiAgICAgICAgICAgLyog
SW5mb3JtYXRpb24gYWJvdXQgdGhlIFBDSSBCQVJzIG9mIHRoaXMgZGV2aWNlLiAqLw0KPj4gICAg
ICAgICAgIHN0cnVjdCB2cGNpX2JhciB7DQo+PiArICAgICAgICAgICAgaW50IGluZGV4Ow0KPiB1
bnNpZ25lZA0Kb2sNCj4NCj4+ICAgICAgICAgICAgICAgdWludDY0X3QgYWRkcjsNCj4+ICAgICAg
ICAgICAgICAgdWludDY0X3Qgc2l6ZTsNCj4+ICAgICAgICAgICAgICAgZW51bSB7DQo+PiBAQCAt
ODgsOCArODYsMTggQEAgc3RydWN0IHZwY2kgew0KPj4gICAgICAgICAgICAqIGlzIG1hcHBlZCBp
bnRvIGd1ZXN0IHAybSkgaWYgdGhlcmUncyBhIFJPTSBCQVIgb24gdGhlIGRldmljZS4NCj4+ICAg
ICAgICAgICAgKi8NCj4+ICAgICAgICAgICBib29sIHJvbV9lbmFibGVkICAgICAgOiAxOw0KPj4g
LSAgICAgICAgLyogRklYTUU6IGN1cnJlbnRseSB0aGVyZSdzIG5vIHN1cHBvcnQgZm9yIFNSLUlP
Vi4gKi8NCj4gVW5sZXNzIHlvdSBhcmUgYWxzbyBhZGRpbmcgc3VwcG9ydCBmb3IgU1ItSU9WLCBJ
IHdvdWxkIGtlZXAgdGhlDQo+IGNvbW1lbnQuDQoNCldSVCBTUi1JT1YgSSBkbyBuZWVkIHlvdXIg
c2VyaWVzIFsxXSA7KSBTUi1JT1YgaXMgb25lIG9mIG91ciB0YXJnZXRzDQoNCj4gVGhhbmtzLCBS
b2dlci4NCg0KVGhhbmsgeW91IHNvIG11Y2ggZm9yIHJldmlld2luZyB0aGlzLA0KDQpPbGVrc2Fu
ZHINCg0KWzFdIGh0dHBzOi8vbGlzdHMueGVucHJvamVjdC5vcmcvYXJjaGl2ZXMvaHRtbC94ZW4t
ZGV2ZWwvMjAxOC0wNy9tc2cwMTQ5NC5odG1s


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:36:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:36:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25885.53930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCmH-00006O-7F; Thu, 12 Nov 2020 13:36:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25885.53930; Thu, 12 Nov 2020 13:36:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCmH-00006H-4N; Thu, 12 Nov 2020 13:36:41 +0000
Received: by outflank-mailman (input) for mailman id 25885;
 Thu, 12 Nov 2020 13:36:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdCmF-00006B-H8
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:36:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5aa64e5-00a3-494f-b249-0df3906d1c62;
 Thu, 12 Nov 2020 13:36:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 817D5AC1D;
 Thu, 12 Nov 2020 13:36:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605188197;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eteJ9ZJIDiBWg8uBlb3V3zK8fFf5uEU0SVmhkBGLvcE=;
	b=BmOM3kqJkSOZ+py8vNn+1hoaz0S0QA8ilx13aKeQ5K1iJ+GRZi0OGJ2qN9f7NVWuXH90p3
	dyLzrdNqWJeocQpR5feNm40vwRBYZH8FhG4VzfgkDBJJvVs54jHXOp/pyd8LvWrZM1eet4
	3PEwWZpAagOVVp9PoKm29SJxKx7CLRo=
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
 <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
 <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
Date: Thu, 12 Nov 2020 14:36:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 13:50, Jürgen Groß wrote:
> Any further comments, or even better, Acks?

To be honest I'd prefer to have at least one of the people Cc-ed
minimally indicate they consider this a good idea. No need for a
close review or such, just a basic opinion. Anyone?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:41:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:41:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25891.53941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCr4-000171-R2; Thu, 12 Nov 2020 13:41:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25891.53941; Thu, 12 Nov 2020 13:41:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCr4-00016u-Nw; Thu, 12 Nov 2020 13:41:38 +0000
Received: by outflank-mailman (input) for mailman id 25891;
 Thu, 12 Nov 2020 13:41:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdCr3-00016p-Az
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:41:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd61a5ee-6676-45ca-8de2-cb0fb87b5c7b;
 Thu, 12 Nov 2020 13:41:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5FE56AC24;
 Thu, 12 Nov 2020 13:41:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605188495;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AM9QzQgPmffwaOjJxFI2fBbpKVpn4w0ommto6Vsjcqs=;
	b=ax1ctZo76K0jsQtaJ4ZoomnLLTRcg9z8KZWAcoYzVcwvG3CeORNl3q7HXr9CuLzklaSIrd
	80F67TsFPuQZjQDZLB4tnlO4mJ0VfU8owmGg6k1W44/EBPDXumMxFpluF3P/E0YaWBDjMN
	ASgzXZQSuoFeCxCJCFmqPb1xxUBd+Cg=
Subject: Re: [PATCH v5 1/3] xen/x86: add nmi continuation framework
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201112131424.9930-1-jgross@suse.com>
 <20201112131424.9930-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f266e1fe-21fe-a44c-d8b1-94d89813f42f@suse.com>
Date: Thu, 12 Nov 2020 14:41:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201112131424.9930-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.11.2020 14:14, Juergen Gross wrote:
> --- a/xen/arch/x86/genapic/x2apic.c
> +++ b/xen/arch/x86/genapic/x2apic.c
> @@ -89,6 +89,7 @@ static unsigned int cpu_mask_to_apicid_x2apic_cluster(const cpumask_t *cpumask)
>  
>  static void send_IPI_self_x2apic(uint8_t vector)
>  {
> +    /* NMI continuation handling relies on using a shorthand here. */
>      apic_wrmsr(APIC_SELF_IPI, vector);
>  }

I'm inclined to drop this hunk again - I did ask for ...

> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -163,6 +163,7 @@ void send_IPI_self(int vector)
>  
>  void send_IPI_self_legacy(uint8_t vector)
>  {
> +    /* NMI continuation handling relies on using a shorthand here. */
>      send_IPI_shortcut(APIC_DEST_SELF, vector, APIC_DEST_PHYSICAL);
>  }

... this one only simply because x2APIC doesn't have the same restriction.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 13:43:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 13:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25897.53954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCsp-0001GE-8W; Thu, 12 Nov 2020 13:43:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25897.53954; Thu, 12 Nov 2020 13:43:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdCsp-0001G7-3o; Thu, 12 Nov 2020 13:43:27 +0000
Received: by outflank-mailman (input) for mailman id 25897;
 Thu, 12 Nov 2020 13:43:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdCsn-0001G2-HY
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 13:43:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b82396ce-b2ee-49e1-9b2b-69d9369c4f10;
 Thu, 12 Nov 2020 13:43:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE7FDAC1D;
 Thu, 12 Nov 2020 13:43:23 +0000 (UTC)
X-Inumbo-ID: b82396ce-b2ee-49e1-9b2b-69d9369c4f10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605188604;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uK18gfZ7Cv4BatXuQtSCa+4tfAYmJ4mvF4Bk1+DC59w=;
	b=kNGAjSM8uF343l01k6tEvPOai9MsOjuRr+JyzD2NKcBgzNJYfBNiDQBKs1OXa+PmQ96RPw
	9K8t76XIqLdJ1u8NHMbXn0X9V3Vw6eSp2bdc14YHNyWY72MhO8E9gJ5se1l6PLoM89s+TS
	e3dXFdZn5D58S0TeDOSz0dFQl5KDNtk=
Subject: Re: [PATCH v5 2/3] xen/oprofile: use NMI continuation for sending
 virq to guest
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201112131424.9930-1-jgross@suse.com>
 <20201112131424.9930-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c03669ae-e09f-681b-c98b-5719932033cd@suse.com>
Date: Thu, 12 Nov 2020 14:43:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201112131424.9930-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.11.2020 14:14, Juergen Gross wrote:
> Instead of calling send_guest_vcpu_virq() from NMI context, use the
> NMI continuation framework for that purpose. This avoids taking locks
> in NMI mode.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 14:05:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 14:05:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25904.53969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdDDg-0003Ng-2B; Thu, 12 Nov 2020 14:05:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25904.53969; Thu, 12 Nov 2020 14:05:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdDDf-0003NZ-Ui; Thu, 12 Nov 2020 14:04:59 +0000
Received: by outflank-mailman (input) for mailman id 25904;
 Thu, 12 Nov 2020 14:04:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdDDe-0003NU-Sn
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 14:04:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b5cb250-830d-440b-8756-2374231517c5;
 Thu, 12 Nov 2020 14:04:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6E14AB95;
 Thu, 12 Nov 2020 14:04:55 +0000 (UTC)
X-Inumbo-ID: 1b5cb250-830d-440b-8756-2374231517c5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605189895;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EoIcCy8PnJAA8P2tB2IZGX8XNkRFEmH/mGghqvTDQ6s=;
	b=roFNOIGmRazYQ6DQucbtNcHIAoW3a41HfeXpS1QSIY6mDf9fPbYzcTVJQes/+3z7X6C0En
	uFFd0Fhap7nZYTFvNXYBuRXOsO/EsAfPsdveddF8XSqWpRZcjnFlAzO64Pq9w94cj5MbT/
	ZAJzGnOuSilpuMcYLtQoPm88FAjFU9w=
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tim Deegan <tim@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
 <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
 <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
 <20201112130709.r3acpkrkyck6arul@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51e646d4-3e1b-3698-c649-a39840275ec9@suse.com>
Date: Thu, 12 Nov 2020 15:04:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.1
MIME-Version: 1.0
In-Reply-To: <20201112130709.r3acpkrkyck6arul@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 14:07, Roger Pau Monné wrote:
> On Thu, Nov 12, 2020 at 01:29:33PM +0100, Jan Beulich wrote:
>> On 11.11.2020 13:17, Roger Pau Monné wrote:
>>> On Tue, Nov 10, 2020 at 03:50:44PM +0100, Jan Beulich wrote:
>>>> On 10.11.2020 14:59, Roger Pau Monné wrote:
>>>>> On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
>>>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>>>> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
>>>>>>  {
>>>>>>      struct domain *d = p2m->domain;
>>>>>>      struct vcpu *v = current;
>>>>>> -    int rc = 0;
>>>>>>  
>>>>>>      if ( v->domain != d )
>>>>>>          v = d->vcpu ? d->vcpu[0] : NULL;
>>>>>>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
>>>>>>           p2m_is_nestedp2m(p2m) )
>>>>>> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
>>>>>> +    {
>>>>>> +        unsigned int oflags;
>>>>>> +        mfn_t omfn;
>>>>>> +        int rc;
>>>>>> +
>>>>>> +        paging_lock(d);
>>>>>> +
>>>>>> +        if ( p2m->write_p2m_entry_pre )
>>>>>> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
>>>>>> +
>>>>>> +        oflags = l1e_get_flags(*p);
>>>>>> +        omfn = l1e_get_mfn(*p);
>>>>>> +
>>>>>> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
>>>>>> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
>>>>>> +                              omfn, level);
>>>>>> +        if ( rc )
>>>>>> +        {
>>>>>> +            paging_unlock(d);
>>>>>> +            return rc;
>>>>>> +        }
>>>>>> +
>>>>>> +        safe_write_pte(p, new);
>>>>>> +
>>>>>> +        if ( p2m->write_p2m_entry_post )
>>>>>> +            p2m->write_p2m_entry_post(p2m, oflags);
>>>>>> +
>>>>>> +        paging_unlock(d);
>>>>>> +
>>>>>> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
>>>>>> +             (oflags & _PAGE_PRESENT) &&
>>>>>> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
>>>>>> +             /*
>>>>>> +              * We are replacing a valid entry so we need to flush nested p2ms,
>>>>>> +              * unless the only change is an increase in access rights.
>>>>>> +              */
>>>>>> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
>>>>>> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
>>>>>> +            p2m_flush_nestedp2m(d);
>>>>>
>>>>> It feels slightly weird to have a nested p2m post hook, and yet have
>>>>> nested specific code here.
>>>>>
>>>>> Have you considered if the post hook could be moved outside of the
>>>>> locked region, so that we could put this chunk there in the nested p2m
>>>>> case?
>>>>
>>>> Yes, I did, but I don't think the post hook can be moved out. The
>>>> only alternative therefore would be a 3rd hook. And this hook would
>>>> then need to be installed on the host p2m for nested guests, as
>>>> opposed to nestedp2m_write_p2m_entry_post, which gets installed in
>>>> the nested p2m-s. As said in the description, the main reason I
>>>> decided against a 3rd hook is that I suppose the code here isn't
>>>> HAP-specific (while prior to this patch it was).
>>>
>>> I'm not convinced the guest TLB flush needs to be performed while
>>> holding the paging lock. The point of such flush is to invalidate any
>>> intermediate guest visible translations that might now be invalid as a
>>> result of the p2m change, but the paging lock doesn't affect the guest
>>> in any way.
>>>
>>> It's true that the dirty_cpumask might change, but I think we only
>>> care that when returning from the function there are no stale cache
>>> entries that contain the now invalid translation, and this can be
>>> achieved equally by doing the flush outside of the locked region.
>>
>> I agree with all this. If only it was merely about TLB flushes. In
>> the shadow case, shadow_blow_all_tables() gets invoked, and that
>> one - looking at the other call sites - wants the paging lock held.
> 
> You got me confused here, I think you meant shadow_blow_tables?

Oh, yes, sorry - copy-and-paste from the wrong source.

> The post hook for shadow could take the lock again, as I don't think
> the removal of the tables needs to be strictly done inside of the same
> locked region?

I think it does, or else a use of the now stale tables may occur
before they get blown away. Tim?

> Something to consider from a performance PoV.
> 
>> Additionally moving the stuff out of the locked region wouldn't
>> allow us to achieve the goal of moving the nested flush into the
>> hook, unless we introduced further hook handlers to be installed
>> on the host p2m-s of nested guests.
> 
> Right, or else we would need to add that chunk in the
> non-nestedp2m hook as well?

I'm a little confused by the question: If we wanted to move this
into the hook functions, it would need to be both hap's and
shadow's, i.e. _only_ the non-nested ones. IOW it could then
stay in hap's and be duplicated into shadow's. Avoiding the
duplication _and_ keeping it outside the locked region is why I
moved it into the common logic (provided, of course, I'm right
with my understanding of it also being needed in the shadow
case; else it could stay in the hap function alone), of which
the 2nd aspect would go away if the hook invocation itself lived
outside the locked region. But the duplication of this would
still concern me ...

> Maybe you could join both the nested and non-nested hooks and use a
> different dirty bitmap for the flush?

What would this gain us? Extra conditionals in a hook, when the
hook exists (indirectly) to avoid having endless conditionals in the
common logic?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 14:47:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 14:47:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25913.53981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdDsH-0007Kb-EV; Thu, 12 Nov 2020 14:46:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25913.53981; Thu, 12 Nov 2020 14:46:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdDsH-0007KU-BD; Thu, 12 Nov 2020 14:46:57 +0000
Received: by outflank-mailman (input) for mailman id 25913;
 Thu, 12 Nov 2020 14:46:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kdDsG-0007KP-EJ
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 14:46:56 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c33deb03-62f7-45d4-8e73-cd17cbff5927;
 Thu, 12 Nov 2020 14:46:54 +0000 (UTC)
X-Inumbo-ID: c33deb03-62f7-45d4-8e73-cd17cbff5927
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605192414;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=qUhTeOW820x94pax7PSaVLpcLZVEq0uQeaFbYQ5saEU=;
  b=LepBLwTo4a/LyllmK3X2OoNoCCTiRLzugrFTJ/rjbKWyg+ii9TebWVte
   OcPFIAc/dkZApOMsB2bwSeoaxsbHvqoqKCgfZQOWF67wEKLUsa93EnXAa
   Dop8blUn8N9W4BViQKE6mfrh3u5uI24DMvrJcAo7etQrz9D7cSOzKFyOG
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s5jYqgir+5Jc9u+Ngzt2+mSePeNIo5/5fNjUPdK0NeN+WER9xtPeWyWpkrMo/fBtzhQtNuY106
 zoftY40MI5nnotSGeS/NEYznwUbyw36J4okrI99mz7P3HHtL+rRUBDpffNNWxaXJeakNs6FlL+
 n9UizFQCYzbI4dFh5b1ne2CBzaGcwuk23oG1xFNRYEnx8I9XZy+uMVpPc1KD7FmB9OkAlVW5YU
 4XLZ/wIXq/VZ6mpXFIrMjN/jSbztXWcyQttybWt7/Rl7JIWNyBoO72SrODTzT3P629IfUvobsp
 +Ss=
X-SBRS: None
X-MesageID: 31268795
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,472,1596513600"; 
   d="scan'208";a="31268795"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F+NBEsZ/t2yyaXUQwItIvh1st19d+hDYb/TbPGC/iQHX7Lr7rKdYDPgQEKMx0X2GAY3/P0DYszY9EEjdVtxz3mwVpB1hukqKaPsZ3rkd9DlrLlj1AgndGXkSRDVrUird0dkQKDYUCwwvEn59g8uEPRvDbWtLk2uu7wiQZjxZ4cGOWy5l84hTRxT4j2Aqa2CcXkXj6luZthrIGMZC0j5wfD7T7stJ8eBSdwDILaDqwLZ+wGFLsTea4wdTLSXQSjqzRi4BlBOCMWd2b+pY1U2rOXleu5EmXTBAvyK6NJ1NDmbQ9zM1wJ7lyD69PNgYsiyk5gLpKYW71/H6oYhRc1pvhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JNqSDsDxZXRO2EpaifFeChNnd7skvjiStIlunFt7o5E=;
 b=h1xYNaolTC0Q39WLb7jXI35S7Ix6qkBBNgARkkkoKpnaCWsn0pfqMU3FlFHBq2SzcTwTsxDrdk5ab21DGtj0XS9U9dEkLJO0q9vOInePW1EObOO286XC7vF8lKjYPzFWFDADoFS8iViTKQGIgPyQujJ9DQG6qkZJCscBfhY+UoKiaSA7B9o2AXs/Ok5pz4AU88WjZ9gqsJDtQ6f/FcXIM4d3hTMQkA1Z5ESvtkLKJrsNvX4Rc6yJmkDf6U2ZYZzA3a1OQ+05+rV5JHnB9qCjqp0zUQv/8tGeomER8wfjRM6xjDw3TcT3XeG1pgnVZyqQso22B+zJadXGowSpmrzVGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JNqSDsDxZXRO2EpaifFeChNnd7skvjiStIlunFt7o5E=;
 b=qKBPZUIJJi5qOTywGXiALdIM+uxVzbfC+HxDBESyQadHGYy0VCtJZUueAqhzvp/g7Zm7Xdwxn2eLz7Yy474znbjODWS51tzJ0xJndfQD4EL6gHfwJOOrkhOVUWjHBh7puW8E+ZdvoInp+36z6q85etjt/qa/pqtIq46LLVkI6G0=
Date: Thu, 12 Nov 2020 15:46:43 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>, "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>, "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
	"julien.grall@arm.com" <julien.grall@arm.com>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Message-ID: <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
X-ClientProxiedBy: LO2P265CA0386.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:f::14) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: da7d8451-ebc5-4437-43ce-08d88719c8c0
X-MS-TrafficTypeDiagnostic: SN6PR03MB3776:
X-Microsoft-Antispam-PRVS: <SN6PR03MB37762588E5D3052140A4137D8FE70@SN6PR03MB3776.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:137;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: A8c2NlhpJbd0OO6PiQWcXAvcGtHZIzc01QoCwI/FW1kc5h6FDQiW7VhCxzZSnoxSVS2R/OYYZY8EOhfq+dEGWwKQDmqemR7EnE2kTQWox22JEFUeJXwuMw42TTOZTua4prqpdK6tvRSd3AnnozfpYymBgmcb358FSNdVfYb/sSnZjAZrMtBXl/o46w/UC09/iLk6A2tVoqPyPCmEfs2y9gB1v9lJ3icQgQ5eINILQbRzbuYE7lNr6T1ieBJl5zZ/B9QyrpHcvEKFF1Hld4IJQB3GlCGOOsufzEqXG3BP88FGHx8YMUf5Y0Bs0Lo840Og91wx+7TgsBtZ9lQCcZ/AGQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(39860400002)(396003)(376002)(136003)(346002)(5660300002)(478600001)(8936002)(85182001)(54906003)(2906002)(66476007)(66946007)(66556008)(956004)(8676002)(316002)(86362001)(6916009)(186003)(6486002)(9686003)(83380400001)(4326008)(1076003)(6666004)(33716001)(16526019)(30864003)(26005)(6496006)(7416002)(53546011);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: kB12FbTVj17O6ywDp3opHkNqrmM2IUI2kaaerk5jxCkd6kx9uknn2OVlvMOWVO/oBcyQO3ewFSWGIz5y/v+hA78DDVwFASwC4ZdPP/tesI5j8CS1BXXM8hF/Ee+8B0UbWUHK7axJ+sAzL2FKi44EwOa2CBgK3eJ/06R6j/ihzV+IrBFIVfkvrMY9Qr+7s9I16Ordt4wCKhKwswiRv6GuxJaXU7Sp149IlhpOSoC85Px6yTsDHcWOC4XCYUPimrRqpX/2ZK+a2oQJXNNPAxgqynkHZTolPTa9ATM63goCmGLdfs3fzjluXjHx9dYNt46DwO2zoSAymYB7jgFfFrDAQyFrT2Q32cd2pFrVVMhLBrYs3rGwZPln4fAyCki8z+tZ2FMuie7USNt2MIkTJ4XXEqhTSLzyB4PqBoLBvb7etH3fAYem1Ghg1WKirtLW4Iklyj1PITRY06bvTVG2mOnXGU4rsewuXQuGQdsYPsuZ7nNorvu8yRVnICHRX22WTB0uPAsim7v0F+HlY8IlTBAuMAyRV+ZH/Fknl9VZI8P7eiMZuMQa095ZgQhWzhjL3Bxfk6Spv45bkuAp9RCFL1HH3uG4yLDQ39p343LmYfh3WEx+hpKh9sJnUmLo4UTPVSYC/hN1cnwk0ozGSeZtwVzgnQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: da7d8451-ebc5-4437-43ce-08d88719c8c0
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Nov 2020 14:46:48.7947
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: a3BAoRL3n1M7aP1Wr6WAxwf1Wb+1q/AXqj3myZ5NpLn5HAGmtgPO8D+Pf+2jg3XrAAZoODdliFrN/QuGUANI2g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB3776
X-OriginatorOrg: citrix.com

On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
> 
> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
> > On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> >> index f74f728884c0..7dc7c70e24f2 100644
> >> --- a/xen/drivers/vpci/header.c
> >> +++ b/xen/drivers/vpci/header.c
> >> @@ -31,14 +31,87 @@
> >>   struct map_data {
> >>       struct domain *d;
> >>       bool map;
> >> +    struct pci_dev *pdev;
> > If the field is required please place it after the domain one.
> I will, but may I ask why?

So that if we add further boolean fields we can place them at the end
of the struct for layout reasons. If we do:

struct map_data {
    struct domain *d;
    bool map;
    struct pci_dev *pdev;
    bool foo;
};

We will end up with a bunch of padding that could be avoided by doing:

struct map_data {
    struct domain *d;
    struct pci_dev *pdev;
    bool map;
    bool foo;
};

> >> +    s = PFN_DOWN(s);
> >> +    e = PFN_DOWN(e);
> > Changing the rangeset to store memory addresses instead of frames
> > could for example be split into a separate patch.
> Ok
> >
> > I think you are doing the calculation of the end pfn wrong here; you
> > should use PFN_UP instead in case the address is not aligned.
> 
> PFN_DOWN for the start seems to be ok if the address is not aligned
> 
> which is the case if I pass bar_index in the lower bits: PCI memory has
> 
> PAGE_SIZE granularity, so besides the fact that I use bar_index the address

No, BARs don't need to be aligned to page boundaries; you can even
have different BARs inside the same physical page.

The spec recommends that the minimum size of a memory BAR be 4KB, but
that's not a strict requirement, in which case a BAR can be as small
as 16 bytes, and then you can have multiple ones inside the same page.

> must be page aligned.
> 
> The end address is expressed in (size - 1) form, again page aligned,
> 
> so to get the last page to be mapped PFN_DOWN also seems to be appropriate.
> 
> Do I miss something here?

I'm not aware of any of those addresses or sizes being guaranteed to
be page aligned, so I think you need to account for that.

Some of the code here uses PFN_DOWN to calculate the end address
because the rangesets are used in an inclusive fashion, so the end
frame also gets mapped.

> >
> >> +    mfn = _mfn(PFN_DOWN(header->bars[bar_idx].addr));
> >>       for ( ; ; )
> >>       {
> >>           unsigned long size = e - s + 1;
> >> @@ -52,11 +125,15 @@ static int map_range(unsigned long s, unsigned long e, void *data,
> >>            * - {un}map_mmio_regions doesn't support preemption.
> >>            */
> >>   
> >> -        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
> >> -                      : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
> >> +        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, mfn)
> >> +                      : unmap_mmio_regions(map->d, _gfn(s), size, mfn);
> >>           if ( rc == 0 )
> >>           {
> >> -            *c += size;
> >> +            /*
> >> +             * Range set is not expressed in frame numbers and the size
> >> +             * is the number of frames, so update accordingly.
> >> +             */
> >> +            *c += size << PAGE_SHIFT;
> >>               break;
> >>           }
> >>           if ( rc < 0 )
> >> @@ -67,8 +144,9 @@ static int map_range(unsigned long s, unsigned long e, void *data,
> >>               break;
> >>           }
> >>           ASSERT(rc < size);
> >> -        *c += rc;
> >> +        *c += rc << PAGE_SHIFT;
> >>           s += rc;
> >> +        mfn += rc;
> >>           if ( general_preempt_check() )
> >>                   return -ERESTART;
> >>       }
> >> @@ -84,7 +162,7 @@ static int map_range(unsigned long s, unsigned long e, void *data,
> >>   static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
> >>                               bool rom_only)
> >>   {
> >> -    struct vpci_header *header = &pdev->vpci->header;
> >> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
> >>       bool map = cmd & PCI_COMMAND_MEMORY;
> >>       unsigned int i;
> >>   
> >> @@ -136,6 +214,7 @@ bool vpci_process_pending(struct vcpu *v)
> >>           struct map_data data = {
> >>               .d = v->domain,
> >>               .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
> >> +            .pdev = v->vpci.pdev,
> >>           };
> >>           int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
> >>   
> >> @@ -168,7 +247,8 @@ bool vpci_process_pending(struct vcpu *v)
> >>   static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
> >>                               struct rangeset *mem, uint16_t cmd)
> >>   {
> >> -    struct map_data data = { .d = d, .map = true };
> >> +    struct map_data data = { .d = d, .map = true,
> >> +        .pdev = (struct pci_dev *)pdev };
> > Dropping the const here is not fine. It either needs to be dropped
> > from apply_map and further up, or this needs to also be made const.
> Ok, I'll try to keep it const
> >
> >>       int rc;
> >>   
> >>       while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
> >> @@ -205,7 +285,7 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
> >>   
> >>   static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>   {
> >> -    struct vpci_header *header = &pdev->vpci->header;
> >> +    struct vpci_header *header;
> >>       struct rangeset *mem = rangeset_new(NULL, NULL, 0);
> >>       struct pci_dev *tmp, *dev = NULL;
> >>   #ifdef CONFIG_X86
> >> @@ -217,6 +297,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>       if ( !mem )
> >>           return -ENOMEM;
> >>   
> >> +    if ( is_hardware_domain(current->domain) )
> >> +        header = get_hwdom_vpci_header(pdev);
> >> +    else
> >> +        header = get_vpci_header(current->domain, pdev);
> >> +
> >>       /*
> >>        * Create a rangeset that represents the current device BARs memory region
> >>        * and compare it against all the currently active BAR memory regions. If
> >> @@ -225,12 +310,15 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>        * First fill the rangeset with all the BARs of this device or with the ROM
> >>        * BAR only, depending on whether the guest is toggling the memory decode
> >>        * bit of the command register, or the enable bit of the ROM BAR register.
> >> +     *
> >> +     * Use the PCI reserved bits of the BAR to pass BAR's index.
> >>        */
> >>       for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> >>       {
> >>           const struct vpci_bar *bar = &header->bars[i];
> >> -        unsigned long start = PFN_DOWN(bar->addr);
> >> -        unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> >> +        unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
> >> +        unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) +
> >> +            bar->size - 1;
> > Will this work fine on Arm 32bits with LPAE? It's my understanding
> > that in that case unsigned long is 32bits, but the physical address
> > space is 44bits, in which case this won't work.
> Hm, good question
> >
> > I think you need to keep the usage of frame numbers here.
> If I re-work the gfn <-> mfn mapping then yes, I can use frame numbers here and elsewhere
> >
> >>   
> >>           if ( !MAPPABLE_BAR(bar) ||
> >>                (rom_only ? bar->type != VPCI_BAR_ROM
> >> @@ -251,9 +339,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>       /* Remove any MSIX regions if present. */
> >>       for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
> >>       {
> >> -        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
> >> -        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
> >> -                                     vmsix_table_size(pdev->vpci, i) - 1);
> >> +        unsigned long start = (vmsix_table_addr(pdev->vpci, i) &
> >> +                               PCI_BASE_ADDRESS_MEM_MASK) | i;
> >> +        unsigned long end = (vmsix_table_addr(pdev->vpci, i) &
> >> +                             PCI_BASE_ADDRESS_MEM_MASK ) +
> >> +                             vmsix_table_size(pdev->vpci, i) - 1;
> >>   
> >>           rc = rangeset_remove_range(mem, start, end);
> >>           if ( rc )
> >> @@ -273,6 +363,8 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>        */
> >>       for_each_pdev ( pdev->domain, tmp )
> >>       {
> >> +        struct vpci_header *header;
> >> +
> >>           if ( tmp == pdev )
> >>           {
> >>               /*
> >> @@ -289,11 +381,14 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>                   continue;
> >>           }
> >>   
> >> -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
> >> +        header = get_vpci_header(tmp->domain, pdev);
> >> +
> >> +        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
> >>           {
> >> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
> >> -            unsigned long start = PFN_DOWN(bar->addr);
> >> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> >> +            const struct vpci_bar *bar = &header->bars[i];
> >> +            unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
> >> +            unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK)
> >> +                + bar->size - 1;
> >>   
> >>               if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
> >>                    /*
> >> @@ -357,7 +452,7 @@ static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
> >>           pci_conf_write16(pdev->sbdf, reg, cmd);
> >>   }
> >>   
> >> -static void bar_write(const struct pci_dev *pdev, unsigned int reg,
> >> +static void bar_write_hwdom(const struct pci_dev *pdev, unsigned int reg,
> >>                         uint32_t val, void *data)
> >>   {
> >>       struct vpci_bar *bar = data;
> >> @@ -377,14 +472,17 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
> >>       {
> >>           /* If the value written is the current one avoid printing a warning. */
> >>           if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
> >> +        {
> >> +            struct vpci_header *header = get_hwdom_vpci_header(pdev);
> >> +
> >>               gprintk(XENLOG_WARNING,
> >>                       "%04x:%02x:%02x.%u: ignored BAR %lu write with memory decoding enabled\n",
> >>                       pdev->seg, pdev->bus, slot, func,
> >> -                    bar - pdev->vpci->header.bars + hi);
> >> +                    bar - header->bars + hi);
> >> +        }
> >>           return;
> >>       }
> >>   
> >> -
> >>       /*
> >>        * Update the cached address, so that when memory decoding is enabled
> >>        * Xen can map the BAR into the guest p2m.
> >> @@ -403,10 +501,89 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
> >>       pci_conf_write32(pdev->sbdf, reg, val);
> >>   }
> >>   
> >> +static uint32_t bar_read_hwdom(const struct pci_dev *pdev, unsigned int reg,
> >> +                               void *data)
> >> +{
> >> +    return vpci_hw_read32(pdev, reg, data);
> >> +}
> >> +
> >> +static void bar_write_guest(const struct pci_dev *pdev, unsigned int reg,
> >> +                            uint32_t val, void *data)
> >> +{
> >> +    struct vpci_bar *vbar = data;
> >> +    bool hi = false;
> >> +
> >> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
> >> +    {
> >> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
> >> +        vbar--;
> >> +        hi = true;
> >> +    }
> >> +    vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
> >> +    vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
> >> +}
> >> +
> >> +static uint32_t bar_read_guest(const struct pci_dev *pdev, unsigned int reg,
> >> +                               void *data)
> >> +{
> >> +    struct vpci_bar *vbar = data;
> >> +    uint32_t val;
> >> +    bool hi = false;
> >> +
> >> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
> >> +    {
> >> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
> >> +        vbar--;
> >> +        hi = true;
> >> +    }
> >> +
> >> +    if ( vbar->type == VPCI_BAR_MEM64_LO || vbar->type == VPCI_BAR_MEM64_HI )
> > I think this would be clearer using a switch statement.
> I'll think about it.
> >
> >> +    {
> >> +        if ( hi )
> >> +            val = vbar->addr >> 32;
> >> +        else
> >> +            val = vbar->addr & 0xffffffff;
> >> +        if ( val == ~0 )
> > Strictly speaking I think you are not forced to write 1s to the
> > reserved 4 bits in the low register (and in the 32bit case).
> 
> Ah, so the Linux kernel, for instance, could have written 0xfffffff0 while
> I expect 0xffffffff?

I think real hardware would return the size when written 1s to all
bits except the reserved ones.
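If so, the sizing check could mask off the reserved low bits instead of requiring an exact all-ones value, so that both 0xffffffff and 0xfffffff0 trigger the size read-back. A minimal sketch of that idea (the helper name is hypothetical, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_BASE_ADDRESS_MEM_MASK 0xfffffff0u

/*
 * Treat a low-dword write as a BAR sizing request when all *address*
 * bits are 1, ignoring the 4 reserved/type bits, matching what real
 * hardware does when a guest sizes the BAR.
 */
static int is_bar_sizing_write(uint32_t val)
{
    return (val & PCI_BASE_ADDRESS_MEM_MASK) == PCI_BASE_ADDRESS_MEM_MASK;
}
```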

> 
> >
> >> +        {
> >> +            /* Guest detects the BAR's properties and size. */
> >> +            if ( !hi )
> >> +            {
> >> +                val = 0xffffffff & ~(vbar->size - 1);
> >> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
> >> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
> >> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
> >> +            }
> >> +            else
> >> +                val = vbar->size >> 32;
> >> +            vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
> >> +            vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
> >> +        }
> >> +    }
> >> +    else if ( vbar->type == VPCI_BAR_MEM32 )
> >> +    {
> >> +        val = vbar->addr;
> >> +        if ( val == ~0 )
> >> +        {
> >> +            if ( !hi )
> > There's no way hi can be true at this point AFAICT.
> Sure, thank you
> >
> >> +            {
> >> +                val = 0xffffffff & ~(vbar->size - 1);
> >> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
> >> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
> >> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
> >> +            }
> >> +        }
> >> +    }
> >> +    else
> >> +    {
> >> +        val = vbar->addr;
> >> +    }
> >> +    return val;
> >> +}
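A self-contained sketch of the guest read path restructured around a switch, as suggested above. The types and constants below are simplified stand-ins for the vPCI ones, and the write-back of the sizing pattern into vbar->addr is omitted for brevity:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the vPCI types; names are illustrative only. */
enum vpci_bar_type { VPCI_BAR_MEM32, VPCI_BAR_MEM64_LO, VPCI_BAR_MEM64_HI };

struct vpci_bar {
    enum vpci_bar_type type;
    uint64_t addr;
    uint64_t size;
    bool prefetchable;
};

#define PCI_BASE_ADDRESS_MEM_TYPE_32  0x00
#define PCI_BASE_ADDRESS_MEM_TYPE_64  0x04
#define PCI_BASE_ADDRESS_MEM_PREFETCH 0x08

/* Low-dword sizing pattern a guest reads back after writing all 1s. */
static uint32_t bar_sizing_low(const struct vpci_bar *bar)
{
    uint32_t val = 0xffffffff & ~(bar->size - 1);

    val |= bar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
                                       : PCI_BASE_ADDRESS_MEM_TYPE_64;
    val |= bar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;

    return val;
}

static uint32_t bar_read_guest(struct vpci_bar *vbar)
{
    bool hi = false;

    if ( vbar->type == VPCI_BAR_MEM64_HI )
    {
        vbar--;                 /* Sizing state lives in the low BAR. */
        hi = true;
    }

    switch ( vbar->type )
    {
    case VPCI_BAR_MEM64_LO:
        if ( (uint32_t)(vbar->addr >> (hi ? 32 : 0)) == 0xffffffff )
            return hi ? vbar->size >> 32 : bar_sizing_low(vbar);
        return vbar->addr >> (hi ? 32 : 0);

    case VPCI_BAR_MEM32:
        if ( (uint32_t)vbar->addr == 0xffffffff )
            return bar_sizing_low(vbar);
        /* fall through */
    default:
        return vbar->addr;
    }
}
```

The switch keeps the 64-bit and 32-bit cases visually separate and removes the duplicated sizing block the original if/else chain carries.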
> >> +
> >>   static void rom_write(const struct pci_dev *pdev, unsigned int reg,
> >>                         uint32_t val, void *data)
> >>   {
> >> -    struct vpci_header *header = &pdev->vpci->header;
> >> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
> >>       struct vpci_bar *rom = data;
> >>       uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
> >>       uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
> >> @@ -452,15 +629,56 @@ static void rom_write(const struct pci_dev *pdev, unsigned int reg,
> >>           rom->addr = val & PCI_ROM_ADDRESS_MASK;
> >>   }
> > Don't you need to also protect a domU from writing to the ROM BAR
> > register?
> 
> ROM was not a target of this RFC as I have no HW to test that, but the
> final code must handle the ROM BAR as well, you are right.
> 
> >
> >>   
> >> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
> >> +                                  void *data)
> >> +{
> >> +    struct vpci_bar *vbar, *bar = data;
> >> +
> >> +    if ( is_hardware_domain(current->domain) )
> >> +        return bar_read_hwdom(pdev, reg, data);
> >> +
> >> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
> >> +    if ( !vbar )
> >> +        return ~0;
> >> +
> >> +    return bar_read_guest(pdev, reg, vbar);
> >> +}
> >> +
> >> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
> >> +                               uint32_t val, void *data)
> >> +{
> >> +    struct vpci_bar *bar = data;
> >> +
> >> +    if ( is_hardware_domain(current->domain) )
> >> +        bar_write_hwdom(pdev, reg, val, data);
> >> +    else
> >> +    {
> >> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
> >> +
> >> +        if ( !vbar )
> >> +            return;
> >> +        bar_write_guest(pdev, reg, val, vbar);
> >> +    }
> >> +}
> > You should assign different handlers based on whether the domain that
> > has the device assigned is a domU or the hardware domain, rather than
> > doing the selection here.
> 
> Hm, handlers are assigned once in init_bars and this function is only called
> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher.

I think we might want to reset the vPCI handlers when a devices gets
assigned and deassigned. In order to do passthrough to domUs safely
we will have to add more handlers than what's required for dom0, and
having is_hardware_domain sprinkled in all of them is not a suitable
solution.
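One way to read that suggestion: select the handler set once, at the point where the device is assigned to a domain, so the per-access paths never test is_hardware_domain(). A minimal sketch under that assumption (all names and return values illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t (*bar_read_t)(unsigned int reg);

/* Dummy handler bodies; the real ones would be bar_read_hwdom/guest. */
static uint32_t read_hwdom(unsigned int reg) { return 0x100u + reg; }
static uint32_t read_guest(unsigned int reg) { return 0x200u + reg; }

struct bar_ops {
    bar_read_t read;
};

static const struct bar_ops hwdom_ops = { .read = read_hwdom };
static const struct bar_ops guest_ops = { .read = read_guest };

/*
 * Called from the equivalent of init_bars(), and again whenever the
 * device is assigned or deassigned, so the vPCI register handlers are
 * rebound for the new owner rather than dispatching at access time.
 */
static const struct bar_ops *select_bar_ops(bool owner_is_hwdom)
{
    return owner_is_hwdom ? &hwdom_ops : &guest_ops;
}
```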

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 15:11:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 15:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25935.54011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdEG4-00023h-UA; Thu, 12 Nov 2020 15:11:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25935.54011; Thu, 12 Nov 2020 15:11:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdEG4-00023a-QT; Thu, 12 Nov 2020 15:11:32 +0000
Received: by outflank-mailman (input) for mailman id 25935;
 Thu, 12 Nov 2020 15:11:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bXVH=ES=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdEG3-00023U-SH
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 15:11:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0298710d-4138-426b-bbb3-861713acca17;
 Thu, 12 Nov 2020 15:11:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76CACAC1D;
 Thu, 12 Nov 2020 15:11:29 +0000 (UTC)
X-Inumbo-ID: 0298710d-4138-426b-bbb3-861713acca17
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605193889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SY/NvZFHWjYh/mlEL7MuIWnVdu8w505QXd0SXbviGv4=;
	b=fZ9J0C8ASk4PJ3n9BOVMcF7CsdP0Oo+8ubSoWN15wQ7PlN/iME1GAIIu9+OjOwUdooM83b
	zm0arOeUf2+E3tBlyhNAdIG731IYr4f76TsQE0t/M8OPgz8/qWaIHKMb4r1NpAvwG8juP6
	ESoFLGJxIIT3559iTPsqpwSqBiXjPO0=
Subject: Re: [PATCH v5 1/3] xen/x86: add nmi continuation framework
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201112131424.9930-1-jgross@suse.com>
 <20201112131424.9930-2-jgross@suse.com>
 <f266e1fe-21fe-a44c-d8b1-94d89813f42f@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <422e2e77-7e26-4c9d-f022-6ecb7e398751@suse.com>
Date: Thu, 12 Nov 2020 15:50:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <f266e1fe-21fe-a44c-d8b1-94d89813f42f@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ujB3BsEXuhY6FML0g7KRsCfjgz1zOOhbP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ujB3BsEXuhY6FML0g7KRsCfjgz1zOOhbP
Content-Type: multipart/mixed; boundary="FzfkXjtPAnQ5GxO6GBZa2JRGuc5uoyGs8";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <422e2e77-7e26-4c9d-f022-6ecb7e398751@suse.com>
Subject: Re: [PATCH v5 1/3] xen/x86: add nmi continuation framework
References: <20201112131424.9930-1-jgross@suse.com>
 <20201112131424.9930-2-jgross@suse.com>
 <f266e1fe-21fe-a44c-d8b1-94d89813f42f@suse.com>
In-Reply-To: <f266e1fe-21fe-a44c-d8b1-94d89813f42f@suse.com>

--FzfkXjtPAnQ5GxO6GBZa2JRGuc5uoyGs8
Content-Type: multipart/mixed;
 boundary="------------73172AECA3C1F131FBAF8636"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------73172AECA3C1F131FBAF8636
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.11.20 14:41, Jan Beulich wrote:
> On 12.11.2020 14:14, Juergen Gross wrote:
>> --- a/xen/arch/x86/genapic/x2apic.c
>> +++ b/xen/arch/x86/genapic/x2apic.c
>> @@ -89,6 +89,7 @@ static unsigned int cpu_mask_to_apicid_x2apic_cluster(const cpumask_t *cpumask)
>>   
>>   static void send_IPI_self_x2apic(uint8_t vector)
>>   {
>> +    /* NMI continuation handling relies on using a shorthand here. */
>>       apic_wrmsr(APIC_SELF_IPI, vector);
>>   }
> 
> I'm inclined to drop this hunk again - I did ask for ...
> 
>> --- a/xen/arch/x86/smp.c
>> +++ b/xen/arch/x86/smp.c
>> @@ -163,6 +163,7 @@ void send_IPI_self(int vector)
>>   
>>   void send_IPI_self_legacy(uint8_t vector)
>>   {
>> +    /* NMI continuation handling relies on using a shorthand here. */
>>       send_IPI_shortcut(APIC_DEST_SELF, vector, APIC_DEST_PHYSICAL);
>>   }
> 
> ... this one only simply because x2APIC doesn't have the same restriction.

It would still be bad if the x2APIC variant would e.g. use
send_IPI_mask_x2apic_cluster() due to its usage of
per_cpu(scratch_mask).

Juergen


--------------73172AECA3C1F131FBAF8636--

--FzfkXjtPAnQ5GxO6GBZa2JRGuc5uoyGs8--


--ujB3BsEXuhY6FML0g7KRsCfjgz1zOOhbP--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 15:57:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 15:57:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25946.54023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdEyV-00065f-Hd; Thu, 12 Nov 2020 15:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25946.54023; Thu, 12 Nov 2020 15:57:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdEyV-00065Y-EY; Thu, 12 Nov 2020 15:57:27 +0000
Received: by outflank-mailman (input) for mailman id 25946;
 Thu, 12 Nov 2020 15:57:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B7ei=ES=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdEyU-00065T-LD
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 15:57:26 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3866cb0-5131-4d4e-a969-a349ada0503c;
 Thu, 12 Nov 2020 15:57:23 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ACFvK2O022019
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK)
 for <xen-devel@lists.xenproject.org>; Thu, 12 Nov 2020 16:57:21 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 85E722E9CA8; Thu, 12 Nov 2020 16:57:15 +0100 (MET)
X-Inumbo-ID: d3866cb0-5131-4d4e-a969-a349ada0503c
Date: Thu, 12 Nov 2020 16:57:15 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201112155715.GA5003@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 12 Nov 2020 16:57:21 +0100 (MET)

Hello,
I'm trying to add dom0 PVH support to NetBSD. I'm testing with Xen 4.13
on a brand new Intel x86 server (Dell R440).
While the dom0 kernel configures hardware, Xen panics with:
(XEN) Xen call trace:
(XEN)    [<ffff82d08031cc28>] R vpci_msix_arch_mask_entry+0x18/0x20
(XEN)    [<ffff82d08025a38a>] S drivers/vpci/msix.c#msix_write+0x18a/0x2b0
(XEN)    [<ffff82d08030d943>] S arch/x86/hvm/intercept.c#hvm_mmio_write+0x23/0x30
(XEN)    [<ffff82d08030dd19>] S hvm_process_io_intercept+0x1e9/0x260
(XEN)    [<ffff82d08030ddad>] S hvm_io_intercept+0x1d/0x40
(XEN)    [<ffff82d0802fe7ba>] S arch/x86/hvm/emulate.c#hvmemul_do_io+0x26a/0x4d0
(XEN)    [<ffff82d080259ef9>] S drivers/vpci/msix.c#msix_accept+0x9/0x20
(XEN)    [<ffff82d0802fea56>] S arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x36/0x70
(XEN)    [<ffff82d0802ff005>] S arch/x86/hvm/emulate.c#hvmemul_linear_mmio_access+0x1e5/0x300
(XEN)    [<ffff82d0802fff44>] S arch/x86/hvm/emulate.c#linear_write+0x84/0x160
(XEN)    [<ffff82d080301ca8>] S arch/x86/hvm/emulate.c#hvmemul_write+0xe8/0x100
(XEN)    [<ffff82d0802de6cc>] S x86_emulate+0x289dc/0x2cfb0
(XEN)    [<ffff82d08027c7ab>] S map_domain_page+0x4b/0x600
(XEN)    [<ffff82d080340eaa>] S __get_gfn_type_access+0x6a/0x100
(XEN)    [<ffff82d08034a367>] S arch/x86/mm/p2m-ept.c#ept_next_level+0x107/0x150
(XEN)    [<ffff82d0802e4961>] S x86_emulate_wrapper+0x21/0x60
(XEN)    [<ffff82d08030024f>] S arch/x86/hvm/emulate.c#_hvm_emulate_one+0x4f/0x220
(XEN)    [<ffff82d0803004ed>] S hvmemul_get_seg_reg+0x4d/0x50
(XEN)    [<ffff82d08030042e>] S hvm_emulate_one+0xe/0x10
(XEN)    [<ffff82d08030e4ca>] S hvm_emulate_one_insn+0x3a/0xf0
(XEN)    [<ffff82d0802e4af0>] S x86_insn_is_mem_access+0/0x260
(XEN)    [<ffff82d08030e5c9>] S handle_mmio_with_translation+0x49/0x60
(XEN)    [<ffff82d080305d78>] S hvm_hap_nested_page_fault+0x2c8/0x720
(XEN)    [<ffff82d0802fea56>] S arch/x86/hvm/emulate.c#hv(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 13:
(XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
(XEN) ****************************************

This is when it configures the broadcom network interface, which interrupts
at "msix3 vec 0". It is the first MSI-X device configured; the previous
ones are MSI only.

Is it a bug on the Xen side, or something missing on the NetBSD side?
If the latter, where can I find information about it?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 15:59:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 15:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25953.54038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdF0M-0006FT-UZ; Thu, 12 Nov 2020 15:59:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25953.54038; Thu, 12 Nov 2020 15:59:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdF0M-0006FM-Ra; Thu, 12 Nov 2020 15:59:22 +0000
Received: by outflank-mailman (input) for mailman id 25953;
 Thu, 12 Nov 2020 15:59:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdF0L-0006Eh-2g
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 15:59:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e30beb5-231d-4b6f-a00b-3171aeed4471;
 Thu, 12 Nov 2020 15:59:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdF0E-0001hJ-IF; Thu, 12 Nov 2020 15:59:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdF0E-0005Jh-Ak; Thu, 12 Nov 2020 15:59:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdF0E-0002sT-8N; Thu, 12 Nov 2020 15:59:14 +0000
X-Inumbo-ID: 2e30beb5-231d-4b6f-a00b-3171aeed4471
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WOcKC4Vo+oZ3kPa4sE3xpHjlUYpfGp8FxCLcCygvW9U=; b=hoke+t52phOw/YwxAtLOAN8kLv
	lYAKe4ZN5IyCpSBlRcONvu+IVJJtb7yV9apUZCwxvb62/WMiD1o3xdDKMcV0l4hj29WV3PmNVUuBX
	Iguw05+0Qv/tp1r8RWMcRVsdCC2bPUxgVhMonIxz5js9uGXP3A3rzJTZeDRzcUjLeyr8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156684-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156684: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1c48866e041d2afaabb170086c5bb0c69a4653d3
X-Osstest-Versions-That:
    ovmf=8c610e6075f2a200400970698a810a57ad49220e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 15:59:14 +0000

flight 156684 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156684/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1c48866e041d2afaabb170086c5bb0c69a4653d3
baseline version:
 ovmf                 8c610e6075f2a200400970698a810a57ad49220e

Last test of basis   156632  2020-11-10 17:53:05 Z    1 days
Testing same since   156684  2020-11-11 12:14:13 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Albecki, Mateusz <mateusz.albecki@intel.com>
  Eric Dong <eric.dong@intel.com>
  Fan Wang <fan.wang@intel.com>
  Fu Siyuan <siyuan.fu@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Mateusz Albecki <mateusz.albecki@intel.com>
  Ray Ni <ray.ni@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Ting Ye <ting.ye@intel.com>
  Tom Lendacky <thomas.lendacky@amd.com>
  Wang Fan <fan.wang@intel.com>
  Wu Jiaxin <jiaxin.wu@intel.com>
  Ye Ting <ting.ye@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8c610e6075..1c48866e04  1c48866e041d2afaabb170086c5bb0c69a4653d3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 16:36:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 16:36:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25962.54049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdFZg-0002RR-QY; Thu, 12 Nov 2020 16:35:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25962.54049; Thu, 12 Nov 2020 16:35:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdFZg-0002RK-NR; Thu, 12 Nov 2020 16:35:52 +0000
Received: by outflank-mailman (input) for mailman id 25962;
 Thu, 12 Nov 2020 16:35:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kdFZf-0002RF-Et
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 16:35:51 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b05d7510-6e1b-4423-97ed-4509bee83b4b;
 Thu, 12 Nov 2020 16:35:49 +0000 (UTC)
X-Inumbo-ID: b05d7510-6e1b-4423-97ed-4509bee83b4b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605198949;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=6ttY6MFm/462aukDIC0mQkKNjwLCdRXnlMXXpysWgNs=;
  b=aS76W7It1OwJ6abHANO43TbV/A+e+uTKnyqhK+0tnZ/9O+2BJmjbvYkj
   DZ698Y8nBQIfpWNDu89xq/6WTM8Y76gQKoqPRBLhSl3b+HO7YFCLGOvhj
   r0rCfG1gkLUqzYm1fsngVyICjS4ubzgB2Mb3RN1dbS+NhDsSt/KGA4dkm
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ZbhCex5Isx3mcyQQNHhbec6wy94UNyuVPtdJYNEDNcja5pvibhEcwb1PgZnXxBqHHNwItcPTG0
 JSXPU/9HC5rxXqgk87jBjimTw5G6LdalKT2+mPAHdeLB8xE3LKa6DOnFJ/kUetTMDO1Tm+q6ID
 5m4zcT+Tkxha/uvS9mCmzYViAC0+3tWRTDuwmr0NJt8aSABS8B9GlOL6ya0GbVJcDO11tQnMcy
 QbjMQ8h3XY3BB2/GEr3DQjv961NV2AKtGpZX6dp/cqH8BBFfRepISFw/CN5Q2HGBZYvhfz3TS9
 jOg=
X-SBRS: None
X-MesageID: 31048412
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,472,1596513600"; 
   d="scan'208";a="31048412"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W7HK2RHGfF/7J9SkU+fvsMOaXopsxg74p7j4virYMQ0zxey0LTs7tVGBQKrnplF7jAowEWFZAg8MaTmxC40Zhw9AJ8TUK+HAnEK9n6J0piin57PZ3JyIx8KSioTBEq60f7mhyXHkxo6ZSAv7gtymkJLoYMz1/YTkVfMXsatcVnsgCRQWp2gazKD10vFoOc/95R2wXu4EhpFpAPKDWoq+cC4iWvrYQegpqktn18/f+18z6X5f3e1j7ZSMGwxGpl4UNhGGyG5Qki9AeVawOD58XcUPPlXGPTxXlAO47RiIuL+JmjmnLBk4h3QyFbfYS8Fj4G6laFPSjeq0ETFFuryAJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/3G5ho43dK94hG68dk12jzGucVOIAh05RzdT+0FFjbk=;
 b=K2D1316Nx5dGEGLOVF/ZTpSyaFdSumgeqr0ofCatY9c7seWF2Y1+cs5K3USaXdBzJm5jdQ2iW5l0u2hTKhj916ow4Lxf3pA3pIC4A815i3gOW7D1qQY+0EmsP0bpKmLM6vGd17KnT53z2+FTs48hvsSfmLowDXcdL7JQ2nrJAl8e4P9Wb1fPRyG+YwwYTmVQPgtczMNhRd88BQBZLWT+SoqsEjSUwk6KEXrln5Hw/si158x84fzMQysFUwn7NxLXowq4K7fdcCvYojoMswwBFcrYj5UHHnTfINiSZuioozTJ87OKT6nt4Wvo0WiG/woIjtPMoMic6/RD6KFKPnPGDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/3G5ho43dK94hG68dk12jzGucVOIAh05RzdT+0FFjbk=;
 b=qy2Htc1mIZXUlujaKT1vU4nfp1J2BGbNnfZm2OUOHWt/DEOKIAvHScwhkIBlXtnHsy/zrvqKzdPg70CAOlAs72wowlQGRAoFsobl0iwHWrJT0XynU4mvbfOYVVBPMHoNTWnV11Io65ZO0EVUKbOS355kcgRanfNXH7xYKIvARXQ=
Date: Thu, 12 Nov 2020 17:32:40 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201112163240.6xswol2iswikdzef@Air-de-Roger>
References: <20201112155715.GA5003@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201112155715.GA5003@antioche.eu.org>
X-ClientProxiedBy: LO2P265CA0021.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:62::33) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6ade9288-1519-46fc-1fd9-08d8872895a5
X-MS-TrafficTypeDiagnostic: SN6PR03MB3599:
X-Microsoft-Antispam-PRVS: <SN6PR03MB35999BBDCBEA57035A6D264C8FE70@SN6PR03MB3599.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: wMnIGyBogG7vcm9iE2ztqJgeUevLHKZ2nIMEyrGN9n4IaJoKyV1T2SxpHBSEYT59EeGH7WAq4lnOkvdngHXhooLABVYjJYJvSAXm8qD0XKaJZSPWkeQT4EMU8+SknxY1LwsIboQBkMGMy50JJM78t17L8TkBxyH0/Yj/ky4doLuIqB35Dpaa6KCv68LcSX3FByVllyAN1in8vSE2LMA60ZZRX1exB+/bp9KgT8V/6g6REnRDzpVRJfCuMjyJzhFwPTLGTonlNKnRuPM8N3NkKnG5Kchn2q26KtI7I8p1/2qLm98iy+a950tWl2/RtdDCR6ielU9owksrUV1rb3aJUw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(396003)(39860400002)(376002)(366004)(1076003)(8676002)(26005)(83380400001)(6666004)(5660300002)(186003)(16526019)(478600001)(4326008)(316002)(6486002)(9686003)(66946007)(33716001)(956004)(66476007)(86362001)(66556008)(6916009)(2906002)(85182001)(8936002)(6496006);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: wMTlWdfILc67JzPupkTuuNnCwEHX3LZ8e+i4vohwHzZW0eenWWQkT6YhEct2cIx4/qlpYfuYDzL63o6yZkDRL/Op6x56xLxS3Hueg9HQSSX7EmbD7L397x0F8MaO8s+BsLnaoBsihIwGUuXd8ms4G8S/lLyHwBeMS//Vzu6uNfMWir5lxmsP9B0fGAbIQmrggBdUroNdXPVoXDbe+w8OpxgoWNARqtsoPIa37moAD4Qn1i7QdGk7jZEErKSVVohdOGU5R9cOsQBQlsD8bNM11GazDpmrWl9nz71RwM/0ZM1YpWobEsjhkAUPe46Ilxpcvy0UBX1evAbnss+ngkIByYQAcdXxtcDWrasXLK4GNJianf2Rc6unUK4VrgjAOuJXBYsTGDwvf3842sJzNd+GnRG1G1fh3vXjPvuHuKBK3dVdw9RZ1xgGZOG6nY1+kF+LGz/aUeRqeMSV3Y0AzJCo95B6lV1sA/KjN8FROEL3pG/osTcMWKrjI4eWtafTFKzQgHwtcWj9dX6hij/28OKdm+8Ldozok9JcB6fCeZyZF4vwaYOIOLoKHpcQdDUeSIr1rp5B7LZM3mOW7d5vZvIOWJRTQ0OPlDV+X/dP2RRNyLNRWltLHeiCp4lT6mqKn5H3Sty0iudti8HAB9mpFHu4mw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ade9288-1519-46fc-1fd9-08d8872895a5
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Nov 2020 16:32:45.4203
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CS7f51rmO1767cf3qyknVOlc4KO6AiuCYSwuqvavearasz7ZAemg4lck0vrOgtN6CRYJknhLK1RwlNgUh+bA6Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB3599
X-OriginatorOrg: citrix.com

On Thu, Nov 12, 2020 at 04:57:15PM +0100, Manuel Bouyer wrote:
> Hello,
> I'm trying to add dom0 PVH support to NetBSD. I'm testing with Xen 4.13
> on a brand new Intel x86 server (Dell R440).

I would recommend using 4.14; PVH dom0 support is still very much in
progress, and while I don't recall any specific fix in 4.14 that
could be related to this, there have been changes.

> While the dom0 kernel configures hardware, Xen panics with:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d08031cc28>] R vpci_msix_arch_mask_entry+0x18/0x20
> (XEN)    [<ffff82d08025a38a>] S drivers/vpci/msix.c#msix_write+0x18a/0x2b0
> (XEN)    [<ffff82d08030d943>] S arch/x86/hvm/intercept.c#hvm_mmio_write+0x23/0x30
> (XEN)    [<ffff82d08030dd19>] S hvm_process_io_intercept+0x1e9/0x260
> (XEN)    [<ffff82d08030ddad>] S hvm_io_intercept+0x1d/0x40
> (XEN)    [<ffff82d0802fe7ba>] S arch/x86/hvm/emulate.c#hvmemul_do_io+0x26a/0x4d0
> (XEN)    [<ffff82d080259ef9>] S drivers/vpci/msix.c#msix_accept+0x9/0x20
> (XEN)    [<ffff82d0802fea56>] S arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x36/0x70
> (XEN)    [<ffff82d0802ff005>] S arch/x86/hvm/emulate.c#hvmemul_linear_mmio_access+0x1e5/0x300
> (XEN)    [<ffff82d0802fff44>] S arch/x86/hvm/emulate.c#linear_write+0x84/0x160
> (XEN)    [<ffff82d080301ca8>] S arch/x86/hvm/emulate.c#hvmemul_write+0xe8/0x100
> (XEN)    [<ffff82d0802de6cc>] S x86_emulate+0x289dc/0x2cfb0
> (XEN)    [<ffff82d08027c7ab>] S map_domain_page+0x4b/0x600
> (XEN)    [<ffff82d080340eaa>] S __get_gfn_type_access+0x6a/0x100
> (XEN)    [<ffff82d08034a367>] S arch/x86/mm/p2m-ept.c#ept_next_level+0x107/0x150
> (XEN)    [<ffff82d0802e4961>] S x86_emulate_wrapper+0x21/0x60
> (XEN)    [<ffff82d08030024f>] S arch/x86/hvm/emulate.c#_hvm_emulate_one+0x4f/0x220
> (XEN)    [<ffff82d0803004ed>] S hvmemul_get_seg_reg+0x4d/0x50
> (XEN)    [<ffff82d08030042e>] S hvm_emulate_one+0xe/0x10
> (XEN)    [<ffff82d08030e4ca>] S hvm_emulate_one_insn+0x3a/0xf0
> (XEN)    [<ffff82d0802e4af0>] S x86_insn_is_mem_access+0/0x260
> (XEN)    [<ffff82d08030e5c9>] S handle_mmio_with_translation+0x49/0x60
> (XEN)    [<ffff82d080305d78>] S hvm_hap_nested_page_fault+0x2c8/0x720
> (XEN)    [<ffff82d0802fea56>] S arch/x86/hvm/emulate.c#hv(XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 13:
> (XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
> (XEN) ****************************************
> 
> This is when it configures the broadcom network interface, which interrupts
> at "msix3 vec 0". It is the first MSI-X device configured; the previous
> ones are MSI only.
> 
> Is it a bug on the Xen side, or something missing on the NetBSD side?

Looks like a bug on the Xen side. Do you see any relevant messages
before hitting the assert?

Can you give the following debug patch a try and paste what you
get?

Thanks, Roger.
---8<---
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..7ff76b7f59 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -371,7 +371,12 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
             entry->updated = false;
         }
         else
+        {
+            printk("%pp offset %u len %u new_masked %d enabled %d masked %d updated %d\n",
+                   &pdev->sbdf, offset, len, new_masked, msix->enabled, msix->masked,
+                   entry->updated);
             vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
+        }
 
         break;
     }
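
For context, the assertion named in the panic fires in
vpci_msix_arch_mask_entry(), which expects the MSI-X entry to already be
bound to a physical IRQ before it can be masked. A minimal standalone
sketch of that precondition (structures simplified and renamed for
illustration; these are not the real Xen definitions) is:

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_PIRQ (-1)

/* Simplified stand-ins for the relevant Xen vPCI structures. */
struct arch_msix_entry {
    int pirq;                 /* physical IRQ the entry is bound to */
};

struct vpci_msix_entry {
    struct arch_msix_entry arch;
    bool masked;
};

/*
 * Stand-in for vpci_msix_arch_mask_entry(): masking only makes sense
 * once the entry is bound to a physical IRQ, hence the assertion that
 * trips in the reported panic when an unbound entry reaches this path.
 */
static void mask_entry(struct vpci_msix_entry *entry, bool mask)
{
    assert(entry->arch.pirq != INVALID_PIRQ);
    entry->masked = mask;
}
```

The debug patch above prints the state that led an unbound entry into
this call, which should show whether the entry was never set up in the
first place or was torn down before the mask write arrived.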


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 16:52:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 16:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25969.54061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdFq8-0004PS-Am; Thu, 12 Nov 2020 16:52:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25969.54061; Thu, 12 Nov 2020 16:52:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdFq8-0004PL-7u; Thu, 12 Nov 2020 16:52:52 +0000
Received: by outflank-mailman (input) for mailman id 25969;
 Thu, 12 Nov 2020 16:52:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdFq7-0004PG-9Z
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 16:52:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f95de658-2f84-4869-be7a-d190eba5c352;
 Thu, 12 Nov 2020 16:52:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdFq3-0003JE-RY; Thu, 12 Nov 2020 16:52:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdFq3-0007py-JK; Thu, 12 Nov 2020 16:52:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdFq3-0007Qu-Iq; Thu, 12 Nov 2020 16:52:47 +0000
X-Inumbo-ID: f95de658-2f84-4869-be7a-d190eba5c352
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=au2oHQ4Z8uL+LkWIUsrMSd4DM+Y8eqZTuhPH6fYZxHE=; b=1lsgNxoVDXMjlw491a7yl7XNoe
	grWOq7aYM/VS2XoYiH2cwQNEQArdVO+CCx9VHeJqGh9Xy1KkgM7vS0tr08UXH+T5yejh1uLE6o6yN
	1VsaOuRXRrDTyu+9bTfGQJB1LKO43iK69R1ASWfNlnbNeCmRjiH28SDBIvsmgAF5tAlQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156677-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156677: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-xsm:guest-localmigrate/x10:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2544d06afd8d060f35b159809274e4b7477e63e8
X-Osstest-Versions-That:
    linux=6e97ed6efa701db070da0054b055c085895aba86
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 16:52:47 +0000

flight 156677 linux-5.4 real [real]
flight 156721 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156677/
http://logs.test-lab.xenproject.org/osstest/logs/156721/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       20 guest-localmigrate/x10   fail REGR. vs. 156412

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156412
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156412
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156412
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156412
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156412
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156412
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156412
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156412
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156412
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156412
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156412
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                2544d06afd8d060f35b159809274e4b7477e63e8
baseline version:
 linux                6e97ed6efa701db070da0054b055c085895aba86

Last test of basis   156412  2020-11-05 11:13:59 Z    7 days
Failing since        156620  2020-11-10 12:10:31 Z    2 days    2 attempts
Testing same since   156677  2020-11-11 07:50:18 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "kiyin(尹亮)" <kiyin@tencent.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Sverdlin <alexander.sverdlin@nokia.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Strohman <astroh@amazon.com>
  Artem Lapkin <art@khadas.com>
  Baurzhan Ismagulov <ibr@radix50.net>
  Ben Skeggs <bskeggs@redhat.com>
  Bjørn Mork <bjorn@mork.no>
  Boris Brezillon <boris.brezillon@collabora.com>
  Borislav Petkov <bp@suse.de>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Claire Chang <tientzu@chromium.org>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Clément Péron <peron.clem@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Daniele Palmas <dnlplm@gmail.com>
  Dany Madden <drt@linux.ibm.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  David S. Miller <davem@davemloft.net>
  Eddy Wu <eddy_wu@trendmicro.com>
  Eddy Wu <itseddy0402@gmail.com>
  Fangrui Song <maskray@google.com>
  Felipe Balbi <balbi@kernel.org>
  Frank Slotta <frank.slotta@posteo.de>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Geoffrey D. Bennett <g@b4.vu>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Guenter Roeck <linux@roeck-us.net>
  Harald Freudenberger <freude@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com>
  Hoang Huu Le <hoang.h.le@dektech.com.au>
  Hoegeun Kwon <hoegeun.kwon@samsung.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Vander Stoep <jeffv@google.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@siol.net>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Kailang Yang <kailang@realtek.com>
  Kairui Song <kasong@redhat.com>
  Karol Herbst <kherbst@redhat.com>
  Keith Winstein <keithw@cs.stanford.edu>
  Kevin Hilman <khilman@baylibre.com>
  kiyin(尹亮) <kiyin@tencent.com>
  Lee Jones <lee.jones@linaro.org>
  Len Brown <len.brown@intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Brown <broonie@kernel.org>
  Mark Deneen <mdeneen@saucontech.com>
  Martin Hundebøll <martin@geanix.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Gorski <mateusz.gorski@linux.intel.com>
  Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
  Maxime Ripard <maxime@cerno.tech>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Walle <michael@walle.cc>
  Michal Hocko <mhocko@suse.com>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Mike Galbraith <efault@gmx.de>
  Ming Lei <ming.lei@redhat.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Oleg Nesterov <oleg@redhat.com>
  Ondřej Jirman <megous@megous.com>
  Pali Rohár <pali@kernel.org>
  Paul E. McKenney <paulmck@kernel.org>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Chen <peter.chen@nxp.com>
  Petr Malat <oss@malat.biz>
  Qian Cai <cai@redhat.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qiujun Huang <hqjagain@gmail.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ralph Campbell <rcampbell@nvidia.com>
  Rob Herring <robh@kernel.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sami Tolvanen <samitolvanen@google.com>
  Sasha Levin <sashal@kernel.org>
  Scott K Logan <logans@cottsay.net>
  Shannon Nelson <snelson@pensando.io>
  Shijie Luo <luoshijie1@huawei.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sukadev Bhattiprolu <sukadev@linux.ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tianci.Yin <tianci.yin@amd.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vignesh Raghavendra <vigneshr@ti.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  wenxu <wenxu@ucloud.cn>
  Will Deacon <will@kernel.org>
  Xiang Chen <chenxiang66@hisilicon.com>
  YueHaibing <yuehaibing@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Ziyi Cao <kernel@septs.pw>
  Zqiang <qiang.zhang@windriver.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2687 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 17:06:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 17:06:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25978.54080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdG2b-0005b4-P2; Thu, 12 Nov 2020 17:05:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25978.54080; Thu, 12 Nov 2020 17:05:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdG2b-0005ax-Lp; Thu, 12 Nov 2020 17:05:45 +0000
Received: by outflank-mailman (input) for mailman id 25978;
 Thu, 12 Nov 2020 17:05:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2UFB=ES=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdG2a-0005as-Fd
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 17:05:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ad11b9f-5454-4460-bf9f-c1a2d8d59da0;
 Thu, 12 Nov 2020 17:05:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 035E2AEA3;
 Thu, 12 Nov 2020 17:05:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605200743;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Nou+Bbhxc7Qz00xcAcdRZYTsOU2heJ17OW97NrpU6yA=;
	b=f2MOydM+sFmDzzE6PdpnqRBsm4YjJuLBzTUM1xzhXn+ZUI/nzX7Oo4is7YoabswWHx0Ob9
	y2I+XI4fLMHMd0GBLjynLhD8VCqtnkATZwj0qo/SHsL4KORcSiAnGhSdEvE3amwUvomCLP
	S1mrVVyDgeg6Z4Z5OUUXt/7uWMuN6Nk=
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Manuel Bouyer <bouyer@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <49bf4029-ee5f-e90e-c7a1-e0a6c4da5f7a@suse.com>
Date: Thu, 12 Nov 2020 18:02:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201112163240.6xswol2iswikdzef@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 17:32, Roger Pau Monné wrote:
> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -371,7 +371,12 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
>              entry->updated = false;
>          }
>          else
> +        {
> +            printk("%pp offset %u len %u new_masked %d enabled %d masked %d updated %d\n",
> +                   &pdev->sbdf, offset, len, new_masked, msix->enabled, msix->masked,
> +                   entry->updated);
>              vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
> +        }

What about a write of all zero as the very first write we
get to see, while msix->masked is true? I'm getting the
impression we would never have come through update_entry()
in this case, and hence vpci_msix_arch_enable_entry() - the
only function setting entry->arch.pirq to a valid value
afaics - would never have been called.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 17:27:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 17:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25987.54091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdGNU-0007dg-JD; Thu, 12 Nov 2020 17:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25987.54091; Thu, 12 Nov 2020 17:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdGNU-0007dZ-GI; Thu, 12 Nov 2020 17:27:20 +0000
Received: by outflank-mailman (input) for mailman id 25987;
 Thu, 12 Nov 2020 17:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B7ei=ES=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdGNT-0007dR-1o
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 17:27:19 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e96cbcf8-23a9-44d6-8384-86f866a3b82c;
 Thu, 12 Nov 2020 17:27:16 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ACHR9M6028250
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 12 Nov 2020 18:27:10 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id AFF4F2E9CA8; Thu, 12 Nov 2020 18:27:04 +0100 (MET)
Date: Thu, 12 Nov 2020 18:27:04 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201112172704.GA5899@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201112163240.6xswol2iswikdzef@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 12 Nov 2020 18:27:11 +0100 (MET)

On Thu, Nov 12, 2020 at 05:32:40PM +0100, Roger Pau Monné wrote:
> On Thu, Nov 12, 2020 at 04:57:15PM +0100, Manuel Bouyer wrote:
> > Hello,
> > I'm trying to add dom0 PVH support to NetBSD. I'm testing with Xen 4.13
> > on a brand new Intel x86 server (Dell R440).
> 
> I would recommend using 4.14, PVH dom0 is still very much in
> progress, and while I don't recall any specific fix going in 4.14 that
> could be related to this there have been changes.

Packaging Xen on NetBSD requires quite a bit of work, so I don't package
every release. I still need to get the NetBSD patches into shape to contribute
back ...


> [...]
> > This is when it configures the broadcom network interface, which interrupts
> > at "msix3 vec 0". It is the first MSI-X device configured; the previous
> > ones are MSI only.
> > 
> > Is it a bug on the Xen side, or something missing on the NetBSD side ?
> 
> Looks like a bug on the Xen side, do you see any relevant messages
> before hitting the assert?

Nothing from Xen.

> 
> Can you give a try to the following debug patch and paste what you
> get?
> 
> Thanks, Roger.
> ---8<---
> diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
> index 64dd0a929c..7ff76b7f59 100644
> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -371,7 +371,12 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
>              entry->updated = false;
>          }
>          else
> +        {
> +            printk("%pp offset %u len %u new_masked %d enabled %d masked %d updated %d\n",
> +                   &pdev->sbdf, offset, len, new_masked, msix->enabled, msix->masked,
> +                   entry->updated);
>              vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
> +        }
>  
>          break;
>      }

I get
(XEN) ffff83083feaf500p offset 12 len 4 new_masked 0 enabled 0 masked 0 updated 1
(XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843

You can find the full serial console log at
http://www-soc.lip6.fr/~bouyer/xen-log.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 17:53:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 17:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.25996.54104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdGln-000282-L5; Thu, 12 Nov 2020 17:52:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 25996.54104; Thu, 12 Nov 2020 17:52:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdGln-00027v-H0; Thu, 12 Nov 2020 17:52:27 +0000
Received: by outflank-mailman (input) for mailman id 25996;
 Thu, 12 Nov 2020 17:52:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zwMZ=ES=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kdGll-00027q-Ii
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 17:52:25 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e15a3fc2-f6e3-4e06-830c-b1d07598f259;
 Thu, 12 Nov 2020 17:52:24 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kdGlh-000BVF-1b; Thu, 12 Nov 2020 17:52:21 +0000
Date: Thu, 12 Nov 2020 17:52:21 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201112175221.GB43943@deinos.phlegethon.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
 <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
 <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
 <20201112130709.r3acpkrkyck6arul@Air-de-Roger>
 <51e646d4-3e1b-3698-c649-a39840275ec9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <51e646d4-3e1b-3698-c649-a39840275ec9@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 15:04 +0100 on 12 Nov (1605193496), Jan Beulich wrote:
> On 12.11.2020 14:07, Roger Pau Monné wrote:
> > On Thu, Nov 12, 2020 at 01:29:33PM +0100, Jan Beulich wrote:
> >> I agree with all this. If only it was merely about TLB flushes. In
> >> the shadow case, shadow_blow_all_tables() gets invoked, and that
> >> one - looking at the other call sites - wants the paging lock held.
[...]
> > The post hook for shadow could pick the lock again, as I don't think
> > the removal of the tables needs to be strictly done inside of the same
> > locked region?
> 
> I think it does, or else a use of the now stale tables may occur
> before they got blown away. Tim?

Is this the call to shadow_blow_tables() in the write_p2m_entry path?
I think it would be safe to drop and re-take the paging lock there as
long as the call happens before the write is considered to have
finished.

But it would not be a useful performance improvement - any update that
takes this path is going to be very slow regardless.  So unless you
have another pressing reason to split it up, I would be inclined to
leave it as it is.  That way it's easier to see that the locking is
correct.

Cheers,

Tim.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 17:57:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 17:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26003.54115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdGqi-0002KK-8L; Thu, 12 Nov 2020 17:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26003.54115; Thu, 12 Nov 2020 17:57:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdGqi-0002KD-5K; Thu, 12 Nov 2020 17:57:32 +0000
Received: by outflank-mailman (input) for mailman id 26003;
 Thu, 12 Nov 2020 17:57:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rCOj=ES=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kdGqh-0002K8-FW
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 17:57:31 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d9b9b46-641d-43bd-b33e-121819da6dc3;
 Thu, 12 Nov 2020 17:57:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605203850;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=O8dSFYATXX2/0rtv+0vpyswT6N2eKHkMas7fUESBDQA=;
  b=GRNmge6jJi+uCniJTbcxyRlwnTnE/KKb7pvBHFPwQbW7GfM8+njR5VwA
   +syzthVQmB+jjGzi7FaHjPNjyS2mgyX+74Ema/XFdqtyTMUUOtCaR3G0t
   fLhr+azCChtiZkhals1k2lfweitSHDJLmHW5oUTyyQj3mrf/GmRd/wx+2
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 3vWL9XZGgiTZLWbfUf8OjRUmPowL2Tu1NgRlwjmOkLZSDQ+ahdxleXzus6XQKiHhi4iWnwOQf2
 BF/RBbjM8zQo8aqmOAvvUTysFDIXbeZ9puIIo2X2MNxWWCWRWrGZxcEujPll4divYiL5j7v0SJ
 O/Npv4gYAaK/mGY2WNOpM8YOGqovr0sC80AFsV0PovW2haIZF5f0J5uib1aWUB2aIcoN0NTuIz
 9+HcJN/08+mH1ZxdzTBZ4xQ6KfCaWRGII/P0PMXBl4Q/gcjRUdbLrYB8jr77mQiXxPh2RE9OLC
 Fok=
X-SBRS: None
X-MesageID: 31056687
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,472,1596513600"; 
   d="scan'208";a="31056687"
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
To: Manuel Bouyer <bouyer@antioche.eu.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f5fd0199-698b-5bb9-87bb-4365855b0e27@citrix.com>
Date: Thu, 12 Nov 2020 17:57:18 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201112172704.GA5899@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 12/11/2020 17:27, Manuel Bouyer wrote:
> On Thu, Nov 12, 2020 at 05:32:40PM +0100, Roger Pau Monné wrote:
>> On Thu, Nov 12, 2020 at 04:57:15PM +0100, Manuel Bouyer wrote:
>>> Hello,
>>> I'm trying to add dom0 PVH support to NetBSD. I'm testing with Xen 4.13
>>> on a brand new Intel x86 server (Dell R440).
>> I would recommend using 4.14, PVH dom0 is still very much in
>> progress, and while I don't recall any specific fix going in 4.14 that
>> could be related to this there have been changes.
> Packaging Xen on NetBSD requires quite a bit of work, so I don't package
> every release. I still need to get the NetBSD patches into shape to contribute
> back ...

For issues like this, you don't need to package all of Xen.  You can
just build the hypervisor locally and boot that.  (It doesn't matter if
the dom0 userspace doesn't come up cleanly if you're debugging a general
issue between the kernel and Xen.)

~Andrew
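For anyone following along, the hypervisor-only build Andrew describes is roughly the following. The tag name and the boot step are illustrative; adjust for the release under test and for your bootloader.

```shell
# Build only the hypervisor from a xen.git checkout -- no tools, no
# stubdoms, so the packaged dom0 userspace stays untouched.
git clone https://xenbits.xen.org/git-http/xen.git
cd xen
git checkout RELEASE-4.14.0   # or whichever branch/tag you want to test
make -C xen -j4               # builds xen/xen and xen/xen.gz

# Then boot the freshly built binary alongside the packaged one,
# e.g. (x86, GRUB-style setups; NetBSD boot configuration differs):
cp xen/xen.gz /boot/xen-test.gz
```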


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 19:09:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 19:09:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26018.54134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdHxj-0000va-Ge; Thu, 12 Nov 2020 19:08:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26018.54134; Thu, 12 Nov 2020 19:08:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdHxj-0000vT-D6; Thu, 12 Nov 2020 19:08:51 +0000
Received: by outflank-mailman (input) for mailman id 26018;
 Thu, 12 Nov 2020 19:08:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdHxi-0000vO-Ar
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 19:08:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cf15e19-eebc-41d7-9ada-b7865fba89d3;
 Thu, 12 Nov 2020 19:08:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdHxg-00069e-0d; Thu, 12 Nov 2020 19:08:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdHxf-0005ZS-N6; Thu, 12 Nov 2020 19:08:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdHxf-0001pU-MZ; Thu, 12 Nov 2020 19:08:47 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vKWDXLzdSOZf5O7y7eEnyxGuoqi7lxuCXx8h6B7Jt44=; b=ofIWlMZWoxy0Kuq52NoRE8eq16
	e6N1PH+J6pAKAWZGvpTlO3mWkqcZncSn/+fSzAa6jeNeZjcC0wwSJ0VdFfBf75OJpit7+Sp3hZRNT
	7sqyOLnW1l77UOu55m3XFq6mkcVd3tjSNKn2W7dC1C14HJVfukYuRt2td9neZmlTthSo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156693-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 156693: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=8ab15139728a8efd3ebbb60beb16a958a6a93fa1
X-Osstest-Versions-That:
    xtf=848c3f2dd42711c4d9fc01839d6630c115daa22f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 19:08:47 +0000

flight 156693 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156693/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  8ab15139728a8efd3ebbb60beb16a958a6a93fa1
baseline version:
 xtf                  848c3f2dd42711c4d9fc01839d6630c115daa22f

Last test of basis   156541  2020-11-07 19:44:12 Z    4 days
Testing same since   156693  2020-11-12 00:13:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   848c3f2..8ab1513  8ab15139728a8efd3ebbb60beb16a958a6a93fa1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 20:20:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 20:20:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26033.54149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdJ4T-00081e-Sy; Thu, 12 Nov 2020 20:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26033.54149; Thu, 12 Nov 2020 20:19:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdJ4T-00081X-Pg; Thu, 12 Nov 2020 20:19:53 +0000
Received: by outflank-mailman (input) for mailman id 26033;
 Thu, 12 Nov 2020 20:19:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muz0=ES=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kdJ4S-00081S-4f
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 20:19:52 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2ee61e5-47b6-4a29-abb4-9a58f613d491;
 Thu, 12 Nov 2020 20:19:50 +0000 (UTC)
X-Inumbo-ID: f2ee61e5-47b6-4a29-abb4-9a58f613d491
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605212390;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=ml64WgrDyUNCCt0Zwhrj3BKfHRHSJoK3a6HPlUeDMf4=;
  b=TDhLEHQ7aNbUT9XaeQoJt3wsgAVk1/2bV0byLa3LmIttwkEukG5B+BJz
   EKkDw0fNvNZh0OI8Y4LEKspGRv4Wi82NDgDbw+tO4hhPrqpSWSJeoIqsO
   8cg5IbEMgCuwJhjiUK498G7M7fWtGdoIH3WBlz04sVg0Ea2s440EDv9eq
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: tsC6RdhXYn06IOnYM5+hIhW+b0jV8miSKM9b3seQEIJw+MbroFyrL5VRUBjHMUbnPN+laR2DWr
 67SyvOiJGfn1Xtf1NV7RX7q1Doc8F2eICOqXuEHfQOllt+3hgr51Co4cFyTd2+qGn/7paGd58w
 0HueKMafrLK2PNiFpYevPJg/hl3Al1chtqlP98quBu/1Unlild3IRlHySuM06MQWwyJwOTnpzi
 6rHvFJOxRqWs3eCffnJmp6QDrTL+vqsSpuC0+Lr0ey850eJ4Y7du4aMK/4KTSGXqpb6WYnK2lk
 ZRE=
X-SBRS: None
X-MesageID: 32199207
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,472,1596513600"; 
   d="scan'208";a="32199207"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q5KIwYUnmMWohFckzTrxbB3Q/05ghEVdpe5e8ZckrEEImKQczTJK6pHLIDNUfFkedusQN7w0/4AJnJlqXKMPnYmIJ3fEJ7C0bTlcL+eh+KlK/eSdNgoQWsFx5rl9006PETvcEe7FKk7D5TQvvvTPkMb6wb08oAev1GWS0a/M/MiZS7IctLfTbPBNfDn31qI03l4UlvMCugZQ/eKSn1u2uzA6KoVAnJmzD5p8+894y/IwwZynJgbiT43/z97t4zqRdzEiOZHYZCUYaG33Yaup55KILEgAMLcZt5+GewaMVWUp3OD1wUfGx3Yb4/EymIZKKdZcyHGdS+YheiC7WxFZNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JEk3MTH4ClpZVHTXC3mpKek7Qm2H6NwtVf7MUraNuuw=;
 b=jEdH/O0GHR3zq/PsEj35PZLKdfv2fL0ReuOPMiv1iOGrnxX/HmYycoYOEoh1zjz+0hcQb2G16iov5o3LYqNkg6fJlBD2Q9NLgqVK0aQNGnhAY9RQ7ERl9Ch8DLkRvN5j8oiF/mfegpvhiSwT05P+uqRqhSi4kMku8CRIBcq140T1v/LkWjlhI6CXCzYpaAXtbFkKpevgoXw5L1X+P7njrvEMw5poe8s6a3s3KiJlieHfBAafu4akJ0it5mspK7c4u7LgrBC5cCC16Jbf7QcavX8YCzBsUE8h35as5v5bti3g8rL9HwDGMHWUHpixUOUNqDzzdGDfFZd6uehACNxevA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JEk3MTH4ClpZVHTXC3mpKek7Qm2H6NwtVf7MUraNuuw=;
 b=o7Tvwu2aQJ4rgV1mZTh5mVTVZA7h/i7/EgtS6z4p1K5U3w6+nH/gNpdb6SCHfbcuDwvHVxwdwPGfxnm9FkQ01FJ6E2nRD6QxBm7iN7WroO2jBeYWt/ZuZ7P0gAAK6ZE2ynv9/d1caH0sQFmN5ue3JPltLbMR6zmb/5gQ6sV0NNU=
Date: Thu, 12 Nov 2020 21:19:39 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201112172704.GA5899@antioche.eu.org>
X-ClientProxiedBy: LO2P265CA0257.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::29) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7c4e40e8-8b74-4663-fd06-08d887484c0f
X-MS-TrafficTypeDiagnostic: SN6PR03MB3694:
X-Microsoft-Antispam-PRVS: <SN6PR03MB36948F6465E21EBE3EB0F25F8FE70@SN6PR03MB3694.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ue7IwmRSvowq0bZeGXzWCAhLYkIVV8bRJ504ko99fwzcsqOpZhTiRuPlgWnqvSNaz6VEesZHhPRMZWVydRTeVR6oqjmCrI6iQKCfdgn1GfcGjQcBHYG1sH78zkUQJyjL0aJ18ZQ0wPFFFalcs/3yHTfR5ju64JJ2o1FHN7Fs8AfdMwIBtzARRY+Unwbjb0N7ECo6dtO22x43I+5MdebuLJTeaPxAkD81Hx/3fPU2KtxUt6wSiIQu5Xk4nT9+hgbO9/VjDhOQTd8UMPNNSOFzQW9sIjDUTQqgMi14U3zdR0sRKRr3011J1rreV5zIePw/16cp8Tzu3lbglYpILUZ/XvuFLMnK/UsAcYdHfgObx2Fk+XrK00M3Z2Z66IUkfUdwWjIEejPW/souZRXtU+LgpA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(396003)(366004)(39860400002)(136003)(376002)(478600001)(16526019)(66476007)(6486002)(26005)(85182001)(4326008)(8676002)(83380400001)(33716001)(1076003)(186003)(6916009)(9686003)(966005)(66946007)(86362001)(6496006)(316002)(6666004)(2906002)(8936002)(956004)(66556008)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: Uw/T6Ju/T+3+7v31XCDV+VyiC7qVK7jYi9qfniZEQ6GTobPSM0bdkj8Hi7dgwH9/Rwn1ykadW5RvFzRHtrA4y7k0hV3P30sifjV6xZm/yEPwOMSLGbVujWe98mmNkwSoVl7dSnM3cPcrr9lYaiMMrC9IKK9XPfKMp6ufDY/wOs//gYsaOjvJWBvL8cBLUDhWTPn9E77dD1uPJ9FgnXACgXQeExk8e3FaUYK+cqsaBDdPGOXbWFhZApGfGhEc8Ypejb34i5PMCPFiDqH5SLWQuZL1aaUtnZqjb3Z84AVprg/Z5PlNkX1SH3mGZ1kqcrYSB4Usykma5COzygvFacaOdt3OueKLXYvTSiM2eocpEsgzHw++OZSStuY2zD2HvnZzU42KyylKEwl16n6FgewLxs89OVQMtn2wOcjHCcLf06BgwGdy8a4diHv/ehEomV//BTJHPzBQBxC35oqSTFHiDBBLhrjVRpPi9C4NlIVpSEkFuXpxnJUcy1QKkwG47AuptL5N0mZTT0B8J6tK3bLCkO6WdorZp3LcmRQxth/4CZkTJP285K/FwtGPCbYBTBozWSan+2LWqapD9JmTsTo82PEuv1ssXDnSR02iEOPjVCwJmDV58y7ljPI6csjiEih8uQY37MXWv9lPcnfcmq1j5A==
X-MS-Exchange-CrossTenant-Network-Message-Id: 7c4e40e8-8b74-4663-fd06-08d887484c0f
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Nov 2020 20:19:45.8816
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ij/qboOEWSv0Yq6PtzUl67k+UGb0KtUcebX/ezwfPuGVmbWAoP85oj4XMR+FcaZrZRccBXoVM+NzMvjVr9FMdw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB3694
X-OriginatorOrg: citrix.com

On Thu, Nov 12, 2020 at 06:27:04PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 12, 2020 at 05:32:40PM +0100, Roger Pau Monné wrote:
> > On Thu, Nov 12, 2020 at 04:57:15PM +0100, Manuel Bouyer wrote:
> > Can you give a try to the following debug patch and paste what you
> > get?
> > 
> > Thanks, Roger.
> > ---8<---
> > diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
> > index 64dd0a929c..7ff76b7f59 100644
> > --- a/xen/drivers/vpci/msix.c
> > +++ b/xen/drivers/vpci/msix.c
> > @@ -371,7 +371,12 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
> >              entry->updated = false;
> >          }
> >          else
> > +        {
> > +            printk("%pp offset %u len %u new_masked %d enabled %d masked %d updated %d\n",
> > +                   &pdev->sbdf, offset, len, new_masked, msix->enabled, msix->masked,
> > +                   entry->updated);
> >              vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
> > +        }
> >  
> >          break;
> >      }
> 
> I get
> (XEN) ffff83083feaf500p offset 12 len 4 new_masked 0 enabled 0 masked 0 updated 1
> (XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
> 
> You can find the full serial console log at
> http://www-soc.lip6.fr/~bouyer/xen-log.txt

The following might be able to get you going, but I think I need to
refine the logic a bit there, will have to give it some thought.

Thanks, Roger.
---8<---
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..3eb6102a61 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -370,7 +370,7 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
 
             entry->updated = false;
         }
-        else
+        else if ( msix->enabled )
             vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
 
         break;


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 20:44:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 20:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26043.54164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdJSR-0002PB-VA; Thu, 12 Nov 2020 20:44:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26043.54164; Thu, 12 Nov 2020 20:44:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdJSR-0002P4-S1; Thu, 12 Nov 2020 20:44:39 +0000
Received: by outflank-mailman (input) for mailman id 26043;
 Thu, 12 Nov 2020 20:44:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AYe4=ES=gmail.com=lambert.olivier@srs-us1.protection.inumbo.net>)
 id 1kdJSQ-0002Oz-Eo
 for xen-devel@lists.xen.org; Thu, 12 Nov 2020 20:44:38 +0000
Received: from mail-vk1-xa34.google.com (unknown [2607:f8b0:4864:20::a34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27781b78-cd97-418b-a217-0eb5b57b7551;
 Thu, 12 Nov 2020 20:44:37 +0000 (UTC)
Received: by mail-vk1-xa34.google.com with SMTP id b190so1651114vka.0
 for <xen-devel@lists.xen.org>; Thu, 12 Nov 2020 12:44:37 -0800 (PST)
X-Inumbo-ID: 27781b78-cd97-418b-a217-0eb5b57b7551
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=1Dv/sJJ3IkoBJcevrIhmR5U4he/z6g+Aj4JE4i0rih4=;
        b=ViarK4P6ndu5K7y9ppo+Ndl0Da2uyGzmA2EZiaaxjpDQ6r/BxLQLOZLFMAePwCKLUE
         pBaR9oPKd5Wju/H+5JgjEx915+9/dZcffml0bHYINGyHDVkuHgIic10nBfM6YQe0zD8g
         Kbc+xT4NulPZu8aLQPKCBx7XUsbr0Bbja2NiIQLl0VHvM2be3qnIcF043MW2xIBQBTyP
         E8mSjAts6RC4zblixtgMnZZERp16VC8c/3E71mUjk7WBbfl4fCIVWuyztUyHGiu+CY1s
         WIyS5/mlu5Eris8La+R4AHoNbP+rhUxA80KV69DvZ8aTy6h9P1W6PuCLBHOy7+r+zQ7a
         W0uA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=1Dv/sJJ3IkoBJcevrIhmR5U4he/z6g+Aj4JE4i0rih4=;
        b=hSq1euPzUxk4izdvbAOGaclabow8TugW7UmrM8mvKEM2V9tqx9SO/Vq1lwI7vQh30q
         WiIAzY8MtTzFMAtDJSJffZQ7e4mhzAwzwBRnlrFBfYzqMZiSlXSqY3ZqgrkmGMKlbYmy
         grgPtE9kePF1LU2erLQtftvjDj1uCxBy6hvX6q5ma7KzMQe/0MsZiBYBJS0AQSQncxG9
         N+nV449yS+NeMiMQ64IsyBvIOa3HGLlYnwRT74GbyHz+gB7/doeoAPOGRvMKlrgmCOuD
         EtXDwRjgW+D1IDEn/uE6jcl8wQiGGtf7Xq1BQwPyBJib66sOOjm5kB3vIJk1eMeEpxCk
         wsAg==
X-Gm-Message-State: AOAM531FgC4eqLo4rqsfCxiy0LAroIxmAehi4Z+3LDRTZJO/qrKhsq/e
	/MUmHKl0BOg6rT9PXiQSbl6CoUP5qeh1K5+1Af7mWvvB/+G1RA==
X-Google-Smtp-Source: ABdhPJzFCEtn7ji7EZ5nECRND30Rnub41uq1wPqmPB+hOIwcim6cV41D48iIDwNeXNxsuM3jywta7ktbgRDaXJbmjho=
X-Received: by 2002:ac5:c952:: with SMTP id s18mr1078333vkm.5.1605213876474;
 Thu, 12 Nov 2020 12:44:36 -0800 (PST)
MIME-Version: 1.0
References: <CACJ1ZNuJCgDkRHvH2gXqC5gWTJHdUQ9J4G-HBNFwKYZFaWpWuw@mail.gmail.com>
 <CACJ1ZNupvRX_fcGPWn3mm+3Lm4gT38M088tUc_sSUu8JeQg3Fg@mail.gmail.com>
In-Reply-To: <CACJ1ZNupvRX_fcGPWn3mm+3Lm4gT38M088tUc_sSUu8JeQg3Fg@mail.gmail.com>
From: Olivier Lambert <lambert.olivier@gmail.com>
Date: Thu, 12 Nov 2020 21:44:25 +0100
Message-ID: <CACJ1ZNu5Kdf72j1eTtdgTuSOjgkpeEWFM0cKB-54pxqwXuWCDQ@mail.gmail.com>
Subject: Re: Schedule for OpenPOWER/Xen meeting
To: "<xen-devel@lists.xen.org>" <xen-devel@lists.xen.org>
Content-Type: multipart/alternative; boundary="000000000000e5562105b3eefac8"

--000000000000e5562105b3eefac8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Okay so before having the meeting webex/whatever link, I think it would be
more efficient to plan a kind of agenda, something we can pass to the
OpenPOWER team in the next few days. This way, they could have some answers
ready, allowing us to explore more things interactively during the meeting.

Feel free to participate in this thread (even if you won't be at the
meeting!), so we can gather and then organize a bit of what we'd like to
know/discuss during this meeting.

So go ahead and start to throw questions :)


Thanks,

Olivier.


On Thu 12 Nov 2020 at 09:26, Olivier Lambert <lambert.olivier@gmail.com>
wrote:

> Thanks to everyone who participated in the poll. Due to the limited number
> of answers, I think it's wiser to go for the second option (Thursday the
> 19th), because everyone who already answered seems available that day. I'll
> confirm that to OpenPOWER. When it's confirmed, I'll do a recap here
> ideally with the meeting place.
>
> Thanks,
>
> Olivier.
>
>
> On Tue 10 Nov 2020 at 13:41, Olivier Lambert <lambert.olivier@gmail.com>
> wrote:
>
>> Hi everyone,
>>
>> We got 2 potential dates for the initial tech meeting with at least one
>> OpenPOWER expert, so we can discuss the effort needed to port Xen on this
>> architecture.
>>
>> Because of time zones (on the OpenPOWER side, there's one guy in
>> Australia), we got 2 possible schedules in November:
>>
>> 1. 3pm CT on this Thursday the 12th (! this week)
>> 2. Or next week Thursday the 19th
>>
>> I made a doodle-like poll so everyone can vote on their preferred schedule:
>> https://framadate.org/QQu5rYEOEYr4ZHc4
>>
>> Note: 3pm CT would mean 9pm UTC, 10pm UTC+1 (CET). But correct me if I'm
>> wrong.
>>
>> Reminder: the Cryptpad of the last Xen Community meeting contains the
>> list of people interested. If you are aware of someone interested that
>> could miss this email on this devel list, feel free to forward it. Cryptpad
>> link: https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/
>>
>> Thank you and see you soon!
>>
>> Olivier.
>>

--000000000000e5562105b3eefac8--


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 21:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 21:32:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26056.54176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKC3-0007Ku-G1; Thu, 12 Nov 2020 21:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26056.54176; Thu, 12 Nov 2020 21:31:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKC3-0007Kn-Ck; Thu, 12 Nov 2020 21:31:47 +0000
Received: by outflank-mailman (input) for mailman id 26056;
 Thu, 12 Nov 2020 21:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdKC2-0007K7-69
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 21:31:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 751c97a5-f22f-41b7-aa7f-96679b0c4338;
 Thu, 12 Nov 2020 21:31:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdKBz-0000i9-Gm; Thu, 12 Nov 2020 21:31:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdKBz-0006Q3-75; Thu, 12 Nov 2020 21:31:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdKBz-0003y5-6Z; Thu, 12 Nov 2020 21:31:43 +0000
X-Inumbo-ID: 751c97a5-f22f-41b7-aa7f-96679b0c4338
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NUdN5EDNQPsJQh5ZzucJGomQL+9PId5Qkq504KHlOnA=; b=G/QmIqHR20Ndfyzs6oEs8Z/Z+E
	3Pg0B3R3VUpSLxnrVSP7Xe3amuY8wXfyG1ftrAhIl+16oLEabCCi5rV7Dze7HKSb6LQOowSAasmbZ
	Oc1Iu4VNXM8kClFqIj43DPw+pgRhB6Xl4J2vCJF8NsSibToCAsNwZGMU9JXjQ6x7GZB4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156685-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156685: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=eccc876724927ff3b9ff91f36f7b6b159e948f0c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 21:31:43 +0000

flight 156685 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156685/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                eccc876724927ff3b9ff91f36f7b6b159e948f0c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  104 days
Failing since        152366  2020-08-01 20:49:34 Z  103 days  169 attempts
Testing same since   156685  2020-11-11 13:23:33 Z    1 days    1 attempts

------------------------------------------------------------
3487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 665863 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 21:38:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 21:38:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26065.54191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKIG-0007a6-8X; Thu, 12 Nov 2020 21:38:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26065.54191; Thu, 12 Nov 2020 21:38:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKIG-0007Zz-56; Thu, 12 Nov 2020 21:38:12 +0000
Received: by outflank-mailman (input) for mailman id 26065;
 Thu, 12 Nov 2020 21:38:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1igI=ES=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kdKIE-0007Zg-HS
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 21:38:10 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cafa734-3faf-44f4-be33-9c070fd303d5;
 Thu, 12 Nov 2020 21:38:09 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7DE8D216C4;
 Thu, 12 Nov 2020 21:38:08 +0000 (UTC)
X-Inumbo-ID: 6cafa734-3faf-44f4-be33-9c070fd303d5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605217089;
	bh=dfJxq46eCc0l3PfXO9PBMR/wkbnzqvQdLA8B2ZWFB9o=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gZjjb3z8Sdt5eQ3VDjtVH8ty5RABlE6uraJj/rvTc3zpdACaG0uwOuRDUntzBHItI
	 SBqXgx+x52QCStGlLiqd+fFeT971j0k83acVCroB7XppyZ4m1LSsuqtDUstxlRn3BJ
	 DSAcwkhNmNYMdmlh20K3RMeUvjGNc42abtBSLtWQ=
Date: Thu, 12 Nov 2020 13:38:07 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
In-Reply-To: <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
Message-ID: <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>
References: <20201022143905.11032-1-jgross@suse.com> <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com> <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com> <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com> <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
 <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-782297277-1605216774=:20906"
Content-ID: <alpine.DEB.2.21.2011121333080.20906@sstabellini-ThinkPad-T480s>


On Thu, 12 Nov 2020, Jan Beulich wrote:
> On 12.11.2020 13:50, Jürgen Groß wrote:
> > Any further comments, or even better, Acks?
> 
> To be honest I'd prefer to have at least one of the people Cc-ed
> minimally indicate they consider this a good idea. No need for a
> close review or such, just a basic opinion. Anyone?

I see Jan's point that it is not clear how much this is going to help in
production. However, it is not going to hurt either, and I have been
told a few times recently that debugging Xen is not easy. Anything that
helps in that regard would be good. So I think this patch would be an
improvement.


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 21:45:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 21:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26074.54202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKOn-00009u-W9; Thu, 12 Nov 2020 21:44:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26074.54202; Thu, 12 Nov 2020 21:44:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKOn-00009n-TB; Thu, 12 Nov 2020 21:44:57 +0000
Received: by outflank-mailman (input) for mailman id 26074;
 Thu, 12 Nov 2020 21:44:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bIZ8=ES=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kdKOl-00009i-Sp
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 21:44:56 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id be3cb944-7732-4a93-b71b-bd8a2701a698;
 Thu, 12 Nov 2020 21:44:52 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-283-HzfKj2k2Oy2N0dzTzOJcrQ-1; Thu, 12 Nov 2020 16:44:50 -0500
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 467451018F6F;
 Thu, 12 Nov 2020 21:44:48 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6EBE45D9CA;
 Thu, 12 Nov 2020 21:44:38 +0000 (UTC)
X-Inumbo-ID: be3cb944-7732-4a93-b71b-bd8a2701a698
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1605217492;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fsQe7Mm1Y7+ujDMAz+Zy/FiKiDElt5mnHtrgipEPHRs=;
	b=PpiHi/8m5QwIeYhxdM5kjXl67ns4GDj0ReIKXEku0dkf+lhvmMgKOF3Iv/HlJLzXq57T3+
	th92C8Q5zRA/2JAL1vs3SYyNmutumjXsjMeIvlJ/gCtbTE0lDuAGuzp7BXSl5MsBpl4hHB
	x8AyheRfWJPeh7RGl7IuJnZdRMQ2Hwo=
X-MC-Unique: HzfKj2k2Oy2N0dzTzOJcrQ-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Igor Mammedov <imammedo@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v3 09/53] qdev: Make qdev_get_prop_ptr() get Object* arg
Date: Thu, 12 Nov 2020 16:43:06 -0500
Message-Id: <20201112214350.872250-10-ehabkost@redhat.com>
In-Reply-To: <20201112214350.872250-1-ehabkost@redhat.com>
References: <20201112214350.872250-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make the code more generic and not specific to TYPE_DEVICE.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
- Fix build error with CONFIG_XEN
  I took the liberty of keeping the Reviewed-by line from
  Marc-André as the build fix is a trivial one line change
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  8 ++--
 hw/block/xen-block.c             |  5 +-
 hw/core/qdev-properties-system.c | 57 +++++++++-------------
 hw/core/qdev-properties.c        | 82 +++++++++++++-------------------
 hw/s390x/css.c                   |  5 +-
 hw/s390x/s390-pci-bus.c          |  4 +-
 hw/vfio/pci-quirks.c             |  5 +-
 8 files changed, 68 insertions(+), 100 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 0ea822e6a7..0b92cfc761 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -302,7 +302,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+void *qdev_get_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(DeviceState *dev,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index b58d298c1a..e91c21dd4a 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,8 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    TPMBackend **be = qdev_get_prop_ptr(dev, opaque);
+    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -49,7 +48,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -73,9 +72,8 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 8a7a3f5452..905e4acd97 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -335,9 +335,8 @@ static char *disk_to_vbd_name(unsigned int disk)
 static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -398,7 +397,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index d0fb063a49..c8c73c371b 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -59,9 +59,8 @@ static bool check_prop_still_unset(DeviceState *dev, const char *name,
 static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -87,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -185,7 +184,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(dev, prop);
+    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -218,8 +217,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    CharBackend *be = qdev_get_prop_ptr(dev, opaque);
+    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -232,7 +230,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -272,9 +270,8 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -297,9 +294,8 @@ const PropertyInfo qdev_prop_chr = {
 static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -315,7 +311,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -381,9 +377,8 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
 static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -395,7 +390,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -461,9 +456,8 @@ const PropertyInfo qdev_prop_netdev = {
 static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -475,7 +469,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -582,7 +576,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -674,9 +668,8 @@ const PropertyInfo qdev_prop_multifd_compression = {
 static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -693,7 +686,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -761,7 +754,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -804,8 +797,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    DeviceState *dev = DEVICE(obj);
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -828,9 +820,8 @@ const PropertyInfo qdev_prop_pci_devfn = {
 static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -857,7 +848,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -950,9 +941,8 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
 static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -981,7 +971,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     if (dev->realized) {
@@ -1027,9 +1017,8 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
 static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -1067,7 +1056,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     if (dev->realized) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 3a4638f4de..0a54a922c8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -38,9 +38,9 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
+void *qdev_get_prop_ptr(Object *obj, Property *prop)
 {
-    void *ptr = dev;
+    void *ptr = obj;
     ptr += prop->offset;
     return ptr;
 }
@@ -48,9 +48,8 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
@@ -60,7 +59,7 @@ void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -94,8 +93,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint32_t *p = qdev_get_prop_ptr(dev, props);
+    uint32_t *p = qdev_get_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -107,9 +105,8 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(dev, prop);
+    uint32_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -156,8 +153,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint64_t *p = qdev_get_prop_ptr(dev, props);
+    uint64_t *p = qdev_get_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -169,9 +165,8 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(dev, prop);
+    uint64_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -208,9 +203,8 @@ const PropertyInfo qdev_prop_bit64 = {
 static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -220,7 +214,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -242,9 +236,8 @@ const PropertyInfo qdev_prop_bool = {
 static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -254,7 +247,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -288,9 +281,8 @@ const PropertyInfo qdev_prop_uint8 = {
 void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -300,7 +292,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -322,9 +314,8 @@ const PropertyInfo qdev_prop_uint16 = {
 static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -334,7 +325,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -347,9 +338,8 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
                              void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -359,7 +349,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -388,9 +378,8 @@ const PropertyInfo qdev_prop_int32 = {
 static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -400,7 +389,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -413,9 +402,8 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -425,7 +413,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -454,15 +442,14 @@ const PropertyInfo qdev_prop_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(DEVICE(obj), prop));
+    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -477,7 +464,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -515,9 +502,8 @@ const PropertyInfo qdev_prop_on_off_auto = {
 void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -528,7 +514,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
     if (dev->realized) {
@@ -563,9 +549,8 @@ const PropertyInfo qdev_prop_size32 = {
 static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -581,7 +566,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -653,7 +638,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
     void **arrayptr = (void *)dev + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -699,7 +684,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->prop.offset = eltptr - (void *)dev;
-        assert(qdev_get_prop_ptr(dev, &arrayprop->prop) == eltptr);
+        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
         object_property_add(obj, propname,
                             arrayprop->prop.info->name,
                             arrayprop->prop.info->get,
@@ -893,9 +878,8 @@ void qdev_prop_set_globals(DeviceState *dev)
 static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -905,7 +889,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 9961cfe7bf..2b8f33fec2 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2343,9 +2343,8 @@ void css_reset(void)
 static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2375,7 +2374,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 48a3be802f..ab27b6e848 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1323,7 +1323,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(DEVICE(obj), prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1334,7 +1334,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
     DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 57150913b7..53569925a2 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1488,9 +1488,8 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1501,7 +1500,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 21:45:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 21:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26079.54215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKPm-0000I9-Gg; Thu, 12 Nov 2020 21:45:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26079.54215; Thu, 12 Nov 2020 21:45:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKPm-0000I2-Co; Thu, 12 Nov 2020 21:45:58 +0000
Received: by outflank-mailman (input) for mailman id 26079;
 Thu, 12 Nov 2020 21:45:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bIZ8=ES=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kdKPl-0000Hx-A4
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 21:45:57 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fbea7561-b99f-40b3-b6b5-31883ca77d7a;
 Thu, 12 Nov 2020 21:45:55 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-287-nulqUWz6NqS_XwYyoOuy-A-1; Thu, 12 Nov 2020 16:45:52 -0500
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C4D5557204;
 Thu, 12 Nov 2020 21:45:49 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DBD8C19C66;
 Thu, 12 Nov 2020 21:45:39 +0000 (UTC)
X-Inumbo-ID: fbea7561-b99f-40b3-b6b5-31883ca77d7a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1605217555;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=96y8FlxASBqueB7vdblz6B3QBe6EDXkcFeOt+FcgiDo=;
	b=J00yptfY8u64eME2rC60GZWk/4mIuC9zmbuNYOCJWlgU4kWD3nPIi17ATG4iPxMG6hf93n
	fRg6ELvYmwdN75d4w/ZAbLpcoyhMwVURQLd7zcl+NTt8oFm/GW7+L907fkJQtNnG7T4rJe
	h6iyiohiX7xwlKAxdhUr3adrwqLj5fk=
X-MC-Unique: nulqUWz6NqS_XwYyoOuy-A-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Igor Mammedov <imammedo@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Artyom Tarasenko <atar4qemu@gmail.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v3 23/53] qdev: Move dev->realized check to qdev_property_set()
Date: Thu, 12 Nov 2020 16:43:20 -0500
Message-Id: <20201112214350.872250-24-ehabkost@redhat.com>
In-Reply-To: <20201112214350.872250-1-ehabkost@redhat.com>
References: <20201112214350.872250-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Every single qdev property setter function manually checks
dev->realized.  We can just check dev->realized inside
qdev_property_set() instead.

The check is being added as a separate function
(qdev_prop_allow_set()) because it will become a callback later.
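As a rough illustration of the refactoring described above (this is NOT the
real QEMU code: DeviceState, PropSetter, set_some_prop, and the int-valued
property are simplified stand-ins; the real qdev_property_set() dispatches
through Property/PropertyInfo and reports errors via Error **errp):

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct DeviceState {
    bool realized;
    int some_prop;
} DeviceState;

typedef void (*PropSetter)(DeviceState *dev, int value);

/*
 * The check every setter used to duplicate, now in one place.
 * Kept as a separate function because, per the commit message,
 * it later becomes a callback.
 */
static bool qdev_prop_allow_set(DeviceState *dev, const char *name)
{
    if (dev->realized) {
        fprintf(stderr, "cannot set property '%s': device already realized\n",
                name);
        return false;
    }
    return true;
}

/* Central dispatch path: setters no longer check dev->realized themselves. */
static void qdev_property_set(DeviceState *dev, const char *name,
                              PropSetter set, int value)
{
    if (!qdev_prop_allow_set(dev, name)) {
        return;
    }
    set(dev, value);
}

/* Example setter, reduced to just storing the value. */
static void set_some_prop(DeviceState *dev, int value)
{
    dev->some_prop = value;
}
```

The payoff, visible in the diffstat below, is that each setter drops its
six-line "if (dev->realized) { ... return; }" preamble while behavior is
unchanged.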

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
* Removed unused variable in xen_block_set_vdev()
* Redid the patch after changes to the previous patches in the
  series
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 backends/tpm/tpm_util.c          |   6 --
 hw/block/xen-block.c             |   6 --
 hw/core/qdev-properties-system.c |  70 ----------------------
 hw/core/qdev-properties.c        | 100 ++++++-------------------------
 hw/s390x/css.c                   |   6 --
 hw/s390x/s390-pci-bus.c          |   6 --
 hw/vfio/pci-quirks.c             |   6 --
 target/sparc/cpu.c               |   6 --
 8 files changed, 18 insertions(+), 188 deletions(-)

diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index dba2f6b04a..0b07cf55ea 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 905e4acd97..bd1aef63a7 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -395,17 +395,11 @@ static int vbd_name_to_disk(const char *name, const char **endp,
 static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 202abd0e4b..0d3e57bba0 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -94,11 +94,6 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
     bool blk_created = false;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -230,17 +225,11 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -311,18 +300,12 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -390,7 +373,6 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
 static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
@@ -398,11 +380,6 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
     int queues, err = 0, i = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -469,18 +446,12 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
 static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -582,11 +553,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     uint64_t value;
     Error *local_err = NULL;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -686,7 +652,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
 static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
@@ -694,11 +659,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
     char *str;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_str(v, name, &str, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
@@ -754,17 +714,11 @@ const PropertyInfo qdev_prop_reserved_region = {
 static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, NULL)) {
         if (!visit_type_int32(v, name, &value, errp)) {
             return;
@@ -848,7 +802,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
@@ -857,11 +810,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
     unsigned long dom = 0, bus = 0;
     unsigned int slot = 0, func = 0;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -971,16 +919,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
                          errp)) {
         return;
@@ -1056,16 +998,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
                          errp)) {
         return;
@@ -1128,16 +1064,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index b924f13d58..92f48ecbf2 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -24,6 +24,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
     }
 }
 
+/* returns: true if property is allowed to be set, false otherwise */
+static bool qdev_prop_allow_set(Object *obj, const char *name,
+                                Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return false;
+    }
+    return true;
+}
+
 void qdev_prop_allow_set_link_before_realize(const Object *obj,
                                              const char *name,
                                              Object *val, Error **errp)
@@ -65,6 +78,11 @@ static void field_prop_set(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
+
+    if (!qdev_prop_allow_set(obj, name, errp)) {
+        return;
+    }
+
     return prop->info->set(obj, v, name, opaque, errp);
 }
 
@@ -90,15 +108,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
 
@@ -148,15 +160,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -208,15 +214,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -245,15 +245,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_bool(v, name, ptr, errp);
 }
 
@@ -278,15 +272,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint8(v, name, ptr, errp);
 }
 
@@ -323,15 +311,9 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
 static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint16(v, name, ptr, errp);
 }
 
@@ -356,15 +338,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
 static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint32(v, name, ptr, errp);
 }
 
@@ -380,15 +356,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
 static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int32(v, name, ptr, errp);
 }
 
@@ -420,15 +390,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
 static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint64(v, name, ptr, errp);
 }
 
@@ -444,15 +408,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
 static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int64(v, name, ptr, errp);
 }
 
@@ -495,16 +453,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
 static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -545,16 +497,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
 static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -621,10 +567,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
     const char *arrayname;
     int i;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
     if (*alenptr) {
         error_setg(errp, "array size property %s may not be set more than once",
                    name);
@@ -864,15 +806,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_size(v, name, ptr, errp);
 }
 
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 7a44320d12..496e2c5801 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
 static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index ab27b6e848..54fac3851d 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1331,16 +1331,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
 static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
     }
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 53569925a2..802979635c 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
     }
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index f5cff4103b..3375fffb38 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
 static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     const int64_t min = MIN_NWINDOWS;
     const int64_t max = MAX_NWINDOWS;
     SPARCCPU *cpu = SPARC_CPU(obj);
     int64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_int(v, name, &value, errp)) {
         return;
     }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 21:46:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 21:46:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26083.54227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKQC-0000OW-PP; Thu, 12 Nov 2020 21:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26083.54227; Thu, 12 Nov 2020 21:46:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdKQC-0000OO-ML; Thu, 12 Nov 2020 21:46:24 +0000
Received: by outflank-mailman (input) for mailman id 26083;
 Thu, 12 Nov 2020 21:46:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bIZ8=ES=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kdKQB-0000OF-UC
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 21:46:23 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 12b0889b-7f1e-40a1-8f53-d7d983f2cbb3;
 Thu, 12 Nov 2020 21:46:21 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-29-2nHHWv-2PvinfqL9fdW78A-1; Thu, 12 Nov 2020 16:46:19 -0500
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 29C821074643;
 Thu, 12 Nov 2020 21:46:17 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0BC1E1002388;
 Thu, 12 Nov 2020 21:46:06 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=bIZ8=ES=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
	id 1kdKQB-0000OF-UC
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 21:46:23 +0000
X-Inumbo-ID: 12b0889b-7f1e-40a1-8f53-d7d983f2cbb3
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 12b0889b-7f1e-40a1-8f53-d7d983f2cbb3;
	Thu, 12 Nov 2020 21:46:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1605217581;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GV8b5aWpq0pxpt4OGLqF3eBIk8WCOaMqmTCMXLaIIeo=;
	b=K8rt+JM5wGETur3HZuVtSb4euEqmO+cwaKRWG+tXQ7udimv/lYm7t3Gp3WeLjFlLYiIMnu
	M+ZPhpwMDCI5qYFLkiQtfqstWNWEgmccM31ScoC9WJE2yV1HmmqYed9+Pp9SMS3PTbo1ho
	SKqyZyxv8REt3n55w9hb5vq6JzELlnY=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-29-2nHHWv-2PvinfqL9fdW78A-1; Thu, 12 Nov 2020 16:46:19 -0500
X-MC-Unique: 2nHHWv-2PvinfqL9fdW78A-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 29C821074643;
	Thu, 12 Nov 2020 21:46:17 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
	by smtp.corp.redhat.com (Postfix) with ESMTP id 0BC1E1002388;
	Thu, 12 Nov 2020 21:46:06 +0000 (UTC)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Igor Mammedov <imammedo@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Stefan Berger <stefanb@linux.ibm.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH v3 30/53] qdev: Rename qdev_get_prop_ptr() to object_field_prop_ptr()
Date: Thu, 12 Nov 2020 16:43:27 -0500
Message-Id: <20201112214350.872250-31-ehabkost@redhat.com>
In-Reply-To: <20201112214350.872250-1-ehabkost@redhat.com>
References: <20201112214350.872250-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function will be moved to common QOM code, as it is not
specific to TYPE_DEVICE anymore.

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Changes v1 -> v2:
* Rename to object_field_prop_ptr() instead of object_static_prop_ptr()
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  6 ++--
 hw/block/xen-block.c             |  4 +--
 hw/core/qdev-properties-system.c | 50 +++++++++++++-------------
 hw/core/qdev-properties.c        | 60 ++++++++++++++++----------------
 hw/s390x/css.c                   |  4 +--
 hw/s390x/s390-pci-bus.c          |  4 +--
 hw/vfio/pci-quirks.c             |  4 +--
 8 files changed, 67 insertions(+), 67 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 90222822f1..97bb9494ae 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -193,7 +193,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop);
+void *object_field_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(Object *obj,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index 0b07cf55ea..bb1ab34a75 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,7 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
+    TPMBackend **be = object_field_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend *s, **be = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend **be = object_field_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index bd1aef63a7..718d886e5c 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 96a0bc5109..8781b856d3 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -62,7 +62,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_field_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -88,7 +88,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_field_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -181,7 +181,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
+    BlockBackend **ptr = object_field_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -214,7 +214,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
+    CharBackend *be = object_field_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -226,7 +226,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -262,7 +262,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_field_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -286,7 +286,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_field_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -301,7 +301,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_field_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -363,7 +363,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -374,7 +374,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_field_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -436,7 +436,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -447,7 +447,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_field_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -549,7 +549,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -637,7 +637,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -653,7 +653,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_field_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -715,7 +715,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t value, *ptr = object_field_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -753,7 +753,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -777,7 +777,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -803,7 +803,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_field_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -892,7 +892,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -920,7 +920,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_field_prop_ptr(obj, prop);
     int speed;
 
     if (!visit_type_enum(v, name, &speed, prop->info->enum_table,
@@ -962,7 +962,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -999,7 +999,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_field_prop_ptr(obj, prop);
     int width;
 
     if (!visit_type_enum(v, name, &width, prop->info->enum_table,
@@ -1050,7 +1050,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -1065,7 +1065,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index c1dd4ae71b..3d648b088d 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -50,7 +50,7 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop)
+void *object_field_prop_ptr(Object *obj, Property *prop)
 {
     void *ptr = obj;
     ptr += prop->offset;
@@ -100,7 +100,7 @@ void field_prop_get_enum(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -109,7 +109,7 @@ void field_prop_set_enum(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_enum(v, name, ptr, prop->info->enum_table, errp);
 }
@@ -138,7 +138,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    uint32_t *p = qdev_get_prop_ptr(obj, props);
+    uint32_t *p = object_field_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -151,7 +151,7 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(obj, prop);
+    uint32_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -192,7 +192,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    uint64_t *p = qdev_get_prop_ptr(obj, props);
+    uint64_t *p = object_field_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -205,7 +205,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(obj, prop);
+    uint64_t *p = object_field_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -237,7 +237,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -246,7 +246,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -264,7 +264,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -273,7 +273,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -303,7 +303,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -312,7 +312,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -330,7 +330,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -339,7 +339,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -348,7 +348,7 @@ void field_prop_get_int32(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -357,7 +357,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -382,7 +382,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -391,7 +391,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -400,7 +400,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -409,7 +409,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -433,14 +433,14 @@ const PropertyInfo prop_info_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
+    g_free(*(char **)object_field_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_field_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -454,7 +454,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_field_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -488,7 +488,7 @@ void field_prop_get_size32(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -498,7 +498,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
     uint64_t value;
 
     if (!visit_type_size(v, name, &value, errp)) {
@@ -561,7 +561,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *alenptr = object_field_prop_ptr(obj, prop);
     void **arrayptr = (void *)dev + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -603,7 +603,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->prop.offset = eltptr - (void *)dev;
-        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
+        assert(object_field_prop_ptr(obj, &arrayprop->prop) == eltptr);
         object_property_add(obj, propname,
                             arrayprop->prop.info->name,
                             field_prop_getter(arrayprop->prop.info),
@@ -798,7 +798,7 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -807,7 +807,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 496e2c5801..fe47751df4 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_field_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 54fac3851d..99b18d56ba 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1323,7 +1323,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1333,7 +1333,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
 {
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 802979635c..fc8d63c850 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_field_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t value, *ptr = object_field_prop_ptr(obj, prop);
 
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Nov 12 22:47:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 22:47:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26103.54243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdLMv-0006QY-KH; Thu, 12 Nov 2020 22:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26103.54243; Thu, 12 Nov 2020 22:47:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdLMv-0006QR-HO; Thu, 12 Nov 2020 22:47:05 +0000
Received: by outflank-mailman (input) for mailman id 26103;
 Thu, 12 Nov 2020 22:47:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdLMt-0006QM-Op
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 22:47:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2696c955-e496-4b70-8c13-64bf762ed65e;
 Thu, 12 Nov 2020 22:47:02 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdLMs-0002En-6W; Thu, 12 Nov 2020 22:47:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdLMr-0001gQ-TU; Thu, 12 Nov 2020 22:47:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdLMr-00038y-T0; Thu, 12 Nov 2020 22:47:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdLMt-0006QM-Op
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 22:47:03 +0000
X-Inumbo-ID: 2696c955-e496-4b70-8c13-64bf762ed65e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2696c955-e496-4b70-8c13-64bf762ed65e;
	Thu, 12 Nov 2020 22:47:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5niOOFv/HQ62yn1rMDgyzGRD2VIbca+vGBm11Fm7YEo=; b=rYB5YOGft9xgQxyzg3D1n/6SyB
	5WiT/5WGjEYOzbvedVJQpfiRrRhZFzNamSpXZH1gHhmix0WxfF9FRqM9VgFqa6ePfAa4ExEuvrMwu
	HD1TCyGi1JaNLMPhPqM8ogwKqJmaTx5WTCMCFqn058mZ3GEleY+4k2FmqaM9zLE0QdJA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdLMs-0002En-6W; Thu, 12 Nov 2020 22:47:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdLMr-0001gQ-TU; Thu, 12 Nov 2020 22:47:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdLMr-00038y-T0; Thu, 12 Nov 2020 22:47:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156702-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156702: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3c7c7cd4d82c2f9a5a59bbd06673b8cd1eb23ce3
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 22:47:01 +0000

flight 156702 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156702/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3c7c7cd4d82c2f9a5a59bbd06673b8cd1eb23ce3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  125 days
Failing since        151818  2020-07-11 04:18:52 Z  124 days  119 attempts
Testing same since   156702  2020-11-12 04:49:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26219 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 12 23:00:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 12 Nov 2020 23:00:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26115.54273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdLZw-0008Sn-2C; Thu, 12 Nov 2020 23:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26115.54273; Thu, 12 Nov 2020 23:00:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdLZv-0008Sf-Ua; Thu, 12 Nov 2020 23:00:31 +0000
Received: by outflank-mailman (input) for mailman id 26115;
 Thu, 12 Nov 2020 23:00:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44Nj=ES=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdLZu-0008Re-LD
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 23:00:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66465f8b-9ec5-46b2-93cf-87c7a014912b;
 Thu, 12 Nov 2020 23:00:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdLZo-0002WP-3U; Thu, 12 Nov 2020 23:00:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdLZn-00026S-PX; Thu, 12 Nov 2020 23:00:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdLZn-0005G8-P4; Thu, 12 Nov 2020 23:00:23 +0000
X-Inumbo-ID: 66465f8b-9ec5-46b2-93cf-87c7a014912b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1CcbXxC1jwzBhJR3gnr/1ZUqD3u8LYUd+knvc6ELrd4=; b=C8hUpSdb0zOcOWLsm/GLbbtPSN
	Acwv1C2UkT1kmJL7kmaU4GAuTD6g7NKyDdMBgLlDfgDKLP1eXnH3Hc4FNbjPa2qsRQ6cttYpLYW2F
	IXuB/5aSN6937W0lsS+1+NCcDij8WZPPYFfsEqkFFAkk0UTpDY6OtoSLVblxe0BwdxSg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156687-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156687: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1447d449fab7e48c85faf83951842bb60d7dabe5
X-Osstest-Versions-That:
    xen=b5eb4956e1d2d73546f8cfdef635b6819ed7b527
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 12 Nov 2020 23:00:23 +0000

flight 156687 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156687/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156397
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156397
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156397
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156397
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156397
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156397
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156397
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156397
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156397
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156397
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156397
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156397
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156397
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1447d449fab7e48c85faf83951842bb60d7dabe5
baseline version:
 xen                  b5eb4956e1d2d73546f8cfdef635b6819ed7b527

Last test of basis   156397  2020-11-04 09:05:50 Z    8 days
Testing same since   156634  2020-11-10 18:06:15 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b5eb4956e1..1447d449fa  1447d449fab7e48c85faf83951842bb60d7dabe5 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 03:37:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 03:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26143.54316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdPtz-0006kn-3Z; Fri, 13 Nov 2020 03:37:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26143.54316; Fri, 13 Nov 2020 03:37:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdPty-0006ke-TZ; Fri, 13 Nov 2020 03:37:30 +0000
Received: by outflank-mailman (input) for mailman id 26143;
 Fri, 13 Nov 2020 03:37:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=A7JU=ET=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kdPtx-0006kZ-KE
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 03:37:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.61]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9af8360e-87e7-4442-b76b-7cdd86fab588;
 Fri, 13 Nov 2020 03:37:25 +0000 (UTC)
Received: from DB8PR09CA0015.eurprd09.prod.outlook.com (2603:10a6:10:a0::28)
 by VI1PR08MB3438.eurprd08.prod.outlook.com (2603:10a6:803:82::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Fri, 13 Nov
 2020 03:37:22 +0000
Received: from DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:a0:cafe::3d) by DB8PR09CA0015.outlook.office365.com
 (2603:10a6:10:a0::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21 via Frontend
 Transport; Fri, 13 Nov 2020 03:37:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT049.mail.protection.outlook.com (10.152.20.191) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Fri, 13 Nov 2020 03:37:21 +0000
Received: ("Tessian outbound d6c201accd3c:v71");
 Fri, 13 Nov 2020 03:37:21 +0000
Received: from a18beaaadc77.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5F0C8A1A-858E-49D6-B966-86CA4C778116.1; 
 Fri, 13 Nov 2020 03:37:16 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a18beaaadc77.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 13 Nov 2020 03:37:16 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM4PR0802MB2194.eurprd08.prod.outlook.com (2603:10a6:200:5c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 13 Nov
 2020 03:37:14 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 03:37:14 +0000
X-Inumbo-ID: 9af8360e-87e7-4442-b76b-7cdd86fab588
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zR0mUZEINPlIFc+O7RnwFQBxekV/jcU5KdwgczRi60M=;
 b=v9JxjQLSHyP4nIODSBzlRT8vCrO6QHuh4E/aOZ3prVZSIqE7DWZiTytniRnHwMlTG3cA9kL6wW1eXSQCctHcdKvHGhsq5Zq+sBKBawXFMfKMfFVCuqTPskVKcdgZtx+sNroBMIS+jlmKTuuOlu3Y7CM+NpDvSgO12vrWB6qF3YY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b1Ru7VTtiEUSGEeOqAi0ZkotkkGXew2wlvI4liyRbuMewFTWlmo8/0cfuNC8M/e4zlZmSJMdCmru35h7xowWh5m53DqMjZ28g1BxfxAg6Fz9HcSpcnrNw/G5k8ovoxiEvFrzlN8qv3iWSsT4sDqF+xNxQvuKd2EDWvIvNMpTlAy59PrnQMrdgVqM64CUC3rYIDLPnSKOB4G53AiOnr7lRXTpuoRSUXHTlQmc9NTdxYmkWgHG2weFumDVrOgTlVdXOgSwiOfV59i44/wKjWaocC4owIGVlFEPIYrud8NhddK1NvUrPxtSzaSlqVzAB6v4QGdjs2kn/0Q0oWECNy/gcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zR0mUZEINPlIFc+O7RnwFQBxekV/jcU5KdwgczRi60M=;
 b=jz/11U3/membc8fytqg7Q0FrwwxzZZseC/RsEtWlyuNTO6LG+EjRlwFqqGOTB6oZnP/IWnaH6wUQdYz0dul8WAPx813Ru5j7ztF0IBf+AIGURg9IMcZ6PJ7KN4RAR4WgKJYEAGUHkPC95rlF/y0aJF+9J5uTU+4Au/gbebKw1NjNITRblPju7gSafKyhJrCQisJMhulJu4aVSm9q5KgrZc1G5IhVRlEuHGJpiQvyVltVSQNPxf26xIzsfgvt027SooEUvk4wEL2ekI60KyYtAc6peqPXX36tqrAnB0MPJ9NHaSzHp3cxGSMCECsvsQhRy1FUKSdnGpRU6AGhSkIzPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zR0mUZEINPlIFc+O7RnwFQBxekV/jcU5KdwgczRi60M=;
 b=v9JxjQLSHyP4nIODSBzlRT8vCrO6QHuh4E/aOZ3prVZSIqE7DWZiTytniRnHwMlTG3cA9kL6wW1eXSQCctHcdKvHGhsq5Zq+sBKBawXFMfKMfFVCuqTPskVKcdgZtx+sNroBMIS+jlmKTuuOlu3Y7CM+NpDvSgO12vrWB6qF3YY=
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM4PR0802MB2194.eurprd08.prod.outlook.com (2603:10a6:200:5c::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 13 Nov
 2020 03:37:14 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 03:37:14 +0000
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>
CC: Andre Przywara <Andre.Przywara@arm.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, nd <nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Topic: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Index: AQHWtnF2jWNIb4RgFU+PRE0mwpdPDam/tA6AgAW2ODA=
Date: Fri, 13 Nov 2020 03:37:14 +0000
Message-ID:
 <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
In-Reply-To: <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 7D0166FA7B196B4598FBD5E8DBA35998.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [218.82.32.45]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 026a6021-6e2a-46ab-db0d-08d887856dd3
x-ms-traffictypediagnostic: AM4PR0802MB2194:|VI1PR08MB3438:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB34387C65945CD0C2B0590EA29EE60@VI1PR08MB3438.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2150;OLM:2150;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 I+4mKJdh5SFu4PsgkHxWw4htiFROvPjE+sqTZjITt6GEbW6eD89/HJtawl1XjvUxe8sMRMBJJkgn4fEO8BPEXGp38oCMwlmQcJTNktl6QwyDJACYM2tKvh/RQf4xq1gaEXTRIP+3Tj4rmPzgRPcCgM/Y4fSly9uX7Z6eNCke32cXfoCppwAPqmyjY4MVWzM1atqubzUXXiVCLq8x5wX3juCqFzP/OGQ3CogpLoa0HbkU2iUlD+o0mNmMiJNY3FIrmMQLD6AFo1D4HUuXQMhIuCs6URuS4SZwxH66qIsZahlbC7ZioNHCFpLVHiqMXqs/YuyTbKarEnaxzzGGenOE+A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3747.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(366004)(346002)(376002)(478600001)(2906002)(86362001)(33656002)(71200400001)(8936002)(110136005)(8676002)(66446008)(186003)(66476007)(66946007)(66556008)(52536014)(7696005)(4326008)(26005)(76116006)(9686003)(64756008)(5660300002)(316002)(54906003)(55016002)(83380400001)(6506007)(53546011);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 bORqsTBdORe6sXnhUNaSB+JYs/jqZsQbJX0iWEqJQGMCd9NfCXXA4HrJ9vXCX98SYQ4jImDFDhRjLdM0eafC890Zym4lHGC9EIL3DhNVtAxiHtkY8jwguE5SYXRlJ8HLLw5cxHXcZRsNBagTU7J1YLLzD2QVgQ1C40kRJu86Kx9wnnSfmZnss5gNgtQr5U64Plz+qGcYpc7+i2CE0tb9MdwQFJQ/p3hHqfQq137k20OdFLV73Ke24j2JDfFs6MxGKoFVvw37tw+3os6Tz3K2z6bfX3o3jq84nuhSLt3lmWe8SkArHI91QojssX6kT/qrhKDST7wqz87tTAjwUXjDADONjsOgbEwNwCLSlGTSWChn9tO11MlXcGomWoNSAU9TW0hUmQK368+j1M5jPpXGCkk+uD2uFRXgMTAC+Ulb/hJIHc7ihqn9awLZfjMtOlMI5tigISqC8d2Edd3Sg65tCyNEjXAmtE1XTjDBSWfRNRcstAtON/mLPtPtcEOVJEaaRa8TVp2lrkIzDuq4vZJeqNMAhW0rdYjJzjVQNmaJMTTeT+zyMht5ABQKDPpKmaN+GRxs5huNXMlN9zDejkb4Np5ttT2ituD4B8qgGuUbNNnSXWkJB6g24EetAORgvSVrzssjOEdwvTA5HQSV/1t470IaBDGyhfFzxB8u2HYRqE0pN+TCXmRhKqH3oeAj+ea3jKAorPRsCnde426yRgGSKWCQTeuoOpooJz0T2XTYF4J18pznI6LK25+16xIidfT2grVdfGthedcGSXhyTjk6655IubGv8Pmshncmw4wiI7HfbgwZSeBbzHNUZiQoP/ve/ps6+ApqCFCGzy/3jxTEejrdBA3n9UTfA9CCz/j9OaBf6viRLnmTCVNY/Zbs/DQ56Pr4c9s+xH1rLWw1WDaqtw==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2194
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	990f2e78-8a7f-46ff-8a28-08d887856983
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bIxonP7Ae7aEm04vQcamyhG0iHmF8ugh/PLgPIgJ89hK7/kndg4BhTTC9J5r9Oh20e4pdYi2RWxFtRpYbynv+BgS//M9dyQv8yMGfEsAThvN+YOit+cLK7jo/ZCPcizx0uFoiufIu+LgbHZ2WSdHfIMPfkcbAWXfR/u50aMQB8yNPzNGkncUe2ctjNQYujf/w6P63LUxfUxoRmSC33K8dZVeUhTp+Hnym+j+8tWc5BE62ecDt1DsgcsjptMuhOyx3Djw0TttwamiTNRhg40eVo4ZWiYloWCapfELck+CAW+XnSyleOjDNnXClFsnw1tOfOWa4yJJTc12tu7yBHPqoDsWk3Riqo3thAv9UM0NyQanm8k2dcS5cDoVSUyYiWrdLoYaqznkLfD/9wyzsArO3w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(376002)(346002)(396003)(39860400002)(46966005)(26005)(5660300002)(7696005)(110136005)(54906003)(47076004)(316002)(83380400001)(4326008)(186003)(82310400003)(52536014)(336012)(82740400003)(33656002)(356005)(2906002)(8936002)(53546011)(81166007)(70206006)(478600001)(55016002)(70586007)(6506007)(9686003)(8676002)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Nov 2020 03:37:21.5796
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 026a6021-6e2a-46ab-db0d-08d887856dd3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3438

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 9 November 2020 20:04
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Andre Przywara <Andre.Przywara@arm.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Wei Chen <Wei.Chen@arm.com>; Kaly Xin
> <Kaly.Xin@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
> 
> Hi,
> 
> On 09/11/2020 08:21, Penny Zheng wrote:
> > CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
> > might return a wrong value when the counter crosses a 32bit boundary.
> >
> > Until now, there is no case for Xen itself to access CNTVCT_EL0,
> > and it also should be the Guest OS's responsibility to deal with
> > this part.
> >
> > But for CNTPCT, there exists several cases in Xen involving reading
> > CNTPCT, so a possible workaround is that performing the read twice,
> > and to return one or the other depending on whether a transition has
> > taken place.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> On a related topic, do we need a fix similar to Linux commit
> 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
> with seqlock held"?
> 

I took a look at this Linux commit; it seems to prevent the seqlock from
being speculated. Using an enforced ordering instead of an ISB after the
counter read appears to be done for performance reasons.

I found that you had placed an ISB before the counter read in get_cycles
to prevent the counter value from being speculated, but you haven't placed
a second ISB after the read. Is that because we haven't used get_cycles in
a seqlock critical section (or maybe I didn't find the right place)? So do
we need to fix it now, or would you prefer to fix it now for future usage?

Regards,
Wei Chen

> Cheers,
> 
> --
> Julien Grall
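[Editorial note: the double-read recipe referenced in the patch under discussion can be sketched in plain C. This is a minimal mock, not Xen's actual accessor: read_cntpct() and fake_counter are hypothetical stand-ins for an MRS read of CNTPCT_EL0, and the "keep the old sample when bit 32 changed" choice follows the published workaround for Cortex-A73 erratum 858921 as used in Linux and Xen.]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the hardware counter; on real silicon each read would be
 * an MRS of CNTPCT_EL0.  Incrementing on every read lets us exercise the
 * selection logic deterministically. */
static uint64_t fake_counter;

static uint64_t read_cntpct(void)
{
    return fake_counter++;
}

/*
 * Erratum 858921 workaround sketch: read the counter twice and compare
 * bit 32 of the two samples.
 *  - If bit 32 differs, the counter crossed a 32-bit boundary between
 *    the reads and one sample may be corrupted: keep the old value.
 *  - If bit 32 is the same, keep the new value.
 */
static uint64_t read_cntpct_stable(void)
{
    uint64_t old = read_cntpct();
    uint64_t new = read_cntpct();

    return (((old ^ new) >> 32) & 1) ? old : new;
}
```

With the mock above, a read far from a boundary returns the second sample, while a read straddling the 32-bit boundary falls back to the first.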


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 04:59:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 04:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26175.54333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdRAj-0005Py-3d; Fri, 13 Nov 2020 04:58:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26175.54333; Fri, 13 Nov 2020 04:58:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdRAj-0005Pr-0e; Fri, 13 Nov 2020 04:58:53 +0000
Received: by outflank-mailman (input) for mailman id 26175;
 Fri, 13 Nov 2020 04:58:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdRAg-0005Pm-RB
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 04:58:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6621987-b234-4550-8509-6685036bf2a7;
 Fri, 13 Nov 2020 04:58:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdRAc-0006nx-R2; Fri, 13 Nov 2020 04:58:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdRAc-0004Oq-DI; Fri, 13 Nov 2020 04:58:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdRAc-0000y0-Cd; Fri, 13 Nov 2020 04:58:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdRAg-0005Pm-RB
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 04:58:50 +0000
X-Inumbo-ID: a6621987-b234-4550-8509-6685036bf2a7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a6621987-b234-4550-8509-6685036bf2a7;
	Fri, 13 Nov 2020 04:58:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WN9fOOY1PB3xmWfBy1Wi0S2I0iQx63chO7rbQQh4JmQ=; b=p9dsseK3fC81kNV3NZKxVSITAQ
	4v+z9f/b+t2VIxZyosbgH0+MKiB6QsqXexrTAs0iBwMYYpvO4e8quWNhoWKFfOVJmCn5zjgDUN8z0
	E/jX5hNiX1CIrOzf5rDLjmxRR3+BAlDboD+sUkI/mTlGy02B8D2LVd68zQ9E02e1kOac=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdRAc-0006nx-R2; Fri, 13 Nov 2020 04:58:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdRAc-0004Oq-DI; Fri, 13 Nov 2020 04:58:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdRAc-0000y0-Cd; Fri, 13 Nov 2020 04:58:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156705-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156705: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a4c141dca466ed3e9451f147efe6304b1b659ff5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 04:58:46 +0000

flight 156705 qemu-mainline real [real]
flight 156732 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156705/
http://logs.test-lab.xenproject.org/osstest/logs/156732/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a4c141dca466ed3e9451f147efe6304b1b659ff5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   84 days
Failing since        152659  2020-08-21 14:07:39 Z   83 days  179 attempts
Testing same since   156705  2020-11-12 05:35:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 64034 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 05:15:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 05:15:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26027.54349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdRQD-0007Tk-NH; Fri, 13 Nov 2020 05:14:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26027.54349; Fri, 13 Nov 2020 05:14:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdRQD-0007Td-Jz; Fri, 13 Nov 2020 05:14:53 +0000
Received: by outflank-mailman (input) for mailman id 26027;
 Thu, 12 Nov 2020 19:22:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3lRS=ES=suse.cz=pvorel@srs-us1.protection.inumbo.net>)
 id 1kdIAt-0002rd-RB
 for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 19:22:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab37eda9-1c41-4a50-bf41-c9b76256a01c;
 Thu, 12 Nov 2020 19:22:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C79EAFF8;
 Thu, 12 Nov 2020 19:22:25 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3lRS=ES=suse.cz=pvorel@srs-us1.protection.inumbo.net>)
	id 1kdIAt-0002rd-RB
	for xen-devel@lists.xenproject.org; Thu, 12 Nov 2020 19:22:27 +0000
X-Inumbo-ID: ab37eda9-1c41-4a50-bf41-c9b76256a01c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ab37eda9-1c41-4a50-bf41-c9b76256a01c;
	Thu, 12 Nov 2020 19:22:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 9C79EAFF8;
	Thu, 12 Nov 2020 19:22:25 +0000 (UTC)
Date: Thu, 12 Nov 2020 20:22:23 +0100
From: Petr Vorel <pvorel@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 05/24] block: remove the update_bdev parameter from
 set_capacity_revalidate_and_notify
Message-ID: <20201112192223.GA17194@pevik>
Reply-To: Petr Vorel <pvorel@suse.cz>
References: <20201111082658.3401686-1-hch@lst.de>
 <20201111082658.3401686-6-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201111082658.3401686-6-hch@lst.de>

Hi Christoph,

> The update_bdev argument is always set to true, so remove it.  Also
> rename the function to the slightly less verbose set_capacity_and_notify,
> as propagating the disk size to the block device isn't really
> revalidation.

> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Petr Vorel <pvorel@suse.cz>

Nice cleanup.

Kind regards,
Petr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 05:18:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 05:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26191.54361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdRTg-0007g2-Bq; Fri, 13 Nov 2020 05:18:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26191.54361; Fri, 13 Nov 2020 05:18:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdRTg-0007fv-8n; Fri, 13 Nov 2020 05:18:28 +0000
Received: by outflank-mailman (input) for mailman id 26191;
 Fri, 13 Nov 2020 05:18:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zLmm=ET=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdRTe-0007fl-NN
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 05:18:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c28b5fd8-8365-4256-8c02-77b216f0fce3;
 Fri, 13 Nov 2020 05:18:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9FC7BAB7A;
 Fri, 13 Nov 2020 05:18:24 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zLmm=ET=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kdRTe-0007fl-NN
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 05:18:26 +0000
X-Inumbo-ID: c28b5fd8-8365-4256-8c02-77b216f0fce3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c28b5fd8-8365-4256-8c02-77b216f0fce3;
	Fri, 13 Nov 2020 05:18:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605244704;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=PhtswhUD7tQ+X/ddXo6jvowUx4Ky45vbO9GuMIrnbfU=;
	b=CXglpq2Xa+vpbfzFC0Z0/hhMUrGPCtp+3IMuEj/DbbpwyVoiv8CNrL/g8j+P6rA0GJW1Hd
	VWe9ehQbBKxqdw36jysejgysKZhYQsyfgMzvHRW6A6+BaZ3gB7KEwKVvG5/sC8e9sOVUZl
	TkAlxM+2aROrMfLFY5xorEl07Q/1P1I=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 9FC7BAB7A;
	Fri, 13 Nov 2020 05:18:24 +0000 (UTC)
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
 <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
 <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
 <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
 <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <383f2f1f-96c1-1634-519f-3526019f4f48@suse.com>
Date: Fri, 13 Nov 2020 06:18:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ZnYMMjsECBIdr9GWTfbQROPdPddxFa8s2"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ZnYMMjsECBIdr9GWTfbQROPdPddxFa8s2
Content-Type: multipart/mixed; boundary="zpho5v4kGn1LCRw8bJvOPmsgtjILnRcmL";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
 Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <383f2f1f-96c1-1634-519f-3526019f4f48@suse.com>
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
 <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
 <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
 <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
 <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>

--zpho5v4kGn1LCRw8bJvOPmsgtjILnRcmL
Content-Type: multipart/mixed;
 boundary="------------491D70378802604DDF4BB714"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------491D70378802604DDF4BB714
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.11.20 22:38, Stefano Stabellini wrote:
> On Thu, 12 Nov 2020, Jan Beulich wrote:
>> On 12.11.2020 13:50, Jürgen Groß wrote:
>>> Any further comments, or even better, Acks?
>>
>> To be honest I'd prefer to have at least one of the people Cc-ed
>> minimally indicate they consider this a good idea. No need for a
>> close review or such, just a basic opinion. Anyone?
>
> I see Jan's point that it is not clear how much this is going to help in
> production. However, it is not going to hurt either, and I have been
> told a few times recently that debugging Xen is not easy. Anything that
> helps in that regard would be good. So I think this patch would be an
> improvement.
>

This patch is an effort, driven by our largest customer, to get better
diagnostic data in case of Xen crashes, so it is clearly intended for
production use.


Juergen

--------------491D70378802604DDF4BB714
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------491D70378802604DDF4BB714--

--zpho5v4kGn1LCRw8bJvOPmsgtjILnRcmL--

--ZnYMMjsECBIdr9GWTfbQROPdPddxFa8s2
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+uFx8FAwAAAAAACgkQsN6d1ii/Ey/x
vwf/bRZ6xSPXNeLYfhRCDOEUBbTUeuOwCajT+l3iyEPcs7+3sBwqDp8ApFgRrU4zyeSvlJvyigQ4
GTnXU0JNqYb5Zxx6nYLflUP2yEvT5uWyE2eJPp/dGq5dfD/2ga6BL4tABAsoymL/+yfkFOnrm3BY
Cav/PMqzYL9wYDVrneEw7mC0/Q+bZFiAg1CtPIK6NNoaXvkWd6BUx76EJyPABqfJrIgn5NQZIaOu
NtbLi9WNQRQqNi0osEu5vESqtyBZdKD+yxj+mmMT9JgqWUPrRu2HuUIA6Fhqw9mwj5Ftc1V0bpoL
oIGdw8xkyAudGXOskE/RZP+qkLP6qXcg/vO9tYJung==
=Ob+k
-----END PGP SIGNATURE-----

--ZnYMMjsECBIdr9GWTfbQROPdPddxFa8s2--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 06:33:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 06:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26201.54379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdSdX-0005xq-W6; Fri, 13 Nov 2020 06:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26201.54379; Fri, 13 Nov 2020 06:32:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdSdX-0005xj-RM; Fri, 13 Nov 2020 06:32:43 +0000
Received: by outflank-mailman (input) for mailman id 26201;
 Fri, 13 Nov 2020 06:32:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdSdX-0005xe-8t
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 06:32:43 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86e1cb69-25d7-4934-a6ad-40ba61bd5dbb;
 Fri, 13 Nov 2020 06:32:41 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0AD6Vs2n011732; Fri, 13 Nov 2020 06:32:35 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
 by mx0b-0039f301.pphosted.com with ESMTP id 34sd1prse3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 06:32:35 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6883.eurprd03.prod.outlook.com (2603:10a6:20b:282::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Fri, 13 Nov
 2020 06:32:31 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 06:32:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kdSdX-0005xe-8t
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 06:32:43 +0000
X-Inumbo-ID: 86e1cb69-25d7-4934-a6ad-40ba61bd5dbb
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 86e1cb69-25d7-4934-a6ad-40ba61bd5dbb;
	Fri, 13 Nov 2020 06:32:41 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
	by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0AD6Vs2n011732;
	Fri, 13 Nov 2020 06:32:35 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
	by mx0b-0039f301.pphosted.com with ESMTP id 34sd1prse3-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 13 Nov 2020 06:32:35 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QfxnJA7iPT9nHJpb+SjpsuzVU0hh9i2SbmLh+UfPI+UUYxkoPKrfX/Cwnp0m6HsRYtUWJK9u1iRMkGC5JOiDkbxL3ODzt/nyUGLLNCwSIRDBVTANC0el78R2+UMgaXPEXhQlYW0ufpMzgJCnqLjM8r+7YtnJfJ59hS+Y9T08MzgSg/wC7ekPJUfsB0HK0DR18nv7m2cK4mDXV5gL6Tpmt+EFd/TpSbB2lnl+HclhSFE0EY7ore7ASijAX6DhK5avwbOk3vXSfwL2tL642ossUYP0NEA7ZfMl5QwA205n5/VllqR8b91OkxIcmtg674wRGdX7EpnPIjHKTSWIRHW6Vg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DYZlW9li2CbYarknfrEqLhplrYGRY1b5k0m7RNhpF9Y=;
 b=OU6FMPNPyiQcleyFg8V/xiMrwIIdPUSpUYg4Lwdq90yGCjlzIEA3XoZxL1jFbbU96oWU8ZcaqBHZ69sEalSeAEQH8Gh6aCuZEvNrJllJ/Ncva9o2TNi4c1hYWz8vrKzqnSYeFzknvBPX20sYhXs1bbPWLn99LHYlo+CMGDvj+ohmA8Cl2UE+pTXbWAL/FkIsCGGK3Ux2VDAWyNCfXm2tF+/nFvObsijwpSUd6VW1aYBWUOnkS172iYmreu4jmtxfga5+ahdPgaq50Mes/vTx6LwdfwII2ZZ6oSlCYYcQ48QvqFmoOizuSm8+49LmedbMrsyktuaCHDL+3DbZl5EfPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DYZlW9li2CbYarknfrEqLhplrYGRY1b5k0m7RNhpF9Y=;
 b=zhsVgk4Ew99IoeIwIu420hhkyr9o5ln85ljeGnFFDpFNmoM0yFvEhacU/erqfqRJh0kB9ibKgKjPQyHK+Ggm/Xw7oJdbVvD+3jtBRN8U0Ht/5awHzwSFBfbCaQD4HQKuHpTCw8Paukkds42xqDFREKowwxTSjqubx3NxJEJuuKDB87c2bXTvNTPblkRYbnkhMY2a5KG+4xQ15mqIJbX1XvADnNd/hbtuagtlPyQDAr4Bg7KjGgIa6cbVS3yvt5LhmITwcEJ8gww5y2Adn63e/NV+BE3K0xh7wdhUWI5ZDxBa2cySknw3OGbTqbgtF+ncR1Obyx0jPOeRP0JeCdmHmw==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6883.eurprd03.prod.outlook.com (2603:10a6:20b:282::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Fri, 13 Nov
 2020 06:32:31 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 06:32:31 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8A
Date: Fri, 13 Nov 2020 06:32:31 +0000
Message-ID: <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
In-Reply-To: <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <29C62AFA08A2134CA11A1B0B8F4BA46C@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com

On 11/12/20 4:46 PM, Roger Pau Monné wrote:
> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>>>> index f74f728884c0..7dc7c70e24f2 100644
>>>> --- a/xen/drivers/vpci/header.c
>>>> +++ b/xen/drivers/vpci/header.c
>>>> @@ -31,14 +31,87 @@
>>>>    struct map_data {
>>>>        struct domain *d;
>>>>        bool map;
>>>> +    struct pci_dev *pdev;
>>> If the field is required please place it after the domain one.
>> I will, but may I ask why?
> So that if we add further boolean fields we can do at the end of the
> struct for layout reasons. If we do:
>
> struct map_data {
>      struct domain *d;
>      bool map;
>      struct pci_dev *pdev;
>      bool foo;
> }
>
> We will end up with a bunch of padding that could be avoided by doing:
>
> struct map_data {
>      struct domain *d;
>      struct pci_dev *pdev;
>      bool map;
>      bool foo;
> }
Ah, so this is about padding. Got it
>
>>>> +    s = PFN_DOWN(s);
>>>> +    e = PFN_DOWN(e);
>>> Changing the rangeset to store memory addresses instead of frames
>>> could for example be split into a separate patch.
>> Ok
>>> I think you are doing the calculation of the end pfn wrong here, you
>>> should use PFN_UP instead in case the address is not aligned.
>> PFN_DOWN for the start seems to be ok if the address is not aligned
>>
>> which is the case if I pass bar_index in the lower bits: PCI memory has
>>
>> PAGE_SIZE granularity, so besides the fact that I use bar_index the address
> No, BARs don't need to be aligned to page boundaries, you can even
> have different BARs inside the same physical page.
>
> The spec recommends that the minimum size of a BAR should be 4KB, but
> that's not a strict requirement in which case a BAR can be as small as
> 16bytes, and then you can have multiple ones inside the same page.
Ok, I will account for that
>
>> must be page aligned.
>>
>> The end address is expressed in (size - 1) form, again page aligned,
>>
>> so to get the last page to be mapped PFN_DOWN also seems to be appropriate.
>>
>> Do I miss something here?
> I'm not aware of any of those addresses or sizes being guaranteed to
> be page aligned, so I think you need to account for that.
>
> Some of the code here uses PFN_DOWN to calculate the end address
> because the rangesets are used in an inclusive fashion, so the end
> frame also gets mapped.
Ok
>
>>>> +    mfn = _mfn(PFN_DOWN(header->bars[bar_idx].addr));
>>>>        for ( ; ; )
>>>>        {
>>>>            unsigned long size = e - s + 1;
>>>> @@ -52,11 +125,15 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>>>>             * - {un}map_mmio_regions doesn't support preemption.
>>>>             */
>>>>
>>>> -        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
>>>> -                      : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
>>>> +        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, mfn)
>>>> +                      : unmap_mmio_regions(map->d, _gfn(s), size, mfn);
>>>>            if ( rc == 0 )
>>>>            {
>>>> -            *c += size;
>>>> +            /*
>>>> +             * Range set is not expressed in frame numbers and the size
>>>> +             * is the number of frames, so update accordingly.
>>>> +             */
>>>> +            *c += size << PAGE_SHIFT;
>>>>                break;
>>>>            }
>>>>            if ( rc < 0 )
>>>> @@ -67,8 +144,9 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>>>>                break;
>>>>            }
>>>>            ASSERT(rc < size);
>>>> -        *c += rc;
>>>> +        *c += rc << PAGE_SHIFT;
>>>>            s += rc;
>>>> +        mfn += rc;
>>>>            if ( general_preempt_check() )
>>>>                    return -ERESTART;
>>>>        }
>>>> @@ -84,7 +162,7 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>>>>    static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
>>>>                                bool rom_only)
>>>>    {
>>>> -    struct vpci_header *header = &pdev->vpci->header;
>>>> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
>>>>        bool map = cmd & PCI_COMMAND_MEMORY;
>>>>        unsigned int i;
>>>>
>>>> @@ -136,6 +214,7 @@ bool vpci_process_pending(struct vcpu *v)
>>>>            struct map_data data = {
>>>>                .d = v->domain,
>>>>                .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
>>>> +            .pdev = v->vpci.pdev,
>>>>            };
>>>>            int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
>>>>
>>>> @@ -168,7 +247,8 @@ bool vpci_process_pending(struct vcpu *v)
>>>>    static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
>>>>                                struct rangeset *mem, uint16_t cmd)
>>>>    {
>>>> -    struct map_data data = { .d = d, .map = true };
>>>> +    struct map_data data = { .d = d, .map = true,
>>>> +        .pdev = (struct pci_dev *)pdev };
>>> Dropping the const here is not fine. IT either needs to be dropped
>>> from apply_map and further up, or this needs to also be made const.
>> Ok, I'll try to keep it const
>>>>        int rc;
>>>>
>>>>        while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
>>>> @@ -205,7 +285,7 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
>>>>
>>>>    static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>    {
>>>> -    struct vpci_header *header = &pdev->vpci->header;
>>>> +    struct vpci_header *header;
>>>>        struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>>>        struct pci_dev *tmp, *dev = NULL;
>>>>    #ifdef CONFIG_X86
>>>> @@ -217,6 +297,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>        if ( !mem )
>>>>            return -ENOMEM;
>>>>
>>>> +    if ( is_hardware_domain(current->domain) )
>>>> +        header = get_hwdom_vpci_header(pdev);
>>>> +    else
>>>> +        header = get_vpci_header(current->domain, pdev);
>>>> +
>>>>        /*
>>>>         * Create a rangeset that represents the current device BARs memory region
>>>>         * and compare it against all the currently active BAR memory regions. If
>>>> @@ -225,12 +310,15 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>         * First fill the rangeset with all the BARs of this device or with the ROM
>>>>         * BAR only, depending on whether the guest is toggling the memory decode
>>>>         * bit of the command register, or the enable bit of the ROM BAR register.
>>>> +     *
>>>> +     * Use the PCI reserved bits of the BAR to pass BAR's index.
>>>>         */
>>>>        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>>>>        {
>>>>            const struct vpci_bar *bar = &header->bars[i];
>>>> -        unsigned long start = PFN_DOWN(bar->addr);
>>>> -        unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>>>> +        unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
>>>> +        unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) +
>>>> +            bar->size - 1;
>>> Will this work fine on Arm 32bits with LPAE? It's my understanding
>>> that in that case unsigned long is 32bits, but the physical address
>>> space is 44bits, in which case this won't work.
>> Hm, good question
>>> I think you need to keep the usage of frame numbers here.
>> If I re-work the gfn <-> mfn mapping then yes, I can use frame numbers here and elsewhere
>>>>
>>>>            if ( !MAPPABLE_BAR(bar) ||
>>>>                 (rom_only ? bar->type != VPCI_BAR_ROM
>>>> @@ -251,9 +339,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>        /* Remove any MSIX regions if present. */
>>>>        for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
>>>>        {
>>>> -        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
>>>> -        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
>>>> -                                     vmsix_table_size(pdev->vpci, i) - 1);
>>>> +        unsigned long start = (vmsix_table_addr(pdev->vpci, i) &
>>>> +                               PCI_BASE_ADDRESS_MEM_MASK) | i;
>>>> +        unsigned long end = (vmsix_table_addr(pdev->vpci, i) &
>>>> +                             PCI_BASE_ADDRESS_MEM_MASK ) +
>>>> +                             vmsix_table_size(pdev->vpci, i) - 1;
>>>>
>>>>            rc = rangeset_remove_range(mem, start, end);
>>>>            if ( rc )
>>>> @@ -273,6 +363,8 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>         */
>>>>        for_each_pdev ( pdev->domain, tmp )
>>>>        {
>>>> +        struct vpci_header *header;
>>>> +
>>>>            if ( tmp == pdev )
>>>>            {
>>>>                /*
>>>> @@ -289,11 +381,14 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>                    continue;
>>>>            }
>>>>
>>>> -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>>>> +        header = get_vpci_header(tmp->domain, pdev);
>>>> +
>>>> +        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>>>>            {
>>>> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
>>>> -            unsigned long start = PFN_DOWN(bar->addr);
>>>> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>>>> +            const struct vpci_bar *bar = &header->bars[i];
>>>> +            unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
>>>> +            unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK)
>>>> +                + bar->size - 1;
>>>>
>>>>                if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
>>>>                     /*
>>>> @@ -357,7 +452,7 @@ static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
>>>>            pci_conf_write16(pdev->sbdf, reg, cmd);
>>>>    }
>>>>
>>>> -static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>>>> +static void bar_write_hwdom(const struct pci_dev *pdev, unsigned int reg,
>>>>                          uint32_t val, void *data)
>>>>    {
>>>>        struct vpci_bar *bar = data;
>>>> @@ -377,14 +472,17 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>>>>        {
>>>>            /* If the value written is the current one avoid printing a warning. */
>>>>            if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
>>>> +        {
>>>> +            struct vpci_header *header = get_hwdom_vpci_header(pdev);
>>>> +
>>>>                gprintk(XENLOG_WARNING,
>>>>                        "%04x:%02x:%02x.%u: ignored BAR %lu write with memory decoding enabled\n",
>>>>                        pdev->seg, pdev->bus, slot, func,
>>>> -                    bar - pdev->vpci->header.bars + hi);
>>>> +                    bar - header->bars + hi);
>>>> +        }
>>>>            return;
>>>>        }
>>>>
>>>> -
>>>>        /*
>>>>         * Update the cached address, so that when memory decoding is enabled
>>>>         * Xen can map the BAR into the guest p2m.
>>>> @@ -403,10 +501,89 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>>>>        pci_conf_write32(pdev->sbdf, reg, val);
>>>>    }
>>>>
>>>> +static uint32_t bar_read_hwdom(const struct pci_dev *pdev, unsigned int reg,
>>>> +                               void *data)
>>>> +{
>>>> +    return vpci_hw_read32(pdev, reg, data);
>>>> +}
>>>> +
>>>> +static void bar_write_guest(const struct pci_dev *pdev, unsigned int reg,
>>>> +                            uint32_t val, void *data)
>>>> +{
>>>> +    struct vpci_bar *vbar = data;
>>>> +    bool hi = false;
>>>> +
>>>> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
>>>> +    {
>>>> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
>>>> +        vbar--;
>>>> +        hi = true;
>>>> +    }
>>>> +    vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
>>>> +    vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
>>>> +}
>>>> +
>>>> +static uint32_t bar_read_guest(const struct pci_dev *pdev, unsigned int reg,
>>>> +                               void *data)
>>>> +{
>>>> +    struct vpci_bar *vbar = data;
>>>> +    uint32_t val;
>>>> +    bool hi = false;
>>>> +
>>>> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
>>>> +    {
>>>> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
>>>> +        vbar--;
>>>> +        hi = true;
>>>> +    }
>>>> +
>>>> +    if ( vbar->type == VPCI_BAR_MEM64_LO || vbar->type == VPCI_BAR_MEM64_HI )
>>> I think this would be clearer using a switch statement.
>> I'll think about it
>>>> +    {
>>>> +        if ( hi )
>>>> +            val = vbar->addr >> 32;
>>>> +        else
>>>> +            val = vbar->addr & 0xffffffff;
>>>> +        if ( val == ~0 )
>>> Strictly speaking I think you are not forced to write 1s to the
>>> reserved 4 bits in the low register (and in the 32bit case).
>> Ah, so the Linux kernel, for instance, could have written 0xffffff0 while
>> I expect 0xffffffff?
> I think real hardware would return the size when written 1s to all
> bits except the reserved ones.
>
>>>> +        {
>>>> +            /* Guests detects BAR's properties and sizes. */
>>>> +            if ( !hi )
>>>> +            {
>>>> +                val = 0xffffffff & ~(vbar->size - 1);
>>>> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
>>>> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
>>>> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
>>>> +            }
>>>> +            else
>>>> +                val = vbar->size >> 32;
>>>> +            vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
>>>> +            vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
>>>> +        }
>>>> +    }
>>>> +    else if ( vbar->type == VPCI_BAR_MEM32 )
>>>> +    {
>>>> +        val = vbar->addr;
>>>> +        if ( val == ~0 )
>>>> +        {
>>>> +            if ( !hi )
>>> There's no way hi can be true at this point AFAICT.
>> Sure, thank you
>>>> +            {
>>>> +                val = 0xffffffff & ~(vbar->size - 1);
>>>> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
>>>> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
>>>> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
>>>> +            }
>>>> +        }
>>>> +    }
>>>> +    else
>>>> +    {
>>>> +        val = vbar->addr;
>>>> +    }
>>>> +    return val;
>>>> +}
>>>> +
>>>>    static void rom_write(const struct pci_dev *pdev, unsigned int reg,
>>>>                          uint32_t val, void *data)
>>>>    {
>>>> -    struct vpci_header *header = &pdev->vpci->header;
>>>> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
>>>>        struct vpci_bar *rom = data;
>>>>        uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
>>>>        uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
>>>> @@ -452,15 +629,56 @@ static void rom_write(const struct pci_dev *pdev, unsigned int reg,
>>>>            rom->addr = val & PCI_ROM_ADDRESS_MASK;
>>>>    }
>>> Don't you need to also protect a domU from writing to the ROM BAR
>>> register?
>> ROM was not a target of this RFC as I have no HW to test that, but the final
>> code must also handle ROM as well, you are right
>>
>>>>
>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>> +                                  void *data)
>>>> +{
>>>> +    struct vpci_bar *vbar, *bar = data;
>>>> +
>>>> +    if ( is_hardware_domain(current->domain) )
>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>> +
>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>> +    if ( !vbar )
>>>> +        return ~0;
>>>> +
>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>> +}
>>>> +
>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>> +                               uint32_t val, void *data)
>>>> +{
>>>> +    struct vpci_bar *bar = data;
>>>> +
>>>> +    if ( is_hardware_domain(current->domain) )
>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>> +    else
>>>> +    {
>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>> +
>>>> +        if ( !vbar )
>>>> +            return;
>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>> +    }
>>>> +}
>>> You should assign different handlers based on whether the domain that
>>> has the device assigned is a domU or the hardware domain, rather than
>>> doing the selection here.
>> Hm, handlers are assigned once in init_bars and this function is only called
>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
> I think we might want to reset the vPCI handlers when a devices gets
> assigned and deassigned.

In the ARM case init_bars is called too early: PCI device assignment is
currently initiated by Domain-0's kernel and is done *before* PCI devices
are given memory ranges and BARs assigned:

[    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
[    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
[    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
[    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
[    0.429670] pci 0000:00:00.0: enabling Extended Tags
[    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE

< init_bars >

[    0.453793] pci 0000:00:00.0: -- IRQ 0
[    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
[    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE

< init_bars >

[    0.471821] pci 0000:01:00.0: -- IRQ 255
[    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!

< BAR assignments below >

[    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
[    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]

In case of x86 this is pretty much ok as BARs are already in place, but for
ARM we need to take care and re-setup vPCI BARs for hwdom. Things are getting
even more complicated if the host PCI bridge is not ECAM like, so you cannot
set mmio_handlers and trap hwdom's access to the config space to update BARs
etc. This is why I have that ugly hack for rcar_gen3 to read actual BARs for
hwdom.

If we go further and take a look at SR-IOV then when the kernel assigns the
device (BUS_NOTIFY_ADD_DEVICE) then it already has BARs assigned for virtual
functions (need to double-check that).

>   In order to do passthrough to domUs safely
> we will have to add more handlers than what's required for dom0,
Can you please tell what you are thinking about? What are the missing handlers?
>   and
> having is_hardware_domain sprinkled in all of them is not a suitable
> solution.

I'll try to replace is_hardware_domain with something like:

+/*
+ * Detect whether physical PCI devices in this segment belong
+ * to the domain given, e.g. on x86 all PCI devices live in hwdom,
+ * but in case of ARM this might not be the case: those may also
+ * live in driver domains or even Xen itself.
+ */
+bool pci_is_hardware_domain(struct domain *d, u16 seg)
+{
+#ifdef CONFIG_X86
+    return is_hardware_domain(d);
+#elif CONFIG_ARM
+    return pci_is_owner_domain(d, seg);
+#else
+#error "Unsupported architecture"
+#endif
+}
+
+/*
+ * Get domain which owns this segment: for x86 this is always hardware
+ * domain and for ARM this can be different.
+ */
+struct domain *pci_get_hardware_domain(u16 seg)
+{
+#ifdef CONFIG_X86
+    return hardware_domain;
+#elif CONFIG_ARM
+    return pci_get_owner_domain(seg);
+#else
+#error "Unsupported architecture"
+#endif
+}

This is what I use to properly detect the domain that really owns the physical
host bridge.

>
> Roger.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 06:46:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 06:46:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26208.54391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdSqv-00071h-Af; Fri, 13 Nov 2020 06:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26208.54391; Fri, 13 Nov 2020 06:46:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdSqv-00071a-7X; Fri, 13 Nov 2020 06:46:33 +0000
Received: by outflank-mailman (input) for mailman id 26208;
 Fri, 13 Nov 2020 06:46:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdSqt-00071T-HH
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 06:46:31 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9170f29c-4b54-4311-806c-7a2b57d16a0e;
 Fri, 13 Nov 2020 06:46:30 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0AD6ib8f031375; Fri, 13 Nov 2020 06:46:24 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2109.outbound.protection.outlook.com [104.47.18.109])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80p837-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 06:46:24 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2194.eurprd03.prod.outlook.com (2603:10a6:200:50::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.22; Fri, 13 Nov
 2020 06:46:20 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 06:46:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kdSqt-00071T-HH
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 06:46:31 +0000
X-Inumbo-ID: 9170f29c-4b54-4311-806c-7a2b57d16a0e
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9170f29c-4b54-4311-806c-7a2b57d16a0e;
	Fri, 13 Nov 2020 06:46:30 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
	by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0AD6ib8f031375;
	Fri, 13 Nov 2020 06:46:24 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com (mail-am6eur05lp2109.outbound.protection.outlook.com [104.47.18.109])
	by mx0a-0039f301.pphosted.com with ESMTP id 34rf80p837-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 13 Nov 2020 06:46:24 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dYElHk+R4TeYoY6YzLS8xIstAA+HsjDH9p1bxSNI0s+Blk5t21ciSDIQG6xXuqZX9pSIj36yXFEYg1j7D1kz5dplNMGp0Pjl90AQsv1JQAcnI0WUylhhrG5b+Xavy7ra9oaV+cDXi/zpAHG+PqqWKLdQM+SJieIxnlllzFz5/F576p0u9cdHpnEMKVT2xcRl9Yu52us5Ex8c/q9kcXLmyz2BeVuQZMoMS79YnC8xPkNX6yl97ZhepF1tcsKPUiMdcV9KhNEGyI4Ui6YbN5iSVPYYYfdtiMPgdbrUgYrcQPnqTetDzzrYbCUbZ9ogqTGMt0ciiCrj6NmWddCvL1XxIA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=40kjv/gLIKIGd/U6WL74IHsixNEKP4c9f6m8MAZkQos=;
 b=cztqLow8LE0Dpuu5JXxfeK2R94tvZ75ewvBhqXJXjbP4Qk4IytZHmlWnyL+aVcYMMwlMY065F+lNpNL1dLBhZi28L1AHjBHt9kEBzQfzfjtLAK5FqAR6q9NMshar8lWp6qiL0kCh5YFdvhk+GqBQYoa+1AZ+apZSSJ+zlEFYkp3fEHMPekDhmLSXnNCZXJyzpg8EdxbQO1C5WFoSis11C9/RAghsNpOQWr4Nk2LhwEFk/rqLtCLce3c/44ZRYkBOqIk/lOY+qshMsU5StZhjjnk9McmGlS8M874laaxORyKdtK2JcSOU0VradtfN+YbIX0gYqHVLXe03TGbWeB7ZCg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=40kjv/gLIKIGd/U6WL74IHsixNEKP4c9f6m8MAZkQos=;
 b=2zJjAB1yhK+oQmed7oG2f+XTiIYJSWj30Fot7uUNSWuKS7hTXd/5IUvq5dKuwzgXm01VzxC2XDITBbiJAhQz6CZF0OAzv8EWqB2Fjx5hJLy6ITrV2KUcSEy3DTzlZowINFluhCBFjGpSFvmXMssCHtlke+V0sJ7iTiD4Q7PvCJTnwP3ywsVs3Uh83Hd7NRVz4SaM5AzsWEDlzVERuTws4j908dpc1JD2I85bVYda/jGshQYRLe4P4gEt5N4Q1HEmKdydCVTtqU8tApnO5i/VmdjbD5QbLKF40bZiushGlmHlCCVOJa3BpmKw3iv/gSlPUlBHhzh9X8VtfVhUVu5Xtw==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2194.eurprd03.prod.outlook.com (2603:10a6:200:50::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.22; Fri, 13 Nov
 2020 06:46:20 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 06:46:20 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Oleksandr
 Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Thread-Topic: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Thread-Index: AQHWtpbyYHzQ7b7gKkKm7jsDdoAEE6nERvEAgAFdRAA=
Date: Fri, 13 Nov 2020 06:46:20 +0000
Message-ID: <5a793e4f-d0da-bfcf-c2e0-1229147ecd83@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-9-andr2000@gmail.com>
 <20201112095616.5ps37pm6p52zsa33@Air-de-Roger>
In-Reply-To: <20201112095616.5ps37pm6p52zsa33@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 73449570-35cc-454f-8b53-08d8879fd466
x-ms-traffictypediagnostic: AM4PR0301MB2194:
x-microsoft-antispam-prvs: 
 <AM4PR0301MB219449BEFA26A71DAF7054ABE7E60@AM4PR0301MB2194.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5516;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 M00zyyJJvXixQuyez7WIzEfX725JmgIlOMmrPuUiz0AqKgyVbekg2/OHjQVsCxIYQUP/jYNX/kOeYKymBhcraxeOgy8lg5qoLi97w9X68X6Jq2gQoB8UgXU84wMxVhx6v8rkRzA8xbcGDp3bvp8w/35rVTZ8jP6u1yRQz+EkCnWPcICwOv5+raJG0iG8jFoFiXxq5kWNKWtuwuNuJr1542+0ZCbm+aP3vM5Fh0por6ecuCpuYG9TiNfrekzuBXTwn28OHOgPfwISpjQ2yEy+tGK3Z3wSet27EXsspArhrsH/KKj1mUXnj/ycJ/oqe1hK
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39850400004)(136003)(366004)(26005)(8676002)(186003)(76116006)(31686004)(8936002)(53546011)(2616005)(83380400001)(6506007)(66446008)(66476007)(66556008)(64756008)(66946007)(6512007)(31696002)(4326008)(2906002)(478600001)(7416002)(5660300002)(71200400001)(36756003)(110136005)(316002)(54906003)(86362001)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 N9wNkqlkEJFlXdkGP+ZmvEAB+Er4jsod2vVV2LRau4dWaLyXVXOVe3+R5uS04ycXfFy+IvUFgif9biGRczqwtUi5PGiO6CbRCkVK/SX9ixzOiB4WR3XStEGHZJwTEUXkrya576sUnaKcRBXwfMItp42/YgjOjwJl5acgSMf60tbQCU7+/6T36+XG9XawQ1JtR0ZgmEfbtje3G0O7OBR2PNkFzBqylN4iYnJ2qjl7RfEwuioL4HJx1FaNHyddOby5LAnLWLDXNnX+ItugXv5+JWVr3VAHahYuFzC7Y4w/K3hcr6BwO6fXgjOzPKVp0C1yLMHf3sZQ5ucXcZY8sZUgAmiBGSphDtKvr2OvU/MA9CxtsxV+A/36SXRo5H8hWtLSFewTJT23ybN7VkQWPo9kQs6MSed2KddsThThM/bQKvEpBmxthNMUvCFaKSXJqnNDg1IV74CdrQesmiYLi+FYVMIG2YyETatyG+6Mk9Qc0x4Z80HmfL2Nt/aE9UFCIqfLhd0yVb9aRYV+lPkYUJ7LwZO307KRIOyNmeHY6dAdJNR3DmnI97hqbG0VBTHfsUn8MOj6NMvPUwjNtqzMq/gtkHgn+9lIfO/FpTsIVEpeEbokE50pAGHr7YTWRjo4X+8DEbkVrzNPrA/bdSGxPsVDBg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <ED04D1970EF0F148AA1857C280E4DE5E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 73449570-35cc-454f-8b53-08d8879fd466
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 06:46:20.4790
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Nd6+GEkH4zZXGCUXdl23pTpz3O5y7So/4+2b/gkEZzRtn0TMTYkaB9H66g0D5hBpMmq/a98Z49cXu5uE9j5rpVRWn+nhksEoEa4SpStAw/ZZX/f4J3uempiVsP3/xm18
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2194
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_04:2020-11-12,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0
 priorityscore=1501 adultscore=0 phishscore=0 impostorscore=0 spamscore=0
 malwarescore=0 suspectscore=0 mlxlogscore=999 mlxscore=0
 lowpriorityscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011130037

DQpPbiAxMS8xMi8yMCAxMTo1NiBBTSwgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToNCj4gT24gTW9u
LCBOb3YgMDksIDIwMjAgYXQgMDI6NTA6MjlQTSArMDIwMCwgT2xla3NhbmRyIEFuZHJ1c2hjaGVu
a28gd3JvdGU6DQo+PiBGcm9tOiBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyA8b2xla3NhbmRyX2Fu
ZHJ1c2hjaGVua29AZXBhbS5jb20+DQo+Pg0KPj4gTm9uLUVDQU0gaG9zdCBicmlkZ2VzIGluIGh3
ZG9tIGdvIGRpcmVjdGx5IHRvIFBDSSBjb25maWcgc3BhY2UsDQo+PiBub3QgdGhyb3VnaCB2cGNp
ICh0aGV5IHVzZSB0aGVpciBzcGVjaWZpYyBtZXRob2QgZm9yIGFjY2Vzc2luZyBQQ0kNCj4+IGNv
bmZpZ3VyYXRpb24sIGUuZy4gZGVkaWNhdGVkIHJlZ2lzdGVycyBldGMuKS4gVGh1cyBod2RvbSdz
IHZwY2kgQkFScyBhcmUNCj4+IG5ldmVyIHVwZGF0ZWQgdmlhIHZQQ0kgTU1JTyBoYW5kbGVycywg
c28gaW1wbGVtZW50IGEgZGVkaWNhdGVkIG1ldGhvZA0KPj4gZm9yIGEgUENJIGhvc3QgYnJpZGdl
LCBzbyBpdCBoYXMgYSBjaGFuY2UgdG8gdXBkYXRlIHRoZSBpbml0aWFsIHN0YXRlIG9mDQo+PiB0
aGUgZGV2aWNlIEJBUnMuDQo+Pg0KPj4gTm90ZSwgd2UgcmVseSBvbiB0aGUgZmFjdCB0aGF0IGNv
bnRyb2wvaGFyZHdhcmUgZG9tYWluIHdpbGwgbm90IHVwZGF0ZQ0KPj4gcGh5c2ljYWwgQkFSIGxv
Y2F0aW9ucyBmb3IgdGhlIGdpdmVuIGRldmljZXMuDQo+IFRoaXMgaXMgcXVpdGUgdWdseS4NCkl0
IGlzDQo+DQo+IEknbSBsb29raW5nIGF0IHRoZSBjb21taXQgdGhhdCBpbXBsZW1lbnRzIHRoZSBo
b29rIGZvciBSLUNhciBhbmQgSSdtDQo+IGhhdmluZyB0cm91YmxlIHNlZWluZyBob3cgdGhhdCdz
IGRpZmZlcmVudCBmcm9tIHRoZSB3YXkgd2Ugd291bGQNCj4gbm9ybWFsbHkgcmVhZCB0aGUgQkFS
IGFkZHJlc3Nlcy4NCg0KT2ssIHBsZWFzZSBzZWUgbXkgY29tbWVudCBvbiBwYXRjaCBbMDYvMTBd
LiBJbiBzaG9ydDoNCg0Kd2hlbiBhIFBDSSBkZXZpY2UgaXMgKmFkZGVkKiB3ZSBjYWxsIGluaXRf
YmFycyBhbmQgYXQgdGhhdCB0aW1lIEJBUnMNCg0KYXJlIG5vdCBhc3NpZ25lZCBvbiBBUk0geWV0
LiBCdXQsIGlmIHdlIG1vdmUgaW5pdF9iYXJzIHRvIHRoZSBwb2ludA0KDQp3aGVuIGEgZGV2aWNl
ICphc3NpZ25lZCogdGhlbiBpdCB3aWxsIHdvcms/IEFuZCB0aGlzIGNvZGUgd2lsbCBnbyBhd2F5
DQoNCj4NCj4gSSB0aGluayB0aGlzIHNob3VsZCBsaWtlbHkgYmUgcGFpcmVkIHdpdGggdGhlIGFj
dHVhbCBpbXBsZW1lbnRhdGlvbiBvZg0KPiBhIGhvb2ssIG9yIGVsc2UgaXQncyBoYXJkIHRvIHRl
bGwgd2hldGhlciBpdCByZWFsbHkgbmVlZGVkIG9yIG5vdC4NClllcywgaWYgd2UgbW92ZSB0byBk
ZXZpY2UgYXNzaWduIHRoZW4gaXQgd29uJ3QgYmUgbmVlZGVkOiBoYXZlIHRvIGNoZWNrIHRoYXQN
Cj4NCj4gVGhhbmtzLCBSb2dlci4NCg0KVGhhbmsgeW91LA0KDQpPbGVrc2FuZHINCg==


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 06:48:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 06:48:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26213.54402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdSst-0007BP-O3; Fri, 13 Nov 2020 06:48:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26213.54402; Fri, 13 Nov 2020 06:48:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdSst-0007BI-KC; Fri, 13 Nov 2020 06:48:35 +0000
Received: by outflank-mailman (input) for mailman id 26213;
 Fri, 13 Nov 2020 06:48:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdSsr-0007BD-Lc
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 06:48:33 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1e461b8-e44d-4033-8f91-fed6f725aa22;
 Fri, 13 Nov 2020 06:48:30 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0AD6iUQU002327; Fri, 13 Nov 2020 06:48:25 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80e6cp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 06:48:24 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2195.eurprd03.prod.outlook.com (2603:10a6:200:4f::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 13 Nov
 2020 06:48:21 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 06:48:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kdSsr-0007BD-Lc
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 06:48:33 +0000
X-Inumbo-ID: e1e461b8-e44d-4033-8f91-fed6f725aa22
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e1e461b8-e44d-4033-8f91-fed6f725aa22;
	Fri, 13 Nov 2020 06:48:30 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
	by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0AD6iUQU002327;
	Fri, 13 Nov 2020 06:48:25 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com (mail-db8eur05lp2109.outbound.protection.outlook.com [104.47.17.109])
	by mx0a-0039f301.pphosted.com with ESMTP id 34rf80e6cp-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 13 Nov 2020 06:48:24 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IxHT28dLifrcnZtUbDRbd54v20XQFwySt5KGt1qSMnooRpPGk971leQol+wFEaT9HzM5v2bpRb9nFvBO1z/16jRX1OE3ffAjb9GdVC5ClsS10l4NR/vCo6qp7qtaHcMtOIW99AXKMuW7ceLJjBLb6zSAfNk3ZFFa+p7qY5Y6AvuXQ4Wyuw/JGOlDGJTZa2jHylZTblsH04oWiEekrLXI4PlmaaHq8jMM7Q4Qn9depm69yi7vGk6soGUKlzS+FBqQGoRlTL3zGocPmO/khlL1ecqOSvtsD2AfEUgInXkL7EWGjUoH0lUtqvmNxkeJTZrht9SUXnGi+5Z6RIV4wl0Ddg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XuwBqXtQEZkLSOCgwUjKf6uPweQs9Hu8jkJFcLAGfi8=;
 b=CqjThm0WvzxDS2W1BZfsI7Jmf9+eHOZxUMyQFKNipsgsz1xwbde36GfCafDq5mBylNyR6/1TcrQJ8iVlCJ8ah/h7qdlgs/p/UHUEN8J8DjYKiyutk8eN0oPqJMo3IVGcxnFz80jgi6JG4S0ge1U3YE0Pt19bYim9yPOrC2dej92/4Jv7jz7yuh3xKWZS7Y1RCVGCTqDjR5BiwWi8E2narwijHx3I36CX4ih5m8MOL74aNLdMJN51M/Y0wpqlt5lfRVtxkrAyy+5wAiVPJYPc8hhhTX6KJ3gJooHFNg03284X+aO1+62OuEJ41REpUO4SuzJZWf/npixCQB9ZiwRpmw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XuwBqXtQEZkLSOCgwUjKf6uPweQs9Hu8jkJFcLAGfi8=;
 b=ntQcVgTB/T5kcinZUnRNsz7WSHGXbaX9p/o9AcXMd8DZ0GNQLQwngs08Y7w/EP9YXm5SX/UOOzLszs+QJn7yW/g6ohKR5ASiqoSB5HKJVJ2isdF3y4N7sr2siuCWh7YjrEjb4p/r/IBqu9EFCmeTGcYWvEMZSC8rMuyiFy3T2triaEl8SWF97UgZqXsbQSGDSn6cIh/2JZZG8ydEYKU6vzo8o22ff+TFkE/76QEF5t3J4l/0WYs+morm3WuLxrPIXoEge21B5TZMS/AXCRYT6j5Y06Yot8bGcrPoKvI4DGUccbfYGSp4mtQU/1zDYF896xtgbZ44VqCkFe5rWeSiFg==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2195.eurprd03.prod.outlook.com (2603:10a6:200:4f::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.21; Fri, 13 Nov
 2020 06:48:21 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 06:48:20 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com"
	<jbeulich@suse.com>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgAAEbQA=
Date: Fri, 13 Nov 2020 06:48:20 +0000
Message-ID: <507c332e-86c1-96dd-d552-51a7f334f36d@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
In-Reply-To: <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: fefd864f-d59a-4753-3f3f-08d887a01c22
x-ms-traffictypediagnostic: AM4PR0301MB2195:
x-microsoft-antispam-prvs: 
 <AM4PR0301MB21953D7FEA3373D566BEBABAE7E60@AM4PR0301MB2195.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:29;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 qNacd7YxR52+rq1Y+nB/eQ6k3dZddzrPH2oqRxQol/kiX8qqNYJ1i1/2OmYHBQmtclTtI8OdlzRRH+0sLhJnlVMMNZC1tovmlt5yOZsU/zulXmREKS3mOJKy76ewxtjHZjvuBDz6WVHNHigIDlb+TSMKqVTZU+/LJK1EpZHK1hMXxqV70FZT3wTC0C/hwSHGnD2jexWLHopRlLb1koj+KVtO1GiyKRWIHa7mXI6XRVmCkrwaafNklasC+P//yg9+52994renC/KsdsgLsXAsEQoqmG3n2M9HcM9cHm7myh6haban1gUdDMigjQXTmnSXq/EPPbf0tVQ5h1FgbtVXsg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(39850400004)(346002)(396003)(376002)(6486002)(186003)(83380400001)(53546011)(6916009)(30864003)(86362001)(6506007)(76116006)(316002)(66946007)(66446008)(66556008)(2616005)(4326008)(8676002)(31696002)(5660300002)(26005)(64756008)(31686004)(54906003)(7416002)(36756003)(478600001)(8936002)(2906002)(66476007)(71200400001)(6512007)(579004);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 AosY1Y4rSvuX4QebbeUlrwPzng1MJ/jUKORpWTuyRtqfdHKp4fAwwtrw7vh+1cKLwq8kTA7difN0pUdh4U3vnC1o8qEtPPp6XFPWj1plu21RcWcEoxg3mgo2dYOFO2Wfw77GN1hm38bB10hYWl42qJ/dLHlA6BrZz4o/rNhB/hSmCt69s+5kx/C8P1/FmAPuxwGqakRQ+Kmve0LqKZKpXS7ECneZGS4W2piys/Oi1Z4LSBbsxhIs74DYHTkFEs6ShKHU9eJiOwCxXB9meCKykPiEYwolGmsvEgAtO3Vrj/BvauVT/lz0nSvSjHhtaMD9QZU6Fuo9Ok360FrsR1+XJYPjpF2DkKOdhJY/evxx8SDvxuADTF4Ne5e6VRYi3z5qKrXrxma2P8WHjEdA1+MNwip34yEXGmJ/fmGt9S4zLBOPDLh2LZYogK9wxc1Fe8bC7Gs96D7Ra184VHDL5MA83pxcmb6OyrbAKsZqf0BGEn8OYbCaoBNuduRsOXmGZRmZYCmRS3uukFwqjAM78BNLqyrsmX/khJrswJjchPD9cR44vrRNlBbTiWhSwJQvxz8pSqC48rXBK/sWGeheUqg1y0Je1WM7+k/wYJICd+2S2SjGHjutXzQNEhwfdx7P8WxUigN6RbRrIESgQ3EgzZD/1g==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <619F32176EEB6F4ABB5DA8396C7120C3@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fefd864f-d59a-4753-3f3f-08d887a01c22
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 06:48:20.8054
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: o9v9I0ofZdao1cr/xT4QaLwNo51XUoeEBTx6IXECyhpsGD7hkARTb3fmtSoFYQb1lXKnhgpMFUwFMRGQZK05EwAwlrdDJ+0BUVkMLJwIPxIdwlRl/u4AZZvm1Uy5Eyx2
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2195
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_04:2020-11-12,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011130037

DQpPbiAxMS8xMy8yMCA4OjMyIEFNLCBPbGVrc2FuZHIgQW5kcnVzaGNoZW5rbyB3cm90ZToNCj4N
Cj4gT24gMTEvMTIvMjAgNDo0NiBQTSwgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToNCj4+IE9uIFRo
dSwgTm92IDEyLCAyMDIwIGF0IDAxOjE2OjEwUE0gKzAwMDAsIE9sZWtzYW5kciBBbmRydXNoY2hl
bmtvIHdyb3RlOg0KPj4+IE9uIDExLzEyLzIwIDExOjQwIEFNLCBSb2dlciBQYXUgTW9ubsOpIHdy
b3RlOg0KPj4+PiBPbiBNb24sIE5vdiAwOSwgMjAyMCBhdCAwMjo1MDoyN1BNICswMjAwLCBPbGVr
c2FuZHIgQW5kcnVzaGNoZW5rbyB3cm90ZToNCj4+Pj4+IEZyb206IE9sZWtzYW5kciBBbmRydXNo
Y2hlbmtvIDxvbGVrc2FuZHJfYW5kcnVzaGNoZW5rb0BlcGFtLmNvbT4NCj4+Pj4+IGRpZmYgLS1n
aXQgYS94ZW4vZHJpdmVycy92cGNpL2hlYWRlci5jIGIveGVuL2RyaXZlcnMvdnBjaS9oZWFkZXIu
Yw0KPj4+Pj4gaW5kZXggZjc0ZjcyODg4NGMwLi43ZGM3YzcwZTI0ZjIgMTAwNjQ0DQo+Pj4+PiAt
LS0gYS94ZW4vZHJpdmVycy92cGNpL2hlYWRlci5jDQo+Pj4+PiArKysgYi94ZW4vZHJpdmVycy92
cGNpL2hlYWRlci5jDQo+Pj4+PiBAQCAtMzEsMTQgKzMxLDg3IEBADQo+Pj4+PiDCoMKgIHN0cnVj
dCBtYXBfZGF0YSB7DQo+Pj4+PiDCoMKgwqDCoMKgwqAgc3RydWN0IGRvbWFpbiAqZDsNCj4+Pj4+
IMKgwqDCoMKgwqDCoCBib29sIG1hcDsNCj4+Pj4+ICvCoMKgwqAgc3RydWN0IHBjaV9kZXYgKnBk
ZXY7DQo+Pj4+IElmIHRoZSBmaWVsZCBpcyByZXF1aXJlZCBwbGVhc2UgcGxhY2UgaXQgYWZ0ZXIg
dGhlIGRvbWFpbiBvbmUuDQo+Pj4gSSB3aWxsLCBidXQgbWF5IEkgYXNrIHdoeT8NCj4+IFNvIHRo
YXQgaWYgd2UgYWRkIGZ1cnRoZXIgYm9vbGVhbiBmaWVsZHMgd2UgY2FuIGRvIGF0IHRoZSBlbmQg
b2YgdGhlDQo+PiBzdHJ1Y3QgZm9yIGxheW91dCByZWFzb25zLiBJZiB3ZSBkbzoNCj4+DQo+PiBz
dHJ1Y3QgbWFwX2RhdGEgew0KPj4gwqDCoMKgwqAgc3RydWN0IGRvbWFpbiAqZDsNCj4+IMKgwqDC
oMKgIGJvb2wgbWFwOw0KPj4gwqDCoMKgwqAgc3RydWN0IHBjaV9kZXYgKnBkZXY7DQo+PiDCoMKg
wqDCoCBib29sIGZvbzsNCj4+IH0NCj4+DQo+PiBXZSB3aWxsIGVuZCB1cCB3aXRoIGEgYnVuY2gg
b2YgcGFkZGluZyB0aGF0IGNvdWxkIGJlIGF2b2lkZWQgYnkgZG9pbmc6DQo+Pg0KPj4gc3RydWN0
IG1hcF9kYXRhIHsNCj4+IMKgwqDCoMKgIHN0cnVjdCBkb21haW4gKmQ7DQo+PiDCoMKgwqDCoCBz
dHJ1Y3QgcGNpX2RldiAqcGRldjsNCj4+IMKgwqDCoMKgIGJvb2wgbWFwOw0KPj4gwqDCoMKgwqAg
Ym9vbCBmb287DQo+PiB9DQo+IEFoLCBzbyB0aGlzIGlzIGFib3V0IHBhZGRpbmcuIEdvdCBpdA0K
Pj4NCj4+Pj4+ICvCoMKgwqAgcyA9IFBGTl9ET1dOKHMpOw0KPj4+Pj4gK8KgwqDCoCBlID0gUEZO
X0RPV04oZSk7DQo+Pj4+IENoYW5naW5nIHRoZSByYW5nZXNldCB0byBzdG9yZSBtZW1vcnkgYWRk
cmVzc2VzIGluc3RlYWQgb2YgZnJhbWVzDQo+Pj4+IGNvdWxkIGZvciBleGFtcGxlIGJlIHNwbGl0
IGludG8gYSBzZXBhcmF0ZSBwYXRjaC4NCj4+PiBPaw0KPj4+PiBJIHRoaW5rIHlvdSBhcmUgZG9p
bmcgdGhlIGNhbGN1bGF0aW9uIG9mIHRoZSBlbmQgcGZuIHdyb25nIGhlcmUsIHlvdQ0KPj4+PiBz
aG91bGQgdXNlIFBGTl9VUCBpbnN0ZWFkIGluIGNhc2UgdGhlIGFkZHJlc3MgaXMgbm90IGFsaWdu
ZWQuDQo+Pj4gUEZOX0RPV04gZm9yIHRoZSBzdGFydCBzZWVtcyB0byBiZSBvayBpZiB0aGUgYWRk
cmVzcyBpcyBub3QgYWxpZ25lZA0KPj4+DQo+Pj4gd2hpY2ggaXMgdGhlIGNhc2UgaWYgSSBwYXNz
IGJhcl9pbmRleCBpbiB0aGUgbG93ZXIgYml0czogUENJIG1lbW9yeSBoYXMNCj4+Pg0KPj4+IFBB
R0VfU0laRSBncmFudWxhcml0eSwgc28gYmVzaWRlcyB0aGUgZmFjdCB0aGF0IEkgdXNlIGJhcl9p
bmRleCB0aGUgYWRkcmVzcw0KPj4gTm8sIEJBUnMgZG9uJ3QgbmVlZCB0byBiZSBhbGlnbmVkIHRv
IHBhZ2UgYm91bmRhcmllcywgeW91IGNhbiBldmVuDQo+PiBoYXZlIGRpZmZlcmVudCBCQVJzIGlu
c2lkZSB0aGUgc2FtZSBwaHlzaWNhbCBwYWdlLg0KPj4NCj4+IFRoZSBzcGVjIHJlY29tbWVuZHMg
dGhhdCB0aGUgbWluaW11bSBzaXplIG9mIGEgQkFSIHNob3VsZCBiZSA0S0IsIGJ1dA0KPj4gdGhh
dCdzIG5vdCBhIHN0cmljdCByZXF1aXJlbWVudCBpbiB3aGljaCBjYXNlIGEgQkFSIGNhbiBiZSBh
cyBzbWFsbCBhcw0KPj4gMTZieXRlcywgYW5kIHRoZW4geW91IGNhbiBoYXZlIG11bHRpcGxlIG9u
ZXMgaW5zaWRlIHRoZSBzYW1lIHBhZ2UuDQo+IE9rLCB3aWxsIGFjY291bnQgb24gdGhhdA0KPj4N
Cj4+PiBtdXN0IGJlIHBhZ2UgYWxpZ25lZC4NCj4+Pg0KPj4+IFRoZSBlbmQgYWRkcmVzcyBpcyBl
eHByZXNzZWQgaW4gKHNpemUgLSAxKSBmb3JtLCBhZ2FpbiBwYWdlIGFsaWduZWQsDQo+Pj4NCj4+
PiBzbyB0byBnZXQgdGhlIGxhc3QgcGFnZSB0byBiZSBtYXBwZWQgUEZOX0RPV04gYWxzbyBzZWVt
cyB0byBiZSBhcHByb3ByaWF0ZS4NCj4+Pg0KPj4+IERvIEkgbWlzcyBzb21ldGhpbmcgaGVyZT8N
Cj4+IEknbSBub3QgYXdhcmUgb2YgYW55wqAgb2YgdGhvc2UgYWRkcmVzc2VzIG9yIHNpemVzIGJl
aW5nIGd1YXJhbnRlZWQgdG8NCj4+IGJlIHBhZ2UgYWxpZ25lZCwgc28gSSB0aGluayB5b3UgbmVl
ZCB0byBhY2NvdW50IGZvciB0aGF0Lg0KPj4NCj4+IFNvbWUgb2YgdGhlIGNvZGUgaGVyZSB1c2Vz
IFBGTl9ET1dOIHRvIGNhbGN1bGF0ZSB0aGUgZW5kIGFkZHJlc3MNCj4+IGJlY2F1c2UgdGhlIHJh
bmdlc2V0cyBhcmUgdXNlZCBpbiBhbiBpbmNsdXNpdmUgZmFzaGlvbiwgc28gdGhlIGVuZA0KPj4g
ZnJhbWUgYWxzbyBnZXRzIG1hcHBlZC4NCj4gT2sNCj4+DQo+Pj4+PiArwqDCoMKgIG1mbiA9IF9t
Zm4oUEZOX0RPV04oaGVhZGVyLT5iYXJzW2Jhcl9pZHhdLmFkZHIpKTsNCj4+Pj4+IMKgwqDCoMKg
wqDCoCBmb3IgKCA7IDsgKQ0KPj4+Pj4gwqDCoMKgwqDCoMKgIHsNCj4+Pj4+IMKgwqDCoMKgwqDC
oMKgwqDCoMKgIHVuc2lnbmVkIGxvbmcgc2l6ZSA9IGUgLSBzICsgMTsNCj4+Pj4+IEBAIC01Miwx
1 +125,15 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>>>>>            * - {un}map_mmio_regions doesn't support preemption.
>>>>>            */
>>>>> -        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, _mfn(s))
>>>>> -                      : unmap_mmio_regions(map->d, _gfn(s), size, _mfn(s));
>>>>> +        rc = map->map ? map_mmio_regions(map->d, _gfn(s), size, mfn)
>>>>> +                      : unmap_mmio_regions(map->d, _gfn(s), size, mfn);
>>>>>           if ( rc == 0 )
>>>>>           {
>>>>> -            *c += size;
>>>>> +            /*
>>>>> +             * Range set is not expressed in frame numbers and the size
>>>>> +             * is the number of frames, so update accordingly.
>>>>> +             */
>>>>> +            *c += size << PAGE_SHIFT;
>>>>>               break;
>>>>>           }
>>>>>           if ( rc < 0 )
>>>>> @@ -67,8 +144,9 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>>>>>               break;
>>>>>           }
>>>>>           ASSERT(rc < size);
>>>>> -        *c += rc;
>>>>> +        *c += rc << PAGE_SHIFT;
>>>>>           s += rc;
>>>>> +        mfn += rc;
>>>>>           if ( general_preempt_check() )
>>>>>               return -ERESTART;
>>>>>       }
>>>>> @@ -84,7 +162,7 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>>>>>   static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
>>>>>                               bool rom_only)
>>>>>   {
>>>>> -    struct vpci_header *header = &pdev->vpci->header;
>>>>> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
>>>>>       bool map = cmd & PCI_COMMAND_MEMORY;
>>>>>       unsigned int i;
>>>>>   @@ -136,6 +214,7 @@ bool vpci_process_pending(struct vcpu *v)
>>>>>           struct map_data data = {
>>>>>               .d = v->domain,
>>>>>               .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
>>>>> +            .pdev = v->vpci.pdev,
>>>>>           };
>>>>>           int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
>>>>>   @@ -168,7 +247,8 @@ bool vpci_process_pending(struct vcpu *v)
>>>>>   static int __init apply_map(struct domain *d, const struct pci_dev *pdev,
>>>>>                               struct rangeset *mem, uint16_t cmd)
>>>>>   {
>>>>> -    struct map_data data = { .d = d, .map = true };
>>>>> +    struct map_data data = { .d = d, .map = true,
>>>>> +        .pdev = (struct pci_dev *)pdev };
>>>> Dropping the const here is not fine. It either needs to be dropped
>>>> from apply_map and further up, or this needs to also be made const.
>>> Ok, I'll try to keep it const
>>>>>       int rc;
>>>>>       while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART )
>>>>> @@ -205,7 +285,7 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
>>>>>   static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>>   {
>>>>> -    struct vpci_header *header = &pdev->vpci->header;
>>>>> +    struct vpci_header *header;
>>>>>       struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>>>>       struct pci_dev *tmp, *dev = NULL;
>>>>>   #ifdef CONFIG_X86
>>>>> @@ -217,6 +297,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>>       if ( !mem )
>>>>>           return -ENOMEM;
>>>>>   +    if ( is_hardware_domain(current->domain) )
>>>>> +        header = get_hwdom_vpci_header(pdev);
>>>>> +    else
>>>>> +        header = get_vpci_header(current->domain, pdev);
>>>>> +
>>>>>       /*
>>>>>        * Create a rangeset that represents the current device BARs memory region
>>>>>        * and compare it against all the currently active BAR memory regions. If
>>>>> @@ -225,12 +310,15 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>>        * First fill the rangeset with all the BARs of this device or with the ROM
>>>>>        * BAR only, depending on whether the guest is toggling the memory decode
>>>>>        * bit of the command register, or the enable bit of the ROM BAR register.
>>>>> +     *
>>>>> +     * Use the PCI reserved bits of the BAR to pass BAR's index.
>>>>>        */
>>>>>       for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>>>>>       {
>>>>>           const struct vpci_bar *bar = &header->bars[i];
>>>>> -        unsigned long start = PFN_DOWN(bar->addr);
>>>>> -        unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>>>>> +        unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
>>>>> +        unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) +
>>>>> +            bar->size - 1;
>>>> Will this work fine on Arm 32bits with LPAE? It's my understanding
>>>> that in that case unsigned long is 32bits, but the physical address
>>>> space is 44bits, in which case this won't work.
>>> Hm, good question
>>>> I think you need to keep the usage of frame numbers here.
>>> If I re-work the gfn <-> mfn mapping then yes, I can use frame numbers here and elsewhere
>>>>>           if ( !MAPPABLE_BAR(bar) ||
>>>>>                (rom_only ? bar->type != VPCI_BAR_ROM
>>>>> @@ -251,9 +339,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>>       /* Remove any MSIX regions if present. */
>>>>>       for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
>>>>>       {
>>>>> -        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
>>>>> -        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
>>>>> - vmsix_table_size(pdev->vpci, i) - 1);
>>>>> +        unsigned long start = (vmsix_table_addr(pdev->vpci, i) &
>>>>> +                               PCI_BASE_ADDRESS_MEM_MASK) | i;
>>>>> +        unsigned long end = (vmsix_table_addr(pdev->vpci, i) &
>>>>> +                             PCI_BASE_ADDRESS_MEM_MASK ) +
>>>>> + vmsix_table_size(pdev->vpci, i) - 1;
>>>>>           rc = rangeset_remove_range(mem, start, end);
>>>>>           if ( rc )
>>>>> @@ -273,6 +363,8 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>>        */
>>>>>       for_each_pdev ( pdev->domain, tmp )
>>>>>       {
>>>>> +        struct vpci_header *header;
>>>>> +
>>>>>           if ( tmp == pdev )
>>>>>           {
>>>>>               /*
>>>>> @@ -289,11 +381,14 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>>                   continue;
>>>>>           }
>>>>>   -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>>>>> +        header = get_vpci_header(tmp->domain, pdev);
>>>>> +
>>>>> +        for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>>>>>           {
>>>>> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
>>>>> -            unsigned long start = PFN_DOWN(bar->addr);
>>>>> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>>>>> +            const struct vpci_bar *bar = &header->bars[i];
>>>>> +            unsigned long start = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK) | i;
>>>>> +            unsigned long end = (bar->addr & PCI_BASE_ADDRESS_MEM_MASK)
>>>>> +                + bar->size - 1;
>>>>>               if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
>>>>>                   /*
>>>>> @@ -357,7 +452,7 @@ static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
>>>>>           pci_conf_write16(pdev->sbdf, reg, cmd);
>>>>>   }
>>>>>   -static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>>>>> +static void bar_write_hwdom(const struct pci_dev *pdev, unsigned int reg,
>>>>>                           uint32_t val, void *data)
>>>>>   {
>>>>>       struct vpci_bar *bar = data;
>>>>> @@ -377,14 +472,17 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>>>>>       {
>>>>>           /* If the value written is the current one avoid printing a warning. */
>>>>>           if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
>>>>> +        {
>>>>> +            struct vpci_header *header = get_hwdom_vpci_header(pdev);
>>>>> +
>>>>>               gprintk(XENLOG_WARNING,
>>>>>                       "%04x:%02x:%02x.%u: ignored BAR %lu write with memory decoding enabled\n",
>>>>>                       pdev->seg, pdev->bus, slot, func,
>>>>> -                   bar - pdev->vpci->header.bars + hi);
>>>>> +                    bar - header->bars + hi);
>>>>> +        }
>>>>>           return;
>>>>>       }
>>>>>   -
>>>>>       /*
>>>>>        * Update the cached address, so that when memory decoding is enabled
>>>>>        * Xen can map the BAR into the guest p2m.
>>>>> @@ -403,10 +501,89 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
>>>>>       pci_conf_write32(pdev->sbdf, reg, val);
>>>>>   }
>>>>>   +static uint32_t bar_read_hwdom(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                               void *data)
>>>>> +{
>>>>> +    return vpci_hw_read32(pdev, reg, data);
>>>>> +}
>>>>> +
>>>>> +static void bar_write_guest(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                            uint32_t val, void *data)
>>>>> +{
>>>>> +    struct vpci_bar *vbar = data;
>>>>> +    bool hi = false;
>>>>> +
>>>>> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
>>>>> +    {
>>>>> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
>>>>> +        vbar--;
>>>>> +        hi = true;
>>>>> +    }
>>>>> +    vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
>>>>> +    vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
>>>>> +}
>>>>> +
>>>>> +static uint32_t bar_read_guest(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                               void *data)
>>>>> +{
>>>>> +    struct vpci_bar *vbar = data;
>>>>> +    uint32_t val;
>>>>> +    bool hi = false;
>>>>> +
>>>>> +    if ( vbar->type == VPCI_BAR_MEM64_HI )
>>>>> +    {
>>>>> +        ASSERT(reg > PCI_BASE_ADDRESS_0);
>>>>> +        vbar--;
>>>>> +        hi = true;
>>>>> +    }
>>>>> +
>>>>> +    if ( vbar->type == VPCI_BAR_MEM64_LO || vbar->type == VPCI_BAR_MEM64_HI )
>>>> I think this would be clearer using a switch statement.
>>> I'll think about it
>>>>> +    {
>>>>> +        if ( hi )
>>>>> +            val = vbar->addr >> 32;
>>>>> +        else
>>>>> +            val = vbar->addr & 0xffffffff;
>>>>> +        if ( val == ~0 )
>>>> Strictly speaking I think you are not forced to write 1s to the
>>>> reserved 4 bits in the low register (and in the 32bit case).
>>> Ah, so the Linux kernel, for instance, could have written 0xffffff0 while I expect 0xffffffff?
>> I think real hardware would return the size when written 1s to all
>> bits except the reserved ones.
>>
>>>>> +        {
>>>>> +            /* Guests detects BAR's properties and sizes. */
>>>>> +            if ( !hi )
>>>>> +            {
>>>>> +                val = 0xffffffff & ~(vbar->size - 1);
>>>>> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
>>>>> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
>>>>> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
>>>>> +            }
>>>>> +            else
>>>>> +                val = vbar->size >> 32;
>>>>> +            vbar->addr &= ~(0xffffffffull << (hi ? 32 : 0));
>>>>> +            vbar->addr |= (uint64_t)val << (hi ? 32 : 0);
>>>>> +        }
>>>>> +    }
>>>>> +    else if ( vbar->type == VPCI_BAR_MEM32 )
>>>>> +    {
>>>>> +        val = vbar->addr;
>>>>> +        if ( val == ~0 )
>>>>> +        {
>>>>> +            if ( !hi )
>>>> There's no way hi can be true at this point AFAICT.
>>> Sure, thank you
>>>>> +            {
>>>>> +                val = 0xffffffff & ~(vbar->size - 1);
>>>>> +                val |= vbar->type == VPCI_BAR_MEM32 ? PCI_BASE_ADDRESS_MEM_TYPE_32
>>>>> +                                                    : PCI_BASE_ADDRESS_MEM_TYPE_64;
>>>>> +                val |= vbar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0;
>>>>> +            }
>>>>> +        }
>>>>> +    }
>>>>> +    else
>>>>> +    {
>>>>> +        val = vbar->addr;
>>>>> +    }
>>>>> +    return val;
>>>>> +}
>>>>> +
>>>>>   static void rom_write(const struct pci_dev *pdev, unsigned int reg,
>>>>>                         uint32_t val, void *data)
>>>>>   {
>>>>> -    struct vpci_header *header = &pdev->vpci->header;
>>>>> +    struct vpci_header *header = get_hwdom_vpci_header(pdev);
>>>>>       struct vpci_bar *rom = data;
>>>>>       uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
>>>>>       uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
>>>>> @@ -452,15 +629,56 @@ static void rom_write(const struct pci_dev *pdev, unsigned int reg,
>>>>>           rom->addr = val & PCI_ROM_ADDRESS_MASK;
>>>>>   }
>>>> Don't you need to also protect a domU from writing to the ROM BAR
>>>> register?
>>> ROM was not a target of this RFC as I have no HW to test that, but final code must
>>> also handle ROM as well, you are right
>>>
>>>>>   +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                                  void *data)
>>>>> +{
>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>> +
>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>> +
>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>> +    if ( !vbar )
>>>>> +        return ~0;
>>>>> +
>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>> +}
>>>>> +
>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                               uint32_t val, void *data)
>>>>> +{
>>>>> +    struct vpci_bar *bar = data;
>>>>> +
>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>> +    else
>>>>> +    {
>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>> +
>>>>> +        if ( !vbar )
>>>>> +            return;
>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>> +    }
>>>>> +}
>>>> You should assign different handlers based on whether the domain that
>>>> has the device assigned is a domU or the hardware domain, rather than
>>>> doing the selection here.
>>> Hm, handlers are assigned once in init_bars and this function is only called
>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>> I think we might want to reset the vPCI handlers when a device gets
>> assigned and deassigned.
>
> In the ARM case init_bars is called too early: PCI device assignment is currently
> initiated by Domain-0's kernel and is done *before* PCI devices are given memory
> ranges and BARs assigned:
>
> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>
> < init_bars >
>
> [    0.453793] pci 0000:00:00.0: -- IRQ 0
> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>
> < init_bars >
>
> [    0.471821] pci 0000:01:00.0: -- IRQ 255
> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>
> < BAR assignments below >
>
> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>
> In case of x86 this is pretty much ok as BARs are already in place, but for ARM we
> need to take care and re-setup vPCI BARs for hwdom. Things are getting even more
> complicated if the host PCI bridge is not ECAM like, so you cannot set mmio_handlers
> and trap hwdom's access to the config space to update BARs etc. This is why I have that
> ugly hack for rcar_gen3 to read actual BARs for hwdom.
>
> If we go further and take a look at SR-IOV then when the kernel assigns the device
> (BUS_NOTIFY_ADD_DEVICE) then it already has BARs assigned for virtual functions
> (need to double-check that).

Hm, indeed. We just need to move init_bars from being called on PCI *device add* to
*device assign*. This way it won't (?) break x86 and will allow ARM to properly
initialize vPCI's BARs...

>
>>   In order to do passthrough to domUs safely
>> we will have to add more handlers than what's required for dom0,
> Can you please tell what you are thinking about? What are the missing handlers?
>>   and
>> having is_hardware_domain sprinkled in all of them is not a suitable
>> solution.
>
> I'll try to replace is_hardware_domain with something like:
>
> +/*
> + * Detect whether physical PCI devices in this segment belong
> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
> + * but in case of ARM this might not be the case: those may also
> + * live in driver domains or even Xen itself.
> + */
> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
> +{
> +#ifdef CONFIG_X86
> +    return is_hardware_domain(d);
> +#elif CONFIG_ARM
> +    return pci_is_owner_domain(d, seg);
> +#else
> +#error "Unsupported architecture"
> +#endif
> +}
> +
> +/*
> + * Get domain which owns this segment: for x86 this is always hardware
> + * domain and for ARM this can be different.
> + */
> +struct domain *pci_get_hardware_domain(u16 seg)
> +{
> +#ifdef CONFIG_X86
> +    return hardware_domain;
> +#elif CONFIG_ARM
> +    return pci_get_owner_domain(seg);
> +#else
> +#error "Unsupported architecture"
> +#endif
> +}
>
> This is what I use to properly detect the domain that really owns the physical host bridge
>
>>
>> Roger.
>
> Thank you,
>
> Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 06:50:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 06:50:32 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Oleksandr
 Andrushchenko <andr2000@gmail.com>
CC: "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 09/10] vpci/rcar: Implement vPCI.update_bar_header
 callback
Thread-Topic: [PATCH 09/10] vpci/rcar: Implement vPCI.update_bar_header
 callback
Thread-Index: AQHWtpbzTvJ5qBS+Xk6vR5/qsR0EDanESDwAgAFdFAA=
Date: Fri, 13 Nov 2020 06:50:18 +0000
Message-ID: <6f6f239a-a568-baed-5e40-aec39c6b8c2c@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-10-andr2000@gmail.com>
 <20201112100054.z46hjf2qzcag6sv7@Air-de-Roger>
In-Reply-To: <20201112100054.z46hjf2qzcag6sv7@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 460a6367-0608-4124-d491-08d887a06269
x-ms-traffictypediagnostic: AM4PR0301MB2195:
x-microsoft-antispam-prvs: 
 <AM4PR0301MB2195B04A58188B02DDDE7371E7E60@AM4PR0301MB2195.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 YSpD3Gu6dGHsbkkzal4Q66/7n4MFsLKkSNLIuMiZ9nBo7acNYU+l3oRnmfy8H5A9bCbwmyFf+amZUuqOuDR4HVICc3PXgV/tR+qFDa1wW3dOdVaKEQQXmoOSZxqDbc3hppCge8oKwnuF3pwi1UEWKDYN2UFtu9EN2BhkYH0GFEaTBzmxPcbKeO6nRcD0DXDvbEnndZN4lVlKdL+Fgzc/kLXXixsvFuHH5xKgaJwFuXx3aLaCRhB/p5Gpr2YJUcvtYElXOh0PGl65FcmWgO59gH8EfpkjIk7dEd5oHslble4NLGYQNg7bbsU8hDBQe0tPs1zWiSRD1mPXouGxu4u+2Q==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(39850400004)(396003)(366004)(136003)(26005)(5660300002)(31686004)(64756008)(66446008)(316002)(66556008)(66946007)(31696002)(8676002)(2616005)(4326008)(71200400001)(66476007)(6512007)(8936002)(2906002)(15650500001)(7416002)(54906003)(36756003)(110136005)(478600001)(86362001)(53546011)(186003)(83380400001)(6486002)(6506007)(76116006);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 y/H0yJ9zBwjnXWlVYtMybDT6oFyMlFccRKpiaOUrrkidBO5Z/HJlIPnDusBMWQRTEEygZ+o8DwAC0grpFKKlrESyA4ewdtR9shiTdKo26uEr1dG25wnCexyjXMi9i59TaswxzS2dAaIUhboPY/FLbYi9/HHKmIPFt5n7tJC8BWw8fxLgOq3/HzOzfu669IdBNVn5gyCpKcMgVX8VWp/fR2EHINSbdLTHVz/9LyetxGbBX3k1D4TXtmU2ZZ2bfiv5dcIRyacX0hJljl3A8qSw12j1l/VQFKMziqNDdWonuF7iQBbD0HcZ1aCiKZlPRZqyLRByl+RRk+CG+6AXC+FGwb4mR4YH8xIRzH7iZT66S7Hoc4Yia6a8y7HWg+Pe0tJLcqJBhuUBD60cHCh/Og4zoevfjD7VSIWF3CgP12kE0l37XSnavTyrvC+w1pLheTLa3IYjPhObT6PmZTKC5sPL87oXDs/UVLlXGQxu9Cm8E/6cvp3trRFRU+Wk7zspl8TnvFAR5LZWTYk0ul6uCZGJJgn6N0iN02faS9LKzXogVsbEUSTnlieIWQQrr45qKf76LNPBuW/dHIak6HZvz8V6l/o/O4Otcu29BURIoito2XWbswiayhStFln6EqviIbInS56zgR7g/0VvX8vLdpi1DA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E21103095C06DF40A921F5D91F2C5487@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 460a6367-0608-4124-d491-08d887a06269
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 06:50:18.8047
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: n20/NDtuqoKUJDBCdxeUFgnCrcnIHbD/iWoXWKxJGTLghJpaag4+/ShCNVMnD+5TxdU6Qj0chg9lpznBmLfmeMu0LcUfrdc9vCZhw15oHJiE6h/reKUfxYhN6Wqz3vz4
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2195
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_05:2020-11-12,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0
 priorityscore=1501 adultscore=0 phishscore=0 impostorscore=0 spamscore=0
 malwarescore=0 suspectscore=0 mlxlogscore=999 mlxscore=0
 lowpriorityscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011130038


On 11/12/20 12:00 PM, Roger Pau Monné wrote:
> On Mon, Nov 09, 2020 at 02:50:30PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> Update hardware domain's BAR header as R-Car Gen3 is a non-ECAM host
>> controller, so vPCI MMIO handlers do not work for it in hwdom.
>>
>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>> ---
>>   xen/arch/arm/pci/pci-host-rcar-gen3.c | 69 +++++++++++++++++++++++++++
>>   1 file changed, 69 insertions(+)
>>
>> diff --git a/xen/arch/arm/pci/pci-host-rcar-gen3.c b/xen/arch/arm/pci/pci-host-rcar-gen3.c
>> index ec14bb29a38b..353ac2bfd6e6 100644
>> --- a/xen/arch/arm/pci/pci-host-rcar-gen3.c
>> +++ b/xen/arch/arm/pci/pci-host-rcar-gen3.c
>> @@ -23,6 +23,7 @@
>>   #include <xen/pci.h>
>>   #include <asm/pci.h>
>>   #include <xen/vmap.h>
>> +#include <xen/vpci.h>
>>   
>>   /* Error values that may be returned by PCI functions */
>>   #define PCIBIOS_SUCCESSFUL		0x00
>> @@ -307,12 +308,80 @@ int pci_rcar_gen3_config_write(struct pci_host_bridge *bridge, uint32_t _sbdf,
>>       return ret;
>>   }
>>   
>> +static void pci_rcar_gen3_hwbar_init(const struct pci_dev *pdev,
>> +                                     struct vpci_header *header)
>> +
>> +{
>> +    static bool once = true;
>> +    struct vpci_bar *bars = header->bars;
>> +    unsigned int num_bars;
>> +    int i;
> unsigned.
ok
>
>> +
>> +    /* Run only once. */
>> +    if (!once)
> Missing spaces.
ok
>
>> +        return;
>> +    once = false;
>> +
>> +    printk("\n\n ------------------------ %s ----------------------\n", __func__);
>> +    switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
>> +    {
>> +    case PCI_HEADER_TYPE_NORMAL:
>> +        num_bars = PCI_HEADER_NORMAL_NR_BARS;
>> +        break;
>> +
>> +    case PCI_HEADER_TYPE_BRIDGE:
>> +        num_bars = PCI_HEADER_BRIDGE_NR_BARS;
>> +        break;
>> +
>> +    default:
>> +        return;
>> +    }
>> +
>> +    for ( i = 0; i < num_bars; i++ )
>> +    {
>> +        uint8_t reg = PCI_BASE_ADDRESS_0 + i * 4;
>> +
>> +        if ( bars[i].type == VPCI_BAR_MEM64_HI )
>> +        {
>> +            /*
>> +             * Skip hi part of the 64-bit register: it is read
>> +             * together with the lower part.
>> +             */
>> +            continue;
>> +        }
>> +
>> +        if ( bars[i].type == VPCI_BAR_IO )
>> +        {
>> +            /* Skip IO. */
>> +            continue;
>> +        }
>> +
>> +        if ( bars[i].type == VPCI_BAR_MEM64_LO )
>> +        {
>> +            /* Read both hi and lo parts of the 64-bit BAR. */
>> +            bars[i].addr =
>> +                (uint64_t)pci_conf_read32(pdev->sbdf, reg + 4) << 32 |
>> +                pci_conf_read32(pdev->sbdf, reg);
>> +        }
>> +        else if ( bars[i].type == VPCI_BAR_MEM32 )
>> +        {
>> +            bars[i].addr = pci_conf_read32(pdev->sbdf, reg);
>> +        }
>> +        else
>> +        {
>> +            /* Expansion ROM? */
>> +            continue;
>> +        }
> Wouldn't this be much simpler as:
Yes, seems to be simpler, thank you
>
> bars[i].addr = 0;
> switch ( bars[i].type )
> {
> case VPCI_BAR_MEM64_HI:
>      bars[i].addr = (uint64_t)pci_conf_read32(pdev->sbdf, reg + 4) << 32;
>      /* fallthrough. */
> case VPCI_BAR_MEM64_LO:
>       bars[i].addr |= pci_conf_read32(pdev->sbdf, reg);
>       break;
>
> default:
>      break;
> }
>
> I also wonder why you only care about the address but not the size of
> the BAR.
Yes, size needs to be updated as well, even for RFC
>
> Thanks, Roger.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 07:45:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 07:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26231.54442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdTlI-00042Z-Hx; Fri, 13 Nov 2020 07:44:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26231.54442; Fri, 13 Nov 2020 07:44:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdTlI-00042S-Ed; Fri, 13 Nov 2020 07:44:48 +0000
Received: by outflank-mailman (input) for mailman id 26231;
 Fri, 13 Nov 2020 07:44:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdTlG-00041u-Uu
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 07:44:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f790ee0f-7b32-4653-8a32-4889262e35f0;
 Fri, 13 Nov 2020 07:44:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdTl8-000291-Q6; Fri, 13 Nov 2020 07:44:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdTl8-0005wE-FT; Fri, 13 Nov 2020 07:44:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdTl8-00072B-F0; Fri, 13 Nov 2020 07:44:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdTlG-00041u-Uu
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 07:44:47 +0000
X-Inumbo-ID: f790ee0f-7b32-4653-8a32-4889262e35f0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f790ee0f-7b32-4653-8a32-4889262e35f0;
	Fri, 13 Nov 2020 07:44:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lzk8eR7ikNLI17QztykGmJ09lD5evRyKz0PmoYWG22g=; b=zn35nW/RZ+NwHdrAEH3/rQTcy2
	4BA1QwBCavTSU1Nx1GUcG1hoJZdMCEu9qiKWGWmYxX5jZNXtR1mTmrtebWCUg8gPFITz/M00JtgWk
	X1uCEPrmvvAextEiR985ckU4WhpNlvCrXhUthRYDcpW9U1V8dMPdxBPXbhcVrLGlQh5M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdTl8-000291-Q6; Fri, 13 Nov 2020 07:44:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdTl8-0005wE-FT; Fri, 13 Nov 2020 07:44:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdTl8-00072B-F0; Fri, 13 Nov 2020 07:44:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156711-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156711: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-arm64-libvirt:libvirt-build:fail:regression
    xen-unstable:test-amd64-amd64-pygrub:guest-localmigrate:fail:regression
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 07:44:38 +0000

flight 156711 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156711/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 156443
 test-amd64-amd64-pygrub      17 guest-localmigrate       fail REGR. vs. 156443
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 156443

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 156443

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    7 days
Failing since        156524  2020-11-06 14:22:28 Z    6 days    8 attempts
Testing same since   156711  2020-11-12 09:06:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Penny Zheng <penny.zheng@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 378 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 09:45:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 09:45:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26252.54471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdVdM-0006N7-3I; Fri, 13 Nov 2020 09:44:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26252.54471; Fri, 13 Nov 2020 09:44:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdVdM-0006N0-0R; Fri, 13 Nov 2020 09:44:44 +0000
Received: by outflank-mailman (input) for mailman id 26252;
 Fri, 13 Nov 2020 09:44:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdVdL-0006Mv-5n
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 09:44:43 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee93d9c4-8550-46a0-9e0e-b69a7b24de5f;
 Fri, 13 Nov 2020 09:44:41 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AD9iX1N024264
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 10:44:34 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 9C1E52E9CA8; Fri, 13 Nov 2020 10:44:28 +0100 (MET)
Date: Fri, 13 Nov 2020 10:44:28 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113094428.GC1512@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 13 Nov 2020 10:44:34 +0100 (MET)

On Thu, Nov 12, 2020 at 09:19:39PM +0100, Roger Pau Monné wrote:
> 
> The following might be able to get you going, but I think I need to
> refine the logic a bit there, will have to give it some thought.

Thanks. It avoids the panic, but it seems that MSI and MSI-X interrupts are
not delivered to the dom0 kernel; the interrupt counters stay at 0.
I do get some IOAPIC interrupts, though, as well as some Xen events.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 09:53:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 09:53:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26259.54487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdVlS-0007Ir-1C; Fri, 13 Nov 2020 09:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26259.54487; Fri, 13 Nov 2020 09:53:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdVlR-0007Ik-TQ; Fri, 13 Nov 2020 09:53:05 +0000
Received: by outflank-mailman (input) for mailman id 26259;
 Fri, 13 Nov 2020 09:53:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdVlQ-0007IE-Hf
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 09:53:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb3613bf-b9b5-4331-800e-4f3210143ac6;
 Fri, 13 Nov 2020 09:53:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0AB1AE44;
 Fri, 13 Nov 2020 09:53:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605261182; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NwDVrFfky0ASlO5W+e+NuHFVPCB6tofe3lMJsBD2Gik=;
	b=IhfMW9DKZaVILdlnQ8valjr/ywAhrbQP3tZwOpcEBaLi3gmZnQUEgnYdKSsdUMf36KmiI9
	97GLPU59UfiRdf7KMVm63YgRj3C6xOjU3Jk5tUQPD0+Z0x5OZiep/wjZy/PU885UnbZpvg
	d96SSaFqwmhshRvPyDoxJTOY3ectxrA=
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
To: Tim Deegan <tim@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
 <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
 <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
 <20201112130709.r3acpkrkyck6arul@Air-de-Roger>
 <51e646d4-3e1b-3698-c649-a39840275ec9@suse.com>
 <20201112175221.GB43943@deinos.phlegethon.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <40055cf5-ab16-ad73-4446-3f8f730a6613@suse.com>
Date: Fri, 13 Nov 2020 10:52:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201112175221.GB43943@deinos.phlegethon.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 18:52, Tim Deegan wrote:
> At 15:04 +0100 on 12 Nov (1605193496), Jan Beulich wrote:
>> On 12.11.2020 14:07, Roger Pau Monné wrote:
>>> On Thu, Nov 12, 2020 at 01:29:33PM +0100, Jan Beulich wrote:
>>>> I agree with all this. If only it was merely about TLB flushes. In
>>>> the shadow case, shadow_blow_all_tables() gets invoked, and that
>>>> one - looking at the other call sites - wants the paging lock held.
> [...]
>>> The post hook for shadow could pick the lock again, as I don't think
>>> the removal of the tables needs to be strictly done inside of the same
>>> locked region?
>>
>> I think it does, or else a use of the now stale tables may occur
>> before they got blown away. Tim?
> 
> Is this the call to shadow_blow_tables() in the write_p2m_entry path?

Yes.

> I think it would be safe to drop and re-take the paging lock there as
> long as the call happens before the write is considered to have
> finished.
> 
> But it would not be a useful performance improvement - any update that
> takes this path is going to be very slow regardless.  So unless you
> have another pressing reason to split it up, I would be inclined to
> leave it as it is.  That way it's easier to see that the locking is
> correct.

Thanks for the clarification.

Roger - what's your view at this point?

Jan
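
[Illustration, not actual Xen code: a minimal sketch, with hypothetical names, of the locking pattern Tim describes — the paging lock may be dropped and re-taken around the expensive teardown, as long as the write only counts as finished once the teardown has run.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for Xen's paging lock (the real one is per-domain). */
static bool paging_locked;
static void paging_lock(void)   { assert(!paging_locked); paging_locked = true; }
static void paging_unlock(void) { assert(paging_locked);  paging_locked = false; }

static int writes_completed;

/* Stand-in for the expensive shadow teardown, which wants the paging
 * lock held (like shadow_blow_tables() in the discussion). */
static void blow_tables_sketch(void) { assert(paging_locked); }

/*
 * Hypothetical write path: the lock is dropped and re-taken around the
 * teardown, but the write is only reported complete afterwards, so no
 * caller can treat the entry as written while stale tables still exist.
 */
static void write_entry_sketch(void)
{
    paging_lock();
    /* ... update the entry itself ... */
    paging_unlock();

    paging_lock();           /* re-take purely for the teardown */
    blow_tables_sketch();
    paging_unlock();

    writes_completed++;      /* only now is the write "finished" */
}
```

[As Tim notes, splitting the locked region this way buys little here, since any update on this path is slow anyway; keeping one region makes the locking easier to audit.]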


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:26:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 10:26:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26270.54509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWHI-0001ea-MO; Fri, 13 Nov 2020 10:26:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26270.54509; Fri, 13 Nov 2020 10:26:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWHI-0001eT-JN; Fri, 13 Nov 2020 10:26:00 +0000
Received: by outflank-mailman (input) for mailman id 26270;
 Fri, 13 Nov 2020 10:26:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdWHH-0001eO-Ub
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:25:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cea3c7f3-be38-416f-aa9f-12774e890a7a;
 Fri, 13 Nov 2020 10:25:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5C374AE42;
 Fri, 13 Nov 2020 10:25:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605263157; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uPyTAIuzzW32T5Lk6I/mvLuQcju8+Po2tmF4agL67NY=;
	b=RLmYKXSpXjXOx4tDfP7SHkzQn8cv/+QYo+Cb7/bGrRUyN7xDj5bws/rEyaTzdjHSvr4d4m
	oaVkq58ZbbBG15IR3IwApZfNeQD7knxtlltTg1BUg5uk7ASyZmAsCDKBMK9m5Y48t82uIq
	Rq6H8rGO4EN9wb7DkHXaGxJ8ej8kc1Y=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
Date: Fri, 13 Nov 2020 11:25:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                                  void *data)
>>>>> +{
>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>> +
>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>> +
>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>> +    if ( !vbar )
>>>>> +        return ~0;
>>>>> +
>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>> +}
>>>>> +
>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>> +                               uint32_t val, void *data)
>>>>> +{
>>>>> +    struct vpci_bar *bar = data;
>>>>> +
>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>> +    else
>>>>> +    {
>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>> +
>>>>> +        if ( !vbar )
>>>>> +            return;
>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>> +    }
>>>>> +}
>>>> You should assign different handlers based on whether the domain that
>>>> has the device assigned is a domU or the hardware domain, rather than
>>>> doing the selection here.
>>> Hm, handlers are assigned once in init_bars and this function is only
>>> called for hwdom, so there is no way I can do that for the guests. Hence
>>> the dispatcher.
>> I think we might want to reset the vPCI handlers when a devices gets
>> assigned and deassigned.
> 
> In the ARM case init_bars is called too early: PCI device assignment is
> currently initiated by Domain-0's kernel and is done *before* PCI devices
> are given memory ranges and BARs assigned:
> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
> 
> < init_bars >
> 
> [    0.453793] pci 0000:00:00.0: -- IRQ 0
> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
> 
> < init_bars >
> 
> [    0.471821] pci 0000:01:00.0: -- IRQ 255
> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> 
> < BAR assignments below >
> 
> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
> 
> In case of x86 this is pretty much OK, as BARs are already in place, but
> for ARM we need to take care and re-setup vPCI BARs for hwdom.

Even on x86 there's no guarantee that all devices have their BARs set
up by firmware.

In a subsequent reply you've suggested to move init_bars from "add" to
"assign", but I'm having trouble seeing what this would change: It's
not Dom0 controlling assignment (to itself), but Xen assigns the device
towards the end of pci_add_device().

> Things are getting even more complicated if the host PCI bridge is not
> ECAM-like, so you cannot set mmio_handlers and trap hwdom's access to the
> config space to update BARs etc. This is why I have that ugly hack for
> rcar_gen3 to read the actual BARs for hwdom.

How do config space accesses work there? At the latest for MSI/MSI-X it'll
be imperative that Xen be able to intercept config space writes.

>>   In order to do passthrough to domUs safely
>> we will have to add more handlers than what's required for dom0,
> Can you please tell what you are thinking about? What are the missing handlers?
>>   and
>> having is_hardware_domain sprinkled in all of them is not a suitable
>> solution.
> 
> I'll try to replace is_hardware_domain with something like:
> 
> +/*
> + * Detect whether physical PCI devices in this segment belong
> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
> + * but in case of ARM this might not be the case: those may also
> + * live in driver domains or even Xen itself.
> + */
> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
> +{
> +#ifdef CONFIG_X86
> +    return is_hardware_domain(d);
> +#elif defined(CONFIG_ARM)
> +    return pci_is_owner_domain(d, seg);
> +#else
> +#error "Unsupported architecture"
> +#endif
> +}
> +
> +/*
> + * Get domain which owns this segment: for x86 this is always hardware
> + * domain and for ARM this can be different.
> + */
> +struct domain *pci_get_hardware_domain(u16 seg)
> +{
> +#ifdef CONFIG_X86
> +    return hardware_domain;
> +#elif defined(CONFIG_ARM)
> +    return pci_get_owner_domain(seg);
> +#else
> +#error "Unsupported architecture"
> +#endif
> +}
> 
> This is what I use to properly detect the domain that really owns the physical host bridge.

I consider this problematic. We should try not to let Arm's and x86's
PCI implementations diverge too much, i.e. at least the underlying basic
model had better be similar. For example, if entire segments can be
assigned to a driver domain on Arm, why should the same not be possible
on x86?

Furthermore I'm suspicious about segments being the right granularity
here. On ia64 multiple host bridges could (and typically would) live
on segment 0. IIRC I once saw output from an x86 system which was
apparently laid out similarly. Therefore, just like for MCFG, I think
the granularity wants to be bus ranges within a segment.
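
[Purely as an illustration — hypothetical names, not actual Xen code — ownership at the granularity suggested above, bus ranges within a segment rather than whole segments, might look like:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical ownership record, keyed like an MCFG region: a bus range
 * within a segment, not the segment as a whole. */
struct pci_range_owner {
    uint16_t seg;
    uint8_t  start_bus, end_bus;   /* inclusive */
    unsigned int domid;
};

/* Example table: two host bridges sharing segment 0, owned by
 * different domains. */
static const struct pci_range_owner owners[] = {
    { .seg = 0, .start_bus = 0x00, .end_bus = 0x3f, .domid = 0 },
    { .seg = 0, .start_bus = 0x40, .end_bus = 0xff, .domid = 1 },
};

/* Return the owning domain id, or -1 if the bus isn't covered. */
static int pci_range_owner_lookup(uint16_t seg, uint8_t bus)
{
    for ( size_t i = 0; i < sizeof(owners) / sizeof(owners[0]); i++ )
        if ( owners[i].seg == seg &&
             bus >= owners[i].start_bus && bus <= owners[i].end_bus )
            return (int)owners[i].domid;
    return -1;
}
```

[This accommodates the ia64-style layout mentioned above, where multiple host bridges live on segment 0; a per-segment owner could not express it.]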

Jan
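
[A rough sketch — hypothetical names and signatures, not the actual vPCI code — of the alternative Roger suggested earlier in the thread: select the handler pair once, when the device is assigned, instead of branching on is_hardware_domain() inside every handler.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t (*bar_read_t)(void *data);
typedef void (*bar_write_t)(uint32_t val, void *data);

struct vpci_bar_handlers {
    bar_read_t  read;
    bar_write_t write;
};

/* Hypothetical handler bodies; the return values are placeholders. */
static uint32_t bar_read_hwdom(void *data) { (void)data; return 0xf0000000u; }
static uint32_t bar_read_guest(void *data) { (void)data; return 0x40000000u; }
static void bar_write_hwdom(uint32_t v, void *d) { (void)v; (void)d; }
static void bar_write_guest(uint32_t v, void *d) { (void)v; (void)d; }

/* Chosen once at device assign/deassign time, so the per-access path
 * never needs to ask which kind of domain is making the access. */
static struct vpci_bar_handlers pick_handlers(bool hwdom)
{
    struct vpci_bar_handlers h = {
        .read  = hwdom ? bar_read_hwdom : bar_read_guest,
        .write = hwdom ? bar_write_hwdom : bar_write_guest,
    };
    return h;
}
```

[The trade-off debated in the thread is when the selection can happen: this only works if the handlers can be (re)installed at assignment time, which the early init_bars call on Arm currently prevents.]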


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 10:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26274.54521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWKp-0001pt-66; Fri, 13 Nov 2020 10:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26274.54521; Fri, 13 Nov 2020 10:29:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWKp-0001pm-30; Fri, 13 Nov 2020 10:29:39 +0000
Received: by outflank-mailman (input) for mailman id 26274;
 Fri, 13 Nov 2020 10:29:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdWKn-0001pg-TJ
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:29:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17c7362b-c7ff-4fcb-b94e-bafa79243a65;
 Fri, 13 Nov 2020 10:29:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0C1DAE42;
 Fri, 13 Nov 2020 10:29:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605263375; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KGNFsPoPCy3Nl4ebayQGaJ4Fn4Bk+PbHJCKoeTE7JY4=;
	b=Ts55RB2dhEw8HLzKNgbWs4Mh6RsUaVy2DdjIVGrf0pLf8Oy2B4pmBZPnnb6X3H4lEHsQxi
	QJJLXjMICru/bdc1D3gVtNmlVX9/fEkiD8RXT9zgBfhuIygPNTnJ8GYBl90cGazhDET6ls
	3HRWECnoSbRgbuTBqx2bcKtyYzweWFg=
Subject: Re: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
To: Oleksandr Andrushchenko <andr2000@gmail.com>
Cc: iwj@xenproject.org, wl@xen.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 xen-devel@lists.xenproject.org, julien.grall@arm.com,
 Bertrand.Marquis@arm.com, sstabellini@kernel.org, roger.pau@citrix.com,
 Rahul.Singh@arm.com
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-9-andr2000@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae66dddb-98e3-61fd-86c3-eab30ec33d18@suse.com>
Date: Fri, 13 Nov 2020 11:29:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201109125031.26409-9-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> Non-ECAM host bridges in hwdom go directly to PCI config space,
> not through vpci (they use their specific method for accessing PCI
> configuration, e.g. dedicated registers etc.).

And access to these dedicated registers can't be intercepted? It
would seem to me that if so, such a platform is not capable of
being virtualized (without cooperation by all the domains in
possession of devices).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:36:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 10:36:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26283.54535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWRD-0002jx-Us; Fri, 13 Nov 2020 10:36:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26283.54535; Fri, 13 Nov 2020 10:36:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWRD-0002jq-Qu; Fri, 13 Nov 2020 10:36:15 +0000
Received: by outflank-mailman (input) for mailman id 26283;
 Fri, 13 Nov 2020 10:36:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kdWRC-0002jl-Ew
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:36:14 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kdWRC-0001LK-0x; Fri, 13 Nov 2020 10:36:14 +0000
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Jan Beulich <jbeulich@suse.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fd656848-1eda-686d-d74c-f10e3ecfe49a@xen.org>
Date: Fri, 13 Nov 2020 10:36:11 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 13/11/2020 10:25, Jan Beulich wrote:
> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>> +                                  void *data)
>>>>>> +{
>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>> +
>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>> +
>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>> +    if ( !vbar )
>>>>>> +        return ~0;
>>>>>> +
>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>> +}
>>>>>> +
>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>> +                               uint32_t val, void *data)
>>>>>> +{
>>>>>> +    struct vpci_bar *bar = data;
>>>>>> +
>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>> +    else
>>>>>> +    {
>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>> +
>>>>>> +        if ( !vbar )
>>>>>> +            return;
>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>> +    }
>>>>>> +}
>>>>> You should assign different handlers based on whether the domain that
>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>> doing the selection here.
>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher.
>>> I think we might want to reset the vPCI handlers when a device gets
>>> assigned and deassigned.
>>
>> In the ARM case init_bars is called too early: PCI device assignment is
>> currently initiated by Domain-0's kernel and is done *before* PCI devices
>> are given memory ranges and BARs assigned:
>>
>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>
>> < init_bars >
>>
>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>
>> < init_bars >
>>
>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>
>> < BAR assignments below >
>>
>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>
>> In the case of x86 this is pretty much ok as BARs are already in place, but
>> for ARM we need to take care and re-set up vPCI BARs for hwdom.
> 
> Even on x86 there's no guarantee that all devices have their BARs set
> up by firmware.
> 
> In a subsequent reply you've suggested to move init_bars from "add" to
> "assign", but I'm having trouble seeing what this would change: It's
> not Dom0 controlling assignment (to itself), but Xen assigns the device
> towards the end of pci_add_device().
> 
>> Things are getting even more complicated if the host PCI bridge is not
>> ECAM-like, so you cannot set mmio_handlers and trap hwdom's access to the
>> config space to update BARs etc. This is why I have that ugly hack for
>> rcar_gen3 to read actual BARs for hwdom.
> 
> How do config space accesses work there? At the latest for MSI/MSI-X it'll
> be imperative that Xen be able to intercept config space writes.

I am not sure I understand your last sentence. Are you saying that we 
always need to trap accesses to the MSI/MSI-X message in order to sanitize it?

If one is using the GICv3 ITS (I haven't investigated other MSI 
controllers), then I don't believe you need to sanitize the MSI/MSI-X 
message in most situations.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:39:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 10:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26287.54547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWUH-0002uW-De; Fri, 13 Nov 2020 10:39:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26287.54547; Fri, 13 Nov 2020 10:39:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWUH-0002uP-9n; Fri, 13 Nov 2020 10:39:25 +0000
Received: by outflank-mailman (input) for mailman id 26287;
 Fri, 13 Nov 2020 10:39:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdWUF-0002uH-Rt
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:39:24 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58996885-a1df-464d-b007-2177cd880699;
 Fri, 13 Nov 2020 10:39:22 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADAapQi018598; Fri, 13 Nov 2020 10:39:16 GMT
Received: from eur02-am5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2056.outbound.protection.outlook.com [104.47.4.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf7yync3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 10:39:16 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4433.eurprd03.prod.outlook.com (2603:10a6:208:cf::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 10:39:14 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 10:39:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kdWUF-0002uH-Rt
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:39:24 +0000
X-Inumbo-ID: 58996885-a1df-464d-b007-2177cd880699
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 58996885-a1df-464d-b007-2177cd880699;
	Fri, 13 Nov 2020 10:39:22 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
	by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0ADAapQi018598;
	Fri, 13 Nov 2020 10:39:16 GMT
Received: from eur02-am5-obe.outbound.protection.outlook.com (mail-am5eur02lp2056.outbound.protection.outlook.com [104.47.4.56])
	by mx0a-0039f301.pphosted.com with ESMTP id 34rf7yync3-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 13 Nov 2020 10:39:16 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jkAUq3Od0LqaZhaB6vpXL97KzwcyLn4INdY0aoWYCzokUBSfg4GZM/XtkD2HztQ7Wf1R9yy8XCSu0ju230dFQk/FD2v67KXxkyrEXu/wQwO3CguyNExKhM9aK7sckrtwYTOcHJ8Gsk5Zvb66LIOaIzJtwb20LHlVQUEb+5Pd0/ls4lCzHUuEbfg9KAfy5TDYY9Y3BTr3MZfL4bqFdcs9H9bKyJpt0JC7rBegLWGxspkEMffH86RWbi7FWdJ4mXr/Daa8RKl8x/b/ak+LYLXbygG4sgS/90N5uq1pzdHxCF515bDaYSqhiORiZ9uPt6jSJmf2zfBty79PXGc8ykhY+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WCClWK2e5vOQFNf52Cc4gu7Npc68nDufeBrTfr+3aYw=;
 b=hd4jcHk3Twjp3faG7r5HLZoBPJsYd94Aa1Gt+mUik9m8wRkP/HgSl1wIennUqa/QlMHcWO0WdxQyfQF/QsXf/1Vz6KqVpYYbmqpw4COVbC/vi5w/j7FCD7en0ghfHujWedCbrim0SXFwMnSvogSbxymfFYQoPuauioA9vvrK2fsuyA5qU6P6gmkcV2fbTuELxa0RjGs/JedR+BFA3b9f6G0VVstQIgtrTPmFJGeGwhCSdM8vLIWBb78fvU/tbC164TTPXarxqYx6d6/fCHXSvOyp8TzjIVTTjGlgy4lzdJJqzvG4FQowDwpM9n6q+To8Wt//XWNfPL5wNJFehWyMGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WCClWK2e5vOQFNf52Cc4gu7Npc68nDufeBrTfr+3aYw=;
 b=4lQdpA/rRvsZOWSZDUI42/RLcWNs48LGiuGZ2VQz2VGRNq43C3t6DciF3s/soh7zkDGiRZXHKzR9nNWQNGbIZzUKJd7HF9EfYJnHyurSpKw4owUCRVGOy+wEEh7Z6hb5UChysQS5FzTXS9cSDucLlH5qME4NF9UZx805uyx8+3dF/HcuNC90GwMBDIH6O2R8oFxGakRmcPEmDISYrTeJ9AgcOUnH6Lmo07LnduRRfZr053UvlEU43pURU4cj0A0JRBtpprVL5O/4mXbGblDJR7SYmDL4BUU9EmADjwZy/abbrXAp4NtCzqwtDZr8X/vUxGSO9CRyeHGmVfO0aU8YSA==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4433.eurprd03.prod.outlook.com (2603:10a6:208:cf::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 10:39:14 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 10:39:14 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>
CC: "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>
Subject: Re: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Thread-Topic: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Thread-Index: AQHWtpbyYHzQ7b7gKkKm7jsDdoAEE6nF4pWAgAACsYA=
Date: Fri, 13 Nov 2020 10:39:14 +0000
Message-ID: <1f0a3eb7-be20-95a0-0c1e-c7d45e3279f8@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-9-andr2000@gmail.com>
 <ae66dddb-98e3-61fd-86c3-eab30ec33d18@suse.com>
In-Reply-To: <ae66dddb-98e3-61fd-86c3-eab30ec33d18@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2ddb423e-ea42-48ca-8319-08d887c05d59
x-ms-traffictypediagnostic: AM0PR03MB4433:
x-microsoft-antispam-prvs: 
 <AM0PR03MB4433D64C7C6D053DD3CA20C1E7E60@AM0PR03MB4433.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 bMUo4V00u6sDXuwlDp/MKEjMY3+grznjA3XqPVpV8KRys1LZ+Fj/LWhRxhAy5Ggyia1N5I/YDyf90SgngjOxhyQhuSEwa7GoCUIDfQnulQDZeGM1+rEgGQ3KSZBfhoWAzV44jRfZixnP+eN0sf6cnRmguoCZrkLrLLnLhSpQZv/8BzqqTv/Tm/Jq5i29LedJYd4rWb443rpFaWBAHi2hgR1biF4H4m+hQxB5TfzAUHs0JGng8zk+uhNlNWytOSyywihDoNSvoEPUp9n3a2vZKJ6ecGJx4lr2zRRSw2XZRnNE1f9mcLrzjBWgc4HbiBaQTK/s0OHu3PKY0ZlvXV8m4Q==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(396003)(366004)(346002)(83380400001)(54906003)(6486002)(31686004)(53546011)(31696002)(6506007)(71200400001)(478600001)(316002)(76116006)(26005)(91956017)(8676002)(8936002)(66476007)(66946007)(7416002)(2906002)(2616005)(4744005)(6512007)(110136005)(4326008)(66556008)(66446008)(64756008)(5660300002)(36756003)(86362001)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 U6/r4CfUF3tqBZ5tQ6URmfNfkxtarEhIvE7wFY5i6Q4DKDzTCtxlPwmb3JqiaJRRgP3XYg4yFHf5ZhDMjMlwmDzUHjNHKuNP35fsezER+IAvLhgz040d4zBj+lRYZVFMCz7byuFDeWLPaEf/us+Uf9wjyv9jc8RHkpse1WkoBq9dA3HJbjIP5YhnKlwXCY/vcqYnD04S8J5lv2Bvmf8bGtWDAVHJDffS334TCxepQwPJ6nHYLXiWmxP/ytrxEk3syOZpBtkNUgHpBeyGI8t8mafStxXoDx5233RpUvMN5N7NmUJyWczjysKZAjXWkkz4mHi9df/BwnQGNw+6Xa6ZRfqCjyCoKtNCDHbC7RmKP14ltXcX1FjiRgw4jGQLZkmpidaK4wYPxuVoCy2RSljil6WqtfbmAqU/XUK0+syg5bXZnR/0byhn7nLG1s4U4neVoVdG9/717XRBhWI0cuemxKVn3jGXaGMusCnNL2ALB0QZRBV6OvSLL64EvycJ35V7MUYfK/wt2aWfHR3U+/PceDXmS5RvSDj5t45r/6QtL3TCpiaHf5K7IYDndEHD1LISmFjGR5wmXWswgc8TIdxapx5mAmJJbShsGb6O7RESSclcU4He/OP987bOuq3JxrepYyw2jzham+qIPfwCT1liRw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <5941C4E1305D7C40874075A2A9E0FAEB@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ddb423e-ea42-48ca-8319-08d887c05d59
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 10:39:14.1597
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: NtSuYIknWKiRuFVu6194SdvPBiuqf4S1sNZmzasU9/Bzri/r+3dK253TT1EJGLOii6Q08bVUqnIRqiVjP1wSmjEBjM+QvjBHnmfvVNcdPRUT8Ta+GCj5UUVNOAc/73sg
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4433
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_07:2020-11-13,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 suspectscore=0
 impostorscore=0 malwarescore=0 bulkscore=0 priorityscore=1501
 clxscore=1015 spamscore=0 phishscore=0 adultscore=0 lowpriorityscore=0
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011130064

On 11/13/20 12:29 PM, Jan Beulich wrote:
> On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> Non-ECAM host bridges in hwdom go directly to PCI config space,
>> not through vpci (they use their specific method for accessing PCI
>> configuration, e.g. dedicated registers etc.).
> And access to these dedicated registers can't be intercepted?

It can. But then you have to fully emulate that bridge, e.g.
"if we write A to regB and after that write C to regZ then it
means we are accessing config space. If we write....".
I mean, this would need lots of code in Xen to achieve that.

>   It
> would seem to me that if so, such a platform is not capable of
> being virtualized (without cooperation by all the domains in
> possession of devices).

Guest domains always use an emulated ECAM bridge and are easily
trapped and emulated.

>
> Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:47:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 10:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26303.54569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWbS-0003uU-DW; Fri, 13 Nov 2020 10:46:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26303.54569; Fri, 13 Nov 2020 10:46:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWbS-0003uN-AG; Fri, 13 Nov 2020 10:46:50 +0000
Received: by outflank-mailman (input) for mailman id 26303;
 Fri, 13 Nov 2020 10:46:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdWbQ-0003uI-KH
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:46:48 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae06d3eb-c932-4343-ae06-47ce1bee7be3;
 Fri, 13 Nov 2020 10:46:47 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADAeQXZ025774; Fri, 13 Nov 2020 10:46:41 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2111.outbound.protection.outlook.com [104.47.18.111])
 by mx0b-0039f301.pphosted.com with ESMTP id 34rf7yg1fc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 10:46:41 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB7122.eurprd03.prod.outlook.com (2603:10a6:20b:2d7::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 10:46:39 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 10:46:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kdWbQ-0003uI-KH
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:46:48 +0000
X-Inumbo-ID: ae06d3eb-c932-4343-ae06-47ce1bee7be3
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ae06d3eb-c932-4343-ae06-47ce1bee7be3;
	Fri, 13 Nov 2020 10:46:47 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
	by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 0ADAeQXZ025774;
	Fri, 13 Nov 2020 10:46:41 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com (mail-am6eur05lp2111.outbound.protection.outlook.com [104.47.18.111])
	by mx0b-0039f301.pphosted.com with ESMTP id 34rf7yg1fc-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 13 Nov 2020 10:46:41 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dc8T5ozEp9nnBTFGk80WxbhNgr8hRnf41mK7wS0OvmzOr+0u4jXNfYfvhgwtvYl3QK+vHQBaOZmC/Mdhu3PVMQqO9JWlalLpIxN1B2grqa5p19VcQWFbXVjP5xQAMYSnJJQ5DrDjWkSkNviw1NfulOUZ5MDvkLOIoJ8lNPQHwyW+GwNIl5TDhEUkTMxrpRw3ebYWd14KhSqK/V/Q/q3XlMB/eI7N3TyHXRt46Y3p5qioGDz76C+xQ0cAneY/MLaZfeyw/Nz13YIJdJYSUO6QnAbmwv+/cKGGiS0AVUWtwYxKSmEt7lx4FOj2YNt4/b8Yx3Uagm6MYYlHNicruX1gXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YZiq79gaOgQ3DZieciyOSU0byE85sGosJ0zutjqC+rw=;
 b=P8no0Olw1aD63CcEcgGVUgbWI4mWYHDjjhqXAsbzjVhy4Z6ymrKYMDU6CA5ZqTXDyhxj6oONzstnXF3G/Hcj74jwauezQTV96YulhIRk0/nI6Qz1LZP9iTaB3Brzegp95bDQjxLazEyfcVPUGppKM0jjY5w2BOiwm7KOrOip6bH+6+Inr0IyZ0jtJ3p7j83YMVgbAfEPp899OI/O1Llw9BzqYtSs2sJFyAlQyLnICuQsDOfLIjl500017El2Uj4Fj2W1vw4L/fmtOFiNeQ8THrFEny2y4PLnEXDiDuUmMM5DhLSrYe1sOs2a2+iuJrz9aUgP1Lm3Ed5Z8tojYdhvBQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YZiq79gaOgQ3DZieciyOSU0byE85sGosJ0zutjqC+rw=;
 b=uchJb3YbLQY3teNMCbMNC19/2Y5c8wQCtrpagVf0yQFk6qLr9uytGzUC3ltV9VN8meyG1ebcw1UNv0Qy/tAF4n9l5nURs+rhtKIHheb9vIA+yKyERFRZxgR6fTJljJEQuwi7ufH9LVA3dOEuE/lKhCMDqpoQnKF+F2DR5ve42xhPwOW5f33kboN9YKdJYET6CoEBaiVER9OIN9DntTt9w0P8uTbMZ6pMIuCD6yEhOg7BhZzC8WghM264+UC9cVrus2nRpD92bJ2a3Rr4soKAA/h6xrJp3BSgHCI1H9nFC4ks/QPjvO8JpLP8gvtiv6MZl5z6XHHa4GX1IOjawKcrWQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB7122.eurprd03.prod.outlook.com (2603:10a6:20b:2d7::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 10:46:39 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 10:46:39 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAA==
Date: Fri, 13 Nov 2020 10:46:39 +0000
Message-ID: <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
In-Reply-To: <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 8b17b7e1-8a9b-4ed4-36e0-08d887c166db
x-ms-traffictypediagnostic: AM9PR03MB7122:
x-microsoft-antispam-prvs: 
 <AM9PR03MB7122EEA5FF20937D033A121BE7E60@AM9PR03MB7122.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5797;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 Fjk01S1gj1XDy3G9LSvTNUftfpeUHwvTGSsyY+y3YoZZCu5dD6ScQ3Br5YC7YNtVqtJK5fbXBlQNdtVgmhtXxp52iMOG6AE1AiOYfW46oSaqjyujwerVUej/GIDdi04iEU8xR6VCmUnb6imJd5Rz5DLKNBNnUlzQGAhr4Kc4i2hvlR1U3YXPV1dlKH2aI5stWPPHv4KWUMhxz5Gb/HAx5IDVIrNJ7U2zH2FE87CWF2KZifICqZrEkllgdyELW2eVWflP2Tw0BLfpt1L6zFRlSmBRTEse88EMSbHKt7Pl7tM39JzNLPq2JBwIeZwunhW9IGSPhiuLRK1sWFzxlDmJaQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(396003)(366004)(346002)(6506007)(64756008)(6512007)(8676002)(478600001)(66476007)(4326008)(186003)(2906002)(86362001)(54906003)(6916009)(26005)(53546011)(7416002)(2616005)(6486002)(31696002)(71200400001)(66556008)(76116006)(66446008)(66946007)(36756003)(83380400001)(5660300002)(8936002)(316002)(31686004);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 Rs0X4L6xLqVQIMcwveuysdPbVzydUC0oDYOOtg8ZJbydmneDo9C+fC5olUyKIPU5LVSIF4bUKOEPV7z3cp01eN9nPJ3R1U0USecJNn394Jdarv4KX42zCqKzdghDasxUXaMdKd/yDUvuMOvbuXJIyWqqksQlezZGHA89ChU7PT+jhoaNPSp1NVK0uRPSRtrIPjwoNZRZ+HzImaqVFVUftkv3kDioANUhZiLLDbswd/lM4PG2W44IN2Ot1zw0GkWQlt6YeyLYP7la8oL1DX4+X6yIP36OVGjiWUnvQc6krUTE/pnU9nAUk5NR2GkAkE5PC+N1z/Qp6+acGg9fkN2EawJeh0tMC8KnW0v/mrYRci/e7F0i3NRtbgbsZ9Z6t1IqT+itqbwml7+DTw7chCpl3vSXwIjer/MvkuY+AEo/4fzIOM63+/YT06FrgRlYqBkIungqVJ3TGJFz/as1SOgpXKNtXrex1CxQ8POqhUsaXal4ni2xMPWYfLbYYW7AuUUSkc7w9Ge9d2NYU5Wl5CAslKPlVjXvZWhGObqt28CN716GiCHAHg5AWwTJi+d+tZ4RU8p3PG9BUns+yob7iM8ikOkW5JI/jJoQTZw8xalOKcclk0EwLnuutz75YdmnE1hE5d666ZI7Pw9WGSeEUa2s2w==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <98730055D513364499A86994062CD792@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8b17b7e1-8a9b-4ed4-36e0-08d887c166db
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 10:46:39.5840
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: aRDTiGejwCbGB1s2EoZHy2quMtGL/6dUf9dAo+Wz8aGAs7idhjc+RuFhrssm++zogmdw3MBKej6ufgC7yZdfTSzGZRT1gTFxEcjt53Qr19eq0l3gYv5gDk2C+kCWiym0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7122
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_07:2020-11-13,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 phishscore=0
 adultscore=0 clxscore=1015 impostorscore=0 priorityscore=1501
 malwarescore=0 suspectscore=0 spamscore=0 lowpriorityscore=0
 mlxlogscore=999 bulkscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011130065

On 11/13/20 12:25 PM, Jan Beulich wrote:
> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>> +                                  void *data)
>>>>>> +{
>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>> +
>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>> +
>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>> +    if ( !vbar )
>>>>>> +        return ~0;
>>>>>> +
>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>> +}
>>>>>> +
>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>> +                               uint32_t val, void *data)
>>>>>> +{
>>>>>> +    struct vpci_bar *bar = data;
>>>>>> +
>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>> +    else
>>>>>> +    {
>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>> +
>>>>>> +        if ( !vbar )
>>>>>> +            return;
>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>> +    }
>>>>>> +}
>>>>> You should assign different handlers based on whether the domain that
>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>> doing the selection here.
>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher.
>>> I think we might want to reset the vPCI handlers when a device gets
>>> assigned and deassigned.
>> In the ARM case init_bars is called too early: PCI device assignment is
>> currently initiated by Domain-0's kernel and is done *before* PCI devices
>> are given memory ranges and BARs assigned:
>>
>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>
>> < init_bars >
>>
>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>
>> < init_bars >
>>
>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>
>> < BAR assignments below >
>>
>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>
>> In the case of x86 this is pretty much ok as BARs are already in place, but
>> for ARM we need to take care and re-set up vPCI BARs for hwdom.
> Even on x86 there's no guarantee that all devices have their BARs set
> up by firmware.

This is true. But there you could have config space trapped in the
"x86 generic way"; please correct me if I'm wrong here.

>
> In a subsequent reply you've suggested to move init_bars from "add" to
> "assign", but I'm having trouble seeing what this would change: It's
> not Dom0 controlling assignment (to itself), but Xen assigns the device
> towards the end of pci_add_device().

PHYSDEVOP_pci_device_add vs XEN_DOMCTL_assign_device: currently we
initialize BARs during PHYSDEVOP_pci_device_add, and if we do that during
XEN_DOMCTL_assign_device instead, things seem to change.

>
>> Things are getting even more complicated if the host PCI bridge is not
>> ECAM-like, so you cannot set mmio_ha
bmRsZXJzDQo+Pg0KPj4gYW5kIHRyYXAgaHdkb20ncyBhY2Nlc3MgdG8gdGhlIGNvbmZpZyBzcGFj
ZSB0byB1cGRhdGUgQkFScyBldGMuIFRoaXMgaXMgd2h5IEkgaGF2ZSB0aGF0DQo+Pg0KPj4gdWds
eSBoYWNrIGZvciByY2FyX2dlbjMgdG8gcmVhZCBhY3R1YWwgQkFScyBmb3IgaHdkb20uDQo+IEhv
dyB0byBjb25maWcgc3BhY2UgYWNjZXNzZXMgd29yayB0aGVyZT8gVGhlIGxhdGVzdCBmb3IgTVNJ
L01TSS1YIGl0J2xsDQo+IGJlIGltcGVyYXRpdmUgdGhhdCBYZW4gYmUgYWJsZSB0byBpbnRlcmNl
cHQgY29uZmlnIHNwYWNlIHdyaXRlcy4NCj4NCj4+PiAgICBJbiBvcmRlciB0byBkbyBwYXNzdGhy
b3VnaCB0byBkb21VcyBzYWZlbHkNCj4+PiB3ZSB3aWxsIGhhdmUgdG8gYWRkIG1vcmUgaGFuZGxl
cnMgdGhhbiB3aGF0J3MgcmVxdWlyZWQgZm9yIGRvbTAsDQo+PiBDYW4geW91IHBsZWFzZSB0ZWxs
IHdoYXQgYXJlIHRoaW5raW5nIGFib3V0PyBXaGF0IGFyZSB0aGUgbWlzc2luZyBoYW5kbGVycz8N
Cj4+PiAgICBhbmQNCj4+PiBoYXZpbmcgaXNfaGFyZHdhcmVfZG9tYWluIHNwcmlua2xlZCBpbiBh
bGwgb2YgdGhlbSBpcyBub3QgYSBzdWl0YWJsZQ0KPj4+IHNvbHV0aW9uLg0KPj4gSSdsbCB0cnkg
dG8gcmVwbGFjZSBpc19oYXJkd2FyZV9kb21haW4gd2l0aCBzb21ldGhpbmcgbGlrZToNCj4+DQo+
PiArLyoNCj4+ICsgKiBEZXRlY3Qgd2hldGhlciBwaHlzaWNhbCBQQ0kgZGV2aWNlcyBpbiB0aGlz
IHNlZ21lbnQgYmVsb25nDQo+PiArICogdG8gdGhlIGRvbWFpbiBnaXZlbiwgZS5nLiBvbiB4ODYg
YWxsIFBDSSBkZXZpY2VzIGxpdmUgaW4gaHdkb20sDQo+PiArICogYnV0IGluIGNhc2Ugb2YgQVJN
IHRoaXMgbWlnaHQgbm90IGJlIHRoZSBjYXNlOiB0aG9zZSBtYXkgYWxzbw0KPj4gKyAqIGxpdmUg
aW4gZHJpdmVyIGRvbWFpbnMgb3IgZXZlbiBYZW4gaXRzZWxmLg0KPj4gKyAqLw0KPj4gK2Jvb2wg
cGNpX2lzX2hhcmR3YXJlX2RvbWFpbihzdHJ1Y3QgZG9tYWluICpkLCB1MTYgc2VnKQ0KPj4gK3sN
Cj4+ICsjaWZkZWYgQ09ORklHX1g4Ng0KPj4gK8KgwqDCoCByZXR1cm4gaXNfaGFyZHdhcmVfZG9t
YWluKGQpOw0KPj4gKyNlbGlmIENPTkZJR19BUk0NCj4+ICvCoMKgwqAgcmV0dXJuIHBjaV9pc19v
d25lcl9kb21haW4oZCwgc2VnKTsNCj4+ICsjZWxzZQ0KPj4gKyNlcnJvciAiVW5zdXBwb3J0ZWQg
YXJjaGl0ZWN0dXJlIg0KPj4gKyNlbmRpZg0KPj4gK30NCj4+ICsNCj4+ICsvKg0KPj4gKyAqIEdl
dCBkb21haW4gd2hpY2ggb3ducyB0aGlzIHNlZ21lbnQ6IGZvciB4ODYgdGhpcyBpcyBhbHdheXMg
aGFyZHdhcmUNCj4+ICsgKiBkb21haW4gYW5kIGZvciBBUk0gdGhpcyBjYW4gYmUgZGlmZmVyZW50
Lg0KPj4gKyAqLw0KPj4gK3N0cnVjdCBkb21haW4gKnBjaV9nZXRfaGFyZHdhcmVfZG9tYWluKHUx
NiBzZWcpDQo+PiArew0KPj4gKyNpZmRlZiBDT05GSUdfWDg2DQo+PiArwqDCoMKgIHJldHVybiBo
YXJkd2FyZV9kb21haW47DQo+PiArI2VsaWYgQ09ORklHX0FSTQ0KPj4gK8KgwqDCoCByZXR1cm4g
cGNpX2dldF9vd25lcl9kb21haW4oc2VnKTsNCj4+ICsjZWxzZQ0KPj4gKyNlcnJvciAiVW5zdXBw
b3J0ZWQgYXJjaGl0ZWN0dXJlIg0KPj4gKyNlbmRpZg0KPj4gK30NCj4+DQo+PiBUaGlzIGlzIHdo
YXQgSSB1c2UgdG8gcHJvcGVybHkgZGV0ZWN0IHRoZSBkb21haW4gdGhhdCByZWFsbHkgb3ducyBw
aHlzaWNhbCBob3N0IGJyaWRnZQ0KPiBJIGNvbnNpZGVyIHRoaXMgcHJvYmxlbWF0aWMuIFdlIHNo
b3VsZCB0cnkgdG8gbm90IGxldCBBcm0ncyBhbmQgeDg2J2VzDQo+IFBDSSBpbXBsZW1lbnRhdGlv
bnMgZGl2ZXJnZSB0b28gbXVjaCwgaS5lLiBhdCBsZWFzdCB0aGUgdW5kZXJseWluZyBiYXNpYw0K
PiBtb2RlbCB3b3VsZCBiZXR0ZXIgYmUgc2ltaWxhci4gRm9yIGV4YW1wbGUsIGlmIGVudGlyZSBz
ZWdtZW50cyBjYW4gYmUNCj4gYXNzaWduZWQgdG8gYSBkcml2ZXIgZG9tYWluIG9uIEFybSwgd2h5
IHNob3VsZCB0aGUgc2FtZSBub3QgYmUgcG9zc2libGUNCj4gb24geDg2Pw0KDQpHb29kIHF1ZXN0
aW9uLCBwcm9iYWJseSBpbiB0aGlzIGNhc2UgeDg2ID09IEFSTSBhbmQgSSBjYW4gdXNlDQoNCnBj
aV9pc19vd25lcl9kb21haW4gZm9yIGJvdGggYXJjaGl0ZWN0dXJlcyBpbnN0ZWFkIG9mIHVzaW5n
IGlzX2hhcmR3YXJlX2RvbWFpbiBmb3IgeDg2DQoNCj4NCj4gRnVydGhlcm1vcmUgSSdtIHN1c3Bp
Y2lvdXMgYWJvdXQgc2VnbWVudHMgYmVpbmcgdGhlIHJpZ2h0IGdyYW51bGFyaXR5DQo+IGhlcmUu
IE9uIGlhNjQgbXVsdGlwbGUgaG9zdCBicmlkZ2VzIGNvdWxkIChhbmQgdHlwaWNhbGx5IHdvdWxk
KSBsaXZlDQo+IG9uIHNlZ21lbnQgMC4gSWlyYyBJIGhhZCBvbmNlIHNlZW4gb3V0cHV0IGZyb20g
YW4geDg2IHN5c3RlbSB3aGljaCB3YXMNCj4gYXBwYXJlbnRseSBsYWlkIG91dCBzaW1pbGFybHku
IFRoZXJlZm9yZSwganVzdCBsaWtlIGZvciBNQ0ZHLCBJIHRoaW5rDQo+IHRoZSBncmFudWxhcml0
eSB3YW50cyB0byBiZSBidXMgcmFuZ2VzIHdpdGhpbiBhIHNlZ21lbnQuDQpDYW4geW91IHBsZWFz
ZSBzdWdnZXN0IHNvbWV0aGluZyB3ZSBjYW4gdXNlIGFzIGEgaGludCBmb3Igc3VjaCBhIGRldGVj
dGlvbiBsb2dpYz8NCj4NCj4gSmFuDQoNClRoYW5rIHlvdSwNCg0KT2xla3NhbmRyDQo=


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:47:55 2020
Subject: Re: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Oleksandr Andrushchenko <andr2000@gmail.com>
Cc: "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-9-andr2000@gmail.com>
 <ae66dddb-98e3-61fd-86c3-eab30ec33d18@suse.com>
 <1f0a3eb7-be20-95a0-0c1e-c7d45e3279f8@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <34e81e3e-7d65-432a-95a6-c8ef437b23dc@suse.com>
Date: Fri, 13 Nov 2020 11:47:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <1f0a3eb7-be20-95a0-0c1e-c7d45e3279f8@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.11.2020 11:39, Oleksandr Andrushchenko wrote:
> 
> On 11/13/20 12:29 PM, Jan Beulich wrote:
>> On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>
>>> Non-ECAM host bridges in hwdom go directly to PCI config space,
>>> not through vpci (they use their specific method for accessing PCI
>>> configuration, e.g. dedicated registers etc.).
>> And access to these dedicated registers can't be intercepted?
> 
> It can. But then you have to fully emulate that bridge, e.g.
> 
> "if we write A to regB and after that write C to regZ then it
> 
> means we are accessing config space. If we write...."

Sounds pretty much like the I/O port based access mechanism on
x86, which also has some sort of "enable". Of course, I/O port
accesses are particularly easy to intercept and handle...

> I mean this would need lots of code in Xen to achieve that

Possibly, but look at the amount of code we have in Xen on the
x86 side to handle MCFG writes by Dom0.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:50:57 2020
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
Date: Fri, 13 Nov 2020 11:50:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
> On 11/13/20 12:25 PM, Jan Beulich wrote:
>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>> +                                  void *data)
>>>>>>> +{
>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>> +
>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>> +
>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>> +    if ( !vbar )
>>>>>>> +        return ~0;
>>>>>>> +
>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>> +                               uint32_t val, void *data)
>>>>>>> +{
>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>> +
>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>> +    else
>>>>>>> +    {
>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>> +
>>>>>>> +        if ( !vbar )
>>>>>>> +            return;
>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>> +    }
>>>>>>> +}
>>>>>> You should assign different handlers based on whether the domain that
>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>> doing the selection here.
>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>
>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>> I think we might want to reset the vPCI handlers when a device gets
>>>> assigned and deassigned.
>>> In the ARM case init_bars is called too early: PCI device assignment is currently
>>>
>>> initiated by Domain-0's kernel and is done *before* PCI devices are given memory
>>>
>>> ranges and BARs assigned:
>>>
>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>
>>> < init_bars >
>>>
>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>
>>> < init_bars >
>>>
>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>
>>> < BAR assignments below >
>>>
>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>
>>> In case of x86 this is pretty much ok as BARs are already in place, but for ARM we
>>>
>>> need to take care and re-setup vPCI BARs for hwdom.
>> Even on x86 there's no guarantee that all devices have their BARs set
>> up by firmware.
> 
> This is true. But there you could have config space trapped in the "x86 generic way",
> 
> please correct me if I'm wrong here
> 
>>
>> In a subsequent reply you've suggested to move init_bars from "add" to
>> "assign", but I'm having trouble seeing what this would change: It's
>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>> towards the end of pci_add_device().
> 
> PHYSDEVOP_pci_device_add vs XEN_DOMCTL_assign_device
> 
> Currently we initialize BARs during PHYSDEVOP_pci_device_add and
> 
> if we do that during XEN_DOMCTL_assign_device things seem to change

But there can't possibly be any XEN_DOMCTL_assign_device involved in
booting of Dom0. 

>>>>    In order to do passthrough to domUs safely
>>>> we will have to add more handlers than what's required for dom0,
>>> Can you please tell what you are thinking about? What are the missing handlers?
>>>>    and
>>>> having is_hardware_domain sprinkled in all of them is not a suitable
>>>> solution.
>>> I'll try to replace is_hardware_domain with something like:
>>>
>>> +/*
>>> + * Detect whether physical PCI devices in this segment belong
>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>> + * but in case of ARM this might not be the case: those may also
>>> + * live in driver domains or even Xen itself.
>>> + */
>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>> +{
>>> +#ifdef CONFIG_X86
>>> +    return is_hardware_domain(d);
>>> +#elif CONFIG_ARM
>>> +    return pci_is_owner_domain(d, seg);
>>> +#else
>>> +#error "Unsupported architecture"
>>> +#endif
>>> +}
>>> +
>>> +/*
>>> + * Get domain which owns this segment: for x86 this is always hardware
>>> + * domain and for ARM this can be different.
>>> + */
>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>> +{
>>> +#ifdef CONFIG_X86
>>> +    return hardware_domain;
>>> +#elif CONFIG_ARM
>>> +    return pci_get_owner_domain(seg);
>>> +#else
>>> +#error "Unsupported architecture"
>>> +#endif
>>> +}
>>>
>>> This is what I use to properly detect the domain that really owns physical host bridge
>> I consider this problematic. We should try to not let Arm's and x86'es
>> PCI implementations diverge too much, i.e. at least the underlying basic
>> model would better be similar. For example, if entire segments can be
>> assigned to a driver domain on Arm, why should the same not be possible
>> on x86?
> 
> Good question, probably in this case x86 == ARM and I can use
> 
> pci_is_owner_domain for both architectures instead of using is_hardware_domain for x86
> 
>>
>> Furthermore I'm suspicious about segments being the right granularity
>> here. On ia64 multiple host bridges could (and typically would) live
>> on segment 0. Iirc I had once seen output from an x86 system which was
>> apparently laid out similarly. Therefore, just like for MCFG, I think
>> the granularity wants to be bus ranges within a segment.
> Can you please suggest something we can use as a hint for such a detection logic?

The underlying information comes from ACPI tables, iirc. I don't
recall the details, though - sorry.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:53:32 2020
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Julien Grall <julien@xen.org>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <fd656848-1eda-686d-d74c-f10e3ecfe49a@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <093f0acd-3ddb-84c7-a06e-c75de90ba288@suse.com>
Date: Fri, 13 Nov 2020 11:53:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <fd656848-1eda-686d-d74c-f10e3ecfe49a@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 11:36, Julien Grall wrote:
> On 13/11/2020 10:25, Jan Beulich wrote:
>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>> +                                  void *data)
>>>>>>> +{
>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>> +
>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>> +
>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>> +    if ( !vbar )
>>>>>>> +        return ~0;
>>>>>>> +
>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>> +                               uint32_t val, void *data)
>>>>>>> +{
>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>> +
>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>> +    else
>>>>>>> +    {
>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>> +
>>>>>>> +        if ( !vbar )
>>>>>>> +            return;
>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>> +    }
>>>>>>> +}
>>>>>> You should assign different handlers based on whether the domain that
>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>> doing the selection here.
>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>
>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>> I think we might want to reset the vPCI handlers when a device gets
>>>> assigned and deassigned.
>>>
>>> In the ARM case init_bars is called too early: PCI device assignment is currently
>>>
>>> initiated by Domain-0's kernel and is done *before* PCI devices are given memory
>>>
>>> ranges and BARs assigned:
>>>
>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>
>>> < init_bars >
>>>
>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>
>>> < init_bars >
>>>
>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>
>>> < BAR assignments below >
>>>
>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>
>>> In case of x86 this is pretty much ok as BARs are already in place, but for ARM we
>>>
>>> need to take care and re-setup vPCI BARs for hwdom.
>>
>> Even on x86 there's no guarantee that all devices have their BARs set
>> up by firmware.
>>
>> In a subsequent reply you've suggested to move init_bars from "add" to
>> "assign", but I'm having trouble seeing what this would change: It's
>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>> towards the end of pci_add_device().
>>
>>> Things are getting even more
>>>
>>> complicated if the host PCI bridge is not ECAM-like, so you cannot set mmio_handlers
>>>
>>> and trap hwdom's access to the config space to update BARs etc. This is why I have that
>>>
>>> ugly hack for rcar_gen3 to read actual BARs for hwdom.
>>
>> How do config space accesses work there? At the latest, for MSI/MSI-X it'll
>> be imperative that Xen be able to intercept config space writes.
> 
> I am not sure I understand your last sentence. Are you saying that we
> always need to trap accesses to the MSI/MSI-X message in order to sanitize them?
> 
> If one is using the GICv3 ITS (I haven't investigated other MSI
> controllers), then I don't believe you need to sanitize the MSI/MSI-X
> message in most situations.

Well, if it's fine for the guest to write arbitrary values to message
address and message data, _and_ to arbitrarily enable/disable MSI / MSI-X,
then yes, no interception would be needed.
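
FWIW, the kind of check being talked about fits in a few lines. The sketch
below is only an illustration with made-up names (struct vmsi,
msi_addr_write(), the doorbell window constants) -- it is not actual Xen
vPCI code -- but it shows the shape of the argument: a trapped config
space write lets the hypervisor refuse (or translate) the message address
a guest asks for, instead of letting an arbitrary value reach the device.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch only: names and layout are hypothetical, not
 * Xen's actual vPCI implementation.
 */

#define MSI_DOORBELL_BASE 0xfee00000u /* e.g. the x86 LAPIC MSI window */
#define MSI_DOORBELL_MASK 0xfff00000u

struct vmsi {
    uint32_t guest_addr;   /* value the guest wrote */
    uint32_t machine_addr; /* value programmed into the real device */
};

/* Trap handler for a guest write to the MSI message-address register. */
bool msi_addr_write(struct vmsi *msi, uint32_t val)
{
    /* Refuse addresses outside the interrupt controller's doorbell window. */
    if ( (val & MSI_DOORBELL_MASK) != MSI_DOORBELL_BASE )
        return false; /* drop the write */

    msi->guest_addr = val;
    /*
     * A real implementation would translate the guest's doorbell through
     * the vGIC/ITS (or interrupt remapping on x86); the identity mapping
     * here is just a placeholder.
     */
    msi->machine_addr = val;
    return true;
}
```

Without such a trap nothing stops the guest from pointing the message
address at an arbitrary location, which is why the enable bits and the
message fields need to stay under the hypervisor's control.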

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 10:55:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 10:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26334.54624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWjn-0005I2-94; Fri, 13 Nov 2020 10:55:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26334.54624; Fri, 13 Nov 2020 10:55:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWjn-0005Hv-5w; Fri, 13 Nov 2020 10:55:27 +0000
Received: by outflank-mailman (input) for mailman id 26334;
 Fri, 13 Nov 2020 10:55:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdWjm-0005Hq-9h
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 10:55:26 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e33e2f7-df27-42cf-86d9-e8e0efbbc1ed;
 Fri, 13 Nov 2020 10:55:24 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADAoCkE030170; Fri, 13 Nov 2020 10:55:16 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2058.outbound.protection.outlook.com [104.47.14.58])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80fh4p-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 10:55:16 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6884.eurprd03.prod.outlook.com (2603:10a6:20b:2de::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Fri, 13 Nov
 2020 10:55:13 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 10:55:13 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>
CC: "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>
Subject: Re: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Thread-Topic: [PATCH 08/10] vpci/arm: Allow updating BAR's header for non-ECAM
 bridges
Thread-Index: AQHWtpbyYHzQ7b7gKkKm7jsDdoAEE6nF4pWAgAACsYCAAAJmAIAAAhEA
Date: Fri, 13 Nov 2020 10:55:13 +0000
Message-ID: <0b8803d9-1f83-1953-5225-96e3a9cf13ae@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-9-andr2000@gmail.com>
 <ae66dddb-98e3-61fd-86c3-eab30ec33d18@suse.com>
 <1f0a3eb7-be20-95a0-0c1e-c7d45e3279f8@epam.com>
 <34e81e3e-7d65-432a-95a6-c8ef437b23dc@suse.com>
In-Reply-To: <34e81e3e-7d65-432a-95a6-c8ef437b23dc@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6dbcbbe2-9503-499d-d9e4-08d887c298ed
x-ms-traffictypediagnostic: AM9PR03MB6884:
x-microsoft-antispam-prvs: 
 <AM9PR03MB68843452562FDD39C033EBE6E7E60@AM9PR03MB6884.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6548DF3D13144D4299A060198360934A@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6dbcbbe2-9503-499d-d9e4-08d887c298ed
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 10:55:13.0750
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6884
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_07:2020-11-13,2020-11-13 signatures=0


On 11/13/20 12:47 PM, Jan Beulich wrote:
> On 13.11.2020 11:39, Oleksandr Andrushchenko wrote:
>> On 11/13/20 12:29 PM, Jan Beulich wrote:
>>> On 09.11.2020 13:50, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>
>>>> Non-ECAM host bridges in hwdom go directly to PCI config space,
>>>> not through vpci (they use their specific method for accessing PCI
>>>> configuration, e.g. dedicated registers etc.).
>>> And access to these dedicated registers can't be intercepted?
>> It can. But then you have to fully emulate that bridge, e.g.
>>
>> "if we write A to regB and after that write C to regZ then it
>>
>> means we are accessing config space. If we write...."
> Sounds pretty much like the I/O port based access mechanism on
> x86, which also has some sort of "enable". Of course, I/O port
> accesses are particularly easy to intercept and handle...
Yes, it has somewhat similar idea
>
>> I mean this would need lots of code in Xen to achieve that
> Possibly, but look at the amount of code we have in Xen on the
> x86 side to handle MCFG writes by Dom0.

But MCFG is handled the same way for all x86 machines, right?

And here I'll have to have a SoC specific code, e.g. a specific driver

>
> Jan

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:02:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26341.54635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWqi-0006Ec-1m; Fri, 13 Nov 2020 11:02:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26341.54635; Fri, 13 Nov 2020 11:02:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWqh-0006EV-Uz; Fri, 13 Nov 2020 11:02:35 +0000
Received: by outflank-mailman (input) for mailman id 26341;
 Fri, 13 Nov 2020 11:02:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdWqg-0006EQ-5F
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:02:34 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf4d9774-547d-4709-93ee-782e774e4f7b;
 Fri, 13 Nov 2020 11:02:30 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADAuKQd010748; Fri, 13 Nov 2020 11:02:24 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80qqj4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 11:02:24 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6883.eurprd03.prod.outlook.com (2603:10a6:20b:282::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Fri, 13 Nov
 2020 11:02:21 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 11:02:21 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgA=
Date: Fri, 13 Nov 2020 11:02:21 +0000
Message-ID: <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
In-Reply-To: <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 42f2bd8b-786a-4306-ff8e-08d887c397fb
x-ms-traffictypediagnostic: AM9PR03MB6883:
x-microsoft-antispam-prvs: 
 <AM9PR03MB6883F596FE3676730A79EF39E7E60@AM9PR03MB6883.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6FBBF9884848A449A969C373F90AC9C7@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 42f2bd8b-786a-4306-ff8e-08d887c397fb
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 11:02:21.0375
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6883
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_07:2020-11-13,2020-11-13 signatures=0


On 11/13/20 12:50 PM, Jan Beulich wrote:
> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>> +                                  void *data)
>>>>>>>> +{
>>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>>> +
>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>>> +
>>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>> +    if ( !vbar )
>>>>>>>> +        return ~0;
>>>>>>>> +
>>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>> +                               uint32_t val, void *data)
>>>>>>>> +{
>>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>>> +
>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>>> +    else
>>>>>>>> +    {
>>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>> +
>>>>>>>> +        if ( !vbar )
>>>>>>>> +            return;
>>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>>> +    }
>>>>>>>> +}
>>>>>>> You should assign different handlers based on whether the domain that
>>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>>> doing the selection here.
>>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>>
>>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>>> I think we might want to reset the vPCI handlers when a devices gets
>>>>> assigned and deassigned.
>>>> In ARM case init_bars is called too early: PCI device assignment is currently
>>>>
>>>> initiated by Domain-0' kernel and is done *before* PCI devices are given memory
>>>>
>>>> ranges and BARs assigned:
>>>>
>>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>
>>>> < init_bars >
>>>>
>>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>
>>>> < init_bars >
>>>>
>>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>
>>>> < BAR assignments below >
>>>>
>>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>>
>>>> In case of x86 this is pretty much ok as BARs are already in place, but for ARM we
>>>>
>>>> need to take care and re-setup vPCI BARs for hwdom.
>>> Even on x86 there's no guarantee that all devices have their BARs set
>>> up by firmware.
>> This is true. But there you could have config space trapped in "x86 generic way",
>>
>> please correct me if I'm wrong here
>>
>>> In a subsequent reply you've suggested to move init_bars from "add" to
>>> "assign", but I'm having trouble seeing what this would change: It's
>>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>>> towards the end of pci_add_device().
>> PHYSDEVOP_pci_device_add vs XEN_DOMCTL_assign_device
>>
>> Currently we initialize BARs during PHYSDEVOP_pci_device_add and
>>
>> if we do that during XEN_DOMCTL_assign_device things seem to change
> But there can't possibly be any XEN_DOMCTL_assign_device involved in
> booting of Dom0.

Indeed. So, do you have an idea when we should call init_bars suitable

for both ARM and x86?

Another question is: what happens bad if x86 and ARM won't call init_bars

until the moment we really assign a PCI device to the first guest?

>
>>>>>     In order to do passthrough to domUs safely
>>>>> we will have to add more handlers than what's required for dom0,
>>>> Can you please tell what are thinking about? What are the missing handlers?
>>>>>     and
>>>>> having is_hardware_domain sprinkled in all of them is not a suitable
>>>>> solution.
>>>> I'll try to replace is_hardware_domain with something like:
>>>>
>>>> +/*
>>>> + * Detect whether physical PCI devices in this segment belong
>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>> + * but in case of ARM this might not be the case: those may also
>>>> + * live in driver domains or even Xen itself.
>>>> + */
>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>> +{
>>>> +#ifdef CONFIG_X86
>>>> +    return is_hardware_domain(d);
>>>> +#elif CONFIG_ARM
>>>> +    return pci_is_owner_domain(d, seg);
>>>> +#else
>>>> +#error "Unsupported architecture"
>>>> +#endif
>>>> +}
>>>> +
>>>> +/*
>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>> + * domain and for ARM this can be different.
>>>> + */
>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>> +{
>>>> +#ifdef CONFIG_X86
>>>> +    return hardware_domain;
>>>> +#elif CONFIG_ARM
>>>> +    return pci_get_owner_domain(seg);
>>>> +#else
>>>> +#error "Unsupported architecture"
>>>> +#endif
>>>> +}
>>>>
>>>> This is what I use to properly detect the domain that really owns physical host bridge
>>> I consider this problematic. We should try to not let Arm's and x86'es
>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>> model would better be similar. For example, if entire segme
bnRzIGNhbiBiZQ0KPj4+IGFzc2lnbmVkIHRvIGEgZHJpdmVyIGRvbWFpbiBvbiBBcm0sIHdoeSBz
aG91bGQgdGhlIHNhbWUgbm90IGJlIHBvc3NpYmxlDQo+Pj4gb24geDg2Pw0KPj4gR29vZCBxdWVz
dGlvbiwgcHJvYmFibHkgaW4gdGhpcyBjYXNlIHg4NiA9PSBBUk0gYW5kIEkgY2FuIHVzZQ0KPj4N
Cj4+IHBjaV9pc19vd25lcl9kb21haW4gZm9yIGJvdGggYXJjaGl0ZWN0dXJlcyBpbnN0ZWFkIG9m
IHVzaW5nIGlzX2hhcmR3YXJlX2RvbWFpbiBmb3IgeDg2DQo+Pg0KPj4+IEZ1cnRoZXJtb3JlIEkn
bSBzdXNwaWNpb3VzIGFib3V0IHNlZ21lbnRzIGJlaW5nIHRoZSByaWdodCBncmFudWxhcml0eQ0K
Pj4+IGhlcmUuIE9uIGlhNjQgbXVsdGlwbGUgaG9zdCBicmlkZ2VzIGNvdWxkIChhbmQgdHlwaWNh
bGx5IHdvdWxkKSBsaXZlDQo+Pj4gb24gc2VnbWVudCAwLiBJaXJjIEkgaGFkIG9uY2Ugc2VlbiBv
dXRwdXQgZnJvbSBhbiB4ODYgc3lzdGVtIHdoaWNoIHdhcw0KPj4+IGFwcGFyZW50bHkgbGFpZCBv
dXQgc2ltaWxhcmx5LiBUaGVyZWZvcmUsIGp1c3QgbGlrZSBmb3IgTUNGRywgSSB0aGluaw0KPj4+
IHRoZSBncmFudWxhcml0eSB3YW50cyB0byBiZSBidXMgcmFuZ2VzIHdpdGhpbiBhIHNlZ21lbnQu
DQo+PiBDYW4geW91IHBsZWFzZSBzdWdnZXN0IHNvbWV0aGluZyB3ZSBjYW4gdXNlIGFzIGEgaGlu
dCBmb3Igc3VjaCBhIGRldGVjdGlvbiBsb2dpYz8NCj4gVGhlIHVuZGVybHlpbmcgaW5mb3JtYXRp
b24gY29tZXMgZnJvbSBBQ1BJIHRhYmxlcywgaWlyYy4gSSBkb24ndA0KPiByZWNhbGwgdGhlIGRl
dGFpbHMsIHRob3VnaCAtIHNvcnJ5Lg0KDQpPaywgc28gc2VnICsgYnVzIHNob3VsZCBiZSBlbm91
Z2ggZm9yIGJvdGggQVJNIGFuZCBYZW4gdGhlbiwgcmlnaHQ/DQoNCnBjaV9nZXRfaGFyZHdhcmVf
ZG9tYWluKHUxNiBzZWcsIHU4IGJ1cykNCg0KPg0KPiBKYW4NCg0KVGhhbmsgeW91LA0KDQpPbGVr
c2FuZHINCg==
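[Editor's sketch] The bus-granular ownership lookup being discussed (keying on seg + bus rather than on segment alone, mirroring how MCFG describes bus ranges within a segment) might look roughly like the standalone model below. All structures, table contents, and names here are hypothetical illustrations, not Xen code:

```c
#include <stddef.h>
#include <stdint.h>

struct domain { int domid; };

/*
 * One entry per host bridge: as with MCFG, a bridge covers a bus range
 * within a segment, so two bridges may share segment 0.
 */
struct bridge_range {
    uint16_t seg;
    uint8_t bus_start, bus_end;      /* inclusive bus range */
    struct domain *owner;
};

struct domain dom0 = { 0 }, domd = { 1 };

struct bridge_range bridges[] = {
    { 0, 0x00, 0x3f, &dom0 },
    { 0, 0x40, 0x7f, &domd },        /* second bridge on segment 0 */
    { 1, 0x00, 0xff, &dom0 },
};

/* Owner lookup keyed on (seg, bus) rather than segment alone. */
struct domain *pci_get_hardware_domain(uint16_t seg, uint8_t bus)
{
    size_t i;

    for ( i = 0; i < sizeof(bridges) / sizeof(bridges[0]); i++ )
        if ( bridges[i].seg == seg &&
             bus >= bridges[i].bus_start && bus <= bridges[i].bus_end )
            return bridges[i].owner;

    return NULL;                     /* no bridge covers this (seg, bus) */
}
```

With two bridges on segment 0, the same segment maps to different owners depending on the bus number, which is exactly the case segment-only granularity cannot express.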


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:06:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:06:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26354.54655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWuX-0006VF-Sm; Fri, 13 Nov 2020 11:06:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26354.54655; Fri, 13 Nov 2020 11:06:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWuX-0006V8-PS; Fri, 13 Nov 2020 11:06:33 +0000
Received: by outflank-mailman (input) for mailman id 26354;
 Fri, 13 Nov 2020 11:06:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kdWuX-0006V3-6m
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:06:33 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kdWuW-0003XM-FL; Fri, 13 Nov 2020 11:06:33 +0000
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <fd656848-1eda-686d-d74c-f10e3ecfe49a@xen.org>
 <093f0acd-3ddb-84c7-a06e-c75de90ba288@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <247c2ae8-c1c4-3e0a-3431-82d05bd3be33@xen.org>
Date: Fri, 13 Nov 2020 11:06:30 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <093f0acd-3ddb-84c7-a06e-c75de90ba288@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 13/11/2020 10:53, Jan Beulich wrote:
> On 13.11.2020 11:36, Julien Grall wrote:
>> On 13/11/2020 10:25, Jan Beulich wrote:
>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>> +                                  void *data)
>>>>>>>> +{
>>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>>> +
>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>>> +
>>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>> +    if ( !vbar )
>>>>>>>> +        return ~0;
>>>>>>>> +
>>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>> +                               uint32_t val, void *data)
>>>>>>>> +{
>>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>>> +
>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>>> +    else
>>>>>>>> +    {
>>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>> +
>>>>>>>> +        if ( !vbar )
>>>>>>>> +            return;
>>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>>> +    }
>>>>>>>> +}
>>>>>>> You should assign different handlers based on whether the domain that
>>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>>> doing the selection here.
>>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>>
>>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>>> I think we might want to reset the vPCI handlers when a devices gets
>>>>> assigned and deassigned.
>>>>
>>>> In ARM case init_bars is called too early: PCI device assignment is currently
>>>>
>>>> initiated by Domain-0' kernel and is done *before* PCI devices are given memory
>>>>
>>>> ranges and BARs assigned:
>>>>
>>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>
>>>> < init_bars >
>>>>
>>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>
>>>> < init_bars >
>>>>
>>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>
>>>> < BAR assignments below >
>>>>
>>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>>
>>>> In case of x86 this is pretty much ok as BARs are already in place, but for ARM we
>>>>
>>>> need to take care and re-setup vPCI BARs for hwdom.
>>>
>>> Even on x86 there's no guarantee that all devices have their BARs set
>>> up by firmware.
>>>
>>> In a subsequent reply you've suggested to move init_bars from "add" to
>>> "assign", but I'm having trouble seeing what this would change: It's
>>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>>> towards the end of pci_add_device().
>>>
>>>> Things are getting even more
>>>>
>>>> complicated if the host PCI bridge is not ECAM-like, so you cannot set mmio_handlers
>>>>
>>>> and trap hwdom's access to the config space to update BARs etc. This is why I have that
>>>>
>>>> ugly hack for rcar_gen3 to read actual BARs for hwdom.
>>>
>>> How do config space accesses work there? At the latest for MSI/MSI-X it'll
>>> be imperative that Xen be able to intercept config space writes.
>>
>> I am not sure I understand your last sentence. Are you saying that we
>> always need to trap access to the MSI/MSI-X message in order to sanitize it?
>>
>> If one is using the GICv3 ITS (I haven't investigated other MSI
>> controllers), then I don't believe you need to sanitize the MSI/MSI-X
>> message in most situations.
> 
> Well, if it's fine for the guest to write arbitrary values to message
> address and message data,

The message address would be the doorbell of the ITS, which usually 
goes through the IOMMU page-tables. However, I am aware of a couple of 
platforms where doorbell accesses (among other address ranges, 
including P2P transactions) bypass the IOMMU. In that situation, we 
would need a lot more work than just trapping the access.

Regarding the message data, for the ITS this is an event ID. The HW will 
then tag each message with the device ID (this prevents spoofing). The 
tuple (device ID, event ID) is used by the ITS to decide where to 
inject the event.

Whether other MSI controllers (e.g. GICv2m) have a similar isolation 
feature will have to be determined on a case-by-case basis.
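[Editor's sketch] A toy model of the spoofing protection described above, as illustrative C rather than Xen or real ITS code: the guest-writable message data only selects an event ID, while the device ID used for routing is tagged by the interconnect per device.

```c
#include <stdint.h>

#define N_DEVICES 4
#define N_EVENTS  8

/*
 * Routing table programmed by the ITS driver: each (device ID, event ID)
 * pair maps to an interrupt number (0 = not mapped).
 */
int its_route[N_DEVICES][N_EVENTS];

/*
 * A doorbell write as seen by the ITS.  The event ID comes from the
 * guest-written message data, but hw_devid is supplied by the hardware
 * for each device, so a guest cannot make one device's writes look like
 * another device's.
 */
int its_translate(unsigned int hw_devid, uint32_t msg_data)
{
    unsigned int eventid = msg_data;

    if ( hw_devid >= N_DEVICES || eventid >= N_EVENTS )
        return 0;                    /* out of range: discarded */

    return its_route[hw_devid][eventid];
}
```

The point is that identical message data written by two different devices routes to two different interrupts, because the device ID half of the tuple is outside guest control.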

> _and_ to arbitrarily enable/disable MSI / MSI-X,
> then yes, no interception would be needed.
The device would be owned by the guest, so I am not sure I understand 
the exact problem with letting it enable/disable MSI/MSI-X. Do you 
mind expanding on your thoughts?

Furthermore, you can also control which event is enabled/disabled at the 
ITS level.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:09:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:09:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26361.54667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWxg-0006hF-Cr; Fri, 13 Nov 2020 11:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26361.54667; Fri, 13 Nov 2020 11:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdWxg-0006h8-9e; Fri, 13 Nov 2020 11:09:48 +0000
Received: by outflank-mailman (input) for mailman id 26361;
 Fri, 13 Nov 2020 11:09:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdWxe-0006h2-O2
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:09:46 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 265e078a-e2fa-4753-94fd-5998865f8557;
 Fri, 13 Nov 2020 11:09:45 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id w142so13203886lff.8
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 03:09:45 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id t24sm1598227ljd.7.2020.11.13.03.09.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 03:09:43 -0800 (PST)
X-Inumbo-ID: 265e078a-e2fa-4753-94fd-5998865f8557
X-Received: by 2002:a19:8188:: with SMTP id c130mr679070lfd.184.1605265784197;
        Fri, 13 Nov 2020 03:09:44 -0800 (PST)
Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
 <61ea02e0-bdd4-5a0a-dd6f-b22e806e6d1e@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <cd16e1f2-849d-ec12-3325-382b8f6689ff@gmail.com>
Date: Fri, 13 Nov 2020 13:09:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <61ea02e0-bdd4-5a0a-dd6f-b22e806e6d1e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 12:58, Jan Beulich wrote:

Hi Jan.

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> As a lot of x86 code can be re-used on Arm later on, this
>> patch makes some preparation to x86/hvm/ioreq.c before moving
>> to the common code. This way we will get a verbatim copy for
>> a code movement in subsequent patch (arch/x86/hvm/ioreq.c
>> will be *just* renamed to common/ioreq).
>>
>> This patch does the following:
>> 1. Introduce *inline* arch_hvm_ioreq_init(), arch_hvm_ioreq_destroy(),
>>     arch_hvm_io_completion(), arch_hvm_destroy_ioreq_server() and
>>     hvm_ioreq_server_get_type_addr() to abstract arch specific materials.
>> 2  Make hvm_map_mem_type_to_ioreq_server() *inline*. It is not going
>>     to be called from the common code.
> As already indicated on another sub-thread, I think some of these
> are too large to be inline functions. Additionally, considering
> their single-use purpose, I don't think they should be placed in
> a header consumed by more than the producer and the sole consumer.
OK, the only reason I made these inline was to achieve moving the 
whole of x86/hvm/ioreq.c to the common code.
I will move some of them back to ioreq.c.


>
>> 3. Make get_ioreq_server() global. It is going to be called from
>>     a few places.
> And with this its name ought to change, to fit the general naming
> model of global functions of this subsystem.
I think, with the new requirement (making 
hvm_map_mem_type_to_ioreq_server() common), this helper
doesn't need to be global. I will make it static again.


>
>> 4. Add IOREQ_STATUS_* #define-s and update candidates for moving.
> This, it seems to me, could be a separate patch.

Well, will do.


>
>> @@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>>   
>>       domain_pause(d);
>>   
>> -    p2m_set_ioreq_server(d, 0, s);
>> +    arch_hvm_destroy_ioreq_server(s);
> Iirc there are plans to rename hvm_destroy_ioreq_server() in the
> course of the generalization. If so, this arch hook would imo
> better be named following the new scheme right away.
Could you please clarify: are you speaking about the plans discussed in

"[PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved 
function names"?

Copy text for the convenience:
At least some of the functions touched here would be nice to be
moved to a more consistent new naming scheme right away, to
avoid having to touch all the same places again. I guess ioreq
server functions would be nice to all start with ioreq_server_
and ioreq functions to all start with ioreq_. E.g. ioreq_send()
and ioreq_server_select().

or some other plans I am not aware of?


What I really want to avoid with the IOREQ enabling work is the 
round-trips related to naming things; this patch series
has become quite big (and consumes some time to rebase and test) and I 
expect it to become bigger.

So the arch_hvm_destroy_ioreq_server() should be 
arch_ioreq_server_destroy()?


>
>> @@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>>       struct hvm_ioreq_server *s;
>>       unsigned int id;
>>   
>> -    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
>> +    if ( !arch_hvm_ioreq_destroy(d) )
> There's no ioreq being destroyed here, so I think this wants
> renaming (and again ideally right away following the planned
> new scheme).
Agreed that no ioreq is being destroyed here. Probably 
ioreq_server_check_for_destroy()?
I couldn't think of a better name.


>
>> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>> +                                                   ioservid_t id,
>> +                                                   uint32_t type,
>> +                                                   uint32_t flags)
>> +{
>> +    struct hvm_ioreq_server *s;
>> +    int rc;
>> +
>> +    if ( type != HVMMEM_ioreq_server )
>> +        return -EINVAL;
>> +
>> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>> +        return -EINVAL;
>> +
>> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>> +
>> +    s = get_ioreq_server(d, id);
>> +
>> +    rc = -ENOENT;
>> +    if ( !s )
>> +        goto out;
>> +
>> +    rc = -EPERM;
>> +    if ( s->emulator != current->domain )
>> +        goto out;
>> +
>> +    rc = p2m_set_ioreq_server(d, flags, s);
>> +
>> + out:
>> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>> +
>> +    if ( rc == 0 && flags == 0 )
>> +    {
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> I realize I may be asking too much, but would it be possible if,
> while moving code, you made simple and likely uncontroversial
> adjustments like adding const here? (Such adjustments would be
> less desirable to make if they increased the size of the patch,
> e.g. if you were touching only nearby code.)
This function, as well as the one located below, won't be moved to this 
header in the next version of the patch.

ok, will add const.


>
>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>> +    }
>> +
>> +    return rc;
>> +}
>> +
>> +static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
>> +                                                 const ioreq_t *p,
>> +                                                 uint8_t *type,
>> +                                                 uint64_t *addr)
>> +{
>> +    uint32_t cf8 = d->arch.hvm.pci_cf8;
> Similarly, for example, neither this nor ...
>
>> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>> +        return -EINVAL;
>> +
>> +    if ( p->type == IOREQ_TYPE_PIO &&
>> +         (p->addr & ~3) == 0xcfc &&
>> +         CF8_ENABLED(cf8) )
>> +    {
>> +        uint32_t x86_fam;
> ... this really need to use a fixed width type - unsigned int is
> going to be quite fine. But since you're only moving this code,
> I guess I'm not going to insist.

Will use unsigned int.


>
>> +static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>> +{
>> +    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
>> +        return false;
>> +
>> +    return true;
> Any reason this cannot simply be
>
>      return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);

Yes, good point.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:20:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:20:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26384.54699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdX8A-0008SQ-QH; Fri, 13 Nov 2020 11:20:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26384.54699; Fri, 13 Nov 2020 11:20:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdX8A-0008SJ-Mk; Fri, 13 Nov 2020 11:20:38 +0000
Received: by outflank-mailman (input) for mailman id 26384;
 Fri, 13 Nov 2020 11:20:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdX89-0008SE-R2
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:20:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43b4a371-7853-4d07-983d-c36a34d06fc4;
 Fri, 13 Nov 2020 11:20:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DDECAAEFB;
 Fri, 13 Nov 2020 11:20:34 +0000 (UTC)
X-Inumbo-ID: 43b4a371-7853-4d07-983d-c36a34d06fc4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
 <61ea02e0-bdd4-5a0a-dd6f-b22e806e6d1e@suse.com>
 <cd16e1f2-849d-ec12-3325-382b8f6689ff@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e08459d9-dd0a-7875-5d12-d374c69fe775@suse.com>
Date: Fri, 13 Nov 2020 12:20:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <cd16e1f2-849d-ec12-3325-382b8f6689ff@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.11.2020 12:09, Oleksandr wrote:
> On 12.11.20 12:58, Jan Beulich wrote:
>> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>>> @@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>>>   
>>>       domain_pause(d);
>>>   
>>> -    p2m_set_ioreq_server(d, 0, s);
>>> +    arch_hvm_destroy_ioreq_server(s);
>> Iirc there are plans to rename hvm_destroy_ioreq_server() in the
>> course of the generalization. If so, this arch hook would imo
>> better be named following the new scheme right away.
> Could you please clarify: are you speaking about the plans discussed in
> 
> "[PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved 
> function names"?
> 
> Copy text for the convenience:
> At least some of the functions touched here would be nice to be
> moved to a more consistent new naming scheme right away, to
> avoid having to touch all the same places again. I guess ioreq
> server functions would be nice to all start with ioreq_server_
> and ioreq functions to all start with ioreq_. E.g. ioreq_send()
> and ioreq_server_select().
> 
> or some other plans I am not aware of?
> 
> 
> What I really want to avoid with the IOREQ enabling work is the 
> round-trips related to naming things; this patch series
> has become quite big (and consumes some time to rebase and test) and I 
> expect it to become bigger.
> 
> So the arch_hvm_destroy_ioreq_server() should be 
> arch_ioreq_server_destroy()?

I think so, yes. If you want to avoid doing full patches, how
about simply listing the functions / variables you plan to
rename alongside the intended new names? That would likely be
easier for all involved parties.

>>> @@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>>>       struct hvm_ioreq_server *s;
>>>       unsigned int id;
>>>   
>>> -    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
>>> +    if ( !arch_hvm_ioreq_destroy(d) )
>> There's no ioreq being destroyed here, so I think this wants
>> renaming (and again ideally right away following the planned
>> new scheme).
> Agreed that no ioreq is being destroyed here. Perhaps 
> ioreq_server_check_for_destroy()?
> I couldn't think of a better name.

"check" implies no change (and d ought to then be const struct
domain *). With the containing function likely becoming
ioreq_server_destroy_all(), arch_ioreq_server_destroy_all()
would come to mind, or arch_ioreq_server_prepare_destroy_all().

>>> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>>> +                                                   ioservid_t id,
>>> +                                                   uint32_t type,
>>> +                                                   uint32_t flags)
>>> +{
>>> +    struct hvm_ioreq_server *s;
>>> +    int rc;
>>> +
>>> +    if ( type != HVMMEM_ioreq_server )
>>> +        return -EINVAL;
>>> +
>>> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>>> +        return -EINVAL;
>>> +
>>> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>>> +
>>> +    s = get_ioreq_server(d, id);
>>> +
>>> +    rc = -ENOENT;
>>> +    if ( !s )
>>> +        goto out;
>>> +
>>> +    rc = -EPERM;
>>> +    if ( s->emulator != current->domain )
>>> +        goto out;
>>> +
>>> +    rc = p2m_set_ioreq_server(d, flags, s);
>>> +
>>> + out:
>>> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>> +
>>> +    if ( rc == 0 && flags == 0 )
>>> +    {
>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> I realize I may be asking too much, but would it be possible if,
>> while moving code, you made simple and likely uncontroversial
>> adjustments like adding const here? (Such adjustments would be
>> less desirable to make if they increased the size of the patch,
>> e.g. if you were touching only nearby code.)
> This function, as well as the one located below, won't be moved to this
> header in the next version of the patch.
> 
> ok, will add const.

Well, if you don't move the code, then it's better to keep the diff
small and leave things as they are.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:24:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26390.54711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXBh-0000Ck-AE; Fri, 13 Nov 2020 11:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26390.54711; Fri, 13 Nov 2020 11:24:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXBh-0000Cd-6j; Fri, 13 Nov 2020 11:24:17 +0000
Received: by outflank-mailman (input) for mailman id 26390;
 Fri, 13 Nov 2020 11:24:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+DL5=ET=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1kdXBf-0000CX-TC
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:24:16 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id aebf0d58-04c0-4030-87a6-f9a2267f90b0;
 Fri, 13 Nov 2020 11:24:15 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-428-_EGJdfxvP22jC5B0pSvw3w-1; Fri, 13 Nov 2020 06:24:10 -0500
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id BA9515F9D1;
 Fri, 13 Nov 2020 11:24:08 +0000 (UTC)
Received: from gondolin (ovpn-113-133.ams2.redhat.com [10.36.113.133])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A47C15B4B0;
 Fri, 13 Nov 2020 11:23:54 +0000 (UTC)
X-Inumbo-ID: aebf0d58-04c0-4030-87a6-f9a2267f90b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1605266654;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8n4tiXp+8mrteg2gUVfjQLr91gUwx8183hlKQTpBrkk=;
	b=d+fPLQPeO0NAYE81+Dqs4gYOtDCNBCCBbtXIGZC9ISS9YBuTcrvI0VoDPiBUakCihPU5th
	w4uDsSHH6Q8DmhAj34ZoyUGyhIJby/1A2wFHvcRRVAgwtswsmHK1kHc19VWaM5k/JuJuIx
	vL9XVFZcG7KxnBdoLyOrW3JOKAgk3jQ=
X-MC-Unique: _EGJdfxvP22jC5B0pSvw3w-1
Date: Fri, 13 Nov 2020 12:23:51 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Igor Mammedov <imammedo@redhat.com>, Markus
 Armbruster <armbru@redhat.com>, Kevin Wolf <kwolf@redhat.com>, "Daniel P.
 Berrange" <berrange@redhat.com>, =?UTF-8?B?TWFyYy1BbmRyw6k=?= Lureau
 <marcandre.lureau@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, Eric
 Blake <eblake@redhat.com>, John Snow <jsnow@redhat.com>, Stefan Berger
 <stefanb@linux.ibm.com>, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <philmd@redhat.com>, Stefan Berger <stefanb@linux.vnet.ibm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Anthony Perard
 <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, Max Reitz
 <mreitz@redhat.com>, Thomas Huth <thuth@redhat.com>, Richard Henderson
 <rth@twiddle.net>, David Hildenbrand <david@redhat.com>, Halil Pasic
 <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>,
 Matthew Rosato <mjrosato@linux.ibm.com>, Alex Williamson
 <alex.williamson@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-block@nongnu.org, qemu-s390x@nongnu.org
Subject: Re: [PATCH v3 09/53] qdev: Make qdev_get_prop_ptr() get Object* arg
Message-ID: <20201113122351.583f53a2.cohuck@redhat.com>
In-Reply-To: <20201112214350.872250-10-ehabkost@redhat.com>
References: <20201112214350.872250-1-ehabkost@redhat.com>
	<20201112214350.872250-10-ehabkost@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=cohuck@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Thu, 12 Nov 2020 16:43:06 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> Make the code more generic and not specific to TYPE_DEVICE.
>
> Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Changes v1 -> v2:
> - Fix build error with CONFIG_XEN
>   I took the liberty of keeping the Reviewed-by line from
>   Marc-André as the build fix is a trivial one line change
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>  include/hw/qdev-properties.h     |  2 +-
>  backends/tpm/tpm_util.c          |  8 ++--
>  hw/block/xen-block.c             |  5 +-
>  hw/core/qdev-properties-system.c | 57 +++++++++-------------
>  hw/core/qdev-properties.c        | 82 +++++++++++++-------------------
>  hw/s390x/css.c                   |  5 +-
>  hw/s390x/s390-pci-bus.c          |  4 +-
>  hw/vfio/pci-quirks.c             |  5 +-
>  8 files changed, 68 insertions(+), 100 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com> #s390 parts



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:27:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26394.54723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXEI-0000Mi-TE; Fri, 13 Nov 2020 11:26:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26394.54723; Fri, 13 Nov 2020 11:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXEI-0000Mb-Pr; Fri, 13 Nov 2020 11:26:58 +0000
Received: by outflank-mailman (input) for mailman id 26394;
 Fri, 13 Nov 2020 11:26:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdXEH-0000MV-5U
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:26:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db630295-1595-4d35-92db-18bf20625bdd;
 Fri, 13 Nov 2020 11:26:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D42CAABF4;
 Fri, 13 Nov 2020 11:26:54 +0000 (UTC)
X-Inumbo-ID: db630295-1595-4d35-92db-18bf20625bdd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605266815; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5GHy2zVfvvZFQQ5XWcJzcoBMym3f8oAcj3O13h3Ox/8=;
	b=TeCzHQ0oSlfVuRvZ/oz2pG9jgA8to2pHuEraS9F3sDrPmUWlkwgOZiL8UlzMMxk9jKzvb+
	+6vQuPhnOH6QKGgGgiv3thbTvtQrwGCDO+qf4D+vOFKrf5guyJtAjEiJ9OduCT5xGxzY05
	yMOhxG5gapWxzT1muVVArrlJTQVZ49U=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Julien Grall <julien@xen.org>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <fd656848-1eda-686d-d74c-f10e3ecfe49a@xen.org>
 <093f0acd-3ddb-84c7-a06e-c75de90ba288@suse.com>
 <247c2ae8-c1c4-3e0a-3431-82d05bd3be33@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <16fe4b7f-51e3-a8a7-3da7-4b94370058a4@suse.com>
Date: Fri, 13 Nov 2020 12:26:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <247c2ae8-c1c4-3e0a-3431-82d05bd3be33@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 12:06, Julien Grall wrote:
> Hi Jan,
> 
> On 13/11/2020 10:53, Jan Beulich wrote:
>> On 13.11.2020 11:36, Julien Grall wrote:
>>> On 13/11/2020 10:25, Jan Beulich wrote:
>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>> +                                  void *data)
>>>>>>>>> +{
>>>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>>>> +
>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>>>> +
>>>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>> +    if ( !vbar )
>>>>>>>>> +        return ~0;
>>>>>>>>> +
>>>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>> +                               uint32_t val, void *data)
>>>>>>>>> +{
>>>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>>>> +
>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>>>> +    else
>>>>>>>>> +    {
>>>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>> +
>>>>>>>>> +        if ( !vbar )
>>>>>>>>> +            return;
>>>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>>>> +    }
>>>>>>>>> +}
>>>>>>>> You should assign different handlers based on whether the domain that
>>>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>>>> doing the selection here.
>>>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>>>> I think we might want to reset the vPCI handlers when a device gets
>>>>>> assigned and deassigned.
>>>>>
>>>>> In the ARM case init_bars is called too early: PCI device assignment is
>>>>> currently initiated by Domain-0's kernel and is done *before* PCI devices
>>>>> are given memory ranges and BAR assignments:
>>>>>
>>>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>
>>>>> < init_bars >
>>>>>
>>>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>
>>>>> < init_bars >
>>>>>
>>>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>>
>>>>> < BAR assignments below >
>>>>>
>>>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>>>
>>>>> In the case of x86 this is pretty much OK, as BARs are already in place,
>>>>> but for ARM we need to take care and re-set up vPCI BARs for hwdom.
>>>>
>>>> Even on x86 there's no guarantee that all devices have their BARs set
>>>> up by firmware.
>>>>
>>>> In a subsequent reply you've suggested to move init_bars from "add" to
>>>> "assign", but I'm having trouble seeing what this would change: It's
>>>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>>>> towards the end of pci_add_device().
>>>>
>>>>> Things are getting even more complicated if the host PCI bridge is not
>>>>> ECAM-like, so you cannot set mmio_handlers and trap hwdom's accesses to
>>>>> the config space to update BARs etc. This is why I have that ugly hack
>>>>> for rcar_gen3 to read the actual BARs for hwdom.
>>>>
>>>> How do config space accesses work there? At the latest for MSI/MSI-X it'll
>>>> be imperative that Xen be able to intercept config space writes.
>>>
>>> I am not sure I understand your last sentence. Are you saying that we
>>> always need to trap accesses to the MSI/MSI-X message in order to sanitize it?
>>>
>>> If one is using the GICv3 ITS (I haven't investigated other MSI
>>> controllers), then I don't believe you need to sanitize the MSI/MSI-X
>>> message in most situations.
>>
>> Well, if it's fine for the guest to write arbitrary values to message
>> address and message data,
> 
> The message address would be the doorbell of the ITS, which usually goes 
> through the IOMMU page-tables. However, I am aware of a couple of 
> platforms where doorbell accesses (among other address ranges, 
> including P2P transactions) bypass the IOMMU. In that situation, we would 
> need a lot more work than just trapping the access.

When you say "The message address would be the doorbell of the ITS" am
I right in understanding that's the designated address to be put there?
What if the guest puts some random different address there?

> Regarding the message data, for the ITS this is an event ID. The HW will 
> then tag each message with the device ID (this prevents spoofing). The 
> tuple (device ID, event ID) is used by the ITS to decide where to 
> inject the event.
> 
> Whether other MSI controllers (e.g. GICv2m) have similar isolation 
> features will be a case-by-case matter.
> 
>> _and_ to arbitrarily enable/disable MSI / MSI-X,
>> then yes, no interception would be needed.
> The device would be owned by the guest, so I am not sure I understand 
> the exact problem with letting it enable/disable MSI/MSI-X. Would you 
> mind expanding on your thoughts?

Question is - is Xen involved in any way in the handling of interrupts
from such a device? If not, then I guess full control can indeed be
left with the guest.

> Furthermore, you can also control which event is enabled/disabled at the 
> ITS level.

And that's something Xen controls? On x86 we don't have a 2nd level
of controls, so we need to merge Xen's and the guest's intentions in
software to know what to store in hardware.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:27:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26398.54735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXEs-0000Sm-62; Fri, 13 Nov 2020 11:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26398.54735; Fri, 13 Nov 2020 11:27:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXEs-0000Sf-2n; Fri, 13 Nov 2020 11:27:34 +0000
Received: by outflank-mailman (input) for mailman id 26398;
 Fri, 13 Nov 2020 11:27:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdXEq-0000SY-MD
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:27:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33315b81-0e95-43f5-9b35-fdafbd1ca51c;
 Fri, 13 Nov 2020 11:27:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdXEo-0007O7-VC; Fri, 13 Nov 2020 11:27:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdXEo-0000pW-OV; Fri, 13 Nov 2020 11:27:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdXEo-0001KG-O1; Fri, 13 Nov 2020 11:27:30 +0000
X-Inumbo-ID: 33315b81-0e95-43f5-9b35-fdafbd1ca51c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T1QE0mMsHqyQE+mb3VTnPTrgHqCGBc6KTAtVF8B0Upk=; b=V+ysMDhGysyCynpKEf8SMxMeDO
	5EiVJVG2RNb5gfOO4Z8Ot4WpDgX3lUyZ9YWSCoRgCEgaWubLgIranMpq/kN7Jcmt7stxx6gClylTT
	S1pY8Af53ZDYJ3Qj04Kxw97KVQQiNZuRmut5YI1C6TmeWHQBKp26PXgORzdy1G08XC40=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156720-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156720: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=544cb0132dc1778b9e791202995533523fa6cccd
X-Osstest-Versions-That:
    ovmf=1c48866e041d2afaabb170086c5bb0c69a4653d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 11:27:30 +0000

flight 156720 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156720/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 544cb0132dc1778b9e791202995533523fa6cccd
baseline version:
 ovmf                 1c48866e041d2afaabb170086c5bb0c69a4653d3

Last test of basis   156684  2020-11-11 12:14:13 Z    1 days
Testing same since   156720  2020-11-12 16:00:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  gechao <gechao@greatwall.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1c48866e04..544cb0132d  544cb0132dc1778b9e791202995533523fa6cccd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:31:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:31:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26416.54757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXIx-0001Pz-T7; Fri, 13 Nov 2020 11:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26416.54757; Fri, 13 Nov 2020 11:31:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXIx-0001Ps-PP; Fri, 13 Nov 2020 11:31:47 +0000
Received: by outflank-mailman (input) for mailman id 26416;
 Fri, 13 Nov 2020 11:31:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdXIx-0001Pn-63
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:31:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66cfc9f8-c4e6-409f-8a94-c77b615954d4;
 Fri, 13 Nov 2020 11:31:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdXIu-0007U8-Jg; Fri, 13 Nov 2020 11:31:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdXIu-0001AN-Au; Fri, 13 Nov 2020 11:31:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdXIu-0002xy-AO; Fri, 13 Nov 2020 11:31:44 +0000
X-Inumbo-ID: 66cfc9f8-c4e6-409f-8a94-c77b615954d4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6WwKBsixb4jZuxs0oM4iP7l7ImuYVJUk8m2KXhVMRa4=; b=OkZa2T4srTvnFb09IjpbDJh9Q6
	qggi3k/n74oigEAUB4oxkIYv56jLxlLQoy78Raf7szY6ETr2TlDDmHs0uIjRLYFpPnw2dVan+kht5
	7zb09Wqtr4d9hPTRaoBEdD9G6L5QJCPQLExQ3Hf4vXL55JmAhX2F0jx1ezefdHL+JKpU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156716-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156716: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-raw:guest-localmigrate/x10:fail:heisenbug
    xen-4.14-testing:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d101b417b784a26326fc7800a79cc539ba570b79
X-Osstest-Versions-That:
    xen=0c96e4297da07944525729ddbe438b0131ab5b7e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 11:31:44 +0000

flight 156716 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156716/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 156670 pass in 156716
 test-amd64-i386-xl-raw 19 guest-localmigrate/x10 fail in 156670 pass in 156716
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat  fail pass in 156670

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156525
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156525
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156525
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156525
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156525
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156525
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156525
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156525
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d101b417b784a26326fc7800a79cc539ba570b79
baseline version:
 xen                  0c96e4297da07944525729ddbe438b0131ab5b7e

Last test of basis   156525  2020-11-06 16:01:25 Z    6 days
Failing since        156594  2020-11-09 18:08:08 Z    3 days    4 attempts
Testing same since   156670  2020-11-11 04:05:07 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0c96e4297d..d101b417b7  d101b417b784a26326fc7800a79cc539ba570b79 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:35:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:35:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26447.54836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXMl-0001xs-Bl; Fri, 13 Nov 2020 11:35:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26447.54836; Fri, 13 Nov 2020 11:35:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXMl-0001xl-8X; Fri, 13 Nov 2020 11:35:43 +0000
Received: by outflank-mailman (input) for mailman id 26447;
 Fri, 13 Nov 2020 11:35:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdXMj-0001xW-5a
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:35:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38d5fe7e-8839-46f1-9101-9d6b3849c88c;
 Fri, 13 Nov 2020 11:35:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DACF8ABF4;
 Fri, 13 Nov 2020 11:35:38 +0000 (UTC)
X-Inumbo-ID: 38d5fe7e-8839-46f1-9101-9d6b3849c88c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605267339; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=itareUGbdsNbFgHhCq4I/3ylUNY3FLFxDBGVQigEGKQ=;
	b=UawAX4Bpo1TI68n5RIJIk+qPd0fk81+Ptc+QES4j5hiKIIP0aXCv+gP4gZKD2e1BQsYDDt
	b7EDmiG6eBavxslYzDRd9iCUlXY9biASWHyEioCJGSbhIVeDUsn94DePrHUL/OyHPZEUeU
	Tmxo+lsK9AUk6cOyvMgCspnp4nkPa+s=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
Date: Fri, 13 Nov 2020 12:35:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
> 
> On 11/13/20 12:50 PM, Jan Beulich wrote:
>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>> +                                  void *data)
>>>>>>>>> +{
>>>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>>>> +
>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>>>> +
>>>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>> +    if ( !vbar )
>>>>>>>>> +        return ~0;
>>>>>>>>> +
>>>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>> +                               uint32_t val, void *data)
>>>>>>>>> +{
>>>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>>>> +
>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>>>> +    else
>>>>>>>>> +    {
>>>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>> +
>>>>>>>>> +        if ( !vbar )
>>>>>>>>> +            return;
>>>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>>>> +    }
>>>>>>>>> +}
>>>>>>>> You should assign different handlers based on whether the domain that
>>>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>>>> doing the selection here.
>>>>>>> Hm, handlers are assigned once in init_bars, and this function is only
>>>>>>> called for hwdom, so there is no way I can do that for the guests.
>>>>>>> Hence, the dispatcher.
>>>>>> I think we might want to reset the vPCI handlers when a device gets
>>>>>> assigned and deassigned.
>>>>> In the Arm case init_bars is called too early: PCI device assignment is
>>>>> currently initiated by Domain-0's kernel and is done *before* PCI devices
>>>>> are given memory ranges and have their BARs assigned:
>>>>>
>>>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>
>>>>> < init_bars >
>>>>>
>>>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>
>>>>> < init_bars >
>>>>>
>>>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>>
>>>>> < BAR assignments below >
>>>>>
>>>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>>>
>>>>> In the x86 case this is pretty much OK, as BARs are already in place,
>>>>> but for Arm we need to take care and re-set up the vPCI BARs for hwdom.
>>>> Even on x86 there's no guarantee that all devices have their BARs set
>>>> up by firmware.
>>> This is true. But there you could have the config space trapped in the
>>> "x86 generic" way; please correct me if I'm wrong here.
>>>
>>>> In a subsequent reply you've suggested to move init_bars from "add" to
>>>> "assign", but I'm having trouble seeing what this would change: It's
>>>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>>>> towards the end of pci_add_device().
>>> PHYSDEVOP_pci_device_add vs XEN_DOMCTL_assign_device
>>>
>>> Currently we initialize BARs during PHYSDEVOP_pci_device_add; if we do
>>> that during XEN_DOMCTL_assign_device instead, things seem to change.
>> But there can't possibly be any XEN_DOMCTL_assign_device involved in
>> booting of Dom0.
> 
> Indeed. So, do you have an idea of when we should call init_bars that would
> be suitable for both Arm and x86?
> 
> Another question: what would go wrong if x86 and Arm did not call init_bars
> until the moment we actually assign a PCI device to the first guest?

I'd like to answer the latter question first: How would Dom0 use
the device prior to such an assignment? As an implication to the
presumed answer here, I guess init_bars could be deferred up until
the first time Dom0 (or more generally the possessing domain)
accesses any of them. Similarly, devices used by Xen itself could
have this done immediately before first use. This may require
tracking on a per-device basis whether initialization was done.

>>>>>>     In order to do passthrough to domUs safely
>>>>>> we will have to add more handlers than what's required for dom0,
>>>>> Can you please tell what you are thinking about? What are the missing handlers?
>>>>>>     and
>>>>>> having is_hardware_domain sprinkled in all of them is not a suitable
>>>>>> solution.
>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>
>>>>> +/*
>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>> + * but in case of ARM this might not be the case: those may also
>>>>> + * live in driver domains or even Xen itself.
>>>>> + */
>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>> +{
>>>>> +#ifdef CONFIG_X86
>>>>> +    return is_hardware_domain(d);
>>>>> +#elif CONFIG_ARM
>>>>> +    return pci_is_owner_domain(d, seg);
>>>>> +#else
>>>>> +#error "Unsupported architecture"
>>>>> +#endif
>>>>> +}
>>>>> +
>>>>> +/*
>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>> + * domain and for ARM this can be different.
>>>>> + */
>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>> +{
>>>>> +#ifdef CONFIG_X86
>>>>> +    return hardware_domain;
>>>>> +#elif CONFIG_ARM
>>>>> +    return pci_get_owner_domain(seg);
>>>>> +#else
>>>>> +#error "Unsupported architecture"
>>>>> +#endif
>>>>> +}
>>>>>
>>>>> This is what I use to properly detect the domain that really owns the physical host bridge.
>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>> model would better be similar. For example, if entire segments can be
>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>> on x86?
>>> Good question; probably in this case x86 == ARM and I can use
>>> pci_is_owner_domain for both architectures instead of is_hardware_domain for x86.
>>>
>>>> Furthermore I'm suspicious about segments being the right granularity
>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>> the granularity wants to be bus ranges within a segment.
>>> Can you please suggest something we can use as a hint for such a detection logic?
>> The underlying information comes from ACPI tables, iirc. I don't
>> recall the details, though - sorry.
> 
> Ok, so seg + bus should be enough for both ARM and x86 then, right?
> 
> pci_get_hardware_domain(u16 seg, u8 bus)

Whether an individual bus number can suitably express things I can't
tell; I did say bus range, but if you care about just individual
devices, then a single bus number will of course do.
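
A sketch of what bus-range granularity might look like. struct bridge_range, the table and the lookup signature are all hypothetical; the point is only the (segment, bus range) keying instead of per-segment ownership:

```c
#include <stddef.h>
#include <stdint.h>

struct domain { int id; };   /* opaque stand-in for Xen's struct domain */

/* One entry per host bridge: several bridges may share segment 0. */
struct bridge_range {
    uint16_t seg;
    uint8_t  bus_start, bus_end;   /* inclusive bus range behind this bridge */
    struct domain *owner;
};

static struct domain *pci_get_hardware_domain(const struct bridge_range *tbl,
                                              size_t n, uint16_t seg,
                                              uint8_t bus)
{
    for ( size_t i = 0; i < n; i++ )
        if ( tbl[i].seg == seg &&
             bus >= tbl[i].bus_start && bus <= tbl[i].bus_end )
            return tbl[i].owner;
    return NULL;   /* no bridge covers this (seg, bus) */
}
```

The table contents would come from the firmware description (ACPI MCFG / host bridge nodes), just as for MCFG bus ranges.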

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26458.54849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXdb-0003io-Rb; Fri, 13 Nov 2020 11:53:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26458.54849; Fri, 13 Nov 2020 11:53:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXdb-0003ih-OS; Fri, 13 Nov 2020 11:53:07 +0000
Received: by outflank-mailman (input) for mailman id 26458;
 Fri, 13 Nov 2020 11:53:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kdXda-0003ic-F8
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:53:06 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kdXdZ-0006tm-RY; Fri, 13 Nov 2020 11:53:06 +0000
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <fd656848-1eda-686d-d74c-f10e3ecfe49a@xen.org>
 <093f0acd-3ddb-84c7-a06e-c75de90ba288@suse.com>
 <247c2ae8-c1c4-3e0a-3431-82d05bd3be33@xen.org>
 <16fe4b7f-51e3-a8a7-3da7-4b94370058a4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f2517b2a-f4d9-d817-3da2-7f6f6a6ef44d@xen.org>
Date: Fri, 13 Nov 2020 11:53:03 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <16fe4b7f-51e3-a8a7-3da7-4b94370058a4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 13/11/2020 11:26, Jan Beulich wrote:
> On 13.11.2020 12:06, Julien Grall wrote:
>> Hi Jan,
>>
>> On 13/11/2020 10:53, Jan Beulich wrote:
>>> On 13.11.2020 11:36, Julien Grall wrote:
>>>> On 13/11/2020 10:25, Jan Beulich wrote:
>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>>> +                                  void *data)
>>>>>>>>>> +{
>>>>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>>>>> +
>>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>>>>> +
>>>>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>>> +    if ( !vbar )
>>>>>>>>>> +        return ~0;
>>>>>>>>>> +
>>>>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>>> +                               uint32_t val, void *data)
>>>>>>>>>> +{
>>>>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>>>>> +
>>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>>>>> +    else
>>>>>>>>>> +    {
>>>>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>>> +
>>>>>>>>>> +        if ( !vbar )
>>>>>>>>>> +            return;
>>>>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>>>>> +    }
>>>>>>>>>> +}
>>>>>>>>> You should assign different handlers based on whether the domain that
>>>>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>>>>> doing the selection here.
>>>>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>>>>> I think we might want to reset the vPCI handlers when a devices gets
>>>>>>> assigned and deassigned.
>>>>>>
>>>>>> In the ARM case init_bars is called too early: PCI device assignment is
>>>>>> currently initiated by Domain-0's kernel and is done *before* PCI devices
>>>>>> are given memory ranges and BARs assigned:
>>>>>>
>>>>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>>>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>>>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>>>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>>>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>>>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>>>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>>
>>>>>> < init_bars >
>>>>>>
>>>>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>>>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>>
>>>>>> < init_bars >
>>>>>>
>>>>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>>>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>>>
>>>>>> < BAR assignments below >
>>>>>>
>>>>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>>>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>>>>
>>>>>> In case of x86 this is pretty much ok as BARs are already in place, but for
>>>>>> ARM we need to take care and re-set up vPCI BARs for hwdom.
>>>>>
>>>>> Even on x86 there's no guarantee that all devices have their BARs set
>>>>> up by firmware.
>>>>>
>>>>> In a subsequent reply you've suggested to move init_bars from "add" to
>>>>> "assign", but I'm having trouble seeing what this would change: It's
>>>>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>>>>> towards the end of pci_add_device().
>>>>>
>>>>>> Things are getting even more complicated if the host PCI bridge is not
>>>>>> ECAM-like, so you cannot set mmio_handlers and trap hwdom's access to the
>>>>>> config space to update BARs etc. This is why I have that ugly hack for
>>>>>> rcar_gen3 to read the actual BARs for hwdom.
>>>>>
>>>>> How do config space accesses work there? At the latest for MSI/MSI-X it'll
>>>>> be imperative that Xen be able to intercept config space writes.
>>>>
>>>> I am not sure I understand your last sentence. Are you saying that we
>>>> always need to trap accesses to the MSI/MSI-X message in order to sanitize it?
>>>>
>>>> If one is using the GICv3 ITS (I haven't investigated other MSI
>>>> controllers), then I don't believe you need to sanitize the MSI/MSI-X
>>>> message in most situations.
>>>
>>> Well, if it's fine for the guest to write arbitrary values to message
>>> address and message data,
>>
>> The message address would be the doorbell of the ITS that is usually
>> going through the IOMMU page-tables. Although, I am aware of a couple of
>> platforms where the doorbell access (among other address ranges
>> including P2P transaction) bypass the IOMMU. In this situation, we would
>> need a lot more work than just trapping the access.
> 
> When you say "The message address would be the doorbell of the ITS" am
> I right in understanding that's the designated address to be put there?
> What if the guest puts some random different address there?

My point was that all the accesses from a PCI device should go through
the IOMMU, although I know this may not be true for all platforms.

In that case, sanitizing the MSI message address is not going to help,
because a PCI device can DMA into memory ranges that bypass the IOMMU.

> 
>> Regarding the message data, for the ITS this is an event ID. The HW will
>> then tag each message with the device ID (this prevents spoofing). The
>> tuple (device ID, event ID) is used by the ITS to decide where to
>> inject the event.
>>
>> Whether other MSI controllers (e.g. GICv2m) have a similar isolation
>> feature will be a case-by-case matter.
>>
>>> _and_ to arbitrarily enable/disable MSI / MSI-X,
>>> then yes, no interception would be needed.
>> The device would be owned by the guest, so I am not sure I understand
>> the exact problem with letting it enable/disable MSI/MSI-X. Do you
>> mind expanding on your thoughts?
> 
> Question is - is Xen involved in any way in the handling of interrupts
> from such a device? If not, then I guess full control can indeed be
> left with the guest.

Xen will only forward the interrupts to the guest. This is not very 
different to how other interrupts (e.g. SPIs) are dealt with.

So I don't see the problem of giving full control to the guest.

> 
>> Furthermore, you can also control which event is enabled/disabled at the
>> ITS level.
> 
> And that's something Xen controls?

Yes. We only expose a virtual ITS to the guest.
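
As a toy model of the isolation property described above (not actual GIC or Xen code): the hardware tags each MSI with the device ID, so routing is keyed on the (device ID, event ID) pair, and a guest writing arbitrary event IDs can only hit translations installed for its own device:

```c
#include <stdint.h>

#define MAX_MAPPINGS 8

struct its_mapping {
    uint32_t dev_id;    /* tagged by hardware, not guest-controlled */
    uint32_t event_id;  /* guest-controlled MSI message data */
    int      virq;      /* where the event gets injected */
};

static struct its_mapping table[MAX_MAPPINGS];
static int nr_mappings;

/* Install a translation, as the (virtual) ITS mapping commands would. */
static void its_map(uint32_t dev_id, uint32_t event_id, int virq)
{
    if ( nr_mappings < MAX_MAPPINGS )
        table[nr_mappings++] = (struct its_mapping){ dev_id, event_id, virq };
}

/* Return the injected vIRQ, or -1 if the (dev_id, event_id) pair is unmapped. */
static int its_translate(uint32_t dev_id, uint32_t event_id)
{
    for ( int i = 0; i < nr_mappings; i++ )
        if ( table[i].dev_id == dev_id && table[i].event_id == event_id )
            return table[i].virq;
    return -1;
}
```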

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 11:55:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 11:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26465.54861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXfd-0003t4-8C; Fri, 13 Nov 2020 11:55:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26465.54861; Fri, 13 Nov 2020 11:55:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdXfd-0003sx-49; Fri, 13 Nov 2020 11:55:13 +0000
Received: by outflank-mailman (input) for mailman id 26465;
 Fri, 13 Nov 2020 11:55:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdXfc-0003ss-2h
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 11:55:12 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a5e4f57-3089-4d2b-b5d9-a89f281d1c09;
 Fri, 13 Nov 2020 11:55:09 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ADBt2eq008101
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 12:55:03 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 172A52E9CA8; Fri, 13 Nov 2020 12:54:57 +0100 (MET)
Date: Fri, 13 Nov 2020 12:54:57 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113115457.GD1512@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 13 Nov 2020 12:55:03 +0100 (MET)

On Thu, Nov 12, 2020 at 09:19:39PM +0100, Roger Pau Monné wrote:
> The following might be able to get you going, but I think I need to
> refine the logic a bit there, will have to give it some thought.

I also tested with xen devel (Xen version 4.15-unstable, Latest ChangeSet: Wed Nov 4 09:27:22 2020 +0100 git:9ff9705647-dirty).
Your patch is needed there too to avoid the panic.

As with 4.13, I have problems with interrupts not being properly
delivered. The strange thing is that the counter is not 0, but 3 (with 4.13)
or 2 (with 4.15), which would mean that interrupts stop being delivered
at some point in the setup process.

The problematic interrupt is identified as "ioapic2 pin 2" by the NetBSD kernel,
so it's not MSI/MSI-X (not sure it matters though).
Maybe something related to mask/unmask?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 12:42:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 12:42:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26485.54873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdYOt-0008JH-AA; Fri, 13 Nov 2020 12:41:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26485.54873; Fri, 13 Nov 2020 12:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdYOt-0008JA-77; Fri, 13 Nov 2020 12:41:59 +0000
Received: by outflank-mailman (input) for mailman id 26485;
 Fri, 13 Nov 2020 12:41:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdYOs-0008J5-Fw
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 12:41:58 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2557f26f-a62e-43a1-a260-02d9448fce46;
 Fri, 13 Nov 2020 12:41:56 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADCTVXr014580; Fri, 13 Nov 2020 12:41:47 GMT
Received: from eur02-am5-obe.outbound.protection.outlook.com
 (mail-am5eur02lp2058.outbound.protection.outlook.com [104.47.4.58])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80g1ts-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 12:41:46 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5939.eurprd03.prod.outlook.com (2603:10a6:208:15e::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.22; Fri, 13 Nov
 2020 12:41:43 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 12:41:43 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eCVl6jheAeum17sSz+fy/lK3Xzw431c0qvbvp9pH1kEO3RIob+mtLhovbFWif9n9ZWLHGXotc8EKzHev9R2e239pYw8WZMvRkBbNohwcn30MONt7KmV6KItcdk5ZgbyGft9VpRd/wx7UDKLDEh5PogNPO8w0VPcIPEQ1djAxE0P1EplASvEW5lGaEbnXufckBH6DehoXXetf+IiuDCWRTn6CAd93C5hOCgKIoSHa96VU1g5ISdlYeTv39Lf2856X8LRTxqTNJkFDhUBGixVMAV+QpVxyTlxlMMgMCfDEmqod7mv6h4N1adfPG6kOi7Fa7VjwZHbpG5M6X2MpQDxoCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fB6mo+Kpwwh8xD5892sG6TYYo7RkmqiMi2lLmDuyo/Q=;
 b=l8jEz/VzaJQlCwn9MQznxaV3dfgPj59HDPNOeUi6THc7zGruM9mA89m8eFlwq699l/x1NvF/SXssSArvQ1a7cDgoZbReBYg5hudp/KoDIkJjX3RKnYH8w4zciI3MBkuU7ZHazwgPZAQcy8BxXO/rCAzlLeOgusdbQjG3AITMejBpIX68WV6uowIy2USMT/NpZA4+LDKClfF20wGvoCh4AGWo8m7MRUt1VTusMjbcfLn0k27jNKsEPTxYFUCDQ7M2q6tY7tpv9HYOpxidUOeJYIw9LRmDmOWC7tco5A6PnwmLWvwFBS8ECq5rtH/VRgTwiTA8kOPXg3Wrk+nrJqEDjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fB6mo+Kpwwh8xD5892sG6TYYo7RkmqiMi2lLmDuyo/Q=;
 b=EL9XiBmtNMRPhpwKehVlfbZ3KrYhhkBbC7IGL6m4VqIPzM6BGF8VXydCht1dpK74wUsIldFXyPIwFW2OUsgobhaO6Rfg4a5mS+S7rqTSARRNTWsZMtgxktWD3k7Bk6vWBKp7mgQIlgMwz7OPHRpzd1OrUMCjpMqdfIIS+mI8saqpPqpr9MLyWzxU2yF4DRkbYhkQarILwz4SHvntpJkxNbTHn4nkBrIup7j1uyzK1sg6ckD7isk1DJIh8aZigDGo/bMlDQQJ3sCJt9mZs75NONPbZ32OagriVRM1CBHskM5yBLxkaHIBsCtT9BN19Mkp+xafu946Dr5a2so6qqpxRQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgCAAAlOAIAAEnYA
Date: Fri, 13 Nov 2020 12:41:43 +0000
Message-ID: <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
In-Reply-To: <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 02e5a9c2-4179-4e09-4e6c-08d887d179c3
x-ms-traffictypediagnostic: AM0PR03MB5939:
x-microsoft-antispam-prvs: 
 <AM0PR03MB5939A09D1C3F99C5E289F364E7E60@AM0PR03MB5939.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 0j5Ftb5QekEAI+QdLro/L/8IexRzTvUktd0d4kDZylaxtY+aimjUMRvpRXk9cQ+QKrC1cmcbQ9emeDQi7qSis+q/3GMTRBslbnKIc79lWFByYftWC1KSBzCMpspc4dXR7jHt/TQ1igDelcnNnrNN9M735Yo8ZQDoCaGXU9CKW3mamdsPqwp+Hoc1q2NM0dEb/d5DgOCTxJIfSn2i0RGYLoy4lieOVjVOeZ5fsxU4QKmNF1Ldn0cbLwM5x4qeWqtCVTXGl/diyuBjECdc59bD6OTYYs6SSxB8Y7Jtxesg9IBkrqKb7vKdtQU0JqCWMoghmLTsO5dfKVg29wBIiFKZjw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(396003)(376002)(136003)(39860400002)(346002)(31686004)(316002)(66446008)(8936002)(66476007)(4326008)(86362001)(6486002)(66556008)(76116006)(64756008)(66946007)(478600001)(6512007)(8676002)(6916009)(7416002)(71200400001)(2906002)(26005)(186003)(53546011)(31696002)(83380400001)(6506007)(5660300002)(2616005)(54906003)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 uUkG2ulqFBCZOyRHAjUCaYVPozX/bV0qz+HVsYRyDRMZgyMA4OiOTNq3pgAF8+prjcDGC1op7xgnFkEeZ7JIgbJtxgecQabd7tHcqdQZYNNyV5QB5bqcFQlc/exZ3BLlzaiCBlKbFpU8KRvV1gi1lB1t/HWafcIcqst0u5w5tib4phjHkcXaO1FysNn5odpZJ8aFqIFhdy255abW6gjnEhg1dNhCbBi/OuuQuxml/RiXBtOz68ItfgPtEmW9Hkxw2Z+vxFl1gamppvYBWHTuUPSgBVs5ui+mZF28wYeGxG8ostEMORspaawnAud/k4f9zUQJlya7WSnCAfEKlBG7+KZ+sciVPB3RIw4qog0LbPa4nLNeK7sZ+X+FX1Le0ozsJJ9IEjDQKy6lmO8M2dYuc0KpWyJS+zquJyqlQ/ENWzvHt6aWuacx2bsHtE/dGpvrQ0bEKtZdRv2i40rdFIyqliv3XHSHoPsV5K45GMvxvqwzPxE8yYRHcd1tcVX3i2UGz407O8hrgxkIvU7PNCtkJQuJ4EvkZ5heqevjjsRGglqwf5apycOPWXlmtNAvfo1pkLlpi62h0qVFgrrB/e/+JEg0LarvYJHMhBuq1s/Mw3twWK1+FwEfZamiur0BgWMf85H3/qnyqtHd2Brvda9lFw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <7136AAC70D86E54189D0C6044232EF43@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02e5a9c2-4179-4e09-4e6c-08d887d179c3
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 12:41:43.3085
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /LAEV6SnitSFXQzas2CiN7Y+R/efGi7SCT1/2x/CReAMKTyLVT5Uy96jO1g4JOqT3qTsFxvOC45EnYNhMOZzpGNDuRCQqxoOCiqsWM4oy2HGE8IRnJdoQlkRsdEneDjz
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5939
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_10:2020-11-13,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011130078


On 11/13/20 1:35 PM, Jan Beulich wrote:
> On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
>> On 11/13/20 12:50 PM, Jan Beulich wrote:
>>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>> On 11/12/20 4:46 PM, Roger Pau Monné wrote:
>>>>>>> On Thu, Nov 12, 2020 at 01:16:10PM +0000, Oleksandr Andrushchenko wrote:
>>>>>>>> On 11/12/20 11:40 AM, Roger Pau Monné wrote:
>>>>>>>>> On Mon, Nov 09, 2020 at 02:50:27PM +0200, Oleksandr Andrushchenko wrote:
>>>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>>>> +static uint32_t bar_read_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>>> +                                  void *data)
>>>>>>>>>> +{
>>>>>>>>>> +    struct vpci_bar *vbar, *bar = data;
>>>>>>>>>> +
>>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>>> +        return bar_read_hwdom(pdev, reg, data);
>>>>>>>>>> +
>>>>>>>>>> +    vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>>> +    if ( !vbar )
>>>>>>>>>> +        return ~0;
>>>>>>>>>> +
>>>>>>>>>> +    return bar_read_guest(pdev, reg, vbar);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +static void bar_write_dispatch(const struct pci_dev *pdev, unsigned int reg,
>>>>>>>>>> +                               uint32_t val, void *data)
>>>>>>>>>> +{
>>>>>>>>>> +    struct vpci_bar *bar = data;
>>>>>>>>>> +
>>>>>>>>>> +    if ( is_hardware_domain(current->domain) )
>>>>>>>>>> +        bar_write_hwdom(pdev, reg, val, data);
>>>>>>>>>> +    else
>>>>>>>>>> +    {
>>>>>>>>>> +        struct vpci_bar *vbar = get_vpci_bar(current->domain, pdev, bar->index);
>>>>>>>>>> +
>>>>>>>>>> +        if ( !vbar )
>>>>>>>>>> +            return;
>>>>>>>>>> +        bar_write_guest(pdev, reg, val, vbar);
>>>>>>>>>> +    }
>>>>>>>>>> +}
>>>>>>>>> You should assign different handlers based on whether the domain that
>>>>>>>>> has the device assigned is a domU or the hardware domain, rather than
>>>>>>>>> doing the selection here.
>>>>>>>> Hm, handlers are assigned once in init_bars and this function is only called
>>>>>>>>
>>>>>>>> for hwdom, so there is no way I can do that for the guests. Hence, the dispatcher
>>>>>>> I think we might want to reset the vPCI handlers when a devices gets
>>>>>>> assigned and deassigned.
>>>>>> In ARM case init_bars is called too early: PCI device assignment is currently
>>>>>>
>>>>>> initiated by Domain-0' kernel and is done *before* PCI devices are given memory
>>>>>>
>>>>>> ranges and BARs assigned:
>>>>>>
>>>>>> [    0.429514] pci_bus 0000:00: root bus resource [bus 00-ff]
>>>>>> [    0.429532] pci_bus 0000:00: root bus resource [io 0x0000-0xfffff]
>>>>>> [    0.429555] pci_bus 0000:00: root bus resource [mem 0xfe200000-0xfe3fffff]
>>>>>> [    0.429575] pci_bus 0000:00: root bus resource [mem 0x30000000-0x37ffffff]
>>>>>> [    0.429604] pci_bus 0000:00: root bus resource [mem 0x38000000-0x3fffffff pref]
>>>>>> [    0.429670] pci 0000:00:00.0: enabling Extended Tags
>>>>>> [    0.453764] pci 0000:00:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>>
>>>>>> < init_bars >
>>>>>>
>>>>>> [    0.453793] pci 0000:00:00.0: -- IRQ 0
>>>>>> [    0.458825] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>>> [    0.471790] pci 0000:01:00.0: -------------------- BUS_NOTIFY_ADD_DEVICE
>>>>>>
>>>>>> < init_bars >
>>>>>>
>>>>>> [    0.471821] pci 0000:01:00.0: -- IRQ 255
>>>>>> [    0.476809] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
>>>>>>
>>>>>> < BAR assignments below >
>>>>>>
>>>>>> [    0.488233] pci 0000:00:00.0: BAR 14: assigned [mem 0xfe200000-0xfe2fffff]
>>>>>> [    0.488265] pci 0000:00:00.0: BAR 15: assigned [mem 0x38000000-0x380fffff pref]
>>>>>>
>>>>>> In case of x86 this is pretty much ok as BARs are already in place, but for ARM we
>>>>>>
>>>>>> need to take care and re-setup vPCI BARs for hwdom.
>>>>> Even on x86 there's no guarantee that all devices have their BARs set
>>>>> up by firmware.
>>>> This is true. But there you could have config space trapped in "x86 generic way",
>>>>
>>>> please correct me if I'm wrong here
>>>>
>>>>> In a subsequent reply you've suggested to move init_bars from "add" to
>>>>> "assign", but I'm having trouble seeing what this would change: It's
>>>>> not Dom0 controlling assignment (to itself), but Xen assigns the device
>>>>> towards the end of pci_add_device().
>>>> PHYSDEVOP_pci_device_add vs XEN_DOMCTL_assign_device
>>>>
>>>> Currently we initialize BARs during PHYSDEVOP_pci_device_add and
>>>>
>>>> if we do that during XEN_DOMCTL_assign_device things seem to change
>>> But there can't possibly be any XEN_DOMCTL_assign_device involved in
>>> booting of Dom0.
>> Indeed. So, do you have an idea when we should call init_bars suitable
>>
>> for both ARM and x86?
>>
>> Another question is: what happens bad if x86 and ARM won't call init_bars
>>
>> until the moment we really assign a PCI device to the first guest?
> I'd like to answer the latter question first: How would Dom0 use
> the device prior to such an assignment? As an implication to the
> presumed answer here, I guess init_bars could be deferred up until
> the first time Dom0 (or more generally the possessing domain)
> accesses any of them. Similarly, devices used by Xen itself could
> have this done immediately before first use. This may require
> tracking on a per-device basis whether initialization was done.
Ok, I'll try to look into it
>
>>>>>>>      In order to do passthrough to domUs safely
>>>>>>> we will have to add more handlers than what's required for dom0,
>>>>>> Can you please tell what are thinking about? What are the missing handlers?
>>>>>>>      and
>>>>>>> having is_hardware_domain sprinkled in all of them is not a suitable
>>>>>>> solution.
>>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>>
>>>>>> +/*
>>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>>> + * but in case of ARM this might not be the case: those may also
>>>>>> + * live in driver domains or even Xen itself.
>>>>>> + */
>>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>>> +{
>>>>>> +#ifdef CONFIG_X86
>>>>>> +    return is_hardware_domain(d);
>>>>>> +#elif CONFIG_ARM
>>>>>> +    return pci_is_owner_domain(d, seg);
>>>>>> +#else
>>>>>> +#error "Unsupported architecture"
>>>>>> +#endif
>>>>>> +}
>>>>>> +
>>>>>> +/*
>>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>>> + * domain and for ARM this can be different.
>>>>>> + */
>>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>>> +{
>>>>>> +#ifdef CONFIG_X86
>>>>>> +    return hardware_domain;
>>>>>> +#elif CONFIG_ARM
>>>>>> +    return pci_get_owner_domain(seg);
>>>>>> +#else
>>>>>> +#error "Unsupported architecture"
>>>>>> +#endif
>>>>>> +}
>>>>>>
>>>>>> This is what I use to properly detect the domain that really owns physical host bridge
>>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>>> model would better be similar. For example, if entire segments can be
>>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>>> on x86?
>>>> Good question, probably in this case x86 == ARM and I can use
>>>>
>>>> pci_is_owner_domain for both architectures instead of using is_hardware_domain for x86
>>>>
>>>>> Furthermore I'm suspicious about segments being the right granularity
>>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>>> the granularity wants to be bus ranges within a segment.
>>>> Can you please suggest something we can use as a hint for such a detection logic?
>>> The underlying information comes from ACPI tables, iirc. I don't
>>> recall the details, though - sorry.
>> Ok, so seg + bus should be enough for both ARM and Xen then, right?
>>
>> pci_get_hardware_domain(u16 seg, u8 bus)
> Whether an individual bus number can suitable express things I can't
> tell; I did say bus range, but of course if you care about just
> individual devices, then a single bus number will of course do.

I can implement the lookup whether a PCI host bridge owned by a particular

domain with something like:

struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);

return bridge->dt_node->used_by == d->domain_id;

Could you please give me a hint how this can be done on x86?

>
> Jan

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 12:45:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 12:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26489.54885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdYS9-0008Ua-QB; Fri, 13 Nov 2020 12:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26489.54885; Fri, 13 Nov 2020 12:45:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdYS9-0008UT-N4; Fri, 13 Nov 2020 12:45:21 +0000
Received: by outflank-mailman (input) for mailman id 26489;
 Fri, 13 Nov 2020 12:45:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdYS8-0008UO-Da
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 12:45:20 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec49f380-ac12-4d95-ac20-278e2498d20a;
 Fri, 13 Nov 2020 12:45:19 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id y16so10535011ljk.1
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 04:45:19 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id l13sm1664007lje.45.2020.11.13.04.45.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 04:45:17 -0800 (PST)
Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
 <61ea02e0-bdd4-5a0a-dd6f-b22e806e6d1e@suse.com>
 <cd16e1f2-849d-ec12-3325-382b8f6689ff@gmail.com>
 <e08459d9-dd0a-7875-5d12-d374c69fe775@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <9162290b-94c5-295e-3133-71284cd617e1@gmail.com>
Date: Fri, 13 Nov 2020 14:45:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e08459d9-dd0a-7875-5d12-d374c69fe775@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 13.11.20 13:20, Jan Beulich wrote:

Hi Jan.

> On 13.11.2020 12:09, Oleksandr wrote:
>> On 12.11.20 12:58, Jan Beulich wrote:
>>> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>>>> @@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>>>>    
>>>>        domain_pause(d);
>>>>    
>>>> -    p2m_set_ioreq_server(d, 0, s);
>>>> +    arch_hvm_destroy_ioreq_server(s);
>>> Iirc there are plans to rename hvm_destroy_ioreq_server() in the
>>> course of the generalization. If so, this arch hook would imo
>>> better be named following the new scheme right away.
>> Could you please clarify: are you speaking about the plans discussed in
>>
>> "[PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
>> function names"?
>>
>> Copying the text for convenience:
>> At least some of the functions touched here would be nice to be
>> moved to a more consistent new naming scheme right away, to
>> avoid having to touch all the same places again. I guess ioreq
>> server functions would be nice to all start with ioreq_server_
>> and ioreq functions to all start with ioreq_. E.g. ioreq_send()
>> and ioreq_server_select().
>>
>> or some other plans I am not aware of?
>>
>>
>> What I really want to avoid with the IOREQ enabling work is round-trips
>> related to naming things: this patch series
>> became quite big (and consumes some time to rebase and test) and I
>> expect it to become bigger.
>>
>> So the arch_hvm_destroy_ioreq_server() should be
>> arch_ioreq_server_destroy()?
> I think so, yes. If you want to avoid doing full patches, how
> about you simply list the functions / variables you plan to
> rename alongside the intended new names? Would likely be easier
> for all involved parties.
I think it is a good idea. I will prepare a list once I analyze all new 
comments to this series.
As I understand it, only the global IOREQ functions need renaming according 
to the new scheme;
local ones can be left as is, but without "hvm" prefixes, of course?



>
>>>> @@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>>>>        struct hvm_ioreq_server *s;
>>>>        unsigned int id;
>>>>    
>>>> -    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
>>>> +    if ( !arch_hvm_ioreq_destroy(d) )
>>> There's no ioreq being destroyed here, so I think this wants
>>> renaming (and again ideally right away following the planned
>>> new scheme).
>> Agreed that no ioreq is being destroyed here. Probably
>> ioreq_server_check_for_destroy()?
>> I couldn't think of a better name.
> "check" implies no change (and d ought to then be const struct
> domain *). With the containing function likely becoming
> ioreq_server_destroy_all(), arch_ioreq_server_destroy_all()
> would come to mind, or arch_ioreq_server_prepare_destroy_all().

ok, agree


>
>>>> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>>>> +                                                   ioservid_t id,
>>>> +                                                   uint32_t type,
>>>> +                                                   uint32_t flags)
>>>> +{
>>>> +    struct hvm_ioreq_server *s;
>>>> +    int rc;
>>>> +
>>>> +    if ( type != HVMMEM_ioreq_server )
>>>> +        return -EINVAL;
>>>> +
>>>> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>>>> +        return -EINVAL;
>>>> +
>>>> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>> +
>>>> +    s = get_ioreq_server(d, id);
>>>> +
>>>> +    rc = -ENOENT;
>>>> +    if ( !s )
>>>> +        goto out;
>>>> +
>>>> +    rc = -EPERM;
>>>> +    if ( s->emulator != current->domain )
>>>> +        goto out;
>>>> +
>>>> +    rc = p2m_set_ioreq_server(d, flags, s);
>>>> +
>>>> + out:
>>>> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>>> +
>>>> +    if ( rc == 0 && flags == 0 )
>>>> +    {
>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> I realize I may be asking too much, but would it be possible if,
>>> while moving code, you made simple and likely uncontroversial
>>> adjustments like adding const here? (Such adjustments would be
>>> less desirable to make if they increased the size of the patch,
>>> e.g. if you were touching only nearby code.)
>> This function as well as one located below won't be moved to this header
>> for the next version of patch.
>>
>> ok, will add const.
> Well, if you don't move the code, then better keep the diff small
> and leave things as they are.

ok, in case I have to move that code for any reason, I will take the 
suggestions into account.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 13:12:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 13:12:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26497.54897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdYrs-0002g1-14; Fri, 13 Nov 2020 13:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26497.54897; Fri, 13 Nov 2020 13:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdYrr-0002fu-TY; Fri, 13 Nov 2020 13:11:55 +0000
Received: by outflank-mailman (input) for mailman id 26497;
 Fri, 13 Nov 2020 13:11:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdYrq-0002fp-Ir
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 13:11:54 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 781a7d3a-a254-41c6-b81b-c6a2b605b8d8;
 Fri, 13 Nov 2020 13:11:53 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id i17so9520153ljd.3
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 05:11:53 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id g138sm1183740lfd.260.2020.11.13.05.11.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 05:11:50 -0800 (PST)
Subject: Re: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Paul Durrant <paul@xen.org>, Tim Deegan
 <tim@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
 <5aa67969-0571-ee1b-bbe1-0b936a85acd2@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <fecd1dd8-7da7-0071-7f2a-c32ffc90e209@gmail.com>
Date: Fri, 13 Nov 2020 15:11:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5aa67969-0571-ee1b-bbe1-0b936a85acd2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 12.11.20 13:11, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> As a lot of x86 code can be re-used on Arm later on, this patch
>> moves previously prepared x86/hvm/ioreq.c to the common code.
>>
>> The common IOREQ feature is supposed to be built with IOREQ_SERVER
>> option enabled, which is selected for x86's config HVM for now.
>>
>> In order to avoid having a gigantic patch here, the subsequent
>> patches will update remaining bits in the common code step by step:
>> - Make IOREQ related structs/materials common
>> - Drop the "hvm" prefixes and infixes
>> - Remove layering violation by moving corresponding fields
>>    out of *arch.hvm* or abstracting away accesses to them
>>
>> This support is going to be used on Arm to be able to run device
>> emulators outside of the Xen hypervisor.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> ***
>> Please note, this patch depends on the following which is
>> on review:
>> https://patchwork.kernel.org/patch/11816689/
>> ***
>>
>> Changes RFC -> V1:
>>     - was split into three patches:
>>       - x86/ioreq: Prepare IOREQ feature for making it common
>>       - xen/ioreq: Make x86's IOREQ feature common
>>       - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
>>     - update MAINTAINERS file
>>     - do not use a separate subdir for the IOREQ stuff, move it to:
>>       - xen/common/ioreq.c
>>       - xen/include/xen/ioreq.h
>>     - update x86's files to include xen/ioreq.h
>>     - remove unneeded headers in arch/x86/hvm/ioreq.c
>>     - re-order the headers alphabetically in common/ioreq.c
>>     - update common/ioreq.c according to the newly introduced arch functions:
>>       arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
>>
>> Changes V1 -> V2:
>>     - update patch description
>>     - make everything needed in the previous patch to achieve
>>       a true rename here
>>     - don't include unnecessary headers from asm-x86/hvm/ioreq.h
>>       and xen/ioreq.h
>>     - use __XEN_IOREQ_H__ instead of __IOREQ_H__
>>     - move get_ioreq_server() to common/ioreq.c
>> ---
>>   MAINTAINERS                     |    8 +-
>>   xen/arch/x86/Kconfig            |    1 +
>>   xen/arch/x86/hvm/Makefile       |    1 -
>>   xen/arch/x86/hvm/ioreq.c        | 1422 ---------------------------------------
>>   xen/arch/x86/mm.c               |    2 +-
>>   xen/arch/x86/mm/shadow/common.c |    2 +-
>>   xen/common/Kconfig              |    3 +
>>   xen/common/Makefile             |    1 +
>>   xen/common/ioreq.c              | 1422 +++++++++++++++++++++++++++++++++++++++
>>   xen/include/asm-x86/hvm/ioreq.h |   39 +-
>>   xen/include/xen/ioreq.h         |   71 ++
>>   11 files changed, 1509 insertions(+), 1463 deletions(-)
>>   delete mode 100644 xen/arch/x86/hvm/ioreq.c
>>   create mode 100644 xen/common/ioreq.c
>>   create mode 100644 xen/include/xen/ioreq.h
> Iirc I've previously asked to make sure the diff here gets created with
> git's rename detection enabled, so we wouldn't see 1422 lines deleted
> and 1422 lines added, _hoping_ they're all the same (or going through
> the extra steps needed to compare old and new code), but instead seeing
> just the diff between old and new files (which in the ideal case would
> then be empty).

This is my fault; I misread it last time. I have just rechecked this 
patch.

git config diff.renames false

  MAINTAINERS                     |    8 +-
  xen/arch/x86/Kconfig            |    1 +
  xen/arch/x86/hvm/Makefile       |    1 -
  xen/arch/x86/hvm/ioreq.c        | 1544 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  xen/arch/x86/mm.c               |    2 +-
  xen/arch/x86/mm/shadow/common.c |    2 +-
  xen/common/Kconfig              |    3 +
  xen/common/Makefile             |    1 +
  xen/common/ioreq.c              | 1544 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  xen/include/asm-x86/hvm/ioreq.h |   38 +----
  xen/include/xen/ioreq.h         |   73 +++++++++
  11 files changed, 1633 insertions(+), 1584 deletions(-)

git config diff.renames true

  MAINTAINERS                          |  8 +++++++-
  xen/arch/x86/Kconfig                 |  1 +
  xen/arch/x86/hvm/Makefile            |  1 -
  xen/arch/x86/mm.c                    |  2 +-
  xen/arch/x86/mm/shadow/common.c      |  2 +-
  xen/common/Kconfig                   |  3 +++
  xen/common/Makefile                  |  1 +
  xen/{arch/x86/hvm => common}/ioreq.c |  0
  xen/include/asm-x86/hvm/ioreq.h      | 38 
++------------------------------------
  xen/include/xen/ioreq.h              | 73 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  10 files changed, 89 insertions(+), 40 deletions(-)

Although it is no longer needed for the patch in question (since there 
won't be a rename here), I will try to keep that in mind for future 
patches.
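For anyone wanting to reproduce the difference locally, here is a self-contained sketch (throwaway temporary repository, illustrative file contents; assumes git and coreutils `seq` are available) showing how rename detection changes the generated diffstat for a pure file move like the one in this series:

```shell
#!/bin/sh
# Demonstrate git rename detection: the same file move shows up either as
# a full delete+add, or as a zero-line rename, depending on --no-renames
# vs -M (or the diff.renames config setting).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo

mkdir -p xen/arch/x86/hvm xen/common
seq 1 100 > xen/arch/x86/hvm/ioreq.c      # stand-in file contents
git add -A && git commit -qm "initial"

git mv xen/arch/x86/hvm/ioreq.c xen/common/ioreq.c
git commit -qm "move ioreq.c to common"

echo "--- without rename detection ---"
git diff --stat --no-renames HEAD~1      # 100 insertions, 100 deletions
echo "--- with rename detection ---"
git diff --stat -M HEAD~1                # rename line, 0 changes
```

The same effect applies to `git format-patch`, which also honours `-M` / `diff.renames`, so a series can be regenerated with rename detection without touching the commits themselves.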


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:05:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:05:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26507.54913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdZhQ-0007CR-7A; Fri, 13 Nov 2020 14:05:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26507.54913; Fri, 13 Nov 2020 14:05:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdZhQ-0007CK-33; Fri, 13 Nov 2020 14:05:12 +0000
Received: by outflank-mailman (input) for mailman id 26507;
 Fri, 13 Nov 2020 14:05:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdZhO-0007Bo-Hd
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:05:10 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f643f137-03c3-4a7b-a56b-705aebe3cd3f;
 Fri, 13 Nov 2020 14:05:09 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id i6so14002567lfd.1
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 06:05:09 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1579328lfe.250.2020.11.13.06.05.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 06:05:07 -0800 (PST)
X-Inumbo-ID: f643f137-03c3-4a7b-a56b-705aebe3cd3f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=mMj35E/VQ8uLKKRkNiHDGMQMPmF+Wb2F4IEVPt8+jSk=;
        b=DTSFDGEYsfYXHt3d371K8Re8IvfQsP8xlz0n15SeX2myGOQ6gAFLhle7y4egGSGTEP
         OFAMpM5GfYuZ8M47oVKIAN1tfh6N6tBE2iNWWb6W+xqMhrr+Jtj/DxPT1WHUQK/09eb9
         zaiUUD6REZrBdi3+izPWNvkDmSPccIpx3//87PvBbc4+hYwoviwPErCAlJNzM8P7TSZX
         JXZC0+ukkz8edUFnSFjO9ZFrZCR1ikYB/BRsK7XFQwpUIwTB5GkXsmUR2ihzMaDGuCy1
         3Ld2Vpjw/l7jeJrmqs9f66UX4KkwMEVK5pInYcsMECxDM6z2Vl58bY4IhqCvVjI7E0f0
         SnIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=mMj35E/VQ8uLKKRkNiHDGMQMPmF+Wb2F4IEVPt8+jSk=;
        b=rwFclX6chbnzfzR6TFzAbEqPlLTme7cE7HnSrIDj3HKr+8Ahbz1sPxaR2hLKiCAkfz
         MshF72whEbcd/ly4v0wa6aJi1NcLbRq/i30fITxrCFNbHway0dVteeDf625FygKhz8Nj
         GX8fdXGgLZSuH+6Q0cUbqx7XJ7UBulOiJSvx7CsYB5onORDvgJ8u3agdJkcHtwvfft4n
         e7FMJ4rgSOvQFDJbLtV4MhrPmYADrkKS6Z5E+INzqDBJ6aymO0YPiUr3X+nUujhJlJqd
         EG363JITs/rzDSB369nU1BSavzPhz3IOlOnxeVWLzwtvRpuMe8mM8h/6Y1dPfn/fm8C+
         IXTA==
X-Gm-Message-State: AOAM530m2jCjNeU9oVBMF0HR4DHBCFcZyMw+VcRLs98U/Wx0QAIHtnG6
	y36LrlUYrTyGJLCa7mDArWI1EfwlO07ZHw==
X-Google-Smtp-Source: ABdhPJzCHAeZbuG1W/40sARjSrPd47L0COXnZFXclD41xKPIn/1PUVrr0cWM6O8LaBYkz9/HvmiZpA==
X-Received: by 2002:a19:c8cc:: with SMTP id y195mr852365lff.225.1605276308392;
        Fri, 13 Nov 2020 06:05:08 -0800 (PST)
Subject: Re: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to
 struct domain
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-8-git-send-email-olekstysh@gmail.com>
 <00f9046f-5c77-cee5-b201-aa01f880d4e7@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <d4d034ff-08cb-3423-5546-a19d601d09f8@gmail.com>
Date: Fri, 13 Nov 2020 16:05:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <00f9046f-5c77-cee5-b201-aa01f880d4e7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 13:21, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/asm-x86/hvm/ioreq.h
>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>> @@ -77,7 +77,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>>       if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>>           return -EINVAL;
>>   
>> -    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>> +    spin_lock_recursive(&d->ioreq_server.lock);
>>   
>>       s = get_ioreq_server(d, id);
>>   
>> @@ -92,7 +92,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
>>       rc = p2m_set_ioreq_server(d, flags, s);
>>   
>>    out:
>> -    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>> +    spin_unlock_recursive(&d->ioreq_server.lock);
>>   
>>       if ( rc == 0 && flags == 0 )
>>       {
>
> Does this build at this point, when !CONFIG_IOREQ_SERVER? Patch 1
> moves the code here without guards, and patch 2, when introducing
> the Kconfig symbol, doesn't add guards here. I admit I didn't
> check further intermediate patches.
Yes.


I can confirm I checked the x86 build patch by patch with 
CONFIG_IOREQ_SERVER enabled. For !CONFIG_IOREQ_SERVER I can't recall 
with 100% certainty, but I most likely tested that patch by patch as 
well. Anyway, I have just rechecked, and it builds.
It probably builds because this header isn't used at all with 
!CONFIG_IOREQ_SERVER, since all of its users are x86/hvm/* and 
common/ioreq.c.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:21:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26515.54925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdZwl-0000Vm-Lt; Fri, 13 Nov 2020 14:21:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26515.54925; Fri, 13 Nov 2020 14:21:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdZwl-0000Vf-IE; Fri, 13 Nov 2020 14:21:03 +0000
Received: by outflank-mailman (input) for mailman id 26515;
 Fri, 13 Nov 2020 14:19:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EgAG=ET=amazon.de=prvs=579e99c79=doebel@srs-us1.protection.inumbo.net>)
 id 1kdZvA-0008EU-NP
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:19:25 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21da5738-23df-4304-a15c-d31f7b61fd64;
 Fri, 13 Nov 2020 14:19:23 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2c-4e7c8266.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 13 Nov 2020 14:19:01 +0000
Received: from EX13D37EUB004.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2c-4e7c8266.us-west-2.amazon.com (Postfix) with ESMTPS
 id F3D6BA0819; Fri, 13 Nov 2020 14:19:00 +0000 (UTC)
Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by
 EX13D37EUB004.ant.amazon.com (10.43.166.187) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 13 Nov 2020 14:18:59 +0000
Received: from dev-dsk-doebel-2a-b41c32f5.us-west-2.amazon.com (172.19.225.92)
 by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 13 Nov 2020 14:18:58 +0000
Received: by dev-dsk-doebel-2a-b41c32f5.us-west-2.amazon.com (Postfix,
 from userid 3160037)
 id 1FE8EA27B6; Fri, 13 Nov 2020 14:18:57 +0000 (UTC)
X-Inumbo-ID: 21da5738-23df-4304-a15c-d31f7b61fd64
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1605277163; x=1636813163;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=uEC5EXOc4YcDJzP4U01LmoW19H30dbThY1jdb+RYew8=;
  b=rG5BmM2/12ebjmGn0qKQtjHPY3qSIbT8sydr905NI8H8upBsgWq+GNqY
   SdQpZ9iWwx/X2J4N5M082D9c5CPFAFHmKSvSfOUuEzxX9YiJlot7NBK1l
   4nvbhDCzTZjoxmapaEsMiu6rT/Ytei4VA+E5z3Vo9PLLcXOMDLfMnda1Z
   c=;
X-IronPort-AV: E=Sophos;i="5.77,475,1596499200"; 
   d="scan'208";a="64882752"
From: Bjoern Doebel <doebel@amazon.de>
To: <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Bjoern Doebel <doebel@amazon.de>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
Date: Fri, 13 Nov 2020 14:18:23 +0000
Message-ID: <20201113141823.58712-1-doebel@amazon.de>
X-Mailer: git-send-email 2.16.6
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Right now we do not have a mechanism to determine the version of the
currently running xenstored at runtime. As xenstored runs throughout the
lifetime of a Xen host, this may lead to problems when newer user space
builds are staged. Then, the running xenstored will no longer match the
version of the installed xenstored.

To allow users to always identify the running version of xenstored, add
a linker-generated unique build ID to every xenstored build. Add
functionality to log this build ID into a file upon service startup.

Signed-off-by: Bjoern Doebel <doebel@amazon.de>
Reviewed-by: Martin Mazein <amazein@amazon.de>
Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>
---
 tools/hotplug/Linux/launch-xenstore.in |  2 +-
 tools/xenstore/Makefile                |  4 +++
 tools/xenstore/buildid_symbols.ld      | 10 +++++++
 tools/xenstore/xenstored_core.c        |  8 ++++++
 tools/xenstore/xenstored_core.h        |  3 ++
 tools/xenstore/xenstored_minios.c      |  4 +++
 tools/xenstore/xenstored_posix.c       | 52 ++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 82 insertions(+), 1 deletion(-)
 create mode 100644 tools/xenstore/buildid_symbols.ld

diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
index 991dec8d25..a6f2254030 100644
--- a/tools/hotplug/Linux/launch-xenstore.in
+++ b/tools/hotplug/Linux/launch-xenstore.in
@@ -62,7 +62,7 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 	}
 
 	echo -n Starting $XENSTORED...
-	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
+	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid --buildid-file @XEN_RUN_DIR@/xenstored.buildid $XENSTORED_ARGS
 
 	systemd-notify --booted 2>/dev/null || timeout_xenstore $XENSTORED || exit 1
 
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index 9a0f0d012d..c63350980b 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -66,6 +66,10 @@ $(XENSTORED_OBJS): CFLAGS += $(SYSTEMD_CFLAGS)
 xenstored: LDFLAGS += $(SYSTEMD_LIBS)
 endif
 
+# xenstored: enforce creation of a buildID section and use a linker
+# script to add additional symbols around that section
+xenstored: LDFLAGS +=  -Wl,--build-id=sha1 -Wl,-T,buildid_symbols.ld
+
 $(XENSTORED_OBJS): CFLAGS += $(CFLAGS_libxengnttab)
 
 xenstored: $(XENSTORED_OBJS)
diff --git a/tools/xenstore/buildid_symbols.ld b/tools/xenstore/buildid_symbols.ld
new file mode 100644
index 0000000000..d74024c4e9
--- /dev/null
+++ b/tools/xenstore/buildid_symbols.ld
@@ -0,0 +1,10 @@
+SECTIONS
+{
+       __buildid_note_section = . ;
+       .note.gnu.build-id :
+       {
+               *(.note.gnu.build-id)
+       }
+       __buildid_end = . ;
+}
+INSERT AFTER .data
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index b4be374d3f..c6f107bdd9 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1844,6 +1844,7 @@ static void usage(void)
 
 
 static struct option options[] = {
+	{ "buildid-file", 1, NULL, 'B' },
 	{ "no-domain-init", 0, NULL, 'D' },
 	{ "entry-nb", 1, NULL, 'E' },
 	{ "pid-file", 1, NULL, 'F' },
@@ -1875,12 +1876,16 @@ int main(int argc, char *argv[])
 	bool outputpid = false;
 	bool no_domain_init = false;
 	const char *pidfile = NULL;
+	const char *buildid_file = NULL;
 	int timeout;
 
 
 	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:T:RVW:", options,
 				  NULL)) != -1) {
 		switch (opt) {
+		case 'B':
+			buildid_file = optarg;
+			break;
 		case 'D':
 			no_domain_init = true;
 			break;
@@ -1948,6 +1953,9 @@ int main(int argc, char *argv[])
 	if (pidfile)
 		write_pidfile(pidfile);
 
+	if (buildid_file)
+		write_buildid_file(buildid_file);
+
 	/* Talloc leak reports go to stderr, which is closed if we fork. */
 	if (!dofork)
 		talloc_enable_leak_report_full();
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1df6ad94ab..712280626c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -193,6 +193,9 @@ void xenbus_notify_running(void);
 /* Write out the pidfile */
 void write_pidfile(const char *pidfile);
 
+/* Write the buildid file */
+void write_buildid_file(const char *buildidfile);
+
 /* Fork but do not close terminal FDs */
 void daemonize(void);
 /* Close stdin/stdout/stderr to complete daemonize */
diff --git a/tools/xenstore/xenstored_minios.c b/tools/xenstore/xenstored_minios.c
index c94493e52a..ef1151aee4 100644
--- a/tools/xenstore/xenstored_minios.c
+++ b/tools/xenstore/xenstored_minios.c
@@ -24,6 +24,10 @@ void write_pidfile(const char *pidfile)
 {
 }
 
+void write_buildid_file(const char *buildid_file)
+{
+}
+
 void daemonize(void)
 {
 }
diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
index 1f9603fea2..ec017611d6 100644
--- a/tools/xenstore/xenstored_posix.c
+++ b/tools/xenstore/xenstored_posix.c
@@ -20,6 +20,7 @@
 #include <sys/stat.h>
 #include <unistd.h>
 #include <fcntl.h>
+#include <stdint.h>
 #include <stdlib.h>
 #include <sys/mman.h>
 
@@ -48,6 +49,57 @@ void write_pidfile(const char *pidfile)
 	close(fd);
 }
 
+/*
+ * We don't have a working elf.h available here, so let's define our very own
+ * data structs and accessor macros for ELF notes.
+ *
+ * https://docs.oracle.com/cd/E23824_01/html/819-0690/chapter6-18048.html:
+ * For 64–bit objects and 32–bit objects, each entry is an array of 4-byte
+ * words in the format of the target processor.
+ */
+typedef struct
+{
+	uint32_t namesz;
+	uint32_t descsz;
+	uint32_t type;
+} elf_note_hdr;
+
+/* ELF Note accessors, copied from Xen's elf.h */
+#define ELFNOTE_ALIGN(_n_) (((_n_)+3)&~3)
+#define ELFNOTE_NAME(_n_) ((char*)(_n_) + sizeof(*(_n_)))
+#define ELFNOTE_DESC(_n_) (ELFNOTE_NAME(_n_) + ELFNOTE_ALIGN((_n_)->namesz))
+/* GNU LD: type == note (NT_GNU_BUILD_ID as in
+ * https://sourceware.org/ml/binutils/2007-07/msg00012.html)*/
+#define NT_GNU_BUILD_ID 3
+
+
+void write_buildid_file(const char *buildid_file)
+{
+	unsigned int i = 0;
+	FILE *fdesc;
+	extern elf_note_hdr __buildid_note_section;
+	unsigned int id_length = __buildid_note_section.descsz;
+	char* desc = ELFNOTE_DESC(&__buildid_note_section);
+
+	if (__buildid_note_section.type != NT_GNU_BUILD_ID)
+		barf("Expected GNU_BUILDID note, but found type '%d'",
+		     __buildid_note_section.type);
+
+	fdesc = fopen(buildid_file, "w+");
+	if (!fdesc)
+		barf_perror("Error opening buildid file %s", buildid_file);
+
+	/* We exit silently if daemon already running. */
+	if (lockf(fileno(fdesc), F_TLOCK, 0) == -1)
+		exit(0);
+
+	for (i = 0; i < id_length; ++i)
+		fprintf(fdesc, "%02x", (unsigned char)desc[i]);
+	fprintf(fdesc, "\n");
+
+	fclose(fdesc);
+}
+
 /* Stevens. */
 void daemonize(void)
 {
-- 
2.16.6




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:24:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:24:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26521.54937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda0M-0000hM-5g; Fri, 13 Nov 2020 14:24:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26521.54937; Fri, 13 Nov 2020 14:24:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda0M-0000hF-2h; Fri, 13 Nov 2020 14:24:46 +0000
Received: by outflank-mailman (input) for mailman id 26521;
 Fri, 13 Nov 2020 14:24:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kda0K-0000hA-9g
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:24:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d973e31d-b920-49f0-855c-07f7f6ad7c10;
 Fri, 13 Nov 2020 14:24:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 44092AC91;
 Fri, 13 Nov 2020 14:23:42 +0000 (UTC)
X-Inumbo-ID: d973e31d-b920-49f0-855c-07f7f6ad7c10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605277422; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4Yf5p8U/qhB1Unc/et2evPtE34U5QvhfbEqJebhm0P4=;
	b=jy6z/lbLSFycvgGEPdPOvNxwzFidAkD/LBCENIm1d7ruKNDAZjknkErI+qba3JsLfQX/xB
	Mr3DUDJbz439yDwXAF1y0h7P3ytO4+udvWzclcQfXBDKJhz/+wg8zxA823abrEsQNlAAxv
	+yRtdUqK4qMD7vJGHVNLgIMKTgSx/pQ=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
Date: Fri, 13 Nov 2020 15:23:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 13:41, Oleksandr Andrushchenko wrote:
> 
> On 11/13/20 1:35 PM, Jan Beulich wrote:
>> On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 12:50 PM, Jan Beulich wrote:
>>>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>>>
>>>>>>> +/*
>>>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>>>> + * but in case of ARM this might not be the case: those may also
>>>>>>> + * live in driver domains or even Xen itself.
>>>>>>> + */
>>>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>>>> +{
>>>>>>> +#ifdef CONFIG_X86
>>>>>>> +    return is_hardware_domain(d);
>>>>>>> +#elif CONFIG_ARM
>>>>>>> +    return pci_is_owner_domain(d, seg);
>>>>>>> +#else
>>>>>>> +#error "Unsupported architecture"
>>>>>>> +#endif
>>>>>>> +}
>>>>>>> +
>>>>>>> +/*
>>>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>>>> + * domain and for ARM this can be different.
>>>>>>> + */
>>>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>>>> +{
>>>>>>> +#ifdef CONFIG_X86
>>>>>>> +    return hardware_domain;
>>>>>>> +#elif CONFIG_ARM
>>>>>>> +    return pci_get_owner_domain(seg);
>>>>>>> +#else
>>>>>>> +#error "Unsupported architecture"
>>>>>>> +#endif
>>>>>>> +}
>>>>>>>
>>>>>>> This is what I use to properly detect the domain that really owns physical host bridge
>>>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>>>> model would better be similar. For example, if entire segments can be
>>>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>>>> on x86?
>>>>> Good question, probably in this case x86 == ARM and I can use
>>>>>
>>>>> pci_is_owner_domain for both architectures instead of using is_hardware_domain for x86
>>>>>
>>>>>> Furthermore I'm suspicious about segments being the right granularity
>>>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>>>> the granularity wants to be bus ranges within a segment.
>>>>> Can you please suggest something we can use as a hint for such a detection logic?
>>>> The underlying information comes from ACPI tables, iirc. I don't
>>>> recall the details, though - sorry.
>>> Ok, so seg + bus should be enough for both ARM and Xen then, right?
>>>
>>> pci_get_hardware_domain(u16 seg, u8 bus)
>> Whether an individual bus number can suitable express things I can't
>> tell; I did say bus range, but of course if you care about just
>> individual devices, then a single bus number will of course do.
> 
> I can implement the lookup whether a PCI host bridge owned by a particular
> 
> domain with something like:
> 
> struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);
> 
> return bridge->dt_node->used_by == d->domain_id;
> 
> Could you please give me a hint how this can be done on x86?

Bridges can't be assigned to anything other than the hardware domain
right now. Earlier on I didn't say you should get this to work, only
that I think the general logic around what you add shouldn't make
things more arch specific than they really should be. That said,
something similar to the above should still be doable on x86,
utilizing struct pci_seg's bus2bridge[]. There ought to be
DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
(provided by the CPUs themselves rather than the chipset) aren't
really host bridges for the purposes you're after.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26527.54949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda3F-0000qn-Kq; Fri, 13 Nov 2020 14:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26527.54949; Fri, 13 Nov 2020 14:27:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda3F-0000qg-Hn; Fri, 13 Nov 2020 14:27:45 +0000
Received: by outflank-mailman (input) for mailman id 26527;
 Fri, 13 Nov 2020 14:27:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kda3E-0000qW-3L
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:27:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb9543bd-ea17-4907-a2a2-5cf5329ff015;
 Fri, 13 Nov 2020 14:27:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79C8CABD9;
 Fri, 13 Nov 2020 14:27:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Bjoern Doebel <doebel@amazon.de>
Cc: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201113141823.58712-1-doebel@amazon.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b61119da-b6e8-7746-9298-54bf60da88ea@suse.com>
Date: Fri, 13 Nov 2020 15:27:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201113141823.58712-1-doebel@amazon.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.11.2020 15:18, Bjoern Doebel wrote:
> --- a/tools/xenstore/Makefile
> +++ b/tools/xenstore/Makefile
> @@ -66,6 +66,10 @@ $(XENSTORED_OBJS): CFLAGS += $(SYSTEMD_CFLAGS)
>  xenstored: LDFLAGS += $(SYSTEMD_LIBS)
>  endif
>  
> +# xenstored: enforce creation of a buildID section and use a linker
> +# script to add additional symbols around that section
> +xenstored: LDFLAGS +=  -Wl,--build-id=sha1 -Wl,-T,buildid_symbols.ld

Since in the hypervisor build we're careful to not use --build-id
when the linker doesn't support it, I suppose the same care needs
applying here. See the setting of build_id_linker in ./Config.mk.
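A sketch of what such a guard could look like in the xenstore Makefile,
assuming the tools build imports the `build_id_linker` variable that
./Config.mk leaves empty when the linker lacks --build-id support (the exact
probe and variable plumbing in Config.mk may differ):

```make
# Sketch only: rely on Config.mk's probe rather than hard-coding the flag,
# so linkers without --build-id support still link xenstored.
ifneq ($(build_id_linker),)
xenstored: LDFLAGS += -Wl,$(build_id_linker) -Wl,-T,buildid_symbols.ld
endif
```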

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:29:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:29:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26536.54961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda4T-0000zs-1L; Fri, 13 Nov 2020 14:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26536.54961; Fri, 13 Nov 2020 14:29:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda4S-0000zl-Ss; Fri, 13 Nov 2020 14:29:00 +0000
Received: by outflank-mailman (input) for mailman id 26536;
 Fri, 13 Nov 2020 14:29:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kda4S-0000zY-B1
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:29:00 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfef669a-5725-4ab2-bca0-be643314ca98;
 Fri, 13 Nov 2020 14:28:54 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id 74so14086353lfo.5
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 06:28:54 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id u9sm1574268lfo.181.2020.11.13.06.28.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 06:28:52 -0800 (PST)
X-Gm-Message-State: AOAM531BZ4HxT1t8UviOsmLOlAt5tR40Aao3noqu/sgHOSoki/WRHF5H
	bT1eIO1Rxjp1lPlXq6l5qHlaB2/8bjZgCA==
X-Google-Smtp-Source: ABdhPJywFRi+yrc2L+35uSWOjxJRqQ+5uutGWGQuJUEZq9EZsY+N4QJalzTq6JgP4Feqx8P8sV7fkA==
X-Received: by 2002:a19:2389:: with SMTP id j131mr1014656lfj.324.1605277733458;
        Fri, 13 Nov 2020 06:28:53 -0800 (PST)
Subject: Re: [PATCH V2 09/23] xen/dm: Make x86's DM feature common
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-10-git-send-email-olekstysh@gmail.com>
 <3f432fdb-0625-4803-3a16-62200a6264ca@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <51dcd1c7-3a1d-a269-12df-84db3bafecce@gmail.com>
Date: Fri, 13 Nov 2020 16:28:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3f432fdb-0625-4803-3a16-62200a6264ca@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 13:32, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> As a lot of x86 code can be re-used on Arm later on, this patch
>> splits devicemodel support into common and arch specific parts.
>>
>> The common DM feature is supposed to be built with the IOREQ_SERVER
>> option enabled (as well as the IOREQ feature), which is selected
>> for x86's config HVM for now.
> Did you consider doing it the other way around? It would seem
> more natural to have the top level dm-op handling arch-specific
> and call into e.g. ioreq_server_dm_op() for otherwise unhandled
> ops, just like e.g. do_domctl() calls into iommu_do_domctl()
> (indirectly via arch_do_domctl()). I ask because in the long
> run I expect the ioreq server sub-ops to only be a small part
> of the overall set of dm-ops; already now it's 7 out of 18 if
> I got the counting right.
>
> This would then also leave compat_dm_op() in x86 code.
>
> But yes, there are also downsides with this alternative.


No, I didn't consider that. I separated the proposed DM changes from 
Julien's patch without modifying the logic.
My changes on top (except rebasing, of course) update the XSM code and 
introduce xen/dm.h for the definitions.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26540.54972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda64-0001pi-Ez; Fri, 13 Nov 2020 14:30:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26540.54972; Fri, 13 Nov 2020 14:30:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda64-0001pb-Bg; Fri, 13 Nov 2020 14:30:40 +0000
Received: by outflank-mailman (input) for mailman id 26540;
 Fri, 13 Nov 2020 14:30:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uoW6=ET=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kda63-0001pW-BB
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:30:39 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab0613d7-a5ef-4e46-befe-8b7e9a70e382;
 Fri, 13 Nov 2020 14:30:38 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: None
X-MesageID: 31098301
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,475,1596513600"; 
   d="scan'208";a="31098301"
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Bjoern Doebel <doebel@amazon.de>, <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201113141823.58712-1-doebel@amazon.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <de06e7ce-65cd-95fb-5862-0135e2110a99@citrix.com>
Date: Fri, 13 Nov 2020 14:30:23 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201113141823.58712-1-doebel@amazon.de>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 13/11/2020 14:18, Bjoern Doebel wrote:
> Right now we do not have a mechanism to determine the version of the
> currently running xenstored at runtime. As xenstored runs throughout the
> lifetime of a Xen host, this may lead to problems when newer user space
> builds are staged. Then, the running xenstored will no longer match the
> version of the installed xenstored.
>
> To allow users to always identify the running version of xenstored, add
> a linker-generated unique build ID to every xenstored build. Add
> functionality to log this build ID into a file upon service startup.
>
> Signed-off-by: Bjoern Doebel <doebel@amazon.de>
> Reviewed-by: Martin Mazein <amazein@amazon.de>
> Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>

I understand the problem you're trying to solve, but why is this
anything more than just enabling build-ids by default across tools/ ?

There are already standard ways of interacting with the build id of
running executables on the system.  I'd strongly discourage doing
anything custom in xenstored specifically.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:31:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26545.54984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda6k-0001wL-OJ; Fri, 13 Nov 2020 14:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26545.54984; Fri, 13 Nov 2020 14:31:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda6k-0001wE-LD; Fri, 13 Nov 2020 14:31:22 +0000
Received: by outflank-mailman (input) for mailman id 26545;
 Fri, 13 Nov 2020 14:31:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kda6j-0001w5-15
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:31:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 44a1fc76-e62a-4a8f-b62a-d7187e4b7003;
 Fri, 13 Nov 2020 14:31:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E8A70ABD9;
 Fri, 13 Nov 2020 14:31:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it
 common
To: Oleksandr <olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
 <61ea02e0-bdd4-5a0a-dd6f-b22e806e6d1e@suse.com>
 <cd16e1f2-849d-ec12-3325-382b8f6689ff@gmail.com>
 <e08459d9-dd0a-7875-5d12-d374c69fe775@suse.com>
 <9162290b-94c5-295e-3133-71284cd617e1@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb60cbdf-1148-c304-6d01-aaa7594f795c@suse.com>
Date: Fri, 13 Nov 2020 15:31:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <9162290b-94c5-295e-3133-71284cd617e1@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.11.2020 13:45, Oleksandr wrote:
> On 13.11.20 13:20, Jan Beulich wrote:
>> On 13.11.2020 12:09, Oleksandr wrote:
>>> On 12.11.20 12:58, Jan Beulich wrote:
>>>> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>>>>> @@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>>>>>    
>>>>>        domain_pause(d);
>>>>>    
>>>>> -    p2m_set_ioreq_server(d, 0, s);
>>>>> +    arch_hvm_destroy_ioreq_server(s);
>>>> Iirc there are plans to rename hvm_destroy_ioreq_server() in the
>>>> course of the generalization. If so, this arch hook would imo
>>>> better be named following the new scheme right away.
>>> Could you please clarify, are you speaking about the plans discussed there
>>>
>>> "[PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
>>> function names"?
>>>
>>> Copying the text for convenience:
>>> At least some of the functions touched here would be nice to be
>>> moved to a more consistent new naming scheme right away, to
>>> avoid having to touch all the same places again. I guess ioreq
>>> server functions would be nice to all start with ioreq_server_
>>> and ioreq functions to all start with ioreq_. E.g. ioreq_send()
>>> and ioreq_server_select().
>>>
>>> or some other plans I am not aware of?
>>>
>>>
>>> What I really want to avoid with the IOREQ enabling work is the
>>> round-trips related to naming things; this patch series
>>> became quite big (and consumes some time to rebase and test) and I
>>> expect it to become bigger.
>>>
>>> So the arch_hvm_destroy_ioreq_server() should be
>>> arch_ioreq_server_destroy()?
>> I think so, yes. If you want to avoid doing full patches, how
>> about you simply list the functions / variables you plan to
>> rename alongside the intended new names? Would likely be easier
>> for all involved parties.
> I think it is a good idea. I will prepare a list once I analyze all new 
> comments to this series.
> As I understand it, only global IOREQ functions need renaming according 
> to the new scheme;
> local ones can be left as-is, but without "hvm" prefixes of course?

Please apply common sense: static ones, if you have to drop an
hvm_ prefix, may in some cases be better renamed further as well,
when their names aren't really in line with their purpose
(anymore). But yes, following a consistent naming model is more
relevant for non-static functions.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:33:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:33:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26552.54997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda8Y-00027j-5B; Fri, 13 Nov 2020 14:33:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26552.54997; Fri, 13 Nov 2020 14:33:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda8Y-00027c-1J; Fri, 13 Nov 2020 14:33:14 +0000
Received: by outflank-mailman (input) for mailman id 26552;
 Fri, 13 Nov 2020 14:33:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kda8W-000273-81
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:33:12 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f0e06f0-4e44-42f7-99ee-57745fe0f0e4;
 Fri, 13 Nov 2020 14:33:11 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADEPjg9022279; Fri, 13 Nov 2020 14:32:55 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2053.outbound.protection.outlook.com [104.47.12.53])
 by mx0b-0039f301.pphosted.com with ESMTP id 34sd1ptvaw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 14:32:55 +0000
Received: from AM7PR03MB6325.eurprd03.prod.outlook.com (2603:10a6:20b:13c::18)
 by AS8PR03MB7301.eurprd03.prod.outlook.com (2603:10a6:20b:2eb::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 14:32:53 +0000
Received: from AM7PR03MB6325.eurprd03.prod.outlook.com
 ([fe80::f413:8db:d63e:966c]) by AM7PR03MB6325.eurprd03.prod.outlook.com
 ([fe80::f413:8db:d63e:966c%6]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 14:32:53 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgCAAAlOAIAAEnYAgAAcfoCAAAKRAA==
Date: Fri, 13 Nov 2020 14:32:53 +0000
Message-ID: <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
In-Reply-To: <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0b1c7a76-f6ee-49d1-27d7-08d887e101af
x-ms-traffictypediagnostic: AS8PR03MB7301:
x-microsoft-antispam-prvs: 
 <AS8PR03MB73015A37D72DC0D40FCED0C9E7E60@AS8PR03MB7301.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 Wz2CFnN5cFqCqqsOuPPm7QX4TU2xYtHG7BAC7uKuALsHZZturzautc5+q1RAeL+gceDZW9lahyPOuKuH2+T1Q34kWUZzumgWWi39RAcAlzXyd2WLIqMx/hdgR06jcP64GP1771EdoFEa69fBKiDFvbihrqeVKh5rXX49b9E6u+ri89sFboMIhqX2JiIPqCBfzjGnSZveHzjQz4LvTrEiraeYqfTAnvQ+L5mv9njGSUnu7yaDslweAS59NTFL3MaJNGF7RG/BeuTJ3njkB9Pyxs8X8xjf2DvQKd4P1C2gyC33+yIarutGTdTVSz00m1mzQ6i93nBz1+Fsaagwwg6KvA==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM7PR03MB6325.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(366004)(136003)(39860400002)(8676002)(76116006)(66476007)(31696002)(64756008)(66556008)(4326008)(6512007)(66446008)(6916009)(71200400001)(8936002)(186003)(53546011)(31686004)(478600001)(36756003)(54906003)(2906002)(83380400001)(316002)(2616005)(26005)(5660300002)(7416002)(6506007)(86362001)(66946007)(6486002)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 zWRErDnGW4mCpcOGceD4hNT97JdYqQtc3itWT5ZURczz50K46UmAvDYtqZ9b7V/B20upNAWHUdtxj9TOwZ0aUk+xfPLnPh3zO8MoOqnZ7WrnSfZg0I3rBkfAz/LlJNRXEy2VtNrmwtWNOfVw8yx05NpgrzLMtvyGULabAtWM59NWQ13C8JfDdCAngu47R1+JtrpWwwjqbT3tOWiMywHSNEm8Cy+/IUIvNDYjhKHK9SNHQeQM93PUCgT+zDgCjKRzD4r+AFBMFkyezOHPqGhoDwDQL280zY8+91Fg3SoYN4SKgMEXg5HvO5jOrQEAOaDrTwCTQkBM8b/MhF4/Z4Rs8nifUMwzFkpX0h94mjUXSIGze83T+vgHm4AlkpjfE+few39tPRKAlf1dVe5Q3a43utGAkRn+nkNuMetaVUaHCwXifFjXPT59CJrt3B4mr6Vzkhf44oFOnz32FOg0dnc1p9DlG2ouNN3P6btOzb/Bsx3MGuYSGg4XVArj7CqHZj+qcVlXF7/Pkg5C5jMRC/xhl7yCJRSZ/xWlb81AXMPcvnisd/loKfAEgroD2idCkQ3pO6Vje6kYNQkA+qC/mgB9oZJZhcNLVUDfbhHFX7p0481oB6RQnIUyZzfEOgb4nNa29slH2Iz3Hs6VrGsO9SHHkw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6933FFEDAF48304ABC361523ED969DD4@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6325.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b1c7a76-f6ee-49d1-27d7-08d887e101af
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 14:32:53.7228
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: fNwtA+77nrGipQfmM8DV5BdVNCwSK/xYESk0XM6DSoE7BwkWOvDSSbydXjuAwOfdP0pl7f0CwgxChnoSvHDTAKDY0qmWCtE42P9TtaGvuHvc+PTrYyxvuGC2V8jGNUE7
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR03MB7301
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_10:2020-11-13,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0
 priorityscore=1501 mlxlogscore=999 clxscore=1015 adultscore=0
 malwarescore=0 suspectscore=0 mlxscore=0 spamscore=0 bulkscore=0
 lowpriorityscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011130091

On 11/13/20 4:23 PM, Jan Beulich wrote:
> On 13.11.2020 13:41, Oleksandr Andrushchenko wrote:
>> On 11/13/20 1:35 PM, Jan Beulich wrote:
>>> On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 12:50 PM, Jan Beulich wrote:
>>>>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>>>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>>>>
>>>>>>>> +/*
>>>>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>>>>> + * but in case of ARM this might not be the case: those may also
>>>>>>>> + * live in driver domains or even Xen itself.
>>>>>>>> + */
>>>>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>>>>> +{
>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>> +    return is_hardware_domain(d);
>>>>>>>> +#elif CONFIG_ARM
>>>>>>>> +    return pci_is_owner_domain(d, seg);
>>>>>>>> +#else
>>>>>>>> +#error "Unsupported architecture"
>>>>>>>> +#endif
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +/*
>>>>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>>>>> + * domain and for ARM this can be different.
>>>>>>>> + */
>>>>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>>>>> +{
>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>> +    return hardware_domain;
>>>>>>>> +#elif CONFIG_ARM
>>>>>>>> +    return pci_get_owner_domain(seg);
>>>>>>>> +#else
>>>>>>>> +#error "Unsupported architecture"
>>>>>>>> +#endif
>>>>>>>> +}
>>>>>>>>
>>>>>>>> This is what I use to properly detect the domain that really owns physical host bridge
>>>>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>>>>> model would better be similar. For example, if entire segments can be
>>>>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>>>>> on x86?
>>>>>> Good question, probably in this case x86 == ARM and I can use
>>>>>>
>>>>>> pci_is_owner_domain for both architectures instead of using is_hardware_domain for x86
>>>>>>
>>>>>>> Furthermore I'm suspicious about segments being the right granularity
>>>>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>>>>> the granularity wants to be bus ranges within a segment.
>>>>>> Can you please suggest something we can use as a hint for such a detection logic?
>>>>> The underlying information comes from ACPI tables, iirc. I don't
>>>>> recall the details, though - sorry.
>>>> Ok, so seg + bus should be enough for both ARM and Xen then, right?
>>>>
>>>> pci_get_hardware_domain(u16 seg, u8 bus)
>>> Whether an individual bus number can suitable express things I can't
>>> tell; I did say bus range, but of course if you care about just
>>> individual devices, then a single bus number will of course do.
>> I can implement the lookup whether a PCI host bridge owned by a particular
>>
>> domain with something like:
>>
>> struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);
>>
>> return bridge->dt_node->used_by == d->domain_id;
>>
>> Could you please give me a hint how this can be done on x86?
> Bridges can't be assigned to other than the hardware domain right
> now.

So, I can probably then have pci_get_hardware_domain implemented

by both ARM and x86 in their arch specific code. And for x86 for now

it can simply be a wrapper for is_hardware_domain?

>   Earlier on I didn't say you should get this to work, only
> that I think the general logic around what you add shouldn't make
> things more arch specific than they really should be. That said,
> something similar to the above should still be doable on x86,
> utilizing struct pci_seg's bus2bridge[]. There ought to be
> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
> (provided by the CPUs themselves rather than the chipset) aren't
> really host bridges for the purposes you're after.

You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker

while trying to detect what I need?

>
> Jan

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:34:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:34:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26557.55009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda9I-0002E1-EW; Fri, 13 Nov 2020 14:34:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26557.55009; Fri, 13 Nov 2020 14:34:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kda9I-0002Du-BU; Fri, 13 Nov 2020 14:34:00 +0000
Received: by outflank-mailman (input) for mailman id 26557;
 Fri, 13 Nov 2020 14:34:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yQoD=ET=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kda9H-0002Dn-So
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:34:00 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 725bee47-33ae-4ea6-b2c9-893f1d599ed0;
 Fri, 13 Nov 2020 14:33:58 +0000 (UTC)
X-Inumbo-ID: 725bee47-33ae-4ea6-b2c9-893f1d599ed0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605278038;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=EW90h6ugA8EtrqEuUkRKAsenjbZRdKzDX2xG8itrsPM=;
  b=Ek+jpY104Wpk/6ES0Dk6oV0iWGNZY5UC1utECi+EK809k1GFAJMVXIzb
   v04VILaKOL8XwQqo8eArMQlFifw3N8BkAOllbvg1LjIdL5j/7K8H22B3c
   tC6gYkL+EeNdLBSJ3vnaLWIAV/+bbkBevaA0jsjJKR1ML1pUdjRuqL7RE
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ZIY0CKa3u8ZjiYJGc0ULjm1jpXzEhu6hY3Gh1CKp1U48RhVYyZW9Ucn6QwUilE8mxqAP5+Dgpx
 2l890b6FsbhzQBJNTicR1v3FxAqQFcpZIkgxtPuQMr/KYTZU993/ChutM5EWpn5eefAQkVDl4d
 8CQS4sD0mrm8EI1s09ipjzvC1n9rYbGo5UZY6bgqEnxAiUxBwo98at7LnMCx9YRiI6eoRK5Hf2
 kfDFitvFBm2SJHosR+/VWCOzVYGQwWzuO6TeRCcFsA4KWZH6ufYtx9l3NGre4PnuyNF4MhCx8H
 1Rk=
X-SBRS: None
X-MesageID: 31098696
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,475,1596513600"; 
   d="scan'208";a="31098696"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=li5mPgVaqF605/jO8xXQKTtM0Q8Dbed2UlIWd+Z7uK/bPCEam5wuf9zjZrIRmBog+vDE36fyw2E3JloSGvwc7qA/BhmoLlkNl1z24ncZulnhl36vsUxALfTfV3hQ38EWyMws42MAQb5e1TO60n08m06rAu/fYbrSeCy1AL3gCiMopMm2GwnvbqVwOzsk7yi/6P2/vekHJw/3JWwLXVt7P+MBnSGayVKadWBtsqnQKYGFTDE8t59eQ7chqa7BHRG50yKKCXABI0T4wzk38fFW2rqKSB0caHE82UD1Yz11ZCByP6pRqTBjZY2qCRZCPOTNJ9BG8MSLA/+tC0tCiifK1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SGV9NRyzsEmWXkCi3srNVusz2oW8ftZAG1ZNGc/EWHc=;
 b=oEqs9M0Rm9k+v07acQ2B0wfmb0YadkgSqWvgDmCPbcUQ/p/IgZNsWaog/5MmwXIvboCtlg3vCgr7qYe2QOTKk1O1fQs8CeXivCPM+v1laSbrs156E0vByGNjwdMdtUu391z22Wk4KWT8wnCvD4Ff4FhQe7ZggQCKvyXVttD1Yd0T0qPqPPMJ0vK0zVza7Ac/6mi423HKjaybvkfAWNuz4XYZ8Gg4rXTNPgKQR1KJfhOAmXn2MVuqO0E5Zhd5D9x2G6gKMOniPIbqqhwlYJObEbERp7S2BKI9lBZ75PzGWta83404ANUVRsR0QqjYf59AXaKPzFqkGRIYtJjfF0pC/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SGV9NRyzsEmWXkCi3srNVusz2oW8ftZAG1ZNGc/EWHc=;
 b=e4OfK+HA1fUTJngREM2blSnaIA17q5/85Ync0UN0/ADQ4MuUWIj2zvGUl+XRof6WQwKjwm8+Q3U0TIIUSkMM16/Aa4BvIpnpd3dbVYPPh7+z0DOb5RbVTeBLeEcI/Dx0EUw+2mcHcCjOUzHYfOQ1Ok4VdyS1OHvYYcHFEyJ442o=
Date: Fri, 13 Nov 2020 15:33:49 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113143349.gehu36wsipvpkrt7@Air-de-Roger>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
 <20201113115457.GD1512@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201113115457.GD1512@antioche.eu.org>
X-ClientProxiedBy: LNXP265CA0049.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cc231c89-76f3-46a5-e393-08d887e125b9
X-MS-TrafficTypeDiagnostic: DM5PR03MB3210:
X-Microsoft-Antispam-PRVS: <DM5PR03MB32103C0570886287F30BAEB18FE60@DM5PR03MB3210.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: sN2mWhp7jkkIoI0GlggVQflcbklaeLc7RThkDjFZOwDtVswhv/o0uWa3Ec3/Y/ydNsDpNi101xVU6sKMuV38xwgfEc9c5mYokyZd4C2jCETyv2tUWLxCSOnAgwLZ0L068uQVHUvcyr+jLsKWhgJ6oFO2a6/OgMmr3bAqV6Jg6ZfCEgzJ1PL5gACWlBh2qc5iPhsmOjqv7JcA/invOkp8aW6T+K2bzLPzDoKQJQnzFOyRD7BQdIforxQhLVRxu7WzentKXhw1Pc3kS+lTGNNRw+u2LUDqAmZbVt17cQvJGLixO7/Q244BKQ/2aecyRqiaA5TUoqaEkMvMp1Zg1g1lRA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(136003)(346002)(376002)(396003)(39860400002)(9686003)(6486002)(478600001)(316002)(4326008)(6666004)(86362001)(5660300002)(26005)(2906002)(186003)(6916009)(66476007)(85182001)(33716001)(956004)(6496006)(8676002)(8936002)(66946007)(1076003)(16526019)(66556008)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: Ft3GkVOITn81znt/yIagKk2q1k+zgdPy3xIKEsaO/rRK4VwRr+bHqeHUwyYKixO6+EnDWvl6yx42vh9sxkp2BrBkwMhG+mlxYDv+W4XTHBNCRjnKCTKKMb44iuXBcGHQPOl1U1W45qHS8T7VsVQGZBzfHfQeu6wi3ZDYVqsibP4/Q6F6yKDJIZWpssUxB9TkRn8xVwD59/OB+llmPvlq8NvT5Hp37wDhp55TA3t1vG0f0boKVXIuC4OsOX52q3QhaeIdaAJnwTfHEiDMZ5EX7c8oCcFx/Tvr3ipdac8n1m326UWq3igsmo08RwjGgFcI/Swq2UtnSfbYBf85O+KDhArvDUHwe6rxPFVFqm3XP297WXV2d+HMbxPUbdpfH0gwE5cJr1BO+t/JMyN7+RW13M5FBWAigpHlc33B0jROzhzS34hLsFKwnHWOP1ys/Cn8GL5lFCYe0L8s5ZO+MibrHxw5wXEXUKky0Bq6MzaCtc+WTgnZKMUzAXgcxmx0j11rnsroE++IPeG4aCGdidq0Dit1riXQQLA24PRSdZlBrg1aT8Bd+nXV7rO4807O4QMbCH0rog3G/ABgmqp7P5B/iwqO5rD9fLYz62tlbJbHP1y9BgAEEQ2DavJOk2a5OtE2hnlb79RTybxraE08MUJhgyQHS/OTizpUaueMTCpGOpvBoZbKe0sY/cYQP1pKCsteF3vB6XoIQG08/xCvnZaEewGpoT5gqjB21cSa+TuZW7xtqy/tmtL5LWhIfH21UYxpAGNlUz81cfbeZbzDoUv/mJM+o759mgVJ8QGmx2KlnxmbSJxjpVtOLYQc59ceOtaxpXBCZ5HKPG5f1pIEan5hKIFvLzFtQnRBW6OX0bmemynkEVgepTRmBqOL7lPjt5EVnRqdgbZnuKKLqtDHsW0gDg==
X-MS-Exchange-CrossTenant-Network-Message-Id: cc231c89-76f3-46a5-e393-08d887e125b9
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Nov 2020 14:33:54.5077
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 93QsUjMR5Rj4d1O0Qy9ma5++PUfKwI2FVBe0S4H+U2oluahFFw23qaXzVNxI6/kSbGdkzUa2W1Ufds2XeACsnw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3210
X-OriginatorOrg: citrix.com

On Fri, Nov 13, 2020 at 12:54:57PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 12, 2020 at 09:19:39PM +0100, Roger Pau Monné wrote:
> > The following might be able to get you going, but I think I need to
> > refine the logic a bit there, will have to give it some thought.
> 
> I also tested with xen devel (Xen version 4.15-unstable, Latest ChangeSet: Wed Nov 4 09:27:22 2020 +0100 git:9ff9705647-dirty).
> Your patch is needed there too to avoid the panic.
> 
> As with 4.13, I have problems with interrupts not being properly
> delivered. The strange thing is that the counter is not 0, but 3 (with 4.13)
> or 2 (with 4.15), which would mean that interrupts stop being delivered
> at some point in the setup process. Maybe something to do with mask/unmask?
> 
> The problematic interrupt is identified as "ioapic2 pin 2" by the NetBSD kernel,
> so it's not MSI/MSI-X (not sure it matters though).
> Maybe something related to mask/unmask?

What device do you have on that pin? Is it the only device not working
properly? I take it from this that MSI/MSI-X is now working fine.

You can get some interrupt info from the 'i' and the 'z' debug keys,
albeit that won't reflect the state of the emulated IO-APIC used by
dom0, which is likely what we care about. There's also the 'M' debug
key, but that's only useful for MSI/MSI-X.
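[Editorial note: for reference, these debug keys are typically injected from dom0 with the standard xl toolstack and the output read back from the Xen console ring; the key letters are the ones named above, everything else here is an illustrative invocation.]

```
xl debug-keys i   # dump physical IRQ binding information
xl debug-keys z   # dump physical IO-APIC state
xl debug-keys M   # dump MSI state (MSI/MSI-X only)
xl dmesg          # read the resulting output from Xen's console ring
```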

I can try to prepare a patch to dump some info from the emulated
IO-APIC, but I'm afraid I won't get to it until Monday.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:35:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:35:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26564.55021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaAk-0002Nh-0f; Fri, 13 Nov 2020 14:35:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26564.55021; Fri, 13 Nov 2020 14:35:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaAj-0002Na-Rp; Fri, 13 Nov 2020 14:35:29 +0000
Received: by outflank-mailman (input) for mailman id 26564;
 Fri, 13 Nov 2020 14:35:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yQoD=ET=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kdaAi-0002NV-Jn
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:35:28 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52791a99-6ab6-44f3-afff-6377aa920c0f;
 Fri, 13 Nov 2020 14:35:27 +0000 (UTC)
X-Inumbo-ID: 52791a99-6ab6-44f3-afff-6377aa920c0f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605278127;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=ujoTVnN+YBqbDTcmMTB76apyyrOARJ7+33aJqyy73Zk=;
  b=GNeZUv4gLkfCcJd6+YSoWhlrVJ/Jb0KPMvYy1f4bfHnXxSh68COuwi3m
   3zWEHqGlO91ciBVTCW3HnapG+g1cy9zsqnylVvGfjEm1naNS5mZ0tElvC
   Gi21eO/+UitQdtmQU3e0GVlnZ1A6zpUeb0nxVPyVqyJRkDATQG/jIn/qe
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: lsaPtCFS+Q92a+ExlNbV+rxlTIvh+JAF6scq6CbbBWl9KycSZBx6MNW/wvr07emWq1C9OlbpBp
 Km5oe8UoR4VBJ9i6t8MykxLreiqqWYweQUYTmeDRXZiVIZQ44m/4NlzdoyDWrRT93bkpzdlXN+
 MHuqtMLK2wE0yTl95ZdU9T3ogfUHZugDldOoNunhSegHiX2lI/jJXjdw5dugvpTKYOyNIkXR1m
 i+IwaxaLjOj4ICip1MzE6NNJPJjgmt1qU8FGc6Pb7y6S02wuUpvKU46oT8DcQV1vFlnDSUhSfM
 CW4=
X-SBRS: None
X-MesageID: 31362517
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,475,1596513600"; 
   d="scan'208";a="31362517"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PdouVGLZkL6MD5JlXNcvayTR4Z9wdxkax+ZfnhcYnurGEqDMm8Hc+23sCJ7QGwTKTmyE+Udp+IluXxas8WQTYCzWNiFTM1wGaElgH/W026aGn7MFgkGiBjdns1EGiK5J2k5RXYoyzALSpmYCU7xHQXZT2V2E1JoyOnqdwcNbkcmPT47oU8PlcL+hKknptW/t3Tz6IynIgOxG40hVbQtY890F2MGBivpiCloHL0cpYrCjq9snrG/rP4li5Y/Feq0tQXEY13wkM4GLR2OWKqjWI8PYI54pITs0VJMo2KU7Bw9o7N5OwUSymD9JcPHqSRb+lCr1T6cYApsuzOJ9lXsyRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WhCsZoroeC45ZJavgpDQsb7UfSKy7Lpd+WQMFvqQ2K4=;
 b=BuuvRjz3FMHKsdy/eKqMWUsBH/HjNww/ZuORPL2vKUC2RmyUJ5wrS0Zyk21IW25dDgJ/Dx1BXL/hRgnYsMeVKXYNrvrnT4KDizATHyKqG0moh9bPdn/Lx90Xuet8GwrEwX3Iri23hNLf9a8cgjEHcLiSYKQBT5w8peiYV0n7kA3wdiUlj89ZDhwD34dKVfe0YqQHIty7CO6EQIi4ugG0bKWd+H0HWcXBy5B7sDf7ygBKSwz1/LyjC7NkILRTkWb6cjiDPery5LDuoUni45Fp0VPZ79w/naCs/noRkiFiTth4bei4k6vh/yY9OBJ34vyYoJQ2I+2yM9603qSdlSNBhQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WhCsZoroeC45ZJavgpDQsb7UfSKy7Lpd+WQMFvqQ2K4=;
 b=BzuZBEFlaS+iPOMZvEh/c8IzewN3fGSa+dZIraqOLqj11+lnvSSrroOv8XT2XjXCPQJVawxD8tTospZ/n3K7CP5lPkuJ9V+ZXbWnJa/bnXKWZ+baLf49c5BnzRssuWDdMe5WfVmFlCF8joztLptB0dyu06PFX+Fh2NTM++heqVE=
Date: Fri, 13 Nov 2020 15:35:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113143513.5mvfb4tyczyo2rwx@Air-de-Roger>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
 <20201113115457.GD1512@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201113115457.GD1512@antioche.eu.org>
X-ClientProxiedBy: LO4P123CA0028.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:151::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0265994c-152d-431a-3dbc-08d887e15b99
X-MS-TrafficTypeDiagnostic: DM5PR03MB3210:
X-Microsoft-Antispam-PRVS: <DM5PR03MB321091DD9709F0D19B6DA5F88FE60@DM5PR03MB3210.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Pj0LqPggeerDpPBVWYqbGIF5BIjvogPx4QRAcNWrRzwzBr46wYvEkiyXJo2vPB+uxRNBqjM7/aR1oQHnvmowPh6p3fp5ujV/8hVAi3/u16b1ovrhLDTkzl2ifZQKpjfnaDDYFfhYamYrwEzcGvMxUy3lfLML+SOHpr8UOd1V0BXQcdCgw6h7rn99i1sq47Q9yLc+PZUA3PO2LQ0SkldUmOUeG8zy4TX3zx8aKIJjyWqIOvCs+BxvHo0x9RU19JBpCYu7t0SDxeJIQxMQCW7wvu3N262l2epg2sThXTpFDFT6AY1HZratWOKvAFM8eD6J
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(136003)(346002)(376002)(396003)(39860400002)(9686003)(6486002)(478600001)(316002)(4326008)(6666004)(86362001)(5660300002)(26005)(2906002)(186003)(6916009)(66476007)(85182001)(33716001)(956004)(6496006)(8676002)(8936002)(66946007)(1076003)(16526019)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: zPd3VvF8fzR2fn4nScpa9UdLXH5FJkJxJ1nvq3iyKAj59FZTHfoP+madQhZ9ZfqFRJgFZlo4pabzeXxFY1YWSypGLN4QWVBoF6uw2vi3Hwro9hMl5o269gy/p/hkw5mQ9k29EnbssuVhDgTUW6yRYyzm7TPBINvJJmLKHd4cd8E48Vh6fuOPxO01eEXmFjUWXWajzqRoiusi4UbR1D9kQmAzKq6bWE/P1Ux0K73EpQ8X+UiEIWySQkPXCxa6kfywuQ3yi3hCqtaC95zmaZRkV0Pf82SFW7izHk8nTU6MgWcpzt21eHnOEMFv4XPctWmqfyf8oMUx9uuvQnfFY1/Elgh1bKg4Uv3vFEOZ7KRYiqpN1wR+YDL4JjC3JUU4Fniol2SZDC0gnveDTv2/v4Mc6M2WmdsVeLxnrAGRh0F5AEnyZUWoXoMWtyyOT4zZKYx4Dn2/KEsURfX7dqqTxLqd8LmbK0AT5BCCDfN5aM9o3S1+OsefohTKV2A0XM9lRhnE0SO5qhqAxUfauGFbY1dmI4ERMZGDePwQ+2pQXaUrQdufHvzRm11UBBBRVdwNiOJuAQQS33y6yD6NdCen/ICCyy//erZMk6eqsxDWH6zDht34UY4EK3mRc3GUSpqBHJBzZq1nIwPhtm0/s4t+UDWwSP2c5BEx5ziL2L1FIuTwOX8COoGrS8pswKaVM9jl80QqN6utXoTQFi86KBBjbO/2p7nGoSX0+c4riK9qR0+LcHF4Ul6vwKseMcPefKLzl4Q4TVjADlvWOo3hsRCGTPi5VodZOHxw+n4aQ3JjMoQvxvRRX9TpoPSxWPIPLvhPaLSVrI1hQvLdsJGAbbXjiHi+81bKsHPHn9BwZImEpEEmJjZVKU/cQt1f1DcLxyhsgDQmHV9BkBYFZnJ8EhLabjaj5g==
X-MS-Exchange-CrossTenant-Network-Message-Id: 0265994c-152d-431a-3dbc-08d887e15b99
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Nov 2020 14:35:24.7842
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GIcNZaJ/iGshu5QbsqWQnzfmOaq+uFfw53CyDYeoMUXBJyNS4rzpgcGzouqaO2DXbh26ZtiO+Rtc44qbqlM21Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3210
X-OriginatorOrg: citrix.com

On Fri, Nov 13, 2020 at 12:54:57PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 12, 2020 at 09:19:39PM +0100, Roger Pau Monné wrote:
> > The following might be able to get you going, but I think I need to
> > refine the logic a bit there, will have to give it some thought.
> 
> I also tested with xen devel (Xen version 4.15-unstable, Latest ChangeSet: Wed Nov 4 09:27:22 2020 +0100 git:9ff9705647-dirty).
> Your patch is needed there too to avoid the panic.
> 
> As with 4.13, I have problems with interrupts not being properly
> delivered. The strange thing is that the counter is not 0, but 3 (with 4.13)
> or 2 (with 4.15), which would mean that interrupts stop being delivered
> at some point in the setup process. Maybe something to do with mask/unmask?
> 
> The problematic interrupt is identified as "ioapic2 pin 2" by the NetBSD kernel,
> so it's not MSI/MSI-X (not sure it matters though).
> Maybe something related to mask/unmask?

Forgot to mention, it might also be helpful to boot Xen with
iommu=debug, just in case.
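[Editorial note: iommu=debug is a Xen hypervisor command-line option, so it goes on Xen's own line in the boot loader entry, not on the dom0 kernel's. A GRUB2-based Linux dom0 example is shown below; the file path and the other options are illustrative only, and a NetBSD dom0 would set this in boot.cfg instead.]

```
# /etc/default/grub (Linux dom0 example); run update-grub afterwards
GRUB_CMDLINE_XEN_DEFAULT="dom0=pvh iommu=debug"
```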

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:38:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:38:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26572.55032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaDW-0002Zv-DW; Fri, 13 Nov 2020 14:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26572.55032; Fri, 13 Nov 2020 14:38:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaDW-0002Zo-AM; Fri, 13 Nov 2020 14:38:22 +0000
Received: by outflank-mailman (input) for mailman id 26572;
 Fri, 13 Nov 2020 14:38:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdaDV-0002Zi-D1
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:38:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d054eba7-0370-4acf-8340-c9bdbfebbe16;
 Fri, 13 Nov 2020 14:38:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1190AAD7C;
 Fri, 13 Nov 2020 14:38:19 +0000 (UTC)
X-Inumbo-ID: d054eba7-0370-4acf-8340-c9bdbfebbe16
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605278299; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UO1gWVGjEtcPmOydDe/QZYVJvqDCajMausynMmYvfPo=;
	b=NDsUl840zImKT0xVSJLff42JezjM1fEYF6t98OOrberCmJzK9P0o1y64tW1DrE44ay0IUM
	tvH9uu+8NTXj5jg8l0tPXNUBTmyV/2OhluMCdvv9ZaMFtKtVW5qLeX9Gm0Ghz1Qh85kUjC
	XYTlqrZk2CAZaf8hqoqy+4y0bzzuWnQ=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
Date: Fri, 13 Nov 2020 15:38:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
> 
> On 11/13/20 4:23 PM, Jan Beulich wrote:
>> On 13.11.2020 13:41, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 1:35 PM, Jan Beulich wrote:
>>>> On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
>>>>> On 11/13/20 12:50 PM, Jan Beulich wrote:
>>>>>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>>>>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>>>>>
>>>>>>>>> +/*
>>>>>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>>>>>> + * but in case of ARM this might not be the case: those may also
>>>>>>>>> + * live in driver domains or even Xen itself.
>>>>>>>>> + */
>>>>>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>>>>>> +{
>>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>>> +    return is_hardware_domain(d);
>>>>>>>>> +#elif CONFIG_ARM
>>>>>>>>> +    return pci_is_owner_domain(d, seg);
>>>>>>>>> +#else
>>>>>>>>> +#error "Unsupported architecture"
>>>>>>>>> +#endif
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>> +/*
>>>>>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>>>>>> + * domain and for ARM this can be different.
>>>>>>>>> + */
>>>>>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>>>>>> +{
>>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>>> +    return hardware_domain;
>>>>>>>>> +#elif CONFIG_ARM
>>>>>>>>> +    return pci_get_owner_domain(seg);
>>>>>>>>> +#else
>>>>>>>>> +#error "Unsupported architecture"
>>>>>>>>> +#endif
>>>>>>>>> +}
>>>>>>>>>
>>>>>>>>> This is what I use to properly detect the domain that really owns the physical host bridge
>>>>>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>>>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>>>>>> model would better be similar. For example, if entire segments can be
>>>>>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>>>>>> on x86?
>>>>>>> Good question, probably in this case x86 == ARM and I can use
>>>>>>>
>>>>>>> pci_is_owner_domain for both architectures instead of using is_hardware_domain for x86
>>>>>>>
>>>>>>>> Furthermore I'm suspicious about segments being the right granularity
>>>>>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>>>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>>>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>>>>>> the granularity wants to be bus ranges within a segment.
>>>>>>> Can you please suggest something we can use as a hint for such a detection logic?
>>>>>> The underlying information comes from ACPI tables, iirc. I don't
>>>>>> recall the details, though - sorry.
>>>>> Ok, so seg + bus should be enough for both ARM and Xen then, right?
>>>>>
>>>>> pci_get_hardware_domain(u16 seg, u8 bus)
>>>> Whether an individual bus number can suitably express things I can't
>>>> tell; I did say bus range, but of course if you care about just
>>>> individual devices, then a single bus number will of course do.
>>> I can implement the lookup of whether a PCI host bridge is owned by a
>>> particular domain with something like:
>>>
>>> struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);
>>>
>>> return bridge->dt_node->used_by == d->domain_id;
>>>
>>> Could you please give me a hint how this can be done on x86?
>> Bridges can't be assigned to other than the hardware domain right
>> now.
> So, I can probably then have pci_get_hardware_domain implemented
> by both ARM and x86 in their arch specific code. And for x86 for now
> it can simply be a wrapper for is_hardware_domain?

"get" can't be a wrapper for "is", but I think I get what you're
saying. But no, preferably I would not see you do this (hence my
earlier comment). I still think you could use the owner of the
respective device; I may be mistaken, but iirc we do assign
bridges to Dom0, so deriving from that would be better than
hardwiring to is_hardware_domain().

>>   Earlier on I didn't say you should get this to work, only
>> that I think the general logic around what you add shouldn't make
>> things more arch specific than they really should be. That said,
>> something similar to the above should still be doable on x86,
>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>> (provided by the CPUs themselves rather than the chipset) aren't
>> really host bridges for the purposes you're after.
> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
> while trying to detect what I need?

I'm afraid I don't understand what marker you're thinking about
here.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:45:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:45:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26581.55045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaJs-0003Tm-4R; Fri, 13 Nov 2020 14:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26581.55045; Fri, 13 Nov 2020 14:44:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaJs-0003Tf-1O; Fri, 13 Nov 2020 14:44:56 +0000
Received: by outflank-mailman (input) for mailman id 26581;
 Fri, 13 Nov 2020 14:44:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdaJq-0003Ta-PJ
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:44:54 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24f7ee39-91c5-4c64-af4b-2bb0c7f0e12b;
 Fri, 13 Nov 2020 14:44:53 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADEdVdp031666; Fri, 13 Nov 2020 14:44:44 GMT
Received: from eur02-he1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2056.outbound.protection.outlook.com [104.47.5.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 34rf80gf2s-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 14:44:44 +0000
Received: from AM7PR03MB6325.eurprd03.prod.outlook.com (2603:10a6:20b:13c::18)
 by AM6PR03MB4455.eurprd03.prod.outlook.com (2603:10a6:20b:6::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 14:44:37 +0000
Received: from AM7PR03MB6325.eurprd03.prod.outlook.com
 ([fe80::f413:8db:d63e:966c]) by AM7PR03MB6325.eurprd03.prod.outlook.com
 ([fe80::f413:8db:d63e:966c%6]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 14:44:37 +0000
X-Inumbo-ID: 24f7ee39-91c5-4c64-af4b-2bb0c7f0e12b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FGd0aPV2ssbZijkhZ17ix36yn/zvit3dkh+h8DMXeECmnBvjAaogWfdWwv/vfwU0Ho5BDB6iodiCVtB8YMXDZaaPOojy8bROCm0YdquNC7+ciNiALFU2dFSTn4te4Hdr/MnxuWFRta/TAdNisSDfeHTxoPMeB8nmz0kteGxjFf5bVSLbM8ZOFov2S6lG3wYTTJ7ysD6DYbmHKHLw8UM37Ht83AyWPa5QnStj1efA7Fe9Co7FHGBuKuOjxyt7kqPLY4umYkkd6KgqMJXr+089wu4Yv4QELR4hZxxzjF0PUhXJlRmxsRnDdHAX029rolJ+181umobwOSYaVNw5yQoFUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=//Pb7et+hrB3O4MpPjTBOlxfuZMFLE6UQeq4Idjerxw=;
 b=UJh4mMFOEtv+2z2h2uZlaRCJG8X6ivcUVYAPRYu4LGkijKxvO6RfFYiC7WPrNy5XD7tx6iG+8+r0s3Bnx2hdgGFakNdpPDfJjHHwAYFQfWxIcMnzn4lMilOHKlEtzBVMg+tDrhqOJLMQzNwWkrCf+vGY20L+ZSGyeON22iJ7eSCHYb/aLEmh7SqnzimWmVklz+xii702rnB0GpKKoFo/hYSdt0sYTPqbxqknsrtHSt/KXWWmeGZKHsvQ1BB6sPaidbYgbnIyodXP4mFyJy9LAxG/iIJwx0bxo0fwMJo8hyouyWWpIlWJ5GklTGnLHhKOylCP5nFzkC5FpQPWgVht9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=//Pb7et+hrB3O4MpPjTBOlxfuZMFLE6UQeq4Idjerxw=;
 b=bTNgD0jjhxKihGlwlhBpMBO9kKc1IgesWkxcLciycW0ekC4647kOg7fLOxrEKlx579x8L08S0j2xG/Em/3u7h92VkNmZHD5L6zDvQIxJuIkmHIBOS12lEjMetrIZDrCFNqcoTVhyEQUTwNoFX8oP8OuMn1dPHObGybFT6Gy4EUZnHvaX48BUxll5gcV010tKHasV00yqx2MbrDFR2OZ0OqxUsvRkCRNxHGkm01Wn/ylF7U5IWMm79oEcbeIIfi7RQHKB97C7nHho2iumvBryhtoP7i+0Bd27JcVeMSNkxdJK8HgZ5h6HY4OCHTxoXAOeb81NbJFlJYVL393MTZIsYA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgCAAAlOAIAAEnYAgAAcfoCAAAKRAIAAAYQAgAABxIA=
Date: Fri, 13 Nov 2020 14:44:37 +0000
Message-ID: <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
In-Reply-To: <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 01e89503-2cec-49a9-a7f2-08d887e2a551
x-ms-traffictypediagnostic: AM6PR03MB4455:
x-microsoft-antispam-prvs: 
 <AM6PR03MB4455FEEE41EE2D3187AA495DE7E60@AM6PR03MB4455.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 G+RSvHJrDIFqqvu9V32xiGifnElufPc3abk0QLikPr2+K0PNhEcBSzuheQ4CHFw0mH/05QQQjA2m3Wso8L3Ex3GPzjS/BwhBCZ/OHzt5f1VcjMnRcsIVkxATUJxHYGTlIoyfccAfUq8KmKSoAe3Z3gepkbzOLl5RPN/4x6McR8GG/NN7KP+ujb5yVBtfhkfOcs97oG8AsuPa9ueASiFzCOa31sc/+mzKr6bbgyYXXVyg4fY9gmcUvJ9HHlpsDVvZgDB4BRpGiiO63NOhzvDWqyzheSSkxgJb6wzIt8LWi5RY/ffuNj34MGh5POzh2AMfjYTfQpDD1ltyH7bReN+wYw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM7PR03MB6325.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(366004)(39860400002)(346002)(396003)(2616005)(316002)(8936002)(71200400001)(64756008)(478600001)(8676002)(91956017)(2906002)(66446008)(6916009)(54906003)(5660300002)(66476007)(4326008)(7416002)(6512007)(76116006)(36756003)(31696002)(66946007)(31686004)(83380400001)(66556008)(86362001)(6506007)(53546011)(186003)(26005)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 L2ix6Lf2+2lBH6vc8DqwLTOq43mzvtUkBnUEaauHP38BeKebVKJUFvIzbmBFrJhxfzRA2O38upIZRqSzhH3BSXpT79rmJ0mt7ZkaiwKOWrd+gEfzMTH2tGYMndo4hbBEmMZmkI1ySlek8pXz+HgwzNvIPJxSGueknyx1rZjvwN5MeXuOjsZ2TWcIyofdW1nGphqzhIt1GWCtqiMrB5FuZvTyKXE+hQEwAPyaZkkl6XpdS3ECsVxTz/XSOYULVWIIDG9UMqhmOm7D0xbOva6Jij2a5L/SvWkNJA4inOExrr72oYq+9p8fBxaDiECcsnuOiuAYe/L9IjrAen0pMryaM5zoh19HQ/+6AFT0qlcpyeJyUmSZvQMWglTqcKIq+j2Df1TrIkJlci52pMYq4fu+u7wr0uN5DWI7RwoXDFmR5L/bbiTaxcGOsOVZDkKRGvGXeJGyD01+bMVJqe+YIw7v8I8wQSiyETyv9IvSp0PUY5D27TPmSrZFPBvsqirTnQhbKu2WmWEb8MrIt/0jxeBmtdP0GtmgisJwU52fsPe+4rq/yQED3jxwJkSPDfVB7D7i8pRj/qPQJ+x8uneO320Xn0SPe8o/GtdX++0ucOloM+3Nuanpli3FMNav7mWeCdL9pZXj6ibNMh0Weoa5DPsWoA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <207339E9C85A12479BD223E744C137A2@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6325.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 01e89503-2cec-49a9-a7f2-08d887e2a551
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 14:44:37.8437
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: xv0VDUfpzkY6RfpplHXgfwT9ukUjgVavV0QJv0Pt7sHOICxk4Q8UllvjymlD2GhDg/GEXZM4VA0sH4vrdJDSFBgQS8SprWJsNj0T7v44Lde1rfZyE5mx0Q4qFGlK8W8l
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR03MB4455
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_10:2020-11-13,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxscore=0
 phishscore=0 spamscore=0 adultscore=0 bulkscore=0 priorityscore=1501
 suspectscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2011130092

On 11/13/20 4:38 PM, Jan Beulich wrote:
> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>> On 13.11.2020 13:41, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 1:35 PM, Jan Beulich wrote:
>>>>> On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
>>>>>> On 11/13/20 12:50 PM, Jan Beulich wrote:
>>>>>>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>>>>>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>>>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>>>>>>
>>>>>>>>>> +/*
>>>>>>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>>>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>>>>>>> + * but in case of ARM this might not be the case: those may also
>>>>>>>>>> + * live in driver domains or even Xen itself.
>>>>>>>>>> + */
>>>>>>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>>>>>>> +{
>>>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>>>> +    return is_hardware_domain(d);
>>>>>>>>>> +#elif CONFIG_ARM
>>>>>>>>>> +    return pci_is_owner_domain(d, seg);
>>>>>>>>>> +#else
>>>>>>>>>> +#error "Unsupported architecture"
>>>>>>>>>> +#endif
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +/*
>>>>>>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>>>>>>> + * domain and for ARM this can be different.
>>>>>>>>>> + */
>>>>>>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>>>>>>> +{
>>>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>>>> +    return hardware_domain;
>>>>>>>>>> +#elif CONFIG_ARM
>>>>>>>>>> +    return pci_get_owner_domain(seg);
>>>>>>>>>> +#else
>>>>>>>>>> +#error "Unsupported architecture"
>>>>>>>>>> +#endif
>>>>>>>>>> +}
>>>>>>>>>>
>>>>>>>>>> This is what I use to properly detect the domain that really owns the physical host bridge
>>>>>>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>>>>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>>>>>>> model would better be similar. For example, if entire segments can be
>>>>>>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>>>>>>> on x86?
>>>>>>>> Good question, probably in this case x86 == ARM and I can use
>>>>>>>>
>>>>>>>> pci_is_owner_domain for both architectures instead of using is_hardware_domain for x86
>>>>>>>>
>>>>>>>>> Furthermore I'm suspicious about segments being the right granularity
>>>>>>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>>>>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>>>>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>>>>>>> the granularity wants to be bus ranges within a segment.
>>>>>>>> Can you please suggest something we can use as a hint for such a detection logic?
>>>>>>> The underlying information comes from ACPI tables, iirc. I don't
>>>>>>> recall the details, though - sorry.
>>>>>> Ok, so seg + bus should be enough for both ARM and Xen then, right?
>>>>>>
>>>>>> pci_get_hardware_domain(u16 seg, u8 bus)
>>>>> Whether an individual bus number can suitably express things I can't
>>>>> tell; I did say bus range, but of course if you care about just
>>>>> individual devices, then a single bus number will of course do.
>>>> I can implement the lookup of whether a PCI host bridge is owned by a
>>>> particular domain with something like:
>>>>
>>>> struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);
>>>>
>>>> return bridge->dt_node->used_by == d->domain_id;
>>>>
>>>> Could you please give me a hint how this can be done on x86?
>>> Bridges can't be assigned to other than the hardware domain right
>>> now.
>> So, I can probably then have pci_get_hardware_domain implemented
>> by both ARM and x86 in their arch specific code. And for x86 for now
>> it can simply be a wrapper for is_hardware_domain?
> "get" can't be a wrapper for "is"
Sure, s/get/is
> , but I think I get what you're
> saying. But no, preferably I would not see you do this (hence my
> earlier comment). I still think you could use the owner of the
> respective device; I may be mistaken, but iirc we do assign
> bridges to Dom0, so deriving from that would be better than
> hardwiring to is_hardware_domain().
Ok, I'll try to figure out how to do that
>
>>>    Earlier on I didn't say you should get this to work, only
>>> that I think the general logic around what you add shouldn't make
>>> things more arch specific than they really should be. That said,
>>> something similar to the above should still be doable on x86,
>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>> (provided by the CPUs themselves rather than the chipset) aren't
>>> really host bridges for the purposes you're after.
>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>> while trying to detect what I need?
> I'm afraid I don't understand what marker you're thinking about
> here.

I mean that when I go over bus2bridge entries, should I only work with
those that have DEV_TYPE_PCI_HOST_BRIDGE?

>
> Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:51:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26589.55061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaPy-0004Q0-0Q; Fri, 13 Nov 2020 14:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26589.55061; Fri, 13 Nov 2020 14:51:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaPx-0004Pt-TR; Fri, 13 Nov 2020 14:51:13 +0000
Received: by outflank-mailman (input) for mailman id 26589;
 Fri, 13 Nov 2020 14:51:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5p9l=ET=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kdaPx-0004PN-0B
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:51:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dbc3d37-bdda-4882-9965-7dd599e94b12;
 Fri, 13 Nov 2020 14:51:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 94D84ABD9;
 Fri, 13 Nov 2020 14:51:10 +0000 (UTC)
X-Inumbo-ID: 9dbc3d37-bdda-4882-9965-7dd599e94b12
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605279070; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HDb9in0YbXVzlZP+6Vozv8QOtD+q7q4rKAhEl1KpMRs=;
	b=qUrm5ipxjA/JOIdLZy5XRhcwAenPQs7IZUCuwm5kJY5AyONc557/3TcI7eHRw4zI/GNE4M
	Ouc41yC66ypha43AntP5qfHd57mtvffxtIJSw4BqZI3VGGa5ROQv8pHL6V+swSwK5H5dzB
	gShDOzd501YuoazVOH5UAppIVZCAfrg=
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 "Rahul.Singh@arm.com" <Rahul.Singh@arm.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
Date: Fri, 13 Nov 2020 15:51:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
> 
> On 11/13/20 4:38 PM, Jan Beulich wrote:
>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>    Earlier on I didn't say you should get this to work, only
>>>> that I think the general logic around what you add shouldn't make
>>>> things more arch specific than they really should be. That said,
>>>> something similar to the above should still be doable on x86,
>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>> really host bridges for the purposes you're after.
>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>
>>> while trying to detect what I need?
>> I'm afraid I don't understand what marker you're thinking about
>> here.
> 
> I mean that when I go over bus2bridge entries, should I only work with
> 
> those who have DEV_TYPE_PCI_HOST_BRIDGE?

Well, if you're after host bridges - yes.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 14:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 14:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26596.55073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaRj-0004Y6-CQ; Fri, 13 Nov 2020 14:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26596.55073; Fri, 13 Nov 2020 14:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaRj-0004Xz-9J; Fri, 13 Nov 2020 14:53:03 +0000
Received: by outflank-mailman (input) for mailman id 26596;
 Fri, 13 Nov 2020 14:53:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u7TF=ET=epam.com=prvs=9586b5424c=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kdaRh-0004Xs-KQ
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 14:53:01 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f78b6f1-d893-4bec-a8da-3e0818c24995;
 Fri, 13 Nov 2020 14:53:00 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0ADEplrC024641; Fri, 13 Nov 2020 14:52:52 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2052.outbound.protection.outlook.com [104.47.12.52])
 by mx0b-0039f301.pphosted.com with ESMTP id 34sd1ptxbp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 13 Nov 2020 14:52:52 +0000
Received: from AM7PR03MB6325.eurprd03.prod.outlook.com (2603:10a6:20b:13c::18)
 by AM6PR03MB4455.eurprd03.prod.outlook.com (2603:10a6:20b:6::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3541.25; Fri, 13 Nov
 2020 14:52:48 +0000
Received: from AM7PR03MB6325.eurprd03.prod.outlook.com
 ([fe80::f413:8db:d63e:966c]) by AM7PR03MB6325.eurprd03.prod.outlook.com
 ([fe80::f413:8db:d63e:966c%6]) with mapi id 15.20.3541.025; Fri, 13 Nov 2020
 14:52:48 +0000
X-Inumbo-ID: 9f78b6f1-d893-4bec-a8da-3e0818c24995
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QW4EvO5bVMjn6buGdk+oJmlNAH/+vdRarF1gwsabKvB4MH8EGOQbKyOAcO1nndz/U0ctONSsn6/NJ9wjMjx1rx75JFGiZjciK3TzxFreKMAiWxkFT9/fUeEGe73VRrFQNDQ4wC1wsAAdXNFonM86QWMj1bUPom9DhTaCwHRM1G6I/kLBYLQSq6QxKpW6S3GT0oRDkv1dpgcqY3sVLBDzkbvTqEeQDfar2+Qt8mnInOCwIOZANGUUODI2u5n4L9qvgiyBRDqsq8VwlMyyQpbE8UY3hV+q37YFIwKXvX+ZyP+M2JA9fhwq2LLzfKCEh2NtdmcCa+6zLP4qxaNcNySNyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uKVgqdiUThdoPomleA1LrdQuB+4fJ2RkWQTgmFxbz+0=;
 b=Wen79VF+adTz9aEEK79VtANZ4IEBkfJmh+ptj9BXD0LW7MLpqGgxuzj9haYJnSbNPu4900/7XicirR47MSifjxTf2T9Em9aCL0OAUz1Vw7aX/caigGltRDCykNRgNOaeOY82j77HL22w0NoBAHo4B6Wx9/KvACUNlDgIMeyYZjaDXyoolrt2YckQ60Fj+/v/b2bFQXq50ZM9T7RyU9hQ0rE7ZPiSJuZEmA4TYVJRD2x79rhK+MNZXdnRAuPchwLUYh63w4Bggwf1vMjG6qyPKEuAWSIr00ztffAuyShfYgzZEmiFIntQGU5wHyQeeCaYR9cFJIJjW+ZyzlsuhiYOqA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uKVgqdiUThdoPomleA1LrdQuB+4fJ2RkWQTgmFxbz+0=;
 b=MKr34Ewut43TDeAl913B3RU0XVsuHyY7NwKP01NYTAE4GP2BueIlZVfrH7dxGAWYiwNoQ/PfDaEm3hcurDA/sphT2HMtU0A5gPFljlUEeC2Qrlc3sKGZMiVq2hqqGJ3RgLPqi3NK5GuDoToO38hIADaSZmeegrEaKkk4BkW4LU6BDC9EAtGR5Gmbshf+W+EWGjXCY6F1PpyD4JBK8mJSyMeS/feo59Q1Ux+uIboAcI5JuUwogQ/g3ANGxZ+w1rVOwwBsFGIGoAu8/iT/oWX3X+5J4JLUoVomSNhUOynQspvn6PpdUuVAMET7hW9b83cbNgfEBJ2W/8OTgZMtFmT7xg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Andrushchenko <andr2000@gmail.com>,
        "Rahul.Singh@arm.com"
	<Rahul.Singh@arm.com>,
        "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
        "julien.grall@arm.com" <julien.grall@arm.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "iwj@xenproject.org" <iwj@xenproject.org>, "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>
Subject: Re: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Topic: [PATCH 06/10] vpci: Make every domain handle its own BARs
Thread-Index: 
 AQHWtpbzyJg77SbpvkSRWLKjWy/3PanEQmgAgAA8YoCAABlOgIABCD8AgABBOQCAAAXIAIAAAS0AgAADNgCAAAlOAIAAEnYAgAAcfoCAAAKRAIAAAYQAgAABxICAAAHUgIAAAHYA
Date: Fri, 13 Nov 2020 14:52:48 +0000
Message-ID: <0162c176-d515-7850-b22c-a8451a7697cd@epam.com>
References: <20201109125031.26409-1-andr2000@gmail.com>
 <20201109125031.26409-7-andr2000@gmail.com>
 <20201112094002.bzk6gvp4iy4dgj4s@Air-de-Roger>
 <1b3f11c2-a5a2-da5c-25b3-851ef9465ab9@epam.com>
 <20201112144643.iyy5b34qyz5zi7mc@Air-de-Roger>
 <1fe15b9a-6f5d-1209-8ff5-af7c4fc0d637@epam.com>
 <b4697fbe-6896-ed64-409d-85620c08904a@suse.com>
 <3d6e5aab-ff89-7859-09c6-5ecb0c052511@epam.com>
 <1c88fef1-8558-fde1-02c7-8a68f6ecf312@suse.com>
 <67fd5df7-2ad2-08e5-294e-b769429164f0@epam.com>
 <03e23a66-619f-e846-cf61-a33ca5d9f0b4@suse.com>
 <b151e6d2-5480-d201-432a-bece208a1fd9@epam.com>
 <c58c1393-381a-d995-6e41-fa3251f67bd7@suse.com>
 <8fc22774-7380-2de1-9c30-6649a79fdfe1@epam.com>
 <46c75ee1-758c-8a42-d8d3-8d42cce3240a@suse.com>
 <66cb04c5-5e98-6a4d-da88-230b2dbc3d98@epam.com>
 <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
In-Reply-To: <04059ce3-7009-9e1e-8ba2-398cc993d81b@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5cec4db3-9da3-4d16-e2df-08d887e3c9d3
x-ms-traffictypediagnostic: AM6PR03MB4455:
x-microsoft-antispam-prvs: 
 <AM6PR03MB4455528E86621C6849839635E7E60@AM6PR03MB4455.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 d73musrhJoGEVLWmpWQjbvIKKD+lJhhFfnuuNEuAiSmY8A8kXM82BhKetT69IeG2yy8lJ0K3yaApbGL6UePskz5/O2cABG5C6NSawf2341RbR/Kl9mYCcjdTgZ/F7rlbtAftRdSfmDVUNTPrQIYc/8alrHLSSrOnRtYkIEGwY0MPyYuIcpOnL8gR+lz1XaQEbCWE5cVgYG7p439EWpo7OsWZoyZLQDl2ZQMdx5zYzF5XxOzM5I5M6Ds7/EKBF+IuIHc1f+vO9+DfNG6E5nIKPopy0t8mmFNAz1FfUM2iyO7pIwhlGmCQj1s1isKfp8b4
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM7PR03MB6325.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(366004)(39860400002)(346002)(396003)(2616005)(316002)(8936002)(71200400001)(64756008)(478600001)(8676002)(91956017)(2906002)(66446008)(6916009)(54906003)(5660300002)(66476007)(4326008)(7416002)(6512007)(76116006)(36756003)(31696002)(66946007)(31686004)(83380400001)(66556008)(86362001)(6506007)(53546011)(186003)(26005)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 H7mNm9heW7Z7WEzqf7Yx0usi9sjbhtBRpbPF6sI8KY8Vr/UcOGdNM08cAOQGY6PRjbdn8cU65Ng/6j6l5qNrgG+FNbq0aYtAsNPLLzMZFSJ48bcVlg36juJpvpGefMlG1S/HOr6PpIDADfPAXk182bWzJgspomiEEFERS3W3nRwHkUzE7vUGgM/ksJJVl191VIBcvz9A1K4w3FppvgnDp+8B9jQl4DNEy5L751iWfqMlhf1+J+INoh0AlIH3l0pP/VlLyjD4+QnRMxMMyYlTt7jSDxE7IVDCg+ALjLI6U4Y5YnJi16KHSYchiRmdSwnDMusfUARw1eRbG1m5EtSztj1aJsYFPTgZSUg+8ZK3RsupX+Bfkm+PL88aEmptynZYEFUwhotAYJ00T3u+htTryEj1wYKB67Ckc7XJxHpUhmyP6RG5JGjaLTkENQTVQ4yIDC6vmaO2t3zY2n3HBmIxul5m3jSiUGtN+AWZUmzaNFYMyelwDW0GLJepjZlBDBTYPUAkeXheiG6mAcMN5x/utZcHwI1YYNaMLZ//H6wbmVrPIfs0+tBqjrg8Vjapo+WdrhCBXeAwBm/YFtlnoQhL+c+VKbmdwirwHxRCmDBQddyAQL/RIr9LTIcr9Lw1fFkj7vi98QVNJMO1shhvwvHUcw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <29AF1B5B4664ED4DB7E7308EA5C2AC4C@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6325.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5cec4db3-9da3-4d16-e2df-08d887e3c9d3
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 14:52:48.5750
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ZSAVU+ZSnswbXZ99lQ5O6BW+hZ/DfBvnyR6fiGAklvpx0N+xmK/izxo3FGNx4yVTC2/iWWb2I/ovUBPhNKcBkNdqzwxw1IMDGTuv83OEyMUSNoyCvgDaG+4NqZ23hxOU
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR03MB4455
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-13_10:2020-11-13,2020-11-13 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0
 priorityscore=1501 mlxlogscore=806 clxscore=1015 adultscore=0
 malwarescore=0 suspectscore=0 mlxscore=0 spamscore=0 bulkscore=0
 lowpriorityscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011130094

On 11/13/20 4:51 PM, Jan Beulich wrote:
> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>>     Earlier on I didn't say you should get this to work, only
>>>>> that I think the general logic around what you add shouldn't make
>>>>> things more arch specific than they really should be. That said,
>>>>> something similar to the above should still be doable on x86,
>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>> really host bridges for the purposes you're after.
>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker
>>>>
>>>> while trying to detect what I need?
>>> I'm afraid I don't understand what marker you're thinking about
>>> here.
>> I mean that when I go over bus2bridge entries, should I only work with
>>
>> those who have DEV_TYPE_PCI_HOST_BRIDGE?
> Well, if you're after host bridges - yes.
Ok, I'll try to see what I can do about it.
>
> Jan

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:00:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:00:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26606.55089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaYr-0005Y9-6u; Fri, 13 Nov 2020 15:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26606.55089; Fri, 13 Nov 2020 15:00:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdaYr-0005Y2-3r; Fri, 13 Nov 2020 15:00:25 +0000
Received: by outflank-mailman (input) for mailman id 26606;
 Fri, 13 Nov 2020 15:00:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdaYp-0005Xw-7x
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:00:23 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f096ea6-2334-43cb-830f-2ed39f2b7599;
 Fri, 13 Nov 2020 15:00:22 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id w15so5801773lji.10
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 07:00:22 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 14sm1246833lff.100.2020.11.13.07.00.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 07:00:20 -0800 (PST)
X-Inumbo-ID: 4f096ea6-2334-43cb-830f-2ed39f2b7599
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=loEGtmzCg7gfnYlPzkZrXi5KoXBJgTXmL7oE5EG4OW0=;
        b=Gyby1GHrg4L7BUCH09x71JpNR4Mx1lTa9CXoNROpQGUkUzw6Xdntx+2Bbc8YboQsS5
         7X4w48hhghH+5tz5l/k5F2Qt47A8eaeZeS82o93XzXHVvraxFuIILbYFSh+0Q9+s4r8z
         HmD3eNeTGs+LDGHldJkgUdIvKZJARSqXI6l9QOqVpuVcrLxE9KrR/6RHzVlNzkmfTKm0
         X1ceTGe1RU82+mSd9es3qRNHNBs/3TP3WvitTSQ93MiRY3YES66SRbVr34QPD2R6e32z
         7YnLHzdm/qT+M5q/0r5L/PVSr4Br6n1ms58D+OIbhhVOb38ea8q+2iHb6QF1lnOSBqgQ
         8J8w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=loEGtmzCg7gfnYlPzkZrXi5KoXBJgTXmL7oE5EG4OW0=;
        b=sD20Fr/iCR5rUTzVoF7y0zDjYBMY4WHmBZIPRWXgFWWJVKR4JgILQDyd+E7cRcEbzC
         16DRq2VElzBjMCOiFhx3c/YuJFRqNLIJqcrTiuEj8ddl/yL3vkXAytOSHDo8rrPHDKyK
         87l8cPONh9gjyAYworeafJxhWL0B/p8CWD0nV2+0e/hT4gFMiFsyPKI/vUM1UHkN3xT+
         3Flawi77HmbXr4fWvlxhBj7OF4p4npY3qXfbN+idkVgr3C6TbBzg1q5XOHw0Lg+4CoZ3
         sESpKMW0BAYdvqoR6kAj3oGxRMfAGNg9yX8BbdEfZar7e8JQehlY4U5LiKJg5xCnuL29
         f/Hw==
X-Gm-Message-State: AOAM531ueFwy1fpRmdufhDJQ3mbbcqorpPKHenDYbmKMMvOk+20YZWTc
	CXj3wBr4aabjHt/VpOPPK57fY7NIAOPmNQ==
X-Google-Smtp-Source: ABdhPJwQGesHAq9CUQY/Gpht6f6saslE5JXR9mjEPZDITFp7JDlfyGQ4n9SXNm9t0oPuJmN9/DOtuQ==
X-Received: by 2002:a2e:7a0a:: with SMTP id v10mr1304331ljc.5.1605279620956;
        Fri, 13 Nov 2020 07:00:20 -0800 (PST)
Subject: Re: [PATCH V2 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server
 handling common
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-11-git-send-email-olekstysh@gmail.com>
 <1c9bc1aa-b622-fb8e-e5d5-3e27567354c0@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <2be2f73e-dfcd-f856-cfe1-85e41538bd15@gmail.com>
Date: Fri, 13 Nov 2020 17:00:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1c9bc1aa-b622-fb8e-e5d5-3e27567354c0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 13:40, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -30,6 +30,10 @@
>>   #include <public/memory.h>
>>   #include <xsm/xsm.h>
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
>> +#include <xen/ioreq.h>
>> +#endif
> Preferably #ifdef-s would not be needed here. If any, they'd better
> live in xen/ioreq.h itself then.

ok


>
>> @@ -1045,6 +1049,38 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>>       return 0;
>>   }
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
> To limit the number of #ifdef-s, could this be moved ...
>
>> +static int acquire_ioreq_server(struct domain *d,
>> +                                unsigned int id,
>> +                                unsigned long frame,
>> +                                unsigned int nr_frames,
>> +                                xen_pfn_t mfn_list[])
>> +{
> ... here such that ...
>
>> @@ -1103,9 +1139,14 @@ static int acquire_resource(
>>                                    mfn_list);
>>           break;
>>   
>> +#ifdef CONFIG_IOREQ_SERVER
>> +    case XENMEM_resource_ioreq_server:
>> +        rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
>> +                                  mfn_list);
>> +        break;
>> +#endif
> ... the ones here then can be dropped?

I think yes, that would be better.


>
>>       default:
> Also you'll want to add a blank line between the new case statement and
> the "default:".

ok

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:14:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:14:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26615.55101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdamh-0006bp-Fu; Fri, 13 Nov 2020 15:14:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26615.55101; Fri, 13 Nov 2020 15:14:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdamh-0006bi-Cn; Fri, 13 Nov 2020 15:14:43 +0000
Received: by outflank-mailman (input) for mailman id 26615;
 Fri, 13 Nov 2020 15:14:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdamf-0006bd-Fn
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:14:41 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1922b374-8617-4756-8502-331ea1ab1d8a;
 Fri, 13 Nov 2020 15:14:39 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ADFEXFc007118
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 16:14:34 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 5E7392E9CA8; Fri, 13 Nov 2020 16:14:28 +0100 (MET)
X-Inumbo-ID: 1922b374-8617-4756-8502-331ea1ab1d8a
Date: Fri, 13 Nov 2020 16:14:28 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113151428.GE1512@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
 <20201113115457.GD1512@antioche.eu.org>
 <20201113143349.gehu36wsipvpkrt7@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201113143349.gehu36wsipvpkrt7@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 13 Nov 2020 16:14:34 +0100 (MET)

On Fri, Nov 13, 2020 at 03:33:49PM +0100, Roger Pau Monné wrote:
> On Fri, Nov 13, 2020 at 12:54:57PM +0100, Manuel Bouyer wrote:
> > On Thu, Nov 12, 2020 at 09:19:39PM +0100, Roger Pau Monné wrote:
> > > The following might be able to get you going, but I think I need to
> > > refine the logic a bit there, will have to give it some thought.
> > 
> > I also tested with xen devel (Xen version 4.15-unstable, Latest ChangeSet: Wed Nov 4 09:27:22 2020 +0100 git:9ff9705647-dirty).
> > Your patch is needed there too to avoid the panic.
> > 
> > As with 4.13, I have problems with interrupts not being properly
> > delivered. The strange thing is that the counter is not 0, but 3 (with 4.13)
> > or 2 (with 4.15) which would mean that interrupts stop being delivered
> > at some point in the setup process. Maybe something to do with mask/unmask ?
> > 
> > The problematic interrupt is identified as "ioapic2 pin 2" by the NetBSD kernel,
> > so it's not MSI/MSI-X (not sure it matters though).
> > Maybe something related to mask/unmask ?
> 
> What device do you have on that pin?

The PERC H740P RAID controller.

> Is it the only device not working
> properly?

I'm not sure, as the kernel is stalling because of this.
The interrupt counters for the other devices are 0.
I can try a kernel without this driver, to see whether other devices report
interrupts.

> I get from that that MSI/MSI-X is now working fine.

See above.

> 
> You can get some interrupt info from the 'i' and the 'z' debug keys,
> albeit that won't reflect the state of the emulated IO-APIC used by
> dom0, which is likely what we care about. There's also the 'M' debug
> key, but that's only useful for MSI/MSI-X.
> 
> I can try to prepare a patch to dump some info from the emulated
> IO-APIC, but I'm afraid I won't get to it until Monday.

No problem.
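
(As an aside, on a system where the Xen toolstack is available, those debug keys can be triggered from dom0 without serial-console access; this is just the generic xl invocation, not specific to this report:)

```shell
# Ask Xen to dump IRQ bindings ('i') and IO-APIC state ('z'),
# then read the hypervisor console ring for the output.
xl debug-keys i
xl debug-keys z
xl dmesg | tail -n 60
```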

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:27:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26675.55113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdayl-0007g3-J4; Fri, 13 Nov 2020 15:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26675.55113; Fri, 13 Nov 2020 15:27:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdayl-0007fw-G4; Fri, 13 Nov 2020 15:27:11 +0000
Received: by outflank-mailman (input) for mailman id 26675;
 Fri, 13 Nov 2020 15:27:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdayk-0007fr-8w
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:27:10 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3732ce84-e327-4226-8af4-3d09026eed3c;
 Fri, 13 Nov 2020 15:27:09 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ADFR3Bh003574
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 16:27:04 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 4B9ED2E9CA8; Fri, 13 Nov 2020 16:26:58 +0100 (MET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
	id 1kdayk-0007fr-8w
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:27:10 +0000
X-Inumbo-ID: 3732ce84-e327-4226-8af4-3d09026eed3c
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3732ce84-e327-4226-8af4-3d09026eed3c;
	Fri, 13 Nov 2020 15:27:09 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
	by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id 0ADFR3Bh003574
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
	Fri, 13 Nov 2020 16:27:04 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
	id 4B9ED2E9CA8; Fri, 13 Nov 2020 16:26:58 +0100 (MET)
Date: Fri, 13 Nov 2020 16:26:58 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113152658.GH1512@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
 <20201113115457.GD1512@antioche.eu.org>
 <20201113143513.5mvfb4tyczyo2rwx@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201113143513.5mvfb4tyczyo2rwx@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 13 Nov 2020 16:27:04 +0100 (MET)

On Fri, Nov 13, 2020 at 03:35:13PM +0100, Roger Pau Monné wrote:
> Forgot to mention, it might also be helpful to boot Xen with
> iommu=debug, just in case.

I put the serial console log at
http://www-soc.lip6.fr/~bouyer/xen-log2.txt
in case it helps

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:29:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26682.55124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdb1J-0007rH-0s; Fri, 13 Nov 2020 15:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26682.55124; Fri, 13 Nov 2020 15:29:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdb1I-0007rA-UF; Fri, 13 Nov 2020 15:29:48 +0000
Received: by outflank-mailman (input) for mailman id 26682;
 Fri, 13 Nov 2020 15:29:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdb1H-0007r5-NO
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:29:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5dee294a-6550-428b-9129-2897ca991ab3;
 Fri, 13 Nov 2020 15:29:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdb1D-00040E-8N; Fri, 13 Nov 2020 15:29:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdb1C-0005gy-Vf; Fri, 13 Nov 2020 15:29:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdb1C-0007Pw-VC; Fri, 13 Nov 2020 15:29:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdb1H-0007r5-NO
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:29:47 +0000
X-Inumbo-ID: 5dee294a-6550-428b-9129-2897ca991ab3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5dee294a-6550-428b-9129-2897ca991ab3;
	Fri, 13 Nov 2020 15:29:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bDa3yKIfIdlAgfbTeDkjnN2/sK5yhVyN71jP+pydsSk=; b=22x8qLGZQXHCqKjUXAqC9bg8nh
	xMVg3JN4ipXJvGTeCtpKgfSgR+fJC167d20yOW5w46g+EvOqx86JtT1HCZeqrpg+zDUF+nF0gVQBU
	fpmO+QptBOc0/Aem7PIGOyGA9DoIre8SWccg72VnRYV594alYi0LmH37wTzpr//hJHvw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdb1D-00040E-8N; Fri, 13 Nov 2020 15:29:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdb1C-0005gy-Vf; Fri, 13 Nov 2020 15:29:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdb1C-0007Pw-VC; Fri, 13 Nov 2020 15:29:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156731-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156731: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=09ba97ad6b4aa725364ed5e5c48031829dc9549f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 15:29:42 +0000

flight 156731 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156731/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              09ba97ad6b4aa725364ed5e5c48031829dc9549f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  126 days
Failing since        151818  2020-07-11 04:18:52 Z  125 days  120 attempts
Testing same since   156731  2020-11-13 04:19:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26437 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:36:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26693.55140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdb7O-0000Kw-U0; Fri, 13 Nov 2020 15:36:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26693.55140; Fri, 13 Nov 2020 15:36:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdb7O-0000Kp-Qm; Fri, 13 Nov 2020 15:36:06 +0000
Received: by outflank-mailman (input) for mailman id 26693;
 Fri, 13 Nov 2020 15:36:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zLmm=ET=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kdb7N-0000Kk-9F
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:36:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 311aa27f-20e5-4191-8240-9c8e942a8038;
 Fri, 13 Nov 2020 15:36:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2BC93AC91;
 Fri, 13 Nov 2020 15:36:01 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zLmm=ET=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kdb7N-0000Kk-9F
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:36:05 +0000
X-Inumbo-ID: 311aa27f-20e5-4191-8240-9c8e942a8038
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 311aa27f-20e5-4191-8240-9c8e942a8038;
	Fri, 13 Nov 2020 15:36:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605281761; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ryp64Z30d3UxMICS2PN7CLOmpEyNxTJF3lub8XET8Nw=;
	b=c4SRwVO6a4esWTxKzOBgDU1tltFQSv+OfAWHwGUmz7IePmzq98clOCqmnt3BRuG1iGL0KM
	V0SX2QpibjWFnWgQ0KI3TXdON6UwyQ0S3ZY+poIhp0cAduQ84FE7h12PwMub4HRy0eAzsn
	JPKwqnkI/MzytxGQL15TQPiCq/MPZog=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2BC93AC91;
	Fri, 13 Nov 2020 15:36:01 +0000 (UTC)
To: Bjoern Doebel <doebel@amazon.de>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201113141823.58712-1-doebel@amazon.de>
From: Jürgen Groß <jgross@suse.com>
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
Message-ID: <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
Date: Fri, 13 Nov 2020 16:36:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201113141823.58712-1-doebel@amazon.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="VhxVcXNzXEhYwBYwQcp6OXMYrtruzq31i"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VhxVcXNzXEhYwBYwQcp6OXMYrtruzq31i
Content-Type: multipart/mixed; boundary="P4Fykfytw7OBIa4CBxJXkpBIUjBRQjwdE";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: Bjoern Doebel <doebel@amazon.de>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
References: <20201113141823.58712-1-doebel@amazon.de>
In-Reply-To: <20201113141823.58712-1-doebel@amazon.de>

--P4Fykfytw7OBIa4CBxJXkpBIUjBRQjwdE
Content-Type: multipart/mixed;
 boundary="------------3139E45BC8F6F7BC46389DB0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------3139E45BC8F6F7BC46389DB0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 13.11.20 15:18, Bjoern Doebel wrote:
> Right now we do not have a mechanism to determine the version of the
> currently running xenstored at runtime. As xenstored runs throughout the
> lifetime of a Xen host, this may lead to problems when newer user space
> builds are staged. Then, the running xenstored will no longer match the
> version of the installed xenstored.
> 
> To allow users to always identify the running version of xenstored, add
> a linker-generated unique build ID to every xenstored build. Add
> functionality to log this build ID into a file upon service startup.
> 
> Signed-off-by: Bjoern Doebel <doebel@amazon.de>
> Reviewed-by: Martin Mazein <amazein@amazon.de>
> Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>

No support for oxenstored or xenstore-stubdom?

> ---
>   tools/hotplug/Linux/launch-xenstore.in |  2 +-
>   tools/xenstore/Makefile                |  4 +++
>   tools/xenstore/buildid_symbols.ld      | 10 +++++++
>   tools/xenstore/xenstored_core.c        |  8 ++++++
>   tools/xenstore/xenstored_core.h        |  3 ++
>   tools/xenstore/xenstored_minios.c      |  4 +++
>   tools/xenstore/xenstored_posix.c       | 52 ++++++++++++++++++++++++++++++++++
>   7 files changed, 82 insertions(+), 1 deletion(-)
>   create mode 100644 tools/xenstore/buildid_symbols.ld
> 
> diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
> index 991dec8d25..a6f2254030 100644
> --- a/tools/hotplug/Linux/launch-xenstore.in
> +++ b/tools/hotplug/Linux/launch-xenstore.in
> @@ -62,7 +62,7 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
>   	}
>  
>   	echo -n Starting $XENSTORED...
> -	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid $XENSTORED_ARGS
> +	$XENSTORED --pid-file @XEN_RUN_DIR@/xenstored.pid --buildid-file @XEN_RUN_DIR@/xenstored.buildid $XENSTORED_ARGS
>  
>   	systemd-notify --booted 2>/dev/null || timeout_xenstore $XENSTORED || exit 1
>  
> diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
> index 9a0f0d012d..c63350980b 100644
> --- a/tools/xenstore/Makefile
> +++ b/tools/xenstore/Makefile
> @@ -66,6 +66,10 @@ $(XENSTORED_OBJS): CFLAGS += $(SYSTEMD_CFLAGS)
>   xenstored: LDFLAGS += $(SYSTEMD_LIBS)
>   endif
>  
> +# xenstored: enforce creation of a buildID section and use a linker
> +# script to add additional symbols around that section
> +xenstored: LDFLAGS +=  -Wl,--build-id=sha1 -Wl,-T,buildid_symbols.ld
> +
>   $(XENSTORED_OBJS): CFLAGS += $(CFLAGS_libxengnttab)
>  
>   xenstored: $(XENSTORED_OBJS)
> diff --git a/tools/xenstore/buildid_symbols.ld b/tools/xenstore/buildid_symbols.ld
> new file mode 100644
> index 0000000000..d74024c4e9
> --- /dev/null
> +++ b/tools/xenstore/buildid_symbols.ld
> @@ -0,0 +1,10 @@
> +SECTIONS
> +{
> +       __buildid_note_section = . ;
> +       .note.gnu.build-id :
> +       {
> +               *(.note.gnu.build-id)
> +       }
> +       __buildid_end = . ;
> +}
> +INSERT AFTER .data
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index b4be374d3f..c6f107bdd9 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1844,6 +1844,7 @@ static void usage(void)
>  
>  
>   static struct option options[] = {
> +	{ "buildid-file", 1, NULL, 'B' },
>   	{ "no-domain-init", 0, NULL, 'D' },
>   	{ "entry-nb", 1, NULL, 'E' },
>   	{ "pid-file", 1, NULL, 'F' },
> @@ -1875,12 +1876,16 @@ int main(int argc, char *argv[])
>   	bool outputpid = false;
>   	bool no_domain_init = false;
>   	const char *pidfile = NULL;
> +	const char *buildid_file = NULL;
>   	int timeout;
>  
>  
>   	while ((opt = getopt_long(argc, argv, "DE:F:HNPS:t:T:RVW:", options,

You are missing "B:" in the short options.

>   				  NULL)) != -1) {
>   		switch (opt) {
> +		case 'B':
> +			buildid_file = optarg;
> +			break;
>   		case 'D':
>   			no_domain_init = true;
>   			break;
> @@ -1948,6 +1953,9 @@ int main(int argc, char *argv[])
>   	if (pidfile)
>   		write_pidfile(pidfile);
>  
> +	if (buildid_file)
> +		write_buildid_file(buildid_file);
> +
>   	/* Talloc leak reports go to stderr, which is closed if we fork. */
>   	if (!dofork)
>   		talloc_enable_leak_report_full();

You should update the usage printout, too.

> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 1df6ad94ab..712280626c 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -193,6 +193,9 @@ void xenbus_notify_running(void);
>   /* Write out the pidfile */
>   void write_pidfile(const char *pidfile);
>  
> +/* Write the buildid file */
> +void write_buildid_file(const char *buildidfile);
> +
>   /* Fork but do not close terminal FDs */
>   void daemonize(void);
>   /* Close stdin/stdout/stderr to complete daemonize */
> diff --git a/tools/xenstore/xenstored_minios.c b/tools/xenstore/xenstored_minios.c
> index c94493e52a..ef1151aee4 100644
> --- a/tools/xenstore/xenstored_minios.c
> +++ b/tools/xenstore/xenstored_minios.c
> @@ -24,6 +24,10 @@ void write_pidfile(const char *pidfile)
>   {
>   }
>  
> +void write_buildid_file(const char *buildid_file)
> +{
> +}
> +
>   void daemonize(void)
>   {
>   }
> diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
> index 1f9603fea2..ec017611d6 100644
> --- a/tools/xenstore/xenstored_posix.c
> +++ b/tools/xenstore/xenstored_posix.c
> @@ -20,6 +20,7 @@
>   #include <sys/stat.h>
>   #include <unistd.h>
>   #include <fcntl.h>
> +#include <stdint.h>
>   #include <stdlib.h>
>   #include <sys/mman.h>
>  
> @@ -48,6 +49,57 @@ void write_pidfile(const char *pidfile)
>   	close(fd);
>   }
>  
> +/*
> + * We don't have a working elf.h available here, so let's define our very own
> + * data structs and accessor macros for ELF notes.
> + *
> + * https://docs.oracle.com/cd/E23824_01/html/819-0690/chapter6-18048.html:
> + * For 64–bit objects and 32–bit objects, each entry is an array of 4-byte
> + * words in the format of the target processor.
> + */
> +typedef struct
> +{
> +	uint32_t namesz;
> +	uint32_t descsz;
> +	uint32_t type;
> +} elf_note_hdr;
> +
> +/* ELF Note accessors, copied from Xen's elf.h */
> +#define ELFNOTE_ALIGN(_n_) (((_n_)+3)&~3)
> +#define ELFNOTE_NAME(_n_) ((char*)(_n_) + sizeof(*(_n_)))
> +#define ELFNOTE_DESC(_n_) (ELFNOTE_NAME(_n_) + ELFNOTE_ALIGN((_n_)->namesz))
> +/* GNU LD: type == note (NT_GNU_BUILD_ID as in
> + * https://sourceware.org/ml/binutils/2007-07/msg00012.html)*/
> +#define NT_GNU_BUILD_ID 3
> +
> +
> +void write_buildid_file(const char *buildid_file)
> +{
> +	unsigned int i = 0;
> +	FILE *fdesc;
> +	extern elf_note_hdr __buildid_note_section;
> +	unsigned int id_length = __buildid_note_section.descsz;
> +	char* desc = ELFNOTE_DESC(&__buildid_note_section);
> +
> +	if (__buildid_note_section.type != NT_GNU_BUILD_ID)
> +		barf("Expected GNU_BUILDID note, but found type '%d'",
> +		     __buildid_note_section.type);
> +
> +	fdesc = fopen(buildid_file, "w+");
> +	if (!fdesc)
> +		barf_perror("Error opening buildid file %s", buildid_file);
> +
> +	/* We exit silently if daemon already running. */
> +	if (lockf(fileno(fdesc), F_TLOCK, 0) == -1)
> +		exit(0);
> +
> +	for (i = 0; i < id_length; ++i)
> +		fprintf(fdesc, "%02x", (unsigned char)desc[i]);
> +	fprintf(fdesc, "\n");
> +
> +	fclose(fdesc);
> +}
> +
>   /* Stevens. */
>   void daemonize(void)
>   {
> 

In general I don't like the approach using a file for this purpose.
In case there is no generic solution possible, I'd rather have a
XS_CONTROL sub command to get the current version from Xenstore. This
will then at once cover xenstore-stubdom, too.


Juergen


--P4Fykfytw7OBIa4CBxJXkpBIUjBRQjwdE--


--VhxVcXNzXEhYwBYwQcp6OXMYrtruzq31i--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:48:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:48:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26706.55156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbJE-0001Nm-4A; Fri, 13 Nov 2020 15:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26706.55156; Fri, 13 Nov 2020 15:48:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbJE-0001Nf-0w; Fri, 13 Nov 2020 15:48:20 +0000
Received: by outflank-mailman (input) for mailman id 26706;
 Fri, 13 Nov 2020 15:48:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdbJC-0001Na-76
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:48:18 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e439676-1b0f-4398-9b76-bb9731d7ef79;
 Fri, 13 Nov 2020 15:48:17 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id u18so14440482lfd.9
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 07:48:17 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id c1sm1587744lfj.222.2020.11.13.07.48.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 07:48:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=qe6NRoEvp7r9HrfIe0ZWh52c3djO2Sp8mZj0Ag3hr+M=;
        b=qRTWxNsLZjXVUOnqyABB5npTQHVq8XDg5mgm4PK5qOeggMsnrdSEfUk3LZsST7s6Td
         R7ys+29LVEhtp3qu6WLAwJuaHEWlitsZRu8gBrMsc071o/klw0DS+Kxx5c2WtYwO9Zqs
         2G8RIZujmI3CQYfY/2luAtVSO9F1ShfYeE3z8rfa5GRN21+3Q07q9UNvXuj/RB6S9P/X
         xdQQ08cVxSomZz3c5BIeoZnLxXDUn6HfmEQOfHgl9uOvlLwg8VuqLYNNum5FaHSepLBy
         tr3lA7rGYShHxcgj8jofUOB72Avmhp+XS9g30Dq5qxjRuQqjwoFl8iC69NEH2R7Yl/w7
         mQjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=qe6NRoEvp7r9HrfIe0ZWh52c3djO2Sp8mZj0Ag3hr+M=;
        b=MgVPzZfsmNnnZAoxoatT9iilDF8hLHmNUOVRL0wfj+xtTIcn/Pi6ua47MNcKVeAx8V
         RfV/8/9kH2dThQlF60Tk64b+rNmC0iACJIov8Y5jC6ToUTdlgx0vChvBUaGviI+0TnrH
         TmvXzMKjPKjsXmDQKdOxF9rr+CcF2CU9tZf4m2vAIpwkXlYZZT1aknxYpJ1Mo3ZZkaOI
         owTgGr+LNPilhoxf1z6T/yUACIGEpYB7JNCMWwe7vJK/5kX1emPRIyrW2uxkZl/zSMaR
         CysdD4EEZG7UPTS04d5N4eZUh0XEHp0IznGMVpeeoieHCZxNQ6yjY/hsiYtSy+nBdDdD
         5Bfw==
X-Gm-Message-State: AOAM5334XSr0oHUD+ux+znS5WcXJFzJvUgY2DSXTDkL0DMIx5gpHuvXy
	pyiWi4nVsm8n043SEzfw7qMzEuP8IBC5GQ==
X-Google-Smtp-Source: ABdhPJw19rLv0uqbpJx41Auicvt/NrUcrSAdHrlYl8M3117HEJkrn+AjGMBgW4GxB+GmZregyWop5w==
X-Received: by 2002:a19:3f55:: with SMTP id m82mr1119605lfa.218.1605282491255;
        Fri, 13 Nov 2020 07:48:11 -0800 (PST)
Subject: Re: [PATCH V2 14/23] arm/ioreq: Introduce arch specific bits for
 IOREQ/DM features
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien.grall@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Paul Durrant <paul@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-15-git-send-email-olekstysh@gmail.com>
 <2c36a380-633b-1e3f-3f99-014bc315e75f@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <e9472c29-237b-8096-4d2b-26e08c80f064@gmail.com>
Date: Fri, 13 Nov 2020 17:48:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2c36a380-633b-1e3f-3f99-014bc315e75f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 13:48, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/common/ioreq.c
>> +++ b/xen/common/ioreq.c
>> @@ -18,6 +18,7 @@
>>   
>>   #include <xen/ctype.h>
>>   #include <xen/domain.h>
>> +#include <xen/domain_page.h>
>>   #include <xen/event.h>
>>   #include <xen/init.h>
>>   #include <xen/irq.h>
> There preferably wouldn't be a need to touch non-Arm code in this
> patch. I suppose the added #include could easily be introduced
> e.g. while moving the file?

ok


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:53:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26717.55172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbOL-0002Hk-QA; Fri, 13 Nov 2020 15:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26717.55172; Fri, 13 Nov 2020 15:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbOL-0002Hd-Mn; Fri, 13 Nov 2020 15:53:37 +0000
Received: by outflank-mailman (input) for mailman id 26717;
 Fri, 13 Nov 2020 15:53:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdbOK-0002HY-Dj
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:53:36 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5bd413d-4755-4114-b90f-f7a1d23bc0e8;
 Fri, 13 Nov 2020 15:53:35 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id 74so14503326lfo.5
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 07:53:35 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id f9sm1595055lfa.187.2020.11.13.07.53.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 07:53:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=wAmQ7hvO7oCPlSvUDNV0wFKLiIoXw4qlSqyYl9lvpEk=;
        b=o+jkUKDcYK+fJPRePEhnY56WU1qlN6GoYb9l4Cc2phXgQhgZNk3I7X0gGKSTLhzWh9
         ec/PId0QMLfH0DQgX/1+gT+A8vDrVYFcLvTMZi58deXLfZtTkiIyKOwKkofINUGCURxB
         5Q8eqOfVL9RcrGawILWUJjnOI5FFTUojB6osKeBhGDU6FHfRr9fXfU+Ia5edkN8+rrV1
         1AAsAFvqgoR7TdEwj4YED6ziDW5yaqa7oq6vSDXb57Q+5HKmpImATuXE5corRRzLU9R2
         gQCzOHuZ0ah7YRzF8dPXDQZsrUxigY2t12ByYYMJfESSt1k4vu1qBas75Cg3R+n6mb7R
         8Tgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=wAmQ7hvO7oCPlSvUDNV0wFKLiIoXw4qlSqyYl9lvpEk=;
        b=Z9jDv30vN1yuhxeTO44FB18l/oaLkwlVgS4ALSRn1UGOoQcADgsWIqp0ytjxCKRiN/
         IIXSru15el9nIr0bHUAliLG+pMzY6/U82TTWpkcyrOUk7v0o64Y7WYy7jonDqL3S6APD
         ksAUx/NruhgucSAoM2tSuiIL6ZocRI5d0e8kzARtdrYn6LUT2B2lHHT1QFMwAavTrOX8
         WhhO4Av9e8D5idYmdqzBe92BS2aiJddeNJpUepCJVwq8jbzOuz/af+HqNDzFP6zcQDNV
         ekki744Siern4vGJFHOUYCozariBcS2lquVt5UrJlZq5RW12TcajqDQDSyO6pwP0m6CP
         EmvQ==
X-Gm-Message-State: AOAM5329uOR7Y7MkVGW/wi4RCLynj2XsGicsyiMBHeF0PucjlkChHuYE
	hngXRPeh1sFM4jD5lxTtm/d4PL3A6rbTlQ==
X-Google-Smtp-Source: ABdhPJy/Pv65L6U3qFCnQcVQGplfVcijbVrMf/ugMAyMfNeDo7cSrpWXkiCbg/lXY62yBMmAWQqFXg==
X-Received: by 2002:a19:8396:: with SMTP id f144mr1087638lfd.444.1605282809084;
        Fri, 13 Nov 2020 07:53:29 -0800 (PST)
Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
 <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com>
Date: Fri, 13 Nov 2020 17:53:27 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 13:45, Jan Beulich wrote:

Hi Jan.

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> This patch removes "hvm" prefixes and infixes from IOREQ
>> related function names in the common code.
> At least some of the functions touched here would be nice to be
> moved to a more consistent new naming scheme right away, to
> avoid having to touch all the same places again. I guess ioreq
> server functions would be nice to all start with ioreq_server_
> and ioreq functions to all start with ioreq_. E.g. ioreq_send()
> and ioreq_server_select().

As initially discussed at patch #1, OK, I will prepare a list of the
proposed renamings so that we can hopefully find common ground.
After that I will make the required renaming.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 15:59:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 15:59:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26730.55195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbUE-0002YY-Ms; Fri, 13 Nov 2020 15:59:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26730.55195; Fri, 13 Nov 2020 15:59:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbUE-0002YR-IA; Fri, 13 Nov 2020 15:59:42 +0000
Received: by outflank-mailman (input) for mailman id 26730;
 Fri, 13 Nov 2020 15:59:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdbUD-0002Y4-CW
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 15:59:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfcc9d10-0f52-431e-a24a-0f35821533c0;
 Fri, 13 Nov 2020 15:59:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdbU9-0004c4-3m; Fri, 13 Nov 2020 15:59:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdbU8-0006bL-PR; Fri, 13 Nov 2020 15:59:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdbU8-00044S-Oz; Fri, 13 Nov 2020 15:59:36 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=74cF1PEfkDxa8DED9YrsmV1THyboqWkfPNncFlFLpLI=; b=ZxUV4XBDV939M5sb0Qn3CFrqf1
	ssUEDKBLp7msYXevHGXKuxiDGWexrx1cqfaWXQ2f3FFNeWXtJLrVbWRUfIRiGkbXOe/vS9iKYJyqn
	ZZls6imnSD2PHv4RvmRFYPycJyJiI4EVZoXshAPReZyXDh1XkzWDTPGHw4RJEfhacJTA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156722-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156722: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-xsm:guest-localmigrate/x10:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-xsm:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2544d06afd8d060f35b159809274e4b7477e63e8
X-Osstest-Versions-That:
    linux=6e97ed6efa701db070da0054b055c085895aba86
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 15:59:36 +0000

flight 156722 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156722/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-xsm 20 guest-localmigrate/x10 fail in 156677 pass in 156722
 test-amd64-i386-libvirt-xsm  21 guest-start.2              fail pass in 156677

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156412
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156412
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156412
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156412
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156412
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156412
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156412
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156412
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156412
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156412
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156412
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                2544d06afd8d060f35b159809274e4b7477e63e8
baseline version:
 linux                6e97ed6efa701db070da0054b055c085895aba86

Last test of basis   156412  2020-11-05 11:13:59 Z    8 days
Failing since        156620  2020-11-10 12:10:31 Z    3 days    3 attempts
Testing same since   156677  2020-11-11 07:50:18 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "kiyin(尹亮)" <kiyin@tencent.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Sverdlin <alexander.sverdlin@nokia.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Strohman <astroh@amazon.com>
  Artem Lapkin <art@khadas.com>
  Baurzhan Ismagulov <ibr@radix50.net>
  Ben Skeggs <bskeggs@redhat.com>
  Bjørn Mork <bjorn@mork.no>
  Boris Brezillon <boris.brezillon@collabora.com>
  Borislav Petkov <bp@suse.de>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Claire Chang <tientzu@chromium.org>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Clément Péron <peron.clem@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Daniele Palmas <dnlplm@gmail.com>
  Dany Madden <drt@linux.ibm.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  David S. Miller <davem@davemloft.net>
  Eddy Wu <eddy_wu@trendmicro.com>
  Eddy Wu <itseddy0402@gmail.com>
  Fangrui Song <maskray@google.com>
  Felipe Balbi <balbi@kernel.org>
  Frank Slotta <frank.slotta@posteo.de>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Geoffrey D. Bennett <g@b4.vu>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Guenter Roeck <linux@roeck-us.net>
  Harald Freudenberger <freude@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com>
  Hoang Huu Le <hoang.h.le@dektech.com.au>
  Hoegeun Kwon <hoegeun.kwon@samsung.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Vander Stoep <jeffv@google.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@siol.net>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Kailang Yang <kailang@realtek.com>
  Kairui Song <kasong@redhat.com>
  Karol Herbst <kherbst@redhat.com>
  Keith Winstein <keithw@cs.stanford.edu>
  Kevin Hilman <khilman@baylibre.com>
  kiyin(尹亮) <kiyin@tencent.com>
  Lee Jones <lee.jones@linaro.org>
  Len Brown <len.brown@intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Brown <broonie@kernel.org>
  Mark Deneen <mdeneen@saucontech.com>
  Martin Hundebøll <martin@geanix.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Gorski <mateusz.gorski@linux.intel.com>
  Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
  Maxime Ripard <maxime@cerno.tech>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Walle <michael@walle.cc>
  Michal Hocko <mhocko@suse.com>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mika Kuoppala <mika.kuoppala@linux.intel.com>
  Mike Galbraith <efault@gmx.de>
  Ming Lei <ming.lei@redhat.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Oleg Nesterov <oleg@redhat.com>
  Ondřej Jirman <megous@megous.com>
  Pali Rohár <pali@kernel.org>
  Paul E. McKenney <paulmck@kernel.org>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Chen <peter.chen@nxp.com>
  Petr Malat <oss@malat.biz>
  Qian Cai <cai@redhat.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Qiujun Huang <hqjagain@gmail.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ralph Campbell <rcampbell@nvidia.com>
  Rob Herring <robh@kernel.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sami Tolvanen <samitolvanen@google.com>
  Sasha Levin <sashal@kernel.org>
  Scott K Logan <logans@cottsay.net>
  Shannon Nelson <snelson@pensando.io>
  Shijie Luo <luoshijie1@huawei.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sukadev Bhattiprolu <sukadev@linux.ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tianci.Yin <tianci.yin@amd.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vignesh Raghavendra <vigneshr@ti.com>
  Vinay Kumar Yadav <vinay.yadav@chelsio.com>
  Vincent Whitchurch <vincent.whitchurch@axis.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  wenxu <wenxu@ucloud.cn>
  Will Deacon <will@kernel.org>
  Xiang Chen <chenxiang66@hisilicon.com>
  YueHaibing <yuehaibing@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Ziyi Cao <kernel@septs.pw>
  Zqiang <qiang.zhang@windriver.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   6e97ed6efa70..2544d06afd8d  2544d06afd8d060f35b159809274e4b7477e63e8 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 16:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 16:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26739.55206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbVG-0003tD-7B; Fri, 13 Nov 2020 16:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26739.55206; Fri, 13 Nov 2020 16:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbVG-0003t6-4J; Fri, 13 Nov 2020 16:00:46 +0000
Received: by outflank-mailman (input) for mailman id 26739;
 Fri, 13 Nov 2020 16:00:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdbVE-0003sz-Lw
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:00:44 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbd86e06-fe96-41c0-a164-130a4c41145e;
 Fri, 13 Nov 2020 16:00:42 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ADG0aTX014346
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 17:00:37 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 858AC2E9CA8; Fri, 13 Nov 2020 17:00:31 +0100 (MET)
X-Inumbo-ID: fbd86e06-fe96-41c0-a164-130a4c41145e
Date: Fri, 13 Nov 2020 17:00:31 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113160031.GA3512@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
 <20201113115457.GD1512@antioche.eu.org>
 <20201113143349.gehu36wsipvpkrt7@Air-de-Roger>
 <20201113151428.GE1512@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201113151428.GE1512@antioche.eu.org>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 13 Nov 2020 17:00:38 +0100 (MET)

On Fri, Nov 13, 2020 at 04:14:28PM +0100, Manuel Bouyer wrote:
> I can try a kernel without this driver, to see if other devices reports
> interrupt.

This didn't change anything; I don't get any more interrupts with this driver
commented out.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 16:03:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 16:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26751.55223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbY4-000472-OV; Fri, 13 Nov 2020 16:03:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26751.55223; Fri, 13 Nov 2020 16:03:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdbY4-00046v-LH; Fri, 13 Nov 2020 16:03:40 +0000
Received: by outflank-mailman (input) for mailman id 26751;
 Fri, 13 Nov 2020 16:03:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdbY3-00046p-In
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:03:39 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54ab88de-d774-493a-b652-26989deddbf5;
 Fri, 13 Nov 2020 16:03:38 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id u18so14517108lfd.9
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 08:03:38 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id a8sm722416ljq.77.2020.11.13.08.03.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 08:03:31 -0800 (PST)
X-Inumbo-ID: 54ab88de-d774-493a-b652-26989deddbf5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=LmTZA96QKs8xdi4U0xx3GlMAMvZ/h0NGq07ZxAKmDo8=;
        b=T6e9gy2nkB09kkTsUx9HDOQxKkQBotjG2A0rz3Q03pVWFGpbboBBkpOF8Bzf1sJsQS
         7+n7/LTzIh+6ca77iH0YG5uNCSZL02Td9QnanoJmVGVzAGPJk4+kxaqkQbxqLg9OHthC
         l2SBbKY5ADBaeShbpe0iv0ymRn6CLILtxvS9iXNDjDOTFGiGnwnDPCt6wj0vZPN8FOdC
         bBp3HORP7xTiJDcQks+85sodPpvpuIIqfjxBucL+r9MlMNlrlq3gjYpTGaxL/DntF3GH
         T0Q2Bh9F48bmBgRtUNvVVEyzNyNtfzrlyQxzxLzwWx0+wRe6KMAaaNcyFQ9tbCEgFMlX
         Mdyg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=LmTZA96QKs8xdi4U0xx3GlMAMvZ/h0NGq07ZxAKmDo8=;
        b=a+SmcfxkMuRnHepYevqeIzNW58wxeQhhve3C/JX+5bU0yuo1QYyonB5mrey5PUMXCY
         KS2TKgZpsMQ7H3pFL8d7pInHPVSc6RXiYY/BJykTD9OC2fwKrfhI9LibMA2yJocwkWu2
         BXZ1FLigdRlON4RxzUwBwfrZvEupGh1yWfyjWQLjWrtFdXbjZJ8jlmSPzImfJ1wgnMj/
         RCIIZsJ5WJApBzR715H8BftW0YFrCvaKlHkeraKdO2a26EVXhPRfA+jt6TmyDbJIAgG1
         tS64FMDZTgHUP0JZhV0xu23D334cfFVw0uqyunr/IryrOIGodBDV6CPnokWn6IJWYW2l
         ZkdA==
X-Gm-Message-State: AOAM530RqJTzLMcCvdJY6+Bz1/BgG6hMwo4Zk5BgZDoXBSgMvhldCFr3
	RhpWKaHROQYYx3FbgU3r3wa36Sp0wAjJGA==
X-Google-Smtp-Source: ABdhPJzIuJK968n1V0CEJJ52LMvwuTr6Rbi6YzXDhyMB1i0EGuu1zw9RAbvRiCUTIa5drOEPkSBh2A==
X-Received: by 2002:a19:c191:: with SMTP id r139mr1265705lff.258.1605283412442;
        Fri, 13 Nov 2020 08:03:32 -0800 (PST)
Subject: Re: [PATCH V2 20/23] xen/ioreq: Make x86's send_invalidate_req()
 common
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-21-git-send-email-olekstysh@gmail.com>
 <86ac139b-4dfd-fc45-ea77-3bd51acaea15@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <efa0907b-5033-b005-a825-41a51c9b17b0@gmail.com>
Date: Fri, 13 Nov 2020 18:03:30 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <86ac139b-4dfd-fc45-ea77-3bd51acaea15@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 13:55, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/include/asm-x86/hvm/io.h
>> +++ b/xen/include/asm-x86/hvm/io.h
>> @@ -97,7 +97,6 @@ bool relocate_portio_handler(
>>       unsigned int size);
>>   
>>   void send_timeoffset_req(unsigned long timeoff);
>> -void send_invalidate_req(void);
>>   bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
>>                                     struct npfec);
>>   bool handle_pio(uint16_t port, unsigned int size, int dir);
>> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
>> index 0679fef..aad682f 100644
>> --- a/xen/include/xen/ioreq.h
>> +++ b/xen/include/xen/ioreq.h
>> @@ -126,6 +126,7 @@ struct ioreq_server *select_ioreq_server(struct domain *d,
>>   int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
>>                  bool buffered);
>>   unsigned int broadcast_ioreq(ioreq_t *p, bool buffered);
>> +void send_invalidate_ioreq(void);
> Again while renaming this function anyway could we see about giving
> it a suitable and consistent name? Maybe
> ioreq_request_mapcache_invalidate() or (to avoid the double "request")
> ioreq_signal_mapcache_invalidate()? Maybe even ioreq_server_...().
IIRC, send_invalidate_ioreq() was one of the names proposed during the V1 review.
But to follow the new scheme, I am fine with either the first or the second.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 16:46:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 16:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26763.55235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcDX-0007fK-4O; Fri, 13 Nov 2020 16:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26763.55235; Fri, 13 Nov 2020 16:46:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcDX-0007fD-1Q; Fri, 13 Nov 2020 16:46:31 +0000
Received: by outflank-mailman (input) for mailman id 26763;
 Fri, 13 Nov 2020 16:46:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rZ2H=ET=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kdcDV-0007f8-ND
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:46:29 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ff6bf7d-f20c-433d-bfe7-db8f8d7895ff;
 Fri, 13 Nov 2020 16:46:28 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ADGkMVt016311
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 17:46:23 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 3958F2E9CA8; Fri, 13 Nov 2020 17:46:17 +0100 (MET)
X-Inumbo-ID: 8ff6bf7d-f20c-433d-bfe7-db8f8d7895ff
Date: Fri, 13 Nov 2020 17:46:17 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: dom0 PVH: 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
Message-ID: <20201113164617.GJ1512@antioche.eu.org>
References: <20201112155715.GA5003@antioche.eu.org>
 <20201112163240.6xswol2iswikdzef@Air-de-Roger>
 <20201112172704.GA5899@antioche.eu.org>
 <20201112201939.be6ztg2iipwa6hkb@Air-de-Roger>
 <20201113115457.GD1512@antioche.eu.org>
 <20201113143349.gehu36wsipvpkrt7@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201113143349.gehu36wsipvpkrt7@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 13 Nov 2020 17:46:23 +0100 (MET)

I just noticed that CPU0 also stops receiving clock interrupts - which may
explain why the kernel wedges. I can still enter NetBSD's debugger,
which means that console interrupts are still working (the console's event
channel is also handled by CPU 0).
A 'q' in Xen doens't show any pending or masked events, for any CPU.

The NetBSD interrupt counters show event channel 2's counter (the CPU0 clock)
stuck at 13, while others are happily increasing.

Any idea what to look at from here?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 16:55:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 16:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26771.55246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcM4-0000BD-0d; Fri, 13 Nov 2020 16:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26771.55246; Fri, 13 Nov 2020 16:55:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcM3-0000B5-Tx; Fri, 13 Nov 2020 16:55:19 +0000
Received: by outflank-mailman (input) for mailman id 26771;
 Fri, 13 Nov 2020 16:55:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EgAG=ET=amazon.de=prvs=579e99c79=doebel@srs-us1.protection.inumbo.net>)
 id 1kdcM1-0000B0-Uj
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:55:18 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4431a71c-c10c-4d78-96a7-949b7e31aeac;
 Fri, 13 Nov 2020 16:55:17 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1a-af6a10df.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 13 Nov 2020 16:55:11 +0000
Received: from EX13D03EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1a-af6a10df.us-east-1.amazon.com (Postfix) with ESMTPS
 id EAD62A1F8A; Fri, 13 Nov 2020 16:55:09 +0000 (UTC)
Received: from [192.168.31.251] (10.43.161.55) by EX13D03EUC002.ant.amazon.com
 (10.43.164.60) with Microsoft SMTP Server (TLS) id 15.0.1497.2;
 Fri, 13 Nov 2020 16:55:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=EgAG=ET=amazon.de=prvs=579e99c79=doebel@srs-us1.protection.inumbo.net>)
	id 1kdcM1-0000B0-Uj
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:55:18 +0000
X-Inumbo-ID: 4431a71c-c10c-4d78-96a7-949b7e31aeac
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4431a71c-c10c-4d78-96a7-949b7e31aeac;
	Fri, 13 Nov 2020 16:55:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1605286517; x=1636822517;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=0cW9/0mnolwA1dO33wTeIFRKYLG4E2BWKZ1kWv8/TwA=;
  b=UctwvzuRejkdwDid2w9xGNqBpcpGh9pU0qiFzd1Rb2OE9O7ML8k8/W/s
   yYCm54rdzwj1jvXu1pkwFsHXCTnhInsRm/jXqOXBwwb9kAeo6QtfxSG/n
   kQV355lcwUAgnAhQl32uc7HyLGoMi1o32LIUJvUudKDa8ahHSGPHTMx2y
   g=;
X-IronPort-AV: E=Sophos;i="5.77,476,1596499200"; 
   d="scan'208";a="63707561"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-1a-af6a10df.us-east-1.amazon.com) ([10.43.8.6])
  by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP; 13 Nov 2020 16:55:11 +0000
Received: from EX13D03EUC002.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
	by email-inbound-relay-1a-af6a10df.us-east-1.amazon.com (Postfix) with ESMTPS id EAD62A1F8A;
	Fri, 13 Nov 2020 16:55:09 +0000 (UTC)
Received: from [192.168.31.251] (10.43.161.55) by EX13D03EUC002.ant.amazon.com
 (10.43.164.60) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Fri, 13 Nov
 2020 16:55:05 +0000
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201113141823.58712-1-doebel@amazon.de>
 <de06e7ce-65cd-95fb-5862-0135e2110a99@citrix.com>
From: Bjoern Doebel <doebel@amazon.de>
Message-ID: <c216f07a-df70-ddb5-46fd-7b61e36fa6fc@amazon.de>
Date: Fri, 13 Nov 2020 17:55:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <de06e7ce-65cd-95fb-5862-0135e2110a99@citrix.com>
Content-Language: en-GB
X-Originating-IP: [10.43.161.55]
X-ClientProxiedBy: EX13D35UWB001.ant.amazon.com (10.43.161.47) To
 EX13D03EUC002.ant.amazon.com (10.43.164.60)
Precedence: Bulk
Content-Type: text/plain; charset="utf-8"; format="flowed"
Content-Transfer-Encoding: base64

Ck9uIDEzLjExLjIwIDE1OjMwLCBBbmRyZXcgQ29vcGVyIHdyb3RlOgo+IE9uIDEzLzExLzIwMjAg
MTQ6MTgsIEJqb2VybiBEb2ViZWwgd3JvdGU6Cj4+IFJpZ2h0IG5vdyB3ZSBkbyBub3QgaGF2ZSBh
IG1lY2hhbmlzbSB0byBkZXRlcm1pbmUgdGhlIHZlcnNpb24gb2YgdGhlCj4+IGN1cnJlbnRseSBy
dW5uaW5nIHhlbnN0b3JlZCBhdCBydW50aW1lLiBBcyB4ZW5zdG9yZWQgcnVucyB0aHJvdWdob3V0
IHRoZQo+PiBsaWZldGltZSBvZiBhIFhlbiBob3N0LCB0aGlzIG1heSBsZWFkIHRvIHByb2JsZW1z
IHdoZW4gbmV3ZXIgdXNlciBzcGFjZQo+PiBidWlsZHMgYXJlIHN0YWdlZC4gVGhlbiwgdGhlIHJ1
bm5pbmcgeGVuc3RvcmVkIHdpbGwgbm8gbG9uZ2VyIG1hdGNoIHRoZQo+PiB2ZXJzaW9uIG9mIHRo
ZSBpbnN0YWxsZWQgeGVuc3RvcmVkLgo+Pgo+PiBUbyBhbGxvdyB1c2VycyB0byBhbHdheXMgaWRl
bnRpZnkgdGhlIHJ1bm5pbmcgdmVyc2lvbiBvZiB4ZW5zdG9yZWQsIGFkZAo+PiBhIGxpbmtlci1n
ZW5lcmF0ZWQgdW5pcXVlIGJ1aWxkIElEIHRvIGV2ZXJ5IHhlbnN0b3JlZCBidWlsZC4gQWRkCj4+
IGZ1bmN0aW9uYWxpdHkgdG8gbG9nIHRoaXMgYnVpbGQgSUQgaW50byBhIGZpbGUgdXBvbiBzZXJ2
aWNlIHN0YXJ0dXAuCj4+Cj4+IFNpZ25lZC1vZmYtYnk6IEJqb2VybiBEb2ViZWwgPGRvZWJlbEBh
bWF6b24uZGU+Cj4+IFJldmlld2VkLWJ5OiBNYXJ0aW4gTWF6ZWluIDxhbWF6ZWluQGFtYXpvbi5k
ZT4KPj4gUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvLnVrPgo+
IEkgdW5kZXJzdGFuZCB0aGUgcHJvYmxlbSB5b3UncmUgdHJ5aW5nIHRvIHNvbHZlLCBidXQgd2h5
IGlzIHRoaXMKPiBhbnl0aGluZyBtb3JlIHRoYW4ganVzdCBlbmFibGluZyBidWlsZC1pZCdzIGJ5
IGRlZmF1bHQgYWNyb3NzIHRvb2xzLyA/Cj4KPiBUaGVyZSBhcmUgYWxyZWFkeSBzdGFuZGFyZCB3
YXlzIG9mIGludGVyYWN0aW5nIHdpdGggdGhlIGJ1aWxkIGlkIG9mCj4gcnVubmluZyBleGVjdXRh
YmxlcyBvbiB0aGUgc3lzdGVtLiAgSSdkIHN0cm9uZ2x5IGRpc2NvdXJhZ2UgZG9pbmcKPiBhbnl0
aGluZyBjdXN0b20gaW4geGVuc3RvcmVkIHNwZWNpZmljYWxseS4KTWF5IEkgYXNrIHdoYXQgdG9v
bGluZyB5b3Ugd291bGQgdXNlIHRvIGludGVyYWN0IHdpdGggYSBydW5uaW5nIHByb2Nlc3MnIApi
dWlsZGlkPwo+IH5BbmRyZXcKCkJqb2VybgoKCgoKQW1hem9uIERldmVsb3BtZW50IENlbnRlciBH
ZXJtYW55IEdtYkgKS3JhdXNlbnN0ci4gMzgKMTAxMTcgQmVybGluCkdlc2NoYWVmdHNmdWVocnVu
ZzogQ2hyaXN0aWFuIFNjaGxhZWdlciwgSm9uYXRoYW4gV2Vpc3MKRWluZ2V0cmFnZW4gYW0gQW10
c2dlcmljaHQgQ2hhcmxvdHRlbmJ1cmcgdW50ZXIgSFJCIDE0OTE3MyBCClNpdHo6IEJlcmxpbgpV
c3QtSUQ6IERFIDI4OSAyMzcgODc5CgoK



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 16:57:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 16:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26778.55259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcNi-0000Jp-DT; Fri, 13 Nov 2020 16:57:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26778.55259; Fri, 13 Nov 2020 16:57:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcNi-0000Jh-9j; Fri, 13 Nov 2020 16:57:02 +0000
Received: by outflank-mailman (input) for mailman id 26778;
 Fri, 13 Nov 2020 16:57:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EgAG=ET=amazon.de=prvs=579e99c79=doebel@srs-us1.protection.inumbo.net>)
 id 1kdcNh-0000Jb-6k
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:57:01 +0000
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 219d32d0-21cf-47c4-9bd7-5db01f4ab305;
 Fri, 13 Nov 2020 16:57:00 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1a-821c648d.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 13 Nov 2020 16:56:51 +0000
Received: from EX13D03EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1a-821c648d.us-east-1.amazon.com (Postfix) with ESMTPS
 id 3060CA18D2; Fri, 13 Nov 2020 16:56:49 +0000 (UTC)
Received: from [192.168.31.251] (10.43.160.229) by
 EX13D03EUC002.ant.amazon.com (10.43.164.60) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 13 Nov 2020 16:56:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=EgAG=ET=amazon.de=prvs=579e99c79=doebel@srs-us1.protection.inumbo.net>)
	id 1kdcNh-0000Jb-6k
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 16:57:01 +0000
X-Inumbo-ID: 219d32d0-21cf-47c4-9bd7-5db01f4ab305
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 219d32d0-21cf-47c4-9bd7-5db01f4ab305;
	Fri, 13 Nov 2020 16:57:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1605286621; x=1636822621;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=TKgf0TuuRv8/Cr0iB3Z5ls6TuogmoGzpQbk1XVgt+Io=;
  b=pWSfsPnDmilpknQzgg4u1BnFkTKxMxZfLVTsCc5oFuHkCigIGbhSREbc
   wM3TbCg0klj7mrdCyMGPMkC+w/Lzf6Sp7dZvsxyH1eRxwu/hjsLqg4OW6
   3RBLyBTZohxp2LJ6YHSaETiKfAGGMEktBxixiSYbvfX2KacgjRWdHQYn3
   Q=;
X-IronPort-AV: E=Sophos;i="5.77,476,1596499200"; 
   d="scan'208";a="63969555"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-1a-821c648d.us-east-1.amazon.com) ([10.43.8.6])
  by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP; 13 Nov 2020 16:56:51 +0000
Received: from EX13D03EUC002.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
	by email-inbound-relay-1a-821c648d.us-east-1.amazon.com (Postfix) with ESMTPS id 3060CA18D2;
	Fri, 13 Nov 2020 16:56:49 +0000 (UTC)
Received: from [192.168.31.251] (10.43.160.229) by
 EX13D03EUC002.ant.amazon.com (10.43.164.60) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 13 Nov 2020 16:56:45 +0000
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201113141823.58712-1-doebel@amazon.de>
 <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
From: Bjoern Doebel <doebel@amazon.de>
Message-ID: <0e6b09fe-ffc4-195f-1b6c-67abc0cff92c@amazon.de>
Date: Fri, 13 Nov 2020 17:56:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
Content-Language: en-GB
X-Originating-IP: [10.43.160.229]
X-ClientProxiedBy: EX13D01UWB004.ant.amazon.com (10.43.161.157) To
 EX13D03EUC002.ant.amazon.com (10.43.164.60)
Precedence: Bulk
Content-Type: text/plain; charset="utf-8"; format="flowed"
Content-Transfer-Encoding: base64

T24gMTMuMTEuMjAgMTY6MzYsIErDvHJnZW4gR3Jvw58gd3JvdGU6Cj4gT24gMTMuMTEuMjAgMTU6
MTgsIEJqb2VybiBEb2ViZWwgd3JvdGU6Cj4+IFJpZ2h0IG5vdyB3ZSBkbyBub3QgaGF2ZSBhIG1l
Y2hhbmlzbSB0byBkZXRlcm1pbmUgdGhlIHZlcnNpb24gb2YgdGhlCj4+IGN1cnJlbnRseSBydW5u
aW5nIHhlbnN0b3JlZCBhdCBydW50aW1lLiBBcyB4ZW5zdG9yZWQgcnVucyB0aHJvdWdob3V0IHRo
ZQo+PiBsaWZldGltZSBvZiBhIFhlbiBob3N0LCB0aGlzIG1heSBsZWFkIHRvIHByb2JsZW1zIHdo
ZW4gbmV3ZXIgdXNlciBzcGFjZQo+PiBidWlsZHMgYXJlIHN0YWdlZC4gVGhlbiwgdGhlIHJ1bm5p
bmcgeGVuc3RvcmVkIHdpbGwgbm8gbG9uZ2VyIG1hdGNoIHRoZQo+PiB2ZXJzaW9uIG9mIHRoZSBp
bnN0YWxsZWQgeGVuc3RvcmVkLgo+Pgo+PiBUbyBhbGxvdyB1c2VycyB0byBhbHdheXMgaWRlbnRp
ZnkgdGhlIHJ1bm5pbmcgdmVyc2lvbiBvZiB4ZW5zdG9yZWQsIGFkZAo+PiBhIGxpbmtlci1nZW5l
cmF0ZWQgdW5pcXVlIGJ1aWxkIElEIHRvIGV2ZXJ5IHhlbnN0b3JlZCBidWlsZC4gQWRkCj4+IGZ1
bmN0aW9uYWxpdHkgdG8gbG9nIHRoaXMgYnVpbGQgSUQgaW50byBhIGZpbGUgdXBvbiBzZXJ2aWNl
IHN0YXJ0dXAuCj4+Cj4+IFNpZ25lZC1vZmYtYnk6IEJqb2VybiBEb2ViZWwgPGRvZWJlbEBhbWF6
b24uZGU+Cj4+IFJldmlld2VkLWJ5OiBNYXJ0aW4gTWF6ZWluIDxhbWF6ZWluQGFtYXpvbi5kZT4K
Pj4gUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9uLmNvLnVrPgo+Cj4g
Tm8gc3VwcG9ydCBmb3Igb3hlbnN0b3JlZCBvciB4ZW5zdG9yZS1zdHViZG9tPwoKWW91ciBzdWdn
ZXN0aW9uIGZ1cnRoZXIgZG93biB3aWxsIGFwcGFyZW50bHkgaGVscCBmb3Igc3R1YmRvbS4gSSBk
byBub3QgCnNwZWFrIG9jYW1sIGF0IGFsbCAtIGhvdyBkbyB3ZSBhZGRyZXNzIHRoaXM/Cgo+Cj4+
IC0tLQo+PiDCoCB0b29scy9ob3RwbHVnL0xpbnV4L2xhdW5jaC14ZW5zdG9yZS5pbiB8wqAgMiAr
LQo+PiDCoCB0b29scy94ZW5zdG9yZS9NYWtlZmlsZcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoCB8wqAgNCArKysKPj4gwqAgdG9vbHMveGVuc3RvcmUvYnVpbGRpZF9zeW1ib2xzLmxkwqDC
oMKgwqDCoCB8IDEwICsrKysrKysKPj4gwqAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
Y8KgwqDCoMKgwqDCoMKgIHzCoCA4ICsrKysrKwo+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5owqDCoMKgwqDCoMKgwqAgfMKgIDMgKysKPj4gwqAgdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX21pbmlvcy5jwqDCoMKgwqDCoCB8wqAgNCArKysKPj4gwqAgdG9vbHMveGVuc3RvcmUv
eGVuc3RvcmVkX3Bvc2l4LmPCoMKgwqDCoMKgwqAgfCA1MiAKPj4gKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKwo+PiDCoCA3IGZpbGVzIGNoYW5nZWQsIDgyIGluc2VydGlvbnMoKyks
IDEgZGVsZXRpb24oLSkKPj4gwqAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL3hlbnN0b3JlL2J1
aWxkaWRfc3ltYm9scy5sZAo+Pgo+PiBkaWZmIC0tZ2l0IGEvdG9vbHMvaG90cGx1Zy9MaW51eC9s
YXVuY2gteGVuc3RvcmUuaW4gCj4+IGIvdG9vbHMvaG90cGx1Zy9MaW51eC9sYXVuY2gteGVuc3Rv
cmUuaW4KPj4gaW5kZXggOTkxZGVjOGQyNS4uYTZmMjI1NDAzMCAxMDA2NDQKPj4gLS0tIGEvdG9v
bHMvaG90cGx1Zy9MaW51eC9sYXVuY2gteGVuc3RvcmUuaW4KPj4gKysrIGIvdG9vbHMvaG90cGx1
Zy9MaW51eC9sYXVuY2gteGVuc3RvcmUuaW4KPj4gQEAgLTYyLDcgKzYyLDcgQEAgdGVzdCAtZiBA
Q09ORklHX0RJUkAvQENPTkZJR19MRUFGX0RJUkAveGVuY29tbW9ucyAKPj4gJiYgLiBAQ09ORklH
X0RJUkAvQENPTkZJR19MRUFGCj4+IMKgwqDCoMKgwqAgfQo+PiDCoCDCoMKgwqDCoMKgIGVjaG8g
LW4gU3RhcnRpbmcgJFhFTlNUT1JFRC4uLgo+PiAtwqDCoMKgICRYRU5TVE9SRUQgLS1waWQtZmls
ZSBAWEVOX1JVTl9ESVJAL3hlbnN0b3JlZC5waWQgJFhFTlNUT1JFRF9BUkdTCj4+ICvCoMKgwqAg
JFhFTlNUT1JFRCAtLXBpZC1maWxlIEBYRU5fUlVOX0RJUkAveGVuc3RvcmVkLnBpZCAtLWJ1aWxk
aWQtZmlsZSAKPj4gQFhFTl9SVU5fRElSQC94ZW5zdG9yZWQuYnVpbGRpZCAkWEVOU1RPUkVEX0FS
R1MKPj4gwqAgwqDCoMKgwqDCoCBzeXN0ZW1kLW5vdGlmeSAtLWJvb3RlZCAyPi9kZXYvbnVsbCB8
fCB0aW1lb3V0X3hlbnN0b3JlIAo+PiAkWEVOU1RPUkVEIHx8IGV4aXQgMQo+PiDCoCBkaWZmIC0t
Z2l0IGEvdG9vbHMveGVuc3RvcmUvTWFrZWZpbGUgYi90b29scy94ZW5zdG9yZS9NYWtlZmlsZQo+
PiBpbmRleCA5YTBmMGQwMTJkLi5jNjMzNTA5ODBiIDEwMDY0NAo+PiAtLS0gYS90b29scy94ZW5z
dG9yZS9NYWtlZmlsZQo+PiArKysgYi90b29scy94ZW5zdG9yZS9NYWtlZmlsZQo+PiBAQCAtNjYs
NiArNjYsMTAgQEAgJChYRU5TVE9SRURfT0JKUyk6IENGTEFHUyArPSAkKFNZU1RFTURfQ0ZMQUdT
KQo+PiDCoCB4ZW5zdG9yZWQ6IExERkxBR1MgKz0gJChTWVNURU1EX0xJQlMpCj4+IMKgIGVuZGlm
Cj4+IMKgICsjIHhlbnN0b3JlZDogZW5mb3JjZSBjcmVhdGlvbiBvZiBhIGJ1aWxkSUQgc2VjdGlv
biBhbmQgdXNlIGEgbGlua2VyCj4+ICsjIHNjcmlwdCB0byBhZGQgYWRkaXRpb25hbCBzeW1ib2xz
IGFyb3VuZCB0aGF0IHNlY3Rpb24KPj4gK3hlbnN0b3JlZDogTERGTEFHUyArPcKgIC1XbCwtLWJ1
aWxkLWlkPXNoYTEgLVdsLC1ULGJ1aWxkaWRfc3ltYm9scy5sZAo+PiArCj4+IMKgICQoWEVOU1RP
UkVEX09CSlMpOiBDRkxBR1MgKz0gJChDRkxBR1NfbGlieGVuZ250dGFiKQo+PiDCoCDCoCB4ZW5z
dG9yZWQ6ICQoWEVOU1RPUkVEX09CSlMpCj4+IGRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS9i
dWlsZGlkX3N5bWJvbHMubGQgCj4+IGIvdG9vbHMveGVuc3RvcmUvYnVpbGRpZF9zeW1ib2xzLmxk
Cj4+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0Cj4+IGluZGV4IDAwMDAwMDAwMDAuLmQ3NDAyNGM0ZTkK
Pj4gLS0tIC9kZXYvbnVsbAo+PiArKysgYi90b29scy94ZW5zdG9yZS9idWlsZGlkX3N5bWJvbHMu
bGQKPj4gQEAgLTAsMCArMSwxMCBAQAo+PiArU0VDVElPTlMKPj4gK3sKPj4gK8KgwqDCoMKgwqDC
oCBfX2J1aWxkaWRfbm90ZV9zZWN0aW9uID0gLiA7Cj4+ICvCoMKgwqDCoMKgwqAgLm5vdGUuZ251
LmJ1aWxkLWlkIDoKPj4gK8KgwqDCoMKgwqDCoCB7Cj4+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgICooLm5vdGUuZ251LmJ1aWxkLWlkKQo+PiArwqDCoMKgwqDCoMKgIH0KPj4gK8KgwqDC
oMKgwqDCoCBfX2J1aWxkaWRfZW5kID0gLiA7Cj4+ICt9Cj4+ICtJTlNFUlQgQUZURVIgLmRhdGEK
Pj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMgCj4+IGIvdG9v
bHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYwo+PiBpbmRleCBiNGJlMzc0ZDNmLi5jNmYxMDdi
ZGQ5IDEwMDY0NAo+PiAtLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jCj4+ICsr
KyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMKPj4gQEAgLTE4NDQsNiArMTg0NCw3
IEBAIHN0YXRpYyB2b2lkIHVzYWdlKHZvaWQpCj4+IMKgIMKgIMKgIHN0YXRpYyBzdHJ1Y3Qgb3B0
aW9uIG9wdGlvbnNbXSA9IHsKPj4gK8KgwqDCoCB7ICJidWlsZGlkLWZpbGUiLCAxLCBOVUxMLCAn
QicgfSwKPj4gwqDCoMKgwqDCoCB7ICJuby1kb21haW4taW5pdCIsIDAsIE5VTEwsICdEJyB9LAo+
PiDCoMKgwqDCoMKgIHsgImVudHJ5LW5iIiwgMSwgTlVMTCwgJ0UnIH0sCj4+IMKgwqDCoMKgwqAg
eyAicGlkLWZpbGUiLCAxLCBOVUxMLCAnRicgfSwKPj4gQEAgLTE4NzUsMTIgKzE4NzYsMTYgQEAg
aW50IG1haW4oaW50IGFyZ2MsIGNoYXIgKmFyZ3ZbXSkKPj4gwqDCoMKgwqDCoCBib29sIG91dHB1
dHBpZCA9IGZhbHNlOwo+PiDCoMKgwqDCoMKgIGJvb2wgbm9fZG9tYWluX2luaXQgPSBmYWxzZTsK
Pj4gwqDCoMKgwqDCoCBjb25zdCBjaGFyICpwaWRmaWxlID0gTlVMTDsKPj4gK8KgwqDCoCBjb25z
dCBjaGFyICpidWlsZGlkX2ZpbGUgPSBOVUxMOwo+PiDCoMKgwqDCoMKgIGludCB0aW1lb3V0Owo+
PiDCoCDCoCDCoMKgwqDCoMKgIHdoaWxlICgob3B0ID0gZ2V0b3B0X2xvbmcoYXJnYywgYXJndiwg
IkRFOkY6SE5QUzp0OlQ6UlZXOiIsIAo+PiBvcHRpb25zLAo+Cj4gWW91IGFyZSBtaXNzaW5nICJC
OiIgaW4gdGhlIHNob3J0IG9wdGlvbnMuCkFjay4gRml4aW5nLgo+Cj4+IMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIE5VTEwpKSAhPSAtMSkgewo+PiDCoMKgwqDCoMKgwqDC
oMKgwqAgc3dpdGNoIChvcHQpIHsKPj4gK8KgwqDCoMKgwqDCoMKgIGNhc2UgJ0InOgo+PiArwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoCBidWlsZGlkX2ZpbGUgPSBvcHRhcmc7Cj4+ICvCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgIGJyZWFrOwo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgY2FzZSAnRCc6Cj4+IMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIG5vX2RvbWFpbl9pbml0ID0gdHJ1ZTsKPj4gwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgYnJlYWs7Cj4+IEBAIC0xOTQ4LDYgKzE5NTMsOSBAQCBpbnQg
bWFpbihpbnQgYXJnYywgY2hhciAqYXJndltdKQo+PiDCoMKgwqDCoMKgIGlmIChwaWRmaWxlKQo+
PiDCoMKgwqDCoMKgwqDCoMKgwqAgd3JpdGVfcGlkZmlsZShwaWRmaWxlKTsKPj4gwqAgK8KgwqDC
oCBpZiAoYnVpbGRpZF9maWxlKQo+PiArwqDCoMKgwqDCoMKgwqAgd3JpdGVfYnVpbGRpZF9maWxl
KGJ1aWxkaWRfZmlsZSk7Cj4+ICsKPj4gwqDCoMKgwqDCoCAvKiBUYWxsb2MgbGVhayByZXBvcnRz
IGdvIHRvIHN0ZGVyciwgd2hpY2ggaXMgY2xvc2VkIGlmIHdlIAo+PiBmb3JrLiAqLwo+PiDCoMKg
wqDCoMKgIGlmICghZG9mb3JrKQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgdGFsbG9jX2VuYWJsZV9s
ZWFrX3JlcG9ydF9mdWxsKCk7Cj4KPiBZb3Ugc2hvdWxkIHVwZGF0ZSB0aGUgdXNhZ2UgcHJpbnRv
dXQsIHRvby4KT2suCj4KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9j
b3JlLmggCj4+IGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuaAo+PiBpbmRleCAxZGY2
YWQ5NGFiLi43MTIyODA2MjZjIDEwMDY0NAo+PiAtLS0gYS90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5oCj4+ICsrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgKPj4gQEAg
LTE5Myw2ICsxOTMsOSBAQCB2b2lkIHhlbmJ1c19ub3RpZnlfcnVubmluZyh2b2lkKTsKPj4gwqAg
LyogV3JpdGUgb3V0IHRoZSBwaWRmaWxlICovCj4+IMKgIHZvaWQgd3JpdGVfcGlkZmlsZShjb25z
dCBjaGFyICpwaWRmaWxlKTsKPj4gwqAgKy8qIFdyaXRlIHRoZSBidWlsZGlkIGZpbGUgKi8KPj4g
K3ZvaWQgd3JpdGVfYnVpbGRpZF9maWxlKGNvbnN0IGNoYXIgKmJ1aWxkaWRmaWxlKTsKPj4gKwo+
PiDCoCAvKiBGb3JrIGJ1dCBkbyBub3QgY2xvc2UgdGVybWluYWwgRkRzICovCj4+IMKgIHZvaWQg
ZGFlbW9uaXplKHZvaWQpOwo+PiDCoCAvKiBDbG9zZSBzdGRpbi9zdGRvdXQvc3RkZXJyIHRvIGNv
bXBsZXRlIGRhZW1vbml6ZSAqLwo+PiBkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX21pbmlvcy5jIAo+PiBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9taW5pb3MuYwo+PiBp
bmRleCBjOTQ0OTNlNTJhLi5lZjExNTFhZWU0IDEwMDY0NAo+PiAtLS0gYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfbWluaW9zLmMKPj4gKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX21p
bmlvcy5jCj4+IEBAIC0yNCw2ICsyNCwxMCBAQCB2b2lkIHdyaXRlX3BpZGZpbGUoY29uc3QgY2hh
ciAqcGlkZmlsZSkKPj4gwqAgewo+PiDCoCB9Cj4+IMKgICt2b2lkIHdyaXRlX2J1aWxkaWRfZmls
ZShjb25zdCBjaGFyICpidWlsZGlkX2ZpbGUpCj4+ICt7Cj4+ICt9Cj4+ICsKPj4gwqAgdm9pZCBk
YWVtb25pemUodm9pZCkKPj4gwqAgewo+PiDCoCB9Cj4+IGRpZmYgLS1naXQgYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfcG9zaXguYyAKPj4gYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfcG9z
aXguYwo+PiBpbmRleCAxZjk2MDNmZWEyLi5lYzAxNzYxMWQ2IDEwMDY0NAo+PiAtLS0gYS90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfcG9zaXguYwo+PiArKysgYi90b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfcG9zaXguYwo+PiBAQCAtMjAsNiArMjAsNyBAQAo+PiDCoCAjaW5jbHVkZSA8c3lzL3N0
YXQuaD4KPj4gwqAgI2luY2x1ZGUgPHVuaXN0ZC5oPgo+PiDCoCAjaW5jbHVkZSA8ZmNudGwuaD4K
Pj4gKyNpbmNsdWRlIDxzdGRpbnQuaD4KPj4gwqAgI2luY2x1ZGUgPHN0ZGxpYi5oPgo+PiDCoCAj
aW5jbHVkZSA8c3lzL21tYW4uaD4KPj4gwqAgQEAgLTQ4LDYgKzQ5LDU3IEBAIHZvaWQgd3JpdGVf
cGlkZmlsZShjb25zdCBjaGFyICpwaWRmaWxlKQo+PiDCoMKgwqDCoMKgIGNsb3NlKGZkKTsKPj4g
wqAgfQo+PiDCoCArLyoKPj4gKyAqIFdlIGRvbid0IGhhdmUgYSB3b3JraW5nIGVsZi5oIGF2YWls
YWJsZSBoZXJlLCBzbyBsZXQncyBkZWZpbmUgb3VyIAo+PiB2ZXJ5IG93bgo+PiArICogZGF0YSBz
dHJ1Y3RzIGFuZCBhY2Nlc3NvciBtYWNyb3MgZm9yIEVMRiBub3Rlcy4KPj4gKyAqCj4+ICsgKiAK
Pj4gaHR0cHM6Ly9kb2NzLm9yYWNsZS5jb20vY2QvRTIzODI0XzAxL2h0bWwvODE5LTA2OTAvY2hh
cHRlcjYtMTgwNDguaHRtbDoKPj4gKyAqIEZvciA2NOKAk2JpdCBvYmplY3RzIGFuZCAzMuKAk2Jp
dCBvYmplY3RzLCBlYWNoIGVudHJ5IGlzIGFuIGFycmF5IG9mIAo+PiA0LWJ5dGUKPj4gKyAqIHdv
cmRzIGluIHRoZSBmb3JtYXQgb2YgdGhlIHRhcmdldCBwcm9jZXNzb3IuCj4+ICsgKi8KPj4gK3R5
cGVkZWYgc3RydWN0Cj4+ICt7Cj4+ICvCoMKgwqAgdWludDMyX3QgbmFtZXN6Owo+PiArwqDCoMKg
IHVpbnQzMl90IGRlc2NzejsKPj4gK8KgwqDCoCB1aW50MzJfdCB0eXBlOwo+PiArfSBlbGZfbm90
ZV9oZHI7Cj4+ICsKPj4gKy8qIEVMRiBOb3RlIGFjY2Vzc29ycywgY29waWVkIGZyb20gWGVuJ3Mg
ZWxmLmggKi8KPj4gKyNkZWZpbmUgRUxGTk9URV9BTElHTihfbl8pICgoKF9uXykrMykmfjMpCj4+
ICsjZGVmaW5lIEVMRk5PVEVfTkFNRShfbl8pICgoY2hhciopKF9uXykgKyBzaXplb2YoKihfbl8p
KSkKPj4gKyNkZWZpbmUgRUxGTk9URV9ERVNDKF9uXykgKEVMRk5PVEVfTkFNRShfbl8pICsgCj4+
IEVMRk5PVEVfQUxJR04oKF9uXyktPm5hbWVzeikpCj4+ICsvKiBHTlUgTEQ6IHR5cGUgPT0gbm90
ZSAoTlRfR05VX0JVSUxEX0lEIGFzIGluCj4+ICsgKiBodHRwczovL3NvdXJjZXdhcmUub3JnL21s
L2JpbnV0aWxzLzIwMDctMDcvbXNnMDAwMTIuaHRtbCkqLwo+PiArI2RlZmluZSBOVF9HTlVfQlVJ
TERfSUQgMwo+PiArCj4+ICsKPj4gK3ZvaWQgd3JpdGVfYnVpbGRpZF9maWxlKGNvbnN0IGNoYXIg
KmJ1aWxkaWRfZmlsZSkKPj4gK3sKPj4gK8KgwqDCoCB1bnNpZ25lZCBpbnQgaSA9IDA7Cj4+ICvC
oMKgwqAgRklMRSAqZmRlc2M7Cj4+ICvCoMKgwqAgZXh0ZXJuIGVsZl9ub3RlX2hkciBfX2J1aWxk
aWRfbm90ZV9zZWN0aW9uOwo+PiArwqDCoMKgIHVuc2lnbmVkIGludCBpZF9sZW5ndGggPSBfX2J1
aWxkaWRfbm90ZV9zZWN0aW9uLmRlc2NzejsKPj4gK8KgwqDCoCBjaGFyKiBkZXNjID0gRUxGTk9U
RV9ERVNDKCZfX2J1aWxkaWRfbm90ZV9zZWN0aW9uKTsKPj4gKwo+PiArwqDCoMKgIGlmIChfX2J1
aWxkaWRfbm90ZV9zZWN0aW9uLnR5cGUgIT0gTlRfR05VX0JVSUxEX0lEKQo+PiArwqDCoMKgwqDC
oMKgwqAgYmFyZigiRXhwZWN0ZWQgR05VX0JVSUxESUQgbm90ZSwgYnV0IGZvdW5kIHR5cGUgJyVk
JyIsCj4+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgX19idWlsZGlkX25vdGVfc2VjdGlvbi50
eXBlKTsKPj4gKwo+PiArwqDCoMKgIGZkZXNjID0gZm9wZW4oYnVpbGRpZF9maWxlLCAidysiKTsK
Pj4gK8KgwqDCoCBpZiAoIWZkZXNjKQo+PiArwqDCoMKgwqDCoMKgwqAgYmFyZl9wZXJyb3IoIkVy
cm9yIG9wZW5pbmcgYnVpbGRpZCBmaWxlICVzIiwgYnVpbGRpZF9maWxlKTsKPj4gKwo+PiArwqDC
oMKgIC8qIFdlIGV4aXQgc2lsZW50bHkgaWYgZGFlbW9uIGFscmVhZHkgcnVubmluZy4gKi8KPj4g
K8KgwqDCoCBpZiAobG9ja2YoZmlsZW5vKGZkZXNjKSwgRl9UTE9DSywgMCkgPT0gLTEpCj4+ICvC
oMKgwqDCoMKgwqDCoCBleGl0KDApOwo+PiArCj4+ICvCoMKgwqAgZm9yIChpID0gMDsgaSA8IGlk
X2xlbmd0aDsgKytpKQo+PiArwqDCoMKgwqDCoMKgwqAgZnByaW50ZihmZGVzYywgIiUwMngiLCAo
dW5zaWduZWQgY2hhcilkZXNjW2ldKTsKPj4gK8KgwqDCoCBmcHJpbnRmKGZkZXNjLCAiXG4iKTsK
Pj4gKwo+PiArwqDCoMKgIGZjbG9zZShmZGVzYyk7Cj4+ICt9Cj4+ICsKPj4gwqAgLyogU3RldmVu
cy4gKi8KPj4gwqAgdm9pZCBkYWVtb25pemUodm9pZCkKPj4gwqAgewo+Pgo+Cj4gSW4gZ2VuZXJh
bCBJIGRvbid0IGxpa2UgdGhlIGFwcHJvYWNoIHVzaW5nIGEgZmlsZSBmb3IgdGhpcyBwdXJwb3Nl
Lgo+IEluIGNhc2UgdGhlcmUgaXMgbm8gZ2VuZXJpYyBzb2x1dGlvbiBwb3NzaWJsZSwgSSdkIHJh
dGhlciBoYXZlIGEKPiBYU19DT05UUk9MIHN1YiBjb21tYW5kIHRvIGdldCB0aGUgY3VycmVudCB2
ZXJzaW9uIGZyb20gWGVuc3RvcmUuIFRoaXMKPiB3aWxsIHRoZW4gYXQgb25jZSBjb3ZlciB4ZW5z
dG9yZS1zdHViZG9tLCB0b28uCj4KSSB3aWxsIGhhdmUgYSBsb29rIGludG8gdGhhdC4KPgo+IEp1
ZXJnZW4KQmpvZXJuCgoKCkFtYXpvbiBEZXZlbG9wbWVudCBDZW50ZXIgR2VybWFueSBHbWJICkty
YXVzZW5zdHIuIDM4CjEwMTE3IEJlcmxpbgpHZXNjaGFlZnRzZnVlaHJ1bmc6IENocmlzdGlhbiBT
Y2hsYWVnZXIsIEpvbmF0aGFuIFdlaXNzCkVpbmdldHJhZ2VuIGFtIEFtdHNnZXJpY2h0IENoYXJs
b3R0ZW5idXJnIHVudGVyIEhSQiAxNDkxNzMgQgpTaXR6OiBCZXJsaW4KVXN0LUlEOiBERSAyODkg
MjM3IDg3OQoKCg==



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:00:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26787.55271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcQv-0001F9-0A; Fri, 13 Nov 2020 17:00:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26787.55271; Fri, 13 Nov 2020 17:00:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcQu-0001F2-Sg; Fri, 13 Nov 2020 17:00:20 +0000
Received: by outflank-mailman (input) for mailman id 26787;
 Fri, 13 Nov 2020 17:00:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdcQu-0001Ex-0w
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:00:20 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1054ae8b-716b-4a01-9f5c-8003bb67be30;
 Fri, 13 Nov 2020 17:00:19 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id w15so6279923lji.10
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 09:00:19 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id e27sm1605200lfc.155.2020.11.13.09.00.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 09:00:12 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kdcQu-0001Ex-0w
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:00:20 +0000
X-Inumbo-ID: 1054ae8b-716b-4a01-9f5c-8003bb67be30
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1054ae8b-716b-4a01-9f5c-8003bb67be30;
	Fri, 13 Nov 2020 17:00:19 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id w15so6279923lji.10
        for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 09:00:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=vJbK1TVhyTD3XFkaQuXFgl8xu+wADAolh3cSTqf+Rvk=;
        b=M9B0Yalic02bJoHd3J9ibEWKP6G4F/2mIeVKhAhmOJKo2i3gbsTgj0wxWkafUUSgX+
         fS1OqT+HFVQ7D3OTjjjjDHdDdsUJOPo2W25c+5XT86eyq6xzXOtyYc/ny3BBvf9qPh9w
         wy5Dufrercrz9k3vMgk1Mk2i5E2hupF9CbFtTpyks/Z7cS3ju6v8g7qZnZeH2elkE//B
         eaD89bFCXlqNXEGTveedxse0FzzfRFH6wLEHMkevZHEwtBN8rm64En9n2Iwl6NuJPZCk
         cCnXyAeZ2RSDDzOd8lAcuOh4Gu/iFqBRGvPn6XwmrARCSxfQoR0mfop1QOb6wWCTG8jK
         /jOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=vJbK1TVhyTD3XFkaQuXFgl8xu+wADAolh3cSTqf+Rvk=;
        b=qS+ILy9xAQwo4CHLjabzTYrWT8mnTv2vdJ2kIiouZ2YUqMrIEa8MtLhFRVGO/0MFjd
         udklcKfidvxvW1e5kmGT9ijh8J0Cg+iTQ5piEh8anFH6as5NKHuLXQzE1TMbKqtm4CF0
         8kqBJhlIiUcBi4Hb6GmVcdbfYYZBkBWV5FprygHvwuQdB/LWg/n5jb9CT39p6ZUKxvlY
         uld7ocRm7gAAZaF9tIH72zKMKEsm8n6HQs3Wt26zab3dfAXrixceEHt2nd5Al2REfsRC
         D9aRjEy5P0BWgc/WW7JaVAXzPoKH3Sdm/VqJNf11B7SMoKju/dpsWCz2WYTeTrUt0nzF
         Qusg==
X-Gm-Message-State: AOAM530x9wWGY9PSecXxmo+Ej8XZiIeE5RNywtfV1gg9A9TrkqBsaD83
	w4ST17iVquvf2WRkA97eN3FrpbKOE12NQw==
X-Google-Smtp-Source: ABdhPJwF6HwMTD7iCWV8/jUrAz7vQ7sdiN76H8+z01njAl4/BimhS7EQtP9VsqsDJZTMTlHOVDKzlg==
X-Received: by 2002:a2e:2201:: with SMTP id i1mr1476450lji.257.1605286813457;
        Fri, 13 Nov 2020 09:00:13 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id e27sm1605200lfc.155.2020.11.13.09.00.11
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Fri, 13 Nov 2020 09:00:12 -0800 (PST)
Subject: Re: [PATCH V2 16/23] xen/mm: Handle properly reference in
 set_foreign_p2m_entry() on Arm
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-17-git-send-email-olekstysh@gmail.com>
 <e4c04492-3e99-4578-8f00-e7b35aeb26c5@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7ad093a1-cc52-a0ec-df35-a56a54c50425@gmail.com>
Date: Fri, 13 Nov 2020 19:00:06 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e4c04492-3e99-4578-8f00-e7b35aeb26c5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 12.11.20 14:18, Jan Beulich wrote:

Hi Jan

> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1380,6 +1380,27 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
>>       return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>>   }
>>   
>> +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
>> +                          unsigned long gfn, mfn_t mfn)
>> +{
>> +    struct page_info *page = mfn_to_page(mfn);
>> +    int rc;
>> +
>> +    if ( !get_page(page, fd) )
>> +        return -EINVAL;
>> +
>> +    /*
>> +     * It is valid to always use p2m_map_foreign_rw here as if this gets
>> +     * called that d != fd. A case when d == fd would be rejected by
>> +     * rcu_lock_remote_domain_by_id() earlier.
>> +     */
> Are you describing things here on the assumption that no new
> callers may surface later on? To catch such, I'd recommend
> adding at least a respective ASSERT().

Agree, will add.


>
>> +    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
>> +    if ( rc )
>> +        put_page(page);
>> +
>> +    return 0;
> I can't imagine it's correct to not signal the error to the
> caller(s).

It is not correct, I just missed placing "return rc" there. Thank you for
catching that.


>
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -1099,7 +1099,8 @@ static int acquire_resource(
>>        *        reference counted, it is unsafe to allow mapping of
>>        *        resource pages unless the caller is the hardware domain.
>>        */
>> -    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
>> +    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
>> +         !arch_refcounts_p2m() )
>>           return -EACCES;
> I guess the hook may want naming differently, as both prior parts of
> the condition should be needed only on the x86 side, and there (for
> PV) there's no p2m involved in the refcounting. Maybe
> arch_acquire_resource_check()? And then the comment wants moving into
> the x86 hook as well. You may want to consider leaving a more generic
> one here...

OK, again, I am fine with the name. I will follow everything suggested
above.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:08:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:08:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26796.55282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcZ0-0001VW-QI; Fri, 13 Nov 2020 17:08:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26796.55282; Fri, 13 Nov 2020 17:08:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcZ0-0001VP-NM; Fri, 13 Nov 2020 17:08:42 +0000
Received: by outflank-mailman (input) for mailman id 26796;
 Fri, 13 Nov 2020 17:08:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uoW6=ET=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kdcYz-0001VK-2y
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:08:41 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d62c5ed-c900-49b3-a869-d62e6cc43343;
 Fri, 13 Nov 2020 17:08:39 +0000 (UTC)
X-Inumbo-ID: 9d62c5ed-c900-49b3-a869-d62e6cc43343
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605287319;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=nEhD71WdBPiSRNX2ZLtn5P4iTtAdnrsZyLi7zEsS+A4=;
  b=UNxy3TS1/UxrRuKKtgXKcwrzIShUM2F/wBD3co0r6qJLuSVxFXkahln7
   AsEtVln2cfx/W3RU+dtA958wtM7Ppx+tvWXMQDtmMnnXTuZu0FPu2ZZVi
   T/+HgppWPyhs1Z40pYmxYT8f2jrDfOnzXqu7UM6aaFKF9QcnSVEYjPc8t
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: XsaHqAbAQP7zSTmVpevWowNZDlhPkxRdQ6hjjLhwWEZU79VInI4rsVuVFU5PBLMCEvsB82ME+q
 nY9qbIjRNk40TLSAtuKvnmSmgKoQexwDsYw9gIuCC1X2NXjQuz4ZNp1slakFlhBALLzMJpBWJY
 xXFwj+zI8tCMwFbyWeadmd+fzgGMMokKSOHBhfcjdQPQgK54fsEKuo7T3xrw7skiAq4UaKcODi
 p2vI5g43vJbIOjDg/+kp/+aWTaMwY/yOz7P2gV5Ijr2BUi5t4Mx1tkpvfn8IuN4HIT7Io2DEzL
 pXA=
X-SBRS: None
X-MesageID: 31482236
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,476,1596513600"; 
   d="scan'208";a="31482236"
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Bjoern Doebel <doebel@amazon.de>, <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201113141823.58712-1-doebel@amazon.de>
 <de06e7ce-65cd-95fb-5862-0135e2110a99@citrix.com>
 <c216f07a-df70-ddb5-46fd-7b61e36fa6fc@amazon.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fef34a7d-d156-94a7-56ea-1b754b6be776@citrix.com>
Date: Fri, 13 Nov 2020 17:08:32 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c216f07a-df70-ddb5-46fd-7b61e36fa6fc@amazon.de>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 13/11/2020 16:55, Bjoern Doebel wrote:
>
> On 13.11.20 15:30, Andrew Cooper wrote:
>> On 13/11/2020 14:18, Bjoern Doebel wrote:
>>> Right now we do not have a mechanism to determine the version of the
>>> currently running xenstored at runtime. As xenstored runs throughout
>>> the
>>> lifetime of a Xen host, this may lead to problems when newer user space
>>> builds are staged. Then, the running xenstored will no longer match the
>>> version of the installed xenstored.
>>>
>>> To allow users to always identify the running version of xenstored, add
>>> a linker-generated unique build ID to every xenstored build. Add
>>> functionality to log this build ID into a file upon service startup.
>>>
>>> Signed-off-by: Bjoern Doebel <doebel@amazon.de>
>>> Reviewed-by: Martin Mazein <amazein@amazon.de>
>>> Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>
>> I understand the problem you're trying to solve, but why is this
>> anything more than just enabling build-id's by default across tools/ ?
>>
>> There are already standard ways of interacting with the build id of
>> running executables on the system.  I'd strongly discourage doing
>> anything custom in xenstored specifically.
> May I ask what tooling you would use to interact with a running
> process' buildid?

Amongst other things, yes.  Although as Juergen points out, we want
something which works with stub domains as well, and "normal userspace
tools" won't cut it there.

I still think a first patch in this series should be to turn build-id's
on by default if supported by the toolchain, generally.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:13:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26806.55299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcdg-0002Py-FY; Fri, 13 Nov 2020 17:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26806.55299; Fri, 13 Nov 2020 17:13:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcdg-0002Pr-CU; Fri, 13 Nov 2020 17:13:32 +0000
Received: by outflank-mailman (input) for mailman id 26806;
 Fri, 13 Nov 2020 17:13:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uoW6=ET=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kdcde-0002Pm-EM
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:13:30 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 990adde9-7fda-4edf-bc23-6b11b0ad0eb8;
 Fri, 13 Nov 2020 17:13:29 +0000 (UTC)
X-Inumbo-ID: 990adde9-7fda-4edf-bc23-6b11b0ad0eb8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605287609;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=zhNfuYvFoIkrUjwdSPYa6x5TSR2fKQYqHzLxNnXLPBA=;
  b=ZZgi/HvKksCjgD2FGw/01chrgIIfSNtptvp8NUReP9WkRPN6UHKe191p
   06IL0KL4YyfUQRbatihJZ63ujoHKyttxXP6HPPOwyijm7gB2YEHZUC6T4
   KZNxruSxJ2Lc9LoMtpuIVaveQBJPCFSbWihX1IIzCC5g617ub2h0sobNJ
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qxkmov4/hWhPuPJ5ymtWsEcOpYYlSdpTBfnio195YO9S2U2vuQNFS58U734F7tM5OyKrSaHE5d
 x5Tg16as+H2zjKfgPXGaFIyWy7BHsd6uirliPP4Iu1wni0RL0wHwGCFFh4ry+Kq3F6BKczjU2Q
 ZvRSgBa+oclMkTj05LTV5zfJyJPEQ1uHVRNRg62wlmLJUl1bLHfBfw5K5nPfC0IfeJK/gsE27F
 Mkggq5SkXkLaWVGMQ3zNb9rbHpI4I+833M2FFA6S85KkSA+CG8FL0V6bWNAaYNBmGiAr5ruzUf
 z/Q=
X-SBRS: None
X-MesageID: 31162691
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,476,1596513600"; 
   d="scan'208";a="31162691"
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Bjoern Doebel <doebel@amazon.de>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
	<jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Edwin Torok
	<edvin.torok@citrix.com>, Christian Lindig <christian.lindig@citrix.com>
References: <20201113141823.58712-1-doebel@amazon.de>
 <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
 <0e6b09fe-ffc4-195f-1b6c-67abc0cff92c@amazon.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c1352a2a-112a-966f-7410-b917cabe1d91@citrix.com>
Date: Fri, 13 Nov 2020 17:13:21 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0e6b09fe-ffc4-195f-1b6c-67abc0cff92c@amazon.de>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 13/11/2020 16:56, Bjoern Doebel wrote:
> On 13.11.20 16:36, Jürgen Groß wrote:
>> On 13.11.20 15:18, Bjoern Doebel wrote:
>>> Right now we do not have a mechanism to determine the version of the
>>> currently running xenstored at runtime. As xenstored runs throughout
>>> the
>>> lifetime of a Xen host, this may lead to problems when newer user space
>>> builds are staged. Then, the running xenstored will no longer match the
>>> version of the installed xenstored.
>>>
>>> To allow users to always identify the running version of xenstored, add
>>> a linker-generated unique build ID to every xenstored build. Add
>>> functionality to log this build ID into a file upon service startup.
>>>
>>> Signed-off-by: Bjoern Doebel <doebel@amazon.de>
>>> Reviewed-by: Martin Mazein <amazein@amazon.de>
>>> Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>
>>
>> No support for oxenstored or xenstore-stubdom?
>
> Your suggestion further down will apparently help for stubdom. I do
> not speak ocaml at all - how do we address this?

CC'ing Edwin and Christian, who have done the bulk of the oxenstored
work recently.

It sounds like it might not be possible right now, but it would become
possible with the planned switch of the OCaml build system over to dune
(the new, preferred upstream OCaml toolchain).

If it does end up being an XS_CONTROL sub-op, we can implement it at a
future point when we can usefully answer the question.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:16:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:16:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26811.55311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcg5-0002YA-UP; Fri, 13 Nov 2020 17:16:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26811.55311; Fri, 13 Nov 2020 17:16:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcg5-0002Y3-Qw; Fri, 13 Nov 2020 17:16:01 +0000
Received: by outflank-mailman (input) for mailman id 26811;
 Fri, 13 Nov 2020 17:16:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EgAG=ET=amazon.de=prvs=579e99c79=doebel@srs-us1.protection.inumbo.net>)
 id 1kdcg4-0002Xw-FO
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:16:00 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b84f65a2-de3d-42fa-b0c7-610ba2afca55;
 Fri, 13 Nov 2020 17:15:59 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-c7f73527.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 13 Nov 2020 16:51:13 +0000
Received: from EX13D03EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1e-c7f73527.us-east-1.amazon.com (Postfix) with ESMTPS
 id F12C7B2C71; Fri, 13 Nov 2020 16:51:11 +0000 (UTC)
Received: from [192.168.31.251] (10.43.162.50) by EX13D03EUC002.ant.amazon.com
 (10.43.164.60) with Microsoft SMTP Server (TLS) id 15.0.1497.2;
 Fri, 13 Nov 2020 16:51:08 +0000
X-Inumbo-ID: b84f65a2-de3d-42fa-b0c7-610ba2afca55
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1605287760; x=1636823760;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Z8RkLCygUZ9uffdO4CgEDopjHE/cbAn5BcNhzPPDyQA=;
  b=r9M8Bq3Tk2z583F2Wxvclrq6kHAv2c8GPPSYYRziAH2Ei8D7hMxaDv+A
   f1PCWF/cMa+mGYAxsR7q9MtrSvxkv3XCe0X3WAM/WXc15uaJRl2eHl5c9
   6tJgDGv5CzkYzB6JORkH1m1k0qyKsbYGzQUmdE00NoMSkRrK5NbMY1gxL
   g=;
X-IronPort-AV: E=Sophos;i="5.77,476,1596499200"; 
   d="scan'208";a="95102610"
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <jgrall@amazon.co.uk>, Eslam Elnikety <elnikety@amazon.de>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20201113141823.58712-1-doebel@amazon.de>
 <b61119da-b6e8-7746-9298-54bf60da88ea@suse.com>
From: Bjoern Doebel <doebel@amazon.de>
Message-ID: <90212fe0-4cdb-61b3-2da0-c31ab5aeb89e@amazon.de>
Date: Fri, 13 Nov 2020 17:51:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <b61119da-b6e8-7746-9298-54bf60da88ea@suse.com>
Content-Language: en-GB
X-Originating-IP: [10.43.162.50]
X-ClientProxiedBy: EX13D24UWB001.ant.amazon.com (10.43.161.93) To
 EX13D03EUC002.ant.amazon.com (10.43.164.60)
Precedence: Bulk
Content-Type: text/plain; charset="utf-8"; format="flowed"
Content-Transfer-Encoding: base64

On 13.11.20 15:27, Jan Beulich wrote:
> On 13.11.2020 15:18, Bjoern Doebel wrote:
>> --- a/tools/xenstore/Makefile
>> +++ b/tools/xenstore/Makefile
>> @@ -66,6 +66,10 @@ $(XENSTORED_OBJS): CFLAGS += $(SYSTEMD_CFLAGS)
>>   xenstored: LDFLAGS += $(SYSTEMD_LIBS)
>>   endif
>>
>> +# xenstored: enforce creation of a buildID section and use a linker
>> +# script to add additional symbols around that section
>> +xenstored: LDFLAGS +=  -Wl,--build-id=sha1 -Wl,-T,buildid_symbols.ld
> Since in the hypervisor build we're careful to not use --build-id
> when the linker doesn't support it, I suppose the same care needs
> applying here. See the setting of build_id_linker in ./Config.mk.

Ok, I will make this conditional on the setting of $(build_id_linker).

>
> Jan

Bjoern




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879
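
For reference, making the flags conditional as discussed might look roughly
like this in tools/xenstore/Makefile. This is only a sketch: $(build_id_linker)
is the variable set in ./Config.mk, and since Config.mk's probe targets
invoking ld directly, the exact -Wl, spelling may need adjusting when linking
via $(CC).

```make
# Only ask for a build ID when the toolchain supports it, mirroring the
# hypervisor's build_id_linker probe in ./Config.mk.
ifneq ($(build_id_linker),)
xenstored: LDFLAGS += -Wl,--build-id=sha1 -Wl,-T,buildid_symbols.ld
endif
```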



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:23:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26819.55323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcnD-0003cM-PL; Fri, 13 Nov 2020 17:23:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26819.55323; Fri, 13 Nov 2020 17:23:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdcnD-0003cF-LD; Fri, 13 Nov 2020 17:23:23 +0000
Received: by outflank-mailman (input) for mailman id 26819;
 Fri, 13 Nov 2020 17:23:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D1Nx=ET=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kdcnC-0003cA-JE
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:23:22 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b9dbdb4-f78e-47e1-90b1-4ab070329c59;
 Fri, 13 Nov 2020 17:23:21 +0000 (UTC)
X-Inumbo-ID: 6b9dbdb4-f78e-47e1-90b1-4ab070329c59
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605288203;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=xcZnNLvEltjBS2mrvn2AD3E70LHztdkJvzTOkdF5n84=;
  b=DLuk6nqSQmC249Thuc0SRR/f7kN+Tmf2eYJcTLVChO4S4nFtRJMDowvv
   /GRqQ3DqCITyH+P2FYGegiFiGsXyE0+T2hL87JqAYHPE1wR3wbZOOhVlK
   4wQ1l2q5Fu5rS/U6tyVUp81uhsLrgdQU0r3VvLUlPTyd+GvUMPDJd8MHz
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 67AIAydRUNt67hum4sZePS2UW1MVWiRtdlsl9GMNGOuAZqg842Rvwu7Wz/dhWNfzBA4MDhwoFP
 KUKTWqZcLhO58SwKyc8VYDfve2UkbVevvZ87lm4g5+R2YtXgE6ZPxErBc5lTGhZ2kQcbTZRRqs
 rWZpn6xZQFH8ttbpgmTjPnlxe0uUheh1sSwGfqrNW9AtaBoSaxgTVYeKkNyidHClOvuqtPko5Y
 5fI1dAff00vTMc1P91QNPAuoZqnK1y16InNPC16JQayx4PWmug40IJjvXdEHxLNtOQelkTMjFO
 y6U=
X-SBRS: None
X-MesageID: 31114015
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,476,1596513600"; 
   d="scan'208";a="31114015"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aNTf+TMB/vkTPuE7O95Sed7mBdIvsYqZPYnwBoGnBrSmnFrpHC5Hs8xcgnUQY7eStWL8oHWOswwLOtPxSGb8R4j5isoSk5FHReUm8qrLxLHXcKYcphk2ySLcUz2Lcy7swhHPy5mulR1MTs674UiBB5LOZ3xFC3RKgbdXb5QS22RrVyyxJ1sqBOLOAwxhNsxuhAkVYiU9yP+EwBbUaC9iPOkv6YzQDVQQ5CjIScRoXHZ2f8BJSPrgFLCq+3l7xGCDlDEkplXMnUNRkb3jDDlJtItWRwp/HQWNjbniIg71dDe0WYUm82CA9bmfXxshyCt+mNmpfzhRmxZlDJLOO7H3bA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xcZnNLvEltjBS2mrvn2AD3E70LHztdkJvzTOkdF5n84=;
 b=JweJlzV6EeA+q8vJjGrjmjFINUNp8JJk4qi4jLmOWvbTnurczUZDQljIHN1aTY2vd+1hhrwKJ1DDNJ2o9aX1eLRhdbveLhb6EDLGSs7jqBSj7RI2RVF5pbWzSFAeYuLMh1CdickCSripJ/iEVRGtfRwX0fRlKC/uGGUrrsB41GHZa3g9Qm7WNDEJ5bLIajfOf2ywbYrhHFnm8NZWTbbenyzyze0ZIasuF4NN6/9+f8pj8FXBvtWaEFG44ZUrUcNcISMZ5nOFUHh1oXmMo3ZocY/pv5e7SnkZMdrF+osG+wRWARvgyQlIE4TF3NgS4vZ8Oi6jY5CKvBXCpaAdjQQK8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xcZnNLvEltjBS2mrvn2AD3E70LHztdkJvzTOkdF5n84=;
 b=RVSKOqsNQl1MLMlX/Av6C1xFEiOhFAKL2Z+hL/ZuLUGpT988jK02rmoxCD0sw6rd9OEQRXJvsakVvQNDi2n6izuY8Iu3P/ZodYRGxoV1RqcLMb/waoHxnaMKSf5x710VSvkaUmM+J0omzNVThMNOzlYAjcRubRhXsYw4/0DPJH8=
From: Edwin Torok <edvin.torok@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"jgross@suse.com" <jgross@suse.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, "doebel@amazon.de" <doebel@amazon.de>
CC: "wl@xen.org" <wl@xen.org>, Christian Lindig <christian.lindig@citrix.com>,
	"jgrall@amazon.co.uk" <jgrall@amazon.co.uk>, "elnikety@amazon.de"
	<elnikety@amazon.de>, "iwj@xenproject.org" <iwj@xenproject.org>
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
Thread-Topic: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
Thread-Index: AQHWueBOvwB25jmdBUW60qKjlgfroqnGT5aA
Date: Fri, 13 Nov 2020 17:23:16 +0000
Message-ID: <39f0b457514c3b6bcc7419d9eaf5770a5c073333.camel@citrix.com>
References: <20201113141823.58712-1-doebel@amazon.de>
	 <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
	 <0e6b09fe-ffc4-195f-1b6c-67abc0cff92c@amazon.de>
	 <c1352a2a-112a-966f-7410-b917cabe1d91@citrix.com>
In-Reply-To: <c1352a2a-112a-966f-7410-b917cabe1d91@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 13453398-95a5-4f56-5da3-08d887f8cec6
x-ms-traffictypediagnostic: BYAPR03MB3591:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB35917830E122A798F7115AA09BE60@BYAPR03MB3591.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6430;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 7V4uYFX6beWzlKDzFxcsgV7iuON0i/n8hikqB9QbyrkAlxuBVVbmSEIqmXbO8s0eT2Be2RyZW8PBXtMBwNTxs3aiECr0DYvaYhiv55NVoNQGxmKU8p/HBjr3Kzm1CuMQCYTCX9myx5xjFCijoOzUZ0lErZPMWLOZcQjm5TrecCI2+oNC23ztiw3fXpHpt19gqBtG2tsiN7UTXTkqG4n9HAzliWyxuhFsMqKUL6HvC537HIx8zJyXI0pQWw/KMXTeyR/Ujm3CRKa0MRDLcHxHP4cZVIOWJxfHvktbfixOwsTUGerc3Vn24awhCKsJxhVli8Uxcq8HaFao3v8ipmwF4rJX3Bc9CiJb5GK8annS70U=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB5888.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(39860400002)(136003)(366004)(316002)(54906003)(86362001)(110136005)(26005)(36756003)(83380400001)(6512007)(4001150100001)(53546011)(2906002)(186003)(6506007)(71200400001)(2616005)(966005)(4326008)(66446008)(66476007)(66556008)(64756008)(6486002)(5660300002)(8676002)(66946007)(76116006)(478600001)(91956017)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: lxc/qbiqWNPZfvh2PC78qOLHwuDZHjrtGDq38bU0h9QkOi23HV88dxcnCg+UCbaXfV0hlEI5QKMqdmA22UuTnjCS+XzkLjpoUtmkIX6wlSrULooA2Z8bx+UnJrLlFbd+TZbw9Ug1tHGRpV9srD9wyjZCDcJK0KMxCy/7WX+fhKC3E2rqHF+C1M4xW+u1su9krlhwyfmzRelUJDv0o/5B4efof0dMUzQnvUW2lhwMLp7izvlX8cY8A2+zk4nq1BY2SBWQTnd06SZE2xlObTDduit921auxqsnoMXAiRNw4L3ZADIS9G+9Vdikx1DfW4pNJeiA/zUBhad6U4wgsD0DIKFhw3JpA4nUTHg8puvnOKye5+q5Jpq9EKBn6pORPNMyOmf+iY+S5pdY3MI3AuYSNzboHWq7jQOuf8gExhz+McshaTnn4/+0lokinIszzsn/V7C8lERbLvEVuWnPqFjyUCavUnBV0c4NDKtEOfavbKuOV3XL7u9m0ZgNq0iVXPaem4Utb0NP4gBeJ6JPK45oUoNtVPW5t5EMvXBqjCL6zhMPeHz7m4ibxWSBb22FmpA4QCM23x1ZsCRwu+EcUCkC2qKe5Rg4fwnE1marUJgxBGPtbV0U0BSj/g9IdpxCyfbCPzaUbMIvA6ewA5wnMprYxtF0b0BspFy47nwTIaHW14vOy9Mcp4fVC+B/KpfCV8GC5wh27kv2+6l9K8fJc4tbrmQGhIG4KmT92eu3JKH8Q3thXfzT7M4H/WvfsxpFbtbMAJMRsKKOhdGCJNdQZx3DRASIvOONMXN0zbucRPdpT8cdAPuke9957FAD3qCyl1PMnn6L09yGpdFw+vyYvY5Seh9R688QMN5hqdzHaVVAQzVlWHAo4kxE8QUvxfuIj452djEQRKsiC3dULRbadfmXgQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <BFB295523EA4EA4F9CD7512AC96A52C3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 13453398-95a5-4f56-5da3-08d887f8cec6
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Nov 2020 17:23:16.2645
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Q1zZa8fuCHC70S7A31M5Ls0OfGKQADbHw+NBNLXRiaMif8/ObgDxrlNZ8OP6aQkpKLHIJ5Lmxz8KKXZe2t/ngQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3591
X-OriginatorOrg: citrix.com

On Fri, 2020-11-13 at 17:13 +0000, Andrew Cooper wrote:
> On 13/11/2020 16:56, Bjoern Doebel wrote:
> > On 13.11.20 16:36, Jürgen Groß wrote:
> > > On 13.11.20 15:18, Bjoern Doebel wrote:
> > > > Right now we do not have a mechanism to determine the version
> > > > of the
> > > > currently running xenstored at runtime. As xenstored runs
> > > > throughout
> > > > the
> > > > lifetime of a Xen host, this may lead to problems when newer
> > > > user space
> > > > builds are staged. Then, the running xenstored will no longer
> > > > match the
> > > > version of the installed xenstored.
> > > >
> > > > To allow users to always identify the running version of
> > > > xenstored, add
> > > > a linker-generated unique build ID to every xenstored build.
> > > > Add
> > > > functionality to log this build ID into a file upon service
> > > > startup.
> > > >
> > > > Signed-off-by: Bjoern Doebel <doebel@amazon.de>
> > > > Reviewed-by: Martin Mazein <amazein@amazon.de>
> > > > Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>
> > >
> > > No support for oxenstored or xenstore-stubdom?
> >
> > Your suggestion further down will apparently help for stubdom. I do
> > not speak ocaml at all - how do we address this?
>
> CC'ing Edwin and Christian who have done the bulk of the oxenstored
> recently.
>
> It sounds like it might not be possible right now, but would be
> possible
> with a future plan to switch the Ocaml build system over to dune (the
> new/preferred Ocaml upstream toolchain).

See here what is possible with Dune:
https://dune.readthedocs.io/en/stable/dune-libs.html#build-info

Would the output of 'git describe --always --dirty' (perhaps combined
with a build date) serve as a useful build ID?

>
> If it does end up being an XS_CONTROL sub-op, we can implement it at
> a
> future point when we can usefully answer the question.

Wouldn't using readelf on /proc/<pid>/exe give you the running buildid?

readelf -a /usr/sbin/oxenstored /proc/$(pidof oxenstored)/exe | grep
'Build ID'
    Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
    Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae

readelf -a /usr/sbin/oxenstored /proc/$(pidof oxenstored)/exe | grep
'Build ID'
    Build ID: b44ff99b216db7526e3ee7841068d584cc9c2b95
    Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae


When you're inside a stubdom it is probably not so easy though.

Best regards,
--Edwin

>
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:39:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:39:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26829.55334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdd2W-0004q4-9a; Fri, 13 Nov 2020 17:39:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26829.55334; Fri, 13 Nov 2020 17:39:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdd2W-0004px-6R; Fri, 13 Nov 2020 17:39:12 +0000
Received: by outflank-mailman (input) for mailman id 26829;
 Fri, 13 Nov 2020 17:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kdd2U-0004ps-Lk
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:39:10 +0000
Received: from mail-lf1-x135.google.com (unknown [2a00:1450:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 758d6ebc-941d-4951-a1a9-9c864e7911f4;
 Fri, 13 Nov 2020 17:39:09 +0000 (UTC)
Received: by mail-lf1-x135.google.com with SMTP id f11so15086250lfs.3
 for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 09:39:09 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id t6sm1314133lfe.81.2020.11.13.09.39.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 13 Nov 2020 09:39:03 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HVgh=ET=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kdd2U-0004ps-Lk
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:39:10 +0000
X-Inumbo-ID: 758d6ebc-941d-4951-a1a9-9c864e7911f4
Received: from mail-lf1-x135.google.com (unknown [2a00:1450:4864:20::135])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 758d6ebc-941d-4951-a1a9-9c864e7911f4;
	Fri, 13 Nov 2020 17:39:09 +0000 (UTC)
Received: by mail-lf1-x135.google.com with SMTP id f11so15086250lfs.3
        for <xen-devel@lists.xenproject.org>; Fri, 13 Nov 2020 09:39:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=C9PtRL7msCPm1ZBfIObuxvZHC9YDPfDdR1cQIEJ3ZDs=;
        b=Pb1lg8TFL/w6UkhSagcgZCoEQ9EtdS1b2AmysfWsSuK5EIylk1JoD4fsLCqTMuUfxc
         8LLkIqiZGBbLgxP6e6EFNDJsbIZZ8sinVzKhS1rlNhit85pRTtknKTY97/sMuX4SzvT/
         5ADAGRlIU0uXkfT0ecVoUNjdd9MSvovLaqs7ifjub+DS4KUoNO+bDwOjN1+u4wQsYv6F
         aA/I9jffrucCLbI/KtMmPVy3e6TJZlyEzhtMXXQtTX6aSoZyDmxzDnRDZpLbi/uprTui
         U05BEYBBPs+m/1J9KwbAQaB2KNypsIq56t41GPeIeVK87k8HBlrWyJ0tWf8ziTd7U8Vt
         aI3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=C9PtRL7msCPm1ZBfIObuxvZHC9YDPfDdR1cQIEJ3ZDs=;
        b=pjptPM9HvqCJ4EjW7VSl9mJPfrV7KatyoV+brBWd4hQEQkUc/EXNKhWhuhM1xBLQge
         NMM6LLdKYedWuXMDssiEOE01++9k95pMzkqLSHSiQqUuHHB8u586UgRCMSTLWN/fyJH6
         77gXjXQIZbopOcQjmMrvBunWpdl6qqQtfwVaNO0GS4CrgyphUPpCSrl8YA4+Dc852zp/
         a3O2kG28TTtKaCB0SnTsc1xWrIOmdZggbY0AAuvu10klTWZyCSoOzutWaBXwcybVXjMu
         y35ZFNT0cQ2p4HFRhP7hbSKfqNBq7DAwX5kNr4/7cT+QDQ/btJI8Dxa8XIcgxIFK5L0o
         rIzw==
X-Gm-Message-State: AOAM5303veqk8f5yaZGZq0otRjhRkCMjlPZevBtItBFYA1ppeDBxKYK/
	Vjbh97MRpttWLcCToJ+OQc8=
X-Google-Smtp-Source: ABdhPJxqAhKIOTD4SX+UmSwHNM152ljEbWHagMBvrxhG4je+V7+JPIO8Hk0y+N0mXMRI6nYqUYrkrQ==
X-Received: by 2002:ac2:44ac:: with SMTP id c12mr1433429lfm.602.1605289143572;
        Fri, 13 Nov 2020 09:39:03 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id t6sm1314133lfe.81.2020.11.13.09.39.02
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Fri, 13 Nov 2020 09:39:03 -0800 (PST)
Subject: Re: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk
 configuration
To: Wei Chen <Wei.Chen@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
 <AM0PR08MB374735F747FFF1A3016F37329EEA0@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <99636dd8-4c90-bb84-b96f-6c7a9ad31b63@gmail.com>
 <AM0PR08MB3747DB347ED64AF7DF99C5A19EE80@AM0PR08MB3747.eurprd08.prod.outlook.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <22ec8e33-6c53-69fa-3cc3-00ca00e1e20e@gmail.com>
Date: Fri, 13 Nov 2020 19:38:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <AM0PR08MB3747DB347ED64AF7DF99C5A19EE80@AM0PR08MB3747.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11.11.20 06:28, Wei Chen wrote:
> Hi Oleksandr,

Hi Wei,


>>>> An example of domain configuration (two disks are assigned to the guest,
>>>> the latter is in readonly mode):
>>>>
>>>> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
>>>>
>>> Can we keep using the same 'disk' parameter for virtio-disk, but add an
>>> option like "model=virtio-disk"?
>>> For example:
>>> disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ]
>>> Just like what Xen has done for x86 virtio-net.
>> I think this needs additional investigation. In general I agree with
>> you that it would be nice to reuse the existing 'disk' parameter somehow
>> rather than introducing a new one
>> for the same purpose (to handle virtual block device(s)).
>>
>>
>> One note: although both are used for the same purpose, they differ
>> in at least one important option.
>>
>> For example:
>> 1. disk = [ 'backend=DomD, phy:/dev/mmcblk0p3, xvda1' ]
>> 2. vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3' ]
>> As you can see, the existing "disk" parameter contains xvda1, which means
>> that a new device /dev/xvda1 will appear on the guest side, but "vdisk"
>> doesn't contain anything similar. So with the Xen PV driver (xen-blkfront)
> Yes, I understand your concerns. But I think specifying a device name
> for a virtio disk is not a mandatory requirement. Even if we're using physical
> disks on a bare-metal machine, we can't guarantee the slot#1 disk is always 'sda'.
> So most modern OSes prefer to use blkid to mount filesystems.
>
>> we are able to configure a device name, but with the VirtIO solution
>> (virtio-blk) we aren't (at least I don't know exactly how).
>>
> Just my understanding, not necessarily accurate:
> virtio-blk cannot get VDEV information from its bus the way Xen-bus provides
> it. So the disk ID is allocated dynamically during bus probing: the first
> probed disk gets ID 'a', and the ID keeps increasing from there. If we want
> to specify the disk ID for a virtio disk, one possible way is to construct a
> reasonable position on the bus (the fdt node position of a virtio-mmio
> device, or the PCI function ID of a virtio-pci block device). But this is not
> always successful; we can't skip 'vda' to specify a virtio disk as 'vdb'.
Thank you for the explanation. I got your point regarding using the same 
disk parameter, and I think it makes sense. Right now I am focusing on making 
the IOREQ transport available on Arm, and after that (I hope)
I will be able to return to the virtio work.
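The probe-order naming behaviour Wei describes can be sketched as a toy model. This is plain Python for illustration only, not Xen, libxl, or guest-kernel code; the function name is made up:

```python
def virtio_disk_names(n):
    """Yield guest device names in probe order: vda, vdb, ..., vdz, vdaa, ...

    Toy model of the behaviour described above: names are handed out
    strictly in the order disks are probed, so a configuration cannot
    reserve 'vdb' while leaving 'vda' unused.
    """
    for i in range(n):
        suffix, k = "", i + 1
        while k:
            # Bijective base-26 encoding, as used for disk name suffixes.
            k, r = divmod(k - 1, 26)
            suffix = chr(ord("a") + r) + suffix
        yield "vd" + suffix

print(list(virtio_disk_names(3)))  # -> ['vda', 'vdb', 'vdc']
```

This is why a per-disk name in the config (like xvda1 for xen-blkfront) has no direct virtio-blk equivalent: the guest assigns names itself, in probe order.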

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 17:40:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 17:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26834.55346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdd3J-0004xx-JA; Fri, 13 Nov 2020 17:40:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26834.55346; Fri, 13 Nov 2020 17:40:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdd3J-0004xq-GC; Fri, 13 Nov 2020 17:40:01 +0000
Received: by outflank-mailman (input) for mailman id 26834;
 Fri, 13 Nov 2020 17:40:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kdd3I-0004xl-S9
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:40:00 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kdd3I-0003on-MU
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:40:00 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kdd3G-0001i3-Q2; Fri, 13 Nov 2020 17:39:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kdd3I-0004xl-S9
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:40:00 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kdd3I-0003on-MU
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 17:40:00 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kdd3G-0001i3-Q2; Fri, 13 Nov 2020 17:39:58 +0000
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH] cr-daily-branch: Sort out retest build reuse
Date: Fri, 13 Nov 2020 17:39:52 +0000
Message-Id: <20201113173952.29800-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Retest flights ought to reuse precisely the builds from the flight
which prompts the retests.

mg-adjust-flight-makexrefs is the wrong tool for this job.  It can
often leave the retry flights with no build jobs and no references to
the main flights' build jobs, so the results are just broken jobs.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index e54ca227..6ec3aa62 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -529,9 +529,8 @@ END
 			$retry_jobs
 	)
 
-	./mg-adjust-flight-makexrefs -v $rflight \
-		--branch=$branch --revision-osstest=$narness_rev \
-		'^build-*' --debug --blessings=real
+	./cs-adjust-flight -D $rflight \
+			runvar-build-set . . "^$rflight\\." $flight.
 
 	if [ "${OSSTEST_SIMULATE_FAIL_RETRY+y}" = y ]; then
 		export OSSTEST_SIMULATE_FAIL="${OSSTEST_SIMULATE_FAIL_RETRY}"
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 13 19:41:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 19:41:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26859.55359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdewB-0007j1-64; Fri, 13 Nov 2020 19:40:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26859.55359; Fri, 13 Nov 2020 19:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdewB-0007iu-2t; Fri, 13 Nov 2020 19:40:47 +0000
Received: by outflank-mailman (input) for mailman id 26859;
 Fri, 13 Nov 2020 19:40:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdew9-0007ip-Hu
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 19:40:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d59b52b-fe4a-4ea9-960e-9e115a9e915e;
 Fri, 13 Nov 2020 19:40:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdew6-0001He-SD; Fri, 13 Nov 2020 19:40:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdew6-00074x-FS; Fri, 13 Nov 2020 19:40:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdew6-0006sY-Ew; Fri, 13 Nov 2020 19:40:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdew9-0007ip-Hu
	for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 19:40:45 +0000
X-Inumbo-ID: 2d59b52b-fe4a-4ea9-960e-9e115a9e915e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2d59b52b-fe4a-4ea9-960e-9e115a9e915e;
	Fri, 13 Nov 2020 19:40:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qmOc/A8lHsJPK7F5Tjzl+RMwgMaKFKkNyO0bCENyMQ8=; b=3iN+lncPgNjWNOWMABnH/4aIV0
	Qo/MtoQXLUBn6Eptk0ora4vOXGVCBj/Aw1WnliUeFEYcA2h9OnZdDYCgntd9Xhfr9x7LGpzLpBE1c
	KOoky/Jl5HsTbf0WtYtZ0BMKiMjVFQOaMhc7zE8NaLRJCuvLABjk0dMLC9mjZ9qINNJs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdew6-0001He-SD; Fri, 13 Nov 2020 19:40:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdew6-00074x-FS; Fri, 13 Nov 2020 19:40:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdew6-0006sY-Ew; Fri, 13 Nov 2020 19:40:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156725-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156725: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=af5043c89a8ef6b6949a245fff355a552eaed240
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 19:40:42 +0000

flight 156725 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156725/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                af5043c89a8ef6b6949a245fff355a552eaed240
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  104 days
Failing since        152366  2020-08-01 20:49:34 Z  103 days  170 attempts
Testing same since   156725  2020-11-12 21:34:14 Z    0 days    1 attempts

------------------------------------------------------------
3488 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 666207 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 23:14:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 23:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26920.55476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdiGb-0000wo-6J; Fri, 13 Nov 2020 23:14:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26920.55476; Fri, 13 Nov 2020 23:14:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdiGb-0000wh-3F; Fri, 13 Nov 2020 23:14:05 +0000
Received: by outflank-mailman (input) for mailman id 26920;
 Fri, 13 Nov 2020 23:14:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RH3y=ET=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdiGa-0000wc-Gm
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 23:14:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 938437fa-e3a8-457e-93ce-1840b52dc19c;
 Fri, 13 Nov 2020 23:14:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdiGV-0005kL-TA; Fri, 13 Nov 2020 23:13:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdiGV-0002uF-KH; Fri, 13 Nov 2020 23:13:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdiGV-0000Xw-Jn; Fri, 13 Nov 2020 23:13:59 +0000
X-Inumbo-ID: 938437fa-e3a8-457e-93ce-1840b52dc19c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Xvhj2D9s3VZrRVB7zGKiUjsC0puBoVvoyOPxAD/Tp1k=; b=DaFBCzdpcyl+lIXhDRodxtg/qD
	BfK7oZ1YCdbwDs1SfD08WqboPinqJ1Kf4QJqjqsbZDgmfHWve5/IaNEMxyQ/eGUC05XsH908/oG6v
	fssVgpb0vMZLg86Q3h88hBJfzWzM5N0+Aoh3PWPA3A7/Ge2J+NXto4oMfVRFshNUCGwU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156733-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156733: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=cb5d19e8294486551c422759260883ed290226d9
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 13 Nov 2020 23:13:59 +0000

flight 156733 qemu-mainline real [real]
flight 156763 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156733/
http://logs.test-lab.xenproject.org/osstest/logs/156763/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                cb5d19e8294486551c422759260883ed290226d9
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   85 days
Failing since        152659  2020-08-21 14:07:39 Z   84 days  180 attempts
Testing same since   156733  2020-11-13 05:00:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 64150 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 13 23:55:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 13 Nov 2020 23:55:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26943.55528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdiv2-0004Zj-2q; Fri, 13 Nov 2020 23:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26943.55528; Fri, 13 Nov 2020 23:55:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdiv1-0004Zc-W4; Fri, 13 Nov 2020 23:55:51 +0000
Received: by outflank-mailman (input) for mailman id 26943;
 Fri, 13 Nov 2020 23:55:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0O7E=ET=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1kdiv0-0004ZX-Fk
 for xen-devel@lists.xenproject.org; Fri, 13 Nov 2020 23:55:50 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e6c5d94-fb0e-4696-b0f1-8140c394a959;
 Fri, 13 Nov 2020 23:55:48 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ADNpCIi132206;
 Fri, 13 Nov 2020 23:55:10 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2130.oracle.com with ESMTP id 34nh3bcx5x-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 13 Nov 2020 23:55:10 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ADNpFw3096479;
 Fri, 13 Nov 2020 23:53:10 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 34rt58rarc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 13 Nov 2020 23:53:09 +0000
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0ADNqphm028281;
 Fri, 13 Nov 2020 23:52:51 GMT
Received: from tomti.i.net-space.pl (/10.175.204.43)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 13 Nov 2020 15:52:51 -0800
X-Inumbo-ID: 1e6c5d94-fb0e-4696-b0f1-8140c394a959
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : mime-version : content-type; s=corp-2020-01-29;
 bh=flCTDbb+FvykrJGMFxj/7GupN5n5PIJFx8mRlgaUwxw=;
 b=UbbR67q6JD+oUzxTtsXjZSqU4hofW+yhSERzgRYrDmt1vP26stuVzfJ3elePn8I9EGk7
 uTYBkCy+em0VXFy5ySI73d/vtG+9g8amfwakDfrM/Nt05hEjrkxxmMCNR4k2LwfQUaCn
 BHIhnBC6MdrAFVo9QKm5uzvGxhaUGRFXoPGsxA3oAME9skxGjZWxjZpJZobLwP8R+a3w
 Vq6j84Vmht/kwlYYtohC28hwV5Ug1oQkZAPOBfZa0pMZuBQBZkyPPfAxkgXnQ0bYSRbA
 je895e77dV9kENe+RP+w+DHyX1+t/XJOzpnfIp0BlKzkE+3jvQ7udpWBbHIG9yFijWPH ug== 
Date: Sat, 14 Nov 2020 00:52:42 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: coreboot@coreboot.org, grub-devel@gnu.org, linux-kernel@vger.kernel.org,
        systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com,
        u-boot@lists.denx.de, x86@kernel.org, xen-devel@lists.xenproject.org
Cc: alecb@umass.edu, alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
        andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org,
        btrotter@gmail.com, dpsmith@apertussolutions.com,
        eric.devolder@oracle.com, eric.snowberg@oracle.com, hpa@zytor.com,
        hun@n-dimensional.de, javierm@redhat.com, joao.m.martins@oracle.com,
        kanth.ghatraju@oracle.com, konrad.wilk@oracle.com,
        krystian.hebel@3mdeb.com, leif@nuviainc.com, lukasz.hawrylko@intel.com,
        luto@amacapital.net, michal.zygowski@3mdeb.com, mjg59@google.com,
        mtottenh@akamai.com, phcoder@gmail.com, piotr.krol@3mdeb.com,
        pjones@redhat.com, pmenzel@molgen.mpg.de, roger.pau@citrix.com,
        ross.philipson@oracle.com, tyhicks@linux.microsoft.com
Subject: [SPECIFICATION RFC] The firmware and bootloader log specification
Message-ID: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: NeoMutt/20170113 (1.7.2)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9804 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 bulkscore=0 mlxscore=0
 mlxlogscore=999 suspectscore=0 adultscore=0 phishscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011130153
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9804 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 priorityscore=1501
 clxscore=1011 malwarescore=0 mlxscore=0 spamscore=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 phishscore=0 adultscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011130153

Hey,

This is the next attempt at creating a firmware and bootloader log
specification. Due to high interest across the industry, it is an
extension of the initial bootloader-only log specification. It takes
into account most of the comments which I have received so far.

The goal is to pass all logs produced by various boot components to the
running OS. The OS kernel should expose these logs to user space and/or
process them internally if needed. The content of these logs should be
human readable. However, they should also contain information which
allows admins to do, e.g., boot time analysis.

The log specification should be as platform-agnostic and self-contained
as possible. The final version of this spec should either be merged into
existing specifications, e.g. UEFI, ACPI, Multiboot2, or become a
standalone spec, e.g. as a part of the OASIS Standards. The former seems
better but is not perfect either...

Here is the description (pseudocode) of the structures which will be
used to store the log data.

  struct bf_log
  {
    uint32_t   version;
    char       producer[64];
    uint64_t   flags;
    uint64_t   next_bf_log_addr;
    uint32_t   next_msg_off;
    bf_log_msg msgs[];
  }

  struct bf_log_msg
  {
    uint32_t size;
    uint64_t ts_nsec;
    uint32_t level;
    uint32_t facility;
    uint32_t msg_off;
    char     strings[];
  }

The members of struct bf_log:
  - version: the firmware and bootloader log format version number, 1 for now,
  - producer: the producer/firmware/bootloader/... type; the length
    allows ASCII UUID storage if somebody needs that functionality,
  - flags: can be used to store information about the log state, e.g.
    whether it was truncated or not (does it make sense to include
    information about the number of lost messages?),
  - next_bf_log_addr: the address of the next bf_log struct, or zero if
    there is none (I think newer spec versions should not change anything
    in the first 5 bf_log members; this way older log parsers will be able
    to traverse/copy all logs regardless of the version used in one log
    or another),
  - next_msg_off: the offset, in bytes, from the beginning of the bf_log
    struct, of the next byte after the last log message in msgs[], i.e.
    the offset of the next available log message slot; it is equal to the
    total number of bytes used so far, including the bf_log struct itself,
  - msgs: the array of log messages,
  - should we add a CRC, hash or signature here?

The members of struct bf_log_msg:
  - size: the total size of the bf_log_msg struct,
  - ts_nsec: timestamp expressed in nanoseconds starting from 0,
  - level: similar to syslog meaning; can be used to differentiate normal messages
    from debug messages; the exact interpretation depends on the current producer
    type specified in the bf_log.producer,
  - facility: similar to syslog meaning; can be used to differentiate the sources of
    the messages, e.g. message produced by networking module; the exact interpretation
    depends on the current producer type specified in the bf_log.producer,
  - msg_off: the log message offset in strings[],
  - strings[0]: the beginning of the log message type; similar to the
    facility member but a NUL-terminated string instead of an integer;
    this will be used, e.g., by GRUB2 for messages printed using
    grub_dprintf(),
  - strings[msg_off]: the beginning of the log message, a NUL-terminated
    string.

Note: The producers are free to use/ignore any given set of the level,
      facility and/or log type members, though the usage of these members
      has to be clearly defined. Ignored integer members should be set to
      0. An ignored log message type should contain an empty
      NUL-terminated string. The log message itself is mandatory but can
      be an empty NUL-terminated string.

One problem that is still not fully solved is how the logs should be
presented to the OS. On UEFI platforms we can use config tables to do
that; then bf_log.next_bf_log_addr should probably not be used. On ACPI
and Device Tree platforms we can use those mechanisms to present the
logs to the OSes. The situation gets more difficult if none of these
mechanisms is present. However, maybe we should not worry too much about
that, because such platforms are probably becoming less and less common.

Anyway, I am aware that this is not a specification per se. The goal of
this email is to continue the discussion about the idea of a firmware
and bootloader log, and to find out where the final specification should
land, of course taking into account the assumptions made above.

You can find previous discussions about related topics at [1], [2] and [3].

Additionally, I am going to present this during a GRUB mini-summit
session on Tuesday, the 17th of November, at 15:45 UTC. So, if you want
to discuss the log design, please join us. You can find more details
here [4].

Daniel

[1] https://lists.gnu.org/archive/html/grub-devel/2019-10/msg00107.html
[2] https://lists.gnu.org/archive/html/grub-devel/2019-11/msg00079.html
[3] https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00223.html
[4] https://twitter.com/3mdeb_com/status/1327278804100931587


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 00:47:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 00:47:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.26977.55602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdjiZ-0001Kj-4X; Sat, 14 Nov 2020 00:47:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 26977.55602; Sat, 14 Nov 2020 00:47:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdjiZ-0001Kc-1P; Sat, 14 Nov 2020 00:47:03 +0000
Received: by outflank-mailman (input) for mailman id 26977;
 Sat, 14 Nov 2020 00:47:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdjiX-0001K4-1u
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 00:47:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2b860b3-b0d2-4c7a-acbc-4642b01f6f01;
 Sat, 14 Nov 2020 00:46:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdjiP-0008HL-HW; Sat, 14 Nov 2020 00:46:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdjiP-0000aO-BH; Sat, 14 Nov 2020 00:46:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdjiP-0000Pl-An; Sat, 14 Nov 2020 00:46:53 +0000
X-Inumbo-ID: c2b860b3-b0d2-4c7a-acbc-4642b01f6f01
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=unF+orWQAQJLWxRvI4PxEyhfbRP7pC+HiyuEibQfrcI=; b=eVZyAa8XLrWdfVi86EZQVLZhch
	x+x8bIav6fycILsWUU5HZ7+lxG4tRLDUqI8PH6h8a/tDsBHfP0XMIzJho881KkI/WDMwOCCTXCNms
	yjlqliAz+bJUbLNLw0xBVV2BXYFYUg/B9JvfWCKzjupNnipM1vHWdQ+J3o1toyRjyN6g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156742-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156742: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=662b42db76a5b195c3aa94ab2946e342a15cd185
X-Osstest-Versions-That:
    ovmf=544cb0132dc1778b9e791202995533523fa6cccd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 00:46:53 +0000

flight 156742 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156742/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 662b42db76a5b195c3aa94ab2946e342a15cd185
baseline version:
 ovmf                 544cb0132dc1778b9e791202995533523fa6cccd

Last test of basis   156720  2020-11-12 16:00:44 Z    1 days
Testing same since   156742  2020-11-13 11:32:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pete Batard <pete@akeo.ie>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   544cb0132d..662b42db76  662b42db76a5b195c3aa94ab2946e342a15cd185 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 02:12:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 02:12:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27052.55747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdl2u-0006IW-Bi; Sat, 14 Nov 2020 02:12:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27052.55747; Sat, 14 Nov 2020 02:12:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdl2u-0006II-4B; Sat, 14 Nov 2020 02:12:08 +0000
Received: by outflank-mailman (input) for mailman id 27052;
 Sat, 14 Nov 2020 02:12:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8rVh=EU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kdl2t-0006HX-54
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 02:12:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35034942-9481-4946-a3c5-b9acff4eb3c7;
 Sat, 14 Nov 2020 02:12:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9F8F422261;
 Sat, 14 Nov 2020 02:12:05 +0000 (UTC)
X-Inumbo-ID: 35034942-9481-4946-a3c5-b9acff4eb3c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605319925;
	bh=91tedtBqSR/PprDuGd1wddKAk0sf1gu/9Y2miqIdwYc=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=DKs91J+WyJcDhkUMgag8WI4czY6yMYgw4ef5yTPe7+NOcRPTnj+EPEophNMkyP48b
	 /Bhb4z2nFC2WMQrQHIYvx4lvhlbk/vYGd5HRDgTwc7nxp5Mu3lNt9D1ryl53EyhOAp
	 cJLVAtktIUq8yyzSTeDiJsR2hLxfyvB+CuKC0y+w=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH 2/2] automation: add dom0less to the QEMU aarch64 smoke test
Date: Fri, 13 Nov 2020 18:12:03 -0800
Message-Id: <20201114021203.24558-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011131751380.20906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011131751380.20906@sstabellini-ThinkPad-T480s>

Add a trivial dom0less test:
- build a Linux arm64 defconfig kernel to be used as the dom0/domU kernel
- use busybox-static to create a trivial dom0/domU ramdisk
- use ImageBuilder to generate the u-boot boot script automatically
- install and use u-boot from the Debian package to start the test
- binaries are loaded by u-boot via TFTP

Disabling the pl061 device is required to get any version of Linux to
boot on Xen on qemu-system-aarch64.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/gitlab-ci/test.yaml         |  1 +
 automation/scripts/qemu-smoke-arm64.sh | 87 +++++++++++++++++++++++---
 2 files changed, 81 insertions(+), 7 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 35346e3f6e..76eff1004e 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -27,6 +27,7 @@ qemu-smoke-arm64-gcc:
   image: registry.gitlab.com/xen-project/xen/${CONTAINER}
   variables:
     CONTAINER: debian:unstable-arm64v8
+    LINUX_VERSION: 5.9.8
   script:
     - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
   dependencies:
diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
index a7efbf8b6f..e8dc5b19cb 100755
--- a/automation/scripts/qemu-smoke-arm64.sh
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -6,27 +6,100 @@ set -ex
 export DEBIAN_FRONTENT=noninteractive
 apt-get -qy update
 apt-get -qy install --no-install-recommends qemu-system-aarch64 \
-                                            u-boot-qemu
+                                            u-boot-qemu \
+                                            u-boot-tools \
+                                            device-tree-compiler \
+                                            busybox-static \
+                                            curl \
+                                            cpio \
+                                            bc
 
 # XXX Silly workaround to get the following QEMU command to work
 cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
 qemu-system-aarch64 \
    -machine virtualization=true \
    -cpu cortex-a57 -machine type=virt \
-   -m 512 -display none \
+   -m 1024 -display none \
    -machine dumpdtb=binaries/virt-gicv3.dtb
+# XXX disable pl061 to avoid Linux crash
+dtc -I dtb -O dts binaries/virt-gicv3.dtb > binaries/virt-gicv3.dts
+sed 's/compatible = "arm,pl061.*/status = "disabled";/g' binaries/virt-gicv3.dts > binaries/virt-gicv3-edited.dts
+dtc -I dts -O dtb binaries/virt-gicv3-edited.dts > binaries/virt-gicv3.dtb
 
+
+# Busybox Dom0
+mkdir -p initrd
+mkdir -p initrd/bin
+mkdir -p initrd/sbin
+mkdir -p initrd/etc
+mkdir -p initrd/dev
+mkdir -p initrd/proc
+mkdir -p initrd/sys
+mkdir -p initrd/lib
+mkdir -p initrd/var
+mkdir -p initrd/mnt
+cp /bin/busybox initrd/bin/busybox
+initrd/bin/busybox --install initrd/bin
+echo "#!/bin/sh
+
+mount -t proc proc /proc
+mount -t sysfs sysfs /sys
+mount -t devtmpfs devtmpfs /dev
+/bin/sh" > initrd/init
+chmod +x initrd/init
+cd initrd
+find . | cpio --create --format='newc' | gzip > ../binaries/initrd
+cd ..
+
+
+# Linux Dom0
+curl -fsSLO https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-"$LINUX_VERSION".tar.xz
+tar xvJf linux-"$LINUX_VERSION".tar.xz
+cd linux-"$LINUX_VERSION"
+make defconfig
+make -j$(nproc) Image.gz
+cp arch/arm64/boot/Image ../binaries
+cd ..
+
+
+# ImageBuilder
+echo 'MEMORY_START="0x40000000"
+MEMORY_END="0x80000000"
+
+DEVICE_TREE="virt-gicv3.dtb"
+XEN="xen"
+DOM0_KERNEL="Image"
+DOM0_RAMDISK="initrd"
+XEN_CMD="console=dtuart dom0_mem=512M"
+
+NUM_DOMUS=1
+DOMU_KERNEL[0]="Image"
+DOMU_RAMDISK[0]="initrd"
+DOMU_MEM[0]="256"
+
+LOAD_CMD="tftpb"
+UBOOT_SOURCE="boot.source"
+UBOOT_SCRIPT="boot.scr"' > binaries/config
+rm -rf imagebuilder
+git clone https://gitlab.com/ViryaOS/imagebuilder
+bash imagebuilder/scripts/uboot-script-gen -t tftp -d binaries/ -c binaries/config
+
+
+# Run the test
 rm -f smoke.serial
 set +e
-echo "  booti 0x49000000 - 0x44000000" | timeout -k 1 30 qemu-system-aarch64 \
+echo "  virtio scan; dhcp; tftpb 0x40000000 boot.scr; source 0x40000000"| \
+timeout -k 1 240 \
+qemu-system-aarch64 \
     -machine virtualization=true \
     -cpu cortex-a57 -machine type=virt \
-    -m 512 -monitor none -serial stdio \
+    -m 1024 -monitor none -serial stdio \
+    -smp 2 \
     -no-reboot \
-    -device loader,file=binaries/virt-gicv3.dtb,force-raw=on,addr=0x44000000 \
-    -device loader,file=binaries/xen,force-raw=on,addr=0x49000000 \
+    -device virtio-net-pci,netdev=n0 \
+    -netdev user,id=n0,tftp=binaries \
     -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
 
 set -e
-grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
+(grep -q "^BusyBox" smoke.serial && grep -q "DOM1: BusyBox" smoke.serial) || exit 1
 exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Nov 14 02:12:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 02:12:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27050.55729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdl2i-0006Fj-ON; Sat, 14 Nov 2020 02:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27050.55729; Sat, 14 Nov 2020 02:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdl2i-0006Fb-JS; Sat, 14 Nov 2020 02:11:56 +0000
Received: by outflank-mailman (input) for mailman id 27050;
 Sat, 14 Nov 2020 02:11:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8rVh=EU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kdl2h-0006FW-FO
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 02:11:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed1db482-3350-4fac-94b2-7d9d96ba054d;
 Sat, 14 Nov 2020 02:11:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D25F92225D;
 Sat, 14 Nov 2020 02:11:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605319914;
	bh=+DdL6JZH1w++1IqLUX2/aasJClhdWwwHzAyz0B//UrE=;
	h=Date:From:To:cc:Subject:From;
	b=yFvefLeIic6y3QNKZp7jJ7hyVIq3ub3ZZfTih81Q0reel+Iqvp8QSJ4NnuCu5Qrpb
	 PLbH93Aubcl0FXypHjwWMKnOMq8JGq8D/VQ1JypI5RmKxZKuXHXhuzLa9VFLsQa8ZL
	 BcCInjsfBfaDm+RMgtJKN1G5rMWSWXdTB7x7x/Z4=
Date: Fri, 13 Nov 2020 18:11:53 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>, cardoe@cardoe.com, wl@xen.org
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH 0/2] automation: arm64 dom0less smoke test
Message-ID: <alpine.DEB.2.21.2011131751380.20906@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This short series introduces a very simple Xen dom0less smoke test, based
on qemu-system-aarch64, to be run as part of the gitlab CI loop. It
currently passes on staging.

Cheers,

Stefano


Stefano Stabellini (2):
      automation: add a QEMU aarch64 smoke test
      automation: add dom0less to the QEMU aarch64 smoke test

 automation/gitlab-ci/test.yaml         |  23 ++++++++
 automation/scripts/build               |   8 +--
 automation/scripts/qemu-smoke-arm64.sh | 105 +++++++++++++++++++++++++++++++++
 3 files changed, 131 insertions(+), 5 deletions(-)
 create mode 100755 automation/scripts/qemu-smoke-arm64.sh


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 02:12:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 02:12:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27051.55741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdl2t-0006I0-WE; Sat, 14 Nov 2020 02:12:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27051.55741; Sat, 14 Nov 2020 02:12:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdl2t-0006Ht-SF; Sat, 14 Nov 2020 02:12:07 +0000
Received: by outflank-mailman (input) for mailman id 27051;
 Sat, 14 Nov 2020 02:12:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8rVh=EU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kdl2s-0006HX-Nd
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 02:12:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6f05e91-57df-4183-8251-99cd8b11ad7b;
 Sat, 14 Nov 2020 02:12:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3284B2225D;
 Sat, 14 Nov 2020 02:12:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605319925;
	bh=FBqpCBRCXyQW7zW1Z4nTkrwi13yRf09HktL4SI8wk1w=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=SU83EcLAvGA9em+JWb/7PacKdsP7KH7JIJVNQIhcOjfxMYXN1zB9KQqwGua4pf4lk
	 TZ0SN/0QOxffxP8D+ksBjHRbmoJ/CS1mu6pISIMSJ3dsh9Km8TvziTVISHdHsU6DDE
	 W2c0U/RMF2K3oaT5PYX5/ABafK2k8juEwpIh2bXk=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH 1/2] automation: add a QEMU aarch64 smoke test
Date: Fri, 13 Nov 2020 18:12:02 -0800
Message-Id: <20201114021203.24558-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011131751380.20906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011131751380.20906@sstabellini-ThinkPad-T480s>

Use QEMU to start Xen (just the hypervisor) up until it stops because
there is no dom0 kernel to boot.

It is based on the existing build job unstable-arm64v8.

Also use make -j$(nproc) to build Xen.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/gitlab-ci/test.yaml         | 22 ++++++++++++++++++
 automation/scripts/build               |  8 +++----
 automation/scripts/qemu-smoke-arm64.sh | 32 ++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 5 deletions(-)
 create mode 100755 automation/scripts/qemu-smoke-arm64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 793feafe8b..35346e3f6e 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -22,6 +22,28 @@ build-each-commit-gcc:
     - /^coverity-tested\/.*/
     - /^stable-.*/
 
+qemu-smoke-arm64-gcc:
+  stage: test
+  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
+  variables:
+    CONTAINER: debian:unstable-arm64v8
+  script:
+    - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
+  dependencies:
+    - debian-unstable-gcc-arm64
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - arm64
+  except:
+    - master
+    - smoke
+    - /^coverity-tested\/.*/
+    - /^stable-.*/
+
 qemu-smoke-x86-64-gcc:
   stage: test
   image: registry.gitlab.com/xen-project/xen/${CONTAINER}
diff --git a/automation/scripts/build b/automation/scripts/build
index 0cd0f3971d..95ad23eadb 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -10,9 +10,9 @@ cc-ver()
 
 # random config or default config
 if [[ "${RANDCONFIG}" == "y" ]]; then
-    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
+    make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
 else
-    make -C xen defconfig
+    make -j$(nproc) -C xen defconfig
 fi
 
 # build up our configure options
@@ -45,9 +45,7 @@ make -j$(nproc) dist
 # Extract artifacts to avoid getting rewritten by customised builds
 cp xen/.config xen-config
 mkdir binaries
-if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
-    cp xen/xen binaries/xen
-fi
+cp xen/xen binaries/xen
 
 # Build all the configs we care about
 case ${XEN_TARGET_ARCH} in
diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
new file mode 100755
index 0000000000..a7efbf8b6f
--- /dev/null
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+set -ex
+
+# Install QEMU
+export DEBIAN_FRONTEND=noninteractive
+apt-get -qy update
+apt-get -qy install --no-install-recommends qemu-system-aarch64 \
+                                            u-boot-qemu
+
+# XXX Silly workaround to get the following QEMU command to work
+cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
+qemu-system-aarch64 \
+   -machine virtualization=true \
+   -cpu cortex-a57 -machine type=virt \
+   -m 512 -display none \
+   -machine dumpdtb=binaries/virt-gicv3.dtb
+
+rm -f smoke.serial
+set +e
+echo "  booti 0x49000000 - 0x44000000" | timeout -k 1 30 qemu-system-aarch64 \
+    -machine virtualization=true \
+    -cpu cortex-a57 -machine type=virt \
+    -m 512 -monitor none -serial stdio \
+    -no-reboot \
+    -device loader,file=binaries/virt-gicv3.dtb,force-raw=on,addr=0x44000000 \
+    -device loader,file=binaries/xen,force-raw=on,addr=0x49000000 \
+    -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
+
+set -e
+grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
+exit 0
-- 
2.17.1
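A note on the control flow in the script above: `set +e` combined with `timeout -k 1 30` means QEMU's exit status is deliberately ignored (it is expected to be killed at the deadline), and only the grep on the captured log decides pass/fail. A fast-running sketch of the same pattern, with a short `sh -c` command standing in for QEMU and the timeout cut to 2 seconds:

```shell
set -e
rm -f smoke.serial
set +e                                  # tolerate the "emulator" being killed
# -k 1 2: SIGTERM after 2s, SIGKILL 1s later; output written before the
# deadline is still captured by tee.
timeout -k 1 2 sh -c 'echo "(XEN) LOADING DOMAIN 0"; sleep 30' | tee smoke.serial
set -e
grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
echo "smoke test passed"
rm -f smoke.serial
```

Because the pipeline's status is that of `tee`, not `timeout`, even `set -e` alone would not abort here; the explicit `set +e` window still documents the intent and guards against future `pipefail` use.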



From xen-devel-bounces@lists.xenproject.org Sat Nov 14 02:31:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 02:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27080.55786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdlLg-0008Mw-9x; Sat, 14 Nov 2020 02:31:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27080.55786; Sat, 14 Nov 2020 02:31:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdlLg-0008Mp-4M; Sat, 14 Nov 2020 02:31:32 +0000
Received: by outflank-mailman (input) for mailman id 27080;
 Sat, 14 Nov 2020 02:31:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdlLe-0008Lc-7d
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 02:31:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc604457-66b4-4330-8067-447576cfab0f;
 Sat, 14 Nov 2020 02:31:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdlLW-0006ZH-Kf; Sat, 14 Nov 2020 02:31:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdlLW-00073Y-9b; Sat, 14 Nov 2020 02:31:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdlLW-0005Su-96; Sat, 14 Nov 2020 02:31:22 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sz3+lVO4H7+nnb3kHohdAZV59TgNPoNcMue0IcYOQUs=; b=ZhY4A7qLJY46SX9HCBBYmBwbvt
	hil08CMSFgxVHLkXKXJjF5GtClPHeMLTUkzOKSqVuhXzCD+xHmrzflYaPQa0BZMEgzKT7T/siXVmp
	FWUBxJfUCG3DVNlueFPVXc1I349vzmoYV89PEDI8CDf6fC5pNaV7HAZpTg8iTCgYCgO4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156737-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156737: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=9ff9705647646aa937b5f5c1426a64c69a62b3bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 02:31:22 +0000

flight 156737 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156737/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156443
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156443
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156443
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156443
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156443
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156443
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  9ff9705647646aa937b5f5c1426a64c69a62b3bd

Last test of basis   156443  2020-11-05 15:47:13 Z    8 days
Failing since        156524  2020-11-06 14:22:28 Z    7 days    9 attempts
Testing same since   156711  2020-11-12 09:06:03 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Penny Zheng <penny.zheng@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9ff9705647..5505f5f8e7  5505f5f8e7e805365cfe70b6a4af6115940bb749 -> master


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 03:16:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 03:16:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27108.55881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdm2k-0003zk-QK; Sat, 14 Nov 2020 03:16:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27108.55881; Sat, 14 Nov 2020 03:16:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdm2k-0003zd-N0; Sat, 14 Nov 2020 03:16:02 +0000
Received: by outflank-mailman (input) for mailman id 27108;
 Sat, 14 Nov 2020 03:16:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kJGx=EU=infradead.org=rdunlap@srs-us1.protection.inumbo.net>)
 id 1kdm2h-0003zW-Pi
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 03:16:00 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 829784cb-3d4d-4105-8bd8-1d6914f16ba0;
 Sat, 14 Nov 2020 03:15:56 +0000 (UTC)
Received: from [2601:1c0:6280:3f0::f32]
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kdm1g-0005d4-3a; Sat, 14 Nov 2020 03:14:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJGx=EU=infradead.org=rdunlap@srs-us1.protection.inumbo.net>)
	id 1kdm2h-0003zW-Pi
	for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 03:16:00 +0000
X-Inumbo-ID: 829784cb-3d4d-4105-8bd8-1d6914f16ba0
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 829784cb-3d4d-4105-8bd8-1d6914f16ba0;
	Sat, 14 Nov 2020 03:15:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description;
	bh=0/KJqz8hu47WbA5FdnejOOtD0NK3mSmNVO2zFIHMmuo=; b=3RyB6zdxnCz7gFnT7+CeKWOF6/
	4XRkDMsYza0TaLF6KTCBT6PLdxfXgbGF3rtmSxCJhjMEXIepF/CxIq7aD21gAnnyclsJiOYzo0H1u
	tbf+fFyALRXtnCeM/q/1V+Ay65sREalvZQ6b6BeDcyzb8n4QBWm/5ZKKn+ZHj1v4TRrQ8iPw+JEDW
	fQ8BrzPTsv3wkyO9ztrawNXFwJHlfx8b1NP+Gn7eI8vwmFo+0H9m0ojFCrqjxbdUGxiEeB9kTIHX/
	qNpWjxmU7rnIX+Pe9sA1E9gjoydPfFDYhqP4rzG6OJZJ9smRn1ri4/2zjATj7xFCXNfa45Tyd7VIS
	avF1iseA==;
Received: from [2601:1c0:6280:3f0::f32]
	by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kdm1g-0005d4-3a; Sat, 14 Nov 2020 03:14:56 +0000
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Daniel Kiper <daniel.kiper@oracle.com>, coreboot@coreboot.org,
 grub-devel@gnu.org, linux-kernel@vger.kernel.org,
 systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com,
 u-boot@lists.denx.de, x86@kernel.org, xen-devel@lists.xenproject.org
Cc: alecb@umass.edu, alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
 andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org, btrotter@gmail.com,
 dpsmith@apertussolutions.com, eric.devolder@oracle.com,
 eric.snowberg@oracle.com, hpa@zytor.com, hun@n-dimensional.de,
 javierm@redhat.com, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com,
 konrad.wilk@oracle.com, krystian.hebel@3mdeb.com, leif@nuviainc.com,
 lukasz.hawrylko@intel.com, luto@amacapital.net, michal.zygowski@3mdeb.com,
 mjg59@google.com, mtottenh@akamai.com, phcoder@gmail.com,
 piotr.krol@3mdeb.com, pjones@redhat.com, pmenzel@molgen.mpg.de,
 roger.pau@citrix.com, ross.philipson@oracle.com, tyhicks@linux.microsoft.com
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
From: Randy Dunlap <rdunlap@infradead.org>
Message-ID: <ef83facc-b08d-a43d-aff1-e0492ab38064@infradead.org>
Date: Fri, 13 Nov 2020 19:14:44 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/13/20 3:52 PM, Daniel Kiper wrote:
> Hey,
> 
> 
> Here is the description (pseudocode) of the structures which will be
> used to store the log data.
> 
> Anyway, I am aware that this is not a specification per se.


Yes, you have caveats here. I'm sure that you either already know
or would learn soon enough that struct bf_log has some padding
added to it (for alignment) unless it is packed. Alternatively, you
could rearrange the order of some of its fields and save 8 bytes
per struct on x86_64.


>   struct bf_log
>   {
>     uint32_t   version;
>     char       producer[64];
>     uint64_t   flags;
>     uint64_t   next_bf_log_addr;
>     uint32_t   next_msg_off;
>     bf_log_msg msgs[];
>   }
> 
>   struct bf_log_msg
>   {
>     uint32_t size;
>     uint64_t ts_nsec;
>     uint32_t level;
>     uint32_t facility;
>     uint32_t msg_off;
>     char     strings[];
>   }


cheers.
-- 
~Randy



From xen-devel-bounces@lists.xenproject.org Sat Nov 14 04:56:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 04:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27138.55911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdnbg-0004gr-Ll; Sat, 14 Nov 2020 04:56:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27138.55911; Sat, 14 Nov 2020 04:56:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdnbg-0004gk-Hw; Sat, 14 Nov 2020 04:56:12 +0000
Received: by outflank-mailman (input) for mailman id 27138;
 Sat, 14 Nov 2020 04:56:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8rVh=EU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kdnbf-0004gf-8x
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 04:56:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9a4e1fb-6557-4ad7-ab24-4ea6afe4634f;
 Sat, 14 Nov 2020 04:56:09 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 058CB22268;
 Sat, 14 Nov 2020 04:56:08 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8rVh=EU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kdnbf-0004gf-8x
	for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 04:56:11 +0000
X-Inumbo-ID: a9a4e1fb-6557-4ad7-ab24-4ea6afe4634f
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a9a4e1fb-6557-4ad7-ab24-4ea6afe4634f;
	Sat, 14 Nov 2020 04:56:09 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 058CB22268;
	Sat, 14 Nov 2020 04:56:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605329769;
	bh=M3SzRyp9ttQ5m7Z+LkhnkL+t/2ZTv5BNnrFf5elF36Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eSPOWAmq4vpFK16tBVYLK1bCIEji2HAP6Z7/P7LtBxo6qjdJkvb0kTsiTgDxrCxlf
	 weQiZ7bKIoyPBYwM5YF/wp7Dr+yZdfYq/ve91tlS76mkzLmsbpPF45P+nTLU/26ReC
	 MeTb+ykDGELw/oHA4PlpOt9ulg6tDdKYuGao30no=
Date: Fri, 13 Nov 2020 20:56:08 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org, 
    xen-devel@lists.xenproject.org, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH 1/2] automation: add a QEMU aarch64 smoke test
In-Reply-To: <20201114021203.24558-1-sstabellini@kernel.org>
Message-ID: <alpine.DEB.2.21.2011131946550.20906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011131751380.20906@sstabellini-ThinkPad-T480s> <20201114021203.24558-1-sstabellini@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 13 Nov 2020, Stefano Stabellini wrote:
> Use QEMU to start Xen (just the hypervisor) up until it stops because
> there is no dom0 kernel to boot.
> 
> It is based on the existing build job unstable-arm64v8.
> 
> Also use make -j$(nproc) to build Xen.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>  automation/gitlab-ci/test.yaml         | 22 ++++++++++++++++++
>  automation/scripts/build               |  8 +++----
>  automation/scripts/qemu-smoke-arm64.sh | 32 ++++++++++++++++++++++++++
>  3 files changed, 57 insertions(+), 5 deletions(-)
>  create mode 100755 automation/scripts/qemu-smoke-arm64.sh
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index 793feafe8b..35346e3f6e 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -22,6 +22,28 @@ build-each-commit-gcc:
>      - /^coverity-tested\/.*/
>      - /^stable-.*/
>  
> +qemu-smoke-arm64-gcc:
> +  stage: test
> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> +  variables:
> +    CONTAINER: debian:unstable-arm64v8
> +  script:
> +    - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
> +  dependencies:
> +    - debian-unstable-gcc-arm64
> +  artifacts:
> +    paths:
> +      - smoke.serial
> +      - '*.log'
> +    when: always
> +  tags:
> +    - arm64
> +  except:
> +    - master
> +    - smoke
> +    - /^coverity-tested\/.*/
> +    - /^stable-.*/
> +
>  qemu-smoke-x86-64-gcc:
>    stage: test
>    image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 0cd0f3971d..95ad23eadb 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -10,9 +10,9 @@ cc-ver()
>  
>  # random config or default config
>  if [[ "${RANDCONFIG}" == "y" ]]; then
> -    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
> +    make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
>  else
> -    make -C xen defconfig
> +    make -j$(nproc) -C xen defconfig
>  fi
>  
>  # build up our configure options
> @@ -45,9 +45,7 @@ make -j$(nproc) dist
>  # Extract artifacts to avoid getting rewritten by customised builds
>  cp xen/.config xen-config
>  mkdir binaries
> -if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
> -    cp xen/xen binaries/xen
> -fi
> +cp xen/xen binaries/xen

This was a mistake, it should have been:

if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
    cp xen/xen binaries/xen
fi


Unrelated: should we temporarily disable the stubdom build
(--disable-stubdom) in the gitlab build? (Even better if somebody
volunteers to fix the stubdom build somehow.) At the moment a bunch of
jobs are failing with:

mini-os.c: In function 'main':
mini-os.c:756: warning: cast from pointer to integer of different size
ar: creating /builds/xen-project/people/sstabellini/xen/stubdom/mini-os-x86_64-xenstorepvh/arch/x86/libx86_64.a
/builds/xen-project/people/sstabellini/xen/stubdom/mini-os-x86_64-xenstorepvh/mini-os.o: could not read symbols: File in wrong format
make[2]: *** [/builds/xen-project/people/sstabellini/xen/stubdom/mini-os-x86_64-xenstorepvh/mini-os] Error 1
make[1]: *** [xenstorepvh-stubdom] Error 2
make[1]: *** Waiting for unfinished jobs....


With the above x86_32 fix and stubdoms disabled, all jobs (including the
new aarch64 smoke test) complete successfully.


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 09:49:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 09:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27169.55926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdsBF-0005LW-T4; Sat, 14 Nov 2020 09:49:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27169.55926; Sat, 14 Nov 2020 09:49:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdsBF-0005LP-Q6; Sat, 14 Nov 2020 09:49:13 +0000
Received: by outflank-mailman (input) for mailman id 27169;
 Sat, 14 Nov 2020 09:49:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdsBE-0005LK-Mh
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 09:49:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d02b0c4-9aef-4f39-a746-60a8ef199210;
 Sat, 14 Nov 2020 09:49:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsBB-0002DG-OI; Sat, 14 Nov 2020 09:49:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsBB-0003W0-FE; Sat, 14 Nov 2020 09:49:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsBB-0005t2-El; Sat, 14 Nov 2020 09:49:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdsBE-0005LK-Mh
	for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 09:49:12 +0000
X-Inumbo-ID: 8d02b0c4-9aef-4f39-a746-60a8ef199210
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 8d02b0c4-9aef-4f39-a746-60a8ef199210;
	Sat, 14 Nov 2020 09:49:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dFD1xZnOXcnI5IouskBzVrbmtEeibCCZ/NvqsJUMcQQ=; b=vzEMI+oFzJZzckt9tm6Lr47Xe0
	X4Ce9eNb1bmV0lLiYuZ3sOjHjN726uM/V/Tnd2kyFMYpKEsV9BttYK/QY/YaCRrf3YoteAWubkyKJ
	pQ+fW1CN06vwjXsIog+Itvgg6ArKEE/gs5v+u7nzILx14dCSm1b5i09hblBHvQxW3lXg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsBB-0002DG-OI; Sat, 14 Nov 2020 09:49:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsBB-0003W0-FE; Sat, 14 Nov 2020 09:49:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsBB-0005t2-El; Sat, 14 Nov 2020 09:49:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156786-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156786: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5e9a8a6dfb152472c5d12a3940069b16c774f0fc
X-Osstest-Versions-That:
    ovmf=662b42db76a5b195c3aa94ab2946e342a15cd185
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 09:49:09 +0000

flight 156786 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156786/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5e9a8a6dfb152472c5d12a3940069b16c774f0fc
baseline version:
 ovmf                 662b42db76a5b195c3aa94ab2946e342a15cd185

Last test of basis   156742  2020-11-13 11:32:54 Z    0 days
Testing same since   156786  2020-11-14 01:09:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Grehan <grehan@freebsd.org>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   662b42db76..5e9a8a6dfb  5e9a8a6dfb152472c5d12a3940069b16c774f0fc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 10:01:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 10:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27179.55944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdsNI-00078T-2Z; Sat, 14 Nov 2020 10:01:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27179.55944; Sat, 14 Nov 2020 10:01:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdsNH-00078M-Vg; Sat, 14 Nov 2020 10:01:39 +0000
Received: by outflank-mailman (input) for mailman id 27179;
 Sat, 14 Nov 2020 10:01:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdsNG-00077o-BP
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 10:01:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f7440e5-c8d0-4a8f-b455-f0bee27f8b9e;
 Sat, 14 Nov 2020 10:01:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsN8-0002XW-L5; Sat, 14 Nov 2020 10:01:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsN8-00043m-8X; Sat, 14 Nov 2020 10:01:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsN8-0004Im-80; Sat, 14 Nov 2020 10:01:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdsNG-00077o-BP
	for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 10:01:38 +0000
X-Inumbo-ID: 6f7440e5-c8d0-4a8f-b455-f0bee27f8b9e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6f7440e5-c8d0-4a8f-b455-f0bee27f8b9e;
	Sat, 14 Nov 2020 10:01:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ni4c1Cv11vyxzAmSTJg2GjQjf9HLg4Qoyeym955qRcY=; b=atIFBZ76rMiCRUfnFk4AxsHNVr
	pQ6sse/Xcz4jH68jEQyKl10m3TTTBbQHqcSpVATTGgcvhsP58/YcVR5/pjtlNEslzdUAlqcnI//iW
	rwbWjHAzlN7qqKz8RZvHGr2Gn1fLj5Pa2299Y4ZhlWtnhmq2Vd3SS/OT/mcY9tQuKtCM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsN8-0002XW-L5; Sat, 14 Nov 2020 10:01:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsN8-00043m-8X; Sat, 14 Nov 2020 10:01:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsN8-0004Im-80; Sat, 14 Nov 2020 10:01:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156747-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156747: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=673cb932b688ad3b03de89dc2b0b97c75ad47112
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 10:01:30 +0000

flight 156747 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156747/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                673cb932b688ad3b03de89dc2b0b97c75ad47112
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  105 days
Failing since        152366  2020-08-01 20:49:34 Z  104 days  171 attempts
Testing same since   156747  2020-11-13 19:44:13 Z    0 days    1 attempts

------------------------------------------------------------
3501 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 668992 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 10:16:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 10:16:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27189.55956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdsbg-0008CQ-Hc; Sat, 14 Nov 2020 10:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27189.55956; Sat, 14 Nov 2020 10:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdsbg-0008CJ-EV; Sat, 14 Nov 2020 10:16:32 +0000
Received: by outflank-mailman (input) for mailman id 27189;
 Sat, 14 Nov 2020 10:16:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdsbf-0008CD-IU
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 10:16:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b3ea1f6-5cae-4862-a403-2863dbf2e1f5;
 Sat, 14 Nov 2020 10:16:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsbd-0002qN-4z; Sat, 14 Nov 2020 10:16:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsbc-0004gU-SA; Sat, 14 Nov 2020 10:16:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdsbc-0001QA-Re; Sat, 14 Nov 2020 10:16:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdsbf-0008CD-IU
	for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 10:16:31 +0000
X-Inumbo-ID: 9b3ea1f6-5cae-4862-a403-2863dbf2e1f5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9b3ea1f6-5cae-4862-a403-2863dbf2e1f5;
	Sat, 14 Nov 2020 10:16:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dwVgt8nNp5MCYTESe6HZHmwtCmueyXYxAAFPKK75LNM=; b=PsUGsk+hNyzBC6wtLOFmQ0l8wm
	OE0iuyCDesULg46esUiiBBGR/basZNisvodVcWxYaQ9x+Ggy17TTIvRvBO3y/4BenrbLLl5PpEgvU
	xWrgilNZpUnJgiltzB38nacoL0hMi/nNsK1+xlAQKexUTojn68UsM1YSiOwSTmXt0wKw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsbd-0002qN-4z; Sat, 14 Nov 2020 10:16:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsbc-0004gU-SA; Sat, 14 Nov 2020 10:16:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdsbc-0001QA-Re; Sat, 14 Nov 2020 10:16:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156800-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156800: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=77549339838b44cd32b576e06701d1f7b4518fb5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 10:16:28 +0000

flight 156800 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156800/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              77549339838b44cd32b576e06701d1f7b4518fb5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  127 days
Failing since        151818  2020-07-11 04:18:52 Z  126 days  121 attempts
Testing same since   156800  2020-11-14 04:19:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26941 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 14:31:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 14:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27252.55973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdwa1-0005Sr-61; Sat, 14 Nov 2020 14:31:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27252.55973; Sat, 14 Nov 2020 14:31:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdwa1-0005Sk-2g; Sat, 14 Nov 2020 14:31:05 +0000
Received: by outflank-mailman (input) for mailman id 27252;
 Sat, 14 Nov 2020 14:31:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kdwZz-0005SA-Ma
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 14:31:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e5b5964-c1dc-4dc5-ae04-c1312d5905cf;
 Sat, 14 Nov 2020 14:30:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdwZq-0007zy-9V; Sat, 14 Nov 2020 14:30:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kdwZq-0002S9-0i; Sat, 14 Nov 2020 14:30:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kdwZq-0005Hc-0C; Sat, 14 Nov 2020 14:30:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kdwZz-0005SA-Ma
	for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 14:31:03 +0000
X-Inumbo-ID: 9e5b5964-c1dc-4dc5-ae04-c1312d5905cf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9e5b5964-c1dc-4dc5-ae04-c1312d5905cf;
	Sat, 14 Nov 2020 14:30:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BP/H1XFGN/RE7JMSY8d9AYcPWVM0GMHYqgHFQj5pDJA=; b=ss1QoeaOtcYw6QvjOfQAO097EW
	L+7Uj8Y47OUlcJCCS8FnY/N1pu1kJK9ofpjz8qchE2OY9KirtbZg8Jt5ra2Xitunyo+xjOAijcz4/
	BelEo7SIII64UbciePkPTCvvuHSPDW1DUcMG1lg+B8wFkLQGDWnTk49Uj2j0zSfSR5N4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdwZq-0007zy-9V; Sat, 14 Nov 2020 14:30:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdwZq-0002S9-0i; Sat, 14 Nov 2020 14:30:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kdwZq-0005Hc-0C; Sat, 14 Nov 2020 14:30:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156768-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156768: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5ececc3a0b0086c6168e12f4d032809477b30fe5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 14:30:54 +0000

flight 156768 qemu-mainline real [real]
flight 156802 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156768/
http://logs.test-lab.xenproject.org/osstest/logs/156802/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                5ececc3a0b0086c6168e12f4d032809477b30fe5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   86 days
Failing since        152659  2020-08-21 14:07:39 Z   85 days  181 attempts
Testing same since   156768  2020-11-13 23:36:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 64708 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 15:28:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 15:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27272.55986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdxTP-0001e0-Fz; Sat, 14 Nov 2020 15:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27272.55986; Sat, 14 Nov 2020 15:28:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdxTP-0001dt-Co; Sat, 14 Nov 2020 15:28:19 +0000
Received: by outflank-mailman (input) for mailman id 27272;
 Sat, 14 Nov 2020 15:28:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UWgf=EU=kernel.org=jic23@srs-us1.protection.inumbo.net>)
 id 1kdxTN-0001dk-No
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 15:28:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d420217-7bb7-4ded-a2a5-26c1e0c896ad;
 Sat, 14 Nov 2020 15:28:15 +0000 (UTC)
Received: from archlinux (cpc149474-cmbg20-2-0-cust94.5-4.cable.virginm.net
 [82.4.196.95])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2E56322265;
 Sat, 14 Nov 2020 15:28:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605367694;
	bh=w6GRHv1XESMtvaagowpL5feljKeJs2fwDrGdLJ9/tG8=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=vQCvFBuAiwuauA3XjLeP+mCvOpc17dcwecF56gxntp9RKPC8CFObV6cOL7g1sMVGV
	 WUs9y45I0DfoBWSVYS/kE6LiWfrPHsf01BnwcP7bRzi13PODYfJfI6KsRUci1zunac
	 cmQCjGJd52D71VNOyOTEClERKlOIYrQhYJXA8hj8=
Date: Sat, 14 Nov 2020 15:27:57 +0000
From: Jonathan Cameron <jic23@kernel.org>
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Fabrice Gasnier
 <fabrice.gasnier@st.com>, Linux Doc Mailing List
 <linux-doc@vger.kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier González
 <javier@javigon.com>, Jonathan Corbet <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Felipe Balbi <balbi@kernel.org>,
 Frederic Barrat <fbarrat@linux.ibm.com>, Guenter Roeck
 <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>, Heikki Krogerus
 <heikki.krogerus@linux.intel.com>, Jens Axboe <axboe@kernel.dk>, Johannes
 Thumshirn <johannes.thumshirn@wdc.com>, Juergen Gross <jgross@suse.com>,
 Konstantin Khlebnikov <koct9i@gmail.com>, Kranthi Kuntala
 <kranthi.kuntala@intel.com>, Lakshmi Ramasubramanian
 <nramas@linux.microsoft.com>, Lars-Peter Clausen <lars@metafoo.de>, Len
 Brown <lenb@kernel.org>, Leonid Maksymchuk <leonmaxx@gmail.com>, Ludovic
 Desroches <ludovic.desroches@microchip.com>, Mario Limonciello
 <mario.limonciello@dell.com>, Mark Gross <mgross@linux.intel.com>, Maxime
 Coquelin <mcoquelin.stm32@gmail.com>, Michael Ellerman
 <mpe@ellerman.id.au>, Mika Westerberg <mika.westerberg@linux.intel.com>,
 Mike Kravetz <mike.kravetz@oracle.com>, Mimi Zohar <zohar@linux.ibm.com>,
 Nayna Jain <nayna@linux.ibm.com>, Nicolas Ferre
 <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>, Oded
 Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>, Orson
 Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>, Vaibhav Jain
 <vaibhav@linux.ibm.com>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>
Subject: Re: Duplicated ABI entries - Was: Re: [PATCH v2 20/39] docs: ABI:
 testing: make the files compatible with ReST output
Message-ID: <20201114152757.6d8b3b7d@archlinux>
In-Reply-To: <20201110082658.2edc1ab5@coco.lan>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
	<58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
	<5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
	<20201030110925.3e09d59e@coco.lan>
	<cb586ea3-b6e6-4e48-2344-2bd641e5323f@st.com>
	<20201102124641.GA881895@kroah.com>
	<20201102154250.45bee17f@coco.lan>
	<20201108165621.4d0da3f4@archlinux>
	<20201110082658.2edc1ab5@coco.lan>
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 10 Nov 2020 08:26:58 +0100
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> wrote:

> Hi Jonathan,
> 
> On Sun, 8 Nov 2020 16:56:21 +0000
> Jonathan Cameron <jic23@kernel.org> wrote:
> 
> > > PS.: the IIO subsystem is currently the one with the most duplicated
> > > ABI entries:
> > > $ ./scripts/get_abi.pl validate 2>&1|grep iio
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:0  Documentation/ABI/testing/sysfs-bus-iio:394
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:1  Documentation/ABI/testing/sysfs-bus-iio:395
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_accel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:2  Documentation/ABI/testing/sysfs-bus-iio:396
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_x_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:3  Documentation/ABI/testing/sysfs-bus-iio:397
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_y_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:4  Documentation/ABI/testing/sysfs-bus-iio:398
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_anglvel_z_calibbias is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-icm42600:5  Documentation/ABI/testing/sysfs-bus-iio:399
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_preset is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:100  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:0
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:117  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:14
> > > Warning: /sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available is defined 3 times:  Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8:2  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:111  Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32:8
> > > Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:0  Documentation/ABI/testing/sysfs-bus-iio:599
> > > Warning: /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_powerdown is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371:36  Documentation/ABI/testing/sysfs-bus-iio:588
> > > Warning: /sys/bus/iio/devices/iio:deviceX/out_currentY_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-light-lm3533-als:43  Documentation/ABI/testing/sysfs-bus-iio-health-afe440x:38
> > > Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:0  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:0
> > > Warning: /sys/bus/iio/devices/iio:deviceX/out_current_heater_raw_available is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010:1  Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc100x:1
> > > Warning: /sys/bus/iio/devices/iio:deviceX/sensor_sensitivity is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-distance-srf08:0  Documentation/ABI/testing/sysfs-bus-iio-proximity-as3935:8
> > > Warning: /sys/bus/iio/devices/triggerX/sampling_frequency is defined 2 times:  Documentation/ABI/testing/sysfs-bus-iio-timer-stm32:92  Documentation/ABI/testing/sysfs-bus-iio:45    
> 
> > 
> > That was intentional.  Often these provide more information on the
> > ABI for a particular device than is present in the base ABI doc.  
> 
> FYI, right now, there are 20 duplicated entries, 16 of them
> from IIO, in these files:
> 
> 	$ ./scripts/get_abi.pl validate 2>&1|perl -ne 'if (m,(Documentation/\S+)\:,g) { print "$1\n" }'|sort|uniq
> 	Documentation/ABI/stable/sysfs-driver-w1_ds28e04
> 	Documentation/ABI/testing/sysfs-bus-iio-counter-104-quad-8
> 	Documentation/ABI/testing/sysfs-bus-iio-distance-srf08
> 	Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371
> 	Documentation/ABI/testing/sysfs-bus-iio-humidity-hdc2010
> 	Documentation/ABI/testing/sysfs-bus-iio-icm42600
> 	Documentation/ABI/testing/sysfs-bus-iio-light-lm3533-als
> 	Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
> 	Documentation/ABI/testing/sysfs-class-backlight-adp8860
> 	Documentation/ABI/testing/sysfs-class-led-trigger-pattern
> 	Documentation/ABI/testing/sysfs-kernel-iommu_groups
> 
> > 
> > A bit like when we have additional description for dt binding properties
> > for a particular device, even though they are standard properties.
> > 
> > Often a standard property allows for more values than the specific
> > one for a particular device.  There can also be obscuring coupling
> > between sysfs attributes due to hardware restrictions that we would
> > like to provide some explanatory info on.
> > 
> > I suppose we could add all this information to the parent doc but
> > that is pretty ugly and will make that doc very nasty to read.  
> 
> I understand what you meant to do, but right now, it is actually
> a lot uglier than merging into a single entry ;-)
> 
> Let's view the ABI from the PoV of a system admin who doesn't
> yet know about a certain ABI symbol.

I'd be surprised if a sys admin is looking at these at all. They
tend to be used only by userspace software writers.  But I guess the
point stands.

> 
> He'll try to search for the symbol, most likely using the HTML
> documentation. Only very senior system admins might try to take
> a look at the kernel sources.

Sad truth here is that before these were in the html docs, they'd
have grepped and the right option would have been fairly obvious, as it
would be the more specific file.  Ah well, sometimes progress bites :)

> 
> This is what happens when one searches for a duplicated symbol
> via the command line:
> 
> 	$ ./scripts/get_abi.pl search /sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency$
> 	
> 	/sys/bus/iio/devices/iio:deviceX/out_altvoltageY_frequency
> 	----------------------------------------------------------
> 	
> 	Kernel version:		3.4.0
> 	Contact:		linux-iio@vger.kernel.org
> 	Defined on file(s):	Documentation/ABI/testing/sysfs-bus-iio-frequency-adf4371 Documentation/ABI/testing/sysfs-bus-iio
> 	
> 	Description:
> 	
> 	Stores the PLL frequency in Hz for channel Y.
> 	Reading returns the actual frequency in Hz.
> 	The ADF4371 has an integrated VCO with fundamental output
> 	frequency ranging from 4000000000 Hz to 8000000000 Hz.
> 	
> 	out_altvoltage0_frequency:
> 	        A divide by 1, 2, 4, 8, 16, 32 or 64 circuit generates
> 	        frequencies from 62500000 Hz to 8000000000 Hz.
> 	out_altvoltage1_frequency:
> 	        This channel duplicates the channel 0 frequency
> 	out_altvoltage2_frequency:
> 	        A frequency doubler generates frequencies from
> 	        8000000000 Hz to 16000000000 Hz.
> 	out_altvoltage3_frequency:
> 	        A frequency quadrupler generates frequencies from
> 	        16000000000 Hz to 32000000000 Hz.
> 	
> 	Note: writes to one of the channels will affect the frequency of
> 	all the other channels, since it involves changing the VCO
> 	fundamental output frequency.
> 	
> 	Output frequency for channel Y in Hz. The number must always be
> 	specified and unique if the output corresponds to a single
> 	channel.
> 
> As the "What:" field is identical in both sysfs-bus-iio-frequency-adf4371
> and sysfs-bus-iio, those entries are merged, which produces ABI
> documentation mixing the generic and the device-specific descriptions
> into a single output.
> 
> Worse than that, the "generic" content is at the end.
> 
> The same happens when generating the HTML output.
> 
> See, entries in the HTML output are ordered by the What: field,
> which the script treats as a unique key, as it is
> unique (except for IIO and a couple of other cases).
> 
> -
> 
> As I commented in an e-mail I sent to Greg, I see a few ways
> to solve it.
> 
> The most trivial one (which I used to solve a few conflicts in
> other places) is to place driver-specific details in a separate
> file under Documentation/driver-api, and mention it in the
> generic entries. The docs building system will generate cross
> references for Documentation/.../foo.rst files, so everything
> should be OK.

Hmm. That might work out OK.  These devices tend to be weird enough
that they probably could do with some additional explanation anyway. 

> 
> The second alternative, which I also used in a couple of places,
> is to modify the generic entry so that it contains the generic
> definition first, followed by per-device details.

I'll do an audit of what we actually have here. Perhaps we'll end
up with a mixture of these two options.

Might take a little while though.

> 
> There is a third possible alternative: add a new optional field
> (something like Scope:) which would be part of the unique key,
> if present. Implementing support for it could be tricky, as the
> produced output would likely need to create cross-references
> between the generic field (if present) and the per-device details.

That would be lovely but probably not worth the effort for something
that currently occurs so rarely.

Jonathan

> 
> Thanks,
> Mauro
> 
> PS.: I'm taking a few days of PTO during this week. So, it
> could take a while for me to reply again to this thread.



From xen-devel-bounces@lists.xenproject.org Sat Nov 14 16:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 16:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27227.55998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdxyC-0005TQ-8J; Sat, 14 Nov 2020 16:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27227.55998; Sat, 14 Nov 2020 16:00:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kdxyC-0005TJ-50; Sat, 14 Nov 2020 16:00:08 +0000
Received: by outflank-mailman (input) for mailman id 27227;
 Sat, 14 Nov 2020 12:33:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LicH=EU=gmx.de=nico.h@srs-us1.protection.inumbo.net>)
 id 1kdujx-0003XJ-Mf
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 12:33:14 +0000
Received: from mout.gmx.net (unknown [212.227.17.20])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40737012-c478-4d27-8f98-5d94b0d586f0;
 Sat, 14 Nov 2020 12:33:11 +0000 (UTC)
Received: from [192.168.4.129] ([130.180.109.134]) by mail.gmx.com (mrgmx105
 [212.227.17.168]) with ESMTPSA (Nemesis) id 1MulmF-1kLHca4AiJ-00rsVQ; Sat, 14
 Nov 2020 13:32:56 +0100
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=gmx.net;
	s=badeba3b8450; t=1605357176;
	bh=kZIeI3cpWWIfjsUKcxtJgaN82zmB/B3WIlGIEgdXIKQ=;
	h=X-UI-Sender-Class:Subject:To:Cc:References:From:Date:In-Reply-To;
	b=Q1ASGBoxbqMRdBVj05eYjnz/UVQB7H1MRC2mOZqpxv92v7VLlbibkFbqlBMpSYIGk
	 ADbKph8uyQDRq1dzlj9j25CqirTA50oVjp+IR6Trxf+jEv/Ncdy8hp1mSV0mbvtvkL
	 sqHRkKbdTryxRI+2zhkY/p062TfzgTQn/3dSiT/0=
X-UI-Sender-Class: 01bb95c1-4bf8-414a-932a-4f6e2808ef9c
Received: from [192.168.4.129] ([130.180.109.134]) by mail.gmx.com (mrgmx105
 [212.227.17.168]) with ESMTPSA (Nemesis) id 1MulmF-1kLHca4AiJ-00rsVQ; Sat, 14
 Nov 2020 13:32:56 +0100
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Daniel Kiper <daniel.kiper@oracle.com>, coreboot@coreboot.org,
 grub-devel@gnu.org, linux-kernel@vger.kernel.org,
 systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com,
 u-boot@lists.denx.de, x86@kernel.org, xen-devel@lists.xenproject.org
Cc: alecb@umass.edu, alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
 andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org, btrotter@gmail.com,
 dpsmith@apertussolutions.com, eric.devolder@oracle.com,
 eric.snowberg@oracle.com, hpa@zytor.com, hun@n-dimensional.de,
 javierm@redhat.com, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com,
 konrad.wilk@oracle.com, krystian.hebel@3mdeb.com, leif@nuviainc.com,
 lukasz.hawrylko@intel.com, luto@amacapital.net, michal.zygowski@3mdeb.com,
 mjg59@google.com, mtottenh@akamai.com, phcoder@gmail.com,
 piotr.krol@3mdeb.com, pjones@redhat.com, pmenzel@molgen.mpg.de,
 roger.pau@citrix.com, ross.philipson@oracle.com, tyhicks@linux.microsoft.com
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
From: Nico Huber <nico.h@gmx.de>
Message-ID: <f0845d6b-deab-957f-0807-1e989a6648ac@gmx.de>
Date: Sat, 14 Nov 2020 13:32:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi Daniel,

I think this is a good idea. Alas, as this is the first time I hear
about it, I lack the context of prior discussions. So bear with me
if I ask things that have already been answered.

On 14.11.20 00:52, Daniel Kiper wrote:
> The goal is to pass all logs produced by various boot components to the
> running OS. The OS kernel should expose these logs to the user space
> and/or process them internally if needed. The content of these logs
> should be human readable. However, they should also contain the
> information which allows admins to do e.g. boot time analysis.
>
> The log specification should be as much as possible platform agnostic
> and self contained. The final version of this spec should be merged into
> existing specifications, e.g. UEFI, ACPI, Multiboot2, or be a standalone
> spec, e.g. as a part of OASIS Standards. The former seems better but is
> not perfect either...
>
> Here is the description (pseudocode) of the structures which will be
> used to store the log data.

I guess using C syntax for your "pseudocode" isn't a good choice, as it
can confuse people and might lead to (unportable) implementations that
try to copy this definition to C. IMHO, it's much better for a
specification to provide exact bit/byte offsets. The protocol tool [P],
for instance, can be used to draw things in ASCII. A portable C
implementation could then use these offsets for proper (de)serialization
without structs that try to mimic the representation in memory.
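
To make the offset-based idea concrete, here is a minimal sketch of what
such a reader could look like; the example layout (version at offset 0,
a second field at offset 4) is hypothetical, not from the spec:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Read a little-endian 32-bit field at a fixed byte offset. This works
 * on any host, regardless of its endianness or struct padding rules. */
static uint32_t rd_le32(const uint8_t *buf, size_t off)
{
    return (uint32_t)buf[off]
         | (uint32_t)buf[off + 1] << 8
         | (uint32_t)buf[off + 2] << 16
         | (uint32_t)buf[off + 3] << 24;
}

/* Example buffer: a hypothetical 8-byte header with version = 1 at
 * offset 0 and a second 32-bit field (0x12345678) at offset 4. */
static const uint8_t example_hdr[8] = {
    0x01, 0x00, 0x00, 0x00,   /* version = 1, little-endian */
    0x78, 0x56, 0x34, 0x12,   /* second field = 0x12345678 */
};
```

Such accessors give the same result on big- and little-endian hosts,
which is exactly what struct casts cannot guarantee.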

> The members of struct bf_log:
>   - version: the firmware and bootloader log format version number, 1 for now,
>   - producer: the producer/firmware/bootloader/... type; the length
>     allows ASCII UUID storage if somebody needs that functionality,

So, is this always supposed to be a string?

>   - flags: it can be used to store information about log state, e.g.
>     it was truncated or not (does it make sense to have an information
>     about the number of lost messages?),

Truncation is an interesting point as I see no length for the available
space specified. I assume most implementations would want a field for
this. Otherwise they would have to track it separately.

In coreboot, we use a ring-buffer for messages, as it seems more useful
to keep the most recent messages; it's also extended across reboots and
suspend/resume cycles. For this, it would need an additional pointer to
where the oldest message resides, i.e. where to start reading messages.
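
Reading such a buffer out in order might look roughly like this (a
sketch only; the names and layout are hypothetical and not coreboot's
actual console format):

```c
#include <assert.h>
#include <stddef.h>

/* Copy a byte ring-buffer out in chronological order, given `head`,
 * the offset of the oldest byte. Indices wrap modulo the buffer size. */
static size_t ring_linearize(const unsigned char *ring, size_t size,
                             size_t head, unsigned char *out)
{
    for (size_t i = 0; i < size; i++)
        out[i] = ring[(head + i) % size];
    return size;
}
```

The point is simply that a consumer needs `head` in addition to the
buffer address and size, which is the extra pointer mentioned above.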

>   - next_bf_log_addr: address of next bf_log struct; none if zero

Do I understand this correctly that a later-stage boot component would
use this to add its own `bf_log` to the chain? E.g. if I start
initializing hardware with coreboot and then use GRUB2 to boot, each of
them would set up its own `bf_log` and GRUB2 would set this pointer if
possible?

> (I think
>     newer spec versions should not change anything in first 5 bf_log members;
>     this way older log parsers will be able to traverse/copy all logs regardless
>     of version used in one log or another),

Good point, which brings me to another good practice regarding such
data formats: A length field for the header. In this case the length
from the start of `bf_log` to the start of `msgs`. This would give
us backwards compatibility in case additional fields are added in
the future. It would also allow the various implementations to add
custom fields (not for communication with the log parser but for their
own use).

>   - next_msg_off: the offset, in bytes, from the beginning of the bf_log struct,
>     of the next byte after the last log message in the msgs[]; i.e. the offset
>     of the next available log message slot; it is equal to the total size of
>     the log buffer including the bf_log struct,
>   - msgs: the array of log messages,
>   - should we add CRC or hash or signatures here?
>
> The members of struct bf_log_msg:
>   - size: total size of bf_log_msg struct,

Does this include the actual message string?

>   - ts_nsec: timestamp expressed in nanoseconds starting from 0,

But what is 0? In coreboot, we log timestamps relative to the last
reset, which, if applied to our log ring-buffer, might make things
confusing because the buffer can contain messages from multiple boots.

>   - level: similar to syslog meaning; can be used to differentiate normal messages
>     from debug messages; the exact interpretation depends on the current producer
>     type specified in the bf_log.producer,
>   - facility: similar to syslog meaning; can be used to differentiate the sources of
>     the messages, e.g. message produced by networking module; the exact interpretation
>     depends on the current producer type specified in the bf_log.producer,
>   - msg_off: the log message offset in strings[],
>   - strings[0]: the beginning of log message type, similar to the facility member but
>     NUL terminated string instead of integer; this will be used by, e.g., the GRUB2
>     for messages printed using grub_dprintf(),

I don't think this is a good idea. It seems you want to start a new spec
that already supports two competing formats (the `facility` field and
this string). I know it's sometimes hard to make everybody happy, but I
think we should decide on a single format. I'll try to find some time
to read about this GRUB string and the prior discussions.

>   - strings[msg_off]: the beginning of log message, NUL terminated string.

> There is still the not fully solved problem of how the logs should be presented to the OS.
> On the UEFI platforms we can use config tables to do that. Then probably
> bf_log.next_bf_log_addr should not be used. On the ACPI and Device Tree platforms
> we can use these mechanisms to present the logs to the OSes. The situation gets more
> difficult if neither of these mechanisms is present. However, maybe we should not
> bother too much about that, because these platforms are getting less and less
> common.

There is also the question of how a later-stage boot component would do
this. It might not be easy for them to adapt ACPI, for instance (and
if no `bf_log` chain is set up yet, it can't extend that either). Maybe
just leave this open, besides references to the big ones of course, which
may need some assigned number (UEFI? ACPI?).

Nico

[P] http://www.luismg.com/protocol/


From xen-devel-bounces@lists.xenproject.org Sat Nov 14 18:36:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 18:36:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27318.56010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke0PY-000260-DE; Sat, 14 Nov 2020 18:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27318.56010; Sat, 14 Nov 2020 18:36:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke0PY-00025t-9s; Sat, 14 Nov 2020 18:36:32 +0000
Received: by outflank-mailman (input) for mailman id 27318;
 Sat, 14 Nov 2020 18:36:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ke0PX-00025n-0k
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 18:36:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99afc28b-38de-47d7-9fc4-09757615d54a;
 Sat, 14 Nov 2020 18:36:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke0PU-00056K-8R; Sat, 14 Nov 2020 18:36:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke0PT-0005xT-SP; Sat, 14 Nov 2020 18:36:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ke0PT-0001m5-Rw; Sat, 14 Nov 2020 18:36:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MVx3/6oeyRn40F5U9NRF3VTA7Bc5SYtGFtDmCiTiXkE=; b=34oqK6dDhEwR/iAtcq2dcmRbYD
	dpKLJDn1MiM0HP5SGHoph4tOS7cEPtIrmSZaTwT3LI9jHX+Hvkz0aO6/HN8Ny151rrbMsPERIJqFG
	SU0m7RUiPoA3ER3WAz1WoTgD7gWHcykjoehxlFQbLW7zO5HmFqbs8RtkHq/nyMOARsfM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156799-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156799: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 18:36:27 +0000

flight 156799 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156799/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 156737

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156737
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156737
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156737
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156737
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156737
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156737
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156737
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156737
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156737
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156737
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156737
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156799  2020-11-14 02:33:12 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Nov 14 21:11:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 14 Nov 2020 21:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27341.56028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke2pb-0007lM-PG; Sat, 14 Nov 2020 21:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27341.56028; Sat, 14 Nov 2020 21:11:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke2pb-0007lF-MF; Sat, 14 Nov 2020 21:11:35 +0000
Received: by outflank-mailman (input) for mailman id 27341;
 Sat, 14 Nov 2020 21:11:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2JWW=EU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ke2pZ-0007kg-NI
 for xen-devel@lists.xenproject.org; Sat, 14 Nov 2020 21:11:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85e29448-4c0e-4a3c-acff-a71cf1143e4b;
 Sat, 14 Nov 2020 21:11:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke2pR-0008K8-Q0; Sat, 14 Nov 2020 21:11:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke2pR-0002hs-Gd; Sat, 14 Nov 2020 21:11:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ke2pR-0005pj-GB; Sat, 14 Nov 2020 21:11:25 +0000
X-Inumbo-ID: 85e29448-4c0e-4a3c-acff-a71cf1143e4b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d+kCVCb/aa80cN+0eOv5kPzHhlAVhJGBn8EteYYQX94=; b=gIacxz1Fs/ZtDkNO1woQZhY4MT
	7AuTv5FD+4XyWBKR71E+lprRdIG2vd4KB/+Y7MPn5sHXzqxiWxhV3M+MFX8zcJwC4B9N0z/i7S7M7
	ymiF96KPFlpnKbUUN1uw80LOTqoBgunGvDtCbGph1KIDrL/3SpWqPP1jS5duOPOFdRVo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156801-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156801: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f01c30de86f1047e9bae1b1b1417b0ce8dcd15b1
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 14 Nov 2020 21:11:25 +0000

flight 156801 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156801/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f01c30de86f1047e9bae1b1b1417b0ce8dcd15b1
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  106 days
Failing since        152366  2020-08-01 20:49:34 Z  105 days  172 attempts
Testing same since   156801  2020-11-14 10:05:00 Z    0 days    1 attempts

------------------------------------------------------------
3506 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 670125 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 00:35:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 00:35:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27359.56042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke606-0000et-5F; Sun, 15 Nov 2020 00:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27359.56042; Sun, 15 Nov 2020 00:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke606-0000em-1y; Sun, 15 Nov 2020 00:34:38 +0000
Received: by outflank-mailman (input) for mailman id 27359;
 Sun, 15 Nov 2020 00:34:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ke604-0000e4-35
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 00:34:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 414f5b09-8dad-4fc3-9867-ac245375a0ff;
 Sun, 15 Nov 2020 00:34:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke5zu-0004Zm-S8; Sun, 15 Nov 2020 00:34:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke5zu-0003OW-FT; Sun, 15 Nov 2020 00:34:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ke5zu-0004gk-Ez; Sun, 15 Nov 2020 00:34:26 +0000
X-Inumbo-ID: 414f5b09-8dad-4fc3-9867-ac245375a0ff
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 414f5b09-8dad-4fc3-9867-ac245375a0ff;
	Sun, 15 Nov 2020 00:34:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UZ2o/sg9kHKAKyNhWLG1xveXF5hp1SB/QNkSzSo2ka4=; b=C3v35ZNcgFgSsUEOEKdvorctAk
	ygbDxidcGzzPEvxEqQwk/RUX7yOzJl9jZIzMD4Xsy6DzNHfjBKKDyYor2GdLSiNATKEVZv87H6rbR
	6ssgZH3F1Ld37X1MKxredQoZ0eEbQtkCU/Y1unadyWsLrTzlhkJqcN9nz5Qd7vcyJs7Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156803-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156803: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=5ececc3a0b0086c6168e12f4d032809477b30fe5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 00:34:26 +0000

flight 156803 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156803/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                5ececc3a0b0086c6168e12f4d032809477b30fe5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   86 days
Failing since        152659  2020-08-21 14:07:39 Z   85 days  182 attempts
Testing same since   156768  2020-11-13 23:36:57 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 broken  
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken
broken-step test-arm64-arm64-libvirt-xsm host-install(5)

Not pushing.

(No revision log; it would be 64708 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 04:25:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 04:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27377.56055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke9au-0000PF-Ej; Sun, 15 Nov 2020 04:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27377.56055; Sun, 15 Nov 2020 04:24:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ke9au-0000P7-99; Sun, 15 Nov 2020 04:24:52 +0000
Received: by outflank-mailman (input) for mailman id 27377;
 Sun, 15 Nov 2020 04:24:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ke9at-0000P2-1d
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 04:24:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65caef54-49a8-40e0-91a1-371989b152f4;
 Sun, 15 Nov 2020 04:24:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke9aq-0005aB-87; Sun, 15 Nov 2020 04:24:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ke9ap-0003rj-VS; Sun, 15 Nov 2020 04:24:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ke9ap-00020g-To; Sun, 15 Nov 2020 04:24:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1ke9at-0000P2-1d
	for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 04:24:51 +0000
X-Inumbo-ID: 65caef54-49a8-40e0-91a1-371989b152f4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 65caef54-49a8-40e0-91a1-371989b152f4;
	Sun, 15 Nov 2020 04:24:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1QVHcA+cN8ix7TCkch1Kb4mt4/sLij5dE37DvoA7WiA=; b=R+rsmZnANlfItn7ejbvk/zb6w0
	L3iu4e5OlbvO8mGtBbUjIcr63yruDMAl/r15R/QNo07miiR0xBLNe+fBnUhpSubcClLxvXixSZ7by
	YV5TfwKNifsLJko/K3Hol0CxWQlrxvMD20sYdq3Ve9GjuChcwASeMsP/9BWvw1ieaxjc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ke9aq-0005aB-87; Sun, 15 Nov 2020 04:24:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ke9ap-0003rj-VS; Sun, 15 Nov 2020 04:24:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ke9ap-00020g-To; Sun, 15 Nov 2020 04:24:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156804-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156804: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e28c0d7c92c89016c12a677616668957351e7542
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 04:24:47 +0000

flight 156804 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156804/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e28c0d7c92c89016c12a677616668957351e7542
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  106 days
Failing since        152366  2020-08-01 20:49:34 Z  105 days  173 attempts
Testing same since   156804  2020-11-14 21:41:08 Z    0 days    1 attempts

------------------------------------------------------------
3512 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 671151 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 06:04:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 06:04:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27394.56072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keB8j-0001IH-RM; Sun, 15 Nov 2020 06:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27394.56072; Sun, 15 Nov 2020 06:03:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keB8j-0001IA-OT; Sun, 15 Nov 2020 06:03:53 +0000
Received: by outflank-mailman (input) for mailman id 27394;
 Sun, 15 Nov 2020 06:03:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keB8i-0001Hc-QJ
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 06:03:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef77903d-91f6-4910-a3bb-3d1250ec1607;
 Sun, 15 Nov 2020 06:03:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keB8Z-0007xB-To; Sun, 15 Nov 2020 06:03:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keB8Z-0000S9-JQ; Sun, 15 Nov 2020 06:03:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keB8Z-0004BO-Ie; Sun, 15 Nov 2020 06:03:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1keB8i-0001Hc-QJ
	for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 06:03:52 +0000
X-Inumbo-ID: ef77903d-91f6-4910-a3bb-3d1250ec1607
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ef77903d-91f6-4910-a3bb-3d1250ec1607;
	Sun, 15 Nov 2020 06:03:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qQ1f2n+GqvaL0w69rT68+RM1xylrJCUQeEGMr9/RsEc=; b=L3r2qKyMc0F2fttCshyGRjpfEj
	69e0GBwRuUMsbH3rTrzBw+iH5Ln8ALk2f4UOfSdlsHNxBicB8BUnzUogJNZzH12/4vkVrIgIGTUly
	bnH43f8APTuCWOxldzNeeXVI6z+0mtUeG5BEqK5pm+QCIw2VjqxTFz6Q1jcr2MjvPRHE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keB8Z-0007xB-To; Sun, 15 Nov 2020 06:03:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keB8Z-0000S9-JQ; Sun, 15 Nov 2020 06:03:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keB8Z-0004BO-Ie; Sun, 15 Nov 2020 06:03:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156806-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156806: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d448574e73108f031ea6b02994f2579bb574785a
X-Osstest-Versions-That:
    ovmf=5e9a8a6dfb152472c5d12a3940069b16c774f0fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 06:03:43 +0000

flight 156806 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156806/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d448574e73108f031ea6b02994f2579bb574785a
baseline version:
 ovmf                 5e9a8a6dfb152472c5d12a3940069b16c774f0fc

Last test of basis   156786  2020-11-14 01:09:48 Z    1 day
Testing same since   156806  2020-11-15 00:39:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Maurice Ma <maurice.ma@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5e9a8a6dfb..d448574e73  d448574e73108f031ea6b02994f2579bb574785a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 08:16:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 08:16:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27413.56088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keDCS-0004sH-8L; Sun, 15 Nov 2020 08:15:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27413.56088; Sun, 15 Nov 2020 08:15:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keDCS-0004sA-5G; Sun, 15 Nov 2020 08:15:52 +0000
Received: by outflank-mailman (input) for mailman id 27413;
 Sun, 15 Nov 2020 08:15:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keDCQ-0004rW-Ne
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 08:15:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3b70306-b197-4284-9dc4-3f95922456e7;
 Sun, 15 Nov 2020 08:15:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keDCK-0002fy-4l; Sun, 15 Nov 2020 08:15:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keDCJ-0002Xs-PZ; Sun, 15 Nov 2020 08:15:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keDCJ-0001hV-P6; Sun, 15 Nov 2020 08:15:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1keDCQ-0004rW-Ne
	for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 08:15:50 +0000
X-Inumbo-ID: d3b70306-b197-4284-9dc4-3f95922456e7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d3b70306-b197-4284-9dc4-3f95922456e7;
	Sun, 15 Nov 2020 08:15:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yfp7c1CBHsIXrxkaS6A2syTuWxFIAm7RoVdbcQfv3Oo=; b=AULOv+zYLYFuCDPfSBRvvvI5cf
	SIcoakffAbFmR6Yl9dm+uTDNZzmIMb3Qh3qG7VaLZkK4zpgtSCZgNdANaeuMcVOtx4A+UMzOacAaT
	WRwkbXe0HZUr35pyU4DdPwc2ZIwDeQaARZJ7lzzsGKWnJQV5niZWhtFbXWtPb78G9BqY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keDCK-0002fy-4l; Sun, 15 Nov 2020 08:15:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keDCJ-0002Xs-PZ; Sun, 15 Nov 2020 08:15:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keDCJ-0001hV-P6; Sun, 15 Nov 2020 08:15:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156805-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156805: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 08:15:43 +0000

flight 156805 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156805/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  5 host-install(5)       broken blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   86 days
Failing since        152659  2020-08-21 14:07:39 Z   85 days  183 attempts
Testing same since   156805  2020-11-15 00:37:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 broken  
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken
broken-step test-arm64-arm64-libvirt-xsm host-install(5)

Not pushing.

(No revision log; it would be 64763 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 09:58:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 09:58:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27427.56100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keEnk-0005Mj-Tu; Sun, 15 Nov 2020 09:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27427.56100; Sun, 15 Nov 2020 09:58:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keEnk-0005Mc-QH; Sun, 15 Nov 2020 09:58:28 +0000
Received: by outflank-mailman (input) for mailman id 27427;
 Sun, 15 Nov 2020 09:58:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keEnj-0005MS-O7
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 09:58:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae727bea-1ae8-40f3-86e8-1989cb1bf043;
 Sun, 15 Nov 2020 09:58:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keEng-0004ig-T9; Sun, 15 Nov 2020 09:58:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keEng-0007Kv-J7; Sun, 15 Nov 2020 09:58:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keEng-0006Nw-Ic; Sun, 15 Nov 2020 09:58:24 +0000
X-Inumbo-ID: ae727bea-1ae8-40f3-86e8-1989cb1bf043
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mmMhuCafIuc0Q8uoUFMcRLCLTF7766gouFmDsQCk0J8=; b=gSE9KwscwHU5p1X0n3wlyrzF2N
	HmnYqCjdE6oIvR3n+9VuW2iSK4GM/8mhbQMi9yWC+Lj6VVgUd1uHi7hiFwROPYb3NSDgYV5TgreMc
	WG0eVrS4xpTFSEckRPsReu9wbPTynW/Qku/FM8SjUF1UcklES4y81v3WngGyvMviQsgw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156811-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156811: all pass - PUSHED
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=3059178798a23ba870ff86ff54d442a07e6651fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 09:58:24 +0000

flight 156811 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156811/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  3059178798a23ba870ff86ff54d442a07e6651fc

Last test of basis   156681  2020-11-11 09:19:28 Z    4 days
Testing same since   156811  2020-11-15 09:18:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Penny Zheng <penny.zheng@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3059178798..5505f5f8e7  5505f5f8e7e805365cfe70b6a4af6115940bb749 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 10:22:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 10:22:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27439.56118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keFAR-000848-Uh; Sun, 15 Nov 2020 10:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27439.56118; Sun, 15 Nov 2020 10:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keFAR-000841-Qj; Sun, 15 Nov 2020 10:21:55 +0000
Received: by outflank-mailman (input) for mailman id 27439;
 Sun, 15 Nov 2020 10:21:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keFAQ-00083T-RX
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 10:21:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10b20a55-4d85-480c-bed6-a57c25bb097c;
 Sun, 15 Nov 2020 10:21:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keFAJ-0005Hc-LI; Sun, 15 Nov 2020 10:21:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keFAJ-0000Xg-A7; Sun, 15 Nov 2020 10:21:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keFAJ-0007yV-9X; Sun, 15 Nov 2020 10:21:47 +0000
X-Inumbo-ID: 10b20a55-4d85-480c-bed6-a57c25bb097c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hEL5L60tHEMOxyERhwQZFsUg2h+5+vwK7Axu1mY7aU8=; b=cz52FpRCPwqB74Oicx/2kgsLn7
	qqhB4IEPbSpLps8syLD/2QT7k+2t/ezLPiDsgSa1kC0W15ouSTQ2c6RIVIp3YYq2Wu25TRgohwPtK
	IcqL77XzFrJC3NdR79Co/y2A1IDsEG+5D8LHcgnej4tHkyT3XOjHeTjeG5OJhe21XsCE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156808-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156808: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=77549339838b44cd32b576e06701d1f7b4518fb5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 10:21:47 +0000

flight 156808 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156808/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              77549339838b44cd32b576e06701d1f7b4518fb5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  128 days
Failing since        151818  2020-07-11 04:18:52 Z  127 days  122 attempts
Testing same since   156800  2020-11-14 04:19:16 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26941 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 12:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 12:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27463.56133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keHEW-0002kY-OI; Sun, 15 Nov 2020 12:34:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27463.56133; Sun, 15 Nov 2020 12:34:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keHEW-0002kR-JR; Sun, 15 Nov 2020 12:34:16 +0000
Received: by outflank-mailman (input) for mailman id 27463;
 Sun, 15 Nov 2020 12:34:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keHEU-0002jt-Tn
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 12:34:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23d391d5-4e44-47d8-b5a2-45ffca5b4804;
 Sun, 15 Nov 2020 12:34:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keHEN-0007vk-3D; Sun, 15 Nov 2020 12:34:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keHEM-0006t8-Pp; Sun, 15 Nov 2020 12:34:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keHEM-0005tV-PI; Sun, 15 Nov 2020 12:34:06 +0000
X-Inumbo-ID: 23d391d5-4e44-47d8-b5a2-45ffca5b4804
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O5voES5ZdO90PVzu/ot+5ty63tRyivQ64aC0GnkRRAk=; b=qglCQ5kRFZlbuzGCXZEVg7CevH
	17GKDwCDbrt2DHEkYzt0PwaND8MrA5Qp1fPT6xxtdhNlbBs7QJUc2OpSaQLxqENklFtdZOm7VCY9H
	owJw/gvJdnBiUsC/vRNZCpB+ZLff7BQ1Z3RG0pJgRgZ5wzBOJPonwUTTMt70a3iF7Wlo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156807-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156807: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 12:34:06 +0000

flight 156807 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156807/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156799
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156799
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156799
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156799
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156799
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156799
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156799
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156799
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156799
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156799
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156799
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156807  2020-11-15 01:52:30 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Nov 15 14:04:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 14:04:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27482.56145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keIdC-0002Nj-Uw; Sun, 15 Nov 2020 14:03:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27482.56145; Sun, 15 Nov 2020 14:03:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keIdC-0002Nc-Rp; Sun, 15 Nov 2020 14:03:50 +0000
Received: by outflank-mailman (input) for mailman id 27482;
 Sun, 15 Nov 2020 14:02:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A1c3=EV=gmail.com=james.dutton@srs-us1.protection.inumbo.net>)
 id 1keIbv-0002Kf-NA
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 14:02:31 +0000
Received: from mail-yb1-xb2a.google.com (unknown [2607:f8b0:4864:20::b2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78ad5877-1c27-4d9a-9a30-acd3291397c2;
 Sun, 15 Nov 2020 14:02:30 +0000 (UTC)
Received: by mail-yb1-xb2a.google.com with SMTP id 2so13216046ybc.12
 for <xen-devel@lists.xenproject.org>; Sun, 15 Nov 2020 06:02:30 -0800 (PST)
X-Inumbo-ID: 78ad5877-1c27-4d9a-9a30-acd3291397c2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=uaycsh/Ik3XAEZXuyRKHaZonhS0PXqRf8Lf2yU3Fxgo=;
        b=A0HmQ68LadjwBsod6T+kmY2sq1lSPfafa6jFAgDmtv5rFQKjPm86LpSz1h6LWe3ZbL
         CaBjivk9NGMOg30aZr3ywwHe8/UZyT5kvm7hM1029NH39X4EF0au1h+3+fLprSe+Skfl
         T6MTzpvClUnutdqBRO8Bqu+M11q5DXvUa3VSXqVPi/7aHjJJWDZyp+9F0CbSX52gRLbL
         bzAfuxVxwROT+1BcPT7UOFqqZMPOExONA5GgdO2yVYjaGcs3Tp7/6gz6axNg7hwvHnV5
         4Nm8nhyZLfLsw4TjiQKxTifYZocL4LlIVRLT3FwOdDXQfOc0rh0upgp5SvG9KbBgSYBD
         VzRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=uaycsh/Ik3XAEZXuyRKHaZonhS0PXqRf8Lf2yU3Fxgo=;
        b=DmiGnrXQ8/uKhMaWFRrPnY3SuVw+cqLh3H66XnV3OdE2FYGq04dj/5ROsb9PS8BKPt
         IkhhRrLnvUhsvuRw96yzpfxbBES24rV9cQQ18L4S5RDxrVEMOB9wGiAupMaxlAAX/aaq
         8HdniN1O8uGz467F6QxIPzJYxY1lzEEF//sFFR5f26oscGTML8Obd02uKDLBAvuH0YZ+
         DZRlnLO9kuIvkPLoyF8zn2jYg+PI0l5bRi2UjstIIHS4r2o+jPxEgVaXFZcVeEAM9Awm
         GV3IScbRb165HWRCWuLulQTshAQpDymV2bqwQDxg4S+vQGHgFxiGW0S00fW+9QrjHs4A
         Lesw==
X-Gm-Message-State: AOAM531dB+/h9v4RdNZrBnQXZnb5TTUSKxyjywjH2EyfXjFOcORJ/3II
	mub1Q7sqRoR90gHBW5KUa3nkqn92ODv32K6HGKs=
X-Google-Smtp-Source: ABdhPJyz078fmC/gcvQOY9gCbfhitY+RBefdLUghubVKwOgJOu1RJeEwaK8xNmYWk9EHX3AUrqmYWVeMQQd3EEofU7g=
X-Received: by 2002:a25:1886:: with SMTP id 128mr15241593yby.163.1605448949742;
 Sun, 15 Nov 2020 06:02:29 -0800 (PST)
MIME-Version: 1.0
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl> <f0845d6b-deab-957f-0807-1e989a6648ac@gmx.de>
In-Reply-To: <f0845d6b-deab-957f-0807-1e989a6648ac@gmx.de>
From: James Courtier-Dutton <james.dutton@gmail.com>
Date: Sun, 15 Nov 2020 14:01:53 +0000
Message-ID: <CAAMvbhFeuzEihOGan6r90CWxgshin3APvnLPCzJxXD-aJhF20g@mail.gmail.com>
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Nico Huber <nico.h@gmx.de>
Cc: Daniel Kiper <daniel.kiper@oracle.com>, coreboot@coreboot.org, 
	grub-devel <grub-devel@gnu.org>, LKML Mailing List <linux-kernel@vger.kernel.org>, 
	systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com, 
	u-boot@lists.denx.de, x86@kernel.org, xen-devel@lists.xenproject.org, 
	alecb@umass.edu, alexander.burmashev@oracle.com, allen.cryptic@gmail.com, 
	andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org, btrotter@gmail.com, 
	dpsmith@apertussolutions.com, eric.devolder@oracle.com, 
	eric.snowberg@oracle.com, "H. Peter Anvin" <hpa@zytor.com>, hun@n-dimensional.de, 
	javierm@redhat.com, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com, 
	konrad.wilk@oracle.com, krystian.hebel@3mdeb.com, leif@nuviainc.com, 
	lukasz.hawrylko@intel.com, luto@amacapital.net, michal.zygowski@3mdeb.com, 
	Matthew Garrett <mjg59@google.com>, mtottenh@akamai.com, phcoder@gmail.com, 
	piotr.krol@3mdeb.com, pjones@redhat.com, pmenzel@molgen.mpg.de, 
	roger.pau@citrix.com, ross.philipson@oracle.com, tyhicks@linux.microsoft.com
Content-Type: multipart/alternative; boundary="0000000000005ac01e05b425b6c0"

--0000000000005ac01e05b425b6c0
Content-Type: text/plain; charset="UTF-8"

On Sat, 14 Nov 2020 at 12:37, Nico Huber <nico.h@gmx.de> wrote:

> > (I think
> >     newer spec versions should not change anything in first 5 bf_log members;
> >     this way older log parsers will be able to traverse/copy all logs regardless
> >     of version used in one log or another),
>
> Good point, which brings me to another good practice regarding such
> data formats: A length field for the header. In this case the length
> from the start of `bf_log` to the start of `msgs`. This would give
> us backwards compatibility in case additional fields are added in
> the future. And would also allow the various implementation to add
> custom fields (not for communication with log parser but for their
> own use).
>
A fairly future-proof approach is to use a TLV:
Type, Length, Value.
The approach can be nested, so other TLVs can sit within the bytes of
the value of the parent TLV.
This makes it very easy for the reader of a message to skip any Types
it does not understand.
For example, the structure you describe could go in the "Value" part
of the TLV.
This is a common approach, used by RADIUS, Protobuf, Avro etc.
If anyone wishes to add extra parameters, they can create a new Type
and put the new parameters in the Value.
TLV is also already used elsewhere in the kernel, in the ALSA sound
interface, to pass extra information about a sound control, e.g. dB
values, min/max values etc.
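As a toy illustration of the skip-unknown-types property (Python here
purely for brevity; the spec itself would of course define C structs,
and the 2-byte type / 4-byte length field widths below are an arbitrary
choice for the sketch, not a proposal):

```python
import struct

# Each TLV here is: 2-byte little-endian type, 4-byte length of the
# value, then the value bytes. TLVs nest by placing encoded TLVs
# inside a parent TLV's value.

def encode_tlv(tlv_type, value):
    return struct.pack("<HI", tlv_type, len(value)) + value

def walk_tlvs(data, known_types):
    """Yield (type, value) for known types, silently skipping unknown ones."""
    offset = 0
    while offset + 6 <= len(data):
        tlv_type, length = struct.unpack_from("<HI", data, offset)
        value = data[offset + 6 : offset + 6 + length]
        offset += 6 + length          # skip works without understanding the type
        if tlv_type in known_types:
            yield tlv_type, value

# A "log entry" TLV (type 0x10) whose value nests two inner TLVs,
# one of a type (0x99) that an older parser would not recognise.
inner = encode_tlv(1, b"boot: hello") + encode_tlv(0x99, b"future field")
stream = encode_tlv(0x10, inner)

(outer_type, outer_value), = walk_tlvs(stream, known_types={0x10})
parsed = list(walk_tlvs(outer_value, known_types={1}))
print(parsed)  # the unknown 0x99 TLV is stepped over without error
```

An old parser that only knows type 1 still traverses the whole stream
correctly, which is exactly the forward-compatibility property wanted
for the log format.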

Kind Regards

James

--0000000000005ac01e05b425b6c0--


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 16:57:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 16:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27510.56157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keLLN-00012t-PX; Sun, 15 Nov 2020 16:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27510.56157; Sun, 15 Nov 2020 16:57:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keLLN-00012m-MI; Sun, 15 Nov 2020 16:57:37 +0000
Received: by outflank-mailman (input) for mailman id 27510;
 Sun, 15 Nov 2020 16:57:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keLLM-00012h-F7
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 16:57:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51f0d0e1-a035-49a9-84aa-cb1e9112f0f2;
 Sun, 15 Nov 2020 16:57:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keLLJ-0005LX-Mz; Sun, 15 Nov 2020 16:57:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keLLJ-0000lu-Bg; Sun, 15 Nov 2020 16:57:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keLLJ-0006Gj-BD; Sun, 15 Nov 2020 16:57:33 +0000
X-Inumbo-ID: 51f0d0e1-a035-49a9-84aa-cb1e9112f0f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wFDG4Ujru8SObZUKMZQSuyciVCa90U+VcGTLoroAbtY=; b=K8LMYW8qWey5i/kUySXL2hFJSa
	1BjMkFW19KKdkWn/BZ8QRDDNo9P9R6B8QyLeETCXHKTpXSSRvx6PRFkP39FXIsi+v+aiolPQTPKha
	Hl/uO6yuiBsM1CmAkyLgh3uG+P9o5lLpnsFliS8nri6eNzIGn7FiYukBJ63TpRjiPmw8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156809-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156809: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e28c0d7c92c89016c12a677616668957351e7542
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 16:57:33 +0000

flight 156809 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156809/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 156804 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 156804 pass in 156809
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 156804 pass in 156809
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 156804
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10     fail pass in 156804
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156804

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 156804 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e28c0d7c92c89016c12a677616668957351e7542
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  106 days
Failing since        152366  2020-08-01 20:49:34 Z  105 days  174 attempts
Testing same since   156804  2020-11-14 21:41:08 Z    0 days    2 attempts

------------------------------------------------------------
3512 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 671151 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 17:49:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 17:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27520.56172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keM9u-0005bU-N4; Sun, 15 Nov 2020 17:49:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27520.56172; Sun, 15 Nov 2020 17:49:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keM9u-0005bN-JJ; Sun, 15 Nov 2020 17:49:50 +0000
Received: by outflank-mailman (input) for mailman id 27520;
 Sun, 15 Nov 2020 17:49:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b2Sd=EV=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1keM9s-0005bI-VE
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 17:49:48 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40e847f8-358c-4319-9dea-551fcca8111c;
 Sun, 15 Nov 2020 17:49:45 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AFHnh89013401
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK)
 for <xen-devel@lists.xenproject.org>; Sun, 15 Nov 2020 18:49:44 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 70A002E9CA8; Sun, 15 Nov 2020 18:49:38 +0100 (MET)
Date: Sun, 15 Nov 2020 18:49:38 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: netbsd PVH dom0: xen clock event stops
Message-ID: <20201115174938.GA3562@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Sun, 15 Nov 2020 18:49:44 +0100 (MET)

Hello,
I spent some more time debugging NetBSD as a PVH dom0 on Xen.
With Roger's patch to avoid a Xen panic, the NetBSD kernel stalls while
configuring devices. At first I thought it was an issue with hardware
interrupts, but it more likely is an issue with Xen timer events.
Specifically: virtual CPU 0 stops receiving timer events, while the other
CPUs keep receiving them. I tried to force a timer rearm, but this didn't help.
The event is neither masked nor pending on the Xen or NetBSD side, as
confirmed by the 'q' debug key. Other events (the Xen console, the debug
event) are properly received by CPU0. I don't know how to debug this
further at this point.

In case it helps, I put my Xen and NetBSD kernels at
http://www-soc.lip6.fr/~bouyer/netbsd-dom0-pvh/
I boot it from the NetBSD boot loader with:
menu=Boot Xen PVH:load /netbsd-test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug

I guess with GRUB this would be:
kernel /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug
module /netbsd-test console=com0 root=dk0 -vx

(yes, com2 for Xen and com0 for NetBSD; that's not a bug :)
You can enter the NetBSD debugger with
+++++
and then enter commands, like
sh ev /i
to see the interrupt counters.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 17:53:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 17:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27527.56184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keMDn-0006VB-79; Sun, 15 Nov 2020 17:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27527.56184; Sun, 15 Nov 2020 17:53:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keMDn-0006V4-3q; Sun, 15 Nov 2020 17:53:51 +0000
Received: by outflank-mailman (input) for mailman id 27527;
 Sun, 15 Nov 2020 17:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b2Sd=EV=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1keMDm-0006Uz-00
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 17:53:50 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a7ab37b-bfcf-405b-8b37-1310ffdb34ec;
 Sun, 15 Nov 2020 17:53:48 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AFHrkAL006417
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK)
 for <xen-devel@lists.xenproject.org>; Sun, 15 Nov 2020 18:53:47 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 31BD72E9CA8; Sun, 15 Nov 2020 18:53:41 +0100 (MET)
Date: Sun, 15 Nov 2020 18:53:41 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201115175341.GA4208@antioche.eu.org>
References: <20201115174938.GA3562@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201115174938.GA3562@antioche.eu.org>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Sun, 15 Nov 2020 18:53:47 +0100 (MET)

On Sun, Nov 15, 2020 at 06:49:38PM +0100, Manuel Bouyer wrote:
> Hello,
> I spent some more time debugging NetBSD as a PVH dom0 on Xen.
> With Roger's patch to avoid a Xen panic, the NetBSD kernel stalls while
> configuring devices. At first I thought it was an issue with hardware
> interrupts, but it more likely is an issue with Xen timer events.
> Specifically: virtual CPU 0 stops receiving timer events, while the other
> CPUs keep receiving them. I tried to force a timer rearm, but this didn't help.
> The event is neither masked nor pending on the Xen or NetBSD side, as
> confirmed by the 'q' debug key. Other events (the Xen console, the debug
> event) are properly received by CPU0. I don't know how to debug this
> further at this point.

I forgot to mention: the same NetBSD kernel boots fine as a PVH domU.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 18:24:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 18:24:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27561.56196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keMhS-0000uy-Lv; Sun, 15 Nov 2020 18:24:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27561.56196; Sun, 15 Nov 2020 18:24:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keMhS-0000ur-Io; Sun, 15 Nov 2020 18:24:30 +0000
Received: by outflank-mailman (input) for mailman id 27561;
 Sun, 15 Nov 2020 18:24:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uMBr=EV=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1keMhQ-0000um-4M
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 18:24:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a54f419-9bdb-43b4-9f27-aa342aa68d22;
 Sun, 15 Nov 2020 18:24:26 +0000 (UTC)
X-Inumbo-ID: 2a54f419-9bdb-43b4-9f27-aa342aa68d22
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605464667;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=rfb4s4OQ7p3lwNBnAE6SpE1vfV+c7yeIlJBVPfXOURo=;
  b=RxUXY97b/TGmy3YIUU0SNGrYFluAvws+4kWlJgBF9BH4lHy24SFoNZCX
   sLvAvMFRgYUwdXIou2uYwZEZm2thd3G2gzA8o3qhhdbF7F5dq3kYx4ti9
   zji3OAqPl7x5Bh/4/9sqy5Z7P8Ms52xW81yeEUZ/EvFt/dZ0SBGyJTjUO
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: wHN3Na1BouZutdlH/fyud1HNrusBFdhIwzQ+RuLLL0ivWv+0I1/AedbmSN/lreGObPEtuxeoNR
 IHGjw5fc3BfagbcXp36oYXNRenTRzGqstQkUBG7uWvL84+ZYAZ/RcKR6t06Z01ci5JQbfEyRCx
 U8DyQDxr1l/97uFcqC2dofZIyzhbXpJWcj6UnxW0tOVd0bI//APMpGElcRL7yN1bK8MgRfW977
 eb4yxUNDIkrChg3LykJwJGMsnC+8Ee/fb5FT3oLK9fQUFV1Oe3Xj73s08pv0Jy3tQU3pq5H97r
 JxU=
X-SBRS: None
X-MesageID: 31551867
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,480,1596513600"; 
   d="scan'208";a="31551867"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kuhO3K6q+lYqaKX2of4HCH+xd9MKWa5shNV+eIv07/riv7CuzmlpPFpoFNntZ8775LfPDatUFuWdEiDLs4dS/wVEat/B4fN4OQ5zcbI3JapEFTKMDjMnpBrDm5SZGfMypmKZ3FYD6mGgnXlNxuX45SV52QDQeVXmtgb1Z6jBSKrVFh4VLnOrx58K92Fakscct36E8LvzPxTqaESIUJM6nzpjUdcgOTBeAn9RuupIgPP00c7el+16yB8WDFvVKM8WlTDQ1+7guyaVekL+wa51KQn1Hd0/YFIjo5RS09oVYIrc4GZw7X028nh94nDcX/JQiJU1DFivSI9OhCwYRXjDJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rfb4s4OQ7p3lwNBnAE6SpE1vfV+c7yeIlJBVPfXOURo=;
 b=huO/haIr7VzggGY5KM9SbcpSm5vSfFBx/VrAkGxQVNE3XzrBtO6qFq9ciJMoAen1YjUj2u1MDzznpB1DSyJoLOpKjI4GYVoDdWaCaXfTpvhhRWS4KBoFMrFTnkgD8diW+QCu23mqTLxyhR/We877tIdoDtaCKoEZc5E1yLBX5w8ZaTSSpc6IWBBvGS24/YrgrbnW6zBCAInru0F9vz4BO8DTwUzJ3g7BqVw9JLTgfOSnMAxr0wD9QbGVqoFHU19ryfqgaOMin5DK9Vws2FLiGZh8RKBlukkW4OHfaeN3MO9GryEs1WDd8WnrLu5ZxSWi9g5bEUYCxnwCOeGz5HekZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rfb4s4OQ7p3lwNBnAE6SpE1vfV+c7yeIlJBVPfXOURo=;
 b=Og46QWQz76f+T0t7p2d/bDEy3bYIWKj0GriR/gG8Beu6tg0O8BuneHzM1pcawLy8kpDdfK2wNgMnEhVkUWFm/RkoZw8xXxw0aZvBS6aeLBnwRlBevTRZsCGF/bP00GXzvftWSYcH8a1Ex+w//PV6I5WU3gkGttJIYd9TLAihJHw=
Date: Sun, 15 Nov 2020 19:24:16 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201115182416.GA30231@Air-de-Roger>
References: <20201115174938.GA3562@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201115174938.GA3562@antioche.eu.org>
X-ClientProxiedBy: LNXP265CA0001.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 120b405e-47ed-4f8b-6a0f-08d88993ac97
X-MS-TrafficTypeDiagnostic: DM6PR03MB4139:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4139991C7F5B9721472061728FE40@DM6PR03MB4139.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: QTM9JTpxyy4Dzahk7ufjIOPc6oRf5O6KSJDtx1C5zR8TjComs0EzEQZDuf/Eqtvr08rViLp281YaDlIZYYuPGJjz8FPMwPczLRxDX3Octg4yr8BhNga8mz+rtB7U4q96uEjXxV4XKrAGZ8PScFtZ/bPlBQzLptPW+b8vpI38OxduugaEsGEbR65JtAq9E4zJiCi8sPQnHcytKwEgJvMfrmbsmaa3jxFHUIOv1tdvAm5Qh+AY7/e1n7PCSNnN4yqdqtntJAMrHUmQ4VweRVIZ8ndg6xjDvxanz8EMLIyOXUREdNoymuvvK5uje9lYeuJF659PIkaTM6OM2kyAyrI6gw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(376002)(396003)(366004)(39850400004)(9686003)(6496006)(2906002)(478600001)(16526019)(33716001)(5660300002)(8676002)(956004)(316002)(1076003)(66946007)(4326008)(8936002)(33656002)(6666004)(6916009)(86362001)(66556008)(66476007)(186003)(6486002)(83380400001)(85182001)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: +lsTnn4IUN6slALSdYmNsd1JQjnsfnKDZqlPul3wTDSXGxRmwG4aPdpTvj5IuBA+Joxmula2y8CavhbNnTm8myf4cukVvPmmYF5uFYyEANp13lOkShq+2oZrFadVo0CG+SLhHOYpGWp2ndrtHs1QkBR9gu5WRKxfgWz7C3qmMmNBcWuMyrJXKH7Nw6fKXuACnWGg6euceEksmLUpiKaQTgFL+0uek7QK3DCanN9FOayH1oumNcG8unmdUQ4YEnLRHkJf7xfga12T/2aerakokKkPewiKP+WQygsUqZz+ut+qtZiVJHAw6Dfks5cX7Q3rnAHjhgIJU7PYvg6Ms6kmcMRXTIi1Ek7a792mKVWEN8RMsp2Opd7wzC7h8gp5stkpZmoTc2Md901zZLtCYWsK5o7dbNkB5S2DFCGtdMsvSMcr+fyUi+uKqTjWNzzzrqmBRM+ifY7zpMGxy1g9jT45+8foIna5tWBoC20t0MCJ49icTu0gOKD4InzJjmtbks5ec8qdz4RVi/IQVV1ww20GHSiKw4w1B1E80gJF4fks0vRJyMdJO9YdliYdlEDnhg1SYQvQM3GXIwk7SjdReO5jqy2+KMw9JZ6688rpRs45HqNF0jQiizhUq82oZ/8eNuc9OVYFKBWRcuBmmt6nSmk2PNp3UmdvVHeJ2rYaON4b3XJoAYhdw/lsGXW8mKTMBHWvZ6jfaz0JXHKMwCALcKxh0JV+uxdoRSlQ3OYnzX6JhOnE8L5QW/zM8UNYQCv6cTZ1s1T81VGR/BUS8rbvbKrz+qYKKbJHwoKgg+5Le4n+tgbKeGwyp4TUnj64yFzFguvdvSl5HSKISrdt276yDoku6QQ+OKyGPFir+6THqM83Fd/8UUuG3VEZTkxyJVJ6ojaQG+/gMijP9OHdcvIXUzWBSQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 120b405e-47ed-4f8b-6a0f-08d88993ac97
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Nov 2020 18:24:22.4126
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: u/SoYp5oSKHfx8pZ1mHOgGpfyo/1pRkZhGaVKs15jjd0t1aRveArte1vZLE9fb+AL92+pGIQYkv5VbRP1f0Kkw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4139
X-OriginatorOrg: citrix.com

On Sun, Nov 15, 2020 at 06:49:38PM +0100, Manuel Bouyer wrote:
> Hello,
> I spent some more time debugging NetBSD as a PVH dom0 on Xen.
> With Roger's patch to avoid a Xen panic, the NetBSD kernel stalls
> configuring devices. At first I thought it was an issue with hardware
> interrupts, but it more likely is an issue with Xen timer events.
> Specifically: virtual CPU 0 stops receiving timer events, while other
> CPUs keep receiving them. I tried to force a timer rearm but this didn't help.
> The event is not masked nor pending on Xen or NetBSD, as confirmed by 'q'.
> Other events (the Xen console, the debug event) are properly received
> by CPU0. I don't know how to debug this further at this point.

You could try the dom0_vcpus_pin command line option and then dump
the timers using the 'a' debug key; this way you can see whether CPU0
has a timer pending (which would be the vCPU0 timer).

What timer is NetBSD using: is it the PV vCPU single-shot timer, the
periodic one, or the emulated local APIC timer?

Depending on the timer you are trying to use, I would recommend
adding some printks if needed.
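
For reference, the suggested steps can be sketched roughly as follows (the
grub entry is an example only, and paths will differ per install; `xl
debug-keys` and `xl dmesg` are standard xl subcommands):

```shell
# 1. Pin dom0 vCPUs 1:1 to pCPUs by adding dom0_vcpus_pin to the *Xen*
#    (not the dom0 kernel) command line, e.g. in a grub.cfg entry:
#      multiboot2 /boot/xen.gz dom0_vcpus_pin ...other-xen-options...

# 2. After rebooting, with vCPU0 pinned to pCPU0, dump Xen's timer
#    queues via the 'a' debug key, then read the hypervisor console
#    ring to see whether pCPU0 has the vCPU0 timer pending:
xl debug-keys a
xl dmesg | tail -n 100
```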

Roger.


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 18:37:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 18:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27569.56208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keMuM-0001we-UY; Sun, 15 Nov 2020 18:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27569.56208; Sun, 15 Nov 2020 18:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keMuM-0001wX-QE; Sun, 15 Nov 2020 18:37:50 +0000
Received: by outflank-mailman (input) for mailman id 27569;
 Sun, 15 Nov 2020 18:37:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b2Sd=EV=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1keMuM-0001wS-9x
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 18:37:50 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0948d722-74ef-47cf-a835-30967c955561;
 Sun, 15 Nov 2020 18:37:47 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AFIbe8H005976
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Sun, 15 Nov 2020 19:37:41 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id D37C92E9CA8; Sun, 15 Nov 2020 19:37:35 +0100 (MET)
X-Inumbo-ID: 0948d722-74ef-47cf-a835-30967c955561
Date: Sun, 15 Nov 2020 19:37:35 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201115183735.GC1096@antioche.eu.org>
References: <20201115174938.GA3562@antioche.eu.org>
 <20201115182416.GA30231@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201115182416.GA30231@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Sun, 15 Nov 2020 19:37:42 +0100 (MET)

On Sun, Nov 15, 2020 at 07:24:16PM +0100, Roger Pau Monné wrote:
> On Sun, Nov 15, 2020 at 06:49:38PM +0100, Manuel Bouyer wrote:
> > Hello,
> > I spent some more time debugging NetBSD as a PVH dom0 on Xen.
> > With Roger's patch to avoid a Xen panic, the NetBSD kernel stalls
> > configuring devices. At first I thought it was an issue with hardware
> > interrupts, but it more likely is an issue with Xen timer events.
> > Specifically: virtual CPU 0 stops receiving timer events, while other
> > CPUs keep receiving them. I tried to force a timer rearm but this didn't help.
> > The event is not masked nor pending on Xen or NetBSD, as confirmed by 'q'.
> > Other events (the Xen console, the debug event) are properly received
> > by CPU0. I don't know how to debug this further at this point.
> 
> You could try to use dom0_vcpus_pin command line option and then dump
> the timers using the 'a' debug key, this way you can see if CPU0 has a
> timer pending (which would be the vCPU0 timer).
> 
> What timer is NetBSD using, is it the PV vCPU single shot timer, the
> periodic one, or the emulated local APIC timer?

It is the PV single-shot timer, I guess, but used as a periodic
timer by rearming it in the handler:
        next = ci->ci_xen_hardclock_systime_ns + NS_PER_TICK;
        error = HYPERVISOR_set_timer_op(next);

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun Nov 15 18:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 15 Nov 2020 18:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27576.56223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keN2x-0002vA-Ql; Sun, 15 Nov 2020 18:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27576.56223; Sun, 15 Nov 2020 18:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keN2x-0002v3-NN; Sun, 15 Nov 2020 18:46:43 +0000
Received: by outflank-mailman (input) for mailman id 27576;
 Sun, 15 Nov 2020 18:46:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keN2w-0002uU-9n
 for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 18:46:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 713fa4dc-1143-45ac-9edf-03119bc78bb8;
 Sun, 15 Nov 2020 18:46:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keN2n-0007ap-Ft; Sun, 15 Nov 2020 18:46:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keN2n-0003uM-4W; Sun, 15 Nov 2020 18:46:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keN2n-00075v-41; Sun, 15 Nov 2020 18:46:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=t4DI=EV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1keN2w-0002uU-9n
	for xen-devel@lists.xenproject.org; Sun, 15 Nov 2020 18:46:42 +0000
X-Inumbo-ID: 713fa4dc-1143-45ac-9edf-03119bc78bb8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 713fa4dc-1143-45ac-9edf-03119bc78bb8;
	Sun, 15 Nov 2020 18:46:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dtWTEPZ7XrXzS+T2c8IbqFscgvEmaura4dGGXpmQ/eE=; b=aM+iNN8z6tBFy2GR1/01cHfRfU
	0+AuzamoqyQtqnpx3tCXnx4wZeb+lbgPIT1Ne+6MeE6HfVsdoCukDpYy5DQR+nqRMdLLd/IO5hqUG
	IPoP/z0kicZGvyx8ua/swwyha4ds703flk78eiVcYKOF8EVnvz8+WtVKVkAnf/krACVw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keN2n-0007ap-Ft; Sun, 15 Nov 2020 18:46:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keN2n-0003uM-4W; Sun, 15 Nov 2020 18:46:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keN2n-00075v-41; Sun, 15 Nov 2020 18:46:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156810-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156810: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-pygrub:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-xsm:host-install(5):broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 15 Nov 2020 18:46:33 +0000

flight 156810 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156810/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm    <job status>                 broken  in 156805
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pygrub      19 guest-localmigrate/x10     fail pass in 156805

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 5 host-install(5) broken in 156805 blocked in 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   87 days
Failing since        152659  2020-08-21 14:07:39 Z   86 days  184 attempts
Testing same since   156805  2020-11-15 00:37:37 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-libvirt-xsm broken

Not pushing.

(No revision log; it would be 64763 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 02:07:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 02:07:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27645.56238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keTuf-0004eU-Px; Mon, 16 Nov 2020 02:06:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27645.56238; Mon, 16 Nov 2020 02:06:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keTuf-0004eM-Ko; Mon, 16 Nov 2020 02:06:37 +0000
Received: by outflank-mailman (input) for mailman id 27645;
 Mon, 16 Nov 2020 02:06:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keTue-0004dh-K2
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 02:06:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab470f73-be76-459a-8b30-94184a985957;
 Mon, 16 Nov 2020 02:06:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keTuX-0004sX-6M; Mon, 16 Nov 2020 02:06:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keTuW-0005oo-OF; Mon, 16 Nov 2020 02:06:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keTuW-0005n9-Nj; Mon, 16 Nov 2020 02:06:28 +0000
X-Inumbo-ID: ab470f73-be76-459a-8b30-94184a985957
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aC4LfvJXOhwOo3ANp7EPWJdEyujb7X+uNtOTNeE6LYc=; b=BkwIUZf6BNNuHjkd6+W+X8AXgg
	HVub0AVzU+mOpCKmpYj89o3/W1FxVwYLeE9Q9EsT7t1ok9jFpMJxQBufFzmm75ibSfNisXw7qegG/
	5OqNaRCJiAMcVzwFnrPQ2/nttZ/LNf33Wrkr3bNXaaFc7uqpXoAS+LEA1llSkeZN23Xc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156812-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156812: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:debian-install:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-saverestore.2:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e28c0d7c92c89016c12a677616668957351e7542
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 02:06:28 +0000

flight 156812 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156812/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 156809 REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 156809 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 156809 pass in 156812
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 156809 pass in 156812
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 156809 pass in 156812
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 156809 pass in 156812
 test-arm64-arm64-examine      8 reboot                     fail pass in 156809
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10    fail pass in 156809
 test-amd64-amd64-libvirt-vhd 17 guest-saverestore.2        fail pass in 156809
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156809
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen        fail pass in 156809

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 156809 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 156809 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e28c0d7c92c89016c12a677616668957351e7542
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  107 days
Failing since        152366  2020-08-01 20:49:34 Z  106 days  175 attempts
Testing same since   156804  2020-11-14 21:41:08 Z    1 days    3 attempts

------------------------------------------------------------
3512 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 671151 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 05:35:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 05:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27661.56253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keXAG-0006TB-5T; Mon, 16 Nov 2020 05:34:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27661.56253; Mon, 16 Nov 2020 05:34:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keXAG-0006T4-1V; Mon, 16 Nov 2020 05:34:56 +0000
Received: by outflank-mailman (input) for mailman id 27661;
 Mon, 16 Nov 2020 05:34:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keXAE-0006S5-AI
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 05:34:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5095aff1-34de-47cd-b13e-9644ac1f0964;
 Mon, 16 Nov 2020 05:34:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keXA2-0001af-Sx; Mon, 16 Nov 2020 05:34:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keXA2-0000KQ-L8; Mon, 16 Nov 2020 05:34:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keXA2-0004dv-Gk; Mon, 16 Nov 2020 05:34:42 +0000
X-Inumbo-ID: 5095aff1-34de-47cd-b13e-9644ac1f0964
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XKykM5VsKx2JqLepp/dMXPPxbd5ggGN7EguhlmVr9KE=; b=OmgzxL0ZJJTsA4zI1ZPOxv3NvO
	66VZp3qG/5mpt4W16UhQVqYA6YWsJp6hxy0zDDtxxYYKiKXdVLfjS30r/UB4BWSF1inu6MpHMqJyG
	99qF3Nww0vTcbnHKA3vFmhSJAwH+wlmOHRaQ3nlVYZ2QYKLylxZhAmLiakWCqWVswAPk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156813-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156813: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 05:34:42 +0000

flight 156813 qemu-mainline real [real]
flight 156816 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156813/
http://logs.test-lab.xenproject.org/osstest/logs/156816/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   87 days
Failing since        152659  2020-08-21 14:07:39 Z   86 days  185 attempts
Testing same since   156805  2020-11-15 00:37:37 Z    1 day     3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 64763 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 07:02:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 07:02:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27672.56264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keYWe-0005kI-5j; Mon, 16 Nov 2020 07:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27672.56264; Mon, 16 Nov 2020 07:02:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keYWe-0005kB-2j; Mon, 16 Nov 2020 07:02:08 +0000
Received: by outflank-mailman (input) for mailman id 27672;
 Mon, 16 Nov 2020 07:02:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xL3D=EW=rz.uni-regensburg.de=ulrich.windl@srs-us1.protection.inumbo.net>)
 id 1keYWc-0005k6-Lg
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 07:02:06 +0000
Received: from mx2.uni-regensburg.de (unknown [194.94.157.147])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7067ca79-81df-43ba-a8d8-4e7e6e57e626;
 Mon, 16 Nov 2020 07:02:04 +0000 (UTC)
Received: from mx2.uni-regensburg.de (localhost [127.0.0.1])
 by localhost (Postfix) with SMTP id 619BC600004E
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:02:03 +0100 (CET)
Received: from gwsmtp.uni-regensburg.de (gwsmtp1.uni-regensburg.de
 [132.199.5.51])
 by mx2.uni-regensburg.de (Postfix) with ESMTP id EC2196000052
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:02:02 +0100 (CET)
Received: from uni-regensburg-smtp1-MTA by gwsmtp.uni-regensburg.de
 with Novell_GroupWise; Mon, 16 Nov 2020 08:02:02 +0100
X-Inumbo-ID: 7067ca79-81df-43ba-a8d8-4e7e6e57e626
Message-Id: <5FB223E8020000A10003CB88@gwsmtp.uni-regensburg.de>
X-Mailer: Novell GroupWise Internet Agent 18.3.0 
Date: Mon, 16 Nov 2020 08:02:00 +0100
From: "Ulrich Windl" <Ulrich.Windl@rz.uni-regensburg.de>
To: <coreboot@coreboot.org>,<grub-devel@gnu.org>,
 <trenchboot-devel@googlegroups.com>, <x86@kernel.org>,
 <u-boot@lists.denx.de>,
 "systemd-devel@lists.freedesktop.org" <systemd-devel@lists.freedesktop.org>,
 <xen-devel@lists.xenproject.org>, <daniel.kiper@oracle.com>,
 <linux-kernel@vger.kernel.org>
Cc: <krystian.hebel@3mdeb.com>,<michal.zygowski@3mdeb.com>,
 <piotr.krol@3mdeb.com>, <mtottenh@akamai.com>, <luto@amacapital.net>,
 <dpsmith@apertussolutions.com>, <andrew.cooper3@citrix.com>,
 <roger.pau@citrix.com>, <allen.cryptic@gmail.com>,
 <btrotter@gmail.com>, <phcoder@gmail.com>,
 <lukasz.hawrylko@intel.com>, <ard.biesheuvel@linaro.org>,
 <tyhicks@linux.microsoft.com>, <pmenzel@molgen.mpg.de>,
 <hun@n-dimensional.de>, <leif@nuviainc.com>,
 <alexander.burmashev@oracle.com>, <eric.devolder@oracle.com>,
 <eric.snowberg@oracle.com>, <joao.m.martins@oracle.com>,
 <kanth.ghatraju@oracle.com>, <konrad.wilk@oracle.com>,
 <ross.philipson@oracle.com>, <javierm@redhat.com>,
 <pjones@redhat.com>, <alecb@umass.edu>,
 "H. Peter Anvin" <hpa@zytor.com>
Subject: Antw: [EXT] [systemd-devel] [SPECIFICATION RFC] The firmware
 and bootloader log specification
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
In-Reply-To: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

>>> Daniel Kiper <daniel.kiper@oracle.com> wrote on 14.11.2020 at 00:52 in
message <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>:
...
> The members of struct bf_log_msg:
>   - size: total size of bf_log_msg struct,
>   - ts_nsec: timestamp expressed in nanoseconds starting from 0,

Who or what defines t == 0?
...

Regards,
Ulrich Windl



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 07:24:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 07:24:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27678.56277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keYsN-0007Ze-SQ; Mon, 16 Nov 2020 07:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27678.56277; Mon, 16 Nov 2020 07:24:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keYsN-0007ZX-PW; Mon, 16 Nov 2020 07:24:35 +0000
Received: by outflank-mailman (input) for mailman id 27678;
 Mon, 16 Nov 2020 07:24:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwWO=EW=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1keYsM-0007ZR-I4
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 07:24:34 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1cd53976-bb28-41bd-930f-b97070fe0f1e;
 Mon, 16 Nov 2020 07:24:33 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0C833D6E;
 Sun, 15 Nov 2020 23:24:33 -0800 (PST)
Received: from e123311-lin.arm.com (unknown [10.57.25.95])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id ECAB43F718;
 Sun, 15 Nov 2020 23:24:31 -0800 (PST)
X-Inumbo-ID: 1cd53976-bb28-41bd-930f-b97070fe0f1e
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com
Subject: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807
Date: Mon, 16 Nov 2020 08:24:22 +0100
Message-Id: <20201116072422.17400-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
if a virtual address for a cacheable mapping of a location is being
accessed by a core while another core is remapping the virtual
address to a new physical page using the recommended break-before-make
sequence, then under very rare circumstances TLBI+DSB completes before
a read using the translation being invalidated has been observed by
other observers. The workaround repeats the TLBI+DSB operation.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 docs/misc/arm/silicon-errata.txt     |  2 ++
 xen/arch/arm/Kconfig                 | 18 +++++++++++++++++
 xen/arch/arm/cpuerrata.c             | 14 ++++++++++++++
 xen/include/asm-arm/arm64/flushtlb.h | 29 +++++++++++++++++++---------
 xen/include/asm-arm/cpufeature.h     |  3 ++-
 5 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index 552c4151d3..d183ba543f 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -53,5 +53,7 @@ stable hypervisors.
 | ARM            | Cortex-A72      | #853709         | N/A                     |
 | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
 | ARM            | Cortex-A76      | #1165522        | N/A                     |
+| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
 | ARM            | Neoverse-N1     | #1165522        | N/A
+| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
 | ARM            | MMU-500         | #842869         | N/A                     |
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index f938dd21bd..5d6d906d72 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -244,6 +244,24 @@ config ARM_ERRATUM_858921
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_1286807
+	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
+	default y
+	depends on ARM_64
+	help
+	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
+
+	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
+	  address for a cacheable mapping of a location is being
+	  accessed by a core while another core is remapping the virtual
+	  address to a new physical page using the recommended
+	  break-before-make sequence, then under very rare circumstances
+	  TLBI+DSB completes before a read using the translation being
+	  invalidated has been observed by other observers. The
+	  workaround repeats the TLBI+DSB operation.
+
+	  If unsure, say Y.
+
 endmenu
 
 config ARM64_HARDEN_BRANCH_PREDICTOR
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 567911d380..cb4795beec 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
                    (1 << MIDR_VARIANT_SHIFT) | 2),
     },
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_1286807
+    {
+        /* Cortex-A76 r0p0 - r3p0 */
+        .desc = "ARM erratum 1286807",
+        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
+        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
+    },
+    {
+        /* Neoverse-N1 r0p0 - r3p0 */
+        .desc = "ARM erratum 1286807",
+        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
+        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
+    },
+#endif
 #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
     {
         .capability = ARM_HARDEN_BRANCH_PREDICTOR,
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index ceec59542e..6726362211 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -9,6 +9,11 @@
  * DSB ISH          // Ensure the TLB invalidation has completed
  * ISB              // See explanation below
  *
+ * ARM64_WORKAROUND_REPEAT_TLBI:
+ * Modification of the translation table for a virtual address might lead to
+ * read-after-read ordering violation.
+ * The workaround repeats TLBI+DSB operation.
+ *
  * For Xen page-tables the ISB will discard any instructions fetched
  * from the old mappings.
  *
@@ -16,15 +21,21 @@
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
  */
-#define TLB_HELPER(name, tlbop) \
-static inline void name(void)   \
-{                               \
-    asm volatile(               \
-        "dsb  ishst;"           \
-        "tlbi "  # tlbop  ";"   \
-        "dsb  ish;"             \
-        "isb;"                  \
-        : : : "memory");        \
+#define TLB_HELPER(name, tlbop)       \
+static inline void name(void)         \
+{                                     \
+    asm volatile(                     \
+        "dsb  ishst;"                 \
+        "tlbi "  # tlbop  ";"         \
+        ALTERNATIVE(                  \
+        "nop; nop;",                  \
+        "dsb  ish;"                   \
+        "tlbi "  # tlbop  ";",        \
+        ARM64_WORKAROUND_REPEAT_TLBI, \
+        CONFIG_ARM64_ERRATUM_1286807) \
+        "dsb  ish;"                   \
+        "isb;"                        \
+        : : : "memory");              \
 }
 
 /* Flush local TLBs, current VMID only. */
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 016a9fe203..c7b5052992 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -46,8 +46,9 @@
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
 #define ARM_WORKAROUND_858921 10
+#define ARM64_WORKAROUND_REPEAT_TLBI 11
 
-#define ARM_NCAPS           11
+#define ARM_NCAPS           12
 
 #ifndef __ASSEMBLY__
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 07:40:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 07:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27687.56289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keZ7n-0000rO-87; Mon, 16 Nov 2020 07:40:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27687.56289; Mon, 16 Nov 2020 07:40:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keZ7n-0000rH-54; Mon, 16 Nov 2020 07:40:31 +0000
Received: by outflank-mailman (input) for mailman id 27687;
 Mon, 16 Nov 2020 07:40:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keZ7l-0000r9-NB
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 07:40:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e3a8c24-38bf-474c-9476-224a511e3695;
 Mon, 16 Nov 2020 07:40:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keZ7j-0004B7-7I; Mon, 16 Nov 2020 07:40:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keZ7i-00062R-Ue; Mon, 16 Nov 2020 07:40:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keZ7i-0000TY-UB; Mon, 16 Nov 2020 07:40:26 +0000
X-Inumbo-ID: 4e3a8c24-38bf-474c-9476-224a511e3695
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HltOjLfM6QQi3bb4Px85+wXlJfQUepjoJiHwJXjBqeA=; b=Oqq505TREE18SMRJWthOIPTucq
	8fcSkrs2Ys7u5PQIzNTrxqrg6bj5NRN7w1XAir+LcLJ2NQf0IaQ5HCYDOcwjGF5M4EwcjWcoWGPle
	6zSsj4M/rQdffkVAMyAadtQfDVYp1GlDhaHFVloUsKxTFqwoTM9sEyA2Y8dqCj/db2W4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156817-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156817: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=77549339838b44cd32b576e06701d1f7b4518fb5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 07:40:26 +0000

flight 156817 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156817/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              77549339838b44cd32b576e06701d1f7b4518fb5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  129 days
Failing since        151818  2020-07-11 04:18:52 Z  128 days  123 attempts
Testing same since   156800  2020-11-14 04:19:16 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26941 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 07:55:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 07:55:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27698.56303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keZML-0001uG-L3; Mon, 16 Nov 2020 07:55:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27698.56303; Mon, 16 Nov 2020 07:55:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keZML-0001u9-I9; Mon, 16 Nov 2020 07:55:33 +0000
Received: by outflank-mailman (input) for mailman id 27698;
 Mon, 16 Nov 2020 07:55:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wln0=EW=amazon.de=prvs=582b22291=doebel@srs-us1.protection.inumbo.net>)
 id 1keZMK-0001u4-CQ
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 07:55:32 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3e45775-304f-4107-8941-68cd361080d7;
 Mon, 16 Nov 2020 07:55:31 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 16 Nov 2020 07:55:25 +0000
Received: from EX13D03EUC002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com (Postfix) with ESMTPS
 id EE3DEA1C33; Mon, 16 Nov 2020 07:54:03 +0000 (UTC)
Received: from [192.168.17.158] (10.43.161.102) by
 EX13D03EUC002.ant.amazon.com (10.43.164.60) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 16 Nov 2020 07:53:59 +0000
X-Inumbo-ID: a3e45775-304f-4107-8941-68cd361080d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1605513332; x=1637049332;
  h=to:cc:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding:subject;
  bh=0uRl8Bn0fpAQz51+UvQbi9sbmnJDYAMYaVvazEEREk4=;
  b=ozIUTAE07dESwZY0JPrtMNbjvzI0l9JJT/DxTu+/VQk8iyOcazQlCYP9
   gMYiOm3Thy3IYkuGz48M9itbLMUXEPQXn4UyaA7iEn7TFuqELAWodfaYV
   TJh0QtEeWmDPMc9ZnwksaoFSj8Q5iRs7ws4aPmNIJ82aArTBGhKYOnYfU
   0=;
X-IronPort-AV: E=Sophos;i="5.77,481,1596499200"; 
   d="scan'208";a="66613059"
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
To: Edwin Torok <edvin.torok@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "jgross@suse.com" <jgross@suse.com>,
	"Andrew Cooper" <Andrew.Cooper3@citrix.com>
CC: "wl@xen.org" <wl@xen.org>, Christian Lindig <christian.lindig@citrix.com>,
	"jgrall@amazon.co.uk" <jgrall@amazon.co.uk>, "elnikety@amazon.de"
	<elnikety@amazon.de>, "iwj@xenproject.org" <iwj@xenproject.org>
References: <20201113141823.58712-1-doebel@amazon.de>
 <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
 <0e6b09fe-ffc4-195f-1b6c-67abc0cff92c@amazon.de>
 <c1352a2a-112a-966f-7410-b917cabe1d91@citrix.com>
 <39f0b457514c3b6bcc7419d9eaf5770a5c073333.camel@citrix.com>
From: Bjoern Doebel <doebel@amazon.de>
Message-ID: <73515688-5db2-c81b-48fb-6c5dda4a34b1@amazon.de>
Date: Mon, 16 Nov 2020 08:53:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <39f0b457514c3b6bcc7419d9eaf5770a5c073333.camel@citrix.com>
Content-Language: en-GB
X-Originating-IP: [10.43.161.102]
X-ClientProxiedBy: EX13P01UWA002.ant.amazon.com (10.43.160.46) To
 EX13D03EUC002.ant.amazon.com (10.43.164.60)
Precedence: Bulk
Content-Type: text/plain; charset="utf-8"; format="flowed"
Content-Transfer-Encoding: 8bit

On 13.11.20 18:23, Edwin Torok wrote:
> CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and
> know the content is safe.
>
>
> On Fri, 2020-11-13 at 17:13 +0000, Andrew Cooper wrote:
>> On 13/11/2020 16:56, Bjoern Doebel wrote:
>>> On 13.11.20 16:36, Jürgen Groß wrote:
>>>> On 13.11.20 15:18, Bjoern Doebel wrote:
>>>>> Right now we do not have a mechanism to determine the version of the
>>>>> currently running xenstored at runtime. As xenstored runs throughout
>>>>> the lifetime of a Xen host, this may lead to problems when newer
>>>>> user space builds are staged. Then, the running xenstored will no
>>>>> longer match the version of the installed xenstored.
>>>>>
>>>>> To allow users to always identify the running version of xenstored,
>>>>> add a linker-generated unique build ID to every xenstored build. Add
>>>>> functionality to log this build ID into a file upon service startup.
>>>>>
>>>>> Signed-off-by: Bjoern Doebel <doebel@amazon.de>
>>>>> Reviewed-by: Martin Mazein <amazein@amazon.de>
>>>>> Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>
>>>> No support for oxenstored or xenstore-stubdom?
>>> Your suggestion further down will apparently help for stubdom. I do
>>> not speak ocaml at all - how do we address this?
>> CC'ing Edwin and Christian who have done the bulk of the oxenstored
>> recently.
>>
>> It sounds like it might not be possible right now, but would be possible
>> with a future plan to switch the Ocaml build system over to dune (the
>> new/preferred Ocaml upstream toolchain).
> See here what is possible with Dune:
> https://dune.readthedocs.io/en/stable/dune-libs.html#build-info
>
> Would the output of 'git describe --always --dirty' (perhaps combined
> with a build date) serve as a useful build ID?

The point of the build ID is to verify something like
"binary-equivalence" of two builds.

* a git hash is not sufficient because different git hashes may result
in the same binary to be created (i.e., if there is no code change in
the target binary in between those two builds)

* a time stamp is counter-productive, because then you'd have to
recreate this timestamp every time you want to re-create a build

GNU ld's --build-id claims to perform a checksumming of the "normative
parts of the output contents". Whatever that means. ;)

>
>> If it does end up being an XS_CONTROL sub-op, we can implement it at a
>> future point when we can usefully answer the question.
> Wouldn't using readelf on /proc/<pid>/exe give you the running buildid?
>
> readelf -a /usr/sbin/oxenstored /proc/$(pidof oxenstored)/exe | grep 'Build ID'
>      Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
>      Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
>
> readelf -a /usr/sbin/oxenstored /proc/$(pidof oxenstored)/exe | grep 'Build ID'
>      Build ID: b44ff99b216db7526e3ee7841068d584cc9c2b95
>      Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
>
>
> When you're inside a stubdom it is probably not so easy though.

Interesting. I had not considered that because after upgrading xenstored
to a different version, the running xenstored's /proc/$PID/exe shows as

# ls -l /proc/$(pgrep xenstored)/exe
lrwxrwxrwx 1 root root 0 Nov  9 14:06 /proc/3528/exe ->
/usr/sbin/xenstored (deleted)

But you are right, one can still read that procfs file. Nice!


Bjoern


Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879
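[Archive note: the build ID discussed in this thread is stored as an ELF note (NT_GNU_BUILD_ID) emitted by GNU ld's --build-id, which is why readelf can recover it from a running process via /proc/<pid>/exe even after the on-disk binary was replaced. As an illustration of the note layout only (not part of the patch), here is a minimal Python sketch that extracts the note from a 64-bit little-endian ELF; the function name and structure offsets are an independent sketch, not code from this thread.]

```python
import struct

PT_NOTE = 4          # program header type for note segments
NT_GNU_BUILD_ID = 3  # note type used by GNU ld's --build-id

def gnu_build_id(path):
    """Return the GNU build ID of a 64-bit little-endian ELF file as a
    hex string, or None if the file carries no build-id note."""
    with open(path, 'rb') as f:
        data = f.read()
    # ELF magic, ELFCLASS64, ELFDATA2LSB
    if data[:4] != b'\x7fELF' or data[4] != 2 or data[5] != 1:
        return None
    e_phoff, = struct.unpack_from('<Q', data, 0x20)
    e_phentsize, e_phnum = struct.unpack_from('<HH', data, 0x36)
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, = struct.unpack_from('<I', data, off)
        if p_type != PT_NOTE:
            continue
        p_offset, = struct.unpack_from('<Q', data, off + 0x08)
        p_filesz, = struct.unpack_from('<Q', data, off + 0x20)
        pos, end = p_offset, p_offset + p_filesz
        # Each note: namesz, descsz, type, then name and desc,
        # each padded to a 4-byte boundary.
        while pos + 12 <= end:
            namesz, descsz, n_type = struct.unpack_from('<III', data, pos)
            name = data[pos + 12 : pos + 12 + namesz].rstrip(b'\0')
            desc_off = pos + 12 + ((namesz + 3) & ~3)
            if n_type == NT_GNU_BUILD_ID and name == b'GNU':
                return data[desc_off : desc_off + descsz].hex()
            pos = desc_off + ((descsz + 3) & ~3)
    return None
```

Running this on /usr/sbin/xenstored and on /proc/$(pidof xenstored)/exe and comparing the two strings is equivalent to the readelf | grep 'Build ID' invocation quoted above.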



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 08:48:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 08:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27739.56316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keaBI-0006rC-SK; Mon, 16 Nov 2020 08:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27739.56316; Mon, 16 Nov 2020 08:48:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keaBI-0006r5-P6; Mon, 16 Nov 2020 08:48:12 +0000
Received: by outflank-mailman (input) for mailman id 27739;
 Mon, 16 Nov 2020 08:48:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nrw9=EW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1keaBH-0006r0-BP
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 08:48:11 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.41]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d67c097b-12df-4a98-9292-0fab4f576bb8;
 Mon, 16 Nov 2020 08:48:10 +0000 (UTC)
Received: from AM5PR0601CA0031.eurprd06.prod.outlook.com
 (2603:10a6:203:68::17) by VI1PR08MB5534.eurprd08.prod.outlook.com
 (2603:10a6:803:135::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov
 2020 08:48:08 +0000
Received: from VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:68:cafe::38) by AM5PR0601CA0031.outlook.office365.com
 (2603:10a6:203:68::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25 via Frontend
 Transport; Mon, 16 Nov 2020 08:48:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT012.mail.protection.outlook.com (10.152.18.211) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Mon, 16 Nov 2020 08:48:07 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Mon, 16 Nov 2020 08:48:06 +0000
Received: from fb9e285bbda2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 46C855DF-D961-4F55-B67A-CEB6F7A4E77D.1; 
 Mon, 16 Nov 2020 08:48:00 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fb9e285bbda2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 16 Nov 2020 08:48:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5767.eurprd08.prod.outlook.com (2603:10a6:10:1a7::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Mon, 16 Nov
 2020 08:47:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020
 08:47:59 +0000
X-Inumbo-ID: d67c097b-12df-4a98-9292-0fab4f576bb8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8Gouvb/TiqibQiepFYvSlq32AB6Bh8msCECOWlSjvHI=;
 b=BavnVzr1MDmQgiuaeCDG/7L/ZA1qmE0XbM4nXN/fk+u3Iar3Z09DBPvRGU0k0CWi0TqQBdUrayL/u9C4B6DTe6rXK/Cv2inKFoje7R0q5wUnAOd9oDmDfw7FNggNcLJnr/oxc0LUxRmHYN0EkmYJRBvqNx4QrbLBpJrUPefMeig=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a68edf1f17d8bda7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SZ/CA9ppXitjEy1/5kKP4SKbFsyNIyhQ/ytLhQYDXN97V9jC/UAfPhYNTx821WomqT+n/ECFu9rvxYcu3GKHliL3gdWCsp57WELXdktLdsqLp31RFsEWJesvWEk6pJz8Ja4aJ+mi2T2hU1gWyDld8PdN6+B4SaMni42kQnDwAR2p7h0AI2U3NlIvJeZEOE69+uBhk+rKkdMFOKo5Pax4BbAP1WN57MRkfTvhweqVqVrZb2EwewuQ+TFB0ptvGiw5XFHOZRkkAtdOBxdCcG6rWGwKu/hDCfd5uGNdc/t3XmtQ/zK4U4ROfwt7OlyBoH4SbhNrNWdSGDLdkh6/1zzgcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8Gouvb/TiqibQiepFYvSlq32AB6Bh8msCECOWlSjvHI=;
 b=Q8WXFsdwP6FsdhLW+QChgbU4irr0XwUxRrqJhMgRa1cE21E+BWbh99CFC9X+GdiYP/urYWObpBY0dnNCjc5EDA2mNHbZ+vLDyb6fHqckzkEXRwLJMCp4x59YyC3F+gONUtpbtCgVdOfUu3sWVynImxkl21cuOfAHqPxjhoI6gEXrZkr4JzOAVZPwxKLxWAd8m4riYdYBgltJ9h+hSa/Uxj0eo6quYC0B9BKdl+ul4YL74JyH/n38eDu6O5up2SPGgia32RQ5G20DLN0HLVzgQCv9sgNRCsrZIWfPCX7ze7Z5pLqNynG7iG3ksvAyoBtVRu7Q2fLbT66AXC/3D37fAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Michal Orzel <Michal.Orzel@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
Thread-Topic: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
Thread-Index: AQHWu+mTw4Xbd+Lh7kKhbnNB05cIaqnKcooA
Date: Mon, 16 Nov 2020 08:47:59 +0000
Message-ID: <2F0882B0-7FBA-477C-88C2-FF0734E85F07@arm.com>
References: <20201116072422.17400-1-michal.orzel@arm.com>
In-Reply-To: <20201116072422.17400-1-michal.orzel@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7ba8db17-7bc6-414d-075c-08d88a0c56e8
x-ms-traffictypediagnostic: DBAPR08MB5767:|VI1PR08MB5534:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB5534751C15B4C671CA6C5A319DE30@VI1PR08MB5534.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 k4iWbMizVEsZ1xXPSUyCqPq308bm5ndjURljQXn9O23rWWwzFVsXDYTzNDxzXIXgzGNe0FFJFo0OWLQxbMs5uO2uls3D/synVb5B5hnCcjdxC/phlDpkcUItEcB3obFKYgarxZgbYiPBvQbFgrv518bcON3G0AUc/H23yE58Xofyj0vcfXSJXH3G7/mDAnzJmg7+3YTpxWyEouRp1aAcSkk+niiVN+yLi0Vu+ME9YemKBK/TS9wIrBfhcZYYSYm4rrYikxUMod+3oOT76jo77UbhVe76AZCPmEJ2qp3zYWEvhEKSCk/vLz4TZoRXNKTvIuD82whSTTRNTPgCiIzSNQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(346002)(376002)(366004)(136003)(6506007)(2616005)(37006003)(478600001)(316002)(4326008)(71200400001)(19627235002)(54906003)(6636002)(8936002)(8676002)(6862004)(6486002)(36756003)(83380400001)(33656002)(26005)(6512007)(5660300002)(2906002)(91956017)(66946007)(66446008)(64756008)(66556008)(66476007)(86362001)(76116006)(186003)(53546011);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 rDvHEP+oxQxA9NkeYx6wNjtCni+mcFPWydRClzerrSpTydnjMgAajqajuFQfMsSFufCxm74sBef4ZgnnrRbss+MCR9zYXV93Cc4y+78NduFwlDcBqExQVfAHIdNQHsLiD5GjAABr3kNwzx2RSHwX7IjS9HrHrL6q5HeQmabTdwScn9lhSvv67dc6YHXNvGg8P0Zaka0FQdo0GmLZFetlgLR8CZN5bCg9k/BKk0IMBd9Vgwv5itq4kXINsXaFBzdWjtNyxNBeVrIT3VWSo2CFb4uHQ5jxkPtj1yqwvKbnVHGwUaKQ8IiSKfdXgT+DXZ2cF35IUfHvtmuFIJXLHCtLutEEQ9QQnAK6ZDOspgq5vVxrabzKu6SWABdczrN8lDI7Hmnz4dh+Gs9Us6obuDQUBnfw5gAnbFABjvbpQnj3LykWQjS78ebQ/x4qAb2DMY1wRqiXMHz9s1pOZa/CZf+mJ2p+sJ8Osu/Jrha1lOdNpsYLyWI4Gud3S5qsvsau4DkI0MUwBJw1I4DDuxcbrr2rudo3F1BPMgVMWBUwzUiLQ4G9MbPDcSCggGxN1rhtKOoHYBP4PbuDabDCAp6MbTkUeRTG+MoMqNmUEPZBQaYjOeGQhXwlRw2nABeckuyC54yrNjhy2ZVWU69im7q0W4zUOw==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C7D5C4C67E14334DA0B8721F793DDF7F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5767
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4ff170ef-bbb2-4680-5826-08d88a0c51f0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	w5g47aYLSZD6bWQyGV9V5521V8dGVAoDNvehJYTKwjzSQ5f1bZd+nQ/NmUvB32YigK0FiXs6q+2airLcDnzZDB4MP7TkLCx0ap3AHFWQiuanw1R26WjZWy5tusTLqeOToJfLM2zd5fuHfM9uGdSoryhE8iKaxsY+vDMBfqGSLRguY3O4Men3uZgGcV93g4vBgbhKysiGLWt0AtOHKlYfjMn7hqn1V8yN79OxHe6U34syCatmASZtHuUQwSpZ47Ydc1u65JxTo1zqEowObawHMuYwbPadjjGGA/VwLfFJfzOEkWCx14bkiFW7vNJ19cn1K2U0eE8cP8OVH/MCT5qeFlVD2DqpAIjiMDHE9zpEUPeulxrLl5/hyjErseClja0eL00GzrrhKwOuXe1c2KK8lQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39860400002)(136003)(396003)(376002)(46966005)(5660300002)(2906002)(8936002)(8676002)(6506007)(2616005)(19627235002)(6512007)(186003)(26005)(53546011)(336012)(47076004)(33656002)(70206006)(70586007)(356005)(478600001)(4326008)(36756003)(37006003)(6636002)(6862004)(107886003)(54906003)(316002)(83380400001)(81166007)(82740400003)(36906005)(82310400003)(86362001)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Nov 2020 08:48:07.3923
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ba8db17-7bc6-414d-075c-08d88a0c56e8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5534

Hi,

> On 16 Nov 2020, at 07:24, Michal Orzel <Michal.Orzel@arm.com> wrote:
>=20
> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
> if a virtual address for a cacheable mapping of a location is being
> accessed by a core while another core is remapping the virtual
> address to a new physical page using the recommended break-before-make
> sequence, then under very rare circumstances TLBI+DSB completes before
> a read using the translation being invalidated has been observed by
> other observers. The workaround repeats the TLBI+DSB operation.
>=20
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> docs/misc/arm/silicon-errata.txt     |  2 ++
> xen/arch/arm/Kconfig                 | 18 +++++++++++++++++
> xen/arch/arm/cpuerrata.c             | 14 ++++++++++++++
> xen/include/asm-arm/arm64/flushtlb.h | 29 +++++++++++++++++++---------
> xen/include/asm-arm/cpufeature.h     |  3 ++-
> 5 files changed, 56 insertions(+), 10 deletions(-)
>=20
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-err=
ata.txt
> index 552c4151d3..d183ba543f 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -53,5 +53,7 @@ stable hypervisors.
> | ARM            | Cortex-A72      | #853709         | N/A               =
      |
> | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921=
      |
> | ARM            | Cortex-A76      | #1165522        | N/A               =
      |
> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_128=
6807   |
> | ARM            | Neoverse-N1     | #1165522        | N/A
> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_128=
6807   |
> | ARM            | MMU-500         | #842869         | N/A               =
      |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f938dd21bd..5d6d906d72 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,24 @@ config ARM_ERRATUM_858921
>=20
> 	  If unsure, say Y.
>=20
> +config ARM64_ERRATUM_1286807
> +	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation =
table for a virtual address might lead to read-after-read ordering violatio=
n"
> +	default y
> +	depends on ARM_64
> +	help
> +	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum =
1286807.
> +
> +	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a vir=
tual
> +	  address for a cacheable mapping of a location is being
> +	  accessed by a core while another core is remapping the virtual
> +	  address to a new physical page using the recommended
> +	  break-before-make sequence, then under very rare circumstances
> +	  TLBI+DSB completes before a read using the translation being
> +	  invalidated has been observed by other observers. The
> +	  workaround repeats the TLBI+DSB operation.
> +
> +	  If unsure, say Y.
> +
> endmenu
>=20
> config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 567911d380..cb4795beec 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>                    (1 << MIDR_VARIANT_SHIFT) | 2),
>     },
> #endif
> +#ifdef CONFIG_ARM64_ERRATUM_1286807
> +    {
> +        /* Cortex-A76 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +    {
> +        /* Neoverse-N1 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +#endif
> #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>     {
>         .capability = ARM_HARDEN_BRANCH_PREDICTOR,
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index ceec59542e..6726362211 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -9,6 +9,11 @@
>  * DSB ISH          // Ensure the TLB invalidation has completed
>  * ISB              // See explanation below
>  *
> + * ARM64_WORKAROUND_REPEAT_TLBI:
> + * Modification of the translation table for a virtual address might lead to
> + * read-after-read ordering violation.
> + * The workaround repeats the TLBI+DSB operation.
> + *
>  * For Xen page-tables the ISB will discard any instructions fetched
>  * from the old mappings.
>  *
> @@ -16,15 +21,21 @@
>  * (and therefore the TLB invalidation) before continuing. So we know
>  * the TLBs cannot contain an entry for a mapping we may have removed.
>  */
> -#define TLB_HELPER(name, tlbop) \
> -static inline void name(void)   \
> -{                               \
> -    asm volatile(               \
> -        "dsb  ishst;"           \
> -        "tlbi "  # tlbop  ";"   \
> -        "dsb  ish;"             \
> -        "isb;"                  \
> -        : : : "memory");        \
> +#define TLB_HELPER(name, tlbop)       \
> +static inline void name(void)         \
> +{                                     \
> +    asm volatile(                     \
> +        "dsb  ishst;"                 \
> +        "tlbi "  # tlbop  ";"         \
> +        ALTERNATIVE(                  \
> +        "nop; nop;",                  \
> +        "dsb  ish;"                   \
> +        "tlbi "  # tlbop  ";",        \
> +        ARM64_WORKAROUND_REPEAT_TLBI, \
> +        CONFIG_ARM64_ERRATUM_1286807) \
> +        "dsb  ish;"                   \
> +        "isb;"                        \
> +        : : : "memory");              \
> }
>
> /* Flush local TLBs, current VMID only. */
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 016a9fe203..c7b5052992 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -46,8 +46,9 @@
> #define ARM_SMCCC_1_1 8
> #define ARM64_WORKAROUND_AT_SPECULATE 9
> #define ARM_WORKAROUND_858921 10
> +#define ARM64_WORKAROUND_REPEAT_TLBI 11
>
> -#define ARM_NCAPS           11
> +#define ARM_NCAPS           12
>
> #ifndef __ASSEMBLY__
>
> -- 
> 2.28.0
>



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 08:49:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 08:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27751.56328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keaCs-0006yU-8Z; Mon, 16 Nov 2020 08:49:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27751.56328; Mon, 16 Nov 2020 08:49:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keaCs-0006yN-5U; Mon, 16 Nov 2020 08:49:50 +0000
Received: by outflank-mailman (input) for mailman id 27751;
 Mon, 16 Nov 2020 08:49:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4ptU=EW=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1keaCr-0006yG-A4
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 08:49:49 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 358242b3-47a0-4504-909b-9dab8a02007c;
 Mon, 16 Nov 2020 08:49:48 +0000 (UTC)
X-Inumbo-ID: 358242b3-47a0-4504-909b-9dab8a02007c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605516587;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=tjbRTqVWBMlfblAmGAgZ3Vr+MF1XNOo52OW9Fn4GMzg=;
  b=iAPl+jxD7Ii95OUQKRulVHf4nSHrrF16npZ/6nhPPBNFUvD2NQcayMcE
   34C1UwMUzpXJ5TDRDLGSI7oEfnwqe30GzxgVdz2AzG14HarKhgSGN2kDm
   +rZXbkoGiVgNGDVvE7KWP9flieG3eTezCyS9wGRIfT5vaPAfb2zYC8oRD
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +zgY5I1TiHFQbeLuaADvbIL0ETfT5Vf9hxCfmcbImZz4VJz9Wb0OPTUA+wr6Zz8WwnJ2OJiIQb
 s9v8HiR75ANDMQxljM2bWzaM8U5ZFuydV6pNW5US3xEEFJhdrBCzEb1BK/WHADN8/Q39/hHDwn
 FQuFqQQxtbAD41DC6jQPnasyBifEWfFmnYzBuf7YZht1d6IyQJIY36CdwpV83IRic4eFstd87d
 64Zp6v9BOCVBzPV+yl30o30/oDOyJBDTfjp1tU0oqzr5Eb7KKOc7MPDgD4ozv65AQYnqVYUC+W
 7fM=
X-SBRS: None
X-MesageID: 31251317
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,482,1596513600"; 
   d="scan'208";a="31251317"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OkM8cLSjEOJmg0uY+E9w35ru4JUvIcFrIEQiPBOXFbf150LfZEMGq14bUVKdYpBlPa7DN3sg16MCGm5yuAU9kBVUfGGTJtKG0LFG0Pmv0uxv7u9e1iEYllZnOLDg1TuHwbeGMM/Y6NXIA95PfdRvy7QamAT+wQ6tgPlZmP9kPjNFtFfhKiu96R3dbZYBE3Fbg0GfmvxEI25blqoNyzB3J4//S4TJHRWdrpTamwORXxiVEp68RjPPRX7roNBDnp7l2bDAXX1QJ+A1aRgge0FLICZFthGrErZ0Yl6h6M8RAIVGQzLJ2QsD9tPW4jLbpoHhpTOkc9NqQi6Fs5gkx6tVEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XX4FJ7iXhQ20bOkUCJb2gd5xzkGc8wkN/rh6EaE5nFg=;
 b=VHZGS87T0hcMTVZlxLVJkmuJDM2ZUGB56PhC/GihODcG0uIIe0f7YK9+BMz1CkOthJtmTDIlydHEYVQjeXlEESTi6kVar6zVNCYTTnCx/SQirhDNF1cVcIIVWDuVxPBoGKsVqLnPSLVYmSRhy5+v8NHIs5PzP2q9ntvLWjNTMCNkxyezRMerP55be3TTqtNlGfZhFAQIpCcZDxUnC7mG+ofLSDYnVHz7SMt0GNlp4MLPM/DYVh/YIs6KGeBwEacdeQV/PjelREIVh6IVM6mkFPrnaYTJ6wv++OicvX8/W1FUhCjhldWd3sMtRP4sM46Dgmg5bi/qJI8BG76e9HU1KA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XX4FJ7iXhQ20bOkUCJb2gd5xzkGc8wkN/rh6EaE5nFg=;
 b=JMmas0sFmAjLNC0M9xWs/kHb8Ifw2uRPoHgp61YY58jLL/XFYVBXESPYQiYuup7FdEr3yrs/7atr2x4+B9K9DkK5oq6O3HU4q5dmzOFskNa4fpAKn+sQOf7QnDabULoDNWvGA4cDtLp9M5KIJLTY0xw8sCWSR5xC4tBpfSy7iZA=
From: Christian Lindig <christian.lindig@citrix.com>
To: Bjoern Doebel <doebel@amazon.de>, Edwin Torok <edvin.torok@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"jgross@suse.com" <jgross@suse.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>
CC: "wl@xen.org" <wl@xen.org>, "jgrall@amazon.co.uk" <jgrall@amazon.co.uk>,
	"elnikety@amazon.de" <elnikety@amazon.de>, "iwj@xenproject.org"
	<iwj@xenproject.org>
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
Thread-Topic: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup
Thread-Index: AQHWueBOCvB9vAICU0yP2x4L4whcVanGT5cAgAQX64CAAA53Qg==
Date: Mon, 16 Nov 2020 08:49:42 +0000
Message-ID: <DS7PR03MB5655942EF2116BD30A050645F6E30@DS7PR03MB5655.namprd03.prod.outlook.com>
References: <20201113141823.58712-1-doebel@amazon.de>
 <5ac379ad-33fd-2973-dfdb-9e06ea539809@suse.com>
 <0e6b09fe-ffc4-195f-1b6c-67abc0cff92c@amazon.de>
 <c1352a2a-112a-966f-7410-b917cabe1d91@citrix.com>
 <39f0b457514c3b6bcc7419d9eaf5770a5c073333.camel@citrix.com>,<73515688-5db2-c81b-48fb-6c5dda4a34b1@amazon.de>
In-Reply-To: <73515688-5db2-c81b-48fb-6c5dda4a34b1@amazon.de>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cf10ade3-654e-4143-e548-08d88a0c8fe9
x-ms-traffictypediagnostic: DM5PR03MB3355:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <DM5PR03MB335599A9E355A7834A1F1DE2F6E30@DM5PR03MB3355.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: U1tRzwRH+lB7S9cjUaqvtoUR19BDx284u3tk/5uBJ7NlUN48o9JrxXmVsFJPUo8tvu73nMdCEqYqfkN9uc2jQJXLVu+nOn7CbjHTsksvfstKwk0YPIdTs3GQVbb6SamRfgd2lQnK/yOfXR3vGOzYepTwSOPvwFa5MzRq+TA0N79u2sr9IvHHfq3jcLRUKuocckbne70/GzaDUL9HmqQenWUp5lpwOKOETx+TMuxhpF6qgWFfb3IxscBNtSybN3Rx7yognJcDBsUAH622yb6AaJcGn/vY9RbKRlnO9j519cliez+qbHux9EcwmPfy3WyWFcCHEWyeLePSZYBboRzO0/KIfSYB5Zw86s+FypNgAn4=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5655.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39850400004)(346002)(366004)(136003)(396003)(8676002)(44832011)(66446008)(71200400001)(64756008)(9686003)(316002)(66476007)(86362001)(52536014)(66946007)(76116006)(66574015)(55236004)(53546011)(110136005)(66556008)(4326008)(2906002)(91956017)(6506007)(55016002)(186003)(7696005)(54906003)(5660300002)(33656002)(6636002)(83380400001)(478600001)(4001150100001)(966005)(26005)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: GnfePHoRTLTiPEaj1g0aQI0D4DH/gGD5EPAwHMHetpPN+rLgS1DfWE90YOSvC1gUPjdSclePcfwnOKSqwvre8oiMH0bX36hj+/LIizishtcj6UWPyfatxGssegf9ZABqGPy/ohIJqNda3HwITGe+4y9o69cBwq23H7Q9e/3UM+KjlWTQv3znsDCfCQn/RnEC8VBO+hMJQiaHpw+QTEMyqvvHEqnO4dX0wSF3Sl64vd9IqYWHSKqKOW6JgRpHK4+TSZHcfz2ocajPWGL50jY9sVYJW6hhReLBlgIKcidAIJYnHjERM+5apw/tbXUXhUU3cSEuTXWdVPp09rDwVmpPHDscE+m6FMCtdZLKzqCQvmA9C7o2Px8pdcCStzWyoO03kWeJW1HpZSslIvzChQQbRdrsMUzlKUpzykjgx8Zce0zM7+7Tu7TdnF9h4fU0EJgFNknquRU4GJj7jTvfFi0BJ2KyePghGmqsspVssk/Vl6ZLDYdTJCvOyBmuw/DpwvMsBsy6WTk6PqM1vdPTLgwmKGSOmMVLx1RN5U+7IBfzgAaT87yMk/hVb4GufxFuiJm1WBUOvPzQL7mI9akT/0R2c/gO4j79jjHVDQBDHKRdW3XSTtQzYdO9z5vrvIdOpwVMmRKOCybZ7Ujw/+yF05EYaA==
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5655.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cf10ade3-654e-4143-e548-08d88a0c8fe9
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Nov 2020 08:49:42.9963
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: UydBldzFjo1te3OZh54rQz+hTje+nVcYDv4ArmVzPtnbGBJ5gks845xvdSRJTThLTfvDB+HSlf3OdTHDSVlwQUzac8C74wlKzmDiUUrcl/g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3355
X-OriginatorOrg: citrix.com


How about keeping such an ID in xenstore itself, in some kind of /meta
hierarchy where xenstore could also keep stats? As long as xenstore is
running, this information is easily accessible to outside tools.

-- C

________________________________________
From: Bjoern Doebel <doebel@amazon.de>
Sent: 16 November 2020 07:53
To: Edwin Torok; xen-devel@lists.xenproject.org; jgross@suse.com; Andrew Cooper
Cc: wl@xen.org; Christian Lindig; jgrall@amazon.co.uk; elnikety@amazon.de; iwj@xenproject.org
Subject: Re: [XEN PATCH] tools/xenstore: Log xenstored build ID on startup


On 13.11.20 18:23, Edwin Torok wrote:
>
> On Fri, 2020-11-13 at 17:13 +0000, Andrew Cooper wrote:
>> On 13/11/2020 16:56, Bjoern Doebel wrote:
>>>> On 13.11.20 16:36, Jürgen Groß wrote:
>>>> On 13.11.20 15:18, Bjoern Doebel wrote:
>>>>> Right now we do not have a mechanism to determine the version of the
>>>>> currently running xenstored at runtime. As xenstored runs throughout
>>>>> the lifetime of a Xen host, this may lead to problems when newer user
>>>>> space builds are staged. Then, the running xenstored will no longer
>>>>> match the version of the installed xenstored.
>>>>>
>>>>> To allow users to always identify the running version of xenstored,
>>>>> add a linker-generated unique build ID to every xenstored build. Add
>>>>> functionality to log this build ID into a file upon service startup.
>>>>>
>>>>> Signed-off-by: Bjoern Doebel <doebel@amazon.de>
>>>>> Reviewed-by: Martin Mazein <amazein@amazon.de>
>>>>> Reviewed-by: Paul Durrant <pdurrant@amazon.co.uk>
>>>> No support for oxenstored or xenstore-stubdom?
>>> Your suggestion further down will apparently help for stubdom. I do
>>> not speak ocaml at all - how do we address this?
>> CC'ing Edwin and Christian who have done the bulk of the oxenstored
>> recently.
>>
>> It sounds like it might not be possible right now, but would be possible
>> with a future plan to switch the Ocaml build system over to dune (the
>> new/preferred Ocaml upstream toolchain).
> See here what is possible with Dune:
> https://dune.readthedocs.io/en/stable/dune-libs.html#build-info
>
> Would the output of 'git describe --always --dirty' (perhaps combined
> with a build date) serve as a useful build ID?

The point of the build ID is to verify something like
"binary-equivalence" of two builds.

* a git hash is not sufficient, because different git hashes may result
in the same binary being created (i.e., if there is no code change in
the target binary between those two builds)

* a time stamp is counter-productive, because then you'd have to
recreate this timestamp every time you want to re-create a build

GNU ld's --build-id claims to perform a checksumming of the "normative
parts of the output contents". Whatever that means. ;)

>
>> If it does end up being an XS_CONTROL sub-op, we can implement it at
>> a
>> future point when we can usefully answer the question.
> Wouldn't using readelf on /proc/<pid>/exe give you the running buildid?
>
> readelf -a /usr/sbin/oxenstored /proc/$(pidof oxenstored)/exe | grep
> 'Build ID'
>      Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
>      Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
>
> readelf -a /usr/sbin/oxenstored /proc/$(pidof oxenstored)/exe | grep
> 'Build ID'
>      Build ID: b44ff99b216db7526e3ee7841068d584cc9c2b95
>      Build ID: bdd5304c8984ed22570d51308ae8717d03fe60ae
>
>
> When you're inside a stubdom it is probably not so easy though.

Interesting. I had not considered that because after upgrading xenstored
to a different version, the running xenstored's /proc/$PID/exe shows as

# ls -l /proc/$(pgrep xenstored)/exe
lrwxrwxrwx 1 root root 0 Nov  9 14:06 /proc/3528/exe ->
/usr/sbin/xenstored (deleted)

But you are right, one can still read that procfs file. Nice!


Bjoern





Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879




From xen-devel-bounces@lists.xenproject.org Mon Nov 16 08:51:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 08:51:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27758.56340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keaEq-0007mz-Ln; Mon, 16 Nov 2020 08:51:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27758.56340; Mon, 16 Nov 2020 08:51:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keaEq-0007mr-Ie; Mon, 16 Nov 2020 08:51:52 +0000
Received: by outflank-mailman (input) for mailman id 27758;
 Mon, 16 Nov 2020 08:51:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0FBZ=EW=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1keaEo-0007mk-VR
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 08:51:51 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.55]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d3e3575-4a59-497e-b464-98d8362d9b68;
 Mon, 16 Nov 2020 08:51:48 +0000 (UTC)
Received: from AM6P194CA0108.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:8f::49)
 by AM4PR08MB2916.eurprd08.prod.outlook.com (2603:10a6:205:e::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.26; Mon, 16 Nov
 2020 08:51:44 +0000
Received: from AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8f:cafe::21) by AM6P194CA0108.outlook.office365.com
 (2603:10a6:209:8f::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25 via Frontend
 Transport; Mon, 16 Nov 2020 08:51:43 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT059.mail.protection.outlook.com (10.152.17.193) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Mon, 16 Nov 2020 08:51:43 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Mon, 16 Nov 2020 08:51:42 +0000
Received: from e7f3bd0cd432.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BAA221A0-2594-49A6-ADEC-E6205D5C1EB3.1; 
 Mon, 16 Nov 2020 08:51:37 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e7f3bd0cd432.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 16 Nov 2020 08:51:37 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM9PR08MB6275.eurprd08.prod.outlook.com (2603:10a6:20b:286::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov
 2020 08:51:34 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020
 08:51:34 +0000
X-Inumbo-ID: 3d3e3575-4a59-497e-b464-98d8362d9b68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AKYmLKYqGdZEvOq1i3pLAf2axgJ8kBGRgy5gWw0tUmw=;
 b=q/M0NJkCnqbjVj3Zj7AFyuOGTrW1r49D9bLFRi/jp2Yj6lqFJZHKHKRI5o+hzIM5AOOY/vnv65LjGbuhRPhbyYc+H8lTl69l6hcbm7F1omgj8XtOs8Zu9wxxb3lpTKMzqcg4RCYhqdxFGF33fxSC0zHJ6mXT9ZDnUCWGGksYOYY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B3I5X7xqGO+jMVDFItbXX6bROZ5rKC4Z3wz7RGJZGRqQkF8HOWSpdgUwWRq6hkrrsnnG+G3VKtF9qnaWQkztBlxn82+8LshJpiXooB5yhQHGKJ8YZkxhPuAITST4crVBBssBV/Oq2yxXQ7HjtVqyb6LM7jhXIS472bhcpS7uTdM/NU3zzMSjD2wEyxjap/hTt946iK8ztOPMbZUWii8kcwV4ItdB0eKMoRlUXGN+nIXkcdcRoAyY1d7prEqAVG/wvgLKEjBzcC6p/QOpqS2Z9mgN468cCEQs1hHWkjx5m9rO4QPDvzqD+KVEvmIXW5gs5b9AQqFml3bUJjgXUXU7nw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AKYmLKYqGdZEvOq1i3pLAf2axgJ8kBGRgy5gWw0tUmw=;
 b=JEOoXo5qWVeUxZzrNxJQrzgkHAamhfMU9AKWjVl/HHWMuk8bcb2Gvv8/3ahT+QQ59JxJ/pPFmxtVyS5WXwf+kdX8b4gpLhDdj+8DvgMpyrVNBaGPTPa12FLlAg5pmVEWEW7bLLaP000gOSec7mgwNI69F22lXAvqAGqajEL6zRXKqkBTdi8Igy1Y9CONdSxZoiV690wngv6Bu1VQEJgt12vX4/CrgzdV85HUZUDyka7uV9VIi2bCIMP8Ac5DYv9pQC4dbUy4VO+lD61dDk2usMme1zhY1f+cpaNSl8n6HAhz6mVa0ZESMlFDDE7jg20c2tK/JSS8Qjgkq9a9MTAyEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Michal Orzel
	<Michal.Orzel@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
Thread-Topic: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
Thread-Index: AQHWu+mg+1214G0SPk6QUrcu7ikNvanKcouAgAAAzsA=
Date: Mon, 16 Nov 2020 08:51:34 +0000
Message-ID:
 <AM0PR08MB3747684AD1D674F3AB6FAE5F9EE30@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <20201116072422.17400-1-michal.orzel@arm.com>
 <2F0882B0-7FBA-477C-88C2-FF0734E85F07@arm.com>
In-Reply-To: <2F0882B0-7FBA-477C-88C2-FF0734E85F07@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B95498B267D7BA44B5F7857EFA78283C.0
x-checkrecipientchecked: true
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d00c2b48-a0c8-4202-ec20-08d88a0cd789
x-ms-traffictypediagnostic: AM9PR08MB6275:|AM4PR08MB2916:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM4PR08MB291636CA7860A6DCA2EEC3E39EE30@AM4PR08MB2916.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VGISw3R5ea98JIQiF/1BI11UZ5m1l2qAivZkQtpwgEnUb8Sk37x2mKrXdozQElHVtdRGinxwy8MIfmfuHAU7+JdWJ6s2n6Oq6cL0ZZCceyx3aDlAyiy9jHubnhG4OTmStC7BLWeQaopUP5O0rdf8mBAXcVcMMjt9JHWs40sF9kHI/Anp5dOtORWoevUIDTJPbtbk4NsMdFk1cOGx6c2DCpmDR0ed7lBNpE3I/HzhpOKXoJRhTghaoCGixon2k/B7mwAUKcLzTShMSm4T1l1rFCWHod9aanAwEH7lmKoKv2UFNbIBllt6tRRQ4OZKBeME15xOEQaMLrGsoc0Jqe/FTQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3747.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(136003)(346002)(396003)(376002)(66446008)(64756008)(66556008)(66476007)(66946007)(2906002)(76116006)(83380400001)(7696005)(186003)(53546011)(52536014)(5660300002)(33656002)(26005)(110136005)(54906003)(6506007)(316002)(8676002)(8936002)(55016002)(9686003)(71200400001)(86362001)(478600001)(6636002)(4326008)(19627235002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 L0WMfn4S5ELaTv0c8xhImjb4As27lCFOaTH7qZECAtSfsWs2hNx3vNgBJOolDlZedtPkNSf1dfRbJ1NEbCWupgri0HBrx3ExHJtS8zWRC7afS7VjgMOzslkn+LCNO27MZgkPKMzGtESdet1oEG3Wxaj9fb771OxRy79DmE2MIvHMyA+y2Aiu9rDU1cCTSsZCd1WyDSB9HWTRBpDeyUuzR6gCp8wwOgLIHYzxaM/OT2Ty8O7pDkbJqy/7tRVLL/J6WS5VeXSB2IXAqE6QD/qMvzfNi18lNG4J5aiP/9jKU4ucj+gD0m0C00AsGmElUb2GiKtm+wG591vFtpBnEWuiuCjGHW9FosbefsykswlFjiJy9qIbqb7areajCudezFF+/EfrAFCq8SbPClxW+bNBxLizQaabiAVfdHxeSwkTOMdpsZkfs+XbElNsIOBR9XLAORRWmL6jDMtxO/gCSls1sS1tzsa1v5ZiRdWylZiQ5fWOfYVzQ1bGcLE4vrR+VZs2ZdPqxnGTnO5XJNSh86zvqoDol7l7Vaq+geLPwH1xCmFNBK18a7VXmFFCTJo/DKJ3u+pytwETk3RPuvbLBPHeL2n7LAWsFjtnGIXHw6otzgw7hLVxl1eDMja/cJKoGxR47F26X5cQCUtVIf0ZVZi9OiKEQiraTV6dgdG7Wc63933BY+49nP2JC8KBfV423WoBYatbdPvDxRvgamd5AErIm+B/XP760aVaDOn3TG+h1RXtUwKyaDdhVg65WBqs8oJcc6QF0vtgmN4aHbNv0/xvOyWw2b7tmRHtO+ELgkLSEGoZB3AwvaIK52rAsAW4GkNWP04iEDxh5GLjXwxLbuwExmaOLYfF4lThQ91I4Cqmm79FKj72g/s5JrPXRw987m8zMH3NqZwoI3pEaS/nWw/Zuw==
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6275
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f1a4f990-472e-4c5e-1b42-08d88a0cd227
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6zHPFhLGXgwytWwjb6Bk+cNcY3l/V6hmD6TJf/CcnYUNWnql7VJQUdSRgeiio1sFLT26VmbyRCyb7QiC5W8QiTrBjHCSJrWaRNPwzYVj7NPhRsw0heP++Zv/iuCcjvsv0ddY/mqGXZj/zNh7fbonhBjU7LhWYiAG3H5BJFP4nhXQJCqtgNBzlbGTpdG6VtPeSgVoyi/nf2d0uLmYxe5xabQx98nl2NMyIFo+fmWKLwfZF3dWV7sxnOyBTimGzgY3hipw0IoOctCU3HTf5Y2KG4ws6QSPmV0XZpM0TV3m1x5Rj8UA2VUztQ7ogt0YW53RWCxCPBPyCvKwD37ikwhOIwWzQtIK0ngj9DP1tgDPl8+XTEMls2Dw3yneX7/SfL9YjCeloNAldHQBwg3lAl3QFw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39860400002)(376002)(136003)(346002)(46966005)(107886003)(336012)(47076004)(82740400003)(52536014)(5660300002)(81166007)(356005)(26005)(8676002)(6636002)(6506007)(53546011)(4326008)(86362001)(8936002)(186003)(7696005)(478600001)(316002)(19627235002)(70206006)(70586007)(36906005)(55016002)(9686003)(82310400003)(33656002)(83380400001)(2906002)(54906003)(110136005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Nov 2020 08:51:43.1946
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d00c2b48-a0c8-4202-ec20-08d88a0cd789
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2916

Hi,

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Bertrand Marquis
> Sent: 16 November 2020 16:48
> To: Michal Orzel <Michal.Orzel@arm.com>
> Cc: open list:X86 <xen-devel@lists.xenproject.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Volodymyr Babchuk
> <Volodymyr_Babchuk@epam.com>
> Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
> erratum #1286807
>
> Hi,
>
> > On 16 Nov 2020, at 07:24, Michal Orzel <Michal.Orzel@arm.com> wrote:
> >
> > On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
> > if a virtual address for a cacheable mapping of a location is being
> > accessed by a core while another core is remapping the virtual
> > address to a new physical page using the recommended break-before-make
> > sequence, then under very rare circumstances TLBI+DSB completes before
> > a read using the translation being invalidated has been observed by
> > other observers. The workaround repeats the TLBI+DSB operation.
> >
> > Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>

Reviewed-by: Wei Chen <Wei.Chen@arm.com>

> Cheers
> Bertrand
>
> > ---
> > docs/misc/arm/silicon-errata.txt     |  2 ++
> > xen/arch/arm/Kconfig                 | 18 +++++++++++++++++
> > xen/arch/arm/cpuerrata.c             | 14 ++++++++++++++
> > xen/include/asm-arm/arm64/flushtlb.h | 29 +++++++++++++++++++++---------
> > xen/include/asm-arm/cpufeature.h     |  3 ++-
> > 5 files changed, 56 insertions(+), 10 deletions(-)
> >
> > diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> > index 552c4151d3..d183ba543f 100644
> > --- a/docs/misc/arm/silicon-errata.txt
> > +++ b/docs/misc/arm/silicon-errata.txt
> > @@ -53,5 +53,7 @@ stable hypervisors.
> > | ARM            | Cortex-A72      | #853709         | N/A                     |
> > | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
> > | ARM            | Cortex-A76      | #1165522        | N/A                     |
> > +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
> > | ARM            | Neoverse-N1     | #1165522        | N/A                     |
> > +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
> > | ARM            | MMU-500         | #842869         | N/A                     |
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > index f938dd21bd..5d6d906d72 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -244,6 +244,24 @@ config ARM_ERRATUM_858921
> >
> > 	  If unsure, say Y.
> >
> > +config ARM64_ERRATUM_1286807
> > +	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
> > +	default y
> > +	depends on ARM_64
> > +	help
> > +	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
> > +
> > +	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
> > +	  address for a cacheable mapping of a location is being
> > +	  accessed by a core while another core is remapping the virtual
> > +	  address to a new physical page using the recommended
> > +	  break-before-make sequence, then under very rare circumstances
> > +	  TLBI+DSB completes before a read using the translation being
> > +	  invalidated has been observed by other observers. The
> > +	  workaround repeats the TLBI+DSB operation.
> > +
> > +	  If unsure, say Y.
> > +
> > endmenu
> >
> > config ARM64_HARDEN_BRANCH_PREDICTOR
> > diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> > index 567911d380..cb4795beec 100644
> > --- a/xen/arch/arm/cpuerrata.c
> > +++ b/xen/arch/arm/cpuerrata.c
> > @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
> >                    (1 << MIDR_VARIANT_SHIFT) | 2),
> >     },
> > #endif
> > +#ifdef CONFIG_ARM64_ERRATUM_1286807
> > +    {
> > +        /* Cortex-A76 r0p0 - r3p0 */
> > +        .desc = "ARM erratum 1286807",
> > +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> > +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
> > +    },
> > +    {
> > +        /* Neoverse-N1 r0p0 - r3p0 */
> > +        .desc = "ARM erratum 1286807",
> > +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> > +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
> > +    },
> > +#endif
> > #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
> >     {
> >         .capability = ARM_HARDEN_BRANCH_PREDICTOR,
> > diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> > index ceec59542e..6726362211 100644
> > --- a/xen/include/asm-arm/arm64/flushtlb.h
> > +++ b/xen/include/asm-arm/arm64/flushtlb.h
> > @@ -9,6 +9,11 @@
> >  * DSB ISH          // Ensure the TLB invalidation has completed
> >  * ISB              // See explanation below
> >  *
> > + * ARM64_WORKAROUND_REPEAT_TLBI:
> > + * Modification of the translation table for a virtual address might lead to
> > + * read-after-read ordering violation.
> > + * The workaround repeats TLBI+DSB operation.
> > + *
> >  * For Xen page-tables the ISB will discard any instructions fetched
> >  * from the old mappings.
> >  *
> > @@ -16,15 +21,21 @@
> >  * (and therefore the TLB invalidation) before continuing. So we know
> >  * the TLBs cannot contain an entry for a mapping we may have removed.
> >  */
> > -#define TLB_HELPER(name, tlbop) \
> > -static inline void name(void)   \
> > -{                               \
> > -    asm volatile(               \
> > -        "dsb  ishst;"           \
> > -        "tlbi "  # tlbop  ";"   \
> > -        "dsb  ish;"             \
> > -        "isb;"                  \
> > -        : : : "memory");        \
> > +#define TLB_HELPER(name, tlbop)       \
> > +static inline void name(void)         \
> > +{                                     \
> > +    asm volatile(                     \
> > +        "dsb  ishst;"                 \
> > +        "tlbi "  # tlbop  ";"         \
> > +        ALTERNATIVE(                  \
> > +        "nop; nop;",                  \
> > +        "dsb  ish;"                   \
> > +        "tlbi "  # tlbop  ";",        \
> > +        ARM64_WORKAROUND_REPEAT_TLBI, \
> > +        CONFIG_ARM64_ERRATUM_1286807) \
> > +        "dsb  ish;"                   \
> > +        "isb;"                        \
> > +        : : : "memory");              \
> > }
> >
> > /* Flush local TLBs, current VMID only. */
> > diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> > index 016a9fe203..c7b5052992 100644
> > --- a/xen/include/asm-arm/cpufeature.h
> > +++ b/xen/include/asm-arm/cpufeature.h
> > @@ -46,8 +46,9 @@
> > #define ARM_SMCCC_1_1 8
> > #define ARM64_WORKAROUND_AT_SPECULATE 9
> > #define ARM_WORKAROUND_858921 10
> > +#define ARM64_WORKAROUND_REPEAT_TLBI 11
> >
> > -#define ARM_NCAPS           11
> > +#define ARM_NCAPS           12
> >
> > #ifndef __ASSEMBLY__
> >
> > --
> > 2.28.0
> >
>
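The control flow the patch in this thread implements can be illustrated in plain C. This is only a hypothetical model for readers: the real Xen code is arm64 inline assembly selected at boot by the ALTERNATIVE patching framework, and the tlbi()/dsb_ish() functions and counters below are stand-ins invented here so the doubled TLBI+DSB sequence is observable on any machine.

```c
#include <assert.h>
#include <stdbool.h>

/* Counters standing in for the real barriers, so the effect of the
 * erratum 1286807 workaround (repeat TLBI+DSB) can be observed. */
static int tlbi_count, dsb_count;

static void tlbi(void)    { tlbi_count++; } /* models "tlbi <op>" */
static void dsb_ish(void) { dsb_count++;  } /* models "dsb ish"   */

/* Mirrors the patched TLB_HELPER macro: on unaffected cores the
 * ALTERNATIVE slot holds "nop; nop" and the TLBI+DSB runs once; on
 * cores with ARM64_WORKAROUND_REPEAT_TLBI set, the slot is patched to
 * an extra "dsb ish; tlbi", so the invalidation is issued twice. */
static void flush_tlb(bool repeat_tlbi_workaround)
{
    /* "dsb ishst" (make prior page-table writes visible) elided */
    tlbi();
    if (repeat_tlbi_workaround) /* ALTERNATIVE("nop; nop;", ...) */
    {
        dsb_ish();
        tlbi();
    }
    dsb_ish();
    /* "isb" (discard stale fetched instructions) elided */
}
```

The point the model makes explicit: the workaround costs one extra DSB and one extra TLBI per flush, and only on affected Cortex-A76/Neoverse-N1 revisions (r0p0 to r3p0); everywhere else the patched-in nops keep the fast path.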


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 09:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 09:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27804.56354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keb3O-0003o2-QM; Mon, 16 Nov 2020 09:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27804.56354; Mon, 16 Nov 2020 09:44:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keb3O-0003nv-NE; Mon, 16 Nov 2020 09:44:06 +0000
Received: by outflank-mailman (input) for mailman id 27804;
 Mon, 16 Nov 2020 09:44:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keb3N-0003nN-Bu
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 09:44:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c711dea-1eec-4d39-a7b8-3e14a9525ab7;
 Mon, 16 Nov 2020 09:43:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keb3F-0007C8-Im; Mon, 16 Nov 2020 09:43:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keb3F-0002zk-9E; Mon, 16 Nov 2020 09:43:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keb3F-0003vt-8m; Mon, 16 Nov 2020 09:43:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156814-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156814: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 09:43:57 +0000

flight 156814 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156814/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156807

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156807
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156807
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156807
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156807
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156807
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156807
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156807
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156807
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156807
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156807
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156807
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156814  2020-11-16 01:52:22 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 09:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 09:49:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27777.56367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keb8R-000409-F1; Mon, 16 Nov 2020 09:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27777.56367; Mon, 16 Nov 2020 09:49:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keb8R-000402-BF; Mon, 16 Nov 2020 09:49:19 +0000
Received: by outflank-mailman (input) for mailman id 27777;
 Mon, 16 Nov 2020 08:57:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KO4f=EW=prevas.dk=rasmus.villemoes@srs-us1.protection.inumbo.net>)
 id 1keaKK-00082R-NT
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 08:57:33 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.122]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47e6b9d7-c063-460a-8622-5a91e36348ec;
 Mon, 16 Nov 2020 08:57:29 +0000 (UTC)
Received: from AM0PR10MB1874.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:3f::10)
 by AM9PR10MB4404.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:20b:26f::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Mon, 16 Nov
 2020 08:57:28 +0000
Received: from AM0PR10MB1874.EURPRD10.PROD.OUTLOOK.COM
 ([fe80::9dc3:9785:dc4e:ad62]) by AM0PR10MB1874.EURPRD10.PROD.OUTLOOK.COM
 ([fe80::9dc3:9785:dc4e:ad62%6]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020
 08:57:28 +0000
Received: from [192.168.1.149] (5.186.115.188) by
 AM5PR0701CA0070.eurprd07.prod.outlook.com (2603:10a6:203:2::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.15 via Frontend
 Transport; Mon, 16 Nov 2020 08:57:26 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XqFMujluC4+O+8vVvlUhTo+FuwcO67sZhl7HXvCaH9cWRWtKT0gK0d2KHBLEfuJ6sRRr0OONtzMq/5QR39cQ6PxHQhCzG7qzIoJWVnv8JJgXjAUV3XyGfKMym2+UwzQewPq9rlhOmWbTXkE7w+Vh4FpZvZx5mHPs5jNF0WPwqyCQN//duDSyCweQvS5ocQapLfrAXSPjpWhFFbmA43sE0MNVDrpPdmomVXFSYadf0WKhFxVJiCVJVsUJD8gDRfu42dLZfALruodp4j+SFff4yDB/ufKnvFr7vpXPG4l6ka0+mfVeLQ2so9D4gv3Nri7/1016JcUVo/aX6l2E/qP9/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gb1mYEH6RieZNFdbTWVSUF50n4VJM1Gg9ilylfzVafw=;
 b=Ys/deUSSXRPGrW0PSQInno+CRz45Z/8tsERQI2Br5727Zo9daJ9j+NFIRhJu0qcw5CHEthrf+g4MabMInyVaKvewmaOiPw+wbthM+rKT1xRhHfuBC/iKBBWqazUlDoqo0jkG0plEachOQderdiLqx6MHSl3RnrrUgK10qNUCcjnT2jA2UKqjtDUdpAOz1tt3C0NoJg78X8ACTRnHQ0VFMu0jh+SaJKUsz9y/WR4gyApg80QW+xsY6ALYLPJT5t5y8hdMRgrmGW6s0MlB7RhVjsyt+IKVAeYAYlZSS/Ubq2b6H36fyymFhRPZivjNOAI9T/cDqmre+MxhFSY1QRQNZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=prevas.dk; dmarc=pass action=none header.from=prevas.dk;
 dkim=pass header.d=prevas.dk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=prevas.dk;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gb1mYEH6RieZNFdbTWVSUF50n4VJM1Gg9ilylfzVafw=;
 b=SqzPcLihILhRuJO5r7CIPjOsJTZP7fRgWX88o0aqBUplmZ5M5j9Uq22A1CmF1QQE4LGcds4IDHbsMFOGJ8sgZ0iQqzZuhEESL494rETBqctzf1oswcRJYnnNmU9fPna5uLs/bvQSVbzxAFElGD/GPWWa4oud/KX0dQbqmwvDT+Q=
Authentication-Results: zytor.com; dkim=none (message not signed)
 header.d=none;zytor.com; dmarc=none action=none header.from=prevas.dk;
Subject: Re: Antw: [EXT] [systemd-devel] [SPECIFICATION RFC] The firmware and
 bootloader log specification
To: Ulrich Windl <Ulrich.Windl@rz.uni-regensburg.de>, coreboot@coreboot.org,
 grub-devel@gnu.org, trenchboot-devel@googlegroups.com, x86@kernel.org,
 u-boot@lists.denx.de,
 "systemd-devel@lists.freedesktop.org" <systemd-devel@lists.freedesktop.org>,
 xen-devel@lists.xenproject.org, daniel.kiper@oracle.com,
 linux-kernel@vger.kernel.org
Cc: krystian.hebel@3mdeb.com, michal.zygowski@3mdeb.com,
 piotr.krol@3mdeb.com, mtottenh@akamai.com, luto@amacapital.net,
 dpsmith@apertussolutions.com, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, allen.cryptic@gmail.com, btrotter@gmail.com,
 phcoder@gmail.com, lukasz.hawrylko@intel.com, ard.biesheuvel@linaro.org,
 tyhicks@linux.microsoft.com, pmenzel@molgen.mpg.de, hun@n-dimensional.de,
 leif@nuviainc.com, alexander.burmashev@oracle.com, eric.devolder@oracle.com,
 eric.snowberg@oracle.com, joao.m.martins@oracle.com,
 kanth.ghatraju@oracle.com, konrad.wilk@oracle.com,
 ross.philipson@oracle.com, javierm@redhat.com, pjones@redhat.com,
 alecb@umass.edu, "H. Peter Anvin" <hpa@zytor.com>
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
 <5FB223E8020000A10003CB88@gwsmtp.uni-regensburg.de>
From: Rasmus Villemoes <rasmus.villemoes@prevas.dk>
Message-ID: <2d920ff2-68cb-69f5-f2d9-47131ec1b512@prevas.dk>
Date: Mon, 16 Nov 2020 09:57:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <5FB223E8020000A10003CB88@gwsmtp.uni-regensburg.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [5.186.115.188]
X-ClientProxiedBy: AM5PR0701CA0070.eurprd07.prod.outlook.com
 (2603:10a6:203:2::32) To AM0PR10MB1874.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:3f::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 51df6655-3ee2-480b-88ea-08d88a0da4d0
X-MS-TrafficTypeDiagnostic: AM9PR10MB4404:
X-Microsoft-Antispam-PRVS:
	<AM9PR10MB4404A174825A369F20F04BBC93E30@AM9PR10MB4404.EURPRD10.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Hu4BhxR9oCJwEru+vqpQVtzm7o52z+tCM7zAnzWlKgMRJFFOuAulwMYUzKqtQZ7AR2LZzxCts8yBLwW8yfOx+5B1ccyxoUGLEXm+bMdDnmN4X1IlGYMJN/Ouaiq3VxhkpRtevgP2C89qdsmrS2FgTnLH+rkrEZXtKbJvygGtT+ym0+S/kcrcn4BqTHndyCupJVZQu9YASYurpBG5w6efTob2tN3ZMrDmcIhJoC7DFgGSV6T00LTmmURpdXC2xrZb73QQdAsSwN+9ExVq8/9Cd2bZu3DSCAQPZ+qV0O/XeTbYZZdcvWIDMk12Z6eDQvhAiemCGz1LvintrJ9mm5NkJG4kU5qnGi57j+vGUYMjFtYrzTMAdAQFfjELo1AoP0qSEl8euhF7FfXQjSbeT20/Xg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR10MB1874.EURPRD10.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(376002)(346002)(396003)(366004)(136003)(39840400004)(110136005)(6486002)(478600001)(5660300002)(2906002)(16576012)(8676002)(316002)(31696002)(86362001)(44832011)(66946007)(66476007)(66556008)(36756003)(2616005)(956004)(7416002)(83380400001)(7406005)(4326008)(31686004)(8936002)(921005)(16526019)(8976002)(186003)(52116002)(26005)(4744005)(43740500002);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData:
	UWzoybq3nNTkbbTk8hr9AtM6lC67kIrU1czuB9edRIVc4u759capC7V4HQW7GfcnqoBUek0lgwL+EK0tlFRiB6LJRoxID5P6ks1iP74HPqwrnpRWTeUaeh1G/U/OuBYSH4hjyYOaAB2xn69pnNNDTl0Xl2H5pnph6eGNdh1h7vzTnLcHt/I089enW1BopLG4Ftk3HQkEGU8RqQ7akO8SjZ+JyPBrEBSPgBRheiftJpWxogObGPh7zTmkrFYaJHWwKZ+21w9ItNv1Iklwv2KkkPc2KDrs25Xs6uQaZVeCHF6bHebwLGWb8qoyy+qpvsqFIPVbFAn8cSXkaf94aa6FEOfBoaWG+kx50Po+z1nGuNkILyEQtIsBgW732s1AWuKnKWFK8mNCZtn4J9tnBa4dLmVabpuucyqqHjIGE+0xKHNsUm/pRnpP7ZpF4A69LLycXkNu0vQ73rqKlMKwUJCf2R/A1aRlwhTa9druJqRP0FX2gb80sxtkTt9/GWBUV+mZ9HYOtjWcLugxQeKfaJWz1zvj42mFqyAie93ETPSnBE4KhZrnUfksV84ezwVCkdKNf+MC/NuqqXEDebR29cf7G78S+czGMztr/+PmyjKoTVRU96bsz2lmRcoAAEZmlYkrCS4yQtoOJgHtEGJSUR1g9Q==
X-OriginatorOrg: prevas.dk
X-MS-Exchange-CrossTenant-Network-Message-Id: 51df6655-3ee2-480b-88ea-08d88a0da4d0
X-MS-Exchange-CrossTenant-AuthSource: AM0PR10MB1874.EURPRD10.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Nov 2020 08:57:28.0211
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: d350cf71-778d-4780-88f5-071a4cb1ed61
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PmGFQ99hS0Z5oBfPElVCfR+/nsMklXhz+Y0C2IZYwtZegDmUt78DCbFctVhWLG8zVFJBKvQc4iAq3GwhYZySGDo4PFskOBKd6Osed0Yhfbo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR10MB4404

On 16/11/2020 08.02, Ulrich Windl wrote:
>>>> Daniel Kiper <daniel.kiper@oracle.com> wrote on 14.11.2020 at 00:52 in
> message <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>:
> ...
>> The members of struct bf_log_msg:
>>   ‑ size: total size of bf_log_msg struct,
>>   ‑ ts_nsec: timestamp expressed in nanoseconds starting from 0,
> 
> Who or what defines t == 0?

Some sort of "clapperboard" log entry, stating "the RTC says X, the
cycle counter is Y, the onboard ACME atomic clock says Z, I'm now
starting to count ts_nsec from W" might be useful for some eventual
userspace tool to try to stitch together the log entries from the
various stages. I have no idea what a formal spec of such an entry would
look like, or if it's even feasible to do formally. But even just such
entries in free-form prose could at least help a human consumer.

Rasmus


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 10:12:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 10:12:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27818.56378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kebVD-0006b1-CE; Mon, 16 Nov 2020 10:12:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27818.56378; Mon, 16 Nov 2020 10:12:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kebVD-0006au-9D; Mon, 16 Nov 2020 10:12:51 +0000
Received: by outflank-mailman (input) for mailman id 27818;
 Mon, 16 Nov 2020 10:12:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kebVC-0006ap-K4
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 10:12:50 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kebVB-0008Nj-Tl; Mon, 16 Nov 2020 10:12:50 +0000
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com
References: <20201116072422.17400-1-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3c9cd4af-da71-e000-eedb-683c2bc0a40e@xen.org>
Date: Mon, 16 Nov 2020 10:12:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201116072422.17400-1-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 16/11/2020 07:24, Michal Orzel wrote:
> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
> if a virtual address for a cacheable mapping of a location is being
> accessed by a core while another core is remapping the virtual
> address to a new physical page using the recommended break-before-make
> sequence, then under very rare circumstances TLBI+DSB completes before
> a read using the translation being invalidated has been observed by
> other observers. The workaround repeats the TLBI+DSB operation.

The commit message suggests the second TLBI+DSB operation is only
necessary for inner-shareable TLBs.

I agree that it is probably best to cover all the TLB flush
operations. However, it would be good to clarify in the commit
message that this is done on purpose.

> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>   docs/misc/arm/silicon-errata.txt     |  2 ++
>   xen/arch/arm/Kconfig                 | 18 +++++++++++++++++
>   xen/arch/arm/cpuerrata.c             | 14 ++++++++++++++
>   xen/include/asm-arm/arm64/flushtlb.h | 29 +++++++++++++++++++---------
>   xen/include/asm-arm/cpufeature.h     |  3 ++-
>   5 files changed, 56 insertions(+), 10 deletions(-)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 552c4151d3..d183ba543f 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -53,5 +53,7 @@ stable hypervisors.
>   | ARM            | Cortex-A72      | #853709         | N/A                     |
>   | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
>   | ARM            | Cortex-A76      | #1165522        | N/A                     |
> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
>   | ARM            | Neoverse-N1     | #1165522        | N/A
> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
>   | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f938dd21bd..5d6d906d72 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,24 @@ config ARM_ERRATUM_858921
>   
>   	  If unsure, say Y.
>   
> +config ARM64_ERRATUM_1286807
> +	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
> +	default y
> +	depends on ARM_64
> +	help
> +	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
> +
> +	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
> +	  address for a cacheable mapping of a location is being
> +	  accessed by a core while another core is remapping the virtual
> +	  address to a new physical page using the recommended
> +	  break-before-make sequence, then under very rare circumstances
> +	  TLBI+DSB completes before a read using the translation being
> +	  invalidated has been observed by other observers. The
> +	  workaround repeats the TLBI+DSB operation.
> +
> +	  If unsure, say Y.
> +
>   endmenu
>   
>   config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 567911d380..cb4795beec 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>                      (1 << MIDR_VARIANT_SHIFT) | 2),
>       },
>   #endif
> +#ifdef CONFIG_ARM64_ERRATUM_1286807
> +    {
> +        /* Cortex-A76 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +    {
> +        /* Neoverse-N1 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +#endif
>   #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>       {
>           .capability = ARM_HARDEN_BRANCH_PREDICTOR,
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index ceec59542e..6726362211 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -9,6 +9,11 @@
>    * DSB ISH          // Ensure the TLB invalidation has completed
>    * ISB              // See explanation below
>    *
> + * ARM64_WORKAROUND_REPEAT_TLBI:
> + * Modification of the translation table for a virtual address might lead to
> + * read-after-read ordering violation.
> + * The workaround repeats TLBI+DSB operation.
> + *
>    * For Xen page-tables the ISB will discard any instructions fetched
>    * from the old mappings.
>    *
> @@ -16,15 +21,21 @@
>    * (and therefore the TLB invalidation) before continuing. So we know
>    * the TLBs cannot contain an entry for a mapping we may have removed.
>    */
> -#define TLB_HELPER(name, tlbop) \
> -static inline void name(void)   \
> -{                               \
> -    asm volatile(               \
> -        "dsb  ishst;"           \
> -        "tlbi "  # tlbop  ";"   \
> -        "dsb  ish;"             \
> -        "isb;"                  \
> -        : : : "memory");        \
> +#define TLB_HELPER(name, tlbop)       \
> +static inline void name(void)         \
> +{                                     \
> +    asm volatile(                     \
> +        "dsb  ishst;"                 \
> +        "tlbi "  # tlbop  ";"         \
> +        ALTERNATIVE(                  \
> +        "nop; nop;",                  \

This is a bit difficult to read. I would suggest indenting the
arguments of ALTERNATIVE() by an extra soft tab.
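
Something along these lines, i.e. the quoted hunk with only the
ALTERNATIVE() arguments re-indented (the tail of the macro is elided,
as in the quote above):

```c
        "tlbi "  # tlbop  ";"             \
        ALTERNATIVE(                      \
            "nop; nop;",                  \
            "dsb  ish;"                   \
            "tlbi "  # tlbop  ";",        \
            ARM64_WORKAROUND_REPEAT_TLBI, \
            CONFIG_ARM64_ERRATUM_1286807) \
```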

> +        "dsb  ish;"                   \
> +        "tlbi "  # tlbop  ";",        \
> +        ARM64_WORKAROUND_REPEAT_TLBI, \
> +        CONFIG_ARM64_ERRATUM_1286807) \

I think it would be fine to have this code built unconditionally. But if
you prefer to keep it conditional, then I would suggest introducing
CONFIG_ARM64_WORKAROUND_REPEAT_TLBI and gating the ALTERNATIVE with it.

The new config would be selected by CONFIG_ARM64_ERRATUM_1286807. This
will make it easier for future workarounds to use it (AFAICT there are
other platforms that require the same thing).
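
For illustration, the Kconfig wiring could look roughly like the sketch
below (not the actual patch; Linux uses the same select-a-workaround
pattern for its ARM64_WORKAROUND_REPEAT_TLBI):

```
config ARM64_WORKAROUND_REPEAT_TLBI
	bool
	depends on ARM_64

config ARM64_ERRATUM_1286807
	bool "Cortex-A76/Neoverse-N1: 1286807: ..."
	default y
	depends on ARM_64
	select ARM64_WORKAROUND_REPEAT_TLBI
	help
	  ...
```

The ALTERNATIVE in flushtlb.h would then be guarded by
CONFIG_ARM64_WORKAROUND_REPEAT_TLBI rather than by the erratum number,
so a future erratum only needs to add its own MIDR entry and a
"select ARM64_WORKAROUND_REPEAT_TLBI".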

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 10:34:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 10:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27831.56391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kebpz-0008Rd-C9; Mon, 16 Nov 2020 10:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27831.56391; Mon, 16 Nov 2020 10:34:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kebpz-0008RW-73; Mon, 16 Nov 2020 10:34:19 +0000
Received: by outflank-mailman (input) for mailman id 27831;
 Mon, 16 Nov 2020 10:34:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8hTe=EW=epam.com=prvs=95896be81b=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kebpx-0008RR-EB
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 10:34:17 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adb109d3-a122-45a2-9ced-bbc8f7bd828e;
 Mon, 16 Nov 2020 10:34:16 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0AGATb8A013128; Mon, 16 Nov 2020 10:34:02 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2054.outbound.protection.outlook.com [104.47.0.54])
 by mx0b-0039f301.pphosted.com with ESMTP id 34t6j5415e-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 16 Nov 2020 10:34:01 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6963.eurprd03.prod.outlook.com (2603:10a6:20b:2d5::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Mon, 16 Nov
 2020 10:33:53 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020
 10:33:53 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BuJ7HV1T0cHcbLvMzLyCk/O8ZFitgpCJmmPggnotXGqfC8axrM/gQpolRoplmWkS/nqcTN23ameq+kGmDeGH+xz1c55dnIjeb7A9WHgnPaiiTMFpHvw1We8SOADET++Q57rGUGfXRnnGfwTnS1b2oE33NM9T5MLIn67OPd82c6JUkX5Tj4rWrRiQxQy7fDMRAYDSAlZS0e7HwcNMbJhr/R75vet/GnBfi8FWgT1eC5J1VcO/704ktTRgrhGrdtClY8+hA6CvNhqfcjmRrPT/72i4bEororjbgdPUlFt3dYZmlgNmeEYoLAaEhLDRjsL/ukZdCODyzBWqKz+pJRheEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bYVC/6PQpUFtmn4wYSMOF5SxTM28JioaejaFgQ+u3is=;
 b=chQt2PV5CPqAn4BQWeisOnav/knwpvRnlnnTwizlP6cqY2AhN/bgTtTHq+aYs2ARfUyBUrky4a2AGZ0dWqHbFs8nQeqUmkjrTbUbDPV/g19/ZDbTwL65FmiaC2V7oyQKoqKB+SA+bIJXGSOLgbZHq5pzAipTWGvMNHIsnEem1XSQ6BFqDcNAQYWDJM5NV3WUtG4zLscxnMIi7viFBgngbgMPu9PoqXFn8M2RbguzcMWpGvVaLeQhdbQWalJitO0x2DLa4cadUBGduzLQcyXuASkLevEouM5EkFoCa6bZgJELotZX4NYQBFWHMav9EyDjaPPlQ5aHJdhwOLAU9cVVKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bYVC/6PQpUFtmn4wYSMOF5SxTM28JioaejaFgQ+u3is=;
 b=kTOPMutf0LAlVe33O9z4EDUrBgIXxcCGzVNzcEl4ZOwoonN1QocLkJ1OyN0avz3hAL1iLsAThm0FYGQkmNyaNUy71pERAPBO2rcXa9v9SjMgIFsGMBGxIdGlrpD5io5wUuJrESdNXXDLF4wTcoKQXpkyq3ccNTYO9bsvFdE3vG8m7gFqas0s0drBZg9HZZzcVRuGW3JxEv4cAS0dPsxgcXDG/sokpnEGYJ/GUAQNFBbQl/8h9J6ofrV2xFctW3VulkKFKmptBUuVEhAA4eVCR7EFFs+SNz77WDhPRovynurZ6JQenPyv6KUorFLNa9tQrBqHdyFFVSJszSaKFBG3gA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Paul Durrant <paul@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>,
        Anthony PERARD
	<anthony.perard@citrix.com>,
        Christian Lindig <christian.lindig@citrix.com>,
        David Scott <dave@recoil.org>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian
 Jackson <iwj@xenproject.org>,
        Nick Rosbrook <rosbrookn@ainfosec.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v2 00/24] xl / libxl: named PCI pass-through devices
Thread-Topic: [PATCH v2 00/24] xl / libxl: named PCI pass-through devices
Thread-Index: AQHWvAP66Z/45wYQrE+vwtERf0Yb4Q==
Date: Mon, 16 Nov 2020 10:33:53 +0000
Message-ID: <f8393cbe-d32f-9b45-049f-ec67a73e7c15@epam.com>
References: <20201110175147.7067-1-paul@xen.org>
In-Reply-To: <20201110175147.7067-1-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.245.220]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2fde2f51-6e61-4334-88a7-08d88a1b1d48
x-ms-traffictypediagnostic: AM9PR03MB6963:
x-microsoft-antispam-prvs: 
 <AM9PR03MB69636DAD1356A2826F42552EE7E30@AM9PR03MB6963.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4502;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 dQSgUyNsJdGMcBUByTnt0fF6bP/umJzBSl8gCqdSrb2SxoLN1iV4kV1vziRn0L7aSXZ9ekn/4Iwnvu4gNqjR2d3wxEP7P25ApQyLnpcGn2ZkdMudVpD0bwtuFScZz3ABW7F+aB4Ms8JwRTR7gX295BdWvDDuE3lscbc4y3pDNFkXS6odBfrWEejI63Acalm9d9x9O5sYVVMPYdw62ywd5+SHkVe6kbn38mqUT7OL0RqyhlyWZLn7EPMasIFUcqO0S4yjC/oW1ZTKUhH6vXQJr8WTljmAlhTIW/dPZLqI/iARAIOXlzF56XKgMVgDv3rB
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(376002)(396003)(39860400002)(136003)(31696002)(8936002)(71200400001)(36756003)(6486002)(53546011)(6512007)(6506007)(2906002)(478600001)(86362001)(55236004)(26005)(186003)(66556008)(66476007)(66446008)(64756008)(5660300002)(7416002)(83380400001)(66946007)(31686004)(2616005)(91956017)(8676002)(76116006)(316002)(110136005)(54906003)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 0J756hYtO39B2/S8sxV2NGJ5RoFkaCCNa0bIQ/oLZSTz7GWD1KzUZTiAO6aDsWMSDPDIPFq0su0/ZkHyWPat5s4zDH4JoyGp8f9vbKWLQVJuEr9pDeC+2tfrpOPyTcaNQ6eoYHVV0ISGWdVhONjJAY2eiEnIrDnior8a+AgmCOQ8ncwqUH88brg2A7CsTdIhk1OLG3bSQPjxyfttZAidz76i0jeOy9lrx20zkTVkqXzM6wi/gcJotdMpmIJDEQ/JCD5FuoYs1azryQ5wcRjSYf3mfZjk8hGRFGDdrpgSUS/f0BWqE64w9VQfxD9jSx05MV6CMwrX83448d6/bUy5JVNBN6qrlvKxgL0g99ho7Zl5wM3qVyMyZN0te36ky5a3XFduwLMKMQlFN7QCt9170QTmHVOGCxJ7PRKDVPxpS+XWR5Y8lOPOBgHCJyzLpB+/X4QrZC1FVY80zGO60B97IuuMGNcLtK1ALAL8igWsrZPcRH9DszzdfNImOzKf3gZZ0K6aXNSJGfha6OMEmYT9VQpWeIZz8F4V7cuIf3lNlDbMUBSvwkCZSZ2l90HezgCmcPhpkE9cn5ANCIjujybn9xL5tKz6/WATKaNR/UVUDd0n/nxT8heU/AzW9crh7T/6sRCrOvCDr4erWHL76BRPQQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <7083A4CCC881454695A492DCC1814056@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2fde2f51-6e61-4334-88a7-08d88a1b1d48
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Nov 2020 10:33:53.1404
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6963

Hi, Paul!

On 11/10/20 7:51 PM, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Paul Durrant (24):
>    xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
>    libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
>    libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
>    libxl: s/detatched/detached in libxl_pci.c
>    libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
>    libxl: stop using aodev->device_config in libxl__device_pci_add()...
>    libxl: generalise 'driver_path' xenstore access functions in
>      libxl_pci.c
>    libxl: remove unnecessary check from libxl__device_pci_add()
>    libxl: remove get_all_assigned_devices() from libxl_pci.c
>    libxl: make sure callers of libxl_device_pci_list() free the list
>      after use
>    libxl: add libxl_device_pci_assignable_list_free()...
>    libxl: use COMPARE_PCI() macro is_pci_in_array()...
>    libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
>    libxl: Make sure devices added by pci-attach are reflected in the
>      config
>    docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
>      manpage...
>    docs/man: improve documentation of PCI_SPEC_STRING...
>    docs/man: fix xl(1) documentation for 'pci' operations
>    libxl: introduce 'libxl_pci_bdf' in the idl...
>    libxlu: introduce xlu_pci_parse_spec_string()
>    libxl: modify
>      libxl_device_pci_assignable_add/remove/list/list_free()...
>    docs/man: modify xl(1) in preparation for naming of assignable devices
>    xl / libxl: support naming of assignable devices
>    docs/man: modify xl-pci-configuration(5) to add 'name' field to
>      PCI_SPEC_STRING
>    xl / libxl: support 'xl pci-attach/detach' by name
>
>   docs/man/xl-pci-configuration.5.pod  |  218 ++++++
>   docs/man/xl.1.pod.in                 |   39 +-
>   docs/man/xl.cfg.5.pod.in             |   68 +-
>   tools/golang/xenlight/helpers.gen.go |   77 +-
>   tools/golang/xenlight/types.gen.go   |    8 +-
>   tools/include/libxl.h                |   67 +-
>   tools/include/libxlutil.h            |    8 +-
>   tools/libs/light/libxl_create.c      |    6 +-
>   tools/libs/light/libxl_dm.c          |   18 +-
>   tools/libs/light/libxl_internal.h    |   53 +-
>   tools/libs/light/libxl_nic.c         |   19 +-
>   tools/libs/light/libxl_pci.c         | 1030 ++++++++++++++------------
>   tools/libs/light/libxl_types.idl     |   19 +-
>   tools/libs/util/libxlu_pci.c         |  379 +++++-----
>   tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
>   tools/xl/xl_cmdtable.c               |   16 +-
>   tools/xl/xl_parse.c                  |   28 +-
>   tools/xl/xl_pci.c                    |  159 ++--
>   tools/xl/xl_sxp.c                    |   12 +-
>   19 files changed, 1308 insertions(+), 935 deletions(-)
>   create mode 100644 docs/man/xl-pci-configuration.5.pod

Patches 1-18:

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

(I'll probably review more later as time allows).

I would like to ask the respective maintainers to look at this series as, in the light of the upcoming changes for ARM PCI passthrough, these changes greatly help in adapting the code for ARM.

Thank you,

Oleksandr

> ---
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Christian Lindig <christian.lindig@citrix.com>
> Cc: David Scott <dave@recoil.org>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
> Cc: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 11:09:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 11:09:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27839.56403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kecNW-0002jg-3n; Mon, 16 Nov 2020 11:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27839.56403; Mon, 16 Nov 2020 11:08:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kecNW-0002jZ-0X; Mon, 16 Nov 2020 11:08:58 +0000
Received: by outflank-mailman (input) for mailman id 27839;
 Mon, 16 Nov 2020 11:08:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwWO=EW=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1kecNV-0002jU-CY
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 11:08:57 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f2b41deb-9059-4e49-960e-bef1f0523321;
 Mon, 16 Nov 2020 11:08:54 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 76EAF101E;
 Mon, 16 Nov 2020 03:08:54 -0800 (PST)
Received: from [10.57.25.95] (unknown [10.57.25.95])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 86FF53F70D;
 Mon, 16 Nov 2020 03:08:53 -0800 (PST)
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com
References: <20201116072422.17400-1-michal.orzel@arm.com>
 <3c9cd4af-da71-e000-eedb-683c2bc0a40e@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <384e250e-7657-4c9d-0d8f-6df68aacbad4@arm.com>
Date: Mon, 16 Nov 2020 12:08:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3c9cd4af-da71-e000-eedb-683c2bc0a40e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 16.11.2020 11:12, Julien Grall wrote:
> Hi Michal,
> 
> On 16/11/2020 07:24, Michal Orzel wrote:
>> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
>> if a virtual address for a cacheable mapping of a location is being
>> accessed by a core while another core is remapping the virtual
>> address to a new physical page using the recommended break-before-make
>> sequence, then under very rare circumstances TLBI+DSB completes before
>> a read using the translation being invalidated has been observed by
>> other observers. The workaround repeats the TLBI+DSB operation.
> 
> The commit message suggests the second TLBI+DSB operation is only necessary for inner-shareable TLBs.
> 
> I agree that it is probably best to cover all the TLB flush operations. However, it would be good to clarify in the commit message that this is done on purpose.
> 
Hi Julien,

Ok, I agree that such a note can be added.
I will push a v2 then.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>>   docs/misc/arm/silicon-errata.txt     |  2 ++
>>   xen/arch/arm/Kconfig                 | 18 +++++++++++++++++
>>   xen/arch/arm/cpuerrata.c             | 14 ++++++++++++++
>>   xen/include/asm-arm/arm64/flushtlb.h | 29 +++++++++++++++++++---------
>>   xen/include/asm-arm/cpufeature.h     |  3 ++-
>>   5 files changed, 56 insertions(+), 10 deletions(-)
>>
>> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
>> index 552c4151d3..d183ba543f 100644
>> --- a/docs/misc/arm/silicon-errata.txt
>> +++ b/docs/misc/arm/silicon-errata.txt
>> @@ -53,5 +53,7 @@ stable hypervisors.
>>   | ARM            | Cortex-A72      | #853709         | N/A                     |
>>   | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
>>   | ARM            | Cortex-A76      | #1165522        | N/A                     |
>> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
>>   | ARM            | Neoverse-N1     | #1165522        | N/A
>> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
>>   | ARM            | MMU-500         | #842869         | N/A                     |
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index f938dd21bd..5d6d906d72 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -244,6 +244,24 @@ config ARM_ERRATUM_858921
>>           If unsure, say Y.
>>   +config ARM64_ERRATUM_1286807
>> +    bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
>> +    default y
>> +    depends on ARM_64
>> +    help
>> +      This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
>> +
>> +      On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
>> +      address for a cacheable mapping of a location is being
>> +      accessed by a core while another core is remapping the virtual
>> +      address to a new physical page using the recommended
>> +      break-before-make sequence, then under very rare circumstances
>> +      TLBI+DSB completes before a read using the translation being
>> +      invalidated has been observed by other observers. The
>> +      workaround repeats the TLBI+DSB operation.
>> +
>> +      If unsure, say Y.
>> +
>>   endmenu
>>     config ARM64_HARDEN_BRANCH_PREDICTOR
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 567911d380..cb4795beec 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>>                      (1 << MIDR_VARIANT_SHIFT) | 2),
>>       },
>>   #endif
>> +#ifdef CONFIG_ARM64_ERRATUM_1286807
>> +    {
>> +        /* Cortex-A76 r0p0 - r3p0 */
>> +        .desc = "ARM erratum 1286807",
>> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
>> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
>> +    },
>> +    {
>> +        /* Neoverse-N1 r0p0 - r3p0 */
>> +        .desc = "ARM erratum 1286807",
>> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
>> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
>> +    },
>> +#endif
>>   #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>>       {
>>           .capability = ARM_HARDEN_BRANCH_PREDICTOR,
>> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
>> index ceec59542e..6726362211 100644
>> --- a/xen/include/asm-arm/arm64/flushtlb.h
>> +++ b/xen/include/asm-arm/arm64/flushtlb.h
>> @@ -9,6 +9,11 @@
>>    * DSB ISH          // Ensure the TLB invalidation has completed
>>    * ISB              // See explanation below
>>    *
>> + * ARM64_WORKAROUND_REPEAT_TLBI:
>> + * Modification of the translation table for a virtual address might lead to
>> + * read-after-read ordering violation.
>> + * The workaround repeats TLBI+DSB operation.
>> + *
>>    * For Xen page-tables the ISB will discard any instructions fetched
>>    * from the old mappings.
>>    *
>> @@ -16,15 +21,21 @@
>>    * (and therefore the TLB invalidation) before continuing. So we know
>>    * the TLBs cannot contain an entry for a mapping we may have removed.
>>    */
>> -#define TLB_HELPER(name, tlbop) \
>> -static inline void name(void)   \
>> -{                               \
>> -    asm volatile(               \
>> -        "dsb  ishst;"           \
>> -        "tlbi "  # tlbop  ";"   \
>> -        "dsb  ish;"             \
>> -        "isb;"                  \
>> -        : : : "memory");        \
>> +#define TLB_HELPER(name, tlbop)       \
>> +static inline void name(void)         \
>> +{                                     \
>> +    asm volatile(                     \
>> +        "dsb  ishst;"                 \
>> +        "tlbi "  # tlbop  ";"         \
>> +        ALTERNATIVE(                  \
>> +        "nop; nop;",                  \
> 
> This is a bit difficult to read. I would suggest indenting the arguments of ALTERNATIVE() by an extra soft tab.
> 
>> +        "dsb  ish;"                   \
>> +        "tlbi "  # tlbop  ";",        \
>> +        ARM64_WORKAROUND_REPEAT_TLBI, \
>> +        CONFIG_ARM64_ERRATUM_1286807) \
> 
> I think it would be fine to have this code unconditionally built. But if you prefer to keep it conditional, then I would suggest introducing CONFIG_ARM64_WORKAROUND_REPEAT_TLBI and gating the ALTERNATIVE with it.
> 
> The new config would be selected by CONFIG_ARM64_ERRATUM_1286807. This will make it easier for future workarounds to use it (AFAICT there are other platforms that require the same thing).
> 
> Cheers,
> 
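[Editorial note: the Kconfig structure suggested above can be sketched as the following fragment. This is illustrative only; the prompt text is elided, and the exact wording is Julien's suggestion rather than existing code.]

```kconfig
# Hidden common option: it has no prompt, so it can only be
# enabled via 'select' from an erratum-specific option.
config ARM64_WORKAROUND_REPEAT_TLBI
	bool

# The erratum-specific option pulls in the common workaround.
config ARM64_ERRATUM_1286807
	bool "Cortex-A76/Neoverse-N1: 1286807: ..."
	default y
	select ARM64_WORKAROUND_REPEAT_TLBI
	depends on ARM_64
```

With this wiring, the ALTERNATIVE() in flushtlb.h can be gated on CONFIG_ARM64_WORKAROUND_REPEAT_TLBI, and any future erratum needing the same repeat-TLBI behaviour only has to add another 'select' line.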


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:12:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:12:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27854.56415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedMP-0000Cz-5F; Mon, 16 Nov 2020 12:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27854.56415; Mon, 16 Nov 2020 12:11:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedMP-0000Cs-25; Mon, 16 Nov 2020 12:11:53 +0000
Received: by outflank-mailman (input) for mailman id 27854;
 Mon, 16 Nov 2020 12:11:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwWO=EW=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1kedMO-0000Cm-27
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:11:52 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f5bd6069-712d-43bb-944d-c8131c5fd4cf;
 Mon, 16 Nov 2020 12:11:50 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 06344101E;
 Mon, 16 Nov 2020 04:11:50 -0800 (PST)
Received: from e123311-lin.arm.com (unknown [10.57.25.95])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B92993F70D;
 Mon, 16 Nov 2020 04:11:48 -0800 (PST)
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com
Subject: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807
Date: Mon, 16 Nov 2020 13:11:40 +0100
Message-Id: <20201116121140.26763-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
if a virtual address for a cacheable mapping of a location is being
accessed by a core while another core is remapping the virtual
address to a new physical page using the recommended break-before-make
sequence, then under very rare circumstances TLBI+DSB completes before
a read using the translation being invalidated has been observed by
other observers. The workaround repeats the TLBI+DSB operation; it is
deliberately applied to all the TLB flush operations.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 docs/misc/arm/silicon-errata.txt     |  2 ++
 xen/arch/arm/Kconfig                 | 23 +++++++++++++++++++++
 xen/arch/arm/cpuerrata.c             | 14 +++++++++++++
 xen/include/asm-arm/arm64/flushtlb.h | 30 +++++++++++++++++++---------
 xen/include/asm-arm/cpufeature.h     |  3 ++-
 5 files changed, 62 insertions(+), 10 deletions(-)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index 552c4151d3..d183ba543f 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -53,5 +53,7 @@ stable hypervisors.
 | ARM            | Cortex-A72      | #853709         | N/A                     |
 | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
 | ARM            | Cortex-A76      | #1165522        | N/A                     |
+| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
 | ARM            | Neoverse-N1     | #1165522        | N/A
+| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
 | ARM            | MMU-500         | #842869         | N/A                     |
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index f938dd21bd..8171b8d04a 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -244,6 +244,29 @@ config ARM_ERRATUM_858921
 
 	  If unsure, say Y.
 
+config ARM64_WORKAROUND_REPEAT_TLBI
+	bool
+
+config ARM64_ERRATUM_1286807
+	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
+	default y
+	select ARM64_WORKAROUND_REPEAT_TLBI
+	depends on ARM_64
+	help
+	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
+
+	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
+	  address for a cacheable mapping of a location is being
+	  accessed by a core while another core is remapping the virtual
+	  address to a new physical page using the recommended
+	  break-before-make sequence, then under very rare circumstances
+	  TLBI+DSB completes before a read using the translation being
+	  invalidated has been observed by other observers. The
+	  workaround repeats the TLBI+DSB operation for all the TLB flush
+	  operations on purpose.
+
+	  If unsure, say Y.
+
 endmenu
 
 config ARM64_HARDEN_BRANCH_PREDICTOR
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 567911d380..cb4795beec 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
                    (1 << MIDR_VARIANT_SHIFT) | 2),
     },
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_1286807
+    {
+        /* Cortex-A76 r0p0 - r3p0 */
+        .desc = "ARM erratum 1286807",
+        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
+        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
+    },
+    {
+        /* Neoverse-N1 r0p0 - r3p0 */
+        .desc = "ARM erratum 1286807",
+        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
+        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
+    },
+#endif
 #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
     {
         .capability = ARM_HARDEN_BRANCH_PREDICTOR,
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index ceec59542e..8f2abfaf1d 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -9,6 +9,12 @@
  * DSB ISH          // Ensure the TLB invalidation has completed
  * ISB              // See explanation below
  *
+ * ARM64_WORKAROUND_REPEAT_TLBI:
+ * Modification of the translation table for a virtual address might lead to
+ * read-after-read ordering violation.
+ * The workaround repeats TLBI+DSB operation for all the TLB flush operations
+ * on purpose.
+ *
  * For Xen page-tables the ISB will discard any instructions fetched
  * from the old mappings.
  *
@@ -16,15 +22,21 @@
  * (and therefore the TLB invalidation) before continuing. So we know
  * the TLBs cannot contain an entry for a mapping we may have removed.
  */
-#define TLB_HELPER(name, tlbop) \
-static inline void name(void)   \
-{                               \
-    asm volatile(               \
-        "dsb  ishst;"           \
-        "tlbi "  # tlbop  ";"   \
-        "dsb  ish;"             \
-        "isb;"                  \
-        : : : "memory");        \
+#define TLB_HELPER(name, tlbop)                  \
+static inline void name(void)                    \
+{                                                \
+    asm volatile(                                \
+        "dsb  ishst;"                            \
+        "tlbi "  # tlbop  ";"                    \
+        ALTERNATIVE(                             \
+            "nop; nop;",                         \
+            "dsb  ish;"                          \
+            "tlbi "  # tlbop  ";",               \
+            ARM64_WORKAROUND_REPEAT_TLBI,        \
+            CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
+        "dsb  ish;"                              \
+        "isb;"                                   \
+        : : : "memory");                         \
 }
 
 /* Flush local TLBs, current VMID only. */
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 016a9fe203..c7b5052992 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -46,8 +46,9 @@
 #define ARM_SMCCC_1_1 8
 #define ARM64_WORKAROUND_AT_SPECULATE 9
 #define ARM_WORKAROUND_858921 10
+#define ARM64_WORKAROUND_REPEAT_TLBI 11
 
-#define ARM_NCAPS           11
+#define ARM_NCAPS           12
 
 #ifndef __ASSEMBLY__
 
-- 
2.28.0
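[Editorial note: to make the effect of the ALTERNATIVE() concrete, Xen live-patches alternatives at boot based on the detected CPU capabilities. On an affected core, a helper generated by the patched macro, e.g. TLB_HELPER(flush_guest_tlb_local, vmalls12e1), would effectively execute the sequence below. This is an illustrative sketch, not code from the patch.]

```asm
    // Workaround active (ARM64_WORKAROUND_REPEAT_TLBI capability set):
    dsb  ishst          // complete prior page-table writes before the TLBI
    tlbi vmalls12e1     // invalidate stage 1 & 2 entries, current VMID
    dsb  ish            // wait for the first invalidation to complete
    tlbi vmalls12e1     // erratum 1286807: repeat the invalidation
    dsb  ish            // wait for the repeated invalidation as well
    isb                 // discard instructions fetched via old mappings
```

On unaffected cores the middle "dsb ish; tlbi" pair remains the two nops originally emitted by ALTERNATIVE(), so the cost of the workaround is confined to the CPUs that need it.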



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:17:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:17:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27860.56426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedRm-0000QU-PI; Mon, 16 Nov 2020 12:17:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27860.56426; Mon, 16 Nov 2020 12:17:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedRm-0000QN-MT; Mon, 16 Nov 2020 12:17:26 +0000
Received: by outflank-mailman (input) for mailman id 27860;
 Mon, 16 Nov 2020 12:17:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kedRl-0000QI-7H
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:17:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28c97fa4-8a5d-4791-8f73-a3e525b05f15;
 Mon, 16 Nov 2020 12:17:21 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kedRh-0001xp-L1; Mon, 16 Nov 2020 12:17:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kedRh-0000hn-Ag; Mon, 16 Nov 2020 12:17:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kedRh-0003Rs-AD; Mon, 16 Nov 2020 12:17:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156815-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156815: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=09162bc32c880a791c6c0668ce0745cf7958f576
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 12:17:21 +0000

flight 156815 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156815/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                09162bc32c880a791c6c0668ce0745cf7958f576
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  107 days
Failing since        152366  2020-08-01 20:49:34 Z  106 days  176 attempts
Testing same since   156815  2020-11-16 02:08:16 Z    0 days    1 attempts

------------------------------------------------------------
3520 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 672911 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:24:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:24:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27871.56446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedYb-0001Nq-Jf; Mon, 16 Nov 2020 12:24:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27871.56446; Mon, 16 Nov 2020 12:24:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedYb-0001Nj-GQ; Mon, 16 Nov 2020 12:24:29 +0000
Received: by outflank-mailman (input) for mailman id 27871;
 Mon, 16 Nov 2020 12:24:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VFxT=EW=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kedYb-0001Nd-7U
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:24:29 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57e0c7ed-5be5-49e0-8a38-19966425d512;
 Mon, 16 Nov 2020 12:24:28 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id p8so18490266wrx.5
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 04:24:28 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id t5sm19671024wmg.19.2020.11.16.04.24.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 16 Nov 2020 04:24:21 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 2DD991FF90;
 Mon, 16 Nov 2020 12:24:18 +0000 (GMT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VFxT=EW=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
	id 1kedYb-0001Nd-7U
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:24:29 +0000
X-Inumbo-ID: 57e0c7ed-5be5-49e0-8a38-19966425d512
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 57e0c7ed-5be5-49e0-8a38-19966425d512;
	Mon, 16 Nov 2020 12:24:28 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id p8so18490266wrx.5
        for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 04:24:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=25kCBizsj7x/H9MBq/7Zh8HdKyedfrYbs0Sj7nFGxRg=;
        b=d5pz22vLeXK1wHTPtF+ZsxVIjXGA8gl6Tp/KPuXBtayrc3ZztGUFqO9NqNNtYCD6S9
         GIqQTWfi7rk11d3aNpYM5Sovu1nZQUg4k78J8N0ovb1RP8rhqjz2ziat2y93gTIR19az
         oIzdPJAEHeOZ+1ho5ixYDxsdnnSTsvLBGvo0C1S/ZpaCfzu60H/ZhbVXIE/Fz6X2rnuz
         bGp+15bJpLSMupv0X7HDh/jbzWyiTXsvLQDjR/xUxuiuAC0BgRnrEMAv7S86N9y/PO/J
         YZ30NYDyOCbsIzdK53/YmkNgrvXL1/EPIj5xyUtm8/JL1r8SstFTI9HT3CYaUmH7ZgB5
         9gbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=25kCBizsj7x/H9MBq/7Zh8HdKyedfrYbs0Sj7nFGxRg=;
        b=hjM8EAHhvikVpXKRBjxijdWu/BQ9fy+Mktfq2nlQlLlSVQiw0xVGzjokS9ysqJ2pUQ
         JWSQfGLcqV/I3TOR+2FFvASkzdkTQTQC27Tkib4lEIIzNfH1qjzssiRey/4a563sQpEr
         EbDpOLDt3E4c3Nraph1lPlq64f269pWgXReI8WkuFdW/YXbq7SQMts08QiRtfg/ONJ3m
         RpdRkCGlEb8kwzMnaZYrm/CLUfky3qco/J46vxtp2wilrUlis0Klm+J2SRW9pXErr4Sq
         axPQiN6yfs7zakhgmVPimV6G+JebnClq2ut+eUo1su8LpOcfd6YhwXGfQyTNzFoT2/4x
         pP5Q==
X-Gm-Message-State: AOAM530pZISXlPBI3HBaY+G4prUzaV+ykNvn6Pk8zWd0udiNSe3KGGTr
	CBBaRanLIkyMPq636SQWoK71QQ==
X-Google-Smtp-Source: ABdhPJwEtjSnLzfG0Ucc13bL7SYSntZMRHeMdUmK/XWx+GdI3vGIWCG0T5avbmxz2LXh+UgnrJ3RnQ==
X-Received: by 2002:adf:d84b:: with SMTP id k11mr18703530wrl.305.1605529467569;
        Mon, 16 Nov 2020 04:24:27 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
        by smtp.gmail.com with ESMTPSA id t5sm19671024wmg.19.2020.11.16.04.24.18
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 16 Nov 2020 04:24:21 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
	by zen.linaroharston (Postfix) with ESMTP id 2DD991FF90;
	Mon, 16 Nov 2020 12:24:18 +0000 (GMT)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: peter.maydell@linaro.org
Cc: qemu-devel@nongnu.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PULL 4/9] include/hw/xen.h: drop superfluous struct
Date: Mon, 16 Nov 2020 12:24:12 +0000
Message-Id: <20201116122417.28346-5-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201116122417.28346-1-alex.bennee@linaro.org>
References: <20201116122417.28346-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Chardev is already a typedef'ed struct.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201110192316.26397-5-alex.bennee@linaro.org>

diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 1406648ca5..0f9962b1c1 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -28,7 +28,7 @@ int xen_is_pirq_msi(uint32_t msi_data);
 
 qemu_irq *xen_interrupt_controller_init(void);
 
-void xenstore_store_pv_console_info(int i, struct Chardev *chr);
+void xenstore_store_pv_console_info(int i, Chardev *chr);
 
 void xen_register_framebuffer(struct MemoryRegion *mr);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:24:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27872.56457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedYg-0001QH-W8; Mon, 16 Nov 2020 12:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27872.56457; Mon, 16 Nov 2020 12:24:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedYg-0001QA-T6; Mon, 16 Nov 2020 12:24:34 +0000
Received: by outflank-mailman (input) for mailman id 27872;
 Mon, 16 Nov 2020 12:24:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VFxT=EW=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kedYg-0001Nd-6Q
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:24:34 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6996d5b0-907b-4f02-84a4-68e287af2eeb;
 Mon, 16 Nov 2020 12:24:29 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id s8so18457502wrw.10
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 04:24:29 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id w15sm23012424wrp.52.2020.11.16.04.24.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 16 Nov 2020 04:24:22 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 4553E1FF91;
 Mon, 16 Nov 2020 12:24:18 +0000 (GMT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VFxT=EW=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
	id 1kedYg-0001Nd-6Q
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:24:34 +0000
X-Inumbo-ID: 6996d5b0-907b-4f02-84a4-68e287af2eeb
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6996d5b0-907b-4f02-84a4-68e287af2eeb;
	Mon, 16 Nov 2020 12:24:29 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id s8so18457502wrw.10
        for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 04:24:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=I5N/dAQ0srBictVseT2rP2fA4AanR0t+DIq+DoGQDUA=;
        b=kUv5YnOumF/Ptfw2eqBb3oW9vbtrXH2sp4GACYHhA96xfqkjYNrDu1E4hPFaiMhABN
         5r6Bp5+qFL+41JkZts4NLOMDU+BOZ7f+B3OxTSrsN4NMQ1sCmvnPaCAgK7AUxbRQBHMz
         2eie64PA2/H4D4HRatmXT07ZJAMsiUF2S2iEBlis90fJOUM3ifH+NgoBJ7NHRUe5GSxJ
         51rArOQXs0dhUX8EN7kitzVSfDEdlKri/ftf2l6VWkJuIkLMNx4oHB7NGAIgPePTFDsa
         KOcaFHJBgMjEndAwhqkKbtIf/jfW3DQ88lvtkMKy0vdV8Oahf+jklSClwHom/imIJJtL
         QtGQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=I5N/dAQ0srBictVseT2rP2fA4AanR0t+DIq+DoGQDUA=;
        b=f2+qH64Qv46f34qMaTyttnwN3+MCBs8E2OfWuGrR1N1m+hTFqRfX7nXocvWvCLavby
         anhG9xcHhV9G/DkEyzVlutGdojLEiXYqsStlWNQxYwKI85goWZJfZvYbEOHjDjMAp4Hh
         rArCtTXBT8DPKa984O5s3xvyX79qWa3ZdRH7TaPg66DEHvDS9mEk2PJ8DDdF75OYdlzS
         3dR3ufQPAI0QF1wYI9nN7/CUiwJq7xWM3vSnvvNTt+oCF2VDEyinNDgPx0gA5gKWfh/3
         gt0iLBuzS+jQAsryql0iCf8L72yq+i+CtjGG3Mch0/RruFha6zqOZ5dsSCVuh5g8OBuJ
         KgoQ==
X-Gm-Message-State: AOAM530api3En5Oq9k53KMPoPcxqvXhklT0YZhLVTgVle8jQQdEqmGea
	R3o7dra3f4YWU7ZgWIi0jtLmzw==
X-Google-Smtp-Source: ABdhPJxEtcR4ZeY941H3WHkpWZMVMIuDrHc4wCNctDwDHmfdBIFTxqjPSisyc+pQG175SXdcSQDB4A==
X-Received: by 2002:adf:e74d:: with SMTP id c13mr19836328wrn.277.1605529469173;
        Mon, 16 Nov 2020 04:24:29 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
        by smtp.gmail.com with ESMTPSA id w15sm23012424wrp.52.2020.11.16.04.24.20
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 16 Nov 2020 04:24:22 -0800 (PST)
Received: from zen.lan (localhost [127.0.0.1])
	by zen.linaroharston (Postfix) with ESMTP id 4553E1FF91;
	Mon, 16 Nov 2020 12:24:18 +0000 (GMT)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: peter.maydell@linaro.org
Cc: qemu-devel@nongnu.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PULL 5/9] stubs/xen-hw-stub: drop xenstore_store_pv_console_info stub
Date: Mon, 16 Nov 2020 12:24:13 +0000
Message-Id: <20201116122417.28346-6-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201116122417.28346-1-alex.bennee@linaro.org>
References: <20201116122417.28346-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We should never build something that calls this without having it.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201110192316.26397-6-alex.bennee@linaro.org>

diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 2ea8190921..15f3921a76 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -10,10 +10,6 @@
 #include "hw/xen/xen.h"
 #include "hw/xen/xen-x86.h"
 
-void xenstore_store_pv_console_info(int i, Chardev *chr)
-{
-}
-
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
     return -1;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:25:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:25:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27882.56470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedZt-0001cR-DF; Mon, 16 Nov 2020 12:25:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27882.56470; Mon, 16 Nov 2020 12:25:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedZt-0001cK-8g; Mon, 16 Nov 2020 12:25:49 +0000
Received: by outflank-mailman (input) for mailman id 27882;
 Mon, 16 Nov 2020 12:25:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+iU=EW=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kedZs-0001cF-1O
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:25:48 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id cdd74c7c-d060-4720-88b2-f346a4a63bac;
 Mon, 16 Nov 2020 12:25:47 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B8FDF101E;
 Mon, 16 Nov 2020 04:25:46 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5E4D93F70D;
 Mon, 16 Nov 2020 04:25:45 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3 0/3]  xen/arm: Make PCI passthrough code non-x86 specific
Date: Mon, 16 Nov 2020 12:25:15 +0000
Message-Id: <cover.1605527997.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

This patch series is v3 of preparatory work to make PCI passthrough code
non-x86 specific.

Rahul Singh (3):
  xen/ns16550: Make ns16550 driver usable on ARM with HAS_PCI enabled.
  xen/pci: Move x86 specific code to x86 directory.
  xen/pci: solve compilation error on ARM with HAS_PCI enabled.

 xen/drivers/char/Kconfig                    |  4 +
 xen/drivers/char/ns16550.c                  | 32 ++++----
 xen/drivers/passthrough/Makefile            |  3 -
 xen/drivers/passthrough/pci.c               | 86 +--------------------
 xen/drivers/passthrough/x86/Makefile        |  1 +
 xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 ++++++++++++++++
 xen/drivers/passthrough/x86/iommu.c         | 19 +++++
 xen/include/xen/iommu.h                     |  2 +
 xen/include/xen/pci.h                       |  2 +
 9 files changed, 112 insertions(+), 103 deletions(-)
 rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:25:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:25:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27883.56482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedZy-0001f3-KK; Mon, 16 Nov 2020 12:25:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27883.56482; Mon, 16 Nov 2020 12:25:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kedZy-0001ew-H2; Mon, 16 Nov 2020 12:25:54 +0000
Received: by outflank-mailman (input) for mailman id 27883;
 Mon, 16 Nov 2020 12:25:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+iU=EW=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kedZx-0001eS-8T
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:25:53 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 58ed8055-d374-449b-baad-1c717da4b7ff;
 Mon, 16 Nov 2020 12:25:52 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C2079101E;
 Mon, 16 Nov 2020 04:25:51 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8F5063F70D;
 Mon, 16 Nov 2020 04:25:50 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM with HAS_PCI enabled.
Date: Mon, 16 Nov 2020 12:25:16 +0000
Message-Id: <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1605527997.git.rahul.singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>

The NS16550 driver has PCI support that is guarded by the HAS_PCI flag.
When HAS_PCI is enabled for ARM, a compilation error is observed because
ARM platforms do not have full PCI support available.

Introduce a new Kconfig option, CONFIG_HAS_NS16550_PCI, to guard the
ns16550 PCI support for X86.

For X86 platforms it is enabled by default. For ARM platforms it is
disabled by default; once we have proper support for NS16550 PCI on ARM,
it can be enabled there as well.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v3:
- remove help text from the Kconfig entry because prompt-less options do
  not display it.

---
 xen/drivers/char/Kconfig   |  4 ++++
 xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
 2 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..abb59fdb0f 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -4,6 +4,10 @@ config HAS_NS16550
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
 
+config HAS_NS16550_PCI
+	def_bool y
+	depends on X86 && HAS_NS16550 && HAS_PCI
+
 config HAS_CADENCE_UART
 	bool "Xilinx Cadence UART driver"
 	default y
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index d8b52eb813..bd1c2af956 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -16,7 +16,7 @@
 #include <xen/timer.h>
 #include <xen/serial.h>
 #include <xen/iocap.h>
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
@@ -54,7 +54,7 @@ enum serial_param_type {
     reg_shift,
     reg_width,
     stop_bits,
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     bridge_bdf,
     device,
     port_bdf,
@@ -83,7 +83,7 @@ static struct ns16550 {
     unsigned int timeout_ms;
     bool_t intr_works;
     bool_t dw_usr_bsy;
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     /* PCI card parameters. */
     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
@@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
     {"reg-shift", reg_shift},
     {"reg-width", reg_width},
     {"stop-bits", stop_bits},
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     {"bridge", bridge_bdf},
     {"dev", device},
     {"port", port_bdf},
 #endif
 };
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 struct ns16550_config {
     u16 vendor_id;
     u16 dev_id;
@@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
 
 static void pci_serial_early_init(struct ns16550 *uart)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
         return;
 
@@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
 
 static void __init ns16550_init_irq(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     struct ns16550 *uart = port->uart;
 
     if ( uart->msi )
@@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
     uart->timeout_ms = max_t(
         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( uart->bar || uart->ps_bdf_enable )
     {
         if ( uart->param && uart->param->mmio &&
@@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
 
     stop_timer(&uart->timer);
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( uart->bar )
        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
                                   uart->ps_bdf[2]), PCI_COMMAND);
@@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
 
 static void _ns16550_resume(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     struct ns16550 *uart = port->uart;
 
     if ( uart->bar )
@@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
     return 1; /* Everything is MMIO */
 #endif
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     pci_serial_early_init(uart);
 #endif
 
@@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
     return (status == 0x90);
 }
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 static int __init
 pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
 {
@@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
 
     if ( *conf == ',' && *++conf != ',' )
     {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         if ( strncmp(conf, "pci", 3) == 0 )
         {
             if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
@@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
 
     if ( *conf == ',' && *++conf != ',' )
     {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         if ( strncmp(conf, "msi", 3) == 0 )
         {
             conf += 3;
@@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
             uart->irq = simple_strtol(conf, &conf, 10);
     }
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( *conf == ',' && *++conf != ',' )
     {
         conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
@@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
             uart->reg_width = simple_strtoul(param_value, NULL, 0);
             break;
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         case bridge_bdf:
             if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
                             &uart->ps_bdf[1], &uart->ps_bdf[2]) )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:25:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27884.56494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keda2-0001iV-Uf; Mon, 16 Nov 2020 12:25:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27884.56494; Mon, 16 Nov 2020 12:25:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keda2-0001iO-Qv; Mon, 16 Nov 2020 12:25:58 +0000
Received: by outflank-mailman (input) for mailman id 27884;
 Mon, 16 Nov 2020 12:25:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+iU=EW=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1keda2-0001eS-74
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:25:58 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 609224b9-9d12-4d98-9da4-f09cc646477a;
 Mon, 16 Nov 2020 12:25:55 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 088DA101E;
 Mon, 16 Nov 2020 04:25:55 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8C23D3F70D;
 Mon, 16 Nov 2020 04:25:53 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86 directory.
Date: Mon, 16 Nov 2020 12:25:17 +0000
Message-Id: <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1605527997.git.rahul.singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>

The passthrough/pci.c file is common to all architectures, but it
contains x86-specific code.

Move the x86-specific code into the drivers/passthrough/io.c file to
avoid compilation errors on other architectures.

As drivers/passthrough/io.c is compiled only for x86, move it into the
x86 directory and rename it to hvm.c.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v3:
- fixed typo
- As per the suggestion, move the code to the file io.c, move that file
  to the x86 directory, and rename it to hvm.c

---
 xen/drivers/passthrough/Makefile            |  3 -
 xen/drivers/passthrough/pci.c               | 78 +--------------------
 xen/drivers/passthrough/x86/Makefile        |  1 +
 xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++
 xen/drivers/passthrough/x86/iommu.c         |  7 ++
 xen/include/xen/pci.h                       |  2 +
 6 files changed, 77 insertions(+), 80 deletions(-)
 rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)

diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index e973e16c74..cc646612c7 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -6,6 +6,3 @@ obj-$(CONFIG_ARM) += arm/
 obj-y += iommu.o
 obj-$(CONFIG_HAS_PCI) += pci.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
-
-x86-$(CONFIG_HVM) := io.o
-obj-$(CONFIG_X86) += $(x86-y)
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 51e584127e..e8a28df126 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -14,9 +14,6 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/sched.h>
-#include <xen/pci.h>
-#include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
 #include <xen/list.h>
 #include <xen/prefetch.h>
@@ -24,7 +21,6 @@
 #include <xen/irq.h>
 #include <xen/param.h>
 #include <xen/vm_event.h>
-#include <asm/hvm/irq.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -842,71 +838,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     return ret;
 }
 
-static int pci_clean_dpci_irq(struct domain *d,
-                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct dev_intx_gsi_link *digl, *tmp;
-
-    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
-
-    if ( pt_irq_need_timer(pirq_dpci->flags) )
-        kill_timer(&pirq_dpci->timer);
-
-    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
-    {
-        list_del(&digl->list);
-        xfree(digl);
-    }
-
-    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
-
-    if ( !pt_pirq_softirq_active(pirq_dpci) )
-        return 0;
-
-    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
-
-    return -ERESTART;
-}
-
-static int pci_clean_dpci_irqs(struct domain *d)
-{
-    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
-
-    spin_lock(&d->event_lock);
-    hvm_irq_dpci = domain_get_irq_dpci(d);
-    if ( hvm_irq_dpci != NULL )
-    {
-        int ret = 0;
-
-        if ( hvm_irq_dpci->pending_pirq_dpci )
-        {
-            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
-                 ret = -ERESTART;
-            else
-                 hvm_irq_dpci->pending_pirq_dpci = NULL;
-        }
-
-        if ( !ret )
-            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
-        if ( ret )
-        {
-            spin_unlock(&d->event_lock);
-            return ret;
-        }
-
-        hvm_domain_irq(d)->dpci = NULL;
-        free_hvm_irq_dpci(hvm_irq_dpci);
-    }
-    spin_unlock(&d->event_lock);
-    return 0;
-}
-
 /* Caller should hold the pcidevs_lock */
 static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
                            uint8_t devfn)
@@ -966,7 +897,7 @@ int pci_release_devices(struct domain *d)
     int ret;
 
     pcidevs_lock();
-    ret = pci_clean_dpci_irqs(d);
+    ret = arch_pci_clean_pirqs(d);
     if ( ret )
     {
         pcidevs_unlock();
@@ -1370,13 +1301,6 @@ static int __init setup_dump_pcidevs(void)
 }
 __initcall(setup_dump_pcidevs);
 
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    return iommu_intremap
-           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
-}
-
 static int iommu_add_device(struct pci_dev *pdev)
 {
     const struct domain_iommu *hd;
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index a70cf9460d..69284a5d19 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,3 @@
 obj-y += ats.o
 obj-y += iommu.o
+obj-$(CONFIG_HVM) += hvm.o
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/x86/hvm.c
similarity index 95%
rename from xen/drivers/passthrough/io.c
rename to xen/drivers/passthrough/x86/hvm.c
index 6b1305a3e5..41cfa2e200 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -1036,6 +1036,72 @@ unlock:
     spin_unlock(&d->event_lock);
 }
 
+static int pci_clean_dpci_irq(struct domain *d,
+                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    struct dev_intx_gsi_link *digl, *tmp;
+
+    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
+
+    if ( pt_irq_need_timer(pirq_dpci->flags) )
+        kill_timer(&pirq_dpci->timer);
+
+    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
+    {
+        list_del(&digl->list);
+        xfree(digl);
+    }
+
+    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
+
+    if ( !pt_pirq_softirq_active(pirq_dpci) )
+        return 0;
+
+    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
+
+    return -ERESTART;
+}
+
+int arch_pci_clean_pirqs(struct domain *d)
+{
+    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !is_hvm_domain(d) )
+        return 0;
+
+    spin_lock(&d->event_lock);
+    hvm_irq_dpci = domain_get_irq_dpci(d);
+    if ( hvm_irq_dpci != NULL )
+    {
+        int ret = 0;
+
+        if ( hvm_irq_dpci->pending_pirq_dpci )
+        {
+            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
+                 ret = -ERESTART;
+            else
+                 hvm_irq_dpci->pending_pirq_dpci = NULL;
+        }
+
+        if ( !ret )
+            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+        if ( ret )
+        {
+            spin_unlock(&d->event_lock);
+            return ret;
+        }
+
+        hvm_domain_irq(d)->dpci = NULL;
+        free_hvm_irq_dpci(hvm_irq_dpci);
+    }
+    spin_unlock(&d->event_lock);
+
+    return 0;
+}
+
 /*
  * Note: 'pt_pirq_softirq_reset' can clear the STATE_SCHED before we get to
  * doing it. If that is the case we let 'pt_pirq_softirq_reset' do ref-counting.
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f17b1820f4..875e67b53b 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     return pg;
 }
 
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    return iommu_intremap
+           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 20a54a5bb4..78d83afe64 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -208,4 +208,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
 void msixtbl_pt_unregister(struct domain *, struct pirq *);
 void msixtbl_pt_cleanup(struct domain *d);
 
+int arch_pci_clean_pirqs(struct domain *d);
+
 #endif /* __XEN_PCI_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 12:26:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 12:26:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27885.56506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keda9-0001od-AE; Mon, 16 Nov 2020 12:26:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27885.56506; Mon, 16 Nov 2020 12:26:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keda9-0001oS-6S; Mon, 16 Nov 2020 12:26:05 +0000
Received: by outflank-mailman (input) for mailman id 27885;
 Mon, 16 Nov 2020 12:26:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+iU=EW=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1keda7-0001ni-UL
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 12:26:03 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 16d0ef04-b0e7-4ebb-bc97-b44bf1880adf;
 Mon, 16 Nov 2020 12:26:02 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 14DF0101E;
 Mon, 16 Nov 2020 04:26:02 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 668DC3F70D;
 Mon, 16 Nov 2020 04:26:01 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3 3/3] xen/pci: solve compilation error on ARM with HAS_PCI enabled.
Date: Mon, 16 Nov 2020 12:25:18 +0000
Message-Id: <efa0c2578a6aabb642b8f38257cf96a983944301.1605527997.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1605527997.git.rahul.singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>

If the mem-sharing, mem-paging, or log-dirty functionality is not enabled
for a non-x86 architecture when HAS_PCI is enabled, the compiler will
throw an error.

Move the code to the x86-specific directory to fix the compilation error.

Also, modify the code to use likely() in place of unlikely() for each
condition to make the code more optimized.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v3:
- rename arch_iommu_usable() to arch_iommu_use_permitted()
- fixed comments.

---
 xen/drivers/passthrough/pci.c       |  8 +-------
 xen/drivers/passthrough/x86/iommu.c | 12 ++++++++++++
 xen/include/xen/iommu.h             |  2 ++
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index e8a28df126..804b24a0e0 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -20,7 +20,6 @@
 #include <xen/iommu.h>
 #include <xen/irq.h>
 #include <xen/param.h>
-#include <xen/vm_event.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -1411,12 +1410,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     if ( !is_iommu_enabled(d) )
         return 0;
 
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( d != dom_io &&
-         unlikely(mem_sharing_enabled(d) ||
-                  vm_event_check_ring(d->vm_event_paging) ||
-                  p2m_get_hostp2m(d)->global_logdirty) )
+    if( !arch_iommu_use_permitted(d) )
         return -EXDEV;
 
     /* device_assigned() should already have cleared the device for assignment */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 875e67b53b..26f57b7e88 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -23,6 +23,7 @@
 #include <asm/hvm/io.h>
 #include <asm/io_apic.h>
 #include <asm/setup.h>
+#include <xen/vm_event.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
 struct iommu_ops __read_mostly iommu_ops;
@@ -315,6 +316,17 @@ int iommu_update_ire_from_msi(
            ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
 }
 
+bool arch_iommu_use_permitted(const struct domain *d)
+{
+    /*
+     * Prevent device assign if mem paging, mem sharing or log-dirty
+     * have been enabled for this domain.
+     */
+    return d == dom_io ||
+           (likely(!mem_sharing_enabled(d)) &&
+            likely(!vm_event_check_ring(d->vm_event_paging)) &&
+            likely(!p2m_get_hostp2m(d)->global_logdirty));
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 191021870f..056eaa09fc 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
 
+bool arch_iommu_use_permitted(const struct domain *d);
+
 #endif /* _IOMMU_H_ */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 13:47:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 13:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27945.56530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keeqF-0000Zo-GG; Mon, 16 Nov 2020 13:46:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27945.56530; Mon, 16 Nov 2020 13:46:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keeqF-0000Zh-Cu; Mon, 16 Nov 2020 13:46:47 +0000
Received: by outflank-mailman (input) for mailman id 27945;
 Mon, 16 Nov 2020 13:46:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nrw9=EW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1keeqD-0000Zc-TU
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 13:46:46 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.41]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id beade9f6-ea71-4eeb-b1e6-b2b9a2d8a4ac;
 Mon, 16 Nov 2020 13:46:43 +0000 (UTC)
Received: from DB8PR06CA0048.eurprd06.prod.outlook.com (2603:10a6:10:120::22)
 by AM6PR08MB5256.eurprd08.prod.outlook.com (2603:10a6:20b:e7::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov
 2020 13:46:41 +0000
Received: from DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:120:cafe::fb) by DB8PR06CA0048.outlook.office365.com
 (2603:10a6:10:120::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28 via Frontend
 Transport; Mon, 16 Nov 2020 13:46:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT016.mail.protection.outlook.com (10.152.20.141) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Mon, 16 Nov 2020 13:46:41 +0000
Received: ("Tessian outbound 39167997cde8:v71");
 Mon, 16 Nov 2020 13:46:41 +0000
Received: from ee1655f6f992.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 28F89D29-DF87-469B-85D0-27F9659E5AFF.1; 
 Mon, 16 Nov 2020 13:46:33 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ee1655f6f992.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 16 Nov 2020 13:46:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4678.eurprd08.prod.outlook.com (2603:10a6:10:dc::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov
 2020 13:46:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020
 13:46:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nrw9=EW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1keeqD-0000Zc-TU
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 13:46:46 +0000
X-Inumbo-ID: beade9f6-ea71-4eeb-b1e6-b2b9a2d8a4ac
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown [40.107.22.41])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id beade9f6-ea71-4eeb-b1e6-b2b9a2d8a4ac;
	Mon, 16 Nov 2020 13:46:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qAUC3bKQ8Do7oF1w8gRlc3YuQWW/wFuhcoOCQqtIVB0=;
 b=DxrZ23ggWTpxizT3CO9ppHf5+hzeq8n5wvdHsDWFNiTJQn5IxV9eyOy5TQV95gQqYm8UEAH/4aTKU+eBB8PvvSof3NUdE17jywNT3YS50onE3Smr81if2cPXfrWvfao0pa0EAVhoao0Fouv8VpV9JbXq4oUyAwYprDwRY4cNLhI=
Received: from DB8PR06CA0048.eurprd06.prod.outlook.com (2603:10a6:10:120::22)
 by AM6PR08MB5256.eurprd08.prod.outlook.com (2603:10a6:20b:e7::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov
 2020 13:46:41 +0000
Received: from DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:120:cafe::fb) by DB8PR06CA0048.outlook.office365.com
 (2603:10a6:10:120::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28 via Frontend
 Transport; Mon, 16 Nov 2020 13:46:41 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT016.mail.protection.outlook.com (10.152.20.141) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Mon, 16 Nov 2020 13:46:41 +0000
Received: ("Tessian outbound 39167997cde8:v71"); Mon, 16 Nov 2020 13:46:41 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: d8206f6e5aad9bea
X-CR-MTA-TID: 64aa7808
Received: from ee1655f6f992.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 28F89D29-DF87-469B-85D0-27F9659E5AFF.1;
	Mon, 16 Nov 2020 13:46:33 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ee1655f6f992.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Mon, 16 Nov 2020 13:46:33 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Quh+ivx6wL1cspWDWN7QQK8IYNlAPABweJCC2nIUvxCBKWfFK+qHPa446En2ti565t2S4D+j2JTPIdwo5BRtuYTq6ah7yb+2CL7aoVubgbzjQ0v/rIdINl/j5sSoYsH0OK52At9a66N+afeJjRpZT+deylfIOiLHpNTPGCEcoraTBJF8zsxigWY8GqGURuSodfd3oxLSMAChhJT3BJw8TOytTVzoeOxdtYXPXWX9UlJygC2rVHj+cwPn2h1Gb+kLNkzL0Md4C67k9OJ3ik163CQ8rDd1Ro66k8BHLSWdHqhT2TmnijelaokMevZmwr3ggCpNHHy+d/RheaNc2fiq0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qAUC3bKQ8Do7oF1w8gRlc3YuQWW/wFuhcoOCQqtIVB0=;
 b=iFT54bawrOX8xw3sGNHgUEw9UkPz/aIPfVOUGeNeAxX+1ZOcFJmqK6C2PJ1XBbYod5jyWlTUgrfeAzqTFf8oSiWNmrRoKMnfNE9F1BP9EiYFi9SxlcqAqEk+nzaFgybK70iw8TeCe0OiSzNWfOzJ5pP4pz5AWPdWf832v0kvCE2/o2zCP1DTwS7dviyT6/c22ywiLqLQKl0t5UdnYAq3iQvJ/suCHufEeDYCiPiIT5NTT7m7NnvCzywQdB3uLjvYa2+Y5DGiz2Q1FIry/gPe3F4B4XeAY4o9pZ1w1rHN/FdSiGRbOshh2EIwGGhOZ7B2Rj0zgQb+v9MWzMEmBbuTsw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qAUC3bKQ8Do7oF1w8gRlc3YuQWW/wFuhcoOCQqtIVB0=;
 b=DxrZ23ggWTpxizT3CO9ppHf5+hzeq8n5wvdHsDWFNiTJQn5IxV9eyOy5TQV95gQqYm8UEAH/4aTKU+eBB8PvvSof3NUdE17jywNT3YS50onE3Smr81if2cPXfrWvfao0pa0EAVhoao0Fouv8VpV9JbXq4oUyAwYprDwRY4cNLhI=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4678.eurprd08.prod.outlook.com (2603:10a6:10:dc::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Mon, 16 Nov
 2020 13:46:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Mon, 16 Nov 2020
 13:46:29 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Michal Orzel <Michal.Orzel@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
Thread-Topic: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
Thread-Index: AQHWvBHCDA6RH0TW50yqO37dVM6f26nKxaGA
Date: Mon, 16 Nov 2020 13:46:29 +0000
Message-ID: <6F0BAEC1-E48C-4266-A967-2B7492F6C682@arm.com>
References: <20201116121140.26763-1-michal.orzel@arm.com>
In-Reply-To: <20201116121140.26763-1-michal.orzel@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 86fa029b-8b8a-4a55-74cb-08d88a360c6f
x-ms-traffictypediagnostic: DBBPR08MB4678:|AM6PR08MB5256:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB52560A93F74A073434C53EC49DE30@AM6PR08MB5256.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 l68d7Z4avy9GYpgMhD3h3wfEAejkzuVsuIsCJijf0h+gfsFP0ir0r7whbafJRBXaw6bH+kfWPlpjmDq/k82cbfIh3FPzL3kJfg444pA15CEipwS/NQMaltUURtBdrWmQXLt3kyOfB0rki9gz+D0OFZosvAg3iyFwbxo+eSZbwZ8bOvX13QTn6OV1bWWNWtY15S2b03JA4B2z5A3uPIPV+HG3vYsA74M5xph9G6flXxCbxEAay7VCBiG2nubnSnu05TP0GbBdRk4R+ikNF+EpoNPA/2AgPED6OiukyhWVMDL0PCPtnU1JnT/a4p1VPPS57dm6GvvB0W8Q4xLwn+dquQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(366004)(39860400002)(376002)(136003)(5660300002)(6862004)(2616005)(8676002)(4326008)(6636002)(53546011)(6506007)(86362001)(8936002)(186003)(26005)(316002)(76116006)(91956017)(19627235002)(66476007)(66946007)(6512007)(33656002)(6486002)(83380400001)(71200400001)(2906002)(36756003)(54906003)(66556008)(478600001)(37006003)(64756008)(66446008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 m46/oon/5mnGChTBT2ROMoOznG0E3rQAPW0NdlA7001oyhCBWsL9yiSus1WsgRK1Mbs0F6vrXRprFWSAQMyet8J3AlO4A5kwQjCoPRVn6L6q+iHF94QjtAGR40/VcwBmlNPmCPLoc0pIxY9WdQgAzb4Qr1pte+2fNiAQuStX6UO0JuhGvXvPc2MCyB8d60HTBKjaTpzZ9Qbsn93ueMLR5SVP6NoJ/zzNQ4eym91MWubUf9JYLVCOWjnb74Le1U4X27CKEQuDukkuJqzX/rhLY+lSuslcF5H8RhSX0omOGyZz+yQNDMwXbqHiZYzvXKGlBzqHd9NrzMo6XypoHCk3ah4LQI8M2ysOLcnvn08uVMbFGtZptbTUZ0oRIUAIddXhNqlo8ujWCZSdO6GsIygzigBrVvdyY5/R4KhdPxtEUmLGpDI1WIeccd3rwyNUDUYwmAOt8isEMa/Xrnx35DDEkL8zMYxBbSzd+z1Kg5rnLJfJmjHxqVl4PdfZ50k3Wg/ASRnB1njYbg9Yd0P5ErL396suaC+PD1UxiEoH5o2J9XaKyKoO1C7wV1DX2saqdu96bknyjAHmGE33XhcjCLv5i32TB3ZZPTsEi1O+dw9Iqk2MbCYAR0HOPiLBDjJm4grvur4jD9rPfo6FmLIE4LiBLg==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F8B36F934DFD734BABACB04BBBE948A6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4678
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ea2a82db-3193-433d-6089-08d88a360588
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	k4Lj+lZEH+18liDM1/t/v+vkDnTBNJpLFUsG62qgVR0dkvNxHuQfDwcFO4P1CQyYdek6iNj7cQzWMWXb2/bYVWPw2T+2HOKj0kEjZlbVJ3aukWHbXwhvxJcV7dwiqo1RW8LaTQ47e85Kz55KISydhWLT9Y33ybNk6hAiu3j1Q9skP5anud3cbLqP91EhBUYcKCMvw5AOk1fttmc5qmW6Re+lYn3/rf+tbmdYxgWzOTeY9Q99TEDxJFrRK/1tpxsjYC1C9pmmFYEMXiguddX96YwXDgIqd/7+yPZuMtlTrtowigQalgh5wY/lfuYwuEnPIXRFbfQxqcw8oc/z8xkAKhUCalMeF18ypuHTWfl/ThQeRh86Au05XY9rJjwCj1JSvYrUxAryXymNI+HPSouq8w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(396003)(346002)(46966005)(5660300002)(19627235002)(26005)(186003)(2616005)(107886003)(86362001)(6862004)(4326008)(70586007)(70206006)(6506007)(2906002)(53546011)(336012)(36756003)(33656002)(6636002)(6512007)(6486002)(8676002)(54906003)(316002)(37006003)(478600001)(8936002)(83380400001)(356005)(81166007)(82310400003)(82740400003)(47076004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Nov 2020 13:46:41.4353
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 86fa029b-8b8a-4a55-74cb-08d88a360c6f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5256

Hi,

> On 16 Nov 2020, at 12:11, Michal Orzel <Michal.Orzel@arm.com> wrote:
>
> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
> if a virtual address for a cacheable mapping of a location is being
> accessed by a core while another core is remapping the virtual
> address to a new physical page using the recommended break-before-make
> sequence, then under very rare circumstances TLBI+DSB completes before
> a read using the translation being invalidated has been observed by
> other observers. The workaround repeats the TLBI+DSB operation
> for all the TLB flush operations on purpose.
>
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> docs/misc/arm/silicon-errata.txt     |  2 ++
> xen/arch/arm/Kconfig                 | 23 +++++++++++++++++++++
> xen/arch/arm/cpuerrata.c             | 14 +++++++++++++
> xen/include/asm-arm/arm64/flushtlb.h | 30 +++++++++++++++++++---------
> xen/include/asm-arm/cpufeature.h     |  3 ++-
> 5 files changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 552c4151d3..d183ba543f 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -53,5 +53,7 @@ stable hypervisors.
> | ARM            | Cortex-A72      | #853709         | N/A                     |
> | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
> | ARM            | Cortex-A76      | #1165522        | N/A                     |
> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
> | ARM            | Neoverse-N1     | #1165522        | N/A
> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
> | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f938dd21bd..8171b8d04a 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,29 @@ config ARM_ERRATUM_858921
>
> 	  If unsure, say Y.
>
> +config ARM64_WORKAROUND_REPEAT_TLBI
> +	bool
> +
> +config ARM64_ERRATUM_1286807
> +	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
> +	default y
> +	select ARM64_WORKAROUND_REPEAT_TLBI
> +	depends on ARM_64
> +	help
> +	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
> +
> +	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
> +	  address for a cacheable mapping of a location is being
> +	  accessed by a core while another core is remapping the virtual
> +	  address to a new physical page using the recommended
> +	  break-before-make sequence, then under very rare circumstances
> +	  TLBI+DSB completes before a read using the translation being
> +	  invalidated has been observed by other observers. The
> +	  workaround repeats the TLBI+DSB operation for all the TLB flush
> +	  operations on purpose.
> +
> +	  If unsure, say Y.
> +
> endmenu
>
> config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 567911d380..cb4795beec 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>                    (1 << MIDR_VARIANT_SHIFT) | 2),
>     },
> #endif
> +#ifdef CONFIG_ARM64_ERRATUM_1286807
> +    {
> +        /* Cortex-A76 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +    {
> +        /* Neoverse-N1 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +#endif
> #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>     {
>         .capability = ARM_HARDEN_BRANCH_PREDICTOR,
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index ceec59542e..8f2abfaf1d 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -9,6 +9,12 @@
>  * DSB ISH          // Ensure the TLB invalidation has completed
>  * ISB              // See explanation below
>  *
> + * ARM64_WORKAROUND_REPEAT_TLBI:
> + * Modification of the translation table for a virtual address might lead to
> + * read-after-read ordering violation.
> + * The workaround repeats TLBI+DSB operation for all the TLB flush operations
> + * on purpose.
> + *
>  * For Xen page-tables the ISB will discard any instructions fetched
>  * from the old mappings.
>  *
> @@ -16,15 +22,21 @@
>  * (and therefore the TLB invalidation) before continuing. So we know
>  * the TLBs cannot contain an entry for a mapping we may have removed.
>  */
> -#define TLB_HELPER(name, tlbop) \
> -static inline void name(void)   \
> -{                               \
> -    asm volatile(               \
> -        "dsb  ishst;"           \
> -        "tlbi "  # tlbop  ";"   \
> -        "dsb  ish;"             \
> -        "isb;"                  \
> -        : : : "memory");        \
> +#define TLB_HELPER(name, tlbop)                  \
> +static inline void name(void)                    \
> +{                                                \
> +    asm volatile(                                \
> +        "dsb  ishst;"                            \
> +        "tlbi "  # tlbop  ";"                    \
> +        ALTERNATIVE(                             \
> +            "nop; nop;",                         \
> +            "dsb  ish;"                          \
> +            "tlbi "  # tlbop  ";",               \
> +            ARM64_WORKAROUND_REPEAT_TLBI,        \
> +            CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
> +        "dsb  ish;"                              \
> +        "isb;"                                   \
> +        : : : "memory");                         \
> }
>
> /* Flush local TLBs, current VMID only. */
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 016a9fe203..c7b5052992 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -46,8 +46,9 @@
> #define ARM_SMCCC_1_1 8
> #define ARM64_WORKAROUND_AT_SPECULATE 9
> #define ARM_WORKAROUND_858921 10
> +#define ARM64_WORKAROUND_REPEAT_TLBI 11
>
> -#define ARM_NCAPS           11
> +#define ARM_NCAPS           12
>
> #ifndef __ASSEMBLY__
>
> -- 
> 2.28.0
>



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:58:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:58:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28001.56566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefxv-0006sA-GR; Mon, 16 Nov 2020 14:58:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28001.56566; Mon, 16 Nov 2020 14:58:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefxv-0006s0-Cz; Mon, 16 Nov 2020 14:58:47 +0000
Received: by outflank-mailman (input) for mailman id 28001;
 Mon, 16 Nov 2020 14:58:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefxt-0006ni-Vu
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 599cbdfa-1764-4d9f-ad4e-42d98c830889;
 Mon, 16 Nov 2020 14:58:33 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxP-0003j3-EO; Mon, 16 Nov 2020 14:58:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefxt-0006ni-Vu
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:46 +0000
X-Inumbo-ID: 599cbdfa-1764-4d9f-ad4e-42d98c830889
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 599cbdfa-1764-4d9f-ad4e-42d98c830889;
	Mon, 16 Nov 2020 14:58:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=x5/MaiTSLd30pQ/Fqm/l4PitleNTflLc7dCfcSezhUw=; b=vsGAL9U8HRExJBfuNBlOJoYTrw
	TjKt1s0QBKjG6I5tKKhtgoYV/e1gMtZau9MsGbzjL6ihP06PQ8IAVbe+JV6qvxfhubM6k3FD5n/Ta
	sfOT7ioQF9fWp3yjxs1txJ+JxTgkSd1h8SuTZhNqV2KfEh3d9sp0wul/KMAtYzv/nWLwcquJtpxyw
	loV+Tkg2tMzw1ciSe//LPbFNEK2gCOGFgsX7ytqRM4Y6skH1rdy0ayx4DfRzhbFzNc8UbnxLsWAfz
	4Oi9dpKoJqVcUe23/rbzHXdvlIR2k09MyeOVJX4AGpVZ5L4Zh8Od777u2dexRDhOPgmWiAMpOnltJ
	cQp3oOwQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxP-0003j3-EO; Mon, 16 Nov 2020 14:58:15 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 04/78] sd: update the bdev size in sd_revalidate_disk
Date: Mon, 16 Nov 2020 15:56:55 +0100
Message-Id: <20201116145809.410558-5-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

This avoids the extra call to revalidate_disk_size in sd_rescan and is
otherwise a no-op, because either the size did not change or we are in
the probe path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/scsi/sd.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 656bcf4940d6d1..4a34dd5b153196 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1750,10 +1750,8 @@ static int sd_sync_cache(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr)
 static void sd_rescan(struct device *dev)
 {
 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
-	int ret;
 
-	ret = sd_revalidate_disk(sdkp->disk);
-	revalidate_disk_size(sdkp->disk, ret == 0);
+	sd_revalidate_disk(sdkp->disk);
 }
 
 static int sd_ioctl(struct block_device *bdev, fmode_t mode,
@@ -3266,7 +3264,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	sdkp->first_scan = 0;
 
 	set_capacity_revalidate_and_notify(disk,
-		logical_to_sectors(sdp, sdkp->capacity), false);
+		logical_to_sectors(sdp, sdkp->capacity), true);
 	sd_config_write_same(sdkp);
 	kfree(buffer);
 
@@ -3276,7 +3274,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * capacity to 0.
 	 */
 	if (sd_zbc_revalidate_zones(sdkp))
-		set_capacity_revalidate_and_notify(disk, 0, false);
+		set_capacity_revalidate_and_notify(disk, 0, true);
 
  out:
 	return 0;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:58:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:58:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28000.56554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefxq-0006ow-4d; Mon, 16 Nov 2020 14:58:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28000.56554; Mon, 16 Nov 2020 14:58:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefxq-0006on-1N; Mon, 16 Nov 2020 14:58:42 +0000
Received: by outflank-mailman (input) for mailman id 28000;
 Mon, 16 Nov 2020 14:58:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefxo-0006ni-Vi
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f50b753e-efe4-457e-ab0f-2b298420c158;
 Mon, 16 Nov 2020 14:58:33 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxQ-0003jM-O4; Mon, 16 Nov 2020 14:58:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefxo-0006ni-Vi
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:41 +0000
X-Inumbo-ID: f50b753e-efe4-457e-ab0f-2b298420c158
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f50b753e-efe4-457e-ab0f-2b298420c158;
	Mon, 16 Nov 2020 14:58:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=gB9wX8Sf5Fr0vBVAuUG0acKrf9nR5B+BsqLVZ7zKAPE=; b=hPdBqu7LASCimrqoWvJWekgiwM
	e6sFN9M6496yHhIwQZ8SVDNPv85Fa5P7ZEcnY74J6Tt2EVaQ1HjE7fGy6Qo/q8v9uzp6sD/vtDi3k
	x+AFEwLxBM9S/UHTZAELAThsMlf8jyW4aFqZBbR6MMz+Vsa4kuh+hdpwNonyepLm5hF/WY/im2Qsp
	M27a3ymH344cKP7AJlnEbQ1ShVBTdT3pN0ujj3wBLxRxqpqeL17Wc9RHjP5WuygZtcYOLRkmU6YW7
	qqsDvFjX11ndArIbuj3JsnUHh/P8r/XoLsW781HNKdSvw8XE1eNQ7Y5+/zRvydj78vPLvJktJkTdb
	/FmNl3CA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxQ-0003jM-O4; Mon, 16 Nov 2020 14:58:17 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>,
	Petr Vorel <pvorel@suse.cz>
Subject: [PATCH 05/78] block: remove the update_bdev parameter to set_capacity_revalidate_and_notify
Date: Mon, 16 Nov 2020 15:56:56 +0100
Message-Id: <20201116145809.410558-6-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The update_bdev argument is always set to true, so remove it.  Also
rename the function to the slightly less verbose set_capacity_and_notify,
as propagating the disk size to the block device isn't really
revalidation.
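
The notification rule the helper implements can be modeled in isolation
(a hypothetical user-space sketch, not the kernel code itself): a RESIZE
uevent is sent only when the capacity actually changes between two
non-zero values, so transitions to or from a zero size stay silent.

```c
#include <stdbool.h>

/*
 * Hypothetical model of the uevent decision in set_capacity_and_notify():
 * notify only on a real resize, i.e. old and new capacity differ and
 * neither is zero.  (Sketch for illustration; the kernel helper also
 * updates the gendisk and block device sizes.)
 */
static bool should_send_resize_uevent(unsigned long long old_cap,
				      unsigned long long new_cap)
{
	return old_cap != new_cap && old_cap != 0 && new_cap != 0;
}
```

With this model, growing a live disk from 100 to 200 sectors notifies
user space, while bringing a disk up from 0 (initial probe) or tearing
it down to 0 does not.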

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Petr Vorel <pvorel@suse.cz>
---
 block/genhd.c                | 13 +++++--------
 drivers/block/loop.c         |  2 +-
 drivers/block/virtio_blk.c   |  2 +-
 drivers/block/xen-blkfront.c |  2 +-
 drivers/nvme/host/core.c     |  2 +-
 drivers/scsi/sd.c            |  5 ++---
 include/linux/genhd.h        |  3 +--
 7 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 9387f050c248a7..8c350fecfe8bfe 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -46,17 +46,15 @@ static void disk_del_events(struct gendisk *disk);
 static void disk_release_events(struct gendisk *disk);
 
 /*
- * Set disk capacity and notify if the size is not currently
- * zero and will not be set to zero
+ * Set disk capacity and notify if the size is not currently zero and will not
+ * be set to zero.  Returns true if a uevent was sent, otherwise false.
  */
-bool set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
-					bool update_bdev)
+bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
 {
 	sector_t capacity = get_capacity(disk);
 
 	set_capacity(disk, size);
-	if (update_bdev)
-		revalidate_disk_size(disk, true);
+	revalidate_disk_size(disk, true);
 
 	if (capacity != size && capacity != 0 && size != 0) {
 		char *envp[] = { "RESIZE=1", NULL };
@@ -67,8 +65,7 @@ bool set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
 
 	return false;
 }
-
-EXPORT_SYMBOL_GPL(set_capacity_revalidate_and_notify);
+EXPORT_SYMBOL_GPL(set_capacity_and_notify);
 
 /*
  * Format the device name of the indicated disk into the supplied buffer and
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 0a0c0c3a68ec4c..84a36c242e5550 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -251,7 +251,7 @@ loop_validate_block_size(unsigned short bsize)
  */
 static void loop_set_size(struct loop_device *lo, loff_t size)
 {
-	if (!set_capacity_revalidate_and_notify(lo->lo_disk, size, true))
+	if (!set_capacity_and_notify(lo->lo_disk, size))
 		kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
 }
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index a314b9382442b6..3e812b4c32e669 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -470,7 +470,7 @@ static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
 		   cap_str_10,
 		   cap_str_2);
 
-	set_capacity_revalidate_and_notify(vblk->disk, capacity, true);
+	set_capacity_and_notify(vblk->disk, capacity);
 }
 
 static void virtblk_config_changed_work(struct work_struct *work)
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 48629d3433b4c3..79521e33d30ed5 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2370,7 +2370,7 @@ static void blkfront_connect(struct blkfront_info *info)
 			return;
 		printk(KERN_INFO "Setting capacity to %Lu\n",
 		       sectors);
-		set_capacity_revalidate_and_notify(info->gd, sectors, true);
+		set_capacity_and_notify(info->gd, sectors);
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f6c6479da0e9ec..6c144e748f8cae 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 			capacity = 0;
 	}
 
-	set_capacity_revalidate_and_notify(disk, capacity, true);
+	set_capacity_and_notify(disk, capacity);
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 4a34dd5b153196..a2a4f385833d6c 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3263,8 +3263,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 
 	sdkp->first_scan = 0;
 
-	set_capacity_revalidate_and_notify(disk,
-		logical_to_sectors(sdp, sdkp->capacity), true);
+	set_capacity_and_notify(disk, logical_to_sectors(sdp, sdkp->capacity));
 	sd_config_write_same(sdkp);
 	kfree(buffer);
 
@@ -3274,7 +3273,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * capacity to 0.
 	 */
 	if (sd_zbc_revalidate_zones(sdkp))
-		set_capacity_revalidate_and_notify(disk, 0, true);
+		set_capacity_and_notify(disk, 0);
 
  out:
 	return 0;
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 03da3f603d309c..4b22bfd9336e1a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -315,8 +315,7 @@ static inline int get_disk_ro(struct gendisk *disk)
 extern void disk_block_events(struct gendisk *disk);
 extern void disk_unblock_events(struct gendisk *disk);
 extern void disk_flush_events(struct gendisk *disk, unsigned int mask);
-bool set_capacity_revalidate_and_notify(struct gendisk *disk, sector_t size,
-		bool update_bdev);
+bool set_capacity_and_notify(struct gendisk *disk, sector_t size);
 
 /* drivers/char/random.c */
 extern void add_disk_randomness(struct gendisk *disk) __latent_entropy;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:58:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.27999.56542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefxl-0006nu-SQ; Mon, 16 Nov 2020 14:58:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 27999.56542; Mon, 16 Nov 2020 14:58:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefxl-0006nn-OY; Mon, 16 Nov 2020 14:58:37 +0000
Received: by outflank-mailman (input) for mailman id 27999;
 Mon, 16 Nov 2020 14:58:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefxk-0006ni-0m
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:37 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ffb5798-cba6-438a-af2d-9f1cecbf0892;
 Mon, 16 Nov 2020 14:58:33 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxL-0003ie-Nn; Mon, 16 Nov 2020 14:58:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefxk-0006ni-0m
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:37 +0000
X-Inumbo-ID: 5ffb5798-cba6-438a-af2d-9f1cecbf0892
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5ffb5798-cba6-438a-af2d-9f1cecbf0892;
	Mon, 16 Nov 2020 14:58:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=82UVn9JDhZ7kcS3JGvYaAt4iz84QDJp/mOdtcI/Npsk=; b=usuU2sefAFYqsEXj3qbul/Q7Un
	CCv9GjY0IirraZHVxERI0VFsNKbIJNWWfb9W2amAmbgPQFz1SQQGP9jucGmGFK5Dmz+uV/dQrh4EZ
	Yz7+inlUGGRZ2SENP6FNBzSsnyPPr/9g2oHpP/gfHR3aZh9bCC9SRkZky62gN2xUhMktrW7I1gC8O
	OlLKrcM8QGapUy4m0OUacF0pEkCV+NYMm0cjABSANyQg80uGZOPlSQK6hRsBvshv4teWYRl87jbJA
	+wqxRblm4gAKo4kgFpaUcUBO3YCWIyQ/tck3EBg4rNw0ym4xMAVhVwQ1xnNmFGgIlSjLbsb6rNNg5
	l7qTv7cQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxL-0003ie-Nn; Mon, 16 Nov 2020 14:58:12 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 01/78] block: remove the call to __invalidate_device in check_disk_size_change
Date: Mon, 16 Nov 2020 15:56:52 +0100
Message-Id: <20201116145809.410558-2-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

__invalidate_device without the kill_dirty parameter just invalidates
various clean entries in caches, which doesn't really help us with
anything, but can cause all kinds of horrible lock ordering problems
due to how it calls into the file system.  The only reason this hasn't
been a major issue is that so many people use partitions, for which no
invalidation was performed anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 fs/block_dev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9e84b1928b9401..66ebf594c97f47 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1334,12 +1334,6 @@ static void check_disk_size_change(struct gendisk *disk,
 		i_size_write(bdev->bd_inode, disk_size);
 	}
 	spin_unlock(&bdev->bd_size_lock);
-
-	if (bdev_size > disk_size) {
-		if (__invalidate_device(bdev, false))
-			pr_warn("VFS: busy inodes on resized disk %s\n",
-				disk->disk_name);
-	}
 }
 
 /**
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:58:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:58:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28002.56578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefy0-0006w0-QQ; Mon, 16 Nov 2020 14:58:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28002.56578; Mon, 16 Nov 2020 14:58:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefy0-0006vp-MA; Mon, 16 Nov 2020 14:58:52 +0000
Received: by outflank-mailman (input) for mailman id 28002;
 Mon, 16 Nov 2020 14:58:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefxy-0006ni-W8
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f94e0b88-1e5d-4aa5-b7e4-c2388518e708;
 Mon, 16 Nov 2020 14:58:34 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxM-0003ii-UU; Mon, 16 Nov 2020 14:58:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefxy-0006ni-W8
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:51 +0000
X-Inumbo-ID: f94e0b88-1e5d-4aa5-b7e4-c2388518e708
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f94e0b88-1e5d-4aa5-b7e4-c2388518e708;
	Mon, 16 Nov 2020 14:58:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=k53tbKeUbVsb5uwUbGW8UpW/h79If5n0w+H4eZR3xSU=; b=tuJMnQttaybo9F4t8cP+N2hLQq
	RXrM7+cD1yLqquFQbtlx4GxTcGrgpRY8HWo0g3CxND1nv1GYyQPkp8q3K2wzD/dngpDWkkquqHkp4
	sHuLCh62/kacnFU017uJ6lg3sIxNoOE/FenrUxQsyI4PysSw0WPutcnk5f8h8OTz6qITqDy4OBoNX
	j839YHJ1VKl6fCqCZwFmQOwQQf+y4oyipnv0MRAtvY48Ogo6z5wy5t7KJxz/QqkUnQuTFox/n8y/r
	oz3SnuiJlz5Z/j6oveEA/Y/B74scFktzTg4AjYglEkJ4LnQzCz3bmbG+4ANwTszKKWHBb195E+1pa
	yP5eNyVA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxM-0003ii-UU; Mon, 16 Nov 2020 14:58:13 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 02/78] loop: let set_capacity_revalidate_and_notify update the bdev size
Date: Mon, 16 Nov 2020 15:56:53 +0100
Message-Id: <20201116145809.410558-3-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

There is no good reason to call revalidate_disk_size separately.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/loop.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index a58084c2ed7ceb..0a0c0c3a68ec4c 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -251,12 +251,8 @@ loop_validate_block_size(unsigned short bsize)
  */
 static void loop_set_size(struct loop_device *lo, loff_t size)
 {
-	struct block_device *bdev = lo->lo_device;
-
-	bd_set_nr_sectors(bdev, size);
-
-	if (!set_capacity_revalidate_and_notify(lo->lo_disk, size, false))
-		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
+	if (!set_capacity_revalidate_and_notify(lo->lo_disk, size, true))
+		kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
 }
 
 static inline int
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28003.56590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefy5-00070J-54; Mon, 16 Nov 2020 14:58:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28003.56590; Mon, 16 Nov 2020 14:58:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefy5-00070C-1H; Mon, 16 Nov 2020 14:58:57 +0000
Received: by outflank-mailman (input) for mailman id 28003;
 Mon, 16 Nov 2020 14:58:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefy4-0006ni-00
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 108d8c18-1beb-4757-bd37-fcfe0ad76a33;
 Mon, 16 Nov 2020 14:58:34 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxO-0003il-5a; Mon, 16 Nov 2020 14:58:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefy4-0006ni-00
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:58:56 +0000
X-Inumbo-ID: 108d8c18-1beb-4757-bd37-fcfe0ad76a33
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 108d8c18-1beb-4757-bd37-fcfe0ad76a33;
	Mon, 16 Nov 2020 14:58:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=8PaAH/NlXWyaCL2GXdksIteaFARADfMu/D/Ckol9tMo=; b=wN6wosc7g3dpVtffknH8vLUNmI
	pMbSFF5gh+8vT4rZkoFCvA6aupRiJvV6j3DeZkEjRv3JF0SuQyb5rop7nap6P6bBHDiNXMuOKkZFU
	2/3+raVMb7SXRLste9KvlOJ5u9ts2MBrNSLLG0BKT09RZvaTi4dWaiwejVC2gXZml0XgrjU4ZD9E9
	bfDZpE1IwTyOS+y5OedbAXvRqtlBoMPZrfgS29FgVeIwd44vpSMEJZ21UALgOYKLjnQru2LVpggg8
	bbqyjjy/clatNlvgFvfOP1lPu5Xy/eTtWSN37zZiTOAlIpF+vxuEFOnMYCgfFSQ+D5UWEr2zVNV4F
	dSv9ugzw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxO-0003il-5a; Mon, 16 Nov 2020 14:58:14 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 03/78] nvme: let set_capacity_revalidate_and_notify update the bdev size
Date: Mon, 16 Nov 2020 15:56:54 +0100
Message-Id: <20201116145809.410558-4-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

There is no good reason to call revalidate_disk_size separately.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9b01afcb7777b8..f6c6479da0e9ec 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2053,7 +2053,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 			capacity = 0;
 	}
 
-	set_capacity_revalidate_and_notify(disk, capacity, false);
+	set_capacity_revalidate_and_notify(disk, capacity, true);
 
 	nvme_config_discard(disk, ns);
 	nvme_config_write_zeroes(disk, ns);
@@ -2134,7 +2134,6 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
 		blk_stack_limits(&ns->head->disk->queue->limits,
 				 &ns->queue->limits, 0);
 		blk_queue_update_readahead(ns->head->disk->queue);
-		nvme_update_bdev_size(ns->head->disk);
 		blk_mq_unfreeze_queue(ns->head->disk->queue);
 	}
 #endif
@@ -3963,8 +3962,6 @@ static void nvme_validate_ns(struct nvme_ns *ns, struct nvme_ns_ids *ids)
 	 */
 	if (ret && ret != -ENOMEM && !(ret > 0 && !(ret & NVME_SC_DNR)))
 		nvme_ns_remove(ns);
-	else
-		revalidate_disk_size(ns->disk, true);
 }
 
 static void nvme_validate_or_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:59:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28005.56602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyA-00076M-Du; Mon, 16 Nov 2020 14:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28005.56602; Mon, 16 Nov 2020 14:59:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyA-00076B-9r; Mon, 16 Nov 2020 14:59:02 +0000
Received: by outflank-mailman (input) for mailman id 28005;
 Mon, 16 Nov 2020 14:59:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefy9-0006ni-06
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60b24fc5-b321-46f8-92f3-6a93644b26a7;
 Mon, 16 Nov 2020 14:58:36 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxS-0003jh-3N; Mon, 16 Nov 2020 14:58:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefy9-0006ni-06
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:01 +0000
X-Inumbo-ID: 60b24fc5-b321-46f8-92f3-6a93644b26a7
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 60b24fc5-b321-46f8-92f3-6a93644b26a7;
	Mon, 16 Nov 2020 14:58:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=cvShK6qJnVFjmBKg14zAGdWrea8dma4DfS0jKI1wjR8=; b=XDreKiJ4x96Z/t74WwZXoiqmxZ
	kGasNfT1bNA1fjULaCTvFR06AD4REjpimbHTwNYDO8d5l9aPJZ7sdpjiWmRfMAQTyELig3thtzFNL
	O+hrNDW6I3pZd16LIz/SfupQtZ9eQZKF+Kx78kw1Nxpvf+UEJapN+C7c097yWoiWzUOAHRwDyPtur
	nuWktKd22rUViISLXhzZbuaGhd3JD8rDuxrOCqSZhkDMCxP7CUB4opMwCsRN0rc9+981xvdt0QTBA
	42RpMe8dlT1x0xQeMttdYFUFpb/llHtOALyROgc+bjJHHCFAZA7I2tyW4T5BACWMO7fR7JwZsaT7v
	J8mEjS5g==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxS-0003jh-3N; Mon, 16 Nov 2020 14:58:18 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 06/78] nbd: remove the call to set_blocksize
Date: Mon, 16 Nov 2020 15:56:57 +0100
Message-Id: <20201116145809.410558-7-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Block drivers have no business setting the file system's concept of a
block size.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index aaae9220f3a008..a9a0b49ff16101 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,7 +296,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd, bool start)
+static void nbd_size_update(struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
 	struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -311,11 +311,9 @@ static void nbd_size_update(struct nbd_device *nbd, bool start)
 	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
 	set_capacity(nbd->disk, nr_sectors);
 	if (bdev) {
-		if (bdev->bd_disk) {
+		if (bdev->bd_disk)
 			bd_set_nr_sectors(bdev, nr_sectors);
-			if (start)
-				set_blocksize(bdev, config->blksize);
-		} else
+		else
 			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
 		bdput(bdev);
 	}
@@ -329,7 +327,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
 	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd, false);
+		nbd_size_update(nbd);
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1309,7 +1307,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd, true);
+	nbd_size_update(nbd);
 	return error;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:59:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:59:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28007.56614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyF-0007Cd-Q5; Mon, 16 Nov 2020 14:59:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28007.56614; Mon, 16 Nov 2020 14:59:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyF-0007CT-M1; Mon, 16 Nov 2020 14:59:07 +0000
Received: by outflank-mailman (input) for mailman id 28007;
 Mon, 16 Nov 2020 14:59:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyE-0006ni-07
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0e63f5c-7dce-4c65-82c8-47305c904e09;
 Mon, 16 Nov 2020 14:58:36 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxK-0003iO-GD; Mon, 16 Nov 2020 14:58:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyE-0006ni-07
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:06 +0000
X-Inumbo-ID: e0e63f5c-7dce-4c65-82c8-47305c904e09
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e0e63f5c-7dce-4c65-82c8-47305c904e09;
	Mon, 16 Nov 2020 14:58:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=dr8Nr8dbB7J11PqXo+7/b7tximxSHcvF1HcCKGt1IiI=; b=TpTZ5F4/AmdsxM2Cu94cy7eEas
	3X7DT8n5Y9HTYUNPKGh1eqWOrzm0xYcV10TWm7qZ2DAYxl9FZdzB5TSQwbJco0DGEH8GHNaKj4/uO
	UapJemi5kTz7ILke6BFMPk1Qp+enisOjgnM5WI66jPW/Vmd0viYdg1qSGiqiQimTL89OKOdtd5INW
	I0maxo/xPZ0nKatsoAF7NrHgoNRWlnrSm1gKMxhQmVA8vJCsPu08PgWB0cu99api4IM/dOgYmUa59
	BLoaxWXKPrvz0+ejgB6cVN2tLjQDwxy7EgSet0pUC1cws2cyKk57iZ+SQrQfZqgWnLnZ1UKD1blpx
	ffC4VyOw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxK-0003iO-GD; Mon, 16 Nov 2020 14:58:10 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: cleanup updating the size of block devices v3
Date: Mon, 16 Nov 2020 15:56:51 +0100
Message-Id: <20201116145809.410558-1-hch@lst.de>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Jens,

this series builds on top of the work that went into the last merge window
and makes sure we have a single coherent interface for updating the size of
a block device.

Changes since v2:
 - rebased to the set_capacity_revalidate_and_notify changes in mainline
 - keep the loop_set_size function
 - fix two mixed up acks
 
Changes since v1:
 - minor spelling fixes



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:59:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28011.56626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyK-0007Hn-5P; Mon, 16 Nov 2020 14:59:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28011.56626; Mon, 16 Nov 2020 14:59:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyK-0007Hg-1O; Mon, 16 Nov 2020 14:59:12 +0000
Received: by outflank-mailman (input) for mailman id 28011;
 Mon, 16 Nov 2020 14:59:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyJ-0006ni-0B
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af4cc64f-5e0f-485f-8d6f-89071da97c7a;
 Mon, 16 Nov 2020 14:58:37 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxT-0003kh-MQ; Mon, 16 Nov 2020 14:58:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyJ-0006ni-0B
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:11 +0000
X-Inumbo-ID: af4cc64f-5e0f-485f-8d6f-89071da97c7a
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id af4cc64f-5e0f-485f-8d6f-89071da97c7a;
	Mon, 16 Nov 2020 14:58:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=oD/feK/kknJy8NgrN6GUwKk4j5cIe6yLgGU87KzsgtI=; b=IrFVDJmAyDCdyIjxW0mx/wM+i0
	EfULIltaIxUj6jxf0IidNz1l2jSw8Ksulh7xmd0CBDNRy5YCjxY1r+V6kXvdMfu4B/jrKJGjZItyY
	pmVnQEQV+OHhGl4k/Fbj2BhN/m0dlPmLgbtOB54UGEgkstKPEUU8vGqNKmkcDv/niskfsfpX18U5w
	8InEkFKVUp9ZNaV7h0/GILJT1k3XrBuLsYqtDA+gf+w4j99DHcs4CUnHwP7kXuwaUANPik+3qgqiI
	RHxMSvMovWPDR+Oayh36XrbKH+oZyCtI9oB6aJjNi0FJ0TmkB7C1+oOtTu6BFVjYIqdqBrv/I5gjp
	v+ue0wAw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxT-0003kh-MQ; Mon, 16 Nov 2020 14:58:20 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 07/78] nbd: move the task_recv check into nbd_size_update
Date: Mon, 16 Nov 2020 15:56:58 +0100
Message-Id: <20201116145809.410558-8-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

nbd_size_update is about to acquire a few more callers, so lift the check
into the function.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index a9a0b49ff16101..48054051e281e6 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -299,8 +299,11 @@ static void nbd_size_clear(struct nbd_device *nbd)
 static void nbd_size_update(struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
-	struct block_device *bdev = bdget_disk(nbd->disk, 0);
 	sector_t nr_sectors = config->bytesize >> 9;
+	struct block_device *bdev;
+
+	if (!nbd->task_recv)
+		return;
 
 	if (config->flags & NBD_FLAG_SEND_TRIM) {
 		nbd->disk->queue->limits.discard_granularity = config->blksize;
@@ -309,7 +312,9 @@ static void nbd_size_update(struct nbd_device *nbd)
 	}
 	blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
 	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+
 	set_capacity(nbd->disk, nr_sectors);
+	bdev = bdget_disk(nbd->disk, 0);
 	if (bdev) {
 		if (bdev->bd_disk)
 			bd_set_nr_sectors(bdev, nr_sectors);
@@ -326,8 +331,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	struct nbd_config *config = nbd->config;
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
-	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd);
+	nbd_size_update(nbd);
 }
 
 static void nbd_complete_rq(struct request *req)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:59:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:59:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28016.56638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyP-0007Nz-GE; Mon, 16 Nov 2020 14:59:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28016.56638; Mon, 16 Nov 2020 14:59:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyP-0007Np-CM; Mon, 16 Nov 2020 14:59:17 +0000
Received: by outflank-mailman (input) for mailman id 28016;
 Mon, 16 Nov 2020 14:59:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyO-0006ni-0X
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14db0522-9190-4f8e-9df0-cfde75abc00b;
 Mon, 16 Nov 2020 14:58:38 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxX-0003lT-Gv; Mon, 16 Nov 2020 14:58:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyO-0006ni-0X
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:16 +0000
X-Inumbo-ID: 14db0522-9190-4f8e-9df0-cfde75abc00b
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 14db0522-9190-4f8e-9df0-cfde75abc00b;
	Mon, 16 Nov 2020 14:58:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=YQCkYpiR+RuSjw9R7vDwMY44hzy/9NL3g9qrdbmxMFk=; b=HuU1N08umhPWRzRkQAVOlDiSNO
	Y5j48ahm1U4rxXSELx//58ov5M5R00+ej85c1E3MBvCyecR4yR2cgqtKYkt9V3dNt+quIJozcCBeX
	WPA8B/IKoZftjjg2jH6j+dq4zw7Gz8yXBI2NZ7GBnuJHBl7W1bUNn67cyDYXeIZonzQ/U5gzZrKtR
	UicxfKaceFDmMrM1y5GR5hJiePP26G+LeVa+UjKbCS4wIeYo10xky6q9H8WGZ+bIH3j/ha72rBAQO
	EKu1yiHImNOmaNaYQ5bumJh9PwV5TGLZR2S3ZJroR9ocZLpjKIzaAqDOwZuPMjOxdlzcqdmFyrHIz
	2YkZ/D2g==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxX-0003lT-Gv; Mon, 16 Nov 2020 14:58:23 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 10/78] nbd: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:01 +0100
Message-Id: <20201116145809.410558-11-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to update the disk and block device sizes and
send a RESIZE uevent to userspace.  Note that blktests relies on uevents
being sent even for updates that did not change the device size, so the
explicit kobject_uevent remains for that case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 7478a5e02bc1ed..45b0423ef2c53d 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -299,8 +299,6 @@ static void nbd_size_clear(struct nbd_device *nbd)
 static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
-	struct block_device *bdev;
-
 	if (!blksize)
 		blksize = NBD_DEF_BLKSIZE;
 	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
@@ -320,16 +318,9 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 	blk_queue_logical_block_size(nbd->disk->queue, blksize);
 	blk_queue_physical_block_size(nbd->disk->queue, blksize);
 
-	set_capacity(nbd->disk, bytesize >> 9);
-	bdev = bdget_disk(nbd->disk, 0);
-	if (bdev) {
-		if (bdev->bd_disk)
-			bd_set_nr_sectors(bdev, bytesize >> 9);
-		else
-			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
-		bdput(bdev);
-	}
-	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+	set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
+	if (!set_capacity_and_notify(nbd->disk, bytesize >> 9))
+		kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
 	return 0;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:59:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28020.56650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyV-0007V9-3e; Mon, 16 Nov 2020 14:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28020.56650; Mon, 16 Nov 2020 14:59:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyU-0007Uz-Vf; Mon, 16 Nov 2020 14:59:22 +0000
Received: by outflank-mailman (input) for mailman id 28020;
 Mon, 16 Nov 2020 14:59:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyT-0006ni-0U
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef441789-744b-4a28-a4e0-c05de0cff0eb;
 Mon, 16 Nov 2020 14:58:38 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxU-0003l1-V7; Mon, 16 Nov 2020 14:58:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyT-0006ni-0U
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:21 +0000
X-Inumbo-ID: ef441789-744b-4a28-a4e0-c05de0cff0eb
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ef441789-744b-4a28-a4e0-c05de0cff0eb;
	Mon, 16 Nov 2020 14:58:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=iXRxgxZjdW0OSiynW0b97Ye/Ew891ZT5a6+RemicV+c=; b=QYwalgIHuTfbRFaTkWHkM/7tzW
	ACNgYybAP7vqcx/im3VyKOgAqTgW5Atn+0ww2N8yuikc1w292fLMl1UaSuvRPNP3foPRzIFlfYrUO
	TF5EYc+etKoCl/jtkiK/l8LsHF/56Z+hcqD9nruEsa0QAFV8VyAb2w5WHLyaXloSD46SPL4BHTJgf
	Bvq6vpDR5mgNyI2RyrRZoODX71i6Aaac3KpH4ld2puCpmn3rAVa2TkBfOKYPio3kLY7m4sAF2wK3s
	dHoLtSXORuum4wYRF1CwCfPc8DdNWE91x1++nDT4y6RTBsjGds+V9F6YFSSS6E9XoHMur0U1Y3c7J
	sLlJ6zUA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxU-0003l1-V7; Mon, 16 Nov 2020 14:58:21 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 08/78] nbd: refactor size updates
Date: Mon, 16 Nov 2020 15:56:59 +0100
Message-Id: <20201116145809.410558-9-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Merge nbd_size_set and nbd_size_update into a single function that also
updates the nbd_config fields.  This new function takes the device size
in bytes as the first argument, and the blocksize as the second argument,
simplifying the calculations required in most callers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 44 ++++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 48054051e281e6..6e8f2ff715c661 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,28 +296,30 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+		loff_t blksize)
 {
-	struct nbd_config *config = nbd->config;
-	sector_t nr_sectors = config->bytesize >> 9;
 	struct block_device *bdev;
 
+	nbd->config->bytesize = bytesize;
+	nbd->config->blksize = blksize;
+
 	if (!nbd->task_recv)
 		return;
 
-	if (config->flags & NBD_FLAG_SEND_TRIM) {
-		nbd->disk->queue->limits.discard_granularity = config->blksize;
-		nbd->disk->queue->limits.discard_alignment = config->blksize;
+	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
+		nbd->disk->queue->limits.discard_granularity = blksize;
+		nbd->disk->queue->limits.discard_alignment = blksize;
 		blk_queue_max_discard_sectors(nbd->disk->queue, UINT_MAX);
 	}
-	blk_queue_logical_block_size(nbd->disk->queue, config->blksize);
-	blk_queue_physical_block_size(nbd->disk->queue, config->blksize);
+	blk_queue_logical_block_size(nbd->disk->queue, blksize);
+	blk_queue_physical_block_size(nbd->disk->queue, blksize);
 
-	set_capacity(nbd->disk, nr_sectors);
+	set_capacity(nbd->disk, bytesize >> 9);
 	bdev = bdget_disk(nbd->disk, 0);
 	if (bdev) {
 		if (bdev->bd_disk)
-			bd_set_nr_sectors(bdev, nr_sectors);
+			bd_set_nr_sectors(bdev, bytesize >> 9);
 		else
 			set_bit(GD_NEED_PART_SCAN, &nbd->disk->state);
 		bdput(bdev);
@@ -325,15 +327,6 @@ static void nbd_size_update(struct nbd_device *nbd)
 	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
 }
 
-static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
-			 loff_t nr_blocks)
-{
-	struct nbd_config *config = nbd->config;
-	config->blksize = blocksize;
-	config->bytesize = blocksize * nr_blocks;
-	nbd_size_update(nbd);
-}
-
 static void nbd_complete_rq(struct request *req)
 {
 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
@@ -1311,7 +1304,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd);
+	nbd_set_size(nbd, config->bytesize, config->blksize);
 	return error;
 }
 
@@ -1390,15 +1383,14 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 			arg = NBD_DEF_BLKSIZE;
 		if (!nbd_is_valid_blksize(arg))
 			return -EINVAL;
-		nbd_size_set(nbd, arg,
-			     div_s64(config->bytesize, arg));
+		nbd_set_size(nbd, config->bytesize, arg);
 		return 0;
 	case NBD_SET_SIZE:
-		nbd_size_set(nbd, config->blksize,
-			     div_s64(arg, config->blksize));
+		nbd_set_size(nbd, arg, config->blksize);
 		return 0;
 	case NBD_SET_SIZE_BLOCKS:
-		nbd_size_set(nbd, config->blksize, arg);
+		nbd_set_size(nbd, arg * config->blksize,
+			     config->blksize);
 		return 0;
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
@@ -1828,7 +1820,7 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	}
 
 	if (bytes != config->bytesize || bsize != config->blksize)
-		nbd_size_set(nbd, bsize, div64_u64(bytes, bsize));
+		nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 14:59:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 14:59:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28036.56662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyt-0007lC-Dm; Mon, 16 Nov 2020 14:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28036.56662; Mon, 16 Nov 2020 14:59:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefyt-0007l5-Aa; Mon, 16 Nov 2020 14:59:47 +0000
Received: by outflank-mailman (input) for mailman id 28036;
 Mon, 16 Nov 2020 14:59:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyd-0006ni-0r
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6827a29-2012-40c2-8004-c09647d2256e;
 Mon, 16 Nov 2020 14:58:40 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxW-0003lB-6P; Mon, 16 Nov 2020 14:58:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyd-0006ni-0r
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:31 +0000
X-Inumbo-ID: c6827a29-2012-40c2-8004-c09647d2256e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c6827a29-2012-40c2-8004-c09647d2256e;
	Mon, 16 Nov 2020 14:58:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=P79Gf++XFU6hHqLQrw0lALqz5dNl+ctbqrznHTPStfQ=; b=FD7Das6TDEjq3rfmXkqaOjbFQ3
	YfESJ91RJF4/7NTkMH584g6c6YDfnvGQoeWsu171HgLMde/Ltz+y7z9vxMTbM/GcG5qcoNrWFfn4o
	ecJkINhlGAPVTLeyfzouxKX5BbYnqe4VXZaD61qR0oEVPbcvoQLvAWD4NKQpoy/sNyynJXDWVO3ca
	dqs8Eq34clhCkpk7c2M876zlKIlTQxn6KvJMSW0IiIB2lNOfm4s1SKx6u3jT3Ri5n8tVOnj2+CgIp
	CwShMZsSRHCj7jUKXC5s454mqxFeP7xg16Zyd8xVitmw1TSarsnO6hNOF7sNkAFXfVWKjRWpbsAnL
	tg/055FA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxW-0003lB-6P; Mon, 16 Nov 2020 14:58:22 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 09/78] nbd: validate the block size in nbd_set_size
Date: Mon, 16 Nov 2020 15:57:00 +0100
Message-Id: <20201116145809.410558-10-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the validation of the block size from the callers into nbd_set_size.
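
As a rough userspace illustration (not the driver code itself; PAGE_SIZE is assumed to be 4096 here, and NBD_DEF_BLKSIZE 1024 as in nbd.c), the checks consolidated into nbd_set_size amount to:

```c
#include <stdbool.h>
#include <stdint.h>

#define NBD_DEF_BLKSIZE 1024
#define PAGE_SIZE 4096 /* assumption for this sketch */

/* A zero block size falls back to the default; anything that is not
 * a power of two in [512, PAGE_SIZE] is rejected with -EINVAL. */
static bool blksize_valid(uint64_t blksize)
{
	if (blksize == 0)
		blksize = NBD_DEF_BLKSIZE;
	return blksize >= 512 && blksize <= PAGE_SIZE &&
	       (blksize & (blksize - 1)) == 0;
}
```

With the checks in one place, the three ioctl paths and the netlink path can simply return nbd_set_size's result instead of each open-coding the validation.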

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 drivers/block/nbd.c | 47 +++++++++++++++------------------------------
 1 file changed, 15 insertions(+), 32 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 6e8f2ff715c661..7478a5e02bc1ed 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -296,16 +296,21 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
+static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
 	struct block_device *bdev;
 
+	if (!blksize)
+		blksize = NBD_DEF_BLKSIZE;
+	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
+		return -EINVAL;
+
 	nbd->config->bytesize = bytesize;
 	nbd->config->blksize = blksize;
 
 	if (!nbd->task_recv)
-		return;
+		return 0;
 
 	if (nbd->config->flags & NBD_FLAG_SEND_TRIM) {
 		nbd->disk->queue->limits.discard_granularity = blksize;
@@ -325,6 +330,7 @@ static void nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		bdput(bdev);
 	}
 	kobject_uevent(&nbd_to_dev(nbd)->kobj, KOBJ_CHANGE);
+	return 0;
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1304,8 +1310,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_set_size(nbd, config->bytesize, config->blksize);
-	return error;
+	return nbd_set_size(nbd, config->bytesize, config->blksize);
 }
 
 static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
@@ -1347,14 +1352,6 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
 		nbd_config_put(nbd);
 }
 
-static bool nbd_is_valid_blksize(unsigned long blksize)
-{
-	if (!blksize || !is_power_of_2(blksize) || blksize < 512 ||
-	    blksize > PAGE_SIZE)
-		return false;
-	return true;
-}
-
 static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
 {
 	nbd->tag_set.timeout = timeout * HZ;
@@ -1379,19 +1376,12 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 	case NBD_SET_SOCK:
 		return nbd_add_socket(nbd, arg, false);
 	case NBD_SET_BLKSIZE:
-		if (!arg)
-			arg = NBD_DEF_BLKSIZE;
-		if (!nbd_is_valid_blksize(arg))
-			return -EINVAL;
-		nbd_set_size(nbd, config->bytesize, arg);
-		return 0;
+		return nbd_set_size(nbd, config->bytesize, arg);
 	case NBD_SET_SIZE:
-		nbd_set_size(nbd, arg, config->blksize);
-		return 0;
+		return nbd_set_size(nbd, arg, config->blksize);
 	case NBD_SET_SIZE_BLOCKS:
-		nbd_set_size(nbd, arg * config->blksize,
-			     config->blksize);
-		return 0;
+		return nbd_set_size(nbd, arg * config->blksize,
+				    config->blksize);
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
 		return 0;
@@ -1809,18 +1799,11 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	if (info->attrs[NBD_ATTR_SIZE_BYTES])
 		bytes = nla_get_u64(info->attrs[NBD_ATTR_SIZE_BYTES]);
 
-	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]) {
+	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES])
 		bsize = nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
-		if (!bsize)
-			bsize = NBD_DEF_BLKSIZE;
-		if (!nbd_is_valid_blksize(bsize)) {
-			printk(KERN_ERR "Invalid block size %llu\n", bsize);
-			return -EINVAL;
-		}
-	}
 
 	if (bytes != config->bytesize || bsize != config->blksize)
-		nbd_set_size(nbd, bytes, bsize);
+		return nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:00:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28039.56673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefzA-0008JW-O6; Mon, 16 Nov 2020 15:00:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28039.56673; Mon, 16 Nov 2020 15:00:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefzA-0008Ip-Ki; Mon, 16 Nov 2020 15:00:04 +0000
Received: by outflank-mailman (input) for mailman id 28039;
 Mon, 16 Nov 2020 15:00:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyi-0006ni-14
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82a48279-115e-4cee-aa4f-0f74e6244596;
 Mon, 16 Nov 2020 14:58:43 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxb-0003mp-ER; Mon, 16 Nov 2020 14:58:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyi-0006ni-14
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:36 +0000
X-Inumbo-ID: 82a48279-115e-4cee-aa4f-0f74e6244596
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 82a48279-115e-4cee-aa4f-0f74e6244596;
	Mon, 16 Nov 2020 14:58:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=j/0/qXcDYvQ/KRFTCfbeLrD6BKftlYW1jMJrLTQknrA=; b=byXWvuFWUAS6lZPm5Zeq7p8bBe
	us/bzllqhkru2czyVVokQTTeYL+/3nVsuU4/rDhYvEj/ImRfWDgds8ciFSHt3DJegL3w+3AQaGHag
	PKasBfLdKS5E4pWdTFh6/4uyvJRFiUdVWjcV1a5aQvY/kBYNo4iLxhmPSh2kdJdTp7A4ixInIx3wo
	JpWIePokKzvwp8PcbSTxxumquRpS137WC1RZjgyVjiJeOwfVg8ecKo6rYbXAhcQrObDK5SeQ3EIfa
	OTrz4SZ5xlHtXA94sJkOV0q/92afNyn3WLIhIoPBbXq1zA8kztmL78I31XmGh7QR7o5AcHIy7rcAn
	HnC1nGVw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxb-0003mp-ER; Mon, 16 Nov 2020 14:58:27 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 13/78] pktcdvd: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:04 +0100
Message-Id: <20201116145809.410558-14-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and the
block device.  This also gets the uevent notifications for the resize
for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/pktcdvd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 467dbd06b7cdb1..4326401cede445 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2130,8 +2130,7 @@ static int pkt_open_dev(struct pktcdvd_device *pd, fmode_t write)
 	}
 
 	set_capacity(pd->disk, lba << 2);
-	set_capacity(pd->bdev->bd_disk, lba << 2);
-	bd_set_nr_sectors(pd->bdev, lba << 2);
+	set_capacity_and_notify(pd->bdev->bd_disk, lba << 2);
 
 	q = bdev_get_queue(pd->bdev);
 	if (write) {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:00:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:00:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28041.56686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefzD-00008V-3B; Mon, 16 Nov 2020 15:00:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28041.56686; Mon, 16 Nov 2020 15:00:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kefzC-00008N-VE; Mon, 16 Nov 2020 15:00:06 +0000
Received: by outflank-mailman (input) for mailman id 28041;
 Mon, 16 Nov 2020 15:00:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyY-0006ni-0Z
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1158c296-eb3c-467f-894d-8f8cdcbaf2ff;
 Mon, 16 Nov 2020 14:58:38 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxY-0003mO-Ss; Mon, 16 Nov 2020 14:58:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyY-0006ni-0Z
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:26 +0000
X-Inumbo-ID: 1158c296-eb3c-467f-894d-8f8cdcbaf2ff
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1158c296-eb3c-467f-894d-8f8cdcbaf2ff;
	Mon, 16 Nov 2020 14:58:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=MbR6eD8Z5sCp9HMp2xRKfZrC0GyX/16idOKOYUQxwmk=; b=lVGpDrCYC4uApK0HXhMfWQu//t
	v5TNY33739vaVgizhet7Btd0aO7HYFUr4uFDV+hjF3FjzaPwtVUz2Kb7kWA5WHMiCOvV1zLNOLH3Z
	xYyJ9PyjAQotPS9g4g5yfkhyuQvbAwVKYue6ztUh19mPuVx8DtNsK62BOcbCwnRTuEJ3dvoDg2ldg
	WeBaZz6b7R0R55We4FXJgz3pLCWNo1XvND1Ep24BS9hOMWi13U/CDCIc7JHWwTXem/0p/fbzf34hi
	1r3s/4MDxah8TmWEcY0L5xxJ+d6obD1qWfKT/c/iL18LhZNRmRcA2WXg5WAUiLmevTSoEYS+1X/52
	zWWV8w6g==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxY-0003mO-Ss; Mon, 16 Nov 2020 14:58:25 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 11/78] aoe: don't call set_capacity from irq context
Date: Mon, 16 Nov 2020 15:57:02 +0100
Message-Id: <20201116145809.410558-12-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Updating the block device size from irq context can lead to torn
writes of the 64-bit value, and prevents us from using normal
process context locking primitives to serialize access to the 64-bit
nr_sectors value.  Defer the set_capacity to the already existing
workqueue handler, where it can be merged with the update of the
block device size by using set_capacity_and_notify.  As an extra
bonus this also adds proper uevent notifications for the resize.
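
The torn-write hazard can be sketched as follows: on a machine where a 64-bit store is performed as two 32-bit stores, an update interrupted between the halves leaves a value that is neither the old nor the new size. All names here are hypothetical; this is an illustration, not the aoe driver's code:

```c
#include <stdint.h>

/* Two halves of a 64-bit sector count, as they might sit in memory
 * on a 32-bit machine where a 64-bit store is not atomic. */
static uint32_t size_lo, size_hi;

static void store_lo(uint64_t v) { size_lo = (uint32_t)v; }
static void store_hi(uint64_t v) { size_hi = (uint32_t)(v >> 32); }

static uint64_t load_size(void)
{
	return ((uint64_t)size_hi << 32) | size_lo;
}
```

If an irq handler runs store_lo for a new size and a reader executes load_size before store_hi completes, the reader observes a mixed value. Deferring the whole update to the workqueue, where set_capacity_and_notify runs in process context, lets ordinary locking serialize the store.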

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/aoe/aoecmd.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index 313f0b946fe2b3..ac720bdcd983e7 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -890,19 +890,13 @@ void
 aoecmd_sleepwork(struct work_struct *work)
 {
 	struct aoedev *d = container_of(work, struct aoedev, work);
-	struct block_device *bd;
-	u64 ssize;
 
 	if (d->flags & DEVFL_GDALLOC)
 		aoeblk_gdalloc(d);
 
 	if (d->flags & DEVFL_NEWSIZE) {
-		ssize = get_capacity(d->gd);
-		bd = bdget_disk(d->gd, 0);
-		if (bd) {
-			bd_set_nr_sectors(bd, ssize);
-			bdput(bd);
-		}
+		set_capacity_and_notify(d->gd, d->ssize);
+
 		spin_lock_irq(&d->lock);
 		d->flags |= DEVFL_UP;
 		d->flags &= ~DEVFL_NEWSIZE;
@@ -971,10 +965,9 @@ ataid_complete(struct aoedev *d, struct aoetgt *t, unsigned char *id)
 	d->geo.start = 0;
 	if (d->flags & (DEVFL_GDALLOC|DEVFL_NEWSIZE))
 		return;
-	if (d->gd != NULL) {
-		set_capacity(d->gd, ssize);
+	if (d->gd != NULL)
 		d->flags |= DEVFL_NEWSIZE;
-	} else
+	else
 		d->flags |= DEVFL_GDALLOC;
 	schedule_work(&d->work);
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:05:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:05:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28065.56698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg4Q-0000j1-M6; Mon, 16 Nov 2020 15:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28065.56698; Mon, 16 Nov 2020 15:05:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg4Q-0000iu-Ip; Mon, 16 Nov 2020 15:05:30 +0000
Received: by outflank-mailman (input) for mailman id 28065;
 Mon, 16 Nov 2020 15:05:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kJi/=EW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1keg4O-0000ip-LT
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:28 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91c217bb-6db1-4c32-b8c1-aeebb3a4bee0;
 Mon, 16 Nov 2020 15:05:27 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C42D56736F; Mon, 16 Nov 2020 16:05:24 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kJi/=EW=lst.de=hch@srs-us1.protection.inumbo.net>)
	id 1keg4O-0000ip-LT
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:28 +0000
X-Inumbo-ID: 91c217bb-6db1-4c32-b8c1-aeebb3a4bee0
Received: from verein.lst.de (unknown [213.95.11.211])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 91c217bb-6db1-4c32-b8c1-aeebb3a4bee0;
	Mon, 16 Nov 2020 15:05:27 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
	id C42D56736F; Mon, 16 Nov 2020 16:05:24 +0100 (CET)
Date: Mon, 16 Nov 2020 16:05:24 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Mike Snitzer <snitzer@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, linux-nvme@lists.infradead.org,
	Song Liu <song@kernel.org>, dm-devel@redhat.com,
	drbd-dev@lists.linbit.com, linux-scsi@vger.kernel.org,
	xen-devel@lists.xenproject.org, Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Josef Bacik <josef@toxicpanda.com>, nbd@other.debian.org,
	linux-raid@vger.kernel.org, Stefan Hajnoczi <stefanha@redhat.com>,
	ceph-devel@vger.kernel.org, linux-block@vger.kernel.org,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Minchan Kim <minchan@kernel.org>, linux-fsdevel@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: cleanup updating the size of block devices v3
Message-ID: <20201116150524.GA13367@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

Oops,

this is a bigger patch bomb than intended.  Only patches 1-23 are this
series, which should be ready to be applied once for-5.11/block pulls in
5.10-rc4.

After that follow patches already in for-5.11/block and my current hot
off the press development branch.


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28075.56722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8U-0000x2-LJ; Mon, 16 Nov 2020 15:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28075.56722; Mon, 16 Nov 2020 15:09:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8U-0000wr-Gi; Mon, 16 Nov 2020 15:09:42 +0000
Received: by outflank-mailman (input) for mailman id 28075;
 Mon, 16 Nov 2020 15:09:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzl-0006ni-3a
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b39c2de-7785-4cfa-9a92-56b798237cd8;
 Mon, 16 Nov 2020 14:58:59 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxt-0003sX-50; Mon, 16 Nov 2020 14:58:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzl-0006ni-3a
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:41 +0000
X-Inumbo-ID: 6b39c2de-7785-4cfa-9a92-56b798237cd8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6b39c2de-7785-4cfa-9a92-56b798237cd8;
	Mon, 16 Nov 2020 14:58:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=2smnVJT4M8MVl5MGvHMWtgKudyWqsSN/hT8JKVcTkcU=; b=UeKPjR9HaE3H/C6qkhiTe+Rocu
	9ShxKNi2wYBSK1rbD5wbPkjZToEWHwpbA4XMmV+yrAvhrlb8waEumxcmLoo6UJpHzT0oVsedCaTNL
	vW0OMgcrFRZWuj9afM+tWViwsKJaqn/qJwmqXTkY1DXr3Udw+lcZ1Jmvd9j5+tvAB+GIbBJjsq0cu
	EaxCxhsT192EWhx+hhW0l/8Cx9tvBzSIb1ezyuiHE2R3FTysXBl34PHXfUeZshdFbbBjcZGRonRTx
	gq9eiawuSsc+VqdBITNC40hyduSCiehl7z2/Koo7uijrjtnSFRlmObRyTCoTBE1CTJke9v1CYf9cY
	zJjjwESg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxt-0003sX-50; Mon, 16 Nov 2020 14:58:45 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 25/78] block: don't call into the driver for BLKFLSBUF
Date: Mon, 16 Nov 2020 15:57:16 +0100
Message-Id: <20201116145809.410558-26-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

BLKFLSBUF is entirely contained in the block core, and there is no
good reason to give the driver a hook into processing it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 3fbc382eb926d4..c6d8863f040945 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -369,15 +369,8 @@ static inline int is_unrecognized_ioctl(int ret)
 static int blkdev_flushbuf(struct block_device *bdev, fmode_t mode,
 		unsigned cmd, unsigned long arg)
 {
-	int ret;
-
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
-
-	ret = __blkdev_driver_ioctl(bdev, mode, cmd, arg);
-	if (!is_unrecognized_ioctl(ret))
-		return ret;
-
 	fsync_bdev(bdev);
 	invalidate_bdev(bdev);
 	return 0;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28074.56710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8T-0000v9-85; Mon, 16 Nov 2020 15:09:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28074.56710; Mon, 16 Nov 2020 15:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8T-0000ux-4J; Mon, 16 Nov 2020 15:09:41 +0000
Received: by outflank-mailman (input) for mailman id 28074;
 Mon, 16 Nov 2020 15:09:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1c-0006ni-78
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94d3ab4a-5415-4a54-b2b8-52c432e7358b;
 Mon, 16 Nov 2020 14:59:28 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyN-000402-5y; Mon, 16 Nov 2020 14:59:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1c-0006ni-78
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:36 +0000
X-Inumbo-ID: 94d3ab4a-5415-4a54-b2b8-52c432e7358b
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 94d3ab4a-5415-4a54-b2b8-52c432e7358b;
	Mon, 16 Nov 2020 14:59:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=O5A83Oi0Xx0wCgZ53MCDk0SimnnpHqXZpeb9/Xjg+3I=; b=E7BcK9hAyDSHxcI7VinrLs/x4+
	GbPQzDVUV1gmtfeik/ecOJf1xAz9adMm1AlR5n+8pmOFubRvS/7JC38pGCYM9ZwlEJ4zvxSDihrYM
	JGQCaf5k451aQcjTbxUWntTZRSfcMeTsf8Du7UrxdxqROv2m47Q52OOrWyg+7vXskLoftSze8ezz6
	aMmgYUku1QWTkRciIOFj5T0KMfHWt98qdMt4QXSWxtgUUImsabTkg/eANqkgiZzmVtnscOj29Xw6f
	e1x2pMJGB6+SMSCN0E1pYgQM50zCWis0+N6qx617dZwyzoH0THRVg26q4h6UoR9/dNcUrYteSODTY
	uCKNHu/A==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyN-000402-5y; Mon, 16 Nov 2020 14:59:15 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 46/78] ide: switch to __register_blkdev for command set probing
Date: Mon, 16 Nov 2020 15:57:37 +0100
Message-Id: <20201116145809.410558-47-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

ide is the last user of the blk_register_region framework except for the
tracking of allocated gendisks.  Switch to __register_blkdev, even though
it doesn't allow us to trivially find out which command set to probe for.
That means we now request all four command set modules whenever a user
tries to access an unclaimed ide device node, but apart from a few
potentially loaded modules for a fringe use case of a deprecated and
soon-to-be-removed driver, that makes no difference.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/ide/ide-probe.c | 34 ++++++----------------------------
 1 file changed, 6 insertions(+), 28 deletions(-)

diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index 076d34b381720f..1c1567bb519429 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -902,31 +902,12 @@ static int init_irq (ide_hwif_t *hwif)
 	return 1;
 }
 
-static int ata_lock(dev_t dev, void *data)
+static void ata_probe(dev_t dev)
 {
-	/* FIXME: we want to pin hwif down */
-	return 0;
-}
-
-static struct kobject *ata_probe(dev_t dev, int *part, void *data)
-{
-	ide_hwif_t *hwif = data;
-	int unit = *part >> PARTN_BITS;
-	ide_drive_t *drive = hwif->devices[unit];
-
-	if ((drive->dev_flags & IDE_DFLAG_PRESENT) == 0)
-		return NULL;
-
-	if (drive->media == ide_disk)
-		request_module("ide-disk");
-	if (drive->media == ide_cdrom || drive->media == ide_optical)
-		request_module("ide-cd");
-	if (drive->media == ide_tape)
-		request_module("ide-tape");
-	if (drive->media == ide_floppy)
-		request_module("ide-floppy");
-
-	return NULL;
+	request_module("ide-disk");
+	request_module("ide-cd");
+	request_module("ide-tape");
+	request_module("ide-floppy");
 }
 
 void ide_init_disk(struct gendisk *disk, ide_drive_t *drive)
@@ -967,7 +948,7 @@ static int hwif_init(ide_hwif_t *hwif)
 		return 0;
 	}
 
-	if (register_blkdev(hwif->major, hwif->name))
+	if (__register_blkdev(hwif->major, hwif->name, ata_probe))
 		return 0;
 
 	if (!hwif->sg_max_nents)
@@ -989,8 +970,6 @@ static int hwif_init(ide_hwif_t *hwif)
 		goto out;
 	}
 
-	blk_register_region(MKDEV(hwif->major, 0), MAX_DRIVES << PARTN_BITS,
-			    THIS_MODULE, ata_probe, ata_lock, hwif);
 	return 1;
 
 out:
@@ -1582,7 +1561,6 @@ static void ide_unregister(ide_hwif_t *hwif)
 	/*
 	 * Remove us from the kernel's knowledge
 	 */
-	blk_unregister_region(MKDEV(hwif->major, 0), MAX_DRIVES<<PARTN_BITS);
 	kfree(hwif->sg_table);
 	unregister_blkdev(hwif->major, hwif->name);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28078.56735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8V-0000ys-GO; Mon, 16 Nov 2020 15:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28078.56735; Mon, 16 Nov 2020 15:09:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8V-0000yK-75; Mon, 16 Nov 2020 15:09:43 +0000
Received: by outflank-mailman (input) for mailman id 28078;
 Mon, 16 Nov 2020 15:09:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3x-0006ni-Cu
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 240846f7-9bd4-4522-b166-72cff2f224fb;
 Mon, 16 Nov 2020 15:00:06 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz0-0004Hn-7B; Mon, 16 Nov 2020 14:59:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg3x-0006ni-Cu
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:01 +0000
X-Inumbo-ID: 240846f7-9bd4-4522-b166-72cff2f224fb
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 240846f7-9bd4-4522-b166-72cff2f224fb;
	Mon, 16 Nov 2020 15:00:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=hARJkWQlpNoJdj/vxwczaVdCfVVvfIUC8NDH/Nikt6k=; b=in7tsAV0pYzdDaH23WhnwAKR3R
	7xf04zdmNOdEtYs8arbN89y3kw4LApo5bLo3N/m4GrsdUYv2To+bUfGPGTQpzQd4TQGyrpK4e5lHC
	eCShsJJjHfdRG2ZVv14MPwihM8z3yu6kfNX1CiibXDctlXHH5jeZRUvheSFxZ/0kcEtUXIuEjWms/
	mhndsgiWlZLvqm1AI73/nDAEmdFQJYfIewjJl63lmaezSXoLWkADqyILaKsE+S6dzaUQc9rbYpEsZ
	grwi29hhSKISbnn7jD3DaEY2GFWHItE9E68g0ieNppWIL+4Ck8rzVBmAR0Eftf+VYl62odfp1ooaP
	Zg8s5EIQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefz0-0004Hn-7B; Mon, 16 Nov 2020 14:59:54 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 72/78] block: use disk_part_iter_exit in disk_part_iter_next
Date: Mon, 16 Nov 2020 15:58:03 +0100
Message-Id: <20201116145809.410558-73-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Call disk_part_iter_exit in disk_part_iter_next instead of duplicating
the functionality.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 999f7142b04e7d..56bc37e98ed852 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -230,8 +230,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 	int inc, end;
 
 	/* put the last partition */
-	disk_put_part(piter->part);
-	piter->part = NULL;
+	disk_part_iter_exit(piter);
 
 	/* get part_tbl */
 	rcu_read_lock();
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28076.56727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8V-0000xg-2W; Mon, 16 Nov 2020 15:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28076.56727; Mon, 16 Nov 2020 15:09:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8U-0000xW-Qw; Mon, 16 Nov 2020 15:09:42 +0000
Received: by outflank-mailman (input) for mailman id 28076;
 Mon, 16 Nov 2020 15:09:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3E-0006ni-BD
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe93c78e-05cf-4634-87c5-a23fec4ed1ce;
 Mon, 16 Nov 2020 14:59:52 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyl-0004C5-Vu; Mon, 16 Nov 2020 14:59:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg3E-0006ni-BD
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:16 +0000
X-Inumbo-ID: fe93c78e-05cf-4634-87c5-a23fec4ed1ce
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fe93c78e-05cf-4634-87c5-a23fec4ed1ce;
	Mon, 16 Nov 2020 14:59:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=2AKcef4DY6iJcprvQynjGgbCIgh8d3+qztu8J6nwbBU=; b=pWF6GiYZwuWwwwCepRCy7HJGry
	ebNMkKhyrUdm7IJyWpCzL1mHXVX2uHrDvMEZnLp7EDprwz/Ixpo6UyIkP9C/eBgJtKePlQ9iOFGEX
	anVcipPAM3jKhn7+W0pAqmOpSUkPyf5/WcShPIbr+Kvrkdy7ShEiMu40JORvE9Eu7vJW7k1YP9SBn
	x4dzOxFL2eT+4GeR6EmljUox59XCLVLuOr/c4rirnRVpl1nSS3OL1hT4yGJ4ATQQt0nK9jflQBVkF
	1F1WeS9cKSzs/UYR7bWm7TOPnDssO/WwxWuvt6FJ3Za4yj5tMQ3BhtLdQ4aklV0fV0Tw57B/U2ugq
	BAULThIQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyl-0004C5-Vu; Mon, 16 Nov 2020 14:59:40 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 62/78] loop: do not call set_blocksize
Date: Mon, 16 Nov 2020 15:57:53 +0100
Message-Id: <20201116145809.410558-63-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

set_blocksize is used by file systems to select their preferred buffer
cache block size.  Block drivers should not call it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9a27d4f1c08aac..b42c728620c9e4 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1164,9 +1164,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	size = get_loop_size(lo, file);
 	loop_set_size(lo, size);
 
-	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
-		      block_size(inode->i_bdev) : PAGE_SIZE);
-
 	lo->lo_state = Lo_bound;
 	if (part_shift)
 		lo->lo_flags |= LO_FLAGS_PARTSCAN;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28079.56748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8W-00010f-AQ; Mon, 16 Nov 2020 15:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28079.56748; Mon, 16 Nov 2020 15:09:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8V-0000zf-Ow; Mon, 16 Nov 2020 15:09:43 +0000
Received: by outflank-mailman (input) for mailman id 28079;
 Mon, 16 Nov 2020 15:09:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefys-0006ni-1X
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a341bbf4-6be5-4fed-a83e-3846fb79d988;
 Mon, 16 Nov 2020 14:58:44 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxe-0003nT-5J; Mon, 16 Nov 2020 14:58:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefys-0006ni-1X
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:46 +0000
X-Inumbo-ID: a341bbf4-6be5-4fed-a83e-3846fb79d988
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a341bbf4-6be5-4fed-a83e-3846fb79d988;
	Mon, 16 Nov 2020 14:58:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=/Imzh1mtRC/4MzK3hbPZY3ezfjbBAXF+4uATbyQG9tw=; b=dBQLE+OXBD+j477XeVCLVsEYMc
	RcEmj1V7WI2hhE1ic5DNIFG4NASPB/YphyPWVi7Xq1LFghc+xDtYP8gnsZoG77DfNVesQgpmfkhvc
	fRwObxyPEZK/z1Ciz1aZPB7p7sncrnoVPvsbR1T3r36wVT6Dggw6kq20tBT9oOGKs7N8awXlr/4dD
	vSByTZgMJWOqjW2HVCrSuJQGs+W4TB0ER5Ppk1IPUgTtfrMT+MCFmU3g27MhXzwXsmuaY0EkY4/zY
	y2h6fT5m4m5EipT6NUGe0exghgq3sxrlH/h+qqCes57dPolMGfQloQ8v6PU+P54LeLSCXbzWWejRV
	yilzQCOg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxe-0003nT-5J; Mon, 16 Nov 2020 14:58:30 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 15/78] drbd: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:06 +0100
Message-Id: <20201116145809.410558-16-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/drbd/drbd_main.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 65b95aef8dbc95..1c8c18b2a25f33 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2036,8 +2036,7 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size)
 {
 	char ppb[10];
 
-	set_capacity(device->vdisk, size);
-	revalidate_disk_size(device->vdisk, false);
+	set_capacity_and_notify(device->vdisk, size);
 
 	drbd_info(device, "size = %s (%llu KB)\n",
 		ppsize(ppb, size>>1), (unsigned long long)size>>1);
@@ -2068,8 +2067,7 @@ void drbd_device_cleanup(struct drbd_device *device)
 	}
 	D_ASSERT(device, first_peer_device(device)->connection->net_conf == NULL);
 
-	set_capacity(device->vdisk, 0);
-	revalidate_disk_size(device->vdisk, false);
+	set_capacity_and_notify(device->vdisk, 0);
 	if (device->bitmap) {
 		/* maybe never allocated. */
 		drbd_bm_resize(device, 0, 1);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28080.56760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8X-00012Z-4h; Mon, 16 Nov 2020 15:09:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28080.56760; Mon, 16 Nov 2020 15:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8W-000121-L3; Mon, 16 Nov 2020 15:09:44 +0000
Received: by outflank-mailman (input) for mailman id 28080;
 Mon, 16 Nov 2020 15:09:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0F-0006ni-4R
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f63c7646-2def-498b-8709-adcc420b5b00;
 Mon, 16 Nov 2020 14:59:09 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy1-0003uf-KX; Mon, 16 Nov 2020 14:58:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0F-0006ni-4R
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:11 +0000
X-Inumbo-ID: f63c7646-2def-498b-8709-adcc420b5b00
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f63c7646-2def-498b-8709-adcc420b5b00;
	Mon, 16 Nov 2020 14:59:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=yQQZQPCQfQM7CUBXzCMUyqgLnLGQ9YPa8lq6jkcafKI=; b=WtUs/ZHHHWoA+J542Y6mYXW7Ql
	wMixYk/BvE60aD54t7ExUm0AxV6UzJxr06L6viTlvpgjC2bEe3/zxQDkJkIFlS0bzo78ksny7Nq6W
	OKlhR7wy5ubVTlN/2Y0WDj6CW0pG+vIu/78iIeWE+YehwQa4BqiEFEcMODKoBOrTVdWrXl0Sdysf+
	pclwOsbyiCLrKl06HyP8fs+vPQOuBjK9Uq0ldqvVHcha7jCChZf433xQg5wtWEVmIZ0Td1rSjgQ2R
	mY13xJ5gm6oR6Cum62zjZhpTXXhQHhsumfYYoZ2+ylyEnlTiukpjjXmewaWHdYuHaxhjjM+xQVSh0
	CxfvNOfw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefy1-0003uf-KX; Mon, 16 Nov 2020 14:58:53 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 31/78] loop: use set_disk_ro
Date: Mon, 16 Nov 2020 15:57:22 +0100
Message-Id: <20201116145809.410558-32-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_disk_ro instead of set_device_ro to match all other block
drivers and to ensure all partitions mirror the read-only flag.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 84a36c242e5550..41caf799df721f 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1134,7 +1134,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	if (error)
 		goto out_unlock;
 
-	set_device_ro(bdev, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
+	set_disk_ro(lo->lo_disk, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
 
 	lo->use_dio = lo->lo_flags & LO_FLAGS_DIRECT_IO;
 	lo->lo_device = bdev;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28081.56773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8X-00014O-VE; Mon, 16 Nov 2020 15:09:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28081.56773; Mon, 16 Nov 2020 15:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8X-00013i-8y; Mon, 16 Nov 2020 15:09:45 +0000
Received: by outflank-mailman (input) for mailman id 28081;
 Mon, 16 Nov 2020 15:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1D-0006ni-69
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bec5bb8c-775a-4edf-b6a1-f9d2b128e1dd;
 Mon, 16 Nov 2020 14:59:24 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyI-0003z4-LX; Mon, 16 Nov 2020 14:59:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1D-0006ni-69
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:11 +0000
X-Inumbo-ID: bec5bb8c-775a-4edf-b6a1-f9d2b128e1dd
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bec5bb8c-775a-4edf-b6a1-f9d2b128e1dd;
	Mon, 16 Nov 2020 14:59:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=PqvTdxKlXNZLdjBDggPe+0dpl8BdkJnU5rWC71hrHK0=; b=q2Eui2HSznBIPI9cSjfWRkLiXv
	D7M1u2p1A7Bgss7fi8geS+IcBDfPIY7tG3iQsIcsu2pwlnpb6JkzHA9nQPyjhsgzFFr2f5V8gWY0R
	q0oyrC2H/s5XJqKuazQLgZxzSZuDhrLGiXIEEMHlBXx+6keZzYVXDEBv//T2nzeg8fA0yrA8TgRwk
	r0eZqTmdEAWKh5MSx2GpdmmFWrOrgA8uPwc9e459z4ahoYIjDlMtNVnWGRW3SsebzThr08Euwj40u
	FCjdDtLrrY8SFXrtoGuiZV87z8jk53zRslZsCywZYFngjpUQ7WpAm4CkVJQjfGDPL7bghqM+ajoud
	OS2SyUsg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyI-0003z4-LX; Mon, 16 Nov 2020 14:59:11 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 43/78] brd: use __register_blkdev to allocate devices on demand
Date: Mon, 16 Nov 2020 15:57:34 +0100
Message-Id: <20201116145809.410558-44-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the simpler mechanism attached to major_name to allocate a brd device
when a currently unregistered minor is accessed.
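
The lookup-or-allocate shape of that probe is easiest to see outside the kernel. Below is a minimal user-space sketch (all names hypothetical, not kernel API; locking is elided here, whereas the real brd_probe() holds brd_devices_mutex around this step): the first access to an unregistered minor allocates the backing device, and later accesses find the existing one.

```c
#include <stddef.h>

/*
 * User-space sketch of the probe pattern that __register_blkdev()
 * enables: first access to an unregistered minor allocates the
 * device; repeated probes return the existing entry. Illustrative
 * names only -- the kernel callback takes a dev_t and derives the
 * device index as MINOR(dev) / max_part.
 */
#define MAX_MINORS 16

struct fake_dev {
	int minor;
	int allocated;
};

static struct fake_dev devs[MAX_MINORS];

static struct fake_dev *fake_probe(int minor)
{
	struct fake_dev *d;

	if (minor < 0 || minor >= MAX_MINORS)
		return NULL;

	d = &devs[minor];
	if (!d->allocated) {	/* allocate on first access */
		d->minor = minor;
		d->allocated = 1;
	}
	return d;
}
```

Probing the same minor twice yields the same device, mirroring the brd_devices list walk in the patched brd_probe().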

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/brd.c | 39 +++++++++++----------------------------
 1 file changed, 11 insertions(+), 28 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index cc49a921339f77..c43a6ab4b1f39f 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -426,14 +426,15 @@ static void brd_free(struct brd_device *brd)
 	kfree(brd);
 }
 
-static struct brd_device *brd_init_one(int i, bool *new)
+static void brd_probe(dev_t dev)
 {
 	struct brd_device *brd;
+	int i = MINOR(dev) / max_part;
 
-	*new = false;
+	mutex_lock(&brd_devices_mutex);
 	list_for_each_entry(brd, &brd_devices, brd_list) {
 		if (brd->brd_number == i)
-			goto out;
+			goto out_unlock;
 	}
 
 	brd = brd_alloc(i);
@@ -442,9 +443,9 @@ static struct brd_device *brd_init_one(int i, bool *new)
 		add_disk(brd->brd_disk);
 		list_add_tail(&brd->brd_list, &brd_devices);
 	}
-	*new = true;
-out:
-	return brd;
+
+out_unlock:
+	mutex_unlock(&brd_devices_mutex);
 }
 
 static void brd_del_one(struct brd_device *brd)
@@ -454,23 +455,6 @@ static void brd_del_one(struct brd_device *brd)
 	brd_free(brd);
 }
 
-static struct kobject *brd_probe(dev_t dev, int *part, void *data)
-{
-	struct brd_device *brd;
-	struct kobject *kobj;
-	bool new;
-
-	mutex_lock(&brd_devices_mutex);
-	brd = brd_init_one(MINOR(dev) / max_part, &new);
-	kobj = brd ? get_disk_and_module(brd->brd_disk) : NULL;
-	mutex_unlock(&brd_devices_mutex);
-
-	if (new)
-		*part = 0;
-
-	return kobj;
-}
-
 static inline void brd_check_and_reset_par(void)
 {
 	if (unlikely(!max_part))
@@ -510,11 +494,12 @@ static int __init brd_init(void)
 	 *	dynamically.
 	 */
 
-	if (register_blkdev(RAMDISK_MAJOR, "ramdisk"))
+	if (__register_blkdev(RAMDISK_MAJOR, "ramdisk", brd_probe))
 		return -EIO;
 
 	brd_check_and_reset_par();
 
+	mutex_lock(&brd_devices_mutex);
 	for (i = 0; i < rd_nr; i++) {
 		brd = brd_alloc(i);
 		if (!brd)
@@ -532,9 +517,7 @@ static int __init brd_init(void)
 		brd->brd_disk->queue = brd->brd_queue;
 		add_disk(brd->brd_disk);
 	}
-
-	blk_register_region(MKDEV(RAMDISK_MAJOR, 0), 1UL << MINORBITS,
-				  THIS_MODULE, brd_probe, NULL, NULL);
+	mutex_unlock(&brd_devices_mutex);
 
 	pr_info("brd: module loaded\n");
 	return 0;
@@ -544,6 +527,7 @@ static int __init brd_init(void)
 		list_del(&brd->brd_list);
 		brd_free(brd);
 	}
+	mutex_unlock(&brd_devices_mutex);
 	unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
 
 	pr_info("brd: module NOT loaded !!!\n");
@@ -557,7 +541,6 @@ static void __exit brd_exit(void)
 	list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
 		brd_del_one(brd);
 
-	blk_unregister_region(MKDEV(RAMDISK_MAJOR, 0), 1UL << MINORBITS);
 	unregister_blkdev(RAMDISK_MAJOR, "ramdisk");
 
 	pr_info("brd: module unloaded\n");
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28083.56781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8Y-00016h-Kn; Mon, 16 Nov 2020 15:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28083.56781; Mon, 16 Nov 2020 15:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8Y-00015w-2q; Mon, 16 Nov 2020 15:09:46 +0000
Received: by outflank-mailman (input) for mailman id 28083;
 Mon, 16 Nov 2020 15:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0e-0006ni-59
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18fe1c3d-ab60-458b-bf87-aa52f0c77773;
 Mon, 16 Nov 2020 14:59:12 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy5-0003vm-UU; Mon, 16 Nov 2020 14:58:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0e-0006ni-59
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:36 +0000
X-Inumbo-ID: 18fe1c3d-ab60-458b-bf87-aa52f0c77773
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 18fe1c3d-ab60-458b-bf87-aa52f0c77773;
	Mon, 16 Nov 2020 14:59:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=v/lQzgfk3g8cFkaFjGaW1PVi+J4d/Qp7XTb0qTcrcMw=; b=YqzdVakmLdPHF6ezNwy3j22Rhk
	XQVP6AxA/GIysbqTVVeNbFaXKt7fGm9agC08PMgcnJUJeh74vxBSvOfGIuDA/W9HICp10ASgp343S
	hLuXPuNnKIIaKSmewfilQzler2ZMSLSZNnA3JzXsI/oA/vpGcupQtF+vIl+sykv24N3IbJmKQaPjW
	Jn9WRJEVebbUFdcTwhZRo/qo4f5NgH2pRWNmJeZrAnILNTOw/lnNzTwrq9w/rBYEhDgZjS59KWZ4g
	UrBImqs7+qvykUY+GYj6lm5Ht8/hTyu9yEFUWTYQLA51bItg9eUgBTuodXJbLE5gVUevZsuOFItUu
	Kp0yfIOw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefy5-0003vm-UU; Mon, 16 Nov 2020 14:58:58 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 34/78] block: propagate BLKROSET to all partitions
Date: Mon, 16 Nov 2020 15:57:25 +0100
Message-Id: <20201116145809.410558-35-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

When setting the whole device read-only (or clearing the read-only
state), also update the policy for all partitions.  The s390 dasd
driver has always been doing this, and it makes a lot of sense.
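
In outline (a toy user-space model with hypothetical names, not the kernel structs): setting the read-only policy on the whole device now propagates to every partition, while setting it on a single partition still touches only that partition.

```c
/*
 * Toy model of the BLKROSET change: the read-only policy lives per
 * partition; setting it on the whole disk propagates to all
 * partitions (as set_disk_ro() does in the patch), while setting it
 * on one partition changes only that partition. Illustrative only.
 */
#define NPARTS 4

struct toy_part {
	int policy;		/* 1 = read-only */
};

struct toy_disk {
	struct toy_part part0;	/* the whole-device node */
	struct toy_part parts[NPARTS];
};

static void toy_set_disk_ro(struct toy_disk *disk, int ro)
{
	int i;

	disk->part0.policy = ro;
	for (i = 0; i < NPARTS; i++)
		disk->parts[i].policy = ro;
}

static void toy_roset(struct toy_disk *disk, struct toy_part *part, int ro)
{
	if (part != &disk->part0)	/* bdev_is_partition() analogue */
		part->policy = ro;
	else
		toy_set_disk_ro(disk, ro);
}
```

Here toy_roset() on part0 marks all four partitions read-only in one call, matching what the s390 dasd driver already did.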

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 6b785181344fe1..22f394d118c302 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -354,7 +354,10 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 		if (ret)
 			return ret;
 	}
-	bdev->bd_part->policy = n;
+	if (bdev_is_partition(bdev))
+		bdev->bd_part->policy = n;
+	else
+		set_disk_ro(bdev->bd_disk, n);
 	return 0;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28084.56797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8Z-00019X-Oe; Mon, 16 Nov 2020 15:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28084.56797; Mon, 16 Nov 2020 15:09:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8Z-000190-4N; Mon, 16 Nov 2020 15:09:47 +0000
Received: by outflank-mailman (input) for mailman id 28084;
 Mon, 16 Nov 2020 15:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg21-0006ni-8F
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a499afc5-a80a-4be9-a571-92b972a5291a;
 Mon, 16 Nov 2020 14:59:33 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyS-00042Q-Vn; Mon, 16 Nov 2020 14:59:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg21-0006ni-8F
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:01 +0000
X-Inumbo-ID: a499afc5-a80a-4be9-a571-92b972a5291a
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a499afc5-a80a-4be9-a571-92b972a5291a;
	Mon, 16 Nov 2020 14:59:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=EaJZ5ZAOnqbW7jE+L2UIcq37a6pYwqJsY9OEaGa0ZMg=; b=K6aOl6Wl4emIGya7dNO3VK4giB
	RrLVrsnwciZD9REImiBOJpS2BGk+3cuN8NTeyZW6ZmMA+Ukg9E6g4MhMrK0UXSjMffjzCiXFaNE4M
	m4oa3O5ssF8kIfuRc5iypBIY7oPwn4MqZ8RIsE00NolXLG9oID81czwJRVsYg++Qn00gT96xEJQUM
	Ec1vaaBm2mYbzA5lP0j//6Qj5PKpmwe+PEfLM5StATuQF7VH1Qfpk0ZyhDEV4WuoGeO82mUEUBDbW
	5wEhqM9fHL89MPwukGt/c3OSiaQo7/iQTTTm0h0fIdu3fV/o7VasqFVwsBqQbeYd6zaP2j8kmeyHU
	dptQrB0w==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyS-00042Q-Vn; Mon, 16 Nov 2020 14:59:21 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 50/78] z2ram: reindent
Date: Mon, 16 Nov 2020 15:57:41 +0100
Message-Id: <20201116145809.410558-51-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Reindent the driver using Lindent, as the code style was far away from
normal Linux style.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/z2ram.c | 493 ++++++++++++++++++++----------------------
 1 file changed, 236 insertions(+), 257 deletions(-)

diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
index 0e734802ee7cc6..eafecc9a72b38d 100644
--- a/drivers/block/z2ram.c
+++ b/drivers/block/z2ram.c
@@ -42,7 +42,6 @@
 
 #include <linux/zorro.h>
 
-
 #define Z2MINOR_COMBINED      (0)
 #define Z2MINOR_Z2ONLY        (1)
 #define Z2MINOR_CHIPONLY      (2)
@@ -50,17 +49,17 @@
 #define Z2MINOR_MEMLIST2      (5)
 #define Z2MINOR_MEMLIST3      (6)
 #define Z2MINOR_MEMLIST4      (7)
-#define Z2MINOR_COUNT         (8) /* Move this down when adding a new minor */
+#define Z2MINOR_COUNT         (8)	/* Move this down when adding a new minor */
 
 #define Z2RAM_CHUNK1024       ( Z2RAM_CHUNKSIZE >> 10 )
 
 static DEFINE_MUTEX(z2ram_mutex);
-static u_long *z2ram_map    = NULL;
-static u_long z2ram_size    = 0;
-static int z2_count         = 0;
-static int chip_count       = 0;
-static int list_count       = 0;
-static int current_device   = -1;
+static u_long *z2ram_map = NULL;
+static u_long z2ram_size = 0;
+static int z2_count = 0;
+static int chip_count = 0;
+static int list_count = 0;
+static int current_device = -1;
 
 static DEFINE_SPINLOCK(z2ram_lock);
 
@@ -71,7 +70,7 @@ static blk_status_t z2_queue_rq(struct blk_mq_hw_ctx *hctx,
 {
 	struct request *req = bd->rq;
 	unsigned long start = blk_rq_pos(req) << 9;
-	unsigned long len  = blk_rq_cur_bytes(req);
+	unsigned long len = blk_rq_cur_bytes(req);
 
 	blk_mq_start_request(req);
 
@@ -92,7 +91,7 @@ static blk_status_t z2_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 		if (len < size)
 			size = len;
-		addr += z2ram_map[ start >> Z2RAM_CHUNKSHIFT ];
+		addr += z2ram_map[start >> Z2RAM_CHUNKSHIFT];
 		if (rq_data_dir(req) == READ)
 			memcpy(buffer, (char *)addr, size);
 		else
@@ -106,228 +105,214 @@ static blk_status_t z2_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
-static void
-get_z2ram( void )
+static void get_z2ram(void)
 {
-    int i;
-
-    for ( i = 0; i < Z2RAM_SIZE / Z2RAM_CHUNKSIZE; i++ )
-    {
-	if ( test_bit( i, zorro_unused_z2ram ) )
-	{
-	    z2_count++;
-	    z2ram_map[z2ram_size++] = (unsigned long)ZTWO_VADDR(Z2RAM_START) +
-				      (i << Z2RAM_CHUNKSHIFT);
-	    clear_bit( i, zorro_unused_z2ram );
+	int i;
+
+	for (i = 0; i < Z2RAM_SIZE / Z2RAM_CHUNKSIZE; i++) {
+		if (test_bit(i, zorro_unused_z2ram)) {
+			z2_count++;
+			z2ram_map[z2ram_size++] =
+			    (unsigned long)ZTWO_VADDR(Z2RAM_START) +
+			    (i << Z2RAM_CHUNKSHIFT);
+			clear_bit(i, zorro_unused_z2ram);
+		}
 	}
-    }
 
-    return;
+	return;
 }
 
-static void
-get_chipram( void )
+static void get_chipram(void)
 {
 
-    while ( amiga_chip_avail() > ( Z2RAM_CHUNKSIZE * 4 ) )
-    {
-	chip_count++;
-	z2ram_map[ z2ram_size ] =
-	    (u_long)amiga_chip_alloc( Z2RAM_CHUNKSIZE, "z2ram" );
+	while (amiga_chip_avail() > (Z2RAM_CHUNKSIZE * 4)) {
+		chip_count++;
+		z2ram_map[z2ram_size] =
+		    (u_long) amiga_chip_alloc(Z2RAM_CHUNKSIZE, "z2ram");
 
-	if ( z2ram_map[ z2ram_size ] == 0 )
-	{
-	    break;
+		if (z2ram_map[z2ram_size] == 0) {
+			break;
+		}
+
+		z2ram_size++;
 	}
 
-	z2ram_size++;
-    }
-	
-    return;
+	return;
 }
 
 static int z2_open(struct block_device *bdev, fmode_t mode)
 {
-    int device;
-    int max_z2_map = ( Z2RAM_SIZE / Z2RAM_CHUNKSIZE ) *
-	sizeof( z2ram_map[0] );
-    int max_chip_map = ( amiga_chip_size / Z2RAM_CHUNKSIZE ) *
-	sizeof( z2ram_map[0] );
-    int rc = -ENOMEM;
-
-    device = MINOR(bdev->bd_dev);
-
-    mutex_lock(&z2ram_mutex);
-    if ( current_device != -1 && current_device != device )
-    {
-	rc = -EBUSY;
-	goto err_out;
-    }
-
-    if ( current_device == -1 )
-    {
-	z2_count   = 0;
-	chip_count = 0;
-	list_count = 0;
-	z2ram_size = 0;
-
-	/* Use a specific list entry. */
-	if (device >= Z2MINOR_MEMLIST1 && device <= Z2MINOR_MEMLIST4) {
-		int index = device - Z2MINOR_MEMLIST1 + 1;
-		unsigned long size, paddr, vaddr;
-
-		if (index >= m68k_realnum_memory) {
-			printk( KERN_ERR DEVICE_NAME
-				": no such entry in z2ram_map\n" );
-		        goto err_out;
-		}
-
-		paddr = m68k_memory[index].addr;
-		size = m68k_memory[index].size & ~(Z2RAM_CHUNKSIZE-1);
-
-#ifdef __powerpc__
-		/* FIXME: ioremap doesn't build correct memory tables. */
-		{
-			vfree(vmalloc (size));
-		}
+	int device;
+	int max_z2_map = (Z2RAM_SIZE / Z2RAM_CHUNKSIZE) * sizeof(z2ram_map[0]);
+	int max_chip_map = (amiga_chip_size / Z2RAM_CHUNKSIZE) *
+	    sizeof(z2ram_map[0]);
+	int rc = -ENOMEM;
 
-		vaddr = (unsigned long)ioremap_wt(paddr, size);
+	device = MINOR(bdev->bd_dev);
 
-#else
-		vaddr = (unsigned long)z_remap_nocache_nonser(paddr, size);
-#endif
-		z2ram_map = 
-			kmalloc_array(size / Z2RAM_CHUNKSIZE,
-                                      sizeof(z2ram_map[0]),
-                                      GFP_KERNEL);
-		if ( z2ram_map == NULL )
-		{
-		    printk( KERN_ERR DEVICE_NAME
-			": cannot get mem for z2ram_map\n" );
-		    goto err_out;
-		}
+	mutex_lock(&z2ram_mutex);
+	if (current_device != -1 && current_device != device) {
+		rc = -EBUSY;
+		goto err_out;
+	}
 
-		while (size) {
-			z2ram_map[ z2ram_size++ ] = vaddr;
-			size -= Z2RAM_CHUNKSIZE;
-			vaddr += Z2RAM_CHUNKSIZE;
-			list_count++;
-		}
+	if (current_device == -1) {
+		z2_count = 0;
+		chip_count = 0;
+		list_count = 0;
+		z2ram_size = 0;
 
-		if ( z2ram_size != 0 )
-		    printk( KERN_INFO DEVICE_NAME
-			": using %iK List Entry %d Memory\n",
-			list_count * Z2RAM_CHUNK1024, index );
-	} else
-
-	switch ( device )
-	{
-	    case Z2MINOR_COMBINED:
-
-		z2ram_map = kmalloc( max_z2_map + max_chip_map, GFP_KERNEL );
-		if ( z2ram_map == NULL )
-		{
-		    printk( KERN_ERR DEVICE_NAME
-			": cannot get mem for z2ram_map\n" );
-		    goto err_out;
-		}
+		/* Use a specific list entry. */
+		if (device >= Z2MINOR_MEMLIST1 && device <= Z2MINOR_MEMLIST4) {
+			int index = device - Z2MINOR_MEMLIST1 + 1;
+			unsigned long size, paddr, vaddr;
 
-		get_z2ram();
-		get_chipram();
-
-		if ( z2ram_size != 0 )
-		    printk( KERN_INFO DEVICE_NAME 
-			": using %iK Zorro II RAM and %iK Chip RAM (Total %dK)\n",
-			z2_count * Z2RAM_CHUNK1024,
-			chip_count * Z2RAM_CHUNK1024,
-			( z2_count + chip_count ) * Z2RAM_CHUNK1024 );
-
-	    break;
-
-    	    case Z2MINOR_Z2ONLY:
-		z2ram_map = kmalloc( max_z2_map, GFP_KERNEL );
-		if ( z2ram_map == NULL )
-		{
-		    printk( KERN_ERR DEVICE_NAME
-			": cannot get mem for z2ram_map\n" );
-		    goto err_out;
-		}
+			if (index >= m68k_realnum_memory) {
+				printk(KERN_ERR DEVICE_NAME
+				       ": no such entry in z2ram_map\n");
+				goto err_out;
+			}
 
-		get_z2ram();
+			paddr = m68k_memory[index].addr;
+			size = m68k_memory[index].size & ~(Z2RAM_CHUNKSIZE - 1);
 
-		if ( z2ram_size != 0 )
-		    printk( KERN_INFO DEVICE_NAME 
-			": using %iK of Zorro II RAM\n",
-			z2_count * Z2RAM_CHUNK1024 );
+#ifdef __powerpc__
+			/* FIXME: ioremap doesn't build correct memory tables. */
+			{
+				vfree(vmalloc(size));
+			}
 
-	    break;
+			vaddr = (unsigned long)ioremap_wt(paddr, size);
 
-	    case Z2MINOR_CHIPONLY:
-		z2ram_map = kmalloc( max_chip_map, GFP_KERNEL );
-		if ( z2ram_map == NULL )
-		{
-		    printk( KERN_ERR DEVICE_NAME
-			": cannot get mem for z2ram_map\n" );
-		    goto err_out;
+#else
+			vaddr =
+			    (unsigned long)z_remap_nocache_nonser(paddr, size);
+#endif
+			z2ram_map =
+			    kmalloc_array(size / Z2RAM_CHUNKSIZE,
+					  sizeof(z2ram_map[0]), GFP_KERNEL);
+			if (z2ram_map == NULL) {
+				printk(KERN_ERR DEVICE_NAME
+				       ": cannot get mem for z2ram_map\n");
+				goto err_out;
+			}
+
+			while (size) {
+				z2ram_map[z2ram_size++] = vaddr;
+				size -= Z2RAM_CHUNKSIZE;
+				vaddr += Z2RAM_CHUNKSIZE;
+				list_count++;
+			}
+
+			if (z2ram_size != 0)
+				printk(KERN_INFO DEVICE_NAME
+				       ": using %iK List Entry %d Memory\n",
+				       list_count * Z2RAM_CHUNK1024, index);
+		} else
+			switch (device) {
+			case Z2MINOR_COMBINED:
+
+				z2ram_map =
+				    kmalloc(max_z2_map + max_chip_map,
+					    GFP_KERNEL);
+				if (z2ram_map == NULL) {
+					printk(KERN_ERR DEVICE_NAME
+					       ": cannot get mem for z2ram_map\n");
+					goto err_out;
+				}
+
+				get_z2ram();
+				get_chipram();
+
+				if (z2ram_size != 0)
+					printk(KERN_INFO DEVICE_NAME
+					       ": using %iK Zorro II RAM and %iK Chip RAM (Total %dK)\n",
+					       z2_count * Z2RAM_CHUNK1024,
+					       chip_count * Z2RAM_CHUNK1024,
+					       (z2_count +
+						chip_count) * Z2RAM_CHUNK1024);
+
+				break;
+
+			case Z2MINOR_Z2ONLY:
+				z2ram_map = kmalloc(max_z2_map, GFP_KERNEL);
+				if (z2ram_map == NULL) {
+					printk(KERN_ERR DEVICE_NAME
+					       ": cannot get mem for z2ram_map\n");
+					goto err_out;
+				}
+
+				get_z2ram();
+
+				if (z2ram_size != 0)
+					printk(KERN_INFO DEVICE_NAME
+					       ": using %iK of Zorro II RAM\n",
+					       z2_count * Z2RAM_CHUNK1024);
+
+				break;
+
+			case Z2MINOR_CHIPONLY:
+				z2ram_map = kmalloc(max_chip_map, GFP_KERNEL);
+				if (z2ram_map == NULL) {
+					printk(KERN_ERR DEVICE_NAME
+					       ": cannot get mem for z2ram_map\n");
+					goto err_out;
+				}
+
+				get_chipram();
+
+				if (z2ram_size != 0)
+					printk(KERN_INFO DEVICE_NAME
+					       ": using %iK Chip RAM\n",
+					       chip_count * Z2RAM_CHUNK1024);
+
+				break;
+
+			default:
+				rc = -ENODEV;
+				goto err_out;
+
+				break;
+			}
+
+		if (z2ram_size == 0) {
+			printk(KERN_NOTICE DEVICE_NAME
+			       ": no unused ZII/Chip RAM found\n");
+			goto err_out_kfree;
 		}
 
-		get_chipram();
-
-		if ( z2ram_size != 0 )
-		    printk( KERN_INFO DEVICE_NAME 
-			": using %iK Chip RAM\n",
-			chip_count * Z2RAM_CHUNK1024 );
-		    
-	    break;
-
-	    default:
-		rc = -ENODEV;
-		goto err_out;
-	
-	    break;
+		current_device = device;
+		z2ram_size <<= Z2RAM_CHUNKSHIFT;
+		set_capacity(z2ram_gendisk, z2ram_size >> 9);
 	}
 
-	if ( z2ram_size == 0 )
-	{
-	    printk( KERN_NOTICE DEVICE_NAME
-		": no unused ZII/Chip RAM found\n" );
-	    goto err_out_kfree;
-	}
-
-	current_device = device;
-	z2ram_size <<= Z2RAM_CHUNKSHIFT;
-	set_capacity(z2ram_gendisk, z2ram_size >> 9);
-    }
-
-    mutex_unlock(&z2ram_mutex);
-    return 0;
+	mutex_unlock(&z2ram_mutex);
+	return 0;
 
 err_out_kfree:
-    kfree(z2ram_map);
+	kfree(z2ram_map);
 err_out:
-    mutex_unlock(&z2ram_mutex);
-    return rc;
+	mutex_unlock(&z2ram_mutex);
+	return rc;
 }
 
-static void
-z2_release(struct gendisk *disk, fmode_t mode)
+static void z2_release(struct gendisk *disk, fmode_t mode)
 {
-    mutex_lock(&z2ram_mutex);
-    if ( current_device == -1 ) {
-    	mutex_unlock(&z2ram_mutex);
-    	return;
-    }
-    mutex_unlock(&z2ram_mutex);
-    /*
-     * FIXME: unmap memory
-     */
+	mutex_lock(&z2ram_mutex);
+	if (current_device == -1) {
+		mutex_unlock(&z2ram_mutex);
+		return;
+	}
+	mutex_unlock(&z2ram_mutex);
+	/*
+	 * FIXME: unmap memory
+	 */
 }
 
-static const struct block_device_operations z2_fops =
-{
-	.owner		= THIS_MODULE,
-	.open		= z2_open,
-	.release	= z2_release,
+static const struct block_device_operations z2_fops = {
+	.owner = THIS_MODULE,
+	.open = z2_open,
+	.release = z2_release,
 };
 
 static struct kobject *z2_find(dev_t dev, int *part, void *data)
@@ -340,89 +325,83 @@ static struct request_queue *z2_queue;
 static struct blk_mq_tag_set tag_set;
 
 static const struct blk_mq_ops z2_mq_ops = {
-	.queue_rq	= z2_queue_rq,
+	.queue_rq = z2_queue_rq,
 };
 
-static int __init 
-z2_init(void)
+static int __init z2_init(void)
 {
-    int ret;
+	int ret;
 
-    if (!MACH_IS_AMIGA)
-	return -ENODEV;
+	if (!MACH_IS_AMIGA)
+		return -ENODEV;
 
-    ret = -EBUSY;
-    if (register_blkdev(Z2RAM_MAJOR, DEVICE_NAME))
-	goto err;
+	ret = -EBUSY;
+	if (register_blkdev(Z2RAM_MAJOR, DEVICE_NAME))
+		goto err;
 
-    ret = -ENOMEM;
-    z2ram_gendisk = alloc_disk(1);
-    if (!z2ram_gendisk)
-	goto out_disk;
+	ret = -ENOMEM;
+	z2ram_gendisk = alloc_disk(1);
+	if (!z2ram_gendisk)
+		goto out_disk;
 
-    z2_queue = blk_mq_init_sq_queue(&tag_set, &z2_mq_ops, 16,
+	z2_queue = blk_mq_init_sq_queue(&tag_set, &z2_mq_ops, 16,
 					BLK_MQ_F_SHOULD_MERGE);
-    if (IS_ERR(z2_queue)) {
-	ret = PTR_ERR(z2_queue);
-	z2_queue = NULL;
-	goto out_queue;
-    }
+	if (IS_ERR(z2_queue)) {
+		ret = PTR_ERR(z2_queue);
+		z2_queue = NULL;
+		goto out_queue;
+	}
 
-    z2ram_gendisk->major = Z2RAM_MAJOR;
-    z2ram_gendisk->first_minor = 0;
-    z2ram_gendisk->fops = &z2_fops;
-    sprintf(z2ram_gendisk->disk_name, "z2ram");
+	z2ram_gendisk->major = Z2RAM_MAJOR;
+	z2ram_gendisk->first_minor = 0;
+	z2ram_gendisk->fops = &z2_fops;
+	sprintf(z2ram_gendisk->disk_name, "z2ram");
 
-    z2ram_gendisk->queue = z2_queue;
-    add_disk(z2ram_gendisk);
-    blk_register_region(MKDEV(Z2RAM_MAJOR, 0), Z2MINOR_COUNT, THIS_MODULE,
-				z2_find, NULL, NULL);
+	z2ram_gendisk->queue = z2_queue;
+	add_disk(z2ram_gendisk);
+	blk_register_region(MKDEV(Z2RAM_MAJOR, 0), Z2MINOR_COUNT, THIS_MODULE,
+			    z2_find, NULL, NULL);
 
-    return 0;
+	return 0;
 
 out_queue:
-    put_disk(z2ram_gendisk);
+	put_disk(z2ram_gendisk);
 out_disk:
-    unregister_blkdev(Z2RAM_MAJOR, DEVICE_NAME);
+	unregister_blkdev(Z2RAM_MAJOR, DEVICE_NAME);
 err:
-    return ret;
+	return ret;
 }
 
 static void __exit z2_exit(void)
 {
-    int i, j;
-    blk_unregister_region(MKDEV(Z2RAM_MAJOR, 0), Z2MINOR_COUNT);
-    unregister_blkdev(Z2RAM_MAJOR, DEVICE_NAME);
-    del_gendisk(z2ram_gendisk);
-    put_disk(z2ram_gendisk);
-    blk_cleanup_queue(z2_queue);
-    blk_mq_free_tag_set(&tag_set);
-
-    if ( current_device != -1 )
-    {
-	i = 0;
-
-	for ( j = 0 ; j < z2_count; j++ )
-	{
-	    set_bit( i++, zorro_unused_z2ram ); 
-	}
+	int i, j;
+	blk_unregister_region(MKDEV(Z2RAM_MAJOR, 0), Z2MINOR_COUNT);
+	unregister_blkdev(Z2RAM_MAJOR, DEVICE_NAME);
+	del_gendisk(z2ram_gendisk);
+	put_disk(z2ram_gendisk);
+	blk_cleanup_queue(z2_queue);
+	blk_mq_free_tag_set(&tag_set);
+
+	if (current_device != -1) {
+		i = 0;
+
+		for (j = 0; j < z2_count; j++) {
+			set_bit(i++, zorro_unused_z2ram);
+		}
 
-	for ( j = 0 ; j < chip_count; j++ )
-	{
-	    if ( z2ram_map[ i ] )
-	    {
-		amiga_chip_free( (void *) z2ram_map[ i++ ] );
-	    }
-	}
+		for (j = 0; j < chip_count; j++) {
+			if (z2ram_map[i]) {
+				amiga_chip_free((void *)z2ram_map[i++]);
+			}
+		}
 
-	if ( z2ram_map != NULL )
-	{
-	    kfree( z2ram_map );
+		if (z2ram_map != NULL) {
+			kfree(z2ram_map);
+		}
 	}
-    }
 
-    return;
-} 
+	return;
+}
 
 module_init(z2_init);
 module_exit(z2_exit);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28085.56810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8a-0001CO-R8; Mon, 16 Nov 2020 15:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28085.56810; Mon, 16 Nov 2020 15:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8a-0001Bd-59; Mon, 16 Nov 2020 15:09:48 +0000
Received: by outflank-mailman (input) for mailman id 28085;
 Mon, 16 Nov 2020 15:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg4W-0006ni-EB
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c6f1ef5-7ea4-42e8-af8d-bfb879892df4;
 Mon, 16 Nov 2020 15:00:14 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz7-0004KI-Jc; Mon, 16 Nov 2020 15:00:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg4W-0006ni-EB
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:36 +0000
X-Inumbo-ID: 4c6f1ef5-7ea4-42e8-af8d-bfb879892df4
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4c6f1ef5-7ea4-42e8-af8d-bfb879892df4;
	Mon, 16 Nov 2020 15:00:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=YfjTrndpak8c8C7A2Wr6OEv1ltyYeLktzUiwQ2pu+CA=; b=sSbJyd5Z5KD2Jm06fbaragik+M
	U4XXVDpuxMNv4/WjvlIpVZ5hUtJVUjJwX0o6moSI38FP1aoBNjPnqu8o8kZG9K4b+WNesnRErmT0w
	c6uY7DfOfd9C5ydCEyUMR677DxoiIxYLRbGGQfJ2J5j6waIzphSYMTKUR+nE58YtYY8s0sKyBH3Ag
	gi1DVUl5tW1CH54AIrFJbhR7wa+qgoeCULdOKd8s1qQnapCkGCpIt0k3E1/tTPjMNE85mVQKwftyC
	EQD17+/p3VVrTmL/7uawHNQSIdYnwo33SsrBvTjwr6J9LQP9HpQU2Rf1rToCbxKxVr+U/txNCS6he
	amu9O3qQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefz7-0004KI-Jc; Mon, 16 Nov 2020 15:00:02 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 77/78] fs: simplify the get_super_thawed interface
Date: Mon, 16 Nov 2020 15:58:08 +0100
Message-Id: <20201116145809.410558-78-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Merge get_super_thawed and get_super_exclusive_thawed into a single
function.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/quota/quota.c   |  4 ++--
 fs/super.c         | 42 +++++++++++-------------------------------
 include/linux/fs.h |  3 +--
 3 files changed, 14 insertions(+), 35 deletions(-)

diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index 9af95c7a0bbe3c..21d43933213965 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -876,9 +876,9 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	if (IS_ERR(bdev))
 		return ERR_CAST(bdev);
 	if (quotactl_cmd_onoff(cmd))
-		sb = get_super_exclusive_thawed(bdev);
+		sb = get_super_thawed(bdev, true);
 	else if (quotactl_cmd_write(cmd))
-		sb = get_super_thawed(bdev);
+		sb = get_super_thawed(bdev, false);
 	else
 		sb = get_super(bdev);
 	bdput(bdev);
diff --git a/fs/super.c b/fs/super.c
index b327a82bc1946b..50995f8abd1bf1 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -789,8 +789,17 @@ struct super_block *get_super(struct block_device *bdev)
 }
 EXPORT_SYMBOL(get_super);
 
-static struct super_block *__get_super_thawed(struct block_device *bdev,
-					      bool excl)
+/**
+ * get_super_thawed - get thawed superblock of a device
+ * @bdev: device to get the superblock for
+ * @excl: lock s_umount exclusive if %true, else shared.
+ *
+ * Scans the superblock list and finds the superblock of the file system mounted
+ * on the device.  The superblock is returned with s_umount held once it is
+ * thawed (or immediately if it was not frozen), or %NULL if no superblock was
+ * found.
+ */
+struct super_block *get_super_thawed(struct block_device *bdev, bool excl)
 {
 	while (1) {
 		struct super_block *s = __get_super(bdev, excl);
@@ -805,37 +814,8 @@ static struct super_block *__get_super_thawed(struct block_device *bdev,
 		put_super(s);
 	}
 }
-
-/**
- *	get_super_thawed - get thawed superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device. The superblock is returned once it is thawed
- *	(or immediately if it was not frozen). %NULL is returned if no match
- *	is found.
- */
-struct super_block *get_super_thawed(struct block_device *bdev)
-{
-	return __get_super_thawed(bdev, false);
-}
 EXPORT_SYMBOL(get_super_thawed);
 
-/**
- *	get_super_exclusive_thawed - get thawed superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device. The superblock is returned once it is thawed
- *	(or immediately if it was not frozen) and s_umount semaphore is held
- *	in exclusive mode. %NULL is returned if no match is found.
- */
-struct super_block *get_super_exclusive_thawed(struct block_device *bdev)
-{
-	return __get_super_thawed(bdev, true);
-}
-EXPORT_SYMBOL(get_super_exclusive_thawed);
-
 /**
  * get_active_super - get an active reference to the superblock of a device
  * @bdev: device to get the superblock for
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 8667d0cdc71e76..d026d177a526bf 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3132,8 +3132,7 @@ extern struct file_system_type *get_filesystem(struct file_system_type *fs);
 extern void put_filesystem(struct file_system_type *fs);
 extern struct file_system_type *get_fs_type(const char *name);
 extern struct super_block *get_super(struct block_device *);
-extern struct super_block *get_super_thawed(struct block_device *);
-extern struct super_block *get_super_exclusive_thawed(struct block_device *bdev);
+struct super_block *get_super_thawed(struct block_device *bdev, bool excl);
 extern struct super_block *get_active_super(struct block_device *bdev);
 extern void drop_super(struct super_block *sb);
 extern void drop_super_exclusive(struct super_block *sb);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28086.56826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8c-0001Hc-Bv; Mon, 16 Nov 2020 15:09:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28086.56826; Mon, 16 Nov 2020 15:09:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8b-0001Fz-Jl; Mon, 16 Nov 2020 15:09:49 +0000
Received: by outflank-mailman (input) for mailman id 28086;
 Mon, 16 Nov 2020 15:09:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1w-0006ni-7u
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef0b423f-4ff0-415d-834a-9121589f308d;
 Mon, 16 Nov 2020 14:59:32 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyR-00041m-OS; Mon, 16 Nov 2020 14:59:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1w-0006ni-7u
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:56 +0000
X-Inumbo-ID: ef0b423f-4ff0-415d-834a-9121589f308d
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ef0b423f-4ff0-415d-834a-9121589f308d;
	Mon, 16 Nov 2020 14:59:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=utL1e0DdHIY8Fd3lNgkU74rr4MTUuDAaqA4KW/LJ7AI=; b=egIiJi4m0U4Bm5a3xLAsrAucMd
	3Y7Dcxn9qZ2xlM77p9BbRtBKAln3LYpke4xFjJKvrfnjmejTmX21aKv6vHUxAkbNTTH9fw7G54xZE
	Kcbkwh5neJ/WOJ4yHL///KJ52srvgevLbhi3qd1e08YDaRuFP485wS+EsORi6U8MM67Ud6YA6Yo9K
	bekvmhX/aEn35Ce8tzn7JUsepDG8P/sXuIMbKe85k7vlXKh+ZwG8eqa3IAX5tqQQpbDeZHpRvsm4J
	aMOADF6lGFN9v0NVt3sH4VO4MQZpaNVbF7K5D8nyCVbFuFBLe2fOC4iaMbbC11Aoh49rz2wPPgNxC
	Bk30XNJQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyR-00041m-OS; Mon, 16 Nov 2020 14:59:20 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 49/78] ataflop: use a separate gendisk for each media format
Date: Mon, 16 Nov 2020 15:57:40 +0100
Message-Id: <20201116145809.410558-50-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The Atari floppy driver usually autodetects the media when used with the
normal /dev/fd? devices, which are also the only nodes created by udev.
But it also supports various aliases that force a given media format.
That is currently supported using the blk_register_region framework
which finds the floppy gendisk even for a 'mismatched' dev_t.  The
problem with this (besides the code complexity) is that it creates
multiple struct block_device instances for the whole device of a
single gendisk, which can lead to interesting issues in code not
aware of that fact.

To fix this, create a separate gendisk for each of the aliases when
they are first accessed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/ataflop.c | 135 +++++++++++++++++++++++++---------------
 1 file changed, 86 insertions(+), 49 deletions(-)

diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index 3e881fdb06e0ad..104b713f4055af 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -297,7 +297,7 @@ static struct atari_floppy_struct {
 	unsigned int wpstat;	/* current state of WP signal (for
 				   disk change detection) */
 	int flags;		/* flags */
-	struct gendisk *disk;
+	struct gendisk *disk[NUM_DISK_MINORS];
 	int ref;
 	int type;
 	struct blk_mq_tag_set tag_set;
@@ -723,12 +723,16 @@ static void fd_error( void )
 
 static int do_format(int drive, int type, struct atari_format_descr *desc)
 {
-	struct request_queue *q = unit[drive].disk->queue;
+	struct request_queue *q;
 	unsigned char	*p;
 	int sect, nsect;
 	unsigned long	flags;
 	int ret;
 
+	if (type)
+		type--;
+
+	q = unit[drive].disk[type]->queue;
 	blk_mq_freeze_queue(q);
 	blk_mq_quiesce_queue(q);
 
@@ -738,7 +742,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
 	local_irq_restore(flags);
 
 	if (type) {
-		if (--type >= NUM_DISK_MINORS ||
+		if (type >= NUM_DISK_MINORS ||
 		    minor2disktype[type].drive_types > DriveType) {
 			ret = -EINVAL;
 			goto out;
@@ -1154,7 +1158,7 @@ static void fd_rwsec_done1(int status)
 			    if (SUDT[-1].blocks > ReqBlock) {
 				/* try another disk type */
 				SUDT--;
-				set_capacity(unit[SelectedDrive].disk,
+				set_capacity(unit[SelectedDrive].disk[0],
 							SUDT->blocks);
 			    } else
 				Probing = 0;
@@ -1169,7 +1173,7 @@ static void fd_rwsec_done1(int status)
 /* record not found, but not probing. Maybe stretch wrong ? Restart probing */
 			if (SUD.autoprobe) {
 				SUDT = atari_disk_type + StartDiskType[DriveType];
-				set_capacity(unit[SelectedDrive].disk,
+				set_capacity(unit[SelectedDrive].disk[0],
 							SUDT->blocks);
 				Probing = 1;
 			}
@@ -1515,7 +1519,7 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		if (!UDT) {
 			Probing = 1;
 			UDT = atari_disk_type + StartDiskType[DriveType];
-			set_capacity(floppy->disk, UDT->blocks);
+			set_capacity(bd->rq->rq_disk, UDT->blocks);
 			UD.autoprobe = 1;
 		}
 	} 
@@ -1533,7 +1537,7 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		}
 		type = minor2disktype[type].index;
 		UDT = &atari_disk_type[type];
-		set_capacity(floppy->disk, UDT->blocks);
+		set_capacity(bd->rq->rq_disk, UDT->blocks);
 		UD.autoprobe = 0;
 	}
 
@@ -1658,7 +1662,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
 				    printk (KERN_INFO "floppy%d: setting %s %p!\n",
 				        drive, dtp->name, dtp);
 				UDT = dtp;
-				set_capacity(floppy->disk, UDT->blocks);
+				set_capacity(disk, UDT->blocks);
 
 				if (cmd == FDDEFPRM) {
 				  /* save settings as permanent default type */
@@ -1702,7 +1706,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
 			return -EINVAL;
 
 		UDT = dtp;
-		set_capacity(floppy->disk, UDT->blocks);
+		set_capacity(disk, UDT->blocks);
 
 		return 0;
 	case FDMSGON:
@@ -1725,7 +1729,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
 		UDT = NULL;
 		/* MSch: invalidate default_params */
 		default_params[drive].blocks  = 0;
-		set_capacity(floppy->disk, MAX_DISK_SIZE * 2);
+		set_capacity(disk, MAX_DISK_SIZE * 2);
 		fallthrough;
 	case FDFMTEND:
 	case FDFLUSH:
@@ -1962,14 +1966,50 @@ static const struct blk_mq_ops ataflop_mq_ops = {
 	.commit_rqs = ataflop_commit_rqs,
 };
 
-static struct kobject *floppy_find(dev_t dev, int *part, void *data)
+static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
 {
-	int drive = *part & 3;
-	int type  = *part >> 2;
+	struct gendisk *disk;
+	int ret;
+
+	disk = alloc_disk(1);
+	if (!disk)
+		return -ENOMEM;
+
+	disk->queue = blk_mq_init_queue(&unit[drive].tag_set);
+	if (IS_ERR(disk->queue)) {
+		ret = PTR_ERR(disk->queue);
+		disk->queue = NULL;
+		put_disk(disk);
+		return ret;
+	}
+
+	disk->major = FLOPPY_MAJOR;
+	disk->first_minor = drive + (type << 2);
+	sprintf(disk->disk_name, "fd%d", drive);
+	disk->fops = &floppy_fops;
+	disk->events = DISK_EVENT_MEDIA_CHANGE;
+	disk->private_data = &unit[drive];
+	set_capacity(disk, MAX_DISK_SIZE * 2);
+
+	unit[drive].disk[type] = disk;
+	return 0;
+}
+
+static DEFINE_MUTEX(ataflop_probe_lock);
+
+static void ataflop_probe(dev_t dev)
+{
+	int drive = MINOR(dev) & 3;
+	int type  = MINOR(dev) >> 2;
+
 	if (drive >= FD_MAX_UNITS || type > NUM_DISK_MINORS)
-		return NULL;
-	*part = 0;
-	return get_disk_and_module(unit[drive].disk);
+		return;
+	mutex_lock(&ataflop_probe_lock);
+	if (!unit[drive].disk[type]) {
+		if (ataflop_alloc_disk(drive, type) == 0)
+			add_disk(unit[drive].disk[type]);
+	}
+	mutex_unlock(&ataflop_probe_lock);
 }
 
 static int __init atari_floppy_init (void)
@@ -1981,23 +2021,26 @@ static int __init atari_floppy_init (void)
 		/* Amiga, Mac, ... don't have Atari-compatible floppy :-) */
 		return -ENODEV;
 
-	if (register_blkdev(FLOPPY_MAJOR,"fd"))
-		return -EBUSY;
+	mutex_lock(&ataflop_probe_lock);
+	ret = __register_blkdev(FLOPPY_MAJOR, "fd", ataflop_probe);
+	if (ret)
+		goto out_unlock;
 
 	for (i = 0; i < FD_MAX_UNITS; i++) {
-		unit[i].disk = alloc_disk(1);
-		if (!unit[i].disk) {
-			ret = -ENOMEM;
+		memset(&unit[i].tag_set, 0, sizeof(unit[i].tag_set));
+		unit[i].tag_set.ops = &ataflop_mq_ops;
+		unit[i].tag_set.nr_hw_queues = 1;
+		unit[i].tag_set.nr_maps = 1;
+		unit[i].tag_set.queue_depth = 2;
+		unit[i].tag_set.numa_node = NUMA_NO_NODE;
+		unit[i].tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+		ret = blk_mq_alloc_tag_set(&unit[i].tag_set);
+		if (ret)
 			goto err;
-		}
 
-		unit[i].disk->queue = blk_mq_init_sq_queue(&unit[i].tag_set,
-							   &ataflop_mq_ops, 2,
-							   BLK_MQ_F_SHOULD_MERGE);
-		if (IS_ERR(unit[i].disk->queue)) {
-			put_disk(unit[i].disk);
-			ret = PTR_ERR(unit[i].disk->queue);
-			unit[i].disk->queue = NULL;
+		ret = ataflop_alloc_disk(i, 0);
+		if (ret) {
+			blk_mq_free_tag_set(&unit[i].tag_set);
 			goto err;
 		}
 	}
@@ -2027,19 +2070,9 @@ static int __init atari_floppy_init (void)
 	for (i = 0; i < FD_MAX_UNITS; i++) {
 		unit[i].track = -1;
 		unit[i].flags = 0;
-		unit[i].disk->major = FLOPPY_MAJOR;
-		unit[i].disk->first_minor = i;
-		sprintf(unit[i].disk->disk_name, "fd%d", i);
-		unit[i].disk->fops = &floppy_fops;
-		unit[i].disk->events = DISK_EVENT_MEDIA_CHANGE;
-		unit[i].disk->private_data = &unit[i];
-		set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
-		add_disk(unit[i].disk);
+		add_disk(unit[i].disk[0]);
 	}
 
-	blk_register_region(MKDEV(FLOPPY_MAJOR, 0), 256, THIS_MODULE,
-				floppy_find, NULL, NULL);
-
 	printk(KERN_INFO "Atari floppy driver: max. %cD, %strack buffering\n",
 	       DriveType == 0 ? 'D' : DriveType == 1 ? 'H' : 'E',
 	       UseTrackbuffer ? "" : "no ");
@@ -2049,14 +2082,14 @@ static int __init atari_floppy_init (void)
 
 err:
 	while (--i >= 0) {
-		struct gendisk *disk = unit[i].disk;
-
-		blk_cleanup_queue(disk->queue);
+		blk_cleanup_queue(unit[i].disk[0]->queue);
+		put_disk(unit[i].disk[0]);
 		blk_mq_free_tag_set(&unit[i].tag_set);
-		put_disk(unit[i].disk);
 	}
 
 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+out_unlock:
+	mutex_unlock(&ataflop_probe_lock);
 	return ret;
 }
 
@@ -2101,13 +2134,17 @@ __setup("floppy=", atari_floppy_setup);
 
 static void __exit atari_floppy_exit(void)
 {
-	int i;
-	blk_unregister_region(MKDEV(FLOPPY_MAJOR, 0), 256);
+	int i, type;
+
 	for (i = 0; i < FD_MAX_UNITS; i++) {
-		del_gendisk(unit[i].disk);
-		blk_cleanup_queue(unit[i].disk->queue);
+		for (type = 0; type < NUM_DISK_MINORS; type++) {
+			if (!unit[i].disk[type])
+				continue;
+			del_gendisk(unit[i].disk[type]);
+			blk_cleanup_queue(unit[i].disk[type]->queue);
+			put_disk(unit[i].disk[type]);
+		}
 		blk_mq_free_tag_set(&unit[i].tag_set);
-		put_disk(unit[i].disk);
 	}
 	unregister_blkdev(FLOPPY_MAJOR, "fd");
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28088.56835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8e-0001Lt-5E; Mon, 16 Nov 2020 15:09:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28088.56835; Mon, 16 Nov 2020 15:09:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8d-0001Ks-2d; Mon, 16 Nov 2020 15:09:51 +0000
Received: by outflank-mailman (input) for mailman id 28088;
 Mon, 16 Nov 2020 15:09:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg4b-0006ni-EQ
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17aa412c-5bc9-4945-8436-6e752e1e5445;
 Mon, 16 Nov 2020 15:00:18 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz9-0004L7-5x; Mon, 16 Nov 2020 15:00:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg4b-0006ni-EQ
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:41 +0000
X-Inumbo-ID: 17aa412c-5bc9-4945-8436-6e752e1e5445
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 17aa412c-5bc9-4945-8436-6e752e1e5445;
	Mon, 16 Nov 2020 15:00:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=qT96BeHilVft1JkG10iCrLJR6muck82GuXTsF+6HaBs=; b=HgOm4x8yBW2tnGE4ZX83eavFQ1
	f+qiZeutF3C4x7tuDGFh++e5/QiXl2bHAz05M5M3dUddIxY+Y/evoUTbHl2ZIlekRUKqyhE5E0PVX
	hMFXVk/Jptmtqg7zb6DuiwaXED2QI0Al/Pw8e16nbsMp0jFR4NBY6hKDRfpALL6DpKxh5ZyNmlQb4
	dTi19AUznD+xt+W/9664dH7K0sC5bsfYZaGYXpE4CzAaURhA3tKbMezxtPpka8je7m82AxuOo9z0k
	0qEQF/Fuu296FLCSW86j5ha3QmhlH1fFdD03H8DB5ds5kjLX5SbWCnFvDV9qvWn2BOewIEOKsLbJW
	BVLz5ydg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefz9-0004L7-5x; Mon, 16 Nov 2020 15:00:03 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 78/78] block: remove i_bdev
Date: Mon, 16 Nov 2020 15:58:09 +0100
Message-Id: <20201116145809.410558-79-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Switch the block device lookup interfaces to directly work with a dev_t
so that struct block_device references are only acquired by the
blkdev_get variants (and the blk-cgroup special case).  This means that
we now don't need an extra reference in the inode.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c                                |   3 +-
 drivers/block/loop.c                         |   8 +-
 drivers/md/dm-table.c                        |   9 +-
 drivers/mtd/mtdsuper.c                       |  17 +-
 drivers/target/target_core_file.c            |   6 +-
 drivers/usb/gadget/function/storage_common.c |   8 +-
 fs/block_dev.c                               | 206 +++++--------------
 fs/btrfs/volumes.c                           |  13 +-
 fs/inode.c                                   |   3 -
 fs/internal.h                                |   6 +-
 fs/io_uring.c                                |   2 +-
 fs/pipe.c                                    |   5 +-
 fs/quota/quota.c                             |  31 +--
 fs/statfs.c                                  |   2 +-
 fs/super.c                                   |  63 ++----
 include/linux/blkdev.h                       |   2 +-
 include/linux/fs.h                           |   4 +-
 17 files changed, 114 insertions(+), 274 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 7207b716b6c9a7..39341409927607 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -602,8 +602,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
 {
 	int ret;
 	void __user *argp = compat_ptr(arg);
-	struct inode *inode = file->f_mapping->host;
-	struct block_device *bdev = inode->i_bdev;
+	struct block_device *bdev = I_BDEV(file->f_mapping->host);
 	struct gendisk *disk = bdev->bd_disk;
 	fmode_t mode = file->f_mode;
 	loff_t size;
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 91e47c5b52f1cb..4a0037586f93b2 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -675,10 +675,10 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
 	while (is_loop_device(f)) {
 		struct loop_device *l;
 
-		if (f->f_mapping->host->i_bdev == bdev)
+		if (f->f_mapping->host->i_rdev == bdev->bd_dev)
 			return -EBADF;
 
-		l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+		l = I_BDEV(f->f_mapping->host)->bd_disk->private_data;
 		if (l->lo_state != Lo_bound) {
 			return -EINVAL;
 		}
@@ -885,9 +885,7 @@ static void loop_config_discard(struct loop_device *lo)
 	 * file-backed loop devices: discarded regions read back as zero.
 	 */
 	if (S_ISBLK(inode->i_mode) && !lo->lo_encrypt_key_size) {
-		struct request_queue *backingq;
-
-		backingq = bdev_get_queue(inode->i_bdev);
+		struct request_queue *backingq = bdev_get_queue(I_BDEV(inode));
 
 		max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
 		granularity = backingq->limits.discard_granularity ?:
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index ce543b761be7b2..dea67772171053 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -348,16 +348,9 @@ static int upgrade_mode(struct dm_dev_internal *dd, fmode_t new_mode,
 dev_t dm_get_dev_t(const char *path)
 {
 	dev_t dev;
-	struct block_device *bdev;
 
-	bdev = lookup_bdev(path);
-	if (IS_ERR(bdev))
+	if (lookup_bdev(path, &dev))
 		dev = name_to_dev_t(path);
-	else {
-		dev = bdev->bd_dev;
-		bdput(bdev);
-	}
-
 	return dev;
 }
 EXPORT_SYMBOL_GPL(dm_get_dev_t);
diff --git a/drivers/mtd/mtdsuper.c b/drivers/mtd/mtdsuper.c
index c3e2098372f2e5..38b6aa849c6383 100644
--- a/drivers/mtd/mtdsuper.c
+++ b/drivers/mtd/mtdsuper.c
@@ -120,8 +120,8 @@ int get_tree_mtd(struct fs_context *fc,
 				struct fs_context *fc))
 {
 #ifdef CONFIG_BLOCK
-	struct block_device *bdev;
-	int ret, major;
+	dev_t dev;
+	int ret;
 #endif
 	int mtdnr;
 
@@ -169,20 +169,15 @@ int get_tree_mtd(struct fs_context *fc,
 	/* try the old way - the hack where we allowed users to mount
 	 * /dev/mtdblock$(n) but didn't actually _use_ the blockdev
 	 */
-	bdev = lookup_bdev(fc->source);
-	if (IS_ERR(bdev)) {
-		ret = PTR_ERR(bdev);
+	ret = lookup_bdev(fc->source, &dev);
+	if (ret) {
 		errorf(fc, "MTD: Couldn't look up '%s': %d", fc->source, ret);
 		return ret;
 	}
 	pr_debug("MTDSB: lookup_bdev() returned 0\n");
 
-	major = MAJOR(bdev->bd_dev);
-	mtdnr = MINOR(bdev->bd_dev);
-	bdput(bdev);
-
-	if (major == MTD_BLOCK_MAJOR)
-		return mtd_get_sb_by_nr(fc, mtdnr, fill_super);
+	if (MAJOR(dev) == MTD_BLOCK_MAJOR)
+		return mtd_get_sb_by_nr(fc, MINOR(dev), fill_super);
 
 #endif /* CONFIG_BLOCK */
 
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 7143d03f0e027e..b0cb5b95e892d3 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -133,10 +133,10 @@ static int fd_configure_device(struct se_device *dev)
 	 */
 	inode = file->f_mapping->host;
 	if (S_ISBLK(inode->i_mode)) {
-		struct request_queue *q = bdev_get_queue(inode->i_bdev);
+		struct request_queue *q = bdev_get_queue(I_BDEV(inode));
 		unsigned long long dev_size;
 
-		fd_dev->fd_block_size = bdev_logical_block_size(inode->i_bdev);
+		fd_dev->fd_block_size = bdev_logical_block_size(I_BDEV(inode));
 		/*
 		 * Determine the number of bytes from i_size_read() minus
 		 * one (1) logical sector from underlying struct block_device
@@ -559,7 +559,7 @@ fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 
 	if (S_ISBLK(inode->i_mode)) {
 		/* The backend is block device, use discard */
-		struct block_device *bdev = inode->i_bdev;
+		struct block_device *bdev = I_BDEV(inode);
 		struct se_device *dev = cmd->se_dev;
 
 		ret = blkdev_issue_discard(bdev,
diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c
index f7e6c42558eb76..b859a158a4140e 100644
--- a/drivers/usb/gadget/function/storage_common.c
+++ b/drivers/usb/gadget/function/storage_common.c
@@ -204,7 +204,7 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (!(filp->f_mode & FMODE_WRITE))
 		ro = 1;
 
-	inode = file_inode(filp);
+	inode = filp->f_mapping->host;
 	if ((!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))) {
 		LINFO(curlun, "invalid file type: %s\n", filename);
 		goto out;
@@ -221,7 +221,7 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (!(filp->f_mode & FMODE_CAN_WRITE))
 		ro = 1;
 
-	size = i_size_read(inode->i_mapping->host);
+	size = i_size_read(inode);
 	if (size < 0) {
 		LINFO(curlun, "unable to find file size: %s\n", filename);
 		rc = (int) size;
@@ -231,8 +231,8 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (curlun->cdrom) {
 		blksize = 2048;
 		blkbits = 11;
-	} else if (inode->i_bdev) {
-		blksize = bdev_logical_block_size(inode->i_bdev);
+	} else if (S_ISBLK(inode->i_mode)) {
+		blksize = bdev_logical_block_size(I_BDEV(inode));
 		blkbits = blksize_bits(blksize);
 	} else {
 		blksize = 512;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index e1457bf76c6f34..6b43ee6ee571df 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -523,7 +523,7 @@ EXPORT_SYMBOL(sync_blockdev);
  */
 int fsync_bdev(struct block_device *bdev)
 {
-	struct super_block *sb = get_super(bdev);
+	struct super_block *sb = get_super(bdev->bd_dev, false);
 	if (sb) {
 		int res = sync_filesystem(sb);
 		drop_super(sb);
@@ -557,7 +557,7 @@ struct super_block *freeze_bdev(struct block_device *bdev)
 		 * to freeze_bdev grab an active reference and only the last
 		 * thaw_bdev drops it.
 		 */
-		sb = get_super(bdev);
+		sb = get_super(bdev->bd_dev, false);
 		if (sb)
 			drop_super(sb);
 		mutex_unlock(&bdev->bd_fsfreeze_mutex);
@@ -890,7 +890,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 
 	inode->i_mode = S_IFBLK;
 	inode->i_rdev = 0;
-	inode->i_bdev = bdev;
 	inode->i_data.a_ops = &def_blk_aops;
 
 	return bdev;
@@ -942,71 +941,8 @@ void bdput(struct block_device *bdev)
 {
 	iput(bdev->bd_inode);
 }
-
 EXPORT_SYMBOL(bdput);
  
-static struct block_device *bd_acquire(struct inode *inode)
-{
-	struct block_device *bdev;
-
-	spin_lock(&bdev_lock);
-	bdev = inode->i_bdev;
-	if (bdev && !inode_unhashed(bdev->bd_inode)) {
-		bdgrab(bdev);
-		spin_unlock(&bdev_lock);
-		return bdev;
-	}
-	spin_unlock(&bdev_lock);
-
-	/*
-	 * i_bdev references block device inode that was already shut down
-	 * (corresponding device got removed).  Remove the reference and look
-	 * up block device inode again just in case new device got
-	 * reestablished under the same device number.
-	 */
-	if (bdev)
-		bd_forget(inode);
-
-	bdev = bdget(inode->i_rdev);
-	if (!bdev) {
-		blk_request_module(inode->i_rdev);
-		bdev = bdget(inode->i_rdev);
-	}
-	if (bdev) {
-		spin_lock(&bdev_lock);
-		if (!inode->i_bdev) {
-			/*
-			 * We take an additional reference to bd_inode,
-			 * and it's released in clear_inode() of inode.
-			 * So, we can access it via ->i_mapping always
-			 * without igrab().
-			 */
-			bdgrab(bdev);
-			inode->i_bdev = bdev;
-			inode->i_mapping = bdev->bd_inode->i_mapping;
-		}
-		spin_unlock(&bdev_lock);
-	}
-	return bdev;
-}
-
-/* Call when you free inode */
-
-void bd_forget(struct inode *inode)
-{
-	struct block_device *bdev = NULL;
-
-	spin_lock(&bdev_lock);
-	if (!sb_is_blkdev_sb(inode->i_sb))
-		bdev = inode->i_bdev;
-	inode->i_bdev = NULL;
-	inode->i_mapping = &inode->i_data;
-	spin_unlock(&bdev_lock);
-
-	if (bdev)
-		bdput(bdev);
-}
-
 /**
  * bd_may_claim - test whether a block device can be claimed
  * @bdev: block device of interest
@@ -1485,32 +1421,44 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 }
 
 /**
- * blkdev_get - open a block device
- * @bdev: block_device to open
+ * blkdev_get_by_dev - open a block device by device number
+ * @dev: device number of block device to open
  * @mode: FMODE_* mask
  * @holder: exclusive holder identifier
  *
- * Open @bdev with @mode.  If @mode includes %FMODE_EXCL, @bdev is
- * open with exclusive access.  Specifying %FMODE_EXCL with %NULL
- * @holder is invalid.  Exclusive opens may nest for the same @holder.
+ * Open the block device described by device number @dev.  If @mode includes
+ * %FMODE_EXCL, the block device is opened with exclusive access.  Specifying
+ * %FMODE_EXCL with a %NULL @holder is invalid.  Exclusive
+ * opens may nest for the same @holder.
  *
- * On success, the reference count of @bdev is unchanged.  On failure,
- * @bdev is put.
+ * Use this interface ONLY if you really do not have anything better - i.e. when
+ * you are behind a truly sucky interface and all you are given is a device
+ * number.  Everything else should use blkdev_get_by_path().
  *
  * CONTEXT:
  * Might sleep.
  *
  * RETURNS:
- * 0 on success, -errno on failure.
+ * Reference to the block_device on success, ERR_PTR(-errno) on failure.
  */
-static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
+struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 {
+	struct block_device *bdev;
 	int ret, perm = 0;
 
 	if (mode & FMODE_READ)
 		perm |= MAY_READ;
 	if (mode & FMODE_WRITE)
 		perm |= MAY_WRITE;
+
+	bdev = bdget(dev);
+	if (!bdev) {
+		blk_request_module(dev);
+		bdev = bdget(dev);
+		if (!bdev)
+			return ERR_PTR(-ENOMEM);
+	}
+
 	ret = devcgroup_inode_permission(bdev->bd_inode, perm);
 	if (ret)
 		goto bdput;
@@ -1522,8 +1470,9 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
 
 bdput:
 	bdput(bdev);
-	return ret;
+	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL(blkdev_get_by_dev);
 
 /**
  * blkdev_get_by_path - open a block device by name
@@ -1531,32 +1480,31 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
  * @mode: FMODE_* mask
  * @holder: exclusive holder identifier
  *
- * Open the blockdevice described by the device file at @path.  @mode
- * and @holder are identical to blkdev_get().
+ * Open the block device described by the device file at @path.
  *
- * On success, the returned block_device has reference count of one.
+ * If @mode includes %FMODE_EXCL, the block device is opened with exclusive
+ * access.  Specifying %FMODE_EXCL with a %NULL @holder is invalid.  Exclusive
+ * opens may nest for the same @holder.
  *
  * CONTEXT:
  * Might sleep.
  *
  * RETURNS:
- * Pointer to block_device on success, ERR_PTR(-errno) on failure.
+ * Reference to the block_device on success, ERR_PTR(-errno) on failure.
  */
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 					void *holder)
 {
 	struct block_device *bdev;
-	int err;
-
-	bdev = lookup_bdev(path);
-	if (IS_ERR(bdev))
-		return bdev;
+	dev_t dev;
+	int error;
 
-	err = blkdev_get(bdev, mode, holder);
-	if (err)
-		return ERR_PTR(err);
+	error = lookup_bdev(path, &dev);
+	if (error)
+		return ERR_PTR(error);
 
-	if ((mode & FMODE_WRITE) && bdev_read_only(bdev)) {
+	bdev = blkdev_get_by_dev(dev, mode, holder);
+	if (!IS_ERR(bdev) && (mode & FMODE_WRITE) && bdev_read_only(bdev)) {
 		blkdev_put(bdev, mode);
 		return ERR_PTR(-EACCES);
 	}
@@ -1565,49 +1513,6 @@ struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 }
 EXPORT_SYMBOL(blkdev_get_by_path);
 
-/**
- * blkdev_get_by_dev - open a block device by device number
- * @dev: device number of block device to open
- * @mode: FMODE_* mask
- * @holder: exclusive holder identifier
- *
- * Open the blockdevice described by device number @dev.  @mode and
- * @holder are identical to blkdev_get().
- *
- * Use it ONLY if you really do not have anything better - i.e. when
- * you are behind a truly sucky interface and all you are given is a
- * device number.  _Never_ to be used for internal purposes.  If you
- * ever need it - reconsider your API.
- *
- * On success, the returned block_device has reference count of one.
- *
- * CONTEXT:
- * Might sleep.
- *
- * RETURNS:
- * Pointer to block_device on success, ERR_PTR(-errno) on failure.
- */
-struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
-{
-	struct block_device *bdev;
-	int err;
-
-	bdev = bdget(dev);
-	if (!bdev) {
-		blk_request_module(dev);
-		bdev = bdget(dev);
-	}
-	if (!bdev)
-		return ERR_PTR(-ENOMEM);
-
-	err = blkdev_get(bdev, mode, holder);
-	if (err)
-		return ERR_PTR(err);
-
-	return bdev;
-}
-EXPORT_SYMBOL(blkdev_get_by_dev);
-
 static int blkdev_open(struct inode * inode, struct file * filp)
 {
 	struct block_device *bdev;
@@ -1629,14 +1534,12 @@ static int blkdev_open(struct inode * inode, struct file * filp)
 	if ((filp->f_flags & O_ACCMODE) == 3)
 		filp->f_mode |= FMODE_WRITE_IOCTL;
 
-	bdev = bd_acquire(inode);
-	if (bdev == NULL)
-		return -ENOMEM;
-
+	bdev = blkdev_get_by_dev(inode->i_rdev, filp->f_mode, filp);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 	filp->f_mapping = bdev->bd_inode->i_mapping;
 	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
-
-	return blkdev_get(bdev, filp->f_mode, filp);
+	return 0;
 }
 
 static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
@@ -1939,43 +1842,38 @@ const struct file_operations def_blk_fops = {
  * namespace if possible and return it.  Return ERR_PTR(error)
  * otherwise.
  */
-struct block_device *lookup_bdev(const char *pathname)
+int lookup_bdev(const char *pathname, dev_t *dev)
 {
-	struct block_device *bdev;
 	struct inode *inode;
 	struct path path;
 	int error;
 
 	if (!pathname || !*pathname)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
 	error = kern_path(pathname, LOOKUP_FOLLOW, &path);
 	if (error)
-		return ERR_PTR(error);
+		return error;
 
 	inode = d_backing_inode(path.dentry);
 	error = -ENOTBLK;
 	if (!S_ISBLK(inode->i_mode))
-		goto fail;
+		goto out_path_put;
 	error = -EACCES;
 	if (!may_open_dev(&path))
-		goto fail;
-	error = -ENOMEM;
-	bdev = bd_acquire(inode);
-	if (!bdev)
-		goto fail;
-out:
+		goto out_path_put;
+
+	*dev = inode->i_rdev;
+	error = 0;
+out_path_put:
 	path_put(&path);
-	return bdev;
-fail:
-	bdev = ERR_PTR(error);
-	goto out;
+	return error;
 }
 EXPORT_SYMBOL(lookup_bdev);
 
 int __invalidate_device(struct block_device *bdev, bool kill_dirty)
 {
-	struct super_block *sb = get_super(bdev);
+	struct super_block *sb = get_super(bdev->bd_dev, false);
 	int res = 0;
 
 	if (sb) {
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index ce43732f945f45..76dedfcbd03716 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -929,16 +929,16 @@ static noinline struct btrfs_device *device_list_add(const char *path,
 		 * make sure it's the same device if the device is mounted
 		 */
 		if (device->bdev) {
-			struct block_device *path_bdev;
+			int error;
+			dev_t path_dev;
 
-			path_bdev = lookup_bdev(path);
-			if (IS_ERR(path_bdev)) {
+			error = lookup_bdev(path, &path_dev);
+			if (error) {
 				mutex_unlock(&fs_devices->device_list_mutex);
-				return ERR_CAST(path_bdev);
+				return ERR_PTR(error);
 			}
 
-			if (device->bdev != path_bdev) {
-				bdput(path_bdev);
+			if (device->bdev->bd_dev != path_dev) {
 				mutex_unlock(&fs_devices->device_list_mutex);
 				btrfs_warn_in_rcu(device->fs_info,
 	"duplicate device %s devid %llu generation %llu scanned by %s (%d)",
@@ -947,7 +947,6 @@ static noinline struct btrfs_device *device_list_add(const char *path,
 						  task_pid_nr(current));
 				return ERR_PTR(-EEXIST);
 			}
-			bdput(path_bdev);
 			btrfs_info_in_rcu(device->fs_info,
 	"devid %llu device path %s changed to %s scanned by %s (%d)",
 					  devid, rcu_str_deref(device->name),
diff --git a/fs/inode.c b/fs/inode.c
index 9d78c37b00b817..cb008acf0efdb8 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -155,7 +155,6 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
 	inode->i_bytes = 0;
 	inode->i_generation = 0;
 	inode->i_pipe = NULL;
-	inode->i_bdev = NULL;
 	inode->i_cdev = NULL;
 	inode->i_link = NULL;
 	inode->i_dir_seq = 0;
@@ -580,8 +579,6 @@ static void evict(struct inode *inode)
 		truncate_inode_pages_final(&inode->i_data);
 		clear_inode(inode);
 	}
-	if (S_ISBLK(inode->i_mode) && inode->i_bdev)
-		bd_forget(inode);
 	if (S_ISCHR(inode->i_mode) && inode->i_cdev)
 		cd_forget(inode);
 
diff --git a/fs/internal.h b/fs/internal.h
index a7cd0f64faa4ab..36f87f0ac4f969 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -25,7 +25,6 @@ extern void __init bdev_cache_init(void);
 extern int __sync_blockdev(struct block_device *bdev, int wait);
 void iterate_bdevs(void (*)(struct block_device *, void *), void *);
 void emergency_thaw_bdev(struct super_block *sb);
-void bd_forget(struct inode *inode);
 #else
 static inline void bdev_cache_init(void)
 {
@@ -43,9 +42,6 @@ static inline int emergency_thaw_bdev(struct super_block *sb)
 {
 	return 0;
 }
-static inline void bd_forget(struct inode *inode)
-{
-}
 #endif /* CONFIG_BLOCK */
 
 /*
@@ -114,7 +110,7 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
  */
 extern int reconfigure_super(struct fs_context *);
 extern bool trylock_super(struct super_block *sb);
-extern struct super_block *user_get_super(dev_t);
+struct super_block *get_super(dev_t, bool excl);
 extern bool mount_capable(struct fs_context *);
 
 /*
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4ead291b2976f3..84d2fae8518471 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2733,7 +2733,7 @@ static bool io_file_supports_async(struct file *file, int rw)
 	umode_t mode = file_inode(file)->i_mode;
 
 	if (S_ISBLK(mode)) {
-		if (io_bdev_nowait(file->f_inode->i_bdev))
+		if (io_bdev_nowait(I_BDEV(file->f_mapping->host)))
 			return true;
 		return false;
 	}
diff --git a/fs/pipe.c b/fs/pipe.c
index 0ac197658a2d6e..c5989cfd564d45 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -1342,9 +1342,8 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
 }
 
 /*
- * After the inode slimming patch, i_pipe/i_bdev/i_cdev share the same
- * location, so checking ->i_pipe is not enough to verify that this is a
- * pipe.
+ * Note that i_pipe and i_cdev share the same location, so checking ->i_pipe is
+ * not enough to verify that this is a pipe.
  */
 struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice)
 {
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index 21d43933213965..3087225b90880c 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -20,6 +20,7 @@
 #include <linux/writeback.h>
 #include <linux/nospec.h>
 #include "compat.h"
+#include "../internal.h"
 
 static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
 				     qid_t id)
@@ -864,31 +865,31 @@ static bool quotactl_cmd_onoff(int cmd)
  */
 static struct super_block *quotactl_block(const char __user *special, int cmd)
 {
-#ifdef CONFIG_BLOCK
-	struct block_device *bdev;
 	struct super_block *sb;
-	struct filename *tmp = getname(special);
+	struct filename *tmp;
+	int error;
+	dev_t dev;
+
+	if (!IS_ENABLED(CONFIG_BLOCK))
+		return ERR_PTR(-ENODEV);
 
+	tmp = getname(special);
 	if (IS_ERR(tmp))
 		return ERR_CAST(tmp);
-	bdev = lookup_bdev(tmp->name);
-	putname(tmp);
-	if (IS_ERR(bdev))
-		return ERR_CAST(bdev);
+	error = lookup_bdev(tmp->name, &dev);
+	if (error)
+		return ERR_PTR(error);
+
 	if (quotactl_cmd_onoff(cmd))
-		sb = get_super_thawed(bdev, true);
+		sb = get_super_thawed(dev, true);
 	else if (quotactl_cmd_write(cmd))
-		sb = get_super_thawed(bdev, false);
+		sb = get_super_thawed(dev, false);
 	else
-		sb = get_super(bdev);
-	bdput(bdev);
+		sb = get_super(dev, false);
+
 	if (!sb)
 		return ERR_PTR(-ENODEV);
-
 	return sb;
-#else
-	return ERR_PTR(-ENODEV);
-#endif
 }
 
 /*
diff --git a/fs/statfs.c b/fs/statfs.c
index 59f33752c1311f..52230a9814337a 100644
--- a/fs/statfs.c
+++ b/fs/statfs.c
@@ -235,7 +235,7 @@ SYSCALL_DEFINE3(fstatfs64, unsigned int, fd, size_t, sz, struct statfs64 __user
 
 static int vfs_ustat(dev_t dev, struct kstatfs *sbuf)
 {
-	struct super_block *s = user_get_super(dev);
+	struct super_block *s = get_super(dev, false);
 	int err;
 	if (!s)
 		return -EINVAL;
diff --git a/fs/super.c b/fs/super.c
index 50995f8abd1bf1..ffc16e4eee99c4 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -740,19 +740,25 @@ void iterate_supers_type(struct file_system_type *type,
 
 EXPORT_SYMBOL(iterate_supers_type);
 
-static struct super_block *__get_super(struct block_device *bdev, bool excl)
+/**
+ * get_super - get the superblock of a device
+ * @dev: device to get the superblock for
+ * @excl: lock s_umount exclusive if %true, else shared.
+ *
+ * Scans the superblock list and finds the superblock of the file system mounted
+ * on the device.  The superblock is returned with s_umount held, or %NULL if no
+ * superblock was found.
+ */
+struct super_block *get_super(dev_t dev, bool excl)
 {
 	struct super_block *sb;
 
-	if (!bdev)
-		return NULL;
-
 	spin_lock(&sb_lock);
 rescan:
 	list_for_each_entry(sb, &super_blocks, s_list) {
 		if (hlist_unhashed(&sb->s_instances))
 			continue;
-		if (sb->s_bdev == bdev) {
+		if (sb->s_dev == dev) {
 			sb->s_count++;
 			spin_unlock(&sb_lock);
 			if (!excl)
@@ -776,22 +782,9 @@ static struct super_block *__get_super(struct block_device *bdev, bool excl)
 	return NULL;
 }
 
-/**
- *	get_super - get the superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device given. %NULL is returned if no match is found.
- */
-struct super_block *get_super(struct block_device *bdev)
-{
-	return __get_super(bdev, false);
-}
-EXPORT_SYMBOL(get_super);
-
 /**
  * get_super_thawed - get thawed superblock of a device
- * @bdev: device to get the superblock for
+ * @dev: device to get the superblock for
  * @excl: lock s_umount exclusive if %true, else shared.
  *
  * Scans the superblock list and finds the superblock of the file system mounted
@@ -799,10 +792,11 @@ EXPORT_SYMBOL(get_super);
  * thawed (or immediately if it was not frozen), or %NULL if no superblock was
  * found.
  */
-struct super_block *get_super_thawed(struct block_device *bdev, bool excl)
+struct super_block *get_super_thawed(dev_t dev, bool excl)
 {
 	while (1) {
-		struct super_block *s = __get_super(bdev, excl);
+		struct super_block *s = get_super(dev, excl);
+
 		if (!s || s->s_writers.frozen == SB_UNFROZEN)
 			return s;
 		if (!excl)
@@ -847,33 +841,6 @@ struct super_block *get_active_super(struct block_device *bdev)
 	return NULL;
 }
 
-struct super_block *user_get_super(dev_t dev)
-{
-	struct super_block *sb;
-
-	spin_lock(&sb_lock);
-rescan:
-	list_for_each_entry(sb, &super_blocks, s_list) {
-		if (hlist_unhashed(&sb->s_instances))
-			continue;
-		if (sb->s_dev ==  dev) {
-			sb->s_count++;
-			spin_unlock(&sb_lock);
-			down_read(&sb->s_umount);
-			/* still alive? */
-			if (sb->s_root && (sb->s_flags & SB_BORN))
-				return sb;
-			up_read(&sb->s_umount);
-			/* nope, got unmounted */
-			spin_lock(&sb_lock);
-			__put_super(sb);
-			goto rescan;
-		}
-	}
-	spin_unlock(&sb_lock);
-	return NULL;
-}
-
 /**
  * reconfigure_super - asks filesystem to change superblock parameters
  * @fc: The superblock and configuration
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ed40144ab80339..9dc44f1ae22bb1 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1973,7 +1973,7 @@ int bdev_read_only(struct block_device *bdev);
 int set_blocksize(struct block_device *bdev, int size);
 
 const char *bdevname(struct block_device *bdev, char *buffer);
-struct block_device *lookup_bdev(const char *);
+int lookup_bdev(const char *pathname, dev_t *dev);
 
 void blkdev_show(struct seq_file *seqf, off_t offset);
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index d026d177a526bf..bd16b8ad5dde32 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -696,7 +696,6 @@ struct inode {
 	struct list_head	i_devices;
 	union {
 		struct pipe_inode_info	*i_pipe;
-		struct block_device	*i_bdev;
 		struct cdev		*i_cdev;
 		char			*i_link;
 		unsigned		i_dir_seq;
@@ -3131,8 +3130,7 @@ extern int vfs_readlink(struct dentry *, char __user *, int);
 extern struct file_system_type *get_filesystem(struct file_system_type *fs);
 extern void put_filesystem(struct file_system_type *fs);
 extern struct file_system_type *get_fs_type(const char *name);
-extern struct super_block *get_super(struct block_device *);
-struct super_block *get_super_thawed(struct block_device *bdev, bool excl);
+struct super_block *get_super_thawed(dev_t dev, bool excl);
 extern struct super_block *get_active_super(struct block_device *bdev);
 extern void drop_super(struct super_block *sb);
 extern void drop_super_exclusive(struct super_block *sb);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28089.56846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8f-0001PU-A9; Mon, 16 Nov 2020 15:09:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28089.56846; Mon, 16 Nov 2020 15:09:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8e-0001O4-96; Mon, 16 Nov 2020 15:09:52 +0000
Received: by outflank-mailman (input) for mailman id 28089;
 Mon, 16 Nov 2020 15:09:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3Y-0006ni-CH
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 847b40c8-3320-409a-8c7f-b3b393f00d6d;
 Mon, 16 Nov 2020 14:59:59 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyr-0004Ee-O2; Mon, 16 Nov 2020 14:59:46 +0000
X-Inumbo-ID: 847b40c8-3320-409a-8c7f-b3b393f00d6d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Gm+iOoQW3SjgzjWt8hYC3cPHjmK1EJUCm30vp5R/rzw=; b=Wa8Kr8gRYlOXU5s7FM3KZMbxGx
	k+vvQ6LYafP8kEY026O5H/FQvU9OL31CSMB7WRAZSn0YfoO8GkSE7EtyFKujbR3S9taVnvCiYcS7Z
	c869fHb6jF51swhtCPUvpafZWJvimoO1/RrjEZTpfPSpdsicpmJpX5nsBFxyhUNUF/toexTt8BFw1
	BWItLElI+E4eRDDYKJAqRDVRp7BoyzzXlOtBai3pOt9iB3wHZxh3QpcQX5etGJeIM2EHYjMeYtAiX
	HHK1s9guu0S32VNgB9FTzD185Ysvj6WT9kmxYVHId+6CkBi/UmoyyYLyyB468HuG06dSZoa7QjoiG
	ED6i/tGQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 66/78] block: keep a block_device reference for each hd_struct
Date: Mon, 16 Nov 2020 15:57:57 +0100
Message-Id: <20201116145809.410558-67-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

To simplify block device lookup and a few other upcoming areas, make sure
that we always have a struct block_device available for each disk and
each partition.  The only downside of this is that each device and
partition uses a little more memory.  The upside will be that a lot of
code can be simplified.

With that, all we need to look up the block device is to look up the inode
and do a few sanity checks on the gendisk, instead of the separate lookup
for the gendisk.

As part of the change, switch bdget() to only find existing block devices,
given that we know that the block_device structure must be allocated at
probe / partition scan time.

blk-cgroup needed a bit of special treatment as the only place that
wanted to look up a gendisk outside of the normal blkdev_get path.  It is
switched to a lookup using the block device hash now that this is the
primary lookup path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c         |  42 ++++-----
 block/blk-iocost.c         |  36 +++----
 block/blk.h                |   1 -
 block/genhd.c              | 188 +++----------------------------------
 block/partitions/core.c    |  28 +++---
 fs/block_dev.c             | 133 +++++++++++++++-----------
 include/linux/blk-cgroup.h |   4 +-
 include/linux/blkdev.h     |   3 +
 include/linux/genhd.h      |   4 +-
 9 files changed, 153 insertions(+), 286 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 54fbe1e80cc41a..4c0ae0f6bce02d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -556,22 +556,22 @@ static struct blkcg_gq *blkg_lookup_check(struct blkcg *blkcg,
 }
 
 /**
- * blkg_conf_prep - parse and prepare for per-blkg config update
+ * blkcg_conf_get_bdev - parse and open bdev for per-blkg config update
  * @inputp: input string pointer
  *
  * Parse the device node prefix part, MAJ:MIN, of per-blkg config update
- * from @input and get and return the matching gendisk.  *@inputp is
+ * from @input and get and return the matching bdev.  *@inputp is
  * updated to point past the device node prefix.  Returns an ERR_PTR()
  * value on error.
  *
  * Use this function iff blkg_conf_prep() can't be used for some reason.
  */
-struct gendisk *blkcg_conf_get_disk(char **inputp)
+struct block_device *blkcg_conf_get_bdev(char **inputp)
 {
 	char *input = *inputp;
 	unsigned int major, minor;
-	struct gendisk *disk;
-	int key_len, part;
+	struct block_device *bdev;
+	int key_len;
 
 	if (sscanf(input, "%u:%u%n", &major, &minor, &key_len) != 2)
 		return ERR_PTR(-EINVAL);
@@ -581,16 +581,16 @@ struct gendisk *blkcg_conf_get_disk(char **inputp)
 		return ERR_PTR(-EINVAL);
 	input = skip_spaces(input);
 
-	disk = get_gendisk(MKDEV(major, minor), &part);
-	if (!disk)
+	bdev = bdget(MKDEV(major, minor));
+	if (!bdev)
 		return ERR_PTR(-ENODEV);
-	if (part) {
-		put_disk_and_module(disk);
+	if (bdev_is_partition(bdev)) {
+		bdput(bdev);
 		return ERR_PTR(-ENODEV);
 	}
 
 	*inputp = input;
-	return disk;
+	return bdev;
 }
 
 /**
@@ -607,18 +607,18 @@ struct gendisk *blkcg_conf_get_disk(char **inputp)
  */
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   char *input, struct blkg_conf_ctx *ctx)
-	__acquires(rcu) __acquires(&disk->queue->queue_lock)
+	__acquires(rcu) __acquires(&bdev->bd_disk->queue->queue_lock)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct request_queue *q;
 	struct blkcg_gq *blkg;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_get_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	q = disk->queue;
+	q = bdev->bd_disk->queue;
 
 	rcu_read_lock();
 	spin_lock_irq(&q->queue_lock);
@@ -689,7 +689,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 			goto success;
 	}
 success:
-	ctx->disk = disk;
+	ctx->bdev = bdev;
 	ctx->blkg = blkg;
 	ctx->body = input;
 	return 0;
@@ -700,7 +700,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 	spin_unlock_irq(&q->queue_lock);
 	rcu_read_unlock();
 fail:
-	put_disk_and_module(disk);
+	bdput(bdev);
 	/*
 	 * If queue was bypassing, we should retry.  Do so after a
 	 * short msleep().  It isn't strictly necessary but queue
@@ -723,11 +723,11 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep);
  * with blkg_conf_prep().
  */
 void blkg_conf_finish(struct blkg_conf_ctx *ctx)
-	__releases(&ctx->disk->queue->queue_lock) __releases(rcu)
+	__releases(&ctx->bdev->bd_disk->queue->queue_lock) __releases(rcu)
 {
-	spin_unlock_irq(&ctx->disk->queue->queue_lock);
+	spin_unlock_irq(&ctx->bdev->bd_disk->queue->queue_lock);
 	rcu_read_unlock();
-	put_disk_and_module(ctx->disk);
+	bdput(ctx->bdev);
 }
 EXPORT_SYMBOL_GPL(blkg_conf_finish);
 
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index bbe86d1199dc5b..bd8bfccf6b9ec3 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -3120,23 +3120,23 @@ static const match_table_t qos_tokens = {
 static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 			     size_t nbytes, loff_t off)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct ioc *ioc;
 	u32 qos[NR_QOS_PARAMS];
 	bool enable, user;
 	char *p;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_get_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(disk->queue);
+	ioc = q_to_ioc(bdev->bd_disk->queue);
 	if (!ioc) {
-		ret = blk_iocost_init(disk->queue);
+		ret = blk_iocost_init(bdev->bd_disk->queue);
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(disk->queue);
+		ioc = q_to_ioc(bdev->bd_disk->queue);
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3231,12 +3231,12 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return nbytes;
 einval:
 	ret = -EINVAL;
 err:
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return ret;
 }
 
@@ -3287,23 +3287,23 @@ static const match_table_t i_lcoef_tokens = {
 static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 				    size_t nbytes, loff_t off)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct ioc *ioc;
 	u64 u[NR_I_LCOEFS];
 	bool user;
 	char *p;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_get_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(disk->queue);
+	ioc = q_to_ioc(bdev->bd_disk->queue);
 	if (!ioc) {
-		ret = blk_iocost_init(disk->queue);
+		ret = blk_iocost_init(bdev->bd_disk->queue);
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(disk->queue);
+		ioc = q_to_ioc(bdev->bd_disk->queue);
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3356,13 +3356,13 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return nbytes;
 
 einval:
 	ret = -EINVAL;
 err:
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return ret;
 }
 
diff --git a/block/blk.h b/block/blk.h
index dfab98465db9a5..d74159bf61eb8f 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -352,7 +352,6 @@ struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
 int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
 void blk_free_devt(dev_t devt);
-void blk_invalidate_devt(dev_t devt);
 char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
 #define ADDPART_FLAG_RAID	1
diff --git a/block/genhd.c b/block/genhd.c
index 4a224a3c8e1071..40ec5473a21dd2 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -27,17 +27,9 @@
 
 static struct kobject *block_depr;
 
-static DEFINE_XARRAY(bdev_map);
-static DEFINE_MUTEX(bdev_map_lock);
-
 /* for extended dynamic devt allocation, currently only one major is used */
 #define NR_EXT_DEVT		(1 << MINORBITS)
-
-/* For extended devt allocation.  ext_devt_lock prevents look up
- * results from going away underneath its user.
- */
-static DEFINE_SPINLOCK(ext_devt_lock);
-static DEFINE_IDR(ext_devt_idr);
+static DEFINE_IDA(ext_devt_ida);
 
 static void disk_check_events(struct disk_events *ev,
 			      unsigned int *clearing_ptr);
@@ -578,14 +570,7 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
 		return 0;
 	}
 
-	/* allocate ext devt */
-	idr_preload(GFP_KERNEL);
-
-	spin_lock_bh(&ext_devt_lock);
-	idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
-	spin_unlock_bh(&ext_devt_lock);
-
-	idr_preload_end();
+	idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT, GFP_KERNEL);
 	if (idx < 0)
 		return idx == -ENOSPC ? -EBUSY : idx;
 
@@ -604,26 +589,8 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
  */
 void blk_free_devt(dev_t devt)
 {
-	if (devt == MKDEV(0, 0))
-		return;
-
-	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
-		spin_lock_bh(&ext_devt_lock);
-		idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
-		spin_unlock_bh(&ext_devt_lock);
-	}
-}
-
-/*
- * We invalidate devt by assigning NULL pointer for devt in idr.
- */
-void blk_invalidate_devt(dev_t devt)
-{
-	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
-		spin_lock_bh(&ext_devt_lock);
-		idr_replace(&ext_devt_idr, NULL, blk_mangle_minor(MINOR(devt)));
-		spin_unlock_bh(&ext_devt_lock);
-	}
+	if (MAJOR(devt) == BLOCK_EXT_MAJOR)
+		ida_free(&ext_devt_ida, blk_mangle_minor(MINOR(devt)));
 }
 
 static char *bdevt_str(dev_t devt, char *buf)
@@ -638,28 +605,6 @@ static char *bdevt_str(dev_t devt, char *buf)
 	return buf;
 }
 
-static void blk_register_region(struct gendisk *disk)
-{
-	int i;
-
-	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < disk->minors; i++) {
-		if (xa_insert(&bdev_map, disk_devt(disk) + i, disk, GFP_KERNEL))
-			WARN_ON_ONCE(1);
-	}
-	mutex_unlock(&bdev_map_lock);
-}
-
-static void blk_unregister_region(struct gendisk *disk)
-{
-	int i;
-
-	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < disk->minors; i++)
-		xa_erase(&bdev_map, disk_devt(disk) + i);
-	mutex_unlock(&bdev_map_lock);
-}
-
 static void disk_scan_partitions(struct gendisk *disk)
 {
 	struct block_device *bdev;
@@ -803,7 +748,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		blk_register_region(disk);
+		bdev_add(disk->part0.bdev, devt);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -914,16 +859,6 @@ void del_gendisk(struct gendisk *disk)
 	}
 
 	blk_unregister_queue(disk);
-	
-	if (!(disk->flags & GENHD_FL_HIDDEN))
-		blk_unregister_region(disk);
-	/*
-	 * Remove gendisk pointer from idr so that it cannot be looked up
-	 * while RCU period before freeing gendisk is running to prevent
-	 * use-after-free issues. Note that the device number stays
-	 * "in-use" until we really free the gendisk.
-	 */
-	blk_invalidate_devt(disk_devt(disk));
 
 	kobject_put(disk->part0.holder_dir);
 	kobject_put(disk->slave_dir);
@@ -962,7 +897,7 @@ static ssize_t disk_badblocks_store(struct device *dev,
 	return badblocks_store(disk->bb, page, len, 0);
 }
 
-static void request_gendisk_module(dev_t devt)
+void blk_request_module(dev_t devt)
 {
 	unsigned int major = MAJOR(devt);
 	struct blk_major_name **n;
@@ -982,84 +917,6 @@ static void request_gendisk_module(dev_t devt)
 		request_module("block-major-%d", MAJOR(devt));
 }
 
-static bool get_disk_and_module(struct gendisk *disk)
-{
-	struct module *owner;
-
-	if (!disk->fops)
-		return false;
-	owner = disk->fops->owner;
-	if (owner && !try_module_get(owner))
-		return false;
-	if (!kobject_get_unless_zero(&disk_to_dev(disk)->kobj)) {
-		module_put(owner);
-		return false;
-	}
-	return true;
-
-}
-
-/**
- * get_gendisk - get partitioning information for a given device
- * @devt: device to get partitioning information for
- * @partno: returned partition index
- *
- * This function gets the structure containing partitioning
- * information for the given device @devt.
- *
- * Context: can sleep
- */
-struct gendisk *get_gendisk(dev_t devt, int *partno)
-{
-	struct gendisk *disk = NULL;
-
-	might_sleep();
-
-	if (MAJOR(devt) != BLOCK_EXT_MAJOR) {
-		mutex_lock(&bdev_map_lock);
-		disk = xa_load(&bdev_map, devt);
-		if (!disk) {
-			mutex_unlock(&bdev_map_lock);
-			request_gendisk_module(devt);
-			mutex_lock(&bdev_map_lock);
-			disk = xa_load(&bdev_map, devt);
-		}
-		if (disk && !get_disk_and_module(disk))
-			disk = NULL;
-		if (disk)
-			*partno = devt - disk_devt(disk);
-		mutex_unlock(&bdev_map_lock);
-	} else {
-		struct hd_struct *part;
-
-		spin_lock_bh(&ext_devt_lock);
-		part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
-		if (part && get_disk_and_module(part_to_disk(part))) {
-			*partno = part->partno;
-			disk = part_to_disk(part);
-		}
-		spin_unlock_bh(&ext_devt_lock);
-	}
-
-	if (!disk)
-		return NULL;
-
-	/*
-	 * Synchronize with del_gendisk() to not return disk that is being
-	 * destroyed.
-	 */
-	down_read(&disk->lookup_sem);
-	if (unlikely((disk->flags & GENHD_FL_HIDDEN) ||
-		     !(disk->flags & GENHD_FL_UP))) {
-		up_read(&disk->lookup_sem);
-		put_disk_and_module(disk);
-		disk = NULL;
-	} else {
-		up_read(&disk->lookup_sem);
-	}
-	return disk;
-}
-
 /**
  * bdget_disk - do bdget() by gendisk and partition number
  * @disk: gendisk of interest
@@ -1557,11 +1414,6 @@ int disk_expand_part_tbl(struct gendisk *disk, int partno)
  *
  * This function releases all allocated resources of the gendisk.
  *
- * The struct gendisk refcount is incremented with get_gendisk() or
- * get_disk_and_module(), and its refcount is decremented with
- * put_disk_and_module() or put_disk(). Once the refcount reaches 0 this
- * function is called.
- *
  * Drivers which used __device_add_disk() have a gendisk with a request_queue
  * assigned. Since the request_queue sits on top of the gendisk for these
  * drivers we also call blk_put_queue() for them, and we expect the
@@ -1746,9 +1598,13 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk)
 		return NULL;
 
+	disk->part0.bdev = bdev_alloc(disk, 0);
+	if (!disk->part0.bdev)
+		goto out_free_disk;
+
 	disk->part0.dkstats = alloc_percpu(struct disk_stats);
 	if (!disk->part0.dkstats)
-		goto out_free_disk;
+		goto out_bdput;
 
 	init_rwsem(&disk->lookup_sem);
 	disk->node_id = node_id;
@@ -1782,6 +1638,8 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 
 out_free_part0:
 	hd_free_part(&disk->part0);
+out_bdput:
+	bdput(disk->part0.bdev);
 out_free_disk:
 	kfree(disk);
 	return NULL;
@@ -1805,26 +1663,6 @@ void put_disk(struct gendisk *disk)
 }
 EXPORT_SYMBOL(put_disk);
 
-/**
- * put_disk_and_module - decrements the module and gendisk refcount
- * @disk: the struct gendisk to decrement the refcount for
- *
- * This is a counterpart of get_disk_and_module() and thus also of
- * get_gendisk().
- *
- * Context: Any context, but the last reference must not be dropped from
- *          atomic context.
- */
-void put_disk_and_module(struct gendisk *disk)
-{
-	if (disk) {
-		struct module *owner = disk->fops->owner;
-
-		put_disk(disk);
-		module_put(owner);
-	}
-}
-
 static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 {
 	char event[] = "DISK_RO=1";
diff --git a/block/partitions/core.c b/block/partitions/core.c
index a02e224115943d..8b44f46ab1fbfc 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -340,12 +340,11 @@ void delete_partition(struct hd_struct *part)
 	device_del(part_to_dev(part));
 
 	/*
-	 * Remove gendisk pointer from idr so that it cannot be looked up
-	 * while RCU period before freeing gendisk is running to prevent
-	 * use-after-free issues. Note that the device number stays
-	 * "in-use" until we really free the gendisk.
+	 * Remove the block device from the inode hash, so that it cannot be
+	 * looked up while waiting for the RCU grace period.
 	 */
-	blk_invalidate_devt(part_devt(part));
+	bdput(part->bdev);
+
 	percpu_ref_kill(&part->ref);
 }
 
@@ -402,11 +401,14 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!p)
 		return ERR_PTR(-EBUSY);
 
+	err = -ENOMEM;
 	p->dkstats = alloc_percpu(struct disk_stats);
-	if (!p->dkstats) {
-		err = -ENOMEM;
+	if (!p->dkstats)
 		goto out_free;
-	}
+
+	p->bdev = bdev_alloc(disk, partno);
+	if (!p->bdev)
+		goto out_free_stats;
 
 	hd_sects_seq_init(p);
 	pdev = part_to_dev(p);
@@ -420,10 +422,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		struct partition_meta_info *pinfo;
 
 		pinfo = kzalloc_node(sizeof(*pinfo), GFP_KERNEL, disk->node_id);
-		if (!pinfo) {
-			err = -ENOMEM;
-			goto out_free_stats;
-		}
+		if (!pinfo)
+			goto out_bdput;
 		memcpy(pinfo, info, sizeof(*info));
 		p->info = pinfo;
 	}
@@ -470,6 +470,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	}
 
 	/* everything is up and running, commence */
+	bdev_add(p->bdev, devt);
 	rcu_assign_pointer(ptbl->part[partno], p);
 
 	/* suppress uevent if the disk suppresses it */
@@ -479,11 +480,14 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 out_free_info:
 	kfree(p->info);
+out_bdput:
+	bdput(p->bdev);
 out_free_stats:
 	free_percpu(p->dkstats);
 out_free:
 	kfree(p);
 	return ERR_PTR(err);
+
 out_remove_file:
 	device_remove_file(pdev, &dev_attr_whole_disk);
 out_del:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 29db12c3bb501c..f36788d7699302 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -795,6 +795,12 @@ static void bdev_free_inode(struct inode *inode)
 	kmem_cache_free(bdev_cachep, BDEV_I(inode));
 }
 
+static void bdev_destroy_inode(struct inode *inode)
+{
+	if (inode->i_rdev)
+		put_device(disk_to_dev(I_BDEV(inode)->bd_disk));
+}
+
 static void init_once(void *foo)
 {
 	struct bdev_inode *ei = (struct bdev_inode *) foo;
@@ -829,6 +835,7 @@ static const struct super_operations bdev_sops = {
 	.statfs = simple_statfs,
 	.alloc_inode = bdev_alloc_inode,
 	.free_inode = bdev_free_inode,
+	.destroy_inode = bdev_destroy_inode,
 	.drop_inode = generic_delete_inode,
 	.evict_inode = bdev_evict_inode,
 };
@@ -870,34 +877,51 @@ void __init bdev_cache_init(void)
 	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
 }
 
-static struct block_device *bdget(dev_t dev)
+struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 {
 	struct block_device *bdev;
 	struct inode *inode;
 
-	inode = iget_locked(blockdev_superblock, dev);
+	inode = new_inode(blockdev_superblock);
 	if (!inode)
 		return NULL;
 
-	bdev = &BDEV_I(inode)->bdev;
+	bdev = I_BDEV(inode);
+	spin_lock_init(&bdev->bd_size_lock);
+	bdev->bd_disk = disk;
+	bdev->bd_partno = partno;
+	bdev->bd_contains = NULL;
+	bdev->bd_super = NULL;
+	bdev->bd_inode = inode;
+	bdev->bd_part_count = 0;
+
+	inode->i_mode = S_IFBLK;
+	inode->i_rdev = 0;
+	inode->i_bdev = bdev;
+	inode->i_data.a_ops = &def_blk_aops;
 
-	if (inode->i_state & I_NEW) {
-		spin_lock_init(&bdev->bd_size_lock);
-		bdev->bd_contains = NULL;
-		bdev->bd_super = NULL;
-		bdev->bd_inode = inode;
-		bdev->bd_part_count = 0;
-		bdev->bd_dev = dev;
-		inode->i_mode = S_IFBLK;
-		inode->i_rdev = dev;
-		inode->i_bdev = bdev;
-		inode->i_data.a_ops = &def_blk_aops;
-		mapping_set_gfp_mask(&inode->i_data, GFP_USER);
-		unlock_new_inode(inode);
-	}
 	return bdev;
 }
 
+void bdev_add(struct block_device *bdev, dev_t dev)
+{
+	bdev->bd_dev = dev;
+	get_device(disk_to_dev(bdev->bd_disk));
+	bdev->bd_inode->i_rdev = dev;
+	bdev->bd_inode->i_ino = dev;
+	insert_inode_hash(bdev->bd_inode);
+}
+
+struct block_device *bdget(dev_t dev)
+{
+	struct inode *inode;
+
+	inode = ilookup(blockdev_superblock, dev);
+	if (!inode)
+		return NULL;
+	return &BDEV_I(inode)->bdev;
+}
+
 /**
  * bdgrab -- Grab a reference to an already referenced block device
  * @bdev:	Block device to grab a reference to.
@@ -957,6 +981,10 @@ static struct block_device *bd_acquire(struct inode *inode)
 		bd_forget(inode);
 
 	bdev = bdget(inode->i_rdev);
+	if (!bdev) {
+		blk_request_module(inode->i_rdev);
+		bdev = bdget(inode->i_rdev);
+	}
 	if (bdev) {
 		spin_lock(&bdev_lock);
 		if (!inode->i_bdev) {
@@ -1067,27 +1095,6 @@ int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
 }
 EXPORT_SYMBOL_GPL(bd_prepare_to_claim); /* only for the loop driver */
 
-static struct gendisk *bdev_get_gendisk(struct block_device *bdev, int *partno)
-{
-	struct gendisk *disk = get_gendisk(bdev->bd_dev, partno);
-
-	if (!disk)
-		return NULL;
-	/*
-	 * Now that we hold gendisk reference we make sure bdev we looked up is
-	 * not stale. If it is, it means device got removed and created before
-	 * we looked up gendisk and we fail open in such case. Associating
-	 * unhashed bdev with newly created gendisk could lead to two bdevs
-	 * (and thus two independent caches) being associated with one device
-	 * which is bad.
-	 */
-	if (inode_unhashed(bdev->bd_inode)) {
-		put_disk_and_module(disk);
-		return NULL;
-	}
-	return disk;
-}
-
 static void bd_clear_claiming(struct block_device *whole, void *holder)
 {
 	lockdep_assert_held(&bdev_lock);
@@ -1404,6 +1411,24 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
  */
 EXPORT_SYMBOL_GPL(bdev_disk_changed);
 
+/*
+ * Synchronize with del_gendisk() to not return a disk that is being destroyed.
+ * Callers need to drop the reference on disk->fops->owner.
+ */
+static int bdev_get_gendisk(struct gendisk *disk)
+{
+	down_read(&disk->lookup_sem);
+	if ((disk->flags & (GENHD_FL_HIDDEN | GENHD_FL_UP)) != GENHD_FL_UP)
+		goto out_unlock;
+	if (disk->fops->owner && !try_module_get(disk->fops->owner))
+		goto out_unlock;
+	up_read(&disk->lookup_sem);
+	return 0;
+out_unlock:
+	up_read(&disk->lookup_sem);
+	return -ENXIO;
+}
+
 /*
  * bd_mutex locking:
  *
@@ -1415,19 +1440,17 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
 	struct block_device *whole = NULL, *claiming = NULL;
-	struct gendisk *disk;
+	struct gendisk *disk = bdev->bd_disk;
 	int ret;
-	int partno;
 	bool first_open = false, unblock_events = true, need_restart;
 
  restart:
 	need_restart = false;
-	ret = -ENXIO;
-	disk = bdev_get_gendisk(bdev, &partno);
-	if (!disk)
+	ret = bdev_get_gendisk(bdev->bd_disk);
+	if (ret)
 		goto out;
 
-	if (partno) {
+	if (bdev->bd_partno) {
 		whole = bdget_disk(disk, 0);
 		if (!whole) {
 			ret = -ENOMEM;
@@ -1450,13 +1473,11 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (!bdev->bd_openers) {
 		first_open = true;
-		bdev->bd_disk = disk;
 		bdev->bd_contains = bdev;
-		bdev->bd_partno = partno;
 
-		if (!partno) {
+		if (!bdev->bd_partno) {
 			ret = -ENXIO;
-			bdev->bd_part = disk_get_part(disk, partno);
+			bdev->bd_part = disk_get_part(disk, 0);
 			if (!bdev->bd_part)
 				goto out_clear;
 
@@ -1494,7 +1515,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			if (ret)
 				goto out_clear;
 			bdev->bd_contains = bdgrab(whole);
-			bdev->bd_part = disk_get_part(disk, partno);
+			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
 			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
 				ret = -ENXIO;
@@ -1541,16 +1562,15 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	if (unblock_events)
 		disk_unblock_events(disk);
 
-	/* only one opener holds refs to the module and disk */
+	/* only one opener holds the module reference */
 	if (!first_open)
-		put_disk_and_module(disk);
+		module_put(disk->fops->owner);
 	if (whole)
 		bdput(whole);
 	return 0;
 
  out_clear:
 	disk_put_part(bdev->bd_part);
-	bdev->bd_disk = NULL;
 	bdev->bd_part = NULL;
 	if (bdev != bdev->bd_contains)
 		__blkdev_put(bdev->bd_contains, mode, 1);
@@ -1564,7 +1584,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
  	if (whole)
 		bdput(whole);
  out_put_disk:
-	put_disk_and_module(disk);
+	module_put(disk->fops->owner);
 	if (need_restart)
 		goto restart;
  out:
@@ -1680,6 +1700,10 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 	int err;
 
 	bdev = bdget(dev);
+	if (!bdev) {
+		blk_request_module(dev);
+		bdev = bdget(dev);
+	}
 	if (!bdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1755,12 +1779,11 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 	if (!bdev->bd_openers) {
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
-		bdev->bd_disk = NULL;
 		if (bdev != bdev->bd_contains)
 			victim = bdev->bd_contains;
 		bdev->bd_contains = NULL;
 
-		put_disk_and_module(disk);
+		module_put(disk->fops->owner);
 	}
 	mutex_unlock(&bdev->bd_mutex);
 	bdput(bdev);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index c8fc9792ac776d..064f14daedebca 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -197,12 +197,12 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);
 
 struct blkg_conf_ctx {
-	struct gendisk			*disk;
+	struct block_device		*bdev;
 	struct blkcg_gq			*blkg;
 	char				*body;
 };
 
-struct gendisk *blkcg_conf_get_disk(char **inputp);
+struct block_device *blkcg_conf_get_bdev(char **inputp);
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   char *input, struct blkg_conf_ctx *ctx);
 void blkg_conf_finish(struct blkg_conf_ctx *ctx);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05b346a68c2eee..044d9dd159d882 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1994,6 +1994,9 @@ void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
 		void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 
+struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
+void bdev_add(struct block_device *bdev, dev_t dev);
+struct block_device *bdget(dev_t dev);
 struct block_device *I_BDEV(struct inode *inode);
 struct block_device *bdget_part(struct hd_struct *part);
 struct block_device *bdgrab(struct block_device *bdev);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index ca5e356084c353..ab5fca99764e7a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -65,6 +65,7 @@ struct hd_struct {
 	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
 
+	struct block_device *bdev;
 	struct device __dev;
 	struct kobject *holder_dir;
 	int policy, partno;
@@ -300,7 +301,6 @@ static inline void add_disk_no_queue_reg(struct gendisk *disk)
 }
 
 extern void del_gendisk(struct gendisk *gp);
-extern struct gendisk *get_gendisk(dev_t dev, int *partno);
 extern struct block_device *bdget_disk(struct gendisk *disk, int partno);
 
 extern void set_disk_ro(struct gendisk *disk, int flag);
@@ -338,7 +338,6 @@ int blk_drop_partitions(struct block_device *bdev);
 
 extern struct gendisk *__alloc_disk_node(int minors, int node_id);
 extern void put_disk(struct gendisk *disk);
-extern void put_disk_and_module(struct gendisk *disk);
 
 #define alloc_disk_node(minors, node_id)				\
 ({									\
@@ -389,6 +388,7 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
 #endif /* CONFIG_SYSFS */
 
 dev_t blk_lookup_devt(const char *name, int partno);
+void blk_request_module(dev_t devt);
 #ifdef CONFIG_BLOCK
 void printk_all_partitions(void);
 #else /* CONFIG_BLOCK */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28090.56860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8i-0001Wp-2u; Mon, 16 Nov 2020 15:09:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28090.56860; Mon, 16 Nov 2020 15:09:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8g-0001Uu-JY; Mon, 16 Nov 2020 15:09:54 +0000
Received: by outflank-mailman (input) for mailman id 28090;
 Mon, 16 Nov 2020 15:09:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzW-0006ni-2v
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1815270-2616-4cea-9859-b53f647ce90d;
 Mon, 16 Nov 2020 14:58:55 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxm-0003qi-1m; Mon, 16 Nov 2020 14:58:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzW-0006ni-2v
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:26 +0000
X-Inumbo-ID: f1815270-2616-4cea-9859-b53f647ce90d
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f1815270-2616-4cea-9859-b53f647ce90d;
	Mon, 16 Nov 2020 14:58:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=IMIzlncMGNKmllwIOsfxmrv0eLBvrluJTUMF3jTpuQU=; b=k9I9MrkYIIqQ7JqFNAj/ZM1V4q
	fnRQw9pHyuYdosXzcTsUpct/APGHyHZ7nQCzDTPBYWKb7HrWsqQXDORjJYWy8OjQdicyYdW3PlxPu
	wqMi0Vhywi7qganw7NsAbQv4gOag+5ar1NaSXq2fKhYuZd8FwIJVH3dH1drAnztP4skkRRCgtIYLj
	Pjw8ksGMPtkHYf08cbb503DDj0v+MfWTUIREPU72v6YjjL9QwNBor49syaMVeLgCn3qZSFU6g/s8L
	U3jj6Zs9nBEf9ztd5cWkrrCJejXKia3qXo3DpHT4vJ84+lcQrWfocMAg1qs7gQeMEKKfZSWOVRuBo
	VJid72pg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxm-0003qi-1m; Mon, 16 Nov 2020 14:58:38 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 20/78] md: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:11 +0100
Message-Id: <20201116145809.410558-21-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
---
 drivers/md/md-cluster.c |  6 ++----
 drivers/md/md-linear.c  |  3 +--
 drivers/md/md.c         | 24 ++++++++++--------------
 3 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 4aaf4820b6f625..87442dc59f6ca3 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -581,8 +581,7 @@ static int process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
 		process_metadata_update(mddev, msg);
 		break;
 	case CHANGE_CAPACITY:
-		set_capacity(mddev->gendisk, mddev->array_sectors);
-		revalidate_disk_size(mddev->gendisk, true);
+		set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 		break;
 	case RESYNCING:
 		set_bit(MD_RESYNCING_REMOTE, &mddev->recovery);
@@ -1296,8 +1295,7 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 		if (ret)
 			pr_err("%s:%d: failed to send CHANGE_CAPACITY msg\n",
 			       __func__, __LINE__);
-		set_capacity(mddev->gendisk, mddev->array_sectors);
-		revalidate_disk_size(mddev->gendisk, true);
+		set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	} else {
 		/* revert to previous sectors */
 		ret = mddev->pers->resize(mddev, old_dev_sectors);
diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
index 5ab22069b5be9c..98f1b4b2bdcef8 100644
--- a/drivers/md/md-linear.c
+++ b/drivers/md/md-linear.c
@@ -200,9 +200,8 @@ static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
 		"copied raid_disks doesn't match mddev->raid_disks");
 	rcu_assign_pointer(mddev->private, newconf);
 	md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
-	set_capacity(mddev->gendisk, mddev->array_sectors);
+	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	mddev_resume(mddev);
-	revalidate_disk_size(mddev->gendisk, true);
 	kfree_rcu(oldconf, rcu);
 	return 0;
 }
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 98bac4f304ae26..32e375d50fee17 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5355,10 +5355,9 @@ array_size_store(struct mddev *mddev, const char *buf, size_t len)
 
 	if (!err) {
 		mddev->array_sectors = sectors;
-		if (mddev->pers) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
-		}
+		if (mddev->pers)
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 	}
 	mddev_unlock(mddev);
 	return err ?: len;
@@ -6107,8 +6106,7 @@ int do_md_run(struct mddev *mddev)
 	md_wakeup_thread(mddev->thread);
 	md_wakeup_thread(mddev->sync_thread); /* possibly kick off a reshape */
 
-	set_capacity(mddev->gendisk, mddev->array_sectors);
-	revalidate_disk_size(mddev->gendisk, true);
+	set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
 	clear_bit(MD_NOT_READY, &mddev->flags);
 	mddev->changed = 1;
 	kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
@@ -6423,10 +6421,9 @@ static int do_md_stop(struct mddev *mddev, int mode,
 			if (rdev->raid_disk >= 0)
 				sysfs_unlink_rdev(mddev, rdev);
 
-		set_capacity(disk, 0);
+		set_capacity_and_notify(disk, 0);
 		mutex_unlock(&mddev->open_mutex);
 		mddev->changed = 1;
-		revalidate_disk_size(disk, true);
 
 		if (mddev->ro)
 			mddev->ro = 0;
@@ -7257,8 +7254,8 @@ static int update_size(struct mddev *mddev, sector_t num_sectors)
 		if (mddev_is_clustered(mddev))
 			md_cluster_ops->update_size(mddev, old_dev_sectors);
 		else if (mddev->queue) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 		}
 	}
 	return rv;
@@ -9035,10 +9032,9 @@ void md_do_sync(struct md_thread *thread)
 		mddev_lock_nointr(mddev);
 		md_set_array_sectors(mddev, mddev->pers->size(mddev, 0, 0));
 		mddev_unlock(mddev);
-		if (!mddev_is_clustered(mddev)) {
-			set_capacity(mddev->gendisk, mddev->array_sectors);
-			revalidate_disk_size(mddev->gendisk, true);
-		}
+		if (!mddev_is_clustered(mddev))
+			set_capacity_and_notify(mddev->gendisk,
+						mddev->array_sectors);
 	}
 
 	spin_lock(&mddev->lock);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:09:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:09:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28092.56873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8j-0001cE-S8; Mon, 16 Nov 2020 15:09:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28092.56873; Mon, 16 Nov 2020 15:09:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8i-0001aB-BP; Mon, 16 Nov 2020 15:09:56 +0000
Received: by outflank-mailman (input) for mailman id 28092;
 Mon, 16 Nov 2020 15:09:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg05-0006ni-44
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5edc6226-772c-4156-b816-19dd07e52488;
 Mon, 16 Nov 2020 14:59:05 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxy-0003tn-LS; Mon, 16 Nov 2020 14:58:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg05-0006ni-44
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:01 +0000
X-Inumbo-ID: 5edc6226-772c-4156-b816-19dd07e52488
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5edc6226-772c-4156-b816-19dd07e52488;
	Mon, 16 Nov 2020 14:59:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=d0zI0xiIcFHFTwTdsA7t1wqk2DnkvBTfmLOxL2TnUek=; b=DKAlryWF5aHE8/B3DVx5lhXd8h
	RQsT9hPX38SbguocnZS554oaNmM9+EcgOmPSe3bSy4pQSqGGct2tuNtJ0yCfFmplFU4Jcw4DNR+Gz
	J7Oz/e2SFesb/6AMnzcvu+cf+7L4KALASo9qO1gsK4PDLlxYsYhRBm5wzHXCJ4YuBYsjvABidOfFo
	p48i+zCOuzCBz9Jt3Qs/XvlKgbU+XBWYolicobbwk3FUzb3sQBQPT35cBdmlTfMB9Ru0tm2W7/68r
	NFrH8b8PC5q1fbzQHqZSTq90cUOcrpwyoWr9RhxuehKghoJ6srJuGYzg6d5pU+2MePhUr2LoGYkUr
	biWVcMtQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxy-0003tn-LS; Mon, 16 Nov 2020 14:58:51 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 29/78] dasd: implement ->set_read_only to hook into BLKROSET processing
Date: Mon, 16 Nov 2020 15:57:20 +0100
Message-Id: <20201116145809.410558-30-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Implement the ->set_read_only method instead of parsing the actual
ioctl command.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/s390/block/dasd.c       |  1 +
 drivers/s390/block/dasd_int.h   |  3 ++-
 drivers/s390/block/dasd_ioctl.c | 27 +++++++++------------------
 3 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index eb17fea8075c6f..db24e04ee9781e 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -3394,6 +3394,7 @@ dasd_device_operations = {
 	.ioctl		= dasd_ioctl,
 	.compat_ioctl	= dasd_ioctl,
 	.getgeo		= dasd_getgeo,
+	.set_read_only	= dasd_set_read_only,
 };
 
 /*******************************************************************************
diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
index fa552f9f166671..c59a0d63b506e6 100644
--- a/drivers/s390/block/dasd_int.h
+++ b/drivers/s390/block/dasd_int.h
@@ -844,7 +844,8 @@ int dasd_scan_partitions(struct dasd_block *);
 void dasd_destroy_partitions(struct dasd_block *);
 
 /* externals in dasd_ioctl.c */
-int  dasd_ioctl(struct block_device *, fmode_t, unsigned int, unsigned long);
+int dasd_ioctl(struct block_device *, fmode_t, unsigned int, unsigned long);
+int dasd_set_read_only(struct block_device *bdev, bool ro);
 
 /* externals in dasd_proc.c */
 int dasd_proc_init(void);
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index cb6427fb9f3d16..3359559517bfcf 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -532,28 +532,22 @@ static int dasd_ioctl_information(struct dasd_block *block, void __user *argp,
 /*
  * Set read only
  */
-static int
-dasd_ioctl_set_ro(struct block_device *bdev, void __user *argp)
+int dasd_set_read_only(struct block_device *bdev, bool ro)
 {
 	struct dasd_device *base;
-	int intval, rc;
+	int rc;
 
-	if (!capable(CAP_SYS_ADMIN))
-		return -EACCES;
+	/* do not manipulate hardware state for partitions */
 	if (bdev_is_partition(bdev))
-		// ro setting is not allowed for partitions
-		return -EINVAL;
-	if (get_user(intval, (int __user *)argp))
-		return -EFAULT;
+		return 0;
+
 	base = dasd_device_from_gendisk(bdev->bd_disk);
 	if (!base)
 		return -ENODEV;
-	if (!intval && test_bit(DASD_FLAG_DEVICE_RO, &base->flags)) {
-		dasd_put_device(base);
-		return -EROFS;
-	}
-	set_disk_ro(bdev->bd_disk, intval);
-	rc = dasd_set_feature(base->cdev, DASD_FEATURE_READONLY, intval);
+	if (!ro && test_bit(DASD_FLAG_DEVICE_RO, &base->flags))
+		rc = -EROFS;
+	else
+		rc = dasd_set_feature(base->cdev, DASD_FEATURE_READONLY, ro);
 	dasd_put_device(base);
 	return rc;
 }
@@ -633,9 +627,6 @@ int dasd_ioctl(struct block_device *bdev, fmode_t mode,
 	case BIODASDPRRST:
 		rc = dasd_ioctl_reset_profile(block);
 		break;
-	case BLKROSET:
-		rc = dasd_ioctl_set_ro(bdev, argp);
-		break;
 	case DASDAPIVER:
 		rc = dasd_ioctl_api_version(argp);
 		break;
-- 
2.29.2
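
[Editor's note] The ->set_read_only refactor in the patch above moves the
BLKROSET policy out of the driver ioctl handler: the block core validates
the request and calls an optional per-driver hook, and the driver only
vetoes states its hardware cannot honor. The sketch below is a small
userspace model of that control flow; all `model_*` names are hypothetical
stand-ins, not the real kernel structures.

```c
/* Userspace model of routing BLKROSET through an optional
 * ->set_read_only() op, mirroring the dasd conversion above.
 * All names are illustrative, not kernel API. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MODEL_EROFS 30

struct model_bdev;

struct model_block_device_operations {
	/* return 0 on success, negative errno-style value on failure */
	int (*set_read_only)(struct model_bdev *bdev, bool ro);
};

struct model_bdev {
	bool ro;           /* generic read-only state kept by the core */
	bool hw_forced_ro; /* models DASD_FLAG_DEVICE_RO */
	const struct model_block_device_operations *ops;
};

/* Driver hook modeled on dasd_set_read_only(): refuse to clear
 * read-only when the device itself is read-only. */
static int model_dasd_set_read_only(struct model_bdev *bdev, bool ro)
{
	if (!ro && bdev->hw_forced_ro)
		return -MODEL_EROFS;
	return 0;
}

static const struct model_block_device_operations model_dasd_ops = {
	.set_read_only = model_dasd_set_read_only,
};

/* Core-side BLKROSET handling: consult the driver hook first (if any),
 * and update the generic flag only when the driver agreed. */
int model_blkroset(struct model_bdev *bdev, bool ro)
{
	int ret = 0;

	if (bdev->ops && bdev->ops->set_read_only)
		ret = bdev->ops->set_read_only(bdev, ro);
	if (ret == 0)
		bdev->ro = ro;
	return ret;
}
```

The payoff, visible in the diff, is that capability checks, argument
copying, and set_disk_ro() bookkeeping live in one place in the core
instead of being duplicated per driver.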



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28093.56884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8m-0001i0-6B; Mon, 16 Nov 2020 15:10:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28093.56884; Mon, 16 Nov 2020 15:09:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8k-0001gS-NB; Mon, 16 Nov 2020 15:09:58 +0000
Received: by outflank-mailman (input) for mailman id 28093;
 Mon, 16 Nov 2020 15:09:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefz7-0006ni-2M
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84999a7c-e958-49dc-b4cf-4e412fb66942;
 Mon, 16 Nov 2020 14:58:49 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxf-0003ny-QH; Mon, 16 Nov 2020 14:58:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefz7-0006ni-2M
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:01 +0000
X-Inumbo-ID: 84999a7c-e958-49dc-b4cf-4e412fb66942
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 84999a7c-e958-49dc-b4cf-4e412fb66942;
	Mon, 16 Nov 2020 14:58:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=ud6p7e1IhRULC6wZtzdvOstF28I68kQBPxpGvL+xOgI=; b=gPww3FQIhl5iWeI+g9tG2PGyW2
	8cFaViQoZFAn3scL7v7e6VlWSSHM0qujAFL7izZSOKnG55zOSstvCPtgh6j+BgcIcjdOLRmWbSuPx
	87livSCdfMp3Q0CIYUUkcKRJP5Va1JiGvvjNWim+FctICo3+JOPZDui4e1QuYygSU2PpQDLhZqOWf
	f197DU1YDQPIlLWXNMJiKwoVJLVKNOz6H3aGhMJM8IdhDJMUGVHFFQeJkvMcj1S9q2+rh+/TxUG2Y
	29p0F2fhazyop6FNc7WioL6g6LusWwAv5JZKRR86YRvfazPlnU/qoUaNKbSGMde0pGudftQZoTtiY
	4sHWOC/g==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxf-0003ny-QH; Mon, 16 Nov 2020 14:58:32 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 16/78] rbd: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:07 +0100
Message-Id: <20201116145809.410558-17-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ilya Dryomov <idryomov@gmail.com>
---
 drivers/block/rbd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index f84128abade319..b7a194ffda55b4 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4920,8 +4920,7 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
 	    !test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags)) {
 		size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
 		dout("setting size to %llu sectors", (unsigned long long)size);
-		set_capacity(rbd_dev->disk, size);
-		revalidate_disk_size(rbd_dev->disk, true);
+		set_capacity_and_notify(rbd_dev->disk, size);
 	}
 }
 
-- 
2.29.2
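
[Editor's note] The rbd conversion above collapses the set_capacity() +
revalidate_disk_size() pair into one helper that also emits a resize
uevent when the size actually changes. The following is a simplified
userspace model of that behavior (the real helper sends a KOBJ_CHANGE
uevent with RESIZE=1); the `model_*` names are invented for illustration.

```c
/* Simplified model of set_capacity_and_notify(): update the capacity
 * and report whether a resize notification should fire.  Only a
 * genuine size change produces an event. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct model_gendisk {
	uint64_t capacity_sectors; /* current size in 512-byte sectors */
	int      uevents_sent;     /* counts emitted resize events */
};

bool model_set_capacity_and_notify(struct model_gendisk *disk,
				   uint64_t size)
{
	if (disk->capacity_sectors == size)
		return false;      /* no change, no event */
	disk->capacity_sectors = size;
	disk->uevents_sent++;      /* stands in for the uevent */
	return true;
}
```

This is why the commit message can say the uevent comes "for free": the
caller no longer has to pair the capacity update with a separate
revalidation call to get userspace notified.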



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28094.56895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8o-0001oY-IC; Mon, 16 Nov 2020 15:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28094.56895; Mon, 16 Nov 2020 15:10:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8m-0001mu-Vo; Mon, 16 Nov 2020 15:10:00 +0000
Received: by outflank-mailman (input) for mailman id 28094;
 Mon, 16 Nov 2020 15:09:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg26-0006ni-8W
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 994eb64c-8bbc-472e-8332-3da94a57b649;
 Mon, 16 Nov 2020 14:59:34 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyG-0003yI-1n; Mon, 16 Nov 2020 14:59:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg26-0006ni-8W
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:06 +0000
X-Inumbo-ID: 994eb64c-8bbc-472e-8332-3da94a57b649
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 994eb64c-8bbc-472e-8332-3da94a57b649;
	Mon, 16 Nov 2020 14:59:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Ub9jpknBNltWl/xM1z440GPAq6lbWyOu/cfyXRrsD14=; b=VtALb1vlttJ0buxeem4ez+UVG4
	Hu+Wqzb0QtgrS5QrZw/6aENv47GRNNh/CISIBQASNaoUt+cYAFXDVK4WXECNVL5dVc+xc/uVQOnIg
	HTLM6KHwDsX+QnIalbhYcS8kk7anQ8yPF0pzoGt/ezmRxs/uYaG5KB5LFuw2m65+v4FLJ9BMD4Ovz
	FfsRasFisl+b2XzsGmdUtlricIuZ6vBTkUhVjcJ0lMBS4vQc6DLTXfAo6MDzYuO1bB1gpNPStxfTg
	5oO+avG3iudqBw1sow49CGkIHR2P/HvdJHVOxannWGYHrDjws6h+iO/erNUOkLsTjti8W+DcHNPIc
	3Oq8t4Bg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyG-0003yI-1n; Mon, 16 Nov 2020 14:59:08 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 41/78] swim: don't call blk_register_region
Date: Mon, 16 Nov 2020 15:57:32 +0100
Message-Id: <20201116145809.410558-42-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The swim driver (unlike various other floppy drivers) doesn't have
magic device nodes for certain modes, and already registers a gendisk
for each of the floppies supported by a device.  Thus the registered
region is a no-op and can be removed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/swim.c | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/drivers/block/swim.c b/drivers/block/swim.c
index 52dd1efa00f9c5..cc6a0bc6c005a7 100644
--- a/drivers/block/swim.c
+++ b/drivers/block/swim.c
@@ -745,18 +745,6 @@ static const struct block_device_operations floppy_fops = {
 	.check_events	 = floppy_check_events,
 };
 
-static struct kobject *floppy_find(dev_t dev, int *part, void *data)
-{
-	struct swim_priv *swd = data;
-	int drive = (*part & 3);
-
-	if (drive >= swd->floppy_count)
-		return NULL;
-
-	*part = 0;
-	return get_disk_and_module(swd->unit[drive].disk);
-}
-
 static int swim_add_floppy(struct swim_priv *swd, enum drive_location location)
 {
 	struct floppy_state *fs = &swd->unit[swd->floppy_count];
@@ -846,9 +834,6 @@ static int swim_floppy_init(struct swim_priv *swd)
 		add_disk(swd->unit[drive].disk);
 	}
 
-	blk_register_region(MKDEV(FLOPPY_MAJOR, 0), 256, THIS_MODULE,
-			    floppy_find, NULL, swd);
-
 	return 0;
 
 exit_put_disks:
@@ -932,8 +917,6 @@ static int swim_remove(struct platform_device *dev)
 	int drive;
 	struct resource *res;
 
-	blk_unregister_region(MKDEV(FLOPPY_MAJOR, 0), 256);
-
 	for (drive = 0; drive < swd->floppy_count; drive++) {
 		del_gendisk(swd->unit[drive].disk);
 		blk_cleanup_queue(swd->unit[drive].disk->queue);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28096.56905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8q-0001vh-L0; Mon, 16 Nov 2020 15:10:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28096.56905; Mon, 16 Nov 2020 15:10:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8p-0001tJ-Am; Mon, 16 Nov 2020 15:10:03 +0000
Received: by outflank-mailman (input) for mailman id 28096;
 Mon, 16 Nov 2020 15:09:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1I-0006ni-6K
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8401f5f8-b40b-4b28-86bc-cbad960be06f;
 Mon, 16 Nov 2020 14:59:24 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyH-0003yd-CW; Mon, 16 Nov 2020 14:59:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1I-0006ni-6K
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:16 +0000
X-Inumbo-ID: 8401f5f8-b40b-4b28-86bc-cbad960be06f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8401f5f8-b40b-4b28-86bc-cbad960be06f;
	Mon, 16 Nov 2020 14:59:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=soDf5n/4IHZXpRDONsC4iRIWKhhdabm7wm1Ix3YyzkY=; b=sXKHf4+KcwIa5fuTcLVQORl6vc
	Y03Gwu4v2aKXYxB2gC70w5AjOtoQE1Q1pXkpIHcqxu1afca2PvwHAIEb13ajBGY5qll0Bc+WJnuHt
	TgCIRD/tv9h7SIzJyRqtqrpqxOYCL5TBFCCOpqvnrOBTaKdroJMy4aFUM6Eu1+1BjsZPfN+VBECoA
	VBKTChEgoSg7gTk+HdF7LfwwqhGnCtekD9xT9Y736twxTwR4aWyi6Skc6DBIDj8x5uz90PEkL1KLh
	CVLnyiuFwPdIlcsawji6tbe7dqrYAdB98VJPkjxf9YLh+GLVDgz+CUZ/XticM28L06ss6gSMN118i
	lq40xEVw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyH-0003yd-CW; Mon, 16 Nov 2020 14:59:09 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 42/78] sd: use __register_blkdev to avoid a modprobe for an unregistered dev_t
Date: Mon, 16 Nov 2020 15:57:33 +0100
Message-Id: <20201116145809.410558-43-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Switch from using blk_register_region to the probe callback passed to
__register_blkdev to disable the request_module call for an unclaimed
dev_t in the SD majors.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/scsi/sd.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a2a4f385833d6c..679c2c02504763 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -630,13 +630,11 @@ static struct scsi_driver sd_template = {
 };
 
 /*
- * Dummy kobj_map->probe function.
- * The default ->probe function will call modprobe, which is
- * pointless as this module is already loaded.
+ * Don't request a new module, as that could deadlock in multipath
+ * environment.
  */
-static struct kobject *sd_default_probe(dev_t devt, int *partno, void *data)
+static void sd_default_probe(dev_t devt)
 {
-	return NULL;
 }
 
 /*
@@ -3525,9 +3523,6 @@ static int sd_remove(struct device *dev)
 
 	free_opal_dev(sdkp->opal_dev);
 
-	blk_register_region(devt, SD_MINORS, NULL,
-			    sd_default_probe, NULL, NULL);
-
 	mutex_lock(&sd_ref_mutex);
 	dev_set_drvdata(dev, NULL);
 	put_device(&sdkp->dev);
@@ -3717,11 +3712,9 @@ static int __init init_sd(void)
 	SCSI_LOG_HLQUEUE(3, printk("init_sd: sd driver entry point\n"));
 
 	for (i = 0; i < SD_MAJORS; i++) {
-		if (register_blkdev(sd_major(i), "sd") != 0)
+		if (__register_blkdev(sd_major(i), "sd", sd_default_probe))
 			continue;
 		majors++;
-		blk_register_region(sd_major(i), SD_MINORS, NULL,
-				    sd_default_probe, NULL, NULL);
 	}
 
 	if (!majors)
@@ -3794,10 +3787,8 @@ static void __exit exit_sd(void)
 
 	class_unregister(&sd_disk_class);
 
-	for (i = 0; i < SD_MAJORS; i++) {
-		blk_unregister_region(sd_major(i), SD_MINORS);
+	for (i = 0; i < SD_MAJORS; i++)
 		unregister_blkdev(sd_major(i), "sd");
-	}
 }
 
 module_init(init_sd);
-- 
2.29.2
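
[Editor's note] The sd patch above replaces blk_register_region() with a
probe callback handed to __register_blkdev(), so an open of an unclaimed
dev_t in an sd major invokes the (empty) probe instead of falling back to
request_module(). A minimal userspace sketch of that dispatch, with all
`model_*` names hypothetical:

```c
/* Sketch of the __register_blkdev() probe idea: a registered major may
 * supply a probe callback, and the core calls it for an unclaimed
 * dev_t instead of the legacy request_module() fallback. */
#include <assert.h>
#include <stddef.h>

#define MODEL_MAX_MAJORS 16

typedef void (*model_probe_fn)(unsigned int devt);

static model_probe_fn model_majors[MODEL_MAX_MAJORS];
static int model_modprobe_calls; /* counts legacy fallback invocations */

int model_register_blkdev(unsigned int major, model_probe_fn probe)
{
	if (major >= MODEL_MAX_MAJORS || model_majors[major])
		return -1;
	model_majors[major] = probe;
	return 0;
}

/* sd's probe is deliberately empty: the driver module is already
 * loaded, so re-requesting it is pointless (and, per the patch, could
 * deadlock in a multipath environment). */
void model_sd_default_probe(unsigned int devt)
{
	(void)devt;
}

/* Core lookup for a dev_t with no bound gendisk. */
void model_blkdev_get(unsigned int major, unsigned int devt)
{
	if (major < MODEL_MAX_MAJORS && model_majors[major])
		model_majors[major](devt);
	else
		model_modprobe_calls++; /* legacy request_module() path */
}
```

Registering the no-op probe per major is what lets the diff drop both
blk_register_region() calls and the unregister loop in exit_sd().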



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28099.56917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8s-00022d-VF; Mon, 16 Nov 2020 15:10:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28099.56917; Mon, 16 Nov 2020 15:10:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8r-00020c-Mj; Mon, 16 Nov 2020 15:10:05 +0000
Received: by outflank-mailman (input) for mailman id 28099;
 Mon, 16 Nov 2020 15:10:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2Q-0006ni-9F
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b4e9507-09b7-40e8-b25c-66f875b8d6f1;
 Mon, 16 Nov 2020 14:59:40 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyY-00044s-Tr; Mon, 16 Nov 2020 14:59:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg2Q-0006ni-9F
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:26 +0000
X-Inumbo-ID: 4b4e9507-09b7-40e8-b25c-66f875b8d6f1
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4b4e9507-09b7-40e8-b25c-66f875b8d6f1;
	Mon, 16 Nov 2020 14:59:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=TGRxj3P5fN3QWbEKVTDt35+3igPcA4rCa+2EvBLtC8g=; b=C+aFntyXdJbXcDXW0zdEm0MZf/
	Q8Zx0N1Bi1RkYFpnimVowbz0rcddgCEhzUASaPfj19d8nD2OoiQ7zBRz9XuVFxvMc+YbxWMyHzaTx
	PYzv9LtI9kFkIcqvnRgzJ1QSX/8XbEjuPMJK4EkCHxJJNOZAq17DsBe6jQ87rPiGWnpudzQyM/EHf
	XW5TF2D/8wJwMwlrqTogsg01StwU1f5L48SB2l4E2dG+cGpvb0qjX+vW1trzMnD3BduvbANSwGVcX
	fXDT/3TaiGf3bIXETdbDeP/h5OUIqBlJlOYSObKbRBS9M21yv36cQQIio98w72OzH3KuXMzUGVMV3
	ibdbwukw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyY-00044s-Tr; Mon, 16 Nov 2020 14:59:27 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 54/78] block: remove a duplicate __disk_get_part prototype
Date: Mon, 16 Nov 2020 15:57:45 +0100
Message-Id: <20201116145809.410558-55-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/genhd.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 46553d6d602563..22f5b9fd96f8bf 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -250,7 +250,6 @@ static inline dev_t part_devt(struct hd_struct *part)
 	return part_to_dev(part)->devt;
 }
 
-extern struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
 extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
 
 static inline void disk_put_part(struct hd_struct *part)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28100.56932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8w-0002E5-Et; Mon, 16 Nov 2020 15:10:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28100.56932; Mon, 16 Nov 2020 15:10:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8u-0002B9-P1; Mon, 16 Nov 2020 15:10:08 +0000
Received: by outflank-mailman (input) for mailman id 28100;
 Mon, 16 Nov 2020 15:10:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3i-0006ni-CT
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb8d4a81-ae8d-4979-904f-713bf13dd768;
 Mon, 16 Nov 2020 15:00:02 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyo-0004DN-O9; Mon, 16 Nov 2020 14:59:43 +0000
X-Inumbo-ID: cb8d4a81-ae8d-4979-904f-713bf13dd768
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=x6ThtvQgvwHIscZfOUELeo3Uy7svW74ISAsOiTBJTd8=; b=wYjWzLfzQlz5nMoIrs2dUqAKes
	QFt+YLi/Ow0cGCkVgOnqteaZCNuUDwCov7PJB5hvC6nYp5cVF5aDTygDVep1VuFeVLzibRXVrb0PG
	FwRPx4XB6Y0FbjEFe92tnC6Dld2W5KJSeAc8kJg120u3mpOcZ4tc9ydP5qFtsdEWA6KBA1phfelhv
	ykkNgsnmusDbP9O5T8K6xNSTSx6pFYwqjKwfuv8O23V0jhAmUMdhinoTTyaCbb1UQNoqaNS8S8OhY
	IkQHzpe8v4SVvuc9YJmie1VvzG8FrG8PonETyiwJgsN7wW5Xu/ggXJWk0IApLZbQQsH5DPzic6AvO
	6QnJMDVw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 64/78] dm: simplify flush_bio initialization in __send_empty_flush
Date: Mon, 16 Nov 2020 15:57:55 +0100
Message-Id: <20201116145809.410558-65-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

We don't actually need the struct block_device to initialize a bio, so
switch from using bio_set_dev to setting up bi_disk directly (bi_partno
is always zero here and has already been cleared by bio_init).

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 54739f1b579bc8..6d7eb72d41f9ea 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1422,18 +1422,12 @@ static int __send_empty_flush(struct clone_info *ci)
 	 */
 	bio_init(&flush_bio, NULL, 0);
 	flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
+	flush_bio.bi_disk = ci->io->md->disk;
+	bio_associate_blkg(&flush_bio);
+
 	ci->bio = &flush_bio;
 	ci->sector_count = 0;
 
-	/*
-	 * Empty flush uses a statically initialized bio, as the base for
-	 * cloning.  However, blkg association requires that a bdev is
-	 * associated with a gendisk, which doesn't happen until the bdev is
-	 * opened.  So, blkg association is done at issue time of the flush
-	 * rather than when the device is created in alloc_dev().
-	 */
-	bio_set_dev(ci->bio, ci->io->md->bdev);
-
 	BUG_ON(bio_has_data(ci->bio));
 	while ((ti = dm_table_get_target(ci->map, target_nr++)))
 		__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28103.56944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8z-0002N0-TY; Mon, 16 Nov 2020 15:10:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28103.56944; Mon, 16 Nov 2020 15:10:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg8x-0002Kd-M4; Mon, 16 Nov 2020 15:10:11 +0000
Received: by outflank-mailman (input) for mailman id 28103;
 Mon, 16 Nov 2020 15:10:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2G-0006ni-8w
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e198399-0ace-44a4-a46c-240666608b4c;
 Mon, 16 Nov 2020 14:59:36 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyW-00043h-2j; Mon, 16 Nov 2020 14:59:24 +0000
X-Inumbo-ID: 7e198399-0ace-44a4-a46c-240666608b4c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=B9SmRAKqCfACKKyGENFkBsseQWLE7CqM6G32RVvIzu4=; b=YQQqHRpSkrytYfsEPQgYxForVD
	ablAk/WRbGXtgxrxwNPGmkrzt1O8iJfhT5R2bc9kw1ILZs2JspT6CnDAtWi6H1YUByvbdA5f9WyhN
	uCO93gIlHgtuIV9TH2X+F3LmXC0K4QQezeb3WfKxjzE/QxnV9BXZKeQZppJYBgbbWJVaY6WHdV6Wz
	o3+F4X3jMir+rU9NgAMuiknEYgx5X5aTQ7lEP6wvCebgL1xLVb4qQYSvAXNIGsZclY1CJa85yva1t
	3ZOV/mD9/yBkT9Ag7uKj999ThoEG+ydI187GuglZeRckSc1dxAQ1vI9VhLWGyx3iMOWxNRPx1e+V1
	8b8xJE+w==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subject: [PATCH 52/78] block: switch gendisk lookup to a simple xarray
Date: Mon, 16 Nov 2020 15:57:43 +0100
Message-Id: <20201116145809.410558-53-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that bdev_map is only used for finding gendisks, we can use
a simple xarray instead of the region tracking structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 block/genhd.c         | 208 ++++++++----------------------------------
 include/linux/genhd.h |   7 --
 2 files changed, 37 insertions(+), 178 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index dc8690bc281c16..4a224a3c8e1071 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -27,15 +27,7 @@
 
 static struct kobject *block_depr;
 
-struct bdev_map {
-	struct bdev_map *next;
-	dev_t dev;
-	unsigned long range;
-	struct module *owner;
-	struct kobject *(*probe)(dev_t, int *, void *);
-	int (*lock)(dev_t, void *);
-	void *data;
-} *bdev_map[255];
+static DEFINE_XARRAY(bdev_map);
 static DEFINE_MUTEX(bdev_map_lock);
 
 /* for extended dynamic devt allocation, currently only one major is used */
@@ -646,85 +638,26 @@ static char *bdevt_str(dev_t devt, char *buf)
 	return buf;
 }
 
-/*
- * Register device numbers dev..(dev+range-1)
- * range must be nonzero
- * The hash chain is sorted on range, so that subranges can override.
- */
-void blk_register_region(dev_t devt, unsigned long range, struct module *module,
-			 struct kobject *(*probe)(dev_t, int *, void *),
-			 int (*lock)(dev_t, void *), void *data)
-{
-	unsigned n = MAJOR(devt + range - 1) - MAJOR(devt) + 1;
-	unsigned index = MAJOR(devt);
-	unsigned i;
-	struct bdev_map *p;
-
-	n = min(n, 255u);
-	p = kmalloc_array(n, sizeof(struct bdev_map), GFP_KERNEL);
-	if (p == NULL)
-		return;
-
-	for (i = 0; i < n; i++, p++) {
-		p->owner = module;
-		p->probe = probe;
-		p->lock = lock;
-		p->dev = devt;
-		p->range = range;
-		p->data = data;
-	}
+static void blk_register_region(struct gendisk *disk)
+{
+	int i;
 
 	mutex_lock(&bdev_map_lock);
-	for (i = 0, p -= n; i < n; i++, p++, index++) {
-		struct bdev_map **s = &bdev_map[index % 255];
-		while (*s && (*s)->range < range)
-			s = &(*s)->next;
-		p->next = *s;
-		*s = p;
+	for (i = 0; i < disk->minors; i++) {
+		if (xa_insert(&bdev_map, disk_devt(disk) + i, disk, GFP_KERNEL))
+			WARN_ON_ONCE(1);
 	}
 	mutex_unlock(&bdev_map_lock);
 }
-EXPORT_SYMBOL(blk_register_region);
 
-void blk_unregister_region(dev_t devt, unsigned long range)
+static void blk_unregister_region(struct gendisk *disk)
 {
-	unsigned n = MAJOR(devt + range - 1) - MAJOR(devt) + 1;
-	unsigned index = MAJOR(devt);
-	unsigned i;
-	struct bdev_map *found = NULL;
+	int i;
 
 	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < min(n, 255u); i++, index++) {
-		struct bdev_map **s;
-		for (s = &bdev_map[index % 255]; *s; s = &(*s)->next) {
-			struct bdev_map *p = *s;
-			if (p->dev == devt && p->range == range) {
-				*s = p->next;
-				if (!found)
-					found = p;
-				break;
-			}
-		}
-	}
+	for (i = 0; i < disk->minors; i++)
+		xa_erase(&bdev_map, disk_devt(disk) + i);
 	mutex_unlock(&bdev_map_lock);
-	kfree(found);
-}
-EXPORT_SYMBOL(blk_unregister_region);
-
-static struct kobject *exact_match(dev_t devt, int *partno, void *data)
-{
-	struct gendisk *p = data;
-
-	return &disk_to_dev(p)->kobj;
-}
-
-static int exact_lock(dev_t devt, void *data)
-{
-	struct gendisk *p = data;
-
-	if (!get_disk_and_module(p))
-		return -1;
-	return 0;
 }
 
 static void disk_scan_partitions(struct gendisk *disk)
@@ -870,8 +803,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		blk_register_region(disk_devt(disk), disk->minors, NULL,
-				    exact_match, exact_lock, disk);
+		blk_register_region(disk);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -984,7 +916,7 @@ void del_gendisk(struct gendisk *disk)
 	blk_unregister_queue(disk);
 	
 	if (!(disk->flags & GENHD_FL_HIDDEN))
-		blk_unregister_region(disk_devt(disk), disk->minors);
+		blk_unregister_region(disk);
 	/*
 	 * Remove gendisk pointer from idr so that it cannot be looked up
 	 * while RCU period before freeing gendisk is running to prevent
@@ -1050,54 +982,22 @@ static void request_gendisk_module(dev_t devt)
 		request_module("block-major-%d", MAJOR(devt));
 }
 
-static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
+static bool get_disk_and_module(struct gendisk *disk)
 {
-	struct kobject *kobj;
-	struct bdev_map *p;
-	unsigned long best = ~0UL;
-
-retry:
-	mutex_lock(&bdev_map_lock);
-	for (p = bdev_map[MAJOR(dev) % 255]; p; p = p->next) {
-		struct kobject *(*probe)(dev_t, int *, void *);
-		struct module *owner;
-		void *data;
-
-		if (p->dev > dev || p->dev + p->range - 1 < dev)
-			continue;
-		if (p->range - 1 >= best)
-			break;
-		if (!try_module_get(p->owner))
-			continue;
-		owner = p->owner;
-		data = p->data;
-		probe = p->probe;
-		best = p->range - 1;
-		*partno = dev - p->dev;
-
-		if (!probe) {
-			mutex_unlock(&bdev_map_lock);
-			module_put(owner);
-			request_gendisk_module(dev);
-			goto retry;
-		}
+	struct module *owner;
 
-		if (p->lock && p->lock(dev, data) < 0) {
-			module_put(owner);
-			continue;
-		}
-		mutex_unlock(&bdev_map_lock);
-		kobj = probe(dev, partno, data);
-		/* Currently ->owner protects _only_ ->probe() itself. */
+	if (!disk->fops)
+		return false;
+	owner = disk->fops->owner;
+	if (owner && !try_module_get(owner))
+		return false;
+	if (!kobject_get_unless_zero(&disk_to_dev(disk)->kobj)) {
 		module_put(owner);
-		if (kobj)
-			return dev_to_disk(kobj_to_dev(kobj));
-		goto retry;
+		return false;
 	}
-	mutex_unlock(&bdev_map_lock);
-	return NULL;
-}
+	return true;
 
+}
 
 /**
  * get_gendisk - get partitioning information for a given device
@@ -1116,7 +1016,19 @@ struct gendisk *get_gendisk(dev_t devt, int *partno)
 	might_sleep();
 
 	if (MAJOR(devt) != BLOCK_EXT_MAJOR) {
-		disk = lookup_gendisk(devt, partno);
+		mutex_lock(&bdev_map_lock);
+		disk = xa_load(&bdev_map, devt);
+		if (!disk) {
+			mutex_unlock(&bdev_map_lock);
+			request_gendisk_module(devt);
+			mutex_lock(&bdev_map_lock);
+			disk = xa_load(&bdev_map, devt);
+		}
+		if (disk && !get_disk_and_module(disk))
+			disk = NULL;
+		if (disk)
+			*partno = devt - disk_devt(disk);
+		mutex_unlock(&bdev_map_lock);
 	} else {
 		struct hd_struct *part;
 
@@ -1320,21 +1232,6 @@ static const struct seq_operations partitions_op = {
 };
 #endif
 
-static void bdev_map_init(void)
-{
-	struct bdev_map *base;
-	int i;
-
-	base = kzalloc(sizeof(*base), GFP_KERNEL);
-	if (!base)
-		panic("cannot allocate bdev_map");
-
-	base->dev = 1;
-	base->range = ~0 ;
-	for (i = 0; i < 255; i++)
-		bdev_map[i] = base;
-}
-
 static int __init genhd_device_init(void)
 {
 	int error;
@@ -1343,7 +1240,6 @@ static int __init genhd_device_init(void)
 	error = class_register(&block_class);
 	if (unlikely(error))
 		return error;
-	bdev_map_init();
 	blk_dev_init();
 
 	register_blkdev(BLOCK_EXT_MAJOR, "blkext");
@@ -1892,35 +1788,6 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 }
 EXPORT_SYMBOL(__alloc_disk_node);
 
-/**
- * get_disk_and_module - increments the gendisk and gendisk fops module refcount
- * @disk: the struct gendisk to increment the refcount for
- *
- * This increments the refcount for the struct gendisk, and the gendisk's
- * fops module owner.
- *
- * Context: Any context.
- */
-struct kobject *get_disk_and_module(struct gendisk *disk)
-{
-	struct module *owner;
-	struct kobject *kobj;
-
-	if (!disk->fops)
-		return NULL;
-	owner = disk->fops->owner;
-	if (owner && !try_module_get(owner))
-		return NULL;
-	kobj = kobject_get_unless_zero(&disk_to_dev(disk)->kobj);
-	if (kobj == NULL) {
-		module_put(owner);
-		return NULL;
-	}
-	return kobj;
-
-}
-EXPORT_SYMBOL(get_disk_and_module);
-
 /**
  * put_disk - decrements the gendisk refcount
  * @disk: the struct gendisk to decrement the refcount for
@@ -1957,7 +1824,6 @@ void put_disk_and_module(struct gendisk *disk)
 		module_put(owner);
 	}
 }
-EXPORT_SYMBOL(put_disk_and_module);
 
 static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 {
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 04f6a6bf577a90..46553d6d602563 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -338,15 +338,8 @@ int blk_add_partitions(struct gendisk *disk, struct block_device *bdev);
 int blk_drop_partitions(struct block_device *bdev);
 
 extern struct gendisk *__alloc_disk_node(int minors, int node_id);
-extern struct kobject *get_disk_and_module(struct gendisk *disk);
 extern void put_disk(struct gendisk *disk);
 extern void put_disk_and_module(struct gendisk *disk);
-extern void blk_register_region(dev_t devt, unsigned long range,
-			struct module *module,
-			struct kobject *(*probe)(dev_t, int *, void *),
-			int (*lock)(dev_t, void *),
-			void *data);
-extern void blk_unregister_region(dev_t devt, unsigned long range);
 
 #define alloc_disk_node(minors, node_id)				\
 ({									\
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28105.56952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg91-0002VA-VU; Mon, 16 Nov 2020 15:10:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28105.56952; Mon, 16 Nov 2020 15:10:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg90-0002T5-I6; Mon, 16 Nov 2020 15:10:14 +0000
Received: by outflank-mailman (input) for mailman id 28105;
 Mon, 16 Nov 2020 15:10:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1r-0006ni-7r
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ee13d52-ddb8-4201-bbd8-a73e5196a8ef;
 Mon, 16 Nov 2020 14:59:30 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyQ-000416-5g; Mon, 16 Nov 2020 14:59:18 +0000
X-Inumbo-ID: 3ee13d52-ddb8-4201-bbd8-a73e5196a8ef
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=VpFTnx8mSRVx3CfXvp0WWOpahH+qOqL0mP+Plef//Dw=; b=QuaAvRf47Sgn9ESoWOtbCEEgKf
	SoaxupntT7YcBZyDBufAz89YGCb7+GAKx9JrFGTvtb5gkr5Mrj1a5W/A5DrAzDitUTF1NGhPPsRi1
	SwVqYv0iBeIK7LPoNhiBiDMfP76vlVOCz5Xf7CxEP2Fi3mTEXQJHqC/WuP7Qu3AmsaO2RNIWflVNw
	HDQluFLq+Cexw+XqCWcvsLMuEyrXOagZfpeUVxdZeb9NaJWSWPy5zSF1aMVGD0p6spy3vB3r2LtM3
	DuVbxT8CDDONVntDglOll0hYSPlE1qzQ6tCXf2v/dx8BmBOznqiTmYL594fNkkwXRuptP0wmQJyGf
	43+tbRTQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 48/78] amiflop: use separate gendisks for Amiga vs MS-DOS mode
Date: Mon, 16 Nov 2020 15:57:39 +0100
Message-Id: <20201116145809.410558-49-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use separate gendisks (which share a tag_set) for the native Amiga vs
the MS-DOS mode instead of redirecting the gendisk lookup using a probe
callback.  This avoids potential problems with aliased block_device
instances and will eventually allow for removing the blk_register_region
framework.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/amiflop.c | 98 +++++++++++++++++++++++------------------
 1 file changed, 55 insertions(+), 43 deletions(-)

diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index 71c2b156455860..9e2d0c6a387721 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -201,7 +201,7 @@ struct amiga_floppy_struct {
 	int busy;			/* true when drive is active */
 	int dirty;			/* true when trackbuf is not on disk */
 	int status;			/* current error code for unit */
-	struct gendisk *gendisk;
+	struct gendisk *gendisk[2];
 	struct blk_mq_tag_set tag_set;
 };
 
@@ -1669,6 +1669,11 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
 		return -EBUSY;
 	}
 
+	if (unit[drive].type->code == FD_NODRIVE) {
+		mutex_unlock(&amiflop_mutex);
+		return -ENXIO;
+	}
+
 	if (mode & (FMODE_READ|FMODE_WRITE)) {
 		bdev_check_media_change(bdev);
 		if (mode & FMODE_WRITE) {
@@ -1695,7 +1700,7 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
 	unit[drive].dtype=&data_types[system];
 	unit[drive].blocks=unit[drive].type->heads*unit[drive].type->tracks*
 		data_types[system].sects*unit[drive].type->sect_mult;
-	set_capacity(unit[drive].gendisk, unit[drive].blocks);
+	set_capacity(unit[drive].gendisk[system], unit[drive].blocks);
 
 	printk(KERN_INFO "fd%d: accessing %s-disk with %s-layout\n",drive,
 	       unit[drive].type->name, data_types[system].name);
@@ -1772,36 +1777,68 @@ static const struct blk_mq_ops amiflop_mq_ops = {
 	.queue_rq = amiflop_queue_rq,
 };
 
-static struct gendisk *fd_alloc_disk(int drive)
+static int fd_alloc_disk(int drive, int system)
 {
 	struct gendisk *disk;
 
 	disk = alloc_disk(1);
 	if (!disk)
 		goto out;
-
-	disk->queue = blk_mq_init_sq_queue(&unit[drive].tag_set, &amiflop_mq_ops,
-						2, BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(disk->queue)) {
-		disk->queue = NULL;
+	disk->queue = blk_mq_init_queue(&unit[drive].tag_set);
+	if (IS_ERR(disk->queue))
 		goto out_put_disk;
-	}
 
+	disk->major = FLOPPY_MAJOR;
+	disk->first_minor = drive + system;
+	disk->fops = &floppy_fops;
+	disk->events = DISK_EVENT_MEDIA_CHANGE;
+	if (system)
+		sprintf(disk->disk_name, "fd%d_msdos", drive);
+	else
+		sprintf(disk->disk_name, "fd%d", drive);
+	disk->private_data = &unit[drive];
+	set_capacity(disk, 880 * 2);
+
+	unit[drive].gendisk[system] = disk;
+	add_disk(disk);
+	return 0;
+
+out_put_disk:
+	disk->queue = NULL;
+	put_disk(disk);
+out:
+	return -ENOMEM;
+}
+
+static int fd_alloc_drive(int drive)
+{
 	unit[drive].trackbuf = kmalloc(FLOPPY_MAX_SECTORS * 512, GFP_KERNEL);
 	if (!unit[drive].trackbuf)
-		goto out_cleanup_queue;
+		goto out;
 
-	return disk;
+	memset(&unit[drive].tag_set, 0, sizeof(unit[drive].tag_set));
+	unit[drive].tag_set.ops = &amiflop_mq_ops;
+	unit[drive].tag_set.nr_hw_queues = 1;
+	unit[drive].tag_set.nr_maps = 1;
+	unit[drive].tag_set.queue_depth = 2;
+	unit[drive].tag_set.numa_node = NUMA_NO_NODE;
+	unit[drive].tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	if (blk_mq_alloc_tag_set(&unit[drive].tag_set))
+		goto out_cleanup_trackbuf;
 
-out_cleanup_queue:
-	blk_cleanup_queue(disk->queue);
-	disk->queue = NULL;
+	pr_cont(" fd%d", drive);
+
+	if (fd_alloc_disk(drive, 0) || fd_alloc_disk(drive, 1))
+		goto out_cleanup_tagset;
+	return 0;
+
+out_cleanup_tagset:
 	blk_mq_free_tag_set(&unit[drive].tag_set);
-out_put_disk:
-	put_disk(disk);
+out_cleanup_trackbuf:
+	kfree(unit[drive].trackbuf);
 out:
 	unit[drive].type->code = FD_NODRIVE;
-	return NULL;
+	return -ENOMEM;
 }
 
 static int __init fd_probe_drives(void)
@@ -1812,29 +1849,16 @@ static int __init fd_probe_drives(void)
 	drives=0;
 	nomem=0;
 	for(drive=0;drive<FD_MAX_UNITS;drive++) {
-		struct gendisk *disk;
 		fd_probe(drive);
 		if (unit[drive].type->code == FD_NODRIVE)
 			continue;
 
-		disk = fd_alloc_disk(drive);
-		if (!disk) {
+		if (fd_alloc_drive(drive) < 0) {
 			pr_cont(" no mem for fd%d", drive);
 			nomem = 1;
 			continue;
 		}
-		unit[drive].gendisk = disk;
 		drives++;
-
-		pr_cont(" fd%d",drive);
-		disk->major = FLOPPY_MAJOR;
-		disk->first_minor = drive;
-		disk->fops = &floppy_fops;
-		disk->events = DISK_EVENT_MEDIA_CHANGE;
-		sprintf(disk->disk_name, "fd%d", drive);
-		disk->private_data = &unit[drive];
-		set_capacity(disk, 880*2);
-		add_disk(disk);
 	}
 	if ((drives > 0) || (nomem == 0)) {
 		if (drives == 0)
@@ -1846,15 +1870,6 @@ static int __init fd_probe_drives(void)
 	return -ENOMEM;
 }
  
-static struct kobject *floppy_find(dev_t dev, int *part, void *data)
-{
-	int drive = *part & 3;
-	if (unit[drive].type->code == FD_NODRIVE)
-		return NULL;
-	*part = 0;
-	return get_disk_and_module(unit[drive].gendisk);
-}
-
 static int __init amiga_floppy_probe(struct platform_device *pdev)
 {
 	int i, ret;
@@ -1884,9 +1899,6 @@ static int __init amiga_floppy_probe(struct platform_device *pdev)
 	if (fd_probe_drives() < 1) /* No usable drives */
 		goto out_probe;
 
-	blk_register_region(MKDEV(FLOPPY_MAJOR, 0), 256, THIS_MODULE,
-				floppy_find, NULL, NULL);
-
 	/* initialize variables */
 	timer_setup(&motor_on_timer, motor_on_callback, 0);
 	motor_on_timer.expires = 0;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28113.56966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg95-0002dm-77; Mon, 16 Nov 2020 15:10:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28113.56966; Mon, 16 Nov 2020 15:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg93-0002bx-Gp; Mon, 16 Nov 2020 15:10:17 +0000
Received: by outflank-mailman (input) for mailman id 28113;
 Mon, 16 Nov 2020 15:10:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3T-0006ni-Bz
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13503b00-3caa-401f-a377-c6927975d40c;
 Mon, 16 Nov 2020 14:59:58 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyt-0004FO-7Z; Mon, 16 Nov 2020 14:59:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg3T-0006ni-Bz
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:31 +0000
X-Inumbo-ID: 13503b00-3caa-401f-a377-c6927975d40c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 13503b00-3caa-401f-a377-c6927975d40c;
	Mon, 16 Nov 2020 14:59:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=GDdahn/aHFPgbq/Tch1XnIWDhbuP0LNShW0kbWCU2DI=; b=uzx32xbEXDeOyc3BQeMYzcMEP+
	VpcFJj/mppjMWyEpC2f1TAGdERyyKZRFPMS6LQb/FLhD/jkr9TUTULDzsay+mncANq4u7ucvLomnp
	Kww7QziQHj9SrxdP5uPA+fxQzCqelMZmiy+dj6oy7m0yAUJH/x3Enl5GkdSBQ3vLRg56BDDTvBg5b
	/6VF20e7eXN/eur5CKoLa0f/4Y9uesPi5j923mPMuLXs+nJmuwvB4GqU9KGScPD2A37u2GF4wHIxT
	bXUovXGeQ3icRQ7sRczYatscyuMTybV1WBblJX+wktL3ld8Vx/0lixyLEk8QqnbHMtclj/KTJswjw
	cojmMX0g==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyt-0004FO-7Z; Mon, 16 Nov 2020 14:59:47 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 67/78] block: simplify the block device claiming interface
Date: Mon, 16 Nov 2020 15:57:58 +0100
Message-Id: <20201116145809.410558-68-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Stop passing the whole device as a separate argument given that it
can be trivially deduced.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c   | 12 +++-----
 fs/block_dev.c         | 69 +++++++++++++++++++-----------------------
 include/linux/blkdev.h |  6 ++--
 3 files changed, 38 insertions(+), 49 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index b42c728620c9e4..599e94a7e69259 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1071,7 +1071,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	struct file	*file;
 	struct inode	*inode;
 	struct address_space *mapping;
-	struct block_device *claimed_bdev = NULL;
 	int		error;
 	loff_t		size;
 	bool		partscan;
@@ -1090,8 +1089,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	 * here to avoid changing device under exclusive owner.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
-		error = bd_prepare_to_claim(bdev, claimed_bdev, loop_configure);
+		error = bd_prepare_to_claim(bdev, loop_configure);
 		if (error)
 			goto out_putf;
 	}
@@ -1178,15 +1176,15 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	mutex_unlock(&loop_ctl_mutex);
 	if (partscan)
 		loop_reread_partitions(lo, bdev);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 	return 0;
 
 out_unlock:
 	mutex_unlock(&loop_ctl_mutex);
 out_bdev:
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 out_putf:
 	fput(file);
 out:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index f36788d7699302..fd4df132a97590 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -110,24 +110,20 @@ EXPORT_SYMBOL(invalidate_bdev);
 int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
 			loff_t lstart, loff_t lend)
 {
-	struct block_device *claimed_bdev = NULL;
-	int err;
-
 	/*
 	 * If we don't hold exclusive handle for the device, upgrade to it
 	 * while we discard the buffer cache to avoid discarding buffers
 	 * under live filesystem.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
-		err = bd_prepare_to_claim(bdev, claimed_bdev,
-					  truncate_bdev_range);
+		int err = bd_prepare_to_claim(bdev, truncate_bdev_range);
 		if (err)
 			return err;
 	}
+
 	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, truncate_bdev_range);
 	return 0;
 }
 EXPORT_SYMBOL(truncate_bdev_range);
@@ -1055,7 +1051,6 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
 /**
  * bd_prepare_to_claim - claim a block device
  * @bdev: block device of interest
- * @whole: the whole device containing @bdev, may equal @bdev
  * @holder: holder trying to claim @bdev
  *
  * Claim @bdev.  This function fails if @bdev is already claimed by another
@@ -1065,9 +1060,10 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
  * RETURNS:
  * 0 if @bdev can be claimed, -EBUSY otherwise.
  */
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder)
+int bd_prepare_to_claim(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev->bd_contains;
+
 retry:
 	spin_lock(&bdev_lock);
 	/* if someone else claimed, fail */
@@ -1107,15 +1103,15 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
 /**
  * bd_finish_claiming - finish claiming of a block device
  * @bdev: block device of interest
- * @whole: whole block device
  * @holder: holder that has claimed @bdev
  *
  * Finish exclusive open of a block device. Mark the device as exlusively
  * open by the holder and wake up all waiters for exclusive open to finish.
  */
-static void bd_finish_claiming(struct block_device *bdev,
-		struct block_device *whole, void *holder)
+static void bd_finish_claiming(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev->bd_contains;
+
 	spin_lock(&bdev_lock);
 	BUG_ON(!bd_may_claim(bdev, whole, holder));
 	/*
@@ -1140,11 +1136,10 @@ static void bd_finish_claiming(struct block_device *bdev,
  * also used when exclusive open is not actually desired and we just needed
  * to block other exclusive openers for a while.
  */
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		       void *holder)
+void bd_abort_claiming(struct block_device *bdev, void *holder)
 {
 	spin_lock(&bdev_lock);
-	bd_clear_claiming(whole, holder);
+	bd_clear_claiming(bdev->bd_contains, holder);
 	spin_unlock(&bdev_lock);
 }
 EXPORT_SYMBOL(bd_abort_claiming);
@@ -1439,7 +1434,7 @@ static int bdev_get_gendisk(struct gendisk *disk)
 static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
-	struct block_device *whole = NULL, *claiming = NULL;
+	struct block_device *whole = NULL;
 	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 	bool first_open = false, unblock_events = true, need_restart;
@@ -1460,11 +1455,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 
 	if (!for_part && (mode & FMODE_EXCL)) {
 		WARN_ON_ONCE(!holder);
-		if (whole)
-			claiming = whole;
-		else
-			claiming = bdev;
-		ret = bd_prepare_to_claim(bdev, claiming, holder);
+		ret = bd_prepare_to_claim(bdev, holder);
 		if (ret)
 			goto out_put_whole;
 	}
@@ -1541,21 +1532,23 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		}
 	}
 	bdev->bd_openers++;
-	if (for_part)
+	if (for_part) {
 		bdev->bd_part_count++;
-	if (claiming)
-		bd_finish_claiming(bdev, claiming, holder);
+	} else if (mode & FMODE_EXCL) {
+		bd_finish_claiming(bdev, holder);
 
-	/*
-	 * Block event polling for write claims if requested.  Any write holder
-	 * makes the write_holder state stick until all are released.  This is
-	 * good enough and tracking individual writeable reference is too
-	 * fragile given the way @mode is used in blkdev_get/put().
-	 */
-	if (claiming && (mode & FMODE_WRITE) && !bdev->bd_write_holder &&
-	    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
-		bdev->bd_write_holder = true;
-		unblock_events = false;
+		/*
+		 * Block event polling for write claims if requested.  Any write
+		 * holder makes the write_holder state stick until all are
+		 * released.  This is good enough and tracking individual
+		 * writeable reference is too fragile given the way @mode is
+		 * used in blkdev_get/put().
+		 */
+		if ((mode & FMODE_WRITE) && !bdev->bd_write_holder &&
+		    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
+			bdev->bd_write_holder = true;
+			unblock_events = false;
+		}
 	}
 	mutex_unlock(&bdev->bd_mutex);
 
@@ -1576,8 +1569,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		__blkdev_put(bdev->bd_contains, mode, 1);
 	bdev->bd_contains = NULL;
  out_unlock_bdev:
-	if (claiming)
-		bd_abort_claiming(bdev, claiming, holder);
+	if (!for_part && (mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, holder);
 	mutex_unlock(&bdev->bd_mutex);
 	disk_unblock_events(disk);
  out_put_whole:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 044d9dd159d882..696b2f9c5529d8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1988,10 +1988,8 @@ void blkdev_show(struct seq_file *seqf, off_t offset);
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 		void *holder);
 struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder);
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder);
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		void *holder);
+int bd_prepare_to_claim(struct block_device *bdev, void *holder);
+void bd_abort_claiming(struct block_device *bdev, void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 
 struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28118.56981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg98-0002o5-LX; Mon, 16 Nov 2020 15:10:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28118.56981; Mon, 16 Nov 2020 15:10:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg97-0002m3-4i; Mon, 16 Nov 2020 15:10:21 +0000
Received: by outflank-mailman (input) for mailman id 28118;
 Mon, 16 Nov 2020 15:10:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2L-0006ni-94
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4347489-f748-45ea-af7e-05d83ce6f9d0;
 Mon, 16 Nov 2020 14:59:38 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyX-00044O-9l; Mon, 16 Nov 2020 14:59:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg2L-0006ni-94
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:21 +0000
X-Inumbo-ID: e4347489-f748-45ea-af7e-05d83ce6f9d0
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e4347489-f748-45ea-af7e-05d83ce6f9d0;
	Mon, 16 Nov 2020 14:59:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=JcB9e14ZTBoVWi3xdlC8mpSR3JW9dN3IR752p30j0qE=; b=YOFzGbUV9VEtyNxwp7qzlkBkkA
	Xn0oawKxqvp0rAIqeYWfpnGW5iMkDHaT9Q19LnRvvfRleFF+dEeV1rIXFrxzZ6V3OKsv1gqggDose
	CZ8HXwPH00Ft2bAKbkVkNVOZSwCoCf+BbjKAUSbeaDWhW2TFIaXsT/mf0vQ14ewbGqO18nmazvj5g
	b1Wy0xuDKhqpZX+ptswIwNRpIl/sRXy2DRT6yF/1ufD4XI4Nw45co8Yphscy9XmWqZZmCKt0hG1sl
	9Nlddjx5eT+Hbi58Z8JKvsvZh3X28RVFCimEe1Pj+dy6gOSZ5ekmYcmi5bYlS8agcooKnl5AoCpPZ
	e88oTXSg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyX-00044O-9l; Mon, 16 Nov 2020 14:59:25 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 53/78] blk-cgroup: fix a hd_struct leak in blkcg_fill_root_iostats
Date: Mon, 16 Nov 2020 15:57:44 +0100
Message-Id: <20201116145809.410558-54-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Every disk_get_part call needs to be paired with a disk_put_part call.

Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index c68bdf58c9a6e1..54fbe1e80cc41a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -849,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
 			u64_stats_update_end(&blkg->iostat.sync);
 		}
+		disk_put_part(part);
 	}
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28125.56990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9A-0002tZ-GA; Mon, 16 Nov 2020 15:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28125.56990; Mon, 16 Nov 2020 15:10:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg99-0002sC-9q; Mon, 16 Nov 2020 15:10:23 +0000
Received: by outflank-mailman (input) for mailman id 28125;
 Mon, 16 Nov 2020 15:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg47-0006ni-DL
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab00e9e7-f953-400c-9bc6-bf8d3ca7085c;
 Mon, 16 Nov 2020 15:00:11 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz4-0004J1-6I; Mon, 16 Nov 2020 14:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg47-0006ni-DL
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:11 +0000
X-Inumbo-ID: ab00e9e7-f953-400c-9bc6-bf8d3ca7085c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ab00e9e7-f953-400c-9bc6-bf8d3ca7085c;
	Mon, 16 Nov 2020 15:00:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=l33WjPCm9yNjP/lDMXMqfeSBEmxLf9KKJXjU2Vgod3M=; b=vPvF0yPsOjSp4dSMhRHvBehRhV
	PFOiS/io66DL+FFe4YN2Fb0Rqbc+Nzpp4psxVyQtH5VyzgK4aBlY10MJ0SxK5+DNq8YoroqcIQHJw
	OIuTXBByC0wVDzrNiZkakvZ0NFRqY8ZwJhRw5jo9u/h87uXYh6iBDLLS4YAlPwVHFZfrqpMwESpy2
	S/cZ92gGjV3WkSF1YqzQWyT9omasa0ij1LAeniOiRxHFEakybTIcDPCG+9QeD6K5cDl+9Ihm1Hlf2
	hIuzQV52MyZOWFhKNgVtAf/UV4Hbk5CaHh2K6ojORYxj4yNCBgrAeaNTOEGsClhqF7ii0EHL7srAc
	6fqEm0Ow==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefz4-0004J1-6I; Mon, 16 Nov 2020 14:59:58 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 75/78] block: stop using bdget_disk for partition 0
Date: Mon, 16 Nov 2020 15:58:06 +0100
Message-Id: <20201116145809.410558-76-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

We can just dereference the pointer in struct gendisk instead.  Also
remove the now unused export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c                   |  1 -
 drivers/block/nbd.c             |  4 +---
 drivers/block/xen-blkfront.c    | 20 +++++---------------
 drivers/block/zram/zram_drv.c   | 25 ++-----------------------
 drivers/md/dm.c                 | 13 ++-----------
 drivers/s390/block/dasd_ioctl.c |  5 ++---
 6 files changed, 12 insertions(+), 56 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 5dcb8b8902daae..b2a4e68171519a 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -904,7 +904,6 @@ struct block_device *bdget_disk(struct gendisk *disk, int partno)
 
 	return bdev;
 }
-EXPORT_SYMBOL(bdget_disk);
 
 /*
  * print a full list of all partitions - intended for places where the root
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 014683968ce174..92f84ed0ba9eb6 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1488,12 +1488,10 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
 static void nbd_release(struct gendisk *disk, fmode_t mode)
 {
 	struct nbd_device *nbd = disk->private_data;
-	struct block_device *bdev = bdget_disk(disk, 0);
 
 	if (test_bit(NBD_RT_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) &&
-			bdev->bd_openers == 0)
+			disk->part0->bd_openers == 0)
 		nbd_disconnect_and_put(nbd);
-	bdput(bdev);
 
 	nbd_config_put(nbd);
 	nbd_put(nbd);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 5b1f99ca77b734..c2721ec73d7291 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2153,7 +2153,7 @@ static void blkfront_closing(struct blkfront_info *info)
 	}
 
 	if (info->gd)
-		bdev = bdget_disk(info->gd, 0);
+		bdev = bdgrab(info->gd->part0);
 
 	mutex_unlock(&info->mutex);
 
@@ -2518,7 +2518,7 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 
 	disk = info->gd;
 	if (disk)
-		bdev = bdget_disk(disk, 0);
+		bdev = bdgrab(disk->part0);
 
 	info->xbdev = NULL;
 	mutex_unlock(&info->mutex);
@@ -2595,19 +2595,11 @@ static int blkif_open(struct block_device *bdev, fmode_t mode)
 static void blkif_release(struct gendisk *disk, fmode_t mode)
 {
 	struct blkfront_info *info = disk->private_data;
-	struct block_device *bdev;
 	struct xenbus_device *xbdev;
 
 	mutex_lock(&blkfront_mutex);
-
-	bdev = bdget_disk(disk, 0);
-
-	if (!bdev) {
-		WARN(1, "Block device %s yanked out from us!\n", disk->disk_name);
+	if (disk->part0->bd_openers)
 		goto out_mutex;
-	}
-	if (bdev->bd_openers)
-		goto out;
 
 	/*
 	 * Check if we have been instructed to close. We will have
@@ -2619,7 +2611,7 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 
 	if (xbdev && xbdev->state == XenbusStateClosing) {
 		/* pending switch to state closed */
-		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
+		dev_info(disk_to_dev(disk), "releasing disk\n");
 		xlvbd_release_gendisk(info);
 		xenbus_frontend_closed(info->xbdev);
  	}
@@ -2628,14 +2620,12 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 
 	if (!xbdev) {
 		/* sudden device removal */
-		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
+		dev_info(disk_to_dev(disk), "releasing disk\n");
 		xlvbd_release_gendisk(info);
 		disk->private_data = NULL;
 		free_info(info);
 	}
 
-out:
-	bdput(bdev);
 out_mutex:
 	mutex_unlock(&blkfront_mutex);
 }
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index e765765263495f..e7a23638e2f181 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1748,7 +1748,6 @@ static ssize_t reset_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
 	struct zram *zram = dev_to_zram(dev);
-	struct block_device *bdev;
 	unsigned short do_reset;
 	int ret = 0;
 
@@ -1758,17 +1757,12 @@ static ssize_t reset_store(struct device *dev,
 	if (!do_reset)
 		return -EINVAL;
 
-	bdev = bdget_disk(zram->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-
 	mutex_lock(&zram->disk->mutex);
-	if (bdev->bd_openers)
+	if (zram->disk->part0->bd_openers)
 		ret = -EBUSY;
 	else
 		zram_reset_device(zram);
 	mutex_unlock(&zram->disk->mutex);
-	bdput(bdev);
 
 	return ret ? ret : len;
 }
@@ -1931,24 +1925,9 @@ static int zram_add(void)
 	return ret;
 }
 
-static bool zram_busy(struct zram *zram)
-{
-	struct block_device *bdev;
-	bool busy = false;
-
-	bdev = bdget_disk(zram->disk, 0);
-	if (bdev) {
-		if (bdev->bd_openers)
-			busy = true;
-		bdput(bdev);
-	}
-
-	return busy;
-}
-
 static int zram_remove(struct zram *zram)
 {
-	if (zram_busy(zram))
+	if (zram->disk->part0->bd_openers)
 		return -EBUSY;
 
 	del_gendisk(zram->disk);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index ac46f6e41279cc..ec48ccae50dd53 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2375,17 +2375,12 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
  */
 static int lock_fs(struct mapped_device *md)
 {
-	struct block_device *bdev;
 	int r;
 
 	WARN_ON(md->frozen_sb);
 
-	bdev = bdget_disk(md->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-	md->frozen_sb = freeze_bdev(bdev);
+	md->frozen_sb = freeze_bdev(md->disk->part0);
 	if (IS_ERR(md->frozen_sb)) {
-		bdput(bdev);
 		r = PTR_ERR(md->frozen_sb);
 		md->frozen_sb = NULL;
 		return r;
@@ -2398,14 +2393,10 @@ static int lock_fs(struct mapped_device *md)
 
 static void unlock_fs(struct mapped_device *md)
 {
-	struct block_device *bdev;
-
 	if (!test_bit(DMF_FROZEN, &md->flags))
 		return;
 
-	bdev = md->frozen_sb->s_bdev;
-	thaw_bdev(bdev, md->frozen_sb);
-	bdput(bdev);
+	thaw_bdev(md->frozen_sb->s_bdev, md->frozen_sb);
 	md->frozen_sb = NULL;
 	clear_bit(DMF_FROZEN, &md->flags);
 }
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index 304eba1acf163c..9f642440894655 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -220,9 +220,8 @@ dasd_format(struct dasd_block *block, struct format_data_t *fdata)
 	 * enabling the device later.
 	 */
 	if (fdata->start_unit == 0) {
-		struct block_device *bdev = bdget_disk(block->gdp, 0);
-		bdev->bd_inode->i_blkbits = blksize_bits(fdata->blksize);
-		bdput(bdev);
+		block->gdp->part0->bd_inode->i_blkbits =
+			blksize_bits(fdata->blksize);
 	}
 
 	rc = base->discipline->format_device(base, fdata, 1);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28133.57003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9C-00030f-RE; Mon, 16 Nov 2020 15:10:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28133.57003; Mon, 16 Nov 2020 15:10:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9B-0002yO-EC; Mon, 16 Nov 2020 15:10:25 +0000
Received: by outflank-mailman (input) for mailman id 28133;
 Mon, 16 Nov 2020 15:10:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0o-0006ni-5R
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc9880fd-3b24-4878-ae44-1c1b0edcdc1c;
 Mon, 16 Nov 2020 14:59:16 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy9-0003wb-0o; Mon, 16 Nov 2020 14:59:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0o-0006ni-5R
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:46 +0000
X-Inumbo-ID: fc9880fd-3b24-4878-ae44-1c1b0edcdc1c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fc9880fd-3b24-4878-ae44-1c1b0edcdc1c;
	Mon, 16 Nov 2020 14:59:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=HSHstTA5FBmItbRjwbt8hMTbGvjYX3IcG/k6MZ54c5g=; b=trr8ifokY9kZUYDsLpSU9II+9M
	8iOgO1nNBoSZITeSO6bl6xy7/7QVmOV+Wp/IF2eAc4UazTh4EZjMGH7xMC2tlJ1cltl+f6HNJiFqt
	M+MdbLlrB1hAhEhOqTv8IbtoPMg0JuE4E3FFqf/AKsqROtJ5cBZEB1aT1wtbW/JeaGVhyB/xyhrIU
	XjIGpFSG04CEPTZ4UHcdMJuKrv28ynGnyrFeKY7MrJa4DjTd0KsoYD9H2QWkllN8Kh9efXfFFayRp
	wpgPnaDMPtCgTAFS90qS+ULgU0+e254709WuXfRit4EQSdOHjhmneOoyscxZi13M+fEKAYvQXhDsO
	VcvGE4yA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefy9-0003wb-0o; Mon, 16 Nov 2020 14:59:01 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 36/78] block: open code kobj_map into in block/genhd.c
Date: Mon, 16 Nov 2020 15:57:27 +0100
Message-Id: <20201116145809.410558-37-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Copy and paste the kobj_map functionality into the block code in
preparation for completely rewriting it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c | 130 +++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 117 insertions(+), 13 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 8180195b76634b..482f7b89802010 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -17,7 +17,6 @@
 #include <linux/seq_file.h>
 #include <linux/slab.h>
 #include <linux/kmod.h>
-#include <linux/kobj_map.h>
 #include <linux/mutex.h>
 #include <linux/idr.h>
 #include <linux/log2.h>
@@ -29,6 +28,16 @@
 static DEFINE_MUTEX(block_class_lock);
 static struct kobject *block_depr;
 
+struct bdev_map {
+	struct bdev_map *next;
+	dev_t dev;
+	unsigned long range;
+	struct module *owner;
+	struct kobject *(*probe)(dev_t, int *, void *);
+	int (*lock)(dev_t, void *);
+	void *data;
+} *bdev_map[255];
+
 /* for extended dynamic devt allocation, currently only one major is used */
 #define NR_EXT_DEVT		(1 << MINORBITS)
 
@@ -517,8 +526,6 @@ void unregister_blkdev(unsigned int major, const char *name)
 
 EXPORT_SYMBOL(unregister_blkdev);
 
-static struct kobj_map *bdev_map;
-
 /**
  * blk_mangle_minor - scatter minor numbers apart
  * @minor: minor number to mangle
@@ -645,16 +652,60 @@ void blk_register_region(dev_t devt, unsigned long range, struct module *module,
 			 struct kobject *(*probe)(dev_t, int *, void *),
 			 int (*lock)(dev_t, void *), void *data)
 {
-	kobj_map(bdev_map, devt, range, module, probe, lock, data);
-}
+	unsigned n = MAJOR(devt + range - 1) - MAJOR(devt) + 1;
+	unsigned index = MAJOR(devt);
+	unsigned i;
+	struct bdev_map *p;
+
+	n = min(n, 255u);
+	p = kmalloc_array(n, sizeof(struct bdev_map), GFP_KERNEL);
+	if (p == NULL)
+		return;
 
+	for (i = 0; i < n; i++, p++) {
+		p->owner = module;
+		p->probe = probe;
+		p->lock = lock;
+		p->dev = devt;
+		p->range = range;
+		p->data = data;
+	}
+
+	mutex_lock(&block_class_lock);
+	for (i = 0, p -= n; i < n; i++, p++, index++) {
+		struct bdev_map **s = &bdev_map[index % 255];
+		while (*s && (*s)->range < range)
+			s = &(*s)->next;
+		p->next = *s;
+		*s = p;
+	}
+	mutex_unlock(&block_class_lock);
+}
 EXPORT_SYMBOL(blk_register_region);
 
 void blk_unregister_region(dev_t devt, unsigned long range)
 {
-	kobj_unmap(bdev_map, devt, range);
-}
+	unsigned n = MAJOR(devt + range - 1) - MAJOR(devt) + 1;
+	unsigned index = MAJOR(devt);
+	unsigned i;
+	struct bdev_map *found = NULL;
 
+	mutex_lock(&block_class_lock);
+	for (i = 0; i < min(n, 255u); i++, index++) {
+		struct bdev_map **s;
+		for (s = &bdev_map[index % 255]; *s; s = &(*s)->next) {
+			struct bdev_map *p = *s;
+			if (p->dev == devt && p->range == range) {
+				*s = p->next;
+				if (!found)
+					found = p;
+				break;
+			}
+		}
+	}
+	mutex_unlock(&block_class_lock);
+	kfree(found);
+}
 EXPORT_SYMBOL(blk_unregister_region);
 
 static struct kobject *exact_match(dev_t devt, int *partno, void *data)
@@ -976,6 +1027,47 @@ static ssize_t disk_badblocks_store(struct device *dev,
 	return badblocks_store(disk->bb, page, len, 0);
 }
 
+static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
+{
+	struct kobject *kobj;
+	struct bdev_map *p;
+	unsigned long best = ~0UL;
+
+retry:
+	mutex_lock(&block_class_lock);
+	for (p = bdev_map[MAJOR(dev) % 255]; p; p = p->next) {
+		struct kobject *(*probe)(dev_t, int *, void *);
+		struct module *owner;
+		void *data;
+
+		if (p->dev > dev || p->dev + p->range - 1 < dev)
+			continue;
+		if (p->range - 1 >= best)
+			break;
+		if (!try_module_get(p->owner))
+			continue;
+		owner = p->owner;
+		data = p->data;
+		probe = p->probe;
+		best = p->range - 1;
+		*partno = dev - p->dev;
+		if (p->lock && p->lock(dev, data) < 0) {
+			module_put(owner);
+			continue;
+		}
+		mutex_unlock(&block_class_lock);
+		kobj = probe(dev, partno, data);
+		/* Currently ->owner protects _only_ ->probe() itself. */
+		module_put(owner);
+		if (kobj)
+			return dev_to_disk(kobj_to_dev(kobj));
+		goto retry;
+	}
+	mutex_unlock(&block_class_lock);
+	return NULL;
+}
+
+
 /**
  * get_gendisk - get partitioning information for a given device
  * @devt: device to get partitioning information for
@@ -993,11 +1085,7 @@ struct gendisk *get_gendisk(dev_t devt, int *partno)
 	might_sleep();
 
 	if (MAJOR(devt) != BLOCK_EXT_MAJOR) {
-		struct kobject *kobj;
-
-		kobj = kobj_lookup(bdev_map, devt, partno);
-		if (kobj)
-			disk = dev_to_disk(kobj_to_dev(kobj));
+		disk = lookup_gendisk(devt, partno);
 	} else {
 		struct hd_struct *part;
 
@@ -1210,6 +1298,22 @@ static struct kobject *base_probe(dev_t devt, int *partno, void *data)
 	return NULL;
 }
 
+static void bdev_map_init(void)
+{
+	struct bdev_map *base;
+	int i;
+
+	base = kzalloc(sizeof(*base), GFP_KERNEL);
+	if (!base)
+		panic("cannot allocate bdev_map");
+
+	base->dev = 1;
+	base->range = ~0;
+	base->probe = base_probe;
+	for (i = 0; i < 255; i++)
+		bdev_map[i] = base;
+}
+
 static int __init genhd_device_init(void)
 {
 	int error;
@@ -1218,7 +1322,7 @@ static int __init genhd_device_init(void)
 	error = class_register(&block_class);
 	if (unlikely(error))
 		return error;
-	bdev_map = kobj_map_init(base_probe, &block_class_lock);
+	bdev_map_init();
 	blk_dev_init();
 
 	register_blkdev(BLOCK_EXT_MAJOR, "blkext");
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28139.57014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9E-000362-LF; Mon, 16 Nov 2020 15:10:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28139.57014; Mon, 16 Nov 2020 15:10:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9D-00034U-9J; Mon, 16 Nov 2020 15:10:27 +0000
Received: by outflank-mailman (input) for mailman id 28139;
 Mon, 16 Nov 2020 15:10:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3J-0006ni-BT
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e967c60-57cf-4c00-8ee6-bca0cf74ae87;
 Mon, 16 Nov 2020 14:59:54 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyn-0004Cl-AQ; Mon, 16 Nov 2020 14:59:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg3J-0006ni-BT
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:21 +0000
X-Inumbo-ID: 9e967c60-57cf-4c00-8ee6-bca0cf74ae87
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9e967c60-57cf-4c00-8ee6-bca0cf74ae87;
	Mon, 16 Nov 2020 14:59:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=IuZ426HAkGqkpmUmDFrK/jtW75cQ9xEgfhlt4WVjI90=; b=FbV09+iwI8V/pddqU1sEXr5MRG
	2LGeCEdJzbpooxAikaWsNr5YfxH34ymPyuCnSJiwiRrphkz09RICgAbMVyHqGKWH6sHHkHvoruwW2
	nCorG1tvxjOd6vHnEE7swqUtKyD+8syeAG7c9+VEIxM/4sCmK8n7wJLvPzvcwP62k5ghl2GH7p6CA
	bmJY/GpCYWKc3Q69Wkb4Pf5N62nndPgbrmqv4yTNQic3S4e4YxaEGByMc1ZRJgWTOUB49DbdAAXXh
	7s2MiZc7fa4gFYEJLSLS59VU/0nxQ+mwt1zxXN8laEEd1sAgMYK6+cPM77anChY7SAPVWwdnlrtv1
	YRLIq+dw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyn-0004Cl-AQ; Mon, 16 Nov 2020 14:59:41 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 63/78] bcache: remove a superfluous lookup_bdev call
Date: Mon, 16 Nov 2020 15:57:54 +0100
Message-Id: <20201116145809.410558-64-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Don't bother to call lookup_bdev just to produce a slightly different
error message; there is no functional change beyond the message text.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/super.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 46a00134a36ae1..d36ccdda16ed2e 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2538,15 +2538,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
 				  sb);
 	if (IS_ERR(bdev)) {
 		if (bdev == ERR_PTR(-EBUSY)) {
-			bdev = lookup_bdev(strim(path));
-			mutex_lock(&bch_register_lock);
-			if (!IS_ERR(bdev) && bch_is_open(bdev))
-				err = "device already registered";
-			else
-				err = "device busy";
-			mutex_unlock(&bch_register_lock);
-			if (!IS_ERR(bdev))
-				bdput(bdev);
+			err = "device busy";
 			if (attr == &ksysfs_register_quiet)
 				goto done;
 		}
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28144.57027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9H-0003CK-1c; Mon, 16 Nov 2020 15:10:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28144.57027; Mon, 16 Nov 2020 15:10:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9F-0003A8-11; Mon, 16 Nov 2020 15:10:29 +0000
Received: by outflank-mailman (input) for mailman id 28144;
 Mon, 16 Nov 2020 15:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3d-0006ni-CR
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 388063e4-c042-4c1d-923b-75897cc6eb35;
 Mon, 16 Nov 2020 15:00:00 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyu-0004Fd-HL; Mon, 16 Nov 2020 14:59:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg3d-0006ni-CR
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:41 +0000
X-Inumbo-ID: 388063e4-c042-4c1d-923b-75897cc6eb35
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 388063e4-c042-4c1d-923b-75897cc6eb35;
	Mon, 16 Nov 2020 15:00:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=U1T3F6kddQ5RAbJP0Kd5aSQRXwgJOO9lO/0pb1NuZu4=; b=Mp/bnXWKW1eCWc4cnpGQF+LzFz
	+yKGUtIwAwJyLYcIyqIwUzX5kFor/+9XSitRKLGwelhsMyM+jo5MdzNpCHHBJ0Iz9zR3J1N5RDw/L
	elxhWeRTq9mn68jo7mo+5Q4N7FoDnOiB2Ko1LxyfvZVKKrY7T6HB8yCxAUe+NVO99WAAyihpabqln
	mDNQgQ32Fsx3DUoMB7bJ5t1bCEfvo9jNviF/wnSQFLIPLYsctwCrVbyO1039CykidDZsBnCUd8INa
	McCEnHBIAwZ8N3tE8HKmkENLQlH4gRIfZNloYiLCS6I7+1U/hYZQbSbazKQRNqqSS2T9kesdq+8Pe
	qXSnpaVg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyu-0004Fd-HL; Mon, 16 Nov 2020 14:59:49 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 68/78] block: remove ->bd_contains
Date: Mon, 16 Nov 2020 15:57:59 +0100
Message-Id: <20201116145809.410558-69-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that each gendisk has a reference to the block_device referencing
it, we can just use that everywhere and get rid of ->bd_contains.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/scsi/scsicam.c    |  2 +-
 fs/block_dev.c            | 50 +++++++++++++--------------------------
 include/linux/blk_types.h |  4 +++-
 3 files changed, 20 insertions(+), 36 deletions(-)

diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c
index 682cf08ab04153..f1553a453616fd 100644
--- a/drivers/scsi/scsicam.c
+++ b/drivers/scsi/scsicam.c
@@ -32,7 +32,7 @@
  */
 unsigned char *scsi_bios_ptable(struct block_device *dev)
 {
-	struct address_space *mapping = dev->bd_contains->bd_inode->i_mapping;
+	struct address_space *mapping = bdev_whole(dev)->bd_inode->i_mapping;
 	unsigned char *res = NULL;
 	struct page *page;
 
diff --git a/fs/block_dev.c b/fs/block_dev.c
index fd4df132a97590..2348f218d45deb 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -886,7 +886,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 	spin_lock_init(&bdev->bd_size_lock);
 	bdev->bd_disk = disk;
 	bdev->bd_partno = partno;
-	bdev->bd_contains = NULL;
 	bdev->bd_super = NULL;
 	bdev->bd_inode = inode;
 	bdev->bd_part_count = 0;
@@ -1062,7 +1061,7 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
  */
 int bd_prepare_to_claim(struct block_device *bdev, void *holder)
 {
-	struct block_device *whole = bdev->bd_contains;
+	struct block_device *whole = bdev_whole(bdev);
 
 retry:
 	spin_lock(&bdev_lock);
@@ -1110,7 +1109,7 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
  */
 static void bd_finish_claiming(struct block_device *bdev, void *holder)
 {
-	struct block_device *whole = bdev->bd_contains;
+	struct block_device *whole = bdev_whole(bdev);
 
 	spin_lock(&bdev_lock);
 	BUG_ON(!bd_may_claim(bdev, whole, holder));
@@ -1139,7 +1138,7 @@ static void bd_finish_claiming(struct block_device *bdev, void *holder)
 void bd_abort_claiming(struct block_device *bdev, void *holder)
 {
 	spin_lock(&bdev_lock);
-	bd_clear_claiming(bdev->bd_contains, holder);
+	bd_clear_claiming(bdev_whole(bdev), holder);
 	spin_unlock(&bdev_lock);
 }
 EXPORT_SYMBOL(bd_abort_claiming);
@@ -1434,7 +1433,6 @@ static int bdev_get_gendisk(struct gendisk *disk)
 static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
-	struct block_device *whole = NULL;
 	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 	bool first_open = false, unblock_events = true, need_restart;
@@ -1445,26 +1443,17 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	if (ret)
 		goto out;
 
-	if (bdev->bd_partno) {
-		whole = bdget_disk(disk, 0);
-		if (!whole) {
-			ret = -ENOMEM;
-			goto out_put_disk;
-		}
-	}
-
 	if (!for_part && (mode & FMODE_EXCL)) {
 		WARN_ON_ONCE(!holder);
 		ret = bd_prepare_to_claim(bdev, holder);
 		if (ret)
-			goto out_put_whole;
+			goto out_put_disk;
 	}
 
 	disk_block_events(disk);
 	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (!bdev->bd_openers) {
 		first_open = true;
-		bdev->bd_contains = bdev;
 
 		if (!bdev->bd_partno) {
 			ret = -ENXIO;
@@ -1502,10 +1491,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 				goto out_clear;
 		} else {
 			BUG_ON(for_part);
-			ret = __blkdev_get(whole, mode, NULL, 1);
+			bdgrab(bdev_whole(bdev));
+			ret = __blkdev_get(bdev_whole(bdev), mode, NULL, 1);
 			if (ret)
 				goto out_clear;
-			bdev->bd_contains = bdgrab(whole);
 			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
 			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
@@ -1519,7 +1508,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		if (bdev->bd_bdi == &noop_backing_dev_info)
 			bdev->bd_bdi = bdi_get(disk->queue->backing_dev_info);
 	} else {
-		if (bdev->bd_contains == bdev) {
+		if (!bdev->bd_partno) {
 			ret = 0;
 			if (bdev->bd_disk->fops->open)
 				ret = bdev->bd_disk->fops->open(bdev, mode);
@@ -1558,24 +1547,18 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	/* only one opener holds the module reference */
 	if (!first_open)
 		module_put(disk->fops->owner);
-	if (whole)
-		bdput(whole);
 	return 0;
 
  out_clear:
 	disk_put_part(bdev->bd_part);
 	bdev->bd_part = NULL;
-	if (bdev != bdev->bd_contains)
-		__blkdev_put(bdev->bd_contains, mode, 1);
-	bdev->bd_contains = NULL;
+	if (bdev_is_partition(bdev))
+		__blkdev_put(bdev_whole(bdev), mode, 1);
  out_unlock_bdev:
 	if (!for_part && (mode & FMODE_EXCL))
 		bd_abort_claiming(bdev, holder);
 	mutex_unlock(&bdev->bd_mutex);
 	disk_unblock_events(disk);
- out_put_whole:
- 	if (whole)
-		bdput(whole);
  out_put_disk:
 	module_put(disk->fops->owner);
 	if (need_restart)
@@ -1765,16 +1748,15 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 
 		bdev_write_inode(bdev);
 	}
-	if (bdev->bd_contains == bdev) {
+	if (!bdev_is_partition(bdev)) {
 		if (disk->fops->release)
 			disk->fops->release(disk, mode);
 	}
 	if (!bdev->bd_openers) {
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
-		if (bdev != bdev->bd_contains)
-			victim = bdev->bd_contains;
-		bdev->bd_contains = NULL;
+		if (bdev_is_partition(bdev))
+			victim = bdev_whole(bdev);
 
 		module_put(disk->fops->owner);
 	}
@@ -1789,6 +1771,7 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 	mutex_lock(&bdev->bd_mutex);
 
 	if (mode & FMODE_EXCL) {
+		struct block_device *whole = bdev_whole(bdev);
 		bool bdev_free;
 
 		/*
@@ -1799,13 +1782,12 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 		spin_lock(&bdev_lock);
 
 		WARN_ON_ONCE(--bdev->bd_holders < 0);
-		WARN_ON_ONCE(--bdev->bd_contains->bd_holders < 0);
+		WARN_ON_ONCE(--whole->bd_holders < 0);
 
-		/* bd_contains might point to self, check in a separate step */
 		if ((bdev_free = !bdev->bd_holders))
 			bdev->bd_holder = NULL;
-		if (!bdev->bd_contains->bd_holders)
-			bdev->bd_contains->bd_holder = NULL;
+		if (!whole->bd_holders)
+			whole->bd_holder = NULL;
 
 		spin_unlock(&bdev_lock);
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d9b69bbde5cc54..041caca25fc787 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -32,7 +32,6 @@ struct block_device {
 #ifdef CONFIG_SYSFS
 	struct list_head	bd_holder_disks;
 #endif
-	struct block_device *	bd_contains;
 	u8			bd_partno;
 	struct hd_struct *	bd_part;
 	/* number of times partitions within this device have been opened. */
@@ -48,6 +47,9 @@ struct block_device {
 	struct mutex		bd_fsfreeze_mutex;
 } __randomize_layout;
 
+#define bdev_whole(_bdev) \
+	((_bdev)->bd_disk->part0.bdev)
+
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
  * Alpha cannot write a byte atomically, so we need to use 32-bit value.
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28148.57041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9J-0003LS-9B; Mon, 16 Nov 2020 15:10:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28148.57041; Mon, 16 Nov 2020 15:10:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9I-0003JN-4L; Mon, 16 Nov 2020 15:10:32 +0000
Received: by outflank-mailman (input) for mailman id 28148;
 Mon, 16 Nov 2020 15:10:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0A-0006ni-46
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf865050-86db-4bce-a732-7e2d74f28cb6;
 Mon, 16 Nov 2020 14:59:07 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy0-0003uJ-BS; Mon, 16 Nov 2020 14:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0A-0006ni-46
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:06 +0000
X-Inumbo-ID: bf865050-86db-4bce-a732-7e2d74f28cb6
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bf865050-86db-4bce-a732-7e2d74f28cb6;
	Mon, 16 Nov 2020 14:59:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=vZNllDaY98uh7PAY/yZSS9lYqgoFil9tZwzrbsUtcc0=; b=V6cl1a79XMcq1v/BBo+f92yrRu
	liN7F038xmkCdB/Ds3o/rdOf/u8+F4XRDe5XS2+x9GlkDsXh+3U/5bj4wHU5EPMYxp0LOK6ecE1yG
	m42ASUTh05evcjBqfM2ZWjFjoerZ1E7O62W59SM7OwmwlzPpSqsgSL7GTu8GjI32B/j1fjTcfCCgm
	EBQXKMs49HKVw8T32KURh6LUgcjPRcK+qacoHPF/BWNvvkKs1x1+S/gsCLMI6svMoxc6IbKljvzPa
	PrLsxr2OhCYNRnmFqAv4BBDVACpNln8toBTssxbOqENWwwW1v/dXqjAf6HU5UCW36Z4h5dw18Xmhb
	kRDZEZZw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefy0-0003uJ-BS; Mon, 16 Nov 2020 14:58:52 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 30/78] block: don't call into the driver for BLKROSET
Date: Mon, 16 Nov 2020 15:57:21 +0100
Message-Id: <20201116145809.410558-31-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that all drivers that want to hook into setting or clearing the
read-only flag use the set_read_only method, this code can be removed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c | 23 -----------------------
 1 file changed, 23 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index a6fa16b9770593..96cb4544736468 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -346,26 +346,6 @@ static int blkdev_pr_clear(struct block_device *bdev,
 	return ops->pr_clear(bdev, c.key);
 }
 
-/*
- * Is it an unrecognized ioctl? The correct returns are either
- * ENOTTY (final) or ENOIOCTLCMD ("I don't know this one, try a
- * fallback"). ENOIOCTLCMD gets turned into ENOTTY by the ioctl
- * code before returning.
- *
- * Confused drivers sometimes return EINVAL, which is wrong. It
- * means "I understood the ioctl command, but the parameters to
- * it were wrong".
- *
- * We should aim to just fix the broken drivers, the EINVAL case
- * should go away.
- */
-static inline int is_unrecognized_ioctl(int ret)
-{
-	return	ret == -EINVAL ||
-		ret == -ENOTTY ||
-		ret == -ENOIOCTLCMD;
-}
-
 static int blkdev_flushbuf(struct block_device *bdev, fmode_t mode,
 		unsigned cmd, unsigned long arg)
 {
@@ -384,9 +364,6 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
 
-	ret = __blkdev_driver_ioctl(bdev, mode, cmd, arg);
-	if (!is_unrecognized_ioctl(ret))
-		return ret;
 	if (get_user(n, (int __user *)arg))
 		return -EFAULT;
 	if (bdev->bd_disk->fops->set_read_only) {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28153.57051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9L-0003QX-R3; Mon, 16 Nov 2020 15:10:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28153.57051; Mon, 16 Nov 2020 15:10:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9J-0003OK-SC; Mon, 16 Nov 2020 15:10:33 +0000
Received: by outflank-mailman (input) for mailman id 28153;
 Mon, 16 Nov 2020 15:10:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg4C-0006ni-DV
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b23910fb-f10a-4947-ac82-d3d5386ce94b;
 Mon, 16 Nov 2020 15:00:11 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz2-0004IT-Ku; Mon, 16 Nov 2020 14:59:57 +0000
X-Inumbo-ID: b23910fb-f10a-4947-ac82-d3d5386ce94b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=80SwJVFyrwdMAGHb8qvnmcFaocv9s27xa3zlKgz2FYI=; b=Mm5fP/0uZicna3KaSz54F+62ZK
	2sZmVKlDFsWb/bgA9RYDxt+IsIsfsfaaWs1o4iz9vFd69OXrQJe8UtdSzrQBji8h5UNzf05cU+xSc
	zCI/TWWc0RrDAg652tQ1VRh47cmtBjzM6filDduYLI9E3OU8ubnJ9NUXRd+zH2U5qDObLGQNXsfNP
	B7HXZCBvdScXwnlG83N+QF7Q7Ggo8Bb6wO9kWs2to1WJLZkcgpEmP2dxr4rTrTIvETGm662trkuxJ
	lM9qtUuAyT4SqUrIfG0kdDQgPHmYrx8XK8GsJ36RvxB5rWRiRoW0xdu1Bm5Yk31sCzUcjmzgbRko7
	mbfx97zg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 74/78] block: merge struct block_device and struct hd_struct
Date: Mon, 16 Nov 2020 15:58:05 +0100
Message-Id: <20201116145809.410558-75-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Instead of having two structures that represent each block device with
different lifetime rules, merge them into a single one.  This also
greatly simplifies the reference counting rules, as we can use the inode
reference count as the main reference count for the new struct
block_device, with the device model reference front-ending it for device
model interaction.  The percpu refcount in struct hd_struct is entirely
gone, given that struct block_device must be opened and thus valid for
the duration of the I/O.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c                        |   6 +-
 block/blk-cgroup.c                 |   9 +-
 block/blk-core.c                   |  85 +++++-----
 block/blk-flush.c                  |   2 +-
 block/blk-lib.c                    |   2 +-
 block/blk-merge.c                  |   6 +-
 block/blk-mq.c                     |  11 +-
 block/blk-mq.h                     |   5 +-
 block/blk.h                        |  38 ++---
 block/genhd.c                      | 242 +++++++++++------------------
 block/ioctl.c                      |   4 +-
 block/partitions/core.c            | 221 +++++++-------------------
 drivers/block/drbd/drbd_receiver.c |   2 +-
 drivers/block/drbd/drbd_worker.c   |   2 +-
 drivers/block/zram/zram_drv.c      |   2 +-
 drivers/md/bcache/request.c        |   4 +-
 drivers/md/dm.c                    |   8 +-
 drivers/md/md.c                    |   4 +-
 drivers/nvme/target/admin-cmd.c    |  20 +--
 drivers/s390/block/dasd.c          |   8 +-
 fs/block_dev.c                     |  68 +++-----
 fs/ext4/super.c                    |  18 +--
 fs/ext4/sysfs.c                    |  10 +-
 fs/f2fs/checkpoint.c               |   5 +-
 fs/f2fs/f2fs.h                     |   2 +-
 fs/f2fs/super.c                    |   6 +-
 fs/f2fs/sysfs.c                    |   9 --
 include/linux/blk_types.h          |  23 ++-
 include/linux/blkdev.h             |  13 +-
 include/linux/genhd.h              |  67 ++------
 include/linux/part_stat.h          |  17 +-
 init/do_mounts.c                   |  20 +--
 kernel/trace/blktrace.c            |  54 ++-----
 33 files changed, 351 insertions(+), 642 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 0c5269997434d6..4df1ecd53baf8f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -608,12 +608,12 @@ void bio_truncate(struct bio *bio, unsigned new_size)
 void guard_bio_eod(struct bio *bio)
 {
 	sector_t maxsector;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	rcu_read_lock();
-	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
+	part = __bdget_disk(bio->bi_disk, bio->bi_partno);
 	if (part)
-		maxsector = bdev_nr_sectors(part->bdev);
+		maxsector = bdev_nr_sectors(part);
 	else
 		maxsector = get_capacity(bio->bi_disk);
 	rcu_read_unlock();
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 4c0ae0f6bce02d..fb5076223f10f2 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -820,9 +820,9 @@ static void blkcg_fill_root_iostats(void)
 
 	class_dev_iter_init(&iter, &block_class, NULL, &disk_type);
 	while ((dev = class_dev_iter_next(&iter))) {
-		struct gendisk *disk = dev_to_disk(dev);
-		struct hd_struct *part = disk_get_part(disk, 0);
-		struct blkcg_gq *blkg = blk_queue_root_blkg(disk->queue);
+		struct block_device *bdev = dev_to_bdev(dev);
+		struct blkcg_gq *blkg =
+			blk_queue_root_blkg(bdev->bd_disk->queue);
 		struct blkg_iostat tmp;
 		int cpu;
 
@@ -830,7 +830,7 @@ static void blkcg_fill_root_iostats(void)
 		for_each_possible_cpu(cpu) {
 			struct disk_stats *cpu_dkstats;
 
-			cpu_dkstats = per_cpu_ptr(part->dkstats, cpu);
+			cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
 			tmp.ios[BLKG_IOSTAT_READ] +=
 				cpu_dkstats->ios[STAT_READ];
 			tmp.ios[BLKG_IOSTAT_WRITE] +=
@@ -849,7 +849,6 @@ static void blkcg_fill_root_iostats(void)
 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
 			u64_stats_update_end(&blkg->iostat.sync);
 		}
-		disk_put_part(part);
 	}
 }
 
diff --git a/block/blk-core.c b/block/blk-core.c
index 988f45094a387b..192607c98e87c5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -119,7 +119,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
 	rq->tag = BLK_MQ_NO_TAG;
 	rq->internal_tag = BLK_MQ_NO_TAG;
 	rq->start_time_ns = ktime_get_ns();
-	rq->part = NULL;
+	rq->bdev = NULL;
 	refcount_set(&rq->ref, 1);
 	blk_crypto_rq_set_defaults(rq);
 }
@@ -666,9 +666,9 @@ static int __init setup_fail_make_request(char *str)
 }
 __setup("fail_make_request=", setup_fail_make_request);
 
-static bool should_fail_request(struct hd_struct *part, unsigned int bytes)
+static bool should_fail_request(struct block_device *bdev, unsigned int bytes)
 {
-	return part->make_it_fail && should_fail(&fail_make_request, bytes);
+	return bdev->bd_make_it_fail && should_fail(&fail_make_request, bytes);
 }
 
 static int __init fail_make_request_debugfs(void)
@@ -683,19 +683,19 @@ late_initcall(fail_make_request_debugfs);
 
 #else /* CONFIG_FAIL_MAKE_REQUEST */
 
-static inline bool should_fail_request(struct hd_struct *part,
-					unsigned int bytes)
+static inline bool should_fail_request(struct block_device *bdev,
+		unsigned int bytes)
 {
 	return false;
 }
 
 #endif /* CONFIG_FAIL_MAKE_REQUEST */
 
-static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+static inline bool bio_check_ro(struct bio *bio, struct block_device *bdev)
 {
 	const int op = bio_op(bio);
 
-	if (part->policy && op_is_write(op)) {
+	if (bdev->bd_policy && op_is_write(op)) {
 		char b[BDEVNAME_SIZE];
 
 		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
@@ -703,7 +703,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 		WARN_ONCE(1,
 		       "Trying to write to read-only block-device %s (partno %d)\n",
-			bio_devname(bio, b), part->partno);
+			bio_devname(bio, b), bdev->bd_partno);
 		/* Older lvm-tools actually trigger this */
 		return false;
 	}
@@ -713,7 +713,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 static noinline int should_fail_bio(struct bio *bio)
 {
-	if (should_fail_request(&bio->bi_disk->part0, bio->bi_iter.bi_size))
+	if (should_fail_request(bio->bi_disk->part0, bio->bi_iter.bi_size))
 		return -EIO;
 	return 0;
 }
@@ -742,11 +742,11 @@ static inline int bio_check_eod(struct bio *bio, sector_t maxsector)
  */
 static inline int blk_partition_remap(struct bio *bio)
 {
-	struct hd_struct *p;
+	struct block_device *p;
 	int ret = -EIO;
 
 	rcu_read_lock();
-	p = __disk_get_part(bio->bi_disk, bio->bi_partno);
+	p = __bdget_disk(bio->bi_disk, bio->bi_partno);
 	if (unlikely(!p))
 		goto out;
 	if (unlikely(should_fail_request(p, bio->bi_iter.bi_size)))
@@ -755,11 +755,11 @@ static inline int blk_partition_remap(struct bio *bio)
 		goto out;
 
 	if (bio_sectors(bio)) {
-		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
+		if (bio_check_eod(bio, bdev_nr_sectors(p)))
 			goto out;
-		bio->bi_iter.bi_sector += p->start_sect;
-		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
-				      bio->bi_iter.bi_sector - p->start_sect);
+		bio->bi_iter.bi_sector += p->bd_start_sect;
+		trace_block_bio_remap(bio->bi_disk->queue, bio, p->bd_dev,
+				      bio->bi_iter.bi_sector - p->bd_start_sect);
 	}
 	bio->bi_partno = 0;
 	ret = 0;
@@ -829,7 +829,7 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		if (unlikely(blk_partition_remap(bio)))
 			goto end_io;
 	} else {
-		if (unlikely(bio_check_ro(bio, &bio->bi_disk->part0)))
+		if (unlikely(bio_check_ro(bio, bio->bi_disk->part0)))
 			goto end_io;
 		if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk))))
 			goto end_io;
@@ -1201,7 +1201,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 		return ret;
 
 	if (rq->rq_disk &&
-	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
+	    should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq)))
 		return BLK_STS_IOERR;
 
 	if (blk_crypto_insert_cloned_request(rq))
@@ -1260,30 +1260,29 @@ unsigned int blk_rq_err_bytes(const struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
 
-static void update_io_ticks(struct hd_struct *part, unsigned long now, bool end)
+static void update_io_ticks(struct block_device *part, unsigned long now,
+		bool end)
 {
 	unsigned long stamp;
 again:
-	stamp = READ_ONCE(part->stamp);
+	stamp = READ_ONCE(part->bd_stamp);
 	if (unlikely(stamp != now)) {
-		if (likely(cmpxchg(&part->stamp, stamp, now) == stamp))
+		if (likely(cmpxchg(&part->bd_stamp, stamp, now) == stamp))
 			__part_stat_add(part, io_ticks, end ? now - stamp : 1);
 	}
-	if (part->partno) {
-		part = &part_to_disk(part)->part0;
+	if (part->bd_partno) {
+		part = part->bd_disk->part0;
 		goto again;
 	}
 }
 
 static void blk_account_io_completion(struct request *req, unsigned int bytes)
 {
-	if (req->part && blk_do_io_stat(req)) {
+	if (req->bdev && blk_do_io_stat(req)) {
 		const int sgrp = op_stat_group(req_op(req));
-		struct hd_struct *part;
 
 		part_stat_lock();
-		part = req->part;
-		part_stat_add(part, sectors[sgrp], bytes >> 9);
+		part_stat_add(req->bdev, sectors[sgrp], bytes >> 9);
 		part_stat_unlock();
 	}
 }
@@ -1295,20 +1294,15 @@ void blk_account_io_done(struct request *req, u64 now)
 	 * normal IO on queueing nor completion.  Accounting the
 	 * containing request is enough.
 	 */
-	if (req->part && blk_do_io_stat(req) &&
+	if (req->bdev && blk_do_io_stat(req) &&
 	    !(req->rq_flags & RQF_FLUSH_SEQ)) {
 		const int sgrp = op_stat_group(req_op(req));
-		struct hd_struct *part;
 
 		part_stat_lock();
-		part = req->part;
-
-		update_io_ticks(part, jiffies, true);
-		part_stat_inc(part, ios[sgrp]);
-		part_stat_add(part, nsecs[sgrp], now - req->start_time_ns);
+		update_io_ticks(req->bdev, jiffies, true);
+		part_stat_inc(req->bdev, ios[sgrp]);
+		part_stat_add(req->bdev, nsecs[sgrp], now - req->start_time_ns);
 		part_stat_unlock();
-
-		hd_struct_put(part);
 	}
 }
 
@@ -1317,15 +1311,15 @@ void blk_account_io_start(struct request *rq)
 	if (!blk_do_io_stat(rq))
 		return;
 
-	rq->part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
+	rq->bdev = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
 
 	part_stat_lock();
-	update_io_ticks(rq->part, jiffies, false);
+	update_io_ticks(rq->bdev, jiffies, false);
 	part_stat_unlock();
 }
 
-static unsigned long __part_start_io_acct(struct hd_struct *part,
-					  unsigned int sectors, unsigned int op)
+static unsigned long __part_start_io_acct(struct block_device *part,
+		unsigned int sectors, unsigned int op)
 {
 	const int sgrp = op_stat_group(op);
 	unsigned long now = READ_ONCE(jiffies);
@@ -1340,8 +1334,8 @@ static unsigned long __part_start_io_acct(struct hd_struct *part,
 	return now;
 }
 
-unsigned long part_start_io_acct(struct gendisk *disk, struct hd_struct **part,
-				 struct bio *bio)
+unsigned long part_start_io_acct(struct gendisk *disk,
+		struct block_device **part, struct bio *bio)
 {
 	*part = disk_map_sector_rcu(disk, bio->bi_iter.bi_sector);
 
@@ -1352,11 +1346,11 @@ EXPORT_SYMBOL_GPL(part_start_io_acct);
 unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 				 unsigned int op)
 {
-	return __part_start_io_acct(&disk->part0, sectors, op);
+	return __part_start_io_acct(disk->part0, sectors, op);
 }
 EXPORT_SYMBOL(disk_start_io_acct);
 
-static void __part_end_io_acct(struct hd_struct *part, unsigned int op,
+static void __part_end_io_acct(struct block_device *part, unsigned int op,
 			       unsigned long start_time)
 {
 	const int sgrp = op_stat_group(op);
@@ -1370,18 +1364,17 @@ static void __part_end_io_acct(struct hd_struct *part, unsigned int op,
 	part_stat_unlock();
 }
 
-void part_end_io_acct(struct hd_struct *part, struct bio *bio,
+void part_end_io_acct(struct block_device *part, struct bio *bio,
 		      unsigned long start_time)
 {
 	__part_end_io_acct(part, bio_op(bio), start_time);
-	hd_struct_put(part);
 }
 EXPORT_SYMBOL_GPL(part_end_io_acct);
 
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		      unsigned long start_time)
 {
-	__part_end_io_acct(&disk->part0, op, start_time);
+	__part_end_io_acct(disk->part0, op, start_time);
 }
 EXPORT_SYMBOL(disk_end_io_acct);
 
diff --git a/block/blk-flush.c b/block/blk-flush.c
index e32958f0b68750..9507dcdd58814c 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -139,7 +139,7 @@ static void blk_flush_queue_rq(struct request *rq, bool add_front)
 
 static void blk_account_io_flush(struct request *rq)
 {
-	struct hd_struct *part = &rq->rq_disk->part0;
+	struct block_device *part = rq->rq_disk->part0;
 
 	part_stat_lock();
 	part_stat_inc(part, ios[STAT_FLUSH]);
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e90614fd8d6a42..752f9c7220622a 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -65,7 +65,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 
 	/* In case the discard request is in a partition */
 	if (bdev_is_partition(bdev))
-		part_offset = bdev->bd_part->start_sect;
+		part_offset = bdev->bd_start_sect;
 
 	while (nr_sects) {
 		sector_t granularity_aligned_lba, req_sects;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e458060337..3ec0d322e4a769 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -681,10 +681,8 @@ static void blk_account_io_merge_request(struct request *req)
 {
 	if (blk_do_io_stat(req)) {
 		part_stat_lock();
-		part_stat_inc(req->part, merges[op_stat_group(req_op(req))]);
+		part_stat_inc(req->bdev, merges[op_stat_group(req_op(req))]);
 		part_stat_unlock();
-
-		hd_struct_put(req->part);
 	}
 }
 
@@ -906,7 +904,7 @@ static void blk_account_io_merge_bio(struct request *req)
 		return;
 
 	part_stat_lock();
-	part_stat_inc(req->part, merges[op_stat_group(req_op(req))]);
+	part_stat_inc(req->bdev, merges[op_stat_group(req_op(req))]);
 	part_stat_unlock();
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 55bcee5dc0320c..a28475e6405de9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -95,7 +95,7 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 }
 
 struct mq_inflight {
-	struct hd_struct *part;
+	struct block_device *part;
 	unsigned int inflight[2];
 };
 
@@ -105,13 +105,14 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
 {
 	struct mq_inflight *mi = priv;
 
-	if (rq->part == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
+	if (rq->bdev == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
 		mi->inflight[rq_data_dir(rq)]++;
 
 	return true;
 }
 
-unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
+unsigned int blk_mq_in_flight(struct request_queue *q,
+		struct block_device *part)
 {
 	struct mq_inflight mi = { .part = part };
 
@@ -120,7 +121,7 @@ unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
 	return mi.inflight[0] + mi.inflight[1];
 }
 
-void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
 			 unsigned int inflight[2])
 {
 	struct mq_inflight mi = { .part = part };
@@ -300,7 +301,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	INIT_HLIST_NODE(&rq->hash);
 	RB_CLEAR_NODE(&rq->rb_node);
 	rq->rq_disk = NULL;
-	rq->part = NULL;
+	rq->bdev = NULL;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 	rq->alloc_time_ns = alloc_time_ns;
 #endif
diff --git a/block/blk-mq.h b/block/blk-mq.h
index a52703c98b7736..395fbc6c59d1eb 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -182,8 +182,9 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
 	return hctx->nr_ctx && hctx->tags;
 }
 
-unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part);
-void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+unsigned int blk_mq_in_flight(struct request_queue *q,
+		struct block_device *bdev);
+void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *bdev,
 			 unsigned int inflight[2]);
 
 static inline void blk_mq_put_dispatch_budget(struct request_queue *q)
diff --git a/block/blk.h b/block/blk.h
index 7d10bb24eb282d..90dd2047c6cd29 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -215,7 +215,15 @@ static inline void elevator_exit(struct request_queue *q,
 	__elevator_exit(q, e);
 }
 
-struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
+static inline struct block_device *__bdget_disk(struct gendisk *disk,
+		int partno)
+{
+	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
+
+	if (unlikely(partno < 0 || partno >= ptbl->len))
+		return NULL;
+	return rcu_dereference(ptbl->part[partno]);
+}
 
 ssize_t part_size_show(struct device *dev, struct device_attribute *attr,
 		char *buf);
@@ -348,43 +356,21 @@ void blk_queue_free_zone_bitmaps(struct request_queue *q);
 static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
 #endif
 
-struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
+struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
+int blk_alloc_devt(struct block_device *bdev, dev_t *devt);
 void blk_free_devt(dev_t devt);
 char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
 #define ADDPART_FLAG_RAID	1
 #define ADDPART_FLAG_WHOLEDISK	2
-void delete_partition(struct hd_struct *part);
+void delete_partition(struct block_device *part);
 int bdev_add_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length);
 int bdev_del_partition(struct block_device *bdev, int partno);
 int bdev_resize_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length);
 int disk_expand_part_tbl(struct gendisk *disk, int target);
-int hd_ref_init(struct hd_struct *part);
-
-/* no need to get/put refcount of part0 */
-static inline int hd_struct_try_get(struct hd_struct *part)
-{
-	if (part->partno)
-		return percpu_ref_tryget_live(&part->ref);
-	return 1;
-}
-
-static inline void hd_struct_put(struct hd_struct *part)
-{
-	if (part->partno)
-		percpu_ref_put(&part->ref);
-}
-
-static inline void hd_free_part(struct hd_struct *part)
-{
-	free_percpu(part->dkstats);
-	kfree(part->info);
-	percpu_ref_exit(&part->ref);
-}
 
 int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
diff --git a/block/genhd.c b/block/genhd.c
index f1e20ec1b62887..5dcb8b8902daae 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -40,7 +40,7 @@ static void disk_release_events(struct gendisk *disk);
 
 void set_capacity(struct gendisk *disk, sector_t sectors)
 {
-	struct block_device *bdev = disk->part0.bdev;
+	struct block_device *bdev = disk->part0;
 
 	spin_lock(&bdev->bd_size_lock);
 	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
@@ -93,13 +93,14 @@ const char *bdevname(struct block_device *bdev, char *buf)
 }
 EXPORT_SYMBOL(bdevname);
 
-static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
+static void part_stat_read_all(struct block_device *part,
+		struct disk_stats *stat)
 {
 	int cpu;
 
 	memset(stat, 0, sizeof(struct disk_stats));
 	for_each_possible_cpu(cpu) {
-		struct disk_stats *ptr = per_cpu_ptr(part->dkstats, cpu);
+		struct disk_stats *ptr = per_cpu_ptr(part->bd_stats, cpu);
 		int group;
 
 		for (group = 0; group < NR_STAT_GROUPS; group++) {
@@ -113,7 +114,7 @@ static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
 	}
 }
 
-static unsigned int part_in_flight(struct hd_struct *part)
+static unsigned int part_in_flight(struct block_device *part)
 {
 	unsigned int inflight = 0;
 	int cpu;
@@ -128,7 +129,8 @@ static unsigned int part_in_flight(struct hd_struct *part)
 	return inflight;
 }
 
-static void part_in_flight_rw(struct hd_struct *part, unsigned int inflight[2])
+static void part_in_flight_rw(struct block_device *part,
+		unsigned int inflight[2])
 {
 	int cpu;
 
@@ -144,42 +146,6 @@ static void part_in_flight_rw(struct hd_struct *part, unsigned int inflight[2])
 		inflight[1] = 0;
 }
 
-struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
-{
-	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
-
-	if (unlikely(partno < 0 || partno >= ptbl->len))
-		return NULL;
-	return rcu_dereference(ptbl->part[partno]);
-}
-
-/**
- * disk_get_part - get partition
- * @disk: disk to look partition from
- * @partno: partition number
- *
- * Look for partition @partno from @disk.  If found, increment
- * reference count and return it.
- *
- * CONTEXT:
- * Don't care.
- *
- * RETURNS:
- * Pointer to the found partition on success, NULL if not found.
- */
-struct hd_struct *disk_get_part(struct gendisk *disk, int partno)
-{
-	struct hd_struct *part;
-
-	rcu_read_lock();
-	part = __disk_get_part(disk, partno);
-	if (part)
-		get_device(part_to_dev(part));
-	rcu_read_unlock();
-
-	return part;
-}
-
 /**
  * disk_part_iter_init - initialize partition iterator
  * @piter: iterator to initialize
@@ -224,7 +190,7 @@ EXPORT_SYMBOL_GPL(disk_part_iter_init);
  * CONTEXT:
  * Don't care.
  */
-struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
+struct block_device *disk_part_iter_next(struct disk_part_iter *piter)
 {
 	struct disk_part_tbl *ptbl;
 	int inc, end;
@@ -251,19 +217,18 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 
 	/* iterate to the next partition */
 	for (; piter->idx != end; piter->idx += inc) {
-		struct hd_struct *part;
+		struct block_device *part;
 
 		part = rcu_dereference(ptbl->part[piter->idx]);
 		if (!part)
 			continue;
-		if (!bdev_nr_sectors(part->bdev) &&
+		if (!bdev_nr_sectors(part) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
 		      piter->idx == 0))
 			continue;
 
-		get_device(part_to_dev(part));
-		piter->part = part;
+		piter->part = bdgrab(part);
 		piter->idx += inc;
 		break;
 	}
@@ -285,15 +250,16 @@ EXPORT_SYMBOL_GPL(disk_part_iter_next);
  */
 void disk_part_iter_exit(struct disk_part_iter *piter)
 {
-	disk_put_part(piter->part);
+	if (piter->part)
+		bdput(piter->part);
 	piter->part = NULL;
 }
 EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 
-static inline int sector_in_part(struct hd_struct *part, sector_t sector)
+static inline int sector_in_part(struct block_device *part, sector_t sector)
 {
-	return part->start_sect <= sector &&
-		sector < part->start_sect + bdev_nr_sectors(part->bdev);
+	return part->bd_start_sect <= sector &&
+		sector < part->bd_start_sect + bdev_nr_sectors(part);
 }
 
 /**
@@ -313,36 +279,28 @@ static inline int sector_in_part(struct hd_struct *part, sector_t sector)
  * Found partition on success, part0 is returned if no partition matches
  * or the matched partition is being deleted.
  */
-struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
+struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
 {
 	struct disk_part_tbl *ptbl;
-	struct hd_struct *part;
+	struct block_device *part;
 	int i;
 
 	rcu_read_lock();
 	ptbl = rcu_dereference(disk->part_tbl);
 
 	part = rcu_dereference(ptbl->last_lookup);
-	if (part && sector_in_part(part, sector) && hd_struct_try_get(part))
+	if (part && sector_in_part(part, sector))
 		goto out_unlock;
 
 	for (i = 1; i < ptbl->len; i++) {
 		part = rcu_dereference(ptbl->part[i]);
-
 		if (part && sector_in_part(part, sector)) {
-			/*
-			 * only live partition can be cached for lookup,
-			 * so use-after-free on cached & deleting partition
-			 * can be avoided
-			 */
-			if (!hd_struct_try_get(part))
-				break;
 			rcu_assign_pointer(ptbl->last_lookup, part);
 			goto out_unlock;
 		}
 	}
 
-	part = &disk->part0;
+	part = disk->part0;
 out_unlock:
 	rcu_read_unlock();
 	return part;
@@ -557,7 +515,7 @@ static int blk_mangle_minor(int minor)
 
 /**
  * blk_alloc_devt - allocate a dev_t for a partition
- * @part: partition to allocate dev_t for
+ * @bdev: partition to allocate dev_t for
  * @devt: out parameter for resulting dev_t
  *
  * Allocate a dev_t for block device.
@@ -569,14 +527,14 @@ static int blk_mangle_minor(int minor)
  * CONTEXT:
  * Might sleep.
  */
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
+int blk_alloc_devt(struct block_device *bdev, dev_t *devt)
 {
-	struct gendisk *disk = part_to_disk(part);
+	struct gendisk *disk = bdev->bd_disk;
 	int idx;
 
 	/* in consecutive minor range? */
-	if (part->partno < disk->minors) {
-		*devt = MKDEV(disk->major, disk->first_minor + part->partno);
+	if (bdev->bd_partno < disk->minors) {
+		*devt = MKDEV(disk->major, disk->first_minor + bdev->bd_partno);
 		return 0;
 	}
 
@@ -633,7 +591,7 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 {
 	struct device *ddev = disk_to_dev(disk);
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	int err;
 
 	ddev->parent = parent;
@@ -665,7 +623,8 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	 */
 	pm_runtime_set_memalloc_noio(ddev, true);
 
-	disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
+	disk->part0->bd_holder_dir =
+		kobject_create_and_add("holders", &ddev->kobj);
 	disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
 
 	if (disk->flags & GENHD_FL_HIDDEN) {
@@ -682,7 +641,7 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	/* announce possible partitions */
 	disk_part_iter_init(&piter, disk, 0);
 	while ((part = disk_part_iter_next(&piter)))
-		kobject_uevent(&part_to_dev(part)->kobj, KOBJ_ADD);
+		kobject_uevent(bdev_kobj(part), KOBJ_ADD);
 	disk_part_iter_exit(&piter);
 
 	if (disk->queue->backing_dev_info->dev) {
@@ -731,7 +690,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 
 	disk->flags |= GENHD_FL_UP;
 
-	retval = blk_alloc_devt(&disk->part0, &devt);
+	retval = blk_alloc_devt(disk->part0, &devt);
 	if (retval) {
 		WARN_ON(1);
 		return;
@@ -758,7 +717,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		bdev_add(disk->part0.bdev, devt);
+		bdev_add(disk->part0, devt);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -788,14 +747,8 @@ void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk)
 }
 EXPORT_SYMBOL(device_add_disk_no_queue_reg);
 
-static void invalidate_partition(struct gendisk *disk, int partno)
+static void invalidate_partition(struct block_device *bdev)
 {
-	struct block_device *bdev;
-
-	bdev = bdget_disk(disk, partno);
-	if (!bdev)
-		return;
-
 	fsync_bdev(bdev);
 	__invalidate_device(bdev, true);
 
@@ -804,7 +757,6 @@ static void invalidate_partition(struct gendisk *disk, int partno)
 	 * as last inode reference is dropped.
 	 */
 	remove_inode_hash(bdev->bd_inode);
-	bdput(bdev);
 }
 
 /**
@@ -829,7 +781,7 @@ static void invalidate_partition(struct gendisk *disk, int partno)
 void del_gendisk(struct gendisk *disk)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	might_sleep();
 
@@ -848,12 +800,12 @@ void del_gendisk(struct gendisk *disk)
 	disk_part_iter_init(&piter, disk,
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
 	while ((part = disk_part_iter_next(&piter))) {
-		invalidate_partition(disk, part->partno);
+		invalidate_partition(part);
 		delete_partition(part);
 	}
 	disk_part_iter_exit(&piter);
 
-	invalidate_partition(disk, 0);
+	invalidate_partition(disk->part0);
 	set_capacity(disk, 0);
 	disk->flags &= ~GENHD_FL_UP;
 	up_write(&disk->lookup_sem);
@@ -870,11 +822,11 @@ void del_gendisk(struct gendisk *disk)
 
 	blk_unregister_queue(disk);
 
-	kobject_put(disk->part0.holder_dir);
+	kobject_put(disk->part0->bd_holder_dir);
 	kobject_put(disk->slave_dir);
 
-	part_stat_set_all(&disk->part0, 0);
-	disk->part0.stamp = 0;
+	part_stat_set_all(disk->part0, 0);
+	disk->part0->bd_stamp = 0;
 	if (!sysfs_deprecated)
 		sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
 	pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
@@ -942,13 +894,13 @@ void blk_request_module(dev_t devt)
  */
 struct block_device *bdget_disk(struct gendisk *disk, int partno)
 {
-	struct hd_struct *part;
 	struct block_device *bdev = NULL;
 
-	part = disk_get_part(disk, partno);
-	if (part)
-		bdev = bdget_part(part);
-	disk_put_part(part);
+	rcu_read_lock();
+	bdev = __bdget_disk(disk, partno);
+	if (bdev)
+		bdgrab(bdev);
+	rcu_read_unlock();
 
 	return bdev;
 }
@@ -968,7 +920,7 @@ void __init printk_all_partitions(void)
 	while ((dev = class_dev_iter_next(&iter))) {
 		struct gendisk *disk = dev_to_disk(dev);
 		struct disk_part_iter piter;
-		struct hd_struct *part;
+		struct block_device *part;
 		char name_buf[BDEVNAME_SIZE];
 		char devt_buf[BDEVT_SIZE];
 
@@ -987,13 +939,14 @@ void __init printk_all_partitions(void)
 		 */
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter))) {
-			bool is_part0 = part == &disk->part0;
+			bool is_part0 = part == disk->part0;
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
-			       bdevt_str(part_devt(part), devt_buf),
-			       bdev_nr_sectors(part->bdev) >> 1
-			       , disk_name(disk, part->partno, name_buf),
-			       part->info ? part->info->uuid : "");
+			       bdevt_str(part->bd_dev, devt_buf),
+			       bdev_nr_sectors(part) >> 1,
+			       disk_name(disk, part->bd_partno, name_buf),
+			       part->bd_meta_info ?
+					part->bd_meta_info->uuid : "");
 			if (is_part0) {
 				if (dev->parent && dev->parent->driver)
 					printk(" driver: %s\n",
@@ -1069,7 +1022,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 {
 	struct gendisk *sgp = v;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	char buf[BDEVNAME_SIZE];
 
 	/* Don't show non-partitionable removeable devices or empty devices */
@@ -1083,9 +1036,9 @@ static int show_partition(struct seq_file *seqf, void *v)
 	disk_part_iter_init(&piter, sgp, DISK_PITER_INCL_PART0);
 	while ((part = disk_part_iter_next(&piter)))
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
-			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
-			   bdev_nr_sectors(part->bdev) >> 1,
-			   disk_name(sgp, part->partno, buf));
+			   MAJOR(part->bd_dev), MINOR(part->bd_dev),
+			   bdev_nr_sectors(part) >> 1,
+			   disk_name(sgp, part->bd_partno, buf));
 	disk_part_iter_exit(&piter);
 
 	return 0;
@@ -1164,24 +1117,22 @@ static ssize_t disk_ro_show(struct device *dev,
 ssize_t part_size_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
+	return sprintf(buf, "%llu\n", bdev_nr_sectors(dev_to_bdev(dev)));
 }
 
 ssize_t part_stat_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	struct request_queue *q = part_to_disk(p)->queue;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev->bd_disk->queue;
 	struct disk_stats stat;
 	unsigned int inflight;
 
-	part_stat_read_all(p, &stat);
+	part_stat_read_all(bdev, &stat);
 	if (queue_is_mq(q))
-		inflight = blk_mq_in_flight(q, p);
+		inflight = blk_mq_in_flight(q, bdev);
 	else
-		inflight = part_in_flight(p);
+		inflight = part_in_flight(bdev);
 
 	return sprintf(buf,
 		"%8lu %8lu %8llu %8u "
@@ -1216,14 +1167,14 @@ ssize_t part_stat_show(struct device *dev,
 ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
 			   char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	struct request_queue *q = part_to_disk(p)->queue;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev->bd_disk->queue;
 	unsigned int inflight[2];
 
 	if (queue_is_mq(q))
-		blk_mq_in_flight_rw(q, p, inflight);
+		blk_mq_in_flight_rw(q, bdev, inflight);
 	else
-		part_in_flight_rw(p, inflight);
+		part_in_flight_rw(bdev, inflight);
 
 	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
 }
@@ -1271,16 +1222,14 @@ static DEVICE_ATTR(badblocks, 0644, disk_badblocks_show, disk_badblocks_store);
 ssize_t part_fail_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%d\n", p->make_it_fail);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->make_it_fail);
 }
 
 ssize_t part_fail_store(struct device *dev,
 			struct device_attribute *attr,
 			const char *buf, size_t count)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *p = dev_to_bdev(dev);
 	int i;
 
 	if (count > 0 && sscanf(buf, "%d", &i) > 0)
@@ -1441,9 +1390,9 @@ static void disk_release(struct device *dev)
 	disk_release_events(disk);
 	kfree(disk->random);
 	disk_replace_part_tbl(disk, NULL);
-	hd_free_part(&disk->part0);
 	if (disk->queue)
 		blk_put_queue(disk->queue);
+	bdput(disk->part0);
 	kfree(disk);
 }
 struct class block_class = {
@@ -1479,7 +1428,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 {
 	struct gendisk *gp = v;
 	struct disk_part_iter piter;
-	struct hd_struct *hd;
+	struct block_device *hd;
 	char buf[BDEVNAME_SIZE];
 	unsigned int inflight;
 	struct disk_stats stat;
@@ -1507,8 +1456,8 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 			   "%lu %lu %lu %u "
 			   "%lu %u"
 			   "\n",
-			   MAJOR(part_devt(hd)), MINOR(part_devt(hd)),
-			   disk_name(gp, hd->partno, buf),
+			   MAJOR(hd->bd_dev), MINOR(hd->bd_dev),
+			   disk_name(gp, hd->bd_partno, buf),
 			   stat.ios[STAT_READ],
 			   stat.merges[STAT_READ],
 			   stat.sectors[STAT_READ],
@@ -1564,9 +1513,9 @@ dev_t blk_lookup_devt(const char *name, int partno)
 	struct device *dev;
 
 	class_dev_iter_init(&iter, &block_class, NULL, &disk_type);
-	while ((dev = class_dev_iter_next(&iter))) {
+	while ((dev = class_dev_iter_next(&iter)) && !devt) {
 		struct gendisk *disk = dev_to_disk(dev);
-		struct hd_struct *part;
+		struct block_device *bdev;
 
 		if (strcmp(dev_name(dev), name))
 			continue;
@@ -1577,15 +1526,13 @@ dev_t blk_lookup_devt(const char *name, int partno)
 			 */
 			devt = MKDEV(MAJOR(dev->devt),
 				     MINOR(dev->devt) + partno);
-			break;
+		} else {
+			rcu_read_lock();
+			bdev = __bdget_disk(disk, partno);
+			if (bdev)
+				devt = bdev->bd_dev;
+			rcu_read_unlock();
 		}
-		part = disk_get_part(disk, partno);
-		if (part) {
-			devt = part_devt(part);
-			disk_put_part(part);
-			break;
-		}
-		disk_put_part(part);
 	}
 	class_dev_iter_exit(&iter);
 	return devt;
@@ -1607,27 +1554,18 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk)
 		return NULL;
 
-	disk->part0.bdev = bdev_alloc(disk, 0);
-	if (!disk->part0.bdev)
+	disk->part0 = bdev_alloc(disk, 0);
+	if (!disk->part0)
 		goto out_free_disk;
 
-	disk->part0.dkstats = alloc_percpu(struct disk_stats);
-	if (!disk->part0.dkstats)
-		goto out_bdput;
-
 	mutex_init(&disk->mutex);
 	init_rwsem(&disk->lookup_sem);
 	disk->node_id = node_id;
-	if (disk_expand_part_tbl(disk, 0)) {
-		free_percpu(disk->part0.dkstats);
-		goto out_free_disk;
-	}
+	if (disk_expand_part_tbl(disk, 0))
+		goto out_bdput;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
-	rcu_assign_pointer(ptbl->part[0], &disk->part0);
-
-	if (hd_ref_init(&disk->part0))
-		goto out_free_part0;
+	rcu_assign_pointer(ptbl->part[0], disk->part0);
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
@@ -1636,10 +1574,8 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	device_initialize(disk_to_dev(disk));
 	return disk;
 
-out_free_part0:
-	hd_free_part(&disk->part0);
 out_bdput:
-	bdput(disk->part0.bdev);
+	bdput(disk->part0);
 out_free_disk:
 	kfree(disk);
 	return NULL;
@@ -1676,16 +1612,16 @@ static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 void set_disk_ro(struct gendisk *disk, int flag)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
-	if (disk->part0.policy != flag) {
+	if (disk->part0->bd_policy != flag) {
 		set_disk_ro_uevent(disk, flag);
-		disk->part0.policy = flag;
+		disk->part0->bd_policy = flag;
 	}
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter)))
-		part->policy = flag;
+		part->bd_policy = flag;
 	disk_part_iter_exit(&piter);
 }
 
@@ -1695,7 +1631,7 @@ int bdev_read_only(struct block_device *bdev)
 {
 	if (!bdev)
 		return 0;
-	return bdev->bd_part->policy;
+	return bdev->bd_policy;
 }
 
 EXPORT_SYMBOL(bdev_read_only);
diff --git a/block/ioctl.c b/block/ioctl.c
index 18adf9b16a30f6..7207b716b6c9a7 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -35,7 +35,7 @@ static int blkpg_do_ioctl(struct block_device *bdev,
 	start = p.start >> SECTOR_SHIFT;
 	length = p.length >> SECTOR_SHIFT;
 
-	/* check for fit in a hd_struct */
+	/* check for fit in a sector_t */
 	if (sizeof(sector_t) < sizeof(long long)) {
 		long pstart = start, plength = length;
 
@@ -355,7 +355,7 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 			return ret;
 	}
 	if (bdev_is_partition(bdev))
-		bdev->bd_part->policy = n;
+		bdev->bd_policy = n;
 	else
 		set_disk_ro(bdev->bd_disk, n);
 	return 0;
diff --git a/block/partitions/core.c b/block/partitions/core.c
index e50b5ca17df550..e22f1b2d5c423d 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -182,44 +182,39 @@ static struct parsed_partitions *check_partition(struct gendisk *hd,
 static ssize_t part_partition_show(struct device *dev,
 				   struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%d\n", p->partno);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->bd_partno);
 }
 
 static ssize_t part_start_show(struct device *dev,
 			       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%llu\n",(unsigned long long)p->start_sect);
+	return sprintf(buf, "%llu\n", dev_to_bdev(dev)->bd_start_sect);
 }
 
 static ssize_t part_ro_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	return sprintf(buf, "%d\n", p->policy ? 1 : 0);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->bd_policy ? 1 : 0);
 }
 
 static ssize_t part_alignment_offset_show(struct device *dev,
 					  struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *bdev = dev_to_bdev(dev);
 
 	return sprintf(buf, "%u\n",
-		queue_limit_alignment_offset(&part_to_disk(p)->queue->limits,
-				p->start_sect));
+		queue_limit_alignment_offset(&bdev->bd_disk->queue->limits,
+				bdev->bd_start_sect));
 }
 
 static ssize_t part_discard_alignment_show(struct device *dev,
 					   struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *bdev = dev_to_bdev(dev);
 
 	return sprintf(buf, "%u\n",
-		queue_limit_discard_alignment(&part_to_disk(p)->queue->limits,
-				p->start_sect));
+		queue_limit_discard_alignment(&bdev->bd_disk->queue->limits,
+				bdev->bd_start_sect));
 }
 
 static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
@@ -264,19 +259,19 @@ static const struct attribute_group *part_attr_groups[] = {
 
 static void part_release(struct device *dev)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *p = dev_to_bdev(dev);
+
 	blk_free_devt(dev->devt);
-	hd_free_part(p);
-	kfree(p);
+	bdput(p);
 }
 
 static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
 {
-	struct hd_struct *part = dev_to_part(dev);
+	struct block_device *part = dev_to_bdev(dev);
 
-	add_uevent_var(env, "PARTN=%u", part->partno);
-	if (part->info && part->info->volname[0])
-		add_uevent_var(env, "PARTNAME=%s", part->info->volname);
+	add_uevent_var(env, "PARTN=%u", part->bd_partno);
+	if (part->bd_meta_info && part->bd_meta_info->volname[0])
+		add_uevent_var(env, "PARTNAME=%s", part->bd_meta_info->volname);
 	return 0;
 }
 
@@ -287,72 +282,21 @@ struct device_type part_type = {
 	.uevent		= part_uevent,
 };
 
-static void hd_struct_free_work(struct work_struct *work)
-{
-	struct hd_struct *part =
-		container_of(to_rcu_work(work), struct hd_struct, rcu_work);
-	struct gendisk *disk = part_to_disk(part);
-
-	/*
-	 * Release the disk reference acquired in delete_partition here.
-	 * We can't release it in hd_struct_free because the final put_device
-	 * needs process context and thus can't be run directly from a
-	 * percpu_ref ->release handler.
-	 */
-	put_device(disk_to_dev(disk));
-
-	part->start_sect = 0;
-	bdev_set_nr_sectors(part->bdev, 0);
-	part_stat_set_all(part, 0);
-	put_device(part_to_dev(part));
-}
-
-static void hd_struct_free(struct percpu_ref *ref)
-{
-	struct hd_struct *part = container_of(ref, struct hd_struct, ref);
-	struct gendisk *disk = part_to_disk(part);
-	struct disk_part_tbl *ptbl =
-		rcu_dereference_protected(disk->part_tbl, 1);
-
-	rcu_assign_pointer(ptbl->last_lookup, NULL);
-
-	INIT_RCU_WORK(&part->rcu_work, hd_struct_free_work);
-	queue_rcu_work(system_wq, &part->rcu_work);
-}
-
-int hd_ref_init(struct hd_struct *part)
-{
-	if (percpu_ref_init(&part->ref, hd_struct_free, 0, GFP_KERNEL))
-		return -ENOMEM;
-	return 0;
-}
-
 /*
  * Must be called either with disk->mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
-void delete_partition(struct hd_struct *part)
+void delete_partition(struct block_device *part)
 {
-	struct gendisk *disk = part_to_disk(part);
+	struct gendisk *disk = part->bd_disk;
 	struct disk_part_tbl *ptbl =
 		rcu_dereference_protected(disk->part_tbl, 1);
 
-	/*
-	 * ->part_tbl is referenced in this part's release handler, so
-	 *  we have to hold the disk device
-	 */
-	get_device(disk_to_dev(disk));
-	rcu_assign_pointer(ptbl->part[part->partno], NULL);
-	kobject_put(part->holder_dir);
+	rcu_assign_pointer(ptbl->part[part->bd_partno], NULL);
+	rcu_assign_pointer(ptbl->last_lookup, NULL);
+	kobject_put(part->bd_holder_dir);
 	device_del(part_to_dev(part));
-
-	/*
-	 * Remove the block device from the inode hash, so that it cannot be
-	 * looked up while waiting for the RCU grace period.
-	 */
-	bdput(part->bdev);
-
-	percpu_ref_kill(&part->ref);
+	put_device(part_to_dev(part));
 }
 
 static ssize_t whole_disk_show(struct device *dev,
@@ -366,11 +310,11 @@ static DEVICE_ATTR(whole_disk, 0444, whole_disk_show, NULL);
  * Must be called either with disk->mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
-static struct hd_struct *add_partition(struct gendisk *disk, int partno,
+static struct block_device *add_partition(struct gendisk *disk, int partno,
 				sector_t start, sector_t len, int flags,
 				struct partition_meta_info *info)
 {
-	struct hd_struct *p;
+	struct block_device *p;
 	dev_t devt = MKDEV(0, 0);
 	struct device *ddev = disk_to_dev(disk);
 	struct device *pdev;
@@ -404,36 +348,22 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (ptbl->part[partno])
 		return ERR_PTR(-EBUSY);
 
-	p = kzalloc(sizeof(*p), GFP_KERNEL);
+	p = bdev_alloc(disk, partno);
 	if (!p)
-		return ERR_PTR(-EBUSY);
-
-	err = -ENOMEM;
-	p->dkstats = alloc_percpu(struct disk_stats);
-	if (!p->dkstats)
-		goto out_free;
-
-	p->bdev = bdev_alloc(disk, partno);
-	if (!p->bdev)
-		goto out_free_stats;
-
-	pdev = part_to_dev(p);
+		return ERR_PTR(-ENOMEM);
 
-	p->start_sect = start;
-	bdev_set_nr_sectors(p->bdev, len);
-	p->partno = partno;
-	p->policy = get_disk_ro(disk);
+	p->bd_start_sect = start;
+	bdev_set_nr_sectors(p, len);
+	p->bd_policy = get_disk_ro(disk);
 
 	if (info) {
-		struct partition_meta_info *pinfo;
-
-		pinfo = kzalloc_node(sizeof(*pinfo), GFP_KERNEL, disk->node_id);
-		if (!pinfo)
-			goto out_bdput;
-		memcpy(pinfo, info, sizeof(*info));
-		p->info = pinfo;
+		err = -ENOMEM;
+		p->bd_meta_info = kmemdup(info, sizeof(*info), GFP_KERNEL);
+		if (!p->bd_meta_info)
+			goto out_free_stats;
 	}
 
+	pdev = part_to_dev(p);
 	dname = dev_name(ddev);
 	if (isdigit(dname[strlen(dname) - 1]))
 		dev_set_name(pdev, "%sp%d", dname, partno);
@@ -457,8 +387,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		goto out_put;
 
 	err = -ENOMEM;
-	p->holder_dir = kobject_create_and_add("holders", &pdev->kobj);
-	if (!p->holder_dir)
+	p->bd_holder_dir = kobject_create_and_add("holders", &pdev->kobj);
+	if (!p->bd_holder_dir)
 		goto out_del;
 
 	dev_set_uevent_suppress(pdev, 0);
@@ -468,15 +398,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 			goto out_del;
 	}
 
-	err = hd_ref_init(p);
-	if (err) {
-		if (flags & ADDPART_FLAG_WHOLEDISK)
-			goto out_remove_file;
-		goto out_del;
-	}
-
 	/* everything is up and running, commence */
-	bdev_add(p->bdev, devt);
+	bdev_add(p, devt);
 	rcu_assign_pointer(ptbl->part[partno], p);
 
 	/* suppress uevent if the disk suppresses it */
@@ -485,19 +408,13 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	return p;
 
 out_free_info:
-	kfree(p->info);
-out_bdput:
-	bdput(p->bdev);
+	kfree(p->bd_meta_info);
 out_free_stats:
-	free_percpu(p->dkstats);
-out_free:
-	kfree(p);
+	bdput(p);
 	return ERR_PTR(err);
 
-out_remove_file:
-	device_remove_file(pdev, &dev_attr_whole_disk);
 out_del:
-	kobject_put(p->holder_dir);
+	kobject_put(p->bd_holder_dir);
 	device_del(pdev);
 out_put:
 	put_device(pdev);
@@ -508,14 +425,14 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 		sector_t length, int skip_partno)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	bool overlap = false;
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
-		if (part->partno == skip_partno ||
-		    start >= part->start_sect + bdev_nr_sectors(part->bdev) ||
-		    start + length <= part->start_sect)
+		if (part->bd_partno == skip_partno ||
+		    start >= part->bd_start_sect + bdev_nr_sectors(part) ||
+		    start + length <= part->bd_start_sect)
 			continue;
 		overlap = true;
 		break;
@@ -528,7 +445,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 int bdev_add_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length)
 {
-	struct hd_struct *part;
+	struct block_device *part;
 
 	mutex_lock(&bdev->bd_disk->mutex);
 	if (partition_overlaps(bdev->bd_disk, start, length, -1)) {
@@ -544,72 +461,54 @@ int bdev_add_partition(struct block_device *bdev, int partno,
 
 int bdev_del_partition(struct block_device *bdev, int partno)
 {
-	struct block_device *bdevp;
-	struct hd_struct *part = NULL;
+	struct block_device *part = NULL;
 	int ret;
 
-	bdevp = bdget_disk(bdev->bd_disk, partno);
-	if (!bdevp)
+	part = bdget_disk(bdev->bd_disk, partno);
+	if (!part)
 		return -ENXIO;
 
 	mutex_lock(&bdev->bd_disk->mutex);
-
-	ret = -ENXIO;
-	part = disk_get_part(bdev->bd_disk, partno);
-	if (!part)
-		goto out_unlock;
-
 	ret = -EBUSY;
-	if (bdevp->bd_openers)
+	if (part->bd_openers)
 		goto out_unlock;
 
-	sync_blockdev(bdevp);
-	invalidate_bdev(bdevp);
+	sync_blockdev(part);
+	invalidate_bdev(part);
 
 	delete_partition(part);
 	ret = 0;
 out_unlock:
 	mutex_unlock(&bdev->bd_disk->mutex);
-	bdput(bdevp);
-	if (part)
-		disk_put_part(part);
+	bdput(part);
 	return ret;
 }
 
 int bdev_resize_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length)
 {
-	struct block_device *bdevp;
-	struct hd_struct *part;
+	struct block_device *part = NULL;
 	int ret = 0;
 
-	part = disk_get_part(bdev->bd_disk, partno);
+	part = bdget_disk(bdev->bd_disk, partno);
 	if (!part)
 		return -ENXIO;
 
-	ret = -ENOMEM;
-	bdevp = bdget_part(part);
-	if (!bdevp)
-		goto out_put_part;
-
 	mutex_lock(&bdev->bd_disk->mutex);
-
 	ret = -EINVAL;
-	if (start != part->start_sect)
+	if (start != part->bd_start_sect)
 		goto out_unlock;
 
 	ret = -EBUSY;
 	if (partition_overlaps(bdev->bd_disk, start, length, partno))
 		goto out_unlock;
 
-	bdev_set_nr_sectors(bdevp, length);
+	bdev_set_nr_sectors(part, length);
 
 	ret = 0;
 out_unlock:
 	mutex_unlock(&bdev->bd_disk->mutex);
-	bdput(bdevp);
-out_put_part:
-	disk_put_part(part);
+	bdput(part);
 	return ret;
 }
 
@@ -632,7 +531,7 @@ static bool disk_unlock_native_capacity(struct gendisk *disk)
 int blk_drop_partitions(struct block_device *bdev)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (bdev->bd_part_count)
 		return -EBUSY;
@@ -657,7 +556,7 @@ static bool blk_add_partition(struct gendisk *disk, struct block_device *bdev,
 {
 	sector_t size = state->parts[p].size;
 	sector_t from = state->parts[p].from;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (!size)
 		return true;
@@ -697,7 +596,7 @@ static bool blk_add_partition(struct gendisk *disk, struct block_device *bdev,
 
 	if (IS_BUILTIN(CONFIG_BLK_DEV_MD) &&
 	    (state->parts[p].flags & ADDPART_FLAG_RAID))
-		md_autodetect_dev(part_to_dev(part)->devt);
+		md_autodetect_dev(part->bd_dev);
 
 	return true;
 }
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index dc333dbe523281..09c86ef3f0fd93 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -2802,7 +2802,7 @@ bool drbd_rs_c_min_rate_throttle(struct drbd_device *device)
 	if (c_min_rate == 0)
 		return false;
 
-	curr_events = (int)part_stat_read_accum(&disk->part0, sectors) -
+	curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
 			atomic_read(&device->rs_sect_ev);
 
 	if (atomic_read(&device->ap_actlog_cnt)
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index ba56f3f05312f0..4537559829876e 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -1678,7 +1678,7 @@ void drbd_rs_controller_reset(struct drbd_device *device)
 	atomic_set(&device->rs_sect_in, 0);
 	atomic_set(&device->rs_sect_ev, 0);
 	device->rs_in_flight = 0;
-	device->rs_last_events = (int)part_stat_read_accum(&disk->part0, sectors);
+	device->rs_last_events = part_stat_read_accum(disk->part0, sectors);
 
 	/* Updating the RCU protected object in place is necessary since
 	   this function gets called from atomic context.
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0b156f09e208df..e765765263495f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1687,7 +1687,7 @@ static void zram_reset_device(struct zram *zram)
 	zram->disksize = 0;
 
 	set_capacity_and_notify(zram->disk, 0);
-	part_stat_set_all(&zram->disk->part0, 0);
+	part_stat_set_all(zram->disk->part0, 0);
 
 	up_write(&zram->init_lock);
 	/* I/O operation under all of CPU are done so let's free */
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index afac8d07c1bd00..85b1f2a9b72d68 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -475,7 +475,7 @@ struct search {
 	unsigned int		read_dirty_data:1;
 	unsigned int		cache_missed:1;
 
-	struct hd_struct	*part;
+	struct block_device	*part;
 	unsigned long		start_time;
 
 	struct btree_op		op;
@@ -1073,7 +1073,7 @@ struct detached_dev_io_private {
 	unsigned long		start_time;
 	bio_end_io_t		*bi_end_io;
 	void			*bi_private;
-	struct hd_struct	*part;
+	struct block_device	*part;
 };
 
 static void detached_dev_end_io(struct bio *bio)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c789ffea2badde..ac46f6e41279cc 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1607,7 +1607,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
 				 * (by eliminating DM's splitting and just using bio_split)
 				 */
 				part_stat_lock();
-				__dm_part_stat_sub(&dm_disk(md)->part0,
+				__dm_part_stat_sub(dm_disk(md)->part0,
 						   sectors[op_stat_group(bio_op(bio))], ci.sector_count);
 				part_stat_unlock();
 
@@ -2242,12 +2242,12 @@ EXPORT_SYMBOL_GPL(dm_put);
 static bool md_in_flight_bios(struct mapped_device *md)
 {
 	int cpu;
-	struct hd_struct *part = &dm_disk(md)->part0;
+	struct block_device *bdev = dm_disk(md)->part0;
 	long sum = 0;
 
 	for_each_possible_cpu(cpu) {
-		sum += part_stat_local_read_cpu(part, in_flight[0], cpu);
-		sum += part_stat_local_read_cpu(part, in_flight[1], cpu);
+		sum += part_stat_local_read_cpu(bdev, in_flight[0], cpu);
+		sum += part_stat_local_read_cpu(bdev, in_flight[1], cpu);
 	}
 
 	return sum != 0;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7ce6047c856ea2..0065736f05b428 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -464,7 +464,7 @@ struct md_io {
 	bio_end_io_t *orig_bi_end_io;
 	void *orig_bi_private;
 	unsigned long start_time;
-	struct hd_struct *part;
+	struct block_device *part;
 };
 
 static void md_end_io(struct bio *bio)
@@ -8441,7 +8441,7 @@ static int is_mddev_idle(struct mddev *mddev, int init)
 	rcu_read_lock();
 	rdev_for_each_rcu(rdev, mddev) {
 		struct gendisk *disk = rdev->bdev->bd_disk;
-		curr_events = (int)part_stat_read_accum(&disk->part0, sectors) -
+		curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
 			      atomic_read(&disk->sync_io);
 		/* sync IO will cause sync_io to increase before the disk_stats
 		 * as sync_io is counted when a request starts, and
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index dca34489a1dc9e..8d90235e4fcc5a 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -89,12 +89,12 @@ static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req,
 	if (!ns->bdev)
 		goto out;
 
-	host_reads = part_stat_read(ns->bdev->bd_part, ios[READ]);
-	data_units_read = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
-		sectors[READ]), 1000);
-	host_writes = part_stat_read(ns->bdev->bd_part, ios[WRITE]);
-	data_units_written = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
-		sectors[WRITE]), 1000);
+	host_reads = part_stat_read(ns->bdev, ios[READ]);
+	data_units_read =
+		DIV_ROUND_UP(part_stat_read(ns->bdev, sectors[READ]), 1000);
+	host_writes = part_stat_read(ns->bdev, ios[WRITE]);
+	data_units_written =
+		DIV_ROUND_UP(part_stat_read(ns->bdev, sectors[WRITE]), 1000);
 
 	put_unaligned_le64(host_reads, &slog->host_reads[0]);
 	put_unaligned_le64(data_units_read, &slog->data_units_read[0]);
@@ -120,12 +120,12 @@ static u16 nvmet_get_smart_log_all(struct nvmet_req *req,
 		/* we don't have the right data for file backed ns */
 		if (!ns->bdev)
 			continue;
-		host_reads += part_stat_read(ns->bdev->bd_part, ios[READ]);
+		host_reads += part_stat_read(ns->bdev, ios[READ]);
 		data_units_read += DIV_ROUND_UP(
-			part_stat_read(ns->bdev->bd_part, sectors[READ]), 1000);
-		host_writes += part_stat_read(ns->bdev->bd_part, ios[WRITE]);
+			part_stat_read(ns->bdev, sectors[READ]), 1000);
+		host_writes += part_stat_read(ns->bdev, ios[WRITE]);
 		data_units_written += DIV_ROUND_UP(
-			part_stat_read(ns->bdev->bd_part, sectors[WRITE]), 1000);
+			part_stat_read(ns->bdev, sectors[WRITE]), 1000);
 	}
 
 	put_unaligned_le64(host_reads, &slog->host_reads[0]);
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index db24e04ee9781e..1825fa8d05a780 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -432,7 +432,7 @@ dasd_state_ready_to_online(struct dasd_device * device)
 {
 	struct gendisk *disk;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	device->state = DASD_STATE_ONLINE;
 	if (device->block) {
@@ -445,7 +445,7 @@ dasd_state_ready_to_online(struct dasd_device * device)
 		disk = device->block->bdev->bd_disk;
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter)))
-			kobject_uevent(&part_to_dev(part)->kobj, KOBJ_CHANGE);
+			kobject_uevent(bdev_kobj(part), KOBJ_CHANGE);
 		disk_part_iter_exit(&piter);
 	}
 	return 0;
@@ -459,7 +459,7 @@ static int dasd_state_online_to_ready(struct dasd_device *device)
 	int rc;
 	struct gendisk *disk;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (device->discipline->online_to_ready) {
 		rc = device->discipline->online_to_ready(device);
@@ -472,7 +472,7 @@ static int dasd_state_online_to_ready(struct dasd_device *device)
 		disk = device->block->bdev->bd_disk;
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter)))
-			kobject_uevent(&part_to_dev(part)->kobj, KOBJ_CHANGE);
+			kobject_uevent(bdev_kobj(part), KOBJ_CHANGE);
 		disk_part_iter_exit(&piter);
 	}
 	return 0;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 4b59ace9632f65..e1457bf76c6f34 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -34,6 +34,7 @@
 #include <linux/falloc.h>
 #include <linux/uaccess.h>
 #include <linux/suspend.h>
+#include <linux/part_stat.h>
 #include "internal.h"
 
 struct bdev_inode {
@@ -788,28 +789,18 @@ static struct inode *bdev_alloc_inode(struct super_block *sb)
 
 static void bdev_free_inode(struct inode *inode)
 {
-	kmem_cache_free(bdev_cachep, BDEV_I(inode));
-}
+	struct block_device *bdev = I_BDEV(inode);
 
-static void bdev_destroy_inode(struct inode *inode)
-{
-	if (inode->i_rdev)
-		put_device(disk_to_dev(I_BDEV(inode)->bd_disk));
+	kfree(bdev->bd_meta_info);
+	free_percpu(bdev->bd_stats);
+	kmem_cache_free(bdev_cachep, BDEV_I(inode));
 }
 
 static void init_once(void *foo)
 {
 	struct bdev_inode *ei = (struct bdev_inode *) foo;
-	struct block_device *bdev = &ei->bdev;
 
-	memset(bdev, 0, sizeof(*bdev));
-#ifdef CONFIG_SYSFS
-	INIT_LIST_HEAD(&bdev->bd_holder_disks);
-#endif
-	bdev->bd_bdi = &noop_backing_dev_info;
 	inode_init_once(&ei->vfs_inode);
-	/* Initialize mutex for freeze. */
-	mutex_init(&bdev->bd_fsfreeze_mutex);
 }
 
 static void bdev_evict_inode(struct inode *inode)
@@ -830,7 +821,6 @@ static const struct super_operations bdev_sops = {
 	.statfs = simple_statfs,
 	.alloc_inode = bdev_alloc_inode,
 	.free_inode = bdev_free_inode,
-	.destroy_inode = bdev_destroy_inode,
 	.drop_inode = generic_delete_inode,
 	.evict_inode = bdev_evict_inode,
 };
@@ -882,12 +872,21 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 		return NULL;
 
 	bdev = I_BDEV(inode);
+	memset(bdev, 0, sizeof(*bdev));
 	spin_lock_init(&bdev->bd_size_lock);
+	mutex_init(&bdev->bd_fsfreeze_mutex);
+	bdev->bd_bdi = &noop_backing_dev_info;
 	bdev->bd_disk = disk;
 	bdev->bd_partno = partno;
-	bdev->bd_super = NULL;
 	bdev->bd_inode = inode;
-	bdev->bd_part_count = 0;
+	bdev->bd_stats = alloc_percpu(struct disk_stats);
+	if (!bdev->bd_stats) {
+		iput(inode);
+		return NULL;
+	}
+#ifdef CONFIG_SYSFS
+	INIT_LIST_HEAD(&bdev->bd_holder_disks);
+#endif
 
 	inode->i_mode = S_IFBLK;
 	inode->i_rdev = 0;
@@ -900,7 +899,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 void bdev_add(struct block_device *bdev, dev_t dev)
 {
 	bdev->bd_dev = dev;
-	get_device(disk_to_dev(bdev->bd_disk));
 	bdev->bd_inode->i_rdev = dev;
 	bdev->bd_inode->i_ino = dev;
 	insert_inode_hash(bdev->bd_inode);
@@ -927,11 +925,6 @@ struct block_device *bdgrab(struct block_device *bdev)
 }
 EXPORT_SYMBOL(bdgrab);
 
-struct block_device *bdget_part(struct hd_struct *part)
-{
-	return bdget(part_devt(part));
-}
-
 long nr_blockdev_pages(void)
 {
 	struct inode *inode;
@@ -1208,7 +1201,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	WARN_ON_ONCE(!bdev->bd_holder);
 
 	/* FIXME: remove the following once add_disk() handles errors */
-	if (WARN_ON(!disk->slave_dir || !bdev->bd_part->holder_dir))
+	if (WARN_ON(!disk->slave_dir || !bdev->bd_holder_dir))
 		goto out_unlock;
 
 	holder = bd_find_holder_disk(bdev, disk);
@@ -1227,24 +1220,24 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	holder->disk = disk;
 	holder->refcnt = 1;
 
-	ret = add_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+	ret = add_symlink(disk->slave_dir, bdev_kobj(bdev));
 	if (ret)
 		goto out_free;
 
-	ret = add_symlink(bdev->bd_part->holder_dir, &disk_to_dev(disk)->kobj);
+	ret = add_symlink(bdev->bd_holder_dir, &disk_to_dev(disk)->kobj);
 	if (ret)
 		goto out_del;
 	/*
 	 * bdev could be deleted beneath us which would implicitly destroy
 	 * the holder directory.  Hold on to it.
 	 */
-	kobject_get(bdev->bd_part->holder_dir);
+	kobject_get(bdev->bd_holder_dir);
 
 	list_add(&holder->list, &bdev->bd_holder_disks);
 	goto out_unlock;
 
 out_del:
-	del_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+	del_symlink(disk->slave_dir, bdev_kobj(bdev));
 out_free:
 	kfree(holder);
 out_unlock:
@@ -1272,10 +1265,10 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	holder = bd_find_holder_disk(bdev, disk);
 
 	if (!WARN_ON_ONCE(holder == NULL) && !--holder->refcnt) {
-		del_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
-		del_symlink(bdev->bd_part->holder_dir,
+		del_symlink(disk->slave_dir, bdev_kobj(bdev));
+		del_symlink(bdev->bd_holder_dir,
 			    &disk_to_dev(disk)->kobj);
-		kobject_put(bdev->bd_part->holder_dir);
+		kobject_put(bdev->bd_holder_dir);
 		list_del_init(&holder->list);
 		kfree(holder);
 	}
@@ -1385,11 +1378,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		first_open = true;
 
 		if (!bdev->bd_partno) {
-			ret = -ENXIO;
-			bdev->bd_part = disk_get_part(disk, 0);
-			if (!bdev->bd_part)
-				goto out_clear;
-
 			ret = 0;
 			if (disk->fops->open) {
 				ret = disk->fops->open(bdev, mode);
@@ -1422,9 +1410,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			ret = __blkdev_get(bdev_whole(bdev), mode, NULL, 1);
 			if (ret)
 				goto out_clear;
-			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
-			    !bdev->bd_part || !bdev_nr_sectors(bdev)) {
+			    !bdev_nr_sectors(bdev)) {
 				ret = -ENXIO;
 				goto out_clear;
 			}
@@ -1480,8 +1467,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	return 0;
 
  out_clear:
-	disk_put_part(bdev->bd_part);
-	bdev->bd_part = NULL;
 	if (bdev_is_partition(bdev))
 		__blkdev_put(bdev_whole(bdev), mode, 1);
  out_unlock_bdev:
@@ -1686,11 +1671,8 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 			disk->fops->release(disk, mode);
 	}
 	if (!bdev->bd_openers) {
-		disk_put_part(bdev->bd_part);
-		bdev->bd_part = NULL;
 		if (bdev_is_partition(bdev))
 			victim = bdev_whole(bdev);
-
 		module_put(disk->fops->owner);
 	}
 	if (!for_part)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 6633b20224d509..c303a0ff0b1701 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4048,9 +4048,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	sbi->s_sb = sb;
 	sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
 	sbi->s_sb_block = sb_block;
-	if (sb->s_bdev->bd_part)
-		sbi->s_sectors_written_start =
-			part_stat_read(sb->s_bdev->bd_part, sectors[STAT_WRITE]);
+	sbi->s_sectors_written_start =
+		part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
 
 	/* Cleanup superblock name */
 	strreplace(sb->s_id, '/', '!');
@@ -5509,15 +5508,10 @@ static int ext4_commit_super(struct super_block *sb, int sync)
 	 */
 	if (!(sb->s_flags & SB_RDONLY))
 		ext4_update_tstamp(es, s_wtime);
-	if (sb->s_bdev->bd_part)
-		es->s_kbytes_written =
-			cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
-			    ((part_stat_read(sb->s_bdev->bd_part,
-					     sectors[STAT_WRITE]) -
-			      EXT4_SB(sb)->s_sectors_written_start) >> 1));
-	else
-		es->s_kbytes_written =
-			cpu_to_le64(EXT4_SB(sb)->s_kbytes_written);
+	es->s_kbytes_written =
+		cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
+		    ((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
+		      EXT4_SB(sb)->s_sectors_written_start) >> 1));
 	if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeclusters_counter))
 		ext4_free_blocks_count_set(es,
 			EXT4_C2B(EXT4_SB(sb), percpu_counter_sum_positive(
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index 4e27fe6ed3ae6a..075aa3a19ff5f1 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -62,11 +62,8 @@ static ssize_t session_write_kbytes_show(struct ext4_sb_info *sbi, char *buf)
 {
 	struct super_block *sb = sbi->s_buddy_cache->i_sb;
 
-	if (!sb->s_bdev->bd_part)
-		return snprintf(buf, PAGE_SIZE, "0\n");
 	return snprintf(buf, PAGE_SIZE, "%lu\n",
-			(part_stat_read(sb->s_bdev->bd_part,
-					sectors[STAT_WRITE]) -
+			(part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
 			 sbi->s_sectors_written_start) >> 1);
 }
 
@@ -74,12 +71,9 @@ static ssize_t lifetime_write_kbytes_show(struct ext4_sb_info *sbi, char *buf)
 {
 	struct super_block *sb = sbi->s_buddy_cache->i_sb;
 
-	if (!sb->s_bdev->bd_part)
-		return snprintf(buf, PAGE_SIZE, "0\n");
 	return snprintf(buf, PAGE_SIZE, "%llu\n",
 			(unsigned long long)(sbi->s_kbytes_written +
-			((part_stat_read(sb->s_bdev->bd_part,
-					 sectors[STAT_WRITE]) -
+			((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
 			  EXT4_SB(sb)->s_sectors_written_start) >> 1)));
 }
 
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 023462e80e58d5..54a1905af052cc 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1395,7 +1395,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	__u32 crc32 = 0;
 	int i;
 	int cp_payload_blks = __cp_payload(sbi);
-	struct super_block *sb = sbi->sb;
 	struct curseg_info *seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
 	u64 kbytes_written;
 	int err;
@@ -1489,9 +1488,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	start_blk += data_sum_blocks;
 
 	/* Record write statistics in the hot node summary */
-	kbytes_written = sbi->kbytes_written;
-	if (sb->s_bdev->bd_part)
-		kbytes_written += BD_PART_WRITTEN(sbi);
+	kbytes_written = sbi->kbytes_written + BD_PART_WRITTEN(sbi);
 
 	seg_i->journal->info.kbytes_written = cpu_to_le64(kbytes_written);
 
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index cb700d79729680..5f9522d4c727fb 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1675,7 +1675,7 @@ static inline bool f2fs_is_multi_device(struct f2fs_sb_info *sbi)
  * and the return value is in kbytes. s is of struct f2fs_sb_info.
  */
 #define BD_PART_WRITTEN(s)						 \
-(((u64)part_stat_read((s)->sb->s_bdev->bd_part, sectors[STAT_WRITE]) -   \
+(((u64)part_stat_read((s)->sb->s_bdev, sectors[STAT_WRITE]) -   \
 		(s)->sectors_written_start) >> 1)
 
 static inline void f2fs_update_time(struct f2fs_sb_info *sbi, int type)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index d4e7fab352bacb..fae92285f561b4 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3700,10 +3700,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	}
 
 	/* For write statistics */
-	if (sb->s_bdev->bd_part)
-		sbi->sectors_written_start =
-			(u64)part_stat_read(sb->s_bdev->bd_part,
-					    sectors[STAT_WRITE]);
+	sbi->sectors_written_start =
+		part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
 
 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index ec77ccfea923dc..24e876e849c512 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -90,11 +90,6 @@ static ssize_t free_segments_show(struct f2fs_attr *a,
 static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
-	struct super_block *sb = sbi->sb;
-
-	if (!sb->s_bdev->bd_part)
-		return sprintf(buf, "0\n");
-
 	return sprintf(buf, "%llu\n",
 			(unsigned long long)(sbi->kbytes_written +
 			BD_PART_WRITTEN(sbi)));
@@ -103,12 +98,8 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 static ssize_t features_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
-	struct super_block *sb = sbi->sb;
 	int len = 0;
 
-	if (!sb->s_bdev->bd_part)
-		return sprintf(buf, "0\n");
-
 	if (f2fs_sb_has_encrypt(sbi))
 		len += scnprintf(buf, PAGE_SIZE - len, "%s",
 						"encryption");
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 5a5ccacb804cdb..c6d00732b1af52 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -8,6 +8,7 @@
 
 #include <linux/types.h>
 #include <linux/bvec.h>
+#include <linux/device.h>
 #include <linux/ktime.h>
 
 struct bio_set;
@@ -20,7 +21,13 @@ typedef void (bio_end_io_t) (struct bio *);
 struct bio_crypt_ctx;
 
 struct block_device {
+	sector_t		bd_start_sect;
+	unsigned long		bd_stamp;
+	struct disk_stats __percpu *bd_stats;
+	u8			bd_partno;
+	int			bd_policy;
 	dev_t			bd_dev;
+	struct device		bd_device;
 	int			bd_openers;
 	struct inode *		bd_inode;	/* will die */
 	struct super_block *	bd_super;
@@ -31,8 +38,7 @@ struct block_device {
 #ifdef CONFIG_SYSFS
 	struct list_head	bd_holder_disks;
 #endif
-	u8			bd_partno;
-	struct hd_struct *	bd_part;
+	struct kobject		*bd_holder_dir;
 	/* number of times partitions within this device have been opened. */
 	unsigned		bd_part_count;
 
@@ -44,13 +50,22 @@ struct block_device {
 	int			bd_fsfreeze_count;
 	/* Mutex for freeze */
 	struct mutex		bd_fsfreeze_mutex;
+
+	struct partition_meta_info *bd_meta_info;
+#ifdef CONFIG_FAIL_MAKE_REQUEST
+	int			bd_make_it_fail;
+#endif
 } __randomize_layout;
 
 #define bdev_whole(_bdev) \
-	((_bdev)->bd_disk->part0.bdev)
+	((_bdev)->bd_disk->part0)
+
+#define dev_to_bdev(device) \
+	container_of((device), struct block_device, bd_device)
+#define part_to_dev(part)	(&((part)->bd_device))
 
 #define bdev_kobj(_bdev) \
-	(&part_to_dev((_bdev)->bd_part)->kobj)
+	(&part_to_dev((_bdev))->kobj)
 
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 696b2f9c5529d8..ed40144ab80339 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -191,7 +191,7 @@ struct request {
 	};
 
 	struct gendisk *rq_disk;
-	struct hd_struct *part;
+	struct block_device *bdev;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 	/* Time that the first bio started allocating this request. */
 	u64 alloc_time_ns;
@@ -1488,7 +1488,7 @@ static inline int bdev_alignment_offset(struct block_device *bdev)
 		return -1;
 	if (bdev_is_partition(bdev))
 		return queue_limit_alignment_offset(&q->limits,
-				bdev->bd_part->start_sect);
+				bdev->bd_start_sect);
 	return q->limits.alignment_offset;
 }
 
@@ -1529,7 +1529,7 @@ static inline int bdev_discard_alignment(struct block_device *bdev)
 
 	if (bdev_is_partition(bdev))
 		return queue_limit_discard_alignment(&q->limits,
-				bdev->bd_part->start_sect);
+				bdev->bd_start_sect);
 	return q->limits.discard_alignment;
 }
 
@@ -1943,9 +1943,9 @@ unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		unsigned long start_time);
 
-unsigned long part_start_io_acct(struct gendisk *disk, struct hd_struct **part,
-				 struct bio *bio);
-void part_end_io_acct(struct hd_struct *part, struct bio *bio,
+unsigned long part_start_io_acct(struct gendisk *disk,
+		struct block_device **part, struct bio *bio);
+void part_end_io_acct(struct block_device *part, struct bio *bio,
 		      unsigned long start_time);
 
 /**
@@ -1996,7 +1996,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
 void bdev_add(struct block_device *bdev, dev_t dev);
 struct block_device *bdget(dev_t dev);
 struct block_device *I_BDEV(struct inode *inode);
-struct block_device *bdget_part(struct hd_struct *part);
 struct block_device *bdgrab(struct block_device *bdev);
 void bdput(struct block_device *);
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index bc0469cc8fb0dc..98e1ce7d56a256 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -19,11 +19,6 @@
 #include <linux/blk_types.h>
 #include <asm/local.h>
 
-#define dev_to_disk(device)	container_of((device), struct gendisk, part0.__dev)
-#define dev_to_part(device)	container_of((device), struct hd_struct, __dev)
-#define disk_to_dev(disk)	(&(disk)->part0.__dev)
-#define part_to_dev(part)	(&((part)->__dev))
-
 extern const struct device_type disk_type;
 extern struct device_type part_type;
 extern struct class block_class;
@@ -50,23 +45,6 @@ struct partition_meta_info {
 	u8 volname[PARTITION_META_INFO_VOLNAMELTH];
 };
 
-struct hd_struct {
-	sector_t start_sect;
-	unsigned long stamp;
-	struct disk_stats __percpu *dkstats;
-	struct percpu_ref ref;
-
-	struct block_device *bdev;
-	struct device __dev;
-	struct kobject *holder_dir;
-	int policy, partno;
-	struct partition_meta_info *info;
-#ifdef CONFIG_FAIL_MAKE_REQUEST
-	int make_it_fail;
-#endif
-	struct rcu_work rcu_work;
-};
-
 /**
  * DOC: genhd capability flags
  *
@@ -141,8 +119,8 @@ enum {
 struct disk_part_tbl {
 	struct rcu_head rcu_head;
 	int len;
-	struct hd_struct __rcu *last_lookup;
-	struct hd_struct __rcu *part[];
+	struct block_device __rcu *last_lookup;
+	struct block_device __rcu *part[];
 };
 
 struct disk_events;
@@ -176,7 +154,7 @@ struct gendisk {
 	 * helpers.
 	 */
 	struct disk_part_tbl __rcu *part_tbl;
-	struct hd_struct part0;
+	struct block_device *part0;
 
 	const struct block_device_operations *fops;
 	struct request_queue *queue;
@@ -203,23 +181,17 @@ struct gendisk {
 	struct lockdep_map lockdep_map;
 };
 
+#define dev_to_disk(device) \
+	(dev_to_bdev(device)->bd_disk)
+#define disk_to_dev(disk) \
+	(part_to_dev((disk)->part0))
+
 #if IS_REACHABLE(CONFIG_CDROM)
 #define disk_to_cdi(disk)	((disk)->cdi)
 #else
 #define disk_to_cdi(disk)	NULL
 #endif
 
-static inline struct gendisk *part_to_disk(struct hd_struct *part)
-{
-	if (likely(part)) {
-		if (part->partno)
-			return dev_to_disk(part_to_dev(part)->parent);
-		else
-			return dev_to_disk(part_to_dev(part));
-	}
-	return NULL;
-}
-
 static inline int disk_max_parts(struct gendisk *disk)
 {
 	if (disk->flags & GENHD_FL_EXT_DEVT)
@@ -238,19 +210,6 @@ static inline dev_t disk_devt(struct gendisk *disk)
 	return MKDEV(disk->major, disk->first_minor);
 }
 
-static inline dev_t part_devt(struct hd_struct *part)
-{
-	return part_to_dev(part)->devt;
-}
-
-extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
-
-static inline void disk_put_part(struct hd_struct *part)
-{
-	if (likely(part))
-		put_device(part_to_dev(part));
-}
-
 /*
  * Smarter partition iterator without context limits.
  */
@@ -261,14 +220,14 @@ static inline void disk_put_part(struct hd_struct *part)
 
 struct disk_part_iter {
 	struct gendisk		*disk;
-	struct hd_struct	*part;
+	struct block_device	*part;
 	int			idx;
 	unsigned int		flags;
 };
 
 extern void disk_part_iter_init(struct disk_part_iter *piter,
 				 struct gendisk *disk, unsigned int flags);
-extern struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter);
+struct block_device *disk_part_iter_next(struct disk_part_iter *piter);
 extern void disk_part_iter_exit(struct disk_part_iter *piter);
 extern bool disk_has_partitions(struct gendisk *disk);
 
@@ -292,7 +251,7 @@ extern void set_disk_ro(struct gendisk *disk, int flag);
 
 static inline int get_disk_ro(struct gendisk *disk)
 {
-	return disk->part0.policy;
+	return disk->part0->bd_policy;
 }
 
 extern void disk_block_events(struct gendisk *disk);
@@ -306,7 +265,7 @@ extern void rand_initialize_disk(struct gendisk *disk);
 
 static inline sector_t get_start_sect(struct block_device *bdev)
 {
-	return bdev->bd_part->start_sect;
+	return bdev->bd_start_sect;
 }
 	
 static inline sector_t bdev_nr_sectors(struct block_device *bdev)
@@ -316,7 +275,7 @@ static inline sector_t bdev_nr_sectors(struct block_device *bdev)
 	
 static inline sector_t get_capacity(struct gendisk *disk)
 {
-	return bdev_nr_sectors(disk->part0.bdev);
+	return bdev_nr_sectors(disk->part0);
 }
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate);
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index 24125778ef3ec7..3b3621b4983a58 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -25,26 +25,26 @@ struct disk_stats {
 #define part_stat_unlock()	preempt_enable()
 
 #define part_stat_get_cpu(part, field, cpu)				\
-	(per_cpu_ptr((part)->dkstats, (cpu))->field)
+	(per_cpu_ptr((part)->bd_stats, (cpu))->field)
 
 #define part_stat_get(part, field)					\
 	part_stat_get_cpu(part, field, smp_processor_id())
 
 #define part_stat_read(part, field)					\
 ({									\
-	typeof((part)->dkstats->field) res = 0;				\
+	typeof((part)->bd_stats->field) res = 0;				\
 	unsigned int _cpu;						\
 	for_each_possible_cpu(_cpu)					\
-		res += per_cpu_ptr((part)->dkstats, _cpu)->field;	\
+		res += per_cpu_ptr((part)->bd_stats, _cpu)->field;	\
 	res;								\
 })
 
-static inline void part_stat_set_all(struct hd_struct *part, int value)
+static inline void part_stat_set_all(struct block_device *bdev, int value)
 {
 	int i;
 
 	for_each_possible_cpu(i)
-		memset(per_cpu_ptr(part->dkstats, i), value,
+		memset(per_cpu_ptr(bdev->bd_stats, i), value,
 				sizeof(struct disk_stats));
 }
 
@@ -54,13 +54,12 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 	 part_stat_read(part, field[STAT_DISCARD]))
 
 #define __part_stat_add(part, field, addnd)				\
-	__this_cpu_add((part)->dkstats->field, addnd)
+	__this_cpu_add((part)->bd_stats->field, addnd)
 
 #define part_stat_add(part, field, addnd)	do {			\
 	__part_stat_add((part), field, addnd);				\
-	if ((part)->partno)						\
-		__part_stat_add(&part_to_disk((part))->part0,		\
-				field, addnd);				\
+	if ((part)->bd_partno)						\
+		__part_stat_add((part)->bd_disk->part0, field, addnd);	\
 } while (0)
 
 #define part_stat_dec(gendiskp, field)					\
diff --git a/init/do_mounts.c b/init/do_mounts.c
index 5879edf083b318..a78e44ee6adb8d 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -76,11 +76,11 @@ struct uuidcmp {
  */
 static int match_dev_by_uuid(struct device *dev, const void *data)
 {
+	struct block_device *bdev = dev_to_bdev(dev);
 	const struct uuidcmp *cmp = data;
-	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info ||
-	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
+	if (!bdev->bd_meta_info ||
+	    strncasecmp(cmp->uuid, bdev->bd_meta_info->uuid, cmp->len))
 		return 0;
 	return 1;
 }
@@ -133,13 +133,13 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 		 * Attempt to find the requested partition by adding an offset
 		 * to the partition number found by UUID.
 		 */
-		struct hd_struct *part;
+		struct block_device *part;
 
-		part = disk_get_part(dev_to_disk(dev),
-				     dev_to_part(dev)->partno + offset);
+		part = bdget_disk(dev_to_disk(dev),
+				  dev_to_bdev(dev)->bd_partno + offset);
 		if (part) {
-			devt = part_devt(part);
-			put_device(part_to_dev(part));
+			devt = part->bd_dev;
+			bdput(part);
 		}
 	} else {
 		devt = dev->devt;
@@ -166,10 +166,10 @@ static dev_t devt_from_partuuid(const char *uuid_str)
  */
 static int match_dev_by_label(struct device *dev, const void *data)
 {
+	struct block_device *bdev = dev_to_bdev(dev);
 	const char *label = data;
-	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info || strcmp(label, part->info->volname))
+	if (!bdev->bd_meta_info || strcmp(label, bdev->bd_meta_info->volname))
 		return 0;
 	return 1;
 }
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 7076d588a50d69..a482a37848bff7 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -458,14 +458,9 @@ static struct rchan_callbacks blk_relay_callbacks = {
 static void blk_trace_setup_lba(struct blk_trace *bt,
 				struct block_device *bdev)
 {
-	struct hd_struct *part = NULL;
-
-	if (bdev)
-		part = bdev->bd_part;
-
-	if (part) {
-		bt->start_lba = part->start_sect;
-		bt->end_lba = part->start_sect + bdev_nr_sectors(bdev);
+	if (bdev) {
+		bt->start_lba = bdev->bd_start_sect;
+		bt->end_lba = bdev->bd_start_sect + bdev_nr_sectors(bdev);
 	} else {
 		bt->start_lba = 0;
 		bt->end_lba = -1ULL;
@@ -1815,30 +1810,15 @@ static ssize_t blk_trace_mask2str(char *buf, int mask)
 	return p - buf;
 }
 
-static struct request_queue *blk_trace_get_queue(struct block_device *bdev)
-{
-	if (bdev->bd_disk == NULL)
-		return NULL;
-
-	return bdev_get_queue(bdev);
-}
-
 static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 					 struct device_attribute *attr,
 					 char *buf)
 {
-	struct block_device *bdev = bdget_part(dev_to_part(dev));
-	struct request_queue *q;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev_get_queue(bdev);
 	struct blk_trace *bt;
 	ssize_t ret = -ENXIO;
 
-	if (bdev == NULL)
-		goto out;
-
-	q = blk_trace_get_queue(bdev);
-	if (q == NULL)
-		goto out_bdput;
-
 	mutex_lock(&q->debugfs_mutex);
 
 	bt = rcu_dereference_protected(q->blk_trace,
@@ -1861,9 +1841,6 @@ static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 
 out_unlock_bdev:
 	mutex_unlock(&q->debugfs_mutex);
-out_bdput:
-	bdput(bdev);
-out:
 	return ret;
 }
 
@@ -1871,8 +1848,8 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 					  struct device_attribute *attr,
 					  const char *buf, size_t count)
 {
-	struct block_device *bdev;
-	struct request_queue *q;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev_get_queue(bdev);
 	struct blk_trace *bt;
 	u64 value;
 	ssize_t ret = -EINVAL;
@@ -1888,17 +1865,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 				goto out;
 			value = ret;
 		}
-	} else if (kstrtoull(buf, 0, &value))
-		goto out;
-
-	ret = -ENXIO;
-	bdev = bdget_part(dev_to_part(dev));
-	if (bdev == NULL)
-		goto out;
-
-	q = blk_trace_get_queue(bdev);
-	if (q == NULL)
-		goto out_bdput;
+	} else {
+		if (kstrtoull(buf, 0, &value))
+			goto out;
+	}
 
 	mutex_lock(&q->debugfs_mutex);
 
@@ -1936,8 +1906,6 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 
 out_unlock_bdev:
 	mutex_unlock(&q->debugfs_mutex);
-out_bdput:
-	bdput(bdev);
 out:
 	return ret ? ret : count;
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28158.57066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9O-0003bd-J7; Mon, 16 Nov 2020 15:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28158.57066; Mon, 16 Nov 2020 15:10:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9N-0003aE-D2; Mon, 16 Nov 2020 15:10:37 +0000
Received: by outflank-mailman (input) for mailman id 28158;
 Mon, 16 Nov 2020 15:10:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyn-0006ni-1K
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45721808-a2e9-404e-b99f-3fe6bfe8b291;
 Mon, 16 Nov 2020 14:58:44 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxa-0003md-8r; Mon, 16 Nov 2020 14:58:26 +0000
X-Inumbo-ID: 45721808-a2e9-404e-b99f-3fe6bfe8b291
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=79ABHzAqAgUDi+r50gNTM9KzgxZfWtsd0Ks61Se8Jjk=; b=iXlgdH+JkeL6roCvy9N8OmOptV
	1Ll+Te6ItuycUsm1mpg5y8MmMvSDVgOMN0DW4BJjUzEL2YXMUw9FI/gCg+GglIDP+2ssWuLid2fb0
	cJZnJp4fVhOgNW619uL3/Kd6hYd61L2NdpCEsJu2r2sYtsuKn5OWNCme31Wn3/FHGbzcF/v5kYRm2
	JnXQsbwRBlyoUcHtnndvHGCnGiJjghxupsGPmu0u5FKR/+UjbjsAeA9OHsZoDrRaOzkGH06M+kWrS
	F/Df603StVMFLWH3uWTjYYJHcHqMIvbijKLsxzszv3RzSONMJItnrjbuYjjQ6SdDQT1ZrHiZj6i2L
	17QgIiog==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 12/78] dm: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:03 +0100
Message-Id: <20201116145809.410558-13-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and the block
device.  This also emits the resize uevent notification for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/md/dm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c18fc25485186d..62ad44925e73ec 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1971,8 +1971,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
 	if (size != dm_get_size(md))
 		memset(&md->geometry, 0, sizeof(md->geometry));
 
-	set_capacity(md->disk, size);
-	bd_set_nr_sectors(md->bdev, size);
+	set_capacity_and_notify(md->disk, size);
 
 	dm_table_event_callback(t, event_callback, md);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28166.57074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9Q-0003iB-JL; Mon, 16 Nov 2020 15:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28166.57074; Mon, 16 Nov 2020 15:10:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9P-0003fR-6J; Mon, 16 Nov 2020 15:10:39 +0000
Received: by outflank-mailman (input) for mailman id 28166;
 Mon, 16 Nov 2020 15:10:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1N-0006ni-6X
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f9b8218-1088-4ae9-9c82-2f16bee03de1;
 Mon, 16 Nov 2020 14:59:24 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyL-0003zc-NO; Mon, 16 Nov 2020 14:59:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1N-0006ni-6X
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:21 +0000
X-Inumbo-ID: 0f9b8218-1088-4ae9-9c82-2f16bee03de1
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0f9b8218-1088-4ae9-9c82-2f16bee03de1;
	Mon, 16 Nov 2020 14:59:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=K+kkAiIfWzB4syRFt+33j5OXLPH83fZlF9ACMDZkYhg=; b=k5dNhGax8VxlBfuaj9EQBdAXgt
	4072QYhm07kKi1w+W9nEyoMSW3ELii8kjNeM1F9XviJBMC0uEQv6MPhU9nRzHBDI1yZZmWAy9Q0N2
	fTvPwi+VC8nhfIjC7jFXt9leIcNNiMb0XmPx9eg/JoTwye7h/y7KHjVMVYweaBNLQMktbr4A8iWEt
	dOTjTgdEq+CLoKYrid8bVvBdmQLTGWBIgPYXUmyZ/n6rnRuKWj+UuQ5XIshtknCUOMSkMwKUNjD5w
	dJBoojTYMzEvaH9mPATlftM3BLIRcn7GZkGadLtxoUQdZeeK4Y0pHIv111cNROAtxxvV+mZxdO2Kj
	KOai2cDA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyL-0003zc-NO; Mon, 16 Nov 2020 14:59:14 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 45/78] md: use __register_blkdev to allocate devices on demand
Date: Mon, 16 Nov 2020 15:57:36 +0100
Message-Id: <20201116145809.410558-46-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the simpler mechanism attached to major_name to allocate an md
device when a currently unregistered minor is accessed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/md/md.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index fa31b71a72a35d..b2edf5e0f965b5 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5764,11 +5764,12 @@ static int md_alloc(dev_t dev, char *name)
 	return error;
 }
 
-static struct kobject *md_probe(dev_t dev, int *part, void *data)
+static void md_probe(dev_t dev)
 {
+	if (MAJOR(dev) == MD_MAJOR && MINOR(dev) >= 512)
+		return;
 	if (create_on_open)
 		md_alloc(dev, NULL);
-	return NULL;
 }
 
 static int add_named_array(const char *val, const struct kernel_param *kp)
@@ -6532,7 +6533,7 @@ static void autorun_devices(int part)
 			break;
 		}
 
-		md_probe(dev, NULL, NULL);
+		md_probe(dev);
 		mddev = mddev_find(dev);
 		if (!mddev || !mddev->gendisk) {
 			if (mddev)
@@ -9563,18 +9564,15 @@ static int __init md_init(void)
 	if (!md_rdev_misc_wq)
 		goto err_rdev_misc_wq;
 
-	if ((ret = register_blkdev(MD_MAJOR, "md")) < 0)
+	ret = __register_blkdev(MD_MAJOR, "md", md_probe);
+	if (ret < 0)
 		goto err_md;
 
-	if ((ret = register_blkdev(0, "mdp")) < 0)
+	ret = __register_blkdev(0, "mdp", md_probe);
+	if (ret < 0)
 		goto err_mdp;
 	mdp_major = ret;
 
-	blk_register_region(MKDEV(MD_MAJOR, 0), 512, THIS_MODULE,
-			    md_probe, NULL, NULL);
-	blk_register_region(MKDEV(mdp_major, 0), 1UL<<MINORBITS, THIS_MODULE,
-			    md_probe, NULL, NULL);
-
 	register_reboot_notifier(&md_notifier);
 	raid_table_header = register_sysctl_table(raid_root_table);
 
@@ -9841,9 +9839,6 @@ static __exit void md_exit(void)
 	struct list_head *tmp;
 	int delay = 1;
 
-	blk_unregister_region(MKDEV(MD_MAJOR,0), 512);
-	blk_unregister_region(MKDEV(mdp_major,0), 1U << MINORBITS);
-
 	unregister_blkdev(MD_MAJOR,"md");
 	unregister_blkdev(mdp_major, "mdp");
 	unregister_reboot_notifier(&md_notifier);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28168.57086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9T-0003nU-0g; Mon, 16 Nov 2020 15:10:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28168.57086; Mon, 16 Nov 2020 15:10:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9R-0003lO-01; Mon, 16 Nov 2020 15:10:41 +0000
Received: by outflank-mailman (input) for mailman id 28168;
 Mon, 16 Nov 2020 15:10:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzC-0006ni-2e
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 672c11eb-1d21-4cda-9057-ded2a5af7792;
 Mon, 16 Nov 2020 14:58:49 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxk-0003q9-JB; Mon, 16 Nov 2020 14:58:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzC-0006ni-2e
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:06 +0000
X-Inumbo-ID: 672c11eb-1d21-4cda-9057-ded2a5af7792
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 672c11eb-1d21-4cda-9057-ded2a5af7792;
	Mon, 16 Nov 2020 14:58:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=WS0vecBNMTsbH0kfrSJ0IAAqwhjZtzJU/FJS3+haIgk=; b=XiDeFXjcwvV0hGjuIW+QK1Xv0w
	XGobZbmHzLRUcD5GNVIGT2Bc1yabJ1c24ju7Tm9tZT9m70rjwTM3e4kfKj9+Yzf4D6BcIOGRoHWVO
	58U++lincnMxmBRddirNSFJWoomcZxd7LHPHDBpo4SBUWT5/0/0TZyxkVi+TSxX4QmtNoIxH0xlIE
	UnNdDnUmvwZ3FwZ74Y+S8XchvEu7Gpq63QxP5jhia9g9ClQolScCGQVw5Qqu78VVnkayK6YldeLHu
	4GJeOzy9tqhbDwjb+XUpX0P60CH1luDQpSEelnG6JlIsAgpWNy0IYJrCEspY0TQPywnY30K5yNsCX
	w5jtnWrA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxk-0003q9-JB; Mon, 16 Nov 2020 14:58:36 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 19/78] dm-raid: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:10 +0100
Message-Id: <20201116145809.410558-20-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and the
block device.  This also provides the uevent notifications for the
resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/md/dm-raid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 9c1f7c4de65b35..294f34d2d61bae 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -700,8 +700,7 @@ static void rs_set_capacity(struct raid_set *rs)
 {
 	struct gendisk *gendisk = dm_disk(dm_table_get_md(rs->ti->table));
 
-	set_capacity(gendisk, rs->md.array_sectors);
-	revalidate_disk_size(gendisk, true);
+	set_capacity_and_notify(gendisk, rs->md.array_sectors);
 }
 
 /*
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28174.57098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9V-0003uJ-1M; Mon, 16 Nov 2020 15:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28174.57098; Mon, 16 Nov 2020 15:10:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9T-0003s7-9o; Mon, 16 Nov 2020 15:10:43 +0000
Received: by outflank-mailman (input) for mailman id 28174;
 Mon, 16 Nov 2020 15:10:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1h-0006ni-7K
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 675d4fa7-28c5-44c2-aed4-3e299c288f29;
 Mon, 16 Nov 2020 14:59:28 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyO-00040T-NC; Mon, 16 Nov 2020 14:59:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1h-0006ni-7K
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:41 +0000
X-Inumbo-ID: 675d4fa7-28c5-44c2-aed4-3e299c288f29
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 675d4fa7-28c5-44c2-aed4-3e299c288f29;
	Mon, 16 Nov 2020 14:59:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9LF1BbKgk/puzDyzFvgCk2Gxa/rjsKotbPw1h41DYz4=; b=PsPGgpEssDcY3DoxeX0rl9jT1m
	ePijooB6rcCALhs+vkzdSd9GMO5GLYkV7e0zTwcGHHE+uDGa5Okrt1vJiKHjQaGYIw3qglSyJpxoh
	1wdWUrL2wAEnRp3v/DWWe5m4sPr7hmOcugy+P7+kpUAS6+eyE30wsF47g+aBTkxhXH0G90Y/RTKWH
	JyVOt8LXW9IB4zqXKLjxXeAgbAEFQJIdx4AffZ5Cj0poOR9of2aHcPnu3FRQNsxZGlz2XHxmsG3LQ
	gpB+h0o36YVGYD1GptGTKeQFcOJBmwXuru3ClOo0F9gzGLCz6/prWb5mze7Ncke5V6AMAazBQWrVs
	W/AKU9YA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyO-00040T-NC; Mon, 16 Nov 2020 14:59:17 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 47/78] floppy: use a separate gendisk for each media format
Date: Mon, 16 Nov 2020 15:57:38 +0100
Message-Id: <20201116145809.410558-48-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The floppy driver usually autodetects the media when used with the
normal /dev/fd? devices, which are also the only nodes created by udev.
But it also supports various aliases that force a given media format.
That is currently supported using the blk_register_region framework,
which finds the floppy gendisk even for a 'mismatched' dev_t.  The
problem with this (besides the code complexity) is that it creates
multiple struct block_device instances for the whole device of a
single gendisk, which can lead to interesting issues in code not
aware of that fact.

To fix this, just create a separate gendisk for each of the aliases
when it is first accessed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/floppy.c | 154 ++++++++++++++++++++++++++---------------
 1 file changed, 97 insertions(+), 57 deletions(-)

diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 7df79ae6b0a1e1..dfe1dfc901ccc2 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -402,7 +402,6 @@ static struct floppy_drive_params drive_params[N_DRIVE];
 static struct floppy_drive_struct drive_state[N_DRIVE];
 static struct floppy_write_errors write_errors[N_DRIVE];
 static struct timer_list motor_off_timer[N_DRIVE];
-static struct gendisk *disks[N_DRIVE];
 static struct blk_mq_tag_set tag_sets[N_DRIVE];
 static struct block_device *opened_bdev[N_DRIVE];
 static DEFINE_MUTEX(open_lock);
@@ -477,6 +476,8 @@ static struct floppy_struct floppy_type[32] = {
 	{ 3200,20,2,80,0,0x1C,0x00,0xCF,0x2C,"H1600" }, /* 31 1.6MB 3.5"    */
 };
 
+static struct gendisk *disks[N_DRIVE][ARRAY_SIZE(floppy_type)];
+
 #define SECTSIZE (_FD_SECTSIZE(*floppy))
 
 /* Auto-detection: Disk type used until the next media change occurs. */
@@ -4111,7 +4112,7 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
 
 	new_dev = MINOR(bdev->bd_dev);
 	drive_state[drive].fd_device = new_dev;
-	set_capacity(disks[drive], floppy_sizes[new_dev]);
+	set_capacity(disks[drive][ITYPE(new_dev)], floppy_sizes[new_dev]);
 	if (old_dev != -1 && old_dev != new_dev) {
 		if (buffer_drive == drive)
 			buffer_track = -1;
@@ -4579,15 +4580,58 @@ static bool floppy_available(int drive)
 	return true;
 }
 
-static struct kobject *floppy_find(dev_t dev, int *part, void *data)
+static int floppy_alloc_disk(unsigned int drive, unsigned int type)
 {
-	int drive = (*part & 3) | ((*part & 0x80) >> 5);
-	if (drive >= N_DRIVE || !floppy_available(drive))
-		return NULL;
-	if (((*part >> 2) & 0x1f) >= ARRAY_SIZE(floppy_type))
-		return NULL;
-	*part = 0;
-	return get_disk_and_module(disks[drive]);
+	struct gendisk *disk;
+	int err;
+
+	disk = alloc_disk(1);
+	if (!disk)
+		return -ENOMEM;
+
+	disk->queue = blk_mq_init_queue(&tag_sets[drive]);
+	if (IS_ERR(disk->queue)) {
+		err = PTR_ERR(disk->queue);
+		disk->queue = NULL;
+		put_disk(disk);
+		return err;
+	}
+
+	blk_queue_bounce_limit(disk->queue, BLK_BOUNCE_HIGH);
+	blk_queue_max_hw_sectors(disk->queue, 64);
+	disk->major = FLOPPY_MAJOR;
+	disk->first_minor = TOMINOR(drive) | (type << 2);
+	disk->fops = &floppy_fops;
+	disk->events = DISK_EVENT_MEDIA_CHANGE;
+	if (type)
+		sprintf(disk->disk_name, "fd%d_type%d", drive, type);
+	else
+		sprintf(disk->disk_name, "fd%d", drive);
+	/* to be cleaned up... */
+	disk->private_data = (void *)(long)drive;
+	disk->flags |= GENHD_FL_REMOVABLE;
+
+	disks[drive][type] = disk;
+	return 0;
+}
+
+static DEFINE_MUTEX(floppy_probe_lock);
+
+static void floppy_probe(dev_t dev)
+{
+	unsigned int drive = (MINOR(dev) & 3) | ((MINOR(dev) & 0x80) >> 5);
+	unsigned int type = (MINOR(dev) >> 2) & 0x1f;
+
+	if (drive >= N_DRIVE || !floppy_available(drive) ||
+	    type >= ARRAY_SIZE(floppy_type))
+		return;
+
+	mutex_lock(&floppy_probe_lock);
+	if (!disks[drive][type]) {
+		if (floppy_alloc_disk(drive, type) == 0)
+			add_disk(disks[drive][type]);
+	}
+	mutex_unlock(&floppy_probe_lock);
 }
 
 static int __init do_floppy_init(void)
@@ -4609,33 +4653,25 @@ static int __init do_floppy_init(void)
 		return -ENOMEM;
 
 	for (drive = 0; drive < N_DRIVE; drive++) {
-		disks[drive] = alloc_disk(1);
-		if (!disks[drive]) {
-			err = -ENOMEM;
+		memset(&tag_sets[drive], 0, sizeof(tag_sets[drive]));
+		tag_sets[drive].ops = &floppy_mq_ops;
+		tag_sets[drive].nr_hw_queues = 1;
+		tag_sets[drive].nr_maps = 1;
+		tag_sets[drive].queue_depth = 2;
+		tag_sets[drive].numa_node = NUMA_NO_NODE;
+		tag_sets[drive].flags = BLK_MQ_F_SHOULD_MERGE;
+		err = blk_mq_alloc_tag_set(&tag_sets[drive]);
+		if (err)
 			goto out_put_disk;
-		}
 
-		disks[drive]->queue = blk_mq_init_sq_queue(&tag_sets[drive],
-							   &floppy_mq_ops, 2,
-							   BLK_MQ_F_SHOULD_MERGE);
-		if (IS_ERR(disks[drive]->queue)) {
-			err = PTR_ERR(disks[drive]->queue);
-			disks[drive]->queue = NULL;
+		err = floppy_alloc_disk(drive, 0);
+		if (err)
 			goto out_put_disk;
-		}
-
-		blk_queue_bounce_limit(disks[drive]->queue, BLK_BOUNCE_HIGH);
-		blk_queue_max_hw_sectors(disks[drive]->queue, 64);
-		disks[drive]->major = FLOPPY_MAJOR;
-		disks[drive]->first_minor = TOMINOR(drive);
-		disks[drive]->fops = &floppy_fops;
-		disks[drive]->events = DISK_EVENT_MEDIA_CHANGE;
-		sprintf(disks[drive]->disk_name, "fd%d", drive);
 
 		timer_setup(&motor_off_timer[drive], motor_off_callback, 0);
 	}
 
-	err = register_blkdev(FLOPPY_MAJOR, "fd");
+	err = __register_blkdev(FLOPPY_MAJOR, "fd", floppy_probe);
 	if (err)
 		goto out_put_disk;
 
@@ -4643,9 +4679,6 @@ static int __init do_floppy_init(void)
 	if (err)
 		goto out_unreg_blkdev;
 
-	blk_register_region(MKDEV(FLOPPY_MAJOR, 0), 256, THIS_MODULE,
-			    floppy_find, NULL, NULL);
-
 	for (i = 0; i < 256; i++)
 		if (ITYPE(i))
 			floppy_sizes[i] = floppy_type[ITYPE(i)].size;
@@ -4673,7 +4706,7 @@ static int __init do_floppy_init(void)
 	if (fdc_state[0].address == -1) {
 		cancel_delayed_work(&fd_timeout);
 		err = -ENODEV;
-		goto out_unreg_region;
+		goto out_unreg_driver;
 	}
 #if N_FDC > 1
 	fdc_state[1].address = FDC2;
@@ -4684,7 +4717,7 @@ static int __init do_floppy_init(void)
 	if (err) {
 		cancel_delayed_work(&fd_timeout);
 		err = -EBUSY;
-		goto out_unreg_region;
+		goto out_unreg_driver;
 	}
 
 	/* initialise drive state */
@@ -4761,10 +4794,8 @@ static int __init do_floppy_init(void)
 		if (err)
 			goto out_remove_drives;
 
-		/* to be cleaned up... */
-		disks[drive]->private_data = (void *)(long)drive;
-		disks[drive]->flags |= GENHD_FL_REMOVABLE;
-		device_add_disk(&floppy_device[drive].dev, disks[drive], NULL);
+		device_add_disk(&floppy_device[drive].dev, disks[drive][0],
+				NULL);
 	}
 
 	return 0;
@@ -4772,30 +4803,27 @@ static int __init do_floppy_init(void)
 out_remove_drives:
 	while (drive--) {
 		if (floppy_available(drive)) {
-			del_gendisk(disks[drive]);
+			del_gendisk(disks[drive][0]);
 			platform_device_unregister(&floppy_device[drive]);
 		}
 	}
 out_release_dma:
 	if (atomic_read(&usage_count))
 		floppy_release_irq_and_dma();
-out_unreg_region:
-	blk_unregister_region(MKDEV(FLOPPY_MAJOR, 0), 256);
+out_unreg_driver:
 	platform_driver_unregister(&floppy_driver);
 out_unreg_blkdev:
 	unregister_blkdev(FLOPPY_MAJOR, "fd");
 out_put_disk:
 	destroy_workqueue(floppy_wq);
 	for (drive = 0; drive < N_DRIVE; drive++) {
-		if (!disks[drive])
+		if (!disks[drive][0])
 			break;
-		if (disks[drive]->queue) {
-			del_timer_sync(&motor_off_timer[drive]);
-			blk_cleanup_queue(disks[drive]->queue);
-			disks[drive]->queue = NULL;
-			blk_mq_free_tag_set(&tag_sets[drive]);
-		}
-		put_disk(disks[drive]);
+		del_timer_sync(&motor_off_timer[drive]);
+		blk_cleanup_queue(disks[drive][0]->queue);
+		disks[drive][0]->queue = NULL;
+		blk_mq_free_tag_set(&tag_sets[drive]);
+		put_disk(disks[drive][0]);
 	}
 	return err;
 }
@@ -5006,9 +5034,8 @@ module_init(floppy_module_init);
 
 static void __exit floppy_module_exit(void)
 {
-	int drive;
+	int drive, i;
 
-	blk_unregister_region(MKDEV(FLOPPY_MAJOR, 0), 256);
 	unregister_blkdev(FLOPPY_MAJOR, "fd");
 	platform_driver_unregister(&floppy_driver);
 
@@ -5018,10 +5045,16 @@ static void __exit floppy_module_exit(void)
 		del_timer_sync(&motor_off_timer[drive]);
 
 		if (floppy_available(drive)) {
-			del_gendisk(disks[drive]);
+			for (i = 0; i < ARRAY_SIZE(floppy_type); i++) {
+				if (disks[drive][i])
+					del_gendisk(disks[drive][i]);
+			}
 			platform_device_unregister(&floppy_device[drive]);
 		}
-		blk_cleanup_queue(disks[drive]->queue);
+		for (i = 0; i < ARRAY_SIZE(floppy_type); i++) {
+			if (disks[drive][i])
+				blk_cleanup_queue(disks[drive][i]->queue);
+		}
 		blk_mq_free_tag_set(&tag_sets[drive]);
 
 		/*
@@ -5029,10 +5062,17 @@ static void __exit floppy_module_exit(void)
 		 * queue reference in put_disk().
 		 */
 		if (!(allowed_drive_mask & (1 << drive)) ||
-		    fdc_state[FDC(drive)].version == FDC_NONE)
-			disks[drive]->queue = NULL;
+		    fdc_state[FDC(drive)].version == FDC_NONE) {
+			for (i = 0; i < ARRAY_SIZE(floppy_type); i++) {
+				if (disks[drive][i])
+					disks[drive][i]->queue = NULL;
+			}
+		}
 
-		put_disk(disks[drive]);
+		for (i = 0; i < ARRAY_SIZE(floppy_type); i++) {
+			if (disks[drive][i])
+				put_disk(disks[drive][i]);
+		}
 	}
 
 	cancel_delayed_work_sync(&fd_timeout);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28175.57108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9W-0003zo-IW; Mon, 16 Nov 2020 15:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28175.57108; Mon, 16 Nov 2020 15:10:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9V-0003xr-6c; Mon, 16 Nov 2020 15:10:45 +0000
Received: by outflank-mailman (input) for mailman id 28175;
 Mon, 16 Nov 2020 15:10:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzg-0006ni-3J
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 227477fb-98ae-4619-a093-4bc84c4791f1;
 Mon, 16 Nov 2020 14:58:58 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxr-0003sC-Nh; Mon, 16 Nov 2020 14:58:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzg-0006ni-3J
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:36 +0000
X-Inumbo-ID: 227477fb-98ae-4619-a093-4bc84c4791f1
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 227477fb-98ae-4619-a093-4bc84c4791f1;
	Mon, 16 Nov 2020 14:58:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=2MZ7rG2SVw4WzYTKnpU/EThu7I42ye4vtM6air7UEg8=; b=PyHnyUh0PPkH+yqLagQ19OvvC+
	FlSpzlSdg1OQunPTCAeDu0gQH1vPQJCoApDX7VxN6c/WOIqI0ic8Iv/M87Km2qTOlzCX+eJxIcK7t
	9gT+KgnonlrqeWO8HGhDmNKS9d2N8KcELAEcVJYOp2xLKPiY9VrDekD0EQIUvZfvsG5kAL+aHRVDf
	MrC5YNsHcLZkbfWU7+tWXI2XVoRl2stXpjIpNroEOB5aMcrebGwmReG2yS2Hcvy4gcX3e++R2ckwp
	3uA1SKy8JuMioxo6fxMdKZXrCHsKzlw8Gr1OG0+awbEELu9T1276RdekSFGCRODCHQGNUgKXtUJAu
	o8MV8THA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxr-0003sC-Nh; Mon, 16 Nov 2020 14:58:44 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Richard Weinberger <richard@nod.at>
Subject: [PATCH 24/78] mtd_blkdevs: don't override BLKFLSBUF
Date: Mon, 16 Nov 2020 15:57:15 +0100
Message-Id: <20201116145809.410558-25-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

BLKFLSBUF is not supposed to actually send a flush command to the device,
but to tear down buffer cache structures.  Remove the mtd_blkdevs
implementation and just use the default semantics instead.
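
[Editor's sketch, not part of the patch: a toy model of the dispatch change. The command values and function names below are illustrative stand-ins, not the real ioctl numbers or kernel symbols; the point is only that with no driver ->ioctl hook, BLKFLSBUF is handled by the block layer's default path, so the driver-side handler is redundant.]

```c
#include <assert.h>
#include <errno.h>

/* Stand-in command values, not the real ioctl numbers. */
#define TOY_BLKFLSBUF	0x1261
#define TOY_UNKNOWN	0xdead

/*
 * What the removed mtd_blkdevs hook effectively did: claim BLKFLSBUF
 * itself and reject everything else with -ENOTTY.
 */
static int toy_driver_ioctl(unsigned int cmd)
{
	return cmd == TOY_BLKFLSBUF ? 0 : -ENOTTY;
}

/*
 * Default semantics: the block layer implements BLKFLSBUF (sync and
 * invalidate the buffer cache) for every disk, so a driver without an
 * ->ioctl method still supports it.
 */
static int toy_blkdev_default_ioctl(unsigned int cmd)
{
	if (cmd == TOY_BLKFLSBUF)
		return 0;
	return -ENOTTY;
}

/* With or without the driver hook, the observable result is the same. */
int toy_dispatch(int have_driver_hook, unsigned int cmd)
{
	if (have_driver_hook) {
		int ret = toy_driver_ioctl(cmd);

		if (ret != -ENOTTY)
			return ret;
	}
	return toy_blkdev_default_ioctl(cmd);
}
```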

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Richard Weinberger <richard@nod.at>
---
 drivers/mtd/mtd_blkdevs.c | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 0c05f77f9b216e..fb8e12d590a13a 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -298,38 +298,10 @@ static int blktrans_getgeo(struct block_device *bdev, struct hd_geometry *geo)
 	return ret;
 }
 
-static int blktrans_ioctl(struct block_device *bdev, fmode_t mode,
-			      unsigned int cmd, unsigned long arg)
-{
-	struct mtd_blktrans_dev *dev = blktrans_dev_get(bdev->bd_disk);
-	int ret = -ENXIO;
-
-	if (!dev)
-		return ret;
-
-	mutex_lock(&dev->lock);
-
-	if (!dev->mtd)
-		goto unlock;
-
-	switch (cmd) {
-	case BLKFLSBUF:
-		ret = dev->tr->flush ? dev->tr->flush(dev) : 0;
-		break;
-	default:
-		ret = -ENOTTY;
-	}
-unlock:
-	mutex_unlock(&dev->lock);
-	blktrans_dev_put(dev);
-	return ret;
-}
-
 static const struct block_device_operations mtd_block_ops = {
 	.owner		= THIS_MODULE,
 	.open		= blktrans_open,
 	.release	= blktrans_release,
-	.ioctl		= blktrans_ioctl,
 	.getgeo		= blktrans_getgeo,
 };
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28183.57127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9a-0004Dm-QT; Mon, 16 Nov 2020 15:10:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28183.57127; Mon, 16 Nov 2020 15:10:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9Z-0004C3-Tj; Mon, 16 Nov 2020 15:10:49 +0000
Received: by outflank-mailman (input) for mailman id 28183;
 Mon, 16 Nov 2020 15:10:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg4H-0006ni-Dn
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14af5499-b809-4547-becc-abc358585705;
 Mon, 16 Nov 2020 15:00:12 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz6-0004Ja-4t; Mon, 16 Nov 2020 15:00:00 +0000

X-Inumbo-ID: 14af5499-b809-4547-becc-abc358585705
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=hht0IJxm0Atonm6wFAUEQs4XOgcCelxMQ4knV7FOtuU=; b=OnR7Hl4Gn8Cqes4fUGZG09Ou5b
	wkdlpWME3XUZyniKD0u6+LvQEZ4jfV5KI0bvgOdKoKy/avRVN1ICpG3ayLr7y9f7mbweYvyb5dtf9
	LB6vxsD9CHKtKypIkEHeVB91GzwQLjRfxlTFqOarpX4O1io/JrahYaGlAejJ9Wdq1/ICyENnWPCND
	K6zRNnIdtyLJMuh8Hr4Sfe2rN4rdYR4DXOl0KacL5GfpL0QuIaqSWtKXdNYNYLVFnm5vnjYokZj+O
	0vQqQ4fnqy8Lr9QzB4VCWPgAQgxp0yphr6N7aK0gOpCB0lKFL+g2IP94YJiiFkRNGuFpsWI78KvwO
	vmSnReZw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 76/78] filemap: use ->f_mapping over ->i_mapping consistently
Date: Mon, 16 Nov 2020 15:58:07 +0100
Message-Id: <20201116145809.410558-77-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use file->f_mapping in all functions that have a struct file available
to properly handle the case where file->f_mapping !=
file_inode(file)->i_mapping.
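
[Editor's sketch, not part of the patch: minimal stand-ins for the kernel structures, just to illustrate the invariant the change relies on. Field layout and names are simplified; the real case where the two mappings differ is e.g. a block device file, whose f_mapping points at the bdev inode's mapping rather than the path inode's.]

```c
#include <assert.h>
#include <stddef.h>

struct toy_address_space { int id; };
struct toy_inode { struct toy_address_space *i_mapping; };
struct toy_file {
	struct toy_inode *f_inode;
	struct toy_address_space *f_mapping;
};

/* Old pattern: goes through the mapping of the inode the path resolved to. */
static struct toy_address_space *mapping_via_inode(struct toy_file *f)
{
	return f->f_inode->i_mapping;
}

/* New pattern: uses the mapping the file was actually set up with. */
struct toy_address_space *mapping_via_file(struct toy_file *f)
{
	return f->f_mapping;
}

/* Build a file whose f_mapping deliberately differs from i_mapping. */
int toy_mappings_differ(struct toy_file *f)
{
	return mapping_via_inode(f) != mapping_via_file(f);
}
```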

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/filemap.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index d5e7c2029d16b4..3e3531a757f8db 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2887,13 +2887,13 @@ EXPORT_SYMBOL(filemap_map_pages);
 vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 {
 	struct page *page = vmf->page;
-	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct inode *inode = vmf->vma->vm_file->f_mapping->host;
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
 	sb_start_pagefault(inode->i_sb);
 	file_update_time(vmf->vma->vm_file);
 	lock_page(page);
-	if (page->mapping != inode->i_mapping) {
+	if (page->mapping != vmf->vma->vm_file->f_mapping) {
 		unlock_page(page);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
@@ -3149,10 +3149,9 @@ void dio_warn_stale_pagecache(struct file *filp)
 {
 	static DEFINE_RATELIMIT_STATE(_rs, 86400 * HZ, DEFAULT_RATELIMIT_BURST);
 	char pathname[128];
-	struct inode *inode = file_inode(filp);
 	char *path;
 
-	errseq_set(&inode->i_mapping->wb_err, -EIO);
+	errseq_set(&filp->f_mapping->wb_err, -EIO);
 	if (__ratelimit(&_rs)) {
 		path = file_path(filp, pathname, sizeof(pathname));
 		if (IS_ERR(path))
@@ -3179,7 +3178,7 @@ generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
 
 	if (iocb->ki_flags & IOCB_NOWAIT) {
 		/* If there are pages to writeback, return */
-		if (filemap_range_has_page(inode->i_mapping, pos,
+		if (filemap_range_has_page(file->f_mapping, pos,
 					   pos + write_len - 1))
 			return -EAGAIN;
 	} else {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28187.57135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9c-0004JY-UL; Mon, 16 Nov 2020 15:10:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28187.57135; Mon, 16 Nov 2020 15:10:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9b-0004Hr-Q2; Mon, 16 Nov 2020 15:10:51 +0000
Received: by outflank-mailman (input) for mailman id 28187;
 Mon, 16 Nov 2020 15:10:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefz2-0006ni-27
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c7ce6bc-daab-4d9c-b5a9-a0b0022352e6;
 Mon, 16 Nov 2020 14:58:48 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxh-0003on-CP; Mon, 16 Nov 2020 14:58:33 +0000
X-Inumbo-ID: 6c7ce6bc-daab-4d9c-b5a9-a0b0022352e6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Q3W7KdKmwyFppJQ1L6YFPM2WOqRK8Pu4NYkxno3YD6A=; b=NyxRsiP7cn1hmv0eDl11Mc4Rxf
	OWKUCBLhCdlto1TuFqmc+ik0CxwHpClG8i6SZ055UUUpnyX2CNTfXXftomb5qIsUZE4ygF+FRUGhr
	fAXszF/eCbKtsfOuGENKuWXtkVuHidIrHgPovS6xL5kAaxcTygkWAaJcUI8slrGfBhTnvPM/I72fk
	967ch5pi4wrCzTxqeZL+093PJdfdMvz8APGAdktPfxH2ycCn8Iv2kNb8PonW588NcAiV+GH5OWLY1
	SBnjZWwXSjGXCqdfH6hPJN3tyaO2ihu38I7qOOrSyr95XyL/IMr6yv0B7vtiKwh1PJYvarW+oJafP
	gmduacxw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 17/78] rnbd: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:08 +0100
Message-Id: <20201116145809.410558-18-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.
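
[Editor's sketch, not part of the patch: a toy model of the notification rule implied by set_capacity_and_notify, per the block/genhd.c hunk elsewhere in this series. The struct and function names are local stand-ins for the kernel API: the size is always updated, but a RESIZE uevent is only emitted when the capacity actually changes between two nonzero values.]

```c
#include <assert.h>
#include <stdbool.h>

struct toy_disk { unsigned long long sectors; };

/*
 * Returns true when a uevent would be sent: the capacity changed and
 * neither the old nor the new value is zero (initial bring-up and
 * teardown are not "resizes").
 */
bool toy_set_capacity_and_notify(struct toy_disk *d, unsigned long long size)
{
	unsigned long long capacity = d->sectors;

	d->sectors = size;	/* set_capacity() */
	if (capacity != size && capacity != 0 && size != 0)
		return true;	/* kobject_uevent_env(..., "RESIZE=1") */
	return false;
}
```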

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com>
---
 drivers/block/rnbd/rnbd-clt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 8b2411ccbda97c..bb13d7dd195a08 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -100,8 +100,7 @@ static int rnbd_clt_change_capacity(struct rnbd_clt_dev *dev,
 	rnbd_clt_info(dev, "Device size changed from %zu to %zu sectors\n",
 		       dev->nsectors, new_nsectors);
 	dev->nsectors = new_nsectors;
-	set_capacity(dev->gd, dev->nsectors);
-	revalidate_disk_size(dev->gd, true);
+	set_capacity_and_notify(dev->gd, dev->nsectors);
 	return 0;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28190.57147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9f-0004Pf-1b; Mon, 16 Nov 2020 15:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28190.57147; Mon, 16 Nov 2020 15:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9d-0004NA-IA; Mon, 16 Nov 2020 15:10:53 +0000
Received: by outflank-mailman (input) for mailman id 28190;
 Mon, 16 Nov 2020 15:10:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3n-0006ni-CZ
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e9ac43c-8ee6-4baf-8705-0b96765ed7d7;
 Mon, 16 Nov 2020 15:00:03 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyw-0004GD-43; Mon, 16 Nov 2020 14:59:50 +0000
X-Inumbo-ID: 7e9ac43c-8ee6-4baf-8705-0b96765ed7d7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=TvpeRXm8BRTuXCT37wlukwe6BOZ3/dX7L/umbpflCMM=; b=DDAkZCTDGxfbePoEHo/XsSjwQ3
	tMobs8wjtx98GP6GSeBpBqTzY/MJdwt8tOtG+zffeQQl2gmvutWvJojj5XVOBF9+ReiqAWXNRGvXP
	RscNCSP4GwLHLORPC97xBJJQyi80eiJE4B2ihbqMzkRm8IOGTTuFd3iGRix1pvcDDITqjUrgXwtDM
	T8P9wh9Aw1eKlv377fPg/jQd1UjLyKMrfQWgZnihlcdiujNWsxrtVIzgN6w/ZikmYZYbltxx++hoV
	OYmAqLNxp25/xu0dHNAIaWsalGvPwhEJYV2MliNvw6xLaIxEVEBC8aTOgyuqYdv6qnjTQ7Ar+pNkR
	Xyo3SO7w==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 69/78] block: remove the nr_sects field in struct hd_struct
Date: Mon, 16 Nov 2020 15:58:00 +0100
Message-Id: <20201116145809.410558-70-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that the hd_struct always has a block device attached to it, there is
no need for two size fields that just get out of sync.

Additionally, the field in hd_struct did not use proper serialization,
possibly allowing for torn writes.  Using only the block_device field
fixes this problem as well.
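
[Editor's sketch, not part of the patch: the single-source-of-truth accessors this patch converges on. Instead of a separate hd_struct::nr_sects that can get out of sync (and, on 32-bit, be read torn), the sector count is derived from the block device inode's byte size. SECTOR_SHIFT and the struct layout are simplified stand-ins; in the kernel the write side is done under bd_size_lock with i_size_write().]

```c
#include <assert.h>

#define TOY_SECTOR_SHIFT 9

/* Byte size lives in one place: the bdev inode. */
struct toy_bdev_inode { long long i_size; };
struct toy_block_device { struct toy_bdev_inode *bd_inode; };

/* Stand-in for bdev_nr_sectors(): derive sectors from the inode size. */
long long toy_bdev_nr_sectors(struct toy_block_device *bdev)
{
	return bdev->bd_inode->i_size >> TOY_SECTOR_SHIFT;
}

/* Stand-in for bdev_set_nr_sectors(): one write updates both views. */
void toy_bdev_set_nr_sectors(struct toy_block_device *bdev, long long sectors)
{
	/* in the kernel: spin_lock(&bdev->bd_size_lock); i_size_write(...) */
	bdev->bd_inode->i_size = sectors << TOY_SECTOR_SHIFT;
}
```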

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c                        |  2 +-
 block/blk-core.c                   |  2 +-
 block/blk.h                        | 53 ----------------------
 block/genhd.c                      | 34 +++++++-------
 block/partitions/core.c            | 17 ++++---
 drivers/block/loop.c               |  1 -
 drivers/block/nbd.c                |  2 +-
 drivers/block/xen-blkback/common.h |  4 +-
 drivers/md/bcache/super.c          |  2 +-
 drivers/s390/block/dasd_ioctl.c    |  4 +-
 drivers/target/target_core_pscsi.c |  7 +--
 fs/block_dev.c                     | 73 +-----------------------------
 fs/f2fs/super.c                    |  2 +-
 fs/pstore/blk.c                    |  2 +-
 include/linux/genhd.h              | 29 +++---------
 kernel/trace/blktrace.c            |  2 +-
 16 files changed, 47 insertions(+), 189 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fa01bef35bb1fe..0c5269997434d6 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -613,7 +613,7 @@ void guard_bio_eod(struct bio *bio)
 	rcu_read_lock();
 	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
 	if (part)
-		maxsector = part_nr_sects_read(part);
+		maxsector = bdev_nr_sectors(part->bdev);
 	else
 		maxsector = get_capacity(bio->bi_disk);
 	rcu_read_unlock();
diff --git a/block/blk-core.c b/block/blk-core.c
index 2db8bda43b6e6d..988f45094a387b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -755,7 +755,7 @@ static inline int blk_partition_remap(struct bio *bio)
 		goto out;
 
 	if (bio_sectors(bio)) {
-		if (bio_check_eod(bio, part_nr_sects_read(p)))
+		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
 			goto out;
 		bio->bi_iter.bi_sector += p->start_sect;
 		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
diff --git a/block/blk.h b/block/blk.h
index d74159bf61eb8f..7d10bb24eb282d 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -386,59 +386,6 @@ static inline void hd_free_part(struct hd_struct *part)
 	percpu_ref_exit(&part->ref);
 }
 
-/*
- * Any access of part->nr_sects which is not protected by partition
- * bd_mutex or gendisk bdev bd_mutex, should be done using this
- * accessor function.
- *
- * Code written along the lines of i_size_read() and i_size_write().
- * CONFIG_PREEMPTION case optimizes the case of UP kernel with preemption
- * on.
- */
-static inline sector_t part_nr_sects_read(struct hd_struct *part)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	sector_t nr_sects;
-	unsigned seq;
-	do {
-		seq = read_seqcount_begin(&part->nr_sects_seq);
-		nr_sects = part->nr_sects;
-	} while (read_seqcount_retry(&part->nr_sects_seq, seq));
-	return nr_sects;
-#elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
-	sector_t nr_sects;
-
-	preempt_disable();
-	nr_sects = part->nr_sects;
-	preempt_enable();
-	return nr_sects;
-#else
-	return part->nr_sects;
-#endif
-}
-
-/*
- * Should be called with mutex lock held (typically bd_mutex) of partition
- * to provide mutual exlusion among writers otherwise seqcount might be
- * left in wrong state leaving the readers spinning infinitely.
- */
-static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	preempt_disable();
-	write_seqcount_begin(&part->nr_sects_seq);
-	part->nr_sects = size;
-	write_seqcount_end(&part->nr_sects_seq);
-	preempt_enable();
-#elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
-	preempt_disable();
-	part->nr_sects = size;
-	preempt_enable();
-#else
-	part->nr_sects = size;
-#endif
-}
-
 int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page);
diff --git a/block/genhd.c b/block/genhd.c
index 40ec5473a21dd2..7832968ce3fbb7 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -38,6 +38,16 @@ static void disk_add_events(struct gendisk *disk);
 static void disk_del_events(struct gendisk *disk);
 static void disk_release_events(struct gendisk *disk);
 
+void set_capacity(struct gendisk *disk, sector_t sectors)
+{
+	struct block_device *bdev = disk->part0.bdev;
+
+	spin_lock(&bdev->bd_size_lock);
+	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
+	spin_unlock(&bdev->bd_size_lock);
+}
+EXPORT_SYMBOL(set_capacity);
+
 /*
  * Set disk capacity and notify if the size is not currently zero and will not
  * be set to zero.  Returns true if a uevent was sent, otherwise false.
@@ -47,11 +57,12 @@ bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
 	sector_t capacity = get_capacity(disk);
 
 	set_capacity(disk, size);
-	revalidate_disk_size(disk, true);
 
 	if (capacity != size && capacity != 0 && size != 0) {
 		char *envp[] = { "RESIZE=1", NULL };
 
+		pr_info("%s: detected capacity change from %lld to %lld\n",
+		       disk->disk_name, capacity, size);
 		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
 		return true;
 	}
@@ -246,7 +257,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 		part = rcu_dereference(ptbl->part[piter->idx]);
 		if (!part)
 			continue;
-		if (!part_nr_sects_read(part) &&
+		if (!bdev_nr_sectors(part->bdev) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
 		      piter->idx == 0))
@@ -283,7 +294,7 @@ EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 static inline int sector_in_part(struct hd_struct *part, sector_t sector)
 {
 	return part->start_sect <= sector &&
-		sector < part->start_sect + part_nr_sects_read(part);
+		sector < part->start_sect + bdev_nr_sectors(part->bdev);
 }
 
 /**
@@ -981,7 +992,7 @@ void __init printk_all_partitions(void)
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
 			       bdevt_str(part_devt(part), devt_buf),
-			       (unsigned long long)part_nr_sects_read(part) >> 1
+			       bdev_nr_sectors(part->bdev) >> 1
 			       , disk_name(disk, part->partno, name_buf),
 			       part->info ? part->info->uuid : "");
 			if (is_part0) {
@@ -1074,7 +1085,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 	while ((part = disk_part_iter_next(&piter)))
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
 			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
-			   (unsigned long long)part_nr_sects_read(part) >> 1,
+			   bdev_nr_sectors(part->bdev) >> 1,
 			   disk_name(sgp, part->partno, buf));
 	disk_part_iter_exit(&piter);
 
@@ -1156,8 +1167,7 @@ ssize_t part_size_show(struct device *dev,
 {
 	struct hd_struct *p = dev_to_part(dev);
 
-	return sprintf(buf, "%llu\n",
-		(unsigned long long)part_nr_sects_read(p));
+	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
 }
 
 ssize_t part_stat_show(struct device *dev,
@@ -1616,16 +1626,6 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
 	rcu_assign_pointer(ptbl->part[0], &disk->part0);
 
-	/*
-	 * set_capacity() and get_capacity() currently don't use
-	 * seqcounter to read/update the part0->nr_sects. Still init
-	 * the counter as we can read the sectors in IO submission
-	 * patch using seqence counters.
-	 *
-	 * TODO: Ideally set_capacity() and get_capacity() should be
-	 * converted to make use of bd_mutex and sequence counters.
-	 */
-	hd_sects_seq_init(&disk->part0);
 	if (hd_ref_init(&disk->part0))
 		goto out_free_part0;
 
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 8b44f46ab1fbfc..573ef5a03fc104 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -85,6 +85,13 @@ static int (*check_part[])(struct parsed_partitions *) = {
 	NULL
 };
 
+static void bdev_set_nr_sectors(struct block_device *bdev, sector_t sectors)
+{
+	spin_lock(&bdev->bd_size_lock);
+	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
+	spin_unlock(&bdev->bd_size_lock);
+}
+
 static struct parsed_partitions *allocate_partitions(struct gendisk *hd)
 {
 	struct parsed_partitions *state;
@@ -295,7 +302,7 @@ static void hd_struct_free_work(struct work_struct *work)
 	put_device(disk_to_dev(disk));
 
 	part->start_sect = 0;
-	part->nr_sects = 0;
+	bdev_set_nr_sectors(part->bdev, 0);
 	part_stat_set_all(part, 0);
 	put_device(part_to_dev(part));
 }
@@ -410,11 +417,10 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!p->bdev)
 		goto out_free_stats;
 
-	hd_sects_seq_init(p);
 	pdev = part_to_dev(p);
 
 	p->start_sect = start;
-	p->nr_sects = len;
+	bdev_set_nr_sectors(p->bdev, len);
 	p->partno = partno;
 	p->policy = get_disk_ro(disk);
 
@@ -508,7 +514,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
 		if (part->partno == skip_partno ||
-		    start >= part->start_sect + part->nr_sects ||
+		    start >= part->start_sect + bdev_nr_sectors(part->bdev) ||
 		    start + length <= part->start_sect)
 			continue;
 		overlap = true;
@@ -599,8 +605,7 @@ int bdev_resize_partition(struct block_device *bdev, int partno,
 	if (partition_overlaps(bdev->bd_disk, start, length, partno))
 		goto out_unlock;
 
-	part_nr_sects_write(part, length);
-	bd_set_nr_sectors(bdevp, length);
+	bdev_set_nr_sectors(bdevp, length);
 
 	ret = 0;
 out_unlock:
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 599e94a7e69259..9d2587f6167cd8 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1243,7 +1243,6 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 	set_capacity(lo->lo_disk, 0);
 	loop_sysfs_exit(lo);
 	if (bdev) {
-		bd_set_nr_sectors(bdev, 0);
 		/* let user-space know about this change */
 		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
 	}
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 45b0423ef2c53d..014683968ce174 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1132,7 +1132,7 @@ static void nbd_bdev_reset(struct block_device *bdev)
 {
 	if (bdev->bd_openers > 1)
 		return;
-	bd_set_nr_sectors(bdev, 0);
+	set_capacity(bdev->bd_disk, 0);
 }
 
 static void nbd_parse_flags(struct nbd_device *nbd)
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index c6ea5d38c509a6..0762db247b41b3 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -358,9 +358,7 @@ struct pending_req {
 };
 
 
-#define vbd_sz(_v)	((_v)->bdev->bd_part ? \
-			 (_v)->bdev->bd_part->nr_sects : \
-			  get_capacity((_v)->bdev->bd_disk))
+#define vbd_sz(_v)	bdev_nr_sectors((_v)->bdev)
 
 #define xen_blkif_get(_b) (atomic_inc(&(_b)->refcnt))
 #define xen_blkif_put(_b)				\
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index d36ccdda16ed2e..ea2b80c3e44c38 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1408,7 +1408,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
 			q->limits.raid_partial_stripes_expensive;
 
 	ret = bcache_device_init(&dc->disk, block_size,
-			 dc->bdev->bd_part->nr_sects - dc->sb.data_offset,
+			 bdev_nr_sectors(dc->bdev) - dc->sb.data_offset,
 			 dc->bdev, &bcache_cached_ops);
 	if (ret)
 		return ret;
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index 3359559517bfcf..304eba1acf163c 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -54,8 +54,6 @@ dasd_ioctl_enable(struct block_device *bdev)
 		return -ENODEV;
 
 	dasd_enable_device(base);
-	/* Formatting the dasd device can change the capacity. */
-	bd_set_nr_sectors(bdev, get_capacity(base->block->gdp));
 	dasd_put_device(base);
 	return 0;
 }
@@ -88,7 +86,7 @@ dasd_ioctl_disable(struct block_device *bdev)
 	 * Set i_size to zero, since read, write, etc. check against this
 	 * value.
 	 */
-	bd_set_nr_sectors(bdev, 0);
+	set_capacity(bdev->bd_disk, 0);
 	dasd_put_device(base);
 	return 0;
 }
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 4e37fa9b409d52..a70c33c49f0960 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -1027,12 +1027,7 @@ static u32 pscsi_get_device_type(struct se_device *dev)
 
 static sector_t pscsi_get_blocks(struct se_device *dev)
 {
-	struct pscsi_dev_virt *pdv = PSCSI_DEV(dev);
-
-	if (pdv->pdv_bd && pdv->pdv_bd->bd_part)
-		return pdv->pdv_bd->bd_part->nr_sects;
-
-	return 0;
+	return bdev_nr_sectors(PSCSI_DEV(dev)->pdv_bd);
 }
 
 static void pscsi_req_done(struct request *req, blk_status_t status)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 2348f218d45deb..14b6dbfa9dda2a 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1286,70 +1286,6 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 EXPORT_SYMBOL_GPL(bd_unlink_disk_holder);
 #endif
 
-/**
- * check_disk_size_change - checks for disk size change and adjusts bdev size.
- * @disk: struct gendisk to check
- * @bdev: struct bdev to adjust.
- * @verbose: if %true log a message about a size change if there is any
- *
- * This routine checks to see if the bdev size does not match the disk size
- * and adjusts it if it differs. When shrinking the bdev size, its all caches
- * are freed.
- */
-static void check_disk_size_change(struct gendisk *disk,
-		struct block_device *bdev, bool verbose)
-{
-	loff_t disk_size, bdev_size;
-
-	spin_lock(&bdev->bd_size_lock);
-	disk_size = (loff_t)get_capacity(disk) << 9;
-	bdev_size = i_size_read(bdev->bd_inode);
-	if (disk_size != bdev_size) {
-		if (verbose) {
-			printk(KERN_INFO
-			       "%s: detected capacity change from %lld to %lld\n",
-			       disk->disk_name, bdev_size, disk_size);
-		}
-		i_size_write(bdev->bd_inode, disk_size);
-	}
-	spin_unlock(&bdev->bd_size_lock);
-}
-
-/**
- * revalidate_disk_size - checks for disk size change and adjusts bdev size.
- * @disk: struct gendisk to check
- * @verbose: if %true log a message about a size change if there is any
- *
- * This routine checks to see if the bdev size does not match the disk size
- * and adjusts it if it differs. When shrinking the bdev size, its all caches
- * are freed.
- */
-void revalidate_disk_size(struct gendisk *disk, bool verbose)
-{
-	struct block_device *bdev;
-
-	/*
-	 * Hidden disks don't have associated bdev so there's no point in
-	 * revalidating them.
-	 */
-	if (disk->flags & GENHD_FL_HIDDEN)
-		return;
-
-	bdev = bdget_disk(disk, 0);
-	if (bdev) {
-		check_disk_size_change(disk, bdev, verbose);
-		bdput(bdev);
-	}
-}
-
-void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
-{
-	spin_lock(&bdev->bd_size_lock);
-	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
-	spin_unlock(&bdev->bd_size_lock);
-}
-EXPORT_SYMBOL(bd_set_nr_sectors);
-
 static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part);
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate)
@@ -1383,8 +1319,6 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
 			disk->fops->revalidate_disk(disk);
 	}
 
-	check_disk_size_change(disk, bdev, !invalidate);
-
 	if (get_capacity(disk)) {
 		ret = blk_add_partitions(disk, bdev);
 		if (ret == -EAGAIN)
@@ -1472,10 +1406,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 					need_restart = true;
 			}
 
-			if (!ret) {
-				bd_set_nr_sectors(bdev, get_capacity(disk));
+			if (!ret)
 				set_init_blocksize(bdev);
-			}
 
 			/*
 			 * If the device is invalidated, rescan partition
@@ -1497,11 +1429,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 				goto out_clear;
 			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
-			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
+			    !bdev->bd_part || !bdev_nr_sectors(bdev)) {
 				ret = -ENXIO;
 				goto out_clear;
 			}
-			bd_set_nr_sectors(bdev, bdev->bd_part->nr_sects);
 			set_init_blocksize(bdev);
 		}
 
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 00eff2f5180790..d4e7fab352bacb 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3151,7 +3151,7 @@ static int f2fs_report_zone_cb(struct blk_zone *zone, unsigned int idx,
 static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
 {
 	struct block_device *bdev = FDEV(devi).bdev;
-	sector_t nr_sectors = bdev->bd_part->nr_sects;
+	sector_t nr_sectors = bdev_nr_sectors(bdev);
 	struct f2fs_report_zones_args rep_zone_arg;
 	int ret;
 
diff --git a/fs/pstore/blk.c b/fs/pstore/blk.c
index fcd5563dde063c..777a26f7bbe2aa 100644
--- a/fs/pstore/blk.c
+++ b/fs/pstore/blk.c
@@ -245,7 +245,7 @@ static struct block_device *psblk_get_bdev(void *holder,
 			return bdev;
 	}
 
-	nr_sects = part_nr_sects_read(bdev->bd_part);
+	nr_sects = bdev_nr_sectors(bdev);
 	if (!nr_sects) {
 		pr_err("not enough space for '%s'\n", blkdev);
 		blkdev_put(bdev, mode);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index ab5fca99764e7a..e01618dfafc05c 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -52,15 +52,6 @@ struct partition_meta_info {
 
 struct hd_struct {
 	sector_t start_sect;
-	/*
-	 * nr_sects is protected by sequence counter. One might extend a
-	 * partition while IO is happening to it and update of nr_sects
-	 * can be non-atomic on 32bit machines with 64bit sector_t.
-	 */
-	sector_t nr_sects;
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	seqcount_t nr_sects_seq;
-#endif
 	unsigned long stamp;
 	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
@@ -259,13 +250,6 @@ static inline void disk_put_part(struct hd_struct *part)
 		put_device(part_to_dev(part));
 }
 
-static inline void hd_sects_seq_init(struct hd_struct *p)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	seqcount_init(&p->nr_sects_seq);
-#endif
-}
-
 /*
  * Smarter partition iterator without context limits.
  */
@@ -323,13 +307,15 @@ static inline sector_t get_start_sect(struct block_device *bdev)
 {
 	return bdev->bd_part->start_sect;
 }
-static inline sector_t get_capacity(struct gendisk *disk)
+	
+static inline sector_t bdev_nr_sectors(struct block_device *bdev)
 {
-	return disk->part0.nr_sects;
+	return i_size_read(bdev->bd_inode) >> 9;
 }
-static inline void set_capacity(struct gendisk *disk, sector_t size)
+	
+static inline sector_t get_capacity(struct gendisk *disk)
 {
-	disk->part0.nr_sects = size;
+	return bdev_nr_sectors(disk->part0.bdev);
 }
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate);
@@ -363,10 +349,9 @@ int __register_blkdev(unsigned int major, const char *name,
 	__register_blkdev(major, name, NULL)
 void unregister_blkdev(unsigned int major, const char *name);
 
-void revalidate_disk_size(struct gendisk *disk, bool verbose);
 bool bdev_check_media_change(struct block_device *bdev);
 int __invalidate_device(struct block_device *bdev, bool kill_dirty);
-void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors);
+void set_capacity(struct gendisk *disk, sector_t size);
 
 /* for drivers/char/raw.c: */
 int blkdev_ioctl(struct block_device *, fmode_t, unsigned, unsigned long);
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index f1022945e3460b..7076d588a50d69 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -465,7 +465,7 @@ static void blk_trace_setup_lba(struct blk_trace *bt,
 
 	if (part) {
 		bt->start_lba = part->start_sect;
-		bt->end_lba = part->start_sect + part->nr_sects;
+		bt->end_lba = part->start_sect + bdev_nr_sectors(bdev);
 	} else {
 		bt->start_lba = 0;
 		bt->end_lba = -1ULL;
-- 
2.29.2
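[Archive note] The patch above replaces direct reads of `hd_struct.nr_sects` with the new `bdev_nr_sectors()` helper, which derives the sector count from the block device inode's byte size. The following is a minimal userspace sketch of that relationship (not kernel code; the `_model` types are hypothetical stand-ins for `struct block_device` and its inode):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Hypothetical stand-ins: the real helper reads
 * i_size_read(bdev->bd_inode) and shifts by 9. */
struct inode_model { uint64_t i_size; };        /* byte size of the bdev inode */
struct bdev_model  { struct inode_model inode; };

/* Mirrors the new helper: the result is always in 512-byte sectors,
 * regardless of the device's logical block size. */
static sector_t bdev_nr_sectors_model(const struct bdev_model *bdev)
{
    return bdev->inode.i_size >> 9;
}
```

Because the inode size is the single source of truth, callers such as `vbd_sz()` in xen-blkback no longer need to distinguish whole disks from partitions.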



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:10:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:10:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28193.57157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9h-0004WA-9x; Mon, 16 Nov 2020 15:10:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28193.57157; Mon, 16 Nov 2020 15:10:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9g-0004Uv-52; Mon, 16 Nov 2020 15:10:56 +0000
Received: by outflank-mailman (input) for mailman id 28193;
 Mon, 16 Nov 2020 15:10:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzv-0006ni-3v
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d088b20-290f-4a5e-9919-40af0d6cc98c;
 Mon, 16 Nov 2020 14:59:02 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxv-0003t5-MO; Mon, 16 Nov 2020 14:58:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzv-0006ni-3v
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:51 +0000
X-Inumbo-ID: 4d088b20-290f-4a5e-9919-40af0d6cc98c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4d088b20-290f-4a5e-9919-40af0d6cc98c;
	Mon, 16 Nov 2020 14:59:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=lsAXL3hTvnB0lZbP/cBxcA3iF4//0CyS8Nha3XJ9JJw=; b=gRq5Bd6I041r3EGzSNRea27VcV
	raQcLpJmWj/jK+75OwDsxA43vptbb5bLxmKSp35hPLSybywahfsIEXtmUIrD9ToqHyYTfeCqsn8wJ
	wQx0MYjiBzcMrM+7ZWQTSUtknggoViJ3wuvotrL9OpbxcIk+6i29Awe1Y+rr6wmnDy5z2lOUyiXz3
	LW8h++kCUa/3/O3LYkCUURTEpEcW9xBVCEtC0iFWL7xkfc/FqwTqkcVZcvaSmNllVviTiDwS9kVSn
	J0iO8i2WqgPUw34BttXzbrAtJQsZJyDDz3NhCLwGaOn0B22GTINfKHNlj2O/NnHFU2taSXPwtg2gn
	SSzCdCgg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxv-0003t5-MO; Mon, 16 Nov 2020 14:58:48 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 27/78] rbd: implement ->set_read_only to hook into BLKROSET processing
Date: Mon, 16 Nov 2020 15:57:18 +0100
Message-Id: <20201116145809.410558-28-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Implement the ->set_read_only method instead of parsing the actual
ioctl command.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ilya Dryomov <idryomov@gmail.com>
---
 drivers/block/rbd.c | 40 ++++------------------------------------
 1 file changed, 4 insertions(+), 36 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index b7a194ffda55b4..2ed79b09439a82 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -692,12 +692,9 @@ static void rbd_release(struct gendisk *disk, fmode_t mode)
 	put_device(&rbd_dev->dev);
 }
 
-static int rbd_ioctl_set_ro(struct rbd_device *rbd_dev, unsigned long arg)
+static int rbd_set_read_only(struct block_device *bdev, bool ro)
 {
-	int ro;
-
-	if (get_user(ro, (int __user *)arg))
-		return -EFAULT;
+	struct rbd_device *rbd_dev = bdev->bd_disk->private_data;
 
 	/*
 	 * Both images mapped read-only and snapshots can't be marked
@@ -710,43 +707,14 @@ static int rbd_ioctl_set_ro(struct rbd_device *rbd_dev, unsigned long arg)
 		rbd_assert(!rbd_is_snap(rbd_dev));
 	}
 
-	/* Let blkdev_roset() handle it */
-	return -ENOTTY;
-}
-
-static int rbd_ioctl(struct block_device *bdev, fmode_t mode,
-			unsigned int cmd, unsigned long arg)
-{
-	struct rbd_device *rbd_dev = bdev->bd_disk->private_data;
-	int ret;
-
-	switch (cmd) {
-	case BLKROSET:
-		ret = rbd_ioctl_set_ro(rbd_dev, arg);
-		break;
-	default:
-		ret = -ENOTTY;
-	}
-
-	return ret;
-}
-
-#ifdef CONFIG_COMPAT
-static int rbd_compat_ioctl(struct block_device *bdev, fmode_t mode,
-				unsigned int cmd, unsigned long arg)
-{
-	return rbd_ioctl(bdev, mode, cmd, arg);
+	return 0;
 }
-#endif /* CONFIG_COMPAT */
 
 static const struct block_device_operations rbd_bd_ops = {
 	.owner			= THIS_MODULE,
 	.open			= rbd_open,
 	.release		= rbd_release,
-	.ioctl			= rbd_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl		= rbd_compat_ioctl,
-#endif
+	.set_read_only		= rbd_set_read_only,
 };
 
 /*
-- 
2.29.2
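[Archive note] The patch above moves rbd's BLKROSET policy check out of a hand-rolled ioctl handler and into an optional `->set_read_only()` method that the core calls before flipping the read-only flag. A minimal userspace sketch of that dispatch shape follows (not kernel code; all `_model` names and the error value are hypothetical stand-ins for `struct block_device_operations` and `blkdev_roset()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified method table: the hook is optional, like the real
 * block_device_operations.set_read_only. */
struct bdev_ops_model {
    int (*set_read_only)(void *bdev, bool ro);
};

struct bdev_model {
    const struct bdev_ops_model *ops;
    bool ro;           /* current read-only state */
    bool ro_forced;    /* e.g. an image mapped read-only, or a snapshot */
};

/* Core-side handling in the spirit of blkdev_roset(): consult the
 * driver first, and update the flag only when it does not object. */
static int blkroset_model(struct bdev_model *bdev, bool ro)
{
    if (bdev->ops->set_read_only) {
        int ret = bdev->ops->set_read_only(bdev, ro);
        if (ret)
            return ret;
    }
    bdev->ro = ro;
    return 0;
}

/* Driver-side hook in the spirit of rbd_set_read_only(): refuse to
 * mark a forcibly read-only device writable. */
static int rbd_like_set_read_only(void *opaque, bool ro)
{
    struct bdev_model *bdev = opaque;

    if (!ro && bdev->ro_forced)
        return -1;    /* stand-in for -EROFS */
    return 0;
}
```

The net effect is that drivers only express policy ("may this device become writable?") while argument parsing and state updates stay in the core, which is what lets the patch delete rbd's `ioctl`/`compat_ioctl` handlers entirely.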



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28199.57167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9j-0004cH-JO; Mon, 16 Nov 2020 15:10:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28199.57167; Mon, 16 Nov 2020 15:10:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9h-0004Zb-Rl; Mon, 16 Nov 2020 15:10:57 +0000
Received: by outflank-mailman (input) for mailman id 28199;
 Mon, 16 Nov 2020 15:10:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzH-0006ni-2g
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 205855a2-63e8-49a4-ba67-7ec399427765;
 Mon, 16 Nov 2020 14:58:52 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxi-0003ph-Re; Mon, 16 Nov 2020 14:58:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzH-0006ni-2g
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:11 +0000
X-Inumbo-ID: 205855a2-63e8-49a4-ba67-7ec399427765
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 205855a2-63e8-49a4-ba67-7ec399427765;
	Mon, 16 Nov 2020 14:58:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=pTUA48t42+HHCJV8On6c5azUAuBD9lzCSJPspjiep2g=; b=qaAfbB35avsytGqpNEWytrYnG3
	Gzj30W9piDTxClPGcKhQyC+7xWVJuzk1wqdaEoOD9LirlO1oU4d+8MQk8YOX9kwzJ7YvxhfCYVwqV
	R1dvaVAuFO/bc6xYy1Ev4vzKAgcI+bbNKPs2o+tMhgBNzFFqOC/Cs3j9//Uol04948xLm+DnqHaQV
	JmjjR/YVKAVJHjcogeCp9WMAO/bNvPQrjfn23UtbJYTihH9NKGBOQhYu6ZZR0Bj/S8oBjTDj9QxIb
	xdoFbJTJepAHEgsdxqO7JMl+zv8FirGSWZmdep8SDJQcE2TlUZhGVyxST/B+EptdxxxLsqlXBn1sz
	2IVCtD9Q==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxi-0003ph-Re; Mon, 16 Nov 2020 14:58:35 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 18/78] zram: use set_capacity_and_notify
Date: Mon, 16 Nov 2020 15:57:09 +0100
Message-Id: <20201116145809.410558-19-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use set_capacity_and_notify to set the size of both the disk and block
device.  This also gets the uevent notifications for the resize for free.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 1b697208d66157..6d15d51cee2b7e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1695,7 +1695,7 @@ static void zram_reset_device(struct zram *zram)
 	disksize = zram->disksize;
 	zram->disksize = 0;
 
-	set_capacity(zram->disk, 0);
+	set_capacity_and_notify(zram->disk, 0);
 	part_stat_set_all(&zram->disk->part0, 0);
 
 	up_write(&zram->init_lock);
@@ -1741,9 +1741,7 @@ static ssize_t disksize_store(struct device *dev,
 
 	zram->comp = comp;
 	zram->disksize = disksize;
-	set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
-
-	revalidate_disk_size(zram->disk, true);
+	set_capacity_and_notify(zram->disk, zram->disksize >> SECTOR_SHIFT);
 	up_write(&zram->init_lock);
 
 	return len;
@@ -1790,7 +1788,6 @@ static ssize_t reset_store(struct device *dev,
 	/* Make sure all the pending I/O are finished */
 	fsync_bdev(bdev);
 	zram_reset_device(zram);
-	revalidate_disk_size(zram->disk, true);
 	bdput(bdev);
 
 	mutex_lock(&bdev->bd_mutex);
-- 
2.29.2
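[Archive note] The zram patch above folds the `set_capacity()` + `revalidate_disk_size()` pair into one `set_capacity_and_notify()` call, which also emits the resize uevent when the size actually changed. A minimal userspace sketch of that combined update (not kernel code; the `_model` struct and fields are hypothetical stand-ins for `struct gendisk` and its bdev inode):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Toy model of a gendisk plus the bdev inode size it backs. */
struct disk_model {
    sector_t capacity;      /* disk size in 512-byte sectors */
    uint64_t bdev_bytes;    /* i_size of the associated block device */
    int uevents_sent;       /* counts KOBJ_CHANGE resize notifications */
};

/* Sketch of set_capacity_and_notify(): keep the disk capacity and the
 * bdev inode size in step, and notify userspace only on a real change. */
static void set_capacity_and_notify_model(struct disk_model *disk,
                                          sector_t size)
{
    sector_t old = disk->capacity;

    disk->capacity = size;
    disk->bdev_bytes = (uint64_t)size << 9;
    if (size != old)
        disk->uevents_sent++;
}
```

Keeping both sizes updated in one helper is what makes the separate `revalidate_disk_size()` calls in `disksize_store()` and `reset_store()` redundant, and it is why the follow-up patch can unexport `revalidate_disk_size` once all drivers are converted.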



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28202.57181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9m-0004jj-6T; Mon, 16 Nov 2020 15:11:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28202.57181; Mon, 16 Nov 2020 15:11:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9k-0004hE-L0; Mon, 16 Nov 2020 15:11:00 +0000
Received: by outflank-mailman (input) for mailman id 28202;
 Mon, 16 Nov 2020 15:10:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzb-0006ni-37
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 431b1d26-7935-4fe5-b571-d09fc98d303b;
 Mon, 16 Nov 2020 14:58:57 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxq-0003ro-7G; Mon, 16 Nov 2020 14:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzb-0006ni-37
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:31 +0000
X-Inumbo-ID: 431b1d26-7935-4fe5-b571-d09fc98d303b
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 431b1d26-7935-4fe5-b571-d09fc98d303b;
	Mon, 16 Nov 2020 14:58:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=oHer8RhXp63g2o7G9aHUvzNO/U6So+wGZ90GcOP+Dmc=; b=gtGrbYoF3IfV2RkEJ0q/yjRhPm
	u76YggegzqwLDaW2wmtp0guycHd1DdJJXPC3l6bTCvZCUYePgzv3dFWOHEUXPC4So8yc5A2gvS4tR
	Hy1VCX8Q73aU1KV7sS6JXYB1g1s3pqYGejG2zrJp4z+yeUQPmcBPT4l540gRaJENO1IAv7EZfB7Lb
	lyoF7Dy4LZPRSHsVYIhEZuzqHRyruSHYZCLAQq9fL+cQKB4NITwomCUWwED4eV5+fsaYLCNOmHJqM
	0gddd4nl/e3/vrSUtpbS5Ut7YYhzUYZW8brx5egS7KnCoMTgxDAyPhKVlNw9lvJjh85+Rt/sRchjX
	rDNgymUw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxq-0003ro-7G; Mon, 16 Nov 2020 14:58:42 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 23/78] block: unexport revalidate_disk_size
Date: Mon, 16 Nov 2020 15:57:14 +0100
Message-Id: <20201116145809.410558-24-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

revalidate_disk_size is now only called from set_capacity_and_notify,
so drop the export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 66ebf594c97f47..d8664f5c1ff669 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1362,7 +1362,6 @@ void revalidate_disk_size(struct gendisk *disk, bool verbose)
 		bdput(bdev);
 	}
 }
-EXPORT_SYMBOL(revalidate_disk_size);
 
 void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
 {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28207.57190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9o-0004ov-5P; Mon, 16 Nov 2020 15:11:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28207.57190; Mon, 16 Nov 2020 15:11:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9m-0004mk-H1; Mon, 16 Nov 2020 15:11:02 +0000
Received: by outflank-mailman (input) for mailman id 28207;
 Mon, 16 Nov 2020 15:10:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg13-0006ni-5l
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52606c08-a6b1-434b-a979-50167f68b7e9;
 Mon, 16 Nov 2020 14:59:21 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyE-0003y0-K9; Mon, 16 Nov 2020 14:59:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg13-0006ni-5l
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:01 +0000
X-Inumbo-ID: 52606c08-a6b1-434b-a979-50167f68b7e9
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 52606c08-a6b1-434b-a979-50167f68b7e9;
	Mon, 16 Nov 2020 14:59:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=432B/HqYIabrrukDRcMGdzlFQ4i1D0sjcTyhKWP2cJc=; b=NxdcCH21KU0PaY1Ri6HGwnc/L0
	Pm1N/StwGgisA5L0p+2kPsr3LyRSKmgmK71GuTbdbvT9TCfwbfRA1dvjC1q7rJtFSAtSyf3mHxeWW
	EZ5rlOqNukX80JHmh+gFjJH5eHC5ab37sgk28CMqNBjsJHct6TH7Aec/aHSrjGlIQeH8J/fYjoVWT
	m9S/ORPThsGxcTn9lN7RKKBuWsC/aTMeA81vfKRaWx1OhdXImDLKffH2sllTwanrVTcnLmBpn+vIK
	rxhCqctUbf1QtVmOhyQKeyb0Bskq73OmiuHEGSYFZZUivRg1xuCXEdg0WX9hj3ohTWz5bCrIzwOJR
	oOsbfaxQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyE-0003y0-K9; Mon, 16 Nov 2020 14:59:06 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 40/78] ide: remove ide_{,un}register_region
Date: Mon, 16 Nov 2020 15:57:31 +0100
Message-Id: <20201116145809.410558-41-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

There is no need to ever register the fake gendisk used for ide-tape.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/ide/ide-probe.c | 32 --------------------------------
 drivers/ide/ide-tape.c  |  2 --
 include/linux/ide.h     |  3 ---
 3 files changed, 37 deletions(-)

diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index 1ddc45a04418cd..076d34b381720f 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -929,38 +929,6 @@ static struct kobject *ata_probe(dev_t dev, int *part, void *data)
 	return NULL;
 }
 
-static struct kobject *exact_match(dev_t dev, int *part, void *data)
-{
-	struct gendisk *p = data;
-	*part &= (1 << PARTN_BITS) - 1;
-	return &disk_to_dev(p)->kobj;
-}
-
-static int exact_lock(dev_t dev, void *data)
-{
-	struct gendisk *p = data;
-
-	if (!get_disk_and_module(p))
-		return -1;
-	return 0;
-}
-
-void ide_register_region(struct gendisk *disk)
-{
-	blk_register_region(MKDEV(disk->major, disk->first_minor),
-			    disk->minors, NULL, exact_match, exact_lock, disk);
-}
-
-EXPORT_SYMBOL_GPL(ide_register_region);
-
-void ide_unregister_region(struct gendisk *disk)
-{
-	blk_unregister_region(MKDEV(disk->major, disk->first_minor),
-			      disk->minors);
-}
-
-EXPORT_SYMBOL_GPL(ide_unregister_region);
-
 void ide_init_disk(struct gendisk *disk, ide_drive_t *drive)
 {
 	ide_hwif_t *hwif = drive->hwif;
diff --git a/drivers/ide/ide-tape.c b/drivers/ide/ide-tape.c
index 6f26634b22bbec..88b96437b22e62 100644
--- a/drivers/ide/ide-tape.c
+++ b/drivers/ide/ide-tape.c
@@ -1822,7 +1822,6 @@ static void ide_tape_remove(ide_drive_t *drive)
 
 	ide_proc_unregister_driver(drive, tape->driver);
 	device_del(&tape->dev);
-	ide_unregister_region(tape->disk);
 
 	mutex_lock(&idetape_ref_mutex);
 	put_device(&tape->dev);
@@ -2026,7 +2025,6 @@ static int ide_tape_probe(ide_drive_t *drive)
 		      "n%s", tape->name);
 
 	g->fops = &idetape_block_ops;
-	ide_register_region(g);
 
 	return 0;
 
diff --git a/include/linux/ide.h b/include/linux/ide.h
index 62653769509f89..2c300689a51a5c 100644
--- a/include/linux/ide.h
+++ b/include/linux/ide.h
@@ -1493,9 +1493,6 @@ static inline void ide_acpi_port_init_devices(ide_hwif_t *hwif) { ; }
 static inline void ide_acpi_set_state(ide_hwif_t *hwif, int on) {}
 #endif
 
-void ide_register_region(struct gendisk *);
-void ide_unregister_region(struct gendisk *);
-
 void ide_check_nien_quirk_list(ide_drive_t *);
 void ide_undecoded_slave(ide_drive_t *);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28211.57203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9q-0004vO-Cc; Mon, 16 Nov 2020 15:11:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28211.57203; Mon, 16 Nov 2020 15:11:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9o-0004tB-PO; Mon, 16 Nov 2020 15:11:04 +0000
Received: by outflank-mailman (input) for mailman id 28211;
 Mon, 16 Nov 2020 15:11:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg34-0006ni-Ag
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d91a229-e416-462e-a685-7197268de26f;
 Mon, 16 Nov 2020 14:59:48 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyi-0004Ad-Px; Mon, 16 Nov 2020 14:59:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg34-0006ni-Ag
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:06 +0000
X-Inumbo-ID: 1d91a229-e416-462e-a685-7197268de26f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1d91a229-e416-462e-a685-7197268de26f;
	Mon, 16 Nov 2020 14:59:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=MmPwn4ttbQT6DHYheoE0UJGEFNI8kA6nAc7IEYdL0K0=; b=um2QX1aPNEBrXGLXtU6YvAtfko
	SUJQ/zrYAfsNYMAKdZ3tINvi3zqorS3Hcvwis/lRWwGYc/D4a1i+axMJb3c9L3kCaZuM/YpRBYYYl
	XLZ6DdJ1Y/xHMe+0KQ4YHMrkyZmtkiCaiO6dicUKCBonmuuw50C1TKzUDzndp/SXSYZ/wTEjCmw8k
	AIxOkUDjtUH52YVU8pqQr/5UXUu1+KzSvrDq39iKullr+3CXYrM5D1wNrkUadnpJVHhZ7zw3XpmrF
	j8SHOcAIOQu8AyneX2JJ5GEkPZOzCGTSxW+MHMHVW+E/mdPaslbb69OAjiZZeO+aj/n9AezJ3u1Wq
	K2n33VVw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyi-0004Ad-Px; Mon, 16 Nov 2020 14:59:37 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 60/78] zram: remove the claim mechanism
Date: Mon, 16 Nov 2020 15:57:51 +0100
Message-Id: <20201116145809.410558-61-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The zram claim mechanism was added to ensure no new opens come in
during teardown.  But the proper way to achieve that is to call
del_gendisk first, which takes care of all that.  Once del_gendisk
is called in the right place, the reset side can also be simplified,
as no I/O can be outstanding on a block device that is not open.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 76 ++++++++++-------------------------
 1 file changed, 21 insertions(+), 55 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6d15d51cee2b7e..3641434a9b154d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1756,64 +1756,33 @@ static ssize_t disksize_store(struct device *dev,
 static ssize_t reset_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
-	int ret;
-	unsigned short do_reset;
-	struct zram *zram;
+	struct zram *zram = dev_to_zram(dev);
 	struct block_device *bdev;
+	unsigned short do_reset;
+	int ret = 0;
 
 	ret = kstrtou16(buf, 10, &do_reset);
 	if (ret)
 		return ret;
-
 	if (!do_reset)
 		return -EINVAL;
 
-	zram = dev_to_zram(dev);
 	bdev = bdget_disk(zram->disk, 0);
 	if (!bdev)
 		return -ENOMEM;
 
 	mutex_lock(&bdev->bd_mutex);
-	/* Do not reset an active device or claimed device */
-	if (bdev->bd_openers || zram->claim) {
-		mutex_unlock(&bdev->bd_mutex);
-		bdput(bdev);
-		return -EBUSY;
-	}
-
-	/* From now on, anyone can't open /dev/zram[0-9] */
-	zram->claim = true;
+	if (bdev->bd_openers)
+		ret = -EBUSY;
+	else
+		zram_reset_device(zram);
 	mutex_unlock(&bdev->bd_mutex);
-
-	/* Make sure all the pending I/O are finished */
-	fsync_bdev(bdev);
-	zram_reset_device(zram);
 	bdput(bdev);
 
-	mutex_lock(&bdev->bd_mutex);
-	zram->claim = false;
-	mutex_unlock(&bdev->bd_mutex);
-
-	return len;
-}
-
-static int zram_open(struct block_device *bdev, fmode_t mode)
-{
-	int ret = 0;
-	struct zram *zram;
-
-	WARN_ON(!mutex_is_locked(&bdev->bd_mutex));
-
-	zram = bdev->bd_disk->private_data;
-	/* zram was claimed to reset so open request fails */
-	if (zram->claim)
-		ret = -EBUSY;
-
-	return ret;
+	return ret ? ret : len;
 }
 
 static const struct block_device_operations zram_devops = {
-	.open = zram_open,
 	.submit_bio = zram_submit_bio,
 	.swap_slot_free_notify = zram_slot_free_notify,
 	.rw_page = zram_rw_page,
@@ -1821,7 +1790,6 @@ static const struct block_device_operations zram_devops = {
 };
 
 static const struct block_device_operations zram_wb_devops = {
-	.open = zram_open,
 	.submit_bio = zram_submit_bio,
 	.swap_slot_free_notify = zram_slot_free_notify,
 	.owner = THIS_MODULE
@@ -1972,34 +1940,32 @@ static int zram_add(void)
 	return ret;
 }
 
-static int zram_remove(struct zram *zram)
+static bool zram_busy(struct zram *zram)
 {
 	struct block_device *bdev;
+	bool busy = false;
 
 	bdev = bdget_disk(zram->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-
-	mutex_lock(&bdev->bd_mutex);
-	if (bdev->bd_openers || zram->claim) {
-		mutex_unlock(&bdev->bd_mutex);
+	if (bdev) {
+		if (bdev->bd_openers)
+			busy = true;
 		bdput(bdev);
-		return -EBUSY;
 	}
 
-	zram->claim = true;
-	mutex_unlock(&bdev->bd_mutex);
+	return busy;
+}
 
-	zram_debugfs_unregister(zram);
+static int zram_remove(struct zram *zram)
+{
+	if (zram_busy(zram))
+		return -EBUSY;
 
-	/* Make sure all the pending I/O are finished */
-	fsync_bdev(bdev);
+	del_gendisk(zram->disk);
+	zram_debugfs_unregister(zram);
 	zram_reset_device(zram);
-	bdput(bdev);
 
 	pr_info("Removed device: %s\n", zram->disk->disk_name);
 
-	del_gendisk(zram->disk);
 	blk_cleanup_queue(zram->disk->queue);
 	put_disk(zram->disk);
 	kfree(zram);
-- 
2.29.2
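The ordering argument in the commit message can be modeled in a few lines of
userspace C.  This is only an illustrative sketch, not kernel code: the types
and function names (`fake_disk`, `remove_model`, etc.) are made up here to show
why a separate claim flag becomes unnecessary once deletion is ordered before
reset.

```c
/* Userspace model of the simplified zram teardown: no "claim" flag is
 * needed once del_gendisk() is ordered first, because deleting the disk
 * already blocks new opens.  All names here are illustrative. */
#include <assert.h>
#include <stdbool.h>

struct fake_disk {
	int openers;      /* how many fds currently hold the device open */
	bool deleted;     /* set by the del_gendisk() stand-in */
	bool reset_done;
};

static bool fake_open(struct fake_disk *d)
{
	if (d->deleted)
		return false;   /* open fails: disk is gone */
	d->openers++;
	return true;
}

static void del_gendisk_model(struct fake_disk *d)
{
	d->deleted = true;      /* no new opens can succeed after this */
}

/* Mirrors the new zram_remove(): bail out if busy, then delete first
 * and reset after, with no claim flag in between. */
static int remove_model(struct fake_disk *d)
{
	if (d->openers)
		return -16;          /* -EBUSY */
	del_gendisk_model(d);
	d->reset_done = true;    /* safe: nothing can be open any more */
	return 0;
}

static int run_model(void)
{
	struct fake_disk d = {0, false, false};

	if (!fake_open(&d))
		return 1;
	if (remove_model(&d) != -16)	/* busy while an opener exists */
		return 2;
	d.openers = 0;			/* last opener goes away */
	if (remove_model(&d) != 0)
		return 3;
	if (fake_open(&d))		/* deleted: opens must now fail */
		return 4;
	if (!d.reset_done)
		return 5;
	return 0;
}
```

Run `run_model()`: it returns 0 only if removal is refused while the device is
open, and new opens are refused once deletion has happened, which is exactly
the invariant the patch relies on.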



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28217.57223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9v-00059G-0q; Mon, 16 Nov 2020 15:11:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28217.57223; Mon, 16 Nov 2020 15:11:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9t-00056f-3w; Mon, 16 Nov 2020 15:11:09 +0000
Received: by outflank-mailman (input) for mailman id 28217;
 Mon, 16 Nov 2020 15:11:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefyx-0006ni-1o
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e3ee4f9-c567-4ec4-8a64-c3e715411e07;
 Mon, 16 Nov 2020 14:58:45 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxc-0003nE-NO; Mon, 16 Nov 2020 14:58:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefyx-0006ni-1o
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 14:59:51 +0000
X-Inumbo-ID: 7e3ee4f9-c567-4ec4-8a64-c3e715411e07
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7e3ee4f9-c567-4ec4-8a64-c3e715411e07;
	Mon, 16 Nov 2020 14:58:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=CN10BGv9dKVd+RKbbdTxT2BvIyv06ZDMhow0ngUqFVs=; b=Wf8/hkFaNLSd+gPM2W2r4fv5vP
	kTvVoS3zDf7gbVd/+sUJtkyAqA5lUq9yQxOzaZbEyDVX2S9VcEH6akrikD/EHAskWC5e3u5GFwo94
	pvyK6vF1xv0yoWZW0Trp2YhTMDwX2yOQJVSOA1j8iKDivtRGQvmQELbe3fJn2yUvmr1OA274aOkkL
	de9bCAvfPrVUPcqJAHSKnKZg7g+R46oLX8YCc9Mk8CdlBH2cDMS5JaO9BjoWNQj8G+Bge/APmZ6vq
	FKHpBXoJF7X4vPleICzEeG/Mc88InJcGvDP+Fhh9lnoF/XRSuvgZLsqj2YVXvQLELtvaQnptV5v8a
	VOhU/gOA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxc-0003nE-NO; Mon, 16 Nov 2020 14:58:29 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 14/78] nvme: use set_capacity_and_notify in nvme_set_queue_dying
Date: Mon, 16 Nov 2020 15:57:05 +0100
Message-Id: <20201116145809.410558-15-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the block layer helper to update both the disk and block device
sizes.  Contrary to the name, no notification is sent in this case,
as a size of 0 is special-cased.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 6c144e748f8cae..bc89e8659c403f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -93,16 +93,6 @@ static void nvme_put_subsystem(struct nvme_subsystem *subsys);
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
 					   unsigned nsid);
 
-static void nvme_update_bdev_size(struct gendisk *disk)
-{
-	struct block_device *bdev = bdget_disk(disk, 0);
-
-	if (bdev) {
-		bd_set_nr_sectors(bdev, get_capacity(disk));
-		bdput(bdev);
-	}
-}
-
 /*
  * Prepare a queue for teardown.
  *
@@ -119,8 +109,7 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 	blk_set_queue_dying(ns->queue);
 	blk_mq_unquiesce_queue(ns->queue);
 
-	set_capacity(ns->disk, 0);
-	nvme_update_bdev_size(ns->disk);
+	set_capacity_and_notify(ns->disk, 0);
 }
 
 static void nvme_queue_scan(struct nvme_ctrl *ctrl)
-- 
2.29.2
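The "size 0 is special cased" remark in the commit message can be sketched as
follows.  This is a hypothetical userspace model, not the real helper (which
lives in the block layer); the struct and function names are invented here, and
the exact suppression condition is an assumption based only on what this commit
message states.

```c
/* Userspace sketch of the behavior the commit message relies on: the
 * helper updates both the gendisk and block device sizes, but suppresses
 * the change notification when the new capacity is 0 (the teardown
 * case).  All names here are illustrative. */
#include <assert.h>
#include <stdbool.h>

struct model_disk {
	long long disk_sectors;   /* gendisk capacity */
	long long bdev_sectors;   /* block device size, kept in sync */
};

/* Returns true if a change notification would be sent. */
static bool set_capacity_and_notify_model(struct model_disk *d,
					  long long size)
{
	long long old = d->disk_sectors;

	d->disk_sectors = size;   /* set_capacity() part */
	d->bdev_sectors = size;   /* bdev size update folded in */
	/* size 0 is special cased: shrinking to zero means teardown,
	 * not a resize the rest of the system needs to hear about */
	return size != old && size != 0;
}

static int run_checks(void)
{
	struct model_disk d = {100, 100};

	if (!set_capacity_and_notify_model(&d, 200))
		return 1;	/* a real resize notifies */
	if (set_capacity_and_notify_model(&d, 200))
		return 2;	/* no change, no notification */
	if (set_capacity_and_notify_model(&d, 0))
		return 3;	/* going to zero: suppressed */
	if (d.disk_sectors != 0 || d.bdev_sectors != 0)
		return 4;	/* but both sizes are still updated */
	return 0;
}
```

`run_checks()` returning 0 confirms the property the nvme conversion depends
on: the sizes are always synchronized, while the zero-capacity teardown path
stays silent.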



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28221.57232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9y-0005Ln-Fh; Mon, 16 Nov 2020 15:11:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28221.57232; Mon, 16 Nov 2020 15:11:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9x-0005Jp-1h; Mon, 16 Nov 2020 15:11:13 +0000
Received: by outflank-mailman (input) for mailman id 28221;
 Mon, 16 Nov 2020 15:11:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg18-0006ni-64
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d239da22-5f61-4b54-af6f-9324bbc9feb8;
 Mon, 16 Nov 2020 14:59:22 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyD-0003xZ-0p; Mon, 16 Nov 2020 14:59:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg18-0006ni-64
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:06 +0000
X-Inumbo-ID: d239da22-5f61-4b54-af6f-9324bbc9feb8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d239da22-5f61-4b54-af6f-9324bbc9feb8;
	Mon, 16 Nov 2020 14:59:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=ISq8BehFRmfA7X2bUEKuKNCEzkzF+ixwO2P3vH2Ca4g=; b=d73ns3M7F3JRR21anbEXFxuf9g
	A1xWTv4tBpopyBGHuve4hdPzPLwGluE0C2j9u+A4uk1WzPvrWwUq+wvI0IfwLikycPsXYuDfNoqOe
	m8TfyEmDYqxSeCEPv8OHRnRxlZa2BK0Mb/DUIm1Z0f6pGaDzcvBobxYIJlmaIdSdeUSaf89GVeTXM
	JufiFirvEUqjxlNXxHVTpo2WhwYvjUKrOkF5h55ecknCMyA1me/y2F2FdynUFWj9zZc5XKzC9ijl9
	HJGyPVpfezAKOMvhyISvMkEFsLN1uvdIFVszTvvkCN35wOb/aMbNL+Xc75GBLtiIukt+24033x1tu
	55Dk0zgg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyD-0003xZ-0p; Mon, 16 Nov 2020 14:59:05 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 39/78] block: add an optional probe callback to major_names
Date: Mon, 16 Nov 2020 15:57:30 +0100
Message-Id: <20201116145809.410558-40-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a callback to the major_names array that allows a driver to override
how to probe for a dev_t that doesn't currently have a gendisk registered.
This will help separate the lookup of the gendisk by dev_t from the probe
action for a dev_t that is not currently registered.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 block/genhd.c         | 21 ++++++++++++++++++---
 include/linux/genhd.h |  5 ++++-
 2 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 8391e7d83a6920..dc8690bc281c16 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -399,6 +399,7 @@ static struct blk_major_name {
 	struct blk_major_name *next;
 	int major;
 	char name[16];
+	void (*probe)(dev_t devt);
 } *major_names[BLKDEV_MAJOR_HASH_SIZE];
 static DEFINE_MUTEX(major_names_lock);
 
@@ -441,7 +442,8 @@ void blkdev_show(struct seq_file *seqf, off_t offset)
  * See Documentation/admin-guide/devices.txt for the list of allocated
  * major numbers.
  */
-int register_blkdev(unsigned int major, const char *name)
+int __register_blkdev(unsigned int major, const char *name,
+		void (*probe)(dev_t devt))
 {
 	struct blk_major_name **n, *p;
 	int index, ret = 0;
@@ -480,6 +482,7 @@ int register_blkdev(unsigned int major, const char *name)
 	}
 
 	p->major = major;
+	p->probe = probe;
 	strlcpy(p->name, name, sizeof(p->name));
 	p->next = NULL;
 	index = major_to_index(major);
@@ -502,8 +505,7 @@ int register_blkdev(unsigned int major, const char *name)
 	mutex_unlock(&major_names_lock);
 	return ret;
 }
-
-EXPORT_SYMBOL(register_blkdev);
+EXPORT_SYMBOL(__register_blkdev);
 
 void unregister_blkdev(unsigned int major, const char *name)
 {
@@ -1030,6 +1032,19 @@ static ssize_t disk_badblocks_store(struct device *dev,
 
 static void request_gendisk_module(dev_t devt)
 {
+	unsigned int major = MAJOR(devt);
+	struct blk_major_name **n;
+
+	mutex_lock(&major_names_lock);
+	for (n = &major_names[major_to_index(major)]; *n; n = &(*n)->next) {
+		if ((*n)->major == major && (*n)->probe) {
+			(*n)->probe(devt);
+			mutex_unlock(&major_names_lock);
+			return;
+		}
+	}
+	mutex_unlock(&major_names_lock);
+
 	if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
 		/* Make old-style 2.4 aliases work */
 		request_module("block-major-%d", MAJOR(devt));
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 8427ad8bef520d..04f6a6bf577a90 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -366,7 +366,10 @@ extern void blk_unregister_region(dev_t devt, unsigned long range);
 
 #define alloc_disk(minors) alloc_disk_node(minors, NUMA_NO_NODE)
 
-int register_blkdev(unsigned int major, const char *name);
+int __register_blkdev(unsigned int major, const char *name,
+		void (*probe)(dev_t devt));
+#define register_blkdev(major, name) \
+	__register_blkdev(major, name, NULL)
 void unregister_blkdev(unsigned int major, const char *name);
 
 void revalidate_disk_size(struct gendisk *disk, bool verbose);
-- 
2.29.2
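The lookup path this patch adds to request_gendisk_module() can be sketched in
standalone C.  This is a userspace model only: the hash-table layout mirrors
the blk_major_name chains in the diff above, but all names (`major_entry`,
`probe_major`, etc.) are invented for illustration.

```c
/* Userspace sketch of the per-major probe hook added in this patch:
 * a hashed list of registered majors, each optionally carrying a
 * probe callback that is preferred over the module-loading fallback.
 * All names here are illustrative, not the kernel's symbols. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAJOR_HASH_SIZE 23

struct major_entry {
	struct major_entry *next;
	int major;
	char name[16];
	void (*probe)(int devt);  /* NULL means "no probe hook" */
};

static struct major_entry *table[MAJOR_HASH_SIZE];

static int major_to_index(int major)
{
	return major % MAJOR_HASH_SIZE;
}

/* register_blkdev(major, name) becomes __register_blkdev(..., NULL) */
static void register_major(struct major_entry *e)
{
	int idx = major_to_index(e->major);

	e->next = table[idx];
	table[idx] = e;
}

/* Mirrors request_gendisk_module(): if the major has a probe hook,
 * call it and report success; otherwise the caller falls back to
 * loading a module by alias. */
static bool probe_major(int major, int devt)
{
	struct major_entry *e;

	for (e = table[major_to_index(major)]; e; e = e->next) {
		if (e->major == major && e->probe) {
			e->probe(devt);
			return true;
		}
	}
	return false;	/* caller would use request_module() here */
}

static int probed_devt = -1;

static void demo_probe(int devt)
{
	probed_devt = devt;
}

static int run_demo(void)
{
	static struct major_entry demo = { NULL, 259, "demo", demo_probe };

	register_major(&demo);
	if (!probe_major(259, 1))	/* hook found and invoked */
		return 1;
	if (probe_major(8, 1))		/* no entry: fall back */
		return 2;
	return probed_devt == 1 ? 0 : 3;
}
```

`run_demo()` returning 0 shows the split the commit message describes: the
table walk finds the driver's hook for a registered major and only
unregistered majors reach the legacy fallback path.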



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28226.57242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA0-0005SW-Dt; Mon, 16 Nov 2020 15:11:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28226.57242; Mon, 16 Nov 2020 15:11:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keg9z-0005QR-6S; Mon, 16 Nov 2020 15:11:15 +0000
Received: by outflank-mailman (input) for mailman id 28226;
 Mon, 16 Nov 2020 15:11:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0Z-0006ni-5F
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b67c46aa-153c-442c-a19d-ac7fb059517c;
 Mon, 16 Nov 2020 14:59:12 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy3-0003uy-0n; Mon, 16 Nov 2020 14:58:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0Z-0006ni-5F
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:31 +0000
X-Inumbo-ID: b67c46aa-153c-442c-a19d-ac7fb059517c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b67c46aa-153c-442c-a19d-ac7fb059517c;
	Mon, 16 Nov 2020 14:59:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=xZanwokwrC93vQLNLNyxIE9cwRexAjHIbs3YFwVPsFg=; b=a7d0WKhPNBtgQ87fZA08kVqe1y
	s8Sr2lxZqA4u53myzeDnnzR5qIAz6vA3egUoMXd0PFkUGzFv/UxEUPI892XyeOmB7FRMPywqZgzFQ
	fvJvVKuwSCKubwUgCeDnhSdQj3zjceiRZRQvdQW64tflG1Ubobb0xgCwA9qgVb3P8bufG21KACqxF
	LdxKiVKNAom0BaCGTkLFBbfzny15lEtGihDxBpVpKvP1ZCnxYnkEVNdSSfERf5h6KpD5Xp1Gipw2N
	P3IlVQJDBBHLI1ckJlQziKc3EInlMrYDiIlMfsnRoAyesz+vylm3lK1KNLRuTKMEj958SrIUpqIbd
	95x8ALqA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefy3-0003uy-0n; Mon, 16 Nov 2020 14:58:55 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 32/78] block: remove set_device_ro
Date: Mon, 16 Nov 2020 15:57:23 +0100
Message-Id: <20201116145809.410558-33-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Fold set_device_ro into its only remaining caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c         | 7 -------
 block/ioctl.c         | 2 +-
 include/linux/genhd.h | 1 -
 3 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 8c350fecfe8bfe..b0f0b0cac9aa7f 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1843,13 +1843,6 @@ static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 	kobject_uevent_env(&disk_to_dev(gd)->kobj, KOBJ_CHANGE, envp);
 }
 
-void set_device_ro(struct block_device *bdev, int flag)
-{
-	bdev->bd_part->policy = flag;
-}
-
-EXPORT_SYMBOL(set_device_ro);
-
 void set_disk_ro(struct gendisk *disk, int flag)
 {
 	struct disk_part_iter piter;
diff --git a/block/ioctl.c b/block/ioctl.c
index 96cb4544736468..04255dc5f3bff3 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -371,7 +371,7 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 		if (ret)
 			return ret;
 	}
-	set_device_ro(bdev, n);
+	bdev->bd_part->policy = n;
 	return 0;
 }
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 4b22bfd9336e1a..8427ad8bef520d 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -304,7 +304,6 @@ extern void del_gendisk(struct gendisk *gp);
 extern struct gendisk *get_gendisk(dev_t dev, int *partno);
 extern struct block_device *bdget_disk(struct gendisk *disk, int partno);
 
-extern void set_device_ro(struct block_device *bdev, int flag);
 extern void set_disk_ro(struct gendisk *disk, int flag);
 
 static inline int get_disk_ro(struct gendisk *disk)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28237.57259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA3-0005ey-RT; Mon, 16 Nov 2020 15:11:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28237.57259; Mon, 16 Nov 2020 15:11:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA3-0005d1-54; Mon, 16 Nov 2020 15:11:19 +0000
Received: by outflank-mailman (input) for mailman id 28237;
 Mon, 16 Nov 2020 15:11:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3O-0006ni-Bm
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38b0210e-76fc-4558-95d1-8f8c3e286353;
 Mon, 16 Nov 2020 14:59:56 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyq-0004E0-6d; Mon, 16 Nov 2020 14:59:44 +0000
X-Inumbo-ID: 38b0210e-76fc-4558-95d1-8f8c3e286353
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=zd1S9ob6uvhOklVRkrOeSjqFIX0gjs8FIGa+SnPo4cs=; b=KMc7aGpwgD+lz4Ap0QfJBSwvVZ
	iRElJGYPaXtvm2l9EE0QgJGZsLEMjMUI0TNdT5TzgHVBvn7Zkv6OmXwKmUcW5NNi2zdVuHdJ996Ac
	OwvmfOLIphJ8QVS1fk75k9Alf/HM3+yte1sC2Cw0u4ljnVUY+8L5tiyT4eYi4wo/A+Hc0PUaBTH0m
	JjNk8TEGb2obEe56crgiqesbff6AXgdDuyaMinswXcIOBb/Qar6Y3kJ12qgYT1OQ66sP9wZekpAU1
	SHqAycA5BdeglD3wmyoPmCAafo1AzTrZyURw0bkbcfVj6u7/3WjF0X4AlPQXPGdveo8XwIrYcJs7A
	LRojdTPA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 65/78] dm: remove the block_device reference in struct mapped_device
Date: Mon, 16 Nov 2020 15:57:56 +0100
Message-Id: <20201116145809.410558-66-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Get rid of the long-lasting struct block_device reference in
struct mapped_device.  The only remaining user is the freeze code,
where we can trivially look up the block device at freeze time
and release the reference at thaw time.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm-core.h |  2 --
 drivers/md/dm.c      | 22 +++++++++++-----------
 2 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index d522093cb39dda..b1b400ed76fe90 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -107,8 +107,6 @@ struct mapped_device {
 	/* kobject and completion */
 	struct dm_kobject_holder kobj_holder;
 
-	struct block_device *bdev;
-
 	struct dm_stats stats;
 
 	/* for blk-mq request-based DM support */
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 6d7eb72d41f9ea..c789ffea2badde 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1744,11 +1744,6 @@ static void cleanup_mapped_device(struct mapped_device *md)
 
 	cleanup_srcu_struct(&md->io_barrier);
 
-	if (md->bdev) {
-		bdput(md->bdev);
-		md->bdev = NULL;
-	}
-
 	mutex_destroy(&md->suspend_lock);
 	mutex_destroy(&md->type_lock);
 	mutex_destroy(&md->table_devices_lock);
@@ -1840,10 +1835,6 @@ static struct mapped_device *alloc_dev(int minor)
 	if (!md->wq)
 		goto bad;
 
-	md->bdev = bdget_disk(md->disk, 0);
-	if (!md->bdev)
-		goto bad;
-
 	dm_stats_init(&md->stats);
 
 	/* Populate the mapping, nobody knows we exist yet */
@@ -2384,12 +2375,17 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
  */
 static int lock_fs(struct mapped_device *md)
 {
+	struct block_device *bdev;
 	int r;
 
 	WARN_ON(md->frozen_sb);
 
-	md->frozen_sb = freeze_bdev(md->bdev);
+	bdev = bdget_disk(md->disk, 0);
+	if (!bdev)
+		return -ENOMEM;
+	md->frozen_sb = freeze_bdev(bdev);
 	if (IS_ERR(md->frozen_sb)) {
+		bdput(bdev);
 		r = PTR_ERR(md->frozen_sb);
 		md->frozen_sb = NULL;
 		return r;
@@ -2402,10 +2398,14 @@ static int lock_fs(struct mapped_device *md)
 
 static void unlock_fs(struct mapped_device *md)
 {
+	struct block_device *bdev;
+
 	if (!test_bit(DMF_FROZEN, &md->flags))
 		return;
 
-	thaw_bdev(md->bdev, md->frozen_sb);
+	bdev = md->frozen_sb->s_bdev;
+	thaw_bdev(bdev, md->frozen_sb);
+	bdput(bdev);
 	md->frozen_sb = NULL;
 	clear_bit(DMF_FROZEN, &md->flags);
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28239.57267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA6-0005kW-5Y; Mon, 16 Nov 2020 15:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28239.57267; Mon, 16 Nov 2020 15:11:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA4-0005if-I6; Mon, 16 Nov 2020 15:11:20 +0000
Received: by outflank-mailman (input) for mailman id 28239;
 Mon, 16 Nov 2020 15:11:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg3s-0006ni-Ca
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a5e27aa-54f3-42e4-b550-1d2c0984b14c;
 Mon, 16 Nov 2020 15:00:04 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyx-0004Gy-FJ; Mon, 16 Nov 2020 14:59:51 +0000
X-Inumbo-ID: 7a5e27aa-54f3-42e4-b550-1d2c0984b14c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=w/riLxj3CZ4Q9Gfmyagadzl4IipHmB+3k5ctHYvmp9E=; b=e3n3XdEwqVtwU0sWR2oLSHdI9h
	Xu9b8omu+qurHVQrx252HjMdGxtFSRrdDkyT9GYTFfiFpDBF97BnrQ6pC9jcufNPkSyTfd8BODYdX
	mYLCx1XsAYtOkGGXjkj8yFsJ40uwDdct6R70OllvZhcvCXH93L5Sy7HtA4x/D+xebzZ/VJs2q24S4
	FQey8F8Myn46udpTxUuORZ0JHJzE8pHidG4gjeN/ZO46kjyZbtUDemfOIoZFFE57g1ZU/mXnPOfsQ
	jaQV1FXg1vtV5lOLY7dsZ2yD+ymN9pQh7kEkiHqpz3PMfIzqRqJcSHYnaAOhMjP8HnXTokU2ZHHJv
	ZBRFQ0eQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 70/78] block: replace bd_mutex with a per-gendisk mutex
Date: Mon, 16 Nov 2020 15:58:01 +0100
Message-Id: <20201116145809.410558-71-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

bd_mutex is primarily used for synchronizing the block device open and
release path, which recurses from partitions to the whole disk device.
The fact that we have two locks makes life unnecessarily complex due
to lock ordering constraints.  Replace the two levels of locking with a
single mutex in the gendisk structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c                   |  7 ++--
 block/ioctl.c                   |  4 +-
 block/partitions/core.c         | 22 +++++-----
 drivers/block/loop.c            | 14 +++----
 drivers/block/xen-blkfront.c    |  8 ++--
 drivers/block/zram/zram_drv.c   |  4 +-
 drivers/block/zram/zram_drv.h   |  2 +-
 drivers/md/md.h                 |  7 +---
 drivers/s390/block/dasd_genhd.c |  8 ++--
 drivers/scsi/sd.c               |  4 +-
 fs/block_dev.c                  | 71 +++++++++++++++++----------------
 fs/btrfs/volumes.c              |  2 +-
 fs/super.c                      |  8 ++--
 include/linux/blk_types.h       |  1 -
 include/linux/genhd.h           |  1 +
 15 files changed, 80 insertions(+), 83 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 7832968ce3fbb7..999f7142b04e7d 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1350,7 +1350,7 @@ static const struct attribute_group *disk_attr_groups[] = {
  * original ptbl is freed using RCU callback.
  *
  * LOCKING:
- * Matching bd_mutex locked or the caller is the only user of @disk.
+ * disk->mutex locked or the caller is the only user of @disk.
  */
 static void disk_replace_part_tbl(struct gendisk *disk,
 				  struct disk_part_tbl *new_ptbl)
@@ -1375,7 +1375,7 @@ static void disk_replace_part_tbl(struct gendisk *disk,
  * uses RCU to allow unlocked dereferencing for stats and other stuff.
  *
  * LOCKING:
- * Matching bd_mutex locked or the caller is the only user of @disk.
+ * disk->mutex locked or the caller is the only user of @disk.
  * Might sleep.
  *
  * RETURNS:
@@ -1616,6 +1616,7 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk->part0.dkstats)
 		goto out_bdput;
 
+	mutex_init(&disk->mutex);
 	init_rwsem(&disk->lookup_sem);
 	disk->node_id = node_id;
 	if (disk_expand_part_tbl(disk, 0)) {
@@ -1842,7 +1843,7 @@ void disk_unblock_events(struct gendisk *disk)
  * doesn't clear the events from @disk->ev.
  *
  * CONTEXT:
- * If @mask is non-zero must be called with bdev->bd_mutex held.
+ * If @mask is non-zero must be called with disk->mutex held.
  */
 void disk_flush_events(struct gendisk *disk, unsigned int mask)
 {
diff --git a/block/ioctl.c b/block/ioctl.c
index 22f394d118c302..18adf9b16a30f6 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -99,9 +99,9 @@ static int blkdev_reread_part(struct block_device *bdev)
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 	ret = bdev_disk_changed(bdev, false);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 
 	return ret;
 }
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 573ef5a03fc104..e50b5ca17df550 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -328,7 +328,7 @@ int hd_ref_init(struct hd_struct *part)
 }
 
 /*
- * Must be called either with bd_mutex held, before a disk can be opened or
+ * Must be called either with disk->mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
 void delete_partition(struct hd_struct *part)
@@ -363,7 +363,7 @@ static ssize_t whole_disk_show(struct device *dev,
 static DEVICE_ATTR(whole_disk, 0444, whole_disk_show, NULL);
 
 /*
- * Must be called either with bd_mutex held, before a disk can be opened or
+ * Must be called either with disk->mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
 static struct hd_struct *add_partition(struct gendisk *disk, int partno,
@@ -530,15 +530,15 @@ int bdev_add_partition(struct block_device *bdev, int partno,
 {
 	struct hd_struct *part;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 	if (partition_overlaps(bdev->bd_disk, start, length, -1)) {
-		mutex_unlock(&bdev->bd_mutex);
+		mutex_unlock(&bdev->bd_disk->mutex);
 		return -EBUSY;
 	}
 
 	part = add_partition(bdev->bd_disk, partno, start, length,
 			ADDPART_FLAG_NONE, NULL);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 	return PTR_ERR_OR_ZERO(part);
 }
 
@@ -552,8 +552,7 @@ int bdev_del_partition(struct block_device *bdev, int partno)
 	if (!bdevp)
 		return -ENXIO;
 
-	mutex_lock(&bdevp->bd_mutex);
-	mutex_lock_nested(&bdev->bd_mutex, 1);
+	mutex_lock(&bdev->bd_disk->mutex);
 
 	ret = -ENXIO;
 	part = disk_get_part(bdev->bd_disk, partno);
@@ -570,8 +569,7 @@ int bdev_del_partition(struct block_device *bdev, int partno)
 	delete_partition(part);
 	ret = 0;
 out_unlock:
-	mutex_unlock(&bdev->bd_mutex);
-	mutex_unlock(&bdevp->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 	bdput(bdevp);
 	if (part)
 		disk_put_part(part);
@@ -594,8 +592,7 @@ int bdev_resize_partition(struct block_device *bdev, int partno,
 	if (!bdevp)
 		goto out_put_part;
 
-	mutex_lock(&bdevp->bd_mutex);
-	mutex_lock_nested(&bdev->bd_mutex, 1);
+	mutex_lock(&bdev->bd_disk->mutex);
 
 	ret = -EINVAL;
 	if (start != part->start_sect)
@@ -609,8 +606,7 @@ int bdev_resize_partition(struct block_device *bdev, int partno,
 
 	ret = 0;
 out_unlock:
-	mutex_unlock(&bdevp->bd_mutex);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 	bdput(bdevp);
 out_put_part:
 	disk_put_part(part);
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9d2587f6167cd8..91e47c5b52f1cb 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -651,9 +651,9 @@ static void loop_reread_partitions(struct loop_device *lo,
 {
 	int rc;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 	rc = bdev_disk_changed(bdev, false);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 	if (rc)
 		pr_warn("%s: partition scan of loop%d (%s) failed (rc=%d)\n",
 			__func__, lo->lo_number, lo->lo_file_name, rc);
@@ -746,7 +746,7 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
 	mutex_unlock(&loop_ctl_mutex);
 	/*
 	 * We must drop file reference outside of loop_ctl_mutex as dropping
-	 * the file ref can take bd_mutex which creates circular locking
+	 * the file ref can take disk->mutex which creates circular locking
 	 * dependency.
 	 */
 	fput(old_file);
@@ -1258,7 +1258,7 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 	mutex_unlock(&loop_ctl_mutex);
 	if (partscan) {
 		/*
-		 * bd_mutex has been held already in release path, so don't
+		 * disk->mutex has been held already in release path, so don't
 		 * acquire it if this function is called in such case.
 		 *
 		 * If the reread partition isn't from release path, lo_refcnt
@@ -1266,10 +1266,10 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 		 * current holder is released.
 		 */
 		if (!release)
-			mutex_lock(&bdev->bd_mutex);
+			mutex_lock(&bdev->bd_disk->mutex);
 		err = bdev_disk_changed(bdev, false);
 		if (!release)
-			mutex_unlock(&bdev->bd_mutex);
+			mutex_unlock(&bdev->bd_disk->mutex);
 		if (err)
 			pr_warn("%s: partition scan of loop%d failed (rc=%d)\n",
 				__func__, lo_number, err);
@@ -1297,7 +1297,7 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 	 * Need not hold loop_ctl_mutex to fput backing file.
 	 * Calling fput holding loop_ctl_mutex triggers a circular
 	 * lock dependency possibility warning as fput can take
-	 * bd_mutex which is usually taken before loop_ctl_mutex.
+	 * disk->mutex which is usually taken before loop_ctl_mutex.
 	 */
 	if (filp)
 		fput(filp);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 79521e33d30ed5..5b1f99ca77b734 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2162,7 +2162,7 @@ static void blkfront_closing(struct blkfront_info *info)
 		return;
 	}
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&info->gd->mutex);
 
 	if (bdev->bd_openers) {
 		xenbus_dev_error(xbdev, -EBUSY,
@@ -2173,7 +2173,7 @@ static void blkfront_closing(struct blkfront_info *info)
 		xenbus_frontend_closed(xbdev);
 	}
 
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&info->gd->mutex);
 	bdput(bdev);
 }
 
@@ -2536,7 +2536,7 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 	 * isn't closed yet, we let release take care of it.
 	 */
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&info->gd->mutex);
 	info = disk->private_data;
 
 	dev_warn(disk_to_dev(disk),
@@ -2551,7 +2551,7 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 		mutex_unlock(&blkfront_mutex);
 	}
 
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&info->gd->mutex);
 	bdput(bdev);
 
 	return 0;
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index d00b5761ec0b21..0b156f09e208df 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1762,12 +1762,12 @@ static ssize_t reset_store(struct device *dev,
 	if (!bdev)
 		return -ENOMEM;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&zram->disk->mutex);
 	if (bdev->bd_openers)
 		ret = -EBUSY;
 	else
 		zram_reset_device(zram);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&zram->disk->mutex);
 	bdput(bdev);
 
 	return ret ? ret : len;
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 712354a4207c77..b300632c17c172 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -111,7 +111,7 @@ struct zram {
 	/*
 	 * zram is claimed so open request will be failed
 	 */
-	bool claim; /* Protected by bdev->bd_mutex */
+	bool claim; /* Protected by bdev->bd_disk->mutex */
 	struct file *backing_dev;
 #ifdef CONFIG_ZRAM_WRITEBACK
 	spinlock_t wb_limit_lock;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index ccfb69868c2ec9..28712d3498de2c 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -394,11 +394,8 @@ struct mddev {
 	/* 'open_mutex' avoids races between 'md_open' and 'do_md_stop', so
 	 * that we are never stopping an array while it is open.
 	 * 'reconfig_mutex' protects all other reconfiguration.
-	 * These locks are separate due to conflicting interactions
-	 * with bdev->bd_mutex.
-	 * Lock ordering is:
-	 *  reconfig_mutex -> bd_mutex
-	 *  bd_mutex -> open_mutex:  e.g. __blkdev_get -> md_open
+	 * These locks are separate due to historically conflicting
+	 * interactions with block layer locks.
 	 */
 	struct mutex			open_mutex;
 	struct mutex			reconfig_mutex;
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index a9698fba9b76ce..7b5f475b500e8c 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -109,9 +109,9 @@ int dasd_scan_partitions(struct dasd_block *block)
 		return -ENODEV;
 	}
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 	rc = bdev_disk_changed(bdev, false);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 	if (rc)
 		DBF_DEV_EVENT(DBF_ERR, block->base,
 				"scan partitions error, rc %d", rc);
@@ -145,9 +145,9 @@ void dasd_destroy_partitions(struct dasd_block *block)
 	bdev = block->bdev;
 	block->bdev = NULL;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 	blk_drop_partitions(bdev);
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 
 	/* Matching blkdev_put to the blkdev_get in dasd_scan_partitions. */
 	blkdev_put(bdev, FMODE_READ);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 679c2c02504763..68c752ef3ed575 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1398,7 +1398,7 @@ static void sd_uninit_command(struct scsi_cmnd *SCpnt)
  *	In the latter case @inode and @filp carry an abridged amount
  *	of information as noted above.
  *
- *	Locking: called with bdev->bd_mutex held.
+ *	Locking: called with bdev->bd_disk->mutex held.
  **/
 static int sd_open(struct block_device *bdev, fmode_t mode)
 {
@@ -1474,7 +1474,7 @@ static int sd_open(struct block_device *bdev, fmode_t mode)
  *	Note: may block (uninterruptible) if error recovery is underway
  *	on this disk.
  *
- *	Locking: called with bdev->bd_mutex held.
+ *	Locking: called with bdev->bd_disk->mutex held.
  **/
 static void sd_release(struct gendisk *disk, fmode_t mode)
 {
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 14b6dbfa9dda2a..4b59ace9632f65 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -803,7 +803,6 @@ static void init_once(void *foo)
 	struct block_device *bdev = &ei->bdev;
 
 	memset(bdev, 0, sizeof(*bdev));
-	mutex_init(&bdev->bd_mutex);
 #ifdef CONFIG_SYSFS
 	INIT_LIST_HEAD(&bdev->bd_holder_disks);
 #endif
@@ -1204,7 +1203,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	struct bd_holder_disk *holder;
 	int ret = 0;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 
 	WARN_ON_ONCE(!bdev->bd_holder);
 
@@ -1249,7 +1248,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 out_free:
 	kfree(holder);
 out_unlock:
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(bd_link_disk_holder);
@@ -1268,7 +1267,7 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 {
 	struct bd_holder_disk *holder;
 
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 
 	holder = bd_find_holder_disk(bdev, disk);
 
@@ -1281,7 +1280,7 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 		kfree(holder);
 	}
 
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 }
 EXPORT_SYMBOL_GPL(bd_unlink_disk_holder);
 #endif
@@ -1293,7 +1292,7 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
 	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 
-	lockdep_assert_held(&bdev->bd_mutex);
+	lockdep_assert_held(&bdev->bd_disk->mutex);
 
 	clear_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);
 
@@ -1357,13 +1356,6 @@ static int bdev_get_gendisk(struct gendisk *disk)
 	return -ENXIO;
 }
 
-/*
- * bd_mutex locking:
- *
- *  mutex_lock(part->bd_mutex)
- *    mutex_lock_nested(whole->bd_mutex, 1)
- */
-
 static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
@@ -1377,15 +1369,18 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	if (ret)
 		goto out;
 
-	if (!for_part && (mode & FMODE_EXCL)) {
-		WARN_ON_ONCE(!holder);
-		ret = bd_prepare_to_claim(bdev, holder);
-		if (ret)
-			goto out_put_disk;
+	if (!for_part) {
+		if (mode & FMODE_EXCL) {
+			WARN_ON_ONCE(!holder);
+			ret = bd_prepare_to_claim(bdev, holder);
+			if (ret)
+				goto out_put_disk;
+		}
+
+		disk_block_events(disk);
+		mutex_lock(&disk->mutex);
 	}
 
-	disk_block_events(disk);
-	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (!bdev->bd_openers) {
 		first_open = true;
 
@@ -1470,10 +1465,14 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			unblock_events = false;
 		}
 	}
-	mutex_unlock(&bdev->bd_mutex);
 
-	if (unblock_events)
-		disk_unblock_events(disk);
+	if (!for_part) {
+		mutex_unlock(&disk->mutex);
+
+		if (unblock_events)
+			disk_unblock_events(disk);
+	}
+
 
 	/* only one opener holds the module reference */
 	if (!first_open)
@@ -1486,10 +1485,12 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	if (bdev_is_partition(bdev))
 		__blkdev_put(bdev_whole(bdev), mode, 1);
  out_unlock_bdev:
-	if (!for_part && (mode & FMODE_EXCL))
-		bd_abort_claiming(bdev, holder);
-	mutex_unlock(&bdev->bd_mutex);
-	disk_unblock_events(disk);
+	if (!for_part) {
+		if (mode & FMODE_EXCL)
+			bd_abort_claiming(bdev, holder);
+		mutex_unlock(&disk->mutex);
+		disk_unblock_events(disk);
+	}
  out_put_disk:
 	module_put(disk->fops->owner);
 	if (need_restart)
@@ -1668,9 +1669,10 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 	if (bdev->bd_openers == 1)
 		sync_blockdev(bdev);
 
-	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (for_part)
 		bdev->bd_part_count--;
+	else
+		mutex_lock(&disk->mutex);
 
 	if (!--bdev->bd_openers) {
 		WARN_ON_ONCE(bdev->bd_holders);
@@ -1691,7 +1693,8 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 
 		module_put(disk->fops->owner);
 	}
-	mutex_unlock(&bdev->bd_mutex);
+	if (!for_part)
+		mutex_unlock(&disk->mutex);
 	bdput(bdev);
 	if (victim)
 		__blkdev_put(victim, mode, 1);
@@ -1699,7 +1702,7 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 
 void blkdev_put(struct block_device *bdev, fmode_t mode)
 {
-	mutex_lock(&bdev->bd_mutex);
+	mutex_lock(&bdev->bd_disk->mutex);
 
 	if (mode & FMODE_EXCL) {
 		struct block_device *whole = bdev_whole(bdev);
@@ -1707,7 +1710,7 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 
 		/*
 		 * Release a claim on the device.  The holder fields
-		 * are protected with bdev_lock.  bd_mutex is to
+		 * are protected with bdev_lock.  disk->mutex is to
 		 * synchronize disk_holder unlinking.
 		 */
 		spin_lock(&bdev_lock);
@@ -1739,7 +1742,7 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 	 */
 	disk_flush_events(bdev->bd_disk, DISK_EVENT_MEDIA_CHANGE);
 
-	mutex_unlock(&bdev->bd_mutex);
+	mutex_unlock(&bdev->bd_disk->mutex);
 
 	__blkdev_put(bdev, mode, 0);
 }
@@ -2039,10 +2042,10 @@ void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)
 		old_inode = inode;
 		bdev = I_BDEV(inode);
 
-		mutex_lock(&bdev->bd_mutex);
+		mutex_lock(&bdev->bd_disk->mutex);
 		if (bdev->bd_openers)
 			func(bdev, arg);
-		mutex_unlock(&bdev->bd_mutex);
+		mutex_unlock(&bdev->bd_disk->mutex);
 
 		spin_lock(&blockdev_superblock->s_inode_list_lock);
 	}
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a6406b3b8c2b4f..ce43732f945f45 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -1237,7 +1237,7 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
 	lockdep_assert_held(&uuid_mutex);
 	/*
 	 * The device_list_mutex cannot be taken here in case opening the
-	 * underlying device takes further locks like bd_mutex.
+	 * underlying device takes further locks like disk->mutex.
 	 *
 	 * We also don't need the lock here as this is called during mount and
 	 * exclusion is provided by uuid_mutex
diff --git a/fs/super.c b/fs/super.c
index 98bb0629ee108e..b327a82bc1946b 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -1328,9 +1328,9 @@ int get_tree_bdev(struct fs_context *fc,
 		}
 
 		/*
-		 * s_umount nests inside bd_mutex during
+		 * s_umount nests inside disk->mutex during
 		 * __invalidate_device().  blkdev_put() acquires
-		 * bd_mutex and can't be called under s_umount.  Drop
+		 * disk->mutex and can't be called under s_umount.  Drop
 		 * s_umount temporarily.  This is safe as we're
 		 * holding an active reference.
 		 */
@@ -1403,9 +1403,9 @@ struct dentry *mount_bdev(struct file_system_type *fs_type,
 		}
 
 		/*
-		 * s_umount nests inside bd_mutex during
+		 * s_umount nests inside disk->mutex during
 		 * __invalidate_device().  blkdev_put() acquires
-		 * bd_mutex and can't be called under s_umount.  Drop
+		 * disk->mutex and can't be called under s_umount.  Drop
 		 * s_umount temporarily.  This is safe as we're
 		 * holding an active reference.
 		 */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 041caca25fc787..0735e335ca6c0a 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -24,7 +24,6 @@ struct block_device {
 	int			bd_openers;
 	struct inode *		bd_inode;	/* will die */
 	struct super_block *	bd_super;
-	struct mutex		bd_mutex;	/* open/close mutex */
 	void *			bd_claiming;
 	void *			bd_holder;
 	int			bd_holders;
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index e01618dfafc05c..bc0469cc8fb0dc 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -186,6 +186,7 @@ struct gendisk {
 	unsigned long state;
 #define GD_NEED_PART_SCAN		0
 	struct rw_semaphore lookup_sem;
+	struct mutex mutex;		/* open/close mutex */
 	struct kobject *slave_dir;
 
 	struct timer_rand_state *random;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28242.57279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA9-0005uN-54; Mon, 16 Nov 2020 15:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28242.57279; Mon, 16 Nov 2020 15:11:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA7-0005rb-HE; Mon, 16 Nov 2020 15:11:23 +0000
Received: by outflank-mailman (input) for mailman id 28242;
 Mon, 16 Nov 2020 15:11:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg39-0006ni-Ay
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 349e8444-6b60-4045-a709-8fce86a62a37;
 Mon, 16 Nov 2020 14:59:49 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyk-0004BJ-HE; Mon, 16 Nov 2020 14:59:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg39-0006ni-Ay
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:11 +0000
X-Inumbo-ID: 349e8444-6b60-4045-a709-8fce86a62a37
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 349e8444-6b60-4045-a709-8fce86a62a37;
	Mon, 16 Nov 2020 14:59:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Y+mF6kfZyX+9LgfiO2Btb3TAVeGp89oda3czGJwGTJs=; b=Bes/KMf+bs1I4DR5G0Q9TOnUPD
	9joDl/S6qw6ZpxOQWG6Y+yldMhWM6mFKnMzJr+i8TYoCpaeYndLIsJKL+sOlXlAuclhKSFhnR3UCp
	WJXOKeNHtc4MABM6hp7JGjCiE6fZ5ela9kirrVONq5fnIhmy+U+4tfsuOJVOXCYA5RBzKC83RR+4b
	qxZP1ZXarE/IKEORkzrEG4OGuYDREZTgbRj7xCtCOKoPFN8CeO4jN+bDAHNrnlIjkHnkpCPznvP9h
	3EPL5v3wV+4q+MPr35MSCL+n6p7mkMnCAwjivusEswSTw73cM8G2TKRKASqaQHIQNMCssmGTd1M3h
	18UxRfvA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyk-0004BJ-HE; Mon, 16 Nov 2020 14:59:38 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 61/78] zram: do not call set_blocksize
Date: Mon, 16 Nov 2020 15:57:52 +0100
Message-Id: <20201116145809.410558-62-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

set_blocksize is used by file systems to select their preferred buffer cache
block size.  Block drivers should not set it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 11 +----------
 drivers/block/zram/zram_drv.h |  1 -
 2 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3641434a9b154d..d00b5761ec0b21 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -403,13 +403,10 @@ static void reset_bdev(struct zram *zram)
 		return;
 
 	bdev = zram->bdev;
-	if (zram->old_block_size)
-		set_blocksize(bdev, zram->old_block_size);
 	blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
 	/* hope filp_close flush all of IO */
 	filp_close(zram->backing_dev, NULL);
 	zram->backing_dev = NULL;
-	zram->old_block_size = 0;
 	zram->bdev = NULL;
 	zram->disk->fops = &zram_devops;
 	kvfree(zram->bitmap);
@@ -454,7 +451,7 @@ static ssize_t backing_dev_store(struct device *dev,
 	struct file *backing_dev = NULL;
 	struct inode *inode;
 	struct address_space *mapping;
-	unsigned int bitmap_sz, old_block_size = 0;
+	unsigned int bitmap_sz;
 	unsigned long nr_pages, *bitmap = NULL;
 	struct block_device *bdev = NULL;
 	int err;
@@ -509,14 +506,8 @@ static ssize_t backing_dev_store(struct device *dev,
 		goto out;
 	}
 
-	old_block_size = block_size(bdev);
-	err = set_blocksize(bdev, PAGE_SIZE);
-	if (err)
-		goto out;
-
 	reset_bdev(zram);
 
-	zram->old_block_size = old_block_size;
 	zram->bdev = bdev;
 	zram->backing_dev = backing_dev;
 	zram->bitmap = bitmap;
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index f2fd46daa76045..712354a4207c77 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -118,7 +118,6 @@ struct zram {
 	bool wb_limit_enable;
 	u64 bd_wb_limit;
 	struct block_device *bdev;
-	unsigned int old_block_size;
 	unsigned long *bitmap;
 	unsigned long nr_pages;
 #endif
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28245.57287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAB-00061C-In; Mon, 16 Nov 2020 15:11:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28245.57287; Mon, 16 Nov 2020 15:11:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegA9-0005zV-SA; Mon, 16 Nov 2020 15:11:25 +0000
Received: by outflank-mailman (input) for mailman id 28245;
 Mon, 16 Nov 2020 15:11:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzM-0006ni-2h
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 120be73a-7976-4561-8f74-49712a345b75;
 Mon, 16 Nov 2020 14:58:53 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxn-0003r7-B8; Mon, 16 Nov 2020 14:58:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzM-0006ni-2h
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:16 +0000
X-Inumbo-ID: 120be73a-7976-4561-8f74-49712a345b75
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 120be73a-7976-4561-8f74-49712a345b75;
	Mon, 16 Nov 2020 14:58:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Y79HfAEyQ1Osc+bcMzZJ5692pPHEaLCtoTd5IVsJTUY=; b=eK8/rlIlxKne8mY/TyQF28rFKp
	R+NjWRmyqlHhZW7mc0uKiw7grYmQBF60/GTH8mV7+V1y+jWkJgjEMZIH8WPLPgIBhMJUMD1G0pLpF
	7JyjO8JIaJaLLPZiY0OUk25E7QxOEU7w104PbHzamDXeIRSHtvAz9/NM31Z7B0V5xRgvs+KXRqxPq
	x8i9w7gcG6i3Ci/5TUST199evDiwc4f6VACjbFzWy4Z0jBwdV4x0GMLLvZCFi/p1y7Db0VzicSOUt
	lrzr+ZlA6NpSYydrhrbUYW+XCA6DIrRp0muBraMGxMazXjPHsTjGudfq7pK8O26r+BkzECXNBsH/F
	JZxCaIPg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxn-0003r7-B8; Mon, 16 Nov 2020 14:58:39 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 21/78] md: remove a spurious call to revalidate_disk_size in update_size
Date: Mon, 16 Nov 2020 15:57:12 +0100
Message-Id: <20201116145809.410558-22-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

None of the ->resize methods updates the disk size, so calling
revalidate_disk_size here won't do anything.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Song Liu <song@kernel.org>
---
 drivers/md/md-cluster.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 87442dc59f6ca3..35e2690c1803dd 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -1299,8 +1299,6 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 	} else {
 		/* revert to previous sectors */
 		ret = mddev->pers->resize(mddev, old_dev_sectors);
-		if (!ret)
-			revalidate_disk_size(mddev->gendisk, true);
 		ret = __sendmsg(cinfo, &cmsg);
 		if (ret)
 			pr_err("%s:%d: failed to send METADATA_UPDATED msg\n",
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28249.57299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAD-00068G-Rv; Mon, 16 Nov 2020 15:11:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28249.57299; Mon, 16 Nov 2020 15:11:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAC-00066s-2Y; Mon, 16 Nov 2020 15:11:28 +0000
Received: by outflank-mailman (input) for mailman id 28249;
 Mon, 16 Nov 2020 15:11:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzq-0006ni-3r
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 074b718d-c101-4c17-b1f1-963d1a071900;
 Mon, 16 Nov 2020 14:59:01 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxu-0003sm-C6; Mon, 16 Nov 2020 14:58:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kefzq-0006ni-3r
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:46 +0000
X-Inumbo-ID: 074b718d-c101-4c17-b1f1-963d1a071900
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 074b718d-c101-4c17-b1f1-963d1a071900;
	Mon, 16 Nov 2020 14:59:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9xKkbsaYgLwhE10hNLQkbULUUrvPvOI5lAnm4OXLvPs=; b=oYdX9nzKoJif6AkXObIozxVfvj
	7cGF/f/pUnJJHDQ6T2624151wrdwaTi3mmORA1Sk4dkNQXTLpvcPq6k6KdWEaLJxRF4lDmEXm8qw9
	oYE1wB5V9C5rcNG0vg2/s/RZq7WpvPXksRbrz6BhFVFbp3fgW6Cu8Ei10wVmFRzzU2lZbNkryPPwp
	UyZEBLPwZzIDOtCGDnQaZ/yC0WN3H+ttyOeryQQMS7541LF9/8zj21iM7fknwF/oyMb92PdRg+U+/
	bjDqshSNF4XCVwta4ehBG8dKT/cx10oylnvhgVmCv3gk5lCpRUkUrEy9iufHfdIZqR7G/SymA4RHz
	LVTtFmtA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxu-0003sm-C6; Mon, 16 Nov 2020 14:58:46 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 26/78] block: add a new set_read_only method
Date: Mon, 16 Nov 2020 15:57:17 +0100
Message-Id: <20201116145809.410558-27-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a new method to allow for driver-specific processing when setting or
clearing the block device read-only state.  This allows replacing the
cumbersome and error-prone override of the whole ioctl implementation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c          | 5 +++++
 include/linux/blkdev.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/block/ioctl.c b/block/ioctl.c
index c6d8863f040945..a6fa16b9770593 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -389,6 +389,11 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 		return ret;
 	if (get_user(n, (int __user *)arg))
 		return -EFAULT;
+	if (bdev->bd_disk->fops->set_read_only) {
+		ret = bdev->bd_disk->fops->set_read_only(bdev, n);
+		if (ret)
+			return ret;
+	}
 	set_device_ro(bdev, n);
 	return 0;
 }
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 639cae2c158b59..5c1ba8a8d2bc7e 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1850,6 +1850,7 @@ struct block_device_operations {
 	void (*unlock_native_capacity) (struct gendisk *);
 	int (*revalidate_disk) (struct gendisk *);
 	int (*getgeo)(struct block_device *, struct hd_geometry *);
+	int (*set_read_only)(struct block_device *bdev, bool ro);
 	/* this callback is with swap_lock and sometimes page table lock held */
 	void (*swap_slot_free_notify) (struct block_device *, unsigned long);
 	int (*report_zones)(struct gendisk *, sector_t sector,
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28255.57309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAF-0006Ed-HJ; Mon, 16 Nov 2020 15:11:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28255.57309; Mon, 16 Nov 2020 15:11:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAE-0006DF-7i; Mon, 16 Nov 2020 15:11:30 +0000
Received: by outflank-mailman (input) for mailman id 28255;
 Mon, 16 Nov 2020 15:11:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0t-0006ni-5U
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f8cc8eb-c40a-4454-bd97-0c2eaf0f39fc;
 Mon, 16 Nov 2020 14:59:18 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyB-0003xB-Ni; Mon, 16 Nov 2020 14:59:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0t-0006ni-5U
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:51 +0000
X-Inumbo-ID: 0f8cc8eb-c40a-4454-bd97-0c2eaf0f39fc
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0f8cc8eb-c40a-4454-bd97-0c2eaf0f39fc;
	Mon, 16 Nov 2020 14:59:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=j9YzX4Imfj42oekp8Tth+LQoKAxJAOk+J48Ms5WWyRs=; b=tNKy0A5I0JGyRwszKo8CS5i8OD
	+8V953YfFx3Xman6sQ8F42cjnDZ+fEMnGq+5JSKyBoWRq84B0ROGouApW++jZ44y6feigXqLuqKsR
	cxqh2lFOkiVIegiphp90kij1qmc1NDslqQZNsGbC21VOg69MDd/n08QOAs5NDSIePC2uZIeQod82Z
	T6wB+Z9xXg1/0NZKtWCv5t7lkIbvOb2buar5vD5Qry8bhovOugY2M7SSd4S16Dl+mklfVYWbkqbny
	8IY3ZaEc9JVI0kFlBfL2Zv+6ubgx5Kd03xor/wsERRQOPm0Gj4ArtdUOT7w0cJYRw8cIK/bIaVXYw
	0ktG/eKQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyB-0003xB-Ni; Mon, 16 Nov 2020 14:59:04 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 38/78] block: rework requesting modules for unclaimed devices
Date: Mon, 16 Nov 2020 15:57:29 +0100
Message-Id: <20201116145809.410558-39-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Instead of reusing the ranges in bdev_map, add a new helper that is
called if no range was found.  This is a first step toward unpeeling and
eventually removing the complex ranges structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 block/genhd.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 2a20372756625e..8391e7d83a6920 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1028,6 +1028,13 @@ static ssize_t disk_badblocks_store(struct device *dev,
 	return badblocks_store(disk->bb, page, len, 0);
 }
 
+static void request_gendisk_module(dev_t devt)
+{
+	if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
+		/* Make old-style 2.4 aliases work */
+		request_module("block-major-%d", MAJOR(devt));
+}
+
 static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
 {
 	struct kobject *kobj;
@@ -1052,6 +1059,14 @@ static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
 		probe = p->probe;
 		best = p->range - 1;
 		*partno = dev - p->dev;
+
+		if (!probe) {
+			mutex_unlock(&bdev_map_lock);
+			module_put(owner);
+			request_gendisk_module(dev);
+			goto retry;
+		}
+
 		if (p->lock && p->lock(dev, data) < 0) {
 			module_put(owner);
 			continue;
@@ -1290,15 +1305,6 @@ static const struct seq_operations partitions_op = {
 };
 #endif
 
-
-static struct kobject *base_probe(dev_t devt, int *partno, void *data)
-{
-	if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
-		/* Make old-style 2.4 aliases work */
-		request_module("block-major-%d", MAJOR(devt));
-	return NULL;
-}
-
 static void bdev_map_init(void)
 {
 	struct bdev_map *base;
@@ -1310,7 +1316,6 @@ static void bdev_map_init(void)
 
 	base->dev = 1;
 	base->range = ~0 ;
-	base->probe = base_probe;
 	for (i = 0; i < 255; i++)
 		bdev_map[i] = base;
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28265.57332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAK-0006Tt-5r; Mon, 16 Nov 2020 15:11:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28265.57332; Mon, 16 Nov 2020 15:11:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAJ-0006SU-3l; Mon, 16 Nov 2020 15:11:35 +0000
Received: by outflank-mailman (input) for mailman id 28265;
 Mon, 16 Nov 2020 15:11:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2k-0006ni-A5
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 459d0bda-70dc-42b9-a574-2252f9222f57;
 Mon, 16 Nov 2020 14:59:44 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyb-00046E-Q3; Mon, 16 Nov 2020 14:59:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg2k-0006ni-A5
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:46 +0000
X-Inumbo-ID: 459d0bda-70dc-42b9-a574-2252f9222f57
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 459d0bda-70dc-42b9-a574-2252f9222f57;
	Mon, 16 Nov 2020 14:59:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=W6vAbrHw5UIe+hBqHSxB53buLdYkSelEr9Mgh5kMt7I=; b=HGd154cweczfVFFJ33I+2Ap0qE
	vAINs3yHEiIwNtbvzYgE/WXzWWOHi4hc8oMlG0QlPnvuCq9d5vtI3j5c7tLlDgCpluHDQGkOMLH35
	WhaogjDXSkuKzRYt6GHIOzX4hyBICirNPP8WpvRSgupaFYTXu4b5Xk6AUTDiPf3+ItYmACJeiM6LJ
	YMVtATQDBg4A47fCUNe/awUmsyyfVPgKgi53SFmu9OHsk+c41UDve1Y9S8vUN34EuaJyVFEjXpH1A
	ljV+4g8sWW0+9ykM+WSjgXqlDX0dW2sdD4zJ/CFNxKQ+EUlLDbbDS0aDOeZpxQ46Grgh3LbO4Tr17
	vOl39Yaw==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyb-00046E-Q3; Mon, 16 Nov 2020 14:59:30 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 56/78] init: refactor name_to_dev_t
Date: Mon, 16 Nov 2020 15:57:47 +0100
Message-Id: <20201116145809.410558-57-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Split each name format recognized by name_to_dev_t into a self-contained helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/genhd.h |   7 +-
 init/do_mounts.c      | 183 +++++++++++++++++++++---------------------
 2 files changed, 91 insertions(+), 99 deletions(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 22f5b9fd96f8bf..ca5e356084c353 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -388,18 +388,13 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
 }
 #endif /* CONFIG_SYSFS */
 
+dev_t blk_lookup_devt(const char *name, int partno);
 #ifdef CONFIG_BLOCK
 void printk_all_partitions(void);
-dev_t blk_lookup_devt(const char *name, int partno);
 #else /* CONFIG_BLOCK */
 static inline void printk_all_partitions(void)
 {
 }
-static inline dev_t blk_lookup_devt(const char *name, int partno)
-{
-	dev_t devt = MKDEV(0, 0);
-	return devt;
-}
 #endif /* CONFIG_BLOCK */
 
 #endif /* _LINUX_GENHD_H */
diff --git a/init/do_mounts.c b/init/do_mounts.c
index b5f9604d0c98a2..aef2f24461c7f1 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -90,7 +90,6 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	return 0;
 }
 
-
 /**
  * devt_from_partuuid - looks up the dev_t of a partition by its UUID
  * @uuid_str:	char array containing ascii UUID
@@ -186,7 +185,83 @@ static int match_dev_by_label(struct device *dev, const void *data)
 
 	return 0;
 }
-#endif
+
+static dev_t devt_from_partlabel(const char *label)
+{
+	struct device *dev;
+	dev_t devt = 0;
+
+	dev = class_find_device(&block_class, NULL, label, &match_dev_by_label);
+	if (dev) {
+		devt = dev->devt;
+		put_device(dev);
+	}
+
+	return devt;
+}
+
+static dev_t devt_from_devname(const char *name)
+{
+	dev_t devt = 0;
+	int part;
+	char s[32];
+	char *p;
+
+	if (strlen(name) > 31)
+		return 0;
+	strcpy(s, name);
+	for (p = s; *p; p++) {
+		if (*p == '/')
+			*p = '!';
+	}
+
+	devt = blk_lookup_devt(s, 0);
+	if (devt)
+		return devt;
+
+	/*
+	 * Try non-existent, but valid partition, which may only exist after
+	 * opening the device, like partitioned md devices.
+	 */
+	while (p > s && isdigit(p[-1]))
+		p--;
+	if (p == s || !*p || *p == '0')
+		return 0;
+
+	/* try disk name without <part number> */
+	part = simple_strtoul(p, NULL, 10);
+	*p = '\0';
+	devt = blk_lookup_devt(s, part);
+	if (devt)
+		return devt;
+
+	/* try disk name without p<part number> */
+	if (p < s + 2 || !isdigit(p[-2]) || p[-1] != 'p')
+		return 0;
+	p[-1] = '\0';
+	return blk_lookup_devt(s, part);
+}
+#endif /* CONFIG_BLOCK */
+
+static dev_t devt_from_devnum(const char *name)
+{
+	unsigned maj, min, offset;
+	dev_t devt = 0;
+	char *p, dummy;
+
+	if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2 ||
+	    sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3) {
+		devt = MKDEV(maj, min);
+		if (maj != MAJOR(devt) || min != MINOR(devt))
+			return 0;
+	} else {
+		devt = new_decode_dev(simple_strtoul(name, &p, 16));
+		if (*p)
+			return 0;
+	}
+
+	return devt;
+}
 
 /*
  *	Convert a name into device number.  We accept the following variants:
@@ -218,101 +293,23 @@ static int match_dev_by_label(struct device *dev, const void *data)
  *	name contains slashes, the device name has them replaced with
  *	bangs.
  */
-
 dev_t name_to_dev_t(const char *name)
 {
-	char s[32];
-	char *p;
-	dev_t res = 0;
-	int part;
-
+	if (strcmp(name, "/dev/nfs") == 0)
+		return Root_NFS;
+	if (strcmp(name, "/dev/cifs") == 0)
+		return Root_CIFS;
+	if (strcmp(name, "/dev/ram") == 0)
+		return Root_RAM0;
 #ifdef CONFIG_BLOCK
-	if (strncmp(name, "PARTUUID=", 9) == 0) {
-		name += 9;
-		res = devt_from_partuuid(name);
-		if (!res)
-			goto fail;
-		goto done;
-	} else if (strncmp(name, "PARTLABEL=", 10) == 0) {
-		struct device *dev;
-
-		dev = class_find_device(&block_class, NULL, name + 10,
-					&match_dev_by_label);
-		if (!dev)
-			goto fail;
-
-		res = dev->devt;
-		put_device(dev);
-		goto done;
-	}
+	if (strncmp(name, "PARTUUID=", 9) == 0)
+		return devt_from_partuuid(name + 9);
+	if (strncmp(name, "PARTLABEL=", 10) == 0)
+		return devt_from_partlabel(name + 10);
+	if (strncmp(name, "/dev/", 5) == 0)
+		return devt_from_devname(name + 5);
 #endif
-
-	if (strncmp(name, "/dev/", 5) != 0) {
-		unsigned maj, min, offset;
-		char dummy;
-
-		if ((sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) ||
-		    (sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3)) {
-			res = MKDEV(maj, min);
-			if (maj != MAJOR(res) || min != MINOR(res))
-				goto fail;
-		} else {
-			res = new_decode_dev(simple_strtoul(name, &p, 16));
-			if (*p)
-				goto fail;
-		}
-		goto done;
-	}
-
-	name += 5;
-	res = Root_NFS;
-	if (strcmp(name, "nfs") == 0)
-		goto done;
-	res = Root_CIFS;
-	if (strcmp(name, "cifs") == 0)
-		goto done;
-	res = Root_RAM0;
-	if (strcmp(name, "ram") == 0)
-		goto done;
-
-	if (strlen(name) > 31)
-		goto fail;
-	strcpy(s, name);
-	for (p = s; *p; p++)
-		if (*p == '/')
-			*p = '!';
-	res = blk_lookup_devt(s, 0);
-	if (res)
-		goto done;
-
-	/*
-	 * try non-existent, but valid partition, which may only exist
-	 * after revalidating the disk, like partitioned md devices
-	 */
-	while (p > s && isdigit(p[-1]))
-		p--;
-	if (p == s || !*p || *p == '0')
-		goto fail;
-
-	/* try disk name without <part number> */
-	part = simple_strtoul(p, NULL, 10);
-	*p = '\0';
-	res = blk_lookup_devt(s, part);
-	if (res)
-		goto done;
-
-	/* try disk name without p<part number> */
-	if (p < s + 2 || !isdigit(p[-2]) || p[-1] != 'p')
-		goto fail;
-	p[-1] = '\0';
-	res = blk_lookup_devt(s, part);
-	if (res)
-		goto done;
-
-fail:
-	return 0;
-done:
-	return res;
+	return devt_from_devnum(name);
 }
 EXPORT_SYMBOL_GPL(name_to_dev_t);
 
-- 
2.29.2
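For readers following along outside the kernel tree, the parsing rules that the new devt_from_devnum helper implements can be sketched in user space. This is a hedged illustration, not the kernel code: MKDEV/MAJOR/MINOR are simplified stand-ins for the kernel macros (20 minor bits), and a plain hex strtoul replaces new_decode_dev, so the kernel's huge-dev_t encoding is not modeled.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel's dev_t macros (20 minor bits). */
#define MINORBITS 20
#define MKDEV(ma, mi) (((unsigned long)(ma) << MINORBITS) | (mi))
#define MAJOR(dev)    ((unsigned)((dev) >> MINORBITS))
#define MINOR(dev)    ((unsigned)((dev) & ((1UL << MINORBITS) - 1)))

/* Sketch of the devt_from_devnum parsing rules: accept "maj:min"
 * (optionally "maj:min:offset:") or a bare hex device number, and
 * return 0 for anything that does not parse cleanly. */
static unsigned long devt_from_devnum(const char *name)
{
	unsigned maj, min, offset;
	unsigned long devt;
	char *p, dummy;

	if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2 ||
	    sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3) {
		devt = MKDEV(maj, min);
		/* reject values that overflow the major/minor fields */
		if (maj != MAJOR(devt) || min != MINOR(devt))
			return 0;
		return devt;
	}

	/* otherwise treat the whole string as a hex-encoded dev_t;
	 * trailing garbage makes the lookup fail */
	devt = strtoul(name, &p, 16);
	if (*p)
		return 0;
	return devt;
}
```

The trailing `%c` conversions are the trick worth noting: they only succeed if extra characters follow the numbers, so a match count of exactly 2 (or 3) guarantees the string contained nothing but the expected fields.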



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28268.57345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAM-0006cL-Qb; Mon, 16 Nov 2020 15:11:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28268.57345; Mon, 16 Nov 2020 15:11:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAL-0006ad-Sb; Mon, 16 Nov 2020 15:11:37 +0000
Received: by outflank-mailman (input) for mailman id 28268;
 Mon, 16 Nov 2020 15:11:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2u-0006ni-AP
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d18f707c-349c-4efa-91eb-3533777f3576;
 Mon, 16 Nov 2020 14:59:45 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyf-00048G-6S; Mon, 16 Nov 2020 14:59:33 +0000
X-Inumbo-ID: d18f707c-349c-4efa-91eb-3533777f3576
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=O60EjIrCGFuLmrlAIlHLEpDJ56KCKSEnNb3gMAlOeiw=; b=cYF8R4cCbC+Qq9sINa5yqZdL0X
	93ga8OVPoDCuMuAzsc6LCCPVCTlMz7M8kyXQ3W8mLoMGcBK+UT1OJHIIKBiFEY8O3CL9PCFV1VptS
	KSjRwtARoJ87Nd4nxtGIFM+Jg0k8wYWN9TZgzsN5D1nV7I1sq1ZXwlr9gtYWZBhht8Q7hHPelTezf
	P6wBPYwljx8jZR1nBwtw+O44D+ondU8UugunbuODPW/VJqK0DSykBoPrIhbgMz4by7p5DQsN18A3D
	FROspfRYmqv4Ua7pIxHH3F+yAyAW+FYJe6JFNqS5wpMiz2anC/h5Tu/QSn/THlw4upY+gSQg90ArH
	DtvNc1gA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 58/78] init: cleanup match_dev_by_uuid and match_dev_by_label
Date: Mon, 16 Nov 2020 15:57:49 +0100
Message-Id: <20201116145809.410558-59-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Avoid a totally pointless goto label, and use the same style of
comparison for both helpers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 init/do_mounts.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/init/do_mounts.c b/init/do_mounts.c
index afa26a4028d25e..5879edf083b318 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -79,15 +79,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	const struct uuidcmp *cmp = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info)
-		goto no_match;
-
-	if (strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
-		goto no_match;
-
+	if (!part->info ||
+	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
+		return 0;
 	return 1;
-no_match:
-	return 0;
 }
 
 /**
@@ -174,10 +169,9 @@ static int match_dev_by_label(struct device *dev, const void *data)
 	const char *label = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (part->info && !strcmp(label, part->info->volname))
-		return 1;
-
-	return 0;
+	if (!part->info || strcmp(label, part->info->volname))
+		return 0;
+	return 1;
 }
 
 static dev_t devt_from_partlabel(const char *label)
-- 
2.29.2
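The style this patch converges on — bail out with 0 on any mismatch, then return 1 — can be illustrated with a small user-space sketch. This is only an illustration: struct part_info here is a hypothetical stand-in for the kernel's partition metadata, and the helpers take it directly rather than going through struct device.

```c
#include <assert.h>
#include <string.h>
#include <strings.h>
#include <stddef.h>

/* Hypothetical stand-in for the partition metadata the kernel
 * attaches to a block device (struct partition_meta_info). */
struct part_info {
	const char *uuid;
	const char *volname;
};

struct uuidcmp {
	const char *uuid;
	size_t len;	/* compare only a UUID prefix */
};

/* Match on a case-insensitive UUID prefix; any mismatch (including
 * missing metadata) takes the early return-0 path. */
static int match_dev_by_uuid(const struct part_info *info,
			     const struct uuidcmp *cmp)
{
	if (!info || strncasecmp(cmp->uuid, info->uuid, cmp->len))
		return 0;
	return 1;
}

/* Same shape for the volume-label match. */
static int match_dev_by_label(const struct part_info *info, const char *label)
{
	if (!info || strcmp(label, info->volname))
		return 0;
	return 1;
}
```

Folding both failure conditions into one `if` keeps the two helpers visually parallel, which is the point of the cleanup.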



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28271.57352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAO-0006ig-JI; Mon, 16 Nov 2020 15:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28271.57352; Mon, 16 Nov 2020 15:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAN-0006gu-RC; Mon, 16 Nov 2020 15:11:39 +0000
Received: by outflank-mailman (input) for mailman id 28271;
 Mon, 16 Nov 2020 15:11:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0y-0006ni-5l
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c90f4ce8-abdf-4f23-9023-68b347ea78dc;
 Mon, 16 Nov 2020 14:59:19 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyA-0003wx-DT; Mon, 16 Nov 2020 14:59:02 +0000
X-Inumbo-ID: c90f4ce8-abdf-4f23-9023-68b347ea78dc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9MasGm4sRMlqjIfBzvyRbCzoZ9vCizFcPgSc5rhaRew=; b=uXQm/yyIiJQnYODeCv7tLBroh6
	arXMrp0Tk+2mkvK5mlMoVYQZhqOzsPFVFoVsNhqaXKjdbjxT6ddIFdHimSsiofqQFd042iVeyoPjf
	kjJVvZC7dJvRJWgFk5vnXEzcfiS2wRI+9CAHzFb6PaF2pf3aK/Vv/y+iBA3+wNlveiknIb+cM4k72
	sy6+79pc3yQnXdRLw5qP9MUrMQWCUQr94hrqb0BHYJRuZX0RRf1WPApUNdGnDYjJKsodIy93uSBGl
	mXFAo94CwxYf32WkNugPW9wMsBEd2E2lCs3nmZ+m6b8g/Nfawh6vTjWCNJv2VNFFCFBCwJLajvHZl
	HFZ0RH2Q==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 37/78] block: split block_class_lock
Date: Mon, 16 Nov 2020 15:57:28 +0100
Message-Id: <20201116145809.410558-38-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Split the block_class_lock mutex into two separate mutexes, one
protecting bdev_map and one protecting major_names.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 block/genhd.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 482f7b89802010..2a20372756625e 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -25,7 +25,6 @@
 
 #include "blk.h"
 
-static DEFINE_MUTEX(block_class_lock);
 static struct kobject *block_depr;
 
 struct bdev_map {
@@ -37,6 +36,7 @@ struct bdev_map {
 	int (*lock)(dev_t, void *);
 	void *data;
 } *bdev_map[255];
+static DEFINE_MUTEX(bdev_map_lock);
 
 /* for extended dynamic devt allocation, currently only one major is used */
 #define NR_EXT_DEVT		(1 << MINORBITS)
@@ -400,6 +400,7 @@ static struct blk_major_name {
 	int major;
 	char name[16];
 } *major_names[BLKDEV_MAJOR_HASH_SIZE];
+static DEFINE_MUTEX(major_names_lock);
 
 /* index in the above - for now: assume no multimajor ranges */
 static inline int major_to_index(unsigned major)
@@ -412,11 +413,11 @@ void blkdev_show(struct seq_file *seqf, off_t offset)
 {
 	struct blk_major_name *dp;
 
-	mutex_lock(&block_class_lock);
+	mutex_lock(&major_names_lock);
 	for (dp = major_names[major_to_index(offset)]; dp; dp = dp->next)
 		if (dp->major == offset)
 			seq_printf(seqf, "%3d %s\n", dp->major, dp->name);
-	mutex_unlock(&block_class_lock);
+	mutex_unlock(&major_names_lock);
 }
 #endif /* CONFIG_PROC_FS */
 
@@ -445,7 +446,7 @@ int register_blkdev(unsigned int major, const char *name)
 	struct blk_major_name **n, *p;
 	int index, ret = 0;
 
-	mutex_lock(&block_class_lock);
+	mutex_lock(&major_names_lock);
 
 	/* temporary */
 	if (major == 0) {
@@ -498,7 +499,7 @@ int register_blkdev(unsigned int major, const char *name)
 		kfree(p);
 	}
 out:
-	mutex_unlock(&block_class_lock);
+	mutex_unlock(&major_names_lock);
 	return ret;
 }
 
@@ -510,7 +511,7 @@ void unregister_blkdev(unsigned int major, const char *name)
 	struct blk_major_name *p = NULL;
 	int index = major_to_index(major);
 
-	mutex_lock(&block_class_lock);
+	mutex_lock(&major_names_lock);
 	for (n = &major_names[index]; *n; n = &(*n)->next)
 		if ((*n)->major == major)
 			break;
@@ -520,7 +521,7 @@ void unregister_blkdev(unsigned int major, const char *name)
 		p = *n;
 		*n = p->next;
 	}
-	mutex_unlock(&block_class_lock);
+	mutex_unlock(&major_names_lock);
 	kfree(p);
 }
 
@@ -671,7 +672,7 @@ void blk_register_region(dev_t devt, unsigned long range, struct module *module,
 		p->data = data;
 	}
 
-	mutex_lock(&block_class_lock);
+	mutex_lock(&bdev_map_lock);
 	for (i = 0, p -= n; i < n; i++, p++, index++) {
 		struct bdev_map **s = &bdev_map[index % 255];
 		while (*s && (*s)->range < range)
@@ -679,7 +680,7 @@ void blk_register_region(dev_t devt, unsigned long range, struct module *module,
 		p->next = *s;
 		*s = p;
 	}
-	mutex_unlock(&block_class_lock);
+	mutex_unlock(&bdev_map_lock);
 }
 EXPORT_SYMBOL(blk_register_region);
 
@@ -690,7 +691,7 @@ void blk_unregister_region(dev_t devt, unsigned long range)
 	unsigned i;
 	struct bdev_map *found = NULL;
 
-	mutex_lock(&block_class_lock);
+	mutex_lock(&bdev_map_lock);
 	for (i = 0; i < min(n, 255u); i++, index++) {
 		struct bdev_map **s;
 		for (s = &bdev_map[index % 255]; *s; s = &(*s)->next) {
@@ -703,7 +704,7 @@ void blk_unregister_region(dev_t devt, unsigned long range)
 			}
 		}
 	}
-	mutex_unlock(&block_class_lock);
+	mutex_unlock(&bdev_map_lock);
 	kfree(found);
 }
 EXPORT_SYMBOL(blk_unregister_region);
@@ -1034,7 +1035,7 @@ static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
 	unsigned long best = ~0UL;
 
 retry:
-	mutex_lock(&block_class_lock);
+	mutex_lock(&bdev_map_lock);
 	for (p = bdev_map[MAJOR(dev) % 255]; p; p = p->next) {
 		struct kobject *(*probe)(dev_t, int *, void *);
 		struct module *owner;
@@ -1055,7 +1056,7 @@ static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
 			module_put(owner);
 			continue;
 		}
-		mutex_unlock(&block_class_lock);
+		mutex_unlock(&bdev_map_lock);
 		kobj = probe(dev, partno, data);
 		/* Currently ->owner protects _only_ ->probe() itself. */
 		module_put(owner);
@@ -1063,7 +1064,7 @@ static struct gendisk *lookup_gendisk(dev_t dev, int *partno)
 			return dev_to_disk(kobj_to_dev(kobj));
 		goto retry;
 	}
-	mutex_unlock(&block_class_lock);
+	mutex_unlock(&bdev_map_lock);
 	return NULL;
 }
 
-- 
2.29.2
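The effect of the split can be sketched in user space with POSIX mutexes: two independent tables each get their own lock, so a slow traversal under one lock no longer serializes against updates to the other. The lock names mirror the patch; the table contents and accessors below are placeholders, not the kernel data structures.

```c
#include <assert.h>
#include <pthread.h>

/* One lock per table, instead of one coarse block_class_lock. */
static pthread_mutex_t bdev_map_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t major_names_lock = PTHREAD_MUTEX_INITIALIZER;

static int bdev_map[255];	/* placeholder for the probe table */
static int major_names[255];	/* placeholder for the major-name hash */

static void bdev_map_set(unsigned major, int val)
{
	pthread_mutex_lock(&bdev_map_lock);
	bdev_map[major % 255] = val;
	pthread_mutex_unlock(&bdev_map_lock);
}

static int bdev_map_get(unsigned major)
{
	int val;

	pthread_mutex_lock(&bdev_map_lock);
	val = bdev_map[major % 255];
	pthread_mutex_unlock(&bdev_map_lock);
	return val;
}

static void major_name_set(unsigned major, int val)
{
	/* only major_names_lock is taken: a concurrent bdev_map
	 * lookup is no longer blocked by this update */
	pthread_mutex_lock(&major_names_lock);
	major_names[major % 255] = val;
	pthread_mutex_unlock(&major_names_lock);
}

static int major_name_get(unsigned major)
{
	int val;

	pthread_mutex_lock(&major_names_lock);
	val = major_names[major % 255];
	pthread_mutex_unlock(&major_names_lock);
	return val;
}
```

Since no code path ever needs both locks at once, the split cannot introduce a lock-ordering problem; it only narrows what each critical section protects.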



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28274.57357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAP-0006mc-MU; Mon, 16 Nov 2020 15:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28274.57357; Mon, 16 Nov 2020 15:11:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAP-0006li-9g; Mon, 16 Nov 2020 15:11:41 +0000
Received: by outflank-mailman (input) for mailman id 28274;
 Mon, 16 Nov 2020 15:11:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kefzR-0006ni-2g
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a7bc4d3-87c4-4f72-884c-ef266ed2069c;
 Mon, 16 Nov 2020 14:58:54 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxo-0003rX-KZ; Mon, 16 Nov 2020 14:58:41 +0000
X-Inumbo-ID: 7a7bc4d3-87c4-4f72-884c-ef266ed2069c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Vj0GgZOGZUCqHagrDbWIgIb68CNEMN1B5n/upu75+9M=; b=vJvWy+M6q2aairpF0Q89oICkxI
	PS5amLRUZxt3TDLOTCJKifVBAbRS3ozPh9ssbARTX05RTcowruj+r6DaYiko+IA8YM6yKnqqO9hZ0
	YLPWVj8bReI6zNXL311McIRo6JgWss20/ic0IkcgvXqouBBov4LHHRc3LFvEHbyGhAEZ6L49Q+h8z
	LPV18Yh1YwBXDnHfnQyxhLqXZkED99bdFyjJYXBanIYI+26WlIFdrhNuludlP3Zi5bFQ1Jai7SYT8
	kxlPZdfsH/hR1Of6f1mV+sb6fFGWqYqS2gk4bZxnQp6B+oIaE6yoC0rkzi1aN8TgGTKSgtY0FwCjd
	PfyxB0dA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 22/78] virtio-blk: remove a spurious call to revalidate_disk_size
Date: Mon, 16 Nov 2020 15:57:13 +0100
Message-Id: <20201116145809.410558-23-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

revalidate_disk_size just updates the block device size from the disk
size.  Thus calling it from virtblk_update_cache_mode doesn't actually
do anything.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/block/virtio_blk.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 3e812b4c32e669..145606dc52db1e 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -598,7 +598,6 @@ static void virtblk_update_cache_mode(struct virtio_device *vdev)
 	struct virtio_blk *vblk = vdev->priv;
 
 	blk_queue_write_cache(vblk->disk->queue, writeback, false);
-	revalidate_disk_size(vblk->disk, true);
 }
 
 static const char *const virtblk_cache_types[] = {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28276.57371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAR-0006qk-Vi; Mon, 16 Nov 2020 15:11:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28276.57371; Mon, 16 Nov 2020 15:11:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAQ-0006pR-Ew; Mon, 16 Nov 2020 15:11:42 +0000
Received: by outflank-mailman (input) for mailman id 28276;
 Mon, 16 Nov 2020 15:11:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2p-0006ni-AA
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c16919fe-ecd7-445c-ae88-7717e0a09ba5;
 Mon, 16 Nov 2020 14:59:44 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyd-000470-B8; Mon, 16 Nov 2020 14:59:31 +0000
X-Inumbo-ID: c16919fe-ecd7-445c-ae88-7717e0a09ba5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=AZC0zIud/sy36HL3UqpdhruuJxaP+dAsE+XWl7N2xio=; b=hC2kUAEWCYh9OryEptbmVZ4ebz
	NJfypqCrjvekFThEDQHeAH+KnX3FgqoOh2oFdElpy2UOUU8OYvMVMyDtZx5PgsNNgO5KvLRJtcx2F
	8Z2wJ6ZAecQ/tCcYHLbAlAGK62e/LPO1anzZqXyhoiZcnyM456rb6Ib/pV9REehr8DlazH4amfdur
	a7V1kHvqoU8gZ+KDoDs23F8OiOCpjlxcgAWIv7ZpBKEh+7hc5sOGW+853KLP0z+7cruWtWsL7L99a
	DSrjRt5uLPvZTcUNxbnqe7Ie7DIj2M7MWaG8rzW38nm4RM1CwtXe4KSgUM8q+WHYxhoTnZfd7fBBR
	Uej9O+GQ==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyd-000470-B8; Mon, 16 Nov 2020 14:59:31 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 57/78] init: refactor devt_from_partuuid
Date: Mon, 16 Nov 2020 15:57:48 +0100
Message-Id: <20201116145809.410558-58-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The code in devt_from_partuuid is very convoluted.  Refactor it a bit by
cleaning up the goto labels and the variable naming.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 init/do_mounts.c | 68 ++++++++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/init/do_mounts.c b/init/do_mounts.c
index aef2f24461c7f1..afa26a4028d25e 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -105,13 +105,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
  */
 static dev_t devt_from_partuuid(const char *uuid_str)
 {
-	dev_t res = 0;
 	struct uuidcmp cmp;
 	struct device *dev = NULL;
-	struct gendisk *disk;
-	struct hd_struct *part;
+	dev_t devt = 0;
 	int offset = 0;
-	bool clear_root_wait = false;
 	char *slash;
 
 	cmp.uuid = uuid_str;
@@ -120,52 +117,49 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 	/* Check for optional partition number offset attributes. */
 	if (slash) {
 		char c = 0;
+
 		/* Explicitly fail on poor PARTUUID syntax. */
-		if (sscanf(slash + 1,
-			   "PARTNROFF=%d%c", &offset, &c) != 1) {
-			clear_root_wait = true;
-			goto done;
-		}
+		if (sscanf(slash + 1, "PARTNROFF=%d%c", &offset, &c) != 1)
+			goto clear_root_wait;
 		cmp.len = slash - uuid_str;
 	} else {
 		cmp.len = strlen(uuid_str);
 	}
 
-	if (!cmp.len) {
-		clear_root_wait = true;
-		goto done;
-	}
+	if (!cmp.len)
+		goto clear_root_wait;
 
-	dev = class_find_device(&block_class, NULL, &cmp,
-				&match_dev_by_uuid);
+	dev = class_find_device(&block_class, NULL, &cmp, &match_dev_by_uuid);
 	if (!dev)
-		goto done;
-
-	res = dev->devt;
+		return 0;
 
-	/* Attempt to find the partition by offset. */
-	if (!offset)
-		goto no_offset;
+	if (offset) {
+		/*
+		 * Attempt to find the requested partition by adding an offset
+		 * to the partition number found by UUID.
+		 */
+		struct hd_struct *part;
 
-	res = 0;
-	disk = part_to_disk(dev_to_part(dev));
-	part = disk_get_part(disk, dev_to_part(dev)->partno + offset);
-	if (part) {
-		res = part_devt(part);
-		put_device(part_to_dev(part));
+		part = disk_get_part(dev_to_disk(dev),
+				     dev_to_part(dev)->partno + offset);
+		if (part) {
+			devt = part_devt(part);
+			put_device(part_to_dev(part));
+		}
+	} else {
+		devt = dev->devt;
 	}
 
-no_offset:
 	put_device(dev);
-done:
-	if (clear_root_wait) {
-		pr_err("VFS: PARTUUID= is invalid.\n"
-		       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
-		if (root_wait)
-			pr_err("Disabling rootwait; root= is invalid.\n");
-		root_wait = 0;
-	}
-	return res;
+	return devt;
+
+clear_root_wait:
+	pr_err("VFS: PARTUUID= is invalid.\n"
+	       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
+	if (root_wait)
+		pr_err("Disabling rootwait; root= is invalid.\n");
+	root_wait = 0;
+	return 0;
 }
 
 /**
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28279.57384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAU-0006z3-JA; Mon, 16 Nov 2020 15:11:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28279.57384; Mon, 16 Nov 2020 15:11:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAT-0006wr-25; Mon, 16 Nov 2020 15:11:45 +0000
Received: by outflank-mailman (input) for mailman id 28279;
 Mon, 16 Nov 2020 15:11:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg1S-0006ni-6r
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e1cd4b6-f80e-4819-83f6-cebff75a7493;
 Mon, 16 Nov 2020 14:59:25 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyK-0003zL-Gc; Mon, 16 Nov 2020 14:59:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg1S-0006ni-6r
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:02:26 +0000
X-Inumbo-ID: 5e1cd4b6-f80e-4819-83f6-cebff75a7493
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5e1cd4b6-f80e-4819-83f6-cebff75a7493;
	Mon, 16 Nov 2020 14:59:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=PZFkyvf1145z+UDf1b5Hj+jOh1AFkHrQzn1ghiqKhDM=; b=nsafq+rGm1Rv8PLkOskpq2E1WT
	GblaGkEbRxaXt7tMa8lsB0T/f5O44rOftO550QMTTXAsaN06O33HbZWw8qR8BuIC5LY1K1bDXw7h0
	fXI0JFhYf8Q4CM2W+FyYMvt6rxUABlRFuwShsT5ZmrGbHYU704qrV+elTl5Y7tXYggWdyVa22D/65
	XFEMEvNDmXdMEgGAPYiI8eQrdp7u2i1IGbuQVMZNL0XJlhyMGFQuZahfLnp3Jg/O7cmqHPhKanuYM
	OxeKkCkWcJqNNHH4YMUIOwRSUBMRc+b1rwJCOum2gr3Iswu9BqnghgCeZoZpkFA0tR3BGhjo23NTg
	G+sn0B+g==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyK-0003zL-Gc; Mon, 16 Nov 2020 14:59:12 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 44/78] loop: use __register_blkdev to allocate devices on demand
Date: Mon, 16 Nov 2020 15:57:35 +0100
Message-Id: <20201116145809.410558-45-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use the simpler mechanism attached to major_name to allocate a loop device
when a currently unregistered minor is accessed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/loop.c | 30 ++++++++----------------------
 1 file changed, 8 insertions(+), 22 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 41caf799df721f..9a27d4f1c08aac 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2231,24 +2231,18 @@ static int loop_lookup(struct loop_device **l, int i)
 	return ret;
 }
 
-static struct kobject *loop_probe(dev_t dev, int *part, void *data)
+static void loop_probe(dev_t dev)
 {
+	int idx = MINOR(dev) >> part_shift;
 	struct loop_device *lo;
-	struct kobject *kobj;
-	int err;
+
+	if (max_loop && idx >= max_loop)
+		return;
 
 	mutex_lock(&loop_ctl_mutex);
-	err = loop_lookup(&lo, MINOR(dev) >> part_shift);
-	if (err < 0)
-		err = loop_add(&lo, MINOR(dev) >> part_shift);
-	if (err < 0)
-		kobj = NULL;
-	else
-		kobj = get_disk_and_module(lo->lo_disk);
+	if (loop_lookup(&lo, idx) < 0)
+		loop_add(&lo, idx);
 	mutex_unlock(&loop_ctl_mutex);
-
-	*part = 0;
-	return kobj;
 }
 
 static long loop_control_ioctl(struct file *file, unsigned int cmd,
@@ -2368,14 +2362,11 @@ static int __init loop_init(void)
 		goto err_out;
 
 
-	if (register_blkdev(LOOP_MAJOR, "loop")) {
+	if (__register_blkdev(LOOP_MAJOR, "loop", loop_probe)) {
 		err = -EIO;
 		goto misc_out;
 	}
 
-	blk_register_region(MKDEV(LOOP_MAJOR, 0), range,
-				  THIS_MODULE, loop_probe, NULL, NULL);
-
 	/* pre-create number of devices given by config or max_loop */
 	mutex_lock(&loop_ctl_mutex);
 	for (i = 0; i < nr; i++)
@@ -2401,16 +2392,11 @@ static int loop_exit_cb(int id, void *ptr, void *data)
 
 static void __exit loop_exit(void)
 {
-	unsigned long range;
-
-	range = max_loop ? max_loop << part_shift : 1UL << MINORBITS;
-
 	mutex_lock(&loop_ctl_mutex);
 
 	idr_for_each(&loop_index_idr, &loop_exit_cb, NULL);
 	idr_destroy(&loop_index_idr);
 
-	blk_unregister_region(MKDEV(LOOP_MAJOR, 0), range);
 	unregister_blkdev(LOOP_MAJOR, "loop");
 
 	misc_deregister(&loop_misc);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28281.57396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAX-00077e-Gd; Mon, 16 Nov 2020 15:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28281.57396; Mon, 16 Nov 2020 15:11:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAV-000754-SO; Mon, 16 Nov 2020 15:11:47 +0000
Received: by outflank-mailman (input) for mailman id 28281;
 Mon, 16 Nov 2020 15:11:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg4M-0006ni-Dp
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:26 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac949361-6422-4038-958e-2c49273c779e;
 Mon, 16 Nov 2020 15:00:12 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyy-0004HD-T0; Mon, 16 Nov 2020 14:59:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg4M-0006ni-Dp
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:26 +0000
X-Inumbo-ID: ac949361-6422-4038-958e-2c49273c779e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ac949361-6422-4038-958e-2c49273c779e;
	Mon, 16 Nov 2020 15:00:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=MRrZvPHBJ7MmOJqwCwh6BOwzTdjIUug0RFrQfdUF4D0=; b=VdELZpYq+wC9PRGd7E7gW7f022
	iWbf/yBwlwECTqIWeSXdM3vxnd7C5rUPQt4zYgr4w4y5VS7aEw4XAXZZL1+gLjTfVzbErgItFolKx
	NoEr+DW9LtrWG9rcze2kkbyopscePeN91WRL96HJxHAuEnkYQzapbI9qst3MaFX6akHBdBUBm7/eb
	Ud1nBnozpN0bnLBzUbAJlQGLF7WvSR9ySOqhWBvr5OEnKwbk0b1hc36z3Q1K2nfJ5d/hpkLfSI8aB
	GuaLrCyXWDM01VUM6hy+z+KyIliSIUOnGWO2OtY/JMGDpZLn+Yks0DpKHFYl+4qlADIgIFPwiG6NQ
	nFJMH8wg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefyy-0004HD-T0; Mon, 16 Nov 2020 14:59:53 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 71/78] block: add a bdev_kobj helper
Date: Mon, 16 Nov 2020 15:58:02 +0100
Message-Id: <20201116145809.410558-72-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a little helper to find the kobject for a struct block_device.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/super.c |  7 ++-----
 drivers/md/md.c           |  4 +---
 fs/btrfs/sysfs.c          | 15 +++------------
 include/linux/blk_types.h |  3 +++
 4 files changed, 9 insertions(+), 20 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index ea2b80c3e44c38..f6edacc81527c7 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1447,8 +1447,7 @@ static int register_bdev(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
 		goto err;
 
 	err = "error creating kobject";
-	if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj,
-			"bcache"))
+	if (kobject_add(&dc->disk.kobj, bdev_kobj(bdev), "bcache"))
 		goto err;
 	if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj))
 		goto err;
@@ -2342,9 +2341,7 @@ static int register_cache(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
 		goto err;
 	}
 
-	if (kobject_add(&ca->kobj,
-			&part_to_dev(bdev->bd_part)->kobj,
-			"bcache")) {
+	if (kobject_add(&ca->kobj, bdev_kobj(bdev), "bcache")) {
 		err = "error calling kobject_add";
 		ret = -ENOMEM;
 		goto out;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index b2edf5e0f965b5..7ce6047c856ea2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2414,7 +2414,6 @@ EXPORT_SYMBOL(md_integrity_add_rdev);
 static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
 {
 	char b[BDEVNAME_SIZE];
-	struct kobject *ko;
 	int err;
 
 	/* prevent duplicates */
@@ -2477,9 +2476,8 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
 	if ((err = kobject_add(&rdev->kobj, &mddev->kobj, "dev-%s", b)))
 		goto fail;
 
-	ko = &part_to_dev(rdev->bdev->bd_part)->kobj;
 	/* failure here is OK */
-	err = sysfs_create_link(&rdev->kobj, ko, "block");
+	err = sysfs_create_link(&rdev->kobj, bdev_kobj(rdev->bdev), "block");
 	rdev->sysfs_state = sysfs_get_dirent_safe(rdev->kobj.sd, "state");
 	rdev->sysfs_unack_badblocks =
 		sysfs_get_dirent_safe(rdev->kobj.sd, "unacknowledged_bad_blocks");
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index 279d9262b676d4..24b6c6dc69000a 100644
--- a/fs/btrfs/sysfs.c
+++ b/fs/btrfs/sysfs.c
@@ -1232,8 +1232,6 @@ int btrfs_sysfs_add_space_info_type(struct btrfs_fs_info *fs_info,
 
 void btrfs_sysfs_remove_device(struct btrfs_device *device)
 {
-	struct hd_struct *disk;
-	struct kobject *disk_kobj;
 	struct kobject *devices_kobj;
 
 	/*
@@ -1243,11 +1241,8 @@ void btrfs_sysfs_remove_device(struct btrfs_device *device)
 	devices_kobj = device->fs_info->fs_devices->devices_kobj;
 	ASSERT(devices_kobj);
 
-	if (device->bdev) {
-		disk = device->bdev->bd_part;
-		disk_kobj = &part_to_dev(disk)->kobj;
-		sysfs_remove_link(devices_kobj, disk_kobj->name);
-	}
+	if (device->bdev)
+		sysfs_remove_link(devices_kobj, bdev_kobj(device->bdev)->name);
 
 	if (device->devid_kobj.state_initialized) {
 		kobject_del(&device->devid_kobj);
@@ -1353,11 +1348,7 @@ int btrfs_sysfs_add_device(struct btrfs_device *device)
 	nofs_flag = memalloc_nofs_save();
 
 	if (device->bdev) {
-		struct hd_struct *disk;
-		struct kobject *disk_kobj;
-
-		disk = device->bdev->bd_part;
-		disk_kobj = &part_to_dev(disk)->kobj;
+		struct kobject *disk_kobj = bdev_kobj(device->bdev);
 
 		ret = sysfs_create_link(devices_kobj, disk_kobj, disk_kobj->name);
 		if (ret) {
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 0735e335ca6c0a..5a5ccacb804cdb 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -49,6 +49,9 @@ struct block_device {
 #define bdev_whole(_bdev) \
 	((_bdev)->bd_disk->part0.bdev)
 
+#define bdev_kobj(_bdev) \
+	(&part_to_dev((_bdev)->bd_part)->kobj)
+
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
  * Alpha cannot write a byte atomically, so we need to use 32-bit value.
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28282.57413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAb-0007JV-Kh; Mon, 16 Nov 2020 15:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28282.57413; Mon, 16 Nov 2020 15:11:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAa-0007Hv-4B; Mon, 16 Nov 2020 15:11:52 +0000
Received: by outflank-mailman (input) for mailman id 28282;
 Mon, 16 Nov 2020 15:11:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0j-0006ni-5P
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 413e75f1-1a97-4439-9c30-58a420666853;
 Mon, 16 Nov 2020 14:59:15 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy7-0003wG-IY; Mon, 16 Nov 2020 14:58:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg0j-0006ni-5P
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:41 +0000
X-Inumbo-ID: 413e75f1-1a97-4439-9c30-58a420666853
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 413e75f1-1a97-4439-9c30-58a420666853;
	Mon, 16 Nov 2020 14:59:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=ceSruCVWzLax0XFB3PV+YChHyJOUoDTFl1AuzUG7uKo=; b=CW2g8R0fi9IlSMs6PB15CroPmB
	VwoaOYD9bMi1DVWfaTM4vlulUe111h2RI3tztOT5WMq0qsSqGFH2pFQe0NOcqNLdex+g9q4BMMVZP
	/6TuwzASGMA1SrkPTvS7l/wvEgTeWNYqEdTqZb/ZtDyfrlGP270HGrcmH7UeWoKasPCt9CokG+QBB
	a2OdxXHzodz4vx5g/ZqCX44oTJtSk/OzRCJ8dKXSw80cEaDg4gDxz1oQY0QautKMW+b4o+TDgBAIv
	i8SHCduCi7NL+7FPVx6mbW4O37u1Dt6jRtYD7qdW2goUC6ahFPGml0cbg+8h8UcWX3F//bKVJttjN
	HevPpOdA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefy7-0003wG-IY; Mon, 16 Nov 2020 14:58:59 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 35/78] block: cleanup del_gendisk a bit
Date: Mon, 16 Nov 2020 15:57:26 +0100
Message-Id: <20201116145809.410558-36-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Merge three hidden gendisk checks into one.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 block/genhd.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index b0f0b0cac9aa7f..8180195b76634b 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -892,6 +892,9 @@ void del_gendisk(struct gendisk *disk)
 
 	might_sleep();
 
+	if (WARN_ON_ONCE(!disk->queue))
+		return;
+
 	blk_integrity_del(disk);
 	disk_del_events(disk);
 
@@ -914,20 +917,18 @@ void del_gendisk(struct gendisk *disk)
 	disk->flags &= ~GENHD_FL_UP;
 	up_write(&disk->lookup_sem);
 
-	if (!(disk->flags & GENHD_FL_HIDDEN))
+	if (!(disk->flags & GENHD_FL_HIDDEN)) {
 		sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
-	if (disk->queue) {
+
 		/*
 		 * Unregister bdi before releasing device numbers (as they can
 		 * get reused and we'd get clashes in sysfs).
 		 */
-		if (!(disk->flags & GENHD_FL_HIDDEN))
-			bdi_unregister(disk->queue->backing_dev_info);
-		blk_unregister_queue(disk);
-	} else {
-		WARN_ON(1);
+		bdi_unregister(disk->queue->backing_dev_info);
 	}
 
+	blk_unregister_queue(disk);
+
 	if (!(disk->flags & GENHD_FL_HIDDEN))
 		blk_unregister_region(disk_devt(disk), disk->minors);
 	/*
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28288.57420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAd-0007QM-Sh; Mon, 16 Nov 2020 15:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28288.57420; Mon, 16 Nov 2020 15:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAc-0007Od-JI; Mon, 16 Nov 2020 15:11:54 +0000
Received: by outflank-mailman (input) for mailman id 28288;
 Mon, 16 Nov 2020 15:11:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg00-0006ni-3x
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66e666a7-7255-447e-ab65-da33c1738cbf;
 Mon, 16 Nov 2020 14:59:04 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefxx-0003tO-1m; Mon, 16 Nov 2020 14:58:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg00-0006ni-3x
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:00:56 +0000
X-Inumbo-ID: 66e666a7-7255-447e-ab65-da33c1738cbf
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 66e666a7-7255-447e-ab65-da33c1738cbf;
	Mon, 16 Nov 2020 14:59:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=haCFfej0iNxilydygMyUNjj0Bzl5Tk5LXJ0r2tE8cD8=; b=LW4/6M2BzdkZTdqoW9FpaTex6c
	yeKDdiRW6YnjlyAChX8LOEfPaouAWA0tFFvwyKbNtb7HoVUHscIVNlHgeVi2r165V1Tgx2aKkdZoP
	5MjECd6pnbCNmIKVL6LmyxCwiO5++Zt+k2n/pjncfbsde1PoD8Cu5ziFp6YpuimOpFnfiFXcluzY4
	HbgmfYC3nd/XNICvtV7c6pBxIpTCRYdTKbASCAZ7TD3/gUoOQTvaAnYemXljNQV80vURAaL809+Vh
	hU7hK9mcTqnz4YcJiM0y5yyLVhx38oqqfkWNTgrF7gN0aHG6mAnvRtyuEG1/Rhv1Yoe0OxefT5aK2
	GBVJ9+jg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefxx-0003tO-1m; Mon, 16 Nov 2020 14:58:49 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 28/78] md: implement ->set_read_only to hook into BLKROSET processing
Date: Mon, 16 Nov 2020 15:57:19 +0100
Message-Id: <20201116145809.410558-29-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Implement the ->set_read_only method instead of parsing the BLKROSET
ioctl command directly in md_ioctl.
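The hook-based dispatch this patch moves to can be sketched outside the
kernel as follows. All names below ("fake_" types, FAKE-prefixed errno) are
simplified stand-ins invented for illustration, not the real block-layer API:

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel types; illustrative only. */
struct fake_bdev;

struct fake_block_ops {
	/* optional driver hook, called by the core's BLKROSET processing */
	int (*set_read_only)(struct fake_bdev *bdev, int ro);
};

struct fake_bdev {
	const struct fake_block_ops *ops;
	int ro; /* device read-only flag, flipped by the core */
};

/* example driver hook: accept the transition unconditionally */
int fake_md_set_read_only(struct fake_bdev *bdev, int ro)
{
	(void)bdev;
	(void)ro;
	return 0; /* 0 = allow the core to flip the read-only flag */
}

/* core-side processing: consult the hook, then update state centrally */
int fake_blkroset(struct fake_bdev *bdev, int ro)
{
	if (bdev->ops && bdev->ops->set_read_only) {
		int err = bdev->ops->set_read_only(bdev, ro);
		if (err)
			return err; /* driver vetoed the transition */
	}
	bdev->ro = ro;
	return 0;
}
```

The point of the pattern: the ioctl parsing lives once in the core, and a
driver only supplies the policy callback instead of re-decoding BLKROSET
itself.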

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/md.c | 62 ++++++++++++++++++++++++-------------------------
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 32e375d50fee17..fa31b71a72a35d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7477,7 +7477,6 @@ static inline bool md_ioctl_valid(unsigned int cmd)
 {
 	switch (cmd) {
 	case ADD_NEW_DISK:
-	case BLKROSET:
 	case GET_ARRAY_INFO:
 	case GET_BITMAP_FILE:
 	case GET_DISK_INFO:
@@ -7504,7 +7503,6 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
 	int err = 0;
 	void __user *argp = (void __user *)arg;
 	struct mddev *mddev = NULL;
-	int ro;
 	bool did_set_md_closing = false;
 
 	if (!md_ioctl_valid(cmd))
@@ -7684,35 +7682,6 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
 			goto unlock;
 		}
 		break;
-
-	case BLKROSET:
-		if (get_user(ro, (int __user *)(arg))) {
-			err = -EFAULT;
-			goto unlock;
-		}
-		err = -EINVAL;
-
-		/* if the bdev is going readonly the value of mddev->ro
-		 * does not matter, no writes are coming
-		 */
-		if (ro)
-			goto unlock;
-
-		/* are we are already prepared for writes? */
-		if (mddev->ro != 1)
-			goto unlock;
-
-		/* transitioning to readauto need only happen for
-		 * arrays that call md_write_start
-		 */
-		if (mddev->pers) {
-			err = restart_array(mddev);
-			if (err == 0) {
-				mddev->ro = 2;
-				set_disk_ro(mddev->gendisk, 0);
-			}
-		}
-		goto unlock;
 	}
 
 	/*
@@ -7806,6 +7775,36 @@ static int md_compat_ioctl(struct block_device *bdev, fmode_t mode,
 }
 #endif /* CONFIG_COMPAT */
 
+static int md_set_read_only(struct block_device *bdev, bool ro)
+{
+	struct mddev *mddev = bdev->bd_disk->private_data;
+	int err;
+
+	err = mddev_lock(mddev);
+	if (err)
+		return err;
+
+	if (!mddev->raid_disks && !mddev->external) {
+		err = -ENODEV;
+		goto out_unlock;
+	}
+
+	/*
+	 * Transitioning to read-auto need only happen for arrays that call
+	 * md_write_start and which are not ready for writes yet.
+	 */
+	if (!ro && mddev->ro == 1 && mddev->pers) {
+		err = restart_array(mddev);
+		if (err)
+			goto out_unlock;
+		mddev->ro = 2;
+	}
+
+out_unlock:
+	mddev_unlock(mddev);
+	return err;
+}
+
 static int md_open(struct block_device *bdev, fmode_t mode)
 {
 	/*
@@ -7883,6 +7882,7 @@ const struct block_device_operations md_fops =
 #endif
 	.getgeo		= md_getgeo,
 	.check_events	= md_check_events,
+	.set_read_only	= md_set_read_only,
 };
 
 static int md_thread(void *arg)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:11:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:11:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28293.57434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAf-0007Wr-TM; Mon, 16 Nov 2020 15:11:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28293.57434; Mon, 16 Nov 2020 15:11:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAe-0007V7-Mh; Mon, 16 Nov 2020 15:11:56 +0000
Received: by outflank-mailman (input) for mailman id 28293;
 Mon, 16 Nov 2020 15:11:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2B-0006ni-8n
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05399526-000e-43e5-a4d1-f85a6c61ce3f;
 Mon, 16 Nov 2020 14:59:36 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyU-00043H-JC; Mon, 16 Nov 2020 14:59:23 +0000
X-Inumbo-ID: 05399526-000e-43e5-a4d1-f85a6c61ce3f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=G/1St6AKcpua/c8qIsi4IENao5SAFeNTsicWvdwJGKA=; b=cAGojbspvBWLkhCqOD3ExDenEy
	hsbkbexTXugOliCWH3wwCYbwxpTnVtQvHgLrTwVXI7TrPcURLYokj4yyAxwxnnG9BNHdwGnWa8IJp
	dYa+F3Gfpccic7jbe1Y8gwBkRHM+oEunUjIG3WI5m5UONDYn9V9VBRbIHSxNQhGM/1JRKjD49Xkx0
	0pBHsNUFU/UPntTwF6P/GQ0sJw5J/q2+RSCrhYYdXSuMK/KdhdMuk9LCQiscZHIVOefv7v6KYkI86
	OblDiO3zLuOm232EWEv9/powerW9w8u+EGbE5WQyTuk7Cg+xA59dLIs/S/NWj2DLWnBN8UoHAueaN
	ZQN++kOA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 51/78] z2ram: use separate gendisk for the different modes
Date: Mon, 16 Nov 2020 15:57:42 +0100
Message-Id: <20201116145809.410558-52-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use separate gendisks (which share a tag_set) for the different operating
modes instead of redirecting the gendisk lookup using a probe callback.
This avoids potential problems with aliased block_device instances and
will eventually allow removing the blk_register_region framework.
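The reworked init flow above (one shared tag set, one gendisk per minor,
registration fatal only for the first disk) can be modelled in a toy
userspace sketch. Every name here is invented for illustration and does not
match the real kernel structures:

```c
#include <stdio.h>

/* Toy model of the reworked z2ram init path; illustrative names only. */
#define NMINORS 4

struct fake_tag_set { int queue_depth; };

struct fake_gendisk {
	char name[16];
	int minor;
	const struct fake_tag_set *tag_set; /* shared across all disks */
};

struct fake_gendisk fake_disks[NMINORS];

/* one disk per minor, every queue built from the same tag set */
int fake_register_disk(const struct fake_tag_set *ts, int minor)
{
	struct fake_gendisk *disk = &fake_disks[minor];

	if (minor)
		snprintf(disk->name, sizeof(disk->name), "z2ram%d", minor);
	else
		snprintf(disk->name, sizeof(disk->name), "z2ram");
	disk->minor = minor;
	disk->tag_set = ts;
	return 0;
}

int fake_init(const struct fake_tag_set *ts)
{
	for (int i = 0; i < NMINORS; i++) {
		int ret = fake_register_disk(ts, i);
		/* mirror the patch: only a failure of the first disk is fatal */
		if (ret && i == 0)
			return ret;
	}
	return 0;
}
```

Because each minor gets its own gendisk up front, no probe callback is
needed to resolve a dev_t to a disk at open time.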

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/z2ram.c | 100 ++++++++++++++++++++++++------------------
 1 file changed, 58 insertions(+), 42 deletions(-)

diff --git a/drivers/block/z2ram.c b/drivers/block/z2ram.c
index eafecc9a72b38d..c1d20818e64920 100644
--- a/drivers/block/z2ram.c
+++ b/drivers/block/z2ram.c
@@ -63,7 +63,7 @@ static int current_device = -1;
 
 static DEFINE_SPINLOCK(z2ram_lock);
 
-static struct gendisk *z2ram_gendisk;
+static struct gendisk *z2ram_gendisk[Z2MINOR_COUNT];
 
 static blk_status_t z2_queue_rq(struct blk_mq_hw_ctx *hctx,
 				const struct blk_mq_queue_data *bd)
@@ -283,7 +283,7 @@ static int z2_open(struct block_device *bdev, fmode_t mode)
 
 		current_device = device;
 		z2ram_size <<= Z2RAM_CHUNKSHIFT;
-		set_capacity(z2ram_gendisk, z2ram_size >> 9);
+		set_capacity(z2ram_gendisk[device], z2ram_size >> 9);
 	}
 
 	mutex_unlock(&z2ram_mutex);
@@ -315,71 +315,87 @@ static const struct block_device_operations z2_fops = {
 	.release = z2_release,
 };
 
-static struct kobject *z2_find(dev_t dev, int *part, void *data)
-{
-	*part = 0;
-	return get_disk_and_module(z2ram_gendisk);
-}
-
-static struct request_queue *z2_queue;
 static struct blk_mq_tag_set tag_set;
 
 static const struct blk_mq_ops z2_mq_ops = {
 	.queue_rq = z2_queue_rq,
 };
 
+static int z2ram_register_disk(int minor)
+{
+	struct request_queue *q;
+	struct gendisk *disk;
+
+	disk = alloc_disk(1);
+	if (!disk)
+		return -ENOMEM;
+
+	q = blk_mq_init_queue(&tag_set);
+	if (IS_ERR(q)) {
+		put_disk(disk);
+		return PTR_ERR(q);
+	}
+
+	disk->major = Z2RAM_MAJOR;
+	disk->first_minor = minor;
+	disk->fops = &z2_fops;
+	if (minor)
+		sprintf(disk->disk_name, "z2ram%d", minor);
+	else
+		sprintf(disk->disk_name, "z2ram");
+	disk->queue = q;
+
+	z2ram_gendisk[minor] = disk;
+	add_disk(disk);
+	return 0;
+}
+
 static int __init z2_init(void)
 {
-	int ret;
+	int ret, i;
 
 	if (!MACH_IS_AMIGA)
 		return -ENODEV;
 
-	ret = -EBUSY;
 	if (register_blkdev(Z2RAM_MAJOR, DEVICE_NAME))
-		goto err;
-
-	ret = -ENOMEM;
-	z2ram_gendisk = alloc_disk(1);
-	if (!z2ram_gendisk)
-		goto out_disk;
-
-	z2_queue = blk_mq_init_sq_queue(&tag_set, &z2_mq_ops, 16,
-					BLK_MQ_F_SHOULD_MERGE);
-	if (IS_ERR(z2_queue)) {
-		ret = PTR_ERR(z2_queue);
-		z2_queue = NULL;
-		goto out_queue;
+		return -EBUSY;
+
+	tag_set.ops = &z2_mq_ops;
+	tag_set.nr_hw_queues = 1;
+	tag_set.nr_maps = 1;
+	tag_set.queue_depth = 16;
+	tag_set.numa_node = NUMA_NO_NODE;
+	tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	ret = blk_mq_alloc_tag_set(&tag_set);
+	if (ret)
+		goto out_unregister_blkdev;
+
+	for (i = 0; i < Z2MINOR_COUNT; i++) {
+		ret = z2ram_register_disk(i);
+		if (ret && i == 0)
+			goto out_free_tagset;
 	}
 
-	z2ram_gendisk->major = Z2RAM_MAJOR;
-	z2ram_gendisk->first_minor = 0;
-	z2ram_gendisk->fops = &z2_fops;
-	sprintf(z2ram_gendisk->disk_name, "z2ram");
-
-	z2ram_gendisk->queue = z2_queue;
-	add_disk(z2ram_gendisk);
-	blk_register_region(MKDEV(Z2RAM_MAJOR, 0), Z2MINOR_COUNT, THIS_MODULE,
-			    z2_find, NULL, NULL);
-
 	return 0;
 
-out_queue:
-	put_disk(z2ram_gendisk);
-out_disk:
+out_free_tagset:
+	blk_mq_free_tag_set(&tag_set);
+out_unregister_blkdev:
 	unregister_blkdev(Z2RAM_MAJOR, DEVICE_NAME);
-err:
 	return ret;
 }
 
 static void __exit z2_exit(void)
 {
 	int i, j;
-	blk_unregister_region(MKDEV(Z2RAM_MAJOR, 0), Z2MINOR_COUNT);
+
 	unregister_blkdev(Z2RAM_MAJOR, DEVICE_NAME);
-	del_gendisk(z2ram_gendisk);
-	put_disk(z2ram_gendisk);
-	blk_cleanup_queue(z2_queue);
+
+	for (i = 0; i < Z2MINOR_COUNT; i++) {
+		del_gendisk(z2ram_gendisk[i]);
+		blk_cleanup_queue(z2ram_gendisk[i]->queue);
+		put_disk(z2ram_gendisk[i]);
+	}
 	blk_mq_free_tag_set(&tag_set);
 
 	if (current_device != -1) {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:12:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28300.57443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAi-0007cB-0h; Mon, 16 Nov 2020 15:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28300.57443; Mon, 16 Nov 2020 15:11:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAg-0007ah-BA; Mon, 16 Nov 2020 15:11:58 +0000
Received: by outflank-mailman (input) for mailman id 28300;
 Mon, 16 Nov 2020 15:11:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2z-0006ni-Ag
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:04:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c72fc1de-9677-4916-8246-087815331960;
 Mon, 16 Nov 2020 14:59:47 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefyh-00049g-7K; Mon, 16 Nov 2020 14:59:35 +0000
X-Inumbo-ID: c72fc1de-9677-4916-8246-087815331960
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=0e+oH+92GlExM9i4SIPWTJvhHTDsEZRU6nAY6yeduas=; b=mMFTZRsVcAqmf+VlCpySbI4/AO
	Lvk1YHPUUx2pLWe6CEfyYTfSvNVAKhdEDtRNYgv0KknubQs4EEfe4Ww43hMMGLiSPbCzUY8fYe5/w
	ImRQodVw+JPRr2LxSfoDq1Ob9sBNVvCDmhGgUHYNdyodB9r7BgLKKuS1T5JHMOpydlwgeRX6CcH24
	Xj6k8bM1XYWX3OFouuNz5g4QXxFiyroDeYHmHTh09EcbYvOCQHjjCyVR/50jwzxvRUfG7haNExoXj
	/C/YqFeSVuEWf5BDvfiyEyGDUrvIYpPAoZpqWlPGo4+IH/9U+ubwIYwcVjNgFnqzPbQcgexYh8oWx
	MBTM2Vjg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 59/78] mtip32xx: remove the call to fsync_bdev on removal
Date: Mon, 16 Nov 2020 15:57:50 +0100
Message-Id: <20201116145809.410558-60-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

del_gendisk already calls fsync_bdev for every partition, so there is
no need to do it twice.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/mtip32xx/mtip32xx.c | 15 ---------------
 drivers/block/mtip32xx/mtip32xx.h |  2 --
 2 files changed, 17 deletions(-)

diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 153e2cdecb4d40..53ac59d19ae530 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3687,7 +3687,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 	/* Enable the block device and add it to /dev */
 	device_add_disk(&dd->pdev->dev, dd->disk, NULL);
 
-	dd->bdev = bdget_disk(dd->disk, 0);
 	/*
 	 * Now that the disk is active, initialize any sysfs attributes
 	 * managed by the protocol layer.
@@ -3721,9 +3720,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 	return rv;
 
 kthread_run_error:
-	bdput(dd->bdev);
-	dd->bdev = NULL;
-
 	/* Delete our gendisk. This also removes the device from /dev */
 	del_gendisk(dd->disk);
 
@@ -3804,14 +3800,6 @@ static int mtip_block_remove(struct driver_data *dd)
 	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
 	blk_mq_unquiesce_queue(dd->queue);
 
-	/*
-	 * Delete our gendisk structure. This also removes the device
-	 * from /dev
-	 */
-	if (dd->bdev) {
-		bdput(dd->bdev);
-		dd->bdev = NULL;
-	}
 	if (dd->disk) {
 		if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
 			del_gendisk(dd->disk);
@@ -4206,9 +4194,6 @@ static void mtip_pci_remove(struct pci_dev *pdev)
 	} while (atomic_read(&dd->irq_workers_active) != 0 &&
 		time_before(jiffies, to));
 
-	if (!dd->sr)
-		fsync_bdev(dd->bdev);
-
 	if (atomic_read(&dd->irq_workers_active) != 0) {
 		dev_warn(&dd->pdev->dev,
 			"Completion workers still active!\n");
diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h
index e22a7f0523bf30..88f4206310e4c8 100644
--- a/drivers/block/mtip32xx/mtip32xx.h
+++ b/drivers/block/mtip32xx/mtip32xx.h
@@ -463,8 +463,6 @@ struct driver_data {
 
 	int isr_binding;
 
-	struct block_device *bdev;
-
 	struct list_head online_list; /* linkage for online list */
 
 	struct list_head remove_list; /* linkage for removing list */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:12:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:12:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28307.57457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAl-0007mW-Pr; Mon, 16 Nov 2020 15:12:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28307.57457; Mon, 16 Nov 2020 15:12:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAj-0007jv-KL; Mon, 16 Nov 2020 15:12:01 +0000
Received: by outflank-mailman (input) for mailman id 28307;
 Mon, 16 Nov 2020 15:11:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg0K-0006ni-4T
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:01:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ae54633-16ef-4579-bc5b-38c9211f7ba2;
 Mon, 16 Nov 2020 14:59:10 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefy4-0003vL-Dh; Mon, 16 Nov 2020 14:58:56 +0000
X-Inumbo-ID: 6ae54633-16ef-4579-bc5b-38c9211f7ba2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=hmW8AusT819Hpp1U0vQRnp8T84EK6KuhlshbO+/HZ1o=; b=s3PyuNumv2XJNuimh+ngj+d/f6
	scNE+kFuc6xkgKNnr1zZ7z2rWNuEJjtRjoF8etdpy5umK3setslsL6lUQiK5cAVCegzHKOP2tNVRN
	1Sa9ShGf1/F5HMkg28qrnE9RKWnptAc1NOgcS5xzLqKhJFI4gE0I6jkqsNfuoY0Yg/d/tfTnu2g27
	+3jL897Lg+GjB+qFNGwVEtM3B26pX5XnYLr43N09shWkl6V+i4C3a3gB1Gh7uFE1MmG1EyGX2j2HD
	PQjkAdNsC8nET4kWMgCjPetxyfW4P8cNls2hPSf7oPeT4Bi2Ng0klNLw8Pn/NUGmnPBoTRVCulBmk
	W/SI5tEw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 33/78] block: remove __blkdev_driver_ioctl
Date: Mon, 16 Nov 2020 15:57:24 +0100
Message-Id: <20201116145809.410558-34-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just open-code it in the few callers.
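The removed helper was a trivial forwarder, so each caller now inlines the
NULL check and the method call. A minimal, self-contained sketch of that
dispatch pattern follows; the "fake_" names and FAKE_ENOTTY constant are
invented for illustration, not the kernel API:

```c
#include <stddef.h>

#define FAKE_ENOTTY (-25) /* stand-in for the kernel's -ENOTTY */

struct fake_disk;

struct fake_fops {
	/* optional per-driver ioctl handler */
	int (*ioctl)(struct fake_disk *disk, unsigned int cmd,
		     unsigned long arg);
};

struct fake_disk {
	const struct fake_fops *fops;
};

/* what __blkdev_driver_ioctl() did, now open-coded at each call site */
int fake_forward_ioctl(struct fake_disk *disk, unsigned int cmd,
		       unsigned long arg)
{
	if (!disk->fops->ioctl)
		return FAKE_ENOTTY; /* driver registered no handler */
	return disk->fops->ioctl(disk, cmd, arg);
}

/* trivial example handler that just echoes the command number back */
int fake_echo_ioctl(struct fake_disk *disk, unsigned int cmd,
		    unsigned long arg)
{
	(void)disk;
	(void)arg;
	return (int)cmd;
}
```

Since the helper carried no state and no locking, inlining these two lines
at each call site costs nothing and lets the exported symbol go away.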

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c               | 25 +++++--------------------
 drivers/block/pktcdvd.c     |  6 ++++--
 drivers/md/bcache/request.c |  5 +++--
 drivers/md/dm.c             |  5 ++++-
 include/linux/blkdev.h      |  2 --
 5 files changed, 16 insertions(+), 27 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 04255dc5f3bff3..6b785181344fe1 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -219,23 +219,6 @@ static int compat_put_ulong(compat_ulong_t __user *argp, compat_ulong_t val)
 }
 #endif
 
-int __blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode,
-			unsigned cmd, unsigned long arg)
-{
-	struct gendisk *disk = bdev->bd_disk;
-
-	if (disk->fops->ioctl)
-		return disk->fops->ioctl(bdev, mode, cmd, arg);
-
-	return -ENOTTY;
-}
-/*
- * For the record: _GPL here is only because somebody decided to slap it
- * on the previous export.  Sheer idiocy, since it wasn't copyrightable
- * at all and could be open-coded without any exports by anybody who cares.
- */
-EXPORT_SYMBOL_GPL(__blkdev_driver_ioctl);
-
 #ifdef CONFIG_COMPAT
 /*
  * This is the equivalent of compat_ptr_ioctl(), to be used by block
@@ -594,10 +577,12 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
 	}
 
 	ret = blkdev_common_ioctl(bdev, mode, cmd, arg, argp);
-	if (ret == -ENOIOCTLCMD)
-		return __blkdev_driver_ioctl(bdev, mode, cmd, arg);
+	if (ret != -ENOIOCTLCMD)
+		return ret;
 
-	return ret;
+	if (!bdev->bd_disk->fops->ioctl)
+		return -ENOTTY;
+	return bdev->bd_disk->fops->ioctl(bdev, mode, cmd, arg);
 }
 EXPORT_SYMBOL_GPL(blkdev_ioctl); /* for /dev/raw */
 
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 4326401cede445..b8bb8ec7538d9b 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2583,9 +2583,11 @@ static int pkt_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd,
 	case CDROM_LAST_WRITTEN:
 	case CDROM_SEND_PACKET:
 	case SCSI_IOCTL_SEND_COMMAND:
-		ret = __blkdev_driver_ioctl(pd->bdev, mode, cmd, arg);
+		if (!bdev->bd_disk->fops->ioctl)
+			ret = -ENOTTY;
+		else
+			ret = bdev->bd_disk->fops->ioctl(bdev, mode, cmd, arg);
 		break;
-
 	default:
 		pkt_dbg(2, pd, "Unknown ioctl (%x)\n", cmd);
 		ret = -ENOTTY;
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 21432638314562..afac8d07c1bd00 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -1230,8 +1230,9 @@ static int cached_dev_ioctl(struct bcache_device *d, fmode_t mode,
 
 	if (dc->io_disable)
 		return -EIO;
-
-	return __blkdev_driver_ioctl(dc->bdev, mode, cmd, arg);
+	if (!dc->bdev->bd_disk->fops->ioctl)
+		return -ENOTTY;
+	return dc->bdev->bd_disk->fops->ioctl(dc->bdev, mode, cmd, arg);
 }
 
 void bch_cached_dev_request_init(struct cached_dev *dc)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 62ad44925e73ec..54739f1b579bc8 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -570,7 +570,10 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
 		}
 	}
 
-	r =  __blkdev_driver_ioctl(bdev, mode, cmd, arg);
+	if (!bdev->bd_disk->fops->ioctl)
+		r = -ENOTTY;
+	else
+		r = bdev->bd_disk->fops->ioctl(bdev, mode, cmd, arg);
 out:
 	dm_unprepare_ioctl(md, srcu_idx);
 	return r;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5c1ba8a8d2bc7e..05b346a68c2eee 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1867,8 +1867,6 @@ extern int blkdev_compat_ptr_ioctl(struct block_device *, fmode_t,
 #define blkdev_compat_ptr_ioctl NULL
 #endif
 
-extern int __blkdev_driver_ioctl(struct block_device *, fmode_t, unsigned int,
-				 unsigned long);
 extern int bdev_read_page(struct block_device *, sector_t, struct page *);
 extern int bdev_write_page(struct block_device *, sector_t, struct page *,
 						struct writeback_control *);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:12:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28311.57474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAo-0007xL-Rn; Mon, 16 Nov 2020 15:12:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28311.57474; Mon, 16 Nov 2020 15:12:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAn-0007v8-Ox; Mon, 16 Nov 2020 15:12:05 +0000
Received: by outflank-mailman (input) for mailman id 28311;
 Mon, 16 Nov 2020 15:11:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg2V-0006ni-9J
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:03:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e590672-b054-412a-8d04-48508d6d95b0;
 Mon, 16 Nov 2020 14:59:40 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefya-00045V-AA; Mon, 16 Nov 2020 14:59:28 +0000
X-Inumbo-ID: 6e590672-b054-412a-8d04-48508d6d95b0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=AuzCwkw90Oy6IPNhoiC+qBglSTsZM+HyoDoCPgOfof0=; b=Gei8YMmy1pPuUmlC/BZRAtHClX
	92uN480odx4m65CCc4Tfb2n7qkWXXDlvv232pz3oEAX+XbC7INmbsQu431O6VUovjCbxRXS/6+CGm
	I03IsfR/g+MC38cctw2PBlul5o941KKyElmuVJhnyEIK9fLr9jathnAglj+AJmt+2gxYimOEFvVaN
	M/TT6NDHo35SQ0IqAeU3wK0Jd6ZgWYZ3++nf1EvXI57cDYNw3hpvRFLXn2hKDcFrK6sI1K2XntRYw
	SrT2g7HIUak8u4YpPV3kCsIBS/JUz8pIwPLmbN0ck/5og2vPZl4CYZ3h8OXDoTM0q+/eHv9liSw30
	he5+yUVA==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefya-00045V-AA; Mon, 16 Nov 2020 14:59:28 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 55/78] block: change the hash used for looking up block devices
Date: Mon, 16 Nov 2020 15:57:46 +0100
Message-Id: <20201116145809.410558-56-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Adding the minor to the major creates tons of pointless conflicts. Just
use the dev_t itself, which is 32 bits wide and thus guaranteed to fit
into ino_t.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index d8664f5c1ff669..29db12c3bb501c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -870,35 +870,12 @@ void __init bdev_cache_init(void)
 	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
 }
 
-/*
- * Most likely _very_ bad one - but then it's hardly critical for small
- * /dev and can be fixed when somebody will need really large one.
- * Keep in mind that it will be fed through icache hash function too.
- */
-static inline unsigned long hash(dev_t dev)
-{
-	return MAJOR(dev)+MINOR(dev);
-}
-
-static int bdev_test(struct inode *inode, void *data)
-{
-	return BDEV_I(inode)->bdev.bd_dev == *(dev_t *)data;
-}
-
-static int bdev_set(struct inode *inode, void *data)
-{
-	BDEV_I(inode)->bdev.bd_dev = *(dev_t *)data;
-	return 0;
-}
-
 static struct block_device *bdget(dev_t dev)
 {
 	struct block_device *bdev;
 	struct inode *inode;
 
-	inode = iget5_locked(blockdev_superblock, hash(dev),
-			bdev_test, bdev_set, &dev);
-
+	inode = iget_locked(blockdev_superblock, dev);
 	if (!inode)
 		return NULL;
 
@@ -910,6 +887,7 @@ static struct block_device *bdget(dev_t dev)
 		bdev->bd_super = NULL;
 		bdev->bd_inode = inode;
 		bdev->bd_part_count = 0;
+		bdev->bd_dev = dev;
 		inode->i_mode = S_IFBLK;
 		inode->i_rdev = dev;
 		inode->i_bdev = bdev;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:12:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:12:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28319.57486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAr-00085J-Dy; Mon, 16 Nov 2020 15:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28319.57486; Mon, 16 Nov 2020 15:12:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegAq-00083M-9r; Mon, 16 Nov 2020 15:12:08 +0000
Received: by outflank-mailman (input) for mailman id 28319;
 Mon, 16 Nov 2020 15:12:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1keg42-0006ni-DC
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c92ce8f8-bae4-4e76-b5a4-71bc42b45cc9;
 Mon, 16 Nov 2020 15:00:06 +0000 (UTC)
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kefz1-0004IE-BR; Mon, 16 Nov 2020 14:59:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DM7u=EW=casper.srs.infradead.org=batv+29a21e8ca386e11a5a78+6294+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1keg42-0006ni-DC
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:05:06 +0000
X-Inumbo-ID: c92ce8f8-bae4-4e76-b5a4-71bc42b45cc9
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c92ce8f8-bae4-4e76-b5a4-71bc42b45cc9;
	Mon, 16 Nov 2020 15:00:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9s01ven/lnjzAuhlV063ntSY/vXy7NuM1J7SpBiZFfk=; b=Iz0K1z2gG+m/yrqS3PHmWRLoWX
	iQ2r9VCIRC1F/onM60kJvAH6c1Mq4RlXW7ec+E4URlz5EGNYvgdquPHqNY52gjE8nE20X9RXX7VMi
	1DOJ7Vtr4lrIBJ7aVOvb8UqrR2nwcvWP6ZnS5k2cLIXw3A5Dbiaen1+C+S/XhWHMOfqQAtP9C+gT2
	bC1yTTqhuANia6Zl9uBJicixvJBbdGfQlRzf4UsN7IZa6iajX16uM9PSh5wgGK5K5zsV7yVWM/HD7
	+fT3KXcxxU4zJu5OS1H+5N5e66rH7O41GjcCcL5p8HAE3cZQ+D+MIf49gWr+tF6YubeZAmAdPRe+t
	MQay8zLg==;
Received: from [2001:4bb8:180:6600:255b:7def:a93:4a09] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kefz1-0004IE-BR; Mon, 16 Nov 2020 14:59:55 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Minchan Kim <minchan@kernel.org>,
	Mike Snitzer <snitzer@redhat.com>,
	Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com,
	linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org,
	linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 73/78] block: use put_device in put_disk
Date: Mon, 16 Nov 2020 15:58:04 +0100
Message-Id: <20201116145809.410558-74-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
References: <20201116145809.410558-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use put_device to put the device instead of poking into the internals
and using kobject_put.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/genhd.c b/block/genhd.c
index 56bc37e98ed852..f1e20ec1b62887 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1659,7 +1659,7 @@ EXPORT_SYMBOL(__alloc_disk_node);
 void put_disk(struct gendisk *disk)
 {
 	if (disk)
-		kobject_put(&disk_to_dev(disk)->kobj);
+		put_device(disk_to_dev(disk));
 }
 EXPORT_SYMBOL(put_disk);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:23:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28410.57526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLe-00025R-TS; Mon, 16 Nov 2020 15:23:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28410.57526; Mon, 16 Nov 2020 15:23:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLe-00025J-OS; Mon, 16 Nov 2020 15:23:18 +0000
Received: by outflank-mailman (input) for mailman id 28410;
 Mon, 16 Nov 2020 15:23:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kegLe-00021s-0Z
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0774217c-2716-4aa5-b29d-c6a913fc3fde;
 Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4CB2AC1F;
 Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kegLe-00021s-0Z
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:18 +0000
X-Inumbo-ID: 0774217c-2716-4aa5-b29d-c6a913fc3fde
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0774217c-2716-4aa5-b29d-c6a913fc3fde;
	Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605540192; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2Ih6BAdTqIYcZGajiKhQmKsU9mt9jlp2kFaTR6v+29c=;
	b=RcaWqPPNLS8Em9KnOTw0KtsN3VszBN4lQlxFM7RAbVGsbGobSqlSI4FcSWmeUOyydwXmRh
	71FMjJcZAoyMRjyEB9tdtZ6kdtJGirmz4RHVBeJyf5fBhUSQQ1cJQQk7Vz45Fhu44uhCqD
	W690zZkHBYbFbEPUxzf283OmM+/onMU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E4CB2AC1F;
	Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 3/4] x86/pv: switch SWAPGS to ALTERNATIVE
Date: Mon, 16 Nov 2020 16:23:00 +0100
Message-Id: <20201116152301.24558-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201116152301.24558-1-jgross@suse.com>
References: <20201116152301.24558-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SWAPGS is used only for interrupts coming from user mode or for
returning to user mode. So there is no reason to use the PARAVIRT
framework, as it can easily be replaced by an ALTERNATIVE depending
on X86_FEATURE_XENPV.

There are several instances using the PV-aware SWAPGS macro in paths
which are never executed in a Xen PV guest. Replace those with the
plain swapgs instruction. The same applies to SWAPGS_UNSAFE_STACK.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/entry_64.S             | 10 +++++-----
 arch/x86/include/asm/irqflags.h       | 20 ++++++++------------
 arch/x86/include/asm/paravirt.h       | 20 --------------------
 arch/x86/include/asm/paravirt_types.h |  2 --
 arch/x86/kernel/asm-offsets_64.c      |  1 -
 arch/x86/kernel/paravirt.c            |  1 -
 arch/x86/kernel/paravirt_patch.c      |  3 ---
 arch/x86/xen/enlighten_pv.c           |  3 ---
 8 files changed, 13 insertions(+), 47 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index cad08703c4ad..a876204a73e0 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -669,7 +669,7 @@ native_irq_return_ldt:
 	 */
 
 	pushq	%rdi				/* Stash user RDI */
-	SWAPGS					/* to kernel GS */
+	swapgs					/* to kernel GS */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi	/* to kernel CR3 */
 
 	movq	PER_CPU_VAR(espfix_waddr), %rdi
@@ -699,7 +699,7 @@ native_irq_return_ldt:
 	orq	PER_CPU_VAR(espfix_stack), %rax
 
 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
-	SWAPGS					/* to user GS */
+	swapgs					/* to user GS */
 	popq	%rdi				/* Restore user RDI */
 
 	movq	%rax, %rsp
@@ -943,7 +943,7 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	ret
 
 .Lparanoid_entry_swapgs:
-	SWAPGS
+	swapgs
 
 	/*
 	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
@@ -1001,7 +1001,7 @@ SYM_CODE_START_LOCAL(paranoid_exit)
 	jnz		restore_regs_and_return_to_kernel
 
 	/* We are returning to a context with user GSBASE */
-	SWAPGS_UNSAFE_STACK
+	swapgs
 	jmp		restore_regs_and_return_to_kernel
 SYM_CODE_END(paranoid_exit)
 
@@ -1426,7 +1426,7 @@ nmi_no_fsgsbase:
 	jnz	nmi_restore
 
 nmi_swapgs:
-	SWAPGS_UNSAFE_STACK
+	swapgs
 
 nmi_restore:
 	POP_REGS
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 2dfc8d380dab..8c86edefa115 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -131,18 +131,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #define SAVE_FLAGS(x)		pushfq; popq %rax
 #endif
 
-#define SWAPGS	swapgs
-/*
- * Currently paravirt can't handle swapgs nicely when we
- * don't have a stack we can rely on (such as a user space
- * stack).  So we either find a way around these or just fault
- * and emulate if a guest tries to call swapgs directly.
- *
- * Either way, this is a good way to document that we don't
- * have a reliable stack. x86_64 only.
- */
-#define SWAPGS_UNSAFE_STACK	swapgs
-
 #define INTERRUPT_RETURN	jmp native_iret
 #define USERGS_SYSRET64				\
 	swapgs;					\
@@ -170,6 +158,14 @@ static __always_inline int arch_irqs_disabled(void)
 
 	return arch_irqs_disabled_flags(flags);
 }
+#else
+#ifdef CONFIG_X86_64
+#ifdef CONFIG_XEN_PV
+#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
+#else
+#define SWAPGS	swapgs
+#endif
+#endif
 #endif /* !__ASSEMBLY__ */
 
 #endif
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index d25cc6830e89..5647bcdba776 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -776,26 +776,6 @@ extern void default_banner(void);
 
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-/*
- * If swapgs is used while the userspace stack is still current,
- * there's no way to call a pvop.  The PV replacement *must* be
- * inlined, or the swapgs instruction must be trapped and emulated.
- */
-#define SWAPGS_UNSAFE_STACK						\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs), swapgs)
-
-/*
- * Note: swapgs is very special, and in practise is either going to be
- * implemented with a single "swapgs" instruction or something very
- * special.  Either way, we don't need to save any registers for
- * it.
- */
-#define SWAPGS								\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_CPU_swapgs);		\
-		 )
-
 #define USERGS_SYSRET64							\
 	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
 		  ANNOTATE_RETPOLINE_SAFE;				\
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 0fad9f61c76a..903d71884fa2 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -169,8 +169,6 @@ struct pv_cpu_ops {
 	   frame set up. */
 	void (*iret)(void);
 
-	void (*swapgs)(void);
-
 	void (*start_context_switch)(struct task_struct *prev);
 	void (*end_context_switch)(struct task_struct *next);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 828be792231e..1354bc30614d 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -15,7 +15,6 @@ int main(void)
 #ifdef CONFIG_PARAVIRT_XXL
 	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
 	       cpu.usergs_sysret64);
-	OFFSET(PV_CPU_swapgs, paravirt_patch_template, cpu.swapgs);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 6c3407ba6ee9..5e5fcf5c376d 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -312,7 +312,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
-	.cpu.swapgs		= native_swapgs,
 
 #ifdef CONFIG_X86_IOPL_IOPERM
 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index ace6e334cb39..7c518b08aa3c 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -28,7 +28,6 @@ struct patch_xxl {
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	cpu_usergs_sysret64[6];
-	const unsigned char	cpu_swapgs[3];
 	const unsigned char	mov64[3];
 };
 
@@ -43,7 +42,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
 	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
 				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
-	.cpu_swapgs		= { 0x0f, 0x01, 0xf8 },	// swapgs
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
 
@@ -86,7 +84,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
 	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
-	PATCH_CASE(cpu, swapgs, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 803fbcb398c4..82030d49f4f7 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1085,9 +1085,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 #endif
 	.io_delay = xen_io_delay,
 
-	/* Xen takes care of %gs when switching to usermode for us */
-	.swapgs = paravirt_nop,
-
 	.start_context_switch = paravirt_start_context_switch,
 	.end_context_switch = xen_end_context_switch,
 };
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:23:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28411.57531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLf-000269-7K; Mon, 16 Nov 2020 15:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28411.57531; Mon, 16 Nov 2020 15:23:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLf-00025w-1S; Mon, 16 Nov 2020 15:23:19 +0000
Received: by outflank-mailman (input) for mailman id 28411;
 Mon, 16 Nov 2020 15:23:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kegLe-00021y-ES
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5713986-34fd-471c-a68c-c89fdd80742b;
 Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30619AF37;
 Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kegLe-00021y-ES
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:18 +0000
X-Inumbo-ID: f5713986-34fd-471c-a68c-c89fdd80742b
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f5713986-34fd-471c-a68c-c89fdd80742b;
	Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605540191; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=hXDPxLHhf2Wu81S1L0YOGGDmDdav6QELXVfWLp0Wriw=;
	b=OVr7yeWC1WJ/1YNfjLfTDpzLFPM5irXSm+vTpdhhYWW0gyPttx4RWUmJDusvFHgGnUjA1f
	6B9Q3UEWCc9Hfdns2tdN/wn130OzGduOD18wLIaCKu3fqq2TgVhL4IToyZS54mNBwyfRQD
	XslVTZ7zthwQfSWOk5zXiuWG4F7uhpA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 30619AF37;
	Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>
Subject: [PATCH 0/4] x86/xen: do some paravirt cleanup
Date: Mon, 16 Nov 2020 16:22:57 +0100
Message-Id: <20201116152301.24558-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Eliminate the usergs_sysret64 paravirt call completely and switch the
swapgs one to use ALTERNATIVE instead. This requires fixing the
IST-based exception entries for Xen PV to use the same mechanism that
the NMI and debug exceptions already use.

Juergen Gross (4):
  x86/xen: use specific Xen pv interrupt entry for MCE
  x86/xen: use specific Xen pv interrupt entry for DF
  x86/pv: switch SWAPGS to ALTERNATIVE
  x86/xen: drop USERGS_SYSRET64 paravirt call

 arch/x86/entry/entry_64.S             | 32 ++++++++++++---------------
 arch/x86/include/asm/idtentry.h       |  6 +++++
 arch/x86/include/asm/irqflags.h       | 26 +++++++---------------
 arch/x86/include/asm/paravirt.h       | 25 ---------------------
 arch/x86/include/asm/paravirt_types.h | 10 ---------
 arch/x86/kernel/asm-offsets_64.c      |  3 ---
 arch/x86/kernel/paravirt.c            |  6 +----
 arch/x86/kernel/paravirt_patch.c      |  7 ------
 arch/x86/xen/enlighten_pv.c           | 28 ++++++++++++++++++-----
 arch/x86/xen/xen-asm.S                | 24 ++------------------
 arch/x86/xen/xen-ops.h                |  2 --
 11 files changed, 53 insertions(+), 116 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:23:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28409.57514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLb-000237-Iv; Mon, 16 Nov 2020 15:23:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28409.57514; Mon, 16 Nov 2020 15:23:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLb-000230-FZ; Mon, 16 Nov 2020 15:23:15 +0000
Received: by outflank-mailman (input) for mailman id 28409;
 Mon, 16 Nov 2020 15:23:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kegLZ-00021y-Mc
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5969826-4006-4b7f-8455-fa814f286285;
 Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F72BAF57;
 Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kegLZ-00021y-Mc
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:13 +0000
X-Inumbo-ID: d5969826-4006-4b7f-8455-fa814f286285
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d5969826-4006-4b7f-8455-fa814f286285;
	Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605540191; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+maGdeQPC4RH2n2V3NwwaXjQr4W+CfjYAlItEcmIDY4=;
	b=P5pBiRTFGQ+/iPL2SQtyGj7AOxJmQA4mPncKLqxXXlqg8AORPjX4NQnYNaCuSEwjNsYGGz
	Jmli0nChGi6X6ipmcEcuBf26PvPxmBkpFq6nsLro4ZawosD9/KTnh5Fvu527601jcVpBiW
	EUs+Jt/cnHtIKrjXxTPsFnojse1bIe8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8F72BAF57;
	Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/4] x86/xen: use specific Xen pv interrupt entry for DF
Date: Mon, 16 Nov 2020 16:22:59 +0100
Message-Id: <20201116152301.24558-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201116152301.24558-1-jgross@suse.com>
References: <20201116152301.24558-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests don't use IST. For double fault exceptions, switch to
the same model as NMI.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/idtentry.h | 3 +++
 arch/x86/xen/enlighten_pv.c     | 8 +++++++-
 arch/x86/xen/xen-asm.S          | 2 +-
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 3505c0396fa5..b35825392547 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -611,6 +611,9 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_DB,	xenpv_exc_debug);
 
 /* #DF */
 DECLARE_IDTENTRY_DF(X86_TRAP_DF,	exc_double_fault);
+#ifdef CONFIG_XEN_PV
+DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF,	xenpv_exc_double_fault);
+#endif
 
 /* #VC */
 #ifdef CONFIG_AMD_MEM_ENCRYPT
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 9f5e44c1f70a..803fbcb398c4 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -571,6 +571,12 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
 	exc_nmi(regs);
 }
 
+DEFINE_IDTENTRY_RAW_ERRORCODE(xenpv_exc_double_fault)
+{
+	/* On Xen PV, DF doesn't use IST.  The C part is the same as native. */
+	exc_double_fault(regs, error_code);
+}
+
 DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
 {
 	/*
@@ -615,7 +621,7 @@ struct trap_array_entry {
 
 static struct trap_array_entry trap_array[] = {
 	TRAP_ENTRY_REDIR(exc_debug,			true  ),
-	TRAP_ENTRY(exc_double_fault,			true  ),
+	TRAP_ENTRY_REDIR(exc_double_fault,		true  ),
 #ifdef CONFIG_X86_MCE
 	TRAP_ENTRY_REDIR(exc_machine_check,		true  ),
 #endif
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index bc2586730a5b..1d054c915046 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -161,7 +161,7 @@ xen_pv_trap asm_exc_overflow
 xen_pv_trap asm_exc_bounds
 xen_pv_trap asm_exc_invalid_op
 xen_pv_trap asm_exc_device_not_available
-xen_pv_trap asm_exc_double_fault
+xen_pv_trap asm_xenpv_exc_double_fault
 xen_pv_trap asm_exc_coproc_segment_overrun
 xen_pv_trap asm_exc_invalid_tss
 xen_pv_trap asm_exc_segment_not_present
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:23:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:23:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28408.57501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLa-00022E-Av; Mon, 16 Nov 2020 15:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28408.57501; Mon, 16 Nov 2020 15:23:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLa-000227-7a; Mon, 16 Nov 2020 15:23:14 +0000
Received: by outflank-mailman (input) for mailman id 28408;
 Mon, 16 Nov 2020 15:23:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kegLZ-00021s-6E
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 878419fc-0a68-46c0-80a6-a0ba60545ba3;
 Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53E7DAF54;
 Mon, 16 Nov 2020 15:23:11 +0000 (UTC)
X-Inumbo-ID: 878419fc-0a68-46c0-80a6-a0ba60545ba3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605540191; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7TSXaJ46w8yEBaZXgTkkUKrJ11tcc5zJYr0/Dd0EWvw=;
	b=kbxwwbSbic75h8CbBEGa4RPjZcOZSiLsyUaLQmZI4NJChAXBnTOWl3cLQq3B049ptVRxCv
	HMo1ztEJTWDos++KwwHzOWAamJQ2ns0YYm5dfePqDQ7x7Chl1g27Hgux0rBmT+630o1hmv
	zqDzdWww0VhhzjsH4LxoLFUb0xWhT5U=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/4] x86/xen: use specific Xen pv interrupt entry for MCE
Date: Mon, 16 Nov 2020 16:22:58 +0100
Message-Id: <20201116152301.24558-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201116152301.24558-1-jgross@suse.com>
References: <20201116152301.24558-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests don't use IST. For machine check interrupts, switch to
the same model as debug interrupts.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/idtentry.h |  3 +++
 arch/x86/xen/enlighten_pv.c     | 16 +++++++++++++++-
 arch/x86/xen/xen-asm.S          |  2 +-
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index b2442eb0ac2f..3505c0396fa5 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -588,6 +588,9 @@ DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
 #else
 DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	exc_machine_check);
 #endif
+#ifdef CONFIG_XEN_PV
+DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	xenpv_exc_machine_check);
+#endif
 #endif
 
 /* NMI */
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4409306364dc..9f5e44c1f70a 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -583,6 +583,20 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
 		exc_debug(regs);
 }
 
+#ifdef CONFIG_X86_MCE
+DEFINE_IDTENTRY_RAW(xenpv_exc_machine_check)
+{
+	/*
+	 * There's no IST on Xen PV, but we still need to dispatch
+	 * to the correct handler.
+	 */
+	if (user_mode(regs))
+		noist_exc_machine_check(regs);
+	else
+		exc_machine_check(regs);
+}
+#endif
+
 struct trap_array_entry {
 	void (*orig)(void);
 	void (*xen)(void);
@@ -603,7 +617,7 @@ static struct trap_array_entry trap_array[] = {
 	TRAP_ENTRY_REDIR(exc_debug,			true  ),
 	TRAP_ENTRY(exc_double_fault,			true  ),
 #ifdef CONFIG_X86_MCE
-	TRAP_ENTRY(exc_machine_check,			true  ),
+	TRAP_ENTRY_REDIR(exc_machine_check,		true  ),
 #endif
 	TRAP_ENTRY_REDIR(exc_nmi,			true  ),
 	TRAP_ENTRY(exc_int3,				false ),
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 1cb0e84b9161..bc2586730a5b 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -172,7 +172,7 @@ xen_pv_trap asm_exc_spurious_interrupt_bug
 xen_pv_trap asm_exc_coprocessor_error
 xen_pv_trap asm_exc_alignment_check
 #ifdef CONFIG_X86_MCE
-xen_pv_trap asm_exc_machine_check
+xen_pv_trap asm_xenpv_exc_machine_check
 #endif /* CONFIG_X86_MCE */
 xen_pv_trap asm_exc_simd_coprocessor_error
 #ifdef CONFIG_IA32_EMULATION
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:23:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28412.57550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLk-0002DV-Mn; Mon, 16 Nov 2020 15:23:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28412.57550; Mon, 16 Nov 2020 15:23:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegLk-0002DM-IZ; Mon, 16 Nov 2020 15:23:24 +0000
Received: by outflank-mailman (input) for mailman id 28412;
 Mon, 16 Nov 2020 15:23:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kegLj-00021s-0f
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:23:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0bd9916a-60ea-4f03-8f11-22b55754c559;
 Mon, 16 Nov 2020 15:23:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4043AAF59;
 Mon, 16 Nov 2020 15:23:12 +0000 (UTC)
X-Inumbo-ID: 0bd9916a-60ea-4f03-8f11-22b55754c559
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605540192; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c+r2h7lsdFFR0LxyJ7ZKzN3l3z/WJ2B6A3rDCIxtzbk=;
	b=QTj+ivK4XGUI5GqsG9xpPoBbHIWElNCvk36c5+5OjaofeJ4xIXQFeSX2RLcN0Kiaw3VXiy
	zF2QvTqu2uejqzbgH72IC+OBO97+a0ONmklxOo5GJ1hz3at8jWuMy4+8wnyQ9TGFa5jWE9
	ipL288LRwjIaJSoN+hO8S03lvvB01G0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: Juergen Gross <jgross@suse.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 4/4] x86/xen: drop USERGS_SYSRET64 paravirt call
Date: Mon, 16 Nov 2020 16:23:01 +0100
Message-Id: <20201116152301.24558-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201116152301.24558-1-jgross@suse.com>
References: <20201116152301.24558-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

USERGS_SYSRET64 is used to return from a syscall via sysret, but
a Xen PV guest will nevertheless use the iret hypercall, as there
is no sysret PV hypercall defined.

So instead of first testing all the prerequisites for doing a sysret
and then mangling the stack for Xen PV again in order to do an iret,
just use the iret exit path from the beginning.

This can easily be done via an ALTERNATIVE, as is already done for
the sysenter compat case.

While at it, remove the stale sysret32 remnants.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/entry_64.S             | 22 +++++++++-------------
 arch/x86/include/asm/irqflags.h       |  6 ------
 arch/x86/include/asm/paravirt.h       |  5 -----
 arch/x86/include/asm/paravirt_types.h |  8 --------
 arch/x86/kernel/asm-offsets_64.c      |  2 --
 arch/x86/kernel/paravirt.c            |  5 +----
 arch/x86/kernel/paravirt_patch.c      |  4 ----
 arch/x86/xen/enlighten_pv.c           |  1 -
 arch/x86/xen/xen-asm.S                | 20 --------------------
 arch/x86/xen/xen-ops.h                |  2 --
 10 files changed, 10 insertions(+), 65 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a876204a73e0..df865eebd3d7 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -46,14 +46,6 @@
 .code64
 .section .entry.text, "ax"
 
-#ifdef CONFIG_PARAVIRT_XXL
-SYM_CODE_START(native_usergs_sysret64)
-	UNWIND_HINT_EMPTY
-	swapgs
-	sysretq
-SYM_CODE_END(native_usergs_sysret64)
-#endif /* CONFIG_PARAVIRT_XXL */
-
 /*
  * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
  *
@@ -123,12 +115,15 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
 	 * Try to use SYSRET instead of IRET if we're returning to
 	 * a completely clean 64-bit userspace context.  If we're not,
 	 * go to the slow exit path.
+	 * In the Xen PV case we must use iret anyway.
 	 */
-	movq	RCX(%rsp), %rcx
-	movq	RIP(%rsp), %r11
 
-	cmpq	%rcx, %r11	/* SYSRET requires RCX == RIP */
-	jne	swapgs_restore_regs_and_return_to_usermode
+	ALTERNATIVE __stringify( \
+		movq	RCX(%rsp), %rcx; \
+		movq	RIP(%rsp), %r11; \
+		cmpq	%rcx, %r11;	/* SYSRET requires RCX == RIP */ \
+		jne	swapgs_restore_regs_and_return_to_usermode), \
+	"jmp	swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
 
 	/*
 	 * On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
@@ -215,7 +210,8 @@ syscall_return_via_sysret:
 
 	popq	%rdi
 	popq	%rsp
-	USERGS_SYSRET64
+	swapgs
+	sysretq
 SYM_CODE_END(entry_SYSCALL_64)
 
 /*
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 8c86edefa115..e585a4705b8d 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -132,12 +132,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #endif
 
 #define INTERRUPT_RETURN	jmp native_iret
-#define USERGS_SYSRET64				\
-	swapgs;					\
-	sysretq;
-#define USERGS_SYSRET32				\
-	swapgs;					\
-	sysretl
 
 #else
 #define INTERRUPT_RETURN		iret
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 5647bcdba776..8121cf9b8d81 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -776,11 +776,6 @@ extern void default_banner(void);
 
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-#define USERGS_SYSRET64							\
-	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  jmp PARA_INDIRECT(pv_ops+PV_CPU_usergs_sysret64);)
-
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
 	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 903d71884fa2..55d8b7950e61 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -157,14 +157,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-	/*
-	 * Switch to usermode gs and return to 64-bit usermode using
-	 * sysret.  Only used in 64-bit kernels to return to 64-bit
-	 * processes.  Usermode register state, including %rsp, must
-	 * already be restored.
-	 */
-	void (*usergs_sysret64)(void);
-
 	/* Normal iret.  Jump to this with the standard iret stack
 	   frame set up. */
 	void (*iret)(void);
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 1354bc30614d..b14533af7676 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -13,8 +13,6 @@ int main(void)
 {
 #ifdef CONFIG_PARAVIRT
 #ifdef CONFIG_PARAVIRT_XXL
-	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
-	       cpu.usergs_sysret64);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 5e5fcf5c376d..18560b71e717 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -135,8 +135,7 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	else if (opfunc == _paravirt_ident_64)
 		ret = paravirt_patch_ident_64(insn_buff, len);
 
-	else if (type == PARAVIRT_PATCH(cpu.iret) ||
-		 type == PARAVIRT_PATCH(cpu.usergs_sysret64))
+	else if (type == PARAVIRT_PATCH(cpu.iret))
 		/* If operation requires a jmp, then jmp */
 		ret = paravirt_patch_jmp(insn_buff, opfunc, addr, len);
 #endif
@@ -170,7 +169,6 @@ static u64 native_steal_clock(int cpu)
 
 /* These are in entry.S */
 extern void native_iret(void);
-extern void native_usergs_sysret64(void);
 
 static struct resource reserve_ioports = {
 	.start = 0,
@@ -310,7 +308,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.load_sp0		= native_load_sp0,
 
-	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
 
 #ifdef CONFIG_X86_IOPL_IOPERM
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index 7c518b08aa3c..2fada2c347c9 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -27,7 +27,6 @@ struct patch_xxl {
 	const unsigned char	mmu_write_cr3[3];
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
-	const unsigned char	cpu_usergs_sysret64[6];
 	const unsigned char	mov64[3];
 };
 
@@ -40,8 +39,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
 	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
-	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
-				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
 
@@ -83,7 +80,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
-	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 82030d49f4f7..2170553f524a 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1060,7 +1060,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 	.read_pmc = xen_read_pmc,
 
 	.iret = xen_iret,
-	.usergs_sysret64 = xen_sysret64,
 
 	.load_tr_desc = paravirt_nop,
 	.set_ldt = xen_set_ldt,
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 1d054c915046..c0630fd9f44e 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -214,26 +214,6 @@ SYM_CODE_START(xen_iret)
 	jmp hypercall_iret
 SYM_CODE_END(xen_iret)
 
-SYM_CODE_START(xen_sysret64)
-	/*
-	 * We're already on the usermode stack at this point, but
-	 * still with the kernel gs, so we can easily switch back.
-	 *
-	 * tss.sp2 is scratch space.
-	 */
-	movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq $__USER_DS
-	pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	pushq %r11
-	pushq $__USER_CS
-	pushq %rcx
-
-	pushq $VGCF_in_syscall
-	jmp hypercall_iret
-SYM_CODE_END(xen_sysret64)
-
 /*
  * Xen handles syscall callbacks much like ordinary exceptions, which
  * means we have:
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 9546c3384c75..b2fd80a01a36 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -138,8 +138,6 @@ __visible unsigned long xen_read_cr2_direct(void);
 
 /* These are not functions, and cannot be called normally */
 __visible void xen_iret(void);
-__visible void xen_sysret32(void);
-__visible void xen_sysret64(void);
 
 extern int xen_panic_handler_init(void);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:32:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:32:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28445.57562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegUN-0003dR-IM; Mon, 16 Nov 2020 15:32:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28445.57562; Mon, 16 Nov 2020 15:32:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegUN-0003dK-FE; Mon, 16 Nov 2020 15:32:19 +0000
Received: by outflank-mailman (input) for mailman id 28445;
 Mon, 16 Nov 2020 15:32:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kegUL-0003dF-M9
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:32:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7127c89e-ed58-4864-9859-408930cd1b4a;
 Mon, 16 Nov 2020 15:32:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kegUF-0006AF-2A; Mon, 16 Nov 2020 15:32:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kegUE-0006Yy-MT; Mon, 16 Nov 2020 15:32:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kegUE-0001Dj-Ly; Mon, 16 Nov 2020 15:32:10 +0000
X-Inumbo-ID: 7127c89e-ed58-4864-9859-408930cd1b4a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NiDQY5gkywFCTrSEJO5kwh0raBO65tLZlQf7dbzGqWs=; b=7HrCdk6tleeMYlLFFCXR/B4wuQ
	dBQV2zt6pC2OG8j4ZEDM0rCRRPcXMgYRMbasNug54enDPbWvb4aGTYljKOsjxPqrrCB5SO5Ut9Q5b
	4cwKk8iIiWQDT9duWeYKhb6VJEpuPHOrRlGWkbZKTMk/bZWND8T2b/1/XasqTfp1r3MQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156818-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156818: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 15:32:10 +0000

flight 156818 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156818/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b50ea0d54bbca7d440315c3d0c0f7a4d6537b180
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   88 days
Failing since        152659  2020-08-21 14:07:39 Z   87 days  186 attempts
Testing same since   156805  2020-11-15 00:37:37 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 64763 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 15:40:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 15:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28487.57577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegc3-0005Qn-Hd; Mon, 16 Nov 2020 15:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28487.57577; Mon, 16 Nov 2020 15:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kegc3-0005Qg-Ed; Mon, 16 Nov 2020 15:40:15 +0000
Received: by outflank-mailman (input) for mailman id 28487;
 Mon, 16 Nov 2020 15:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5kkY=EW=kernel.dk=axboe@srs-us1.protection.inumbo.net>)
 id 1kegc2-0005Qb-3W
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:40:14 +0000
Received: from mail-io1-xd2a.google.com (unknown [2607:f8b0:4864:20::d2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 755abfd4-ebea-4ac9-8a7d-5c4968dbd515;
 Mon, 16 Nov 2020 15:40:12 +0000 (UTC)
Received: by mail-io1-xd2a.google.com with SMTP id i18so16365855ioa.3
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 07:40:12 -0800 (PST)
Received: from [192.168.1.30] ([65.144.74.34])
 by smtp.gmail.com with ESMTPSA id i82sm10491839ill.84.2020.11.16.07.40.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 16 Nov 2020 07:40:11 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5kkY=EW=kernel.dk=axboe@srs-us1.protection.inumbo.net>)
	id 1kegc2-0005Qb-3W
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 15:40:14 +0000
X-Inumbo-ID: 755abfd4-ebea-4ac9-8a7d-5c4968dbd515
Received: from mail-io1-xd2a.google.com (unknown [2607:f8b0:4864:20::d2a])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 755abfd4-ebea-4ac9-8a7d-5c4968dbd515;
	Mon, 16 Nov 2020 15:40:12 +0000 (UTC)
Received: by mail-io1-xd2a.google.com with SMTP id i18so16365855ioa.3
        for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 07:40:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=kernel-dk.20150623.gappssmtp.com; s=20150623;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=tp0p5PHbr2FkOYp8vx6enMy9SJ7npMGO8ToOlTZAVD0=;
        b=vJHuCgkLtyaYNtKWJGKWTC2HAaX7Y9m8y1VpqDKgvUWa8d0I51VY3ALJeXJtOlsGpr
         QzxZZWfSIBe9Z9YJVgCkUF04iuHI0uFspV4JwGcoTFtYktgjnY7A8CRm243mremkxSFW
         /W8ZuC0hXxmoHn+ezTFvYhGe2XU1ttQu6xlHRIQZSFb9V/9nguKyH01uIZBsQIBla1sm
         kSpCVk0QJdWteogX0XEWe3MJV6jr1Lh++tWh99mrFKS4zYsaeRBflt08zg4X1OBFoLaD
         OZKwfBprL/CxXZIUp5e8ounGHLLXOFSswyyzQU1smV5YfKh+taZ0KanaNv47asHmejvk
         eLpw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=tp0p5PHbr2FkOYp8vx6enMy9SJ7npMGO8ToOlTZAVD0=;
        b=IJpNW+WFfsoTs9/l43YyOM23NRFFM8kjIJA/IA03D7syNcpcmXQ9Rf5Q/jdzq3ehCq
         dl16qKylbMM9+3SGFUg4eZA8c4D5/P/Kf/6NRzLtHPtZ4/rVlIawHhNQ6GPj32jFoe7h
         h2+Hwlbb8dAO4IP5n4ToABN1qGcXkG/QqDQWFykmNy6KaoGwAAN8EASm6IKSXMZGFcKx
         Z8g+XugkrtapArBXsgku3jJor/6VY2cfLFKA3d11uvwbypbv6kmsx1gqA50++Vtwhodk
         0NwJZ3fgiUDVIA4mnHVPoLqVya/TPowOx5hh6rkOuVzMBJzAHtOjXlP4muDbBaZSzWQM
         rfiQ==
X-Gm-Message-State: AOAM532pfO3XcoJjz6Z8p+4sLs+hvNZ6UmY5RNUYaL1AifojcN48kh70
	xVE9Tg4BbZdq6B84wr4qW2sONA==
X-Google-Smtp-Source: ABdhPJzORiPywjBgXN2NzytHbmvXHQyepXDfzUWSZfRzXJXzcmliMaw8xrhzAaFsEhdxvHHHShuK7g==
X-Received: by 2002:a5e:9e0b:: with SMTP id i11mr3534187ioq.33.1605541212314;
        Mon, 16 Nov 2020 07:40:12 -0800 (PST)
Received: from [192.168.1.30] ([65.144.74.34])
        by smtp.gmail.com with ESMTPSA id i82sm10491839ill.84.2020.11.16.07.40.10
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Mon, 16 Nov 2020 07:40:11 -0800 (PST)
Subject: Re: cleanup updating the size of block devices v3
To: Christoph Hellwig <hch@lst.de>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
From: Jens Axboe <axboe@kernel.dk>
Message-ID: <506876ff-65b0-7610-6f9e-8228fcd201c8@kernel.dk>
Date: Mon, 16 Nov 2020 08:40:10 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-1-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/16/20 7:56 AM, Christoph Hellwig wrote:
> Hi Jens,
> 
> this series builds on top of the work that went into the last merge window,
> and makes sure we have a single coherent interface for updating the size of a
> block device.
> 
> Changes since v2:
>  - rebased to the set_capacity_revalidate_and_notify in mainline
>  - keep the loop_set_size function
>  - fix two mixed up acks

Applied 1-23 for 5.11, thanks.

-- 
Jens Axboe



From xen-devel-bounces@lists.xenproject.org Mon Nov 16 16:13:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 16:13:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28513.57589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keh7m-0000jE-4c; Mon, 16 Nov 2020 16:13:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28513.57589; Mon, 16 Nov 2020 16:13:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keh7m-0000j7-1b; Mon, 16 Nov 2020 16:13:02 +0000
Received: by outflank-mailman (input) for mailman id 28513;
 Mon, 16 Nov 2020 16:13:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l6nr=EW=amacapital.net=luto@srs-us1.protection.inumbo.net>)
 id 1keh7l-0000j2-2B
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 16:13:01 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46ba993d-4fb7-44e5-9d61-9cdcf59e56f0;
 Mon, 16 Nov 2020 16:13:00 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id u12so12069186wrt.0
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:12:59 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l6nr=EW=amacapital.net=luto@srs-us1.protection.inumbo.net>)
	id 1keh7l-0000j2-2B
	for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 16:13:01 +0000
X-Inumbo-ID: 46ba993d-4fb7-44e5-9d61-9cdcf59e56f0
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 46ba993d-4fb7-44e5-9d61-9cdcf59e56f0;
	Mon, 16 Nov 2020 16:13:00 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id u12so12069186wrt.0
        for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:12:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=amacapital-net.20150623.gappssmtp.com; s=20150623;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=8fy9yJa8xaunaxNAmcC4fKqFwO3hgJ8i3UHbtvhB9OQ=;
        b=CCagE05Tte+vu4qHj9zLiAy1I7Elq1T7BM2FMQZRQchCZ21yGqXT2YIxuHB1rT6tQU
         0LcVZeEa/d5r7lBKYg885sd+EMDaHuuiUSjtiw6wyCzePiG9vcbpKpGxuuGAFKCHnKY9
         k4RMOEOann/4ubOcrv+/XYY0n4HBJcNr9nwT5hbOO0wssAjW86GoL5Inuob6H6fJzP5j
         Gxakvx1WwtCmj9A86RVfRKyhaKGY/apU5JxBjYFo4+a8UkfQoMuIDwX7LSC6kOpF3DGi
         ifwNLHwVW1ES9BGSAPmYeZsMVbpTpPOjXLR950OPJeXzIcejbCyDOvJvRc0OjFXCmDpU
         MqbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=8fy9yJa8xaunaxNAmcC4fKqFwO3hgJ8i3UHbtvhB9OQ=;
        b=SY3z0PppUtbYsyWzOiFprD+tyHb5RxqX/xGH5xTN9GwGUb6uc2gzT5QG80w9cwE1kv
         jWaQZzveCNT66EXK5JtnBLWnCjnJ8S7DPnipbrpls+FQVah2OD+l5DnNHJicVnGDEizm
         fzVA8x1qTmpl6XsqfXyH4oBk3p40VTsO2NgOFo2Z4GpNdoFhbFd6wJKZhweHEHEE4XLk
         ++CnjZa1aBufQa6JzAAdfCtoq+UPQFv0b1bu6mSOsuDjck1UxeegLzCyHiNPNVNk92Ym
         vLALJkbeqmULfrL2YQ6ZAZQvCh2GiverRx/WfR0yIELOToYWuBsRHYxcx+MtX9rnI7fC
         8Ovg==
X-Gm-Message-State: AOAM530rLXMmmBCAviipw+qoIgS649dwKzoZSqFRxXh/8fXaScF6sc7K
	YIfQVdLGZ0iVwaO2SCtFD0QU5pX7Ve4SEDx8vKGekA==
X-Google-Smtp-Source: ABdhPJxC7jxqf4ZHD7BBqVXQL2hN8P5Oggx7Bc5cCg7hTHoa0ssVjlPSEQtioabPGctgV4nNkJs/uFpogKzZWU9akvo=
X-Received: by 2002:adf:f808:: with SMTP id s8mr20430438wrp.257.1605543179184;
 Mon, 16 Nov 2020 08:12:59 -0800 (PST)
MIME-Version: 1.0
References: <20201116152301.24558-1-jgross@suse.com> <20201116152301.24558-3-jgross@suse.com>
In-Reply-To: <20201116152301.24558-3-jgross@suse.com>
From: Andy Lutomirski <luto@amacapital.net>
Date: Mon, 16 Nov 2020 08:12:46 -0800
Message-ID: <CALCETrWVSEB5zrUiZ81KaB5egx78TfDuSDv+qR3HFtJ=SKxwkQ@mail.gmail.com>
Subject: Re: [PATCH 2/4] x86/xen: use specific Xen pv interrupt entry for DF
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>, 
	LKML <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>
> Xen PV guests don't use IST. For double fault interrupts switch to
> the same model as NMI.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/include/asm/idtentry.h | 3 +++
>  arch/x86/xen/enlighten_pv.c     | 8 +++++++-
>  arch/x86/xen/xen-asm.S          | 2 +-
>  3 files changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
> index 3505c0396fa5..b35825392547 100644
> --- a/arch/x86/include/asm/idtentry.h
> +++ b/arch/x86/include/asm/idtentry.h
> @@ -611,6 +611,9 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_DB,   xenpv_exc_debug);
>
>  /* #DF */
>  DECLARE_IDTENTRY_DF(X86_TRAP_DF,       exc_double_fault);
> +#ifdef CONFIG_XEN_PV
> +DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF,    xenpv_exc_double_fault);
> +#endif
>
>  /* #VC */
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
> index 9f5e44c1f70a..803fbcb398c4 100644
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -571,6 +571,12 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
>         exc_nmi(regs);
>  }
>
> +DEFINE_IDTENTRY_RAW_ERRORCODE(xenpv_exc_double_fault)
> +{
> +       /* On Xen PV, DF doesn't use IST.  The C part is the sane as native. */

I would like to think that code is sane, but you probably meant "same".

> +       exc_double_fault(regs, error_code);
> +}


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 16:14:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 16:14:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28518.57601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keh9a-0000sU-I9; Mon, 16 Nov 2020 16:14:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28518.57601; Mon, 16 Nov 2020 16:14:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keh9a-0000sM-ED; Mon, 16 Nov 2020 16:14:54 +0000
Received: by outflank-mailman (input) for mailman id 28518;
 Mon, 16 Nov 2020 16:14:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zhvb=EW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1keh9Y-0000sD-Vy
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 16:14:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c671058-066a-4d06-8e89-164d625d3135;
 Mon, 16 Nov 2020 16:14:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1F83FABF4;
 Mon, 16 Nov 2020 16:14:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605543291; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lTyk9A6GGIcJAoS076O0S7xHALgB4HNkwhV5vuZqpwk=;
	b=ZPZN5D8lqmlYewhYgNSlFsQ2Ora4cljy8/rfRs988QzFMylI/mruDB0Ya8qDmnTqPR63nr
	1Tz3MK+afJKEHmkl0omh70DfCg1bxiFaZ6kNm1BJv26QfK/5lehM8QwgXzteKPTnuJnb+0
	052GkwV7/+9QYXi4xbo4SvEVoeBJbX4=
Subject: Re: [PATCH 2/4] x86/xen: use specific Xen pv interrupt entry for DF
To: Andy Lutomirski <luto@amacapital.net>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201116152301.24558-1-jgross@suse.com>
 <20201116152301.24558-3-jgross@suse.com>
 <CALCETrWVSEB5zrUiZ81KaB5egx78TfDuSDv+qR3HFtJ=SKxwkQ@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <c5af2206-28fb-a95c-e003-6d12594b915e@suse.com>
Date: Mon, 16 Nov 2020 17:14:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CALCETrWVSEB5zrUiZ81KaB5egx78TfDuSDv+qR3HFtJ=SKxwkQ@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="KV3Sotp1vrb9BEntdK4bhW84qAEyG2Rmb"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--KV3Sotp1vrb9BEntdK4bhW84qAEyG2Rmb
Content-Type: multipart/mixed; boundary="2HZrYXKuAhgHcBDOR5iPLffuBQQTs9hXe";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andy Lutomirski <luto@amacapital.net>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <c5af2206-28fb-a95c-e003-6d12594b915e@suse.com>
Subject: Re: [PATCH 2/4] x86/xen: use specific Xen pv interrupt entry for DF
References: <20201116152301.24558-1-jgross@suse.com>
 <20201116152301.24558-3-jgross@suse.com>
 <CALCETrWVSEB5zrUiZ81KaB5egx78TfDuSDv+qR3HFtJ=SKxwkQ@mail.gmail.com>
In-Reply-To: <CALCETrWVSEB5zrUiZ81KaB5egx78TfDuSDv+qR3HFtJ=SKxwkQ@mail.gmail.com>

--2HZrYXKuAhgHcBDOR5iPLffuBQQTs9hXe
Content-Type: multipart/mixed;
 boundary="------------00E5D34FFC63ED429A7BC194"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------00E5D34FFC63ED429A7BC194
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.11.20 17:12, Andy Lutomirski wrote:
> On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>>
>> Xen PV guests don't use IST. For double fault interrupts switch to
>> the same model as NMI.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   arch/x86/include/asm/idtentry.h | 3 +++
>>   arch/x86/xen/enlighten_pv.c     | 8 +++++++-
>>   arch/x86/xen/xen-asm.S          | 2 +-
>>   3 files changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
>> index 3505c0396fa5..b35825392547 100644
>> --- a/arch/x86/include/asm/idtentry.h
>> +++ b/arch/x86/include/asm/idtentry.h
>> @@ -611,6 +611,9 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_DB,   xenpv_exc_debug);
>>
>>   /* #DF */
>>   DECLARE_IDTENTRY_DF(X86_TRAP_DF,       exc_double_fault);
>> +#ifdef CONFIG_XEN_PV
>> +DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF,    xenpv_exc_double_fault);
>> +#endif
>>
>>   /* #VC */
>>   #ifdef CONFIG_AMD_MEM_ENCRYPT
>> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
>> index 9f5e44c1f70a..803fbcb398c4 100644
>> --- a/arch/x86/xen/enlighten_pv.c
>> +++ b/arch/x86/xen/enlighten_pv.c
>> @@ -571,6 +571,12 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
>>          exc_nmi(regs);
>>   }
>>
>> +DEFINE_IDTENTRY_RAW_ERRORCODE(xenpv_exc_double_fault)
>> +{
>> +       /* On Xen PV, DF doesn't use IST.  The C part is the sane as native. */
> 
> I would like to think that code is sane, but you probably meant "same".


Oh, this is the result of copy and paste. Now we have two sane
functions. :-)


Juergen

--------------00E5D34FFC63ED429A7BC194
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------00E5D34FFC63ED429A7BC194--

--2HZrYXKuAhgHcBDOR5iPLffuBQQTs9hXe--

--KV3Sotp1vrb9BEntdK4bhW84qAEyG2Rmb
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+ypXkFAwAAAAAACgkQsN6d1ii/Ey+v
Igf+MEHD1T2xZkUxUQFMxBH7GfzMQ527kw50LN0yFocRqbN5R6VB309zKZOY2eNYgtUpE/oNdw4W
rAhifQNZXlEaG83PnGrSgQ5Lss3L1J17WW6q26KrCA0Pfnc0P17rHkUwJMxB0F3qC/rJqLlw/n7v
rFxZmVIX8ATSgSyRq31BKOCVAFBFPkM464Ntu/ZS+h2TPUL/6HHzukYySOOonu1DR/ZSBZYiEYN/
TDPBJhzuFQOwbUQRsPgvfXO7nDCD0V2c/fqkYAw/9Bofbu0MVcrpQasDgeQ6PxvQ6uFmH2HubsHB
8MduN3P84h684BHX2RJjQHfLOKXG1AMqh5edHl0vow==
=JuGh
-----END PGP SIGNATURE-----

--KV3Sotp1vrb9BEntdK4bhW84qAEyG2Rmb--


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 16:17:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 16:17:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28525.57612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kehCN-00014m-Vy; Mon, 16 Nov 2020 16:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28525.57612; Mon, 16 Nov 2020 16:17:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kehCN-00014f-Sx; Mon, 16 Nov 2020 16:17:47 +0000
Received: by outflank-mailman (input) for mailman id 28525;
 Mon, 16 Nov 2020 16:17:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ru3m=EW=kernel.org=luto@srs-us1.protection.inumbo.net>)
 id 1kehCM-00014Z-Bc
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 16:17:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5f983a5-c388-483e-a422-d3e33c5d7d88;
 Mon, 16 Nov 2020 16:17:45 +0000 (UTC)
Received: from mail-wm1-f43.google.com (mail-wm1-f43.google.com
 [209.85.128.43])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C98272227F
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 16:17:44 +0000 (UTC)
Received: by mail-wm1-f43.google.com with SMTP id h2so24277131wmm.0
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:17:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605543465;
	bh=Mv6f7f8506FraoMumz35j4BQyYx1tMmBjlS7DZTMh6s=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=cO0fEGISnNA7PcgOiATiolTw7tAkS5Zg7T/iuml160lD72+D/c5QQCC6g99NPdiv0
	 E+tm/P0YcyWfeksg5V68oPeJbMzzoFz8MOlckQYOvSLN5cL4nnxznKQsmlZxwvR3+9
	 21xAGXQROAjxwTtWdTl8e7cPOi/ciZNc1Yiis33Q=
X-Gm-Message-State: AOAM531cG51DPaDMkOWymUfDE59ZBvhBtOWpGYSP45gMrb++uh+E8yN4
	fueHHvWbY7M8XqxcTpZ2dm8Jzsk20GQtMRD/OqRC0A==
X-Google-Smtp-Source: ABdhPJzo4DLkbb/iKsJTRaDIKTsVLyxyBBgQcLU+k85si7bdNAKlaxr9HCjPF0P3RxTG5pvYnF7uVMxefE2gleeWFnw=
X-Received: by 2002:a1c:7213:: with SMTP id n19mr15820304wmc.36.1605543463407;
 Mon, 16 Nov 2020 08:17:43 -0800 (PST)
MIME-Version: 1.0
References: <20201116152301.24558-1-jgross@suse.com> <20201116152301.24558-4-jgross@suse.com>
In-Reply-To: <20201116152301.24558-4-jgross@suse.com>
From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 16 Nov 2020 08:17:29 -0800
X-Gmail-Original-Message-ID: <CALCETrWwnK1AwrGRn8Kuin-23NOG31LrWBO7w=T2QE+EJW=f-w@mail.gmail.com>
Message-ID: <CALCETrWwnK1AwrGRn8Kuin-23NOG31LrWBO7w=T2QE+EJW=f-w@mail.gmail.com>
Subject: Re: [PATCH 3/4] x86/pv: switch SWAPGS to ALTERNATIVE
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>, 
	LKML <linux-kernel@vger.kernel.org>, 
	Linux Virtualization <virtualization@lists.linux-foundation.org>, 
	Andy Lutomirski <luto@kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
	Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>, 
	"VMware, Inc." <pv-drivers@vmware.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>
> SWAPGS is used only for interrupts coming from user mode or for
> returning to user mode. So there is no reason to use the PARAVIRT
> framework, as it can easily be replaced by an ALTERNATIVE depending
> on X86_FEATURE_XENPV.
>
> There are several instances using the PV-aware SWAPGS macro in paths
> which are never executed in a Xen PV guest. Replace those with the
> plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies.

Acked-by: Andy Lutomirski <luto@kernel.org>


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 16:28:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 16:28:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28531.57625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kehMY-0002Da-0D; Mon, 16 Nov 2020 16:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28531.57625; Mon, 16 Nov 2020 16:28:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kehMX-0002DT-TQ; Mon, 16 Nov 2020 16:28:17 +0000
Received: by outflank-mailman (input) for mailman id 28531;
 Mon, 16 Nov 2020 16:28:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ru3m=EW=kernel.org=luto@srs-us1.protection.inumbo.net>)
 id 1kehMW-0002DO-Lp
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 16:28:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ed66f31-a522-4214-827c-a0b053ab0a3f;
 Mon, 16 Nov 2020 16:28:15 +0000 (UTC)
Received: from mail-wr1-f43.google.com (mail-wr1-f43.google.com
 [209.85.221.43])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id AB8362222C
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 16:28:14 +0000 (UTC)
Received: by mail-wr1-f43.google.com with SMTP id k2so19384667wrx.2
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:28:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605544095;
	bh=x3FHK31Xm/BBB7HpLRBIa98x7QEqf275AgxsToa6mkU=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=dSp2LxJUJ58gcX59eL2xT0qVUTROA6hs+/D6p0wzaVRwjytMTLR0IOaUwLCw0qvV9
	 oPxeDEHDRvLSTb6pZKi4znh9AdooQXrCYtwzhXRxhD5nuPb55BqjjPXvLVpCG0M3FD
	 iDc77eS+vloBkEmarGAjuhz3e8qNrbVN/RCsAOsE=
X-Gm-Message-State: AOAM530EMi1c2Btqt4OFd/Mm5I7xoAfY12bkSV+TLcsykbM5Qsn17mgb
	ckcrCzus2X5hNlsCFNlxd8SZb6YAvkT08r6yZUXfMA==
X-Google-Smtp-Source: ABdhPJyncJCxKduqliOLf7Ic4pClunkNO5+SsixRmuCBm2erhKc8dlxKlGJlIEYjuLvuG7ANk+QxxpsYnP+LvzP3zfc=
X-Received: by 2002:a5d:4991:: with SMTP id r17mr20556285wrq.70.1605544093233;
 Mon, 16 Nov 2020 08:28:13 -0800 (PST)
MIME-Version: 1.0
References: <20201116152301.24558-1-jgross@suse.com> <20201116152301.24558-5-jgross@suse.com>
In-Reply-To: <20201116152301.24558-5-jgross@suse.com>
From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 16 Nov 2020 08:28:00 -0800
X-Gmail-Original-Message-ID: <CALCETrW_UO9sksa1agOfs5E7yV+RqOyugEEOBjZY8Z47R-04Pg@mail.gmail.com>
Message-ID: <CALCETrW_UO9sksa1agOfs5E7yV+RqOyugEEOBjZY8Z47R-04Pg@mail.gmail.com>
Subject: Re: [PATCH 4/4] x86/xen: drop USERGS_SYSRET64 paravirt call
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>, 
	LKML <linux-kernel@vger.kernel.org>, 
	Linux Virtualization <virtualization@lists.linux-foundation.org>, 
	Andy Lutomirski <luto@kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
	Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>, 
	"VMware, Inc." <pv-drivers@vmware.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>
> USERGS_SYSRET64 is used to return from a syscall via sysret, but
> a Xen PV guest will nevertheless use the iret hypercall, as there
> is no sysret PV hypercall defined.
>
> So instead of testing all the prerequisites for doing a sysret and
> then mangling the stack for Xen PV again for doing an iret just use
> the iret exit from the beginning.
>
> This can easily be done via an ALTERNATIVE like it is done for the
> sysenter compat case already.
>
> While at it remove the stale sysret32 remnants.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Andy Lutomirski <luto@kernel.org>

FWIW, you've lost the VGCF_in_syscall optimization.  Let me see if I
can give it back to you better.

> ---
>  arch/x86/entry/entry_64.S             | 22 +++++++++-------------
>  arch/x86/include/asm/irqflags.h       |  6 ------
>  arch/x86/include/asm/paravirt.h       |  5 -----
>  arch/x86/include/asm/paravirt_types.h |  8 --------
>  arch/x86/kernel/asm-offsets_64.c      |  2 --
>  arch/x86/kernel/paravirt.c            |  5 +----
>  arch/x86/kernel/paravirt_patch.c      |  4 ----
>  arch/x86/xen/enlighten_pv.c           |  1 -
>  arch/x86/xen/xen-asm.S                | 20 --------------------
>  arch/x86/xen/xen-ops.h                |  2 --
>  10 files changed, 10 insertions(+), 65 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index a876204a73e0..df865eebd3d7 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -46,14 +46,6 @@
>  .code64
>  .section .entry.text, "ax"
>
> -#ifdef CONFIG_PARAVIRT_XXL
> -SYM_CODE_START(native_usergs_sysret64)
> -       UNWIND_HINT_EMPTY
> -       swapgs
> -       sysretq
> -SYM_CODE_END(native_usergs_sysret64)
> -#endif /* CONFIG_PARAVIRT_XXL */
> -
>  /*
>   * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
>   *
> @@ -123,12 +115,15 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
>          * Try to use SYSRET instead of IRET if we're returning to
>          * a completely clean 64-bit userspace context.  If we're not,
>          * go to the slow exit path.
> +        * In the Xen PV case we must use iret anyway.
>          */
> -       movq    RCX(%rsp), %rcx
> -       movq    RIP(%rsp), %r11
>
> -       cmpq    %rcx, %r11      /* SYSRET requires RCX == RIP */
> -       jne     swapgs_restore_regs_and_return_to_usermode
> +       ALTERNATIVE __stringify( \
> +               movq    RCX(%rsp), %rcx; \
> +               movq    RIP(%rsp), %r11; \
> +               cmpq    %rcx, %r11;     /* SYSRET requires RCX == RIP */ \
> +               jne     swapgs_restore_regs_and_return_to_usermode), \
> +       "jmp    swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV

I'm not in love with this save-a-few-bytes stringify, but I can live with it.

--Andy


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 16:30:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 16:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28536.57636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kehP6-00039T-Gz; Mon, 16 Nov 2020 16:30:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28536.57636; Mon, 16 Nov 2020 16:30:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kehP6-00039M-E6; Mon, 16 Nov 2020 16:30:56 +0000
Received: by outflank-mailman (input) for mailman id 28536;
 Mon, 16 Nov 2020 16:30:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ru3m=EW=kernel.org=luto@srs-us1.protection.inumbo.net>)
 id 1kehP5-00039H-Di
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 16:30:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e48c782-6f97-48db-a3e1-60d15c4574d2;
 Mon, 16 Nov 2020 16:30:54 +0000 (UTC)
Received: from mail-wr1-f49.google.com (mail-wr1-f49.google.com
 [209.85.221.49])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id AC8B520776
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 16:30:53 +0000 (UTC)
Received: by mail-wr1-f49.google.com with SMTP id s8so19326221wrw.10
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 08:30:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605544254;
	bh=2O3LBp1HPw0KQ1G2spNpvOUgelxUM0n8eKWWsC7ds30=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=dqOsYI+0YGbb9NsAnIzVJVzN0+ZCeTcljVXkHY3Ca9c9X+hnuCJH8G2FUn67vfd9U
	 obw74aqxw/UPBefBUp/bAQMVgF7JOQ9WyTuYSFtDMR4oweOAreMWzRFjeNPN566/DC
	 MYCetQ39nn4CSqfi8ZhF3wCBDvpGh7bvgBDqVpPc=
X-Gm-Message-State: AOAM533uDR5uUtsH+0gO/orTWljHAQYcyONIpxXUtiPGPN8BEqsn6PlY
	gzybpLrYE+3R+s8JBWU4SWkTkIcZLItyo4y0tTNdpw==
X-Google-Smtp-Source: ABdhPJxRrSLHlH4ScZov5CaCHz95dre2M6ViQYOjxHY5oh3r9MUJXokIlLxakAYO4Hpet3isswmFWL/5m9Ab6jgvvwU=
X-Received: by 2002:a5d:5482:: with SMTP id h2mr9404192wrv.18.1605544252224;
 Mon, 16 Nov 2020 08:30:52 -0800 (PST)
MIME-Version: 1.0
References: <20201116152301.24558-1-jgross@suse.com> <20201116152301.24558-5-jgross@suse.com>
In-Reply-To: <20201116152301.24558-5-jgross@suse.com>
From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 16 Nov 2020 08:30:38 -0800
X-Gmail-Original-Message-ID: <CALCETrVMX+D1fv3bbb7F_Cp2SfrFBudUqJk=uR3AJkgQ_KCniQ@mail.gmail.com>
Message-ID: <CALCETrVMX+D1fv3bbb7F_Cp2SfrFBudUqJk=uR3AJkgQ_KCniQ@mail.gmail.com>
Subject: Re: [PATCH 4/4] x86/xen: drop USERGS_SYSRET64 paravirt call
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>, 
	LKML <linux-kernel@vger.kernel.org>, 
	Linux Virtualization <virtualization@lists.linux-foundation.org>, 
	Andy Lutomirski <luto@kernel.org>, Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
	Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>, 
	"VMware, Inc." <pv-drivers@vmware.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>
> USERGS_SYSRET64 is used to return from a syscall via sysret, but
> a Xen PV guest will nevertheless use the iret hypercall, as there
> is no sysret PV hypercall defined.
>
> So instead of testing all the prerequisites for doing a sysret and
> then mangling the stack for Xen PV again for doing an iret just use
> the iret exit from the beginning.
>
> This can easily be done via an ALTERNATIVE like it is done for the
> sysenter compat case already.
>
> While at it remove to stale sysret32 remnants.

s/to/the/


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 17:38:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 17:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28560.57653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keiRq-0000Yy-LR; Mon, 16 Nov 2020 17:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28560.57653; Mon, 16 Nov 2020 17:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keiRq-0000Yr-IB; Mon, 16 Nov 2020 17:37:50 +0000
Received: by outflank-mailman (input) for mailman id 28560;
 Mon, 16 Nov 2020 17:37:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BrUE=EW=kernel.org=song@srs-us1.protection.inumbo.net>)
 id 1keiRp-0000Ym-2t
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 17:37:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e20b3a0-7447-45de-9c27-6f6a82752644;
 Mon, 16 Nov 2020 17:37:48 +0000 (UTC)
Received: from mail-wm1-f51.google.com (mail-wm1-f51.google.com
 [209.85.128.51])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 13FF322409
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 17:37:47 +0000 (UTC)
Received: by mail-wm1-f51.google.com with SMTP id 19so37211wmf.1
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 09:37:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605548267;
	bh=ftduAERcAIXDqH3TIoaKNDFMaog9lpKkHZKNbYpzPJA=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=wH0K96aiOcV3zWDXEfO4eGXrSbfel4fzpavvJrcvlGnm+IOvE6CCXcyPRZ9CDEpWQ
	 ERyPgn3goZaj4Rjri8kCDsRD2QqVIDrl9bwMtmzAsZXelVyGcoi/J8IVVb6h9ki8ms
	 Nt6zKSXy/JZjbcTnM2HNzH4y+CQMSryqUl+eemUA=
X-Gm-Message-State: AOAM531q7xsz7Xf2N58I4Ok+/rP+xF/irONLWmynjxrtI8d9V3B0+mg8
	UlzJTC8/TGUW3A9eodQ2GB2fhu577Ggy7EO2a44=
X-Google-Smtp-Source: ABdhPJwSJNXG5A+c75VC7fB+eg5ocrYD0cMopnWErnzA3/FY5Qe4JfgRdEHZElz1KxxX9qTRF1opV72EejXl51sD+G0=
X-Received: by 2002:a1c:bbc4:: with SMTP id l187mr17490114wmf.133.1605548265533;
 Mon, 16 Nov 2020 09:37:45 -0800 (PST)
MIME-Version: 1.0
References: <20201116145809.410558-1-hch@lst.de> <20201116145809.410558-29-hch@lst.de>
In-Reply-To: <20201116145809.410558-29-hch@lst.de>
From: Song Liu <song@kernel.org>
Date: Mon, 16 Nov 2020 09:37:34 -0800
X-Gmail-Original-Message-ID: <CAPhsuW5YeO0-Cb=avHu2osRKjz19Lvk4jWqaCdaqFnjbdPJtrw@mail.gmail.com>
Message-ID: <CAPhsuW5YeO0-Cb=avHu2osRKjz19Lvk4jWqaCdaqFnjbdPJtrw@mail.gmail.com>
Subject: Re: [PATCH 28/78] md: implement ->set_read_only to hook into BLKROSET processing
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>, 
	Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>, 
	Jack Wang <jinpu.wang@cloud.ionos.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Jason Wang <jasowang@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Stefan Hajnoczi <stefanha@redhat.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>, 
	"Martin K. Petersen" <martin.petersen@oracle.com>, dm-devel@redhat.com, 
	linux-block@vger.kernel.org, drbd-dev@lists.linbit.com, nbd@other.debian.org, 
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org, 
	linux-raid <linux-raid@vger.kernel.org>, linux-nvme@lists.infradead.org, 
	linux-scsi@vger.kernel.org, Linux-Fsdevel <linux-fsdevel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 16, 2020 at 6:58 AM Christoph Hellwig <hch@lst.de> wrote:
>
> Implement the ->set_read_only method instead of parsing the actual
> ioctl command.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Song Liu <song@kernel.org>

[...]


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 18:22:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 18:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28577.57669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kej95-0005DE-3e; Mon, 16 Nov 2020 18:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28577.57669; Mon, 16 Nov 2020 18:22:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kej94-0005D7-Uw; Mon, 16 Nov 2020 18:22:30 +0000
Received: by outflank-mailman (input) for mailman id 28577;
 Mon, 16 Nov 2020 18:22:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gc94=EW=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kej93-0005D2-7w
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 18:22:29 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcd55861-2ea1-461a-8051-edc5dc74dabc;
 Mon, 16 Nov 2020 18:22:26 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AGIMGrU026657
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 16 Nov 2020 19:22:17 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 8FC4F2E9CA8; Mon, 16 Nov 2020 19:22:11 +0100 (MET)
Date: Mon, 16 Nov 2020 19:22:11 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201116182211.GS840@antioche.eu.org>
References: <20201115174938.GA3562@antioche.eu.org>
 <20201115182416.GA30231@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201115182416.GA30231@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 16 Nov 2020 19:22:17 +0100 (MET)

On Sun, Nov 15, 2020 at 07:24:16PM +0100, Roger Pau Monné wrote:
> On Sun, Nov 15, 2020 at 06:49:38PM +0100, Manuel Bouyer wrote:
> > Hello,
> > I spent some more time debugging NetBSD as a PVH dom0 on Xen.
> > With Roger's patch to avoid a Xen panic, the NetBSD kernel stalls while
> > configuring devices. At first I thought it was an issue with hardware
> > interrupts, but it is more likely an issue with Xen timer events.
> > Specifically: virtual CPU 0 stops receiving timer events, while other
> > CPUs keep receiving them. I tried to force a timer rearm, but this didn't help.
> > The event is neither masked nor pending on Xen or NetBSD, as confirmed by 'q'.
> > Other events (the Xen console, the debug event) are properly received
> > by CPU0. I don't know how to debug this further at this point.
> 
> You could try the dom0_vcpus_pin command line option and then dump
> the timers using the 'a' debug key; that way you can see if CPU0 has a
> timer pending (which would be the vCPU0 timer).

Thanks, this helped. It was a bug in the NetBSD kernel, which shows up
only when there are enough physical device interrupts (which explains why
I didn't notice it on PVH domUs).

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 21:17:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 21:17:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28622.57680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kelsY-0003va-Ea; Mon, 16 Nov 2020 21:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28622.57680; Mon, 16 Nov 2020 21:17:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kelsY-0003vT-B8; Mon, 16 Nov 2020 21:17:38 +0000
Received: by outflank-mailman (input) for mailman id 28622;
 Mon, 16 Nov 2020 21:17:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14IB=EW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kelsX-0003vO-7p
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 21:17:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5d54e00-5d0f-4dfb-8b29-b849479ae93a;
 Mon, 16 Nov 2020 21:17:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kelsT-0005Vi-4S; Mon, 16 Nov 2020 21:17:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kelsS-0006xB-Lg; Mon, 16 Nov 2020 21:17:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kelsS-00074O-LC; Mon, 16 Nov 2020 21:17:32 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HL99z43OlAHj3rNWax7Z6lCbejtsAA9F5v3bzWcKixs=; b=Xih8mrZuS56bMdvEVGC2hgfyRT
	11qSq/s0BzNn9OOofqmVeerEqQSYtRfO9YRJuZ1V9Kv0+u71nrSj9OKoMyxqZiE6mT34RtDVpTEJW
	y2mCLd9veKsN8BzinnOGzdOSVoRab+ylbkLOjNPxvCxeooweenhHa9hlta8kMth2i9Oo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156819-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156819: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=09162bc32c880a791c6c0668ce0745cf7958f576
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 16 Nov 2020 21:17:32 +0000

flight 156819 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156819/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152332
 build-i386                    6 xen-build                fail REGR. vs. 152332
 build-i386-xsm                6 xen-build                fail REGR. vs. 152332
 build-amd64                   6 xen-build                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-pair   10 xen-install/src_host fail in 156815 REGR. vs. 152332
 test-amd64-i386-pair   11 xen-install/dst_host fail in 156815 REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install  fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install    fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail in 156815 REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail in 156815 REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 156815 REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install    fail in 156815 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 156815 REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 156815 REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 156815 REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot       fail in 156815 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 156815 pass in 156819
 test-arm64-arm64-xl           8 xen-boot         fail in 156815 pass in 156819
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 156815 pass in 156819
 test-arm64-arm64-examine      8 reboot                     fail pass in 156815

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop  fail in 156815 like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 156815 like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop  fail in 156815 like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop  fail in 156815 like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail in 156815 like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail in 156815 never pass
 test-amd64-amd64-libvirt    15 migrate-support-check fail in 156815 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 156815 never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 156815 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                09162bc32c880a791c6c0668ce0745cf7958f576
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  108 days
Failing since        152366  2020-08-01 20:49:34 Z  107 days  177 attempts
Testing same since   156815  2020-11-16 02:08:16 Z    0 days    2 attempts

------------------------------------------------------------
3520 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 672911 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 22:06:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 22:06:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28634.57696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kemdO-0000lU-DP; Mon, 16 Nov 2020 22:06:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28634.57696; Mon, 16 Nov 2020 22:06:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kemdO-0000lN-AM; Mon, 16 Nov 2020 22:06:02 +0000
Received: by outflank-mailman (input) for mailman id 28634;
 Mon, 16 Nov 2020 21:57:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s5R5=EW=gmail.com=cheyenne.wills@srs-us1.protection.inumbo.net>)
 id 1kemV5-0008Gt-Av
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 21:57:27 +0000
Received: from mail-lf1-x134.google.com (unknown [2a00:1450:4864:20::134])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9dfdd29-918d-47b5-8f0f-665a0b259f77;
 Mon, 16 Nov 2020 21:57:26 +0000 (UTC)
Received: by mail-lf1-x134.google.com with SMTP id w142so27239972lff.8
 for <xen-devel@lists.xenproject.org>; Mon, 16 Nov 2020 13:57:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=HlIbb1shyvkY+uIlBeMbS0LuUPpKa9ZgUjImKSNfQGE=;
        b=ZMDKu+9Izug6eJI9uJ+0I1A6fvrCuGTXbaDO6Op9NoD5UKmq98fyzSGzbUNhmG7GQw
         +3BXGXxgTvlvUyn8w8vIK574HxZFpov5G6pvTGPWtTEjZNXhpazidgoHxm4MagCGWzRo
         FvzbX1VAzclmLvGl6vNrwWd+py0KGBeTinJgV8QxitkCMSyA9JTU89GMS4pQIvws5hTJ
         7n0z3CnzDkBYf2NF4MlR679zeBD/iBai0haOGuqTgAx/bxDvFwgpfJK0xT/mANCQS8Ex
         5QzlLzHF0HsPswQ55rnjbUmiW3dX8JsF58xIVIeqEklz+CgxEe+0IlFQCZTrSQkZnFHz
         UA7w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=HlIbb1shyvkY+uIlBeMbS0LuUPpKa9ZgUjImKSNfQGE=;
        b=aGyIx0EB6wo9hC/E0lM++P7Cg6lxWaZ535Auvi8aZaSoVZ9ha6OYcSFSzO9LgoBhW3
         zrn3k7VHuqmP/P+qQdQ41emsid/S87Q+YgLnB7smPFanbm5ObKLE5FRVTUjF+JiAT/0n
         65SgBQTbbn6EXEHD6u/kYfOqRZaeV8fx45+0JsYAmPHvVuQUBQB2RzU2jW5YwHSJvu0f
         S5E7FSI8c5UQzSoKdhTjuFJtLtsuV+QS5lB5225/wrA9iOp5uNzhT/mapy9XzcJuq/zR
         tQZ6a1EKZH0ozZHlLs0Ann0Y8jLcGW4toUnZtWsGRWX9sFtJaF+9xzmGTeNn0sY3Lklg
         VDew==
X-Gm-Message-State: AOAM532yUTkQpi/UV/E46Ihj22y9aPqzK4lugjUcWPqX3kok10eHpG+R
	jHBjF+aKOi/7mFIdx09BVG52h6SpY5QgWlw8y8o/nVr9FFU=
X-Google-Smtp-Source: ABdhPJzSzC7gthH+PuQcAeHEMnR+a9R5GF9IgyKhKC6j1OhBM/CZ+oowKzPF6Knvb6j75jt1ybv8+avBuMlpBPDrGwk=
X-Received: by 2002:a19:c8ca:: with SMTP id y193mr493116lff.150.1605563845035;
 Mon, 16 Nov 2020 13:57:25 -0800 (PST)
MIME-Version: 1.0
From: Cheyenne Wills <cheyenne.wills@gmail.com>
Date: Mon, 16 Nov 2020 14:57:14 -0700
Message-ID: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
Subject: XSA-351 causing Solaris-11 systems to panic during boot.
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000a5c9d505b4407665"

--000000000000a5c9d505b4407665
Content-Type: text/plain; charset="UTF-8"

Running Xen with XSA-351 applied is causing Solaris 11 systems to panic
during boot.  The panic screen shows the failure coming from
"unix:rdmsr".  The panic occurs both with existing guests (booting off a
disk) and when booting from an install ISO image.

I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
requested that I report it here.

This was failing on both a Xen 4.13 and a Xen 4.14 system built via Gentoo.

I understand that ultimately this is a bug in Solaris.  However, it does
impact existing guests that were functional before the XSA-351 security
patches were applied.

--000000000000a5c9d505b4407665--


From xen-devel-bounces@lists.xenproject.org Mon Nov 16 22:15:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 16 Nov 2020 22:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28657.57708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kemmD-0001mF-Bq; Mon, 16 Nov 2020 22:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28657.57708; Mon, 16 Nov 2020 22:15:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kemmD-0001m8-7k; Mon, 16 Nov 2020 22:15:09 +0000
Received: by outflank-mailman (input) for mailman id 28657;
 Mon, 16 Nov 2020 22:15:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DwRh=EW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kemmB-0001m3-91
 for xen-devel@lists.xenproject.org; Mon, 16 Nov 2020 22:15:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b6ba5e9-bbaf-4a94-8d56-2f53f41c1f9b;
 Mon, 16 Nov 2020 22:15:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E9DCC2224B;
 Mon, 16 Nov 2020 22:15:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605564905;
	bh=oPKNVtxxK245+Q42be0r+QqSCBzguDwheQthAdmtAKM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=n3nAiMMLHOUiiv1iSyaDKLCTtQBWpgrqM2ILl2BpO3wdfeZ7QthaK7zWb2Ow4sFQr
	 BNuwqJTQL9JnHwbu13Zx5FPVEQIA2j9+1ggH2ORM42esIRQDkbvNYv3vVGCxodV5jS
	 BEiRY6YNf0cprbFowwpRuQGM61KLQvMc8bRy7qXM=
Date: Mon, 16 Nov 2020 14:15:04 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Michal Orzel <michal.orzel@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com
Subject: Re: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
In-Reply-To: <20201116121140.26763-1-michal.orzel@arm.com>
Message-ID: <alpine.DEB.2.21.2011161414280.20906@sstabellini-ThinkPad-T480s>
References: <20201116121140.26763-1-michal.orzel@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 16 Nov 2020, Michal Orzel wrote:
> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
> if a virtual address for a cacheable mapping of a location is being
> accessed by a core while another core is remapping the virtual
> address to a new physical page using the recommended break-before-make
> sequence, then under very rare circumstances TLBI+DSB completes before
> a read using the translation being invalidated has been observed by
> other observers. The workaround repeats the TLBI+DSB operation
> for all the TLB flush operations on purpose.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Looks good and it looks like you addressed all Julien's comments, so:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  docs/misc/arm/silicon-errata.txt     |  2 ++
>  xen/arch/arm/Kconfig                 | 23 +++++++++++++++++++++
>  xen/arch/arm/cpuerrata.c             | 14 +++++++++++++
>  xen/include/asm-arm/arm64/flushtlb.h | 30 +++++++++++++++++++---------
>  xen/include/asm-arm/cpufeature.h     |  3 ++-
>  5 files changed, 62 insertions(+), 10 deletions(-)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 552c4151d3..d183ba543f 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -53,5 +53,7 @@ stable hypervisors.
>  | ARM            | Cortex-A72      | #853709         | N/A                     |
>  | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
>  | ARM            | Cortex-A76      | #1165522        | N/A                     |
> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
>  | ARM            | Neoverse-N1     | #1165522        | N/A                     |
> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
>  | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f938dd21bd..8171b8d04a 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,29 @@ config ARM_ERRATUM_858921
>  
>  	  If unsure, say Y.
>  
> +config ARM64_WORKAROUND_REPEAT_TLBI
> +	bool
> +
> +config ARM64_ERRATUM_1286807
> +	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
> +	default y
> +	select ARM64_WORKAROUND_REPEAT_TLBI
> +	depends on ARM_64
> +	help
> +	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
> +
> +	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
> +	  address for a cacheable mapping of a location is being
> +	  accessed by a core while another core is remapping the virtual
> +	  address to a new physical page using the recommended
> +	  break-before-make sequence, then under very rare circumstances
> +	  TLBI+DSB completes before a read using the translation being
> +	  invalidated has been observed by other observers. The
> +	  workaround repeats the TLBI+DSB operation for all the TLB flush
> +	  operations on purpose.
> +
> +	  If unsure, say Y.
> +
>  endmenu
>  
>  config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 567911d380..cb4795beec 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>                     (1 << MIDR_VARIANT_SHIFT) | 2),
>      },
>  #endif
> +#ifdef CONFIG_ARM64_ERRATUM_1286807
> +    {
> +        /* Cortex-A76 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +    {
> +        /* Neoverse-N1 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +#endif
>  #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>      {
>          .capability = ARM_HARDEN_BRANCH_PREDICTOR,
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index ceec59542e..8f2abfaf1d 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -9,6 +9,12 @@
>   * DSB ISH          // Ensure the TLB invalidation has completed
>   * ISB              // See explanation below
>   *
> + * ARM64_WORKAROUND_REPEAT_TLBI:
> + * Modification of the translation table for a virtual address might lead to
> + * read-after-read ordering violation.
> + * The workaround repeats TLBI+DSB operation for all the TLB flush operations
> + * on purpose.
> + *
>   * For Xen page-tables the ISB will discard any instructions fetched
>   * from the old mappings.
>   *
> @@ -16,15 +22,21 @@
>   * (and therefore the TLB invalidation) before continuing. So we know
>   * the TLBs cannot contain an entry for a mapping we may have removed.
>   */
> -#define TLB_HELPER(name, tlbop) \
> -static inline void name(void)   \
> -{                               \
> -    asm volatile(               \
> -        "dsb  ishst;"           \
> -        "tlbi "  # tlbop  ";"   \
> -        "dsb  ish;"             \
> -        "isb;"                  \
> -        : : : "memory");        \
> +#define TLB_HELPER(name, tlbop)                  \
> +static inline void name(void)                    \
> +{                                                \
> +    asm volatile(                                \
> +        "dsb  ishst;"                            \
> +        "tlbi "  # tlbop  ";"                    \
> +        ALTERNATIVE(                             \
> +            "nop; nop;",                         \
> +            "dsb  ish;"                          \
> +            "tlbi "  # tlbop  ";",               \
> +            ARM64_WORKAROUND_REPEAT_TLBI,        \
> +            CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \
> +        "dsb  ish;"                              \
> +        "isb;"                                   \
> +        : : : "memory");                         \
>  }
>  
>  /* Flush local TLBs, current VMID only. */
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 016a9fe203..c7b5052992 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -46,8 +46,9 @@
>  #define ARM_SMCCC_1_1 8
>  #define ARM64_WORKAROUND_AT_SPECULATE 9
>  #define ARM_WORKAROUND_858921 10
> +#define ARM64_WORKAROUND_REPEAT_TLBI 11
>  
> -#define ARM_NCAPS           11
> +#define ARM_NCAPS           12
>  
>  #ifndef __ASSEMBLY__
>  
> -- 
> 2.28.0
> 


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 01:12:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 01:12:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28673.57723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kepXH-0006JX-Q9; Tue, 17 Nov 2020 01:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28673.57723; Tue, 17 Nov 2020 01:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kepXH-0006JQ-Me; Tue, 17 Nov 2020 01:11:55 +0000
Received: by outflank-mailman (input) for mailman id 28673;
 Tue, 17 Nov 2020 01:11:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kepXG-0006JL-BP
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 01:11:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40f6d5cd-4341-4717-b900-5ec9b099e8d8;
 Tue, 17 Nov 2020 01:11:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 27B1F24686;
 Tue, 17 Nov 2020 01:11:52 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kepXG-0006JL-BP
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 01:11:54 +0000
X-Inumbo-ID: 40f6d5cd-4341-4717-b900-5ec9b099e8d8
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 40f6d5cd-4341-4717-b900-5ec9b099e8d8;
	Tue, 17 Nov 2020 01:11:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 27B1F24686;
	Tue, 17 Nov 2020 01:11:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605575512;
	bh=l4Ig2qO6LhuzAZ2MTnBHCjBBRYCfMmboeAapZ2vkQho=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eZmwDLl56O+GYkr1bZuF5uZNalE44AKysgNXvzyMgNDJx5+c1aJzATIwkOqnd5p6N
	 lBm3GllKXspTnTHKTRqmYh/psbGDH5rod+8afRd7BXaBjpetrbgjZtgJi0uNRFKPNK
	 xSxfSgrnNi321zeCdWPqOR9URK3pLVowIv5ACbuM=
Date: Mon, 16 Nov 2020 17:11:51 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
In-Reply-To: <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2011161710140.20906@sstabellini-ThinkPad-T480s>
References: <cover.1605527997.git.rahul.singh@arm.com> <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 16 Nov 2020, Rahul Singh wrote:
> The NS16550 driver has PCI support that is guarded by the HAS_PCI flag.
> When HAS_PCI is enabled for ARM, a compilation error is observed
> because ARM platforms do not have full PCI support available.
> 
> Introduce a new Kconfig option, CONFIG_HAS_NS16550_PCI, to support
> ns16550 PCI for x86.
> 
> For x86 platforms it is enabled by default. For ARM platforms it is
> disabled by default; once we have proper support for NS16550 PCI on
> ARM, we can enable it.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes in v3:
> - remove help text from the Kconfig file because the option is prompt-less.
> 
> ---
>  xen/drivers/char/Kconfig   |  4 ++++
>  xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
>  2 files changed, 20 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index b572305657..abb59fdb0f 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -4,6 +4,10 @@ config HAS_NS16550
>  	help
>  	  This selects the 16550-series UART support. For most systems, say Y.
>  
> +config HAS_NS16550_PCI
> +	def_bool y
> +	depends on X86 && HAS_NS16550 && HAS_PCI
> +
>  config HAS_CADENCE_UART
>  	bool "Xilinx Cadence UART driver"
>  	default y
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index d8b52eb813..bd1c2af956 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
>  #include <xen/timer.h>
>  #include <xen/serial.h>
>  #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
> @@ -54,7 +54,7 @@ enum serial_param_type {
>      reg_shift,
>      reg_width,
>      stop_bits,
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      bridge_bdf,
>      device,
>      port_bdf,
> @@ -83,7 +83,7 @@ static struct ns16550 {
>      unsigned int timeout_ms;
>      bool_t intr_works;
>      bool_t dw_usr_bsy;
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      /* PCI card parameters. */
>      bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
>      bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
> @@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
>      {"reg-shift", reg_shift},
>      {"reg-width", reg_width},
>      {"stop-bits", stop_bits},
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      {"bridge", bridge_bdf},
>      {"dev", device},
>      {"port", port_bdf},
>  #endif
>  };
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  struct ns16550_config {
>      u16 vendor_id;
>      u16 dev_id;
> @@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
>  
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>          return;
>  
> @@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
>  
>  static void __init ns16550_init_irq(struct serial_port *port)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      struct ns16550 *uart = port->uart;
>  
>      if ( uart->msi )
> @@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
>      uart->timeout_ms = max_t(
>          unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( uart->bar || uart->ps_bdf_enable )
>      {
>          if ( uart->param && uart->param->mmio &&
> @@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
>  
>      stop_timer(&uart->timer);
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( uart->bar )
>         uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>                                    uart->ps_bdf[2]), PCI_COMMAND);
> @@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
>  
>  static void _ns16550_resume(struct serial_port *port)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      struct ns16550 *uart = port->uart;
>  
>      if ( uart->bar )
> @@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
>      return 1; /* Everything is MMIO */
>  #endif
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      pci_serial_early_init(uart);
>  #endif
>  
> @@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
>      return (status == 0x90);
>  }
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  static int __init
>  pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>  {
> @@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>  
>      if ( *conf == ',' && *++conf != ',' )
>      {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>          if ( strncmp(conf, "pci", 3) == 0 )
>          {
>              if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
> @@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>  
>      if ( *conf == ',' && *++conf != ',' )
>      {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>          if ( strncmp(conf, "msi", 3) == 0 )
>          {
>              conf += 3;
> @@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>              uart->irq = simple_strtol(conf, &conf, 10);
>      }
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( *conf == ',' && *++conf != ',' )
>      {
>          conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
> @@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
>              uart->reg_width = simple_strtoul(param_value, NULL, 0);
>              break;
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>          case bridge_bdf:
>              if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
>                              &uart->ps_bdf[1], &uart->ps_bdf[2]) )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 01:18:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 01:18:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28679.57734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kepdF-0006WO-Fc; Tue, 17 Nov 2020 01:18:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28679.57734; Tue, 17 Nov 2020 01:18:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kepdF-0006WH-Ci; Tue, 17 Nov 2020 01:18:05 +0000
Received: by outflank-mailman (input) for mailman id 28679;
 Tue, 17 Nov 2020 01:18:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kepdD-0006WC-Mi
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 01:18:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c36206f-5a29-4f54-804d-732ad89a86e2;
 Tue, 17 Nov 2020 01:18:02 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kepdC-0006km-3S; Tue, 17 Nov 2020 01:18:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kepdB-0002KL-P1; Tue, 17 Nov 2020 01:18:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kepdB-0005uP-OW; Tue, 17 Nov 2020 01:18:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kepdD-0006WC-Mi
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 01:18:03 +0000
X-Inumbo-ID: 5c36206f-5a29-4f54-804d-732ad89a86e2
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5c36206f-5a29-4f54-804d-732ad89a86e2;
	Tue, 17 Nov 2020 01:18:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c9+ns39L0WQQmZa305xx5mMwv6W2SfC9mVryi02PPaM=; b=MGq/FdcHFLV9QSaPE+aYB+uDW8
	26gl2LWkjrj7ZpkuY85lusgY1wX5tX7/5imsE4zmhmEOqS/mXTcsZ+K5DmCTa9eqGMninHFhvGBXE
	xdBtvPxpAIqfOGr9fzg2NJJJ/YmnTUu8ZOKxm/urmoG8pIELbcsXO9gs3XUSUQteLAUw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kepdC-0006km-3S; Tue, 17 Nov 2020 01:18:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kepdB-0002KL-P1; Tue, 17 Nov 2020 01:18:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kepdB-0005uP-OW; Tue, 17 Nov 2020 01:18:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156824-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 156824: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=748d619be3282fba35f99446098ac2d0579f6063
X-Osstest-Versions-That:
    seabios=94f0510dc75e910400aad6c169048d672c8c7193
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 01:18:01 +0000

flight 156824 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156824/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156285
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156285
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156285
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156285
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156285
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              748d619be3282fba35f99446098ac2d0579f6063
baseline version:
 seabios              94f0510dc75e910400aad6c169048d672c8c7193

Last test of basis   156285  2020-10-28 19:42:01 Z   19 days
Testing same since   156824  2020-11-16 15:39:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Woodhouse <dwmw@amazon.co.uk>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   94f0510..748d619  748d619be3282fba35f99446098ac2d0579f6063 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 01:20:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 01:20:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28687.57750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kepfT-0007RJ-30; Tue, 17 Nov 2020 01:20:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28687.57750; Tue, 17 Nov 2020 01:20:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kepfS-0007RC-Uw; Tue, 17 Nov 2020 01:20:22 +0000
Received: by outflank-mailman (input) for mailman id 28687;
 Tue, 17 Nov 2020 01:20:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kepfR-0007R7-N1
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 01:20:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df247c49-e423-47fc-9634-72d615c7d45c;
 Tue, 17 Nov 2020 01:20:21 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C51C524687;
 Tue, 17 Nov 2020 01:20:19 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kepfR-0007R7-N1
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 01:20:21 +0000
X-Inumbo-ID: df247c49-e423-47fc-9634-72d615c7d45c
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id df247c49-e423-47fc-9634-72d615c7d45c;
	Tue, 17 Nov 2020 01:20:21 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id C51C524687;
	Tue, 17 Nov 2020 01:20:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605576020;
	bh=gnW5OPgoJa/LEEtvAiuuM0HPDysRn6ywKhvXLHp5QOY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Fr8lHVMlRPZhtdWOWybnZAWy4GzT/gCw1jKrzyP8K/DauqIa8KwGf1tPUivhOgmt6
	 6t4B1u4pjtJuewGEF1mp6Aw1QzPHHCpZhd1hfHvNzumhPIDPbCI8aAA/6cJEaQHchP
	 SrmlZCemI5zVcE6KYXa1ZUhGG++aUksbDBSL86Ck=
Date: Mon, 16 Nov 2020 17:20:19 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86
 directory.
In-Reply-To: <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2011161719300.20906@sstabellini-ThinkPad-T480s>
References: <cover.1605527997.git.rahul.singh@arm.com> <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 16 Nov 2020, Rahul Singh wrote:
> The passthrough/pci.c file is common to all architectures, but it
> contains x86-specific code.
> 
> Move the x86-specific code to drivers/passthrough/io.c to avoid
> compilation errors on other architectures.
> 
> As drivers/passthrough/io.c is compiled only for x86, move it to the
> x86 directory and rename it to hvm.c.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

This patch breaks the x86 build if you disable CONFIG_HVM:

prelink-efi.o: In function `pci_release_devices':
/local/repos/xen-upstream/xen/drivers/passthrough/pci.c:900: undefined reference to `arch_pci_clean_pirqs'
Makefile:209: recipe for target '/local/repos/xen-upstream/xen/xen.efi' failed



> ---
> 
> Changes in v3:
> - fixed typo
> - As per the suggestion, move the code to io.c, move that file to the
>   x86 directory, and rename it to hvm.c
> 
> ---
>  xen/drivers/passthrough/Makefile            |  3 -
>  xen/drivers/passthrough/pci.c               | 78 +--------------------
>  xen/drivers/passthrough/x86/Makefile        |  1 +
>  xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++
>  xen/drivers/passthrough/x86/iommu.c         |  7 ++
>  xen/include/xen/pci.h                       |  2 +
>  6 files changed, 77 insertions(+), 80 deletions(-)
>  rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)
> 
> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
> index e973e16c74..cc646612c7 100644
> --- a/xen/drivers/passthrough/Makefile
> +++ b/xen/drivers/passthrough/Makefile
> @@ -6,6 +6,3 @@ obj-$(CONFIG_ARM) += arm/
>  obj-y += iommu.o
>  obj-$(CONFIG_HAS_PCI) += pci.o
>  obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
> -
> -x86-$(CONFIG_HVM) := io.o
> -obj-$(CONFIG_X86) += $(x86-y)
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 51e584127e..e8a28df126 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,9 +14,6 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <xen/sched.h>
> -#include <xen/pci.h>
> -#include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
>  #include <xen/list.h>
>  #include <xen/prefetch.h>
> @@ -24,7 +21,6 @@
>  #include <xen/irq.h>
>  #include <xen/param.h>
>  #include <xen/vm_event.h>
> -#include <asm/hvm/irq.h>
>  #include <xen/delay.h>
>  #include <xen/keyhandler.h>
>  #include <xen/event.h>
> @@ -842,71 +838,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}
> -
>  /* Caller should hold the pcidevs_lock */
>  static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                             uint8_t devfn)
> @@ -966,7 +897,7 @@ int pci_release_devices(struct domain *d)
>      int ret;
>  
>      pcidevs_lock();
> -    ret = pci_clean_dpci_irqs(d);
> +    ret = arch_pci_clean_pirqs(d);
>      if ( ret )
>      {
>          pcidevs_unlock();
> @@ -1370,13 +1301,6 @@ static int __init setup_dump_pcidevs(void)
>  }
>  __initcall(setup_dump_pcidevs);
>  
> -int iommu_update_ire_from_msi(
> -    struct msi_desc *msi_desc, struct msi_msg *msg)
> -{
> -    return iommu_intremap
> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> -}
> -
>  static int iommu_add_device(struct pci_dev *pdev)
>  {
>      const struct domain_iommu *hd;
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index a70cf9460d..69284a5d19 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,3 @@
>  obj-y += ats.o
>  obj-y += iommu.o
> +obj-$(CONFIG_HVM) += hvm.o
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/x86/hvm.c
> similarity index 95%
> rename from xen/drivers/passthrough/io.c
> rename to xen/drivers/passthrough/x86/hvm.c
> index 6b1305a3e5..41cfa2e200 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/x86/hvm.c
> @@ -1036,6 +1036,72 @@ unlock:
>      spin_unlock(&d->event_lock);
>  }
>  
> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +
> +    return 0;
> +}
> +
>  /*
>   * Note: 'pt_pirq_softirq_reset' can clear the STATE_SCHED before we get to
>   * doing it. If that is the case we let 'pt_pirq_softirq_reset' do ref-counting.
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f17b1820f4..875e67b53b 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>      return pg;
>  }
>  
> +int iommu_update_ire_from_msi(
> +    struct msi_desc *msi_desc, struct msi_msg *msg)
> +{
> +    return iommu_intremap
> +           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 20a54a5bb4..78d83afe64 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -208,4 +208,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
>  void msixtbl_pt_unregister(struct domain *, struct pirq *);
>  void msixtbl_pt_cleanup(struct domain *d);
>  
> +int arch_pci_clean_pirqs(struct domain *d);
> +
>  #endif /* __XEN_PCI_H__ */
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 04:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 04:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28708.57773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kesUl-0007WS-6i; Tue, 17 Nov 2020 04:21:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28708.57773; Tue, 17 Nov 2020 04:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kesUl-0007WL-3o; Tue, 17 Nov 2020 04:21:31 +0000
Received: by outflank-mailman (input) for mailman id 28708;
 Tue, 17 Nov 2020 04:21:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kesUk-0007Vy-3q
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 04:21:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a08ecfa7-b961-4c59-9628-4cf2717f7e06;
 Tue, 17 Nov 2020 04:21:29 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 683C4205F4;
 Tue, 17 Nov 2020 04:21:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605586888;
	bh=/lcIGplqn6odufkVeXSQ/3A4o3V3IGNc2rmyv6gS60M=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=tSZUd3BHwRJJ6C987+NAT4qrC6/kRq/4+Jlas9lBBMa05nBQf7RSTB98hFuIOlq1g
	 xmcyTJCVHi4PrEEgrfP6d+IAerJAStMKJ+SABEtMdXWXDT9chx5aTRbjC1zeJdBYkv
	 2jia4l4cgtIQhXa+CTAcVnI2mXZDi2OWzb9tb7OE=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 1/2] automation: add a QEMU aarch64 smoke test
Date: Mon, 16 Nov 2020 20:21:26 -0800
Message-Id: <20201117042127.30981-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>

Use QEMU to start Xen (just the hypervisor) up until it stops because
there is no dom0 kernel to boot.

It is based on the existing build job unstable-arm64v8.

Also use make -j$(nproc) to build Xen.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- fix x86_32 build
---
 automation/gitlab-ci/test.yaml         | 22 ++++++++++++++++++
 automation/scripts/build               |  6 ++---
 automation/scripts/qemu-smoke-arm64.sh | 32 ++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 3 deletions(-)
 create mode 100755 automation/scripts/qemu-smoke-arm64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 793feafe8b..35346e3f6e 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -22,6 +22,28 @@ build-each-commit-gcc:
     - /^coverity-tested\/.*/
     - /^stable-.*/
 
+qemu-smoke-arm64-gcc:
+  stage: test
+  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
+  variables:
+    CONTAINER: debian:unstable-arm64v8
+  script:
+    - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
+  dependencies:
+    - debian-unstable-gcc-arm64
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - arm64
+  except:
+    - master
+    - smoke
+    - /^coverity-tested\/.*/
+    - /^stable-.*/
+
 qemu-smoke-x86-64-gcc:
   stage: test
   image: registry.gitlab.com/xen-project/xen/${CONTAINER}
diff --git a/automation/scripts/build b/automation/scripts/build
index 0cd0f3971d..7038e5eb50 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -10,9 +10,9 @@ cc-ver()
 
 # random config or default config
 if [[ "${RANDCONFIG}" == "y" ]]; then
-    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
+    make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
 else
-    make -C xen defconfig
+    make -j$(nproc) -C xen defconfig
 fi
 
 # build up our configure options
@@ -45,7 +45,7 @@ make -j$(nproc) dist
 # Extract artifacts to avoid getting rewritten by customised builds
 cp xen/.config xen-config
 mkdir binaries
-if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
+if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
     cp xen/xen binaries/xen
 fi
 
diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
new file mode 100755
index 0000000000..a7efbf8b6f
--- /dev/null
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+set -ex
+
+# Install QEMU
+export DEBIAN_FRONTEND=noninteractive
+apt-get -qy update
+apt-get -qy install --no-install-recommends qemu-system-aarch64 \
+                                            u-boot-qemu
+
+# XXX Silly workaround to get the following QEMU command to work
+cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
+qemu-system-aarch64 \
+   -machine virtualization=true \
+   -cpu cortex-a57 -machine type=virt \
+   -m 512 -display none \
+   -machine dumpdtb=binaries/virt-gicv3.dtb
+
+rm -f smoke.serial
+set +e
+echo "  booti 0x49000000 - 0x44000000" | timeout -k 1 30 qemu-system-aarch64 \
+    -machine virtualization=true \
+    -cpu cortex-a57 -machine type=virt \
+    -m 512 -monitor none -serial stdio \
+    -no-reboot \
+    -device loader,file=binaries/virt-gicv3.dtb,force-raw=on,addr=0x44000000 \
+    -device loader,file=binaries/xen,force-raw=on,addr=0x49000000 \
+    -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
+
+set -e
+grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
+exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 04:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 04:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28709.57786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kesUm-0007YC-FZ; Tue, 17 Nov 2020 04:21:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28709.57786; Tue, 17 Nov 2020 04:21:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kesUm-0007Y4-Bv; Tue, 17 Nov 2020 04:21:32 +0000
Received: by outflank-mailman (input) for mailman id 28709;
 Tue, 17 Nov 2020 04:21:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kesUk-0007W8-M9
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 04:21:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96c13b98-123a-4a68-8ad9-0390c52b829a;
 Tue, 17 Nov 2020 04:21:29 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DD45920897;
 Tue, 17 Nov 2020 04:21:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605586889;
	bh=B3voGS66tFjvS6XbYFLKdnIC8vjICDXfAp5+13pxSzs=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=V359CSiav7KiGxu3f+el+28ZVnKG224S9/u0E4hxdkqEh6dadyvvX72q8lFGWmspO
	 UGNnON1suFtQrUZW94/pAyskPNqfhj7Ytex0g8ABzcE8G0hg/3yl1VHuBBh/cnJ3kZ
	 Q3bCbVvwq4RT5ywj8MwRuc9/r5EllnK1rvJeT0hE=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 2/2] automation: add dom0less to the QEMU aarch64 smoke test
Date: Mon, 16 Nov 2020 20:21:27 -0800
Message-Id: <20201117042127.30981-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>

Add a trivial dom0less test:
- fetch the Debian arm64 kernel and use it as the dom0/domU kernel
- use busybox-static to create a trivial dom0/domU ramdisk
- use ImageBuilder to generate the u-boot boot script automatically
- install and use u-boot from the Debian package to start the test
- binaries are loaded from u-boot via tftp

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- use the Debian kernel for testing
---
 automation/scripts/qemu-smoke-arm64.sh | 80 +++++++++++++++++++++++---
 1 file changed, 73 insertions(+), 7 deletions(-)

diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
index a7efbf8b6f..41f570fff8 100755
--- a/automation/scripts/qemu-smoke-arm64.sh
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -6,27 +6,93 @@ set -ex
 export DEBIAN_FRONTEND=noninteractive
 apt-get -qy update
 apt-get -qy install --no-install-recommends qemu-system-aarch64 \
-                                            u-boot-qemu
+                                            u-boot-qemu \
+                                            u-boot-tools \
+                                            device-tree-compiler \
+                                            busybox-static \
+                                            cpio
+
+apt-get download linux-image-5.9.0-2-arm64
+dpkg -i --ignore-depends=initramfs-tools ./linux-image-5.9.0-2-arm64_5.9.6-1_arm64.deb || true
+rm ./linux-image-5.9.0-2-arm64_5.9.6-1_arm64.deb
+cp /boot/vmlinuz-5.9.0-2-arm64 ./binaries/Image
 
 # XXX Silly workaround to get the following QEMU command to work
 cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
 qemu-system-aarch64 \
    -machine virtualization=true \
    -cpu cortex-a57 -machine type=virt \
-   -m 512 -display none \
+   -m 1024 -display none \
    -machine dumpdtb=binaries/virt-gicv3.dtb
+# XXX disable pl061 to avoid Linux crash
+dtc -I dtb -O dts binaries/virt-gicv3.dtb > binaries/virt-gicv3.dts
+sed 's/compatible = "arm,pl061.*/status = "disabled";/g' binaries/virt-gicv3.dts > binaries/virt-gicv3-edited.dts
+dtc -I dts -O dtb binaries/virt-gicv3-edited.dts > binaries/virt-gicv3.dtb
+
+
+# Busybox Dom0
+mkdir -p initrd
+mkdir -p initrd/bin
+mkdir -p initrd/sbin
+mkdir -p initrd/etc
+mkdir -p initrd/dev
+mkdir -p initrd/proc
+mkdir -p initrd/sys
+mkdir -p initrd/lib
+mkdir -p initrd/var
+mkdir -p initrd/mnt
+cp /bin/busybox initrd/bin/busybox
+initrd/bin/busybox --install initrd/bin
+echo "#!/bin/sh
+
+mount -t proc proc /proc
+mount -t sysfs sysfs /sys
+mount -t devtmpfs devtmpfs /dev
+/bin/sh" > initrd/init
+chmod +x initrd/init
+cd initrd
+find . | cpio --create --format='newc' | gzip > ../binaries/initrd
+cd ..
+
+
+# ImageBuilder
+echo 'MEMORY_START="0x40000000"
+MEMORY_END="0x80000000"
 
+DEVICE_TREE="virt-gicv3.dtb"
+XEN="xen"
+DOM0_KERNEL="Image"
+DOM0_RAMDISK="initrd"
+XEN_CMD="console=dtuart dom0_mem=512M"
+
+NUM_DOMUS=1
+DOMU_KERNEL[0]="Image"
+DOMU_RAMDISK[0]="initrd"
+DOMU_MEM[0]="256"
+
+LOAD_CMD="tftpb"
+UBOOT_SOURCE="boot.source"
+UBOOT_SCRIPT="boot.scr"' > binaries/config
+rm -rf imagebuilder
+git clone https://gitlab.com/ViryaOS/imagebuilder
+bash imagebuilder/scripts/uboot-script-gen -t tftp -d binaries/ -c binaries/config
+
+
+# Run the test
 rm -f smoke.serial
 set +e
-echo "  booti 0x49000000 - 0x44000000" | timeout -k 1 30 qemu-system-aarch64 \
+echo "  virtio scan; dhcp; tftpb 0x40000000 boot.scr; source 0x40000000"| \
+timeout -k 1 240 \
+qemu-system-aarch64 \
     -machine virtualization=true \
     -cpu cortex-a57 -machine type=virt \
-    -m 512 -monitor none -serial stdio \
+    -m 1024 -monitor none -serial stdio \
+    -smp 2 \
     -no-reboot \
-    -device loader,file=binaries/virt-gicv3.dtb,force-raw=on,addr=0x44000000 \
-    -device loader,file=binaries/xen,force-raw=on,addr=0x49000000 \
+    -device virtio-net-pci,netdev=n0 \
+    -netdev user,id=n0,tftp=binaries \
     -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
 
 set -e
-grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
+(grep -q "^BusyBox" smoke.serial && grep -q "DOM1: BusyBox" smoke.serial) || exit 1
 exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 04:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 04:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28707.57762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kesUc-0007UZ-V8; Tue, 17 Nov 2020 04:21:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28707.57762; Tue, 17 Nov 2020 04:21:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kesUc-0007US-Ry; Tue, 17 Nov 2020 04:21:22 +0000
Received: by outflank-mailman (input) for mailman id 28707;
 Tue, 17 Nov 2020 04:21:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Shnx=EX=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kesUb-0007UN-M9
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 04:21:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3a023934-3978-452a-981b-7676b368c567;
 Tue, 17 Nov 2020 04:21:20 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 46C912463D;
 Tue, 17 Nov 2020 04:21:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605586879;
	bh=R6i+XqfnLhuveeIQ6z4Pa7aQo539h24oDLTz60FS9m8=;
	h=Date:From:To:cc:Subject:From;
	b=XErBTo4BlFkES9XJImN7r9Fh/nYvKa+RNl/78SN7HI4vjgPfxw/7CIuZ2yiIHn4n8
	 8jeiWlJiR7fOYAed+B+cUc/ITchSYrZLcvSvqSaauReg1ZwDoPw1BQIxHrT9X/S7ba
	 pUYECSazW1ykSbYjK56RBJm6StZU0aS8LI6B4xzY=
Date: Mon, 16 Nov 2020 20:21:18 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH v2 0/2] automation: arm64 dom0less smoke test
Message-ID: <alpine.DEB.2.21.2011161927120.20906@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This short series introduces a very simple Xen Dom0less smoke test based
on qemu-system-aarch64 to be run as part of the gitlab CI-loop. It
currently passes on staging.

Cheers,

Stefano


Changes in v2:
- use the Debian kernel for testing instead of building our own
- fix x86_32 build


Stefano Stabellini (2):
      automation: add a QEMU aarch64 smoke test
      automation: add dom0less to the QEMU aarch64 smoke test

 automation/gitlab-ci/test.yaml         | 23 ++++++++
 automation/scripts/build               |  6 +--
 automation/scripts/qemu-smoke-arm64.sh | 98 ++++++++++++++++++++++++++++++++++
 3 files changed, 124 insertions(+), 3 deletions(-)
 create mode 100755 automation/scripts/qemu-smoke-arm64.sh


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 04:45:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 04:45:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28724.57801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kessC-0001I7-FE; Tue, 17 Nov 2020 04:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28724.57801; Tue, 17 Nov 2020 04:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kessC-0001I0-CG; Tue, 17 Nov 2020 04:45:44 +0000
Received: by outflank-mailman (input) for mailman id 28724;
 Tue, 17 Nov 2020 04:45:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kessB-0001HS-JN
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 04:45:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e1c9052-8a11-4a1c-a2a9-1564635ad09d;
 Tue, 17 Nov 2020 04:45:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kess4-00033m-7Y; Tue, 17 Nov 2020 04:45:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kess3-0003qF-Ty; Tue, 17 Nov 2020 04:45:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kess3-0005rk-St; Tue, 17 Nov 2020 04:45:35 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H28yGD3pDbZoPoatKZ0E4kssf7TZeV8/aD2vxur2Amc=; b=FIdCM29BXSxW5+kcjSRmpcSIt5
	zd8lRXyIQ4f0VYiLTL6ypDXozH9frv+qLmbuOmwYWEGdeTlCG7VTBIlaCP8DiWBvjK40tepzlOcyW
	TCoFvDsDdxXy1xgwYeL34Laa6Ntm4Hnd+5RARZqDZ2QPr6xigCnTy/nqYbrbWnnuEyNg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156826-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156826: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=124b3f9289f11479d9f042ea6e39bea2b1d5cee3
X-Osstest-Versions-That:
    ovmf=d448574e73108f031ea6b02994f2579bb574785a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 04:45:35 +0000

flight 156826 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156826/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 124b3f9289f11479d9f042ea6e39bea2b1d5cee3
baseline version:
 ovmf                 d448574e73108f031ea6b02994f2579bb574785a

Last test of basis   156806  2020-11-15 00:39:51 Z    2 days
Testing same since   156826  2020-11-17 01:10:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bret Barkelew <brbarkel@microsoft.com>
  Jian J Wang <jian.j.wang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d448574e73..124b3f9289  124b3f9289f11479d9f042ea6e39bea2b1d5cee3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 06:02:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 06:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28733.57812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keu4a-0000WT-AV; Tue, 17 Nov 2020 06:02:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28733.57812; Tue, 17 Nov 2020 06:02:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keu4a-0000WM-7c; Tue, 17 Nov 2020 06:02:36 +0000
Received: by outflank-mailman (input) for mailman id 28733;
 Tue, 17 Nov 2020 06:02:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keu4Y-0000Vd-PX
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 06:02:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98333576-032d-4060-935d-74278c916f18;
 Tue, 17 Nov 2020 06:02:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keu4W-00057b-8e; Tue, 17 Nov 2020 06:02:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keu4V-0007nr-T8; Tue, 17 Nov 2020 06:02:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keu4V-00075X-Se; Tue, 17 Nov 2020 06:02:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1keu4Y-0000Vd-PX
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 06:02:34 +0000
X-Inumbo-ID: 98333576-032d-4060-935d-74278c916f18
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 98333576-032d-4060-935d-74278c916f18;
	Tue, 17 Nov 2020 06:02:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J4ZWfuwzymzeDKpAtOOXEkOL+Nu+ULnv2T0Dw9iAZTs=; b=jh/eU0Swy2S6yVq1gaBYC3ZFs1
	FFC4K9itAd6NqIMuXDCEI5PQZpjJYVtCPwGhb3xpdWVZ+irUqYFfwZcAfmVlQXMB52JsDNycavMea
	kNgtSQjuQfu/DEy+xIKE969Tqi6lhHsVyVYGzw/s/ZOg4JMLkVH3regfNUE2E4Bv03qc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keu4W-00057b-8e; Tue, 17 Nov 2020 06:02:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keu4V-0007nr-T8; Tue, 17 Nov 2020 06:02:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keu4V-00075X-Se; Tue, 17 Nov 2020 06:02:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156828-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156828: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=19c4c6f8fdd125e40d6718b189ef1be507e55505
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 06:02:31 +0000

flight 156828 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156828/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              19c4c6f8fdd125e40d6718b189ef1be507e55505
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  130 days
Failing since        151818  2020-07-11 04:18:52 Z  129 days  124 attempts
Testing same since   156828  2020-11-17 04:20:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 27507 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 06:36:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 06:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28741.57831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keubJ-0003QK-W9; Tue, 17 Nov 2020 06:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28741.57831; Tue, 17 Nov 2020 06:36:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keubJ-0003QD-TF; Tue, 17 Nov 2020 06:36:25 +0000
Received: by outflank-mailman (input) for mailman id 28741;
 Tue, 17 Nov 2020 06:36:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1keubJ-0003PZ-Bv
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 06:36:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c999ae59-8e3f-4cc9-9b77-a20de231084d;
 Tue, 17 Nov 2020 06:36:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keub8-0005lL-5y; Tue, 17 Nov 2020 06:36:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1keub7-00019A-TW; Tue, 17 Nov 2020 06:36:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1keub7-0002J4-T1; Tue, 17 Nov 2020 06:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1keubJ-0003PZ-Bv
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 06:36:25 +0000
X-Inumbo-ID: c999ae59-8e3f-4cc9-9b77-a20de231084d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c999ae59-8e3f-4cc9-9b77-a20de231084d;
	Tue, 17 Nov 2020 06:36:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=crR1710zUkszzfr+b9b0Zf8UNfREDrned5CugpD0qBw=; b=bqtDofuiv3ZHgbH4YT0X4xubmt
	sbzAJBFsJr6boFSFeG9UmMeEDV0lSuZRe/T3Mg8Q1clN1GSdR+wh1ad3zqUmeIOlifwxT5Ipk0NGh
	tSckW5oB0AbLZg71q1BMQ+Jgu6KGglvAfGygkuhvfDUfjvnayVEtEgT6mlDqqJoUsvQA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keub8-0005lL-5y; Tue, 17 Nov 2020 06:36:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keub7-00019A-TW; Tue, 17 Nov 2020 06:36:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1keub7-0002J4-T1; Tue, 17 Nov 2020 06:36:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156823-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156823: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2f7c9dd5181524ceaf75ba3ef8d84090b1e9e8d8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 06:36:13 +0000

flight 156823 qemu-mainline real [real]
flight 156830 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156823/
http://logs.test-lab.xenproject.org/osstest/logs/156830/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                2f7c9dd5181524ceaf75ba3ef8d84090b1e9e8d8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   88 days
Failing since        152659  2020-08-21 14:07:39 Z   87 days  187 attempts
Testing same since   156823  2020-11-16 15:37:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 64868 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 08:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 08:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28751.57846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kevvz-0003nq-Sn; Tue, 17 Nov 2020 08:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28751.57846; Tue, 17 Nov 2020 08:01:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kevvz-0003nj-PE; Tue, 17 Nov 2020 08:01:51 +0000
Received: by outflank-mailman (input) for mailman id 28751;
 Tue, 17 Nov 2020 08:01:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kevvy-0003n5-8J
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 08:01:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c485d73f-c656-40cc-8f51-b827344e7c4a;
 Tue, 17 Nov 2020 08:01:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kevvp-00084y-9S; Tue, 17 Nov 2020 08:01:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kevvo-0006S9-TZ; Tue, 17 Nov 2020 08:01:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kevvo-0007k5-T6; Tue, 17 Nov 2020 08:01:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kevvy-0003n5-8J
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 08:01:50 +0000
X-Inumbo-ID: c485d73f-c656-40cc-8f51-b827344e7c4a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c485d73f-c656-40cc-8f51-b827344e7c4a;
	Tue, 17 Nov 2020 08:01:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qAlOsgiZXCHiDaRQubcEZZZCpPMv3A1ynGA+HyG/eVM=; b=mk+II6uS9+surAZ6IZRdh5bvY5
	2X0VuYqlefZfzZ3nvaBEcRvraEqYZqZAO6WRTUePsSOq/T0+3swUXQUMvLbIDOqGwzQI1uVsNZbI2
	dmwJvJqAiwb2JwtMxWGJKhqyYnp7Q0mtmhUv88lsd2vDXpknQeRZXsi/56fN5dUo4Rco=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kevvp-00084y-9S; Tue, 17 Nov 2020 08:01:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kevvo-0006S9-TZ; Tue, 17 Nov 2020 08:01:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kevvo-0007k5-T6; Tue, 17 Nov 2020 08:01:40 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156825-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156825: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:build-i386:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:build-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=09162bc32c880a791c6c0668ce0745cf7958f576
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 08:01:40 +0000

flight 156825 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156825/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 156815 REGR. vs. 152332
 build-amd64-xsm               6 xen-build      fail in 156819 REGR. vs. 152332
 build-i386                    6 xen-build      fail in 156819 REGR. vs. 152332
 build-i386-xsm                6 xen-build      fail in 156819 REGR. vs. 152332
 build-amd64                   6 xen-build      fail in 156819 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot         fail in 156815 pass in 156825
 test-arm64-arm64-examine      8 reboot                     fail pass in 156815
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156815
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen        fail pass in 156819
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 156819
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 156819

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pvshim    1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)   blocked in 156819 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-shadow     1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)          blocked in 156819 n/a
 test-amd64-amd64-pygrub       1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)    blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 156819 n/a
 build-amd64-libvirt           1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)    blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-xl           1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)      blocked in 156819 n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)          blocked in 156819 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)        blocked in 156819 n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 156819 n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)     blocked in 156819 n/a
 build-i386-libvirt            1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 156819 n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)        blocked in 156819 n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)    blocked in 156819 n/a
 test-amd64-i386-pair          1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)    blocked in 156819 n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-libvirt       1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-xsm        1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-pair         1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)         blocked in 156819 n/a
 test-amd64-amd64-libvirt      1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl            1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)          blocked in 156819 n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-examine       1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-examine      1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)          blocked in 156819 n/a
 test-amd64-coresched-i386-xl  1 build-check(1)           blocked in 156819 n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)          blocked in 156819 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)      blocked in 156819 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 156819 n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)    blocked in 156819 n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)        blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 156819 n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)     blocked in 156819 n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 156819 n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)    blocked in 156819 n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)           blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 1 build-check(1) blocked in 156819 n/a
 test-amd64-i386-xl-raw        1 build-check(1)           blocked in 156819 n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)   blocked in 156819 n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked in 156819 n/a
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156819 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 156819 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 156819 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2   3 hosts-allocate           starved in 156819 n/a

version targeted for testing:
 linux                09162bc32c880a791c6c0668ce0745cf7958f576
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  108 days
Failing since        152366  2020-08-01 20:49:34 Z  107 days  178 attempts
Testing same since   156815  2020-11-16 02:08:16 Z    1 days    3 attempts

------------------------------------------------------------
3520 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 672911 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 08:12:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 08:12:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28757.57858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kew66-0004q9-11; Tue, 17 Nov 2020 08:12:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28757.57858; Tue, 17 Nov 2020 08:12:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kew65-0004q2-TY; Tue, 17 Nov 2020 08:12:17 +0000
Received: by outflank-mailman (input) for mailman id 28757;
 Tue, 17 Nov 2020 08:12:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kew64-0004px-Gf
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 08:12:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 026c294e-1185-40e7-9194-664038b8a5a3;
 Tue, 17 Nov 2020 08:12:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 623CBAD2D;
 Tue, 17 Nov 2020 08:12:14 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kew64-0004px-Gf
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 08:12:16 +0000
X-Inumbo-ID: 026c294e-1185-40e7-9194-664038b8a5a3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 026c294e-1185-40e7-9194-664038b8a5a3;
	Tue, 17 Nov 2020 08:12:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605600734; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g8SCEM3AchjQG8YOYHO0TBbekwafiQ6MQ8X2JLbb9IQ=;
	b=NNjMJ29m6Ope1R+lXw2SeDhhr5z4scO2wJ+SrWsAMyfHgmnVP0rMuRjJbBT+RI2jTXWakP
	XNMsKi/VMQY/H36b2mRy5AB8xVDCKsGjq4PU/sH3q6rhQE2JhzR2U3XBj0eYutNnDZZbkz
	UHFpgR1mgK1k0FUJItBJ0onkSzp79z8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 623CBAD2D;
	Tue, 17 Nov 2020 08:12:14 +0000 (UTC)
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Cheyenne Wills <cheyenne.wills@gmail.com>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
Date: Tue, 17 Nov 2020 09:12:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.11.2020 22:57, Cheyenne Wills wrote:
> Running Xen with XSA-351 is causing Solaris 11 systems to panic during
> boot.  The panic screen is showing the failure to be coming from
> "unix:rdmsr".  The panic occurs both with existing guests (booting off
> a disk) and when booting from an install ISO image.
> 
> I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
> requested that I report it here.

Thanks. What we need, though, is information on the specific MSR(s)
that need workarounds added: we surely want to avoid blindly doing
this for everything the XSA change disallowed access to. Reproducing
the panic screen here might already help; proper full logs would be
even better.
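For reference, individual MSRs can be probed from a Linux dom0 via the
kernel's msr driver while narrowing down the offending register. A minimal
sketch, assuming the stock /dev/cpu/N/msr interface (root privileges and
`modprobe msr`); the helper name is illustrative, not an existing tool:

```python
import os
import struct

def read_msr(msr, cpu=0, dev_fmt="/dev/cpu/{}/msr"):
    """Read a 64-bit MSR via the Linux msr driver (man 4 msr).

    Returns the value, or None if the device cannot be opened
    (no root, msr module not loaded, or CPU absent).
    """
    try:
        fd = os.open(dev_fmt.format(cpu), os.O_RDONLY)
    except OSError:
        return None
    try:
        # The msr driver maps the file offset to the MSR index.
        data = os.pread(fd, 8, msr)
    finally:
        os.close(fd)
    return struct.unpack("<Q", data)[0]
```

A disallowed or non-existent MSR typically surfaces as an I/O error on the
read, corresponding to the #GP the guest kernel would otherwise take.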

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 08:19:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 08:19:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28762.57869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kewCv-00055R-Oh; Tue, 17 Nov 2020 08:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28762.57869; Tue, 17 Nov 2020 08:19:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kewCv-00055K-Ld; Tue, 17 Nov 2020 08:19:21 +0000
Received: by outflank-mailman (input) for mailman id 28762;
 Tue, 17 Nov 2020 08:19:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kewCu-00055F-MF
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 08:19:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98a3e620-3de8-4610-a7b4-ccea6458fbe2;
 Tue, 17 Nov 2020 08:19:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0DC8AB3D;
 Tue, 17 Nov 2020 08:19:18 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kewCu-00055F-MF
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 08:19:20 +0000
X-Inumbo-ID: 98a3e620-3de8-4610-a7b4-ccea6458fbe2
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 98a3e620-3de8-4610-a7b4-ccea6458fbe2;
	Tue, 17 Nov 2020 08:19:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605601159; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jBSwZgAuEvWHMjM4d4hzcqClmDivjG0qI8H2PdJ0jpE=;
	b=aBvtB6OW+wniVNWwSdyEAPT9jdP+RRtrWwmViYqKzHPhVgQvkYOF9uj+DJrD4A1SyHY+8P
	Vc0r0YVu/Uii3y/e7zY3IEi1dJMNKmtDwBgHe8wwNl23FuvkJG3OwPa1bZzU9LIJOLrqzy
	I6WE9Bcz+Z1k2p8rTPSkfIRoOb8/smc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D0DC8AB3D;
	Tue, 17 Nov 2020 08:19:18 +0000 (UTC)
To: Andy Lutomirski <luto@kernel.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201116152301.24558-1-jgross@suse.com>
 <20201116152301.24558-5-jgross@suse.com>
 <CALCETrW_UO9sksa1agOfs5E7yV+RqOyugEEOBjZY8Z47R-04Pg@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 4/4] x86/xen: drop USERGS_SYSRET64 paravirt call
Message-ID: <194ffa2c-cfc6-29b2-5ee4-3d02581b8e28@suse.com>
Date: Tue, 17 Nov 2020 09:19:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CALCETrW_UO9sksa1agOfs5E7yV+RqOyugEEOBjZY8Z47R-04Pg@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TZeSXnOA78aXm6rOFpyaEcYCc2ZyiwE9B"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TZeSXnOA78aXm6rOFpyaEcYCc2ZyiwE9B
Content-Type: multipart/mixed; boundary="x8PsBNwBukhgo2A84pdicBLVDWhNt46lb";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <194ffa2c-cfc6-29b2-5ee4-3d02581b8e28@suse.com>
Subject: Re: [PATCH 4/4] x86/xen: drop USERGS_SYSRET64 paravirt call
References: <20201116152301.24558-1-jgross@suse.com>
 <20201116152301.24558-5-jgross@suse.com>
 <CALCETrW_UO9sksa1agOfs5E7yV+RqOyugEEOBjZY8Z47R-04Pg@mail.gmail.com>
In-Reply-To: <CALCETrW_UO9sksa1agOfs5E7yV+RqOyugEEOBjZY8Z47R-04Pg@mail.gmail.com>

--x8PsBNwBukhgo2A84pdicBLVDWhNt46lb
Content-Type: multipart/mixed;
 boundary="------------9EE45120FE990CE4FF143A7A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9EE45120FE990CE4FF143A7A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 16.11.20 17:28, Andy Lutomirski wrote:
> On Mon, Nov 16, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>>
>> USERGS_SYSRET64 is used to return from a syscall via sysret, but
>> a Xen PV guest will nevertheless use the iret hypercall, as there
>> is no sysret PV hypercall defined.
>>
>> So instead of testing all the prerequisites for doing a sysret and
>> then mangling the stack for Xen PV again for doing an iret, just use
>> the iret exit from the beginning.
>>
>> This can easily be done via an ALTERNATIVE like it is done for the
>> sysenter compat case already.
>>
>> While at it remove the stale sysret32 remnants.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Acked-by: Andy Lutomirski <luto@kernel.org>
>
> FWIW, you've lost the VGCF_in_syscall optimization.  Let me see if I
> can give it back to you better.

Ah, right.

Nevertheless a simple kernel build is about 0.5% faster with this
patch.


Juergen

--------------9EE45120FE990CE4FF143A7A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9EE45120FE990CE4FF143A7A--

--x8PsBNwBukhgo2A84pdicBLVDWhNt46lb--

--TZeSXnOA78aXm6rOFpyaEcYCc2ZyiwE9B
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+zh4UFAwAAAAAACgkQsN6d1ii/Ey9U
fQf7BHJ5Bw5iPQtVJ/DpCX/TPqNeQ3JbRh2IfYAve06UfVeTmwGYkv8kMpPcyvdbh87G1IyYJai6
XwGksJU2kH+Lzj+jt1ISI+/TYIzmgATD1RCs5iyHAkV8Eh6oYvWpCpQaoUGhRSzZ5XfwjyAWtT0m
V8ZY1BH9SOvyL/XvxaEIHOnFsYTcnhno3fhJwZW5jCc148Y2E1Tby91TG1XE2YFfbbgAWc8963Kk
5v4mmQZuPi8MKDTB5tytwc2g8y47YBAtyBKqPO5BuBZoJpNqAgOXgswKmWV4jyB6Ppter0tgZmYL
gQTEpG3aja8xEHLKFPZpZBYBWPRNGubSu7+QTZM6Hg==
=7+u4
-----END PGP SIGNATURE-----

--TZeSXnOA78aXm6rOFpyaEcYCc2ZyiwE9B--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:05:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28810.57926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kewvK-0001WJ-Vp; Tue, 17 Nov 2020 09:05:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28810.57926; Tue, 17 Nov 2020 09:05:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kewvK-0001WC-Sg; Tue, 17 Nov 2020 09:05:14 +0000
Received: by outflank-mailman (input) for mailman id 28810;
 Tue, 17 Nov 2020 09:05:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kewvJ-0001W7-99
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:05:13 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68dbd055-1b1a-4c05-9070-af20e6ad1b85;
 Tue, 17 Nov 2020 09:05:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605603911;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=I6V3IFZPZ0tgcXQy4GN+Z6LIOsDCEQWL4NBeeSvTAbM=;
  b=ccaeFqngaZH7u1HW3F/87+0J3N2SZpfbtNjbfIgKsNIs70ZZyx3Zo8iw
   pdWc8PBRDjCFqHdlZYYhPClQclgMHKYkRYVt9yu4GFuAd45d69cx9r9NJ
   nSvtWfIAd9BBoc3Ef8qsKT6nJv74cOlHI/lgPt2tzIiZIiKDEZb4Zp3Sv
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: HcPHcS/Lcbx6n68FtFrO8mXlYvZFzzOYm/yes6ZLrtP38OhcOPx2+0SCfS2sy7/tbzldzLfkzt
 nx+gwig/47V2ewiBb1TvFBYl3gzZYefah3RvBkNPMyu0TLoSskpdVj03Ek4YSbNa7E8aM9hpEw
 fmUVLHs5ERURCsqt/ljQUWh5b9z4RynKZVWqQeFOntMDwY4lzxjBq5PqOiwDP60qwahW5XZZTV
 XIcidg3T5bUA5YvHwQMjurAXSNlXdqSPcrKyi03yGqmT1nimRlL9Zo69T8G/olZUvafWGnoKVm
 080=
X-SBRS: None
X-MesageID: 31661718
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="31661718"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=atN7j9XJk3tB6tWxBd2p2iMSSUlKnX7mpMk0/FbNLYWDWR522FBiysleuMCJDdj89p+V5upEuY74oWk4nBH78/9AYPin7ZL7sMgzwdRlWAwM004wMtjQjE4bzQ8n1/VuwtGtgarWKtGte40PCDNhN+hExW4tPJLUAyQsYvG98+klWHBVQrXfdANx5Athjr19vh/AxSHLLOcWQSnOtjT5UCyf9aO6PdrTn+YRR7KTJVUXO+HvyignWhCuXG73aqKP82ke0iKl9otzJ+LA3DDAn9VKhot251s4h4gwtrhtgPQ3Vvnhd6MGXvXUJ8G50LJYt9EWowPdHVaFz26KGnQ1vQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w6YdC8A8tj9JNAjiSUIsSjQHIdlAHYWeca/gAv2iKGk=;
 b=bxyha9D2KDo7wr61WgHjE8uD71UF8sagBs8Y2WAIJ8G3XJQu0O1wbbMek2HVVLXTdXRbfAcenN9gB05jxduMUQq/HxW1hohaMbAmKNaBBt8O+Td15eDL5GQ7J3qjxwb8OxKghmLkjq9+GJy+j2qSypBbJyBhS6pAnnRS5MRLdopNOiEUaQ97fsgCqXruH7xJEik+YcVnc9k0X1HLO7ZDRQi6yJj17OFBfFvucYA8S06I/mlLSvMF/6dyxarpUAxktm9q2R31damFclG2DcNojHOrLmjbaFIKOaHYUcOA0BPwqrusCIzuqpRhxwSrISUy4OzekVeltWq8OTcXTEpwmA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w6YdC8A8tj9JNAjiSUIsSjQHIdlAHYWeca/gAv2iKGk=;
 b=g39y2yz5lQ8/NcXwQA9kQNSNGKGyKsLUkM7OZmfbIRjIMfA5Pzqr0VHY8OVdKPWZcH5laifAaJCIfCUI5VmmLSdCH0SJJU3PfYbvGnNN/76try7F4kNuqmKOYaw/fcVmG+biYzDjwBuN9eyD8NwBNZyo/n5NubLI2EIcT0eH/24=
Date: Tue, 17 Nov 2020 10:02:04 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201117085448.2haekgpbcspwmqja@Air-de-Roger>
References: <20201115174938.GA3562@antioche.eu.org>
 <20201115182416.GA30231@Air-de-Roger> <20201116182211.GS840@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201116182211.GS840@antioche.eu.org>
X-ClientProxiedBy: CWLP123CA0145.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:401:88::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1600e786-68cf-4c2a-8639-08d88ad779a6
X-MS-TrafficTypeDiagnostic: DM6PR03MB3580:
X-Microsoft-Antispam-PRVS: <DM6PR03MB35802955606949F5465E4B948FE20@DM6PR03MB3580.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 2FuLegZML/WxKaR/A7IXUNa0YCNYFLThftaYQ38juR6tS3jUITTlOwFySEQAZSBwN44qd38JmJkQFbk4gpgJ11D+6ehnt81AXN7rrSkA6Ise8ldN+moW1p+lbv8YKc4uz7fem39kYlmndVgpqbx9ehhMOljj2XBV1YHRWWKzvy6DncJn5R3ACq4YKLaYST+1rUko8GpTDApQqx+wvEUNpZD1te6aztIsuUy+CUC92toyduE7sKMVe1X4tQOxjCDOqJJPxMyDOS2PR4Xi+fH9pq4CRTXC8O179lvY4nkiGrMfkwgSZEh9V3Z/jA2yXr6rCfeq6ZG4bztfEwTIyuwLCg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(396003)(346002)(376002)(39860400002)(136003)(9686003)(2906002)(83380400001)(6666004)(85182001)(1076003)(6486002)(86362001)(4326008)(66556008)(66476007)(66946007)(6916009)(478600001)(186003)(8936002)(16526019)(33716001)(6496006)(8676002)(26005)(956004)(5660300002)(316002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: K7aCeq26uPHOrFoFNh/Gnyj/1lu0gP+qbI3FgWKr49viWze1Pn/ja3bVg1h8+kDIMOM3b8MSZi6MWJ++afs92H1L/+ezX49/1Pd8ZSTV0UZLBreZv9UD1R5mscfYFSZGgCh+/7sJ8ic3MwvUnWphBt+bhgZbpob/ebQ46nPdeTnn6JAWBMw5b/m6jpJgZIc4PTSNIKhZWL9RFqO9qlwYZEFFUDtsEB2Xd352lyfZpED+TNdRfGC30pxNxyCSLwRe4+GriwuDGA8n73WCFc8UldTCpv8TH9Psg0nY6GTGN3YcJ/vG8WKxRfiUtkQ6zI+P0HzhawhyaqbyJbpnwoVXRTycWkrhl8ElOZCHqPC2hQIzz2KcyqG0oKC3cdHzdl1XbyGP5ZGNI6TV6MLn/STOlsP82aIQzcKaIJM7dOBHag7HQ+2rRZHPmaIyJeMoCzOXKzH7m8k2Fh+98cxXpKCI/AMqvGWdqjboTxh4wwzY4nBbfYQWUTFU/MzY/2vVd7DmDg6IxnLMyXpbxuYTn+ghsp1Lrc8cIur5GTCZdl2nPYpBT3ki9623spFsHfDMVoyeDYI2ZLGGYDG35gTRuugFrCwIgi9jB2xNe48AEN+izEeW1zIy38HdmOQIHmy3VtQZ6dL80k96+sDQv+TlncKV8DhRfvYG0Veph/b5UDjRc9OQ8E+YHTDbnphaoK3KT826ep8I0ASlcfeDE2RmOq7usFuB9BbR/qklccbgeNn/CIvNtbTKXXjo/t/p8BIYNEaKxMMtIfqPLqvqEBQcU1PShJWHbMJcABa6FPorO3A1Tqm+IgeGxpk9r3yY7u0SdXG8UAoCEJWQse489qKv7unaHr5OYj1NDFPDmq/MAQusa3GzeMg9H17E0VAWGEGr2jdcYtV13JDVAeidG/2cbQEqcw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 1600e786-68cf-4c2a-8639-08d88ad779a6
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 09:02:13.7817
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Mo9YdprKkD2O2XOi9DjxSX56wpsf4zqrsORPtXtk4jBwRkGHlQzMjQEH431zrVpbG+0HR0eCu8O272/RFAxB+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3580
X-OriginatorOrg: citrix.com

On Mon, Nov 16, 2020 at 07:22:11PM +0100, Manuel Bouyer wrote:
> On Sun, Nov 15, 2020 at 07:24:16PM +0100, Roger Pau Monné wrote:
> > On Sun, Nov 15, 2020 at 06:49:38PM +0100, Manuel Bouyer wrote:
> > > Hello,
> > > I spent some more time debugging NetBSD as a PVH dom0 on Xen.
> > > With Roger's patch to avoid a Xen panic, the NetBSD kernel stalls
> > > configuring devices. At first I thought it was an issue with hardware
> > > interrupts, but it more likely is an issue with Xen timer events.
> > > Specifically: virtual CPU 0 stops receiving timer events, while other
> > > CPUs keep receiving them. I tried to force a timer rearm but this didn't help.
> > > The event is not masked nor pending on Xen or NetBSD, as confirmed by 'q'.
> > > Others events (the Xen console, the debug event) are properly received
> > > by CPU0. I don't know how to debug this more at this point.
> > 
> > You could try to use dom0_vcpus_pin command line option and then dump
> > the timers using the 'a' debug key, this way you can see if CPU0 has a
> > timer pending (which would be the vCPU0 timer).
> 
> Thanks, this helped. This was a bug in the NetBSD kernel, which would show
> up only when there are enough physical device interrupts (which explains why
> I didn't notice it on PVH domUs).

Great! So all interrupts are working as expected now?

Roger.
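For anyone reproducing the debugging steps suggested above, a config
sketch (hypothetical file path and GRUB variable usage; adapt to the
dom0 distribution's boot setup) of pinning dom0 vCPUs and dumping the
timer state:

```shell
# /etc/default/grub -- pin dom0 vCPUs 1:1 to physical CPUs so the
# per-CPU timer queues dumped by the 'a' debug key can be matched
# against the stalled vCPU
GRUB_CMDLINE_XEN_DEFAULT="dom0_vcpus_pin"

# At runtime, debug keys can also be injected from dom0 instead of
# the serial console:
#   xl debug-keys a   # dump timer queues
#   xl debug-keys q   # dump domain and event-channel state
```

The key output then goes to the hypervisor log, readable with
`xl dmesg`.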


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:07:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28815.57937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kewxp-0001eq-DE; Tue, 17 Nov 2020 09:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28815.57937; Tue, 17 Nov 2020 09:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kewxp-0001ej-AG; Tue, 17 Nov 2020 09:07:49 +0000
Received: by outflank-mailman (input) for mailman id 28815;
 Tue, 17 Nov 2020 09:07:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUfM=EX=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kewxo-0001ee-0q
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:07:48 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d986aa7b-a812-4d46-9f81-d2aa4e0d25ff;
 Tue, 17 Nov 2020 09:07:46 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AH97c0u014536
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 17 Nov 2020 10:07:39 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 5ACA42E9CA8; Tue, 17 Nov 2020 10:07:33 +0100 (MET)
Date: Tue, 17 Nov 2020 10:07:33 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201117090733.GC1405@antioche.eu.org>
References: <20201115174938.GA3562@antioche.eu.org>
 <20201115182416.GA30231@Air-de-Roger>
 <20201116182211.GS840@antioche.eu.org>
 <20201117085448.2haekgpbcspwmqja@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117085448.2haekgpbcspwmqja@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 17 Nov 2020 10:07:39 +0100 (MET)

On Tue, Nov 17, 2020 at 10:02:04AM +0100, Roger Pau Monné wrote:
> Great! So all interrupts are working as expected now?

No, I'm back at the problem where the PERC RAID controller times out on
commands. I'm cleaning up my sources and will try to get more data
about this problem.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:33:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:33:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28826.57950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexMP-0004T5-HG; Tue, 17 Nov 2020 09:33:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28826.57950; Tue, 17 Nov 2020 09:33:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexMP-0004Sy-CR; Tue, 17 Nov 2020 09:33:13 +0000
Received: by outflank-mailman (input) for mailman id 28826;
 Tue, 17 Nov 2020 09:33:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kexMO-0004St-DI
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:33:12 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5d6a6da-4398-4d29-8b60-0e95b08a84a9;
 Tue, 17 Nov 2020 09:33:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605605591;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=xlG7GF5iQJSjzhvJ2z7oBn5133kFgRL6UqiHO3w5Ib0=;
  b=ZhG7heNB+/Cn7gGtf0ucn4K/7tpRzkFBTN6eGn4zkOejTBtT6nVZ+w8s
   NoW5fE0hnpTqMVQM7HsDy+s5/XWXUCJdiBhbOKYRZBg+Jx1Wm9GVf6FKv
   ge4lUzic7vatTeirAcYWza+nSx2squRchfI1JxbAUNc5G7ywUc3tTTh2c
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: MVcem7/R8ipHHc0XKl+LBw+5fuQUs0WbPWPwBrFI56J6Ug6URM5um+tZorIdD0HekuaRYeOCdB
 sottx6pJVaV6UOAVv7qHTqFHpJuVfbw/Qk+SSotgpymE3NZUioxAfu6lMuscL4Sf2ss1m2asF3
 SDMiSc0lybJVTXjmSPwox0IgU8k9amdoya6yz1nXybtmcYEAwM5ukgqMsqxsHxd/9LpKbBuser
 CH+dMgI2oO4W/gZsMnqclArt35muVQkCqCzFr3Z/Z7BBsiCoszmw4ttOhrIHwpj3a/DQ/5YgJA
 ja8=
X-SBRS: None
X-MesageID: 31290796
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="31290796"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c1AtNCC+ZLrLl3p3xVuZkW+XNrFVX2geWb+E9luuPKBCXzHlX41r6kyzwHfO4RFPVJZuQqVjrjdz+ZxvoUZ2d1uXFvCANAyh6B+V8wdE0/4eMCrAegy1/zeD50P9G7M3xokOHdpslwUwsCODwHllybi6BLgew/+m7x7XOiXRXOtchr5flgirZFUXhCkagvnMc+ODJhs8VggirD4gqj9Za64ZkomqRKrpwxkqM8oOgrW11OHrSk2zJXvPzgIxCLuiW0xCIVeMXv1XWbuIsdAxWFB6OwxJ0Nhfrk29v1zaee14MKpg11wANY9sn6ELF3j80ws/lt6hURdsHLyhTf15jw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rBXLBz2fRRUloLhGoC6sj4nzp8i6aSdhXeuRHbff3uE=;
 b=UE9GRdJ7xOObGjwWyLcHTlAUD3LRFRr1dljswLjg1N4glGaIQ5va+7WstMXMRp3CGDRMFbfSUdcGtaAEbHQ44K5w0JFouxPBsIFU6laYLfukBXE3kYG0KdVYh4Di8G3hBHlriVDffjuu2u7+7Zzrmf/THOEfpUSTs1cqjI0NfanuANYPbEmhiPCZqH5KVWI0vXgKb1ejaOH29oRuCqzz85xCm/NyoRS3vXRulTnGg+IiUJmF1x4DOu/1rb8mt9987K+tIhYi3O9egYfFY4gjAtSbSsMzE9bDhRhevFwaN8bVJNFZ/S9hxxV3KHgtVo2nKErnIQzacuWgt7KSfxS59g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rBXLBz2fRRUloLhGoC6sj4nzp8i6aSdhXeuRHbff3uE=;
 b=KjwU6zrJjgSeFZtPXN5DyxWM4FzmRYTg7H5m2tRxnKihdFqwE9oB4oAQqFEi6arf3m6NC+JGxcsIlMTBWEp3hMqH3J/XjbkT1A6+xklkqqfC5Y5Ewc8H9HAVHd8BdZRnOk045z2XxNfxNDqQqaN+u7rNdxf/wVxrwB/HSSAJufY=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v3] docs: fix documentation about default scheduler
Date: Tue, 17 Nov 2020 10:32:58 +0100
Message-ID: <20201117093258.26754-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0357.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::33) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fbacf35b-28ef-4774-8f58-08d88adbca19
X-MS-TrafficTypeDiagnostic: DS7PR03MB5526:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB5526C71471C702D3512814E28FE20@DS7PR03MB5526.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: HirnF6bX2ZTY8IPPAeCN4LNHmiusui7KWs33UJfOaximO4KjMCC54aU6nvzVRcc10rOulgeYOFjweFNnP8nxN7B1zc5lwvJxfv8MTElkSqKqbqC9qWpCXaFxAhCnIBlnL6itt6dgnuevhXukDRySaDom4QMPJgRZPQs1XYcgQOSZaWOYuv20ls3gOFe3WQZFwoixwZcg5YwsImTulBT0Bz8Brw7BFl/yhA1Ia9+rv82NKNxYqIJG9WA3xJ2itqr2Os7jI5jTBrZ/jn/TTBNeYwbOhwD0b4MqAloX8fRs3ceCU4FGpp7xIuwIVzTRTsqn
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(366004)(376002)(136003)(16526019)(186003)(26005)(66556008)(66476007)(66946007)(6666004)(5660300002)(4326008)(1076003)(6496006)(36756003)(956004)(316002)(54906003)(478600001)(2906002)(2616005)(6916009)(6486002)(86362001)(83380400001)(8936002)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: r1ZwQmRyPzIf8iZmv82x7Y6h4mDOhAgsZzO5h37wFnYCO0m5+zFqee48JpTVCVipH4XArI4NjAKyLEpCl7O6YpJvI/7wOp3C9tRBTMpY2G+gJTID2QB6N+cK70RVNFfMRPQYQt8ixQ1pMI9CPYZRIYubGToRjlADlIXsUPasv3Gg4S5oLps/zxmb2SJs1CASJqJLd/OyHdpjXRrCBll5WN4UIwhPc6D2uiwcH6IEAIpgtMTpNjcHu0QWOdkYKbDGR76KUodYt2JdYsjnk0xURAhZA8Rr5buaO5q3aT4RoY/lzSg/4k0CvY/5HALYUMoRBu+yRQat9ziDJWXsILiKCEijNSbp+tD52Fm2hHRaQvAN/NbtrfoWsuoeV/yQSH+leXYKJlKms+XhwfZH2P4hVaBHx7RtrLa3bjqspOLJ+4gkn9XSNQlg4oRh8IWS19NzFKaQiQUkqFMD0lPWOgybeFwy+ta8dJHXixK4ZlX9YSleqZrawwY7xbIKP6cTBHzGPTjmbjke5nreDsurEckdK3/746JQ4tg9B2F5ufSLHty0CWZSzS+9ztfaBA8Di++cF8SZRrfMMenNcY4oJJFO72fd2vTfJ2SCJu4Aj5sVphwKNM5Dy4DNiQAbKTJUF8sxamrVm0Zbu+H06VBbd11uHNIPhHbVcANnJDxYHxAkBy+c4MiT0p2Zgn+TmdCqIhdd4VN0BpjsqtCEVv7TSkrYq1mUalLXdLxfL66wgGxIpZydhCoqCTkIoLsWgKg+7F6JpYJKMK+hpl/jYTz2Jhhv27Fwkt3UJTimR51LUA5Io5JiJ+Owqe6QRTMQGqE1k5ApvQ4oTxLQxXkY7QMhhrpHq+DH8urwsn1phqO5rDMiuNnFDqARSHPGA+k4Br8QuTY6nNuhx+211VQem94FM/RVLQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: fbacf35b-28ef-4774-8f58-08d88adbca19
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 09:33:06.8209
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Wt8OyLBLpKOpjFxLHcJglRgmQO4l2FuYJud96mwFKEdJatOekde6Ca/mU4+u+r9w20w5Wtnln1swgWhxjaZ1jg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5526
X-OriginatorOrg: citrix.com

Fix the command line document to account for the default scheduler in
Kconfig being credit2 now, and the fact that it's selectable at build
time and thus different builds could end up with different default
schedulers.

Fixes: dafd936dddbd ('Make credit2 the default scheduler')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - List credit2 as the default scheduler.
 - Note that default scheduler can be set from Kconfig.

Changes since v1:
 - Point that the default scheduler is being selected by Kconfig,
   don't mention the default Kconfig selection.
---
 docs/misc/xen-command-line.pandoc | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..803243c3fa 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1876,9 +1876,11 @@ with read and write permissions.
 ### sched
 > `= credit | credit2 | arinc653 | rtds | null`
 
-> Default: `sched=credit`
+> Default: `sched=credit2`
 
-Choose the default scheduler.
+Choose the default scheduler. Note the default scheduler is selectable via
+Kconfig and depends on enabled schedulers. Check
+`CONFIG_SCHED_DEFAULT` to see which scheduler is the default.
 
 ### sched_credit2_max_cpus_runqueue
 > `= <integer>`
-- 
2.29.2
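For context on the `CONFIG_SCHED_DEFAULT` symbol the new wording points
at, the relevant Kconfig machinery looks roughly like the sketch below.
This is reconstructed from memory of xen/common/Kconfig around this
release, not a verbatim copy, and is abbreviated to two schedulers:

```kconfig
choice
	prompt "Default Scheduler?"
	default SCHED_CREDIT2_DEFAULT

	config SCHED_CREDIT_DEFAULT
		bool "Credit Scheduler" if SCHED_CREDIT
	config SCHED_CREDIT2_DEFAULT
		bool "Credit2 Scheduler" if SCHED_CREDIT2
endchoice

config SCHED_DEFAULT
	string
	default "credit" if SCHED_CREDIT_DEFAULT
	default "credit2" if SCHED_CREDIT2_DEFAULT
	default "credit2"
```

This is why the documentation can no longer name a single fixed default:
whichever `SCHED_*_DEFAULT` option the build selects becomes the value
`sched=` falls back to.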



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:42:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:42:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28834.57962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexVh-0005Tk-Km; Tue, 17 Nov 2020 09:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28834.57962; Tue, 17 Nov 2020 09:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexVh-0005Td-Go; Tue, 17 Nov 2020 09:42:49 +0000
Received: by outflank-mailman (input) for mailman id 28834;
 Tue, 17 Nov 2020 09:42:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kexVg-0005TV-7L
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:42:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1d86cb7-9a1f-4d8b-883b-96994cb29806;
 Tue, 17 Nov 2020 09:42:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5CFE6AD07;
 Tue, 17 Nov 2020 09:42:45 +0000 (UTC)
X-Inumbo-ID: b1d86cb7-9a1f-4d8b-883b-96994cb29806
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605606165; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=jQxi+KVV7edey7RTByntwQOR6fraNIOxshEf9K269PM=;
	b=dzLzw5cdhAJAJ87E2/J9YR2KRjiuHLQ9/gZYtMAar5pguhR2GndXSgWGQ0oJaFLzzjS7xI
	2s3OhAWe5IMBBmuVuQMJ7qdoZi0cIacAUSOHVg35h2ks33otUVzRV8jBfrG5IgHgpoJXWn
	Aboe60/H33MY1GPdW16jWKWfYkwcM5o=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Olaf Hering <olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libxenstat: avoid build race
Message-ID: <273da46e-2a56-f53c-f137-f6fc411ad8db@suse.com>
Date: Tue, 17 Nov 2020 10:42:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Olaf reported observing

xenstat_qmp.c:26:10: fatal error: _paths.h: No such file or directory
make: *** [.../tools/libs/stat/../../../tools/Rules.mk:153: xenstat_qmp.opic] Error 1

Obviously _paths.h, when included by any of the sources, needs to be
created in advance of compiling any of them, not just the non-PIC ones.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
A similar issue (at the time of the report) in the building of
libxenstore was addressed by Jürgen's 9af5e2b31b4e ("tools/libs/store:
don't use symbolic links for external files").

--- a/tools/libs/stat/Makefile
+++ b/tools/libs/stat/Makefile
@@ -30,7 +30,7 @@ include $(XEN_ROOT)/tools/libs/libs.mk
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(LIB_OBJS): _paths.h
+$(LIB_OBJS) $(PIC_OBJS): _paths.h
 
 PYLIB=bindings/swig/python/_xenstat.so
 PYMOD=bindings/swig/python/xenstat.py
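
[Editorial sketch, not part of the patch above: the ordering problem can be reproduced with a toy Makefile. Every name below (`gen.h`, `a.opic`, the recipes) is a hypothetical stand-in, not the real libxenstat build.]

```shell
# Toy reproduction of the bug the patch fixes: both the static and the
# PIC object lists must depend on the generated header, or a build that
# starts with a PIC object finds the header missing.
dir=$(mktemp -d) && cd "$dir"
printf 'int main(void) { return 0; }\n' > a.c
# Makefile recipe lines need hard tabs, hence printf '\t...':
{
  printf 'LIB_OBJS := a.o\nPIC_OBJS := a.opic\n\n'
  printf '# The fix: PIC objects also depend on the generated header.\n'
  printf '$(LIB_OBJS) $(PIC_OBJS): gen.h\n\n'
  printf 'gen.h:\n\techo "#define GENERATED 1" > $@\n\n'
  printf 'a.o a.opic: a.c\n\tcat gen.h a.c > $@\n'
} > Makefile
make -s a.opic && echo OK
```

Dropping `$(PIC_OBJS)` from the dependency line and running `make a.opic` in a fresh directory fails, because nothing orders the header generation before the PIC object's recipe runs.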


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:45:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:45:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28839.57974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexYf-0005eF-3O; Tue, 17 Nov 2020 09:45:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28839.57974; Tue, 17 Nov 2020 09:45:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexYf-0005e8-09; Tue, 17 Nov 2020 09:45:53 +0000
Received: by outflank-mailman (input) for mailman id 28839;
 Tue, 17 Nov 2020 09:45:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kexYe-0005e3-0Y
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:45:52 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d684498e-a975-412d-bf96-4a5574b4ac98;
 Tue, 17 Nov 2020 09:45:49 +0000 (UTC)
X-Inumbo-ID: d684498e-a975-412d-bf96-4a5574b4ac98
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605606349;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=jU/GQeNT9VECEGZp6at1TS7GCw3RTxDwROdm6yd8Pqc=;
  b=NYUnK3kO39S3ZKxWUTQ+mCaMUExKPjXeoqPiVuwNMwm8BlcDeMn61Yok
   EotEjQwNzg52otDjqwNOEd5lrQUM3A4VBJdXqhc6nvCZA6cENqtChDbNq
   obzZtvVPR4R7L1jRw6hduPb3c/OvheM5+baxNpE1Qzhh6ODEVvfXHO2IM
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Uli+V3pqbYMY7VLBDEkw8zU6Jfx51qPghPaAy4KDczcdtUdgM/COfHlfbgvtDL/aTB/vfw1jie
 KFno/fBPrEpGH1hPm56hQnY8m5CYxLDojKRHcgPUGD1Ulz+pofJcBOw1rXoFpT89HWKRv4sfWT
 6fwmoX+RVykWMgwO4VGBgjjwIsiJQgW/GuP1a820tQFd+eChqZxq7yLK+BrZuwIm/4eUVUUVTk
 YBxeAp9IylZBGqB/VQ2MdNUG6H5w3F5XjSJfixim3wbe7fK0C55/fn4Vt0dC3wcvRLvuFOzFL7
 9bQ=
X-SBRS: None
X-MesageID: 32457950
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="32457950"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FFgZwwaf/CuxNNo7OXCr6CIP9gmEM8P4QcUQmIg9n2/hi/EsNRKWU2Ebz+HHZrL6zLHRu4YcLNscma4LiV/vu0jguz/ytmJBfB4bAXWvibH0WqSLgLiiJ36b+DysBjTm0k1g2dD0I35VkM5HFOWCHb7KzG6xf8AEhmyORcJoZ7OMkFp2xeJfwRjzctxIzFTX/ZB3BRpUQSuycvHglO8QzNkNrdCz9au5/cK33OAudL9O5TOF2sZ6Fa2RvkVa17tuNvcE64xVGmoQhU/gVdFVWNT+bygk+NhW1DNM+APYaGoHysXbwe+qrr5OsoL0VfyT7Rc47ot5wpap/kKZtvBr/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=34KokYREnulQM/fcbrIbd8nwLMKmZGtLmoRvZKEIMn4=;
 b=MKSikhyY3rJ1SaEl7mpi6aEp+cDEV8/LQu1q5ZG3XdRZ0u7UVhEKM/PucqWTESQYkuG4qYdGRvvcisrWBW9ydkTh8RqCGe6Lyhodh2dNDQ1gq3JcrEMZjgfAi4/MarQgFGegT4v8hGJ/PcOA8JQIiMR+kQZ9FElgbQeS1lmt0xwlcVzEKpmgra0EmJ16ajwtSw+TM0IYmhKJlXnZFE2sMJEEac/UlN271aH50UUD+IwZP+WWgjyIItjsKW64FbWoMbNgQalZn/pSkzc0JWm2o7ckDKR/X5+gXGzA+Cjmtci/4cCdfo6Ohz3SnIhU70EccVr5/z6mTP48UtvaVXqEZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=34KokYREnulQM/fcbrIbd8nwLMKmZGtLmoRvZKEIMn4=;
 b=mAuwTByAb7xnEUm/X8kc8/qNSF6FFKhQsxvOkLfXJG0vwnRuCXsLO2AC6rKxee+yfW+K/H/owarWGX3l3JmzF0WBEG9he3ACeqHc4UJ1CSVq+PF7S9f+oVJZJHL9FQhyqXo6mzpBNkUlxa+AzdYe9+wJ7FPMnEEyhtulDmwMfsQ=
Date: Tue, 17 Nov 2020 10:45:34 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201117094534.z3usx6tsydtle53o@Air-de-Roger>
References: <20201115174938.GA3562@antioche.eu.org>
 <20201115182416.GA30231@Air-de-Roger> <20201116182211.GS840@antioche.eu.org>
 <20201117085448.2haekgpbcspwmqja@Air-de-Roger>
 <20201117090733.GC1405@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117090733.GC1405@antioche.eu.org>
X-ClientProxiedBy: CWLP265CA0355.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:401:5a::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 34a735f5-7586-4f66-4ad0-08d88add8f18
X-MS-TrafficTypeDiagnostic: DM6PR03MB5065:
X-Microsoft-Antispam-PRVS: <DM6PR03MB5065B4CDF779A93B81C2F2D98FE20@DM6PR03MB5065.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: QvsFJ2k88aQOmR+uSITSUb2uDULndGVMeKrxAzcl2kSKvmmyYUu48ZDr1x/ebyXs4UOPa4y6K8p0PSyRvwmY7caOomoDOCcm+x89W43GZX/qYTA8wAI5u14Ev3kuL9y5hJEcwvrQFvyjOOdgwBqroiuEtbJnNd7OpN7uK3k/gXDyxiglG59WPbBP6Ettojhf4Alwu0EWkm2vX1jKosX7oOSZ3WPtuFuM5dJpYjJO7UEazDQOccPJVRMKalNLAQj4XJisobMZiMzx4XIBRgUlmUDsWqu7UG87wqTZgvepr+yEqlmOvpl1Hm+WEcnhso1M
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(396003)(346002)(136003)(39860400002)(366004)(6916009)(8676002)(478600001)(8936002)(9686003)(66946007)(6666004)(6486002)(86362001)(4326008)(85182001)(186003)(1076003)(6496006)(4744005)(956004)(66556008)(33716001)(66476007)(2906002)(316002)(16526019)(5660300002)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 911VPRXQpNCwO9bRtLkijNqsscesWyK+H+JRHKki9k/ctmRmBiSVbHSsLNMQyYcSOVeqyjgTi9Jm1m7ddWEfXqzwH82q3XaTkkWQYV5AC50gj8DIWne38sKY1eUCjVst7qj58uY7+q369GjjeX79gjhdg3Gr61PShwcJ1f061G8I1qEtuPTcoRMeH5aU/eamzpXWPoYIkb4aw9xVHCNMJJxybUAGkcCSsoMr/cu2gfpwvR8zQJAsDqbcxk5BEM+IOa5x+NKNkfL22pwmteJ0xDijWj6np9VWW+f9AFlj04aZQIwWax0IQ1vvAU6YvWny0Dlv0gS+6CEpd55SJpPSe8VAx5W6CoZmVEhPidgi34j9ZRAEJ2H+l75Itl3zTv73A358E07sbWaRFMaKka4byxH69qRUj/27ep1sRHjeXlscad/NbHaDwjmUNk/qr/4uTNCcKnFZxePpdCgF1vv6roq/1030yzYANevjltiKTzrLvOpCfPEK5oId9kQyGJYfS3B/+lUDdxdE4lhkn+JeFpDMalZBlgWqrt2iAufQRZtaq1XliQ4gEhFxBLuUCFpgy1/KB3PueGfMX4r0DLUn7pw113+20sBG7VkEZ44+2wEQr2yHYLFDKDD3EScr1RWJHIUjaBByV3DGUMZsJjnMsxSpRXMXCvLmeIJdE8pOzMk3CWAvLf4FW0mpvdDjwTv6bM9b1ZBJfIwc1meHZyk8sFAx0qthL5JDYt1AFBPyffWFSMB9uYnzGCOeYXPyqiGOLM6rUkoRaGxQtYkADULNCwY71z7kfWdoetrkdg5RP7GWMFqRn2Bj52OzFOVFzC8/e26v0/NvsXpOom/4qj3GqnQVzYVTs0k2xXVg/lCRZjEkl9tWBa9Qfp52gJeJtkDH53NwRvtqTHx9hcScWP0FdQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 34a735f5-7586-4f66-4ad0-08d88add8f18
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 09:45:46.7209
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tWUIS1GlVUeNapTht2ysu1+gfHTFLkfdL1Tgo8Nw4ykRUC/kNGytL4+B85LuS55R2Gz1kwEEduy9IBbIfuq2NA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5065
X-OriginatorOrg: citrix.com

On Tue, Nov 17, 2020 at 10:07:33AM +0100, Manuel Bouyer wrote:
> On Tue, Nov 17, 2020 at 10:02:04AM +0100, Roger Pau Monné wrote:
> > Great! So all interrupts are working as expected now?
> 
> No, I'm back at the problem where the PERC raid controller times out on
> commands. I'm cleaning up my sources and will try to get more data
> about this problem.

OK, the output of the 'M' debug key might be helpful in that case to
see if the MSI-X entries are masked (IIRC you said this controller was
using MSI-X).

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:50:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:50:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28845.57986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexcZ-0005uV-LK; Tue, 17 Nov 2020 09:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28845.57986; Tue, 17 Nov 2020 09:49:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexcZ-0005uO-Hq; Tue, 17 Nov 2020 09:49:55 +0000
Received: by outflank-mailman (input) for mailman id 28845;
 Tue, 17 Nov 2020 09:49:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/ssn=EX=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kexcX-0005uJ-TD
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:49:54 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.64]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eccc1b2a-12e1-4b2b-a405-e3a2242b12eb;
 Tue, 17 Nov 2020 09:49:50 +0000 (UTC)
Received: from AM5PR0701CA0019.eurprd07.prod.outlook.com
 (2603:10a6:203:51::29) by AM6PR08MB4519.eurprd08.prod.outlook.com
 (2603:10a6:20b:74::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 17 Nov
 2020 09:49:47 +0000
Received: from VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:51:cafe::67) by AM5PR0701CA0019.outlook.office365.com
 (2603:10a6:203:51::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.15 via Frontend
 Transport; Tue, 17 Nov 2020 09:49:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT040.mail.protection.outlook.com (10.152.18.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Tue, 17 Nov 2020 09:49:47 +0000
Received: ("Tessian outbound 13ed5f5344c0:v71");
 Tue, 17 Nov 2020 09:49:46 +0000
Received: from 024e3840942f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FE529277-0409-413B-8FBD-D759ADECB8C4.1; 
 Tue, 17 Nov 2020 09:49:30 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 024e3840942f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Nov 2020 09:49:30 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0801MB1992.eurprd08.prod.outlook.com (2603:10a6:4:76::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Tue, 17 Nov
 2020 09:49:28 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3564.028; Tue, 17 Nov 2020
 09:49:28 +0000
X-Inumbo-ID: eccc1b2a-12e1-4b2b-a405-e3a2242b12eb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JZGC6pHTsP477l9M581r6eVumdLkpOQUid+cz7DYd88=;
 b=rOSbHIfuBteKzMdypW40ZqWRpgo2rPE77eNH2uQT0NySwwAQypDH1ZyQh1j+dA8/2KjaiXIk7HohyTWMesW5OeFSYFJbBNQ8TvuPpBJdzL2mO25pMy66A0BCi8lWEOnEBepeR0qBC6Qrh+hL1U9QSDReucJYkyPy0t5OmJ7WaLA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e8290b2b7b294f7a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MILSIrcf7tpB2PoFtfguyeRmqZnyCYVr8SS8VrLbgImjp28kFny3tfApv4uK4YV1356Tb464WUSRwI9VL94GGTxPZZQgCXqoRPbKF7caBlILnf9hOrWeyNm/VErGG9r9zeSH47KlPl7eIPnWIrWZeXqhUGjRYxcOCM8MR66jGZ1mJ48w/GfaGghgbIgMvyN0I0ijIz5tW9LzLjtXaRmgo/DflRGYQX4+F385yn/PLUO7zEjVmUrndk/ZjwWnv3QPAjinNd7vc/Pvley6x6qQ5AbBw7CXFKv7fGASI7XjWSPTFUAM5+RDKP4J0XOaJocFpRlLYAqZkG01p30tDc3HCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JZGC6pHTsP477l9M581r6eVumdLkpOQUid+cz7DYd88=;
 b=mMoHzrWkgyKJd0O9A67AIofNactBW4E35E0B4m6UbqZu/y3hqgXxDuIzlV1zFQpu8Ehk/ZqmoxdapGs/9b+DQ1ZHVQyaM17T6tOPv2r/XMFahXDCtjd7sU09Mv9zcuBcyQ/KsoWWwN9F0pnB6BrF9VjUpsaklt0CGKdjymk8Wwib0r661LD1zBylqci75F+w2x1kXWfpGuCxytRVUw+SelyRAMg8FTuybpnqOgxjVOu8rrV/J8g7OTrEeZ+NEUxXS9J8gMxJN2vAF4ojDgv0iPxwqbBG8XbUVGVdVnuoM8z464vQXA08ew27D59OIZdB8PNu2Tc2DKUijc/osKpCzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86 directory.
Thread-Topic: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86 directory.
Thread-Index: AQHWvBPDynVNQDWjWUaLWbo/nKPcx6nLh3iAgACOQIA=
Date: Tue, 17 Nov 2020 09:49:28 +0000
Message-ID: <85965899-870B-4D50-BB62-BDB3EB17F76B@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2011161719300.20906@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011161719300.20906@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 13e42912-3235-43e8-89e5-08d88ade1e9a
x-ms-traffictypediagnostic: DB6PR0801MB1992:|AM6PR08MB4519:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB451968777ADDE9D0126FDA55FCE20@AM6PR08MB4519.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 bRnG+omTswVBYUcOCE8IzBWA9ukTZDgvlTSsSlv5BU8NjfmWEQiWCm6kJ8I5J/UVF3tf9Vb4+vcpZJxi7yyWKdrLWFVAUwamD+ra/LHg35Ej2ecbyfC4tT0Xhj8PuD2vTobbjUYBgJlPh6k0Yg1IIQ54jYsIerE62a1+QJ98LeXP8kVYoHQI6/vcuMJKFTQP9hWL97s2T4dqghqdluVd0fObQLE7ZO0JaKJ+WkJrJq4BMSpc5DkmFcNk8GBhtPjVhBKCb83XfF3olkTphfm+nZmo9kkerisup5GX+f+NFrfN8YsH3ZXvzTwZJV2OP/n6ydDoixbXdLVovSQKvBbk4p5CjZNFYAONceNiD1joBMkd5Z+/8rESIL/xBrVCcHDuZ530K8OxOIjDwSANdMW9VQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(366004)(396003)(376002)(136003)(71200400001)(8676002)(91956017)(478600001)(53546011)(64756008)(66446008)(66476007)(76116006)(6486002)(186003)(26005)(6506007)(66946007)(66556008)(36756003)(2616005)(86362001)(83380400001)(316002)(33656002)(4326008)(6916009)(54906003)(6512007)(8936002)(5660300002)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 Ilj/jJ03WV6f8UD8SAtEtV7Wbd08MGfLHdal9DT5wZQOFvaLMvo+P+8b3xQz02WXk4rx42z7AVP4A2Ha87eg92LZQnuE3J5nl7yNUXnXi3Sp8b9pfhiYzx0mfYTJHMhntDTwvNlqv5Rh12C+K2+1h5iFDJ6mPVGHr1lzoidRzsT69S7VemAfWFEO4E6ddT1SFnlaX1naGATjc8pCY2ERk3Fblg16t377tNMnLHgGwU1KLpD6jDi1BKNpQ1BRWIBWLi4r7bhceuzJHFbox9/gcM5Oc/zggCjgumfa4XWyXZtoUTiyZAXpoJ6mwJf2ovEanLWs9G1+iIRi2UYpgHs9083XajoVfT4QQrN7kY0Y8HKELi/XdUwz8I0H6G8QzPIfEW6ekagYNV6/IHo7s5Ae0DYknAnP3VMTsC9+EZQFQ4K3P4bfqUVjRLDKYmV6cTiBv1c/723e/7u1+XvnF4xa5S8QGQr7sPimSvQJEmABPkRzeiJnKfQN1pbBYjiB+HV12n7CyFPKbWWBOg1/zy9tsm7WoXQx1cjXo1zpywNz6pZ2lHxLXMQcv9AwCRXu1mvx1LuBmLrxmKRiOhieGmgVPP/hNrCsDAv6NFiRip581JpWgLjacDNRp2E3k4izeWAY20c5TdRQmd6IbDfv6bQOD0/kgM8S/PkwTzfsWaxkKmpwZe1RZcsiMVOSH6qveOgF9s0VeOIc+yJ2I7eLgsGBjV2ZKtX4RHjOIs3CVVlF4il3cwG5iWV/5XDd0cxGFY7pqAii1ALYly2q98beip38r83xr81dyx06a5ZPa34VGe20/YgNymovfhn/tqdRqZnTb1BNbeZfrOf5TcweR0MihTAXD+A5Qxyy4fQ5sQ1LKvUzFECjiBLrPJez0Fxw4ysLAN78OBKX5wkr5bvK3s+CRQ==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D4B5312B999B0D4D9B19EC0522D23C34@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1992
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	09545fac-955c-4ab5-c8d7-08d88ade1395
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	idn7svn+tjCe74NI0uLkIpDpmqBwdzyHrVooWS03EVI1y6ZWAbsV48cC3YmQHK00JQ2mBQklBg/U07FwD9CLd5ymYDYkadzWZkOU4S8hLjpvIlOjuTMhUEk+vQLYba5osPjmQiBG58gXHWFneiPhmytpxZCkgsyPLlpTEGz70Tq5jft+n52Fw71I73Xe8d+hM+N89QaYJfY1lERw/IeXkOooXEejtcVHaGx8LdtuYmU2tktMEHfsBposXsvM7DxD9HOGNScHkZoGvaXQ1JkYX691H9i2b5Kfc1G9eBtYDJQZiAAUzVqXciA5AOLr2gmRMLHGum4x/KXSm9p1+MwRctrgI9tvQjSEPKPCG85PMpfJveabB4ULNNwXYZbhgB5O5Ua3iVAVkYVDQKyIz31rN3VsuCp9+oWhmWXBxQ20BS0+Y/nEOgDthArR4C5rOklehUC7uXDPq7QaJQYLsLWB67wEduzwSVBRrWumXj5Q+Cw=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(39860400002)(396003)(46966005)(26005)(4326008)(336012)(2616005)(8676002)(70586007)(2906002)(478600001)(53546011)(54906003)(8936002)(316002)(6506007)(36906005)(6862004)(6512007)(186003)(86362001)(70206006)(36756003)(5660300002)(33656002)(47076004)(82310400003)(81166007)(6486002)(82740400003)(356005)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 09:49:47.1973
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 13e42912-3235-43e8-89e5-08d88ade1e9a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4519

Hello Stefano,

> On 17 Nov 2020, at 1:20 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Mon, 16 Nov 2020, Rahul Singh wrote:
>> The passthrough/pci.c file is common to all architectures, but there is
>> x86-specific code in this file.
>> 
>> Move the x86-specific code to the drivers/passthrough/io.c file to avoid
>> compilation errors for other architectures.
>> 
>> As drivers/passthrough/io.c is compiled only for x86, move it to the
>> x86 directory and rename it to hvm.c.
>> 
>> No functional change.
>> 
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> 
> This patch breaks the x86 build if you disable CONFIG_HVM:
> 
> prelink-efi.o: In function `pci_release_devices':
> /local/repos/xen-upstream/xen/drivers/passthrough/pci.c:900: undefined reference to `arch_pci_clean_pirqs'
> Makefile:209: recipe for target '/local/repos/xen-upstream/xen/xen.efi' failed
> 
> 

Thanks for reviewing the code.
I will fix the build and send a new version of the patch.

Regards,
Rahul

> 
>> ---
>> 
>> Changes in v3:
>> - fixed typo
>> - As per suggestion move the code to the file io.c and move that file to
>>  x86 directory and rename it hvm.c
>> 
>> ---
>> xen/drivers/passthrough/Makefile            |  3 -
>> xen/drivers/passthrough/pci.c               | 78 +--------------------
>> xen/drivers/passthrough/x86/Makefile        |  1 +
>> xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++
>> xen/drivers/passthrough/x86/iommu.c         |  7 ++
>> xen/include/xen/pci.h                       |  2 +
>> 6 files changed, 77 insertions(+), 80 deletions(-)
>> rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)
>> 
>> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
>> index e973e16c74..cc646612c7 100644
>> --- a/xen/drivers/passthrough/Makefile
>> +++ b/xen/drivers/passthrough/Makefile
>> @@ -6,6 +6,3 @@ obj-$(CONFIG_ARM) += arm/
>> obj-y += iommu.o
>> obj-$(CONFIG_HAS_PCI) += pci.o
>> obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
>> -
>> -x86-$(CONFIG_HVM) := io.o
>> -obj-$(CONFIG_X86) += $(x86-y)
>> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
>> index 51e584127e..e8a28df126 100644
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -14,9 +14,6 @@
>>  * this program; If not, see <http://www.gnu.org/licenses/>.
>>  */
>> 
>> -#include <xen/sched.h>
>> -#include <xen/pci.h>
>> -#include <xen/pci_regs.h>
>> #include <xen/pci_ids.h>
>> #include <xen/list.h>
>> #include <xen/prefetch.h>
>> @@ -24,7 +21,6 @@
>> #include <xen/irq.h>
>> #include <xen/param.h>
>> #include <xen/vm_event.h>
>> -#include <asm/hvm/irq.h>
>> #include <xen/delay.h>
>> #include <xen/keyhandler.h>
>> #include <xen/event.h>
>> @@ -842,71 +838,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>>     return ret;
>> }
>> 
>> -static int pci_clean_dpci_irq(struct domain *d,
>> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
>> -{
>> -    struct dev_intx_gsi_link *digl, *tmp;
>> -
>> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
>> -
>> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
>> -        kill_timer(&pirq_dpci->timer);
>> -
>> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
>> -    {
>> -        list_del(&digl->list);
>> -        xfree(digl);
>> -    }
>> -
>> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
>> -
>> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
>> -        return 0;
>> -
>> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
>> -
>> -    return -ERESTART;
>> -}
>> -
>> -static int pci_clean_dpci_irqs(struct domain *d)
>> -{
>> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
>> -
>> -    if ( !is_iommu_enabled(d) )
>> -        return 0;
>> -
>> -    if ( !is_hvm_domain(d) )
>> -        return 0;
>> -
>> -    spin_lock(&d->event_lock);
>> -    hvm_irq_dpci = domain_get_irq_dpci(d);
>> -    if ( hvm_irq_dpci != NULL )
>> -    {
>> -        int ret = 0;
>> -
>> -        if ( hvm_irq_dpci->pending_pirq_dpci )
>> -        {
>> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
>> -                 ret = -ERESTART;
>> -            else
>> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
>> -        }
>> -
>> -        if ( !ret )
>> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
>> -        if ( ret )
>> -        {
>> -            spin_unlock(&d->event_lock);
>> -            return ret;
>> -        }
>> -
>> -        hvm_domain_irq(d)->dpci = NULL;
>> -        free_hvm_irq_dpci(hvm_irq_dpci);
>> -    }
>> -    spin_unlock(&d->event_lock);
>> -    return 0;
>> -}
>> -
>> /* Caller should hold the pcidevs_lock */
>> static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>>                            uint8_t devfn)
>> @@ -966,7 +897,7 @@ int pci_release_devices(struct domain *d)
>>     int ret;
>> 
>>     pcidevs_lock();
>> -    ret = pci_clean_dpci_irqs(d);
>> +    ret = arch_pci_clean_pirqs(d);
>>     if ( ret )
>>     {
>>         pcidevs_unlock();
>> @@ -1370,13 +1301,6 @@ static int __init setup_dump_pcidevs(void)
>> }
>> __initcall(setup_dump_pcidevs);
>> 
>> -int iommu_update_ire_from_msi(
>> -    struct msi_desc *msi_desc, struct msi_msg *msg)
>> -{
>> -    return iommu_intremap
>> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
>> -}
>> -
>> static int iommu_add_device(struct pci_dev *pdev)
>> {
>>     const struct domain_iommu *hd;
>> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
>> index a70cf9460d..69284a5d19 100644
>> --- a/xen/drivers/passthrough/x86/Makefile
>> +++ b/xen/drivers/passthrough/x86/Makefile
>> @@ -1,2 +1,3 @@
>> obj-y +=3D ats.o
>> obj-y +=3D iommu.o
>> +obj-$(CONFIG_HVM) +=3D hvm.o
>> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/x86/=
hvm.c
>> similarity index 95%
>> rename from xen/drivers/passthrough/io.c
>> rename to xen/drivers/passthrough/x86/hvm.c
>> index 6b1305a3e5..41cfa2e200 100644
>> --- a/xen/drivers/passthrough/io.c
>> +++ b/xen/drivers/passthrough/x86/hvm.c
>> @@ -1036,6 +1036,72 @@ unlock:
>>     spin_unlock(&d->event_lock);
>> }
>>=20
>> +static int pci_clean_dpci_irq(struct domain *d,
>> +                              struct hvm_pirq_dpci *pirq_dpci, void *ar=
g)
>> +{
>> +    struct dev_intx_gsi_link *digl, *tmp;
>> +
>> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
>> +
>> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
>> +        kill_timer(&pirq_dpci->timer);
>> +
>> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
>> +    {
>> +        list_del(&digl->list);
>> +        xfree(digl);
>> +    }
>> +
>> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
>> +
>> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
>> +        return 0;
>> +
>> +    domain_get_irq_dpci(d)->pending_pirq_dpci =3D pirq_dpci;
>> +
>> +    return -ERESTART;
>> +}
>> +
>> +int arch_pci_clean_pirqs(struct domain *d)
>> +{
>> +    struct hvm_irq_dpci *hvm_irq_dpci =3D NULL;
>> +
>> +    if ( !is_iommu_enabled(d) )
>> +        return 0;
>> +
>> +    if ( !is_hvm_domain(d) )
>> +        return 0;
>> +
>> +    spin_lock(&d->event_lock);
>> +    hvm_irq_dpci =3D domain_get_irq_dpci(d);
>> +    if ( hvm_irq_dpci !=3D NULL )
>> +    {
>> +        int ret =3D 0;
>> +
>> +        if ( hvm_irq_dpci->pending_pirq_dpci )
>> +        {
>> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci=
) )
>> +                 ret =3D -ERESTART;
>> +            else
>> +                 hvm_irq_dpci->pending_pirq_dpci =3D NULL;
>> +        }
>> +
>> +        if ( !ret )
>> +            ret =3D pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
>> +        if ( ret )
>> +        {
>> +            spin_unlock(&d->event_lock);
>> +            return ret;
>> +        }
>> +
>> +        hvm_domain_irq(d)->dpci =3D NULL;
>> +        free_hvm_irq_dpci(hvm_irq_dpci);
>> +    }
>> +    spin_unlock(&d->event_lock);
>> +
>> +    return 0;
>> +}
>> +
>> /*
>>  * Note: 'pt_pirq_softirq_reset' can clear the STATE_SCHED before we get=
 to
>>  * doing it. If that is the case we let 'pt_pirq_softirq_reset' do ref-c=
ounting.
>> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrou=
gh/x86/iommu.c
>> index f17b1820f4..875e67b53b 100644
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain=
 *d)
>>     return pg;
>> }
>>=20
>> +int iommu_update_ire_from_msi(
>> +    struct msi_desc *msi_desc, struct msi_msg *msg)
>> +{
>> +    return iommu_intremap
>> +           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg)=
 : 0;
>> +}
>> +
>> /*
>>  * Local variables:
>>  * mode: C
>> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
>> index 20a54a5bb4..78d83afe64 100644
>> --- a/xen/include/xen/pci.h
>> +++ b/xen/include/xen/pci.h
>> @@ -208,4 +208,6 @@ int msixtbl_pt_register(struct domain *, struct pirq=
 *, uint64_t gtable);
>> void msixtbl_pt_unregister(struct domain *, struct pirq *);
>> void msixtbl_pt_cleanup(struct domain *d);
>>=20
>> +int arch_pci_clean_pirqs(struct domain *d);
>> +
>> #endif /* __XEN_PCI_H__ */
>> --=20
>> 2.17.1
>>=20



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 09:51:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 09:51:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28857.57998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexdl-0006iC-4E; Tue, 17 Nov 2020 09:51:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28857.57998; Tue, 17 Nov 2020 09:51:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexdl-0006i5-0q; Tue, 17 Nov 2020 09:51:09 +0000
Received: by outflank-mailman (input) for mailman id 28857;
 Tue, 17 Nov 2020 09:51:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kexdi-0006hz-WE
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 09:51:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0b0ef58-6448-4dbc-87a6-741c930d1eeb;
 Tue, 17 Nov 2020 09:51:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kexdf-0001sX-9t; Tue, 17 Nov 2020 09:51:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kexde-00034B-Vd; Tue, 17 Nov 2020 09:51:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kexde-00080w-V9; Tue, 17 Nov 2020 09:51:02 +0000
X-Inumbo-ID: d0b0ef58-6448-4dbc-87a6-741c930d1eeb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bLghYp9rRwW0cx2rwSV9yQNI5IbKn7WHXKROfQz8I+k=; b=NVXrgAnEa8lYM1hry5FD6WR9zl
	xgpS9+7dp4JA1XMIBkWDnG8KwV/U6qKc/CM3IzRBOT+0HLn6n+/xGwkYjgcJ6aVhv51X8J75DSA7+
	zDq7Z/F+BCq3UqPRysGOzW000LajabYk43oxTwt+aBl23KdJlArJmSuaUWJkgjhXks6M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156829-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156829: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=29d59baa3907277782e9f26ecaa99704ff57e3f1
X-Osstest-Versions-That:
    ovmf=124b3f9289f11479d9f042ea6e39bea2b1d5cee3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 09:51:02 +0000

flight 156829 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156829/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 29d59baa3907277782e9f26ecaa99704ff57e3f1
baseline version:
 ovmf                 124b3f9289f11479d9f042ea6e39bea2b1d5cee3

Last test of basis   156826  2020-11-17 01:10:39 Z    0 days
Testing same since   156829  2020-11-17 04:47:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   124b3f9289..29d59baa39  29d59baa3907277782e9f26ecaa99704ff57e3f1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 10:00:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 10:00:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28870.58013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexms-0007pI-2c; Tue, 17 Nov 2020 10:00:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28870.58013; Tue, 17 Nov 2020 10:00:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kexmr-0007pB-VZ; Tue, 17 Nov 2020 10:00:33 +0000
Received: by outflank-mailman (input) for mailman id 28870;
 Tue, 17 Nov 2020 10:00:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t15h=EX=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kexmo-0007p1-Kh
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 10:00:33 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91ffc2a4-b01e-4837-81bf-135453d75b4a;
 Tue, 17 Nov 2020 10:00:27 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kexmd-0003xs-EN; Tue, 17 Nov 2020 10:00:19 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id B795C3012DC;
 Tue, 17 Nov 2020 11:00:17 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 9ECDB20116732; Tue, 17 Nov 2020 11:00:17 +0100 (CET)
X-Inumbo-ID: 91ffc2a4-b01e-4837-81bf-135453d75b4a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=sJWlIJf2Pt4tnjPJDlQRhe8ivTuZU0+s+pSU+BozkHA=; b=LF6k8N4mzA0pTMgRv3lmY/Ey4q
	AGIrti2W6EZ9Uy/Qt5md9YKM/pY5rkk5ivFVLGW3dqReurUueIwJeA9izUsXi1IwnoEn/5y0GppPG
	dzynQ78PZNzlaQngN3CCD9ZEqH36H8rzlB3fG4cSR5sooxnvpgmMM2IHiye6QUEfpaGcOW9LKrwLr
	PK7fJHkoOHmdFOfgQOr8zPy+AWok1Gew/v/l3iZZ9Gp8hRJFAj8tgf5viV2D+HMFN4NaC3dQfea7h
	SHrMi4DsoYgQcvM7zhfetHXMfHZGeyMQQdzV0h6OLNb6RswkPQOpsA4GGQK7r2ucGXhQ1CuPh260X
	49H5mmIA==;
Date: Tue, 17 Nov 2020 11:00:17 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andy Lutomirski <luto@kernel.org>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>
Subject: Re: [PATCH 0/4] x86/xen: do some paravirt cleanup
Message-ID: <20201117100017.GB3121406@hirez.programming.kicks-ass.net>
References: <20201116152301.24558-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201116152301.24558-1-jgross@suse.com>

On Mon, Nov 16, 2020 at 04:22:57PM +0100, Juergen Gross wrote:
> Eliminate the usergs_sysret64 paravirt call completely and switch
> the swapgs one to use ALTERNATIVE instead. This requires to fix the
> IST based exception entries for Xen PV to use the same mechanism as
> NMI and debug exception already do.
> 
> Juergen Gross (4):
>   x86/xen: use specific Xen pv interrupt entry for MCE
>   x86/xen: use specific Xen pv interrupt entry for DF
>   x86/pv: switch SWAPGS to ALTERNATIVE
>   x86/xen: drop USERGS_SYSRET64 paravirt call

Looks 'sane' :-))

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 10:18:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 10:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28881.58024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1key47-0000Yz-K7; Tue, 17 Nov 2020 10:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28881.58024; Tue, 17 Nov 2020 10:18:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1key47-0000Ys-H8; Tue, 17 Nov 2020 10:18:23 +0000
Received: by outflank-mailman (input) for mailman id 28881;
 Tue, 17 Nov 2020 10:18:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUfM=EX=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1key45-0000Yn-TV
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 10:18:21 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e3471f0-0dcd-40bc-80a1-8a85e250acb2;
 Tue, 17 Nov 2020 10:18:20 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AHAIE8R013559
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 17 Nov 2020 11:18:15 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 44B1F2E9CC6; Tue, 17 Nov 2020 11:18:09 +0100 (MET)
X-Inumbo-ID: 9e3471f0-0dcd-40bc-80a1-8a85e250acb2
Date: Tue, 17 Nov 2020 11:18:09 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: netbsd PVH dom0: xen clock event stops
Message-ID: <20201117101809.GE1405@antioche.eu.org>
References: <20201115174938.GA3562@antioche.eu.org>
 <20201115182416.GA30231@Air-de-Roger>
 <20201116182211.GS840@antioche.eu.org>
 <20201117085448.2haekgpbcspwmqja@Air-de-Roger>
 <20201117090733.GC1405@antioche.eu.org>
 <20201117094534.z3usx6tsydtle53o@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117094534.z3usx6tsydtle53o@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 17 Nov 2020 11:18:15 +0100 (MET)

On Tue, Nov 17, 2020 at 10:45:34AM +0100, Roger Pau Monné wrote:
> On Tue, Nov 17, 2020 at 10:07:33AM +0100, Manuel Bouyer wrote:
> > On Tue, Nov 17, 2020 at 10:02:04AM +0100, Roger Pau Monné wrote:
> > > Great! So all interrupts are working as expected now?
> > 
> > No, I'm back at the problem where the PERC raid controller times out on
> > commands. I'm cleaning up my sources and will try to get more data
> > about this problem.
> 
> OK, the output of the 'M' debug key might be helpful in that case to
> see if the MSI-X entries are masked (IIRC you said this controller was
> using MSIX).

No, this one is ioapic

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 10:50:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 10:50:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28899.58037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keyZY-0004CB-QJ; Tue, 17 Nov 2020 10:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28899.58037; Tue, 17 Nov 2020 10:50:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keyZY-0004C4-Mw; Tue, 17 Nov 2020 10:50:52 +0000
Received: by outflank-mailman (input) for mailman id 28899;
 Tue, 17 Nov 2020 10:50:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1keyZX-0004Bz-IN
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 10:50:51 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a024ab34-5031-46a9-a2cb-c96e8e4b8386;
 Tue, 17 Nov 2020 10:50:49 +0000 (UTC)
X-Inumbo-ID: a024ab34-5031-46a9-a2cb-c96e8e4b8386
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605610249;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=tlzdjdDywLVOnuvIHuRCFG+th2PC85iGGj0Q+vgpLEk=;
  b=W+HvRgdhUelnYYrpyXDNJTev+u0tLdzL6a3hFkrxRtlg2izP7Q7OnzvS
   JvduflyBTiZEzG52G62Qrmotx/1UPWvfS1yXrLNKh0crri3kfPVq2yVAZ
   QTtk6U70zmhGS7P6y0kSp0YIUdp3B+ClUOYia7O7tZXMumVbIfmjx9nTW
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: FKjlv3YfwYSrQSW08yte8xOBAMsixhHcsyDowvcPWJu/Q/nqEwHf9mdtnv5r9Q0+G5AGN+oh37
 GujbNScSqMfq8B1mUwIAzENtVbIwPhtfhV22Anzlp8lAxTsgylEzX0niV3EHqjOZEVg5r/CzW9
 /VLIPXTIp/ZpFOuo9StMj3+/fp7Ro6Ey+bpy7MlNrc/mYZBReNCZoTKJ68IplgwAEiFrbEsw9K
 zyh9Uzd/2GGYD4pJevKB8DLpePmZ/01hM+cBVLsrWG3tSa2aHUunWb181dSKFFK5aNG+rsSiX5
 tIo=
X-SBRS: None
X-MesageID: 32461328
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="32461328"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T9VDXH1Zj1T0k4TcTcQt7mLiKKtl+DnaA3afWT5JwoDmNp16lh/lqyVGsRCzCR73uFPS5VSNZQFV7Il7VuIgXABO9cXOiq5iH5hVceRf1D8lVuSV9tQOHKP3eq3hXSjXnT46x1H9cu1xj5s+SBAhgf+6tA7zCjImSGboWhEPA+9qH9TfNx5AQH6hlx6uakB+22Auku5muVuaHCGg2YI6xbF+Aref1i/i+Y0UxQuq3KaDV7xaEQV6dRH7DAFJLtkdDFU62avwVJeiRNLOR2KuddWirWotjQUXrGMByB14azjIDFbX8qaoms6v98M9r/EeGX1/aqqQX5d/wCKrgp1NNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ubiuhm/ubfrsk0kdCcuBS9nyUq95olb7mUVepFOwJPo=;
 b=AQ7yQDiBLde+L+QvHzec4kTpbUcx571/KYHy6qNukOJIoMC3+h7Ygy0UN/t0zi9TmWBdnNwbMZrZmqeSohbLYOw0tLvFBpVGMcr2Pf0hLdMAE4+jjKWsWTd5FQSiGCRkKSYiu01JXwfUAVQMUSFFIEHSS1A58uE1BplkUjYxmFNMy0EC7+gRCNOZGFQDpXV+VDVBOIL3BbAjmbFxdwPtHFEH70x6ZCcf1OaAdzArXc6zem+UCzQdr0CYKEWO08395gIvk+A8W2g36LUX09y5AMoMO6HFOtS5UU1BkYC5LzvihNRnucyugnT1jdtw6zWY5Xhsql7VU5rmmVAqD36zvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ubiuhm/ubfrsk0kdCcuBS9nyUq95olb7mUVepFOwJPo=;
 b=I2Z+TUpUh7N80e6Bq5DGphjK6l9SdVmOz9bUKRx6S7lOSWexExcx5TmHDBt5eqU2NlIjYXeaUokAyDXi4IfCb0kDlMjiJtQUlXrydKdaj8AYrXQylnaefEInh4/so+v4CrVvMQtufsw3OM2+luSdO5mXsILJjHsrS00Lu3She24=
Date: Tue, 17 Nov 2020 11:50:39 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Cheyenne Wills <cheyenne.wills@gmail.com>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
Message-ID: <20201117105039.mpfrnwvojpmfaopx@Air-de-Roger>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
X-ClientProxiedBy: MR2P264CA0089.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 15b17550-1f14-4944-2e70-08d88ae6a321
X-MS-TrafficTypeDiagnostic: DM6PR03MB4394:
X-Microsoft-Antispam-PRVS: <DM6PR03MB439493DFC5DEB40D8B9626958FE20@DM6PR03MB4394.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 15b17550-1f14-4944-2e70-08d88ae6a321
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 10:50:45.9147
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xtTvozFX/0cnb3DnNo8tXWLc5YBT6XC+k6NAa3w17qCU2+ENrCoNy0N9ARemHafmISpHsBgkiPrceFHuSCHeRg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4394
X-OriginatorOrg: citrix.com

On Mon, Nov 16, 2020 at 02:57:14PM -0700, Cheyenne Wills wrote:
> Running Xen with XSA-351 is causing Solaris 11 systems to panic during
> boot.  The panic screen is showing the failure to be coming from
> "unix:rdmsr".  The panic occurs with existing guests (booting off a disk)
> and when booting from an install ISO image.
> 
> I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
> requested that I report it here.
> 
> This was failing on a Xen 4.13 and a Xen 4.14 system built via Gentoo.
> 
> I understand that ultimately this is a bug in Solaris.  However, it does
> impact existing guests that were functional before applying the XSA-351
> security patches.

I seem to have some issues getting the Solaris 11.4 ISO to boot, which I
think are unrelated to the MSR changes. I get what seems to be a panic
just after the Copyright message, but no reason for the panic is printed
at all. The output just reads (transcript):

SunOS Release 5.11 Version 11.4.0.15.0 64-bit
Copyright (c) 1983, 2018, Oracle and/or its affiliates. All rights reserved.
System would not fast reboot because:
 newkernel not valid
 fastreboot_onpanic is not set
 ...

The config file I'm using is:

memory=1024
vcpus=4
name="solaris"

builder="hvm"

disk = [
  'format=raw,vdev=hdc,access=ro,devtype=cdrom,target=/root/sol-11_4-text-x86.iso',
  'format=raw,vdev=hda,access=rw,target=/root/solaris.img',
]

vif = [
 'mac=00:16:3E:74:3d:88,bridge=bridge0',
]

vnc=1
vnclisten="0.0.0.0"

serial='pty'

on_crash="preserve"

Is there anything I'm missing?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 10:55:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 10:55:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28909.58049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keye9-0004PG-CE; Tue, 17 Nov 2020 10:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28909.58049; Tue, 17 Nov 2020 10:55:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keye9-0004P9-8v; Tue, 17 Nov 2020 10:55:37 +0000
Received: by outflank-mailman (input) for mailman id 28909;
 Tue, 17 Nov 2020 10:55:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1keye8-0004P4-BP
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 10:55:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1619ef4f-0b0f-4656-9671-5d22267930e3;
 Tue, 17 Nov 2020 10:55:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0979AACAC;
 Tue, 17 Nov 2020 10:55:34 +0000 (UTC)
X-Inumbo-ID: 1619ef4f-0b0f-4656-9671-5d22267930e3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605610534; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Tfa9QtYIAf/6uVkF4jzi1W9iwf4qW8zRvyaGVxS+W30=;
	b=Wu2LVXOzGH0c9ILBZIU+dEhgWPZE5eQBuV8HhRJ+pk0WqiFZNjTqCbTcaVl1jQ/upWeIxC
	8ICwixC7KYotDA5gTskCv2L0zPYFw4YpgnPG8mm+W1QWlYa2zxOuu4lPDZw1czWOeNMip8
	Zmp/EGpQhFEsQrOsxuq0wws6WsT/uBU=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
Date: Tue, 17 Nov 2020 11:55:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.11.2020 13:25, Rahul Singh wrote:
> The NS16550 driver has PCI support gated by the HAS_PCI flag. When
> HAS_PCI is enabled for ARM, a compilation error is observed because
> ARM platforms do not have full PCI support available.

While you've extended the sentence, it remains unclear to me what
compilation error results here. I already requested such
clarification for v2.

> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -4,6 +4,10 @@ config HAS_NS16550
>  	help
>  	  This selects the 16550-series UART support. For most systems, say Y.
>  
> +config HAS_NS16550_PCI
> +	def_bool y
> +	depends on X86 && HAS_NS16550 && HAS_PCI

Looking at this again (in particular at all the #ifdef changes in
the actual source file), I wonder whether an approach with less
code churn and without such an extra Kconfig setting (with, as
said, a bogus dependency on x86) couldn't be found. For example,
how about ...

> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
>  #include <xen/timer.h>
>  #include <xen/serial.h>
>  #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>

... #undef-ining CONFIG_HAS_PCI at a suitable position in this
file (e.g. after all #include-s, to make sure all structure
layouts remain correct)? This would then be far easier to revert
down the road, and would confine the oddity to a single file
(and there a single place) in the code base.
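The suggested approach can be illustrated with a self-contained sketch (hypothetical, not the actual ns16550.c code): mimic the Kconfig-generated define, include everything that needs to see it, then neutralize it for the remainder of this one file.

```c
#include <string.h>

#define CONFIG_HAS_PCI 1   /* stands in for the Kconfig-generated define */

/* ... all #include-s that rely on CONFIG_HAS_PCI would go here, so any
 * structure layouts they define stay consistent with other files ... */

#undef CONFIG_HAS_PCI      /* per-file override, trivial to revert later */

static const char *pci_support(void)
{
#ifdef CONFIG_HAS_PCI
    return "pci";
#else
    return "no-pci";       /* the rest of this file compiles the non-PCI paths */
#endif
}
```

The key point is that the #undef sits after all includes, so only this translation unit's own conditionals are affected.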

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 11:04:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 11:04:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28917.58061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keymE-0005Rs-7A; Tue, 17 Nov 2020 11:03:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28917.58061; Tue, 17 Nov 2020 11:03:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keymE-0005Rl-48; Tue, 17 Nov 2020 11:03:58 +0000
Received: by outflank-mailman (input) for mailman id 28917;
 Tue, 17 Nov 2020 11:03:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1keymC-0005Rg-44
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 11:03:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63daa348-7cdb-475a-a71f-c8185187e1a0;
 Tue, 17 Nov 2020 11:03:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 60F4AAC2E;
 Tue, 17 Nov 2020 11:03:54 +0000 (UTC)
X-Inumbo-ID: 63daa348-7cdb-475a-a71f-c8185187e1a0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605611034; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mXevA6p3/+3NhtQoFA/yMlemtZVTpwrq9UFhquhsaic=;
	b=ZkHiB4wC2yxi3WCDi2qjF9CCr1YlskztBOgSJbeZh55os4L+69tT2ArdIj0UA4jVMlJJVB
	sbfI62y1aEjWVmK0bbydrGoyxF8vHX0AkQrrWJoYDQpkNbsH1qdFp2hAQviVfGMbRA3aE1
	NhXH6QKxVoN8C/h2U7aX4YwkgHn8KmI=
Subject: Re: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86 directory.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1605527997.git.rahul.singh@arm.com>
 <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6e6d884-93c0-f9ac-c2b4-b264c5a72db1@suse.com>
Date: Tue, 17 Nov 2020 12:03:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.11.2020 13:25, Rahul Singh wrote:
> The passthrough/pci.c file is common to all architectures, but it
> contains x86-specific code.

In what way is ...

> @@ -1370,13 +1301,6 @@ static int __init setup_dump_pcidevs(void)
>  }
>  __initcall(setup_dump_pcidevs);
>  
> -int iommu_update_ire_from_msi(
> -    struct msi_desc *msi_desc, struct msi_msg *msg)
> -{
> -    return iommu_intremap
> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> -}

... this code x86-specific? The hook being called lives in a
#ifdef CONFIG_HAS_PCI section, and MSI is a general PCI
sub-feature. IOW, if this is another workaround, it should be
labeled as such (if there's really no other way to address
whatever issue there is), which in turn likely means it wants
to be in a separate patch.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 11:12:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 11:12:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28926.58073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keyuq-0006RG-4P; Tue, 17 Nov 2020 11:12:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28926.58073; Tue, 17 Nov 2020 11:12:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1keyuq-0006R9-0o; Tue, 17 Nov 2020 11:12:52 +0000
Received: by outflank-mailman (input) for mailman id 28926;
 Tue, 17 Nov 2020 11:12:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1keyuo-0006R4-AB
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 11:12:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f085d7cb-1b1f-490f-b497-af8410ab3946;
 Tue, 17 Nov 2020 11:12:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61645AC2D;
 Tue, 17 Nov 2020 11:12:48 +0000 (UTC)
X-Inumbo-ID: f085d7cb-1b1f-490f-b497-af8410ab3946
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605611568; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=S7BvEANOG94S0TBLRcF+255RyHPqVKV4Max7BFcpsB0=;
	b=YFbK06JFEluFgGGg/OZSZxhEkzFMXyKKkrknNUk1J9ThP1++kG34p31isubQVLwzidbHQ8
	8HdDOasNkzmb2R8e6qnF1ox/o3ufdXo4wQVw4VBQTRfNA4ARxTFts92SI2Hy/4LBVqEWFU
	Ee2aQnkQjIXpQHL49pKJ9ZMNy5gg95k=
Subject: Re: [PATCH v3 3/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1605527997.git.rahul.singh@arm.com>
 <efa0c2578a6aabb642b8f38257cf96a983944301.1605527997.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fe73928e-bf95-234d-c181-4904a13ad0a1@suse.com>
Date: Tue, 17 Nov 2020 12:12:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <efa0c2578a6aabb642b8f38257cf96a983944301.1605527997.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.11.2020 13:25, Rahul Singh wrote:
> If mem-sharing, mem-paging, or log-dirty functionality is not enabled
> for a non-x86 architecture when HAS_PCI is enabled, the compiler will
> throw an error.
> 
> Move code to x86 specific directory to fix compilation error.

Perhaps rather "file" than "directory"?

> Also, modify the code to use likely() in place of unlikely() for each
> condition to make the code more optimized.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

In principle I'm okay with this now, but there continue to be a few
nits:

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -23,6 +23,7 @@
>  #include <asm/hvm/io.h>
>  #include <asm/io_apic.h>
>  #include <asm/setup.h>
> +#include <xen/vm_event.h>

Please insert this alongside the other "#include <xen/...>" higher up.

> @@ -315,6 +316,17 @@ int iommu_update_ire_from_msi(
>             ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
>  }
>  
> +bool arch_iommu_use_permitted(const struct domain *d)
> +{
> +    /*
> +     * Prevent device assign if mem paging, mem sharing or log-dirty
> +     * have been enabled for this domain.
> +     */
> +    return d == dom_io ||
> +           (likely(!mem_sharing_enabled(d)) &&
> +            likely(!vm_event_check_ring(d->vm_event_paging)) &&
> +            likely(!p2m_get_hostp2m(d)->global_logdirty));
> +}
>  /*
>   * Local variables:
>   * mode: C

Please don't alter stylistic aspects like this trailing comment
being preceded by a blank line.
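For readers following the likely()/unlikely() change in the quoted hunk above: the two spellings express the same branch prediction once the condition is inverted. A minimal stand-alone sketch (these macro definitions are assumed, mirroring the common __builtin_expect idiom rather than quoting Xen's headers):

```c
/* Assumed stand-ins for the hints used in the quoted code: */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* likely(!cond) and unlikely(cond) predict the same outcome, so swapping
 * one for the other is purely a compiler hint, not a functional change. */
static int use_permitted(int restricted)
{
    if ( likely(!restricted) )
        return 1;   /* common case: no restriction enabled */
    return 0;
}
```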

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>  extern struct spinlock iommu_pt_cleanup_lock;
>  extern struct page_list_head iommu_pt_cleanup_list;
>  
> +bool arch_iommu_use_permitted(const struct domain *d);

Just FTR - this way you effectively preclude an arch from
making this a trivial static inline in one of its headers.
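For illustration, the alternative the declaration precludes would look roughly like this in a hypothetical arch header (a sketch, not actual Xen code; the real struct domain lives in xen/sched.h):

```c
#include <stdbool.h>

struct domain;   /* opaque here for the sketch */

/* An arch with no paging/sharing/log-dirty restrictions could have
 * provided a trivial inline instead of an out-of-line definition,
 * letting the compiler fold the check away entirely at call sites. */
static inline bool arch_iommu_use_permitted(const struct domain *d)
{
    (void)d;
    return true;
}
```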

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 11:18:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 11:18:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28933.58085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kez0U-0006fu-07; Tue, 17 Nov 2020 11:18:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28933.58085; Tue, 17 Nov 2020 11:18:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kez0T-0006fn-TG; Tue, 17 Nov 2020 11:18:41 +0000
Received: by outflank-mailman (input) for mailman id 28933;
 Tue, 17 Nov 2020 11:18:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kez0S-0006fi-Ci
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 11:18:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b63b42f8-ea1e-4495-b9bc-56293f9071f5;
 Tue, 17 Nov 2020 11:18:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4AE2AC23;
 Tue, 17 Nov 2020 11:18:38 +0000 (UTC)
X-Inumbo-ID: b63b42f8-ea1e-4495-b9bc-56293f9071f5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605611919; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lkxfR3i5w6zX+HLBvlooZzm+W9Yy7IGTSSX3u9qyyyY=;
	b=N/4FGcRBg/o+eoVv6Lpej/u6a6Fw/D6Jl6GQl47ycESoOrDMpiejQPppevr/ktS+KJNsaa
	WXQQWjPGw0pbP8DNdIDHvBOv5UwEHQFB/yrQVmEWvlnjDKLcbqPN82lsm2skwDSIFoAGaD
	CZgFnna3GShLe/O7I0fBVZTFRwp8/h0=
Subject: Re: [PATCH 06/12] xen/hypfs: move per-node function pointers into a
 dedicated struct
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-7-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5c9d71ea-8f25-0f57-ac48-5152a1e35264@suse.com>
Date: Tue, 17 Nov 2020 12:18:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-7-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> @@ -15,10 +29,7 @@ struct hypfs_entry {
>      unsigned int max_size;
>      const char *name;
>      struct list_head list;
> -    int (*read)(const struct hypfs_entry *entry,
> -                XEN_GUEST_HANDLE_PARAM(void) uaddr);
> -    int (*write)(struct hypfs_entry_leaf *leaf,
> -                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
> +    struct hypfs_funcs *funcs;

const (with all the cascade changes necessary)?
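The suggestion amounts to making the function table itself immutable; a self-contained sketch of the pattern (names are illustrative, not the real hypfs types):

```c
struct hypfs_entry;   /* illustrative forward declaration */

struct hypfs_funcs {
    int (*read)(const struct hypfs_entry *entry);
};

static int read_leaf(const struct hypfs_entry *entry)
{
    (void)entry;
    return 0;
}

/* const table: it can live in read-only memory, and no code can
 * accidentally overwrite a handler at runtime */
static const struct hypfs_funcs leaf_funcs = {
    .read = read_leaf,
};

/* the member then becomes pointer-to-const -- the "cascade change" */
struct hypfs_entry {
    const struct hypfs_funcs *funcs;
};
```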

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 11:23:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 11:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28939.58096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kez50-0007cB-Iy; Tue, 17 Nov 2020 11:23:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28939.58096; Tue, 17 Nov 2020 11:23:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kez50-0007c4-Fu; Tue, 17 Nov 2020 11:23:22 +0000
Received: by outflank-mailman (input) for mailman id 28939;
 Tue, 17 Nov 2020 11:23:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kez4y-0007bz-MV
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 11:23:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0098b7e7-d9c4-4e1e-8d34-7310e2352add;
 Tue, 17 Nov 2020 11:23:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 27931AC23;
 Tue, 17 Nov 2020 11:23:19 +0000 (UTC)
X-Inumbo-ID: 0098b7e7-d9c4-4e1e-8d34-7310e2352add
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605612199; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eQx3oPFm+QKaarZ4yUa1HfhPc7sP/lWshgtkUT3t9EI=;
	b=Cu9GEAvf7N9J8kxYxZgAR6g+0s9r6ZoEwVFTQFfseI3JW439/8mY/OprOQb7jTBEt4kAaq
	kTe8WcubK/qNSGqdKQhWL3lTUxn7+KAoIF64bT0JgZKgNiNFknLxXOOBcm5+DUNpH2j0Co
	Jjwq7UKyJDVo7XO7gLQwCpVpd7Zys4I=
Subject: Re: [PATCH 07/12] xen/hypfs: pass real failure reason up from
 hypfs_get_entry()
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-8-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6eda2dc8-6dfd-84df-6e50-51974d81f28e@suse.com>
Date: Tue, 17 Nov 2020 12:23:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-8-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> Instead of handling all errors from hypfs_get_entry() as ENOENT, pass
> up the real error value via ERR_PTR().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
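For readers unfamiliar with the idiom in the patch above: ERR_PTR() encodes a negative errno value in a pointer, so a single return value can carry either a valid pointer or the real failure reason. A minimal stand-alone sketch of the assumed semantics (not the actual Xen helpers):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_ERRNO 4095  /* errno values occupy the top page of addresses */

static inline void *ERR_PTR(long error)
{
    return (void *)(intptr_t)error;      /* -ENOENT becomes e.g. 0xff..fe */
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)(intptr_t)ptr;          /* recover the negative errno */
}

static inline bool IS_ERR(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```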


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 12:38:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 12:38:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28963.58112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf0F1-0005mm-Gw; Tue, 17 Nov 2020 12:37:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28963.58112; Tue, 17 Nov 2020 12:37:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf0F1-0005mf-Cw; Tue, 17 Nov 2020 12:37:47 +0000
Received: by outflank-mailman (input) for mailman id 28963;
 Tue, 17 Nov 2020 12:37:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kf0F0-0005ma-Pl
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 12:37:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bdb9ce39-f8ae-4eaf-a3ab-e0f8e02b0e3b;
 Tue, 17 Nov 2020 12:37:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 07C43AF13;
 Tue, 17 Nov 2020 12:37:45 +0000 (UTC)
X-Inumbo-ID: bdb9ce39-f8ae-4eaf-a3ab-e0f8e02b0e3b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605616665; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a2W5MJbG4LyDpayhsQW8gGLbqrvmBJMcq+Ai06Mo5Bc=;
	b=d9/nyvfnCfZryZbibtFkmv7QdmaNHz0aPUZ6QcDV0BVFcm6Z2l3pn98GuefMtAKAY70+K8
	DYHQ91mDGfJ2G308X5Tmf6w2rbxklR0tMNJEj114M7fPHR9EMn2frWraJnFr172mLt5t27
	Qp4XLit2QjG99PlQGSv9HUiV8E4fIQc=
Subject: Re: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-9-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
Date: Tue, 17 Nov 2020 13:37:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-9-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> Add a getsize() function pointer to struct hypfs_funcs for being able
> to have dynamically filled entries without the need to take the hypfs
> lock each time the contents are being generated.

But a dynamic update causing a change in size will require _some_
lock, won't it?

> --- a/xen/common/hypfs.c
> +++ b/xen/common/hypfs.c
> @@ -19,28 +19,29 @@
>  CHECK_hypfs_dirlistentry;
>  #endif
>  
> -#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
> -#define DIRENTRY_SIZE(name_len) \
> -    (DIRENTRY_NAME_OFF +        \
> -     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
> -
>  struct hypfs_funcs hypfs_dir_funcs = {
>      .read = hypfs_read_dir,
> +    .getsize = hypfs_getsize,
> +    .findentry = hypfs_dir_findentry,
>  };
>  struct hypfs_funcs hypfs_leaf_ro_funcs = {
>      .read = hypfs_read_leaf,
> +    .getsize = hypfs_getsize,
>  };
>  struct hypfs_funcs hypfs_leaf_wr_funcs = {
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_leaf,
> +    .getsize = hypfs_getsize,
>  };
>  struct hypfs_funcs hypfs_bool_wr_funcs = {
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_bool,
> +    .getsize = hypfs_getsize,
>  };
>  struct hypfs_funcs hypfs_custom_wr_funcs = {
>      .read = hypfs_read_leaf,
>      .write = hypfs_write_custom,
> +    .getsize = hypfs_getsize,
>  };

With the increasing number of fields that may (deliberately or
by mistake) be NULL, should we gain some form of proactive
guarding against calls through such pointers?

> @@ -88,6 +93,23 @@ static void hypfs_unlock(void)
>      }
>  }
>  
> +void *hypfs_alloc_dyndata(unsigned long size, unsigned long align)

Will callers really need to specify (high) alignment values? IOW ...

> +{
> +    unsigned int cpu = smp_processor_id();
> +
> +    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
> +    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
> +
> +    per_cpu(hypfs_dyndata, cpu) = _xzalloc(size, align);

... is xzalloc_bytes() not suitable for use here?

> @@ -171,15 +193,34 @@ static int hypfs_get_path_user(char *buf,
>      return 0;
>  }
>  
> +struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
> +                                        const char *name,
> +                                        unsigned int name_len)
> +{
> +    struct hypfs_entry *entry;
> +
> +    list_for_each_entry ( entry, &dir->dirlist, list )
> +    {
> +        int cmp = strncmp(name, entry->name, name_len);
> +
> +        if ( cmp < 0 )
> +            return ERR_PTR(-ENOENT);
> +
> +        if ( !cmp && strlen(entry->name) == name_len )
> +            return entry;
> +    }
> +
> +    return ERR_PTR(-ENOENT);
> +}
> +
>  static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
>                                                 const char *path)
>  {
>      const char *end;
>      struct hypfs_entry *entry;
>      unsigned int name_len;
> -    bool again = true;
>  
> -    while ( again )
> +    for ( ;; )

Nit: Strictly speaking another blank is needed between the two
semicolons.

> @@ -275,22 +305,25 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
>  
>      l = container_of(entry, const struct hypfs_entry_leaf, e);
>  
> -    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
> +    return copy_to_guest(uaddr, l->u.content, entry->funcs->getsize(entry)) ?
> +                                              -EFAULT : 0;

With the intended avoiding of locking, how is this ->getsize()
guaranteed to not ...

> @@ -298,7 +331,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
>          goto out;
>  
>      ret = -ENOBUFS;
> -    if ( ulen < entry->size + sizeof(e) )
> +    if ( ulen < size + sizeof(e) )
>          goto out;

... invalidate the checking done here? (A similar risk looks to
exist on the write path, albeit there we have at least the
->max_size checks, where I hope that field isn't meant to become
dynamic as well.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 12:54:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 12:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28970.58124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf0VB-0007fb-US; Tue, 17 Nov 2020 12:54:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28970.58124; Tue, 17 Nov 2020 12:54:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf0VB-0007fU-RC; Tue, 17 Nov 2020 12:54:29 +0000
Received: by outflank-mailman (input) for mailman id 28970;
 Tue, 17 Nov 2020 12:54:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kf0VA-0007fP-9a
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 12:54:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2edbb50c-65ce-4849-8578-31c28127995c;
 Tue, 17 Nov 2020 12:54:26 +0000 (UTC)
X-Inumbo-ID: 2edbb50c-65ce-4849-8578-31c28127995c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605617666;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=IeWlfUqCmhy5y6xIe3S5Xjsz/LVN584qLW6yOrPaquI=;
  b=bqjlLbwS8ccLu6CeKklWVPcWnewzTSdBpFeT8FRvrq4hJjr0KyJ44G3r
   nDac8NV1iGzhNwSu8bGRTc0IQufd8g0cpIawL7i2yZrrSP92mXFjxvduv
   eB0R+bk7Ps1lFN+LzOnkOaxI5ufQEZTyqaBcanoap2gOg25Rh7WcDGRDk
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: nrE2r5EcEtkkLXj95gMh3Hdq/LWZn42m+5pF0IIpTYyB3PE9i0JX5HuzYUN9PPXXtCkOEPwalf
 2IjIcJFm+5h1RIdYD08YodmWRxIz4wc6rSx/+eo+7kZ83vr/8X4cXN4c8mWdiZHQG+ia8KnAwX
 fG5yHcswe0tb5oB4UXjtCcTQtZ2DvEhtKEPAL58kMwBSXscgJNItrP6laFOAnINkuDb4vRk7D/
 RLCEMwyg+RnLy7amJUSDhY5aeoELresKHc30uIiq4LJxYTK/szR2hCNsEtEn6EiZHcHbo7a9Cb
 bco=
X-SBRS: None
X-MesageID: 31673585
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="31673585"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jVGVw5rzG5/iPT/WVvu7u/3wIIGQN4UeOYWwE2cXhD3wGzB4CMYa/yYyWDIvkWbNkb2RkLKKyeZeMXmYbvDg2swyBQtdysh7RYSc5hdk+4J+YSwpgGiW1MlKIeQC8xH9P0Hj125U+4akn5Ex2FRPPw1OctbZFHCc2VVjUFBXNLh5R/H2PWbCfag+3/e8wOkFL2+7LUACfiO5Cvy9BOIUfrMtWj5d4PRI4Ct71nZMe+fzfyuovdKaqlnTQt9GDz2GRLKgHtofuy0Pe4PBNVgb9w+834OZ3QOpVcgfFyRh7i9e0KbuJJGgNlYOqNv784+7JHDpbCxcpcwowiQuKklQXA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XNScQ2lP2jTT3efRphi/9RIgLG8rE7pQrMcbEcbruwg=;
 b=AQNdR8Im0DFfmuKcRGAJRdIqDZPITKonmEetnScTSodO+U/AiNkvI0MeZ+jcr8UhdsBvsblNRTLZp5ewCAwgLFHljOEn53AcIhrY6seDAtjMHhqb8QIcixpA8CQYp9A1PL1cu4YpcVtV0gJmPrESWC5yXEZtKbwiqgehaZx/Tf9Ti/utVJZY12DYuWkpBrQi/QuExlUl7Yr0aEbXJwKb8fT+J6DHuEDpTkr/SN/A+gYMxG3ZvBazdSrgKxjUPlW+smGjEfM1erQl9sH8BTtIYzy+u1tOWeXjZbhIniw7VbBfmTEsU2taJGmk1hPFccwAIyk858i0J91jJwGvsaYHMg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XNScQ2lP2jTT3efRphi/9RIgLG8rE7pQrMcbEcbruwg=;
 b=hfp4D4bKzDfZ5fNpEI+mOUdE+f2WxKnz0YfUx4wZHmtgic2WiP8O4uvRFq9jgP/4yRfIreLPVtGNcO3/nI1KdYbUkOfex4mfdWu5e3cOIVIMKrXibF9eA300wOttkixNjDIWk8V042nVeUg58Sn4EZDWr7p5korTzjcVI8hOa/I=
Date: Tue, 17 Nov 2020 13:54:07 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Cheyenne Wills <cheyenne.wills@gmail.com>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
Message-ID: <20201117125407.66xb3uuil3g4t6ek@Air-de-Roger>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <20201117105039.mpfrnwvojpmfaopx@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117105039.mpfrnwvojpmfaopx@Air-de-Roger>
X-ClientProxiedBy: MRXP264CA0030.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::18) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7fec8937-76fa-4da5-592e-08d88af7e243
X-MS-TrafficTypeDiagnostic: SN6PR03MB4253:
X-Microsoft-Antispam-PRVS: <SN6PR03MB425326AD0BFE5DB81CBDCE378FE20@SN6PR03MB4253.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: jbexlLwsCiB7uTADZfHK64ZCJHH3jW38Y26z79uj+K7MdkIjJYrD6UiHfqhnTY8AkuyW+iA09/ngT8Y+8rptnUfbDwEv/+ust3/MbCC3fvc4NOIcjC50XWPJCeZ/QEWS6uq24waCASl7XfYiQ+3UIHzb7tBXKf7+zf3/9Cyp00Fo5Udxf7izWWTd99OMP2Or/h65NjYgSPYNZhguZYZg+GYf00FxTGAW5zMQvl9V64zbfI6MY/756RUSJsLM18PMBPRLxE3a9riDpo6fqaiDQs1Ydzp76QFdKXKeQwZyZCplEwxof8JKp+sfgso9fNNO
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(366004)(39860400002)(396003)(346002)(376002)(186003)(16526019)(85182001)(8676002)(66556008)(316002)(66476007)(6916009)(8936002)(66946007)(1076003)(2906002)(4326008)(33716001)(26005)(5660300002)(6496006)(86362001)(478600001)(9686003)(956004)(83380400001)(6666004)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: r8WcWx9sh+TWQJnziSz2uRYaASTW9kgD9rjg+4SAKKin0Cb0Deh92au1Ae5044n1GjkiIc7Kin0ZuO2qe0b1yoLqcW5vinyerKWEDEq9iwSjc5ITPRujJoosKRMZIlJp+Iu4qt8DwN7b1lqZVa0W6jOev58l8f8PjhlZqL77wZ9/vPcqr+WY42qXNTWbZ0ZXb9yeI2yVH7sye6XYSfvyboThhkSaBksnoJHguzFzAQ3waE0mcqhuWLQ8sl3mtwgb2+CgmP4LTd/ijSVdXZksJU1e3Azg7Uu0zBrZkL3R5r7ix3E4B1ifBoXflLCeTcIXZrL4pPw6WipsxG6wiCjTsAxw1+lXJVWh71Qtcc2O6E1GBgu7UzalH+uPt5cMZjv5XAuqr+SCNfLXjGFcRXRL/dKc2TlY1MNqkWgbuK7gRWjl4jVsVELjFX7B+5VddnMkCFVKP3T74SeHBXz/+Skyk4Hd8Jt09taCuWsKKWnGF7oIBFP1J3WuEigDAhCVdEUwNAB4kz09nFLhhzZ/g1j7d9t9jJarZhdg6VYVpwmddOjFXXUacLwmNULGY+BdBgfdyG5RpZduwpDnd76jBet0M19QpSRWzroebml5yfI/EyOQkvufR+gxN94eMOjf9kndMcIGad1i662d5ZVeh0zdwA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 7fec8937-76fa-4da5-592e-08d88af7e243
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 12:54:13.7096
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4KduIXo+14VF/muoIKs+FwNIevnAX5xHX5cdfgA4XsSpibSRDU11jCCOq5nbrIRj8Blbzfhc4cSOvy53HFeNQA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB4253
X-OriginatorOrg: citrix.com

On Tue, Nov 17, 2020 at 11:50:39AM +0100, Roger Pau Monné wrote:
> On Mon, Nov 16, 2020 at 02:57:14PM -0700, Cheyenne Wills wrote:
> > Running Xen with XSA-351 is causing Solaris 11 systems to panic during
> > boot.  The panic screen is showing the failure to be coming from
> > "unix:rdmsr".  The panic occurs with existing guests (booting off a disk)
> > and the  booting from an install ISO image.
> > 
> > I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
> > requested that I report it here.
> > 
> > This was failing on a Xen 4.13 and a Xen 4.14 system built via gentoo.
> > 
> > I understand that ultimately this is a bug in Solaris.  However it does
> > impact existing guests that were functional before applying the XSA-351
> > security patches.
> 
> I seem to have some issues getting the Solaris 11.4 ISO to boot, which I
> think are unrelated to the MSR changes. I get what seems to be a panic
> just after the Copyright message, but there's no reason printed at all
> about the panic. The message just reads (transcript):
> 
> SunOS Release 5.11 Version 11.4.0.15.0 64-bit
> Copyright (c) 1983, 2018, Oracle and/or it's affiliates. All right reserved.
> System would not fast reboot because:
>  newkernel not valid
>  fastreboot_onpanic is not set
>  ...
> 
> The config file I'm using is:
> 
> memory=1024
> vcpus=4
> name="solaris"
> 
> builder="hvm"
> 
> disk = [
>   'format=raw,vdev=hdc,access=ro,devtype=cdrom,target=/root/sol-11_4-text-x86.iso',
>   'format=raw,vdev=hda,access=rw,target=/root/solaris.img',
> ]
> 
> vif = [
>  'mac=00:16:3E:74:3d:88,bridge=bridge0',
> ]
> 
> vnc=1
> vnclisten="0.0.0.0"
> 
> serial='pty'
> 
> on_crash="preserve"
> 
> Is there anything I'm missing?

OK, it seems like Solaris requires more than 1GB of memory in order to
boot. I've increased it to 4GB and I've been able to boot successfully
up to the installer.

I'm however able to boot up to the installer screen without any
crashes, so I guess the version I'm using (11.4.0.15.0) is already
fixed?

Can you paste which version of Solaris you are using and if possible
where I can find the installer media to reproduce?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 13:34:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 13:34:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.28990.58139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf17K-00031W-43; Tue, 17 Nov 2020 13:33:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 28990.58139; Tue, 17 Nov 2020 13:33:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf17K-00031P-0r; Tue, 17 Nov 2020 13:33:54 +0000
Received: by outflank-mailman (input) for mailman id 28990;
 Tue, 17 Nov 2020 13:33:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kf17I-00031K-Uh
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 13:33:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bc85bba-86a0-40a7-b550-2c8300eb2c26;
 Tue, 17 Nov 2020 13:33:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 203FFAC23;
 Tue, 17 Nov 2020 13:33:50 +0000 (UTC)
X-Inumbo-ID: 6bc85bba-86a0-40a7-b550-2c8300eb2c26
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605620030; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=79ygoGHAAzlFJx2jZakVEWODcUmCwXL7xp6xX6/cLQw=;
	b=rVhXgS1E/IualgL2ea8ecYlaYis0FVa0+UFqvvmAjUCdzNLk9tQE7Q4kc4KfCZmFgbo/Ya
	vX4cq3ZFS4N2g42O5YDDdI+6HLxYa5gGSmZoNnQ3brUJabLLRO8mzGhHwwJ0zWl0LA2ZND
	KP3e/ById11grW7mzSj1+QTN1kONl+0=
Subject: Re: [PATCH 09/12] xen/hypfs: add support for id-based dynamic
 directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-10-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
Date: Tue, 17 Nov 2020 14:33:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-10-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> --- a/xen/common/hypfs.c
> +++ b/xen/common/hypfs.c
> @@ -257,6 +257,82 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>      return entry->size;
>  }
>  
> +int hypfs_read_dyndir_id_entry(struct hypfs_entry_dir *template,
> +                               unsigned int id, bool is_last,
> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
> +{
> +    struct xen_hypfs_dirlistentry direntry;
> +    char name[12];

Perhaps better tie this literal 12 to the one used for declaring
struct hypfs_dyndir_id's name[] field, such that an eventual
change will need making in exactly one place?

> +    unsigned int e_namelen, e_len;
> +
> +    e_namelen = snprintf(name, sizeof(name), "%u", id);
> +    e_len = HYPFS_DIRENTRY_SIZE(e_namelen);
> +    direntry.e.pad = 0;
> +    direntry.e.type = template->e.type;
> +    direntry.e.encoding = template->e.encoding;
> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
> +    direntry.e.max_write_len = template->e.max_size;
> +    direntry.off_next = is_last ? 0 : e_len;
> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
> +        return -EFAULT;
> +    if ( copy_to_guest_offset(*uaddr, HYPFS_DIRENTRY_NAME_OFF, name,
> +                              e_namelen + 1) )
> +        return -EFAULT;
> +
> +    guest_handle_add_offset(*uaddr, e_len);
> +
> +    return 0;
> +}
> +
> +static struct hypfs_entry *hypfs_dyndir_findentry(struct hypfs_entry_dir *dir,
> +                                                  const char *name,
> +                                                  unsigned int name_len)
> +{
> +    struct hypfs_dyndir_id *data;

const? (also in read_dyndir below)

> +    data = hypfs_get_dyndata();
> +    if ( !data )
> +        return ERR_PTR(-ENOENT);
> +
> +    /* Use template with original findentry function. */
> +    return data->template->e.funcs->findentry(data->template, name, name_len);

Why does this pass the address of the template? If it truly is
(just) a template, then its dirlist ought to be empty at all
times? If otoh the "template" indeed gets used as a node in the
tree then perhaps it wants naming differently? "Stem" would come
to mind, but likely there are better alternatives. I've also
considered the German "Statthalter", but its English translations
don't seem reasonable to me here. And "placeholder" has kind of a
negative touch. (Also in this case some of my "const?" remarks
may be irrelevant.)

Further this and ...

> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_get_dyndata();
> +    if ( !data )
> +        return -ENOENT;
> +
> +    /* Use template with original read function. */
> +    return data->template->e.funcs->read(&data->template->e, uaddr);

... this using the template's funcs is somewhat unexpected, but
with the functions acting as the entry's .findentry() / .read()
hooks is obviously the right thing (and if the template is more
than what the word says, the consideration may become
inapplicable anyway). The implication is that the hooks
themselves can't be replaced, if need be down the road.

> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(struct hypfs_entry_dir *template,
> +                                              unsigned int id)
> +{
> +    struct hypfs_dyndir_id *data;
> +
> +    data = hypfs_alloc_dyndata(sizeof(*data), alignof(*data));
> +    if ( !data )
> +        return ERR_PTR(-ENOMEM);
> +
> +    data->template = template;
> +    data->id = id;

I can't seem to spot any consumer of this field: Is it really
needed?

> --- a/xen/include/xen/hypfs.h
> +++ b/xen/include/xen/hypfs.h
> @@ -50,6 +50,15 @@ struct hypfs_entry_dir {
>      struct list_head dirlist;
>  };
>  
> +struct hypfs_dyndir_id {
> +    struct hypfs_entry_dir dir;       /* Modified copy of template. */
> +    struct hypfs_funcs funcs;         /* Dynamic functions. */
> +    struct hypfs_entry_dir *template; /* Template used. */

const?

> @@ -150,6 +159,11 @@ struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
>                                          unsigned int name_len);
>  void *hypfs_alloc_dyndata(unsigned long size, unsigned long align);
>  void *hypfs_get_dyndata(void);
> +int hypfs_read_dyndir_id_entry(struct hypfs_entry_dir *template,

const?

> +                               unsigned int id, bool is_last,
> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(struct hypfs_entry_dir *template,

const?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:00:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29000.58150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1WU-00059y-9b; Tue, 17 Nov 2020 13:59:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29000.58150; Tue, 17 Nov 2020 13:59:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1WU-00059r-6P; Tue, 17 Nov 2020 13:59:54 +0000
Received: by outflank-mailman (input) for mailman id 29000;
 Tue, 17 Nov 2020 13:59:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+iwA=EX=gmail.com=cheyenne.wills@srs-us1.protection.inumbo.net>)
 id 1kf1WS-00059l-6t
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 13:59:52 +0000
Received: from mail-lj1-x230.google.com (unknown [2a00:1450:4864:20::230])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5e28589-3ff6-4f1b-9a01-4ce72a185146;
 Tue, 17 Nov 2020 13:59:51 +0000 (UTC)
Received: by mail-lj1-x230.google.com with SMTP id r17so24348921ljg.5
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 05:59:50 -0800 (PST)
X-Inumbo-ID: d5e28589-3ff6-4f1b-9a01-4ce72a185146
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=gLxTs81IMFYS+U+UnWAMUUJ68TsI2YJnY5VDUgFiRp4=;
        b=E6ko0QC/vUOhYYS3XdqNiGCgMUephFjKk5AoFMfgVk6wDTyWIBmqqe0kjwNDE7EFCq
         rYCXjj91RB4scOkummpLfb0KFEaFHhrM/nOWCXM3/YV2vk4eRspJTmAxMwIj03x/ojjv
         iv6d0elpEWxm976Uw6/CQWW51udnsdd12ecA23GZy8O/tvJ/wraSj1TjXhmht1x4AlfJ
         4/MStOWiJcIJoArg1thepPFJL0rgqmJtZal+7aXIIMvTO3pCzZ2AUxGe1UmgUrmlK4JP
         jduuPmToqbANvPAEFCHVjKNsMhPEI9qiJ0mJQJCIkQ+3tRCPMZGlJMAnyt5gjzR9d7xw
         FfJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=gLxTs81IMFYS+U+UnWAMUUJ68TsI2YJnY5VDUgFiRp4=;
        b=oaoMQkRXumaAwdKusYlU8J7cXoNUAXn+4Px+xEkolsOC3xklr6sNJoWOC2OLBWuV54
         e0gC6GI7WZGz2n+ptJpIABulmwPhMByn9VeeMNX0oLSsVfnbYmJX/4HvcqlZHgMYNVah
         vjtzB84RH/PACZ1bOI7A/3Xx2Jd9dZ6nsRmr0vbPftbnJbx4cb7DFHhb5h6BH6AFNie0
         AAy9DTfYrk/7M3oV/DZQf/jd60P/U6l6UK2TvppYcGCnf+FEsjxu7NK4y1TOzXb0LtpY
         f9r9LFle8K5u6qkTY+xwSy5Z7p0LagA8a9iFRGOSzErLsy4f0EPRRxTM8Cog4DHzYu66
         gqiQ==
X-Gm-Message-State: AOAM531AHOHDjk3LWD1TtBANfHI2wYNasagQZGhzlrJfcWCxaROn0o4h
	EYbZ72KUJFrCZMziMnCvK8Yx+wLKt4zm4omc3CQ=
X-Google-Smtp-Source: ABdhPJxqDhrmluXbeB0n7wHzje+Yewx5nqXEoRWHRhOHPvShvQL4VaUZLHsqb43XYvBEZnSGbfE6stYmN0g/itTS9YQ=
X-Received: by 2002:a2e:9005:: with SMTP id h5mr1756200ljg.59.1605621589775;
 Tue, 17 Nov 2020 05:59:49 -0800 (PST)
MIME-Version: 1.0
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <20201117105039.mpfrnwvojpmfaopx@Air-de-Roger> <20201117125407.66xb3uuil3g4t6ek@Air-de-Roger>
In-Reply-To: <20201117125407.66xb3uuil3g4t6ek@Air-de-Roger>
From: Cheyenne Wills <cheyenne.wills@gmail.com>
Date: Tue, 17 Nov 2020 06:59:39 -0700
Message-ID: <CAHpsFVdSRr=R1dP2Gn_Efr5ijvoUV44Rcjgz6eOfSZVrkYxbOg@mail.gmail.com>
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000008098a405b44de831"

--0000000000008098a405b44de831
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Yes. I will have to re-upgrade my xen system to collect the additional info
from the panic, so it will be later today before I can reply with all the
info.

On Tue, Nov 17, 2020, 5:54 AM Roger Pau Monné <roger.pau@citrix.com> wrote:

> On Tue, Nov 17, 2020 at 11:50:39AM +0100, Roger Pau Monné wrote:
> > On Mon, Nov 16, 2020 at 02:57:14PM -0700, Cheyenne Wills wrote:
> > > Running Xen with XSA-351 is causing Solaris 11 systems to panic during
> > > boot.  The panic screen is showing the failure to be coming from
> > > "unix:rdmsr".  The panic occurs with existing guests (booting off a disk)
> > > and the  booting from an install ISO image.
> > >
> > > I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
> > > requested that I report it here.
> > >
> > > This was failing on a Xen 4.13 and a Xen 4.14 system built via gentoo.
> > >
> > > I understand that ultimately this is a bug in Solaris.  However it does
> > > impact existing guests that were functional before applying the XSA-351
> > > security patches.
> >
> > I seem to have some issues getting the Solaris 11.4 ISO to boot, which I
> > think are unrelated to the MSR changes. I get what seems to be a panic
> > just after the Copyright message, but there's no reason printed at all
> > about the panic. The message just reads (transcript):
> >
> > SunOS Release 5.11 Version 11.4.0.15.0 64-bit
> > Copyright (c) 1983, 2018, Oracle and/or it's affiliates. All right reserved.
> > System would not fast reboot because:
> >  newkernel not valid
> >  fastreboot_onpanic is not set
> >  ...
> >
> > The config file I'm using is:
> >
> > memory=1024
> > vcpus=4
> > name="solaris"
> >
> > builder="hvm"
> >
> > disk = [
> >   'format=raw,vdev=hdc,access=ro,devtype=cdrom,target=/root/sol-11_4-text-x86.iso',
> >   'format=raw,vdev=hda,access=rw,target=/root/solaris.img',
> > ]
> >
> > vif = [
> >  'mac=00:16:3E:74:3d:88,bridge=bridge0',
> > ]
> >
> > vnc=1
> > vnclisten="0.0.0.0"
> >
> > serial='pty'
> >
> > on_crash="preserve"
> >
> > Is there anything I'm missing?
>
> OK, it seems like Solaris requires more than 1GB of memory in order to
> boot. I've increased it to 4GB and I've been able to boot successfully
> up to the installer.
>
> I'm however able to boot up to the installer screen without any
> crashes, so I guess the version I'm using (11.4.0.15.0) is already
> fixed?
>
> Can you paste which version of Solaris you are using and if possible
> where I can find the installer media to reproduce?
>
> Thanks, Roger.
>


--0000000000008098a405b44de831--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:01:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29006.58166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1Xl-00064F-QG; Tue, 17 Nov 2020 14:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29006.58166; Tue, 17 Nov 2020 14:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1Xl-000648-ND; Tue, 17 Nov 2020 14:01:13 +0000
Received: by outflank-mailman (input) for mailman id 29006;
 Tue, 17 Nov 2020 14:01:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kf1Xk-00063U-Oq
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:01:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 894c6e6b-a4e4-4a59-b1dd-44eb6ee048b0;
 Tue, 17 Nov 2020 14:01:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf1Xa-0007Cr-HY; Tue, 17 Nov 2020 14:01:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf1Xa-0007in-4j; Tue, 17 Nov 2020 14:01:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kf1Xa-0007eI-4E; Tue, 17 Nov 2020 14:01:02 +0000
X-Inumbo-ID: 894c6e6b-a4e4-4a59-b1dd-44eb6ee048b0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yGwYFoF5BMqJBTPuZYlFA7upPYXgUtaW7/Yb3+1Xiwg=; b=I6FFpxm34nA432+aXFcN9dXfCU
	z/Eijle6TIbgrP20t4AhiPN5Lr0HU/VZTdzvsP8nys6k1VMbNJgGJhEtAoqcCN7UWWQdxbhBNZjOw
	OBbE+KjV9EuXJ0GHUcddMu7QU5lHGUEHf64OFAWcIiZIiIVNx8eTFFAoww6oOYzP/LiI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156827-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156827: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 14:01:02 +0000

flight 156827 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156827/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 156814
 test-armhf-armhf-xl-rtds     19 guest-start.2              fail pass in 156814

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 156814
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156814
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156814
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156814
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156814
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156814
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156814
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156814
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156814
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156814
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156814
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156814
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156827  2020-11-17 01:52:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:13:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:13:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29014.58177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1k0-0007B8-06; Tue, 17 Nov 2020 14:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29014.58177; Tue, 17 Nov 2020 14:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1jz-0007B1-TQ; Tue, 17 Nov 2020 14:13:51 +0000
Received: by outflank-mailman (input) for mailman id 29014;
 Tue, 17 Nov 2020 14:13:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kf1jy-0007Aw-PG
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:13:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 643ef3da-e864-44e7-b168-0691a12c2a37;
 Tue, 17 Nov 2020 14:13:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1159CADCF;
 Tue, 17 Nov 2020 14:13:48 +0000 (UTC)
X-Inumbo-ID: 643ef3da-e864-44e7-b168-0691a12c2a37
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605622428; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DN93mxkfRlJGtt4PZwPWnTGvGvL74z0+Nib5KrFXMn0=;
	b=NcFJgKNu9UxW9qxtviuuE1jLe7sLTTdk9GRUhzywDdeSmxwPKxaUZgdVbcBDCz4DkKBJvm
	JyMatCifVIY/NIsEdjO1S+T4R/SuaGMyOo5zrUKkj2oo7snBcrKedRdxflMCqrGKOaTD1x
	aGwIW4ZjMqrhehGE3LQk4Onlt2HStOQ=
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c199e96-c686-2045-8972-036e69600873@suse.com>
Date: Tue, 17 Nov 2020 15:13:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-11-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> @@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
>      .notifier_call = cpu_callback
>  };
>  
> +#ifdef CONFIG_HYPFS
> +static HYPFS_DIR_INIT(cpupool_pooldir, "id");

This "id" string won't appear anywhere, will it? I would have
expected it to act as the format string used when generating the
names of the dynamic entries. That would e.g. allow CPU pools to
have decimal numbered names, other entries hex ones, and, where
so desired, names with leading zeros.

> +static int cpupool_dir_read(const struct hypfs_entry *entry,
> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    int ret = 0;
> +    struct cpupool **q;

I was going to ask for const here, but the way for_each_cpupool()
works looks to prohibit this. Nevertheless I wonder whether the
extra level of indirection there wouldn't better be dropped. Of
the users, only cpupool_destroy() looks to need it, so open-
coding the loop there (or introducing an auxiliary variable)
would allow improvements here and elsewhere. (Actually I notice
there's also a similar use in cpupool_create(), but the general
consideration remains.)

> +    spin_lock(&cpupool_lock);
> +
> +    for_each_cpupool(q)
> +    {
> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
> +                                         !(*q)->next, &uaddr);
> +        if ( ret )
> +            break;
> +    }
> +
> +    spin_unlock(&cpupool_lock);
> +
> +    return ret;
> +}
> +
> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
> +{
> +    struct cpupool **q;
> +    unsigned int size = 0;
> +
> +    spin_lock(&cpupool_lock);
> +
> +    for_each_cpupool(q)
> +        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));

Beyond the remark above I consider this problematic: If the pool
ID was negative, the use of %d here would get things out of sync
with the %u uses in hypfs.c. I guess exposing
HYPFS_DIRENTRY_SIZE() isn't the right approach, and you instead
need another hypfs library function.

> +static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
> +                                                 const char *name,
> +                                                 unsigned int name_len)
> +{
> +    unsigned long id;
> +    const char *end;
> +    struct cpupool *cpupool;
> +
> +    id = simple_strtoul(name, &end, 10);
> +    if ( id > INT_MAX || end != name + name_len )

What does this INT_MAX match up with? Afaics
XEN_SYSCTL_CPUPOOL_OP_CREATE is fine to have an effectively
negative pool ID passed in (the public interface struct uses
uint32_t, but this gets converted to plain int first thing in
the sysctl handler).

> +        return ERR_PTR(-ENOENT);
> +
> +    spin_lock(&cpupool_lock);
> +
> +    cpupool = __cpupool_find_by_id(id, true);
> +
> +    spin_unlock(&cpupool_lock);
> +
> +    if ( !cpupool )
> +        return ERR_PTR(-ENOENT);
> +
> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);

This one makes clear, at the latest, that cpupool_lock nests
inside the hypfs one. I think this wants spelling out next to the
definition of the former, as it implies that there are
restrictions on what can be done from inside cpupool-locked
regions. hypfs_read_dyndir_id_entry(), for example, has to
remain lock-free for this reason.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:19:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:19:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29021.58193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1pU-0007ON-TP; Tue, 17 Nov 2020 14:19:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29021.58193; Tue, 17 Nov 2020 14:19:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1pU-0007OG-QR; Tue, 17 Nov 2020 14:19:32 +0000
Received: by outflank-mailman (input) for mailman id 29021;
 Tue, 17 Nov 2020 14:19:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf1pT-0007OB-Hs
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:19:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e531874a-e8eb-47d4-9e4a-39efed150824;
 Tue, 17 Nov 2020 14:19:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D64CAAC1F;
 Tue, 17 Nov 2020 14:19:29 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kf1pT-0007OB-Hs
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:19:31 +0000
X-Inumbo-ID: e531874a-e8eb-47d4-9e4a-39efed150824
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e531874a-e8eb-47d4-9e4a-39efed150824;
	Tue, 17 Nov 2020 14:19:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605622770; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=s6TyF4KPyw2U3xrvVl9z7AZOuFlIf1NCeG0WBz1NTVo=;
	b=LYAYM7FtzOJvWHbUW7jO7FpnIZAJb9rWCLnARwUK+qxZteMCglPfMcHKhpoD+f9mb/o5+9
	VLhsaxgqyazqRA+EYuUA3qzbLgGAGW60VtZgxEOXQkjvitoDEs8KTeFmtjXAGzX2OFskeN
	6Dpia6ORaYw98Ir5v0hH14HdgVyTwIw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D64CAAC1F;
	Tue, 17 Nov 2020 14:19:29 +0000 (UTC)
Subject: Re: [PATCH 06/12] xen/hypfs: move per-node function pointers into a
 dedicated struct
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-7-jgross@suse.com>
 <5c9d71ea-8f25-0f57-ac48-5152a1e35264@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f0563696-9523-3b02-81ef-b7d58029fab1@suse.com>
Date: Tue, 17 Nov 2020 15:19:29 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <5c9d71ea-8f25-0f57-ac48-5152a1e35264@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="iil3i7gJbQgzHbD6lotK8Qhug7sn6BQ7F"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--iil3i7gJbQgzHbD6lotK8Qhug7sn6BQ7F
Content-Type: multipart/mixed; boundary="Xk3tHcnFfDvYxmCORlgPUyoKWsV92ozNu";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <f0563696-9523-3b02-81ef-b7d58029fab1@suse.com>
Subject: Re: [PATCH 06/12] xen/hypfs: move per-node function pointers into a
 dedicated struct
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-7-jgross@suse.com>
 <5c9d71ea-8f25-0f57-ac48-5152a1e35264@suse.com>
In-Reply-To: <5c9d71ea-8f25-0f57-ac48-5152a1e35264@suse.com>

--Xk3tHcnFfDvYxmCORlgPUyoKWsV92ozNu
Content-Type: multipart/mixed;
 boundary="------------ADD297E328D2CD9B201BA2C7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------ADD297E328D2CD9B201BA2C7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 12:18, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -15,10 +29,7 @@ struct hypfs_entry {
>>       unsigned int max_size;
>>       const char *name;
>>       struct list_head list;
>> -    int (*read)(const struct hypfs_entry *entry,
>> -                XEN_GUEST_HANDLE_PARAM(void) uaddr);
>> -    int (*write)(struct hypfs_entry_leaf *leaf,
>> -                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
>> +    struct hypfs_funcs *funcs;
>
> const (with all the cascade changes necessary)?

Yes, probably possible.


Juergen

--------------ADD297E328D2CD9B201BA2C7
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ADD297E328D2CD9B201BA2C7--

--Xk3tHcnFfDvYxmCORlgPUyoKWsV92ozNu--

--iil3i7gJbQgzHbD6lotK8Qhug7sn6BQ7F
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+z2/EFAwAAAAAACgkQsN6d1ii/Ey+F
Pwf8D6Dh+EbGtcHoXb14W1uJUyPS7YJryfbtrChRMDpjOOY0thscu/frOuttkQ2cYlLU6ISAbNuS
0OnaS49Ou2EcBWWl3FATC4wHfTSq/KnryVl/8n1sihMmJB70plNxj1twDFtQ53iGdB7FNEbOd4SG
oT/+vMaYAb8xcuOUvsGywiXphtalYzdq6XSoPXJVEl3cfu8tZP0Q/F+GLIy2giPXFp1StfBxtimj
VNDiGy9tLUPlr9zUby9vNuI9P4OM1uMG3UBILCQKMHxBEk7BNVlMNphvShqy4JoWqBMnN1pRLEe3
pEYBNgY8ODu0VaIhg8vcq6L0Dd7xi7TVu4pIND9vqA==
=rd/n
-----END PGP SIGNATURE-----

--iil3i7gJbQgzHbD6lotK8Qhug7sn6BQ7F--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:30:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:30:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29028.58205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1zX-0008Rq-3i; Tue, 17 Nov 2020 14:29:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29028.58205; Tue, 17 Nov 2020 14:29:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf1zX-0008Rj-0I; Tue, 17 Nov 2020 14:29:55 +0000
Received: by outflank-mailman (input) for mailman id 29028;
 Tue, 17 Nov 2020 14:29:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf1zV-0008Re-EK
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:29:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87103953-48bb-45bd-86d0-e9a68d1f67fe;
 Tue, 17 Nov 2020 14:29:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 938DEAC2E;
 Tue, 17 Nov 2020 14:29:51 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kf1zV-0008Re-EK
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:29:53 +0000
X-Inumbo-ID: 87103953-48bb-45bd-86d0-e9a68d1f67fe
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 87103953-48bb-45bd-86d0-e9a68d1f67fe;
	Tue, 17 Nov 2020 14:29:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605623391; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hnVuomKx40WMf4cquvKgXgKksdZX6y8pUMWsfmeaJ5Y=;
	b=QwUN/DygWdq6BlnfeD1V7xnwMMHUXIzP/fqlkURzzu907ZPXbZzcOR02T9OFRoPhIP93K9
	TSpuiHciPEp2opM4wsbEks5DWHtsMk9H6WKK7LZXZ0Xde5Vg+uxt/qlVqR+0WvF6N3b1Y1
	EwGGdwoTc9F18hR4CmMyzLBGkG1pX04=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 938DEAC2E;
	Tue, 17 Nov 2020 14:29:51 +0000 (UTC)
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-9-jgross@suse.com>
 <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
Subject: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
Message-ID: <6fe809d5-09c1-28d3-61ec-10244b2d7d5f@suse.com>
Date: Tue, 17 Nov 2020 15:29:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="hFCjskmoi0gjnp7B2YhiBYfebIBqtFjQc"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--hFCjskmoi0gjnp7B2YhiBYfebIBqtFjQc
Content-Type: multipart/mixed; boundary="XX7YNifh0uwWq4sHqsQyz91Ny5fmrfHRo";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <6fe809d5-09c1-28d3-61ec-10244b2d7d5f@suse.com>
Subject: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-9-jgross@suse.com>
 <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
In-Reply-To: <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>

--XX7YNifh0uwWq4sHqsQyz91Ny5fmrfHRo
Content-Type: multipart/mixed;
 boundary="------------C506B9991BCDEFDCABE781B3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C506B9991BCDEFDCABE781B3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 13:37, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> Add a getsize() function pointer to struct hypfs_funcs for being able
>> to have dynamically filled entries without the need to take the hypfs
>> lock each time the contents are being generated.
>
> But a dynamic update causing a change in size will require _some_
> lock, won't it?

Yes, of course.

E.g. the getsize() function returning the size of a directory holding an
entry for each cpupool will need to take the cpupool lock in order to
avoid a cpupool being created or deleted in parallel.

But the cpupool create/destroy functions don't need to take the hypfs
lock.

>
>> --- a/xen/common/hypfs.c
>> +++ b/xen/common/hypfs.c
>> @@ -19,28 +19,29 @@
>>   CHECK_hypfs_dirlistentry;
>>   #endif
>>
>> -#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
>> -#define DIRENTRY_SIZE(name_len) \
>> -    (DIRENTRY_NAME_OFF +        \
>> -     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>> -
>>   struct hypfs_funcs hypfs_dir_funcs = {
>>       .read = hypfs_read_dir,
>> +    .getsize = hypfs_getsize,
>> +    .findentry = hypfs_dir_findentry,
>>   };
>>   struct hypfs_funcs hypfs_leaf_ro_funcs = {
>>       .read = hypfs_read_leaf,
>> +    .getsize = hypfs_getsize,
>>   };
>>   struct hypfs_funcs hypfs_leaf_wr_funcs = {
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_leaf,
>> +    .getsize = hypfs_getsize,
>>   };
>>   struct hypfs_funcs hypfs_bool_wr_funcs = {
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_bool,
>> +    .getsize = hypfs_getsize,
>>   };
>>   struct hypfs_funcs hypfs_custom_wr_funcs = {
>>       .read = hypfs_read_leaf,
>>       .write = hypfs_write_custom,
>> +    .getsize = hypfs_getsize,
>>   };
>
> With the increasing number of fields that may (deliberately or
> by mistake) be NULL, should we gain some form of proactive
> guarding against calls through such pointers?

Hmm, up to now I think such a bug would be detected rather fast.

I can add some ASSERT()s for mandatory functions not being NULL when
a node is added dynamically or during hypfs initialization for the
static nodes.

>
>> @@ -88,6 +93,23 @@ static void hypfs_unlock(void)
>>       }
>>   }
>>
>> +void *hypfs_alloc_dyndata(unsigned long size, unsigned long align)
>
> Will callers really need to specify (high) alignment values? IOW ...
>
>> +{
>> +    unsigned int cpu = smp_processor_id();
>> +
>> +    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
>> +    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
>> +
>> +    per_cpu(hypfs_dyndata, cpu) = _xzalloc(size, align);
>
> ... is xzalloc_bytes() not suitable for use here?

Good question.

Up to now I think we could get away without specific alignment.

I can drop that parameter for now if you'd like that better.

>
>> @@ -171,15 +193,34 @@ static int hypfs_get_path_user(char *buf,
>>       return 0;
>>   }
>>
>> +struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
>> +                                        const char *name,
>> +                                        unsigned int name_len)
>> +{
>> +    struct hypfs_entry *entry;
>> +
>> +    list_for_each_entry ( entry, &dir->dirlist, list )
>> +    {
>> +        int cmp = strncmp(name, entry->name, name_len);
>> +
>> +        if ( cmp < 0 )
>> +            return ERR_PTR(-ENOENT);
>> +
>> +        if ( !cmp && strlen(entry->name) == name_len )
>> +            return entry;
>> +    }
>> +
>> +    return ERR_PTR(-ENOENT);
>> +}
>> +
>>   static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
>>                                                  const char *path)
>>   {
>>       const char *end;
>>       struct hypfs_entry *entry;
>>       unsigned int name_len;
>> -    bool again = true;
>>
>> -    while ( again )
>> +    for ( ;; )
>
> Nit: Strictly speaking another blank is needed between the two
> semicolons.

Okay.

>
>> @@ -275,22 +305,25 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
>>
>>       l = container_of(entry, const struct hypfs_entry_leaf, e);
>>
>> -    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
>> +    return copy_to_guest(uaddr, l->u.content, entry->funcs->getsize(entry)) ?
>> +                                              -EFAULT : 0;
>
> With the intended avoiding of locking, how is this ->getsize()
> guaranteed to not ...
>
>> @@ -298,7 +331,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
>>           goto out;
>>
>>       ret = -ENOBUFS;
>> -    if ( ulen < entry->size + sizeof(e) )
>> +    if ( ulen < size + sizeof(e) )
>>           goto out;
>
> ... invalidate the checking done here? (A similar risk looks to
> exist on the write path, albeit there we have at least the
> ->max_size checks, where I hope that field isn't mean to become
> dynamic as well.)

I think you are right. I should add the size value as a parameter to the
read and write functions.

And no, max_size should not be dynamic.


Juergen

--------------C506B9991BCDEFDCABE781B3--

--XX7YNifh0uwWq4sHqsQyz91Ny5fmrfHRo--

--hFCjskmoi0gjnp7B2YhiBYfebIBqtFjQc
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+z3l4FAwAAAAAACgkQsN6d1ii/Ey+y
mAf/ey8H4fxbNemcTS8JfA8BVjkH/ax6rUuAmR0yWh2sKbtsSiyDzHusKWSDySOSXDVU52PrJWjn
4D+xg72E9s/XvhL7jR3DgyUkdPwjT5Va5tf2/ZZap6pCidkC1Cfe6G/boI4ONb+Mw06eyhnset8V
e9dHHq0XdWKFOSeIGKIzOQo2CjoJnoo6IKFuXVwQBxD3XMhCEFxFLYJOfrlI5cMo79VVdvULkseg
+AVZSkK4mdjSE51N1cjGRRfo/p/Rwcbq4a6MljfQG38d8jMbuCRNYuW/5bj928rp4J7mHjBd+gxF
j4CYUeDtc5Q6cj2zDq4nRQB0JPuVDkO0d1e8wD6WPQ==
=FVDl
-----END PGP SIGNATURE-----

--hFCjskmoi0gjnp7B2YhiBYfebIBqtFjQc--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:38:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29033.58217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf289-00012T-0j; Tue, 17 Nov 2020 14:38:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29033.58217; Tue, 17 Nov 2020 14:38:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf288-00012M-TT; Tue, 17 Nov 2020 14:38:48 +0000
Received: by outflank-mailman (input) for mailman id 29033;
 Tue, 17 Nov 2020 14:38:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf287-00011d-Qx
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:38:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ef65306-d06e-431d-8500-8244a451cb44;
 Tue, 17 Nov 2020 14:38:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28986AC1F;
 Tue, 17 Nov 2020 14:38:46 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kf287-00011d-Qx
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:38:47 +0000
X-Inumbo-ID: 7ef65306-d06e-431d-8500-8244a451cb44
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7ef65306-d06e-431d-8500-8244a451cb44;
	Tue, 17 Nov 2020 14:38:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605623926; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zFq0mNEnNGQQbJQ5eWMnQX3fnZHSAeNPY7v15vUs1iY=;
	b=kvvth0b4KbiIPp5RRRi1o+388AjmPqMJV9wcrSxmwSc9kmysIUYRyuQ+AHFxWFg7RAzQnw
	jk1yvYI50LXjyq9okM79d6bcPmzgmwwF9dVx1kHxqHknic0GjC31gHlNFEpGCtyD0j7UAj
	gNXO5JTdekopgCc2+Nfz4yPZErEmIys=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-10-jgross@suse.com>
 <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 09/12] xen/hypfs: add support for id-based dynamic
 directories
Message-ID: <95f673e5-90a8-0fe9-3842-bdb9de5c4aa4@suse.com>
Date: Tue, 17 Nov 2020 15:38:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="iRTqzNAF16dzDUUORWQDukyLXID908L9w"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--iRTqzNAF16dzDUUORWQDukyLXID908L9w
Content-Type: multipart/mixed; boundary="zq09cQFK2MztX650FX9PfZqne4pUChG4T";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <95f673e5-90a8-0fe9-3842-bdb9de5c4aa4@suse.com>
Subject: Re: [PATCH 09/12] xen/hypfs: add support for id-based dynamic
 directories
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-10-jgross@suse.com>
 <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
In-Reply-To: <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>

--zq09cQFK2MztX650FX9PfZqne4pUChG4T
Content-Type: multipart/mixed;
 boundary="------------17B76CAA08B0E581952F3F76"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------17B76CAA08B0E581952F3F76
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 14:33, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> --- a/xen/common/hypfs.c
>> +++ b/xen/common/hypfs.c
>> @@ -257,6 +257,82 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
>>       return entry->size;
>>   }
>>
>> +int hypfs_read_dyndir_id_entry(struct hypfs_entry_dir *template,
>> +                               unsigned int id, bool is_last,
>> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
>> +{
>> +    struct xen_hypfs_dirlistentry direntry;
>> +    char name[12];
>
> Perhaps better tie this literal 12 to the one used for declaring
> struct hypfs_dyndir_id's name[] field, such that an eventual
> change will need making in exactly one place?

Yes.
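For illustration, a minimal sketch of tying the buffer length to the struct field so both stay in sync (the type and macro names here are stand-ins, not the actual Xen definitions):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for struct hypfs_dyndir_id; the real field lives in
 * xen/include/xen/hypfs.h.  Sizing name[] via one macro means a future
 * width change needs making in exactly one place. */
#define DYNDIR_ID_NAMELEN 12
struct dyndir_id_sketch {
    char name[DYNDIR_ID_NAMELEN];
};

/* Format an unsigned id into a caller-provided buffer and return the
 * string length, asserting that no truncation occurred. */
static unsigned int format_id(char *buf, size_t buflen, unsigned int id)
{
    int n = snprintf(buf, buflen, "%u", id);

    assert(n > 0 && (size_t)n < buflen); /* 12 bytes fit any 32-bit id */
    return (unsigned int)n;
}
```

A caller would then declare the local buffer as `char name[sizeof(((struct dyndir_id_sketch *)NULL)->name)];` rather than repeating the literal 12.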

>
>> +    unsigned int e_namelen, e_len;
>> +
>> +    e_namelen = snprintf(name, sizeof(name), "%u", id);
>> +    e_len = HYPFS_DIRENTRY_SIZE(e_namelen);
>> +    direntry.e.pad = 0;
>> +    direntry.e.type = template->e.type;
>> +    direntry.e.encoding = template->e.encoding;
>> +    direntry.e.content_len = template->e.funcs->getsize(&template->e);
>> +    direntry.e.max_write_len = template->e.max_size;
>> +    direntry.off_next = is_last ? 0 : e_len;
>> +    if ( copy_to_guest(*uaddr, &direntry, 1) )
>> +        return -EFAULT;
>> +    if ( copy_to_guest_offset(*uaddr, HYPFS_DIRENTRY_NAME_OFF, name,
>> +                              e_namelen + 1) )
>> +        return -EFAULT;
>> +
>> +    guest_handle_add_offset(*uaddr, e_len);
>> +
>> +    return 0;
>> +}
>> +
>> +static struct hypfs_entry *hypfs_dyndir_findentry(struct hypfs_entry_dir *dir,
>> +                                                  const char *name,
>> +                                                  unsigned int name_len)
>> +{
>> +    struct hypfs_dyndir_id *data;
>
> const? (also in read_dyndir below)

Okay.

>
>> +    data = hypfs_get_dyndata();
>> +    if ( !data )
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    /* Use template with original findentry function. */
>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>
> Why does this pass the address of the template? If it truly is
> (just) a template, then its dirlist ought to be empty at all
> times? If otoh the "template" indeed gets used as a node in the
> tree then perhaps it wants naming differently? "Stem" would come
> to mind, but likely there are better alternatives. I've also
> considered the German "Statthalter", but its English translations
> don't seem reasonable to me here. And "placeholder" has kind of a
> negative touch. (Also in this case some of my "const?" remarks
> may be irrelevant.)

It is basically a template tree.

In the current use case (cpupool/<id>/sched-gran), the template is
<id> with the leaf "sched-gran", which serves as the template for the
per-cpupool incarnation.

If you like it better, I can use "stem".

>
> Further this and ...
>
>> +static int hypfs_read_dyndir(const struct hypfs_entry *entry,
>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_get_dyndata();
>> +    if ( !data )
>> +        return -ENOENT;
>> +
>> +    /* Use template with original read function. */
>> +    return data->template->e.funcs->read(&data->template->e, uaddr);
>
> ... this using the template's funcs is somewhat unexpected, but
> with the functions acting as the entry's .findentry() / .read()
> hooks is obviously the right thing (and if the template is more
> that what the word says, the consideration may become
> inapplicable anyway). The implication is that the hooks
> themselves can't be replaced, if need be down the road.

Correct. Should that ever be needed, the related node would have to be
made fully dynamic instead.

>
>> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(struct hypfs_entry_dir *template,
>> +                                              unsigned int id)
>> +{
>> +    struct hypfs_dyndir_id *data;
>> +
>> +    data = hypfs_alloc_dyndata(sizeof(*data), alignof(*data));
>> +    if ( !data )
>> +        return ERR_PTR(-ENOMEM);
>> +
>> +    data->template = template;
>> +    data->id = id;
>
> I can't seem to spot any consumer of this field: Is it really
> needed?

Yes. It will be used by the specific read/write functions, e.g.
cpupool_gran_read().

>
>> --- a/xen/include/xen/hypfs.h
>> +++ b/xen/include/xen/hypfs.h
>> @@ -50,6 +50,15 @@ struct hypfs_entry_dir {
>>       struct list_head dirlist;
>>   };
>>
>> +struct hypfs_dyndir_id {
>> +    struct hypfs_entry_dir dir;       /* Modified copy of template. *=
/
>> +    struct hypfs_funcs funcs;         /* Dynamic functions. */
>> +    struct hypfs_entry_dir *template; /* Template used. */
>
> const?

Yes.

>
>> @@ -150,6 +159,11 @@ struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
>>                                           unsigned int name_len);
>>   void *hypfs_alloc_dyndata(unsigned long size, unsigned long align);
>>   void *hypfs_get_dyndata(void);
>> +int hypfs_read_dyndir_id_entry(struct hypfs_entry_dir *template,
>
> const?

Yes.

>
>> +                               unsigned int id, bool is_last,
>> +                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
>> +struct hypfs_entry *hypfs_gen_dyndir_entry_id(struct hypfs_entry_dir *template,
>
> const?

Yes.


Juergen



--------------17B76CAA08B0E581952F3F76--

--zq09cQFK2MztX650FX9PfZqne4pUChG4T--

--iRTqzNAF16dzDUUORWQDukyLXID908L9w--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:40:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29037.58228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf29s-0001tz-CJ; Tue, 17 Nov 2020 14:40:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29037.58228; Tue, 17 Nov 2020 14:40:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf29s-0001ts-9O; Tue, 17 Nov 2020 14:40:36 +0000
Received: by outflank-mailman (input) for mailman id 29037;
 Tue, 17 Nov 2020 14:40:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kf29q-0001tn-I0
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:40:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d652301-9b6b-4e75-bf71-45e969c34aae;
 Tue, 17 Nov 2020 14:40:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0FDFEAC2E;
 Tue, 17 Nov 2020 14:40:31 +0000 (UTC)
X-Inumbo-ID: 1d652301-9b6b-4e75-bf71-45e969c34aae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605624031; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6HFszFh1G7iRnoFVT7oihr/NGDAnvmpT5ygMnzEOBt4=;
	b=NzNjnrwCZ7deUl5beX1zmvgKsNxEvU6QSRUAac9h4qnIxecopLIs9MQYcIXwJGHFCtOwle
	9XkkY3cgGxBe/4sZYrut8aDrnLLWtuKGmB1YR9CrQ6uCVA0manT1nOTllNtxSXVzE5E1M8
	DHJP7qSD+kTUyL9q8DyhNae+dDg1ZK0=
Subject: Re: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-9-jgross@suse.com>
 <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
 <6fe809d5-09c1-28d3-61ec-10244b2d7d5f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e93c98cd-1cd2-1646-9db9-3ebd8bc3684c@suse.com>
Date: Tue, 17 Nov 2020 15:40:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <6fe809d5-09c1-28d3-61ec-10244b2d7d5f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.11.2020 15:29, Jürgen Groß wrote:
> On 17.11.20 13:37, Jan Beulich wrote:
>> On 26.10.2020 10:13, Juergen Gross wrote:
>>> --- a/xen/common/hypfs.c
>>> +++ b/xen/common/hypfs.c
>>> @@ -19,28 +19,29 @@
>>>   CHECK_hypfs_dirlistentry;
>>>   #endif
>>>   
>>> -#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
>>> -#define DIRENTRY_SIZE(name_len) \
>>> -    (DIRENTRY_NAME_OFF +        \
>>> -     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>>> -
>>>   struct hypfs_funcs hypfs_dir_funcs = {
>>>       .read = hypfs_read_dir,
>>> +    .getsize = hypfs_getsize,
>>> +    .findentry = hypfs_dir_findentry,
>>>   };
>>>   struct hypfs_funcs hypfs_leaf_ro_funcs = {
>>>       .read = hypfs_read_leaf,
>>> +    .getsize = hypfs_getsize,
>>>   };
>>>   struct hypfs_funcs hypfs_leaf_wr_funcs = {
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_leaf,
>>> +    .getsize = hypfs_getsize,
>>>   };
>>>   struct hypfs_funcs hypfs_bool_wr_funcs = {
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_bool,
>>> +    .getsize = hypfs_getsize,
>>>   };
>>>   struct hypfs_funcs hypfs_custom_wr_funcs = {
>>>       .read = hypfs_read_leaf,
>>>       .write = hypfs_write_custom,
>>> +    .getsize = hypfs_getsize,
>>>   };
>>
>> With the increasing number of fields that may (deliberately or
>> by mistake) be NULL, should we gain some form of proactive
>> guarding against calls through such pointers?
> 
> Hmm, up to now I think such a bug would be detected rather fast.

Not sure: Are there any unavoidable uses of all affected code
paths?

I can add some ASSERT()s that mandatory functions are not NULL, either
when a node is added dynamically or, for the static nodes, during hypfs
initialization.

I'm not sure ASSERT()s alone are enough. I'd definitely prefer
something which at least avoids the obvious x86 PV privilege
escalation attack in case a call through NULL has gone
unnoticed earlier on. E.g. rather than storing NULL in unused
entries, store a non-canonical pointer so that the effect will
"just" be a DoS.
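A minimal sketch of the idea, assuming 48-bit x86-64 virtual addresses (the poison value and names are hypothetical, not Xen's actual choice):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical poison for unused hook slots: a non-canonical x86-64
 * address.  A stray indirect call through it raises #GP immediately,
 * rather than jumping to a potentially guest-mappable address such as
 * 0, so an overlooked NULL-style bug degrades to "just" a DoS. */
#define HYPFS_HOOK_POISON ((uintptr_t)0xdead000000000000ULL)

/* With 48-bit virtual addresses, an address is canonical iff bits 63:48
 * are a sign extension of bit 47, i.e. an arithmetic shift by 47 yields
 * either 0 or -1. */
static int is_canonical(uint64_t addr)
{
    int64_t top = (int64_t)addr >> 47;

    return top == 0 || top == -1;
}
```

Storing such a value instead of NULL in unused `struct hypfs_funcs` slots would make any missed guard fault deterministically.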

>>> @@ -88,6 +93,23 @@ static void hypfs_unlock(void)
>>>       }
>>>   }
>>>   
>>> +void *hypfs_alloc_dyndata(unsigned long size, unsigned long align)
>>
>> Will callers really need to specify (high) alignment values? IOW ...
>>
>>> +{
>>> +    unsigned int cpu = smp_processor_id();
>>> +
>>> +    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
>>> +    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
>>> +
>>> +    per_cpu(hypfs_dyndata, cpu) = _xzalloc(size, align);
>>
>> ... is xzalloc_bytes() not suitable for use here?
> 
> Good question.
> 
> Up to now I think we could get away without specific alignment.
> 
> I can drop that parameter for now if you'd like that better.

I think control over alignment should be limited to those
special cases really needing it.
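The simplified interface being suggested could look roughly like this (a userspace sketch using calloc as a stand-in for Xen's allocator; the name is hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of an xzalloc_bytes()-style helper: zeroed memory with the
 * allocator's natural alignment, which suffices for any fundamental
 * type, so callers need not pass an explicit alignment. */
static void *alloc_dyndata_sketch(size_t size)
{
    return calloc(1, size); /* zero-filled, aligned for max_align_t */
}
```

Only callers with a genuinely stricter requirement would then use a separate, alignment-taking variant.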

>>> @@ -275,22 +305,25 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
>>>   
>>>       l = container_of(entry, const struct hypfs_entry_leaf, e);
>>>   
>>> -    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
>>> +    return copy_to_guest(uaddr, l->u.content, entry->funcs->getsize(entry)) ?
>>> +                                              -EFAULT : 0;
>>
>> With the intended avoiding of locking, how is this ->getsize()
>> guaranteed to not ...
>>
>>> @@ -298,7 +331,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
>>>           goto out;
>>>   
>>>       ret = -ENOBUFS;
>>> -    if ( ulen < entry->size + sizeof(e) )
>>> +    if ( ulen < size + sizeof(e) )
>>>           goto out;
>>
>> ... invalidate the checking done here? (A similar risk looks to
>> exist on the write path, albeit there we have at least the
>> ->max_size checks, where I hope that field isn't mean to become
>> dynamic as well.)
> 
> I think you are right. I should add the size value as a parameter to the
> read and write functions.

Except that a function like hypfs_read_leaf() shouldn't really
require its caller to pass in the size of the leaf's payload.
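The race being discussed is a classic time-of-check/time-of-use problem; the usual fix is to sample the size once and use that single snapshot for both the buffer check and the copy. A simplified sketch (the struct here is a stand-in, not the real hypfs entry):

```c
#include <assert.h>
#include <string.h>

/* Stand-in entry whose size may change between two reads of it. */
struct entry_sketch {
    unsigned int size;
    const char *content;
};

/* Snapshot the size exactly once, so the bounds check and the copy can
 * never disagree even if e->size changes concurrently. */
static int read_entry(const struct entry_sketch *e,
                      char *dst, unsigned int dstlen)
{
    unsigned int size = e->size; /* single snapshot */

    if ( dstlen < size )
        return -1;               /* -ENOBUFS in the real code */
    memcpy(dst, e->content, size);
    return 0;
}
```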

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:43:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:43:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29043.58240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Ct-000266-UX; Tue, 17 Nov 2020 14:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29043.58240; Tue, 17 Nov 2020 14:43:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Ct-00025z-Re; Tue, 17 Nov 2020 14:43:43 +0000
Received: by outflank-mailman (input) for mailman id 29043;
 Tue, 17 Nov 2020 14:43:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+iwA=EX=gmail.com=cheyenne.wills@srs-us1.protection.inumbo.net>)
 id 1kf2Cs-00025u-F5
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:43:42 +0000
Received: from mail-lf1-x130.google.com (unknown [2a00:1450:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01e943a0-a0b4-4d3f-99d2-0adf996cd279;
 Tue, 17 Nov 2020 14:43:41 +0000 (UTC)
Received: by mail-lf1-x130.google.com with SMTP id u19so24197957lfr.7
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 06:43:41 -0800 (PST)
X-Inumbo-ID: 01e943a0-a0b4-4d3f-99d2-0adf996cd279
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QeNFXaMRbG0TXRdf1BcLhaiZbUvGLStAY3jmHGSDROA=;
        b=sR46a+KE8aZl+yy4j2I6SVx+VPjarZiC9qTGTt86igNaBpN4+QrtMYfpqqSBBvJ0lp
         uWPkzjtQoGRRXqOsogIqlYcmw12FP1H7164YHU5IJJbm8NTo9/eSOJqXNALdyM4pJSQ1
         zOxJKq2WMl/MH1pqVqS6AXvCEHLBfhKFE24t+pMMrWPQkH10RA5vWfNkCvx/U+F42qHq
         BVq/ldRpATNc+VGqTK4uS6OphxqMKOcsycKLAP071aWkEqdMhQ8PU/vL8mG6yCub2vKr
         TD7tE+LBzIxOz5jKqka4THrVqOJ6l3UNb+a0/IC5z7VvOZV7blHMq82U9pGmTH90/lzq
         jaqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QeNFXaMRbG0TXRdf1BcLhaiZbUvGLStAY3jmHGSDROA=;
        b=YFgPXUnF+IqaZKLtOYBk8GAy3n1sJ8GY50y83EVwLGX4dXpb5tTJQ0bzPLJFQ4SPpK
         znJhIBsrecNGJWdHrei509aoLJHr3i7lcYfsDO7jZdTdTMqddN61dzCJags3zwZLLZih
         zAPR+wlRWXB8l88r4ZXOOZZnUrdJ2MWM4OXULg0q23cCuGBGbG4VcFCUchYly6pA8jgJ
         0fmvyaH+AlGgmouMs39ffMerfOpPt/jmLiS3aocaIyXwjBf5tzoa0gOkkWgbisXn2+u8
         TYsg3lM92MV3Gy0Nxp1awjgxIjrvyRjAO4NHtJAFooRCZJn1fG+wJRlBJrHc4W6zN61g
         gd3Q==
X-Gm-Message-State: AOAM530ebr/w39aPndSVi9GEONvzpTId6moiUp1yvgUkvddGR8j1CAzL
	Z7gWoYPKmPZ/WqzTeF3HKyMsb3nDJkfcwiv/wOc=
X-Google-Smtp-Source: ABdhPJxqpGr8/kTWAmL7nf9+ebHPt8dVNKc+PwoLvFN1e17IVHFWFSDz77lXhlevDMmTRZJKQs2P8fdNLaPxG58jnq8=
X-Received: by 2002:a05:6512:1095:: with SMTP id j21mr1830382lfg.309.1605624220457;
 Tue, 17 Nov 2020 06:43:40 -0800 (PST)
MIME-Version: 1.0
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
In-Reply-To: <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
From: Cheyenne Wills <cheyenne.wills@gmail.com>
Date: Tue, 17 Nov 2020 07:43:29 -0700
Message-ID: <CAHpsFVcy2n2Sr845mPw4txH5UTbtKrbezRtgdmDaDX0T2r5wog@mail.gmail.com>
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000004d9f1605b44e851c"

--0000000000004d9f1605b44e851c
Content-Type: text/plain; charset="UTF-8"

The Solaris version reported in the copyright banner on the ISO is SunOS
Release 5.11 Version 11.4.0.15.0 64-bit

My existing guest solaris systems are also at the same release/version level

At the time of the panic, the panic log reports that the rcx register
contains '0606' (this was from my notes yesterday).  If additional
information is needed, I will need a bit more time to set up my system
again.

On Tue, Nov 17, 2020 at 1:12 AM Jan Beulich <jbeulich@suse.com> wrote:

> On 16.11.2020 22:57, Cheyenne Wills wrote:
> > Running Xen with XSA-351 is causing Solaris 11 systems to panic during
> > boot.  The panic screen is showing the failure to be coming from
> > "unix:rdmsr".  The panic occurs with existing guests (booting off a disk)
> > and the  booting from an install ISO image.
> >
> > I discussed the problem with "andyhhp__" in the "#xen" IRC channel and he
> > requested that I report it here.
>
> Thanks. What we need though is information on the specific MSR(s) that
> will need to have workarounds added: We surely would want to avoid
> blindly doing this for all that the XSA change disallowed access to.
> Reproducing the panic screen here might already help; proper full logs
> would be even better.
>
> Jan
>

--0000000000004d9f1605b44e851c--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:46:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:46:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29050.58253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Fi-0002FT-DL; Tue, 17 Nov 2020 14:46:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29050.58253; Tue, 17 Nov 2020 14:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Fi-0002FM-9v; Tue, 17 Nov 2020 14:46:38 +0000
Received: by outflank-mailman (input) for mailman id 29050;
 Tue, 17 Nov 2020 14:46:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mGex=EX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kf2Fh-0002FG-EB
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:46:37 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4383c75-692d-408c-98e1-7fe364ea4ae0;
 Tue, 17 Nov 2020 14:46:36 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mGex=EX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kf2Fh-0002FG-EB
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:46:37 +0000
X-Inumbo-ID: b4383c75-692d-408c-98e1-7fe364ea4ae0
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b4383c75-692d-408c-98e1-7fe364ea4ae0;
	Tue, 17 Nov 2020 14:46:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605624396;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=PHZmOYQKAawXJYU3wsIkiuYykq2AVxYFG5RJRHmYfQk=;
  b=G2EnqVItyAgVND5RkairYMwvM3/U5pWMS6Vycv0ZILdlJR/KsxnU6Xxs
   gbH97W8Cv0xrFaLJ0cRoXtLduFqobzo2/y6cXApVE73fQ1/y8MdG1v8AH
   gPWgFu7p1iiN7LfVYZZvdpoG+zO+R7ljHwUcEJ+eVyz7A3BDDZnwZOi8t
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yiyGHD1RHVq6yrvTz8YevvYS0Bx8G939tk9M2LKRgnehBun9Be7851ofXWyPKa8/IrZc/lmJBL
 5/gbaOFikFtdhnIzgRwBn3zMtd7QKtEyE/A8mD2BUdXRPtsDTS1VPI7TvhjM5+hVB+4qTSmwxi
 iRM1RdOrNLDUAUG0MIQzYKeQXvFxuhzzsoEVikemxMjb1m/c8ALLpcbZmbzbO1+ILW6iM+NYds
 CaOrZljAvTZOBCa+hA661zvFU2dqvf+eztZt8eEVDBBopSlY/iZ0kPyzLVvIpp/H5yYSog6p5y
 LuU=
X-SBRS: None
X-MesageID: 31338751
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="31338751"
Subject: Re: XSA-351 causing Solaris-11 systems to panic during boot.
To: Cheyenne Wills <cheyenne.wills@gmail.com>, Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>
References: <CAHpsFVc4AAm6L0rKUuV47ydOjtw7XAgFnDZxRjdCL0OHXJERDw@mail.gmail.com>
 <7bca24cb-a3af-b54d-b224-3c2a316859dd@suse.com>
 <CAHpsFVcy2n2Sr845mPw4txH5UTbtKrbezRtgdmDaDX0T2r5wog@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ad2bf514-75b9-1383-bc2f-c800ab5e1a7a@citrix.com>
Date: Tue, 17 Nov 2020 14:46:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAHpsFVcy2n2Sr845mPw4txH5UTbtKrbezRtgdmDaDX0T2r5wog@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 17/11/2020 14:43, Cheyenne Wills wrote:
> The Solaris version reported in the copyright banner on the ISO is
> SunOS Release 5.11 Version 11.4.0.15.0 64-bit
>
> My existing Solaris guest systems are at the same release/version
> level.
>
> At the time of the panic, the panic log reports that the rcx register
> contains '0606' (this was from my notes yesterday).  If additional
> information is needed, I will need a bit more time to set up my system
> again.

As I said on IRC, this is RAPL_POWER_UNIT, but if it is read unguarded,
then the others will be too.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:48:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29056.58264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2H2-0002NG-O2; Tue, 17 Nov 2020 14:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29056.58264; Tue, 17 Nov 2020 14:48:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2H2-0002N9-Kv; Tue, 17 Nov 2020 14:48:00 +0000
Received: by outflank-mailman (input) for mailman id 29056;
 Tue, 17 Nov 2020 14:47:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UWEc=EX=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kf2H1-0002N1-7J
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:47:59 +0000
Received: from mail-lj1-x236.google.com (unknown [2a00:1450:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6908074-aa12-48d3-a597-c8204e8fff0c;
 Tue, 17 Nov 2020 14:47:58 +0000 (UTC)
Received: by mail-lj1-x236.google.com with SMTP id 142so10909824ljj.10
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 06:47:58 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id s26sm3084683lji.31.2020.11.17.06.47.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 17 Nov 2020 06:47:56 -0800 (PST)
X-Inumbo-ID: d6908074-aa12-48d3-a597-c8204e8fff0c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=XpSxvfpdzP1lQuCn9JFmOxA384xIAGn8YKeAo1cdXno=;
        b=ZiMJ64FS5r16N5/za6VPjb/b6oz1QnE0t7IbuWlLP7gjj7/wT+Hm9TtwOjfkgfpnxw
         QWTObHI8SqzfFi3IB44qQpP1Ddsx6toBk5PIKc4XFkFgQ8E3DxvfHksSvHFaPnLhAU/a
         DGd3ShX7Xj1yTcuB25rwIuLW6cWFHwZZwEgxnOAUmKK3PrJ4FBYzUVCSVh5+qBDV1+gy
         qmb34tsP2hTbY3c8JoRamfL4NdVFErdiLtBEfotlRWZdGspz+VPyJEPfu29sqNpOMOOp
         uG/g8hm31w4w5yWm8J6ClLtlbNbhz272O4TB4nAmp3dKFprk1x3lgMJedaG8gN/IVS7w
         dEtQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=XpSxvfpdzP1lQuCn9JFmOxA384xIAGn8YKeAo1cdXno=;
        b=JMTR5oM4nM8xFQ/L9mtQviSrZFimfSKZ7HiAhd2whLBAjUBV7tyo2KXT1vpDrA98p+
         OIs0cLEOU8ZExll3zB8am0fHtXw6CjXQwCkNxo9X4Cg9bNzkA77RovYiYteVzyfPmMEy
         XSf4zr/unJ6HDiPqQJsubHcpSlU/89ByoRcYFX+8uHG5y41sukEMhNfO1l9NagNINj4d
         ISS9O8ktJRhpwhVJrvkcZIyjolmMB+otS1nZXjpc+zb6HQRXb2vPRzJdBQDDsBP3S+XQ
         Td3GakJpdxUdOmr/hpscAQflS/4eDsV6dUO6Ot46Z7IdKzGrR6Zu6MxMcCqmJzb0BMHN
         FA6Q==
X-Gm-Message-State: AOAM5322rtKSiyjmsyKnaAwj3cUMV2AqykjlLb0QmyQHbnjutNf0sJUc
	xVXrzM0PIpMdsVsa7l/1bBM=
X-Google-Smtp-Source: ABdhPJx/nUOhJJtuN0gHyX6KFxjGpHkQI9bdJpUKR1VOEFtFoX1LIu0TSgkm9SsXOB24vOiLhwXcgA==
X-Received: by 2002:a2e:6a14:: with SMTP id f20mr1853976ljc.377.1605624477002;
        Tue, 17 Nov 2020 06:47:57 -0800 (PST)
Subject: Re: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
From: Oleksandr <olekstysh@gmail.com>
To: paul@xen.org
Cc: xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Tim Deegan' <tim@xen.org>, 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
 <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org>
 <436143ea-609f-f6c3-4952-19fcf410fe8f@gmail.com>
Message-ID: <34133df1-bff2-f4df-00a5-674a2af867fc@gmail.com>
Date: Tue, 17 Nov 2020 16:47:50 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <436143ea-609f-f6c3-4952-19fcf410fe8f@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


Hi Paul,

>
>> The 'legacy' mechanism of mapping magic pages for ioreq servers
>> should remain x86 specific. I think that aspect of the code needs to
>> remain behind and not get moved into common code. You could do that
>> in arch-specific calls in hvm_ioreq_server_enable/disable() and
>> hvm_get_ioreq_server_info().
> Well, if the legacy mechanism is not going to be used for other
> arches and should remain x86 specific, I will try to investigate what
> should be left in the x86 code and rework the series.
> As a side note, I am afraid we won't get 100% code movement (which I
> managed to achieve here) in the next version of this patch, as we
> still need arch/x86/hvm/ioreq.c.

I am investigating how to split the code in order to leave the 'legacy'
mechanism x86 specific, and I have a few questions. Could you please
clarify the following:

1. The split of hvm_ioreq_server_enable/disable() is obvious to me, but
I would like to clarify the split of hvm_get_ioreq_server_info().
Is the following close to what you had in mind, or do I only need to
abstract the hvm_ioreq_server_map_pages() call?
(Incomplete and untested.)

+/* Called with ioreq_server lock held */
+int arch_ioreq_server_get_info(struct hvm_ioreq_server *s,
+                               unsigned long *ioreq_gfn,
+                               unsigned long *bufioreq_gfn,
+                               evtchn_port_t *bufioreq_port)
+{
+    if ( ioreq_gfn || bufioreq_gfn )
+    {
+        int rc = hvm_ioreq_server_map_pages(s);
+
+        if ( rc )
+            return rc;
+    }
+
+    if ( ioreq_gfn )
+        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+
+    if ( HANDLE_BUFIOREQ(s) )
+    {
+        if ( bufioreq_gfn )
+            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
+
+        if ( bufioreq_port )
+            *bufioreq_port = s->bufioreq_evtchn;
+    }
+
+    return 0;
+}
+
  int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                                unsigned long *ioreq_gfn,
                                unsigned long *bufioreq_gfn,
@@ -916,26 +954,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
      if ( s->emulator != current->domain )
          goto out;

-    if ( ioreq_gfn || bufioreq_gfn )
-    {
-        rc = hvm_ioreq_server_map_pages(s);
-        if ( rc )
-            goto out;
-    }
-
-    if ( ioreq_gfn )
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
-
-    if ( HANDLE_BUFIOREQ(s) )
-    {
-        if ( bufioreq_gfn )
-            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
-
-        if ( bufioreq_port )
-            *bufioreq_port = s->bufioreq_evtchn;
-    }
-
-    rc = 0;
+    rc = arch_ioreq_server_get_info(s, ioreq_gfn, bufioreq_gfn, bufioreq_port);

   out:
      spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);

2. If I understand the code correctly, besides the above-mentioned
functions, arch-specific calls are needed in hvm_ioreq_server_init()
and hvm_ioreq_server_deinit() to hide the hvm_ioreq_server_unmap_pages()
usage from the common code. I noticed that the rollback code in
hvm_ioreq_server_init() and the whole of hvm_ioreq_server_deinit() have
a lot in common, except for an extra ASSERT() and the
hvm_ioreq_server_free_pages() call in the latter. My question is: could
we just replace the rollback code with a call to
hvm_ioreq_server_deinit()? I assume the extra
hvm_ioreq_server_free_pages() call wouldn't be an issue here, but I am
not sure what to do with the ASSERT(): I expect it would be triggered
at such an early stage, so it probably needs moving out of
hvm_ioreq_server_deinit() (or dropping?), and the comment needs
updating. I am asking because this way we would end up with *a single*
arch hook here, arch_ioreq_server_deinit(), and reduce code duplication
a bit.

Something close to this (incomplete and untested):

@@ -761,18 +771,17 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
      return 0;

   fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
+    hvm_ioreq_server_deinit(s);
      return rc;
  }

+void arch_ioreq_server_deinit(struct hvm_ioreq_server *s)
+{
+    hvm_ioreq_server_unmap_pages(s);
+}
+
  static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
  {
-    ASSERT(!s->enabled);
      hvm_ioreq_server_remove_all_vcpus(s);

      /*
@@ -784,7 +793,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
       *       the page_info pointer to NULL, meaning the latter will do
       *       nothing.
       */
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_deinit(s);
      hvm_ioreq_server_free_pages(s);

      hvm_ioreq_server_free_rangesets(s);

      put_domain(s->emulator);


-- 

Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 14:50:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 14:50:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29062.58276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2JZ-0003JJ-5e; Tue, 17 Nov 2020 14:50:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29062.58276; Tue, 17 Nov 2020 14:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2JZ-0003JC-2V; Tue, 17 Nov 2020 14:50:37 +0000
Received: by outflank-mailman (input) for mailman id 29062;
 Tue, 17 Nov 2020 14:50:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kf2JX-0003J7-I8
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 14:50:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38ee0997-aef8-4514-8ca2-5c1e392bdcc7;
 Tue, 17 Nov 2020 14:50:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C1D1FAC23;
 Tue, 17 Nov 2020 14:50:33 +0000 (UTC)
X-Inumbo-ID: 38ee0997-aef8-4514-8ca2-5c1e392bdcc7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605624633; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0XJx1Hac8LSV8gciGkalOYU/2bo1JHDIsC6Q7dplHtE=;
	b=HM7lGMBKddNO+cNqx4XGjP2TP7Wac9zyGTSTVkazxLorsnXK5zREgVrG6HM26MulWmql3e
	yj755Xi0Zw1ELS7tkBUjlQuLMtYtssDJEy2N3o2Fep5Z+3hlei83Zp69luE2idP0SSIlCJ
	gyHvgpqQIO/l+nFqe+H5hb2mKQ5jHv8=
Subject: Re: [PATCH 09/12] xen/hypfs: add support for id-based dynamic
 directories
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-10-jgross@suse.com>
 <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
 <95f673e5-90a8-0fe9-3842-bdb9de5c4aa4@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0b8ae7db-b1f6-a23c-e608-27650db702ae@suse.com>
Date: Tue, 17 Nov 2020 15:50:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <95f673e5-90a8-0fe9-3842-bdb9de5c4aa4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.11.2020 15:38, Jürgen Groß wrote:
> On 17.11.20 14:33, Jan Beulich wrote:
>> On 26.10.2020 10:13, Juergen Gross wrote:
>>> +static struct hypfs_entry *hypfs_dyndir_findentry(struct hypfs_entry_dir *dir,
>>> +                                                  const char *name,
>>> +                                                  unsigned int name_len)
>>> +{
>>> +    struct hypfs_dyndir_id *data;
>>> +
>>> +    data = hypfs_get_dyndata();
>>> +    if ( !data )
>>> +        return ERR_PTR(-ENOENT);
>>> +
>>> +    /* Use template with original findentry function. */
>>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>>
>> Why does this pass the address of the template? If it truly is
>> (just) a template, then its dirlist ought to be empty at all
>> times? If otoh the "template" indeed gets used as a node in the
>> tree then perhaps it wants naming differently? "Stem" would come
>> to mind, but likely there are better alternatives. I've also
>> considered the German "Statthalter", but its English translations
>> don't seem reasonable to me here. And "placeholder" has kind of a
>> negative touch. (Also in this case some of my "const?" remarks
>> may be irrelevant.)
> 
> It is basically a template tree.
> 
> In the current use case (cpupool/<id>/sched-gran) the template is
> <id> with the leaf "sched-gran", which is the template for the
> per-cpupool incarnation.

I can see sched-gran being a template, albeit even that will get
dirtied as soon as a second leaf appears, as then these entries
again need linking together. I think anything called template
should be strictly r/o.

It's also not clear to me how adding a 2nd level in the hierarchy
would end up working: <component>/<id1>/<id2>/<leaf>. How would
<leaf> know all the higher level IDs it's associated with? While
I don't think this needs making work right away, the underlying
model at least shouldn't prohibit it.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 15:01:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 15:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29072.58292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Tz-0004P6-Gv; Tue, 17 Nov 2020 15:01:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29072.58292; Tue, 17 Nov 2020 15:01:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Tz-0004Oz-Dv; Tue, 17 Nov 2020 15:01:23 +0000
Received: by outflank-mailman (input) for mailman id 29072;
 Tue, 17 Nov 2020 15:01:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf2Ty-0004Ou-S0
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:01:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15b73f8c-0cce-44e5-805f-d965e866ca55;
 Tue, 17 Nov 2020 15:01:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 42C85AC23;
 Tue, 17 Nov 2020 15:01:21 +0000 (UTC)
X-Inumbo-ID: 15b73f8c-0cce-44e5-805f-d965e866ca55
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605625281; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ibuZn72g8bYhvlka5ILnkbgS6avpop6EoFBhrrIahqw=;
	b=aJjY+HcEHoAr46kS23yBdZlUVqpTiDEZLNF52YBrRZogwfInQPylhzEzqdOZsPPw4TSlSF
	JBTqjurlcQx6w4UNlqao+ZGvdlBLD2/fZ6H+lS8sDkqjOqQbwqam6ETkxLbuYG5MzMntQ5
	e9LoN3gttCwaG9pAtL/Ckyhkv4QMtbc=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <0c199e96-c686-2045-8972-036e69600873@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
Message-ID: <de52d6b3-9b6a-16a2-51cf-b032df58671c@suse.com>
Date: Tue, 17 Nov 2020 16:01:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0c199e96-c686-2045-8972-036e69600873@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="4kcXVnjuzAwd3oqi2Rqe4TSGxRakGonCU"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--4kcXVnjuzAwd3oqi2Rqe4TSGxRakGonCU
Content-Type: multipart/mixed; boundary="4oFseI7xPoJmgbvHgpXNKFz86eNi0bTwi";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
Message-ID: <de52d6b3-9b6a-16a2-51cf-b032df58671c@suse.com>
Subject: Re: [PATCH 10/12] xen/hypfs: add cpupool directories
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-11-jgross@suse.com>
 <0c199e96-c686-2045-8972-036e69600873@suse.com>
In-Reply-To: <0c199e96-c686-2045-8972-036e69600873@suse.com>

--4oFseI7xPoJmgbvHgpXNKFz86eNi0bTwi
Content-Type: multipart/mixed;
 boundary="------------F444E41C153C39471BD66BDF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F444E41C153C39471BD66BDF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 15:13, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
>>       .notifier_call = cpu_callback
>>   };
>>
>> +#ifdef CONFIG_HYPFS
>> +static HYPFS_DIR_INIT(cpupool_pooldir, "id");
>
> This "id" string won't appear anywhere, will it? I would have
> expected this to act as the format string used when generating
> names of the dynamic entries. This would e.g. allow CPU pools
> to have decimal numbered names, but other entries also hex
> ones, and then if so desired also e.g. with leading zeros.

I like that idea.

>
>> +static int cpupool_dir_read(const struct hypfs_entry *entry,
>> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    int ret = 0;
>> +    struct cpupool **q;
>
> I was going to ask for const here, but the way for_each_cpupool()
> works looks to prohibit this. Nevertheless I wonder whether the
> extra level of indirection there wouldn't better be dropped. Of
> the users, only cpupool_destroy() looks to need it, so open-
> coding the loop there (or introducing an auxiliary variable)
> would allow improvements here and elsewhere. (Actually I notice
> there's also a similar use in cpupool_create(), but the general
> consideration remains.)

I'll have a look.

>
>> +    spin_lock(&cpupool_lock);
>> +
>> +    for_each_cpupool(q)
>> +    {
>> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
>> +                                         !(*q)->next, &uaddr);
>> +        if ( ret )
>> +            break;
>> +    }
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    return ret;
>> +}
>> +
>> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
>> +{
>> +    struct cpupool **q;
>> +    unsigned int size = 0;
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    for_each_cpupool(q)
>> +        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));
>
> Beyond the remark above I consider this problematic: If the pool
> ID was negative, the use of %d here would get things out of sync
> with the %u uses in hypfs.c. I guess exposing
> HYPFS_DIRENTRY_SIZE() isn't the right approach, and you instead
> need another hypfs library function.

Fine with me.

>
>> +static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
>> +                                                 const char *name,
>> +                                                 unsigned int name_len)
>> +{
>> +    unsigned long id;
>> +    const char *end;
>> +    struct cpupool *cpupool;
>> +
>> +    id = simple_strtoul(name, &end, 10);
>> +    if ( id > INT_MAX || end != name + name_len )
>
> What does this INT_MAX match up with? Afaics
> XEN_SYSCTL_CPUPOOL_OP_CREATE is fine to have an effectively
> negative pool ID passed in (the public interface struct uses
> uint32_t, but this gets converted to plain int first thing in
> the sysctl handler).

Oh, this wants to be fixed.

>
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    cpupool = __cpupool_find_by_id(id, true);
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    if ( !cpupool )
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>
> The latest this one makes clear that cpupool_lock nests inside
> the hypfs one. I think this wants spelling out next to the
> definition of the former, as it implies that there are
> restrictions on what can be done from inside cpupool-locked
> regions. hypfs_read_dyndir_id_entry(), for example, has to
> remain lock free for this reason.

Okay, I'll add a comment.


Juergen

--------------F444E41C153C39471BD66BDF--

--4oFseI7xPoJmgbvHgpXNKFz86eNi0bTwi--

--4kcXVnjuzAwd3oqi2Rqe4TSGxRakGonCU
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+z5cAFAwAAAAAACgkQsN6d1ii/Ey9Z
YQf/co6Y42Ze2niGcK5/fh1EpqGclwegSdl5Z1g1Q6wh7VaDR67Oqnnwd1BNhcvJ9YaUHzyPe57O
OuWz/IHl8s44Mzobh2qETwYm1xahgddsLHvqoy0xiUxqhBEWxQX8b91GNX1jsGwrW6MDmWQ+r149
lLnUi8mNhNUbWa59dCP5Kn1ABqHwdEKiWaqIxodwppZcbmb8MGQGckhfcG0rIfoT4zbxxaAkmxBG
b3Bq11vGlgAkgpxkOhS0ujMmlOPQmXKB+H34bC7Zpkw1UtWXJVN0c0gJ8pwjI9RsUR7b5/TrmUPF
Wfl+M5+qoxHrtPLNMAixjT/QE0kSqaomyzAOmkMaaQ==
=Z4AO
-----END PGP SIGNATURE-----

--4kcXVnjuzAwd3oqi2Rqe4TSGxRakGonCU--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 15:07:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 15:07:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29078.58304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Zq-0004cx-AZ; Tue, 17 Nov 2020 15:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29078.58304; Tue, 17 Nov 2020 15:07:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2Zq-0004cq-7H; Tue, 17 Nov 2020 15:07:26 +0000
Received: by outflank-mailman (input) for mailman id 29078;
 Tue, 17 Nov 2020 15:07:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf2Zo-0004cl-RV
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:07:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6219acbc-1240-4df5-b5a4-77343d3e2751;
 Tue, 17 Nov 2020 15:07:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 280FFAC1F;
 Tue, 17 Nov 2020 15:07:23 +0000 (UTC)
X-Inumbo-ID: 6219acbc-1240-4df5-b5a4-77343d3e2751
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605625643; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uNQro1ozDnCyHXXjR8NJDJd2P8pXOh4IMjO3ySENtKw=;
	b=qYtzbQkSC2pEriL93ARujHp6ECdbIpmcVzyJ2fHhN8K7WhjvYIhYcPconukl+UBuBqBMZa
	EfaYtTr0/x+A5qK9bpJqW0a02HZ4UxgOyNQQy+MnDyTfqvEGFShbP7mGUrIX2VToc0VJCm
	t5rFKDc80XhFNi4x4+TVpJDEg+m0UCg=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-9-jgross@suse.com>
 <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
 <6fe809d5-09c1-28d3-61ec-10244b2d7d5f@suse.com>
 <e93c98cd-1cd2-1646-9db9-3ebd8bc3684c@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
Message-ID: <820af202-7a40-600a-39ff-d1a49ca95301@suse.com>
Date: Tue, 17 Nov 2020 16:07:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e93c98cd-1cd2-1646-9db9-3ebd8bc3684c@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ibkXbRJKiPOH09e7nBd6wNiiqbqqAWnQg"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ibkXbRJKiPOH09e7nBd6wNiiqbqqAWnQg
Content-Type: multipart/mixed; boundary="BfIVQNf697HHKSgG0xVHFLCgrWD0WNg29";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <820af202-7a40-600a-39ff-d1a49ca95301@suse.com>
Subject: Re: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-9-jgross@suse.com>
 <d8653200-fbee-4e87-3e2d-7062879d7b4e@suse.com>
 <6fe809d5-09c1-28d3-61ec-10244b2d7d5f@suse.com>
 <e93c98cd-1cd2-1646-9db9-3ebd8bc3684c@suse.com>
In-Reply-To: <e93c98cd-1cd2-1646-9db9-3ebd8bc3684c@suse.com>

--BfIVQNf697HHKSgG0xVHFLCgrWD0WNg29
Content-Type: multipart/mixed;
 boundary="------------6F2186788185058F457F8B58"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6F2186788185058F457F8B58
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 15:40, Jan Beulich wrote:
> On 17.11.2020 15:29, Jürgen Groß wrote:
>> On 17.11.20 13:37, Jan Beulich wrote:
>>> On 26.10.2020 10:13, Juergen Gross wrote:
>>>> --- a/xen/common/hypfs.c
>>>> +++ b/xen/common/hypfs.c
>>>> @@ -19,28 +19,29 @@
>>>>    CHECK_hypfs_dirlistentry;
>>>>    #endif
>>>>
>>>> -#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
>>>> -#define DIRENTRY_SIZE(name_len) \
>>>> -    (DIRENTRY_NAME_OFF +        \
>>>> -     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>>>> -
>>>>    struct hypfs_funcs hypfs_dir_funcs = {
>>>>        .read = hypfs_read_dir,
>>>> +    .getsize = hypfs_getsize,
>>>> +    .findentry = hypfs_dir_findentry,
>>>>    };
>>>>    struct hypfs_funcs hypfs_leaf_ro_funcs = {
>>>>        .read = hypfs_read_leaf,
>>>> +    .getsize = hypfs_getsize,
>>>>    };
>>>>    struct hypfs_funcs hypfs_leaf_wr_funcs = {
>>>>        .read = hypfs_read_leaf,
>>>>        .write = hypfs_write_leaf,
>>>> +    .getsize = hypfs_getsize,
>>>>    };
>>>>    struct hypfs_funcs hypfs_bool_wr_funcs = {
>>>>        .read = hypfs_read_leaf,
>>>>        .write = hypfs_write_bool,
>>>> +    .getsize = hypfs_getsize,
>>>>    };
>>>>    struct hypfs_funcs hypfs_custom_wr_funcs = {
>>>>        .read = hypfs_read_leaf,
>>>>        .write = hypfs_write_custom,
>>>> +    .getsize = hypfs_getsize,
>>>>    };
>>>
>>> With the increasing number of fields that may (deliberately or
>>> by mistake) be NULL, should we gain some form of proactive
>>> guarding against calls through such pointers?
>>
>> Hmm, up to now I think such a bug would be detected rather fast.
>
> Not sure: Are there any unavoidable uses of all affected code
> paths?

Assuming that any new node implementation is tested at least once,
yes. But in general you are right, of course.

>
>> I can add some ASSERT()s for mandatory functions not being NULL when
>> a node is added dynamically or during hypfs initialization for the
>> static nodes.
>
> I'm not sure ASSERT()s alone are enough. I'd definitely prefer
> something which at least avoids the obvious x86 PV privilege
> escalation attack in case a call through NULL has gone
> unnoticed earlier on. E.g. rather than storing NULL in unused
> entries, store a non-canonical pointer so that the effect will
> "just" be a DoS.

Hmm, either this, or a dummy function doing no harm?
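
A dummy-handler variant could look roughly like this standalone sketch
(simplified types and invented names like deny_access; not the actual
hypfs code):

```c
#include <errno.h>

/* Simplified stand-in for struct hypfs_funcs. */
struct funcs {
    int (*read)(const void *entry);
    int (*write)(const void *entry);
};

/* Dummy handler for otherwise-unused slots: a stray call fails with
 * -EACCES instead of jumping through NULL (which, on x86 PV, could be
 * turned into a privilege escalation). */
static int deny_access(const void *entry)
{
    (void)entry;
    return -EACCES;
}

static int read_leaf(const void *entry)
{
    (void)entry;
    return 0;
}

/* Read-only leaf: the write slot gets the dummy, never NULL. */
static const struct funcs leaf_ro_funcs = {
    .read  = read_leaf,
    .write = deny_access,
};
```

A write attempt through leaf_ro_funcs.write() then simply returns an
error, turning a potential escalation into at worst a failed operation.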

>
>>>> @@ -88,6 +93,23 @@ static void hypfs_unlock(void)
>>>>        }
>>>>    }
>>>>
>>>> +void *hypfs_alloc_dyndata(unsigned long size, unsigned long align)
>>>
>>> Will callers really need to specify (high) alignment values? IOW ...
>>>
>>>> +{
>>>> +    unsigned int cpu = smp_processor_id();
>>>> +
>>>> +    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
>>>> +    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
>>>> +
>>>> +    per_cpu(hypfs_dyndata, cpu) = _xzalloc(size, align);
>>>
>>> ... is xzalloc_bytes() not suitable for use here?
>>
>> Good question.
>>
>> Up to now I think we could get away without specific alignment.
>>
>> I can drop that parameter for now if you'd like that better.
>
> I think control over alignment should be limited to those
> special cases really needing it.

Okay, I'll drop the align parameter then.
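
With the align parameter gone, the allocator could shrink to something
like this standalone sketch (calloc() stands in for Xen's
xzalloc_bytes(), and a plain static models the per-CPU hypfs_dyndata
slot; not the actual patch code):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the per-CPU hypfs_dyndata pointer. */
static void *hypfs_dyndata;

/* Allocate zeroed dynamic data with default alignment only; in Xen
 * this would be xzalloc_bytes() storing into this_cpu(hypfs_dyndata). */
static void *hypfs_alloc_dyndata(size_t size)
{
    assert(hypfs_dyndata == NULL);   /* no allocation may be pending */
    hypfs_dyndata = calloc(1, size);
    return hypfs_dyndata;
}

static void *hypfs_get_dyndata(void)
{
    return hypfs_dyndata;
}

static void hypfs_free_dyndata(void)
{
    free(hypfs_dyndata);
    hypfs_dyndata = NULL;
}
```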

>
>>>> @@ -275,22 +305,25 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
>>>>
>>>>        l = container_of(entry, const struct hypfs_entry_leaf, e);
>>>>
>>>> -    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
>>>> +    return copy_to_guest(uaddr, l->u.content, entry->funcs->getsize(entry)) ?
>>>> +                                              -EFAULT : 0;
>>>
>>> With the intended avoiding of locking, how is this ->getsize()
>>> guaranteed to not ...
>>>
>>>> @@ -298,7 +331,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
>>>>            goto out;
>>>>
>>>>        ret = -ENOBUFS;
>>>> -    if ( ulen < entry->size + sizeof(e) )
>>>> +    if ( ulen < size + sizeof(e) )
>>>>            goto out;
>>>
>>> ... invalidate the checking done here? (A similar risk looks to
>>> exist on the write path, albeit there we have at least the
>>> ->max_size checks, where I hope that field isn't meant to become
>>> dynamic as well.)
>>
>> I think you are right. I should add the size value as a parameter to the
>> read and write functions.
>=20
> Except that a function like hypfs_read_leaf() shouldn't really
> require its caller to pass in the size of the leaf's payload.

It's not the leaf's payload size, but the size the user buffer has been
tested against.
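
The scheme of passing the once-validated size down could be sketched
like this (standalone; memcpy() stands in for copy_to_guest(), and all
names are illustrative, not the series' actual signatures):

```c
#include <errno.h>
#include <string.h>

struct entry {
    const void *content;
    size_t (*getsize)(const struct entry *);
};

/* The leaf reader copies exactly the size its caller validated the
 * user buffer against. */
static int read_leaf(const struct entry *e, void *buf, size_t size)
{
    memcpy(buf, e->content, size);   /* copy_to_guest() in Xen */
    return 0;
}

static int do_read(const struct entry *e, void *buf, size_t buflen)
{
    size_t size = e->getsize(e);     /* evaluated exactly once */

    if ( buflen < size )
        return -ENOBUFS;
    /* Pass the checked value down: a concurrent size change cannot
     * invalidate the buffer check above. */
    return read_leaf(e, buf, size);
}

/* Sample leaf for illustration. */
static size_t size_two(const struct entry *e)
{
    (void)e;
    return 2;
}
static const struct entry sample = { "hi", size_two };
```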


Juergen

--------------6F2186788185058F457F8B58
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------6F2186788185058F457F8B58--

--BfIVQNf697HHKSgG0xVHFLCgrWD0WNg29--

--ibkXbRJKiPOH09e7nBd6wNiiqbqqAWnQg
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+z5yoFAwAAAAAACgkQsN6d1ii/Ey+i
agf/XusFWmv91GHOhWXgSEs2O4hFuJBjB967vrTiQPdo8KYIesgv2zdFrZMuB7En8+tzN8NJfa+K
QTNybX2LXUi7gF4+Zf/G1xgPVEPcFWLo+3CnhJT4YdZRJ/FMljZoLIut2CM21hkEemPGGAEZfVlQ
QI2j3krnwNHV4N89nRiCAsaLdJm3Ow6E6UFBVdMUUu7Ld0nS5PlzPHz88GXVdBcFYy1lPySgZ582
zD8zooqnjgarXtzNvM4O+ZgN/AIkqNwDpctGmz6IIDOhL81CH/RA1odXqJVeiJipDncPU2n0IQ0t
q8kN0MxXr9igjhvWqHwbGOGU3vr1qzwJkqHSx0RLnQ==
=CyAJ
-----END PGP SIGNATURE-----

--ibkXbRJKiPOH09e7nBd6wNiiqbqqAWnQg--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 15:10:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 15:10:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29083.58316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2cJ-0004qT-PE; Tue, 17 Nov 2020 15:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29083.58316; Tue, 17 Nov 2020 15:09:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2cJ-0004qM-M2; Tue, 17 Nov 2020 15:09:59 +0000
Received: by outflank-mailman (input) for mailman id 29083;
 Tue, 17 Nov 2020 15:09:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUfM=EX=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kf2cI-0004qH-LH
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:09:58 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6049ace-4c61-4d3e-aff2-95e05a55cab9;
 Tue, 17 Nov 2020 15:09:56 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AHF9sei029017
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK)
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 16:09:55 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 4624C2E9CA8; Tue, 17 Nov 2020 16:09:49 +0100 (MET)
X-Inumbo-ID: b6049ace-4c61-4d3e-aff2-95e05a55cab9
Date: Tue, 17 Nov 2020 16:09:49 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201117150949.GA3791@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 17 Nov 2020 16:09:55 +0100 (MET)

Hello,
so, after fixing an issue in the NetBSD kernel related to PV clock
interrupts, I'm back with physical interrupt issues.
At some point during initialisation, the dom0 kernel stops receiving
interrupts for its disk controller. The disk controller is:
[   1.0000030] mfii0 at pci6 dev 0 function 0: "PERC H740P Adapter ", firmware 51.13.0-3485, 8192MB cache
(XEN) d0: bind: m_gsi=34 g_gsi=34
[   1.0000030] allocated pic ioapic2 type level pin 2 level 6 to cpu0 slot 2 idt entry 103
[   1.0000030] mfii0: interrupting at ioapic2 pin 2

Entering the NetBSD kernel debugger and looking at interrupt counters,
I see that some interrupts did trigger on ioapic2 pin 2, as well as for
some other hardware controllers.
I printed the controller's status when the command times out, and
the controller says that there is an interrupt pending. So I guess that
the command was executed, but the dom0 kernel didn't get interrupted.

At this point I can't say whether other hardware controller interrupts are
working (because of the lockdown I don't have physical access
to the hardware).

What's strange is that some Xen console activity seems to be enough to
resume interrupt activity. Hitting ^A 3 times is enough to get some progress
on the dom0's disk controller, and hitting 'v' is usually enough to
get the dom0 to multiuser. Once there, the system looks stable; I can log
in over the network. But I/O may stall again on reboot, maybe because the
dom0 kernel is back to using synchronous console output.

Any idea what to look at from here?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 15:15:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 15:15:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29090.58328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2hc-0005kh-E1; Tue, 17 Nov 2020 15:15:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29090.58328; Tue, 17 Nov 2020 15:15:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2hc-0005ka-Ab; Tue, 17 Nov 2020 15:15:28 +0000
Received: by outflank-mailman (input) for mailman id 29090;
 Tue, 17 Nov 2020 15:15:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf2ha-0005kV-SW
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:15:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9abedb6a-64e2-4b5b-9905-b4b448d632d5;
 Tue, 17 Nov 2020 15:15:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2665FAC23;
 Tue, 17 Nov 2020 15:15:25 +0000 (UTC)
X-Inumbo-ID: 9abedb6a-64e2-4b5b-9905-b4b448d632d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605626125; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NIMh+kDn8sv7QigDFeMSUidn3Gdqdrk8+0MbayEVtE4=;
	b=NuFy0+ceA+njIr7i79ghUoRniWt0RWugotfl2AbLWX0y2kr2epuWF4BgtL0c6FHikGLnnF
	dMBRZN61WrxozY4dkNmUg14JJcX/41AhZJK6AMOVBoXHmkt8ON5cxMOXl2/WRGBEdWq4LG
	ez1uBOqgy41lpyJj3xgXr5j8XryF/p4=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-10-jgross@suse.com>
 <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
 <95f673e5-90a8-0fe9-3842-bdb9de5c4aa4@suse.com>
 <0b8ae7db-b1f6-a23c-e608-27650db702ae@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 09/12] xen/hypfs: add support for id-based dynamic
 directories
Message-ID: <a5d40e13-1585-1093-8d04-ab49874d63f4@suse.com>
Date: Tue, 17 Nov 2020 16:15:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0b8ae7db-b1f6-a23c-e608-27650db702ae@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="z8tazrwpXzoFTBMLIbNsA3eK8YaO0egYm"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--z8tazrwpXzoFTBMLIbNsA3eK8YaO0egYm
Content-Type: multipart/mixed; boundary="t3GALMtWCUxtR9JAA9JMpXVdWHZVMftqI";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <a5d40e13-1585-1093-8d04-ab49874d63f4@suse.com>
Subject: Re: [PATCH 09/12] xen/hypfs: add support for id-based dynamic
 directories
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-10-jgross@suse.com>
 <6f8c0d3d-73f6-d10f-182a-8bf76856bf09@suse.com>
 <95f673e5-90a8-0fe9-3842-bdb9de5c4aa4@suse.com>
 <0b8ae7db-b1f6-a23c-e608-27650db702ae@suse.com>
In-Reply-To: <0b8ae7db-b1f6-a23c-e608-27650db702ae@suse.com>

--t3GALMtWCUxtR9JAA9JMpXVdWHZVMftqI
Content-Type: multipart/mixed;
 boundary="------------94DDC25416F4E4CDBF94B304"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------94DDC25416F4E4CDBF94B304
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 15:50, Jan Beulich wrote:
> On 17.11.2020 15:38, Jürgen Groß wrote:
>> On 17.11.20 14:33, Jan Beulich wrote:
>>> On 26.10.2020 10:13, Juergen Gross wrote:
>>>> +static struct hypfs_entry *hypfs_dyndir_findentry(struct hypfs_entry_dir *dir,
>>>> +                                                  const char *name,
>>>> +                                                  unsigned int name_len)
>>>> +{
>>>> +    struct hypfs_dyndir_id *data;
>>>> +
>>>> +    data = hypfs_get_dyndata();
>>>> +    if ( !data )
>>>> +        return ERR_PTR(-ENOENT);
>>>> +
>>>> +    /* Use template with original findentry function. */
>>>> +    return data->template->e.funcs->findentry(data->template, name, name_len);
>>>
>>> Why does this pass the address of the template? If it truly is
>>> (just) a template, then its dirlist ought to be empty at all
>>> times? If otoh the "template" indeed gets used as a node in the
>>> tree then perhaps it wants naming differently? "Stem" would come
>>> to mind, but likely there are better alternatives. I've also
>>> considered the German "Statthalter", but its English translations
>>> don't seem reasonable to me here. And "placeholder" has kind of a
>>> negative touch. (Also in this case some of my "const?" remarks
>>> may be irrelevant.)
>>
>> It is basically a template tree.
>>
>> In the current use case (cpupool/<id>/sched-gran) the template is
>> <id> with the leaf "sched-gran" which is the template for the per
>> cpupool incarnation.
>
> I can see sched-gran being a template, albeit even that will get
> dirtied as soon as a second leaf appears, as then these entries
> again need linking together. I think anything called template
> should be strictly r/o.

After boot the template will be constant, just like the hypfs tree
created at boot time.

In theory it could be set up completely statically, but this would be
a rather awful mess of macros.

> It's also not clear to me how adding a 2nd level in the hierarchy
> would end up working: <component>/<id1>/<id2>/<leaf>. How would
> <leaf> know all the higher level IDs it's associated with? While
> I don't think this needs making work right away, the underlying
> model at least shouldn't prohibit it.

This is the purpose of hypfs_alloc_dyndata(). You'd need something
like struct hypfs_dyndir_id, but with two ids in it.
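
Such a two-id dyndata could be sketched as follows (purely hypothetical
names, not from the series; the per-CPU slot is modeled with a plain
static):

```c
/* Hypothetical dyndata layout for a two-level <id1>/<id2> hierarchy:
 * each directory level of the tree walk records its own id, so the
 * leaf handlers further down can recover the full path context. */
struct hypfs_dyndir_ids {
    const void *template;   /* template subtree being instantiated */
    unsigned int id1;       /* outer id, e.g. a cpupool id */
    unsigned int id2;       /* inner id below it */
};

static struct hypfs_dyndir_ids dyndata;   /* per-CPU in the real code */

/* Each level of the walk fills in its own id. */
static void enter_outer_dir(unsigned int id) { dyndata.id1 = id; }
static void enter_inner_dir(unsigned int id) { dyndata.id2 = id; }
```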


Juergen

--------------94DDC25416F4E4CDBF94B304
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------94DDC25416F4E4CDBF94B304--

--t3GALMtWCUxtR9JAA9JMpXVdWHZVMftqI--

--z8tazrwpXzoFTBMLIbNsA3eK8YaO0egYm
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+z6QwFAwAAAAAACgkQsN6d1ii/Ey/9
igf/Ypyrkdzn80nw8mMiozvdXUZphXfHWleEFJSmVNPJ0f9NMHSQr2HUYewItKAvlBGqwkV6qEQF
17KmsNToZXjF/hP86I0kQT34ONwmYVmF2o3luu6ZXIDuPS8pSibjsCg+XeXsOJtpPakZMndq7Iuf
5YLZ7+IfZhugSp+pJUN9+OXeWpw67xEC916oT5waHIGTaR6qFjzHlYyOxlCT3PzTOVr29yrV5Qey
M+5hCwGkE57djKDL9vPFeJhi2F0KKFIIELB9QVNR4ilBPDSZsld3D0SzwAJDgL6d4f38hhQJxHbl
cYxHxt/G2O6Rs+yr8GTuQDM+iELMrQwBriMemgjDWA==
=gFyO
-----END PGP SIGNATURE-----

--z8tazrwpXzoFTBMLIbNsA3eK8YaO0egYm--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 15:29:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 15:29:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29096.58339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2ut-0006ry-MX; Tue, 17 Nov 2020 15:29:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29096.58339; Tue, 17 Nov 2020 15:29:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf2ut-0006rr-JL; Tue, 17 Nov 2020 15:29:11 +0000
Received: by outflank-mailman (input) for mailman id 29096;
 Tue, 17 Nov 2020 15:29:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lsz4=EX=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kf2us-0006rm-Fv
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:29:10 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad4a12b0-990c-4f87-b80e-ad3f99f52387;
 Tue, 17 Nov 2020 15:29:09 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id 23so23561499wrc.8
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 07:29:09 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id v19sm4122033wmj.31.2020.11.17.07.29.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 17 Nov 2020 07:29:07 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lsz4=EX=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kf2us-0006rm-Fv
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:29:10 +0000
X-Inumbo-ID: ad4a12b0-990c-4f87-b80e-ad3f99f52387
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ad4a12b0-990c-4f87-b80e-ad3f99f52387;
	Tue, 17 Nov 2020 15:29:09 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id 23so23561499wrc.8
        for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 07:29:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=yioG4elPx+SggR71YG0KF7MvFUWAj6yyGjSEmCGQ83E=;
        b=oywscOk5oZIBnuX4EaIZV3WBDfwwnjNueXtHJEo6x1TfH3AUMJFES9Ll6ygpJPCKeP
         AIHMqie2XT8NU4TFpPULs0WmzS2KXcBbSMiFKBH+Nnb/Wz+8N+gAkSuKV0pKE5e9PbmB
         uzZzrEYgIrDL6c5M5AhU8WNFBmqJGMiN+JX2YmPvokRBgVzqVymg/RJEyQvzehRqNXAm
         Si7zlZM07wF1wdLy2rXPu/VijmTgafvCGggrHQABT1HwX/7rWqepUY1R1v8uAkI9Xxud
         7DbXIZxih5URybF+6y0jkFafmA/bV9BcOwbXr0+Ivoqk+QZtJlg2oA1f9dHYZ7egNyNv
         tPDg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=yioG4elPx+SggR71YG0KF7MvFUWAj6yyGjSEmCGQ83E=;
        b=O78sy0DGrSY05ZHaM/fP1CJ52zjiMn+wUgenNFmQ+TykSyfTNQbjNadzgMkYGCj0BQ
         QAReEuhHSYdkdBypu2i1hEbeWAm6IVVPWkwGoJNJH0ciNUv7qO8CMCmAQOnkZVvUhQTN
         4SqHo6XJrdwneJwlvqhs36sx53q0A1yWUu781Kj5J3eHdhvoJm66LgGUP9AYHpbDIBdO
         8SWEqne/r/ihRx7t6nxDPkndz/jkxCFdFhABCwb86YW3JK1FG/1PlyAFlplxRTxUMPOg
         AUjeQFfsfy236/miu4BWsBdDn43gl3+cieeOMzQ7vfblqV+DGmmfFaebrOuyUrKuGIFk
         C9tg==
X-Gm-Message-State: AOAM533YWQSSZDTx2jhFy+tiFdSg4hXc9BWLyoV49cd05f61ymkt/Lvy
	snG2eiFnEqW0GdeZNbdrmzU=
X-Google-Smtp-Source: ABdhPJxziRvO2CxdyVrOBfIHF9KcKpmjwJv00ACoNsPMfK/hKtUN0mb4Vbog2vVPX14W88FjB3RASw==
X-Received: by 2002:a5d:51cd:: with SMTP id n13mr33094wrv.87.1605626948528;
        Tue, 17 Nov 2020 07:29:08 -0800 (PST)
Received: from CBGR90WXYV0 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
        by smtp.gmail.com with ESMTPSA id v19sm4122033wmj.31.2020.11.17.07.29.07
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 17 Nov 2020 07:29:07 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>
Cc: <xen-devel@lists.xenproject.org>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-3-git-send-email-olekstysh@gmail.com> <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org> <436143ea-609f-f6c3-4952-19fcf410fe8f@gmail.com> <34133df1-bff2-f4df-00a5-674a2af867fc@gmail.com>
In-Reply-To: <34133df1-bff2-f4df-00a5-674a2af867fc@gmail.com>
Subject: RE: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
Date: Tue, 17 Nov 2020 15:29:06 -0000
Message-ID: <007401d6bcf6$63d3f420$2b7bdc60$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAJRJ+K5AatvYagBUPuZ1AIVomhYqmmB2JA=

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 17 November 2020 14:48
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew
> Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson'
> <iwj@xenproject.org>; 'Jan Beulich' <jbeulich@suse.com>; 'Julien Grall' <julien@xen.org>; 'Stefano
> Stabellini' <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné'
> <roger.pau@citrix.com>; 'Tim Deegan' <tim@xen.org>; 'Julien Grall' <julien.grall@arm.com>
> Subject: Re: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
>
>
> Hi Paul
>

Hi Oleksandr,

> >
> >> The 'legacy' mechanism of mapping magic pages for ioreq servers
> >> should remain x86 specific. I think that aspect of the code needs to
> >> remain behind and not get moved into common code. You could do that
> >> in arch specific calls in hvm_ioreq_server_enable/disable() and
> >> hvm_get_ioreq_server_info().
> > Well, if the legacy mechanism is not going to be used for other arches and
> > should remain x86 specific, I will try to investigate what should be
> > left in x86 code and rework the series.
> > As a side note, I am afraid, we won't get a 100% code movement (which
> > I managed to achieve here) for the next version of this patch as we
> > need arch/x86/hvm/ioreq.c.
>
> I am investigating how to split the code in order to leave the 'legacy'
> mechanism x86 specific and have a few questions. Could you please
> clarify the following:
>
> 1. The split of hvm_ioreq_server_enable/disable() is obvious to me, but
> I would like to clarify regarding hvm_get_ioreq_server_info().
> Is it close to what you had in mind when suggesting the split of
> hvm_get_ioreq_server_info(), or do I just need to abstract only the
> hvm_ioreq_server_map_pages() call?

I think it is sufficient to just abstract hvm_ioreq_server_map_pages() (and return -EOPNOTSUPP in the Arm case).
The buf ioreq port should be common.
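For illustration, a minimal standalone sketch of the shape such an abstracted hook could take on the Arm side (not the actual Xen code: the struct here is an opaque stand-in, and the hook name merely follows the arch_ioreq_server_* convention discussed in this thread):

```c
#include <errno.h>

/* Opaque stand-in for the real Xen hvm_ioreq_server structure. */
struct hvm_ioreq_server;

/*
 * Hypothetical Arm variant of the abstracted hook: the legacy
 * GFN-mapped "magic page" mechanism is x86-only, so page mapping is
 * simply refused, while the buf ioreq event channel stays common.
 */
static int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
{
    (void)s;
    return -EOPNOTSUPP;
}
```

With this shape, the common hvm_get_ioreq_server_info() path would propagate the error to callers that ask for the legacy GFNs on Arm, without any x86-specific code leaking into common files.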

> (Incomplete and untested)
>
> +/* Called with ioreq_server lock held */
> +int arch_ioreq_server_get_info(struct hvm_ioreq_server *s,
> +                               unsigned long *ioreq_gfn,
> +                               unsigned long *bufioreq_gfn,
> +                               evtchn_port_t *bufioreq_port)
> +{
> +    if ( ioreq_gfn || bufioreq_gfn )
> +    {
> +        int rc = hvm_ioreq_server_map_pages(s);
> +
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    if ( ioreq_gfn )
> +        *ioreq_gfn = gfn_x(s->ioreq.gfn);
> +
> +    if ( HANDLE_BUFIOREQ(s) )
> +    {
> +        if ( bufioreq_gfn )
> +            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
> +
> +        if ( bufioreq_port )
> +            *bufioreq_port = s->bufioreq_evtchn;
> +    }
> +
> +    return 0;
> +}
> +
>   int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>                                 unsigned long *ioreq_gfn,
>                                 unsigned long *bufioreq_gfn,
> @@ -916,26 +954,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>       if ( s->emulator != current->domain )
>           goto out;
>
> -    if ( ioreq_gfn || bufioreq_gfn )
> -    {
> -        rc = hvm_ioreq_server_map_pages(s);
> -        if ( rc )
> -            goto out;
> -    }
> -
> -    if ( ioreq_gfn )
> -        *ioreq_gfn = gfn_x(s->ioreq.gfn);
> -
> -    if ( HANDLE_BUFIOREQ(s) )
> -    {
> -        if ( bufioreq_gfn )
> -            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
> -
> -        if ( bufioreq_port )
> -            *bufioreq_port = s->bufioreq_evtchn;
> -    }
> -
> -    rc = 0;
> +    rc = arch_ioreq_server_get_info(s, ioreq_gfn, bufioreq_gfn, bufioreq_port);
>
>    out:
>       spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>
> 2. If I understand the code correctly, besides the above-mentioned
> functions the arch specific calls should be in hvm_ioreq_server_init()
> and hvm_ioreq_server_deinit() to actually hide
> "hvm_ioreq_server_unmap_pages" usage from the common code. I noticed
> that the rollback code in hvm_ioreq_server_init() and the whole
> hvm_ioreq_server_deinit() have a lot in common, except an extra ASSERT()
> and hvm_ioreq_server_free_pages() call in the latter. My question is:
> could we just replace the rollback code with hvm_ioreq_server_deinit()?
> I assume an extra hvm_ioreq_server_free_pages() call wouldn't be an
> issue here, but I am not sure what to do with the ASSERT; I expect it to
> be triggered at such an early stage (so it probably needs moving out of
> hvm_ioreq_server_deinit() (or dropping?), and the comment needs
> updating). I am asking because this way we would get *a single* arch
> hook here, arch_ioreq_server_deinit(), and remove some code duplication.

I would have arch specific init and deinit, even if one of them does nothing... but then I like symmetry :-)

>
> Something close to this.
> (Incomplete and untested)
>
> @@ -761,18 +771,17 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
>       return 0;
>
>    fail_add:
> -    hvm_ioreq_server_remove_all_vcpus(s);
> -    hvm_ioreq_server_unmap_pages(s);
> -
> -    hvm_ioreq_server_free_rangesets(s);
> -
> -    put_domain(s->emulator);
> +    hvm_ioreq_server_deinit(s);
>       return rc;
>   }
>
> +void arch_ioreq_server_deinit(struct hvm_ioreq_server *s)
> +{
> +    hvm_ioreq_server_unmap_pages(s);
> +}
> +
>   static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
>   {
> -    ASSERT(!s->enabled);

I assume this is the ASSERT you're referring to... There's no way we should be deinit-ing an enabled server, so that should remain in common code as is.
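To make the shape of that split concrete, here is a schematic stand-alone sketch (stand-in types and simplified bodies, not the real Xen code): the !enabled invariant is checked once in the common deinit path, and only the legacy unmap moves behind the arch hook.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the real hvm_ioreq_server. */
struct hvm_ioreq_server {
    bool enabled;
    bool legacy_pages_mapped;
};

/* x86 arch hook: hides the legacy unmap from common code. */
static void arch_ioreq_server_deinit(struct hvm_ioreq_server *s)
{
    /* Stands in for hvm_ioreq_server_unmap_pages(s). */
    s->legacy_pages_mapped = false;
}

/* Common deinit: the invariant is checked here, once, for all arches. */
static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
{
    assert(!s->enabled);        /* never deinit an enabled server */
    arch_ioreq_server_deinit(s);
    /* ...free pages, rangesets, drop the emulator domain reference... */
}
```

An Arm build would supply its own (possibly empty) arch_ioreq_server_deinit(), keeping the init/deinit pair symmetric as suggested above.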

  Paul

>       hvm_ioreq_server_remove_all_vcpus(s);
>
>       /*
> @@ -784,7 +793,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
>        *       the page_info pointer to NULL, meaning the latter will do
>        *       nothing.
>        */
> -    hvm_ioreq_server_unmap_pages(s);
> +    arch_ioreq_server_deinit(s);
>       hvm_ioreq_server_free_pages(s);
>
>       hvm_ioreq_server_free_rangesets(s);
>
>       put_domain(s->emulator);
>
>
> --
>
> Regards,
>
> Oleksandr Tyshchenko




From xen-devel-bounces@lists.xenproject.org Tue Nov 17 15:58:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 15:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29105.58351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf3N7-0001J0-1g; Tue, 17 Nov 2020 15:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29105.58351; Tue, 17 Nov 2020 15:58:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf3N6-0001It-UM; Tue, 17 Nov 2020 15:58:20 +0000
Received: by outflank-mailman (input) for mailman id 29105;
 Tue, 17 Nov 2020 15:58:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kf3N5-0001Io-B2
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:58:19 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffca296e-dded-4ea1-9ddc-98f0ceab0b40;
 Tue, 17 Nov 2020 15:58:17 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rcuG=EX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kf3N5-0001Io-B2
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 15:58:19 +0000
X-Inumbo-ID: ffca296e-dded-4ea1-9ddc-98f0ceab0b40
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ffca296e-dded-4ea1-9ddc-98f0ceab0b40;
	Tue, 17 Nov 2020 15:58:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605628697;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=SGn6ZqOB9PaDN918fLYt0ItUj4Dmmqy9DE1QD6gV1wU=;
  b=gspD/UB/A1Zm4Ralp7TNdT1iK3rBxSDBI0UDChp7yRxio/jlRDsF7Lnb
   F0bsuer5I0UxjrJyCAxtYqWkk0FCbe2KU4wXVMrHzqtFSJAau0kXi6iLP
   gQFUjUTLFz3P86kuYOFZSLWGKOXGr63BqFdyRqlQ9UIlV3ejY3CpYtGzk
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: C9HbK6g+pOtZnGOVZ/O3/gt5oU6RWBq6Ki8ufYTMIR7lMMFf4RJ4UrJYaB8iWF/OJDzzFTdbvT
 GoCduA9ehm5Z6eM9F8SSU0zdkB5jJzZKIEfSijBI9QUBxtqfzXeDjpRNxQmodZaJIbErfQMtAy
 wMiNxXblpnNXREWORnUCgFF/AAK6/9AF5bFJqgl7AA7XdWiwRsdzEq1QppRb/WoDPz2igC589/
 f/030uCHsMGKiQcgqlzRJVcZfwwT3Xoxlxhu3U25mh0UOWhAxGOCH1xLyXHmaJTYBRUjitilnO
 YnQ=
X-SBRS: None
X-MesageID: 32489197
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,485,1596513600"; 
   d="scan'208";a="32489197"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FMmUyJPFT9Urfe9/+KhKk8aY0Xd2D1r4zwrWaklIj3erHmUKFdKNEab8jWF+qDHjBzb7R/m09f3MG24FXcsOfAKdxdpm3EWiYH/7nBA8FZpx/kER1fc6iqqv43qokjQCd+qZhLF4sPPDG4N/cW8fJFkzWyLmsZUdkv2CzUgcpFv5A7DxSAsOu1xsph8pWBORfMx9W0lJWfUdFBublPtXJtkIem06mArnFyWIRIpcwtKX2OVVLTULNdGzRU7etJ4iRaVxkHQ++tUnm3u4db5XHwJCetYWEpfqMIjIMwh3uSvlmJUGuD63jni56CaTxMorkg1vvMbzIhLssqs4Klc+9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1CGoFyLPlUsRQw7DTbXrB6juYSwysCpizYUG3l7GMak=;
 b=MZ48vgDEgqHOnGs+NBp3Hjw4w8thFJwNCasO1Y5lybyjJD1pnKkuBf3z3HD/R+Vq9ximRVFoCaG1/jcWKSVgbBAzU+3J8q5si+f2iur1emah/BP6oYw82B187nWxLLGrHSdCl4ypM4uvI8QBu0PGN1ybUq2TJUVelParZZIBCZVk5gOmRvWmGnq0Qd3SDtZjKc/YhF03/c4xfqqAqywt629uYppZnWfkSkCadji7MTx43dMx+k6xrJ9YTKP8LMKir4d+bcdpbjOrVesqpqF2uHwt/6p/BHxG+O9uB5XXP44p3utbsylsTf9eBkDsiEJORinzWj8PZcbgrKTmq6dsZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1CGoFyLPlUsRQw7DTbXrB6juYSwysCpizYUG3l7GMak=;
 b=A2Zn0IKzeoh0vuHcnevETlOoBgWll26MSig6ntTmYFEgpkZY2H0hG+fc6XVTy5IaR9Dz6x+O1fVQ9v82grVWOkvkZBBTLi9f0IxpToc6voJP7Jn/eR++YAKIIH6Ir8DJKXUJi1KeJeJHSQVpAGZ1PAAURWOCTonIqyKKYQP5lss=
Date: Tue, 17 Nov 2020 16:58:07 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
References: <20201117150949.GA3791@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201117150949.GA3791@antioche.eu.org>
X-ClientProxiedBy: MRXP264CA0001.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9d2e4869-55ff-4b4b-4227-08d88b1196e5
X-MS-TrafficTypeDiagnostic: DM6PR03MB4682:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4682D222D05BEC06BE2483848FE20@DM6PR03MB4682.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: PD9xVZDi1yMoyDCBQaoYtqtmHLOxvXh67SMbvf2ZB0GROi5ppBkpL6Un1wfJB3DuB23lQJj5Mlm3eSpf8q4FBjnb857+Dyldy6miMS911sUD1FU46Zx3mL6Ywpp4brYhs7StmAWjHb5VsRldc4ah9DbzsKVE8Ahyhd5OTMo+900HUsu8Qp282uc//kesNraQZaZtkMkxZsCJLt9fmXSkY53GUHx/EFOl6G44di7eRjCRybcs8b/DjA/cvbMqOcC+m8taKq6L2cE08t/+bTM7J/26gNkbmpyuN6t/m6GSkHGy7YX74bkvOB0ZxbKoOtKM
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(39860400002)(366004)(346002)(396003)(376002)(186003)(4326008)(956004)(6496006)(33716001)(6666004)(6916009)(86362001)(5660300002)(6486002)(2906002)(8936002)(16526019)(316002)(8676002)(478600001)(26005)(83380400001)(66556008)(9686003)(1076003)(66476007)(85182001)(66946007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: +cAqydkCbVmGyJb3o1mA9bzWcL+WAPQE6wRg8GsNNJaTNvh5NEwilRih1ww57K081Wpl00NwOt8hbyzTznI63Gavj++EwSc9FXY9AfSFX2RMVgYKyokrjAtcueQNTD95Kd6kNSrjHWe/bLjAp5toxQ7cP3rHQ2SYtGxw3UQGtkNKaW8jPm7GYruxRQRPqzhuNy+LTD2nbOz3a+deWnnMNq2dzaheSBvyx9ji+ccm0TeWOX4mRUnMb+x0t9L+L/ron4vEWc3rV9p8AAEWx7Id6Q30Ie5MzRjSFpmDIMDYhosVoRE1UQy/E6vfqHbZmwQkFiWIl4hAgEhk2da6+F870JDmilZIi/r4jP1M950K+Yjx0i2Ly5YwDou07rfdeYJkRoXcLP3POjhlTPSeeerUJlHJqtev0aIHvwHmw8WVlee6PYoA/jsYINeclR6NQUEV/ybwFDdWIWKd4qjQqaIMgo5W54Y9sGXNMg56n9b7tnAtXrF0qyTkVEF/010uFP+qLxUnDFKZ6vtHjDB7jqObS9EqAYYesXJmUQvCJgUjFPLOPXytLabOtDRnx13/ffrvQTDdClVKsuJpZjQSzyf10MhiUvGkTET+6gJWg9bDHaKD9asdo1DINdv6nfUohUGndBEfGjpZm8XlQRwecX9zJxUx4Zq5lfhwu5K3SwikE3LoaMGmjsguuqgAf6t21BBhbjTIK3kyClu0EdzOPrOqdXntAKioby8sgxUnN30vBy46UTdbDFmwQ+Tfs7cMBIifeK9goOH2hSXSc8ReegAuw7e0/fCuFLEtWwOUDtjF5ZG+QBQYZI5O70T+VX3CXlyVzDFAwwIrlE5ZGMMwdrpLSnbsJt8un+slse40VrKGb0IhEc1+DZUa5Dd6jpqBMiY7WpM74XNVWmRxGXWuBaVZWw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 9d2e4869-55ff-4b4b-4227-08d88b1196e5
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 15:58:13.7208
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BUCNzKzazMU0mCTpBANxJCu7QW3CO8nP8DINVFS8jhS5R3E8i16OSCD5Zs4wX5Ko29KdqEZ/+dSVQ4LNVPkiBA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4682
X-OriginatorOrg: citrix.com

On Tue, Nov 17, 2020 at 04:09:49PM +0100, Manuel Bouyer wrote:
> Hello,
> so, after fixing an issue in the NetBSD kernel related to PV clock
> interrupts, I'm back with physical interrupt issues.
> At some point in the initialisation, the dom0 kernel stops receiving
> interrupts for its disk controller. The disk controller is:
> [   1.0000030] mfii0 at pci6 dev 0 function 0: "PERC H740P Adapter ", firmware 51.13.0-3485, 8192MB cache
> (XEN) d0: bind: m_gsi=34 g_gsi=34
> [   1.0000030] allocated pic ioapic2 type level pin 2 level 6 to cpu0 slot 2 idt entry 103
> [   1.0000030] mfii0: interrupting at ioapic2 pin 2
> 
> entering the NetBSD kernel debugger and looking at interrupt counters,
> I see that some interrupts did trigger on ioapic2 pin 2, as well as for
> some other hardware controllers.
> I did print the controller's status when the command times out, and
> the controller says that there is an interrupt pending. So I guess that
> the command was executed, but the dom0 kernel didn't get interrupted.
> 
> At this point I can't say if other hardware controller interrupts are
> working (because of the lockdown I don't have physical access
> to the hardware).
> 
> What's strange is that some Xen console activity seems to be enough to
> resume interrupt activity. Hitting ^A 3 times is enough to get some progress
> on the dom0's disk controller, and hitting 'v' is usually enough to
> get the dom0 multiuser. Once there the system looks stable, and I can log
> in from the network. But I/O may stall again on reboot, maybe because the
> dom0 kernel is back to using synchronous console output.

Hm, certainly weird.

> Any idea what to look at from here ?

I have attached a patch below that will dump the vIO-APIC info as part
of the 'i' debug key output. Can you paste the whole output of the 'i'
debug key when the system stalls?

Can you assert that you properly EOI the vectors on the local APIC? (I
don't have a patch to dump the emulated lapic ISR right now, but could
provide one if needed).
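As a rough illustration of what such a lapic ISR check does, here is a stand-alone sketch with a fake register file (apic_read() and the constants are stand-ins, not the real Xen implementation); the register layout mirrors the IRR read in the patch below, i.e. 256 bits spread over eight 32-bit registers spaced 0x10 apart:

```c
#include <stdint.h>

#define APIC_ISR 0x100                 /* base offset of the ISR registers */

/* Stand-in for the MMIO local APIC register space. */
static uint32_t fake_regs[0x400 / 4];

static uint32_t apic_read(unsigned int reg)
{
    return fake_regs[reg / 4];
}

/* Returns nonzero if 'vector' is currently in service (EOI still pending). */
static int vector_in_service(unsigned int vector)
{
    uint32_t isr = apic_read(APIC_ISR + (vector / 32) * 0x10);

    return (isr >> (vector % 32)) & 1;
}
```

A vector that stays set in the ISR long after the handler ran would indicate a missing EOI, which matches the "interrupt pending but dom0 never interrupted" symptom described above.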

Roger.
---8<---
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..fd0d75db80 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -30,6 +30,7 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
+#include <xen/softirq.h>
 #include <xen/nospec.h>
 #include <public/hvm/ioreq.h>
 #include <asm/hvm/io.h>
@@ -720,3 +721,34 @@ void vioapic_deinit(struct domain *d)
 
     vioapic_free(d, d->arch.hvm.nr_vioapics);
 }
+
+void vioapic_dump(void)
+{
+    const struct domain *d = hardware_domain;
+    unsigned int i;
+
+    if ( !has_vioapic(d) )
+        return;
+
+    printk("vIO-APIC dom%u state:\n", d->domain_id);
+    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
+    {
+        const struct hvm_vioapic *vioapic = domain_vioapic(d, i);
+        unsigned int j;
+
+        for ( j = 0; j < vioapic->nr_pins; j++ )
+        {
+            const union vioapic_redir_entry *ent = &vioapic->redirtbl[j];
+
+            printk("ioapic %u pin %u gsi %u vector %#x\n"
+                   "  delivery mode %u dest mode %u delivery status %u\n"
+                   "  polarity %u IRR %u trig mode %u mask %u dest id %u\n",
+                   i, j, vioapic->base_gsi + j, ent->fields.vector,
+                   ent->fields.delivery_mode, ent->fields.dest_mode,
+                   ent->fields.delivery_status, ent->fields.polarity,
+                   ent->fields.remote_irr, ent->fields.trig_mode,
+                   ent->fields.mask, ent->fields.dest_id);
+            process_pending_softirqs();
+        }
+    }
+}
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..bd208efc58 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -24,6 +24,7 @@
 #include <asm/msi.h>
 #include <asm/current.h>
 #include <asm/flushtlb.h>
+#include <asm/hvm/vioapic.h>
 #include <asm/mach-generic/mach_apic.h>
 #include <irq_vectors.h>
 #include <public/physdev.h>
@@ -441,8 +442,15 @@ int __init init_irq_data(void)
     set_bit(HYPERCALL_VECTOR, used_vectors);
 #endif
     
-    /* IRQ_MOVE_CLEANUP_VECTOR used for clean up vectors */
-    set_bit(IRQ_MOVE_CLEANUP_VECTOR, used_vectors);
+    /*
+     * Mark vectors up to the cleanup one as used, to prevent an infinite loop
+     * in irq_move_cleanup_interrupt.
+     */
+    BUILD_BUG_ON(IRQ_MOVE_CLEANUP_VECTOR < FIRST_DYNAMIC_VECTOR);
+    for ( vector = FIRST_DYNAMIC_VECTOR;
+          vector <= IRQ_MOVE_CLEANUP_VECTOR;
+          vector++ )
+        set_bit(vector, used_vectors);
 
     return 0;
 }
@@ -727,10 +735,6 @@ void irq_move_cleanup_interrupt(struct cpu_user_regs *regs)
 {
     unsigned vector, me;
 
-    /* This interrupt should not nest inside others. */
-    BUILD_BUG_ON(APIC_PRIO_CLASS(IRQ_MOVE_CLEANUP_VECTOR) !=
-                 APIC_PRIO_CLASS(FIRST_DYNAMIC_VECTOR));
-
     ack_APIC_irq();
 
     me = smp_processor_id();
@@ -764,6 +768,8 @@ void irq_move_cleanup_interrupt(struct cpu_user_regs *regs)
              cpumask_test_cpu(me, desc->arch.cpu_mask) )
             goto unlock;
 
+        BUG_ON(vector <= IRQ_MOVE_CLEANUP_VECTOR);
+
         irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
         /*
          * Check if the vector that needs to be cleanedup is
@@ -2524,6 +2530,7 @@ static void dump_irqs(unsigned char key)
             printk("   %#02x -> %ps()\n", i, direct_apic_vector[i]);
 
     dump_ioapic_irq_info();
+    vioapic_dump();
 }
 
 static int __init setup_dump_irqs(void)
diff --git a/xen/include/asm-x86/hvm/vioapic.h b/xen/include/asm-x86/hvm/vioapic.h
index d6f4e12d54..8a3ad18b20 100644
--- a/xen/include/asm-x86/hvm/vioapic.h
+++ b/xen/include/asm-x86/hvm/vioapic.h
@@ -70,4 +70,6 @@ int vioapic_get_mask(const struct domain *d, unsigned int gsi);
 int vioapic_get_vector(const struct domain *d, unsigned int gsi);
 int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi);
 
+void vioapic_dump(void);
+
 #endif /* __ASM_X86_HVM_VIOAPIC_H__ */



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 16:16:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 16:16:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29113.58364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf3eW-0003uK-KA; Tue, 17 Nov 2020 16:16:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29113.58364; Tue, 17 Nov 2020 16:16:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf3eW-0003uD-G4; Tue, 17 Nov 2020 16:16:20 +0000
Received: by outflank-mailman (input) for mailman id 29113;
 Tue, 17 Nov 2020 16:16:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/ssn=EX=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kf3eU-0003u8-Pl
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 16:16:18 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.71]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c526d1d-3190-4609-bdb1-b0747e10aaa8;
 Tue, 17 Nov 2020 16:16:17 +0000 (UTC)
Received: from DU2PR04CA0138.eurprd04.prod.outlook.com (2603:10a6:10:231::23)
 by DBBPR08MB4646.eurprd08.prod.outlook.com (2603:10a6:10:f5::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 17 Nov
 2020 16:16:09 +0000
Received: from DB5EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:231:cafe::83) by DU2PR04CA0138.outlook.office365.com
 (2603:10a6:10:231::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28 via Frontend
 Transport; Tue, 17 Nov 2020 16:16:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT017.mail.protection.outlook.com (10.152.20.114) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Tue, 17 Nov 2020 16:16:08 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Tue, 17 Nov 2020 16:16:07 +0000
Received: from fb6a45815400.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 62FB1B25-6CC3-40A0-A0BE-EE6BF0EE9F15.1; 
 Tue, 17 Nov 2020 16:16:01 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fb6a45815400.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Nov 2020 16:16:01 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB8PR08MB5385.eurprd08.prod.outlook.com (2603:10a6:10:119::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 17 Nov
 2020 16:16:00 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3564.028; Tue, 17 Nov 2020
 16:16:00 +0000
X-Inumbo-ID: 0c526d1d-3190-4609-bdb1-b0747e10aaa8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4fASpjCsbyuVHI9z61R7qORpV7CHHXBCue/CIu4KnHA=;
 b=riLhm9ak5+HKd4za2uL+hJqmQ2H44HJK/W80bAFz9mVpBbgKqqGG+A7yWZBGURMr1szxTwF+EQMC3jTf7UAl6Xm8ZwMEFn0XdUmB3MLLmjI9RjwbM2cL9oY8mDvrWghvWW81pSlDfRAiqniSkqKoxhi++D1gRiXxapuoMapEsNI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: cf38d7dca9684757
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JRRpY6cL/Ru0cbLL/CXrbjOGnZ1cylao0hsQbzhzQh8iVD6/PG1YGl9S2dCM4seYvnWzNKxNhVt9kLvIg/qNrnFEYup8VjJn8CX8fjMRw5O0LEEyMk9OE1PdRVyL0Zhprc1gskfljR4PHv/DuMg3IShwTeLL5qn/2BC7OHbjcazJxj99cYzLy4Uho98/E3gmw3x4B7/qmcAIlhFYEAqRfk8UoXk6ZqpXDplZT+olTYNX/DTjcrDm4nCxSiEL/5eW/W+RtlP2Nik8AtWeoKU3Fk1naiDL+In+f+1lccsTMd/7N07rGF5aD1BMfMvtjC1JKDKV//MRl28dC02THuPvgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4fASpjCsbyuVHI9z61R7qORpV7CHHXBCue/CIu4KnHA=;
 b=jRUsBqsqlZ2p41ATY1OXQlA4oKPxiIZxvGYeDMwb4Nou/I26oBKsfuC6x6k/ODbRPug5eZ1acHNEwzvfIPANfukSj/nRoKzNOF266c2HfRK6xhDOf+0fgpuixcozYIRhdYFUz+VI7/AsMWGPseiVjyBj96mZN4ZzEdojA+ZqwmI5pANgQ0H9h9WRMoR7zpDLipIabRowoXglrT1ndhEoMDp0ZPWX6ZTMi2dRBtWsJlsn5NnuSA+gM1cX9UJvWzUIqVfaXbZX0+aV5VLAUHLyiUf265fyyhA961luA9ha/OEqyUiezXb+d+kzeTJQb34Gn42vK5cT41q8HzboVoCBIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 3/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Topic: [PATCH v3 3/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Index: AQHWvBO4hlySn5llrk6lgfFsXmzQpanMLQIAgABUtAA=
Date: Tue, 17 Nov 2020 16:15:59 +0000
Message-ID: <F2F25880-2407-479F-9BC0-1DBA7FD59F4E@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <efa0c2578a6aabb642b8f38257cf96a983944301.1605527997.git.rahul.singh@arm.com>
 <fe73928e-bf95-234d-c181-4904a13ad0a1@suse.com>
In-Reply-To: <fe73928e-bf95-234d-c181-4904a13ad0a1@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 143fc315-eb85-47df-ab92-08d88b14178b
x-ms-traffictypediagnostic: DB8PR08MB5385:|DBBPR08MB4646:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB464634F21604E0904B859558FCE20@DBBPR08MB4646.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ordUuHS0/ROgxyB9czjUmr9K7cEADmZVsZAOlg+YCw2ExjxSQY8BIfbvcWEktQ5ctCYZ7GgGc5VAwj6Wa8K+dLj4zPvad4cJFZNjNwmVEuwYoOKXHncM8iDwJIiLiJRLp5JB594yWZykJOzS/hBLqQGeg4h4KdLBxpjjKghlKmFUp60+jn8gWaeI++A0LlH958UJREc+hEebffT5aimcY/Rz2vUs2fqiQqVyA269HiwaIQPYLkeZxL74aQ0TDg4eHTWZISb84T79tvykRUKVqGsRVinQiEeSTmzLXowopeeo3/uceuRJE+0pGt2xthDcsIh5kckUBOF3JPopVpcp2A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(396003)(346002)(39850400004)(376002)(186003)(26005)(6916009)(2906002)(54906003)(316002)(8676002)(66476007)(91956017)(8936002)(33656002)(64756008)(76116006)(6506007)(66556008)(66446008)(6512007)(66946007)(4326008)(53546011)(86362001)(5660300002)(478600001)(6486002)(83380400001)(71200400001)(2616005)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 D5pE7+uHwLcJFnH9FZYpAh3tv3vdM+GEYFYnhlbIH8PgGCTOY+5OF3bZgBY72x28IDBV0S1Ft+XkIKufLXAZUiBZph3MEDimxKfDU6lKPFkBqvThvA7z+AXH1tvG60hgFMQahh58eCYNSc3JUaZWXia3UM43O1oHLfgOp9gpl921dwzhmdZzSS0bPtqfItozQQRyHrTLrLR9OLJyJX3JjjTiz3jP6fCFFxpBgwrMY6GGc9DMPbCM/AIuOabDjkC0LcMKcUQrxcA8LTUJyhSGeRFK2u0jIJkSLYdh/QkRCty9toIMQEogQEO6oyC/9FV9ZksHUar2gNKb+vy4PZP8IrWZm1Uwyu1O5gQRWMAwx0NvS2a84Iisx51OJm+68DF+EZBafnEmhp7YqaEiV/OBjIYvPluqbA5u1kT3aksmevriUn2oR54X3OAZMDQx0qCFdT6yd+0CvhuTuwvdNH6aYMAse7aCatnnHrRHd7/x2kcaKEVV/G/iMlnhnGjC+HT07MdVtzbQTcrOWXfY+BMSKfM4urcBJVLwjPWwd3BV+ziL2nxLfMycfu2DDvkhgKgkRLAAsdZip3I58rOoJ8CuCgp3/DXEZZsqpWBkvIOv9MKjFrn4WMgyBJYbCh2u9SjlHmfCvmUq/FyiNQWYMXS2tGPurn94JTLC7U4tGQLVSbKjd6VQcImSI62avTVFSbLe4InC0kki3dEHpHLrFN61i9JUwxwqoSXXd2oXz73OULu8Ct5H4J10QaKlSqd6/rWFGe5zVPyJoaAb5h7IIPrlqQEJvuIrefE+N3+CQDzeKN6fiTAxZpwLAi4VLwfJp2w/Ig4P6N4Q4eAtz8hRog7pW3s/2K1lCaws6iYrD1lJzi9l6nloYoeFV47f0TyCLzrANLKSSyxA8KV/WtD2c9tKag==
Content-Type: text/plain; charset="utf-8"
Content-ID: <2ED3CC1ACE394A49A0EC231B54493D7E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5385
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6fcdc788-7e6b-40f1-44e3-08d88b1412ac
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	asbHoAuXdzjpKSB/i9CgJktl+CiXHOQjdaUlKhgMAfbXHo/T5P+zWBTw/evuaesIPnpLk0qPfbQCkPbp17UeNmkCIKmB0XTkffCFhKKzB6VGSDnp4dGjQhYyvkykx11uqpvN7zweQ33ebdxwOPYCvzuVoxANjKd+HGk0oVbb1L08n19w5/kxR8p+qUdidUYaOWnYLQWmDWymv2esJGWB5c/mNn18+lYwRKEtYtI0QqiZDBGJR25lTVZwqGZfVbI7nW4bilIvfyinm0tiwRZAOMotTgyXGvzdmLh8jWaTbbvvEUuxzZ2tJEN09nnxLQnpBYpHU6nfxXY9mXpdKbZZq66zqX/iJTz2WObqqTpITANhanmPJyL66C+o29YpK/4W9y/0ALuCBk/0zv+/QCgvTA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(396003)(346002)(136003)(46966005)(53546011)(83380400001)(2906002)(26005)(478600001)(47076004)(6862004)(36756003)(186003)(33656002)(6512007)(316002)(82310400003)(8676002)(336012)(8936002)(6506007)(2616005)(5660300002)(356005)(54906003)(70586007)(81166007)(70206006)(4326008)(86362001)(82740400003)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 16:16:08.3471
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 143fc315-eb85-47df-ab92-08d88b14178b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4646

Hello Jan,

> On 17 Nov 2020, at 11:12 am, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 16.11.2020 13:25, Rahul Singh wrote:
>> If mem-sharing, mem-paging, or log-dirty functionality is not enabled
>> for non-x86 architecture when HAS_PCI is enabled, the compiler will
>> throw an error.
>> 
>> Move code to x86 specific directory to fix compilation error.
> 
> Perhaps rather "file" than "directory”?
Ok.
> 
>> Also, modify the code to use likely() in place of unlikley() for each
>> condition to make code more optimized.
>> 
>> No functional change.
>> 
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> 
> In principle I'm okay with this now, but there continue to be a few
> nits:

Thanks for reviewing the code I will fix all comments and will share the next version.
> 
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -23,6 +23,7 @@
>> #include <asm/hvm/io.h>
>> #include <asm/io_apic.h>
>> #include <asm/setup.h>
>> +#include <xen/vm_event.h>
> 
> Please insert this alongside the other "#include <xen/...>" higher up.

Ok.
> 
>> @@ -315,6 +316,17 @@ int iommu_update_ire_from_msi(
>>            ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
>> }
>> 
>> +bool arch_iommu_use_permitted(const struct domain *d)
>> +{
>> +    /*
>> +     * Prevent device assign if mem paging, mem sharing or log-dirty
>> +     * have been enabled for this domain.
>> +     */
>> +    return d == dom_io ||
>> +           (likely(!mem_sharing_enabled(d)) &&
>> +            likely(!vm_event_check_ring(d->vm_event_paging)) &&
>> +            likely(!p2m_get_hostp2m(d)->global_logdirty));
>> +}
>> /*
>>  * Local variables:
>>  * mode: C
> 
> Please don't alter stylistic aspects like this trailing comment
> being preceded by a blank line.

Ok.
> 
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
>> extern struct spinlock iommu_pt_cleanup_lock;
>> extern struct page_list_head iommu_pt_cleanup_list;
>> 
>> +bool arch_iommu_use_permitted(const struct domain *d);
> 
> Just FTR - this way you effectively preclude an arch from
> making this a trivial static inline in one of its headers.
> 
> Jan
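[Editorial note] The likely()/unlikely() annotations that this patch's commit message mentions are branch-prediction hints. A minimal sketch of the widespread GCC/Clang idiom is below; Xen carries its own equivalent definitions in its compiler header, so the macros here are for illustration only:

```c
#include <assert.h>

/* Conventional definitions of the branch-prediction hints referenced in the
 * patch. This is the common GCC/Clang idiom, not necessarily Xen's exact
 * definition. The double negation normalises any truthy value to 1. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* The hints never change a program's result, only the code layout the
 * compiler picks for the branch, so replacing unlikely(cond) with
 * likely(!cond) is indeed "no functional change". The predicate below
 * mirrors the shape of the patch's check, with plain ints standing in
 * for the domain state queries. */
int use_permitted(int sharing, int paging, int logdirty)
{
    return likely(!sharing) && likely(!paging) && likely(!logdirty);
}
```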


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 16:30:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 16:30:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29129.58376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf3rr-0005mx-V0; Tue, 17 Nov 2020 16:30:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29129.58376; Tue, 17 Nov 2020 16:30:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf3rr-0005mq-QQ; Tue, 17 Nov 2020 16:30:07 +0000
Received: by outflank-mailman (input) for mailman id 29129;
 Tue, 17 Nov 2020 16:30:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UWEc=EX=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kf3rq-0005dY-1f
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 16:30:06 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7054294-a346-43ff-9c96-ecd32a0792df;
 Tue, 17 Nov 2020 16:30:04 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id x9so24974961ljc.7
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 08:30:04 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id p25sm3206522lfc.125.2020.11.17.08.30.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 17 Nov 2020 08:30:03 -0800 (PST)
X-Inumbo-ID: c7054294-a346-43ff-9c96-ecd32a0792df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=01HVuw46qRoOFWw/AqqExtJgoLq9BcD3AdYtNrquhhs=;
        b=U6wgdthK0up06n1FGwW97B1xZU1a6z5C+5cwYLVtkwLLLfKNaWrZZzmqLxFDyEaTjG
         Sey0Op4cZb7f7v1yGh1WDwhlCrLi4I3xQzVLRBnKbl9VIQRDexMguMHiLkAQqEW6MBQR
         APNkY7EmQ4gRgeVs7jB6OMKgFwlpxnHtgpVVwzdte/MOLTYUreriPUgJc3R/+X6Op8D8
         BvDHgZ+OhidChT3q6FYHGyiX1iMdZI91FvGkx7hl5eQue/oWCiCSdGlX+oGw3yqOGJOg
         T6UdFGnnLdDfvCV+wvpJyu3nPGa6KCiS7NQ6Ow1agmZiOcRdk7Lmk6KoPfham2ld8B23
         2kkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=01HVuw46qRoOFWw/AqqExtJgoLq9BcD3AdYtNrquhhs=;
        b=COQAPz020Knawkfb2L3QbFiMoJIdeHAhgjdmU89ZCiT91SstVDQdFSVooA6dduqjji
         epKFc3gY3+EkRxqrkKI4/ckTrYsanfoudNWy2KvI0TAmLs7NwCTB3PQHLjS4gM+/0VLI
         fUULU4FqivuYtVIkPGZMHEO/Qk/kOtvGqG0Z1qQm/o/Sftad217WMEecFfBWLDnp0Yt1
         v4Zf6boRrO6wRPYwKnHtO97kQqDnGECiWKFMSdxRJRwHc2Dai/uCymBOq0fKpbPcTbJ8
         tP+pPj5T3Y2jWgNSegp2wedaH+czHPUi3Xnp1GXZPvb7ZGeriAeUjwcgnpGAmEolHh6r
         UQ7Q==
X-Gm-Message-State: AOAM532D94aHMO7LPi/ykVP9ZX0Qwafy137uJ+9SEcgL3aWBmGkjRnyj
	Gn7rKiaHbdSv+h1yQ8rievI=
X-Google-Smtp-Source: ABdhPJy4quR2RyUXySZ14ZAeNfdvGX2JjCG0TrbMG/l0/zJpN3MxlUfAIDdMU8oT7w3C520Tvoo/7Q==
X-Received: by 2002:a2e:1556:: with SMTP id 22mr2435394ljv.416.1605630603584;
        Tue, 17 Nov 2020 08:30:03 -0800 (PST)
Subject: Re: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
To: paul@xen.org
Cc: xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Tim Deegan' <tim@xen.org>, 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
 <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org>
 <436143ea-609f-f6c3-4952-19fcf410fe8f@gmail.com>
 <34133df1-bff2-f4df-00a5-674a2af867fc@gmail.com>
 <007401d6bcf6$63d3f420$2b7bdc60$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a2eecf9b-7246-68c8-aee4-b4009ee16ed8@gmail.com>
Date: Tue, 17 Nov 2020 18:29:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <007401d6bcf6$63d3f420$2b7bdc60$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 17.11.20 17:29, Paul Durrant wrote:

Hi Paul

Thank you for the prompt answer.

>>>> The 'legacy' mechanism of mapping magic pages for ioreq servers
>>>> should remain x86 specific I think that aspect of the code needs to
>>>> remain behind and not get moved into common code. You could do that
>>>> in arch specific calls in hvm_ioreq_server_enable/disable() and
>>>> hvm_get_ioreq_server_info().
>>> Well, if legacy mechanism is not going to be used for other arch and
>>> should remain x86 specific, I will try to investigate what should be
>>> left in x86 code and rework the series.
>>> As a side note, I am afraid, we won't get a 100% code movement (which
>>> I managed to achieve here) for the next version of this patch as we
>>> need arch/x86/hvm/ioreq.c.
>> I am investigating how to split the code in order to leave the 'legacy'
>> mechanism x86 specific and have a few questions. Could you please
>> clarify the following:
>>
>> 1. The split of hvm_ioreq_server_enable/disable() is obvious to me, I
>> would like to clarify regarding hvm_get_ioreq_server_info().
>> Is it close to what you had in mind when suggesting the split of
>> hvm_get_ioreq_server_info() or I just need to abstract
>> hvm_ioreq_server_map_pages() call only?
> I think it is sufficient to just abstract hvm_ioreq_server_map_pages() (and return -EOPNOTSUPP in the Arm case).
> The buf ioreq port should be common.

ok, will do.


>
>> (Not completed and non tested)
>>
>> +/* Called with ioreq_server lock held */
>> +int arch_ioreq_server_get_info(struct hvm_ioreq_server *s,
>> +                               unsigned long *ioreq_gfn,
>> +                               unsigned long *bufioreq_gfn,
>> +                               evtchn_port_t *bufioreq_port)
>> +{
>> +    if ( ioreq_gfn || bufioreq_gfn )
>> +    {
>> +        int rc = hvm_ioreq_server_map_pages(s);
>> +
>> +        if ( rc )
>> +            return rc;
>> +    }
>> +
>> +    if ( ioreq_gfn )
>> +        *ioreq_gfn = gfn_x(s->ioreq.gfn);
>> +
>> +    if ( HANDLE_BUFIOREQ(s) )
>> +    {
>> +        if ( bufioreq_gfn )
>> +            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
>> +
>> +        if ( bufioreq_port )
>> +            *bufioreq_port = s->bufioreq_evtchn;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>>    int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>>                                  unsigned long *ioreq_gfn,
>>                                  unsigned long *bufioreq_gfn,
>> @@ -916,26 +954,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>>        if ( s->emulator != current->domain )
>>            goto out;
>>
>> -    if ( ioreq_gfn || bufioreq_gfn )
>> -    {
>> -        rc = hvm_ioreq_server_map_pages(s);
>> -        if ( rc )
>> -            goto out;
>> -    }
>> -
>> -    if ( ioreq_gfn )
>> -        *ioreq_gfn = gfn_x(s->ioreq.gfn);
>> -
>> -    if ( HANDLE_BUFIOREQ(s) )
>> -    {
>> -        if ( bufioreq_gfn )
>> -            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
>> -
>> -        if ( bufioreq_port )
>> -            *bufioreq_port = s->bufioreq_evtchn;
>> -    }
>> -
>> -    rc = 0;
>> +    rc = arch_ioreq_server_get_info(s, ioreq_gfn, bufioreq_gfn, bufioreq_port);
>>
>>     out:
>>        spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>>
>> 2. If I understand the code correctly, besides of the above-mentioned
>> functions the arch specific calls should be in hvm_ioreq_server_init()
>> and hvm_ioreq_server_deinit() to actually hide
>> "hvm_ioreq_server_unmap_pages" usage from the common code.  I noticed
>> that the rollback code in hvm_ioreq_server_init() and the whole
>> hvm_ioreq_server_deinit() have a lot in common except an extra ASSERT()
>> and hvm_ioreq_server_free_pages() call in the latter. My question is
>> could we just replace the rollback code by hvm_ioreq_server_deinit()? I
>> assume an extra hvm_ioreq_server_free_pages() call wouldn't be an issue
>> here, but I am not sure what to do with the ASSERT, I expect it to be
>> triggered at such an early stage (so it probably needs moving out of the
>> hvm_ioreq_server_deinit() (or dropped?) as well as comment needs
>> updating). I am asking, because this way we would get *a single* arch
>> hook here which would be arch_ioreq_server_deinit() and remove code
>> duplication a bit.
> I would arch specific init and deinit, even if one of them does nothing... but then I like symmetry :-)


Both hvm_ioreq_server_init() and hvm_ioreq_server_deinit() call the "legacy"
hvm_ioreq_server_unmap_pages(), which we want to be abstracted. The only
difference between these two usages is that the former calls it during
rollback only (i.e. in case of error). Taking into account what has been
suggested for question #1, could we just introduce
arch_ioreq_server_unmap_pages() to be called from both init and deinit?


[Not completed and not tested]

@@ -762,7 +772,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,

   fail_add:
      hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);

      hvm_ioreq_server_free_rangesets(s);

@@ -776,7 +786,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
      hvm_ioreq_server_remove_all_vcpus(s);

      /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
       *       hvm_ioreq_server_free_pages() in that order.
       *       This is because the former will do nothing if the pages
       *       are not mapped, leaving the page to be freed by the latter.
@@ -784,7 +794,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
       *       the page_info pointer to NULL, meaning the latter will do
       *       nothing.
       */
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);
      hvm_ioreq_server_free_pages(s);

      hvm_ioreq_server_free_rangesets(s);
@@ -918,7 +928,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,

      if ( ioreq_gfn || bufioreq_gfn )
      {
-        rc = hvm_ioreq_server_map_pages(s);
+        rc = arch_ioreq_server_map_pages(s);
          if ( rc )
              goto out;
      }


So it looks like, in order to leave the legacy mechanism x86-specific, we
need 4 new arch callbacks:

- arch_ioreq_server_enable
- arch_ioreq_server_disable
- arch_ioreq_server_map_pages
- arch_ioreq_server_unmap_pages
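[Editorial note] For illustration, the Arm-side stubs for those four callbacks might look roughly like this. This is a hedged sketch only: the struct layout and exact signatures are placeholders, not Xen's actual code; the -EOPNOTSUPP return is the one detail taken from the suggestion quoted above.

```c
#include <assert.h>
#include <errno.h>

/* Placeholder stand-in for Xen's ioreq server state; the real structure
 * is far richer. Used here only so the stubs have something to take. */
struct hvm_ioreq_server { int enabled; };

/* The "legacy" magic-page mechanism stays x86-only, so an Arm build would
 * refuse to map the pages... */
int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
{
    (void)s;
    return -EOPNOTSUPP;   /* per the suggestion: no legacy pages on Arm */
}

/* ...and unmapping is a safe no-op, matching the common code's expectation
 * that calling it on an unmapped server does nothing. */
void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
{
    (void)s;              /* nothing was mapped, nothing to undo */
}

/* Enable/disable hooks exist for symmetry; in this sketch the Arm side has
 * no per-arch work to do, so they are empty. */
void arch_ioreq_server_enable(struct hvm_ioreq_server *s)  { (void)s; }
void arch_ioreq_server_disable(struct hvm_ioreq_server *s) { (void)s; }
```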
>
>> Something close to this.
>> (Not completed and non tested)
>>
>> @@ -761,18 +771,17 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
>>        return 0;
>>
>>     fail_add:
>> -    hvm_ioreq_server_remove_all_vcpus(s);
>> -    hvm_ioreq_server_unmap_pages(s);
>> -
>> -    hvm_ioreq_server_free_rangesets(s);
>> -
>> -    put_domain(s->emulator);
>> +    hvm_ioreq_server_deinit(s);
>>        return rc;
>>    }
>>
>> +void arch_ioreq_server_deinit(struct hvm_ioreq_server *s)
>> +{
>> +    hvm_ioreq_server_unmap_pages(s);
>> +}
>> +
>>    static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
>>    {
>> -    ASSERT(!s->enabled);
> I assume this is the ASSERT you're referring to... There's no way we should be deinit-ing an enabled server so that should remain in common code as is.
ok, I agree.

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 16:40:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 16:40:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29144.58388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf42E-0006qp-4S; Tue, 17 Nov 2020 16:40:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29144.58388; Tue, 17 Nov 2020 16:40:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf42E-0006qi-0K; Tue, 17 Nov 2020 16:40:50 +0000
Received: by outflank-mailman (input) for mailman id 29144;
 Tue, 17 Nov 2020 16:40:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUfM=EX=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kf42D-0006qd-0R
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 16:40:49 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27a0a535-718d-40c1-9619-f7d1d6bd8b7b;
 Tue, 17 Nov 2020 16:40:46 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AHGec6a017766
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 17 Nov 2020 17:40:39 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id BC2D12E9CA8; Tue, 17 Nov 2020 17:40:33 +0100 (MET)
X-Inumbo-ID: 27a0a535-718d-40c1-9619-f7d1d6bd8b7b
Date: Tue, 17 Nov 2020 17:40:33 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201117164033.GB3093@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="OXfL5xGRrasGEqWY"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 17 Nov 2020 17:40:40 +0100 (MET)


--OXfL5xGRrasGEqWY
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> [...]
> 
> I have attached a patch below that will dump the vIO-APIC info as part
> of the 'i' debug key output, can you paste the whole output of the 'i'
> debug key when the system stalls?

See the attached file. Note that the kernel unstalled while the 'i' output
was being printed, so it is mixed with some NetBSD kernel output.
The IDT entry of the 'ioapic2 pin2' interrupt is 103 on CPU 0.

I also put the whole sequence at
http://www-soc.lip6.fr/~bouyer/xen-log3.txt

You'll see that I had to hit 'i' twice to get the NetBSD kernel to boot
multiuser.

> 
> Can you assert that you properly EOI the vectors on the local APIC? (I
> don't have a patch to dump the emulated lapic ISR right now, but could
> provide one if needed).

Reading the code, I think it's OK (assuming I properly understood what
you mean). Wouldn't it cause problems on real hardware too
if the vectors were not EOI'd?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

--OXfL5xGRrasGEqWY
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="i.txt"

(XEN) IRQ information:
(XEN)    IRQ:   0 vec:f0 IO-APIC-edge    status=000 aff:{0}/{0} arch/x86/time.c#timer_interrupt()
(XEN)    IRQ:   1 vec:70 IO-APIC-edge    status=006 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:   3 vec:f1 IO-APIC-edge    status=000 aff:{0-15}/{0-15} drivers/char/ns16550.c#ns16550_interrupt()
(XEN)    IRQ:   4 vec:78 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:   5 vec:88 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:   6 vec:90 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:   7 vec:98 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:   8 vec:a0 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:   9 vec:a8 IO-APIC-level   status=010 aff:{0}/{0} in-flight=0 d0:  9(-M-)
(XEN)    IRQ:  10 vec:b0 IO-APIC-edge    status=006 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:  11 vec:b8 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:  12 vec:c0 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:  13 vec:c8 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:  14 vec:d0 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:  15 vec:d8 IO-APIC-edge    status=002 aff:{0}/{0} mapped, unbound
(XEN)    IRQ:  16 vec:49 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=0 d0: 16(-M-)
(XEN)    IRQ:  34 vec:51 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
(XEN)    IRQ: 104 vec:30 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 105 vec:38 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 106 vec:40 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 107 vec:48 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 108 vec:50 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 109 vec:58 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 110 vec:60 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 111 vec:68 DMA_MSI         status=000 aff:{0-223}/{0} drivers/passthrough/vtd/iommu.c#iommu_page_fault()
(XEN)    IRQ: 112 vec:e0 PCI-MSI         status=030 aff:{0}/{0-7} in-flight=0 d0:1143(-M-)
(XEN)    IRQ: 113 vec:e8 PCI-MSI         status=010 aff:{0}/{0-7} in-flight=0 d0:1142(-M-)
(XEN)    IRQ: 114 vec:31 PCI-MSI         status=030 aff:{0}/{0-7} in-flight=0 d0:1141(-M-)
(XEN)    IRQ: 115 vec:39 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1140(-M-)
(XEN)    IRQ: 116 vec:41 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1139(-M-)
(XEN)    IRQ: 117 vec:8e PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1138(-M-)
(XEN)    IRQ: 118 vec:96 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1137(-M-)
(XEN)    IRQ: 119 vec:9e PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1136(-M-)
(XEN)    IRQ: 120 vec:a6 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1135(-M-)
(XEN)    IRQ: 121 vec:ae PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1134(-M-)
(XEN)    IRQ: 122 vec:b6 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1133(-M-)
(XEN)    IRQ: 123 vec:be PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1132(-M-)
(XEN)    IRQ: 124 vec:c6 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1131(-M-)
(XEN)    IRQ: 125 vec:ce PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1130(-M-)
(XEN)    IRQ: 126 vec:d6 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1129(-M-)
(XEN)    IRQ: 127 vec:de PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1128(-M-)
(XEN)    IRQ: 128 vec:e6 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1127(-M-)
(XEN)    IRQ: 129 vec:ee PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1126(-M-)
(XEN)    IRQ: 130 vec:37 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1125(-M-)
(XEN)    IRQ: 131 vec:3f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1124(-M-)
(XEN)    IRQ: 132 vec:47 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1123(-M-)
(XEN)    IRQ: 133 vec:4f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1122(-M-)
(XEN)    IRQ: 134 vec:76 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1121(-M-)
(XEN)    IRQ: 135 vec:7e PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1120(-M-)
(XEN)    IRQ: 136 vec:86 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1119(-M-)
(XEN)    IRQ: 137 vec:57 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1118(-M-)
(XEN)    IRQ: 138 vec:5f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1117(-M-)
(XEN)    IRQ: 139 vec:67 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1116(-M-)
(XEN)    IRQ: 140 vec:6f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1115(-M-)
(XEN)    IRQ: 141 vec:77 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1114(-M-)
(XEN)    IRQ: 142 vec:7f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1113(-M-)
(XEN)    IRQ: 143 vec:87 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1112(-M-)
(XEN)    IRQ: 144 vec:8f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1111(-M-)
(XEN)    IRQ: 145 vec:97 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1110(-M-)
(XEN)    IRQ: 146 vec:9f PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1109(-M-)
(XEN)    IRQ: 147 vec:a7 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1108(-M-)
(XEN)    IRQ: 148 vec:af PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1107(-M-)
(XEN)    IRQ: 149 vec:b7 PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1106(-M-)
(XEN)    IRQ: 150 vec:bf PCI-MSI/-X      status=030 aff:{0}/{0-7} in-flight=0 d0:1105(-M-)
(XEN) Direct vector information:
(XEN)    0x22 -> irq_move_cleanup_interrupt()
(XEN)    0xf2 -> arch/x86/cpu/mcheck/mce_intel.c#cmci_interrupt()
(XEN)    0xf3 -> arch/x86/cpu/mcheck/mce_intel.c#intel_thermal_interrupt()
(XEN)    0xf4 -> arch/x86/hvm/vmx/vmx.c#pi_notification_interrupt()
(XEN)    0xf9 -> pmu_apic_interrupt()
(XEN)    0xfa -> apic_timer_interrupt()
(XEN)    0xfb -> call_function_interrupt()
(XEN)    0xfc -> event_check_interrupt()
(XEN)    0xfd -> invalidate_interrupt()
(XEN)    0xfe -> error_interrupt()
(XEN)    0xff -> spurious_interrupt()
(XEN) IO-APIC interrupt information:
(XEN)     IRQ  0 Vec240:
(XEN)       Apic 0x00, Pin  2: vec=f0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  1 Vec112:
(XEN)       Apic 0x00, Pin  1: vec=70 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  3 Vec241:
(XEN)       Apic 0x00, Pin  3: vec=f1 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00015555
(XEN)     IRQ  4 Vec120:
(XEN)       Apic 0x00, Pin  4: vec=78 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  5 Vec136:
(XEN)       Apic 0x00, Pin  5: vec=88 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  6 Vec144:
(XEN)       Apic 0x00, Pin  6: vec=90 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  7 Vec152:
(XEN)       Apic 0x00, Pin  7: vec=98 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  8 Vec160:
(XEN)       Apic 0x00, Pin  8: vec=a0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ  9 Vec168:
(XEN)       Apic 0x00, Pin  9: vec=a8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=L mask=0 dest_id:00000001
(XEN)     IRQ 10 Vec176:
(XEN)       Apic 0x00, Pin 10: vec=b0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=1 dest_id:00000001
(XEN)     IRQ 11 Vec184:
(XEN)       Apic 0x00, Pin 11: vec=b8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ 12 Vec192:
(XEN)       Apic 0x00, Pin 12: vec=c0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ 13 Vec200:
(XEN)       Apic 0x00, Pin 13: vec=c8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ 14 Vec208:
(XEN)       Apic 0x00, Pin 14: vec=d0 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ 15 Vec216:
(XEN)       Apic 0x00, Pin 15: vec=d8 delivery=LoPri dest=L status=0 polarity=0 irr=0 trig=E mask=0 dest_id:00000001
(XEN)     IRQ 16 Vec 73:
(XEN)       Apic 0x00, Pin 16: vec=49 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:00000001
(XEN)     IRQ 34 Vec 81:
(XEN)       Apic 0x02, Pin  2: vec=51 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:00000001
(XEN) vIO-APIC dom0 state:
(XEN) ioapic 0 pin 0 gsi 0 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 1 gsi 1 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 2 gsi 2 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 3 gsi 3 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 4 gsi 4 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 5 gsi 5 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 6 gsi 6 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 7 gsi 7 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 8 gsi 8 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 9 gsi 9 vector 0x60
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 1 mask 0 dest id 0
(XEN) ioapic 0 pin 10 gsi 10 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 11 gsi 11 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 12 gsi 12 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 13 gsi 13 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 14 gsi 14 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 15 gsi 15 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 0 pin 16 gsi 16 vector 0x66
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 0 dest id 0
(XEN) ioapic 0 pin 17 gsi 17 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 0 pin 18 gsi 18 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 0 pin 19 gsi 19 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 0 pin 20 gsi 20 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 0 pin 21 gsi 21 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 0 pin 22 gsi 22 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 0 pin 23 gsi 23 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
(XEN) ioapic 1 pin 0 gsi 24 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 1 gsi 25 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 2 gsi 26 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 3 gsi 27 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 4 gsi 28 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 5 gsi 29 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 6 gsi 30 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 1 pin 7 gsi 31 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 0 gsi 32 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 1 gsi 33 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 2 gsi 34 vector 0x67
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 0 dest id 0
(XEN) ioapic 2 pin 3 gsi 35 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 4 gsi 36 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 5 gsi 37 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 6 gsi 38 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 2 pin 7 gsi 39 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 0 gsi 40 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 1 gsi 41 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 2 gsi 42 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 3 gsi 43 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 4 gsi 44 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 5 gsi 45 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 6 gsi 46 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 3 pin 7 gsi 47 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 0 gsi 48 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 1 gsi 49 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 2 gsi 50 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 3 gsi 51 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 4 gsi 52 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 5 gsi 53 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 6 gsi 54 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 4 pin 7 gsi 55 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 0 gsi 72 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 1 gsi 73 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 2 gsi 74 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 3 gsi 75 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 4 gsi 76 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 5 gsi 77 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 6 gsi 78 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 5 pin 7 gsi 79 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 0 gsi 80 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 1 gsi 81 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 2 gsi 82 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 3 gsi 83 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 4 gsi 84 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 5 gsi 85 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 6 gsi 86 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 6 pin 7 gsi 87 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 0 gsi 88 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 1 gsi 89 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 2 gsi 90 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 3 gsi 91 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 4 gsi 92 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 5 gsi 93 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 6 gsi 94 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 7 pin 7 gsi 95 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 0 gsi 96 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 1 gsi 97 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 2 gsi 98 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 3 gsi 99 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 4 gsi 100 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 5 gsi 101 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 6 gsi 102 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0
(XEN) ioapic 8 pin 7 gsi 103 vector 0
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 0 IRR 0 trig mode 0 mask 1 dest id 0

--OXfL5xGRrasGEqWY--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 16:49:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 16:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29149.58400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4An-0007DN-W1; Tue, 17 Nov 2020 16:49:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29149.58400; Tue, 17 Nov 2020 16:49:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4An-0007DG-Ss; Tue, 17 Nov 2020 16:49:41 +0000
Received: by outflank-mailman (input) for mailman id 29149;
 Tue, 17 Nov 2020 16:49:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+6JM=EX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kf4Am-00077m-53
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 16:49:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f556ac0-d195-4ff7-bb21-03b18afd8f2e;
 Tue, 17 Nov 2020 16:49:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9E01AC1F;
 Tue, 17 Nov 2020 16:49:37 +0000 (UTC)
X-Inumbo-ID: 3f556ac0-d195-4ff7-bb21-03b18afd8f2e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605631777; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I0mNzBaXvcn3Sk2ApEMgXh0UxObpBe/rTdVRcDEr1rM=;
	b=R3dC9uLhLmq+KOLI7JjiEENCabij7NKDrIzCz2PNgaeYrDJayQujfyUuFXPrfTDn6sKjnT
	qyWQT2jjh6bDSvUgSnaV96nxPRic2PD0att0yYXlkEO64Y8nuZvULqEuUXPzmc0DRAwlga
	2NWn5PK1ejGVWGbOhFu0Dv4qzB+Ap/Y=
Subject: Re: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to
 cpupool entries
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-12-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1c3a991b-f7c4-6aeb-6b3d-f7a8865e821a@suse.com>
Date: Tue, 17 Nov 2020 17:49:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-12-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> @@ -1057,6 +1063,43 @@ static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
>      return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>  }
>  
> +static int cpupool_gran_read(const struct hypfs_entry *entry,
> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    struct cpupool *cpupool;

const?

> +    const char *name = "";
> +
> +    data = hypfs_get_dyndata();
> +    if ( !data )
> +        return -ENOENT;
> +
> +    spin_lock(&cpupool_lock);
> +
> +    cpupool = __cpupool_find_by_id(data->id, true);
> +    if ( cpupool )
> +        name = sched_gran_get_name(cpupool->gran);
> +
> +    spin_unlock(&cpupool_lock);
> +
> +    if ( !cpupool )

May I suggest checking !*name here, to avoid giving the impression
of ...

> +        return -ENOENT;
> +
> +    return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;

... success (but an empty name) in this admittedly unlikely event?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 16:53:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 16:53:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29156.58412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4EM-00084N-Go; Tue, 17 Nov 2020 16:53:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29156.58412; Tue, 17 Nov 2020 16:53:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4EM-00084G-Cq; Tue, 17 Nov 2020 16:53:22 +0000
Received: by outflank-mailman (input) for mailman id 29156;
 Tue, 17 Nov 2020 16:53:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/ssn=EX=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kf4EK-00084B-UP
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 16:53:20 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe08::60b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9e6cb6f-8c91-4f8b-8a99-298280765a7b;
 Tue, 17 Nov 2020 16:53:17 +0000 (UTC)
Received: from DU2PR04CA0043.eurprd04.prod.outlook.com (2603:10a6:10:234::18)
 by VI1PR08MB4366.eurprd08.prod.outlook.com (2603:10a6:803:fc::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Tue, 17 Nov
 2020 16:53:14 +0000
Received: from DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234:cafe::ed) by DU2PR04CA0043.outlook.office365.com
 (2603:10a6:10:234::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.26 via Frontend
 Transport; Tue, 17 Nov 2020 16:53:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT018.mail.protection.outlook.com (10.152.20.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Tue, 17 Nov 2020 16:53:14 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Tue, 17 Nov 2020 16:53:12 +0000
Received: from aa5df48f51f4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0FC84D46-9E92-4C39-939B-DFC932CEB5F3.1; 
 Tue, 17 Nov 2020 16:52:58 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id aa5df48f51f4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 17 Nov 2020 16:52:58 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4284.eurprd08.prod.outlook.com (2603:10a6:10:d0::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 17 Nov
 2020 16:52:56 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3564.028; Tue, 17 Nov 2020
 16:52:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/ssn=EX=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kf4EK-00084B-UP
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 16:53:20 +0000
X-Inumbo-ID: d9e6cb6f-8c91-4f8b-8a99-298280765a7b
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown [2a01:111:f400:fe08::60b])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d9e6cb6f-8c91-4f8b-8a99-298280765a7b;
	Tue, 17 Nov 2020 16:53:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f+oQyniPepmnOVJ6lwHyL6t/EAAi8uZDDV9+CNCxSl4=;
 b=nAYaz8FBCysvRNX376/Rv3QilFw6G3zLSCWECpmyCkMRa2OHhrWOfA6gFMKLL/acCXVrsZNNwdLp3kHnPHTwowQV3kDWHAzmj+UyBwRR4KHZR6DxJ4VIQgaCo/0t5THNey2yjC2XRV+ld+tC8RxF6+OUCfDGepIYW7oITJJ5rTE=
Received: from DU2PR04CA0043.eurprd04.prod.outlook.com (2603:10a6:10:234::18)
 by VI1PR08MB4366.eurprd08.prod.outlook.com (2603:10a6:803:fc::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Tue, 17 Nov
 2020 16:53:14 +0000
Received: from DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234:cafe::ed) by DU2PR04CA0043.outlook.office365.com
 (2603:10a6:10:234::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.26 via Frontend
 Transport; Tue, 17 Nov 2020 16:53:14 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT018.mail.protection.outlook.com (10.152.20.69) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.22 via Frontend Transport; Tue, 17 Nov 2020 16:53:14 +0000
Received: ("Tessian outbound 797fb8e1da56:v71"); Tue, 17 Nov 2020 16:53:12 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: da46682fd13d9f54
X-CR-MTA-TID: 64aa7808
Received: from aa5df48f51f4.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 0FC84D46-9E92-4C39-939B-DFC932CEB5F3.1;
	Tue, 17 Nov 2020 16:52:58 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id aa5df48f51f4.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Tue, 17 Nov 2020 16:52:58 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FYk+MsV32nmsY9qMTK/M/bwtFtA21yzfVv5gaR1U8e6HMfBdtdzbdgtNL6fc0hoXoGIT7rcMC1UEKzsfi/lE1fjrO5JJrAqMq2XrJtBOfVQ8syP5O+jlHvfmn0i7UUmnvZmMbWojesQoi920btplOwzsqm0WnJDnwjk3tJoge7vctMaOhb2rHvF7mIXDOtmgFjV6xhJASaINTbv8Yqf22Hn360jp3ooy9s8fWbqHn6DbWqFC5TQpBJNAKgJRrTKI0BWUTe9SMuRoe9OIc4GCb7cLS0ZALl8sYQnMPT3zeb0YmGCV+GULpTJfwyi1B07Z+XGo+HBn4+4/2pd3dv31Tg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f+oQyniPepmnOVJ6lwHyL6t/EAAi8uZDDV9+CNCxSl4=;
 b=hTyInCmf1co6FRsGU7oxMxKYQfWW9OkxgUfGxV4YwbaXeb9QhE/nssrhrUraye32vUj3IQCnKwE6n70nvbU+f/UtvYXPhpM4eqRO+NYRr2kaFlIRMM4zyUGUf6Qmp0xeNeU/urC0yWhLX/HGpu4wDhxvLk2NF5aDTun19RQz7bvESwvzx671LOihZDrsAE3iR/WSQN+zhP5bU6enf3GAT3V4Ib/P+zzRFC+BFh8Ay5eXdlaKBFBda3VohYvvefa9BIOtYvO12K7SkpNox9MQ+5CmQfH0qNNsq3znwExUMIE0YXzNK0SJKa6G5LmV8cCbkAcdE++sYbPK+Xgtld+u4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f+oQyniPepmnOVJ6lwHyL6t/EAAi8uZDDV9+CNCxSl4=;
 b=nAYaz8FBCysvRNX376/Rv3QilFw6G3zLSCWECpmyCkMRa2OHhrWOfA6gFMKLL/acCXVrsZNNwdLp3kHnPHTwowQV3kDWHAzmj+UyBwRR4KHZR6DxJ4VIQgaCo/0t5THNey2yjC2XRV+ld+tC8RxF6+OUCfDGepIYW7oITJJ5rTE=
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4284.eurprd08.prod.outlook.com (2603:10a6:10:d0::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 17 Nov
 2020 16:52:56 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3564.028; Tue, 17 Nov 2020
 16:52:56 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86 directory.
Thread-Topic: [PATCH v3 2/3] xen/pci: Move x86 specific code to x86 directory.
Thread-Index: AQHWvBPDynVNQDWjWUaLWbo/nKPcx6nMKoUAgABhhIA=
Date: Tue, 17 Nov 2020 16:52:56 +0000
Message-ID: <232F5C48-745B-44E3-A56F-3F492A7E7D60@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <a84005e5aa6733043e043b015cde4983719c8535.1605527997.git.rahul.singh@arm.com>
 <a6e6d884-93c0-f9ac-c2b4-b264c5a72db1@suse.com>
In-Reply-To: <a6e6d884-93c0-f9ac-c2b4-b264c5a72db1@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d8a0a63a-707b-40f5-78d4-08d88b194647
x-ms-traffictypediagnostic: DBBPR08MB4284:|VI1PR08MB4366:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4366539CCAEE09CD8E9BD02BFCE20@VI1PR08MB4366.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 +ohGQD3cdKeUdEjB47uosJQ+EiGLSExGWNVMee9gHSUv4nHoYfi7MlRWldtc08QdjNL1oCXp4f9fRCc67jD5HqvbIs8Z9KII98tzuQ30cooQizw6nXM0CMiFp1i6A+IXIk65C6WBVFHZIzZvZoEHNYwc4dvu+uTxxn7IxaYYOozqqbx1I9ZPLb7D3Gnc/4OgBxR9wIKuSh2TAEsnKsXC01QpvQtaIOFEiNLpa/pnqJAw/lgMwnUb3FD0jCTeOk4UfDPIpYdRVK6UZyipuVDrfKcoISDb2T/nLejw7OVrM1oghAOTCD2QhbsNxUm0KpAP
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(346002)(39860400002)(396003)(376002)(6916009)(86362001)(91956017)(6486002)(6512007)(8676002)(8936002)(83380400001)(33656002)(2906002)(76116006)(66946007)(66476007)(66556008)(64756008)(66446008)(4326008)(53546011)(6506007)(316002)(54906003)(26005)(478600001)(5660300002)(186003)(2616005)(71200400001)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 3afkM+5NNJ/poR/xpcGBAKRgMv/kHDfttHREBDM5d2/YWcqV7DxArxf2sBKHqaN2lX1YKMJLTxvR4ppno04sroRJeqaPnjoL2PNdMs/u0fwp7/02Euq7GPpx80akBfK8k10WD2YgdhNatDvDbjF9fdMPNUQoMwgqbh3zdMUP7cH3khpYBMzTnh4K24OaQoWWLqzwYKuLuSGdFqqZ8Ik6CiJYTrMdTigWhMxEPziYuaotINJyMkP9rqoi6Xcdi+7+wE4AK9FI+jFWZawPLlBaRWyrwMPUAPmLhlXG90IezHntqqtAXiZuSUDvF3DW4SKqgUq6Wivi14x9eU6FVrgUNlUMvWnF1wfUYjDQJNXC8CxYx2nAbzrppWrRjLvcs5B6w5JEGtYgAZL06JAJhmh2XwAcKS7RE45YlFYQqwYnKeIjdsw4hjfxmi8Wg4O4/4JvQQMhV10rUR1SpGbwlFMOQGwYR2Dz6VG12EMQCxrlBLPXfQOXeIhoilhgW/7Dtr8q+2oA21kdK/qMQ6b+WYhGsqdvl2br1c6YhaNcExuKYPazPxGKFW5oisuuXJAdBIqBxQP5ooOhAHxc9pfmoOknpFbqRPDgkGBnpjxzmcgxNLiPuscWHlLTBYoxerxf3miFR2Gj51cNSbGV9bLkGI4a7CiWOeL8/73b9QDmpbSQKrSR5h5vmXAsu1+efj0RULcpymPnTv+FHGpnBTWLtjSZgfECM7dy1n/cuKg2Cj38abfaCdHcI4lwhaH2JfVCKfPr71gg9iaMKAjsKFLpeOfRzUetlF/6YfphQ9AABZS+bwdR3r6+TIUKVDnfbifsbDEfvNUAHpMjwC0WLYkSP4+9IcpviDqi77sRVoNN7v1oA1uBTgoCDg4N81+S1QFhLPpqT3AoEsh0njvar2vIK4DF4Q==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <56CB40668698484FBF37A2418118DA12@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4284
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fd634b85-117e-41eb-38c6-08d88b193bc6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OWF+cieLWkzxYpFpJ1DTrFOHgPYO0tWBDF6LcLWHQ3zkZAk0TV77ECaXUQNLnm9JvWOVQMz+m2OqVZk+0yOSnm3Riz+Z1Zol7stiOnbPzNBzYe/KmcvVLjN3k06vrLH6YT9JPDzC1YxZ6XuqRespEiV0zEqAjdp/T+4LVWf6NW+yqjzBsJ/xqHEDl+VDdoXGyvQx6qlbBIOCerJTKmBcG0x8tq8B+bovOK1gfPXoAjQVtA6UEXhM2V/hT/mZrQMl4uyOhgALKF9F2cUSOy7OpdBvkSWLmgFEFz1+sWGuLemfbT3DWXQcb5Z0CaA9CzTUNa090cNtyLsADBa08Fw0lXMuMwnPKapqWZoV4I06wtYfLtDVImeuKBWl7p5TTGY4gXWJLLwfcg1VTgDIpeEHPg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(346002)(396003)(46966005)(53546011)(36756003)(6506007)(6486002)(8936002)(478600001)(83380400001)(82310400003)(6512007)(33656002)(26005)(6862004)(4326008)(8676002)(70206006)(86362001)(186003)(5660300002)(336012)(81166007)(316002)(70586007)(356005)(54906003)(47076004)(82740400003)(2616005)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 16:53:14.2237
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d8a0a63a-707b-40f5-78d4-08d88b194647
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4366

Hello Jan,

> On 17 Nov 2020, at 11:03 am, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 16.11.2020 13:25, Rahul Singh wrote:
>> passthrough/pci.c file is common for all architecture, but there is x86
>> specific code in this file.
>
> In how far is ...
>
>> @@ -1370,13 +1301,6 @@ static int __init setup_dump_pcidevs(void)
>> }
>> __initcall(setup_dump_pcidevs);
>>
>> -int iommu_update_ire_from_msi(
>> -    struct msi_desc *msi_desc, struct msi_msg *msg)
>> -{
>> -    return iommu_intremap
>> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
>> -}
>
> ... this code x86-specific? The hook being called lives in a
> #ifdef CONFIG_HAS_PCI section, and MSI is a general PCI sub-
> feature. IOW if this is another workaround, it should be
> called so (if there's really no other way to address whatever
> issue there is), which in turn likely means it wants to be in
> a separate patch.
>

As of now there is no implementation for ARM to remap the interrupt, and
interrupt remapping is enabled for x86 only, so I thought I would move this
code to the x86 file, as this function is called from x86-specific code only.

I will remove this code from this patch and will fix it once we have a
proper MSI implementation for ARM.
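[Editor's note: a hypothetical, simplified C model of the helper under discussion may clarify why it is x86-only today. The struct layouts, `update_ire_stub()` and the plain `iommu_intremap` variable below are stand-ins invented for illustration; the real types and the `iommu_call()` dispatch live in Xen's headers.]

```c
/* Simplified stand-ins for Xen's real struct msi_desc / struct msi_msg. */
struct msi_desc { int vector; };
struct msi_msg  { unsigned int data; };

static int iommu_intremap = 1;  /* x86: non-zero once intremap is set up */

/* Stand-in for iommu_call(&iommu_ops, update_ire_from_msi, ...). */
static int update_ire_stub(struct msi_desc *desc, struct msi_msg *msg)
{
    msg->data = (unsigned int)desc->vector;  /* pretend the driver rewrote it */
    return 0;
}

/*
 * x86 shape: forward to the IOMMU driver only when interrupt remapping is
 * active.  On an Arm build with no remapping support, iommu_intremap is
 * permanently off and the function degenerates to "return 0" -- which is
 * why moving it wholesale into x86 code was proposed.
 */
int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg)
{
    return iommu_intremap ? update_ire_stub(msi_desc, msg) : 0;
}
```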

> Jan



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 17:06:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 17:06:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29179.58424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4Qa-0000lu-RY; Tue, 17 Nov 2020 17:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29179.58424; Tue, 17 Nov 2020 17:06:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4Qa-0000ln-O4; Tue, 17 Nov 2020 17:06:00 +0000
Received: by outflank-mailman (input) for mailman id 29179;
 Tue, 17 Nov 2020 17:05:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kf4QZ-0000ld-E4
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 17:05:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22aafae3-e6ba-4c30-85d9-32e07ccf0ced;
 Tue, 17 Nov 2020 17:05:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0DC05AC23;
 Tue, 17 Nov 2020 17:05:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=J6Lq=EX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kf4QZ-0000ld-E4
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 17:05:59 +0000
X-Inumbo-ID: 22aafae3-e6ba-4c30-85d9-32e07ccf0ced
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 22aafae3-e6ba-4c30-85d9-32e07ccf0ced;
	Tue, 17 Nov 2020 17:05:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605632752; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jU9zgjzkKm5TUxHW9+jxf6dwu+x5eJjIvscgzJqljts=;
	b=CQHuwSV9yrBce7wTbYFvHE8U+rYq4jrtQ2Sl8jJAfnPLnzVTBL2rdNg4ObVNjtgx3SIC81
	oNVXgDwotdZmaUY0OC/9mcWHM3UwuCpwq7Zp4F3J9QBqyVx6qrfKGPIuujmP02YsM6KlmF
	Y4eeh3jCn4zt8dBB53ob2A7tSxYCf9I=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0DC05AC23;
	Tue, 17 Nov 2020 17:05:52 +0000 (UTC)
Subject: Re: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to
 cpupool entries
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-12-jgross@suse.com>
 <1c3a991b-f7c4-6aeb-6b3d-f7a8865e821a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <553484e9-a73e-8469-f6cf-ae834cc7edc1@suse.com>
Date: Tue, 17 Nov 2020 18:05:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1c3a991b-f7c4-6aeb-6b3d-f7a8865e821a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="5vFfxkmIiPxfeUjAtCWGbHdyj5LCZI1pz"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--5vFfxkmIiPxfeUjAtCWGbHdyj5LCZI1pz
Content-Type: multipart/mixed; boundary="lmilOCPDHuVXtkyy3V4tTabNVbZjItU5U";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
Message-ID: <553484e9-a73e-8469-f6cf-ae834cc7edc1@suse.com>
Subject: Re: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to
 cpupool entries
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-12-jgross@suse.com>
 <1c3a991b-f7c4-6aeb-6b3d-f7a8865e821a@suse.com>
In-Reply-To: <1c3a991b-f7c4-6aeb-6b3d-f7a8865e821a@suse.com>

--lmilOCPDHuVXtkyy3V4tTabNVbZjItU5U
Content-Type: multipart/mixed;
 boundary="------------721BE416B4F46C379A1AF4F9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------721BE416B4F46C379A1AF4F9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.11.20 17:49, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -1057,6 +1063,43 @@ static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
>>       return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>>   }
>>
>> +static int cpupool_gran_read(const struct hypfs_entry *entry,
>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +    struct cpupool *cpupool;
>
> const?

Yes.

>
>> +    const char *name = "";
>> +
>> +    data = hypfs_get_dyndata();
>> +    if ( !data )
>> +        return -ENOENT;
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    cpupool = __cpupool_find_by_id(data->id, true);
>> +    if ( cpupool )
>> +        name =3D sched_gran_get_name(cpupool->gran);
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    if ( !cpupool )
>
> May I suggest to check !*name here, to avoid giving the impression
> of ...
>
>> +        return -ENOENT;
>> +
>> +    return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;
>
> ... success (but an empty name) in this admittedly unlikely event?

Fine with me.
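[Editor's note: folding in both of Jan's comments, the handler's control flow would look roughly as below. This is a self-contained sketch, not Xen code: `struct cpupool`, `find_pool()`, the buffer-based copy and the sample pool table are invented stand-ins for the real `cpupool_lock`, `hypfs_get_dyndata()` and `copy_to_guest()` machinery.]

```c
#include <errno.h>
#include <string.h>

/* Toy model of a cpupool with just the fields this sketch needs. */
struct cpupool { int id; const char *gran_name; };

static struct cpupool pools[] = { { 0, "core" }, { 1, "cpu" } };

static const struct cpupool *find_pool(int id)
{
    for (unsigned int i = 0; i < sizeof(pools) / sizeof(pools[0]); i++)
        if (pools[i].id == id)
            return &pools[i];
    return NULL;  /* pool vanished between directory walk and read */
}

static int cpupool_gran_read(int id, char *buf, size_t len)
{
    const struct cpupool *cpupool;  /* first comment: const-qualify */
    const char *name = "";

    /* spin_lock(&cpupool_lock) would go here */
    cpupool = find_pool(id);
    if (cpupool)
        name = cpupool->gran_name;
    /* spin_unlock(&cpupool_lock) */

    if (!*name)  /* second comment: key off the empty name, not the pointer */
        return -ENOENT;

    if (strlen(name) + 1 > len)
        return -EFAULT;  /* stands in for a copy_to_guest() failure */
    memcpy(buf, name, strlen(name) + 1);
    return 0;
}
```

Checking `!*name` covers both the pool having disappeared and the unlikely case of a pool whose granularity name is empty, so success can never return an empty string.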


Juergen

--------------721BE416B4F46C379A1AF4F9
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------721BE416B4F46C379A1AF4F9--

--lmilOCPDHuVXtkyy3V4tTabNVbZjItU5U--

--5vFfxkmIiPxfeUjAtCWGbHdyj5LCZI1pz
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+0Au8FAwAAAAAACgkQsN6d1ii/Ey/2
tAf/fAgoU1D7poVoAEBE74aSBKZlRQNNePTdW8f37nM/qQCtOSSFa4qGGhlcyvKXf3Vx+LA4634d
p6wEOUFCriBzkylewEoVw0nwGZ2YZnBbXNFhx5XMkpob00ut+uegkV8Sm9yMXbJsrkFgUclndBkL
eXXZGWSoanraR6mXYDAqRjAE1g9NjyvU0q66gLuAf6OiHANgwVUWuPgvpI+DFoZ/uKx0Tv5ggXPh
yT2m5YjdztfoKvtIrt/dm3KbOBatnkKwTDu+V/G0SM0VmfCqRWRW0tcl0l0II1KiQBKxklQohwnm
wBSZUApiTzPV0QVo4YKRtjJIRtJoEsINe7xryOrf0A==
=TDBS
-----END PGP SIGNATURE-----

--5vFfxkmIiPxfeUjAtCWGbHdyj5LCZI1pz--


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 17:30:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 17:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29188.58436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4oA-0003ZT-Rv; Tue, 17 Nov 2020 17:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29188.58436; Tue, 17 Nov 2020 17:30:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf4oA-0003ZM-Og; Tue, 17 Nov 2020 17:30:22 +0000
Received: by outflank-mailman (input) for mailman id 29188;
 Tue, 17 Nov 2020 17:30:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kf4o9-0003ZH-D4
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 17:30:21 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kf4o8-0003ib-T3; Tue, 17 Nov 2020 17:30:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kf4o9-0003ZH-D4
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 17:30:21 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kf4o8-0003ib-T3; Tue, 17 Nov 2020 17:30:21 +0000
Subject: Re: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com
References: <20201116121140.26763-1-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c7475d91-c956-3e2c-4445-ef5c005ff465@xen.org>
Date: Tue, 17 Nov 2020 17:30:19 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201116121140.26763-1-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 16/11/2020 12:11, Michal Orzel wrote:
> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
> if a virtual address for a cacheable mapping of a location is being
> accessed by a core while another core is remapping the virtual
> address to a new physical page using the recommended break-before-make
> sequence, then under very rare circumstances TLBI+DSB completes before
> a read using the translation being invalidated has been observed by
> other observers. The workaround repeats the TLBI+DSB operation
> for all the TLB flush operations on purpose.

Sorry for nitpicking, but the commit message should contain enough
information for a future reader to understand why this was done "on purpose".

So how about:

"The workaround repeats the TLBI+DSB operation for all the TLB flush
operations. While this is strictly not necessary, we don't want to take
any risk."

I can fix it on commit.

Reviewed-by: Julien Grall <jgrall@amazon.com>

> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
> ---
>   docs/misc/arm/silicon-errata.txt     |  2 ++
>   xen/arch/arm/Kconfig                 | 23 +++++++++++++++++++++
>   xen/arch/arm/cpuerrata.c             | 14 +++++++++++++
>   xen/include/asm-arm/arm64/flushtlb.h | 30 +++++++++++++++++++---------
>   xen/include/asm-arm/cpufeature.h     |  3 ++-
>   5 files changed, 62 insertions(+), 10 deletions(-)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index 552c4151d3..d183ba543f 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -53,5 +53,7 @@ stable hypervisors.
>   | ARM            | Cortex-A72      | #853709         | N/A                     |
>   | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
>   | ARM            | Cortex-A76      | #1165522        | N/A                     |
> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
>   | ARM            | Neoverse-N1     | #1165522        | N/A                     |
> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
>   | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f938dd21bd..8171b8d04a 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -244,6 +244,29 @@ config ARM_ERRATUM_858921
>   
>   	  If unsure, say Y.
>   
> +config ARM64_WORKAROUND_REPEAT_TLBI
> +	bool
> +
> +config ARM64_ERRATUM_1286807
> +	bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
> +	default y
> +	select ARM64_WORKAROUND_REPEAT_TLBI
> +	depends on ARM_64
> +	help
> +	  This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
> +
> +	  On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
> +	  address for a cacheable mapping of a location is being
> +	  accessed by a core while another core is remapping the virtual
> +	  address to a new physical page using the recommended
> +	  break-before-make sequence, then under very rare circumstances
> +	  TLBI+DSB completes before a read using the translation being
> +	  invalidated has been observed by other observers. The
> +	  workaround repeats the TLBI+DSB operation for all the TLB flush
> +	  operations on purpose.
If you agree with what I wrote above, I will update this section and ...

> +
> +	  If unsure, say Y.
> +
>   endmenu
>   
>   config ARM64_HARDEN_BRANCH_PREDICTOR
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 567911d380..cb4795beec 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>                      (1 << MIDR_VARIANT_SHIFT) | 2),
>       },
>   #endif
> +#ifdef CONFIG_ARM64_ERRATUM_1286807
> +    {
> +        /* Cortex-A76 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +    {
> +        /* Neoverse-N1 r0p0 - r3p0 */
> +        .desc = "ARM erratum 1286807",
> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
> +    },
> +#endif
>   #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>       {
>           .capability = ARM_HARDEN_BRANCH_PREDICTOR,
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index ceec59542e..8f2abfaf1d 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -9,6 +9,12 @@
>    * DSB ISH          // Ensure the TLB invalidation has completed
>    * ISB              // See explanation below
>    *
> + * ARM64_WORKAROUND_REPEAT_TLBI:
> + * Modification of the translation table for a virtual address might lead to
> + * read-after-read ordering violation.
> + * The workaround repeats TLBI+DSB operation for all the TLB flush operations
> + * on purpose.

... this section while committing.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 18:13:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 18:13:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29199.58448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5TX-0007bN-7F; Tue, 17 Nov 2020 18:13:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29199.58448; Tue, 17 Nov 2020 18:13:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5TX-0007bG-3M; Tue, 17 Nov 2020 18:13:07 +0000
Received: by outflank-mailman (input) for mailman id 29199;
 Tue, 17 Nov 2020 18:13:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kf5TV-0007bB-PR
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 18:13:05 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kf5TV-0006wG-B9; Tue, 17 Nov 2020 18:13:05 +0000
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 "committers@xenproject.org" <committers@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20201109163826.13035-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
Date: Tue, 17 Nov 2020 18:13:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201109163826.13035-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 09/11/2020 16:38, Juergen Gross wrote:
> The patches for XSA-343 produced some fallout, especially the event
> channel locking has shown to be problematic.
> 
> Patch 1 is targeting fifo event channels for avoiding any races for the
> case that the fifo queue has been changed for a specific event channel.
> 
> The second patch is modifying the per event channel locking scheme in
> order to avoid deadlocks and problems due to the event channel lock
> having been changed to require IRQs off by the XSA-343 patches.
> 
> Changes in V6:
> - added patch 3 (Jan Beulich)
> - switched some more read_trylock() cases to read_lock() (Jan Beulich)
> 
> Changes in V5:
> - moved evtchn_write_[un]lock() to event_channel.c (Jan Beulich)
> - used normal read_lock() in some cases (Jan Beulich)
> 
> Changes in V4:
> - switched to real rwlock
> 
> Changes in V3:
> - addressed comments
> 
> Juergen Gross (3):
>    xen/events: access last_priority and last_vcpu_id together
>    xen/evtchn: rework per event channel lock
>    xen/evtchn: revert 52e1fc47abc3a0123

While looking at the list of commits, I noticed that the first patch 
hasn't been committed. All three were acked/reviewed, so I am a bit 
puzzled why this one was omitted...

I nearly missed it, as I was expecting the 3 patches to be committed 
together. May I suggest that in the future we reply to the cover letter 
and mention which patches are (or are not) committed?

Regarding patch #1, I will commit it tomorrow unless there are strong 
objections.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 18:24:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 18:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29206.58484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5eg-0000N7-RB; Tue, 17 Nov 2020 18:24:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29206.58484; Tue, 17 Nov 2020 18:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5eg-0000Mz-Nf; Tue, 17 Nov 2020 18:24:38 +0000
Received: by outflank-mailman (input) for mailman id 29206;
 Tue, 17 Nov 2020 18:24:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dD8=EX=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kf5ef-0000JQ-4P
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 18:24:37 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1021ad91-c687-437e-bb8b-37f15fd9eb24;
 Tue, 17 Nov 2020 18:24:31 +0000 (UTC)
X-Inumbo-ID: 1021ad91-c687-437e-bb8b-37f15fd9eb24
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605637471;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=4HMvIBx0BI9P0e9tGOp3LfzAc3z+4o6/c/L/wV9Ft/A=;
  b=J0+UpsOG/fTGmgpotvsj/UAqJqiVo7Ygp2FoTEcJk/i7FwggeEogPYrA
   0a4pshEdQIHxYOnwokInzzVx0qY61Xvdd+PcTgELTV6EvHqkFDIKc8BQr
   ar7JykjHo+tXvd4fKSEDp1qkFs1FxHUsKgZyPn8toV96olMJUxQj+vioy
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yfjK8pbVZlS8xaoxMdLChXUuOw2PJlmlBBbqQb8r11PGMT8NegFwLaxXk/dJuzgwoURSX4J4eZ
 5qr/H0iuOUHpuH/ZBXP7oDwmOXQU4prMecmOTSJf2K9OjmKvlLzj/gR4qx34WuOJSSLh7OvP0q
 zr0Sf6RQhYn4NO/m5Hgc4DXoZN/2KGpZChVkcoYMmGaAOsHrGbrWckiuP5m3EBlumWnfcMNb1g
 HZFyDVq2NrmLnH6wbVs7uNRNw8FRhT5rPeg1YktZ+IO0AX5TAtlzqzrkIM8Kb3ZA/VdJnSvd4L
 51c=
X-SBRS: None
X-MesageID: 32508574
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="32508574"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Doug
 Goldstein" <cardoe@cardoe.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Subject: [PATCH v1 0/4] tools/ocaml/libs/xc: domid control at domain creation time
Date: Tue, 17 Nov 2020 18:24:08 +0000
Message-ID: <cover.1605636799.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.18.4
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The xl toolstack allows some control over the domid at VM creation time;
this series gives xenopsd similar control by exposing the appropriate domid field
in the OCaml xenctrl bindings.
A new API function is introduced to preserve backwards compatibility without merge-ordering
requirements between the Xen and xenopsd patches: Xen can merge the patch while xenopsd keeps
building with the old function, and a new version of xenopsd can then start using the new one.

I've also included some build system fixes to allow me to test the build
in an upstream build environment:
```
cd automation/build
podman build -t registry.gitlab.com/xen-project/xen/ubuntu:focal -f ubuntu/focal.dockerfile ubuntu
DOCKER_CMD=podman CONTAINER_NO_PULL=1 CONTAINER=registry.gitlab.com/xen-project/xen/ubuntu:focal automation/scripts/containerize make build-tools-oxenstored
```

It'd be good if someone could test whether containerize still works on non-SELinux systems now, or
whether we need more detection logic in the script.
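One possible shape for such detection is sketched below. This is only a
suggestion, not part of the series; `selinux_suffix` is a made-up helper,
while `getenforce` is the standard SELinux status tool:

```shell
#!/bin/sh
# Sketch: only add the ",z" relabel suffix when SELinux is actually active.
selinux_suffix() {
    case "$1" in
        Enforcing|Permissive) echo ",z" ;;
        *) echo "" ;;
    esac
}
# getenforce prints Enforcing/Permissive/Disabled; it is absent on
# non-SELinux systems, in which case the suffix stays empty.
selinux=$(selinux_suffix "$(getenforce 2>/dev/null)")
echo "suffix on an enforcing host: '$(selinux_suffix Enforcing)'"
```

On a non-SELinux system `$selinux` stays empty and the volume mounts are
unchanged, which is the behaviour the current script relies on.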

This works around bugs in the OCaml makefiles that result in "inconsistent assumptions"
errors by doing a 'make clean' before building the OCaml files every time.
This is inefficient, but works.
Long term it would be beneficial to switch to Dune as the build system,
which can do correct incremental builds with minimal configuration.
I'll send a separate patch series for that.

Edwin Török (4):
  automation/scripts/containerize: fix DOCKER_CMD=podman
  automation/: add Ubuntu:focal container
  Makefile: add build-tools-oxenstored
  tools/ocaml/libs/xc: backward compatible domid control at domain
    creation time

 Makefile                                 |  6 +++
 automation/build/ubuntu/focal.dockerfile | 50 ++++++++++++++++++++++++
 automation/scripts/containerize          |  7 ++--
 tools/ocaml/Makefile                     |  8 ++++
 tools/ocaml/libs/xc/xenctrl.ml           |  3 ++
 tools/ocaml/libs/xc/xenctrl.mli          |  2 +
 tools/ocaml/libs/xc/xenctrl_stubs.c      |  9 ++++-
 7 files changed, 80 insertions(+), 5 deletions(-)
 create mode 100644 automation/build/ubuntu/focal.dockerfile

-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 18:24:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 18:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29204.58460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ec-0000Jk-9d; Tue, 17 Nov 2020 18:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29204.58460; Tue, 17 Nov 2020 18:24:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ec-0000Ja-64; Tue, 17 Nov 2020 18:24:34 +0000
Received: by outflank-mailman (input) for mailman id 29204;
 Tue, 17 Nov 2020 18:24:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dD8=EX=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kf5ea-0000JQ-9R
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 18:24:32 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fa9788e-34b1-401f-83d8-b0ee46c4d5b4;
 Tue, 17 Nov 2020 18:24:31 +0000 (UTC)
X-Inumbo-ID: 1fa9788e-34b1-401f-83d8-b0ee46c4d5b4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605637471;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=+xpoQqJ/pbgvLZlCMz1D78iIlwiuXy9MMBl0DGtdCWE=;
  b=CA5HiP0bTqJl34cCj37MKkMHVFDfiG8HgMgKqpkAyJkvXcgiWaOXHLjm
   UW77X5sRFZ70SInqNyhk8po1UXdMBEspaDSerOiG5V0tFFZy3Q47yWPRR
   7FGpmX21ZzFaxg7bFojPhcW5kPVXqTPKutXalRMyNsWVln2WL4oBTOsb7
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: PZ7B6VkG7TYTQ2LzJFIJRinvipug0hywn82BZW1WgXjggOwuGai/BfAMIa5zeFsaSgs3RNVUeP
 aNMbS7ycFA2wEYeeazq0OTL4nM/NVBgaogeFB2rD0j6AVldImmO9vekRPzem3mTAXMWtDHn0s1
 Ub31NwE0KhnTSGXDpNY86dadJgb9iArgWkXeh0ytArYEotGe5Rj7Eu9Ar7OIhH8Yx8xeQb8mAl
 Uk7R+fjlrbDniwA0iPQqPuRuMdvv8j4J6HzD02EloMPWcMz0xmnHpz/ElatiFrUQWc4UYYEHF8
 284=
X-SBRS: None
X-MesageID: 31385518
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31385518"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Doug
 Goldstein" <cardoe@cardoe.com>
Subject: [PATCH v1 1/4] automation/scripts/containerize: fix DOCKER_CMD=podman
Date: Tue, 17 Nov 2020 18:24:09 +0000
Message-ID: <28469d0fea059a51694c6fa3b5bd3971696a4f13.1605636800.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <cover.1605636799.git.edvin.torok@citrix.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On CentOS 8 with SELinux enabled, containerize doesn't work at all.

Make sure that the source code and SSH agent directories are passed on
with SELinux relabeling enabled.
(`--security-opt label=disable` would be another option)

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 automation/scripts/containerize | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index a75d54566c..ed991bb79c 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -7,7 +7,7 @@
 # and /etc/subgid.
 #
 docker_cmd=${DOCKER_CMD:-"docker"}
-[ "$DOCKER_CMD" = "podman" ] && userns_podman="--userns=keep-id"
+[ "$DOCKER_CMD" = "podman" ] && userns_podman="--userns=keep-id" selinux=",z"
 
 einfo() {
     echo "$*" >&2
@@ -95,9 +95,9 @@ einfo "*** Launching container ..."
 exec ${docker_cmd} run \
     ${userarg} \
     ${SSH_AUTH_SOCK:+-e SSH_AUTH_SOCK="/tmp/ssh-agent/${SSH_AUTH_NAME}"} \
-    -v "${CONTAINER_PATH}":/build:rw \
+    -v "${CONTAINER_PATH}":/build:rw${selinux} \
     -v "${HOME}/.ssh":/root/.ssh:ro \
-    ${SSH_AUTH_DIR:+-v "${SSH_AUTH_DIR}":/tmp/ssh-agent} \
+    ${SSH_AUTH_DIR:+-v "${SSH_AUTH_DIR}":/tmp/ssh-agent${selinux}} \
     ${XEN_CONFIG_EXPERT:+-e XEN_CONFIG_EXPERT=${XEN_CONFIG_EXPERT}} \
     ${CONTAINER_ARGS} \
     -${termint}i --rm -- \
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 18:24:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 18:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29205.58466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ec-0000KR-L0; Tue, 17 Nov 2020 18:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29205.58466; Tue, 17 Nov 2020 18:24:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ec-0000K5-Ef; Tue, 17 Nov 2020 18:24:34 +0000
Received: by outflank-mailman (input) for mailman id 29205;
 Tue, 17 Nov 2020 18:24:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dD8=EX=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kf5eb-0000JV-A4
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 18:24:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 685c27c8-d13e-44bb-b2c3-2762a5b9c7fb;
 Tue, 17 Nov 2020 18:24:32 +0000 (UTC)
X-Inumbo-ID: 685c27c8-d13e-44bb-b2c3-2762a5b9c7fb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605637472;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=eJ2hLKgOfS8/P8g/B8+o58K3OleQXpKSYMGB+jKi44U=;
  b=X04BZn3DyFbok95nRtc3LwEcRPZwNWCVqFNWH80Xv/J1lrtKnyHYGLH0
   EvnOEX3lSRx329ue1c8jC6Fu4aQvk0TPqarYPrAZb4tckZ9Z48/ZC1Hpi
   KRPqeQY/lfQAxMKvfhvd5maVbqBnaIazi74li5XvawTl89l2nuBnol7N5
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Duet1FZ93wPHjC75zY2ZTMyUr//3RCwPwahrIs/3iL1IC6BUuvJE9QfJRy1GENt7YOb79PvcI4
 GUe1Ym/N+jKTQcxQHlY+LNPJKNMAzRE6bKP/pz4zsb0zy208zS4gZpbdE2ZeSbEbZ1PDXDK3R8
 EEuFh18kA64U6gNz4sfOad44H28UqM1yIXvY4D9oBWnjLG972VWjCI/gAkBLbJ4dvaAckVRFiX
 gN4rxYrYlJoPV7Hl2BJ6CeOL9FT4u9Q4XQldWOYPKbgcVouepKmwFZePAWU98szHW7dovLbTz3
 Iac=
X-SBRS: None
X-MesageID: 31365326
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31365326"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Doug
 Goldstein" <cardoe@cardoe.com>
Subject: [PATCH v1 2/4] automation/: add Ubuntu:focal container
Date: Tue, 17 Nov 2020 18:24:10 +0000
Message-ID: <42b2b80779e264d60fa3daf01110fece34f00696.1605636800.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <cover.1605636799.git.edvin.torok@citrix.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 automation/build/ubuntu/focal.dockerfile | 50 ++++++++++++++++++++++++
 automation/scripts/containerize          |  1 +
 2 files changed, 51 insertions(+)
 create mode 100644 automation/build/ubuntu/focal.dockerfile

diff --git a/automation/build/ubuntu/focal.dockerfile b/automation/build/ubuntu/focal.dockerfile
new file mode 100644
index 0000000000..1f014b67bc
--- /dev/null
+++ b/automation/build/ubuntu/focal.dockerfile
@@ -0,0 +1,50 @@
+FROM ubuntu:20.04
+LABEL maintainer.name="The Xen Project " \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+ENV DEBIAN_FRONTEND=noninteractive
+ENV USER root
+
+RUN mkdir /build
+WORKDIR /build
+
+# build depends
+RUN apt-get update && \
+    apt-get --quiet --yes install \
+        build-essential \
+        zlib1g-dev \
+        libncurses5-dev \
+        libssl-dev \
+        python-dev \
+        python3-dev \
+        xorg-dev \
+        uuid-dev \
+        libyajl-dev \
+        libaio-dev \
+        libglib2.0-dev \
+        clang \
+        libpixman-1-dev \
+        pkg-config \
+        flex \
+        bison \
+        gettext \
+        acpica-tools \
+        bin86 \
+        bcc \
+        liblzma-dev \
+        libc6-dev-i386 \
+        libnl-3-dev \
+        ocaml-nox \
+        libfindlib-ocaml-dev \
+        libsystemd-dev \
+        markdown \
+        transfig \
+        pandoc \
+        checkpolicy \
+        wget \
+        git \
+        nasm \
+        && \
+        apt-get autoremove -y && \
+        apt-get clean && \
+        rm -rf /var/lib/apt/lists* /tmp/* /var/tmp/*
diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index ed991bb79c..94ff8b1ca8 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -29,6 +29,7 @@ case "_${CONTAINER}" in
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
     _fedora) CONTAINER="${BASE}/fedora:29";;
+    _focal) CONTAINER="${BASE}/ubuntu:focal" ;;
     _jessie) CONTAINER="${BASE}/debian:jessie" ;;
     _stretch|_) CONTAINER="${BASE}/debian:stretch" ;;
     _unstable|_) CONTAINER="${BASE}/debian:unstable" ;;
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 18:24:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 18:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29207.58496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ej-0000Qd-9p; Tue, 17 Nov 2020 18:24:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29207.58496; Tue, 17 Nov 2020 18:24:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ej-0000QS-60; Tue, 17 Nov 2020 18:24:41 +0000
Received: by outflank-mailman (input) for mailman id 29207;
 Tue, 17 Nov 2020 18:24:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dD8=EX=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kf5eh-0000Nf-64
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 18:24:39 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85000255-6885-414e-a09d-84777ce71f9c;
 Tue, 17 Nov 2020 18:24:38 +0000 (UTC)
X-Inumbo-ID: 85000255-6885-414e-a09d-84777ce71f9c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605637478;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=i6bm0XeGGoT0blOMLK0wVmpS37YcigZ/KepZXeFkhVk=;
  b=CDCQMG7ZH8K/EbvBWB+MpbTHoI9j8xNVqXJDi0aD7ISfwA2KK4kf0n5l
   uyLNDLTtlopMu0agRfSkEF6mBfIwQwI3XlWwjeRqQVDYibbTFFJVwDnTL
   WTjAk2E6SqwnZjZew5xCt8v8jPswBRB3ZH/XNVK2vA+aDt7Cw6QSJugEJ
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1YBI43kkoek/v47lc4t9Y8IDbe6KSH18dXeybFs4y06H72P+DI3XEiaAsV1+7yaI0gHedJaL4e
 PtPON66JBgeg0xnC3DNtU3+Du5cRE/i0ERcZoA7RQUJc6v8wir9J5A2fLmzacY0ImEjwPPfW0z
 6eMrNLnQYeeZ7agMNQK31Ay667AnvslrDpmTZ1p6xOUaxPaPnu4Ad07f30GqZRs5bEh3IbINPD
 je1isAWr+ASL7t04L0xOPt4qOspvC0qHF+6hkGReaC6amHYQT3dVlj6vCYsULEpGY1Vn/lCjfI
 REE=
X-SBRS: None
X-MesageID: 31385525
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31385525"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v1 4/4] tools/ocaml/libs/xc: backward compatible domid control at domain creation time
Date: Tue, 17 Nov 2020 18:24:12 +0000
Message-ID: <559929d2ae95f6527e5050051c917b7586182ad2.1605636800.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <cover.1605636799.git.edvin.torok@citrix.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

One can specify the domid to use when creating a domain, but the OCaml bindings
hardcoded it to 0.

Keep the existing `domain_create` function (and the types of its parameters) as is, to make
backwards compatibility easier.
Introduce a new `domain_create_domid` OCaml API that allows specifying the domid.
A new version of xenopsd can choose to start using this, while old versions of xenopsd will keep
building and using the old API.

Controlling the domid can be useful during testing or migration.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 3 +++
 tools/ocaml/libs/xc/xenctrl.mli     | 2 ++
 tools/ocaml/libs/xc/xenctrl_stubs.c | 9 +++++++--
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e878699b0a..9d720886e9 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -182,6 +182,9 @@ let with_intf f =
 external domain_create: handle -> domctl_create_config -> domid
        = "stub_xc_domain_create"
 
+external domain_create_domid: handle -> domctl_create_config -> domid -> domid
+       = "stub_xc_domain_create_domid"
+
 external domain_sethandle: handle -> domid -> string -> unit
        = "stub_xc_domain_sethandle"
 
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index e64907df8e..e629022901 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -145,6 +145,8 @@ val close_handle: unit -> unit
 
 external domain_create : handle -> domctl_create_config -> domid
   = "stub_xc_domain_create"
+external domain_create_domid : handle -> domctl_create_config -> domid -> domid
+  = "stub_xc_domain_create_domid"
 external domain_sethandle : handle -> domid -> string -> unit = "stub_xc_domain_sethandle"
 external domain_max_vcpus : handle -> domid -> int -> unit
   = "stub_xc_domain_max_vcpus"
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 94aba38a42..bb718fd164 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -175,7 +175,7 @@ static unsigned int ocaml_list_to_c_bitmap(value l)
 	return val;
 }
 
-CAMLprim value stub_xc_domain_create(value xch, value config)
+CAMLprim value stub_xc_domain_create_domid(value xch, value config, value want_domid)
 {
 	CAMLparam2(xch, config);
 	CAMLlocal2(l, arch_domconfig);
@@ -191,7 +191,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
 #define VAL_ARCH                Field(config, 8)
 
-	uint32_t domid = 0;
+	uint32_t domid = Int_val(want_domid);
 	int result;
 	struct xen_domctl_createdomain cfg = {
 		.ssidref = Int32_val(VAL_SSIDREF),
@@ -262,6 +262,11 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
 	CAMLreturn(Val_int(domid));
 }
 
+CAMLprim value stub_xc_domain_create(value xch, value config)
+{
+	return stub_xc_domain_create_domid(xch, config, Val_int(0));
+}
+
 CAMLprim value stub_xc_domain_max_vcpus(value xch, value domid,
                                         value max_vcpus)
 {
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 18:24:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 18:24:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29208.58508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ek-0000T1-KK; Tue, 17 Nov 2020 18:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29208.58508; Tue, 17 Nov 2020 18:24:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf5ek-0000Sq-GP; Tue, 17 Nov 2020 18:24:42 +0000
Received: by outflank-mailman (input) for mailman id 29208;
 Tue, 17 Nov 2020 18:24:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dD8=EX=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kf5ek-0000JQ-4X
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 18:24:42 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a928d525-978b-4330-9a74-8d722926b385;
 Tue, 17 Nov 2020 18:24:38 +0000 (UTC)
X-Inumbo-ID: a928d525-978b-4330-9a74-8d722926b385
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605637479;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=CtykqrKDJP663B6ErlZm6r+lHmD7f9GNtBRjUYk3JhQ=;
  b=K0t21k+9dWUlqzIhKlqURCvhhE58HeSX40fg5zgLYwUcPEmi+UvUn6np
   qNQCJqcwk+V2c7F/dzO6/b8lLf/oFfS28Zu9eyLu8ORocbDR3jTpmVUUG
   jVQlNQGK1zZvaZ5vHFbtQupS0dEpnBNt7pAkRUr/StQGbNgclogiPM7JC
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vV+tv1oYOZIXI216rBlo81pHarJwZSElKGw/uZUv3fXFBMiX8AizwokD/Xz7WCRqWf10mPeoxI
 HzqD1Qgh/Ar/D02fMWFpjmjl46nWBJTm28N7c2l6LvxwINePunbjJrDRiwzgbd3Uc5fRdGuwKb
 IQ6PXXXnS7S/5ya2m6p+7xmqr09AET9rn6h+CezA/zJfr0HP6OA1SFPhs6l7fCpWQD991hG59U
 C/uRZYLzwruOye3A0aTT6RcFIGUYKuJ+rQlqo6hI+2UU3XLMa9jnZnKVuIhLxpHCCO0/EuLINC
 rhA=
X-SBRS: None
X-MesageID: 31365339
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31365339"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, "Julien Grall" <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Subject: [PATCH v1 3/4] Makefile: add build-tools-oxenstored
Date: Tue, 17 Nov 2020 18:24:11 +0000
Message-ID: <516274ccf7ce5958251fa36b1bd63b3216937b8b.1605636800.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <cover.1605636799.git.edvin.torok@citrix.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Add a convenience target so that oxenstored patches can be compile-tested
with the upstream build system before being submitted upstream.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 Makefile             | 6 ++++++
 tools/ocaml/Makefile | 8 ++++++++
 2 files changed, 14 insertions(+)

diff --git a/Makefile b/Makefile
index 9ad2602f63..96d32cfd50 100644
--- a/Makefile
+++ b/Makefile
@@ -62,6 +62,12 @@ build-xen:
 build-tools: build-tools-public-headers
 	$(MAKE) -C tools build
 
+.PHONY: build-tools-oxenstored
+build-tools-oxenstored: build-tools-public-headers
+	$(MAKE) -s -C tools/ocaml clean
+	$(MAKE) -s -C tools/libs
+	$(MAKE) -C tools/ocaml build-tools-oxenstored
+
 .PHONY: build-stubdom
 build-stubdom: mini-os-dir build-tools-public-headers
 	$(MAKE) -C stubdom build
diff --git a/tools/ocaml/Makefile b/tools/ocaml/Makefile
index 66f2d6b131..a7c04b6546 100644
--- a/tools/ocaml/Makefile
+++ b/tools/ocaml/Makefile
@@ -26,3 +26,11 @@ clean: subdirs-clean
 
 .PHONY: distclean
 distclean: subdirs-distclean
+
+.PHONY: build-tools-oxenstored
+build-tools-oxenstored:
+	$(MAKE) -s -C libs/eventchn
+	$(MAKE) -s -C libs/mmap
+	$(MAKE) -s -C libs/xb
+	$(MAKE) -s -C libs/xc
+	$(MAKE) -C xenstored
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 19:41:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 19:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29237.58523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6r2-00005s-Ga; Tue, 17 Nov 2020 19:41:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29237.58523; Tue, 17 Nov 2020 19:41:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6r2-00005l-Cy; Tue, 17 Nov 2020 19:41:28 +0000
Received: by outflank-mailman (input) for mailman id 29237;
 Tue, 17 Nov 2020 19:41:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kf6r1-00005f-LF
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:41:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8b8def5-11a0-44b7-9f1d-5fa7d7fdb81f;
 Tue, 17 Nov 2020 19:41:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf6qy-0006U6-JZ; Tue, 17 Nov 2020 19:41:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf6qy-0003S5-AA; Tue, 17 Nov 2020 19:41:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kf6qy-0008R9-9d; Tue, 17 Nov 2020 19:41:24 +0000
X-Inumbo-ID: c8b8def5-11a0-44b7-9f1d-5fa7d7fdb81f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JGTIg5H+zwqZOQ25kiiokHVrgBFbjrxTulc27yzmcJ8=; b=2iXK/8qnI7ArD0AaAgR4nDVqhu
	8odvS5A1spHx8c/7o75Wfo2zP+xXNNQzazi++ag5BDYaHGhPruKsP0pP+Pls49CrM/UnYHDvDy13q
	Uqy9eeFHl7Bvci8XfuCZFx/Z2SwrXvKA78XiD41Ze9mZqoZs0hqYB3/2FzTaWI59nCM4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156832-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156832: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9c87c9f41245baa3fc4716cf39141439cf405b01
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 19:41:24 +0000

flight 156832 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156832/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9c87c9f41245baa3fc4716cf39141439cf405b01
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  108 days
Failing since        152366  2020-08-01 20:49:34 Z  107 days  179 attempts
Testing same since   156832  2020-11-17 08:05:18 Z    0 days    1 attempts

------------------------------------------------------------
3526 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 674308 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 19:43:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 19:43:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29242.58538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6tF-0000Hn-3z; Tue, 17 Nov 2020 19:43:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29242.58538; Tue, 17 Nov 2020 19:43:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6tE-0000Hg-Vu; Tue, 17 Nov 2020 19:43:44 +0000
Received: by outflank-mailman (input) for mailman id 29242;
 Tue, 17 Nov 2020 19:43:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lsz4=EX=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kf6tD-0000HV-A7
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:43:43 +0000
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01e27463-f56f-4c89-af77-7822685a6d84;
 Tue, 17 Nov 2020 19:43:42 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id p22so4511082wmg.3
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 11:43:42 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id s25sm4983361wmh.16.2020.11.17.11.43.40
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 17 Nov 2020 11:43:41 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lsz4=EX=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kf6tD-0000HV-A7
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:43:43 +0000
X-Inumbo-ID: 01e27463-f56f-4c89-af77-7822685a6d84
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 01e27463-f56f-4c89-af77-7822685a6d84;
	Tue, 17 Nov 2020 19:43:42 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id p22so4511082wmg.3
        for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 11:43:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=cww5t69Y+TBbd051sTPKSFMR0SdWNduPcjesmSdMtYI=;
        b=qLs0rGSw8fdrVx4pKU0crQEHN3u+cG2Ulme1YvIl6yYA69ZykFQeU0E01NPdcCIMLo
         OE+PQY8LNxvu6I2EA2LkaWc7Ek2kH0uoUo55goUQy7WVt11NeJoLbCsl6/NXqsazamYZ
         loogO4efz1bY4twyfvWvVNrcvvXvJbOmKMu6GcEyBWdPr86y0PVWJ5fN0K9B/x2Cbjd/
         55CFbvPh6E1YSneDaD58qrRkGzcM6abE6+/4BoMBquQATQrFl7LNGLqOh6DyY44qHrSj
         IKZ9BEKrai3fgV2IMaSxPaVll5BN6VJyRb0eJoL4bmYtW0BT47c+Qm2uFudF0SDEat1w
         TRLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=cww5t69Y+TBbd051sTPKSFMR0SdWNduPcjesmSdMtYI=;
        b=bJnTdIYSbTdQsK+bC+HgVgC7OLdxws6cP53PZk2wCoF9lg59YnXpvrKNvv7MoRyC+V
         w9bgNEAQeKYbkuRF/0XpWpFNrDtvAENwevL3UmKHslhB6eOivgq1x/4XWmbZO8MRYlUJ
         FuNHKZXxc+hJtE6A3E22Ids2xkXcs6v2iOWYl+qRgq/cBENMSld82pV/Bup0zSxDphjq
         E36PoDsaotpkGhUeNJpw4NjZKEbR4i/PK0+Oqwc7dsWFCFkYygIIPBA13lHTDVWkpX8s
         r1gvbD3dM3XZfeTzz1Pv4Bowuj0dtq2fALzZXqVx7VKOjKSrt6TOQLnAKfiGgHpkZMG0
         XwmQ==
X-Gm-Message-State: AOAM5321migAd8KXeoeLV9ous68kKK2EDsQtfGene7SVzbfkJBFhXvlj
	SVLQ6igpnkx6k5iwfa25aVI=
X-Google-Smtp-Source: ABdhPJzzBrFavBO5uUzSdp1x4fUscTc0mqBukRptDocamSJyLDfd5Zpoz09GPqfkLV6C9NIyAdeQIA==
X-Received: by 2002:a1c:230b:: with SMTP id j11mr693664wmj.12.1605642221762;
        Tue, 17 Nov 2020 11:43:41 -0800 (PST)
Received: from CBGR90WXYV0 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
        by smtp.gmail.com with ESMTPSA id s25sm4983361wmh.16.2020.11.17.11.43.40
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 17 Nov 2020 11:43:41 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>
Cc: <xen-devel@lists.xenproject.org>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-3-git-send-email-olekstysh@gmail.com> <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org> <436143ea-609f-f6c3-4952-19fcf410fe8f@gmail.com> <34133df1-bff2-f4df-00a5-674a2af867fc@gmail.com> <007401d6bcf6$63d3f420$2b7bdc60$@xen.org> <a2eecf9b-7246-68c8-aee4-b4009ee16ed8@gmail.com>
In-Reply-To: <a2eecf9b-7246-68c8-aee4-b4009ee16ed8@gmail.com>
Subject: RE: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
Date: Tue, 17 Nov 2020 19:43:40 -0000
Message-ID: <009601d6bd19$f379b4c0$da6d1e40$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAJRJ+K5AatvYagBUPuZ1AIVomhYAiEk0csBuTFrDapK928g

> -----Original Message-----
[snip]
> 
> Both hvm_ioreq_server_init() and hvm_ioreq_server_deinit() call the "legacy"
> hvm_ioreq_server_unmap_pages(),
> which we want to abstract. The only difference between the two
> usages is that the former calls it only during rollback (in case of error).
> Taking into account what has been suggested for question #1, could we
> just introduce arch_ioreq_server_unmap_pages() to be called from both
> init and deinit?
> 

That sounds fine, yes.

> 
> [Not complete, not tested]
> 
> @@ -762,7 +772,7 @@ static int hvm_ioreq_server_init(struct
> hvm_ioreq_server *s,
> 
>    fail_add:
>       hvm_ioreq_server_remove_all_vcpus(s);
> -    hvm_ioreq_server_unmap_pages(s);
> +    arch_ioreq_server_unmap_pages(s);
> 
>       hvm_ioreq_server_free_rangesets(s);
> 
> @@ -776,7 +786,7 @@ static void hvm_ioreq_server_deinit(struct
> hvm_ioreq_server *s)
>       hvm_ioreq_server_remove_all_vcpus(s);
> 
>       /*
> -     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
> +     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
>        *       hvm_ioreq_server_free_pages() in that order.
>        *       This is because the former will do nothing if the pages
>        *       are not mapped, leaving the page to be freed by the latter.
> @@ -784,7 +794,7 @@ static void hvm_ioreq_server_deinit(struct
> hvm_ioreq_server *s)
>        *       the page_info pointer to NULL, meaning the latter will do
>        *       nothing.
>        */
> -    hvm_ioreq_server_unmap_pages(s);
> +    arch_ioreq_server_unmap_pages(s);
>       hvm_ioreq_server_free_pages(s);
> 
>       hvm_ioreq_server_free_rangesets(s);
> @@ -918,7 +928,7 @@ int hvm_get_ioreq_server_info(struct domain *d,
> ioservid_t id,
> 
>       if ( ioreq_gfn || bufioreq_gfn )
>       {
> -        rc = hvm_ioreq_server_map_pages(s);
> +        rc = arch_ioreq_server_map_pages(s);
>           if ( rc )
>               goto out;
>       }
> 
> 
> So looks like for leaving legacy mechanism x86 specific we need 4 new
> arch callbacks:
> 
> - arch_ioreq_server_enable
> - arch_ioreq_server_disable
> - arch_ioreq_server_map_pages
> - arch_ioreq_server_unmap_pages

Yes, that looks ok.

  Paul



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 19:44:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 19:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29248.58553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6u2-0000Pi-Dq; Tue, 17 Nov 2020 19:44:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29248.58553; Tue, 17 Nov 2020 19:44:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6u2-0000Pb-Av; Tue, 17 Nov 2020 19:44:34 +0000
Received: by outflank-mailman (input) for mailman id 29248;
 Tue, 17 Nov 2020 19:44:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kf6u0-0000Oc-Iq
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:44:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac5f3dee-e898-4145-8a35-7b61911e35f9;
 Tue, 17 Nov 2020 19:44:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf6ts-0006Yp-0E; Tue, 17 Nov 2020 19:44:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf6tr-0003W4-KT; Tue, 17 Nov 2020 19:44:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kf6tr-0000U9-K0; Tue, 17 Nov 2020 19:44:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kf6u0-0000Oc-Iq
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:44:32 +0000
X-Inumbo-ID: ac5f3dee-e898-4145-8a35-7b61911e35f9
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ac5f3dee-e898-4145-8a35-7b61911e35f9;
	Tue, 17 Nov 2020 19:44:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WU/y/x7mDpZCsdQkQjVMJdt3gsOoi6oI3ueCYe4fDQs=; b=Uc2h/RwC908+5bMqE5HkBNudUw
	GU67rOLH3sCqUBZAOV4sdViZ1R6v5OeFuCyGGvXtzxkfGdJxrUaJY6Csyj6wXYWRusqrtPor6wX+3
	Tx0gmco5d4+Y557zF2LPzMJIGsoitodRuDrWH4yHXyA+0/1eSxiekUtFowam9xRf9PJk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kf6ts-0006Yp-0E; Tue, 17 Nov 2020 19:44:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kf6tr-0003W4-KT; Tue, 17 Nov 2020 19:44:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kf6tr-0000U9-K0; Tue, 17 Nov 2020 19:44:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156831-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156831: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start.2:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b17d3b7b77f043f0e76f0e6ce6def3c1b1d5ee8b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 19:44:23 +0000

flight 156831 qemu-mainline real [real]
flight 156837 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156831/
http://logs.test-lab.xenproject.org/osstest/logs/156837/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     19 guest-start.2            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b17d3b7b77f043f0e76f0e6ce6def3c1b1d5ee8b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   89 days
Failing since        152659  2020-08-21 14:07:39 Z   88 days  188 attempts
Testing same since   156831  2020-11-17 06:38:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 65563 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 19:46:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 19:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29255.58565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6vZ-0000Zo-VB; Tue, 17 Nov 2020 19:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29255.58565; Tue, 17 Nov 2020 19:46:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf6vZ-0000Zh-Rv; Tue, 17 Nov 2020 19:46:09 +0000
Received: by outflank-mailman (input) for mailman id 29255;
 Tue, 17 Nov 2020 19:46:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lsz4=EX=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kf6vZ-0000Zb-7V
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:46:09 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7845435a-e6e8-4aaa-a58d-8e7731568280;
 Tue, 17 Nov 2020 19:46:07 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id m6so7319662wrg.7
 for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 11:46:07 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id q12sm5256327wmc.45.2020.11.17.11.46.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 17 Nov 2020 11:46:06 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lsz4=EX=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kf6vZ-0000Zb-7V
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 19:46:09 +0000
X-Inumbo-ID: 7845435a-e6e8-4aaa-a58d-8e7731568280
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7845435a-e6e8-4aaa-a58d-8e7731568280;
	Tue, 17 Nov 2020 19:46:07 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id m6so7319662wrg.7
        for <xen-devel@lists.xenproject.org>; Tue, 17 Nov 2020 11:46:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=WhbrHOh5eSpBRYHEg/N9n8NuGSwrwc/G6QPCxIY3d7o=;
        b=GJINZe74AIO3LsMz4NPEuLCLOt64/dNnqvA9/+lV8h+n2ZiLUq3+CClnjJtNjvm+m5
         CGpbMCd+cfkhaRg+YFqA5QbTQPfJPuVxpTUsGjDQalpIDfuIXGGYGBuS3arOF7DHCLW7
         p6SzGIVkAusKoW78k00lFqtg0i3uupddXKg3tFa4dQ0/uzPd1T409QYZZh//9IVOsxbM
         dYo1ZS17eV0EqVILT4xMIgybIPAH105+Oh1ANeevC8sz5jy594VzOEwLFiHUtneTo534
         yqUSmRqZF8Vi3KDfqAB1hvdXpDe/F9gB7gWc8b+vhmHTGAanmOfQIvaTw2ZaK1usrqi7
         wLig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=WhbrHOh5eSpBRYHEg/N9n8NuGSwrwc/G6QPCxIY3d7o=;
        b=lv69SdCg7mCSkIK+/PdpJW0G8fdgPLYn6R0NKa+sqpOrfH3ZvCmLICZzaaVTd7T5ti
         LQCP/vPxHfwUUJny5s+HrBS5s/MLd3DPusW3QXOzqsl3/EXiXq8R3s8jC52Lsc0RYbVN
         pMFd1ec5f7unuBPl1lCtUpVyNF0q5d8lRRdRbzsbM8kf7IepEoBCAw9H/DAOYqJaaoWv
         FXFcDXPElZQRUtZ+8ltzOqNhXT3PbwkVRjSTi5HgfadjAt93AIgzKTnTa4zW+qSKqDGl
         PgxYrBJ/G2oLcp97JZa5Fq8nyug3xW9W7MWppc2oPVor48IQMEs8oa9km5Lwv+26Q2cI
         pBPA==
X-Gm-Message-State: AOAM533+M89ZrrUG7GwTMmuGBU5DFSjTW/YYRuUVgdDwg1zC1JgMPYnH
	//+lFvhduoXo/EDv+19AWWI=
X-Google-Smtp-Source: ABdhPJxsmC9uOcSOcnMknob71tiENDIwTJyS8x40iGMs+a69olsrp5uWjSEgv0MisZ0dO48QImq7Hg==
X-Received: by 2002:adf:f9c6:: with SMTP id w6mr1113204wrr.273.1605642367105;
        Tue, 17 Nov 2020 11:46:07 -0800 (PST)
Received: from CBGR90WXYV0 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
        by smtp.gmail.com with ESMTPSA id q12sm5256327wmc.45.2020.11.17.11.46.05
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 17 Nov 2020 11:46:06 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Andrushchenko'" <Oleksandr_Andrushchenko@epam.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Nick Rosbrook'" <rosbrookn@ainfosec.com>,
	"'Wei Liu'" <wl@xen.org>
References: <20201110175147.7067-1-paul@xen.org> <f8393cbe-d32f-9b45-049f-ec67a73e7c15@epam.com>
In-Reply-To: <f8393cbe-d32f-9b45-049f-ec67a73e7c15@epam.com>
Subject: RE: [PATCH v2 00/24] xl / libxl: named PCI pass-through devices
Date: Tue, 17 Nov 2020 19:46:05 -0000
Message-ID: <009901d6bd1a$4a1b97d0$de52c770$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHVuwFKgyCNjAECHbA+TpJefiRY1gJP93B7qbw+PMA=

> -----Original Message-----
> From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
> Sent: 16 November 2020 10:34
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Anthony PERARD <anthony.perard@citrix.com>; Christian Lindig
> <christian.lindig@citrix.com>; David Scott <dave@recoil.org>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Nick Rosbrook <rosbrookn@ainfosec.com>;
> Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v2 00/24] xl / libxl: named PCI pass-through devices
>
> Hi, Paul!
>
> On 11/10/20 7:51 PM, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Paul Durrant (24):
> >    xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> >    libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
> >    libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
> >    libxl: s/detatched/detached in libxl_pci.c
> >    libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
> >    libxl: stop using aodev->device_config in libxl__device_pci_add()...
> >    libxl: generalise 'driver_path' xenstore access functions in
> >      libxl_pci.c
> >    libxl: remove unnecessary check from libxl__device_pci_add()
> >    libxl: remove get_all_assigned_devices() from libxl_pci.c
> >    libxl: make sure callers of libxl_device_pci_list() free the list
> >      after use
> >    libxl: add libxl_device_pci_assignable_list_free()...
> >    libxl: use COMPARE_PCI() macro is_pci_in_array()...
> >    libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
> >    libxl: Make sure devices added by pci-attach are reflected in the
> >      config
> >    docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
> >      manpage...
> >    docs/man: improve documentation of PCI_SPEC_STRING...
> >    docs/man: fix xl(1) documentation for 'pci' operations
> >    libxl: introduce 'libxl_pci_bdf' in the idl...
> >    libxlu: introduce xlu_pci_parse_spec_string()
> >    libxl: modify
> >      libxl_device_pci_assignable_add/remove/list/list_free()...
> >    docs/man: modify xl(1) in preparation for naming of assignable devices
> >    xl / libxl: support naming of assignable devices
> >    docs/man: modify xl-pci-configuration(5) to add 'name' field to
> >      PCI_SPEC_STRING
> >    xl / libxl: support 'xl pci-attach/detach' by name
> >
> >   docs/man/xl-pci-configuration.5.pod  |  218 ++++++
> >   docs/man/xl.1.pod.in                 |   39 +-
> >   docs/man/xl.cfg.5.pod.in             |   68 +-
> >   tools/golang/xenlight/helpers.gen.go |   77 +-
> >   tools/golang/xenlight/types.gen.go   |    8 +-
> >   tools/include/libxl.h                |   67 +-
> >   tools/include/libxlutil.h            |    8 +-
> >   tools/libs/light/libxl_create.c      |    6 +-
> >   tools/libs/light/libxl_dm.c          |   18 +-
> >   tools/libs/light/libxl_internal.h    |   53 +-
> >   tools/libs/light/libxl_nic.c         |   19 +-
> >   tools/libs/light/libxl_pci.c         | 1030 ++++++++++++++------------
> >   tools/libs/light/libxl_types.idl     |   19 +-
> >   tools/libs/util/libxlu_pci.c         |  379 +++++-----
> >   tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
> >   tools/xl/xl_cmdtable.c               |   16 +-
> >   tools/xl/xl_parse.c                  |   28 +-
> >   tools/xl/xl_pci.c                    |  159 ++--
> >   tools/xl/xl_sxp.c                    |   12 +-
> >   19 files changed, 1308 insertions(+), 935 deletions(-)
> >   create mode 100644 docs/man/xl-pci-configuration.5.pod
>
> Patches 1-18:
>
> Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>
> (I'll probably review more later as time allows).
>
>
> I would like to ask the respective maintainers to look at this series as, in the light of the
>
> upcoming changes for ARM PCI passthrough, these changes greatly help in adapting the
>
> code for ARM
>

FWIW, I believe there is still an issue in one of the patches (probably patch #14) which has caused problems for pass-through of multiple devices. I will debug that in the next couple of days and post v3.

  Paul

> Thank you,
>
> Oleksandr
>
> > ---
> > Cc: Anthony PERARD <anthony.perard@citrix.com>
> > Cc: Christian Lindig <christian.lindig@citrix.com>
> > Cc: David Scott <dave@recoil.org>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> > Cc: Ian Jackson <iwj@xenproject.org>
> > Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
> > Cc: Wei Liu <wl@xen.org>



From xen-devel-bounces@lists.xenproject.org Tue Nov 17 20:27:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 20:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29264.58580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf7Zc-0004Yg-8t; Tue, 17 Nov 2020 20:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29264.58580; Tue, 17 Nov 2020 20:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf7Zc-0004YZ-5s; Tue, 17 Nov 2020 20:27:32 +0000
Received: by outflank-mailman (input) for mailman id 29264;
 Tue, 17 Nov 2020 20:27:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4XLr=EX=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
 id 1kf7Za-0004YU-Fc
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 20:27:30 +0000
Received: from GBR01-LO2-obe.outbound.protection.outlook.com (unknown
 [40.107.10.108]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4229209f-766f-4036-b529-eebea0937aac;
 Tue, 17 Nov 2020 20:27:28 +0000 (UTC)
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:83::20)
 by LNXP265MB0378.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:1d::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Tue, 17 Nov
 2020 20:27:26 +0000
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b]) by LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b%6]) with mapi id 15.20.3564.028; Tue, 17 Nov 2020
 20:27:26 +0000
Received: from broadband.bt.com (2a00:23c6:751d:7701:1f1a:39af:4235:7681) by
 LO2P265CA0414.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a0::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.28 via Frontend Transport; Tue, 17 Nov 2020 20:27:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=4XLr=EX=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
	id 1kf7Za-0004YU-Fc
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 20:27:30 +0000
X-Inumbo-ID: 4229209f-766f-4036-b529-eebea0937aac
Received: from GBR01-LO2-obe.outbound.protection.outlook.com (unknown [40.107.10.108])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4229209f-766f-4036-b529-eebea0937aac;
	Tue, 17 Nov 2020 20:27:28 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T3V+ZZLr0AAq5aq8mIGk7XD5tT7Gvy3hsVTruEYk0pUl4Af7lQUU9buIhyc7/1jG8/EETwAqmNUjOWDFZ69Cm+u1EMjGCU+EDUNo7RCldEO1SIbxIjLdJ4Zw7RofBIH5sO9d9i0FcsvYetqzIFwb/seT+Y9NsbH0rZSZ48BSoducWe6f6rumMzyMGOKjR7hngweHXdQ/4+HKTEpPjeEu0vg0/LQIc8yQRZMDzRGbuoDMM6SXxMGK6t3SsAkJnUwx4e0ECz2JF7jfMKnj+NoKz30IhqnDTowlLgrmc3bO+AutyV9sc+MLy1wYi6NSym7/UWtc+Kt3gXbVnMefn8yK6A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8PofDEF0cCDUe0oMaBguZb5Xe0KhAUnNy30dm8aDG9w=;
 b=fRe453KWmQ1Cpkqgs+1wYQjHxOnYdja4obO3t9wl0byL6uzBvlbuT/YfiVtD6NKsVfYi2kDRWA+H0ppTjLtUPZoArUz2nsq29afcFXTxIcFsj9AqVpTtCDTY69APJXUTBS2wB2tH8W1kFOJIii6/T2n8+CH3jLpoM25ajrMTPo+YTSJnQWLTbhZg27RZ0GK6Ll86kfJ/Qbc/OtC+W4lowHSJY+ASl7ITXEIqVvsLX4emSR1Uyj1/Jv6tt8QsPUzf0u+CP6JnHQQb86pnfltOY+YMGxqR2UBxofw6OwOw7SJpda59mCYqWxUxU+Uk52JvbvDbYP7kNCkITx6m/v2bvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=durham.ac.uk; dmarc=pass action=none header.from=durham.ac.uk;
 dkim=pass header.d=durham.ac.uk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=durhamuniversity.onmicrosoft.com;
 s=selector2-durhamuniversity-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8PofDEF0cCDUe0oMaBguZb5Xe0KhAUnNy30dm8aDG9w=;
 b=I0d7PpoO8FXZXE8mAPZac09PqGbis0KsCvDdQTiGNDmYvAHMbsf7Jw8395ieXN4a8OAjmEhopgWtwR+ptq8YtqenbcVZ3+fCNBEfOQ2n1/0QJTgj0v7ANhAEQREaqld2QRnGKU8ZmFCSp1wJwBIBPM/8osDmltYGY4MrcZMhi2E=
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=durham.ac.uk;
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:83::20)
 by LNXP265MB0378.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:1d::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Tue, 17 Nov
 2020 20:27:26 +0000
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b]) by LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b%6]) with mapi id 15.20.3564.028; Tue, 17 Nov 2020
 20:27:26 +0000
Date: Tue, 17 Nov 2020 20:27:24 +0000 (GMT)
From: Michael Young <m.a.young@durham.ac.uk>
To: xen-devel@lists.xenproject.org
Subject: zstd compressed kernels
Message-ID: <1abcd9d-428f-93d-b63d-996ef4592723@austen3.home>
Content-Type: text/plain; format=flowed; charset=US-ASCII
X-Originating-IP: [2a00:23c6:751d:7701:1f1a:39af:4235:7681]
X-ClientProxiedBy: LO2P265CA0414.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::18) To LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:83::20)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from broadband.bt.com (2a00:23c6:751d:7701:1f1a:39af:4235:7681) by LO2P265CA0414.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a0::18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28 via Frontend Transport; Tue, 17 Nov 2020 20:27:26 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 22640c89-696b-4e88-f258-08d88b3732d2
X-MS-TrafficTypeDiagnostic: LNXP265MB0378:
X-Microsoft-Antispam-PRVS:
	<LNXP265MB03789003CDF442DB283216D187E20@LNXP265MB0378.GBRP265.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:2803;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zMzPnfxjaoELZmcrzZ8t+U0ZISLB6VyMci+ba9VIALhcmuOady7yvIJNTsEglyvdMyj4ITcYFm5/x7Nj2094b4mytH0RMhWQSoJEdFJjEFMXl0++Ir7Ri4kceVrcdjlci1GM/vUU3fpQeX/eTCZuJ4pC0J/JPgU9DV9+ak8kqf85dtS2+6/ncrZRk8jVxIPtZ3ue+hx3i7NBW8ukH8AtY2bRDdCUDQFdpcuIs6fybWNT9ivk/j38LJF3JOVH/PdRpJYl8VEtKuQjkSdPEiQ+UCyX6pl+oAryjMJ+6CJEjm6lx8ILqTymf02ht42YggS8uIVic1bncuVHYQQpISRanMJFUDcCmOWuud+8KeS1rgkMral4P3QBxdjKH7DTCm2b
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(4636009)(366004)(7116003)(498600001)(558084003)(8676002)(8936002)(16526019)(186003)(6506007)(6512007)(6916009)(9686003)(2906002)(5660300002)(66556008)(52116002)(66476007)(66946007)(6486002)(36756003)(86362001)(3480700007)(133083001);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData:
	4Ua41qPhLr0Cl88Z2VEa8TPHUUy90ycWtDB8sq0CCBqJvF6cHYkPy0Los2Rd5BRww/e0AJjBJX+0ELHf9OLlu3RJFnJZCgU77KKx00AXldiW/N7f+jYRKYmsTpAYiMgIzt6ny3xWbzGhJ6ajbdxFx68/9ueMW76ye2uDjqXj89bwHcArAAOh8FZsRRiYObGU+j4jiJKCHK5vVZrknbK4mUxYygu8jTJtsxnqfV5cakv9QqnKXVC9qaA6qtTC4VRhq5YHOy10Ide/LndtjHENyhl8hPWQ9lzdNjN+PUYcpZnOG74lGJGm9m+nnrNxCvp83cwLG8ZN0H38FSiThl2XakVwYqNpe+q7bJ9ugddI/AYnLBsW4B9I9yZP/atdZoJKVFXnuaE8BztxF4+Rl3Z++sdOPfmlSCu/CXOtaGLD4kkCJX/3IswFQ8miKlvTunaFhQn86huRS+6Wo4yf9BFuvuNABoeS1exsTsJl3N/grFR7T8rCZwG6jS6gFcrvFftqajtxnZdJH8ATMkfEnu28Wn4yAPswdjcyeKH9BNlYFaqE0oT4SlARFRgzqm6yyCtuxmsPZSKXInzE5qsZ4nRpeyVaGxsokwAZIdnglooO7cY8FAbu2phapExLr5EOtl1oVIPnUnu8ZGEqDn7OcUWvoWIAxNfD86EVw9p2HNjXfUw8z22gztIyqLz3tzn68ULNWbBPi5q4iXDnyk/xxZp9yg==
X-OriginatorOrg: durham.ac.uk
X-MS-Exchange-CrossTenant-Network-Message-Id: 22640c89-696b-4e88-f258-08d88b3732d2
X-MS-Exchange-CrossTenant-AuthSource: LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 20:27:26.7191
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 7250d88b-4b68-4529-be44-d59a2d8a6f94
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yNet9EPpO+p/lo1pGfol8/Qzw7JmbVRuBH8kFYqOzv2IcmAg2puoUzMA+XyzLKOB8Rlr2BviK0edcez9sMKx+g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LNXP265MB0378

Is anyone else looking at vmlinuz files which use zstd compression? Fedora 
has started doing so with its 5.9 kernels.

 	Michael Young
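[For reference: zstd frames start with the little-endian magic number 0xFD2FB528, i.e. the byte sequence 28 b5 2f fd (RFC 8878). On x86 a vmlinuz/bzImage embeds the compressed payload after the setup header rather than at offset 0, so a loader that wants to detect zstd has to scan for the magic. A minimal illustrative sketch (the constant and function names are this editor's own, not from any Xen or kernel source):]

```python
# zstd frames begin with the little-endian magic 0xFD2FB528,
# which appears in the file as the bytes 28 b5 2f fd (RFC 8878).
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"


def find_zstd_payload(image: bytes) -> int:
    """Return the offset of the first zstd frame in the image, or -1."""
    return image.find(ZSTD_MAGIC)


# Synthetic example: a fake 512-byte setup header followed by a frame.
fake_image = b"\x00" * 512 + ZSTD_MAGIC + b"compressed payload..."
print(find_zstd_payload(fake_image))  # 512
```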


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 20:48:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 20:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29271.58592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf7ty-0006ZP-4U; Tue, 17 Nov 2020 20:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29271.58592; Tue, 17 Nov 2020 20:48:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf7ty-0006ZI-07; Tue, 17 Nov 2020 20:48:34 +0000
Received: by outflank-mailman (input) for mailman id 29271;
 Tue, 17 Nov 2020 20:48:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mGex=EX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kf7tw-0006ZD-4J
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 20:48:32 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d18184dc-a9ba-4f50-ab49-18e1722e7722;
 Tue, 17 Nov 2020 20:48:30 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mGex=EX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kf7tw-0006ZD-4J
	for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 20:48:32 +0000
X-Inumbo-ID: d18184dc-a9ba-4f50-ab49-18e1722e7722
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605646110;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=VJ2M/ptrAly8AtOnX78VGBltMv23VCrNepqthOKPu3Q=;
  b=BMuJBGUO8MLm+/w/X80U6Om696WG/j7IxS7tpNAHVQa2BvlJQ/kVk3Dl
   ks3nqAR6CmewJIYN5s3JYIGD1x0p0Dpc0xYmHWlUYoBekyUeXoIVgQkjg
   nxfuh2JpMMxO57jqa1OiWxNqaW42H4uk+WhN7XfnfpDhBhaRQmk12LyID
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: OivuvwWRjbdmj6IcnS0Zia2vrKz2p6hlpLjuDOgawEqndHxB7eYdgzB6Huj+OdeeJvH93Be1yr
 nJMXmwW2lc0RBFTc2xraICWOFO4ebDdX1XiToPH7b0XXPtmieRnfEJMWg4m1ZucFXIzLGDVmK+
 Ckof5lUZ5hDiXmxNVjf70cMqAZhFRCFcM1947hZhxfSx7rUYQHQQh8HBOxSM0vzwA90xz/QLbr
 hpAysAcw5g+gxYPBhOwP7tqxGZCAsmMZZIzpdw5f8OaiyjmPIGH/iBtO63uU8NFIMq1biA22GE
 s7w=
X-SBRS: None
X-MesageID: 31720347
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31720347"
Subject: Re: zstd compressed kernels
To: Michael Young <m.a.young@durham.ac.uk>, <xen-devel@lists.xenproject.org>
References: <1abcd9d-428f-93d-b63d-996ef4592723@austen3.home>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <71d36766-1258-0a79-02ff-d888a41e431e@citrix.com>
Date: Tue, 17 Nov 2020 20:48:25 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1abcd9d-428f-93d-b63d-996ef4592723@austen3.home>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 17/11/2020 20:27, Michael Young wrote:
> Is anyone else looking at vmlinuz files which use zstd compression?
> Fedora has started doing so with its 5.9 kernels.

Well volunteered ;)

Yes - I'm aware that it is an area needing work, but it is not
sufficiently urgent on my TODO stack to look at yet.

There are other compression schemes supported by Linux which aren't
supported by us, either.  I was planning to go through and check all of
them.


If you're willing to have a go:

For dom0 support, port Linux's decompressor into xen/common/ and plumb
it into xen/common/decompress.c

For domUs, see tools/libs/guest/xg_dom_bzimageloader.c and
xc_dom_probe_bzimage_kernel().

(Wow this plumbing is ugly and in need of some rationalisation...)
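For the domU side, the probe is mostly a magic-number check on the
payload.  A minimal sketch of what a zstd probe would test - a
hypothetical helper for illustration, not the actual libxenguest code:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * zstd frames begin with the little-endian magic 0xFD2FB528, i.e. the
 * byte sequence 28 B5 2F FD.  A probe only needs to recognise this
 * prefix; actual decompression would be delegated to libzstd.
 *
 * Hypothetical illustration - not the real xc_dom_probe_* plumbing.
 */
static int probe_zstd_kernel(const void *blob, size_t size)
{
    const uint8_t *p = blob;

    if ( size < 4 )
        return 0;

    return p[0] == 0x28 && p[1] == 0xb5 && p[2] == 0x2f && p[3] == 0xfd;
}
```

A real loader would presumably pair such a probe with a decompress hook
registered alongside the existing gzip/xz/lz4 handling.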

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 21:44:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 21:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29281.58607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf8m7-00047O-FX; Tue, 17 Nov 2020 21:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29281.58607; Tue, 17 Nov 2020 21:44:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kf8m7-00047H-C7; Tue, 17 Nov 2020 21:44:31 +0000
Received: by outflank-mailman (input) for mailman id 29281;
 Tue, 17 Nov 2020 21:44:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2yZ=EX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kf8m6-00046h-Qg
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 21:44:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35a38147-0000-480b-87d8-9875e7ece607;
 Tue, 17 Nov 2020 21:44:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf8m0-0000ft-ED; Tue, 17 Nov 2020 21:44:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kf8m0-00070i-3e; Tue, 17 Nov 2020 21:44:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kf8m0-0008IS-38; Tue, 17 Nov 2020 21:44:24 +0000
X-Inumbo-ID: 35a38147-0000-480b-87d8-9875e7ece607
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X1+AjzSHCDiZvudZ1gr54AAoah4GEyDrSKhbhyIFAzU=; b=VD3vlogMsVktfYhi3ZhLSf3oS6
	JXbhw6MlkuViMG369vajjRdg7IdsD8r1cPJS41g6SAbj1Bp/uDt0ml+1rQy0bNkie41Y+e2mgpx3E
	l6yeNcSTNzitG0y+jymi6ogHd7kekc0guMUcJl9g0qpAFknqvksLZkJkJ5EgcNkVDdt0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156838-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156838: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e6a12a0fc817e26ac05e8301e89433c2367ff362
X-Osstest-Versions-That:
    ovmf=29d59baa3907277782e9f26ecaa99704ff57e3f1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 17 Nov 2020 21:44:24 +0000

flight 156838 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156838/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e6a12a0fc817e26ac05e8301e89433c2367ff362
baseline version:
 ovmf                 29d59baa3907277782e9f26ecaa99704ff57e3f1

Last test of basis   156829  2020-11-17 04:47:02 Z    0 days
Testing same since   156838  2020-11-17 19:39:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gao, Zhichao <zhichao.gao@intel.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   29d59baa39..e6a12a0fc8  e6a12a0fc817e26ac05e8301e89433c2367ff362 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Nov 17 23:53:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 17 Nov 2020 23:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29303.58625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfAmv-0000Qq-6T; Tue, 17 Nov 2020 23:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29303.58625; Tue, 17 Nov 2020 23:53:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfAmv-0000Qj-2S; Tue, 17 Nov 2020 23:53:29 +0000
Received: by outflank-mailman (input) for mailman id 29303;
 Tue, 17 Nov 2020 23:53:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ct0K=EX=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1kfAmt-0000QZ-3G
 for xen-devel@lists.xenproject.org; Tue, 17 Nov 2020 23:53:27 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (unknown
 [40.107.220.41]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4875fee1-023f-4f08-a726-849a53d09777;
 Tue, 17 Nov 2020 23:53:22 +0000 (UTC)
Received: from SA9PR13CA0004.namprd13.prod.outlook.com (2603:10b6:806:21::9)
 by SN6PR02MB5024.namprd02.prod.outlook.com (2603:10b6:805:6a::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Tue, 17 Nov
 2020 23:53:17 +0000
Received: from SN1NAM02FT014.eop-nam02.prod.protection.outlook.com
 (2603:10b6:806:21:cafe::c8) by SA9PR13CA0004.outlook.office365.com
 (2603:10b6:806:21::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.15 via Frontend
 Transport; Tue, 17 Nov 2020 23:53:17 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by
 SN1NAM02FT014.mail.protection.outlook.com (10.152.72.106) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3564.22 via Frontend Transport; Tue, 17 Nov 2020 23:53:15 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Tue, 17 Nov 2020 15:53:14 -0800
Received: from smtp.xilinx.com (172.19.127.95) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server id
 15.1.1913.5 via Frontend Transport; Tue, 17 Nov 2020 15:53:14 -0800
Received: from [10.23.120.204] (port=64958 helo=localhost)
 by smtp.xilinx.com with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kfAmf-00034j-TM; Tue, 17 Nov 2020 15:53:14 -0800
X-Inumbo-ID: 4875fee1-023f-4f08-a726-849a53d09777
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JXcu+Q24Z/ooVWGuyG8K650izdhMTDhE3yBSnnJI6mBxpBhzJZIszj7SR0OVIGOtb0nTmD6lpbkniNKH7/oySPJUXhZhEzAE37UUPxhV99ZUlwPVBvuxo56iz8GTLCso0pBJeSPnFhhuW/GSNXbXTeFCh+1SXrBeBeRSO3479bUAeTKx7C8+mDbWuHD9eGO/YDMNMf1P/zY/679MH8mjBEtPVYcDEL/pwZqgSQGb5azesRQx5cLyCi+A5aOUMO7lFLIGsPHpjmAcsz/VBeI6O8uMlqGYrEHGgvLrIvbriZxWiz1f/vlveCkw6eQ0U5OZvPfHXsWhIcTYGgnqPDXD8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uTMcDdDhDLIIShZKVKmD5GHRUXQJUUBebIlTm2Z02oA=;
 b=MEwAqOxJ7dt/3ZYmaqWPI3FC1Mb5/TWwI/ODW+4x9CjgjZFrOdNlkN9f2A0R4RhSOD7h49eCEGNskBz8ZmfrSc3n++Anzf3deSrB3glhXC7hyFXSqGayDGdlXR7uvc3+sijMFA+E5+9OiwsmmGYA4QrCgwWjh1+Z+JUFp3SR9riiGs3Gh2XHEeLFclDCeUABvP1AOTZ9IYvEu6DGDa9FrwWymmRrs73ybQH8CE1S3Quv98upUI3o7+NOkeQwGyGfl74LUYmMUq8n/nfTFxjFDAWvfY7HI9bPmPB9t9sz0i12QY9yYC2WtyzIOFe9iyDcAdM6zYH0hcA7dGuNUFMF5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.62.198) smtp.rcpttodomain=zal.aero smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uTMcDdDhDLIIShZKVKmD5GHRUXQJUUBebIlTm2Z02oA=;
 b=ZpXH5FtxvzacvZsAuIVihcgk2H0gsNC+aqG1iCrkx26b++faRJ5+UzacVMoPmpN8pB3gst3ZpgoZRrvRo9DngZNApC6Y7coZnooQu12gVczbHqOGpJqmYYX6HGxA5WMYnUJ8Dp4SZ25z6igTnsKYZ/bIgIpSyAuZ0LznNdsrEHA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198)
 smtp.mailfrom=xilinx.com; zal.aero; dkim=none (message not signed)
 header.d=none;zal.aero; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.62.198 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com;
Date: Tue, 17 Nov 2020 15:53:13 -0800
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Leo Krueger <leo.krueger@zal.aero>
CC: Stefano Stabellini <stefano.stabellini@xilinx.com>, Peng Fan
	<peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>, "Cornelia
 Bruelhart" <cornelia.bruelhart@zal.aero>, <oleksandr_andrushchenko@epam.com>,
	<xen-devel@lists.xenproject.org>, <Bertrand.Marquis@arm.com>,
	<julien@xen.org>
Subject: Re: AW: AW: AW: AW: Xen data from meta-virtualization layer
In-Reply-To: <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
References: <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com> <AM4PR0501MB22274E52A5A3BE912D477D8CE6EA0@AM4PR0501MB2227.eurprd05.prod.outlook.com> <HE1PR05MB47941E23CE053CE72F18867C8BEA0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s> <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com> <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>,<DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com> <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com> <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s> <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2110900997-1605656680=:438"
Content-ID: <alpine.DEB.2.21.2011171544490.438@sstabellini-ThinkPad-T480s>
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9672d543-d690-4cbd-ef31-08d88b53f36b
X-MS-TrafficTypeDiagnostic: SN6PR02MB5024:
X-Microsoft-Antispam-PRVS:
	<SN6PR02MB50240565C1882A32B9DD54EFA0E20@SN6PR02MB5024.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XSRhvAMKTQQ4Z+9jzvzMEAn3XenCbehKsyAzFQrpZQ3Cv9F20uDDevVW+8MQUoUjJNMZqjtFb76csJQtJX3xoQfDZOwoXEzpsC9sRzDJWM9tJgVfn8EbRKnWDfM4+buJ/uw/DvErwJg1XyhBAlClRs/m9XmPYyyphBKmdzLJpQC16sjhIZGjvH59MS7Xj9RyKwwORW5TIZSeUe77cA+m+h0Y7TVZO4rrHrEvpEwFHs39qUY2N/y2DjTs3Q0Y7L7BYNgtwo0Z5thn+zuqFpwdEPvgjlwI7GahFRU/mppz73pZ3OEZTdmSqQ4b2ZTa54tnhobQ4sJRlVFczlt6ptmP/4qftB1G/f/bTMxeENJtSvg0/x4M9g4y5yqpFvch1S7/JEZbY8RrnhlGtWK1e+CnXg==
X-Forefront-Antispam-Report:
	CIP:149.199.62.198;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapexch01.xlnx.xilinx.com;PTR:unknown-62-198.xilinx.com;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(376002)(346002)(136003)(396003)(46966005)(83380400001)(82740400003)(9786002)(21480400003)(82310400003)(8676002)(26005)(33964004)(9686003)(47076004)(7636003)(33716001)(44832011)(356005)(426003)(6916009)(8936002)(336012)(2906002)(5660300002)(235185007)(316002)(478600001)(54906003)(36906005)(186003)(66574015)(4326008)(70586007)(70206006)(66616009);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Nov 2020 23:53:15.4589
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9672d543-d690-4cbd-ef31-08d88b53f36b
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.62.198];Helo=[xsj-pvapexch01.xlnx.xilinx.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SN1NAM02FT014.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR02MB5024

--8323329-2110900997-1605656680=:438
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2011171544491.438@sstabellini-ThinkPad-T480s>

Adding Bertrand, Oleksandr, Julien, and others -- they have more recent
experience with GICv3 ITS than me and might be able to help.
I am attaching the device tree Leo sent a few days ago for reference.


Typically, when you can set the ethernet link up but no packets are
exchanged, it is because of a missing interrupt - in this case a
missing MSI.

Bertrand, I believe you tried the GIC ITS driver with PCI devices
recently. It is expected to work correctly with MSIs in Dom0, right?



On Tue, 17 Nov 2020, Leo Krueger wrote:
> Hi,
> 
> I enabled CONFIG_HAS_ITS (what a stupid mistake by me to not set it before...) but then had to add the following node to my device tree
> 
> 	gic_lpi_base: syscon@0x80000000 {
> 		compatible = "gic-lpi-base";
> 		reg = <0x0 0x80000000 0x0 0x100000>;
> 		max-gic-redistributors = <2>;
> 	};
> 
> to somehow change something in regard to the ITS and MSI/MSI-X
> 
> (XEN) GICv3 initialization:
> (XEN)       gic_dist_addr=0x00000006000000
> (XEN)       gic_maintenance_irq=25
> (XEN)       gic_rdist_stride=0
> (XEN)       gic_rdist_regions=1
> (XEN)       redistributor regions:
> (XEN)         - region 0: 0x00000006040000 - 0x00000006080000
> (XEN) GICv3: using at most 57344 LPIs on the host.
> (XEN) GICv3: 288 lines, (IID 0001143b).
> (XEN) GICv3: Found ITS @0x6020000
> (XEN) using non-cacheable ITS command queue
> (XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000
> 
> [    0.000000] GICv3: Distributor has no Range Selector support
> [    0.000000] GICv3: no VLPI support, no direct LPI support
> [    0.000000] ITS [mem 0x06020000-0x0603ffff]
> [    0.000000] ITS@0x0000000006020000: allocated 65536 Devices @dc880000 (flat, esz 8, psz 64K, shr 1)
> [    0.000000] ITS@0x0000000006020000: allocated 32768 Interrupt Collections @dc820000 (flat, esz 2, psz 64K, shr 1)
> [    0.000000] GIC: using LPI property table @0x00000000dc830000
> [    0.000000] GICv3: CPU0: found redistributor 0 region 0:0x0000000006040000
> [    0.000000] CPU0: using LPI pending table @0x00000000dc840000
> ...
> [    0.040080] Platform MSI: gic-its domain created
> [    0.040136] PCI/MSI: /interrupt-controller/gic-its domain created
> [    0.040181] fsl-mc MSI: /interrupt-controller/gic-its domain created
> 
> 
> Still I am ending up with the "Failed to add - passthrough or MSI/MSI-X might fail!" log messages for some PCI devices, but at least the on-board ethernet ports (fsl_enetc) are initialized.
> I can set the link up and a link is successfully established.
> 
> But (!) I cannot receive or transmit anything (no error message...) and there seem to be no interrupts:
> 
> 29:          0   ITS-MSI   1 Edge      gbe0-rxtx0
>  32:          0   ITS-MSI 8192 Edge      ptp_qoriq
> 
> (from /proc/interrupts).
> 
> Any idea on this one? I keep digging and checking whether my device tree needs some additional fixes.
> 
> Kind regards,
> Leo
> 
> --
> Leo Krüger, M.Sc.
> Senior Systems Engineer Distributed Systems
> Intelligent Digital Cabin
> 
> ZAL Zentrum für Angewandte Luftfahrtforschung GmbH
> Hein-Saß-Weg 22
> 21129 Hamburg
>  
> +49 (0) 40 248 595-154
> 
> zal.aero | twitter.com/ZALTechCenter | facebook.com/ZALTechCenter 
> 
> ZAL Zentrum für Angewandte Luftfahrtforschung GmbH 
> Sitz der Gesellschaft / Legal Domicile: Hamburg 
> Registergericht / Registration Court: Amtsgericht Hamburg HRB 110232
> Vorsitzender des Aufsichtsrates / Chairman of the Supervisory Board: StR Andreas Rieckhof
> Geschäftsführung / Board of Management: Roland Gerhards
> 
> Disclaimer:
> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have
> received this mail in error), please notify the sender immediately and destroy this e-mail. Any unauthorised copying, 
> disclosure or distribution of the material in this e-mail is strictly forbidden.
> 
> > -----Ursprüngliche Nachricht-----
> > Von: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Gesendet: Dienstag, 17. November 2020 01:59
> > An: Leo Krueger <leo.krueger@zal.aero>
> > Cc: Peng Fan <peng.fan@nxp.com>; Stefano Stabellini
> > <stefano.stabellini@xilinx.com>; brucea@xilinx.com; Cornelia Bruelhart
> > <cornelia.bruelhart@zal.aero>
> > Betreff: Re: AW: AW: AW: Xen data from meta-virtualization layer
> > 
> > Replies inline below
> > 
> > 
> > On Sun, 15 Nov 2020, Leo Krueger wrote:
> > > Hi Peng, hi Stefano,
> > >
> > >
> > >
> > > sorry for the long silence…
> > >
> > >
> > >
> > > I tried the change suggested (and hope I didn’t do anything wrong
> > > while doing so…) on top of XEN 4.13.2 (before, I always tried with
> > > 4.12 but wanted to give 4.13.2 a try as well), but I do not see any
> > > difference; still the same “unhandled context fault” log entries pop
> > > up and I cannot access my sdcard.
> > >
> > >
> > >
> > > As it seems to work with the IOMMU disabled, that would be fine for
> > > me for now. What I am worried about a bit more is PCIe, or MSI/MSI-X
> > > to be exact.
> > >
> > >
> > >
> > > Here is the gic-v3 and its node from my device tree:
> > >
> > >
> > >
> > > interrupt-controller@6000000 {
> > >
> > >         compatible = "arm,gic-v3";
> > >
> > >         #address-cells = <0x2>;
> > >
> > >         #size-cells = <0x2>;
> > >
> > >         ranges;
> > >
> > >         reg = <0x0 0x6000000 0x0 0x10000 0x0 0x6040000 0x0 0x40000>;
> > >
> > >         #interrupt-cells = <0x3>;
> > >
> > >         interrupt-controller;
> > >
> > >         interrupts = <0x1 0x9 0xf08>;
> > >
> > >         phandle = <0x1>;
> > >
> > >
> > >
> > >         gic-its@6020000 {
> > >
> > >                 compatible = "arm,gic-v3-its";
> > >
> > >                 msi-controller;
> > >
> > >                 reg = <0x0 0x6020000 0x0 0x20000>;
> > >
> > >                 phandle = <0xd>;
> > >
> > >         };
> > >
> > > };
> > >
> > >
> > >
> > > And here are some kernel log excerpts related to GIC when booting
> > > without (!) XEN:
> > >
> > >
> > >
> > > [    0.000000] GICv3: GIC: Using split EOI/Deactivate mode
> > >
> > > [    0.000000] GICv3: Distributor has no Range Selector support
> > >
> > > [    0.000000] GICv3: no VLPI support, no direct LPI support
> > >
> > > [    0.000000] ITS [mem 0x06020000-0x0603ffff]
> > >
> > > [    0.000000] ITS@0x0000000006020000: allocated 65536 Devices
> > > @20fb880000 (flat, esz 8, psz 64K, shr 0)
> > >
> > > [    0.000000] ITS: using cache flushing for cmd queue
> > >
> > > [    0.000000] GIC: using LPI property table @0x00000020fb830000
> > >
> > > [    0.000000] GICv3: CPU0: found redistributor 0 region
> > > 0:0x0000000006040000
> > >
> > > [    0.000000] CPU0: using LPI pending table @0x00000020fb840000
> > >
> > > [    0.000000] GIC: using cache flushing for LPI property table
> > >
> > >
> > >
> > > However, when booting with XEN, only the following three lines are
> > > contained in the kernel log:
> > >
> > >
> > >
> > > [    0.000000] GICv3: Distributor has no Range Selector support
> > >
> > > [    0.000000] GICv3: no VLPI support, no direct LPI support
> > >
> > > [    0.000000] GICv3: CPU0: found redistributor 0 region
> > > 0:0x0000000006040000
> > 
> > "no VLPI support" is very suspicious; it looks like Dom0 doesn't find
> > any ITS support.
> > 
> > Can you double check that you have the ITS driver in Xen built-in? That would
> > be CONFIG_HAS_ITS. If you do "make menuconfig" and enable "Configure
> > standard Xen features (expert users)" you should get a new option "GICv3
> > ITS MSI controller support" under "Architecture Features".
> > Make sure to enable it.
> > 
> > Let me know if that works!
> 
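For reference, with that option enabled the relevant lines in
xen/.config end up looking like the fragment below (a sketch:
CONFIG_EXPERT is the symbol behind the "expert users" prompt, and exact
dependencies may vary between Xen versions):

```
CONFIG_EXPERT=y
CONFIG_HAS_ITS=y
```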
--8323329-2110900997-1605656680=:438
Content-Type: text/plain; charset="US-ASCII"; name="devicetree.dts"
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2011171547220.438@sstabellini-ThinkPad-T480s>
Content-Description:
Content-Disposition: attachment; filename="devicetree.dts"

LyB7DQogICAgICAgIGNvbXBhdGlibGUgPSAia29udHJvbixzbDI4IiwgImZz
bCxsczEwMjhhIjsNCiAgICAgICAgaW50ZXJydXB0LXBhcmVudCA9IDwweDAw
MDAwMDAxPjsNCiAgICAgICAgI2FkZHJlc3MtY2VsbHMgPSA8MHgwMDAwMDAw
Mj47DQogICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDI+Ow0KICAg
ICAgICBtb2RlbCA9ICJLb250cm9uIFNNQVJDLXNBTDI4IjsNCiAgICAgICAg
Y2hvc2VuIHsNCiAgICAgICAgICAgICAgICB4ZW4sZG9tMC1ib290YXJncyA9
ICJyb290PS9kZXYvbW1jYmxrMXAyIGNvbnNvbGU9aHZjMCBlYXJseWNvbj14
ZW4gZWFybHlwcmludGs9eGVuIGNsa19pZ25vcmVfdW51c2VkIHJvb3R3YWl0
ICI7DQogICAgICAgICAgICAgICAgeGVuLHhlbi1ib290YXJncyA9ICJjb25z
b2xlPWR0dWFydCBkdHVhcnQ9c2VyaWFsMCBkb20wX21lbT01MTJNIGRvbTBf
bWF4X3ZjcHVzPTEgYm9vdHNjcnViPTAgdndmaT1uYXRpdmUgc2NoZWQ9bnVs
bCAiOw0KICAgICAgICAgICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAw
MDI+Ow0KICAgICAgICAgICAgICAgICNhZGRyZXNzLWNlbGxzID0gPDB4MDAw
MDAwMDI+Ow0KICAgICAgICAgICAgICAgIGRvbTAgew0KICAgICAgICAgICAg
ICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHg4MTIwMDAwMCAweDAw
MDAwMDAwIDB4MDE1MTcwMDg+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
Y29tcGF0aWJsZSA9ICJ4ZW4sbGludXgtemltYWdlIiwgInhlbixtdWx0aWJv
b3QtbW9kdWxlIjsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICB9Ow0K
ICAgICAgICBhbGlhc2VzIHsNCiAgICAgICAgICAgICAgICBydGMxID0gIi9z
b2MvdGltZXJAMjgwMDAwMCI7DQogICAgICAgICAgICAgICAgY3J5cHRvID0g
Ii9zb2MvY3J5cHRvQDgwMDAwMDAiOw0KICAgICAgICAgICAgICAgIHNlcmlh
bDAgPSAiL3NvYy9zZXJpYWxAMjFjMDUwMCI7DQogICAgICAgICAgICAgICAg
c2VyaWFsMSA9ICIvc29jL3NlcmlhbEAyMWMwNjAwIjsNCiAgICAgICAgfTsN
CiAgICAgICAgY3B1cyB7DQogICAgICAgICAgICAgICAgI2FkZHJlc3MtY2Vs
bHMgPSA8MHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAgI3NpemUtY2Vs
bHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgY3B1QDAgew0K
ICAgICAgICAgICAgICAgICAgICAgICAgZGV2aWNlX3R5cGUgPSAiY3B1IjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiYXJtLGNv
cnRleC1hNzIiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4
MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgZW5hYmxlLW1l
dGhvZCA9ICJwc2NpIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2Nr
cyA9IDwweDAwMDAwMDAyIDB4MDAwMDAwMDEgMHgwMDAwMDAwMD47DQogICAg
ICAgICAgICAgICAgICAgICAgICBuZXh0LWxldmVsLWNhY2hlID0gPDB4MDAw
MDAwMDM+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY3B1LWlkbGUtc3Rh
dGVzID0gPDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
I2Nvb2xpbmctY2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAg
ICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMTM+Ow0KICAgICAgICAg
ICAgICAgIH07DQogICAgICAgICAgICAgICAgY3B1QDEgew0KICAgICAgICAg
ICAgICAgICAgICAgICAgZGV2aWNlX3R5cGUgPSAiY3B1IjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hNzIi
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDE+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgZW5hYmxlLW1ldGhvZCA9ICJw
c2NpIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2NrcyA9IDwweDAw
MDAwMDAyIDB4MDAwMDAwMDEgMHgwMDAwMDAwMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICBuZXh0LWxldmVsLWNhY2hlID0gPDB4MDAwMDAwMDM+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgY3B1LWlkbGUtc3RhdGVzID0gPDB4
MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgI2Nvb2xpbmct
Y2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAgICAgICAgICAg
ICBwaGFuZGxlID0gPDB4MDAwMDAwMTQ+Ow0KICAgICAgICAgICAgICAgIH07
DQogICAgICAgICAgICAgICAgbDItY2FjaGUgew0KICAgICAgICAgICAgICAg
ICAgICAgICAgY29tcGF0aWJsZSA9ICJjYWNoZSI7DQogICAgICAgICAgICAg
ICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMDM+Ow0KICAgICAgICAg
ICAgICAgIH07DQogICAgICAgIH07DQogICAgICAgIGlkbGUtc3RhdGVzIHsN
CiAgICAgICAgICAgICAgICBlbnRyeS1tZXRob2QgPSAiYXJtLHBzY2kiOw0K
ICAgICAgICAgICAgICAgIGNwdS1wdzIwIHsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIGNvbXBhdGlibGUgPSAiYXJtLGlkbGUtc3RhdGUiOw0KICAgICAg
ICAgICAgICAgICAgICAgICAgaWRsZS1zdGF0ZS1uYW1lID0gIlBXMjAiOw0K
ICAgICAgICAgICAgICAgICAgICAgICAgYXJtLHBzY2ktc3VzcGVuZC1wYXJh
bSA9IDwweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGVu
dHJ5LWxhdGVuY3ktdXMgPSA8MHgwMDAwMDdkMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICBleGl0LWxhdGVuY3ktdXMgPSA8MHgwMDAwMDdkMD47DQog
ICAgICAgICAgICAgICAgICAgICAgICBtaW4tcmVzaWRlbmN5LXVzID0gPDB4
MDAwMDE3NzA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9
IDwweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICB9
Ow0KICAgICAgICBjbG9jay1zeXNjbGsgew0KICAgICAgICAgICAgICAgIGNv
bXBhdGlibGUgPSAiZml4ZWQtY2xvY2siOw0KICAgICAgICAgICAgICAgICNj
bG9jay1jZWxscyA9IDwweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICBj
bG9jay1mcmVxdWVuY3kgPSA8MHgwNWY1ZTEwMD47DQogICAgICAgICAgICAg
ICAgY2xvY2stb3V0cHV0LW5hbWVzID0gInN5c2NsayI7DQogICAgICAgICAg
ICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDA3PjsNCiAgICAgICAgfTsNCiAg
ICAgICAgY2xvY2stb3NjLTI3bSB7DQogICAgICAgICAgICAgICAgY29tcGF0
aWJsZSA9ICJmaXhlZC1jbG9jayI7DQogICAgICAgICAgICAgICAgI2Nsb2Nr
LWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgIGNsb2Nr
LWZyZXF1ZW5jeSA9IDwweDAxOWJmY2MwPjsNCiAgICAgICAgICAgICAgICBj
bG9jay1vdXRwdXQtbmFtZXMgPSAicGh5XzI3bSI7DQogICAgICAgICAgICAg
ICAgcGhhbmRsZSA9IDwweDAwMDAwMDA1PjsNCiAgICAgICAgfTsNCiAgICAg
ICAgY2xvY2stZGlzcGxheUBmMWYwMDAwIHsNCiAgICAgICAgICAgICAgICBj
b21wYXRpYmxlID0gImZzbCxsczEwMjhhLXBsbGRpZyI7DQogICAgICAgICAg
ICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwZjFmMDAwMCAweDAwMDAwMDAw
IDB4MDAwMGZmZmY+Ow0KICAgICAgICAgICAgICAgICNjbG9jay1jZWxscyA9
IDwweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgw
MDAwMDAwNT47DQogICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAw
MDE3PjsNCiAgICAgICAgfTsNCiAgICAgICAgY2xvY2stYXhpIHsNCiAgICAg
ICAgICAgICAgICBjb21wYXRpYmxlID0gImZpeGVkLWNsb2NrIjsNCiAgICAg
ICAgICAgICAgICAjY2xvY2stY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAg
ICAgICAgICAgICAgY2xvY2stZnJlcXVlbmN5ID0gPDB4MjZiZTM2ODA+Ow0K
ICAgICAgICAgICAgICAgIGNsb2NrLW91dHB1dC1uYW1lcyA9ICJhY2xrIjsN
CiAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMTg+Ow0KICAg
ICAgICB9Ow0KICAgICAgICBjbG9jay1hcGIgew0KICAgICAgICAgICAgICAg
IGNvbXBhdGlibGUgPSAiZml4ZWQtY2xvY2siOw0KICAgICAgICAgICAgICAg
ICNjbG9jay1jZWxscyA9IDwweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAg
ICBjbG9jay1mcmVxdWVuY3kgPSA8MHgyNmJlMzY4MD47DQogICAgICAgICAg
ICAgICAgY2xvY2stb3V0cHV0LW5hbWVzID0gInBjbGsiOw0KICAgICAgICAg
ICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAxOT47DQogICAgICAgIH07DQog
ICAgICAgIGNsb2NrLWhkcGNvcmUgew0KICAgICAgICAgICAgICAgIGNvbXBh
dGlibGUgPSAiZml4ZWQtY2xvY2siOw0KICAgICAgICAgICAgICAgICNjbG9j
ay1jZWxscyA9IDwweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICBjbG9j
ay1mcmVxdWVuY3kgPSA8MHgwOWFmOGRhMD47DQogICAgICAgICAgICAgICAg
Y2xvY2stb3V0cHV0LW5hbWVzID0gImhkcGNsayI7DQogICAgICAgICAgICAg
ICAgcGhhbmRsZSA9IDwweDAwMDAwMDFiPjsNCiAgICAgICAgfTsNCiAgICAg
ICAgcmVib290IHsNCiAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gInN5
c2Nvbi1yZWJvb3QiOw0KICAgICAgICAgICAgICAgIHJlZ21hcCA9IDwweDAw
MDAwMDA2PjsNCiAgICAgICAgICAgICAgICBvZmZzZXQgPSA8MHgwMDAwMDAw
MD47DQogICAgICAgICAgICAgICAgbWFzayA9IDwweDAwMDAwMDAyPjsNCiAg
ICAgICAgfTsNCiAgICAgICAgdGltZXIgew0KICAgICAgICAgICAgICAgIGNv
bXBhdGlibGUgPSAiYXJtLGFybXY4LXRpbWVyIjsNCiAgICAgICAgICAgICAg
ICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDEgMHgwMDAwMDAwZCAweDAwMDAw
MzA4IDB4MDAwMDAwMDEgMHgwMDAwMDAwZSAweDAwMDAwMzA4IDB4MDAwMDAw
MDEgMHgwMDAwMDAwYiAweDAwMDAwMzA4IDB4MDAwMDAwMDEgMHgwMDAwMDAw
YSAweDAwMDAwMzA4PjsNCiAgICAgICAgfTsNCiAgICAgICAgaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDAwMCB7DQogICAgICAgICAgICAgICAgY29tcGF0
aWJsZSA9ICJhcm0sZ2ljLXYzIjsNCiAgICAgICAgICAgICAgICAjYWRkcmVz
cy1jZWxscyA9IDwweDAwMDAwMDAyPjsNCiAgICAgICAgICAgICAgICAjc2l6
ZS1jZWxscyA9IDwweDAwMDAwMDAyPjsNCiAgICAgICAgICAgICAgICByYW5n
ZXM7DQogICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwNjAw
MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDAgMHgwMDAwMDAwMCAweDA2MDQw
MDAwIDB4MDAwMDAwMDAgMHgwMDA0MDAwMD47DQogICAgICAgICAgICAgICAg
I2ludGVycnVwdC1jZWxscyA9IDwweDAwMDAwMDAzPjsNCiAgICAgICAgICAg
ICAgICBpbnRlcnJ1cHQtY29udHJvbGxlcjsNCiAgICAgICAgICAgICAgICBp
bnRlcnJ1cHRzID0gPDB4MDAwMDAwMDEgMHgwMDAwMDAwOSAweDAwMDAwZjA4
PjsNCiAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMDE+Ow0K
ICAgICAgICAgICAgICAgIGdpYy1pdHNANjAyMDAwMCB7DQogICAgICAgICAg
ICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImFybSxnaWMtdjMtaXRzIjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIG1zaS1jb250cm9sbGVyOw0KICAg
ICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwNjAy
MDAwMCAweDAwMDAwMDAwIDB4MDAwMjAwMDA+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDBkPjsNCiAgICAgICAgICAg
ICAgICB9Ow0KICAgICAgICB9Ow0KICAgICAgICBzb2Mgew0KICAgICAgICAg
ICAgICAgIGNvbXBhdGlibGUgPSAic2ltcGxlLWJ1cyI7DQogICAgICAgICAg
ICAgICAgI2FkZHJlc3MtY2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAg
ICAgICAgICAgI3NpemUtY2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAg
ICAgICAgICAgcmFuZ2VzOw0KICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8
MHgwMDAwMDAxZj47DQogICAgICAgICAgICAgICAgbWVtb3J5LWNvbnRyb2xs
ZXJAMTA4MDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICBjb21wYXRp
YmxlID0gImZzbCxxb3JpcS1tZW1vcnktY29udHJvbGxlciI7DQogICAgICAg
ICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAweDAxMDgwMDAw
IDB4MDAwMDAwMDAgMHgwMDAwMTAwMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgwMDAwMDA5MCAweDAw
MDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGJpZy1lbmRpYW47
DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
MjA+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgc3lz
Y29uQDFlMDAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0
aWJsZSA9ICJmc2wsbHMxMDI4YS1kY2ZnIiwgInN5c2NvbiI7DQogICAgICAg
ICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAweDAxZTAwMDAw
IDB4MDAwMDAwMDAgMHgwMDAxMDAwMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBiaWctZW5kaWFuOw0KICAgICAgICAgICAgICAgICAgICAgICAgcGhh
bmRsZSA9IDwweDAwMDAwMDIxPjsNCiAgICAgICAgICAgICAgICB9Ow0KICAg
ICAgICAgICAgICAgIHN5c2NvbkAxZTYwMDAwIHsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNsLGxzMTAyOGEtcnN0IiwgInN5
c2NvbiI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAw
MDAwMCAweDAxZTYwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAwMD47DQogICAg
ICAgICAgICAgICAgICAgICAgICBsaXR0bGUtZW5kaWFuOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDA2PjsNCiAgICAg
ICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIHN5c2NvbkAxZmMwMDAw
IHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNs
LGxzMTAyOGEtc2NmZyIsICJzeXNjb24iOw0KICAgICAgICAgICAgICAgICAg
ICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMWZjMDAwMCAweDAwMDAwMDAw
IDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgYmlnLWVu
ZGlhbjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgw
MDAwMDAyMj47DQogICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAg
ICBjbG9jay1jb250cm9sbGVyQDEzMDAwMDAgew0KICAgICAgICAgICAgICAg
ICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2wsbHMxMDI4YS1jbG9ja2dlbiI7
DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAw
eDAxMzAwMDAwIDB4MDAwMDAwMDAgMHgwMDBhMDAwMD47DQogICAgICAgICAg
ICAgICAgICAgICAgICAjY2xvY2stY2VsbHMgPSA8MHgwMDAwMDAwMj47DQog
ICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgwMDAwMDAwNz47
DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
MDI+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgdXNi
QDMxMDAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgeGVuLHBhc3N0
aHJvdWdoOw0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9
ICJzbnBzLGR3YzMiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0g
PDB4MDAwMDAwMDAgMHgwMzEwMDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAw
MDAwMDAwIDB4MDAwMDAwNTAgMHgwMDAwMDAwND47DQogICAgICAgICAgICAg
ICAgICAgICAgICBkcl9tb2RlID0gImhvc3QiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgc25wcyxkaXNfcnhkZXRfaW5wM19xdWlyazsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIHNucHMscXVpcmstZnJhbWUtbGVuZ3RoLWFkanVz
dG1lbnQgPSA8MHgwMDAwMDAyMD47DQogICAgICAgICAgICAgICAgICAgICAg
ICBzbnBzLGluY3ItYnVyc3QtdHlwZS1hZGp1c3RtZW50ID0gPDB4MDAwMDAw
MDEgMHgwMDAwMDAwNCAweDAwMDAwMDA4IDB4MDAwMDAwMTA+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDIzPjsNCiAg
ICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIHVzYkAzMTEwMDAw
IHsNCiAgICAgICAgICAgICAgICAgICAgICAgIHhlbixwYXNzdGhyb3VnaDsN
CiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAic25wcyxk
d2MzIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAw
MDAwIDB4MDMxMTAwMDAgMHgwMDAwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAw
eDAwMDAwMDUxIDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgZHJfbW9kZSA9ICJob3N0IjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IHNucHMsZGlzX3J4ZGV0X2lucDNfcXVpcms7DQogICAgICAgICAgICAgICAg
ICAgICAgICBzbnBzLHF1aXJrLWZyYW1lLWxlbmd0aC1hZGp1c3RtZW50ID0g
PDB4MDAwMDAwMjA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgc25wcyxp
bmNyLWJ1cnN0LXR5cGUtYWRqdXN0bWVudCA9IDwweDAwMDAwMDAxIDB4MDAw
MDAwMDQgMHgwMDAwMDAwOCAweDAwMDAwMDEwPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAyND47DQogICAgICAgICAg
ICAgICAgfTsNCiAgICAgICAgICAgICAgICBzcGlAMjBjMDAwMCB7DQogICAg
ICAgICAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gIm54cCxseDIxNjBh
LWZzcGkiOw0KICAgICAgICAgICAgICAgICAgICAgICAgI2FkZHJlc3MtY2Vs
bHMgPSA8MHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAgICAgICAgICAj
c2l6ZS1jZWxscyA9IDwweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIHJlZyA9IDwweDAwMDAwMDAwIDB4MDIwYzAwMDAgMHgwMDAwMDAw
MCAweDAwMDEwMDAwIDB4MDAwMDAwMDAgMHgyMDAwMDAwMCAweDAwMDAwMDAw
IDB4MTAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnLW5h
bWVzID0gImZzcGlfYmFzZSIsICJmc3BpX21tYXAiOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAw
MTkgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9j
a3MgPSA8MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDMgMHgwMDAw
MDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDM+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgY2xvY2stbmFtZXMgPSAiZnNwaV9lbiIsICJmc3BpIjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJva2F5IjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIG54cCxmc3BpLWhhcy1zZWNvbmQtY2hpcDsN
CiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAy
NT47DQogICAgICAgICAgICAgICAgICAgICAgICB3MjVxMzJqd0AwIHsNCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI2FkZHJlc3MtY2VsbHMg
PSA8MHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gIncyNXEzMmp3Iiwg
ImplZGVjLHNwaS1ub3IiOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBtMjVwLGZhc3QtcmVhZDsNCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgc3BpLW1heC1mcmVxdWVuY3kgPSA8MHgwN2VkNmI0MD47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAw
MDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgc3BpLXJ4
LWJ1cy13aWR0aCA9IDwweDAwMDAwMDAyPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgc3BpLXR4LWJ1cy13aWR0aCA9IDwweDAwMDAwMDAx
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFydGl0aW9u
QDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHJlZyA9IDwweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxhYmVsID0gInJjdyI7DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVhZC1v
bmx5Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYXJ0aXRpb25AMTAwMDAg
ew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJl
ZyA9IDwweDAwMDEwMDAwIDB4MDAwZjAwMDA+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGxhYmVsID0gImZhaWxzYWZlIGJv
b3Rsb2FkZXIiOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHJlYWQtb25seTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFy
dGl0aW9uQDEwMDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgcmVnID0gPDB4MDAxMDAwMDAgMHgwMDA0MDAwMD47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGFiZWwg
PSAiZmFpbHNhZmUgRFAgZmlybXdhcmUiOw0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHJlYWQtb25seTsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgcGFydGl0aW9uQDE0MDAwMCB7DQogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAxNDAwMDAg
MHgwMDBhMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgbGFiZWwgPSAiZmFpbHNhZmUgdHJ1c3RlZCBmaXJtd2FyZSI7
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVh
ZC1vbmx5Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB9Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYXJ0aXRpb25AMWUw
MDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICByZWcgPSA8MHgwMDFlMDAwMCAweDAwMDIwMDAwPjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsYWJlbCA9ICJyZXNlcnZl
ZCI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
cmVhZC1vbmx5Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYXJ0aXRpb25A
MjAwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICByZWcgPSA8MHgwMDIwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsYWJlbCA9ICJjb25m
aWd1cmF0aW9uIHN0b3JlIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFy
dGl0aW9uQDIxMDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgcmVnID0gPDB4MDAyMTAwMDAgMHgwMDBmMDAwMD47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGFiZWwg
PSAiYm9vdGxvYWRlciI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhcnRp
dGlvbkAzMDAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHJlZyA9IDwweDAwMzAwMDAwIDB4MDAwNDAwMDA+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxhYmVsID0g
IkRQIGZpcm13YXJlIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgfTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFydGl0
aW9uQDM0MDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgcmVnID0gPDB4MDAzNDAwMDAgMHgwMDBhMDAwMD47DQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbGFiZWwgPSAi
dHJ1c3RlZCBmaXJtd2FyZSI7DQogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBh
cnRpdGlvbkAzZTAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHJlZyA9IDwweDAwM2UwMDAwIDB4MDAwMjAwMDA+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxhYmVs
ID0gImJvb3Rsb2FkZXIgZW52aXJvbm1lbnQiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
fTsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIGkyY0Ay
MDAwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUg
PSAiZnNsLHZmNjEwLWkyYyI7DQogICAgICAgICAgICAgICAgICAgICAgICAj
YWRkcmVzcy1jZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjAwMDAw
MCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwMjIgMHgw
MDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8
MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgZG1hLW5hbWVzOw0KICAgICAgICAgICAgICAgICAg
ICAgICAgZG1hcyA9IDwweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAy
NyAweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAyNj47DQogICAgICAg
ICAgICAgICAgICAgICAgICBzdGF0dXMgPSAib2theSI7DQogICAgICAgICAg
ICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMjY+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgcnRjQDMyIHsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJtaWNyb2NyeXN0YWwscnY4
ODAzIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0g
PDB4MDAwMDAwMzI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBpbnRlcnJ1cHQtcGFyZW50ID0gPDB4MDAwMDAwMDk+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAw
MDAgMHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHdha2V1cC1zb3VyY2U7DQogICAgICAgICAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgc2wyOGNwbGRANGEgew0KICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAjYWRkcmVzcy1jZWxscyA9
IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgI3NpemUtY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAia29udHJvbixzbDI4
Y3BsZCI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9
IDwweDAwMDAwMDRhPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgaW50ZXJydXB0LXBhcmVudCA9IDwweDAwMDAwMDBhPjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAw
MDA2IDB4MDAwMDAwMDI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBpbnRlcnJ1cHQtY29udHJvbGxlcjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgI2ludGVycnVwdC1jZWxscyA9IDwweDAwMDAwMDAy
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9
IDwweDAwMDAwMDA5PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgd2F0Y2hkb2dANCB7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJrb250cm9uLHNsMjhjcGxkLXdk
dCI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
cmVnID0gPDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICB9Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBm
YW5AYiB7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgY29tcGF0aWJsZSA9ICJrb250cm9uLHNsMjhjcGxkLWZhbiI7DQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4
MDAwMDAwMGI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwd20wQGMgew0K
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICNwd20t
Y2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJrb250cm9uLHNsMjhj
cGxkLXB3bSI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgcmVnID0gPDB4MDAwMDAwMGM+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAyNz47
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIH07DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHB3bTFAZSB7DQogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI3B3bS1jZWxscyA9IDww
eDAwMDAwMDAyPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBjb21wYXRpYmxlID0gImtvbnRyb24sc2wyOGNwbGQtcHdtIjsN
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICByZWcg
PSA8MHgwMDAwMDAwZT47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDI4PjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgZ3BpbzBAMTAgew0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAia29udHJvbixz
bDI4Y3BsZC1ncGlvIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAxMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0LXBhcmVudCA9
IDwweDAwMDAwMDBhPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDYgMHgwMDAwMDAw
Mj47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
Z3Bpby1jb250cm9sbGVyOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICNncGlvLWNlbGxzID0gPDB4MDAwMDAwMDI+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGdwaW8tbGlu
ZS1uYW1lcyA9ICJHUElPMF9DQU0wX1BXUl9OIiwgIkdQSU8xX0NBTTFfUFdS
X04iLCAiR1BJTzJfQ0FNMF9SU1RfTiIsICJHUElPM19DQU0xX1JTVF9OIiwg
IkdQSU80X0hEQV9SU1RfTiIsICJHUElPNV9QV01fT1VUIiwgIkdQSU82X1RB
Q0hJTiIsICJHUElPNyI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgaW50ZXJydXB0LWNvbnRyb2xsZXI7DQogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI2ludGVycnVwdC1jZWxs
cyA9IDwweDAwMDAwMDAyPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMjk+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBncGlvMUAxNSB7DQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJrb250cm9u
LHNsMjhjcGxkLWdwaW8iOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDE1PjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHQtcGFyZW50
ID0gPDB4MDAwMDAwMGE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwNiAweDAwMDAw
MDAyPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBncGlvLWNvbnRyb2xsZXI7DQogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgI2dwaW8tY2VsbHMgPSA8MHgwMDAwMDAwMj47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ3Bpby1s
aW5lLW5hbWVzID0gWzQ3IDUwIDQ5IDRmIDM4IDAwIDQ3IDUwIDQ5IDRmIDM5
IDAwIDQ3IDUwIDQ5IDRmIDMxIDMwIDAwIDQ3IDUwIDQ5IDRmIDMxIDMxIDAw
IDAwIDAwIDAwIDAwXTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBpbnRlcnJ1cHQtY29udHJvbGxlcjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAjaW50ZXJydXB0LWNlbGxz
ID0gPDB4MDAwMDAwMDI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAyYT47DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGdwbzBAMWEgew0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAia29udHJvbixz
bDI4Y3BsZC1ncG8iOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDFhPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBncGlvLWNvbnRyb2xsZXI7DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgI2dwaW8t
Y2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgZ3Bpby1saW5lLW5hbWVzID0gKiAweDAwMDAw
MDAwODI4MDE5ODggWzB4MDAwMDAwNzJdOw0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAxZD47
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIH07DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGdwaTBAMWIgew0KICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAi
a29udHJvbixzbDI4Y3BsZC1ncGkiOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDFiPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBncGlvLWNvbnRy
b2xsZXI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgI2dwaW8tY2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgZ3Bpby1saW5lLW5hbWVzID0g
KiAweDAwMDAwMDAwODI4MDFhNzggWzB4MDAwMDAwNTJdOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgw
MDAwMDAxZT47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIH07
DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgZWVwcm9tQDUwIHsNCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY29tcGF0aWJsZSA9ICJhdG1lbCwyNGMzMiI7DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDUwPjsN
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFnZXNpemUgPSA8
MHgwMDAwMDAyMD47DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAg
ICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgaTJjQDIwMTAwMDAg
ew0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2ws
dmY2MTAtaTJjIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNhZGRyZXNz
LWNlbGxzID0gPDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgI3NpemUtY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAweDAyMDEwMDAwIDB4MDAw
MDAwMDAgMHgwMDAxMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICBp
bnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgwMDAwMDAyMiAweDAwMDAwMDA0
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2NrcyA9IDwweDAwMDAw
MDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAg
ICAgICAgICBkbWEtbmFtZXMgPSAidHgiLCAicngiOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgZG1hcyA9IDwweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgw
MDAwMDAyNSAweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAyND47DQog
ICAgICAgICAgICAgICAgICAgICAgICBzdGF0dXMgPSAiZGlzYWJsZWQiOw0K
ICAgICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDJi
PjsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIGkyY0Ay
MDIwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUg
PSAiZnNsLHZmNjEwLWkyYyI7DQogICAgICAgICAgICAgICAgICAgICAgICAj
YWRkcmVzcy1jZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjAyMDAw
MCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwMjMgMHgw
MDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8
MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgZG1hLW5hbWVzID0gInR4IiwgInJ4IjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGRtYXMgPSA8MHgwMDAwMDAwOCAweDAwMDAw
MDAxIDB4MDAwMDAwMjMgMHgwMDAwMDAwOCAweDAwMDAwMDAxIDB4MDAwMDAw
MjI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0gImRpc2Fi
bGVkIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgw
MDAwMDAyYz47DQogICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAg
ICBpMmNAMjAzMDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICBjb21w
YXRpYmxlID0gImZzbCx2ZjYxMC1pMmMiOw0KICAgICAgICAgICAgICAgICAg
ICAgICAgI2FkZHJlc3MtY2VsbHMgPSA8MHgwMDAwMDAwMT47DQogICAgICAg
ICAgICAgICAgICAgICAgICAjc2l6ZS1jZWxscyA9IDwweDAwMDAwMDAwPjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDAwIDB4
MDIwMzAwMDAgMHgwMDAwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAweDAwMDAw
MDIzIDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xv
Y2tzID0gPDB4MDAwMDAwMDIgMHgwMDAwMDAwNCAweDAwMDAwMDAxPjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGRtYS1uYW1lczsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIGRtYXMgPSA8MHgwMDAwMDAwOCAweDAwMDAwMDAxIDB4
MDAwMDAwMjkgMHgwMDAwMDAwOCAweDAwMDAwMDAxIDB4MDAwMDAwMjg+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0gIm9rYXkiOw0KICAg
ICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDJkPjsN
CiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIGkyY0AyMDQw
MDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAi
ZnNsLHZmNjEwLWkyYyI7DQogICAgICAgICAgICAgICAgICAgICAgICAjYWRk
cmVzcy1jZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjA0MDAwMCAw
eDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwNGEgMHgwMDAw
MDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgw
MDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgZG1hLW5hbWVzOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgZG1hcyA9IDwweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAyYiAw
eDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAyYT47DQogICAgICAgICAg
ICAgICAgICAgICAgICBzdGF0dXMgPSAib2theSI7DQogICAgICAgICAgICAg
ICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMmU+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgZWVwcm9tQDUwIHsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJhdG1lbCwyNGMzMiI7DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAw
MDUwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFnZXNp
emUgPSA8MHgwMDAwMDAyMD47DQogICAgICAgICAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgaTJjQDIw
NTAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9
ICJmc2wsdmY2MTAtaTJjIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNh
ZGRyZXNzLWNlbGxzID0gPDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgI3NpemUtY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAg
ICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAweDAyMDUwMDAw
IDB4MDAwMDAwMDAgMHgwMDAxMDAwMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgwMDAwMDA0YSAweDAw
MDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2NrcyA9IDww
eDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47DQogICAgICAgICAg
ICAgICAgICAgICAgICBkbWEtbmFtZXMgPSAidHgiLCAicngiOw0KICAgICAg
ICAgICAgICAgICAgICAgICAgZG1hcyA9IDwweDAwMDAwMDA4IDB4MDAwMDAw
MDEgMHgwMDAwMDAyZCAweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAy
Yz47DQogICAgICAgICAgICAgICAgICAgICAgICBzdGF0dXMgPSAiZGlzYWJs
ZWQiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAw
MDAwMDJmPjsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAg
IGkyY0AyMDYwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBh
dGlibGUgPSAiZnNsLHZmNjEwLWkyYyI7DQogICAgICAgICAgICAgICAgICAg
ICAgICAjYWRkcmVzcy1jZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgw
MjA2MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAw
NGIgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9j
a3MgPSA8MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgZG1hLW5hbWVzID0gInR4IiwgInJ4IjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIGRtYXMgPSA8MHgwMDAwMDAwOCAw
eDAwMDAwMDAxIDB4MDAwMDAwMmYgMHgwMDAwMDAwOCAweDAwMDAwMDAxIDB4
MDAwMDAwMmU+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0g
ImRpc2FibGVkIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUg
PSA8MHgwMDAwMDAzMD47DQogICAgICAgICAgICAgICAgfTsNCiAgICAgICAg
ICAgICAgICBpMmNAMjA3MDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAg
ICBjb21wYXRpYmxlID0gImZzbCx2ZjYxMC1pMmMiOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgI2FkZHJlc3MtY2VsbHMgPSA8MHgwMDAwMDAwMT47DQog
ICAgICAgICAgICAgICAgICAgICAgICAjc2l6ZS1jZWxscyA9IDwweDAwMDAw
MDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAw
MDAwIDB4MDIwNzAwMDAgMHgwMDAwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAw
eDAwMDAwMDRiIDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgY2xvY2tzID0gPDB4MDAwMDAwMDIgMHgwMDAwMDAwNCAweDAwMDAwMDAx
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGRtYS1uYW1lcyA9ICJ0eCIs
ICJyeCI7DQogICAgICAgICAgICAgICAgICAgICAgICBkbWFzID0gPDB4MDAw
MDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDExIDB4MDAwMDAwMDggMHgwMDAw
MDAwMSAweDAwMDAwMDEwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHN0
YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAgICAgICAgICAgICBw
aGFuZGxlID0gPDB4MDAwMDAwMzE+Ow0KICAgICAgICAgICAgICAgIH07DQog
ICAgICAgICAgICAgICAgc3BpQDIxMDAwMDAgew0KICAgICAgICAgICAgICAg
ICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2wsbHMxMDI4YS1kc3BpIiwgImZz
bCxsczEwMjFhLXYxLjAtZHNwaSI7DQogICAgICAgICAgICAgICAgICAgICAg
ICAjYWRkcmVzcy1jZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjEw
MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwMWEg
MHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9jay1u
YW1lcyA9ICJkc3BpIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2Nr
cyA9IDwweDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47DQogICAg
ICAgICAgICAgICAgICAgICAgICBzcGktbnVtLWNoaXBzZWxlY3RzID0gPDB4
MDAwMDAwMDU+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgbGl0dGxlLWVu
ZGlhbjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNh
YmxlZCI7DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4
MDAwMDAwMzI+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAg
ICAgc3BpQDIxMTAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgY29t
cGF0aWJsZSA9ICJmc2wsbHMxMDI4YS1kc3BpIiwgImZzbCxsczEwMjFhLXYx
LjAtZHNwaSI7DQogICAgICAgICAgICAgICAgICAgICAgICAjYWRkcmVzcy1j
ZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAg
ICNzaXplLWNlbGxzID0gPDB4MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjExMDAwMCAweDAwMDAw
MDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50
ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwMWEgMHgwMDAwMDAwND47
DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9jay1uYW1lcyA9ICJkc3Bp
IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2NrcyA9IDwweDAwMDAw
MDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAg
ICAgICAgICBzcGktbnVtLWNoaXBzZWxlY3RzID0gPDB4MDAwMDAwMDU+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgbGl0dGxlLWVuZGlhbjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAg
ICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMzM+Ow0K
ICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgc3BpQDIxMjAw
MDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJm
c2wsbHMxMDI4YS1kc3BpIiwgImZzbCxsczEwMjFhLXYxLjAtZHNwaSI7DQog
ICAgICAgICAgICAgICAgICAgICAgICAjYWRkcmVzcy1jZWxscyA9IDwweDAw
MDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNzaXplLWNlbGxz
ID0gPDB4MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcmVn
ID0gPDB4MDAwMDAwMDAgMHgwMjEyMDAwMCAweDAwMDAwMDAwIDB4MDAwMTAw
MDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDww
eDAwMDAwMDAwIDB4MDAwMDAwMWEgMHgwMDAwMDAwND47DQogICAgICAgICAg
ICAgICAgICAgICAgICBjbG9jay1uYW1lcyA9ICJkc3BpIjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIGNsb2NrcyA9IDwweDAwMDAwMDAyIDB4MDAwMDAw
MDQgMHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAgICAgICAgICBzcGkt
bnVtLWNoaXBzZWxlY3RzID0gPDB4MDAwMDAwMDU+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgbGl0dGxlLWVuZGlhbjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAgICAg
ICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwMzQ+Ow0KICAgICAgICAgICAg
ICAgIH07DQogICAgICAgICAgICAgICAgY2FuQDIxODAwMDAgew0KICAgICAg
ICAgICAgICAgICAgICAgICAgeGVuLHBhc3N0aHJvdWdoOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2wsbHMxMDI4YXIxLWZs
ZXhjYW4iLCAiZnNsLGx4MjE2MGFyMS1mbGV4Y2FuIjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDAwIDB4MDIxODAwMDAgMHgw
MDAwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAweDAwMDAwMDE1IDB4MDAwMDAw
MDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2tzID0gPDB4MDAw
MDAwMDcgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgY2xvY2stbmFtZXMgPSAiaXBnIiwgInBl
ciI7DQogICAgICAgICAgICAgICAgICAgICAgICBzdGF0dXMgPSAib2theSI7
DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
MzU+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgY2Fu
QDIxOTAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgeGVuLHBhc3N0
aHJvdWdoOw0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9
ICJmc2wsbHMxMDI4YXIxLWZsZXhjYW4iLCAiZnNsLGx4MjE2MGFyMS1mbGV4
Y2FuIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAw
MDAwIDB4MDIxOTAwMDAgMHgwMDAwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAw
eDAwMDAwMDE2IDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgY2xvY2tzID0gPDB4MDAwMDAwMDcgMHgwMDAwMDAwMiAweDAwMDAwMDA0
IDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2st
bmFtZXMgPSAiaXBnIiwgInBlciI7DQogICAgICAgICAgICAgICAgICAgICAg
ICBzdGF0dXMgPSAiZGlzYWJsZWQiOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgcGhhbmRsZSA9IDwweDAwMDAwMDM2PjsNCiAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICAgICAgICAgIHNlcmlhbEAyMWMwNTAwIHsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNsLG5zMTY1NTAiLCAi
bnMxNjU1MGEiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4
MDAwMDAwMDAgMHgwMjFjMDUwMCAweDAwMDAwMDAwIDB4MDAwMDAxMDA+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAw
MDAwIDB4MDAwMDAwMjAgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAg
ICAgICAgICBjbG9ja3MgPSA8MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAw
MDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0gIm9r
YXkiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAw
MDAwMDM3PjsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAg
IHNlcmlhbEAyMWMwNjAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIHhl
bixwYXNzdGhyb3VnaDsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBh
dGlibGUgPSAiZnNsLG5zMTY1NTAiLCAibnMxNjU1MGEiOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjFjMDYwMCAw
eDAwMDAwMDAwIDB4MDAwMDAxMDA+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwMjAgMHgwMDAw
MDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgw
MDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgc3RhdHVzID0gIm9rYXkiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDM4PjsNCiAgICAgICAgICAg
ICAgICB9Ow0KICAgICAgICAgICAgICAgIHNlcmlhbEAyMjYwMDAwIHsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNsLGxzMTAy
MWEtbHB1YXJ0IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDww
eDAwMDAwMDAwIDB4MDIyNjAwMDAgMHgwMDAwMDAwMCAweDAwMDAxMDAwPjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAw
MDAwMCAweDAwMDAwMGU4IDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgY2xvY2tzID0gPDB4MDAwMDAwMDIgMHgwMDAwMDAwNCAweDAw
MDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2NrLW5hbWVz
ID0gImlwZyI7DQogICAgICAgICAgICAgICAgICAgICAgICBkbWEtbmFtZXMg
PSAidHgiLCAicngiOw0KICAgICAgICAgICAgICAgICAgICAgICAgZG1hcyA9
IDwweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAyMSAweDAwMDAwMDA4
IDB4MDAwMDAwMDEgMHgwMDAwMDAyMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBsaXR0bGUtZW5kaWFuOw0KICAgICAgICAgICAgICAgICAgICAgICAg
c3RhdHVzID0gImRpc2FibGVkIjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IHBoYW5kbGUgPSA8MHgwMDAwMDAzOT47DQogICAgICAgICAgICAgICAgfTsN
CiAgICAgICAgICAgICAgICBzZXJpYWxAMjI3MDAwMCB7DQogICAgICAgICAg
ICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCxsczEwMjFhLWxwdWFy
dCI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAw
MCAweDAyMjcwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMTAwMD47DQogICAgICAg
ICAgICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgw
MDAwMDBlOSAweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IGNsb2NrcyA9IDwweDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47
DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9jay1uYW1lcyA9ICJpcGci
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgZG1hLW5hbWVzID0gInR4Iiwg
InJ4IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGRtYXMgPSA8MHgwMDAw
MDAwOCAweDAwMDAwMDAxIDB4MDAwMDAwMWYgMHgwMDAwMDAwOCAweDAwMDAw
MDAxIDB4MDAwMDAwMWU+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgbGl0
dGxlLWVuZGlhbjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9
ICJva2F5IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8
MHgwMDAwMDAzYT47DQogICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAg
ICAgICBzZXJpYWxAMjI4MDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAg
ICBjb21wYXRpYmxlID0gImZzbCxsczEwMjFhLWxwdWFydCI7DQogICAgICAg
ICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAweDAyMjgwMDAw
IDB4MDAwMDAwMDAgMHgwMDAwMTAwMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgwMDAwMDBlYSAweDAw
MDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNsb2NrcyA9IDww
eDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47DQogICAgICAgICAg
ICAgICAgICAgICAgICBjbG9jay1uYW1lcyA9ICJpcGciOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgZG1hLW5hbWVzID0gInR4IiwgInJ4IjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGRtYXMgPSA8MHgwMDAwMDAwOCAweDAwMDAw
MDAxIDB4MDAwMDAwMWQgMHgwMDAwMDAwOCAweDAwMDAwMDAxIDB4MDAwMDAw
MWM+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgbGl0dGxlLWVuZGlhbjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7
DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
M2I+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgc2Vy
aWFsQDIyOTAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0
aWJsZSA9ICJmc2wsbHMxMDIxYS1scHVhcnQiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjI5MDAwMCAweDAwMDAw
MDAwIDB4MDAwMDEwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50
ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwZWIgMHgwMDAwMDAwND47
DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgwMDAwMDAw
MiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgY2xvY2stbmFtZXMgPSAiaXBnIjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIGRtYS1uYW1lcyA9ICJ0eCIsICJyeCI7DQogICAgICAgICAgICAg
ICAgICAgICAgICBkbWFzID0gPDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAw
MDAwMDFiIDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDFhPjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGxpdHRsZS1lbmRpYW47DQogICAgICAg
ICAgICAgICAgICAgICAgICBzdGF0dXMgPSAiZGlzYWJsZWQiOw0KICAgICAg
ICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDNjPjsNCiAg
ICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIHNlcmlhbEAyMmEw
MDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAi
ZnNsLGxzMTAyMWEtbHB1YXJ0IjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IHJlZyA9IDwweDAwMDAwMDAwIDB4MDIyYTAwMDAgMHgwMDAwMDAwMCAweDAw
MDAxMDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMg
PSA8MHgwMDAwMDAwMCAweDAwMDAwMGVjIDB4MDAwMDAwMDQ+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgY2xvY2tzID0gPDB4MDAwMDAwMDIgMHgwMDAw
MDAwNCAweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNs
b2NrLW5hbWVzID0gImlwZyI7DQogICAgICAgICAgICAgICAgICAgICAgICBk
bWEtbmFtZXMgPSAidHgiLCAicngiOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgZG1hcyA9IDwweDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAxOSAw
eDAwMDAwMDA4IDB4MDAwMDAwMDEgMHgwMDAwMDAxOD47DQogICAgICAgICAg
ICAgICAgICAgICAgICBsaXR0bGUtZW5kaWFuOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgc3RhdHVzID0gImRpc2FibGVkIjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAzZD47DQogICAgICAgICAg
ICAgICAgfTsNCiAgICAgICAgICAgICAgICBzZXJpYWxAMjJiMDAwMCB7DQog
ICAgICAgICAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCxsczEw
MjFhLWxwdWFydCI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8
MHgwMDAwMDAwMCAweDAyMmIwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMTAwMD47
DQogICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAw
MDAwMDAgMHgwMDAwMDBlZCAweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGNsb2NrcyA9IDwweDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgw
MDAwMDAwMT47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9jay1uYW1l
cyA9ICJpcGciOw0KICAgICAgICAgICAgICAgICAgICAgICAgZG1hLW5hbWVz
ID0gInR4IiwgInJ4IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGRtYXMg
PSA8MHgwMDAwMDAwOCAweDAwMDAwMDAxIDB4MDAwMDAwMTcgMHgwMDAwMDAw
OCAweDAwMDAwMDAxIDB4MDAwMDAwMTY+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgbGl0dGxlLWVuZGlhbjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAgICAgICAgICAg
ICBwaGFuZGxlID0gPDB4MDAwMDAwM2U+Ow0KICAgICAgICAgICAgICAgIH07
DQogICAgICAgICAgICAgICAgZG1hLWNvbnRyb2xsZXJAMjJjMDAwMCB7DQog
ICAgICAgICAgICAgICAgICAgICAgICAjc3RyZWFtLWlkLWNlbGxzID0gPDB4
MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgI2RtYS1jZWxs
cyA9IDwweDAwMDAwMDAyPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNv
bXBhdGlibGUgPSAiZnNsLHZmNjEwLWVkbWEiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwMjJjMDAwMCAweDAwMDAw
MDAwIDB4MDAwMTAwMDAgMHgwMDAwMDAwMCAweDAyMmQwMDAwIDB4MDAwMDAw
MDAgMHgwMDAxMDAwMCAweDAwMDAwMDAwIDB4MDIyZTAwMDAgMHgwMDAwMDAw
MCAweDAwMDEwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGludGVy
cnVwdHMgPSA8MHgwMDAwMDAwMCAweDAwMDAwMDM4IDB4MDAwMDAwMDQgMHgw
MDAwMDAwMCAweDAwMDAwMDM4IDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgaW50ZXJydXB0LW5hbWVzID0gImVkbWEtdHgiLCAiZWRt
YS1lcnIiOw0KICAgICAgICAgICAgICAgICAgICAgICAgZG1hLWNoYW5uZWxz
ID0gPDB4MDAwMDAwMjA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xv
Y2stbmFtZXMgPSAiZG1hbXV4MCIsICJkbWFtdXgxIjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIGNsb2NrcyA9IDwweDAwMDAwMDAyIDB4MDAwMDAwMDQg
MHgwMDAwMDAwMSAweDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47
DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
MDg+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgZ3Bp
b0AyMzAwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGli
bGUgPSAiZnNsLGxzMTAyOGEtZ3BpbyIsICJmc2wscW9yaXEtZ3BpbyI7DQog
ICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwMCAweDAy
MzAwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAwMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgwMDAwMDAy
NCAweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGdwaW8t
Y29udHJvbGxlcjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNncGlvLWNl
bGxzID0gPDB4MDAwMDAwMDI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
aW50ZXJydXB0LWNvbnRyb2xsZXI7DQogICAgICAgICAgICAgICAgICAgICAg
ICAjaW50ZXJydXB0LWNlbGxzID0gPDB4MDAwMDAwMDI+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgbGl0dGxlLWVuZGlhbjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGdwaW8tbGluZS1uYW1lcyA9IFswMCAwMCAwMCAwMCAwMCAw
MCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAw
MCAwMCA1NCA0NCA0ZiAwMCA1NCA0MyA0YiAwMCAwMCAwMCAwMCAwMCAwMCAw
MCAwMCAwMF07DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0g
PDB4MDAwMDAwM2Y+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAg
ICAgICAgZ3Bpb0AyMzEwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAg
IGNvbXBhdGlibGUgPSAiZnNsLGxzMTAyOGEtZ3BpbyIsICJmc2wscW9yaXEt
Z3BpbyI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAw
MDAwMCAweDAyMzEwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAwMD47DQogICAg
ICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAg
MHgwMDAwMDAyNCAweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgIGdwaW8tY29udHJvbGxlcjsNCiAgICAgICAgICAgICAgICAgICAgICAg
ICNncGlvLWNlbGxzID0gPDB4MDAwMDAwMDI+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgaW50ZXJydXB0LWNvbnRyb2xsZXI7DQogICAgICAgICAgICAg
ICAgICAgICAgICAjaW50ZXJydXB0LWNlbGxzID0gPDB4MDAwMDAwMDI+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgbGl0dGxlLWVuZGlhbjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGdwaW8tbGluZS1uYW1lcyA9IFswMCAwMCAw
MCAwMCAwMCAwMCA1NCA0ZCA1MyAwMCA1NCA0NCA0OSAwMCAwMCAwMCAwMCAw
MCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAw
MCAwMCAwMCAwMCAwMCAwMF07DQogICAgICAgICAgICAgICAgICAgICAgICBw
aGFuZGxlID0gPDB4MDAwMDAwMGE+Ow0KICAgICAgICAgICAgICAgIH07DQog
ICAgICAgICAgICAgICAgZ3Bpb0AyMzIwMDAwIHsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNsLGxzMTAyOGEtZ3BpbyIsICJm
c2wscW9yaXEtZ3BpbyI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcg
PSA8MHgwMDAwMDAwMCAweDAyMzIwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAw
MD47DQogICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4
MDAwMDAwMDAgMHgwMDAwMDAyNSAweDAwMDAwMDA0PjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIGdwaW8tY29udHJvbGxlcjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICNncGlvLWNlbGxzID0gPDB4MDAwMDAwMDI+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgaW50ZXJydXB0LWNvbnRyb2xsZXI7DQogICAg
ICAgICAgICAgICAgICAgICAgICAjaW50ZXJydXB0LWNlbGxzID0gPDB4MDAw
MDAwMDI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgbGl0dGxlLWVuZGlh
bjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAw
MDA0MD47DQogICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICBj
cnlwdG9AODAwMDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICB4ZW4s
cGFzc3Rocm91Z2g7DQogICAgICAgICAgICAgICAgICAgICAgICBjb21wYXRp
YmxlID0gImZzbCxzZWMtdjUuMCIsICJmc2wsc2VjLXY0LjAiOw0KICAgICAg
ICAgICAgICAgICAgICAgICAgZnNsLHNlYy1lcmEgPSA8MHgwMDAwMDAwYT47
DQogICAgICAgICAgICAgICAgICAgICAgICAjYWRkcmVzcy1jZWxscyA9IDww
eDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNzaXplLWNl
bGxzID0gPDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
cmFuZ2VzID0gPDB4MDAwMDAwMDAgMHgwMDAwMDAwMCAweDA4MDAwMDAwIDB4
MDAxMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4
MDAwMDAwMDAgMHgwODAwMDAwMCAweDAwMDAwMDAwIDB4MDAxMDAwMDA+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAw
MDAwIDB4MDAwMDAwOGIgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAg
ICAgICAgICBkbWEtY29oZXJlbnQ7DQogICAgICAgICAgICAgICAgICAgICAg
ICBwaGFuZGxlID0gPDB4MDAwMDAwNDE+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAganJAMTAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBjb21wYXRpYmxlID0gImZzbCxzZWMtdjUuMC1qb2ItcmluZyIsICJm
c2wsc2VjLXY0LjAtam9iLXJpbmciOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICByZWcgPSA8MHgwMDAxMDAwMCAweDAwMDEwMDAwPjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDww
eDAwMDAwMDAwIDB4MDAwMDAwOGMgMHgwMDAwMDAwND47DQogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDA0Mj47
DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAganJAMjAwMDAgew0KICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBjb21wYXRpYmxlID0gImZzbCxzZWMtdjUuMC1qb2ItcmluZyIs
ICJmc2wsc2VjLXY0LjAtam9iLXJpbmciOw0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICByZWcgPSA8MHgwMDAyMDAwMCAweDAwMDEwMDAwPjsN
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9
IDwweDAwMDAwMDAwIDB4MDAwMDAwOGQgMHgwMDAwMDAwND47DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDA0
Mz47DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAganJAMzAwMDAgew0KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCxzZWMtdjUuMC1qb2Itcmlu
ZyIsICJmc2wsc2VjLXY0LjAtam9iLXJpbmciOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAzMDAwMCAweDAwMDEwMDAw
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0
cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwOGUgMHgwMDAwMDAwND47DQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAw
MDA0ND47DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAganJANDAwMDAgew0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCxzZWMtdjUuMC1qb2It
cmluZyIsICJmc2wsc2VjLXY0LjAtam9iLXJpbmciOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDA0MDAwMCAweDAwMDEw
MDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJy
dXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwOGYgMHgwMDAwMDAwND47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgw
MDAwMDA0NT47DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgd2R0QGMwMDAwMDAgew0K
ICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJhcm0sc3A4
MDUiLCAiYXJtLHByaW1lY2VsbCI7DQogICAgICAgICAgICAgICAgICAgICAg
ICByZWcgPSA8MHgwMDAwMDAwMCAweDBjMDAwMDAwIDB4MDAwMDAwMDAgMHgw
MDAwMTAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8
MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMGYgMHgwMDAwMDAwMiAw
eDAwMDAwMDA0IDB4MDAwMDAwMGY+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgY2xvY2stbmFtZXMgPSAiYXBiX3BjbGsiLCAid2RvZ19jbGsiOw0KICAg
ICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDQ2PjsN
CiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIHdkdEBjMDEw
MDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAi
YXJtLHNwODA1IiwgImFybSxwcmltZWNlbGwiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwYzAxMDAwMCAweDAwMDAw
MDAwIDB4MDAwMDEwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xv
Y2tzID0gPDB4MDAwMDAwMDIgMHgwMDAwMDAwNCAweDAwMDAwMDBmIDB4MDAw
MDAwMDIgMHgwMDAwMDAwNCAweDAwMDAwMDBmPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGNsb2NrLW5hbWVzID0gImFwYl9wY2xrIiwgIndkb2dfY2xr
IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAw
MDA0Nz47DQogICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICBt
bWNAMjE0MDAwMCB7DQogICAgICAgICAgICAgICAgICAgICAgICAjc3RyZWFt
LWlkLWNlbGxzID0gPDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgY29tcGF0aWJsZSA9ICJmc2wsbHMxMDI4YS1lc2RoYyIsICJmc2ws
ZXNkaGMiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAw
MDAwMDAgMHgwMjE0MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAw
IDB4MDAwMDAwMWMgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAg
ICAgICBjbG9jay1mcmVxdWVuY3kgPSA8MHgwMDAwMDAwMD47DQogICAgICAg
ICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgwMDAwMDAwMiAweDAwMDAw
MDAyIDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgdm9s
dGFnZS1yYW5nZXMgPSA8MHgwMDAwMDcwOCAweDAwMDAwNzA4IDB4MDAwMDBj
ZTQgMHgwMDAwMGNlND47DQogICAgICAgICAgICAgICAgICAgICAgICBzZGhj
aSxhdXRvLWNtZDEyOw0KICAgICAgICAgICAgICAgICAgICAgICAgbGl0dGxl
LWVuZGlhbjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGJ1cy13aWR0aCA9
IDwweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1
cyA9ICJva2F5IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHNkLXVocy1z
ZHIxMDQ7DQogICAgICAgICAgICAgICAgICAgICAgICBzZC11aHMtc2RyNTA7
DQogICAgICAgICAgICAgICAgICAgICAgICBzZC11aHMtc2RyMjU7DQogICAg
ICAgICAgICAgICAgICAgICAgICBzZC11aHMtc2RyMTI7DQogICAgICAgICAg
ICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNDg+Ow0KICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgbW1jQDIxNTAwMDAgew0K
ICAgICAgICAgICAgICAgICAgICAgICAgI3N0cmVhbS1pZC1jZWxscyA9IDww
eDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGli
bGUgPSAiZnNsLGxzMTAyOGEtZXNkaGMiLCAiZnNsLGVzZGhjIjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwMDAwIDB4MDIxNTAw
MDAgMHgwMDAwMDAwMCAweDAwMDEwMDAwPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAweDAwMDAwMDNmIDB4
MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2stZnJl
cXVlbmN5ID0gPDB4MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgY2xvY2tzID0gPDB4MDAwMDAwMDIgMHgwMDAwMDAwMiAweDAwMDAwMDAx
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHZvbHRhZ2UtcmFuZ2VzID0g
PDB4MDAwMDA3MDggMHgwMDAwMDcwOCAweDAwMDAwY2U0IDB4MDAwMDBjZTQ+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgc2RoY2ksYXV0by1jbWQxMjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIGJyb2tlbi1jZDsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIGxpdHRsZS1lbmRpYW47DQogICAgICAgICAgICAg
ICAgICAgICAgICBidXMtd2lkdGggPSA8MHgwMDAwMDAwOD47DQogICAgICAg
ICAgICAgICAgICAgICAgICBzdGF0dXMgPSAib2theSI7DQogICAgICAgICAg
ICAgICAgICAgICAgICBtbWMtaHMyMDAtMV84djsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIG1tYy1wd3JzZXEgPSA8MHgwMDAwMDAwYj47DQogICAgICAg
ICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNDk+Ow0KICAg
ICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgc2F0YUAzMjAwMDAw
IHsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNs
LGxzMTAyOGEtYWhjaSI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcg
PSA8MHgwMDAwMDAwMCAweDAzMjAwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAw
MCAweDAwMDAwMDA3IDB4MDAxMDA1MjAgMHgwMDAwMDAwMCAweDAwMDAwMDA0
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZy1uYW1lcyA9ICJhaGNp
IiwgInNhdGEtZWNjIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGludGVy
cnVwdHMgPSA8MHgwMDAwMDAwMCAweDAwMDAwMDg1IDB4MDAwMDAwMDQ+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2tzID0gPDB4MDAwMDAwMDIg
MHgwMDAwMDAwNCAweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAgICAgICAg
ICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNGE+Ow0KICAgICAgICAgICAgICAg
IH07DQogICAgICAgICAgICAgICAgcGNpZUAzNDAwMDAwIHsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNsLGxzMTAyOGEtcGNp
ZSI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAw
MCAweDAzNDAwMDAwIDB4MDAwMDAwMDAgMHgwMDEwMDAwMCAweDAwMDAwMDgw
IDB4MDAwMDAwMDAgMHgwMDAwMDAwMCAweDAwMDAyMDAwPjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIHJlZy1uYW1lcyA9ICJyZWdzIiwgImNvbmZpZyI7
DQogICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAw
MDAwMDAgMHgwMDAwMDA2YyAweDAwMDAwMDA0IDB4MDAwMDAwMDAgMHgwMDAw
MDA2ZCAweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGlu
dGVycnVwdC1uYW1lcyA9ICJwbWUiLCAiYWVyIjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICNhZGRyZXNzLWNlbGxzID0gPDB4MDAwMDAwMDM+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgI3NpemUtY2VsbHMgPSA8MHgwMDAwMDAw
Mj47DQogICAgICAgICAgICAgICAgICAgICAgICBkZXZpY2VfdHlwZSA9ICJw
Y2kiOw0KICAgICAgICAgICAgICAgICAgICAgICAgZG1hLWNvaGVyZW50Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgaW9tbXUtbWFwID0gPDB4MDAwMDAw
MDAgMHgwMDAwMDAwYyAweDAwMDAwMDAwIDB4MDAwMDAwMDE+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgYnVzLXJhbmdlID0gPDB4MDAwMDAwMDAgMHgw
MDAwMDBmZj47DQogICAgICAgICAgICAgICAgICAgICAgICByYW5nZXMgPSA8
MHg4MTAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMDA4MCAw
eDAwMDEwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAwMCAweDgyMDAwMDAwIDB4
MDAwMDAwMDAgMHg0MDAwMDAwMCAweDAwMDAwMDgwIDB4NDAwMDAwMDAgMHgw
MDAwMDAwMCAweDQwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IG1zaS1wYXJlbnQgPSA8MHgwMDAwMDAwZD47DQogICAgICAgICAgICAgICAg
ICAgICAgICAjaW50ZXJydXB0LWNlbGxzID0gPDB4MDAwMDAwMDE+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0LW1hcC1tYXNrID0gPDB4
MDAwMDAwMDAgMHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDc+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0LW1hcCA9ICogMHgw
MDAwMDAwMDgyODAzYmVjIFsweDAwMDAwMGEwXTsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAg
ICAgfTsNCiAgICAgICAgICAgICAgICBwY2llQDM1MDAwMDAgew0KICAgICAg
ICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2wsbHMxMDI4YS1w
Y2llIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAw
MDAwIDB4MDM1MDAwMDAgMHgwMDAwMDAwMCAweDAwMTAwMDAwIDB4MDAwMDAw
ODggMHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDIwMDA+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgcmVnLW5hbWVzID0gInJlZ3MiLCAiY29uZmln
IjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgw
MDAwMDAwMCAweDAwMDAwMDcxIDB4MDAwMDAwMDQgMHgwMDAwMDAwMCAweDAw
MDAwMDcyIDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
aW50ZXJydXB0LW5hbWVzID0gInBtZSIsICJhZXIiOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgI2FkZHJlc3MtY2VsbHMgPSA8MHgwMDAwMDAwMz47DQog
ICAgICAgICAgICAgICAgICAgICAgICAjc2l6ZS1jZWxscyA9IDwweDAwMDAw
MDAyPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGRldmljZV90eXBlID0g
InBjaSI7DQogICAgICAgICAgICAgICAgICAgICAgICBkbWEtY29oZXJlbnQ7
DQogICAgICAgICAgICAgICAgICAgICAgICBpb21tdS1tYXAgPSA8MHgwMDAw
MDAwMCAweDAwMDAwMDBjIDB4MDAwMDAwMDAgMHgwMDAwMDAwMT47DQogICAg
ICAgICAgICAgICAgICAgICAgICBidXMtcmFuZ2UgPSA8MHgwMDAwMDAwMCAw
eDAwMDAwMGZmPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJhbmdlcyA9
IDwweDgxMDAwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMDAwMCAweDAwMDAwMDg4
IDB4MDAwMTAwMDAgMHgwMDAwMDAwMCAweDAwMDEwMDAwIDB4ODIwMDAwMDAg
MHgwMDAwMDAwMCAweDQwMDAwMDAwIDB4MDAwMDAwODggMHg0MDAwMDAwMCAw
eDAwMDAwMDAwIDB4NDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgbXNpLXBhcmVudCA9IDwweDAwMDAwMDBkPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICNpbnRlcnJ1cHQtY2VsbHMgPSA8MHgwMDAwMDAwMT47DQog
ICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHQtbWFwLW1hc2sgPSA8
MHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMDAwNz47
DQogICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1cHQtbWFwID0gKiAw
eDAwMDAwMDAwODI4MDNlNTAgWzB4MDAwMDAwYTBdOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgc3RhdHVzID0gImRpc2FibGVkIjsNCiAgICAgICAgICAg
ICAgICB9Ow0KICAgICAgICAgICAgICAgIHBjaWVAMWYwMDAwMDAwIHsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAicGNpLWhvc3Qt
ZWNhbS1nZW5lcmljIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9
IDwweDAwMDAwMDAxIDB4ZjAwMDAwMDAgMHgwMDAwMDAwMCAweDAwMTAwMDAw
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNhZGRyZXNzLWNlbGxzID0g
PDB4MDAwMDAwMDM+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgI3NpemUt
Y2VsbHMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAgICAgICAgICAg
ICBtc2ktcGFyZW50ID0gPDB4MDAwMDAwMGQ+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgZGV2aWNlX3R5cGUgPSAicGNpIjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGJ1cy1yYW5nZSA9IDwweDAwMDAwMDAwIDB4MDAwMDAwMDA+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgZG1hLWNvaGVyZW50Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgbXNpLW1hcCA9IDwweDAwMDAwMDAwIDB4
MDAwMDAwMGQgMHgwMDAwMDAxNyAweDAwMDAwMDBlPjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIGlvbW11LW1hcCA9IDwweDAwMDAwMDAwIDB4MDAwMDAw
MGMgMHgwMDAwMDAxNyAweDAwMDAwMDBlPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIHJhbmdlcyA9ICogMHgwMDAwMDAwMDgyODA0MDA0IFsweDAwMDAw
MGM0XTsNCiAgICAgICAgICAgICAgICAgICAgICAgIGV0aGVybmV0QDAsMCB7
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICNzdHJlYW0taWQt
Y2VsbHMgPSA8MHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGNvbXBhdGlibGUgPSAiZnNsLGVuZXRjIjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgw
MDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMDAwMD47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoeS1oYW5kbGUgPSA8
MHgwMDAwMDAwZT47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHBoeS1jb25uZWN0aW9uLXR5cGUgPSAic2dtaWkiOw0KICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBzdGF0dXMgPSAib2theSI7DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDA0
Yj47DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAg
ICAgICAgICAgICAgZXRoZXJuZXRAMCwxIHsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgI3N0cmVhbS1pZC1jZWxscyA9IDwweDAwMDAwMDAx
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJs
ZSA9ICJmc2wsZW5ldGMiOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICByZWcgPSA8MHgwMDAwMDEwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAg
MHgwMDAwMDAwMCAweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgcGh5LWhhbmRsZSA9IDwweDAwMDAwMDBmPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgcGh5LWNvbm5lY3Rpb24tdHlw
ZSA9ICJyZ21paSI7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHN0YXR1cyA9ICJva2F5IjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDRjPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICBldGhlcm5l
dEAwLDIgew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb21w
YXRpYmxlID0gImZzbCxlbmV0YyI7DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHJlZyA9IDwweDAwMDAwMjAwIDB4MDAwMDAwMDAgMHgwMDAw
MDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBzdGF0dXMgPSAiZGlzYWJsZWQiOw0KICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
NGQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmaXhlZC1s
aW5rIHsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBzcGVlZCA9IDwweDAwMDAwM2U4PjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBmdWxsLWR1cGxleDsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAg
ICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICBtZGlvQDAsMyB7DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAi
ZnNsLGVuZXRjLW1kaW8iOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICByZWcgPSA8MHgwMDAwMDMwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAg
MHgwMDAwMDAwMCAweDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgI2FkZHJlc3MtY2VsbHMgPSA8MHgwMDAwMDAwMT47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICNzaXplLWNlbGxzID0g
PDB4MDAwMDAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBwaGFuZGxlID0gPDB4MDAwMDAwNGU+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBldGhlcm5ldC1waHlANSB7DQogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDU+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBo
YW5kbGUgPSA8MHgwMDAwMDAwZT47DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IGV0aGVybmV0LXBoeUA0IHsNCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICByZWcgPSA8MHgwMDAwMDAwND47DQogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0gIm9rYXki
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBo
YW5kbGUgPSA8MHgwMDAwMDAwZj47DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgZXRoZXJuZXRAMCw0IHsNCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2wsZW5l
dGMtcHRwIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVn
ID0gPDB4MDAwMDA0MDAgMHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAw
MDAgMHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGNsb2NrcyA9IDwweDAwMDAwMDAyIDB4MDAwMDAwMDQgMHgwMDAwMDAw
MD47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGxpdHRsZS1l
bmRpYW47DQogICAgICAgICAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgc3dpdGNoQDAsNSB7DQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAibXNjYyxmZWxpeC1zd2l0
Y2giOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8
MHgwMDAwMDUwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMDAwMCAw
eDAwMDAwMDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
aW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwNWYgMHgwMDAwMDAw
ND47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBvcnRzIHsN
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAjYWRk
cmVzcy1jZWxscyA9IDwweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAjc2l6ZS1jZWxscyA9IDwweDAwMDAw
MDAwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBwb3J0QDAgew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDA+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVz
ID0gImRpc2FibGVkIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDA0Zj47DQog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfTsNCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwb3J0QDEg
ew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgcmVnID0gPDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0gImRpc2Fi
bGVkIjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDA1MD47DQogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwb3J0QDIgew0KICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVn
ID0gPDB4MDAwMDAwMDI+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgc3RhdHVzID0gImRpc2FibGVkIjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHBoYW5kbGUgPSA8MHgwMDAwMDA1MT47DQogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBwb3J0QDMgew0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAw
MDAwMDM+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgc3RhdHVzID0gImRpc2FibGVkIjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUg
PSA8MHgwMDAwMDA1Mj47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBwb3J0QDQgew0KICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDQ+Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgc3RhdHVzID0gImRpc2FibGVkIjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAw
MDA1Mz47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBmaXhlZC1saW5rIHsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgc3BlZWQgPSA8
MHgwMDAwMDNlOD47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGZ1bGwtZHVwbGV4Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfTsN
CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB9Ow0K
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBvcnRA
NSB7DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICByZWcgPSA8MHgwMDAwMDAwNT47DQogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBldGhlcm5ldCA9IDww
eDAwMDAwMDEwPjsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwaGFu
ZGxlID0gPDB4MDAwMDAwNTQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgZml4ZWQtbGluayB7DQogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHNwZWVkID0gPDB4MDAwMDAzZTg+Ow0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmdWxsLWR1
cGxleDsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgfTsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
fTsNCiAgICAgICAgICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAg
ICAgICAgICAgICBldGhlcm5ldEAwLDYgew0KICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCxlbmV0YyI7DQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9IDwweDAwMDAwNjAw
IDB4MDAwMDAwMDAgMHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDA+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGF0dXMgPSAi
ZGlzYWJsZWQiOw0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBw
aGFuZGxlID0gPDB4MDAwMDAwMTA+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBmaXhlZC1saW5rIHsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBzcGVlZCA9IDwweDAwMDAwM2U4PjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmdWxsLWR1
cGxleDsNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfTsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgfTsN
CiAgICAgICAgICAgICAgICBpb21tdUA1MDAwMDAwIHsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIG1tdS1tYXN0ZXJzID0gPDB4MDAwMDAwNDkgMHgwMDAw
MDgwMCAweDAwMDAwMDU3IDB4MDAwMDAwMTIgMHgwMDAwMDA0YiAweDAwMDAw
NDE3IDB4MDAwMDAwNGMgMHgwMDAwMDQxOCAweDAwMDAwMDA4IDB4MDAwMDAw
MjA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9ICJh
cm0sbW11LTUwMCI7DQogICAgICAgICAgICAgICAgICAgICAgICByZWcgPSA8
MHgwMDAwMDAwMCAweDA1MDAwMDAwIDB4MDAwMDAwMDAgMHgwMDgwMDAwMD47
DQogICAgICAgICAgICAgICAgICAgICAgICAjZ2xvYmFsLWludGVycnVwdHMg
PSA8MHgwMDAwMDAwOD47DQogICAgICAgICAgICAgICAgICAgICAgICAjaW9t
bXUtY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBzdHJlYW0tbWF0Y2gtbWFzayA9IDwweDAwMDA3YzAwPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSAqIDB4MDAwMDAwMDA4
MjgwNDg2NCBbMHgwMDAwMDM2MF07DQogICAgICAgICAgICAgICAgICAgICAg
ICBwaGFuZGxlID0gPDB4MDAwMDAwMGM+Ow0KICAgICAgICAgICAgICAgIH07
DQogICAgICAgICAgICAgICAgZG1hLWNvbnRyb2xsZXJAODM4MDAwMCB7DQog
ICAgICAgICAgICAgICAgICAgICAgICAjc3RyZWFtLWlkLWNlbGxzID0gPDB4
MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY29tcGF0aWJs
ZSA9ICJmc2wsbHMxMDI4YS1xZG1hIiwgImZzbCxsczEwMjFhLXFkbWEiOw0K
ICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgw
ODM4MDAwMCAweDAwMDAwMDAwIDB4MDAwMDEwMDAgMHgwMDAwMDAwMCAweDA4
MzkwMDAwIDB4MDAwMDAwMDAgMHgwMDAxMDAwMCAweDAwMDAwMDAwIDB4MDgz
YTAwMDAgMHgwMDAwMDAwMCAweDAwMDQwMDAwPjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAweDAwMDAwMDJi
IDB4MDAwMDAwMDQgMHgwMDAwMDAwMCAweDAwMDAwMGZiIDB4MDAwMDAwMDQg
MHgwMDAwMDAwMCAweDAwMDAwMGZjIDB4MDAwMDAwMDQgMHgwMDAwMDAwMCAw
eDAwMDAwMGZkIDB4MDAwMDAwMDQgMHgwMDAwMDAwMCAweDAwMDAwMGZlIDB4
MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0
LW5hbWVzID0gInFkbWEtZXJyb3IiLCAicWRtYS1xdWV1ZTAiLCAicWRtYS1x
dWV1ZTEiLCAicWRtYS1xdWV1ZTIiLCAicWRtYS1xdWV1ZTMiOw0KICAgICAg
ICAgICAgICAgICAgICAgICAgY2hhbm5lbHMgPSA8MHgwMDAwMDAwOD47DQog
ICAgICAgICAgICAgICAgICAgICAgICBibG9jay1udW1iZXIgPSA8MHgwMDAw
MDAwMT47DQogICAgICAgICAgICAgICAgICAgICAgICBibG9jay1vZmZzZXQg
PSA8MHgwMDAxMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICBxdWV1
ZXMgPSA8MHgwMDAwMDAwMj47DQogICAgICAgICAgICAgICAgICAgICAgICBz
dGF0dXMtc2l6ZXMgPSA8MHgwMDAwMDA0MD47DQogICAgICAgICAgICAgICAg
ICAgICAgICBxdWV1ZS1zaXplcyA9IDwweDAwMDAwMDQwIDB4MDAwMDAwNDA+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAw
MDU3PjsNCiAgICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIGdw
dUBmMGMwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgIHhlbixwYXNz
dGhyb3VnaDsNCiAgICAgICAgICAgICAgICAgICAgICAgIGNvbXBhdGlibGUg
PSAiZnNsLGxzMTAyOGEtZ3B1IjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IHJlZyA9IDwweDAwMDAwMDAwIDB4MGYwYzAwMDAgMHgwMDAwMDAwMCAweDAw
MDEwMDAwIDB4MDAwMDAwMDAgMHg4MDAwMDAwMCAweDAwMDAwMDAwIDB4ODAw
MDAwMDAgMHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAgMHgwMzAw
MDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICByZWctbmFtZXMgPSAi
YmFzZSIsICJwaHlzX2Jhc2VhZGRyIiwgImNvbnRpZ3VvdXNfbWVtIjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAw
MCAweDAwMDAwMGRjIDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgIH07
DQogICAgICAgICAgICAgICAgYXVkaW8tY29udHJvbGxlckBmMTAwMDAwIHsN
CiAgICAgICAgICAgICAgICAgICAgICAgICNzb3VuZC1kYWktY2VsbHMgPSA8
MHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICBjb21wYXRp
YmxlID0gImZzbCx2ZjYxMC1zYWkiOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwZjEwMDAwMCAweDAwMDAwMDAwIDB4
MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0
cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwNTIgMHgwMDAwMDAwND47DQogICAg
ICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgwMDAwMDAwMiAweDAw
MDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAw
MDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAw
MDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgY2xvY2stbmFtZXMgPSAiYnVzIiwgIm1jbGsxIiwgIm1jbGsy
IiwgIm1jbGszIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGRtYS1uYW1l
cyA9ICJ0eCIsICJyeCI7DQogICAgICAgICAgICAgICAgICAgICAgICBkbWFz
ID0gPDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDA0IDB4MDAwMDAw
MDggMHgwMDAwMDAwMSAweDAwMDAwMDAzPjsNCiAgICAgICAgICAgICAgICAg
ICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAgICAg
ICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNTg+Ow0KICAgICAgICAgICAg
ICAgIH07DQogICAgICAgICAgICAgICAgYXVkaW8tY29udHJvbGxlckBmMTEw
MDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgICNzb3VuZC1kYWktY2Vs
bHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgICAgICAgICBj
b21wYXRpYmxlID0gImZzbCx2ZjYxMC1zYWkiOw0KICAgICAgICAgICAgICAg
ICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwZjExMDAwMCAweDAwMDAw
MDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgaW50
ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwNTIgMHgwMDAwMDAwND47
DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgwMDAwMDAw
MiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0
IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEg
MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgY2xvY2stbmFtZXMgPSAiYnVzIiwgIm1jbGsxIiwg
Im1jbGsyIiwgIm1jbGszIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIGRt
YS1uYW1lcyA9ICJ0eCIsICJyeCI7DQogICAgICAgICAgICAgICAgICAgICAg
ICBkbWFzID0gPDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDA2IDB4
MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDA1PjsNCiAgICAgICAgICAg
ICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAgICAgICAg
ICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNTk+Ow0KICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgYXVkaW8tY29udHJvbGxl
ckBmMTIwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgICNzb3VuZC1k
YWktY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAgICAgICAg
ICAgICBjb21wYXRpYmxlID0gImZzbCx2ZjYxMC1zYWkiOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwZjEyMDAwMCAw
eDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAgICAgICAg
ICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwNTMgMHgwMDAw
MDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3MgPSA8MHgw
MDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAw
MDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAw
MDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgY2xvY2stbmFtZXMgPSAiYnVzIiwgIm1j
bGsxIiwgIm1jbGsyIiwgIm1jbGszIjsNCiAgICAgICAgICAgICAgICAgICAg
ICAgIGRtYS1uYW1lcyA9ICJ0eCIsICJyeCI7DQogICAgICAgICAgICAgICAg
ICAgICAgICBkbWFzID0gPDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAw
MDA4IDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDA3PjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7DQogICAg
ICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNWE+Ow0K
ICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgYXVkaW8tY29u
dHJvbGxlckBmMTMwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAgICAgICNz
b3VuZC1kYWktY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCx2ZjYxMC1zYWkiOw0KICAg
ICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAgMHgwZjEz
MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAgICAgICAg
ICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAwMDAwNTMg
MHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBjbG9ja3Mg
PSA8MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAw
MiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0
IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDE+
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2stbmFtZXMgPSAiYnVz
IiwgIm1jbGsxIiwgIm1jbGsyIiwgIm1jbGszIjsNCiAgICAgICAgICAgICAg
ICAgICAgICAgIGRtYS1uYW1lcyA9ICJ0eCIsICJyeCI7DQogICAgICAgICAg
ICAgICAgICAgICAgICBkbWFzID0gPDB4MDAwMDAwMDggMHgwMDAwMDAwMSAw
eDAwMDAwMDBhIDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAwMDA5PjsN
CiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNhYmxlZCI7
DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAw
NWI+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgYXVk
aW8tY29udHJvbGxlckBmMTQwMDAwIHsNCiAgICAgICAgICAgICAgICAgICAg
ICAgICNzb3VuZC1kYWktY2VsbHMgPSA8MHgwMDAwMDAwMD47DQogICAgICAg
ICAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCx2ZjYxMC1zYWki
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAg
MHgwZjE0MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAwIDB4MDAw
MDAwNTQgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAgICAgICBj
bG9ja3MgPSA8MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgw
MDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAw
MDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAw
MDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2stbmFtZXMg
PSAiYnVzIiwgIm1jbGsxIiwgIm1jbGsyIiwgIm1jbGszIjsNCiAgICAgICAg
ICAgICAgICAgICAgICAgIGRtYS1uYW1lcyA9ICJ0eCIsICJyeCI7DQogICAg
ICAgICAgICAgICAgICAgICAgICBkbWFzID0gPDB4MDAwMDAwMDggMHgwMDAw
MDAwMSAweDAwMDAwMDBjIDB4MDAwMDAwMDggMHgwMDAwMDAwMSAweDAwMDAw
MDBiPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9ICJkaXNh
YmxlZCI7DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4
MDAwMDAwNWM+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgICAgICAg
ICAgYXVkaW8tY29udHJvbGxlckBmMTUwMDAwIHsNCiAgICAgICAgICAgICAg
ICAgICAgICAgICNzb3VuZC1kYWktY2VsbHMgPSA8MHgwMDAwMDAwMD47DQog
ICAgICAgICAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gImZzbCx2ZjYx
MC1zYWkiOw0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAw
MDAwMDAgMHgwZjE1MDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAg
ICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAwMDAw
IDB4MDAwMDAwNTQgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgICAg
ICAgICBjbG9ja3MgPSA8MHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAw
MDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAw
MiAweDAwMDAwMDA0IDB4MDAwMDAwMDEgMHgwMDAwMDAwMiAweDAwMDAwMDA0
IDB4MDAwMDAwMDE+Ow0KICAgICAgICAgICAgICAgICAgICAgICAgY2xvY2st
bmFtZXMgPSAiYnVzIiwgIm1jbGsxIiwgIm1jbGsyIiwgIm1jbGszIjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGRtYS1uYW1lcyA9ICJ0eCIsICJyeCI7
DQogICAgICAgICAgICAgICAgICAgICAgICBkbWFzID0gPDB4MDAwMDAwMDgg
MHgwMDAwMDAwMSAweDAwMDAwMDBlIDB4MDAwMDAwMDggMHgwMDAwMDAwMSAw
eDAwMDAwMDBkPjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHN0YXR1cyA9
ICJkaXNhYmxlZCI7DQogICAgICAgICAgICAgICAgICAgICAgICBwaGFuZGxl
ID0gPDB4MDAwMDAwNWQ+Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAg
ICAgICAgICAgcmNwbUAxZTM0MDQwIHsNCiAgICAgICAgICAgICAgICAgICAg
ICAgIGNvbXBhdGlibGUgPSAiZnNsLGxzMTAyOGEtcmNwbSIsICJmc2wscW9y
aXEtcmNwbS0yLjErIjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHJlZyA9
IDwweDAwMDAwMDAwIDB4MDFlMzQwNDAgMHgwMDAwMDAwMCAweDAwMDAwMDFj
PjsNCiAgICAgICAgICAgICAgICAgICAgICAgICNmc2wscmNwbS13YWtldXAt
Y2VsbHMgPSA8MHgwMDAwMDAwNz47DQogICAgICAgICAgICAgICAgICAgICAg
ICBsaXR0bGUtZW5kaWFuOw0KICAgICAgICAgICAgICAgICAgICAgICAgcGhh
bmRsZSA9IDwweDAwMDAwMDE2PjsNCiAgICAgICAgICAgICAgICB9Ow0KICAg
ICAgICAgICAgICAgIHRpbWVyQDI4MDAwMDAgew0KICAgICAgICAgICAgICAg
ICAgICAgICAgY29tcGF0aWJsZSA9ICJmc2wsbHMxMDI4YS1mdG0tYWxhcm0i
Ow0KICAgICAgICAgICAgICAgICAgICAgICAgcmVnID0gPDB4MDAwMDAwMDAg
MHgwMjgwMDAwMCAweDAwMDAwMDAwIDB4MDAwMTAwMDA+Ow0KICAgICAgICAg
ICAgICAgICAgICAgICAgZnNsLHJjcG0td2FrZXVwID0gPDB4MDAwMDAwMTYg
MHgwMDAwMDAwMCAweDAwMDAwMDAwIDB4MDAwMDAwMDAgMHgwMDAwMDAwMCAw
eDAwMDA0MDAwIDB4MDAwMDAwMDAgMHgwMDAwMDAwMD47DQogICAgICAgICAg
ICAgICAgICAgICAgICBpbnRlcnJ1cHRzID0gPDB4MDAwMDAwMDAgMHgwMDAw
MDAyYyAweDAwMDAwMDA0PjsNCiAgICAgICAgICAgICAgICAgICAgICAgIHBo
YW5kbGUgPSA8MHgwMDAwMDA1ZT47DQogICAgICAgICAgICAgICAgfTsNCiAg
ICAgICAgfTsNCiAgICAgICAgbWFsaWRwQGYwODAwMDAgew0KICAgICAgICAg
ICAgICAgIHhlbixwYXNzdGhyb3VnaDsNCiAgICAgICAgICAgICAgICBjb21w
YXRpYmxlID0gImFybSxtYWxpLWRwNTAwIjsNCiAgICAgICAgICAgICAgICBy
ZWcgPSA8MHgwMDAwMDAwMCAweDBmMDgwMDAwIDB4MDAwMDAwMDAgMHgwMDAx
MDAwMD47DQogICAgICAgICAgICAgICAgaW50ZXJydXB0cyA9IDwweDAwMDAw
MDAwIDB4MDAwMDAwZGUgMHgwMDAwMDAwNCAweDAwMDAwMDAwIDB4MDAwMDAw
ZGYgMHgwMDAwMDAwND47DQogICAgICAgICAgICAgICAgaW50ZXJydXB0LW5h
bWVzID0gIkRFIiwgIlNFIjsNCiAgICAgICAgICAgICAgICBjbG9ja3MgPSA8
MHgwMDAwMDAxNyAweDAwMDAwMDE4IDB4MDAwMDAwMTggMHgwMDAwMDAxOT47
DQogICAgICAgICAgICAgICAgY2xvY2stbmFtZXMgPSAicHhsY2xrIiwgIm1j
bGsiLCAiYWNsayIsICJwY2xrIjsNCiAgICAgICAgICAgICAgICBhcm0sbWFs
aWRwLW91dHB1dC1wb3J0LWxpbmVzID0gWzA4IDA4IDA4XTsNCiAgICAgICAg
ICAgICAgICBwaGFuZGxlID0gPDB4MDAwMDAwNWY+Ow0KICAgICAgICAgICAg
ICAgIHBvcnQgew0KICAgICAgICAgICAgICAgICAgICAgICAgZW5kcG9pbnQg
ew0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICByZW1vdGUtZW5k
cG9pbnQgPSA8MHgwMDAwMDAxYT47DQogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHBoYW5kbGUgPSA8MHgwMDAwMDAxYz47DQogICAgICAgICAg
ICAgICAgICAgICAgICB9Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAg
IH07DQogICAgICAgIGhkcEBmMjAwMDAwIHsNCiAgICAgICAgICAgICAgICB4
ZW4scGFzc3Rocm91Z2g7DQogICAgICAgICAgICAgICAgY29tcGF0aWJsZSA9
ICJmc2wsbHMxMDI4YS1kcCI7DQogICAgICAgICAgICAgICAgcmVnID0gPDB4
MDAwMDAwMDAgMHgwZjIwMDAwMCAweDAwMDAwMDAwIDB4MDAwZmZmZmY+Ow0K
ICAgICAgICAgICAgICAgIGludGVycnVwdHMgPSA8MHgwMDAwMDAwMCAweDAw
MDAwMGRkIDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAgICAgIGNsb2NrcyA9
IDwweDAwMDAwMDA3IDB4MDAwMDAwMDIgMHgwMDAwMDAwMiAweDAwMDAwMDAy
IDB4MDAwMDAwMWIgMHgwMDAwMDAxYiAweDAwMDAwMDFiIDB4MDAwMDAwMWIg
MHgwMDAwMDAxYj47DQogICAgICAgICAgICAgICAgY2xvY2stbmFtZXMgPSAi
Y2xrX2lwZyIsICJjbGtfY29yZSIsICJjbGtfcHhsIiwgImNsa19weGxfbXV4
IiwgImNsa19weGxfbGluayIsICJjbGtfYXBiIiwgImNsa192aWYiOw0KICAg
ICAgICAgICAgICAgIGZzbCxub19lZGlkOw0KICAgICAgICAgICAgICAgIHJl
c29sdXRpb24gPSAiMzg0MHgyMTYwQDYwIiwgIjE5MjB4MTA4MEA2MCIsICIx
MjgweDcyMEA2MCIsICI3MjB4NDgwQDYwIjsNCiAgICAgICAgICAgICAgICBs
YW5lX21hcHBpbmcgPSA8MHgwMDAwMDA0ZT47DQogICAgICAgICAgICAgICAg
ZWRwX2xpbmtfcmF0ZSA9IDwweDAwMDAwMDA2PjsNCiAgICAgICAgICAgICAg
ICBlZHBfbnVtX2xhbmVzID0gPDB4MDAwMDAwMDQ+Ow0KICAgICAgICAgICAg
ICAgIHN0YXR1cyA9ICJva2F5IjsNCiAgICAgICAgICAgICAgICBwaGFuZGxl
ID0gPDB4MDAwMDAwNjA+Ow0KICAgICAgICAgICAgICAgIHBvcnQgew0KICAg
ICAgICAgICAgICAgICAgICAgICAgZW5kcG9pbnQgew0KICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICByZW1vdGUtZW5kcG9pbnQgPSA8MHgwMDAw
MDAxYz47DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBoYW5k
bGUgPSA8MHgwMDAwMDAxYT47DQogICAgICAgICAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICAgICAgICAgIH07DQogICAgICAgIH07DQogICAgICAgIGVt
bWMtcHdyc2VxIHsNCiAgICAgICAgICAgICAgICBjb21wYXRpYmxlID0gIm1t
Yy1wd3JzZXEtc2ltcGxlIjsNCiAgICAgICAgICAgICAgICByZXNldC1ncGlv
cyA9IDwweDAwMDAwMDFkIDB4MDAwMDAwMDIgMHgwMDAwMDAwMD47DQogICAg
ICAgICAgICAgICAgcGhhbmRsZSA9IDwweDAwMDAwMDBiPjsNCiAgICAgICAg
fTsNCiAgICAgICAgYnV0dG9uczAgew0KICAgICAgICAgICAgICAgIGNvbXBh
dGlibGUgPSAiZ3Bpby1rZXlzIjsNCiAgICAgICAgICAgICAgICBwb3dlci1i
dXR0b24gew0KICAgICAgICAgICAgICAgICAgICAgICAgaW50ZXJydXB0LXBh
cmVudCA9IDwweDAwMDAwMDA5PjsNCiAgICAgICAgICAgICAgICAgICAgICAg
IGludGVycnVwdHMgPSA8MHgwMDAwMDAwNCAweDAwMDAwMDAwPjsNCiAgICAg
ICAgICAgICAgICAgICAgICAgIGxpbnV4LGNvZGUgPSA8MHgwMDAwMDA3ND47
DQogICAgICAgICAgICAgICAgICAgICAgICBsYWJlbCA9ICJQb3dlciI7DQog
ICAgICAgICAgICAgICAgICAgICAgICB3YWtldXAtc291cmNlOw0KICAgICAg
ICAgICAgICAgIH07DQogICAgICAgICAgICAgICAgc2xlZXAtYnV0dG9uIHsN
CiAgICAgICAgICAgICAgICAgICAgICAgIGludGVycnVwdC1wYXJlbnQgPSA8
MHgwMDAwMDAwOT47DQogICAgICAgICAgICAgICAgICAgICAgICBpbnRlcnJ1
cHRzID0gPDB4MDAwMDAwMDUgMHgwMDAwMDAwMD47DQogICAgICAgICAgICAg
ICAgICAgICAgICBsaW51eCxjb2RlID0gPDB4MDAwMDAwOGU+Ow0KICAgICAg
ICAgICAgICAgICAgICAgICAgbGFiZWwgPSAiU2xlZXAiOw0KICAgICAgICAg
ICAgICAgICAgICAgICAgd2FrZXVwLXNvdXJjZTsNCiAgICAgICAgICAgICAg
ICB9Ow0KICAgICAgICB9Ow0KICAgICAgICBidXR0b25zMSB7DQogICAgICAg
ICAgICAgICAgY29tcGF0aWJsZSA9ICJncGlvLWtleXMtcG9sbGVkIjsNCiAg
ICAgICAgICAgICAgICBwb2xsLWludGVydmFsID0gPDB4MDAwMDAwYzg+Ow0K
ICAgICAgICAgICAgICAgIGxpZF9zd2l0Y2ggew0KICAgICAgICAgICAgICAg
ICAgICAgICAgbGludXgsaW5wdXQtdHlwZSA9IDwweDAwMDAwMDA1PjsNCiAg
ICAgICAgICAgICAgICAgICAgICAgIGxpbnV4LGNvZGUgPSA8MHgwMDAwMDAw
MD47DQogICAgICAgICAgICAgICAgICAgICAgICBncGlvcyA9IDwweDAwMDAw
MDFlIDB4MDAwMDAwMDQgMHgwMDAwMDAwMT47DQogICAgICAgICAgICAgICAg
ICAgICAgICBsYWJlbCA9ICJMaWQiOw0KICAgICAgICAgICAgICAgICAgICAg
ICAgcGhhbmRsZSA9IDwweDAwMDAwMDYxPjsNCiAgICAgICAgICAgICAgICB9
Ow0KICAgICAgICB9Ow0KICAgICAgICBjaGFyZ2VyIHsNCiAgICAgICAgICAg
ICAgICBjb21wYXRpYmxlID0gImdwaW8tY2hhcmdlciI7DQogICAgICAgICAg
ICAgICAgY2hhcmdlci10eXBlID0gImJhdHRlcnkiOw0KICAgICAgICAgICAg
ICAgIGdwaW9zID0gPDB4MDAwMDAwMWUgMHgwMDAwMDAwNiAweDAwMDAwMDAx
PjsNCiAgICAgICAgICAgICAgICBjaGFyZ2luZy1ncGlvID0gPDB4MDAwMDAw
MWUgMHgwMDAwMDAwNSAweDAwMDAwMDAxPjsNCiAgICAgICAgICAgICAgICBi
YXQtbG93LWdwaW8gPSA8MHgwMDAwMDAxZSAweDAwMDAwMDAzIDB4MDAwMDAw
MDE+Ow0KICAgICAgICB9Ow0KICAgICAgICBfX3N5bWJvbHNfXyB7DQogICAg
ICAgICAgICAgICAgY3B1MCA9ICIvY3B1cy9jcHVAMCI7DQogICAgICAgICAg
ICAgICAgY3B1MSA9ICIvY3B1cy9jcHVAMSI7DQogICAgICAgICAgICAgICAg
bDIgPSAiL2NwdXMvbDItY2FjaGUiOw0KICAgICAgICAgICAgICAgIENQVV9Q
VzIwID0gIi9pZGxlLXN0YXRlcy9jcHUtcHcyMCI7DQogICAgICAgICAgICAg
ICAgc3lzY2xrID0gIi9jbG9jay1zeXNjbGsiOw0KICAgICAgICAgICAgICAg
IG9zY18yN20gPSAiL2Nsb2NrLW9zYy0yN20iOw0KICAgICAgICAgICAgICAg
IGRwY2xrID0gIi9jbG9jay1kaXNwbGF5QGYxZjAwMDAiOw0KICAgICAgICAg
ICAgICAgIGFjbGsgPSAiL2Nsb2NrLWF4aSI7DQogICAgICAgICAgICAgICAg
cGNsayA9ICIvY2xvY2stYXBiIjsNCiAgICAgICAgICAgICAgICBoZHBjbGsg
PSAiL2Nsb2NrLWhkcGNvcmUiOw0KICAgICAgICAgICAgICAgIGdpYyA9ICIv
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDAwMCI7DQogICAgICAgICAgICAg
ICAgaXRzID0gIi9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwMDAwL2dpYy1p
dHNANjAyMDAwMCI7DQogICAgICAgICAgICAgICAgc29jID0gIi9zb2MiOw0K
ICAgICAgICAgICAgICAgIGRkciA9ICIvc29jL21lbW9yeS1jb250cm9sbGVy
QDEwODAwMDAiOw0KICAgICAgICAgICAgICAgIGRjZmcgPSAiL3NvYy9zeXNj
b25AMWUwMDAwMCI7DQogICAgICAgICAgICAgICAgcnN0ID0gIi9zb2Mvc3lz
Y29uQDFlNjAwMDAiOw0KICAgICAgICAgICAgICAgIHNjZmcgPSAiL3NvYy9z
eXNjb25AMWZjMDAwMCI7DQogICAgICAgICAgICAgICAgY2xvY2tnZW4gPSAi
L3NvYy9jbG9jay1jb250cm9sbGVyQDEzMDAwMDAiOw0KICAgICAgICAgICAg
ICAgIHVzYjAgPSAiL3NvYy91c2JAMzEwMDAwMCI7DQogICAgICAgICAgICAg
ICAgdXNiMSA9ICIvc29jL3VzYkAzMTEwMDAwIjsNCiAgICAgICAgICAgICAg
ICBmc3BpID0gIi9zb2Mvc3BpQDIwYzAwMDAiOw0KICAgICAgICAgICAgICAg
IGkyYzAgPSAiL3NvYy9pMmNAMjAwMDAwMCI7DQogICAgICAgICAgICAgICAg
c2wyOGNwbGQgPSAiL3NvYy9pMmNAMjAwMDAwMC9zbDI4Y3BsZEA0YSI7DQog
ICAgICAgICAgICAgICAgcHdtMCA9ICIvc29jL2kyY0AyMDAwMDAwL3NsMjhj
cGxkQDRhL3B3bTBAYyI7DQogICAgICAgICAgICAgICAgcHdtMSA9ICIvc29j
L2kyY0AyMDAwMDAwL3NsMjhjcGxkQDRhL3B3bTFAZSI7DQogICAgICAgICAg
ICAgICAgY3BsZF9ncGlvMCA9ICIvc29jL2kyY0AyMDAwMDAwL3NsMjhjcGxk
QDRhL2dwaW8wQDEwIjsNCiAgICAgICAgICAgICAgICBjcGxkX2dwaW8xID0g
Ii9zb2MvaTJjQDIwMDAwMDAvc2wyOGNwbGRANGEvZ3BpbzFAMTUiOw0KICAg
ICAgICAgICAgICAgIGNwbGRfZ3BvMCA9ICIvc29jL2kyY0AyMDAwMDAwL3Ns
MjhjcGxkQDRhL2dwbzBAMWEiOw0KICAgICAgICAgICAgICAgIGNwbGRfZ3Bp
MCA9ICIvc29jL2kyY0AyMDAwMDAwL3NsMjhjcGxkQDRhL2dwaTBAMWIiOw0K
ICAgICAgICAgICAgICAgIGkyYzEgPSAiL3NvYy9pMmNAMjAxMDAwMCI7DQog
ICAgICAgICAgICAgICAgaTJjMiA9ICIvc29jL2kyY0AyMDIwMDAwIjsNCiAg
ICAgICAgICAgICAgICBpMmMzID0gIi9zb2MvaTJjQDIwMzAwMDAiOw0KICAg
ICAgICAgICAgICAgIGkyYzQgPSAiL3NvYy9pMmNAMjA0MDAwMCI7DQogICAg
ICAgICAgICAgICAgaTJjNSA9ICIvc29jL2kyY0AyMDUwMDAwIjsNCiAgICAg
ICAgICAgICAgICBpMmM2ID0gIi9zb2MvaTJjQDIwNjAwMDAiOw0KICAgICAg
ICAgICAgICAgIGkyYzcgPSAiL3NvYy9pMmNAMjA3MDAwMCI7DQogICAgICAg
ICAgICAgICAgZHNwaTAgPSAiL3NvYy9zcGlAMjEwMDAwMCI7DQogICAgICAg
ICAgICAgICAgZHNwaTEgPSAiL3NvYy9zcGlAMjExMDAwMCI7DQogICAgICAg
ICAgICAgICAgZHNwaTIgPSAiL3NvYy9zcGlAMjEyMDAwMCI7DQogICAgICAg
ICAgICAgICAgY2FuMCA9ICIvc29jL2NhbkAyMTgwMDAwIjsNCiAgICAgICAg
ICAgICAgICBjYW4xID0gIi9zb2MvY2FuQDIxOTAwMDAiOw0KICAgICAgICAg
ICAgICAgIGR1YXJ0MCA9ICIvc29jL3NlcmlhbEAyMWMwNTAwIjsNCiAgICAg
ICAgICAgICAgICBkdWFydDEgPSAiL3NvYy9zZXJpYWxAMjFjMDYwMCI7DQog
ICAgICAgICAgICAgICAgbHB1YXJ0MCA9ICIvc29jL3NlcmlhbEAyMjYwMDAw
IjsNCiAgICAgICAgICAgICAgICBscHVhcnQxID0gIi9zb2Mvc2VyaWFsQDIy
NzAwMDAiOw0KICAgICAgICAgICAgICAgIGxwdWFydDIgPSAiL3NvYy9zZXJp
YWxAMjI4MDAwMCI7DQogICAgICAgICAgICAgICAgbHB1YXJ0MyA9ICIvc29j
L3NlcmlhbEAyMjkwMDAwIjsNCiAgICAgICAgICAgICAgICBscHVhcnQ0ID0g
Ii9zb2Mvc2VyaWFsQDIyYTAwMDAiOw0KICAgICAgICAgICAgICAgIGxwdWFy
dDUgPSAiL3NvYy9zZXJpYWxAMjJiMDAwMCI7DQogICAgICAgICAgICAgICAg
ZWRtYTAgPSAiL3NvYy9kbWEtY29udHJvbGxlckAyMmMwMDAwIjsNCiAgICAg
ICAgICAgICAgICBncGlvMSA9ICIvc29jL2dwaW9AMjMwMDAwMCI7DQogICAg
ICAgICAgICAgICAgZ3BpbzIgPSAiL3NvYy9ncGlvQDIzMTAwMDAiOw0KICAg
ICAgICAgICAgICAgIGdwaW8zID0gIi9zb2MvZ3Bpb0AyMzIwMDAwIjsNCiAg
ICAgICAgICAgICAgICBjcnlwdG8gPSAiL3NvYy9jcnlwdG9AODAwMDAwMCI7
DQogICAgICAgICAgICAgICAgc2VjX2pyMCA9ICIvc29jL2NyeXB0b0A4MDAw
MDAwL2pyQDEwMDAwIjsNCiAgICAgICAgICAgICAgICBzZWNfanIxID0gIi9z
b2MvY3J5cHRvQDgwMDAwMDAvanJAMjAwMDAiOw0KICAgICAgICAgICAgICAg
IHNlY19qcjIgPSAiL3NvYy9jcnlwdG9AODAwMDAwMC9qckAzMDAwMCI7DQog
ICAgICAgICAgICAgICAgc2VjX2pyMyA9ICIvc29jL2NyeXB0b0A4MDAwMDAw
L2pyQDQwMDAwIjsNCiAgICAgICAgICAgICAgICBjbHVzdGVyMV9jb3JlMF93
YXRjaGRvZyA9ICIvc29jL3dkdEBjMDAwMDAwIjsNCiAgICAgICAgICAgICAg
ICBjbHVzdGVyMV9jb3JlMV93YXRjaGRvZyA9ICIvc29jL3dkdEBjMDEwMDAw
IjsNCiAgICAgICAgICAgICAgICBlc2RoYyA9ICIvc29jL21tY0AyMTQwMDAw
IjsNCiAgICAgICAgICAgICAgICBlc2RoYzEgPSAiL3NvYy9tbWNAMjE1MDAw
MCI7DQogICAgICAgICAgICAgICAgc2F0YSA9ICIvc29jL3NhdGFAMzIwMDAw
MCI7DQogICAgICAgICAgICAgICAgZW5ldGNfcG9ydDAgPSAiL3NvYy9wY2ll
QDFmMDAwMDAwMC9ldGhlcm5ldEAwLDAiOw0KICAgICAgICAgICAgICAgIGVu
ZXRjX3BvcnQxID0gIi9zb2MvcGNpZUAxZjAwMDAwMDAvZXRoZXJuZXRAMCwx
IjsNCiAgICAgICAgICAgICAgICBlbmV0Y19wb3J0MiA9ICIvc29jL3BjaWVA
MWYwMDAwMDAwL2V0aGVybmV0QDAsMiI7DQogICAgICAgICAgICAgICAgZW5l
dGNfbWRpb19wZjMgPSAiL3NvYy9wY2llQDFmMDAwMDAwMC9tZGlvQDAsMyI7
DQogICAgICAgICAgICAgICAgcGh5MCA9ICIvc29jL3BjaWVAMWYwMDAwMDAw
L21kaW9AMCwzL2V0aGVybmV0LXBoeUA1IjsNCiAgICAgICAgICAgICAgICBw
aHkxID0gIi9zb2MvcGNpZUAxZjAwMDAwMDAvbWRpb0AwLDMvZXRoZXJuZXQt
cGh5QDQiOw0KICAgICAgICAgICAgICAgIHN3aXRjaF9wb3J0MCA9ICIvc29j
L3BjaWVAMWYwMDAwMDAwL3N3aXRjaEAwLDUvcG9ydHMvcG9ydEAwIjsNCiAg
ICAgICAgICAgICAgICBzd2l0Y2hfcG9ydDEgPSAiL3NvYy9wY2llQDFmMDAw
MDAwMC9zd2l0Y2hAMCw1L3BvcnRzL3BvcnRAMSI7DQogICAgICAgICAgICAg
ICAgc3dpdGNoX3BvcnQyID0gIi9zb2MvcGNpZUAxZjAwMDAwMDAvc3dpdGNo
QDAsNS9wb3J0cy9wb3J0QDIiOw0KICAgICAgICAgICAgICAgIHN3aXRjaF9w
b3J0MyA9ICIvc29jL3BjaWVAMWYwMDAwMDAwL3N3aXRjaEAwLDUvcG9ydHMv
cG9ydEAzIjsNCiAgICAgICAgICAgICAgICBzd2l0Y2hfcG9ydDQgPSAiL3Nv
Yy9wY2llQDFmMDAwMDAwMC9zd2l0Y2hAMCw1L3BvcnRzL3BvcnRANCI7DQog
ICAgICAgICAgICAgICAgc3dpdGNoX3BvcnQ1ID0gIi9zb2MvcGNpZUAxZjAw
MDAwMDAvc3dpdGNoQDAsNS9wb3J0cy9wb3J0QDUiOw0KICAgICAgICAgICAg
ICAgIGVuZXRjX3BvcnQzID0gIi9zb2MvcGNpZUAxZjAwMDAwMDAvZXRoZXJu
ZXRAMCw2IjsNCiAgICAgICAgICAgICAgICB0bXUgPSAiL3NvYy90bXVAMWYw
MDAwMCI7DQogICAgICAgICAgICAgICAgY29yZV9jbHVzdGVyX2FsZXJ0ID0g
Ii9zb2MvdGhlcm1hbC16b25lcy9jb3JlLWNsdXN0ZXIvdHJpcHMvY29yZS1j
bHVzdGVyLWFsZXJ0IjsNCiAgICAgICAgICAgICAgICBjb3JlX2NsdXN0ZXJf
Y3JpdCA9ICIvc29jL3RoZXJtYWwtem9uZXMvY29yZS1jbHVzdGVyL3RyaXBz
L2NvcmUtY2x1c3Rlci1jcml0IjsNCiAgICAgICAgICAgICAgICBkZHJfY29u
dHJvbGxlcl9hbGVydCA9ICIvc29jL3RoZXJtYWwtem9uZXMvZGRyLWNvbnRy
b2xsZXIvdHJpcHMvZGRyLWNvbnRyb2xsZXItYWxlcnQiOw0KICAgICAgICAg
ICAgICAgIGRkcl9jb250cm9sbGVyX2NyaXQgPSAiL3NvYy90aGVybWFsLXpv
bmVzL2Rkci1jb250cm9sbGVyL3RyaXBzL2Rkci1jb250cm9sbGVyLWNyaXQi
Ow0KICAgICAgICAgICAgICAgIHNtbXUgPSAiL3NvYy9pb21tdUA1MDAwMDAw
IjsNCiAgICAgICAgICAgICAgICBxZG1hID0gIi9zb2MvZG1hLWNvbnRyb2xs
ZXJAODM4MDAwMCI7DQogICAgICAgICAgICAgICAgc2FpMSA9ICIvc29jL2F1
ZGlvLWNvbnRyb2xsZXJAZjEwMDAwMCI7DQogICAgICAgICAgICAgICAgc2Fp
MiA9ICIvc29jL2F1ZGlvLWNvbnRyb2xsZXJAZjExMDAwMCI7DQogICAgICAg
ICAgICAgICAgc2FpMyA9ICIvc29jL2F1ZGlvLWNvbnRyb2xsZXJAZjEyMDAw
MCI7DQogICAgICAgICAgICAgICAgc2FpNCA9ICIvc29jL2F1ZGlvLWNvbnRy
b2xsZXJAZjEzMDAwMCI7DQogICAgICAgICAgICAgICAgc2FpNSA9ICIvc29j
L2F1ZGlvLWNvbnRyb2xsZXJAZjE0MDAwMCI7DQogICAgICAgICAgICAgICAg
c2FpNiA9ICIvc29jL2F1ZGlvLWNvbnRyb2xsZXJAZjE1MDAwMCI7DQogICAg
ICAgICAgICAgICAgcmNwbSA9ICIvc29jL3JjcG1AMWUzNDA0MCI7DQogICAg
ICAgICAgICAgICAgZnRtX2FsYXJtMCA9ICIvc29jL3RpbWVyQDI4MDAwMDAi
Ow0KICAgICAgICAgICAgICAgIGRpc3BsYXkwID0gIi9tYWxpZHBAZjA4MDAw
MCI7DQogICAgICAgICAgICAgICAgZHAwX291dCA9ICIvbWFsaWRwQGYwODAw
MDAvcG9ydC9lbmRwb2ludCI7DQogICAgICAgICAgICAgICAgZGlzcGxheTEg
PSAiL2hkcEBmMjAwMDAwIjsNCiAgICAgICAgICAgICAgICBkcDFfb3V0ID0g
Ii9oZHBAZjIwMDAwMC9wb3J0L2VuZHBvaW50IjsNCiAgICAgICAgICAgICAg
ICBlbW1jX3B3cnNlcSA9ICIvZW1tYy1wd3JzZXEiOw0KICAgICAgICAgICAg
ICAgIGxpZF9zdyA9ICIvYnV0dG9uczEvbGlkX3N3aXRjaCI7DQogICAgICAg
IH07DQp9Ow0K

--8323329-2110900997-1605656680=:438--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 00:51:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 00:51:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29321.58637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfBgY-0006he-5k; Wed, 18 Nov 2020 00:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29321.58637; Wed, 18 Nov 2020 00:50:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfBgY-0006hX-27; Wed, 18 Nov 2020 00:50:58 +0000
Received: by outflank-mailman (input) for mailman id 29321;
 Wed, 18 Nov 2020 00:50:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hmm/=EY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfBgV-0006hS-S7
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 00:50:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69f4ff4f-7185-4c50-b47d-f743f970d372;
 Wed, 18 Nov 2020 00:50:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 41EC824198;
 Wed, 18 Nov 2020 00:50:53 +0000 (UTC)
X-Inumbo-ID: 69f4ff4f-7185-4c50-b47d-f743f970d372
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605660653;
	bh=JOIEgFpCWDzYiqatKGkd6htz5XfTnsJ+CkRtWN96/x4=;
	h=From:To:Cc:Subject:Date:From;
	b=K1P5KIT3EAQYVVIagqDjTWSOdaceRWcnCrHsk6gGiW/OLe+8CNWK+pOKHQx7QePPm
	 9iasMeOVIA4qeFbpQodekM3urWQGbnjzG5FhHf4npjHAbtWV4/W8DRFfWCWY6uynIw
	 zOTP3SIQtx7fqznWYQ3JOMpQRC9L03qqb2MVVkwU=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org
Subject: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
Date: Tue, 17 Nov 2020 16:50:51 -0800
Message-Id: <20201118005051.26115-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

A recent thread [1] has exposed a couple of issues with our current way
of handling EXPERT.

1) It is not obvious that "Configure standard Xen features (expert
users)" is actually the famous EXPERT we keep talking about on
xen-devel.

2) It is not obvious when we need to enable EXPERT to get a specific
feature.

In particular, if you want to enable ACPI support so that you can boot
Xen on an ACPI platform, you have to enable EXPERT first. But this is
far from clear when searching the kconfig menu (type '/' and "ACPI"):
nothing in the option's description tells you that EXPERT must be
enabled to get the option.

So this patch makes things easier by doing the following:

- introduce a new kconfig option, UNSUPPORTED, whose purpose is clearly
  to enable UNSUPPORTED features as defined by SUPPORT.md

- change EXPERT dependencies to UNSUPPORTED where it makes sense, while
  keeping the EXPERT dependency for options genuinely aimed at expert
  users

- tag unsupported features by adding (UNSUPPORTED) to the one-line
  description

- clarify the EXPERT one-line description
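
The resulting pattern for an unsupported option can be sketched as a
minimal Kconfig fragment (SOME_FEATURE is a placeholder for
illustration, not an option from this patch):

```kconfig
config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	default n

config SOME_FEATURE
	# The prompt is only visible once UNSUPPORTED is enabled, and the
	# "(UNSUPPORTED)" tag in the one-line description makes the
	# support status obvious when searching with '/'.
	bool "Some feature support (UNSUPPORTED)" if UNSUPPORTED
	default n
```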

[1] https://marc.info/?l=xen-devel&m=160333101228981

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: andrew.cooper3@citrix.com
CC: george.dunlap@citrix.com
CC: iwj@xenproject.org
CC: jbeulich@suse.com
CC: julien@xen.org
CC: wl@xen.org

---
Changes in v2:
- introduce UNSUPPORTED as a separate new option
- don't switch all EXPERT options to UNSUPPORTED
---
 xen/Kconfig              | 11 ++++++++++-
 xen/arch/arm/Kconfig     | 10 +++++-----
 xen/arch/x86/Kconfig     |  8 ++++----
 xen/common/Kconfig       |  4 ++--
 xen/common/sched/Kconfig |  6 +++---
 5 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/xen/Kconfig b/xen/Kconfig
index 34c318bfa2..59400c4788 100644
--- a/xen/Kconfig
+++ b/xen/Kconfig
@@ -34,8 +34,17 @@ config DEFCONFIG_LIST
 	option defconfig_list
 	default ARCH_DEFCONFIG
 
+config UNSUPPORTED
+	bool "Configure UNSUPPORTED features"
+	default n
+	help
+	  This option allows unsupported Xen options to be enabled, which
+	  includes non-security-supported, experimental, and tech preview
+	  features as defined by SUPPORT.md. Xen binaries built with this
+	  option enabled are not security supported.
+
 config EXPERT
-	bool "Configure standard Xen features (expert users)"
+	bool "Configure EXPERT features"
 	help
 	  This option allows certain base Xen options and settings
 	  to be disabled or tweaked. This is for specialized environments
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index f938dd21bd..5981e7380d 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -32,7 +32,7 @@ menu "Architecture Features"
 source "arch/Kconfig"
 
 config ACPI
-	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
+	bool "ACPI (Advanced Configuration and Power Interface) Support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM_64
 	---help---
 
@@ -49,7 +49,7 @@ config GICV3
 	  If unsure, say Y
 
 config HAS_ITS
-        bool "GICv3 ITS MSI controller support" if EXPERT
+        bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
         depends on GICV3 && !NEW_VGIC
 
 config HVM
@@ -79,7 +79,7 @@ config SBSA_VUART_CONSOLE
 	  SBSA Generic UART implements a subset of ARM PL011 UART.
 
 config ARM_SSBD
-	bool "Speculative Store Bypass Disable" if EXPERT
+	bool "Speculative Store Bypass Disable (UNSUPPORTED)" if UNSUPPORTED
 	depends on HAS_ALTERNATIVE
 	default y
 	help
@@ -89,7 +89,7 @@ config ARM_SSBD
 	  If unsure, say Y.
 
 config HARDEN_BRANCH_PREDICTOR
-	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+	bool "Harden the branch predictor against aliasing attacks (UNSUPPORTED)" if UNSUPPORTED
 	default y
 	help
 	  Speculation attacks against some high-performance processors rely on
@@ -106,7 +106,7 @@ config HARDEN_BRANCH_PREDICTOR
 	  If unsure, say Y.
 
 config TEE
-	bool "Enable TEE mediators support" if EXPERT
+	bool "Enable TEE mediators support (UNSUPPORTED)" if UNSUPPORTED
 	default n
 	help
 	  This option enables generic TEE mediators support. It allows guests
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa6ad..d4e20e9d31 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -102,8 +102,8 @@ config HVM
 	  If unsure, say Y.
 
 config XEN_SHSTK
-	bool "Supervisor Shadow Stacks"
-	depends on HAS_AS_CET_SS && EXPERT
+	bool "Supervisor Shadow Stacks (UNSUPPORTED)"
+	depends on HAS_AS_CET_SS && UNSUPPORTED
 	default y
 	---help---
 	  Control-flow Enforcement Technology (CET) is a set of features in
@@ -165,7 +165,7 @@ config HVM_FEP
 	  If unsure, say N.
 
 config TBOOT
-	bool "Xen tboot support" if EXPERT
+	bool "Xen tboot support (UNSUPPORTED)" if UNSUPPORTED
 	default y if !PV_SHIM_EXCLUSIVE
 	select CRYPTO
 	---help---
@@ -251,7 +251,7 @@ config HYPERV_GUEST
 endif
 
 config MEM_SHARING
-	bool "Xen memory sharing support" if EXPERT
+	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
 	depends on HVM
 
 endmenu
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3e2cf25088..beed507727 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -151,7 +151,7 @@ config KEXEC
 	  If unsure, say Y.
 
 config EFI_SET_VIRTUAL_ADDRESS_MAP
-    bool "EFI: call SetVirtualAddressMap()" if EXPERT
+    bool "EFI: call SetVirtualAddressMap() (UNSUPPORTED)" if UNSUPPORTED
     ---help---
       Call EFI SetVirtualAddressMap() runtime service to setup memory map for
       further runtime services. According to UEFI spec, it isn't strictly
@@ -272,7 +272,7 @@ config LATE_HWDOM
 	  If unsure, say N.
 
 config ARGO
-	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
+	bool "Argo: hypervisor-mediated interdomain communication (UNSUPPORTED)" if UNSUPPORTED
 	---help---
 	  Enables a hypercall for domains to ask the hypervisor to perform
 	  data transfer of messages between domains.
diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
index 61231aacaa..94c9e20139 100644
--- a/xen/common/sched/Kconfig
+++ b/xen/common/sched/Kconfig
@@ -15,7 +15,7 @@ config SCHED_CREDIT2
 	  optimized for lower latency and higher VM density.
 
 config SCHED_RTDS
-	bool "RTDS scheduler support (EXPERIMENTAL)"
+	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
 	default y
 	---help---
 	  The RTDS scheduler is a soft and firm real-time scheduler for
@@ -23,14 +23,14 @@ config SCHED_RTDS
 	  in the cloud, and general low-latency workloads.
 
 config SCHED_ARINC653
-	bool "ARINC653 scheduler support (EXPERIMENTAL)"
+	bool "ARINC653 scheduler support (UNSUPPORTED)" if UNSUPPORTED
 	default DEBUG
 	---help---
 	  The ARINC653 scheduler is a hard real-time scheduler for single
 	  cores, targeted for avionics, drones, and medical devices.
 
 config SCHED_NULL
-	bool "Null scheduler support (EXPERIMENTAL)"
+	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
 	default y
 	---help---
 	  The null scheduler is a static, zero overhead scheduler,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 05:12:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 05:12:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29337.58661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfFkp-0002JL-VA; Wed, 18 Nov 2020 05:11:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29337.58661; Wed, 18 Nov 2020 05:11:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfFkp-0002JC-Nb; Wed, 18 Nov 2020 05:11:39 +0000
Received: by outflank-mailman (input) for mailman id 29337;
 Wed, 18 Nov 2020 05:11:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sH5o=EY=strugglers.net=andy@srs-us1.protection.inumbo.net>)
 id 1kfFko-0002J7-Mc
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 05:11:38 +0000
Received: from mail.bitfolk.com (unknown [2001:ba8:1f1:f019::25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce0ca390-75cf-4278-bc02-42c9bbc3f84b;
 Wed, 18 Nov 2020 05:11:37 +0000 (UTC)
Received: from andy by mail.bitfolk.com with local (Exim 4.84_2)
 (envelope-from <andy@strugglers.net>)
 id 1kfFkj-0007kk-Nj; Wed, 18 Nov 2020 05:11:33 +0000
X-Inumbo-ID: ce0ca390-75cf-4278-bc02-42c9bbc3f84b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=bitfolk.com; s=alpha;
	h=In-Reply-To:Content-Transfer-Encoding:Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date; bh=N2+2+l5W3BApAH8eEzLL4aB4J2YcCxUUhW6wjo/KqgM=;
	b=n5f6TKpiPS0jh/IwZboS11xoxj6EVzAo+PcfMrwusaBLx2K4Lqh3qL5j/ut1fVHCivaXr0IW1wWwzdEILlJQYcSZba5U8Ij42WLz2D4SyLvAchG4XyBxz9ZpOrhld7Eo6H15W6mJLXcTedFeInZW4eDpPrLZONIQGwb7+tB+FH0X5XhLDR9pg/ub0FEEivPE/ctL9Cqky5Z3MwCZdtwE5+V4KLr1aJ3pWjfiwrowhfk8doeBOJVwXMWJQrxt+Pb7bSa8X3dyizoXq1Q//LGqv8PWyPaaXToXv/yWttVrCGmrW4j/XnNseD1tUBnFyUmGoy0VNJj/U6anUtzftCF35g==;
Date: Wed, 18 Nov 2020 05:11:33 +0000
From: Andy Smith <andy@strugglers.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Michael Young <m.a.young@durham.ac.uk>, xen-devel@lists.xenproject.org
Subject: Re: zstd compressed kernels
Message-ID: <20201118051133.GV16071@bitfolk.com>
References: <1abcd9d-428f-93d-b63d-996ef4592723@austen3.home>
 <71d36766-1258-0a79-02ff-d888a41e431e@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <71d36766-1258-0a79-02ff-d888a41e431e@citrix.com>
OpenPGP: id=BF15490B; url=http://strugglers.net/~andy/pubkey.asc
X-URL: http://strugglers.net/wiki/User:Andy
User-Agent: Mutt/1.5.23 (2014-03-12)
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: andy@strugglers.net
X-SA-Exim-Scanned: No (on mail.bitfolk.com); SAEximRunCond expanded to false

On Tue, Nov 17, 2020 at 08:48:25PM +0000, Andrew Cooper wrote:
> For domU's, tools/libs/guest/xg_dom_bzimageloader.c and
> xc_dom_probe_bzimage_kernel()
> 
> (Wow this plumbing is ugly and in need of some rationalisation...)

Though not part of Xen, the PV part of grub could also do with some
love, as it is also missing lz4 kernel support. This has been
particularly painful since Ubuntu switched to lz4 kernels with the
19.10 release.

I understand from Jürgen, though, that grub's PVH support just uses
the normal i386 loader, so if grub supports a bzImage on bare metal
it should also do so under PVH; that's an option. Certainly that
works for lz4, anyway.

Cheers,
Andy


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 07:13:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 07:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29353.58679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfHdz-0004xk-RH; Wed, 18 Nov 2020 07:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29353.58679; Wed, 18 Nov 2020 07:12:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfHdz-0004xd-MY; Wed, 18 Nov 2020 07:12:43 +0000
Received: by outflank-mailman (input) for mailman id 29353;
 Wed, 18 Nov 2020 07:12:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3RgB=EY=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1kfHdz-0004xY-AO
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 07:12:43 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a7a83d6f-1fd1-417e-83c5-9960a96d6108;
 Wed, 18 Nov 2020 07:12:41 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F32FF101E;
 Tue, 17 Nov 2020 23:12:40 -0800 (PST)
Received: from [10.57.22.240] (unknown [10.57.22.240])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A60523F70D;
 Tue, 17 Nov 2020 23:12:39 -0800 (PST)
X-Inumbo-ID: a7a83d6f-1fd1-417e-83c5-9960a96d6108
Subject: Re: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com
References: <20201116121140.26763-1-michal.orzel@arm.com>
 <c7475d91-c956-3e2c-4445-ef5c005ff465@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <d6d632b2-eba7-1746-d398-2bd539a51caf@arm.com>
Date: Wed, 18 Nov 2020 08:12:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c7475d91-c956-3e2c-4445-ef5c005ff465@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi Julien,

On 17.11.2020 18:30, Julien Grall wrote:
> Hi Michal,
> 
> On 16/11/2020 12:11, Michal Orzel wrote:
>> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
>> if a virtual address for a cacheable mapping of a location is being
>> accessed by a core while another core is remapping the virtual
>> address to a new physical page using the recommended break-before-make
>> sequence, then under very rare circumstances TLBI+DSB completes before
>> a read using the translation being invalidated has been observed by
>> other observers. The workaround repeats the TLBI+DSB operation
>> for all the TLB flush operations on purpose.
> 
> Sorry for nitpicking, but the commit message should contain enough information for a future reader to understand why this was done "on purpose".
> 
> So how about:
> 
> "The workaround repeats the TLBI+DSB operation for all the TLB flush operations. While this is not strictly necessary, we don't want to take any risk."
> 
> I can fix it on commit.
> 
Ok, I agree to add this extra clarification.
Please go ahead and fix it on commit.

Thanks,
Michal
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
>>
>> Signed-off-by: Michal Orzel <michal.orzel@arm.com>
>> ---
>>   docs/misc/arm/silicon-errata.txt     |  2 ++
>>   xen/arch/arm/Kconfig                 | 23 +++++++++++++++++++++
>>   xen/arch/arm/cpuerrata.c             | 14 +++++++++++++
>>   xen/include/asm-arm/arm64/flushtlb.h | 30 +++++++++++++++++++---------
>>   xen/include/asm-arm/cpufeature.h     |  3 ++-
>>   5 files changed, 62 insertions(+), 10 deletions(-)
>>
>> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
>> index 552c4151d3..d183ba543f 100644
>> --- a/docs/misc/arm/silicon-errata.txt
>> +++ b/docs/misc/arm/silicon-errata.txt
>> @@ -53,5 +53,7 @@ stable hypervisors.
>>   | ARM            | Cortex-A72      | #853709         | N/A                     |
>>   | ARM            | Cortex-A73      | #858921         | ARM_ERRATUM_858921      |
>>   | ARM            | Cortex-A76      | #1165522        | N/A                     |
>> +| ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807   |
>>   | ARM            | Neoverse-N1     | #1165522        | N/A                     |
>> +| ARM            | Neoverse-N1     | #1286807        | ARM64_ERRATUM_1286807   |
>>   | ARM            | MMU-500         | #842869         | N/A                     |
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index f938dd21bd..8171b8d04a 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -244,6 +244,29 @@ config ARM_ERRATUM_858921
>>           If unsure, say Y.
>>   +config ARM64_WORKAROUND_REPEAT_TLBI
>> +    bool
>> +
>> +config ARM64_ERRATUM_1286807
>> +    bool "Cortex-A76/Neoverse-N1: 1286807: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
>> +    default y
>> +    select ARM64_WORKAROUND_REPEAT_TLBI
>> +    depends on ARM_64
>> +    help
>> +      This option adds a workaround for ARM Cortex-A76/Neoverse-N1 erratum 1286807.
>> +
>> +      On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0), if a virtual
>> +      address for a cacheable mapping of a location is being
>> +      accessed by a core while another core is remapping the virtual
>> +      address to a new physical page using the recommended
>> +      break-before-make sequence, then under very rare circumstances
>> +      TLBI+DSB completes before a read using the translation being
>> +      invalidated has been observed by other observers. The
>> +      workaround repeats the TLBI+DSB operation for all the TLB flush
>> +      operations on purpose.
> If you agree with what I wrote above, I will update this section and ...
> 
>> +
>> +      If unsure, say Y.
>> +
>>   endmenu
>>     config ARM64_HARDEN_BRANCH_PREDICTOR
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 567911d380..cb4795beec 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -424,6 +424,20 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>>                      (1 << MIDR_VARIANT_SHIFT) | 2),
>>       },
>>   #endif
>> +#ifdef CONFIG_ARM64_ERRATUM_1286807
>> +    {
>> +        /* Cortex-A76 r0p0 - r3p0 */
>> +        .desc = "ARM erratum 1286807",
>> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
>> +        MIDR_RANGE(MIDR_CORTEX_A76, 0, 3 << MIDR_VARIANT_SHIFT),
>> +    },
>> +    {
>> +        /* Neoverse-N1 r0p0 - r3p0 */
>> +        .desc = "ARM erratum 1286807",
>> +        .capability = ARM64_WORKAROUND_REPEAT_TLBI,
>> +        MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 3 << MIDR_VARIANT_SHIFT),
>> +    },
>> +#endif
>>   #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR
>>       {
>>           .capability = ARM_HARDEN_BRANCH_PREDICTOR,
>> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
>> index ceec59542e..8f2abfaf1d 100644
>> --- a/xen/include/asm-arm/arm64/flushtlb.h
>> +++ b/xen/include/asm-arm/arm64/flushtlb.h
>> @@ -9,6 +9,12 @@
>>    * DSB ISH          // Ensure the TLB invalidation has completed
>>    * ISB              // See explanation below
>>    *
>> + * ARM64_WORKAROUND_REPEAT_TLBI:
>> + * Modification of the translation table for a virtual address might lead to
>> + * read-after-read ordering violation.
>> + * The workaround repeats TLBI+DSB operation for all the TLB flush operations
>> + * on purpose.
> 
> ... this section while committing.
> 
> Cheers,
> 

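[Editor's note: the repeat-TLBI workaround discussed in this thread boils down to issuing the complete TLBI+DSB sequence a second time when the erratum capability is detected at boot. Below is a minimal host-side sketch of that control flow, not the real Xen code: `tlbi_dsb`, `repeat_tlbi_workaround`, and `flush_tlb_all` are invented stand-ins, and the actual aarch64 TLBI/DSB instructions are replaced by a counter so the sketch compiles and runs anywhere.]

```c
#include <stdbool.h>

/* Count of TLBI+DSB sequences "issued"; a stand-in for the real
 * aarch64 instructions so this compiles on any host. */
static int tlbi_issued = 0;

/* Would be set at boot when an affected Cortex-A76/Neoverse-N1
 * (r0p0 - r3p0) is detected, i.e. when the hypothetical analogue of
 * ARM64_WORKAROUND_REPEAT_TLBI applies. */
static bool repeat_tlbi_workaround = false;

static void tlbi_dsb(void)
{
    /* On real hardware: TLBI <op>; DSB ISH */
    tlbi_issued++;
}

/* The workaround is simply: run the complete invalidate+barrier
 * sequence a second time on affected cores. */
static void flush_tlb_all(void)
{
    tlbi_dsb();
    if (repeat_tlbi_workaround)
        tlbi_dsb();
}
```

On an unaffected core one flush issues one TLBI+DSB sequence; with the workaround flag set, the same call issues it twice, which is the "repeats the TLBI+DSB operation" behaviour described in the commit message.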

From xen-devel-bounces@lists.xenproject.org Wed Nov 18 07:27:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 07:27:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29360.58694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfHrn-00066e-4V; Wed, 18 Nov 2020 07:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29360.58694; Wed, 18 Nov 2020 07:26:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfHrn-00066X-0Z; Wed, 18 Nov 2020 07:26:59 +0000
Received: by outflank-mailman (input) for mailman id 29360;
 Wed, 18 Nov 2020 07:26:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F1t9=EY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kfHrm-00065y-98
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 07:26:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6333a72-65fa-4862-8e26-47908474495c;
 Wed, 18 Nov 2020 07:26:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfHre-0001Pj-Aj; Wed, 18 Nov 2020 07:26:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfHre-0006BK-0y; Wed, 18 Nov 2020 07:26:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfHre-0006BW-0T; Wed, 18 Nov 2020 07:26:50 +0000
X-Inumbo-ID: d6333a72-65fa-4862-8e26-47908474495c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0O6mj1dB6W8PRocZsnv4Y7d+XeB5G5N+yGhVgWl7u+E=; b=LJ856Zj5A4Oipm5zDaIY6LWa+E
	akxw6VL674L35hmALKQRhdv3Iq/bdqF2D3Vt77VWCcCemiw/akWxlsOGCyySqsskcY8p4y7gbf6+Q
	X6ei6IJipbIcHnpIrJBedWbvySBPz/X7Sh2LLkAO/hIJDD9VWlVfZMa0LeH5Yll0GojA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156841-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156841: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fa8ee0d9ab95c9350b8b84574824d9a384a9f7d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 07:26:50 +0000

flight 156841 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0fa8ee0d9ab95c9350b8b84574824d9a384a9f7d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  109 days
Failing since        152366  2020-08-01 20:49:34 Z  108 days  180 attempts
Testing same since   156841  2020-11-17 20:09:03 Z    0 days    1 attempts

------------------------------------------------------------
3528 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 675049 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 07:36:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 07:36:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29367.58706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfI1F-00078H-5b; Wed, 18 Nov 2020 07:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29367.58706; Wed, 18 Nov 2020 07:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfI1F-00078A-2L; Wed, 18 Nov 2020 07:36:45 +0000
Received: by outflank-mailman (input) for mailman id 29367;
 Wed, 18 Nov 2020 07:36:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RdwY=EY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kfI1D-000785-Er
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 07:36:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 715adb5f-105f-4ad0-ad9e-c67475d7dfb2;
 Wed, 18 Nov 2020 07:36:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C8172ABDE;
 Wed, 18 Nov 2020 07:36:40 +0000 (UTC)
X-Inumbo-ID: 715adb5f-105f-4ad0-ad9e-c67475d7dfb2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605685000; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tLE/HnC7rZNXV53oPIX6KU3BLzFEqx2znH5VWSpChdU=;
	b=koYxI2fQLPfXvkK3S6UC36O7HO4NsLvMyDYEorZiERPxe6YuzR5fW10mevn/x5UausTw3g
	RuyG1TlET9dkWrQ6j46lFN6dMcgQoDSh1OFneBgwEeY9YntCa3uobbtGA1JALztqpOcfrt
	TKwmDpU9/kAJK1OH3gNX5q8BdyALOoo=
Subject: Re: [linux-linus test] 156841: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-156841-mainreport@xen.org>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <95256867-1745-fe54-1478-19047d30f1e5@suse.com>
Date: Wed, 18 Nov 2020 08:36:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <osstest-156841-mainreport@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CgUKZI0Whv5udxue3gnyFUBgPLexS42CE"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--CgUKZI0Whv5udxue3gnyFUBgPLexS42CE
Content-Type: multipart/mixed; boundary="7hMJ04CtJv25yg3ja8r8Y7WrKU8jHdHG4";
 protected-headers="v1"
From: Jürgen Groß <jgross@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
Message-ID: <95256867-1745-fe54-1478-19047d30f1e5@suse.com>
Subject: Re: [linux-linus test] 156841: regressions - FAIL
References: <osstest-156841-mainreport@xen.org>
In-Reply-To: <osstest-156841-mainreport@xen.org>

--7hMJ04CtJv25yg3ja8r8Y7WrKU8jHdHG4
Content-Type: multipart/mixed;
 boundary="------------2D9E3AC56554EB5B8429E686"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2D9E3AC56554EB5B8429E686
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.11.20 08:26, osstest service owner wrote:
> flight 156841 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156841/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332

I think many of these tests are failing because the Linux kernel no
longer supports 32-bit PV mode.

At least for kernels 5.9 and later, those tests should be skipped.
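A harness-side guard could be sketched roughly as below. This is purely an
illustrative assumption, not osstest code: the KVER value and the skip
message are made up, and a real harness would obtain the version of the
kernel under test from its own configuration.

```shell
#!/bin/sh
# Hypothetical sketch: skip 32-bit PV dom0 tests when the kernel under
# test is 5.9 or newer (upstream removed 32-bit PV guest support).
# KVER is hardcoded here for illustration only.
KVER="5.9.0"

# Split "major.minor.patch" with POSIX parameter expansion.
MAJOR=${KVER%%.*}        # "5"
REST=${KVER#*.}          # "9.0"
MINOR=${REST%%.*}        # "9"

if [ "$MAJOR" -gt 5 ] || { [ "$MAJOR" -eq 5 ] && [ "$MINOR" -ge 9 ]; }; then
    echo "skip: 32-bit PV not supported by Linux $KVER"
else
    echo "run test"
fi
```

The same major/minor comparison would apply to any i386 xen-install job
in the list above; only the version source differs.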


Juergen

> [...]


--------------2D9E3AC56554EB5B8429E686--

--7hMJ04CtJv25yg3ja8r8Y7WrKU8jHdHG4--



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 07:55:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 07:55:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29375.58718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIJR-0000d0-OW; Wed, 18 Nov 2020 07:55:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29375.58718; Wed, 18 Nov 2020 07:55:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIJR-0000ct-LK; Wed, 18 Nov 2020 07:55:33 +0000
Received: by outflank-mailman (input) for mailman id 29375;
 Wed, 18 Nov 2020 07:55:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F1t9=EY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kfIJQ-0000co-Tw
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 07:55:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf3135c3-2b8a-4b32-87da-ce7599c65044;
 Wed, 18 Nov 2020 07:55:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIJN-0001xW-0J; Wed, 18 Nov 2020 07:55:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIJM-00089H-ML; Wed, 18 Nov 2020 07:55:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIJM-0000zp-Lo; Wed, 18 Nov 2020 07:55:28 +0000
X-Inumbo-ID: bf3135c3-2b8a-4b32-87da-ce7599c65044
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=A4QIxcTgBLAp/vtac+M0T+fvT3Q+WVKaRDFcgZD8mJA=; b=RJLVujcb+MeZ51SG8VlHS8G+52
	VFAnR2GJCwFQN0mzizhU6HU9kiKvWEN0Pv5dOOXMpEANJfYhov1wnTfFS+DXhrHTs1jXN2XsxSVGX
	ko39ZbRqvltjSmEqvW7yowkJi2gW6JzrBqbs0d8CqYbzTzSsi7FttXlNkkDFxNMD+1PY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156840-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156840: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c8e5c4b246584da36694a3c259a7dbb8a7e7b1f3
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 07:55:28 +0000

flight 156840 qemu-mainline real [real]
flight 156850 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156840/
http://logs.test-lab.xenproject.org/osstest/logs/156850/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                c8e5c4b246584da36694a3c259a7dbb8a7e7b1f3
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   89 days
Failing since        152659  2020-08-21 14:07:39 Z   88 days  189 attempts
Testing same since   156840  2020-11-17 20:06:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 66425 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:01:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29389.58735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIOf-00028w-Pt; Wed, 18 Nov 2020 08:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29389.58735; Wed, 18 Nov 2020 08:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIOf-00028p-Mk; Wed, 18 Nov 2020 08:00:57 +0000
Received: by outflank-mailman (input) for mailman id 29389;
 Wed, 18 Nov 2020 08:00:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F1t9=EY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kfIOe-00028H-2G
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:00:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 716684a9-1e8a-4097-a1bc-b8f9c0e9bf00;
 Wed, 18 Nov 2020 08:00:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIOX-0002cl-NG; Wed, 18 Nov 2020 08:00:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIOX-00009u-E4; Wed, 18 Nov 2020 08:00:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIOX-0005Bq-Da; Wed, 18 Nov 2020 08:00:49 +0000
X-Inumbo-ID: 716684a9-1e8a-4097-a1bc-b8f9c0e9bf00
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hh5wLoKI66qm8f8IAqp2GNGwEGt0U/I3QloLhPJG7JU=; b=ewWgyfZEBUgmWq0Ig8XhgQBxST
	ED4vh4iXWUf+3S30tKCLwc+k43/YGaqtUkCoPgXd9Bwl0GzGOntmLK6ZHesPwXtFak+qbOWSLTYGe
	xGeuA/BFOlkbBo8Dfs4LljILJLIc99qwfTTSUGQ1Z8HpxRqijt6OqbVpgC8n0uGdMIh4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156847-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156847: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=3fba30fc82a3f4286ec12e56c75b217e3a19abb0
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 08:00:49 +0000

flight 156847 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156847/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              3fba30fc82a3f4286ec12e56c75b217e3a19abb0
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  131 days
Failing since        151818  2020-07-11 04:18:52 Z  130 days  125 attempts
Testing same since   156847  2020-11-18 04:19:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 27719 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:02:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29393.58751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIQ4-0002Gx-70; Wed, 18 Nov 2020 08:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29393.58751; Wed, 18 Nov 2020 08:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIQ4-0002Gq-2e; Wed, 18 Nov 2020 08:02:24 +0000
Received: by outflank-mailman (input) for mailman id 29393;
 Wed, 18 Nov 2020 08:02:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F1t9=EY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kfIQ2-0002G9-8W
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:02:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4efa0bf5-f8b1-4b2f-9b0e-b1f2b7405c46;
 Wed, 18 Nov 2020 08:02:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIPw-0002fX-6b; Wed, 18 Nov 2020 08:02:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIPv-0000Fd-Uy; Wed, 18 Nov 2020 08:02:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfIPv-0005e6-UT; Wed, 18 Nov 2020 08:02:15 +0000
X-Inumbo-ID: 4efa0bf5-f8b1-4b2f-9b0e-b1f2b7405c46
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mg4WjYJISKE53xPbcp2TVDvQedj5E4wmU86ASkZ1ggQ=; b=M6wey2m4pq2khmcj0/dbttGBLC
	fNsbcUFnMs0O43wbG6W808coOvgwOC2TwXdSozeVzu23RcBfVXst4OL2vS+PvaE/QBUzl3Vai5f1X
	C8+Lo4j/NOj4yh62ZMwuhbYzSZlR8JSE3/0oNg8WdXQYUtToz54g4qV1NlCgRVv1GrtY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156849-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156849: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=404250c8f77d09077321766602c3118cec7f6ecd
X-Osstest-Versions-That:
    ovmf=e6a12a0fc817e26ac05e8301e89433c2367ff362
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 08:02:15 +0000

flight 156849 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156849/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 404250c8f77d09077321766602c3118cec7f6ecd
baseline version:
 ovmf                 e6a12a0fc817e26ac05e8301e89433c2367ff362

Last test of basis   156838  2020-11-17 19:39:47 Z    0 days
Testing same since   156849  2020-11-18 05:10:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sheng Wei <w.sheng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e6a12a0fc8..404250c8f7  404250c8f77d09077321766602c3118cec7f6ecd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:23:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:23:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29402.58763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIjv-0004E9-Tl; Wed, 18 Nov 2020 08:22:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29402.58763; Wed, 18 Nov 2020 08:22:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfIjv-0004E2-Qj; Wed, 18 Nov 2020 08:22:55 +0000
Received: by outflank-mailman (input) for mailman id 29402;
 Wed, 18 Nov 2020 08:22:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfIju-0004Dx-Oq
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:22:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ac2549a-b2b4-4cb7-b5e9-f314a49687a5;
 Wed, 18 Nov 2020 08:22:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4C6EAD0B;
 Wed, 18 Nov 2020 08:22:52 +0000 (UTC)
X-Inumbo-ID: 1ac2549a-b2b4-4cb7-b5e9-f314a49687a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605687773; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xkeVCKf9SSikvb345dSGfK17HF+2zTH1z30Ye44BN/0=;
	b=FQOWAjvolwWMmglmULAN21aN2qVgyRa2NiogiVBZxi65Iqq5Jvb0yhHqUnsNqhpIxEH+5J
	i0vN5sqgNZ1i3hc7JuXCFAFLkbYFoaLrGAUS9wB+3Of5Pb4NYDbVBfpiP7QQXQGUxZT0Yu
	RmeXvao7ORBesgMtsft2Twj7y6M3BBY=
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "committers@xenproject.org" <committers@xenproject.org>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
 <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
Date: Wed, 18 Nov 2020 09:22:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.11.2020 19:13, Julien Grall wrote:
> On 09/11/2020 16:38, Juergen Gross wrote:
>> Juergen Gross (3):
>>    xen/events: access last_priority and last_vcpu_id together
>>    xen/evtchn: rework per event channel lock
>>    xen/evtchn: revert 52e1fc47abc3a0123
> 
> While looking at the list of commits, I noticed that the first patch 
> hasn't been committed. They were all acked/reviewed, so I am a bit 
> puzzled why this was omitted...
> 
> I nearly missed this, as I was expecting the 3 patches to be committed 
> together. May I suggest that in the future we reply to the cover letter 
> and mention which patches are (or are not) committed?
> 
> Regarding patch #1, I will commit it tomorrow unless there are strong 
> objections against.

Without a clear outline of what would break with the present logic,
I had previously indicated that I'm not convinced of the change. This
isn't a strong objection, no, but I still wouldn't want to see my
name associated with it in such a case. Furthermore, I clearly view
this as not a backporting candidate, while the other two are (as I
indicated previously). Hence the latter two patches needed re-basing
ahead of the first one anyway, to ease the backports.

While I accept there are different views possible here, I also
don't think sending mail in such a case is a good use of resources.
The information about what was or was not committed is readily
available. Personally, I view such mails as at least very close to spam.

Irrespective of the above I'm sorry for any inconvenience caused.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:41:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29409.58775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ1q-00068w-HI; Wed, 18 Nov 2020 08:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29409.58775; Wed, 18 Nov 2020 08:41:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ1q-00068p-EC; Wed, 18 Nov 2020 08:41:26 +0000
Received: by outflank-mailman (input) for mailman id 29409;
 Wed, 18 Nov 2020 08:41:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RdwY=EY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kfJ1p-00068k-17
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:41:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67267c97-4aca-4604-ba16-8d575d54d979;
 Wed, 18 Nov 2020 08:41:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5E94BAE95;
 Wed, 18 Nov 2020 08:41:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605688883; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LpwAH/GDaRA0BAynstm4DBtmv+8jvkwy3qqO+caih7o=;
	b=d9GRlPgYwr4CylEPH07DgUA5VMDqWtL805c6+h8nMrZA3HDF8+Mh7i6Ctp4g8SVXbtLdIe
	ShHNKXm+SGc737Gq8tT6zJ57Y3MafTrqTOHjgBQakvijLeeWHNKzuAJFTk1UHzv8Xx4vYW
	Joh5ebKsqSPOpXNt0+Qso9ypQKZtyqM=
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "committers@xenproject.org" <committers@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
 <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
Message-ID: <4cb2e205-49e2-7dc6-9ae6-39e5335d5a66@suse.com>
Date: Wed, 18 Nov 2020 09:41:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="FXzOFNujxRUFWmQAKDJtTeDBpa00mZwMp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--FXzOFNujxRUFWmQAKDJtTeDBpa00mZwMp
Content-Type: multipart/mixed; boundary="sEOoyfPY7Z5a1OnwlcqO1aeXveYQYLC1J";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "committers@xenproject.org" <committers@xenproject.org>,
 xen-devel@lists.xenproject.org
Message-ID: <4cb2e205-49e2-7dc6-9ae6-39e5335d5a66@suse.com>
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
References: <20201109163826.13035-1-jgross@suse.com>
 <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
 <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
In-Reply-To: <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>

--sEOoyfPY7Z5a1OnwlcqO1aeXveYQYLC1J
Content-Type: multipart/mixed;
 boundary="------------0B6C86B437E196B9BA7EBD76"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------0B6C86B437E196B9BA7EBD76
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.11.20 09:22, Jan Beulich wrote:
> On 17.11.2020 19:13, Julien Grall wrote:
>> On 09/11/2020 16:38, Juergen Gross wrote:
>>> Juergen Gross (3):
>>>     xen/events: access last_priority and last_vcpu_id together
>>>     xen/evtchn: rework per event channel lock
>>>     xen/evtchn: revert 52e1fc47abc3a0123
>>
>> While looking at the list of commits, I noticed that the first patch
>> hasn't been committed. They were all acked/reviewed, so I am a bit
>> puzzled why this was omitted...
>>
>> I nearly missed it, as I was expecting the 3 patches to be committed
>> together. May I suggest that in the future we reply to the cover letter
>> and mention which patches are (or not) committed?
>>
>> Regarding patch #1, I will commit it tomorrow unless there are strong
>> objections against.
>
> Without a clear outline of what would break with the present logic,
> I had previously indicated I'm not convinced of the change. This
> isn't a strong objection, no, but I still wouldn't want to see my
> name associated with it in such a case. Furthermore I clearly view
> this as not a backporting candidate, while the other two are (as I
> did previously indicate). Hence the latter two patches wanted
> re-basing ahead of the first one anyway, to ease the backports.

Consider an NMI arriving during evtchn_fifo_set_pending(), between the
updates of last_vcpu_id and last_priority, while on another CPU a
concurrent evtchn_fifo_set_pending() is being executed. On that other
CPU, lock_old_queue() might return the wrong queue, as it will observe
the new last_vcpu_id but still the old last_priority value.


Juergen

--------------0B6C86B437E196B9BA7EBD76
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0B6C86B437E196B9BA7EBD76--

--sEOoyfPY7Z5a1OnwlcqO1aeXveYQYLC1J--

--FXzOFNujxRUFWmQAKDJtTeDBpa00mZwMp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+03jAFAwAAAAAACgkQsN6d1ii/Ey9t
CQf/bWpcfOx86E1Zj+iHp/oIvHSOGWJAu+TVpqOsWkgQG3uApnk03LtgjqZ+JeBj89tCI3XMxAPR
m1g92cD72l/zZQczkWZXHT73yL3q1WxQEKfqN3x/xDKm9ycOOZYDG0Sq4ibnjGHsL7eYAA/RhRgl
IXua0CFcMC5hTaT2V8cDYf6ynhEoGckWIqnhJ2rN/s84YAkxesoJpK1MX7RRPIGF3LI2esbhX2Ky
7J2SLZgSxrhi6vyC4Ug/dNzVudhwGyBdvhZ/oEWZjsLqn4VYsxc/+gMwbuf79cmeuPCeWyU8NIMY
TU0of/7QBTYoS4uk6b/3eJ8RjYt9zkDWBYBwCWdqNw==
=7oDf
-----END PGP SIGNATURE-----

--FXzOFNujxRUFWmQAKDJtTeDBpa00mZwMp--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:44:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:44:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29413.58787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ4g-0006KT-3H; Wed, 18 Nov 2020 08:44:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29413.58787; Wed, 18 Nov 2020 08:44:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ4g-0006KM-06; Wed, 18 Nov 2020 08:44:22 +0000
Received: by outflank-mailman (input) for mailman id 29413;
 Wed, 18 Nov 2020 08:44:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJ4e-0006KH-9h
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:44:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94311955-9a79-46a7-8e5d-12acd2e8ae4e;
 Wed, 18 Nov 2020 08:44:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 51D47B234;
 Wed, 18 Nov 2020 08:44:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605689058; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=H7YsdtOzuRsahnU7pbqNXL1yMgQZIY2Ji5YlxLRYyaY=;
	b=fu5E8dtiHunr2lNlPkaggzJoe80FjH24zRd8jhiDU3c2Zfg+UDhY4+5JHSQ4hCC3tQHWre
	w8LE9o6ZAlWQaUrzCkNLP5U1vM3LZqf2mtYxO5uVKR4qwiK/tX0NClWvL4/2Sd5tadb2e4
	hgdMhBD/JHeStj00Fpgv+mExsp6BiJ4=
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
To: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com
Cc: Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org,
 xen-devel@lists.xenproject.org
References: <20201118005051.26115-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com>
Date: Wed, 18 Nov 2020 09:44:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201118005051.26115-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 01:50, Stefano Stabellini wrote:
> 1) It is not obvious that "Configure standard Xen features (expert
> users)" is actually the famous EXPERT we keep talking about on xen-devel

Which can be addressed by simply changing the one prompt line.

> 2) It is not obvious when we need to enable EXPERT to get a specific
> feature
> 
> In particular if you want to enable ACPI support so that you can boot
> Xen on an ACPI platform, you have to enable EXPERT first. But searching
> through the kconfig menu it is really not clear (type '/' and "ACPI"):
> nothing in the description tells you that you need to enable EXPERT to
> get the option.

And what causes this to be different once you switch to UNSUPPORTED?

> So this patch makes things easier by doing two things:
> 
> - introduce a new kconfig option UNSUPPORTED which is clearly to enable
>   UNSUPPORTED features as defined by SUPPORT.md
> 
> - change EXPERT options to UNSUPPORTED where it makes sense: keep
>   depending on EXPERT for features made for experts
> 
> - tag unsupported features by adding (UNSUPPORTED) to the one-line
>   description

I am, btw, not fully convinced of the need for this redundancy. Wouldn't
it be enough to have just EXPERT as a setting, but varying (<reason>)
tokens in the prompt text?

> --- a/xen/Kconfig
> +++ b/xen/Kconfig
> @@ -34,8 +34,17 @@ config DEFCONFIG_LIST
>  	option defconfig_list
>  	default ARCH_DEFCONFIG
>  
> +config UNSUPPORTED
> +	bool "Configure UNSUPPORTED features"
> +	help
> +	  This option allows unsupported Xen options to be enabled, which

I'd recommend against "enabled" - a control may also be there to allow
disabling something.

> +	  includes non-security-supported, experimental, and tech preview
> +	  features as defined by SUPPORT.md. Xen binaries built with this
> +	  option enabled are not security supported.

Overall I'm a little afraid of possible inverse implications: Anything
_not_ dependent upon this option (and in particular anything not
dependent upon any Kconfig control) may be considered supported then.

Also the last sentence is already present for EXPERT, so repeating it
here seems redundant.

> +	default n

I realize you likely merely copied what EXPERT has, but this "default n"
is pretty pointless and hence would better be omitted imo.
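For reference, a Kconfig bool symbol defaults to n when no "default"
line is given, so the entry could simply be (a sketch only, with the
help text shortened and "enabled" reworded per the comment above):

```
config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	help
	  This option allows unsupported Xen options to be selected, which
	  includes non-security-supported, experimental, and tech preview
	  features as defined by SUPPORT.md.
```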

> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -102,8 +102,8 @@ config HVM
>  	  If unsure, say Y.
>  
>  config XEN_SHSTK
> -	bool "Supervisor Shadow Stacks"
> -	depends on HAS_AS_CET_SS && EXPERT
> +	bool "Supervisor Shadow Stacks (UNSUPPORTED)"
> +	depends on HAS_AS_CET_SS && UNSUPPORTED
>  	default y

Andrew, I think I did ask on v1 already: Do we need to continue to
consider this unsupported? While perhaps not a change to make right in
this patch, it should perhaps be a pre-patch then to avoid the need to
touch it here.

> @@ -165,7 +165,7 @@ config HVM_FEP

Seeing just the patch context here, I think HVM_FEP may also want
converting.

> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -151,7 +151,7 @@ config KEXEC
>  	  If unsure, say Y.
>  
>  config EFI_SET_VIRTUAL_ADDRESS_MAP
> -    bool "EFI: call SetVirtualAddressMap()" if EXPERT
> +    bool "EFI: call SetVirtualAddressMap() (UNSUPPORTED)" if UNSUPPORTED

I have to admit I'm pretty unsure about what to do with this one.

> @@ -272,7 +272,7 @@ config LATE_HWDOM
>  	  If unsure, say N.
>  
>  config ARGO
> -	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
> +	bool "Argo: hypervisor-mediated interdomain communication (UNSUPPORTED)" if UNSUPPORTED

Perhaps better (EXPERIMENTAL)?

> --- a/xen/common/sched/Kconfig
> +++ b/xen/common/sched/Kconfig
> @@ -15,7 +15,7 @@ config SCHED_CREDIT2
>  	  optimized for lower latency and higher VM density.
>  
>  config SCHED_RTDS
> -	bool "RTDS scheduler support (EXPERIMENTAL)"
> +	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
>  	default y
>  	---help---
>  	  The RTDS scheduler is a soft and firm real-time scheduler for
> @@ -23,14 +23,14 @@ config SCHED_RTDS
>  	  in the cloud, and general low-latency workloads.
>  
>  config SCHED_ARINC653
> -	bool "ARINC653 scheduler support (EXPERIMENTAL)"
> +	bool "ARINC653 scheduler support (UNSUPPORTED)" if UNSUPPORTED
>  	default DEBUG
>  	---help---
>  	  The ARINC653 scheduler is a hard real-time scheduler for single
>  	  cores, targeted for avionics, drones, and medical devices.
>  
>  config SCHED_NULL
> -	bool "Null scheduler support (EXPERIMENTAL)"
> +	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
>  	default y
>  	---help---
>  	  The null scheduler is a static, zero overhead scheduler,

I'd like to see (EXPERIMENTAL) stay everywhere here.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:45:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29418.58799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ5h-0006S1-ET; Wed, 18 Nov 2020 08:45:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29418.58799; Wed, 18 Nov 2020 08:45:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ5h-0006Ru-Ar; Wed, 18 Nov 2020 08:45:25 +0000
Received: by outflank-mailman (input) for mailman id 29418;
 Wed, 18 Nov 2020 08:45:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xcH=EY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfJ5f-0006Ri-JK
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:45:23 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 094ff0f9-15c8-4d3c-9555-e420c1bfcdc8;
 Wed, 18 Nov 2020 08:45:22 +0000 (UTC)
Received: from AM5PR0601CA0062.eurprd06.prod.outlook.com (2603:10a6:206::27)
 by DB6PR08MB2919.eurprd08.prod.outlook.com (2603:10a6:6:1e::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Wed, 18 Nov
 2020 08:45:17 +0000
Received: from AM5EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::1f) by AM5PR0601CA0062.outlook.office365.com
 (2603:10a6:206::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 08:45:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT045.mail.protection.outlook.com (10.152.17.105) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 08:45:17 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Wed, 18 Nov 2020 08:45:17 +0000
Received: from 5caa43e8644d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E74829A7-F97E-4708-9BD7-E67829069F1E.1; 
 Wed, 18 Nov 2020 08:45:10 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5caa43e8644d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Nov 2020 08:45:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1656.eurprd08.prod.outlook.com (2603:10a6:4:39::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Wed, 18 Nov
 2020 08:45:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Wed, 18 Nov 2020
 08:45:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzqAOmxlnpmCBeIgsOCIljyb4qxmTfi6iORSlNXxTIQ=;
 b=W+O1FTeL768a/RV9COaGUBjsudSDiXNzAVAT3dziqg87xExPiBiWsP4+nhL/zskdWmk7G8dq5bqboNQ3s974OGzTcAmtDW3iG0jhEta3pjeCYPYq3FhoFEeBOyYKvlknfQZ1CQe4dx7hfV/MWknUFUamENTiR4Xh0oFo+OSYUAo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 50319fd9ffc283d4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XS1WIsknv2NKOvqKgvsAhRASFpiwvCjKe2omQH0PijeQTQTtncdpGt5effQHBCo7T73g/l3l7vfPDeSRGjkouaZwSNVQXKTZJodQRT6TBLQjw/m9+57kQ8h3WGFM2nJs330oSb2maaM7Nb7c1uP/kc0UoEOmYGdunZXkgmDTZuSPtOd2fp+Vz+d7aXejZUt9fW59TeK6p63LSZ4OhtesmieOBhtmbKsfZk3qtQ92Isi1NkdtZB2cYU1ZB0NR7oXbsX6v/nW/7QzAKQvG9AftLDLUTJT/CD2L92DRLYE7hBtjoj5HUATSdakFS+2+tWiPDJ4mJ+HXN5CqidRjRYuViQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzqAOmxlnpmCBeIgsOCIljyb4qxmTfi6iORSlNXxTIQ=;
 b=mpuVEyUmfuztlMtxuI6mu1vn2TMhpkiMFztBmtglnsqruQHWfAm2vJtXhfKjPwyg5vCOnRRP5oDfhaVd+CaoXA4q72R1jCY5CkleM/AhVNbV45oIsAimnbwtvEkRaeK2oVZG1+qN62rBNd2s5OAbgHxGUdSHNFX2/28H5q+6n515ECGlYMHecGSUk/Ti+LCvOvwocfehx3UTLqhTohiH/ZVUEYl3ba3iCX8ZWnDJnXl5vw8YfRzX2IPs4YNyFxSS4VkeIPVGYt7R+Am2vdG1ju7rC8kSfgdYWnVWMB1nFJCmxMMvG721Uq+HpLrsaEpZfxndRqIliduPNUbqE3m/bg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"jbeulich@suse.com" <jbeulich@suse.com>, "julien@xen.org" <julien@xen.org>,
	"wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
Thread-Topic: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
Thread-Index: AQHWvUT1dGbwa+YaakuAH1PIoRpzaanNk7GA
Date: Wed, 18 Nov 2020 08:45:07 +0000
Message-ID: <0A50C952-B9D8-44C3-9326-A0555B435693@arm.com>
References: <20201118005051.26115-1-sstabellini@kernel.org>
In-Reply-To: <20201118005051.26115-1-sstabellini@kernel.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9a7b953f-1414-4398-f499-08d88b9e4648
x-ms-traffictypediagnostic: DB6PR0801MB1656:|DB6PR08MB2919:
X-Microsoft-Antispam-PRVS:
	<DB6PR08MB291907F5B10B073CDEF8458C9DE10@DB6PR08MB2919.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ZexJUnefE1/jH63pKYd/UwIHmynC2KYnQRj6JfzVtSYTwTqdoIAAIZLvtxshGGxKicIJXD1hLX14P+ygVpPZKN3IcGetgSw7nwrMAKFcB9f8pugMaRCzQRNIyED09c3LKjwh9RSfTnPTE/s7yibc3E9C5WXG3voOHKfB5OZvxSupT3Rr9Vq3jtCR341JCuzpaWtDoS0DKCY0q674FPDgTcNprYLiGTpCN/iKGSSe9ZPCVvnjHffmDI+xSbltn2Z6V/bw3/SLo3JEG47EmdaIpbjPvfktmskeFtfNtRHj9eB7HG85dAfzWcV53t805xdSYvmBCmlqm3pSRIslGt5tf8ICWTzvDFg9FSe03xqICsujpC1CF3yNalUW94qZ8i4eJ8FBxlzJBBdNhoezPrII3Wwry9Jp22gIbbJ4ZBnslZxwdzcHMNj9iRt1X0ytVqe97a7HYrp4dEXcgcIaccxXQQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(39850400004)(346002)(396003)(376002)(6512007)(5660300002)(83380400001)(6916009)(2906002)(86362001)(54906003)(66476007)(64756008)(4326008)(66946007)(91956017)(66446008)(186003)(53546011)(6486002)(66556008)(478600001)(26005)(71200400001)(76116006)(2616005)(36756003)(316002)(6506007)(33656002)(8676002)(8936002)(966005)(6606295002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 Q9ISNhp3Gvu9PD2+W0r5uMzyIw1lhgwE42Qhmd+sWNaa0f0tqJTd/G8PC48TbgQzXJXhGw0Gd2cAx8tOxvrE/jJPHhXK9MtRvAx+J39JDzD+lwj1Oh7LDhXPUvY4cObuP735PPqtLjZ/63z07YRCbQpV2hXl7YVNWAgnYMe/6nIxNMr17r8Df2f5fUHaesFLeUi7hwO0bGD0RMcm3DFC6Lf98WTbmkuC/5SBnbVM+tdUm5N05OwV9U5E9HuDGMKYqm2w3V+ho8Rv81njf2J3Q0RsK2uIY+2oQQlZ7eCiduK1TorgmEtv5VjsxaD3jOXtNL6icJNUp2LzDsoAuVcDogM4E2clyusuXPq2avIAvi8YWJTXObQ9JO0t0jeG2gKoY5pDqvcLWSEuvNy6d5N29I6aVUPLmUm+BY6HAIeUnMQYehpUQ4e5pTR7csAHKovuUmg2NcbKBUgWi6SduHQE85O2l1+3eaK+nxpOILruGyJesidf2NRwWem7f2pt4/MP2q1MRGX76JOJEggC8eNFNIgQrVqd7VUwjbdUxYjQe/Qcz5RdVewQB0ionTjHD5Sz7orZid9IFB+jgVS26wjde8aO7BdOUNHBuaCriOa/Cm+orgL+IPkmri5qtcuNQYAAAeVKGUtPs/AwgaYfVb3WDg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <2BA35AFF8A9C46459F8299B8886372EB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1656
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	04830b8d-d78d-4f0c-591e-08d88b9e40c9
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 08:45:17.2465
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a7b953f-1414-4398-f499-08d88b9e4648
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2919

Hi Stefano,

> On 18 Nov 2020, at 00:50, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> A recent thread [1] has exposed a couple of issues with our current way
> of handling EXPERT.
> 
> 1) It is not obvious that "Configure standard Xen features (expert
> users)" is actually the famous EXPERT we keep talking about on xen-devel
> 
> 2) It is not obvious when we need to enable EXPERT to get a specific
> feature
> 
> In particular if you want to enable ACPI support so that you can boot
> Xen on an ACPI platform, you have to enable EXPERT first. But searching
> through the kconfig menu it is really not clear (type '/' and "ACPI"):
> nothing in the description tells you that you need to enable EXPERT to
> get the option.

This is a great change that makes configuration more clear.

> 
> So this patch makes things easier by doing two things:
> 
> - introduce a new kconfig option UNSUPPORTED which is clearly to enable
>  UNSUPPORTED features as defined by SUPPORT.md
> 
> - change EXPERT options to UNSUPPORTED where it makes sense: keep
>  depending on EXPERT for features made for experts
> 
> - tag unsupported features by adding (UNSUPPORTED) to the one-line
>  description
> 
> - clarify the EXPERT one-line description

Should we also follow the scheme and add (EXPERT) in the text for expert options?

and one small fix

> 
> [1] https://marc.info/?l=xen-devel&m=160333101228981
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> CC: andrew.cooper3@citrix.com
> CC: george.dunlap@citrix.com
> CC: iwj@xenproject.org
> CC: jbeulich@suse.com
> CC: julien@xen.org
> CC: wl@xen.org
> 
> ---
> Changes in v2:
> - introduce UNSUPPORTED as a separate new option
> - don't switch all EXPERT options to UNSUPPORTED
> ---
> xen/Kconfig              | 11 ++++++++++-
> xen/arch/arm/Kconfig     | 10 +++++-----
> xen/arch/x86/Kconfig     |  8 ++++----
> xen/common/Kconfig       |  4 ++--
> xen/common/sched/Kconfig |  6 +++---
> 5 files changed, 24 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/Kconfig b/xen/Kconfig
> index 34c318bfa2..59400c4788 100644
> --- a/xen/Kconfig
> +++ b/xen/Kconfig
> @@ -34,8 +34,17 @@ config DEFCONFIG_LIST
> 	option defconfig_list
> 	default ARCH_DEFCONFIG
> 
> +config UNSUPPORTED
> +	bool "Configure UNSUPPORTED features"
> +	help
> +	  This option allows unsupported Xen options to be enabled, which
> +	  includes non-security-supported, experimental, and tech preview
> +	  features as defined by SUPPORT.md. Xen binaries built with this
> +	  option enabled are not security supported.
> +	default n
> +
> config EXPERT
> -	bool "Configure standard Xen features (expert users)"
> +	bool "Configure EXPERT features"
> 	help
> 	  This option allows certain base Xen options and settings
> 	  to be disabled or tweaked. This is for specialized environments
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index f938dd21bd..5981e7380d 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -32,7 +32,7 @@ menu "Architecture Features"
> source "arch/Kconfig"
> 
> config ACPI
> -	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
> +	bool "ACPI (Advanced Configuration and Power Interface) Support (UNSUPPORTED)" if UNSUPPORTED
> 	depends on ARM_64
> 	---help---
> 
> @@ -49,7 +49,7 @@ config GICV3
> 	  If unsure, say Y
> 
> config HAS_ITS
> -        bool "GICv3 ITS MSI controller support" if EXPERT
> +        bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
>         depends on GICV3 && !NEW_VGIC
> 
> config HVM
> @@ -79,7 +79,7 @@ config SBSA_VUART_CONSOLE
> 	  SBSA Generic UART implements a subset of ARM PL011 UART.
> 
> config ARM_SSBD
> -	bool "Speculative Store Bypass Disable" if EXPERT
> +	bool "Speculative Store Bypass Disable (UNSUPPORTED)" if UNSUPPORTED
> 	depends on HAS_ALTERNATIVE
> 	default y
> 	help
> @@ -89,7 +89,7 @@ config ARM_SSBD
> 	  If unsure, say Y.
> 
> config HARDEN_BRANCH_PREDICTOR
> -	bool "Harden the branch predictor against aliasing attacks" if EXPERT
> +	bool "Harden the branch predictor against aliasing attacks (UNSUPPORTED)" if UNSUPPORTED
> 	default y
> 	help
> 	  Speculation attacks against some high-performance processors rely on
> @@ -106,7 +106,7 @@ config HARDEN_BRANCH_PREDICTOR
> 	  If unsure, say Y.
> 
> config TEE
> -	bool "Enable TEE mediators support" if EXPERT
> +	bool "Enable TEE mediators support (UNSUPPORTED)" if UNSUPPORTED
> 	default n
> 	help
> 	  This option enables generic TEE mediators support. It allows guests
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index 24868aa6ad..d4e20e9d31 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -102,8 +102,8 @@ config HVM
> 	  If unsure, say Y.
> 
> config XEN_SHSTK
> -	bool "Supervisor Shadow Stacks"
> -	depends on HAS_AS_CET_SS && EXPERT
> +	bool "Supervisor Shadow Stacks (UNSUPPORTED)"
> +	depends on HAS_AS_CET_SS && UNSUPPORTED

This one is not following the standard scheme with "if UNSUPPORTED"

Cheers
Bertrand

> 	default y
> 	---help---
> 	  Control-flow Enforcement Technology (CET) is a set of features in
> @@ -165,7 +165,7 @@ config HVM_FEP
> 	  If unsure, say N.
> 
> config TBOOT
> -	bool "Xen tboot support" if EXPERT
> +	bool "Xen tboot support (UNSUPPORTED)" if UNSUPPORTED
> 	default y if !PV_SHIM_EXCLUSIVE
> 	select CRYPTO
> 	---help---
> @@ -251,7 +251,7 @@ config HYPERV_GUEST
> endif
> 
> config MEM_SHARING
> -	bool "Xen memory sharing support" if EXPERT
> +	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
> 	depends on HVM
> 
> endmenu
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 3e2cf25088..beed507727 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -151,7 +151,7 @@ config KEXEC
> 	  If unsure, say Y.
> 
> config EFI_SET_VIRTUAL_ADDRESS_MAP
> -    bool "EFI: call SetVirtualAddressMap()" if EXPERT
> +    bool "EFI: call SetVirtualAddressMap() (UNSUPPORTED)" if UNSUPPORTED
>     ---help---
>       Call EFI SetVirtualAddressMap() runtime service to setup memory map for
>       further runtime services. According to UEFI spec, it isn't strictly
> @@ -272,7 +272,7 @@ config LATE_HWDOM
> 	  If unsure, say N.
> 
> config ARGO
> -	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
> +	bool "Argo: hypervisor-mediated interdomain communication (UNSUPPORTED)" if UNSUPPORTED
> 	---help---
> 	  Enables a hypercall for domains to ask the hypervisor to perform
> 	  data transfer of messages between domains.
> diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
> index 61231aacaa..94c9e20139 100644
> --- a/xen/common/sched/Kconfig
> +++ b/xen/common/sched/Kconfig
> @@ -15,7 +15,7 @@ config SCHED_CREDIT2
> 	  optimized for lower latency and higher VM density.
> 
> config SCHED_RTDS
> -	bool "RTDS scheduler support (EXPERIMENTAL)"
> +	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
> 	default y
> 	---help---
> 	  The RTDS scheduler is a soft and firm real-time scheduler for
> @@ -23,14 +23,14 @@ config SCHED_RTDS
> 	  in the cloud, and general low-latency workloads.
> 
> config SCHED_ARINC653
> -	bool "ARINC653 scheduler support (EXPERIMENTAL)"
> +	bool "ARINC653 scheduler support (UNSUPPORTED)" if UNSUPPORTED
> 	default DEBUG
> 	---help---
> 	  The ARINC653 scheduler is a hard real-time scheduler for single
> 	  cores, targeted for avionics, drones, and medical devices.
> 
> config SCHED_NULL
> -	bool "Null scheduler support (EXPERIMENTAL)"
> +	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
> 	default y
> 	---help---
> 	  The null scheduler is a static, zero overhead scheduler,
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29436.58823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8j-0006g7-8h; Wed, 18 Nov 2020 08:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29436.58823; Wed, 18 Nov 2020 08:48:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8j-0006fz-5j; Wed, 18 Nov 2020 08:48:33 +0000
Received: by outflank-mailman (input) for mailman id 29436;
 Wed, 18 Nov 2020 08:48:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ8i-0006e0-1R
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:32 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 754bf64a-9605-43bd-a6cf-fc582c7d0c4f;
 Wed, 18 Nov 2020 08:48:23 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8E-0007kG-NB; Wed, 18 Nov 2020 08:48:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ8i-0006e0-1R
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:32 +0000
X-Inumbo-ID: 754bf64a-9605-43bd-a6cf-fc582c7d0c4f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 754bf64a-9605-43bd-a6cf-fc582c7d0c4f;
	Wed, 18 Nov 2020 08:48:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=rtXdmB6pB+D+QbtGbV+uHFq0yQJijYyGHbBI0cLjv2A=; b=MbUsQaP4K2SpOGVW5mv43oSYV6
	aSTVb67wwO58YAjNvYMV+JptsTZETqgQ3iVNAbfWHCQ9HE5JT81DQ5LdbNLiSSXvHq/k7/yLci8zE
	BWakOahKmMr+FGf6UbLTbY5ACvNjirGAI+lCUyjSOA50PJSXlUcCDA0CuGKj7KT+rJUTyYFPsUGnT
	6eRsMiIYjULW2kg2XeaXIAB0btu3VodTKNE1Pz44bT6gTmxgQcB687KguKJxiJBm9EAIUQxLSiLOJ
	rMJ8Ixv3mA08atwcPFG6zZ0vA+A8nlhq/s/JYSobbimDNkp47kt2AOxl+LkE9cWtZofajTO0QNTZm
	OmM7zM1Q==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8E-0007kG-NB; Wed, 18 Nov 2020 08:48:03 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: merge struct block_device and struct hd_struct
Date: Wed, 18 Nov 2020 09:47:40 +0100
Message-Id: <20201118084800.2339180-1-hch@lst.de>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Jens,

this series cleans up our main per-device node data structure by merging
the block_device and hd_struct data structures that have the same scope,
but different lifetimes.  The main effect (besides removing lots of
code) is that instead of having two device sizes that need complex
synchronization there is just one now.

Note that it depends on the previous "misc cleanups" series.

A git tree is available here:

    git://git.infradead.org/users/hch/block.git bdev-lookup

Gitweb:

    http://git.infradead.org/users/hch/block.git/shortlog/refs/heads/bdev-lookup

Diffstat:
 block/bio.c                                  |    6 
 block/blk-cgroup.c                           |   50 +-
 block/blk-core.c                             |   85 +--
 block/blk-flush.c                            |    2 
 block/blk-iocost.c                           |   36 -
 block/blk-lib.c                              |    2 
 block/blk-merge.c                            |    6 
 block/blk-mq.c                               |   11 
 block/blk-mq.h                               |    5 
 block/blk.h                                  |   92 ----
 block/genhd.c                                |  444 +++++---------------
 block/ioctl.c                                |    7 
 block/partitions/core.c                      |  238 +++--------
 drivers/block/drbd/drbd_receiver.c           |    2 
 drivers/block/drbd/drbd_worker.c             |    2 
 drivers/block/loop.c                         |   21 
 drivers/block/nbd.c                          |    6 
 drivers/block/xen-blkback/common.h           |    4 
 drivers/block/xen-blkfront.c                 |   20 
 drivers/block/zram/zram_drv.c                |   20 
 drivers/md/bcache/request.c                  |    4 
 drivers/md/bcache/super.c                    |   53 --
 drivers/md/dm-table.c                        |    9 
 drivers/md/dm.c                              |   16 
 drivers/md/md.c                              |    8 
 drivers/mtd/mtdsuper.c                       |   17 
 drivers/nvme/target/admin-cmd.c              |   20 
 drivers/s390/block/dasd.c                    |    8 
 drivers/s390/block/dasd_ioctl.c              |    9 
 drivers/scsi/scsicam.c                       |    2 
 drivers/target/target_core_file.c            |    6 
 drivers/target/target_core_pscsi.c           |    7 
 drivers/usb/gadget/function/storage_common.c |    8 
 fs/block_dev.c                               |  578 ++++++++-------------------
 fs/btrfs/sysfs.c                             |   15 
 fs/btrfs/volumes.c                           |   13 
 fs/ext4/super.c                              |   18 
 fs/ext4/sysfs.c                              |   10 
 fs/f2fs/checkpoint.c                         |    5 
 fs/f2fs/f2fs.h                               |    2 
 fs/f2fs/super.c                              |    8 
 fs/f2fs/sysfs.c                              |    9 
 fs/inode.c                                   |    3 
 fs/internal.h                                |    7 
 fs/io_uring.c                                |   10 
 fs/pipe.c                                    |    5 
 fs/pstore/blk.c                              |    2 
 fs/quota/quota.c                             |   40 +
 fs/statfs.c                                  |    2 
 fs/super.c                                   |   86 ----
 include/linux/blk-cgroup.h                   |    4 
 include/linux/blk_types.h                    |   26 +
 include/linux/blkdev.h                       |   24 -
 include/linux/fs.h                           |    5 
 include/linux/genhd.h                        |  104 ----
 include/linux/part_stat.h                    |   17 
 init/do_mounts.c                             |  271 +++++-------
 kernel/trace/blktrace.c                      |   54 --
 mm/filemap.c                                 |    9 
 59 files changed, 837 insertions(+), 1716 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29435.58811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8g-0006eS-U3; Wed, 18 Nov 2020 08:48:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29435.58811; Wed, 18 Nov 2020 08:48:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8g-0006eL-R6; Wed, 18 Nov 2020 08:48:30 +0000
Received: by outflank-mailman (input) for mailman id 29435;
 Wed, 18 Nov 2020 08:48:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ8d-0006e0-1R
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:30 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96f5616f-f278-429a-ba97-74bd3615791c;
 Wed, 18 Nov 2020 08:48:24 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8K-0007ko-U7; Wed, 18 Nov 2020 08:48:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ8d-0006e0-1R
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:30 +0000
X-Inumbo-ID: 96f5616f-f278-429a-ba97-74bd3615791c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 96f5616f-f278-429a-ba97-74bd3615791c;
	Wed, 18 Nov 2020 08:48:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=HlR8BHDvXv9Sgk92bGXzXu1h5fBqLDHnxGaHmTelGu0=; b=KdghkPK+bK39xYpfOITSgGrMVp
	BZli70ghNiYun7n+87JdzQVckkFmjA/iVEU1uoAHxbpWKwGwUBpwv8NlGzqCCuztL0lPtEylchCPx
	X77Zf9jDIFy/H0VRo0NOUt6YvmcSPgxMUC7DF5aAGAQes6WU3Pdt/eM+kILHXyjoQeMLFgGOGqGO2
	2wQ3zSfil9jGD9rGJGwOm7NLvJIwXJnBSIFtI1p+WPxnBXhtK4tCfAOpvizcMYINpM9jc8zU6z+H6
	QZvAKdshaOtNqnFRSbHHIEzczFZ/YzEf/GcUelmoGKLkJaXLcQNXVKFdaUzmSzDBrHLYmuWBgWQTb
	Qy4idlSQ==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8K-0007ko-U7; Wed, 18 Nov 2020 08:48:09 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 04/20] block: use disk_part_iter_exit in disk_part_iter_next
Date: Wed, 18 Nov 2020 09:47:44 +0100
Message-Id: <20201118084800.2339180-5-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Call disk_part_iter_exit in disk_part_iter_next instead of duplicating
the functionality.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 4e039524f92b8f..0bd9c41dd4cb69 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -227,8 +227,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 	int inc, end;
 
 	/* put the last partition */
-	disk_put_part(piter->part);
-	piter->part = NULL;
+	disk_part_iter_exit(piter);
 
 	/* get part_tbl */
 	rcu_read_lock();
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29438.58835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8o-0006jy-J5; Wed, 18 Nov 2020 08:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29438.58835; Wed, 18 Nov 2020 08:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8o-0006jp-F4; Wed, 18 Nov 2020 08:48:38 +0000
Received: by outflank-mailman (input) for mailman id 29438;
 Wed, 18 Nov 2020 08:48:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ8n-0006e0-1d
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:37 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27d1465f-5e53-4a5f-a328-e9ff239e08b1;
 Wed, 18 Nov 2020 08:48:24 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8G-0007kK-3j; Wed, 18 Nov 2020 08:48:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ8n-0006e0-1d
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:37 +0000
X-Inumbo-ID: 27d1465f-5e53-4a5f-a328-e9ff239e08b1
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 27d1465f-5e53-4a5f-a328-e9ff239e08b1;
	Wed, 18 Nov 2020 08:48:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=JcB9e14ZTBoVWi3xdlC8mpSR3JW9dN3IR752p30j0qE=; b=NU91cq3Thn62+S87BXdw/AgTnO
	9uK6cfXrVF9yEmA7gnhhmrl2MGNUcw9zuMUuI9hT67RLy+rD4GPNSaJHyxowUOAxm138/fc2plNb4
	XuymtZouAkEWfW9SYiaTL1h8ium+W1M+d0plwtKeHozF9b78iR3IbJAH7THKQ5q9+EOEA67NxaQ+a
	sRuv7GeCq8Iai/OUyr+UoOseqMpOlQBsZuaoZRCTte5XcJjQ0cWOaWQaAJct5Q6R49nWqcs+mT/qa
	3mreP00X2BL79i88S4UYKH6hvnklYD2z0TokXDz3qFO3qzztG5kmXwkN8YZHjWH1I0vBdaXznVot0
	Kok8pY9Q==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8G-0007kK-3j; Wed, 18 Nov 2020 08:48:04 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 01/20] blk-cgroup: fix a hd_struct leak in blkcg_fill_root_iostats
Date: Wed, 18 Nov 2020 09:47:41 +0100
Message-Id: <20201118084800.2339180-2-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

disk_get_part needs to be paired with a disk_put_part.

Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index c68bdf58c9a6e1..54fbe1e80cc41a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -849,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
 			u64_stats_update_end(&blkg->iostat.sync);
 		}
+		disk_put_part(part);
 	}
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29440.58846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8s-0006oB-V4; Wed, 18 Nov 2020 08:48:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29440.58846; Wed, 18 Nov 2020 08:48:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8s-0006ny-Qa; Wed, 18 Nov 2020 08:48:42 +0000
Received: by outflank-mailman (input) for mailman id 29440;
 Wed, 18 Nov 2020 08:48:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ8s-0006e0-1o
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:42 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ec9e0d1-13b2-44d9-83ef-d58e4f30802e;
 Wed, 18 Nov 2020 08:48:24 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8M-0007ky-MA; Wed, 18 Nov 2020 08:48:11 +0000
X-Inumbo-ID: 2ec9e0d1-13b2-44d9-83ef-d58e4f30802e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=yHhXeiyRNaQPcpM2nqUGOQAlRtiBH1PeNN6xDbuA8UE=; b=ZzCRySiTV8SiCXIJ3vfmIUEXvr
	uIJL5nS39gGqLhbC29g9AgM4MwKAtfdRHvWFPaw8smy3i22eU8iUjNKhVCWvGhbBViCoRsKcO6GQU
	SrQ87ktJRa5o18ql91Xe/INMz7nHNZqm8LQea7oI5+JgzZpIdqxe0qq5Ujh4GvgHhDai5SolBAJuU
	l/EDSwn8iwf9v+NLccmqQ0SiSa/CQy9vyIOE8M4CboYWb1kWDaemO8wzQB2ZRBmLrktfDwZWQWu2e
	CuL+t5bFawnPWgtVE7aLR0tFwcZmUHWOkcqi2WOEC8evYMNZjdXQFh3mSoEa6nq87Jj9ktco/+Nkh
	5sAjpgag==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 05/20] block: use put_device in put_disk
Date: Wed, 18 Nov 2020 09:47:45 +0100
Message-Id: <20201118084800.2339180-6-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use put_device to drop the device reference instead of poking into the
device internals and calling kobject_put directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/genhd.c b/block/genhd.c
index 0bd9c41dd4cb69..f46e89226fdf91 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1803,7 +1803,7 @@ EXPORT_SYMBOL(__alloc_disk_node);
 void put_disk(struct gendisk *disk)
 {
 	if (disk)
-		kobject_put(&disk_to_dev(disk)->kobj);
+		put_device(disk_to_dev(disk));
 }
 EXPORT_SYMBOL(put_disk);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29442.58859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8y-0006te-Av; Wed, 18 Nov 2020 08:48:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29442.58859; Wed, 18 Nov 2020 08:48:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ8y-0006tW-67; Wed, 18 Nov 2020 08:48:48 +0000
Received: by outflank-mailman (input) for mailman id 29442;
 Wed, 18 Nov 2020 08:48:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ8x-0006e0-1n
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:47 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e015755b-a763-43f0-9ad7-d5e4d6a30097;
 Wed, 18 Nov 2020 08:48:26 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8H-0007kQ-Oc; Wed, 18 Nov 2020 08:48:06 +0000
X-Inumbo-ID: e015755b-a763-43f0-9ad7-d5e4d6a30097
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=TGRxj3P5fN3QWbEKVTDt35+3igPcA4rCa+2EvBLtC8g=; b=PneJaAwHQHJWZMHR7k5+0kk3SW
	A637+tLVo0+wFYquvEJFrDmJGArQtG1KOZwaGCDcb5uwjoCpzXm5p38WC3Z79c3W1Swj1SNUBRNa6
	rcFaHgHlsCggXHjqeabUB4vl/TxH3BbslIfvIWOw76bNGryHFS0oOO6QGe02Z/szHszDuZfZaVGF7
	23TDL9jueG8glTxv5jhe+z4KkCNXNKIuP3jUqJ8Fp21K4ZBhqs9AfHyhYybRkG2we41vboVqG0XXs
	mAy1tiCpbrwoXa0Ibko7TJrxx/UZnZMUdAbbu4bn9DFirVnGNcjP7fIL0fwtGBVYjU1KM/a0NOzqV
	GyvwqgEA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 02/20] block: remove a duplicate __disk_get_part prototype
Date: Wed, 18 Nov 2020 09:47:42 +0100
Message-Id: <20201118084800.2339180-3-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/genhd.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 46553d6d602563..22f5b9fd96f8bf 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -250,7 +250,6 @@ static inline dev_t part_devt(struct hd_struct *part)
 	return part_to_dev(part)->devt;
 }
 
-extern struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
 extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
 
 static inline void disk_put_part(struct hd_struct *part)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29445.58871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ93-0006zl-M6; Wed, 18 Nov 2020 08:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29445.58871; Wed, 18 Nov 2020 08:48:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ93-0006zb-Gm; Wed, 18 Nov 2020 08:48:53 +0000
Received: by outflank-mailman (input) for mailman id 29445;
 Wed, 18 Nov 2020 08:48:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ92-0006e0-1w
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:52 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0e377e9-29f2-44d6-9138-596a44a71dc6;
 Wed, 18 Nov 2020 08:48:26 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8J-0007ke-8F; Wed, 18 Nov 2020 08:48:07 +0000
X-Inumbo-ID: a0e377e9-29f2-44d6-9138-596a44a71dc6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=9bzANv+VYgNayi3XqYu3+gJSnycumDxTZtqBmtGYeOA=; b=Zwjf9/vfCdFX/wzjudAegg8I9I
	ne6T0qgttSCBMB4/+qUGaZ0F8oemsXBAjc8MiA01yPrzHlCI+B7NOaQCcgUPiGeeOeRaDRjNOL0TI
	XTvLoy6KGn3T2Sy/YYIyZfKkz8pr91yZcXqExhyNx6T1JicjMldOTa2ud8J4dvUkqfJXsI2ky1h0j
	JzrP7PsQ5U3dJXRRu9nPHXIWufdefaoTP45TCu+rSkQhlllaqIRdDC1SU1aU3sN/0323xx3thLATv
	NBGm1F1GhheaF/vYD/ePxy3W47MdD2OJPtdvqukJRdrxHYP5kAaSKYzKivQrPHlRuOzrRoPRqWhYp
	zSRofNWw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 03/20] block: add a bdev_kobj helper
Date: Wed, 18 Nov 2020 09:47:43 +0100
Message-Id: <20201118084800.2339180-4-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a little helper to find the kobject for a struct block_device.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/super.c |  7 ++-----
 drivers/md/md.c           |  4 +---
 fs/btrfs/sysfs.c          | 15 +++------------
 include/linux/blk_types.h |  3 +++
 4 files changed, 9 insertions(+), 20 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 46a00134a36ae1..a6a5e21e4fd136 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1447,8 +1447,7 @@ static int register_bdev(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
 		goto err;
 
 	err = "error creating kobject";
-	if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj,
-			"bcache"))
+	if (kobject_add(&dc->disk.kobj, bdev_kobj(bdev), "bcache"))
 		goto err;
 	if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj))
 		goto err;
@@ -2342,9 +2341,7 @@ static int register_cache(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
 		goto err;
 	}
 
-	if (kobject_add(&ca->kobj,
-			&part_to_dev(bdev->bd_part)->kobj,
-			"bcache")) {
+	if (kobject_add(&ca->kobj, bdev_kobj(bdev), "bcache")) {
 		err = "error calling kobject_add";
 		ret = -ENOMEM;
 		goto out;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index b2edf5e0f965b5..7ce6047c856ea2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2414,7 +2414,6 @@ EXPORT_SYMBOL(md_integrity_add_rdev);
 static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
 {
 	char b[BDEVNAME_SIZE];
-	struct kobject *ko;
 	int err;
 
 	/* prevent duplicates */
@@ -2477,9 +2476,8 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
 	if ((err = kobject_add(&rdev->kobj, &mddev->kobj, "dev-%s", b)))
 		goto fail;
 
-	ko = &part_to_dev(rdev->bdev->bd_part)->kobj;
 	/* failure here is OK */
-	err = sysfs_create_link(&rdev->kobj, ko, "block");
+	err = sysfs_create_link(&rdev->kobj, bdev_kobj(rdev->bdev), "block");
 	rdev->sysfs_state = sysfs_get_dirent_safe(rdev->kobj.sd, "state");
 	rdev->sysfs_unack_badblocks =
 		sysfs_get_dirent_safe(rdev->kobj.sd, "unacknowledged_bad_blocks");
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index 279d9262b676d4..24b6c6dc69000a 100644
--- a/fs/btrfs/sysfs.c
+++ b/fs/btrfs/sysfs.c
@@ -1232,8 +1232,6 @@ int btrfs_sysfs_add_space_info_type(struct btrfs_fs_info *fs_info,
 
 void btrfs_sysfs_remove_device(struct btrfs_device *device)
 {
-	struct hd_struct *disk;
-	struct kobject *disk_kobj;
 	struct kobject *devices_kobj;
 
 	/*
@@ -1243,11 +1241,8 @@ void btrfs_sysfs_remove_device(struct btrfs_device *device)
 	devices_kobj = device->fs_info->fs_devices->devices_kobj;
 	ASSERT(devices_kobj);
 
-	if (device->bdev) {
-		disk = device->bdev->bd_part;
-		disk_kobj = &part_to_dev(disk)->kobj;
-		sysfs_remove_link(devices_kobj, disk_kobj->name);
-	}
+	if (device->bdev)
+		sysfs_remove_link(devices_kobj, bdev_kobj(device->bdev)->name);
 
 	if (device->devid_kobj.state_initialized) {
 		kobject_del(&device->devid_kobj);
@@ -1353,11 +1348,7 @@ int btrfs_sysfs_add_device(struct btrfs_device *device)
 	nofs_flag = memalloc_nofs_save();
 
 	if (device->bdev) {
-		struct hd_struct *disk;
-		struct kobject *disk_kobj;
-
-		disk = device->bdev->bd_part;
-		disk_kobj = &part_to_dev(disk)->kobj;
+		struct kobject *disk_kobj = bdev_kobj(device->bdev);
 
 		ret = sysfs_create_link(devices_kobj, disk_kobj, disk_kobj->name);
 		if (ret) {
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d9b69bbde5cc54..0069bee992063e 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -48,6 +48,9 @@ struct block_device {
 	struct mutex		bd_fsfreeze_mutex;
 } __randomize_layout;
 
+#define bdev_kobj(_bdev) \
+	(&part_to_dev((_bdev)->bd_part)->kobj)
+
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
  * Alpha cannot write a byte atomically, so we need to use 32-bit value.
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:48:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:48:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29453.58883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ98-000761-Ud; Wed, 18 Nov 2020 08:48:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29453.58883; Wed, 18 Nov 2020 08:48:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ98-00075s-Qm; Wed, 18 Nov 2020 08:48:58 +0000
Received: by outflank-mailman (input) for mailman id 29453;
 Wed, 18 Nov 2020 08:48:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ97-0006e0-29
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:48:57 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9366edc9-59e5-4b85-b840-b9ca92b993a2;
 Wed, 18 Nov 2020 08:48:27 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8O-0007lA-RG; Wed, 18 Nov 2020 08:48:13 +0000
X-Inumbo-ID: 9366edc9-59e5-4b85-b840-b9ca92b993a2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=AuzCwkw90Oy6IPNhoiC+qBglSTsZM+HyoDoCPgOfof0=; b=YtHXD8P3bl/ZEagDvWBMs7HUMz
	zdBwVpVtkLRztaIitYOD+jnHVbFQ6nJ8vuy+RLNNTKM13Op/plycZzjy+O4m3LWRxyGds3Zqm8uXL
	0XuTVrpZcPUDGwu7KtN8TqRfLjbMNJ84KtyxsgxJmnWMTEPOjVTiAMZb5/WgHNDE+x7H3rKLRuDmH
	S0COUGr4VfDChivd1zEu5OgZhhAocAq3rfUghhZiWVdd7wm1RQ+tppv9VJnogHVF5m81oV1c+2vot
	cs+KiVe+jSmoJYHNAdtCaO4J/3LKZi6FdgQumaduslpItNE93jCeXFM/TwIDIwiBiRp2E9vOvb5No
	1S1bNtVg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 06/20] block: change the hash used for looking up block devices
Date: Wed, 18 Nov 2020 09:47:46 +0100
Message-Id: <20201118084800.2339180-7-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hashing the sum of the major and minor numbers creates tons of pointless
collisions. Just use the dev_t itself, which is 32 bits wide and thus
guaranteed to fit into ino_t.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index d8664f5c1ff669..29db12c3bb501c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -870,35 +870,12 @@ void __init bdev_cache_init(void)
 	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
 }
 
-/*
- * Most likely _very_ bad one - but then it's hardly critical for small
- * /dev and can be fixed when somebody will need really large one.
- * Keep in mind that it will be fed through icache hash function too.
- */
-static inline unsigned long hash(dev_t dev)
-{
-	return MAJOR(dev)+MINOR(dev);
-}
-
-static int bdev_test(struct inode *inode, void *data)
-{
-	return BDEV_I(inode)->bdev.bd_dev == *(dev_t *)data;
-}
-
-static int bdev_set(struct inode *inode, void *data)
-{
-	BDEV_I(inode)->bdev.bd_dev = *(dev_t *)data;
-	return 0;
-}
-
 static struct block_device *bdget(dev_t dev)
 {
 	struct block_device *bdev;
 	struct inode *inode;
 
-	inode = iget5_locked(blockdev_superblock, hash(dev),
-			bdev_test, bdev_set, &dev);
-
+	inode = iget_locked(blockdev_superblock, dev);
 	if (!inode)
 		return NULL;
 
@@ -910,6 +887,7 @@ static struct block_device *bdget(dev_t dev)
 		bdev->bd_super = NULL;
 		bdev->bd_inode = inode;
 		bdev->bd_part_count = 0;
+		bdev->bd_dev = dev;
 		inode->i_mode = S_IFBLK;
 		inode->i_rdev = dev;
 		inode->i_bdev = bdev;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:49:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29454.58895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9D-0007Bd-Ec; Wed, 18 Nov 2020 08:49:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29454.58895; Wed, 18 Nov 2020 08:49:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9D-0007BS-Ai; Wed, 18 Nov 2020 08:49:03 +0000
Received: by outflank-mailman (input) for mailman id 29454;
 Wed, 18 Nov 2020 08:49:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9C-0006e0-2a
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:02 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3b98daa-67f0-4be6-a54f-a13add3a68af;
 Wed, 18 Nov 2020 08:48:28 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8Q-0007lQ-JE; Wed, 18 Nov 2020 08:48:15 +0000
X-Inumbo-ID: b3b98daa-67f0-4be6-a54f-a13add3a68af
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=W6vAbrHw5UIe+hBqHSxB53buLdYkSelEr9Mgh5kMt7I=; b=tqlpD2LwWrfiR177Afk+lAf6gG
	SdmN28pAYbTuw2kFiO29RbN7xRKWcK+sQd821VIf7xVnbdqQONrfdCuRtoqMmY0zDB4DyogQkJMe7
	9KJZJnJN50aEkWC27Pt/nYWB/lO0+TLNoU3MJjrzcHqOdJLhZAe/lA/w0T2YZcgTl2HKfT7BX/UmQ
	qR0sr3QQ090u2mE8YUnyoTA6wu2oYHnwkBT2mdSvairMOUdPrhfoWp8XHx+R1gi5pmhnBe8qvea52
	8A5EiVzDe66E9rlleHsfTARbQmub+5RZzwBNLiRZMW1rvIw+AhtX9+xd6ISnwxbQOslYFzI6NMx5l
	zyrQfYxw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 07/20] init: refactor name_to_dev_t
Date: Wed, 18 Nov 2020 09:47:47 +0100
Message-Id: <20201118084800.2339180-8-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Split each lookup variant handled by name_to_dev_t into a self-contained
helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/genhd.h |   7 +-
 init/do_mounts.c      | 183 +++++++++++++++++++++---------------------
 2 files changed, 91 insertions(+), 99 deletions(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 22f5b9fd96f8bf..ca5e356084c353 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -388,18 +388,13 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
 }
 #endif /* CONFIG_SYSFS */
 
+dev_t blk_lookup_devt(const char *name, int partno);
 #ifdef CONFIG_BLOCK
 void printk_all_partitions(void);
-dev_t blk_lookup_devt(const char *name, int partno);
 #else /* CONFIG_BLOCK */
 static inline void printk_all_partitions(void)
 {
 }
-static inline dev_t blk_lookup_devt(const char *name, int partno)
-{
-	dev_t devt = MKDEV(0, 0);
-	return devt;
-}
 #endif /* CONFIG_BLOCK */
 
 #endif /* _LINUX_GENHD_H */
diff --git a/init/do_mounts.c b/init/do_mounts.c
index b5f9604d0c98a2..aef2f24461c7f1 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -90,7 +90,6 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	return 0;
 }
 
-
 /**
  * devt_from_partuuid - looks up the dev_t of a partition by its UUID
  * @uuid_str:	char array containing ascii UUID
@@ -186,7 +185,83 @@ static int match_dev_by_label(struct device *dev, const void *data)
 
 	return 0;
 }
-#endif
+
+static dev_t devt_from_partlabel(const char *label)
+{
+	struct device *dev;
+	dev_t devt = 0;
+
+	dev = class_find_device(&block_class, NULL, label, &match_dev_by_label);
+	if (dev) {
+		devt = dev->devt;
+		put_device(dev);
+	}
+
+	return devt;
+}
+
+static dev_t devt_from_devname(const char *name)
+{
+	dev_t devt = 0;
+	int part;
+	char s[32];
+	char *p;
+
+	if (strlen(name) > 31)
+		return 0;
+	strcpy(s, name);
+	for (p = s; *p; p++) {
+		if (*p == '/')
+			*p = '!';
+	}
+
+	devt = blk_lookup_devt(s, 0);
+	if (devt)
+		return devt;
+
+	/*
+	 * Try non-existent, but valid partition, which may only exist after
+	 * opening the device, like partitioned md devices.
+	 */
+	while (p > s && isdigit(p[-1]))
+		p--;
+	if (p == s || !*p || *p == '0')
+		return 0;
+
+	/* try disk name without <part number> */
+	part = simple_strtoul(p, NULL, 10);
+	*p = '\0';
+	devt = blk_lookup_devt(s, part);
+	if (devt)
+		return devt;
+
+	/* try disk name without p<part number> */
+	if (p < s + 2 || !isdigit(p[-2]) || p[-1] != 'p')
+		return 0;
+	p[-1] = '\0';
+	return blk_lookup_devt(s, part);
+}
+#endif /* CONFIG_BLOCK */
+
+static dev_t devt_from_devnum(const char *name)
+{
+	unsigned maj, min, offset;
+	dev_t devt = 0;
+	char *p, dummy;
+
+	if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2 ||
+	    sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3) {
+		devt = MKDEV(maj, min);
+		if (maj != MAJOR(devt) || min != MINOR(devt))
+			return 0;
+	} else {
+		devt = new_decode_dev(simple_strtoul(name, &p, 16));
+		if (*p)
+			return 0;
+	}
+
+	return devt;
+}
 
 /*
  *	Convert a name into device number.  We accept the following variants:
@@ -218,101 +293,23 @@ static int match_dev_by_label(struct device *dev, const void *data)
  *	name contains slashes, the device name has them replaced with
  *	bangs.
  */
-
 dev_t name_to_dev_t(const char *name)
 {
-	char s[32];
-	char *p;
-	dev_t res = 0;
-	int part;
-
+	if (strcmp(name, "/dev/nfs") == 0)
+		return Root_NFS;
+	if (strcmp(name, "/dev/cifs") == 0)
+		return Root_CIFS;
+	if (strcmp(name, "/dev/ram") == 0)
+		return Root_RAM0;
 #ifdef CONFIG_BLOCK
-	if (strncmp(name, "PARTUUID=", 9) == 0) {
-		name += 9;
-		res = devt_from_partuuid(name);
-		if (!res)
-			goto fail;
-		goto done;
-	} else if (strncmp(name, "PARTLABEL=", 10) == 0) {
-		struct device *dev;
-
-		dev = class_find_device(&block_class, NULL, name + 10,
-					&match_dev_by_label);
-		if (!dev)
-			goto fail;
-
-		res = dev->devt;
-		put_device(dev);
-		goto done;
-	}
+	if (strncmp(name, "PARTUUID=", 9) == 0)
+		return devt_from_partuuid(name + 9);
+	if (strncmp(name, "PARTLABEL=", 10) == 0)
+		return devt_from_partlabel(name + 10);
+	if (strncmp(name, "/dev/", 5) == 0)
+		return devt_from_devname(name + 5);
 #endif
-
-	if (strncmp(name, "/dev/", 5) != 0) {
-		unsigned maj, min, offset;
-		char dummy;
-
-		if ((sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) ||
-		    (sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3)) {
-			res = MKDEV(maj, min);
-			if (maj != MAJOR(res) || min != MINOR(res))
-				goto fail;
-		} else {
-			res = new_decode_dev(simple_strtoul(name, &p, 16));
-			if (*p)
-				goto fail;
-		}
-		goto done;
-	}
-
-	name += 5;
-	res = Root_NFS;
-	if (strcmp(name, "nfs") == 0)
-		goto done;
-	res = Root_CIFS;
-	if (strcmp(name, "cifs") == 0)
-		goto done;
-	res = Root_RAM0;
-	if (strcmp(name, "ram") == 0)
-		goto done;
-
-	if (strlen(name) > 31)
-		goto fail;
-	strcpy(s, name);
-	for (p = s; *p; p++)
-		if (*p == '/')
-			*p = '!';
-	res = blk_lookup_devt(s, 0);
-	if (res)
-		goto done;
-
-	/*
-	 * try non-existent, but valid partition, which may only exist
-	 * after revalidating the disk, like partitioned md devices
-	 */
-	while (p > s && isdigit(p[-1]))
-		p--;
-	if (p == s || !*p || *p == '0')
-		goto fail;
-
-	/* try disk name without <part number> */
-	part = simple_strtoul(p, NULL, 10);
-	*p = '\0';
-	res = blk_lookup_devt(s, part);
-	if (res)
-		goto done;
-
-	/* try disk name without p<part number> */
-	if (p < s + 2 || !isdigit(p[-2]) || p[-1] != 'p')
-		goto fail;
-	p[-1] = '\0';
-	res = blk_lookup_devt(s, part);
-	if (res)
-		goto done;
-
-fail:
-	return 0;
-done:
-	return res;
+	return devt_from_devnum(name);
 }
 EXPORT_SYMBOL_GPL(name_to_dev_t);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:49:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29460.58907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9I-0007I3-QT; Wed, 18 Nov 2020 08:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29460.58907; Wed, 18 Nov 2020 08:49:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9I-0007Hu-Mc; Wed, 18 Nov 2020 08:49:08 +0000
Received: by outflank-mailman (input) for mailman id 29460;
 Wed, 18 Nov 2020 08:49:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9H-0006e0-2b
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:07 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c66ecca-63ef-4cbc-af2b-d9a709ce71cc;
 Wed, 18 Nov 2020 08:48:28 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8S-0007lc-Hx; Wed, 18 Nov 2020 08:48:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9H-0006e0-2b
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:07 +0000
X-Inumbo-ID: 7c66ecca-63ef-4cbc-af2b-d9a709ce71cc
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7c66ecca-63ef-4cbc-af2b-d9a709ce71cc;
	Wed, 18 Nov 2020 08:48:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=AZC0zIud/sy36HL3UqpdhruuJxaP+dAsE+XWl7N2xio=; b=Nx+h2rhZg/HvIdxi/V4Jhb3r0u
	Ha9RMCfuFy1DcNiHVfggBRgvESO9x7+pxSBvldUugXJje0PXHGlc4RwDDAg7KizNUutdw1Oz6YYyv
	GW4iMGk2/W7s5u9pbxqjoSdBjJixbVkMGKI4sJ3RWvWoE3K0iOy/pbqDa1e1ML4CIGcPuaXqz2g9g
	GWC0Sp67FA+/219oiqS3gSXWZE8BQ1pE8WMHKNGls47Mx8ZIwmqDOxT3eb3qNMpqy/o6N2LhYhPo2
	rajpeTK4bPGzZuMUv7onhX7MqP2bpqaR14FcEdjZKriDz9DBejg943e2dL+oCaa6VfesI9rxujAcy
	PLSuDG6w==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8S-0007lc-Hx; Wed, 18 Nov 2020 08:48:16 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 08/20] init: refactor devt_from_partuuid
Date: Wed, 18 Nov 2020 09:47:48 +0100
Message-Id: <20201118084800.2339180-9-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The code in devt_from_partuuid is very convoluted.  Refactor it a bit by
sanitizing the goto and variable name usage.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 init/do_mounts.c | 68 ++++++++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/init/do_mounts.c b/init/do_mounts.c
index aef2f24461c7f1..afa26a4028d25e 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -105,13 +105,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
  */
 static dev_t devt_from_partuuid(const char *uuid_str)
 {
-	dev_t res = 0;
 	struct uuidcmp cmp;
 	struct device *dev = NULL;
-	struct gendisk *disk;
-	struct hd_struct *part;
+	dev_t devt = 0;
 	int offset = 0;
-	bool clear_root_wait = false;
 	char *slash;
 
 	cmp.uuid = uuid_str;
@@ -120,52 +117,49 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 	/* Check for optional partition number offset attributes. */
 	if (slash) {
 		char c = 0;
+
 		/* Explicitly fail on poor PARTUUID syntax. */
-		if (sscanf(slash + 1,
-			   "PARTNROFF=%d%c", &offset, &c) != 1) {
-			clear_root_wait = true;
-			goto done;
-		}
+		if (sscanf(slash + 1, "PARTNROFF=%d%c", &offset, &c) != 1)
+			goto clear_root_wait;
 		cmp.len = slash - uuid_str;
 	} else {
 		cmp.len = strlen(uuid_str);
 	}
 
-	if (!cmp.len) {
-		clear_root_wait = true;
-		goto done;
-	}
+	if (!cmp.len)
+		goto clear_root_wait;
 
-	dev = class_find_device(&block_class, NULL, &cmp,
-				&match_dev_by_uuid);
+	dev = class_find_device(&block_class, NULL, &cmp, &match_dev_by_uuid);
 	if (!dev)
-		goto done;
-
-	res = dev->devt;
+		return 0;
 
-	/* Attempt to find the partition by offset. */
-	if (!offset)
-		goto no_offset;
+	if (offset) {
+		/*
+		 * Attempt to find the requested partition by adding an offset
+		 * to the partition number found by UUID.
+		 */
+		struct hd_struct *part;
 
-	res = 0;
-	disk = part_to_disk(dev_to_part(dev));
-	part = disk_get_part(disk, dev_to_part(dev)->partno + offset);
-	if (part) {
-		res = part_devt(part);
-		put_device(part_to_dev(part));
+		part = disk_get_part(dev_to_disk(dev),
+				     dev_to_part(dev)->partno + offset);
+		if (part) {
+			devt = part_devt(part);
+			put_device(part_to_dev(part));
+		}
+	} else {
+		devt = dev->devt;
 	}
 
-no_offset:
 	put_device(dev);
-done:
-	if (clear_root_wait) {
-		pr_err("VFS: PARTUUID= is invalid.\n"
-		       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
-		if (root_wait)
-			pr_err("Disabling rootwait; root= is invalid.\n");
-		root_wait = 0;
-	}
-	return res;
+	return devt;
+
+clear_root_wait:
+	pr_err("VFS: PARTUUID= is invalid.\n"
+	       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
+	if (root_wait)
+		pr_err("Disabling rootwait; root= is invalid.\n");
+	root_wait = 0;
+	return 0;
 }
 
 /**
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:49:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:49:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29475.58935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9q-0007c8-Ts; Wed, 18 Nov 2020 08:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29475.58935; Wed, 18 Nov 2020 08:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9q-0007bq-K2; Wed, 18 Nov 2020 08:49:42 +0000
Received: by outflank-mailman (input) for mailman id 29475;
 Wed, 18 Nov 2020 08:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9b-0006e0-3q
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:27 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b535a621-0a14-4c13-b9f2-a60576cd638f;
 Wed, 18 Nov 2020 08:48:31 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8W-0007mG-H5; Wed, 18 Nov 2020 08:48:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9b-0006e0-3q
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:27 +0000
X-Inumbo-ID: b535a621-0a14-4c13-b9f2-a60576cd638f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b535a621-0a14-4c13-b9f2-a60576cd638f;
	Wed, 18 Nov 2020 08:48:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=6hkgUJ1mS3kvM7sA/4AN3atVruy32itd0ZGr9xe66Po=; b=L2td6RO7MBCM/6vXyAvSKM5bSN
	kbsKiO2NgeaEkpglWX1w8vEghix6mQar9pAu2yADmsBzvkLRTgKoAec3QJrOPR5K+0zGVdK9ljfi+
	hHEjWtOobuqj26E6G3m3rFBqnmew1lRpm+RtN6moeq3AnVFyzCv/NSfh48KrKPX5R7NIJTYZHeBJa
	9RpWbbygX8TogH7UlhxXizbevNvJ20t1ytuxUSVj8cfuJiQrnbMhiixy8syEwMBus9lqPOXDZ4yaV
	Xg7/w+XrbjFywKqWl/93+Xn66Ui5f5Ed6QJ9dc0m7NvVfnttDdO0KLT8Y7TGS5G981XWFBub3IoCl
	YAmHgubw==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8W-0007mG-H5; Wed, 18 Nov 2020 08:48:21 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 11/20] block: reference struct block_device from struct hd_struct
Date: Wed, 18 Nov 2020 09:47:51 +0100
Message-Id: <20201118084800.2339180-12-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

To simplify block device lookup and a few other upcoming areas, make sure
that we always have a struct block_device available for each disk and
each partition.  The only downside of this is that each device and
partition uses a little more memory.  The upside will be that a lot of
code can be simplified.

With that, all we need to do to look up a block device is to look up its
inode and do a few sanity checks on the gendisk, instead of a separate
lookup for the gendisk.

As part of the change, switch bdget() to only find existing block
devices, given that we know that the block_device structure must be
allocated at probe / partition scan time.

blk-cgroup needed a bit of special treatment, as it was the only place
that wanted to look up a gendisk outside of the normal blkdev_get path.
It is switched to look up via the block device hash now that this is the
primary lookup path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c         |  42 ++++----
 block/blk-iocost.c         |  36 +++----
 block/blk.h                |   2 +-
 block/genhd.c              | 204 ++++---------------------------------
 block/partitions/core.c    |  28 ++---
 fs/block_dev.c             | 123 ++++++++++++----------
 include/linux/blk-cgroup.h |   4 +-
 include/linux/blkdev.h     |   3 +
 include/linux/genhd.h      |   4 +-
 9 files changed, 156 insertions(+), 290 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 54fbe1e80cc41a..4c0ae0f6bce02d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -556,22 +556,22 @@ static struct blkcg_gq *blkg_lookup_check(struct blkcg *blkcg,
 }
 
 /**
- * blkg_conf_prep - parse and prepare for per-blkg config update
+ * blkcg_conf_get_bdev - parse and open bdev for per-blkg config update
  * @inputp: input string pointer
  *
  * Parse the device node prefix part, MAJ:MIN, of per-blkg config update
- * from @input and get and return the matching gendisk.  *@inputp is
+ * from @input and get and return the matching bdev.  *@inputp is
  * updated to point past the device node prefix.  Returns an ERR_PTR()
  * value on error.
  *
  * Use this function iff blkg_conf_prep() can't be used for some reason.
  */
-struct gendisk *blkcg_conf_get_disk(char **inputp)
+struct block_device *blkcg_conf_get_bdev(char **inputp)
 {
 	char *input = *inputp;
 	unsigned int major, minor;
-	struct gendisk *disk;
-	int key_len, part;
+	struct block_device *bdev;
+	int key_len;
 
 	if (sscanf(input, "%u:%u%n", &major, &minor, &key_len) != 2)
 		return ERR_PTR(-EINVAL);
@@ -581,16 +581,16 @@ struct gendisk *blkcg_conf_get_disk(char **inputp)
 		return ERR_PTR(-EINVAL);
 	input = skip_spaces(input);
 
-	disk = get_gendisk(MKDEV(major, minor), &part);
-	if (!disk)
+	bdev = bdget(MKDEV(major, minor));
+	if (!bdev)
 		return ERR_PTR(-ENODEV);
-	if (part) {
-		put_disk_and_module(disk);
+	if (bdev_is_partition(bdev)) {
+		bdput(bdev);
 		return ERR_PTR(-ENODEV);
 	}
 
 	*inputp = input;
-	return disk;
+	return bdev;
 }
 
 /**
@@ -607,18 +607,18 @@ struct gendisk *blkcg_conf_get_disk(char **inputp)
  */
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   char *input, struct blkg_conf_ctx *ctx)
-	__acquires(rcu) __acquires(&disk->queue->queue_lock)
+	__acquires(rcu) __acquires(&bdev->bd_disk->queue->queue_lock)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct request_queue *q;
 	struct blkcg_gq *blkg;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_get_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	q = disk->queue;
+	q = bdev->bd_disk->queue;
 
 	rcu_read_lock();
 	spin_lock_irq(&q->queue_lock);
@@ -689,7 +689,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 			goto success;
 	}
 success:
-	ctx->disk = disk;
+	ctx->bdev = bdev;
 	ctx->blkg = blkg;
 	ctx->body = input;
 	return 0;
@@ -700,7 +700,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 	spin_unlock_irq(&q->queue_lock);
 	rcu_read_unlock();
 fail:
-	put_disk_and_module(disk);
+	bdput(bdev);
 	/*
 	 * If queue was bypassing, we should retry.  Do so after a
 	 * short msleep().  It isn't strictly necessary but queue
@@ -723,11 +723,11 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep);
  * with blkg_conf_prep().
  */
 void blkg_conf_finish(struct blkg_conf_ctx *ctx)
-	__releases(&ctx->disk->queue->queue_lock) __releases(rcu)
+	__releases(&ctx->bdev->bd_disk->queue->queue_lock) __releases(rcu)
 {
-	spin_unlock_irq(&ctx->disk->queue->queue_lock);
+	spin_unlock_irq(&ctx->bdev->bd_disk->queue->queue_lock);
 	rcu_read_unlock();
-	put_disk_and_module(ctx->disk);
+	bdput(ctx->bdev);
 }
 EXPORT_SYMBOL_GPL(blkg_conf_finish);
 
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index bbe86d1199dc5b..bd8bfccf6b9ec3 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -3120,23 +3120,23 @@ static const match_table_t qos_tokens = {
 static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 			     size_t nbytes, loff_t off)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct ioc *ioc;
 	u32 qos[NR_QOS_PARAMS];
 	bool enable, user;
 	char *p;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_get_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(disk->queue);
+	ioc = q_to_ioc(bdev->bd_disk->queue);
 	if (!ioc) {
-		ret = blk_iocost_init(disk->queue);
+		ret = blk_iocost_init(bdev->bd_disk->queue);
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(disk->queue);
+		ioc = q_to_ioc(bdev->bd_disk->queue);
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3231,12 +3231,12 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return nbytes;
 einval:
 	ret = -EINVAL;
 err:
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return ret;
 }
 
@@ -3287,23 +3287,23 @@ static const match_table_t i_lcoef_tokens = {
 static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 				    size_t nbytes, loff_t off)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct ioc *ioc;
 	u64 u[NR_I_LCOEFS];
 	bool user;
 	char *p;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_get_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(disk->queue);
+	ioc = q_to_ioc(bdev->bd_disk->queue);
 	if (!ioc) {
-		ret = blk_iocost_init(disk->queue);
+		ret = blk_iocost_init(bdev->bd_disk->queue);
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(disk->queue);
+		ioc = q_to_ioc(bdev->bd_disk->queue);
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3356,13 +3356,13 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return nbytes;
 
 einval:
 	ret = -EINVAL;
 err:
-	put_disk_and_module(disk);
+	bdput(bdev);
 	return ret;
 }
 
diff --git a/block/blk.h b/block/blk.h
index dfab98465db9a5..c4839abcfa27eb 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -352,7 +352,6 @@ struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
 int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
 void blk_free_devt(dev_t devt);
-void blk_invalidate_devt(dev_t devt);
 char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
 #define ADDPART_FLAG_RAID	1
@@ -384,6 +383,7 @@ static inline void hd_free_part(struct hd_struct *part)
 {
 	free_percpu(part->dkstats);
 	kfree(part->info);
+	bdput(part->bdev);
 	percpu_ref_exit(&part->ref);
 }
 
diff --git a/block/genhd.c b/block/genhd.c
index f46e89226fdf91..94de95287a6370 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -27,17 +27,9 @@
 
 static struct kobject *block_depr;
 
-static DEFINE_XARRAY(bdev_map);
-static DEFINE_MUTEX(bdev_map_lock);
-
 /* for extended dynamic devt allocation, currently only one major is used */
 #define NR_EXT_DEVT		(1 << MINORBITS)
-
-/* For extended devt allocation.  ext_devt_lock prevents look up
- * results from going away underneath its user.
- */
-static DEFINE_SPINLOCK(ext_devt_lock);
-static DEFINE_IDR(ext_devt_idr);
+static DEFINE_IDA(ext_devt_ida);
 
 static void disk_check_events(struct disk_events *ev,
 			      unsigned int *clearing_ptr);
@@ -580,14 +572,7 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
 		return 0;
 	}
 
-	/* allocate ext devt */
-	idr_preload(GFP_KERNEL);
-
-	spin_lock_bh(&ext_devt_lock);
-	idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
-	spin_unlock_bh(&ext_devt_lock);
-
-	idr_preload_end();
+	idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT, GFP_KERNEL);
 	if (idx < 0)
 		return idx == -ENOSPC ? -EBUSY : idx;
 
@@ -606,26 +591,8 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
  */
 void blk_free_devt(dev_t devt)
 {
-	if (devt == MKDEV(0, 0))
-		return;
-
-	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
-		spin_lock_bh(&ext_devt_lock);
-		idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
-		spin_unlock_bh(&ext_devt_lock);
-	}
-}
-
-/*
- * We invalidate devt by assigning NULL pointer for devt in idr.
- */
-void blk_invalidate_devt(dev_t devt)
-{
-	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
-		spin_lock_bh(&ext_devt_lock);
-		idr_replace(&ext_devt_idr, NULL, blk_mangle_minor(MINOR(devt)));
-		spin_unlock_bh(&ext_devt_lock);
-	}
+	if (MAJOR(devt) == BLOCK_EXT_MAJOR)
+		ida_free(&ext_devt_ida, blk_mangle_minor(MINOR(devt)));
 }
 
 static char *bdevt_str(dev_t devt, char *buf)
@@ -640,28 +607,6 @@ static char *bdevt_str(dev_t devt, char *buf)
 	return buf;
 }
 
-static void blk_register_region(struct gendisk *disk)
-{
-	int i;
-
-	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < disk->minors; i++) {
-		if (xa_insert(&bdev_map, disk_devt(disk) + i, disk, GFP_KERNEL))
-			WARN_ON_ONCE(1);
-	}
-	mutex_unlock(&bdev_map_lock);
-}
-
-static void blk_unregister_region(struct gendisk *disk)
-{
-	int i;
-
-	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < disk->minors; i++)
-		xa_erase(&bdev_map, disk_devt(disk) + i);
-	mutex_unlock(&bdev_map_lock);
-}
-
 static void disk_scan_partitions(struct gendisk *disk)
 {
 	struct block_device *bdev;
@@ -805,7 +750,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		blk_register_region(disk);
+		bdev_add(disk->part0.bdev, devt);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -916,16 +861,6 @@ void del_gendisk(struct gendisk *disk)
 	}
 
 	blk_unregister_queue(disk);
-	
-	if (!(disk->flags & GENHD_FL_HIDDEN))
-		blk_unregister_region(disk);
-	/*
-	 * Remove gendisk pointer from idr so that it cannot be looked up
-	 * while RCU period before freeing gendisk is running to prevent
-	 * use-after-free issues. Note that the device number stays
-	 * "in-use" until we really free the gendisk.
-	 */
-	blk_invalidate_devt(disk_devt(disk));
 
 	kobject_put(disk->part0.holder_dir);
 	kobject_put(disk->slave_dir);
@@ -964,7 +899,7 @@ static ssize_t disk_badblocks_store(struct device *dev,
 	return badblocks_store(disk->bb, page, len, 0);
 }
 
-static void request_gendisk_module(dev_t devt)
+void blk_request_module(dev_t devt)
 {
 	unsigned int major = MAJOR(devt);
 	struct blk_major_name **n;
@@ -984,84 +919,6 @@ static void request_gendisk_module(dev_t devt)
 		request_module("block-major-%d", MAJOR(devt));
 }
 
-static bool get_disk_and_module(struct gendisk *disk)
-{
-	struct module *owner;
-
-	if (!disk->fops)
-		return false;
-	owner = disk->fops->owner;
-	if (owner && !try_module_get(owner))
-		return false;
-	if (!kobject_get_unless_zero(&disk_to_dev(disk)->kobj)) {
-		module_put(owner);
-		return false;
-	}
-	return true;
-
-}
-
-/**
- * get_gendisk - get partitioning information for a given device
- * @devt: device to get partitioning information for
- * @partno: returned partition index
- *
- * This function gets the structure containing partitioning
- * information for the given device @devt.
- *
- * Context: can sleep
- */
-struct gendisk *get_gendisk(dev_t devt, int *partno)
-{
-	struct gendisk *disk = NULL;
-
-	might_sleep();
-
-	if (MAJOR(devt) != BLOCK_EXT_MAJOR) {
-		mutex_lock(&bdev_map_lock);
-		disk = xa_load(&bdev_map, devt);
-		if (!disk) {
-			mutex_unlock(&bdev_map_lock);
-			request_gendisk_module(devt);
-			mutex_lock(&bdev_map_lock);
-			disk = xa_load(&bdev_map, devt);
-		}
-		if (disk && !get_disk_and_module(disk))
-			disk = NULL;
-		if (disk)
-			*partno = devt - disk_devt(disk);
-		mutex_unlock(&bdev_map_lock);
-	} else {
-		struct hd_struct *part;
-
-		spin_lock_bh(&ext_devt_lock);
-		part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
-		if (part && get_disk_and_module(part_to_disk(part))) {
-			*partno = part->partno;
-			disk = part_to_disk(part);
-		}
-		spin_unlock_bh(&ext_devt_lock);
-	}
-
-	if (!disk)
-		return NULL;
-
-	/*
-	 * Synchronize with del_gendisk() to not return disk that is being
-	 * destroyed.
-	 */
-	down_read(&disk->lookup_sem);
-	if (unlikely((disk->flags & GENHD_FL_HIDDEN) ||
-		     !(disk->flags & GENHD_FL_UP))) {
-		up_read(&disk->lookup_sem);
-		put_disk_and_module(disk);
-		disk = NULL;
-	} else {
-		up_read(&disk->lookup_sem);
-	}
-	return disk;
-}
-
 /**
  * bdget_disk - do bdget() by gendisk and partition number
  * @disk: gendisk of interest
@@ -1559,11 +1416,6 @@ int disk_expand_part_tbl(struct gendisk *disk, int partno)
  *
  * This function releases all allocated resources of the gendisk.
  *
- * The struct gendisk refcount is incremented with get_gendisk() or
- * get_disk_and_module(), and its refcount is decremented with
- * put_disk_and_module() or put_disk(). Once the refcount reaches 0 this
- * function is called.
- *
  * Drivers which used __device_add_disk() have a gendisk with a request_queue
  * assigned. Since the request_queue sits on top of the gendisk for these
  * drivers we also call blk_put_queue() for them, and we expect the
@@ -1748,16 +1600,18 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk)
 		return NULL;
 
+	disk->part0.bdev = bdev_alloc(disk, 0);
+	if (!disk->part0.bdev)
+		goto out_free_disk;
+
 	disk->part0.dkstats = alloc_percpu(struct disk_stats);
 	if (!disk->part0.dkstats)
-		goto out_free_disk;
+		goto out_bdput;
 
 	init_rwsem(&disk->lookup_sem);
 	disk->node_id = node_id;
-	if (disk_expand_part_tbl(disk, 0)) {
-		free_percpu(disk->part0.dkstats);
-		goto out_free_disk;
-	}
+	if (disk_expand_part_tbl(disk, 0))
+		goto out_free_bdstats;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
 	rcu_assign_pointer(ptbl->part[0], &disk->part0);
@@ -1772,8 +1626,10 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	 * converted to make use of bd_mutex and sequence counters.
 	 */
 	hd_sects_seq_init(&disk->part0);
-	if (hd_ref_init(&disk->part0))
-		goto out_free_part0;
+	if (hd_ref_init(&disk->part0)) {
+		hd_free_part(&disk->part0);
+		return NULL;
+	}
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
@@ -1782,8 +1638,10 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	device_initialize(disk_to_dev(disk));
 	return disk;
 
-out_free_part0:
-	hd_free_part(&disk->part0);
+out_free_bdstats:
+	free_percpu(disk->part0.dkstats);
+out_bdput:
+	bdput(disk->part0.bdev);
 out_free_disk:
 	kfree(disk);
 	return NULL;
@@ -1807,26 +1665,6 @@ void put_disk(struct gendisk *disk)
 }
 EXPORT_SYMBOL(put_disk);
 
-/**
- * put_disk_and_module - decrements the module and gendisk refcount
- * @disk: the struct gendisk to decrement the refcount for
- *
- * This is a counterpart of get_disk_and_module() and thus also of
- * get_gendisk().
- *
- * Context: Any context, but the last reference must not be dropped from
- *          atomic context.
- */
-void put_disk_and_module(struct gendisk *disk)
-{
-	if (disk) {
-		struct module *owner = disk->fops->owner;
-
-		put_disk(disk);
-		module_put(owner);
-	}
-}
-
 static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 {
 	char event[] = "DISK_RO=1";
diff --git a/block/partitions/core.c b/block/partitions/core.c
index a02e224115943d..0ba0bf44b88af3 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -340,12 +340,11 @@ void delete_partition(struct hd_struct *part)
 	device_del(part_to_dev(part));
 
 	/*
-	 * Remove gendisk pointer from idr so that it cannot be looked up
-	 * while RCU period before freeing gendisk is running to prevent
-	 * use-after-free issues. Note that the device number stays
-	 * "in-use" until we really free the gendisk.
+	 * Remove the block device from the inode hash, so that it cannot be
+	 * looked up while waiting for the RCU grace period.
 	 */
-	blk_invalidate_devt(part_devt(part));
+	remove_inode_hash(part->bdev->bd_inode);
+
 	percpu_ref_kill(&part->ref);
 }
 
@@ -402,11 +401,14 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!p)
 		return ERR_PTR(-EBUSY);
 
+	err = -ENOMEM;
 	p->dkstats = alloc_percpu(struct disk_stats);
-	if (!p->dkstats) {
-		err = -ENOMEM;
+	if (!p->dkstats)
 		goto out_free;
-	}
+
+	p->bdev = bdev_alloc(disk, partno);
+	if (!p->bdev)
+		goto out_free_stats;
 
 	hd_sects_seq_init(p);
 	pdev = part_to_dev(p);
@@ -420,10 +422,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		struct partition_meta_info *pinfo;
 
 		pinfo = kzalloc_node(sizeof(*pinfo), GFP_KERNEL, disk->node_id);
-		if (!pinfo) {
-			err = -ENOMEM;
-			goto out_free_stats;
-		}
+		if (!pinfo)
+			goto out_bdput;
 		memcpy(pinfo, info, sizeof(*info));
 		p->info = pinfo;
 	}
@@ -470,6 +470,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	}
 
 	/* everything is up and running, commence */
+	bdev_add(p->bdev, devt);
 	rcu_assign_pointer(ptbl->part[partno], p);
 
 	/* suppress uevent if the disk suppresses it */
@@ -479,11 +480,14 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 out_free_info:
 	kfree(p->info);
+out_bdput:
+	bdput(p->bdev);
 out_free_stats:
 	free_percpu(p->dkstats);
 out_free:
 	kfree(p);
 	return ERR_PTR(err);
+
 out_remove_file:
 	device_remove_file(pdev, &dev_attr_whole_disk);
 out_del:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 4c4d6c30382c06..e94633dc6ad93b 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -870,34 +870,50 @@ void __init bdev_cache_init(void)
 	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
 }
 
-static struct block_device *bdget(dev_t dev)
+struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 {
 	struct block_device *bdev;
 	struct inode *inode;
 
-	inode = iget_locked(blockdev_superblock, dev);
+	inode = new_inode(blockdev_superblock);
 	if (!inode)
 		return NULL;
 
-	bdev = &BDEV_I(inode)->bdev;
+	bdev = I_BDEV(inode);
+	spin_lock_init(&bdev->bd_size_lock);
+	bdev->bd_disk = disk;
+	bdev->bd_partno = partno;
+	bdev->bd_contains = NULL;
+	bdev->bd_super = NULL;
+	bdev->bd_inode = inode;
+	bdev->bd_part_count = 0;
+
+	inode->i_mode = S_IFBLK;
+	inode->i_rdev = 0;
+	inode->i_bdev = bdev;
+	inode->i_data.a_ops = &def_blk_aops;
 
-	if (inode->i_state & I_NEW) {
-		spin_lock_init(&bdev->bd_size_lock);
-		bdev->bd_contains = NULL;
-		bdev->bd_super = NULL;
-		bdev->bd_inode = inode;
-		bdev->bd_part_count = 0;
-		bdev->bd_dev = dev;
-		inode->i_mode = S_IFBLK;
-		inode->i_rdev = dev;
-		inode->i_bdev = bdev;
-		inode->i_data.a_ops = &def_blk_aops;
-		mapping_set_gfp_mask(&inode->i_data, GFP_USER);
-		unlock_new_inode(inode);
-	}
 	return bdev;
 }
 
+void bdev_add(struct block_device *bdev, dev_t dev)
+{
+	bdev->bd_dev = dev;
+	bdev->bd_inode->i_rdev = dev;
+	bdev->bd_inode->i_ino = dev;
+	insert_inode_hash(bdev->bd_inode);
+}
+
+struct block_device *bdget(dev_t dev)
+{
+	struct inode *inode;
+
+	inode = ilookup(blockdev_superblock, dev);
+	if (!inode)
+		return NULL;
+	return &BDEV_I(inode)->bdev;
+}
+
 /**
  * bdgrab -- Grab a reference to an already referenced block device
  * @bdev:	Block device to grab a reference to.
@@ -957,6 +973,10 @@ static struct block_device *bd_acquire(struct inode *inode)
 		bd_forget(inode);
 
 	bdev = bdget(inode->i_rdev);
+	if (!bdev) {
+		blk_request_module(inode->i_rdev);
+		bdev = bdget(inode->i_rdev);
+	}
 	if (bdev) {
 		spin_lock(&bdev_lock);
 		if (!inode->i_bdev) {
@@ -1067,27 +1087,6 @@ int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
 }
 EXPORT_SYMBOL_GPL(bd_prepare_to_claim); /* only for the loop driver */
 
-static struct gendisk *bdev_get_gendisk(struct block_device *bdev, int *partno)
-{
-	struct gendisk *disk = get_gendisk(bdev->bd_dev, partno);
-
-	if (!disk)
-		return NULL;
-	/*
-	 * Now that we hold gendisk reference we make sure bdev we looked up is
-	 * not stale. If it is, it means device got removed and created before
-	 * we looked up gendisk and we fail open in such case. Associating
-	 * unhashed bdev with newly created gendisk could lead to two bdevs
-	 * (and thus two independent caches) being associated with one device
-	 * which is bad.
-	 */
-	if (inode_unhashed(bdev->bd_inode)) {
-		put_disk_and_module(disk);
-		return NULL;
-	}
-	return disk;
-}
-
 static void bd_clear_claiming(struct block_device *whole, void *holder)
 {
 	lockdep_assert_held(&bdev_lock);
@@ -1404,6 +1403,27 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
  */
 EXPORT_SYMBOL_GPL(bdev_disk_changed);
 
+static bool get_disk_and_module(struct gendisk *disk)
+{
+	struct module *owner = disk->fops->owner;
+
+	if (!try_module_get(owner))
+		return false;
+	if (!kobject_get_unless_zero(&disk_to_dev(disk)->kobj)) {
+		module_put(owner);
+		return false;
+	}
+	return true;
+}
+
+static void put_disk_and_module(struct gendisk *disk)
+{
+	struct module *owner = disk->fops->owner;
+
+	put_disk(disk);
+	module_put(owner);
+}
+
 /*
  * bd_mutex locking:
  *
@@ -1415,19 +1435,24 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
 	struct block_device *whole = NULL, *claiming = NULL;
-	struct gendisk *disk;
+	struct gendisk *disk = bdev->bd_disk;
 	int ret;
-	int partno;
 	bool first_open = false, unblock_events = true, need_restart;
 
  restart:
 	need_restart = false;
-	ret = -ENXIO;
-	disk = bdev_get_gendisk(bdev, &partno);
-	if (!disk)
+
+	down_read(&disk->lookup_sem);
+	if ((disk->flags & GENHD_FL_HIDDEN) ||
+	    !(disk->flags & GENHD_FL_UP) ||
+	    !get_disk_and_module(disk)) {
+		up_read(&disk->lookup_sem);
+		ret = -ENXIO;
 		goto out;
+	}
+	up_read(&disk->lookup_sem);
 
-	if (partno) {
+	if (bdev->bd_partno) {
 		whole = bdget_disk(disk, 0);
 		if (!whole) {
 			ret = -ENOMEM;
@@ -1450,13 +1475,11 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (!bdev->bd_openers) {
 		first_open = true;
-		bdev->bd_disk = disk;
 		bdev->bd_contains = bdev;
-		bdev->bd_partno = partno;
 
-		if (!partno) {
+		if (!bdev->bd_partno) {
 			ret = -ENXIO;
-			bdev->bd_part = disk_get_part(disk, partno);
+			bdev->bd_part = disk_get_part(disk, 0);
 			if (!bdev->bd_part)
 				goto out_clear;
 
@@ -1494,7 +1517,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			if (ret)
 				goto out_clear;
 			bdev->bd_contains = bdgrab(whole);
-			bdev->bd_part = disk_get_part(disk, partno);
+			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
 			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
 				ret = -ENXIO;
@@ -1550,7 +1573,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 
  out_clear:
 	disk_put_part(bdev->bd_part);
-	bdev->bd_disk = NULL;
 	bdev->bd_part = NULL;
 	if (bdev != bdev->bd_contains)
 		__blkdev_put(bdev->bd_contains, mode, 1);
@@ -1752,7 +1774,6 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
-		bdev->bd_disk = NULL;
 		if (bdev_is_partition(bdev))
 			victim = bdev->bd_contains;
 		bdev->bd_contains = NULL;
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index c8fc9792ac776d..064f14daedebca 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -197,12 +197,12 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);
 
 struct blkg_conf_ctx {
-	struct gendisk			*disk;
+	struct block_device		*bdev;
 	struct blkcg_gq			*blkg;
 	char				*body;
 };
 
-struct gendisk *blkcg_conf_get_disk(char **inputp);
+struct block_device *blkcg_conf_get_bdev(char **inputp);
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   char *input, struct blkg_conf_ctx *ctx);
 void blkg_conf_finish(struct blkg_conf_ctx *ctx);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05b346a68c2eee..044d9dd159d882 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1994,6 +1994,9 @@ void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
 		void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 
+struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
+void bdev_add(struct block_device *bdev, dev_t dev);
+struct block_device *bdget(dev_t dev);
 struct block_device *I_BDEV(struct inode *inode);
 struct block_device *bdget_part(struct hd_struct *part);
 struct block_device *bdgrab(struct block_device *bdev);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index ca5e356084c353..ab5fca99764e7a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -65,6 +65,7 @@ struct hd_struct {
 	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
 
+	struct block_device *bdev;
 	struct device __dev;
 	struct kobject *holder_dir;
 	int policy, partno;
@@ -300,7 +301,6 @@ static inline void add_disk_no_queue_reg(struct gendisk *disk)
 }
 
 extern void del_gendisk(struct gendisk *gp);
-extern struct gendisk *get_gendisk(dev_t dev, int *partno);
 extern struct block_device *bdget_disk(struct gendisk *disk, int partno);
 
 extern void set_disk_ro(struct gendisk *disk, int flag);
@@ -338,7 +338,6 @@ int blk_drop_partitions(struct block_device *bdev);
 
 extern struct gendisk *__alloc_disk_node(int minors, int node_id);
 extern void put_disk(struct gendisk *disk);
-extern void put_disk_and_module(struct gendisk *disk);
 
 #define alloc_disk_node(minors, node_id)				\
 ({									\
@@ -389,6 +388,7 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
 #endif /* CONFIG_SYSFS */
 
 dev_t blk_lookup_devt(const char *name, int partno);
+void blk_request_module(dev_t devt);
 #ifdef CONFIG_BLOCK
 void printk_all_partitions(void);
 #else /* CONFIG_BLOCK */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:49:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:49:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29474.58927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9q-0007bK-H9; Wed, 18 Nov 2020 08:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29474.58927; Wed, 18 Nov 2020 08:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9q-0007b9-AQ; Wed, 18 Nov 2020 08:49:42 +0000
Received: by outflank-mailman (input) for mailman id 29474;
 Wed, 18 Nov 2020 08:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9g-0006e0-3n
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:32 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba1e1944-c684-469b-aacc-1d92033464c8;
 Wed, 18 Nov 2020 08:48:34 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8Z-0007ml-KP; Wed, 18 Nov 2020 08:48:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9g-0006e0-3n
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:32 +0000
X-Inumbo-ID: ba1e1944-c684-469b-aacc-1d92033464c8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ba1e1944-c684-469b-aacc-1d92033464c8;
	Wed, 18 Nov 2020 08:48:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Jn3i40Vc573JfqwwUze67l0f1T02hvayuSlJ2ghsy0k=; b=FTL5yZhtosIFlvGZaUcV3BSZLO
	2TqrrRJoqdDVaWLXhmhx6KTTrdYGYfDRTknpg1OIhc5h+bhaE/jRW42OfB50rTNA9k/yOREjMyc1q
	h1r0D7VZAPNZoHVenErI6dEmWW0bMHPKydKZwcVrSXAtkonnEliwxlKGe0UTBmuxKqeN/Kv7CdhDr
	wSgB/IOGDweRQP5CfnoBHqzOmz/o+GbMPLsREHvH7aVHGynd6iKm2i5wBRhf06XMiFWWVW/NVbBWN
	QEnO32F/mnmpb2MJ7448A+o5rFMOH5KqJcAc12v3q1pWpqxRmnC8C8RDkX6/37Dm9ET28xdA3djUt
	VRLfV1Rw==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8Z-0007ml-KP; Wed, 18 Nov 2020 08:48:24 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 13/20] block: remove ->bd_contains
Date: Wed, 18 Nov 2020 09:47:53 +0100
Message-Id: <20201118084800.2339180-14-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that each hd_struct has a reference to the corresponding
block_device, there is no need for the bd_contains pointer.  Add
a bdev_whole() helper to look up the whole device block_device
structure instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/scsi/scsicam.c    |  2 +-
 fs/block_dev.c            | 46 ++++++++++++---------------------------
 include/linux/blk_types.h |  4 +++-
 3 files changed, 18 insertions(+), 34 deletions(-)

diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c
index 682cf08ab04153..f1553a453616fd 100644
--- a/drivers/scsi/scsicam.c
+++ b/drivers/scsi/scsicam.c
@@ -32,7 +32,7 @@
  */
 unsigned char *scsi_bios_ptable(struct block_device *dev)
 {
-	struct address_space *mapping = dev->bd_contains->bd_inode->i_mapping;
+	struct address_space *mapping = bdev_whole(dev)->bd_inode->i_mapping;
 	unsigned char *res = NULL;
 	struct page *page;
 
diff --git a/fs/block_dev.c b/fs/block_dev.c
index dd52dbd266cde7..258a1ced924483 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -879,7 +879,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 	spin_lock_init(&bdev->bd_size_lock);
 	bdev->bd_disk = disk;
 	bdev->bd_partno = partno;
-	bdev->bd_contains = NULL;
 	bdev->bd_super = NULL;
 	bdev->bd_inode = inode;
 	bdev->bd_part_count = 0;
@@ -1054,7 +1053,7 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
  */
 int bd_prepare_to_claim(struct block_device *bdev, void *holder)
 {
-	struct block_device *whole = bdev->bd_contains;
+	struct block_device *whole = bdev_whole(bdev);
 
 retry:
 	spin_lock(&bdev_lock);
@@ -1102,7 +1101,7 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
  */
 static void bd_finish_claiming(struct block_device *bdev, void *holder)
 {
-	struct block_device *whole = bdev->bd_contains;
+	struct block_device *whole = bdev_whole(bdev);
 
 	spin_lock(&bdev_lock);
 	BUG_ON(!bd_may_claim(bdev, whole, holder));
@@ -1131,7 +1130,7 @@ static void bd_finish_claiming(struct block_device *bdev, void *holder)
 void bd_abort_claiming(struct block_device *bdev, void *holder)
 {
 	spin_lock(&bdev_lock);
-	bd_clear_claiming(bdev->bd_contains, holder);
+	bd_clear_claiming(bdev_whole(bdev), holder);
 	spin_unlock(&bdev_lock);
 }
 EXPORT_SYMBOL(bd_abort_claiming);
@@ -1429,7 +1428,6 @@ static void put_disk_and_module(struct gendisk *disk)
 static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
-	struct block_device *whole = NULL;
 	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 	bool first_open = false, unblock_events = true, need_restart;
@@ -1447,26 +1445,17 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	}
 	up_read(&disk->lookup_sem);
 
-	if (bdev->bd_partno) {
-		whole = bdget_disk(disk, 0);
-		if (!whole) {
-			ret = -ENOMEM;
-			goto out_put_disk;
-		}
-	}
-
 	if (!for_part && (mode & FMODE_EXCL)) {
 		WARN_ON_ONCE(!holder);
 		ret = bd_prepare_to_claim(bdev, holder);
 		if (ret)
-			goto out_put_whole;
+			goto out_put_disk;
 	}
 
 	disk_block_events(disk);
 	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (!bdev->bd_openers) {
 		first_open = true;
-		bdev->bd_contains = bdev;
 
 		if (!bdev->bd_partno) {
 			ret = -ENXIO;
@@ -1504,10 +1493,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 				goto out_clear;
 		} else {
 			BUG_ON(for_part);
-			ret = __blkdev_get(whole, mode, NULL, 1);
+			ret = __blkdev_get(bdev_whole(bdev), mode, NULL, 1);
 			if (ret)
 				goto out_clear;
-			bdev->bd_contains = bdgrab(whole);
+			bdgrab(bdev_whole(bdev));
 			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
 			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
@@ -1521,7 +1510,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		if (bdev->bd_bdi == &noop_backing_dev_info)
 			bdev->bd_bdi = bdi_get(disk->queue->backing_dev_info);
 	} else {
-		if (bdev->bd_contains == bdev) {
+		if (!bdev->bd_partno) {
 			ret = 0;
 			if (bdev->bd_disk->fops->open)
 				ret = bdev->bd_disk->fops->open(bdev, mode);
@@ -1560,24 +1549,18 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 	/* only one opener holds refs to the module and disk */
 	if (!first_open)
 		put_disk_and_module(disk);
-	if (whole)
-		bdput(whole);
 	return 0;
 
  out_clear:
 	disk_put_part(bdev->bd_part);
 	bdev->bd_part = NULL;
-	if (bdev != bdev->bd_contains)
-		__blkdev_put(bdev->bd_contains, mode, 1);
-	bdev->bd_contains = NULL;
+	if (bdev_is_partition(bdev))
+		__blkdev_put(bdev_whole(bdev), mode, 1);
  out_unlock_bdev:
 	if (!for_part && (mode & FMODE_EXCL))
 		bd_abort_claiming(bdev, holder);
 	mutex_unlock(&bdev->bd_mutex);
 	disk_unblock_events(disk);
- out_put_whole:
- 	if (whole)
-		bdput(whole);
  out_put_disk:
 	put_disk_and_module(disk);
 	if (need_restart)
@@ -1768,8 +1751,7 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
 		if (bdev_is_partition(bdev))
-			victim = bdev->bd_contains;
-		bdev->bd_contains = NULL;
+			victim = bdev_whole(bdev);
 
 		put_disk_and_module(disk);
 	} else {
@@ -1787,6 +1769,7 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 	mutex_lock(&bdev->bd_mutex);
 
 	if (mode & FMODE_EXCL) {
+		struct block_device *whole = bdev_whole(bdev);
 		bool bdev_free;
 
 		/*
@@ -1797,13 +1780,12 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 		spin_lock(&bdev_lock);
 
 		WARN_ON_ONCE(--bdev->bd_holders < 0);
-		WARN_ON_ONCE(--bdev->bd_contains->bd_holders < 0);
+		WARN_ON_ONCE(--whole->bd_holders < 0);
 
-		/* bd_contains might point to self, check in a separate step */
 		if ((bdev_free = !bdev->bd_holders))
 			bdev->bd_holder = NULL;
-		if (!bdev->bd_contains->bd_holders)
-			bdev->bd_contains->bd_holder = NULL;
+		if (!whole->bd_holders)
+			whole->bd_holder = NULL;
 
 		spin_unlock(&bdev_lock);
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 0069bee992063e..453b940b87d8e9 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -32,7 +32,6 @@ struct block_device {
 #ifdef CONFIG_SYSFS
 	struct list_head	bd_holder_disks;
 #endif
-	struct block_device *	bd_contains;
 	u8			bd_partno;
 	struct hd_struct *	bd_part;
 	/* number of times partitions within this device have been opened. */
@@ -48,6 +47,9 @@ struct block_device {
 	struct mutex		bd_fsfreeze_mutex;
 } __randomize_layout;
 
+#define bdev_whole(_bdev) \
+	((_bdev)->bd_disk->part0.bdev)
+
 #define bdev_kobj(_bdev) \
 	(&part_to_dev((_bdev)->bd_part)->kobj)
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:49:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:49:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29473.58919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9q-0007ap-5t; Wed, 18 Nov 2020 08:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29473.58919; Wed, 18 Nov 2020 08:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJ9q-0007ai-2P; Wed, 18 Nov 2020 08:49:42 +0000
Received: by outflank-mailman (input) for mailman id 29473;
 Wed, 18 Nov 2020 08:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9M-0006e0-2q
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:12 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2042ed8d-2c68-4518-bedb-05d51710cbba;
 Wed, 18 Nov 2020 08:48:30 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8T-0007lo-R0; Wed, 18 Nov 2020 08:48:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9M-0006e0-2q
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:12 +0000
X-Inumbo-ID: 2042ed8d-2c68-4518-bedb-05d51710cbba
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2042ed8d-2c68-4518-bedb-05d51710cbba;
	Wed, 18 Nov 2020 08:48:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=O60EjIrCGFuLmrlAIlHLEpDJ56KCKSEnNb3gMAlOeiw=; b=cAG4J2G4vRuCyjmJAzVLFXFkqb
	x6xUYtVaKO7buvHrDlsmsBnv0eSXVPrdv/jgAyobf9z47HMXpR7eZS2Ndfm7N8XVbYvlTVxgBd7BV
	xJzNfKTW+m6DAib6qnus4owlBoAaxlRkW+da52xVoZBCKITGTJMv80mFPTWw9HZqKqjJfa9MJFp5K
	529SOyMxZDndXZHfB4GbQcXNHNlmAsuPmUBnFSIfT+slkyUJm4aiMtR944TH2x+XHa978xR8Xj9qH
	K7HW2ybBbdqPQfZlwqpDiKLyURUUJxwuFIukYsFvMJO3S2FevCeQcACefxWbeY6Y3eXTfi7H7dzQB
	qNwY0tFg==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8T-0007lo-R0; Wed, 18 Nov 2020 08:48:18 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 09/20] init: cleanup match_dev_by_uuid and match_dev_by_label
Date: Wed, 18 Nov 2020 09:47:49 +0100
Message-Id: <20201118084800.2339180-10-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Avoid a totally pointless goto label, and use the same style of
comparison for both helpers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 init/do_mounts.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/init/do_mounts.c b/init/do_mounts.c
index afa26a4028d25e..5879edf083b318 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -79,15 +79,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	const struct uuidcmp *cmp = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info)
-		goto no_match;
-
-	if (strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
-		goto no_match;
-
+	if (!part->info ||
+	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
+		return 0;
 	return 1;
-no_match:
-	return 0;
 }
 
 /**
@@ -174,10 +169,9 @@ static int match_dev_by_label(struct device *dev, const void *data)
 	const char *label = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (part->info && !strcmp(label, part->info->volname))
-		return 1;
-
-	return 0;
+	if (!part->info || strcmp(label, part->info->volname))
+		return 0;
+	return 1;
 }
 
 static dev_t devt_from_partlabel(const char *label)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:50:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:50:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29477.58955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJA8-0007ur-DC; Wed, 18 Nov 2020 08:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29477.58955; Wed, 18 Nov 2020 08:50:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJA8-0007uj-92; Wed, 18 Nov 2020 08:50:00 +0000
Received: by outflank-mailman (input) for mailman id 29477;
 Wed, 18 Nov 2020 08:49:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9R-0006e0-36
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:17 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 803a142a-5aa8-451f-aa48-d8204aa489ba;
 Wed, 18 Nov 2020 08:48:31 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8V-0007m0-4B; Wed, 18 Nov 2020 08:48:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9R-0006e0-36
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:17 +0000
X-Inumbo-ID: 803a142a-5aa8-451f-aa48-d8204aa489ba
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 803a142a-5aa8-451f-aa48-d8204aa489ba;
	Wed, 18 Nov 2020 08:48:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Hm2+pi+4EcUeDYOHdyHUHSpCkqg13LYEV8plE6Pffx8=; b=F+OqFDrimZDfU9oJC3NE+WAnBj
	7NLGpAAHAxBrMkgV1jR3eCykIQBIYIkgtzV298VqK/hLnjFuqPQGLtD2serG/rjdABz8On3RjiItS
	jkjRT9VHC9QKxUb1fSffe1v27SEXnA5UA7cfOhezqYd+B8+1RmQrgFitIopZN9HgcU4BlpENM55mc
	SmJfzHiBAgDUmmdfkk8wpbWRoWTnUjgqAObiihN+jvvcuIIiGzIpd+Z4cEyFPI9BuVali3hgqEbEh
	14dcstmz4K2XoTMFEhD0nljib+tsh8e84xtfzcg1rp0zf4RvjCV27VRRqa3U1Ob0Zbm1BuetfFb0V
	DB31SJ8A==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8V-0007m0-4B; Wed, 18 Nov 2020 08:48:19 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 10/20] block: refactor __blkdev_put
Date: Wed, 18 Nov 2020 09:47:50 +0100
Message-Id: <20201118084800.2339180-11-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Reorder the code to have one big section for the last close, and to use
bdev_is_partition.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 29db12c3bb501c..4c4d6c30382c06 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1745,22 +1745,22 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		WARN_ON_ONCE(bdev->bd_holders);
 		sync_blockdev(bdev);
 		kill_bdev(bdev);
-
 		bdev_write_inode(bdev);
-	}
-	if (bdev->bd_contains == bdev) {
-		if (disk->fops->release)
+
+		if (!bdev_is_partition(bdev) && disk->fops->release)
 			disk->fops->release(disk, mode);
-	}
-	if (!bdev->bd_openers) {
+
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
 		bdev->bd_disk = NULL;
-		if (bdev != bdev->bd_contains)
+		if (bdev_is_partition(bdev))
 			victim = bdev->bd_contains;
 		bdev->bd_contains = NULL;
 
 		put_disk_and_module(disk);
+	} else {
+		if (!bdev_is_partition(bdev) && disk->fops->release)
+			disk->fops->release(disk, mode);
 	}
 	mutex_unlock(&bdev->bd_mutex);
 	bdput(bdev);
-- 
2.29.2
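[Editor's note: a minimal, self-contained model of the reordered last-close logic in the patch above, for illustration only. The structures, the `released` counter, and `last_close()` are hypothetical stand-ins, not kernel code; `bdev_is_partition()` is mimicked by comparing a device against its container, matching the substitution the patch makes for the old `bdev->bd_contains == bdev` checks.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures (sketch only). */
struct block_device {
	struct block_device *bd_contains;	/* whole device; equals self for a whole device */
	int bd_openers;
	int released;				/* counts fops->release invocations */
};

static bool bdev_is_partition(struct block_device *bdev)
{
	return bdev->bd_contains != bdev;
}

/*
 * Models the refactored shape of __blkdev_put: one big section for the
 * last close, with release called for whole devices in both the last-close
 * path and the still-open path.  Returns the "victim" container that a
 * partition hands back on its last close (the caller must put it), or NULL.
 */
static struct block_device *last_close(struct block_device *bdev)
{
	struct block_device *victim = NULL;

	if (--bdev->bd_openers == 0) {
		if (!bdev_is_partition(bdev))
			bdev->released++;
		if (bdev_is_partition(bdev))
			victim = bdev->bd_contains;
		bdev->bd_contains = NULL;
	} else {
		if (!bdev_is_partition(bdev))
			bdev->released++;
	}
	return victim;
}
```

In this model, the last close of a partition yields its container as the victim without touching release, while the last close of a whole device calls release exactly once, mirroring the two branches of the restructured function.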



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:50:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:50:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29481.58967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJAC-0008Gy-Nx; Wed, 18 Nov 2020 08:50:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29481.58967; Wed, 18 Nov 2020 08:50:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJAC-0008GK-JB; Wed, 18 Nov 2020 08:50:04 +0000
Received: by outflank-mailman (input) for mailman id 29481;
 Wed, 18 Nov 2020 08:50:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9W-0006e0-3M
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:22 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75407356-2982-401d-9af1-f10d91e89832;
 Wed, 18 Nov 2020 08:48:32 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8Y-0007mT-3L; Wed, 18 Nov 2020 08:48:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9W-0006e0-3M
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:22 +0000
X-Inumbo-ID: 75407356-2982-401d-9af1-f10d91e89832
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 75407356-2982-401d-9af1-f10d91e89832;
	Wed, 18 Nov 2020 08:48:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=FT4Z5N98GPcubFAGww0jhwPW9F0Di1Byp+gvVq3gb/s=; b=L51yZ/O95Y/bHdFQrdlh9XRfK6
	86E/t3LLdpxXx/N8cYklXxJNmv11jQH9rjzRQTPaU0UGJE3IuyS2UYG5fylg/OJZaXCYSXhoboOs/
	n4h+G4acgA/01wHmgy3n3xnphU5prBfDszk1GSDowgN/R1N6LAFg1Zpny3KhoVer1f8X6IMhBwO2a
	Xj26Fuo2bK70pxLUE7na7K9TRIoh5K+M07lptcyCErjEayu9RJo9mCLv2iZBZL7ydP0Qqv0yrgX41
	WVrQ9DsoftXmRZb7G/HL9+rZAb46BUO1NdTcS1MHouuCE9aksnM6hKJBd3C1zTMplL50RTKz5JvTh
	Sc3RbL7w==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8Y-0007mT-3L; Wed, 18 Nov 2020 08:48:22 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 12/20] block: simplify the block device claiming interface
Date: Wed, 18 Nov 2020 09:47:52 +0100
Message-Id: <20201118084800.2339180-13-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Stop passing the whole device as a separate argument given that it
can be trivially deduced.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c   | 12 +++-----
 fs/block_dev.c         | 69 +++++++++++++++++++-----------------------
 include/linux/blkdev.h |  6 ++--
 3 files changed, 38 insertions(+), 49 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index b42c728620c9e4..599e94a7e69259 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1071,7 +1071,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	struct file	*file;
 	struct inode	*inode;
 	struct address_space *mapping;
-	struct block_device *claimed_bdev = NULL;
 	int		error;
 	loff_t		size;
 	bool		partscan;
@@ -1090,8 +1089,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	 * here to avoid changing device under exclusive owner.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
-		error = bd_prepare_to_claim(bdev, claimed_bdev, loop_configure);
+		error = bd_prepare_to_claim(bdev, loop_configure);
 		if (error)
 			goto out_putf;
 	}
@@ -1178,15 +1176,15 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	mutex_unlock(&loop_ctl_mutex);
 	if (partscan)
 		loop_reread_partitions(lo, bdev);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 	return 0;
 
 out_unlock:
 	mutex_unlock(&loop_ctl_mutex);
 out_bdev:
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 out_putf:
 	fput(file);
 out:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index e94633dc6ad93b..dd52dbd266cde7 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -110,24 +110,20 @@ EXPORT_SYMBOL(invalidate_bdev);
 int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
 			loff_t lstart, loff_t lend)
 {
-	struct block_device *claimed_bdev = NULL;
-	int err;
-
 	/*
 	 * If we don't hold exclusive handle for the device, upgrade to it
 	 * while we discard the buffer cache to avoid discarding buffers
 	 * under live filesystem.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
-		err = bd_prepare_to_claim(bdev, claimed_bdev,
-					  truncate_bdev_range);
+		int err = bd_prepare_to_claim(bdev, truncate_bdev_range);
 		if (err)
 			return err;
 	}
+
 	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, truncate_bdev_range);
 	return 0;
 }
 EXPORT_SYMBOL(truncate_bdev_range);
@@ -1047,7 +1043,6 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
 /**
  * bd_prepare_to_claim - claim a block device
  * @bdev: block device of interest
- * @whole: the whole device containing @bdev, may equal @bdev
  * @holder: holder trying to claim @bdev
  *
  * Claim @bdev.  This function fails if @bdev is already claimed by another
@@ -1057,9 +1052,10 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
  * RETURNS:
  * 0 if @bdev can be claimed, -EBUSY otherwise.
  */
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder)
+int bd_prepare_to_claim(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev->bd_contains;
+
 retry:
 	spin_lock(&bdev_lock);
 	/* if someone else claimed, fail */
@@ -1099,15 +1095,15 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
 /**
  * bd_finish_claiming - finish claiming of a block device
  * @bdev: block device of interest
- * @whole: whole block device
  * @holder: holder that has claimed @bdev
  *
  * Finish exclusive open of a block device. Mark the device as exlusively
  * open by the holder and wake up all waiters for exclusive open to finish.
  */
-static void bd_finish_claiming(struct block_device *bdev,
-		struct block_device *whole, void *holder)
+static void bd_finish_claiming(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev->bd_contains;
+
 	spin_lock(&bdev_lock);
 	BUG_ON(!bd_may_claim(bdev, whole, holder));
 	/*
@@ -1132,11 +1128,10 @@ static void bd_finish_claiming(struct block_device *bdev,
  * also used when exclusive open is not actually desired and we just needed
  * to block other exclusive openers for a while.
  */
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		       void *holder)
+void bd_abort_claiming(struct block_device *bdev, void *holder)
 {
 	spin_lock(&bdev_lock);
-	bd_clear_claiming(whole, holder);
+	bd_clear_claiming(bdev->bd_contains, holder);
 	spin_unlock(&bdev_lock);
 }
 EXPORT_SYMBOL(bd_abort_claiming);
@@ -1434,7 +1429,7 @@ static void put_disk_and_module(struct gendisk *disk)
 static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		int for_part)
 {
-	struct block_device *whole = NULL, *claiming = NULL;
+	struct block_device *whole = NULL;
 	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 	bool first_open = false, unblock_events = true, need_restart;
@@ -1462,11 +1457,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 
 	if (!for_part && (mode & FMODE_EXCL)) {
 		WARN_ON_ONCE(!holder);
-		if (whole)
-			claiming = whole;
-		else
-			claiming = bdev;
-		ret = bd_prepare_to_claim(bdev, claiming, holder);
+		ret = bd_prepare_to_claim(bdev, holder);
 		if (ret)
 			goto out_put_whole;
 	}
@@ -1543,21 +1534,23 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		}
 	}
 	bdev->bd_openers++;
-	if (for_part)
+	if (for_part) {
 		bdev->bd_part_count++;
-	if (claiming)
-		bd_finish_claiming(bdev, claiming, holder);
+	} else if (mode & FMODE_EXCL) {
+		bd_finish_claiming(bdev, holder);
 
-	/*
-	 * Block event polling for write claims if requested.  Any write holder
-	 * makes the write_holder state stick until all are released.  This is
-	 * good enough and tracking individual writeable reference is too
-	 * fragile given the way @mode is used in blkdev_get/put().
-	 */
-	if (claiming && (mode & FMODE_WRITE) && !bdev->bd_write_holder &&
-	    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
-		bdev->bd_write_holder = true;
-		unblock_events = false;
+		/*
+		 * Block event polling for write claims if requested.  Any write
+		 * holder makes the write_holder state stick until all are
+		 * released.  This is good enough and tracking individual
+		 * writeable reference is too fragile given the way @mode is
+		 * used in blkdev_get/put().
+		 */
+		if ((mode & FMODE_WRITE) && !bdev->bd_write_holder &&
+		    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
+			bdev->bd_write_holder = true;
+			unblock_events = false;
+		}
 	}
 	mutex_unlock(&bdev->bd_mutex);
 
@@ -1578,8 +1571,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		__blkdev_put(bdev->bd_contains, mode, 1);
 	bdev->bd_contains = NULL;
  out_unlock_bdev:
-	if (claiming)
-		bd_abort_claiming(bdev, claiming, holder);
+	if (!for_part && (mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, holder);
 	mutex_unlock(&bdev->bd_mutex);
 	disk_unblock_events(disk);
  out_put_whole:
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 044d9dd159d882..696b2f9c5529d8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1988,10 +1988,8 @@ void blkdev_show(struct seq_file *seqf, off_t offset);
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 		void *holder);
 struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder);
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder);
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		void *holder);
+int bd_prepare_to_claim(struct block_device *bdev, void *holder);
+void bd_abort_claiming(struct block_device *bdev, void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 
 struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
-- 
2.29.2
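[Editor's note: a minimal, self-contained sketch of the interface simplification in the patch above, for illustration only. The structures and error handling are hypothetical stand-ins, not the kernel implementation; the point shown is that the claiming helpers can derive the whole device from `bdev->bd_contains` themselves, so callers no longer pass it.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures (sketch only). */
struct block_device {
	struct block_device *bd_contains;	/* whole device; equals self for a whole device */
	void *claimed_by;			/* current exclusive holder, if any */
};

/*
 * After the patch, the claim helpers take only (bdev, holder); the whole
 * device is derived internally instead of being threaded through every
 * caller as a third argument.
 */
static int bd_prepare_to_claim(struct block_device *bdev, void *holder)
{
	struct block_device *whole = bdev->bd_contains;

	if (whole->claimed_by && whole->claimed_by != holder)
		return -16;	/* -EBUSY */
	whole->claimed_by = holder;
	return 0;
}

static void bd_abort_claiming(struct block_device *bdev, void *holder)
{
	if (bdev->bd_contains->claimed_by == holder)
		bdev->bd_contains->claimed_by = NULL;
}
```

A caller that previously wrote `bd_prepare_to_claim(bdev, bdev->bd_contains, holder)` now just passes `(bdev, holder)`, which is exactly the shape of the loop and truncate_bdev_range hunks above.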



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:50:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:50:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29482.58979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJAE-0008QW-12; Wed, 18 Nov 2020 08:50:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29482.58979; Wed, 18 Nov 2020 08:50:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJAD-0008Q3-TF; Wed, 18 Nov 2020 08:50:05 +0000
Received: by outflank-mailman (input) for mailman id 29482;
 Wed, 18 Nov 2020 08:50:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9l-0006e0-3z
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:37 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc7fe8f5-3500-44d5-a9af-1bbac2ee93b2;
 Wed, 18 Nov 2020 08:48:36 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8b-0007nN-8G; Wed, 18 Nov 2020 08:48:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJ9l-0006e0-3z
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:37 +0000
X-Inumbo-ID: dc7fe8f5-3500-44d5-a9af-1bbac2ee93b2
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id dc7fe8f5-3500-44d5-a9af-1bbac2ee93b2;
	Wed, 18 Nov 2020 08:48:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=JoiSfNakOoDN8Js/5ilIUu4DQg1IjnPdXCdcwWPUdmw=; b=rGa6t8Dx6NmDPeK2nXbzeM+sdR
	X577T39+JSmItnrefwvdRje6hDp4y/9gKNP/saei25Opd0rFvwDIuXdTVo0rNKiefAW2kX3dMfdRh
	jrPVEd1CdvKIBTcF+1ST7TLXQsBI5HbrgCmQiLnXQt6AfSUm8CmqThUBw9j0QR3ZDPiIMPDL+xwEp
	Mjg5eIu87Q2U5qFnyXTSzpRRQhBATBUh3DUpslR8frMFPyB6U9dRCnCOwDYhygtNBT8qt/6qiFl6U
	AG9ZNHGE+Np9sC/e819nMnnDd0hXxoqVIsN4vshEtrYteyAgH2TG40iyiJ+YlX92ZeiSfB54jbZou
	L/UfiRCw==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8b-0007nN-8G; Wed, 18 Nov 2020 08:48:25 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 14/20] block: remove the nr_sects field in struct hd_struct
Date: Wed, 18 Nov 2020 09:47:54 +0100
Message-Id: <20201118084800.2339180-15-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that the hd_struct always has a block device attached to it, there is
no need for two size fields that just get out of sync.

Additionally, the field in hd_struct did not use proper serialization,
possibly allowing for torn writes.  Using only the block_device field
fixes this problem as well.
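
[Editor's note: a minimal, self-contained model of the single-size-field scheme described above, for illustration only. The structure, `bd_size`, and the pthread mutex are hypothetical stand-ins for `bd_inode->i_size` and `bd_size_lock`; the kernel uses `i_size_read()`/`i_size_write()` rather than a plain field.]

```c
#include <assert.h>
#include <pthread.h>

#define SECTOR_SHIFT 9

/* Hypothetical stand-in: the inode size is the single source of truth. */
struct block_device {
	long long bd_size;		/* models bd_inode->i_size, in bytes */
	pthread_mutex_t bd_size_lock;	/* serializes updates against torn writes */
};

/* Writers update the one authoritative size under the lock... */
static void bdev_set_nr_sectors(struct block_device *bdev, long long sectors)
{
	pthread_mutex_lock(&bdev->bd_size_lock);
	bdev->bd_size = sectors << SECTOR_SHIFT;	/* i_size_write() in the kernel */
	pthread_mutex_unlock(&bdev->bd_size_lock);
}

/* ...and readers derive the sector count from it, so nothing can skew. */
static long long bdev_nr_sectors(struct block_device *bdev)
{
	return bdev->bd_size >> SECTOR_SHIFT;		/* i_size_read() in the kernel */
}
```

With only one field, the hd_struct/bdev divergence the patch removes cannot occur by construction.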

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c                        |  2 +-
 block/blk-core.c                   |  2 +-
 block/blk.h                        | 53 ----------------------
 block/genhd.c                      | 34 +++++++-------
 block/partitions/core.c            | 17 ++++---
 drivers/block/loop.c               |  1 -
 drivers/block/nbd.c                |  2 +-
 drivers/block/xen-blkback/common.h |  4 +-
 drivers/md/bcache/super.c          |  2 +-
 drivers/s390/block/dasd_ioctl.c    |  4 +-
 drivers/target/target_core_pscsi.c |  7 +--
 fs/block_dev.c                     | 73 +-----------------------------
 fs/f2fs/super.c                    |  2 +-
 fs/pstore/blk.c                    |  2 +-
 include/linux/genhd.h              | 29 +++---------
 kernel/trace/blktrace.c            |  2 +-
 16 files changed, 47 insertions(+), 189 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fa01bef35bb1fe..0c5269997434d6 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -613,7 +613,7 @@ void guard_bio_eod(struct bio *bio)
 	rcu_read_lock();
 	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
 	if (part)
-		maxsector = part_nr_sects_read(part);
+		maxsector = bdev_nr_sectors(part->bdev);
 	else
 		maxsector = get_capacity(bio->bi_disk);
 	rcu_read_unlock();
diff --git a/block/blk-core.c b/block/blk-core.c
index 2db8bda43b6e6d..988f45094a387b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -755,7 +755,7 @@ static inline int blk_partition_remap(struct bio *bio)
 		goto out;
 
 	if (bio_sectors(bio)) {
-		if (bio_check_eod(bio, part_nr_sects_read(p)))
+		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
 			goto out;
 		bio->bi_iter.bi_sector += p->start_sect;
 		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
diff --git a/block/blk.h b/block/blk.h
index c4839abcfa27eb..09cee7024fb43e 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -387,59 +387,6 @@ static inline void hd_free_part(struct hd_struct *part)
 	percpu_ref_exit(&part->ref);
 }
 
-/*
- * Any access of part->nr_sects which is not protected by partition
- * bd_mutex or gendisk bdev bd_mutex, should be done using this
- * accessor function.
- *
- * Code written along the lines of i_size_read() and i_size_write().
- * CONFIG_PREEMPTION case optimizes the case of UP kernel with preemption
- * on.
- */
-static inline sector_t part_nr_sects_read(struct hd_struct *part)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	sector_t nr_sects;
-	unsigned seq;
-	do {
-		seq = read_seqcount_begin(&part->nr_sects_seq);
-		nr_sects = part->nr_sects;
-	} while (read_seqcount_retry(&part->nr_sects_seq, seq));
-	return nr_sects;
-#elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
-	sector_t nr_sects;
-
-	preempt_disable();
-	nr_sects = part->nr_sects;
-	preempt_enable();
-	return nr_sects;
-#else
-	return part->nr_sects;
-#endif
-}
-
-/*
- * Should be called with mutex lock held (typically bd_mutex) of partition
- * to provide mutual exlusion among writers otherwise seqcount might be
- * left in wrong state leaving the readers spinning infinitely.
- */
-static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	preempt_disable();
-	write_seqcount_begin(&part->nr_sects_seq);
-	part->nr_sects = size;
-	write_seqcount_end(&part->nr_sects_seq);
-	preempt_enable();
-#elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
-	preempt_disable();
-	part->nr_sects = size;
-	preempt_enable();
-#else
-	part->nr_sects = size;
-#endif
-}
-
 int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page);
diff --git a/block/genhd.c b/block/genhd.c
index 94de95287a6370..e101b6843f7437 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -38,6 +38,16 @@ static void disk_add_events(struct gendisk *disk);
 static void disk_del_events(struct gendisk *disk);
 static void disk_release_events(struct gendisk *disk);
 
+void set_capacity(struct gendisk *disk, sector_t sectors)
+{
+	struct block_device *bdev = disk->part0.bdev;
+
+	spin_lock(&bdev->bd_size_lock);
+	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
+	spin_unlock(&bdev->bd_size_lock);
+}
+EXPORT_SYMBOL(set_capacity);
+
 /*
  * Set disk capacity and notify if the size is not currently zero and will not
  * be set to zero.  Returns true if a uevent was sent, otherwise false.
@@ -47,11 +57,12 @@ bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
 	sector_t capacity = get_capacity(disk);
 
 	set_capacity(disk, size);
-	revalidate_disk_size(disk, true);
 
 	if (capacity != size && capacity != 0 && size != 0) {
 		char *envp[] = { "RESIZE=1", NULL };
 
+		pr_info("%s: detected capacity change from %lld to %lld\n",
+		       disk->disk_name, capacity, size);
 		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
 		return true;
 	}
@@ -245,7 +256,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 		part = rcu_dereference(ptbl->part[piter->idx]);
 		if (!part)
 			continue;
-		if (!part_nr_sects_read(part) &&
+		if (!bdev_nr_sectors(part->bdev) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
 		      piter->idx == 0))
@@ -282,7 +293,7 @@ EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 static inline int sector_in_part(struct hd_struct *part, sector_t sector)
 {
 	return part->start_sect <= sector &&
-		sector < part->start_sect + part_nr_sects_read(part);
+		sector < part->start_sect + bdev_nr_sectors(part->bdev);
 }
 
 /**
@@ -983,7 +994,7 @@ void __init printk_all_partitions(void)
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
 			       bdevt_str(part_devt(part), devt_buf),
-			       (unsigned long long)part_nr_sects_read(part) >> 1
+			       bdev_nr_sectors(part->bdev) >> 1
 			       , disk_name(disk, part->partno, name_buf),
 			       part->info ? part->info->uuid : "");
 			if (is_part0) {
@@ -1076,7 +1087,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 	while ((part = disk_part_iter_next(&piter)))
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
 			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
-			   (unsigned long long)part_nr_sects_read(part) >> 1,
+			   bdev_nr_sectors(part->bdev) >> 1,
 			   disk_name(sgp, part->partno, buf));
 	disk_part_iter_exit(&piter);
 
@@ -1158,8 +1169,7 @@ ssize_t part_size_show(struct device *dev,
 {
 	struct hd_struct *p = dev_to_part(dev);
 
-	return sprintf(buf, "%llu\n",
-		(unsigned long long)part_nr_sects_read(p));
+	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
 }
 
 ssize_t part_stat_show(struct device *dev,
@@ -1616,16 +1626,6 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
 	rcu_assign_pointer(ptbl->part[0], &disk->part0);
 
-	/*
-	 * set_capacity() and get_capacity() currently don't use
-	 * seqcounter to read/update the part0->nr_sects. Still init
-	 * the counter as we can read the sectors in IO submission
-	 * patch using seqence counters.
-	 *
-	 * TODO: Ideally set_capacity() and get_capacity() should be
-	 * converted to make use of bd_mutex and sequence counters.
-	 */
-	hd_sects_seq_init(&disk->part0);
 	if (hd_ref_init(&disk->part0)) {
 		hd_free_part(&disk->part0);
 		return NULL;
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 0ba0bf44b88af3..aae857c22af05d 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -85,6 +85,13 @@ static int (*check_part[])(struct parsed_partitions *) = {
 	NULL
 };
 
+static void bdev_set_nr_sectors(struct block_device *bdev, sector_t sectors)
+{
+	spin_lock(&bdev->bd_size_lock);
+	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
+	spin_unlock(&bdev->bd_size_lock);
+}
+
 static struct parsed_partitions *allocate_partitions(struct gendisk *hd)
 {
 	struct parsed_partitions *state;
@@ -295,7 +302,7 @@ static void hd_struct_free_work(struct work_struct *work)
 	put_device(disk_to_dev(disk));
 
 	part->start_sect = 0;
-	part->nr_sects = 0;
+	bdev_set_nr_sectors(part->bdev, 0);
 	part_stat_set_all(part, 0);
 	put_device(part_to_dev(part));
 }
@@ -410,11 +417,10 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!p->bdev)
 		goto out_free_stats;
 
-	hd_sects_seq_init(p);
 	pdev = part_to_dev(p);
 
 	p->start_sect = start;
-	p->nr_sects = len;
+	bdev_set_nr_sectors(p->bdev, len);
 	p->partno = partno;
 	p->policy = get_disk_ro(disk);
 
@@ -508,7 +514,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
 		if (part->partno == skip_partno ||
-		    start >= part->start_sect + part->nr_sects ||
+		    start >= part->start_sect + bdev_nr_sectors(part->bdev) ||
 		    start + length <= part->start_sect)
 			continue;
 		overlap = true;
@@ -599,8 +605,7 @@ int bdev_resize_partition(struct block_device *bdev, int partno,
 	if (partition_overlaps(bdev->bd_disk, start, length, partno))
 		goto out_unlock;
 
-	part_nr_sects_write(part, length);
-	bd_set_nr_sectors(bdevp, length);
+	bdev_set_nr_sectors(bdevp, length);
 
 	ret = 0;
 out_unlock:
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 599e94a7e69259..9d2587f6167cd8 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1243,7 +1243,6 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 	set_capacity(lo->lo_disk, 0);
 	loop_sysfs_exit(lo);
 	if (bdev) {
-		bd_set_nr_sectors(bdev, 0);
 		/* let user-space know about this change */
 		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
 	}
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 45b0423ef2c53d..014683968ce174 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1132,7 +1132,7 @@ static void nbd_bdev_reset(struct block_device *bdev)
 {
 	if (bdev->bd_openers > 1)
 		return;
-	bd_set_nr_sectors(bdev, 0);
+	set_capacity(bdev->bd_disk, 0);
 }
 
 static void nbd_parse_flags(struct nbd_device *nbd)
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index c6ea5d38c509a6..0762db247b41b3 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -358,9 +358,7 @@ struct pending_req {
 };
 
 
-#define vbd_sz(_v)	((_v)->bdev->bd_part ? \
-			 (_v)->bdev->bd_part->nr_sects : \
-			  get_capacity((_v)->bdev->bd_disk))
+#define vbd_sz(_v)	bdev_nr_sectors((_v)->bdev)
 
 #define xen_blkif_get(_b) (atomic_inc(&(_b)->refcnt))
 #define xen_blkif_put(_b)				\
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a6a5e21e4fd136..e5db2cdd114112 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1408,7 +1408,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
 			q->limits.raid_partial_stripes_expensive;
 
 	ret = bcache_device_init(&dc->disk, block_size,
-			 dc->bdev->bd_part->nr_sects - dc->sb.data_offset,
+			 bdev_nr_sectors(dc->bdev) - dc->sb.data_offset,
 			 dc->bdev, &bcache_cached_ops);
 	if (ret)
 		return ret;
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index 3359559517bfcf..304eba1acf163c 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -54,8 +54,6 @@ dasd_ioctl_enable(struct block_device *bdev)
 		return -ENODEV;
 
 	dasd_enable_device(base);
-	/* Formatting the dasd device can change the capacity. */
-	bd_set_nr_sectors(bdev, get_capacity(base->block->gdp));
 	dasd_put_device(base);
 	return 0;
 }
@@ -88,7 +86,7 @@ dasd_ioctl_disable(struct block_device *bdev)
 	 * Set i_size to zero, since read, write, etc. check against this
 	 * value.
 	 */
-	bd_set_nr_sectors(bdev, 0);
+	set_capacity(bdev->bd_disk, 0);
 	dasd_put_device(base);
 	return 0;
 }
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 4e37fa9b409d52..a70c33c49f0960 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -1027,12 +1027,7 @@ static u32 pscsi_get_device_type(struct se_device *dev)
 
 static sector_t pscsi_get_blocks(struct se_device *dev)
 {
-	struct pscsi_dev_virt *pdv = PSCSI_DEV(dev);
-
-	if (pdv->pdv_bd && pdv->pdv_bd->bd_part)
-		return pdv->pdv_bd->bd_part->nr_sects;
-
-	return 0;
+	return bdev_nr_sectors(PSCSI_DEV(dev)->pdv_bd);
 }
 
 static void pscsi_req_done(struct request *req, blk_status_t status)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 258a1ced924483..a5a2ac4ca1ce9c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1278,70 +1278,6 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 EXPORT_SYMBOL_GPL(bd_unlink_disk_holder);
 #endif
 
-/**
- * check_disk_size_change - checks for disk size change and adjusts bdev size.
- * @disk: struct gendisk to check
- * @bdev: struct bdev to adjust.
- * @verbose: if %true log a message about a size change if there is any
- *
- * This routine checks to see if the bdev size does not match the disk size
- * and adjusts it if it differs. When shrinking the bdev size, its all caches
- * are freed.
- */
-static void check_disk_size_change(struct gendisk *disk,
-		struct block_device *bdev, bool verbose)
-{
-	loff_t disk_size, bdev_size;
-
-	spin_lock(&bdev->bd_size_lock);
-	disk_size = (loff_t)get_capacity(disk) << 9;
-	bdev_size = i_size_read(bdev->bd_inode);
-	if (disk_size != bdev_size) {
-		if (verbose) {
-			printk(KERN_INFO
-			       "%s: detected capacity change from %lld to %lld\n",
-			       disk->disk_name, bdev_size, disk_size);
-		}
-		i_size_write(bdev->bd_inode, disk_size);
-	}
-	spin_unlock(&bdev->bd_size_lock);
-}
-
-/**
- * revalidate_disk_size - checks for disk size change and adjusts bdev size.
- * @disk: struct gendisk to check
- * @verbose: if %true log a message about a size change if there is any
- *
- * This routine checks to see if the bdev size does not match the disk size
- * and adjusts it if it differs. When shrinking the bdev size, its all caches
- * are freed.
- */
-void revalidate_disk_size(struct gendisk *disk, bool verbose)
-{
-	struct block_device *bdev;
-
-	/*
-	 * Hidden disks don't have associated bdev so there's no point in
-	 * revalidating them.
-	 */
-	if (disk->flags & GENHD_FL_HIDDEN)
-		return;
-
-	bdev = bdget_disk(disk, 0);
-	if (bdev) {
-		check_disk_size_change(disk, bdev, verbose);
-		bdput(bdev);
-	}
-}
-
-void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
-{
-	spin_lock(&bdev->bd_size_lock);
-	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
-	spin_unlock(&bdev->bd_size_lock);
-}
-EXPORT_SYMBOL(bd_set_nr_sectors);
-
 static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part);
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate)
@@ -1375,8 +1311,6 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
 			disk->fops->revalidate_disk(disk);
 	}
 
-	check_disk_size_change(disk, bdev, !invalidate);
-
 	if (get_capacity(disk)) {
 		ret = blk_add_partitions(disk, bdev);
 		if (ret == -EAGAIN)
@@ -1474,10 +1408,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 					need_restart = true;
 			}
 
-			if (!ret) {
-				bd_set_nr_sectors(bdev, get_capacity(disk));
+			if (!ret)
 				set_init_blocksize(bdev);
-			}
 
 			/*
 			 * If the device is invalidated, rescan partition
@@ -1499,11 +1431,10 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			bdgrab(bdev_whole(bdev));
 			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
-			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
+			    !bdev->bd_part || !bdev_nr_sectors(bdev)) {
 				ret = -ENXIO;
 				goto out_clear;
 			}
-			bd_set_nr_sectors(bdev, bdev->bd_part->nr_sects);
 			set_init_blocksize(bdev);
 		}
 
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 00eff2f5180790..d4e7fab352bacb 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3151,7 +3151,7 @@ static int f2fs_report_zone_cb(struct blk_zone *zone, unsigned int idx,
 static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
 {
 	struct block_device *bdev = FDEV(devi).bdev;
-	sector_t nr_sectors = bdev->bd_part->nr_sects;
+	sector_t nr_sectors = bdev_nr_sectors(bdev);
 	struct f2fs_report_zones_args rep_zone_arg;
 	int ret;
 
diff --git a/fs/pstore/blk.c b/fs/pstore/blk.c
index fcd5563dde063c..777a26f7bbe2aa 100644
--- a/fs/pstore/blk.c
+++ b/fs/pstore/blk.c
@@ -245,7 +245,7 @@ static struct block_device *psblk_get_bdev(void *holder,
 			return bdev;
 	}
 
-	nr_sects = part_nr_sects_read(bdev->bd_part);
+	nr_sects = bdev_nr_sectors(bdev);
 	if (!nr_sects) {
 		pr_err("not enough space for '%s'\n", blkdev);
 		blkdev_put(bdev, mode);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index ab5fca99764e7a..e01618dfafc05c 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -52,15 +52,6 @@ struct partition_meta_info {
 
 struct hd_struct {
 	sector_t start_sect;
-	/*
-	 * nr_sects is protected by sequence counter. One might extend a
-	 * partition while IO is happening to it and update of nr_sects
-	 * can be non-atomic on 32bit machines with 64bit sector_t.
-	 */
-	sector_t nr_sects;
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	seqcount_t nr_sects_seq;
-#endif
 	unsigned long stamp;
 	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
@@ -259,13 +250,6 @@ static inline void disk_put_part(struct hd_struct *part)
 		put_device(part_to_dev(part));
 }
 
-static inline void hd_sects_seq_init(struct hd_struct *p)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	seqcount_init(&p->nr_sects_seq);
-#endif
-}
-
 /*
  * Smarter partition iterator without context limits.
  */
@@ -323,13 +307,15 @@ static inline sector_t get_start_sect(struct block_device *bdev)
 {
 	return bdev->bd_part->start_sect;
 }
-static inline sector_t get_capacity(struct gendisk *disk)
+
+static inline sector_t bdev_nr_sectors(struct block_device *bdev)
 {
-	return disk->part0.nr_sects;
+	return i_size_read(bdev->bd_inode) >> 9;
 }
-static inline void set_capacity(struct gendisk *disk, sector_t size)
+
+static inline sector_t get_capacity(struct gendisk *disk)
 {
-	disk->part0.nr_sects = size;
+	return bdev_nr_sectors(disk->part0.bdev);
 }
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate);
@@ -363,10 +349,9 @@ int __register_blkdev(unsigned int major, const char *name,
 	__register_blkdev(major, name, NULL)
 void unregister_blkdev(unsigned int major, const char *name);
 
-void revalidate_disk_size(struct gendisk *disk, bool verbose);
 bool bdev_check_media_change(struct block_device *bdev);
 int __invalidate_device(struct block_device *bdev, bool kill_dirty);
-void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors);
+void set_capacity(struct gendisk *disk, sector_t size);
 
 /* for drivers/char/raw.c: */
 int blkdev_ioctl(struct block_device *, fmode_t, unsigned, unsigned long);
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index f1022945e3460b..7076d588a50d69 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -465,7 +465,7 @@ static void blk_trace_setup_lba(struct blk_trace *bt,
 
 	if (part) {
 		bt->start_lba = part->start_sect;
-		bt->end_lba = part->start_sect + part->nr_sects;
+		bt->end_lba = part->start_sect + bdev_nr_sectors(bdev);
 	} else {
 		bt->start_lba = 0;
 		bt->end_lba = -1ULL;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:55:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:55:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29515.58990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJF1-0000qo-Q9; Wed, 18 Nov 2020 08:55:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29515.58990; Wed, 18 Nov 2020 08:55:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJF1-0000qh-NB; Wed, 18 Nov 2020 08:55:03 +0000
Received: by outflank-mailman (input) for mailman id 29515;
 Wed, 18 Nov 2020 08:55:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fGLJ=EY=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1kfJF0-0000qc-9A
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:55:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e064198-e2d9-4ea2-bcd2-2901ae9ddd47;
 Wed, 18 Nov 2020 08:54:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 65CA0AD45;
 Wed, 18 Nov 2020 08:54:57 +0000 (UTC)
X-Inumbo-ID: 4e064198-e2d9-4ea2-bcd2-2901ae9ddd47
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-20-hch@lst.de>
From: Coly Li <colyli@suse.de>
Subject: Re: [PATCH 19/20] bcache: remove a superfluous lookup_bdev call
Message-ID: <e7f826fd-cb9c-b4ab-fae8-dad398c14eed@suse.de>
Date: Wed, 18 Nov 2020 16:54:51 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201118084800.2339180-20-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/18/20 4:47 PM, Christoph Hellwig wrote:
> Don't bother to call lookup_bdev for just a slightly different error
> message without any functional change.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Hi Christoph,

NACK. The error message being removed here is frequently triggered and
observed, and distinguishing a busy device from an already registered
device is important (the first one is a critical error and the second
one is not).

Removing this error message would be a functional regression.

Coly Li

> ---
>  drivers/md/bcache/super.c | 44 +--------------------------------------
>  1 file changed, 1 insertion(+), 43 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index e5db2cdd114112..5c531ed7785280 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -2380,40 +2380,6 @@ kobj_attribute_write(register,		register_bcache);
>  kobj_attribute_write(register_quiet,	register_bcache);
>  kobj_attribute_write(pendings_cleanup,	bch_pending_bdevs_cleanup);
>  
> -static bool bch_is_open_backing(struct block_device *bdev)
> -{
> -	struct cache_set *c, *tc;
> -	struct cached_dev *dc, *t;
> -
> -	list_for_each_entry_safe(c, tc, &bch_cache_sets, list)
> -		list_for_each_entry_safe(dc, t, &c->cached_devs, list)
> -			if (dc->bdev == bdev)
> -				return true;
> -	list_for_each_entry_safe(dc, t, &uncached_devices, list)
> -		if (dc->bdev == bdev)
> -			return true;
> -	return false;
> -}
> -
> -static bool bch_is_open_cache(struct block_device *bdev)
> -{
> -	struct cache_set *c, *tc;
> -
> -	list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
> -		struct cache *ca = c->cache;
> -
> -		if (ca->bdev == bdev)
> -			return true;
> -	}
> -
> -	return false;
> -}
> -
> -static bool bch_is_open(struct block_device *bdev)
> -{
> -	return bch_is_open_cache(bdev) || bch_is_open_backing(bdev);
> -}
> -
>  struct async_reg_args {
>  	struct delayed_work reg_work;
>  	char *path;
> @@ -2535,15 +2501,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
>  				  sb);
>  	if (IS_ERR(bdev)) {
>  		if (bdev == ERR_PTR(-EBUSY)) {
> -			bdev = lookup_bdev(strim(path));
> -			mutex_lock(&bch_register_lock);
> -			if (!IS_ERR(bdev) && bch_is_open(bdev))
> -				err = "device already registered";
> -			else
> -				err = "device busy";
> -			mutex_unlock(&bch_register_lock);
> -			if (!IS_ERR(bdev))
> -				bdput(bdev);
> +			err = "device busy";
>  			if (attr == &ksysfs_register_quiet)
>  				goto done;
>  		}
> 





From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:56:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29519.59003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJGB-0000xs-4R; Wed, 18 Nov 2020 08:56:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29519.59003; Wed, 18 Nov 2020 08:56:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJGB-0000xl-1Q; Wed, 18 Nov 2020 08:56:15 +0000
Received: by outflank-mailman (input) for mailman id 29519;
 Wed, 18 Nov 2020 08:56:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJG9-0000xg-Vs
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:56:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dcd3706f-537d-4b1f-a31a-79ff2e86d009;
 Wed, 18 Nov 2020 08:56:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6417AD2F;
 Wed, 18 Nov 2020 08:56:11 +0000 (UTC)
X-Inumbo-ID: dcd3706f-537d-4b1f-a31a-79ff2e86d009
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605689772; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7igakkmhEI4CwNFDZCT70CUptcTv+K/s+YCPQmQlu9Q=;
	b=Y++3q5UJgDUWDlWdeDn6mQNNYdWrbUeFS91t9OchR29jrBjxggFIPk/Zt9w7d5o8Y5Vame
	8OeLl6cMI/636YtXiKuW5EanOqCgIw218k8knrDwI/S+PfWCYEvotYadONsuLQegM5GdhK
	3wwWlNOxGUloBVFJNsllEo8czXvQANE=
Subject: Re: merge struct block_device and struct hd_struct
To: Christoph Hellwig <hch@lst.de>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>,
 Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 Jens Axboe <axboe@kernel.dk>
References: <20201118084800.2339180-1-hch@lst.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
Date: Wed, 18 Nov 2020 09:56:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Christoph,

On 18.11.2020 09:47, Christoph Hellwig wrote:
> Diffstat:
>  block/bio.c                                  |    6 
>  block/blk-cgroup.c                           |   50 +-
>  block/blk-core.c                             |   85 +--
>  block/blk-flush.c                            |    2 
>  block/blk-iocost.c                           |   36 -
>  block/blk-lib.c                              |    2 
>  block/blk-merge.c                            |    6 
>  block/blk-mq.c                               |   11 
>  block/blk-mq.h                               |    5 
>  block/blk.h                                  |   92 ----
>  block/genhd.c                                |  444 +++++---------------
>  block/ioctl.c                                |    7 
>  block/partitions/core.c                      |  238 +++--------
>  drivers/block/drbd/drbd_receiver.c           |    2 
>  drivers/block/drbd/drbd_worker.c             |    2 
>  drivers/block/loop.c                         |   21 
>  drivers/block/nbd.c                          |    6 
>  drivers/block/xen-blkback/common.h           |    4 
>  drivers/block/xen-blkfront.c                 |   20 
>  drivers/block/zram/zram_drv.c                |   20 
>  drivers/md/bcache/request.c                  |    4 
>  drivers/md/bcache/super.c                    |   53 --
>  drivers/md/dm-table.c                        |    9 
>  drivers/md/dm.c                              |   16 
>  drivers/md/md.c                              |    8 
>  drivers/mtd/mtdsuper.c                       |   17 
>  drivers/nvme/target/admin-cmd.c              |   20 
>  drivers/s390/block/dasd.c                    |    8 
>  drivers/s390/block/dasd_ioctl.c              |    9 
>  drivers/scsi/scsicam.c                       |    2 
>  drivers/target/target_core_file.c            |    6 
>  drivers/target/target_core_pscsi.c           |    7 
>  drivers/usb/gadget/function/storage_common.c |    8 
>  fs/block_dev.c                               |  578 ++++++++-------------------
>  fs/btrfs/sysfs.c                             |   15 
>  fs/btrfs/volumes.c                           |   13 
>  fs/ext4/super.c                              |   18 
>  fs/ext4/sysfs.c                              |   10 
>  fs/f2fs/checkpoint.c                         |    5 
>  fs/f2fs/f2fs.h                               |    2 
>  fs/f2fs/super.c                              |    8 
>  fs/f2fs/sysfs.c                              |    9 
>  fs/inode.c                                   |    3 
>  fs/internal.h                                |    7 
>  fs/io_uring.c                                |   10 
>  fs/pipe.c                                    |    5 
>  fs/pstore/blk.c                              |    2 
>  fs/quota/quota.c                             |   40 +
>  fs/statfs.c                                  |    2 
>  fs/super.c                                   |   86 ----
>  include/linux/blk-cgroup.h                   |    4 
>  include/linux/blk_types.h                    |   26 +
>  include/linux/blkdev.h                       |   24 -
>  include/linux/fs.h                           |    5 
>  include/linux/genhd.h                        |  104 ----
>  include/linux/part_stat.h                    |   17 
>  init/do_mounts.c                             |  271 +++++-------
>  kernel/trace/blktrace.c                      |   54 --
>  mm/filemap.c                                 |    9 
>  59 files changed, 837 insertions(+), 1716 deletions(-)

since this isn't the first series from you recently spamming
xen-devel, may I ask that you don't Cc entire series to lists
which are involved with perhaps just one out of the many patches?
IMO Cc lists should be compiled on a per-patch basis; the cover
letter may of course be sent to the union of all of them.

Thanks much,
Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:57:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29524.59015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJHk-00016g-GE; Wed, 18 Nov 2020 08:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29524.59015; Wed, 18 Nov 2020 08:57:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJHk-00016Z-Ci; Wed, 18 Nov 2020 08:57:52 +0000
Received: by outflank-mailman (input) for mailman id 29524;
 Wed, 18 Nov 2020 08:57:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfJHi-00016T-Q9
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:57:50 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64f7b839-0192-43fe-bfc3-a9e6fc551746;
 Wed, 18 Nov 2020 08:57:48 +0000 (UTC)
X-Inumbo-ID: 64f7b839-0192-43fe-bfc3-a9e6fc551746
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605689868;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=8p1jbKECG05hKPmf6OWYk4c/EC4zZYo57jRnvqOXXUI=;
  b=X3Za9aYnp70K01uzcHg94G0D7FuwD/twi5K+iOPllD79/HzLz5Xh5Xf0
   9hUZG/FGlTAXSOCOGgw9KUvWc7ssPglFXBgLiFuykFo0NP9lFwh+lUpmU
   dRo5irTBfva950x8sldp100Gdf8zSW0ZX+/5xfE0tHIHZMUXk7g2SKC6x
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: a2oQwtXcury+NJZQnQ7pVs1ZV5oxcEWUcD96KIWa5dxDZWOi1qk79yiAK+aoAqmdmlVGauPDQt
 FXkgargbkQmgm5JgFrYvHD83jz0AAuzA3Jz01ByAXHlQFo5iuFtgspZSkq1e8f3xJwdfVDc45Y
 7K1oOvGa/eh4+5POydf0POk5whYWAQRaVjDhJvX2iDrS+aFHJa2o4feMzOFLFm1deBAc7hJoqO
 7AIcaZgpfvj0cDdvJefRel5BiQFNUauKPo8M96tq2gS2xA/RXRoDefrmdRfetNwEEPE7dxrcjT
 XUc=
X-SBRS: None
X-MesageID: 31380212
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31380212"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DRjcTJFbVnlFs/oF0VEbwF9Bnpd071Dj0N+PkunByx/HHPzdYHIt+QOoO/xDfbzQ/YHuTdLhf/D56nb8wk2hOa0TUI6NCPLbrpaBPFAyfZHB2xM5ey0eJK5j6x29MvjPWnb6Dm8WVgDE1iDxsWV9taXcdWWCCUPmAbTYMyaJoWt7vZBmJD9ZQFt7lwr+DP0Gw9vbIhktolNoio4sx42mlzJVmvdthZAMbC+5m2Dap4PmzNLKbBaeExkExlcQLKFMkYnuHzM/CSxE30liifDlPKzKOL0wfHv/49mVCNXRELI+hp7BqNMPWpMJUgp2DdRREJhtvnFA3w1NtZ+sPzhE+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zIKlsQmdtLHf5IqZKe8dJqMJBLrKcdtfuwQUc3PNPI4=;
 b=PtYBk+0yTdwHGZNgkhEZlDwFKMGSxzVHhm1P7PP9Ku+C2rIg3F/7L7N2WNAeiix1sJgF1PmSXrltufbpf92SlLXWhbqXV2vUtmiTKu/Vsg4uaWIpWvaeUw3oQIMmjkB2Al7elzOCKiOLYrQcBK2jwvf4eBlC4NApoutPdGa8FDA5+INNxh7yVnW88l2JRvlPjszomZTqGUyJmT2wTWPc6V1l/E4dL9QVR63Zbv6EkqYmK/zBxsTD8s+X3K/atYs3dMc/RWaFExGOKMlRkFgxRhXm4lCGddEq5b7bqyZB1+uPx1ZEeVABriZ+VoDi0SoAUQNYu5BUeBwfrV0BUd6dEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zIKlsQmdtLHf5IqZKe8dJqMJBLrKcdtfuwQUc3PNPI4=;
 b=iv8GF6mTyicjuzkVcdTF0322jjzLP2q8KcbV386Q04/CGoqmcP/zE+yMHMrUdyDpRFUloC7qVinPmHTC4DpkNMt4O8P26pB0TQ4V8JpVZrCum14HDDdrs5v0g9b/3WGpWOc2fvg9LalgjACpt9gxa7z6Gg7Cqjyq5PB/PUPOLm0=
Date: Wed, 18 Nov 2020 09:57:38 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201117164033.GB3093@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0144.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f5b55284-52a0-4f09-15e0-08d88ba00390
X-MS-TrafficTypeDiagnostic: DM6PR03MB3916:
X-Microsoft-Antispam-PRVS: <DM6PR03MB391674D29A01E06EF1F9EDD78FE10@DM6PR03MB3916.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: +/0Y/dl5/oFqKccUPKsaQwtIewof6oaxqlrSchR1xgQ76C2TS6tLKPl0sBLAPcDAoL9TY2sVjckonudK9yclbm1gMtr9hqJgHDSyHCc1I7pUE+B0rbo6pwFE3cO5UpH9Tw8Fd3h9PxZjZo5wJVhX8HvYuBZ7+maSE4HDB1/UclJqIoKFEOLvpmVsTInaKLuvDrpPGwT1nip3fC8fzpr1YiyqOPCbTsoj7+u1d1cjlm5zWfcwki/Y3VJFC3BFqx26a4lywPdJsLncE+RVOBJPoCOWVFEE0yXKJX7IgG9ibpodROWYVLNAnj3PXzW/LEWTtXwyed8+CxcUOoqgwTZn/Rojk3d9pdJBELDobXiswvpKME1JZD5QkNEnhNmdXQsSrYE85xFL1d6oHAv5MclPTw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(396003)(346002)(366004)(136003)(39850400004)(186003)(66556008)(16526019)(26005)(66476007)(33716001)(85182001)(6496006)(1076003)(6666004)(66946007)(956004)(5660300002)(478600001)(86362001)(966005)(316002)(8676002)(4326008)(6486002)(6916009)(2906002)(9686003)(8936002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 0RuYsOKePLhOmB+Xkt750O5M6cGihm2Gu9lfZoA3FiP8NppMg1HpZl++gKV/0IIKhPezPxIZY9ahnW8M24UqAOyCC0qbwBpNzN0us3nUcAowUQamcZEMPICuidGCrbvo3eqI1Wpm1hhC8opHkH+CiLgpQoeneFEGuNPQKFqGHg9Ovzkj535QhEJxhATq1mUGOjUhNyIHkMWr4gwP16ZTo5WqSWM0slsVZDGKYP+6yx4C77nvqf8g4kOtK/7lWL6jG3/1bxnYMft3rL+UCzBAIQhBCwnX5O4SIPUinFvcgPjBTZh8krwx4StP1lovhL0XG76i+Leh2YnkPEslu8Rx/6J6qpsGEgXOSIGNMo7dxcmC4Nq+kgXmDn2LyqwrvNKhbtpyo6IxaoznlkyM7WAErEzQmhcE65V5gxOJoN44hRE6KoQpvh9kF149Q7WFivmzftyjVwKNIfXT14h30fHd+Zam/fewj3Izz5udIqCX3WvXO9mNI5SKVif2Ky9mtAvsLM5qianKIGlDlnM0V5ePyveqJcQgEEPrW4kvdUqo7LMx7DVTGMcWIhAOIjZlt31FZ601FueeTpD4zhjwaQoQIHDCGXAjUttN4Zy+1OmmpVLT91ph0e258nRPcJwDI2KcQeohz7hKfk38J86nNd6PHfqd8q3+6se/bjlA1HZpVTFmXNKHUEUYYf8VXVpr2MlNEzpgrkR7HN+AusaLkkgeE7EmTqt9OOSORVdpSszS/mg51iACNOr88j77a1TqcWwKuekJA31kR5axxWJoHFSlOYaWdcvZDQM/0BnD2MHGMvCrILYDiDm9qLNiyfKH6AVfSy8hLnl6qCmAUgqdqfNgkCk71yrgYuQV3Iz0HTkQhJ/EASxq3Pkh7KrdLMZROvXPgMOxT9nIxQpraQ6GEI72Vw==
X-MS-Exchange-CrossTenant-Network-Message-Id: f5b55284-52a0-4f09-15e0-08d88ba00390
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 08:57:44.5798
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zwTxbMf0qstXgou0pmxN4iswVh0C/2ZkzveoDATI+jiyzD3aY6UpiIyyn8pMbpid3ee6KIHFO4doPEQ413kb3Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3916
X-OriginatorOrg: citrix.com

On Tue, Nov 17, 2020 at 05:40:33PM +0100, Manuel Bouyer wrote:
> On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> > [...]
> > 
> > I have attached a patch below that will dump the vIO-APIC info as part
> > of the 'i' debug key output, can you paste the whole output of the 'i'
> > debug key when the system stalls?
> 
> see attached file. Note that the kernel did unstall while 'i' output was
> being printed, so it is mixed with some NetBSD kernel output.
> The idt entry of the 'ioapic2 pin2' interrupt is 103 on CPU 0.
> 
> I also put the whole sequence at
> http://www-soc.lip6.fr/~bouyer/xen-log3.txt

On one of the instances the pin shows up as masked, but I'm not sure
if that's relevant since later it shows up as unmasked. Might just be
part of how NetBSD handles such interrupts.

So taking a look at the information dumped:

(XEN)     IRQ 34 Vec 81:
(XEN)       Apic 0x02, Pin  2: 5vec=51 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0 dest_id:00000001
(XEN) ioapic 2 pin 2 gsi 34 vector 0x67
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 1 dest id 0
[...]
(XEN)     IRQ 34 Vec 81:
(XEN)       Apic 0x02, Pin  2: 0vec=51 delivery=LoPri dest=L status=0 polarity=1 irr=0 trig=L mask=0 dest_id:00000001
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 0 dest id 0

The state between the physical IO-APIC and the emulated one seems
consistent at least.

On a possibly unrelated note: how do you set up the event channel
callback? Is it using HVM_PARAM_CALLBACK_IRQ with
HVM_PARAM_CALLBACK_TYPE_VECTOR?

Are you EOI'ing that vector on the local APIC when servicing the
interrupt?

> You'll see that I did hit 'i' 2 times to get the NetBSD kernel to boot
> multiuser.
> 
> > 
> > Can you assert that you properly EOI the vectors on the local APIC? (I
> > don't have a patch to dump the emulated lapic ISR right now, but could
> > provide one if needed).
> 
> Reading the code, I think it's OK (assuming I properly understood what
> you mean too). Wouldn't it cause problems on real hardware too
> if the vectors were not EOI'd ?

Yes, it should; I just wasn't sure whether NetBSD uses the same
handler code when running under Xen.

I was mostly asking because Xen has somewhat weird semantics for the
vector callback when it is set up via HVM_PARAM_CALLBACK_IRQ; see above.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:58:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:58:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29527.59027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJI5-0001DQ-Q1; Wed, 18 Nov 2020 08:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29527.59027; Wed, 18 Nov 2020 08:58:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJI5-0001DI-M1; Wed, 18 Nov 2020 08:58:13 +0000
Received: by outflank-mailman (input) for mailman id 29527;
 Wed, 18 Nov 2020 08:58:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VXpI=EY=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kfJI4-0001Ag-CO
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:58:12 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a59a809c-5e52-44df-9133-83bf144e326c;
 Wed, 18 Nov 2020 08:58:07 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id E406367357; Wed, 18 Nov 2020 09:58:04 +0100 (CET)
Date: Wed, 18 Nov 2020 09:58:04 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: Christoph Hellwig <hch@lst.de>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: merge struct block_device and struct hd_struct
Message-ID: <20201118085804.GA20384@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
> since this isn't the first series from you recently spamming
> xen-devel, may I ask that you don't Cc entire series to lists
> which are involved with perhaps just one out of the many patches?
> IMO Cc lists should be compiled on a per-patch basis; the cover
> letter may of course be sent to the union of all of them.

No way.  Individual CCs are completely broken, as they don't provide
the reviewer any context.  If you don't want xen-blkfront patches to
go to xen-devel, remove it from MAINTAINERS.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:59:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:59:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29535.59039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJZ-0001Qx-4x; Wed, 18 Nov 2020 08:59:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29535.59039; Wed, 18 Nov 2020 08:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJZ-0001Qq-1Z; Wed, 18 Nov 2020 08:59:45 +0000
Received: by outflank-mailman (input) for mailman id 29535;
 Wed, 18 Nov 2020 08:59:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJAt-0006e0-6w
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:50:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8bf40fbe-c935-4e0e-a096-066fb9325f3c;
 Wed, 18 Nov 2020 08:50:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 80D87AD45;
 Wed, 18 Nov 2020 08:50:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605689403; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KyU5ejWfgwEnjWjOuAgNBFVH+se8h5ZxMHqQu86/6+A=;
	b=Lplaa1xOPrE/jDH4KfJhgBZ4ybKF6cgmPiszyNju5YYcFJmyOP/6VzRKhetQ6N8kBkmzYg
	48wK07+NjYad+HnBCd8RvKLT+6LLQPyRSks9/UDfrEoHLdn0I+JWJgTdJM8BgHTvzrc8rU
	rjAeRtBBRqfZO8fn6sn/BcDo/gJtFaQ=
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 "iwj@xenproject.org" <iwj@xenproject.org>, "julien@xen.org"
 <julien@xen.org>, "wl@xen.org" <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201118005051.26115-1-sstabellini@kernel.org>
 <0A50C952-B9D8-44C3-9326-A0555B435693@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c59a1540-2dd0-2813-9fe5-d5be2335fe9b@suse.com>
Date: Wed, 18 Nov 2020 09:50:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <0A50C952-B9D8-44C3-9326-A0555B435693@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 09:45, Bertrand Marquis wrote:
>> On 18 Nov 2020, at 00:50, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -102,8 +102,8 @@ config HVM
>> 	  If unsure, say Y.
>>
>> config XEN_SHSTK
>> -	bool "Supervisor Shadow Stacks"
>> -	depends on HAS_AS_CET_SS && EXPERT
>> +	bool "Supervisor Shadow Stacks (UNSUPPORTED)"
>> +	depends on HAS_AS_CET_SS && UNSUPPORTED
> 
> This one is not following the standard scheme with “if UNSUPPORTED"

There's no standard scheme here: in one case the entire option
depends on some other setting (e.g. UNSUPPORTED), while in the other
only the prompt (i.e. the user's ability to choose) does. The
difference becomes obvious when the option has a default other than
"no": despite the hidden prompt, the option may still get turned on.
In the case here (which serves as a good example), "default y" would
mean the feature gets turned on even if "if UNSUPPORTED" were added
to the prompt while UNSUPPORTED itself is off.
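To illustrate, the two forms being contrasted would look roughly like
this in Kconfig (hypothetical fragments for illustration, not the
actual patch):

```kconfig
# Entire option gated: with UNSUPPORTED=n the symbol can never be
# enabled, regardless of any default.
config XEN_SHSTK
	bool "Supervisor Shadow Stacks (UNSUPPORTED)"
	depends on HAS_AS_CET_SS && UNSUPPORTED

# Only the prompt gated: with UNSUPPORTED=n the user is never asked,
# but "default y" would still turn the feature on silently.
config XEN_SHSTK
	bool "Supervisor Shadow Stacks (UNSUPPORTED)" if UNSUPPORTED
	depends on HAS_AS_CET_SS
	default y
```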

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:59:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:59:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29536.59051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJc-0001TW-E7; Wed, 18 Nov 2020 08:59:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29536.59051; Wed, 18 Nov 2020 08:59:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJc-0001TM-A3; Wed, 18 Nov 2020 08:59:48 +0000
Received: by outflank-mailman (input) for mailman id 29536;
 Wed, 18 Nov 2020 08:59:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJA0-0006e0-56
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:52 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f832a971-d419-49f9-bbc7-b121ac062b86;
 Wed, 18 Nov 2020 08:48:44 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8j-0007p0-Ax; Wed, 18 Nov 2020 08:48:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=hNtZTYwDbpxOWnjbdPfEN2LKFTEyc+Xwk1zoPtNIsws=; b=TqocbXzDnpX4MzcZ8sgwLOlbAf
	xsj4dZC9XsQv1XS+z2XyddLzInqJ0/IKShK3LPDT7Eo8ZIpILYJY0gj9VfwBQ3nMuSg/HN+J3xlST
	FSG10B3yULkQmmKPPGNECUNZ7iTNJ5QD9Ed3/P5kQbf5pUKNe93db9DVG3tkbxyeraQrS6VjfZR1E
	w5El/hZ6ELQvxKpwQr9399o0aiwHBe3HK6K+ZrCT+dby6bdt46qtWjz7jJb2z7KRzu3kN4k6f+J35
	BMhTlxFabVq0KWEkdz8my9MdfYfF/m0hoir4VCd42ijZNbevVJzVu/cbBaaFuWLhgF/M0/GI2ryHL
	ICIiDKNg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 18/20] fs: remove get_super_thawed and get_super_exclusive_thawed
Date: Wed, 18 Nov 2020 09:47:58 +0100
Message-Id: <20201118084800.2339180-19-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just open code the wait in the only caller of both functions.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/internal.h      |  2 ++
 fs/quota/quota.c   | 31 +++++++++++++++++++++-------
 fs/super.c         | 51 ++--------------------------------------------
 include/linux/fs.h |  4 +---
 4 files changed, 29 insertions(+), 59 deletions(-)

diff --git a/fs/internal.h b/fs/internal.h
index a7cd0f64faa4ab..47be21dfeebef5 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -114,7 +114,9 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
  */
 extern int reconfigure_super(struct fs_context *);
 extern bool trylock_super(struct super_block *sb);
+struct super_block *__get_super(struct block_device *bdev, bool excl);
 extern struct super_block *user_get_super(dev_t);
+void put_super(struct super_block *sb);
 extern bool mount_capable(struct fs_context *);
 
 /*
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index 9af95c7a0bbe3c..f3d32b0d9008f2 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -20,6 +20,7 @@
 #include <linux/writeback.h>
 #include <linux/nospec.h>
 #include "compat.h"
+#include "../internal.h"
 
 static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
 				     qid_t id)
@@ -868,6 +869,7 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	struct block_device *bdev;
 	struct super_block *sb;
 	struct filename *tmp = getname(special);
+	bool excl = false, thawed = false;
 
 	if (IS_ERR(tmp))
 		return ERR_CAST(tmp);
@@ -875,17 +877,32 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	putname(tmp);
 	if (IS_ERR(bdev))
 		return ERR_CAST(bdev);
-	if (quotactl_cmd_onoff(cmd))
-		sb = get_super_exclusive_thawed(bdev);
-	else if (quotactl_cmd_write(cmd))
-		sb = get_super_thawed(bdev);
-	else
-		sb = get_super(bdev);
+
+	if (quotactl_cmd_onoff(cmd)) {
+		excl = true;
+		thawed = true;
+	} else if (quotactl_cmd_write(cmd)) {
+		thawed = true;
+	}
+
+retry:
+	sb = __get_super(bdev, excl);
+	if (thawed && sb && sb->s_writers.frozen != SB_UNFROZEN) {
+		if (excl)
+			up_write(&sb->s_umount);
+		else
+			up_read(&sb->s_umount);
+		wait_event(sb->s_writers.wait_unfrozen,
+			   sb->s_writers.frozen == SB_UNFROZEN);
+		put_super(sb);
+		goto retry;
+	}
+
 	bdput(bdev);
 	if (!sb)
 		return ERR_PTR(-ENODEV);
-
 	return sb;
+
 #else
 	return ERR_PTR(-ENODEV);
 #endif
diff --git a/fs/super.c b/fs/super.c
index 98bb0629ee108e..343e5c1e538d2a 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -307,7 +307,7 @@ static void __put_super(struct super_block *s)
  *	Drops a temporary reference, frees superblock if there's no
  *	references left.
  */
-static void put_super(struct super_block *sb)
+void put_super(struct super_block *sb)
 {
 	spin_lock(&sb_lock);
 	__put_super(sb);
@@ -740,7 +740,7 @@ void iterate_supers_type(struct file_system_type *type,
 
 EXPORT_SYMBOL(iterate_supers_type);
 
-static struct super_block *__get_super(struct block_device *bdev, bool excl)
+struct super_block *__get_super(struct block_device *bdev, bool excl)
 {
 	struct super_block *sb;
 
@@ -789,53 +789,6 @@ struct super_block *get_super(struct block_device *bdev)
 }
 EXPORT_SYMBOL(get_super);
 
-static struct super_block *__get_super_thawed(struct block_device *bdev,
-					      bool excl)
-{
-	while (1) {
-		struct super_block *s = __get_super(bdev, excl);
-		if (!s || s->s_writers.frozen == SB_UNFROZEN)
-			return s;
-		if (!excl)
-			up_read(&s->s_umount);
-		else
-			up_write(&s->s_umount);
-		wait_event(s->s_writers.wait_unfrozen,
-			   s->s_writers.frozen == SB_UNFROZEN);
-		put_super(s);
-	}
-}
-
-/**
- *	get_super_thawed - get thawed superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device. The superblock is returned once it is thawed
- *	(or immediately if it was not frozen). %NULL is returned if no match
- *	is found.
- */
-struct super_block *get_super_thawed(struct block_device *bdev)
-{
-	return __get_super_thawed(bdev, false);
-}
-EXPORT_SYMBOL(get_super_thawed);
-
-/**
- *	get_super_exclusive_thawed - get thawed superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device. The superblock is returned once it is thawed
- *	(or immediately if it was not frozen) and s_umount semaphore is held
- *	in exclusive mode. %NULL is returned if no match is found.
- */
-struct super_block *get_super_exclusive_thawed(struct block_device *bdev)
-{
-	return __get_super_thawed(bdev, true);
-}
-EXPORT_SYMBOL(get_super_exclusive_thawed);
-
 /**
  * get_active_super - get an active reference to the superblock of a device
  * @bdev: device to get the superblock for
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 8667d0cdc71e76..a61df0dd4f1989 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1409,7 +1409,7 @@ enum {
 
 struct sb_writers {
 	int				frozen;		/* Is sb frozen? */
-	wait_queue_head_t		wait_unfrozen;	/* for get_super_thawed() */
+	wait_queue_head_t		wait_unfrozen;	/* wait for thaw */
 	struct percpu_rw_semaphore	rw_sem[SB_FREEZE_LEVELS];
 };
 
@@ -3132,8 +3132,6 @@ extern struct file_system_type *get_filesystem(struct file_system_type *fs);
 extern void put_filesystem(struct file_system_type *fs);
 extern struct file_system_type *get_fs_type(const char *name);
 extern struct super_block *get_super(struct block_device *);
-extern struct super_block *get_super_thawed(struct block_device *);
-extern struct super_block *get_super_exclusive_thawed(struct block_device *bdev);
 extern struct super_block *get_active_super(struct block_device *bdev);
 extern void drop_super(struct super_block *sb);
 extern void drop_super_exclusive(struct super_block *sb);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:59:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29538.59063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJd-0001WW-SG; Wed, 18 Nov 2020 08:59:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29538.59063; Wed, 18 Nov 2020 08:59:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJd-0001WN-OV; Wed, 18 Nov 2020 08:59:49 +0000
Received: by outflank-mailman (input) for mailman id 29538;
 Wed, 18 Nov 2020 08:59:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9v-0006e0-4X
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:47 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e193a6bb-7d5f-43a4-a1bc-2ab04ac718a7;
 Wed, 18 Nov 2020 08:48:43 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8h-0007oc-Bb; Wed, 18 Nov 2020 08:48:32 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=p6UHeMdNPZtbtbIn43eVEAdBbhJQd+3671CGRMSmzGQ=; b=U+CT7FI/RNVqd8977Z9jys84zx
	D5PJs3XqwRZ2I4/5WIasYm3mO3hz6MEPuPBxpOUse9ZssA6t9l0w+A0keOeXTtOgMFzizktjEA7Hr
	lGo6ggP8TPFpI263wFUgLY4Kzp3fTQiuP7Z/THc501YB4CW12JxTpfjg2mdQPzSqE7OPNd4rECdn/
	7jivfGwOCd8eWfwYg+Y4GPsB7EFyXW29c/OGQJjvi228S9KWwCz09Usiub7OnZfO2EQoeDImpFdcG
	Z56X0eWsNkNq8gS/VMw0EsIXHjgjw650qnJtFnoE2b3neXJfoAYmgACwhZLOofNjqd+5jYI2I/PXp
	+43PTD3g==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 17/20] filemap: consistently use ->f_mapping over ->i_mapping
Date: Wed, 18 Nov 2020 09:47:57 +0100
Message-Id: <20201118084800.2339180-18-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use file->f_mapping in all remaining places that have a struct file
available to properly handle the case where inode->i_mapping !=
file_inode(file)->i_mapping.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/filemap.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index d5e7c2029d16b4..3e3531a757f8db 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2887,13 +2887,13 @@ EXPORT_SYMBOL(filemap_map_pages);
 vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 {
 	struct page *page = vmf->page;
-	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct inode *inode = vmf->vma->vm_file->f_mapping->host;
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
 	sb_start_pagefault(inode->i_sb);
 	file_update_time(vmf->vma->vm_file);
 	lock_page(page);
-	if (page->mapping != inode->i_mapping) {
+	if (page->mapping != vmf->vma->vm_file->f_mapping) {
 		unlock_page(page);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
@@ -3149,10 +3149,9 @@ void dio_warn_stale_pagecache(struct file *filp)
 {
 	static DEFINE_RATELIMIT_STATE(_rs, 86400 * HZ, DEFAULT_RATELIMIT_BURST);
 	char pathname[128];
-	struct inode *inode = file_inode(filp);
 	char *path;
 
-	errseq_set(&inode->i_mapping->wb_err, -EIO);
+	errseq_set(&filp->f_mapping->wb_err, -EIO);
 	if (__ratelimit(&_rs)) {
 		path = file_path(filp, pathname, sizeof(pathname));
 		if (IS_ERR(path))
@@ -3179,7 +3178,7 @@ generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
 
 	if (iocb->ki_flags & IOCB_NOWAIT) {
 		/* If there are pages to writeback, return */
-		if (filemap_range_has_page(inode->i_mapping, pos,
+		if (filemap_range_has_page(file->f_mapping, pos,
 					   pos + write_len - 1))
 			return -EAGAIN;
 	} else {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 08:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 08:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29545.59075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJn-0001hm-5F; Wed, 18 Nov 2020 08:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29545.59075; Wed, 18 Nov 2020 08:59:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJn-0001hb-1Z; Wed, 18 Nov 2020 08:59:59 +0000
Received: by outflank-mailman (input) for mailman id 29545;
 Wed, 18 Nov 2020 08:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJ9q-0006e0-4H
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:42 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b48ae73e-9e8a-4dbf-b138-9c98ee0aa84f;
 Wed, 18 Nov 2020 08:48:42 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8f-0007o7-Ef; Wed, 18 Nov 2020 08:48:30 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=jJNK9/H8sfmZrYurqzVwgyu9MJNwxw5ihe1GEzXuM6I=; b=jr1pmgSmLgODWAxYLebiTtixWG
	E1FQCz1t4RsLXrDcKfCHo6N5wK4/ZRUATPnOoS+1KhQWSkL+IYBAcKPpKT/D+msyWWNJRH4w+SFnj
	JBRSKKj3TirzVD1MdsKHZ9Flq10GFrmbhjDJZhSyjzB4kyj1B0nzh8UbaRZ4y/HZdtSR9e8+1QDyD
	trkeCJ3uOhAJr0QSCPeeqOHGZi+qybaTd0Iwo6NsH61NtELVxfc3F3mwTNzCSqbmQPVbAslaT7l/f
	rcS41nOzYQ+YaKX/c14HNjwjzYbiCVclFfMYiSfxLG2wzSa+j8BcmUl2NsNkYxtieohJSwf+tr+K1
	+WNVXfww==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8f-0007o7-Ef; Wed, 18 Nov 2020 08:48:30 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 16/20] block: stop using bdget_disk for partition 0
Date: Wed, 18 Nov 2020 09:47:56 +0100
Message-Id: <20201118084800.2339180-17-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

We can just dereference the pointer in struct gendisk instead.  Also
remove the now unused export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c                   |  1 -
 drivers/block/nbd.c             |  4 +---
 drivers/block/xen-blkfront.c    | 20 +++++---------------
 drivers/block/zram/zram_drv.c   | 18 +++---------------
 drivers/md/dm.c                 |  8 +-------
 drivers/s390/block/dasd_ioctl.c |  5 ++---
 6 files changed, 12 insertions(+), 44 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index a14e2408e3d4e8..ec41d0f18f5ce1 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -907,7 +907,6 @@ struct block_device *bdget_disk(struct gendisk *disk, int partno)
 
 	return bdev;
 }
-EXPORT_SYMBOL(bdget_disk);
 
 /*
  * print a full list of all partitions - intended for places where the root
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 014683968ce174..92f84ed0ba9eb6 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1488,12 +1488,10 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
 static void nbd_release(struct gendisk *disk, fmode_t mode)
 {
 	struct nbd_device *nbd = disk->private_data;
-	struct block_device *bdev = bdget_disk(disk, 0);
 
 	if (test_bit(NBD_RT_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) &&
-			bdev->bd_openers == 0)
+			disk->part0->bd_openers == 0)
 		nbd_disconnect_and_put(nbd);
-	bdput(bdev);
 
 	nbd_config_put(nbd);
 	nbd_put(nbd);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 79521e33d30ed5..188e0b47534bcf 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2153,7 +2153,7 @@ static void blkfront_closing(struct blkfront_info *info)
 	}
 
 	if (info->gd)
-		bdev = bdget_disk(info->gd, 0);
+		bdev = bdgrab(info->gd->part0);
 
 	mutex_unlock(&info->mutex);
 
@@ -2518,7 +2518,7 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 
 	disk = info->gd;
 	if (disk)
-		bdev = bdget_disk(disk, 0);
+		bdev = bdgrab(disk->part0);
 
 	info->xbdev = NULL;
 	mutex_unlock(&info->mutex);
@@ -2595,19 +2595,11 @@ static int blkif_open(struct block_device *bdev, fmode_t mode)
 static void blkif_release(struct gendisk *disk, fmode_t mode)
 {
 	struct blkfront_info *info = disk->private_data;
-	struct block_device *bdev;
 	struct xenbus_device *xbdev;
 
 	mutex_lock(&blkfront_mutex);
-
-	bdev = bdget_disk(disk, 0);
-
-	if (!bdev) {
-		WARN(1, "Block device %s yanked out from us!\n", disk->disk_name);
+	if (disk->part0->bd_openers)
 		goto out_mutex;
-	}
-	if (bdev->bd_openers)
-		goto out;
 
 	/*
 	 * Check if we have been instructed to close. We will have
@@ -2619,7 +2611,7 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 
 	if (xbdev && xbdev->state == XenbusStateClosing) {
 		/* pending switch to state closed */
-		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
+		dev_info(disk_to_dev(disk), "releasing disk\n");
 		xlvbd_release_gendisk(info);
 		xenbus_frontend_closed(info->xbdev);
  	}
@@ -2628,14 +2620,12 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 
 	if (!xbdev) {
 		/* sudden device removal */
-		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
+		dev_info(disk_to_dev(disk), "releasing disk\n");
 		xlvbd_release_gendisk(info);
 		disk->private_data = NULL;
 		free_info(info);
 	}
 
-out:
-	bdput(bdev);
 out_mutex:
 	mutex_unlock(&blkfront_mutex);
 }
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 01757f9578dcb8..56024905bd242c 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1748,7 +1748,7 @@ static ssize_t reset_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
 	struct zram *zram = dev_to_zram(dev);
-	struct block_device *bdev;
+	struct block_device *bdev = zram->disk->part0;
 	unsigned short do_reset;
 	int ret = 0;
 
@@ -1758,17 +1758,12 @@ static ssize_t reset_store(struct device *dev,
 	if (!do_reset)
 		return -EINVAL;
 
-	bdev = bdget_disk(zram->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-
 	mutex_lock(&bdev->bd_mutex);
 	if (bdev->bd_openers)
 		ret = -EBUSY;
 	else
 		zram_reset_device(zram);
 	mutex_unlock(&bdev->bd_mutex);
-	bdput(bdev);
 
 	return ret ? ret : len;
 }
@@ -1933,15 +1928,8 @@ static int zram_add(void)
 
 static int zram_remove(struct zram *zram)
 {
-	struct block_device *bdev = bdget_disk(zram->disk, 0);
-
-	if (bdev) {
-		if (bdev->bd_openers) {
-			bdput(bdev);
-			return -EBUSY;
-		}
-		bdput(bdev);
-	}
+	if (zram->disk->part0->bd_openers)
+		return -EBUSY;
 
 	del_gendisk(zram->disk);
 	zram_debugfs_unregister(zram);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c9438feefe55a3..ec48ccae50dd53 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2375,17 +2375,12 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
  */
 static int lock_fs(struct mapped_device *md)
 {
-	struct block_device *bdev;
 	int r;
 
 	WARN_ON(md->frozen_sb);
 
-	bdev = bdget_disk(md->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-	md->frozen_sb = freeze_bdev(bdev);
+	md->frozen_sb = freeze_bdev(md->disk->part0);
 	if (IS_ERR(md->frozen_sb)) {
-		bdput(bdev);
 		r = PTR_ERR(md->frozen_sb);
 		md->frozen_sb = NULL;
 		return r;
@@ -2402,7 +2397,6 @@ static void unlock_fs(struct mapped_device *md)
 		return;
 
 	thaw_bdev(md->frozen_sb->s_bdev, md->frozen_sb);
-	bdput(md->frozen_sb->s_bdev);
 	md->frozen_sb = NULL;
 	clear_bit(DMF_FROZEN, &md->flags);
 }
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index 304eba1acf163c..9f642440894655 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -220,9 +220,8 @@ dasd_format(struct dasd_block *block, struct format_data_t *fdata)
 	 * enabling the device later.
 	 */
 	if (fdata->start_unit == 0) {
-		struct block_device *bdev = bdget_disk(block->gdp, 0);
-		bdev->bd_inode->i_blkbits = blksize_bits(fdata->blksize);
-		bdput(bdev);
+		block->gdp->part0->bd_inode->i_blkbits =
+			blksize_bits(fdata->blksize);
 	}
 
 	rc = base->discipline->format_device(base, fdata, 1);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:00:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:00:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29547.59087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJo-0001ks-Ee; Wed, 18 Nov 2020 09:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29547.59087; Wed, 18 Nov 2020 09:00:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJo-0001kZ-Ar; Wed, 18 Nov 2020 09:00:00 +0000
Received: by outflank-mailman (input) for mailman id 29547;
 Wed, 18 Nov 2020 08:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJA5-0006e0-58
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:57 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29ef5b0e-ef3c-4ad2-91ae-1fb551e763a6;
 Wed, 18 Nov 2020 08:48:41 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8d-0007ni-3J; Wed, 18 Nov 2020 08:48:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJA5-0006e0-58
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:49:57 +0000
X-Inumbo-ID: 29ef5b0e-ef3c-4ad2-91ae-1fb551e763a6
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 29ef5b0e-ef3c-4ad2-91ae-1fb551e763a6;
	Wed, 18 Nov 2020 08:48:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=1HqksWkdy5TD5089kbgpLkdwQc0zqxN82ofqJmKnirc=; b=AHXQG/U4KR4qNzMpNSog6V51VV
	wjsi6n9xcpEeqZn/MMB04HP7tR0A8r44SrdkbUxHgMsa5f+/JAaCbglIjKO+SZPbTwLg6pXB882m9
	HOB1Rxb+CCdSeIIDG5HBlR1+Myb+be9Ch4lRIFPc18cX6Xn/zhfrXY7vIcMrmimE0DauKF2Nyl20m
	T0Yc/5ZrOWFuiQZGpMUegLN1ASO8fwh5Nixfks4FyeA84jgEcpaUR6SVVxlriPIhzV8cfEaANYv9y
	S9QIVWOcJtgL3JDV3ADDxRC/UNWlgqr9M4CtBGqHuiQ7nA1DLO9ChILAstf7zOE7tWWUMnn7+8GeW
	nuyGqKxg==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8d-0007ni-3J; Wed, 18 Nov 2020 08:48:28 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 15/20] block: merge struct block_device and struct hd_struct
Date: Wed, 18 Nov 2020 09:47:55 +0100
Message-Id: <20201118084800.2339180-16-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Instead of having two structures that represent each block device with
different lifetime rules, merge them into a single one.  This also
greatly simplifies the reference counting rules, as we can use the inode
reference count as the main reference count for the new struct
block_device, with the device model reference front-ending it for device
model interaction.  The percpu refcount in struct hd_struct is entirely
gone, given that struct block_device must be opened and thus valid for
the duration of the I/O.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c                        |   6 +-
 block/blk-cgroup.c                 |   9 +-
 block/blk-core.c                   |  85 +++++-----
 block/blk-flush.c                  |   2 +-
 block/blk-lib.c                    |   2 +-
 block/blk-merge.c                  |   6 +-
 block/blk-mq.c                     |  11 +-
 block/blk-mq.h                     |   5 +-
 block/blk.h                        |  39 ++---
 block/genhd.c                      | 240 +++++++++++------------------
 block/ioctl.c                      |   4 +-
 block/partitions/core.c            | 221 ++++++++------------------
 drivers/block/drbd/drbd_receiver.c |   2 +-
 drivers/block/drbd/drbd_worker.c   |   2 +-
 drivers/block/zram/zram_drv.c      |   2 +-
 drivers/md/bcache/request.c        |   4 +-
 drivers/md/dm.c                    |   8 +-
 drivers/md/md.c                    |   4 +-
 drivers/nvme/target/admin-cmd.c    |  20 +--
 drivers/s390/block/dasd.c          |   8 +-
 fs/block_dev.c                     |  77 ++++-----
 fs/ext4/super.c                    |  18 +--
 fs/ext4/sysfs.c                    |  10 +-
 fs/f2fs/checkpoint.c               |   5 +-
 fs/f2fs/f2fs.h                     |   2 +-
 fs/f2fs/super.c                    |   6 +-
 fs/f2fs/sysfs.c                    |   9 --
 include/linux/blk_types.h          |  23 ++-
 include/linux/blkdev.h             |  13 +-
 include/linux/genhd.h              |  67 ++------
 include/linux/part_stat.h          |  17 +-
 init/do_mounts.c                   |  20 +--
 kernel/trace/blktrace.c            |  54 ++-----
 33 files changed, 362 insertions(+), 639 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 0c5269997434d6..4df1ecd53baf8f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -608,12 +608,12 @@ void bio_truncate(struct bio *bio, unsigned new_size)
 void guard_bio_eod(struct bio *bio)
 {
 	sector_t maxsector;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	rcu_read_lock();
-	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
+	part = __bdget_disk(bio->bi_disk, bio->bi_partno);
 	if (part)
-		maxsector = bdev_nr_sectors(part->bdev);
+		maxsector = bdev_nr_sectors(part);
 	else
 		maxsector = get_capacity(bio->bi_disk);
 	rcu_read_unlock();
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 4c0ae0f6bce02d..fb5076223f10f2 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -820,9 +820,9 @@ static void blkcg_fill_root_iostats(void)
 
 	class_dev_iter_init(&iter, &block_class, NULL, &disk_type);
 	while ((dev = class_dev_iter_next(&iter))) {
-		struct gendisk *disk = dev_to_disk(dev);
-		struct hd_struct *part = disk_get_part(disk, 0);
-		struct blkcg_gq *blkg = blk_queue_root_blkg(disk->queue);
+		struct block_device *bdev = dev_to_bdev(dev);
+		struct blkcg_gq *blkg =
+			blk_queue_root_blkg(bdev->bd_disk->queue);
 		struct blkg_iostat tmp;
 		int cpu;
 
@@ -830,7 +830,7 @@ static void blkcg_fill_root_iostats(void)
 		for_each_possible_cpu(cpu) {
 			struct disk_stats *cpu_dkstats;
 
-			cpu_dkstats = per_cpu_ptr(part->dkstats, cpu);
+			cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
 			tmp.ios[BLKG_IOSTAT_READ] +=
 				cpu_dkstats->ios[STAT_READ];
 			tmp.ios[BLKG_IOSTAT_WRITE] +=
@@ -849,7 +849,6 @@ static void blkcg_fill_root_iostats(void)
 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
 			u64_stats_update_end(&blkg->iostat.sync);
 		}
-		disk_put_part(part);
 	}
 }
 
diff --git a/block/blk-core.c b/block/blk-core.c
index 988f45094a387b..192607c98e87c5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -119,7 +119,7 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
 	rq->tag = BLK_MQ_NO_TAG;
 	rq->internal_tag = BLK_MQ_NO_TAG;
 	rq->start_time_ns = ktime_get_ns();
-	rq->part = NULL;
+	rq->bdev = NULL;
 	refcount_set(&rq->ref, 1);
 	blk_crypto_rq_set_defaults(rq);
 }
@@ -666,9 +666,9 @@ static int __init setup_fail_make_request(char *str)
 }
 __setup("fail_make_request=", setup_fail_make_request);
 
-static bool should_fail_request(struct hd_struct *part, unsigned int bytes)
+static bool should_fail_request(struct block_device *bdev, unsigned int bytes)
 {
-	return part->make_it_fail && should_fail(&fail_make_request, bytes);
+	return bdev->bd_make_it_fail && should_fail(&fail_make_request, bytes);
 }
 
 static int __init fail_make_request_debugfs(void)
@@ -683,19 +683,19 @@ late_initcall(fail_make_request_debugfs);
 
 #else /* CONFIG_FAIL_MAKE_REQUEST */
 
-static inline bool should_fail_request(struct hd_struct *part,
-					unsigned int bytes)
+static inline bool should_fail_request(struct block_device *bdev,
+		unsigned int bytes)
 {
 	return false;
 }
 
 #endif /* CONFIG_FAIL_MAKE_REQUEST */
 
-static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+static inline bool bio_check_ro(struct bio *bio, struct block_device *bdev)
 {
 	const int op = bio_op(bio);
 
-	if (part->policy && op_is_write(op)) {
+	if (bdev->bd_policy && op_is_write(op)) {
 		char b[BDEVNAME_SIZE];
 
 		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
@@ -703,7 +703,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 		WARN_ONCE(1,
 		       "Trying to write to read-only block-device %s (partno %d)\n",
-			bio_devname(bio, b), part->partno);
+			bio_devname(bio, b), bdev->bd_partno);
 		/* Older lvm-tools actually trigger this */
 		return false;
 	}
@@ -713,7 +713,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 static noinline int should_fail_bio(struct bio *bio)
 {
-	if (should_fail_request(&bio->bi_disk->part0, bio->bi_iter.bi_size))
+	if (should_fail_request(bio->bi_disk->part0, bio->bi_iter.bi_size))
 		return -EIO;
 	return 0;
 }
@@ -742,11 +742,11 @@ static inline int bio_check_eod(struct bio *bio, sector_t maxsector)
  */
 static inline int blk_partition_remap(struct bio *bio)
 {
-	struct hd_struct *p;
+	struct block_device *p;
 	int ret = -EIO;
 
 	rcu_read_lock();
-	p = __disk_get_part(bio->bi_disk, bio->bi_partno);
+	p = __bdget_disk(bio->bi_disk, bio->bi_partno);
 	if (unlikely(!p))
 		goto out;
 	if (unlikely(should_fail_request(p, bio->bi_iter.bi_size)))
@@ -755,11 +755,11 @@ static inline int blk_partition_remap(struct bio *bio)
 		goto out;
 
 	if (bio_sectors(bio)) {
-		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
+		if (bio_check_eod(bio, bdev_nr_sectors(p)))
 			goto out;
-		bio->bi_iter.bi_sector += p->start_sect;
-		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
-				      bio->bi_iter.bi_sector - p->start_sect);
+		bio->bi_iter.bi_sector += p->bd_start_sect;
+		trace_block_bio_remap(bio->bi_disk->queue, bio, p->bd_dev,
+				      bio->bi_iter.bi_sector - p->bd_start_sect);
 	}
 	bio->bi_partno = 0;
 	ret = 0;
@@ -829,7 +829,7 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		if (unlikely(blk_partition_remap(bio)))
 			goto end_io;
 	} else {
-		if (unlikely(bio_check_ro(bio, &bio->bi_disk->part0)))
+		if (unlikely(bio_check_ro(bio, bio->bi_disk->part0)))
 			goto end_io;
 		if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk))))
 			goto end_io;
@@ -1201,7 +1201,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 		return ret;
 
 	if (rq->rq_disk &&
-	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
+	    should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq)))
 		return BLK_STS_IOERR;
 
 	if (blk_crypto_insert_cloned_request(rq))
@@ -1260,30 +1260,29 @@ unsigned int blk_rq_err_bytes(const struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
 
-static void update_io_ticks(struct hd_struct *part, unsigned long now, bool end)
+static void update_io_ticks(struct block_device *part, unsigned long now,
+		bool end)
 {
 	unsigned long stamp;
 again:
-	stamp = READ_ONCE(part->stamp);
+	stamp = READ_ONCE(part->bd_stamp);
 	if (unlikely(stamp != now)) {
-		if (likely(cmpxchg(&part->stamp, stamp, now) == stamp))
+		if (likely(cmpxchg(&part->bd_stamp, stamp, now) == stamp))
 			__part_stat_add(part, io_ticks, end ? now - stamp : 1);
 	}
-	if (part->partno) {
-		part = &part_to_disk(part)->part0;
+	if (part->bd_partno) {
+		part = part->bd_disk->part0;
 		goto again;
 	}
 }
 
 static void blk_account_io_completion(struct request *req, unsigned int bytes)
 {
-	if (req->part && blk_do_io_stat(req)) {
+	if (req->bdev && blk_do_io_stat(req)) {
 		const int sgrp = op_stat_group(req_op(req));
-		struct hd_struct *part;
 
 		part_stat_lock();
-		part = req->part;
-		part_stat_add(part, sectors[sgrp], bytes >> 9);
+		part_stat_add(req->bdev, sectors[sgrp], bytes >> 9);
 		part_stat_unlock();
 	}
 }
@@ -1295,20 +1294,15 @@ void blk_account_io_done(struct request *req, u64 now)
 	 * normal IO on queueing nor completion.  Accounting the
 	 * containing request is enough.
 	 */
-	if (req->part && blk_do_io_stat(req) &&
+	if (req->bdev && blk_do_io_stat(req) &&
 	    !(req->rq_flags & RQF_FLUSH_SEQ)) {
 		const int sgrp = op_stat_group(req_op(req));
-		struct hd_struct *part;
 
 		part_stat_lock();
-		part = req->part;
-
-		update_io_ticks(part, jiffies, true);
-		part_stat_inc(part, ios[sgrp]);
-		part_stat_add(part, nsecs[sgrp], now - req->start_time_ns);
+		update_io_ticks(req->bdev, jiffies, true);
+		part_stat_inc(req->bdev, ios[sgrp]);
+		part_stat_add(req->bdev, nsecs[sgrp], now - req->start_time_ns);
 		part_stat_unlock();
-
-		hd_struct_put(part);
 	}
 }
 
@@ -1317,15 +1311,15 @@ void blk_account_io_start(struct request *rq)
 	if (!blk_do_io_stat(rq))
 		return;
 
-	rq->part = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
+	rq->bdev = disk_map_sector_rcu(rq->rq_disk, blk_rq_pos(rq));
 
 	part_stat_lock();
-	update_io_ticks(rq->part, jiffies, false);
+	update_io_ticks(rq->bdev, jiffies, false);
 	part_stat_unlock();
 }
 
-static unsigned long __part_start_io_acct(struct hd_struct *part,
-					  unsigned int sectors, unsigned int op)
+static unsigned long __part_start_io_acct(struct block_device *part,
+		unsigned int sectors, unsigned int op)
 {
 	const int sgrp = op_stat_group(op);
 	unsigned long now = READ_ONCE(jiffies);
@@ -1340,8 +1334,8 @@ static unsigned long __part_start_io_acct(struct hd_struct *part,
 	return now;
 }
 
-unsigned long part_start_io_acct(struct gendisk *disk, struct hd_struct **part,
-				 struct bio *bio)
+unsigned long part_start_io_acct(struct gendisk *disk,
+		struct block_device **part, struct bio *bio)
 {
 	*part = disk_map_sector_rcu(disk, bio->bi_iter.bi_sector);
 
@@ -1352,11 +1346,11 @@ EXPORT_SYMBOL_GPL(part_start_io_acct);
 unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 				 unsigned int op)
 {
-	return __part_start_io_acct(&disk->part0, sectors, op);
+	return __part_start_io_acct(disk->part0, sectors, op);
 }
 EXPORT_SYMBOL(disk_start_io_acct);
 
-static void __part_end_io_acct(struct hd_struct *part, unsigned int op,
+static void __part_end_io_acct(struct block_device *part, unsigned int op,
 			       unsigned long start_time)
 {
 	const int sgrp = op_stat_group(op);
@@ -1370,18 +1364,17 @@ static void __part_end_io_acct(struct hd_struct *part, unsigned int op,
 	part_stat_unlock();
 }
 
-void part_end_io_acct(struct hd_struct *part, struct bio *bio,
+void part_end_io_acct(struct block_device *part, struct bio *bio,
 		      unsigned long start_time)
 {
 	__part_end_io_acct(part, bio_op(bio), start_time);
-	hd_struct_put(part);
 }
 EXPORT_SYMBOL_GPL(part_end_io_acct);
 
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		      unsigned long start_time)
 {
-	__part_end_io_acct(&disk->part0, op, start_time);
+	__part_end_io_acct(disk->part0, op, start_time);
 }
 EXPORT_SYMBOL(disk_end_io_acct);
 
diff --git a/block/blk-flush.c b/block/blk-flush.c
index e32958f0b68750..9507dcdd58814c 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -139,7 +139,7 @@ static void blk_flush_queue_rq(struct request *rq, bool add_front)
 
 static void blk_account_io_flush(struct request *rq)
 {
-	struct hd_struct *part = &rq->rq_disk->part0;
+	struct block_device *part = rq->rq_disk->part0;
 
 	part_stat_lock();
 	part_stat_inc(part, ios[STAT_FLUSH]);
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e90614fd8d6a42..752f9c7220622a 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -65,7 +65,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 
 	/* In case the discard request is in a partition */
 	if (bdev_is_partition(bdev))
-		part_offset = bdev->bd_part->start_sect;
+		part_offset = bdev->bd_start_sect;
 
 	while (nr_sects) {
 		sector_t granularity_aligned_lba, req_sects;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e458060337..3ec0d322e4a769 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -681,10 +681,8 @@ static void blk_account_io_merge_request(struct request *req)
 {
 	if (blk_do_io_stat(req)) {
 		part_stat_lock();
-		part_stat_inc(req->part, merges[op_stat_group(req_op(req))]);
+		part_stat_inc(req->bdev, merges[op_stat_group(req_op(req))]);
 		part_stat_unlock();
-
-		hd_struct_put(req->part);
 	}
 }
 
@@ -906,7 +904,7 @@ static void blk_account_io_merge_bio(struct request *req)
 		return;
 
 	part_stat_lock();
-	part_stat_inc(req->part, merges[op_stat_group(req_op(req))]);
+	part_stat_inc(req->bdev, merges[op_stat_group(req_op(req))]);
 	part_stat_unlock();
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 55bcee5dc0320c..a28475e6405de9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -95,7 +95,7 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 }
 
 struct mq_inflight {
-	struct hd_struct *part;
+	struct block_device *part;
 	unsigned int inflight[2];
 };
 
@@ -105,13 +105,14 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
 {
 	struct mq_inflight *mi = priv;
 
-	if (rq->part == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
+	if (rq->bdev == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
 		mi->inflight[rq_data_dir(rq)]++;
 
 	return true;
 }
 
-unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
+unsigned int blk_mq_in_flight(struct request_queue *q,
+		struct block_device *part)
 {
 	struct mq_inflight mi = { .part = part };
 
@@ -120,7 +121,7 @@ unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
 	return mi.inflight[0] + mi.inflight[1];
 }
 
-void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
 			 unsigned int inflight[2])
 {
 	struct mq_inflight mi = { .part = part };
@@ -300,7 +301,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	INIT_HLIST_NODE(&rq->hash);
 	RB_CLEAR_NODE(&rq->rb_node);
 	rq->rq_disk = NULL;
-	rq->part = NULL;
+	rq->bdev = NULL;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 	rq->alloc_time_ns = alloc_time_ns;
 #endif
diff --git a/block/blk-mq.h b/block/blk-mq.h
index a52703c98b7736..395fbc6c59d1eb 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -182,8 +182,9 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
 	return hctx->nr_ctx && hctx->tags;
 }
 
-unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part);
-void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+unsigned int blk_mq_in_flight(struct request_queue *q,
+		struct block_device *bdev);
+void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *bdev,
 			 unsigned int inflight[2]);
 
 static inline void blk_mq_put_dispatch_budget(struct request_queue *q)
diff --git a/block/blk.h b/block/blk.h
index 09cee7024fb43e..90dd2047c6cd29 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -215,7 +215,15 @@ static inline void elevator_exit(struct request_queue *q,
 	__elevator_exit(q, e);
 }
 
-struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
+static inline struct block_device *__bdget_disk(struct gendisk *disk,
+		int partno)
+{
+	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
+
+	if (unlikely(partno < 0 || partno >= ptbl->len))
+		return NULL;
+	return rcu_dereference(ptbl->part[partno]);
+}
 
 ssize_t part_size_show(struct device *dev, struct device_attribute *attr,
 		char *buf);
@@ -348,44 +356,21 @@ void blk_queue_free_zone_bitmaps(struct request_queue *q);
 static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
 #endif
 
-struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
+struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
+int blk_alloc_devt(struct block_device *bdev, dev_t *devt);
 void blk_free_devt(dev_t devt);
 char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
 #define ADDPART_FLAG_RAID	1
 #define ADDPART_FLAG_WHOLEDISK	2
-void delete_partition(struct hd_struct *part);
+void delete_partition(struct block_device *part);
 int bdev_add_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length);
 int bdev_del_partition(struct block_device *bdev, int partno);
 int bdev_resize_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length);
 int disk_expand_part_tbl(struct gendisk *disk, int target);
-int hd_ref_init(struct hd_struct *part);
-
-/* no need to get/put refcount of part0 */
-static inline int hd_struct_try_get(struct hd_struct *part)
-{
-	if (part->partno)
-		return percpu_ref_tryget_live(&part->ref);
-	return 1;
-}
-
-static inline void hd_struct_put(struct hd_struct *part)
-{
-	if (part->partno)
-		percpu_ref_put(&part->ref);
-}
-
-static inline void hd_free_part(struct hd_struct *part)
-{
-	free_percpu(part->dkstats);
-	kfree(part->info);
-	bdput(part->bdev);
-	percpu_ref_exit(&part->ref);
-}
 
 int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
diff --git a/block/genhd.c b/block/genhd.c
index e101b6843f7437..a14e2408e3d4e8 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -40,7 +40,7 @@ static void disk_release_events(struct gendisk *disk);
 
 void set_capacity(struct gendisk *disk, sector_t sectors)
 {
-	struct block_device *bdev = disk->part0.bdev;
+	struct block_device *bdev = disk->part0;
 
 	spin_lock(&bdev->bd_size_lock);
 	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
@@ -93,13 +93,14 @@ const char *bdevname(struct block_device *bdev, char *buf)
 }
 EXPORT_SYMBOL(bdevname);
 
-static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
+static void part_stat_read_all(struct block_device *part,
+		struct disk_stats *stat)
 {
 	int cpu;
 
 	memset(stat, 0, sizeof(struct disk_stats));
 	for_each_possible_cpu(cpu) {
-		struct disk_stats *ptr = per_cpu_ptr(part->dkstats, cpu);
+		struct disk_stats *ptr = per_cpu_ptr(part->bd_stats, cpu);
 		int group;
 
 		for (group = 0; group < NR_STAT_GROUPS; group++) {
@@ -113,7 +114,7 @@ static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
 	}
 }
 
-static unsigned int part_in_flight(struct hd_struct *part)
+static unsigned int part_in_flight(struct block_device *part)
 {
 	unsigned int inflight = 0;
 	int cpu;
@@ -128,7 +129,8 @@ static unsigned int part_in_flight(struct hd_struct *part)
 	return inflight;
 }
 
-static void part_in_flight_rw(struct hd_struct *part, unsigned int inflight[2])
+static void part_in_flight_rw(struct block_device *part,
+		unsigned int inflight[2])
 {
 	int cpu;
 
@@ -144,42 +146,6 @@ static void part_in_flight_rw(struct hd_struct *part, unsigned int inflight[2])
 		inflight[1] = 0;
 }
 
-struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
-{
-	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
-
-	if (unlikely(partno < 0 || partno >= ptbl->len))
-		return NULL;
-	return rcu_dereference(ptbl->part[partno]);
-}
-
-/**
- * disk_get_part - get partition
- * @disk: disk to look partition from
- * @partno: partition number
- *
- * Look for partition @partno from @disk.  If found, increment
- * reference count and return it.
- *
- * CONTEXT:
- * Don't care.
- *
- * RETURNS:
- * Pointer to the found partition on success, NULL if not found.
- */
-struct hd_struct *disk_get_part(struct gendisk *disk, int partno)
-{
-	struct hd_struct *part;
-
-	rcu_read_lock();
-	part = __disk_get_part(disk, partno);
-	if (part)
-		get_device(part_to_dev(part));
-	rcu_read_unlock();
-
-	return part;
-}
-
 /**
  * disk_part_iter_init - initialize partition iterator
  * @piter: iterator to initialize
@@ -224,7 +190,7 @@ EXPORT_SYMBOL_GPL(disk_part_iter_init);
  * CONTEXT:
  * Don't care.
  */
-struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
+struct block_device *disk_part_iter_next(struct disk_part_iter *piter)
 {
 	struct disk_part_tbl *ptbl;
 	int inc, end;
@@ -251,19 +217,18 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 
 	/* iterate to the next partition */
 	for (; piter->idx != end; piter->idx += inc) {
-		struct hd_struct *part;
+		struct block_device *part;
 
 		part = rcu_dereference(ptbl->part[piter->idx]);
 		if (!part)
 			continue;
-		if (!bdev_nr_sectors(part->bdev) &&
+		if (!bdev_nr_sectors(part) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
 		      piter->idx == 0))
 			continue;
 
-		get_device(part_to_dev(part));
-		piter->part = part;
+		piter->part = bdgrab(part);
 		piter->idx += inc;
 		break;
 	}
@@ -285,15 +250,16 @@ EXPORT_SYMBOL_GPL(disk_part_iter_next);
  */
 void disk_part_iter_exit(struct disk_part_iter *piter)
 {
-	disk_put_part(piter->part);
+	if (piter->part)
+		bdput(piter->part);
 	piter->part = NULL;
 }
 EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 
-static inline int sector_in_part(struct hd_struct *part, sector_t sector)
+static inline int sector_in_part(struct block_device *part, sector_t sector)
 {
-	return part->start_sect <= sector &&
-		sector < part->start_sect + bdev_nr_sectors(part->bdev);
+	return part->bd_start_sect <= sector &&
+		sector < part->bd_start_sect + bdev_nr_sectors(part);
 }
 
 /**
@@ -313,36 +279,28 @@ static inline int sector_in_part(struct hd_struct *part, sector_t sector)
  * Found partition on success, part0 is returned if no partition matches
  * or the matched partition is being deleted.
  */
-struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
+struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
 {
 	struct disk_part_tbl *ptbl;
-	struct hd_struct *part;
+	struct block_device *part;
 	int i;
 
 	rcu_read_lock();
 	ptbl = rcu_dereference(disk->part_tbl);
 
 	part = rcu_dereference(ptbl->last_lookup);
-	if (part && sector_in_part(part, sector) && hd_struct_try_get(part))
+	if (part && sector_in_part(part, sector))
 		goto out_unlock;
 
 	for (i = 1; i < ptbl->len; i++) {
 		part = rcu_dereference(ptbl->part[i]);
-
 		if (part && sector_in_part(part, sector)) {
-			/*
-			 * only live partition can be cached for lookup,
-			 * so use-after-free on cached & deleting partition
-			 * can be avoided
-			 */
-			if (!hd_struct_try_get(part))
-				break;
 			rcu_assign_pointer(ptbl->last_lookup, part);
 			goto out_unlock;
 		}
 	}
 
-	part = &disk->part0;
+	part = disk->part0;
 out_unlock:
 	rcu_read_unlock();
 	return part;
@@ -560,7 +518,7 @@ static int blk_mangle_minor(int minor)
 
 /**
  * blk_alloc_devt - allocate a dev_t for a partition
- * @part: partition to allocate dev_t for
+ * @bdev: partition to allocate dev_t for
  * @devt: out parameter for resulting dev_t
  *
  * Allocate a dev_t for block device.
@@ -572,14 +530,14 @@ static int blk_mangle_minor(int minor)
  * CONTEXT:
  * Might sleep.
  */
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
+int blk_alloc_devt(struct block_device *bdev, dev_t *devt)
 {
-	struct gendisk *disk = part_to_disk(part);
+	struct gendisk *disk = bdev->bd_disk;
 	int idx;
 
 	/* in consecutive minor range? */
-	if (part->partno < disk->minors) {
-		*devt = MKDEV(disk->major, disk->first_minor + part->partno);
+	if (bdev->bd_partno < disk->minors) {
+		*devt = MKDEV(disk->major, disk->first_minor + bdev->bd_partno);
 		return 0;
 	}
 
@@ -636,7 +594,7 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 {
 	struct device *ddev = disk_to_dev(disk);
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	int err;
 
 	ddev->parent = parent;
@@ -668,7 +626,8 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	 */
 	pm_runtime_set_memalloc_noio(ddev, true);
 
-	disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
+	disk->part0->bd_holder_dir =
+		kobject_create_and_add("holders", &ddev->kobj);
 	disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
 
 	if (disk->flags & GENHD_FL_HIDDEN) {
@@ -685,7 +644,7 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	/* announce possible partitions */
 	disk_part_iter_init(&piter, disk, 0);
 	while ((part = disk_part_iter_next(&piter)))
-		kobject_uevent(&part_to_dev(part)->kobj, KOBJ_ADD);
+		kobject_uevent(bdev_kobj(part), KOBJ_ADD);
 	disk_part_iter_exit(&piter);
 
 	if (disk->queue->backing_dev_info->dev) {
@@ -734,7 +693,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 
 	disk->flags |= GENHD_FL_UP;
 
-	retval = blk_alloc_devt(&disk->part0, &devt);
+	retval = blk_alloc_devt(disk->part0, &devt);
 	if (retval) {
 		WARN_ON(1);
 		return;
@@ -761,7 +720,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		bdev_add(disk->part0.bdev, devt);
+		bdev_add(disk->part0, devt);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -791,14 +750,8 @@ void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk)
 }
 EXPORT_SYMBOL(device_add_disk_no_queue_reg);
 
-static void invalidate_partition(struct gendisk *disk, int partno)
+static void invalidate_partition(struct block_device *bdev)
 {
-	struct block_device *bdev;
-
-	bdev = bdget_disk(disk, partno);
-	if (!bdev)
-		return;
-
 	fsync_bdev(bdev);
 	__invalidate_device(bdev, true);
 
@@ -807,7 +760,6 @@ static void invalidate_partition(struct gendisk *disk, int partno)
 	 * as last inode reference is dropped.
 	 */
 	remove_inode_hash(bdev->bd_inode);
-	bdput(bdev);
 }
 
 /**
@@ -832,7 +784,7 @@ static void invalidate_partition(struct gendisk *disk, int partno)
 void del_gendisk(struct gendisk *disk)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	might_sleep();
 
@@ -851,12 +803,12 @@ void del_gendisk(struct gendisk *disk)
 	disk_part_iter_init(&piter, disk,
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
 	while ((part = disk_part_iter_next(&piter))) {
-		invalidate_partition(disk, part->partno);
+		invalidate_partition(part);
 		delete_partition(part);
 	}
 	disk_part_iter_exit(&piter);
 
-	invalidate_partition(disk, 0);
+	invalidate_partition(disk->part0);
 	set_capacity(disk, 0);
 	disk->flags &= ~GENHD_FL_UP;
 	up_write(&disk->lookup_sem);
@@ -873,11 +825,11 @@ void del_gendisk(struct gendisk *disk)
 
 	blk_unregister_queue(disk);
 
-	kobject_put(disk->part0.holder_dir);
+	kobject_put(disk->part0->bd_holder_dir);
 	kobject_put(disk->slave_dir);
 
-	part_stat_set_all(&disk->part0, 0);
-	disk->part0.stamp = 0;
+	part_stat_set_all(disk->part0, 0);
+	disk->part0->bd_stamp = 0;
 	if (!sysfs_deprecated)
 		sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
 	pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
@@ -945,13 +897,13 @@ void blk_request_module(dev_t devt)
  */
 struct block_device *bdget_disk(struct gendisk *disk, int partno)
 {
-	struct hd_struct *part;
 	struct block_device *bdev = NULL;
 
-	part = disk_get_part(disk, partno);
-	if (part)
-		bdev = bdget_part(part);
-	disk_put_part(part);
+	rcu_read_lock();
+	bdev = __bdget_disk(disk, partno);
+	if (bdev)
+		bdgrab(bdev);
+	rcu_read_unlock();
 
 	return bdev;
 }
@@ -971,7 +923,7 @@ void __init printk_all_partitions(void)
 	while ((dev = class_dev_iter_next(&iter))) {
 		struct gendisk *disk = dev_to_disk(dev);
 		struct disk_part_iter piter;
-		struct hd_struct *part;
+		struct block_device *part;
 		char name_buf[BDEVNAME_SIZE];
 		char devt_buf[BDEVT_SIZE];
 
@@ -990,13 +942,14 @@ void __init printk_all_partitions(void)
 		 */
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter))) {
-			bool is_part0 = part == &disk->part0;
+			bool is_part0 = part == disk->part0;
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
-			       bdevt_str(part_devt(part), devt_buf),
-			       bdev_nr_sectors(part->bdev) >> 1
-			       , disk_name(disk, part->partno, name_buf),
-			       part->info ? part->info->uuid : "");
+			       bdevt_str(part->bd_dev, devt_buf),
+			       bdev_nr_sectors(part) >> 1,
+			       disk_name(disk, part->bd_partno, name_buf),
+			       part->bd_meta_info ?
+					part->bd_meta_info->uuid : "");
 			if (is_part0) {
 				if (dev->parent && dev->parent->driver)
 					printk(" driver: %s\n",
@@ -1072,7 +1025,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 {
 	struct gendisk *sgp = v;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	char buf[BDEVNAME_SIZE];
 
 	/* Don't show non-partitionable removeable devices or empty devices */
@@ -1086,9 +1039,9 @@ static int show_partition(struct seq_file *seqf, void *v)
 	disk_part_iter_init(&piter, sgp, DISK_PITER_INCL_PART0);
 	while ((part = disk_part_iter_next(&piter)))
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
-			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
-			   bdev_nr_sectors(part->bdev) >> 1,
-			   disk_name(sgp, part->partno, buf));
+			   MAJOR(part->bd_dev), MINOR(part->bd_dev),
+			   bdev_nr_sectors(part) >> 1,
+			   disk_name(sgp, part->bd_partno, buf));
 	disk_part_iter_exit(&piter);
 
 	return 0;
@@ -1167,24 +1120,22 @@ static ssize_t disk_ro_show(struct device *dev,
 ssize_t part_size_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
+	return sprintf(buf, "%llu\n", bdev_nr_sectors(dev_to_bdev(dev)));
 }
 
 ssize_t part_stat_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	struct request_queue *q = part_to_disk(p)->queue;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev->bd_disk->queue;
 	struct disk_stats stat;
 	unsigned int inflight;
 
-	part_stat_read_all(p, &stat);
+	part_stat_read_all(bdev, &stat);
 	if (queue_is_mq(q))
-		inflight = blk_mq_in_flight(q, p);
+		inflight = blk_mq_in_flight(q, bdev);
 	else
-		inflight = part_in_flight(p);
+		inflight = part_in_flight(bdev);
 
 	return sprintf(buf,
 		"%8lu %8lu %8llu %8u "
@@ -1219,14 +1170,14 @@ ssize_t part_stat_show(struct device *dev,
 ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
 			   char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	struct request_queue *q = part_to_disk(p)->queue;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev->bd_disk->queue;
 	unsigned int inflight[2];
 
 	if (queue_is_mq(q))
-		blk_mq_in_flight_rw(q, p, inflight);
+		blk_mq_in_flight_rw(q, bdev, inflight);
 	else
-		part_in_flight_rw(p, inflight);
+		part_in_flight_rw(bdev, inflight);
 
 	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
 }
@@ -1274,16 +1225,14 @@ static DEVICE_ATTR(badblocks, 0644, disk_badblocks_show, disk_badblocks_store);
 ssize_t part_fail_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%d\n", p->make_it_fail);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->make_it_fail);
 }
 
 ssize_t part_fail_store(struct device *dev,
 			struct device_attribute *attr,
 			const char *buf, size_t count)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *p = dev_to_bdev(dev);
 	int i;
 
 	if (count > 0 && sscanf(buf, "%d", &i) > 0)
@@ -1444,9 +1393,9 @@ static void disk_release(struct device *dev)
 	disk_release_events(disk);
 	kfree(disk->random);
 	disk_replace_part_tbl(disk, NULL);
-	hd_free_part(&disk->part0);
 	if (disk->queue)
 		blk_put_queue(disk->queue);
+	bdput(disk->part0);
 	kfree(disk);
 }
 struct class block_class = {
@@ -1482,7 +1431,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 {
 	struct gendisk *gp = v;
 	struct disk_part_iter piter;
-	struct hd_struct *hd;
+	struct block_device *hd;
 	char buf[BDEVNAME_SIZE];
 	unsigned int inflight;
 	struct disk_stats stat;
@@ -1510,8 +1459,8 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 			   "%lu %lu %lu %u "
 			   "%lu %u"
 			   "\n",
-			   MAJOR(part_devt(hd)), MINOR(part_devt(hd)),
-			   disk_name(gp, hd->partno, buf),
+			   MAJOR(hd->bd_dev), MINOR(hd->bd_dev),
+			   disk_name(gp, hd->bd_partno, buf),
 			   stat.ios[STAT_READ],
 			   stat.merges[STAT_READ],
 			   stat.sectors[STAT_READ],
@@ -1567,9 +1516,9 @@ dev_t blk_lookup_devt(const char *name, int partno)
 	struct device *dev;
 
 	class_dev_iter_init(&iter, &block_class, NULL, &disk_type);
-	while ((dev = class_dev_iter_next(&iter))) {
+	while ((dev = class_dev_iter_next(&iter)) && !devt) {
 		struct gendisk *disk = dev_to_disk(dev);
-		struct hd_struct *part;
+		struct block_device *bdev;
 
 		if (strcmp(dev_name(dev), name))
 			continue;
@@ -1580,15 +1529,13 @@ dev_t blk_lookup_devt(const char *name, int partno)
 			 */
 			devt = MKDEV(MAJOR(dev->devt),
 				     MINOR(dev->devt) + partno);
-			break;
+		} else {
+			rcu_read_lock();
+			bdev = __bdget_disk(disk, partno);
+			if (bdev)
+				devt = bdev->bd_dev;
+			rcu_read_unlock();
 		}
-		part = disk_get_part(disk, partno);
-		if (part) {
-			devt = part_devt(part);
-			disk_put_part(part);
-			break;
-		}
-		disk_put_part(part);
 	}
 	class_dev_iter_exit(&iter);
 	return devt;
@@ -1610,26 +1557,17 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk)
 		return NULL;
 
-	disk->part0.bdev = bdev_alloc(disk, 0);
-	if (!disk->part0.bdev)
+	disk->part0 = bdev_alloc(disk, 0);
+	if (!disk->part0)
 		goto out_free_disk;
 
-	disk->part0.dkstats = alloc_percpu(struct disk_stats);
-	if (!disk->part0.dkstats)
-		goto out_bdput;
-
 	init_rwsem(&disk->lookup_sem);
 	disk->node_id = node_id;
 	if (disk_expand_part_tbl(disk, 0))
-		goto out_free_bdstats;
+		goto out_bdput;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
-	rcu_assign_pointer(ptbl->part[0], &disk->part0);
-
-	if (hd_ref_init(&disk->part0)) {
-		hd_free_part(&disk->part0);
-		return NULL;
-	}
+	rcu_assign_pointer(ptbl->part[0], disk->part0);
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
@@ -1638,10 +1576,8 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	device_initialize(disk_to_dev(disk));
 	return disk;
 
-out_free_bdstats:
-	free_percpu(disk->part0.dkstats);
 out_bdput:
-	bdput(disk->part0.bdev);
+	bdput(disk->part0);
 out_free_disk:
 	kfree(disk);
 	return NULL;
@@ -1678,16 +1614,16 @@ static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 void set_disk_ro(struct gendisk *disk, int flag)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
-	if (disk->part0.policy != flag) {
+	if (disk->part0->bd_policy != flag) {
 		set_disk_ro_uevent(disk, flag);
-		disk->part0.policy = flag;
+		disk->part0->bd_policy = flag;
 	}
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter)))
-		part->policy = flag;
+		part->bd_policy = flag;
 	disk_part_iter_exit(&piter);
 }
 
@@ -1697,7 +1633,7 @@ int bdev_read_only(struct block_device *bdev)
 {
 	if (!bdev)
 		return 0;
-	return bdev->bd_part->policy;
+	return bdev->bd_policy;
 }
 
 EXPORT_SYMBOL(bdev_read_only);
diff --git a/block/ioctl.c b/block/ioctl.c
index 6b785181344fe1..aa9546e5d6a1bd 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -35,7 +35,7 @@ static int blkpg_do_ioctl(struct block_device *bdev,
 	start = p.start >> SECTOR_SHIFT;
 	length = p.length >> SECTOR_SHIFT;
 
-	/* check for fit in a hd_struct */
+	/* check for fit in a sector_t */
 	if (sizeof(sector_t) < sizeof(long long)) {
 		long pstart = start, plength = length;
 
@@ -354,7 +354,7 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 		if (ret)
 			return ret;
 	}
-	bdev->bd_part->policy = n;
+	bdev->bd_policy = n;
 	return 0;
 }
 
diff --git a/block/partitions/core.c b/block/partitions/core.c
index aae857c22af05d..0269128219decc 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -182,44 +182,39 @@ static struct parsed_partitions *check_partition(struct gendisk *hd,
 static ssize_t part_partition_show(struct device *dev,
 				   struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%d\n", p->partno);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->bd_partno);
 }
 
 static ssize_t part_start_show(struct device *dev,
 			       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%llu\n",(unsigned long long)p->start_sect);
+	return sprintf(buf, "%llu\n", dev_to_bdev(dev)->bd_start_sect);
 }
 
 static ssize_t part_ro_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	return sprintf(buf, "%d\n", p->policy ? 1 : 0);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->bd_policy ? 1 : 0);
 }
 
 static ssize_t part_alignment_offset_show(struct device *dev,
 					  struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *bdev = dev_to_bdev(dev);
 
 	return sprintf(buf, "%u\n",
-		queue_limit_alignment_offset(&part_to_disk(p)->queue->limits,
-				p->start_sect));
+		queue_limit_alignment_offset(&bdev->bd_disk->queue->limits,
+				bdev->bd_start_sect));
 }
 
 static ssize_t part_discard_alignment_show(struct device *dev,
 					   struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *bdev = dev_to_bdev(dev);
 
 	return sprintf(buf, "%u\n",
-		queue_limit_discard_alignment(&part_to_disk(p)->queue->limits,
-				p->start_sect));
+		queue_limit_discard_alignment(&bdev->bd_disk->queue->limits,
+				bdev->bd_start_sect));
 }
 
 static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
@@ -264,19 +259,19 @@ static const struct attribute_group *part_attr_groups[] = {
 
 static void part_release(struct device *dev)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *p = dev_to_bdev(dev);
+
 	blk_free_devt(dev->devt);
-	hd_free_part(p);
-	kfree(p);
+	bdput(p);
 }
 
 static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
 {
-	struct hd_struct *part = dev_to_part(dev);
+	struct block_device *part = dev_to_bdev(dev);
 
-	add_uevent_var(env, "PARTN=%u", part->partno);
-	if (part->info && part->info->volname[0])
-		add_uevent_var(env, "PARTNAME=%s", part->info->volname);
+	add_uevent_var(env, "PARTN=%u", part->bd_partno);
+	if (part->bd_meta_info && part->bd_meta_info->volname[0])
+		add_uevent_var(env, "PARTNAME=%s", part->bd_meta_info->volname);
 	return 0;
 }
 
@@ -287,72 +282,28 @@ struct device_type part_type = {
 	.uevent		= part_uevent,
 };
 
-static void hd_struct_free_work(struct work_struct *work)
-{
-	struct hd_struct *part =
-		container_of(to_rcu_work(work), struct hd_struct, rcu_work);
-	struct gendisk *disk = part_to_disk(part);
-
-	/*
-	 * Release the disk reference acquired in delete_partition here.
-	 * We can't release it in hd_struct_free because the final put_device
-	 * needs process context and thus can't be run directly from a
-	 * percpu_ref ->release handler.
-	 */
-	put_device(disk_to_dev(disk));
-
-	part->start_sect = 0;
-	bdev_set_nr_sectors(part->bdev, 0);
-	part_stat_set_all(part, 0);
-	put_device(part_to_dev(part));
-}
-
-static void hd_struct_free(struct percpu_ref *ref)
-{
-	struct hd_struct *part = container_of(ref, struct hd_struct, ref);
-	struct gendisk *disk = part_to_disk(part);
-	struct disk_part_tbl *ptbl =
-		rcu_dereference_protected(disk->part_tbl, 1);
-
-	rcu_assign_pointer(ptbl->last_lookup, NULL);
-
-	INIT_RCU_WORK(&part->rcu_work, hd_struct_free_work);
-	queue_rcu_work(system_wq, &part->rcu_work);
-}
-
-int hd_ref_init(struct hd_struct *part)
-{
-	if (percpu_ref_init(&part->ref, hd_struct_free, 0, GFP_KERNEL))
-		return -ENOMEM;
-	return 0;
-}
-
 /*
  * Must be called either with bd_mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
-void delete_partition(struct hd_struct *part)
+void delete_partition(struct block_device *part)
 {
-	struct gendisk *disk = part_to_disk(part);
+	struct gendisk *disk = part->bd_disk;
 	struct disk_part_tbl *ptbl =
 		rcu_dereference_protected(disk->part_tbl, 1);
 
-	/*
-	 * ->part_tbl is referenced in this part's release handler, so
-	 *  we have to hold the disk device
-	 */
-	get_device(disk_to_dev(disk));
-	rcu_assign_pointer(ptbl->part[part->partno], NULL);
-	kobject_put(part->holder_dir);
+	rcu_assign_pointer(ptbl->part[part->bd_partno], NULL);
+	rcu_assign_pointer(ptbl->last_lookup, NULL);
+	kobject_put(part->bd_holder_dir);
 	device_del(part_to_dev(part));
 
 	/*
 	 * Remove the block device from the inode hash, so that it cannot be
 	 * looked up while waiting for the RCU grace period.
 	 */
-	remove_inode_hash(part->bdev->bd_inode);
+	remove_inode_hash(part->bd_inode);
 
-	percpu_ref_kill(&part->ref);
+	put_device(part_to_dev(part));
 }
 
 static ssize_t whole_disk_show(struct device *dev,
@@ -366,11 +317,11 @@ static DEVICE_ATTR(whole_disk, 0444, whole_disk_show, NULL);
  * Must be called either with bd_mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
-static struct hd_struct *add_partition(struct gendisk *disk, int partno,
+static struct block_device *add_partition(struct gendisk *disk, int partno,
 				sector_t start, sector_t len, int flags,
 				struct partition_meta_info *info)
 {
-	struct hd_struct *p;
+	struct block_device *p;
 	dev_t devt = MKDEV(0, 0);
 	struct device *ddev = disk_to_dev(disk);
 	struct device *pdev;
@@ -404,36 +355,22 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (ptbl->part[partno])
 		return ERR_PTR(-EBUSY);
 
-	p = kzalloc(sizeof(*p), GFP_KERNEL);
+	p = bdev_alloc(disk, partno);
 	if (!p)
-		return ERR_PTR(-EBUSY);
-
-	err = -ENOMEM;
-	p->dkstats = alloc_percpu(struct disk_stats);
-	if (!p->dkstats)
-		goto out_free;
-
-	p->bdev = bdev_alloc(disk, partno);
-	if (!p->bdev)
-		goto out_free_stats;
-
-	pdev = part_to_dev(p);
+		return ERR_PTR(-ENOMEM);
 
-	p->start_sect = start;
-	bdev_set_nr_sectors(p->bdev, len);
-	p->partno = partno;
-	p->policy = get_disk_ro(disk);
+	p->bd_start_sect = start;
+	bdev_set_nr_sectors(p, len);
+	p->bd_policy = get_disk_ro(disk);
 
 	if (info) {
-		struct partition_meta_info *pinfo;
-
-		pinfo = kzalloc_node(sizeof(*pinfo), GFP_KERNEL, disk->node_id);
-		if (!pinfo)
+		err = -ENOMEM;
+		p->bd_meta_info = kmemdup(info, sizeof(*info), GFP_KERNEL);
+		if (!p->bd_meta_info)
 			goto out_bdput;
-		memcpy(pinfo, info, sizeof(*info));
-		p->info = pinfo;
 	}
 
+	pdev = part_to_dev(p);
 	dname = dev_name(ddev);
 	if (isdigit(dname[strlen(dname) - 1]))
 		dev_set_name(pdev, "%sp%d", dname, partno);
@@ -457,8 +394,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		goto out_put;
 
 	err = -ENOMEM;
-	p->holder_dir = kobject_create_and_add("holders", &pdev->kobj);
-	if (!p->holder_dir)
+	p->bd_holder_dir = kobject_create_and_add("holders", &pdev->kobj);
+	if (!p->bd_holder_dir)
 		goto out_del;
 
 	dev_set_uevent_suppress(pdev, 0);
@@ -468,15 +405,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 			goto out_del;
 	}
 
-	err = hd_ref_init(p);
-	if (err) {
-		if (flags & ADDPART_FLAG_WHOLEDISK)
-			goto out_remove_file;
-		goto out_del;
-	}
-
 	/* everything is up and running, commence */
-	bdev_add(p->bdev, devt);
+	bdev_add(p, devt);
 	rcu_assign_pointer(ptbl->part[partno], p);
 
 	/* suppress uevent if the disk suppresses it */
@@ -485,19 +415,13 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	return p;
 
 out_free_info:
-	kfree(p->info);
+	kfree(p->bd_meta_info);
 out_bdput:
-	bdput(p->bdev);
-out_free_stats:
-	free_percpu(p->dkstats);
-out_free:
-	kfree(p);
+	bdput(p);
 	return ERR_PTR(err);
 
-out_remove_file:
-	device_remove_file(pdev, &dev_attr_whole_disk);
 out_del:
-	kobject_put(p->holder_dir);
+	kobject_put(p->bd_holder_dir);
 	device_del(pdev);
 out_put:
 	put_device(pdev);
@@ -508,14 +432,14 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 		sector_t length, int skip_partno)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	bool overlap = false;
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
-		if (part->partno == skip_partno ||
-		    start >= part->start_sect + bdev_nr_sectors(part->bdev) ||
-		    start + length <= part->start_sect)
+		if (part->bd_partno == skip_partno ||
+		    start >= part->bd_start_sect + bdev_nr_sectors(part) ||
+		    start + length <= part->bd_start_sect)
 			continue;
 		overlap = true;
 		break;
@@ -528,7 +452,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 int bdev_add_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length)
 {
-	struct hd_struct *part;
+	struct block_device *part;
 
 	mutex_lock(&bdev->bd_mutex);
 	if (partition_overlaps(bdev->bd_disk, start, length, -1)) {
@@ -544,76 +468,59 @@ int bdev_add_partition(struct block_device *bdev, int partno,
 
 int bdev_del_partition(struct block_device *bdev, int partno)
 {
-	struct block_device *bdevp;
-	struct hd_struct *part = NULL;
+	struct block_device *part;
 	int ret;
 
-	bdevp = bdget_disk(bdev->bd_disk, partno);
-	if (!bdevp)
+	part = bdget_disk(bdev->bd_disk, partno);
+	if (!part)
 		return -ENXIO;
 
-	mutex_lock(&bdevp->bd_mutex);
+	mutex_lock(&part->bd_mutex);
 	mutex_lock_nested(&bdev->bd_mutex, 1);
 
-	ret = -ENXIO;
-	part = disk_get_part(bdev->bd_disk, partno);
-	if (!part)
-		goto out_unlock;
-
 	ret = -EBUSY;
-	if (bdevp->bd_openers)
+	if (part->bd_openers)
 		goto out_unlock;
 
-	sync_blockdev(bdevp);
-	invalidate_bdev(bdevp);
+	sync_blockdev(part);
+	invalidate_bdev(part);
 
 	delete_partition(part);
 	ret = 0;
 out_unlock:
 	mutex_unlock(&bdev->bd_mutex);
-	mutex_unlock(&bdevp->bd_mutex);
-	bdput(bdevp);
-	if (part)
-		disk_put_part(part);
+	mutex_unlock(&part->bd_mutex);
+	bdput(part);
 	return ret;
 }
 
 int bdev_resize_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length)
 {
-	struct block_device *bdevp;
-	struct hd_struct *part;
+	struct block_device *part;
 	int ret = 0;
 
-	part = disk_get_part(bdev->bd_disk, partno);
+	part = bdget_disk(bdev->bd_disk, partno);
 	if (!part)
 		return -ENXIO;
 
-	ret = -ENOMEM;
-	bdevp = bdget_part(part);
-	if (!bdevp)
-		goto out_put_part;
-
-	mutex_lock(&bdevp->bd_mutex);
+	mutex_lock(&part->bd_mutex);
 	mutex_lock_nested(&bdev->bd_mutex, 1);
-
 	ret = -EINVAL;
-	if (start != part->start_sect)
+	if (start != part->bd_start_sect)
 		goto out_unlock;
 
 	ret = -EBUSY;
 	if (partition_overlaps(bdev->bd_disk, start, length, partno))
 		goto out_unlock;
 
-	bdev_set_nr_sectors(bdevp, length);
+	bdev_set_nr_sectors(part, length);
 
 	ret = 0;
 out_unlock:
-	mutex_unlock(&bdevp->bd_mutex);
+	mutex_unlock(&part->bd_mutex);
 	mutex_unlock(&bdev->bd_mutex);
-	bdput(bdevp);
-out_put_part:
-	disk_put_part(part);
+	bdput(part);
 	return ret;
 }
 
@@ -636,7 +543,7 @@ static bool disk_unlock_native_capacity(struct gendisk *disk)
 int blk_drop_partitions(struct block_device *bdev)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (bdev->bd_part_count)
 		return -EBUSY;
@@ -661,7 +568,7 @@ static bool blk_add_partition(struct gendisk *disk, struct block_device *bdev,
 {
 	sector_t size = state->parts[p].size;
 	sector_t from = state->parts[p].from;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (!size)
 		return true;
@@ -701,7 +608,7 @@ static bool blk_add_partition(struct gendisk *disk, struct block_device *bdev,
 
 	if (IS_BUILTIN(CONFIG_BLK_DEV_MD) &&
 	    (state->parts[p].flags & ADDPART_FLAG_RAID))
-		md_autodetect_dev(part_to_dev(part)->devt);
+		md_autodetect_dev(part->bd_dev);
 
 	return true;
 }
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index dc333dbe523281..09c86ef3f0fd93 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -2802,7 +2802,7 @@ bool drbd_rs_c_min_rate_throttle(struct drbd_device *device)
 	if (c_min_rate == 0)
 		return false;
 
-	curr_events = (int)part_stat_read_accum(&disk->part0, sectors) -
+	curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
 			atomic_read(&device->rs_sect_ev);
 
 	if (atomic_read(&device->ap_actlog_cnt)
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index ba56f3f05312f0..4537559829876e 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -1678,7 +1678,7 @@ void drbd_rs_controller_reset(struct drbd_device *device)
 	atomic_set(&device->rs_sect_in, 0);
 	atomic_set(&device->rs_sect_ev, 0);
 	device->rs_in_flight = 0;
-	device->rs_last_events = (int)part_stat_read_accum(&disk->part0, sectors);
+	device->rs_last_events = part_stat_read_accum(disk->part0, sectors);
 
 	/* Updating the RCU protected object in place is necessary since
 	   this function gets called from atomic context.
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 88baa6158eaee1..01757f9578dcb8 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1687,7 +1687,7 @@ static void zram_reset_device(struct zram *zram)
 	zram->disksize = 0;
 
 	set_capacity_and_notify(zram->disk, 0);
-	part_stat_set_all(&zram->disk->part0, 0);
+	part_stat_set_all(zram->disk->part0, 0);
 
 	up_write(&zram->init_lock);
 	/* I/O operation under all of CPU are done so let's free */
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index afac8d07c1bd00..85b1f2a9b72d68 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -475,7 +475,7 @@ struct search {
 	unsigned int		read_dirty_data:1;
 	unsigned int		cache_missed:1;
 
-	struct hd_struct	*part;
+	struct block_device	*part;
 	unsigned long		start_time;
 
 	struct btree_op		op;
@@ -1073,7 +1073,7 @@ struct detached_dev_io_private {
 	unsigned long		start_time;
 	bio_end_io_t		*bi_end_io;
 	void			*bi_private;
-	struct hd_struct	*part;
+	struct block_device	*part;
 };
 
 static void detached_dev_end_io(struct bio *bio)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 83fe1e7f13e6b0..c9438feefe55a3 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1607,7 +1607,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
 				 * (by eliminating DM's splitting and just using bio_split)
 				 */
 				part_stat_lock();
-				__dm_part_stat_sub(&dm_disk(md)->part0,
+				__dm_part_stat_sub(dm_disk(md)->part0,
 						   sectors[op_stat_group(bio_op(bio))], ci.sector_count);
 				part_stat_unlock();
 
@@ -2242,12 +2242,12 @@ EXPORT_SYMBOL_GPL(dm_put);
 static bool md_in_flight_bios(struct mapped_device *md)
 {
 	int cpu;
-	struct hd_struct *part = &dm_disk(md)->part0;
+	struct block_device *bdev = dm_disk(md)->part0;
 	long sum = 0;
 
 	for_each_possible_cpu(cpu) {
-		sum += part_stat_local_read_cpu(part, in_flight[0], cpu);
-		sum += part_stat_local_read_cpu(part, in_flight[1], cpu);
+		sum += part_stat_local_read_cpu(bdev, in_flight[0], cpu);
+		sum += part_stat_local_read_cpu(bdev, in_flight[1], cpu);
 	}
 
 	return sum != 0;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7ce6047c856ea2..0065736f05b428 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -464,7 +464,7 @@ struct md_io {
 	bio_end_io_t *orig_bi_end_io;
 	void *orig_bi_private;
 	unsigned long start_time;
-	struct hd_struct *part;
+	struct block_device *part;
 };
 
 static void md_end_io(struct bio *bio)
@@ -8441,7 +8441,7 @@ static int is_mddev_idle(struct mddev *mddev, int init)
 	rcu_read_lock();
 	rdev_for_each_rcu(rdev, mddev) {
 		struct gendisk *disk = rdev->bdev->bd_disk;
-		curr_events = (int)part_stat_read_accum(&disk->part0, sectors) -
+		curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
 			      atomic_read(&disk->sync_io);
 		/* sync IO will cause sync_io to increase before the disk_stats
 		 * as sync_io is counted when a request starts, and
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index dca34489a1dc9e..8d90235e4fcc5a 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -89,12 +89,12 @@ static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req,
 	if (!ns->bdev)
 		goto out;
 
-	host_reads = part_stat_read(ns->bdev->bd_part, ios[READ]);
-	data_units_read = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
-		sectors[READ]), 1000);
-	host_writes = part_stat_read(ns->bdev->bd_part, ios[WRITE]);
-	data_units_written = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
-		sectors[WRITE]), 1000);
+	host_reads = part_stat_read(ns->bdev, ios[READ]);
+	data_units_read =
+		DIV_ROUND_UP(part_stat_read(ns->bdev, sectors[READ]), 1000);
+	host_writes = part_stat_read(ns->bdev, ios[WRITE]);
+	data_units_written =
+		DIV_ROUND_UP(part_stat_read(ns->bdev, sectors[WRITE]), 1000);
 
 	put_unaligned_le64(host_reads, &slog->host_reads[0]);
 	put_unaligned_le64(data_units_read, &slog->data_units_read[0]);
@@ -120,12 +120,12 @@ static u16 nvmet_get_smart_log_all(struct nvmet_req *req,
 		/* we don't have the right data for file backed ns */
 		if (!ns->bdev)
 			continue;
-		host_reads += part_stat_read(ns->bdev->bd_part, ios[READ]);
+		host_reads += part_stat_read(ns->bdev, ios[READ]);
 		data_units_read += DIV_ROUND_UP(
-			part_stat_read(ns->bdev->bd_part, sectors[READ]), 1000);
-		host_writes += part_stat_read(ns->bdev->bd_part, ios[WRITE]);
+			part_stat_read(ns->bdev, sectors[READ]), 1000);
+		host_writes += part_stat_read(ns->bdev, ios[WRITE]);
 		data_units_written += DIV_ROUND_UP(
-			part_stat_read(ns->bdev->bd_part, sectors[WRITE]), 1000);
+			part_stat_read(ns->bdev, sectors[WRITE]), 1000);
 	}
 
 	put_unaligned_le64(host_reads, &slog->host_reads[0]);
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index db24e04ee9781e..1825fa8d05a780 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -432,7 +432,7 @@ dasd_state_ready_to_online(struct dasd_device * device)
 {
 	struct gendisk *disk;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	device->state = DASD_STATE_ONLINE;
 	if (device->block) {
@@ -445,7 +445,7 @@ dasd_state_ready_to_online(struct dasd_device * device)
 		disk = device->block->bdev->bd_disk;
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter)))
-			kobject_uevent(&part_to_dev(part)->kobj, KOBJ_CHANGE);
+			kobject_uevent(bdev_kobj(part), KOBJ_CHANGE);
 		disk_part_iter_exit(&piter);
 	}
 	return 0;
@@ -459,7 +459,7 @@ static int dasd_state_online_to_ready(struct dasd_device *device)
 	int rc;
 	struct gendisk *disk;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (device->discipline->online_to_ready) {
 		rc = device->discipline->online_to_ready(device);
@@ -472,7 +472,7 @@ static int dasd_state_online_to_ready(struct dasd_device *device)
 		disk = device->block->bdev->bd_disk;
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter)))
-			kobject_uevent(&part_to_dev(part)->kobj, KOBJ_CHANGE);
+			kobject_uevent(bdev_kobj(part), KOBJ_CHANGE);
 		disk_part_iter_exit(&piter);
 	}
 	return 0;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index a5a2ac4ca1ce9c..c2da068bd983f4 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -34,6 +34,7 @@
 #include <linux/falloc.h>
 #include <linux/uaccess.h>
 #include <linux/suspend.h>
+#include <linux/part_stat.h>
 #include "internal.h"
 
 struct bdev_inode {
@@ -788,23 +789,18 @@ static struct inode *bdev_alloc_inode(struct super_block *sb)
 
 static void bdev_free_inode(struct inode *inode)
 {
+	struct block_device *bdev = I_BDEV(inode);
+
+	kfree(bdev->bd_meta_info);
+	free_percpu(bdev->bd_stats);
 	kmem_cache_free(bdev_cachep, BDEV_I(inode));
 }
 
 static void init_once(void *foo)
 {
 	struct bdev_inode *ei = (struct bdev_inode *) foo;
-	struct block_device *bdev = &ei->bdev;
 
-	memset(bdev, 0, sizeof(*bdev));
-	mutex_init(&bdev->bd_mutex);
-#ifdef CONFIG_SYSFS
-	INIT_LIST_HEAD(&bdev->bd_holder_disks);
-#endif
-	bdev->bd_bdi = &noop_backing_dev_info;
 	inode_init_once(&ei->vfs_inode);
-	/* Initialize mutex for freeze. */
-	mutex_init(&bdev->bd_fsfreeze_mutex);
 }
 
 static void bdev_evict_inode(struct inode *inode)
@@ -876,12 +872,22 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 		return NULL;
 
 	bdev = I_BDEV(inode);
+	memset(bdev, 0, sizeof(*bdev));
 	spin_lock_init(&bdev->bd_size_lock);
+	mutex_init(&bdev->bd_mutex);
+	mutex_init(&bdev->bd_fsfreeze_mutex);
+	bdev->bd_bdi = &noop_backing_dev_info;
 	bdev->bd_disk = disk;
 	bdev->bd_partno = partno;
-	bdev->bd_super = NULL;
 	bdev->bd_inode = inode;
-	bdev->bd_part_count = 0;
+	bdev->bd_stats = alloc_percpu(struct disk_stats);
+	if (!bdev->bd_stats) {
+		iput(inode);
+		return NULL;
+	}
+#ifdef CONFIG_SYSFS
+	INIT_LIST_HEAD(&bdev->bd_holder_disks);
+#endif
 
 	inode->i_mode = S_IFBLK;
 	inode->i_rdev = 0;
@@ -920,11 +926,6 @@ struct block_device *bdgrab(struct block_device *bdev)
 }
 EXPORT_SYMBOL(bdgrab);
 
-struct block_device *bdget_part(struct hd_struct *part)
-{
-	return bdget(part_devt(part));
-}
-
 long nr_blockdev_pages(void)
 {
 	struct inode *inode;
@@ -1201,7 +1202,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	WARN_ON_ONCE(!bdev->bd_holder);
 
 	/* FIXME: remove the following once add_disk() handles errors */
-	if (WARN_ON(!disk->slave_dir || !bdev->bd_part->holder_dir))
+	if (WARN_ON(!disk->slave_dir || !bdev->bd_holder_dir))
 		goto out_unlock;
 
 	holder = bd_find_holder_disk(bdev, disk);
@@ -1220,24 +1221,24 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	holder->disk = disk;
 	holder->refcnt = 1;
 
-	ret = add_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+	ret = add_symlink(disk->slave_dir, bdev_kobj(bdev));
 	if (ret)
 		goto out_free;
 
-	ret = add_symlink(bdev->bd_part->holder_dir, &disk_to_dev(disk)->kobj);
+	ret = add_symlink(bdev->bd_holder_dir, &disk_to_dev(disk)->kobj);
 	if (ret)
 		goto out_del;
 	/*
 	 * bdev could be deleted beneath us which would implicitly destroy
 	 * the holder directory.  Hold on to it.
 	 */
-	kobject_get(bdev->bd_part->holder_dir);
+	kobject_get(bdev->bd_holder_dir);
 
 	list_add(&holder->list, &bdev->bd_holder_disks);
 	goto out_unlock;
 
 out_del:
-	del_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+	del_symlink(disk->slave_dir, bdev_kobj(bdev));
 out_free:
 	kfree(holder);
 out_unlock:
@@ -1265,10 +1266,10 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	holder = bd_find_holder_disk(bdev, disk);
 
 	if (!WARN_ON_ONCE(holder == NULL) && !--holder->refcnt) {
-		del_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
-		del_symlink(bdev->bd_part->holder_dir,
+		del_symlink(disk->slave_dir, bdev_kobj(bdev));
+		del_symlink(bdev->bd_holder_dir,
 			    &disk_to_dev(disk)->kobj);
-		kobject_put(bdev->bd_part->holder_dir);
+		kobject_put(bdev->bd_holder_dir);
 		list_del_init(&holder->list);
 		kfree(holder);
 	}
@@ -1392,11 +1393,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		first_open = true;
 
 		if (!bdev->bd_partno) {
-			ret = -ENXIO;
-			bdev->bd_part = disk_get_part(disk, 0);
-			if (!bdev->bd_part)
-				goto out_clear;
-
 			ret = 0;
 			if (disk->fops->open) {
 				ret = disk->fops->open(bdev, mode);
@@ -1422,19 +1418,16 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 				bdev_disk_changed(bdev, ret == -ENOMEDIUM);
 
 			if (ret)
-				goto out_clear;
+				goto out_unlock_bdev;
 		} else {
 			BUG_ON(for_part);
 			ret = __blkdev_get(bdev_whole(bdev), mode, NULL, 1);
 			if (ret)
-				goto out_clear;
+				goto out_unlock_bdev;
 			bdgrab(bdev_whole(bdev));
-			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
-			if (!(disk->flags & GENHD_FL_UP) ||
-			    !bdev->bd_part || !bdev_nr_sectors(bdev)) {
-				ret = -ENXIO;
-				goto out_clear;
-			}
+			ret = -ENXIO;
+			if (!bdev_nr_sectors(bdev))
+				goto out_put_whole;
 			set_init_blocksize(bdev);
 		}
 
@@ -1453,6 +1446,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 				goto out_unlock_bdev;
 		}
 	}
+
 	bdev->bd_openers++;
 	if (for_part) {
 		bdev->bd_part_count++;
@@ -1482,11 +1476,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 		put_disk_and_module(disk);
 	return 0;
 
- out_clear:
-	disk_put_part(bdev->bd_part);
-	bdev->bd_part = NULL;
-	if (bdev_is_partition(bdev))
-		__blkdev_put(bdev_whole(bdev), mode, 1);
+ out_put_whole:
+	__blkdev_put(bdev_whole(bdev), mode, 1);
  out_unlock_bdev:
 	if (!for_part && (mode & FMODE_EXCL))
 		bd_abort_claiming(bdev, holder);
@@ -1679,8 +1670,6 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		if (!bdev_is_partition(bdev) && disk->fops->release)
 			disk->fops->release(disk, mode);
 
-		disk_put_part(bdev->bd_part);
-		bdev->bd_part = NULL;
 		if (bdev_is_partition(bdev))
 			victim = bdev_whole(bdev);
 
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 6633b20224d509..c303a0ff0b1701 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4048,9 +4048,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	sbi->s_sb = sb;
 	sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
 	sbi->s_sb_block = sb_block;
-	if (sb->s_bdev->bd_part)
-		sbi->s_sectors_written_start =
-			part_stat_read(sb->s_bdev->bd_part, sectors[STAT_WRITE]);
+	sbi->s_sectors_written_start =
+		part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
 
 	/* Cleanup superblock name */
 	strreplace(sb->s_id, '/', '!');
@@ -5509,15 +5508,10 @@ static int ext4_commit_super(struct super_block *sb, int sync)
 	 */
 	if (!(sb->s_flags & SB_RDONLY))
 		ext4_update_tstamp(es, s_wtime);
-	if (sb->s_bdev->bd_part)
-		es->s_kbytes_written =
-			cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
-			    ((part_stat_read(sb->s_bdev->bd_part,
-					     sectors[STAT_WRITE]) -
-			      EXT4_SB(sb)->s_sectors_written_start) >> 1));
-	else
-		es->s_kbytes_written =
-			cpu_to_le64(EXT4_SB(sb)->s_kbytes_written);
+	es->s_kbytes_written =
+		cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
+		    ((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
+		      EXT4_SB(sb)->s_sectors_written_start) >> 1));
 	if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeclusters_counter))
 		ext4_free_blocks_count_set(es,
 			EXT4_C2B(EXT4_SB(sb), percpu_counter_sum_positive(
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index 4e27fe6ed3ae6a..075aa3a19ff5f1 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -62,11 +62,8 @@ static ssize_t session_write_kbytes_show(struct ext4_sb_info *sbi, char *buf)
 {
 	struct super_block *sb = sbi->s_buddy_cache->i_sb;
 
-	if (!sb->s_bdev->bd_part)
-		return snprintf(buf, PAGE_SIZE, "0\n");
 	return snprintf(buf, PAGE_SIZE, "%lu\n",
-			(part_stat_read(sb->s_bdev->bd_part,
-					sectors[STAT_WRITE]) -
+			(part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
 			 sbi->s_sectors_written_start) >> 1);
 }
 
@@ -74,12 +71,9 @@ static ssize_t lifetime_write_kbytes_show(struct ext4_sb_info *sbi, char *buf)
 {
 	struct super_block *sb = sbi->s_buddy_cache->i_sb;
 
-	if (!sb->s_bdev->bd_part)
-		return snprintf(buf, PAGE_SIZE, "0\n");
 	return snprintf(buf, PAGE_SIZE, "%llu\n",
 			(unsigned long long)(sbi->s_kbytes_written +
-			((part_stat_read(sb->s_bdev->bd_part,
-					 sectors[STAT_WRITE]) -
+			((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
 			  EXT4_SB(sb)->s_sectors_written_start) >> 1)));
 }
 
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 023462e80e58d5..54a1905af052cc 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1395,7 +1395,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	__u32 crc32 = 0;
 	int i;
 	int cp_payload_blks = __cp_payload(sbi);
-	struct super_block *sb = sbi->sb;
 	struct curseg_info *seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
 	u64 kbytes_written;
 	int err;
@@ -1489,9 +1488,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	start_blk += data_sum_blocks;
 
 	/* Record write statistics in the hot node summary */
-	kbytes_written = sbi->kbytes_written;
-	if (sb->s_bdev->bd_part)
-		kbytes_written += BD_PART_WRITTEN(sbi);
+	kbytes_written = sbi->kbytes_written + BD_PART_WRITTEN(sbi);
 
 	seg_i->journal->info.kbytes_written = cpu_to_le64(kbytes_written);
 
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index cb700d79729680..5f9522d4c727fb 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1675,7 +1675,7 @@ static inline bool f2fs_is_multi_device(struct f2fs_sb_info *sbi)
  * and the return value is in kbytes. s is of struct f2fs_sb_info.
  */
 #define BD_PART_WRITTEN(s)						 \
-(((u64)part_stat_read((s)->sb->s_bdev->bd_part, sectors[STAT_WRITE]) -   \
+(((u64)part_stat_read((s)->sb->s_bdev, sectors[STAT_WRITE]) -   \
 		(s)->sectors_written_start) >> 1)
 
 static inline void f2fs_update_time(struct f2fs_sb_info *sbi, int type)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index d4e7fab352bacb..fae92285f561b4 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3700,10 +3700,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	}
 
 	/* For write statistics */
-	if (sb->s_bdev->bd_part)
-		sbi->sectors_written_start =
-			(u64)part_stat_read(sb->s_bdev->bd_part,
-					    sectors[STAT_WRITE]);
+	sbi->sectors_written_start =
+		part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
 
 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index ec77ccfea923dc..24e876e849c512 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -90,11 +90,6 @@ static ssize_t free_segments_show(struct f2fs_attr *a,
 static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
-	struct super_block *sb = sbi->sb;
-
-	if (!sb->s_bdev->bd_part)
-		return sprintf(buf, "0\n");
-
 	return sprintf(buf, "%llu\n",
 			(unsigned long long)(sbi->kbytes_written +
 			BD_PART_WRITTEN(sbi)));
@@ -103,12 +98,8 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 static ssize_t features_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
-	struct super_block *sb = sbi->sb;
 	int len = 0;
 
-	if (!sb->s_bdev->bd_part)
-		return sprintf(buf, "0\n");
-
 	if (f2fs_sb_has_encrypt(sbi))
 		len += scnprintf(buf, PAGE_SIZE - len, "%s",
 						"encryption");
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 453b940b87d8e9..a30559501dfe17 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -8,6 +8,7 @@
 
 #include <linux/types.h>
 #include <linux/bvec.h>
+#include <linux/device.h>
 #include <linux/ktime.h>
 
 struct bio_set;
@@ -20,7 +21,13 @@ typedef void (bio_end_io_t) (struct bio *);
 struct bio_crypt_ctx;
 
 struct block_device {
+	sector_t		bd_start_sect;
+	unsigned long		bd_stamp;
+	struct disk_stats __percpu *bd_stats;
+	u8			bd_partno;
+	int			bd_policy;
 	dev_t			bd_dev;
+	struct device		bd_device;
 	int			bd_openers;
 	struct inode *		bd_inode;	/* will die */
 	struct super_block *	bd_super;
@@ -32,8 +39,7 @@ struct block_device {
 #ifdef CONFIG_SYSFS
 	struct list_head	bd_holder_disks;
 #endif
-	u8			bd_partno;
-	struct hd_struct *	bd_part;
+	struct kobject		*bd_holder_dir;
 	/* number of times partitions within this device have been opened. */
 	unsigned		bd_part_count;
 
@@ -45,13 +51,22 @@ struct block_device {
 	int			bd_fsfreeze_count;
 	/* Mutex for freeze */
 	struct mutex		bd_fsfreeze_mutex;
+
+	struct partition_meta_info *bd_meta_info;
+#ifdef CONFIG_FAIL_MAKE_REQUEST
+	int			bd_make_it_fail;
+#endif
 } __randomize_layout;
 
 #define bdev_whole(_bdev) \
-	((_bdev)->bd_disk->part0.bdev)
+	((_bdev)->bd_disk->part0)
+
+#define dev_to_bdev(device) \
+	container_of((device), struct block_device, bd_device)
+#define part_to_dev(part)	(&((part)->bd_device))
 
 #define bdev_kobj(_bdev) \
-	(&part_to_dev((_bdev)->bd_part)->kobj)
+	(&part_to_dev((_bdev))->kobj)
 
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 696b2f9c5529d8..ed40144ab80339 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -191,7 +191,7 @@ struct request {
 	};
 
 	struct gendisk *rq_disk;
-	struct hd_struct *part;
+	struct block_device *bdev;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 	/* Time that the first bio started allocating this request. */
 	u64 alloc_time_ns;
@@ -1488,7 +1488,7 @@ static inline int bdev_alignment_offset(struct block_device *bdev)
 		return -1;
 	if (bdev_is_partition(bdev))
 		return queue_limit_alignment_offset(&q->limits,
-				bdev->bd_part->start_sect);
+				bdev->bd_start_sect);
 	return q->limits.alignment_offset;
 }
 
@@ -1529,7 +1529,7 @@ static inline int bdev_discard_alignment(struct block_device *bdev)
 
 	if (bdev_is_partition(bdev))
 		return queue_limit_discard_alignment(&q->limits,
-				bdev->bd_part->start_sect);
+				bdev->bd_start_sect);
 	return q->limits.discard_alignment;
 }
 
@@ -1943,9 +1943,9 @@ unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		unsigned long start_time);
 
-unsigned long part_start_io_acct(struct gendisk *disk, struct hd_struct **part,
-				 struct bio *bio);
-void part_end_io_acct(struct hd_struct *part, struct bio *bio,
+unsigned long part_start_io_acct(struct gendisk *disk,
+		struct block_device **part, struct bio *bio);
+void part_end_io_acct(struct block_device *part, struct bio *bio,
 		      unsigned long start_time);
 
 /**
@@ -1996,7 +1996,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
 void bdev_add(struct block_device *bdev, dev_t dev);
 struct block_device *bdget(dev_t dev);
 struct block_device *I_BDEV(struct inode *inode);
-struct block_device *bdget_part(struct hd_struct *part);
 struct block_device *bdgrab(struct block_device *bdev);
 void bdput(struct block_device *);
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index e01618dfafc05c..6da7bfd6b0bd2a 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -19,11 +19,6 @@
 #include <linux/blk_types.h>
 #include <asm/local.h>
 
-#define dev_to_disk(device)	container_of((device), struct gendisk, part0.__dev)
-#define dev_to_part(device)	container_of((device), struct hd_struct, __dev)
-#define disk_to_dev(disk)	(&(disk)->part0.__dev)
-#define part_to_dev(part)	(&((part)->__dev))
-
 extern const struct device_type disk_type;
 extern struct device_type part_type;
 extern struct class block_class;
@@ -50,23 +45,6 @@ struct partition_meta_info {
 	u8 volname[PARTITION_META_INFO_VOLNAMELTH];
 };
 
-struct hd_struct {
-	sector_t start_sect;
-	unsigned long stamp;
-	struct disk_stats __percpu *dkstats;
-	struct percpu_ref ref;
-
-	struct block_device *bdev;
-	struct device __dev;
-	struct kobject *holder_dir;
-	int policy, partno;
-	struct partition_meta_info *info;
-#ifdef CONFIG_FAIL_MAKE_REQUEST
-	int make_it_fail;
-#endif
-	struct rcu_work rcu_work;
-};
-
 /**
  * DOC: genhd capability flags
  *
@@ -141,8 +119,8 @@ enum {
 struct disk_part_tbl {
 	struct rcu_head rcu_head;
 	int len;
-	struct hd_struct __rcu *last_lookup;
-	struct hd_struct __rcu *part[];
+	struct block_device __rcu *last_lookup;
+	struct block_device __rcu *part[];
 };
 
 struct disk_events;
@@ -176,7 +154,7 @@ struct gendisk {
 	 * helpers.
 	 */
 	struct disk_part_tbl __rcu *part_tbl;
-	struct hd_struct part0;
+	struct block_device *part0;
 
 	const struct block_device_operations *fops;
 	struct request_queue *queue;
@@ -202,23 +180,17 @@ struct gendisk {
 	struct lockdep_map lockdep_map;
 };
 
+#define dev_to_disk(device) \
+	(dev_to_bdev(device)->bd_disk)
+#define disk_to_dev(disk) \
+	(part_to_dev((disk)->part0))
+
 #if IS_REACHABLE(CONFIG_CDROM)
 #define disk_to_cdi(disk)	((disk)->cdi)
 #else
 #define disk_to_cdi(disk)	NULL
 #endif
 
-static inline struct gendisk *part_to_disk(struct hd_struct *part)
-{
-	if (likely(part)) {
-		if (part->partno)
-			return dev_to_disk(part_to_dev(part)->parent);
-		else
-			return dev_to_disk(part_to_dev(part));
-	}
-	return NULL;
-}
-
 static inline int disk_max_parts(struct gendisk *disk)
 {
 	if (disk->flags & GENHD_FL_EXT_DEVT)
@@ -237,19 +209,6 @@ static inline dev_t disk_devt(struct gendisk *disk)
 	return MKDEV(disk->major, disk->first_minor);
 }
 
-static inline dev_t part_devt(struct hd_struct *part)
-{
-	return part_to_dev(part)->devt;
-}
-
-extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
-
-static inline void disk_put_part(struct hd_struct *part)
-{
-	if (likely(part))
-		put_device(part_to_dev(part));
-}
-
 /*
  * Smarter partition iterator without context limits.
  */
@@ -260,14 +219,14 @@ static inline void disk_put_part(struct hd_struct *part)
 
 struct disk_part_iter {
 	struct gendisk		*disk;
-	struct hd_struct	*part;
+	struct block_device	*part;
 	int			idx;
 	unsigned int		flags;
 };
 
 extern void disk_part_iter_init(struct disk_part_iter *piter,
 				 struct gendisk *disk, unsigned int flags);
-extern struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter);
+struct block_device *disk_part_iter_next(struct disk_part_iter *piter);
 extern void disk_part_iter_exit(struct disk_part_iter *piter);
 extern bool disk_has_partitions(struct gendisk *disk);
 
@@ -291,7 +250,7 @@ extern void set_disk_ro(struct gendisk *disk, int flag);
 
 static inline int get_disk_ro(struct gendisk *disk)
 {
-	return disk->part0.policy;
+	return disk->part0->bd_policy;
 }
 
 extern void disk_block_events(struct gendisk *disk);
@@ -305,7 +264,7 @@ extern void rand_initialize_disk(struct gendisk *disk);
 
 static inline sector_t get_start_sect(struct block_device *bdev)
 {
-	return bdev->bd_part->start_sect;
+	return bdev->bd_start_sect;
 }
 	
 static inline sector_t bdev_nr_sectors(struct block_device *bdev)
@@ -315,7 +274,7 @@ static inline sector_t bdev_nr_sectors(struct block_device *bdev)
 	
 static inline sector_t get_capacity(struct gendisk *disk)
 {
-	return bdev_nr_sectors(disk->part0.bdev);
+	return bdev_nr_sectors(disk->part0);
 }
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate);
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index 24125778ef3ec7..3b3621b4983a58 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -25,26 +25,26 @@ struct disk_stats {
 #define part_stat_unlock()	preempt_enable()
 
 #define part_stat_get_cpu(part, field, cpu)				\
-	(per_cpu_ptr((part)->dkstats, (cpu))->field)
+	(per_cpu_ptr((part)->bd_stats, (cpu))->field)
 
 #define part_stat_get(part, field)					\
 	part_stat_get_cpu(part, field, smp_processor_id())
 
 #define part_stat_read(part, field)					\
 ({									\
-	typeof((part)->dkstats->field) res = 0;				\
+	typeof((part)->bd_stats->field) res = 0;				\
 	unsigned int _cpu;						\
 	for_each_possible_cpu(_cpu)					\
-		res += per_cpu_ptr((part)->dkstats, _cpu)->field;	\
+		res += per_cpu_ptr((part)->bd_stats, _cpu)->field;	\
 	res;								\
 })
 
-static inline void part_stat_set_all(struct hd_struct *part, int value)
+static inline void part_stat_set_all(struct block_device *bdev, int value)
 {
 	int i;
 
 	for_each_possible_cpu(i)
-		memset(per_cpu_ptr(part->dkstats, i), value,
+		memset(per_cpu_ptr(bdev->bd_stats, i), value,
 				sizeof(struct disk_stats));
 }
 
@@ -54,13 +54,12 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 	 part_stat_read(part, field[STAT_DISCARD]))
 
 #define __part_stat_add(part, field, addnd)				\
-	__this_cpu_add((part)->dkstats->field, addnd)
+	__this_cpu_add((part)->bd_stats->field, addnd)
 
 #define part_stat_add(part, field, addnd)	do {			\
 	__part_stat_add((part), field, addnd);				\
-	if ((part)->partno)						\
-		__part_stat_add(&part_to_disk((part))->part0,		\
-				field, addnd);				\
+	if ((part)->bd_partno)						\
+		__part_stat_add((part)->bd_disk->part0, field, addnd);	\
 } while (0)
 
 #define part_stat_dec(gendiskp, field)					\
diff --git a/init/do_mounts.c b/init/do_mounts.c
index 5879edf083b318..a78e44ee6adb8d 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -76,11 +76,11 @@ struct uuidcmp {
  */
 static int match_dev_by_uuid(struct device *dev, const void *data)
 {
+	struct block_device *bdev = dev_to_bdev(dev);
 	const struct uuidcmp *cmp = data;
-	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info ||
-	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
+	if (!bdev->bd_meta_info ||
+	    strncasecmp(cmp->uuid, bdev->bd_meta_info->uuid, cmp->len))
 		return 0;
 	return 1;
 }
@@ -133,13 +133,13 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 		 * Attempt to find the requested partition by adding an offset
 		 * to the partition number found by UUID.
 		 */
-		struct hd_struct *part;
+		struct block_device *part;
 
-		part = disk_get_part(dev_to_disk(dev),
-				     dev_to_part(dev)->partno + offset);
+		part = bdget_disk(dev_to_disk(dev),
+				  dev_to_bdev(dev)->bd_partno + offset);
 		if (part) {
-			devt = part_devt(part);
-			put_device(part_to_dev(part));
+			devt = part->bd_dev;
+			bdput(part);
 		}
 	} else {
 		devt = dev->devt;
@@ -166,10 +166,10 @@ static dev_t devt_from_partuuid(const char *uuid_str)
  */
 static int match_dev_by_label(struct device *dev, const void *data)
 {
+	struct block_device *bdev = dev_to_bdev(dev);
 	const char *label = data;
-	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info || strcmp(label, part->info->volname))
+	if (!bdev->bd_meta_info || strcmp(label, bdev->bd_meta_info->volname))
 		return 0;
 	return 1;
 }
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 7076d588a50d69..a482a37848bff7 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -458,14 +458,9 @@ static struct rchan_callbacks blk_relay_callbacks = {
 static void blk_trace_setup_lba(struct blk_trace *bt,
 				struct block_device *bdev)
 {
-	struct hd_struct *part = NULL;
-
-	if (bdev)
-		part = bdev->bd_part;
-
-	if (part) {
-		bt->start_lba = part->start_sect;
-		bt->end_lba = part->start_sect + bdev_nr_sectors(bdev);
+	if (bdev) {
+		bt->start_lba = bdev->bd_start_sect;
+		bt->end_lba = bdev->bd_start_sect + bdev_nr_sectors(bdev);
 	} else {
 		bt->start_lba = 0;
 		bt->end_lba = -1ULL;
@@ -1815,30 +1810,15 @@ static ssize_t blk_trace_mask2str(char *buf, int mask)
 	return p - buf;
 }
 
-static struct request_queue *blk_trace_get_queue(struct block_device *bdev)
-{
-	if (bdev->bd_disk == NULL)
-		return NULL;
-
-	return bdev_get_queue(bdev);
-}
-
 static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 					 struct device_attribute *attr,
 					 char *buf)
 {
-	struct block_device *bdev = bdget_part(dev_to_part(dev));
-	struct request_queue *q;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev_get_queue(bdev);
 	struct blk_trace *bt;
 	ssize_t ret = -ENXIO;
 
-	if (bdev == NULL)
-		goto out;
-
-	q = blk_trace_get_queue(bdev);
-	if (q == NULL)
-		goto out_bdput;
-
 	mutex_lock(&q->debugfs_mutex);
 
 	bt = rcu_dereference_protected(q->blk_trace,
@@ -1861,9 +1841,6 @@ static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 
 out_unlock_bdev:
 	mutex_unlock(&q->debugfs_mutex);
-out_bdput:
-	bdput(bdev);
-out:
 	return ret;
 }
 
@@ -1871,8 +1848,8 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 					  struct device_attribute *attr,
 					  const char *buf, size_t count)
 {
-	struct block_device *bdev;
-	struct request_queue *q;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev_get_queue(bdev);
 	struct blk_trace *bt;
 	u64 value;
 	ssize_t ret = -EINVAL;
@@ -1888,17 +1865,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 				goto out;
 			value = ret;
 		}
-	} else if (kstrtoull(buf, 0, &value))
-		goto out;
-
-	ret = -ENXIO;
-	bdev = bdget_part(dev_to_part(dev));
-	if (bdev == NULL)
-		goto out;
-
-	q = blk_trace_get_queue(bdev);
-	if (q == NULL)
-		goto out_bdput;
+	} else {
+		if (kstrtoull(buf, 0, &value))
+			goto out;
+	}
 
 	mutex_lock(&q->debugfs_mutex);
 
@@ -1936,8 +1906,6 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 
 out_unlock_bdev:
 	mutex_unlock(&q->debugfs_mutex);
-out_bdput:
-	bdput(bdev);
 out:
 	return ret ? ret : count;
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:00:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:00:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29548.59094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJo-0001lu-W2; Wed, 18 Nov 2020 09:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29548.59094; Wed, 18 Nov 2020 09:00:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJo-0001lU-Oo; Wed, 18 Nov 2020 09:00:00 +0000
Received: by outflank-mailman (input) for mailman id 29548;
 Wed, 18 Nov 2020 08:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJAA-0006e0-5R
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:50:02 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 895ad695-9953-4c47-aa7b-7ecb0a7a74cb;
 Wed, 18 Nov 2020 08:48:47 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8l-0007pS-Je; Wed, 18 Nov 2020 08:48:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJAA-0006e0-5R
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:50:02 +0000
X-Inumbo-ID: 895ad695-9953-4c47-aa7b-7ecb0a7a74cb
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 895ad695-9953-4c47-aa7b-7ecb0a7a74cb;
	Wed, 18 Nov 2020 08:48:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Ax7Ev08zfe+k5fDLujW3aI+bgMMepj+7Q/YK24br9d4=; b=MH4CdclHDFODTjP9xnj5n+SLj0
	KgiBor4Efh0d9jME3EBQSx6adQqstkQTNDaZ2TizC50HfcaRgRLQhFlai8VIjVusfI7Z1sIv/ES2V
	yVYPpyv6TcX6IvPqvrcT6UuNlnH1RiFDoZeB5gewwtGWzxh8hzup0RNJSPK2mZL0Xv6zp19aHzSC6
	FD9BN9wevD0vXr+flfq6Rs+nbACPaGbwFMFYa5Rr9YH61lZzjJ7URnmHIkhz4Av1kbIuigOC3Ufc0
	Qb4rWt3d0clR8N8p05bFTRMhFV3wjTMS4l+SUARMu/uwttgDn3VZ8BLlS2HkkXll+sF4pZ22nbxux
	jd6zetXA==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8l-0007pS-Je; Wed, 18 Nov 2020 08:48:35 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 19/20] bcache: remove a superfluous lookup_bdev call
Date: Wed, 18 Nov 2020 09:47:59 +0100
Message-Id: <20201118084800.2339180-20-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Don't bother to call lookup_bdev just to produce a slightly different
error message; removing it makes no functional difference.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/bcache/super.c | 44 +--------------------------------------
 1 file changed, 1 insertion(+), 43 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index e5db2cdd114112..5c531ed7785280 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2380,40 +2380,6 @@ kobj_attribute_write(register,		register_bcache);
 kobj_attribute_write(register_quiet,	register_bcache);
 kobj_attribute_write(pendings_cleanup,	bch_pending_bdevs_cleanup);
 
-static bool bch_is_open_backing(struct block_device *bdev)
-{
-	struct cache_set *c, *tc;
-	struct cached_dev *dc, *t;
-
-	list_for_each_entry_safe(c, tc, &bch_cache_sets, list)
-		list_for_each_entry_safe(dc, t, &c->cached_devs, list)
-			if (dc->bdev == bdev)
-				return true;
-	list_for_each_entry_safe(dc, t, &uncached_devices, list)
-		if (dc->bdev == bdev)
-			return true;
-	return false;
-}
-
-static bool bch_is_open_cache(struct block_device *bdev)
-{
-	struct cache_set *c, *tc;
-
-	list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
-		struct cache *ca = c->cache;
-
-		if (ca->bdev == bdev)
-			return true;
-	}
-
-	return false;
-}
-
-static bool bch_is_open(struct block_device *bdev)
-{
-	return bch_is_open_cache(bdev) || bch_is_open_backing(bdev);
-}
-
 struct async_reg_args {
 	struct delayed_work reg_work;
 	char *path;
@@ -2535,15 +2501,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
 				  sb);
 	if (IS_ERR(bdev)) {
 		if (bdev == ERR_PTR(-EBUSY)) {
-			bdev = lookup_bdev(strim(path));
-			mutex_lock(&bch_register_lock);
-			if (!IS_ERR(bdev) && bch_is_open(bdev))
-				err = "device already registered";
-			else
-				err = "device busy";
-			mutex_unlock(&bch_register_lock);
-			if (!IS_ERR(bdev))
-				bdput(bdev);
+			err = "device busy";
 			if (attr == &ksysfs_register_quiet)
 				goto done;
 		}
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:00:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:00:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29550.59104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJp-0001nL-S3; Wed, 18 Nov 2020 09:00:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29550.59104; Wed, 18 Nov 2020 09:00:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJJp-0001mj-7U; Wed, 18 Nov 2020 09:00:01 +0000
Received: by outflank-mailman (input) for mailman id 29550;
 Wed, 18 Nov 2020 08:59:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kfJAF-0006e0-5b
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:50:07 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a851b479-63f6-4b88-9187-1c684d488ac7;
 Wed, 18 Nov 2020 08:48:47 +0000 (UTC)
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kfJ8n-0007po-6Z; Wed, 18 Nov 2020 08:48:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7bqw=EY=casper.srs.infradead.org=batv+9f981d017e6f7609177a+6296+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kfJAF-0006e0-5b
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 08:50:07 +0000
X-Inumbo-ID: a851b479-63f6-4b88-9187-1c684d488ac7
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a851b479-63f6-4b88-9187-1c684d488ac7;
	Wed, 18 Nov 2020 08:48:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=LafRm+ytlh/b9ymN5UUk1jomYBwFtJs/n+YQmnyuC8I=; b=ESHahNrztxvjP/3NrSCGR/jMRs
	BA0ubeZkaulLV/o480sKwL/rdXN+LHbGztoiZLszwqRJoBbCS1HvN+QFeGiEpkSVCQYR9o7wbbZrb
	BDXoCT+DzuLSRqYvTRnuEEa+Fy9sjNNX7iVqjIDKeE2wtjHAfTCjJSdgwxIbSaKKqTKJ9r2PoGxvV
	3usQYTUhWwLmekndtkGnxPNIGF6AWRXzq6RCFwaiiIOsA1HzQdttswdoVY3JwZ1LjgXwzpWOkuiX5
	avqaf2Iycpy9/XkyKznct2n5PfsURFTN7YrGTXCKRinr93lLV9I01BNFpmMHWxQVsDFA7E84tG2oa
	yqnucbvg==;
Received: from [2001:4bb8:18c:31ba:32b1:ec66:5459:36a] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kfJ8n-0007po-6Z; Wed, 18 Nov 2020 08:48:37 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 20/20] block: remove i_bdev
Date: Wed, 18 Nov 2020 09:48:00 +0100
Message-Id: <20201118084800.2339180-21-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Switch the block device lookup interfaces to directly work with a dev_t
so that struct block_device references are only acquired by the
blkdev_get variants (and the blk-cgroup special case).  This means that
we no longer need an extra reference in the inode and can generally
simplify handling of struct block_device to keep the lookups contained
in the core block layer code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c                                |   3 +-
 drivers/block/loop.c                         |   8 +-
 drivers/md/dm-table.c                        |   9 +-
 drivers/mtd/mtdsuper.c                       |  17 +-
 drivers/target/target_core_file.c            |   6 +-
 drivers/usb/gadget/function/storage_common.c |   8 +-
 fs/block_dev.c                               | 198 +++++--------------
 fs/btrfs/volumes.c                           |  13 +-
 fs/inode.c                                   |   3 -
 fs/internal.h                                |   7 +-
 fs/io_uring.c                                |  10 +-
 fs/pipe.c                                    |   5 +-
 fs/quota/quota.c                             |  19 +-
 fs/statfs.c                                  |   2 +-
 fs/super.c                                   |  37 ++--
 include/linux/blkdev.h                       |   2 +-
 include/linux/fs.h                           |   1 -
 17 files changed, 104 insertions(+), 244 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index aa9546e5d6a1bd..fd2944ae01e294 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -599,8 +599,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
 {
 	int ret;
 	void __user *argp = compat_ptr(arg);
-	struct inode *inode = file->f_mapping->host;
-	struct block_device *bdev = inode->i_bdev;
+	struct block_device *bdev = I_BDEV(file->f_mapping->host);
 	struct gendisk *disk = bdev->bd_disk;
 	fmode_t mode = file->f_mode;
 	loff_t size;
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9d2587f6167cd8..d2ce1ddc192d78 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -675,10 +675,10 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
 	while (is_loop_device(f)) {
 		struct loop_device *l;
 
-		if (f->f_mapping->host->i_bdev == bdev)
+		if (f->f_mapping->host->i_rdev == bdev->bd_dev)
 			return -EBADF;
 
-		l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+		l = I_BDEV(f->f_mapping->host)->bd_disk->private_data;
 		if (l->lo_state != Lo_bound) {
 			return -EINVAL;
 		}
@@ -885,9 +885,7 @@ static void loop_config_discard(struct loop_device *lo)
 	 * file-backed loop devices: discarded regions read back as zero.
 	 */
 	if (S_ISBLK(inode->i_mode) && !lo->lo_encrypt_key_size) {
-		struct request_queue *backingq;
-
-		backingq = bdev_get_queue(inode->i_bdev);
+		struct request_queue *backingq = bdev_get_queue(I_BDEV(inode));
 
 		max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
 		granularity = backingq->limits.discard_granularity ?:
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index ce543b761be7b2..dea67772171053 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -348,16 +348,9 @@ static int upgrade_mode(struct dm_dev_internal *dd, fmode_t new_mode,
 dev_t dm_get_dev_t(const char *path)
 {
 	dev_t dev;
-	struct block_device *bdev;
 
-	bdev = lookup_bdev(path);
-	if (IS_ERR(bdev))
+	if (lookup_bdev(path, &dev))
 		dev = name_to_dev_t(path);
-	else {
-		dev = bdev->bd_dev;
-		bdput(bdev);
-	}
-
 	return dev;
 }
 EXPORT_SYMBOL_GPL(dm_get_dev_t);
diff --git a/drivers/mtd/mtdsuper.c b/drivers/mtd/mtdsuper.c
index c3e2098372f2e5..38b6aa849c6383 100644
--- a/drivers/mtd/mtdsuper.c
+++ b/drivers/mtd/mtdsuper.c
@@ -120,8 +120,8 @@ int get_tree_mtd(struct fs_context *fc,
 				struct fs_context *fc))
 {
 #ifdef CONFIG_BLOCK
-	struct block_device *bdev;
-	int ret, major;
+	dev_t dev;
+	int ret;
 #endif
 	int mtdnr;
 
@@ -169,20 +169,15 @@ int get_tree_mtd(struct fs_context *fc,
 	/* try the old way - the hack where we allowed users to mount
 	 * /dev/mtdblock$(n) but didn't actually _use_ the blockdev
 	 */
-	bdev = lookup_bdev(fc->source);
-	if (IS_ERR(bdev)) {
-		ret = PTR_ERR(bdev);
+	ret = lookup_bdev(fc->source, &dev);
+	if (ret) {
 		errorf(fc, "MTD: Couldn't look up '%s': %d", fc->source, ret);
 		return ret;
 	}
 	pr_debug("MTDSB: lookup_bdev() returned 0\n");
 
-	major = MAJOR(bdev->bd_dev);
-	mtdnr = MINOR(bdev->bd_dev);
-	bdput(bdev);
-
-	if (major == MTD_BLOCK_MAJOR)
-		return mtd_get_sb_by_nr(fc, mtdnr, fill_super);
+	if (MAJOR(dev) == MTD_BLOCK_MAJOR)
+		return mtd_get_sb_by_nr(fc, MINOR(dev), fill_super);
 
 #endif /* CONFIG_BLOCK */
 
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 7143d03f0e027e..b0cb5b95e892d3 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -133,10 +133,10 @@ static int fd_configure_device(struct se_device *dev)
 	 */
 	inode = file->f_mapping->host;
 	if (S_ISBLK(inode->i_mode)) {
-		struct request_queue *q = bdev_get_queue(inode->i_bdev);
+		struct request_queue *q = bdev_get_queue(I_BDEV(inode));
 		unsigned long long dev_size;
 
-		fd_dev->fd_block_size = bdev_logical_block_size(inode->i_bdev);
+		fd_dev->fd_block_size = bdev_logical_block_size(I_BDEV(inode));
 		/*
 		 * Determine the number of bytes from i_size_read() minus
 		 * one (1) logical sector from underlying struct block_device
@@ -559,7 +559,7 @@ fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 
 	if (S_ISBLK(inode->i_mode)) {
 		/* The backend is block device, use discard */
-		struct block_device *bdev = inode->i_bdev;
+		struct block_device *bdev = I_BDEV(inode);
 		struct se_device *dev = cmd->se_dev;
 
 		ret = blkdev_issue_discard(bdev,
diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c
index f7e6c42558eb76..b859a158a4140e 100644
--- a/drivers/usb/gadget/function/storage_common.c
+++ b/drivers/usb/gadget/function/storage_common.c
@@ -204,7 +204,7 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (!(filp->f_mode & FMODE_WRITE))
 		ro = 1;
 
-	inode = file_inode(filp);
+	inode = filp->f_mapping->host;
 	if ((!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))) {
 		LINFO(curlun, "invalid file type: %s\n", filename);
 		goto out;
@@ -221,7 +221,7 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (!(filp->f_mode & FMODE_CAN_WRITE))
 		ro = 1;
 
-	size = i_size_read(inode->i_mapping->host);
+	size = i_size_read(inode);
 	if (size < 0) {
 		LINFO(curlun, "unable to find file size: %s\n", filename);
 		rc = (int) size;
@@ -231,8 +231,8 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (curlun->cdrom) {
 		blksize = 2048;
 		blkbits = 11;
-	} else if (inode->i_bdev) {
-		blksize = bdev_logical_block_size(inode->i_bdev);
+	} else if (S_ISBLK(inode->i_mode)) {
+		blksize = bdev_logical_block_size(I_BDEV(inode));
 		blkbits = blksize_bits(blksize);
 	} else {
 		blksize = 512;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index c2da068bd983f4..bcd6a4bbc1440b 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -891,7 +891,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 
 	inode->i_mode = S_IFBLK;
 	inode->i_rdev = 0;
-	inode->i_bdev = bdev;
 	inode->i_data.a_ops = &def_blk_aops;
 
 	return bdev;
@@ -943,71 +942,8 @@ void bdput(struct block_device *bdev)
 {
 	iput(bdev->bd_inode);
 }
-
 EXPORT_SYMBOL(bdput);
  
-static struct block_device *bd_acquire(struct inode *inode)
-{
-	struct block_device *bdev;
-
-	spin_lock(&bdev_lock);
-	bdev = inode->i_bdev;
-	if (bdev && !inode_unhashed(bdev->bd_inode)) {
-		bdgrab(bdev);
-		spin_unlock(&bdev_lock);
-		return bdev;
-	}
-	spin_unlock(&bdev_lock);
-
-	/*
-	 * i_bdev references block device inode that was already shut down
-	 * (corresponding device got removed).  Remove the reference and look
-	 * up block device inode again just in case new device got
-	 * reestablished under the same device number.
-	 */
-	if (bdev)
-		bd_forget(inode);
-
-	bdev = bdget(inode->i_rdev);
-	if (!bdev) {
-		blk_request_module(inode->i_rdev);
-		bdev = bdget(inode->i_rdev);
-	}
-	if (bdev) {
-		spin_lock(&bdev_lock);
-		if (!inode->i_bdev) {
-			/*
-			 * We take an additional reference to bd_inode,
-			 * and it's released in clear_inode() of inode.
-			 * So, we can access it via ->i_mapping always
-			 * without igrab().
-			 */
-			bdgrab(bdev);
-			inode->i_bdev = bdev;
-			inode->i_mapping = bdev->bd_inode->i_mapping;
-		}
-		spin_unlock(&bdev_lock);
-	}
-	return bdev;
-}
-
-/* Call when you free inode */
-
-void bd_forget(struct inode *inode)
-{
-	struct block_device *bdev = NULL;
-
-	spin_lock(&bdev_lock);
-	if (!sb_is_blkdev_sb(inode->i_sb))
-		bdev = inode->i_bdev;
-	inode->i_bdev = NULL;
-	inode->i_mapping = &inode->i_data;
-	spin_unlock(&bdev_lock);
-
-	if (bdev)
-		bdput(bdev);
-}
-
 /**
  * bd_may_claim - test whether a block device can be claimed
  * @bdev: block device of interest
@@ -1492,32 +1428,44 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 }
 
 /**
- * blkdev_get - open a block device
- * @bdev: block_device to open
+ * blkdev_get_by_dev - open a block device by device number
+ * @dev: device number of block device to open
  * @mode: FMODE_* mask
  * @holder: exclusive holder identifier
  *
- * Open @bdev with @mode.  If @mode includes %FMODE_EXCL, @bdev is
- * open with exclusive access.  Specifying %FMODE_EXCL with %NULL
- * @holder is invalid.  Exclusive opens may nest for the same @holder.
+ * Open the block device described by device number @dev.  If @mode includes
+ * %FMODE_EXCL, the block device is opened with exclusive
+ * access.  Specifying %FMODE_EXCL with a %NULL @holder is invalid.  Exclusive
+ * opens may nest for the same @holder.
  *
- * On success, the reference count of @bdev is unchanged.  On failure,
- * @bdev is put.
+ * Use this interface ONLY if you really do not have anything better - i.e. when
+ * you are behind a truly sucky interface and all you are given is a device
+ * number.  Everything else should use blkdev_get_by_path().
  *
  * CONTEXT:
  * Might sleep.
  *
  * RETURNS:
- * 0 on success, -errno on failure.
+ * Reference to the block_device on success, ERR_PTR(-errno) on failure.
  */
-static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
+struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 {
+	struct block_device *bdev;
 	int ret, perm = 0;
 
 	if (mode & FMODE_READ)
 		perm |= MAY_READ;
 	if (mode & FMODE_WRITE)
 		perm |= MAY_WRITE;
+
+	bdev = bdget(dev);
+	if (!bdev) {
+		blk_request_module(dev);
+		bdev = bdget(dev);
+		if (!bdev)
+			return ERR_PTR(-ENOMEM);
+	}
+
 	ret = devcgroup_inode_permission(bdev->bd_inode, perm);
 	if (ret)
 		goto bdput;
@@ -1525,12 +1473,13 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
 	ret =__blkdev_get(bdev, mode, holder, 0);
 	if (ret)
 		goto bdput;
-	return 0;
+	return bdev;
 
 bdput:
 	bdput(bdev);
-	return ret;
+	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL(blkdev_get_by_dev);
 
 /**
  * blkdev_get_by_path - open a block device by name
@@ -1538,32 +1487,31 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
  * @mode: FMODE_* mask
  * @holder: exclusive holder identifier
  *
- * Open the blockdevice described by the device file at @path.  @mode
- * and @holder are identical to blkdev_get().
+ * Open the block device described by the device file at @path.
  *
- * On success, the returned block_device has reference count of one.
+ * If @mode includes %FMODE_EXCL, the block device is opened with exclusive
+ * access.  Specifying %FMODE_EXCL with a %NULL @holder is invalid.  Exclusive
+ * opens may nest for the same @holder.
  *
  * CONTEXT:
  * Might sleep.
  *
  * RETURNS:
- * Pointer to block_device on success, ERR_PTR(-errno) on failure.
+ * Reference to the block_device on success, ERR_PTR(-errno) on failure.
  */
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 					void *holder)
 {
 	struct block_device *bdev;
-	int err;
-
-	bdev = lookup_bdev(path);
-	if (IS_ERR(bdev))
-		return bdev;
+	dev_t dev;
+	int error;
 
-	err = blkdev_get(bdev, mode, holder);
-	if (err)
-		return ERR_PTR(err);
+	error = lookup_bdev(path, &dev);
+	if (error)
+		return ERR_PTR(error);
 
-	if ((mode & FMODE_WRITE) && bdev_read_only(bdev)) {
+	bdev = blkdev_get_by_dev(dev, mode, holder);
+	if (!IS_ERR(bdev) && (mode & FMODE_WRITE) && bdev_read_only(bdev)) {
 		blkdev_put(bdev, mode);
 		return ERR_PTR(-EACCES);
 	}
@@ -1572,45 +1520,6 @@ struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 }
 EXPORT_SYMBOL(blkdev_get_by_path);
 
-/**
- * blkdev_get_by_dev - open a block device by device number
- * @dev: device number of block device to open
- * @mode: FMODE_* mask
- * @holder: exclusive holder identifier
- *
- * Open the blockdevice described by device number @dev.  @mode and
- * @holder are identical to blkdev_get().
- *
- * Use it ONLY if you really do not have anything better - i.e. when
- * you are behind a truly sucky interface and all you are given is a
- * device number.  _Never_ to be used for internal purposes.  If you
- * ever need it - reconsider your API.
- *
- * On success, the returned block_device has reference count of one.
- *
- * CONTEXT:
- * Might sleep.
- *
- * RETURNS:
- * Pointer to block_device on success, ERR_PTR(-errno) on failure.
- */
-struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
-{
-	struct block_device *bdev;
-	int err;
-
-	bdev = bdget(dev);
-	if (!bdev)
-		return ERR_PTR(-ENOMEM);
-
-	err = blkdev_get(bdev, mode, holder);
-	if (err)
-		return ERR_PTR(err);
-
-	return bdev;
-}
-EXPORT_SYMBOL(blkdev_get_by_dev);
-
 static int blkdev_open(struct inode * inode, struct file * filp)
 {
 	struct block_device *bdev;
@@ -1632,14 +1541,12 @@ static int blkdev_open(struct inode * inode, struct file * filp)
 	if ((filp->f_flags & O_ACCMODE) == 3)
 		filp->f_mode |= FMODE_WRITE_IOCTL;
 
-	bdev = bd_acquire(inode);
-	if (bdev == NULL)
-		return -ENOMEM;
-
+	bdev = blkdev_get_by_dev(inode->i_rdev, filp->f_mode, filp);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 	filp->f_mapping = bdev->bd_inode->i_mapping;
 	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
-
-	return blkdev_get(bdev, filp->f_mode, filp);
+	return 0;
 }
 
 static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
@@ -1941,37 +1848,32 @@ const struct file_operations def_blk_fops = {
  * namespace if possible and return it.  Return ERR_PTR(error)
  * otherwise.
  */
-struct block_device *lookup_bdev(const char *pathname)
+int lookup_bdev(const char *pathname, dev_t *dev)
 {
-	struct block_device *bdev;
 	struct inode *inode;
 	struct path path;
 	int error;
 
 	if (!pathname || !*pathname)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
 	error = kern_path(pathname, LOOKUP_FOLLOW, &path);
 	if (error)
-		return ERR_PTR(error);
+		return error;
 
 	inode = d_backing_inode(path.dentry);
 	error = -ENOTBLK;
 	if (!S_ISBLK(inode->i_mode))
-		goto fail;
+		goto out_path_put;
 	error = -EACCES;
 	if (!may_open_dev(&path))
-		goto fail;
-	error = -ENOMEM;
-	bdev = bd_acquire(inode);
-	if (!bdev)
-		goto fail;
-out:
+		goto out_path_put;
+
+	*dev = inode->i_rdev;
+	error = 0;
+out_path_put:
 	path_put(&path);
-	return bdev;
-fail:
-	bdev = ERR_PTR(error);
-	goto out;
+	return error;
 }
 EXPORT_SYMBOL(lookup_bdev);
 
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a6406b3b8c2b4f..fbc4b58228f784 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -929,16 +929,16 @@ static noinline struct btrfs_device *device_list_add(const char *path,
 		 * make sure it's the same device if the device is mounted
 		 */
 		if (device->bdev) {
-			struct block_device *path_bdev;
+			int error;
+			dev_t path_dev;
 
-			path_bdev = lookup_bdev(path);
-			if (IS_ERR(path_bdev)) {
+			error = lookup_bdev(path, &path_dev);
+			if (error) {
 				mutex_unlock(&fs_devices->device_list_mutex);
-				return ERR_CAST(path_bdev);
+				return ERR_PTR(error);
 			}
 
-			if (device->bdev != path_bdev) {
-				bdput(path_bdev);
+			if (device->bdev->bd_dev != path_dev) {
 				mutex_unlock(&fs_devices->device_list_mutex);
 				btrfs_warn_in_rcu(device->fs_info,
 	"duplicate device %s devid %llu generation %llu scanned by %s (%d)",
@@ -947,7 +947,6 @@ static noinline struct btrfs_device *device_list_add(const char *path,
 						  task_pid_nr(current));
 				return ERR_PTR(-EEXIST);
 			}
-			bdput(path_bdev);
 			btrfs_info_in_rcu(device->fs_info,
 	"devid %llu device path %s changed to %s scanned by %s (%d)",
 					  devid, rcu_str_deref(device->name),
diff --git a/fs/inode.c b/fs/inode.c
index 9d78c37b00b817..cb008acf0efdb8 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -155,7 +155,6 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
 	inode->i_bytes = 0;
 	inode->i_generation = 0;
 	inode->i_pipe = NULL;
-	inode->i_bdev = NULL;
 	inode->i_cdev = NULL;
 	inode->i_link = NULL;
 	inode->i_dir_seq = 0;
@@ -580,8 +579,6 @@ static void evict(struct inode *inode)
 		truncate_inode_pages_final(&inode->i_data);
 		clear_inode(inode);
 	}
-	if (S_ISBLK(inode->i_mode) && inode->i_bdev)
-		bd_forget(inode);
 	if (S_ISCHR(inode->i_mode) && inode->i_cdev)
 		cd_forget(inode);
 
diff --git a/fs/internal.h b/fs/internal.h
index 47be21dfeebef5..53f890446e7508 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -25,7 +25,6 @@ extern void __init bdev_cache_init(void);
 extern int __sync_blockdev(struct block_device *bdev, int wait);
 void iterate_bdevs(void (*)(struct block_device *, void *), void *);
 void emergency_thaw_bdev(struct super_block *sb);
-void bd_forget(struct inode *inode);
 #else
 static inline void bdev_cache_init(void)
 {
@@ -43,9 +42,6 @@ static inline int emergency_thaw_bdev(struct super_block *sb)
 {
 	return 0;
 }
-static inline void bd_forget(struct inode *inode)
-{
-}
 #endif /* CONFIG_BLOCK */
 
 /*
@@ -114,8 +110,7 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
  */
 extern int reconfigure_super(struct fs_context *);
 extern bool trylock_super(struct super_block *sb);
-struct super_block *__get_super(struct block_device *bdev, bool excl);
-extern struct super_block *user_get_super(dev_t);
+struct super_block *user_get_super(dev_t, bool excl);
 void put_super(struct super_block *sb);
 extern bool mount_capable(struct fs_context *);
 
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4ead291b2976f3..8f13c0417f940c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2716,11 +2716,7 @@ static struct file *__io_file_get(struct io_submit_state *state, int fd)
 
 static bool io_bdev_nowait(struct block_device *bdev)
 {
-#ifdef CONFIG_BLOCK
 	return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
-#else
-	return true;
-#endif
 }
 
 /*
@@ -2733,14 +2729,16 @@ static bool io_file_supports_async(struct file *file, int rw)
 	umode_t mode = file_inode(file)->i_mode;
 
 	if (S_ISBLK(mode)) {
-		if (io_bdev_nowait(file->f_inode->i_bdev))
+		if (IS_ENABLED(CONFIG_BLOCK) &&
+		    io_bdev_nowait(I_BDEV(file->f_mapping->host)))
 			return true;
 		return false;
 	}
 	if (S_ISCHR(mode) || S_ISSOCK(mode))
 		return true;
 	if (S_ISREG(mode)) {
-		if (io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
+		if (IS_ENABLED(CONFIG_BLOCK) &&
+		    io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
 		    file->f_op != &io_uring_fops)
 			return true;
 		return false;
diff --git a/fs/pipe.c b/fs/pipe.c
index 0ac197658a2d6e..c5989cfd564d45 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -1342,9 +1342,8 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
 }
 
 /*
- * After the inode slimming patch, i_pipe/i_bdev/i_cdev share the same
- * location, so checking ->i_pipe is not enough to verify that this is a
- * pipe.
+ * Note that i_pipe and i_cdev share the same location, so checking ->i_pipe is
+ * not enough to verify that this is a pipe.
  */
 struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice)
 {
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index f3d32b0d9008f2..6d16b2be5ac4a3 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -866,17 +866,18 @@ static bool quotactl_cmd_onoff(int cmd)
 static struct super_block *quotactl_block(const char __user *special, int cmd)
 {
 #ifdef CONFIG_BLOCK
-	struct block_device *bdev;
 	struct super_block *sb;
 	struct filename *tmp = getname(special);
 	bool excl = false, thawed = false;
+	int error;
+	dev_t dev;
 
 	if (IS_ERR(tmp))
 		return ERR_CAST(tmp);
-	bdev = lookup_bdev(tmp->name);
+	error = lookup_bdev(tmp->name, &dev);
 	putname(tmp);
-	if (IS_ERR(bdev))
-		return ERR_CAST(bdev);
+	if (error)
+		return ERR_PTR(error);
 
 	if (quotactl_cmd_onoff(cmd)) {
 		excl = true;
@@ -886,8 +887,10 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	}
 
 retry:
-	sb = __get_super(bdev, excl);
-	if (thawed && sb && sb->s_writers.frozen != SB_UNFROZEN) {
+	sb = user_get_super(dev, excl);
+	if (!sb)
+		return ERR_PTR(-ENODEV);
+	if (thawed && sb->s_writers.frozen != SB_UNFROZEN) {
 		if (excl)
 			up_write(&sb->s_umount);
 		else
@@ -897,10 +900,6 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 		put_super(sb);
 		goto retry;
 	}
-
-	bdput(bdev);
-	if (!sb)
-		return ERR_PTR(-ENODEV);
 	return sb;
 
 #else
diff --git a/fs/statfs.c b/fs/statfs.c
index 59f33752c1311f..68cb077887504f 100644
--- a/fs/statfs.c
+++ b/fs/statfs.c
@@ -235,7 +235,7 @@ SYSCALL_DEFINE3(fstatfs64, unsigned int, fd, size_t, sz, struct statfs64 __user
 
 static int vfs_ustat(dev_t dev, struct kstatfs *sbuf)
 {
-	struct super_block *s = user_get_super(dev);
+	struct super_block *s = user_get_super(dev, false);
 	int err;
 	if (!s)
 		return -EINVAL;
diff --git a/fs/super.c b/fs/super.c
index 343e5c1e538d2a..7a1611e5d0f45d 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -740,7 +740,7 @@ void iterate_supers_type(struct file_system_type *type,
 
 EXPORT_SYMBOL(iterate_supers_type);
 
-struct super_block *__get_super(struct block_device *bdev, bool excl)
+struct super_block *get_super(struct block_device *bdev)
 {
 	struct super_block *sb;
 
@@ -755,17 +755,11 @@ struct super_block *__get_super(struct block_device *bdev, bool excl)
 		if (sb->s_bdev == bdev) {
 			sb->s_count++;
 			spin_unlock(&sb_lock);
-			if (!excl)
-				down_read(&sb->s_umount);
-			else
-				down_write(&sb->s_umount);
+			down_read(&sb->s_umount);
 			/* still alive? */
 			if (sb->s_root && (sb->s_flags & SB_BORN))
 				return sb;
-			if (!excl)
-				up_read(&sb->s_umount);
-			else
-				up_write(&sb->s_umount);
+			up_read(&sb->s_umount);
 			/* nope, got unmounted */
 			spin_lock(&sb_lock);
 			__put_super(sb);
@@ -776,19 +770,6 @@ struct super_block *__get_super(struct block_device *bdev, bool excl)
 	return NULL;
 }
 
-/**
- *	get_super - get the superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device given. %NULL is returned if no match is found.
- */
-struct super_block *get_super(struct block_device *bdev)
-{
-	return __get_super(bdev, false);
-}
-EXPORT_SYMBOL(get_super);
-
 /**
  * get_active_super - get an active reference to the superblock of a device
  * @bdev: device to get the superblock for
@@ -820,7 +801,7 @@ struct super_block *get_active_super(struct block_device *bdev)
 	return NULL;
 }
 
-struct super_block *user_get_super(dev_t dev)
+struct super_block *user_get_super(dev_t dev, bool excl)
 {
 	struct super_block *sb;
 
@@ -832,11 +813,17 @@ struct super_block *user_get_super(dev_t dev)
 		if (sb->s_dev ==  dev) {
 			sb->s_count++;
 			spin_unlock(&sb_lock);
-			down_read(&sb->s_umount);
+			if (excl)
+				down_write(&sb->s_umount);
+			else
+				down_read(&sb->s_umount);
 			/* still alive? */
 			if (sb->s_root && (sb->s_flags & SB_BORN))
 				return sb;
-			up_read(&sb->s_umount);
+			if (excl)
+				up_write(&sb->s_umount);
+			else
+				up_read(&sb->s_umount);
 			/* nope, got unmounted */
 			spin_lock(&sb_lock);
 			__put_super(sb);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ed40144ab80339..9dc44f1ae22bb1 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1973,7 +1973,7 @@ int bdev_read_only(struct block_device *bdev);
 int set_blocksize(struct block_device *bdev, int size);
 
 const char *bdevname(struct block_device *bdev, char *buffer);
-struct block_device *lookup_bdev(const char *);
+int lookup_bdev(const char *pathname, dev_t *dev);
 
 void blkdev_show(struct seq_file *seqf, off_t offset);
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index a61df0dd4f1989..b0b358309657ba 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -696,7 +696,6 @@ struct inode {
 	struct list_head	i_devices;
 	union {
 		struct pipe_inode_info	*i_pipe;
-		struct block_device	*i_bdev;
 		struct cdev		*i_cdev;
 		char			*i_link;
 		unsigned		i_dir_seq;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:04:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29594.59122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJNp-0003Aa-Mj; Wed, 18 Nov 2020 09:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29594.59122; Wed, 18 Nov 2020 09:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJNp-0003AT-Jn; Wed, 18 Nov 2020 09:04:09 +0000
Received: by outflank-mailman (input) for mailman id 29594;
 Wed, 18 Nov 2020 09:04:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJNn-0003AO-Ue
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:04:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a6c6d9e-a49d-442d-a643-882295991118;
 Wed, 18 Nov 2020 09:04:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19B49B138;
 Wed, 18 Nov 2020 09:04:05 +0000 (UTC)

X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605690246; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LkhYOLJvmFdpwfgmCORfd6M3DajwjiKhI3bCgvrsEjU=;
	b=bmXyeILM/jFC2y7/cL8bo+i+qT42pSFQTqiJfE9lrw1eXqLr2D6ZDxQVQXD+H6sZYclscK
	41e7ZVfbBmnYeTwtVfb+7Gjw2bitCfE/o1RVw9BIO+YSUBtHEGFI6Iz3TDO51XM38Wl3km
	oroI4x9/8G6WspbnyxouqPKxYU34Y7M=
Subject: Re: merge struct block_device and struct hd_struct
To: Christoph Hellwig <hch@lst.de>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>,
 Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 Jens Axboe <axboe@kernel.dk>
References: <20201118084800.2339180-1-hch@lst.de>
 <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
 <20201118085804.GA20384@lst.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>
Date: Wed, 18 Nov 2020 10:04:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201118085804.GA20384@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 09:58, Christoph Hellwig wrote:
> On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
>> since this isn't the first series from you recently spamming
>> xen-devel, may I ask that you don't Cc entire series to lists
>> which are involved with perhaps just one out of the many patches?
>> IMO Cc lists should be compiled on a per-patch basis; the cover
>> letter may of course be sent to the union of all of them.
> 
> No way.  Individual CCs are completely broken as they don't provide
> the reviewer a context.

That's the view of some people, but not all. Context can be easily
established by those who care going to one of the many archives on
which the entire series lands. Getting spammed, however, can't be
avoided by the dozens or hundreds of list subscribers.

>  If you don't want xen-blkfront patches to
> go to xen-devel remove it from MAINTAINERS.

Patches to Xen drivers very much ought to go to xen-devel imo, so
no - removal is not an option.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:09:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29602.59134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJSU-0003N8-7x; Wed, 18 Nov 2020 09:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29602.59134; Wed, 18 Nov 2020 09:08:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJSU-0003N1-50; Wed, 18 Nov 2020 09:08:58 +0000
Received: by outflank-mailman (input) for mailman id 29602;
 Wed, 18 Nov 2020 09:08:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VXpI=EY=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kfJSS-0003Mw-9M
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:08:56 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc9316e9-9347-4cad-a3e3-51cfa1b52cc3;
 Wed, 18 Nov 2020 09:08:55 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3A46F67357; Wed, 18 Nov 2020 10:08:53 +0100 (CET)
Date: Wed, 18 Nov 2020 10:08:53 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: Christoph Hellwig <hch@lst.de>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: merge struct block_device and struct hd_struct
Message-ID: <20201118090853.GA21243@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com> <20201118085804.GA20384@lst.de> <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Nov 18, 2020 at 10:04:04AM +0100, Jan Beulich wrote:
> That's the view of some people, but not all. Context can be easily
> established by those who care going to one of the many archives on
> which the entire series lands. Getting spammed, however, can't be
> avoided by the dozens or hundreds of list subscribers.

No, that is simply a completely broken model.  Mails are trivial
to ignore, finding them OTOH is everything but.  Learn how to ignore
a few mails, it isn't hard at all.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:09:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:09:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29606.59147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJT9-0003Td-Ig; Wed, 18 Nov 2020 09:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29606.59147; Wed, 18 Nov 2020 09:09:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJT9-0003TW-Er; Wed, 18 Nov 2020 09:09:39 +0000
Received: by outflank-mailman (input) for mailman id 29606;
 Wed, 18 Nov 2020 09:09:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nHbf=EY=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kfJT7-0003TL-75
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:09:37 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8bcd5b7e-4066-4796-9df4-0568ada297d5;
 Wed, 18 Nov 2020 09:09:36 +0000 (UTC)
Received: from localhost (unknown [89.205.136.214])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9917624654;
 Wed, 18 Nov 2020 09:09:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1605690575;
	bh=rlMS1sadYjwrkOitszZnRWq9x8/FlFRlxO9KFeT4hXw=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=QvQ9CDWAZ+aCGrhuhHt3KiLNX56hdxRsHiHsSNxvbaUSHz6i1GWYU6j5V0JItZEcx
	 llhTvrS0/4DRQzsuq3ZQjlr2mZwhOCX6FAvymqM3x9Z9fA3EfhIfp/BEr9wBHSBMNx
	 c5aj9Y6nPE53BpvdaFfcqrxO6ubTxyi29A1Jsf3g=
Date: Wed, 18 Nov 2020 10:09:31 +0100
From: Greg KH <gregkh@linuxfoundation.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Christoph Hellwig <hch@lst.de>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: merge struct block_device and struct hd_struct
Message-ID: <X7Tky/6dDN8+DrU7@kroah.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
 <20201118085804.GA20384@lst.de>
 <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>

On Wed, Nov 18, 2020 at 10:04:04AM +0100, Jan Beulich wrote:
> On 18.11.2020 09:58, Christoph Hellwig wrote:
> > On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
> >> since this isn't the first series from you recently spamming
> >> xen-devel, may I ask that you don't Cc entire series to lists
> >> which are involved with perhaps just one out of the many patches?
> >> IMO Cc lists should be compiled on a per-patch basis; the cover
> >> letter may of course be sent to the union of all of them.
> > 
> > No way.  Individual CCs are completely broken as they don't provide
> > the reviewer a context.
> 
> That's the view of some people, but not all. Context can be easily
> established by those who care going to one of the many archives on
> which the entire series lands. Getting spammed, however, can't be
> avoided by the dozens or hundreds of list subscribers.

kernel patches are never "spam", sorry, but for developers to try to
determine which lists/maintainers want to see the whole series and which
do not is impossible.

Patches in a series are easily deleted from sane mail clients with a
single click/keystroke all at once, they aren't a problem that needs to
be reduced in volume.

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:11:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29612.59159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJUW-0004Of-UW; Wed, 18 Nov 2020 09:11:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29612.59159; Wed, 18 Nov 2020 09:11:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJUW-0004OY-Qr; Wed, 18 Nov 2020 09:11:04 +0000
Received: by outflank-mailman (input) for mailman id 29612;
 Wed, 18 Nov 2020 09:11:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nHbf=EY=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kfJUW-0004OT-6q
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:11:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7072fdae-d980-4f8c-b2da-ad8d23c7c655;
 Wed, 18 Nov 2020 09:11:03 +0000 (UTC)
Received: from localhost (unknown [89.205.136.214])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B78812463B;
 Wed, 18 Nov 2020 09:11:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1605690662;
	bh=Bue2zB2N2pWeoRPAFrKwsKTqFelMoJ7u0NZ6+qsm2lg=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=AUVDTnsv/loeevWqNPhDEkXzwfiVWRnXETQJyNJXKc5mWs6sh2/R+S+4LQ/OFGnQM
	 mOBEBCOR8sMcPnpugDJC5+SsVWCCE8mnORUdyAcVNy/d+rccVY9fUSts6rTlPUQzCj
	 zWFg0HoHn1XdWsg1BSMcc/c8vCuPf8k8NOKoeBhI=
Date: Wed, 18 Nov 2020 10:10:59 +0100
From: Greg KH <gregkh@linuxfoundation.org>
To: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 19/20] bcache: remove a superflous lookup_bdev all
Message-ID: <X7TlIzxJPfa2p+Da@kroah.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-20-hch@lst.de>
 <e7f826fd-cb9c-b4ab-fae8-dad398c14eed@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e7f826fd-cb9c-b4ab-fae8-dad398c14eed@suse.de>

On Wed, Nov 18, 2020 at 04:54:51PM +0800, Coly Li wrote:
> On 11/18/20 4:47 PM, Christoph Hellwig wrote:
> > Don't bother to call lookup_bdev for just a slightly different error
> > message without any functional change.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> 
> Hi Christoph,
> 
> NACK. The error message being removed here is frequently triggered and
> observed, and distinguishing a busy device from an already registered
> device is important (the first is a critical error, the second is not).
> 
> Removing this error message would be a functional regression.

What normal operation causes this error message to be emitted?  And what
can a user do with it?

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:13:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29619.59170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJWp-0004dE-BD; Wed, 18 Nov 2020 09:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29619.59170; Wed, 18 Nov 2020 09:13:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJWp-0004d7-82; Wed, 18 Nov 2020 09:13:27 +0000
Received: by outflank-mailman (input) for mailman id 29619;
 Wed, 18 Nov 2020 09:13:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nHbf=EY=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kfJWo-0004d1-FV
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:13:26 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7488384b-880e-45ec-9950-8e3bdb54a4f0;
 Wed, 18 Nov 2020 09:13:26 +0000 (UTC)
Received: from localhost (unknown [89.205.136.214])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 080C124656;
 Wed, 18 Nov 2020 09:13:24 +0000 (UTC)
Date: Wed, 18 Nov 2020 10:13:21 +0100
From: Greg KH <greg@kroah.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: merge struct block_device and struct hd_struct
Message-ID: <X7TlsaY2vWQceNAI@kroah.com>
References: <20201118084800.2339180-1-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-1-hch@lst.de>

On Wed, Nov 18, 2020 at 09:47:40AM +0100, Christoph Hellwig wrote:
> Hi Jens,
> 
> this series cleans up our main per-device node data structure by merging
> the block_device and hd_struct data structures that have the same scope,
> but different life times.  The main effect (besides removing lots of
> code) is that instead of having two device sizes that need complex
> synchronization there is just one now.
> 
> Note that it depends on the previous "misc cleanups" series.
> 
> A git tree is available here:
> 
>     git://git.infradead.org/users/hch/block.git bdev-lookup
> 
> Gitweb:
> 
>     http://git.infradead.org/users/hch/block.git/shortlog/refs/heads/bdev-lookup

Nice cleanups, thanks for doing this.

Series is:
	Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:16:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:16:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29625.59183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJZd-0004mU-Rq; Wed, 18 Nov 2020 09:16:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29625.59183; Wed, 18 Nov 2020 09:16:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJZd-0004mN-Ol; Wed, 18 Nov 2020 09:16:21 +0000
Received: by outflank-mailman (input) for mailman id 29625;
 Wed, 18 Nov 2020 09:16:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJZc-0004mI-GV
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:16:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 463154f0-1008-43ff-ba91-ff879c0ba7ec;
 Wed, 18 Nov 2020 09:16:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D45FABDE;
 Wed, 18 Nov 2020 09:16:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605690977; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lG//nGhZJIqPebNGS4ELK9sxFv2Pa11V9G3faF2QRSY=;
	b=V60byJxcSYWvObFMqaSjb2N1HM1GuAwvPzlRDBPJ7T+ww+nkXuyF2kTvpAZpj8ATpAb12+
	44iZgB4tkz5XFrdj5NV8Wq7BJj35PD4eub7V7THP0RFQB3cgY9+OjDZAo9xglEQrQvDSIv
	nCQ4pFovYEzkVzAmXR3d07z/IQW3lMw=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8039a29c-4058-ab6e-56ef-d1383deb7e38@suse.com>
Date: Wed, 18 Nov 2020 10:16:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201117164033.GB3093@antioche.eu.org>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.11.2020 17:40, Manuel Bouyer wrote:
> On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
>> [...]
>>
>> I have attached a patch below that will dump the vIO-APIC info as part
>> of the 'i' debug key output, can you paste the whole output of the 'i'
>> debug key when the system stalls?
> 
> see attached file. Note that the kernel did unstall while 'i' output was
> being printed, so it is mixed with some NetBSD kernel output.

Could you try to run Xen's serial port without use of any IRQ
(i.e. in "polling" mode), in an attempt to avoid this unstalling
(which is likely to render the resulting output at least in part
meaningless)?
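
For reference: assuming Xen's documented `com1=<baud>,<DPS>,<io-base>,<irq>` command-line syntax, specifying an explicit IRQ of 0 should select polled operation for the serial console. A hypothetical GRUB entry (the I/O base and baud rate are placeholders):

```shell
# Hypothetical example, assuming the usual com1= option format.
# The trailing ",0" requests IRQ 0, i.e. polling mode instead of interrupts.
multiboot2 /boot/xen.gz com1=115200,8n1,0x3f8,0 console=com1 loglvl=all
```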

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:22:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29633.59195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJfm-0005mV-Ih; Wed, 18 Nov 2020 09:22:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29633.59195; Wed, 18 Nov 2020 09:22:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJfm-0005mO-Fb; Wed, 18 Nov 2020 09:22:42 +0000
Received: by outflank-mailman (input) for mailman id 29633;
 Wed, 18 Nov 2020 09:22:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfJfk-0005mJ-Pb
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:22:40 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aed6da41-192b-4848-828a-e248cec2286b;
 Wed, 18 Nov 2020 09:22:39 +0000 (UTC)
X-Inumbo-ID: aed6da41-192b-4848-828a-e248cec2286b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605691359;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=nxPRv55g926gqtbMY1dG5MlPIFWgfxAq1OH8EPP1KNg=;
  b=AZjt7Q/67+nEntY8ACwqrYvzNl60mT74gUGaeRhpqRs8bOmSQwD+IOTa
   QdARYCjWJGVSsLVx6YVs5UsKDS4/m0AX5yTyor4dTmzr6IsnB8jsQSsHt
   tOywkiPag6eqnWbsenYUGyeTLRDLZR8piPKOfgQ7ZXaLVMeqrCTJYHQqY
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: w7ivJZBB14JlVS6jGGeXexx9ZkbKWZkhg51f+k8lu5WQPs5Ju0M7a7pDn602yfRKR7y2q8SFol
 y1vSIzYm648bxjgfBehYTOqhDMgNZjYfioSylVJgSYbI15ivdBwuKSQYLQD6Cze1XZyCaW/qQf
 MbmQJVkSOTTmRO7HYJTvL2XZCUDXvn/cN706vNLSjy5VZ9vdvvsDKPsgW8cD4W6zu7GDq1W0Uc
 XoEVzzF1w7ZLOvD+HRt/7GB55LNX+WsQK138t2Y4O/XPnWhQRU235/e67LmhHmKIF7tv1UR1rz
 8VQ=
X-SBRS: None
X-MesageID: 31410435
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31410435"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i5WMXFvwDJTfwBhJvoeL1EnANzj63B43MpIqwIH8vEP84mXNNdw+KOvwXvHmafeAzwK3fQgjrMdC6lmKb9wkSUCG4EK0TWlH98Tum7jVbYJtds+3Xm63B/NJWXHNc739f3YfR8eVGNAypH2ZcvfHW5JFzEkaMwbDfak8mLIYX/U7dmUeJ7ilGlE0JLb3nAYXr0b7jPV1+jwR0HLnsSGvvRYlgWQ+vFWvEPxJEWSprEQkWdNOPeiYa+s/3r041SPArymQ/qB+3W0bs/ZDGjD0iloYPtbUu0Uz2s9i+1/0jBMkyNknPMmYSflcEDxdQCJPydE9gCJpPpFkIx5EIDe3bA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iay9eRW/aq5QUHDL1el8fdyJak5Y00b4sZKLgBI5u/0=;
 b=dkSG08Sov1HIBOX2/988Zc+tmy9kw2u8HAemsySbKGI/3CQaviiF7mVerHzC0vYzlRaUn4W+by43YTyzaTRz9Ue6fik/AR3UBt+CPQL/QcJxK7dyyd/Qt68uf0E67ZBfrYbujsTOY/INom70I/dfaqQZl7YQFA7UJnKzyz+pPVRgg0oipb9SXFRueiEiy4kLppQDZo1o5pl77Ii6d3/ZRnyVmLM7XTIp7UF8rP6h82p+Dp9jGIO2D1e/XrlqHtOhbfas6dXBHQBY3HnIysuMuuMg5Wq8/wYhGsfiBiL4lDrwMU1kfRZlPJmMTy3/LgowcUieVDS3WHAshAfo+HFVJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iay9eRW/aq5QUHDL1el8fdyJak5Y00b4sZKLgBI5u/0=;
 b=r7R/CFv2rytMx86RnQvMon1TJbQ9ngDbKGD0A0JzFno7TmJIKlNkNfIHzxIJShhHVJu1XefN31wiUNPpaMD39+IYmHVFfK00HaMKjZK+ioSqXkFj/KAyIniZnzCRqk3ngclVblO9YT4E5Q6tERJMghJAJ3NM6LGDjrsnbn4BVdY=
Date: Wed, 18 Nov 2020 10:22:30 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Tim Deegan <tim@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201118092230.4trcagbv3lxiz6n3@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
 <20201110135944.hbsojy6eeyw53has@Air-de-Roger>
 <d73234b0-f22e-0783-3fbe-759ccb0ecc48@suse.com>
 <20201111121730.pblsf6inot5gixfc@Air-de-Roger>
 <7f916527-9a9c-8afe-5e5c-781554d1bd73@suse.com>
 <20201112130709.r3acpkrkyck6arul@Air-de-Roger>
 <51e646d4-3e1b-3698-c649-a39840275ec9@suse.com>
 <20201112175221.GB43943@deinos.phlegethon.org>
 <40055cf5-ab16-ad73-4446-3f8f730a6613@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <40055cf5-ab16-ad73-4446-3f8f730a6613@suse.com>
X-ClientProxiedBy: MR2P264CA0097.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 750a6850-5e2d-4616-b093-08d88ba37c42
X-MS-TrafficTypeDiagnostic: DM6PR03MB4219:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB42191E2E800A5D6624937E0B8FE10@DM6PR03MB4219.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: gX0kKPPwF53JZxg6mrjbE8u/j1ag1pmkkqXWs2QNiMSK0z0sJoxrxoctLXHrl38yMZSxzmltE6I52Asa7gyPEuJEK0iVVMCCBkW8qGdaiiS2mfF7sh+oezCc/AV1tHBtzbAA1d0DYmV297XQQcOGOW32teyhCkMhbGTSmf5JmgIUrzYMQtOuAE0V9BrAkDk+PizZi0aAzgGaJwt0Omn4U+QQOFdmVVzIR9ryL9V43hdbnPHdZ9cfFtt8M/iIdXKYq6ogCg19+HvR2LIifNeUQLt5CveswmP47xuoHw3tkSRecrWalWJJ0+z2FFN6iDpEGNayRUrYil9dzT+eEiDQ5p7df9IIp9oCGPAiRDsDsiOR3Kz7mgwgPItp9Q1vsTnt
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(39850400004)(346002)(376002)(396003)(366004)(16526019)(53546011)(6666004)(186003)(478600001)(6496006)(85182001)(6916009)(83380400001)(6486002)(4326008)(26005)(54906003)(9686003)(316002)(86362001)(107886003)(956004)(66946007)(33716001)(1076003)(66476007)(2906002)(8676002)(66556008)(8936002)(5660300002)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: kSVKwKSRfzEvB3ahrgsxqAHNzxirershW77dYYL+6mBJ3eiI7HUzk7HcpsN50cfUfUo9E9cFZkBF8hubskFkJbwGaxVWgwq3XcfvldMAGn9sMlZe/sw7nPKvZwpZDQvnLyLd/ekI1J9gGxd9lsXZXmvHtdkONcsPz6uAxJ/xf7J4LP7TlKZktKbnoaHqr3ki3BLq1cnju95Yp6XO6xwKv6Jc0/4h3GwKl+8p/bY2nGQczWikCcJoAEA5XCxXIzBrnRBjpA+EkM5c2LI6QOwfJbiqFINSpkogjT0PqxQpG4zkPQTm351Fq5s1EmFcr37yBKP2qJTcyXfOOhNTREJo/+Dhg36W3pkVACPlgj3hpSzX3eSW9xwNhZgSPXEkq+3ztHBqPsx933ofznEUiGgIkjFWsKIPiee/2y62+TqgOhQZocbT3bNmQaRlrcwCLHNcOyAAfSvevrHLwfX3jU/O1WFAMrmjsyzgzjqc+0YBR9XBbB0fD+/21Vcj4ADhnu0VtsgAMGP3OZCgw9qGqYyEdWdPHdQTBS4UAx2zM5InC6R+UMaI8xDgq31pXnDqsac5frVsa4ks/yhSwOl+sIswNa7Ctc2EEuNUA61UtYkn2BVuuuiOpET1LZZvObJJwBeAALQI7z14uryLWG6/xPvk9PjZeG9sY4DHjHdjivwmvhaas6LQTeCFgL7d+vPsY2rOIgcRmsK/jzttSvQn3PBnBJaDbn1hyvRrGiOQRmrafeDdKc82IN5nXCQCXOKrvD+DE2ttMpuPGSOKcIwrotD08SHMF9w2oQWuLgXtkTSc0OCd6BGutk8No3Gkdn+T0b47ark+zOrzF3TKVcjbxW0TAoonoUArPkH0mxmghPeU8NXbh6Hg6nFooKuwl0+RF+N8Ljq6MDYhNmz13PigAYnzWw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 750a6850-5e2d-4616-b093-08d88ba37c42
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 09:22:35.5880
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GDANoaFkE0XtB8u7hEcHUE6IzPLw4jBxpDrqLh/W+lMNBl0zzeY9qGAQo7MkF46al/EAPH5zmXHoPq/6+ql7ZQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4219
X-OriginatorOrg: citrix.com

On Fri, Nov 13, 2020 at 10:52:58AM +0100, Jan Beulich wrote:
> On 12.11.2020 18:52, Tim Deegan wrote:
> > At 15:04 +0100 on 12 Nov (1605193496), Jan Beulich wrote:
> >> On 12.11.2020 14:07, Roger Pau Monné wrote:
> >>> On Thu, Nov 12, 2020 at 01:29:33PM +0100, Jan Beulich wrote:
> >>>> I agree with all this. If only it was merely about TLB flushes. In
> >>>> the shadow case, shadow_blow_all_tables() gets invoked, and that
> >>>> one - looking at the other call sites - wants the paging lock held.
> > [...]
> >>> The post hook for shadow could pick the lock again, as I don't think
> >>> the removal of the tables needs to be strictly done inside of the same
> >>> locked region?
> >>
> >> I think it does, or else a use of the now stale tables may occur
> >> before they got blown away. Tim?
> > 
> > Is this the call to shadow_blow_tables() in the write_p2m_entry path?
> 
> Yes.
> 
> > I think it would be safe to drop and re-take the paging lock there as
> > long as the call happens before the write is considered to have
> > finished.
> > 
> > But it would not be a useful performance improvement - any update that
> > takes this path is going to be very slow regardless.  So unless you
> > have another pressing reason to split it up, I would be inclined to
> > leave it as it is.  That way it's easier to see that the locking is
> > correct.
> 
> Thanks for the clarification.
> 
> Roger - what's your view at this point?

So my main concern was not really about making this path faster (after
all, this shouldn't be a hot path anyway), but rather about reducing the
region protected by the paging lock, since it's a global lock that's
quite contended. In the HAP case we could move the flush outside of
the locked region, thus reducing the lock hold time.

Anyway, seeing there's not much consensus on this aspect, leaving it
as-is is no worse than what's currently there.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:24:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29638.59207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJgx-0005wR-2I; Wed, 18 Nov 2020 09:23:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29638.59207; Wed, 18 Nov 2020 09:23:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJgw-0005wK-VB; Wed, 18 Nov 2020 09:23:54 +0000
Received: by outflank-mailman (input) for mailman id 29638;
 Wed, 18 Nov 2020 09:23:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJgv-0005wD-NT
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:23:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 163ac367-22b2-438a-aa24-485afb418367;
 Wed, 18 Nov 2020 09:23:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 004B0ABDE;
 Wed, 18 Nov 2020 09:23:51 +0000 (UTC)
X-Inumbo-ID: 163ac367-22b2-438a-aa24-485afb418367
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605691432; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mPuZTYIAwAqalq/3PBMt9FUGU5aCOv8nZscLB8ryTqE=;
	b=i0CTyeL1nWw+YANGFV2rMzpIxuK6HpHkslVxAt1+sbizisxFXYmthJpOtqYWv5OEwI8nEF
	5up2ifEYyQez4h2UgcHFo2aze20C95gWGI0urEOXhGszYfyJOCLUL6zcVAJa/mzXzhiIoD
	LlL4BiGABS8D6M/Aksgo6qs6jBauHxs=
Subject: Re: merge struct block_device and struct hd_struct
To: Greg KH <gregkh@linuxfoundation.org>
Cc: Christoph Hellwig <hch@lst.de>, Tejun Heo <tj@kernel.org>,
 Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>,
 Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 Jens Axboe <axboe@kernel.dk>
References: <20201118084800.2339180-1-hch@lst.de>
 <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
 <20201118085804.GA20384@lst.de>
 <1ded2079-f1be-6d5d-01df-65754447df78@suse.com> <X7Tky/6dDN8+DrU7@kroah.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <61044f85-cd41-87b5-3f41-36e3dffb6f2a@suse.com>
Date: Wed, 18 Nov 2020 10:23:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <X7Tky/6dDN8+DrU7@kroah.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 10:09, Greg KH wrote:
> On Wed, Nov 18, 2020 at 10:04:04AM +0100, Jan Beulich wrote:
>> On 18.11.2020 09:58, Christoph Hellwig wrote:
>>> On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
>>>> since this isn't the first series from you recently spamming
>>>> xen-devel, may I ask that you don't Cc entire series to lists
>>>> which are involved with perhaps just one out of the many patches?
>>>> IMO Cc lists should be compiled on a per-patch basis; the cover
>>>> letter may of course be sent to the union of all of them.
>>>
>>> No way.  Individual CCs are completely broken as they don't provide
>>> the reviewer a context.
>>
>> That's the view of some people, but not all. Context can be easily
>> established by those who care going to one of the many archives on
>> which the entire series lands. Getting spammed, however, can't be
>> avoided by the dozens or hundreds of list subscribers.
> 
> kernel patches are never "spam", sorry, but for developers to try to
> determine which lists/maintainers want to see the whole series and which
> do not is impossible.
> 
> Patches in a series are easily deleted from sane mail clients with a
> single click/keystroke all at once, they aren't a problem that needs to
> be reduced in volume.

This doesn't scale, in either the dimension of recipients or that of
possible sources of such series.

While it may seem small, it's also a waste of resources to have mails
sent to hundreds or even thousands of people. So while from a
technical-content perspective I surely agree with you that 'kernel
patches are never "spam"', they still are spam in the sense the term
originally had: mail the recipients did not want to receive.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:24:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29643.59218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJhk-00063D-CE; Wed, 18 Nov 2020 09:24:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29643.59218; Wed, 18 Nov 2020 09:24:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJhk-000636-8v; Wed, 18 Nov 2020 09:24:44 +0000
Received: by outflank-mailman (input) for mailman id 29643;
 Wed, 18 Nov 2020 09:24:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QZvP=EY=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfJhj-000630-Bn
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:24:43 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b07cff6f-d614-4f7e-b4d9-18b0e88a0c1b;
 Wed, 18 Nov 2020 09:24:41 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AI9OVwe026567
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 18 Nov 2020 10:24:31 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id E1CAD2E9CA8; Wed, 18 Nov 2020 10:24:25 +0100 (MET)
X-Inumbo-ID: b07cff6f-d614-4f7e-b4d9-18b0e88a0c1b
Date: Wed, 18 Nov 2020 10:24:25 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118092425.GC1085@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 18 Nov 2020 10:24:32 +0100 (MET)

On Wed, Nov 18, 2020 at 09:57:38AM +0100, Roger Pau Monné wrote:
> On Tue, Nov 17, 2020 at 05:40:33PM +0100, Manuel Bouyer wrote:
> > On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> > > [...]
> > > 
> > > I have attached a patch below that will dump the vIO-APIC info as part
> > > of the 'i' debug key output, can you paste the whole output of the 'i'
> > > debug key when the system stalls?
> > 
> > see attached file. Note that the kernel did unstall while 'i' output was
> > being printed, so it is mixed with some NetBSD kernel output.
> > The idt entry of the 'ioapic2 pin2' interrupt is 103 on CPU 0.
> > 
> > I also put the whole sequence at
> > http://www-soc.lip6.fr/~bouyer/xen-log3.txt
> 
> On one of the instances the pin shows up as masked, but I'm not sure
> if that's relevant since later it shows up as unmasked. Might just be
> part of how NetBSD handles such interrupts.

Yes, NetBSD can mask an interrupt source if the interrupt needs to be
delayed; it will be unmasked once the interrupt has been handled.

Could it be that Xen misses an unmask write, or fails to inject the
vector if the interrupt is pending again at the time of the
unmask?


> [...]
> On a maybe unrelated question, how do you setup the event channel
> callback, is it using HVM_PARAM_CALLBACK_IRQ and
> HVM_PARAM_CALLBACK_TYPE_VECTOR?

Yes, the code is at
https://github.com/NetBSD/src/blob/f9a54eaecfb47bce597f72f6cae8861f4d486eb4/sys/arch/xen/xen/hypervisor.c#L457

> 
> Are you EOI'ing such vector on the local APIC when servicing the
> interrupt?

I think it's OK. The code is at
https://github.com/NetBSD/src/blob/f9a54eaecfb47bce597f72f6cae8861f4d486eb4/sys/arch/amd64/amd64/vector.S#L770

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:28:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:28:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29653.59231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJlP-0006HP-Sy; Wed, 18 Nov 2020 09:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29653.59231; Wed, 18 Nov 2020 09:28:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJlP-0006HI-Pl; Wed, 18 Nov 2020 09:28:31 +0000
Received: by outflank-mailman (input) for mailman id 29653;
 Wed, 18 Nov 2020 09:28:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QZvP=EY=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfJlN-0006HD-V5
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:28:29 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a2ac49f-803b-43d9-93ad-2492f99da677;
 Wed, 18 Nov 2020 09:28:29 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AI9SOHZ001422
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 18 Nov 2020 10:28:25 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 40B382E9CA8; Wed, 18 Nov 2020 10:28:19 +0100 (MET)
X-Inumbo-ID: 1a2ac49f-803b-43d9-93ad-2492f99da677
Date: Wed, 18 Nov 2020 10:28:19 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
        Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118092819.GE1085@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <8039a29c-4058-ab6e-56ef-d1383deb7e38@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8039a29c-4058-ab6e-56ef-d1383deb7e38@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 18 Nov 2020 10:28:25 +0100 (MET)

On Wed, Nov 18, 2020 at 10:16:17AM +0100, Jan Beulich wrote:
> On 17.11.2020 17:40, Manuel Bouyer wrote:
> > On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> >> [...]
> >>
> >> I have attached a patch below that will dump the vIO-APIC info as part
> >> of the 'i' debug key output, can you paste the whole output of the 'i'
> >> debug key when the system stalls?
> > 
> > see attached file. Note that the kernel did unstall while 'i' output was
> > being printed, so it is mixed with some NetBSD kernel output.
> 
> Could you try to run Xen's serial port without use of any IRQ
> (i.e. in "polling" mode), in an attempt to avoid this unstalling
> (which is likely to render the resulting output at least in part
> meaningless)?

Is there a boot line option for that?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29660.59242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJol-0007Cx-D8; Wed, 18 Nov 2020 09:31:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29660.59242; Wed, 18 Nov 2020 09:31:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJol-0007Cq-9t; Wed, 18 Nov 2020 09:31:59 +0000
Received: by outflank-mailman (input) for mailman id 29660;
 Wed, 18 Nov 2020 09:31:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfJok-0007Cl-NR
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:31:58 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d8e474f-826e-43ad-93a1-e649ea5a5cf7;
 Wed, 18 Nov 2020 09:31:57 +0000 (UTC)
X-Inumbo-ID: 6d8e474f-826e-43ad-93a1-e649ea5a5cf7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605691917;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=t0zAXLZnc0Gl30pwDAXz1VrhzusxlosKE+oSLWR6ruE=;
  b=L9lUITGbIcbfaTxdbKasr+NDqndWucDtsvgahwQ0ftKJHjKenocnimuv
   O973FkNu9d4RG9bOGdrn5Zl9lhE54BrRQUAyvnModvGfQ+KmPP23oO4Kl
   yzY0hz7zNDUM7A0SjpgdIFn441TkDe6HiKvxTWrNuaGgQnTrin9gsKJ6x
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: FRg8CvNcHdcJXOZczSlcz6tR7j7LJ8niMNJLZpqtJTHp+kaZuuyG8Jo2ZNhkRfeRPRJQdMVt/p
 zCwv+B82ydhBax8tTmZWNJ4WBEDfRh4E3fh1/WDOYNMg9A4B+bjQu7zzsxuYYXlRKGj/+TyDbJ
 CK1Ne84MJVdtsQdEfAkWGOxpKUtnzwn+xNOyLjbjECQilQnh/Rb0JxXCuvj8ZgLPLXBKHUZ8Hq
 jDJn27dt7GY9M5nfnsToGegHfI8WjW6cz5QwMk4PG5wvFKz+hMldgUD+ssAikBh/e+8iTokQPP
 02o=
X-SBRS: None
X-MesageID: 31428971
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31428971"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I/K1kP/j2sUa1/fwfQP/s9wm0TnUhezZx0yQKmUh4xby6W/VQwyGXMkquX+3KTm311gnuJ6jyyk3Xx6AAh4AZVuB6w3fbES1GtSuZuOFq0YkNED+Wa7ZsYanljzYNvsalBHh7ty+TF4zIFzE+txHH4/6UN4VvLYfNV1kAde5tNNKFwyt/aQ/6VOfM9NaFytkEZV2zZ/zZtvs2jCgzzuxOQkw2KiCh+Sep8TRJV/ZN6W7siikYFt+OLSIWR+xK151hWUObL76fpO77Bv4MC74AskYa/wvs168YjVKskwAqRhd1TF99y4xKC+Yo/f2xwxdrja3J2IBqUF5Gp6soLTw7w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Uh+DD3GMiZ41hDqyn5oTzk8Apb2C0EIQCg/syawI7YE=;
 b=N+Q5Bbi710y2VuNlM7JAhgtuI0tuHNR31SnacbvFvSnzgl1IOOI+tsY/Wol5HEgIE/s2xqHCncHACyYP0za1BA4SgdCbTmG7wWzJhgTeBrZesnRRdvHP5vhcvk4fEZx6addV9jh/94BbU8klzQahiNSbUKgDVsvTAKNUlJgYHtZvO+B3ZBgsssk+aFXd3lZFln9oFQ3QAfpDS94xHZy4RRpXAxayV8qjV4U5u/LN/vjsFM1546g+6+8B6830fOwfBMA5VkVTbIk5LNyqHCPD2ZQCS9mv6c3HyZtaoH0Wn0fA7ebIvdWIpoQZ5XPAswLCAMd1yM+PJ92eMaxjjSg/JA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Uh+DD3GMiZ41hDqyn5oTzk8Apb2C0EIQCg/syawI7YE=;
 b=Q/sCzpcNGNZAfEjich9vMuRs2cGDt9LMZfvlmMj9fma8BU+P+Qhb4u1hZ+kKEs3q8cSbK/ec19dig3hXJ7ZEvWpXK3XfQW2q0m0Cp+CsLr9gTO4j6zb7gtroYouPIxgoz0vXL/Qjw8HDV/mOFQ7ZRgdktkTN3tg81iIxXrrGpZM=
Date: Wed, 18 Nov 2020 10:31:48 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201118093148.fzm2ibxcjopgruui@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
X-ClientProxiedBy: LO2P265CA0413.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6a04bdca-7076-40ef-d6c5-08d88ba4c8eb
X-MS-TrafficTypeDiagnostic: DM5PR03MB3370:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3370248FD436987E796703948FE10@DM5PR03MB3370.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:741;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 6NQLI4knEN7LlkA0ICRAxnQK5gWCd4i+khVqmoug6ncz7aNYCMLvUskbXFxTD+kY/lGhlVam7y4vacOkTURcPHgET+8MTYFVaKMIPX9P8zbazTW0OdYJVBES9CNCJnG3v+A6QAMYyrJaotZT3IMo2pLo1lxuQ2EfFl4kGgfxaCgpEz2dgTFqlziXI8TQM5z6cSMT5UT8ND1lD2KbPpxY1CX8dm/9h2gYTxQsqzprS1FpHlUuJ4A63gcRQtn6K3N8oye9jedsTU//wf57lD0Q29bbs3GOFdbthu33bnctgH2W0jSe9hxipoRPRY4fe0nsoVnPivj2992PJP+rT4I08pxhjiSKgjTRBmnSm9ZZwfez8i+vhQz4+fqJSPadJNbV
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(346002)(376002)(136003)(396003)(39850400004)(1076003)(6916009)(8936002)(4326008)(85182001)(8676002)(4744005)(186003)(54906003)(16526019)(6486002)(956004)(9686003)(26005)(66946007)(66556008)(66476007)(5660300002)(478600001)(6496006)(6666004)(316002)(33716001)(2906002)(86362001)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 3n6TvCeoOySsiInNVuCrLu66mDExi9I4TqJBwUDd1COcVOvgt3WyH2BCtFeFWVB4fbH9KGPVHobUVoz1Pw3wEVbFe6gItW92CG7pOdPyD3QyuRwhLN3Sku80JCU0BYuplKLx3+paT8ROlswwPcWIFAEvlAR1Y+1n5dyBYy5tBgMyj04xzNJceNYfd7wDDEtlm2jmCdnAQ4X9MHGyLntDbVIh6gUukuDwjDk67UTRgud2GEpWO/IosF+PARzYKrZ1jB2Kxph8AgqoWQJoPSgecAkN7T+Akqx4gQU9G7iW4mYHFMOmLU/MT4yBCBIHc0/LZJuiQmVVWh2I3SxQSmAmlbCx1sUxhJwY6ETl9kQJZrl0Q5jv6xNu++ZbtTDvxO/Ss+XhSnIvv9gYBuaHe6P4851TwyH04lSuikW+EGNZIdtBsMyMC2Dk9zdey5H6cr+YfO8rroROMi9qPfZPTlC6CqSjpoGugdw1puq9JnD+Ns7HGMVVEiKUAbnkL3+X8HjvkZRXAy69aq8MbaK59yY9BxWhXuuR5WokwANjf9xldooJFbjjE66xPDBHDZzzzjVrU80l3AzGr0pMyHlArB+C0nYPkDqkvwIbzL68jYx1zi9jIwzdkSedtqvbKguJRTWW8xBswvyIfB5eiCEMRVBvYfA6jNsN5Qwekp5aJZkqs38ZtZioVq0SrwR7XhgX6+SPiB3VeK96CHW7DAWVoIsXagTNX9QtaMgp98I4GXnToa49MFO4Zcx5q97TJlc5WfHBjecimAfBkSHf41c9xinGvwozn3aJYDBojwTv8hN7WyEBLKCIhWbdPWJgHWnU8gvfgpia+PnZfo2j9NNnwoMjkqSGx9eDY5I0dHR4lsHyWJ6y+fl50sDkhL24KinWdNjAwVUEabVbcnLNTlBTro410g==
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a04bdca-7076-40ef-d6c5-08d88ba4c8eb
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 09:31:53.6165
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LbDiJbFi6LJ7ATCY1evc7w7EPf53F71QncAoYGLdwonlHv5uZmge7ULRB8ucJZKlJ8Ghzn/YmjfzmQnQnjFlcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3370
X-OriginatorOrg: citrix.com

On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
> Fair parts of the present handlers are identical; in fact
> nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
> common parts right into write_p2m_entry(), splitting the hooks into a
> "pre" one (needed just by shadow code) and a "post" one.
> 
> For the common parts moved I think that the p2m_flush_nestedp2m() is,
> at least from an abstract perspective, also applicable in the shadow
> case. Hence it doesn't get a 3rd hook put in place.
> 
> The initial comment that was in hap_write_p2m_entry() gets dropped: Its
> placement was bogus, and looking back at the commit introducing it
> (dd6de3ab9985 "Implement Nested-on-Nested") I can't see what use
> of a p2m it was meant to be associated with either.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:32:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:32:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29664.59255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJp8-0007IV-Lv; Wed, 18 Nov 2020 09:32:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29664.59255; Wed, 18 Nov 2020 09:32:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJp8-0007IO-In; Wed, 18 Nov 2020 09:32:22 +0000
Received: by outflank-mailman (input) for mailman id 29664;
 Wed, 18 Nov 2020 09:32:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nHbf=EY=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kfJp7-0007H4-7R
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:32:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ba75b7b-2f31-4ed3-aa37-e61f79eed017;
 Wed, 18 Nov 2020 09:32:17 +0000 (UTC)
Received: from localhost (unknown [89.205.136.214])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 651A320855;
 Wed, 18 Nov 2020 09:32:15 +0000 (UTC)
X-Inumbo-ID: 5ba75b7b-2f31-4ed3-aa37-e61f79eed017
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1605691936;
	bh=z9AARqxEifx27ZSfSqyQtp5/QTEI/AENwIGCm/wc71c=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=fFTiyRf+lNKL9ST4txcSeaDrCIqy8sGZqtdcsNAnsGly1+5GYCQkd5b0dJtsVNXW7
	 Ty+UOKz4POaIjvZvW72SkGo52lChY2pkhIjug9dEhWmDqtz52fKjvNwDSvPQKw5/EX
	 XjQbNqUIRf7iTd7GfCLBQpWa8uhdec/9xDR/QuiU=
Date: Wed, 18 Nov 2020 10:32:12 +0100
From: Greg KH <gregkh@linuxfoundation.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Christoph Hellwig <hch@lst.de>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: merge struct block_device and struct hd_struct
Message-ID: <X7TqHNotTX6W/bmT@kroah.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
 <20201118085804.GA20384@lst.de>
 <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>
 <X7Tky/6dDN8+DrU7@kroah.com>
 <61044f85-cd41-87b5-3f41-36e3dffb6f2a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <61044f85-cd41-87b5-3f41-36e3dffb6f2a@suse.com>

On Wed, Nov 18, 2020 at 10:23:51AM +0100, Jan Beulich wrote:
> On 18.11.2020 10:09, Greg KH wrote:
> > On Wed, Nov 18, 2020 at 10:04:04AM +0100, Jan Beulich wrote:
> >> On 18.11.2020 09:58, Christoph Hellwig wrote:
> >>> On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
> >>>> since this isn't the first series from you recently spamming
> >>>> xen-devel, may I ask that you don't Cc entire series to lists
> >>>> which are involved with perhaps just one out of the many patches?
> >>>> IMO Cc lists should be compiled on a per-patch basis; the cover
> >>>> letter may of course be sent to the union of all of them.
> >>>
> >>> No way.  Individual CCs are completely broken as they don't provide
> >>> the reviewer a context.
> >>
> >> That's the view of some people, but not all. Context can be easily
> >> established by those who care going to one of the many archives on
> >> which the entire series lands. Getting spammed, however, can't be
> >> avoided by the dozens or hundreds of list subscribers.
> > 
> > kernel patches are never "spam", sorry, but for developers to try to
> > determine which lists/maintainers want to see the whole series and which
> > do not is impossible.
> > 
> > Patches in a series are easily deleted from sane mail clients with a
> > single click/keystroke all at once, they aren't a problem that needs to
> > be reduced in volume.
> 
> This doesn't scale, neither in the dimension of recipients nor in
> the dimension of possible sources of such series.

Again, trying to figure out which subsystems do, and do not, want
stuff like this does not scale either.  Remember, we had 4000 developers
last year; how are you going to tell all of them what the special rules
are for your subsystem and how they differ from any other subsystem?

And why does it matter?  We are all working on the same project, so why
wouldn't you want to see core block device handling patches?  What harm
is there in that?  Someone might notice something in one of them that a
different developer did not.

> While it may seem small, it's also a waste of resources to have mails
> sent to hundreds of even thousands of people. So while from a
> technical content perspective I surely agree with you saying 'kernel
> patches are never "spam"', they still are from the perspective of
> what "spam mail" originally means: Mail the recipients did not want
> to receive.

Anyone on a kernel subsystem mailing list should expect to see kernel
patches, that's part of the process, and always has been.

Kernel subsystems are not silos, people on them should be aware of what
else is going on in order to stay informed.  And again, if it's a huge
problem, one click/keystroke and they are gone, no waste.

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:43:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29675.59266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJzu-0008Pu-Nr; Wed, 18 Nov 2020 09:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29675.59266; Wed, 18 Nov 2020 09:43:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfJzu-0008Pn-Kv; Wed, 18 Nov 2020 09:43:30 +0000
Received: by outflank-mailman (input) for mailman id 29675;
 Wed, 18 Nov 2020 09:43:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfJzt-0008Pi-Bt
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:43:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1294673-27bf-48fb-9612-efeb91b9d5e7;
 Wed, 18 Nov 2020 09:43:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 84630AC2D;
 Wed, 18 Nov 2020 09:43:27 +0000 (UTC)
X-Inumbo-ID: a1294673-27bf-48fb-9612-efeb91b9d5e7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605692607; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=47DDZ1KzkF6mKWai+9EruK64aLMwC7IKY7PJl9Yv2WE=;
	b=kVpO5MOWU7WA8fArxmLMn5NCtfHjEJLNYaNGFJQ7uYlb88c/mjTtprB4S3l7CV4kP1eO5L
	wwsHJaDra0Mmf5dIpO9rbLsQW9JcZkX7MjAdJDzNJq35zd5pi2r62nOqr1tABAsZny2Hdq
	a4qm2b/Jhj9HbNK1hlnkNchlNgbgLJo=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <8039a29c-4058-ab6e-56ef-d1383deb7e38@suse.com>
 <20201118092819.GE1085@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6ad38151-d218-03c4-8085-9eff35bd63ff@suse.com>
Date: Wed, 18 Nov 2020 10:43:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201118092819.GE1085@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 10:28, Manuel Bouyer wrote:
> On Wed, Nov 18, 2020 at 10:16:17AM +0100, Jan Beulich wrote:
>> On 17.11.2020 17:40, Manuel Bouyer wrote:
>>> On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
>>>> [...]
>>>>
>>>> I have attached a patch below that will dump the vIO-APIC info as part
>>>> of the 'i' debug key output, can you paste the whole output of the 'i'
>>>> debug key when the system stalls?
>>>
>>> see attached file. Note that the kernel did unstall while 'i' output was
>>> being printed, so it is mixed with some NetBSD kernel output.
>>
>> Could you try to run Xen's serial port without use of any IRQ
>> (i.e. in "polling" mode), in an attempt to avoid this unstalling
>> (which is likely to render the resulting output at least in part
>> meaningless)?
> 
> Is there a boot line option for that?

Yes, com<N>= has a field for this:

### com1
### com2
> `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`

with

* `<irq>` is the IRQ number to use, or `0` to use the UART in poll
  mode only, or `msi` to set up a Message Signaled Interrupt.
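For instance (values here are purely illustrative; the actual baud rate
and I/O port depend on your hardware), a boot line that forces poll mode
by passing `0` in the IRQ field could look like:

```
com1=115200,8n1,0x3f8,0 console=com1
```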

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:44:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:44:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29680.59279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfK0s-00005h-72; Wed, 18 Nov 2020 09:44:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29680.59279; Wed, 18 Nov 2020 09:44:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfK0s-00005a-2q; Wed, 18 Nov 2020 09:44:30 +0000
Received: by outflank-mailman (input) for mailman id 29680;
 Wed, 18 Nov 2020 09:44:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfK0q-00005R-Tm
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:44:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80b480db-5905-4325-aecc-695260f3d9bd;
 Wed, 18 Nov 2020 09:44:27 +0000 (UTC)
X-Inumbo-ID: 80b480db-5905-4325-aecc-695260f3d9bd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605692667;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=YmgK07ayw2G2NeZFVI4DRzYRO/KslHxB77nroqGzB2E=;
  b=D0whtFxdBGfQQgoFD4QNCzyKyz+7JMEaHfjWD2tgOYev5gkxbx+rj3qQ
   a7RF14s468VrHc6nGvLOyk+u+7i7aCXqVLKCCreddrgaDG1m892sk73ti
   0F3Up7e9i5U/fb+IXr6ePKVHjL3myWin3aHT8cmipp3gwxdCLDoueLBYW
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: RirDLYARrrYEuaqzJ1YOVEfk0AMqtKljaAf4jhY618h5YNuoFnlQL7otdQnVsKOW0XKBWro6M7
 iJjgcl3SVTfSiAslNoQZk593alDN4pKoTcOLxpi3Of8g3cseGb01VHA7vzN44/wD1Nbp4XGoXN
 l+h3dqxKrT6XWcSn2CBfS96ejRDHtrXJ9HnaaIEyQIZSMchLVdS6c+qBbzlGhZ46PI/NtSeqgD
 wftbcHvJI64ZByGPVnffa1TbVy2sXFvmVQMdGy6LBRjfMA4KTyFsWr1CEJAMQVJlNmcTDPoX4D
 U8E=
X-SBRS: None
X-MesageID: 31754309
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31754309"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QPih3etPREROqIgGukkt7Ip4PO6Np1tYHMeah8EU/Mb7colNFqsyKK58f+YtDNaC/kfjMnqoP88Jwy33qdPxX2+6bEc/KxAyv6/Qom71sHmOgEg+rV/i2pMUwBktdRd/BXKiJrLo3WX9UTNN/2MaZSe6aQ9K8gQocPwCk922baIkpHXUhy9djDUgqB93MDbGEQw/T6IBg1qKvb0Sg9ohrUAzLAh9gX/zR5z19n18UhGNLaHDI4ZonzU7cSkWxwWiMhcUDZyxEL0ph7mwppy7BotIhLqh/9JpR/f4pQCqhTQkyEFMdV4RoUO2DOvr9s9PqbhPjZJhHuX9Hvo6qanlGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wmmAjf5SK5orqrFLjr6dR7kJQ5lZWnqbrI0k43wG11M=;
 b=FNHix/PUFOv73OecbWPLq5hKt80BuceZPQRSbpYf+jGL3WhwfnQ7qNiLfAKvSP5o6tWRO99gpPkzA9EpUMRbF4hggVYXOUsmMfUNWzu1AOZl0L2CsVRGhaNR0mcRYYtp9NGXnmqsdmTh2+QdMEfMCCBx4qvdfoPHVbjsWlXuT5IdRjTM4Wh3p6PsZ8F4TEWwW1W75vlwzXm2z1jDhzOni7EFP5vkvBe1Mf8insUy8IXS1Da8Yltx9/K/DW4HQXv8b5l0L12qYieESvzBHhUDRoHVk4gszd8QD3gGz+AapaJkupJjpcOX0mkcHm0u2jH0TbPa8ZLxk27ncYxd791+8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wmmAjf5SK5orqrFLjr6dR7kJQ5lZWnqbrI0k43wG11M=;
 b=EQltPWBpvcrafz8XAvCjTtb1Ed0DFyUNF20qFGZwnB5BsiDsEXo0XSvopjyIaRPSGjCOEvW0gMuyIN0weFaW29wzKA/yJU86bHhcq7bHdBUz7vAwmJE2+DkjkQOcTAXFU+jfqnvK2qkCdwVkfK1X0AIce7QFMx4oZ+0b9k7JFV4=
Date: Wed, 18 Nov 2020 10:44:09 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH 2/5] x86/p2m: collapse the two ->write_p2m_entry() hooks
Message-ID: <20201118094409.kpw2uie6kpb3gso4@Air-de-Roger>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
 <20201110110611.p3twf6rmy7qdlxa7@Air-de-Roger>
 <b4932c75-c9c4-1da0-2218-fe3cb959e2e2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b4932c75-c9c4-1da0-2218-fe3cb959e2e2@suse.com>
X-ClientProxiedBy: MR2P264CA0115.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 462ce6de-3fdd-4008-e2b3-08d88ba682cd
X-MS-TrafficTypeDiagnostic: DM5PR03MB2922:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB29220E9F65DD7FC40ECDD7408FE10@DM5PR03MB2922.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: u5lBEeuQgxl6CRlAQt3huOMn/9H6pcwKBaVKq1xW+4eUZi1JMYgFYRO0EhFGGqhNg9eWInSvqdE7jSMOLOq6eML1QIyAQmt45k8YuBSCBHI9fDOKIfrCzbd+/yDDZXI8CINCrkm374BV6QmaGc01fQq1FNF66miDYodJQWW6x/yrU/OhfZlrrTUZ8hRdlE5vyptdkCVBy313cxdhvlbJwLTtF9U3lMf/tD4VXlAiYHiFZ7x4HvH6+1WXyo+CqJ+aNaf/Su/IB57w9exz28zo54jqUGRLp+4Mjl9KmDGpU0fegN6PC93xqbPhOiHnBpfnVtm5Y/qSkSs85OneEwICCvrWp8VillmSMI7IG7ax5zuJKFPk4okiPn6jB/C+/cwK
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(346002)(376002)(396003)(39850400004)(136003)(1076003)(4326008)(6916009)(6486002)(53546011)(5660300002)(85182001)(316002)(26005)(186003)(33716001)(956004)(16526019)(86362001)(66946007)(6666004)(54906003)(9686003)(478600001)(66556008)(8936002)(66476007)(83380400001)(6496006)(2906002)(8676002)(70780200001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: obHOiJhFv8VLJVl55PnxLA0P0AdWvC0H1VbJmbzWTtW1IMfvO8X+WGvWWyZ/Vb0JcLSTFeYigbz9iH+Faqd63qArwXt/Pj3BjTHPIaBJyhUqmgKMi5Q1Vl2ZkAy7GxoB98+XNO+/f73+cNr0kx1bDKYXyNSBwuZb4CIyCy6qMeI/W/Mb3PpYJ8N8c4/5Ub5JQU420X5MDQNzGgehZ5hpgGLRkQElQO0t0C5z0wV32loNtOdu7ACYJ95C0Ymcafi6FhkME9u+ktqBJD8B90n5h13G4yicEej/uDQe8tIqXZGFLDuiPXCue37MXEhGqz9Zt1vpPl/+EoZ8AmOilU/EO0JazlR7iV0c5qcpJPapWBhhx99WXTl7JLxVrlTgXRqg8NpvBIybLan7+l2VrkvbWtdKlblAX9R8/diIu42Ph13hHDPpZ7Ugvt3LziwL0gQSq9ENghNpxGrT1UFbHZW4mQjxBsqDohP1aWk64ZR+WQLsX60Ekh/7CSe3Yv+9Lv0msM69Sit6fy9O4Bl43r9laZTmFegVsn2p4ncYOSEBWVikr8IqqO68bExFV0TopXdngh0ObgzTOQzkJGdqvLPot00ankbWN+wOyRYQ2SJ+QtHSYXcP9PeEWwjGnCiAdCmeA4VbcH5JoXgsML1VdDn0GcxX/M4gC7HVr9hgR08pBs03gUFHL1LdkNlYanfTe/5Gi3zOx8vH9fCUPrrnnb+KBXGthQrYhB5Ng0QdhdqUhCcW4c7gm97MFXd4X+jZOKWvAsSaEf5nYEZ6PE1O9I+m7CmuKNbVS7UcHicwGovC0ZvI/rrAhOIc7VC8/QLy0UBQB+IGmka2cU03jlDgTWMyzmRWAx4tzQMxhdTJbnsctR6/7i+GElqIFPotnJcwPHmcNLt8i7EF66BoeEsan29IqQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 462ce6de-3fdd-4008-e2b3-08d88ba682cd
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 09:44:15.0568
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tGKuGSuPcxE++h1OvAa5Ch/Cm1JIdqedOEMqU/kpa0a5kD/KwFIR05/+TsXw+RxetC3glHqOiWtkyzU22GlPEw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2922
X-OriginatorOrg: citrix.com

On Tue, Nov 10, 2020 at 02:51:11PM +0100, Jan Beulich wrote:
> On 10.11.2020 12:06, Roger Pau Monné wrote:
> > On Wed, Oct 28, 2020 at 10:22:58AM +0100, Jan Beulich wrote:
> >> @@ -1132,7 +1132,13 @@ void p2m_pt_init(struct p2m_domain *p2m)
> >>      p2m->recalc = do_recalc;
> >>      p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
> >>      p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
> >> -    p2m->write_p2m_entry = write_p2m_entry;
> >> +
> >> +    /* Still too early to use paging_mode_hap(). */
> >> +    if ( hap_enabled(p2m->domain) )
> >> +        hap_p2m_init(p2m);
> >> +    else if ( IS_ENABLED(CONFIG_SHADOW_PAGING) )
> >> +        shadow_p2m_init(p2m);
> > 
> > There's already some logic in p2m_initialise that checks for
> > hap_enabled for EPT specific initialization. Do you think you could
> > move this there so that it's more contained?
> > 
> > I think having the initialization condition sprinkled all over the
> > different functions makes the logic more complicated to follow.
> > 
> > Also, should hap_p2m_init be limited to HAP and PT, as opposed to HAP
> > and EPT which doesn't use the helper AFAICT?
> 
> It is limited to HAP and PT - we're in p2m_pt_init() here. This is
> also why I don't want to move it to e.g. p2m_initialise(), as that
> would be the wrong layer.

All those sub-initializations make the code slightly harder to follow,
but I guess it's fine if we want to keep it layered in this way.
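
The layering being discussed can be sketched as function-pointer dispatch: the generic PT init installs the common hooks, then hands the write hook off to a paging-mode specific sub-init, mirroring the hap_enabled()/shadow split in the quoted hunk. A minimal self-contained C sketch (the names and types are simplified stand-ins, not Xen's actual API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for Xen's p2m_domain; not the real structure. */
struct p2m {
    bool hap;                                 /* hardware-assisted paging? */
    int (*write_p2m_entry)(struct p2m *p2m);  /* mode-specific hook        */
};

static int hap_write_entry(struct p2m *p2m)    { (void)p2m; return 1; }
static int shadow_write_entry(struct p2m *p2m) { (void)p2m; return 2; }

/* Mode-specific sub-inits install only their own hook... */
static void hap_p2m_init(struct p2m *p2m)
{
    p2m->write_p2m_entry = hap_write_entry;
}

static void shadow_p2m_init(struct p2m *p2m)
{
    p2m->write_p2m_entry = shadow_write_entry;
}

/* ...while the generic PT init dispatches to the right layer. */
static void p2m_pt_init(struct p2m *p2m)
{
    if (p2m->hap)
        hap_p2m_init(p2m);
    else
        shadow_p2m_init(p2m);
}
```

The point of the layering is that p2m_pt_init() only decides *which* sub-init runs; the sub-inits own the hook assignment, so the condition is not repeated at higher layers.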

> > Maybe it would be clearer to unify shadow_write_p2m_entry with
> > hap_write_p2m_entry and call it p2m_pt_write_p2m_entry to match the
> > rest of the p2m PT helpers?
> 
> This looks to go along the lines of what I'd put up as a post-
> commit-message remark in "x86/p2m: collapse the two
> ->write_p2m_entry() hooks". The nested handler is perhaps the
> bigger problem with such merging, plus it would feel a little like
> a layering violation (which is why I did put up the question
> instead of doing it right away).

Right, we could look into it in further patches:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 09:55:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 09:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29688.59291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKBp-0001Be-9D; Wed, 18 Nov 2020 09:55:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29688.59291; Wed, 18 Nov 2020 09:55:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKBp-0001BX-5p; Wed, 18 Nov 2020 09:55:49 +0000
Received: by outflank-mailman (input) for mailman id 29688;
 Wed, 18 Nov 2020 09:55:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fGLJ=EY=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1kfKBo-0001BS-Ig
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:55:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1505e00b-b29c-43ec-b906-92132c9503e8;
 Wed, 18 Nov 2020 09:55:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8DCB7ABDE;
 Wed, 18 Nov 2020 09:55:46 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=fGLJ=EY=suse.de=colyli@srs-us1.protection.inumbo.net>)
	id 1kfKBo-0001BS-Ig
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 09:55:48 +0000
X-Inumbo-ID: 1505e00b-b29c-43ec-b906-92132c9503e8
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1505e00b-b29c-43ec-b906-92132c9503e8;
	Wed, 18 Nov 2020 09:55:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8DCB7ABDE;
	Wed, 18 Nov 2020 09:55:46 +0000 (UTC)
To: Greg KH <gregkh@linuxfoundation.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
 Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-20-hch@lst.de>
 <e7f826fd-cb9c-b4ab-fae8-dad398c14eed@suse.de> <X7TlIzxJPfa2p+Da@kroah.com>
From: Coly Li <colyli@suse.de>
Subject: Re: [PATCH 19/20] bcache: remove a superflous lookup_bdev all
Message-ID: <24c818c2-6aba-098c-0c73-0a5081175c06@suse.de>
Date: Wed, 18 Nov 2020 17:55:38 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <X7TlIzxJPfa2p+Da@kroah.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/18/20 5:10 PM, Greg KH wrote:
> On Wed, Nov 18, 2020 at 04:54:51PM +0800, Coly Li wrote:
>> On 11/18/20 4:47 PM, Christoph Hellwig wrote:
>>> Don't bother to call lookup_bdev for just a slightly different error
>>> message without any functional change.
>>>
>>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>>
>> Hi Christoph,
>>
>> NACK. The error message being removed here is frequently triggered and
>> observed, and distinguishing a busy device from an already registered
>> device is important (the first is a critical error, the second is not).
>>
>> Removing this error message would be a functional regression.
> 
> What normal operation causes this error message to be emitted?  And what
> can a user do with it?

When there was a bug and the caching or backing device was not
unregistered successfully, people could see "device busy"; if instead the
device was being registered again, they would see "already registered".
Without the distinct messages, people may think the device is always
busy when in fact it isn't.
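
The distinction can be illustrated with a small sketch (a hypothetical helper, not the actual bcache code): the two failure cases call for different messages because only one of them indicates a real problem.

```c
#include <string.h>

/* Hypothetical sketch of the two register-failure cases discussed
 * above; not the actual bcache code. */
enum reg_result { REG_OK, REG_BUSY, REG_ALREADY };

static const char *register_message(enum reg_result r)
{
    switch (r) {
    case REG_BUSY:
        /* Critical: the device is held open elsewhere and
         * registration genuinely failed. */
        return "device busy";
    case REG_ALREADY:
        /* Benign: a duplicate register attempt; nothing is wrong. */
        return "device already registered";
    default:
        return "registered";
    }
}
```

Collapsing both cases into one message would make the benign duplicate-register case look like the critical busy case, which is the regression being objected to.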

The motivation of the patch is fine with me, but we need to keep the
logic consistent; otherwise we will soon see similar bug reports from
bcache users about bogus warnings in dmesg.

Coly Li


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 10:00:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 10:00:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29694.59303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKGS-0002FS-Rq; Wed, 18 Nov 2020 10:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29694.59303; Wed, 18 Nov 2020 10:00:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKGS-0002FL-Om; Wed, 18 Nov 2020 10:00:36 +0000
Received: by outflank-mailman (input) for mailman id 29694;
 Wed, 18 Nov 2020 10:00:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfKGR-0002FG-8Z
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 10:00:35 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60d9740a-35c7-484d-9f5c-46ec809fcdbf;
 Wed, 18 Nov 2020 10:00:33 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kfKGR-0002FG-8Z
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 10:00:35 +0000
X-Inumbo-ID: 60d9740a-35c7-484d-9f5c-46ec809fcdbf
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 60d9740a-35c7-484d-9f5c-46ec809fcdbf;
	Wed, 18 Nov 2020 10:00:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605693633;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=gxlA9QrutNsxyLA42SCeHR37Ba32Z0+s/J5ZFSr+UwQ=;
  b=hcvQ9kKdXo6eaz3nTN7/nMGaCUju0ga0jfbGIJL8qEinJx+EqMi/EqJu
   6tcda6P5K9hDv+jG9szRYbu+9mDNVeRRZI7aU2R0y+SDugkEOfQEzugUy
   w4pLukj3y82IFIL99zrilFMvi3oIFV6lhA5dRxB6L8wr1JnVtRgbkR7wC
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: N40f+M0a68KvLHsb1kObzbG6XV4Y2vkW+MlNxAA70/U7kmO6lhqwJTDLG+R2t1Nntobr7GbWII
 jbJJc20jZZhGGdvoJVq/kKUbtIP2C8kewAnQUNZtnJkl8EUFnYa8lc7ISJ/koZJDz7CJmzti6X
 XPFMBYCPu2dkuPhrMrWNl+5+5lymHEqhjzwURc+/YjCqkEx9/aKAs2yJYGmYADfT1Ye99U4EG4
 cT15XAc42EUWEevcDBomeyFGcHpgCQT4uRToZ/yUZ82Ztq03152IcbfClanfsVUwNw6E5QkjTp
 Eiw=
X-SBRS: None
X-MesageID: 31755251
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31755251"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gCIJY4sk+KUz3bdEuUk97Qeo50zU7NZO4d8AuL46H6FBFJ8XGv2v622lR4aewcQlgVlu1f2NqkHLy44j6WC3VQDkDj0RNFar9055x5DxORds67fhoyQpQuVyPn4ksw1utyFpvgUBmDpNrOGbu/C+oYzZ2cMiWR7faqq+KwpfCc+cnFv/cF/LwFIiAD0NO4SDLVoYq+V2F3Q0l8wYtGJVtazeFaEPHmOWoJUOwJ3EX/71iMJbyjWG+Y3fH62Isqpip6d4EPHsYDkQkEIiZAQWf43vopjNu/yRxEYjt8UYg1Y+OaLAzBOivy/mlaTTsQosP2o0aofmUUMVjow8eFlh5Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gkOofeLWJnU1vVNmEllAyVXQXCX4Pxltzv4G56p6x0o=;
 b=atJ9+7UQr9MQWMzlNJTfFutg5JnkHN5qtNBygS7aaC5aJN46fnP7LkaJUbFTiUbNE4I7nEmbXU3qrMkUIsJOs14hX7kG/WXizAD6AwJbb6CHgJj1Z2WDCDALnNuNeoVsWVuL3i++2CXJsGX2kVuA1RF4pxdVpCbH3fBSlTDuMkccqx8VLdRQgHJ6Ow2q1SSKn1R+wJXd9A9CT8kVzewdQP+yCxLWi3N8IK3T0tHURVS+y5wWwDwzwnCBAyXAhHz1iHN3pQBTmc8h5cSBH2DM8CNpkjjLCyrcxSd6ab1dWdvLLu/aObRFT0FKIeOxzYOrWpPSP5KOWPELF46/frKxiQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gkOofeLWJnU1vVNmEllAyVXQXCX4Pxltzv4G56p6x0o=;
 b=cOBQo/VtiF6YTXCoBqnT2VDjpvjAjRAp5ZWrDCQgM045UcAFJqoEubuXLwB2nl2Kur1v3ZOkpLkAsUVA/KCwYhBPJwWa2RoVSjmlJuo89eKIW62zNr00YPpNz+dwk3r81NAtT3oShUArKGFffTm/CtFmlHu3TQpNwMccXVtdWM8=
Date: Wed, 18 Nov 2020 11:00:25 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201118092425.GC1085@antioche.eu.org>
X-ClientProxiedBy: MRXP264CA0024.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 20f0d76b-b283-4380-589e-08d88ba8c85b
X-MS-TrafficTypeDiagnostic: DM6PR03MB5065:
X-Microsoft-Antispam-PRVS: <DM6PR03MB5065E902A79BB44F5588256B8FE10@DM6PR03MB5065.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: /Gt2Ei7Qz90Y1+VJihlmrWLhYoGPOCkZxN94X222XfhFJ+TiN12OnapV5WM3MzJC3nNetxO8VYjZmKf6aiaFuLYl6R64/CtoAOD5tES5Z5ajZsSVdUyUdkxiY+7YIm+sAEMBv3CD9jzHWWrZf1UCq3y4sR53FLeAmolO/7B76/PfsOCrz0yuxiJyq2Iv/EVsxkG3s3xhO4vXYP2qiEDVscj+A3KiN3Q8MZqR45xQC8S3PjDi+n4WaEGS9JWzOmcYrMcCqDaNIdYzMmtXp7p3PYnaJMY71Z3R5ON2tt7Hb9Ox7U8IY3YDu3gCIQLRFRkKgibpnTQVwwfqwW5DnAIhHnD7unPvnQ5wumFwtdrhPMxivHegNsAXXQe5u+FfYYkHwYtaW1Gm2B2h7xgnHrt3qA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(366004)(136003)(396003)(39850400004)(376002)(346002)(33716001)(6496006)(4326008)(966005)(26005)(6486002)(85182001)(956004)(8936002)(8676002)(1076003)(66476007)(66946007)(6916009)(66556008)(478600001)(316002)(16526019)(2906002)(6666004)(186003)(5660300002)(9686003)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: uED3n24G0RmCfxubYi1YGEJucgCLOMTHm19pqrc0Rn+WpsPAO2picLDuJiCNi4/3UUekW3L+vZv35cp7CgcnD9tYs8faMmHA/MaQCCVwUue6DSxenju6jtobwxDNHVNHmSq4XEiUrC/MrilKwEwOlh3oA7a5juJJC/K9iGn0GFuZK74XRlSmhMD+MSE0Nqpib0SfmfAsz8ocsQ8Ypx7UE+XyPCODEFStHpH0hnigajSicB1uQRghlW9mW9iQzRHz5IHD5612ZN648WDm4oOC4jxFa5koZCd3P8/7isEr0IC501X2KWYE6eb8vLOsQMXy7dWp+YPt8/L2Y7oUg+eQUzu/7fNi1+ABt3eeMvjxPdTCMCu5QgYuzGzzz7Gtg9niIp3FPA0LYF/gW4b3Zcn9n8v62J46xjaOM9Agw7AxBYv3AjUWQNtTMQnSUnRjFee+B4WUNHwgsAeRJtMlvLtjKKHnpHQQHohAy8MQsZGCOrG7TfB5kXS6JptQIRiG/ZiP2NiIM5Adm1N9E3UJhnSAOq9LwzmRLdRRes0mgppPwT58UJXCMxVDXMYCWx81pKY0VGpSdGVGz+bh1YcbQF+pkLYIQKgCnLZtKMf5+HTGqNoaAEvwMnGcVAs770WWI4Tfp5PcIHrr9eIGKMmE8LrY9DdbnIN9D+eHvRdGxTMPRz7HlC0xBOlr9GOvaa49txsfj65fcPssp/tdWZxj07fGJLi07jdA/4fACHl5RV2gANZZvCw/1w7vIVH699RnM05T0r9QXldmpFyU8WvDmuweUD45B0BWU3qrLqVBHdy7Ffh+PmTOBVUvKF5Bsd5qbMLBad/6uNLdoGfGv/40z9afu7BY+D5c/kld3AtTOgIREcd4gNzzfhdFnMoxQoscKy5K3smpVxA+zXzVTIa/lPLVhw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 20f0d76b-b283-4380-589e-08d88ba8c85b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 10:00:30.7052
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hJHhzOCZDLtbKgFddl5VDXVKHi9lkQ7ch4ZAJAKeSHHhxwf25SWIdqZD8cZ+W1NCK94U0ec+9NCcKBKEoGMthw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5065
X-OriginatorOrg: citrix.com

On Wed, Nov 18, 2020 at 10:24:25AM +0100, Manuel Bouyer wrote:
> On Wed, Nov 18, 2020 at 09:57:38AM +0100, Roger Pau Monné wrote:
> > On Tue, Nov 17, 2020 at 05:40:33PM +0100, Manuel Bouyer wrote:
> > > On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> > > > [...]
> > > > 
> > > > I have attached a patch below that will dump the vIO-APIC info as part
> > > > of the 'i' debug key output, can you paste the whole output of the 'i'
> > > > debug key when the system stalls?
> > > 
> > > see attached file. Note that the kernel did unstall while 'i' output was
> > > being printed, so it is mixed with some NetBSD kernel output.
> > > The idt entry of the 'ioapic2 pin2' interrupt is 103 on CPU 0.
> > > 
> > > I also put the whole sequence at
> > > http://www-soc.lip6.fr/~bouyer/xen-log3.txt
> > 
> > On one of the instances the pin shows up as masked, but I'm not sure
> > if that's relevant since later it shows up as unmasked. Might just be
> > part of how NetBSD handles such interrupts.
> 
> Yes, NetBSD can mask an interrupt source if the interrupts needs to be delayed.
> It will be unmasked once the interrupt has been handled.

Yes, I think that's roughly the same model that FreeBSD uses for
level IO-APIC interrupts: mask it until the handlers have been run.
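
That model can be sketched as a tiny state machine (purely illustrative; not Xen or NetBSD code): on a level-triggered interrupt the pin is masked, the handler runs, and only then is the pin unmasked; if the line is still asserted at unmask time, the interrupt must be delivered again.

```c
#include <stdbool.h>

/* Illustrative state machine for mask-until-handled delivery of
 * level-triggered IO-APIC interrupts; not actual Xen/NetBSD code. */
struct pin {
    bool masked;
    bool line_asserted;   /* device still asserting the line      */
    int  delivered;       /* interrupts delivered to the OS so far */
};

static void deliver(struct pin *p)
{
    if (!p->masked && p->line_asserted) {
        p->masked = true;     /* OS masks until handlers have run */
        p->delivered++;
    }
}

static void handle_and_unmask(struct pin *p, bool device_done)
{
    p->line_asserted = !device_done;  /* handler may clear the source */
    p->masked = false;                /* unmask after handling/EOI    */
    deliver(p);                       /* re-deliver if still pending  */
}
```

The suspected failure mode discussed in this thread corresponds to the last step: an unmask write being missed, or the pending line not being re-delivered after the unmask.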

> Would it be possible that Xen misses an unmask write, or fails to
> call the vector if the interrupt is again pending at the time of the
> unmask ?

Well, it should work properly, but we cannot rule anything out.

> > [...]
> > On a maybe unrelated question, how do you setup the event channel
> > callback, is it using HVM_PARAM_CALLBACK_IRQ and
> > HVM_PARAM_CALLBACK_TYPE_VECTOR?
> 
> Yes, the code is at
> https://github.com/NetBSD/src/blob/f9a54eaecfb47bce597f72f6cae8861f4d486eb4/sys/arch/xen/xen/hypervisor.c#L457
> 
> > 
> > Are you EOI'ing such vector on the local APIC when servicing the
> > interrupt?
> 
> I think it's OK. the code is at
> https://github.com/NetBSD/src/blob/f9a54eaecfb47bce597f72f6cae8861f4d486eb4/sys/arch/amd64/amd64/vector.S#L770

Yes, it's fine, as you also have support for the new per-CPU vector
callback, which does require an EOI.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 10:15:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 10:15:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29703.59318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKUU-0003NR-7W; Wed, 18 Nov 2020 10:15:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29703.59318; Wed, 18 Nov 2020 10:15:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKUU-0003NK-33; Wed, 18 Nov 2020 10:15:06 +0000
Received: by outflank-mailman (input) for mailman id 29703;
 Wed, 18 Nov 2020 10:15:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QZvP=EY=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfKUS-0003NF-77
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 10:15:04 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c482435-d26c-469e-b2b7-24c7d7c93c83;
 Wed, 18 Nov 2020 10:15:02 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AIAEvan005478
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 18 Nov 2020 11:14:58 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 238212E9CC6; Wed, 18 Nov 2020 11:14:52 +0100 (MET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=QZvP=EY=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
	id 1kfKUS-0003NF-77
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 10:15:04 +0000
X-Inumbo-ID: 3c482435-d26c-469e-b2b7-24c7d7c93c83
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3c482435-d26c-469e-b2b7-24c7d7c93c83;
	Wed, 18 Nov 2020 10:15:02 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
	by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id 0AIAEvan005478
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
	Wed, 18 Nov 2020 11:14:58 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
	id 238212E9CC6; Wed, 18 Nov 2020 11:14:52 +0100 (MET)
Date: Wed, 18 Nov 2020 11:14:52 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
        Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118101452.GA1454@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <8039a29c-4058-ab6e-56ef-d1383deb7e38@suse.com>
 <20201118092819.GE1085@antioche.eu.org>
 <6ad38151-d218-03c4-8085-9eff35bd63ff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6ad38151-d218-03c4-8085-9eff35bd63ff@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 18 Nov 2020 11:14:58 +0100 (MET)

On Wed, Nov 18, 2020 at 10:43:27AM +0100, Jan Beulich wrote:
> On 18.11.2020 10:28, Manuel Bouyer wrote:
> > On Wed, Nov 18, 2020 at 10:16:17AM +0100, Jan Beulich wrote:
> >> On 17.11.2020 17:40, Manuel Bouyer wrote:
> >>> On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> >>>> [...]
> >>>>
> >>>> I have attached a patch below that will dump the vIO-APIC info as part
> >>>> of the 'i' debug key output, can you paste the whole output of the 'i'
> >>>> debug key when the system stalls?
> >>>
> >>> see attached file. Note that the kernel did unstall while 'i' output was
> >>> being printed, so it is mixed with some NetBSD kernel output.
> >>
> >> Could you try to run Xen's serial port without use of any IRQ
> >> (i.e. in "polling" mode), in an attempt to avoid this unstalling
> >> (which is likely to render the resulting output at least in part
> >> meaningless)?
> > 
> > It there a boot line option for that ?
> 
> Yes, com<N>= has a field for this:
> 
> ### com1
> ### com2
> > `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`
> 
> with
> 
> * `<irq>` is the IRQ number to use, or `0` to use the UART in poll
>   mode only, or `msi` to set up a Message Signaled Interrupt.

Thanks.
This marginally changes the boot behavior (the kernel hangs a little
earlier), but switching the console input to Xen is enough to get some
interrupts, and hitting 'i' again causes the boot to complete.
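
For reference, a Xen command line using the quoted syntax to run com1 in poll mode might look like the following; the baud rate, DPS field, and I/O base are illustrative values, and the trailing `0` is the IRQ field that selects poll mode:

```
com1=115200,8n1,0x3f8,0 console=com1
```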

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 10:46:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 10:46:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29714.59336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKz1-0006KH-OZ; Wed, 18 Nov 2020 10:46:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29714.59336; Wed, 18 Nov 2020 10:46:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfKz1-0006KA-Ky; Wed, 18 Nov 2020 10:46:39 +0000
Received: by outflank-mailman (input) for mailman id 29714;
 Wed, 18 Nov 2020 10:46:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pym1=EY=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1kfKyz-0006K5-Ca
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 10:46:37 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a10c9cb-ac2b-41cd-94c6-a5759a1173b2;
 Wed, 18 Nov 2020 10:46:36 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pym1=EY=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
	id 1kfKyz-0006K5-Ca
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 10:46:37 +0000
X-Inumbo-ID: 0a10c9cb-ac2b-41cd-94c6-a5759a1173b2
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0a10c9cb-ac2b-41cd-94c6-a5759a1173b2;
	Wed, 18 Nov 2020 10:46:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605696396;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ancuV2AV+elyPGV2VMZf6sKEM6gH0M0N47Gd+4HT978=;
  b=UF7cE/D7brA1SRI+lawcy9N55RFFYKDf5S601a8joc+nMz4biOVDtbV0
   sCXY61NIp0kwX/pAQRVyR2J8p8Q1VCGFk/Et0olj813jRloUZhuZhLBU0
   uxQCNovCDUa6hjW6fQW5a/riv8EJ0CoDsHLG0zNL79B1sP2xJnRsq+FoC
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 1tWbEBLXUmBVlwxZZLLlvq82I0Zrbqa3Dih2OPssJw0McBDKgOg6BkwcOtxbT2uIkEZSyMZedK
 apGoUTZ/rkup4Kyi1JMu4LetNiPCjmeIyLerWfD6zHmEdWv4PpYnx54rUtCRy+8TcdlZhVW5Le
 mITmaBpSK9/2s34tv0aok/6M9G2FfODpI74Ospvjq5o8VZTy7gaD/bJbgPbVy7XI0omU3hKTWl
 A2c2gyv5u4MfPg9CaUZGRiDT/wXNzMzh2IvhShgSyOT15h2ZIyrOsyQnC4XzqlZT1/OmwH50ZW
 iDI=
X-SBRS: None
X-MesageID: 31758070
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,486,1596513600"; 
   d="scan'208";a="31758070"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MqjJToQqHsWivQTMYkSPujSMOGJaLRfZlICwy1MOPSByOaAqbJrw5Z1gPKStBcbuB3qbypxzEufAKEWLgBxd5MxJIlh2TLttDqjLt+IVvNOYo2Oe9g+Q52O7PHNEPRlLHIM7fOTAYdOVoTI8R8MZRj0HFeG4LQp5Aypb7blzotf+DVP58ckiNTO4tja+XTEobD6F+r7q2hnBi4/5KefPK12o9zV2CJGqtZRKQWhRbSioKV3XDCxjxZiSPsLoW1YiUyEhx/akM+n/aTmiC60EG4VNlUXdCQRJTCn/iv2e9YNzQdzo6bbXSyUSr50O8obc5fOg9Zg0OtSFPseXLxHwxg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lTTHZPlnsZ/uKtKoOoXfEjYbBdMzIjdnd9G2ZQNTfqU=;
 b=J6PFVrG+OxhCCZlqbCiZsKp4W4LACudK9zPngpEYRRvEOAaZd6E4zNuwTM/mxgRNyQPT9p9fLg6AfJOq/m/3VqMQyxUMPLiKtE0OmEruU2olOuEuH/57B72s94hsdby20A6AotuPTbxASJfj4dq1mPVXuB5r1g4JdDkpME6AIXhw0oax4nEzvZCWYSyxyWz9bcgpV8Rain1OqtP4vTw4c8PgHwXXUrscKzrdnCQnuUlGi9PJ/ioXvkY2isETtwZ+i/5WYJB2fJovjjhD4qOhf3rWE8x41GIchvu8yv0CeV2LVCRtqcgzayuxPfA6YsWQzaMGLyOOOztrip+yCJ59Yw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lTTHZPlnsZ/uKtKoOoXfEjYbBdMzIjdnd9G2ZQNTfqU=;
 b=ZBVR2ypkhVAuCKo4dTfmg68fyUtJWUdSjPpWQ5cUD2EW/2pZaIVXLBT8MyxB2520tBZ4tx3ci3wq/p0ElRkW3W+f7E64XuZGQEH/O4I+LsBrmB5M0bhcwXZBF5vrqphhQiwkFbLmzFAwtmQxwOpn3cbR/5mj4j6+UQOeeXnhgV8=
From: Christian Lindig <christian.lindig@citrix.com>
To: Edwin Torok <edvin.torok@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, David Scott <dave@recoil.org>
Subject: Re: [PATCH v1 0/4] tools/ocaml/libs/xc: domid control at domain
 creation time
Thread-Topic: [PATCH v1 0/4] tools/ocaml/libs/xc: domid control at domain
 creation time
Thread-Index: AQHWvQ7kpc/nzF74N0aOmOwyv2vIBKnNtIjn
Date: Wed, 18 Nov 2020 10:46:03 +0000
Message-ID: <DS7PR03MB565534B0FBF8C837C5DCDB49F6E10@DS7PR03MB5655.namprd03.prod.outlook.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
In-Reply-To: <cover.1605636799.git.edvin.torok@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Nov 2020 10:46:03.9526
 (UTC)
X-OriginatorOrg: citrix.com


I like the improvements to the build process, but I wonder whether they should be mixed with functional code changes: mixing the two can make it harder to identify patches when they are backported. That is only a cosmetic concern, though. The code change looks good to me, too. I support moving to Dune for building the OCaml part in the future.

Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Edwin Török <edvin.torok@citrix.com>
Sent: 17 November 2020 18:24
To: xen-devel@lists.xenproject.org
Cc: Edwin Torok; Doug Goldstein; Andrew Cooper; George Dunlap; Ian Jackson; Jan Beulich; Julien Grall; Stefano Stabellini; Wei Liu; Christian Lindig; David Scott
Subject: [PATCH v1 0/4] tools/ocaml/libs/xc: domid control at domain creation time

The xl toolstack allows some control over the domid at VM creation time;
allow xenopsd similar control by exposing the appropriate domid field in the OCaml xenctrl bindings.
A new API function is introduced to preserve backwards compatibility without merge-ordering
requirements between the Xen and xenopsd patches: Xen can merge the patch, xenopsd will keep
building with the old function, and a new version of xenopsd will start using the new function.

I've also included some build system fixes to allow me to test the build
in an upstream build environment:
```
cd automation/build
podman build -t registry.gitlab.com/xen-project/xen/ubuntu:focal -f ubuntu/focal.dockerfile ubuntu
DOCKER_CMD=podman CONTAINER_NO_PULL=1 CONTAINER=registry.gitlab.com/xen-project/xen/ubuntu:focal automation/scripts/containerize make build-tools-oxenstored
```

It would be good if someone could test whether containerize still works on non-SELinux systems now, or whether we need more detection logic in the script.

This works around bugs in the OCaml makefiles that result in "inconsistent assumptions" errors by doing a 'make clean' before building the OCaml files every time. This is inefficient, but it works.
Long term it would be beneficial to switch to Dune as the build system,
which can do correct incremental builds with minimal configuration.
I'll send a separate patch series for that.

Edwin Török (4):
  automation/scripts/containerize: fix DOCKER_CMD=podman
  automation/: add Ubuntu:focal container
  Makefile: add build-tools-oxenstored
  tools/ocaml/libs/xc: backward compatible domid control at domain
    creation time

 Makefile                                 |  6 +++
 automation/build/ubuntu/focal.dockerfile | 50 ++++++++++++++++++++++++
 automation/scripts/containerize          |  7 ++--
 tools/ocaml/Makefile                     |  8 ++++
 tools/ocaml/libs/xc/xenctrl.ml           |  3 ++
 tools/ocaml/libs/xc/xenctrl.mli          |  2 +
 tools/ocaml/libs/xc/xenctrl_stubs.c      |  9 ++++-
 7 files changed, 80 insertions(+), 5 deletions(-)
 create mode 100644 automation/build/ubuntu/focal.dockerfile

--
2.18.4



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 11:17:39 2020
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monné <roger.pau@citrix.com>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <8039a29c-4058-ab6e-56ef-d1383deb7e38@suse.com>
 <20201118092819.GE1085@antioche.eu.org>
 <6ad38151-d218-03c4-8085-9eff35bd63ff@suse.com>
 <20201118101452.GA1454@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f07aa93c-4765-8c78-2bc7-f83ec7be5ae7@suse.com>
Date: Wed, 18 Nov 2020 12:17:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201118101452.GA1454@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 11:14, Manuel Bouyer wrote:
> On Wed, Nov 18, 2020 at 10:43:27AM +0100, Jan Beulich wrote:
>> On 18.11.2020 10:28, Manuel Bouyer wrote:
>>> On Wed, Nov 18, 2020 at 10:16:17AM +0100, Jan Beulich wrote:
>>>> On 17.11.2020 17:40, Manuel Bouyer wrote:
>>>>> On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
>>>>>> [...]
>>>>>>
>>>>>> I have attached a patch below that will dump the vIO-APIC info as part
>>>>>> of the 'i' debug key output, can you paste the whole output of the 'i'
>>>>>> debug key when the system stalls?
>>>>>
>>>>> see attached file. Note that the kernel did unstall while 'i' output was
>>>>> being printed, so it is mixed with some NetBSD kernel output.
>>>>
>>>> Could you try to run Xen's serial port without use of any IRQ
>>>> (i.e. in "polling" mode), in an attempt to avoid this unstalling
>>>> (which is likely to render the resulting output at least in part
>>>> meaningless)?
>>>
>>> Is there a boot line option for that?
>>
>> Yes, com<N>= has a field for this:
>>
>> ### com1
>> ### com2
>>> `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`
>>
>> with
>>
>> * `<irq>` is the IRQ number to use, or `0` to use the UART in poll
>>   mode only, or `msi` to set up a Message Signaled Interrupt.
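As an illustration of the quoted syntax, a hypothetical boot line putting the first UART into polling mode (baud rate and io-base are example values) passes `0` in the IRQ field:

```
com1=115200,8n1,0x3f8,0 console=com1
```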
> 
> thanks.
> This marginally changes the boot behavior (the kernel hangs a little bit
> earlier), but switching the console input to Xen is enough to get some
> interrupts, and hitting 'i' again causes the boot to complete.

That's unfortunate for the purposes here, but thanks for trying.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:04:22 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: Leo Krueger <leo.krueger@zal.aero>, Peng Fan <peng.fan@nxp.com>,
	"brucea@xilinx.com" <brucea@xilinx.com>, Cornelia Bruelhart
	<cornelia.bruelhart@zal.aero>, "oleksandr_andrushchenko@epam.com"
	<oleksandr_andrushchenko@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "julien@xen.org" <julien@xen.org>
Subject: Re: Xen data from meta-virtualization layer
Thread-Topic: Xen data from meta-virtualization layer
Thread-Index: AQHWvaLRh1nbYfhrCUytfKaPqimIIA==
Date: Wed, 18 Nov 2020 12:03:25 +0000
Message-ID: <B8CEC628-41EB-44F4-BF72-540C42BFA119@arm.com>
References:
 <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <AM4PR0501MB22274E52A5A3BE912D477D8CE6EA0@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <HE1PR05MB47941E23CE053CE72F18867C8BEA0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>

Hello Stefano,


> On 17 Nov 2020, at 11:53 pm, Stefano Stabellini <stefano.stabellini@xilinx.com> wrote:
> 
> Adding Bertrand, Oleksandr, Julien, and others -- they have more
> recent experience with GICv3 ITS than me and might be able to help.
> I am attaching the device tree Leo sent a few days ago for reference.
> 
> 
> Typically when you can set the ethernet link up and no packets are
> exchanged it is because of a missing interrupt. In this case a missing
> MSI.
> 
> Bertrand, I believe you tried the GIC ITS driver with PCI devices
> recently. It is expected to work correctly with MSIs in Dom0, right?
> 

Yes, we are using the Xen GIC ITS driver and MSI interrupts are working fine in Dom0:

 20:        112          0          0          0   ITS-MSI 1572864 Edge      eth0
 21:        441          0          0          0   ITS-MSI 3670016 Edge      eth1
 22:       4286          0          0          0   ITS-MSI 4194304 Edge      xhci_hcd


Regards,
Rahul

> 
> On Tue, 17 Nov 2020, Leo Krueger wrote:
>> Hi,
>> 
>> I enabled CONFIG_HAS_ITS (what a stupid mistake by me to not set it before...) but then had to add the following node to my device tree
>> 
>> 	gic_lpi_base: syscon@0x80000000 {
>> 		compatible = "gic-lpi-base";
>> 		reg = <0x0 0x80000000 0x0 0x100000>;
>> 		max-gic-redistributors = <2>;
>> 	};
>> 
>> to somehow change something in regard to the ITS and MSI/MSI-X
>> 
>> (XEN) GICv3 initialization:
>> (XEN)       gic_dist_addr=0x00000006000000
>> (XEN)       gic_maintenance_irq=25
>> (XEN)       gic_rdist_stride=0
>> (XEN)       gic_rdist_regions=1
>> (XEN)       redistributor regions:
>> (XEN)         - region 0: 0x00000006040000 - 0x00000006080000
>> (XEN) GICv3: using at most 57344 LPIs on the host.
>> (XEN) GICv3: 288 lines, (IID 0001143b).
>> (XEN) GICv3: Found ITS @0x6020000
>> (XEN) using non-cacheable ITS command queue
>> (XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000
>> 
>> [    0.000000] GICv3: Distributor has no Range Selector support
>> [    0.000000] GICv3: no VLPI support, no direct LPI support
>> [    0.000000] ITS [mem 0x06020000-0x0603ffff]
>> [    0.000000] ITS@0x0000000006020000: allocated 65536 Devices @dc880000 (flat, esz 8, psz 64K, shr 1)
>> [    0.000000] ITS@0x0000000006020000: allocated 32768 Interrupt Collections @dc820000 (flat, esz 2, psz 64K, shr 1)
>> [    0.000000] GIC: using LPI property table @0x00000000dc830000
>> [    0.000000] GICv3: CPU0: found redistributor 0 region 0:0x0000000006040000
>> [    0.000000] CPU0: using LPI pending table @0x00000000dc840000
>> ...
>> [    0.040080] Platform MSI: gic-its domain created
>> [    0.040136] PCI/MSI: /interrupt-controller/gic-its domain created
>> [    0.040181] fsl-mc MSI: /interrupt-controller/gic-its domain created
>> 
>> 
>> Still I am ending up with the "Failed to add - passthrough or MSI/MSI-X might fail!" log messages for some PCI devices, but at least the on-board ethernet ports (fsl_enetc) are initialized.
>> I can set the link up and a link is successfully established.
>> 
>> But (!) I cannot receive or transmit anything (no error message...) and there seem to be no interrupts:
>> 
>> 29:          0   ITS-MSI   1 Edge      gbe0-rxtx0
>> 32:          0   ITS-MSI 8192 Edge      ptp_qoriq
>> 
>> (from /proc/interrupts).
>> 
>> Any idea on this one? I keep digging and checking whether my device tree needs some additional fixes.
>> 
>> Kind regards,
>> Leo
>> 
>> --
>> Leo Krüger, M.Sc.
>> Senior Systems Engineer Distributed Systems
>> Intelligent Digital Cabin
>> 
>> ZAL Zentrum für Angewandte Luftfahrtforschung GmbH
>> Hein-Saß-Weg 22
>> 21129 Hamburg
>> 
>> +49 (0) 40 248 595-154
>> 
>> zal.aero | twitter.com/ZALTechCenter | facebook.com/ZALTechCenter
>> 
>> ZAL Zentrum für Angewandte Luftfahrtforschung GmbH
>> Sitz der Gesellschaft / Legal Domicile: Hamburg
>> Registergericht / Registration Court: Amtsgericht Hamburg HRB 110232
>> Vorsitzender des Aufsichtsrates / Chairman of the Supervisory Board: StR Andreas Rieckhof
>> Geschäftsführung / Board of Management: Roland Gerhards
>> 
>> Disclaimer:
>> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have
>> received this mail in error), please notify the sender immediately and destroy this e-mail. Any unauthorised copying,
>> disclosure or distribution of the material in this e-mail is strictly forbidden.
>> 
>>> -----Original Message-----
>>> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> Sent: Tuesday, 17 November 2020 01:59
>>> To: Leo Krueger <leo.krueger@zal.aero>
>>> Cc: Peng Fan <peng.fan@nxp.com>; Stefano Stabellini
>>> <stefano.stabellini@xilinx.com>; brucea@xilinx.com; Cornelia Bruelhart
>>> <cornelia.bruelhart@zal.aero>
>>> Subject: Re: AW: AW: AW: Xen data from meta-virtualization layer
>>> 
>>> Replies inline below
>>> 
>>> 
>>> On Sun, 15 Nov 2020, Leo Krueger wrote:
>>>> Hi Peng, hi Stefano,
>>>> 
>>>> sorry for the long silence…
>>>> 
>>>> I tried the change suggested (and hope I didn't do anything wrong
>>>> while doing so…) on top of XEN 4.13.2 (before, I always tried with
>>>> 4.12 but wanted to give 4.13.2 a try as well) but I do not see any difference,
>>> still the same "unhandled context fault" log entries pop up and I cannot
>>> access my sdcard.
>>>> 
>>>> As it seems to work without respectively disabled iommu, that would be
>>>> fine for me for now. What I am worried about a bit more is PCIe or
>>> MSI/MSIX to be exact.
>>>> 
>>>> Here is the gic-v3 and its node from my device tree:
>>>> 
>>>> interrupt-controller@6000000 {
>>>> 
>>>>         compatible = "arm,gic-v3";
>>>> 
>>>>         #address-cells = <0x2>;
>>>> 
>>>>         #size-cells = <0x2>;
>>>> 
>>>>         ranges;
>>>> 
>>>>         reg = <0x0 0x6000000 0x0 0x10000 0x0 0x6040000 0x0 0x40000>;
>>>> 
>>>>         #interrupt-cells = <0x3>;
>>>> 
>>>>         interrupt-controller;
>>>> 
>>>>         interrupts = <0x1 0x9 0xf08>;
>>>> 
>>>>         phandle = <0x1>;
DQo+Pj4+IA0KPj4+PiANCj4+Pj4gICAgICAgICBnaWMtaXRzQDYwMjAwMDAgew0KPj4+PiANCj4+
Pj4gICAgICAgICAgICAgICAgIGNvbXBhdGlibGUgPSAiYXJtLGdpYy12My1pdHMiOw0KPj4+PiAN
Cj4+Pj4gICAgICAgICAgICAgICAgIG1zaS1jb250cm9sbGVyOw0KPj4+PiANCj4+Pj4gICAgICAg
ICAgICAgICAgIHJlZyA9IDwweDAgMHg2MDIwMDAwIDB4MCAweDIwMDAwPjsNCj4+Pj4gDQo+Pj4+
ICAgICAgICAgICAgICAgICBwaGFuZGxlID0gPDB4ZD47DQo+Pj4+IA0KPj4+PiAgICAgICAgIH07
DQo+Pj4+IA0KPj4+PiB9Ow0KPj4+PiANCj4+Pj4gDQo+Pj4+IA0KPj4+PiBBbmQgaGVyZSBhcmUg
c29tZSBrZXJuZWwgbG9nIGV4Y2VycHRzIHJlbGF0ZWQgdG8gR0lDIHdoZW4gYm9vdGluZw0KPj4+
IHdpdGhvdXQgKCEpIFhFTjoNCj4+Pj4gDQo+Pj4+IA0KPj4+PiANCj4+Pj4gWyAgICAwLjAwMDAw
MF0gR0lDdjM6IEdJQzogVXNpbmcgc3BsaXQgRU9JL0RlYWN0aXZhdGUgbW9kZQ0KPj4+PiANCj4+
Pj4gWyAgICAwLjAwMDAwMF0gR0lDdjM6IERpc3RyaWJ1dG9yIGhhcyBubyBSYW5nZSBTZWxlY3Rv
ciBzdXBwb3J0DQo+Pj4+IA0KPj4+PiBbICAgIDAuMDAwMDAwXSBHSUN2Mzogbm8gVkxQSSBzdXBw
b3J0LCBubyBkaXJlY3QgTFBJIHN1cHBvcnQNCj4+Pj4gDQo+Pj4+IFsgICAgMC4wMDAwMDBdIElU
UyBbbWVtIDB4MDYwMjAwMDAtMHgwNjAzZmZmZl0NCj4+Pj4gDQo+Pj4+IFsgICAgMC4wMDAwMDBd
IElUU0AweDAwMDAwMDAwMDYwMjAwMDA6IGFsbG9jYXRlZCA2NTUzNiBEZXZpY2VzDQo+Pj4+IEAy
MGZiODgwMDAwIChmbGF0LCBlc3ogOCwgcHN6IDY0Sywgc2hyIDApDQo+Pj4+IA0KPj4+PiBbICAg
IDAuMDAwMDAwXSBJVFM6IHVzaW5nIGNhY2hlIGZsdXNoaW5nIGZvciBjbWQgcXVldWUNCj4+Pj4g
DQo+Pj4+IFsgICAgMC4wMDAwMDBdIEdJQzogdXNpbmcgTFBJIHByb3BlcnR5IHRhYmxlIEAweDAw
MDAwMDIwZmI4MzAwMDANCj4+Pj4gDQo+Pj4+IFsgICAgMC4wMDAwMDBdIEdJQ3YzOiBDUFUwOiBm
b3VuZCByZWRpc3RyaWJ1dG9yIDAgcmVnaW9uDQo+Pj4+IDA6MHgwMDAwMDAwMDA2MDQwMDAwDQo+
Pj4+IA0KPj4+PiBbICAgIDAuMDAwMDAwXSBDUFUwOiB1c2luZyBMUEkgcGVuZGluZyB0YWJsZSBA
MHgwMDAwMDAyMGZiODQwMDAwDQo+Pj4+IA0KPj4+PiBbICAgIDAuMDAwMDAwXSBHSUM6IHVzaW5n
IGNhY2hlIGZsdXNoaW5nIGZvciBMUEkgcHJvcGVydHkgdGFibGUNCj4+Pj4gDQo+Pj4+IA0KPj4+
PiANCj4+Pj4gSG93ZXZlciwgd2hlbiBib290aW5nIHdpdGggWEVOLCBvbmx5IHRoZSBmb2xsb3dp
bmcgdGhyZWUgbGluZXMgYXJlDQo+Pj4gY29udGFpbmVkIGluIHRoZSBrZXJuZWwgbG9nOg0KPj4+
PiANCj4+Pj4gDQo+Pj4+IA0KPj4+PiBbICAgIDAuMDAwMDAwXSBHSUN2MzogRGlzdHJpYnV0b3Ig
aGFzIG5vIFJhbmdlIFNlbGVjdG9yIHN1cHBvcnQNCj4+Pj4gDQo+Pj4+IFsgICAgMC4wMDAwMDBd
IEdJQ3YzOiBubyBWTFBJIHN1cHBvcnQsIG5vIGRpcmVjdCBMUEkgc3VwcG9ydA0KPj4+PiANCj4+
Pj4gWyAgICAwLjAwMDAwMF0gR0lDdjM6IENQVTA6IGZvdW5kIHJlZGlzdHJpYnV0b3IgMCByZWdp
b24NCj4+Pj4gMDoweDAwMDAwMDAwMDYwNDAwMDANCj4+PiANCj4+PiAibm8gVkxQSSBzdXBwb3J0
IiBpcyB2ZXJ5IHN1c3BpY2lvdXMsIGl0IGxvb2tzIGxpa2UgRG9tMCBkb2Vzbid0IGZpbmQgYW55
IElUUw0KPj4+IHN1cHBvcnQuDQo+Pj4gDQo+Pj4gQ2FuIHlvdSBkb3VibGUgY2hlY2sgdGhhdCB5
b3UgaGF2ZSB0aGUgSVRTIGRyaXZlciBpbiBYZW4gYnVpbHQtaW4/IFRoYXQgd291bGQNCj4+PiBi
ZSBDT05GSUdfSEFTX0lUUy4gSWYgeW91IGRvICJtYWtlIG1lbnVjb25maWciIGFuZCBlbmFibGUg
IkNvbmZpZ3VyZQ0KPj4+IHN0YW5kYXJkIFhlbiBmZWF0dXJlcyAoZXhwZXJ0IHVzZXJzKSIgeW91
IHNob3VsZCBnZXQgYSBuZXcgb3B0aW9uICJHSUN2Mw0KPj4+IElUUyBNU0kgY29udHJvbGxlciBz
dXBwb3J0IiB1bmRlciAiQXJjaGl0ZWN0dXJlIEZlYXR1cmVzIi4NCj4+PiBNYWtlIHN1cmUgdG8g
ZW5hYmxlIGl0Lg0KPj4+IA0KPj4+IExldCBtZSBrbm93IGlmIHRoYXQgd29ya3MhDQo+IDxkZXZp
Y2V0cmVlLmR0cz4NCg0KUmVnYXJkcywNClJhaHVsDQoNCg==


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:09:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:09:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29769.59410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMGv-00064m-BK; Wed, 18 Nov 2020 12:09:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29769.59410; Wed, 18 Nov 2020 12:09:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMGv-00064f-6q; Wed, 18 Nov 2020 12:09:13 +0000
Received: by outflank-mailman (input) for mailman id 29769;
 Wed, 18 Nov 2020 12:09:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfMGu-00064a-Iu
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:09:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d7db6a5-d020-445b-af43-8a626f7669df;
 Wed, 18 Nov 2020 12:09:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5FA3EAC65;
 Wed, 18 Nov 2020 12:09:10 +0000 (UTC)
X-Inumbo-ID: 8d7db6a5-d020-445b-af43-8a626f7669df
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605701350; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B51N5t6CHg+x5iuz3QWjnqchcxRlQVlx4OBPL4gazGc=;
	b=llr7NXcVyRgqVtQcm2FshdvgsHxfoy2tGdyVXfXiSt0MJ7OqW4xumC9Ym0Jz7+T0/m6P3M
	1rvj6qQ6LlXHJP4PsYW6momNLsdMhAFkQfzdFByTGPHk8XoDErHi9C7OkDL72yXACW+T55
	nykyOs3vDpClCLro2t7QULQhtUt5mKk=
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "committers@xenproject.org" <committers@xenproject.org>,
 xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>
References: <20201109163826.13035-1-jgross@suse.com>
 <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
 <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
 <4cb2e205-49e2-7dc6-9ae6-39e5335d5a66@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4c6009de-148e-fd06-7890-fb9b6d4ff7a8@suse.com>
Date: Wed, 18 Nov 2020 13:09:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <4cb2e205-49e2-7dc6-9ae6-39e5335d5a66@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 09:41, Jürgen Groß wrote:
> On 18.11.20 09:22, Jan Beulich wrote:
>> On 17.11.2020 19:13, Julien Grall wrote:
>>> On 09/11/2020 16:38, Juergen Gross wrote:
>>>> Juergen Gross (3):
>>>>     xen/events: access last_priority and last_vcpu_id together
>>>>     xen/evtchn: rework per event channel lock
>>>>     xen/evtchn: revert 52e1fc47abc3a0123
>>>
>>> While looking at the list of commits, I noticed that the first patch
>>> hasn't been committed. They were all acked/reviewed, so I am a bit
>>> puzzled why this was omitted...
>>>
>>> I nearly missed this, as I was expecting the 3 patches to be committed
>>> together. May I suggest that in the future we reply to the cover letter
>>> and mention which patches are (or are not) committed?
>>>
>>> Regarding patch #1, I will commit it tomorrow unless there are strong
>>> objections against it.
>>
>> Without a clear outline of what would break with the present logic,
>> I had previously indicated I'm not convinced of the change. This
>> isn't a strong objection, no, but I still wouldn't want to see my
>> name associated with it in such a case. Furthermore I clearly view
>> this as not a backporting candidate, while the other two are (as I
>> did previously indicate). Hence the latter two patches wanted
>> re-basing ahead of the first one anyway, to ease the backports.
> 
> Consider an NMI during evtchn_fifo_set_pending() between updating
> last_vcpu_id and last_priority, while on another cpu a concurrent
> evtchn_fifo_set_pending() is being called. On that other cpu
> lock_old_queue() might return a wrong queue as it will read only
> the new last_vcpu_id, but not the new last_priority value.

Which has become possible only as of patch 2 of this series, i.e.
this one really was a prereq without its description making this
clear. Now that it'll go in on top of the other two, there's no
strict need to change the description anymore, though. But I take
from this that I should also queue it up for backporting then.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:09:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:09:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29771.59422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMHF-0006A0-Js; Wed, 18 Nov 2020 12:09:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29771.59422; Wed, 18 Nov 2020 12:09:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMHF-00069t-Gn; Wed, 18 Nov 2020 12:09:33 +0000
Received: by outflank-mailman (input) for mailman id 29771;
 Wed, 18 Nov 2020 12:09:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LUXV=EY=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kfMHD-00069g-Jw
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:09:31 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7c65bfb-658f-45e7-b01d-e03dd5fbf487;
 Wed, 18 Nov 2020 12:09:30 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id 11so2097052ljf.2
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 04:09:30 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id w12sm3389818ljo.67.2020.11.18.04.09.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 18 Nov 2020 04:09:29 -0800 (PST)
X-Inumbo-ID: b7c65bfb-658f-45e7-b01d-e03dd5fbf487
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=zbNp9xUN0gaWpF3LGCgicDWp/VRu0vl+zxACBaRYRYM=;
        b=Lr7Cw9ANWDFnsRZSlnpOwJsZDGilyK2iklhXvlkCs7MTP7IGq1lwsSUsAYfZcRFRYc
         OLi0ZvATkplcapk28aqMfdlXnWgBvRi56+7fJ/hCdmM4KlKChpbOjUhHtrleUGvdf4Y0
         WFgEcxf7IF1wE7AxhCW7hdDXSK+qjTtutW017gOsOl+vIY375yoeodwEXW2cxTM+v2EG
         4A/6DLXHHqp/8Dvx2xglhUPPgmYO+0LZ320BLsU4OTAXO8WYWPIyU/ocRchtj22nZmCv
         nF3kdd9p6KkP1jkzodT1etJpNiQ4j6XG/pwPXgtisuhVCf1yRXAJs3EXdVVD7aUdMxQZ
         HLAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=zbNp9xUN0gaWpF3LGCgicDWp/VRu0vl+zxACBaRYRYM=;
        b=VYZEAJjj3RXrgqkhEk7iDvKZ0j/lxkA6+YLMBIlbXPb2cx9Kn8UFnZz9UdElRBbUvP
         uwv/p6+PkAb0M/tqkGiZ2djNvDzdMu3F0UKA8uev8+TKnARrynzqpWiX79gRlWBhfDhu
         2rvObTVnNKgGXtwWp1Lc6kGoJbhxOk76ez2dt06JBm42WnUBsmFnRo5T7I5S+082oS2y
         CTdO05axIxQXQPwmWejmjyzbrOMRmT0v7dz4ssi9Gol/ylBulkpdyi0PMT+ezouKYaWi
         iyycLZeYrwjiWSiuS0Ry2AwI+b48kV12Sic2TlQBKF7RnpgvJDu6gBx7MQC0I3zYfNlR
         Mtvw==
X-Gm-Message-State: AOAM530LqK5i4MZaAsG6VszPnXfhE6PXI+ncTZXeqMkIp57f9rmq1tj+
	xb6zovXMou7Znd0V67wVPaQ=
X-Google-Smtp-Source: ABdhPJxxD5phpqt+I9oC2LUnEmdBYKa47nu9jFbjg5/QOh1mgsbnTfr0m/nmBqr/ePxzUXN0hhydRw==
X-Received: by 2002:a2e:580d:: with SMTP id m13mr1169660ljb.200.1605701369585;
        Wed, 18 Nov 2020 04:09:29 -0800 (PST)
Subject: Re: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to
 struct domain
From: Oleksandr <olekstysh@gmail.com>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-8-git-send-email-olekstysh@gmail.com>
Message-ID: <79588865-3f28-5436-0763-cb8ee0b87262@gmail.com>
Date: Wed, 18 Nov 2020 14:09:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-8-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


Hi Paul.

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -314,6 +314,8 @@ struct sched_unit {
>   
>   struct evtchn_port_ops;
>   
> +#define MAX_NR_IOREQ_SERVERS 8
> +
>   struct domain
>   {
>       domid_t          domain_id;
> @@ -521,6 +523,21 @@ struct domain
>       /* Argo interdomain communication support */
>       struct argo_domain *argo;
>   #endif
> +
> +#ifdef CONFIG_IOREQ_SERVER
> +    /* Guest page range used for non-default ioreq servers */
> +    struct {
> +        unsigned long base;
> +        unsigned long mask;
> +        unsigned long legacy_mask; /* indexed by HVM param number */
> +    } ioreq_gfn;

I assume the whole ioreq_gfn struct doesn't need to be here. According
to the new requirement to keep the legacy interface x86-specific,
these fields won't be used in common code anymore. I will move the
ioreq_gfn struct back to hvm_domain. Please confirm.


> +
> +    /* Lock protects all other values in the sub-struct and the default */
> +    struct {
> +        spinlock_t              lock;
> +        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
> +    } ioreq_server;
> +#endif
>   };
>   
>   static inline struct page_list_head *page_to_list(

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:14:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:14:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29781.59433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMLt-0007AQ-C1; Wed, 18 Nov 2020 12:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29781.59433; Wed, 18 Nov 2020 12:14:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMLt-0007AJ-9C; Wed, 18 Nov 2020 12:14:21 +0000
Received: by outflank-mailman (input) for mailman id 29781;
 Wed, 18 Nov 2020 12:14:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QZvP=EY=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfMLr-0007AE-GN
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:14:19 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b430c0ea-5d18-4847-af2a-c4992c299d16;
 Wed, 18 Nov 2020 12:14:16 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AICE8Mk014008
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 18 Nov 2020 13:14:09 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id DD7E62E9CA8; Wed, 18 Nov 2020 13:14:03 +0100 (MET)
X-Inumbo-ID: b430c0ea-5d18-4847-af2a-c4992c299d16
Date: Wed, 18 Nov 2020 13:14:03 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118121403.GC3126@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 18 Nov 2020 13:14:10 +0100 (MET)

On Wed, Nov 18, 2020 at 11:00:25AM +0100, Roger Pau Monné wrote:
> On Wed, Nov 18, 2020 at 10:24:25AM +0100, Manuel Bouyer wrote:
> > On Wed, Nov 18, 2020 at 09:57:38AM +0100, Roger Pau Monné wrote:
> > > On Tue, Nov 17, 2020 at 05:40:33PM +0100, Manuel Bouyer wrote:
> > > > On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> > > > > [...]
> > > > > 
> > > > > I have attached a patch below that will dump the vIO-APIC info as part
> > > > > of the 'i' debug key output, can you paste the whole output of the 'i'
> > > > > debug key when the system stalls?
> > > > 
> > > > see attached file. Note that the kernel did unstall while 'i' output was
> > > > being printed, so it is mixed with some NetBSD kernel output.
> > > > The idt entry of the 'ioapic2 pin2' interrupt is 103 on CPU 0.
> > > > 
> > > > I also put the whole sequence at
> > > > http://www-soc.lip6.fr/~bouyer/xen-log3.txt
> > > 
> > > On one of the instances the pin shows up as masked, but I'm not sure
> > > if that's relevant since later it shows up as unmasked. Might just be
> > > part of how NetBSD handles such interrupts.
> > 
> > Yes, NetBSD can mask an interrupt source if the interrupt needs to be delayed.
> > It will be unmasked once the interrupt has been handled.
> 
> Yes, I think that's roughly the same model that FreeBSD uses for
> level IO-APIC interrupts: mask it until the handlers have been run.
> 
> > Would it be possible that Xen misses an unmask write, or fails to
> > call the vector if the interrupt is again pending at the time of the
> > unmask ?
> 
> Well, it should work properly, but we cannot discard anything.

I did some more instrumentation from the NetBSD kernel, including dumping
the ioapic2 pin2 register.

At the time of the command timeout, the register value is 0x0000a067,
which, if I understand it properly, means that there's no interrupt
pending (bit IOAPIC_REDLO_RIRR, 0x00004000, is not set).
>From the NetBSD ddb, I can dump this register multiple times, waiting
several seconds, etc.; it doesn't change.
Now if I call ioapic_dump_raw() from the debugger, which triggers some
XEN printf:
db{0}> call ioapic_dump_raw^M
Register dump of ioapic0^M
[ 203.5489060] 00 08000000 00170011 08000000(XEN) vioapic.c:124:d0v0 apic_mem_re
adl:undefined ioregsel 3
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
 00000000^M
[ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
 00000000^M
[ 203.5489060] 10 00010000 00000000 00010000 00000000 00010000 00000000 00010000 00000000^M
[...]
[ 203.5489060] Register dump of ioapic2^M
[ 203.5489060] 00 0a000000 00070011 0a000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 3
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
 00000000^M
[ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
 00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
 00000000^M
[ 203.5489060] 10 00010000 00000000 00010000 00000000 0000e067 00000000 00010000 00000000^M

then the register switches to 0x0000e067, with the IOAPIC_REDLO_RIRR bit set.
>From here, if I continue from ddb, the dom0 boots.

I can get the same effect by just typing ^A^A^A, so my guess is that it's
not accessing the ioapic's register which changes the IOAPIC_REDLO_RIRR bit,
but the XEN printf itself. Also, from NetBSD, using a dump function which
doesn't access undefined registers - and so doesn't trigger XEN printfs -
doesn't change the IOAPIC_REDLO_RIRR bit either.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:15:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:15:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29785.59446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMN3-0007Hm-NQ; Wed, 18 Nov 2020 12:15:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29785.59446; Wed, 18 Nov 2020 12:15:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMN3-0007Hf-Jy; Wed, 18 Nov 2020 12:15:33 +0000
Received: by outflank-mailman (input) for mailman id 29785;
 Wed, 18 Nov 2020 12:15:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F1t9=EY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kfMN2-0007HX-55
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:15:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e78400ba-b036-45eb-b76d-7c9e5bcd49eb;
 Wed, 18 Nov 2020 12:15:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfMMz-00087w-GX; Wed, 18 Nov 2020 12:15:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfMMz-0005uW-4S; Wed, 18 Nov 2020 12:15:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfMMz-0004vr-3x; Wed, 18 Nov 2020 12:15:29 +0000
X-Inumbo-ID: e78400ba-b036-45eb-b76d-7c9e5bcd49eb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yypXGnLcnizSuwwU9ie/kdWClFtan0EnnuAgR4FwOHM=; b=lJICkbiVOTgeqtLf9ZFDC98hgB
	wttucdMUXqnHIZEg0eZDecf5uw4fS2CXxY3PcBRJuBTvhrOk/qsJvjHbhHjDwOjGbtjVCAt8Qeuo+
	D6DUgIK2e5MSDY0sBOICOAEvnkB5KxHAKdZDxARPaZwGzj2yhC/E7WIhp2vw7Z8g0QWc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156845-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156845: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 12:15:29 +0000

flight 156845 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156845/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-examine      4 memdisk-try-append           fail  like 156711
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 156827
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156827
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156827
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156827
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156827
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156827
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156827
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156827
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156827
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156827
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156827
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156827
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156845  2020-11-18 01:51:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:24:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:24:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29795.59461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMVA-0008L3-Bh; Wed, 18 Nov 2020 12:23:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29795.59461; Wed, 18 Nov 2020 12:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMVA-0008Kw-8U; Wed, 18 Nov 2020 12:23:56 +0000
Received: by outflank-mailman (input) for mailman id 29795;
 Wed, 18 Nov 2020 12:23:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfMV9-0008Kr-HL
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:23:55 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfMV8-0002g8-Vm; Wed, 18 Nov 2020 12:23:55 +0000
Subject: Re: AW: AW: AW: AW: Xen data from meta-virtualization layer
To: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Leo Krueger <leo.krueger@zal.aero>
Cc: Peng Fan <peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>,
 Cornelia Bruelhart <cornelia.bruelhart@zal.aero>,
 oleksandr_andrushchenko@epam.com, xen-devel@lists.xenproject.org,
 Bertrand.Marquis@arm.com
References: <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <HE1PR05MB47941E23CE053CE72F18867C8BEA0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
Date: Wed, 18 Nov 2020 12:23:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 17/11/2020 23:53, Stefano Stabellini wrote:
> Adding Bertrand, Oleksandr, Julien, and others -- they have a more
> recent experience with GICv3 ITS than me and might be able to help.
> I am attaching the device tree Leo sent a few days ago for reference.
> 
> 
> Typically when you can set the ethernet link up and no packets are
> exchanged it is because of a missing interrupt. In this case a missing
> MSI.
> 
> Bertrand, I believe you tried the GIC ITS driver with PCI devices
> recently. It is expected to work correctly with MSIs in Dom0, right?

OSSTest has some hardware (e.g. Thunder-X) where the ITS is required to boot 
Dom0, and I haven't seen any failure on recent Xen. We are testing Xen 4.11 
onwards on Thunder-X.

However, some more work may be necessary for other hardware (e.g. 
workarounds, missing code...). See more below.

> 
> 
> 
> On Tue, 17 Nov 2020, Leo Krueger wrote:
>> Hi,
>>
>> I enabled CONFIG_HAS_ITS (what a stupid mistake by me to not set it before...) but then had to add the following node to my device tree
>>
>> 	gic_lpi_base: syscon@0x80000000 {
>> 		compatible = "gic-lpi-base";

I couldn't find this compatible defined/used in Linux 5.10-rc4. @Leo, 
could you clarify which flavor/version of Linux you are using?

>> 		reg = <0x0 0x80000000 0x0 0x100000>;
>> 		max-gic-redistributors = <2>;
>> 	};
>>
>> to somehow change something in regard to the ITS and MSI/MSI-X
>>
>> (XEN) GICv3 initialization:
>> (XEN)       gic_dist_addr=0x00000006000000
>> (XEN)       gic_maintenance_irq=25
>> (XEN)       gic_rdist_stride=0
>> (XEN)       gic_rdist_regions=1
>> (XEN)       redistributor regions:
>> (XEN)         - region 0: 0x00000006040000 - 0x00000006080000
>> (XEN) GICv3: using at most 57344 LPIs on the host.
>> (XEN) GICv3: 288 lines, (IID 0001143b).
>> (XEN) GICv3: Found ITS @0x6020000
>> (XEN) using non-cacheable ITS command queue
>> (XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000
>>
>> [    0.000000] GICv3: Distributor has no Range Selector support
>> [    0.000000] GICv3: no VLPI support, no direct LPI support
>> [    0.000000] ITS [mem 0x06020000-0x0603ffff]
>> [    0.000000] ITS@0x0000000006020000: allocated 65536 Devices @dc880000 (flat, esz 8, psz 64K, shr 1)
>> [    0.000000] ITS@0x0000000006020000: allocated 32768 Interrupt Collections @dc820000 (flat, esz 2, psz 64K, shr 1)
>> [    0.000000] GIC: using LPI property table @0x00000000dc830000
>> [    0.000000] GICv3: CPU0: found redistributor 0 region 0:0x0000000006040000
>> [    0.000000] CPU0: using LPI pending table @0x00000000dc840000
>> ...
>> [    0.040080] Platform MSI: gic-its domain created
>> [    0.040136] PCI/MSI: /interrupt-controller/gic-its domain created
>> [    0.040181] fsl-mc MSI: /interrupt-controller/gic-its domain created
>>
>>
>> Still I am ending up with the " Failed to add - passthrough or MSI/MSI-X might fail!" log messages for some PCI devices, but at least the on-board ethernet ports (fsl_enetc ) are initialized.
>> I can set the link up and a link is successfully established.

This message is normal. Xen on Arm is not yet aware of PCI devices and 
therefore the hypercalls to add PCI devices will return -EOPNOTSUPP.

However, this is not an issue because the virtual ITS implementation 
will allow dom0 to configure any device.

>>
>> But (!) I cannot receive or transmit anything (no error message...) and there seem to be no interrupts:
>>
>> 29:          0   ITS-MSI   1 Edge      gbe0-rxtx0
>>   32:          0   ITS-MSI 8192 Edge      ptp_qoriq
>>
>> (from /proc/interrupts).
>>
>> Any idea on this one? I keep digging and checking whether my device tree needs some additional fixes.
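As a quick sanity check, those counters can be inspected mechanically. A minimal Python sketch, not part of the original exchange (the helper name is made up; the sample input reuses the two /proc/interrupts lines quoted above):

```python
def zero_its_msi(text):
    """Return IRQ numbers of ITS-MSI lines whose counters are all zero."""
    stuck = []
    for line in text.splitlines():
        parts = line.split()
        if "ITS-MSI" not in parts:
            continue
        idx = parts.index("ITS-MSI")
        irq = parts[0].rstrip(":")
        # Per-CPU counter columns sit between the IRQ number and the chip name.
        counts = [int(p) for p in parts[1:idx]]
        if sum(counts) == 0:
            stuck.append(irq)
    return stuck

sample = """ 29:          0   ITS-MSI   1 Edge      gbe0-rxtx0
 32:          0   ITS-MSI 8192 Edge      ptp_qoriq"""

print(zero_its_msi(sample))  # -> ['29', '32']
```

Both lines report a total of zero, which matches the "link up but no traffic" symptom.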

Can you apply patch [1] and provide the logs? This will dump more 
information about the commands received by the vITS and the ones sent to 
the host ITS.

Note that Xen will need to be built with CONFIG_DEBUG=y in order to get 
some of the messages.

[...]

>>>> [    0.000000] GICv3: Distributor has no Range Selector support
>>>>
>>>> [    0.000000] GICv3: no VLPI support, no direct LPI support
>>>>
>>>> [    0.000000] GICv3: CPU0: found redistributor 0 region
>>>> 0:0x0000000006040000
>>>
>>> "no VLPI support" is very suspicious, it looks like Dom0 doesn't find any ITS
>>> support.
VLPI is a feature introduced in GICv4 to inject LPIs directly into the 
guest. So it is normal to see this message when running on Xen, because 
we will only expose a GICv3 to a domain, at least until we support 
nested virt.

However, you were right that Xen didn't expose the ITS, because the 
following line was missing:

[    0.000000] ITS@0x0000000006020000: allocated 65536 Devices @dc880000 (flat, esz 8, psz 64K, shr 1)

Cheers,

[1]
diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 9558bad96ac3..8a0a02308e74 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -87,6 +87,12 @@ static int its_send_command(struct host_its *hw_its, const void *its_cmd)
      /* No ITS commands from an interrupt handler (at the moment). */
      ASSERT(!in_irq());

+    const uint64_t *command = its_cmd;
+
+    printk(XENLOG_WARNING "pITS  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
+           its_cmd_get_command(command),
+           command[0], command[1], command[2], command[3]);
+
      spin_lock(&hw_its->cmd_lock);

      do {
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 869bc97fa1aa..e7c5bcd8d423 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -183,7 +183,10 @@ void gicv3_do_LPI(unsigned int lpi)
      /* Find out if a guest mapped something to this physical LPI. */
      hlpip = gic_get_host_lpi(lpi);
      if ( !hlpip )
+    {
+        printk("%s: Received LPI %u but it is not mapped?\n", __func__, lpi);
          goto out;
+    }

      hlpi.data = read_u64_atomic(&hlpip->data);

@@ -222,6 +225,9 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
  {
      union host_lpi *hlpip, hlpi;

+    printk("%s: host_lpi %u domain %d virq_lpi %u\n",
+           __func__, host_lpi, domain_id, virq_lpi);
+
      ASSERT(host_lpi >= LPI_OFFSET);

      host_lpi -= LPI_OFFSET;
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 58d939b85f92..89ef137b3e6b 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -897,7 +897,7 @@ out_unlock:

  static void dump_its_command(uint64_t *command)
  {
-    gdprintk(XENLOG_WARNING, "  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
+    gdprintk(XENLOG_WARNING, "vITS  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
               its_cmd_get_command(command),
               command[0], command[1], command[2], command[3]);
  }
@@ -926,6 +926,8 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
          if ( ret )
              return ret;

+        dump_its_command(command);
+
          switch ( its_cmd_get_command(command) )
          {
          case GITS_CMD_CLEAR:


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:29:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29800.59473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMaS-00007k-6B; Wed, 18 Nov 2020 12:29:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29800.59473; Wed, 18 Nov 2020 12:29:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMaS-00007d-2H; Wed, 18 Nov 2020 12:29:24 +0000
Received: by outflank-mailman (input) for mailman id 29800;
 Wed, 18 Nov 2020 12:29:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BWMM=EY=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfMaQ-00006j-Iy
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:29:22 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 152eb89c-3e99-4d2b-af0c-7ce439501a6e;
 Wed, 18 Nov 2020 12:29:21 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id u12so2087732wrt.0
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 04:29:21 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id v189sm3989075wmg.14.2020.11.18.04.29.19
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 18 Nov 2020 04:29:20 -0800 (PST)
X-Inumbo-ID: 152eb89c-3e99-4d2b-af0c-7ce439501a6e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=a7q5y5kwEhcoxThrF6U+u43rpWzWheQIW/yaAeO7GWk=;
        b=Y03YSM6VsBDbA8isiYP2X7gNhTUXsgwisKUzu/AFjfWh8SOPdW9BFDUg17Ow2z9BUE
         9mPjR+fx1oFvOA+ltrdad44VmfDZVodl1zNibG0PVADnh18Dih3/fOnLkazqJRC9faUa
         TAjdlcUKYeifjYg48KeDjPOWmoGPXIR0og1UtaGZA/uRTQg1bRHdve61/8xcCMdtRrSK
         3Oea7OrS0tvK0QWszCaAN13MVQIh834b7iK3gRgy17+fKjm1CxxLCMAA86SUI86e3NWp
         j7xx1FDt0+bfzhMfmE+GcSRJPVkR31d4I4y2n7UfMM34k5m8KRWFK4xIJ6979QVIbeb7
         oLCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=a7q5y5kwEhcoxThrF6U+u43rpWzWheQIW/yaAeO7GWk=;
        b=tPxW3FVehEK54vDa9+i6CsGt2BV0M5MtroeDwzgecMyY2gH11stU00KIbUHOZnyIrz
         WNb+A68CczBJlfvKI6dzxoU8oxDK3yWGfXXjnlWseVq6o549UaA/6tlkADrWQCQ6r6Wh
         oZZyuuVLbor/WbBTRglKJlLqgYzb+i1zWmqfzD2y4LIJ61W5l3qkZczMoU/pLnCmzCTc
         OWdbhwubLR+fY7ZuzfDjgBKYUR8Fet8Xeu9qTxdgDFy1VVSQqRGMbplzJo+T3dvdXPFa
         xpgw/x8+Ow6cSJosfOxuFzsA+pDKiaiselI4INH2De71daeZuztf1jHvaNComhbpBmjk
         3BaQ==
X-Gm-Message-State: AOAM5331AxjGXgFuCHJkevVKm4MFqwbx/mMint4mfAvTJRhZZvNNXoRf
	UsD6Wh6YlWvjS6hoZfW4bVs=
X-Google-Smtp-Source: ABdhPJz9jKDbFhsLf/E2gE9cy8bgu/f44QQZAaG7eBduyPPsBCYoA8OOtC7WLNOWbp969e8uD6+ZKw==
X-Received: by 2002:adf:ffc9:: with SMTP id x9mr4667501wrs.148.1605702560655;
        Wed, 18 Nov 2020 04:29:20 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>
Cc: <xen-devel@lists.xenproject.org>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-8-git-send-email-olekstysh@gmail.com> <79588865-3f28-5436-0763-cb8ee0b87262@gmail.com>
In-Reply-To: <79588865-3f28-5436-0763-cb8ee0b87262@gmail.com>
Subject: RE: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to struct domain
Date: Wed, 18 Nov 2020 12:29:19 -0000
Message-ID: <00b101d6bda6$7055f2b0$5101d810$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAHOAtspAs5yhsWqgRnfwA==

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 18 November 2020 12:09
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>;
> Julien Grall <julien.grall@arm.com>
> Subject: Re: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to struct domain
>
>
> Hi Paul.
>
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -314,6 +314,8 @@ struct sched_unit {
> >
> >   struct evtchn_port_ops;
> >
> > +#define MAX_NR_IOREQ_SERVERS 8
> > +
> >   struct domain
> >   {
> >       domid_t          domain_id;
> > @@ -521,6 +523,21 @@ struct domain
> >       /* Argo interdomain communication support */
> >       struct argo_domain *argo;
> >   #endif
> > +
> > +#ifdef CONFIG_IOREQ_SERVER
> > +    /* Guest page range used for non-default ioreq servers */
> > +    struct {
> > +        unsigned long base;
> > +        unsigned long mask;
> > +        unsigned long legacy_mask; /* indexed by HVM param number */
> > +    } ioreq_gfn;
>
> I assume the whole ioreq_gfn struct doesn't need to be here. According
> to the new requirement to leave the legacy interface x86-specific,
> these fields won't be used in common code anymore. I will move the
> ioreq_gfn struct back to hvm_domain. Please confirm.

Yes, leave it where it is in struct hvm_domain.

  Paul

>
>
> > +
> > +    /* Lock protects all other values in the sub-struct and the default */
> > +    struct {
> > +        spinlock_t              lock;
> > +        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
> > +    } ioreq_server;
> > +#endif
> >   };
> >
> >   static inline struct page_list_head *page_to_list(
>
> --
> Regards,
>
> Oleksandr Tyshchenko




From xen-devel-bounces@lists.xenproject.org Wed Nov 18 12:51:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 12:51:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29817.59491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMvS-0002ta-4F; Wed, 18 Nov 2020 12:51:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29817.59491; Wed, 18 Nov 2020 12:51:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfMvS-0002tT-0O; Wed, 18 Nov 2020 12:51:06 +0000
Received: by outflank-mailman (input) for mailman id 29817;
 Wed, 18 Nov 2020 12:51:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8qHR=EY=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kfMvO-0002tO-3O
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 12:51:04 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1557e271-f376-4114-8bb5-35ce5e1da761;
 Wed, 18 Nov 2020 12:50:58 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kfMuz-0006LA-DU; Wed, 18 Nov 2020 12:50:37 +0000
X-Inumbo-ID: 1557e271-f376-4114-8bb5-35ce5e1da761
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=HSUzUyKTU/QS1kBHnPfjuFObEMrIAG6LjkrRvHzZF6g=; b=mpIozshx6aJo2QoKkj03Sj4lqv
	eATDwM5+pyAtrhOHDUVD77b65789cHxYoWQANXOTIXt7qV/KGnbJ83izmUrP2MQ/2l3mBwDw4mdoW
	Y5kbGddaa/QKi7M9RwOUo9SSjzPgbiiV1208faqzs+g6W2VUCPYuQiOn803XJb/o/nettEMMrZhNr
	200kJLaXlHseKJHdT2eZ+hl3ThDl9guSn9OMId2L1GwrdkO42exGupejK69RQO+t1ajwPpeevqxwt
	f3XgZhPQ7s2PLOBKSeN45qsCk6Fy+tmNaH6Rrigl3ju4Z9SvL4rjgUhEPeklvtVToHeHs9jg6H9jx
	JzIvniVg==;
Date: Wed, 18 Nov 2020 12:50:37 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Greg KH <gregkh@linuxfoundation.org>, Christoph Hellwig <hch@lst.de>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Jens Axboe <axboe@kernel.dk>
Subject: Re: merge struct block_device and struct hd_struct
Message-ID: <20201118125037.GE29991@casper.infradead.org>
References: <20201118084800.2339180-1-hch@lst.de>
 <22ca5396-0253-f286-9eab-d417b2e0b3ad@suse.com>
 <20201118085804.GA20384@lst.de>
 <1ded2079-f1be-6d5d-01df-65754447df78@suse.com>
 <X7Tky/6dDN8+DrU7@kroah.com>
 <61044f85-cd41-87b5-3f41-36e3dffb6f2a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <61044f85-cd41-87b5-3f41-36e3dffb6f2a@suse.com>

On Wed, Nov 18, 2020 at 10:23:51AM +0100, Jan Beulich wrote:
> On 18.11.2020 10:09, Greg KH wrote:
> > On Wed, Nov 18, 2020 at 10:04:04AM +0100, Jan Beulich wrote:
> >> On 18.11.2020 09:58, Christoph Hellwig wrote:
> >>> On Wed, Nov 18, 2020 at 09:56:11AM +0100, Jan Beulich wrote:
> >>>> since this isn't the first series from you recently spamming
> >>>> xen-devel, may I ask that you don't Cc entire series to lists
> >>>> which are involved with perhaps just one out of the many patches?
> >>>> IMO Cc lists should be compiled on a per-patch basis; the cover
> >>>> letter may of course be sent to the union of all of them.
> >>>
> >>> No way.  Individual CCs are completely broken as they don't provide
> >>> the reviewer a context.
> >>
> >> That's the view of some people, but not all. Context can be easily
> >> established by those who care going to one of the many archives on
> >> which the entire series lands. Getting spammed, however, can't be
> >> avoided by the dozens or hundreds of list subscribers.
> > 
> > kernel patches are never "spam", sorry, but for developers to try to
> > determine which lists/maintainers want to see the whole series and which
> > do not is impossible.
> > 
> > Patches in a series are easily deleted from sane mail clients with a
> > single click/keystroke all at once, they aren't a problem that needs to
> > be reduced in volume.
> 
> This doesn't scale, in either the number of recipients or the number
> of possible sources of such series.
> 
> While it may seem small, it's also a waste of resources to have mails
> sent to hundreds or even thousands of people. So while from a
> technical content perspective I surely agree with you saying 'kernel
> patches are never "spam"', they still are from the perspective of
> what "spam mail" originally means: Mail the recipients did not want
> to receive.

What doesn't scale is developers who only care about their tiny
sliver of Linux and don't stick their heads up from time to time and
look around.  This is an opportunity for people to become more involved
in the development of Linux as a whole, instead of just worrying about
their bit.  You're not "a Xen developer".  You're a Linux developer
whose current focus is on Xen.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 13:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 13:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29838.59519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfNMY-00059Q-MR; Wed, 18 Nov 2020 13:19:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29838.59519; Wed, 18 Nov 2020 13:19:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfNMY-00059J-JN; Wed, 18 Nov 2020 13:19:06 +0000
Received: by outflank-mailman (input) for mailman id 29838;
 Wed, 18 Nov 2020 13:19:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfNMX-00059E-CE
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 13:19:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9b9006b-6191-4470-9e9c-8753a214ded9;
 Wed, 18 Nov 2020 13:19:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D593EAFD8;
 Wed, 18 Nov 2020 13:19:01 +0000 (UTC)
X-Inumbo-ID: a9b9006b-6191-4470-9e9c-8753a214ded9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605705542; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sGHdC2L5o1J1lqE6yUAnA1GvuIm+6bMrJhY8jdQST/I=;
	b=qySZxs66lDCP88+3Z2BcvdKm+gXwqRXJDfZNttxjnJXw75PLKB/CorpjXfUGsN3H/a4A5R
	UA9mPNVhc4/Ya61IXseangR8Of75SAsaeTPGFcP2lS4juBHREimhDHinpE7CUmIrjB0mov
	doMELB36q1i/16owwchcfoyxvuvrrz0=
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <77067fa0-d902-9091-50e0-d6e15e34b159@suse.com>
Date: Wed, 18 Nov 2020 14:19:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201109163826.13035-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.11.2020 17:38, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event, and remove the need to disable interrupts when
> taking the lock.
> 
> The lock is needed for avoiding races between event channel state
> changes (creation, closing, binding) against normal operations (set
> pending, [un]masking, priority changes).
> 
> Use a rwlock, but with some restrictions:
> 
> - Changing the state of an event channel (creation, closing, binding)
>   needs to use write_lock(), ASSERT()ing that the lock is taken as
>   writer only when the state of the event channel either before or
>   after the locked region is appropriate (either free or unbound).
> 
> - Sending an event mostly needs to use read_trylock(); if the lock
>   cannot be obtained, the operation is omitted. This is needed as
>   sending an event can happen with interrupts off (at least in some
>   cases).
> 
> - Dumping the event channel state for debug purposes uses
>   read_trylock(), too, in order to avoid blocking in case the lock is
>   taken as writer for a long time.
> 
> - All other cases can use read_lock().

One of the implications is that racing invocations of ->set_pending()
are now possible for the same port. Beyond what I said in reply to
0/3 already, I'm afraid there are (latent) issues:

1) The update of ->pending (or basically any bitfield in struct
evtchn, or yet more generically any field getting updated in a read-
modify-write fashion) is no longer generally safe in any of the
hooks called with just a read lock held. ->pending itself is not an
issue now merely because it shares storage only with xen_consumer,
which won't get updated once a port was bound.

2) Of two racing sends, one may now complete without the port
actually having got fully recorded as linked in the FIFO code. This
is because the party losing the race of setting EVTCHN_FIFO_LINKED
will return early, without regard to whether the winner has made
enough progress. (Of course this is possible only with an
intermediate queue change, as only then would the lock become
available to the second of the senders early enough.)

I've gone through other functions called from this path and didn't
find any further race potential there, but I'm not entirely certain
I didn't miss anything.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29846.59531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfO8x-0001cX-Gs; Wed, 18 Nov 2020 14:09:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29846.59531; Wed, 18 Nov 2020 14:09:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfO8x-0001cQ-Dg; Wed, 18 Nov 2020 14:09:07 +0000
Received: by outflank-mailman (input) for mailman id 29846;
 Wed, 18 Nov 2020 14:09:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfO8w-0001cL-28
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:09:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 208269a4-836f-4a18-84d8-5c9f1022cbf0;
 Wed, 18 Nov 2020 14:09:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E3BF2AC90;
 Wed, 18 Nov 2020 14:09:03 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 5DDFD1E130B; Wed, 18 Nov 2020 15:09:03 +0100 (CET)
X-Inumbo-ID: 208269a4-836f-4a18-84d8-5c9f1022cbf0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:09:03 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 01/20] blk-cgroup: fix a hd_struct leak in
 blkcg_fill_root_iostats
Message-ID: <20201118140903.GF1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-2-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-2-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:41, Christoph Hellwig wrote:
> disk_get_part needs to be paired with a disk_put_part.
> 
> Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/blk-cgroup.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index c68bdf58c9a6e1..54fbe1e80cc41a 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -849,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
>  			blkg_iostat_set(&blkg->iostat.cur, &tmp);
>  			u64_stats_update_end(&blkg->iostat.sync);
>  		}
> +		disk_put_part(part);
>  	}
>  }
>  
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:18:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29852.59542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOHk-0002eK-Dd; Wed, 18 Nov 2020 14:18:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29852.59542; Wed, 18 Nov 2020 14:18:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOHk-0002eD-Al; Wed, 18 Nov 2020 14:18:12 +0000
Received: by outflank-mailman (input) for mailman id 29852;
 Wed, 18 Nov 2020 14:18:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfOHj-0002e8-CF
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:18:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfOHf-0002Ia-AK; Wed, 18 Nov 2020 14:18:07 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfOHf-00023F-2A; Wed, 18 Nov 2020 14:18:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=b9WFhkn0kJbujSeTGVzt/1Wpi4OV0biFcj8AfM3wHPs=; b=AOdjxU2csTAZVvfKJe2WuPgG60
	MqVJS9E/6+q1dfOr+kDMVWrGVFYg3xEdP16Ks4hSLXpauoRvAsBIWAg5MG+2V+l+qLXwSFb62FTKk
	0L2mvNQLJCNfaScOsFnm0/HUxGoTUcv/BBTlAGK/4Cr6WV5xGPNc1KgI53Rh1fFSv8aU=;
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 "committers@xenproject.org" <committers@xenproject.org>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com>
 <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
 <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ddb6025e-a4fb-9691-c71f-7a8f0e5543be@xen.org>
Date: Wed, 18 Nov 2020 14:18:04 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 18/11/2020 08:22, Jan Beulich wrote:
> On 17.11.2020 19:13, Julien Grall wrote:
>> On 09/11/2020 16:38, Juergen Gross wrote:
>>> Juergen Gross (3):
>>>     xen/events: access last_priority and last_vcpu_id together
>>>     xen/evtchn: rework per event channel lock
>>>     xen/evtchn: revert 52e1fc47abc3a0123
>>
>> While looking at the list of commits, I noticed that the first patch
>> hasn't been committed. They were all acked/reviewed, so I am a bit
>> puzzled why this was omitted...
>>
>> I have nearly missed as I was expecting the 3 patches to be committed
>> together. May I suggest that in the future we reply to the cover letter
>> and mention which patches are (or not) committed?
>>
>> Regarding patch #1, I will commit it tomorrow unless there are strong
>> objections against.
> 
> Without a clear outline of what would break with the present logic,
> I had previously indicated I'm not convinced of the change. This
> isn't a strong objection, no, but I still wouldn't want to see my
> name associated with it in such a case.

I was under the impression that the committer's job is mostly mechanical 
(i.e. collecting tags and applying patches). There is no rule in 
MAINTAINERS saying that committers decide what gets committed, as long 
as maintainers approved it and there are no strong objections.

> Furthermore I clearly view
> this as not a backporting candidate, while the other two are (as I
> did previously indicate). Hence the latter two patches wanted
> re-basing ahead of the first one anyway, to ease the backports.

I understand the backport concern. However, if there were clashes, you 
had to resolve them on committing to staging. Surely they were quite 
minimal, otherwise you would have sent an e-mail to xen-devel, right?

> While I will accept there are different views possible here, I also
> don't think sending mail in such a case is a good use of resources.
> The information what was or was not committed is readily available.
> Personally I view such mails as at least very close to spam.

This is a matter of perspective. It helps to confirm with the 
contributor that it was fine to merge only part of the series (multiple 
pairs of eyes are always better than one...) and to mention any clashes 
or rework.

It also makes it easier for reviewers to figure out what was committed, 
as it can be difficult to know whether a patch was merged: the commit 
title can be altered (even simply dropping the prefix "xen/" can take a 
couple of extra minutes to pinpoint the commit).
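For illustration only (not part of the original mail): one way to locate such 
a commit despite an altered title is to match on a distinctive fragment of the 
subject rather than the full prefixed title. The log lines below are made up; 
with a real tree you would pipe `git log --oneline staging` instead of `printf`.

```shell
# Match a commit in one-line log output even when the "xen/" subject
# prefix was dropped on commit. Sample log lines are fabricated.
printf '%s\n' \
  'abc1234 events: access last_priority and last_vcpu_id together' \
  'def5678 x86: unrelated change' |
  grep -i 'access last_priority'
```

Matching on a stable keyword from the middle of the subject sidesteps any 
prefix rewording done by the committer.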

Therefore, I think there is value in sending such e-mails. It will 
likely improve coordination among the members of the community.

I would be happy to consider a different approach if it fulfills that goal.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:23:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:23:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29857.59554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOMr-0003cQ-1d; Wed, 18 Nov 2020 14:23:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29857.59554; Wed, 18 Nov 2020 14:23:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOMq-0003cJ-Uo; Wed, 18 Nov 2020 14:23:28 +0000
Received: by outflank-mailman (input) for mailman id 29857;
 Wed, 18 Nov 2020 14:23:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xcH=EY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfOMq-0003cE-2z
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:23:28 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.87]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0364196-03d9-4835-98ff-06ffb9e2d702;
 Wed, 18 Nov 2020 14:23:25 +0000 (UTC)
Received: from DB6PR0301CA0055.eurprd03.prod.outlook.com (2603:10a6:4:54::23)
 by DB6PR0802MB2421.eurprd08.prod.outlook.com (2603:10a6:4:a2::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Wed, 18 Nov
 2020 14:23:23 +0000
Received: from DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:54:cafe::6) by DB6PR0301CA0055.outlook.office365.com
 (2603:10a6:4:54::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 14:23:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT028.mail.protection.outlook.com (10.152.20.99) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 14:23:23 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71");
 Wed, 18 Nov 2020 14:23:23 +0000
Received: from 0dd4d22bc1b7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D28CB4B1-262F-4807-BF5D-CDC9D87B7EE1.1; 
 Wed, 18 Nov 2020 14:23:15 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0dd4d22bc1b7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Nov 2020 14:23:15 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3163.eurprd08.prod.outlook.com (2603:10a6:5:1e::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Wed, 18 Nov
 2020 14:23:14 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Wed, 18 Nov 2020
 14:23:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1xcH=EY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kfOMq-0003cE-2z
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:23:28 +0000
X-Inumbo-ID: f0364196-03d9-4835-98ff-06ffb9e2d702
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown [40.107.8.87])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f0364196-03d9-4835-98ff-06ffb9e2d702;
	Wed, 18 Nov 2020 14:23:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TbKTD9sVBqkPxVLzxguLKQJDfD3FEPyOPei7Fc1v9gA=;
 b=eB0WGzRNssG97bf3Ktyf8+we4oKnzQQkL81AzfxEjXxE/ngwMe+1gme1IqzAv1FEPMhByf/YIqCTl5sA6XHANuvgsDVJodGuGL6q/MZqXlCR7JpyN3ycrYIW9JO3P1YMz3tNo+ohtoHewwLdlVU8vsGVE3MajBOBLZdpFY8UxyA=
Received: from DB6PR0301CA0055.eurprd03.prod.outlook.com (2603:10a6:4:54::23)
 by DB6PR0802MB2421.eurprd08.prod.outlook.com (2603:10a6:4:a2::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Wed, 18 Nov
 2020 14:23:23 +0000
Received: from DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:54:cafe::6) by DB6PR0301CA0055.outlook.office365.com
 (2603:10a6:4:54::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 14:23:23 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT028.mail.protection.outlook.com (10.152.20.99) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 14:23:23 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71"); Wed, 18 Nov 2020 14:23:23 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0792166cbeb9e29d
X-CR-MTA-TID: 64aa7808
Received: from 0dd4d22bc1b7.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id D28CB4B1-262F-4807-BF5D-CDC9D87B7EE1.1;
	Wed, 18 Nov 2020 14:23:15 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0dd4d22bc1b7.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 18 Nov 2020 14:23:15 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=docvWPS8rDYILzDjYUB13Jp8pDFE1RNr4VPC0fHOEolXe22Ohizbe29ZdDYQxxjqgk4PIimBe3VzNELV5ZLZ+v6HqGmdEsafrKDJBgbQr7T1PbJrpGIbrm0O/brcw1DhijefHl/fRKAlebZXM8pKbvi1g4qL6KZnHwd49en9km9OSWdD3S9AQQ8XpLprfvmfyXH4v+KLX36OUly4IMib7kWT57MpdN+d0NsqJ51sVKiNrCwkY1y0gEeC8BiLh0NaMRtQYtWv72aN/Nrk5f4eMc6UfJ32Ldcp9XGcQOFBk82jXDWU4K9eqFppuEUcEM8nEbA6PjbByshOBC7LpZcQ0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TbKTD9sVBqkPxVLzxguLKQJDfD3FEPyOPei7Fc1v9gA=;
 b=GX0tvQIxgqyzYjmp6U9F+esTMfQ06EGpRK8uJLIoiqGGFCkVzpDWAD/SWP/V8fk4CcrwO0JiEo7vIVagis+b3WN04Xo2wibSodA/v1nqCJAoIcm2P22pRJMRDz7X3izXPNJsoqYYJUSDh3N0QA+0dL/yXk14qmAwKb5BrfizkJVcmRQ/R+G5/g/JbRSysc4tRWEIpJCvM5bq+6aqc7odv6Q3Xvdt+oLD5mzuyczhu77RE7qcelNIlK8FzVOJPt2b7fk6vNUYjvW6C1VTNahEzUkOQhvxU7VrWcfxGanOb653iis8W8O+JQ5wpYy0YZzysUjsS+vdZZBdSvUJrLX2AA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TbKTD9sVBqkPxVLzxguLKQJDfD3FEPyOPei7Fc1v9gA=;
 b=eB0WGzRNssG97bf3Ktyf8+we4oKnzQQkL81AzfxEjXxE/ngwMe+1gme1IqzAv1FEPMhByf/YIqCTl5sA6XHANuvgsDVJodGuGL6q/MZqXlCR7JpyN3ycrYIW9JO3P1YMz3tNo+ohtoHewwLdlVU8vsGVE3MajBOBLZdpFY8UxyA=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3163.eurprd08.prod.outlook.com (2603:10a6:5:1e::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Wed, 18 Nov
 2020 14:23:14 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Wed, 18 Nov 2020
 14:23:14 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"julien@xen.org" <julien@xen.org>, "wl@xen.org" <wl@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
Thread-Topic: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
Thread-Index: AQHWvUT1dGbwa+YaakuAH1PIoRpzaanNk7GAgAABYYCAAF0WgA==
Date: Wed, 18 Nov 2020 14:23:13 +0000
Message-ID: <B35C22E2-FA6C-4F89-B364-35AFB2096E7C@arm.com>
References: <20201118005051.26115-1-sstabellini@kernel.org>
 <0A50C952-B9D8-44C3-9326-A0555B435693@arm.com>
 <c59a1540-2dd0-2813-9fe5-d5be2335fe9b@suse.com>
In-Reply-To: <c59a1540-2dd0-2813-9fe5-d5be2335fe9b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f91ef9f2-cc36-458e-1458-08d88bcd81f0
x-ms-traffictypediagnostic: DB7PR08MB3163:|DB6PR0802MB2421:
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB2421D2FFB8BB5E283EF56A4A9DE10@DB6PR0802MB2421.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 QeBw4ZharD67m5X3H3ALK7ff7mD9pOryLT3VAyx1mUiUSFV0JjMmRFD3yKmr6tkkbjWIH2swABc4eWHtb3ULU+tGWrq0NIoIvCcYveXMjf+TPubQNSRPsahEgzgCrYn3e3k1sMFqe9o2zq8bx+0xvsdx16p6w+gRZuYvM6L7YmjX2FHICimhfdTTDO454Sej0KtG2TjqAo+zGOSl5z02QLYfmCgim8wo3//td06VMAN0WRo2v3LQu1keSLzFVWWKqmTMlrXHrzi2qCLg/pWsPp5B2S4WgtNcj3N2QddEvxHlwHOv05hR+VQabHYh2b+hNC6ci38NqKRTOJXPKNeRrA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(366004)(136003)(376002)(346002)(53546011)(6506007)(86362001)(2616005)(54906003)(76116006)(186003)(91956017)(6916009)(5660300002)(26005)(33656002)(316002)(478600001)(2906002)(6512007)(66556008)(64756008)(66446008)(66476007)(8676002)(66946007)(6486002)(8936002)(4326008)(36756003)(71200400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 opP+MMtlVuxs+4zszsgDUba3f6P3K8OJL4pgkJPdsY0KvvzrMWi6XwDmikb3AWBnN18hk3HpYSc6CrCPCSvtN1UULE4TdSICS+KwwSbELccf/OH3HDCXigTdOtvPuoc+ASKInV+jPR/tmn4B4A7OZq+4UKcTsRoWtYAOcQciE/iHDtsBKpAF9+v1NdrZ4Bi+3uBAFBkiyjBiQxhL9J25i1HRug/9b/a5V5LwsZZTFvAVAkmXVs9fDWR9ZJneVlT2R0Z+WbcbBr4Martez1K5tmONFU1WLJhGrk1yaHIhmtiQG2E7iDuUCM8ZWDZuwmr6PgOjZ6wh3IGmRm8dZ+07d3UxH9NJ/7xi5s820dKiMOm6IO6oEZQcTyF75on4CdCKwF1tuEO3RKA89PtTailFcn2F93K1OBxwHo9lk7rUOI1rfmnu8JG14+y2oVNZvAhgfAgqcn0bYNqE8gCra73kwqsZjvVINUgcUC1O/lRgwBmdRLGRqbIyjnahB7FfUYfyjw4eP7BeNkqCBbT4kgZ/1L7dTW5VTziRtn1uDxCNcE8C29y8QjKz1HWelAhfIq5Bm5QoRg8I5Jx5R62VsfNit9Wv5t5iWAtfiKS2Mm2cCPvDGuJom+dgq6TOr8F2HTg3m2PCFk+NSZiRhfiyTZ7cTw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E89C7DDC2429D2478F718558492EFC0B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3163
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	efb20239-1378-4fb9-f34f-08d88bcd7c2a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	459HeJAOkRt8XIRYA2a1ZpNh30mohOikwH1OTwchN7Ecqs+VVSoIfbPyMR5ekPenJwlJKtvORetGSyoT6vNJ99IAwxCC3tZKtm9rDJbsVloDt22brQlaNH3bjUfowgG5yaSKR5bIQdWBuR85bmgM56QFoMeELETahaW4i12ece/BQiMUiCXFjmp059Pe3HbyK9qy0BXjlwL33nYMdK/cua/IrdnISSOhEMlyYveBVP//hSBIHwWeGDjhQuB2gSaotThpYfcBGVgmySc03ND3SjQca8V2Iae3iLR6tbqtj0Fe9rVmrjwiEyb/Ao/qZKYpY8g/zwd+ZJx12pc0UGSGL2rxAZ1dJsBTNEQEW+I++AigheDTHcPXcwp8BRMLNfnV9u5XPXwrBxKEMoBMQxtsfg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(376002)(346002)(39860400002)(396003)(46966005)(336012)(8676002)(82310400003)(26005)(47076004)(82740400003)(8936002)(54906003)(6506007)(53546011)(2616005)(186003)(316002)(33656002)(5660300002)(6512007)(36756003)(478600001)(107886003)(70206006)(81166007)(4326008)(70586007)(6486002)(6862004)(356005)(86362001)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 14:23:23.7406
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f91ef9f2-cc36-458e-1458-08d88bcd81f0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2421

SGkgSmFuLA0KDQo+IE9uIDE4IE5vdiAyMDIwLCBhdCAwODo1MCwgSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPiB3cm90ZToNCj4gDQo+IE9uIDE4LjExLjIwMjAgMDk6NDUsIEJlcnRyYW5k
IE1hcnF1aXMgd3JvdGU6DQo+Pj4gT24gMTggTm92IDIwMjAsIGF0IDAwOjUwLCBTdGVmYW5vIFN0
YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IHdyb3RlOg0KPj4+IC0tLSBhL3hlbi9h
cmNoL3g4Ni9LY29uZmlnDQo+Pj4gKysrIGIveGVuL2FyY2gveDg2L0tjb25maWcNCj4+PiBAQCAt
MTAyLDggKzEwMiw4IEBAIGNvbmZpZyBIVk0NCj4+PiAJICBJZiB1bnN1cmUsIHNheSBZLg0KPj4+
IA0KPj4+IGNvbmZpZyBYRU5fU0hTVEsNCj4+PiAtCWJvb2wgIlN1cGVydmlzb3IgU2hhZG93IFN0
YWNrcyINCj4+PiAtCWRlcGVuZHMgb24gSEFTX0FTX0NFVF9TUyAmJiBFWFBFUlQNCj4+PiArCWJv
b2wgIlN1cGVydmlzb3IgU2hhZG93IFN0YWNrcyAoVU5TVVBQT1JURUQpIg0KPj4+ICsJZGVwZW5k
cyBvbiBIQVNfQVNfQ0VUX1NTICYmIFVOU1VQUE9SVEVEDQo+PiANCj4+IFRoaXMgb25lIGlzIG5v
dCBmb2xsb3dpbmcgdGhlIHN0YW5kYXJkIHNjaGVtZSB3aXRoIOKAnGlmIFVOU1VQUE9SVEVEIg0K
PiANCj4gVGhlcmUncyBubyBzdGFuZGFyZCBzY2hlbWUgaGVyZTogVGhlcmUncyBvbmUgY2FzZSB3
aGVyZSB0aGUgZW50aXJlDQo+IG9wdGlvbiBkZXBlbmRzIG9uIHNvbWUgb3RoZXIgc2V0dGluZyAo
ZS5nLiBVTlNVUFBPUlRFRCkgYW5kIGFub3RoZXINCj4gd2hlcmUganVzdCB0aGUgcHJvbXB0IChp
LmUuIGdpdmluZyB0aGUgdXNlciBhIGNob2ljZSkgZG9lcy4gVGhlDQo+IGRpZmZlcmVuY2UgYmVj
b21lcyBvYnZpb3VzIHdoZW4gdGhlIG9wdGlvbiBoYXMgYSBkZWZhdWx0IG90aGVyIHRoYW4NCj4g
Im5vIjogRGVzcGl0ZSB0aGUgaW52aXNpYmxlIHByb21wdCwgaXQgbWF5IGdldCB0dXJuZWQgb24u
IEluIHRoZQ0KPiBjYXNlIGhlcmUgKHNlcnZpbmcgYXMgYSBnb29kIGV4YW1wbGUpLCAiZGVmYXVs
dCB5IiB3b3VsZCBtZWFuIHRoZQ0KPiBmZWF0dXJlIGdldHMgdHVybmVkIG9uIHdoZW4gImlmIFVO
U1VQUE9SVEVEIiB3b3VsZCBiZSBhZGRlZCB0byB0aGUNCj4gcHJvbXB0IGFuZCB3aGVuIFVOU1VQ
UE9SVEVEIGlzIGl0c2VsZiBvZmYuDQo+IA0KDQpJdCBtYWtlcyBzZW5zZSwgdGhhbmtzIGZvciB0
aGUgZXhwbGFuYXRpb24NCg0KQmVydHJhbmQNCg0K


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29874.59567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfORJ-0003pq-Np; Wed, 18 Nov 2020 14:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29874.59567; Wed, 18 Nov 2020 14:28:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfORJ-0003pj-Kk; Wed, 18 Nov 2020 14:28:05 +0000
Received: by outflank-mailman (input) for mailman id 29874;
 Wed, 18 Nov 2020 14:28:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xcH=EY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfORH-0003pe-NS
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:28:03 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57ea5a96-17db-4626-b543-f65c6f3f2e92;
 Wed, 18 Nov 2020 14:28:02 +0000 (UTC)
Received: from DU2PR04CA0045.eurprd04.prod.outlook.com (2603:10a6:10:234::20)
 by DBBPR08MB4329.eurprd08.prod.outlook.com (2603:10a6:10:c7::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Wed, 18 Nov
 2020 14:28:00 +0000
Received: from DB5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234:cafe::80) by DU2PR04CA0045.outlook.office365.com
 (2603:10a6:10:234::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 14:28:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT007.mail.protection.outlook.com (10.152.20.148) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 14:28:00 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Wed, 18 Nov 2020 14:28:00 +0000
Received: from 4600c2a73ea4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 32DA263E-D50B-4B28-90E6-495DD5DA4FFA.1; 
 Wed, 18 Nov 2020 14:27:54 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4600c2a73ea4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 18 Nov 2020 14:27:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1799.eurprd08.prod.outlook.com (2603:10a6:4:3a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Wed, 18 Nov
 2020 14:27:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Wed, 18 Nov 2020
 14:27:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1xcH=EY=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kfORH-0003pe-NS
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:28:03 +0000
X-Inumbo-ID: 57ea5a96-17db-4626-b543-f65c6f3f2e92
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown [40.107.15.54])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 57ea5a96-17db-4626-b543-f65c6f3f2e92;
	Wed, 18 Nov 2020 14:28:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Eoa5f+8TE4fAfZ8cqc7+v3IXwmgM2gUOr40kRT67MSE=;
 b=m1H8HP7zGWueOD1xsQoIkXMisirvClLeRTUWB4KbmGkAViRkdSKuD4Odr6cZF2DcMszq4D8RI1u439jHrItkbkXNUaUoIzMO4C1+ez6c4SGb89+oVEUVMsOVAtCM7gkMi0ZiMNGdUFndAm0SWOzNq05JxTzW4MvJbpC6dEAA7IQ=
Received: from DU2PR04CA0045.eurprd04.prod.outlook.com (2603:10a6:10:234::20)
 by DBBPR08MB4329.eurprd08.prod.outlook.com (2603:10a6:10:c7::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Wed, 18 Nov
 2020 14:28:00 +0000
Received: from DB5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234:cafe::80) by DU2PR04CA0045.outlook.office365.com
 (2603:10a6:10:234::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 14:28:00 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT007.mail.protection.outlook.com (10.152.20.148) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 14:28:00 +0000
Received: ("Tessian outbound 814be617737e:v71"); Wed, 18 Nov 2020 14:28:00 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: fb476fa74cef82f5
X-CR-MTA-TID: 64aa7808
Received: from 4600c2a73ea4.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 32DA263E-D50B-4B28-90E6-495DD5DA4FFA.1;
	Wed, 18 Nov 2020 14:27:54 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4600c2a73ea4.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 18 Nov 2020 14:27:54 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IhrNqNa9ts72/BfQrcfGfXk52PiRk88ojUk7FBNlO4iZn1XChWRW2CkqE5CNWbGuWRg650ZhybDOhbPs7QO+a4JPYHqBFFrYYYFmORTfcBkNKb2TvBE1jgZgRhw6r4hz/fpIroYr/w1BQ+a+R2H16EGtAzzU5UlkeUGrwoAXpQLeuHSpH1MWOnTkcwJ7+7aSRZoP+VrbuGIKo6lyCyCu+mDD7HuGs5ELiYy2D2FBNWkyyQvwILis85HqlfGRtdM3dDm5p8kDAcUMcaWZDlpthrJ53iO6GC/AoCqY3UwNnjF9PwBTduTqvHdCg2bTzbc59dxtH77J7CvxB+4vjvqK6A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Eoa5f+8TE4fAfZ8cqc7+v3IXwmgM2gUOr40kRT67MSE=;
 b=AXs4vLL8kkeYzjzqOESvbJvFhBrfBCegZyllwQoM2eJFSpfAi49Z2Q9yNWTfh0OYccUiX+099NNPZXgA3c2R0CjQeTJspT+G3gAZOF242YHl4TJaKJQiOMEZwLS3crAR6xsU53lvjU7VKTJQkkjscdRNaHDlbl7GIKr3EZ8EOqTsFgRMsYdZTTEVCTTzK+eLDj/LeB7W8QjMt0kFURuULND8W1msw2QUgqkxTDVKg5Wpsxzz+OgS2bZljaMoC5VNmnhzdVhZsdWa4kJcjS6f2jq9ppqkgW/EO+CBy+2pN15+XsQROtBFedHSIfkYp7XV8CHVOCHMcsqDbBZiBG8DAQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Eoa5f+8TE4fAfZ8cqc7+v3IXwmgM2gUOr40kRT67MSE=;
 b=m1H8HP7zGWueOD1xsQoIkXMisirvClLeRTUWB4KbmGkAViRkdSKuD4Odr6cZF2DcMszq4D8RI1u439jHrItkbkXNUaUoIzMO4C1+ez6c4SGb89+oVEUVMsOVAtCM7gkMi0ZiMNGdUFndAm0SWOzNq05JxTzW4MvJbpC6dEAA7IQ=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1799.eurprd08.prod.outlook.com (2603:10a6:4:3a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Wed, 18 Nov
 2020 14:27:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Wed, 18 Nov 2020
 14:27:52 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Olaf Hering
	<olaf@aepfle.de>
Subject: Re: [PATCH] libxenstat: avoid build race
Thread-Topic: [PATCH] libxenstat: avoid build race
Thread-Index: AQHWvMYUgIctYS6LB0S1NxEfdujSB6nN9HIA
Date: Wed, 18 Nov 2020 14:27:52 +0000
Message-ID: <6CDFEFFF-368E-467B-970A-4CDFA7978A2E@arm.com>
References: <273da46e-2a56-f53c-f137-f6fc411ad8db@suse.com>
In-Reply-To: <273da46e-2a56-f53c-f137-f6fc411ad8db@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d901c296-130a-40c6-933d-08d88bce26a3
x-ms-traffictypediagnostic: DB6PR0801MB1799:|DBBPR08MB4329:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB4329F1F2ADE909571DD321F09DE10@DBBPR08MB4329.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 HFrAisWBunTjVRXXalobRtABIMFhKoGhvRDxpiNRDl37uSgNlWyEP8CjVt6bB8r16lqO2xtwePeOHSflPhcqIJ6wXRVL7KRrZchskzCx3jVNZtc5DGM0UhK4xCfIXjAMvA8+NnTWadZu2rnsbM/AkPPUgIF6intFvwhZu+z4WYHOCJvBaaKKx1k1zPFTQ7L9B5bcVL0FthOAwY5jU6+AT+sUAV04UfWtxgtJSHAaduRL2DmPrixlVk4fqrWNO0d4TJUmYjGk6AshIGofKrZwXUxn4RHuOOsqf1VxRVplslyeOzOU3Lx0yHrjNXTfmqXAaAEpSO6U3jLsHxB58HY0zg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(39860400002)(136003)(366004)(6506007)(316002)(8936002)(33656002)(4744005)(8676002)(2906002)(6486002)(54906003)(86362001)(6512007)(2616005)(36756003)(76116006)(53546011)(26005)(478600001)(5660300002)(66946007)(4326008)(66446008)(64756008)(91956017)(6916009)(66476007)(71200400001)(186003)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 ZPWQfIcNl8jj/SSNd1ENiM9MiELq3UNnWPQLOLmi95ZHVUf59FdIiJUPZcmDAAfLvWxCv4Of5D1BgCp5C/TJTNBi52hhWtJkJTWjypq9VhWeMDwPFufl+k9Ml3hLro83xUP5v/bsahEBhZ7FAnbmQ3AkZEJEnEuvl58f7oW2DGvvGBh57RIeOXZCCfUBfCQI3SSPDMEJCuV9TgtnUsne1pEhIobJMJnmPWAkem195H00inNYcJ19X79AVUzMGceGxjJoCni+Ugn53mUMjvIHWVFfgyQm12N4e4TG1LGvtcrFF8CVV2mxdK1ABkJzYUP8sK2YGm5Pdt0XI4RxvTdimnYog82DLN9BKwc1EiaOcsyMTf9g3HFbWWy+ZYZiZiQFPw4YfGpm20JGXcIJlIYXAcizrV6v+NY6m5/6eWa/b2uwkShodmKru1NxR2y0DQ2xPipk/aL9JSJaUg435YT44VaVhXV1VC+VHPgI6hkaBcpUFdna2pkCDedmpnRpfA+nRjCIU2JoNVtfiuBMT7eCePQA4DNrwleQ+Kt8FpDhG7zu2T+pm3tlOdsSQW31WAnlmeEqm2U7PaqN+k604RlUldLtQEATykw9qLXAJS+KtEjP49FO0iVBbhU3BfU4IBtg9OoooxjmWpBItYh3rM1vMw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <5CC88706503C754D9E93CE4F76963B4D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1799
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c0c6f7c6-d3ec-444c-ccd7-08d88bce2240
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0OMAbQpu6tUja8ja0xHgxPPArDCYRJVyy55PXDMqqyjVvWsbPQryADiQ7YD44+623pE3WxE+OCAl78sFukCpxDlHNfZTidAcwHZ0k1/LuSu/FBMsWZY96DmQKFCuIJPRunAJtnfFO0t4I2m8+mrZnRscAj4vtNV1NqNog4hdeyxJ8QWy29QSLJ6uztaWKORGwv6hDrgnhcxgV3AKdz3Rbzk+k6bplafG7evkf38y3BzzLVECbKrlKsv7wYNohLs+/sSc2w0X76x8iZNZnLhxeD81X6oAliq7VLnadDu2UX+JeT6bGy5qwkZsGdxeQFxD3gO2v4LSwmlYuCrx3HLWQh96odmqiqb6RXZ17GgUG1l+cjpLDiFsKJoba2rjrM5izyu0j7S/h1kRFiWHqq2a3Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(39860400002)(376002)(46966005)(47076004)(186003)(316002)(2906002)(54906003)(478600001)(82740400003)(6512007)(86362001)(70586007)(336012)(70206006)(356005)(81166007)(107886003)(2616005)(33656002)(6486002)(6862004)(5660300002)(8936002)(6506007)(4326008)(36756003)(8676002)(82310400003)(26005)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 14:28:00.0643
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d901c296-130a-40c6-933d-08d88bce26a3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4329

Hi,

> On 17 Nov 2020, at 09:42, Jan Beulich <jbeulich@suse.com> wrote:
> 
> Olaf reported observing
> 
> xenstat_qmp.c:26:10: fatal error: _paths.h: No such file or directory
> .../tools/libs/stat/../../../tools/Rules.mk:153: xenstat_qmp.opic] Error 1
> 
> Obviously _paths.h, when included by any of the sources, needs to be
> created in advance of compiling any of them, not just the non-PIC ones.
> 
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> A similar issue (at the time of the report) in the building of
> libxenstore was addressed by Jürgen's 9af5e2b31b4e ("tools/libs/store:
> don't use symbolic links for external files").
> 
> --- a/tools/libs/stat/Makefile
> +++ b/tools/libs/stat/Makefile
> @@ -30,7 +30,7 @@ include $(XEN_ROOT)/tools/libs/libs.mk
> 
> include $(XEN_ROOT)/tools/libs/libs.mk
> 
> -$(LIB_OBJS): _paths.h
> +$(LIB_OBJS) $(PIC_OBJS): _paths.h
> 
> PYLIB=bindings/swig/python/_xenstat.so
> PYMOD=bindings/swig/python/xenstat.py
> 
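[Editor's note: the one-line Makefile fix discussed in this message follows a general make pattern: a generated header must be a prerequisite of every object list whose sources may include it, or a parallel build can race ahead of the header's creation. A minimal sketch of the pattern; the file names and the echo recipe are illustrative, not the actual Xen tools/Rules.mk:]

```make
# Both the static (.o) and PIC (.opic) objects compile sources that may
# include the generated header, so both lists must depend on it.
LIB_OBJS := foo.o bar.o
PIC_OBJS := foo.opic bar.opic

# Illustrative recipe: generate the header before anything compiles.
_paths.h:
	echo '#define SBINDIR "/usr/local/sbin"' > $@

# With only "$(LIB_OBJS): _paths.h", a parallel build could start
# compiling an .opic file before _paths.h exists; listing both object
# sets closes that race.
$(LIB_OBJS) $(PIC_OBJS): _paths.h
```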


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:30:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29889.59579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOU0-0004mA-7V; Wed, 18 Nov 2020 14:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29889.59579; Wed, 18 Nov 2020 14:30:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOU0-0004m3-3k; Wed, 18 Nov 2020 14:30:52 +0000
Received: by outflank-mailman (input) for mailman id 29889;
 Wed, 18 Nov 2020 14:30:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kfOTz-0004ly-7L
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:30:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kfOTz-0002ZT-5Q
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:30:51 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kfOTz-0002wC-4R
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:30:51 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kfOTh-0006rZ-VR; Wed, 18 Nov 2020 14:30:34 +0000
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24501.12297.176680.254553@mariner.uk.xensource.com>
Date: Wed, 18 Nov 2020 14:30:33 +0000
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Roger Pau Monné <roger.pau@citrix.com>,
    Daniel De Graaf <dgdegra@tycho.nsa.gov>,
    "committers@xenproject.org" <committers@xenproject.org>,
    xen-devel@lists.xenproject.org,
    Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
In-Reply-To: <ddb6025e-a4fb-9691-c71f-7a8f0e5543be@xen.org>
References: <20201109163826.13035-1-jgross@suse.com>
	<aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org>
	<1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com>
	<ddb6025e-a4fb-9691-c71f-7a8f0e5543be@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Julien Grall writes ("Re: [PATCH v6 0/3] XSA-343 followup patches"):
> On 18/11/2020 08:22, Jan Beulich wrote:
> > Without a clear outline of what would break with the present logic,
> > I had previously indicated I'm not convinced of the change. This
> > isn't a strong objection, no, but I still wouldn't want to see my
> > name associated with it in such a case.
> 
> I was under the impression that the committer job is mostly mechanical 
> (i.e. collecting tags and applying patches). There are no rules in 
> MAINTAINERS that mention committers can decide what gets committed as 
> long as maintainers approved it and there are no strong objections.

I think in practice committers need to exercise quite a bit of
judgement.  I often find myself deciding on what seem to be edge cases
of the rules.  I also sometimes, with something which has the formally
needed acks, but which seems like it might be controversial or
awkward, decide to just make an extra double-check.  I sometimes even
commit something when maybe the formal rules wouldn't permit it, when
I'm confident that the relevant maintainer(s) etc. wouldn't object -
an example being when something is badly broken and this is allegedly
the fix.

However, I don't agree that a committer should omit committing
something which is acked, and is part of a series which they are
committing, simply because they don't see the need for the patch.

If a committer who is also a maintainer has a firm objection, they
should state that objection as a formal NAK.  If a formal NAK is not
warranted, a committer should not engage in passive obstruction.

Also, a committer should not "silently" commit only part of a series.


In summary, committing something is not a declaration by the committer
that they approve of the patch on a technical level.

It is a declaration that (in the committer's view) the patch is
properly acked/approved, and that therefore according to our shared
understanding of the community's processes, it ought to go in.

That might even occur if the committer has an outstanding technical
objection.  I have certainly committed things that I didn't really
like very much, on the grounds that they had enough acks and that I
didn't feel I wanted to give it a formal NAK.


If it would help a committer feel more comfortable to be explicit
about this, there are a number of approaches available: the committer
could commit the patch but also reply to it on-list restating their
objection but saying that they are committing it despite the
objection, because of others' acks.

If it is felt desirable to record this information in the repository,
one could write something like
   Not-Acked-by: Ian Jackson <iwj@xenproject.org>
(which I think is not the same as Nacked-by) or even add a note in
the commit message like
  [ I am committing this, even though I don't think it is
    necessary, because it has the requisite acks.  I do not
    think it warrants a nack.  -iwj ]

> This is a matter of perspective. It helps to confirm with the
> contributor that it was fine to merge only part of the series
> (multiple pair of eyes is always better than one...) or mentioning
> any clash/reworked.

I think it is especially important to send an email about things being
committed when what has happened might be surprising.  For example, if
only part of a series is committed.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:33:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:33:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29904.59590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOWt-0004xg-MK; Wed, 18 Nov 2020 14:33:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29904.59590; Wed, 18 Nov 2020 14:33:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOWt-0004xZ-JC; Wed, 18 Nov 2020 14:33:51 +0000
Received: by outflank-mailman (input) for mailman id 29904;
 Wed, 18 Nov 2020 14:33:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BWMM=EY=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfOWs-0004xU-JX
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:33:50 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec5adf63-0d4e-45cc-9dec-818a141f8828;
 Wed, 18 Nov 2020 14:33:49 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id a186so435738wme.1
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 06:33:49 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id s2sm4040430wmh.37.2020.11.18.06.33.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 18 Nov 2020 06:33:48 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	"'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	'Roger Pau Monné' <roger.pau@citrix.com>,
	"'Daniel De Graaf'" <dgdegra@tycho.nsa.gov>,
	<committers@xenproject.org>,
	<xen-devel@lists.xenproject.org>,
	"'Juergen Gross'" <jgross@suse.com>
References: <20201109163826.13035-1-jgross@suse.com> <aaa3c26f-4bfa-d881-8e72-112e3108f4b5@xen.org> <1b54d0bb-deab-f4bd-b773-67a716a1fde1@suse.com> <ddb6025e-a4fb-9691-c71f-7a8f0e5543be@xen.org>
In-Reply-To: <ddb6025e-a4fb-9691-c71f-7a8f0e5543be@xen.org>
Subject: RE: [PATCH v6 0/3] XSA-343 followup patches
Date: Wed, 18 Nov 2020 14:33:47 -0000
Message-ID: <00b501d6bdb7$d3b7ef00$7b27cd00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHTfmsryMCTAg2gQ7GRkOex+DklFQIObga+Ak9QwZ8CDQ+RhKmhGxBw

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Julien Grall
> Sent: 18 November 2020 14:18
> To: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>;
> Roger Pau Monné <roger.pau@citrix.com>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>; committers@xenproject.org;
> xen-devel@lists.xenproject.org; Juergen Gross <jgross@suse.com>
> Subject: Re: [PATCH v6 0/3] XSA-343 followup patches
> 
> Hi Jan,
> 
> On 18/11/2020 08:22, Jan Beulich wrote:
> > On 17.11.2020 19:13, Julien Grall wrote:
> >> On 09/11/2020 16:38, Juergen Gross wrote:
> >>> Juergen Gross (3):
> >>>     xen/events: access last_priority and last_vcpu_id together
> >>>     xen/evtchn: rework per event channel lock
> >>>     xen/evtchn: revert 52e1fc47abc3a0123
> >>
> >> While looking at the list of commits, I noticed that the first patch
> >> hasn't been committed. They were all acked/reviewed, so I am a bit
> >> puzzled why this was omitted...
> >>
> >> I have nearly missed it as I was expecting the 3 patches to be
> >> committed together. May I suggest that in the future we reply to the
> >> cover letter and mention which patches are (or not) committed?
> >>
> >> Regarding patch #1, I will commit it tomorrow unless there are strong
> >> objections against.
> >
> > Without a clear outline of what would break with the present logic,
> > I had previously indicated I'm not convinced of the change. This
> > isn't a strong objection, no, but I still wouldn't want to see my
> > name associated with it in such a case.
> 
> I was under the impression that the committer job is mostly mechanical
> (i.e. collecting tags and applying patches). There are no rules in
> MAINTAINERS that mention committers can decide what gets committed as
> long as maintainers approved it and there are no strong objections.
> 
> > Furthermore I clearly view
> > this as not a backporting candidate, while the other two are (as I
> > did previously indicate). Hence the latter two patches wanted
> > re-basing ahead of the first one anyway, to ease the backports.
> 
> I understand the backport concern. However, if there were clashes, then
> it means you had to resolve them on commit to staging. Surely, they were
> quite minimal, otherwise you would have sent an e-mail on xen-devel,
> right?
> 
> > While I will accept there are different views possible here, I also
> > don't think sending mail in such a case is a good use of resources.
> > The information what was or was not committed is readily available.
> > Personally I view such mails as at least very close to spam.
> 
> This is a matter of perspective. It helps to confirm with the
> contributor that it was fine to merge only part of the series (multiple
> pair of eyes is always better than one...) or mentioning any
> clash/reworked.
> 
> It also makes it easier for reviewers to figure out what was committed,
> as it can be difficult to know if a patch was merged because the commit
> title can be altered (even simply dropping the prefix "xen/" can take a
> couple of extra minutes to pinpoint the commit).
> 
> Therefore, I think there is value in such e-mails being sent out. It
> will likely improve coordination among the members of the community.

+1

Knowing that part of a long series has already been committed would be
useful. When I do the usual pull/rebase dance prior to re-submitting I
have, more than once, been surprised by rebase failures because some of
the patches had been committed but were modified in doing so. It's not a
massive problem, but an email would have at least made me aware I needed
to be a bit more careful.

  Paul

> 
> I would be happy to consider a different approach if it fulfils that
> goal.
> 
> Cheers,
> 
> --
> Julien Grall




From xen-devel-bounces@lists.xenproject.org Wed Nov 18 14:39:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 14:39:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29920.59603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOcb-0005Cp-CQ; Wed, 18 Nov 2020 14:39:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29920.59603; Wed, 18 Nov 2020 14:39:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOcb-0005Ci-8z; Wed, 18 Nov 2020 14:39:45 +0000
Received: by outflank-mailman (input) for mailman id 29920;
 Wed, 18 Nov 2020 14:39:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4dya=EY=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfOcZ-0005Cd-9A
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 14:39:43 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab6944e4-dc16-45a4-b4bb-ade02a3dfe0b;
 Wed, 18 Nov 2020 14:39:41 +0000 (UTC)
Date: Wed, 18 Nov 2020 15:39:28 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201118121403.GC3126@antioche.eu.org>
X-ClientProxiedBy: LO4P123CA0004.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::9) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On Wed, Nov 18, 2020 at 01:14:03PM +0100, Manuel Bouyer wrote:
> On Wed, Nov 18, 2020 at 11:00:25AM +0100, Roger Pau Monné wrote:
> > On Wed, Nov 18, 2020 at 10:24:25AM +0100, Manuel Bouyer wrote:
> > > On Wed, Nov 18, 2020 at 09:57:38AM +0100, Roger Pau Monné wrote:
> > > > On Tue, Nov 17, 2020 at 05:40:33PM +0100, Manuel Bouyer wrote:
> > > > > On Tue, Nov 17, 2020 at 04:58:07PM +0100, Roger Pau Monné wrote:
> > > > > > [...]
> > > > > > 
> > > > > > I have attached a patch below that will dump the vIO-APIC info as part
> > > > > > of the 'i' debug key output, can you paste the whole output of the 'i'
> > > > > > debug key when the system stalls?
> > > > > 
> > > > > see attached file. Note that the kernel did unstall while 'i' output was
> > > > > being printed, so it is mixed with some NetBSD kernel output.
> > > > > The idt entry of the 'ioapic2 pin2' interrupt is 103 on CPU 0.
> > > > > 
> > > > > I also put the whole sequence at
> > > > > http://www-soc.lip6.fr/~bouyer/xen-log3.txt
> > > > 
> > > > On one of the instances the pin shows up as masked, but I'm not sure
> > > > if that's relevant since later it shows up as unmasked. Might just be
> > > > part of how NetBSD handles such interrupts.
> > > 
> > > Yes, NetBSD can mask an interrupt source if the interrupts needs to be delayed.
> > > It will be unmasked once the interrupt has been handled.
> > 
> > Yes, I think that's roughly the same model that FreeBSD uses for
> > level IO-APIC interrupts: mask it until the handlers have been run.
> > 
> > > Would it be possible that Xen misses an unmask write, or fails to
> > > call the vector if the interrupt is again pending at the time of the
> > > unmask ?
> > 
> > Well, it should work properly, but we cannot discard anything.
> 
> I did some more instrumentation from the NetBSD kernel, including dumping
> the ioapic2 pin2 register.
> 
> At the time of the command timeout, the register value is 0x0000a067,
> which, if I understand it properly, means that there's no interrupt
> pending (bit IOAPIC_REDLO_RIRR, 0x00004000, is not set).
> From the NetBSD ddb, I can dump this register multiple times, waiting
> several seconds, etc.; it doesn't change.
> Now if I call ioapic_dump_raw() from the debugger, which triggers some
> XEN printf:
> db{0}> call ioapic_dump_raw^M
> Register dump of ioapic0^M
> [ 203.5489060] 00 08000000 00170011 08000000(XEN) vioapic.c:124:d0v0 apic_mem_re
> adl:undefined ioregsel 3
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
>  00000000^M
> [ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
>  00000000^M
> [ 203.5489060] 10 00010000 00000000 00010000 00000000 00010000 00000000 00010000 00000000^M
> [...]
> [ 203.5489060] Register dump of ioapic2^M
> [ 203.5489060] 00 0a000000 00070011 0a000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 3
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
>  00000000^M
> [ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
>  00000000^M
> [ 203.5489060] 10 00010000 00000000 00010000 00000000 0000e067 00000000 00010000 00000000^M
> 
> then the register switches to 0000e067, with the IOAPIC_REDLO_RIRR bit set.
> From here, if I continue from ddb, the dom0 boots.
> 
> I can get the same effect by just doing ^A^A^A, so my guess is that it's
> not accessing the ioapic's register which changes the IOAPIC_REDLO_RIRR bit,
> but the XEN printf. Also, from NetBSD, using a dump function which
> doesn't access undefined registers - and so doesn't trigger XEN printfs -
> doesn't change the IOAPIC_REDLO_RIRR bit either.
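For readers following along, the redirection-entry fields being decoded here can be checked mechanically. Below is a minimal sketch in C; the IOAPIC_REDLO_* names follow the NetBSD constant quoted above (RIRR = 0x00004000), while the remaining bit positions (vector in bits 0-7, level-trigger bit 15, mask bit 16) are the standard IO-APIC redirection-table layout. The helper function names are invented for this sketch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Low 32 bits of an IO-APIC redirection table entry. The RIRR value
 * matches the one quoted in this thread; the rest is the standard
 * IO-APIC RTE layout. */
#define IOAPIC_REDLO_VECTOR_MASK 0x000000ffu  /* bits 0-7: vector */
#define IOAPIC_REDLO_RIRR        0x00004000u  /* bit 14: remote IRR (in service) */
#define IOAPIC_REDLO_LEVEL       0x00008000u  /* bit 15: level-triggered */
#define IOAPIC_REDLO_MASK        0x00010000u  /* bit 16: pin masked */

static inline bool rte_in_service(uint32_t redlo)
{
    /* Remote IRR set: a level interrupt was delivered and not yet EOIed. */
    return (redlo & IOAPIC_REDLO_RIRR) != 0;
}

static inline bool rte_masked(uint32_t redlo)
{
    return (redlo & IOAPIC_REDLO_MASK) != 0;
}

static inline bool rte_level_triggered(uint32_t redlo)
{
    return (redlo & IOAPIC_REDLO_LEVEL) != 0;
}

static inline uint8_t rte_vector(uint32_t redlo)
{
    return (uint8_t)(redlo & IOAPIC_REDLO_VECTOR_MASK);
}
```

Decoding 0x0000a067 this way gives vector 0x67, level-triggered, unmasked, remote IRR clear; 0x0000e067 is the same entry with remote IRR set, matching the register dumps above (and the idle pins read 0x00010000, i.e. masked).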

I'm thinking about further ways to debug this. I see that all active
IO-APIC pins are routed to vCPU0, but does it make a difference if you
boot with dom0_max_vcpus=1 on the Xen command line? (thus limiting
NetBSD dom0 to a single CPU)

I can also prepare a patch that will periodically dump the same stuff
as the 'i' debug key without you having to press anything, but I'm not
sure if it would help much.

Also, does the system work fine once you reach multiuser, or does it also
randomly freeze and require further poking?

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:00:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29952.59632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOwK-00087P-LX; Wed, 18 Nov 2020 15:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29952.59632; Wed, 18 Nov 2020 15:00:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOwK-00087I-IR; Wed, 18 Nov 2020 15:00:08 +0000
Received: by outflank-mailman (input) for mailman id 29952;
 Wed, 18 Nov 2020 15:00:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfOwI-00083Q-W2
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:00:07 +0000
Received: from de-smtp-delivery-52.mimecast.com (unknown [194.104.109.52])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6ae07c7-9359-4927-8225-eb3673a89ab0;
 Wed, 18 Nov 2020 15:00:05 +0000 (UTC)
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2104.outbound.protection.outlook.com [104.47.18.104])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-1-z_sTIsAbNCq5bfKbnMzdZA-1; Wed, 18 Nov 2020 16:00:02 +0100
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7039.eurprd04.prod.outlook.com (2603:10a6:800:12b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Wed, 18 Nov
 2020 15:00:01 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::6882:d72e:9dfd:349e]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::6882:d72e:9dfd:349e%5]) with mapi id 15.20.3541.028; Wed, 18 Nov 2020
 15:00:01 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM4PR0302CA0023.eurprd03.prod.outlook.com (2603:10a6:205:2::36) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 15:00:00 +0000
Authentication-Results: antioche.eu.org; dkim=none (message not signed)
 header.d=none;antioche.eu.org; dmarc=none action=none header.from=suse.com;
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: xen-devel@lists.xenproject.org, Manuel Bouyer <bouyer@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
Date: Wed, 18 Nov 2020 15:59:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
In-Reply-To: <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM4PR0302CA0023.eurprd03.prod.outlook.com
 (2603:10a6:205:2::36) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [10.156.60.236] (37.24.206.209) by AM4PR0302CA0023.eurprd03.prod.outlook.com (2603:10a6:205:2::36) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 15:00:00 +0000

On 18.11.2020 15:39, Roger Pau Monné wrote:
> On Wed, Nov 18, 2020 at 01:14:03PM +0100, Manuel Bouyer wrote:
>> I did some more instrumentation from the NetBSD kernel, including dumping
>> the ioapic2 pin2 register.
>>
>> At the time of the command timeout, the register value is 0x0000a067,
>> which, if I understand it properly, means that there's no interrupt
>> pending (bit IOAPIC_REDLO_RIRR, 0x00004000, is not set).
>> From the NetBSD ddb, I can dump this register multiple times, waiting
>> several seconds, etc.; it doesn't change.
>> Now if I call ioapic_dump_raw() from the debugger, which triggers some
>> XEN printf:
>> db{0}> call ioapic_dump_raw^M
>> Register dump of ioapic0^M
>> [ 203.5489060] 00 08000000 00170011 08000000(XEN) vioapic.c:124:d0v0 apic_mem_re
>> adl:undefined ioregsel 3
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
>>  00000000^M
>> [ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
>>  00000000^M
>> [ 203.5489060] 10 00010000 00000000 00010000 00000000 00010000 00000000 00010000 00000000^M
>> [...]
>> [ 203.5489060] Register dump of ioapic2^M
>> [ 203.5489060] 00 0a000000 00070011 0a000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 3
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
>>  00000000^M
>> [ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
>>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
>>  00000000^M
>> [ 203.5489060] 10 00010000 00000000 00010000 00000000 0000e067 00000000 00010000 00000000^M
>>
>> then the register switches to 0000e067, with the IOAPIC_REDLO_RIRR bit set.
>> From here, if I continue from ddb, the dom0 boots.
>>
>> I can get the same effect by just doing ^A^A^A, so my guess is that it's
>> not accessing the ioapic's register which changes the IOAPIC_REDLO_RIRR bit,
>> but the XEN printf. Also, from NetBSD, using a dump function which
>> doesn't access undefined registers - and so doesn't trigger XEN printfs -
>> doesn't change the IOAPIC_REDLO_RIRR bit either.
> 
> I'm thinking about further ways to debug this. I see that all active
> IO-APIC pins are routed to vCPU0, but does it make a difference if you
> boot with dom0_max_vcpus=1 on the Xen command line? (thus limiting
> NetBSD dom0 to a single CPU)

I too have been pondering possible approaches. One thing I thought might
help is to accompany all places setting remote_irr (and calling
vioapic_deliver()) with a conditional log message, turning on the
condition immediately before the first "undefined ioregsel" gets logged.
(And turn it off again once the last RTE has been read in sequence, just
to avoid spamming the console.) From Manuel's description above, there has
to be something that sets the bit and causes the delivery _without_ any
active action by the guest (i.e. neither EOI nor RTE write) and
_without_ any new instance of the IRQ appearing. I have some vague hope
that knowing how we end up making the system make progress again may
also help understand how it got stuck.
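As an illustration of the kind of gated instrumentation proposed here (not actual Xen code; the function and variable names below are made up for this sketch, standing in for code that would live next to the vioapic remote_irr handling), a one-shot trace flag could look like:

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative trace gate: armed from the "undefined ioregsel" read path,
 * disarmed once the RTE sweep completes, so the console is only noisy
 * during the window of interest. */
static bool vioapic_trace_active;

static void trace_arm(void)
{
    vioapic_trace_active = true;
}

static void trace_disarm(void)
{
    vioapic_trace_active = false;
}

/* Called wherever remote_irr is set (i.e. alongside vioapic_deliver());
 * returns whether a message was emitted, which makes the gating testable. */
static bool trace_remote_irr_set(unsigned int pin)
{
    if (!vioapic_trace_active)
        return false;
    fprintf(stderr, "vioapic: remote_irr set on pin %u\n", pin);
    return true;
}
```

Arming on the first "undefined ioregsel" access and disarming after the last RTE read bounds the logging exactly to the window in which the mysterious RIRR transition is observed.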

Jan



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:00:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29958.59645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOx7-0008Jo-2A; Wed, 18 Nov 2020 15:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29958.59645; Wed, 18 Nov 2020 15:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOx6-0008Jf-TG; Wed, 18 Nov 2020 15:00:56 +0000
Received: by outflank-mailman (input) for mailman id 29958;
 Wed, 18 Nov 2020 15:00:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfOx5-0008JW-Pb; Wed, 18 Nov 2020 15:00:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfOx5-0003DE-9g; Wed, 18 Nov 2020 15:00:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfOx5-0004H5-1q; Wed, 18 Nov 2020 15:00:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfOx5-0000TZ-1M; Wed, 18 Nov 2020 15:00:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156855-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156855: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5200fba9ce534fc55ec40ab622b6058600090415
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 15:00:55 +0000

flight 156855 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156855/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5200fba9ce534fc55ec40ab622b6058600090415
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156689  2020-11-11 22:01:23 Z    6 days
Testing same since   156855  2020-11-18 12:00:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5505f5f8e7..5200fba9ce  5200fba9ce534fc55ec40ab622b6058600090415 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:01:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:01:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29965.59660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOxy-0008TE-BO; Wed, 18 Nov 2020 15:01:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29965.59660; Wed, 18 Nov 2020 15:01:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOxy-0008T7-7D; Wed, 18 Nov 2020 15:01:50 +0000
Received: by outflank-mailman (input) for mailman id 29965;
 Wed, 18 Nov 2020 15:01:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0zvd=EY=gmx.de=xypron.glpk@srs-us1.protection.inumbo.net>)
 id 1kfOxw-0008Sy-LQ
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:01:48 +0000
Received: from mout.gmx.net (unknown [212.227.17.22])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a591a40c-982b-4e53-9a86-d9e6f00579b5;
 Wed, 18 Nov 2020 15:01:46 +0000 (UTC)
Received: from [192.168.123.70] ([178.202.41.135]) by mail.gmx.com (mrgmx104
 [212.227.17.168]) with ESMTPSA (Nemesis) id 1MsHnm-1kQlWL2f0s-00th5v; Wed, 18
 Nov 2020 16:01:31 +0100
Subject: Re: [SPECIFICATION RFC] The firmware and bootloader log specification
To: Daniel Kiper <daniel.kiper@oracle.com>, coreboot@coreboot.org,
 grub-devel@gnu.org, linux-kernel@vger.kernel.org,
 systemd-devel@lists.freedesktop.org, trenchboot-devel@googlegroups.com,
 u-boot@lists.denx.de, x86@kernel.org, xen-devel@lists.xenproject.org
Cc: alecb@umass.edu, alexander.burmashev@oracle.com, allen.cryptic@gmail.com,
 andrew.cooper3@citrix.com, ard.biesheuvel@linaro.org, btrotter@gmail.com,
 dpsmith@apertussolutions.com, eric.devolder@oracle.com,
 eric.snowberg@oracle.com, hpa@zytor.com, hun@n-dimensional.de,
 javierm@redhat.com, joao.m.martins@oracle.com, kanth.ghatraju@oracle.com,
 konrad.wilk@oracle.com, krystian.hebel@3mdeb.com, leif@nuviainc.com,
 lukasz.hawrylko@intel.com, luto@amacapital.net, michal.zygowski@3mdeb.com,
 mjg59@google.com, mtottenh@akamai.com, phcoder@gmail.com,
 piotr.krol@3mdeb.com, pjones@redhat.com, pmenzel@molgen.mpg.de,
 roger.pau@citrix.com, ross.philipson@oracle.com, tyhicks@linux.microsoft.com
References: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
From: Heinrich Schuchardt <xypron.glpk@gmx.de>
Message-ID: <ef50dac9-3ded-6836-28b1-7addb0bab986@gmx.de>
Date: Wed, 18 Nov 2020 16:01:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201113235242.k6fzlwmwm2xqhqsi@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

On 14.11.20 00:52, Daniel Kiper wrote:
> Hey,
>
> This is next attempt to create firmware and bootloader log specification.
> Due to high interest among industry it is an extension to the initial
> bootloader log only specification. It takes into the account most of the
> comments which I got up until now.
>
> The goal is to pass all logs produced by various boot components to the
> running OS. The OS kernel should expose these logs to the user space
> and/or process them internally if needed. The content of these logs
> should be human readable. However, they should also contain the
> information which allows admins to do e.g. boot time analysis.
>
> The log specification should be as much as possible platform agnostic
> and self contained. The final version of this spec should be merged into
> existing specifications, e.g. UEFI, ACPI, Multiboot2, or be a standalone
> spec, e.g. as a part of OASIS Standards. The former seems better but is
> not perfect too...
>
> Here is the description (pseudocode) of the structures which will be
> used to store the log data.

Hello Daniel,

thanks for your suggestion, which makes good sense to me.

Why can't we simply use the message format defined in "The Syslog
Protocol", https://tools.ietf.org/html/rfc5424?
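[Editorial note: for readers unfamiliar with RFC 5424, a message there starts
with a PRI field computed as facility * 8 + severity, followed by version,
timestamp, hostname, app-name, procid, msgid, structured data and the free-form
message, with "-" as the nil value. A hypothetical boot-log record rendered
that way, sketched in Python; the producer name and message text are invented:]

```python
# Hypothetical example: one boot-log record encoded as an RFC 5424 syslog
# message. PRI = facility * 8 + severity per the RFC.
facility = 0   # kern
severity = 6   # informational
pri = facility * 8 + severity

# "-" is the RFC 5424 NILVALUE, used where timestamp/hostname/procid/msgid
# are not available this early in boot.
record = f"<{pri}>1 - - grub2 - - - loader/i386/linux.c: booting the kernel"
print(record)
```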

>
>   struct bf_log
>   {
>     uint32_t   version;
>     char       producer[64];
>     uint64_t   flags;
>     uint64_t   next_bf_log_addr;
>     uint32_t   next_msg_off;
>     bf_log_msg msgs[];

As bf_log_msg does not have a defined length, msgs[] cannot be an array.

>   }
>
>   struct bf_log_msg
>   {
>     uint32_t size;
>     uint64_t ts_nsec;
>     uint32_t level;
>     uint32_t facility;
>     uint32_t msg_off;
>     char     strings[];
>   }
>
> The members of struct bf_log:
>   - version: the firmware and bootloader log format version number, 1 for now,
>   - producer: the producer/firmware/bootloader/... type; the length
>     allows ASCII UUID storage if somebody needs that functionality,
>   - flags: it can be used to store information about log state, e.g.
>     it was truncated or not (does it make sense to have an information
>     about the number of lost messages?),
>   - next_bf_log_addr: address of next bf_log struct; none if zero (I think
>     newer spec versions should not change anything in first 5 bf_log members;
>     this way older log parsers will be able to traverse/copy all logs regardless
>     of version used in one log or another),
>   - next_msg_off: the offset, in bytes, from the beginning of the bf_log struct,
>     of the next byte after the last log message in the msgs[]; i.e. the offset
>     of the next available log message slot; it is equal to the total size of
>     the log buffer including the bf_log struct,

Why would you need an offset to the first unused byte?

We possibly have multiple producers of messages:

- TF-A
- U-Boot
- iPXE
- GRUB

What we need is the offset to the next struct bf_log.

>   - msgs: the array of log messages,
>   - should we add CRC or hash or signatures here?
>
> The members of struct bf_log_msg:
>   - size: total size of bf_log_msg struct,
>   - ts_nsec: timestamp expressed in nanoseconds starting from 0,

Would each message producer start from 0?

Shouldn't we use the time from the hardware RTC if it is available via
the UEFI runtime service GetTime()?

>   - level: similar to syslog meaning; can be used to differentiate normal messages
>     from debug messages; the exact interpretation depends on the current producer
>     type specified in the bf_log.producer,
>   - facility: similar to syslog meaning; can be used to differentiate the sources of
>     the messages, e.g. message produced by networking module; the exact interpretation
>     depends on the current producer type specified in the bf_log.producer,
>   - msg_off: the log message offset in strings[],

What is this field good for? Why don't you start the string at
strings[0]?
What would be useful would be the offset to the next bf_log_msg.

>   - strings[0]: the beginning of log message type, similar to the facility member but
>     NUL terminated string instead of integer; this will be used by, e.g., the GRUB2
>     for messages printed using grub_dprintf(),
>   - strings[msg_off]: the beginning of log message, NUL terminated string.


Why strings in plural? Do you want to put multiple strings into
'strings'? What identifies the last string?


>
> Note: The producers are free to use/ignore any given set of level, facility and/or
>       log type members. Though the usage of these members has to be clearly defined.
>       Ignored integer members should be set to 0. Ignored log message type should
>       contain an empty NUL terminated string. The log message is mandatory but can
>       be an empty NUL terminated string.
>
> There is still not fully solved problem how the logs should be presented to the OS.
> On the UEFI platforms we can use config tables to do that. Then probably
> bf_log.next_bf_log_addr should not be used.

Why? How would you otherwise find the entries of the next producer in
the configuration table? What I am missing is a GUID for the
configuration table.

> On the ACPI and Device Tree platforms
> we can use these mechanisms to present the logs to the OSes. The situation gets more

I do not understand this.

UEFI implementations use either ACPI or device trees, and support
configuration tables. Why do you want to use some other binding?

Best regards

Heinrich

> difficult if neither of these mechanisms are present. However, maybe we should not
> bother too much about that because probably these platforms getting less and less
> common.
>
> Anyway, I am aware that this is not specification per se. The goal of this email is
> to continue the discussion about the idea of the firmware and bootloader log and to
> find out where the final specification should land. Of course taking into the account
> assumptions made above.
>
> You can find previous discussions about related topics at [1], [2] and [3].
>
> Additionally, I am going to present this during GRUB mini-summit session on Tuesday,
> 17th of November at 15:45 UTC. So, if you want to discuss the log design please join
> us. You can find more details here [4].
>
> Daniel
>
> [1] https://lists.gnu.org/archive/html/grub-devel/2019-10/msg00107.html
> [2] https://lists.gnu.org/archive/html/grub-devel/2019-11/msg00079.html
> [3] https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00223.html
> [4] https://twitter.com/3mdeb_com/status/1327278804100931587
>



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:03:02 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
Thread-Topic: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
Thread-Index: AQHWvBOve/67EvsNk0ODGsWoVvEIcanMKDEAgAHXRoA=
Date: Wed, 18 Nov 2020 15:02:20 +0000
Message-ID: <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
In-Reply-To: <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: multipart/mixed;
	boundary="_002_CBBE4253F244418D9EA6BC39D1BC8DF8armcom_"
MIME-Version: 1.0

--_002_CBBE4253F244418D9EA6BC39D1BC8DF8armcom_
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A686D0754304BA40BB81A637B3456653@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable

Hello Jan,

> On 17 Nov 2020, at 10:55 am, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 16.11.2020 13:25, Rahul Singh wrote:
>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>> is enabled for ARM, compilation error is observed for ARM architecture
>> because ARM platforms do not have full PCI support available.
>
> While you've extended the sentence, it remains unclear to me what
> compilation error it is that results here. I've requested such
> clarification for v2 already.

The compilation errors are related to code that refers to x86-only functions
(create_irq(), ...) and to the MSI implementation.
For more details please find the attached file with the compilation errors.

>
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -4,6 +4,10 @@ config HAS_NS16550
>> 	help
>> 	  This selects the 16550-series UART support. For most systems, say Y.
>>
>> +config HAS_NS16550_PCI
>> +	def_bool y
>> +	depends on X86 && HAS_NS16550 && HAS_PCI
>
> Looking at this again (in particular at all the #ifdef changes in
> the actual source file), I wonder whether an approach with less
> code churn and without such an extra Kconfig setting (with, as
> said, a bogus dependency on x86) couldn't be found. For example,
> how about ...
>
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -16,7 +16,7 @@
>> #include <xen/timer.h>
>> #include <xen/serial.h>
>> #include <xen/iocap.h>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>> #include <xen/pci.h>
>> #include <xen/pci_regs.h>
>> #include <xen/pci_ids.h>
>
> ... #undef-ining CONFIG_HAS_PCI at a suitable position in this
> file (e.g. after all #include-s, to make sure all structure
> layouts remain correct)? This would then be far easier to revert
> down the road, and would confine the oddity to a single file
> (and there a single place) in the code base.
>

As for ARM platforms, the PCI implementation is still in development, and I am
not sure whether the ns16550 PCI part of the code will work out of the box once
that work is complete. I think some effort is required to test the ns16550 PCI
part of the code on ARM.
As this code has been tested on x86 only, I made the option depend on X86 and
enabled it by default there.

I feel that adding a new Kconfig option to enable/disable the PCI NS16550
support is preferable to #undef-ing CONFIG_HAS_PCI in that particular file.
If in future another architecture wants to implement PCI, it will face
similar compilation errors.

Please suggest how we can proceed on this.


> Jan

Regards,
Rahul

--_002_CBBE4253F244418D9EA6BC39D1BC8DF8armcom_
Content-Type: text/rtf; name="compilation error.rtf"
Content-Description: compilation error.rtf
Content-Disposition: attachment; filename="compilation error.rtf"; size=10164;
	creation-date="Wed, 18 Nov 2020 15:02:19 GMT";
	modification-date="Wed, 18 Nov 2020 15:02:19 GMT"
Content-ID: <0D62BC021FDFEC47AF113ECB3865B0DA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64

[Attachment decoded from base64 RTF for readability; GCC output transcribed:]

ns16550.c: In function 'ns16550_init_irq':
ns16550.c:726:21: error: implicit declaration of function 'create_irq'; did you mean 'release_irq'? [-Werror=implicit-function-declaration]
         uart->irq = create_irq(0, false);
                     ^~~~~~~~~~
                     release_irq
ns16550.c:726:21: error: nested extern declaration of 'create_irq' [-Werror=nested-externs]
ns16550.c: In function 'ns16550_init_postirq':
ns16550.c:768:33: error: 'mmio_ro_ranges' undeclared (first use in this function); did you mean 'mmio_handler'?
              rangeset_add_range(mmio_ro_ranges, uart->io_base,
                                 ^~~~~~~~~~~~~~
                                 mmio_handler
ns16550.c:768:33: note: each undeclared identifier is reported only once for each function it appears in
ns16550.c:780:20: error: variable 'msi' has initializer but incomplete type
             struct msi_info msi = {
                    ^~~~~~~~
ns16550.c:781:18: error: 'struct msi_info' has no member named 'bus'
                 .bus = uart->ps_bdf[0],
                  ^~~
ns16550.c:781:24: error: excess elements in struct initializer [-Werror]
                 .bus = uart->ps_bdf[0],
                        ^~~~
ns16550.c:781:24: note: (near initialization for 'msi')
ns16550.c:782:18: error: 'struct msi_info' has no member named 'devfn'
                 .devfn = PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2]),
                  ^~~~~
In file included from /home/rahsin01/work/xen-scm-git-code/fresh-test-code/xen/xen/include/xen/iommu.h:25:0,
                 from /home/rahsin01/work/xen-scm-git-code/fresh-test-code/xen/xen/include/xen/sched.h:12,
                 from ns16550.c:15:
/home/rahsin01/work/xen-scm-git-code/fresh-test-code/xen/xen/include/xen/pci.h:33:25: error: excess elements in struct initializer [-Werror]
 #define PCI_DEVFN(d,f)  ((((d) & 0x1f) << 3) | ((f) & 0x07))
                         ^
ns16550.c:782:26: note: in expansion of macro 'PCI_DEVFN'
                 .devfn = PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2]),
                          ^~~~~~~~~
/home/rahsin01/work/xen-scm-git-code/fresh-test-code/xen/xen/include/xen/pci.h:33:25: note: (near initialization for 'msi')
 #define PCI_DEVFN(d,f)  ((((d) & 0x1f) << 3) | ((f) & 0x07))
                         ^
ns16550.c:782:26: note: in expansion of macro 'PCI_DEVFN'
                 .devfn = PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2]),
                          ^~~~~~~~~
ns16550.c:783:18: error: 'struct msi_info' has no member named 'irq'
                 .irq = rc = uart->irq,
                  ^~~
ns16550.c:783:24: error: excess elements in struct initializer [-Werror]
IFxjZjMgXVwNCiAgICAgICAgICAgICAgICAgLmlycSA9IA0KXGYwXGIgXGNmMiByYw0KXGYxXGIw
IFxjZjMgID0gdWFydC0+aXJxLFwNCiAgICAgICAgICAgICAgICAgICAgICAgIA0KXGYwXGIgXGNm
MiBefg0KXGYxXGIwIFxjZjMgXA0KDQpcZjBcYiBuczE2NTUwLmM6NzgzOjI0Og0KXGYxXGIwICAN
ClxmMFxiIFxjZjUgbm90ZTogDQpcZjFcYjAgXGNmMyAobmVhciBpbml0aWFsaXphdGlvbiBmb3Ig
XCc5MQ0KXGYwXGIgbXNpDQpcZjFcYjAgXCc5MilcDQoNClxmMFxiIG5zMTY1NTAuYzo3ODQ6MTg6
DQpcZjFcYjAgIA0KXGYwXGIgXGNmMiBlcnJvcjogDQpcZjFcYjAgXGNmMyBcJzkxDQpcZjBcYiBz
dHJ1Y3QgbXNpX2luZm8NClxmMVxiMCBcJzkyIGhhcyBubyBtZW1iZXIgbmFtZWQgXCc5MQ0KXGYw
XGIgZW50cnlfbnINClxmMVxiMCBcJzkyXA0KICAgICAgICAgICAgICAgICAuDQpcZjBcYiBcY2Yy
IGVudHJ5X25yDQpcZjFcYjAgXGNmMyAgPSAxXA0KICAgICAgICAgICAgICAgICAgDQpcZjBcYiBc
Y2YyIF5+fn5+fn5+DQpcZjFcYjAgXGNmMyBcDQoNClxmMFxiIG5zMTY1NTAuYzo3ODQ6Mjk6DQpc
ZjFcYjAgIA0KXGYwXGIgXGNmMiBlcnJvcjogDQpcZjFcYjAgXGNmMyBleGNlc3MgZWxlbWVudHMg
aW4gc3RydWN0IGluaXRpYWxpemVyIFsNClxmMFxiIFxjZjIgLVdlcnJvcg0KXGYxXGIwIFxjZjMg
XVwNCiAgICAgICAgICAgICAgICAgLmVudHJ5X25yID0gDQpcZjBcYiBcY2YyIDENClxmMVxiMCBc
Y2YzIFwNCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgDQpcZjBcYiBcY2YyIF4NClxmMVxi
MCBcY2YzIFwNCg0KXGYwXGIgbnMxNjU1MC5jOjc4NDoyOToNClxmMVxiMCAgDQpcZjBcYiBcY2Y1
IG5vdGU6IA0KXGYxXGIwIFxjZjMgKG5lYXIgaW5pdGlhbGl6YXRpb24gZm9yIFwnOTENClxmMFxi
IG1zaQ0KXGYxXGIwIFwnOTIpXA0KDQpcZjBcYiBuczE2NTUwLmM6NzgwOjI5Og0KXGYxXGIwICAN
ClxmMFxiIFxjZjIgZXJyb3I6IA0KXGYxXGIwIFxjZjMgc3RvcmFnZSBzaXplIG9mIFwnOTENClxm
MFxiIG1zaQ0KXGYxXGIwIFwnOTIgaXNuXCc5MnQga25vd25cDQogICAgICAgICAgICAgc3RydWN0
IG1zaV9pbmZvIA0KXGYwXGIgXGNmMiBtc2kNClxmMVxiMCBcY2YzICA9IFx7XA0KICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICANClxmMFxiIFxjZjIgXn5+DQpcZjFcYjAgXGNmMyBcDQoNClxm
MFxiIG5zMTY1NTAuYzo3OTM6MjI6DQpcZjFcYjAgIA0KXGYwXGIgXGNmMiBlcnJvcjogDQpcZjFc
YjAgXGNmMyBpbXBsaWNpdCBkZWNsYXJhdGlvbiBvZiBmdW5jdGlvbiBcJzkxDQpcZjBcYiBwY2lf
ZW5hYmxlX21zaQ0KXGYxXGIwIFwnOTI7IGRpZCB5b3UgbWVhbiBcJzkxDQpcZjBcYiBoYXBfZW5h
YmxlZA0KXGYxXGIwIFwnOTI/IFsNClxmMFxiIFxjZjIgLVdlcnJvcj1pbXBsaWNpdC1mdW5jdGlv
bi1kZWNsYXJhdGlvbg0KXGYxXGIwIFxjZjMgXVwNCiAgICAgICAgICAgICAgICAgcmMgPSANClxm
MFxiIFxjZjIgcGNpX2VuYWJsZV9tc2kNClxmMVxiMCBcY2YzICgmbXNpLCAmbXNpX2Rlc2MpO1wN
CiAgICAgICAgICAgICAgICAgICAgICANClxmMFxiIFxjZjIgXn5+fn5+fn5+fn5+fn4NClxmMVxi
MCBcY2YzIFwNCiAgICAgICAgICAgICAgICAgICAgICBcY2Y0IGhhcF9lbmFibGVkXGNmMyBcDQoN
ClxmMFxiIG5zMTY1NTAuYzo3OTM6MjI6DQpcZjFcYjAgIA0KXGYwXGIgXGNmMiBlcnJvcjogDQpc
ZjFcYjAgXGNmMyBuZXN0ZWQgZXh0ZXJuIGRlY2xhcmF0aW9uIG9mIFwnOTENClxmMFxiIHBjaV9l
bmFibGVfbXNpDQpcZjFcYjAgXCc5MiBbDQpcZjBcYiBcY2YyIC1XZXJyb3I9bmVzdGVkLWV4dGVy
bnMNClxmMVxiMCBcY2YzIF1cDQoNClxmMFxiIG5zMTY1NTAuYzo4MDA6MjY6DQpcZjFcYjAgIA0K
XGYwXGIgXGNmMiBlcnJvcjogDQpcZjFcYjAgXGNmMyBpbXBsaWNpdCBkZWNsYXJhdGlvbiBvZiBm
dW5jdGlvbiBcJzkxDQpcZjBcYiBzZXR1cF9tc2lfaXJxDQpcZjFcYjAgXCc5MjsgZGlkIHlvdSBt
ZWFuIFwnOTENClxmMFxiIHNldHVwX2lycQ0KXGYxXGIwIFwnOTI/IFsNClxmMFxiIFxjZjIgLVdl
cnJvcj1pbXBsaWNpdC1mdW5jdGlvbi1kZWNsYXJhdGlvbg0KXGYxXGIwIFxjZjMgXVwNCiAgICAg
ICAgICAgICAgICAgICAgIHJjID0gDQpcZjBcYiBcY2YyIHNldHVwX21zaV9pcnENClxmMVxiMCBc
Y2YzIChkZXNjLCBtc2lfZGVzYyk7XA0KICAgICAgICAgICAgICAgICAgICAgICAgICANClxmMFxi
IFxjZjIgXn5+fn5+fn5+fn5+fg0KXGYxXGIwIFxjZjMgXA0KICAgICAgICAgICAgICAgICAgICAg
ICAgICBcY2Y0IHNldHVwX2lycVxjZjMgXA0KDQpcZjBcYiBuczE2NTUwLmM6ODAwOjI2Og0KXGYx
XGIwICANClxmMFxiIFxjZjIgZXJyb3I6IA0KXGYxXGIwIFxjZjMgbmVzdGVkIGV4dGVybiBkZWNs
YXJhdGlvbiBvZiBcJzkxDQpcZjBcYiBzZXR1cF9tc2lfaXJxDQpcZjFcYjAgXCc5MiBbDQpcZjBc
YiBcY2YyIC1XZXJyb3I9bmVzdGVkLWV4dGVybnMNClxmMVxiMCBcY2YzIF1cDQoNClxmMFxiIG5z
MTY1NTAuYzo4MDM6MjU6DQpcZjFcYjAgIA0KXGYwXGIgXGNmMiBlcnJvcjogDQpcZjFcYjAgXGNm
MyBpbXBsaWNpdCBkZWNsYXJhdGlvbiBvZiBmdW5jdGlvbiBcJzkxDQpcZjBcYiBwY2lfZGlzYWJs
ZV9tc2kNClxmMVxiMCBcJzkyOyBkaWQgeW91IG1lYW4gXCc5MQ0KXGYwXGIgZ2ljX2Rpc2FibGVf
Y3B1DQpcZjFcYjAgXCc5Mj8gWw0KXGYwXGIgXGNmMiAtV2Vycm9yPWltcGxpY2l0LWZ1bmN0aW9u
LWRlY2xhcmF0aW9uDQpcZjFcYjAgXGNmMyBdXA0KICAgICAgICAgICAgICAgICAgICAgICAgIA0K
XGYwXGIgXGNmMiBwY2lfZGlzYWJsZV9tc2kNClxmMVxiMCBcY2YzIChtc2lfZGVzYyk7XA0KICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KXGYwXGIgXGNmMiBefn5+fn5+fn5+fn5+fn4NClxmMVxi
MCBcY2YzIFwNCiAgICAgICAgICAgICAgICAgICAgICAgICBcY2Y0IGdpY19kaXNhYmxlX2NwdVxj
ZjMgXA0KDQpcZjBcYiBuczE2NTUwLmM6ODAzOjI1Og0KXGYxXGIwICANClxmMFxiIFxjZjIgZXJy
b3I6IA0KXGYxXGIwIFxjZjMgbmVzdGVkIGV4dGVybiBkZWNsYXJhdGlvbiBvZiBcJzkxDQpcZjBc
YiBwY2lfZGlzYWJsZV9tc2kNClxmMVxiMCBcJzkyIFsNClxmMFxiIFxjZjIgLVdlcnJvcj1uZXN0
ZWQtZXh0ZXJucw0KXGYxXGIwIFxjZjMgXVwNCg0KXGYwXGIgbnMxNjU1MC5jOjgxMjoyNToNClxm
MVxiMCAgDQpcZjBcYiBcY2YyIGVycm9yOiANClxmMVxiMCBcY2YzIGltcGxpY2l0IGRlY2xhcmF0
aW9uIG9mIGZ1bmN0aW9uIFwnOTENClxmMFxiIG1zaV9mcmVlX2lycQ0KXGYxXGIwIFwnOTI7IGRp
ZCB5b3UgbWVhbiBcJzkxDQpcZjBcYiB2Z2ljX2ZyZWVfdmlycQ0KXGYxXGIwIFwnOTI/IFsNClxm
MFxiIFxjZjIgLVdlcnJvcj1pbXBsaWNpdC1mdW5jdGlvbi1kZWNsYXJhdGlvbg0KXGYxXGIwIFxj
ZjMgXVwNCiAgICAgICAgICAgICAgICAgICAgICAgICANClxmMFxiIFxjZjIgbXNpX2ZyZWVfaXJx
DQpcZjFcYjAgXGNmMyAobXNpX2Rlc2MpO1wNCiAgICAgICAgICAgICAgICAgICAgICAgICANClxm
MFxiIFxjZjIgXn5+fn5+fn5+fn5+DQpcZjFcYjAgXGNmMyBcDQogICAgICAgICAgICAgICAgICAg
ICAgICAgXGNmNCB2Z2ljX2ZyZWVfdmlycVxjZjMgXA0KDQpcZjBcYiBuczE2NTUwLmM6ODEyOjI1
Og0KXGYxXGIwICANClxmMFxiIFxjZjIgZXJyb3I6IA0KXGYxXGIwIFxjZjMgbmVzdGVkIGV4dGVy
biBkZWNsYXJhdGlvbiBvZiBcJzkxDQpcZjBcYiBtc2lfZnJlZV9pcnENClxmMVxiMCBcJzkyIFsN
ClxmMFxiIFxjZjIgLVdlcnJvcj1uZXN0ZWQtZXh0ZXJucw0KXGYxXGIwIFxjZjMgXVwNCg0KXGYw
XGIgbnMxNjU1MC5jOjgxNDoyNToNClxmMVxiMCAgDQpcZjBcYiBcY2YyIGVycm9yOiANClxmMVxi
MCBcY2YzIGltcGxpY2l0IGRlY2xhcmF0aW9uIG9mIGZ1bmN0aW9uIFwnOTENClxmMFxiIGRlc3Ry
b3lfaXJxDQpcZjFcYjAgXCc5MjsgZGlkIHlvdSBtZWFuIFwnOTENClxmMFxiIHNldHVwX2lycQ0K
XGYxXGIwIFwnOTI/IFsNClxmMFxiIFxjZjIgLVdlcnJvcj1pbXBsaWNpdC1mdW5jdGlvbi1kZWNs
YXJhdGlvbg0KXGYxXGIwIFxjZjMgXVwNCiAgICAgICAgICAgICAgICAgICAgICAgICANClxmMFxi
IFxjZjIgZGVzdHJveV9pcnENClxmMVxiMCBcY2YzIChtc2kuaXJxKTtcDQogICAgICAgICAgICAg
ICAgICAgICAgICAgDQpcZjBcYiBcY2YyIF5+fn5+fn5+fn5+DQpcZjFcYjAgXGNmMyBcDQogICAg
ICAgICAgICAgICAgICAgICAgICAgXGNmNCBzZXR1cF9pcnFcY2YzIFwNCg0KXGYwXGIgbnMxNjU1
MC5jOjgxNDoyNToNClxmMVxiMCAgDQpcZjBcYiBcY2YyIGVycm9yOiANClxmMVxiMCBcY2YzIG5l
c3RlZCBleHRlcm4gZGVjbGFyYXRpb24gb2YgXCc5MQ0KXGYwXGIgZGVzdHJveV9pcnENClxmMVxi
MCBcJzkyIFsNClxmMFxiIFxjZjIgLVdlcnJvcj1uZXN0ZWQtZXh0ZXJucw0KXGYxXGIwIFxjZjMg
XVwNCg0KXGYwXGIgbnMxNjU1MC5jOjc4MDoyOToNClxmMVxiMCAgDQpcZjBcYiBcY2YyIGVycm9y
OiANClxmMVxiMCBcY2YzIHVudXNlZCB2YXJpYWJsZSBcJzkxDQpcZjBcYiBtc2kNClxmMVxiMCBc
JzkyIFsNClxmMFxiIFxjZjIgLVdlcnJvcj11bnVzZWQtdmFyaWFibGUNClxmMVxiMCBcY2YzIF1c
DQogICAgICAgICAgICAgc3RydWN0IG1zaV9pbmZvIA0KXGYwXGIgXGNmMiBtc2kNClxmMVxiMCBc
Y2YzICA9IFx7XA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICANClxmMFxiIFxjZjIgXn5+
DQpcZjFcYjAgXGNmMyBcDQp9



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:03:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.29981.59684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOzc-0000KX-3z; Wed, 18 Nov 2020 15:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 29981.59684; Wed, 18 Nov 2020 15:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfOzb-0000KQ-WA; Wed, 18 Nov 2020 15:03:31 +0000
Received: by outflank-mailman (input) for mailman id 29981;
 Wed, 18 Nov 2020 15:03:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QZvP=EY=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfOza-0000G5-Vq
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:03:31 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7821a0e9-7244-430c-9f00-0faf94907314;
 Wed, 18 Nov 2020 15:03:22 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AIF3EkA010941
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Wed, 18 Nov 2020 16:03:15 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id B1EDF2E9CA8; Wed, 18 Nov 2020 16:03:09 +0100 (MET)
X-Inumbo-ID: 7821a0e9-7244-430c-9f00-0faf94907314
Date: Wed, 18 Nov 2020 16:03:09 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201118150309.GG3126@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Wed, 18 Nov 2020 16:03:16 +0100 (MET)

On Wed, Nov 18, 2020 at 03:39:28PM +0100, Roger Pau Monné wrote:
> > [...]
> > I can get the same effect by just doing ^A^A^A, so my guess is that it's
> > not the access to the ioapic's registers that changes the IOAPIC_REDLO_RIRR
> > bit, but the Xen printf. Also, from NetBSD, using a dump function which
> > doesn't access undefined registers - and so doesn't trigger Xen printfs -
> > doesn't change the IOAPIC_REDLO_RIRR bit either.
> 
> I'm thinking about further ways to debug this. I see that all active
> IO-APIC pins are routed to vCPU0, but does it make a difference if you
> boot with dom0_max_vcpus=1 on the Xen command line? (thus limiting
> NetBSD dom0 to a single CPU)

No, the same issue happens.

> 
> I can also prepare a patch that will periodically dump the same stuff
> as the 'i' debug key without you having to press anything, but I'm not
> sure if it would help much.

I think the key is to read all the interesting stuff before printing,
as it seems that printing to the console is what changes the state.

> 
> Also, does the system work fine when you reach multiuser, or it also
> randomly freezes and requires further poking?

I let it run overnight, with some cron jobs firing, and it didn't wedge.
I guess that once the kernel autoconf is done, the window in which
the interrupt is masked at the ioapic level is much shorter, making the
problem much less likely to happen.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:13:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30014.59717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfP9G-0001da-K3; Wed, 18 Nov 2020 15:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30014.59717; Wed, 18 Nov 2020 15:13:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfP9G-0001dT-H2; Wed, 18 Nov 2020 15:13:30 +0000
Received: by outflank-mailman (input) for mailman id 30014;
 Wed, 18 Nov 2020 15:13:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfP9G-0001dJ-1G; Wed, 18 Nov 2020 15:13:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfP9F-0003U9-Pa; Wed, 18 Nov 2020 15:13:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfP9F-0004Xq-GL; Wed, 18 Nov 2020 15:13:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfP9F-0003RU-Fo; Wed, 18 Nov 2020 15:13:29 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cyeBEAaYUVeU+PCzEmOeO/lWTbeZVwJCKQRuhx5IJlU=; b=fTjBHqjoRUy1PdpOvXOTwIWV0G
	uyGXn4oeTBi2vXlEFRBdAQ0JWIz7laGqex42O7uCoVhIGtfd81JaiLzUvzSeJPgVFtKwvD5HjF0KZ
	pcFLBvFbs7dmk4Sa0WPnkDT803v4JOncX++XlNyDyCdJseVKuMXRXEG3jZqEKI9X/1EQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156852-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156852: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fa8ee0d9ab95c9350b8b84574824d9a384a9f7d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 15:13:29 +0000

flight 156852 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156852/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0fa8ee0d9ab95c9350b8b84574824d9a384a9f7d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  109 days
Failing since        152366  2020-08-01 20:49:34 Z  108 days  181 attempts
Testing same since   156841  2020-11-17 20:09:03 Z    0 days    2 attempts

------------------------------------------------------------
3528 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 675049 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:17:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:17:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30029.59732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPCg-0001t5-4o; Wed, 18 Nov 2020 15:17:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30029.59732; Wed, 18 Nov 2020 15:17:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPCg-0001sy-1i; Wed, 18 Nov 2020 15:17:02 +0000
Received: by outflank-mailman (input) for mailman id 30029;
 Wed, 18 Nov 2020 15:17:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfPCe-0001st-O1
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:17:00 +0000
Received: from de-smtp-delivery-52.mimecast.com (unknown [51.163.158.52])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bc6fa6f4-d158-4f00-a0a2-15f2bedad7c4;
 Wed, 18 Nov 2020 15:16:59 +0000 (UTC)
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2054.outbound.protection.outlook.com [104.47.5.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-qywAWm4IPpyoZtpppVvoEQ-1; Wed, 18 Nov 2020 16:16:56 +0100
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7038.eurprd04.prod.outlook.com (2603:10a6:800:12d::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.26; Wed, 18 Nov
 2020 15:16:55 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::6882:d72e:9dfd:349e]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::6882:d72e:9dfd:349e%5]) with mapi id 15.20.3541.028; Wed, 18 Nov 2020
 15:16:55 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM0PR01CA0174.eurprd01.prod.exchangelabs.com (2603:10a6:208:aa::43) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Wed, 18 Nov 2020 15:16:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=763w=EY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kfPCe-0001st-O1
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:17:00 +0000
X-Inumbo-ID: bc6fa6f4-d158-4f00-a0a2-15f2bedad7c4
Received: from de-smtp-delivery-52.mimecast.com (unknown [51.163.158.52])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id bc6fa6f4-d158-4f00-a0a2-15f2bedad7c4;
	Wed, 18 Nov 2020 15:16:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1605712618;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9miUL1ycQ/wKVbdU7D8a7KBAYMk2rSnGMnl+ZgVXSLo=;
	b=GTO2ViiQ36Ruc6TGf6RnW4+kBmIovd2rugqa7C8+zkN1E2evExutIyV5GDfLYYAHSxzJ3A
	FCMCUY8VX4T26HvG36F/OecC4VHqzRfvH358i+sUICuNQ5HxkbpImvFD2ditVEtIFTceN9
	LXPWzVRd5I6IW7nke6tnNg8I340ahfI=
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur02lp2054.outbound.protection.outlook.com [104.47.5.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-qywAWm4IPpyoZtpppVvoEQ-1; Wed, 18 Nov 2020 16:16:56 +0100
X-MC-Unique: qywAWm4IPpyoZtpppVvoEQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AR+N9GTmzvmgxDnk2Uln908lSLQSneNZya8bk82gBytrtSeOAYLsUALZ1LpsJd9soxuEhk8RUuwuw2bGus4gnid55XqulsFoN15HZFOc60X7oMdcCwH5RKnTBThaX5Hi5SGXyk9ij4RHrXQmHVrQnNzgZyaf8l3LeqFtv1NoTUgR3IHWmTdtv7Qr3N1M9Jzu5cFhgb33JPm2r7hCn1Yb61xwajRhRKMWYh8e/g8zfoLanXpKADCAuUvz/PjItVQYjoLWy2j79dfUFewtHjzrZbOQckX++F4SWMwkd/+PGznsMy6juLxeQrqAfzgHTCkVjKoGTXukqhlR6AkaHZctnA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9miUL1ycQ/wKVbdU7D8a7KBAYMk2rSnGMnl+ZgVXSLo=;
 b=fK717Lv9jB0WcQNOu7Y0lsR/MyzYJTTReBOK9l/wFe+7GVNgPgHyXN+8s5Aigy/px3BlpA1zo058+aHVlGLecQQqUx9InDWw5a+OUny9xGxYqMlR5SyUUmE/K+JAr1Iwctt8ra+9U0kBFxpA2MS8PkXu0MxLktQl+436SzMMy3pi2ii6xJiaC2IBqU5kWzXAgOeuPYusKous1TnQw4dyhszqWOTK9i2zr5/d93DkvUZvRrduQcCP73i8S9h1fpbPoTBHq05AsRXdamccEXfa9H2zlWzEcHpHnyda2tq/SYuDJa1BBBNHdn4MmQpk8raorfy8XFLNucaV3QI661BkxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR04MB7038.eurprd04.prod.outlook.com (2603:10a6:800:12d::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.26; Wed, 18 Nov
 2020 15:16:55 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::6882:d72e:9dfd:349e]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::6882:d72e:9dfd:349e%5]) with mapi id 15.20.3541.028; Wed, 18 Nov 2020
 15:16:55 +0000
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
 <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1530c2fb-8def-37eb-8a22-d7f9fc4e38b4@suse.com>
Date: Wed, 18 Nov 2020 16:16:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
In-Reply-To: <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM0PR01CA0174.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:aa::43) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [10.156.60.236] (37.24.206.209) by AM0PR01CA0174.eurprd01.prod.exchangelabs.com (2603:10a6:208:aa::43) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend Transport; Wed, 18 Nov 2020 15:16:54 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 83c48390-681e-46d8-06f4-08d88bd4fc0c
X-MS-TrafficTypeDiagnostic: VI1PR04MB7038:
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB70389249F8ED807015396150B3E10@VI1PR04MB7038.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GoRdTsRjX2dejbxbBVlBC7FBVsPY8iISFhHlzI8UrWM6+Z+36Te1/7fISZMXWGRUI8MZ9GZUTV66CRpntE74ePO/o0XlFHs8TP1e7lF3C/DX5L9NtkCqFOe97GUv9e6NmOZm0TPGgfsVhoWHey8u/d4hkHQMxfbGb6tGOiuTx9HSTCL4D75cD6ut1IUPQBTAAKxLVI0/KFDxlzCHiOy0VAIWLZ9Lg6gX1RI6U4INrPD4wbrLleICW6I6Gg0itA3TutDDi5LkP7xn+4FHDhzJCFRMU6zwUmYUwt1dBJNX0/XME0BX+j2fBQ4k960dPbigjcLzndum6Ws4bvlUMggN3kxvvsQjy8IutWpZyDCquTlK423fV94q7YYVCXdFm1CS
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(346002)(396003)(366004)(376002)(39860400002)(8936002)(956004)(52116002)(26005)(54906003)(53546011)(2616005)(478600001)(86362001)(4326008)(5660300002)(31686004)(36756003)(8676002)(66556008)(2906002)(186003)(31696002)(16526019)(16576012)(83380400001)(6666004)(6486002)(6916009)(66476007)(66946007)(316002)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	nvf2YtTgDyBWhmInhhPiZ46k4CdudmsCRSSGEGJrKZjykSc8L85481kQ1vMp2Ujtl3BUsP/eaSLORzCyQztF/9t56DqfmhRYhvY1PlvFmgA91UGPYcaAzN7Ko+FUKo6jNjMLHWg7i46f+HYua1GIVYB7JQruK7gkCm1YD7Nhz/uXTnuyIBaQfBeyhjJcLFRlanH2ig3zshOvmtkh1MV/Y1q5pOWxK+nD7kTTCNEz9ozg58LYTViTDz2LTp8EKimrlOEWdNMFBt4DuJy7tVhP8PWvTALvcfrSJzaRyyviS65kxyWLq87gb15h7K5L5Q7mnnIDSFYVpif73iARkzwJXPUTk/c5xty6D8nZVvOvAe/79NiTxxQCnGg0Vlld59v0dDrhqySScJ1pdWh/BRv8t4mZ0Bg9J+0Vw+uHbCeF1PSwsx4uoME/BP4ND5ekvuG4PY0S6GNT9CGL9LI0f40F0zIYUrBxu5EyF16De4oEoDcApihXsj8wpVsiOhtzInNyUFuflHlVadw81i9/ywLx5xSXTCCUUzLTKHBDWATJxc3caipsgzKlO4xHvozMtnmJO1zxycVpZts7rwHauGGndCO/1O8ZXJatKNEUJ4C2a5D8a8zqOvT6FXsfVNrS7QqVtM/6DTiRykxKbLR9YwNDSxag/5HJggOnaPrZiaN6mt/85+tlMo5NPyg94XKFI/WKs+spT9/QcVpnsyBgWwk3c3rMQIsPNbTLUFXbFn7ysabEKGHozulSY5jSEFWFP0wJtrGBbGfVDd8N8kiSI4Q65OfiUy+k2OffeMRHpMG9Z1/UwdclHWX8MjHxNVjtlrvPxYThj4ERBIVktJtyNWRgzHHpyvHbTr0QY8e0qoF35YfRlLznuwthJZatirAVYPgadlpn82P72+59FFk/z2GARw==
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 83c48390-681e-46d8-06f4-08d88bd4fc0c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Nov 2020 15:16:55.2228
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: C4FgQdP/TCrr116Qp9HFmSRb31kcngMLKdbjKId4BqQNWqdorysrnFFmdmy3R09G14UoFUF8ienQpeG5LKR5BQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7038

On 18.11.2020 16:02, Rahul Singh wrote:
> Hello Jan,
> 
>> On 17 Nov 2020, at 10:55 am, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 16.11.2020 13:25, Rahul Singh wrote:
>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>> is enabled for ARM, compilation error is observed for ARM architecture
>>> because ARM platforms do not have full PCI support available.
>>
>> While you've extended the sentence, it remains unclear to me what
>> compilation error it is that results here. I've requested such
>> clarification for v2 already.
> 
> The compilation errors are related to code that refers to x86 functions (create_irq()..) and to the MSI implementation.
> For more details, please see the attached file with the compilation errors.

The use of mmio_ro_ranges is quite possibly going to remain
x86-specific, but then I guess this wants abstracting in a suitable
way.

The remaining errors look to all be MSI-related, so perhaps what you
want to avoid is just that part rather than everything PCI-ish?

>>> --- a/xen/drivers/char/ns16550.c
>>> +++ b/xen/drivers/char/ns16550.c
>>> @@ -16,7 +16,7 @@
>>> #include <xen/timer.h>
>>> #include <xen/serial.h>
>>> #include <xen/iocap.h>
>>> -#ifdef CONFIG_HAS_PCI
>>> +#ifdef CONFIG_HAS_NS16550_PCI
>>> #include <xen/pci.h>
>>> #include <xen/pci_regs.h>
>>> #include <xen/pci_ids.h>
>>
>> ... #undef-ining CONFIG_HAS_PCI at a suitable position in this
>> file (e.g. after all #include-s, to make sure all structure
>> layouts remain correct)? This would then be far easier to revert
>> down the road, and would confine the oddity to a single file
>> (and there a single place) in the code base.
>>
> 
> As for ARM platforms, the PCI implementation is still in development, and I am not sure whether the ns16550 PCI part of the code will work out of the box once the PCI work is complete. I think some effort is required to test the ns16550 PCI part of the code on ARM.
> As this code is tested on x86 only, I made the option depend on X86 and enabled it by default for x86.
> 
> I feel that adding a new Kconfig option to enable/disable the NS16550 PCI support is preferable to #undef-ing CONFIG_HAS_PCI in that particular file. If in future another architecture wants to implement PCI, it will face similar compilation errors.
> 
> Please suggest how we can proceed on this.

Introduce CONFIG_HAS_PCI_MSI (selected only by x86), if there's no
immediate plan to support it on Arm together with the rest of PCI?
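For illustration, such a split might look roughly like the following
Kconfig sketch (hypothetical and untested; file locations and the exact
select site are assumptions, not part of the patch under discussion):

```kconfig
# Hypothetical sketch: split MSI support out of HAS_PCI, so an
# architecture can have PCI support without (yet) having MSI support.

# In a common Kconfig file:
config HAS_PCI_MSI
	bool
	depends on HAS_PCI

# In xen/arch/x86/Kconfig, next to the existing "select HAS_PCI":
#	select HAS_PCI_MSI
```

ns16550.c would then guard only its MSI-specific code with
#ifdef CONFIG_HAS_PCI_MSI, leaving the generic PCI probing under
CONFIG_HAS_PCI.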

Jan



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:17:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:17:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30034.59744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPDY-00021I-J6; Wed, 18 Nov 2020 15:17:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30034.59744; Wed, 18 Nov 2020 15:17:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPDY-00021B-G2; Wed, 18 Nov 2020 15:17:56 +0000
Received: by outflank-mailman (input) for mailman id 30034;
 Wed, 18 Nov 2020 15:17:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uMMN=EY=gmail.com=lambert.olivier@srs-us1.protection.inumbo.net>)
 id 1kfPDW-000210-GH
 for xen-devel@lists.xen.org; Wed, 18 Nov 2020 15:17:54 +0000
Received: from mail-ua1-x92c.google.com (unknown [2607:f8b0:4864:20::92c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id daf8d629-c3ee-41d7-8ea4-6c26c984edc5;
 Wed, 18 Nov 2020 15:17:53 +0000 (UTC)
Received: by mail-ua1-x92c.google.com with SMTP id q68so783575uaq.3
 for <xen-devel@lists.xen.org>; Wed, 18 Nov 2020 07:17:53 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=uMMN=EY=gmail.com=lambert.olivier@srs-us1.protection.inumbo.net>)
	id 1kfPDW-000210-GH
	for xen-devel@lists.xen.org; Wed, 18 Nov 2020 15:17:54 +0000
X-Inumbo-ID: daf8d629-c3ee-41d7-8ea4-6c26c984edc5
Received: from mail-ua1-x92c.google.com (unknown [2607:f8b0:4864:20::92c])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id daf8d629-c3ee-41d7-8ea4-6c26c984edc5;
	Wed, 18 Nov 2020 15:17:53 +0000 (UTC)
Received: by mail-ua1-x92c.google.com with SMTP id q68so783575uaq.3
        for <xen-devel@lists.xen.org>; Wed, 18 Nov 2020 07:17:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=AifqwwU6opvqQ+dbP3YRrILce8AKsWgnI+JIoG1/g0g=;
        b=VYG+6N7P+JQKeCBD9r4Rn97ZPiyiWhmsp9hCD7gWSDuKQVTQDyQCIIgdPb5V3fHtXJ
         uQTufJ82yi/CR+pi7c3VbjcQN+Ledd3/0WURV3T7NXjatPRCgfm660/OD9RwQ7OxSpdQ
         oL7k4vLeiJlvNXdEvrYuaJ0lhEfOS+seGLYeV6PCrMk6Yof1+Ds70qInARxuUo95QNBS
         ht//r/PXBVrs6zNaqFfHxqxvpV6PKnTPa4rZYvUluH9UpPuDXwEekTO0khXNTrbXkV/W
         HUEbVpAZL1CNJhXfTi4H7kR8tJ/CmJ5Ywu1woNwq2DwfiCM05zJMJo+iF8MCCI7W7it4
         36qw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=AifqwwU6opvqQ+dbP3YRrILce8AKsWgnI+JIoG1/g0g=;
        b=FSv+4WsBYdDu/Oz56PeSFPyg16YF9wr8ScbIcGswTYNmzTawAOGImDBefj4GxPg5z4
         gOVHeVrUo94BLv680QCXa8z4ezyFslhP2byBRPJUJkwz1E4LI3kq1ieC6OepcdC9sQZc
         ojKBz9QaqyHICBNUFTCpETA2+Ck0iGLj8pzn7COsQeQyxKuvHoN+OJih+JYpTLZNnkUK
         qwroLDS9j1Esxykcq3cjg5UAXEeHVdjhRn37K2hdUI3JAg4rdagqqUhD0mPqdsDnnt4B
         do902MyezFA3UCa6jfA5kiymCoYQ+nshnA3QJNXP5dBJHyBvhdNY1k+n68Zd5KrbatV+
         odGA==
X-Gm-Message-State: AOAM5301pLFv1u4klErVrCAPu6eVxyzfd0ElsypswN016ov9XL8DuFXx
	Z8OR4YhQU15uNviTsoGcEzgQmgRVfk2p1U9bcmnkEO+3BnwpueBi
X-Google-Smtp-Source: ABdhPJxXSHfeaUd3SLJ956z1ZIY2oKEFuGexNNAX13RANWFzh8SjDYKbexe4S0QrZgulHURmm7IcEhp9uRZSQcrqniQ=
X-Received: by 2002:ab0:6994:: with SMTP id t20mr3586689uaq.111.1605712672482;
 Wed, 18 Nov 2020 07:17:52 -0800 (PST)
MIME-Version: 1.0
References: <CACJ1ZNuJCgDkRHvH2gXqC5gWTJHdUQ9J4G-HBNFwKYZFaWpWuw@mail.gmail.com>
 <CACJ1ZNupvRX_fcGPWn3mm+3Lm4gT38M088tUc_sSUu8JeQg3Fg@mail.gmail.com> <CACJ1ZNu5Kdf72j1eTtdgTuSOjgkpeEWFM0cKB-54pxqwXuWCDQ@mail.gmail.com>
In-Reply-To: <CACJ1ZNu5Kdf72j1eTtdgTuSOjgkpeEWFM0cKB-54pxqwXuWCDQ@mail.gmail.com>
From: Olivier Lambert <lambert.olivier@gmail.com>
Date: Wed, 18 Nov 2020 16:17:41 +0100
Message-ID: <CACJ1ZNtfgNr9oz7stE=2iwijjAUtZLWR2u_xihFZeEk3Y7gYRQ@mail.gmail.com>
Subject: Re: Schedule for OpenPOWER/Xen meeting
To: "<xen-devel@lists.xen.org>" <xen-devel@lists.xen.org>
Content-Type: multipart/alternative; boundary="000000000000746d0105b4631da0"

--000000000000746d0105b4631da0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi!

So I managed to get an agenda with basic questions. The meeting is at the
planned time (Nov the 19th, at 3PM central time, which is 9PM in UK and
10PM in Europe).

Meeting place will be: https://ibm.webex.com/meet/mendy

Don't forget to ping your colleagues/friends that aren't watching this
mailing list actively, so they won't miss the meeting :)

See you tomorrow!

Olivier.

On Thu, 12 Nov 2020 at 21:44, Olivier Lambert <lambert.olivier@gmail.com>
wrote:

> Okay so before having the meeting webex/whatever link, I think it would be
> more efficient to plan a kind of agenda, something we can pass to the
> OpenPOWER team in the next few days. This way, they could have some answers
> ready, allowing us to explore more things interactively during the meeting.
>
> Feel free to participate in this thread (even if you won't be at the
> meeting!), so we can gather and then organize a bit of what we'd like to
> know/discuss during this meeting.
>
> So go ahead and start to throw questions :)
>
>
> Thanks,
>
> Olivier.
>
>
> On Thu, 12 Nov 2020 at 09:26, Olivier Lambert <lambert.olivier@gmail.com>
> wrote:
>
>> Thanks to everyone who participated in the poll. Due to the limited
>> number of answers, I think it's wiser to go for the second option (Thursday
>> the 19th), because everyone who already answered seems available that day.
>> I'll confirm that to OpenPOWER. When it's confirmed, I'll do a recap here
>> ideally with the meeting place.
>>
>> Thanks,
>>
>> Olivier.
>>
>>
>> On Tue, 10 Nov 2020 at 13:41, Olivier Lambert <lambert.olivier@gmail.com>
>> wrote:
>>
>>> Hi everyone,
>>>
>>> We got 2 potential dates for the initial tech meeting with at least one
>>> OpenPOWER expert, so we can discuss the effort needed to port Xen on this
>>> architecture.
>>>
>>> Because of time zones (on OpenPower side, there's one guy in Australia),
>>> we got 2 possible schedules in November:
>>>
>>> 1. 3pm CT on this Thursday the 12th (! this week)
>>> 2. Or next week Thursday the 19th
>>>
>>> I made a Doodle-like poll so everyone can vote on their preferred schedule:
>>> https://framadate.org/QQu5rYEOEYr4ZHc4
>>>
>>> Note: 3pm CT would mean 9pm UTC, 10pm UTC+1 (CET). But correct me if I'm
>>> wrong.
>>>
>>> Reminder: the Cryptpad of the last Xen Community meeting contains the
>>> list of people interested. If you are aware of someone interested that
>>> could miss this email on this devel list, feel free to forward it. Cryptpad
>>> link: https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/
>>>
>>> Thank you and see you soon!
>>>
>>> Olivier.
>>>
>>



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:26:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:26:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30044.59760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPLY-00036k-Gi; Wed, 18 Nov 2020 15:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30044.59760; Wed, 18 Nov 2020 15:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPLY-00036d-Cr; Wed, 18 Nov 2020 15:26:12 +0000
Received: by outflank-mailman (input) for mailman id 30044;
 Wed, 18 Nov 2020 15:26:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfPLW-00036Y-Dk
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:26:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfPLV-0003lK-2l; Wed, 18 Nov 2020 15:26:09 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfPLU-00074o-SS; Wed, 18 Nov 2020 15:26:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=O7LyMIkIfQbqsa+O5wB6y8CzGAuhic2WtHR2zN0NROI=; b=GO4mFFHmkY2ldj1mS1hD7TM+8o
	TIh7p0VfEM4TAaTLAoU+NaNXchFMtUHRV8rDhwdoN0Pxj+d2AJVz3jZBuweESxq/zE9cQuFrksE8U
	nISw8VNETyjL2aCPrMFpHgMFiUfGlNZw05oJNFbpZiYEUsg9lx8PJphfGjilK21ds/nM=;
Subject: Re: [PATCH v2] xen/arm: Add workaround for Cortex-A76/Neoverse-N1
 erratum #1286807
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com
References: <20201116121140.26763-1-michal.orzel@arm.com>
 <c7475d91-c956-3e2c-4445-ef5c005ff465@xen.org>
 <d6d632b2-eba7-1746-d398-2bd539a51caf@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1fe61e2f-a6fc-c1c8-2f29-0c62cc73cb3c@xen.org>
Date: Wed, 18 Nov 2020 15:26:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <d6d632b2-eba7-1746-d398-2bd539a51caf@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 18/11/2020 07:12, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 17.11.2020 18:30, Julien Grall wrote:
>> Hi Michal,
>>
>> On 16/11/2020 12:11, Michal Orzel wrote:
>>> On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
>>> if a virtual address for a cacheable mapping of a location is being
>>> accessed by a core while another core is remapping the virtual
>>> address to a new physical page using the recommended break-before-make
>>> sequence, then under very rare circumstances TLBI+DSB completes before
>>> a read using the translation being invalidated has been observed by
>>> other observers. The workaround repeats the TLBI+DSB operation
>>> for all the TLB flush operations on purpose.
>>
>> Sorry for nitpicking, but the commit message should contain enough information for a future reader to understand why this was done "on purpose".
>>
>> So how about:
>>
>> "The workaround repeats the TLBI+DSB operation for all the TLB flush operations. While this is strictly not necessary, we don't want to take any risk.".
>>
>> I can fix it on commit.
>>
> Ok I agree to add this extra clarification.
> Please go on and fix it on commit/etc.

Thanks. I have now committed it with a small extra change (I hope 
that's fine). I decided not to add the extra clarification in the 
Kconfig for two reasons:
  1) The description is already pretty long and I wanted to keep it 
short. A user more interested in the workaround would likely go and 
read the code.
  2) There is a risk that the description will get out of sync in the 
future, so it is better to keep the justification close to the implementation.
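
For context, the sequence being discussed looks roughly like this: a hypothetical sketch of a break-before-make remap with the erratum #1286807 workaround applied, not the actual Xen implementation; the register names and the exact TLBI operand are illustrative:

```
    str   xzr, [x_pte]       ; break: write an invalid PTE
    dsb   ishst              ; make the PTE update visible to the walker
    tlbi  vae2is, x_va       ; invalidate the old translation by VA
    dsb   ish                ; wait for the invalidation to complete
    tlbi  vae2is, x_va       ; erratum workaround: repeat the TLBI...
    dsb   ish                ; ...and the DSB
    str   x_newpte, [x_pte]  ; make: install the new mapping
```

Under the erratum, another core may in rare cases still observe the old translation after the first DSB; repeating the TLBI+DSB pair closes that window before the new mapping is installed.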

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30059.59772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPUC-0004AB-EW; Wed, 18 Nov 2020 15:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30059.59772; Wed, 18 Nov 2020 15:35:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPUC-0004A4-AG; Wed, 18 Nov 2020 15:35:08 +0000
Received: by outflank-mailman (input) for mailman id 30059;
 Wed, 18 Nov 2020 15:35:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfPUB-00049z-5n
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:35:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfPUA-0003wZ-AP; Wed, 18 Nov 2020 15:35:06 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfPU9-0007bn-W8; Wed, 18 Nov 2020 15:35:06 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=rSEK9lYwlW4jAC1pFVIz3xCqG5RQhts4Oxd5Bzm9HKM=; b=e1LEgiaSm0hAfDmGunnWt/7UEx
	TxjkbeVCiGkTjcYE9Q8rMZPJQ9V5+RDFR3cq8f8wIxKjKF5Yy6tXZgdB2REr88N3nr7XnvECpGNr2
	TbDpcVvYpkLj3w5iD9NMTZocUvZb4+h5U92hUymzRXAkwpsB1yo7i/9BffYIb1/RwwNA=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Jan Beulich <jbeulich@suse.com>, Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
 <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
 <1530c2fb-8def-37eb-8a22-d7f9fc4e38b4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0946edb2-c2c1-0d3d-c8ff-f24055f78ebf@xen.org>
Date: Wed, 18 Nov 2020 15:35:03 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <1530c2fb-8def-37eb-8a22-d7f9fc4e38b4@suse.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 18/11/2020 15:16, Jan Beulich wrote:
> On 18.11.2020 16:02, Rahul Singh wrote:
>> Hello Jan,
>>
>>> On 17 Nov 2020, at 10:55 am, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 16.11.2020 13:25, Rahul Singh wrote:
>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>>> because ARM platforms do not have full PCI support available.
>>>
>>> While you've extended the sentence, it remains unclear to me what
>>> compilation error it is that results here. I've requested such
>>> clarification for v2 already.
>>
>> The compilation errors are related to code that refers to x86 functions (create_irq()..) and to the MSI implementation.
>> For more details, please see the attached file with the compilation errors.
> 
> The use of mmio_ro_ranges is quite possibly going to remain
> x86-specific, but then I guess this wants abstracting in a suitable
> way.
> 
> The remaining errors all look to be MSI-related, so perhaps what you want
> to avoid is just that part rather than everything PCI-ish?

Not really (see more above).

> 
>>>> --- a/xen/drivers/char/ns16550.c
>>>> +++ b/xen/drivers/char/ns16550.c
>>>> @@ -16,7 +16,7 @@
>>>> #include <xen/timer.h>
>>>> #include <xen/serial.h>
>>>> #include <xen/iocap.h>
>>>> -#ifdef CONFIG_HAS_PCI
>>>> +#ifdef CONFIG_HAS_NS16550_PCI
>>>> #include <xen/pci.h>
>>>> #include <xen/pci_regs.h>
>>>> #include <xen/pci_ids.h>
>>>
>>> ... #undef-ining CONFIG_HAS_PCI at a suitable position in this
>>> file (e.g. after all #include-s, to make sure all structure
>>> layouts remain correct)? This would then be far easier to revert
>>> down the road, and would confine the oddity to a single file
>>> (and there a single place) in the code base.
>>>
>>
>> As for ARM platforms, the PCI implementation is still in development and I am not sure whether the ns16550 PCI part of the code will work out of the box once the PCI work is complete. I think some effort is required to test the ns16550 PCI part of the code on ARM.
>> As this code is tested on x86 only, I made the option depend on X86 and enabled it by default for x86.
>>
>> I feel that adding a new Kconfig option to enable/disable the NS16550 PCI support is preferable to #undef-ining CONFIG_HAS_PCI in that particular file. If in the future another architecture wants to implement PCI, it will face similar compilation errors.
>>
>> Please suggest how we can proceed on this.
> 
> Introduce CONFIG_HAS_PCI_MSI (selected only by x86), if there's no
> immediate plan to support it on Arm together with the rest of PCI?

So even if we enable PCI on Arm and fix the compilation issues, there 
is no way the NS16550 PCI support would be usable without further 
effort, for a few reasons:

   1) com1/com2 is x86-specific
   2) ns16550_init() is not used on Arm, yet it is the only way to set up a PCI UART
   3) on Arm, the UART is discovered through the device-tree/ACPI tables

So I think CONFIG_HAS_NS16550_PCI is the most suitable solution and we 
should probably guard more code (e.g. ns16550_init(), com1, com2...).

Note that's not a request for doing it in this patch :).
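
As a rough illustration of the direction discussed above, the gating could look like this. This is a hypothetical Kconfig sketch, not the actual patch: the option name follows the thread, but the prompt, dependencies, and file location are assumptions:

```
# xen/drivers/char/Kconfig (illustrative sketch only)
config HAS_NS16550_PCI
	bool
	depends on HAS_PCI
	default y if X86

# ns16550.c would then guard PCI-only code, e.g.:
#   #ifdef CONFIG_HAS_NS16550_PCI
#   ...PCI probing, MSI setup, com1/com2 handling...
#   #endif
```

Keeping the option non-user-selectable on Arm matches the point above: the PCI paths stay compiled out until the extra plumbing exists.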

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 15:51:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 15:51:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30070.59790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPjW-00060Q-UL; Wed, 18 Nov 2020 15:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30070.59790; Wed, 18 Nov 2020 15:50:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfPjW-00060J-RK; Wed, 18 Nov 2020 15:50:58 +0000
Received: by outflank-mailman (input) for mailman id 30070;
 Wed, 18 Nov 2020 15:50:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfPjV-00060D-TU
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 15:50:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfPjS-0004Fs-0U; Wed, 18 Nov 2020 15:50:54 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfPjR-0000ES-LH; Wed, 18 Nov 2020 15:50:53 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=H2OnZGygED/PBqBVpxIVeWg1cYBzDTz72kzvRW1xj6E=; b=xPOU7ugciOwcX+UsEXtq7TBf7x
	zcsq/TzIEKZkiZto6qzy1jfiektakyPsP9fRhy3kBfZYhwf2VloF1w0LZqyjPE+bNncoergwdQ6s4
	Pf5TKGF9utmr90CcaVXuXnf3GF514e9Rg/XVWm0sTCAZbSsGWvlHi7c8A5z115sZgppA=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
Date: Wed, 18 Nov 2020 15:50:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Rahul,

On 16/11/2020 12:25, Rahul Singh wrote:
> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
> is enabled for ARM, compilation error is observed for ARM architecture
> because ARM platforms do not have full PCI support available.
>
> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
> ns16550 PCI for X86.
> 
> For X86 platforms it is enabled by default. For ARM platforms it is
> disabled by default, once we have proper support for NS16550 PCI for
> ARM we can enable it.
> 
> No functional change.

NIT: I would say "No functional change intended" to make clear this is 
an expectation that will hopefully turn out to be correct :).

Regarding the commit message itself, I would suggest the following to 
address Jan's concern:

"
xen/char: ns16550: Gate all PCI code with a new Kconfig HAS_NS16550_PCI

The NS16550 driver assumes that NS16550 PCI cards are usable if the 
architecture supports PCI (i.e. CONFIG_HAS_PCI=y). However, the code is 
very x86-focused and will fail to build on Arm (/!\ this is not the full list of errors):

  ns16550.c: In function ‘ns16550_init_irq’:
ns16550.c:726:21: error: implicit declaration of function ‘create_irq’; 
did you mean ‘release_irq’? [-Werror=implicit-function-declaration]
          uart->irq = create_irq(0, false);
                      ^~~~~~~~~~
                      release_irq
ns16550.c:726:21: error: nested extern declaration of ‘create_irq’ 
[-Werror=nested-externs]
ns16550.c: In function ‘ns16550_init_postirq’:
ns16550.c:768:33: error: ‘mmio_ro_ranges’ undeclared (first use in this 
function); did you mean ‘mmio_handler’?
               rangeset_add_range(mmio_ro_ranges, uart->io_base,
                                  ^~~~~~~~~~~~~~
                                  mmio_handler
ns16550.c:768:33: note: each undeclared identifier is reported only once 
for each function it appears in
ns16550.c:780:20: error: variable ‘msi’ has initializer but incomplete type
              struct msi_info msi = {
                     ^~~~~~~~
ns16550.c:781:18: error: ‘struct msi_info’ has no member named ‘bus’
                  .bus = uart->ps_bdf[0],
                   ^~~
ns16550.c:781:24: error: excess elements in struct initializer [-Werror]
                  .bus = uart->ps_bdf[0],
                         ^~~~
ns16550.c:781:24: note: (near initialization for ‘msi’)
ns16550.c:782:18: error: ‘struct msi_info’ has no member named ‘devfn’
                  .devfn = PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2]),

Enabling support for NS16550 PCI cards on Arm would require more plumbing 
in addition to fixing the compilation errors.

Arm systems tend to have a platform UART available, such as an NS16550 or 
PL011, so there is limited reason to add NS16550 PCI support on Arm for now.

A new Kconfig option CONFIG_HAS_NS16550_PCI is introduced to gate all 
the PCI code.

This option will be selected automatically on x86 platforms and left 
unselectable on Arm.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
[julieng: Commit message]
Signed-off-by: Julien Grall <jgrall@amazon.com>
"

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:25:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:25:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30085.59805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQGL-0001GZ-Mt; Wed, 18 Nov 2020 16:24:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30085.59805; Wed, 18 Nov 2020 16:24:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQGL-0001GS-Jl; Wed, 18 Nov 2020 16:24:53 +0000
Received: by outflank-mailman (input) for mailman id 30085;
 Wed, 18 Nov 2020 16:24:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VXpI=EY=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kfQGJ-0001GN-Tr
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:24:51 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0725eb9-ccbf-4c9f-90e4-bcd5756588d5;
 Wed, 18 Nov 2020 16:24:50 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id E61A068B05; Wed, 18 Nov 2020 17:24:48 +0100 (CET)
X-Inumbo-ID: b0725eb9-ccbf-4c9f-90e4-bcd5756588d5
Date: Wed, 18 Nov 2020 17:24:47 +0100
From: Christoph Hellwig <hch@lst.de>
To: Coly Li <colyli@suse.de>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Mike Snitzer <snitzer@redhat.com>, dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 19/20] bcache: remove a superfluous lookup_bdev call
Message-ID: <20201118162447.GB16753@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-20-hch@lst.de> <e7f826fd-cb9c-b4ab-fae8-dad398c14eed@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e7f826fd-cb9c-b4ab-fae8-dad398c14eed@suse.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Nov 18, 2020 at 04:54:51PM +0800, Coly Li wrote:
> On 11/18/20 4:47 PM, Christoph Hellwig wrote:
> > Don't bother to call lookup_bdev for just a slightly different error
> > message without any functional change.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> 
> Hi Christoph,
> 
> NACK. The error message being removed here is frequently triggered and
> observed, and distinguishing a busy device from an already registered
> device is important (the first one is a critical error and the second
> one is not).
> 
> Removing such an error message would be a functional regression.

I can probably keep it; the amount of code to prettify an error message
seems excessive, though.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:27:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30089.59816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQIi-0001Oj-4K; Wed, 18 Nov 2020 16:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30089.59816; Wed, 18 Nov 2020 16:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQIi-0001Oc-0s; Wed, 18 Nov 2020 16:27:20 +0000
Received: by outflank-mailman (input) for mailman id 30089;
 Wed, 18 Nov 2020 16:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PiVl=EY=cardoe.com=cardoe@srs-us1.protection.inumbo.net>)
 id 1kfQIh-0001OX-LI
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:27:19 +0000
Received: from mail-qv1-xf2e.google.com (unknown [2607:f8b0:4864:20::f2e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cde413ed-fffe-4560-a53e-974ee780306d;
 Wed, 18 Nov 2020 16:27:19 +0000 (UTC)
Received: by mail-qv1-xf2e.google.com with SMTP id g19so1310103qvy.2
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 08:27:19 -0800 (PST)
Received: from localhost.localdomain
 (104-179-196-18.lightspeed.brhmal.sbcglobal.net. [104.179.196.18])
 by smtp.gmail.com with ESMTPSA id k70sm17318603qke.46.2020.11.18.08.27.17
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 18 Nov 2020 08:27:17 -0800 (PST)
X-Inumbo-ID: cde413ed-fffe-4560-a53e-974ee780306d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cardoe.com; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=zrqXFk4ShVAtVDE1F4lmJEb7S5DU3GeoqHDkSh87ftY=;
        b=DQPmyzHSPZeUv/5KH09b94VJM4dtmhlGhiBm4/oGKKVooNLbjgLuGMbwcrbPXqT3LD
         0LBAfTxQ1GsjWKiny93SaDIjb2YFKjwR6TShsmuKhNBeX6DXZWhxIsEe1rRd0nEkjkAx
         TgFpFK5BCyn/UZKZdMcsjTZYz0l6B+765o97c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=zrqXFk4ShVAtVDE1F4lmJEb7S5DU3GeoqHDkSh87ftY=;
        b=lJ8Zklv2O4v4CL6OFYOXJWjYT3+/RMo954TLFqHtVloZccRi+TRBBxouyXaKv6IgZ+
         VrnUUGhRyyIWX6AOqvP90H7U56jOgayEB6biYdnojET26nmb86mkY39ufM3jJH+XBfuQ
         drK0PLyJzEFP5CSwB64fSE8bEp8eFJ+4VDWtRgurqg+O9KXfidDS06enEYLi5tCMrVIU
         PjwlaVpk3bGUWRoGV8g/SK+Y2SadaEZCjsr29TGvk5c5sjKKPPO0SgSgvwbiWTIdmM4e
         SujQWhvt5ndHSDyAGW2vUAmuxsdIQD8eGlkqMh/ViScxweqzrSzPs5cQttXUF2BP3mVX
         vBvg==
X-Gm-Message-State: AOAM532gy/FBklKZt38rm7+Muf2vSNCC0jszIdjGEP1Etro2mYMWLq3B
	KlkPvCE7EADV46DRz0ZJB4z790UbZ27bjKcu
X-Google-Smtp-Source: ABdhPJx2C/BCH+Or1fGS11XLEX17s8zzr4AL4qvaf/SCNTZDSHXAUXVqlajGXVxdmg7O2JWOeusjFw==
X-Received: by 2002:ad4:4743:: with SMTP id c3mr5638484qvx.62.1605716838439;
        Wed, 18 Nov 2020 08:27:18 -0800 (PST)
From: Doug Goldstein <cardoe@cardoe.com>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] ci: drop building on CentOS 6
Date: Wed, 18 Nov 2020 10:27:06 -0600
Message-Id: <20201118162706.66551-1-cardoe@cardoe.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

CentOS 6 is no longer supported by upstream so we cannot test against it
for future Xen releases.

Signed-off-by: Doug Goldstein <cardoe@cardoe.com>
---
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Ian Jackson <iwj@xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
---
 automation/gitlab-ci/build.yaml | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 1e61d30c85..4bd1cfc1c0 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -176,16 +176,6 @@ centos-7-gcc-debug:
   variables:
     CONTAINER: centos:7
 
-centos-6-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: centos:6
-
-centos-6-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: centos:6
-
 debian-jessie-clang:
   extends: .clang-x86-64-build
   variables:
-- 
2.21.0 (Apple Git-122)



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:39:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:39:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30096.59829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQUT-0002YZ-An; Wed, 18 Nov 2020 16:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30096.59829; Wed, 18 Nov 2020 16:39:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQUT-0002YS-7l; Wed, 18 Nov 2020 16:39:29 +0000
Received: by outflank-mailman (input) for mailman id 30096;
 Wed, 18 Nov 2020 16:39:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PiVl=EY=cardoe.com=cardoe@srs-us1.protection.inumbo.net>)
 id 1kfQUS-0002Xi-7x
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:39:28 +0000
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92ed5e94-fe64-4ef1-befa-107b880a24b1;
 Wed, 18 Nov 2020 16:39:27 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id l2so2434179qkf.0
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 08:39:27 -0800 (PST)
Received: from doug-macbook.local ([2600:1700:7b90:52f0:fcda:1820:8a59:569b])
 by smtp.gmail.com with ESMTPSA id
 k188sm5134259qkd.98.2020.11.18.08.39.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 18 Nov 2020 08:39:26 -0800 (PST)
X-Inumbo-ID: 92ed5e94-fe64-4ef1-befa-107b880a24b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cardoe.com; s=google;
        h=subject:to:references:from:message-id:date:user-agent:mime-version
         :in-reply-to:content-language:content-transfer-encoding;
        bh=P2PUiBIwlS7hTlFqmI9iau/Je8dDPQzg5I4bed6wHOI=;
        b=ap5kweSwJK4gLRSeQAaLk2vvGUK+y3k0dx0EN4YlYY42l9T5pccNlfe8IVQg2//15q
         /Kld8FNKumoPF4kj5rT2SrBlEnkXGxnDc63fGfGbjIXMFkTlJFsMYiFJEPijdw9I0L90
         zrBNfeUdYZ6uUHBGnfDO8SExNE+h/7eyXXRh8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=P2PUiBIwlS7hTlFqmI9iau/Je8dDPQzg5I4bed6wHOI=;
        b=O5qh3xxZ8d4Aq3rW+qUph7xj1qRurwlYvoNwH3t3X9HVu5ZPUwfnmvNmrFZwsLyBoV
         11S0vIWoMYeuzCBfDs1vydkqiqj4Ex0f6xo1qL6kCMbwjjOticsxUSfGU5xOVYS6fkWp
         v/EsyxujaxyXACr2Pw16KISd5BRzOuHZQ7rPYmYKdnnMac1Mes0+RGD3KhRq1BQgEA1o
         MzzfpT2jkvGZcrA42ztcsTBY+jzs8aL7W6xqJO/YDoBY/I7q3wOog4qDuVxrsRW1BNXj
         sTtwRgHMnPySh9wmlcZpa7lw0VAXBVU2ypeitX5sgMElTUNFM4slQV1IQrXYLFkfuNmf
         XXSA==
X-Gm-Message-State: AOAM531FcOqbXR4DHP1/nVs91aNWCZjgEdaVoqcqtoWYXHayqy9QX8Fo
	9v5gpcln1vT3ZeTHTUrR7CndpoTZCtHRNA==
X-Google-Smtp-Source: ABdhPJyFlJH0ypwvT+hnhd/rkvCkm/occccPDzW0f0TGblWdTZHyFTufZd/Udf6LisG9qeKcHQ/6Rw==
X-Received: by 2002:a05:620a:140d:: with SMTP id d13mr5810065qkj.470.1605717567101;
        Wed, 18 Nov 2020 08:39:27 -0800 (PST)
Subject: Re: [PATCH v1 1/4] automation/scripts/containerize: fix
 DOCKER_CMD=podman
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1605636799.git.edvin.torok@citrix.com>
 <28469d0fea059a51694c6fa3b5bd3971696a4f13.1605636800.git.edvin.torok@citrix.com>
From: Doug Goldstein <cardoe@cardoe.com>
Message-ID: <29264173-4a1d-d973-cf8d-759ee96bbf2d@cardoe.com>
Date: Wed, 18 Nov 2020 10:39:25 -0600
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <28469d0fea059a51694c6fa3b5bd3971696a4f13.1605636800.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 11/17/20 12:24 PM, Edwin Török wrote:
> On CentOS 8 with SELinux, containerize doesn't work at all:
> 
> Make sure that the source code and SSH agent directories are passed on
> with SELinux relabeling enabled.
> (`--security-opt label=disable` would be another option)
> 
> Signed-off-by: Edwin Török <edvin.torok@citrix.com>

Looks reasonable.

Acked-by: Doug Goldstein <cardoe@cardoe.com>
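[Editorial note: a minimal sketch of the bind-mount relabeling the patch
description refers to. The function name, the SRC_DIR variable, and the
/build mount point are hypothetical, not taken from the real containerize
script. The `:z` volume suffix asks podman/docker to relabel the mounted
content for container access; `--security-opt label=disable` would turn
SELinux labeling off for the container instead.]

```shell
# Hypothetical helper: assemble container run arguments with an SELinux
# relabeling (":z") bind mount for the source tree. SRC_DIR and /build
# are assumed names for illustration only.
build_args() {
    echo "run --rm -v ${SRC_DIR:-$PWD}:/build:z -w /build"
}

# Example: print the arguments that podman/docker would be given.
build_args
```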


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:39:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:39:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30097.59841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQUf-0002c3-J3; Wed, 18 Nov 2020 16:39:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30097.59841; Wed, 18 Nov 2020 16:39:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQUf-0002bu-Fx; Wed, 18 Nov 2020 16:39:41 +0000
Received: by outflank-mailman (input) for mailman id 30097;
 Wed, 18 Nov 2020 16:39:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=135G=EY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfQUe-0002bL-4H
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:39:40 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cd111f8-76ea-4c9d-b93a-54e9472fe8c9;
 Wed, 18 Nov 2020 16:39:39 +0000 (UTC)
X-Inumbo-ID: 3cd111f8-76ea-4c9d-b93a-54e9472fe8c9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605717579;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=cIqBOwyWtYipTi7x6RKeIF9zCwnkxFiBwPSwymY1Pcg=;
  b=Bamf3iW/V4ohRwTlsTh+603e2lP9qLqe3nrWp2Fp6P0gu1Sp+MiiTNfa
   R6j3bMxniA5P50YTvZmdrQgzpkJFFWCXT21dZskpsuN/KGD32qolE1jVy
   JSp6Ujwh+dUfPIbEmv47PQJTVACfCWNtXX13wgVpidEG78UM6r5WICNg7
   E=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: AXxwPL8FIVb3/4eKlZAlo1psc6ltiyMwBh4OrCIzLXi+Z5NeRY1hWvo+J/NYtL7iZwk6z73bqW
 v/rphDkXg7CWC5DpL7Fl+sGd1EDH82QwpudBl7IurZrQaCVtov5GCsyWVvL5W0hSAWeL1U6ZzD
 DdtCsd+NmP/qYPepddAtxz/Gl25OLAXz2w2ozgI+uJvENGyGX/sR3ZYz/jGBA5EZIeyi75ix4S
 n3HnAUvf8fnQSLRvTjUQibyhWqjtvbg2JHbE3GU/ZXZR0YM9A0KxogULlpEUPAKUfMLcacahKm
 Xnk=
X-SBRS: None
X-MesageID: 32591259
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,488,1596513600"; 
   d="scan'208";a="32591259"
Subject: Re: [PATCH] ci: drop building on CentOS 6
To: Doug Goldstein <cardoe@cardoe.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20201118162706.66551-1-cardoe@cardoe.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9d24beae-1bcf-5a05-5c1e-a0cd45dfedd7@citrix.com>
Date: Wed, 18 Nov 2020 16:39:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201118162706.66551-1-cardoe@cardoe.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 18/11/2020 16:27, Doug Goldstein wrote:
> CentOS 6 is no longer supported by upstream so we cannot test against it
> for future Xen releases.
>
> Signed-off-by: Doug Goldstein <cardoe@cardoe.com>
> ---
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: George Dunlap <george.dunlap@citrix.com>
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Do we want to drop the dockerfiles as well?  We probably also want to
drop one line from containerize.

I can fix on commit if you're happy with this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:40:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30107.59852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQVX-0003Qe-TY; Wed, 18 Nov 2020 16:40:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30107.59852; Wed, 18 Nov 2020 16:40:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQVX-0003QW-QX; Wed, 18 Nov 2020 16:40:35 +0000
Received: by outflank-mailman (input) for mailman id 30107;
 Wed, 18 Nov 2020 16:40:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PiVl=EY=cardoe.com=cardoe@srs-us1.protection.inumbo.net>)
 id 1kfQVW-0003QO-0F
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:40:34 +0000
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ce7615a-fdcf-4e80-b6e2-8b40eaf9350f;
 Wed, 18 Nov 2020 16:40:33 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id i12so2101555qtj.0
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 08:40:33 -0800 (PST)
Received: from doug-macbook.local ([2600:1700:7b90:52f0:fcda:1820:8a59:569b])
 by smtp.gmail.com with ESMTPSA id
 y44sm17248319qtb.50.2020.11.18.08.40.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 18 Nov 2020 08:40:32 -0800 (PST)
X-Inumbo-ID: 9ce7615a-fdcf-4e80-b6e2-8b40eaf9350f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cardoe.com; s=google;
        h=subject:to:references:from:message-id:date:user-agent:mime-version
         :in-reply-to:content-language:content-transfer-encoding;
        bh=w1Rh0RZEBiXnIYfEPCG0oVkDIRGmJh8N/wsHoBI5mrI=;
        b=LDV/hWDTl4mdRMGKl68VTnFoxSth/J3Ckbj2ZBxWCuV2J31pps2c/ZEp9UvMHS+fM4
         6xEYU+w3ye6+gDIAucrVT9AOBtrIhrwVkaUszR4EvpRycg0v76Ortc1PTaa4r0L/x58I
         zS5N0tQPHZaqhsqILEDxADgDVYNx3Ps60qIWY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=w1Rh0RZEBiXnIYfEPCG0oVkDIRGmJh8N/wsHoBI5mrI=;
        b=KrDe7JB4Kn3yeAub1fNjZOE+0UeKvy8sgti4t/qZMep5uXP1JgNcjLQDNYaeosdJ/U
         YwkI0B7+W6GEmD6ziAh3ch5As1+Wz9uemzmHFxrurKqcKtmpVyZ04EqXzJ18x8274XZI
         5ykbzxrUDCAgsr3B1c1XTTl4The1WZhIlaoMvIN++PAgnbDjYmkLCVXG7xpY0Vg2c9ON
         E98OdZW8jO8Xva0xlN0NLF5p49bXI3F+ZkPQHtvr/2oc2jDrAUfcuz8XTzBi73GR3TBd
         1QFSC7/ife8l0PUryk8rjcHqm5sPm6xo2uHN/yNr1ylrd5fneamEfbQqZkb/ftmLzHTt
         Q+iQ==
X-Gm-Message-State: AOAM53077OIy65GcgSBMM9YRDCvoMxsPWIa9X3SCTJLzi4c1inhie/eQ
	R+iBFJcoqu7QvimOtAkqLIXqoXsCamebQA==
X-Google-Smtp-Source: ABdhPJwbJ+d6g63DdTHRsf3pF+9LL9xyPkkltsiL04CLrAqk9a/nLehN7IgTTCSaC8+E5n4wHOrZJQ==
X-Received: by 2002:ac8:787:: with SMTP id l7mr5478250qth.137.1605717632922;
        Wed, 18 Nov 2020 08:40:32 -0800 (PST)
Subject: Re: [PATCH v1 2/4] automation/: add Ubuntu:focal container
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1605636799.git.edvin.torok@citrix.com>
 <42b2b80779e264d60fa3daf01110fece34f00696.1605636800.git.edvin.torok@citrix.com>
From: Doug Goldstein <cardoe@cardoe.com>
Message-ID: <6d97bb1c-dce1-0813-6c8b-0f4ca223dc51@cardoe.com>
Date: Wed, 18 Nov 2020 10:40:31 -0600
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <42b2b80779e264d60fa3daf01110fece34f00696.1605636800.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 11/17/20 12:24 PM, Edwin Török wrote:
> Signed-off-by: Edwin Török <edvin.torok@citrix.com>

Looks good. Do you have permissions to push the container or do you need
me to?

Acked-by: Doug Goldstein <cardoe@cardoe.com>


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:43:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30112.59865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQYA-0003jo-C7; Wed, 18 Nov 2020 16:43:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30112.59865; Wed, 18 Nov 2020 16:43:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQYA-0003jh-8O; Wed, 18 Nov 2020 16:43:18 +0000
Received: by outflank-mailman (input) for mailman id 30112;
 Wed, 18 Nov 2020 16:43:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PiVl=EY=cardoe.com=cardoe@srs-us1.protection.inumbo.net>)
 id 1kfQY9-0003jb-Bv
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:43:17 +0000
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85c13aec-8045-46c3-97d1-f7985819122c;
 Wed, 18 Nov 2020 16:43:16 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id p12so2064096qtp.7
 for <xen-devel@lists.xenproject.org>; Wed, 18 Nov 2020 08:43:16 -0800 (PST)
Received: from doug-macbook.local ([2600:1700:7b90:52f0:fcda:1820:8a59:569b])
 by smtp.gmail.com with ESMTPSA id
 g123sm10208612qkd.135.2020.11.18.08.43.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 18 Nov 2020 08:43:15 -0800 (PST)
X-Inumbo-ID: 85c13aec-8045-46c3-97d1-f7985819122c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cardoe.com; s=google;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=n0+UbWi80WPkeLpVmxvdvkL0jkpg6ZmFMVGBtN4prO0=;
        b=AQ4Ma1lo6rI3rfSb3cwI6cbxSxgYKslN2jaonu99oSswID5UzwoDNwCidpfy/7roQW
         mC+Z0Cf0JTZ7xqVeltvyfAxZkdPOhoOv7GSwQLMIbO1yjA7Lt0padowjpUS/SxvFprLm
         wOLzK50io9TXP9xTb7ixCOze+MwPbuPX8NYDA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=n0+UbWi80WPkeLpVmxvdvkL0jkpg6ZmFMVGBtN4prO0=;
        b=WCbxATpLDBT3j3qBaBw+AEfjY6Wps0sipv7iuI8R1AvuBEiFIQQFSrVz/NrDmxJY7Y
         iXr88zd495j2c3BoXmesFqeQR+t64tkcnR1XXU8TqWICOqdo1aVWLTLZeSJWeuFAI6sd
         7MXAcOmPYQlex9k3ax2fsROTJTxcy7kCCtHOim/5l7ktOFVWjC8pD1dJos/W9MTAOvcm
         /CmSwuDAquYrA21jQfI+J8q/k9OEM8fdpun27ZFTlAODbiN/iuHq3aM8l9H4QWgRkrPX
         TRl9MwL1s52V9NuXRo+YTAQSu9vDcPO2QKie8WvVQ9CGRYacNq+R1rNqkxEzWlz5A/95
         bBGQ==
X-Gm-Message-State: AOAM533ElGTpGgnmMaWzviRQb0xpp+CwiDoRK8KltvqE5LD39e6OBjL2
	mYX4DOVQhCH58NPOxtFQCHHzNQ==
X-Google-Smtp-Source: ABdhPJybt0mTTaVo54toN+R3e7L6+VyvqC8d8Oo3cv4CYf5TjymegGOck1LcoSVHnKL3rmvsQ1U0VQ==
X-Received: by 2002:ac8:6c2a:: with SMTP id k10mr5269048qtu.89.1605717796502;
        Wed, 18 Nov 2020 08:43:16 -0800 (PST)
Subject: Re: [PATCH] ci: drop building on CentOS 6
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201118162706.66551-1-cardoe@cardoe.com>
 <9d24beae-1bcf-5a05-5c1e-a0cd45dfedd7@citrix.com>
From: Doug Goldstein <cardoe@cardoe.com>
Message-ID: <8ed3788f-41af-8330-341b-7829e580d9af@cardoe.com>
Date: Wed, 18 Nov 2020 10:43:14 -0600
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9d24beae-1bcf-5a05-5c1e-a0cd45dfedd7@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 11/18/20 10:39 AM, Andrew Cooper wrote:
> On 18/11/2020 16:27, Doug Goldstein wrote:
>> CentOS 6 is no longer supported by upstream so we cannot test against it
>> for future Xen releases.
>>
>> Signed-off-by: Doug Goldstein <cardoe@cardoe.com>
>> ---
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: George Dunlap <george.dunlap@citrix.com>
>> CC: Ian Jackson <iwj@xenproject.org>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Julien Grall <julien@xen.org>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Wei Liu <wl@xen.org>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Do we want to drop the dockerfiles as well?  We probably also want to
> drop one line from containerise as well.
> 
> I can fix on commit if you're happy with this.
> 
> ~Andrew
> 

Yes. That would make sense, and I should have included that. Thank you.


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:43:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30119.59877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQYp-0003r0-Lx; Wed, 18 Nov 2020 16:43:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30119.59877; Wed, 18 Nov 2020 16:43:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQYp-0003qt-IG; Wed, 18 Nov 2020 16:43:59 +0000
Received: by outflank-mailman (input) for mailman id 30119;
 Wed, 18 Nov 2020 16:43:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qNHr=EY=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kfQYn-0003qk-Un
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:43:58 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24d469d3-28bc-4aec-9c21-b5b625fb08f1;
 Wed, 18 Nov 2020 16:43:57 +0000 (UTC)
X-Inumbo-ID: 24d469d3-28bc-4aec-9c21-b5b625fb08f1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605717836;
  h=from:to:subject:date:message-id:references:in-reply-to:
   content-id:content-transfer-encoding:mime-version;
  bh=R6hIYOG45qFa7PRlOtL0Zgx3uECGHfegjxHCVTZ9LcE=;
  b=MF4KvfkI3u4YdCacyT4YkQGdyGWviXYYtRveKszaCHoPbO5cxtmifx6Q
   lKTfBJ9cUrCaZ0mV7wwa0KLtn33ahs/O2kE0bUx1ofSHaYRL+P3LepF7R
   SYzFcKeGzkiQo0+CEpfRI7YfJ7T/DtwPqpPhzk9WHkh3qr7azqTWPES9j
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: XsbIFo28PLWqy78t02dO4uq0zrCxLKuvUjb62qSFC+kx7Tj3c2vFIuOm9JTKiZUo8WIVNGtv7g
 zWJPEeLnGql+NxVOTrPte4HdEleh5/1V5tZccwn0pzoNcyUr+CyQIXlbhIV/Y2DyATblT/LoQ0
 0uboqIQjrQx45on38tBR4+ztxV/0rpX9DFsVgU6Z7J3OHgrA8xEvkJ1EPVaT/CsQrDwL1nuGB4
 5nq0tyNetaAsagr+iU3XSJzXcLYH1f8p8MoQd/qZbFKu2WxZnId5da2AW1WvLw+9RKAAE5id6K
 JNI=
X-SBRS: None
X-MesageID: 31681163
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,488,1596513600"; 
   d="scan'208";a="31681163"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J/IoTvhrXqESemEOucfchzZ8AyjXc7rpcxjak8PxrJFDg3oTHnGD0dd1Rowi6OQ1lYSEPwMWS3lY+B1wQQEI73Vmbwl07lVKdOGE1Itbw2G/A1f9XYC85YS5UNzylDeUUKoz+9mRL0G6V9GPvULvIruSPkLKIq7Nd/4DQKk2zPypTT9VpSeN9trJNwYdzzGMjNfsZe4UmgxIDxqTufjaBr8xJfqWzFwGk+MBmZrzHT698kJSlqtzBDcPr6scW7l7NOb7tRHDernBPKM5q5Sri0SUGCviCOinaq9Czi5hoVcaMTcbOJ53pbKKeTXlruPNFnAkLZzknp0+WKRRR4uo1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R6hIYOG45qFa7PRlOtL0Zgx3uECGHfegjxHCVTZ9LcE=;
 b=SrR4ab0Z94sne1JWY9qLs18NL1F+b84/zdwjL32mVnRNGXWzt9VfQ2Fd9V42NiBJq2TOUoDjN8KEBCyUv0Qrr3u5BK3TR5Lr1qYyx01olMwxJjkb/NFgeUgWFcKwDVtuLTIGCmr5BIZ/PxDe0aaWQBNIfU4gmSyV6+iWED8Ts3PqKekVvSUa3dcm0iTYTY6lOT716gGAngcVMPTlFa2cPYnMQkDHULJC4dZ+RGRjSTe75R+fkRasNrQyUJLKR/RL5vXJfmAJTK8uRjHNxjPhmUJEcsY1jwPJJb5NGIzmM/1mH99wDzWLRMtyRBsnMjm4s8Cqor0G0aA/8ouHiLQkUw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R6hIYOG45qFa7PRlOtL0Zgx3uECGHfegjxHCVTZ9LcE=;
 b=K7PEkT8KUcVv6w99t8r+vRWkOAEVChWEPJ0YsHynxQ/Aj7tUQGstCj5922rpkhOo79r9+X/DYrkGiZF/u1ypvLWonajkJGJ3042X1lqA87Ayx5oytq/2Z4FY2I5oodGv1bDI7iwwIahIWOoZ+ziQOlMUy69w/Rm6HjB38OiBBgU=
From: Edwin Torok <edvin.torok@citrix.com>
To: "cardoe@cardoe.com" <cardoe@cardoe.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 2/4] automation/: add Ubuntu:focal container
Thread-Topic: [PATCH v1 2/4] automation/: add Ubuntu:focal container
Thread-Index: AQHWvQ7kTNnWBFxzmkalEKaHqdd2WKnOGPCAgAAA8YA=
Date: Wed, 18 Nov 2020 16:43:53 +0000
Message-ID: <7ae96297b3f489e5cf391b2296742c989e41378a.camel@citrix.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
	 <42b2b80779e264d60fa3daf01110fece34f00696.1605636800.git.edvin.torok@citrix.com>
	 <6d97bb1c-dce1-0813-6c8b-0f4ca223dc51@cardoe.com>
In-Reply-To: <6d97bb1c-dce1-0813-6c8b-0f4ca223dc51@cardoe.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bd7dcc89-d1bc-4e6e-1179-08d88be122d3
x-ms-traffictypediagnostic: BYAPR03MB3765:
x-microsoft-antispam-prvs: <BYAPR03MB37656BB6B1167AC3F12CC1C99BE10@BYAPR03MB3765.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5516;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <AD9122F5782F4A4E863C4D1279754C65@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bd7dcc89-d1bc-4e6e-1179-08d88be122d3
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Nov 2020 16:43:54.0209
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3765
X-OriginatorOrg: citrix.com

On Wed, 2020-11-18 at 10:40 -0600, Doug Goldstein wrote:
> 
> 
> On 11/17/20 12:24 PM, Edwin Török wrote:
> > Signed-off-by: Edwin Török <edvin.torok@citrix.com>
> 
> Looks good. Do you have permissions to push the container or do you
> need
> me to?
> 
> Acked-by: Doug Goldstein <cardoe@cardoe.com>

Thanks, if you could push it that'd be great.
I don't have any special permissions on the gitlab registry.

Best regards,
--Edwin


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 16:47:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 16:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30126.59888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQbi-00041Z-42; Wed, 18 Nov 2020 16:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30126.59888; Wed, 18 Nov 2020 16:46:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQbi-00041S-1A; Wed, 18 Nov 2020 16:46:58 +0000
Received: by outflank-mailman (input) for mailman id 30126;
 Wed, 18 Nov 2020 16:46:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfQbh-00041N-2U
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:46:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfQbg-0005yC-7e; Wed, 18 Nov 2020 16:46:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfQbg-0006Xf-0h; Wed, 18 Nov 2020 16:46:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQbh-00041N-2U
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 16:46:57 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=782+a74npD7n9sMy3z9W+cUZeDuA7R91709ruM3+gfA=; b=f8ILJmigIoa9dJhwSJXWsRHOb/
	GftHAUaf9J2H+fKZVf9+9P7PmefitwNQTgIcXqemQP65+wmt559JIUbg98fy7/GLsJOJ4MRVCpKtx
	K1Ct/yVXgJsWi/kHeH4xhlf5OjWVSXlnZRocNa6obEGiAaj/PvADn0QeT/e6QF7HSoLY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQbg-0005yC-7e; Wed, 18 Nov 2020 16:46:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQbg-0006Xf-0h; Wed, 18 Nov 2020 16:46:56 +0000
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
 <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
 <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
 <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
 <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>
 <383f2f1f-96c1-1634-519f-3526019f4f48@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4dda402d-62de-cce5-c205-12f4d13ec623@xen.org>
Date: Wed, 18 Nov 2020 16:46:54 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <383f2f1f-96c1-1634-519f-3526019f4f48@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 13/11/2020 05:18, Jürgen Groß wrote:
> On 12.11.20 22:38, Stefano Stabellini wrote:
>> On Thu, 12 Nov 2020, Jan Beulich wrote:
>>> On 12.11.2020 13:50, Jürgen Groß wrote:
>>>> Any further comments, or even better, Acks?
>>>
>>> To be honest I'd prefer to have at least one of the people Cc-ed
>>> minimally indicate they consider this a good idea. No need for a
>>> close review or such, just a basic opinion. Anyone?
>>
>> I see Jan's point that it is not clear how much this is going to help in
>> production. However, it is not going to hurt either, and I have been
>> told a few times recently that debugging Xen is not easy. Anything that
>> helps in that regard would be good. So I think this patch would be an
>> improvement.
>>
> 
> This patch is an effort to get better diagnostic data in case of
> Xen crashes from our largest customer, so clearly intended for
> production use.
>

I agree with this statement. When you get a crash from Xen in 
production, it can be useful to get as much information as possible 
dumped. Some of the information printed by the debugkeys cannot be 
retrieved from a crashkernel.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30137.59909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQoh-0005zL-DS; Wed, 18 Nov 2020 17:00:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30137.59909; Wed, 18 Nov 2020 17:00:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQoh-0005zE-A0; Wed, 18 Nov 2020 17:00:23 +0000
Received: by outflank-mailman (input) for mailman id 30137;
 Wed, 18 Nov 2020 17:00:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfQof-0005z2-Fs
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:00:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfQob-0006H6-C3; Wed, 18 Nov 2020 17:00:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfQoa-0002z2-Vc; Wed, 18 Nov 2020 17:00:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQof-0005z2-Fs
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:00:21 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vYgPz77aatxOKfMFV00Xc3Exjd37I7NhhUHKvf9lVlg=; b=ECttsMDiKWIPZeSYkInbSw/2Pf
	rl3TJlzWGu7zr51xT+oZqXzA7vHykZVovpTyrsPK5B1PEh3/HiNkk0AdJLkAeB8vgsmZR9LtsD5DX
	tIB8Cs5YUIoVL+RXFR0jITD5fvozC3/FApLn7b1ortxJgjTlZrDhlAs1aUoqHQ2WasAk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQob-0006H6-C3; Wed, 18 Nov 2020 17:00:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQoa-0002z2-Vc; Wed, 18 Nov 2020 17:00:17 +0000
Subject: Re: [PATCH v2 1/8] lib: split _ctype[] into its own object, under
 lib/
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <8bf44bbd-8c39-7ab1-3ccb-52bf3744592b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0bdd27a2-f770-260b-298b-bceba9b28adc@xen.org>
Date: Wed, 18 Nov 2020 17:00:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <8bf44bbd-8c39-7ab1-3ccb-52bf3744592b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:16, Jan Beulich wrote:
> This is, besides for tidying, in preparation of then starting to use an
> archive rather than an object file for generic library code which
> arch-es (or even specific configurations within a single arch) may or
> may not need.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   xen/Makefile     |  3 ++-
>   xen/Rules.mk     |  2 +-
>   xen/common/lib.c | 29 -----------------------------
>   xen/lib/Makefile |  1 +
>   xen/lib/ctype.c  | 38 ++++++++++++++++++++++++++++++++++++++
>   5 files changed, 42 insertions(+), 31 deletions(-)
>   create mode 100644 xen/lib/ctype.c
> 
> diff --git a/xen/Makefile b/xen/Makefile
> index bf0c804d4352..73bdc326c549 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -331,6 +331,7 @@ _clean: delete-unfresh-files
>   	$(MAKE) $(clean) include
>   	$(MAKE) $(clean) common
>   	$(MAKE) $(clean) drivers
> +	$(MAKE) $(clean) lib
>   	$(MAKE) $(clean) xsm
>   	$(MAKE) $(clean) crypto
>   	$(MAKE) $(clean) arch/arm
> @@ -414,7 +415,7 @@ include/asm-$(TARGET_ARCH)/asm-offsets.h: arch/$(TARGET_ARCH)/asm-offsets.s
>   	  echo ""; \
>   	  echo "#endif") <$< >$@
>   
> -SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers test
> +SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers lib test
>   define all_sources
>       ( find include/asm-$(TARGET_ARCH) -name '*.h' -print; \
>         find include -name 'asm-*' -prune -o -name '*.h' -print; \
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index 891c94e6ad00..333e19bec343 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -36,7 +36,7 @@ TARGET := $(BASEDIR)/xen
>   # Note that link order matters!
>   ALL_OBJS-y               += $(BASEDIR)/common/built_in.o
>   ALL_OBJS-y               += $(BASEDIR)/drivers/built_in.o
> -ALL_OBJS-$(CONFIG_X86)   += $(BASEDIR)/lib/built_in.o
> +ALL_OBJS-y               += $(BASEDIR)/lib/built_in.o
>   ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
>   ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
>   ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
> diff --git a/xen/common/lib.c b/xen/common/lib.c
> index b2b799da44c5..a224efa8f6e8 100644
> --- a/xen/common/lib.c
> +++ b/xen/common/lib.c
> @@ -1,37 +1,8 @@
> -
> -#include <xen/ctype.h>
>   #include <xen/lib.h>
>   #include <xen/types.h>
>   #include <xen/init.h>
>   #include <asm/byteorder.h>
>   
> -/* for ctype.h */
> -const unsigned char _ctype[] = {
> -    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 0-7 */
> -    _C,_C|_S,_C|_S,_C|_S,_C|_S,_C|_S,_C,_C,         /* 8-15 */
> -    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 16-23 */
> -    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 24-31 */
> -    _S|_SP,_P,_P,_P,_P,_P,_P,_P,                    /* 32-39 */
> -    _P,_P,_P,_P,_P,_P,_P,_P,                        /* 40-47 */
> -    _D,_D,_D,_D,_D,_D,_D,_D,                        /* 48-55 */
> -    _D,_D,_P,_P,_P,_P,_P,_P,                        /* 56-63 */
> -    _P,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U,      /* 64-71 */
> -    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 72-79 */
> -    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 80-87 */
> -    _U,_U,_U,_P,_P,_P,_P,_P,                        /* 88-95 */
> -    _P,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L,      /* 96-103 */
> -    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 104-111 */
> -    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 112-119 */
> -    _L,_L,_L,_P,_P,_P,_P,_C,                        /* 120-127 */
> -    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 128-143 */
> -    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 144-159 */
> -    _S|_SP,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,   /* 160-175 */
> -    _P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,       /* 176-191 */
> -    _U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,       /* 192-207 */
> -    _U,_U,_U,_U,_U,_U,_U,_P,_U,_U,_U,_U,_U,_U,_U,_L,       /* 208-223 */
> -    _L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,       /* 224-239 */
> -    _L,_L,_L,_L,_L,_L,_L,_P,_L,_L,_L,_L,_L,_L,_L,_L};      /* 240-255 */
> -
>   /*
>    * A couple of 64 bit operations ported from FreeBSD.
>    * The code within the '#if BITS_PER_LONG == 32' block below, and no other
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 7019ca00e8fd..53b1da025e0d 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1 +1,2 @@
> +obj-y += ctype.o
>   obj-$(CONFIG_X86) += x86/
> diff --git a/xen/lib/ctype.c b/xen/lib/ctype.c
> new file mode 100644
> index 000000000000..7b233a335fdf
> --- /dev/null
> +++ b/xen/lib/ctype.c
> @@ -0,0 +1,38 @@
> +#include <xen/ctype.h>
> +
> +/* for ctype.h */
> +const unsigned char _ctype[] = {
> +    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 0-7 */
> +    _C,_C|_S,_C|_S,_C|_S,_C|_S,_C|_S,_C,_C,         /* 8-15 */
> +    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 16-23 */
> +    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 24-31 */
> +    _S|_SP,_P,_P,_P,_P,_P,_P,_P,                    /* 32-39 */
> +    _P,_P,_P,_P,_P,_P,_P,_P,                        /* 40-47 */
> +    _D,_D,_D,_D,_D,_D,_D,_D,                        /* 48-55 */
> +    _D,_D,_P,_P,_P,_P,_P,_P,                        /* 56-63 */
> +    _P,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U,      /* 64-71 */
> +    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 72-79 */
> +    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 80-87 */
> +    _U,_U,_U,_P,_P,_P,_P,_P,                        /* 88-95 */
> +    _P,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L,      /* 96-103 */
> +    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 104-111 */
> +    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 112-119 */
> +    _L,_L,_L,_P,_P,_P,_P,_C,                        /* 120-127 */
> +    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 128-143 */
> +    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 144-159 */
> +    _S|_SP,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,   /* 160-175 */
> +    _P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,       /* 176-191 */
> +    _U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,       /* 192-207 */
> +    _U,_U,_U,_U,_U,_U,_U,_P,_U,_U,_U,_U,_U,_U,_U,_L,       /* 208-223 */
> +    _L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,       /* 224-239 */
> +    _L,_L,_L,_L,_L,_L,_L,_P,_L,_L,_L,_L,_L,_L,_L,_L};      /* 240-255 */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> 
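The table being moved is a per-character classification bitmask: each entry encodes which character classes (control, digit, upper, lower, ...) that code point belongs to, so a class test is a single load and mask. The sketch below illustrates the technique with illustrative flag values and a hypothetical `demo_isdigit()` helper — Xen's real flag definitions and macros live in xen/include/xen/ctype.h:

```c
#include <assert.h>

/* Classification bits, mirroring the role of the _C/_D/_U/_L flags in
 * the _ctype[] table above (the values here are illustrative). */
#define CT_C  0x01  /* control character */
#define CT_D  0x04  /* decimal digit */
#define CT_U  0x20  /* upper-case letter */
#define CT_L  0x08  /* lower-case letter */

/* A small 128-entry table built the same way as _ctype[]:
 * one bitmask per character code. */
static unsigned char ctype_demo[128];

static void init_table(void)
{
    for (int c = 0; c < 32; c++)
        ctype_demo[c] = CT_C;
    for (int c = '0'; c <= '9'; c++)
        ctype_demo[c] = CT_D;
    for (int c = 'A'; c <= 'Z'; c++)
        ctype_demo[c] = CT_U;
    for (int c = 'a'; c <= 'z'; c++)
        ctype_demo[c] = CT_L;
}

/* isdigit()-style test: one table load and one mask,
 * no per-range comparisons. */
static int demo_isdigit(unsigned char c)
{
    return (ctype_demo[c] & CT_D) != 0;
}
```

Because the classification is data rather than code, moving the table into lib/ changes nothing for its consumers: the macros keep indexing the same array.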

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:06:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:06:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30143.59921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQuK-0006EK-65; Wed, 18 Nov 2020 17:06:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30143.59921; Wed, 18 Nov 2020 17:06:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfQuK-0006ED-35; Wed, 18 Nov 2020 17:06:12 +0000
Received: by outflank-mailman (input) for mailman id 30143;
 Wed, 18 Nov 2020 17:06:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfQuJ-0006E8-JA
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:06:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfQuH-0006OZ-0e; Wed, 18 Nov 2020 17:06:09 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfQuG-0003S9-IY; Wed, 18 Nov 2020 17:06:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQuJ-0006E8-JA
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:06:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/qFNyiA3BfnYruEBT82L9stlJ/cQVe0b/IUto/WKBBE=; b=ApldwFrtlGqJ+z56orykA3TL3l
	5FCNNpkmOQcnEKxgioNcorSjmTY9eRsOnV41rqlZ+auiDQgXRqqwVy7Uor9MwHe1EzE63uAYYGRGz
	pc9YcYS1YHDhF6gJZ47gx98091q0oGExH5sq7fsUZ+T3cRH2xOCJy4199rua2h3wcrSY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQuH-0006OZ-0e; Wed, 18 Nov 2020 17:06:09 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfQuG-0003S9-IY; Wed, 18 Nov 2020 17:06:08 +0000
Subject: Re: [PATCH v2 2/8] lib: collect library files in an archive
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <67740969-1ab1-db7f-5e3c-b15a20c1be8b@xen.org>
Date: Wed, 18 Nov 2020 17:06:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

(+ Anthony)

Hi Jan,

On 23/10/2020 11:17, Jan Beulich wrote:
> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
> just to avoid bloating binaries when only some arch-es and/or
> configurations need generic library routines, combine objects under lib/
> into an archive, which the linker then can pick the necessary objects
> out of.
> 
> Note that we can't use thin archives just yet, until we've raised the
> minimum required binutils version suitably.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>   xen/Rules.mk          | 33 +++++++++++++++++++++++++++------
>   xen/arch/arm/Makefile |  6 +++---
>   xen/arch/x86/Makefile |  8 ++++----
>   xen/lib/Makefile      |  3 ++-

IIRC, Anthony worked on the build systems. If I am right, it would be 
good to get a review from him.

>   4 files changed, 36 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index 333e19bec343..e59c7f213f77 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -41,12 +41,16 @@ ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
>   ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
>   ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
>   
> +ALL_LIBS-y               := $(BASEDIR)/lib/lib.a
> +
>   # Initialise some variables
> +lib-y :=
>   targets :=
>   CFLAGS-y :=
>   AFLAGS-y :=
>   
>   ALL_OBJS := $(ALL_OBJS-y)
> +ALL_LIBS := $(ALL_LIBS-y)
>   
>   SPECIAL_DATA_SECTIONS := rodata $(foreach a,1 2 4 8 16, \
>                                               $(foreach w,1 2 4, \
> @@ -60,7 +64,14 @@ include Makefile
>   # ---------------------------------------------------------------------------
>   
>   quiet_cmd_ld = LD      $@
> -cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
> +cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
> +               --start-group $(filter %.a,$(real-prereqs)) --end-group
> +
> +# Archive
> +# ---------------------------------------------------------------------------
> +
> +quiet_cmd_ar = AR      $@
> +cmd_ar = rm -f $@; $(AR) cPrs $@ $(real-prereqs)
>   
>   # Objcopy
>   # ---------------------------------------------------------------------------
> @@ -86,6 +97,10 @@ obj-y    := $(patsubst %/, %/built_in.o, $(obj-y))
>   # tell kbuild to descend
>   subdir-obj-y := $(filter %/built_in.o, $(obj-y))
>   
> +# Libraries are always collected in one lib file.
> +# Filter out objects already built-in
> +lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))
> +
>   $(filter %.init.o,$(obj-y) $(obj-bin-y) $(extra-y)): CFLAGS-y += -DINIT_SECTIONS_ONLY
>   
>   ifeq ($(CONFIG_COVERAGE),y)
> @@ -129,19 +144,25 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
>   c_flags += $(CFLAGS-y)
>   a_flags += $(CFLAGS-y) $(AFLAGS-y)
>   
> -built_in.o: $(obj-y) $(extra-y)
> +built_in.o: $(obj-y) $(if $(strip $(lib-y)),lib.a) $(extra-y)
>   ifeq ($(obj-y),)
>   	$(CC) $(c_flags) -c -x c /dev/null -o $@
>   else
>   ifeq ($(CONFIG_LTO),y)
> -	$(LD_LTO) -r -o $@ $(filter-out $(extra-y),$^)
> +	$(LD_LTO) -r -o $@ $(filter-out lib.a $(extra-y),$^)
>   else
> -	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
> +	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out lib.a $(extra-y),$^)
>   endif
>   endif
>   
> +lib.a: $(lib-y) FORCE
> +	$(call if_changed,ar)
> +
>   targets += built_in.o
> -targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
> +ifneq ($(strip $(lib-y)),)
> +targets += lib.a
> +endif
> +targets += $(filter-out $(subdir-obj-y), $(obj-y) $(lib-y)) $(extra-y)
>   targets += $(MAKECMDGOALS)
>   
>   built_in_bin.o: $(obj-bin-y) $(extra-y)
> @@ -155,7 +176,7 @@ endif
>   PHONY += FORCE
>   FORCE:
>   
> -%/built_in.o: FORCE
> +%/built_in.o %/lib.a: FORCE
>   	$(MAKE) -f $(BASEDIR)/Rules.mk -C $* built_in.o
>   
>   %/built_in_bin.o: FORCE
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 296c5e68bbc3..612a83b315c8 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -90,14 +90,14 @@ endif
>   
>   ifeq ($(CONFIG_LTO),y)
>   # Gather all LTO objects together
> -prelink_lto.o: $(ALL_OBJS)
> -	$(LD_LTO) -r -o $@ $^
> +prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
> +	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
>   
>   # Link it with all the binary objects
>   prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o
>   	$(call if_changed,ld)
>   else
> -prelink.o: $(ALL_OBJS) FORCE
> +prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
>   	$(call if_changed,ld)
>   endif
>   
> diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
> index 9b368632fb43..8f2180485b2b 100644
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -132,8 +132,8 @@ EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o
>   
>   ifeq ($(CONFIG_LTO),y)
>   # Gather all LTO objects together
> -prelink_lto.o: $(ALL_OBJS)
> -	$(LD_LTO) -r -o $@ $^
> +prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
> +	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
>   
>   # Link it with all the binary objects
>   prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $(EFI_OBJS-y) FORCE
> @@ -142,10 +142,10 @@ prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $
>   prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o FORCE
>   	$(call if_changed,ld)
>   else
> -prelink.o: $(ALL_OBJS) $(EFI_OBJS-y) FORCE
> +prelink.o: $(ALL_OBJS) $(ALL_LIBS) $(EFI_OBJS-y) FORCE
>   	$(call if_changed,ld)
>   
> -prelink-efi.o: $(ALL_OBJS) FORCE
> +prelink-efi.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
>   	$(call if_changed,ld)
>   endif
>   
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index 53b1da025e0d..b8814361d63e 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1,2 +1,3 @@
> -obj-y += ctype.o
>   obj-$(CONFIG_X86) += x86/
> +
> +lib-y += ctype.o

May I ask why all the code movement of ctype was done separately from this patch?

Cheers,

-- 
Julien Grall
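The scheme in Jan's patch — collecting `lib-y` objects into an archive and letting the linker extract only the members that are actually referenced — can be sketched as a standalone Makefile fragment. This is a simplified illustration with hypothetical object names, not Xen's actual rules (those live in xen/Rules.mk):

```make
# Sketch: archive-based selective linking (object names hypothetical).
AR ?= ar
LD ?= ld

lib-y := ctype.o list_sort.o   # candidate library objects
obj-y := main.o                # objects that are always linked

# Collect library objects into an archive; 'P' and 's' match the
# quoted cmd_ar rule (use full member names, build a symbol index).
lib.a: $(lib-y)
	rm -f $@
	$(AR) cPrs $@ $^

# --start-group/--end-group lets the linker resolve symbols from the
# archive regardless of order, pulling in only the members it needs;
# unreferenced members (e.g. list_sort.o) stay out of the binary.
prelink.o: $(obj-y) lib.a
	$(LD) -r -o $@ $(filter-out %.a,$^) \
	      --start-group $(filter %.a,$^) --end-group
```

This is why the commit message can drop per-feature switches like CONFIG_NEEDS_LIST_SORT: an arch that never references the symbol simply never causes the archive member to be extracted.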


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:31:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:31:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30154.59932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRIt-0000hf-9d; Wed, 18 Nov 2020 17:31:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30154.59932; Wed, 18 Nov 2020 17:31:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRIt-0000hY-6g; Wed, 18 Nov 2020 17:31:35 +0000
Received: by outflank-mailman (input) for mailman id 30154;
 Wed, 18 Nov 2020 17:31:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRIr-0000hT-TR
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:31:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRIp-0006tN-Hy; Wed, 18 Nov 2020 17:31:31 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRIp-0005Wz-5X; Wed, 18 Nov 2020 17:31:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRIr-0000hT-TR
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:31:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=coRpcSN/8SvVHMibKC/EhRm5k8nLDVMXvfuk2U/gWM4=; b=xvNdoV8f7OOcdn8nNf4m5p2g0J
	EEYgt5+5CMt4549Vsu6iUs2zl+qyrxS3k5f8xW8KNZab8lbJgWytGWiadeo0SwlcAcHaq0whe/OZE
	h5tbr93GJW0VEp9ZQzVfxFqwBLM2bCky/zKW7ft8dIBgVHYOrA6+spMsh4GtefDcyGPY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRIp-0006tN-Hy; Wed, 18 Nov 2020 17:31:31 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRIp-0005Wz-5X; Wed, 18 Nov 2020 17:31:31 +0000
Subject: Re: [PATCH v2 2/8] lib: collect library files in an archive
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8ff6d14d-fb3f-79d0-888a-3da2b68d1aad@xen.org>
Date: Wed, 18 Nov 2020 17:31:28 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:17, Jan Beulich wrote:
> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
> just to avoid bloating binaries when only some arch-es and/or
> configurations need generic library routines, combine objects under lib/
> into an archive, which the linker then can pick the necessary objects
> out of.
> 
> Note that we can't use thin archives just yet, until we've raised the
> minimum required binutils version suitably.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>   xen/Rules.mk          | 33 +++++++++++++++++++++++++++------
>   xen/arch/arm/Makefile |  6 +++---
>   xen/arch/x86/Makefile |  8 ++++----
>   xen/lib/Makefile      |  3 ++-
>   4 files changed, 36 insertions(+), 14 deletions(-)
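
As an aside, the selective linking the commit message relies on is easy to demonstrate in isolation. The sketch below (hypothetical file names, plain host `cc`/`ar` assumed, not the Xen build itself) shows that the linker only pulls archive members that resolve an undefined symbol, so unreferenced objects stay out of the final binary:

```shell
# Build two objects, archive them, and link a program referencing only one.
cat > used.c   <<'EOF'
int used(void) { return 1; }
EOF
cat > unused.c <<'EOF'
int unused(void) { return 2; }
EOF
cat > main.c   <<'EOF'
int used(void);
int main(void) { return used() - 1; }
EOF
cc -c used.c unused.c          # produces used.o and unused.o
ar rcs lib.a used.o unused.o   # combine both into an archive
cc -o prog main.c lib.a        # linker extracts only the members it needs
# unused() should not appear in the final binary:
if nm prog | grep -q ' T unused$'; then echo kept; else echo dropped; fi
```

This is the same effect the patch is after: moving objects from `built_in.o` (linked unconditionally) into `lib.a` (picked apart on demand) drops dead library code without per-option Kconfig plumbing.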

I can't build Xen on Arm after this patch:

   AR      lib.a
aarch64-linux-gnu-ld    -EL  -r -o built_in.o
aarch64-linux-gnu-ld: no input files
/home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/Rules.mk:154: recipe for 
target 'built_in.o' failed
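
The error itself is generic `ld` behaviour rather than anything Arm-specific: `ld -r` with an empty object list always fails this way, which is presumably what happens once every object under lib/ moves into `lib.a` and the `built_in.o` link rule expands to no inputs. A quick check (plain host `ld` assumed):

```shell
# 'ld -r' with no input objects fails outright; this is what the Rules.mk
# link rule runs into when its object list expands to nothing.
ld -r -o built_in.o || echo "ld failed as expected (no input files)"
```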

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:38:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:38:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30163.59944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRPi-0000xj-35; Wed, 18 Nov 2020 17:38:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30163.59944; Wed, 18 Nov 2020 17:38:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRPh-0000xc-W4; Wed, 18 Nov 2020 17:38:37 +0000
Received: by outflank-mailman (input) for mailman id 30163;
 Wed, 18 Nov 2020 17:38:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRPg-0000xX-Vp
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:38:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRPe-00073P-OF; Wed, 18 Nov 2020 17:38:34 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRPe-0006DN-Dt; Wed, 18 Nov 2020 17:38:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRPg-0000xX-Vp
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:38:37 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8WEfpjuNjOBv86cVdjdze1V6RaHuJunwVeR6bDLFSCI=; b=ixEmZ7er0r5mOHe50m2FYo0YZ2
	f4clazaE9ysXlKLvRunQsb68r1FDvGUF4xJJ90KD+NqTeFurIcJNGYECb5yyv6nkX22oi3JSugmRh
	7j5N9pd8jM7vbAmRHd1c6wFFimUVBsft8Ni7u6lRJZAvE7YUA5YnCNNHeU3wRIDtBx4c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRPe-00073P-OF; Wed, 18 Nov 2020 17:38:34 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRPe-0006DN-Dt; Wed, 18 Nov 2020 17:38:34 +0000
Subject: Re: [PATCH v2 3/8] lib: move list sorting code
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <19d28bcc-9e5b-4902-8e8d-ae95fbc560a6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aaf7183b-a843-57e3-84c9-7408a6ebfedf@xen.org>
Date: Wed, 18 Nov 2020 17:38:31 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <19d28bcc-9e5b-4902-8e8d-ae95fbc560a6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:17, Jan Beulich wrote:
> Build the source file always, as by putting it into an archive it still
> won't be linked into final binaries when not needed. This way possible
> build breakage will be easier to notice, and it's more consistent with
> us unconditionally building other library kind of code (e.g. sort() or
> bsearch()).
> 
> While moving the source file, take the opportunity and drop the
> pointless EXPORT_SYMBOL().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

It looks like the commit message was duplicated.

> Build the source file always, as by putting it into an archive it still
> won't be linked into final binaries when not needed. This way possible
> build breakage will be easier to notice, and it's more consistent with
> us unconditionally building other library kind of code (e.g. sort() or
> bsearch()).
> 
> While moving the source file, take the opportunity and drop the
> pointless EXPORT_SYMBOL().

You are mentioning the EXPORT_SYMBOL() but...

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>   xen/arch/arm/Kconfig                        | 4 +---
>   xen/common/Kconfig                          | 3 ---
>   xen/common/Makefile                         | 1 -
>   xen/lib/Makefile                            | 1 +
>   xen/{common/list_sort.c => lib/list-sort.c} | 2 --
>   5 files changed, 2 insertions(+), 9 deletions(-)
>   rename xen/{common/list_sort.c => lib/list-sort.c} (98%)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 277738826581..cb7e2523b6de 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -56,9 +56,7 @@ config HVM
>           def_bool y
>   
>   config NEW_VGIC
> -	bool
> -	prompt "Use new VGIC implementation"
> -	select NEEDS_LIST_SORT
> +	bool "Use new VGIC implementation"
>   	---help---
>   
>   	This is an alternative implementation of the ARM GIC interrupt
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 3e2cf2508899..0661328a99e7 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -66,9 +66,6 @@ config MEM_ACCESS
>   config NEEDS_LIBELF
>   	bool
>   
> -config NEEDS_LIST_SORT
> -	bool
> -
>   menu "Speculative hardening"
>   
>   config SPECULATIVE_HARDEN_ARRAY
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 083f62acb634..52d3c2aa9384 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -21,7 +21,6 @@ obj-y += keyhandler.o
>   obj-$(CONFIG_KEXEC) += kexec.o
>   obj-$(CONFIG_KEXEC) += kimage.o
>   obj-y += lib.o
> -obj-$(CONFIG_NEEDS_LIST_SORT) += list_sort.o
>   obj-$(CONFIG_LIVEPATCH) += livepatch.o livepatch_elf.o
>   obj-$(CONFIG_MEM_ACCESS) += mem_access.o
>   obj-y += memory.o
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index b8814361d63e..764f3624b5f9 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -1,3 +1,4 @@
>   obj-$(CONFIG_X86) += x86/
>   
>   lib-y += ctype.o
> +lib-y += list-sort.o
> diff --git a/xen/common/list_sort.c b/xen/lib/list-sort.c
> similarity index 98%
> rename from xen/common/list_sort.c
> rename to xen/lib/list-sort.c
> index af2b2f6519f1..f8d8bbf28178 100644
> --- a/xen/common/list_sort.c
> +++ b/xen/lib/list-sort.c
> @@ -15,7 +15,6 @@
>    * this program; If not, see <http://www.gnu.org/licenses/>.
>    */
>   
> -#include <xen/lib.h>

... this is not mentioned.

>   #include <xen/list.h>
>   
>   #define MAX_LIST_LENGTH_BITS 20
> @@ -154,4 +153,3 @@ void list_sort(void *priv, struct list_head *head,
>   
>   	merge_and_restore_back_links(priv, cmp, head, part[max_lev], list);
>   }
> -EXPORT_SYMBOL(list_sort);
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:39:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:39:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30168.59957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRQu-0001CR-Fr; Wed, 18 Nov 2020 17:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30168.59957; Wed, 18 Nov 2020 17:39:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRQu-0001CK-B0; Wed, 18 Nov 2020 17:39:52 +0000
Received: by outflank-mailman (input) for mailman id 30168;
 Wed, 18 Nov 2020 17:39:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRQt-0001CE-3x
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:39:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRQq-00074H-Oo; Wed, 18 Nov 2020 17:39:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRQq-0006Kp-IW; Wed, 18 Nov 2020 17:39:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRQt-0001CE-3x
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:39:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=LIjtSl1sh6PSwW8rL6BTN1rxi+eCV/A2IbRRZIwE8lo=; b=Mw1V5hkvjOmZ4WW27MJ3EWD3HN
	RB/p4c/qQmmlINd1yaZ2RRB3klQB1TGo/uuPzu11OwC+ISTc/xjeJXN3Wnxc5n/ovpGBYmiEmJGJG
	FXK9m4TFePu/hWGViocYLDe99YY/Axd2457csHaqms64aUBXWVa8V49a1KXVLalPIEg8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRQq-00074H-Oo; Wed, 18 Nov 2020 17:39:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRQq-0006Kp-IW; Wed, 18 Nov 2020 17:39:48 +0000
Subject: Re: [PATCH v2 4/8] lib: move parse_size_and_unit()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ec053ed2-7952-f057-94cb-366cec2a0613@xen.org>
Date: Wed, 18 Nov 2020 17:39:46 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:17, Jan Beulich wrote:
> ... into its own CU, to build it into an archive.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> ... into its own CU, to build it into an archive.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:42:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:42:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30175.59968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRTl-00022m-Sv; Wed, 18 Nov 2020 17:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30175.59968; Wed, 18 Nov 2020 17:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRTl-00022f-Q0; Wed, 18 Nov 2020 17:42:49 +0000
Received: by outflank-mailman (input) for mailman id 30175;
 Wed, 18 Nov 2020 17:42:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRTl-00022V-Eq
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:42:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRTj-00079C-DX; Wed, 18 Nov 2020 17:42:47 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRTj-0006eK-7o; Wed, 18 Nov 2020 17:42:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRTl-00022V-Eq
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:42:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NFq7AppJuxIWgtdJ0C+aY5M9nma0lDoU03FwtyeQwD8=; b=vwl5fUu6AA4E6A9UYh4s+ftfq1
	M4PQy/km+tHA0AyfWDO1gtFA+jRVen26UwKvziHqpmYRvVmCNRb0bchJ64N2hTmcp1aGcEV0+YtGh
	+4t0DJouiXxfS+S5ESa0FwAnR+lgyONF/IKVEfnsKLFjJXdhabtHBCc9KPqFSumAati0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRTj-00079C-DX; Wed, 18 Nov 2020 17:42:47 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRTj-0006eK-7o; Wed, 18 Nov 2020 17:42:47 +0000
Subject: Re: [PATCH v2 5/8] lib: move init_constructors()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <c7b091f4-b2a4-965e-3a1a-de26f45f0d5d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cc2d63ea-3e6f-2e27-7433-2ec006d855a9@xen.org>
Date: Wed, 18 Nov 2020 17:42:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <c7b091f4-b2a4-965e-3a1a-de26f45f0d5d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:18, Jan Beulich wrote:
> ... into its own CU, for being unrelated to other things in
> common/lib.c. For now it gets compiled into built_in.o rather than
> lib.a, as it gets used unconditionally by Arm's as well as x86'es
> {,__}start_xen(). 

AFAICT, parse_size_and_unit() is also used unconditionally on both 
architectures.

I think we want to follow the same approach everywhere. If there is no 
major downside to building an archive, then we should build everything 
in lib/ into the archive.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:46:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30184.59984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRXN-0002FJ-Eb; Wed, 18 Nov 2020 17:46:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30184.59984; Wed, 18 Nov 2020 17:46:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRXN-0002FC-BC; Wed, 18 Nov 2020 17:46:33 +0000
Received: by outflank-mailman (input) for mailman id 30184;
 Wed, 18 Nov 2020 17:46:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRXM-0002F6-9O
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:46:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRXJ-0007DO-Eo; Wed, 18 Nov 2020 17:46:29 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRXJ-000793-4d; Wed, 18 Nov 2020 17:46:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRXM-0002F6-9O
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:46:32 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Y6NuYGWWy57XvYFSOtY7ipXnIsY+BgksXFq/bf3BCk8=; b=DwRbkCgs+IoWjG+D97HvH/L4pe
	3rfW4Cgn3urF7wB6SBzB8R43wNx5nNxaX71RPRP1PBagv+7t/fTNzBWJODv14zk5716I8jE3UGU1l
	/mNXt5oWa1Ljn9GI38QOemuKqrkkM/Z2zR9XxcfEG/CVX235EPnXseraeuKJDgJLwtus=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRXJ-0007DO-Eo; Wed, 18 Nov 2020 17:46:29 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRXJ-000793-4d; Wed, 18 Nov 2020 17:46:29 +0000
Subject: Re: [PATCH v2 6/8] lib: move rbtree code
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <e16975d3-c34b-1b3f-743f-1abe13aa06f7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <eae3d402-b1d4-6fcb-06b8-ea337a26ab50@xen.org>
Date: Wed, 18 Nov 2020 17:46:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <e16975d3-c34b-1b3f-743f-1abe13aa06f7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:18, Jan Beulich wrote:
> Build this code into an archive, which results in not linking it into
> x86 final binaries. This saves about 1.5k of dead code.
> 
> While moving the source file, take the opportunity and drop the
> pointless EXPORT_SYMBOL().

Given this code is borrowed from Linux, I don't think we want to remove 
them. This is to make it easier to re-sync.

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>   xen/common/Makefile          | 1 -
>   xen/lib/Makefile             | 1 +
>   xen/{common => lib}/rbtree.c | 9 +--------
>   3 files changed, 2 insertions(+), 9 deletions(-)
>   rename xen/{common => lib}/rbtree.c (98%)
> 
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 52d3c2aa9384..7bb779f780a1 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -33,7 +33,6 @@ obj-y += preempt.o
>   obj-y += random.o
>   obj-y += rangeset.o
>   obj-y += radix-tree.o
> -obj-y += rbtree.o
>   obj-y += rcupdate.o
>   obj-y += rwlock.o
>   obj-y += shutdown.o
> diff --git a/xen/lib/Makefile b/xen/lib/Makefile
> index ba1fb7bcdee2..b469d2dff7b8 100644
> --- a/xen/lib/Makefile
> +++ b/xen/lib/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_X86) += x86/
>   lib-y += ctype.o
>   lib-y += list-sort.o
>   lib-y += parse-size.o
> +lib-y += rbtree.o
> diff --git a/xen/common/rbtree.c b/xen/lib/rbtree.c
> similarity index 98%
> rename from xen/common/rbtree.c
> rename to xen/lib/rbtree.c
> index 9f5498a89d4e..95e045d52461 100644
> --- a/xen/common/rbtree.c
> +++ b/xen/lib/rbtree.c
> @@ -25,7 +25,7 @@
>   #include <xen/rbtree.h>
>   
>   /*
> - * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
> + * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree

This looks like a spurious change.

>    *
>    *  1) A node is either red or black
>    *  2) The root is black
> @@ -223,7 +223,6 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
>   		}
>   	}
>   }
> -EXPORT_SYMBOL(rb_insert_color);
>   
>   static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
>   {
> @@ -467,7 +466,6 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
>   	if (rebalance)
>   		__rb_erase_color(rebalance, root);
>   }
> -EXPORT_SYMBOL(rb_erase);
>   
>   /*
>    * This function returns the first node (in sort order) of the tree.
> @@ -483,7 +481,6 @@ struct rb_node *rb_first(const struct rb_root *root)
>   		n = n->rb_left;
>   	return n;
>   }
> -EXPORT_SYMBOL(rb_first);
>   
>   struct rb_node *rb_last(const struct rb_root *root)
>   {
> @@ -496,7 +493,6 @@ struct rb_node *rb_last(const struct rb_root *root)
>   		n = n->rb_right;
>   	return n;
>   }
> -EXPORT_SYMBOL(rb_last);
>   
>   struct rb_node *rb_next(const struct rb_node *node)
>   {
> @@ -528,7 +524,6 @@ struct rb_node *rb_next(const struct rb_node *node)
>   
>   	return parent;
>   }
> -EXPORT_SYMBOL(rb_next);
>   
>   struct rb_node *rb_prev(const struct rb_node *node)
>   {
> @@ -557,7 +552,6 @@ struct rb_node *rb_prev(const struct rb_node *node)
>   
>   	return parent;
>   }
> -EXPORT_SYMBOL(rb_prev);
>   
>   void rb_replace_node(struct rb_node *victim, struct rb_node *new,
>   		     struct rb_root *root)
> @@ -574,4 +568,3 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
>   	/* Copy the pointers/colour from the victim to the replacement */
>   	*new = *victim;
>   }
> -EXPORT_SYMBOL(rb_replace_node);
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:57:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30192.59999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRho-0003LN-Le; Wed, 18 Nov 2020 17:57:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30192.59999; Wed, 18 Nov 2020 17:57:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRho-0003LG-IG; Wed, 18 Nov 2020 17:57:20 +0000
Received: by outflank-mailman (input) for mailman id 30192;
 Wed, 18 Nov 2020 17:57:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRhm-0003L8-Ta
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:57:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRhi-0007S9-C8; Wed, 18 Nov 2020 17:57:14 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRhi-0007vV-4S; Wed, 18 Nov 2020 17:57:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRhm-0003L8-Ta
	for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:57:18 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=wq6g38iDSwALtsgXy3LrZsNe3AXacyhNesRVAx8yGfs=; b=mHZWdPsl1n3g3aQPfgZGX00mDF
	fZPUNMQ4MyzuJc9YNBUBeLw0QTtC3xlj88tAbNLSJw2fO30Fqrij15t/X08e7nQdFV0NKnWKlPNyh
	IiLdT1TheWABiHkt9G8UENsmevbQCofk5fJf1+f28nf7efbabJljyKDoaJs2Kp+FcRx4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRhi-0007S9-C8; Wed, 18 Nov 2020 17:57:14 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfRhi-0007vV-4S; Wed, 18 Nov 2020 17:57:14 +0000
Subject: Re: [PATCH v2 4/8] lib: move parse_size_and_unit()
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
 <ec053ed2-7952-f057-94cb-366cec2a0613@xen.org>
Message-ID: <ac498c1b-b2bc-8ad9-1a46-001ae701d230@xen.org>
Date: Wed, 18 Nov 2020 17:57:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <ec053ed2-7952-f057-94cb-366cec2a0613@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 18/11/2020 17:39, Julien Grall wrote:
> Hi Jan,
> 
> On 23/10/2020 11:17, Jan Beulich wrote:
>> ... into its own CU, to build it into an archive.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>

I forgot to mention the commit message was duplicated.

>> ... into its own CU, to build it into an archive.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 17:57:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 17:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30193.60011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRhv-0003Nb-VQ; Wed, 18 Nov 2020 17:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30193.60011; Wed, 18 Nov 2020 17:57:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRhv-0003NU-RP; Wed, 18 Nov 2020 17:57:27 +0000
Received: by outflank-mailman (input) for mailman id 30193;
 Wed, 18 Nov 2020 17:57:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=135G=EY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfRhu-0003Mx-EH
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 17:57:26 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 853d4d6c-8c65-4304-a2f6-0ebe9697d8a9;
 Wed, 18 Nov 2020 17:57:25 +0000 (UTC)
X-Inumbo-ID: 853d4d6c-8c65-4304-a2f6-0ebe9697d8a9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605722245;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=aOeFD5/4w46KTt9yFuqNTO1fr11pDy3GcDph52x7WIk=;
  b=M/PHzvJabvVRAJjepEZ6o7XfQ2q4DDgG7YXTIS5RRBQhJLKMxKj+RfDB
   AB4eVJIriisrz8NlZBB1qXgUmKgu2Jzm7xv3FM063xCf0mZ/w/6xQhH9z
   9ZpzgePmea83xTpyv9+pBRI+rt47Y6SgJxQkZhHd7g5cMvXPuuhrqkdtk
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: dp7NwnwVGeIuElNjqV35SwOO0XtlmgBMBd6Srloet5Gyd7X7amGhI4N559UKQjRdaz8i8Yo0+k
 sx9jzGmS8iFN/HyVQmQjQFVaQexMSf5WANoXqz73E97qYUQVo9Vecvv1gzH8pZm103MpaB2nYA
 +AensI5P7rL5pXJLo8oP/c1JJYzKeMLsY8B7cz3kgag4H/Nk5DjmsCO8RrdpMx1muSLw3zOVEN
 LanqqKPwnxZfhwebiSihoYX2fkDcwHj96klyK2IC8fsrvtMFgifAjkSmhduovPhUwZX7VEGKf/
 dss=
X-SBRS: None
X-MesageID: 31454861
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,488,1596513600"; 
   d="scan'208";a="31454861"
Subject: Re: [PATCH v1 2/4] automation/: add Ubuntu:focal container
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: Doug Goldstein <cardoe@cardoe.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
 <42b2b80779e264d60fa3daf01110fece34f00696.1605636800.git.edvin.torok@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e74b1d8b-0318-a956-b09a-555795374393@citrix.com>
Date: Wed, 18 Nov 2020 17:57:19 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <42b2b80779e264d60fa3daf01110fece34f00696.1605636800.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 17/11/2020 18:24, Edwin Török wrote:
> Signed-off-by: Edwin Török <edvin.torok@citrix.com>
> ---
>  automation/build/ubuntu/focal.dockerfile | 50 ++++++++++++++++++++++++
>  automation/scripts/containerize          |  1 +
>  2 files changed, 51 insertions(+)
>  create mode 100644 automation/build/ubuntu/focal.dockerfile
>
> diff --git a/automation/build/ubuntu/focal.dockerfile b/automation/build/ubuntu/focal.dockerfile
> new file mode 100644
> index 0000000000..1f014b67bc
> --- /dev/null
> +++ b/automation/build/ubuntu/focal.dockerfile
> @@ -0,0 +1,50 @@
> +FROM ubuntu:20.04
> +LABEL maintainer.name="The Xen Project " \
> +      maintainer.email="xen-devel@lists.xenproject.org"
> +
> +ENV DEBIAN_FRONTEND=noninteractive
> +ENV USER root
> +
> +RUN mkdir /build
> +WORKDIR /build
> +
> +# build depends
> +RUN apt-get update && \
> +    apt-get --quiet --yes install \
> +        build-essential \
> +        zlib1g-dev \
> +        libncurses5-dev \
> +        libssl-dev \
> +        python-dev \

Python2 is legacy in Focal, and shouldn't be necessary for 4.14 and later.

> +        python3-dev \
> +        xorg-dev \
> +        uuid-dev \
> +        libyajl-dev \
> +        libaio-dev \
> +        libglib2.0-dev \
> +        clang \
> +        libpixman-1-dev \
> +        pkg-config \
> +        flex \
> +        bison \
> +        gettext \
> +        acpica-tools \
> +        bin86 \
> +        bcc \
> +        liblzma-dev \
> +        libc6-dev-i386 \
> +        libnl-3-dev \
> +        ocaml-nox \
> +        libfindlib-ocaml-dev \
> +        libsystemd-dev \
> +        markdown \

We dropped markdown as a dependency a release or two ago.

Both these dependencies should be fine to drop, if we're happy not to
roll Focal testing out to all the older branches.

> +        transfig \
> +        pandoc \
> +        checkpolicy \
> +        wget \

The build has absolutely no business reaching out into the internet.

I'm tempted to forcibly clobber it in the main build script.  (Perhaps
this is best not conflated with the Focal change.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 18:09:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 18:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30205.60022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRtk-0004g1-31; Wed, 18 Nov 2020 18:09:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30205.60022; Wed, 18 Nov 2020 18:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRtj-0004fu-W4; Wed, 18 Nov 2020 18:09:39 +0000
Received: by outflank-mailman (input) for mailman id 30205;
 Wed, 18 Nov 2020 18:09:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRtj-0004fp-4U
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 18:09:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRtg-0007ma-So; Wed, 18 Nov 2020 18:09:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRtg-0000Zc-IH; Wed, 18 Nov 2020 18:09:36 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XHtTc7cHUy14smjiT7ljfGJGXfZsUWSLt2e7cqE3FnQ=; b=W5yexjq9gSXJ0bIEvYs+oRUyFy
	q2OdGJrNljC1Wn+Mq4qsL+cF0wXPBcY0Liof/iAfAIPCTULOokhKkcU7UYZe3/kxsR/YX4HXsuwMT
	MDBJEzAkb1Hxrcjlh9NEbY1D+y1Ey7jPYUv0HnaOEVZK1giZlwF5JN6qY1bn3PYIudBw=;
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
Date: Wed, 18 Nov 2020 18:09:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:19, Jan Beulich wrote:
> Convert this code to an inline function (backed by an instance in an
> archive in case the compiler decides against inlining), which results
> in not having it in x86 final binaries. This saves a little bit of dead
> code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Make the function an extern inline in its header.
> ---
>   xen/common/Makefile        |  1 -
>   xen/common/bsearch.c       | 51 --------------------------------------
>   xen/include/xen/compiler.h |  1 +
>   xen/include/xen/lib.h      | 42 ++++++++++++++++++++++++++++++-
>   xen/lib/Makefile           |  1 +
>   xen/lib/bsearch.c          | 13 ++++++++++
>   6 files changed, 56 insertions(+), 53 deletions(-)
>   delete mode 100644 xen/common/bsearch.c
>   create mode 100644 xen/lib/bsearch.c
> 
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 7bb779f780a1..d8519a2cc163 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -1,6 +1,5 @@
>   obj-$(CONFIG_ARGO) += argo.o
>   obj-y += bitmap.o
> -obj-y += bsearch.o
>   obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
>   obj-$(CONFIG_CORE_PARKING) += core_parking.o
>   obj-y += cpu.o
> diff --git a/xen/common/bsearch.c b/xen/common/bsearch.c
> deleted file mode 100644
> index 7090930aab5c..000000000000
> --- a/xen/common/bsearch.c
> +++ /dev/null
> @@ -1,51 +0,0 @@
> -/*
> - * A generic implementation of binary search for the Linux kernel
> - *
> - * Copyright (C) 2008-2009 Ksplice, Inc.
> - * Author: Tim Abbott <tabbott@ksplice.com>
> - *
> - * This program is free software; you can redistribute it and/or
> - * modify it under the terms of the GNU General Public License as
> - * published by the Free Software Foundation; version 2.
> - */
> -
> -#include <xen/lib.h>
> -
> -/*
> - * bsearch - binary search an array of elements
> - * @key: pointer to item being searched for
> - * @base: pointer to first element to search
> - * @num: number of elements
> - * @size: size of each element
> - * @cmp: pointer to comparison function
> - *
> - * This function does a binary search on the given array.  The
> - * contents of the array should already be in ascending sorted order
> - * under the provided comparison function.
> - *
> - * Note that the key need not have the same type as the elements in
> - * the array, e.g. key could be a string and the comparison function
> - * could compare the string with the struct's name field.  However, if
> - * the key and elements in the array are of the same type, you can use
> - * the same comparison function for both sort() and bsearch().
> - */
> -void *bsearch(const void *key, const void *base, size_t num, size_t size,
> -	      int (*cmp)(const void *key, const void *elt))
> -{
> -	size_t start = 0, end = num;
> -	int result;
> -
> -	while (start < end) {
> -		size_t mid = start + (end - start) / 2;
> -
> -		result = cmp(key, base + mid * size);
> -		if (result < 0)
> -			end = mid;
> -		else if (result > 0)
> -			start = mid + 1;
> -		else
> -			return (void *)base + mid * size;
> -	}
> -
> -	return NULL;
> -}
> diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
> index c0e0ee9f27be..2b7acdf3b188 100644
> --- a/xen/include/xen/compiler.h
> +++ b/xen/include/xen/compiler.h
> @@ -12,6 +12,7 @@
>   
>   #define inline        __inline__
>   #define always_inline __inline__ __attribute__ ((__always_inline__))
> +#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))

bsearch() is only used by Arm, and I haven't seen anyone complaining so 
far about the performance of I/O emulation.

Therefore, I am not convinced that there is enough justification to 
introduce a GNU attribute just for this patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 18:10:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 18:10:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30209.60035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRuJ-0005W7-CZ; Wed, 18 Nov 2020 18:10:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30209.60035; Wed, 18 Nov 2020 18:10:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRuJ-0005W0-8l; Wed, 18 Nov 2020 18:10:15 +0000
Received: by outflank-mailman (input) for mailman id 30209;
 Wed, 18 Nov 2020 18:10:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfRuI-0005Vc-HL
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 18:10:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRuG-0007ni-HB; Wed, 18 Nov 2020 18:10:12 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfRuG-0000m4-9H; Wed, 18 Nov 2020 18:10:12 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4Mtd7AefQzaO82XbeWrK8RVbvU/lHd473PmgSa596a4=; b=Z1TgBG10GYKQn0lxQ/oiIlcXd0
	n8vO/IkXBSkUgXSw4H+xizuKuly3VZcYaE/fuDTvU0a874kb3ICgEqK9N82blpaIy99oEB/AtCMOy
	FSHe3rZd08iBBd88A5RVySS4ICyP5cYrI0PVcnwhpdOwFcI1lXTGeu+imtghHRm12Bd4=;
Subject: Re: [PATCH v2 8/8] lib: move sort code
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <293585a3-5475-0c02-19ce-c2080b2deab1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1f2ec061-81ba-7084-9935-e5879841f42b@xen.org>
Date: Wed, 18 Nov 2020 18:10:10 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <293585a3-5475-0c02-19ce-c2080b2deab1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 11:19, Jan Beulich wrote:
> Build this code into an archive, partly paralleling bsearch().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 18:13:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 18:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30217.60047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRxN-0005lC-SX; Wed, 18 Nov 2020 18:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30217.60047; Wed, 18 Nov 2020 18:13:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfRxN-0005l5-OU; Wed, 18 Nov 2020 18:13:25 +0000
Received: by outflank-mailman (input) for mailman id 30217;
 Wed, 18 Nov 2020 18:13:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=135G=EY=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfRxM-0005l0-Om
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 18:13:24 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id efdd8961-66b8-46b1-bad1-ba518191c71b;
 Wed, 18 Nov 2020 18:13:23 +0000 (UTC)
X-Inumbo-ID: efdd8961-66b8-46b1-bad1-ba518191c71b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605723203;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=f4ynw6wksUdc+r4WzMkoLFih9xGIHRf2DUC66EGCEOA=;
  b=SBjo+VYi21POEybiX11A/YD3oz5MpqV8gy+ftbxOQPp+1ZrcEhqFsFsS
   1DxIoK9iN31UDD37GORtSXgVTFqBl7qBj027NzzNrLgoRXOiF2aVnVzr5
   CvzXuBXp5UOy/8oOaZw2CiWesvAYo2WLT7g7YP7m1hMmxeOOcAdbhLLIq
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: E6JFELmSpBxQy9swYkks9Tck3YBqP9xbUaaTHv2eF8HKz7VY+hgpPaXlmDm1IZdR006Wp84cxu
 fSkwY5ZiL8plHGYKgq3uy82+5PLGXGUbXxTJpCSSGcJnNVSvmTWk1p7vG8ia3v70jiS0oZiTUc
 pg+1mCLVlAjbX4jgl1AeKIJReBa/bpeX2OJh9OtpbxI8aTzSZcaUt6ON1Gan6VLjNzbtg4qz4g
 IrUW1LQ0hFGgOyNT2GXFZqN4Ivb9bvRd0dwiHhaQdVVrFO7rHoCMa1n3rsx7RiQIJf7Ln6QWsV
 ctw=
X-SBRS: None
X-MesageID: 32602623
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,488,1596513600"; 
   d="scan'208";a="32602623"
Subject: Re: [PATCH v1 4/4] tools/ocaml/libs/xc: backward compatible domid
 control at domain creation time
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1605636799.git.edvin.torok@citrix.com>
 <559929d2ae95f6527e5050051c917b7586182ad2.1605636800.git.edvin.torok@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <dc2e5f92-2528-1475-1513-cfb8d8c3339d@citrix.com>
Date: Wed, 18 Nov 2020 18:13:17 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <559929d2ae95f6527e5050051c917b7586182ad2.1605636800.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 17/11/2020 18:24, Edwin Török wrote:
> One can specify the domid to use when creating the domain, but this was hardcoded to 0.
>
> Keep the existing `domain_create` function (and the type of its parameters) as is to make
> backwards compatibility easier.
> Introduce a new `domain_create_domid` OCaml API that allows specifying the domid.
> A new version of xenopsd can choose to start using this, while old versions of xenopsd will keep
> building and using the old API.
>
> Controlling the domid can be useful during testing or migration.
>
> Signed-off-by: Edwin Török <edvin.torok@citrix.com>
> ---
>  tools/ocaml/libs/xc/xenctrl.ml      | 3 +++
>  tools/ocaml/libs/xc/xenctrl.mli     | 2 ++
>  tools/ocaml/libs/xc/xenctrl_stubs.c | 9 +++++++--
>  3 files changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
> index e878699b0a..9d720886e9 100644
> --- a/tools/ocaml/libs/xc/xenctrl.ml
> +++ b/tools/ocaml/libs/xc/xenctrl.ml
> @@ -182,6 +182,9 @@ let with_intf f =
>  external domain_create: handle -> domctl_create_config -> domid
>         = "stub_xc_domain_create"
>  
> +external domain_create_domid: handle -> domctl_create_config -> domid -> domid
> +       = "stub_xc_domain_create_domid"

Wouldn't this be better as handle -> domid -> domctl_create_config ->
domid ?

I'm not overwhelmed with the name, but
"domain_create_{specific,with}_domid" don't seem much better either.

> +
>  external domain_sethandle: handle -> domid -> string -> unit
>         = "stub_xc_domain_sethandle"
>  
> diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
> index e64907df8e..e629022901 100644
> --- a/tools/ocaml/libs/xc/xenctrl.mli
> +++ b/tools/ocaml/libs/xc/xenctrl.mli
> @@ -145,6 +145,8 @@ val close_handle: unit -> unit
>  
>  external domain_create : handle -> domctl_create_config -> domid
>    = "stub_xc_domain_create"
> +external domain_create_domid : handle -> domctl_create_config -> domid -> domid
> +  = "stub_xc_domain_create_domid"
>  external domain_sethandle : handle -> domid -> string -> unit = "stub_xc_domain_sethandle"
>  external domain_max_vcpus : handle -> domid -> int -> unit
>    = "stub_xc_domain_max_vcpus"
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index 94aba38a42..bb718fd164 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -175,7 +175,7 @@ static unsigned int ocaml_list_to_c_bitmap(value l)
>  	return val;
>  }
>  
> -CAMLprim value stub_xc_domain_create(value xch, value config)
> +CAMLprim value stub_xc_domain_create_domid(value xch, value config, value want_domid)
>  {
>  	CAMLparam2(xch, config);
>  	CAMLlocal2(l, arch_domconfig);
> @@ -191,7 +191,7 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
>  #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
>  #define VAL_ARCH                Field(config, 8)
>  
> -	uint32_t domid = 0;
> +	uint32_t domid = Int_val(want_domid);

wanted_domid would be a slightly better name, because it isn't ambiguous
with a boolean flag.

>  	int result;
>  	struct xen_domctl_createdomain cfg = {
>  		.ssidref = Int32_val(VAL_SSIDREF),
> @@ -262,6 +262,11 @@ CAMLprim value stub_xc_domain_create(value xch, value config)
>  	CAMLreturn(Val_int(domid));
>  }
>  
> +CAMLprim value stub_xc_domain_create(value xch, value config, value want_domid)
> +{
> +    return stub_xc_domain_create_domid(xch, config, Val_int(0));
> +}

Using
https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=36d94c17fa1e48cc9fb9ed15bc9a2237a1738bbb
as reverse inspiration, couldn't we do the insertion of 0 at the OCaml
level and avoid doubling up the C stub?

~Andrew

> +
>  CAMLprim value stub_xc_domain_max_vcpus(value xch, value domid,
>                                          value max_vcpus)
>  {



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 18:56:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 18:56:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30228.60065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfSd8-0001JH-Ew; Wed, 18 Nov 2020 18:56:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30228.60065; Wed, 18 Nov 2020 18:56:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfSd8-0001J7-8O; Wed, 18 Nov 2020 18:56:34 +0000
Received: by outflank-mailman (input) for mailman id 30228;
 Wed, 18 Nov 2020 18:56:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfSd6-0001Ix-ND; Wed, 18 Nov 2020 18:56:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfSd6-0000HN-GH; Wed, 18 Nov 2020 18:56:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfSd6-0007UQ-6l; Wed, 18 Nov 2020 18:56:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfSd6-0003Vf-4q; Wed, 18 Nov 2020 18:56:32 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e5VeyMW0ckLL2xnvd6nA36jSvIBVFM6WI3SxOXueqF8=; b=d91s9/PcvoKHfwyzvIYuYN7Kju
	VIam/bYCSIaeRmWiGwiqDbyXHFBQbp1WF0cZEw4MbCBLAtbmEJEzRqNIsgfor5NU/8jYGpnqTxmO3
	KlNb+cRC3o7jrqFWqMCCZEPb3e1Tdo7gPwsapIiJxG8DRwa2QWNJKsBIXkCxatHUQNPw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156860-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156860: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=22e323d115d8f26d5926c20c66e11f85a46837d7
X-Osstest-Versions-That:
    xen=5200fba9ce534fc55ec40ab622b6058600090415
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 18:56:32 +0000

flight 156860 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156860/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  22e323d115d8f26d5926c20c66e11f85a46837d7
baseline version:
 xen                  5200fba9ce534fc55ec40ab622b6058600090415

Last test of basis   156855  2020-11-18 12:00:35 Z    0 days
Testing same since   156860  2020-11-18 16:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5200fba9ce..22e323d115  22e323d115d8f26d5926c20c66e11f85a46837d7 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 19:10:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 19:10:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30237.60083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfSqF-0003Cu-N3; Wed, 18 Nov 2020 19:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30237.60083; Wed, 18 Nov 2020 19:10:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfSqF-0003CH-HK; Wed, 18 Nov 2020 19:10:07 +0000
Received: by outflank-mailman (input) for mailman id 30237;
 Wed, 18 Nov 2020 19:10:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfSqE-0003BQ-PV
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 19:10:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfSqC-0000b2-Cb; Wed, 18 Nov 2020 19:10:04 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfSqB-0005Zx-Sk; Wed, 18 Nov 2020 19:10:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZPDYQeEJMUeJ53IrIFmyaECuSyUuN6lz1LGnQKhaaOY=; b=PeZOptQ+HEnC90bBU/nDLY1FqO
	SLrQoH42sNEM6noPRIzyEPtEbzbVgqgnACvoBZkwoyQByaupx8R24vzd5fuK8bJjxOjyNxrPogya9
	dpEOqoaDnUx/l4sJDZFDBWGJTcTWZV6LxKh1Th917AuSGQi30xRXQ+eLQQZnB2TKJfe8=;
Subject: Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
Date: Wed, 18 Nov 2020 19:10:01 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 06/11/2020 07:11, Jan Beulich wrote:
> All of the array allocations in grant_table_init() can exceed a page's
> worth of memory, which xmalloc()-based interfaces aren't really suitable
> for after boot. 

I can see a few reasons why they are not suitable:
   - xmalloc() will allocate a full power-of-two order and then free the 
unused tail pages. This is pretty inefficient.
   - xmalloc() will allocate physically contiguous memory

It would be good to clarify which one you are referring to, because 
neither of them is a problem only after boot...

One thing to be aware of, though, is that the xv*() functions are going 
to be less efficient because they involve touching the page tables (at 
least until the work to map the direct map on demand is merged). In 
addition, on Arm, they will also use only 4K mappings (I have a TODO to 
fix that).

So I think we will need to be careful about when to use xmalloc() vs 
xvmalloc(). It might be worth outlining that in the documentation of xv*().

The current use in the grant-table code looks fine to me.

[...]

> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -7,6 +7,7 @@
>   #include <xen/spinlock.h>
>   #include <xen/types.h>
>   #include <xen/vmap.h>
> +#include <xen/xvmalloc.h>
>   #include <asm/page.h>
>   
>   static DEFINE_SPINLOCK(vm_lock);
> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>       return p;
>   }
>   
> -void vfree(void *va)
> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)

I don't think "unsigned int" is sufficient to cover big sizes. AFAICT, 
this is not a new problem in this code and seems to be a latent issue 
so far.

However, I feel that it is wrong to introduce a new set of allocation 
helpers that contain a flaw fixed in xm*alloc() only recently (see commit 
cf38b4926e2b "xmalloc: guard against integer overflow").

> -    unsigned int i, pages;
> +    unsigned int i;
>       struct page_info *pg;
>       PAGE_LIST_HEAD(pg_list);
> +
> +    ASSERT(pages);
> +
> +    for ( i = 0; i < pages; i++ )
> +    {
> +        pg = vmap_to_page(va + i * PAGE_SIZE);
> +        ASSERT(pg);
> +        page_list_add(pg, &pg_list);
> +    }
> +    vunmap(va);
> +
> +    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
> +        free_domheap_page(pg);
> +}
> +
> +void vfree(void *va)
> +{
> +    unsigned int pages;
>       enum vmap_region type = VMAP_DEFAULT;
>   
>       if ( !va )
> @@ -315,18 +334,71 @@ void vfree(void *va)
>           type = VMAP_XEN;
>           pages = vm_size(va, type);
>       }
> -    ASSERT(pages);
>   
> -    for ( i = 0; i < pages; i++ )
> +    _vfree(va, pages, type);
> +}
> +

[...]

> --- /dev/null
> +++ b/xen/include/xen/xvmalloc.h
> @@ -0,0 +1,70 @@
> +
> +#ifndef __XVMALLOC_H__
> +#define __XVMALLOC_H__
> +
> +#include <xen/cache.h>
> +#include <xen/types.h>
> +
> +/*
> + * Xen malloc/free-style interface.

It would be useful to emphasize that they should only be used if the 
caller does *not* need physically contiguous memory.

> + */
> +
> +/* Allocate space for typed object. */
> +#define xvmalloc(_type) ((_type *)_xvmalloc(sizeof(_type), __alignof__(_type)))
> +#define xvzalloc(_type) ((_type *)_xvzalloc(sizeof(_type), __alignof__(_type)))
> +
> +/* Allocate space for array of typed objects. */
> +#define xvmalloc_array(_type, _num) \
> +    ((_type *)_xvmalloc_array(sizeof(_type), __alignof__(_type), _num))
> +#define xvzalloc_array(_type, _num) \
> +    ((_type *)_xvzalloc_array(sizeof(_type), __alignof__(_type), _num))
> +
> +/* Allocate space for a structure with a flexible array of typed objects. */
> +#define xvzalloc_flex_struct(type, field, nr) \
> +    ((type *)_xvzalloc(offsetof(type, field[nr]), __alignof__(type)))
> +
> +#define xvmalloc_flex_struct(type, field, nr) \
> +    ((type *)_xvmalloc(offsetof(type, field[nr]), __alignof__(type)))
> +
> +/* Re-allocate space for a structure with a flexible array of typed objects. */
> +#define xvrealloc_flex_struct(ptr, field, nr)                          \
> +    ((typeof(ptr))_xvrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \
> +                             __alignof__(typeof(*(ptr)))))
> +
> +/* Allocate untyped storage. */
> +#define xvmalloc_bytes(_bytes) _xvmalloc(_bytes, SMP_CACHE_BYTES)
> +#define xvzalloc_bytes(_bytes) _xvzalloc(_bytes, SMP_CACHE_BYTES)
> +
> +/* Free any of the above. */
> +extern void xvfree(void *);
> +
> +/* Free an allocation, and zero the pointer to it. */
> +#define XVFREE(p) do { \
> +    xvfree(p);         \
> +    (p) = NULL;        \
> +} while ( false )
> +
> +/* Underlying functions */
> +extern void *_xvmalloc(size_t size, unsigned int align);
> +extern void *_xvzalloc(size_t size, unsigned int align);
> +extern void *_xvrealloc(void *ptr, size_t size, unsigned int align);
> +
> +static inline void *_xvmalloc_array(
> +    size_t size, unsigned int align, unsigned long num)
> +{
> +    /* Check for overflow. */
> +    if ( size && num > UINT_MAX / size )
> +        return NULL;
> +    return _xvmalloc(size * num, align);
> +}
> +
> +static inline void *_xvzalloc_array(
> +    size_t size, unsigned int align, unsigned long num)
> +{
> +    /* Check for overflow. */
> +    if ( size && num > UINT_MAX / size )
> +        return NULL;
> +    return _xvzalloc(size * num, align);
> +}
> +
> +#endif /* __XVMALLOC_H__ */
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:01:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:01:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30251.60104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfUZK-0005UK-3L; Wed, 18 Nov 2020 21:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30251.60104; Wed, 18 Nov 2020 21:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfUZK-0005UD-0I; Wed, 18 Nov 2020 21:00:46 +0000
Received: by outflank-mailman (input) for mailman id 30251;
 Wed, 18 Nov 2020 21:00:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hmm/=EY=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfUZI-0005U8-S4
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:00:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d645aca-18b7-44cf-be0e-f2afaa826add;
 Wed, 18 Nov 2020 21:00:44 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B35FE2075B;
 Wed, 18 Nov 2020 21:00:42 +0000 (UTC)
X-Inumbo-ID: 0d645aca-18b7-44cf-be0e-f2afaa826add
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605733243;
	bh=egciVJeNpgcLiVvye+pnZUkuZpAFnD3PDDiETXG6wHA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eOb8/U1A5BGWtv8oNrTzjJsDIe+lbfwXT2idH70Xp6swVsRMUZ1dPvptGT3uxoISO
	 SDcgWWVrT+F5Vu60BevSUUPBiozWthJT9NTkkXeeyWpAdg16X/h0iKKlICUm9v94dl
	 Cm8irhFOUTM2H9OKOKL6977zMHSmYewtSeOFIMJA=
Date: Wed, 18 Nov 2020 13:00:30 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    Bertrand.Marquis@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
In-Reply-To: <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com>
Message-ID: <alpine.DEB.2.21.2011181241310.11739@sstabellini-ThinkPad-T480s>
References: <20201118005051.26115-1-sstabellini@kernel.org> <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 18 Nov 2020, Jan Beulich wrote:
> On 18.11.2020 01:50, Stefano Stabellini wrote:
> > 1) It is not obvious that "Configure standard Xen features (expert
> > users)" is actually the famous EXPERT we keep talking about on xen-devel
> 
> Which can be addressed by simply changing the one prompt line.
> 
> > 2) It is not obvious when we need to enable EXPERT to get a specific
> > feature
> > 
> > In particular if you want to enable ACPI support so that you can boot
> > Xen on an ACPI platform, you have to enable EXPERT first. But searching
> > through the kconfig menu it is really not clear (type '/' and "ACPI"):
> > nothing in the description tells you that you need to enable EXPERT to
> > get the option.
> 
> And what causes this to be different once you switch to UNSUPPORTED?

Two things: firstly, it doesn't and shouldn't take an expert to enable
ACPI support, even if ACPI support is experimental. So calling it
UNSUPPORTED helps a lot. This is particularly relevant to the ARM Kconfig
options changed by this patch. Secondly, this patch is adding
"(UNSUPPORTED)" in the oneline prompt so that it becomes easy to match
it with the option you need to enable.


> > So this patch makes things easier by doing two things:
> > 
> > - introduce a new kconfig option UNSUPPORTED which is clearly to enable
> >   UNSUPPORTED features as defined by SUPPORT.md
> > 
> > - change EXPERT options to UNSUPPORTED where it makes sense: keep
> >   depending on EXPERT for features made for experts
> > 
> > - tag unsupported features by adding (UNSUPPORTED) to the one-line
> >   description
> 
> I am, btw, not fully convinced of the need for this redundancy. Wouldn't
> it be enough to have just EXPERT as a setting, but varying (<reason>)
> tokens in the prompt text?

I don't think so, for the same reasons written above: EXPERT should not
be gating things like ACPI. Moreover, the advantage of the tag in the
oneline prompt is that you can search for an option and figure out that
you need to enable UNSUPPORTED. It doesn't work if we use a different
tag. Just to get the idea, try to do "make menuconfig" and search for
"ARGO" with '/': you'll see "(UNSUPPORTED)". Then, if you search for
"UNSUPPORTED" you can find what you need to enable.


> > --- a/xen/Kconfig
> > +++ b/xen/Kconfig
> > @@ -34,8 +34,17 @@ config DEFCONFIG_LIST
> >  	option defconfig_list
> >  	default ARCH_DEFCONFIG
> >  
> > +config UNSUPPORTED
> > +	bool "Configure UNSUPPORTED features"
> > +	help
> > +	  This option allows unsupported Xen options to be enabled, which
> 
> I'd recommend against "enabled" - a control may also be there to allow
> disabling something.

I can change that.


> > +	  includes non-security-supported, experimental, and tech preview
> > +	  features as defined by SUPPORT.md. Xen binaries built with this
> > +	  option enabled are not security supported.
> 
> Overall I'm a little afraid of possible inverse implications: Anything
> _not_ dependent upon this option (and in particular anything not
> dependent upon any Kconfig control) may be considered supported then.
> 
> Also the last sentence is already present for EXPERT, 

I am happy to rephrase it. What about:

"This option allows certain unsupported Xen options to be changed, which
includes non-security-supported, experimental, and tech preview features
as defined by SUPPORT.md."


> > +	default n
> 
> I realize you likely merely copied what EXPERT has, but this "default n"
> is pretty pointless and hence would better be omitted imo.

OK


> > --- a/xen/arch/x86/Kconfig
> > +++ b/xen/arch/x86/Kconfig
> > @@ -102,8 +102,8 @@ config HVM
> >  	  If unsure, say Y.
> >  
> >  config XEN_SHSTK
> > -	bool "Supervisor Shadow Stacks"
> > -	depends on HAS_AS_CET_SS && EXPERT
> > +	bool "Supervisor Shadow Stacks (UNSUPPORTED)"
> > +	depends on HAS_AS_CET_SS && UNSUPPORTED
> >  	default y
> 
> Andrew, I think I did ask on v1 already: Do we need to continue to
> consider this unsupported? While perhaps not a change to make right in
> this patch, it should perhaps be a pre-patch then to avoid the need to
> touch it here.

Of course I have no opinion on this. I am happy to do as instructed.


> > @@ -165,7 +165,7 @@ config HVM_FEP
> 
> Seeing just the patch context here, I think HVM_FEP may also want
> converting.

OK


> > --- a/xen/common/Kconfig
> > +++ b/xen/common/Kconfig
> > @@ -151,7 +151,7 @@ config KEXEC
> >  	  If unsure, say Y.
> >  
> >  config EFI_SET_VIRTUAL_ADDRESS_MAP
> > -    bool "EFI: call SetVirtualAddressMap()" if EXPERT
> > +    bool "EFI: call SetVirtualAddressMap() (UNSUPPORTED)" if UNSUPPORTED
> 
> I have to admit I'm pretty unsure about what to do with this one.

Yeah, similarly to XEN_SHSTK, I don't have an opinion here either. I am
happy to change it or leave it as is.


> > @@ -272,7 +272,7 @@ config LATE_HWDOM
> >  	  If unsure, say N.
> >  
> >  config ARGO
> > -	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
> > +	bool "Argo: hypervisor-mediated interdomain communication (UNSUPPORTED)" if UNSUPPORTED
> 
> Perhaps better (EXPERIMENTAL)?

For this and also the scheduler options below: although it is true that
(EXPERIMENTAL) is a more accurate description, the problem is that it is
not easy to match against UNSUPPORTED. In other words, if you search for
"ARGO" in make menuconfig and it is marked with (EXPERIMENTAL), it is not
obvious that you need to enable UNSUPPORTED to get it as an option. For
this reason, I think it is better to use (UNSUPPORTED) both here and
below for SCHED_ARINC653 and SCHED_NULL.


> > --- a/xen/common/sched/Kconfig
> > +++ b/xen/common/sched/Kconfig
> > @@ -15,7 +15,7 @@ config SCHED_CREDIT2
> >  	  optimized for lower latency and higher VM density.
> >  
> >  config SCHED_RTDS
> > -	bool "RTDS scheduler support (EXPERIMENTAL)"
> > +	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
> >  	default y
> >  	---help---
> >  	  The RTDS scheduler is a soft and firm real-time scheduler for
> > @@ -23,14 +23,14 @@ config SCHED_RTDS
> >  	  in the cloud, and general low-latency workloads.
> >  
> >  config SCHED_ARINC653
> > -	bool "ARINC653 scheduler support (EXPERIMENTAL)"
> > +	bool "ARINC653 scheduler support (UNSUPPORTED)" if UNSUPPORTED
> >  	default DEBUG
> >  	---help---
> >  	  The ARINC653 scheduler is a hard real-time scheduler for single
> >  	  cores, targeted for avionics, drones, and medical devices.
> >  
> >  config SCHED_NULL
> > -	bool "Null scheduler support (EXPERIMENTAL)"
> > +	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
> >  	default y
> >  	---help---
> >  	  The null scheduler is a static, zero overhead scheduler,
> 
> I'd like to see (EXPERIMENTAL) stay everywhere here.



From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30258.60128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV60-0008W5-Vd; Wed, 18 Nov 2020 21:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30258.60128; Wed, 18 Nov 2020 21:34:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV60-0008Vx-SE; Wed, 18 Nov 2020 21:34:32 +0000
Received: by outflank-mailman (input) for mailman id 30258;
 Wed, 18 Nov 2020 21:34:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV60-0008UU-3C
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a31a01dc-0b87-49fe-ab74-8bef08abceb8;
 Wed, 18 Nov 2020 21:34:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05AD8B004;
 Wed, 18 Nov 2020 21:34:23 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 4F52B1E1325; Wed, 18 Nov 2020 15:37:47 +0100 (CET)
X-Inumbo-ID: a31a01dc-0b87-49fe-ab74-8bef08abceb8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:37:47 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 07/20] init: refactor name_to_dev_t
Message-ID: <20201118143747.GL1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-8-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-8-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:47, Christoph Hellwig wrote:
> Split each case into a self-contained helper.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/genhd.h |   7 +-
>  init/do_mounts.c      | 183 +++++++++++++++++++++---------------------
>  2 files changed, 91 insertions(+), 99 deletions(-)
> 
> diff --git a/include/linux/genhd.h b/include/linux/genhd.h
> index 22f5b9fd96f8bf..ca5e356084c353 100644
> --- a/include/linux/genhd.h
> +++ b/include/linux/genhd.h
> @@ -388,18 +388,13 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
>  }
>  #endif /* CONFIG_SYSFS */
>  
> +dev_t blk_lookup_devt(const char *name, int partno);
>  #ifdef CONFIG_BLOCK
>  void printk_all_partitions(void);
> -dev_t blk_lookup_devt(const char *name, int partno);
>  #else /* CONFIG_BLOCK */
>  static inline void printk_all_partitions(void)
>  {
>  }
> -static inline dev_t blk_lookup_devt(const char *name, int partno)
> -{
> -	dev_t devt = MKDEV(0, 0);
> -	return devt;
> -}
>  #endif /* CONFIG_BLOCK */

This hunk looks unrelated to the change? Also, why do you move the
declaration outside the CONFIG_BLOCK ifdef? AFAICS blk_lookup_devt()
still exists only when CONFIG_BLOCK is defined. Otherwise the patch
looks good to me.

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30257.60116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV5w-0008Uh-O0; Wed, 18 Nov 2020 21:34:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30257.60116; Wed, 18 Nov 2020 21:34:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV5w-0008Ua-KA; Wed, 18 Nov 2020 21:34:28 +0000
Received: by outflank-mailman (input) for mailman id 30257;
 Wed, 18 Nov 2020 21:34:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV5v-0008UU-AM
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16d212d0-6adf-4d14-becd-af49cfce304b;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF58DAFFD;
 Wed, 18 Nov 2020 21:34:23 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id BC1911E1321; Wed, 18 Nov 2020 15:22:37 +0100 (CET)
X-Inumbo-ID: 16d212d0-6adf-4d14-becd-af49cfce304b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:22:37 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 06/20] block: change the hash used for looking up block
 devices
Message-ID: <20201118142237.GK1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-7-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-7-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:46, Christoph Hellwig wrote:
> Adding the minor to the major creates tons of pointless conflicts. Just
> use the dev_t itself, which is 32-bits and thus is guaranteed to fit
> into ino_t.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Fair enough. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/block_dev.c | 26 ++------------------------
>  1 file changed, 2 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index d8664f5c1ff669..29db12c3bb501c 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -870,35 +870,12 @@ void __init bdev_cache_init(void)
>  	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
>  }
>  
> -/*
> - * Most likely _very_ bad one - but then it's hardly critical for small
> - * /dev and can be fixed when somebody will need really large one.
> - * Keep in mind that it will be fed through icache hash function too.
> - */
> -static inline unsigned long hash(dev_t dev)
> -{
> -	return MAJOR(dev)+MINOR(dev);
> -}
> -
> -static int bdev_test(struct inode *inode, void *data)
> -{
> -	return BDEV_I(inode)->bdev.bd_dev == *(dev_t *)data;
> -}
> -
> -static int bdev_set(struct inode *inode, void *data)
> -{
> -	BDEV_I(inode)->bdev.bd_dev = *(dev_t *)data;
> -	return 0;
> -}
> -
>  static struct block_device *bdget(dev_t dev)
>  {
>  	struct block_device *bdev;
>  	struct inode *inode;
>  
> -	inode = iget5_locked(blockdev_superblock, hash(dev),
> -			bdev_test, bdev_set, &dev);
> -
> +	inode = iget_locked(blockdev_superblock, dev);
>  	if (!inode)
>  		return NULL;
>  
> @@ -910,6 +887,7 @@ static struct block_device *bdget(dev_t dev)
>  		bdev->bd_super = NULL;
>  		bdev->bd_inode = inode;
>  		bdev->bd_part_count = 0;
> +		bdev->bd_dev = dev;
>  		inode->i_mode = S_IFBLK;
>  		inode->i_rdev = dev;
>  		inode->i_bdev = bdev;
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
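The "tons of pointless conflicts" from the removed hash are easy to reproduce outside the kernel. A minimal userspace sketch, re-declaring the same 12-bit-major / 20-bit-minor dev_t packing as the kernel's include/linux/kdev_t.h (macro names match the kernel; everything else here is a local stand-in, not kernel API): the old `MAJOR(dev) + MINOR(dev)` key maps distinct devices such as (8,1) and (7,2) to the same bucket, while the raw 32-bit dev_t is unique and therefore fits `iget_locked()` directly.

```c
#include <stdint.h>
#include <assert.h>

/* Same packing as the kernel's include/linux/kdev_t.h: a 32-bit dev_t
 * holds a 12-bit major in the top bits and a 20-bit minor below it. */
#define MINORBITS	20
#define MINORMASK	((1u << MINORBITS) - 1)
#define MKDEV(ma, mi)	(((uint32_t)(ma) << MINORBITS) | (uint32_t)(mi))
#define MAJOR(dev)	((uint32_t)(dev) >> MINORBITS)
#define MINOR(dev)	((uint32_t)(dev) & MINORMASK)

/* The hash function the patch deletes: adding major and minor collapses
 * many distinct devices onto the same inode-hash key. */
static inline unsigned long old_hash(uint32_t dev)
{
	return MAJOR(dev) + MINOR(dev);
}
```

With this model, `old_hash(MKDEV(8, 1))` and `old_hash(MKDEV(7, 2))` are both 9 even though the two dev_t values differ, which is exactly the conflict the patch avoids by keying the icache on the dev_t itself.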


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30259.60140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV66-00008i-7k; Wed, 18 Nov 2020 21:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30259.60140; Wed, 18 Nov 2020 21:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV66-00008Z-4D; Wed, 18 Nov 2020 21:34:38 +0000
Received: by outflank-mailman (input) for mailman id 30259;
 Wed, 18 Nov 2020 21:34:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV65-0008UU-3K
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4fe5f2b-09c7-46ba-b604-116858de571d;
 Wed, 18 Nov 2020 21:34:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35A12B011;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id A0C1D1E130B; Wed, 18 Nov 2020 15:10:24 +0100 (CET)
X-Inumbo-ID: a4fe5f2b-09c7-46ba-b604-116858de571d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:10:24 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 02/20] block: remove a duplicate __disk_get_part prototype
Message-ID: <20201118141024.GG1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-3-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-3-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:42, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  include/linux/genhd.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/include/linux/genhd.h b/include/linux/genhd.h
> index 46553d6d602563..22f5b9fd96f8bf 100644
> --- a/include/linux/genhd.h
> +++ b/include/linux/genhd.h
> @@ -250,7 +250,6 @@ static inline dev_t part_devt(struct hd_struct *part)
>  	return part_to_dev(part)->devt;
>  }
>  
> -extern struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
>  extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
>  
>  static inline void disk_put_part(struct hd_struct *part)
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30261.60152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6B-0000DU-IE; Wed, 18 Nov 2020 21:34:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30261.60152; Wed, 18 Nov 2020 21:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6B-0000DG-Du; Wed, 18 Nov 2020 21:34:43 +0000
Received: by outflank-mailman (input) for mailman id 30261;
 Wed, 18 Nov 2020 21:34:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV6A-0008UU-3f
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6f06fa4-23d5-4e9d-9231-4c3091eaed54;
 Wed, 18 Nov 2020 21:34:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01321B000;
 Wed, 18 Nov 2020 21:34:23 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id BCB4E1E1330; Wed, 18 Nov 2020 15:46:40 +0100 (CET)
X-Inumbo-ID: f6f06fa4-23d5-4e9d-9231-4c3091eaed54
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:46:40 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 10/20] block: refactor __blkdev_put
Message-ID: <20201118144640.GO1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-11-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-11-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:50, Christoph Hellwig wrote:
> Reorder the code to have one big section for the last close, and to use
> bdev_is_partition.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/block_dev.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 29db12c3bb501c..4c4d6c30382c06 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -1745,22 +1745,22 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
>  		WARN_ON_ONCE(bdev->bd_holders);
>  		sync_blockdev(bdev);
>  		kill_bdev(bdev);
> -
>  		bdev_write_inode(bdev);
> -	}
> -	if (bdev->bd_contains == bdev) {
> -		if (disk->fops->release)
> +
> +		if (!bdev_is_partition(bdev) && disk->fops->release)
>  			disk->fops->release(disk, mode);
> -	}
> -	if (!bdev->bd_openers) {
> +
>  		disk_put_part(bdev->bd_part);
>  		bdev->bd_part = NULL;
>  		bdev->bd_disk = NULL;
> -		if (bdev != bdev->bd_contains)
> +		if (bdev_is_partition(bdev))
>  			victim = bdev->bd_contains;
>  		bdev->bd_contains = NULL;
>  
>  		put_disk_and_module(disk);
> +	} else {
> +		if (!bdev_is_partition(bdev) && disk->fops->release)
> +			disk->fops->release(disk, mode);
>  	}
>  	mutex_unlock(&bdev->bd_mutex);
>  	bdput(bdev);
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
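The predicate the patch substitutes for the open-coded pointer comparison can be sketched in a few lines of userspace C. This is a simplified model, not the kernel's definitions: a whole-disk block_device is its own `bd_contains`, a partition's `bd_contains` points at its disk, so `bdev_is_partition()` reduces to `bd_contains != bdev` (the inverse of the `bdev->bd_contains == bdev` tests the diff removes).

```c
#include <stdbool.h>

/* Hypothetical stand-in for struct block_device; only the containment
 * relationship the patch relies on is modelled here. */
struct bdev {
	struct bdev *contains;	/* models bd_contains */
};

/* Mirrors the shape of the kernel's bdev_is_partition() predicate:
 * a whole disk contains itself, a partition is contained by its disk. */
static inline bool bdev_is_partition(const struct bdev *b)
{
	return b->contains != b;
}

/* Build the two-node disk/partition model and query one of them. */
static bool model_is_partition(bool query_partition)
{
	struct bdev disk = { .contains = &disk };	/* whole disk */
	struct bdev part = { .contains = &disk };	/* partition on it */

	return bdev_is_partition(query_partition ? &part : &disk);
}
```

Replacing the raw pointer comparison with the named predicate is what lets the patch fold the three separate `if` blocks into one last-close branch plus an `else` without changing which release path runs.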


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30263.60164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6G-0000Ii-Tq; Wed, 18 Nov 2020 21:34:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30263.60164; Wed, 18 Nov 2020 21:34:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6G-0000IZ-PC; Wed, 18 Nov 2020 21:34:48 +0000
Received: by outflank-mailman (input) for mailman id 30263;
 Wed, 18 Nov 2020 21:34:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV6F-0008UU-3u
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 240796fb-8a5f-4bb0-a2eb-b63b6a5f901c;
 Wed, 18 Nov 2020 21:34:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2D032B009;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id D003A1E1316; Wed, 18 Nov 2020 15:18:00 +0100 (CET)
X-Inumbo-ID: 240796fb-8a5f-4bb0-a2eb-b63b6a5f901c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:18:00 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 03/20] block: add a bdev_kobj helper
Message-ID: <20201118141800.GH1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-4-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-4-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:43, Christoph Hellwig wrote:
> Add a little helper to find the kobject for a struct block_device.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  drivers/md/bcache/super.c |  7 ++-----
>  drivers/md/md.c           |  4 +---
>  fs/btrfs/sysfs.c          | 15 +++------------
>  include/linux/blk_types.h |  3 +++
>  4 files changed, 9 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 46a00134a36ae1..a6a5e21e4fd136 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1447,8 +1447,7 @@ static int register_bdev(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
>  		goto err;
>  
>  	err = "error creating kobject";
> -	if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj,
> -			"bcache"))
> +	if (kobject_add(&dc->disk.kobj, bdev_kobj(bdev), "bcache"))
>  		goto err;
>  	if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj))
>  		goto err;
> @@ -2342,9 +2341,7 @@ static int register_cache(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
>  		goto err;
>  	}
>  
> -	if (kobject_add(&ca->kobj,
> -			&part_to_dev(bdev->bd_part)->kobj,
> -			"bcache")) {
> +	if (kobject_add(&ca->kobj, bdev_kobj(bdev), "bcache")) {
>  		err = "error calling kobject_add";
>  		ret = -ENOMEM;
>  		goto out;
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index b2edf5e0f965b5..7ce6047c856ea2 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -2414,7 +2414,6 @@ EXPORT_SYMBOL(md_integrity_add_rdev);
>  static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
>  {
>  	char b[BDEVNAME_SIZE];
> -	struct kobject *ko;
>  	int err;
>  
>  	/* prevent duplicates */
> @@ -2477,9 +2476,8 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
>  	if ((err = kobject_add(&rdev->kobj, &mddev->kobj, "dev-%s", b)))
>  		goto fail;
>  
> -	ko = &part_to_dev(rdev->bdev->bd_part)->kobj;
>  	/* failure here is OK */
> -	err = sysfs_create_link(&rdev->kobj, ko, "block");
> +	err = sysfs_create_link(&rdev->kobj, bdev_kobj(rdev->bdev), "block");
>  	rdev->sysfs_state = sysfs_get_dirent_safe(rdev->kobj.sd, "state");
>  	rdev->sysfs_unack_badblocks =
>  		sysfs_get_dirent_safe(rdev->kobj.sd, "unacknowledged_bad_blocks");
> diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
> index 279d9262b676d4..24b6c6dc69000a 100644
> --- a/fs/btrfs/sysfs.c
> +++ b/fs/btrfs/sysfs.c
> @@ -1232,8 +1232,6 @@ int btrfs_sysfs_add_space_info_type(struct btrfs_fs_info *fs_info,
>  
>  void btrfs_sysfs_remove_device(struct btrfs_device *device)
>  {
> -	struct hd_struct *disk;
> -	struct kobject *disk_kobj;
>  	struct kobject *devices_kobj;
>  
>  	/*
> @@ -1243,11 +1241,8 @@ void btrfs_sysfs_remove_device(struct btrfs_device *device)
>  	devices_kobj = device->fs_info->fs_devices->devices_kobj;
>  	ASSERT(devices_kobj);
>  
> -	if (device->bdev) {
> -		disk = device->bdev->bd_part;
> -		disk_kobj = &part_to_dev(disk)->kobj;
> -		sysfs_remove_link(devices_kobj, disk_kobj->name);
> -	}
> +	if (device->bdev)
> +		sysfs_remove_link(devices_kobj, bdev_kobj(device->bdev)->name);
>  
>  	if (device->devid_kobj.state_initialized) {
>  		kobject_del(&device->devid_kobj);
> @@ -1353,11 +1348,7 @@ int btrfs_sysfs_add_device(struct btrfs_device *device)
>  	nofs_flag = memalloc_nofs_save();
>  
>  	if (device->bdev) {
> -		struct hd_struct *disk;
> -		struct kobject *disk_kobj;
> -
> -		disk = device->bdev->bd_part;
> -		disk_kobj = &part_to_dev(disk)->kobj;
> +		struct kobject *disk_kobj = bdev_kobj(device->bdev);
>  
>  		ret = sysfs_create_link(devices_kobj, disk_kobj, disk_kobj->name);
>  		if (ret) {
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index d9b69bbde5cc54..0069bee992063e 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -48,6 +48,9 @@ struct block_device {
>  	struct mutex		bd_fsfreeze_mutex;
>  } __randomize_layout;
>  
> +#define bdev_kobj(_bdev) \
> +	(&part_to_dev((_bdev)->bd_part)->kobj)
> +
>  /*
>   * Block error status values.  See block/blk-core:blk_errors for the details.
>   * Alpha cannot write a byte atomically, so we need to use 32-bit value.
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
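The helper's value is that it collapses the `bdev -> bd_part -> device -> kobj` accessor chain that bcache, md, and btrfs were each open-coding. A minimal userspace sketch of that shape (the structs below are illustrative stand-ins, not the kernel's layouts; only the two macros follow the patch):

```c
#include <string.h>

/* Simplified stand-ins for the kernel structs in the accessor chain. */
struct kobject      { const char *name; };
struct device       { struct kobject kobj; };
struct hd_struct    { struct device dev; };
struct block_device { struct hd_struct *bd_part; };

/* Models part_to_dev(): a partition's embedded device. */
#define part_to_dev(part)	(&(part)->dev)

/* The helper added by the patch, same shape as the blk_types.h hunk:
 * one macro instead of repeating the chain at every call site. */
#define bdev_kobj(_bdev) \
	(&part_to_dev((_bdev)->bd_part)->kobj)

/* A tiny fixture: one partition "sda1" and the bdev referring to it. */
static struct hd_struct    demo_part = { .dev = { .kobj = { .name = "sda1" } } };
static struct block_device demo_bdev = { .bd_part = &demo_part };
```

Call sites then shrink to `kobject_add(..., bdev_kobj(bdev), "bcache")` or `sysfs_create_link(..., bdev_kobj(bdev), "block")`, as the diff shows.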


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30266.60176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6L-0000Ol-F2; Wed, 18 Nov 2020 21:34:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30266.60176; Wed, 18 Nov 2020 21:34:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6L-0000Ob-AX; Wed, 18 Nov 2020 21:34:53 +0000
Received: by outflank-mailman (input) for mailman id 30266;
 Wed, 18 Nov 2020 21:34:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV6K-0008UU-3v
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cf0719f-8d9e-4f2d-a34d-98edcec36759;
 Wed, 18 Nov 2020 21:34:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4578DB018;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 94E761E132C; Wed, 18 Nov 2020 15:42:55 +0100 (CET)
X-Inumbo-ID: 2cf0719f-8d9e-4f2d-a34d-98edcec36759
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:42:55 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 09/20] init: cleanup match_dev_by_uuid and
 match_dev_by_label
Message-ID: <20201118144255.GN1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-10-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-10-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:49, Christoph Hellwig wrote:
> Avoid a totally pointless goto label, and use the same style of
> comparison for both helpers.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

OK. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
> ---
>  init/do_mounts.c | 18 ++++++------------
>  1 file changed, 6 insertions(+), 12 deletions(-)
> 
> diff --git a/init/do_mounts.c b/init/do_mounts.c
> index afa26a4028d25e..5879edf083b318 100644
> --- a/init/do_mounts.c
> +++ b/init/do_mounts.c
> @@ -79,15 +79,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
>  	const struct uuidcmp *cmp = data;
>  	struct hd_struct *part = dev_to_part(dev);
>  
> -	if (!part->info)
> -		goto no_match;
> -
> -	if (strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
> -		goto no_match;
> -
> +	if (!part->info ||
> +	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
> +		return 0;
>  	return 1;
> -no_match:
> -	return 0;
>  }
>  
>  /**
> @@ -174,10 +169,9 @@ static int match_dev_by_label(struct device *dev, const void *data)
>  	const char *label = data;
>  	struct hd_struct *part = dev_to_part(dev);
>  
> -	if (part->info && !strcmp(label, part->info->volname))
> -		return 1;
> -
> -	return 0;
> +	if (!part->info || strcmp(label, part->info->volname))
> +		return 0;
> +	return 1;
>  }
>  
>  static dev_t devt_from_partlabel(const char *label)
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
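The two styles the patch unifies are behaviourally identical, including the NULL-info case where short-circuit evaluation keeps `strcmp()` from ever seeing a NULL pointer. A standalone sketch with simplified signatures (plain strings instead of `struct device` / `hd_struct`; names are local, not kernel API):

```c
#include <string.h>
#include <stddef.h>

/* Old style from match_dev_by_label before the patch:
 * positive condition, success first. `info` may be NULL. */
static int match_old_style(const char *info, const char *label)
{
	if (info && !strcmp(label, info))
		return 1;
	return 0;
}

/* New style after the patch, matching match_dev_by_uuid:
 * negated condition, early return 0, fall through to 1. */
static int match_new_style(const char *info, const char *label)
{
	if (!info || strcmp(label, info))
		return 0;
	return 1;
}
```

De Morgan guarantees the equivalence: `!(info && !strcmp(...))` is `!info || strcmp(...)`, so the rewrite only normalises which branch comes first.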


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:34:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:34:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30270.60188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6Q-0000Uj-Ns; Wed, 18 Nov 2020 21:34:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30270.60188; Wed, 18 Nov 2020 21:34:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6Q-0000Ua-KL; Wed, 18 Nov 2020 21:34:58 +0000
Received: by outflank-mailman (input) for mailman id 30270;
 Wed, 18 Nov 2020 21:34:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV6P-0008UU-4N
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:34:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 463f5f42-8ca5-494c-ad34-a4fa77ff401f;
 Wed, 18 Nov 2020 21:34:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53C3FB019;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 716231E1328; Wed, 18 Nov 2020 15:41:05 +0100 (CET)
X-Inumbo-ID: 463f5f42-8ca5-494c-ad34-a4fa77ff401f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:41:05 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 08/20] init: refactor devt_from_partuuid
Message-ID: <20201118144105.GM1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-9-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-9-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:48, Christoph Hellwig wrote:
> The code in devt_from_partuuid is very convoluted.  Refactor a bit by
> sanitizing the goto and variable name usage.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

The patch looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  init/do_mounts.c | 68 ++++++++++++++++++++++--------------------------
>  1 file changed, 31 insertions(+), 37 deletions(-)
> 
> diff --git a/init/do_mounts.c b/init/do_mounts.c
> index aef2f24461c7f1..afa26a4028d25e 100644
> --- a/init/do_mounts.c
> +++ b/init/do_mounts.c
> @@ -105,13 +105,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
>   */
>  static dev_t devt_from_partuuid(const char *uuid_str)
>  {
> -	dev_t res = 0;
>  	struct uuidcmp cmp;
>  	struct device *dev = NULL;
> -	struct gendisk *disk;
> -	struct hd_struct *part;
> +	dev_t devt = 0;
>  	int offset = 0;
> -	bool clear_root_wait = false;
>  	char *slash;
>  
>  	cmp.uuid = uuid_str;
> @@ -120,52 +117,49 @@ static dev_t devt_from_partuuid(const char *uuid_str)
>  	/* Check for optional partition number offset attributes. */
>  	if (slash) {
>  		char c = 0;
> +
>  		/* Explicitly fail on poor PARTUUID syntax. */
> -		if (sscanf(slash + 1,
> -			   "PARTNROFF=%d%c", &offset, &c) != 1) {
> -			clear_root_wait = true;
> -			goto done;
> -		}
> +		if (sscanf(slash + 1, "PARTNROFF=%d%c", &offset, &c) != 1)
> +			goto clear_root_wait;
>  		cmp.len = slash - uuid_str;
>  	} else {
>  		cmp.len = strlen(uuid_str);
>  	}
>  
> -	if (!cmp.len) {
> -		clear_root_wait = true;
> -		goto done;
> -	}
> +	if (!cmp.len)
> +		goto clear_root_wait;
>  
> -	dev = class_find_device(&block_class, NULL, &cmp,
> -				&match_dev_by_uuid);
> +	dev = class_find_device(&block_class, NULL, &cmp, &match_dev_by_uuid);
>  	if (!dev)
> -		goto done;
> -
> -	res = dev->devt;
> +		return 0;
>  
> -	/* Attempt to find the partition by offset. */
> -	if (!offset)
> -		goto no_offset;
> +	if (offset) {
> +		/*
> +		 * Attempt to find the requested partition by adding an offset
> +		 * to the partition number found by UUID.
> +		 */
> +		struct hd_struct *part;
>  
> -	res = 0;
> -	disk = part_to_disk(dev_to_part(dev));
> -	part = disk_get_part(disk, dev_to_part(dev)->partno + offset);
> -	if (part) {
> -		res = part_devt(part);
> -		put_device(part_to_dev(part));
> +		part = disk_get_part(dev_to_disk(dev),
> +				     dev_to_part(dev)->partno + offset);
> +		if (part) {
> +			devt = part_devt(part);
> +			put_device(part_to_dev(part));
> +		}
> +	} else {
> +		devt = dev->devt;
>  	}
>  
> -no_offset:
>  	put_device(dev);
> -done:
> -	if (clear_root_wait) {
> -		pr_err("VFS: PARTUUID= is invalid.\n"
> -		       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
> -		if (root_wait)
> -			pr_err("Disabling rootwait; root= is invalid.\n");
> -		root_wait = 0;
> -	}
> -	return res;
> +	return devt;
> +
> +clear_root_wait:
> +	pr_err("VFS: PARTUUID= is invalid.\n"
> +	       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
> +	if (root_wait)
> +		pr_err("Disabling rootwait; root= is invalid.\n");
> +	root_wait = 0;
> +	return 0;
>  }
>  
>  /**
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:35:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30273.60200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6W-0000b9-3B; Wed, 18 Nov 2020 21:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30273.60200; Wed, 18 Nov 2020 21:35:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6V-0000ay-Vl; Wed, 18 Nov 2020 21:35:03 +0000
Received: by outflank-mailman (input) for mailman id 30273;
 Wed, 18 Nov 2020 21:35:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV6U-0008UU-4P
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:35:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d11608f-e805-418b-99fc-e1b0779d1a6e;
 Wed, 18 Nov 2020 21:34:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3C215B013;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 90FA11E1319; Wed, 18 Nov 2020 15:19:27 +0100 (CET)
X-Inumbo-ID: 7d11608f-e805-418b-99fc-e1b0779d1a6e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
X-Inumbo-ID: 7d11608f-e805-418b-99fc-e1b0779d1a6e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:19:27 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 04/20] block: use disk_part_iter_exit in
 disk_part_iter_next
Message-ID: <20201118141927.GI1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-5-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-5-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:44, Christoph Hellwig wrote:
> Call disk_part_iter_exit in disk_part_iter_next instead of duplicating
> the functionality.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

OK. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/genhd.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index 4e039524f92b8f..0bd9c41dd4cb69 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -227,8 +227,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
>  	int inc, end;
>  
>  	/* put the last partition */
> -	disk_put_part(piter->part);
> -	piter->part = NULL;
> +	disk_part_iter_exit(piter);
>  
>  	/* get part_tbl */
>  	rcu_read_lock();
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30278.60212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6a-0000gn-Io; Wed, 18 Nov 2020 21:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30278.60212; Wed, 18 Nov 2020 21:35:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV6a-0000gY-Cp; Wed, 18 Nov 2020 21:35:08 +0000
Received: by outflank-mailman (input) for mailman id 30278;
 Wed, 18 Nov 2020 21:35:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WhnZ=EY=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfV6Z-0008UU-4V
 for xen-devel@lists.xenproject.org; Wed, 18 Nov 2020 21:35:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7cbf9148-c93f-4aeb-a969-075d42a45ae5;
 Wed, 18 Nov 2020 21:34:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22171B008;
 Wed, 18 Nov 2020 21:34:24 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 399B61E131D; Wed, 18 Nov 2020 15:20:20 +0100 (CET)
X-Inumbo-ID: 7cbf9148-c93f-4aeb-a969-075d42a45ae5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 18 Nov 2020 15:20:20 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 05/20] block: use put_device in put_disk
Message-ID: <20201118142020.GJ1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-6-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-6-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:45, Christoph Hellwig wrote:
> Use put_device to put the device instead of poking into the internals
> and using kobject_put.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/genhd.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index 0bd9c41dd4cb69..f46e89226fdf91 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -1803,7 +1803,7 @@ EXPORT_SYMBOL(__alloc_disk_node);
>  void put_disk(struct gendisk *disk)
>  {
>  	if (disk)
> -		kobject_put(&disk_to_dev(disk)->kobj);
> +		put_device(disk_to_dev(disk));
>  }
>  EXPORT_SYMBOL(put_disk);
>  
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 21:36:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 21:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30300.60224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV7w-00015a-U4; Wed, 18 Nov 2020 21:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30300.60224; Wed, 18 Nov 2020 21:36:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfV7w-00015T-R1; Wed, 18 Nov 2020 21:36:32 +0000
Received: by outflank-mailman (input) for mailman id 30300;
 Wed, 18 Nov 2020 21:36:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfV7v-00015E-GQ; Wed, 18 Nov 2020 21:36:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfV7u-0003dj-G9; Wed, 18 Nov 2020 21:36:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfV7u-00075z-4k; Wed, 18 Nov 2020 21:36:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfV7u-00028L-4E; Wed, 18 Nov 2020 21:36:30 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nXTKRWkDfudzBcL0ae7qqMw9FXwsJUgs4bu/90+hAek=; b=fw4OmZQhvj2mBPF9QQXFkmFO+F
	N8YCuEHy8soGjaPAqGb54QepKW2PE+3+CzynYNv/ja2EGMt/deetFQK0jAsPdA3Ql9L+sgzy4fayn
	goc4MPAuvSCxfAocxeA//OoNU1cFI5AyaHKzvd9vqQUv27vQQ5dkO7+WpdRuFEXh2Di4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156862-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156862: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=415f904254b7343a90db895134980cbb7f7f0479
X-Osstest-Versions-That:
    xen=22e323d115d8f26d5926c20c66e11f85a46837d7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 21:36:30 +0000

flight 156862 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156862/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  415f904254b7343a90db895134980cbb7f7f0479
baseline version:
 xen                  22e323d115d8f26d5926c20c66e11f85a46837d7

Last test of basis   156860  2020-11-18 16:00:24 Z    0 days
Testing same since   156862  2020-11-18 19:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Doug Goldstein <cardoe@cardoe.com>
  Edwin Török <edvin.torok@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   22e323d115..415f904254  415f904254b7343a90db895134980cbb7f7f0479 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 18 22:36:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 18 Nov 2020 22:36:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30321.60238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfW3X-00070G-HZ; Wed, 18 Nov 2020 22:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30321.60238; Wed, 18 Nov 2020 22:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfW3X-000709-De; Wed, 18 Nov 2020 22:36:03 +0000
Received: by outflank-mailman (input) for mailman id 30321;
 Wed, 18 Nov 2020 22:36:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfW3W-000701-Ds; Wed, 18 Nov 2020 22:36:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfW3W-0004rQ-52; Wed, 18 Nov 2020 22:36:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfW3V-0003fz-Sw; Wed, 18 Nov 2020 22:36:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfW3V-0004m2-SM; Wed, 18 Nov 2020 22:36:01 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=A6PH54acPnIxs520ITEuAhKilxh0Cbw8LZufLKyeoB4=; b=JQ+jhF0pdi6734KKudhw3BKBb1
	OJAACG7pgdN18FA9EXfvIrThyZMmPKH91R/H1zzMPzv81NdNddJk9H8Yz8SL5Ytyifz4ksER9RP4q
	EQ2eJc+fwoTwHjoXI8e6Q73ZEdnRNp1TY4hXEIILSB7x7FvYSRAwKWCOfp0otdUXWqck=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156853-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156853: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=66a300a107ec286725bdc943601cbd4247b82158
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 18 Nov 2020 22:36:01 +0000

flight 156853 qemu-mainline real [real]
flight 156864 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156853/
http://logs.test-lab.xenproject.org/osstest/logs/156864/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                66a300a107ec286725bdc943601cbd4247b82158
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   90 days
Failing since        152659  2020-08-21 14:07:39 Z   89 days  190 attempts
Testing same since   156853  2020-11-18 07:57:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 66788 lines long.)
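The omitted revision log is the list of commits between the baseline and the version under test. As a minimal, self-contained sketch of the underlying git range query (run here in a throwaway repository; against a real qemu checkout you would substitute the two qemuu hashes quoted above), it looks like:

```shell
set -e
# Build a throwaway repo with two empty commits standing in for the
# baseline and the version targeted for testing.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=t@example.org -c user.name=t commit -q --allow-empty -m baseline
base=$(git rev-parse HEAD)
git -c user.email=t@example.org -c user.name=t commit -q --allow-empty -m change-under-test
target=$(git rev-parse HEAD)
# One line per commit after the baseline (exclusive) up to the
# tested version (inclusive) -- the shape of the elided log:
git log --oneline "$base..$target"
```

With the real hashes this would print one line per commit in the range, which is why the report notes the log would run to tens of thousands of lines.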


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 01:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 01:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30341.60280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfYJe-0005h6-L5; Thu, 19 Nov 2020 01:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30341.60280; Thu, 19 Nov 2020 01:00:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfYJe-0005fx-Hi; Thu, 19 Nov 2020 01:00:50 +0000
Received: by outflank-mailman (input) for mailman id 30341;
 Thu, 19 Nov 2020 01:00:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfYJe-0005Ts-04; Thu, 19 Nov 2020 01:00:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfYJd-00014o-Rd; Thu, 19 Nov 2020 01:00:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfYJd-0002qw-GQ; Thu, 19 Nov 2020 01:00:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfYJd-0005Pk-Fy; Thu, 19 Nov 2020 01:00:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RPkJYAgv6x3RNZA+NmzXt/Uxsq6wadwVH/BAdPHuqBU=; b=hOId4YDs2moqUOOO6zrjTQKn0g
	7zaDjG3VLtNu/mv5qIG4+rV44iLH5zbVY8VKl6K1LHXrAuja8Ybrj3Sg1QkpUqbvSkfozr8P/W2e+
	R0c8KPPOaBPWIDI2MyfAosTFXoyZ7g3FSzXw7WQeSv03fkLo3Q5nMVA83O+rCfAM1f4w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156857-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156857: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5200fba9ce534fc55ec40ab622b6058600090415
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 01:00:49 +0000

flight 156857 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156857/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 156845
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156845
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156845
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156845
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156845
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156845
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156845
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156845
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156845
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156845
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156845
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156845
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  5200fba9ce534fc55ec40ab622b6058600090415
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156845  2020-11-18 01:51:29 Z    0 days
Testing same since   156857  2020-11-18 15:06:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   5505f5f8e7..5200fba9ce  5200fba9ce534fc55ec40ab622b6058600090415 -> master


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 04:25:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 04:25:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30366.60313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfbV8-0004Cj-Ni; Thu, 19 Nov 2020 04:24:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30366.60313; Thu, 19 Nov 2020 04:24:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfbV8-0004Cb-Ho; Thu, 19 Nov 2020 04:24:54 +0000
Received: by outflank-mailman (input) for mailman id 30366;
 Thu, 19 Nov 2020 04:24:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfbV6-0004CT-R6; Thu, 19 Nov 2020 04:24:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfbV6-0000Rg-Gi; Thu, 19 Nov 2020 04:24:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfbV6-0005nf-5n; Thu, 19 Nov 2020 04:24:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfbV6-0001KW-4s; Thu, 19 Nov 2020 04:24:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p28nwn583NkKfd+rhkK5ZFExeS6PNh2PQ8T7u92vtvw=; b=uVQvGE4CWKKA79Qm8wKGeN++hf
	FMc31eRa39Uy+8aNtQeAaz6svBET/0tMG77GVXjDDSZX1tJwuKymgiXsCjsvNeVrM5QQ/0gfBS+/S
	Ux/NHJ0PbxUka7n9Hb87JHeq3jUC3symxZ9CbHczRevm1L5BlVf+aU1ourxuvSF4LaHI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156858-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156858: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0fa8ee0d9ab95c9350b8b84574824d9a384a9f7d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 04:24:52 +0000

flight 156858 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156858/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 156852 REGR. vs. 152332
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 156852 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 156852
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156852
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156852
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 156852

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 156852 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 156852 blocked in 152332
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 156852 like 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0fa8ee0d9ab95c9350b8b84574824d9a384a9f7d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  110 days
Failing since        152366  2020-08-01 20:49:34 Z  109 days  182 attempts
Testing same since   156841  2020-11-17 20:09:03 Z    1 days    3 attempts

------------------------------------------------------------
3528 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 675049 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 06:36:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 06:36:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30377.60333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfdYG-0008Na-Ik; Thu, 19 Nov 2020 06:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30377.60333; Thu, 19 Nov 2020 06:36:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfdYG-0008NT-Fn; Thu, 19 Nov 2020 06:36:16 +0000
Received: by outflank-mailman (input) for mailman id 30377;
 Thu, 19 Nov 2020 06:36:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfdYF-0008NL-W7; Thu, 19 Nov 2020 06:36:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfdYF-0003Pp-ON; Thu, 19 Nov 2020 06:36:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfdYF-0002Ac-B5; Thu, 19 Nov 2020 06:36:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfdYF-0007OC-Ab; Thu, 19 Nov 2020 06:36:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfdYF-0008NL-W7; Thu, 19 Nov 2020 06:36:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ev3H5VOZ0m45Yiq62H35tAUUErYeP9ei7FK41AegBPo=; b=MW4uwLB2pUo4g/BQ3nH5lsJoah
	YdliWqa8VXeV+uJSMj2NUxisYO0iSVidkpt8ktDr1h0kT2jcoV3HYrRGh8LTR1pxPmXSCYJC4fi9O
	4RE3iU2eAHQgu3Am68uh+aab/Qe0DFIfurFsIrfXukyHuh8zJ8lVTICKMSqwC5VJFBWM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfdYF-0003Pp-ON; Thu, 19 Nov 2020 06:36:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfdYF-0002Ac-B5; Thu, 19 Nov 2020 06:36:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfdYF-0007OC-Ab; Thu, 19 Nov 2020 06:36:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156861-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156861: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=315443293a2d0d7c183ca6dd4624d9e4f8a7054a
X-Osstest-Versions-That:
    linux=2544d06afd8d060f35b159809274e4b7477e63e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 06:36:15 +0000

flight 156861 linux-5.4 real [real]
flight 156872 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156861/
http://logs.test-lab.xenproject.org/osstest/logs/156872/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore        fail REGR. vs. 156722

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156722

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156722
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156722
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156722
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156722
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156722
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156722
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156722
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156722
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156722
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156722
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156722
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                315443293a2d0d7c183ca6dd4624d9e4f8a7054a
baseline version:
 linux                2544d06afd8d060f35b159809274e4b7477e63e8

Last test of basis   156722  2020-11-12 16:54:57 Z    6 days
Testing same since   156861  2020-11-18 18:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Aaron Brown <aaron.f.brown@intel.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Anand Jain <anand.jain@oracle.com>
  Anand K Mistry <amistry@google.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Jeffery <andrew@aj.id.au>
  Andrew Jones <drjones@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ansuel Smith <ansuelsmth@gmail.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnaud de Turckheim <quarium@gmail.com>
  Baolin Wang <baolin.wang7@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Bob Peterson <rpeterso@redhat.com>
  Boris Protopopov <pboris@amazon.com>
  Borislav Petkov <bp@suse.de>
  Brian Bunker <brian@purestorage.com>
  Brian Foster <bfoster@redhat.com>
  Brian Norris <briannorris@chromium.org>
  Chao Leng <lengchao@huawei.com>
  Chen Zhou <chenzhou10@huawei.com>
  Chris Brandt <chris.brandt@renesas.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Chunyan Zhang <zhang.lyra@gmail.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Ian King <colin.king@canonical.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Howells <dhowells@redhat.com>
  David Sterba <dsterba@suse.com>
  David Verbeiren <david.verbeiren@tessares.net>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Don Brace <don.brace@microchip.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evan Nimmo <evan.nimmo@alliedtelesis.co.nz>
  Evan Quan <evan.quan@amd.com>
  Evgeny Novikov <novikov@ispras.ru>
  Felipe Balbi <balbi@kernel.org>
  Fred Gao <fred.gao@intel.com>
  Gao Xiang <hsiangkao@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  George Spelvin <lkml@sdf.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hannes Reinecke <hare@suse.de>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ingo Molnar <mingo@kernel.org>
  J. Bruce Fields <bfields@redhat.com>
  Jack Wang <jinpu.wang@cloud.ionos.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan "Yenya" Kasprzak <kas@fi.muni.cz>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jing Xiangfeng <jingxiangfeng@huawei.com>
  Jiri Olsa <jolsa@kernel.org>
  Jiri Olsa <jolsa@redhat.com>
  Jitendra Khasdev <jitendra.khasdev@oracle.com>
  Joakim Zhang <qiangqing.zhang@nxp.com>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <johannes.thumshirn@wdc.com>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Joseph Qi <joseph.qi@linux.alibaba.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Kalle Valo <kvalo@codeaurora.org>
  Keita Suzuki <keitasuzuki.park@sslab.ics.keio.ac.jp>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Laurent Dufour <ldufour@linux.ibm.com>
  Lee Jones <lee.jones@linaro.org>
  lining <lining2020x@163.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu, Yi L <yi.l.liu@intel.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luka Oreskovic <luka.oreskovic@sartura.hr>
  Mao Wenan <wenan.mao@linux.alibaba.com>
  Maor Gottlieb <maorg@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Martin Hundebøll <martin@geanix.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schiller <ms@dev.tdt.de>
  Martin Willi <martin@strongswan.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Masashi Honma <masashi.honma@gmail.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Petlan <mpetlan@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Ming Lei <ming.lei@redhat.com>
  Namhyung Kim <namhyung@kernel.org>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Nikolay Borisov <nborisov@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Oliver Herms <oliver.peter.herms@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Olivier Moysan <olivier.moysan@st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Paul Moore <paul@paul-moore.com>
  Pavel Machek <pavel@ucw.cz>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Peter Zijlstra <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qian Cai <cai@redhat.com>
  Qii Wang <qii.wang@mediatek.com>
  Qiujun Huang <hqjagain@gmail.com>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sandeep Raghuraman <sandy.8925@gmail.com>
  Santosh Shukla <sashukla@nvidia.com>
  Sasha Levin <sashal@kernel.org>
  Sean Anderson <seanga2@gmail.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sreekanth Reddy <sreekanth.reddy@broadcom.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Ivanichkin <sivanichkin@yandex-team.ru>
  Stefano Brivio <sbrivio@redhat.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephane Grosjean <s.grosjean@peak-system.com>
  Stephen Boyd <swboyd@chromium.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Sven Van Asbroeck <thesven73@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tapas Kundu <tkundu@vmware.com>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tomasz Figa <tfiga@chromium.org>
  Tommi Rantala <tommi.t.rantala@nokia.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ulrich Hecht <uli+renesas@fpond.eu>
  Ursula Braun <ubraun@linux.ibm.com>
  Veerabadhran Gopalakrishnan <veerabadhran.gopalakrishnan@amd.com>
  Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vinicius Costa Gomes <vinicius.gomes@intel.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vlastimil Babka <vbabka@suse.cz>
  Wade Mealing <wmealing@redhat.com>
  Wang Hai <wanghai38@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Wengang Wang <wen.gang.wang@oracle.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Yangbo Lu <yangbo.lu@nxp.com>
  Ye Bin <yebin10@huawei.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  Yi Sun <yi.y.sun@linux.intel.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yunsheng Lin <linyunsheng@huawei.com>
  Zeng Tao <prime.zeng@hisilicon.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  zhuoliang zhang <zhuoliang.zhang@mediatek.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4755 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 07:52:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 07:52:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30391.60354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfek3-0007Qq-2E; Thu, 19 Nov 2020 07:52:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30391.60354; Thu, 19 Nov 2020 07:52:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfek2-0007Qj-VP; Thu, 19 Nov 2020 07:52:30 +0000
Received: by outflank-mailman (input) for mailman id 30391;
 Thu, 19 Nov 2020 07:52:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fcmS=EZ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kfek2-0007Qe-4a
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 07:52:30 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca5b5c32-c7a6-4a26-bc8a-f11b25facf80;
 Thu, 19 Nov 2020 07:52:28 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id E541867373; Thu, 19 Nov 2020 08:52:25 +0100 (CET)
X-Inumbo-ID: ca5b5c32-c7a6-4a26-bc8a-f11b25facf80
Date: Thu, 19 Nov 2020 08:52:25 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 07/20] init: refactor name_to_dev_t
Message-ID: <20201119075225.GA15815@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-8-hch@lst.de> <20201118143747.GL1981@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118143747.GL1981@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Nov 18, 2020 at 03:37:47PM +0100, Jan Kara wrote:
> > -static inline dev_t blk_lookup_devt(const char *name, int partno)
> > -{
> > -	dev_t devt = MKDEV(0, 0);
> > -	return devt;
> > -}
> >  #endif /* CONFIG_BLOCK */
> 
> This hunk looks unrelated to the change? Also, why do you move the declaration
> outside the CONFIG_BLOCK ifdef? AFAICS blk_lookup_devt() still exists only
> when CONFIG_BLOCK is defined? Otherwise the patch looks good to me.

blk_lookup_devt is a hack only for name_to_dev_t, and it is now referenced
only from code compiled under CONFIG_BLOCK; the removed !CONFIG_BLOCK stub
didn't do anything anyway, since it just returned 0.  I guess I'll need to
update the commit log a little to mention this.


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:04:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:04:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30401.60367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfevc-0000dd-DI; Thu, 19 Nov 2020 08:04:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30401.60367; Thu, 19 Nov 2020 08:04:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfevc-0000dW-9k; Thu, 19 Nov 2020 08:04:28 +0000
Received: by outflank-mailman (input) for mailman id 30401;
 Thu, 19 Nov 2020 08:04:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfevb-0000dR-59
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:04:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14620c92-e3f1-4809-84c5-c88b100e72cb;
 Thu, 19 Nov 2020 08:04:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01225ADCD;
 Thu, 19 Nov 2020 08:04:23 +0000 (UTC)
X-Inumbo-ID: 14620c92-e3f1-4809-84c5-c88b100e72cb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605773063; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IxyAukJSmK/PU4Plp32QUs6CgHqvox/AGwUF6rJdUUY=;
	b=Jigi/H02kDeWyi0vmRXm3s3pegjDu9ITi2Qyx5DPEsBTuNav0LrCV5KuCkhVZ0bFZxMqV0
	bj3eKV3E2Jy5Pft4P+znoCKysMc4QBRHomoY/Lu3yuipaTeiMQLeyqZ6PmDAkilJjONG7B
	w2VPwMlScSzrHOV2ChTSiIZTgVobAnU=
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org,
 xen-devel@lists.xenproject.org
References: <20201118005051.26115-1-sstabellini@kernel.org>
 <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com>
 <alpine.DEB.2.21.2011181241310.11739@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3e8c03eb-ee3f-4439-90c2-acf340c7d8e7@suse.com>
Date: Thu, 19 Nov 2020 09:04:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011181241310.11739@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 22:00, Stefano Stabellini wrote:
> On Wed, 18 Nov 2020, Jan Beulich wrote:
>> On 18.11.2020 01:50, Stefano Stabellini wrote:
>>> 1) It is not obvious that "Configure standard Xen features (expert
>>> users)" is actually the famous EXPERT we keep talking about on xen-devel
>>
>> Which can be addressed by simply changing the one prompt line.
>>
>>> 2) It is not obvious when we need to enable EXPERT to get a specific
>>> feature
>>>
>>> In particular if you want to enable ACPI support so that you can boot
>>> Xen on an ACPI platform, you have to enable EXPERT first. But searching
>>> through the kconfig menu it is really not clear (type '/' and "ACPI"):
>>> nothing in the description tells you that you need to enable EXPERT to
>>> get the option.
>>
>> And what causes this to be different once you switch to UNSUPPORTED?
> 
> Two things: firstly, it doesn't and shouldn't take an expert to enable
> ACPI support, even if ACPI support is experimental. So calling it
> UNSUPPORTED helps a lot. This is particularly relevant to the ARM Kconfig
> options changed by this patch. Secondly, this patch is adding
> "(UNSUPPORTED)" in the oneline prompt so that it becomes easy to match
> it with the option you need to enable.

There's redundancy here then, which I think is better to avoid in almost
all cases, first and foremost because the two places can go out of sync.
Therefore, if the primary goal is to help "make menuconfig" (which I admit
I don't normally use, as it's not something the build process invokes
implicitly afaict, i.e. one has to actively invoke it), perhaps we should
enhance kconfig to attach at least a pre-determined subset of labels to
the prompts automatically?

And second, also in reply to what you've been saying further down,
perhaps we would better go with a hierarchy of controls here, e.g.
EXPERT -> EXPERIMENTAL -> UNSUPPORTED?
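[Editor's note: purely as an illustration of such a hierarchy, not a concrete proposal from the thread, the dependency chain could be expressed in Kconfig roughly as:]

```kconfig
config EXPERT
	bool "Configure standard Xen features (expert users)"

config EXPERIMENTAL
	bool "Configure EXPERIMENTAL features"
	depends on EXPERT

config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	depends on EXPERIMENTAL
```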

>>> So this patch makes things easier by doing two things:
>>>
> >>> - introduce a new kconfig option UNSUPPORTED which is clearly meant to
> >>>   enable UNSUPPORTED features as defined by SUPPORT.md
>>>
>>> - change EXPERT options to UNSUPPORTED where it makes sense: keep
>>>   depending on EXPERT for features made for experts
>>>
>>> - tag unsupported features by adding (UNSUPPORTED) to the one-line
>>>   description
>>
>> I am, btw, not fully convinced of the need for this redundancy. Wouldn't
>> it be enough to have just EXPERT as a setting, but varying (<reason>)
>> tokens in the prompt text?
> 
> I don't think so, for the same reasons written above: EXPERT should not
> be gating things like ACPI.

Different views are possible here, I suppose. Turning on anything
that's unsupported requires people to know what they're doing (and
be ready to pick up the pieces themselves). I'd consider this to
fall under "expert".

> Moreover, the advantage of the tag in the
> oneline prompt is that you can search for an option and figure out that
> you need to enable UNSUPPORTED. It doesn't work if we use a different
> tag. Just to get the idea, try to do "make menuconfig" and search for
> "ARGO" with '/': you'll see "(UNSUPPORTED)". Then, if you search for
> "UNSUPPORTED" you can find what you need to enable.

That implies the textual representation and the Kconfig option name match;
see above. Even a simple spelling mistake would break this model.

>>> --- a/xen/Kconfig
>>> +++ b/xen/Kconfig
>>> @@ -34,8 +34,17 @@ config DEFCONFIG_LIST
>>>  	option defconfig_list
>>>  	default ARCH_DEFCONFIG
>>>  
>>> +config UNSUPPORTED
>>> +	bool "Configure UNSUPPORTED features"
>>> +	help
>>> +	  This option allows unsupported Xen options to be enabled, which
>>
>> I'd recommend against "enabled" - a control may also be there to allow
>> disabling something.
> 
> I can change that.
> 
> 
>>> +	  includes non-security-supported, experimental, and tech preview
>>> +	  features as defined by SUPPORT.md. Xen binaries built with this
>>> +	  option enabled are not security supported.
>>
>> Overall I'm a little afraid of possible inverse implications: Anything
>> _not_ dependent upon this option (and in particular anything not
>> dependent upon any Kconfig control) may be considered supported then.
>>
>> Also the last sentence is already present for EXPERT, 
> 
> I am happy to rephrase it. What about:
> 
> "This option allows certain unsupported Xen options to be changed, which
> includes non-security-supported, experimental, and tech preview features
> as defined by SUPPORT.md."

Sounds better to me.
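[Editor's note: for illustration only, the agreed wording would render in the Kconfig fragment roughly as follows; this hypothetical hunk is not part of the posted patch:]

```kconfig
config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	help
	  This option allows certain unsupported Xen options to be changed,
	  which includes non-security-supported, experimental, and tech
	  preview features as defined by SUPPORT.md.
```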

>>> --- a/xen/common/Kconfig
>>> +++ b/xen/common/Kconfig
>>> @@ -151,7 +151,7 @@ config KEXEC
>>>  	  If unsure, say Y.
>>>  
>>>  config EFI_SET_VIRTUAL_ADDRESS_MAP
>>> -    bool "EFI: call SetVirtualAddressMap()" if EXPERT
>>> +    bool "EFI: call SetVirtualAddressMap() (UNSUPPORTED)" if UNSUPPORTED
>>
>> I have to admit I'm pretty unsure about what to do with this one.
> 
> Yeah, similarly to XEN_SHSTK, I don't have an opinion here either.  I am
> happy to change it or leave it as is.

I guess, at least for the first cut, I'd like to ask that we just leave it
alone.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:25:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30407.60379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffFg-0002aT-4G; Thu, 19 Nov 2020 08:25:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30407.60379; Thu, 19 Nov 2020 08:25:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffFg-0002aM-1J; Thu, 19 Nov 2020 08:25:12 +0000
Received: by outflank-mailman (input) for mailman id 30407;
 Thu, 19 Nov 2020 08:25:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kffFe-0002aH-5W
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:25:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8f1dfd8-6ae2-4d3d-a096-a325b05b9686;
 Thu, 19 Nov 2020 08:25:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1FC9DAD2F;
 Thu, 19 Nov 2020 08:25:06 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id C74001E1303; Thu, 19 Nov 2020 09:25:05 +0100 (CET)
X-Inumbo-ID: f8f1dfd8-6ae2-4d3d-a096-a325b05b9686
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 09:25:05 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 07/20] init: refactor name_to_dev_t
Message-ID: <20201119082505.GS1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-8-hch@lst.de>
 <20201118143747.GL1981@quack2.suse.cz>
 <20201119075225.GA15815@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119075225.GA15815@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu 19-11-20 08:52:25, Christoph Hellwig wrote:
> On Wed, Nov 18, 2020 at 03:37:47PM +0100, Jan Kara wrote:
> > > -static inline dev_t blk_lookup_devt(const char *name, int partno)
> > > -{
> > > -	dev_t devt = MKDEV(0, 0);
> > > -	return devt;
> > > -}
> > >  #endif /* CONFIG_BLOCK */
> > 
> > This hunk looks unrelated to the change? Also, why do you move the declaration
> > outside the CONFIG_BLOCK ifdef? AFAICS blk_lookup_devt() still exists only
> > when CONFIG_BLOCK is defined? Otherwise the patch looks good to me.
> 
> blk_lookup_devt is a hack only for name_to_dev_t, and it is now referenced
> only from code compiled under CONFIG_BLOCK; the removed !CONFIG_BLOCK stub
> didn't do anything anyway, since it just returned 0.  I guess I'll need to
> update the commit log a little to mention this.

OK, understood. Still it would seem more logical to leave the blk_lookup_devt()
declaration inside #ifdef CONFIG_BLOCK and just delete the !CONFIG_BLOCK
definition (to make it clear we only ever expect users compiled when
CONFIG_BLOCK is defined). But whatever... Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30424.60444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZx-0004gE-B0; Thu, 19 Nov 2020 08:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30424.60444; Thu, 19 Nov 2020 08:46:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZw-0004fs-UB; Thu, 19 Nov 2020 08:46:08 +0000
Received: by outflank-mailman (input) for mailman id 30424;
 Thu, 19 Nov 2020 08:38:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffSc-0003iR-MO
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:34 +0000
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f806786-7d2b-45ee-b8ec-f00400227fca;
 Thu, 19 Nov 2020 08:38:30 +0000 (UTC)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:46:49 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:38:26 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:38:26 +0000
X-Inumbo-ID: 4f806786-7d2b-45ee-b8ec-f00400227fca
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775613; x=1637311613;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=PvdM8cICbPu2EB5bo2tpOLO3uAxCKWF9GbviickAV4qnqYAz67A154bP
   SV+NsmbE1YnJTj0YVnRlmoYagceGRCrP5RltzimA+pTaqJzduvNYcM8R5
   OvrsFBmmma+gNf8iqN/RNSapVKXwmm2JZEyJlImVZsz9idvbsKJlRDEP7
   CNAYMBMKw7iAymzYV8yc4mpWa5umok/CBNwmDse2eZR9yrYZaCGK1/OsL
   Y0crZ42j7LXJe29SamTNHaLLb/dze1V/GSaHWk0X0UCRQAkfX8ZqdkH02
   1UAAPDlpLOtYQKASH7r9RzDzg3Ssd1ibrc5mJaEDfzTXe78wBOMxx/M8i
   w==;
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="256563204"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=0H5T2i43HPmpwn6z/c6M16KEb8JBBxQZZCug2FlTzcCTFev3Xixb1d1NAnH9FSwDFd6Bl7ZPS+rSMe/nSLRTaEEda2hOeTbUYAxhpnuTT9RG/c8OrxG2Ta+Siw2nepSNMUEvPzirHrQB3YgVRjWylAOMw/u5s5G9OneGk2UeyXU=
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 07/20] init: refactor name_to_dev_t
Thread-Topic: [PATCH 07/20] init: refactor name_to_dev_t
Thread-Index: AQHWvYgrYc1Qp0WgLEOTR53TfvPOSQ==
Date: Thu, 19 Nov 2020 08:38:26 +0000
Message-ID:
 <SN4PR0401MB35988935BAEBEC04C143EE759BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-8-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: b730e505-bdff-4d92-45ab-08d88c667bb5
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB4862C4EF0AEAA7A94CCC4D829BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b730e505-bdff-4d92-45ab-08d88c667bb5
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:38:26.2255
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: CUJKjc+O7Uc5n1gkeDg58OBnuesd0SrFF3yd4avvUfbb7H6fsgi8DwfU5idErUdSk0MYdyD7R2wm2ECOP3+WegiNVulxDiOK0nUj9qsmyKo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30418.60411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZv-0004df-VH; Thu, 19 Nov 2020 08:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30418.60411; Thu, 19 Nov 2020 08:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZv-0004dL-Lv; Thu, 19 Nov 2020 08:46:07 +0000
Received: by outflank-mailman (input) for mailman id 30418;
 Thu, 19 Nov 2020 08:37:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffRw-0003eY-Hb
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:37:52 +0000
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e505cb74-6204-4414-a0db-30c01603b88a;
 Thu, 19 Nov 2020 08:37:51 +0000 (UTC)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:37:48 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:47 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
	id 1kffRw-0003eY-Hb
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:37:52 +0000
X-Inumbo-ID: e505cb74-6204-4414-a0db-30c01603b88a
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e505cb74-6204-4414-a0db-30c01603b88a;
	Thu, 19 Nov 2020 08:37:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775071; x=1637311071;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=U3Gze6K3JeijYkxMO8sA8OOR/OnWxNir7zRQiOP3Uskje9JUl4rK7USP
   qO0ZGnsvpbTGGQ3n/lflHlzvIF5Olp+yoKS/4tSAU9yTFmwCBSC8Yfajg
   zBNvDZBh+YaZFKeLb9zD/b/Un2Id5MV7G9D6vnIVkkeD+497RX2OvrNsJ
   1tVYo8KTnMaONsnBwXM1A7jnOVXxoGZURVeCUdU/RwJOPRUAW7dCAHlCC
   zBofwTcY+CGLmT1GD9XpEA0M1EcT+KJApXsbGnONmuZQVlahFPL17ABDI
   aMeNjlqJbTwQuKFE2DNA4JiZ7aZWW/xJU58J1QeucQjaOQ/CdxTS0bM1w
   w==;
IronPort-SDR: lVYgGj8s/O/AthUUtd8BLwr48Cp2/3Lts7OWFY8l46ecxGJgwGyTEv4SznvpWtoFXPDUmLq46S
 PKl5jTXr2yroQGCOoXR6GLg4IOuQiedGR7+8g2V0WPxkgoq3eA1YN5yrT/U9K/rctJ6WFRYtBp
 9HztdHo8FdhKqxpwXBk6Xg8h2iCjq6rhP0ekHhBdnbb7S7xjZR7U0qyVoAKGSgZk6pZC+Gnowr
 CSs2a4VuTwxteh6JjwZoA7HQJbsmboE1nAUyvDzzgs827uEa+OtkN9Vn/JyuVYzlqP4Q1sXIEO
 ShA=
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="157444432"
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
  by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:37:48 +0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ujgyvc7VNxSZo67CkpxYYNrqHEw39XLyqABuVMn57Xms3D8MEMG7OJCR7whKIfywIxXBMF9UzqMylrXpdlESxBpBfPJyH95tSXYGHyEQLPj5yUXhTLveCxHHR+juORY/47vAyfXWQdPoc4vbXhDRyvSUan0syEyaWCaxDJ7D6AG5SgS+1P8U5Qs72CqkLCNs/UU+d6m4imVnByk8zVMJltz0vf5GXeW9WCKEW+naGhR0oYiLPYW4ya6AOM9CXNTNRh3Aj8KpgPwr7fBKpmderCFGHKNZLWAstT4DoJADoiPp+oMlNloW9IZaiEEi7J+XRlMCJoVHTmbfgIQQn0NEUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=lb9/qqyyw0VyU8A1/UfjT+9sid7dZvJ9IYQfI/ZqmOBPZlFfK2Nlh0UDxSKN3+HBcWiyYJse4n+RHOH+8p8yG6iIkzX9mYse/ZYpwo4a0hsDUPxOBnIaWwPVUl7p1GITQjjGyAR6TkV869JJ8iyOGXksisDNPQhoxBh0Y8Sk5pcgRTZlEql2eHdeW7wKDuq5mDunREE3GvS1MwErqvvKXvx40jKP8/aYGaTFhaSR4SssBJa7Yq+P4yq+ij7rOvQLuaWildAVQ+ecFhq2Vr4gTu0EfATP9AvGKBHvF1Vw5q8w3vgBGRTsyvDsIqhbFsY9z53rPDg9qyOCuN0NzIJt0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=bPiB4F9DiLgpTY5tg+wz7YwSGOEh2nJNYLDEhg5C+LCUobxhAzOv6tgU6Ve3ncGAJ8FBSQlzM+woRCvCrT7D82mBox+c+RK6pvcDHa6XEhRsjTElOwFJ8/efC542uYWKugT1ub4HIJLqgcFFno2FtMjmBwpSWlro/hLqLnCUmm0=
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:47 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:47 +0000
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 03/20] block: add a bdev_kobj helper
Thread-Topic: [PATCH 03/20] block: add a bdev_kobj helper
Thread-Index: AQHWvYgpPQQYaHo5cUCzCzUTvKQePw==
Date: Thu, 19 Nov 2020 08:37:47 +0000
Message-ID:
 <SN4PR0401MB3598F34DA0C62243AE70450F9BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-4-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 8ee06524-e692-40f5-4335-08d88c6664ce
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB4862C03F8508DDA87FBC44E39BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 cM6Zu8k0lk01FkTpo0twM3+ihg0ZM0fWiBNV2YgNEq57IjliedsG4hdFNcEeWr7gyHPkC8wcmnKwmo6B3s8tfpyVYHRb369EZu5lJp8WgHdKOrCDl8dYh+AoDVZOyHBJAApMKIwuC7ziO+80CWeJMYgP/KjPjWAAmmvtPZpr5T0vHcJ61OqTTQqpm/dAMMvfYKFKyVVrdU6RDU7QvDYf4966kTRcn2fjQAK/lEl83mzI9rCinyGP+fV6M6X1MZDvlQl1fBMPaZYSTHITNHBlN5eH3SpuuTOwrySY8c311ZVBL9e28Qbhi10GesUwg4oIYwFlt3n7YhgV7z/c6qgjQA==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN4PR0401MB3598.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(366004)(8936002)(71200400001)(19618925003)(4326008)(33656002)(7416002)(9686003)(55016002)(66946007)(76116006)(478600001)(66476007)(66446008)(64756008)(86362001)(54906003)(66556008)(91956017)(52536014)(558084003)(2906002)(186003)(8676002)(4270600006)(7696005)(316002)(6506007)(110136005)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 MPTcc0vfLSyiAnTcb7e0ZxYCdd6X03PO+hkR4w17tnb1Hno0kQos4K2mXREGZgLW8ratrL8B7Yc0xKuz7b23avFfKFDWdAJZ0unQKNigvjqNfnlg52/l4vaB8VnQZB3bpxteL0XUwl5NiaDQSk18o+OUjU6RZIyFhczuuhvrHmiKMZyfEnE8pTXBOd+/9pmy9yKTzOl/EY3xuD3FdC+yKpHGqoQeKme4gD8v1ge8V9vYMQPLRQN4SQUoDaP0Vhgal4WsQ11lCXcin1Sgb/kzAOo6HfpEaNbxuyWxQRqCcOrSZPHv5iOftO9p1YtRY2wXtOWa1sA0KDa+0NgPNmtRLl7gp31EOqfqtNqNh6Ji75jeIcd8MNht16s0FlwCAxlS2P8ZifB1ZytH8GPK1oLJUuBHcDST5AFPfxNo41bS07GjTHsegEcVM/GpeqePqTEK25jhlPUX/8ihe+88T/wAtAe8dOwDwVwq3Ninb1XiaK4VoJrXQAY8ejc4zmH3i83S23c1Fzv9HnNe4uR6JJMG8X6/ACG+ofnt/1lGR8TWaNqhuE4Gi7IGxuzaNvNEg8jY0uvKRJBl9Dda3h/Tvg9GBd7a8Wd5g1EN03Czd4mU+OGBFfY63oZR25JElYnN6XzREmBQGQmrYNBD0aZqccL0wwTasGf4sNjm8alHdyyRXA/Cs8mp3uYnhFcU+sEq2shb/XRcDIJn+LkN+wlrb9jGvw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ee06524-e692-40f5-4335-08d88c6664ce
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:37:47.7068
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: FtL10fHlsqtF9VKWClE12kvG/4VucNbBlCk7eB3jP86NM/MVKoi5xzwEvgHccVAHfo1Z5D+XGYZM8Ev0yG9YEwoLHBffgmsp1L9OHKub5+c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30416.60399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZv-0004ct-Fq; Thu, 19 Nov 2020 08:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30416.60399; Thu, 19 Nov 2020 08:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZv-0004cj-Ba; Thu, 19 Nov 2020 08:46:07 +0000
Received: by outflank-mailman (input) for mailman id 30416;
 Thu, 19 Nov 2020 08:37:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffRk-0003e2-HS
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:37:40 +0000
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fd8a3d5-dd05-4250-99a4-0a0041396c56;
 Thu, 19 Nov 2020 08:37:39 +0000 (UTC)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:37:35 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:34 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
	id 1kffRk-0003e2-HS
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:37:40 +0000
X-Inumbo-ID: 9fd8a3d5-dd05-4250-99a4-0a0041396c56
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9fd8a3d5-dd05-4250-99a4-0a0041396c56;
	Thu, 19 Nov 2020 08:37:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775059; x=1637311059;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=qJHqSFgrWOAVoePii4sh/14KxdT6irr9BCDF+Ql9YylxptHw758npzh3
   VNgvMBT4LH7inbZppzl/JvRtSlbRSokdXcrLucXlHnn4qEVQZ/m7D7Gkv
   8q7HxGFksSYpAkLPjMBxZ6f2F7SafMjWA77UQzJN6QCjhPZwTaIw+V0SO
   AI3l3/nh9ohJnPThNMIhLOL784O4tdiN4/oz2K4jFha3hEQXj9RQwjslU
   DfGfHWmBIdjlJc+s6cCwCT/rtXzuf7LckHTyd631MJQv2YCvIvpTdO82c
   Quf/1CwaoKjqXJcRE76mLJbqvY3M1HQ4ncRSXN4PtwNWGg4m7X7sO7Vzy
   A==;
IronPort-SDR: StdrIOMCYswRZg6jQ2lIawKj4Nb51SsYNOaUKn3yYM4pK8EmKRcYoe4YYh+hOReAxYVxjRQyD3
 zH2YHQ41lsKGs3R6Pqb88ZMyny80vimXpqMFaLq5SJmhDbcY87lJAH4mrboq+PWUYwxx7BSO0V
 r0JP/AZbiuYHtrWwMGTI+UsH0S0WOEYOXo7X22T6amLT/QB4izs34Moyfjiu5zCH5Xtak5PlhB
 FV08lz9n2XXzhhsm8NChZ1aoZy8V3KVHcty6N/Trw8XN0R4n6oHmb2sOfrkVfuOXMyc5CHWYRA
 GeA=
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="157444417"
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
  by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:37:35 +0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cSUdLulXwC1rvWOimL1A7kY1LTHxVN7ZcGJiCwNlWczsNYGL+0u2rHK44wIUCHgxWkiccBMuheKFbRGYyUOktxuX6su76dOBAycwgXoDAX/gQ6geibQ2rr3TS74yCrqKCBKH3mGbrkMe5KjJdr81bPrlO3qHi5Kel50Q40QMoQu25Hw5NmbYz74xtre+sT/xhakQztZK80kG0Kux6ZupKFkRn5k/5G9c3mSihNYj2B9I5q2hEbM/2DPdBQtW0mZwCsXUKmVKHRLqSUGPH9O6GcVfKH+wrc1cTJwHCKqYwgCfe8/pkJyqWQt1vKMgbp45Z3yFwXaCHwato8Xm5AfcUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=F8wCY2Fpo50vnXWVzgyJZI5BgL6jPy5P148VhGErow4Y0vBR4GFRt9ZvrHuAsVOlfqhEAYEUoxDxq0iGcWIF6oYVJGJ/NShS32KODDWobErX6Oj3R0hbXGReJqpPCmT2UsAflN2YtLmLZeucflicgToN2EOXgW633PsIHZ9Nr6lyq6uGZ36Nx1azFD9O8db5IRWEDY4uUC/B6DEWxTn7mCzptvkRwJ5oXpec6kJ2z9uDHxPm9eO5eG0ZVn8DJwEmCOkYyaiEOzFlkO8g1wLxFkrg3Lmht8/CluGc16iTN6g3yVdROCVVzkFiO7sWlCxzV0in847V8WvLTy5KDZjxZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=jNc+dVQ7zDr8sGJ/6uqB0+7jh2yCdDXaulxVl+OyY8Valg02KDw9Ke6CW359CyIhDruFYHXYxNuKGJP3QX1m8jhLrInWG9zaEf4qRI867HbxiU3RvccV2R7RZEUhJBQwd2E+CXa4zGTmfu6db4M4D1mRpu67j6twXyfQP9sWs5o=
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:34 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:34 +0000
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 02/20] block: remove a duplicate __disk_get_part prototype
Thread-Topic: [PATCH 02/20] block: remove a duplicate __disk_get_part
 prototype
Thread-Index: AQHWvYfztpq6w/pFC02UtIWdUqy7dw==
Date: Thu, 19 Nov 2020 08:37:34 +0000
Message-ID:
 <SN4PR0401MB3598F94B66D20E767C5AD4C29BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-3-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 98cdfb1a-7b94-4350-579a-08d88c665d0b
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB48628F2FA5D18715BA01A5D79BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 9EU6zsu5wfqN3LzwpruHOA3vp8P7XrQotEGqEig19wHTBrjlE1ohLK+ynonRdF1W+UL5wUOVT7vKCtAIQt1dSmxkpZy4DwcLO44FjnTrjbNXSNUI9Mq5wH4rrR8XdBKih9KoY4v1it7GZ4WesFQhZZ5dBKUzP5pmgwnmt0WtD5bIKtK5htwauYpXl8TH4sg6j0HepHhcs7hep71FbRp4+8+nbJpt7IIBzHakBKpMSVoqQ8BhUxMKvxr4VZlyjthyXgTgH9RQ9rYR0A5fg+h6P3vM04AI1LUtG8AD6oJeYvLkuNL9IfWg1mZIXI71jhfByHqksAhEDVkCYrfz51ndNA==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN4PR0401MB3598.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(366004)(8936002)(71200400001)(19618925003)(4326008)(33656002)(7416002)(9686003)(55016002)(66946007)(76116006)(478600001)(66476007)(66446008)(64756008)(86362001)(54906003)(66556008)(91956017)(52536014)(558084003)(2906002)(186003)(8676002)(4270600006)(7696005)(316002)(6506007)(110136005)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 csjjfoFZMdOUtjbQ7Aps9atodfrldZAbocrPpoPa9WuJ6wwSi7DnrL0jTbsl05umbEp5VVC+Yq+5KPyuvTo73nFFa0glxCUD3NqLFMLFCRmAqwzZ6jfwtuTOm1PGA+XNbnO2OBj5W8MgTuXPoSWugEuyBfyseXeerfvuPkcSGFyKN9wp3mddZY7FbFTQEBHdMb0j+HS7xJzjSmA7NBmmQwz5xyTPiYeoH2ga3rmicKdGPs+RkI0tc0bPk1q75G0FK3zd845mg1pA9gJoNCLF7sggrc7oZRshwwBzwahklVTj7UJRr8HiThJRCYgTVJwe9HSVhbVLjCB8fMTf1tFS01qJwvO7QRf5aiCsoSWY4BczKY+KevZvPRJeWcZKXZgWVP1fKakfbsuLMw0G7xu9h1+eOoypobiMkCp4KcJx4HXDIgJErir4uclycSdDgiCfNT58C8WSyGX/pqfKrXhFHSMjbg1zhQFlwLwmq+x4IVXunCLV+I/9vd/d1Xm1mZySuKEKxJX0X5kaxoc5ikUAsNXYZ2Qol7RsOAPY1XB2juhmbsYkVoEPCFliPFtr/McdEff0Y3lH9nbRLV2Fvx8F2elFq3wxIwNwKYRtzIEwafd8M5cLZt0SpoQl2w+Lc99wWYWrIchHiWf8tjaK1z0uxisWXgD48i7rr+9UZP7j5qsbKCrq/rkzjE9rrzRkXhzm8jX/DLvmDT+skKSPQRxSig==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 98cdfb1a-7b94-4350-579a-08d88c665d0b
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:37:34.7562
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: YNE1HeeUNLBOJT8P2aBEjcGVvGD9E2iGPxFIV8+WaWcaSCFxON7IDGhjH0CQvhEpzGLK8RUwPduSaG/uVJ3gocebzXpV1camyawT1yPVJOY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30414.60393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZv-0004cO-6f; Thu, 19 Nov 2020 08:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30414.60393; Thu, 19 Nov 2020 08:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZv-0004cK-3L; Thu, 19 Nov 2020 08:46:07 +0000
Received: by outflank-mailman (input) for mailman id 30414;
 Thu, 19 Nov 2020 08:37:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffRa-0003dX-7y
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:37:30 +0000
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60debf2d-9c9b-4a00-9e98-77181ed6fe18;
 Thu, 19 Nov 2020 08:37:28 +0000 (UTC)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:37:25 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:23 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
	id 1kffRa-0003dX-7y
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:37:30 +0000
X-Inumbo-ID: 60debf2d-9c9b-4a00-9e98-77181ed6fe18
Received: from esa3.hgst.iphmx.com (unknown [216.71.153.141])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 60debf2d-9c9b-4a00-9e98-77181ed6fe18;
	Thu, 19 Nov 2020 08:37:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775048; x=1637311048;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=DYS8kU5t8+gDFM/AZ3PiEzBGzb/yWd4KkYGKVL7Xhf+F8B91Lpn4Vdwa
   U8ygDho0OyTx5VJwCG99JawrHLqStydKB6zLT3I7vJi5d/HBumwK1zg8F
   s6psKCqr3sieVQEdWDAKWMTRBtEkIThyCKioCLBk1dHF+TAKBQCr1iAMm
   VpEIHI07l3VPuzH3XlLLxtYBVI4JpiBt3d2a45PAoxQc0YXw6CravpQZn
   SWpBNEa+kz/Cdoh9rnhtMnVwuhQEeKO+aGEv1N/5pv1hIvC1/tKAOkupH
   t2YvrS87InViHzR5f28yQ3W2mlVrDbVypdK11I4bENMb9o7jQHIZad5Vu
   Q==;
IronPort-SDR: FXHMCoLsL15+6sOwsJozXMQJ0lALrvI9IL+x+X2crG8/8GPNV5d8ilPUqFgxW3gQzlhCytqPP4
 paw07kWj2A30i4ystN53Nca96hHQVSsXNcoNTP7aTftKoZ99Gd1YF4N69ehve0pCXV/k5NpiCR
 qESF5OKAiESW1N0VJ2wq8o/3niQ+xq+YTcTCbF9iRJnI1JyT5z+bmmZ/PaHxT7f/GbFyQMrjXr
 C1JWLH9LcuytFZOUzKnP8rtzGwFLAsR4HE/xDD2vDjgIlJH3A8hHy5SJ/ocz2e0PiKH0fRzs0E
 3HU=
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="157444402"
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
  by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:37:25 +0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Cwh67RBgDM1TGPulSTbs6OaFFP71hroH8XuB0QNOJ/2DdfrHW7OfmV1LMs0+ReUQSaO5F67Js2vBtmUS1e+byxSG+G1Nm/Zkpp+nt9W4tIMZoAy4TXLGZDzKdjwSNoItL49P9KaX0TqGLTFVWxH74DmbwQqkT1DhIOmM1+sl8nIKZBrv+MPs60DwYKUw5vx7x8gK6u4r22pdf5LsRB/x2IAO3HpnPHVRCMeh27sE3R44BLSphaRMRSxEJyaoJVpcWPxNUxSEPwNpI1AhoMxv2InMqMX0VKB0uMs5F4ptEe+X5tZskk0hMJpvjjrILQWciRerQaum44WhXnmxc64vvg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=Xhidqf0qyfeWRGQf6Nu4FQjtF41S646jjL5tLscc2DALDxb8nHpXh8s9mG30wg24m+ycoDPRM5FJdG+/5M/IA3PzOB3zRVnLdEDZojBWLClemCI5/8Hqcr36LJAmnVKMKd5ZMCE+ogAKwzOoCpPY/qFIDnsRHPxmL30vT6CpAWq3Bb77EKdcgPTed0G/9eeDD9mCRCfXd3O1WYNClkLF4MIkVwKwuYY/EiyTF/MAGwYwqXRt1vgGQzU6q8Qq7iD+WM89eqeFD0vop0fI43k4UXQ1wi2BEWuSzai9Vd5PQrW4wJoA9jcbUxvPncjgqTTtQ76I+kIdlCIo0RHcfFV2AQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=L3hMC8CD7T6O7J97RrSMtA6qCaK4Aqk485OGiHROjriaEr3sB07xrQPcXf4ytiVnC6edePuayQx18+sb4HM0p0dxItuEVhAZ+Cpq9qGon+RoQ6W7LcBfnGRI+mBHDJK6jHla57abhhOgS6q09FQkgpkNUtKwgYGCGwGucJAT+kw=
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:23 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:23 +0000
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 01/20] blk-cgroup: fix a hd_struct leak in
 blkcg_fill_root_iostats
Thread-Topic: [PATCH 01/20] blk-cgroup: fix a hd_struct leak in
 blkcg_fill_root_iostats
Thread-Index: AQHWvYfzdYP5XcxCwU2lKCcMriBnrQ==
Date: Thu, 19 Nov 2020 08:37:23 +0000
Message-ID:
 <SN4PR0401MB3598AE3DBBB56E8B42A7D3EA9BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-2-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: d27a3179-29b9-4f23-a2f8-08d88c66568c
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB48624BDF06CA1E8797D237529BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 9WlGvL18XbMQQNFB0M4ZbO6snyh/Q5KyvC7OP0/lbAug3R7W9b/3WrXCPlPHusAEvbr7rUJOMGI5xkXkMdl0amoEa38m/hpSf9WFwoF7GlqD4ZBs9vuqNRr4/1SBJwubKx9V9miJOQ34pnqIIjv1D3dNQOJYWQNh98QVZN7hLTnYCwjcNEawLQjJ2sddSVJK72A1WBVPZfAIRSRFVBh6Mou3Mo5+XSmuRvjIpyisKnKXzbJqJfklEMhdvT9AnAcesxr0v7Csc8ei8AjpWVtX31QFHtm0Frg530vHx24TtdNgJti9bH/+Y9qLzq58fZZMJ5XxYEGm4Jfr1wT3FvCU2g==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN4PR0401MB3598.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(366004)(8936002)(71200400001)(19618925003)(4326008)(33656002)(7416002)(9686003)(55016002)(66946007)(76116006)(478600001)(66476007)(66446008)(64756008)(86362001)(54906003)(66556008)(91956017)(52536014)(558084003)(2906002)(186003)(8676002)(4270600006)(7696005)(316002)(6506007)(110136005)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 tdPYLl1Uv/jCJ+EAJnAzELPE9akpP6mmhvdZT1DlIVbDKnuJkGXzCF6JzYdIl55o5O8VmvUWtGz1PdeDxN63eGmpkxnMVdYAJZCg68XjlpVFb4yhvuL4b4ljwDtlpdbLp32E7gZ71kZ6HIopAY2Q/uP7fSfiYYIT6grFyeiln4UJtvyqkDCHftFj+ysG0U6pKCLKnt2TTZV6JHfeFmDsdnTo+sDs1bSqediaXwJ1vDFOMpWL/zxJ05sP6wnPnQCWT2uRsV8QVz25z22pcM2Ou+ket8IajLVBn1ygPdoeqRLbyIxNK5kCdbh6p22vqv2Uy1BD1e2zJwxYpfA45Nj/ahCPEIN5/Q7am7t2DEn2BIJBZiu3kD29D/htv+Tq5u/N8xvM/2b3DQnnfOh5vwT1jwVjZHGznAUc6e2GXVIjCD7wLv9cvTGnU+CC2/fYCJKXTk6rpEM1VWmEUjhid9+NnZ+CwceDWkA1dyr29vEAYCXR77z55xCB2Hsp7wKIYZXRS+ja9Ngu3+UQPxX3DOxA7vv89/BDx9e/kP6xy7P/E5tORlLCGUHiASz8MKmUrClzeBzeGqbS/h3se1SoZjYydGEFHGmGNwNvdgUkUw3DUIeBmAVcyypTU55UezXMwUdWczjtNsIddu4ZL3a0TYGWSF8OuSximTRkbz42iRaMrRtKPM0gIRAKmgXA6e1NXCktrhMEbHklHouJTXXqKaASwQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d27a3179-29b9-4f23-a2f8-08d88c66568c
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:37:23.7916
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /WcEokteTN/qdeIKSXHmL80DipE1H5Rz0S33tmNFVvpxBLt6BgRhEshpIUzrG0YnkplWFQTwOiFnV93VTrI8SLTWYoxkpWQRjyQgnsyJSvk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30422.60432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZw-0004fP-NK; Thu, 19 Nov 2020 08:46:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30422.60432; Thu, 19 Nov 2020 08:46:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZw-0004ex-FW; Thu, 19 Nov 2020 08:46:08 +0000
Received: by outflank-mailman (input) for mailman id 30422;
 Thu, 19 Nov 2020 08:38:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffSI-0003hq-KN
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:14 +0000
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fcdb4e3-6375-4e7e-ac38-ebdfa997c98c;
 Thu, 19 Nov 2020 08:38:13 +0000 (UTC)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:46:24 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:38:10 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:38:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
	id 1kffSI-0003hq-KN
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:14 +0000
X-Inumbo-ID: 7fcdb4e3-6375-4e7e-ac38-ebdfa997c98c
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7fcdb4e3-6375-4e7e-ac38-ebdfa997c98c;
	Thu, 19 Nov 2020 08:38:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775588; x=1637311588;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=ISSQ98JW424+rNEn4E0+jvSz6WxY3/p+xL6eQbskWdbJ4ykl8V+6LtUi
   9ifxxOScAxGNgzUt7q3bTzipTte9AlNDoHvX64utN4yQeEHG6Srq6gPrq
   qiDr5Q8Z1H1pRY5rtihdxbwkZD8eGhJk6LyyaA2xAgApw/gNi5CdC1cTY
   YxtjSQbKCOrZ1cAZHTgNIvt5tAq9TSwBP56ylbBt3zZ40HVxHX1sS4kFW
   s96TyfE1f4vZy565T0N9Tbg4z3t0Hgmi8jB2nJ5icoqzF4UaNleXwgAa9
   WiCgtx9FqV1AijGORfnzxOyJc0RO8Hu0BBGFVg3BRxGVtBEBKZZ8zgfPR
   A==;
IronPort-SDR: STrC28aPaitQo1E7sxRiqmb6GNJBmBQGnQiachRmpdfsFJGfkwSpLiOX3TjtF143VMo+SCCKhw
 +8ltkfjcLgCi9MignoCLoWJ7uRTKMjxK0UAcRorZALJIRRtOVZ8iTuyteQPCiKhf59C1CjOq36
 POBIcljs/LOOsRWxhLJUcgF5HjwiUjV6wfKPMtBa6rYZN8ygp+RI9zaCzhJQhBAVYNbt1wXYeS
 1w8pNSZT6i9l16GrdE6otQX6/O/qYPVsed/uU9gBly4m8dKdu+Hm9SlGAhCTUXLnWhhzc3KPiD
 VM0=
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="256563180"
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
  by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:46:24 +0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WrPXqYBH7Im3r88nZrkSFtdX2J7o6NcvSoz1XOmRpg53IbpnIwtEj2t52dXsgkr9VEKKux1Wrv4CRxHfvmGA9kbMyFZaE6Zv6W1WviHm1a9dOIdm+KqETW1bxMqBy4iUl3i/OH3rTrIeABK/nZQd3tcuGRqIaE+ckazcK2OyeNSi/i5S6EX6xDBRVkqt7OARea5vzjp6dzhs/sLLJOD4zGh19t7jxf47FEoGeJH/t/ah8mmb1F7A43UiHEHQo4usbPpGChTnDyycqTIU+p725F7LQOWkdD0qgfw6Y6kk5cye8heNjBGhvXXPkJXckAupcxmDVGixwXWQC81JfgQiiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=cQA31cy74cRbL6J8h1iQnSHwnfiYyWSAvH9e8IJ5X8iAXEGTyoBpHY/AKkLeWc9xj9yQb1MsF8GMZF5RBxrXtpxZbBKO3QUasU5GH15c721ZFPoy5sEcbT68bIOsB3WcH0Ax3JlEKcLnpmvmzwBCjGm1U4MOVSjQVnYNcyOeiFhjOIr6mR9GocbuOSaqsMSyMVPCH6vC2FaAYZUyqSKFJV0QED2NfdXhi2E9uQHIBzn9LmZuFq3ZNtphTBsiUO70/V9upqD7NTMlLEyq8oPUF3RsO6zDu2rL4V77NvdcZN5pUD2ku7RfE4P6ayHaWFgpRQfW+FO835l1qG73T3GR2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=JRSk8Y+Gtz3OMbzTm4M2fArUbQ+G6UsWPaVn5YsmAVDeWvq3o3sey8GhnF8dSo43D1kikHm74uSrSoFJOJNZuKzQJ7DISu96WtF12pzpC4qGVtCNWlYXReDABJyC/FA7UKhg2fz9jFUrYELgF4dpSN8s/0l8+TgYZ49LscZZ7d4=
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:38:10 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:38:10 +0000
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 05/20] block: use put_device in put_disk
Thread-Topic: [PATCH 05/20] block: use put_device in put_disk
Thread-Index: AQHWvYfygihSbkAO3EqJycs/+eV+qA==
Date: Thu, 19 Nov 2020 08:38:09 +0000
Message-ID:
 <SN4PR0401MB35985F6360E32F28890703BE9BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-6-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 92c3f2e3-cc64-499d-51ba-08d88c667205
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB4862CB19B47022438805E7B79BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 g42yADA20zKfT09YVXuquvxGh0iZXmsRDe9bklMiBrFQthaK28s1yf2EjTv1uoB0v6pKXfQCrEQVKqstUKmCpAt+rzV4LFtnPPrytVFPPjrsUsT1NY3vw96pHhMN4RK1582QV7b7eUxFSPToG8D+dIGtqOaZcHriUkTB/p55+rWo3EWMFtY3A12CPlhoEOaQ/+o71LMRPBCMrxVsqPjYcyrAdcAM1YTGLz9AdAxyjW4hEirLDTpfYU93z1QYWovqJncjaAhVvgJFG4jATu94xNJ42VcFduKC2UAQ6ZFdsGTs9oxiNNnabE5ylyI0dZwNUyEaT7Y+NUIVmBN/UdjTKA==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN4PR0401MB3598.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(366004)(8936002)(71200400001)(19618925003)(4326008)(33656002)(7416002)(9686003)(55016002)(66946007)(76116006)(478600001)(66476007)(66446008)(64756008)(86362001)(54906003)(66556008)(91956017)(52536014)(558084003)(2906002)(186003)(8676002)(4270600006)(7696005)(316002)(6506007)(110136005)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 /akJuE1o1z04hMhHtY8YfzGcCbUKkscN5Mk/ueb8x/1TSn28wXiiO/Uv/GeWQNoedJbudgJRSqr7s1xxUx4VG03KSLCLuNolrhv2cujAFucV4dcggqQKcNMDMvkCpIy/aue5UXHRQywDh1svaqswB+/H8Tn34pS51hr06VmgM9H2JkpsMwabvSeLo4NWzKfzxo24o9bEuEuLJP1Bhnjmh5sjWfYZdJrMCuc1ZPQY45j9hGoEL2xM98JsJlBFSP3qSNRMZa1dAUGcVgXSciVoTriuTumU3lkYPebUUQdSoqxkc9Nt+qtM+u+IcrzJI7+uVnfsfeMpUn4vsHbkn7pN8CVThtTjjSwe+9wqtRzHuh0bW8HKVWIlhs572lHA9Jx61ROnvD1OHWSCOTqI1yntTvV2rcaD79UJuQNREfG6mEbzmw80keiyBT9Q4aleLVtMxL/wsfBY6vOkS2MoMDh4xoh2+E6jQcwgea9GQABAp9leTw3iolDV+eo7LwZQs8XKQkvLAu4egy+aBGJ850FhTuZxrYUeCm1ITygSTGy04jvPEjG8rHPCj8KttrDjmB5vRYElkn7kVNjF90IsehoQjFEttUFlH0jZRgEqOZCJHQtFejZHQPQYhiPbyvQDJ4kRvYk0AqcrdpnGhC0cj4/vm17IkTi9xSqtdiafgvsYM+H8NbgWE6KIpyX38rR/Gw59D7VgN6EPBxkg422mROdckg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 92c3f2e3-cc64-499d-51ba-08d88c667205
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:38:09.9609
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /A+nGOFGHVgtcuTmOCVgRAAjJNv7win23VbADbkAEy0+10HCHizCBKfCvlNVPY6m9rus8x3oWitEqAHBqzMgL65YOdUAeLxVGcm7bjyLto8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30420.60422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZw-0004eP-AE; Thu, 19 Nov 2020 08:46:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30420.60422; Thu, 19 Nov 2020 08:46:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZw-0004e1-0D; Thu, 19 Nov 2020 08:46:08 +0000
Received: by outflank-mailman (input) for mailman id 30420;
 Thu, 19 Nov 2020 08:38:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffS7-0003fl-9V
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:03 +0000
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9310444a-4e03-444c-8bcd-b9839b9c2a57;
 Thu, 19 Nov 2020 08:38:01 +0000 (UTC)
Received: from mail-bn7nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.103])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:46:07 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:58 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
	id 1kffS7-0003fl-9V
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:03 +0000
X-Inumbo-ID: 9310444a-4e03-444c-8bcd-b9839b9c2a57
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9310444a-4e03-444c-8bcd-b9839b9c2a57;
	Thu, 19 Nov 2020 08:38:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775570; x=1637311570;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=IHlEp6/e6sY54AX71ncEuhkbxEg9TJAtQke0iCq0qn4zg2DC5Fc/ZEF8
   8lplbX+dZ63S4WbX5l/FaWAS5f0TkgA6gZRIr7R7SB7EjjK1pxnUzV4tj
   ax7CmzITNNaiCe4zc7ojWwccTNuFcMiAzF3GMJeytVSSAZjdEJdbTl7R9
   he/qj0Bj8VVt9BUCmdSHBEnbQ/ya1WUBMufw/uuLS+rd2IsBEqF17+Snw
   MWRyFwxK3cZNyVpscgMW7uGgAdR51XP1IPRkMui7MNT4AcizjIjiULYDn
   dS+VfFgB+/WFraCmgaF7V0opnnGjG/quLT5XPUnDzHFh/EPevcoEYiMkD
   A==;
IronPort-SDR: zh+Lhm4c30+A6tWFjBpJqqqvKXTtmLx/uJxb+Ey5pvk2XxNNdYhISVqynLPPLKekz+HqTu/i0N
 MBrZjlJKSXBsEm90b6+ege7WWcorjsTazp/tDTMlbHOdT3Afcd72/ADGryrhPx8Yp0Tez7QBvQ
 tJlpo5W/KpnUlQr/tfe9MPfmQS2uGznEdG99qLX9DT5RP27jMufJlsZu/CXVM9/7yupYnWPC0A
 /DtOKou0OTjtZ0+ljAW2xXukli+KF+T2Pu2eeDFE+tXFJyXimYQmCcv3VQimYs5bQMCHI7IegW
 dps=
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="256563160"
Received: from mail-bn7nam10lp2103.outbound.protection.outlook.com (HELO NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.103])
  by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:46:07 +0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GWVKShOMXhOTzxLJhzsSrtgsRsiqZr/9dClb03QT4bKVIFFTWTM6V1+7e3PTRvf/C93mVN5NJm/qyEZUoTlaJRH3bM2MWAOixIFjp9BUCRgI+UsUWlWnZP65tOhcc8y4H8N8B3iIOkk7H+dcNZFSRkJ7zOH18mNv5H5JfC4gbt7hTTdZT8A4U8Ywwiz2Etnbh3PvoW+0OiNYpc9n/4NYIUBzvI8wqjY0KzIQwhK6uXQm6vgUA8T1kZBeyYkWPQsW1KurIwXhrnB9LHvDyYinKdXv2rl+TzUSe8hlME2Trs0G4XhxKI7FznJCuti5ED2y/846BG794X4h7VEyw7xOag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=QKjiHISKqHrxtteinSEeFlD+3qKOop0HBg/DnDSrlE4aaVHpRHdpohfxEN8baBYkf4ZxwLk8NDwzUoQ531TtjKQfWN/LOkBZJurjkqw+wdVDwGkoPaFxfMxP5nmoCUr3LQE4sAxOR1qCCJYSXL/EqTtJw/hLFe5t6mA19colAFpg7oKiUI4R9wrI7Wo9KCsnupyRIwzBK4DSOiWMJTk/gvaN0Vy8yx5twbbHzcosjHDm1NfymaQIfLyWPFN4OHVzb7wCJpMXgH7FkH+ikL4N1aG7fLgp1SRMdR8B8eCRAiCOAJKucGDTh0eQPVJmRg1TcZwhjnqaCYkfaks8zDraFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=dHyQrs4qqJq3BNbbaAXd+LWAdp20FrQ7xiUO9rcIZVVEBzAoNqBzTASMQWR2XtUle6fCfqBcmbeCk+cCccmpm9Wyu3x20J3NbMQjIJC3GRCXkhtczlB34AWMhI7tCaukg8h0IXIrJYqr1DnGVGZZ/xwxcinamxLrNKvMPmge6Ys=
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:37:58 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:37:58 +0000
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 04/20] block: use disk_part_iter_exit in
 disk_part_iter_next
Thread-Topic: [PATCH 04/20] block: use disk_part_iter_exit in
 disk_part_iter_next
Thread-Index: AQHWvYfxuab2eSi9v0+4Jul0cvRVrw==
Date: Thu, 19 Nov 2020 08:37:58 +0000
Message-ID:
 <SN4PR0401MB3598E8CCC6C453F977898B9B9BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-5-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: dc2f26b1-8a58-4df2-8db1-08d88c666af0
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB48621A61F4471E2E89E82EF59BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 uM2A3PDmoKphIU4Fc16ogOp1M1rJ9+SLzvD0ZJkt68L/IbRqSIUJqleWl+jCNt8TGI7ZmCpjwSocgOZV5VkS2EMgz/nmv23hcIJhfdraCKUxH+nxb4ms0G8rm7HM+j3Vfro/l2b5oLe/IkFNCzBzjAMfxAUFmF82G7eV5I8T81ZfueCoS4SRCg/mHOZYIeqFhWdBjg7VaU05to+VFT5LaghV+Rjpy10mfkuvQk+lTiIXynjQ28WY52i0wiKk6EZz8L2MWpDs4bZYI+USuWxmYmjQLQgm8i0XSpl/k6axg7m8BdY+MPDgRmw+nBfqGZs/cInTtjx338CMyw1UEwzIwA==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN4PR0401MB3598.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(366004)(8936002)(71200400001)(19618925003)(4326008)(33656002)(7416002)(9686003)(55016002)(66946007)(76116006)(478600001)(66476007)(66446008)(64756008)(86362001)(54906003)(66556008)(91956017)(52536014)(558084003)(2906002)(186003)(8676002)(4270600006)(7696005)(316002)(6506007)(110136005)(5660300002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 IQWuh4ntmvjG/gqSF5MJQgN08Z6bkNozQweAEIGm/1mQo5R9TCC3cBtseCzb3QmJGQUiE1o54sobNLwSAWX/6oYmKaYDSH7cSH/q3MRUBXK1zMR9M6v1o9qf7W2zYWLcRaubUP88D5MNNVd33oqUf57ice3N/gDvDPltYue/PewJ5xrlecE7FjFNdOsTGym46YNDFl8vhZODWw1vcOKu/eyKQdw6/eCxq6Mapoj6V1w+fkEmMxFMwJA6vwbltHsKO3O6dQdrDEyC02kihOx77Q++4J58jatC3poVI/GqDeje3MjPGwoMGqAVxSVday3r2IpTUIBvunpWHM/D5tcZBtzLR/x6trX7T3UyDzXA7ZcY5PpI/x8Nr5PYK6rmiPggyNBZyPofbe/gLOus8r2qEu1wT8GtpjruCMicI72uiUhRwBIdMzCB23r/N1mHyPdjjTBJfnpXOgWBjhJkPK5aBWyE5cOalePDkgSLqUwDJEJhnEr3CAN+K4P2j1ENBGpdUk2SyQ1PpkgPMkauNbv5fW/IFg8f3HIi56U1o6lpIZTkaV4jXGwhQyLNuSstSjwwkkt5gRjilo+jOskqL45r0nYQy+rtt37OJLx1QNj2yYzp9Qp/uylqrH2drYzLR6OTofT98heAoAyjRO+VAVG5RN2/TFIwQOavTtr3udCtTYLoys/rz1plvkWksJHLNmTXMYVXlh8rU59KUiizMBADxQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dc2f26b1-8a58-4df2-8db1-08d88c666af0
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:37:58.0568
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: xbgWyuKmpIgY3doUcSCJcrP096yLZcBzyRT5z9TiavwjsyTmsIh3cD8YMX5Ad0CTeluy+4pfJXdZTUvG83+xcAFmk9bYEROGcdWgsdhRth0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:46:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30426.60458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZy-0004iC-4g; Thu, 19 Nov 2020 08:46:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30426.60458; Thu, 19 Nov 2020 08:46:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffZx-0004hS-I5; Thu, 19 Nov 2020 08:46:09 +0000
Received: by outflank-mailman (input) for mailman id 30426;
 Thu, 19 Nov 2020 08:38:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
 id 1kffSp-0003ix-R4
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:47 +0000
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18a317fa-66fd-4989-a408-09888357a07d;
 Thu, 19 Nov 2020 08:38:47 +0000 (UTC)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:47:15 +0800
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:38:43 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:38:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X+6U=EZ=wdc.com=prvs=58568f3f1=johannes.thumshirn@srs-us1.protection.inumbo.net>)
	id 1kffSp-0003ix-R4
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:38:47 +0000
X-Inumbo-ID: 18a317fa-66fd-4989-a408-09888357a07d
Received: from esa2.hgst.iphmx.com (unknown [68.232.143.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 18a317fa-66fd-4989-a408-09888357a07d;
	Thu, 19 Nov 2020 08:38:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1605775639; x=1637311639;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
  b=lYHDsNgYhE8hOt/8RVYGpIQG8kvs7mWLLBQMNVNtG/Et6L4NtkqkO0T4
   fTFMSlGoMMgjEK7jQGno9dcP/bwp8a8WWL33/mcZuL83ZKAuKyYzOGBNw
   Zdk1moXIA4FqOwVYAjeA+jVJ62q+yNP3vBptANZ5ahhOzmX8TfVqm+S2i
   XCEifMaPHj1XIWEWUukKaJeqeIKvCEHn4tQnEPjmibu8Z9DwFRsgVbvTn
   PEnnfKIisk0unNLCB/DtLxmQ0bKKhUAvvwxoAOP31rLZLb6fcCX56DN8f
   Hl2sjNgi2Z/oi6XIOV2zxf8yLPNzx5hJ+7yBIGqcTGXqIgppbEw3+yAfS
   Q==;
IronPort-SDR: vAoAVeJm1/5MEtcMU/kFHtNBbrm5JShpR7BiZqP78Dha/TKDLv3ZCyYmdM+YVahaE7+p4jGq6D
 irSWb7OFiFT0ng9oh8j5mY4+80e3SIWSdTjOOvxPHG9dU8+bqaFoNAU7P8QoNyetXzhr3mdsKa
 SbHZIrsBDMNPmMKym3v9RujfhSvVjfv+oBT9BOmxe2QJgx7aS1OAsySwZ/QxLrFDCdkKoBdYdl
 i03cjz70h+etnh0VznvYAmwNo+yAj+susaHipAF3iyZu6pUiZ8MjAAqTLx3QJn9V03RUCZLns5
 /Oo=
X-IronPort-AV: E=Sophos;i="5.77,490,1596470400"; 
   d="scan'208";a="256563235"
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
  by ob1.hgst.iphmx.com with ESMTP; 19 Nov 2020 16:47:15 +0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mTJNDjrOkjKEEtltr7Lp28AY44glVJvAntj68LzZ5QtfRJO0CTLu3ccAuX7W9p/dU+zB/8jQaBnjDCqLs3QEN3fvQ+WrINlkaZnGRYqrpOXCxillx8Xt0jh9KLG7YTGT3o9euhxaMe/89QU0pogtqD7HY61EPOoBvzkOheX+ogx/zRv+h3IxZtVS4AJhrFog8SZ2n4IbiB0EnSfKhzyyAWWT6Te5Lti7COYze1WgEQvBKvHqr8mb9ou0B2pdAkGHEmiOCbAMukicxR7l2y3ct92z8iKmN+CZ0XNY8L5N1QWmeYfDBM/w8aZ+YMW3eWvdEm7ZkCHU4M0RXKzz6Fgzsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=Rp0DYDQGL3aFvz3RrQmZfTVeyGU1Rhpt7+b4iT1/90mzpc9nFVGhPt8ZJdfyVjhdgCTeZ9wubcid0xPEfoMx/9DAGGzmLKOR6w+aI/GRVoJLDjcSf4IwlIrmWGlf4JP/YOE97Bi3xKQ6VbgernV98aWDs68IfYngGdPcmG1aSMmEep+p5O3wfEpUomvGuPipyJ+9+vOTaFi2KJ0+aG95DcT+zwnxEhBOzPYXs2fHQkot0miX6Je1e9zNx6EvkjeIxtx3D0NPh5rbcP70YEslMrqXsg9FXZ04eaabLDe4wMVcdPO4u53gNAWrT8zs9DxDWbCD4WZeeseAYcYK2QXyZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G8FZ0D3PP/OudJ5uuxCAz/C/vBHo8wESZoPwxTWqVEI=;
 b=WtU+DNkWcHHq9v+7vltlYxPAS3rs0uIjWexfEJj+xL69WFw+sQhns2/N3GWO7gySVSfBpEWOxdD1+E7Xv1u0LIDRg+MgDTplJstokIA9gPmHAIGY8QOmAVyPspI6oiBRFg9dRFtL3FXcn9sHHJL0NTsyerr3OHdquakuYrrqYZg=
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 (2603:10b6:803:47::21) by SN6PR04MB4862.namprd04.prod.outlook.com
 (2603:10b6:805:90::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 08:38:43 +0000
Received: from SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98]) by SN4PR0401MB3598.namprd04.prod.outlook.com
 ([fe80::65d7:592a:32d4:9f98%7]) with mapi id 15.20.3589.022; Thu, 19 Nov 2020
 08:38:43 +0000
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, Konrad
 Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, Mike
 Snitzer <snitzer@redhat.com>, "dm-devel@redhat.com" <dm-devel@redhat.com>,
	Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH 09/20] init: cleanup match_dev_by_uuid and
 match_dev_by_label
Thread-Topic: [PATCH 09/20] init: cleanup match_dev_by_uuid and
 match_dev_by_label
Thread-Index: AQHWvYhocOE/KFj+qU6m8HXUxgOagg==
Date: Thu, 19 Nov 2020 08:38:43 +0000
Message-ID:
 <SN4PR0401MB3598D6B584FD73A97E3681AE9BE00@SN4PR0401MB3598.namprd04.prod.outlook.com>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-10-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2001:a62:157d:bf01:851e:5636:4e29:3e2e]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: d44d37f7-3e46-48b6-5716-08d88c668629
x-ms-traffictypediagnostic: SN6PR04MB4862:
x-microsoft-antispam-prvs:
 <SN6PR04MB4862D36B009D3979BBCC93EC9BE00@SN6PR04MB4862.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:1728;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN4PR0401MB3598.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d44d37f7-3e46-48b6-5716-08d88c668629
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 08:38:43.6815
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: +lLmbh3EUsXFC6D744TZ/ixltcXZZdJdq/76z3I//Re4QhH9XcIZeGLJayetrzTUbYsWZPdAf1w6xEJPAXVxQz9sQiR9y892Xuadh57IlW8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR04MB4862

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:55:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30467.60481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffiY-0006N1-LI; Thu, 19 Nov 2020 08:55:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30467.60481; Thu, 19 Nov 2020 08:55:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffiY-0006Mu-Hq; Thu, 19 Nov 2020 08:55:02 +0000
Received: by outflank-mailman (input) for mailman id 30467;
 Thu, 19 Nov 2020 08:55:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kffiX-0006Mp-6B
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:55:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36683ab6-c5d4-44db-af1c-a241ac70e219;
 Thu, 19 Nov 2020 08:55:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 338FDAA4F;
 Thu, 19 Nov 2020 08:54:59 +0000 (UTC)
X-Inumbo-ID: 36683ab6-c5d4-44db-af1c-a241ac70e219
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605776099; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ZCsZVMJKA3QWFiBtUMP/V6L6DiwzwyFl2FBdn72rle0=;
	b=bqcJkwNtFF7enc2Gz/9e13bixw229sHlYveJt+iZpqyG0SpJ3jHTOKYC3Q8wJCHJSlQ1pB
	O4kCcYlf2axk1w2ldYStK7nMt6RT8w/hw6CWpFsmHcBXkPqGuuCoRuCc8WPHn84zsRlRXz
	T4Rgbt8LvS/p2FFBGvs+edZ1QRFLyik=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] ns16550: #ifdef-ary
Message-ID: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
Date: Thu, 19 Nov 2020 09:54:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: "com<N>=" command line options are x86-specific
2: drop stray "#ifdef CONFIG_HAS_PCI"

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:56:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:56:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30472.60493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffk6-0006UL-05; Thu, 19 Nov 2020 08:56:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30472.60493; Thu, 19 Nov 2020 08:56:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffk5-0006UE-TL; Thu, 19 Nov 2020 08:56:37 +0000
Received: by outflank-mailman (input) for mailman id 30472;
 Thu, 19 Nov 2020 08:56:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kffk3-0006U6-Vk
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:56:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca522530-b50e-4a14-bfc8-aa1572bfdbec;
 Thu, 19 Nov 2020 08:56:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82E77AD2B;
 Thu, 19 Nov 2020 08:56:34 +0000 (UTC)
X-Inumbo-ID: ca522530-b50e-4a14-bfc8-aa1572bfdbec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605776194; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6qvxojLQTTwxD4oZzQweUGM/Nhj49DOzWVTHUZReTyQ=;
	b=DCmLNEQyfn+RELETGw0W2hYGcRaNaZRML99OC077SqnfUgzUDHgeuVgQnq+voOX5AI3jYc
	fhaEp+zOnT7va6o2Tiar8fW67gqnrvly+d2os9FoZNuHegtcOJYpFm9AfM+26FyKdftOFG
	T4jqsijCCIjLRJ6BIy0t+bUJkLL1Hk0=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
 <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
 <1530c2fb-8def-37eb-8a22-d7f9fc4e38b4@suse.com>
 <0946edb2-c2c1-0d3d-c8ff-f24055f78ebf@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2fa74dd2-1ee3-3c59-c711-2dbfd5119c00@suse.com>
Date: Thu, 19 Nov 2020 09:56:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <0946edb2-c2c1-0d3d-c8ff-f24055f78ebf@xen.org>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 16:35, Julien Grall wrote:
> So even if we are going to enable PCI on Arm and fix the compilation
> issue, there is no way the NS16550 PCI support would be usable without
> effort, for a few reasons:
> 
>    1) com1/com2 is x86 specific
>    2) ns16550_init() is not used by Arm and the only way to use a PCI UART

This is a good observation, and I wasn't aware of this. I'm sending
a patch which ...

>    3) UART is discovered through the device-tree/ACPI tables on Arm
> 
> So I think CONFIG_HAS_NS16550_PCI is the most suitable solution, and we
> should probably guard more code (e.g. ns16550_init(), com1, com2...).

... hopefully fulfills this (to be considered a prereq to Rahul's
series then).

Jan
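[Editorial note: a minimal sketch, not Xen's actual code, of the guarding pattern discussed above. The option name CONFIG_HAS_NS16550_PCI comes from the thread; the function name pci_uart_config() is borrowed from the driver, but the struct field and body here are placeholders. The idea: the config option selects between the real body and an inert stub, so call sites need no #ifdef of their own.]

```c
#include <assert.h>

/* Stand-in for a Kconfig-generated option; set to 0 to mimic a build
 * without NS16550 PCI support (e.g. current Arm). */
#define CONFIG_HAS_NS16550_PCI 1

struct ns16550 {
    int irq;
    int pci_configured;   /* illustrative field, not in the real driver */
};

#if CONFIG_HAS_NS16550_PCI
/* Real implementation, compiled only when the option is enabled. */
static void pci_uart_config(struct ns16550 *uart)
{
    uart->pci_configured = 1;   /* stand-in for actual PCI probing */
}
#else
/* Inert stub, so callers can invoke the function unconditionally. */
static void pci_uart_config(struct ns16550 *uart)
{
    (void)uart;
}
#endif
```

Callers then invoke pci_uart_config() with no guard of their own; only the config option decides whether the call has any effect.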


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:57:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30476.60504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffky-0006bR-AR; Thu, 19 Nov 2020 08:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30476.60504; Thu, 19 Nov 2020 08:57:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffky-0006bK-7L; Thu, 19 Nov 2020 08:57:32 +0000
Received: by outflank-mailman (input) for mailman id 30476;
 Thu, 19 Nov 2020 08:57:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kffkw-0006bC-VJ
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:57:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d9b6822-4248-42c4-86e4-73d5d80acc65;
 Thu, 19 Nov 2020 08:57:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 51832AD31;
 Thu, 19 Nov 2020 08:57:26 +0000 (UTC)
X-Inumbo-ID: 4d9b6822-4248-42c4-86e4-73d5d80acc65
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605776246; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8Guw0AQQoq43qB5JEnxwYm1Qf9NsGyB6m+V3JYlUK1A=;
	b=ZMMybzGrav5v7RcHRWg0QW5hjxpgBcsp+mFzm3EBHU6B217u4x5piBxrA/s716qDYIjQM8
	NBhn8c+0GRSL4miKYo8Koe94RllmlxYwA8g/xOu/uocca6btofzGY+10qlfwTSiwqkD8rN
	rOQAuTawWpIlKZiFLgvKK1Knkg4+qnQ=
Subject: [PATCH 1/2] ns16550: "com<N>=" command line options are x86-specific
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
Message-ID: <90d680ae-c0b9-ec4c-ebd3-eea26d286cac@suse.com>
Date: Thu, 19 Nov 2020 09:57:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Pure code motion (plus the addition of "#ifdef CONFIG_X86"); no
functional change intended.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -318,8 +318,8 @@ Interrupts.  Specifying zero disables CM
 Flag to indicate whether to probe for a CMOS Real Time Clock irrespective of
 ACPI indicating none to be there.
 
-### com1
-### com2
+### com1 (x86)
+### com2 (x86)
 > `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`
 
 Both option `com1` and `com2` follow the same format.
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -31,38 +31,6 @@
 #include <asm/fixmap.h>
 #endif
 
-/*
- * Configure serial port with a string:
- *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
- * The tail of the string can be omitted if platform defaults are sufficient.
- * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
- * can be specified in place of a numeric baud rate. Polled mode is specified
- * by requesting irq 0.
- */
-static char __initdata opt_com1[128] = "";
-static char __initdata opt_com2[128] = "";
-string_param("com1", opt_com1);
-string_param("com2", opt_com2);
-
-enum serial_param_type {
-    baud,
-    clock_hz,
-    data_bits,
-    io_base,
-    irq,
-    parity,
-    reg_shift,
-    reg_width,
-    stop_bits,
-#ifdef CONFIG_HAS_PCI
-    bridge_bdf,
-    device,
-    port_bdf,
-#endif
-    /* List all parameters before this line. */
-    num_serial_params
-};
-
 static struct ns16550 {
     int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
     u64 io_base;   /* I/O port or memory-mapped I/O address. */
@@ -98,32 +66,6 @@ static struct ns16550 {
 #endif
 } ns16550_com[2] = { { 0 } };
 
-struct serial_param_var {
-    char name[12];
-    enum serial_param_type type;
-};
-
-/*
- * Enum struct keeping a table of all accepted parameter names for parsing
- * com_console_options for serial port com1 and com2.
- */
-static const struct serial_param_var __initconst sp_vars[] = {
-    {"baud", baud},
-    {"clock-hz", clock_hz},
-    {"data-bits", data_bits},
-    {"io-base", io_base},
-    {"irq", irq},
-    {"parity", parity},
-    {"reg-shift", reg_shift},
-    {"reg-width", reg_width},
-    {"stop-bits", stop_bits},
-#ifdef CONFIG_HAS_PCI
-    {"bridge", bridge_bdf},
-    {"dev", device},
-    {"port", port_bdf},
-#endif
-};
-
 #ifdef CONFIG_HAS_PCI
 struct ns16550_config {
     u16 vendor_id;
@@ -980,6 +922,19 @@ static struct uart_driver __read_mostly
 #endif
 };
 
+static void ns16550_init_common(struct ns16550 *uart)
+{
+    uart->clock_hz  = UART_CLOCK_HZ;
+
+    /* Default is no transmit FIFO. */
+    uart->fifo_size = 1;
+
+    /* Default lsr_mask = UART_LSR_THRE */
+    uart->lsr_mask  = UART_LSR_THRE;
+}
+
+#ifdef CONFIG_X86
+
 static int __init parse_parity_char(int c)
 {
     switch ( c )
@@ -1214,6 +1169,64 @@ pci_uart_config(struct ns16550 *uart, bo
 #endif
 
 /*
+ * Configure serial port with a string:
+ *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
+ * The tail of the string can be omitted if platform defaults are sufficient.
+ * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
+ * can be specified in place of a numeric baud rate. Polled mode is specified
+ * by requesting irq 0.
+ */
+static char __initdata opt_com1[128] = "";
+static char __initdata opt_com2[128] = "";
+string_param("com1", opt_com1);
+string_param("com2", opt_com2);
+
+enum serial_param_type {
+    baud,
+    clock_hz,
+    data_bits,
+    io_base,
+    irq,
+    parity,
+    reg_shift,
+    reg_width,
+    stop_bits,
+#ifdef CONFIG_HAS_PCI
+    bridge_bdf,
+    device,
+    port_bdf,
+#endif
+    /* List all parameters before this line. */
+    num_serial_params
+};
+
+struct serial_param_var {
+    char name[12];
+    enum serial_param_type type;
+};
+
+/*
+ * Enum struct keeping a table of all accepted parameter names for parsing
+ * com_console_options for serial port com1 and com2.
+ */
+static const struct serial_param_var __initconst sp_vars[] = {
+    {"baud", baud},
+    {"clock-hz", clock_hz},
+    {"data-bits", data_bits},
+    {"io-base", io_base},
+    {"irq", irq},
+    {"parity", parity},
+    {"reg-shift", reg_shift},
+    {"reg-width", reg_width},
+    {"stop-bits", stop_bits},
+#ifdef CONFIG_HAS_PCI
+    {"bridge", bridge_bdf},
+    {"dev", device},
+    {"port", port_bdf},
+#endif
+};
+
+/*
  * Used to parse name value pairs and return which value it is along with
  * pointer for the extracted value.
  */
@@ -1501,17 +1514,6 @@ static void __init ns16550_parse_port_co
     serial_register_uart(uart - ns16550_com, &ns16550_driver, uart);
 }
 
-static void ns16550_init_common(struct ns16550 *uart)
-{
-    uart->clock_hz  = UART_CLOCK_HZ;
-
-    /* Default is no transmit FIFO. */
-    uart->fifo_size = 1;
-
-    /* Default lsr_mask = UART_LSR_THRE */
-    uart->lsr_mask  = UART_LSR_THRE;
-}
-
 void __init ns16550_init(int index, struct ns16550_defaults *defaults)
 {
     struct ns16550 *uart;
@@ -1538,6 +1540,8 @@ void __init ns16550_init(int index, stru
     ns16550_parse_port_config(uart, (index == 0) ? opt_com1 : opt_com2);
 }
 
+#endif /* CONFIG_X86 */
+
 #ifdef CONFIG_HAS_DEVICE_TREE
 static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
                                        const void *data)



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 08:57:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 08:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30481.60517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfflL-0006iP-Jh; Thu, 19 Nov 2020 08:57:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30481.60517; Thu, 19 Nov 2020 08:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfflL-0006iH-Gj; Thu, 19 Nov 2020 08:57:55 +0000
Received: by outflank-mailman (input) for mailman id 30481;
 Thu, 19 Nov 2020 08:57:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfflK-0006hJ-Bp
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 08:57:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39181b60-33e7-40bd-9e5d-fe6ef796f688;
 Thu, 19 Nov 2020 08:57:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CFAEAD2F;
 Thu, 19 Nov 2020 08:57:52 +0000 (UTC)
X-Inumbo-ID: 39181b60-33e7-40bd-9e5d-fe6ef796f688
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605776272; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=crdjdM7DkXckInu6b41yqI+iMgZG4mEeVJLCAwg6n94=;
	b=f08MdXrgrVvmsx1/OmMAn7evZs1MuyGyfJ2FgL2Ow/nYJmYC0/CfEdCIfOgIw8D4sSMq1T
	nI422J0yc8a/yHtyRB5EtmMebeNakNMFzT+9f80qOmpsx/RSWWVJi98tImcT9kN40fu/uj
	VMwDDJB4lthuxvZcuiXOpDNROSB5OAg=
Subject: [PATCH 2/2] ns16550: drop stray "#ifdef CONFIG_HAS_PCI"
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
Message-ID: <b4617026-32fb-8840-8998-90273a13fb39@suse.com>
Date: Thu, 19 Nov 2020 09:57:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <b74ba81a-da34-1e9a-9a15-f9dbb6005de8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no point wrapping the function invocation when
- the function body is already suitably wrapped,
- the function itself is unconditionally available.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -968,9 +968,7 @@ static int __init check_existence(struct
     return 1; /* Everything is MMIO */
 #endif
 
-#ifdef CONFIG_HAS_PCI
     pci_serial_early_init(uart);
-#endif
 
     /*
      * Do a simple existence test first; if we fail this,
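[Editorial note: a self-contained sketch, with hypothetical bodies, of the pattern this patch relies on. The names pci_serial_early_init() and check_existence() come from the diff above, but everything else (the config macro as a plain #define, the struct fields, the return values) is illustrative. The guard lives inside the function body, so the function is unconditionally available and the call site can drop its #ifdef.]

```c
#include <assert.h>

#define CONFIG_HAS_PCI 0    /* mimic a build without PCI support */

struct ns16550 { int io_base; int pci_ready; };

/* The function itself is always defined... */
static int pci_serial_early_init(struct ns16550 *uart)
{
#if CONFIG_HAS_PCI
    uart->pci_ready = 1;    /* stand-in for early PCI setup */
#else
    (void)uart;             /* ...but its body is suitably wrapped */
#endif
    return 0;
}

/* ...so the caller needs no guard around the invocation. */
static int check_existence(struct ns16550 *uart)
{
    return pci_serial_early_init(uart);
}
```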



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:00:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:00:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30490.60529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffns-0007in-4E; Thu, 19 Nov 2020 09:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30490.60529; Thu, 19 Nov 2020 09:00:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kffnr-0007ig-VU; Thu, 19 Nov 2020 09:00:31 +0000
Received: by outflank-mailman (input) for mailman id 30490;
 Thu, 19 Nov 2020 09:00:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kffnq-0007iW-G1
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:00:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96ec46dc-60b4-491f-b9c0-a14c5a5c58d8;
 Thu, 19 Nov 2020 09:00:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A70DEAD2B;
 Thu, 19 Nov 2020 09:00:28 +0000 (UTC)
X-Inumbo-ID: 96ec46dc-60b4-491f-b9c0-a14c5a5c58d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605776428; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zvL9pyoRF907QtRRaXB1t2Dt9IOlGCKcMhWe71ls8sA=;
	b=PAZUYhvCEypm74SzKqtGMny7Ug+x1Nz0QaASDTBdVUogdv9DHEwFbBJRlq+3SA8XYtHQfG
	Z5e9uihcQkUCL+9Vv9I0SaD+WvldCypcNnc+xKVJzQJnk9oDTyp8bEQhqfeRkBLuG5nmir
	NpHCD7d81TjN+O23sOrJZFCDRQd19/A=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
 <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
 <1530c2fb-8def-37eb-8a22-d7f9fc4e38b4@suse.com>
 <0946edb2-c2c1-0d3d-c8ff-f24055f78ebf@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f505669-a107-fecc-d26c-75e14cdabadf@suse.com>
Date: Thu, 19 Nov 2020 10:00:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <0946edb2-c2c1-0d3d-c8ff-f24055f78ebf@xen.org>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 16:35, Julien Grall wrote:
> On 18/11/2020 15:16, Jan Beulich wrote:
>> On 18.11.2020 16:02, Rahul Singh wrote:
>>> Hello Jan,
>>>
>>>> On 17 Nov 2020, at 10:55 am, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 16.11.2020 13:25, Rahul Singh wrote:
>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>>>> because ARM platforms do not have full PCI support available.
>>>>
>>>> While you've extended the sentence, it remains unclear to me what
>>>> compilation error it is that results here. I've requested such
>>>> clarification for v2 already.
>>>
>>> The compilation errors are related to code that refers to x86 functions (create_irq(), ...) and to the MSI implementation.
>>> For more details, please see the attached file with the compilation errors.
>>
>> The use of mmio_ro_ranges is quite possibly going to remain
>> x86-specific, but then I guess this wants abstracting in a suitable
>> way.
>>
>> The remaining ones all look to be MSI-related, so perhaps what you want
>> to avoid is just that part rather than everything PCI-ish?
> 
> Not really (see more above).

Did you really mean "above", not "below"? If so, I guess I need some
clarification. If not, I suppose I've addressed your concern by the
2-patch series I've just sent.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:05:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:05:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30497.60540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfft2-0007vz-MQ; Thu, 19 Nov 2020 09:05:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30497.60540; Thu, 19 Nov 2020 09:05:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfft2-0007vs-JM; Thu, 19 Nov 2020 09:05:52 +0000
Received: by outflank-mailman (input) for mailman id 30497;
 Thu, 19 Nov 2020 09:05:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfft1-0007vn-FH
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:05:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a01339e-d583-4cc3-9dd9-404a87dff1b3;
 Thu, 19 Nov 2020 09:05:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82ACBABF4;
 Thu, 19 Nov 2020 09:05:49 +0000 (UTC)
X-Inumbo-ID: 7a01339e-d583-4cc3-9dd9-404a87dff1b3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605776749; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5gAxh0vnt33Yqm3BNhR5upND9CkqBWPd9KqhU5sVrB4=;
	b=s/UoXcFHY7E6ZA6y5MVraD1hzc4lq9uYvPm8tKqze/Pn3BcV9yjmF3nmBnBbBe6swmocBg
	Ot59akh5NA98fZpNatj35/pjqNR7HfYcuxfgN1Suf5WIwK+wAKRGjxtDaP3aB5u69DMeps
	LCkJ5gKE9Dieg+zRSQ196WFLJQ6P2Uo=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Julien Grall <julien@xen.org>, Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
Date: Thu, 19 Nov 2020 10:05:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 16:50, Julien Grall wrote:
> On 16/11/2020 12:25, Rahul Singh wrote:
>> The NS16550 driver has PCI support that is under the HAS_PCI flag. When
>> HAS_PCI is enabled for ARM, compilation errors are observed because ARM
>> platforms do not have full PCI support available.
> 
>> Introduce a new Kconfig option CONFIG_HAS_NS16550_PCI to support
>> ns16550 PCI for X86.
>>
>> For X86 platforms it is enabled by default. For ARM platforms it is
>> disabled by default; once we have proper support for NS16550 PCI on
>> ARM we can enable it.
>>
>> No functional change.
> 
> NIT: I would say "No functional change intended" to make clear this is 
> an expectation and hopefully will be correct :).
> 
> Regarding the commit message itself, I would suggest the following to 
> address Jan's concern:

While indeed this is a much better description, I continue to think
that the proposed Kconfig option is undesirable to have. Either,
following the patch I've just sent, truly x86-specific things (at
least as far as current state goes - if any of this was to be
re-used by a future port, suitable further abstraction may be
needed) should be guarded by CONFIG_X86 (or abstracted into arch
hooks), or the HAS_PCI_MSI proposal would at least want further
investigating as to its feasibility to address the issues at hand.

Jan

> "
> xen/char: ns16550: Gate all PCI code with a new Kconfig HAS_NS16550_PCI
> 
> The NS16550 driver assumes that NS16550 PCI cards are usable if the 
> architecture supports PCI (i.e. CONFIG_HAS_PCI=y). However, the code is 
> very x86-focused and will fail to build on Arm (/!\ this is not all of 
> the errors):
> 
>   ns16550.c: In function ‘ns16550_init_irq’:
> ns16550.c:726:21: error: implicit declaration of function ‘create_irq’; 
> did you mean ‘release_irq’? [-Werror=implicit-function-declaration]
>           uart->irq = create_irq(0, false);
>                       ^~~~~~~~~~
>                       release_irq
> ns16550.c:726:21: error: nested extern declaration of ‘create_irq’ 
> [-Werror=nested-externs]
> ns16550.c: In function ‘ns16550_init_postirq’:
> ns16550.c:768:33: error: ‘mmio_ro_ranges’ undeclared (first use in this 
> function); did you mean ‘mmio_handler’?
>                rangeset_add_range(mmio_ro_ranges, uart->io_base,
>                                   ^~~~~~~~~~~~~~
>                                   mmio_handler
> ns16550.c:768:33: note: each undeclared identifier is reported only once 
> for each function it appears in
> ns16550.c:780:20: error: variable ‘msi’ has initializer but incomplete type
>               struct msi_info msi = {
>                      ^~~~~~~~
> ns16550.c:781:18: error: ‘struct msi_info’ has no member named ‘bus’
>                   .bus = uart->ps_bdf[0],
>                    ^~~
> ns16550.c:781:24: error: excess elements in struct initializer [-Werror]
>                   .bus = uart->ps_bdf[0],
>                          ^~~~
> ns16550.c:781:24: note: (near initialization for ‘msi’)
> ns16550.c:782:18: error: ‘struct msi_info’ has no member named ‘devfn’
>                   .devfn = PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2]),
> 
> Enabling support for NS16550 PCI cards on Arm would require more plumbing 
> in addition to fixing the compilation errors.
> 
> Arm systems tend to have a platform UART available, such as an NS16550 or 
> PL011, so there is limited reason to add NS16550 PCI support on Arm for now.
> 
> A new Kconfig option CONFIG_HAS_NS16550_PCI is introduced to gate all 
> the PCI code.
> 
> This option will be selected automatically for the x86 platform and left 
> unselectable on Arm.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> [julieng: Commit message]
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> "
> 
> Cheers,
> 
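[As a rough sketch of what the quoted commit message describes — the file placement, help text, and exact wording below are assumptions for illustration, not the actual patch:]

```kconfig
# xen/drivers/char/Kconfig (illustrative placement)
config HAS_NS16550_PCI
	bool
	help
	  Gate the PCI parts of the ns16550 driver. Selected
	  automatically on x86; left unselectable on Arm.

# xen/arch/x86/Kconfig (illustrative)
config X86
	def_bool y
	select HAS_NS16550_PCI
```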



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:13:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30503.60553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfg0V-0000X1-Gd; Thu, 19 Nov 2020 09:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30503.60553; Thu, 19 Nov 2020 09:13:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfg0V-0000Wu-DZ; Thu, 19 Nov 2020 09:13:35 +0000
Received: by outflank-mailman (input) for mailman id 30503;
 Thu, 19 Nov 2020 09:13:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q+xm=EZ=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kfg0T-0000Wp-3z
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:13:33 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62c683b1-1c5c-4edb-8480-9dd439fe7e97;
 Thu, 19 Nov 2020 09:13:31 +0000 (UTC)
X-Inumbo-ID: 62c683b1-1c5c-4edb-8480-9dd439fe7e97
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605777211;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=/i4zvbrgL4fwiEgjGJIRbLTZsFADwvrNUFh8fMCcY48=;
  b=ZxHv8INIDyFNTADZbRcln4knyaoC5ohh+nZaC7geL/F1MxEYIJUWe1Xk
   C1vQN3p4RUwlnj9j9M/f4Mwg2GwTGzlGFcRZOXfG1v5F56etRe3cGIGIp
   P9mjYM7+11HK3b5/vweDF70LrguvpoT9wZ+5g7bwn2CBJz1HAjIgrJ9zc
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: wPzPK+Nv+M0aSvaeHU0HblsXuTbCsy37+Yc3XCYrpiScb2jZNTGHObyW4tfRlVjb1cXgL4Uv1w
 wv3+d1F9gh3YCfiubkfxEzD6FdcngAGak366SpzANmMbRgmniAD4ySqxjHhzLQe/o3wBMHckmw
 8Ec6uZNmmG+xcDaQs/jGC6+1Vc5JMCLEy4eURgVikemaxKmrR0m5S8DChtkX+TbRX1cNCHi59d
 4DZ/oPXUrrJDtAvR0t8hvcDx5Ml4Tm2kmRf+/aOOWZvySvrdVnbCUloaygZU/OFtncvPySCQQs
 6To=
X-SBRS: None
X-MesageID: 31517615
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,490,1596513600"; 
   d="scan'208";a="31517615"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TgvZyuP9zkXAcvqXGdyBioncVjP2rfPxVb96okMcFmB4bopVIyCI+BQ6Oh/vtmLiCBTXw4BY+Elrm0LXuhwZGV2ARJX2fYGK4DzaNAP8tZfyUu9bJurVoh/2YnCcQ99ImlYxDGu/W2z8RKPT5/b2owACjxOBIrJAWtLKKC1hI/Damzf06ibf4RWx0IIrv8fw+JlUorkTXe7FDV75to0zZtUWLIONR/1VqHCXo2rnikpAzbNCqvuj2sfsT8ZHfZ/mID3ZriILcByPdnRFSH0OHr0NbgSeDJ4P2WqOolSDaCimc0UOzVHdYZ3+LDbYYzWw6j0hSMB3zkdlEJToSwAYpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/i4zvbrgL4fwiEgjGJIRbLTZsFADwvrNUFh8fMCcY48=;
 b=lLBUF4DcjJROsUDGq3DmzwyHM8/GxnGaM39t4F3Wqcm5fECR/O3ItEo6MUpAh3zpuLsIB0Nm7mbFef5z0ard7e8Zjr7O2X/qFKMWcGkwR9eJAJovHB2a8jdei+YbeKnH2f4OWCHCqvBjrkWJVxOPwlFnvhsuHOeGwmsgSDIqGW8lwuP6/97pd9i33iBSbAcnfyNXeD5L3PN/8s1OJeTBdDrJEa6u4umpCP6adYYci44okibOtRa51Pp+OIVFK/Vq2Kukz3HADhTQknRyMWubAqCucJip9ABBN+DKzRkM+s6/FwUtoX0iM03DHSQu3ctDw/pLcUe/aORQtsAEx07Tuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/i4zvbrgL4fwiEgjGJIRbLTZsFADwvrNUFh8fMCcY48=;
 b=fPjPn5d/DgmiI2ABoYhD+tJUnJ79femB+T4Nv0ONrEwi+PEkQumcXXdV1IlO8Ug+/tm/pABgCj/1qiWJlMlnBOW5O9H+JqKaJcI71+7VA582QHH2mZLniUlJNar9+MKqsQ58dWkHccO5ZhEtbyc6aygVZF+FeBJlaVMGz7YKC18=
From: Edwin Torok <edvin.torok@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "wl@xen.org" <wl@xen.org>, "dave@recoil.org" <dave@recoil.org>, "Christian
 Lindig" <christian.lindig@citrix.com>, "iwj@xenproject.org"
	<iwj@xenproject.org>
Subject: Re: [PATCH v1 4/4] tools/ocaml/libs/xc: backward compatible domid
 control at domain creation time
Thread-Topic: [PATCH v1 4/4] tools/ocaml/libs/xc: backward compatible domid
 control at domain creation time
Thread-Index: AQHWvQ7o705wPkFsE0eBdwc77BuOGKnOMtuAgAD7fYA=
Date: Thu, 19 Nov 2020 09:13:24 +0000
Message-ID: <705ac29eab72ea045b4c6be94227f26e1755cd18.camel@citrix.com>
References: <cover.1605636799.git.edvin.torok@citrix.com>
	 <559929d2ae95f6527e5050051c917b7586182ad2.1605636800.git.edvin.torok@citrix.com>
	 <dc2e5f92-2528-1475-1513-cfb8d8c3339d@citrix.com>
In-Reply-To: <dc2e5f92-2528-1475-1513-cfb8d8c3339d@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 853975cb-4921-434d-e1d8-08d88c6b5e6c
x-ms-traffictypediagnostic: BYAPR03MB4631:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB4631A9850F7E987A5DBFD0B99BE00@BYAPR03MB4631.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: rzH2MBbp//5YPmjMCzRiEeL/PkAwU7YeQsvuqLhX5J+iA6LGqV0QxRS1CBEk1DA6z8QQ/2c0Tmb0RIgfjXzXJTMl+OrPlVuT4/i9X0giPbEDcqSyiafZhp7DBrm2h03yoyVw/3Pc1y0Ubsb8GMHpL5xogzkMsPs9OCuh4fPBxMmG9lvM++VJLTglPu/i85Unl+c0R7wS0SPZp3jpZKhtCxCn96tLXmYNwc8ppUvtCHQcBd961VRCdwJkibI9RUKFPsQgp5GEPZh+mGWkAeS2dOSljb24OfUXVY85l+BiW+wQe65Brf0CpK333+tD+Rj1aCmMD2283OPYxzfnilNyan3lzEWP5SkidTB5Lr427wE/DZqjplEaCjzeLBb9MaZ0i1FjgOu0KbKwBYdELUqoSw==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB5888.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(39860400002)(366004)(396003)(4001150100001)(91956017)(76116006)(64756008)(8936002)(66446008)(66556008)(66946007)(66476007)(5660300002)(53546011)(6506007)(66574015)(6512007)(4326008)(83380400001)(71200400001)(8676002)(186003)(6486002)(26005)(2616005)(316002)(2906002)(86362001)(54906003)(36756003)(110136005)(478600001)(966005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: e3oobexYOR+GC95MIZM50hSEpOYUO0YzIzSbeFvsNnKWG+rLx8EtPnIXZjDKL0AO2VsolTpd/l2uDVpU518oWDIg2IHhXKeNd2xi9gWKGgCyf0Yc/CJtwvkAgRWreAGUPAxSjWp4OqqJ0rIe2yWZ8+hLWTSayGHWU7kt7AW01CqknihRHofe0ko0K102TqlIJUgRst4Uvgcy9GsH6EtRrJA5hTM1H98HFUwOb9/3gG/QmFkZPygSCbJXANzBtP3u+4ihIe7ojhkHbzArZNP6vBV9gBeimFoVomTL3qG5Qy4J+tYVWPIPIEzwSO7U4Nfd3T7alMqEVAfEQdPpka/4gAS8YnBMUDgOy4dX3Zbm61TfQJsbkdiQYetk2/xMMbAjOePWwVuNO26RxF80r2X6/NDjk9t+W9ubNvkqLMqkBq/SgK8wGn270jZUcIHAcMLHu4sZUE0ptHfkstOd3JRcjdM412h+68kvl9A1kJ1xL8OaOJu+5EKOriwmfW64oXccAGODkNEu//K/1a3nAI3ma05GaWPFW0lIV5huAre3210omoXA94qJcd1uInS5r7lw1tlgeMBJR7DG+bwbCYMPicnGreur1PqnjGoe/wJiMpfmPxYrUihV5NAhTrL7f/3tb2hdvMMQzSYJjmTNZF3S0crZXADs7T9+iamum67z9eV+ZLu/YfFbE7zY1mT3o7aVPMWTQHdLRNAs4MzAJ74kzXp1jpgZwmlCS6we1wlesTHIEzwNNsaOWF/lcxAS/YaljpDLThuJ+QJxB0cQQtkuyBOhaRGzjL8bgZhODzNkAawdGXC4fOsQJhYXDaX+H2yjICxM+A8luklt9+DdyFLed0Hoz2fDBAFAj6PsK2e5KfoEGC9vdNCZeZ3yYYKY+okWcB16v7wsPna+4oNdDP0wNg==
Content-Type: text/plain; charset="utf-8"
Content-ID: <9852BCA91215564B961CE3BE2B46E5AD@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 853975cb-4921-434d-e1d8-08d88c6b5e6c
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 09:13:24.5572
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 7+O0iDi1GjFKtnT7RAQI5QfvWsimP6exDnvqUL8i2UsMpNG9Yhh+9aQYSNBS0Tfnc1pEC40osqviIAIxW+6Mtg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4631
X-OriginatorOrg: citrix.com

On Wed, 2020-11-18 at 18:13 +0000, Andrew Cooper wrote:
> On 17/11/2020 18:24, Edwin Török wrote:
> > One can specify the domid to use when creating the domain, but this
> > was hardcoded to 0.
> > 
> > Keep the existing `domain_create` function (and the type of its
> > parameters) as is to make
> > backwards compatibility easier.
> > Introduce a new `domain_create_domid` OCaml API that allows
> > specifying the domid.
> > A new version of xenopsd can choose to start using this, while old
> > versions of xenopsd will keep
> > building and using the old API.
> > 
> > Controlling the domid can be useful during testing or migration.
> > 
> > Signed-off-by: Edwin Török <edvin.torok@citrix.com>
> > ---
> >  tools/ocaml/libs/xc/xenctrl.ml      | 3 +++
> >  tools/ocaml/libs/xc/xenctrl.mli     | 2 ++
> >  tools/ocaml/libs/xc/xenctrl_stubs.c | 9 +++++++--
> >  3 files changed, 12 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/ocaml/libs/xc/xenctrl.ml
> > b/tools/ocaml/libs/xc/xenctrl.ml
> > index e878699b0a..9d720886e9 100644
> > --- a/tools/ocaml/libs/xc/xenctrl.ml
> > +++ b/tools/ocaml/libs/xc/xenctrl.ml
> > @@ -182,6 +182,9 @@ let with_intf f =
> >  external domain_create: handle -> domctl_create_config -> domid
> >         = "stub_xc_domain_create"
> >  
> > +external domain_create_domid: handle -> domctl_create_config ->
> > domid -> domid
> > +       = "stub_xc_domain_create_domid"
> 
> Wouldn't this be better as handle -> domid -> domctl_create_config ->
> domid ?
> 
> I'm not overwhelmed with the name, but
> "domain_create_{specific,with}_domid" don't seem much better either.
> 
> > +
> >  external domain_sethandle: handle -> domid -> string -> unit
> >         = "stub_xc_domain_sethandle"
> >  
> > diff --git a/tools/ocaml/libs/xc/xenctrl.mli
> > b/tools/ocaml/libs/xc/xenctrl.mli
> > index e64907df8e..e629022901 100644
> > --- a/tools/ocaml/libs/xc/xenctrl.mli
> > +++ b/tools/ocaml/libs/xc/xenctrl.mli
> > @@ -145,6 +145,8 @@ val close_handle: unit -> unit
> >  
> >  external domain_create : handle -> domctl_create_config -> domid
> >    = "stub_xc_domain_create"
> > +external domain_create_domid : handle -> domctl_create_config ->
> > domid -> domid
> > +  = "stub_xc_domain_create_domid"
> >  external domain_sethandle : handle -> domid -> string -> unit =
> > "stub_xc_domain_sethandle"
> >  external domain_max_vcpus : handle -> domid -> int -> unit
> >    = "stub_xc_domain_max_vcpus"
> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > index 94aba38a42..bb718fd164 100644
> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > @@ -175,7 +175,7 @@ static unsigned int
> > ocaml_list_to_c_bitmap(value l)
> >         return val;
> >  }
> >  
> > -CAMLprim value stub_xc_domain_create(value xch, value config)
> > +CAMLprim value stub_xc_domain_create_domid(value xch, value
> > config, value want_domid)
> >  {
> >         CAMLparam2(xch, config);
> >         CAMLlocal2(l, arch_domconfig);
> > @@ -191,7 +191,7 @@ CAMLprim value stub_xc_domain_create(value xch,
> > value config)
> >  #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
> >  #define VAL_ARCH                Field(config, 8)
> >  
> > -       uint32_t domid = 0;
> > +       uint32_t domid = Int_val(want_domid);
> 
> wanted_domid would be a slightly better name, because it isn't
> ambiguous
> with a boolean flag.
> 
> >         int result;
> >         struct xen_domctl_createdomain cfg = {
> >                 .ssidref = Int32_val(VAL_SSIDREF),
> > @@ -262,6 +262,11 @@ CAMLprim value stub_xc_domain_create(value
> > xch, value config)
> >         CAMLreturn(Val_int(domid));
> >  }
> >  
> > +CAMLprim value stub_xc_domain_create(value xch, value config,
> > value want_domid)
> > +{
> > +    return stub_xc_domain_create_domid(xch, config, Val_int(0));
> > +}
> 
> Using
> https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=36d94c17fa1e48cc9fb9ed15bc9a2237a1738bbb
> as reverse inspiration, couldn't we do the insertion of 0 at the
> Ocaml
> level and avoid doubling up the C stub?

I wanted to retain the old API for backwards compatibility, but you are
right that this could be done just on the OCaml level, I'll update the
patch.

If you upgrade Xen without upgrading xenopsd you'll get a fairly
obvious failure to start xenopsd due to the missing symbol, but that
could be solved with an appropriate dependency at the distro package
level. As long as matching Xen and xenopsd gets installed (even if not
booted into) xenopsd should succeed in restarting then.

Best regards,
--Edwin

> 
> ~Andrew
> 
> > +
> >  CAMLprim value stub_xc_domain_max_vcpus(value xch, value domid,
> >                                          value max_vcpus)
> >  {
> 
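[Andrew's suggestion to do "the insertion of 0 at the OCaml level" could look roughly like the sketch below. The types and the primitive are mocked here so the fragment is self-contained; in the real bindings the primitive would be the `external ... = "stub_xc_domain_create_domid"` declaration from the patch.]

```ocaml
(* Hedged sketch: keep the old [domain_create] API as a thin OCaml
   wrapper around the new domid-taking primitive, instead of adding a
   second C stub.  The types below are placeholders for illustration. *)

type handle = unit
type domctl_create_config = unit
type domid = int

(* Mock of the C primitive: simply echoes the requested domid.  The
   real one would be declared [external] and implemented in
   xenctrl_stubs.c as stub_xc_domain_create_domid. *)
let domain_create_domid (_ : handle) (_ : domctl_create_config)
    (wanted : domid) : domid =
  wanted

(* Old API, preserved for backwards compatibility: always asks for
   domid 0, so existing callers need no change. *)
let domain_create (xch : handle) (cfg : domctl_create_config) : domid =
  domain_create_domid xch cfg 0

let () =
  Printf.printf "old API -> %d, new API -> %d\n"
    (domain_create () ())
    (domain_create_domid () () 7)
```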


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:21:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30508.60565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfg7z-0001Wb-Fu; Thu, 19 Nov 2020 09:21:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30508.60565; Thu, 19 Nov 2020 09:21:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfg7z-0001WU-CB; Thu, 19 Nov 2020 09:21:19 +0000
Received: by outflank-mailman (input) for mailman id 30508;
 Thu, 19 Nov 2020 09:21:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfg7y-0001WP-4s
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:21:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfg7x-0007Lw-8k; Thu, 19 Nov 2020 09:21:17 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfg7w-00050i-W0; Thu, 19 Nov 2020 09:21:17 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=34dCLaEzvPfuLtsh2yUyi7VJJvxm+wbXreAmOflWzGM=; b=HRb/x51oFm/TMC8R7maBZaxwfC
	HNw4bWW/mBtg205vecqynOpNdaOstcbHJ38MOFZUX9eaTYdPORDu62Zt3tFuaem468l6CFWWvO1DI
	CWsaiQDcF03w6o1F1gHRLA82FQaQbn4SLwK5vOkCuCMmos3+9LdtUAI54j0Vs8oV5k/I=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Jan Beulich <jbeulich@suse.com>, Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
Date: Thu, 19 Nov 2020 09:21:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 19/11/2020 09:05, Jan Beulich wrote:
> On 18.11.2020 16:50, Julien Grall wrote:
>> On 16/11/2020 12:25, Rahul Singh wrote:
>>> The NS16550 driver has PCI support that is under the HAS_PCI flag. When
>>> HAS_PCI is enabled for ARM, compilation errors are observed because ARM
>>> platforms do not have full PCI support available.
>>
>>> Introduce a new Kconfig option CONFIG_HAS_NS16550_PCI to support
>>> ns16550 PCI for X86.
>>>
>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>> disabled by default; once we have proper support for NS16550 PCI on
>>> ARM we can enable it.
>>>
>>> No functional change.
>>
>> NIT: I would say "No functional change intended" to make clear this is
>> an expectation and hopefully will be correct :).
>>
>> Regarding the commit message itself, I would suggest the following to
>> address Jan's concern:
> 
> While indeed this is a much better description, I continue to think
> that the proposed Kconfig option is undesirable to have.

I have yet to see an argument as to why we should keep the PCI code 
compiled on Arm when it will see no use...

> Either,
> following the patch I've just sent, truly x86-specific things (at
> least as far as current state goes - if any of this was to be
> re-used by a future port, suitable further abstraction may be
> needed) should be guarded by CONFIG_X86 (or abstracted into arch
> hooks), or the HAS_PCI_MSI proposal would at least want further
> investigating as to its feasibility to address the issues at hand.

I would be happy with CONFIG_X86, despite the fact that this is only 
deferring the problem.

Regarding HAS_PCI_MSI, I don't really see the point of introducing it given 
that we are not going to use NS16550 PCI on Arm in the foreseeable 
future. So why do we need a finer-grained Kconfig?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:42:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:42:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30515.60577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgS1-0003UF-8C; Thu, 19 Nov 2020 09:42:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30515.60577; Thu, 19 Nov 2020 09:42:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgS1-0003U8-4N; Thu, 19 Nov 2020 09:42:01 +0000
Received: by outflank-mailman (input) for mailman id 30515;
 Thu, 19 Nov 2020 09:42:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfgS0-0003U3-33
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:42:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 186c4046-a3be-4531-a423-153cc3eaca36;
 Thu, 19 Nov 2020 09:41:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3791AC98;
 Thu, 19 Nov 2020 09:41:57 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 286A51E1303; Thu, 19 Nov 2020 10:41:57 +0100 (CET)
X-Inumbo-ID: 186c4046-a3be-4531-a423-153cc3eaca36
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 10:41:57 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 11/20] block: reference struct block_device from struct
 hd_struct
Message-ID: <20201119094157.GT1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-12-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-12-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:51, Christoph Hellwig wrote:
> To simplify block device lookup and a few other upcoming areas, make sure
> that we always have a struct block_device available for each disk and
> each partition.  The only downside of this is that each device and
> partition uses a little more memories.  The upside will be that a lot of
				^^^ memory

> code can be simplified.
> 
> With that all we need to look up the block device is to lookup the inode
> and do a few sanity checks on the gendisk, instead of the separate lookup
> for the gendisk.
> 
> As part of the change switch bdget() to only find existing block devices,
> given that we know that the block_device structure must be allocated at
> probe / partition scan time.
> 
> blk-cgroup needed a bit of a special treatment as the only place that
> wanted to lookup a gendisk outside of the normal blkdev_get path.  It is
> switched to lookup using the block device hash now that this is the
> primary lookup path.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

The patch looks good to me and I like the simplifications! I've found just
one small issue below.

> @@ -1748,16 +1600,18 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
>  	if (!disk)
>  		return NULL;
>  
> +	disk->part0.bdev = bdev_alloc(disk, 0);
> +	if (!disk->part0.bdev)
> +		goto out_free_disk;
> +
>  	disk->part0.dkstats = alloc_percpu(struct disk_stats);
>  	if (!disk->part0.dkstats)
> -		goto out_free_disk;
> +		goto out_bdput;
>  
>  	init_rwsem(&disk->lookup_sem);
>  	disk->node_id = node_id;
> -	if (disk_expand_part_tbl(disk, 0)) {
> -		free_percpu(disk->part0.dkstats);
> -		goto out_free_disk;
> -	}
> +	if (disk_expand_part_tbl(disk, 0))
> +		goto out_free_bdstats;
>  
>  	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
>  	rcu_assign_pointer(ptbl->part[0], &disk->part0);
> @@ -1772,8 +1626,10 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
>  	 * converted to make use of bd_mutex and sequence counters.
>  	 */
>  	hd_sects_seq_init(&disk->part0);
> -	if (hd_ref_init(&disk->part0))
> -		goto out_free_part0;
> +	if (hd_ref_init(&disk->part0)) {
> +		hd_free_part(&disk->part0);

Aren't you missing kfree(disk) here?

> +		return NULL;
> +	}
>  
>  	disk->minors = minors;
>  	rand_initialize_disk(disk);
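To spell out the point, the intended unwind — with the disk itself freed on every error path — could be sketched with userspace stand-ins roughly like this (hypothetical names, heavily simplified; not the actual kernel code):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for the allocation steps in __alloc_disk_node(); this only
 * illustrates the goto-unwind order (including freeing the disk itself
 * on every error path), it is not the real kernel code. */
static int fail_step;   /* which allocation step should fail (-1: none) */
static int live_allocs; /* outstanding allocations; 0 means nothing leaked */

static void *step_alloc(int step, size_t size)
{
    if (step == fail_step)
        return NULL;
    live_allocs++;
    return malloc(size);
}

static void step_free(void *p)
{
    free(p);
    live_allocs--;
}

struct disk { void *bdev; void *dkstats; };

static struct disk *alloc_disk(void)
{
    struct disk *disk = step_alloc(0, sizeof(*disk));

    if (!disk)
        return NULL;
    disk->bdev = step_alloc(1, 64);     /* bdev_alloc() */
    if (!disk->bdev)
        goto out_free_disk;
    disk->dkstats = step_alloc(2, 64);  /* alloc_percpu() */
    if (!disk->dkstats)
        goto out_bdput;
    if (fail_step == 3)                 /* hd_ref_init() failing */
        goto out_free_stats;
    return disk;

 out_free_stats:
    step_free(disk->dkstats);
 out_bdput:
    step_free(disk->bdev);
 out_free_disk:
    step_free(disk);  /* the kfree(disk) the question is about */
    return NULL;
}
```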

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:45:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:45:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30520.60588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgVT-0003ea-Ok; Thu, 19 Nov 2020 09:45:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30520.60588; Thu, 19 Nov 2020 09:45:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgVT-0003eT-LU; Thu, 19 Nov 2020 09:45:35 +0000
Received: by outflank-mailman (input) for mailman id 30520;
 Thu, 19 Nov 2020 09:45:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfgVS-0003eO-KQ
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:45:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfgVR-0007qe-HO; Thu, 19 Nov 2020 09:45:33 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfgVR-0007ET-9v; Thu, 19 Nov 2020 09:45:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5nbOD9UELbP8gFS0hwOsVv3brwLNuvhMJ67nY+aAgyg=; b=jaquHvPjfBy5pamUc0UtZ7cr+T
	5Jg5LPyc3an20vhWAHEYk9BrJYlDs8/xsRIp3GItVWO7y7tGhRDIg7/uvBSBJoOAsbh/e+ISbhrfX
	E1IF91YMUi0J52a35A/q7MzjJRmSlFevQpQxU1SzNf4a1c+WINqiFCzY1DtYK+DGMsjg=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <bd5fa7bb-7c44-1ec0-fc57-3ecf01c7d651@suse.com>
 <CBBE4253-F244-418D-9EA6-BC39D1BC8DF8@arm.com>
 <1530c2fb-8def-37eb-8a22-d7f9fc4e38b4@suse.com>
 <0946edb2-c2c1-0d3d-c8ff-f24055f78ebf@xen.org>
 <9f505669-a107-fecc-d26c-75e14cdabadf@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b3eac2b6-f412-0814-b876-72bf48488b09@xen.org>
Date: Thu, 19 Nov 2020 09:45:31 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <9f505669-a107-fecc-d26c-75e14cdabadf@suse.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 19/11/2020 09:00, Jan Beulich wrote:
> On 18.11.2020 16:35, Julien Grall wrote:
>> On 18/11/2020 15:16, Jan Beulich wrote:
>>> On 18.11.2020 16:02, Rahul Singh wrote:
>>>> Hello Jan,
>>>>
>>>>> On 17 Nov 2020, at 10:55 am, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>
>>>>> On 16.11.2020 13:25, Rahul Singh wrote:
>>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>>>>> because ARM platforms do not have full PCI support available.
>>>>>
>>>>> While you've extended the sentence, it remains unclear to me what
>>>>> compilation error it is that results here. I've requested such
>>>>> clarification for v2 already.
>>>>
>>>> The compilation errors relate to the code that refers to x86 functions (create_irq()..) and to the MSI implementation.
>>>> For more details please find the compilation errors in the attached file.
>>>
>>> The use of mmio_ro_ranges is quite possibly going to remain
>>> x86-specific, but then I guess this wants abstracting in a suitable
>>> way.
>>>
>>> The remaining ones look to all be MSI-related, so perhaps what you want
>>> to avoid is just that part rather than everything PCI-ish?
>>
>> Not really (see more above).
> 
> Did you really mean "above", not "below"? If so, I guess I need some
> clarification. If not, I suppose I've addressed your concern by the
> 2-patch series I've just sent.

This was meant to be "below".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:46:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30525.60601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgWc-0003m3-3c; Thu, 19 Nov 2020 09:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30525.60601; Thu, 19 Nov 2020 09:46:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgWc-0003lw-0b; Thu, 19 Nov 2020 09:46:46 +0000
Received: by outflank-mailman (input) for mailman id 30525;
 Thu, 19 Nov 2020 09:46:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfgWZ-0003lq-Vp
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:46:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df5730fa-f754-4bfd-b205-bff77ec3afe1;
 Thu, 19 Nov 2020 09:46:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFD7CABF4;
 Thu, 19 Nov 2020 09:46:40 +0000 (UTC)
X-Inumbo-ID: df5730fa-f754-4bfd-b205-bff77ec3afe1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605779201; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=u94emtc+aLV5/W+0yazA9hgYCHOSfmBt+1kwODKZvhw=;
	b=Kdu1GfeGAizOV59/6otQ/MkIQlifVWmf8MmNZiSzCP2+94rIZQD5GjUCKdDCVyO+g4gW4G
	gqVRL/JPzruTQjOlLXPTCa7hSXj6i11vf6vbaFyzDd8JfgW+pIw/RvPgnw7Tpbi5DkUuy1
	sHESoq9C/TcSetXcJFfkbI7KMNz6Fp4=
Subject: Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
 <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9adc7ec2-c014-d9ae-a8b5-5b942640386c@suse.com>
Date: Thu, 19 Nov 2020 10:46:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 20:10, Julien Grall wrote:
> On 06/11/2020 07:11, Jan Beulich wrote:
>> All of the array allocations in grant_table_init() can exceed a page's
>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>> for after boot. 
> 
> I can see a few reasons why they are not suitable:
>    - xmalloc() will try using an order and then throw away memory. This 
> is pretty inefficient.

But addressing this inefficiency, while a nice side effect, is
not the goal here.

>    - xmalloc() will allocate physically contiguous memory

This aspect matters here only indirectly: What we care about
avoiding are runtime allocations of non-zero order. The assumption 
behind the wording of the description is that during boot non-zero-
order allocations can typically be fulfilled and hence aren't a 
(general) problem.

> It would be good to clarify which one you refer to, because none of them 
> is really a problem only after boot...

Given the above, I'm not sure in which way you would see this be
clarified. Any suggestions welcome.

> One thing to be aware of, though, is that xv*() are going to be more 
> inefficient because they involve touching the page-tables (at least 
> until the work to map the direct map on demand is merged). In addition, 
> on Arm, they will also use only 4K mappings (I have a TODO to fix that).
> 
> So I think we will need to be careful when to use xmalloc() vs 
> xvmalloc(). It might be worth outlining that in the documentation of xv*().

The rule is quite simple and the inefficiencies you mention
shouldn't matter imo: Post-boot there should not be any
implicit allocations of non-zero order. "Implicit" here meaning
to still permit e.g. direct alloc_domheap_pages() invocations,
making apparent at the call site that the aspect of memory
fragmentation was (hopefully) taken into consideration. I'm
actually inclined to suggest (down the road) to have _xmalloc()
no longer fall back to multi-page allocations post-boot, but
instead return NULL.

If you think it is really needed, I can add something like "These
should be used in preference to xmalloc() et al whenever the size
is not known to be constrained to at most a single page" to the
comment at the top of the new header file.

Where the inefficiencies you mention would imo matter is in
(future) decisions whether to use vmalloc() et al vs xvmalloc()
et al: If the size _may_ be no more than a page, the latter may
want preferring.
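For illustration, the dispatch rule described here could be sketched like this (userspace stand-ins, with counters only to make the chosen path observable; not the actual Xen implementation):

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Stand-ins for Xen's pool and vmap allocators, for illustration only. */
static unsigned int pool_allocs, vmap_allocs;

static void *xmalloc_bytes(size_t size) { pool_allocs++; return malloc(size); }
static void *vmalloc_bytes(size_t size) { vmap_allocs++; return malloc(size); }

/*
 * The xvmalloc() rule as described: requests fitting in a single page go
 * to the xmalloc() pool (physically contiguous, cheap); anything larger
 * goes to the vmap-based allocator, so no runtime allocation of non-zero
 * order is ever needed.
 */
static void *xvmalloc_bytes(size_t size)
{
    return size <= PAGE_SIZE ? xmalloc_bytes(size) : vmalloc_bytes(size);
}
```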

>> --- a/xen/common/vmap.c
>> +++ b/xen/common/vmap.c
>> @@ -7,6 +7,7 @@
>>   #include <xen/spinlock.h>
>>   #include <xen/types.h>
>>   #include <xen/vmap.h>
>> +#include <xen/xvmalloc.h>
>>   #include <asm/page.h>
>>   
>>   static DEFINE_SPINLOCK(vm_lock);
>> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>>       return p;
>>   }
>>   
>> -void vfree(void *va)
>> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
> 
> I don't think "unsigned int" is sufficient to cover big sizes. AFAICT, 
> this is not a new problem in this code and seems to be a latent issue 
> so far.
> 
> However, I feel that it is wrong to introduce a new set of allocation 
> helpers that contained a flaw fixed in xm*alloc() recently (see  commit 
> cf38b4926e2b "xmalloc: guard against integer overflow").

For _xmalloc() we're talking about bytes (and the guarding you
refer to is actually orthogonal to the limiting done by the
page allocator, as follows from the description of that change).
Here we're talking about pages. I hope it will be decades until we
have to consider allocating 16Tb all in one chunk (and we'd need
to have large enough vmap virtual address space set aside first).
Also note that
- the entire vmap.c right now uses unsigned int for page counts,
  so it would be outright inconsistent to use unsigned long here,
- at least on x86 passing around 64-bit function arguments is
  slightly less efficient than 32-bit ones, and hence I'd prefer
  to keep it like this.
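To make the arithmetic above concrete — assuming 4KiB pages (PAGE_SHIFT of 12) — a 32-bit page count covers just under 16TiB of vmap space:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Largest byte count expressible as an unsigned int page count:
 * (2^32 - 1) pages of 4KiB each, i.e. 2^44 - 4096 bytes. */
static uint64_t max_unsigned_int_vmap_bytes(void)
{
    return (uint64_t)UINT32_MAX << PAGE_SHIFT;
}
```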

>> --- /dev/null
>> +++ b/xen/include/xen/xvmalloc.h
>> @@ -0,0 +1,70 @@
>> +
>> +#ifndef __XVMALLOC_H__
>> +#define __XVMALLOC_H__
>> +
>> +#include <xen/cache.h>
>> +#include <xen/types.h>
>> +
>> +/*
>> + * Xen malloc/free-style interface.
> 
> It would be useful to emphasize that they should only be used if the 
> caller does *not* need physically contiguous memory.

Actually, first of all, I shouldn't have copied the comment without 
editing. I've now made it

/*
 * Xen malloc/free-style interface preferable for allocations possibly
 * exceeding a page's worth of memory, as long as there's no need to have
 * physically contiguous memory allocated.
 */

albeit I'm not sure the "physically discontiguous" really needs
pointing out, considering the 'v' in the function names.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:53:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:53:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30533.60613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgdJ-0004mp-SR; Thu, 19 Nov 2020 09:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30533.60613; Thu, 19 Nov 2020 09:53:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgdJ-0004mh-PC; Thu, 19 Nov 2020 09:53:41 +0000
Received: by outflank-mailman (input) for mailman id 30533;
 Thu, 19 Nov 2020 09:53:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfgdI-0004mc-Tr
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:53:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af2f9386-1538-4890-82bc-a306b3508381;
 Thu, 19 Nov 2020 09:53:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 098EDAC22;
 Thu, 19 Nov 2020 09:53:39 +0000 (UTC)
X-Inumbo-ID: af2f9386-1538-4890-82bc-a306b3508381
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605779619; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Bjy5eM5ZfYpFLRU6q1VKG0ulPw7tPJ6XYm6IdPsRa9U=;
	b=phhjnAijCOClhWX53Vaw4V21SriBCtEP6X8p78JJ7xlDn3ZAHdRHGePDfk5apMtdfXsVFj
	hCFKn1zvFrMvjDVxu7wp2mPypA+m3lUHRsR302UqJgZjRskJXD2iNp3VSM3d7ELz0vK/DZ
	FM13lZap0zCPrhDlLt4IYgb7aVoIFjE=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Rahul Singh <rahul.singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
Date: Thu, 19 Nov 2020 10:53:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 10:21, Julien Grall wrote:
> Hi Jan,
> 
> On 19/11/2020 09:05, Jan Beulich wrote:
>> On 18.11.2020 16:50, Julien Grall wrote:
>>> On 16/11/2020 12:25, Rahul Singh wrote:
>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>>> because ARM platforms do not have full PCI support available.
>>>   >
>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
>>>> ns16550 PCI for X86.
>>>>
>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>>> disabled by default, once we have proper support for NS16550 PCI for
>>>> ARM we can enable it.
>>>>
>>>> No functional change.
>>>
>>> NIT: I would say "No functional change intended" to make clear this is
>>> an expectation and hopefully will be correct :).
>>>
>>> Regarding the commit message itself, I would suggest the following to
>>> address Jan's concern:
>>
>> While indeed this is a much better description, I continue to think
>> that the proposed Kconfig option is undesirable to have.
> 
> I have yet to see an argument as to why we should keep the PCI code 
> compiled on Arm when it will see no use...

Well, see my patch suppressing building of quite a part of it.

>> Either,
>> following the patch I've just sent, truly x86-specific things (at
>> least as far as current state goes - if any of this was to be
>> re-used by a future port, suitable further abstraction may be
>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>> hooks), or the HAS_PCI_MSI proposal would at least want further
>> investigating as to its feasibility to address the issues at hand.
> 
> I would be happy with CONFIG_X86, despite the fact that this is only 
> deferring the problem.
> 
> Regarding HAS_PCI_MSI, I don't really see the point of introducing it 
> given that we are not going to use NS16550 PCI on Arm in the 
> foreseeable future.

And I continue to fail to see what would guarantee this: As soon
as you can plug such a card into an Arm system, people will
want to be able to use it. That's why we had to add support for it
on x86, after all.

> So why do we need a finer-grained Kconfig?

Because most of the involved code is indeed MSI-related?
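For concreteness, the kind of finer-grained split being talked about might look like the following hypothetical Kconfig fragment (illustrative only, not something posted in this series):

```
config HAS_NS16550_PCI
	bool
	depends on HAS_PCI
	help
	  Build PCI support into the NS16550 UART driver.

config HAS_PCI_MSI
	bool
	depends on HAS_PCI
	help
	  Build MSI support for PCI devices. The bulk of the currently
	  x86-only code in the NS16550 driver would then hide behind
	  this option rather than behind HAS_PCI as a whole.
```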

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 09:59:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 09:59:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30539.60625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgj7-00050O-Ih; Thu, 19 Nov 2020 09:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30539.60625; Thu, 19 Nov 2020 09:59:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgj7-00050H-FD; Thu, 19 Nov 2020 09:59:41 +0000
Received: by outflank-mailman (input) for mailman id 30539;
 Thu, 19 Nov 2020 09:59:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfgj5-0004zf-TV
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 09:59:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f672350a-6ef8-49f5-bbf2-cb3d094f02ce;
 Thu, 19 Nov 2020 09:59:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 779B6AC22;
 Thu, 19 Nov 2020 09:59:37 +0000 (UTC)
X-Inumbo-ID: f672350a-6ef8-49f5-bbf2-cb3d094f02ce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605779977; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zGxGRRS5LRoHwz+VuxMJXFTK1Rm1o8gu0of3w4kqWYo=;
	b=KGaCh5obVLj4E7wCP1ldlHW52amRpgB/ipA/T3yKkVx92O1vGzDYtWhsek9DWBywlU7zZe
	9hs3GeH8eL0/gq1V2TWG2Wnmp2t2++CnfYz8DmJRHWmHLLmSaq4w9TulrJqWtsurF1fAyQ
	bIPm0FnMMz9EJ4nrqytxeSa94FltM/U=
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
 <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
 <8c77ff71-a14e-7cf7-5f27-c7c152ace240@suse.com>
 <3e2132c9-2ab3-7bfb-656b-2cab58a53342@suse.com>
 <alpine.DEB.2.21.2011121332250.20906@sstabellini-ThinkPad-T480s>
 <383f2f1f-96c1-1634-519f-3526019f4f48@suse.com>
 <4dda402d-62de-cce5-c205-12f4d13ec623@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fc917a93-a7d8-ba6e-091d-1ea4faac4173@suse.com>
Date: Thu, 19 Nov 2020 10:59:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <4dda402d-62de-cce5-c205-12f4d13ec623@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.11.2020 17:46, Julien Grall wrote:
> On 13/11/2020 05:18, Jürgen Groß wrote:
>> On 12.11.20 22:38, Stefano Stabellini wrote:
>>> On Thu, 12 Nov 2020, Jan Beulich wrote:
>>>> On 12.11.2020 13:50, Jürgen Groß wrote:
>>>>> Any further comments, or even better, Acks?
>>>>
>>>> To be honest I'd prefer to have at least one of the people Cc-ed
>>>> minimally indicate they consider this a good idea. No need for a
>>>> close review or such, just a basic opinion. Anyone?
>>>
>>> I see Jan's point that it is not clear how much this is going to help in
>>> production. However, it is not going to hurt either, and I have been
>>> told a few times recently that debugging Xen is not easy. Anything that
>>> helps in that regard would be good. So I think this patch would be an
>>> improvement.
>>>
>>
>> This patch is an effort to get better diagnostic data in case of
>> Xen crashes from our largest customer, so clearly intended for
>> production use.
>>
> 
> I agree with this statement. When you get a crash from Xen in 
> production, it can be useful to get as much information as possible 
> dumped. Some of the information printed by the debugkeys cannot be 
> retrieved from a crashkernel.

So, Jürgen, my request was satisfied; all that's needed now is, I
suppose, a re-submission with the few smaller items addressed.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:07:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30546.60637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgql-00065X-FW; Thu, 19 Nov 2020 10:07:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30546.60637; Thu, 19 Nov 2020 10:07:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgql-00065Q-C0; Thu, 19 Nov 2020 10:07:35 +0000
Received: by outflank-mailman (input) for mailman id 30546;
 Thu, 19 Nov 2020 10:07:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfgqj-00065L-Ik
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:07:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 646bd827-30de-463a-9d75-1ac1d0fcf34a;
 Thu, 19 Nov 2020 10:07:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3BBD2B114;
 Thu, 19 Nov 2020 10:07:31 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id E777D1E130B; Thu, 19 Nov 2020 11:07:30 +0100 (CET)
X-Inumbo-ID: 646bd827-30de-463a-9d75-1ac1d0fcf34a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 11:07:30 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 12/20] block: simplify the block device claiming interface
Message-ID: <20201119100730.GU1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-13-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-13-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:52, Christoph Hellwig wrote:
> Stop passing the whole device as a separate argument given that it
> can be trivially deduced.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

The patch looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  drivers/block/loop.c   | 12 +++-----
>  fs/block_dev.c         | 69 +++++++++++++++++++-----------------------
>  include/linux/blkdev.h |  6 ++--
>  3 files changed, 38 insertions(+), 49 deletions(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index b42c728620c9e4..599e94a7e69259 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1071,7 +1071,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
>  	struct file	*file;
>  	struct inode	*inode;
>  	struct address_space *mapping;
> -	struct block_device *claimed_bdev = NULL;
>  	int		error;
>  	loff_t		size;
>  	bool		partscan;
> @@ -1090,8 +1089,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
>  	 * here to avoid changing device under exclusive owner.
>  	 */
>  	if (!(mode & FMODE_EXCL)) {
> -		claimed_bdev = bdev->bd_contains;
> -		error = bd_prepare_to_claim(bdev, claimed_bdev, loop_configure);
> +		error = bd_prepare_to_claim(bdev, loop_configure);
>  		if (error)
>  			goto out_putf;
>  	}
> @@ -1178,15 +1176,15 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
>  	mutex_unlock(&loop_ctl_mutex);
>  	if (partscan)
>  		loop_reread_partitions(lo, bdev);
> -	if (claimed_bdev)
> -		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
> +	if (!(mode & FMODE_EXCL))
> +		bd_abort_claiming(bdev, loop_configure);
>  	return 0;
>  
>  out_unlock:
>  	mutex_unlock(&loop_ctl_mutex);
>  out_bdev:
> -	if (claimed_bdev)
> -		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
> +	if (!(mode & FMODE_EXCL))
> +		bd_abort_claiming(bdev, loop_configure);
>  out_putf:
>  	fput(file);
>  out:
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index e94633dc6ad93b..dd52dbd266cde7 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -110,24 +110,20 @@ EXPORT_SYMBOL(invalidate_bdev);
>  int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
>  			loff_t lstart, loff_t lend)
>  {
> -	struct block_device *claimed_bdev = NULL;
> -	int err;
> -
>  	/*
>  	 * If we don't hold exclusive handle for the device, upgrade to it
>  	 * while we discard the buffer cache to avoid discarding buffers
>  	 * under live filesystem.
>  	 */
>  	if (!(mode & FMODE_EXCL)) {
> -		claimed_bdev = bdev->bd_contains;
> -		err = bd_prepare_to_claim(bdev, claimed_bdev,
> -					  truncate_bdev_range);
> +		int err = bd_prepare_to_claim(bdev, truncate_bdev_range);
>  		if (err)
>  			return err;
>  	}
> +
>  	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
> -	if (claimed_bdev)
> -		bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
> +	if (!(mode & FMODE_EXCL))
> +		bd_abort_claiming(bdev, truncate_bdev_range);
>  	return 0;
>  }
>  EXPORT_SYMBOL(truncate_bdev_range);
> @@ -1047,7 +1043,6 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
>  /**
>   * bd_prepare_to_claim - claim a block device
>   * @bdev: block device of interest
> - * @whole: the whole device containing @bdev, may equal @bdev
>   * @holder: holder trying to claim @bdev
>   *
>   * Claim @bdev.  This function fails if @bdev is already claimed by another
> @@ -1057,9 +1052,10 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
>   * RETURNS:
>   * 0 if @bdev can be claimed, -EBUSY otherwise.
>   */
> -int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
> -		void *holder)
> +int bd_prepare_to_claim(struct block_device *bdev, void *holder)
>  {
> +	struct block_device *whole = bdev->bd_contains;
> +
>  retry:
>  	spin_lock(&bdev_lock);
>  	/* if someone else claimed, fail */
> @@ -1099,15 +1095,15 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
>  /**
>   * bd_finish_claiming - finish claiming of a block device
>   * @bdev: block device of interest
> - * @whole: whole block device
>   * @holder: holder that has claimed @bdev
>   *
>   * Finish exclusive open of a block device. Mark the device as exclusively
>   * open by the holder and wake up all waiters for exclusive open to finish.
>   */
> -static void bd_finish_claiming(struct block_device *bdev,
> -		struct block_device *whole, void *holder)
> +static void bd_finish_claiming(struct block_device *bdev, void *holder)
>  {
> +	struct block_device *whole = bdev->bd_contains;
> +
>  	spin_lock(&bdev_lock);
>  	BUG_ON(!bd_may_claim(bdev, whole, holder));
>  	/*
> @@ -1132,11 +1128,10 @@ static void bd_finish_claiming(struct block_device *bdev,
>   * also used when exclusive open is not actually desired and we just needed
>   * to block other exclusive openers for a while.
>   */
> -void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
> -		       void *holder)
> +void bd_abort_claiming(struct block_device *bdev, void *holder)
>  {
>  	spin_lock(&bdev_lock);
> -	bd_clear_claiming(whole, holder);
> +	bd_clear_claiming(bdev->bd_contains, holder);
>  	spin_unlock(&bdev_lock);
>  }
>  EXPORT_SYMBOL(bd_abort_claiming);
> @@ -1434,7 +1429,7 @@ static void put_disk_and_module(struct gendisk *disk)
>  static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
>  		int for_part)
>  {
> -	struct block_device *whole = NULL, *claiming = NULL;
> +	struct block_device *whole = NULL;
>  	struct gendisk *disk = bdev->bd_disk;
>  	int ret;
>  	bool first_open = false, unblock_events = true, need_restart;
> @@ -1462,11 +1457,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
>  
>  	if (!for_part && (mode & FMODE_EXCL)) {
>  		WARN_ON_ONCE(!holder);
> -		if (whole)
> -			claiming = whole;
> -		else
> -			claiming = bdev;
> -		ret = bd_prepare_to_claim(bdev, claiming, holder);
> +		ret = bd_prepare_to_claim(bdev, holder);
>  		if (ret)
>  			goto out_put_whole;
>  	}
> @@ -1543,21 +1534,23 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
>  		}
>  	}
>  	bdev->bd_openers++;
> -	if (for_part)
> +	if (for_part) {
>  		bdev->bd_part_count++;
> -	if (claiming)
> -		bd_finish_claiming(bdev, claiming, holder);
> +	} else if (mode & FMODE_EXCL) {
> +		bd_finish_claiming(bdev, holder);
>  
> -	/*
> -	 * Block event polling for write claims if requested.  Any write holder
> -	 * makes the write_holder state stick until all are released.  This is
> -	 * good enough and tracking individual writeable reference is too
> -	 * fragile given the way @mode is used in blkdev_get/put().
> -	 */
> -	if (claiming && (mode & FMODE_WRITE) && !bdev->bd_write_holder &&
> -	    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
> -		bdev->bd_write_holder = true;
> -		unblock_events = false;
> +		/*
> +		 * Block event polling for write claims if requested.  Any write
> +		 * holder makes the write_holder state stick until all are
> +		 * released.  This is good enough and tracking individual
> +		 * writeable reference is too fragile given the way @mode is
> +		 * used in blkdev_get/put().
> +		 */
> +		if ((mode & FMODE_WRITE) && !bdev->bd_write_holder &&
> +		    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
> +			bdev->bd_write_holder = true;
> +			unblock_events = false;
> +		}
>  	}
>  	mutex_unlock(&bdev->bd_mutex);
>  
> @@ -1578,8 +1571,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
>  		__blkdev_put(bdev->bd_contains, mode, 1);
>  	bdev->bd_contains = NULL;
>   out_unlock_bdev:
> -	if (claiming)
> -		bd_abort_claiming(bdev, claiming, holder);
> +	if (!for_part && (mode & FMODE_EXCL))
> +		bd_abort_claiming(bdev, holder);
>  	mutex_unlock(&bdev->bd_mutex);
>  	disk_unblock_events(disk);
>   out_put_whole:
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 044d9dd159d882..696b2f9c5529d8 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -1988,10 +1988,8 @@ void blkdev_show(struct seq_file *seqf, off_t offset);
>  struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
>  		void *holder);
>  struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder);
> -int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
> -		void *holder);
> -void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
> -		void *holder);
> +int bd_prepare_to_claim(struct block_device *bdev, void *holder);
> +void bd_abort_claiming(struct block_device *bdev, void *holder);
>  void blkdev_put(struct block_device *bdev, fmode_t mode);
>  
>  struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:10:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:10:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30553.60648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgtg-00071E-T9; Thu, 19 Nov 2020 10:10:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30553.60648; Thu, 19 Nov 2020 10:10:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgtg-000717-QG; Thu, 19 Nov 2020 10:10:36 +0000
Received: by outflank-mailman (input) for mailman id 30553;
 Thu, 19 Nov 2020 10:10:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfgtf-000712-3q
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:10:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7924d8a8-83df-492e-8272-154115fc3dac;
 Thu, 19 Nov 2020 10:10:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8C50AFE6;
 Thu, 19 Nov 2020 10:10:33 +0000 (UTC)
X-Inumbo-ID: 7924d8a8-83df-492e-8272-154115fc3dac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605780633; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uMZi+Zv34dLKGE4Sf1IGy1fJP6t3LR3Lp0kormnGEAw=;
	b=cV0xdDtJsJa+gw6KPOfTJWp70LI7kau2zWjJsvwNs+4O+c/iQjhIRJvweXX4oVkDxR++hn
	sPHdIoNX8Esu5yyUqfjY6YPk0s4ueiuHgwQGZiXDaGjlhALVg+jnpuUjesUmCaSbHH9rah
	cfQQMaaELZxxP5PqEgfPWZep6P7r3MY=
Subject: Re: [PATCH v2 3/8] lib: move list sorting code
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <19d28bcc-9e5b-4902-8e8d-ae95fbc560a6@suse.com>
 <aaf7183b-a843-57e3-84c9-7408a6ebfedf@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <791f0f5f-f7d7-64ff-bb4a-01911774eb8a@suse.com>
Date: Thu, 19 Nov 2020 11:10:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <aaf7183b-a843-57e3-84c9-7408a6ebfedf@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 18:38, Julien Grall wrote:
> On 23/10/2020 11:17, Jan Beulich wrote:
>> Build the source file always, as by putting it into an archive it still
>> won't be linked into final binaries when not needed. This way possible
>> build breakage will be easier to notice, and it's more consistent with
>> us unconditionally building other library kind of code (e.g. sort() or
>> bsearch()).
>>
>> While moving the source file, take the opportunity and drop the
>> pointless EXPORT_SYMBOL().
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> It looks like the commit message was duplicated.

Indeed - no idea how it has happened (also in at least one other
patch in this series, as I've noticed now).

>> Build the source file always, as by putting it into an archive it still
>> won't be linked into final binaries when not needed. This way possible
>> build breakage will be easier to notice, and it's more consistent with
>> us unconditionally building other library kind of code (e.g. sort() or
>> bsearch()).
>>
>> While moving the source file, take the opportunity and drop the
>> pointless EXPORT_SYMBOL().
> 
> You are mentioning the EXPORT_SYMBOL() but...
[...]
>> --- a/xen/common/list_sort.c
>> +++ b/xen/lib/list-sort.c
>> @@ -15,7 +15,6 @@
>>    * this program; If not, see <http://www.gnu.org/licenses/>.
>>    */
>>   
>> -#include <xen/lib.h>
> 
> ... this is not mentioned.

Well, not sure what to say. But anyway, I've added half a sentence
to also mention this.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:15:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:15:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30560.60660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgy4-0007EC-Fa; Thu, 19 Nov 2020 10:15:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30560.60660; Thu, 19 Nov 2020 10:15:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgy4-0007E5-CQ; Thu, 19 Nov 2020 10:15:08 +0000
Received: by outflank-mailman (input) for mailman id 30560;
 Thu, 19 Nov 2020 10:15:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfgy3-0007E0-C9
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:15:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b04b3bfc-8725-4286-906c-f697c260d042;
 Thu, 19 Nov 2020 10:15:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A4B4DACB0;
 Thu, 19 Nov 2020 10:15:05 +0000 (UTC)
X-Inumbo-ID: b04b3bfc-8725-4286-906c-f697c260d042
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605780905; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zwjH+Q6gVheDBgDgInqsbs9iYCY0+7bMWEZay4ju2Sc=;
	b=EYKE7pudBUD6tSYDnGE/lqbXYz6Jkintb16fsPYCfvvscgm2+8CxHpGxEFOz2g6z/TTpBA
	gmovmwaXredKK9t+/MwIfw7H35Svdy94fAweUhPdmelvDAufvC5seGOIW2PruhK9ZPUpTZ
	ush+ORAfKn7dzgImU8S5laNr3j2Pi/o=
Subject: Re: [PATCH v2 2/8] lib: collect library files in an archive
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
 <67740969-1ab1-db7f-5e3c-b15a20c1be8b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <49c2f770-b101-2eea-b9ae-2a9e2f1bfc52@suse.com>
Date: Thu, 19 Nov 2020 11:15:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <67740969-1ab1-db7f-5e3c-b15a20c1be8b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 18:06, Julien Grall wrote:
> On 23/10/2020 11:17, Jan Beulich wrote:
>> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
>> just to avoid bloating binaries when only some arch-es and/or
>> configurations need generic library routines, combine objects under lib/
>> into an archive, which the linker then can pick the necessary objects
>> out of.
>>
>> Note that we can't use thin archives just yet, until we've raised the
>> minimum required binutils version suitably.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>>   xen/Rules.mk          | 33 +++++++++++++++++++++++++++------
>>   xen/arch/arm/Makefile |  6 +++---
>>   xen/arch/x86/Makefile |  8 ++++----
>>   xen/lib/Makefile      |  3 ++-
> 
> IIRC, Anthony worked on the build systems. If I am right, it would be 
> good to get a review from him.

I'll try to remember next time round.

>> --- a/xen/lib/Makefile
>> +++ b/xen/lib/Makefile
>> @@ -1,2 +1,3 @@
>> -obj-y += ctype.o
>>   obj-$(CONFIG_X86) += x86/
>> +
>> +lib-y += ctype.o
> 
> May I ask why all the code movement by ctype was done after this patch?

I'm sorry, but I'm afraid I don't understand the question.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:16:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30565.60673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgzn-0007Lw-Td; Thu, 19 Nov 2020 10:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30565.60673; Thu, 19 Nov 2020 10:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfgzn-0007Lp-Pe; Thu, 19 Nov 2020 10:16:55 +0000
Received: by outflank-mailman (input) for mailman id 30565;
 Thu, 19 Nov 2020 10:16:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfgzm-0007Lk-UR
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:16:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfgzl-0000AF-V9; Thu, 19 Nov 2020 10:16:53 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfgzl-0001ns-Np; Thu, 19 Nov 2020 10:16:53 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=qNJSo2G5SgZQVnKvsfBvSfQdBBs6ER1ANyOEOjh8fmM=; b=JKrERxADEAquNyH4aoXXoQZkYE
	ltunWQxlwXg7LAMMlUUQM82X+QUEFI4LzMUTY19Jy5Yx9Y9/V8p4fYEotHqeE8NqT2qBpKKxEWmfq
	BarcytXxufx1UdxqVm0tp50GGMkoiadxTrD+WjCaRWyPNNnFRBldv2T6oUbpmolcRHxc=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Jan Beulich <jbeulich@suse.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Rahul Singh <rahul.singh@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
Date: Thu, 19 Nov 2020 10:16:51 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/11/2020 09:53, Jan Beulich wrote:
> On 19.11.2020 10:21, Julien Grall wrote:
>> Hi Jan,
>>
>> On 19/11/2020 09:05, Jan Beulich wrote:
>>> On 18.11.2020 16:50, Julien Grall wrote:
>>>> On 16/11/2020 12:25, Rahul Singh wrote:
>>>>> The NS16550 driver has PCI support that is under the HAS_PCI flag. When
>>>>> HAS_PCI is enabled for ARM, a compilation error is observed because ARM
>>>>> platforms do not have full PCI support available.
>>>>    >
>>>>> Introduce a new Kconfig option, CONFIG_HAS_NS16550_PCI, to support
>>>>> ns16550 PCI for X86.
>>>>>
>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>>>> disabled by default, once we have proper support for NS16550 PCI for
>>>>> ARM we can enable it.
>>>>>
>>>>> No functional change.
>>>>
>>>> NIT: I would say "No functional change intended" to make clear this is
>>>> an expectation and hopefully will be correct :).
>>>>
>>>> Regarding the commit message itself, I would suggest the following to
>>>> address Jan's concern:
>>>
>>> While indeed this is a much better description, I continue to think
>>> that the proposed Kconfig option is undesirable to have.
>>
>> I have yet to see an argument as to why we should keep the PCI code
>> compiled on Arm when it will be of no use...
> 
> Well, see my patch suppressing building of quite a part of it.

I will let Rahul figure out whether your patch series is sufficient to
fix the compilation issues (this is what matters right now).

> 
>>> Either,
>>> following the patch I've just sent, truly x86-specific things (at
>>> least as far as current state goes - if any of this was to be
>>> re-used by a future port, suitable further abstraction may be
>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>>> hooks), or the HAS_PCI_MSI proposal would at least want further
>>> investigating as to its feasibility to address the issues at hand.
>>
>> I would be happy with CONFIG_X86, despite the fact that this is only
>> deferring the problem.
>>
>> Regarding HAS_PCI_MSI, I don't really see the point of introducing it
>> given that we are not going to use NS16550 PCI on Arm in the
>> foreseeable future.
> 
> And I continue to fail to see what would guarantee this: As soon
> as you can plug such a card into an Arm system, people will
> want to be able to use it. That's why we had to add support for it
> on x86, after all.

Well, plug-in PCI cards on Arm have been available for quite a while...
Yet I haven't heard anyone asking for NS16550 PCI support.

This is probably because an SBSA-compliant server should always provide
an SBSA UART (a cut-down version of the PL011). So why would anyone
bother to lose a PCI slot on yet another UART?
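For readers following the thread, the option under debate could look roughly like this in Kconfig terms (a sketch inferred from the discussion; the dependencies and defaults here are assumptions, not the actual patch):

```kconfig
config HAS_NS16550_PCI
	bool
	depends on HAS_PCI
	default y if X86
	help
	  Build PCI support into the ns16550 UART driver. Assumed to be
	  enabled by default on x86 and left off on Arm until the port
	  gains full PCI support.
```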

> >> So why do we need a finer-grained Kconfig?
> 
> Because most of the involved code is indeed MSI-related?

Possibly, yet it would not be necessary if we didn't want NS16550 PCI
support...

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:24:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:24:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30573.60685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfh6b-0008Mw-Kw; Thu, 19 Nov 2020 10:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30573.60685; Thu, 19 Nov 2020 10:23:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfh6b-0008Mp-Hd; Thu, 19 Nov 2020 10:23:57 +0000
Received: by outflank-mailman (input) for mailman id 30573;
 Thu, 19 Nov 2020 10:23:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfh6a-0008M6-Bu
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:23:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dde4293f-2209-413b-839a-79592bf37ed9;
 Thu, 19 Nov 2020 10:23:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF87FAC24;
 Thu, 19 Nov 2020 10:23:54 +0000 (UTC)
X-Inumbo-ID: dde4293f-2209-413b-839a-79592bf37ed9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605781435; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XxzLM2t2efGhpZWBFlj6CCBvJpZP0Obk0XqAqTGxTek=;
	b=urm+9VgSdcp8uuGoltb2fQcU+Qwm7QxlyIu7U0jxwioABxLmsDyRCTK8hynWaRgPf2Vrwi
	m+FFZM+iaOG/fEjpQnUD7DSaHdK/ZuAvKeW4M7AS29RfZLQdDmKr7y/NTbNiMOWLvXF5tc
	jW8x8JTU6seoGtD8WvvjcgIlwhMol1k=
Subject: Re: [PATCH v2 6/8] lib: move rbtree code
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <e16975d3-c34b-1b3f-743f-1abe13aa06f7@suse.com>
 <eae3d402-b1d4-6fcb-06b8-ea337a26ab50@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <df17ef4d-39c9-4a26-4aab-e17331e2be55@suse.com>
Date: Thu, 19 Nov 2020 11:23:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <eae3d402-b1d4-6fcb-06b8-ea337a26ab50@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 18:46, Julien Grall wrote:
> On 23/10/2020 11:18, Jan Beulich wrote:
>> Build this code into an archive, which results in not linking it into
>> x86 final binaries. This saves about 1.5k of dead code.
>>
>> While moving the source file, take the opportunity and drop the
>> pointless EXPORT_SYMBOL().
> 
> Given this code is borrowed from Linux, I don't think we want to remove
> it. This is to make it easier to re-sync.

That's one view on it. My view is that we should get rid of
EXPORT_SYMBOL altogether (and it is a good opportunity to do so
here). Not the least because otherwise for future cloning of Linux
code we may then need to introduce further variants of this
construct for no real purpose.

>> --- a/xen/common/rbtree.c
>> +++ b/xen/lib/rbtree.c
>> @@ -25,7 +25,7 @@
>>   #include <xen/rbtree.h>
>>   
>>   /*
>> - * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
>> + * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
> 
> This looks like a spurious change.

Not really, no - while not visible here anymore, it's correcting an
instance of trailing whitespace. Following your request on an
earlier patch, I've also added half a sentence to the description
here to mention this entirely benign change.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:27:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:27:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30580.60700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhAH-00005R-5x; Thu, 19 Nov 2020 10:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30580.60700; Thu, 19 Nov 2020 10:27:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhAH-00005K-2T; Thu, 19 Nov 2020 10:27:45 +0000
Received: by outflank-mailman (input) for mailman id 30580;
 Thu, 19 Nov 2020 10:27:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfhAF-00005E-Nk
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:27:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1369d29-50cc-4b2b-949a-99cf57d0f96d;
 Thu, 19 Nov 2020 10:27:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30D30AC23;
 Thu, 19 Nov 2020 10:27:42 +0000 (UTC)
X-Inumbo-ID: f1369d29-50cc-4b2b-949a-99cf57d0f96d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605781662; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=S7DdDUqSN/Mt0PYfsKOzA7Gon8VVTAb4tK0/kzwQR4k=;
	b=uMyecMQBwY0E58dZzVhzAySSlBpBdGJOYXUTfEA6wWUtLp0HMdrikC6lSDUgPJXDrPjac0
	NQQdGmwxRqniMcHxj59kdP5e/mh0u9Bgapf6VeS02GQudCiWu1dDpzTFYqXMGD+j7yQPPT
	ES8r79WEd06grNTFoF8APYsAuHjZShk=
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
Date: Thu, 19 Nov 2020 11:27:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 19:09, Julien Grall wrote:
> On 23/10/2020 11:19, Jan Beulich wrote:
>> --- a/xen/include/xen/compiler.h
>> +++ b/xen/include/xen/compiler.h
>> @@ -12,6 +12,7 @@
>>   
>>   #define inline        __inline__
>>   #define always_inline __inline__ __attribute__ ((__always_inline__))
>> +#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
> 
> bsearch() is only used by Arm and I haven't seen anyone so far 
> complaining about the perf of I/O emulation.
> 
> Therefore, I am not convinced that there is enough justification to 
> introduce a GNU attribute just for this patch.

Please settle this with Andrew: He had asked for the function to
become inline. I don't view making it static inline in the header
as an option here - if the compiler decides to not inline it, we
should not end up with multiple instances in different CUs. And
without making it static inline the attribute needs adding; at
least I'm unaware of an alternative which works with the various
compiler versions.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:33:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:33:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30586.60711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhFI-00014o-UR; Thu, 19 Nov 2020 10:32:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30586.60711; Thu, 19 Nov 2020 10:32:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhFI-00014h-RV; Thu, 19 Nov 2020 10:32:56 +0000
Received: by outflank-mailman (input) for mailman id 30586;
 Thu, 19 Nov 2020 10:32:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfhFH-00014c-Uf
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:32:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cde1ad4-23f5-4684-92ec-1f792b84de59;
 Thu, 19 Nov 2020 10:32:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3937FACBA;
 Thu, 19 Nov 2020 10:32:54 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id D5CBE1E130B; Thu, 19 Nov 2020 11:32:53 +0100 (CET)
X-Inumbo-ID: 3cde1ad4-23f5-4684-92ec-1f792b84de59
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 11:32:53 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 13/20] block: remove ->bd_contains
Message-ID: <20201119103253.GV1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-14-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-14-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:53, Christoph Hellwig wrote:
> Now that each hd_struct has a reference to the corresponding
> block_device, there is no need for the bd_contains pointer.  Add
> a bdev_whole() helper to look up the whole-device block_device
> structure instead.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Just two nits below. Otherwise feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

> @@ -1521,7 +1510,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
>  		if (bdev->bd_bdi == &noop_backing_dev_info)
>  			bdev->bd_bdi = bdi_get(disk->queue->backing_dev_info);
>  	} else {
> -		if (bdev->bd_contains == bdev) {
> +		if (!bdev->bd_partno) {

This should be !bdev_is_partition(bdev) for consistency, right?

>  			ret = 0;
>  			if (bdev->bd_disk->fops->open)
>  				ret = bdev->bd_disk->fops->open(bdev, mode);
...
> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index 0069bee992063e..453b940b87d8e9 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -32,7 +32,6 @@ struct block_device {
>  #ifdef CONFIG_SYSFS
>  	struct list_head	bd_holder_disks;
>  #endif
> -	struct block_device *	bd_contains;
>  	u8			bd_partno;
>  	struct hd_struct *	bd_part;
>  	/* number of times partitions within this device have been opened. */
> @@ -48,6 +47,9 @@ struct block_device {
>  	struct mutex		bd_fsfreeze_mutex;
>  } __randomize_layout;
>  
> +#define bdev_whole(_bdev) \
> +	((_bdev)->bd_disk->part0.bdev)
> +
>  #define bdev_kobj(_bdev) \
>  	(&part_to_dev((_bdev)->bd_part)->kobj)

I'd somewhat prefer if these helpers could actually be inline functions and
not macros. I guess they are macros because hd_struct isn't in blk_types.h.
But if we moved helpers to blkdev.h, we'd have all definitions we need...
Is that a problem for some users?

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:44:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30593.60723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhQQ-00029j-VG; Thu, 19 Nov 2020 10:44:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30593.60723; Thu, 19 Nov 2020 10:44:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhQQ-00029c-SP; Thu, 19 Nov 2020 10:44:26 +0000
Received: by outflank-mailman (input) for mailman id 30593;
 Thu, 19 Nov 2020 10:44:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfhQP-00029X-KR
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:44:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f966afc-7310-4ac1-94b4-2d0eb4304aba;
 Thu, 19 Nov 2020 10:44:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19745ACBA;
 Thu, 19 Nov 2020 10:44:23 +0000 (UTC)
X-Inumbo-ID: 3f966afc-7310-4ac1-94b4-2d0eb4304aba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605782663; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=us09E1uDgSD5oB0WgVAYFdVn4vjHFq+Sxej+KHinsUE=;
	b=BBOXlw+CPKXQE+kXujmWCHYYuds27F5djntIlfgWoKBgWTO+y3mAvS3lDoYj4bGnLJdz5T
	ybPT4isdRFHm1k5DEWuxBOJhuox+BCw+I1x4E/cAUuMt4FFP7zGsfBWNOiO/C4BS0ceBmu
	6Cv9KyfMrKoQ6Wy7kv8YDw8DG/OkNHc=
Subject: Re: [PATCH v2 2/8] lib: collect library files in an archive
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
 <8ff6d14d-fb3f-79d0-888a-3da2b68d1aad@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36e5b417-bd6c-8e94-9b95-aec7a5dc6e31@suse.com>
Date: Thu, 19 Nov 2020 11:44:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <8ff6d14d-fb3f-79d0-888a-3da2b68d1aad@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.11.2020 18:31, Julien Grall wrote:
> On 23/10/2020 11:17, Jan Beulich wrote:
>> In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
>> just to avoid bloating binaries when only some arch-es and/or
>> configurations need generic library routines, combine objects under lib/
>> into an archive, which the linker then can pick the necessary objects
>> out of.
>>
>> Note that we can't use thin archives just yet, until we've raised the
>> minimum required binutils version suitably.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>>   xen/Rules.mk          | 33 +++++++++++++++++++++++++++------
>>   xen/arch/arm/Makefile |  6 +++---
>>   xen/arch/x86/Makefile |  8 ++++----
>>   xen/lib/Makefile      |  3 ++-
>>   4 files changed, 36 insertions(+), 14 deletions(-)
> 
> I can't build Xen on Arm after this patch:
> 
>    AR      lib.a
> aarch64-linux-gnu-ld    -EL  -r -o built_in.o
> aarch64-linux-gnu-ld: no input files
> /home/ANT.AMAZON.COM/jgrall/works/oss/xen/xen/Rules.mk:154: recipe for 
> target 'built_in.o' failed

Oh, indeed, this triggers a pre-existing bug. I didn't notice it
because for Arm I build-tested only the series as a whole. Will
add a fix for the issue as a prereq patch.

Thanks for noticing,
Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:46:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30598.60735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhSW-0002I5-Cr; Thu, 19 Nov 2020 10:46:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30598.60735; Thu, 19 Nov 2020 10:46:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhSW-0002Hy-9x; Thu, 19 Nov 2020 10:46:36 +0000
Received: by outflank-mailman (input) for mailman id 30598;
 Thu, 19 Nov 2020 10:46:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhSU-0002Ho-HE; Thu, 19 Nov 2020 10:46:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhSU-0000lm-B8; Thu, 19 Nov 2020 10:46:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhST-00023w-WC; Thu, 19 Nov 2020 10:46:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhST-0007wC-Vi; Thu, 19 Nov 2020 10:46:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gyDbOG6+ZReM+ntGWFw6ztlaQTG1KBhKleNYITfTWBU=; b=MjCg+z9NnvEbgl7fDJvIZovAq4
	mg/D675q+wTrEJM/ZWx4IFyAn+RN4X9PQ/KLEj5+bzmcxSbKnUlm6v9tpPs+zJK6CKShwgxt8XBRr
	T6Ng0/bLZSjM3WSatXyDpl8zXtN7k0iKeiufCn5ObX6scDD2ar75CNG0F2dJl9HnWedY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156870-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156870: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=126cb34a206a44f04e364700b46426dff9f387d5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 10:46:33 +0000

flight 156870 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              126cb34a206a44f04e364700b46426dff9f387d5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  132 days
Failing since        151818  2020-07-11 04:18:52 Z  131 days  126 attempts
Testing same since   156870  2020-11-19 04:20:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 27961 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 10:59:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 10:59:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30610.60751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhf5-0003UH-K7; Thu, 19 Nov 2020 10:59:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30610.60751; Thu, 19 Nov 2020 10:59:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhf5-0003UA-GB; Thu, 19 Nov 2020 10:59:35 +0000
Received: by outflank-mailman (input) for mailman id 30610;
 Thu, 19 Nov 2020 10:59:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfhf3-0003U2-Ht
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:59:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfhf2-00012H-4F; Thu, 19 Nov 2020 10:59:32 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfhf1-0005eh-O1; Thu, 19 Nov 2020 10:59:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfhf3-0003U2-Ht
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 10:59:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=63HXOrw55m7fSuT5mv2XlypB1wFoU1pivjgY0UAQ+KU=; b=P0vvHCgCCfYXAoNQaaTy+k5nRF
	5s3aLqyo13i9Sx15gHo8amKpd8dr1tY9r2OwzLHVuWGNXl0d7HuO4DJrA668bGdZ31hglaJjiQ/R7
	94ys5+EkVBtoGMkflbIh9vwcIopyx7RGk/5KsLfb3WsQYVfXasE95WN4UE2IZ77aPXsg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfhf2-00012H-4F; Thu, 19 Nov 2020 10:59:32 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfhf1-0005eh-O1; Thu, 19 Nov 2020 10:59:31 +0000
Subject: Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
 <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
 <9adc7ec2-c014-d9ae-a8b5-5b942640386c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e115ce52-3c5c-6530-dd3a-bc7f268ef224@xen.org>
Date: Thu, 19 Nov 2020 10:59:29 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <9adc7ec2-c014-d9ae-a8b5-5b942640386c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 19/11/2020 09:46, Jan Beulich wrote:
> On 18.11.2020 20:10, Julien Grall wrote:
>> On 06/11/2020 07:11, Jan Beulich wrote:
>>> All of the array allocations in grant_table_init() can exceed a page's
>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>>> for after boot.
>>
>> I can see a few reasons why they are not suitable:
>>     - xmalloc() will try using an order and then throw away the excess
>> memory. This is pretty inefficient.
> 
> But addressing this inefficiency, while a nice side effect, is
> not the goal here.
> 
>>     - xmalloc() will allocate physically contiguous memory
> 
> This aspect matters here only indirectly: What we care about
> avoiding are runtime allocations of non-zero order. The assumption
> of how I worded the description is that during boot non-zero-
> order allocations can typically be fulfilled and hence aren't a
> (general) problem.
Well... In the case of the grant table, if you can't find a small order 
of physically contiguous pages, then you have bigger trouble on your 
platform. You will either not have enough space for allocating the 
domain memory, or the performance will be awful because only 4K pages 
are used.

So while I agree that having xvmalloc() is a good move, I am not 
convinced of your argument regarding the boot vs runtime.

I think a better argument is that the allocation doesn't need to be 
physically contiguous in memory, so we had better avoid contiguity when 
we can.

> 
>> It would be good to clarify which one you refer because none of them are
>> really a problem only after boot...
> 
> Given the above, I'm not sure in which way you would see this be
> clarified. Any suggestions welcome.
> 
>> One thing to be aware of, though, is that xv*() are going to be more
>> inefficient because they involve touching the page-tables (at least
>> until the work to map the direct map on demand is merged). In addition,
>> on Arm, they will also use only 4K mappings (I have a TODO to fix that).
>>
>> So I think we will need to be careful about when to use xmalloc() vs
>> xvmalloc(). It might be worth outlining that in the documentation of xv*().
> 
> The rule is quite simple and the inefficiencies you mention
> shouldn't matter imo: Post-boot there should not be any
> implicit allocations of non-zero order. "Implicit" here meaning
> to still permit e.g. direct alloc_domheap_pages() invocations,
> making apparent at the call site that the aspect of memory
> fragmentation was (hopefully) taken into consideration. I'm
> actually inclined to suggest (down the road) to have _xmalloc()
> no longer fall back to multi-page allocations post-boot, but
> instead return NULL.

One advantage of xmalloc() is that it is able to allocate from a 
suitable xenheap area. So it will not touch the page-tables and is 
therefore useful for short-lived allocations, as the overhead will be 
more limited compared to xvmalloc().

There is also the problem that alloc_{dom,xen}heap_pages() work using an 
order. xmalloc() is handy because it will give back the unnecessary pages.

Maybe we should consider a version of alloc_*heap_pages() that takes the 
number of pages rather than an order.

> 
> If you think it is really needed, I can add something like "These
> should be used in preference to xmalloc() et al whenever the size
> is not known to be constrained to at most a single page" to the
> comment at the top of the new header file.

There are quite a few users of xmalloc() with large allocations. Yet, 
they would not be suitable for xvmalloc() because they require physically 
contiguous memory. So I think you would want to mention that in the 
sentence.

> Where the inefficiencies you mention would imo matter is in
> (future) decisions whether to use vmalloc() et al vs xvmalloc()
> et al: If the size _may_ be no more than a page, the latter may
> want preferring.
I am not sure I understand this... why would we want to keep vmalloc() 
exposed when xvmalloc() will be calling it for allocations over PAGE_SIZE?

> 
>>> --- a/xen/common/vmap.c
>>> +++ b/xen/common/vmap.c
>>> @@ -7,6 +7,7 @@
>>>    #include <xen/spinlock.h>
>>>    #include <xen/types.h>
>>>    #include <xen/vmap.h>
>>> +#include <xen/xvmalloc.h>
>>>    #include <asm/page.h>
>>>    
>>>    static DEFINE_SPINLOCK(vm_lock);
>>> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>>>        return p;
>>>    }
>>>    
>>> -void vfree(void *va)
>>> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
>>
>> I don't think "unsigned int" is sufficient to cover big sizes. AFAICT,
>> this is not a new problem in this code and seems to be a latent issue
>> so far.
>>
>> However, I feel that it is wrong to introduce a new set of allocation
>> helpers that contain a flaw fixed in xm*alloc() recently (see commit
>> cf38b4926e2b "xmalloc: guard against integer overflow").
> 
> For _xmalloc() we're talking about bytes (and the guarding you
> refer to is actually orthogonal to the limiting done by the
> page allocator, as follows from the description of that change).
> Here we're talking about pages. I hope it will be decades until we
> have to consider allocating 16Tb all in one chunk (and we'd need
> to have large enough vmap virtual address space set aside first).

I think you misunderstood my point here. I am not suggesting that a 
normal user would ask to allocate 16TB, but that a caller may pass an 
unsanitized value to the xv*() functions by mistake.

IIRC, the overflow checks in xm*() were added after we discovered that 
some callers were passing unsanitized values.

I would expect the xv*() functions to be used more in the future, so I 
think it would be unwise not to guard against overflow.

I would be happy with just checking that nr always fits in a 32-bit value.

> Also note that
> - the entire vmap.c right now uses unsigned int for page counts,
>    so it would be outright inconsistent to use unsigned long here,

I didn't suggest this would be the only place (note the "not a new 
problem" above). This was the best place I could find to mention an 
existing problem that is widened by the introduction of the xv*() helpers.

> - at least on x86 passing around 64-bit function arguments is
>    slightly less efficient than 32-bit ones, and hence I'd prefer
>    to keep it like this.

Don't you have 64-bit registers on x86-64?

But I am really surprised this is a concern to you when all the 
functions in this code will modify the page tables. You dismissed this 
overhead in the same e-mail...

> 
>>> --- /dev/null
>>> +++ b/xen/include/xen/xvmalloc.h
>>> @@ -0,0 +1,70 @@
>>> +
>>> +#ifndef __XVMALLOC_H__
>>> +#define __XVMALLOC_H__
>>> +
>>> +#include <xen/cache.h>
>>> +#include <xen/types.h>
>>> +
>>> +/*
>>> + * Xen malloc/free-style interface.
>>
>> It would be useful to emphase that they should only be used if the
>> caller does *not* need physically contiguous memory.
> 
> Actually first of all I shouldn't have copied to comment without
> editing. I've now made it
> 
> /*
>   * Xen malloc/free-style interface preferable for allocations possibly
>   * exceeding a page's worth of memory, as long as there's no need to have
>   * physically contiguous memory allocated.
>   */
> 
> albeit I'm not sure the "physically discontiguous" really needs
> pointing out, considering the 'v' in the function names.

Verbosity never hurts. I would not say the same about figuring out what 
'v' means.

I am not ashamed to say that when I began working on Linux/Xen, I had 
some trouble figuring out the difference between kmalloc() and vmalloc().

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 11:10:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 11:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30617.60762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhp4-0004en-O2; Thu, 19 Nov 2020 11:09:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30617.60762; Thu, 19 Nov 2020 11:09:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfhp4-0004eg-Kl; Thu, 19 Nov 2020 11:09:54 +0000
Received: by outflank-mailman (input) for mailman id 30617;
 Thu, 19 Nov 2020 11:09:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhp4-0004eY-5D; Thu, 19 Nov 2020 11:09:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhp3-0001Ge-Si; Thu, 19 Nov 2020 11:09:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhp3-0002r9-J1; Thu, 19 Nov 2020 11:09:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfhp3-0002q1-IX; Thu, 19 Nov 2020 11:09:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfhp4-0004eY-5D; Thu, 19 Nov 2020 11:09:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NHMX4PLwTs0p5aObEsly9b1dYOAs0ZUrSbgAnE9cawk=; b=DVntDvEqlT41CTgi2n/MyZjgW9
	+ZryyWtwRiUZsptOp5yMRy7QjSMW+UlYA5aPPAqvZVHY6xO8oUNiO3XnAU2uX0geSRc47QSi09XcE
	KIOJVQCT2c4KuDGf/1OSOVoNJAhQV6r7yrCMibG1LSwxoQrtlTvErbnDgmH9ikPxvsI4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfhp3-0001Ge-Si; Thu, 19 Nov 2020 11:09:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfhp3-0002r9-J1; Thu, 19 Nov 2020 11:09:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfhp3-0002q1-IX; Thu, 19 Nov 2020 11:09:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156866-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156866: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    qemu-mainline:test-arm64-arm64-xl:xen-boot:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3d275bd17c7bdf5acd4e4f58fa7b8b9114bb7484
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 11:09:53 +0000

flight 156866 qemu-mainline real [real]
flight 156877 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156866/
http://logs.test-lab.xenproject.org/osstest/logs/156877/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152631
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3d275bd17c7bdf5acd4e4f58fa7b8b9114bb7484
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   91 days
Failing since        152659  2020-08-21 14:07:39 Z   89 days  191 attempts
Testing same since   156866  2020-11-18 23:09:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67076 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 11:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 11:30:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30630.60778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfi8j-0007hA-HI; Thu, 19 Nov 2020 11:30:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30630.60778; Thu, 19 Nov 2020 11:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfi8j-0007gn-DW; Thu, 19 Nov 2020 11:30:13 +0000
Received: by outflank-mailman (input) for mailman id 30630;
 Thu, 19 Nov 2020 11:30:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfi8i-0007YO-Bp; Thu, 19 Nov 2020 11:30:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfi8h-0001fF-NB; Thu, 19 Nov 2020 11:30:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfi8h-0003Yl-FC; Thu, 19 Nov 2020 11:30:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfi8h-0006vq-Ej; Thu, 19 Nov 2020 11:30:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Y82YQyZ5yzOBjku67SO0QP4Zm3nxp2wNFrT5ujaO9Kg=; b=NKuU1b0XKhbL+H8R9/VqMR/0ER
	yy4cYXyplylfz3IZoXUNay1oZDmg1m4EWaFcJeMWGczD+5O5GFi8qMRvyuMUeXaSGvr+zdkH3S+Et
	/MT6lhUa1DvE7S0mGVVyYYnWKpLq8yt32G6VnkMkbVUzc8Sv4y6s/jjGPCT+LCUmcgIA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156869-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156869: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6c4efc050974812d6ebee1cea711e3c81e4e4442
X-Osstest-Versions-That:
    ovmf=404250c8f77d09077321766602c3118cec7f6ecd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 11:30:11 +0000

flight 156869 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156869/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6c4efc050974812d6ebee1cea711e3c81e4e4442
baseline version:
 ovmf                 404250c8f77d09077321766602c3118cec7f6ecd

Last test of basis   156849  2020-11-18 05:10:58 Z    1 days
Testing same since   156869  2020-11-19 02:39:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Mingyue Liang <mingyuex.liang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   404250c8f7..6c4efc0509  6c4efc050974812d6ebee1cea711e3c81e4e4442 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 12:05:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 12:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30647.60792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfigs-0002ct-KT; Thu, 19 Nov 2020 12:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30647.60792; Thu, 19 Nov 2020 12:05:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfigs-0002cm-HU; Thu, 19 Nov 2020 12:05:30 +0000
Received: by outflank-mailman (input) for mailman id 30647;
 Thu, 19 Nov 2020 12:05:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfigr-0002ch-Ge
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 12:05:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8a9b119-fa2b-42f8-98e9-16b210096692;
 Thu, 19 Nov 2020 12:05:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1DC2AA4F;
 Thu, 19 Nov 2020 12:05:25 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 4E3FB1E130B; Thu, 19 Nov 2020 13:05:25 +0100 (CET)
X-Inumbo-ID: d8a9b119-fa2b-42f8-98e9-16b210096692
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 13:05:25 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201119120525.GW1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-15-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-15-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:54, Christoph Hellwig wrote:
> Now that the hd_struct always has a block device attached to it, there is
> no need for having two size fields that just get out of sync.
> 
> Additionally, the field in hd_struct did not use proper serialization,
> possibly allowing for torn writes.  By only using the block_device field
> this problem also gets fixed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Overall the patch looks good but I have a couple of comments below.

> diff --git a/block/bio.c b/block/bio.c
> index fa01bef35bb1fe..0c5269997434d6 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -613,7 +613,7 @@ void guard_bio_eod(struct bio *bio)
>  	rcu_read_lock();
>  	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
>  	if (part)
> -		maxsector = part_nr_sects_read(part);
> +		maxsector = bdev_nr_sectors(part->bdev);
>  	else
>  		maxsector = get_capacity(bio->bi_disk);

I have to say that after these changes I find it a bit confusing that we
have get/set_capacity() and bdev_nr_sectors() / bdev_set_nr_sectors() and
they are all the same thing (i_size of the bdev). Is there a reason for the
distinction?

> diff --git a/block/genhd.c b/block/genhd.c
> index 94de95287a6370..e101b6843f7437 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -38,6 +38,16 @@ static void disk_add_events(struct gendisk *disk);
>  static void disk_del_events(struct gendisk *disk);
>  static void disk_release_events(struct gendisk *disk);
>  
> +void set_capacity(struct gendisk *disk, sector_t sectors)
> +{
> +	struct block_device *bdev = disk->part0.bdev;
> +
> +	spin_lock(&bdev->bd_size_lock);
> +	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
> +	spin_unlock(&bdev->bd_size_lock);

AFAICT bd_size_lock is pointless after these changes so we can just remove
it?

> +}
> +EXPORT_SYMBOL(set_capacity);
> +
>  /*
>   * Set disk capacity and notify if the size is not currently zero and will not
>   * be set to zero.  Returns true if a uevent was sent, otherwise false.
> @@ -47,11 +57,12 @@ bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
>  	sector_t capacity = get_capacity(disk);
>  
>  	set_capacity(disk, size);
> -	revalidate_disk_size(disk, true);
>  
>  	if (capacity != size && capacity != 0 && size != 0) {
>  		char *envp[] = { "RESIZE=1", NULL };
>  
> +		pr_info("%s: detected capacity change from %lld to %lld\n",
> +		       disk->disk_name, size, capacity);

So we are now missing above message for transitions from / to 0 capacity?
Is there any other notification in the kernel log when e.g. media is
inserted into a CD-ROM drive? I remember using these messages for detecting
that...

Also what about GENHD_FL_HIDDEN devices? Are we sure we never set capacity
for them?

>  		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
>  		return true;
>  	}

...

> @@ -983,7 +994,7 @@ void __init printk_all_partitions(void)
>  
>  			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
>  			       bdevt_str(part_devt(part), devt_buf),
> -			       (unsigned long long)part_nr_sects_read(part) >> 1
> +			       bdev_nr_sectors(part->bdev) >> 1
>  			       , disk_name(disk, part->partno, name_buf),
>  			       part->info ? part->info->uuid : "");
>  			if (is_part0) {
> @@ -1076,7 +1087,7 @@ static int show_partition(struct seq_file *seqf, void *v)
>  	while ((part = disk_part_iter_next(&piter)))
>  		seq_printf(seqf, "%4d  %7d %10llu %s\n",
>  			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
> -			   (unsigned long long)part_nr_sects_read(part) >> 1,
> +			   bdev_nr_sectors(part->bdev) >> 1,
>  			   disk_name(sgp, part->partno, buf));
>  	disk_part_iter_exit(&piter);
>  
> @@ -1158,8 +1169,7 @@ ssize_t part_size_show(struct device *dev,
>  {
>  	struct hd_struct *p = dev_to_part(dev);
>  
> -	return sprintf(buf, "%llu\n",
> -		(unsigned long long)part_nr_sects_read(p));
> +	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));

Is sector_t really guaranteed to be unsigned long long?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 12:14:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 12:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30659.60811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfioy-0003kk-Jx; Thu, 19 Nov 2020 12:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30659.60811; Thu, 19 Nov 2020 12:13:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfioy-0003kd-GX; Thu, 19 Nov 2020 12:13:52 +0000
Received: by outflank-mailman (input) for mailman id 30659;
 Thu, 19 Nov 2020 12:13:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfiox-0003kY-Df
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 12:13:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfiow-0002Z8-SK; Thu, 19 Nov 2020 12:13:50 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfiow-0001xd-EF; Thu, 19 Nov 2020 12:13:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=rWPk71Vp8kXcGTXjihT8/dmwuZbriucRIZWicAo1F0g=; b=MwshuQS2Vhr4Er725RWGSPwn2L
	C0ovpV2elBSsahLOX7tuqcqnpgeTxZJgNtW7q907aGuQ8wssosi83PSu/HbKQvhaAALx55znO282N
	foyVG3cWMaAO5+/QOto+jLgK3fHoxOjBmJUs5SSfg4xTBNm5iLAp5ucIKykrBe/QHM74=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 0/2] xen/arm: Allow Xen to boot with ACPI 5.1
Date: Thu, 19 Nov 2020 12:13:45 +0000
Message-Id: <20201119121347.27139-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series was originally called "xen/arm: Unbreak ACPI". It was
renamed because only the part allowing boot with ACPI 5.1 remains
unmerged.

With the two changes, it is possible to boot Xen on QEMU with ACPI 5.1.

Cheers,

Julien Grall (2):
  xen/arm: gic: acpi: Use the correct length for the GICC structure
  xen/arm: acpi: Allow Xen to boot with ACPI 5.1

 xen/arch/arm/acpi/boot.c | 6 +++---
 xen/arch/arm/gic-v2.c    | 5 +++--
 xen/arch/arm/gic-v3.c    | 6 +++---
 xen/arch/arm/gic.c       | 2 +-
 4 files changed, 10 insertions(+), 9 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 12:14:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 12:14:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30661.60835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfip2-0003nY-5D; Thu, 19 Nov 2020 12:13:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30661.60835; Thu, 19 Nov 2020 12:13:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfip2-0003nQ-1S; Thu, 19 Nov 2020 12:13:56 +0000
Received: by outflank-mailman (input) for mailman id 30661;
 Thu, 19 Nov 2020 12:13:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfip0-0003mV-H2
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 12:13:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfip0-0002ZO-31; Thu, 19 Nov 2020 12:13:54 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfioz-0001xd-PV; Thu, 19 Nov 2020 12:13:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=mZpQmkQLUUO1hccs/83l4shhsGEQrqSCrng3B/PMRp8=; b=XLY8g2zOlxlwMzPy8nLAAtYc7
	bZZ2/ui0Ya1sVIL3ejNEJZC7gp/lHh4z8elebFs7GiFvwqwkqajEMQ6Bx9nE0JqvPxf4KOZ0Sk/71
	LV2itXj8wtvEP99t/1qcjZS55IsaHiwARdWWVsPvhqwtNBu6xz91Of00TGolfIDJw5IaU=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 2/2] xen/arm: acpi: Allow Xen to boot with ACPI 5.1
Date: Thu, 19 Nov 2020 12:13:47 +0000
Message-Id: <20201119121347.27139-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119121347.27139-1-julien@xen.org>
References: <20201119121347.27139-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

At the moment Xen requires the FADT ACPI table to be at least version
6.0, apparently because of some reliance on other ACPI v6.0 features.

But actually this is overzealous, and Xen now works fine with ACPI v5.1.

Let's relax the version check for the FADT table to allow QEMU to
run the hypervisor with ACPI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - Add Stefano's acked-by

    Changes in v2:
        - Patch added
---
 xen/arch/arm/acpi/boot.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 55c3e5cbc834..7ea2990cb82c 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -181,8 +181,8 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
      * we only deal with ACPI 6.0 or newer revision to get GIC and SMP
      * boot protocol configuration data, or we will disable ACPI.
      */
-    if ( table->revision > 6
-         || (table->revision == 6 && fadt->minor_revision >= 0) )
+    if ( table->revision > 5
+         || (table->revision == 5 && fadt->minor_revision >= 1) )
         return 0;
 
     printk("Unsupported FADT revision %d.%d, should be 6.0+, will disable ACPI\n",
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 12:14:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 12:14:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30660.60823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfioz-0003lh-RQ; Thu, 19 Nov 2020 12:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30660.60823; Thu, 19 Nov 2020 12:13:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfioz-0003la-OM; Thu, 19 Nov 2020 12:13:53 +0000
Received: by outflank-mailman (input) for mailman id 30660;
 Thu, 19 Nov 2020 12:13:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfioy-0003lG-SN
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 12:13:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfioy-0002ZF-F6; Thu, 19 Nov 2020 12:13:52 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfioy-0001xd-4G; Thu, 19 Nov 2020 12:13:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=gcyki/lXOeT2O/oPW4KFyaFMj0CKrIoTPfhPSeOmQZM=; b=LUeO4RKmfJJZNsaXIy1IgkFog
	i8ER5jdKZZEjb4CcDGsbbFow3Jk22pvO0/IA9Vjru58LHksXM/aYz8HHIb6aGJUZNXfCuEOSPoPAG
	o95dMgAZ3iBsCxz32+LVuei2Zjmja+W3shQFzaQTZkTkXdTJHFVVvCShkGhF8nDscvj0Q=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 1/2] xen/arm: gic: acpi: Use the correct length for the GICC structure
Date: Thu, 19 Nov 2020 12:13:46 +0000
Message-Id: <20201119121347.27139-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119121347.27139-1-julien@xen.org>
References: <20201119121347.27139-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

The length of the GICC structure in the MADT ACPI table differs between
versions 5.1 and 6.0, although there are no other relevant differences.

Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
overcome this issue.
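The length check being relied on can be sketched as follows (the constants
match the GICC entry sizes defined by ACPI 5.1 and 6.0; the names are
illustrative, not the actual Xen macros):

```c
#include <assert.h>
#include <stdint.h>

/* GICC entry lengths as defined by the ACPI MADT: 76 bytes in 5.1,
 * 80 bytes in 6.0 (macro names here are illustrative). */
#define GICC_LEN_ACPI_5_1 76u
#define GICC_LEN_ACPI_6_0 80u

/* A BAD_MADT_GICC_ENTRY-style check: accept either known length, and
 * make sure the entry does not run past the end of the table. */
static int bad_gicc_entry(uint64_t entry, uint8_t len, uint64_t end)
{
    if (len != GICC_LEN_ACPI_5_1 && len != GICC_LEN_ACPI_6_0)
        return 1;
    return entry + len > end;
}
```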

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v3:
        - Update the commit title as we also modify GICv3 code
        - Use the correct length in more places

    Changes in v2:
        - Patch added
---
 xen/arch/arm/acpi/boot.c | 2 +-
 xen/arch/arm/gic-v2.c    | 5 +++--
 xen/arch/arm/gic-v3.c    | 6 +++---
 xen/arch/arm/gic.c       | 2 +-
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 30e4bd1bc5a7..55c3e5cbc834 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     acpi_table_print_madt_entry(header);
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 0f747538dbcd..0e5f23201974 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
 
     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
                              header);
-    size = sizeof(struct acpi_madt_generic_interrupt);
+
+    size = ACPI_MADT_GICC_LENGTH;
     /* Add Generic Interrupt */
     for ( i = 0; i < d->max_vcpus; i++ )
     {
@@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     /* Read from APIC table and fill up the GIC variables */
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 0f6cbf6224e9..483ec1598f37 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1499,7 +1499,7 @@ static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
 
     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
                              header);
-    size = sizeof(struct acpi_madt_generic_interrupt);
+    size = ACPI_MADT_GICC_LENGTH;
     for ( i = 0; i < d->max_vcpus; i++ )
     {
         gicc = (struct acpi_madt_generic_interrupt *)(base_ptr + table_len);
@@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     /* Read from APIC table and fill up the GIC variables */
@@ -1628,7 +1628,7 @@ gic_acpi_get_madt_cpu_num(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *cpuif;
 
     cpuif = (struct acpi_madt_generic_interrupt *)header;
-    if ( BAD_MADT_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
+    if ( BAD_MADT_GICC_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
         return -EINVAL;
 
     return 0;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index d623c57cb9fa..b100c85ef314 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -453,7 +453,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
     unsigned long madt_size;
 
     madt_size = sizeof(struct acpi_table_madt)
-                + sizeof(struct acpi_madt_generic_interrupt) * d->max_vcpus
+                + ACPI_MADT_GICC_LENGTH * d->max_vcpus
                 + sizeof(struct acpi_madt_generic_distributor)
                 + gic_hw_ops->get_hwdom_extra_madt_size(d);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 12:33:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 12:33:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30680.60847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfj7r-00067Y-Oo; Thu, 19 Nov 2020 12:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30680.60847; Thu, 19 Nov 2020 12:33:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfj7r-00067R-LZ; Thu, 19 Nov 2020 12:33:23 +0000
Received: by outflank-mailman (input) for mailman id 30680;
 Thu, 19 Nov 2020 12:33:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfj7q-00067M-Kh
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 12:33:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13fd40f5-7cbf-4374-9f64-e4dbc870f694;
 Thu, 19 Nov 2020 12:33:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A089AF10;
 Thu, 19 Nov 2020 12:33:20 +0000 (UTC)
X-Inumbo-ID: 13fd40f5-7cbf-4374-9f64-e4dbc870f694
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605789200; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nzQyigJAAb0SQRoz/a8LAfsNCTmesqa2tR+BS4h2I1E=;
	b=tU7K8jg7R4NIfFhDEYHwmGFYhjc/uW1Q2nVSHFIcX3Wy+R+Ubtbj3tM2raVVpOClpOFYmk
	X0BQPKV0yB/g5Feef1BQ/3K22rhbqemcWfP+AJHkIJW05BfTm8ddLnyMx32BFa9dow1KE6
	Ndv4iyo+qqrZhpehvNun+rfV7dB07fI=
Subject: Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
 <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
 <9adc7ec2-c014-d9ae-a8b5-5b942640386c@suse.com>
 <e115ce52-3c5c-6530-dd3a-bc7f268ef224@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a00f3c98-04c1-4380-dc62-22d001edae1d@suse.com>
Date: Thu, 19 Nov 2020 13:33:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <e115ce52-3c5c-6530-dd3a-bc7f268ef224@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 11:59, Julien Grall wrote:
> On 19/11/2020 09:46, Jan Beulich wrote:
>> On 18.11.2020 20:10, Julien Grall wrote:
>>> On 06/11/2020 07:11, Jan Beulich wrote:
>>>> All of the array allocations in grant_table_init() can exceed a page's
>>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>>>> for after boot.
>>>
>>> I can see a few reasons why they are not suitable:
>>>     - xmalloc() will try using an order and then throw away memory. This is
>>> pretty inefficient.
>>
>> But addressing this inefficiency, while a nice side effect, is
>> not the goal here.
>>
>>>     - xmalloc() will allocate physically contiguous memory
>>
>> This aspect matters here only indirectly: What we care about
>> avoiding are runtime allocations of non-zero order. The assumption
>> of how I worded the description is that during boot non-zero-
>> order allocations can typically be fulfilled and hence aren't a
>> (general) problem.
> Well... In the case of the grant table, if you can't find a small order 
> of physically contiguous pages then you have bigger trouble on your 
> platform. You will either not have enough space for allocating the 
> domain memory, or the performance will be awful because only 4K pages 
> are used.

Performance will be affected, yes, but I'm not sure I'd call this
"awful". 

> So while I agree that having xvmalloc() is a good move, I am not 
> convinced of your argument regarding the boot vs runtime.
> 
> I think a better argument is the allocation doesn't need to be 
> physically contiguous in memory. So better avoid it when we can.

Well... I've added a sentence.

>>> It would be good to clarify which one you refer because none of them are
>>> really a problem only after boot...
>>
>> Given the above, I'm not sure in which way you would see this be
>> clarified. Any suggestions welcome.
>>
>>> One thing to be aware of, though, is that xv*() are going to be more
>>> inefficient because they involve touching the page-tables (at least until
>>> the work to map the direct map on demand is merged). In addition, on Arm,
>>> they will also use only 4K mappings (I have a TODO to fix that).
>>>
>>> So I think we will need to be careful when to use xmalloc() vs
>>> xvalloc(). It might be worth outlining that in the documentation of xv*().
>>
>> The rule is quite simple and the inefficiencies you mention
>> shouldn't matter imo: Post-boot there should not be any
>> implicit allocations of non-zero order. "Implicit" here meaning
>> to still permit e.g. direct alloc_domheap_pages() invocations,
>> making apparent at the call site that the aspect of memory
>> fragmentation was (hopefully) taken into consideration. I'm
>> actually inclined to suggest (down the road) to have _xmalloc()
>> no longer fall back to multi-page allocations post-boot, but
>> instead return NULL.
> 
> One advantage of xmalloc() is that it is able to allocate a suitable
> xenheap area. So it will not touch the page-tables and is therefore useful
> for short-lived allocations, as the overhead will be more limited compared
> to xvalloc().

Yet it still shouldn't be used post-boot when the size may exceed
system page size, to avoid reporting -ENOMEM or the like when really
there's ample but fragmented memory available.

> There is also the problem that alloc_{dom, xen}heap_pages() works using 
> order. xmalloc() is handy because it will give back the unnecessary pages.
> 
> Maybe we should consider a version of alloc_*heap_pages() that will take 
> the number of pages rather than order.

Possibly, I've indeed been wondering more than once whether we should.
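(A sketch of the difference being discussed: an order-based allocator has to
round the request up to a power of two and hand back the tail, whereas a
page-count interface would not. Helper names here are mine:)

```c
#include <assert.h>

/* Smallest order (power of two) covering nr_pages, as an order-based
 * allocator like alloc_domheap_pages() requires. */
static unsigned int get_order(unsigned long nr_pages)
{
    unsigned int order = 0;

    while ((1UL << order) < nr_pages)
        order++;
    return order;
}

/* Pages the caller has to give back (as xmalloc() does today) when the
 * allocator only deals in whole orders. */
static unsigned long excess_pages(unsigned long nr_pages)
{
    return (1UL << get_order(nr_pages)) - nr_pages;
}
```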

>> If you think it is really needed, I can add something like "These
>> should be used in preference to xmalloc() et al whenever the size
>> is not known to be constrained to at most a single page" to the
>> comment at the top of the new header file.
> 
> There are quite a few users of xmalloc() with large allocations. Yet, 
> they would not be suitable for xvalloc() because they require physically 
> contiguous memory.

Isn't this a mistake? I.e. am I unaware of some comment saying that
xmalloc() actually guarantees to return physically contiguous memory?
It's certainly against the "Xen malloc/free-style interface" comment
at the top of the header.

It was my understanding so far that with the removal of the direct-
map this property would go away anyway.

> So I think you would want to mention that in the 
> sentence.

Well, as you've seen further down the comment already mentions that
aspect.

>> Where the inefficiencies you mention would imo matter is in
>> (future) decisions whether to use vmalloc() et al vs xvmalloc()
>> et al: If the size _may_ be no more than a page, the latter may
>> want preferring.
> I am not sure I understand this... why would we want to keep vmalloc()
> extern when xvalloc() will be calling it for allocations over a PAGE_SIZE?

Why force callers knowing they'll allocate more than a page to go
through the extra layer? If that was the plan, then we wouldn't
need a set of new functions, but could instead tweak vmalloc() etc.
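(To illustrate the layering question: xvmalloc() is essentially a size-based
dispatch, so a caller that knows it exceeds a page could go to vmalloc()
directly. The allocator stand-ins below are hypothetical, malloc-backed for
the sketch:)

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Hypothetical stand-ins for Xen's allocators, malloc-backed here. */
static void *fake_xmalloc(size_t bytes) { return malloc(bytes); }
static void *fake_vmalloc(size_t bytes) { return malloc(bytes); }

/* The xvmalloc() dispatch as a sketch: small requests take the xmalloc()
 * path (no page-table work), larger ones the vmalloc() path. */
static void *xvmalloc_sketch(size_t bytes)
{
    if (bytes <= PAGE_SIZE)
        return fake_xmalloc(bytes);
    return fake_vmalloc(bytes);
}
```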

>>>> --- a/xen/common/vmap.c
>>>> +++ b/xen/common/vmap.c
>>>> @@ -7,6 +7,7 @@
>>>>    #include <xen/spinlock.h>
>>>>    #include <xen/types.h>
>>>>    #include <xen/vmap.h>
>>>> +#include <xen/xvmalloc.h>
>>>>    #include <asm/page.h>
>>>>    
>>>>    static DEFINE_SPINLOCK(vm_lock);
>>>> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>>>>        return p;
>>>>    }
>>>>    
>>>> -void vfree(void *va)
>>>> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
>>>
>>> I don't think "unsigned int" is sufficient to cover big sizes. AFAICT,
>>> this is not a new problem in this code and seems to be a latent issue
>>> so far.
>>>
>>> However, I feel that it is wrong to introduce a new set of allocation
>>> helpers that contain a flaw fixed in xm*alloc() recently (see commit
>>> cf38b4926e2b "xmalloc: guard against integer overflow").
>>
>> For _xmalloc() we're talking about bytes (and the guarding you
>> refer to is actually orthogonal to the limiting done by the
>> page allocator, as follows from the description of that change).
>> Here we're talking about pages. I hope it will be decades until we
>> have to consider allocating 16Tb all in one chunk (and we'd need
>> to have large enough vmap virtual address space set aside first).
> 
> I think you misunderstood my point here. I am not suggesting that a 
> normal user would ask to allocate 16TB but that a caller may pass by 
> mistake an unsanitized value to xv*() functions.
> 
> IIRC, the overflow checks in xm*() were added after we discovered that
> some callers were passing unsanitized values.
> 
> I would expect xv*() functions to be more used in the future, so I think 
> it would be unwise to not guard against overflow.
> 
> I would be happy with just checking that nr always fit in a 32-bit value.

The two callers of the function obtain the value from vm_size(),
which returns unsigned int.

>> Also note that
>> - the entire vmap.c right now uses unsigned int for page counts,
>>    so it would be outright inconsistent to use unsigned long here,
> 
> I didn't suggest this would be the only place (note that "new problem"). 
> This was the best place I could find to mention an existing problem that 
> is widened with the introduction of xv*() helpers.

Oh, so you're talking of a separate and entirely unrelated patch
making sure the existing vmalloc() won't suffer such a problem.
Yes, vmalloc_type() could be fixed to this effect. But do you
realize we'd have a security issue much earlier if any guest
action could lead to such a gigantic vmalloc(), as the time to
both allocate and then map 4 billion pages is going to be way
longer than what we may tolerate without preempting?

And no, there's no overflowing calculation anywhere afaics which
would resemble the ones in xmalloc() you refer to.
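(For comparison, the xmalloc()-style guard referred to above operates on a
byte count before any page rounding. A sketch, with names and the check's
placement being mine rather than taken from the actual commit:)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Sketch of an overflow guard in the style of the one added to xmalloc():
 * reject byte counts whose round-up to whole pages would wrap around,
 * before ever reaching the page allocator. */
static int size_to_pages(size_t bytes, unsigned long *nr_pages)
{
    if (bytes > SIZE_MAX - (((size_t)1 << PAGE_SHIFT) - 1))
        return -1; /* bytes + (PAGE_SIZE - 1) would overflow */
    *nr_pages = (bytes + ((size_t)1 << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
    return 0;
}
```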

>> - at least on x86 passing around 64-bit function arguments is
>>    slightly less efficient than 32-bit ones, and hence I'd prefer
>>    to keep it like this.
> 
> Don't you have 64-bit registers on x86-64?

Yes, yet to operate on them most insns need an extra prefix
byte.

> But, I am really surprised this is a concern to you when all the 
> functions in this code will modify the page tables. You dismissed this 
> overhead in the same e-mail...

Entirely different considerations: The goal of limiting variable
(and parameter) types to 32 bits where possible is a generic one.
Which is, if for nothing else, to avoid introducing bad precedent.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 13:17:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 13:17:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30771.60911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfjo6-0002Ld-0M; Thu, 19 Nov 2020 13:17:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30771.60911; Thu, 19 Nov 2020 13:17:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfjo5-0002LW-TI; Thu, 19 Nov 2020 13:17:01 +0000
Received: by outflank-mailman (input) for mailman id 30771;
 Thu, 19 Nov 2020 13:17:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rCgr=EZ=nxp.com=jorge.pereira@srs-us1.protection.inumbo.net>)
 id 1kfjo4-0002L0-Bz
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 13:17:00 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.57]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ace5c85-de39-4dbe-8f1c-ad9ce3d35a70;
 Thu, 19 Nov 2020 13:16:59 +0000 (UTC)
Received: from AM6PR04MB5863.eurprd04.prod.outlook.com (2603:10a6:20b:a5::11)
 by AS8PR04MB8021.eurprd04.prod.outlook.com (2603:10a6:20b:2a7::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 13:16:53 +0000
Received: from AM6PR04MB5863.eurprd04.prod.outlook.com
 ([fe80::31b9:c9d3:cccf:fc94]) by AM6PR04MB5863.eurprd04.prod.outlook.com
 ([fe80::31b9:c9d3:cccf:fc94%6]) with mapi id 15.20.3589.021; Thu, 19 Nov 2020
 13:16:53 +0000
X-Inumbo-ID: 5ace5c85-de39-4dbe-8f1c-ad9ce3d35a70
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WJbSA9QifaNMRNoSXXonvKzf83vqsUazqYr7S+QxVEnfjWHsYNywc7rdatFgDAxf0SKpGdPZXefnyGs++5Ynnd9muOwfVkMsWJ4JNROsMcUiZZcU8WS4TPeyilqQEwcPz5YCoM/+nQTDxCXpUS3QQI9LI/AdVuCmagP4uaz5hFw3HlK8IQEDP4OJk7aP8dMpYtUpnWAxe1vbp7xPN2ADu3iDOzuQNaapVQULmn3rx9nqcXoxA+3fwBpblmXMwSdLdHCewVIP4OeLDcZ66bBqgyr++43pDod4CaLaPF2mq+/O4V1qBke1uT+YT3zziMZFiej/nwbkRcPsOFSofPo4Dg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=akqBHTHhWXED2JlETSeHCpkPfKyML6WEm6Qtt195NcY=;
 b=f/DqZHfuTI5bEi8/YTTB74t7+wDeG6T7mEWoxBHJUcUprdUAimwUBvyv6E9SgiNcWmCsTp1IgObzuNGXk59/1TV25CRI/9OiSDPsQamjYbglEisTjSNKt7JLe+shNFXB2HeERfgP7gt5dM4PTYCAM+56zZUP7ABNYX7Io/YeYKY6B+YV6oFznkRsOez16RlDurx0EnlxlurepWeZzc5hfWYDgPRfdUYiJZvX64HA4fN5d4D2RYFNVOpBbB1Akttor//eQ+hc1Esct1CcNKHBF3+b2sa8ftFdp9Ni64Z7g+4qxDL4JYDGsLYKm8C3aorDjy1iEwwuOpqEg4Aqqv2aFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=akqBHTHhWXED2JlETSeHCpkPfKyML6WEm6Qtt195NcY=;
 b=hgX7atsHllq6LEtyVENCo/cbX4DNKd7ast2ZV8BEG5KewBVtPHryNZM9rjV9D2GEGl6NcgagnvvbTA+HUZx5rwmxu/BpvnBe33o87zTA76x6H0UzwCUCeKwpzpdH/JKm/k+F4tkWV4uOjNfK2UwlrsiCCWGu4gaqW4LEKvN9ZSU=
From: Jorge Pereira <jorge.pereira@nxp.com>
To: "'xen-devel@lists.xenproject.org'" <xen-devel@lists.xenproject.org>
Subject: smmu-related clarification
Thread-Topic: smmu-related clarification
Thread-Index: Ada+dgwvLpGgRoDvTYGCt70DgFD1qg==
Date: Thu, 19 Nov 2020 13:16:53 +0000
Message-ID:
 <AM6PR04MB58630B20435EEDF65D2577E2F0E00@AM6PR04MB5863.eurprd04.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=nxp.com;
x-originating-ip: [93.3.33.12]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 118d9785-41f8-46b7-dae4-08d88c8d6213
x-ms-traffictypediagnostic: AS8PR04MB8021:
x-microsoft-antispam-prvs:
 <AS8PR04MB8021EFA50C677BA443A5F61AF0E00@AS8PR04MB8021.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 ioSD3s2rEPTmmHy+MuGHrPChwJB8yGm0SxWAKCFE0ypn7RgCis5jDc6huWS1ldeD7e0TxNk+Ohl41cczYnomscnoY7cZ3znjZZqIBKeoBnWhlsR1kU3ctlKfiUYLHgVsV5t+sMahtb5W2lsQL6ZrXBaQggpid5RVMKcz15WGzv3ZgdwoWNGcImX9dTcrMnup/UzPUI4E/WmEiXY6Ipaj//z9turMamIbV3nwR9aozZ7IVKv5GYZcPiG/8OflJPy6UG7heRDG/nlTu2WumwdS1U6Gcoota5xFyxSduayxMvby+XkUvzPuDhip52TyLZRDPFd3NPoFp00vFvCTTZIrHYXK43ie9NNNwOZ+M8XYHEcjB2H08umcwwvZt1qNJq++
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB5863.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(39850400004)(366004)(376002)(346002)(186003)(6506007)(316002)(33656002)(9686003)(7696005)(86362001)(64756008)(66556008)(66476007)(66446008)(71200400001)(76116006)(66946007)(5660300002)(478600001)(26005)(52536014)(4744005)(44832011)(6916009)(8676002)(3480700007)(8936002)(9326002)(55016002)(2906002)(491001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 g26jUx4gd3Du3muJf+OUSo1+JNlhcKsi1de546XT6E1MxNiTfjiBmT2jQj1RIQuiRXF5lCRWK/emo6QQEnCkDC0a9VC3O93kKfjtdulRW9jZpHIG6+7ok3SSzXzCthwTw0pwd3LpeYnMJc3644UNcM7j84Q0At18y42bT7/cBIyWIy62Utsuoym9JMkXkXAeDH1RBtcHaipSE1QPdnriRr9klLeds3/3H6NJyCUREC2TWRv+UAMMab2Qsg5XHPSbXLZSxSS6Qo46Hzl7YhIMm4/0gxQ1tYI0OreAhTMkdOjEtKAhbsCcikz/hGMapWEB8yvlZVFdMojVJFkXGCmi7zA1/MFrPBw4kabQ7n2LsakS7bF5qPYZ/SOHYSETMst0TKpo1RfHnSVS7gmAjm2bSP4NMSKN3PjZpAcm1EQF3A2AOFvzjDgFeCIkKbkJgeWv9LBG1ck8L7IZ5Sl1kbAv/uG1w9C7TgvGdypGQNdpmrIUIe0iXPB2VHk+V+hrHfYcly0+jHBeFqtuA1Oa+S2ekat/koOhIQ4jEZI+vLeR+xarzYhIlibfdeYZGw/PuuY7p2n8D5k2asFtuO8+nWcHXpgKDfQGJv++2Yx4ES6lBk6cEP7XOMzWJYAVhcJcCxQs8POMpf7PXxh5gaMusREY9A==
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
	boundary="_000_AM6PR04MB58630B20435EEDF65D2577E2F0E00AM6PR04MB5863eurp_"
MIME-Version: 1.0
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB5863.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 118d9785-41f8-46b7-dae4-08d88c8d6213
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Nov 2020 13:16:53.6101
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: VDPXcxYEg2UySVjTnGAoSSL9YXLHxvPQ5C8sHbkpeoQw0w7EpxjnCAWx6D6FyiX/rTi1+CSt1QTGJA8SOh5epg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8021

--_000_AM6PR04MB58630B20435EEDF65D2577E2F0E00AM6PR04MB5863eurp_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi All,

I'm having some SMMU-related issues and need help.

So, in my use-case scenario I have two Linux guests running in parallel -
dom0 and domU. I have to enable the SMMU because I want to pass through
devices to domU.

It would be great if you could help me clarify the following questions:

  *   If I enable the SMMU, will it apply not only to domU but also to
DMA-capable devices assigned to dom0?
  *   Do I have to add all SMMU masters of dom0 in the device tree as well?
Or, since dom0 has a 1:1 mapping, do I not have to do anything?

Thanks,
Jorge



--_000_AM6PR04MB58630B20435EEDF65D2577E2F0E00AM6PR04MB5863eurp_--


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 13:19:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 13:19:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30777.60922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfjqc-0002Vc-Ee; Thu, 19 Nov 2020 13:19:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30777.60922; Thu, 19 Nov 2020 13:19:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfjqc-0002VV-BX; Thu, 19 Nov 2020 13:19:38 +0000
Received: by outflank-mailman (input) for mailman id 30777;
 Thu, 19 Nov 2020 13:19:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfjqb-0002VQ-Pm
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 13:19:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfjqb-0003uy-1n; Thu, 19 Nov 2020 13:19:37 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfjqa-0005bg-LI; Thu, 19 Nov 2020 13:19:36 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=aY4aowiDUUC0AuXT0PTj3329jWTC40JFc9AZmVhWrtE=; b=4bPLASvP0Egx39EJUzbVByO/6r
	9YMQOWPsYwfc9qeiVed9F/XMyYJZm7DaFJWG9SQ2ZiEUNWf92atOP1n6RADNcEwP7vT94sx97emQf
	SgrFNJZ1uhmHJB4yW6AXR/tiMq6Bo1D9eeYyJ+OY/I+pMbCCdGRHeg/wFksxWUIQJ3n0=;
Subject: Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
 <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
 <9adc7ec2-c014-d9ae-a8b5-5b942640386c@suse.com>
 <e115ce52-3c5c-6530-dd3a-bc7f268ef224@xen.org>
 <a00f3c98-04c1-4380-dc62-22d001edae1d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <18d80e2c-fd54-e5be-89e0-5fedf16cc9cd@xen.org>
Date: Thu, 19 Nov 2020 13:19:34 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <a00f3c98-04c1-4380-dc62-22d001edae1d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/11/2020 12:33, Jan Beulich wrote:
> On 19.11.2020 11:59, Julien Grall wrote:
>> On 19/11/2020 09:46, Jan Beulich wrote:
>>> On 18.11.2020 20:10, Julien Grall wrote:
>>>> On 06/11/2020 07:11, Jan Beulich wrote:
>>>>> All of the array allocations in grant_table_init() can exceed a page's
>>>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>>>>> for after boot.
>>>>
>>>> I can see a few reasons why they are not suitable:
>>>>      - xmalloc() will try using an order and then throw away memory.
>>>> This is pretty inefficient.
>>>
>>> But addressing this inefficiency, while a nice side effect, is
>>> not the goal here.
>>>
>>>>      - xmalloc() will allocate physically contiguous memory
>>>
>>> This aspect matters here only indirectly: What we care about
>>> avoiding are runtime allocations of non-zero order. The assumption
>>> of how I worded the description is that during boot non-zero-
>>> order allocations can typically be fulfilled and hence aren't a
>>> (general) problem.
>> Well... In the case of the grant table, if you can't find a small order
>> of physically contiguous pages then you have bigger trouble on your
>> platform. You will either not have enough space for allocating the
>> domain memory, or the performance will be awful because only 4K pages
>> are used.
> 
> Performance will be affected, yes, but I'm not sure I'd call this
> "awful".

Performance is always subjective...

> 
>> So while I agree that having xvmalloc() is a good move, I am not
>> convinced of your argument regarding the boot vs runtime.
>>
>> I think a better argument is the allocation doesn't need to be
>> physically contiguous in memory. So better avoid it when we can.
> 
> Well... I've added a sentence.
> 
>>>> It would be good to clarify which one you refer because none of them are
>>>> really a problem only after boot...
>>>
>>> Given the above, I'm not sure in which way you would see this be
>>> clarified. Any suggestions welcome.
>>>
>>>> One thing to be aware of, though, is that xv*() are going to be more
>>>> inefficient because they involve touching the page tables (at least
>>>> until the work to map the direct map on demand is merged). In
>>>> addition, on Arm, they will also use only 4K mappings (I have a TODO
>>>> to fix that).
>>>>
>>>> So I think we will need to be careful about when to use xmalloc() vs
>>>> xvmalloc(). It might be worth outlining that in the documentation of xv*().
>>>
>>> The rule is quite simple and the inefficiencies you mention
>>> shouldn't matter imo: Post-boot there should not be any
>>> implicit allocations of non-zero order. "Implicit" here meaning
>>> to still permit e.g. direct alloc_domheap_pages() invocations,
>>> making apparent at the call site that the aspect of memory
>>> fragmentation was (hopefully) taken into consideration. I'm
>>> actually inclined to suggest (down the road) to have _xmalloc()
>>> no longer fall back to multi-page allocations post-boot, but
>>> instead return NULL.
>>
>> One advantage of xmalloc() is that it is able to allocate a suitable
>> xenheap area. So it will not touch the page tables and is therefore
>> useful for short-lived allocations, as the overhead will be more limited
>> compared to xvmalloc().
> 
> Yet it still shouldn't be used post-boot when the size may exceed
> system page size, to avoid reporting -ENOMEM or alike when really
> there's ample but fragmented memory available.
> 
>> There is also the problem that alloc_{dom, xen}heap_pages() works using
>> an order. xmalloc() is handy because it will give back the unnecessary pages.
>>
>> Maybe we should consider a version of alloc_*heap_pages() that will take
>> the number of pages rather than order.
> 
> Possibly, I've indeed been wondering more than once whether we should.
> 
>>> If you think it is really needed, I can add something like "These
>>> should be used in preference to xmalloc() et al whenever the size
>>> is not known to be constrained to at most a single page" to the
>>> comment at the top of the new header file.
>>
>> There are quite a few users of xmalloc() with large allocations. Yet,
>> they would not be suitable for xvmalloc() because they require physically
>> contiguous memory.
> 
> Isn't this a mistake? I.e. am I unaware of some comment saying that
> xmalloc() actually guarantees to return physically contiguous memory?

Well, we have been pretty bad at documenting code... So some of us may 
have inferred some behavior from xmalloc().

This is also why I requested making it more explicit what 'v' means.

However...

> It's certainly against the "Xen malloc/free-style interface" comment
> at the top of the header.

... if you consider it a mistake, then why did you introduce
xvmalloc() rather than modifying the implementation of xmalloc()?

> 
> It was my understanding so far that with the removal of the direct-
> map this property would go away anyway.

The direct map is going to disappear on x86, but there are no plans for
that on Arm so far.

I am not saying I don't want to remove it, I just want to point out that
the decision should not be made solely based on what x86 is doing (see
more below).

> 
>> So I think you would want to mention that in the
>> sentence.
> 
> Well, as you've seen further down the comment already mentions that
> aspect.
> 
>>> Where the inefficiencies you mention would imo matter is in
>>> (future) decisions whether to use vmalloc() et al vs xvmalloc()
>>> et al: If the size _may_ be no more than a page, the latter may
>>> want preferring.
>> I am not sure I understand this... why would we want to keep vmalloc()
>> external when xvmalloc() will be calling it for allocations over PAGE_SIZE?
> 
> Why force callers knowing they'll allocate more than a page to go
> through the extra layer? If that was the plan, then we wouldn't
> need a set of new functions, but could instead tweak vmalloc() etc.

Two reasons:
   1) There are too many ways to allocate memory in Xen. This adds extra
confusion for users.
   2) The impact of the extra check is going to be insignificant compared
to the cost of the function. Feel free to prove me otherwise with numbers.
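For illustration, the "extra check" in question amounts to a single size comparison. A minimal sketch, assuming a hypothetical `xvmalloc_path()` helper and a fixed 4K `PAGE_SIZE` (neither is Xen's actual code):

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL  /* illustrative; the real value is per-architecture */

enum alloc_path { PATH_XMALLOC, PATH_VMALLOC };

/* Requests fitting in a single page can take the xmalloc() path (no
 * page-table work); larger requests take the vmalloc() path, so that
 * fragmented, non-contiguous memory can still satisfy them. */
static enum alloc_path xvmalloc_path(size_t size)
{
    return size <= PAGE_SIZE ? PATH_XMALLOC : PATH_VMALLOC;
}
```

Compared to the cost of mapping and unmapping pages, a branch like this is indeed negligible.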

> 
>>>>> --- a/xen/common/vmap.c
>>>>> +++ b/xen/common/vmap.c
>>>>> @@ -7,6 +7,7 @@
>>>>>     #include <xen/spinlock.h>
>>>>>     #include <xen/types.h>
>>>>>     #include <xen/vmap.h>
>>>>> +#include <xen/xvmalloc.h>
>>>>>     #include <asm/page.h>
>>>>>     
>>>>>     static DEFINE_SPINLOCK(vm_lock);
>>>>> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>>>>>         return p;
>>>>>     }
>>>>>     
>>>>> -void vfree(void *va)
>>>>> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
>>>>
>>>> I don't think "unsigned int" is sufficient to cover big sizes. AFAICT,
>>>> this is not a new problem in this code and seems to be a latent issue
>>>> so far.
>>>>
>>>> However, I feel that it is wrong to introduce a new set of allocation
>>>> helpers that contain a flaw fixed in xm*alloc() recently (see commit
>>>> cf38b4926e2b "xmalloc: guard against integer overflow").
>>>
>>> For _xmalloc() we're talking about bytes (and the guarding you
>>> refer to is actually orthogonal to the limiting done by the
>>> page allocator, as follows from the description of that change).
>>> Here we're talking about pages. I hope it will be decades until we
>>> have to consider allocating 16TB all in one chunk (and we'd need
>>> to have large enough vmap virtual address space set aside first).
>>
>> I think you misunderstood my point here. I am not suggesting that a
>> normal user would ask to allocate 16TB but that a caller may pass by
>> mistake an unsanitized value to xv*() functions.
>>
>> IIRC, the overflow checks in xm*() were added after we discovered that
>> some callers were passing unsanitized values.
>>
>> I would expect xv*() functions to be used more in the future, so I think
>> it would be unwise not to guard against overflow.
>>
>> I would be happy with just checking that nr always fits in a 32-bit value.
> 
> The two callers of the function obtain the value from vm_size(),
> which returns unsigned int.

I can't see a use of vm_size() in vmalloc_type(). I can only see an
implicit downcast.

> 
>>> Also note that
>>> - the entire vmap.c right now uses unsigned int for page counts,
>>>     so it would be outright inconsistent to use unsigned long here,
>>
>> I didn't suggest this would be the only place (note that "new problem").
>> This was the best place I could find to mention an existing problem that
>> is widened with the introduction of xv*() helpers.
> 
> Oh, so you're talking of a separate and entirely unrelated patch
> making sure the existing vmalloc() won't suffer such a problem.
> Yes, vmalloc_type() could be fixed to this effect. But do you
> realize we'd have a security issue much earlier if any guest
> action could lead to such a gigantic vmalloc(), as the time to
> both allocate and then map 4 billion pages is going to be way
> longer than what we may tolerate without preempting?

Yes I missed that point. But I am not sure where you are trying to infer...

If it wasn't clear enough, I didn't suggest to fix in this patch. I am 
only pointed out that we hardened _xmalloc() and this looks like going 
backwards.

> 
> And no, there's no overflowing calculation anywhere afaics which
> would resemble the ones in xmalloc() you refer to.

"overflow" was probably the wrong word. It would be more a downcast when 
computing the number of pages.

__vmap() is taking an "unsigned int", yet the number of pages is 
computed using size_t.
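To make the hazard concrete, here is a minimal sketch (hypothetical code, not Xen's implementation; `pages_for()` and the fixed `PAGE_SIZE` are illustrative, and the arithmetic assumes a 64-bit size_t) of validating a size_t page count before handing it to an unsigned int parameter such as __vmap()'s:

```c
#include <limits.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL  /* illustrative */

/* Compute the page count in size_t, then verify it fits in the unsigned
 * int that a __vmap()-style interface expects. Without the check, a
 * 64-bit count would be silently truncated at the call site. */
static unsigned int pages_for(size_t size)
{
    /* Reject sizes whose page count cannot fit in unsigned int, before
     * the rounding below is even attempted. */
    if ( size > (size_t)UINT_MAX * PAGE_SIZE )
        return 0;  /* caller treats a count of 0 as failure */

    return (unsigned int)((size + PAGE_SIZE - 1) / PAGE_SIZE);
}
```

Checking once at the helper's entry keeps every caller's unsanitized value from reaching the narrower parameter silently.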

>>> - at least on x86 passing around 64-bit function arguments is
>>>     slightly less efficient than 32-bit ones, and hence I'd prefer
>>>     to keep it like this.
>>
>> Don't you have 64-bit registers on x86-64?
> 
> Yes, yet to operate on them most insns need an extra prefix
> byte.

... Thankfully we only have 2 architectures to care about... Otherwise we
would never be able to change common code without bikeshedding over
micro-optimizations.

> 
>> But, I am really surprised this is a concern to you when all the
>> functions in this code will modify the page tables. You dismissed this
>> overhead in the same e-mail...
> 
> Entirely different considerations: The goal of limiting variable
> (and parameter) types to 32 bits where possible is a generic one.

At the cost of introducing multiple implicit downcasts that one day or
another are going to bite us.

> Which is, if for nothing else, to avoid introducing bad precedent.

I am OK with a 32-bit internal value, but please at least check that the
downcasting is going to be harmless.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:20:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:20:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30830.60977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfknG-0001fk-Vt; Thu, 19 Nov 2020 14:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30830.60977; Thu, 19 Nov 2020 14:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfknG-0001fd-SR; Thu, 19 Nov 2020 14:20:14 +0000
Received: by outflank-mailman (input) for mailman id 30830;
 Thu, 19 Nov 2020 14:20:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Veg=EZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kfknG-0001fY-2N
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:20:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d88f820-47d8-4c90-85a5-1a05cc51748e;
 Thu, 19 Nov 2020 14:20:11 +0000 (UTC)
X-Inumbo-ID: 7d88f820-47d8-4c90-85a5-1a05cc51748e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605795611;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=jKiBdHgO9AYfyZCgmCLajHqpgGa4FpWdnQus946xNhE=;
  b=ifRlDbvFzTO+zytFhrR4g9GkejWMcDQ1t3iRYdK2bc0yinP0ioteLiHY
   0R/dp4baPYqA9MtHYTjSyDVbtabEj1ulLX/nx/nJ5vzE1z1dEhO99qH8f
   DDaOCheg0/Z/8LVbpzQSTTCO1Wvf1rJmIOdr6X1siQIuAuCH+yT/jty0P
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: DQVBbvRaYe1CWwjHi59DscQm+PuppxJDwMMPxYAw8h8DHxsKhgPNs0M1iJuabmZXROJotgAY4v
 lyRBCWDTGBzyzCQf2MEzQUuZJwVAHDWR2KSyR+d7a2M/HPcwNRBRTd9zfS1sWsHq2mdQ10v1n9
 HV6k79ySwP5esofYZitwBPCBNHDw0aEybaI3IRBoAb96o2GA4Tz5nzG8dBs/qI/SBeADYEHjBk
 ns1yAVHe6Ym94qkxkG7kEonoEWfezOFq1bGaqIG0Orvd+dw1878NYuIlqVhfXzC65BQxq9n/sL
 XMQ=
X-SBRS: None
X-MesageID: 31758099
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,490,1596513600"; 
   d="scan'208,223";a="31758099"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jSzO1OPx//NLGPBxIaBNIPRUG1lOfsf6boG6572w9qFVksYvaaKWdoBz2/idG0KGUOC6v2F2+amcZZuZZQyAfnoE9BQtIq2qOopCiTrOv0vTSGklJF06MTjGfVhNRNEL6TqQIHZFttG860uYFCrx1mKrVx++PhyGja/739S2fRU0CRoGBGOgBkYdexHrfqB6U2pU0ENwK/yEQKsqI97LWVM8UlqfIbuBLVkb0fBJEPsIhEpxP0qXwVFpOZ/mJjADXWtb0o2K40pseNSJYvMZjTKPC7WhyfQw66BpJ7FUJNnTP4Cjlmpj2ZQV6d7r2GApvSK1AkMmASDwSkuBuDtU8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+xYD+6yimUIh0rI6DWtLIt6ifDqPKnRFCVLT5EUL8Og=;
 b=lbFryb9YNSBcYhHcGyH4O5gPWJGeh+8fp9XPKzR7eXLyYxpVQ3B+jkZ9KW71x7J2PMc3lbvNLCIl5HC6Tr/yy3l0OFHlpPeYbto/e+eA7qlchBzcQwD2zWSJYhQfdNNOrettL8s8pxoFqjEeeuNvv2TE9++SBCFt3cNbbjT9bx2lyy24DWae8s3krksPdX/mdYPeJRzXIyMMtFxYrOnhLsJ6u+CAxE0ZooWoblgPYImby9xe4lS7eU59PfkY8P03MpT8OiGW6isdz1HHY56Ha4OiqELu9xx0vslXyLZOJK4f5j8byDA0unn96F78X7JUuxwCAf/rDc1uU9DBguwDRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+xYD+6yimUIh0rI6DWtLIt6ifDqPKnRFCVLT5EUL8Og=;
 b=B3r/TlK8FBdX3c3k7E+Tny1dgtoMmMtsAAF1JDGsBphG4WV4m/OY9uCDblLPpFTEhU7F4Qf+VeMf/QKPKsL9GT512f8bRcRUbpu/f+zar4L2xdrRXUVBqSOPh3cQ7xg99v8T7otd3HzXFM2JjKeEjjYaocsueIWsdV8XD0TcJRw=
Date: Thu, 19 Nov 2020 15:19:15 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
Content-Type: multipart/mixed; boundary="dhu473tb3wpezee4"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
X-ClientProxiedBy: LO2P265CA0500.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::7) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8917a712-5bb4-4c2f-ae00-08d88c961c26
X-MS-TrafficTypeDiagnostic: DM6PR03MB4299:
X-Microsoft-Antispam-PRVS: <DM6PR03MB429918CA37112B094E4BA7788FE00@DM6PR03MB4299.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Dy8wDGnLgHJ1uZVDaGnpkSdQMjZ0EpHAkFjJUgFb1KBVib0cuSMVVreK4n4CCl4bWDdAGZQdXeXvZ2fo1DhsCqwUXI3RUrp4iEWUv/9mseBdpaWM7gJTYyuBVljSA/ihnF79cYzr5liGJLzB5h++jp/rcqrGbd0XosWiuoD4W3VDsltZJ9ynXkXixhNNbt5Jm4yu8OhAjrBtdJKsYSn7+OmZE+ywbAdU3tUhMdPVjjtN0Zj/7bGxtT8UhgPtmj86itO9Rno4CrSNZb2bbCncLeC8+agNlNp08oSvDOqWd0WyFcUamxaQCQ9r8FS5UpUsJQ5ytPfsT5Xy3F3q2IPganvNwDmDcpJMHa8JjnEucVHZoFbg3ucUtjp9OoQKwYXA
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(366004)(346002)(136003)(39860400002)(376002)(44144004)(8936002)(66556008)(2906002)(86362001)(85182001)(66476007)(110136005)(83380400001)(316002)(53546011)(5660300002)(8676002)(235185007)(33716001)(186003)(16526019)(6496006)(1076003)(6486002)(4326008)(33964004)(6666004)(956004)(26005)(66576008)(478600001)(66946007)(9686003)(2700100001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: cYBcIHfny6nQi6VJePy5U0mi2E87jn+uRMNyyee9h3iLS/tREeGgDBDOn4Q6Xyk+jOBN/Nlgt+/pmKDlhfGLWQ1As7uSnnS2WVTt49CkWqnZkegVJG2CNDf6ClzcMT6P9Q1T+pNJeyKpoB4Njpm9QZ6mLz9vbD8uh7wLNFsfUk00KbDPLDRjUKhyxIKIAfUYzo3duPvO+FulUd9emneB0HYt1jV81OY85NJvBHJ9+yfmKSctJF0CZl8AZ46QQrdgFGdLX9QALqKAwMdaxZYQauG3TwsLP67WyT6zsxlvufkGroVRy2XvTp3V8Rp2ON2j3A3rzTEfay5ZoWh5vRem9UQ+vg+687mOuLBygYDNZ+6b5XvTzzNX0YSGvJtgnW1QGharSHE4txYFJo9S12jLVrGLBtBjv41hU283eoM7ksfSg55kLFy3yrXyxdoxX+yAx6s8Ii9F3Bp/Nocq6YYBgVAGo8V6OcJHoAv3yLF2jLc50GRywhDfxKBOu8bK5snEZwyGPt5NXB+Iz4UYqHYi1I1iAsYn/n/Uru3dXxxZUFOdw7goboH6XCDd8Qg9UoOJPhtzXemPGI/s7b1i0U7JU3clq8FiLE5NzeiTgR2ZXkwoU+LR1aqbpTW/6o32AFQD7FpwmYbt3tluHJJ7gq9bcnqnzF6DBYvQmYAkbhKT8hl9+R35O3+oj6oITyX4ml7HxkTd68I2otxlJjMGPVdJMBZmcbkWCSXtzNIbAn3pyk5iaTHEdI84/HWW7nGKKgb89IgybcwXQlwtMzxiio03c8yXhQKqGacPFQr3XTXC8+7WeW5VX/I1/QwKSPamm/+gSmAYXwBkbpCWNc28g2KhOpNZpiF3mkHRxsVgS1dOaUfoUUI66uYQm+m5Y9o4S41mSIHs2fnDcnaB4CFZnYPmyw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 8917a712-5bb4-4c2f-ae00-08d88c961c26
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 14:19:21.9987
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XgVgAWpjZ/pZUD+ro4zA0/7VzpcTmNKTSsJpfi3G07umxdvScWcUMRBZjannR7VmU+1+dEfMhSB+bP76ZRuZNw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4299
X-OriginatorOrg: citrix.com

--dhu473tb3wpezee4
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Wed, Nov 18, 2020 at 03:59:44PM +0100, Jan Beulich wrote:
> On 18.11.2020 15:39, Roger Pau Monné wrote:
> > On Wed, Nov 18, 2020 at 01:14:03PM +0100, Manuel Bouyer wrote:
> >> I did some more instrumentation in the NetBSD kernel, including dumping
> >> the ioapic2 pin2 register.
> >>
> >> At the time of the command timeout, the register value is 0x0000a067,
> >> which, if I understand it properly, means that there's no interrupt
> >> pending (bit IOAPIC_REDLO_RIRR, 0x00004000, is not set).
> >> From the NetBSD ddb, I can dump this register multiple times, waiting
> >> several seconds, etc., and it doesn't change.
> >> Now if I call ioapic_dump_raw() from the debugger, which triggers some
> >> XEN printf:
> >> db{0}> call ioapic_dump_raw^M
> >> Register dump of ioapic0^M
> >> [ 203.5489060] 00 08000000 00170011 08000000(XEN) vioapic.c:124:d0v0 apic_mem_re
> >> adl:undefined ioregsel 3
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
> >>  00000000^M
> >> [ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
> >>  00000000^M
> >> [ 203.5489060] 10 00010000 00000000 00010000 00000000 00010000 00000000 00010000 00000000^M
> >> [...]
> >> [ 203.5489060] Register dump of ioapic2^M
> >> [ 203.5489060] 00 0a000000 00070011 0a000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 3
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 4
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 5
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 6
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 7
> >>  00000000^M
> >> [ 203.5489060] 08(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 8
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel 9
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel a
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel b
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel c
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel d
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel e
> >>  00000000(XEN) vioapic.c:124:d0v0 apic_mem_readl:undefined ioregsel f
> >>  00000000^M
> >> [ 203.5489060] 10 00010000 00000000 00010000 00000000 0000e067 00000000 00010000 00000000^M
> >>
> >> then the register switches to 0000e067, with the IOAPIC_REDLO_RIRR bit set.
> >> From here, if I continue from ddb, the dom0 boots.
> >>
> >> I can get the same effect by just doing ^A^A^A, so my guess is that it's
> >> not the access to the ioapic's register which changes the IOAPIC_REDLO_RIRR bit,
> >> but the XEN printf. Also, from NetBSD, using a dump function which
> >> doesn't access undefined registers - and so doesn't trigger XEN printfs -
> >> doesn't change the IOAPIC_REDLO_RIRR bit either.
> > 
> > I'm thinking about further ways to debug this. I see that all active
> > IO-APIC pins are routed to vCPU0, but does it make a difference if you
> > boot with dom0_max_vcpus=1 on the Xen command line? (thus limiting
> > NetBSD dom0 to a single CPU)
> 
> I too have been pondering possible approaches. One thing I thought might
> help is to accompany all places setting remote_irr (and calling
> vioapic_deliver()) with a conditional log message, turning on the
> condition immediately before the first "undefined ioregsel" gets logged.
> (And turn it off again once the last RTE was read in sequence, just to
> avoid spamming the console.) From Manuel's description above, there has
> to be something that sets the bit and causes the delivery _without_ any
> active action by the guest (i.e. neither EOI nor RTE write) and
> _without_ any new instance of the IRQ appearing. I have some vague hope
> that knowing how we end up making the system make progress again may
> also help understand how it got stuck.

I've got two different debug patches for you to try. I'm attaching both
to this email, but I think we should start with Jan's suggestion
(conditional_print.patch). That patch will only print extra messages
between the existing ioregsel 3 ... ioregsel f debug messages, so
AFAICT you will have to trigger this from NetBSD using ioapic_dump_raw.

The other patch (verbose_intr.patch) adds some messages related to
interrupts, but I'm afraid it's likely to interfere with the issue we
are trying to debug, since it will add a lot of extra printk's (and
likely flood your console).

Thanks, Roger.

--dhu473tb3wpezee4
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="conditional_print.patch"

>From 9f1cf4f5f4f143be2d9e87d1b2f4c4cf4c286b69 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Thu, 19 Nov 2020 14:11:43 +0100
Subject: [PATCH]

---
 xen/arch/x86/hvm/vioapic.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..9c0423b26e 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -43,7 +43,13 @@
 /* HACK: Route IRQ0 only to VCPU0 to prevent time jumps. */
 #define IRQ0_SPECIAL_ROUTING 1
 
-static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int irq);
+static void _vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int irq);
+
+static bool __read_mostly print;
+#define vioapic_deliver(vioapic, irq) ({\
+    if ( print && irq == 34 ) \
+        printk("%s:%d:%s: vioapic_deliver\n", __FILE__, __LINE__, __func__); \
+    _vioapic_deliver(vioapic, irq); })
 
 static struct hvm_vioapic *addr_vioapic(const struct domain *d,
                                         unsigned long addr)
@@ -119,6 +125,16 @@ static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
 
         if ( redir_index >= vioapic->nr_pins )
         {
+            switch ( vioapic->ioregsel )
+            {
+            case 3:
+                print = true;
+                break;
+
+            case 0xf:
+                print = false;
+                break;
+            }
             gdprintk(XENLOG_WARNING, "apic_mem_readl:undefined ioregsel %x\n",
                      vioapic->ioregsel);
             break;
@@ -391,7 +407,7 @@ static void ioapic_inj_irq(
     vlapic_set_irq(target, vector, trig_mode);
 }
 
-static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
+static void _vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 {
     uint16_t dest = vioapic->redirtbl[pin].fields.dest_id;
     uint8_t dest_mode = vioapic->redirtbl[pin].fields.dest_mode;
-- 
2.29.2


--dhu473tb3wpezee4
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="verbose_intr.patch"

>From 381eebe51657a3b3101dc80880fa3350dcfb1e23 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Thu, 19 Nov 2020 10:45:00 +0100
Subject: [PATCH]

---
 xen/arch/x86/hvm/vioapic.c   | 9 +++++++++
 xen/arch/x86/irq.c           | 3 +++
 xen/drivers/passthrough/io.c | 3 +++
 xen/include/asm-x86/irq.h    | 2 ++
 4 files changed, 17 insertions(+)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..1702434f0d 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -235,6 +235,10 @@ static void vioapic_write_redirent(
     pent = &vioapic->redirtbl[idx];
     ent  = *pent;
 
+    if ( gsi == TRACK_IRQ )
+        printk("vioapic write %s word val %#x current %#lx\n",
+               top_word ? "top" : "low", val, ent.bits);
+
     if ( top_word )
     {
         /* Contains only the dest_id. */
@@ -410,6 +414,9 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
                 "vector=%x trig_mode=%x",
                 dest, dest_mode, delivery_mode, vector, trig_mode);
 
+    if ( irq == TRACK_IRQ )
+        printk("vioapic inject vector %#x\n", vector);
+
     switch ( delivery_mode )
     {
     case dest_LowestPrio:
@@ -517,6 +524,8 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
                 continue;
 
             ent->fields.remote_irr = 0;
+            if ( vioapic->base_gsi + pin == TRACK_IRQ )
+                printk("vioapic EOI\n");
 
             if ( is_iommu_enabled(d) )
             {
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..34f50a24ea 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1837,6 +1837,9 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        printk("do_IRQ_guest\n");
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..ddbda10593 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -997,6 +997,9 @@ static void hvm_gsi_eoi(struct domain *d, unsigned int gsi,
     if ( !pirq_dpci(pirq) )
         return;
 
+    if ( gsi == TRACK_IRQ )
+        printk("dpci EOI\n");
+
     hvm_gsi_deassert(d, gsi);
     hvm_pirq_eoi(pirq, ent);
 }
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index 7c825e9d9c..bb450e5e59 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -218,4 +218,6 @@ int allocate_and_map_gsi_pirq(struct domain *d, int index, int *pirq_p);
 int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
                               int type, struct msi_info *msi);
 
+#define TRACK_IRQ 34
+
 #endif /* _ASM_HW_IRQ_H */
-- 
2.29.2


--dhu473tb3wpezee4--


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:39:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30839.60989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfl5p-0003Bh-Nr; Thu, 19 Nov 2020 14:39:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30839.60989; Thu, 19 Nov 2020 14:39:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfl5p-0003Ba-Kx; Thu, 19 Nov 2020 14:39:25 +0000
Received: by outflank-mailman (input) for mailman id 30839;
 Thu, 19 Nov 2020 14:39:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfl5o-0003BV-Gf
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:39:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id daa4a8f7-7942-4167-9918-3197793ba58b;
 Thu, 19 Nov 2020 14:39:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CBF6AA4F;
 Thu, 19 Nov 2020 14:39:22 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id E1C061E130B; Thu, 19 Nov 2020 15:39:21 +0100 (CET)
X-Inumbo-ID: daa4a8f7-7942-4167-9918-3197793ba58b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 15:39:21 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 15/20] block: merge struct block_device and struct
 hd_struct
Message-ID: <20201119143921.GX1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-16-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-16-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:55, Christoph Hellwig wrote:
> Instead of having two structures that represent each block device with
> different lift time rules merged them into a single one.  This also
            ^^^ :) life     ^^^^ merge

> greatly simplifies the reference counting rules, as we can use the inode
> reference count as the main reference count for the new struct
> block_device, with the device model reference front ending it for device
> model interaction.  The percpu refcount in struct hd_struct is entirely
> gone given that struct block_device must be opened and thus valid for
> the duration of the I/O.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

This patch is kind of difficult to review due to the volume of mostly
mechanical changes mixed with not-completely-mechanical ones. Can we
perhaps split out the mechanical bits? E.g. the rq->part => rq->bdev
renaming is mechanical and a notable part of the patch. Similarly the
part->foo => part->bd_foo bits...

Also I'm kind of wondering: AFAIU the new lifetime rules, gendisk holds a
bdev reference and the bdev is created on gendisk allocation, so bdev
lifetime is strictly larger than gendisk lifetime. But what now keeps the
bdev->bd_disk reference safe in the presence of device hot-unplug? In most
cases we are still protected by the gendisk reference taken in
__blkdev_get(), but how about the disk->lookup_sem and disk->flags
dereferences before we actually grab the reference?

Also I find it rather non-obvious (although elegant ;) that bdev->bd_device
rules the lifetime of the gendisk. Can you perhaps explain this in the
changelog, and also add documentation about the new lifetime rules somewhere
in the source?

> diff --git a/block/blk.h b/block/blk.h
> index 09cee7024fb43e..90dd2047c6cd29 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -215,7 +215,15 @@ static inline void elevator_exit(struct request_queue *q,
>  	__elevator_exit(q, e);
>  }
>  
> -struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
> +static inline struct block_device *__bdget_disk(struct gendisk *disk,
> +		int partno)
> +{
> +	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
> +
> +	if (unlikely(partno < 0 || partno >= ptbl->len))
> +		return NULL;
> +	return rcu_dereference(ptbl->part[partno]);
> +}

I understand this is the lower-level counterpart of bdget_disk(), but it is
confusing to me that this has 'bdget' in the name and returns no bdev
reference. Can we call it like __bdev_from_disk() or something like that?

>  
>  ssize_t part_size_show(struct device *dev, struct device_attribute *attr,
>  		char *buf);


									Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:40:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:40:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30843.61001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfl7A-00046l-2Z; Thu, 19 Nov 2020 14:40:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30843.61001; Thu, 19 Nov 2020 14:40:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfl79-00046e-Vq; Thu, 19 Nov 2020 14:40:47 +0000
Received: by outflank-mailman (input) for mailman id 30843;
 Thu, 19 Nov 2020 14:40:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfl78-00046Y-BT
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:40:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f78f7004-1739-47cb-b4e9-304ecff67a63;
 Thu, 19 Nov 2020 14:40:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 414AEAD43;
 Thu, 19 Nov 2020 14:40:44 +0000 (UTC)
X-Inumbo-ID: f78f7004-1739-47cb-b4e9-304ecff67a63
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605796844; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5wCFkZkYyw8UWgsnC4P4SkLdgyG7MS61qvUUBMbGgyE=;
	b=RkKQfEFQ2LGARt7ucSGiURmdwplovRHSPZgINatfjadFmPllacru+6H8oJ6fDXbh/BvuiM
	RNIW08iKOIAymzJPWOlmrCgy2oPFJ5SFnFwjhle8UNIP0UshmsbFj+/24y2+MAWpQHuKQx
	tBab3TEwKue+ueou60ySKBVbDOOSsAk=
Subject: Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e0364274-f123-82bd-ec85-bea519a34049@suse.com>
 <d98aabe4-6c1b-0970-2e42-eb991e9075a2@suse.com>
 <e7b72c54-e8e4-428d-9264-484fc0061ba4@xen.org>
 <9adc7ec2-c014-d9ae-a8b5-5b942640386c@suse.com>
 <e115ce52-3c5c-6530-dd3a-bc7f268ef224@xen.org>
 <a00f3c98-04c1-4380-dc62-22d001edae1d@suse.com>
 <18d80e2c-fd54-e5be-89e0-5fedf16cc9cd@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7c13a5af-8158-9d00-bc0a-808dcb8fa02c@suse.com>
Date: Thu, 19 Nov 2020 15:40:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <18d80e2c-fd54-e5be-89e0-5fedf16cc9cd@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 14:19, Julien Grall wrote:
> On 19/11/2020 12:33, Jan Beulich wrote:
>> It's certainly against the "Xen malloc/free-style interface" comment
>> at the top of the header.
> 
> ... if you consider it a mistake, then why did you introduce
> xvmalloc() rather than modifying the implementation of xmalloc()?

(a) it didn't occur to me as an option and (b) even if it did, I wouldn't
have wanted to go audit all use sites.

>> It was my understanding so far that with the removal of the direct-
>> map this property would go away anyway.
> 
> Direct-map is going to disappear on x86, but there are no plans for that
> on Arm so far.
> 
> I am not saying I don't want to remove it, I just want to point out that
> the decision should not be made solely based on what x86 is doing (see more
> below).

I didn't mean to imply anything like this. I was merely trying to point
out that there may be a point in time, not too far in the future,
when any such assumption may turn out broken. You said there are a
number of such uses; I don't think I'm aware of any. Is what you're
aware of all in Arm code?

>>>> Where the inefficiencies you mention would imo matter is in
>>>> (future) decisions whether to use vmalloc() et al vs xvmalloc()
>>>> et al: If the size _may_ be no more than a page, the latter may
>>>> want preferring.
>>> I am not sure I understand this... why would we want to keep vmalloc()
>>> extern when xvmalloc() will be calling it for allocations over PAGE_SIZE?
>>
>> Why force callers knowing they'll allocate more than a page to go
>> through the extra layer? If that was the plan, then we wouldn't
>> need a set of new functions, but could instead tweak vmalloc() etc.
> 
> Two reasons:
>    1) There are too many ways to allocate memory in Xen. This adds
> extra confusion for users.

Maybe; I wouldn't have thought so.

>    2) The impact of the extra check is going to be insignificant compared
> to the cost of the function. Feel free to prove me otherwise with numbers.

My point wasn't primarily (if at all) about performance, but about
call layering in general.

>>>>>> --- a/xen/common/vmap.c
>>>>>> +++ b/xen/common/vmap.c
>>>>>> @@ -7,6 +7,7 @@
>>>>>>     #include <xen/spinlock.h>
>>>>>>     #include <xen/types.h>
>>>>>>     #include <xen/vmap.h>
>>>>>> +#include <xen/xvmalloc.h>
>>>>>>     #include <asm/page.h>
>>>>>>     
>>>>>>     static DEFINE_SPINLOCK(vm_lock);
>>>>>> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>>>>>>         return p;
>>>>>>     }
>>>>>>     
>>>>>> -void vfree(void *va)
>>>>>> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
>>>>>
>>>>> I don't think "unsigned int" is sufficient to cover big sizes. AFAICT,
>>>>> this is not a new problem in this code and seems to be a latent issue
>>>>> so far.
>>>>>
>>>>> However, I feel that it is wrong to introduce a new set of allocation
>>>>> helpers that contain a flaw fixed in xm*alloc() recently (see commit
>>>>> cf38b4926e2b "xmalloc: guard against integer overflow").
>>>>
>>>> For _xmalloc() we're talking about bytes (and the guarding you
>>>> refer to is actually orthogonal to the limiting done by the
>>>> page allocator, as follows from the description of that change).
>>>> Here we're talking about pages. I hope it will be decades until we
>>>> have to consider allocating 16TB all in one chunk (and we'd need
>>>> to have large enough vmap virtual address space set aside first).
>>>
>>> I think you misunderstood my point here. I am not suggesting that a
>>> normal user would ask to allocate 16TB, but that a caller may by mistake
>>> pass an unsanitized value to the xv*() functions.
>>>
>>> IIRC, the overflow checks in xm*() were added after we discovered that
>>> some callers were passing unsanitized values.
>>>
>>> I would expect xv*() functions to be more used in the future, so I think
>>> it would be unwise to not guard against overflow.
>>>
>>> I would be happy with just checking that nr always fits in a 32-bit value.
>>
>> The two callers of the function obtain the value from vm_size(),
>> which returns unsigned int.
> 
> I can't see a use of vm_size() in vmalloc_type(). I can only see an
> implicit downcast.

My "the function" was still referring to your initial comment, which
was about _vfree()'s parameter type.

>>>> Also note that
>>>> - the entire vmap.c right now uses unsigned int for page counts,
>>>>     so it would be outright inconsistent to use unsigned long here,
>>>
>>> I didn't suggest this would be the only place (note that "new problem").
>>> This was the best place I could find to mention an existing problem that
>>> is widened with the introduction of xv*() helpers.
>>
>> Oh, so you're talking of a separate and entirely unrelated patch
>> making sure the existing vmalloc() won't suffer such a problem.
>> Yes, vmalloc_type() could be fixed to this effect. But do you
>> realize we'd have a security issue much earlier if any guest
>> action could lead to such a gigantic vmalloc(), as the time to
>> both allocate and then map 4 billion pages is going to be way
>> longer than what we may tolerate without preempting?
> 
> Yes, I missed that point. But I am not sure what you are trying to infer...
> 
> If it wasn't clear enough, I didn't suggest to fix in this patch.

Okay, this hadn't become clear to me at all. It was my understanding
that you were requesting changes to be made in this patch.

> I was
> only pointing out that we hardened _xmalloc(), and this looks like going
> backwards.

I'm not seeing any step backwards. If there's an issue with vmalloc()
that we think needs addressing, let's address it. The patch here only
layers around existing xmalloc() / vmalloc(); I haven't managed to
spot any overflow or truncation issue in it, and I don't think I've
seen you point out any. Quite the contrary, I think I could relax
the checking in _xv{m,z}alloc_array() (but I guess I won't for now).
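
For reference, the kind of guard that commit cf38b4926e2b introduced can be
sketched as follows (hypothetical helper name, with plain malloc() standing
in for the allocator; this is not the actual Xen code):

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Illustrative sketch only: the style of multiplication-overflow guard
 * that commit cf38b4926e2b added on the xmalloc_array() path, refusing
 * any request where nr * size would wrap around SIZE_MAX.
 */
static void *alloc_array_checked(size_t size, size_t nr)
{
    if ( size && nr > SIZE_MAX / size )
        return NULL;               /* nr * size would overflow */

    return malloc(size * nr);
}
```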

>> And no, there's no overflowing calculation anywhere afaics which
>> would resemble the ones in xmalloc() you refer to.
> 
> "overflow" was probably the wrong word. It would be more a downcast when 
> computing the number of pages.
> 
> __vmap() is taking an "unsigned int", yet the number of pages is 
> computed using size_t.

Yes, that's what I assume you referred to further up as implicit
downcast. And that's also the one place I spotted when trying to
understand your earlier comments.
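
For illustration, that narrowing could be made explicit with a check
(made-up names and page-size constants, not the actual vmap.c code):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Sketch: compute the page count in size_t and make the narrowing to
 * the unsigned int that a __vmap()-style interface takes explicit
 * instead of implicit.
 */
static unsigned int pages_for(size_t size)
{
    size_t pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;

    assert(pages <= UINT_MAX);     /* would silently truncate otherwise */

    return (unsigned int)pages;
}
```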

>>> But, I am really surprised this is a concern to you when all the
>>> functions in this code will modify the pages tables. You dismissed this
>>> overhead in the same e-mail...
>>
>> Entirely different considerations: The goal of limiting variable
>> (and parameter) types to 32 bits where possible is a generic one.
> 
> At the cost of introducing multiple implicit downcasts that one day or
> another are going to bite us.
> 
>> Which is, if for nothing else, to avoid introducing bad precedent.
> 
> I am ok with 32-bit internal value, but please at least check the 
> downcasting is going to be harmless.

So here things get confusing again: Further up you've just said 
"I didn't suggest to fix in this patch". Here it sounds again as
if you were expecting this to be fixed here and now. So I guess
you may have meant to ask that I add a prereq patch addressing
this?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:43:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30850.61012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfl9b-0004M0-GD; Thu, 19 Nov 2020 14:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30850.61012; Thu, 19 Nov 2020 14:43:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfl9b-0004Lt-D7; Thu, 19 Nov 2020 14:43:19 +0000
Received: by outflank-mailman (input) for mailman id 30850;
 Thu, 19 Nov 2020 14:43:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kfl9a-0004Lo-Eq
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:43:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c0a35c6-0b22-48b9-a832-1baa363c2bc8;
 Thu, 19 Nov 2020 14:43:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59A0CAC41;
 Thu, 19 Nov 2020 14:43:16 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 260F01E130B; Thu, 19 Nov 2020 15:43:16 +0100 (CET)
X-Inumbo-ID: 6c0a35c6-0b22-48b9-a832-1baa363c2bc8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 15:43:16 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 16/20] block: stop using bdget_disk for partition 0
Message-ID: <20201119144316.GY1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-17-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-17-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:56, Christoph Hellwig wrote:
> We can just dereference the pointer in struct gendisk instead.  Also
> remove the now unused export.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/genhd.c                   |  1 -
>  drivers/block/nbd.c             |  4 +---
>  drivers/block/xen-blkfront.c    | 20 +++++---------------
>  drivers/block/zram/zram_drv.c   | 18 +++---------------
>  drivers/md/dm.c                 |  8 +-------
>  drivers/s390/block/dasd_ioctl.c |  5 ++---
>  6 files changed, 12 insertions(+), 44 deletions(-)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index a14e2408e3d4e8..ec41d0f18f5ce1 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -907,7 +907,6 @@ struct block_device *bdget_disk(struct gendisk *disk, int partno)
>  
>  	return bdev;
>  }
> -EXPORT_SYMBOL(bdget_disk);
>  
>  /*
>   * print a full list of all partitions - intended for places where the root
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index 014683968ce174..92f84ed0ba9eb6 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -1488,12 +1488,10 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
>  static void nbd_release(struct gendisk *disk, fmode_t mode)
>  {
>  	struct nbd_device *nbd = disk->private_data;
> -	struct block_device *bdev = bdget_disk(disk, 0);
>  
>  	if (test_bit(NBD_RT_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) &&
> -			bdev->bd_openers == 0)
> +			disk->part0->bd_openers == 0)
>  		nbd_disconnect_and_put(nbd);
> -	bdput(bdev);
>  
>  	nbd_config_put(nbd);
>  	nbd_put(nbd);
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 79521e33d30ed5..188e0b47534bcf 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -2153,7 +2153,7 @@ static void blkfront_closing(struct blkfront_info *info)
>  	}
>  
>  	if (info->gd)
> -		bdev = bdget_disk(info->gd, 0);
> +		bdev = bdgrab(info->gd->part0);
>  
>  	mutex_unlock(&info->mutex);
>  
> @@ -2518,7 +2518,7 @@ static int blkfront_remove(struct xenbus_device *xbdev)
>  
>  	disk = info->gd;
>  	if (disk)
> -		bdev = bdget_disk(disk, 0);
> +		bdev = bdgrab(disk->part0);
>  
>  	info->xbdev = NULL;
>  	mutex_unlock(&info->mutex);
> @@ -2595,19 +2595,11 @@ static int blkif_open(struct block_device *bdev, fmode_t mode)
>  static void blkif_release(struct gendisk *disk, fmode_t mode)
>  {
>  	struct blkfront_info *info = disk->private_data;
> -	struct block_device *bdev;
>  	struct xenbus_device *xbdev;
>  
>  	mutex_lock(&blkfront_mutex);
> -
> -	bdev = bdget_disk(disk, 0);
> -
> -	if (!bdev) {
> -		WARN(1, "Block device %s yanked out from us!\n", disk->disk_name);
> +	if (disk->part0->bd_openers)
>  		goto out_mutex;
> -	}
> -	if (bdev->bd_openers)
> -		goto out;
>  
>  	/*
>  	 * Check if we have been instructed to close. We will have
> @@ -2619,7 +2611,7 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
>  
>  	if (xbdev && xbdev->state == XenbusStateClosing) {
>  		/* pending switch to state closed */
> -		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
> +		dev_info(disk_to_dev(disk), "releasing disk\n");
>  		xlvbd_release_gendisk(info);
>  		xenbus_frontend_closed(info->xbdev);
>   	}
> @@ -2628,14 +2620,12 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
>  
>  	if (!xbdev) {
>  		/* sudden device removal */
> -		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
> +		dev_info(disk_to_dev(disk), "releasing disk\n");
>  		xlvbd_release_gendisk(info);
>  		disk->private_data = NULL;
>  		free_info(info);
>  	}
>  
> -out:
> -	bdput(bdev);
>  out_mutex:
>  	mutex_unlock(&blkfront_mutex);
>  }
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 01757f9578dcb8..56024905bd242c 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1748,7 +1748,7 @@ static ssize_t reset_store(struct device *dev,
>  		struct device_attribute *attr, const char *buf, size_t len)
>  {
>  	struct zram *zram = dev_to_zram(dev);
> -	struct block_device *bdev;
> +	struct block_device *bdev = zram->disk->part0;
>  	unsigned short do_reset;
>  	int ret = 0;
>  
> @@ -1758,17 +1758,12 @@ static ssize_t reset_store(struct device *dev,
>  	if (!do_reset)
>  		return -EINVAL;
>  
> -	bdev = bdget_disk(zram->disk, 0);
> -	if (!bdev)
> -		return -ENOMEM;
> -
>  	mutex_lock(&bdev->bd_mutex);
>  	if (bdev->bd_openers)
>  		ret = -EBUSY;
>  	else
>  		zram_reset_device(zram);
>  	mutex_unlock(&bdev->bd_mutex);
> -	bdput(bdev);
>  
>  	return ret ? ret : len;
>  }
> @@ -1933,15 +1928,8 @@ static int zram_add(void)
>  
>  static int zram_remove(struct zram *zram)
>  {
> -	struct block_device *bdev = bdget_disk(zram->disk, 0);
> -
> -	if (bdev) {
> -		if (bdev->bd_openers) {
> -			bdput(bdev);
> -			return -EBUSY;
> -		}
> -		bdput(bdev);
> -	}
> +	if (zram->disk->part0->bd_openers)
> +		return -EBUSY;
>  
>  	del_gendisk(zram->disk);
>  	zram_debugfs_unregister(zram);
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index c9438feefe55a3..ec48ccae50dd53 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -2375,17 +2375,12 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
>   */
>  static int lock_fs(struct mapped_device *md)
>  {
> -	struct block_device *bdev;
>  	int r;
>  
>  	WARN_ON(md->frozen_sb);
>  
> -	bdev = bdget_disk(md->disk, 0);
> -	if (!bdev)
> -		return -ENOMEM;
> -	md->frozen_sb = freeze_bdev(bdev);
> +	md->frozen_sb = freeze_bdev(md->disk->part0);
>  	if (IS_ERR(md->frozen_sb)) {
> -		bdput(bdev);
>  		r = PTR_ERR(md->frozen_sb);
>  		md->frozen_sb = NULL;
>  		return r;
> @@ -2402,7 +2397,6 @@ static void unlock_fs(struct mapped_device *md)
>  		return;
>  
>  	thaw_bdev(md->frozen_sb->s_bdev, md->frozen_sb);
> -	bdput(md->frozen_sb->s_bdev);
>  	md->frozen_sb = NULL;
>  	clear_bit(DMF_FROZEN, &md->flags);
>  }
> diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
> index 304eba1acf163c..9f642440894655 100644
> --- a/drivers/s390/block/dasd_ioctl.c
> +++ b/drivers/s390/block/dasd_ioctl.c
> @@ -220,9 +220,8 @@ dasd_format(struct dasd_block *block, struct format_data_t *fdata)
>  	 * enabling the device later.
>  	 */
>  	if (fdata->start_unit == 0) {
> -		struct block_device *bdev = bdget_disk(block->gdp, 0);
> -		bdev->bd_inode->i_blkbits = blksize_bits(fdata->blksize);
> -		bdput(bdev);
> +		block->gdp->part0->bd_inode->i_blkbits =
> +			blksize_bits(fdata->blksize);
>  	}
>  
>  	rc = base->discipline->format_device(base, fdata, 1);
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:52:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:52:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30858.61024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflIM-0005SZ-ED; Thu, 19 Nov 2020 14:52:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30858.61024; Thu, 19 Nov 2020 14:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflIM-0005SS-B8; Thu, 19 Nov 2020 14:52:22 +0000
Received: by outflank-mailman (input) for mailman id 30858;
 Thu, 19 Nov 2020 14:52:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kflIL-0005SN-2a
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:52:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflIJ-0005st-5I; Thu, 19 Nov 2020 14:52:19 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflII-00039y-RC; Thu, 19 Nov 2020 14:52:19 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=VezKhLTcskDqjf5vN3pACojitNeVXOtWXoGo9wlVx5c=; b=hsqdXZaKgU+knug2crcBbr40TO
	TBzlwjmquQBv/36tjN8I6IzQxRbv/RH15d1raerlWXAaiaiK9WhKHMCqi/KJJuJoEMEYiJcH+uvNS
	a+KfOPwT05QZOE+bpKD5kRAe0k/zofQG6wzoG9fUC3joVEc+iuVUw2CsGOvkKwvE1epE=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH] xen/iommu: vtd: Fix undefined behaviour in pci_vtd_quirk()
Date: Thu, 19 Nov 2020 14:52:16 +0000
Message-Id: <20201119145216.29280-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

When booting Xen with CONFIG_UBSAN=y on Sandy Bridge, UBSAN will throw
the following splat:

(XEN) ================================================================================
(XEN) UBSAN: Undefined behaviour in quirks.c:449:63
(XEN) left shift of 1 by 31 places cannot be represented in type 'int'
(XEN) ----[ Xen-4.11.4  x86_64  debug=y   Not tainted ]----

[...]

(XEN) Xen call trace:
(XEN)    [<ffff82d0802c0ccc>] ubsan.c#ubsan_epilogue+0xa/0xad
(XEN)    [<ffff82d0802c16c9>] __ubsan_handle_shift_out_of_bounds+0xb4/0x145
(XEN)    [<ffff82d0802eeecd>] pci_vtd_quirk+0x3d3/0x74f
(XEN)    [<ffff82d0802e508b>] iommu.c#domain_context_mapping+0x45b/0x46f
(XEN)    [<ffff82d08053f39e>] iommu.c#setup_hwdom_device+0x22/0x3a
(XEN)    [<ffff82d08053dfbc>] pci.c#setup_one_hwdom_device+0x8c/0x124
(XEN)    [<ffff82d08053e302>] pci.c#_setup_hwdom_pci_devices+0xbb/0x2f7
(XEN)    [<ffff82d0802da5b7>] pci.c#pci_segments_iterate+0x4c/0x8c
(XEN)    [<ffff82d08053e8bd>] setup_hwdom_pci_devices+0x25/0x2c
(XEN)    [<ffff82d08053e916>] iommu.c#intel_iommu_hwdom_init+0x52/0x2f3
(XEN)    [<ffff82d08053d6da>] iommu_hwdom_init+0x4e/0xa4
(XEN)    [<ffff82d080577f32>] dom0_construct_pv+0x23c8/0x2476
(XEN)    [<ffff82d08057cb50>] construct_dom0+0x6c/0xa3
(XEN)    [<ffff82d080564822>] __start_xen+0x4651/0x4b55
(XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55

Note that the splat is from 4.11.4 and not staging, although the
problem is still present there.

This can be solved by making the first operand unsigned int.
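
A minimal standalone illustration of the difference (assuming a 32-bit
int, as on the affected x86 builds):

```c
#include <stdint.h>

/*
 * 1 << 31 shifts into the sign bit of a signed int, which is undefined
 * behaviour in C and exactly what UBSAN reports.  With an unsigned
 * first operand the computation stays in unsigned arithmetic, where
 * bit 31 is representable and the result is well defined.
 */
static uint32_t mask_bit31(uint32_t val)
{
    return val | (1U << 31);
}
```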

Signed-off-by: Julien Grall <jgrall@amazon.com>

CR: https://code.amazon.com/reviews/CR-38873112
---
 xen/drivers/passthrough/vtd/quirks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index a8330f17bc0c..8a81d9c9308b 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -435,7 +435,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
     case 0x3728: /* Xeon C5500/C3500 (JasperForest) */
     case 0x3c28: /* Sandybridge */
         val = pci_conf_read32(pdev->sbdf, 0x1AC);
-        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1 << 31));
+        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1U << 31));
         printk(XENLOG_INFO "Masked VT-d error signaling on %pp\n", &pdev->sbdf);
         break;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30862.61037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflJ4-0005YZ-PN; Thu, 19 Nov 2020 14:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30862.61037; Thu, 19 Nov 2020 14:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflJ4-0005YS-LY; Thu, 19 Nov 2020 14:53:06 +0000
Received: by outflank-mailman (input) for mailman id 30862;
 Thu, 19 Nov 2020 14:53:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kflJ3-0005YN-NE
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:53:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aca3f7e8-ecfe-4a7d-b782-f1e14293298d;
 Thu, 19 Nov 2020 14:53:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A44DAC2F;
 Thu, 19 Nov 2020 14:53:04 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id BD7681E130B; Thu, 19 Nov 2020 15:53:03 +0100 (CET)
X-Inumbo-ID: aca3f7e8-ecfe-4a7d-b782-f1e14293298d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 15:53:03 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 17/20] filemap: consistently use ->f_mapping over
 ->i_mapping
Message-ID: <20201119145303.GZ1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-18-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-18-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:57, Christoph Hellwig wrote:
> Use file->f_mapping in all remaining places that have a struct file
> available to properly handle the case where inode->i_mapping !=
> file_inode(file)->i_mapping.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/filemap.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index d5e7c2029d16b4..3e3531a757f8db 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2887,13 +2887,13 @@ EXPORT_SYMBOL(filemap_map_pages);
>  vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
>  {
>  	struct page *page = vmf->page;
> -	struct inode *inode = file_inode(vmf->vma->vm_file);
> +	struct inode *inode = vmf->vma->vm_file->f_mapping->host;
>  	vm_fault_t ret = VM_FAULT_LOCKED;
>  
>  	sb_start_pagefault(inode->i_sb);
>  	file_update_time(vmf->vma->vm_file);
>  	lock_page(page);
> -	if (page->mapping != inode->i_mapping) {
> +	if (page->mapping != vmf->vma->vm_file->f_mapping) {
>  		unlock_page(page);
>  		ret = VM_FAULT_NOPAGE;
>  		goto out;
> @@ -3149,10 +3149,9 @@ void dio_warn_stale_pagecache(struct file *filp)
>  {
>  	static DEFINE_RATELIMIT_STATE(_rs, 86400 * HZ, DEFAULT_RATELIMIT_BURST);
>  	char pathname[128];
> -	struct inode *inode = file_inode(filp);
>  	char *path;
>  
> -	errseq_set(&inode->i_mapping->wb_err, -EIO);
> +	errseq_set(&filp->f_mapping->wb_err, -EIO);
>  	if (__ratelimit(&_rs)) {
>  		path = file_path(filp, pathname, sizeof(pathname));
>  		if (IS_ERR(path))
> @@ -3179,7 +3178,7 @@ generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
>  
>  	if (iocb->ki_flags & IOCB_NOWAIT) {
>  		/* If there are pages to writeback, return */
> -		if (filemap_range_has_page(inode->i_mapping, pos,
> +		if (filemap_range_has_page(file->f_mapping, pos,
>  					   pos + write_len - 1))
>  			return -EAGAIN;
>  	} else {
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 14:54:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 14:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30869.61049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflKd-0005k9-92; Thu, 19 Nov 2020 14:54:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30869.61049; Thu, 19 Nov 2020 14:54:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflKd-0005k2-54; Thu, 19 Nov 2020 14:54:43 +0000
Received: by outflank-mailman (input) for mailman id 30869;
 Thu, 19 Nov 2020 14:54:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kflKb-0005jw-VR
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:54:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflKb-0005wS-HP; Thu, 19 Nov 2020 14:54:41 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflKb-0003Ic-7K; Thu, 19 Nov 2020 14:54:41 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=IjvAESUItdtzLYryfFbk/PumKSZoZap17L392zxaSg0=; b=aUUKccIm6WEHV+NVl8UJtVbd/h
	wS19pwTVyB9NKwBiAGs9bUt4ETT++QNdv5qc6gz5S/2fRmRg5hTzKtabD3TCI39swY3G87Mnfo3Ig
	M+hjKT7o8a/wk/05rRFIB41YPQqYh7H9rpsFlOzOUWBLFEavZzMCWX2DnDIW0IYdjcLc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/irq: Propagate the error from init_one_irq_desc() in init_irq_data()
Date: Thu, 19 Nov 2020 14:54:34 +0000
Message-Id: <20201119145434.28065-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

init_one_irq_desc() can return an error if it is unable to allocate
memory. While this is unlikely to happen during boot (called from
init_irq_data()), it is better to harden the code by propagating the
return value.

Spotted by Coverity.

CID: 106529

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/irq.c | 7 ++++++-
 xen/arch/x86/irq.c | 7 ++++++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3877657a5277..279d221a2b85 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -88,7 +88,12 @@ static int __init init_irq_data(void)
     for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
     {
         struct irq_desc *desc = irq_to_desc(irq);
-        init_one_irq_desc(desc);
+        int rc;
+
+        rc = init_one_irq_desc(desc);
+        if ( rc )
+            return rc;
+
         desc->irq = irq;
         desc->action  = NULL;
     }
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 45966947919e..3ebd684415ac 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -428,9 +428,14 @@ int __init init_irq_data(void)
 
     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
     {
+        int rc;
+
         desc = irq_to_desc(irq);
         desc->irq = irq;
-        init_one_irq_desc(desc);
+
+        rc = init_one_irq_desc(desc);
+        if ( rc )
+            return rc;
     }
     for ( ; irq < nr_irqs; irq++ )
         irq_to_desc(irq)->irq = irq;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:00:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30885.61065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflPj-00064H-W1; Thu, 19 Nov 2020 14:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30885.61065; Thu, 19 Nov 2020 14:59:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflPj-00064A-RV; Thu, 19 Nov 2020 14:59:59 +0000
Received: by outflank-mailman (input) for mailman id 30885;
 Thu, 19 Nov 2020 14:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kVKL=EZ=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kflPi-000645-KM
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 14:59:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8da910d-2e62-4d7a-b4a4-74a995228049;
 Thu, 19 Nov 2020 14:59:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 681B7ABD6;
 Thu, 19 Nov 2020 14:59:56 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 26B541E130B; Thu, 19 Nov 2020 15:59:56 +0100 (CET)
X-Inumbo-ID: e8da910d-2e62-4d7a-b4a4-74a995228049
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 19 Nov 2020 15:59:56 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH 18/20] fs: remove get_super_thawed and
 get_super_exclusive_thawed
Message-ID: <20201119145956.GA1981@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-19-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-19-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 18-11-20 09:47:58, Christoph Hellwig wrote:
> Just open code the wait in the only caller of both functions.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

As far as I remember, Al (added to CC) generally objected to exporting
bits from fs/super.c (like put_super(), __get_super()) in the past. FWIW I
also find a dedicated function in fs/super.c somewhat cleaner than
open-coding this in the quota code, but I can live with either...

								Honza

> ---
>  fs/internal.h      |  2 ++
>  fs/quota/quota.c   | 31 +++++++++++++++++++++-------
>  fs/super.c         | 51 ++--------------------------------------------
>  include/linux/fs.h |  4 +---
>  4 files changed, 29 insertions(+), 59 deletions(-)
> 
> diff --git a/fs/internal.h b/fs/internal.h
> index a7cd0f64faa4ab..47be21dfeebef5 100644
> --- a/fs/internal.h
> +++ b/fs/internal.h
> @@ -114,7 +114,9 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
>   */
>  extern int reconfigure_super(struct fs_context *);
>  extern bool trylock_super(struct super_block *sb);
> +struct super_block *__get_super(struct block_device *bdev, bool excl);
>  extern struct super_block *user_get_super(dev_t);
> +void put_super(struct super_block *sb);
>  extern bool mount_capable(struct fs_context *);
>  
>  /*
> diff --git a/fs/quota/quota.c b/fs/quota/quota.c
> index 9af95c7a0bbe3c..f3d32b0d9008f2 100644
> --- a/fs/quota/quota.c
> +++ b/fs/quota/quota.c
> @@ -20,6 +20,7 @@
>  #include <linux/writeback.h>
>  #include <linux/nospec.h>
>  #include "compat.h"
> +#include "../internal.h"
>  
>  static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
>  				     qid_t id)
> @@ -868,6 +869,7 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
>  	struct block_device *bdev;
>  	struct super_block *sb;
>  	struct filename *tmp = getname(special);
> +	bool excl = false, thawed = false;
>  
>  	if (IS_ERR(tmp))
>  		return ERR_CAST(tmp);
> @@ -875,17 +877,32 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
>  	putname(tmp);
>  	if (IS_ERR(bdev))
>  		return ERR_CAST(bdev);
> -	if (quotactl_cmd_onoff(cmd))
> -		sb = get_super_exclusive_thawed(bdev);
> -	else if (quotactl_cmd_write(cmd))
> -		sb = get_super_thawed(bdev);
> -	else
> -		sb = get_super(bdev);
> +
> +	if (quotactl_cmd_onoff(cmd)) {
> +		excl = true;
> +		thawed = true;
> +	} else if (quotactl_cmd_write(cmd)) {
> +		thawed = true;
> +	}
> +
> +retry:
> +	sb = __get_super(bdev, excl);
> +	if (thawed && sb && sb->s_writers.frozen != SB_UNFROZEN) {
> +		if (excl)
> +			up_write(&sb->s_umount);
> +		else
> +			up_read(&sb->s_umount);
> +		wait_event(sb->s_writers.wait_unfrozen,
> +			   sb->s_writers.frozen == SB_UNFROZEN);
> +		put_super(sb);
> +		goto retry;
> +	}
> +
>  	bdput(bdev);
>  	if (!sb)
>  		return ERR_PTR(-ENODEV);
> -
>  	return sb;
> +
>  #else
>  	return ERR_PTR(-ENODEV);
>  #endif
> diff --git a/fs/super.c b/fs/super.c
> index 98bb0629ee108e..343e5c1e538d2a 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -307,7 +307,7 @@ static void __put_super(struct super_block *s)
>   *	Drops a temporary reference, frees superblock if there's no
>   *	references left.
>   */
> -static void put_super(struct super_block *sb)
> +void put_super(struct super_block *sb)
>  {
>  	spin_lock(&sb_lock);
>  	__put_super(sb);
> @@ -740,7 +740,7 @@ void iterate_supers_type(struct file_system_type *type,
>  
>  EXPORT_SYMBOL(iterate_supers_type);
>  
> -static struct super_block *__get_super(struct block_device *bdev, bool excl)
> +struct super_block *__get_super(struct block_device *bdev, bool excl)
>  {
>  	struct super_block *sb;
>  
> @@ -789,53 +789,6 @@ struct super_block *get_super(struct block_device *bdev)
>  }
>  EXPORT_SYMBOL(get_super);
>  
> -static struct super_block *__get_super_thawed(struct block_device *bdev,
> -					      bool excl)
> -{
> -	while (1) {
> -		struct super_block *s = __get_super(bdev, excl);
> -		if (!s || s->s_writers.frozen == SB_UNFROZEN)
> -			return s;
> -		if (!excl)
> -			up_read(&s->s_umount);
> -		else
> -			up_write(&s->s_umount);
> -		wait_event(s->s_writers.wait_unfrozen,
> -			   s->s_writers.frozen == SB_UNFROZEN);
> -		put_super(s);
> -	}
> -}
> -
> -/**
> - *	get_super_thawed - get thawed superblock of a device
> - *	@bdev: device to get the superblock for
> - *
> - *	Scans the superblock list and finds the superblock of the file system
> - *	mounted on the device. The superblock is returned once it is thawed
> - *	(or immediately if it was not frozen). %NULL is returned if no match
> - *	is found.
> - */
> -struct super_block *get_super_thawed(struct block_device *bdev)
> -{
> -	return __get_super_thawed(bdev, false);
> -}
> -EXPORT_SYMBOL(get_super_thawed);
> -
> -/**
> - *	get_super_exclusive_thawed - get thawed superblock of a device
> - *	@bdev: device to get the superblock for
> - *
> - *	Scans the superblock list and finds the superblock of the file system
> - *	mounted on the device. The superblock is returned once it is thawed
> - *	(or immediately if it was not frozen) and s_umount semaphore is held
> - *	in exclusive mode. %NULL is returned if no match is found.
> - */
> -struct super_block *get_super_exclusive_thawed(struct block_device *bdev)
> -{
> -	return __get_super_thawed(bdev, true);
> -}
> -EXPORT_SYMBOL(get_super_exclusive_thawed);
> -
>  /**
>   * get_active_super - get an active reference to the superblock of a device
>   * @bdev: device to get the superblock for
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 8667d0cdc71e76..a61df0dd4f1989 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1409,7 +1409,7 @@ enum {
>  
>  struct sb_writers {
>  	int				frozen;		/* Is sb frozen? */
> -	wait_queue_head_t		wait_unfrozen;	/* for get_super_thawed() */
> +	wait_queue_head_t		wait_unfrozen;	/* wait for thaw */
>  	struct percpu_rw_semaphore	rw_sem[SB_FREEZE_LEVELS];
>  };
>  
> @@ -3132,8 +3132,6 @@ extern struct file_system_type *get_filesystem(struct file_system_type *fs);
>  extern void put_filesystem(struct file_system_type *fs);
>  extern struct file_system_type *get_fs_type(const char *name);
>  extern struct super_block *get_super(struct block_device *);
> -extern struct super_block *get_super_thawed(struct block_device *);
> -extern struct super_block *get_super_exclusive_thawed(struct block_device *bdev);
>  extern struct super_block *get_active_super(struct block_device *bdev);
>  extern void drop_super(struct super_block *sb);
>  extern void drop_super_exclusive(struct super_block *sb);
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:02:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30889.61077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflS4-00071S-By; Thu, 19 Nov 2020 15:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30889.61077; Thu, 19 Nov 2020 15:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflS4-00071L-8O; Thu, 19 Nov 2020 15:02:24 +0000
Received: by outflank-mailman (input) for mailman id 30889;
 Thu, 19 Nov 2020 15:02:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kflS3-00071G-HV
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:02:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9198f529-1b12-41de-84cd-0ceaa10b0ffc;
 Thu, 19 Nov 2020 15:02:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED54EABD6;
 Thu, 19 Nov 2020 15:02:21 +0000 (UTC)
X-Inumbo-ID: 9198f529-1b12-41de-84cd-0ceaa10b0ffc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605798142; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kcL9z9jC7iEJLcAtPdDO0JaXD92KDhIUbO9txRAzG28=;
	b=VP3vVEi6t/Ncea3SRMsFMig3iscpmhvsTvXoukVL5UhVgEiFCnqezJBK0r5Uat6avniSaX
	u6uuwo+qu1EcOQosWNLXLpn50VDI6Uqsfz4zaoFu5Y0n1NKVFrbmmlF3VrmPzF8QZ90Ej6
	jSJm59a9sDZZkJQM+4CssLaP0vZZNWY=
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior in pci_vtd_quirk()
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
References: <20201119145216.29280-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fe1a3c21-47a0-18bc-23ff-1f4730e84d69@suse.com>
Date: Thu, 19 Nov 2020 16:02:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201119145216.29280-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 15:52, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> When booting Xen with CONFIG_UBSAN=y on Sandy Bridge, UBSAN will throw
> the following splat:
> 
> (XEN) ================================================================================
> (XEN) UBSAN: Undefined behaviour in quirks.c:449:63
> (XEN) left shift of 1 by 31 places cannot be represented in type 'int'
> (XEN) ----[ Xen-4.11.4  x86_64  debug=y   Not tainted ]----
> 
> [...]
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0802c0ccc>] ubsan.c#ubsan_epilogue+0xa/0xad
> (XEN)    [<ffff82d0802c16c9>] __ubsan_handle_shift_out_of_bounds+0xb4/0x145
> (XEN)    [<ffff82d0802eeecd>] pci_vtd_quirk+0x3d3/0x74f
> (XEN)    [<ffff82d0802e508b>] iommu.c#domain_context_mapping+0x45b/0x46f
> (XEN)    [<ffff82d08053f39e>] iommu.c#setup_hwdom_device+0x22/0x3a
> (XEN)    [<ffff82d08053dfbc>] pci.c#setup_one_hwdom_device+0x8c/0x124
> (XEN)    [<ffff82d08053e302>] pci.c#_setup_hwdom_pci_devices+0xbb/0x2f7
> (XEN)    [<ffff82d0802da5b7>] pci.c#pci_segments_iterate+0x4c/0x8c
> (XEN)    [<ffff82d08053e8bd>] setup_hwdom_pci_devices+0x25/0x2c
> (XEN)    [<ffff82d08053e916>] iommu.c#intel_iommu_hwdom_init+0x52/0x2f3
> (XEN)    [<ffff82d08053d6da>] iommu_hwdom_init+0x4e/0xa4
> (XEN)    [<ffff82d080577f32>] dom0_construct_pv+0x23c8/0x2476
> (XEN)    [<ffff82d08057cb50>] construct_dom0+0x6c/0xa3
> (XEN)    [<ffff82d080564822>] __start_xen+0x4651/0x4b55
> (XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55
> 
> Note that the splat is from 4.11.4 rather than staging, although the
> problem is still present there.
> 
> This can be solved by making the first operand an unsigned int.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/drivers/passthrough/vtd/quirks.c
> +++ b/xen/drivers/passthrough/vtd/quirks.c
> @@ -435,7 +435,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
>      case 0x3728: /* Xeon C5500/C3500 (JasperForest) */
>      case 0x3c28: /* Sandybridge */
>          val = pci_conf_read32(pdev->sbdf, 0x1AC);
> -        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1 << 31));
> +        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1U << 31));

I can see a couple of similar uses in arm/ipmmu-vmsa.c and
arm/smmu.c. These are all #define-s though, so they would only be an
issue if those #define-s actually get used anywhere.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:05:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:05:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30897.61088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflUn-0007D9-QN; Thu, 19 Nov 2020 15:05:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30897.61088; Thu, 19 Nov 2020 15:05:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflUn-0007D2-NR; Thu, 19 Nov 2020 15:05:13 +0000
Received: by outflank-mailman (input) for mailman id 30897;
 Thu, 19 Nov 2020 15:05:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kflUm-0007Cu-13; Thu, 19 Nov 2020 15:05:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kflUl-0006CY-MX; Thu, 19 Nov 2020 15:05:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kflUl-00075g-Eg; Thu, 19 Nov 2020 15:05:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kflUl-0003zY-Dt; Thu, 19 Nov 2020 15:05:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5VxUJbFhVMsd/gBc+iePPUaB9fsGpg4ai2LOOvqFoSg=; b=frjOt0FRTq7Gs01rqvZWyrmGr+
	fOmP6UaEYiPFSMrbu8eidEFD2cxAQG0U8vDup4Twy96ijvNoxxknXAYVnUUi5ljkLMGZaZgojYAWQ
	3whzzm5aOwn7JY49gWE9ZlxMaIPnUf6cyBEOIhYqJI5NSD3focmPKEKCMBCfs8DVn6jA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156867-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156867: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=415f904254b7343a90db895134980cbb7f7f0479
X-Osstest-Versions-That:
    xen=5200fba9ce534fc55ec40ab622b6058600090415
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 15:05:11 +0000

flight 156867 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156867/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156857

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156857
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156857
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156857
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156857
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156857
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156857
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156857
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156857
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156857
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156857
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156857
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 xen                  415f904254b7343a90db895134980cbb7f7f0479
baseline version:
 xen                  5200fba9ce534fc55ec40ab622b6058600090415

Last test of basis   156857  2020-11-18 15:06:30 Z    0 days
Testing same since   156867  2020-11-19 01:08:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Doug Goldstein <cardoe@cardoe.com>
  Edwin Török <edvin.torok@citrix.com>
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5200fba9ce..415f904254  415f904254b7343a90db895134980cbb7f7f0479 -> master


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:05:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:05:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30899.61104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflUs-0007Fg-9t; Thu, 19 Nov 2020 15:05:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30899.61104; Thu, 19 Nov 2020 15:05:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflUs-0007FZ-60; Thu, 19 Nov 2020 15:05:18 +0000
Received: by outflank-mailman (input) for mailman id 30899;
 Thu, 19 Nov 2020 15:05:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kflUq-0007Eu-2L
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:05:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflUo-0006Cc-Fz; Thu, 19 Nov 2020 15:05:14 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflUo-0004J4-8V; Thu, 19 Nov 2020 15:05:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5tATp8UjGBj/SIwWhwd2k8YICjRxO5clTx0TY39pPeU=; b=6Q+aQEJT3l0TWNuLiiF3FPxIX9
	tXgAjsfXyV9yHHTq1Qd4R3nc3mmmmXjOmyPC5bcIKnB30lJ1GjaLbI30we6qnUK6jHNT4Ryea5/Cq
	bEaLHxcMqaj/TDpOFdkaC7ASgVCwABcbKTpDDU4+PctSeXHEqiEe8wIYxjp1imPxi8gI=;
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior pci_vtd_quirks()
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
References: <20201119145216.29280-1-julien@xen.org>
 <fe1a3c21-47a0-18bc-23ff-1f4730e84d69@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c531e8c2-74fb-6bdc-4b93-791fbac4500b@xen.org>
Date: Thu, 19 Nov 2020 15:05:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <fe1a3c21-47a0-18bc-23ff-1f4730e84d69@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 19/11/2020 15:02, Jan Beulich wrote:
> On 19.11.2020 15:52, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> When booting Xen with CONFIG_UBSAN=y on Sandy Bridge, UBSAN will throw
>> the following splat:
>>
>> (XEN) ================================================================================
>> (XEN) UBSAN: Undefined behaviour in quirks.c:449:63
>> (XEN) left shift of 1 by 31 places cannot be represented in type 'int'
>> (XEN) ----[ Xen-4.11.4  x86_64  debug=y   Not tainted ]----
>>
>> [...]
>>
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d0802c0ccc>] ubsan.c#ubsan_epilogue+0xa/0xad
>> (XEN)    [<ffff82d0802c16c9>] __ubsan_handle_shift_out_of_bounds+0xb4/0x145
>> (XEN)    [<ffff82d0802eeecd>] pci_vtd_quirk+0x3d3/0x74f
>> (XEN)    [<ffff82d0802e508b>] iommu.c#domain_context_mapping+0x45b/0x46f
>> (XEN)    [<ffff82d08053f39e>] iommu.c#setup_hwdom_device+0x22/0x3a
>> (XEN)    [<ffff82d08053dfbc>] pci.c#setup_one_hwdom_device+0x8c/0x124
>> (XEN)    [<ffff82d08053e302>] pci.c#_setup_hwdom_pci_devices+0xbb/0x2f7
>> (XEN)    [<ffff82d0802da5b7>] pci.c#pci_segments_iterate+0x4c/0x8c
>> (XEN)    [<ffff82d08053e8bd>] setup_hwdom_pci_devices+0x25/0x2c
>> (XEN)    [<ffff82d08053e916>] iommu.c#intel_iommu_hwdom_init+0x52/0x2f3
>> (XEN)    [<ffff82d08053d6da>] iommu_hwdom_init+0x4e/0xa4
>> (XEN)    [<ffff82d080577f32>] dom0_construct_pv+0x23c8/0x2476
>> (XEN)    [<ffff82d08057cb50>] construct_dom0+0x6c/0xa3
>> (XEN)    [<ffff82d080564822>] __start_xen+0x4651/0x4b55
>> (XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55
>>
>> Note that the splat is from 4.11.4 and not staging, although the
>> problem is still present.
>>
>> This can be solved by making the first operand unsigned int.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
>> --- a/xen/drivers/passthrough/vtd/quirks.c
>> +++ b/xen/drivers/passthrough/vtd/quirks.c
>> @@ -435,7 +435,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
>>       case 0x3728: /* Xeon C5500/C3500 (JasperForest) */
>>       case 0x3c28: /* Sandybridge */
>>           val = pci_conf_read32(pdev->sbdf, 0x1AC);
>> -        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1 << 31));
>> +        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1U << 31));
> 
> I can see a couple of similar uses in arm/ipmmu-vmsa.c and
> arm/smmu.c. These are all #define-s though, so would be an issue
> only if these #define-s actually get used anywhere.

There are a few on Arm. I have a couple of patches to fix them, 
although I don't think I have discovered the ones in arm/ipmmu-vmsa.c 
and arm/smmu.c yet.

I will have a look.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:10:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:10:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30912.61116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflZx-0008QJ-V2; Thu, 19 Nov 2020 15:10:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30912.61116; Thu, 19 Nov 2020 15:10:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflZx-0008QC-Rh; Thu, 19 Nov 2020 15:10:33 +0000
Received: by outflank-mailman (input) for mailman id 30912;
 Thu, 19 Nov 2020 15:10:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZym=EZ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kflZw-0008Q7-Ac
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:10:32 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1e::615])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a941b7d-7312-470b-b0d7-fb74401d5f50;
 Thu, 19 Nov 2020 15:10:30 +0000 (UTC)
Received: from AM5PR0101CA0025.eurprd01.prod.exchangelabs.com
 (2603:10a6:206:16::38) by DB6PR0802MB2246.eurprd08.prod.outlook.com
 (2603:10a6:4:86::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Thu, 19 Nov
 2020 15:10:26 +0000
Received: from AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:16:cafe::7c) by AM5PR0101CA0025.outlook.office365.com
 (2603:10a6:206:16::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 15:10:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT033.mail.protection.outlook.com (10.152.16.99) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 15:10:26 +0000
Received: ("Tessian outbound 13ed5f5344c0:v71");
 Thu, 19 Nov 2020 15:10:26 +0000
Received: from bb547bcae3cd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 04C75796-C140-4D5C-8C0B-68643381074E.1; 
 Thu, 19 Nov 2020 15:10:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bb547bcae3cd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 15:10:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6252.eurprd08.prod.outlook.com (2603:10a6:10:20b::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Thu, 19 Nov
 2020 15:10:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 15:10:07 +0000
X-Inumbo-ID: 0a941b7d-7312-470b-b0d7-fb74401d5f50
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4J8a5oaJM5LVFh5vHYCpAF5nJfxQRghTjNqBKLyX6TA=;
 b=64LgAEPO8OjyAsrs3lPwME9perpROa4sGZa6K1ZoSVJ81unTA9Ckh2U4hMUGBBwAKlEx7mrTRA4SBVuOCJEVQF++fzHr8Mivu6RfFXSqxtF4iXiq1sc2hfDZIoNgTRIaYJAS0B3MlPlHS3hkP5XpmYOwsAOdwCzAZyV3ynp2vTg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d913629fb79f12ad
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YzQM23IjjRCSK7eRz69vQMiFBIGN/8mO9aBb9XGl6a4rdk1bbkpovEzoXxiM4zAxQlcUmhbR7BuU0jR7qtoiGA5BLRC2IAuM1X2ehBUdZ7RIcqR26Kr0HoQNVFiYCFLwYNW4TJOaMuX4HilhmeR/vsLHu87jkGhrYTb8Pxvbp7r7LPwr5uJQEyGBs9ZwKD1UDivv/5LCjGSl5Ux7pnf2PDDM80CRLhIsAxVwHP7SLj6MNiuHbpJDGjJezM89GOmqYlEdXefT7QDn/bOIeYZJF4UyhezYhknq2oCTFnLLfVceyqSFieurINAj/mZmmEsP9+oQry78zKnCxrq2i11kdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4J8a5oaJM5LVFh5vHYCpAF5nJfxQRghTjNqBKLyX6TA=;
 b=SIB1Uq+ayKSyC8ZkjDZ2CbgRHJvk4cGazwPkjglXUZ0iVXWYqSwjLUZ3FGLZbyHsZsfkFSgnxp61/r7veUGdZ3f155CmSGNbgm5UXe9e8NX2BB9cynmjBA6iqfuqt6T7L2vLTb4ln5sxVHHk2ZlkmD77xJas+B1R06CeP368u65S/1lhRx2BY/TA28VQwgHzExyzSHdmIgpiUEJEWP72ERFBSxgDWytPaNrcfgBOc/27BFuEq7l+pF+14v0Lc2yYBOB7QVIaeSlop4xsdCIRpbyd5D2anoVM7WjauBBb9X3U7KzA20+pxfzSieb4jXbvgf0+ijlH84R4SZBOpMji9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/irq: Propagate the error from init_one_desc_irq() in
 init_irq_data()
Thread-Topic: [PATCH] xen/irq: Propagate the error from init_one_desc_irq() in
 init_irq_data()
Thread-Index: AQHWvoP9L+A4RJEKVEq9qJlQ3XFKdanPjxgA
Date: Thu, 19 Nov 2020 15:10:07 +0000
Message-ID: <ACB17DDA-2E25-4135-834B-B6EA3C146FE6@arm.com>
References: <20201119145434.28065-1-julien@xen.org>
In-Reply-To: <20201119145434.28065-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c902682a-3ada-45d1-63b8-08d88c9d3eca
x-ms-traffictypediagnostic: DBBPR08MB6252:|DB6PR0802MB2246:
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB2246CACC05A7C084C8D9F3309DE00@DB6PR0802MB2246.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Zad9KVk6aSsuqmY7M3366I0dE9AXa7RT5Kq2AEDMQtjbqANNFUopHTGadKmAUo9TRdi7SLywbaJCdteGit4l0OtNQQRrXinbPsk48ys2SWF/FTFVciLlPlVaNK0B9MbVdcgsHN8DlqjY/GMMFYe1VvPGeG4UGfGgBmuY1zkphTAYPAMwQMeuar13vPPotf/6/nOm+ct94BzFL6abmtuMJztB8Fh1ZPZ9kB37jR5ZYC/ZOommOXKHd4ZmXm+q7nJ16h/vH8vMo7CiE0oI7nGfSpMfNa35yWYslS/ruyVTP65+MXF/SwxDS6pTwMNIY2gFwPGemqx1U75ikDfU+SlsxQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(39850400004)(346002)(376002)(136003)(66946007)(2906002)(64756008)(53546011)(2616005)(66476007)(66556008)(83380400001)(6506007)(91956017)(4326008)(76116006)(36756003)(66446008)(54906003)(316002)(478600001)(8676002)(26005)(8936002)(86362001)(6486002)(186003)(5660300002)(33656002)(6916009)(6512007)(71200400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 voKJuLw2AWVkL1Lg60kNnzJpYTUZy4/S0x1QRhVAEXB55DtXUwzdKUAzHCBfQHS11rosiDOtoakyW6ud1uJIgLCZ9bOXcGAU6iKvhOd6+JPfwhlb2xt8Ret0P8WBZVg/F205ohXHpDnT6e9q3c/pJN6CEESRZUWOPzz7pB+lZw8i4jcf1AcHbWGvZ2EUTEuDuUbgkj9OpqRSkokYVQjLa5hyOm52LzqXazD2IuiRERGYC/vRAoWXcnBLxy4t2by8Mjq4JyLjQEk+LiR3YJR9OO77Vd774E4OxC9T7azodoyA7LtWg4qvAd2cOQRVfST+jG31XXM++IeJgAZb68o168skIUA7ILQgxC9rakk9WtRveEvGidPF3SzwmsnFM+5l8HcOo35bLPViB7otI0gt97zR6qoZWXjUa9Ac/ELTe4FwJJdNwzAvLD74nkvovYgvsxgyGm2MaDFLkVAds51zdXMXyHdvWnriHvhwpxeTgNuP2XqsWyw3Re8S4vA1DSgdZeBduUP8BQTjVhTuc4Zx3FpUH7l5m/C2EuzdKfSzFMTrCzTCdk0n60RA0sDtkTxT40e1C+hYQrUywPcSF8r5NS47buqpIoD8ukAi5Hl3NcUEEyarcLQfxmMo/ncxH56i0LUbmoz5qRmUNwykcmB4Qg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <3CFF19CC83252646AE0686B89F14AEC1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6252
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e6899d2c-3a81-4c88-d8bc-08d88c9d334d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cKv17o7M1RRpgy7prnL3US0ZEhhwvtHe3jN8KJ7WD/CG48RGrjJ0zQdlSnyouwISWRgqrII91AGd1t0J422I5TbNYZAA4unrXiZXL2VijRReR1e/vvN0K6hfLeyI/speaebNzkhE3xJkQof7MLi2tKPmkDIIO/M+gum+pSXYA4RR3//OQ6MXGFUV7lFQKNxguYeUjk3l8Iyt807tqDlbj3CRAfPq0ESl0jw9Wa+OT2Pv3blXO1kxUEfZuzZNKhYVG1LRwQmXuLolsRbuBGur2JBF187BYL43oP0gY7wMlyieltATMVtwy882AYBgQeQ3DBZb2I0njsGkfS+8ZY7cdoWghS0IHDNr5DLmSIkUyTlQhsj3VjKy9wjnWonb8EsvjNcarRPd5nadVH7yvDYIzA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39850400004)(376002)(396003)(136003)(346002)(46966005)(356005)(47076004)(4326008)(36756003)(83380400001)(6512007)(5660300002)(186003)(478600001)(6486002)(2906002)(26005)(6506007)(53546011)(86362001)(336012)(70586007)(70206006)(6862004)(81166007)(82740400003)(8676002)(82310400003)(316002)(2616005)(54906003)(8936002)(33656002)(36906005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 15:10:26.3590
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c902682a-3ada-45d1-63b8-08d88c9d3eca
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2246

Hi,

> On 19 Nov 2020, at 14:54, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> init_one_desc_irq() can return an error if it is unable to allocate
> memory. While this is unlikely to happen during boot (called from
> init_irq_data()), it is better to harden the code by propagating the
> return value.
> 
> Spotted by Coverity.
> 
> CID: 106529
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> ---
> xen/arch/arm/irq.c | 7 ++++++-
> xen/arch/x86/irq.c | 7 ++++++-
> 2 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3877657a5277..279d221a2b85 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -88,7 +88,12 @@ static int __init init_irq_data(void)
>     for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>     {
>         struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc;
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
> +
>         desc->irq = irq;
>         desc->action  = NULL;
>     }
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 45966947919e..3ebd684415ac 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -428,9 +428,14 @@ int __init init_irq_data(void)
> 
>     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>     {
> +        int rc;
> +
>         desc = irq_to_desc(irq);
>         desc->irq = irq;
> -        init_one_irq_desc(desc);
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
>     }
>     for ( ; irq < nr_irqs; irq++ )
>         irq_to_desc(irq)->irq = irq;
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:12:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:12:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30926.61140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflbT-0000Ag-EJ; Thu, 19 Nov 2020 15:12:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30926.61140; Thu, 19 Nov 2020 15:12:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflbT-0000AX-Am; Thu, 19 Nov 2020 15:12:07 +0000
Received: by outflank-mailman (input) for mailman id 30926;
 Thu, 19 Nov 2020 15:12:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Veg=EZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kflbR-0000AO-Pm
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:12:05 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6e2eae9-b6a9-4c58-b62c-2f5efe1b5c2e;
 Thu, 19 Nov 2020 15:12:04 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 31531907
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,490,1596513600"; 
   d="scan'208";a="31531907"
Date: Thu, 19 Nov 2020 16:11:56 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/irq: Propagate the error from init_one_desc_irq() in
 init_irq_data()
Message-ID: <20201119151156.wgkwyslzzlpcirot@Air-de-Roger>
References: <20201119145434.28065-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201119145434.28065-1-julien@xen.org>
X-ClientProxiedBy: LO2P265CA0488.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c6f986be-c814-44e4-5c49-08d88c9d76d8
X-MS-TrafficTypeDiagnostic: DM6PR03MB4764:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB47644051362A9E715E76E2258FE00@DM6PR03MB4764.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: c6f986be-c814-44e4-5c49-08d88c9d76d8
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 15:12:00.5794
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4764
X-OriginatorOrg: citrix.com

On Thu, Nov 19, 2020 at 02:54:34PM +0000, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> init_one_desc_irq() can return an error if it is unable to allocate
> memory. While this is unlikely to happen during boot (called from
> init_irq_data()), it is better to harden the code by propagating the
> return value.
> 
> Spotted by coverity.
> 
> CID: 106529
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

> ---
>  xen/arch/x86/irq.c | 7 ++++++-

For x86:
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

>  2 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3877657a5277..279d221a2b85 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -88,7 +88,12 @@ static int __init init_irq_data(void)
>      for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>      {
>          struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc;
> +
> +        rc = init_one_irq_desc(desc);

You could init rc at definition.
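For illustration, the suggested shape (rc initialised at its definition) would look like the sketch below. The irq_desc type, irq_to_desc() and init_one_irq_desc() here are minimal stand-ins so the snippet is self-contained, not the real Xen definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the Xen internals, just to show the pattern. */
struct irq_desc { int irq; void *action; };
static struct irq_desc descs[8];
static struct irq_desc *irq_to_desc(int irq) { return &descs[irq]; }
static int init_one_irq_desc(struct irq_desc *desc) { (void)desc; return 0; }

static int init_irq_data(void)
{
    for ( int irq = 0; irq < 8; irq++ )
    {
        struct irq_desc *desc = irq_to_desc(irq);
        /* rc initialised at definition, as suggested. */
        int rc = init_one_irq_desc(desc);

        if ( rc )
            return rc;

        desc->irq = irq;
        desc->action = NULL;
    }

    return 0;
}
```

This saves the separate assignment statement while keeping the early return on allocation failure.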

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:14:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30933.61151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfldO-0000LW-RV; Thu, 19 Nov 2020 15:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30933.61151; Thu, 19 Nov 2020 15:14:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfldO-0000LP-Of; Thu, 19 Nov 2020 15:14:06 +0000
Received: by outflank-mailman (input) for mailman id 30933;
 Thu, 19 Nov 2020 15:14:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nub5=EZ=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kfldM-0000Ke-Ha
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:14:04 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e85d2283-dab7-4cca-8ae2-64c3052f470e;
 Thu, 19 Nov 2020 15:14:01 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kflcb-0002ed-15; Thu, 19 Nov 2020 15:13:17 +0000
Date: Thu, 19 Nov 2020 15:13:16 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 17/20] filemap: consistently use ->f_mapping over
 ->i_mapping
Message-ID: <20201119151316.GH29991@casper.infradead.org>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-18-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-18-hch@lst.de>

On Wed, Nov 18, 2020 at 09:47:57AM +0100, Christoph Hellwig wrote:
> @@ -2887,13 +2887,13 @@ EXPORT_SYMBOL(filemap_map_pages);
>  vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
>  {
>  	struct page *page = vmf->page;
> -	struct inode *inode = file_inode(vmf->vma->vm_file);
> +	struct inode *inode = vmf->vma->vm_file->f_mapping->host;
>  	vm_fault_t ret = VM_FAULT_LOCKED;
>  
>  	sb_start_pagefault(inode->i_sb);
>  	file_update_time(vmf->vma->vm_file);
>  	lock_page(page);
> -	if (page->mapping != inode->i_mapping) {
> +	if (page->mapping != vmf->vma->vm_file->f_mapping) {

Bit messy.  I'd do:

	struct address_space *mapping = vmf->vma->vm_file->f_mapping;

	sb_start_pagefault(mapping->host->i_sb);

	if (page->mapping != mapping)

	sb_end_pagefault(mapping->host->i_sb);
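Assembled into the function being discussed, the suggestion would read roughly as follows (an untested sketch with the unchanged middle of the function elided):

```c
vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	vm_fault_t ret = VM_FAULT_LOCKED;

	sb_start_pagefault(mapping->host->i_sb);
	file_update_time(vmf->vma->vm_file);
	lock_page(page);
	if (page->mapping != mapping) {
		/* ... unlock and bail out as before ... */
	}
	/* ... */
	sb_end_pagefault(mapping->host->i_sb);
	return ret;
}
```

The local caches the vmf->vma->vm_file->f_mapping chain once, so both the start/end pagefault calls and the mapping comparison stay short.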



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:23:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:23:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30947.61164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfllu-0001MM-SM; Thu, 19 Nov 2020 15:22:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30947.61164; Thu, 19 Nov 2020 15:22:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfllu-0001MF-NV; Thu, 19 Nov 2020 15:22:54 +0000
Received: by outflank-mailman (input) for mailman id 30947;
 Thu, 19 Nov 2020 15:22:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZym=EZ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfllu-0001MA-0c
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:22:54 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe08::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89588174-f36d-420f-989d-9620037c39e8;
 Thu, 19 Nov 2020 15:22:51 +0000 (UTC)
Received: from DB6PR0601CA0022.eurprd06.prod.outlook.com (2603:10a6:4:7b::32)
 by AM9PR08MB6306.eurprd08.prod.outlook.com (2603:10a6:20b:2d6::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Thu, 19 Nov
 2020 15:22:49 +0000
Received: from DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::67) by DB6PR0601CA0022.outlook.office365.com
 (2603:10a6:4:7b::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 15:22:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT051.mail.protection.outlook.com (10.152.21.19) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 15:22:49 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Thu, 19 Nov 2020 15:22:49 +0000
Received: from eb3633972789.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DDFD69C8-98C9-4B4B-B9F3-198F645219F8.1; 
 Thu, 19 Nov 2020 15:22:43 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eb3633972789.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 15:22:43 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3851.eurprd08.prod.outlook.com (2603:10a6:10:7a::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 15:22:41 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 15:22:41 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "alex.bennee@linaro.org"
	<alex.bennee@linaro.org>, Andre Przywara <Andre.Przywara@arm.com>, Rahul
 Singh <Rahul.Singh@arm.com>, Julien Grall <Julien.Grall@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v3 1/2] xen/arm: gic: acpi: Use the correct length for the
 GICC structure
Thread-Topic: [PATCH v3 1/2] xen/arm: gic: acpi: Use the correct length for
 the GICC structure
Thread-Index: AQHWvm1700AF8zQgzEuZU87Ez2FU9KnPksgA
Date: Thu, 19 Nov 2020 15:22:41 +0000
Message-ID: <890D41B9-D3F3-498F-89E4-8BB997B45D6F@arm.com>
References: <20201119121347.27139-1-julien@xen.org>
 <20201119121347.27139-2-julien@xen.org>
In-Reply-To: <20201119121347.27139-2-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c0ff3728-acf8-41c5-d981-08d88c9ef9a9
x-ms-traffictypediagnostic: DB7PR08MB3851:|AM9PR08MB6306:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB63069A22CA8F0D6FC497E2659DE00@AM9PR08MB6306.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2958;OLM:2958;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <5B1FFCF60BF6A244A1841AB87C335BC4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3851
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	efccea10-f129-45bb-e71c-08d88c9ef505
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 15:22:49.4239
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c0ff3728-acf8-41c5-d981-08d88c9ef9a9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6306

Hi Julien,

> On 19 Nov 2020, at 12:13, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <julien.grall@arm.com>
> 
> The length of the GICC structure in the MADT ACPI table differs between
> version 5.1 and 6.0, although there are no other relevant differences.
> 
> Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
> overcome this issue.
> 

The series is breaking the build on arm64 if ACPI is not activated:
arch/arm/gic.c:456: undefined reference to `acpi_gbl_FADT'

Cheers
Bertrand

> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>    Changes in v3:
>        - Update the commit title as we also modify GICv3 code
>        - Use the correct length in more places
> 
>    Changes in v2:
>        - Patch added
> ---
> xen/arch/arm/acpi/boot.c | 2 +-
> xen/arch/arm/gic-v2.c    | 5 +++--
> xen/arch/arm/gic-v3.c    | 6 +++---
> xen/arch/arm/gic.c       | 2 +-
> 4 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 30e4bd1bc5a7..55c3e5cbc834 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *processor =
>                container_of(header, struct acpi_madt_generic_interrupt, header);
> 
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>         return -EINVAL;
> 
>     acpi_table_print_madt_entry(header);
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 0f747538dbcd..0e5f23201974 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
> 
>     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
>                              header);
> -    size = sizeof(struct acpi_madt_generic_interrupt);
> +
> +    size = ACPI_MADT_GICC_LENGTH;
>     /* Add Generic Interrupt */
>     for ( i = 0; i < d->max_vcpus; i++ )
>     {
> @@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *processor =
>                container_of(header, struct acpi_madt_generic_interrupt, header);
> 
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>         return -EINVAL;
> 
>     /* Read from APIC table and fill up the GIC variables */
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index 0f6cbf6224e9..483ec1598f37 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1499,7 +1499,7 @@ static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
> 
>     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
>                              header);
> -    size = sizeof(struct acpi_madt_generic_interrupt);
> +    size = ACPI_MADT_GICC_LENGTH;
>     for ( i = 0; i < d->max_vcpus; i++ )
>     {
>         gicc = (struct acpi_madt_generic_interrupt *)(base_ptr + table_len);
> @@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *processor =
>                container_of(header, struct acpi_madt_generic_interrupt, header);
> 
> -    i
ZiAoIEJBRF9NQURUX0VOVFJZKHByb2Nlc3NvciwgZW5kKSApDQo+ICsgICAgaWYgKCBCQURfTUFE
VF9HSUNDX0VOVFJZKHByb2Nlc3NvciwgZW5kKSApDQo+ICAgICAgICAgcmV0dXJuIC1FSU5WQUw7
DQo+IA0KPiAgICAgLyogUmVhZCBmcm9tIEFQSUMgdGFibGUgYW5kIGZpbGwgdXAgdGhlIEdJQyB2
YXJpYWJsZXMgKi8NCj4gQEAgLTE2MjgsNyArMTYyOCw3IEBAIGdpY19hY3BpX2dldF9tYWR0X2Nw
dV9udW0oc3RydWN0IGFjcGlfc3VidGFibGVfaGVhZGVyICpoZWFkZXIsDQo+ICAgICBzdHJ1Y3Qg
YWNwaV9tYWR0X2dlbmVyaWNfaW50ZXJydXB0ICpjcHVpZjsNCj4gDQo+ICAgICBjcHVpZiA9IChz
dHJ1Y3QgYWNwaV9tYWR0X2dlbmVyaWNfaW50ZXJydXB0ICopaGVhZGVyOw0KPiAtICAgIGlmICgg
QkFEX01BRFRfRU5UUlkoY3B1aWYsIGVuZCkgfHwgIWNwdWlmLT5naWNyX2Jhc2VfYWRkcmVzcyAp
DQo+ICsgICAgaWYgKCBCQURfTUFEVF9HSUNDX0VOVFJZKGNwdWlmLCBlbmQpIHx8ICFjcHVpZi0+
Z2ljcl9iYXNlX2FkZHJlc3MgKQ0KPiAgICAgICAgIHJldHVybiAtRUlOVkFMOw0KPiANCj4gICAg
IHJldHVybiAwOw0KPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL2dpYy5jIGIveGVuL2FyY2gv
YXJtL2dpYy5jDQo+IGluZGV4IGQ2MjNjNTdjYjlmYS4uYjEwMGM4NWVmMzE0IDEwMDY0NA0KPiAt
LS0gYS94ZW4vYXJjaC9hcm0vZ2ljLmMNCj4gKysrIGIveGVuL2FyY2gvYXJtL2dpYy5jDQo+IEBA
IC00NTMsNyArNDUzLDcgQEAgdW5zaWduZWQgbG9uZyBnaWNfZ2V0X2h3ZG9tX21hZHRfc2l6ZShj
b25zdCBzdHJ1Y3QgZG9tYWluICpkKQ0KPiAgICAgdW5zaWduZWQgbG9uZyBtYWR0X3NpemU7DQo+
IA0KPiAgICAgbWFkdF9zaXplID0gc2l6ZW9mKHN0cnVjdCBhY3BpX3RhYmxlX21hZHQpDQo+IC0g
ICAgICAgICAgICAgICAgKyBzaXplb2Yoc3RydWN0IGFjcGlfbWFkdF9nZW5lcmljX2ludGVycnVw
dCkgKiBkLT5tYXhfdmNwdXMNCj4gKyAgICAgICAgICAgICAgICArIEFDUElfTUFEVF9HSUNDX0xF
TkdUSCAqIGQtPm1heF92Y3B1cw0KPiAgICAgICAgICAgICAgICAgKyBzaXplb2Yoc3RydWN0IGFj
cGlfbWFkdF9nZW5lcmljX2Rpc3RyaWJ1dG9yKQ0KPiAgICAgICAgICAgICAgICAgKyBnaWNfaHdf
b3BzLT5nZXRfaHdkb21fZXh0cmFfbWFkdF9zaXplKGQpOw0KPiANCj4gLS0gDQo+IDIuMTcuMQ0K
PiANCg0K


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:24:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:24:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30960.61176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflns-0001Wk-7h; Thu, 19 Nov 2020 15:24:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30960.61176; Thu, 19 Nov 2020 15:24:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflns-0001Wd-4m; Thu, 19 Nov 2020 15:24:56 +0000
Received: by outflank-mailman (input) for mailman id 30960;
 Thu, 19 Nov 2020 15:24:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kflnq-0001WS-R3
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:24:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflno-0006ca-Gz; Thu, 19 Nov 2020 15:24:52 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kflno-0005ws-8C; Thu, 19 Nov 2020 15:24:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=t84XUouGwlIH7LwczZmanbryowdld/KVoT3Q9H8MYOE=; b=6wcCjBgCD0p6kRW8ZGbbs1ZKmJ
	MqMuEgQGOb3otin9OY02Xmj+b/Y/OGq30KGv1FQbVAW+H9nR4CHh9h533BYA4CoF2nuOXPv1IULvO
	2e505t1X1wDoCLtGg6GJh/5Mv0lB6OAU/lzmVnOGhrP6hC8W1oGGkUHCxPu4VC3MeV58=;
Subject: Re: [PATCH v3 1/2] xen/arm: gic: acpi: Use the correct length for the
 GICC structure
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 "alex.bennee@linaro.org" <alex.bennee@linaro.org>,
 Andre Przywara <Andre.Przywara@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <Julien.Grall@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20201119121347.27139-1-julien@xen.org>
 <20201119121347.27139-2-julien@xen.org>
 <890D41B9-D3F3-498F-89E4-8BB997B45D6F@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1dfe3afc-55de-77ec-1f21-448c193909b4@xen.org>
Date: Thu, 19 Nov 2020 15:24:50 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <890D41B9-D3F3-498F-89E4-8BB997B45D6F@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 19/11/2020 15:22, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

> 
>> On 19 Nov 2020, at 12:13, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <julien.grall@arm.com>
>>
>> The length of the GICC structure in the MADT ACPI table differs between
>> version 5.1 and 6.0, although there are no other relevant differences.
>>
>> Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
>> overcome this issue.
>>
> 
> The series is breaking the build on arm64 if ACPI is not activated:
> arch/arm/gic.c:456: undefined reference to `acpi_gbl_FADT'

:(. Sorry for that. I usually test with and without ACPI enabled but 
forgot to do so for this series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:26:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:26:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.30969.61187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflpA-0001fd-Mb; Thu, 19 Nov 2020 15:26:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 30969.61187; Thu, 19 Nov 2020 15:26:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kflpA-0001fW-JX; Thu, 19 Nov 2020 15:26:16 +0000
Received: by outflank-mailman (input) for mailman id 30969;
 Thu, 19 Nov 2020 15:26:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZym=EZ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kflp9-0001fJ-GV
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:26:15 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.57]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 215c975e-a20c-47b1-b676-fec54cbf2be7;
 Thu, 19 Nov 2020 15:26:12 +0000 (UTC)
Received: from AM6PR08CA0014.eurprd08.prod.outlook.com (2603:10a6:20b:b2::26)
 by VE1PR08MB5807.eurprd08.prod.outlook.com (2603:10a6:800:1b2::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 15:25:52 +0000
Received: from VE1EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::a5) by AM6PR08CA0014.outlook.office365.com
 (2603:10a6:20b:b2::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 15:25:52 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT034.mail.protection.outlook.com (10.152.18.85) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 15:25:52 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Thu, 19 Nov 2020 15:25:51 +0000
Received: from 2978a1470557.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 108A4C02-FBFE-4D64-B0C3-F5C925E4E687.1; 
 Thu, 19 Nov 2020 15:25:36 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2978a1470557.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 15:25:36 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5323.eurprd08.prod.outlook.com (2603:10a6:10:fa::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Thu, 19 Nov
 2020 15:25:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 15:25:34 +0000
X-Inumbo-ID: 215c975e-a20c-47b1-b676-fec54cbf2be7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fnRy+fxV9dnRf6NefZ4V+5c/BTUDI2EIrMkGJmEKFTQ=;
 b=lu0oU94LtRMbjz9UxcCfg4U9Y8U+MjaMaRlTo5ExXLNgXAbf3S8lqIg3LmgU1pXI3DS2rs9BN6eZ/uOdgrsHfTnDa6Izb8FWUQjblSrOOP70UMrwMRUEtKemd1AHlqLYFDrRcRQECOcqZ8/93w9SLhy+1SOTkVLm1wlK3vtvkWs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 512a5cd746cca79d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JLIM1IEVW/g1W9lifEZe1Q42fe9uWHwiilTG3rqUOAwkRdxEQhXI7526+wNy9NQCB/tcg2Q540lLhQ9bmX5vs2IK9RztMbRP8FQNJrv44y16PeNDclAhc/M7TqMFswCHFmoJ71tdTUe6Enb36Y+voTEarWv2zH0Tf/7RfvPz6VugRXN70zvVvEzwGt6MMMOwCA1s0BMofoEaxfRaVzAA7ELblR9XuKxMLXwN1agA6/9Lrig1j0IHE+NKq7AiN0yvGZHKnkJgIL0GVbzBwhhf5JL8jucF0DFqmUXVHgcp0gJL6SwZhOL0HMALqhyuCHnskWmMiJDqfoViYt+MzTyDfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fnRy+fxV9dnRf6NefZ4V+5c/BTUDI2EIrMkGJmEKFTQ=;
 b=abENyA61P2vmdfwGM75bt20qEyXhRETyOlm26zXR6NJRfeFvYPLR6+IfQPXaJ3QFHl086YJDk+pd6WJdtLP0pg5BzkFGpSc2IsRm55DKml1ko+E8JXMfQk5QziNoWwNjreUMO/zIJYaLOMP92t7pGWazCwxnYTt3TLFSz65LeplqxH3UdcL75kuCdmLxt7TSUnJkWOun2xKxhgg1EtYmGuqZRaqEAckjU8YHYgi1x5HE/C4CfuFjBwfWnBONpWX0+yJbMoBjC8IMqzPTri/ggSGI9cRUNLOVDounnjvYZeQkmdOe0VEeGoGDDgYaRH/EJIdnD8D+jPx7OB3+phR24w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jorge Pereira <jorge.pereira@nxp.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: smmu-related clarification
Thread-Topic: smmu-related clarification
Thread-Index: Ada+dgwvLpGgRoDvTYGCt70DgFD1qgAEi0YA
Date: Thu, 19 Nov 2020 15:25:34 +0000
Message-ID: <0686AEE9-0597-4AA0-98CE-60CC816ABDC3@arm.com>
References:
 <AM6PR04MB58630B20435EEDF65D2577E2F0E00@AM6PR04MB5863.eurprd04.prod.outlook.com>
In-Reply-To:
 <AM6PR04MB58630B20435EEDF65D2577E2F0E00@AM6PR04MB5863.eurprd04.prod.outlook.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: nxp.com; dkim=none (message not signed)
 header.d=none;nxp.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 967a7b14-b51b-4a79-9d7f-08d88c9f66a0
x-ms-traffictypediagnostic: DB8PR08MB5323:|VE1PR08MB5807:
X-Microsoft-Antispam-PRVS:
	<VE1PR08MB5807514450F17506260F52F69DE00@VE1PR08MB5807.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <C2E6F6A287E59942B279D2838AC850B6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5323
Original-Authentication-Results: nxp.com; dkim=none (message not signed)
 header.d=none;nxp.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f80d5604-c193-4162-0d9d-08d88c9f5c2d
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 15:25:52.0696
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 967a7b14-b51b-4a79-9d7f-08d88c9f66a0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5807

Hi Jorge,

> On 19 Nov 2020, at 13:16, Jorge Pereira <jorge.pereira@nxp.com> wrote:
> 
> Hi All,
> 
> I’m having some smmu-related issues, I need help.
> 
> So, in my use-case scenario I have two linux guests running in parallel – dom0 and domU. I have to enable the smmu because I want to passthrough devices to domU.
> 
> Would be great if you help me to clarify the following questions:
> 	• if I enable SMMU, will it apply not only to domU but also to dma-capable devices assigned to dom0?

yes it will apply to all your domains.

> 	• Do I have to add all smmu-masters of dom0 in device tree as well? Or since dom0 has 1:1 mapping I don’t have to do anything?

smmu is not only used for remapping but also to protect DMA accesses which means for dom0 it will use 1:1 mapping but will limit DMA to the memory accessible to dom0.
So you will need to add all required smmu information to all devices you need otherwise devices will be denied any DMA access.

Regards
Bertrand

> 
> Thanks,
> Jorge


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:54:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:54:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31009.61203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmGg-00056Z-4y; Thu, 19 Nov 2020 15:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31009.61203; Thu, 19 Nov 2020 15:54:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmGg-00056S-1R; Thu, 19 Nov 2020 15:54:42 +0000
Received: by outflank-mailman (input) for mailman id 31009;
 Thu, 19 Nov 2020 15:54:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ihJE=EZ=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kfmGe-00056N-LC
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:54:40 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.53]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fb6d8f88-2605-4f08-9b81-69b23e05d5dc;
 Thu, 19 Nov 2020 15:54:38 +0000 (UTC)
Received: from AM5PR0402CA0002.eurprd04.prod.outlook.com
 (2603:10a6:203:90::12) by VI1PR08MB3837.eurprd08.prod.outlook.com
 (2603:10a6:803:ba::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Thu, 19 Nov
 2020 15:54:35 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:90:cafe::ff) by AM5PR0402CA0002.outlook.office365.com
 (2603:10a6:203:90::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 15:54:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 15:54:35 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Thu, 19 Nov 2020 15:54:34 +0000
Received: from 28cd6dbeeba3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D65C6FB2-09D3-4ABC-BC74-306EDB20DAF9.1; 
 Thu, 19 Nov 2020 15:54:28 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 28cd6dbeeba3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 15:54:28 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBAPR08MB5685.eurprd08.prod.outlook.com (2603:10a6:10:1ad::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 15:54:27 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 15:54:27 +0000
X-Inumbo-ID: fb6d8f88-2605-4f08-9b81-69b23e05d5dc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rX0k+1xCoJslVDuaQwtuf9eHFT1B8y3RnVG0KwyuWng=;
 b=EhBK6XP6RdoKGr3cNVig3nJfjOvi4gFi5uO0saVyhiJp/HM5OGBdFIW3n5iAC9Sguf/D6YMx4XU5UxsZlNngkDx4aMS35AXm+hv+EQay7sDmLQLt9jgglJwf7fLxERnNNTJvZkzK2hx7nDs1pruWaOOfBi8Zq+dBlz4axIHZdB4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d9a489cd1dc698e8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BHNC80N3Q46BpNCt/5qQP9c91KzvGSOqShGlu42hksXwzvObHdFbSZfbGlNwa7SW6wOivQLZV8MmLwRW/bY79uzhUtpirtRdGVPok1VXfBj3iaktWRxlKLJXMj7DS4WehcHefFuCa67feaobMkVPA7K+bzzzRCdn2huGlBy/I+p2+WSuxwem+y61sQvq9y9aJg8ZI7/7xnpAbv4hot1n+K679XlTII3rK07N+xtHLHtmznhiW1b2qADsPXOushVGITg+1fuASAMk+kcAfOlf+VeT00ShrLdY6MVKZlNx6PouKzxVud1wLnYy+x3yONjlLMwG3g50PrVupuhPy8VTsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rX0k+1xCoJslVDuaQwtuf9eHFT1B8y3RnVG0KwyuWng=;
 b=CUcv/nzT8MBHsvxFyLVAD1THrfNAViyHhan+ecs2e7H1/4HxvzKKTRdLBdU/J5H941Y9/4Wx34LbhVymQ6laDeM2f5CGB+IWzTHnbcepqyOMGesWO6nEJ6UTLoxGEEIr5asB/S14W3kjw6vXY1YavLlMTvIojojQNm/aKCmOop/izemSE+tQgZqmqRqIioP3Dxjdree8BhbjdJWYhF8WwQaE2ojQOI76zE6zp1eHuXPJ+VUy/VqZVz1w+dPcaBMz79++URrDsz7egD1MG6txfUWsqgKeQC+uGtlT3PUlu92mBnV8or9KBpHVt7oHc+KE2clCutPnSNhJhuUZN58CKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rX0k+1xCoJslVDuaQwtuf9eHFT1B8y3RnVG0KwyuWng=;
 b=EhBK6XP6RdoKGr3cNVig3nJfjOvi4gFi5uO0saVyhiJp/HM5OGBdFIW3n5iAC9Sguf/D6YMx4XU5UxsZlNngkDx4aMS35AXm+hv+EQay7sDmLQLt9jgglJwf7fLxERnNNTJvZkzK2hx7nDs1pruWaOOfBi8Zq+dBlz4axIHZdB4=
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBAPR08MB5685.eurprd08.prod.outlook.com (2603:10a6:10:1ad::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 19 Nov
 2020 15:54:27 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::e089:1ed3:63a0:2f28%6]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 15:54:27 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
Thread-Topic: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
Thread-Index:
 AQHWvBOve/67EvsNk0ODGsWoVvEIcanODQaAgAEhKgCAAARQAIAACQ0AgAAGfYCAAF5RAA==
Date: Thu, 19 Nov 2020 15:54:27 +0000
Message-ID: <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
In-Reply-To: <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 71e0f658-804e-43ca-449c-08d88ca369a1
x-ms-traffictypediagnostic: DBAPR08MB5685:|VI1PR08MB3837:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3837818A03A543E0655F22CFFCE00@VI1PR08MB3837.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 qVFYSd7+maUCTIyGo2XNwuZgtytKWLomG3+LIScaaOLJF52+abCPUiTHKds6SX+fbdarsu3Vq3JMSKFWCudVC6ndelUcx3jxenGGTeEjSDgDxdezqIVzSlQd1ZD4LuhZFfjfG79y0if56f1wGy7BCWigC1G4eUz9+HMqbHrf2Dj+QSTero0Jg0MRiPF8lxvImcauSDLBP0Uac5b2adClLy5KQ4LmxGYv5bnDZ4lAOeZNNaWAmQIbpaaCdiYstXgS+Pyv5jxMkOx5k4nAMuiDGvZeyOfm/Qa44mnOuQN4CxdXaw6adJzO6eQW2Ap18cMd6pgOZo4JNfy0A9dJ4WjvzA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(366004)(346002)(39860400002)(376002)(316002)(6506007)(54906003)(2616005)(2906002)(83380400001)(26005)(53546011)(186003)(86362001)(36756003)(71200400001)(5660300002)(6512007)(76116006)(91956017)(33656002)(66446008)(64756008)(66556008)(66476007)(478600001)(66946007)(8676002)(8936002)(6486002)(4326008)(6916009);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 p8gLi6If9ectoO07s0/OUqv1L8hsEwpgRf08q1Ul6daBlPzful1SYsEt83fanKCk3aKOhKIEjDQk4ijxsXxHdAvt5JMHs6enDTndgGDT+0mgZXAjJPK3ySSYIrlZTRAQONifISYuy3Pxd60qDrTEmNIgK8QewALN8ZubtpW+QqZrCD4ovDJyxQCVm5+x00C/kNAHScg8IRM/aSsTlI0cZ/sJl6uDPI4cQ2o+p5pIM2I7I1+sxRrIW1+jye7LYnWsJinvuWtzE/R9gXGAE4BUQGiqNg7kmpMtmsRBGIuQL9ndLtD81qyVWvHj2QEJD1/TbW2VVh90VftQbnSVFiI9K0Q93Vla3DO4gLVVX36RU8PjcRlcm2RP42tpZC10beVsSDEdTReBI5gtpViyYYCLRSNp9LUOH0PeDguszVaahhTj8x5WcXM8B7hPJOIQ3aqhCklJGqbwa96qpWc9+F4Li64izx7xetAN7HRf1uhAY/rYXMxkQEoRESS+mDiI7Oko3kpi6skXGAofK2LBU4itQ+4oNLVhClmJmTppa0Ne/BU3erTL3nYolX9jbci2jXUhOGFpF5DlugBqvHGws2whP0gbkblY0jGk77xpmGFZwcOCENH0w4Wpli86Nxgl+ePm4bV8hXAIycPaE9wlZkB4+soynpaSL4itHBGQ2d6EOIJfI2/gdLpYHAKE9QUzFRo2RsA7QkomNvMe76jINACJfiPLZNBtSfDVcAjh9ph0LnJwCWm7s/N7gk/B3LkpgGs1xFzLvXDKqKvz47jYg225dT94aXeesRQRWufEVKalFG4DpNtlvQuSFVtdy4I50NeRFKYsJQKnrsMRkjGV6eC618x/+gqTcmEnpLFfnsAtfizJFfJ8yQiKQxjA1TBEL9OfcnXGg9pjAeOMqKgeGD5kCA==
Content-Type: text/plain; charset="utf-8"
Content-ID: <04ED262D099E124ABC2ED99A4E12CFCC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5685
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	69256941-f3a6-4408-9171-08d88ca364e8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VvifpvYyCpEpV+upi6sAt4oLGYxjXLrtWl5v2IWBZarklcEi87PjPSQf8PxCOmtL9M1mbxrbJB0hfk+y7wguhHJR2f1yyIsSkDkXANQAoGXdflb+EUNVk7U99xVsY89s/RRnzgaCUnGUsZb+7WxDUtuaOj82+1qe1svsZjbk1K1wcTnIcdNMwXVSoxfuzfUUP7gG93OL+GJVn3WBCvUO0RkSe5CU/FNU0M1xU4kVFyl+CQI2wzFqNRV7EcqxawUIo6R5oRwhfBM9kQK21bc3JKqUsTbJEVcvG773PdM5ZrJdEL5VhPiWKaKeol2N8YvF8/dkwPm2Cb1KNh6JzHTcFXeeb2bbf1bYImeoCccLSULHmLefHlaikCOrpnukI69ub2m9pSsGe0fJjEm+sWUhog==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(376002)(39860400002)(396003)(346002)(46966005)(82310400003)(8676002)(336012)(26005)(82740400003)(53546011)(186003)(47076004)(8936002)(316002)(36906005)(6506007)(33656002)(2616005)(54906003)(83380400001)(36756003)(5660300002)(478600001)(81166007)(70586007)(4326008)(356005)(6862004)(70206006)(6486002)(6512007)(2906002)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 15:54:35.2009
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 71e0f658-804e-43ca-449c-08d88ca369a1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3837

Hello,

> On 19 Nov 2020, at 10:16 am, Julien Grall <julien@xen.org> wrote:
> 
> 
> On 19/11/2020 09:53, Jan Beulich wrote:
>> On 19.11.2020 10:21, Julien Grall wrote:
>>> Hi Jan,
>>> 
>>> On 19/11/2020 09:05, Jan Beulich wrote:
>>>> On 18.11.2020 16:50, Julien Grall wrote:
>>>>> On 16/11/2020 12:25, Rahul Singh wrote:
>>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>>>>> because ARM platforms do not have full PCI support available.
>>>>>   >
>>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
>>>>>> ns16550 PCI for X86.
>>>>>> 
>>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>>>>> disabled by default, once we have proper support for NS16550 PCI for
>>>>>> ARM we can enable it.
>>>>>> 
>>>>>> No functional change.
>>>>> 
>>>>> NIT: I would say "No functional change intended" to make clear this is
>>>>> an expectation and hopefully will be correct :).
>>>>> 
>>>>> Regarding the commit message itself, I would suggest the following to
>>>>> address Jan's concern:
>>>> 
>>>> While indeed this is a much better description, I continue to think
>>>> that the proposed Kconfig option is undesirable to have.
>>> 
>>> I am yet to see an argument into why we should keep the PCI code
>>> compiled on Arm when there will be no-use....
>> Well, see my patch suppressing building of quite a part of it.
> 
> I will let Rahul figuring out whether your patch series is sufficient to fix compilation issues (this is what matters right now).

I just checked the compilation error for ARM after enabling HAS_PCI on ARM. I am observing the same compilation errors that I observed previously.
There are two new errors, related to struct uart_config and struct uart_param, as those structs are defined globally but only used under X86 flags:

At top level:
ns16550.c:179:48: error: 'uart_config' defined but not used [-Werror=unused-const-variable=]
 static const struct ns16550_config __initconst uart_config[] =
                                                ^~~~~~~~~~~
ns16550.c:104:54: error: 'uart_param' defined but not used [-Werror=unused-const-variable=]
 static const struct ns16550_config_param __initconst uart_param[] = {

> 
>>>> Either,
>>>> following the patch I've just sent, truly x86-specific things (at
>>>> least as far as current state goes - if any of this was to be
>>>> re-used by a future port, suitable further abstraction may be
>>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>>>> hooks), or the HAS_PCI_MSI proposal would at least want further
>>>> investigating as to its feasibility to address the issues at hand.
>>> 
>>> I would be happy with CONFIG_X86, despite the fact that this is only
>>> deferring the problem.
>>> 
>>> Regarding HAS_PCI_MSI, I don't really see the point of introducing given
>>> that we are not going to use NS16550 PCI on Arm in the forseeable
>>> future.
>> And I continue to fail to see what would guarantee this: As soon
>> as you can plug in such a card into an Arm system, people will
>> want to be able use it. That's why we had to add support for it
>> on x86, after all.
> 
> Well, plug-in PCI cards on Arm has been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI support.
> 
> This is probably because SBSA compliant server should always provide an SBSA UART (a cut-down version of the PL011). So why would bother to lose a PCI slot for yet another UART?
> 
>> >> So why do we need a finer graine Kconfig?
>> Because most of the involved code is indeed MSI-related?
> 
> Possibly, yet it would not be necessary if we don't want NS16550 PCI support...

> 

To fix the compilation error on ARM, as per the discussion above, there are the following options; please suggest which one to use to proceed further:

1. Use the newly introduced CONFIG_HAS_NS16550_PCI config option. This also helps non-x86 architectures in the future avoid the compilation errors we are observing now when HAS_PCI is enabled.

2. Guard the remaining x86-specific code with CONFIG_X86 and introduce a new CONFIG_HAS_PCI_MSI option to fix the MSI-related compilation errors. Once we have proper support for MSI and PCI for ARM (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig), I am not sure if NS16550 PCI will work out of the box on ARM; in that case, we might need to come back again to fix the NS16550 driver.


> Cheers,
> 
> -- 
> Julien Grall


Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:57:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:57:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31022.61215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmJJ-0005G6-Im; Thu, 19 Nov 2020 15:57:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31022.61215; Thu, 19 Nov 2020 15:57:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmJJ-0005Fz-Fr; Thu, 19 Nov 2020 15:57:25 +0000
Received: by outflank-mailman (input) for mailman id 31022;
 Thu, 19 Nov 2020 15:57:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfmJI-0005Fu-A6
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:57:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3867198-9c2b-432e-ada4-86bb1306e479;
 Thu, 19 Nov 2020 15:57:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9A0FABD6;
 Thu, 19 Nov 2020 15:57:22 +0000 (UTC)
X-Inumbo-ID: f3867198-9c2b-432e-ada4-86bb1306e479
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605801442; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Q8pTdAyL07uOku06xGXBXqPxNVKB/ssYIkgqBhIqPsU=;
	b=LLQSGZkT9aqqj2o8h+ACvcUTcwchDxG9wWIdn/N5GKlBQZpg1TmOJ8h/XPhaLaA7oWhuq3
	VG/ig9UZs8oPWKlhcgiX8Jg0oo39BpGg5r6uj/1gXHE4RXu8oHTb+Vlx+rN+HFSTv0Us1s
	nsGTmEL6SbeQzoSjTzig8Un6NxwOe88=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] SVM: avoid UB in intercept mask definitions
Message-ID: <eaef863f-a396-58be-cb32-7a9e2535d9e2@suse.com>
Date: Thu, 19 Nov 2020 16:57:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Found by looking for patterns similar to the one Julien did spot in
pci_vtd_quirks().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -55,7 +55,7 @@ enum GenericIntercept1bits
     GENERAL1_INTERCEPT_MSR_PROT      = 1 << 28,
     GENERAL1_INTERCEPT_TASK_SWITCH   = 1 << 29,
     GENERAL1_INTERCEPT_FERR_FREEZE   = 1 << 30,
-    GENERAL1_INTERCEPT_SHUTDOWN_EVT  = 1 << 31
+    GENERAL1_INTERCEPT_SHUTDOWN_EVT  = 1u << 31
 };
 
 /* general 2 intercepts */
@@ -112,7 +112,7 @@ enum CRInterceptBits
     CR_INTERCEPT_CR12_WRITE = 1 << 28,
     CR_INTERCEPT_CR13_WRITE = 1 << 29,
     CR_INTERCEPT_CR14_WRITE = 1 << 30,
-    CR_INTERCEPT_CR15_WRITE = 1 << 31,
+    CR_INTERCEPT_CR15_WRITE = 1u << 31,
 };
 
 
@@ -150,7 +150,7 @@ enum DRInterceptBits
     DR_INTERCEPT_DR12_WRITE = 1 << 28,
     DR_INTERCEPT_DR13_WRITE = 1 << 29,
     DR_INTERCEPT_DR14_WRITE = 1 << 30,
-    DR_INTERCEPT_DR15_WRITE = 1 << 31,
+    DR_INTERCEPT_DR15_WRITE = 1u << 31,
 };
 
 enum VMEXIT_EXITCODE


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:57:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:57:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31023.61226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmJU-0005Jg-Rv; Thu, 19 Nov 2020 15:57:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31023.61226; Thu, 19 Nov 2020 15:57:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmJU-0005JY-Og; Thu, 19 Nov 2020 15:57:36 +0000
Received: by outflank-mailman (input) for mailman id 31023;
 Thu, 19 Nov 2020 15:57:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdOd=EZ=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfmJU-0005JK-1g
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:57:36 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0682d113-8cb4-45dd-858d-1eafa680f5bd;
 Thu, 19 Nov 2020 15:57:32 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AJFvNh1005475
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 19 Nov 2020 16:57:24 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 148162E9CA8; Thu, 19 Nov 2020 16:57:18 +0100 (MET)
X-Inumbo-ID: 0682d113-8cb4-45dd-858d-1eafa680f5bd
Date: Thu, 19 Nov 2020 16:57:18 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201119155718.GB4104@antioche.eu.org>
References: <20201117150949.GA3791@antioche.eu.org>
 <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 19 Nov 2020 16:57:24 +0100 (MET)

On Thu, Nov 19, 2020 at 03:19:15PM +0100, Roger Pau Monné wrote:
> I've got two different debug patches for you to try. I'm attaching both
> to this email but I think we should start with Jan's suggestion
> (conditional_print.patch). That patch will only print extra messages
> between the ioregsel 3 ... ioregsel f existing debug messages, you
> will have to trigger this from NetBSD by using ioapic_dump_raw AFAICT.

Thanks. I didn't see any change in behavior or Xen output with this
patch (in particular, the vioapic_deliver string doesn't show up in the
logs).

> 
> The other patch (verbose_intr.patch) adds some messages related to
> interrupts, but I'm afraid it's likely to interfere with the issue we
> are trying to debug, since it will add a lot of extra printk's (and
> likely flood your console).

With this one, the console is indeed flooded, and the dom0 boots without
problems.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:57:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31027.61239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmJm-0005Pw-6y; Thu, 19 Nov 2020 15:57:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31027.61239; Thu, 19 Nov 2020 15:57:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmJm-0005Pn-2N; Thu, 19 Nov 2020 15:57:54 +0000
Received: by outflank-mailman (input) for mailman id 31027;
 Thu, 19 Nov 2020 15:57:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfmJk-0005PP-T5
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:57:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9d09f8e-56a3-4b2e-ad32-3239d9e8e902;
 Thu, 19 Nov 2020 15:57:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8EA8EAC2F;
 Thu, 19 Nov 2020 15:57:51 +0000 (UTC)
X-Inumbo-ID: d9d09f8e-56a3-4b2e-ad32-3239d9e8e902
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605801471; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=PhGDpB0MqrnKBH+bcl0t0R5vWKSRquPeRxsvi08pE9U=;
	b=hcz8iPQzdswUwBLQQY9fWG4FEVKy4XFCKQY0Suf8WCg+XUJTZgbL3IVH2b269Pay47R4y3
	Ieobun5eDPjVBuKzYugdgZ8rqOdqU8vg5GJ3EGFwgePr8zBVZz4UVPfb07FZyce/YfETV1
	K3tzottQGWXKssItzp4kY+Qh27hzyHs=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/Intel: avoid UB with NMI watchdog on family 15 CPUs
Message-ID: <63500eb6-b1da-ce08-52e2-00b30ffe2c26@suse.com>
Date: Thu, 19 Nov 2020 16:57:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Found by looking for patterns similar to the one Julien did spot in
pci_vtd_quirks().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -132,7 +132,7 @@ int nmi_active;
 #define P4_ESCR_EVENT_SELECT(N)	((N)<<25)
 #define P4_CCCR_OVF_PMI0	(1<<26)
 #define P4_CCCR_OVF_PMI1	(1<<27)
-#define P4_CCCR_OVF		(1<<31)
+#define P4_CCCR_OVF		(1u << 31)
 #define P4_CCCR_THRESHOLD(N)	((N)<<20)
 #define P4_CCCR_COMPLEMENT	(1<<19)
 #define P4_CCCR_COMPARE		(1<<18)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 15:59:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 15:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31038.61251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmKr-0005d1-IN; Thu, 19 Nov 2020 15:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31038.61251; Thu, 19 Nov 2020 15:59:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmKr-0005cu-FT; Thu, 19 Nov 2020 15:59:01 +0000
Received: by outflank-mailman (input) for mailman id 31038;
 Thu, 19 Nov 2020 15:58:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfmKp-0005ch-HW
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 15:58:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa303717-5def-4e0f-8768-f75e3b99fd43;
 Thu, 19 Nov 2020 15:58:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0203DABD6;
 Thu, 19 Nov 2020 15:58:57 +0000 (UTC)
X-Inumbo-ID: fa303717-5def-4e0f-8768-f75e3b99fd43
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605801538; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ysR6G+ilv1ZK/CN9cDx2z22TTMlLoSfLmZbOqMSI0v8=;
	b=q4bJoCGpHCJ+SgQ7ExVNkDi7bSAgbXzag6FkidTDhvDWM5RSkFpo92t5N23MZ1rV3nzQMy
	C/LJxUkbFMAb01xdnHk9N5BtSPpzzwIMcVhUZBlzLnnkoi0rutsD2A8+XrER6WoO0AHrzq
	X/CHsySCe9CIisxyr6kNYVVNSpNUZ/Q=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] AMD/IOMMU: avoid UB in guest CR3 retrieval
Message-ID: <1a5bca28-b37c-eaa7-3a64-51428d24915f@suse.com>
Date: Thu, 19 Nov 2020 16:58:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Found by looking for patterns similar to the one Julien did spot in
pci_vtd_quirks(). (Not that it matters much here, considering the code
is dead right now.)

Fixes: 3a7947b69011 ("amd-iommu: use a bitfield for DTE")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -70,7 +70,8 @@ static void guest_iommu_disable(struct g
 
 static uint64_t get_guest_cr3_from_dte(struct amd_iommu_dte *dte)
 {
-    return ((dte->gcr3_trp_51_31 << 31) | (dte->gcr3_trp_30_15 << 15) |
+    return (((uint64_t)dte->gcr3_trp_51_31 << 31) |
+            (dte->gcr3_trp_30_15 << 15) |
             (dte->gcr3_trp_14_12 << 12)) >> PAGE_SHIFT;
 }
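[Editor's illustration, not part of the patch: the UB arises because a bitfield narrower than int promotes to signed int before the shift, so `<< 31` can move a 1 into the sign bit. A standalone sketch with hypothetical field widths inferred from the names (not copied from Xen's headers), casting all three terms for uniformity although only the 31-bit shift strictly needs it:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the guest-CR3 fields in the DTE; widths are
 * inferred from the field names, not copied from Xen's headers. */
struct dte_gcr3 {
    unsigned int gcr3_trp_14_12:3;
    unsigned int gcr3_trp_30_15:16;
    unsigned int gcr3_trp_51_31:21;
};

/*
 * Each bitfield promotes to (signed) int before the shift, so a bare
 * `d->gcr3_trp_51_31 << 31` would be undefined for any non-zero value.
 * Widening to uint64_t first, as the patch does, keeps the shift defined
 * and preserves the upper address bits.
 */
static uint64_t get_guest_cr3(const struct dte_gcr3 *d)
{
    return (((uint64_t)d->gcr3_trp_51_31 << 31) |
            ((uint64_t)d->gcr3_trp_30_15 << 15) |
            ((uint64_t)d->gcr3_trp_14_12 << 12)) >> 12; /* PAGE_SHIFT */
}
```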
 


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:02:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31048.61262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmOa-0007Fi-7X; Thu, 19 Nov 2020 16:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31048.61262; Thu, 19 Nov 2020 16:02:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmOa-0007Fb-4Q; Thu, 19 Nov 2020 16:02:52 +0000
Received: by outflank-mailman (input) for mailman id 31048;
 Thu, 19 Nov 2020 16:02:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfmOZ-0007FW-Cw
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:02:51 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a21a3481-f5c9-40bb-afd6-2fbf3b204106;
 Thu, 19 Nov 2020 16:02:50 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id b6so7009616wrt.4
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:02:50 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id t11sm418333wmf.35.2020.11.19.08.02.48
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 19 Nov 2020 08:02:49 -0800 (PST)
X-Inumbo-ID: a21a3481-f5c9-40bb-afd6-2fbf3b204106
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=HuGQ3lQ03KGnHXC3BKlflZ3nowvXdSaMhL4k2L0BD2M=;
        b=gx00K8ajFIMTdY+C8DB0TbXQgAP8YaLyEHZ88VNCIrQeJ6IaDjBjgrB3A33by8Hnrr
         PNGiratUaDXWttOpmOqPZVkbpS/FfkK+Ylqz6oPGmZWPys+NAOgY9SiEDNkFPM8VM5ZJ
         yNzvZtIApDidsiTSCy9gm9kUWEueqxJvJdBBRpl1rdPf4hgyjPDR/Wg7eHT2aATzjHrE
         I8QXITSl3E2+V6KuAle9bzgZ0dIpc4NwumFBDsS5vQwV4pVHIMb/FbzTufIcpkSkVpLb
         il0v4LIGHBzsIBYIxrzpcgAts0qMpuBdADk/ml5MwroZnwLkXaSZuxdY490FwFJwcDqF
         4gTg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=HuGQ3lQ03KGnHXC3BKlflZ3nowvXdSaMhL4k2L0BD2M=;
        b=PevSF2oAsCe9zQfOL83ynI4ocNyOEIjWCLoIJUjbMSe4k496R9DvdZtKV4kVpNuQ5R
         qr3E3FkXd4VdrEJDDubV47W63NPBzHc+HgLemZM2syQpuAPhoNVre9qpNCLAZSpmM0kf
         8vQv7tMdld6gJSnCtme8GgmHv2TK+SJ9aVbCzn72fg8iE55+wPXxGAxIxHwsZJJl9BpR
         nU8M1sdnkYwHk8gUKPaCIUOUquceCjp8/pL8QzRaoer4Dc/n72qTWNn4Gl7AGsRh+y8k
         INOleDVBfqmgrnITAmUutqPTK0JNsJucJBa25urrKEjfoo0/prDRs4LBKuWndDLnbSEY
         tzUQ==
X-Gm-Message-State: AOAM531PTjfnm6oNnob2xBHbn6D+AticjVWMmteC5IeTVR4/jODBEhvF
	Kiqkul9ruEr5prhC2oh9mKE=
X-Google-Smtp-Source: ABdhPJxoVO7tqWmWxrswJn8yoH4DYRqq33/CW3fAf4jWthgYI2sCCrr8EgCURFtoZ3KYcnJ2kKl0HQ==
X-Received: by 2002:adf:f3d2:: with SMTP id g18mr11551682wrp.77.1605801769679;
        Thu, 19 Nov 2020 08:02:49 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org> <20201111200721.30551-4-paul@xen.org> <01c7747e-70d0-e32b-45a6-afc1688c1741@suse.com>
In-Reply-To: <01c7747e-70d0-e32b-45a6-afc1688c1741@suse.com>
Subject: RE: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
Date: Thu, 19 Nov 2020 16:02:48 -0000
Message-ID: <00c901d6be8d$6d7a5c10$486f1430$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogKUvRrlAaY1NKepRytzUA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 08:46
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
>
> On 11.11.2020 21:07, Paul Durrant wrote:
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
> >      XFREE(d->arch.hvm.viridian);
> >  }
> >
> > +struct hypercall_vpmask {
> > +    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
> > +};
> > +
> > +static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
> > +
> > +static void vpmask_empty(struct hypercall_vpmask *vpmask)
>
> const?

Yes, I suppose that's ok for all these since the outer struct is not changing... It's a bit misleading though.

>
> > +{
> > +    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
> > +}
> > +
> > +static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
> > +{
> > +    __set_bit(vp, vpmask->mask);
>
> Perhaps assert vp in range here and ...
>
> > +}
> > +
> > +static void vpmask_fill(struct hypercall_vpmask *vpmask)
> > +{
> > +    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
> > +}
> > +
> > +static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
> > +{
> > +    return test_bit(vp, vpmask->mask);
>
> ... here?

Ok.

>
> This also wants const again.
>
> > @@ -567,13 +594,29 @@ static int hvcall_flush(union hypercall_input *input,
> >       * so err on the safe side.
> >       */
> >      if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
> > -        input_params.vcpu_mask = ~0ul;
> > +        vpmask_fill(vpmask);
> > +    else
> > +    {
> > +        unsigned int vp;
> > +
> > +        vpmask_empty(vpmask);
> > +        for (vp = 0; vp < 64; vp++)
>
> Nit: Missing blanks.
>

Oh yes. You can tell I was moving between this and libxl code :-)

> > +        {
> > +            if ( !input_params.vcpu_mask )
> > +                break;
> > +
> > +            if ( input_params.vcpu_mask & 1 )
> > +                vpmask_set(vpmask, vp);
> > +
> > +            input_params.vcpu_mask >>= 1;
> > +        }
>
> Wouldn't bitmap_zero() + bitmap_copy() (in a suitable wrapper)
> be quite a bit cheaper a way to achieve the same effect?
>

Yes, I guess that would work.

  Paul

> Jan
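[Editor's sketch of the bitmap_zero()/bitmap_copy() idea discussed above, outside Xen: the struct and helper names mirror the patch but are stand-ins, and Xen's real bitmap_*() helpers differ in detail.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HVM_MAX_VCPUS 128
#define BITS_PER_LONG (8 * (unsigned int)sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Stand-in for the per-cpu structure introduced by the patch. */
struct hypercall_vpmask {
    unsigned long mask[BITS_TO_LONGS(HVM_MAX_VCPUS)];
};

static int vpmask_test(const struct hypercall_vpmask *vpmask, unsigned int vp)
{
    return (vpmask->mask[vp / BITS_PER_LONG] >> (vp % BITS_PER_LONG)) & 1;
}

/*
 * Loop-free equivalent of the bit-by-bit copy in hvcall_flush(): zero the
 * bitmap, then drop the guest's 64-bit VP mask straight into the low words.
 * The memcpy() relies on a little-endian host (true for x86, the target
 * here); a generic version would assign 64-bit chunks to the long array.
 */
static void vpmask_set_mask64(struct hypercall_vpmask *vpmask, uint64_t m)
{
    memset(vpmask->mask, 0, sizeof(vpmask->mask));
    memcpy(vpmask->mask, &m, sizeof(m));
}
```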



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:04:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31062.61275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmQQ-0007Px-K5; Thu, 19 Nov 2020 16:04:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31062.61275; Thu, 19 Nov 2020 16:04:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmQQ-0007Pq-Gy; Thu, 19 Nov 2020 16:04:46 +0000
Received: by outflank-mailman (input) for mailman id 31062;
 Thu, 19 Nov 2020 16:04:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfmQP-0007Ph-BG
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:04:45 +0000
Received: from mail-wm1-x32b.google.com (unknown [2a00:1450:4864:20::32b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1cbf80b-d145-42c8-9268-81ee5ffa0c9a;
 Thu, 19 Nov 2020 16:04:44 +0000 (UTC)
Received: by mail-wm1-x32b.google.com with SMTP id 10so7696591wml.2
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:04:44 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id h2sm436496wme.45.2020.11.19.08.04.42
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 19 Nov 2020 08:04:43 -0800 (PST)
X-Inumbo-ID: d1cbf80b-d145-42c8-9268-81ee5ffa0c9a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=On7WtObYfJqkmBWHnyAjoLI3cKLxtSNPjmpL50xHFRI=;
        b=kauqrQB8AIZxEhk7ED7FpTkba7nnvpn22yiOG+FuxlbGs5Xill+1mzqnPLtUFRInGa
         gdQLnsXvSCKHv8p0qvZ4HrdNyv51EQN/GQQywYmWORxv2em0jjMhfJic4D33Xo7dP38a
         v2RKod8xRVTLzOkGeHu05uZ9jrmrxlxXS/m0cOkg8K/rtloF/IRBVIPbXJLFt1cO8kwB
         0jK1vCR7kbJthfgixI/oV1r4E4QDNPo0NTr1Ynt/XUXOskFTIJnlqhTOns0BtDExGIOG
         lsbzAyVgkXdz+FLYXS3EhopTCD8BX/kanR0zFubQJ+JDgddfxG4bkkGnZPRLsNXkCS60
         YWJw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=On7WtObYfJqkmBWHnyAjoLI3cKLxtSNPjmpL50xHFRI=;
        b=CTpR3W9AA4CqP3+rw6VMR9K0a35CWS+Q6B1ucHnX3uMHlPxFHv4+kDLFsSa/hndMe3
         Dkj+lFwlzwUyM7cf3oVkOpdlLQH+b71E7OkBC9btQVgLEDkAa1LRkDIV82l0HWIeoK35
         UeNx93W8QSD7j8dWllIcYvlrckau2FBA2MSScA4spLuzO3+/ch34njTsmN4HZxjhiLMu
         lR8mAx7p+h2as6F/lr3dOa2eynfnjU+Swn0HWeJX+Gq7PB26msVLAcx/50GnSwHQLe3d
         eGengBmxCtxYa0AGmU74tA5Z9o0l84WtYcPJAyOSp0ATlegj0UCm9DijSK/J/5IbNenK
         Wl7w==
X-Gm-Message-State: AOAM530Sl0MmCjTx0TTqiKG1yFOHKLSpQBqEO0Dj2U88cv9THiWrGldL
	+Q9K1L3j8rNvGSuI0LRo964=
X-Google-Smtp-Source: ABdhPJw6pLgZpb7nxKHqbHv0bFqQp2GJDXX10ie1Fge14RL1t+c7xcjbV32a+1pbg3EF/DE+wNZS9g==
X-Received: by 2002:a1c:3d02:: with SMTP id k2mr5463382wma.183.1605801883717;
        Thu, 19 Nov 2020 08:04:43 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org> <20201111200721.30551-5-paul@xen.org> <0e05cc16-c7b9-34a1-8862-a4e7d581189f@suse.com>
In-Reply-To: <0e05cc16-c7b9-34a1-8862-a4e7d581189f@suse.com>
Subject: RE: [PATCH 04/10] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Thu, 19 Nov 2020 16:04:42 -0000
Message-ID: <00ca01d6be8d$b17884a0$14698de0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogGYy4h+AwWv/32pRBDUgA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 08:50
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH 04/10] viridian: use hypercall_vpmask in hvcall_ipi()
>
> On 11.11.2020 21:07, Paul Durrant wrote:
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -533,6 +533,21 @@ static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
> >      return test_bit(vp, vpmask->mask);
> >  }
> >
> > +static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
> > +{
> > +    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
> > +}
> > +
> > +static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)
> > +{
> > +    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
>
> Perhaps again assert on vp's range?
>

Ok.

> > +}
> > +
> > +#define for_each_vp(vpmask, vp) \
> > +	for ((vp) = vpmask_first(vpmask); \
> > +	     (vp) < HVM_MAX_VCPUS; \
> > +	     (vp) = vpmask_next(vpmask, vp))
>
> Nit again: Missing blanks here and ...

Oh yes.

>
> > @@ -669,17 +693,20 @@ static int hvcall_ipi(union hypercall_input *input,
> >      if ( vector < 0x10 || vector > 0xff )
> >          return -EINVAL;
> >
> > -    for_each_vcpu ( currd, v )
> > +    vpmask_empty(vpmask);
> > +    for (vp = 0; vp < 64; vp++)
>
> ... here. I also wonder if the literal 64 couldn't be expressed in
> some suitable way.
>

Ok. I'll see if I can suitably macrotize it.

  Paul

> Jan
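[Editor's sketch of the iterator pattern under review, with the style nit fixed (blanks inside `for ( ... )`). `next_bit` is a naive stand-in for Xen's `find_next_bit()`, included only so the macro is self-contained.]

```c
#include <assert.h>

#define HVM_MAX_VCPUS 128
#define BITS_PER_LONG (8 * (unsigned int)sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Naive stand-in for Xen's find_next_bit(); returns `size` when no bit
 * at or above `start` is set. */
static unsigned int next_bit(const unsigned long *mask, unsigned int size,
                             unsigned int start)
{
    unsigned int i;

    for ( i = start; i < size; i++ )
        if ( (mask[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1 )
            return i;

    return size;
}

/* Same shape as the patch's for_each_vp(): visit each set bit in turn,
 * stopping when the search runs off the end of the mask. */
#define for_each_vp(mask, vp) \
    for ( (vp) = next_bit(mask, HVM_MAX_VCPUS, 0); \
          (vp) < HVM_MAX_VCPUS; \
          (vp) = next_bit(mask, HVM_MAX_VCPUS, (vp) + 1) )
```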



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:08:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:08:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31075.61287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmTq-0007cR-4N; Thu, 19 Nov 2020 16:08:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31075.61287; Thu, 19 Nov 2020 16:08:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmTq-0007cK-1I; Thu, 19 Nov 2020 16:08:18 +0000
Received: by outflank-mailman (input) for mailman id 31075;
 Thu, 19 Nov 2020 16:08:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfmTo-0007cF-4f
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:08:16 +0000
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b1fe409-9fe3-4371-933d-62cf64f628c8;
 Thu, 19 Nov 2020 16:08:15 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id o15so7015193wru.6
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:08:15 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id s2sm442348wmh.37.2020.11.19.08.08.13
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 19 Nov 2020 08:08:13 -0800 (PST)
X-Inumbo-ID: 3b1fe409-9fe3-4371-933d-62cf64f628c8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=vlGJvl+n48Fq+TCMCHaQE3qgoeZd8yDDxodtR8YJyuY=;
        b=nzn2y6WGgyWKaJ4Kwk6thNR7PcK3wllxIkC4sY6d7jNyGIR0Ta9ETugwLHScvHMdPj
         bAOM5CuSlTRq0sZcAuL0wN06wRyz75UMPR5F+m0aLhmuCp9qD5T/7fxWQuhRU+KPI8nW
         jA2zBRYbaa+BCva+Q4ZRZoq/iD63TUEysJSYptanQLsmUN3QuMeNU8a3rcZldXW5TIgz
         h00E9FUzjBYYgjmox4CN6EQ9ALVDoqldiNdkrohR1FHhcHc8pbdZ+++vwCyrBZ/OEamW
         yGZUZrdckdnY34Sz4yu3S4FlfsR1HjypqQtQWez+jcYGEGXPEVCQNtAX6jsDdCo+5AKA
         yEAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=vlGJvl+n48Fq+TCMCHaQE3qgoeZd8yDDxodtR8YJyuY=;
        b=uL0p2SxtLfJQAjtHa07WoGlE4a2CMMuJyNhywKCrFw2Nd0gdKRjxjHntHqgYI/7uAc
         0cEaID1b7pkZxuF3zWaStIesFk0G71RXwJncCwwq0AOk9J3/UVFFFbtMBf8rBAafuBg4
         EqijC8814lSwmApHq1s4O6DyH9aapTfazcT8hzAwt5nI7fhLPmDzV90ML0nG8R36whXb
         RPjR1yIYiXuXiqJ2Wa8aSNW/UB0O90fkXFz2E9mCtzMZFBJTByoutQOl4RW+ILe/IQEM
         xn39wJ7h+tmTKezW6eu8+8AXtmfq53LoYAnYPqTAaIlAB9hcQjSfUZ3uBxghHNUPps3C
         0Now==
X-Gm-Message-State: AOAM533I9tEIKUnUSgCVRBy6cgGS1NQC7KHuLPWQiXSGv23Epxew+yNX
	BAT7zwqhJFZBeHDPdbPRAvY=
X-Google-Smtp-Source: ABdhPJy2Z7A2iRrNoVU5XUeapMEd1B5B48MbWEw7EzoD3iprlTGrI9wZ6AGOUiFtipalcbjFF7sUFg==
X-Received: by 2002:adf:f382:: with SMTP id m2mr11416060wro.426.1605802094401;
        Thu, 19 Nov 2020 08:08:14 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org> <20201111200721.30551-6-paul@xen.org> <701b30f9-9552-84c0-63fa-0837b7939f4d@suse.com>
In-Reply-To: <701b30f9-9552-84c0-63fa-0837b7939f4d@suse.com>
Subject: RE: [PATCH 05/10] viridian: use softirq batching in hvcall_ipi()
Date: Thu, 19 Nov 2020 16:08:13 -0000
Message-ID: <00cb01d6be8e$2f138450$8d3a8cf0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogKFZsbGAWFRW+OpSc9NwA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 08:53
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH 05/10] viridian: use softirq batching in hvcall_ipi()
>
> On 11.11.2020 21:07, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
> > sending IPIs to a large number of processors. This patch modifies send_ipi()
> > (the worker function called by hvcall_ipi()) to also make use of the
> > mechanism when there are multiple bits set in the hypercall_vpmask. Hence a `nr`
> > field is added to the structure to track the number of set bits.
>
> This is kind of unusual, i.e. we don't do so elsewhere. I take it the
> assumption is that using bitmap_weight() is too much overhead?

It just seemed wasteful in the circumstances. If I move to bitmap copy OTOH then I'll have to use bitmap_weight().

>
> > @@ -509,6 +510,7 @@ void viridian_domain_deinit(struct domain *d)
> >
> >  struct hypercall_vpmask {
> >      DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
> > +    unsigned int nr;
> >  };
> >
> >  static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
> > @@ -516,21 +518,24 @@ static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
> >  static void vpmask_empty(struct hypercall_vpmask *vpmask)
> >  {
> >      bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
> > +    vpmask->nr = 0;
> >  }
> >
> >  static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp)
> >  {
> > -    __set_bit(vp, vpmask->mask);
> > +    if ( !test_and_set_bit(vp, vpmask->mask) )
> > +        vpmask->nr++;
>
> If test_and_set_bit() is the correct thing to use here (rather
> than __test_and_set_bit()), the counter also needs updating
> atomically.
>

It doesn't need to be atomic, but I'll probably drop it.

> >  }
> >
> >  static void vpmask_fill(struct hypercall_vpmask *vpmask)
> >  {
> >      bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
> > +    vpmask->nr = HVM_MAX_VCPUS;
> >  }
> >
> >  static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
> >  {
> > -    return test_bit(vp, vpmask->mask);
> > +    return vpmask->nr && test_bit(vp, vpmask->mask);
>
> Is this in fact an improvement?
>

I think so, but it's a moot point if I drop 'nr'.

  Paul

> Jan
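[Editor's sketch of the bitmap_weight() alternative mentioned above: if `nr` is dropped, the count can be recomputed on demand. Xen's real helper lives in its bitmap code; `__builtin_popcountl` is a GCC/Clang builtin used here as a stand-in for the per-word popcount.]

```c
#include <assert.h>

#define HVM_MAX_VCPUS 128
#define BITS_PER_LONG (8 * (unsigned int)sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* On-demand population count over the VP mask, the alternative the
 * review suggests to maintaining `nr` by hand. */
static unsigned int vpmask_weight(const unsigned long *mask)
{
    unsigned int i, nr = 0;

    for ( i = 0; i < BITS_TO_LONGS(HVM_MAX_VCPUS); i++ )
        nr += __builtin_popcountl(mask[i]);

    return nr;
}
```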



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:08:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:08:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31076.61299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmU6-0007gA-Dt; Thu, 19 Nov 2020 16:08:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31076.61299; Thu, 19 Nov 2020 16:08:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmU6-0007g0-AE; Thu, 19 Nov 2020 16:08:34 +0000
Received: by outflank-mailman (input) for mailman id 31076;
 Thu, 19 Nov 2020 16:08:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfmU4-0007fd-VA
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:08:33 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 766091d8-f149-448d-824c-99df06a1aea4;
 Thu, 19 Nov 2020 16:08:32 +0000 (UTC)
X-Inumbo-ID: 766091d8-f149-448d-824c-99df06a1aea4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605802111;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=nSs4CrsxfMMQoxP8VXBC2GGpZarGmVENVa9Rl6O8stM=;
  b=JzACV2Mh4fB4t87i/sQUB0C/fKyuWb7HjWLpRO45W9X+l1lEYeQNQKoI
   pX0M25oX4gmg+ElMTZyWsFIfuttem2lPb69XE+AOk1nzTUHRFtnSj8cVc
   xsv2TRlYALwSsHDXZFZ1F8dGmlNniAY8R2X2i9ViGa6uBm1RD7TemtCuC
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CkKraE4jtMBsK8X7Xl1X8NxhY2hZJ95fAL4+kfQ+/A4q1xatLPgLVNFkjgKO1grXyV5Gd6VCa8
 a6GSrWepx9PCys2NOCkd76h35UtW6XOuVeuAecBROzfDHZH5oT8OE8PL+S58VHShFSQ/uT3fGM
 Q2E35O+s9vh47seOQhwrOXc0h46QCbBzRBa4J9WGn/KfDMsuapIkFix9NedPrEW6pC+eXIjgSV
 oXAFQo2o8aDewu5clLBCoy3ZDqoS5gjqQn/fFEOk29Tof1sD6OqeK/Vd/YQ5nnDQCiYealniss
 Qu4=
X-SBRS: None
X-MesageID: 31508117
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="31508117"
Subject: Re: [PATCH] SVM: avoid UB in intercept mask definitions
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <eaef863f-a396-58be-cb32-7a9e2535d9e2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fd257643-fd2c-edfe-ccef-fde3d03d514a@citrix.com>
Date: Thu, 19 Nov 2020 16:08:26 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <eaef863f-a396-58be-cb32-7a9e2535d9e2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 15:57, Jan Beulich wrote:
> Found by looking for patterns similar to the one Julien did spot in
> pci_vtd_quirks().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:11:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:11:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31089.61314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmXN-0000He-0t; Thu, 19 Nov 2020 16:11:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31089.61314; Thu, 19 Nov 2020 16:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmXM-0000HX-TY; Thu, 19 Nov 2020 16:11:56 +0000
Received: by outflank-mailman (input) for mailman id 31089;
 Thu, 19 Nov 2020 16:11:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfmXM-0000HS-81
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:11:56 +0000
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06528d73-8496-40c0-9407-fd32284ff86c;
 Thu, 19 Nov 2020 16:11:55 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id a186so4922318wme.1
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:11:55 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id f11sm297184wrs.70.2020.11.19.08.11.53
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 19 Nov 2020 08:11:54 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kfmXM-0000HS-81
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:11:56 +0000
X-Inumbo-ID: 06528d73-8496-40c0-9407-fd32284ff86c
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 06528d73-8496-40c0-9407-fd32284ff86c;
	Thu, 19 Nov 2020 16:11:55 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id a186so4922318wme.1
        for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:11:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=U6KMN71GE0FFhBDQM3+dgQDGGr3MyR1gzDm51G1UqQM=;
        b=puz/D8EAjXk6Az3NYeaavlZX7aaN3pQI298j4GOaJNIEi06mfNII3NS+C99+I4UOwI
         sug5jf28rFBr0DXlJ3MG4jGd53D2xZPuRGqCSGtICNp4+ALl4lb9LKDluBJChkUIZ1fK
         7xixl3TUgWxbJ7ncS7785AhonDZLyoUNavfPi0Qudj5xykY0K7oEY+PBdE+8VWb47nfL
         ySNbCnWv87BCI3sGPUN+QggYOUcYiA8iqtItNFUx7GaswsMwlLyXFQ/b3ZWaBUxUJqcb
         uylyEhHfrDWwu2mtGg0fkKL1zlodMCg3J5I+mXV+GYSP+rh/KOsEX1zbvMwg+Klbn2xF
         C58g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=U6KMN71GE0FFhBDQM3+dgQDGGr3MyR1gzDm51G1UqQM=;
        b=SkCdE6ucVucZ3jUfTghvgnYlFlVTBTT86IeXOhwYIATBffV56MQvj2Iv+kyoNUSMYF
         kkeNP2wihZC+3juGAWbBIMKcgr2lufojUjcO5sy0AnMGlGhbkRHGEegSkUwYrL0UtHRB
         E6EsvKbZZm/OcS7w4qvCXm+Y62W+Mazw6Jmp3pkPtnqFEFFqfCp4T1TvUW8dNwl9d6RG
         LbiwtJ3IAhu0RdwdBf7Wa56421N8h0rQgpLlb7EZAnUHbNja9xYEoWTkm3wlyE0xRHi7
         iagQdeksgkFTrtZBBT4JCYSI18+5ipTdhnUIRkxAj//UaHaALifZRcuUJ/UzhN43hWHN
         0LxQ==
X-Gm-Message-State: AOAM531gU6NNeCL2UiP4Z1VqE5kn/HM1z7zdHZ8IYG8KE4NxQLGmblR9
	1C3kvl07tPW/MoqQIheuvzI=
X-Google-Smtp-Source: ABdhPJwDNPThcVMp5xwjf6+U92YaRKfqwdPabk3c9lu/nCa7GuuP+/P5NW5rEWRgt022B3fbIAdjrw==
X-Received: by 2002:a1c:544c:: with SMTP id p12mr5310178wmi.146.1605802314559;
        Thu, 19 Nov 2020 08:11:54 -0800 (PST)
Received: from CBGR90WXYV0 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
        by smtp.gmail.com with ESMTPSA id f11sm297184wrs.70.2020.11.19.08.11.53
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 19 Nov 2020 08:11:54 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org> <20201111200721.30551-7-paul@xen.org> <dd6c4a0d-f611-7b81-8c95-72786891f311@suse.com>
In-Reply-To: <dd6c4a0d-f611-7b81-8c95-72786891f311@suse.com>
Subject: RE: [PATCH 06/10] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Thu, 19 Nov 2020 16:11:53 -0000
Message-ID: <00cc01d6be8e$b24973c0$16dc5b40$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogHMVqApAbxd85SpTMBtEA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 09:19
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH 06/10] viridian: add ExProcessorMasks variants of the flush hypercalls
> 
> On 11.11.2020 21:07, Paul Durrant wrote:
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -553,6 +553,83 @@ static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp
> >  	     (vp) < HVM_MAX_VCPUS; \
> >  	     (vp) = vpmask_next(vpmask, vp))
> >
> > +struct hypercall_vpset {
> > +        struct hv_vpset set;
> > +        uint64_t __bank_contents[64];
> 
> gcc documents this to be supported as an extension; did you check
> clang supports this, too?

By 'this', do you mean the assumption that that memory layout is consecutive?

> (I'd also prefer if the leading
> underscores could be dropped, but as you know I'm not the maintainer
> of this code.)
> 
> > +static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
> > +{
> > +    uint64_t bank_mask;
> > +    unsigned int nr = 0;
> > +
> > +    for ( bank_mask = vpset->valid_bank_mask; bank_mask; bank_mask >>= 1 )
> > +        if ( bank_mask & 1 )
> > +            nr++;
> > +
> > +    return nr;
> > +}
> 
> Simply use hweight64()?
>

Ok.

> > +static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set, size_t size,
> > +                                   struct hypercall_vpmask *vpmask)
> > +{
> > +    switch ( set->format )
> > +    {
> > +    case HV_GENERIC_SET_ALL:
> > +        vpmask_fill(vpmask);
> > +        return 0;
> > +
> > +    case HV_GENERIC_SET_SPARSE_4K:
> > +    {
> > +        uint64_t bank_mask;
> > +        unsigned int bank = 0, vp = 0;
> > +
> > +        vpmask_empty(vpmask);
> > +        for ( bank_mask = set->valid_bank_mask; bank_mask; bank_mask >>= 1 )
> > +        {
> > +            /* Make sure we won't dereference past the end of the array */
> > +            if ( (void *)(set->bank_contents + bank) >=
> > +                 (void *)set + size )
> > +            {
> > +                ASSERT_UNREACHABLE();
> > +                return -EINVAL;
> > +            }
> 
> Doesn't this come too late? I.e. don't you want to check instead
> (or also) that you won't overrun the space when copying in from
> the guest? And for the specific purpose here, doesn't it come too
> early, as you won't access any memory when the low bit of bank_mask
> isn't set?
>

I'll dry-run this again. It may make more sense to move the check.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:13:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:13:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31093.61325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmYW-0000PY-BU; Thu, 19 Nov 2020 16:13:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31093.61325; Thu, 19 Nov 2020 16:13:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmYW-0000PR-8B; Thu, 19 Nov 2020 16:13:08 +0000
Received: by outflank-mailman (input) for mailman id 31093;
 Thu, 19 Nov 2020 16:13:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfmYU-0000PI-Ri
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:13:06 +0000
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 940b0d52-f5ec-4d75-96a9-b94c4d500609;
 Thu, 19 Nov 2020 16:13:06 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id p19so6594939wmg.0
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:13:06 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id d8sm493640wmb.11.2020.11.19.08.13.04
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 19 Nov 2020 08:13:04 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kfmYU-0000PI-Ri
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:13:06 +0000
X-Inumbo-ID: 940b0d52-f5ec-4d75-96a9-b94c4d500609
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 940b0d52-f5ec-4d75-96a9-b94c4d500609;
	Thu, 19 Nov 2020 16:13:06 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id p19so6594939wmg.0
        for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:13:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=+xEjrlXEqXr16s5hjRQWV9J4WDJ7kJeluYlT6tOBBkU=;
        b=ixBI+tjmSLW+nJrwJ6Mgnh7hhov8ZKvRdVwDcGIjkZM4rsgMzrT4jL9A4W+zBOl67y
         H8/eMEsbu7UJhAM1vic4LagCCbsrJ5zc18F8UMWk6pAbiHWBSmEvwSsaw+mRCBlXX4hd
         0O0E+60ywoe3LT9z1nI9h7VJyBET0YRKKwKHLz/oCbVjB+9Df+aDP910J1GgT5ktByJS
         lP5vdDIej+20cevLWk3xTMp0dRSxxOC+NtW3DuKeo65V++xDUmL/W7ggl3a7cAES7vQm
         DFFFSVIMKtH0SD32rojoiy3syG0cHE/Nl0gtaNbdJDDZmF6LCoBJUi8I8RBRAiDhj3Ea
         poig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=+xEjrlXEqXr16s5hjRQWV9J4WDJ7kJeluYlT6tOBBkU=;
        b=qQYxSEgQ/DRs0XFFp/tR4tNoJZuAoWQfGrmTyHrzXYCdKImZu/kZ+YjMj+Z7PiaM2t
         Kz+k+2XUzq3D+/kBQZAUVBrkex/eDvviBHmWIohgaWdru5GFbDVrE5cPNuVqdUI3Jhnj
         eU28Zk4lcvdNz1XnIlT9URQvU4BLQq2/zSkSelNasi5IaDUGuowOE0R3fW+bmGIwrjaO
         LC8ZUJKI0vCJMIl2lO1q2Xe21bWqUcJu5irELXiM+kQnW/tedT1C7sk3Nh1WJ+JBOXJI
         hgQLw/RMybE7rtN7/sxTSdXeVKbe+HmGwCvFcVdPGH3tZ+8KHTj8tws7CEc6citXt/4T
         C6NQ==
X-Gm-Message-State: AOAM530f8mAs9lPf3OATHcj/ii5/3sWSR8CeCAnSW0aIE/brnkV37jNE
	bZSzLBuoPHDyV0YQyeqLVQ4=
X-Google-Smtp-Source: ABdhPJy8bm4JCybKxEdjfsIo9CWDm76eB+U9+vgSyzY4GdOx0TZrTHF8ktbKf/9SoR5bVECrelaVqg==
X-Received: by 2002:a1c:1d51:: with SMTP id d78mr5311515wmd.60.1605802385378;
        Thu, 19 Nov 2020 08:13:05 -0800 (PST)
Received: from CBGR90WXYV0 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
        by smtp.gmail.com with ESMTPSA id d8sm493640wmb.11.2020.11.19.08.13.04
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 19 Nov 2020 08:13:04 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org> <20201111200721.30551-9-paul@xen.org> <c9a9f41a-871c-65c3-74b6-e5063261210b@suse.com>
In-Reply-To: <c9a9f41a-871c-65c3-74b6-e5063261210b@suse.com>
Subject: RE: [PATCH 08/10] viridian: log initial invocation of each type of hypercall
Date: Thu, 19 Nov 2020 16:13:04 -0000
Message-ID: <00cd01d6be8e$dc83d8b0$958b8a10$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogHSIwEYAfICsrypSuYeoA==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 12 November 2020 09:23
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH 08/10] viridian: log initial invocation of each type of hypercall
> 
> On 11.11.2020 21:07, Paul Durrant wrote:
> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -59,6 +59,14 @@ struct viridian_domain
> >  {
> >      union hv_guest_os_id guest_os_id;
> >      union hv_vp_assist_page_msr hypercall_gpa;
> > +    unsigned long hypercall_flags;
> > +
> > +#define _HCALL_spin_wait 0
> > +#define _HCALL_flush 1
> > +#define _HCALL_flush_ex 2
> > +#define _HCALL_ipi 3
> > +#define _HCALL_ipi_ex 4
> 
> I'd like to suggest for this to either be unsigned int until
> more than 32 bits are needed, or be using DECLARE_BITMAP()
> right away.

Ok. I may just go straight for the bitmap then.

  Paul

>
> Jan



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31108.61356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmf9-0000tW-Jz; Thu, 19 Nov 2020 16:19:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31108.61356; Thu, 19 Nov 2020 16:19:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmf9-0000tP-Gl; Thu, 19 Nov 2020 16:19:59 +0000
Received: by outflank-mailman (input) for mailman id 31108;
 Thu, 19 Nov 2020 16:19:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfmf8-0000t8-IS
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:19:58 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be112a15-e59b-44f3-b2a0-d7a1d432c441;
 Thu, 19 Nov 2020 16:19:57 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kfmf8-0000t8-IS
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:19:58 +0000
X-Inumbo-ID: be112a15-e59b-44f3-b2a0-d7a1d432c441
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id be112a15-e59b-44f3-b2a0-d7a1d432c441;
	Thu, 19 Nov 2020 16:19:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605802797;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=VLQh6lkvagwdS8OlSONfiSuYbIOqebtEPK9Fh7AzHC4=;
  b=DiHNdKfkemOXMzrOGCNKTZ6/OE7Xc2OAmQTjx5f9sQgOitOD/cwMhTcy
   qwZgK9I6UpcNLUqcgYewF+z/rDcR7/Afx5YkNOXbx0zPSMOzsrFKCkh1o
   G4j8Rf2VN8ueQyIRU/03VrV99DnaUnnxsXTJA5PRKvlpkkS/mdvVD7Yb0
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fZUwVa/+/b7Vfk7M1QsemrxC8ZVToTBLgdOJyB+VGAo6v3UKBKdtm0/WW/WTwn5LgmJZcJpIKB
 d6CrhNLrH7KJjQ9w0mnHIVuRy6PHejJSIVCGRcWWFTxfelC6cBbY0c9rk5uOIvQpOs+75nEz6s
 lR9aRv9ThQrst4LBHKbD5M+hXK9/rDLpy3JChvIDC320Qp+76YU9RYvewqT1U7aqBLNKli1bb+
 nHzzXFM1+w/eUBETcqjoRMWQPtviY1hHn/eqdaIgeSHO7lRrNH+ZeerP5hVKqY6qVo6tva0Nts
 fdw=
X-SBRS: None
X-MesageID: 32687863
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="32687863"
Subject: Re: [PATCH] x86/Intel: avoid UB with NMI watchdog on family 15 CPUs
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <63500eb6-b1da-ce08-52e2-00b30ffe2c26@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1c2ffdcb-577d-8bea-35e3-904777a0c2e5@citrix.com>
Date: Thu, 19 Nov 2020 16:10:13 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <63500eb6-b1da-ce08-52e2-00b30ffe2c26@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 15:57, Jan Beulich wrote:
> Found by looking for patterns similar to the one Julien did spot in
> pci_vtd_quirks().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Subject is wonky.  Is it P4 (Intel), or Fam15 (AMD)?  I'd be tempted to
have the prefix as x86/nmi: either way.

With that suitably adjusted, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31107.61343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmf6-0000qs-BO; Thu, 19 Nov 2020 16:19:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31107.61343; Thu, 19 Nov 2020 16:19:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmf6-0000ql-8K; Thu, 19 Nov 2020 16:19:56 +0000
Received: by outflank-mailman (input) for mailman id 31107;
 Thu, 19 Nov 2020 16:19:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfmf4-0000o3-KK
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:19:54 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9da1d026-0909-4c55-8516-8124d7df20ba;
 Thu, 19 Nov 2020 16:19:53 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kfmf4-0000o3-KK
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:19:54 +0000
X-Inumbo-ID: 9da1d026-0909-4c55-8516-8124d7df20ba
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9da1d026-0909-4c55-8516-8124d7df20ba;
	Thu, 19 Nov 2020 16:19:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605802792;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=GU3Q/PWBkJSPaUkOtoAnL2oe9HsQ1Z4B7C6u3QJgIpA=;
  b=JKpg5B62SP9ZJqDhV1ggaZLdv79F+R/aB6zdYAbsoZVXfuU/NmzvZBZ3
   INp+lxb7J0/pigwVsQG65MlqVBJ+84m3BJqCaKFrIWRz6oZvU/IpHgFy+
   Cz0ADtKNjwocQGrQwaTGe4gXwSxsNhub2wLz57CvWoTmmsyj79urcH56f
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 3rYK65YTP0NsSE5KBh3xUx7YIq/BgJ0nNvC2qpmKuCSp/AcSivp9lSmK7f+HSzHZ7b3lpgChHM
 6Qpb4DQOWPbqKEHO+nJppyS8gn7z5Ab/iF+J3bXRInaViOLNiI9l8wGgTeDySj5j9faa5Xvbzi
 bWjEDzqtf/TYV47jxtdr70Bdlh2hCm0bqq4FxaWK6F2FqtymZS65CT80ZGH3NYF0+ygxXgf//n
 ewMMSzqcfNa/e9/LJtF1HhX0MsVI8mmnYKdDzxTkIr6igVvRJ8koDUJqoXY6zDJgr69rMeNdQ7
 9hs=
X-SBRS: None
X-MesageID: 31773387
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="31773387"
Subject: Re: [PATCH] AMD/IOMMU: avoid UB in guest CR3 retrieval
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <paul@xen.org>
References: <1a5bca28-b37c-eaa7-3a64-51428d24915f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <913bdd04-29d6-77c4-3c20-1e200c92e0ce@citrix.com>
Date: Thu, 19 Nov 2020 16:12:09 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1a5bca28-b37c-eaa7-3a64-51428d24915f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 15:58, Jan Beulich wrote:
> Found by looking for patterns similar to the one Julien did spot in
> pci_vtd_quirks(). (Not that it matters much here, considering the code
> is dead right now.)
>
> Fixes: 3a7947b69011 ("amd-iommu: use a bitfield for DTE")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

There is *still* an outstanding regression (modulo dead code) in one of
these bitfield-ifications which is off by 12, but I can't remember if it
is this one or not.

>
> --- a/xen/drivers/passthrough/amd/iommu_guest.c
> +++ b/xen/drivers/passthrough/amd/iommu_guest.c
> @@ -70,7 +70,8 @@ static void guest_iommu_disable(struct g
>  
>  static uint64_t get_guest_cr3_from_dte(struct amd_iommu_dte *dte)
>  {
> -    return ((dte->gcr3_trp_51_31 << 31) | (dte->gcr3_trp_30_15 << 15) |
> +    return (((uint64_t)dte->gcr3_trp_51_31 << 31) |
> +            (dte->gcr3_trp_30_15 << 15) |
>              (dte->gcr3_trp_14_12 << 12)) >> PAGE_SHIFT;
>  }
>  



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:22:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31118.61371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmhU-0001vX-3r; Thu, 19 Nov 2020 16:22:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31118.61371; Thu, 19 Nov 2020 16:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmhU-0001vQ-06; Thu, 19 Nov 2020 16:22:24 +0000
Received: by outflank-mailman (input) for mailman id 31118;
 Thu, 19 Nov 2020 16:22:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfmhS-0001vL-Pb
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:22:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9aadc79-7f68-4a2a-8586-b5271563f11c;
 Thu, 19 Nov 2020 16:22:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 42E74AC54;
 Thu, 19 Nov 2020 16:22:20 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kfmhS-0001vL-Pb
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:22:22 +0000
X-Inumbo-ID: d9aadc79-7f68-4a2a-8586-b5271563f11c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d9aadc79-7f68-4a2a-8586-b5271563f11c;
	Thu, 19 Nov 2020 16:22:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605802940; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=n/9I4ebCYRTJCMWc7tmCqF9ONvCm8xUIyXovhL9gUiQ=;
	b=Ypy5dfipuRb7P7fuDcfpNXxWXNQB+OqlfZGjjF3sFSsKuqPVAxLfDsA7onGelroJcWD3lj
	tIGDMpsa4jomsAjIT7bbmWGPTBlh9fjxliXSDV0NO6xsHHlRgdS0t9708dFbbzBT0VUXqf
	wpiG8ffnc97oEyBLe8To9qBN4O2wX1M=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 42E74AC54;
	Thu, 19 Nov 2020 16:22:20 +0000 (UTC)
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <022dbcfa-a7fc-f348-1a2c-5107fbc3911a@suse.com>
Date: Thu, 19 Nov 2020 17:22:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.11.2020 16:54, Rahul Singh wrote:
>> On 19 Nov 2020, at 10:16 am, Julien Grall <julien@xen.org> wrote:
>> On 19/11/2020 09:53, Jan Beulich wrote:
>>> Well, see my patch suppressing building of quite a part of it.
>>
>> I will let Rahul figure out whether your patch series is sufficient to fix the compilation issues (this is what matters right now).
> 
> I just checked the compilation errors for ARM after enabling HAS_PCI on ARM. I am observing the same compilation errors that I observed previously.
> There are two new errors, relating to uart_config and uart_param, as those arrays are defined globally but only used under X86 flags.
> 
> At top level:
> ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
>  static const struct ns16550_config __initconst uart_config[] =
>                                                 ^~~~~~~~~~~
> ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
>  static const struct ns16550_config_param __initconst uart_param[] = { 

Looks like there is more code movement I need to do in my patch, then. I was
never happy about the disconnected placement of these arrays and
the functions using them ...
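For illustration, the class of error under discussion can be reproduced in a reduced form (hypothetical names and values; the real arrays live in ns16550.c): a file-scope const array whose only readers are compiled out under an x86-only guard trips -Werror=unused-const-variable on ARM, and guarding (or relocating) the data alongside its users avoids that.

```c
#include <stddef.h>

/* Hypothetical stand-in for the real Kconfig symbol. */
#define CONFIG_X86 1

#if CONFIG_X86
/* Stand-in for the uart_config/uart_param arrays: guarded the same way
 * as the only code that reads them, so non-x86 builds never see an
 * unused const variable. */
static const int uart_clock_hz[] = { 1843200, 48000000 };

static int uart_clock(size_t i)
{
    return i < sizeof(uart_clock_hz) / sizeof(uart_clock_hz[0])
           ? uart_clock_hz[i] : -1;
}
#endif
```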

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:38:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:38:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31139.61382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmwT-0003CG-HO; Thu, 19 Nov 2020 16:37:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31139.61382; Thu, 19 Nov 2020 16:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmwT-0003C9-EU; Thu, 19 Nov 2020 16:37:53 +0000
Received: by outflank-mailman (input) for mailman id 31139;
 Thu, 19 Nov 2020 16:37:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfmwR-0003C4-Ex
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:37:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0778bc08-21cd-4961-a343-cc8cfe4918a0;
 Thu, 19 Nov 2020 16:37:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A209DAC2D;
 Thu, 19 Nov 2020 16:37:49 +0000 (UTC)
X-Inumbo-ID: 0778bc08-21cd-4961-a343-cc8cfe4918a0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605803869; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=V+Xf18YHgnei9DXh5gYTPxA+nYmJ8ESY2h/5F4AwS/E=;
	b=Ac5gIp0TMzRzPxZnVNjrxvenqTYQtUesFqwDYg32cY5y2ea41zHuhlkn+Iic6Q9y2C3mOJ
	wf3gDRP0fF0514MT+gPrKn5T2rZg2sCyxRSrC1TUsAIu9qIiGrnhzIKA3/3RM0nez9LjIX
	y5qZDqFUdmQ2SwWcbXQOKSCKFVgd550=
Subject: Re: [PATCH] x86/Intel: avoid UB with NMI watchdog on family 15 CPUs
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <63500eb6-b1da-ce08-52e2-00b30ffe2c26@suse.com>
 <1c2ffdcb-577d-8bea-35e3-904777a0c2e5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e056d6ff-aceb-e4f9-1fe8-a41c482e34bc@suse.com>
Date: Thu, 19 Nov 2020 17:37:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <1c2ffdcb-577d-8bea-35e3-904777a0c2e5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.11.2020 17:10, Andrew Cooper wrote:
> On 19/11/2020 15:57, Jan Beulich wrote:
>> Found by looking for patterns similar to the one Julien did spot in
>> pci_vtd_quirks().
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Subject is wonky.  Is it P4 (Intel), or Fam15 (AMD)?  I'd be tempted to
> have the prefix as x86/nmi: either way.

With this code:

    case X86_VENDOR_INTEL:
        switch (boot_cpu_data.x86) {
        case 6:
            setup_p6_watchdog((boot_cpu_data.x86_model < 14) 
                              ? P6_EVENT_CPU_CLOCKS_NOT_HALTED
                              : CORE_EVENT_CPU_CLOCKS_NOT_HALTED);
            break;
        case 15:
            if (!setup_p4_watchdog())

I think qualifying it like I did is quite reasonable. Hence ...

> With that suitably adjusted, Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

... I'd prefer to keep it as is - please clarify.
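For context, the vendor switch is what disambiguates the family number, which a reduced sketch makes explicit (hypothetical helper, not the actual watchdog code; Intel family 15 is the Pentium 4 / NetBurst line, whereas AMD's family 15 is a different microarchitecture):

```c
/* Hypothetical reduction of the dispatch quoted above: under
 * X86_VENDOR_INTEL, family 15 can only mean the Pentium 4 line,
 * which is why "family 15" plus the Intel qualifier is unambiguous. */
enum wd_kind { WD_NONE, WD_P6, WD_CORE, WD_P4 };

static enum wd_kind intel_watchdog_kind(unsigned int family,
                                        unsigned int model)
{
    switch (family) {
    case 6:
        /* Pre-Core models use the P6 event, later ones the Core event. */
        return model < 14 ? WD_P6 : WD_CORE;
    case 15:
        return WD_P4;
    default:
        return WD_NONE;
    }
}
```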

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:38:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31148.61399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmxP-0003LP-Td; Thu, 19 Nov 2020 16:38:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31148.61399; Thu, 19 Nov 2020 16:38:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmxP-0003LI-Qa; Thu, 19 Nov 2020 16:38:51 +0000
Received: by outflank-mailman (input) for mailman id 31148;
 Thu, 19 Nov 2020 16:38:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfmxO-0003LC-QN
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:38:50 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d8923b2-a5ac-4564-bffd-c1eb8e33f105;
 Thu, 19 Nov 2020 16:38:49 +0000 (UTC)
X-Inumbo-ID: 8d8923b2-a5ac-4564-bffd-c1eb8e33f105
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605803929;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XW+It94G6+dYFOkryCtNAWQQoMoEuCBf2FsJzmnphcg=;
  b=Mxve+bQzubBaq2M7gAdChjZLlIFlWzuFRPV68+rmIvxIfTMAKwGxk1zC
   CkKJRHpSf719c4h7KduUlIZPdcKXpX7rRLBbBRvZeC/AghQzlSlnjx69d
   47CyGvkb60J0FL/GJkzSCBwvwkUInWlj/n0//vboeGNJZo8QAkIYhs9tg
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fnz8AP2Zq5ukmTk8F97QD9SQiLysb9NgrqMW2iDYaIoFfBYmNC6YzApTzkuMavP3DHPV7N7Gzb
 YsoHBBrNhP1NBQKkIrs4y2F7kQeC2KrC7ESlHoHLqZsOS/mQRPXDLz1aP/xDzNYX+BxmLpHVAP
 1UmzmUgAzP6hN4u57mWHSoiBLVLq7obWAXYeX4noAKxI1Lz7ZeyYD7RZTQ4jXpjSabMZ/ApvSQ
 uHopSMp0YMfwPcHsVVm9XhmCsKGtYaLnl+wzqtWVbQJ6RQrkGOJLMnLlEhc68SrHSP438ly34m
 l3U=
X-SBRS: None
X-MesageID: 32690080
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="32690080"
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior pci_vtd_quirks()
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Kevin Tian <kevin.tian@intel.com>
References: <20201119145216.29280-1-julien@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <16b256f5-1ceb-c12f-ff7b-9c6f1a5cc3cb@citrix.com>
Date: Thu, 19 Nov 2020 16:38:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201119145216.29280-1-julien@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 14:52, Julien Grall wrote:
> Note that the splat is from 4.11.4 and not staging. The problem is
> still present, though.
>
> This can be solved by making the first operand unsigned int.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> CR: https://code.amazon.com/reviews/CR-38873112

IIRC, this is an internal link, which doesn't want including on the
upstream commit?
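The class of fix described in the quoted commit message — making the first operand of a shift unsigned so the result cannot overflow a signed int — can be sketched as follows (hypothetical reduction, not the actual pci_vtd_quirks() code):

```c
#include <stdint.h>

/* Hypothetical reduction: with a signed literal, 1 << 31 shifts into the
 * sign bit, which is undefined behaviour in C. Using an unsigned first
 * operand makes the shift well defined for any position up to bit 31. */
static uint32_t bit_mask(unsigned int n)
{
    return 1u << n;    /* was: 1 << n, UB for n == 31 */
}
```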

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:40:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:40:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31153.61410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmzC-0004Ht-9b; Thu, 19 Nov 2020 16:40:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31153.61410; Thu, 19 Nov 2020 16:40:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmzC-0004Hm-6Q; Thu, 19 Nov 2020 16:40:42 +0000
Received: by outflank-mailman (input) for mailman id 31153;
 Thu, 19 Nov 2020 16:40:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfmzA-0004Hh-P9
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:40:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfmz9-0000Is-1N; Thu, 19 Nov 2020 16:40:39 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfmz8-00036f-Nn; Thu, 19 Nov 2020 16:40:38 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Siu+kiN1q+xZWaEI+EZxnH3bFP7l5kBVOzbq2GMdbxk=; b=CkZ0vEmHdc00niKoMHyeFy9TQH
	hEiI9h5/oS/e4c1EW7IYVHj+FWBerKzl3yw+hFrTEy4lQjcne3Zpn+rpoaEiS/+ZCoEAi7K9h2EzW
	hHEHJfsVPqBWuNPugsiukpgBwtuU5MSyy6lnT8sxMpdGvNFb8d1NFnRNhOhsCNUZsth8=;
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior pci_vtd_quirks()
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Kevin Tian <kevin.tian@intel.com>
References: <20201119145216.29280-1-julien@xen.org>
 <16b256f5-1ceb-c12f-ff7b-9c6f1a5cc3cb@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <12223d32-c1da-2b6e-1193-93b6ca113953@xen.org>
Date: Thu, 19 Nov 2020 16:40:37 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <16b256f5-1ceb-c12f-ff7b-9c6f1a5cc3cb@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/11/2020 16:38, Andrew Cooper wrote:
> On 19/11/2020 14:52, Julien Grall wrote:
>> Note that the splat is from 4.11.4 and not staging. The problem is
>> still present, though.
>>
>> This can be solved by making the first operand unsigned int.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> CR: https://code.amazon.com/reviews/CR-38873112
> 
> IIRC, this is an internal link, which doesn't want including on the
> upstream commit?

Yes. I forgot to sanitize the commit message when sending it.

I will remove it while committing unless there is a need to send a new 
version.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:41:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31161.61423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmzm-0004UN-JU; Thu, 19 Nov 2020 16:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31161.61423; Thu, 19 Nov 2020 16:41:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfmzm-0004UF-Fq; Thu, 19 Nov 2020 16:41:18 +0000
Received: by outflank-mailman (input) for mailman id 31161;
 Thu, 19 Nov 2020 16:41:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfmzl-0004U6-KN
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:41:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5438bd50-1e91-4067-933c-827eb51a63ac;
 Thu, 19 Nov 2020 16:41:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1053AC2D;
 Thu, 19 Nov 2020 16:41:14 +0000 (UTC)
X-Inumbo-ID: 5438bd50-1e91-4067-933c-827eb51a63ac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605804075; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pwpxDHOqqKFXjl+2b+i8GLr9f2p0gPevl0IRQU3xjg0=;
	b=IXC6vshs5jO4mQjsPVoKXO4YiOcZKqSyA03SO9VSchfxB3nuVL0iZgXGfvJhMAZY7S8zU5
	GxYpZllh461lMWdrgVjzsgydJJITfjbmaHS3a2X7WneaWZme9mIU8NZLonkw/ehJ1bnViT
	D/NIWK20N8UgMtcFJjDNfGxVj5rZwUs=
Subject: Re: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and
 accessor functions...
To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-4-paul@xen.org>
 <01c7747e-70d0-e32b-45a6-afc1688c1741@suse.com>
 <00c901d6be8d$6d7a5c10$486f1430$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0ec90042-cb19-320e-1676-409b68b73a51@suse.com>
Date: Thu, 19 Nov 2020 17:41:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <00c901d6be8d$6d7a5c10$486f1430$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 17:02, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 12 November 2020 08:46
>>
>> On 11.11.2020 21:07, Paul Durrant wrote:
>>> --- a/xen/arch/x86/hvm/viridian/viridian.c
>>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
>>> @@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
>>>      XFREE(d->arch.hvm.viridian);
>>>  }
>>>
>>> +struct hypercall_vpmask {
>>> +    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
>>> +};
>>> +
>>> +static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
>>> +
>>> +static void vpmask_empty(struct hypercall_vpmask *vpmask)
>>
>> const?
> 
> Yes, I suppose that's ok for all these since the outer struct is
> not changing... It's a bit misleading though.

I'd be curious to learn about that "misleading" aspect.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:42:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:42:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31165.61435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn0Q-0004bU-Th; Thu, 19 Nov 2020 16:41:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31165.61435; Thu, 19 Nov 2020 16:41:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn0Q-0004bN-PX; Thu, 19 Nov 2020 16:41:58 +0000
Received: by outflank-mailman (input) for mailman id 31165;
 Thu, 19 Nov 2020 16:41:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfn0P-0004bD-Mk
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:41:57 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f2b2b57-1996-4ad2-a8f9-7a48cc5e5a55;
 Thu, 19 Nov 2020 16:41:56 +0000 (UTC)
X-Inumbo-ID: 4f2b2b57-1996-4ad2-a8f9-7a48cc5e5a55
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605804116;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=pp5O/TkDrpUj+vxBbksvm2fig6K3QAat0xWShWQy/zA=;
  b=MHPohreBd5eJ7SlLofYAroYuAdhpeNiQx9lXFN9Cg2EcmQveUxLWpsXT
   v4rINTQN6YAQSKPK/KhUood/DgRN+zENYs+GqmYng+e2kqwd+n0Io/aY4
   6xz4tAYFhlZUhOC4bBf3j7gb3wYdyUGM+YADxRG4/qHunoLSg+7ybFmo4
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: GQtfO1VChODKd5TOpknbO/rvyoyxzvPkOPnLVGARPnfQplQJqAjVlmVVG/TcPWYOovfeoqow3E
 lye7Vdes8VH+T10tQRS++uIWkPPZABAjBiWd4mY/t7jL4iiE5KXzJP5R/UzbcgcgiG9iE4+bJa
 n7qVb4GOjXvjXcc/Yvb7JeK4/NqqjbzOncd1Fwyk/IV+xaRnkCfTp+ul+2UzJ6T9GA634Zamzg
 yd3dvhTdeoeYMZ4HzgCBLQQawVsoSvMLsz1O0kmKb3cqkQFpBi3AkWmaCc3cV6WsB2n+eoEhDd
 04I=
X-SBRS: None
X-MesageID: 31885037
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="31885037"
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior pci_vtd_quirks()
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Kevin Tian <kevin.tian@intel.com>
References: <20201119145216.29280-1-julien@xen.org>
 <16b256f5-1ceb-c12f-ff7b-9c6f1a5cc3cb@citrix.com>
 <12223d32-c1da-2b6e-1193-93b6ca113953@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b2fd4b70-a949-479e-b943-b19543d85b0f@citrix.com>
Date: Thu, 19 Nov 2020 16:41:36 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <12223d32-c1da-2b6e-1193-93b6ca113953@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 16:40, Julien Grall wrote:
>
>
> On 19/11/2020 16:38, Andrew Cooper wrote:
>> On 19/11/2020 14:52, Julien Grall wrote:
>>> Note that the splat is from 4.11.4 and not staging. The problem is
>>> still present, though.
>>>
>>> This can be solved by making the first operand unsigned int.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> CR: https://code.amazon.com/reviews/CR-38873112
>>
>> IIRC, this is an internal link, which doesn't want including on the
>> upstream commit?
>
> Yes. I forgot to sanitize the commit message when sending it.
>
> I will remove it while committing unless there is a need to send a new
> version.

No problem.  FWIW, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:44:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:44:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31175.61447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn38-0004pO-Ah; Thu, 19 Nov 2020 16:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31175.61447; Thu, 19 Nov 2020 16:44:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn38-0004pH-7b; Thu, 19 Nov 2020 16:44:46 +0000
Received: by outflank-mailman (input) for mailman id 31175;
 Thu, 19 Nov 2020 16:44:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UkQB=EZ=amazon.co.uk=prvs=585d586a4=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kfn35-0004pB-Av
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:44:44 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 774accde-8ff4-48eb-a315-3b1bc2a4ba3e;
 Thu, 19 Nov 2020 16:44:42 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 19 Nov 2020 16:44:14 +0000
Received: from EX13D32EUC002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com (Postfix) with ESMTPS
 id 723A3A1815; Thu, 19 Nov 2020 16:44:13 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 19 Nov 2020 16:44:12 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Thu, 19 Nov 2020 16:44:12 +0000
X-Inumbo-ID: 774accde-8ff4-48eb-a315-3b1bc2a4ba3e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1605804283; x=1637340283;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=VLTt7k+mJi2k1vph95d3zjqOxoHe5LTCd/Mlm0vsea8=;
  b=Hh0Yiltc6qDr/YD8GE+c5r0xyNeKgxsschdUlyZru1Ow4z8LxUF/t9sG
   JI8IxvQBYqIUOhJ4Mys7xLP+BkTNRn+k77iGpygC3qyVGeg8cREzVUVdg
   9TcOvjR5nEH8mUL4jm6G0RXROZqMpoaRMv6RXuyqotU8HhGodrkpzpXEH
   g=;
X-IronPort-AV: E=Sophos;i="5.78,353,1599523200"; 
   d="scan'208";a="97060094"
Subject: RE: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and
 accessor functions...
Thread-Topic: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor
 functions...
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, "paul@xen.org" <paul@xen.org>
CC: 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
	=?utf-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogKUvRrlAaY1NKepRytzUIAADEcAgAAAkQA=
Date: Thu, 19 Nov 2020 16:44:12 +0000
Message-ID: <b70dad84ef9e400caa023da53494dc0a@EX13D32EUC003.ant.amazon.com>
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-4-paul@xen.org>
 <01c7747e-70d0-e32b-45a6-afc1688c1741@suse.com>
 <00c901d6be8d$6d7a5c10$486f1430$@xen.org>
 <0ec90042-cb19-320e-1676-409b68b73a51@suse.com>
In-Reply-To: <0ec90042-cb19-320e-1676-409b68b73a51@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.145]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 November 2020 16:41
> To: paul@xen.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; 'Wei Liu' <wl@xen.org>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor
> functions...
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
>
> On 19.11.2020 17:02, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 12 November 2020 08:46
> >>
> >> On 11.11.2020 21:07, Paul Durrant wrote:
> >>> --- a/xen/arch/x86/hvm/viridian/viridian.c
> >>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> >>> @@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
> >>>      XFREE(d->arch.hvm.viridian);
> >>>  }
> >>>
> >>> +struct hypercall_vpmask {
> >>> +    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
> >>> +};
> >>> +
> >>> +static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
> >>> +
> >>> +static void vpmask_empty(struct hypercall_vpmask *vpmask)
> >>
> >> const?
> >
> > Yes, I suppose that's ok for all these since the outer struct is
> > not changing... It's a bit misleading though.
>
> I'd be curious to learn about that "misleading" aspect.

Because the function is modifying (zero-ing) the bitmap... so implying the mask is const is misleading.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:44:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:44:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31176.61459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn3G-0004sa-Np; Thu, 19 Nov 2020 16:44:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31176.61459; Thu, 19 Nov 2020 16:44:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn3G-0004sT-KI; Thu, 19 Nov 2020 16:44:54 +0000
Received: by outflank-mailman (input) for mailman id 31176;
 Thu, 19 Nov 2020 16:44:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfn3F-0004s7-FJ
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:44:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8ee63e4-6a6d-4d34-ad09-9a5c45947a30;
 Thu, 19 Nov 2020 16:44:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 464D8AC22;
 Thu, 19 Nov 2020 16:44:51 +0000 (UTC)
X-Inumbo-ID: b8ee63e4-6a6d-4d34-ad09-9a5c45947a30
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605804291; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bOx0ypNVPvDzfXoaBWYicPCyh9evkGWlh2Ic/Vvq1yY=;
	b=fe/pobK+CAUFJYfarF25N6JjyMrFJz1c6htm3qZUuU1V2xl16He8o+BYz3qpk0o8Pddk5a
	09czZSJ2AQy43CQeb9x07Zlf28/zTHWPWoKjTnD6NsSWLmF9CCMNgP3T55VP7A6uePIoD5
	iwHEz4oD6DA9GjCQaAnvgjXl/eQkLEc=
Subject: Re: [PATCH 06/10] viridian: add ExProcessorMasks variants of the
 flush hypercalls
To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-7-paul@xen.org>
 <dd6c4a0d-f611-7b81-8c95-72786891f311@suse.com>
 <00cc01d6be8e$b24973c0$16dc5b40$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f974ee35-239a-644f-ab81-ab4a435e3693@suse.com>
Date: Thu, 19 Nov 2020 17:44:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <00cc01d6be8e$b24973c0$16dc5b40$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 17:11, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 12 November 2020 09:19
>>
>> On 11.11.2020 21:07, Paul Durrant wrote:
>>> --- a/xen/arch/x86/hvm/viridian/viridian.c
>>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
>>> @@ -553,6 +553,83 @@ static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int
>> vp
>>>  	     (vp) < HVM_MAX_VCPUS; \
>>>  	     (vp) = vpmask_next(vpmask, vp))
>>>
>>> +struct hypercall_vpset {
>>> +        struct hv_vpset set;
>>> +        uint64_t __bank_contents[64];
>>
>> gcc documents this to be supported as an extension; did you check
>> clang supports this, too?
> 
> By 'this', do you mean the assumption that that memory layout is consecutive?

No, rather the basic language aspect that in standard C a struct
member which is a struct ending in a flexible array member may
not be followed by any other field.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:46:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31186.61470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn59-00055l-61; Thu, 19 Nov 2020 16:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31186.61470; Thu, 19 Nov 2020 16:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn59-00055e-2o; Thu, 19 Nov 2020 16:46:51 +0000
Received: by outflank-mailman (input) for mailman id 31186;
 Thu, 19 Nov 2020 16:46:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PWQs=EZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kfn57-00055V-E7
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:46:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47fb2237-f0d5-4969-93ed-b4ec2bd88bdc;
 Thu, 19 Nov 2020 16:46:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BDFAFAC46;
 Thu, 19 Nov 2020 16:46:47 +0000 (UTC)
X-Inumbo-ID: 47fb2237-f0d5-4969-93ed-b4ec2bd88bdc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605804407; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9Lp8pf/Gv50jjKY3pqNKsNI1PVOJp90+DlP/KkJvffo=;
	b=YPCsmYEMxm+s06Ov2uge0eDlS7015Z8QYKZAkG7NVAG3jFSbdW26XjEQU0lc9j0LNyYMn8
	N3onIbcvWiT0zR+sqb8RyGFU4zJUnD4hSCyJ8c63twOUwB/Vz8bDVNoQ1vNurx0I1TzLZU
	oT9AUS7n/G3LRoEY1vKzglXnA7HdQY4=
Subject: Re: [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and
 accessor functions...
To: "Durrant, Paul" <pdurrant@amazon.co.uk>, "paul@xen.org" <paul@xen.org>
Cc: 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org>
 <20201111200721.30551-4-paul@xen.org>
 <01c7747e-70d0-e32b-45a6-afc1688c1741@suse.com>
 <00c901d6be8d$6d7a5c10$486f1430$@xen.org>
 <0ec90042-cb19-320e-1676-409b68b73a51@suse.com>
 <b70dad84ef9e400caa023da53494dc0a@EX13D32EUC003.ant.amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <73501e7c-9363-1fc8-9262-c4b3d9cc6347@suse.com>
Date: Thu, 19 Nov 2020 17:46:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <b70dad84ef9e400caa023da53494dc0a@EX13D32EUC003.ant.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.11.2020 17:44, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 19 November 2020 16:41
>> To: paul@xen.org
>> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; 'Wei Liu' <wl@xen.org>; 'Andrew Cooper'
>> <andrew.cooper3@citrix.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
>> Subject: RE: [EXTERNAL] [PATCH 03/10] viridian: introduce a per-cpu hypercall_vpmask and accessor
>> functions...
>>
>> CAUTION: This email originated from outside of the organization. Do not click links or open
>> attachments unless you can confirm the sender and know the content is safe.
>>
>>
>>
>> On 19.11.2020 17:02, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 12 November 2020 08:46
>>>>
>>>> On 11.11.2020 21:07, Paul Durrant wrote:
>>>>> --- a/xen/arch/x86/hvm/viridian/viridian.c
>>>>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
>>>>> @@ -507,15 +507,41 @@ void viridian_domain_deinit(struct domain *d)
>>>>>      XFREE(d->arch.hvm.viridian);
>>>>>  }
>>>>>
>>>>> +struct hypercall_vpmask {
>>>>> +    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
>>>>> +};
>>>>> +
>>>>> +static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
>>>>> +
>>>>> +static void vpmask_empty(struct hypercall_vpmask *vpmask)
>>>>
>>>> const?
>>>
>>> Yes, I suppose that's ok for all these since the outer struct is
>>> not changing... It's a bit misleading though.
>>
>> I'd be curious to learn about that "misleading" aspect.
>>
> 
> Because the function is modifying (zero-ing) the bitmap... so implying
> the mask is const is misleading.

Oh, I was misled by the name then; should have looked at the return
type (which I was implying to be bool, when it's void). Please
disregard my request(s) in such case(s).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:47:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:47:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31193.61483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn6C-0005DO-Go; Thu, 19 Nov 2020 16:47:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31193.61483; Thu, 19 Nov 2020 16:47:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfn6C-0005DH-DR; Thu, 19 Nov 2020 16:47:56 +0000
Received: by outflank-mailman (input) for mailman id 31193;
 Thu, 19 Nov 2020 16:47:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d/3C=EZ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kfn6A-0005DB-Tr
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:47:54 +0000
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1731af1-f070-4113-b0c2-27995c80fcb4;
 Thu, 19 Nov 2020 16:47:54 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id d142so7825493wmd.4
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 08:47:54 -0800 (PST)
Received: from CBGR90WXYV0
 (host109-146-187-185.range109-146.btcentralplus.com. [109.146.187.185])
 by smtp.gmail.com with ESMTPSA id u6sm574570wmj.40.2020.11.19.08.47.52
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 19 Nov 2020 08:47:52 -0800 (PST)
X-Inumbo-ID: c1731af1-f070-4113-b0c2-27995c80fcb4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=Exw3myhehWT2/9tPrsDQoWiIuT+J+BTbzR4akiYXSpI=;
        b=PsWOOjrzzmk84CdhZM/EW4UrpanJRNqZv+H6Nfe4Mx6XR4wGoNjodOKCqV5+wv5mLr
         KQzWARYzijodq3emI/vf+blBCigaCKCuxnDOjRvSU/Y624C8gnJwKVyTT7yC6X/x/qOL
         FkREZJxpFLzNEQ/zPr+VG7KAApLCwcYvLO00696Nue52gUMLdWPaVMKv3xn6CmfUVvxl
         LS7BloTkIHliU1vEhckwwsVxfQsHR9ejwVIluoSwOXC2CuLjvvFcy+qb+FMizaA/V76q
         HiesQPIi82eBHB1oxI1FhgDz3VC3z5mJ8PGd5sEHuIG8sGHyI1Ak1qs8T1cRw5huti7y
         U3mg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=Exw3myhehWT2/9tPrsDQoWiIuT+J+BTbzR4akiYXSpI=;
        b=SxeCPI/I+za6y694RkRv+/AyZWbjcDbskqAPNDSB50jetQYM+Y9bT8YSvmlKRczhrC
         8ONiyISsu3GYGYCz3CPkgl5L28j52b1Tku0RMW43KIkDwfWSYmWHjEdVApRdMvpn0uFh
         ztNyszoONgsc70LAsq9AGOyJRcMCr5aLqKzX3R4MdBQvMRiksrNX4szsf89ru87M5SYH
         1dvuBF5XF/ZTKasyXzLp1D9KDQ+pYAUrwQlIgNc/b6HLXPGTi9M+U5QolA41eebmzLXA
         I+73lPjic8vUNWGhfgSwk2js8toOYSUys6JRhZ7QhNMt7ZoR8KcxGcUlzRUNzxe+7dVh
         d4qw==
X-Gm-Message-State: AOAM532qf0YEQGTh9qM4j43SwwAs3wulrnSYxnbzI0gPxYZXywtRfy/p
	GLhCfOI+FZ9llvk8fFoiBEU=
X-Google-Smtp-Source: ABdhPJyljCFPFEHkP0a17oyTk6p8MoWTjdAJtafhl5XLvjdfD69NtmX9BA3Cm9nWnvvgrueFiVvSaA==
X-Received: by 2002:a1c:b487:: with SMTP id d129mr5745313wmf.38.1605804473291;
        Thu, 19 Nov 2020 08:47:53 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201111200721.30551-1-paul@xen.org> <20201111200721.30551-7-paul@xen.org> <dd6c4a0d-f611-7b81-8c95-72786891f311@suse.com> <00cc01d6be8e$b24973c0$16dc5b40$@xen.org> <f974ee35-239a-644f-ab81-ab4a435e3693@suse.com>
In-Reply-To: <f974ee35-239a-644f-ab81-ab4a435e3693@suse.com>
Subject: RE: [PATCH 06/10] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Thu, 19 Nov 2020 16:47:52 -0000
Message-ID: <00ce01d6be93$b90857d0$2b190770$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIKCyTUSJ7jm2WJ2nMqsy5EhijKogHMVqApAbxd85QCfSbd3ALQ2i2tqSJbAxA=

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 November 2020 16:45
> To: paul@xen.org
> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Wei Liu' <wl@xen.org>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'Roger Pau Monné' <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH 06/10] viridian: add ExProcessorMasks variants of the flush hypercalls
> 
> On 19.11.2020 17:11, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 12 November 2020 09:19
> >>
> >> On 11.11.2020 21:07, Paul Durrant wrote:
> >>> --- a/xen/arch/x86/hvm/viridian/viridian.c
> >>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> >>> @@ -553,6 +553,83 @@ static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int
> >> vp
> >>>  	     (vp) < HVM_MAX_VCPUS; \
> >>>  	     (vp) = vpmask_next(vpmask, vp))
> >>>
> >>> +struct hypercall_vpset {
> >>> +        struct hv_vpset set;
> >>> +        uint64_t __bank_contents[64];
> >>
> >> gcc documents this to be supported as an extension; did you check
> >> clang supports this, too?
> >
> > By 'this', do you mean the assumption that that memory layout is consecutive?
> 
> No, rather the basic language aspect that in standard C a struct
> member which is a struct ending in a flexible array member may
> not be followed by any other field.
>

Ah, ok, now I understand what you mean. I'll union it to reserve the space instead.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 16:57:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 16:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31201.61495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnFm-0006Rc-GR; Thu, 19 Nov 2020 16:57:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31201.61495; Thu, 19 Nov 2020 16:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnFm-0006RV-Cj; Thu, 19 Nov 2020 16:57:50 +0000
Received: by outflank-mailman (input) for mailman id 31201;
 Thu, 19 Nov 2020 16:57:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdOd=EZ=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfnFl-0006RQ-5U
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 16:57:49 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77c26135-2719-43ff-8f84-d50e9cdcd87b;
 Thu, 19 Nov 2020 16:57:46 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AJGvdDk003498
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 19 Nov 2020 17:57:40 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 437A42E9CA8; Thu, 19 Nov 2020 17:57:34 +0100 (MET)
X-Inumbo-ID: 77c26135-2719-43ff-8f84-d50e9cdcd87b
Date: Thu, 19 Nov 2020 17:57:34 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201119165734.GA4903@antioche.eu.org>
References: <20201117155807.a7jgmftnj6njg6oz@Air-de-Roger>
 <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201119155718.GB4104@antioche.eu.org>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 19 Nov 2020 17:57:40 +0100 (MET)

On Thu, Nov 19, 2020 at 04:57:18PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 19, 2020 at 03:19:15PM +0100, Roger Pau Monné wrote:
> > I've got two different debug patches for you to try. I'm attaching both
> > to this email but I think we should start with Jan's suggestion
> > (conditional_print.patch). That patch will only print extra messages
> > between the ioregsel 3 ... ioregsel f existing debug messages, you
> > will have to trigger this from NetBSD by using ioapic_dump_raw AFAICT.
> 
> thanks. I didn't see any change in behavior or XEN output with this
> patch (especially the vioapic_deliver string doesn't show up in the
> logs).

I tried forcing print to 1, and I still don't see "vioapic_deliver" on the
console. I changed the patch to:
#define vioapic_deliver(vioapic, irq) ({ \
    if ( /* print && irq == 34 */ 1 ) \
        printk("%s:%d:%s: vioapic_deliver %d\n", __FILE__, __LINE__, __func__, irq); \
    _vioapic_deliver(vioapic, irq); })

and got:
[  13.8853432] ioapic2: pin2 0x0000a067 0x00000000^M
[  13.8853432] mfii0: cmd timeout ccb 0xffff9780023b7d60 status 0x40000008^M
(XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)
(XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
[  17.0001093] mfii0: cmd aborted ccb 0xffff9780023b7d60^M
(XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
[  17.0217772] config_pending_decr: scsibus0 0^M
(XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
[  17.(XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
0417095] config_finalize 2185^M

So I guess that the interrupt delivery on XEN output is
vioapic.c:511

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31210.61518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQB-0007id-Q4; Thu, 19 Nov 2020 17:08:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31210.61518; Thu, 19 Nov 2020 17:08:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQB-0007iW-Mm; Thu, 19 Nov 2020 17:08:35 +0000
Received: by outflank-mailman (input) for mailman id 31210;
 Thu, 19 Nov 2020 17:08:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfnQA-0007hU-5l
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:08:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQ9-0000ur-Nh; Thu, 19 Nov 2020 17:08:33 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQA-0000U1-DT; Thu, 19 Nov 2020 17:08:34 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Svrds4fjq26e/xBeepsLn7GM+LTQUjWZi4AUjhu+NNc=; b=F7FE+1ymFhHucWhrTjgG7Yq1x
	yAm9CdYzA3GaNseWuMaPq4j72+KyEF50J6e4fqB2qB0Qf9mKUh4xPzoKbapPwQIAx2vOz9cljFUqU
	C37+DNiGyurxwkUqtjjsJUNlPZJSwiZ6ARLBfc7K1qkUUYyhAKpfO5MqDPDliha7PMWc4=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 1/3] xen/arm: gic: acpi: Guard helpers to build the MADT with CONFIG_ACPI
Date: Thu, 19 Nov 2020 17:08:27 +0000
Message-Id: <20201119170829.9923-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119170829.9923-1-julien@xen.org>
References: <20201119170829.9923-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific.

While they build fine today, this will change in a follow-up patch.
Rather than trying to fix the build on ACPI, it is best to avoid
compiling the helpers and the associated callbacks when CONFIG_ACPI=n.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v4:
        - Patch added
---
 xen/arch/arm/gic-v2.c     |  8 +++-----
 xen/arch/arm/gic-v3.c     | 11 ++---------
 xen/arch/arm/gic.c        |  2 ++
 xen/include/asm-arm/gic.h | 10 ++++++++--
 4 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 0f747538dbcd..581ea5ba6b2c 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1114,12 +1114,12 @@ static int gicv2_iomem_deny_access(const struct domain *d)
     return iomem_deny_access(d, mfn, mfn + nr);
 }
 
+#ifdef CONFIG_ACPI
 static unsigned long gicv2_get_hwdom_extra_madt_size(const struct domain *d)
 {
     return 0;
 }
 
-#ifdef CONFIG_ACPI
 static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
 {
     struct acpi_subtable_header *header;
@@ -1248,10 +1248,6 @@ static void __init gicv2_acpi_init(void)
 }
 #else
 static void __init gicv2_acpi_init(void) { }
-static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
-{
-    return 0;
-}
 #endif
 
 static int __init gicv2_init(void)
@@ -1357,8 +1353,10 @@ const static struct gic_hw_operations gicv2_ops = {
     .read_apr            = gicv2_read_apr,
     .read_pending_state  = gicv2_read_pending_state,
     .make_hwdom_dt_node  = gicv2_make_hwdom_dt_node,
+#ifdef CONFIG_ACPI
     .make_hwdom_madt     = gicv2_make_hwdom_madt,
     .get_hwdom_extra_madt_size = gicv2_get_hwdom_extra_madt_size,
+#endif
     .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
     .iomem_deny_access   = gicv2_iomem_deny_access,
     .do_LPI              = gicv2_do_LPI,
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 0f6cbf6224e9..2a344393a0e4 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1735,15 +1735,6 @@ static void __init gicv3_acpi_init(void)
 }
 #else
 static void __init gicv3_acpi_init(void) { }
-static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
-{
-    return 0;
-}
-
-static unsigned long gicv3_get_hwdom_extra_madt_size(const struct domain *d)
-{
-    return 0;
-}
 #endif
 
 static bool gic_dist_supports_lpis(void)
@@ -1858,8 +1849,10 @@ static const struct gic_hw_operations gicv3_ops = {
     .read_pending_state  = gicv3_read_pending_state,
     .secondary_init      = gicv3_secondary_cpu_init,
     .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
+#ifdef CONFIG_ACPI
     .make_hwdom_madt     = gicv3_make_hwdom_madt,
     .get_hwdom_extra_madt_size = gicv3_get_hwdom_extra_madt_size,
+#endif
     .iomem_deny_access   = gicv3_iomem_deny_access,
     .do_LPI              = gicv3_do_LPI,
 };
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index d623c57cb9fa..fe60619e99cf 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -443,6 +443,7 @@ int gic_make_hwdom_dt_node(const struct domain *d,
     return gic_hw_ops->make_hwdom_dt_node(d, gic, fdt);
 }
 
+#ifdef CONFIG_ACPI
 int gic_make_hwdom_madt(const struct domain *d, u32 offset)
 {
     return gic_hw_ops->make_hwdom_madt(d, offset);
@@ -459,6 +460,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
 
     return madt_size;
 }
+#endif
 
 int gic_iomem_deny_access(const struct domain *d)
 {
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index ba870523bb2a..ad0f7452d005 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -378,12 +378,14 @@ struct gic_hw_operations {
     /* Create GIC node for the hardware domain */
     int (*make_hwdom_dt_node)(const struct domain *d,
                               const struct dt_device_node *gic, void *fdt);
+#ifdef CONFIG_ACPI
     /* Create MADT table for the hardware domain */
     int (*make_hwdom_madt)(const struct domain *d, u32 offset);
-    /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
-    int (*map_hwdom_extra_mappings)(struct domain *d);
     /* Query the size of hardware domain madt table */
     unsigned long (*get_hwdom_extra_madt_size)(const struct domain *d);
+#endif
+    /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
+    int (*map_hwdom_extra_mappings)(struct domain *d);
     /* Deny access to GIC regions */
     int (*iomem_deny_access)(const struct domain *d);
     /* Handle LPIs, which require special handling */
@@ -435,8 +437,12 @@ void register_gic_ops(const struct gic_hw_operations *ops);
 int gic_make_hwdom_dt_node(const struct domain *d,
                            const struct dt_device_node *gic,
                            void *fdt);
+
+#ifdef CONFIG_ACPI
 int gic_make_hwdom_madt(const struct domain *d, u32 offset);
 unsigned long gic_get_hwdom_madt_size(const struct domain *d);
+#endif
+
 int gic_map_hwdom_extra_mappings(struct domain *d);
 int gic_iomem_deny_access(const struct domain *d);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31212.61540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQD-0007lK-PZ; Thu, 19 Nov 2020 17:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31212.61540; Thu, 19 Nov 2020 17:08:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQD-0007l3-G2; Thu, 19 Nov 2020 17:08:37 +0000
Received: by outflank-mailman (input) for mailman id 31212;
 Thu, 19 Nov 2020 17:08:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfnQC-0007ji-Or
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:08:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQC-0000vh-Bw; Thu, 19 Nov 2020 17:08:36 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQD-0000U1-2q; Thu, 19 Nov 2020 17:08:37 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=mZpQmkQLUUO1hccs/83l4shhsGEQrqSCrng3B/PMRp8=; b=1HuQ9T7HVyh8X0YHLZ0toblpD
	/xoNNu/ImjbvMuiO15+xQnDkKwHZevcEcccRLdJrk64DEb5LSVWG+fBGghmZX140PCRnwtK4253Fl
	XSz3j2Iu5LR9Z0l3KogPljy/DAu5LG0irbnYAbb+nxWg+rLu4QledAngbQ3RIG+w9AbBY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v4 3/3] xen/arm: acpi: Allow Xen to boot with ACPI 5.1
Date: Thu, 19 Nov 2020 17:08:29 +0000
Message-Id: <20201119170829.9923-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119170829.9923-1-julien@xen.org>
References: <20201119170829.9923-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

At the moment Xen requires the FADT ACPI table to be at least version
6.0, apparently because of some reliance on other ACPI v6.0 features.

In fact this is overzealous, and Xen now works fine with ACPI v5.1.

Let's relax the version check for the FADT table to allow QEMU to
run the hypervisor with ACPI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v3:
        - Add Stefano's acked-by

    Changes in v2:
        - Patch added
---
 xen/arch/arm/acpi/boot.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 55c3e5cbc834..7ea2990cb82c 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -181,8 +181,8 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
      * we only deal with ACPI 6.0 or newer revision to get GIC and SMP
      * boot protocol configuration data, or we will disable ACPI.
      */
-    if ( table->revision > 6
-         || (table->revision == 6 && fadt->minor_revision >= 0) )
+    if ( table->revision > 5
+         || (table->revision == 5 && fadt->minor_revision >= 1) )
         return 0;
 
     printk("Unsupported FADT revision %d.%d, should be 6.0+, will disable ACPI\n",
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31211.61531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQD-0007k7-3U; Thu, 19 Nov 2020 17:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31211.61531; Thu, 19 Nov 2020 17:08:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQC-0007jx-Vb; Thu, 19 Nov 2020 17:08:36 +0000
Received: by outflank-mailman (input) for mailman id 31211;
 Thu, 19 Nov 2020 17:08:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfnQB-0007iR-Cg
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:08:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQB-0000va-04; Thu, 19 Nov 2020 17:08:35 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQB-0000U1-ML; Thu, 19 Nov 2020 17:08:35 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=QAujMV9Xzy7Ut9dM8y6NKak187oQRubSEIiR6ad+wBs=; b=aCJJJb6sjI77v6nqN0uVYCtU1
	50k3k2Qm1ZfPgDgkoeXxmsu/PQ4GmrTTGwy9W2yoOoeAgpU28w2huwx1+bLqexzs5wl2UV6WsY+Kp
	VYmRA7KKUSuy61RcjC0Q6+MXJIi/nTrtOW6RydU8NuBuGVeLIGQZPnx91HG3TIOdKB4x8=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 2/3] xen/arm: gic: acpi: Use the correct length for the GICC structure
Date: Thu, 19 Nov 2020 17:08:28 +0000
Message-Id: <20201119170829.9923-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119170829.9923-1-julien@xen.org>
References: <20201119170829.9923-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

The length of the GICC structure in the MADT ACPI table differs between
ACPI versions 5.1 and 6.0, although there are no other relevant differences.

Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
overcome this issue.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v3:
        - Update the commit title as we also modify GICv3 code
        - Use the correct length in more places

    Changes in v2:
        - Patch added
---
 xen/arch/arm/acpi/boot.c | 2 +-
 xen/arch/arm/gic-v2.c    | 5 +++--
 xen/arch/arm/gic-v3.c    | 6 +++---
 xen/arch/arm/gic.c       | 2 +-
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 30e4bd1bc5a7..55c3e5cbc834 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     acpi_table_print_madt_entry(header);
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 581ea5ba6b2c..b2adc8ec9a64 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
 
     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
                              header);
-    size = sizeof(struct acpi_madt_generic_interrupt);
+
+    size = ACPI_MADT_GICC_LENGTH;
     /* Add Generic Interrupt */
     for ( i = 0; i < d->max_vcpus; i++ )
     {
@@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     /* Read from APIC table and fill up the GIC variables */
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 2a344393a0e4..ac28013c1967 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1499,7 +1499,7 @@ static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
 
     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
                              header);
-    size = sizeof(struct acpi_madt_generic_interrupt);
+    size = ACPI_MADT_GICC_LENGTH;
     for ( i = 0; i < d->max_vcpus; i++ )
     {
         gicc = (struct acpi_madt_generic_interrupt *)(base_ptr + table_len);
@@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     /* Read from APIC table and fill up the GIC variables */
@@ -1628,7 +1628,7 @@ gic_acpi_get_madt_cpu_num(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *cpuif;
 
     cpuif = (struct acpi_madt_generic_interrupt *)header;
-    if ( BAD_MADT_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
+    if ( BAD_MADT_GICC_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
         return -EINVAL;
 
     return 0;
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index fe60619e99cf..3b0331b53830 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -454,7 +454,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
     unsigned long madt_size;
 
     madt_size = sizeof(struct acpi_table_madt)
-                + sizeof(struct acpi_madt_generic_interrupt) * d->max_vcpus
+                + ACPI_MADT_GICC_LENGTH * d->max_vcpus
                 + sizeof(struct acpi_madt_generic_distributor)
                 + gic_hw_ops->get_hwdom_extra_madt_size(d);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31209.61507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQA-0007hg-I5; Thu, 19 Nov 2020 17:08:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31209.61507; Thu, 19 Nov 2020 17:08:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnQA-0007hZ-Ec; Thu, 19 Nov 2020 17:08:34 +0000
Received: by outflank-mailman (input) for mailman id 31209;
 Thu, 19 Nov 2020 17:08:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfnQ9-0007hP-44
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:08:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQ8-0000ul-Jv; Thu, 19 Nov 2020 17:08:32 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfnQ9-0000U1-8M; Thu, 19 Nov 2020 17:08:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=c1eUtv3J3QHRIBeiCg5Q3ir6r4CZrIczVaetKRD/GdY=; b=sC2SSNZnPUJPw2Au3OaNYf3nSC
	TWyjRmaEGydMixitAOQhunixsaw/pHUGiSTl6Q1yIMKibGB60F3PH1o8xxOfCmMi9bRA3d2knlxg9
	R9bNKQjiAYD1e6zOwsEQzQ3ujkXlI0HlEzqLGAnbtmWda+gbkz/A06hZK6GWI3hd3kTg=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 0/3] xen/arm: Allow Xen to boot with ACPI 5.1
Date: Thu, 19 Nov 2020 17:08:26 +0000
Message-Id: <20201119170829.9923-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This series was originally called "xen/arm: Unbreak ACPI". It was
renamed because only the part allowing Xen to boot with ACPI 5.1
remains unmerged.

After this series, it is possible to boot Xen on QEMU with ACPI 5.1.

Cheers,

Julien Grall (3):
  xen/arm: gic: acpi: Guard helpers to build the MADT with CONFIG_ACPI
  xen/arm: gic: acpi: Use the correct length for the GICC structure
  xen/arm: acpi: Allow Xen to boot with ACPI 5.1

 xen/arch/arm/acpi/boot.c  |  6 +++---
 xen/arch/arm/gic-v2.c     | 13 ++++++-------
 xen/arch/arm/gic-v3.c     | 17 +++++------------
 xen/arch/arm/gic.c        |  4 +++-
 xen/include/asm-arm/gic.h | 10 ++++++++--
 5 files changed, 25 insertions(+), 25 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:11:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:11:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31229.61555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnSX-0000ef-2e; Thu, 19 Nov 2020 17:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31229.61555; Thu, 19 Nov 2020 17:11:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnSW-0000eY-Vr; Thu, 19 Nov 2020 17:11:00 +0000
Received: by outflank-mailman (input) for mailman id 31229;
 Thu, 19 Nov 2020 17:10:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfnSV-0000eR-Cm
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:10:59 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87ae63f1-c9c6-470c-a6e8-60f6d5971a18;
 Thu, 19 Nov 2020 17:10:57 +0000 (UTC)
X-Inumbo-ID: 87ae63f1-c9c6-470c-a6e8-60f6d5971a18
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605805857;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Tb+i1uRCDNjzZU+/tXj6XDZlDzICgJ7qh8vlllpSiLM=;
  b=d0NrOviKLw+nAoDG5vCDru3WG36t90jsOmzAtCxMPa7WESlkjgxKXnz2
   vqxdTAF/cfToLOeKzZ3Cm0R+OayFW3V7cSWWXWVrL1ctMmuTbST0GtOfD
   U6iKqnKLS421u34gJVEnEpspnnMZ2FbGe9WQG1VDcVdQyoZDrPGlmh5Ks
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: OsXCL6on6Wydif/3NuzzzoL3b53eXiZme/5wAw1IHGYQj+ck95LmsDVTrXA8jaMTghSYc5Gr+V
 chXvE7xpWvOoFnGsIQr725Y63wlNrhg3tcjmx5beaeUmAWXZ5KxlLw3EuY+TJ9EOa2j65tjWgF
 S5rSzNZuq/pRAysoIIfyjx5q9Wyl3z1tPDbR6TV8G1B7uZOQ0EW6/0EV7NZJfDlIr+4ZfCmLbb
 D+nGApZ4JQbB4DjotBEazATrgGDIaF6mmgL10k4siU+FtOPACKr72vP2aRMlHAnD+nP59ri/FX
 iNc=
X-SBRS: None
X-MesageID: 31779749
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="31779749"
Subject: Re: [PATCH] x86/Intel: avoid UB with NMI watchdog on family 15 CPUs
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <63500eb6-b1da-ce08-52e2-00b30ffe2c26@suse.com>
 <1c2ffdcb-577d-8bea-35e3-904777a0c2e5@citrix.com>
 <e056d6ff-aceb-e4f9-1fe8-a41c482e34bc@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <47a30c4a-b05b-fdc5-0d7a-549fdd15a801@citrix.com>
Date: Thu, 19 Nov 2020 17:10:51 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e056d6ff-aceb-e4f9-1fe8-a41c482e34bc@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 16:37, Jan Beulich wrote:
> On 19.11.2020 17:10, Andrew Cooper wrote:
>> On 19/11/2020 15:57, Jan Beulich wrote:
>>> Found by looking for patterns similar to the one Julien did spot in
>>> pci_vtd_quirks().
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Subject is wonky.  Is it P4 (Intel), or Fam15 (AMD) ?  I'd be tempted to
>> have the prefix as x86/nmi: either way.
> With this code:
>
>     case X86_VENDOR_INTEL:
>         switch (boot_cpu_data.x86) {
>         case 6:
>             setup_p6_watchdog((boot_cpu_data.x86_model < 14) 
>                               ? P6_EVENT_CPU_CLOCKS_NOT_HALTED
>                               : CORE_EVENT_CPU_CLOCKS_NOT_HALTED);
>             break;
>         case 15:
>             if (!setup_p4_watchdog())
>
> I think qualifying it like I did is quite reasonable. Hence ...
>
>> With that suitably adjusted, Acked-by: Andrew Cooper
>> <andrew.cooper3@citrix.com>
> ... I'd prefer to keep it as is - please clarify.

Oh - original Xeons.  I'd honestly forgotten that quirk of history.

I'd recommend "x86/nmi: Avoid UB in P4-era watchdogs" to avoid the
ambiguity altogether.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:12:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:12:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31238.61567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnUG-0000o4-Eg; Thu, 19 Nov 2020 17:12:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31238.61567; Thu, 19 Nov 2020 17:12:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnUG-0000nx-BI; Thu, 19 Nov 2020 17:12:48 +0000
Received: by outflank-mailman (input) for mailman id 31238;
 Thu, 19 Nov 2020 17:12:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2P/M=EZ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kfnUE-0000ns-NV
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:12:46 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59a51098-f8d7-467e-9ee9-3075b5101549;
 Thu, 19 Nov 2020 17:12:45 +0000 (UTC)
X-Inumbo-ID: 59a51098-f8d7-467e-9ee9-3075b5101549
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605805965;
  h=subject:from:to:cc:references:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Fmi5hEM+WmL9aeHow3ArAEOoU8Z7m3Doroo4VnoxGZs=;
  b=h89GzyrzkJPzyRzXM0lkUetqT/XjOZc32q+k+jna8uum2UKNeuub/Ke6
   5QOlG/tnhFCZDDV+hVHU6bFn5ciiaWrOC17gkh+ue4IjMmxg/uFE0n0Gf
   5i03QZW2GcUDwqrUBT0NqDggKRDqp3AzC+avGwGv5KmQ6YgNVNJKH08Qw
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: aFl85dS2FUjx5v5H/nbfX9iVLtW+1T2O3EZyt10CJbZ00RvRbGaNYgrwWN/pc7/IVpMULGEq09
 +a7i6TOG6EjCUbXn1hF6VLTpXtppYX+pY7Xw5LhfQ9Gh5hEoPUM7ggL4mtduQaKOPdf0CCZDjZ
 peF7ri3qDwbpSKhuqLYrgzbBogQ7B9DjvUjywaRVj3REAkCQjPecSjxAoh7K2RM7B3NFVIwcXw
 a3/WcW9Ega5ISKsiH+p/VmKHW8zb7wwN42znHxlH/58DW18L/wSehAsxEMPPI0yvjkGx23KXCF
 en4=
X-SBRS: None
X-MesageID: 31779965
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,353,1599537600"; 
   d="scan'208";a="31779965"
Subject: Re: [PATCH] x86/Intel: avoid UB with NMI watchdog on family 15 CPUs
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <63500eb6-b1da-ce08-52e2-00b30ffe2c26@suse.com>
 <1c2ffdcb-577d-8bea-35e3-904777a0c2e5@citrix.com>
 <e056d6ff-aceb-e4f9-1fe8-a41c482e34bc@suse.com>
 <47a30c4a-b05b-fdc5-0d7a-549fdd15a801@citrix.com>
Message-ID: <25c7cb0e-951a-9466-6434-41faf748cbd0@citrix.com>
Date: Thu, 19 Nov 2020 17:12:40 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <47a30c4a-b05b-fdc5-0d7a-549fdd15a801@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 19/11/2020 17:10, Andrew Cooper wrote:
> On 19/11/2020 16:37, Jan Beulich wrote:
>> On 19.11.2020 17:10, Andrew Cooper wrote:
>>> On 19/11/2020 15:57, Jan Beulich wrote:
>>>> Found by looking for patterns similar to the one Julien did spot in
>>>> pci_vtd_quirks().
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Subject is wonky.  Is it P4 (Intel), or Fam15 (AMD) ?  I'd be tempted to
>>> have the prefix as x86/nmi: either way.
>> With this code:
>>
>>     case X86_VENDOR_INTEL:
>>         switch (boot_cpu_data.x86) {
>>         case 6:
>>             setup_p6_watchdog((boot_cpu_data.x86_model < 14) 
>>                               ? P6_EVENT_CPU_CLOCKS_NOT_HALTED
>>                               : CORE_EVENT_CPU_CLOCKS_NOT_HALTED);
>>             break;
>>         case 15:
>>             if (!setup_p4_watchdog())
>>
>> I think qualifying it like I did is quite reasonable. Hence ...
>>
>>> With that suitably adjusted, Acked-by: Andrew Cooper
>>> <andrew.cooper3@citrix.com>
>> ... I'd prefer to keep it as is - please clarify.
> Oh - original Xeons.  I'd honestly forgotten that quirk of history.
>
> I'd recommend "x86/nmi: Avoid UB in for P4-era watchdogs" to avoid the
> ambiguity altogether.

And if I could actually english, that would read "Avoid UB for P4-".

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31248.61591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnfr-000238-Og; Thu, 19 Nov 2020 17:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31248.61591; Thu, 19 Nov 2020 17:24:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnfr-000231-L5; Thu, 19 Nov 2020 17:24:47 +0000
Received: by outflank-mailman (input) for mailman id 31248;
 Thu, 19 Nov 2020 17:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfq-00022p-A7
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfq-0001GS-8o
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:46 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfq-0001hR-76
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:46 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kfnfl-0001j9-TC; Thu, 19 Nov 2020 17:24:42 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=GMr6/pQELAa5eEG/rFq2xz09Dbsq1stesTPHMh4kRUk=; b=upHGya/3cUqnUS5qeXHRPtlMTA
	jLgtgxVCAvQ+QFpR8lgt2fDx/ueoKerk4Hymb/r9ba+3hPm9n30SljRYpTU43NIlM5ITj4Fcjt/s7
	mqIRefTKOnHdE/zhfwrQ+RVRD9PN3MfHDdTD/drvzaHi4Mus9foQWD5WoyCYejUAKa30=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/3] sg-report-flight: Actually look at retest flights (part 1)
Date: Thu, 19 Nov 2020 17:24:31 +0000
Message-Id: <20201119172432.15682-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201119172432.15682-1-iwj@xenproject.org>
References: <20201119172432.15682-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The existing approach does not find retest flights.  This is because
it starts by looking for flights whose runvars say they built the
version in question, and then looks to see if they contain the
relevant job.

Retest flights don't contain build jobs; they reuse the builds from
the primary flight.

Rather than making a fully general recursion scheme (which would
involve adding another index so we could quickly find all flights
which refer to this one), we add a one-level recursion.

This recursion is restricted to the flights named on the command line.
This means it takes nearly no time (as opposed to searching the whole
db for things that might be relevant - see above re need for an
index).

We filter the command line flights, looking for ones which refer
only to the primarily-found flights as build jobs.
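The filtering rule above can be sketched as a small model (hypothetical
Python, not osstest's actual Perl; the flight numbers and runvar names
are invented):

```python
def retest_flights(cmdline_flights, primary_flights):
    """Keep only command-line flights all of whose '*buildjob' runvars
    point at the same primarily-found flight (the one-level recursion)."""
    found = []
    for f1 in cmdline_flights:
        buildjobs = [val for name, val in f1["runvars"].items()
                     if name.endswith("buildjob")]
        # Mirrors bool_and(val LIKE f0.flight || '.%') IS TRUE: there must
        # be at least one buildjob runvar, and every one must reference f0.
        if buildjobs and any(
                all(val.startswith(f"{f0}.") for val in buildjobs)
                for f0 in primary_flights):
            found.append(f1["flight"])
    return found
```

So a flight whose only buildjob runvar reads "152301.build-amd64" is kept
when 152301 was primarily found; a flight with no buildjob runvars at all
(e.g. a primary flight with its own builds) is not matched here.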

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/sg-report-flight b/sg-report-flight
index 40a3cc92..70cad230 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -315,6 +315,11 @@ END
     # Sadly this trick is not formally documented but it is our
     # best option.
 
+    my $cmdline_flight_cond =
+      join ' OR ',
+      (defined $specflight ? "f1.flight=$specflight" : "FALSE"),
+      map { "f1.flight=$_->{Flight}" } @refer_to_flights;
+
     my $flightsq= <<END;
       WITH
       relevant_flights AS (
@@ -327,9 +332,27 @@ $runvars_conds
             ORDER BY flight DESC
             OFFSET 0
             LIMIT 1000
+      ),
+      retest_flights AS (
+        SELECT DISTINCT f1.flight flight, f1.blessing blessing
+             FROM flights f1
+             JOIN jobs j USING (flight) 
+       CROSS JOIN relevant_flights f0
+            WHERE ($cmdline_flight_cond)
+              AND (
+                SELECT bool_and( val LIKE f0.flight || '.%' )
+                       IS TRUE
+                 FROM runvars r
+                WHERE r.flight=j.flight
+                  AND r.job=j.job 
+                  AND r.name LIKE '%buildjob'
+                  )
       )
       SELECT flight, jobs.status
-        FROM relevant_flights
+        FROM (
+           SELECT * FROM relevant_flights
+     UNION SELECT * FROM retest_flights
+             ) all_relevant_flights
 $flightsq_jobs_join
        WHERE (1=1)
 $flightsq_jobcond
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31249.61597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnfs-00023f-2e; Thu, 19 Nov 2020 17:24:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31249.61597; Thu, 19 Nov 2020 17:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnfr-00023V-U7; Thu, 19 Nov 2020 17:24:47 +0000
Received: by outflank-mailman (input) for mailman id 31249;
 Thu, 19 Nov 2020 17:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfq-00022u-Id
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfq-0001GV-Hn
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:46 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfq-0001hW-FT
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:46 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kfnfm-0001j9-Eg; Thu, 19 Nov 2020 17:24:42 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=cziSSO/qxv0dihhTZt+q8r4bupo8zCTakTFjg8vAnkk=; b=p8CHirmc3FjRi9eFop2d0cKwyI
	kVbsG6ydDJ6lTUMDEWQOLRjJhialdceTwFC9eQT3KbpIB4vGr4W0wWL2Eycw4QuhUEmL6NFPf9Sil
	5xN5sMolrg4Bw8m1srSkZPst2brOvUmbAq4EXiK8Q630mmsig7b61ikmGuDisg7B4VsU=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/3] sg-report-flight: Actually look at retest flights (part 2)
Date: Thu, 19 Nov 2020 17:24:32 +0000
Message-Id: <20201119172432.15682-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201119172432.15682-1-iwj@xenproject.org>
References: <20201119172432.15682-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To avoid going down ratholes (especially for jobs which reuse outputs
from their previous selves), the primary flight/job finder in
sg-report-flight does not recurse indefinitely through build jobs.
Instead, it restricts the build jobs investigated to those within the
same flight as the job which might be of interest.

As a result, retest jobs are, unfortunately, discarded at this stage
because we insist that the build jobs we find did use the tree
revision we are investigating.

Fix this by recursing into the corresponding primary flight too.
In the $flightsq->fetchrow loop that's $xflight.

For the primary flight, ie the first half of the UNION, that's just
the flight itself.  So for those this is no change.

For the retest flights, it is the flight that all the build jobs refer
to.  Despite the CROSS JOIN, this will be unique for any particular
"retest flight", because the query on the runvars insists that all of
the buildjob runvars for f1 (and there is at least one) point to f0.
Ie, f1 has no build jobs of its own and refers to f0 for build jobs;
so it can't refer to any other f0' in the cross join.
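The uniqueness claim can be checked against a toy model of the CROSS
JOIN condition (hypothetical Python, invented flight numbers and runvar
names; not the real schema):

```python
def retest_pairs(cmdline_flights, primary_flights):
    """Model of the CROSS JOIN: emit (f1, f0) pairs where every
    '*buildjob' runvar of retest flight f1 points at build flight f0."""
    pairs = []
    for f1 in cmdline_flights:
        buildjobs = [val for name, val in f1["runvars"].items()
                     if name.endswith("buildjob")]
        for f0 in primary_flights:
            # Every runvar must name this particular f0, so one f1 can
            # never pair with two distinct build flights.
            if buildjobs and all(val.startswith(f"{f0}.")
                                 for val in buildjobs):
                pairs.append((f1["flight"], f0))
    return pairs
```

Even with several candidate f0s in the join, a given retest flight
yields exactly one (f1, f0) pair, which is what lets $xflight be
carried through the UNION unambiguously.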

With this change, a -retest flight can now actually be used to justify
a push.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 70cad230..7f0543ff 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -334,9 +334,12 @@ $runvars_conds
             LIMIT 1000
       ),
       retest_flights AS (
-        SELECT DISTINCT f1.flight flight, f1.blessing blessing
+        SELECT DISTINCT 
+                  f1.flight flight,
+                  f0.flight xflight,
+                  f1.blessing blessing
              FROM flights f1
-             JOIN jobs j USING (flight) 
+             JOIN jobs j USING (flight)
        CROSS JOIN relevant_flights f0
             WHERE ($cmdline_flight_cond)
               AND (
@@ -348,10 +351,10 @@ $runvars_conds
                   AND r.name LIKE '%buildjob'
                   )
       )
-      SELECT flight, jobs.status
+      SELECT flight, xflight, jobs.status
         FROM (
-           SELECT * FROM relevant_flights
-     UNION SELECT * FROM retest_flights
+ SELECT        flight, flight xflight, blessing FROM relevant_flights
+ UNION SELECT  flight,        xflight, blessing FROM retest_flights
              ) all_relevant_flights
 $flightsq_jobs_join
        WHERE (1=1)
@@ -392,7 +395,7 @@ END
                 WHERE flight=?
 END
 
-    while (my ($tflight, $tjstatus) = $flightsq->fetchrow_array) {
+    while (my ($tflight, $xflight, $tjstatus) = $flightsq->fetchrow_array) {
 	# Recurse from the starting flight looking for relevant build
 	# jobs.  We start with all jobs in $tflight, and for each job
 	# we also process any other jobs it refers to in *buildjob runvars.
@@ -408,6 +411,11 @@ END
 	# actually interested in (ii) the kind of continuous update
 	# thing seen with freebsdbuildjob.
 	#
+	# However, when we are looking at a retest flight (offered to
+	# us with --refer-to-flights, or found because it was
+	# specified on the command line) we look at its build flight;
+	# that's what $xflight is;
+	#
 	# (This is rather different to cs-bisection-step, which is
 	# less focused on changes in a particular set of trees.)
 	#
@@ -435,7 +443,7 @@ END
 	while (@binfos_todo) {
 	    my ($why,$bflight,$bjob) = @{ shift @binfos_todo };
 	    next if $binfos{$bflight}{$bjob};
-            next unless $bflight == $tflight;
+            next unless $bflight == $tflight || $bflight == $xflight;
 	    $binfos{$bflight}{$bjob} = $why;
 	    $buildjobsq->execute($bflight,$bjob);
 	    $binfos_queue->($bflight,$buildjobsq,$why);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31247.61579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnfp-000224-Gh; Thu, 19 Nov 2020 17:24:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31247.61579; Thu, 19 Nov 2020 17:24:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfnfp-00021x-Cn; Thu, 19 Nov 2020 17:24:45 +0000
Received: by outflank-mailman (input) for mailman id 31247;
 Thu, 19 Nov 2020 17:24:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfo-00021s-5v
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfo-0001GM-3c
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:44 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kfnfn-0001gq-V8
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:24:44 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kfnfl-0001j9-Jv; Thu, 19 Nov 2020 17:24:41 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=RSPmlMx6h7lY0hEWezC+PZdIowM8QBXv/eyi0QInOAw=; b=Hu3gydKXJioG6mxexNtWg5BOyy
	mAO0C72Dhb8ApH6sRL7JrbWHpAp2FfhRzbTvydbOFYhTUT7/znCQkZliFqzYiGFdZ2FGzwwMpfWEP
	C6iRUYdXpSxFa8WAWY8zg54fJycotWZEA5cEvcO45MACFa1i6pIYb7VGshWPkNnBjhz0=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/3] sg-report-flight: Rename "sub" to more descriptive "relevant_flights"
Date: Thu, 19 Nov 2020 17:24:30 +0000
Message-Id: <20201119172432.15682-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No functional change.  We're going to add another WITH...

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index fd266586..40a3cc92 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -316,7 +316,8 @@ END
     # best option.
 
     my $flightsq= <<END;
-      WITH sub AS (
+      WITH
+      relevant_flights AS (
         SELECT DISTINCT flight, blessing
              FROM flights
 $runvars_joins
@@ -328,7 +329,7 @@ $runvars_conds
             LIMIT 1000
       )
       SELECT flight, jobs.status
-        FROM sub
+        FROM relevant_flights
 $flightsq_jobs_join
        WHERE (1=1)
 $flightsq_jobcond
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 17:58:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 17:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31268.61615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoBr-0005bI-Sa; Thu, 19 Nov 2020 17:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31268.61615; Thu, 19 Nov 2020 17:57:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoBr-0005bB-OX; Thu, 19 Nov 2020 17:57:51 +0000
Received: by outflank-mailman (input) for mailman id 31268;
 Thu, 19 Nov 2020 17:57:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdOd=EZ=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kfoBp-0005b6-Sq
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 17:57:49 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f78bed2c-2f26-46d2-aeed-59c4fa4293f2;
 Thu, 19 Nov 2020 17:57:47 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AJHvcWQ024042
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 19 Nov 2020 18:57:39 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 35A302E9CA8; Thu, 19 Nov 2020 18:57:33 +0100 (MET)
X-Inumbo-ID: f78bed2c-2f26-46d2-aeed-59c4fa4293f2
Date: Thu, 19 Nov 2020 18:57:33 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201119175733.GA6067@antioche.eu.org>
References: <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201119165734.GA4903@antioche.eu.org>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 19 Nov 2020 18:57:39 +0100 (MET)

On Thu, Nov 19, 2020 at 05:57:34PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 19, 2020 at 04:57:18PM +0100, Manuel Bouyer wrote:
> > On Thu, Nov 19, 2020 at 03:19:15PM +0100, Roger Pau Monné wrote:
> > > I've got two different debug patches for you to try. I'm attaching both
> > > to this email but I think we should start with Jan's suggestion
> > > (conditional_print.patch). That patch will only print extra messages
> > > between the ioregsel 3 ... ioregsel f existing debug messages, you
> > > will have to trigger this from NetBSD by using ioapic_dump_raw AFAICT.
> > 
> > thanks. I didn't see any change in behavior or XEN output with this
> > patch (especially the vioapic_deliver string doesn't show up in the
> > logs).
> 
> I tried forcing print to 1, and I still don't see "vioapic_deliver" on the
> console. I changed the patch to:
> #define vioapic_deliver(vioapic, irq) ({ \
>     if ( /* print && irq == 34 */ 1 ) \
>         printk("%s:%d:%s: vioapic_deliver %d\n", __FILE__, __LINE__, __func__, irq); \
>     _vioapic_deliver(vioapic, irq); })
> 
> and got:
> [  13.8853432] ioapic2: pin2 0x0000a067 0x00000000^M
> [  13.8853432] mfii0: cmd timeout ccb 0xffff9780023b7d60 status 0x40000008^M
> (XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)
> (XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
> [  17.0001093] mfii0: cmd aborted ccb 0xffff9780023b7d60^M
> (XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
> [  17.0217772] config_pending_decr: scsibus0 0^M
> (XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
> [  17.(XEN) vioapic.c:511:vioapic_irq_positive_edge: vioapic_deliver 2
> 0417095] config_finalize 2185^M
> 
> So I guess that the interrupt delivery on XEN output is
> vioapic.c:511

I added an ASSERT() after the printk to get a stack trace, and got:
db{0}> call ioapic_dump_raw^M
Register dump of ioapic0^M
[  13.0193374] 00 08000000 00170011 08000000(XEN) vioapic.c:141:d0v0 apic_mem_readl:undefined ioregsel 3
(XEN) vioapic.c:512:vioapic_irq_positive_edge: vioapic_deliver 2
(XEN) Assertion '!print' failed at vioapic.c:512
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d0402c4164>] vioapic_irq_positive_edge+0x14e/0x150
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
(XEN) rax: ffff82d0405c806c   rbx: ffff830836650580   rcx: 0000000000000000
(XEN) rdx: ffff8300688bffff   rsi: 000000000000000a   rdi: ffff82d0404b36b8
(XEN) rbp: ffff8300688bfde0   rsp: ffff8300688bfdc0   r8:  0000000000000004
(XEN) r9:  0000000000000032   r10: 0000000000000000   r11: 00000000fffffffd
(XEN) r12: ffff8308366dc000   r13: 0000000000000022   r14: ffff8308366dc31c
(XEN) r15: ffff8308366d1d80   cr0: 0000000080050033   cr4: 00000000003526e0
(XEN) cr3: 00000008366c9000   cr2: 0000000000000000
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d0402c4164> (vioapic_irq_positive_edge+0x14e/0x150):
(XEN)  3d 10 be 1d 00 00 74 c2 <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
(XEN) Xen stack trace from rsp=ffff8300688bfdc0:
(XEN)    0000000200000086 ffff8308366dc000 0000000000000022 0000000000000000
(XEN)    ffff8300688bfe08 ffff82d0402bcc33 ffff8308366dc000 0000000000000022
(XEN)    0000000000000001 ffff8300688bfe40 ffff82d0402bd18f ffff830835a7eb98
(XEN)    ffff8308366dc000 ffff830835a7eb40 ffff8300688bfe68 0100100100100100
(XEN)    ffff8300688bfea0 ffff82d04026f6e1 ffff830835a7eb30 ffff8308366dc0f4
(XEN)    ffff830835a7eb40 ffff8300688bfe68 ffff8300688bfe68 ffff82d0405cec80
(XEN)    ffffffffffffffff ffff82d0405cec80 0000000000000000 ffff82d0405d6c80
(XEN)    ffff8300688bfed8 ffff82d04022b6fa ffff83083663f000 ffff83083663f000
(XEN)    0000000000000000 0000000000000000 0000000a7c62165b ffff8300688bfee8
(XEN)    ffff82d04022b798 ffff8300688bfe08 ffff82d0402a4bcb 0000000000000000
(XEN)    0000000000000206 ffff8316da86e61c ffff8316da86e600 ffff938031fd47c0
(XEN)    0000000000000003 0000000000000400 ff889e8da08f928a 0000000000000000
(XEN)    0000000000000002 0000000000000100 000000000000b86e ffff93803237f010
(XEN)    0000000000000000 ffff8316da86e61c 0000beef0000beef ffffffff80555918
(XEN)    000000bf0000beef 0000000000000046 ffff938031fd4790 000000000000beef
(XEN)    000000000000beef 000000000000beef 000000000000beef 000000000000beef
(XEN)    0000e01000000000 ffff83083663f000 0000000000000000 00000000003526e0
(XEN)    0000000000000000 0000000000000000 0000060100000001 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d0402c4164>] R vioapic_irq_positive_edge+0x14e/0x150
(XEN)    [<ffff82d0402bcc33>] F arch/x86/hvm/irq.c#assert_gsi+0x5e/0x7b
(XEN)    [<ffff82d0402bd18f>] F hvm_gsi_assert+0x62/0x77
(XEN)    [<ffff82d04026f6e1>] F drivers/passthrough/io.c#dpci_softirq+0x261/0x29e
(XEN)    [<ffff82d04022b6fa>] F common/softirq.c#__do_softirq+0x8a/0xbf
(XEN)    [<ffff82d04022b798>] F do_softirq+0x13/0x15
(XEN)    [<ffff82d0402a4bcb>] F vmx_asm_do_vmentry+0x2b/0x30
(XEN) 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion '!print' failed at vioapic.c:512
(XEN) ****************************************


hope this helps ...

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 18:16:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 18:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31278.61654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoUC-00080p-DH; Thu, 19 Nov 2020 18:16:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31278.61654; Thu, 19 Nov 2020 18:16:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoUC-00080h-9p; Thu, 19 Nov 2020 18:16:48 +0000
Received: by outflank-mailman (input) for mailman id 31278;
 Thu, 19 Nov 2020 18:16:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZym=EZ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfoUA-0007zG-B6
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 18:16:46 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.49]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8cf81dd-9269-4f3d-8e3f-64eb7cea07b1;
 Thu, 19 Nov 2020 18:16:44 +0000 (UTC)
Received: from DB6PR0301CA0078.eurprd03.prod.outlook.com (2603:10a6:6:30::25)
 by VI1PR08MB4032.eurprd08.prod.outlook.com (2603:10a6:803:e2::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Thu, 19 Nov
 2020 18:16:41 +0000
Received: from DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:30:cafe::6d) by DB6PR0301CA0078.outlook.office365.com
 (2603:10a6:6:30::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 18:16:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT048.mail.protection.outlook.com (10.152.21.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 18:16:40 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71");
 Thu, 19 Nov 2020 18:16:40 +0000
Received: from 5fddd674a043.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 710755C2-4B53-4FEE-8287-6CE18D68A128.1; 
 Thu, 19 Nov 2020 18:16:34 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5fddd674a043.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 18:16:34 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4538.eurprd08.prod.outlook.com (2603:10a6:10:d2::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Thu, 19 Nov
 2020 18:16:31 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 18:16:31 +0000
X-Inumbo-ID: d8cf81dd-9269-4f3d-8e3f-64eb7cea07b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1gDaQBXj71zio6oakGrC5v+bim1fumny6+u3JxWrjeI=;
 b=2NwyS5zsx+nNAQpRTILIlBXZHbyUzLBnI77dVCZCNx/qH68ndcSt4FZwr98d3pEHGN2I5DQtdVXwPlXYqzEOT4Pg+hcbrjyyXaRqYRzakBl9KRRmByETAvkhQy8zPK+F21BFT5VNCiRiMlMRiDXcsjzoK7rejFwkNlKwODEjlC0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9b30b564a70646c8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zoo1P756vqIA4oNYx4B04EBBJ3LjLVjVxnwlsXKUP7KcVCSv2VVuXGX9elHMySE7NHMZowgoLdgIFKotghYBla2BfHkTc5drZcdOnsTIKCm2Nhci/rwfxxp4TqTC4ppvduydYUQBXaRzcCerIBKQqsQMzESZyYk6weIrYNzqw2K8Pt3QIHf08mSmz3liVUOlCvX0bF2XYUM3WjJwxWhacN6UWg26BnJ2+0LRoWHkEfayTTh4j4AGnc7pm1UxWPxNn5P0TohLn5W/OnPycj+GGf9ePDp0Pmjssv+mdg/1UJZ5vNU7iKFUMfP0teFPBFLNiqPslhI/qwTMoi1GT7V0KA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1gDaQBXj71zio6oakGrC5v+bim1fumny6+u3JxWrjeI=;
 b=DqMSdc4tbjwh7owLg454pfao96Gukt+Hj0pd4JueZwZVEHmH+k/Oc6w0MbndW7xkVr7z80OHVaAAbEPXZe30NOLXbhI8pAwssXzKQStRaDy0sw+s7eiHR3yuNTK9v6PHTM8bZHyW+suk5BdxNGBPMEwFXsftfIWnRXfXPqk5Nt1cKC90ohaGLaxLTOhsJF4WL/0CeCrJR1FkdotdzgPek/Cj/t4hSEBbmkPkCVEih1KlgZAxeJlLxykHP8IH4j1rjhpW6+GD2SOp6ydcta8ZdTtmLdxI/YwIyECCstrQQ7/X5QcBdeVJkILFVd0cRHNdja3An4ClH3sG3AeaI1VUDg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "alex.bennee@linaro.org"
	<alex.bennee@linaro.org>, Andre Przywara <Andre.Przywara@arm.com>, Rahul
 Singh <Rahul.Singh@arm.com>, Julien Grall <Julien.Grall@arm.com>, Julien
 Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v4 3/3] xen/arm: acpi: Allow Xen to boot with ACPI 5.1
Thread-Topic: [PATCH v4 3/3] xen/arm: acpi: Allow Xen to boot with ACPI 5.1
Thread-Index: AQHWvpapKLofsEj7tESNWS0wcJnqO6nPwwiA
Date: Thu, 19 Nov 2020 18:16:31 +0000
Message-ID: <24299314-EDE1-451C-89E5-CDEEB5A6614F@arm.com>
References: <20201119170829.9923-1-julien@xen.org>
 <20201119170829.9923-4-julien@xen.org>
In-Reply-To: <20201119170829.9923-4-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 137bb62f-6c65-48bb-aebb-08d88cb74320
x-ms-traffictypediagnostic: DBBPR08MB4538:|VI1PR08MB4032:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4032202F4B9AE7F0854512359DE00@VI1PR08MB4032.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 UBAgCyygn8zbyKmeDmkGhNVv1UGoPL3SF7+8ZzIUIZAgHSe6V3UT4DZSPT+vyZR+RAAkMaXYyv76dJE8m268Rm7r6z1/1yVOemjQtvdCcMcLp4YbvWYI5Uq5iidE0xaDsSmOtqXHsuXl6Z5WXNK/KP3K+dw4B4yOxZmfn3Pu7MI9mnAMSxkBFMkfH0gFQMnNFalWHHelCszU+/9zrNDbbaYqJIm3+puq4zouhtaKMS8tPq6hMESH6EZ3hxCrOMJho3zVyBzMdHK89MJI9Rn5uxBx0JdA/emO1QshrGSzp8cD69BQUq5W3V0eZC9AKy9kGG9MOCRgWXsKM+sHjvaAvA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(366004)(396003)(2906002)(66446008)(66556008)(66476007)(6916009)(66946007)(8676002)(8936002)(71200400001)(6486002)(86362001)(76116006)(91956017)(36756003)(5660300002)(6512007)(33656002)(4326008)(54906003)(53546011)(6506007)(26005)(64756008)(186003)(2616005)(478600001)(83380400001)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 QJrF/7XYENpfM/Ymd1ctPe4QcVimVbG/e6tgeXOGjGtfA82xTMIfUoXxC/UZIomKBNKYLVt9Rj5d7X/oiJe4xjoxpx5KkOpyML3jNn5N0ZtcEGC5LXztoXJe5XXq0/CanQYIlSJ+Pe1wUHz1cLdD8/RsKz5KNoT4vCQ3Owgl1uvXvM/muPg+Hb0/6S52yfi57DQ97us91LugwnID2MQldY4ZFPuIOryLrfTOQZReLWi+UXhMTvOFAHIT0J+3FWWkk2S/W7F2IOJrN2gjfFcXLRGvKEuy1m/VN4hn8W1A2eb1ZhE2sG/fhUpYXGPnGT3rJIzZ9yJ+mTWYmGacO0Hdhg5XDbzrQlk9WgI2usJelZJ1RoKBUQ+xs9tUBKv6PJKfhTz7nFQMvt3MaBa10Drsuw7xmWbrw9BxHPqkdSyBl48QvO7Nht3HbCB8n+U08qRrfargwFLuFVhGimhtYQ7zUOq0DbVCgkLWBfx6uhlG9Lfjx81FxNIFkPa85kd7X4A2sMBPj+/QMBbeG7mSY5hOfciiurfxvR1vMRyU2IVktCWwsKOUi8ZSIz0h72y+5p1v27a+H/GXqzYaTRcE4OLHMgraQR1cr45QA8gpn4NgR7mf0gNiuId4FWaq05eD7CrQtIvTHI7G+ltmdySJgxHKCw==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BB3A8AF1FD0E1B4FAD9F9D93690DF41C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4538
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c3b596f8-2dbd-480b-e564-08d88cb73da1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UmtRRLEb2WdEMmHRfE2UF1V3tzH6ncv9hYD5pmF/WuE2AOmEPrZLFUdiKHsyOi2oixyUuBkIlMfjBCUffLh+13yaMErcsPTvwhXeZhbKObRZ1XrfNuvsLCkBZ7q3Fjx3el0ZD3ZyIKRzTzCdqaEfJ+/oqlPCQR9sh1id1kBf7/xxPy6XX/kYXlsp1q6xhiAZMppjcwosSPbhyKQuhqkTNvREQtv0iX/kPHeaSC5dmXTnhuF/10JkuD4wyKI34BcwyPWrDkXPpvytnF77P10CWfxC5e6sed5S1xqTkGk+AYvhu2sbgELeVeJTFef0ru7zxq56aukK1p62AZQgXBwRTQFSNg6RrbOt6dtJTh7JlnQdlCT39QWnr9zmGD6KTz/RkZQN1CwjmB//bReVZZt80A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39860400002)(396003)(376002)(346002)(46966005)(86362001)(2906002)(33656002)(8676002)(82740400003)(47076004)(5660300002)(81166007)(70206006)(83380400001)(53546011)(356005)(70586007)(82310400003)(316002)(336012)(26005)(2616005)(478600001)(186003)(4326008)(6506007)(6512007)(107886003)(6486002)(8936002)(54906003)(36756003)(6862004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 18:16:40.5964
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 137bb62f-6c65-48bb-aebb-08d88cb74320
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4032

Hi,

> On 19 Nov 2020, at 17:08, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment Xen requires the FADT ACPI table to be at least version
> 6.0, apparently because of some reliance on other ACPI v6.0 features.
> 
> But actually this is overzealous, and Xen now works fine with ACPI v5.1.
> 
> Let's relax the version check for the FADT table to allow QEMU to
> run the hypervisor with ACPI.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> 
> ---
>    Changes in v3:
>        - Add Stefano's acked-by
> 
>    Changes in v2:
>        - Patch added
> ---
> xen/arch/arm/acpi/boot.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 55c3e5cbc834..7ea2990cb82c 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -181,8 +181,8 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
>      * we only deal with ACPI 6.0 or newer revision to get GIC and SMP
>      * boot protocol configuration data, or we will disable ACPI.
>      */
> -    if ( table->revision > 6
> -         || (table->revision == 6 && fadt->minor_revision >= 0) )
> +    if ( table->revision > 5
> +         || (table->revision == 5 && fadt->minor_revision >= 1) )
>         return 0;
> 
>     printk("Unsupported FADT revision %d.%d, should be 6.0+, will disable ACPI\n",
> --
> 2.17.1
> 
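For clarity, the relaxed acceptance condition in the hunk above can be modelled as a small standalone predicate. The function name below is hypothetical, invented for illustration; in Xen the check lives inline in acpi_parse_fadt():

```c
#include <stdbool.h>

/* Sketch of the relaxed FADT version check: accept revision 5.1 or
 * newer, where the previous code insisted on 6.0 or newer.  Name and
 * signature are illustrative only, not Xen's actual API. */
bool fadt_revision_supported(unsigned int revision,
                             unsigned int minor_revision)
{
    return revision > 5 ||
           (revision == 5 && minor_revision >= 1);
}
```

With this shape, revisions 6.0 and 5.1 are accepted while 5.0 and older are rejected, matching the patch's intent.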



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 18:16:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 18:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31276.61630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoU5-0007x7-Mu; Thu, 19 Nov 2020 18:16:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31276.61630; Thu, 19 Nov 2020 18:16:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoU5-0007x0-J3; Thu, 19 Nov 2020 18:16:41 +0000
Received: by outflank-mailman (input) for mailman id 31276;
 Thu, 19 Nov 2020 18:16:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZym=EZ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfoU4-0007wu-4u
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 18:16:40 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.82]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8011cd69-79c4-45da-8494-8a696e7cb93a;
 Thu, 19 Nov 2020 18:16:38 +0000 (UTC)
Received: from DB6PR1001CA0038.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:55::24)
 by DBAPR08MB5749.eurprd08.prod.outlook.com (2603:10a6:10:1af::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Thu, 19 Nov
 2020 18:16:37 +0000
Received: from DB5EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:55:cafe::4f) by DB6PR1001CA0038.outlook.office365.com
 (2603:10a6:4:55::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 18:16:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT022.mail.protection.outlook.com (10.152.20.171) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 18:16:37 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Thu, 19 Nov 2020 18:16:37 +0000
Received: from 6896bc3e2f68.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5656127B-191C-42C8-8536-85DCDA36A230.1; 
 Thu, 19 Nov 2020 18:16:29 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6896bc3e2f68.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 18:16:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4538.eurprd08.prod.outlook.com (2603:10a6:10:d2::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Thu, 19 Nov
 2020 18:16:26 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 18:16:26 +0000
X-Inumbo-ID: 8011cd69-79c4-45da-8494-8a696e7cb93a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ADhZtYNBEr3niClwhOMsFJB7OZtTBz55UcV+VbqtfKA=;
 b=ZTUa+qxP3iE0g4Tjv9q5Pj12VyK4aOXYVLMh2PJxoImS9Dazao9Z2dEgO96WG9s5AFnZqR2RaTgm5S6iIuXwp2FtBPPKos9C53hxsPT8HfHYBceXwwaUcHJ7UYB6VHpopwLk4+iVNEmWJkagJ9V+BsefOAZG5CNA3g0yg5WZX9Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: cbf8cdda0acf4d7f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iJ4l+MTdbD0hSR7bv7PIwucflJVmM5hcw5NQAfb09qIb/ZP1JHCBV8uSY4y3zf0Aef+s/1jrOidT6ybzbBw4upCxz6SvpJPvwUxjktRneQawUBAJj8yhzZruH3h9JcodCB0QCDXduobu8b6AEIDjxAu+ZbAg+KWd+H4RNhZS/YubdA7LQ5NpKQOAGlCH8kR0ievgbJKsthJqeHRwJR0u4UXeH/DQoYeUUbYULO2rAr+OpJnk2ZZk8+qnD+1O6/krPBCDd9M/KgHHFVzVZwWPxipSat33gdhqB2n6HB5EXZcrhPTfnNexGY/EPlTT7iBmcDkESDISKztves5RuBHDdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ADhZtYNBEr3niClwhOMsFJB7OZtTBz55UcV+VbqtfKA=;
 b=gi1KcQemqBn4pTUcD7Ty45yoAbIat23iENwoOZ6o8dRQ1cgzDZLq/kC+moZcv7pHOYwk6APjtxSaZeT+Wy10zLn6tKIcB8fPN+mjloPkVAjQDMJp6sODzg9o+DVUwM/MEV9fDrYcSAboxlAkL2AOr0+88Gt8uBoSSMQnvZgc0+a7vrzG6V6jjz8jdWIwPY5sAby3NISKCkPmI0mb9Zl+QM8Sf/aIb0mOtO55EDw+X+DT64JwkiGtgCg2Kx5Fi35yvbYEWNXeccBsFcgWwFXMOJ/hbwMZpDSuNeipsizcnOXkW4qag+n2X2n0/6/BTm/xgr2kBBx1VX30j1QbACAISA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "alex.bennee@linaro.org"
	<alex.bennee@linaro.org>, Andre Przywara <Andre.Przywara@arm.com>, Rahul
 Singh <Rahul.Singh@arm.com>, Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v4 1/3] xen/arm: gic: acpi: Guard helpers to build the
 MADT with CONFIG_ACPI
Thread-Topic: [PATCH v4 1/3] xen/arm: gic: acpi: Guard helpers to build the
 MADT with CONFIG_ACPI
Thread-Index: AQHWvpal2Ywl5gacb02BE3lv2K8FPqnPwwIA
Date: Thu, 19 Nov 2020 18:16:26 +0000
Message-ID: <4F6A86BB-EA0B-4F7A-A1D9-5C5C469FB220@arm.com>
References: <20201119170829.9923-1-julien@xen.org>
 <20201119170829.9923-2-julien@xen.org>
In-Reply-To: <20201119170829.9923-2-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8ba79cc7-2f75-4af7-1c55-08d88cb7411e
x-ms-traffictypediagnostic: DBBPR08MB4538:|DBAPR08MB5749:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB5749CC8CAEF81800AC288CCC9DE00@DBAPR08MB5749.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oY+Xtc7YDhNWVpzmuDaQ4UhhJeUN9IW8gE4gJ6gwafIbqW91UYBLBkLya4fogo1zMJw0fdi/g37gH0bNU+eg+DJcHfvHnvHlFiAGcAe5Y8wjvBF/d69jpX+1C1NEq97jQlNUXyID+zlyk+vYBdlat0bwauEmCmYyJmUTlzqe6Xp4oZZnl5CcCbGxdrzNeIvWlfVnAif5BKlm1lsVkDJ+wLoR0it6+sGsATTvg3BYajoEcAStXBhQVXaO/z4QkO3+9NGL1vqVhYYZoDg29Rx2lbjcXfsrtpX5EYquigf+wtZURuosCFqUY8HATs6GmiKQIF6jckdIasJkx/qxzu126g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(366004)(396003)(2906002)(66446008)(66556008)(66476007)(6916009)(66946007)(8676002)(8936002)(71200400001)(6486002)(86362001)(76116006)(91956017)(36756003)(5660300002)(6512007)(33656002)(4326008)(54906003)(53546011)(6506007)(26005)(64756008)(186003)(2616005)(478600001)(83380400001)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 Qr0EuBlxzxav8XkpRhSvBTcET/fwyyP5iSdmHUj3LbWFvCVxlDieDgA9EYel70OC8Fu32puKYEVsEjbstWMo/Z5ezjYRYcCj+rnI5ndfJM7WbNQZWYszOGQoD/fivZNsfNOsUPQwMeH6UoYlCZZDOMebnYlVI+NSYx7CRhwqG7uT0/HGQ7HZv/jgJ45eosZ9eAW1W7NBUV3uE4brWz2N3/uoRBqXJmgyONIvQN75bvnds9dfBh8EGzesU3NCQYdQQKwyFSFzB9VO/0rPAWmQq/C51trPahSrQz5Y6FbmXOXkGziRPJCcgd1dxkyHcJifPHIrjzX8N9QlDWFeEbVlARuUHWiWLQj8UJVrXb9c6tzlUV/uW4MpaOF2XenYnKYorO4Qpr+c3EAt18uGGMrJeqw5uoGdzbCXBE3wtlz1icEXdNHY+qo7G3HFQjrX9MUIKf96KNW5M1yL4AsncCI+zHK1g3VP9Uosf+gGv75kuPaK7oO8dCZllSoOUa4dSPDDrgVAe0aeZoln0V6+W6aQP7O1/S7/9CB1YZkxtOkpDRdL+V2ug768t9fja3FY3j+Xi3oFSTKeKnJFEoYjwNybk41q12eAJgaPYLuYEBLzEBAAgnvy2a9xoYrYZERrld2RmRbOO4m8qxxHJRWOg5kgPg==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <96D93D78A9FB5A4C978131C6BC21360F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4538
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d5846e27-3bb3-42ca-d86f-08d88cb73aed
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TEN93U/yPphGPAJqoSPwE/1lmU7G68Xyli7rD9jBa1grf4gmad29cOAsdL3Y4bkaxp5arUJeaxaDzzGvPqi+NkQRh+7vPc35NJRNppF3LwpomJ57VMJ+Tn6roUMOLQRVqOxeZi/DdeFPg5oi8M+HeTuJoJDDU9pAiudRPuiSQ9H6+jZHPZtffDO/DCruySPwcYY69shyrtE++/W7U5nt5fITrj78xOvlVO2M/1Yg555HNwmM5iNzNZHRWVzgiwFpLtzp1GtWW+VeGbW0ol5tsMZA1w10do1YFD7bhmnAqF+ozlYqmLLg0yzdjs36UmGAlroSStf3k4X/AjyuEqK3J6+my7zE/6x3JHkrsH/y4g/apA3hpE3vcHISYBKzsAhp1Nifxqm/X2MOaQOMmgXrBA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(376002)(39850400004)(46966005)(82740400003)(47076004)(70206006)(81166007)(356005)(82310400003)(6512007)(2616005)(316002)(2906002)(54906003)(336012)(107886003)(86362001)(6506007)(478600001)(8936002)(186003)(6486002)(70586007)(8676002)(33656002)(4326008)(36756003)(26005)(6862004)(83380400001)(5660300002)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 18:16:37.2251
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ba79cc7-2f75-4af7-1c55-08d88cb7411e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5749

Hi Julien,

> On 19 Nov 2020, at 17:08, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific.
> 
> While they build fine today, this will change in a follow-up patch.
> Rather than trying to fix the build on ACPI, it is best to avoid
> compiling the helpers and the associated callbacks when CONFIG_ACPI=n.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I also tested the series on FVP without ACPI, and Xen still boots Dom0
properly.

Cheers
Bertrand
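The guarding pattern the patch applies — compiling the ACPI helpers and the matching ops-struct members only when ACPI support is enabled — can be sketched in isolation as follows. All names here are simplified, hypothetical stand-ins for struct gic_hw_operations, not Xen's actual definitions:

```c
#define CONFIG_ACPI 1  /* flip to 0 to mimic a CONFIG_ACPI=n build */

/* Hypothetical, simplified stand-in for struct gic_hw_operations. */
struct hw_ops {
    const char *name;
#if CONFIG_ACPI
    /* This member only exists when ACPI support is compiled in, so a
     * CONFIG_ACPI=n build cannot even reference the callback by
     * mistake -- the compiler rejects it. */
    unsigned long (*get_extra_madt_size)(void);
#endif
};

#if CONFIG_ACPI
/* Trivial callback, mirroring the helpers that return 0. */
static unsigned long dummy_madt_size(void) { return 0; }
#endif

static const struct hw_ops example_ops = {
    .name = "gicv2-like",
#if CONFIG_ACPI
    .get_extra_madt_size = dummy_madt_size,
#endif
};
```

The design point is that guarding both the helpers and the struct members keeps the two in sync: dropping one without the other would either leave an unused function or an uninitialised callback.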

>
> ---
>    Changes in v4:
>        - Patch added
> ---
> xen/arch/arm/gic-v2.c     |  8 +++-----
> xen/arch/arm/gic-v3.c     | 11 ++---------
> xen/arch/arm/gic.c        |  2 ++
> xen/include/asm-arm/gic.h | 10 ++++++++--
> 4 files changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 0f747538dbcd..581ea5ba6b2c 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1114,12 +1114,12 @@ static int gicv2_iomem_deny_access(const struct domain *d)
>     return iomem_deny_access(d, mfn, mfn + nr);
> }
>
> +#ifdef CONFIG_ACPI
> static unsigned long gicv2_get_hwdom_extra_madt_size(const struct domain *d)
> {
>     return 0;
> }
>
> -#ifdef CONFIG_ACPI
> static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
> {
>     struct acpi_subtable_header *header;
> @@ -1248,10 +1248,6 @@ static void __init gicv2_acpi_init(void)
> }
> #else
> static void __init gicv2_acpi_init(void) { }
> -static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
> -{
> -    return 0;
> -}
> #endif
>
> static int __init gicv2_init(void)
> @@ -1357,8 +1353,10 @@ const static struct gic_hw_operations gicv2_ops = {
>     .read_apr            = gicv2_read_apr,
>     .read_pending_state  = gicv2_read_pending_state,
>     .make_hwdom_dt_node  = gicv2_make_hwdom_dt_node,
> +#ifdef CONFIG_ACPI
>     .make_hwdom_madt     = gicv2_make_hwdom_madt,
>     .get_hwdom_extra_madt_size = gicv2_get_hwdom_extra_madt_size,
> +#endif
>     .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
>     .iomem_deny_access   = gicv2_iomem_deny_access,
>     .do_LPI              = gicv2_do_LPI,
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index 0f6cbf6224e9..2a344393a0e4 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1735,15 +1735,6 @@ static void __init gicv3_acpi_init(void)
> }
> #else
> static void __init gicv3_acpi_init(void) { }
> -static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
> -{
> -    return 0;
> -}
> -
> -static unsigned long gicv3_get_hwdom_extra_madt_size(const struct domain *d)
> -{
> -    return 0;
> -}
> #endif
>
> static bool gic_dist_supports_lpis(void)
> @@ -1858,8 +1849,10 @@ static const struct gic_hw_operations gicv3_ops = {
>     .read_pending_state  = gicv3_read_pending_state,
>     .secondary_init      = gicv3_secondary_cpu_init,
>     .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
> +#ifdef CONFIG_ACPI
>     .make_hwdom_madt     = gicv3_make_hwdom_madt,
>     .get_hwdom_extra_madt_size = gicv3_get_hwdom_extra_madt_size,
> +#endif
>     .iomem_deny_access   = gicv3_iomem_deny_access,
>     .do_LPI              = gicv3_do_LPI,
> };
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index d623c57cb9fa..fe60619e99cf 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -443,6 +443,7 @@ int gic_make_hwdom_dt_node(const struct domain *d,
>     return gic_hw_ops->make_hwdom_dt_node(d, gic, fdt);
> }
>
> +#ifdef CONFIG_ACPI
> int gic_make_hwdom_madt(const struct domain *d, u32 offset)
> {
>     return gic_hw_ops->make_hwdom_madt(d, offset);
> @@ -459,6 +460,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
>
>     return madt_size;
> }
> +#endif
>
> int gic_iomem_deny_access(const struct domain *d)
> {
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index ba870523bb2a..ad0f7452d005 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -378,12 +378,14 @@ struct gic_hw_operations {
>     /* Create GIC node for the hardware domain */
>     int (*make_hwdom_dt_node)(const struct domain *d,
>                               const struct dt_device_node *gic, void *fdt);
> +#ifdef CONFIG_ACPI
>     /* Create MADT table for the hardware domain */
>     int (*make_hwdom_madt)(const struct domain *d, u32 offset);
> -    /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
> -    int (*map_hwdom_extra_mappings)(struct domain *d);
>     /* Query the size of hardware domain madt table */
>     unsigned long (*get_hwdom_extra_madt_size)(const struct domain *d);
> +#endif
> +    /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
> +    int (*map_hwdom_extra_mappings)(struct domain *d);
>     /* Deny access to GIC regions */
>     int (*iomem_deny_access)(const struct domain *d);
>     /* Handle LPIs, which require special handling */
> @@ -435,8 +437,12 @@ void register_gic_ops(const struct gic_hw_operations *ops);
> int gic_make_hwdom_dt_node(const struct domain *d,
>                            const struct dt_device_node *gic,
>                            void *fdt);
> +
> +#ifdef CONFIG_ACPI
> int gic_make_hwdom_madt(const struct domain *d, u32 offset);
> unsigned long gic_get_hwdom_madt_size(const struct domain *d);
> +#endif
> +
> int gic_map_hwdom_extra_mappings(struct domain *d);
> int gic_iomem_deny_access(const struct domain *d);
>
> --
> 2.17.1
>
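The callback-guarding pattern the patch applies can be sketched as plain, compilable C. This is a simplified illustration only (an `int` domain id instead of `struct domain *`, `demo_`-prefixed names), not the actual Xen code: the ACPI-only members and their implementations disappear entirely when the config symbol is undefined, so the `#else` branch no longer needs stub callbacks.

```c
/* Toggle to mimic building with CONFIG_ACPI=y; leave undefined to
 * mimic CONFIG_ACPI=n. Everything below is an illustrative stand-in
 * for the Xen types, not the real gic_hw_operations. */
#define CONFIG_ACPI 1

struct gic_hw_operations {
    /* Always present: device-tree hardware-domain setup. */
    int (*make_hwdom_dt_node)(int domid);
#ifdef CONFIG_ACPI
    /* ACPI-only callbacks: compiled out when CONFIG_ACPI is not
     * defined, so no stub implementations are needed. */
    int (*make_hwdom_madt)(int domid, unsigned int offset);
    unsigned long (*get_hwdom_extra_madt_size)(int domid);
#endif
};

static int demo_make_hwdom_dt_node(int domid) { return domid; }

#ifdef CONFIG_ACPI
static int demo_make_hwdom_madt(int domid, unsigned int offset)
{
    return domid + (int)offset;
}

static unsigned long demo_get_hwdom_extra_madt_size(int domid)
{
    (void)domid;
    return 0;
}
#endif

/* The ops table mirrors the gicv2_ops/gicv3_ops hunks: the two MADT
 * initializers sit inside the same #ifdef as the struct members. */
static const struct gic_hw_operations demo_ops = {
    .make_hwdom_dt_node = demo_make_hwdom_dt_node,
#ifdef CONFIG_ACPI
    .make_hwdom_madt = demo_make_hwdom_madt,
    .get_hwdom_extra_madt_size = demo_get_hwdom_extra_madt_size,
#endif
};
```

Because the members themselves are guarded, a caller that references `make_hwdom_madt` in a `CONFIG_ACPI=n` build fails at compile time rather than silently calling a do-nothing stub, which is the point of the patch.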



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 18:16:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 18:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31277.61642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoU9-0007yb-Vw; Thu, 19 Nov 2020 18:16:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31277.61642; Thu, 19 Nov 2020 18:16:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfoU9-0007yU-Rn; Thu, 19 Nov 2020 18:16:45 +0000
Received: by outflank-mailman (input) for mailman id 31277;
 Thu, 19 Nov 2020 18:16:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZym=EZ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kfoU8-0007wu-Sz
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 18:16:44 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3fbd8aac-cb7a-4a67-8bee-3c91b337eb61;
 Thu, 19 Nov 2020 18:16:39 +0000 (UTC)
Received: from DB6PR0801CA0055.eurprd08.prod.outlook.com (2603:10a6:4:2b::23)
 by AM6PR08MB4024.eurprd08.prod.outlook.com (2603:10a6:20b:a5::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Thu, 19 Nov
 2020 18:16:38 +0000
Received: from DB5EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::5c) by DB6PR0801CA0055.outlook.office365.com
 (2603:10a6:4:2b::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Thu, 19 Nov 2020 18:16:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT064.mail.protection.outlook.com (10.152.21.199) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 19 Nov 2020 18:16:37 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Thu, 19 Nov 2020 18:16:37 +0000
Received: from 139024f62be8.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DEB8DDA8-E090-4097-B8ED-1B15336A3919.1; 
 Thu, 19 Nov 2020 18:16:31 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 139024f62be8.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 19 Nov 2020 18:16:31 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4538.eurprd08.prod.outlook.com (2603:10a6:10:d2::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Thu, 19 Nov
 2020 18:16:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::39b7:6f9f:d046:e737%7]) with mapi id 15.20.3564.028; Thu, 19 Nov 2020
 18:16:29 +0000
X-Inumbo-ID: 3fbd8aac-cb7a-4a67-8bee-3c91b337eb61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GdeZEekMGzCdA7djWe9NFePYVX0bCegyfXcvVcmjAFQ=;
 b=5mWBGa35ajBijxfh10H645djwCeva3O1+gwIH/8bY1lxQNsNApQJ1TQg9v7z1KgW/XxD5cNsCmq1LgulflbQGJpQiJh2F1kjRHsDpWVTaFL/WB3Z3TMGYs1KwyMdN9Ldm4G0bFTk6jp8GU630yodIlPsU5vJJapWFZvwsQfcHY0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d951ece8ce1573da
X-CR-MTA-TID: 64aa7808
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"alex.bennee@linaro.org" <alex.bennee@linaro.org>, Andre Przywara
	<Andre.Przywara@arm.com>, Rahul Singh <Rahul.Singh@arm.com>, Julien Grall
	<Julien.Grall@arm.com>, Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v4 2/3] xen/arm: gic: acpi: Use the correct length for the
 GICC structure
Thread-Topic: [PATCH v4 2/3] xen/arm: gic: acpi: Use the correct length for
 the GICC structure
Thread-Index: AQHWvpa0bDYxNRQRmUCSGRjy8i0s/KnPwwaA
Date: Thu, 19 Nov 2020 18:16:29 +0000
Message-ID: <03404C68-2415-47E9-AF4D-9B414EA07223@arm.com>
References: <20201119170829.9923-1-julien@xen.org>
 <20201119170829.9923-3-julien@xen.org>
In-Reply-To: <20201119170829.9923-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8ca2eedf-5ca8-4cf6-26eb-08d88cb74188
x-ms-traffictypediagnostic: DBBPR08MB4538:|AM6PR08MB4024:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB402470AE59AF9AA1232862259DE00@AM6PR08MB4024.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2000;OLM:2000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <241BFCD68BAD4D48837C8C1929398CC3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4538
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f7dbd83d-b8f2-4485-757b-08d88cb73c99
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Nov 2020 18:16:37.9200
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ca2eedf-5ca8-4cf6-26eb-08d88cb74188
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4024

Hi,

> On 19 Nov 2020, at 17:08, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <julien.grall@arm.com>
>
> The length of the GICC structure in the MADT ACPI table differs between
> versions 5.1 and 6.0, although there are no other relevant differences.
>
> Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
> overcome this issue.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

>
> ---
>    Changes in v3:
>        - Update the commit title as we also modify GICv3 code
>        - Use the correct length in more places
>
>    Changes in v2:
>        - Patch added
> ---
> xen/arch/arm/acpi/boot.c | 2 +-
> xen/arch/arm/gic-v2.c    | 5 +++--
> xen/arch/arm/gic-v3.c    | 6 +++---
> xen/arch/arm/gic.c       | 2 +-
> 4 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 30e4bd1bc5a7..55c3e5cbc834 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *processor =
>                container_of(header, struct acpi_madt_generic_interrupt, header);
>
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>         return -EINVAL;
>
>     acpi_table_print_madt_entry(header);
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 581ea5ba6b2c..b2adc8ec9a64 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
>
>     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
>                              header);
> -    size = sizeof(struct acpi_madt_generic_interrupt);
> +
> +    size = ACPI_MADT_GICC_LENGTH;
>     /* Add Generic Interrupt */
>     for ( i = 0; i < d->max_vcpus; i++ )
>     {
> @@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *processor =
>                container_of(header, struct acpi_madt_generic_interrupt, header);
>
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>         return -EINVAL;
>
>     /* Read from APIC table and fill up the GIC variables */
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index 2a344393a0e4..ac28013c1967 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1499,7 +1499,7 @@ static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
>
>     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
>                              header);
> -    size = sizeof(struct acpi_madt_generic_interrupt);
> +    size = ACPI_MADT_GICC_LENGTH;
>     for ( i = 0; i < d->max_vcpus; i++ )
>     {
>         gicc = (struct acpi_madt_generic_interrupt *)(base_ptr + table_len);
> @@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *processor =
>                container_of(header, struct acpi_madt_generic_interrupt, header);
>
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>         return -EINVAL;
>
>     /* Read from APIC table and fill up the GIC variables */
> @@ -1628,7 +1628,7 @@ gic_acpi_get_madt_cpu_num(struct acpi_subtable_header *header,
>     struct acpi_madt_generic_interrupt *cpuif;
>
>     cpuif = (struct acpi_madt_generic_interrupt *)header;
> -    if ( BAD_MADT_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
> +    if ( BAD_MADT_GICC_ENTRY(cpuif, end) || !cpuif->gicr_base_address )
>         return -EINVAL;
>
>     return 0;
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index fe60619e99cf..3b0331b53830 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -454,7 +454,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
>     unsigned long madt_size;
>
>     madt_size = sizeof(struct acpi_table_madt)
> -                + sizeof(struct acpi_madt_generic_interrupt) * d->max_vcpus
> +                + ACPI_MADT_GICC_LENGTH * d->max_vcpus
>                 + sizeof(struct acpi_madt_generic_distributor)
>                 + gic_hw_ops->get_hwdom_extra_madt_size(d);
>
> --
> 2.17.1
>
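The revision-dependent length check the patch relies on can be sketched in plain C. This is an illustrative model of the idea behind BAD_MADT_GICC_ENTRY, not the Xen or Linux macro itself; the function names and signatures here are invented for the sketch. The only facts it encodes are the GICC entry sizes: 76 bytes up to ACPI 5.1 and 80 bytes from ACPI 6.0 onwards.

```c
#include <stdbool.h>
#include <stddef.h>

/* MADT GICC entry lengths by spec revision: ACPI 5.1 defines a
 * 76-byte GICC structure; ACPI 6.0 extended it to 80 bytes. */
#define GICC_LEN_UP_TO_5_1  76u
#define GICC_LEN_FROM_6_0   80u

/* Expected GICC entry length for a given (FADT) major revision. */
static unsigned int expected_gicc_length(unsigned int revision)
{
    return revision < 6 ? GICC_LEN_UP_TO_5_1 : GICC_LEN_FROM_6_0;
}

/* An entry is bad if its length field does not match the size the
 * table's revision calls for, or if it would run past the end of
 * the table (bytes_left = table end minus entry start). */
static bool bad_gicc_entry(unsigned int entry_len, size_t bytes_left,
                           unsigned int revision)
{
    return entry_len != expected_gicc_length(revision) ||
           entry_len > bytes_left;
}
```

A plain BAD_MADT_ENTRY-style `length == sizeof(struct acpi_madt_generic_interrupt)` test would reject valid 76-byte entries from 5.1 firmware, which is exactly what the patch fixes; the same revision-aware length is then used when sizing the hardware domain's MADT in the gic.c hunk.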



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 18:28:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 18:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31325.61666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfofd-000135-Cj; Thu, 19 Nov 2020 18:28:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31325.61666; Thu, 19 Nov 2020 18:28:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfofd-00012y-9S; Thu, 19 Nov 2020 18:28:37 +0000
Received: by outflank-mailman (input) for mailman id 31325;
 Thu, 19 Nov 2020 18:28:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfofb-00012q-N7; Thu, 19 Nov 2020 18:28:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfofb-0002eS-Co; Thu, 19 Nov 2020 18:28:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfofb-0000zj-1u; Thu, 19 Nov 2020 18:28:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfofa-000620-Ut; Thu, 19 Nov 2020 18:28:34 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xxYaxW5g5/1ukvrBatgIX+Mt/aXGdeI+bUTLyADieq8=; b=3G8HKbO2FZVkzzOOb9VTnmuNOp
	rUknkN6fb2yajQ85xFmwLSZqST0AvXpEy9dPtCsbziFGb2vkZTHvjtngegkFYNzoQ8j4W+/qcP7IS
	eYuvnWquIqdN1W3fV8Kt7dSbGQudpBm/+tPGO9BqDP++39YyzTdAJ567paQOK5l+HPyM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156871-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156871: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c2e7554e1b85935d962127efa3c2a76483b0b3b6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 18:28:34 +0000

flight 156871 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156871/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c2e7554e1b85935d962127efa3c2a76483b0b3b6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  110 days
Failing since        152366  2020-08-01 20:49:34 Z  109 days  183 attempts
Testing same since   156871  2020-11-19 04:28:45 Z    0 days    1 attempts

------------------------------------------------------------
3528 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 675413 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31366.61741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHm-0005e6-4M; Thu, 19 Nov 2020 19:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31366.61741; Thu, 19 Nov 2020 19:08:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHl-0005dw-Va; Thu, 19 Nov 2020 19:08:01 +0000
Received: by outflank-mailman (input) for mailman id 31366;
 Thu, 19 Nov 2020 19:08:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHk-0005bQ-KF
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:08:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHk-0003TA-8e; Thu, 19 Nov 2020 19:08:00 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHk-0000TE-1g; Thu, 19 Nov 2020 19:08:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHk-0005bQ-KF
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:08:00 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=OFTW7+XoUzoZJqOmYFJ6tvN5l1t/e5UfO6wPacf/RUI=; b=0DpbZfK/HWJzMuVRx7orNGkBS
	57IMf9WdvCWoIx4VzwZG/bZ5zb5qDQcM9bGlX879p6AbGFSr4LwnZmTi3N/dMVnEjY4jCb2snnRNi
	oMoZTEdFPbvcYBmp5RP4oj+GnnwCVDZrA/coXE5hrf28LGh+cd99Okc1sMszTev8e7HB4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHk-0003TA-8e; Thu, 19 Nov 2020 19:08:00 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHk-0000TE-1g; Thu, 19 Nov 2020 19:08:00 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@amazon.com>
Subject: [PATCH RFC 5/6] xen/arm: mm: Don't open-code Xen PT update in remove_early_mappings
Date: Thu, 19 Nov 2020 19:07:50 +0000
Message-Id: <20201119190751.22345-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119190751.22345-1-julien@xen.org>
References: <20201119190751.22345-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

Now that xen_pt_update_entry() is able to deal with different mapping
sizes, we can replace the open-coded page-table update with a call
to modify_xen_mappings().

As the function is not meant to fail, a BUG_ON() is added to check the
return value.
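The pattern described above — calling a function that is not expected to fail and crashing loudly on a non-zero return — can be sketched in plain, standalone C. All names below are illustrative stand-ins, not Xen APIs: BUG_ON is modeled with assert(), and modify_mappings_stub() merely mimics the 0-on-success convention of modify_xen_mappings().

```c
#include <assert.h>

/* Illustrative stand-in for modify_xen_mappings(): returns 0 on
 * success, non-zero on failure. */
static int modify_mappings_stub(unsigned long start, unsigned long end)
{
    (void)start;
    (void)end;
    return 0; /* this path is not expected to fail */
}

/* BUG_ON-style check: a non-zero rc here would be a programming
 * error, so abort rather than continue in a corrupt state. */
#define BUG_ON(cond) assert(!(cond))

void remove_mappings_example(void)
{
    int rc = modify_mappings_stub(0x200000UL, 0x400000UL);
    BUG_ON(rc);
}
```

The design point is that error handling is deliberately absent: when a failure is impossible by construction, asserting on it documents the invariant instead of silently ignoring the return value.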

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Julien Grall <julien.grall@amazon.com>
---
 xen/arch/arm/mm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index af0f12b6e6d3..aee6d410ac4f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -598,11 +598,11 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
 
 void __init remove_early_mappings(void)
 {
-    lpae_t pte = {0};
-    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
-    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
-              pte);
-    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
+    int rc;
+
+    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
+                             _PAGE_BLOCK);
+    BUG_ON(rc);
 }
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31363.61698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHi-0005Xj-25; Thu, 19 Nov 2020 19:07:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31363.61698; Thu, 19 Nov 2020 19:07:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHh-0005Xc-UF; Thu, 19 Nov 2020 19:07:57 +0000
Received: by outflank-mailman (input) for mailman id 31363;
 Thu, 19 Nov 2020 19:07:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHh-0005X6-0z
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHg-0003Sl-LJ; Thu, 19 Nov 2020 19:07:56 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHg-0000TE-Ct; Thu, 19 Nov 2020 19:07:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHh-0005X6-0z
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:57 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=cmOxqGi+9sZD6PnyoF8KEWd7i5yr+UWn5KmlxEDs9uo=; b=Lb5VWktyWuxQbfI5AHpyAtvVK
	HPNbZUKsS1qxJyI+SzqO232NQnYpoo09mDrz5bqngnpw/B/4vG56k9gTPj51G+9kzsruNO0zzrElP
	Zg2FaikhScAI3HIQd8larppXIcOCHy2junX6L536ItQ1xhACDfGt8R432A9pFvCPJve7k=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHg-0003Sl-LJ; Thu, 19 Nov 2020 19:07:56 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHg-0000TE-Ct; Thu, 19 Nov 2020 19:07:56 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH RFC 2/6] xen/arm: mm: Remove ; at the end of mm_printk()
Date: Thu, 19 Nov 2020 19:07:47 +0000
Message-Id: <20201119190751.22345-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119190751.22345-1-julien@xen.org>
References: <20201119190751.22345-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The ; at the end of mm_printk() means the following code will not build
correctly:

if ( ... )
    mm_printk(...);
else
    ...

As the macro is meant to be used like a function call, remove the
trailing ; from its definition.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 4dd886f7c80d..59f8a3f15fd1 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -59,7 +59,7 @@ mm_printk(const char *fmt, ...) {}
     {                                       \
         dprintk(XENLOG_ERR, fmt, ## args);  \
         WARN();                             \
-    } while (0);
+    } while (0)
 #endif
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31367.61753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHn-0005gi-HA; Thu, 19 Nov 2020 19:08:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31367.61753; Thu, 19 Nov 2020 19:08:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHn-0005gX-Bv; Thu, 19 Nov 2020 19:08:03 +0000
Received: by outflank-mailman (input) for mailman id 31367;
 Thu, 19 Nov 2020 19:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHl-0005de-Nl
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:08:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHl-0003TU-Ep; Thu, 19 Nov 2020 19:08:01 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHl-0000TE-7K; Thu, 19 Nov 2020 19:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHl-0005de-Nl
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:08:01 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=qv0miUVBKz6uIh+FSZC0kOfLNz+frFcB+3hoEnxh+9s=; b=zL91chzf8hZBUIw5w4SKJJCD1
	W/X7Rm/ungjPoPdwz8RLEZO8sUiZWA7VPaYJjnzH1tDtMwldOMDVQ7vxl8g14t32Vh6GS1ZF8E0s6
	piSb7LB0PqXXVptGpcTy159/CJZiKSLrcG4O7QFfpFifsZrcDgxk4XWSvPv/A/gPKu7xs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHl-0003TU-Ep; Thu, 19 Nov 2020 19:08:01 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHl-0000TE-7K; Thu, 19 Nov 2020 19:08:01 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH RFC 6/6] xen/arm: mm: Re-implement early_fdt_map() using map_pages_to_xen()
Date: Thu, 19 Nov 2020 19:07:51 +0000
Message-Id: <20201119190751.22345-7-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119190751.22345-1-julien@xen.org>
References: <20201119190751.22345-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

Now that map_pages_to_xen() has been extended to support 2MB mappings,
we can replace the create_mappings() calls with map_pages_to_xen()
calls.

The mapping can also be marked read-only, as Xen has no business
modifying the host Device Tree.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/mm.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index aee6d410ac4f..37ea9d5ce20a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -558,6 +558,7 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
     paddr_t offset;
     void *fdt_virt;
     uint32_t size;
+    int rc;
 
     /*
      * Check whether the physical FDT address is set and meets the minimum
@@ -573,8 +574,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
     /* The FDT is mapped using 2MB superpage */
     BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M);
 
-    create_mappings(xen_second, BOOT_FDT_VIRT_START, paddr_to_pfn(base_paddr),
-                    SZ_2M >> PAGE_SHIFT, SZ_2M);
+    rc = map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr),
+                          SZ_2M >> PAGE_SHIFT,
+                          PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
+    if ( rc )
+        panic("Unable to map the device-tree.\n");
+
 
     offset = fdt_paddr % SECOND_SIZE;
     fdt_virt = (void *)BOOT_FDT_VIRT_START + offset;
@@ -588,9 +593,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
 
     if ( (offset + size) > SZ_2M )
     {
-        create_mappings(xen_second, BOOT_FDT_VIRT_START + SZ_2M,
-                        paddr_to_pfn(base_paddr + SZ_2M),
-                        SZ_2M >> PAGE_SHIFT, SZ_2M);
+        rc = map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M,
+                              maddr_to_mfn(base_paddr + SZ_2M),
+                              SZ_2M >> PAGE_SHIFT,
+                              PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
+        if ( rc )
+            panic("Unable to map the device-tree\n");
     }
 
     return fdt_virt;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31361.61680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHf-0005WG-GZ; Thu, 19 Nov 2020 19:07:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31361.61680; Thu, 19 Nov 2020 19:07:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHf-0005W9-Dh; Thu, 19 Nov 2020 19:07:55 +0000
Received: by outflank-mailman (input) for mailman id 31361;
 Thu, 19 Nov 2020 19:07:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHe-0005W4-PD
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHe-0003SX-7N; Thu, 19 Nov 2020 19:07:54 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHd-0000TE-Tw; Thu, 19 Nov 2020 19:07:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHe-0005W4-PD
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=RBmBaVqmiUjfq+YgUTE0vJmyrxBqE/rwdqrNFYyLDHc=; b=6oUTZanFTg59Q1EOMCPjBTSsMd
	VdE4CrPnGMnT0rDjnM1F05nmfkMS1k4rHxPbg7tPsTFMMeBDRLwMrzVJEZ/3or+imZi3fVDSNrtrx
	hxbtDHkczn46355GctRu8cClSM/x0xiXfudqdBaqF2XGwfUlaptLBOInw4EmfMRb3sMs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHe-0003SX-7N; Thu, 19 Nov 2020 19:07:54 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHd-0000TE-Tw; Thu, 19 Nov 2020 19:07:54 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH RFC 0/6] xen/arm: mm: Add limited support for superpages
Date: Thu, 19 Nov 2020 19:07:45 +0000
Message-Id: <20201119190751.22345-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

This is a first attempt to add superpage mappings in
xen_pt_update_entry(). The end goal is to remove open-coded mappings,
which will help to:
  1) get better compliance with the Arm memory model
  2) pave the way for other page sizes (64KB, 16KB).

For now, only the open-coded mappings for the Device Tree are reworked.
The others will be handled later.

Julien Grall (5):
  xen/arm: mm: Remove ; at the end of mm_printk()
  xen/arm: setup: Call unregister_init_virtual_region() after the last
    init function
  xen/arm: mm: Allow other mapping size in xen_pt_update_entry()
  xen/arm: mm: Don't open-code Xen PT update in remove_early_mappings
  xen/arm: mm: Re-implement early_fdt_map() using map_pages_to_xen()

Stefano Stabellini (1):
  xen/arm: mm: Remove special case for CPU0 in dump_hyp_walk()

 xen/arch/arm/mm.c          | 122 +++++++++++++++++++++++++++----------
 xen/arch/arm/setup.c       |   3 +-
 xen/include/asm-arm/page.h |   4 ++
 3 files changed, 95 insertions(+), 34 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31362.61693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHh-0005XI-PO; Thu, 19 Nov 2020 19:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31362.61693; Thu, 19 Nov 2020 19:07:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHh-0005XB-MD; Thu, 19 Nov 2020 19:07:57 +0000
Received: by outflank-mailman (input) for mailman id 31362;
 Thu, 19 Nov 2020 19:07:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHg-0005X0-36
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHf-0003Sd-Hx; Thu, 19 Nov 2020 19:07:55 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHf-0000TE-7B; Thu, 19 Nov 2020 19:07:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHg-0005X0-36
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:56 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=n+a9yOVUI+xS4AJamh0RTQNrcJQmW3VHc9oZb3bl7Og=; b=obXqYpSTe3KR4qspumZnA7LdO
	XskxTBDOrMLk8fcuIB3bI/01mQ7lrNeckM5L35jFgoBYHQyNC4w+kXXPFR7Qg/oLuXAxdsFNGIYMN
	L5Eafa5pMWhEBkU7tP2bA73RmECaelgz8BthzRWpcc52VXS769fL+gz1+eRKXeIqRm05A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHf-0003Sd-Hx; Thu, 19 Nov 2020 19:07:55 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHf-0000TE-7B; Thu, 19 Nov 2020 19:07:55 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH RFC 1/6] xen/arm: mm: Remove special case for CPU0 in dump_hyp_walk()
Date: Thu, 19 Nov 2020 19:07:46 +0000
Message-Id: <20201119190751.22345-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119190751.22345-1-julien@xen.org>
References: <20201119190751.22345-1-julien@xen.org>

From: Stefano Stabellini <sstabellini@kernel.org>

There is no need to have a special case for CPU0 when converting the
page-table virtual address into a physical address. The helper
virt_to_maddr() is able to translate any address as long as the root
page-tables are mapped in the virtual address space. This is the case
for all the CPUs at the moment.

So use the same BUG_ON() regardless of the CPU.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
[julien: Rework the commit message]
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

I went back through the conversation in [1] regarding the issue when
loading Xen below 2MB on Arm32. The example provided there is wrong
because, to find the physical address, we need to add 'phys_offset',
not subtract it.

So I removed the comment claiming the code was buggy.

[1] https://marc.info/?l=xen-devel&m=157081398022401
---
 xen/arch/arm/mm.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9c4b26bf079b..4dd886f7c80d 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -284,10 +284,7 @@ void dump_hyp_walk(vaddr_t addr)
            "on CPU%d via TTBR 0x%016"PRIx64"\n",
            addr, smp_processor_id(), ttbr);
 
-    if ( smp_processor_id() == 0 )
-        BUG_ON( (lpae_t *)(unsigned long)(ttbr - phys_offset) != pgtable );
-    else
-        BUG_ON( virt_to_maddr(pgtable) != ttbr );
+    BUG_ON( virt_to_maddr(pgtable) != ttbr );
     dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31365.61725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHk-0005bN-R3; Thu, 19 Nov 2020 19:08:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31365.61725; Thu, 19 Nov 2020 19:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHk-0005b1-IG; Thu, 19 Nov 2020 19:08:00 +0000
Received: by outflank-mailman (input) for mailman id 31365;
 Thu, 19 Nov 2020 19:07:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHj-0005ZJ-4f
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHi-0003T0-VT; Thu, 19 Nov 2020 19:07:58 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHi-0000TE-OH; Thu, 19 Nov 2020 19:07:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHj-0005ZJ-4f
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=JVCtMkS3vhD4oIjsLKCddPmn6X/R7AsIbb0WSv2njxY=; b=eUfW4TcRvPrmP4Ulje9fZClxy
	3cLEFCVtKsnHLTT9zFmGK5ZsU3AAiuw5tKfLFGZYSr+vzUnvX6eGkqpsh4av2NavyKJz7o43/hHqc
	QGDWkDNI18910ne9fTRfcWcGOiHPsOBkQwAE71wMAkBXg5hNCMF+qtkXn4jAGlypMJjYM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHi-0003T0-VT; Thu, 19 Nov 2020 19:07:58 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHi-0000TE-OH; Thu, 19 Nov 2020 19:07:58 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in xen_pt_update_entry()
Date: Thu, 19 Nov 2020 19:07:49 +0000
Message-Id: <20201119190751.22345-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119190751.22345-1-julien@xen.org>
References: <20201119190751.22345-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

At the moment, xen_pt_update_entry() only supports mappings at level 3
(i.e. 4KB mappings). While this is fine for most of the runtime
helpers, the boot code will require superpage mappings.

We don't want to allow superpage mappings by default, as some of the
callers may expect small mappings (e.g. populate_pt_range()) or even
expect to unmap only part of a superpage.

To keep the code simple, a new flag _PAGE_BLOCK is introduced to
allow the caller to enable superpage mapping.

As the code doesn't support all the combinations, xen_pt_check_entry()
is extended to take into account the cases we don't support when
using block mappings:
    - Replacing a table with a mapping. This may happen if a region was
    first mapped with 4KB mappings and then later replaced with a 2MB
    (or 1GB) mapping.
    - Removing/modifying a table. This may happen if a caller tries to
    remove a region with _PAGE_BLOCK set when it was created without it.

Note that the current restriction means that the caller must ensure
_PAGE_BLOCK is consistently set/cleared across all the updates of a
given virtual region. This ought to be fine with the expected
use cases.

More rework will be necessary if we want to remove the restrictions.

Note that nr_mfns is now marked const as it is used for flushing the
TLBs and we don't want it to be modified.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---

This patch is necessary for upcoming changes in the MM code. I would
like to remove most of the open-coded updates of the page-tables as
they are not easy to properly fix/extend. For instance, always mapping
the xenheap with 1GB superpages is plain wrong because:
    - RAM regions are not always 1GB aligned (such as on the RPi 4) and
    we may end up mapping MMIO with cacheable attributes
    - RAM may contain reserved regions that should not be mapped
---
 xen/arch/arm/mm.c          | 87 ++++++++++++++++++++++++++++++--------
 xen/include/asm-arm/page.h |  4 ++
 2 files changed, 73 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 59f8a3f15fd1..af0f12b6e6d3 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1060,9 +1060,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
 }
 
 /* Sanity check of the entry */
-static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
+static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
+                               unsigned int flags)
 {
-    /* Sanity check when modifying a page. */
+    /* Sanity check when modifying an entry. */
     if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
     {
         /* We don't allow modifying an invalid entry. */
@@ -1072,6 +1073,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
             return false;
         }
 
+        /* We don't allow modifying a table entry */
+        if ( !lpae_is_mapping(entry, level) )
+        {
+            mm_printk("Modifying a table entry is not allowed.\n");
+            return false;
+        }
+
         /* We don't allow changing memory attributes. */
         if ( entry.pt.ai != PAGE_AI_MASK(flags) )
         {
@@ -1087,7 +1095,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
             return false;
         }
     }
-    /* Sanity check when inserting a page */
+    /* Sanity check when inserting a mapping */
     else if ( flags & _PAGE_PRESENT )
     {
         /* We should be here with a valid MFN. */
@@ -1096,18 +1104,28 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
         /* We don't allow replacing any valid entry. */
         if ( lpae_is_valid(entry) )
         {
-            mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
-                      mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
+            if ( lpae_is_mapping(entry, level) )
+                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
+                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
+            else
+                mm_printk("Trying to replace a table with a mapping.\n");
             return false;
         }
     }
-    /* Sanity check when removing a page. */
+    /* Sanity check when removing a mapping. */
     else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
     {
         /* We should be here with an invalid MFN. */
         ASSERT(mfn_eq(mfn, INVALID_MFN));
 
-        /* We don't allow removing page with contiguous bit set. */
+        /* We don't allow removing a table */
+        if ( lpae_is_table(entry, level) )
+        {
+            mm_printk("Removing a table is not allowed.\n");
+            return false;
+        }
+
+        /* We don't allow removing a mapping with contiguous bit set. */
         if ( entry.pt.contig )
         {
             mm_printk("Removing entry with contiguous bit set is not allowed.\n");
@@ -1126,12 +1144,12 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 }
 
 static int xen_pt_update_entry(mfn_t root, unsigned long virt,
-                               mfn_t mfn, unsigned int flags)
+                               mfn_t mfn, unsigned int page_order,
+                               unsigned int flags)
 {
     int rc;
     unsigned int level;
-    /* We only support 4KB mapping (i.e level 3) for now */
-    unsigned int target = 3;
+    unsigned int target = 3 - (page_order / LPAE_SHIFT);
     lpae_t *table;
     /*
      * The intermediate page tables are read-only when the MFN is not valid
@@ -1186,7 +1204,7 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
     entry = table + offsets[level];
 
     rc = -EINVAL;
-    if ( !xen_pt_check_entry(*entry, mfn, flags) )
+    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
         goto out;
 
     /* If we are only populating page-table, then we are done. */
@@ -1204,8 +1222,11 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
         {
             pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
 
-            /* Third level entries set pte.pt.table = 1 */
-            pte.pt.table = 1;
+            /*
+             * First and second level pages set pte.pt.table = 0, but
+             * third level entries set pte.pt.table = 1.
+             */
+            pte.pt.table = (level == 3);
         }
         else /* We are updating the permission => Copy the current pte. */
             pte = *entry;
@@ -1229,11 +1250,12 @@ static DEFINE_SPINLOCK(xen_pt_lock);
 
 static int xen_pt_update(unsigned long virt,
                          mfn_t mfn,
-                         unsigned long nr_mfns,
+                         const unsigned long nr_mfns,
                          unsigned int flags)
 {
     int rc = 0;
-    unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
+    unsigned long vfn = paddr_to_pfn(virt);
+    unsigned long left = nr_mfns;
 
     /*
      * For arm32, page-tables are different on each CPUs. Yet, they share
@@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
 
     spin_lock(&xen_pt_lock);
 
-    for ( ; addr < addr_end; addr += PAGE_SIZE )
+    while ( left )
     {
-        rc = xen_pt_update_entry(root, addr, mfn, flags);
+        unsigned int order;
+        unsigned long mask;
+
+        /*
+         * Don't take into account the MFN when removing mapping (i.e
+         * MFN_INVALID) to calculate the correct target order.
+         *
+         * XXX: Support superpage mappings if nr is not aligned to a
+         * superpage size.
+         */
+        mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
+        mask |= vfn | left;
+
+        /*
+         * Always use level 3 mapping unless the caller request block
+         * mapping.
+         */
+        if ( likely(!(flags & _PAGE_BLOCK)) )
+            order = THIRD_ORDER;
+        else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) )
+            order = FIRST_ORDER;
+        else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) )
+            order = SECOND_ORDER;
+        else
+            order = THIRD_ORDER;
+
+        rc = xen_pt_update_entry(root, pfn_to_paddr(vfn), mfn, order, flags);
         if ( rc )
             break;
 
+        vfn += 1U << order;
         if ( !mfn_eq(mfn, INVALID_MFN) )
-            mfn = mfn_add(mfn, 1);
+            mfn = mfn_add(mfn, 1U << order);
+
+        left -= (1U << order);
     }
 
     /*
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 4ea8e97247c8..de096b0968e3 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -79,6 +79,7 @@
  * [3:4] Permission flags
  * [5]   Page present
  * [6]   Only populate page tables
+ * [7]   Use any level mapping only (i.e. superpages is allowed)
  */
 #define PAGE_AI_MASK(x) ((x) & 0x7U)
 
@@ -92,6 +93,9 @@
 #define _PAGE_PRESENT    (1U << 5)
 #define _PAGE_POPULATE   (1U << 6)
 
+#define _PAGE_BLOCK_BIT     7
+#define _PAGE_BLOCK         (1U << _PAGE_BLOCK_BIT)
+
 /*
  * _PAGE_DEVICE and _PAGE_NORMAL are convenience defines. They are not
  * meant to be used outside of this header.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 19:08:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 19:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31364.61717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHk-0005aV-Bg; Thu, 19 Nov 2020 19:08:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31364.61717; Thu, 19 Nov 2020 19:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfpHk-0005aN-7Y; Thu, 19 Nov 2020 19:08:00 +0000
Received: by outflank-mailman (input) for mailman id 31364;
 Thu, 19 Nov 2020 19:07:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kfpHi-0005Y6-3U
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHh-0003Ss-Qi; Thu, 19 Nov 2020 19:07:57 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kfpHh-0000TE-Ia; Thu, 19 Nov 2020 19:07:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHi-0005Y6-3U
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 19:07:58 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=OboHhEHEvSZ64fH4mO7AsxamLOG7CeMXAAQmTYLPwD0=; b=nxMldoX8fo6JIFDoFUq3JXAQZ
	IirT47R30UvlFUhhaYsFHvyUpOZfY8QNrNlbch3PqDjSe/IyEhNkZrV6yMbjtarBnjdiPawRBKqL0
	DeqnHdJXGjt3NvT11mgBXw/dw3+S1x2I8oRis0OguvPO90nfTbP60ojmlWm2Psdr8fvFw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHh-0003Ss-Qi; Thu, 19 Nov 2020 19:07:57 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kfpHh-0000TE-Ia; Thu, 19 Nov 2020 19:07:57 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH RFC 3/6] xen/arm: setup: Call unregister_init_virtual_region() after the last init function
Date: Thu, 19 Nov 2020 19:07:48 +0000
Message-Id: <20201119190751.22345-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201119190751.22345-1-julien@xen.org>
References: <20201119190751.22345-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

discard_initial_modules() is an init function; if that path contains a
BUG() or WARN(), we still want to get the full stack trace.

The init virtual region is therefore now kept until after the last
init function has been called.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/setup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7fcff9af2a7e..2532ec973913 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -72,10 +72,11 @@ domid_t __read_mostly max_init_domid;
 
 static __used void init_done(void)
 {
+    discard_initial_modules();
+
     /* Must be done past setting system_state. */
     unregister_init_virtual_region();
 
-    discard_initial_modules();
     free_init_memory();
     startup_cpu_idle_loop();
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 19 20:07:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 20:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31414.61770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfqDB-0004XS-G7; Thu, 19 Nov 2020 20:07:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31414.61770; Thu, 19 Nov 2020 20:07:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfqDB-0004XL-Cv; Thu, 19 Nov 2020 20:07:21 +0000
Received: by outflank-mailman (input) for mailman id 31414;
 Thu, 19 Nov 2020 20:07:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfqDA-0004XD-97; Thu, 19 Nov 2020 20:07:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfqD9-0004lC-WC; Thu, 19 Nov 2020 20:07:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfqD9-0006XM-Jd; Thu, 19 Nov 2020 20:07:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfqD9-00029u-J5; Thu, 19 Nov 2020 20:07:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kfqDA-0004XD-97; Thu, 19 Nov 2020 20:07:20 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ho6q1VAx94XQPX7eEkBOWqjqcZYvuiV3gkvRnedrFC0=; b=uIAmktrPKUD0EXlRMJb3S8HOTw
	l+iMp2Df0O65AAkl+OGULAW/rNzxK4Vvn0+YZJyr0tRnlfoc6cMTjJiA0ea+h3hWZ4q5ac0NgIHCL
	esBmoTg0p4V+Gx2dN+sfCU2d478QROgvglz27Mg8EYzxYVIe928sE0EoE9lCCYHBn/ic=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156874-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156874: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=315443293a2d0d7c183ca6dd4624d9e4f8a7054a
X-Osstest-Versions-That:
    linux=2544d06afd8d060f35b159809274e4b7477e63e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 20:07:19 +0000

flight 156874 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156874/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156722

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156722
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156722
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156722
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156722
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156722
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156722
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156722
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156722
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156722
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156722
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156722
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                315443293a2d0d7c183ca6dd4624d9e4f8a7054a
baseline version:
 linux                2544d06afd8d060f35b159809274e4b7477e63e8

Last test of basis   156722  2020-11-12 16:54:57 Z    7 days
Testing same since   156861  2020-11-18 18:41:10 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Aaron Brown <aaron.f.brown@intel.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexei Starovoitov <ast@kernel.org>
  Anand Jain <anand.jain@oracle.com>
  Anand K Mistry <amistry@google.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Jeffery <andrew@aj.id.au>
  Andrew Jones <drjones@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ansuel Smith <ansuelsmth@gmail.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnaud de Turckheim <quarium@gmail.com>
  Baolin Wang <baolin.wang7@gmail.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Billy Tsai <billy_tsai@aspeedtech.com>
  Bob Peterson <rpeterso@redhat.com>
  Boris Protopopov <pboris@amazon.com>
  Borislav Petkov <bp@suse.de>
  Brian Bunker <brian@purestorage.com>
  Brian Foster <bfoster@redhat.com>
  Brian Norris <briannorris@chromium.org>
  Chao Leng <lengchao@huawei.com>
  Chen Zhou <chenzhou10@huawei.com>
  Chris Brandt <chris.brandt@renesas.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Lameter <cl@linux.com>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Chuck Lever <chuck.lever@oracle.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Chunyan Zhang <zhang.lyra@gmail.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Ian King <colin.king@canonical.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Howells <dhowells@redhat.com>
  David Sterba <dsterba@suse.com>
  David Verbeiren <david.verbeiren@tessares.net>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Don Brace <don.brace@microchip.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Dumazet <edumazet@google.com>
  Evan Nimmo <evan.nimmo@alliedtelesis.co.nz>
  Evan Quan <evan.quan@amd.com>
  Evgeny Novikov <novikov@ispras.ru>
  Felipe Balbi <balbi@kernel.org>
  Fred Gao <fred.gao@intel.com>
  Gao Xiang <hsiangkao@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  George Spelvin <lkml@sdf.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hannes Reinecke <hare@suse.de>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ingo Molnar <mingo@kernel.org>
  J. Bruce Fields <bfields@redhat.com>
  Jack Wang <jinpu.wang@cloud.ionos.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan "Yenya" Kasprzak <kas@fi.muni.cz>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Wang <jasowang@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jing Xiangfeng <jingxiangfeng@huawei.com>
  Jiri Olsa <jolsa@kernel.org>
  Jiri Olsa <jolsa@redhat.com>
  Jitendra Khasdev <jitendra.khasdev@oracle.com>
  Joakim Zhang <qiangqing.zhang@nxp.com>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <johannes.thumshirn@wdc.com>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Joseph Qi <joseph.qi@linux.alibaba.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Kalle Valo <kvalo@codeaurora.org>
  Keita Suzuki <keitasuzuki.park@sslab.ics.keio.ac.jp>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Laurent Dufour <ldufour@linux.ibm.com>
  Lee Jones <lee.jones@linaro.org>
  lining <lining2020x@163.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Liu, Yi L <yi.l.liu@intel.com>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luka Oreskovic <luka.oreskovic@sartura.hr>
  Mao Wenan <wenan.mao@linux.alibaba.com>
  Maor Gottlieb <maorg@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Martin Hundebøll <martin@geanix.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schiller <ms@dev.tdt.de>
  Martin Willi <martin@strongswan.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Masashi Honma <masashi.honma@gmail.com>
  Matteo Croce <mcroce@microsoft.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Petlan <mpetlan@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Ming Lei <ming.lei@redhat.com>
  Namhyung Kim <namhyung@kernel.org>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Nikolay Borisov <nborisov@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Oliver Herms <oliver.peter.herms@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Olivier Moysan <olivier.moysan@st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Paul Moore <paul@paul-moore.com>
  Pavel Machek <pavel@ucw.cz>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Peter Zijlstra <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qian Cai <cai@redhat.com>
  Qii Wang <qii.wang@mediatek.com>
  Qiujun Huang <hqjagain@gmail.com>
  Qu Wenruo <wqu@suse.com>
  Rob Herring <robh@kernel.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Saeed Mahameed <saeedm@nvidia.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sandeep Raghuraman <sandy.8925@gmail.com>
  Santosh Shukla <sashukla@nvidia.com>
  Sasha Levin <sashal@kernel.org>
  Sean Anderson <seanga2@gmail.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sreekanth Reddy <sreekanth.reddy@broadcom.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Ivanichkin <sivanichkin@yandex-team.ru>
  Stefano Brivio <sbrivio@redhat.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephane Grosjean <s.grosjean@peak-system.com>
  Stephen Boyd <swboyd@chromium.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Sven Van Asbroeck <thesven73@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tapas Kundu <tkundu@vmware.com>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tomasz Figa <tfiga@chromium.org>
  Tommi Rantala <tommi.t.rantala@nokia.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ulrich Hecht <uli+renesas@fpond.eu>
  Ursula Braun <ubraun@linux.ibm.com>
  Veerabadhran Gopalakrishnan <veerabadhran.gopalakrishnan@amd.com>
  Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Vinicius Costa Gomes <vinicius.gomes@intel.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vlastimil Babka <vbabka@suse.cz>
  Wade Mealing <wmealing@redhat.com>
  Wang Hai <wanghai38@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Wengang Wang <wen.gang.wang@oracle.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Yangbo Lu <yangbo.lu@nxp.com>
  Ye Bin <yebin10@huawei.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  Yi Sun <yi.y.sun@linux.intel.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yunsheng Lin <linyunsheng@huawei.com>
  Zeng Tao <prime.zeng@hisilicon.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  zhuoliang zhang <zhuoliang.zhang@mediatek.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
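The hints above are git's standard `advice.ignoredHook` warnings: the repository has server-side hooks, but they lack the executable bit, so git skips them and warns. A minimal sketch of the two remedies the hint itself suggests (hook paths are illustrative, relative to the bare repository):

```shell
# Option 1: restore the executable bit so git runs the hooks again.
chmod +x hooks/update hooks/post-receive hooks/post-update

# Option 2: if skipping the hooks is intentional, silence the warning.
# This is per-repository; add --global to apply it everywhere.
git config advice.ignoredHook false
```

Which option is right depends on whether the hooks were deliberately disabled; silencing the warning leaves the hooks inert.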
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   2544d06afd8d..315443293a2d  315443293a2d0d7c183ca6dd4624d9e4f8a7054a -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 21:20:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 21:20:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31428.61792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfrLN-0003o6-QG; Thu, 19 Nov 2020 21:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31428.61792; Thu, 19 Nov 2020 21:19:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfrLN-0003nz-N0; Thu, 19 Nov 2020 21:19:53 +0000
Received: by outflank-mailman (input) for mailman id 31428;
 Thu, 19 Nov 2020 21:19:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfrLM-0003nr-7D; Thu, 19 Nov 2020 21:19:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfrLL-0006En-K0; Thu, 19 Nov 2020 21:19:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfrLL-00023b-Bs; Thu, 19 Nov 2020 21:19:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfrLL-0000gI-BO; Thu, 19 Nov 2020 21:19:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pumW8diOpeDag8yW+zUSFz80TUk9jtt1VthrU3md9SM=; b=UggZm+dAs8CBQLHw6nrXF/U5xW
	o+YQXYvUjtSUmQ1ad7XqJJ6qb89G1UAxsDCkiz+OSMmGj+415rXwumjFr4tUwd0QwvJwlP13Hsxzt
	BbXpkfDldXi1+hcRCJ4oz/+/gYGLYA0cH5Hl/UL6FG7C7xV6a6rmar6jiMyH+VtkD2Q0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156879-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156879: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6c8dd15c4ae42501438a525ec41299f365f223cb
X-Osstest-Versions-That:
    ovmf=6c4efc050974812d6ebee1cea711e3c81e4e4442
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 19 Nov 2020 21:19:51 +0000

flight 156879 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156879/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 6c8dd15c4ae42501438a525ec41299f365f223cb
baseline version:
 ovmf                 6c4efc050974812d6ebee1cea711e3c81e4e4442

Last test of basis   156869  2020-11-19 02:39:50 Z    0 days
Testing same since   156879  2020-11-19 11:39:56 Z    0 days    1 attempt

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Nishant C Mistry <nishant.c.mistry@intel.com>
  Nishant Mistry <devel@edk2.groups.io>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6c4efc0509..6c8dd15c4a  6c8dd15c4ae42501438a525ec41299f365f223cb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 21:40:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 21:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31441.61811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfrfE-0006zZ-S5; Thu, 19 Nov 2020 21:40:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31441.61811; Thu, 19 Nov 2020 21:40:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfrfE-0006zS-Oj; Thu, 19 Nov 2020 21:40:24 +0000
Received: by outflank-mailman (input) for mailman id 31441;
 Thu, 19 Nov 2020 21:40:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IlxA=EZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfrfD-0006zN-B6
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 21:40:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6eaef4dc-e5a2-4f96-bc69-e8f29f5da8fc;
 Thu, 19 Nov 2020 21:40:22 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DDCE922202;
 Thu, 19 Nov 2020 21:40:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605822021;
	bh=ySPCWCvo4acFab/emnE2bP+R+/dODfYOq9N2H0YZbyM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=obbm0LO7cdAnHU5hUEqexR/EQr/evxMNP+jXlMf++TUvgWQxGs+xi2RIdnfZKtfwI
	 NYkYONRAL8VjuBFiKOtlXfGgJFNJBk/1GULO1OmuMK+chmbt2DuV0Y+W62wD3/nWjp
	 MKULJbrEYjYsgScGk6ivCJnczuj3agar9xtYW5vo=
Date: Thu, 19 Nov 2020 13:40:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    Bertrand.Marquis@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
In-Reply-To: <3e8c03eb-ee3f-4439-90c2-acf340c7d8e7@suse.com>
Message-ID: <alpine.DEB.2.21.2011191310210.11739@sstabellini-ThinkPad-T480s>
References: <20201118005051.26115-1-sstabellini@kernel.org> <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com> <alpine.DEB.2.21.2011181241310.11739@sstabellini-ThinkPad-T480s> <3e8c03eb-ee3f-4439-90c2-acf340c7d8e7@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Jan Beulich wrote:
> On 18.11.2020 22:00, Stefano Stabellini wrote:
> > On Wed, 18 Nov 2020, Jan Beulich wrote:
> >> On 18.11.2020 01:50, Stefano Stabellini wrote:
> >>> 1) It is not obvious that "Configure standard Xen features (expert
> >>> users)" is actually the famous EXPERT we keep talking about on xen-devel
> >>
> >> Which can be addressed by simply changing the one prompt line.
> >>
> >>> 2) It is not obvious when we need to enable EXPERT to get a specific
> >>> feature
> >>>
> >>> In particular if you want to enable ACPI support so that you can boot
> >>> Xen on an ACPI platform, you have to enable EXPERT first. But searching
> >>> through the kconfig menu it is really not clear (type '/' and "ACPI"):
> >>> nothing in the description tells you that you need to enable EXPERT to
> >>> get the option.
> >>
> >> And what causes this to be different once you switch to UNSUPPORTED?
> > 
> > Two things: firstly, it doesn't and shouldn't take an expert to enable
> > ACPI support, even if ACPI support is experimental. So calling it
> > UNSUPPORTED helps a lot. This is particularly relevant to the ARM Kconfig
> > options changed by this patch. Secondly, this patch is adding
> > "(UNSUPPORTED)" in the oneline prompt so that it becomes easy to match
> > it with the option you need to enable.
> 
> There's redundancy here then, which I think is in almost all cases
> better to avoid. That's first and foremost because the two places
> can go out of sync. Therefore, if the primary thing is to help
> "make menuconfig" (which I admit I don't normally use, as it's
> nothing that gets invoked implicitly by the build process afaict,
> i.e. one has to actively invoke it), perhaps we should enhance
> kconfig to attach at least a pre-determined subset of labels to
> the prompts automatically?
> 
> And second, also in reply to what you've been saying further down,
> perhaps we would better go with a hierarchy of controls here, e.g.
> EXPERT -> EXPERIMENTAL -> UNSUPPORTED?

Both of these are good ideas worth discussing; somebody else made a
similar suggestion some time back. I was already thinking this could be a
great candidate for one of the first "working groups" as defined by
George during the last community call, because the topic is not purely
technical: a working group could help us reach alignment and make
progress faster. We can propose it to George when he is back.

However, I don't think we need the working group to make progress on
this limited patch that only addresses the lowest hanging fruit.

I'd like to suggest making progress on this patch in its current form,
and in parallel starting a longer-term discussion on how to do something
like you suggested above.
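
For reference, the hierarchy you proposed could look roughly like the
following Kconfig sketch (hypothetical, not part of this patch; it assumes
each level gates the next, i.e. UNSUPPORTED is the most restrictive):

```kconfig
config EXPERT
	bool "Configure EXPERT features"
	help
	  Options intended for expert users, e.g. advanced debugging.

config EXPERIMENTAL
	bool "Configure EXPERIMENTAL features"
	depends on EXPERT
	help
	  Features still under active development.

config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	depends on EXPERIMENTAL
	help
	  Features with no security support, as defined by SUPPORT.md.
```

The dependency direction (which level implies which) is exactly the kind
of question a working group could settle.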


> >>> So this patch makes things easier by doing two things:
> >>>
> >>> - introduce a new kconfig option UNSUPPORTED which is clearly to enable
> >>>   UNSUPPORTED features as defined by SUPPORT.md
> >>>
> >>> - change EXPERT options to UNSUPPORTED where it makes sense: keep
> >>>   depending on EXPERT for features made for experts
> >>>
> >>> - tag unsupported features by adding (UNSUPPORTED) to the one-line
> >>>   description
> >>
> >> I am, btw, not fully convinced of the need for this redundancy. Wouldn't
> >> it be enough to have just EXPERT as a setting, but varying (<reason>)
> >> tokens in the prompt text?
> > 
> > I don't think so, for the same reasons written above: EXPERT should not
> > be gating things like ACPI.
> 
> Different views are possible here, I suppose. Turning on anything
> that's unsupported requires people to know what they're doing (and
> be ready to pick up the pieces themselves). I'd consider this to
> fall under "expert".

For some features it is as you wrote, but it is not true in all cases.
Take ACPI as an example: it doesn't take an expert to enable it, and if
it breaks you are no worse off than if you hadn't enabled it, because
without it you cannot boot on the platform at all.

Also, that's not how EXPERT is commonly used in other projects.
Typically EXPERT is used to enable advanced debugging features, and I
have recently been told that the way we use it in Xen is confusing.

These are the two things that I would like to fix as soon as possible.


> > Moreover, the advantage of the tag in the
> > oneline prompt is that you can search for an option and figure out that
> > you need to enable UNSUPPORTED. It doesn't work if we use a different
> > tag. Just to get the idea, try to do "make menuconfig" and search for
> > "ARGO" with '/': you'll see "(UNSUPPORTED)". Then, if you search for
> > "UNSUPPORTED" you can find what you need to enable.
> 
> Implying that textual representation and Kconfig option name match,
> see above. Even a simple spelling mistake would break this model.

True, a spelling mistake would cause problems, but we do reviews, and
we can make sure there are no spelling mistakes in this patch.

 
> >>> --- a/xen/Kconfig
> >>> +++ b/xen/Kconfig
> >>> @@ -34,8 +34,17 @@ config DEFCONFIG_LIST
> >>>  	option defconfig_list
> >>>  	default ARCH_DEFCONFIG
> >>>  
> >>> +config UNSUPPORTED
> >>> +	bool "Configure UNSUPPORTED features"
> >>> +	help
> >>> +	  This option allows unsupported Xen options to be enabled, which
> >>
> >> I'd recommend against "enabled" - a control may also be there to allow
> >> disabling something.
> > 
> > I can change that.
> > 
> > 
> >>> +	  includes non-security-supported, experimental, and tech preview
> >>> +	  features as defined by SUPPORT.md. Xen binaries built with this
> >>> +	  option enabled are not security supported.
> >>
> >> Overall I'm a little afraid of possible inverse implications: Anything
> >> _not_ dependent upon this option (and in particular anything not
> >> dependent upon any Kconfig control) may be considered supported then.
> >>
> >> Also the last sentence is already present for EXPERT, 
> > 
> > I am happy to rephrase it. What about:
> > 
> > "This option allows certain unsupported Xen options to be changed, which
> > includes non-security-supported, experimental, and tech preview features
> > as defined by SUPPORT.md."
> 
> Sounds better to me.

I'll use it


> >>> --- a/xen/common/Kconfig
> >>> +++ b/xen/common/Kconfig
> >>> @@ -151,7 +151,7 @@ config KEXEC
> >>>  	  If unsure, say Y.
> >>>  
> >>>  config EFI_SET_VIRTUAL_ADDRESS_MAP
> >>> -    bool "EFI: call SetVirtualAddressMap()" if EXPERT
> >>> +    bool "EFI: call SetVirtualAddressMap() (UNSUPPORTED)" if UNSUPPORTED
> >>
> >> I have to admit I'm pretty unsure about what to do with this one.
> > 
> > Yeah, similarly to XEN_SHSTK, I don't have an opinion here either.  I am
> > happy to change it or leave it as.
> 
> I guess at least for the first cut I'd like to ask to just leave it
> alone.

OK


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 23:38:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 23:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31452.61823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kftV6-00024X-8t; Thu, 19 Nov 2020 23:38:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31452.61823; Thu, 19 Nov 2020 23:38:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kftV6-00024Q-4v; Thu, 19 Nov 2020 23:38:04 +0000
Received: by outflank-mailman (input) for mailman id 31452;
 Thu, 19 Nov 2020 23:38:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IlxA=EZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kftV4-00024L-SG
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 23:38:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8b3310a-0b1b-4129-bcb7-866b2abf2369;
 Thu, 19 Nov 2020 23:38:01 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6101C208FE;
 Thu, 19 Nov 2020 23:38:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605829081;
	bh=ZxS7x7R7bNzKe7y4mPvW/YBPlDsE4JAnLjfXtaOBjC8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=p9jFgobEj00jE/nRl1vFjwUKu+Iq6Vo9WOWOFR17i6RwhHvOZTxBBJmNVRivUl51G
	 UadSYOGLRw5+3G30lzU7x/H7QN7138ZDDzuKQ6Tys0xgIlI4V3Y9PExDNdQdo4KTuG
	 ZnSVSmItONbdcj41y6V2KKew8rVBUAhZlqqayyIs=
Date: Thu, 19 Nov 2020 15:37:54 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
In-Reply-To: <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
Message-ID: <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
References: <cover.1605527997.git.rahul.singh@arm.com> <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com> <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org> <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com> <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com> <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org> <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-941843809-1605828939=:7979"
Content-ID: <alpine.DEB.2.21.2011191536040.7979@sstabellini-ThinkPad-T480s>


On Thu, 19 Nov 2020, Rahul Singh wrote:
> > On 19/11/2020 09:53, Jan Beulich wrote:
> >> On 19.11.2020 10:21, Julien Grall wrote:
> >>> Hi Jan,
> >>> 
> >>> On 19/11/2020 09:05, Jan Beulich wrote:
> >>>> On 18.11.2020 16:50, Julien Grall wrote:
> >>>>> On 16/11/2020 12:25, Rahul Singh wrote:
> >>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
> >>>>>> is enabled for ARM, compilation error is observed for ARM architecture
> >>>>>> because ARM platforms do not have full PCI support available.
> >>>>>   >
> >>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
> >>>>>> ns16550 PCI for X86.
> >>>>>> 
> >>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
> >>>>>> disabled by default, once we have proper support for NS16550 PCI for
> >>>>>> ARM we can enable it.
> >>>>>> 
> >>>>>> No functional change.
> >>>>> 
> >>>>> NIT: I would say "No functional change intended" to make clear this is
> >>>>> an expectation and hopefully will be correct :).
> >>>>> 
> >>>>> Regarding the commit message itself, I would suggest the following to
> >>>>> address Jan's concern:
> >>>> 
> >>>> While indeed this is a much better description, I continue to think
> >>>> that the proposed Kconfig option is undesirable to have.
> >>> 
> >>> I am yet to see an argument into why we should keep the PCI code
> >>> compiled on Arm when there will be no-use....
> >> Well, see my patch suppressing building of quite a part of it.
> > 
> > I will let Rahul figuring out whether your patch series is sufficient to fix compilation issues (this is what matters right now).
> 
> I just checked the compilation errors for ARM after enabling HAS_PCI on ARM. I am observing the same compilation errors that I observed previously. 
> There are two new errors related to struct uart_config and struct uart_param, as those structs are defined globally but only used under X86 flags.
> 
> At top level:
> ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
>  static const struct ns16550_config __initconst uart_config[] =
>                                                 ^~~~~~~~~~~
> ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
>  static const struct ns16550_config_param __initconst uart_param[] = { 
> 
> 
> > 
> >>>> Either,
> >>>> following the patch I've just sent, truly x86-specific things (at
> >>>> least as far as current state goes - if any of this was to be
> >>>> re-used by a future port, suitable further abstraction may be
> >>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
> >>>> hooks), or the HAS_PCI_MSI proposal would at least want further
> >>>> investigating as to its feasibility to address the issues at hand.
> >>> 
> >>> I would be happy with CONFIG_X86, despite the fact that this is only
> >>> deferring the problem.
> >>> 
> >>> Regarding HAS_PCI_MSI, I don't really see the point of introducing given
> >>> that we are not going to use NS16550 PCI on Arm in the forseeable
> >>> future.
> >> And I continue to fail to see what would guarantee this: As soon
> >> as you can plug in such a card into an Arm system, people will
> >> want to be able use it. That's why we had to add support for it
> >> on x86, after all.
> > 
> > Well, plug-in PCI cards on Arm have been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI support.
> > 
> > This is probably because an SBSA compliant server should always provide an SBSA UART (a cut-down version of the PL011). So why would one bother to lose a PCI slot for yet another UART?
> > 
> >> >> So why do we need a finer grained Kconfig?
> >> Because most of the involved code is indeed MSI-related?
> > 
> > Possibly, yet it would not be necessary if we don't want NS16550 PCI support...
> 
> To fix the compilation errors on ARM, as per the discussion, there are the options below; please suggest which one to use to proceed further.
> 
> 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config option. This also helps non-x86 architectures in the future avoid the compilation errors 
> we are observing now when HAS_PCI is enabled.
> 
> 2. Guard the remaining x86-specific code with CONFIG_X86 and introduce the new CONFIG_HAS_PCI_MSI option to fix the MSI-related compilation errors. 
> Once we have proper support for MSI and PCI for ARM (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig), I am not sure if NS16550 PCI will work out of the box on ARM. In that case, we might need to come back again to fix the NS16550 driver.


It doesn't matter too much to me; let's just choose one option so that
you get unblocked soon.

It looks like Jan prefers option 2) and both Julien and I are OK with
it. So let's do 2). Jan, please confirm too :-)


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 23:44:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 23:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31461.61841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kftbR-0003EL-4k; Thu, 19 Nov 2020 23:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31461.61841; Thu, 19 Nov 2020 23:44:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kftbR-0003EE-1h; Thu, 19 Nov 2020 23:44:37 +0000
Received: by outflank-mailman (input) for mailman id 31461;
 Thu, 19 Nov 2020 23:44:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IlxA=EZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kftbP-0003E9-Gw
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 23:44:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d05e92b6-e411-4dea-9e7a-23ead6b593c9;
 Thu, 19 Nov 2020 23:44:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B1E8422227;
 Thu, 19 Nov 2020 23:44:32 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IlxA=EZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kftbP-0003E9-Gw
	for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 23:44:35 +0000
X-Inumbo-ID: d05e92b6-e411-4dea-9e7a-23ead6b593c9
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d05e92b6-e411-4dea-9e7a-23ead6b593c9;
	Thu, 19 Nov 2020 23:44:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id B1E8422227;
	Thu, 19 Nov 2020 23:44:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605829473;
	bh=J+rHZlNXQ50MPLCu9PF3eq71Es79PZVFv2lVUS6TJ9w=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jPxP5yu6049nF6ofeLzSBHuqsHgbxkcaDUvfOOwqyBAnUqvj5JRRyVwO3q3ahpsDw
	 TBGjjzOAIL/V+Wje8fUxFUn7rcPVSt1V0PEWVLXDDV6Yecy4o9KTJUYCFpGgP+90l4
	 h+jo30K6VMHBLrjdY49Bpvuc62IjX/7pyBzEwjAg=
Date: Thu, 19 Nov 2020 15:44:31 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/irq: Propagate the error from init_one_desc_irq()
 in init_irq_data()
In-Reply-To: <20201119145434.28065-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2011191542200.7979@sstabellini-ThinkPad-T480s>
References: <20201119145434.28065-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> init_one_desc_irq() can return an error if it is unable to allocate
> memory. While this is unlikely to happen during boot (called from
> init_irq_data()), it is better to harden the code by propagating the
> return value.
> 
> Spotted by coverity.
> 
> CID: 106529
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Hi Julien,

Thanks for the patch. I was about to commit it when I realized there is
one more caller: xen/arch/arm/irq.c:init_local_irq_data

Should we change that too, to check for errors in the return value?


> ---
>  xen/arch/arm/irq.c | 7 ++++++-
>  xen/arch/x86/irq.c | 7 ++++++-
>  2 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3877657a5277..279d221a2b85 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -88,7 +88,12 @@ static int __init init_irq_data(void)
>      for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>      {
>          struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc;
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
> +
>          desc->irq = irq;
>          desc->action  = NULL;
>      }
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 45966947919e..3ebd684415ac 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -428,9 +428,14 @@ int __init init_irq_data(void)
>  
>      for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>      {
> +        int rc;
> +
>          desc = irq_to_desc(irq);
>          desc->irq = irq;
> -        init_one_irq_desc(desc);
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
>      }
>      for ( ; irq < nr_irqs; irq++ )
>          irq_to_desc(irq)->irq = irq;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 23:49:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 23:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31468.61852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kftfu-0003Pw-Of; Thu, 19 Nov 2020 23:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31468.61852; Thu, 19 Nov 2020 23:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kftfu-0003Pp-LW; Thu, 19 Nov 2020 23:49:14 +0000
Received: by outflank-mailman (input) for mailman id 31468;
 Thu, 19 Nov 2020 23:49:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IlxA=EZ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kftft-0003Pk-TT
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 23:49:13 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86de810e-2931-4cb9-9a53-5fd0f9de4ced;
 Thu, 19 Nov 2020 23:49:13 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9997C22227;
 Thu, 19 Nov 2020 23:49:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605829752;
	bh=G6ukbbkpPda5QAWWmXouoiYMpJiQ9WlmjD1roz4dw+4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=2cIUpyujydy1QD3z4jrR4VNhpuvCf5qNBUlGXCFN+7Vp9Ts6Cm2iPZMsF/34uzkO4
	 QE8exuzAVg/FyT/eE+zEvd2XReJZA/8L5DA6EfTB13hDILp3YjOMS1nsyRcSyFs4to
	 hJ+MHwCezz61vqXK+bOvpfG6rjBPEVamAZCUL7zA=
Date: Thu, 19 Nov 2020 15:49:10 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    "alex.bennee@linaro.org" <alex.bennee@linaro.org>, 
    Andre Przywara <Andre.Przywara@arm.com>, Rahul Singh <Rahul.Singh@arm.com>, 
    Julien Grall <Julien.Grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v3 1/2] xen/arm: gic: acpi: Use the correct length for
 the GICC structure
In-Reply-To: <1dfe3afc-55de-77ec-1f21-448c193909b4@xen.org>
Message-ID: <alpine.DEB.2.21.2011191548450.7979@sstabellini-ThinkPad-T480s>
References: <20201119121347.27139-1-julien@xen.org> <20201119121347.27139-2-julien@xen.org> <890D41B9-D3F3-498F-89E4-8BB997B45D6F@arm.com> <1dfe3afc-55de-77ec-1f21-448c193909b4@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-725314894-1605829752=:7979"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-725314894-1605829752=:7979
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 19 Nov 2020, Julien Grall wrote:
> On 19/11/2020 15:22, Bertrand Marquis wrote:
> > Hi Julien,
> 
> Hi Bertrand,
> 
> > 
> > > On 19 Nov 2020, at 12:13, Julien Grall <julien@xen.org> wrote:
> > > 
> > > From: Julien Grall <julien.grall@arm.com>
> > > 
> > > The length of the GICC structure in the MADT ACPI table differs between
> > > version 5.1 and 6.0, although there are no other relevant differences.
> > > 
> > > Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
> > > overcome this issue.
> > > 
> > 
> > The series is breaking the build on arm64 if ACPI is not activated:
> > arch/arm/gic.c:456: undefined reference to `acpi_gbl_FADT'
> 
> :(. Sorry for that. I usually test with and without ACPI enabled but forgot to
> do so for this series.

I just wanted to add that I checked, and it looks like you converted all
the places now.
--8323329-725314894-1605829752=:7979--


From xen-devel-bounces@lists.xenproject.org Thu Nov 19 23:50:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 19 Nov 2020 23:50:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31474.61864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfthC-0004Rf-3q; Thu, 19 Nov 2020 23:50:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31474.61864; Thu, 19 Nov 2020 23:50:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfthC-0004RX-0Z; Thu, 19 Nov 2020 23:50:34 +0000
Received: by outflank-mailman (input) for mailman id 31474;
 Thu, 19 Nov 2020 23:50:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsEV=EZ=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1kfthB-0004RQ-6B
 for xen-devel@lists.xenproject.org; Thu, 19 Nov 2020 23:50:33 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc716463-d5a1-41e8-841b-32580124494d;
 Thu, 19 Nov 2020 23:50:31 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id a3so8260574wmb.5
 for <xen-devel@lists.xenproject.org>; Thu, 19 Nov 2020 15:50:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=4FWCyd3ccPYRmK2YYFztu4XdLvsi8+owiJPDmOdZvQ4=;
        b=JrKnPcY1/seX2F7juMKFPjxc6Azbp4s+w7psIumGmFTWSOe5Wvf2ECrJSE2WEN5TZE
         ZOBAkqqJd0aiTjck3rGyUsaNzYwZT0Mdyu6/R1ZMYBOEkL2n1VwGSqaU7UO6doe2tHUW
         notR5lp70c/d98Q/GHxqUADu1SBUO9TFSZtdBrcv2eS7F6VDIDk9qpNc4jwRaCSc6B89
         VKfCI3ywbbGcQ8sDWZ2zTxooe81NyHwitaLajGqHpovCj60afxLGJH1qvhHqPHT/XRjv
         IC5cLzUuatsJfdNW0v/RR0mkqzqRvml/hbyqJaO7S64+MTg2DNQvZG0onTeU6dIDNvN8
         ugXA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=4FWCyd3ccPYRmK2YYFztu4XdLvsi8+owiJPDmOdZvQ4=;
        b=mW6BGTk1ZCvoXEr9dnsxvOfACjOvOhsndWr2QNt+ubXNXGXWMgwwE301zSm/p353ft
         T/DJGnBt0Zh2n7Nb16RCJRHpb5/MjHXTtBbsyZYY//At+Nw0dLTskGvMpRr6SU8k7YaE
         CKYlC+Xy6RzvXC8tXJ3sW94Sgl+xoxnIKUMqonxUoq9anqDpgeNm1TZ3AEgOKKmu1gMZ
         AKiefjEBG+A/zal6KpMx+zzoBsgw9EIe6vzzWbxHKNtcGhsK/XnCOLIw6vWF3lu7d6+7
         LhWYrybxGKxCYhJfsfvW4qWV7Sruw3jlt6vFk6J00pvKnIOY5TDdflrgZh08Q6gQnMZ9
         xiLQ==
X-Gm-Message-State: AOAM533tvO81Z8saFDtfq4dhxFgi3qPo+0PUcToMygkJNhXEUyZtVwfZ
	lamzWQ7xR8Cwa1OXUfthMrfO6pt4ArxZvKeXB6Y=
X-Google-Smtp-Source: ABdhPJzI3g+FgivQaqXhoNN0gXFhCiwql+RPd1td7iWR9u3foX9Bu+rubegU7SAKM4CarnUbPSSfUUXSaj/iLOXHRIs=
X-Received: by 2002:a1c:98c7:: with SMTP id a190mr6582103wme.7.1605829830721;
 Thu, 19 Nov 2020 15:50:30 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605527997.git.rahul.singh@arm.com> <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org> <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org> <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org> <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 19 Nov 2020 23:50:19 +0000
Message-ID: <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>, Jan Beulich <jbeulich@suse.com>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000a143cd05b47e6441"

--000000000000a143cd05b47e6441
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 19 Nov 2020, 23:38 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Thu, 19 Nov 2020, Rahul Singh wrote:
> > > On 19/11/2020 09:53, Jan Beulich wrote:
> > >> On 19.11.2020 10:21, Julien Grall wrote:
> > >>> Hi Jan,
> > >>>
> > >>> On 19/11/2020 09:05, Jan Beulich wrote:
> > >>>> On 18.11.2020 16:50, Julien Grall wrote:
> > >>>>> On 16/11/2020 12:25, Rahul Singh wrote:
> > >>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When
> HAS_PCI
> > >>>>>> is enabled for ARM, compilation error is observed for ARM
> architecture
> > >>>>>> because ARM platforms do not have full PCI support available.
> > >>>>>   >
> > >>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
> > >>>>>> ns16550 PCI for X86.
> > >>>>>>
> > >>>>>> For X86 platforms it is enabled by default. For ARM platforms it
> is
> > >>>>>> disabled by default, once we have proper support for NS16550 PCI
> for
> > >>>>>> ARM we can enable it.
> > >>>>>>
> > >>>>>> No functional change.
> > >>>>>
> > >>>>> NIT: I would say "No functional change intended" to make clear
> this is
> > >>>>> an expectation and hopefully will be correct :).
> > >>>>>
> > >>>>> Regarding the commit message itself, I would suggest the following
> to
> > >>>>> address Jan's concern:
> > >>>>
> > >>>> While indeed this is a much better description, I continue to think
> > >>>> that the proposed Kconfig option is undesirable to have.
> > >>>
> > >>> I am yet to see an argument into why we should keep the PCI code
> > >>> compiled on Arm when there will be no-use....
> > >> Well, see my patch suppressing building of quite a part of it.
> > >
> > > I will let Rahul figuring out whether your patch series is sufficient
> to fix compilation issues (this is what matters right now).
> >
> > I just checked the compilation error for ARM after enabling the HAS_PCI
> on ARM. I am observing the same compilation error what I observed
> previously.
> > There are two new errors related to struct uart_config and struct
> part_param as those struct defined globally but used under X86 flags.
> >
> > At top level:
> > ns16550.c:179:48: error: ‘uart_config’ defined but not used
> [-Werror=unused-const-variable=]
> >  static const struct ns16550_config __initconst uart_config[] =
> >                                                 ^~~~~~~~~~~
> > ns16550.c:104:54: error: ‘uart_param’ defined but not used
> [-Werror=unused-const-variable=]
> >  static const struct ns16550_config_param __initconst uart_param[] = {
> >
> >
> > >
> > >>>> Either,
> > >>>> following the patch I've just sent, truly x86-specific things (at
> > >>>> least as far as current state goes - if any of this was to be
> > >>>> re-used by a future port, suitable further abstraction may be
> > >>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
> > >>>> hooks), or the HAS_PCI_MSI proposal would at least want further
> > >>>> investigating as to its feasibility to address the issues at hand.
> > >>>
> > >>> I would be happy with CONFIG_X86, despite the fact that this is only
> > >>> deferring the problem.
> > >>>
> > >>> Regarding HAS_PCI_MSI, I don't really see the point of introducing
> given
> > >>> that we are not going to use NS16550 PCI on Arm in the forseeable
> > >>> future.
> > >> And I continue to fail to see what would guarantee this: As soon
> > >> as you can plug in such a card into an Arm system, people will
> > >> want to be able use it. That's why we had to add support for it
> > >> on x86, after all.
> > >
> > > Well, plug-in PCI cards on Arm has been available for quite a while...
> Yet I haven't heard anyone asking for NS16550 PCI support.
> > >
> > > This is probably because SBSA compliant server should always provide
> an SBSA UART (a cut-down version of the PL011). So why would bother to lose
> a PCI slot for yet another UART?
> > >
> > >> >> So why do we need a finer graine Kconfig?
> > >> Because most of the involved code is indeed MSI-related?
> > >
> > > Possibly, yet it would not be necessary if we don't want NS16550 PCI
> support...
> >
> > To fix compilation error on ARM as per the discussion there are below
> options please suggest which one to use to proceed further.
> >
> > 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config options. This
> helps also non-x86 architecture in the future not to have compilation error
> > what we are observing now when HAS_PCI is enabled.
> >
> > 2. Guard the remaining x86 specific code with CONFIG_X86 and introduce
> the new CONFIG_HAS_PCI_MSI options to fix the MSI related compilation
> error.
> > Once we have proper support for MSI and PCI for ARM  (HAS_PCI_MSI and
> HAS_PCI enabled for ARM in Kconfig ) I am not sure if NS16550 PCI will work
> out of the box on ARM .In that case, we might need to come back again to
> fix NS16550 driver.
>
>
> It doesn't matter too much to me, let's just choose one option so that you
> get unblocked soon.
>
> It looks like Jan prefers option 2) and both Julien and I are OK with
> it. So let's do 2). Jan, please confirm too :-)


Please don't put words in my mouth... I think introducing HAS_PCI_MSI is
short sighted.

There are no clear benefits of it when NS16550 PCI support is not going to
be enabled in the foreseeable future.

I would be OK with moving everything under CONFIG_X86. IMHO this is still
shortsighted, but at least we don't introduce a config that's not going to
help Arm or any other architecture to completely disable PCI support in
NS16550.

Cheers,

--000000000000a143cd05b47e6441--


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 00:14:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 00:14:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31481.61877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfu4U-0007NP-5l; Fri, 20 Nov 2020 00:14:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31481.61877; Fri, 20 Nov 2020 00:14:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfu4U-0007NI-2e; Fri, 20 Nov 2020 00:14:38 +0000
Received: by outflank-mailman (input) for mailman id 31481;
 Fri, 20 Nov 2020 00:14:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Wlp=E2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfu4S-0007ND-4k
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 00:14:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 031f1bda-85f4-4535-b2de-b55ac261d0ad;
 Fri, 20 Nov 2020 00:14:35 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0FAD922242;
 Fri, 20 Nov 2020 00:14:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605831274;
	bh=iEczodYKulYxzb4UIyLUtY0GNSopx8LbgoVZObH8Fgs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rXFkwJ99VpK1ZrnQ9D2TJAw/neQq51S5dIIPsTTt9mpbkzCaLHTQbJgsigj/0gWSS
	 AvXebDLUQCWRjYap158Xgelu/Cy/fHXfgJ0OUbFGIIWI2fayTY+hqniFoZZOHJqc+B
	 HCbawQ2zXImjkotR1P4KXKXQM0d9Gp7vIdAZpaQc=
Date: Thu, 19 Nov 2020 16:14:33 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Rahul Singh <Rahul.Singh@arm.com>, Jan Beulich <jbeulich@suse.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Wei Liu <wl@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
In-Reply-To: <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
References: <cover.1605527997.git.rahul.singh@arm.com> <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com> <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org> <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com> <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com> <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org> <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com> <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1647819515-1605829983=:7979"
Content-ID: <alpine.DEB.2.21.2011191553220.7979@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1647819515-1605829983=:7979
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2011191553221.7979@sstabellini-ThinkPad-T480s>

On Thu, 19 Nov 2020, Julien Grall wrote:
> On Thu, 19 Nov 2020, 23:38 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>       On Thu, 19 Nov 2020, Rahul Singh wrote:
>       > > On 19/11/2020 09:53, Jan Beulich wrote:
>       > >> On 19.11.2020 10:21, Julien Grall wrote:
>       > >>> Hi Jan,
>       > >>>
>       > >>> On 19/11/2020 09:05, Jan Beulich wrote:
>       > >>>> On 18.11.2020 16:50, Julien Grall wrote:
>       > >>>>> On 16/11/2020 12:25, Rahul Singh wrote:
>       > >>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>       > >>>>>> is enabled for ARM, compilation error is observed for ARM architecture
>       > >>>>>> because ARM platforms do not have full PCI support available.
>       > >>>>>   >
>       > >>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
>       > >>>>>> ns16550 PCI for X86.
>       > >>>>>>
>       > >>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>       > >>>>>> disabled by default, once we have proper support for NS16550 PCI for
>       > >>>>>> ARM we can enable it.
>       > >>>>>>
>       > >>>>>> No functional change.
>       > >>>>>
>       > >>>>> NIT: I would say "No functional change intended" to make clear this is
>       > >>>>> an expectation and hopefully will be correct :).
>       > >>>>>
>       > >>>>> Regarding the commit message itself, I would suggest the following to
>       > >>>>> address Jan's concern:
>       > >>>>
>       > >>>> While indeed this is a much better description, I continue to think
>       > >>>> that the proposed Kconfig option is undesirable to have.
>       > >>>
>       > >>> I am yet to see an argument into why we should keep the PCI code
>       > >>> compiled on Arm when there will be no-use....
>       > >> Well, see my patch suppressing building of quite a part of it.
>       > >
>       > > I will let Rahul figuring out whether your patch series is sufficient to fix compilation issues (this is what matters right
>       now).
>       >
>       > I just checked the compilation error for ARM after enabling the HAS_PCI on ARM. I am observing the same compilation error
>       what I observed previously.
>       > There are two new errors related to struct uart_config and struct part_param as those struct defined globally but used under
>       X86 flags.
>       >
>       > At top level:
>       > ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
>       >  static const struct ns16550_config __initconst uart_config[] =
>       >                                                 ^~~~~~~~~~~
>       > ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
>       >  static const struct ns16550_config_param __initconst uart_param[] = {
>       >
>       >
>       > >
>       > >>>> Either,
>       > >>>> following the patch I've just sent, truly x86-specific things (at
>       > >>>> least as far as current state goes - if any of this was to be
>       > >>>> re-used by a future port, suitable further abstraction may be
>       > >>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>       > >>>> hooks), or the HAS_PCI_MSI proposal would at least want further
>       > >>>> investigating as to its feasibility to address the issues at hand.
>       > >>>
>       > >>> I would be happy with CONFIG_X86, despite the fact that this is only
>       > >>> deferring the problem.
>       > >>>
>       > >>> Regarding HAS_PCI_MSI, I don't really see the point of introducing given
>       > >>> that we are not going to use NS16550 PCI on Arm in the forseeable
>       > >>> future.
>       > >> And I continue to fail to see what would guarantee this: As soon
>       > >> as you can plug in such a card into an Arm system, people will
>       > >> want to be able use it. That's why we had to add support for it
>       > >> on x86, after all.
>       > >
>       > > Well, plug-in PCI cards on Arm have been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI
>       support.
>       > >
>       > > This is probably because SBSA-compliant servers should always provide an SBSA UART (a cut-down version of the PL011). So why
>       would anyone bother to lose a PCI slot for yet another UART?
>       > >
>       > >> >> So why do we need a finer-grained Kconfig?
>       > >> Because most of the involved code is indeed MSI-related?
>       > >
>       > > Possibly, yet it would not be necessary if we don't want NS16550 PCI support...
>       >
>       > To fix the compilation errors on ARM, as per the discussion, there are the options below; please suggest which one to use to proceed
>       further.
>       >
>       > 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config option. This also helps non-x86 architectures in the future avoid
>       the compilation errors
>       > we are observing now when HAS_PCI is enabled.
>       >
>       > 2. Guard the remaining x86-specific code with CONFIG_X86 and introduce the new CONFIG_HAS_PCI_MSI option to fix the
>       MSI-related compilation errors.
>       > Once we have proper support for MSI and PCI for ARM (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig), I am not sure
>       NS16550 PCI will work out of the box on ARM. In that case, we might need to come back again to fix the NS16550 driver.
> 
> 
>       It doesn't matter too much to me, let's just choose one option so that you
>       get unblocked soon.
> 
>       It looks like Jan prefers option 2) and both Julien and I are OK with
>       it. So let's do 2). Jan, please confirm too :-)
> 
> 
> Please don't put words in my mouth... 

Sorry Julien, I misinterpreted one of your previous comments. Sometimes
it is difficult to do things by email. It is good that you clarified, as
my goal was to reach an agreement.


> I think introducing HAS_PCI_MSI is short sighted.
> 
> There are no clear benefits to it when NS16550 PCI support is not going to be enabled in the foreseeable future.

I agree


> I would be ok with moving everything under CONFIG_X86. IMHO this is still shortsighted, but at least we don't introduce a config that's not
> going to help Arm or any other architecture to completely disable PCI support in NS16550.

So you are suggesting a new option:

3. Guard the remaining x86-specific code *and* the code causing the
MSI-related compilation errors with CONFIG_X86

Is that right?


My preference is actually option 1), but this series is already at v3 and
I don't think this decision is as important as unblocking
Rahul, so I am OK with the other alternatives too.

I tend to agree with you that 3) is better than 2) for the reasons you
wrote above.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 00:27:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 00:27:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31488.61889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfuGj-000087-Bs; Fri, 20 Nov 2020 00:27:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31488.61889; Fri, 20 Nov 2020 00:27:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfuGj-000080-8T; Fri, 20 Nov 2020 00:27:17 +0000
Received: by outflank-mailman (input) for mailman id 31488;
 Fri, 20 Nov 2020 00:27:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Wlp=E2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfuGi-00007v-8G
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 00:27:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83b038f4-0cb1-4dfc-a735-01efe496ab29;
 Fri, 20 Nov 2020 00:27:14 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A3F8422242;
 Fri, 20 Nov 2020 00:27:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605832033;
	bh=ymS3RouGEHQH32BozXjMWnkiw4MP9yjGz+uxF9BSF50=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qLDEMC+r1OICRjXrbQLRMf+IuZnvhxRuAIMrvBWHGwPUVxjkjqaWaECRzYsDx+Xq9
	 ng/LNhp9kCpS5IUTfA7afHjwSH+bg0C7mAea4S++29dTFlVjVAM/q7pYcA1zRmtPcS
	 d7M/uRNmESvPPlVzkSycZX430Y0v9CzBAxM3T7Ts=
Date: Thu, 19 Nov 2020 16:27:12 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Julien Grall <julien@xen.org>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    "alex.bennee@linaro.org" <alex.bennee@linaro.org>, 
    Andre Przywara <Andre.Przywara@arm.com>, Rahul Singh <Rahul.Singh@arm.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v4 1/3] xen/arm: gic: acpi: Guard helpers to build the
 MADT with CONFIG_ACPI
In-Reply-To: <4F6A86BB-EA0B-4F7A-A1D9-5C5C469FB220@arm.com>
Message-ID: <alpine.DEB.2.21.2011191627030.7979@sstabellini-ThinkPad-T480s>
References: <20201119170829.9923-1-julien@xen.org> <20201119170829.9923-2-julien@xen.org> <4F6A86BB-EA0B-4F7A-A1D9-5C5C469FB220@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Bertrand Marquis wrote:
> Hi Julien,
> 
> > On 19 Nov 2020, at 17:08, Julien Grall <julien@xen.org> wrote:
> > 
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific.
> > 
> > While they build fine today, this will change in a follow-up patch.
> > Rather than trying to fix the build on ACPI, it is best to avoid
> > compiling the helpers and the associated callbacks when CONFIG_ACPI=n.
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> I also tested the series on FVP without ACPI and Xen still boots Dom0 properly.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> >    Changes in v4:
> >        - Patch added
> > ---
> > xen/arch/arm/gic-v2.c     |  8 +++-----
> > xen/arch/arm/gic-v3.c     | 11 ++---------
> > xen/arch/arm/gic.c        |  2 ++
> > xen/include/asm-arm/gic.h | 10 ++++++++--
> > 4 files changed, 15 insertions(+), 16 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> > index 0f747538dbcd..581ea5ba6b2c 100644
> > --- a/xen/arch/arm/gic-v2.c
> > +++ b/xen/arch/arm/gic-v2.c
> > @@ -1114,12 +1114,12 @@ static int gicv2_iomem_deny_access(const struct domain *d)
> >     return iomem_deny_access(d, mfn, mfn + nr);
> > }
> > 
> > +#ifdef CONFIG_ACPI
> > static unsigned long gicv2_get_hwdom_extra_madt_size(const struct domain *d)
> > {
> >     return 0;
> > }
> > 
> > -#ifdef CONFIG_ACPI
> > static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
> > {
> >     struct acpi_subtable_header *header;
> > @@ -1248,10 +1248,6 @@ static void __init gicv2_acpi_init(void)
> > }
> > #else
> > static void __init gicv2_acpi_init(void) { }
> > -static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
> > -{
> > -    return 0;
> > -}
> > #endif
> > 
> > static int __init gicv2_init(void)
> > @@ -1357,8 +1353,10 @@ const static struct gic_hw_operations gicv2_ops = {
> >     .read_apr            = gicv2_read_apr,
> >     .read_pending_state  = gicv2_read_pending_state,
> >     .make_hwdom_dt_node  = gicv2_make_hwdom_dt_node,
> > +#ifdef CONFIG_ACPI
> >     .make_hwdom_madt     = gicv2_make_hwdom_madt,
> >     .get_hwdom_extra_madt_size = gicv2_get_hwdom_extra_madt_size,
> > +#endif
> >     .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
> >     .iomem_deny_access   = gicv2_iomem_deny_access,
> >     .do_LPI              = gicv2_do_LPI,
> > diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> > index 0f6cbf6224e9..2a344393a0e4 100644
> > --- a/xen/arch/arm/gic-v3.c
> > +++ b/xen/arch/arm/gic-v3.c
> > @@ -1735,15 +1735,6 @@ static void __init gicv3_acpi_init(void)
> > }
> > #else
> > static void __init gicv3_acpi_init(void) { }
> > -static int gicv3_make_hwdom_madt(const struct domain *d, u32 offset)
> > -{
> > -    return 0;
> > -}
> > -
> > -static unsigned long gicv3_get_hwdom_extra_madt_size(const struct domain *d)
> > -{
> > -    return 0;
> > -}
> > #endif
> > 
> > static bool gic_dist_supports_lpis(void)
> > @@ -1858,8 +1849,10 @@ static const struct gic_hw_operations gicv3_ops = {
> >     .read_pending_state  = gicv3_read_pending_state,
> >     .secondary_init      = gicv3_secondary_cpu_init,
> >     .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
> > +#ifdef CONFIG_ACPI
> >     .make_hwdom_madt     = gicv3_make_hwdom_madt,
> >     .get_hwdom_extra_madt_size = gicv3_get_hwdom_extra_madt_size,
> > +#endif
> >     .iomem_deny_access   = gicv3_iomem_deny_access,
> >     .do_LPI              = gicv3_do_LPI,
> > };
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index d623c57cb9fa..fe60619e99cf 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -443,6 +443,7 @@ int gic_make_hwdom_dt_node(const struct domain *d,
> >     return gic_hw_ops->make_hwdom_dt_node(d, gic, fdt);
> > }
> > 
> > +#ifdef CONFIG_ACPI
> > int gic_make_hwdom_madt(const struct domain *d, u32 offset)
> > {
> >     return gic_hw_ops->make_hwdom_madt(d, offset);
> > @@ -459,6 +460,7 @@ unsigned long gic_get_hwdom_madt_size(const struct domain *d)
> > 
> >     return madt_size;
> > }
> > +#endif
> > 
> > int gic_iomem_deny_access(const struct domain *d)
> > {
> > diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> > index ba870523bb2a..ad0f7452d005 100644
> > --- a/xen/include/asm-arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -378,12 +378,14 @@ struct gic_hw_operations {
> >     /* Create GIC node for the hardware domain */
> >     int (*make_hwdom_dt_node)(const struct domain *d,
> >                               const struct dt_device_node *gic, void *fdt);
> > +#ifdef CONFIG_ACPI
> >     /* Create MADT table for the hardware domain */
> >     int (*make_hwdom_madt)(const struct domain *d, u32 offset);
> > -    /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
> > -    int (*map_hwdom_extra_mappings)(struct domain *d);
> >     /* Query the size of hardware domain madt table */
> >     unsigned long (*get_hwdom_extra_madt_size)(const struct domain *d);
> > +#endif
> > +    /* Map extra GIC MMIO, irqs and other hw stuffs to the hardware domain. */
> > +    int (*map_hwdom_extra_mappings)(struct domain *d);
> >     /* Deny access to GIC regions */
> >     int (*iomem_deny_access)(const struct domain *d);
> >     /* Handle LPIs, which require special handling */
> > @@ -435,8 +437,12 @@ void register_gic_ops(const struct gic_hw_operations *ops);
> > int gic_make_hwdom_dt_node(const struct domain *d,
> >                            const struct dt_device_node *gic,
> >                            void *fdt);
> > +
> > +#ifdef CONFIG_ACPI
> > int gic_make_hwdom_madt(const struct domain *d, u32 offset);
> > unsigned long gic_get_hwdom_madt_size(const struct domain *d);
> > +#endif
> > +
> > int gic_map_hwdom_extra_mappings(struct domain *d);
> > int gic_iomem_deny_access(const struct domain *d);
> > 
> > -- 
> > 2.17.1
> > 
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 00:41:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 00:41:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31499.61913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfuUO-0002MD-Pa; Fri, 20 Nov 2020 00:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31499.61913; Fri, 20 Nov 2020 00:41:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfuUO-0002M6-MW; Fri, 20 Nov 2020 00:41:24 +0000
Received: by outflank-mailman (input) for mailman id 31499;
 Fri, 20 Nov 2020 00:41:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Wlp=E2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfuUN-0002M1-97
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 00:41:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee2ecc27-d0f6-4e78-8d2b-1fa6c64a921c;
 Fri, 20 Nov 2020 00:41:22 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1D66F22244;
 Fri, 20 Nov 2020 00:41:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605832881;
	bh=GavnnliBtrzqeKWl7EDe8YQ9zUGuVT+EuXBe3yvCsYM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bdC78CeOzGxNqd+CTmurgBmyVlGmtoXT0yOSXVvOp28KDY8cnsgLp7Rv1+jfng5Pt
	 WpH9TK70S4YfGsvgTmJwglRhgCMthTRRrzmwqmmwJ2+RXPf4OSyaKDc6HE7knQfOd7
	 HzTHHDDdbOANr4btC8E59AE5h03G7b6UWlgNm1dA=
Date: Thu, 19 Nov 2020 16:41:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 2/6] xen/arm: mm: Remove ; at the end of
 mm_printk()
In-Reply-To: <20201119190751.22345-3-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2011191641100.7979@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-3-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The ; at the end of mm_printk() means the following code will not build
> correctly:
> 
> if ( ... )
>     mm_printk(...);
> else
>     ...
> 
> As we treat the macro as a function, we want to remove the ; at the end
> of it.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/mm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 4dd886f7c80d..59f8a3f15fd1 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -59,7 +59,7 @@ mm_printk(const char *fmt, ...) {}
>      {                                       \
>          dprintk(XENLOG_ERR, fmt, ## args);  \
>          WARN();                             \
> -    } while (0);
> +    } while (0)
>  #endif
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 01:01:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 01:01:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31506.61925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfuoA-0004bs-En; Fri, 20 Nov 2020 01:01:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31506.61925; Fri, 20 Nov 2020 01:01:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfuoA-0004bK-BK; Fri, 20 Nov 2020 01:01:50 +0000
Received: by outflank-mailman (input) for mailman id 31506;
 Fri, 20 Nov 2020 01:01:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfuo8-0003lP-CN; Fri, 20 Nov 2020 01:01:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfuo8-0001wL-31; Fri, 20 Nov 2020 01:01:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfuo7-0004Nx-OP; Fri, 20 Nov 2020 01:01:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfuo7-0002cd-Nv; Fri, 20 Nov 2020 01:01:47 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yo1lSI0O5PmzPrBH27LUNtlmCrtJD5dsWp2Sn2Mfexk=; b=tPIp+CVRuN643N4KCX4qxO96ZC
	sRYfidwMFwCNhWuTTwFacd5NAWCmKTpuHIVXGzgv890X+f2TvP9yCHgOqZGT5JLgmmBs4c1EWwZQ+
	X3teATkQXNVYToe0J5adJxw2c2SosnZGrBaCQ0CI+oKTq7tRNn8HVqnerUyg61yFIQ/E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156878-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156878: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:guest-saverestore:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3d275bd17c7bdf5acd4e4f58fa7b8b9114bb7484
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 01:01:47 +0000

flight 156878 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156878/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 156866 pass in 156878
 test-arm64-arm64-xl           8 xen-boot         fail in 156866 pass in 156878
 test-amd64-amd64-dom0pvh-xl-intel 17 guest-saverestore     fail pass in 156866
 test-amd64-i386-libvirt      20 guest-start/debian.repeat  fail pass in 156866
 test-amd64-i386-libvirt-xsm  20 guest-start/debian.repeat  fail pass in 156866

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3d275bd17c7bdf5acd4e4f58fa7b8b9114bb7484
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   91 days
Failing since        152659  2020-08-21 14:07:39 Z   90 days  192 attempts
Testing same since   156866  2020-11-18 23:09:18 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67076 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 01:46:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 01:46:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31515.61940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfvV3-0005An-2e; Fri, 20 Nov 2020 01:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31515.61940; Fri, 20 Nov 2020 01:46:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfvV2-0005Ag-UX; Fri, 20 Nov 2020 01:46:08 +0000
Received: by outflank-mailman (input) for mailman id 31515;
 Fri, 20 Nov 2020 01:46:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Wlp=E2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfvV1-0005AZ-F2
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 01:46:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 791fc257-8a63-47dd-8298-a17fa9d82606;
 Fri, 20 Nov 2020 01:46:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 26BBA22201;
 Fri, 20 Nov 2020 01:46:05 +0000 (UTC)
X-Inumbo-ID: 791fc257-8a63-47dd-8298-a17fa9d82606
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605836765;
	bh=V5boTDEjdkYilLFmPcgOgYX+IVrjIZnVUJnGU+ojcEQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sWT2LdzUb89InejWQmmCK9q/N3hNscjmAC0rygmCJ/pc2oRzOax1LVxdYujHaUoVU
	 2Hi1tsDAT5UL7eBFf9JoOAMYbmPxjKMFiEc5CsPMGR68cpVWKjQRkdN/wDcHpbmodS
	 VLk6etECtFeSRyAAmmsSw+xYVnovP5tkn6vVxGKU=
Date: Thu, 19 Nov 2020 17:46:04 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
In-Reply-To: <20201119190751.22345-5-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-5-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment, xen_pt_update_entry() only supports mapping at level 3
> (i.e. 4KB mappings). While this is fine for most of the runtime helpers,
> the boot code will require the use of superpage mappings.
> 
> We don't want to allow superpage mapping by default as some of the
> callers may expect small mappings (i.e populate_pt_range()) or even
> expect to unmap only a part of a superpage.
> 
> To keep the code simple, a new flag _PAGE_BLOCK is introduced to
> allow the caller to enable superpage mapping.
> 
> As the code doesn't support all the combinations, xen_pt_check_entry()
> is extended to take into account the cases we don't support when
> using block mapping:
>     - Replacing a table with a mapping. This may happen if a region was
>     first mapped with a 4KB mapping and then later replaced with a 2MB
>     (or 1GB) mapping
>     - Removing/modifying a table. This may happen if a caller tries to
>     remove a region with _PAGE_BLOCK set when it was created without it
> 
> Note that the current restriction means that the caller must ensure that
> _PAGE_BLOCK is consistently set/cleared across all the updates on a
> given virtual region. This ought to be fine with the expected use-cases.
> 
> More rework will be necessary if we wanted to remove the restrictions.
> 
> Note that nr_mfns is now marked const as it is used for flushing the
> TLBs and we don't want it to be modified.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Thanks for the patch. You might want to update the Signed-off-by (even
if you haven't changed the patch).


> ---
> 
> This patch is necessary for upcoming changes in the MM code. I would
> like to remove most of the open-coding update of the page-tables as they
> are not easy to properly fix/extend. For instance, always mapping
> xenheap mapping with 1GB superpage is plain wrong because:
>     - RAM regions are not always 1GB aligned (such as on the RPi 4) and we
>     may end up mapping MMIO with cacheable attributes
>     - RAM may contain reserved regions that should not be mapped
> ---
>  xen/arch/arm/mm.c          | 87 ++++++++++++++++++++++++++++++--------
>  xen/include/asm-arm/page.h |  4 ++
>  2 files changed, 73 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 59f8a3f15fd1..af0f12b6e6d3 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1060,9 +1060,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>  }
>  
>  /* Sanity check of the entry */
> -static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
> +static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
> +                               unsigned int flags)
>  {
> -    /* Sanity check when modifying a page. */
> +    /* Sanity check when modifying an entry. */
>      if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
>      {
>          /* We don't allow modifying an invalid entry. */
> @@ -1072,6 +1073,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>              return false;
>          }
>  
> +        /* We don't allow modifying a table entry */
> +        if ( !lpae_is_mapping(entry, level) )
> +        {
> +            mm_printk("Modifying a table entry is not allowed.\n");
> +            return false;
> +        }
> +
>          /* We don't allow changing memory attributes. */
>          if ( entry.pt.ai != PAGE_AI_MASK(flags) )
>          {
> @@ -1087,7 +1095,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>              return false;
>          }
>      }
> -    /* Sanity check when inserting a page */
> +    /* Sanity check when inserting a mapping */
>      else if ( flags & _PAGE_PRESENT )
>      {
>          /* We should be here with a valid MFN. */
> @@ -1096,18 +1104,28 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>          /* We don't allow replacing any valid entry. */
>          if ( lpae_is_valid(entry) )
>          {
> -            mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> -                      mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            if ( lpae_is_mapping(entry, level) )
> +                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> +                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            else
> +                mm_printk("Trying to replace a table with a mapping.\n");
>              return false;
>          }
>      }
> -    /* Sanity check when removing a page. */
> +    /* Sanity check when removing a mapping. */
>      else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
>      {
>          /* We should be here with an invalid MFN. */
>          ASSERT(mfn_eq(mfn, INVALID_MFN));
>  
> -        /* We don't allow removing page with contiguous bit set. */
> +        /* We don't allow removing a table */
> +        if ( lpae_is_table(entry, level) )
> +        {
> +            mm_printk("Removing a table is not allowed.\n");
> +            return false;
> +        }
> +
> +        /* We don't allow removing a mapping with contiguous bit set. */
>          if ( entry.pt.contig )
>          {
>              mm_printk("Removing entry with contiguous bit set is not allowed.\n");
> @@ -1126,12 +1144,12 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>  }
>  
>  static int xen_pt_update_entry(mfn_t root, unsigned long virt,
> -                               mfn_t mfn, unsigned int flags)
> +                               mfn_t mfn, unsigned int page_order,
> +                               unsigned int flags)
>  {
>      int rc;
>      unsigned int level;
> -    /* We only support 4KB mapping (i.e level 3) for now */
> -    unsigned int target = 3;
> +    unsigned int target = 3 - (page_order / LPAE_SHIFT);

Given that page_order is not used for anything else in this function,
wouldn't it be easier to just pass the target level to
xen_pt_update_entry? Calculating target from page_order, when page_order
is otherwise unused, doesn't look like the most straightforward way to
do it.
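To make the point concrete, the arithmetic the patch performs can be
sketched as a standalone helper (the constants below are assumptions
mirroring the usual LPAE layout, not taken from this thread):

```c
#include <assert.h>

/* Assumed values mirroring the LPAE layout: each page-table level
 * resolves LPAE_SHIFT bits, and level 3 holds 4KB entries. */
#define LPAE_SHIFT    9
#define THIRD_ORDER   0                 /* 4KB mapping */
#define SECOND_ORDER  LPAE_SHIFT        /* 2MB mapping */
#define FIRST_ORDER   (2 * LPAE_SHIFT)  /* 1GB mapping */

/* The conversion done in the hunk above: a 4KB request (order 0)
 * targets level 3, a 2MB request (order 9) level 2, and a 1GB request
 * (order 18) level 1. Passing the level directly would avoid the
 * division entirely. */
static unsigned int target_level(unsigned int page_order)
{
    return 3 - (page_order / LPAE_SHIFT);
}
```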


>      lpae_t *table;
>      /*
>       * The intermediate page tables are read-only when the MFN is not valid
> @@ -1186,7 +1204,7 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>      entry = table + offsets[level];
>  
>      rc = -EINVAL;
> -    if ( !xen_pt_check_entry(*entry, mfn, flags) )
> +    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
>          goto out;
>  
>      /* If we are only populating page-table, then we are done. */
> @@ -1204,8 +1222,11 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>          {
>              pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
>  
> -            /* Third level entries set pte.pt.table = 1 */
> -            pte.pt.table = 1;
> +            /*
> +             * First and second level pages set pte.pt.table = 0, but
> +             * third level entries set pte.pt.table = 1.
> +             */
> +            pte.pt.table = (level == 3);
>          }
>          else /* We are updating the permission => Copy the current pte. */
>              pte = *entry;
> @@ -1229,11 +1250,12 @@ static DEFINE_SPINLOCK(xen_pt_lock);
>  
>  static int xen_pt_update(unsigned long virt,
>                           mfn_t mfn,
> -                         unsigned long nr_mfns,
> +                         const unsigned long nr_mfns,
>                           unsigned int flags)
>  {
>      int rc = 0;
> -    unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
> +    unsigned long vfn = paddr_to_pfn(virt);
> +    unsigned long left = nr_mfns;

Given that paddr_to_pfn is meant for physical addresses, I would rather
open-code paddr_to_pfn using PAGE_SHIFT here. Again, just a suggestion.
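i.e. something along these lines (a sketch only; PAGE_SHIFT = 12 is
assumed, and the helper name is hypothetical):

```c
#include <assert.h>

#define PAGE_SHIFT 12

/* Compute the virtual frame number with an explicit shift, rather than
 * reusing paddr_to_pfn(), which is meant for physical addresses. */
static unsigned long virt_to_vfn(unsigned long virt)
{
    return virt >> PAGE_SHIFT;
}
```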


>      /*
>       * For arm32, page-tables are different on each CPUs. Yet, they share
> @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
>  
>      spin_lock(&xen_pt_lock);
>  
> -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> +    while ( left )
>      {
> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> +        unsigned int order;
> +        unsigned long mask;
> +
> +        /*
> +         * Don't take into account the MFN when removing mapping (i.e
> +         * MFN_INVALID) to calculate the correct target order.
> +         *
> +         * XXX: Support superpage mappings if nr is not aligned to a
> +         * superpage size.

It would be good to add another sentence explaining that the checks
below are simply based on masks and rely on mfn, vfn, and also nr_mfns
being superpage-aligned. (It took me some time to figure it out.)
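For illustration, the selection logic in the hunk below behaves like
this standalone sketch (BIT() is open-coded and the order constants are
assumed to have their usual LPAE values; none of this is verbatim from
the patch):

```c
#include <assert.h>
#include <stdbool.h>

#define LPAE_SHIFT    9
#define THIRD_ORDER   0
#define SECOND_ORDER  LPAE_SHIFT
#define FIRST_ORDER   (2 * LPAE_SHIFT)
#define BIT_UL(n)     (1UL << (n))

/* Fold the MFN (when valid), the virtual frame number and the number
 * of remaining frames into one mask: a superpage order is usable only
 * when all three are aligned to it, i.e. the mask's low bits up to
 * that order are clear. */
static unsigned int pick_order(unsigned long mfn, bool mfn_valid,
                               unsigned long vfn, unsigned long left,
                               bool allow_block)
{
    unsigned long mask = (mfn_valid ? mfn : 0) | vfn | left;

    if ( !allow_block )                           /* no _PAGE_BLOCK */
        return THIRD_ORDER;
    if ( !(mask & (BIT_UL(FIRST_ORDER) - 1)) )    /* 1GB-aligned */
        return FIRST_ORDER;
    if ( !(mask & (BIT_UL(SECOND_ORDER) - 1)) )   /* 2MB-aligned */
        return SECOND_ORDER;
    return THIRD_ORDER;
}
```

Note how a single misaligned input (mfn, vfn, or the remaining count)
forces the fallback to 4KB mappings, which is exactly why the alignment
assumption deserves a comment.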


> +         */
> +        mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
> +        mask |= vfn | left;
> +
> +        /*
> +         * Always use a level 3 mapping unless the caller requests a
> +         * block mapping.
> +         */
> +        if ( likely(!(flags & _PAGE_BLOCK)) )
> +            order = THIRD_ORDER;
> +        else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) )
> +            order = FIRST_ORDER;
> +        else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) )
> +            order = SECOND_ORDER;
> +        else
> +            order = THIRD_ORDER;
> +
> +        rc = xen_pt_update_entry(root, pfn_to_paddr(vfn), mfn, order, flags);
>          if ( rc )
>              break;
>  
> +        vfn += 1U << order;
>          if ( !mfn_eq(mfn, INVALID_MFN) )
> -            mfn = mfn_add(mfn, 1);
> +            mfn = mfn_add(mfn, 1U << order);
> +
> +        left -= (1U << order);
>
>      }
>  
>      /*
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 4ea8e97247c8..de096b0968e3 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -79,6 +79,7 @@
>   * [3:4] Permission flags
>   * [5]   Page present
>   * [6]   Only populate page tables
> + * [7]   Use any mapping level (i.e. superpages are allowed)
>   */
>  #define PAGE_AI_MASK(x) ((x) & 0x7U)
>  
> @@ -92,6 +93,9 @@
>  #define _PAGE_PRESENT    (1U << 5)
>  #define _PAGE_POPULATE   (1U << 6)
>  
> +#define _PAGE_BLOCK_BIT     7
> +#define _PAGE_BLOCK         (1U << _PAGE_BLOCK_BIT)
> +
>  /*
>   * _PAGE_DEVICE and _PAGE_NORMAL are convenience defines. They are not
>   * meant to be used outside of this header.



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 01:47:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 01:47:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31521.61951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfvW0-0005H7-CR; Fri, 20 Nov 2020 01:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31521.61951; Fri, 20 Nov 2020 01:47:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfvW0-0005H0-9P; Fri, 20 Nov 2020 01:47:08 +0000
Received: by outflank-mailman (input) for mailman id 31521;
 Fri, 20 Nov 2020 01:47:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Wlp=E2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfvVz-0005Gv-3B
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 01:47:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85333ba6-9336-427c-9421-d097d67f3bba;
 Fri, 20 Nov 2020 01:47:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1901C22201;
 Fri, 20 Nov 2020 01:47:05 +0000 (UTC)
X-Inumbo-ID: 85333ba6-9336-427c-9421-d097d67f3bba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605836825;
	bh=qROvm2bLrbiiLZMKfpPsHkFHej1lCqcjjt1J3RD7VyE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sd1Gr4I/cB1aODykbJKgwlXtoA1DJY2/Ew9p4bcoux/lQgRJQwznFLcBObVSn6spt
	 gaMV07KxgNl79YW/UiLjg7q7MIi+ijai+UsEx5s2H7UkXpbeM4YY//mO7IQNhIbv18
	 J6fJ/zUsmZM8u8/EXkIvSkDLKYFyNHSvAWYgJO+A=
Date: Thu, 19 Nov 2020 17:47:04 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <julien.grall@amazon.com>
Subject: Re: [PATCH RFC 5/6] xen/arm: mm: Don't open-code Xen PT update in
 remove_early_mappings
In-Reply-To: <20201119190751.22345-6-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2011191746460.7979@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-6-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that xen_pt_update_entry() is able to deal with different mapping
> sizes, we can replace the open-coded page-table update with a call
> to modify_xen_mappings().
> 
> As the function is not meant to fail, a BUG_ON() is added to check the
> return value.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <julien.grall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/mm.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index af0f12b6e6d3..aee6d410ac4f 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -598,11 +598,11 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>  void __init remove_early_mappings(void)
>  {
> -    lpae_t pte = {0};
> -    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
> -    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
> -              pte);
> -    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
> +    int rc;
> +
> +    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
> +                             _PAGE_BLOCK);
> +    BUG_ON(rc);
>  }
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 01:54:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 01:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31529.61963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfvcz-0006Sa-3M; Fri, 20 Nov 2020 01:54:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31529.61963; Fri, 20 Nov 2020 01:54:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfvcz-0006ST-0C; Fri, 20 Nov 2020 01:54:21 +0000
Received: by outflank-mailman (input) for mailman id 31529;
 Fri, 20 Nov 2020 01:54:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9Wlp=E2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kfvcx-0006SO-6Z
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 01:54:19 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0342dc4c-752a-4641-9da8-b36e3d1b47f2;
 Fri, 20 Nov 2020 01:54:18 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6F93C2145D;
 Fri, 20 Nov 2020 01:54:17 +0000 (UTC)
X-Inumbo-ID: 0342dc4c-752a-4641-9da8-b36e3d1b47f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605837257;
	bh=xcLh2ZPbmsnC9fqokKIepZfAR6Zn/zyJxtNaUi4gDGw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=IOJiijPv248B4xGBapnoVYc7KgY2xi9G+SamDRMrD+DTqTZFMnb7jbYE83RL6zxAz
	 7P8qnP9Bi/uq93G67AOvR2Zhkhy6l7zMK2c9wPcCLre1F4aXrgosHRBKjsJjrI1ntt
	 u1isR+H6LPgZQ12QVbYG2ykFehAGocGU8g+f00bU=
Date: Thu, 19 Nov 2020 17:54:16 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 6/6] xen/arm: mm: Re-implement early_fdt_map() using
 map_pages_to_xen()
In-Reply-To: <20201119190751.22345-7-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2011191754010.7979@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-7-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 19 Nov 2020, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() calls by map_pages_to_xen() calls.
> 
> The mapping can also be marked read-only, as Xen has no business
> modifying the host Device Tree.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/mm.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index aee6d410ac4f..37ea9d5ce20a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -558,6 +558,7 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>      paddr_t offset;
>      void *fdt_virt;
>      uint32_t size;
> +    int rc;
>  
>      /*
>       * Check whether the physical FDT address is set and meets the minimum
> @@ -573,8 +574,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>      /* The FDT is mapped using 2MB superpage */
>      BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M);
>  
> -    create_mappings(xen_second, BOOT_FDT_VIRT_START, paddr_to_pfn(base_paddr),
> -                    SZ_2M >> PAGE_SHIFT, SZ_2M);
> +    rc = map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr),
> +                          SZ_2M >> PAGE_SHIFT,
> +                          PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to map the device-tree.\n");
> +
>  
>      offset = fdt_paddr % SECOND_SIZE;
>      fdt_virt = (void *)BOOT_FDT_VIRT_START + offset;
> @@ -588,9 +593,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>      if ( (offset + size) > SZ_2M )
>      {
> -        create_mappings(xen_second, BOOT_FDT_VIRT_START + SZ_2M,
> -                        paddr_to_pfn(base_paddr + SZ_2M),
> -                        SZ_2M >> PAGE_SHIFT, SZ_2M);
> +        rc = map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M,
> +                              maddr_to_mfn(base_paddr + SZ_2M),
> +                              SZ_2M >> PAGE_SHIFT,
> +                              PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
> +        if ( rc )
> +            panic("Unable to map the device-tree\n");
>      }
>  
>      return fdt_virt;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 03:39:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 03:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31537.61981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfxGj-0000u4-Mb; Fri, 20 Nov 2020 03:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31537.61981; Fri, 20 Nov 2020 03:39:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfxGj-0000tx-In; Fri, 20 Nov 2020 03:39:29 +0000
Received: by outflank-mailman (input) for mailman id 31537;
 Fri, 20 Nov 2020 03:39:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfxGj-0000tp-1s; Fri, 20 Nov 2020 03:39:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfxGi-0002Gx-Rr; Fri, 20 Nov 2020 03:39:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfxGi-0006a6-I4; Fri, 20 Nov 2020 03:39:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfxGi-0005PW-Hc; Fri, 20 Nov 2020 03:39:28 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YM8F9S8JL4boZetLksOHTfjFh6GTyvcDKWbFJUwx+io=; b=hcSdwFbpRodY9nmk1jEzG0r/Zq
	KmkjZBmoVwQSTcWihLP01RjgeWyeCYPMbHCOzo7siBTBmOE86zYqRYn9iifJYJe/EJCduF3Frsc3Q
	98a8C8fBFhAMxcA85SrlrKyTs52TnNcdBQiLmhMz+Ml01TS0xSN+zSXIIvN0KlZK5nkY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156889: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9
X-Osstest-Versions-That:
    xen=415f904254b7343a90db895134980cbb7f7f0479
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 03:39:28 +0000

flight 156889 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156889/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9
baseline version:
 xen                  415f904254b7343a90db895134980cbb7f7f0479

Last test of basis   156862  2020-11-18 19:00:27 Z    1 days
Testing same since   156889  2020-11-20 01:03:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   415f904254..0ff2c7e5b4  0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 05:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 05:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31545.61997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfyxc-0004qv-Nu; Fri, 20 Nov 2020 05:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31545.61997; Fri, 20 Nov 2020 05:27:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfyxc-0004qo-KW; Fri, 20 Nov 2020 05:27:52 +0000
Received: by outflank-mailman (input) for mailman id 31545;
 Fri, 20 Nov 2020 05:27:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfyxb-0004qg-F5; Fri, 20 Nov 2020 05:27:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfyxb-0004wW-6u; Fri, 20 Nov 2020 05:27:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kfyxa-0005Im-Tm; Fri, 20 Nov 2020 05:27:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kfyxa-0001gK-TJ; Fri, 20 Nov 2020 05:27:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Solud8hyeuKsRKxwNPptvearyML+/TtL3CjJp9O4Z34=; b=MAgXEHODmGaaDmjlLZzKrNbOzh
	ZzkFrRQ8z35WcWtbv9gLeoZhYbWrlGE5uf8/Luy6vp5AGbhxvBk2Kg9iRQGKIgPc7czKeiqwl+NEh
	M7ZnIiN/6Dy/Tc5F4ThZwTAAO+NWG0g0YHl4CbGBOHRhfql5lcGOIhc1JFuLmXX8GNOc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156882-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156882: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=415f904254b7343a90db895134980cbb7f7f0479
X-Osstest-Versions-That:
    xen=415f904254b7343a90db895134980cbb7f7f0479
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 05:27:50 +0000

flight 156882 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156882/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156867
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156867
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156867
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156867
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156867
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156867
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156867
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156867
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156867
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156867
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156867
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156867
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  415f904254b7343a90db895134980cbb7f7f0479
baseline version:
 xen                  415f904254b7343a90db895134980cbb7f7f0479

Last test of basis   156882  2020-11-19 15:08:36 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 05:59:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 05:59:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31558.62016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfzSY-0008Oq-IX; Fri, 20 Nov 2020 05:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31558.62016; Fri, 20 Nov 2020 05:59:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kfzSY-0008Oj-Ed; Fri, 20 Nov 2020 05:59:50 +0000
Received: by outflank-mailman (input) for mailman id 31558;
 Fri, 20 Nov 2020 05:59:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kfzSX-0008Oe-Tg
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 05:59:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8bfbaeb-4613-4268-b01a-abffcbcae682;
 Fri, 20 Nov 2020 05:59:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 173E5AC23;
 Fri, 20 Nov 2020 05:59:48 +0000 (UTC)
X-Inumbo-ID: d8bfbaeb-4613-4268-b01a-abffcbcae682
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605851988; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=4Y4Wnq1UGB9RNhSifrdmnGmDAtM2j7xXdf7kH9LivWw=;
	b=vEMkUgS5AVxLFI5pSm7zN2Bo7HxVN40A2ppZ/3Iv2yCpBhAz53lRbj6N48N+lWCSsLmjqv
	MwvNOU7ej3Hoh6W+Ne2vInmIkxaN7lCae6EfrVVkrExbYaac34a2jVl0hvWbmo7PLCuDno
	tcksr+CkmRYCxaCziHtuROZ/1LKLGvU=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.10-rc5
Date: Fri, 20 Nov 2020 06:59:47 +0100
Message-Id: <20201120055947.613-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc5-tag

xen: branch for v5.10-rc5

It contains a single fix for avoiding WARN splats when booting a Xen
guest with nosmt.

Thanks.

Juergen

 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

Brian Masney (1):
      x86/xen: don't unbind uninitialized lock_kicker_irq


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:20:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31567.62034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0i8-0000Un-Na; Fri, 20 Nov 2020 07:20:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31567.62034; Fri, 20 Nov 2020 07:20:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0i8-0000Ug-Jm; Fri, 20 Nov 2020 07:20:00 +0000
Received: by outflank-mailman (input) for mailman id 31567;
 Fri, 20 Nov 2020 07:19:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0i7-0000Ub-37
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:19:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c05e53cf-7c5b-4845-975c-559873b3011d;
 Fri, 20 Nov 2020 07:19:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1A314AB3D;
 Fri, 20 Nov 2020 07:19:57 +0000 (UTC)
X-Inumbo-ID: c05e53cf-7c5b-4845-975c-559873b3011d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 30/78] block: don't call into the driver for BLKROSET
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-31-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <7cad2b6b-cf14-8cbc-9cea-6f0add4a198c@suse.de>
Date: Fri, 20 Nov 2020 08:19:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-31-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Now that all drivers that want to hook into setting or clearing the
> read-only flag use the set_read_only method, this code can be removed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/ioctl.c | 23 -----------------------
>   1 file changed, 23 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:20:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31570.62046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0ia-0001Ea-Vg; Fri, 20 Nov 2020 07:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31570.62046; Fri, 20 Nov 2020 07:20:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0ia-0001ET-Si; Fri, 20 Nov 2020 07:20:28 +0000
Received: by outflank-mailman (input) for mailman id 31570;
 Fri, 20 Nov 2020 07:20:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0iY-0001EI-UX
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:20:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 460dc199-6ef1-4c37-9271-243e37100610;
 Fri, 20 Nov 2020 07:20:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7376AAC23;
 Fri, 20 Nov 2020 07:20:23 +0000 (UTC)
X-Inumbo-ID: 460dc199-6ef1-4c37-9271-243e37100610
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 31/78] loop: use set_disk_ro
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-32-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <62c828df-02be-1848-0a95-9b937f9998da@suse.de>
Date: Fri, 20 Nov 2020 08:20:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-32-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Use set_disk_ro instead of set_device_ro to match all other block
> drivers and to ensure all partitions mirror the read-only flag.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 84a36c242e5550..41caf799df721f 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1134,7 +1134,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
>   	if (error)
>   		goto out_unlock;
>   
> -	set_device_ro(bdev, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
> +	set_disk_ro(lo->lo_disk, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
>   
>   	lo->use_dio = lo->lo_flags & LO_FLAGS_DIRECT_IO;
>   	lo->lo_device = bdev;
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:20:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:20:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31575.62058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0iw-0001LD-9z; Fri, 20 Nov 2020 07:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31575.62058; Fri, 20 Nov 2020 07:20:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0iw-0001L6-6F; Fri, 20 Nov 2020 07:20:50 +0000
Received: by outflank-mailman (input) for mailman id 31575;
 Fri, 20 Nov 2020 07:20:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0iv-0001L1-9C
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:20:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2e5e773-0e63-4695-8dff-5d675d0a2258;
 Fri, 20 Nov 2020 07:20:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 95218AC0C;
 Fri, 20 Nov 2020 07:20:47 +0000 (UTC)
X-Inumbo-ID: d2e5e773-0e63-4695-8dff-5d675d0a2258
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 32/78] block: remove set_device_ro
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-33-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <d1beca65-cd8e-57ff-e7d7-6347cb6344b4@suse.de>
Date: Fri, 20 Nov 2020 08:20:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-33-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Fold set_device_ro into its only remaining caller.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/genhd.c         | 7 -------
>   block/ioctl.c         | 2 +-
>   include/linux/genhd.h | 1 -
>   3 files changed, 1 insertion(+), 9 deletions(-)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index 8c350fecfe8bfe..b0f0b0cac9aa7f 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -1843,13 +1843,6 @@ static void set_disk_ro_uevent(struct gendisk *gd, int ro)
>   	kobject_uevent_env(&disk_to_dev(gd)->kobj, KOBJ_CHANGE, envp);
>   }
>   
> -void set_device_ro(struct block_device *bdev, int flag)
> -{
> -	bdev->bd_part->policy = flag;
> -}
> -
> -EXPORT_SYMBOL(set_device_ro);
> -
>   void set_disk_ro(struct gendisk *disk, int flag)
>   {
>   	struct disk_part_iter piter;
> diff --git a/block/ioctl.c b/block/ioctl.c
> index 96cb4544736468..04255dc5f3bff3 100644
> --- a/block/ioctl.c
> +++ b/block/ioctl.c
> @@ -371,7 +371,7 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
>   		if (ret)
>   			return ret;
>   	}
> -	set_device_ro(bdev, n);
> +	bdev->bd_part->policy = n;
>   	return 0;
>   }
>   
> diff --git a/include/linux/genhd.h b/include/linux/genhd.h
> index 4b22bfd9336e1a..8427ad8bef520d 100644
> --- a/include/linux/genhd.h
> +++ b/include/linux/genhd.h
> @@ -304,7 +304,6 @@ extern void del_gendisk(struct gendisk *gp);
>   extern struct gendisk *get_gendisk(dev_t dev, int *partno);
>   extern struct block_device *bdget_disk(struct gendisk *disk, int partno);
>   
> -extern void set_device_ro(struct block_device *bdev, int flag);
>   extern void set_disk_ro(struct gendisk *disk, int flag);
>   
>   static inline int get_disk_ro(struct gendisk *disk)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:22:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31582.62070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0kA-0001Up-Kc; Fri, 20 Nov 2020 07:22:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31582.62070; Fri, 20 Nov 2020 07:22:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0kA-0001Ui-HM; Fri, 20 Nov 2020 07:22:06 +0000
Received: by outflank-mailman (input) for mailman id 31582;
 Fri, 20 Nov 2020 07:22:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0k9-0001Ub-EK
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:22:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff627d50-ace3-42d3-b47d-6face4669ae3;
 Fri, 20 Nov 2020 07:22:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91CBDAE76;
 Fri, 20 Nov 2020 07:22:02 +0000 (UTC)
X-Inumbo-ID: ff627d50-ace3-42d3-b47d-6face4669ae3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 33/78] block: remove __blkdev_driver_ioctl
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-34-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <82dee133-3696-7fb5-efb5-d0a4122846db@suse.de>
Date: Fri, 20 Nov 2020 08:22:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-34-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Just open code it in the few callers.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/ioctl.c               | 25 +++++--------------------
>   drivers/block/pktcdvd.c     |  6 ++++--
>   drivers/md/bcache/request.c |  5 +++--
>   drivers/md/dm.c             |  5 ++++-
>   include/linux/blkdev.h      |  2 --
>   5 files changed, 16 insertions(+), 27 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:23:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:23:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31587.62081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0lL-0001fO-Uk; Fri, 20 Nov 2020 07:23:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31587.62081; Fri, 20 Nov 2020 07:23:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0lL-0001fH-Rk; Fri, 20 Nov 2020 07:23:19 +0000
Received: by outflank-mailman (input) for mailman id 31587;
 Fri, 20 Nov 2020 07:23:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0lK-0001fB-P8
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:23:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 49602c85-020e-44fe-95a1-8d02b84c59a3;
 Fri, 20 Nov 2020 07:23:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EDB7EAC0C;
 Fri, 20 Nov 2020 07:23:16 +0000 (UTC)
X-Inumbo-ID: 49602c85-020e-44fe-95a1-8d02b84c59a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 34/78] block: propagate BLKROSET to all partitions
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-35-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <11ea0b89-d8e6-2d5d-59e4-90e928d54042@suse.de>
Date: Fri, 20 Nov 2020 08:23:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-35-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> When setting the whole device read-only (or clearing the read-only
> state), also update the policy for all partitions.  The s390 dasd
> driver has always been doing this and it makes a lot of sense.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/ioctl.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/block/ioctl.c b/block/ioctl.c
> index 6b785181344fe1..22f394d118c302 100644
> --- a/block/ioctl.c
> +++ b/block/ioctl.c
> @@ -354,7 +354,10 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
>   		if (ret)
>   			return ret;
>   	}
> -	bdev->bd_part->policy = n;
> +	if (bdev_is_partition(bdev))
> +		bdev->bd_part->policy = n;
> +	else
> +		set_disk_ro(bdev->bd_disk, n);
>   	return 0;
>   }
>   
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:25:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31596.62094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0n6-0001p2-Ae; Fri, 20 Nov 2020 07:25:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31596.62094; Fri, 20 Nov 2020 07:25:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0n6-0001ov-7L; Fri, 20 Nov 2020 07:25:08 +0000
Received: by outflank-mailman (input) for mailman id 31596;
 Fri, 20 Nov 2020 07:25:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0n4-0001oo-Ps
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:25:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05d94c1f-3a7a-4830-8a10-74c912ce443b;
 Fri, 20 Nov 2020 07:25:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A558AD29;
 Fri, 20 Nov 2020 07:25:05 +0000 (UTC)
X-Inumbo-ID: 05d94c1f-3a7a-4830-8a10-74c912ce443b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 53/78] blk-cgroup: fix a hd_struct leak in
 blkcg_fill_root_iostats
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-54-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <b7e7dd80-b46b-afaf-9117-8dade0e2420e@suse.de>
Date: Fri, 20 Nov 2020 08:25:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-54-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> disk_get_part needs to be paired with a disk_put_part.
> 
> Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-cgroup.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index c68bdf58c9a6e1..54fbe1e80cc41a 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -849,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
>   			blkg_iostat_set(&blkg->iostat.cur, &tmp);
>   			u64_stats_update_end(&blkg->iostat.sync);
>   		}
> +		disk_put_part(part);
>   	}
>   }
>   
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:25:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:25:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31601.62106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0nU-0001vD-JW; Fri, 20 Nov 2020 07:25:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31601.62106; Fri, 20 Nov 2020 07:25:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0nU-0001v6-GQ; Fri, 20 Nov 2020 07:25:32 +0000
Received: by outflank-mailman (input) for mailman id 31601;
 Fri, 20 Nov 2020 07:25:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0nT-0001ut-8J
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:25:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5d7f0a9-8935-4d30-a403-af305c013473;
 Fri, 20 Nov 2020 07:25:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ACBFAAB3D;
 Fri, 20 Nov 2020 07:25:29 +0000 (UTC)
X-Inumbo-ID: e5d7f0a9-8935-4d30-a403-af305c013473
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 54/78] block: remove a duplicate __disk_get_part prototype
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-55-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <92e3c0c1-aa72-aff9-8916-aa443f85a9e4@suse.de>
Date: Fri, 20 Nov 2020 08:25:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-55-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   include/linux/genhd.h | 1 -
>   1 file changed, 1 deletion(-)
> 
> diff --git a/include/linux/genhd.h b/include/linux/genhd.h
> index 46553d6d602563..22f5b9fd96f8bf 100644
> --- a/include/linux/genhd.h
> +++ b/include/linux/genhd.h
> @@ -250,7 +250,6 @@ static inline dev_t part_devt(struct hd_struct *part)
>   	return part_to_dev(part)->devt;
>   }
>   
> -extern struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
>   extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
>   
>   static inline void disk_put_part(struct hd_struct *part)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:26:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31607.62117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0oM-00023i-4N; Fri, 20 Nov 2020 07:26:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31607.62117; Fri, 20 Nov 2020 07:26:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0oM-00023b-1J; Fri, 20 Nov 2020 07:26:26 +0000
Received: by outflank-mailman (input) for mailman id 31607;
 Fri, 20 Nov 2020 07:26:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0oK-00023V-Sp
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:26:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bda6b7c4-a2dd-42e1-95fd-4bd05bba2548;
 Fri, 20 Nov 2020 07:26:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A51FAB3D;
 Fri, 20 Nov 2020 07:26:23 +0000 (UTC)
X-Inumbo-ID: bda6b7c4-a2dd-42e1-95fd-4bd05bba2548
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 55/78] block: change the hash used for looking up block
 devices
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-56-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <75f4c397-ac03-2c5f-d620-0e619ad78fe8@suse.de>
Date: Fri, 20 Nov 2020 08:26:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-56-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Adding the minor to the major creates tons of pointless conflicts. Just
> use the dev_t itself, which is 32-bits and thus is guaranteed to fit
> into ino_t.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   fs/block_dev.c | 26 ++------------------------
>   1 file changed, 2 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index d8664f5c1ff669..29db12c3bb501c 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -870,35 +870,12 @@ void __init bdev_cache_init(void)
>   	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
>   }
>   
> -/*
> - * Most likely _very_ bad one - but then it's hardly critical for small
> - * /dev and can be fixed when somebody will need really large one.
> - * Keep in mind that it will be fed through icache hash function too.
> - */
> -static inline unsigned long hash(dev_t dev)
> -{
> -	return MAJOR(dev)+MINOR(dev);
> -}
> -
> -static int bdev_test(struct inode *inode, void *data)
> -{
> -	return BDEV_I(inode)->bdev.bd_dev == *(dev_t *)data;
> -}
> -
> -static int bdev_set(struct inode *inode, void *data)
> -{
> -	BDEV_I(inode)->bdev.bd_dev = *(dev_t *)data;
> -	return 0;
> -}
> -
>   static struct block_device *bdget(dev_t dev)
>   {
>   	struct block_device *bdev;
>   	struct inode *inode;
>   
> -	inode = iget5_locked(blockdev_superblock, hash(dev),
> -			bdev_test, bdev_set, &dev);
> -
> +	inode = iget_locked(blockdev_superblock, dev);
>   	if (!inode)
>   		return NULL;
>   
> @@ -910,6 +887,7 @@ static struct block_device *bdget(dev_t dev)
>   		bdev->bd_super = NULL;
>   		bdev->bd_inode = inode;
>   		bdev->bd_part_count = 0;
> +		bdev->bd_dev = dev;
>   		inode->i_mode = S_IFBLK;
>   		inode->i_rdev = dev;
>   		inode->i_bdev = bdev;
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:31:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:31:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31615.62130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0te-0003Cu-Q2; Fri, 20 Nov 2020 07:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31615.62130; Fri, 20 Nov 2020 07:31:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0te-0003Cn-My; Fri, 20 Nov 2020 07:31:54 +0000
Received: by outflank-mailman (input) for mailman id 31615;
 Fri, 20 Nov 2020 07:31:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0td-0003Ci-S4
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:31:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e5111c6-345b-450d-bd2e-d644c6d50e76;
 Fri, 20 Nov 2020 07:31:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 48C4FAC23;
 Fri, 20 Nov 2020 07:31:52 +0000 (UTC)
X-Inumbo-ID: 5e5111c6-345b-450d-bd2e-d644c6d50e76
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 56/78] init: refactor name_to_dev_t
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-57-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <a98ac74a-d1c4-776f-145f-583a1d56eed3@suse.de>
Date: Fri, 20 Nov 2020 08:31:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-57-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Split each case into a self-contained helper.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   include/linux/genhd.h |   7 +-
>   init/do_mounts.c      | 183 +++++++++++++++++++++---------------------
>   2 files changed, 91 insertions(+), 99 deletions(-)
> 
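The "split each case into a self-contained helper" refactor described in the quoted commit message can be sketched in plain userspace C. This is only an illustration of the pattern, not the kernel code from the patch; the names `devt_from_hex`, `devt_from_name`, and `my_name_to_dev` are hypothetical stand-ins, and the packed major/minor encoding here is a toy.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical userspace model of the refactor: instead of one long
 * function full of branches, each recognized name form gets its own
 * self-contained parser, and the dispatcher just tries them in turn. */

static int devt_from_hex(const char *name, unsigned int *dev)
{
    /* Accept "<major>:<minor>" and pack it; a stand-in for real parsing. */
    unsigned int major, minor;
    if (sscanf(name, "%u:%u", &major, &minor) != 2)
        return -1;
    *dev = (major << 20) | minor;
    return 0;
}

static int devt_from_name(const char *name, unsigned int *dev)
{
    /* A toy lookup table standing in for a block-device name scan. */
    if (strcmp(name, "vda") == 0) { *dev = (253u << 20) | 0;  return 0; }
    if (strcmp(name, "vdb") == 0) { *dev = (253u << 20) | 16; return 0; }
    return -1;
}

static int my_name_to_dev(const char *name, unsigned int *dev)
{
    /* The dispatcher stays short: try each helper until one matches. */
    if (devt_from_hex(name, dev) == 0)
        return 0;
    return devt_from_name(name, dev);
}
```

Each helper owns exactly one case, so adding or removing a name form touches one function rather than one arm of a long if/else chain.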
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:33:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:33:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31624.62158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0vU-0003RY-B9; Fri, 20 Nov 2020 07:33:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31624.62158; Fri, 20 Nov 2020 07:33:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0vU-0003RR-7T; Fri, 20 Nov 2020 07:33:48 +0000
Received: by outflank-mailman (input) for mailman id 31624;
 Fri, 20 Nov 2020 07:33:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0vS-0003RG-G1
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:33:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b8a1734-f3eb-4727-8a75-f65eb891b654;
 Fri, 20 Nov 2020 07:33:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 95E0EAB3D;
 Fri, 20 Nov 2020 07:33:44 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg0vS-0003RG-G1
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:33:46 +0000
X-Inumbo-ID: 4b8a1734-f3eb-4727-8a75-f65eb891b654
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4b8a1734-f3eb-4727-8a75-f65eb891b654;
	Fri, 20 Nov 2020 07:33:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 95E0EAB3D;
	Fri, 20 Nov 2020 07:33:44 +0000 (UTC)
Subject: Re: [PATCH 57/78] init: refactor devt_from_partuuid
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-58-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <23b99285-e2b4-45cf-017e-f93e5368bc79@suse.de>
Date: Fri, 20 Nov 2020 08:33:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-58-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> The code in devt_from_partuuid is very convoluted.  Refactor a bit by
> sanitizing the goto and variable name usage.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   init/do_mounts.c | 68 ++++++++++++++++++++++--------------------------
>   1 file changed, 31 insertions(+), 37 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:34:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31629.62170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0w8-0003Xt-Ki; Fri, 20 Nov 2020 07:34:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31629.62170; Fri, 20 Nov 2020 07:34:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0w8-0003Xm-Gw; Fri, 20 Nov 2020 07:34:28 +0000
Received: by outflank-mailman (input) for mailman id 31629;
 Fri, 20 Nov 2020 07:34:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0w7-0003Xe-9z
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:34:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65a688bc-0eb4-4b8c-96ae-e5078c31160c;
 Fri, 20 Nov 2020 07:34:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2EBBAB3D;
 Fri, 20 Nov 2020 07:34:25 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg0w7-0003Xe-9z
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:34:27 +0000
X-Inumbo-ID: 65a688bc-0eb4-4b8c-96ae-e5078c31160c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 65a688bc-0eb4-4b8c-96ae-e5078c31160c;
	Fri, 20 Nov 2020 07:34:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C2EBBAB3D;
	Fri, 20 Nov 2020 07:34:25 +0000 (UTC)
Subject: Re: [PATCH 58/78] init: cleanup match_dev_by_uuid and
 match_dev_by_label
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-59-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <1742f81b-32e5-4bff-e10c-2c547ee345b6@suse.de>
Date: Fri, 20 Nov 2020 08:34:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-59-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Avoid a totally pointless goto label, and use the same style of
> comparison for both helpers.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   init/do_mounts.c | 18 ++++++------------
>   1 file changed, 6 insertions(+), 12 deletions(-)
> 
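The style fix the quoted commit message describes, i.e. both match helpers returning their comparison directly in the same shape rather than one of them routing failures through a goto label, can be sketched like this. The struct and helper names here are hypothetical illustrations, not the kernel's `match_dev_by_uuid`/`match_dev_by_label`.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: two predicate helpers written in the same
 * shape, each returning the result of its comparison directly. */

struct pseudo_part {
    const char *uuid;
    const char *label;
};

static int match_by_uuid_sketch(const struct pseudo_part *p, const char *uuid)
{
    return p->uuid && strcmp(p->uuid, uuid) == 0;
}

static int match_by_label_sketch(const struct pseudo_part *p, const char *label)
{
    return p->label && strcmp(p->label, label) == 0;
}
```

When sibling helpers share one shape, a reviewer only has to understand the pattern once and can diff the pair by eye.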
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:35:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31634.62182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0xE-0003gp-Vr; Fri, 20 Nov 2020 07:35:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31634.62182; Fri, 20 Nov 2020 07:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0xE-0003gi-S1; Fri, 20 Nov 2020 07:35:36 +0000
Received: by outflank-mailman (input) for mailman id 31634;
 Fri, 20 Nov 2020 07:35:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0xE-0003gb-4m
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:35:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73902ba2-db38-4ab9-9522-1e2dd30dddb2;
 Fri, 20 Nov 2020 07:35:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7837DAB3D;
 Fri, 20 Nov 2020 07:35:34 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg0xE-0003gb-4m
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:35:36 +0000
X-Inumbo-ID: 73902ba2-db38-4ab9-9522-1e2dd30dddb2
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 73902ba2-db38-4ab9-9522-1e2dd30dddb2;
	Fri, 20 Nov 2020 07:35:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7837DAB3D;
	Fri, 20 Nov 2020 07:35:34 +0000 (UTC)
Subject: Re: [PATCH 59/78] mtip32xx: remove the call to fsync_bdev on removal
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-60-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <5ee8dd18-f420-280c-84b9-78b70f528e26@suse.de>
Date: Fri, 20 Nov 2020 08:35:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-60-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> del_gendisk already calls fsync_bdev for every partition, no need
> to do this twice.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/mtip32xx/mtip32xx.c | 15 ---------------
>   drivers/block/mtip32xx/mtip32xx.h |  2 --
>   2 files changed, 17 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:37:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:37:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31640.62193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0yu-0003pJ-BA; Fri, 20 Nov 2020 07:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31640.62193; Fri, 20 Nov 2020 07:37:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0yu-0003pC-8E; Fri, 20 Nov 2020 07:37:20 +0000
Received: by outflank-mailman (input) for mailman id 31640;
 Fri, 20 Nov 2020 07:37:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0ys-0003p2-M3
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:37:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89d6f3b0-9aca-4e35-92eb-a9ab1bc8b131;
 Fri, 20 Nov 2020 07:37:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DA874AC23;
 Fri, 20 Nov 2020 07:37:16 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg0ys-0003p2-M3
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:37:18 +0000
X-Inumbo-ID: 89d6f3b0-9aca-4e35-92eb-a9ab1bc8b131
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 89d6f3b0-9aca-4e35-92eb-a9ab1bc8b131;
	Fri, 20 Nov 2020 07:37:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id DA874AC23;
	Fri, 20 Nov 2020 07:37:16 +0000 (UTC)
Subject: Re: [PATCH 60/78] zram: remove the claim mechanism
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-61-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <317d324a-f4a2-7fc4-3546-0048c38c55da@suse.de>
Date: Fri, 20 Nov 2020 08:37:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-61-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> The zram claim mechanism was added to ensure no new opens come in
> during teardown.  But the proper way to achieve that is to call
> del_gendisk first, which takes care of all that.  Once del_gendisk
> is called in the right place, the reset side can also be simplified
> as no I/O can be outstanding on a block device that is not open.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/zram/zram_drv.c | 76 ++++++++++-------------------------
>   1 file changed, 21 insertions(+), 55 deletions(-)
> 
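The ordering argument in the quoted commit message, that unregistering the disk first fences off new opens and so makes a separate claim flag unnecessary, can be modeled in userspace. This is a hedged toy model, not zram code: `toy_dev`, `toy_open`, and `toy_remove` are hypothetical stand-ins for the gendisk lifecycle.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the teardown ordering: once the device is
 * unregistered (the "call del_gendisk first" step), new opens are
 * refused, so later reset work needs no extra claim mechanism. */

struct toy_dev {
    bool registered;
    int open_count;
};

static int toy_open(struct toy_dev *d)
{
    if (!d->registered)
        return -1;          /* unregistered devices refuse new opens */
    d->open_count++;
    return 0;
}

static void toy_remove(struct toy_dev *d)
{
    d->registered = false;  /* unregister before any reset/teardown work */
    /* from here on, teardown may assume no new opens can arrive */
}
```

The invariant the refactor relies on is visible in the model: after `toy_remove`, every `toy_open` fails, so nothing opened later can race with the reset path.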
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:38:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31646.62205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0zf-0003w1-LW; Fri, 20 Nov 2020 07:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31646.62205; Fri, 20 Nov 2020 07:38:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg0zf-0003vu-IU; Fri, 20 Nov 2020 07:38:07 +0000
Received: by outflank-mailman (input) for mailman id 31646;
 Fri, 20 Nov 2020 07:38:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg0zd-0003vn-P0
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:38:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a73c8294-9aa1-48bc-946f-4a8b748a0e18;
 Fri, 20 Nov 2020 07:38:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4661CAC83;
 Fri, 20 Nov 2020 07:38:04 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg0zd-0003vn-P0
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:38:05 +0000
X-Inumbo-ID: a73c8294-9aa1-48bc-946f-4a8b748a0e18
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a73c8294-9aa1-48bc-946f-4a8b748a0e18;
	Fri, 20 Nov 2020 07:38:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 4661CAC83;
	Fri, 20 Nov 2020 07:38:04 +0000 (UTC)
Subject: Re: [PATCH 61/78] zram: do not call set_blocksize
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-62-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <e6f89f0b-602b-f297-87f5-86b7c1b353f0@suse.de>
Date: Fri, 20 Nov 2020 08:38:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-62-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> set_blocksize is used by file systems to use their preferred buffer cache
> block size.  Block drivers should not set it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/zram/zram_drv.c | 11 +----------
>   drivers/block/zram/zram_drv.h |  1 -
>   2 files changed, 1 insertion(+), 11 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:38:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31653.62218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg106-00045J-Vc; Fri, 20 Nov 2020 07:38:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31653.62218; Fri, 20 Nov 2020 07:38:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg106-00045C-Rc; Fri, 20 Nov 2020 07:38:34 +0000
Received: by outflank-mailman (input) for mailman id 31653;
 Fri, 20 Nov 2020 07:38:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg104-00044u-O8
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:38:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b7434fb-3841-4e30-b5a1-c2824ef8eae8;
 Fri, 20 Nov 2020 07:38:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 39FC4AB3D;
 Fri, 20 Nov 2020 07:38:31 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg104-00044u-O8
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:38:32 +0000
X-Inumbo-ID: 0b7434fb-3841-4e30-b5a1-c2824ef8eae8
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0b7434fb-3841-4e30-b5a1-c2824ef8eae8;
	Fri, 20 Nov 2020 07:38:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 39FC4AB3D;
	Fri, 20 Nov 2020 07:38:31 +0000 (UTC)
Subject: Re: [PATCH 62/78] loop: do not call set_blocksize
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-63-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <b1703b65-244d-e445-2e81-0b63dd1518f2@suse.de>
Date: Fri, 20 Nov 2020 08:38:29 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-63-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> set_blocksize is used by file systems to use their preferred buffer cache
> block size.  Block drivers should not set it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 3 ---
>   1 file changed, 3 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:41:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:41:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31660.62233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg12n-0005BB-EY; Fri, 20 Nov 2020 07:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31660.62233; Fri, 20 Nov 2020 07:41:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg12n-0005B4-BL; Fri, 20 Nov 2020 07:41:21 +0000
Received: by outflank-mailman (input) for mailman id 31660;
 Fri, 20 Nov 2020 07:41:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg12m-0005Az-Lm
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:41:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2501d56c-f0d5-4d82-b740-f1b36ca9181c;
 Fri, 20 Nov 2020 07:41:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0C348AC23;
 Fri, 20 Nov 2020 07:41:19 +0000 (UTC)
X-Inumbo-ID: 2501d56c-f0d5-4d82-b740-f1b36ca9181c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 64/78] dm: simplify flush_bio initialization in
 __send_empty_flush
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-65-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <38ac9782-a563-b7ea-595a-124159fb755d@suse.de>
Date: Fri, 20 Nov 2020 08:41:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-65-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> We don't really need the struct block_device to initialize a bio.  So
> switch from using bio_set_dev to manually setting up bi_disk (bi_partno
> will always be zero and has been cleared by bio_init already).
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/md/dm.c | 12 +++---------
>   1 file changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index 54739f1b579bc8..6d7eb72d41f9ea 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1422,18 +1422,12 @@ static int __send_empty_flush(struct clone_info *ci)
>   	 */
>   	bio_init(&flush_bio, NULL, 0);
>   	flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
> +	flush_bio.bi_disk = ci->io->md->disk;
> +	bio_associate_blkg(&flush_bio);
> +
>   	ci->bio = &flush_bio;
>   	ci->sector_count = 0;
>   
> -	/*
> -	 * Empty flush uses a statically initialized bio, as the base for
> -	 * cloning.  However, blkg association requires that a bdev is
> -	 * associated with a gendisk, which doesn't happen until the bdev is
> -	 * opened.  So, blkg association is done at issue time of the flush
> -	 * rather than when the device is created in alloc_dev().
> -	 */
> -	bio_set_dev(ci->bio, ci->io->md->bdev);
> -
>   	BUG_ON(bio_has_data(ci->bio));
>   	while ((ti = dm_table_get_target(ci->map, target_nr++)))
>   		__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
> 
Ah, thought as much. I've stumbled across this while debugging 
blk-interposer.

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:43:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:43:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31667.62245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg14T-0005Kj-W7; Fri, 20 Nov 2020 07:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31667.62245; Fri, 20 Nov 2020 07:43:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg14T-0005Kc-ST; Fri, 20 Nov 2020 07:43:05 +0000
Received: by outflank-mailman (input) for mailman id 31667;
 Fri, 20 Nov 2020 07:43:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg14S-0005KX-SY
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:43:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d04ed4f-ca80-4566-b552-2f29bd0a7521;
 Fri, 20 Nov 2020 07:43:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6A77AB3D;
 Fri, 20 Nov 2020 07:43:02 +0000 (UTC)
X-Inumbo-ID: 4d04ed4f-ca80-4566-b552-2f29bd0a7521
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 65/78] dm: remove the block_device reference in struct
 mapped_device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-66-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <310bff8b-dbda-609a-a392-619733b86bd1@suse.de>
Date: Fri, 20 Nov 2020 08:43:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-66-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Get rid of the long-lasting struct block_device reference in
> struct mapped_device.  The only remaining user is the freeze code,
> where we can trivially look up the block device at freeze time
> and release the reference at thaw time.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/md/dm-core.h |  2 --
>   drivers/md/dm.c      | 22 +++++++++++-----------
>   2 files changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
> index d522093cb39dda..b1b400ed76fe90 100644
> --- a/drivers/md/dm-core.h
> +++ b/drivers/md/dm-core.h
> @@ -107,8 +107,6 @@ struct mapped_device {
>   	/* kobject and completion */
>   	struct dm_kobject_holder kobj_holder;
>   
> -	struct block_device *bdev;
> -
>   	struct dm_stats stats;
>   
>   	/* for blk-mq request-based DM support */
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index 6d7eb72d41f9ea..c789ffea2badde 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1744,11 +1744,6 @@ static void cleanup_mapped_device(struct mapped_device *md)
>   
>   	cleanup_srcu_struct(&md->io_barrier);
>   
> -	if (md->bdev) {
> -		bdput(md->bdev);
> -		md->bdev = NULL;
> -	}
> -
>   	mutex_destroy(&md->suspend_lock);
>   	mutex_destroy(&md->type_lock);
>   	mutex_destroy(&md->table_devices_lock);
> @@ -1840,10 +1835,6 @@ static struct mapped_device *alloc_dev(int minor)
>   	if (!md->wq)
>   		goto bad;
>   
> -	md->bdev = bdget_disk(md->disk, 0);
> -	if (!md->bdev)
> -		goto bad;
> -
>   	dm_stats_init(&md->stats);
>   
>   	/* Populate the mapping, nobody knows we exist yet */
> @@ -2384,12 +2375,17 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
>    */
>   static int lock_fs(struct mapped_device *md)
>   {
> +	struct block_device *bdev;
>   	int r;
>   
>   	WARN_ON(md->frozen_sb);
>   
> -	md->frozen_sb = freeze_bdev(md->bdev);
> +	bdev = bdget_disk(md->disk, 0);
> +	if (!bdev)
> +		return -ENOMEM;
> +	md->frozen_sb = freeze_bdev(bdev);
>   	if (IS_ERR(md->frozen_sb)) {
> +		bdput(bdev);
>   		r = PTR_ERR(md->frozen_sb);
>   		md->frozen_sb = NULL;
>   		return r;
> @@ -2402,10 +2398,14 @@ static int lock_fs(struct mapped_device *md)
>   
>   static void unlock_fs(struct mapped_device *md)
>   {
> +	struct block_device *bdev;
> +
>   	if (!test_bit(DMF_FROZEN, &md->flags))
>   		return;
>   
> -	thaw_bdev(md->bdev, md->frozen_sb);
> +	bdev = md->frozen_sb->s_bdev;
> +	thaw_bdev(bdev, md->frozen_sb);
> +	bdput(bdev);
>   	md->frozen_sb = NULL;
>   	clear_bit(DMF_FROZEN, &md->flags);
>   }
> 
Yay. Just what I need for the blk-interposer code, where the ->bdev
pointer is really getting in the way.

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:50:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:50:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31675.62257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Bi-0006WQ-Pn; Fri, 20 Nov 2020 07:50:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31675.62257; Fri, 20 Nov 2020 07:50:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Bi-0006WJ-LB; Fri, 20 Nov 2020 07:50:34 +0000
Received: by outflank-mailman (input) for mailman id 31675;
 Fri, 20 Nov 2020 07:50:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1Bg-0006WE-Va
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:50:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0459ea0c-976a-494e-a8bf-ec78fda54202;
 Fri, 20 Nov 2020 07:50:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55C9CAC23;
 Fri, 20 Nov 2020 07:50:31 +0000 (UTC)
X-Inumbo-ID: 0459ea0c-976a-494e-a8bf-ec78fda54202
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 66/78] block: keep a block_device reference for each
 hd_struct
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-67-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <23914ef5-5245-b468-4168-bc1584e979d2@suse.de>
Date: Fri, 20 Nov 2020 08:50:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-67-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> To simplify block device lookup and a few other upcoming areas, make sure
> that we always have a struct block_device available for each disk and
> each partition.  The only downside of this is that each device and
> partition uses a little more memory.  The upside will be that a lot of
> code can be simplified.
> 
> With that, all we need to do to look up the block device is to look up the
> inode and do a few sanity checks on the gendisk, instead of the separate
> lookup for the gendisk.
> 
> As part of the change switch bdget() to only find existing block devices,
> given that we know that the block_device structure must be allocated at
> probe / partition scan time.
> 
> blk-cgroup needed a bit of special treatment as the only place that
> wanted to look up a gendisk outside of the normal blkdev_get path.  It is
> switched to a lookup using the block device hash now that this is the
> primary lookup path.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-cgroup.c         |  42 ++++-----
>   block/blk-iocost.c         |  36 +++----
>   block/blk.h                |   1 -
>   block/genhd.c              | 188 +++----------------------------------
>   block/partitions/core.c    |  28 +++---
>   fs/block_dev.c             | 133 +++++++++++++++-----------
>   include/linux/blk-cgroup.h |   4 +-
>   include/linux/blkdev.h     |   3 +
>   include/linux/genhd.h      |   4 +-
>   9 files changed, 153 insertions(+), 286 deletions(-)
> 
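The lookup change described in the commit message can be illustrated with a hedged user-space model. All names here (the `_model` structs, `bdev_add`, `bdev_lookup`, `BDEV_HASH_SIZE`) are illustrative inventions for this sketch, not the kernel's actual API; the point is only the shape of the idea: a block_device exists for every disk and partition up front, so lookup by dev_t is a single hash walk plus a gendisk sanity check, and never allocates.

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int dev_t_model;

struct gendisk_model { const char *name; };

struct block_device_model {
	dev_t_model devt;
	struct gendisk_model *disk;      /* backing disk, set at probe time */
	struct block_device_model *next; /* hash chain */
};

#define BDEV_HASH_SIZE 16
static struct block_device_model *bdev_hash[BDEV_HASH_SIZE];

static unsigned int bdev_hashfn(dev_t_model devt)
{
	return devt % BDEV_HASH_SIZE;
}

/* Registered once at probe / partition-scan time, mirroring the idea
 * that a block_device now exists for every disk and partition. */
static void bdev_add(struct block_device_model *bdev)
{
	unsigned int h = bdev_hashfn(bdev->devt);

	bdev->next = bdev_hash[h];
	bdev_hash[h] = bdev;
}

/* Lookup only finds existing devices (like the reworked bdget): one
 * hash walk plus a sanity check that a gendisk is attached. */
static struct block_device_model *bdev_lookup(dev_t_model devt)
{
	struct block_device_model *bdev;

	for (bdev = bdev_hash[bdev_hashfn(devt)]; bdev; bdev = bdev->next)
		if (bdev->devt == devt && bdev->disk)
			return bdev;
	return NULL; /* never allocates: device must already exist */
}
```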
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:51:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:51:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31680.62268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Cp-0006dX-3Y; Fri, 20 Nov 2020 07:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31680.62268; Fri, 20 Nov 2020 07:51:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Co-0006dP-Vn; Fri, 20 Nov 2020 07:51:42 +0000
Received: by outflank-mailman (input) for mailman id 31680;
 Fri, 20 Nov 2020 07:51:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1Cn-0006d5-ID
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:51:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd7773b2-c865-42b6-b9f8-5e2603e039df;
 Fri, 20 Nov 2020 07:51:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE27BAB3D;
 Fri, 20 Nov 2020 07:51:39 +0000 (UTC)
X-Inumbo-ID: fd7773b2-c865-42b6-b9f8-5e2603e039df
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 67/78] block: simplify the block device claiming interface
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-68-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <64ae3518-094e-a433-0da6-972b230efc28@suse.de>
Date: Fri, 20 Nov 2020 08:51:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-68-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Stop passing the whole device as a separate argument given that it
> can be trivially deduced.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c   | 12 +++-----
>   fs/block_dev.c         | 69 +++++++++++++++++++-----------------------
>   include/linux/blkdev.h |  6 ++--
>   3 files changed, 38 insertions(+), 49 deletions(-)
> 
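The interface change can be sketched with a hedged user-space model; the struct layout, `bd_claim_model` name, and simplified single-holder semantics are all assumptions for illustration and do not reproduce the kernel's actual claiming code. What the sketch shows is the commit's point: the caller passes only the device being claimed, and the whole disk is deduced internally rather than being a second argument.

```c
#include <assert.h>
#include <stddef.h>

struct bdev_model {
	struct bdev_model *whole; /* NULL for a whole-disk device */
	void *holder;
};

/* Claim a device (partition or whole disk) for an exclusive holder.
 * The whole-disk device is deduced from the claimed device instead of
 * being passed as a separate argument. */
static int bd_claim_model(struct bdev_model *bdev, void *holder)
{
	struct bdev_model *whole = bdev->whole ? bdev->whole : bdev;

	if (whole->holder && whole->holder != holder)
		return -1; /* modeled -EBUSY: claimed by someone else */
	whole->holder = holder;
	bdev->holder = holder;
	return 0;
}
```

Claiming a partition marks the whole disk as held too, so a second, different holder is refused.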
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:52:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:52:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31685.62281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Dt-0006mo-FG; Fri, 20 Nov 2020 07:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31685.62281; Fri, 20 Nov 2020 07:52:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Dt-0006mh-BK; Fri, 20 Nov 2020 07:52:49 +0000
Received: by outflank-mailman (input) for mailman id 31685;
 Fri, 20 Nov 2020 07:52:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1Ds-0006mZ-JU
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:52:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b6e90c3-6a3b-4cb1-a660-3a56d3e7ac60;
 Fri, 20 Nov 2020 07:52:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 230CEAC0C;
 Fri, 20 Nov 2020 07:52:47 +0000 (UTC)
X-Inumbo-ID: 6b6e90c3-6a3b-4cb1-a660-3a56d3e7ac60
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 68/78] block: remove ->bd_contains
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-69-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <3ba54e39-aac0-683a-7edb-7b4172b37cf7@suse.de>
Date: Fri, 20 Nov 2020 08:52:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-69-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:57 PM, Christoph Hellwig wrote:
> Now that each gendisk has a reference to the block_device referencing
> it, we can just use that everywhere and get rid of ->bd_contains.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/scsi/scsicam.c    |  2 +-
>   fs/block_dev.c            | 50 +++++++++++++--------------------------
>   include/linux/blk_types.h |  4 +++-
>   3 files changed, 20 insertions(+), 36 deletions(-)
> 
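The redundancy being removed can be shown in a small hedged model; the `_model` types, field names, and `bdev_whole_model` helper are illustrative stand-ins, not the kernel's definitions. The idea is that once every gendisk carries a pointer to its whole-disk block_device, a per-partition `bd_contains` back-pointer is redundant: the whole device is reachable through the disk.

```c
#include <assert.h>

struct gendisk_model2 {
	struct bdev_model2 *part0; /* the whole-disk block_device */
};

struct bdev_model2 {
	struct gendisk_model2 *disk; /* every bdev points at its gendisk */
};

/* Derive the whole-disk device from the gendisk, replacing the old
 * per-partition ->bd_contains pointer. */
static struct bdev_model2 *bdev_whole_model(struct bdev_model2 *bdev)
{
	return bdev->disk->part0;
}
```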
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:55:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:55:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31691.62293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1GY-0006yF-UP; Fri, 20 Nov 2020 07:55:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31691.62293; Fri, 20 Nov 2020 07:55:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1GY-0006y8-QI; Fri, 20 Nov 2020 07:55:34 +0000
Received: by outflank-mailman (input) for mailman id 31691;
 Fri, 20 Nov 2020 07:55:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1GX-0006y3-W3
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:55:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a421d75-9810-4503-be7a-020988436fb7;
 Fri, 20 Nov 2020 07:55:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9D32FAC23;
 Fri, 20 Nov 2020 07:55:29 +0000 (UTC)
X-Inumbo-ID: 0a421d75-9810-4503-be7a-020988436fb7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 69/78] block: remove the nr_sects field in struct
 hd_struct
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-70-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <f776dc43-0917-5d09-52a6-0d5e57914dd5@suse.de>
Date: Fri, 20 Nov 2020 08:55:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-70-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:58 PM, Christoph Hellwig wrote:
> Now that the hd_struct always has a block device attached to it, there is
> no need for having two size fields that just get out of sync.
> 
> Additionally, the field in hd_struct did not use proper serialization,
> possibly allowing for torn writes.  By only using the block_device field
> this problem also gets fixed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/bio.c                        |  2 +-
>   block/blk-core.c                   |  2 +-
>   block/blk.h                        | 53 ----------------------
>   block/genhd.c                      | 34 +++++++-------
>   block/partitions/core.c            | 17 ++++---
>   drivers/block/loop.c               |  1 -
>   drivers/block/nbd.c                |  2 +-
>   drivers/block/xen-blkback/common.h |  4 +-
>   drivers/md/bcache/super.c          |  2 +-
>   drivers/s390/block/dasd_ioctl.c    |  4 +-
>   drivers/target/target_core_pscsi.c |  7 +--
>   fs/block_dev.c                     | 73 +-----------------------------
>   fs/f2fs/super.c                    |  2 +-
>   fs/pstore/blk.c                    |  2 +-
>   include/linux/genhd.h              | 29 +++---------
>   kernel/trace/blktrace.c            |  2 +-
>   16 files changed, 47 insertions(+), 189 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:58:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31697.62305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1JF-00078a-CA; Fri, 20 Nov 2020 07:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31697.62305; Fri, 20 Nov 2020 07:58:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1JF-00078T-94; Fri, 20 Nov 2020 07:58:21 +0000
Received: by outflank-mailman (input) for mailman id 31697;
 Fri, 20 Nov 2020 07:58:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1JE-00078O-0z
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:58:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f405a87-d001-498f-899d-ca4c10e93eb1;
 Fri, 20 Nov 2020 07:58:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75B07AC0C;
 Fri, 20 Nov 2020 07:58:17 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg1JE-00078O-0z
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:58:20 +0000
X-Inumbo-ID: 0f405a87-d001-498f-899d-ca4c10e93eb1
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0f405a87-d001-498f-899d-ca4c10e93eb1;
	Fri, 20 Nov 2020 07:58:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 75B07AC0C;
	Fri, 20 Nov 2020 07:58:17 +0000 (UTC)
Subject: Re: [PATCH 70/78] block: replace bd_mutex with a per-gendisk mutex
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-71-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <c854459e-d124-d0fd-2159-d40ef4d6ca75@suse.de>
Date: Fri, 20 Nov 2020 08:58:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-71-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:58 PM, Christoph Hellwig wrote:
> bd_mutex is primarily used for synchronizing the block device open and
> release path, which recurses from partitions to the whole disk device.
> The fact that we have two locks makes life unnecessarily complex due
> to lock order constraints.  Replace the two levels of locking with a
> single mutex in the gendisk structure.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/genhd.c                   |  7 ++--
>   block/ioctl.c                   |  4 +-
>   block/partitions/core.c         | 22 +++++-----
>   drivers/block/loop.c            | 14 +++----
>   drivers/block/xen-blkfront.c    |  8 ++--
>   drivers/block/zram/zram_drv.c   |  4 +-
>   drivers/block/zram/zram_drv.h   |  2 +-
>   drivers/md/md.h                 |  7 +---
>   drivers/s390/block/dasd_genhd.c |  8 ++--
>   drivers/scsi/sd.c               |  4 +-
>   fs/block_dev.c                  | 71 +++++++++++++++++----------------
>   fs/btrfs/volumes.c              |  2 +-
>   fs/super.c                      |  8 ++--
>   include/linux/blk_types.h       |  1 -
>   include/linux/genhd.h           |  1 +
>   15 files changed, 80 insertions(+), 83 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:59:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:59:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31703.62317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Jx-0007FZ-Ln; Fri, 20 Nov 2020 07:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31703.62317; Fri, 20 Nov 2020 07:59:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Jx-0007FS-IY; Fri, 20 Nov 2020 07:59:05 +0000
Received: by outflank-mailman (input) for mailman id 31703;
 Fri, 20 Nov 2020 07:59:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1Jw-0007FL-9R
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:59:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06782743-ec88-480f-9707-e509bc5843be;
 Fri, 20 Nov 2020 07:59:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D2E6AB3D;
 Fri, 20 Nov 2020 07:59:02 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg1Jw-0007FL-9R
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:59:04 +0000
X-Inumbo-ID: 06782743-ec88-480f-9707-e509bc5843be
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 06782743-ec88-480f-9707-e509bc5843be;
	Fri, 20 Nov 2020 07:59:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7D2E6AB3D;
	Fri, 20 Nov 2020 07:59:02 +0000 (UTC)
Subject: Re: [PATCH 71/78] block: add a bdev_kobj helper
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-72-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <d7ea7ad7-6455-9f89-063c-259ea67e92c6@suse.de>
Date: Fri, 20 Nov 2020 08:59:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-72-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:58 PM, Christoph Hellwig wrote:
> Add a little helper to find the kobject for a struct block_device.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/md/bcache/super.c |  7 ++-----
>   drivers/md/md.c           |  4 +---
>   fs/btrfs/sysfs.c          | 15 +++------------
>   include/linux/blk_types.h |  3 +++
>   4 files changed, 9 insertions(+), 20 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 07:59:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 07:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31708.62329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1KX-0007Q0-4q; Fri, 20 Nov 2020 07:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31708.62329; Fri, 20 Nov 2020 07:59:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1KX-0007PX-16; Fri, 20 Nov 2020 07:59:41 +0000
Received: by outflank-mailman (input) for mailman id 31708;
 Fri, 20 Nov 2020 07:59:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1KW-0007N0-6K
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:59:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc85c8b2-5245-484c-a670-a46093089fe3;
 Fri, 20 Nov 2020 07:59:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEC1DAB3D;
 Fri, 20 Nov 2020 07:59:38 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg1KW-0007N0-6K
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 07:59:40 +0000
X-Inumbo-ID: fc85c8b2-5245-484c-a670-a46093089fe3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fc85c8b2-5245-484c-a670-a46093089fe3;
	Fri, 20 Nov 2020 07:59:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id AEC1DAB3D;
	Fri, 20 Nov 2020 07:59:38 +0000 (UTC)
Subject: Re: [PATCH 72/78] block: use disk_part_iter_exit in
 disk_part_iter_next
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-73-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <aa77ac66-cfdf-a53a-c30d-e44a6fc93b38@suse.de>
Date: Fri, 20 Nov 2020 08:59:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-73-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:58 PM, Christoph Hellwig wrote:
> Call disk_part_iter_exit in disk_part_iter_next instead of duplicating
> the functionality.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/genhd.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index 999f7142b04e7d..56bc37e98ed852 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -230,8 +230,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
>   	int inc, end;
>   
>   	/* put the last partition */
> -	disk_put_part(piter->part);
> -	piter->part = NULL;
> +	disk_part_iter_exit(piter);
>   
>   	/* get part_tbl */
>   	rcu_read_lock();
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:02:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31719.62341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Mq-0000Zj-Nr; Fri, 20 Nov 2020 08:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31719.62341; Fri, 20 Nov 2020 08:02:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1Mq-0000Zc-Kh; Fri, 20 Nov 2020 08:02:04 +0000
Received: by outflank-mailman (input) for mailman id 31719;
 Fri, 20 Nov 2020 08:02:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg1Mp-0000ZX-HZ
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:02:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 442885dd-d4e0-4b5f-9276-c05904102341;
 Fri, 20 Nov 2020 08:02:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C65C0AB3D;
 Fri, 20 Nov 2020 08:02:01 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
	id 1kg1Mp-0000ZX-HZ
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:02:03 +0000
X-Inumbo-ID: 442885dd-d4e0-4b5f-9276-c05904102341
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 442885dd-d4e0-4b5f-9276-c05904102341;
	Fri, 20 Nov 2020 08:02:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C65C0AB3D;
	Fri, 20 Nov 2020 08:02:01 +0000 (UTC)
Subject: Re: [PATCH 73/78] block: use put_device in put_disk
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-74-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <a20f546f-6e14-2866-7c50-09fa385fe6f4@suse.de>
Date: Fri, 20 Nov 2020 09:02:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-74-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:58 PM, Christoph Hellwig wrote:
> Use put_device to put the device instead of poking into the internals
> and using kobject_put.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/genhd.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/genhd.c b/block/genhd.c
> index 56bc37e98ed852..f1e20ec1b62887 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -1659,7 +1659,7 @@ EXPORT_SYMBOL(__alloc_disk_node);
>   void put_disk(struct gendisk *disk)
>   {
>   	if (disk)
> -		kobject_put(&disk_to_dev(disk)->kobj);
> +		put_device(disk_to_dev(disk));
>   }
>   EXPORT_SYMBOL(put_disk);
>   
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31727.62352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1UV-000154-Hq; Fri, 20 Nov 2020 08:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31727.62352; Fri, 20 Nov 2020 08:09:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1UV-00014x-Ej; Fri, 20 Nov 2020 08:09:59 +0000
Received: by outflank-mailman (input) for mailman id 31727;
 Fri, 20 Nov 2020 08:09:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg1UU-0000w3-2P
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:09:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11c8fba8-4262-498f-8cb3-974dae17d73c;
 Fri, 20 Nov 2020 08:09:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C3749AB3D;
 Fri, 20 Nov 2020 08:09:55 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kg1UU-0000w3-2P
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:09:58 +0000
X-Inumbo-ID: 11c8fba8-4262-498f-8cb3-974dae17d73c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 11c8fba8-4262-498f-8cb3-974dae17d73c;
	Fri, 20 Nov 2020 08:09:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605859795; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nl2Gtg2ZQUYDJVLbRasGgVEmxNeJNZvrlkFPN9xhQ+w=;
	b=YZS79OdNSBAqztobyMHfJ9Tu8ODbDFwF55uYY367NPcN1AgGFMK6mpg7is7zZSdu8BL9Mo
	k/cyhCWVEu5zIxO1znijEhkKosm+h1Xmp2c6KsRCOPISaHtj8YQwo1KB/5YqahGxGxarUZ
	RmVCTEcKbOTbkc1BpMauum2Ri8k3LzI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C3749AB3D;
	Fri, 20 Nov 2020 08:09:55 +0000 (UTC)
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20201117164033.GB3093@antioche.eu.org>
 <20201118085738.wpnfmjagxjf6cofp@Air-de-Roger>
 <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
Date: Fri, 20 Nov 2020 09:09:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201119175733.GA6067@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 18:57, Manuel Bouyer wrote:
> I added an ASSERT() after the printf to get a stack trace, and got:
> db{0}> call ioapic_dump_raw^M
> Register dump of ioapic0^M
> [  13.0193374] 00 08000000 00170011 08000000(XEN) vioapic.c:141:d0v0 apic_mem_readl:undefined ioregsel 3
> (XEN) vioapic.c:512:vioapic_irq_positive_edge: vioapic_deliver 2
> (XEN) Assertion '!print' failed at vioapic.c:512
> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d0402c4164>] vioapic_irq_positive_edge+0x14e/0x150
> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
> (XEN) rax: ffff82d0405c806c   rbx: ffff830836650580   rcx: 0000000000000000
> (XEN) rdx: ffff8300688bffff   rsi: 000000000000000a   rdi: ffff82d0404b36b8
> (XEN) rbp: ffff8300688bfde0   rsp: ffff8300688bfdc0   r8:  0000000000000004
> (XEN) r9:  0000000000000032   r10: 0000000000000000   r11: 00000000fffffffd
> (XEN) r12: ffff8308366dc000   r13: 0000000000000022   r14: ffff8308366dc31c
> (XEN) r15: ffff8308366d1d80   cr0: 0000000080050033   cr4: 00000000003526e0
> (XEN) cr3: 00000008366c9000   cr2: 0000000000000000
> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen code around <ffff82d0402c4164> (vioapic_irq_positive_edge+0x14e/0x150):
> (XEN)  3d 10 be 1d 00 00 74 c2 <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
> (XEN) Xen stack trace from rsp=ffff8300688bfdc0:
> (XEN)    0000000200000086 ffff8308366dc000 0000000000000022 0000000000000000
> (XEN)    ffff8300688bfe08 ffff82d0402bcc33 ffff8308366dc000 0000000000000022
> (XEN)    0000000000000001 ffff8300688bfe40 ffff82d0402bd18f ffff830835a7eb98
> (XEN)    ffff8308366dc000 ffff830835a7eb40 ffff8300688bfe68 0100100100100100
> (XEN)    ffff8300688bfea0 ffff82d04026f6e1 ffff830835a7eb30 ffff8308366dc0f4
> (XEN)    ffff830835a7eb40 ffff8300688bfe68 ffff8300688bfe68 ffff82d0405cec80
> (XEN)    ffffffffffffffff ffff82d0405cec80 0000000000000000 ffff82d0405d6c80
> (XEN)    ffff8300688bfed8 ffff82d04022b6fa ffff83083663f000 ffff83083663f000
> (XEN)    0000000000000000 0000000000000000 0000000a7c62165b ffff8300688bfee8
> (XEN)    ffff82d04022b798 ffff8300688bfe08 ffff82d0402a4bcb 0000000000000000
> (XEN)    0000000000000206 ffff8316da86e61c ffff8316da86e600 ffff938031fd47c0
> (XEN)    0000000000000003 0000000000000400 ff889e8da08f928a 0000000000000000
> (XEN)    0000000000000002 0000000000000100 000000000000b86e ffff93803237f010
> (XEN)    0000000000000000 ffff8316da86e61c 0000beef0000beef ffffffff80555918
> (XEN)    000000bf0000beef 0000000000000046 ffff938031fd4790 000000000000beef
> (XEN)    000000000000beef 000000000000beef 000000000000beef 000000000000beef
> (XEN)    0000e01000000000 ffff83083663f000 0000000000000000 00000000003526e0
> (XEN)    0000000000000000 0000000000000000 0000060100000001 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0402c4164>] R vioapic_irq_positive_edge+0x14e/0x150
> (XEN)    [<ffff82d0402bcc33>] F arch/x86/hvm/irq.c#assert_gsi+0x5e/0x7b
> (XEN)    [<ffff82d0402bd18f>] F hvm_gsi_assert+0x62/0x77
> (XEN)    [<ffff82d04026f6e1>] F drivers/passthrough/io.c#dpci_softirq+0x261/0x29e
> (XEN)    [<ffff82d04022b6fa>] F common/softirq.c#__do_softirq+0x8a/0xbf
> (XEN)    [<ffff82d04022b798>] F do_softirq+0x13/0x15
> (XEN)    [<ffff82d0402a4bcb>] F vmx_asm_do_vmentry+0x2b/0x30
> (XEN) 
> (XEN) 
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion '!print' failed at vioapic.c:512
> (XEN) ****************************************

Right, this was the expected path after what you've sent prior to this.
Which turned my attention back to the 'i' debug key output you had sent
the other day. There we have

(XEN)    IRQ:  34 vec:51 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)

i.e. at that point we're waiting for Dom0 to signal it's done handling
the IRQ. There is, however, a timer associated with this. Yet that's
actually to prevent the system getting stuck, i.e. the "in-flight"
state ought to clear 1ms later (when that timer expires), and hence
ought to be pretty unlikely to catch when non-zero _and_ something's
actually stuck.

So for the softirq to get Dom0 out of its stuck state, there has got to
be yet some other event. Nevertheless it may be worthwhile
instrumenting irq_guest_eoi_timer_fn() to prove we actually take this
path, i.e. Xen is trying to "clean up" after Dom0 taking too long to
service an IRQ. In normal operation this path shouldn't be taken, so I
wouldn't exclude something got broken in that logic. (Orthogonal to
this it may also be worth seeing whether increasing the timeout would
actually help things. This wouldn't be a solution, but another data
point hinting something's wrong on this code path.)

Roger, I'm also somewhat puzzled by the trailing (-MM): Is PVH using
event channels for delivering pIRQ-s? I thought that's purely vIO-APIC
and vMSI? I wonder whether we misleadingly dump info from evtchn 0
here, in which case only the 2nd of the M-s would be meaningful (and
would be in line with non-zero in-flight).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:29:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31733.62365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1nB-00035Q-7l; Fri, 20 Nov 2020 08:29:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31733.62365; Fri, 20 Nov 2020 08:29:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg1nB-00035J-3r; Fri, 20 Nov 2020 08:29:17 +0000
Received: by outflank-mailman (input) for mailman id 31733;
 Fri, 20 Nov 2020 08:29:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pyS4=E2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kg1nA-00035E-2H
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:29:16 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3681cc1d-15a6-4020-b68f-c76db1f3c5fe;
 Fri, 20 Nov 2020 08:29:13 +0000 (UTC)

DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605860953;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=e/XOJfm6Yik9fbB6LpDsWUcJp3GKSVVGsFy97xkjHng=;
  b=AtaIZC4mL1++55xtLFLYHSg7PMDGtnyk3poOJwiR2S+I5vYt5tHb3ddH
   4pvB7qZFnpONk5N0DrV9oka3LPWzpj1e3zOJ5yi8pIy0OTHSskgRr5Nc3
   DM7grxVKQT7FdpPmvt5FyL10J01PKAKaksKw24mN3gMDvLhAlGgcSW2FA
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: M8T2BEx/CbFnLXagUoBnqAmqLkvlvum0KFxQvndoHExxaCSIjIaOsKwi7Xdwj0Jjrn40HILdy0
 IQx6CrzM+vQzcvKoETflzJiXHtGwOzagiwsmBeIGrikYa8MEj5MfMBuCzodgiw1HPzXdeP/KmH
 CgsPUOa4lgA/9kdNzj7CocJxJcgH5Va1yB7npP788KZ2L9yAd8LWj02iKAESxV/QPgE0BDxEbN
 j9D00bbaFgBRNk+etQPBjhAFrpn0978vgA9W/Xo8E6sp6BO8Sxbdd7Z/8SwO7WonpNlORCEZry
 2gU=
X-SBRS: None
X-MesageID: 31569347
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,356,1599537600"; 
   d="scan'208";a="31569347"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=czVRnoQ0dW9Z+4wTICjf8d0Db3kDxJ7QGH6pC+fNREjnP6W7fyJ2bwtbo8LUX+MY93KwLz/toRIW0vq1M0pfc3QRh1VnmizR4qRFwb+eoGSW/G8IIRnks+gc49qDbKrmaC4ItiuaN3BhN7WOzlAeLYiki/jfoPXvGwiNVKg4te2PfslDSSCvun/ohODlVitVpgJvdulTRTR2ELiZuLV+lj0xSMZKqX5mnRbhdap/JkpW/OpGZ4H1HhJY4IOvAVnH64//rkoHkTUE9nx/XHpiiYhKhKAu42c0fipyGiiaR5wnC44v2Xd9Q8edf4SJ+NhmeHRyAsNCV/NoFcwC47/2eg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IvalSxWtSQRZMaNFJ/AY+WptABqmVJaN1WINBaBf2F0=;
 b=ZqdvSPfj2qpXNsY17YdEyp0Ogh/pfDrKm1zlRJ3CQcgApGJ3NEMsB97zwue+l3lYDyLjJw02HS5hY6+6fAgR85v2wEV0ApD1wYLFWzAU69Yqp8v0lS6wxeRrT6aJQ8OAmrc24B72SJkLG/Zk7uLQSfHRlbuplfEA8w4IpLX7tt4/F9AlATxnCtaqPiniW7Tk/JS09cEYRQ7zxKd+tN5xvv0XNkVjtTiEJgqWqpnA7W6sVdRGhvBurNtTCrZu8LGNP2LDQFy5LEo0Hrg7eKTLxTQPXV6BWfgwNxIqOZWXh6PWNMZ+pbZg+TdAwrljlxkyzeTfX48oXwGVWR8QnOynMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IvalSxWtSQRZMaNFJ/AY+WptABqmVJaN1WINBaBf2F0=;
 b=N8JNHrzcM3kKbi7EHiq1d9kgr2zGv6AnIL68cWtaI92W+gvaq5DC+paASMzi5HE2fZrYFYKfFGIZc2JrR2nbf3VNswESGxK8yhA3eWhAVNv+Vj8Mps9tAac8ts7IVefQS5rWrTE/f2dgwKAxCza7etr72CnhG7FCqEgEWPYnzMA=
Date: Fri, 20 Nov 2020 09:28:55 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
References: <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
X-ClientProxiedBy: LO2P265CA0231.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ef9ca351-6b3a-4baa-b781-08d88d2e547b
X-MS-TrafficTypeDiagnostic: DM6PR03MB4299:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4299630F2FEE00D25B4DA7E28FFF0@DM6PR03MB4299.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: YTa8yKe5d3kg4zFQfpBQyXg1UEuF1vkpmDAgEWJBYF9/sXVsMNO8j21eAEnZLkNO/hDwU6J15o6KEPrFZRSKHe0Scu0qoTpTubgziMaXjWPl/y+fX7wgpbgMQnCQ8gWGKF8GJDqPgkcZ4khtUi2Uc6eOFOMviKLgQiNkaWXWV7Gz4ezNEYo7DxFVpx7X8DjSI06jYn2wZC06qzOgAYIpBkCSpfrQQOa5HX+hJ6bay+yioxaUSHLjEd/v1nxngVqIxqV1oQ35znmqQs6F0qgylZc128n6fPOs7RQqENXRWOBaHXgB2L+NY+ppHrBp32XkXAr/zBidLsTEmDdTaUTGcQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(136003)(39860400002)(366004)(346002)(396003)(8936002)(6666004)(6486002)(9686003)(4326008)(1076003)(26005)(956004)(66946007)(478600001)(316002)(85182001)(86362001)(2906002)(6916009)(16526019)(8676002)(186003)(6496006)(33716001)(83380400001)(66556008)(66476007)(5660300002)(53546011);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 90JPh3RGZBUNsCouo1iDBU0tkhWg6D8wCtYNzsM0EMXc3NbrJf9SCPkzZJQyFaG3peFqdT4GMoxcO7QmZBXAGH8X1kT8ohJK7qCBL/ST2IHzaOi0o8EZLZ3rmx9s134hiFEUbt73fOf1zHA9bfR2zVvH//2rTzFK1E8UDI+zbrY1OOUhw0WdI3J6uTdu0nWlCg8elWPdsMxa4EMeZdMbPJ3Kjduor25J6Oz20x2zZgyK91+JGYgEQ92WYf7x1MrfDFplEL+ryIrFmrGjuYlI0vCthVA7U3azWwvNvRBtJkFWxH81nuC7jKYGo6dF1l2aMlLItk3vBy0tkWnagfzJMoTO+U4DYDRE0so1/OkE0e/oxBP9v/1oakNFw+DuHOovpX80GMk9aQ/s43An6pAHfJpfNmW+sMdsas7hN2ghdQgrZ9xyjaRYxcaBkq4/ObRMv9MFlffuBcUQQfXk69a4nJfuBA1vtZgWNOh/b5d3hqF+26o3DxVcyhYEP4VwD39lz5onlRSG+kL5Qpa/tDLCPo+lQYnF/GuyPcLU4Cr2w2HWG4rVjn3iAf6NM+4izpPwS0tcSD7MzNGmX93PrKH2gROd9tZx0oiNN8ogTCPtYQSo3AVkH6JKiHo1efAZxElVVzEYsgQskvRVC7C69V62QiPE5sAtrJxAN4guaGASNx64SOSF1RQdiJYlju3CwE69p/NOjgk667J5kgEoqW2gu8S2eREk/H3ybeR4pLYVJ5PW4Ji3ZqHO6nn8/qmZ7bBvcLAjatXTRnkIviRY/SaarMlT1UYhLwVb/aFlJgbI+PKjpHL6G11rPSUtrX77ywfTfURV4I5E5vjs5bzdS/yy/Vo1zl91NChe+c0SQBUH5efA/ll5ugyrO9QjbRLNLCtW5n70XBb1K3T9L1H81yNu2g==
X-MS-Exchange-CrossTenant-Network-Message-Id: ef9ca351-6b3a-4baa-b781-08d88d2e547b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Nov 2020 08:28:59.9874
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: o3cjR7A3BVLwMr8vLRee6loHDVKTLqfPn5pTdLWwnepmNiIbtbbIQfrNxw2tofupM1jQSeeKgvRls576YG6hGw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4299
X-OriginatorOrg: citrix.com

On Fri, Nov 20, 2020 at 09:09:51AM +0100, Jan Beulich wrote:
> On 19.11.2020 18:57, Manuel Bouyer wrote:
> > I added an ASSERT() after the printf to get a stack trace, and got:
> > db{0}> call ioapic_dump_raw^M
> > Register dump of ioapic0^M
> > [  13.0193374] 00 08000000 00170011 08000000(XEN) vioapic.c:141:d0v0 apic_mem_readl:undefined ioregsel 3
> > (XEN) vioapic.c:512:vioapic_irq_positive_edge: vioapic_deliver 2
> > (XEN) Assertion '!print' failed at vioapic.c:512
> > (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
> > (XEN) CPU:    0
> > (XEN) RIP:    e008:[<ffff82d0402c4164>] vioapic_irq_positive_edge+0x14e/0x150
> > (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
> > (XEN) rax: ffff82d0405c806c   rbx: ffff830836650580   rcx: 0000000000000000
> > (XEN) rdx: ffff8300688bffff   rsi: 000000000000000a   rdi: ffff82d0404b36b8
> > (XEN) rbp: ffff8300688bfde0   rsp: ffff8300688bfdc0   r8:  0000000000000004
> > (XEN) r9:  0000000000000032   r10: 0000000000000000   r11: 00000000fffffffd
> > (XEN) r12: ffff8308366dc000   r13: 0000000000000022   r14: ffff8308366dc31c
> > (XEN) r15: ffff8308366d1d80   cr0: 0000000080050033   cr4: 00000000003526e0
> > (XEN) cr3: 00000008366c9000   cr2: 0000000000000000
> > (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> > (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> > (XEN) Xen code around <ffff82d0402c4164> (vioapic_irq_positive_edge+0x14e/0x150):
> > (XEN)  3d 10 be 1d 00 00 74 c2 <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
> > (XEN) Xen stack trace from rsp=ffff8300688bfdc0:
> > (XEN)    0000000200000086 ffff8308366dc000 0000000000000022 0000000000000000
> > (XEN)    ffff8300688bfe08 ffff82d0402bcc33 ffff8308366dc000 0000000000000022
> > (XEN)    0000000000000001 ffff8300688bfe40 ffff82d0402bd18f ffff830835a7eb98
> > (XEN)    ffff8308366dc000 ffff830835a7eb40 ffff8300688bfe68 0100100100100100
> > (XEN)    ffff8300688bfea0 ffff82d04026f6e1 ffff830835a7eb30 ffff8308366dc0f4
> > (XEN)    ffff830835a7eb40 ffff8300688bfe68 ffff8300688bfe68 ffff82d0405cec80
> > (XEN)    ffffffffffffffff ffff82d0405cec80 0000000000000000 ffff82d0405d6c80
> > (XEN)    ffff8300688bfed8 ffff82d04022b6fa ffff83083663f000 ffff83083663f000
> > (XEN)    0000000000000000 0000000000000000 0000000a7c62165b ffff8300688bfee8
> > (XEN)    ffff82d04022b798 ffff8300688bfe08 ffff82d0402a4bcb 0000000000000000
> > (XEN)    0000000000000206 ffff8316da86e61c ffff8316da86e600 ffff938031fd47c0
> > (XEN)    0000000000000003 0000000000000400 ff889e8da08f928a 0000000000000000
> > (XEN)    0000000000000002 0000000000000100 000000000000b86e ffff93803237f010
> > (XEN)    0000000000000000 ffff8316da86e61c 0000beef0000beef ffffffff80555918
> > (XEN)    000000bf0000beef 0000000000000046 ffff938031fd4790 000000000000beef
> > (XEN)    000000000000beef 000000000000beef 000000000000beef 000000000000beef
> > (XEN)    0000e01000000000 ffff83083663f000 0000000000000000 00000000003526e0
> > (XEN)    0000000000000000 0000000000000000 0000060100000001 0000000000000000
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82d0402c4164>] R vioapic_irq_positive_edge+0x14e/0x150
> > (XEN)    [<ffff82d0402bcc33>] F arch/x86/hvm/irq.c#assert_gsi+0x5e/0x7b
> > (XEN)    [<ffff82d0402bd18f>] F hvm_gsi_assert+0x62/0x77
> > (XEN)    [<ffff82d04026f6e1>] F drivers/passthrough/io.c#dpci_softirq+0x261/0x29e
> > (XEN)    [<ffff82d04022b6fa>] F common/softirq.c#__do_softirq+0x8a/0xbf
> > (XEN)    [<ffff82d04022b798>] F do_softirq+0x13/0x15
> > (XEN)    [<ffff82d0402a4bcb>] F vmx_asm_do_vmentry+0x2b/0x30
> > (XEN) 
> > (XEN) 
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Assertion '!print' failed at vioapic.c:512
> > (XEN) ****************************************
> 
> Right, this was the expected path after what you've sent prior to this.
> Which turned my attention back to the 'i' debug key output you had sent
> the other day. There we have
> 
> (XEN)    IRQ:  34 vec:51 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
> 
> i.e. at that point we're waiting for Dom0 to signal it's done handling
> the IRQ. There is, however, a timer associated with this. Yet that's
> actually to prevent the system getting stuck, i.e. the "in-flight"
> state ought to clear 1ms later (when that timer expires), and hence
> ought to be pretty unlikely to catch when non-zero _and_ something's
> actually stuck.

I somehow assumed the interrupt was in-flight because the printing to
the Xen console caused one to be injected, and thus dom0 didn't have
time to Ack it yet.

> 
> So for the softirq to get Dom0 out of its stuck state, there has got to
> be yet some other event. Nevertheless it may be worthwhile
> instrumenting irq_guest_eoi_timer_fn() to prove we actually take this
> path, i.e. Xen is trying to "clean up" after Dom0 taking too long to
> service an IRQ. In normal operation this path shouldn't be taken, so I
> wouldn't exclude something got broken in that logic. (Orthogonal to
> this it may also be worth seeing whether increasing the timeout would
> actually help things. This wouldn't be a solution, but another data
> point hinting something's wrong on this code path.)
> 
> Roger, I'm also somewhat puzzled by the trailing (-MM): Is PVH using
> event channels for delivering pIRQ-s?

No, it's always using emulated interrupt controllers. I explicitly
disabled HVM PIRQ for PVH.

> I thought that's purely vIO-APIC
> and vMSI? I wonder whether we misleadingly dump info from evtchn 0
> here, in which case only the 2nd of the M-s would be meaningful (and
> would be in line with non-zero in-flight).

Likely - will have to look closer but there's no event channel
associated with a PIRQ on PVH dom0. I will send a patch to fix dump_irqs.

Maybe we should track interrupt EOI, and see when the interrupt gets
EOI'ed. Will see if I can find some time later to prepare another
debug patch.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:48:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31743.62379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg25Q-0005L8-RT; Fri, 20 Nov 2020 08:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31743.62379; Fri, 20 Nov 2020 08:48:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg25Q-0005L1-Oa; Fri, 20 Nov 2020 08:48:08 +0000
Received: by outflank-mailman (input) for mailman id 31743;
 Fri, 20 Nov 2020 08:48:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg25P-0005Kw-C1
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:48:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f37ee631-3b8d-4940-a183-8d063e4770dc;
 Fri, 20 Nov 2020 08:48:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B81B8ACBA;
 Fri, 20 Nov 2020 08:48:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605862085; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=juElUg37bsElSjbjjv5bDRG8N72xCiELmwQbZh98AjY=;
	b=ivI1v+0FkW+ps6TCHn0BtXmweKKX+llG8m3JtExQ03ManK+ZChL7ZC7Wtb2sYWeRAyHc5s
	JCTV5kiDUCQChMLCsedzm5b2n9kmecZmX0dUHFREH5+oMSGfwkQ7mXCgFZ1DytzUtgy5w3
	b98h5R2xjyua1IZx/+uJ2H/y/uboytw=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/IRQ: drop two unused variables
Message-ID: <75d17df8-706b-08e5-b839-33ed1ce44bf3@suse.com>
Date: Fri, 20 Nov 2020 09:48:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

I didn't bother figuring out which commit(s) should have deleted them
while removing their last uses.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1402,7 +1402,6 @@ void desc_guest_eoi(struct irq_desc *des
 {
     irq_guest_action_t *action;
     cpumask_t           cpu_eoi_map;
-    int                 irq;
 
     if ( !(desc->status & IRQ_GUEST) )
     {
@@ -1411,7 +1410,6 @@ void desc_guest_eoi(struct irq_desc *des
     }
 
     action = (irq_guest_action_t *)desc->action;
-    irq = desc - irq_desc;
 
     if ( unlikely(!test_and_clear_bool(pirq->masked)) ||
          unlikely(--action->in_flight != 0) )
@@ -1663,13 +1661,11 @@ int pirq_guest_bind(struct vcpu *v, stru
 static irq_guest_action_t *__pirq_guest_unbind(
     struct domain *d, struct pirq *pirq, struct irq_desc *desc)
 {
-    unsigned int        irq;
     irq_guest_action_t *action;
     cpumask_t           cpu_eoi_map;
     int                 i;
 
     action = (irq_guest_action_t *)desc->action;
-    irq = desc - irq_desc;
 
     if ( unlikely(action == NULL) )
     {


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:50:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31750.62392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg27L-0006Er-Cm; Fri, 20 Nov 2020 08:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31750.62392; Fri, 20 Nov 2020 08:50:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg27L-0006Ek-95; Fri, 20 Nov 2020 08:50:07 +0000
Received: by outflank-mailman (input) for mailman id 31750;
 Fri, 20 Nov 2020 08:50:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg27K-0005ct-6t
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:50:06 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7732ed9f-d17a-43b9-a405-c75d75e7b049;
 Fri, 20 Nov 2020 08:49:59 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id A171367373; Fri, 20 Nov 2020 09:49:56 +0100 (CET)
Date: Fri, 20 Nov 2020 09:49:56 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 07/20] init: refactor name_to_dev_t
Message-ID: <20201120084956.GA21715@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-8-hch@lst.de> <20201118143747.GL1981@quack2.suse.cz> <20201119075225.GA15815@lst.de> <20201119082505.GS1981@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119082505.GS1981@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Nov 19, 2020 at 09:25:05AM +0100, Jan Kara wrote:
> OK, understood. Still it would seem more logical to leave blk_lookup_devt()
> declaration inside #ifdef CONFIG_BLOCK and just delete the !CONFIG_BLOCK
> definition (to make it clear we only ever expect users compiled when
> CONFIG_BLOCK is defined). But whatever... Feel free to add:

Not having the ifdef would allow using IS_ENABLED() around it.  Which
is what I did in one of the earlier variants before settling on this one.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31757.62404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2AD-0006bP-RW; Fri, 20 Nov 2020 08:53:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31757.62404; Fri, 20 Nov 2020 08:53:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2AD-0006bI-Ne; Fri, 20 Nov 2020 08:53:05 +0000
Received: by outflank-mailman (input) for mailman id 31757;
 Fri, 20 Nov 2020 08:53:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GrQD=E2=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kg2AC-0006bC-EJ
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:53:04 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d767c42-fd55-44c6-bd1a-6a07adcf4f25;
 Fri, 20 Nov 2020 08:53:03 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AK8qs0H019108
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 20 Nov 2020 09:52:55 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id C0BEC2E9CA8; Fri, 20 Nov 2020 09:52:49 +0100 (MET)
Date: Fri, 20 Nov 2020 09:52:49 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201120085249.GA1508@antioche.eu.org>
References: <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 20 Nov 2020 09:52:56 +0100 (MET)

On Fri, Nov 20, 2020 at 09:28:55AM +0100, Roger Pau Monné wrote:
> > i.e. at that point we're waiting for Dom0 to signal it's done handling
> > the IRQ. There is, however, a timer associated with this. Yet that's
> > actually to prevent the system getting stuck, i.e. the "in-flight"
> > state ought to clear 1ms later (when that timer expires), and hence
> > ought to be pretty unlikely to catch when non-zero _and_ something's
> > actually stuck.
> 
> I somehow assumed the interrupt was in-flight because the printing to
> the Xen console caused one to be injected, and thus dom0 didn't have
> time to Ack it yet.

What does Xen consider to be an ACK from the dom0 ?
AFAIK we have EOI only for LAPIC interrupts.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:54:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:54:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31763.62416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Bp-0006lX-7J; Fri, 20 Nov 2020 08:54:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31763.62416; Fri, 20 Nov 2020 08:54:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Bp-0006lQ-3P; Fri, 20 Nov 2020 08:54:45 +0000
Received: by outflank-mailman (input) for mailman id 31763;
 Fri, 20 Nov 2020 08:54:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg2Bn-0006lL-Sj
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:54:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5216c461-8419-430e-baf4-b7372db2030d;
 Fri, 20 Nov 2020 08:54:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01C40AC0C;
 Fri, 20 Nov 2020 08:54:42 +0000 (UTC)
X-Inumbo-ID: 5216c461-8419-430e-baf4-b7372db2030d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605862482; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kD70W5EcwNqUsQIyuK1wkz18oD2obhSJVK0u83cr1xs=;
	b=hitXLS5bF22P5N7CUPyHcgORTK04OXFp7YaSAm75JNSRBD1jPlGdV61TdUpctibRyAvQEh
	3IBesbjD+olv1oHWVwbllKkEqLkwRlKc8jD0yBf9ixvW8zFxjJ/gZ5fYAP/+AcLLkDPeG2
	htncPqHWlTG/Z0MmhQSVHKbD07L4scc=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xenproject.org
References: <20201118092425.GC1085@antioche.eu.org>
 <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e637a72-085d-45b9-aa5c-01e138c81875@suse.com>
Date: Fri, 20 Nov 2020 09:54:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.11.2020 09:28, Roger Pau Monné wrote:
> On Fri, Nov 20, 2020 at 09:09:51AM +0100, Jan Beulich wrote:
>> On 19.11.2020 18:57, Manuel Bouyer wrote:
>>> I added an ASSERT() after the printf to get a stack trace, and got:
>>> db{0}> call ioapic_dump_raw^M
>>> Register dump of ioapic0^M
>>> [  13.0193374] 00 08000000 00170011 08000000(XEN) vioapic.c:141:d0v0 apic_mem_readl:undefined ioregsel 3
>>> (XEN) vioapic.c:512:vioapic_irq_positive_edge: vioapic_deliver 2
>>> (XEN) Assertion '!print' failed at vioapic.c:512
>>> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
>>> (XEN) CPU:    0
>>> (XEN) RIP:    e008:[<ffff82d0402c4164>] vioapic_irq_positive_edge+0x14e/0x150
>>> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
>>> (XEN) rax: ffff82d0405c806c   rbx: ffff830836650580   rcx: 0000000000000000
>>> (XEN) rdx: ffff8300688bffff   rsi: 000000000000000a   rdi: ffff82d0404b36b8
>>> (XEN) rbp: ffff8300688bfde0   rsp: ffff8300688bfdc0   r8:  0000000000000004
>>> (XEN) r9:  0000000000000032   r10: 0000000000000000   r11: 00000000fffffffd
>>> (XEN) r12: ffff8308366dc000   r13: 0000000000000022   r14: ffff8308366dc31c
>>> (XEN) r15: ffff8308366d1d80   cr0: 0000000080050033   cr4: 00000000003526e0
>>> (XEN) cr3: 00000008366c9000   cr2: 0000000000000000
>>> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>> (XEN) Xen code around <ffff82d0402c4164> (vioapic_irq_positive_edge+0x14e/0x150):
>>> (XEN)  3d 10 be 1d 00 00 74 c2 <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
>>> (XEN) Xen stack trace from rsp=ffff8300688bfdc0:
>>> (XEN)    0000000200000086 ffff8308366dc000 0000000000000022 0000000000000000
>>> (XEN)    ffff8300688bfe08 ffff82d0402bcc33 ffff8308366dc000 0000000000000022
>>> (XEN)    0000000000000001 ffff8300688bfe40 ffff82d0402bd18f ffff830835a7eb98
>>> (XEN)    ffff8308366dc000 ffff830835a7eb40 ffff8300688bfe68 0100100100100100
>>> (XEN)    ffff8300688bfea0 ffff82d04026f6e1 ffff830835a7eb30 ffff8308366dc0f4
>>> (XEN)    ffff830835a7eb40 ffff8300688bfe68 ffff8300688bfe68 ffff82d0405cec80
>>> (XEN)    ffffffffffffffff ffff82d0405cec80 0000000000000000 ffff82d0405d6c80
>>> (XEN)    ffff8300688bfed8 ffff82d04022b6fa ffff83083663f000 ffff83083663f000
>>> (XEN)    0000000000000000 0000000000000000 0000000a7c62165b ffff8300688bfee8
>>> (XEN)    ffff82d04022b798 ffff8300688bfe08 ffff82d0402a4bcb 0000000000000000
>>> (XEN)    0000000000000206 ffff8316da86e61c ffff8316da86e600 ffff938031fd47c0
>>> (XEN)    0000000000000003 0000000000000400 ff889e8da08f928a 0000000000000000
>>> (XEN)    0000000000000002 0000000000000100 000000000000b86e ffff93803237f010
>>> (XEN)    0000000000000000 ffff8316da86e61c 0000beef0000beef ffffffff80555918
>>> (XEN)    000000bf0000beef 0000000000000046 ffff938031fd4790 000000000000beef
>>> (XEN)    000000000000beef 000000000000beef 000000000000beef 000000000000beef
>>> (XEN)    0000e01000000000 ffff83083663f000 0000000000000000 00000000003526e0
>>> (XEN)    0000000000000000 0000000000000000 0000060100000001 0000000000000000
>>> (XEN) Xen call trace:
>>> (XEN)    [<ffff82d0402c4164>] R vioapic_irq_positive_edge+0x14e/0x150
>>> (XEN)    [<ffff82d0402bcc33>] F arch/x86/hvm/irq.c#assert_gsi+0x5e/0x7b
>>> (XEN)    [<ffff82d0402bd18f>] F hvm_gsi_assert+0x62/0x77
>>> (XEN)    [<ffff82d04026f6e1>] F drivers/passthrough/io.c#dpci_softirq+0x261/0x29e
>>> (XEN)    [<ffff82d04022b6fa>] F common/softirq.c#__do_softirq+0x8a/0xbf
>>> (XEN)    [<ffff82d04022b798>] F do_softirq+0x13/0x15
>>> (XEN)    [<ffff82d0402a4bcb>] F vmx_asm_do_vmentry+0x2b/0x30
>>> (XEN) 
>>> (XEN) 
>>> (XEN) ****************************************
>>> (XEN) Panic on CPU 0:
>>> (XEN) Assertion '!print' failed at vioapic.c:512
>>> (XEN) ****************************************
>>
>> Right, this was the expected path after what you've sent prior to this.
>> Which turned my attention back to the 'i' debug key output you had sent
>> the other day. There we have
>>
>> (XEN)    IRQ:  34 vec:51 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
>>
>> i.e. at that point we're waiting for Dom0 to signal it's done handling
>> the IRQ. There is, however, a timer associated with this. Yet that's
>> actually to prevent the system getting stuck, i.e. the "in-flight"
>> state ought to clear 1ms later (when that timer expires), and hence
>> ought to be pretty unlikely to catch when non-zero _and_ something's
>> actually stuck.
> 
> I somehow assumed the interrupt was in-flight because the printing to
> the Xen console caused one to be injected, and thus dom0 didn't have
> time to Ack it yet.

By "injected" you mean from Xen into Dom0, or by the hardware for Xen
to handle? (I ask because I think I saw you use the term also for the
latter case, in some context.) If the former, then something would
need to have caused Xen to inject it, while in the latter case there
would need to have been a reason that it didn't get delivered earlier.

From the stack trace above the only possibility I could derive for
now would be that we didn't run softirqs for a long time, but I don't
think that's very realistic here. Otoh, Manuel, does the NMI watchdog
work on that system? It certainly wouldn't hurt if you turned it on,
just in case.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:56:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31769.62428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2DZ-0006uF-IU; Fri, 20 Nov 2020 08:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31769.62428; Fri, 20 Nov 2020 08:56:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2DZ-0006u8-FA; Fri, 20 Nov 2020 08:56:33 +0000
Received: by outflank-mailman (input) for mailman id 31769;
 Fri, 20 Nov 2020 08:56:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg2DX-0006tN-SN
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:56:31 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d13ad8e4-9b61-46d7-9130-5e26d69e31fa;
 Fri, 20 Nov 2020 08:56:30 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 134F468B05; Fri, 20 Nov 2020 09:56:27 +0100 (CET)
X-Inumbo-ID: d13ad8e4-9b61-46d7-9130-5e26d69e31fa
Date: Fri, 20 Nov 2020 09:56:26 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 11/20] block: reference struct block_device from struct
 hd_struct
Message-ID: <20201120085626.GB21715@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-12-hch@lst.de> <20201119094157.GT1981@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119094157.GT1981@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Nov 19, 2020 at 10:41:57AM +0100, Jan Kara wrote:
> >  	rcu_assign_pointer(ptbl->part[0], &disk->part0);
> > @@ -1772,8 +1626,10 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
> >  	 * converted to make use of bd_mutex and sequence counters.
> >  	 */
> >  	hd_sects_seq_init(&disk->part0);
> > -	if (hd_ref_init(&disk->part0))
> > -		goto out_free_part0;
> > +	if (hd_ref_init(&disk->part0)) {
> > +		hd_free_part(&disk->part0);
> 
> Aren't you missing kfree(disk) here?

This should actually jump to out_free_bdstats, I've fixed it up.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 08:58:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 08:58:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31774.62439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Fh-00074Q-Ut; Fri, 20 Nov 2020 08:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31774.62439; Fri, 20 Nov 2020 08:58:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Fh-00074J-Ry; Fri, 20 Nov 2020 08:58:45 +0000
Received: by outflank-mailman (input) for mailman id 31774;
 Fri, 20 Nov 2020 08:58:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ruer=E2=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1kg2Fg-00074E-5q
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:58:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7404143-8c2d-4004-96c9-386086a4fdcc;
 Fri, 20 Nov 2020 08:58:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5CAD1AC23;
 Fri, 20 Nov 2020 08:58:42 +0000 (UTC)
X-Inumbo-ID: d7404143-8c2d-4004-96c9-386086a4fdcc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 74/78] block: merge struct block_device and struct
 hd_struct
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Justin Sanders <justin@coraid.com>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Jack Wang <jinpu.wang@cloud.ionos.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Minchan Kim <minchan@kernel.org>, Mike Snitzer <snitzer@redhat.com>,
 Song Liu <song@kernel.org>, "Martin K. Petersen"
 <martin.petersen@oracle.com>, dm-devel@redhat.com,
 linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-raid@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-75-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <f6e6b948-44c8-50f0-beea-921eb3a268dd@suse.de>
Date: Fri, 20 Nov 2020 09:58:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201116145809.410558-75-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/16/20 3:58 PM, Christoph Hellwig wrote:
> Instead of having two structures that represent each block device with
> different lifetime rules, merge them into a single one.  This also
> greatly simplifies the reference counting rules, as we can use the inode
> reference count as the main reference count for the new struct
> block_device, with the device model reference front ending it for device
> model interaction.  The percpu refcount in struct hd_struct is entirely
> gone given that struct block_device must be opened and thus valid for
> the duration of the I/O.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/bio.c                        |   6 +-
>   block/blk-cgroup.c                 |   9 +-
>   block/blk-core.c                   |  85 +++++-----
>   block/blk-flush.c                  |   2 +-
>   block/blk-lib.c                    |   2 +-
>   block/blk-merge.c                  |   6 +-
>   block/blk-mq.c                     |  11 +-
>   block/blk-mq.h                     |   5 +-
>   block/blk.h                        |  38 ++---
>   block/genhd.c                      | 242 +++++++++++------------------
>   block/ioctl.c                      |   4 +-
>   block/partitions/core.c            | 221 +++++++-------------------
>   drivers/block/drbd/drbd_receiver.c |   2 +-
>   drivers/block/drbd/drbd_worker.c   |   2 +-
>   drivers/block/zram/zram_drv.c      |   2 +-
>   drivers/md/bcache/request.c        |   4 +-
>   drivers/md/dm.c                    |   8 +-
>   drivers/md/md.c                    |   4 +-
>   drivers/nvme/target/admin-cmd.c    |  20 +--
>   drivers/s390/block/dasd.c          |   8 +-
>   fs/block_dev.c                     |  68 +++-----
>   fs/ext4/super.c                    |  18 +--
>   fs/ext4/sysfs.c                    |  10 +-
>   fs/f2fs/checkpoint.c               |   5 +-
>   fs/f2fs/f2fs.h                     |   2 +-
>   fs/f2fs/super.c                    |   6 +-
>   fs/f2fs/sysfs.c                    |   9 --
>   include/linux/blk_types.h          |  23 ++-
>   include/linux/blkdev.h             |  13 +-
>   include/linux/genhd.h              |  67 ++------
>   include/linux/part_stat.h          |  17 +-
>   init/do_mounts.c                   |  20 +--
>   kernel/trace/blktrace.c            |  54 ++-----
>   33 files changed, 351 insertions(+), 642 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:00:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31780.62452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Gt-0007Nu-AB; Fri, 20 Nov 2020 08:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31780.62452; Fri, 20 Nov 2020 08:59:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Gt-0007Nn-6e; Fri, 20 Nov 2020 08:59:59 +0000
Received: by outflank-mailman (input) for mailman id 31780;
 Fri, 20 Nov 2020 08:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg2Gs-0007Ng-48
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 08:59:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 792855b5-19c4-49eb-ae08-9956c70f5db3;
 Fri, 20 Nov 2020 08:59:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9A72AC0C;
 Fri, 20 Nov 2020 08:59:56 +0000 (UTC)
X-Inumbo-ID: 792855b5-19c4-49eb-ae08-9956c70f5db3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605862796; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vtzVT+psnRyLMZqf7NPV4DyyZ+QvtPWbucYiH+zRXaQ=;
	b=Wzg80ch2E0v/Wehh/deV82GMPaSrZptDAvMyyHCGg4qIMjmuQfg3MJLUMxFazOtBznhUYg
	wnBUm2qK30/jdUWPr33hwkDXVQ1Hl6KnZ6sa/wZ7MDO3mXqVcgTHmls5HRgHlL4IcTx07k
	E2SRyyReiYaZQkMr6LI4IZEkKNUxKlE=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20201118100025.ic7r3kfsbdnr6muz@Air-de-Roger>
 <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
Date: Fri, 20 Nov 2020 09:59:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201120085249.GA1508@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.11.2020 09:52, Manuel Bouyer wrote:
> On Fri, Nov 20, 2020 at 09:28:55AM +0100, Roger Pau Monné wrote:
>>> i.e. at that point we're waiting for Dom0 to signal it's done handling
>>> the IRQ. There is, however, a timer associated with this. Yet that's
>>> actually to prevent the system getting stuck, i.e. the "in-flight"
>>> state ought to clear 1ms later (when that timer expires), and hence
>>> ought to be pretty unlikely to catch when non-zero _and_ something's
>>> actually stuck.
>>
>> I somehow assumed the interrupt was in-flight because the printing to
>> the Xen console caused one to be injected, and thus dom0 didn't have
>> time to Ack it yet.
> 
> What does Xen consider to be an ACK from the dom0 ?
> AFAIK we have EOI only for LAPIC interrupts.

Well, anything coming through the LAPIC needs ack-ing (except for
the spurious interrupt of course), or else ISR won't get updated
and further interrupts at this or lower priority can't be serviced
(delivered) anymore. This includes interrupts originally coming
through the IO-APIC. But the same constraint / requirement exists
on baremetal.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:01:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:01:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31787.62464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Ic-0008ER-M4; Fri, 20 Nov 2020 09:01:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31787.62464; Fri, 20 Nov 2020 09:01:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Ic-0008EK-Ip; Fri, 20 Nov 2020 09:01:46 +0000
Received: by outflank-mailman (input) for mailman id 31787;
 Fri, 20 Nov 2020 09:01:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg2Ib-0008EE-Au
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:01:45 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4bf5b1e-2083-4427-a416-3720c2612229;
 Fri, 20 Nov 2020 09:01:44 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4C98067373; Fri, 20 Nov 2020 10:01:42 +0100 (CET)
X-Inumbo-ID: e4bf5b1e-2083-4427-a416-3720c2612229
Date: Fri, 20 Nov 2020 10:01:42 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 13/20] block: remove ->bd_contains
Message-ID: <20201120090142.GC21715@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-14-hch@lst.de> <20201119103253.GV1981@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119103253.GV1981@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Nov 19, 2020 at 11:32:53AM +0100, Jan Kara wrote:
> > @@ -1521,7 +1510,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
> >  		if (bdev->bd_bdi == &noop_backing_dev_info)
> >  			bdev->bd_bdi = bdi_get(disk->queue->backing_dev_info);
> >  	} else {
> > -		if (bdev->bd_contains == bdev) {
> > +		if (!bdev->bd_partno) {
> 
> This should be !bdev_is_partition(bdev) for consistency, right?

Yes.  Same for the same check further up for the !bdev->bd_openers
case.

> > +#define bdev_whole(_bdev) \
> > +	((_bdev)->bd_disk->part0.bdev)
> > +
> >  #define bdev_kobj(_bdev) \
> >  	(&part_to_dev((_bdev)->bd_part)->kobj)
> 
> I'd somewhat prefer if these helpers could actually be inline functions and
> not macros. I guess they are macros because hd_struct isn't in blk_types.h.
> But if we moved helpers to blkdev.h, we'd have all definitions we need...
> Is that a problem for some users?

As you pointed out, the reason these are macros is that the obvious
placement doesn't work.  My plan was to look into cleaning up the block
headers, which are a complete mess between blk_types.h, bio.h, blkdev.h
and genhd.h, after I'm done making sense of the data structures, so for
now I didn't want to move too much around.  Hopefully we'll be able to
convert these helpers to inlines once I'm done.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:08:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:08:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31795.62475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2P4-00005j-Fi; Fri, 20 Nov 2020 09:08:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31795.62475; Fri, 20 Nov 2020 09:08:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2P4-00005c-Ce; Fri, 20 Nov 2020 09:08:26 +0000
Received: by outflank-mailman (input) for mailman id 31795;
 Fri, 20 Nov 2020 09:08:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg2P2-00005W-O8
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:08:24 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 070ef7dd-4942-468a-8592-5f4489bfd336;
 Fri, 20 Nov 2020 09:08:23 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 0DA3867373; Fri, 20 Nov 2020 10:08:21 +0100 (CET)
Date: Fri, 20 Nov 2020 10:08:20 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201120090820.GD21715@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-15-hch@lst.de> <20201119120525.GW1981@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119120525.GW1981@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Nov 19, 2020 at 01:05:25PM +0100, Jan Kara wrote:
> > @@ -613,7 +613,7 @@ void guard_bio_eod(struct bio *bio)
> >  	rcu_read_lock();
> >  	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
> >  	if (part)
> > -		maxsector = part_nr_sects_read(part);
> > +		maxsector = bdev_nr_sectors(part->bdev);
> >  	else
> >  		maxsector = get_capacity(bio->bi_disk);
> 
> I have to say that after these changes I find it a bit confusing that we
> have get/set_capacity() and bdev_nr_sectors() / bdev_set_nr_sectors() and
> they are all the same thing (i_size of the bdev). Is there a reason for the
> distinction?

get_capacity/set_capacity are the existing interfaces that work on
struct gendisk, unchanged from what we had before.  They also have lots
of users, which makes them kinda awkward to touch.

bdev_nr_sectors is the public interface to query the size for any
kind of struct block device, to be used by consumers of the block
device interface.

bdev_set_nr_sectors is a private helper for the partitions core that
avoids duplicating a bit of code, and works on partitions.



> > @@ -38,6 +38,16 @@ static void disk_add_events(struct gendisk *disk);
> >  static void disk_del_events(struct gendisk *disk);
> >  static void disk_release_events(struct gendisk *disk);
> >  
> > +void set_capacity(struct gendisk *disk, sector_t sectors)
> > +{
> > +	struct block_device *bdev = disk->part0.bdev;
> > +
> > +	spin_lock(&bdev->bd_size_lock);
> > +	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
> > +	spin_unlock(&bdev->bd_size_lock);
> 
> AFAICT bd_size_lock is pointless after these changes so we can just remove
> it?

I don't think it is, as requiring bd_mutex for size updates leads to
rather awkward lock ordering problems.

> >  	if (capacity != size && capacity != 0 && size != 0) {
> >  		char *envp[] = { "RESIZE=1", NULL };
> >  
> > +		pr_info("%s: detected capacity change from %lld to %lld\n",
> > +		       disk->disk_name, size, capacity);
> 
> So we are now missing above message for transitions from / to 0 capacity?
> Is there any other notification in the kernel log when e.g. media is
> inserted into a CD-ROM drive? I remember using these messages for detecting
> that...

True, I guess we should keep the messages for that case at least under
some circumstances.  Let me take a closer look at what could make sense.

> Also what about GENHD_FL_HIDDEN devices? Are we sure we never set capacity
> for them?

We absolutely set the capacity for them, as we have to.  And even use
this interface.  But yes, I think we should skip sending the uevent for
them.

> > @@ -1158,8 +1169,7 @@ ssize_t part_size_show(struct device *dev,
> >  {
> >  	struct hd_struct *p = dev_to_part(dev);
> >  
> > -	return sprintf(buf, "%llu\n",
> > -		(unsigned long long)part_nr_sects_read(p));
> > +	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
> 
> Is sector_t really guaranteed to be unsigned long long?

Yes, it is these days, ever since I removed the option to have a 32-bit
one on 32-bit platforms a while ago.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:12:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31801.62488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Sb-0001Bn-2Z; Fri, 20 Nov 2020 09:12:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31801.62488; Fri, 20 Nov 2020 09:12:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Sa-0001Bg-Tm; Fri, 20 Nov 2020 09:12:04 +0000
Received: by outflank-mailman (input) for mailman id 31801;
 Fri, 20 Nov 2020 09:12:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg2SZ-0001Bb-Me
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:12:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 391ebce0-2f7f-48ee-8f90-5fcb2a73d4af;
 Fri, 20 Nov 2020 09:12:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B67B8AC0C;
 Fri, 20 Nov 2020 09:12:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605863521; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7JeWb2czxrVFkgWEVPsHDHmYKfUupehhGgkrX1SvIDw=;
	b=b2b5Uthk7n6bnxe3OuZLirH/TiW9VT0Bpl4xWU7v/6wK8+zVGkvT5QIWRRCxxlu5HvFat+
	9gQh3bLAR+U7kdAM2TMkt8FgGSeekqDDl7iJXXiyNxevWh0xPWyW3VM4OQtpOrxHJ9YTzM
	l0SHOYJRF0jjoU6cSJ5TWeIWZk2/Iy4=
Subject: Re: [PATCH] x86/IRQ: drop two unused variables
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <75d17df8-706b-08e5-b839-33ed1ce44bf3@suse.com>
Message-ID: <52caf9b8-d296-398e-81b4-4ec6868d778e@suse.com>
Date: Fri, 20 Nov 2020 10:12:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <75d17df8-706b-08e5-b839-33ed1ce44bf3@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 09:48, Jan Beulich wrote:
> @@ -1663,13 +1661,11 @@ int pirq_guest_bind(struct vcpu *v, stru

Argh, there's yet one more in this function. Will need v2.

Jan

>  static irq_guest_action_t *__pirq_guest_unbind(
>      struct domain *d, struct pirq *pirq, struct irq_desc *desc)
>  {
> -    unsigned int        irq;
>      irq_guest_action_t *action;
>      cpumask_t           cpu_eoi_map;
>      int                 i;
>  
>      action = (irq_guest_action_t *)desc->action;
> -    irq = desc - irq_desc;
>  
>      if ( unlikely(action == NULL) )
>      {
> 



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:13:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31806.62499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2To-0001Kj-Be; Fri, 20 Nov 2020 09:13:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31806.62499; Fri, 20 Nov 2020 09:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2To-0001Kc-8h; Fri, 20 Nov 2020 09:13:20 +0000
Received: by outflank-mailman (input) for mailman id 31806;
 Fri, 20 Nov 2020 09:13:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GrQD=E2=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kg2Tm-0001KT-QK
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:13:18 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7142d15-b95a-4b6b-8262-c42da1162098;
 Fri, 20 Nov 2020 09:13:17 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AK9DBok002911
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 20 Nov 2020 10:13:12 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 392B42E9CA8; Fri, 20 Nov 2020 10:13:06 +0100 (MET)
Date: Fri, 20 Nov 2020 10:13:06 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201120091306.GE1508@antioche.eu.org>
References: <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <5e637a72-085d-45b9-aa5c-01e138c81875@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5e637a72-085d-45b9-aa5c-01e138c81875@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 20 Nov 2020 10:13:12 +0100 (MET)

On Fri, Nov 20, 2020 at 09:54:42AM +0100, Jan Beulich wrote:
> [...]
> From the stack trace above the only possibility I could derive for
> now would be that we didn't run softirqs for a long time, but I don't
> think that's very realistic here. Otoh, Manuel, does the NMI watchdog
> work on that system? It certainly wouldn't hurt if you turned it on,
> just in case.

I just did; this doesn't change anything.
For the record, my boot params are now

menu=Boot Xen PVH:load /test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1,,0 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug dom0_vcpus_pin sync_console dom0_max_vcpus=1 watchdog=force

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:15:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31812.62512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2WG-0001UA-Pc; Fri, 20 Nov 2020 09:15:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31812.62512; Fri, 20 Nov 2020 09:15:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2WG-0001U3-MX; Fri, 20 Nov 2020 09:15:52 +0000
Received: by outflank-mailman (input) for mailman id 31812;
 Fri, 20 Nov 2020 09:15:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg2WF-0001Ty-3Z
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:15:51 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 927debf0-bede-4d9c-bc4d-e25a3a00d421;
 Fri, 20 Nov 2020 09:15:50 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 8D37C67373; Fri, 20 Nov 2020 10:15:47 +0100 (CET)
Date: Fri, 20 Nov 2020 10:15:46 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 15/20] block: merge struct block_device and struct
 hd_struct
Message-ID: <20201120091546.GE21715@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-16-hch@lst.de> <20201119143921.GX1981@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119143921.GX1981@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Nov 19, 2020 at 03:39:21PM +0100, Jan Kara wrote:
> This patch is kind of difficult to review due to the size of mostly
> mechanical changes mixed with not completely mechanical changes. Can we
> perhaps split out the mechanical bits? E.g. the rq->part => rq->bdev
> renaming is mechanical and notable part of the patch. Similarly the
> part->foo => part->bd_foo bits...

We'd end up with really weird patches that way.  Never mind that I'm not
even sure how we could mechanically do the renaming.

> 
> Also I'm kind of wondering: AFAIU the new lifetime rules, gendisk holds
> bdev reference and bdev is created on gendisk allocation so bdev lifetime is
> strictly larger than gendisk lifetime. But what now keeps bdev->bd_disk
> reference safe in presence device hot unplug? In most cases we are still
> protected by gendisk reference taken in __blkdev_get() but how about
> disk->lookup_sem and disk->flags dereferences before we actually grab the
> reference?

Good question.  I'll need to think about this a bit more.

> Also I find it rather non-obvious (although elegant ;) that bdev->bd_device
> rules the lifetime of gendisk. Can you perhaps explain this in the
> changelog and probably also add somewhere to source a documentation about
> the new lifetime rules?

Yes.

> > -struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
> > +static inline struct block_device *__bdget_disk(struct gendisk *disk,
> > +		int partno)
> > +{
> > +	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
> > +
> > +	if (unlikely(partno < 0 || partno >= ptbl->len))
> > +		return NULL;
> > +	return rcu_dereference(ptbl->part[partno]);
> > +}
> 
> I understand this is lower-level counterpart of bdget_disk() but it is
> confusing to me that this has 'bdget' in the name and returns no bdev
> reference. Can we call it like __bdev_from_disk() or something like that?

Sure.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:17:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:17:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31819.62527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Xf-0001cX-6l; Fri, 20 Nov 2020 09:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31819.62527; Fri, 20 Nov 2020 09:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2Xf-0001cQ-3j; Fri, 20 Nov 2020 09:17:19 +0000
Received: by outflank-mailman (input) for mailman id 31819;
 Fri, 20 Nov 2020 09:17:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg2Xd-0001br-KS
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:17:17 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91fef6ce-4e0f-4402-b7ee-1b445e355975;
 Fri, 20 Nov 2020 09:17:16 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id E4E0067373; Fri, 20 Nov 2020 10:17:14 +0100 (CET)
Date: Fri, 20 Nov 2020 10:17:14 +0100
From: Christoph Hellwig <hch@lst.de>
To: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 17/20] filemap: consistently use ->f_mapping over
 ->i_mapping
Message-ID: <20201120091714.GF21715@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-18-hch@lst.de> <20201119151316.GH29991@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201119151316.GH29991@casper.infradead.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Nov 19, 2020 at 03:13:16PM +0000, Matthew Wilcox wrote:
> On Wed, Nov 18, 2020 at 09:47:57AM +0100, Christoph Hellwig wrote:
> > @@ -2887,13 +2887,13 @@ EXPORT_SYMBOL(filemap_map_pages);
> >  vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
> >  {
> >  	struct page *page = vmf->page;
> > -	struct inode *inode = file_inode(vmf->vma->vm_file);
> > +	struct inode *inode = vmf->vma->vm_file->f_mapping->host;
> >  	vm_fault_t ret = VM_FAULT_LOCKED;
> >  
> >  	sb_start_pagefault(inode->i_sb);
> >  	file_update_time(vmf->vma->vm_file);
> >  	lock_page(page);
> > -	if (page->mapping != inode->i_mapping) {
> > +	if (page->mapping != vmf->vma->vm_file->f_mapping) {
> 
> Bit messy.  I'd do:
> 
> 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> 
> 	sb_start_pagefault(mapping->host->i_sb);
> 
> 	if (page->mapping != mapping)
> 
> 	sb_end_pagefault(mapping->host->i_sb);

Fine with me.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:28:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31827.62542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2i8-0002qb-AS; Fri, 20 Nov 2020 09:28:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31827.62542; Fri, 20 Nov 2020 09:28:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg2i8-0002qU-7K; Fri, 20 Nov 2020 09:28:08 +0000
Received: by outflank-mailman (input) for mailman id 31827;
 Fri, 20 Nov 2020 09:28:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GrQD=E2=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kg2i7-0002qP-1q
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:28:07 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 917c5e88-468e-49d2-b1e4-16f5232f9364;
 Fri, 20 Nov 2020 09:28:05 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AK9RxsO020134
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 20 Nov 2020 10:28:00 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id B84052E9CA8; Fri, 20 Nov 2020 10:27:54 +0100 (MET)
Date: Fri, 20 Nov 2020 10:27:54 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201120092754.GH1508@antioche.eu.org>
References: <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 20 Nov 2020 10:28:01 +0100 (MET)

On Fri, Nov 20, 2020 at 09:59:57AM +0100, Jan Beulich wrote:
> Well, anything coming through the LAPIC needs ack-ing (except for
> the spurious interrupt of course), or else ISR won't get updated
> and further interrupts at this or lower priority can't be serviced
> (delivered) anymore. This includes interrupts originally coming
> through the IO-APIC. But the same constraint / requirement exists
> on baremetal.

OK, so even if I didn't see where this happens, it is happening.
Is this what Xen uses as the ACK from the dom0 for an IO-APIC
interrupt, or is it something else (at the IO-APIC level)?
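[For context, the "ack" described above is, on baremetal xAPIC, a single write of zero to the local APIC's EOI register; that write clears the highest-priority set bit in the ISR, unblocking delivery of further interrupts at that or lower priority. A minimal illustrative sketch follows; the MMIO base and helper names are invented here for illustration, only the 0x0B0 register offset comes from the architecture:]

```c
#include <stdint.h>

/* Typical default physical base of the LAPIC MMIO page (xAPIC mode). */
#define LAPIC_DEFAULT_BASE 0xFEE00000UL
/* Offset of the EOI register within the LAPIC MMIO page. */
#define LAPIC_EOI          0x0B0

/*
 * Acknowledge the interrupt currently being serviced: writing any
 * value (conventionally 0) to EOI clears the highest-priority ISR
 * bit, so lower- or equal-priority vectors can be delivered again.
 */
static inline void lapic_eoi(volatile uint32_t *lapic_mmio)
{
    lapic_mmio[LAPIC_EOI / 4] = 0;
}
```

[Interrupts routed through the IO-APIC but delivered via the LAPIC need this same EOI; level-triggered IO-APIC lines may additionally require a directed EOI at the IO-APIC itself.]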

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31839.62620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32V-0005Jr-CH; Fri, 20 Nov 2020 09:49:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31839.62620; Fri, 20 Nov 2020 09:49:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32V-0005JR-10; Fri, 20 Nov 2020 09:49:11 +0000
Received: by outflank-mailman (input) for mailman id 31839;
 Fri, 20 Nov 2020 09:49:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32U-0005IB-8N
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32U-0002bG-5k; Fri, 20 Nov 2020 09:49:10 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32T-0003d1-U8; Fri, 20 Nov 2020 09:49:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32U-0005IB-8N
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:10 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=WvuwwpXtHIjcO9NlLF0rzPVaiYPqjWXaNgyK1WMlZ8U=; b=HTeXAkg4QeahXXWEcacrgWgUKq
	7SgIVGWC5T/JtLEyEBCLKtYlFzrQcDejLNm93hzt+2X5P6rs43IgAcOVZphuVSS/GCqIyRPBQVo9F
	0HXT2qaB8HG/j2UYW8aTEz5T7fcrrOZI83Q1o+xgF1AhmNfLI7S8Tx1KkAvubFtnsQ1o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32U-0002bG-5k; Fri, 20 Nov 2020 09:49:10 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32T-0003d1-U8; Fri, 20 Nov 2020 09:49:10 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 06/12] viridian: use softirq batching in hvcall_ipi()
Date: Fri, 20 Nov 2020 09:48:54 +0000
Message-Id: <20201120094900.1489-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of that
mechanism when there are multiple bits set in the hypercall_vpmask.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Don't add the 'nr' field to struct hypercall_vpmask and use
   bitmap_weight() instead
---
 xen/arch/x86/hvm/viridian/viridian.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index d8d8ecc89c80..d6f47b28c1e6 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -11,6 +11,7 @@
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
+#include <xen/softirq.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
@@ -570,6 +571,11 @@ static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp
 	      (vp) < HVM_MAX_VCPUS; \
 	      (vp) = vpmask_next(vpmask, vp) )
 
+static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
+{
+    return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -653,10 +659,17 @@ static int hvcall_flush(union hypercall_input *input,
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
+    unsigned int nr = vpmask_nr(vpmask);
     unsigned int vp;
 
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_begin();
+
     for_each_vp ( vpmask, vp )
         vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_finish();
 }
 
 static int hvcall_ipi(union hypercall_input *input,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31835.62575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32R-0005CX-18; Fri, 20 Nov 2020 09:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31835.62575; Fri, 20 Nov 2020 09:49:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Q-0005CB-Q3; Fri, 20 Nov 2020 09:49:06 +0000
Received: by outflank-mailman (input) for mailman id 31835;
 Fri, 20 Nov 2020 09:49:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32P-0005Ag-Vl
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32P-0002ag-Lu; Fri, 20 Nov 2020 09:49:05 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32P-0003d1-Ci; Fri, 20 Nov 2020 09:49:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32P-0005Ag-Vl
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=qhrQ5SA3XRDCClu4Afoub3CK4rPcgTdFaJdSe0ftLhc=; b=JhkFK1bPgMAptBhogqATsxuC/C
	W9E2A6LrTtyW18WXyMHJOiXZEA4/ix0Su1VSaCQjIlZogx6ne/4oD2A1X9mbQ0Wpp8XH04+6TlCeO
	UqZLL4MUY22CBNf+wx7vgKDCe1TtvpJCO3+oEpHxXmOxkwXmEBGozwTYzDE3G9t+eTcs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32P-0002ag-Lu; Fri, 20 Nov 2020 09:49:05 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32P-0003d1-Ci; Fri, 20 Nov 2020 09:49:05 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 02/12] viridian: move flush hypercall implementation into separate function
Date: Fri, 20 Nov 2020 09:48:50 +0000
Message-Id: <20201120094900.1489-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.

The new function returns Xen errno values which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ++++++++++++++++-----------
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 54035f75cb1a..15b1c96fe13b 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
     return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+    uint64_t raw;
+    struct {
+        uint16_t call_code;
+        uint16_t fast:1;
+        uint16_t rsvd1:15;
+        uint16_t rep_count:12;
+        uint16_t rsvd2:4;
+        uint16_t rep_start:12;
+        uint16_t rsvd3:4;
+    };
+};
+
+union hypercall_output {
+    uint64_t raw;
+    struct {
+        uint16_t result;
+        uint16_t rsvd1;
+        uint32_t rep_complete:12;
+        uint32_t rsvd2:20;
+    };
+};
+
+static int hvcall_flush(union hypercall_input *input,
+                        union hypercall_output *output,
+                        unsigned long input_params_gpa,
+                        unsigned long output_params_gpa)
+{
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        uint64_t vcpu_mask;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    /*
+     * It is not clear from the spec. if we are supposed to
+     * include current virtual CPU in the set or not in this case,
+     * so err on the safe side.
+     */
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        input_params.vcpu_mask = ~0ul;
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     uint16_t status = HV_STATUS_SUCCESS;
-
-    union hypercall_input {
-        uint64_t raw;
-        struct {
-            uint16_t call_code;
-            uint16_t fast:1;
-            uint16_t rsvd1:15;
-            uint16_t rep_count:12;
-            uint16_t rsvd2:4;
-            uint16_t rep_start:12;
-            uint16_t rsvd3:4;
-        };
-    } input;
-
-    union hypercall_output {
-        uint64_t raw;
-        struct {
-            uint16_t result;
-            uint16_t rsvd1;
-            uint32_t rep_complete:12;
-            uint32_t rsvd2:20;
-        };
-    } output = { 0 };
+    union hypercall_input input;
+    union hypercall_output output = {};
 
     ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
     {
-        struct {
-            uint64_t address_space;
-            uint64_t flags;
-            uint64_t vcpu_mask;
-        } input_params;
+        int rc = hvcall_flush(&input, &output, input_params_gpa,
+                              output_params_gpa);
 
-        /* These hypercalls should never use the fast-call convention. */
-        status = HV_STATUS_INVALID_PARAMETER;
-        if ( input.fast )
+        switch ( rc )
+        {
+        case 0:
             break;
 
-        /* Get input parameters. */
-        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) !=
-             HVMTRANS_okay )
-            break;
-
-        /*
-         * It is not clear from the spec. if we are supposed to
-         * include current virtual CPU in the set or not in this case,
-         * so err on the safe side.
-         */
-        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-            input_params.vcpu_mask = ~0ul;
-
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        case -ERESTART:
             return HVM_HCALL_preempted;
 
-        output.rep_complete = input.rep_count;
+        default:
+            ASSERT_UNREACHABLE();
+            /* Fallthrough */
+        case -EINVAL:
+            status = HV_STATUS_INVALID_PARAMETER;
+            break;
+        }
 
-        status = HV_STATUS_SUCCESS;
         break;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31837.62602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32T-0005GK-K8; Fri, 20 Nov 2020 09:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31837.62602; Fri, 20 Nov 2020 09:49:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32T-0005G8-ER; Fri, 20 Nov 2020 09:49:09 +0000
Received: by outflank-mailman (input) for mailman id 31837;
 Fri, 20 Nov 2020 09:49:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32S-0005Dl-8Q
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32R-0002ax-TV; Fri, 20 Nov 2020 09:49:07 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32R-0003d1-LL; Fri, 20 Nov 2020 09:49:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32S-0005Dl-8Q
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=4IjgffEuEnKUzkPcG7Uko4N1YazvYI/Qlj97s2cqBCg=; b=S/HE5+4x9GMOk4dbcXx475CQip
	048rsSSEQodcBZgPP7ozR4YFkR4C9msBDMUBZ35LzAQ6dH+uY2Y4JQBD6PYXClpCIc7nqPQuTsqN7
	yOaigimt5tjyDY1as8zr2wZ5SWhOGo7tbIJ3Soq1DcibVSp9fsAnQu/0qqVtejpVObIU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32R-0002ax-TV; Fri, 20 Nov 2020 09:49:07 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32R-0003d1-LL; Fri, 20 Nov 2020 09:49:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 04/12] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
Date: Fri, 20 Nov 2020 09:48:52 +0000
Message-Id: <20201120094900.1489-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and make use of them in hvcall_flush()/need_flush().

Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Modified vpmask_set() to take a base 'vp' and a 64-bit 'mask', still
   looping over the mask as bitmap.h does not provide a primitive for copying
   one mask into another at an offset
 - Added ASSERTions to verify that we don't attempt to set or test bits
   beyond the limit of the map
---
 xen/arch/x86/hvm/viridian/viridian.c | 58 ++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index d9864cdc0b7d..334d8527ff59 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -507,15 +507,59 @@ void viridian_domain_deinit(struct domain *d)
     XFREE(d->arch.hvm.viridian);
 }
 
+struct hypercall_vpmask {
+    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
+
+static void vpmask_empty(struct hypercall_vpmask *vpmask)
+{
+    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp,
+                       uint64_t mask)
+{
+    unsigned int count = sizeof(mask) * 8;
+
+    while ( count-- )
+    {
+        if ( !mask )
+            break;
+
+        if ( mask & 1 )
+        {
+            ASSERT(vp < HVM_MAX_VCPUS);
+            __set_bit(vp, vpmask->mask);
+        }
+
+        mask >>= 1;
+        vp++;
+    }
+}
+
+static void vpmask_fill(struct hypercall_vpmask *vpmask)
+{
+    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static bool vpmask_test(const struct hypercall_vpmask *vpmask,
+                        unsigned int vp)
+{
+    ASSERT(vp < HVM_MAX_VCPUS);
+    return test_bit(vp, vpmask->mask);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
  */
 static bool need_flush(void *ctxt, struct vcpu *v)
 {
-    uint64_t vcpu_mask = *(uint64_t *)ctxt;
+    struct hypercall_vpmask *vpmask = ctxt;
 
-    return vcpu_mask & (1ul << v->vcpu_id);
+    return vpmask_test(vpmask, v->vcpu_id);
 }
 
 union hypercall_input {
@@ -546,6 +590,7 @@ static int hvcall_flush(union hypercall_input *input,
                         unsigned long input_params_gpa,
                         unsigned long output_params_gpa)
 {
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     struct {
         uint64_t address_space;
         uint64_t flags;
@@ -567,13 +612,18 @@ static int hvcall_flush(union hypercall_input *input,
      * so err on the safe side.
      */
     if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-        input_params.vcpu_mask = ~0ul;
+        vpmask_fill(vpmask);
+    else
+    {
+        vpmask_empty(vpmask);
+        vpmask_set(vpmask, 0, input_params.vcpu_mask);
+    }
 
     /*
      * A false return means that another vcpu is currently trying
      * a similar operation, so back off.
      */
-    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+    if ( !paging_flush_tlb(need_flush, vpmask) )
         return -ERESTART;
 
     output->rep_complete = input->rep_count;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31833.62554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Q-0005Ai-4K; Fri, 20 Nov 2020 09:49:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31833.62554; Fri, 20 Nov 2020 09:49:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32P-0005Aa-VA; Fri, 20 Nov 2020 09:49:05 +0000
Received: by outflank-mailman (input) for mailman id 31833;
 Fri, 20 Nov 2020 09:49:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32O-0005AR-Tw
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32O-0002aX-Gk; Fri, 20 Nov 2020 09:49:04 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32O-0003d1-8I; Fri, 20 Nov 2020 09:49:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32O-0005AR-Tw
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=oFMSo+GJNEgaaZ5yNsIGpJUzjgKARczCMDIFteDVII8=; b=QOpyNdPhT/VqiSD/ey9qO6WGll
	xGgkiEFnZHs7lcBx8sj5zlk+M7aOizBTEaP3jwd9HgQQ3vpVVQnrdBItFkv8kSeF9MFvrZubFIgPu
	ffbY+8Rj04o1fgEsi0syfeZWjSRf9J5dGcZTjQnxuZAcl/plXyRRXUhHpzDBRDSHLOBE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32O-0002aX-Gk; Fri, 20 Nov 2020 09:49:04 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32O-0003d1-8I; Fri, 20 Nov 2020 09:49:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 01/12] viridian: don't blindly write to 32-bit registers if 'mode' is invalid
Date: Fri, 20 Nov 2020 09:48:49 +0000
Message-Id: <20201120094900.1489-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

If hvm_guest_x86_mode() returns something other than 8 or 4 then
viridian_hypercall() will return immediately but, on the way out, will write
back status as if 'mode' was 4. This patch simply makes it leave the registers
alone.

NOTE: The formatting of the 'out' label and the switch statement are also
      adjusted as per CODING_STYLE.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - New in v2
---
 xen/arch/x86/hvm/viridian/viridian.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..54035f75cb1a 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -692,13 +692,14 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         break;
     }
 
-out:
+ out:
     output.result = status;
     switch (mode) {
     case 8:
         regs->rax = output.raw;
         break;
-    default:
+
+    case 4:
         regs->rdx = output.raw >> 32;
         regs->rax = (uint32_t)output.raw;
         break;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31842.62662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32a-0005Um-1O; Fri, 20 Nov 2020 09:49:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31842.62662; Fri, 20 Nov 2020 09:49:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Z-0005UJ-Lm; Fri, 20 Nov 2020 09:49:15 +0000
Received: by outflank-mailman (input) for mailman id 31842;
 Fri, 20 Nov 2020 09:49:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32Y-0005Px-3A
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32X-0002bc-SN; Fri, 20 Nov 2020 09:49:13 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32X-0003d1-Ke; Fri, 20 Nov 2020 09:49:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32Y-0005Px-3A
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=rH60KaFb4nQd7ulI+9/biFMC6TXsBA8iRybF1TD4Eko=; b=3q1KexrHeX/aaEAH76gfFPGqyd
	zJ20hL+j4OziLbxE1NanvJ79ZWJr8lyYsP3zJMD3ujeMtDzvOyn5zPXnF6X1AA2jgtSh9b4FiOhpE
	W9HeuAtNoWGa3KXHMlL2n4mgh3HXLYfhd58kol1TVO0w99tE9ngrZ1CJ+4n1eWgdOUn4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32X-0002bc-SN; Fri, 20 Nov 2020 09:49:13 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32X-0003d1-Ke; Fri, 20 Nov 2020 09:49:13 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 09/12] viridian: add ExProcessorMasks variant of the IPI hypercall
Date: Fri, 20 Nov 2020 09:48:57 +0000
Message-Id: <20201120094900.1489-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).

NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
      not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
      'ExProcessorMasks' is not yet advertised via CPUID.
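
The sparse-4K set handling this hypercall relies on can be sketched in plain C. This is an illustrative sketch only, not the Xen code: the function and the flat `selected[]` representation are made up for the example, but the bank walk mirrors what hv_vpset_to_vpmask() does with 'valid_bank_mask' and 'bank_contents'.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch only (not the Xen code): expand a sparse-4K
 * 'Virtual Processor Set' into a flat selection bitmap. Each set bit
 * in valid_bank_mask contributes one 64-bit bank covering 64
 * consecutive VPs. All names and the MAX_VPS bound are invented for
 * this example.
 */
#define VPS_PER_BANK 64
#define MAX_VPS 128

static void expand_vpset(uint64_t valid_bank_mask,
                         const uint64_t *bank_contents,
                         bool selected[MAX_VPS])
{
    unsigned int vp = 0, bank = 0;
    uint64_t bm;

    for ( bm = valid_bank_mask; bm; vp += VPS_PER_BANK, bm >>= 1 )
    {
        if ( bm & 1 )
        {
            uint64_t mask = bank_contents[bank++];
            unsigned int i;

            for ( i = 0; i < VPS_PER_BANK; i++ )
                if ( vp + i < MAX_VPS && (mask & (1ull << i)) )
                    selected[vp + i] = true;
        }
    }
}
```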

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 74 ++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index e736c0739da0..74227fecbcd4 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -860,6 +860,75 @@ static int hvcall_ipi(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi_ex(union hypercall_input *input,
+                         union hypercall_output *output,
+                         unsigned long input_params_gpa,
+                         unsigned long output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint32_t vector;
+        uint8_t target_vtl;
+        uint8_t reserved_zero[3];
+        struct hv_vpset set;
+    } input_params;
+    struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+    struct hv_vpset *set = &vpset->set;
+    size_t size;
+    int rc;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.target_vtl ||
+         input_params.reserved_zero[0] ||
+         input_params.reserved_zero[1] ||
+         input_params.reserved_zero[2] )
+        return -EINVAL;
+
+    if ( input_params.vector < 0x10 || input_params.vector > 0xff )
+        return -EINVAL;
+
+    *set = input_params.set;
+    if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+    {
+        unsigned long offset = offsetof(typeof(input_params),
+                                        set.bank_contents);
+
+        size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+        if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+             sizeof(*vpset) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+
+        if ( hvm_copy_from_guest_phys(&set->bank_contents,
+                                      input_params_gpa + offset,
+                                      size) != HVMTRANS_okay )
+            return -EINVAL;
+
+        size += sizeof(*set);
+    }
+    else
+        size = sizeof(*set);
+
+    rc = hv_vpset_to_vpmask(set, vpmask);
+    if ( rc )
+        return rc;
+
+    send_ipi(vpmask, input_params.vector);
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -916,6 +985,11 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                         output_params_gpa);
         break;
 
+    case HVCALL_SEND_IPI_EX:
+        rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
+                           output_params_gpa);
+        break;
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31840.62638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32X-0005PV-Tm; Fri, 20 Nov 2020 09:49:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31840.62638; Fri, 20 Nov 2020 09:49:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32X-0005PI-OC; Fri, 20 Nov 2020 09:49:13 +0000
Received: by outflank-mailman (input) for mailman id 31840;
 Fri, 20 Nov 2020 09:49:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32W-0005Mr-Dk
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32V-0002bN-Jp; Fri, 20 Nov 2020 09:49:11 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32V-0003d1-Bu; Fri, 20 Nov 2020 09:49:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32W-0005Mr-Dk
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:12 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ur98iexNlxW3wPnCoGH5R8tJsU4anFW0cgN61u59s58=; b=EbmX6KfTkfVSVhcM7nRxkqW8q1
	cP83sNOja9lntToF+lbfAB3WxN/n4OG8WM/rBZz55JBZvx3uqvq/S5dgGVU4ZtpzS/WERLLDgFUTM
	F8kf/CA4Gm7mPnxLhbcE8GQQkycbRgDjWAVmPv8yuB5bKBv3k5gDPF5UFcV6VBHOZ5tg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32V-0002bN-Jp; Fri, 20 Nov 2020 09:49:11 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32V-0003d1-Bu; Fri, 20 Nov 2020 09:49:11 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 07/12] xen/include: import sizeof_field() macro from Linux stddef.h
Date: Fri, 20 Nov 2020 09:48:55 +0000
Message-Id: <20201120094900.1489-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Co-locate it with the definition of offsetof() (since this is also in stddef.h
in the Linux kernel source). This macro will be needed in a subsequent patch.
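
A quick usage sketch of the macro being imported (the struct here is a made-up example, not a Xen type):

```c
#include <assert.h>
#include <stdint.h>

/* The macro as imported from Linux's stddef.h. */
#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))

/* Example struct, invented purely for illustration. */
struct example {
    uint32_t a;
    uint64_t b[4];
};
```

Because the operand of sizeof is not evaluated, dereferencing the null pointer here is well defined; the macro yields a compile-time constant.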

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
---
 xen/include/xen/compiler.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c0e0ee9f27be..676c6ea1b0a0 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -76,6 +76,14 @@
 
 #define offsetof(a,b) __builtin_offsetof(a,b)
 
+/**
+ * sizeof_field(TYPE, MEMBER)
+ *
+ * @TYPE: The structure containing the field of interest
+ * @MEMBER: The field to return the size of
+ */
+#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
+
 #if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
 #define alignof __alignof__
 #endif
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31841.62645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Y-0005QZ-Mc; Fri, 20 Nov 2020 09:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31841.62645; Fri, 20 Nov 2020 09:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Y-0005Q7-5U; Fri, 20 Nov 2020 09:49:14 +0000
Received: by outflank-mailman (input) for mailman id 31841;
 Fri, 20 Nov 2020 09:49:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32X-0005OG-05
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32W-0002bU-PR; Fri, 20 Nov 2020 09:49:12 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32W-0003d1-GC; Fri, 20 Nov 2020 09:49:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32X-0005OG-05
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:13 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=3DPdSem6nkuIEJJrD8PvA4dT/KiTua7xMj10EiGKfcs=; b=KJU6mAPnc8ruDs1uB7bGJPrkay
	jzetR3GNO2fosrikXijWv7Z0epPXuyVcWLd/g0u61LnKMGtryEEY5GkPNNC+8tcGXuT52gaj85VOv
	HQkICSlJXQ5WVgCxxMGNzFWG2R0bU1DQ0M9X5P7FQsQQlVqOSJvSdzcWAmJjUlrExa74=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32W-0002bU-PR; Fri, 20 Nov 2020 09:49:12 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32W-0003d1-GC; Fri, 20 Nov 2020 09:49:12 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 08/12] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Fri, 20 Nov 2020 09:48:56 +0000
Message-Id: <20201120094900.1489-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so it can be copied from
guest memory) and parse it into hypercall_vpmask (respectively).

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.
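
The size computation the copy depends on can be sketched as follows. This is an illustrative mirror of HV_VPSET_SIZE()/hv_vpset_nr_banks() from the patch, not the Xen source: the struct layout is simplified and the names are invented.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Simplified, illustrative header for a sparse-format VP set; in the
 * guest, bank_contents[] (one uint64_t per set bit in valid_bank_mask)
 * immediately follows.
 */
struct vpset_header {
    uint64_t format;
    uint64_t valid_bank_mask;
};

/* Mirrors hv_vpset_nr_banks(): one bank per set bit in the mask. */
static unsigned int nr_banks(uint64_t valid_bank_mask)
{
    return (unsigned int)__builtin_popcountll(valid_bank_mask);
}

/*
 * Mirrors HV_VPSET_SIZE(): how many bytes of guest memory the whole
 * set occupies, so the handler knows how much to copy.
 */
static size_t vpset_size(const struct vpset_header *h)
{
    return sizeof(*h) + nr_banks(h->valid_bank_mask) * sizeof(uint64_t);
}
```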

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Add helper macros to define mask and struct sizes
 - Use a union to determine the size of 'hypercall_vpset'
 - Use hweight64() in hv_vpset_nr_banks()
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 142 +++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index d6f47b28c1e6..e736c0739da0 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
     return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
 }
 
+#define HV_VPSET_BANK_SIZE \
+    sizeof_field(struct hv_vpset, bank_contents[0])
+
+#define HV_VPSET_SIZE(banks)   \
+    (sizeof(struct hv_vpset) + ((banks) * HV_VPSET_BANK_SIZE))
+
+#define HV_VPSET_MAX_BANKS \
+    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
+
+struct hypercall_vpset {
+    union {
+        struct hv_vpset set;
+        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
+    };
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    return hweight64(vpset->valid_bank_mask);
+}
+
+static int hv_vpset_to_vpmask(struct hv_vpset *set,
+                              struct hypercall_vpmask *vpmask)
+{
+#define NR_VPS_PER_BANK (HV_VPSET_BANK_SIZE * 8)
+
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int vp, bank = 0;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0, bank_mask = set->valid_bank_mask;
+              bank_mask;
+              vp += NR_VPS_PER_BANK, bank_mask >>= 1 )
+        {
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+
+                vpmask_set(vpmask, vp, mask);
+                bank++;
+            }
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+
+#undef NR_VPS_PER_BANK
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(union hypercall_input *input,
+                           union hypercall_output *output,
+                           unsigned long input_params_gpa,
+                           unsigned long output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+            if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+                 sizeof(*vpset) )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( hvm_copy_from_guest_phys(&set->bank_contents[0],
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay )
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -769,6 +905,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31836.62589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32S-0005Di-8c; Fri, 20 Nov 2020 09:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31836.62589; Fri, 20 Nov 2020 09:49:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32S-0005Db-5R; Fri, 20 Nov 2020 09:49:08 +0000
Received: by outflank-mailman (input) for mailman id 31836;
 Fri, 20 Nov 2020 09:49:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32R-0005D3-6c
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32Q-0002ao-QH; Fri, 20 Nov 2020 09:49:06 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32Q-0003d1-HA; Fri, 20 Nov 2020 09:49:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32R-0005D3-6c
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=UzZ0Uh6qAAbQd6CRJaicCW7T27qr8iutib6QtzQf3Qs=; b=Z83ArtHr/q4BzPJ+73B+N5zptD
	mpZxigXEebDZ+spsfZ2M6t/tAQ8JJfEXdqAXVNzW0uYoNCycwGG9Th3xocX1XhBQARFOdVHw9wYY9
	f+Kjkn9BZdl11jVjv1hAP8ZB3G8eWLyPL6+V2gNZ+4ImKqtqaFdt1MG928+3k5U6/RTE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32Q-0002ao-QH; Fri, 20 Nov 2020 09:49:06 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg32Q-0003d1-HA; Fri, 20 Nov 2020 09:49:06 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 03/12] viridian: move IPI hypercall implementation into separate function
Date: Fri, 20 Nov 2020 09:48:51 +0000
Message-Id: <20201120094900.1489-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.

The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.

NOTE: The formatting of the switch statement at the top of
      viridian_hypercall() is also adjusted as per CODING_STYLE.
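
The generalized translation at the tail of the function can be sketched like this (an illustrative mirror, not the Xen source; the HV_STATUS values are those defined by the TLFS):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hyper-V status codes, per the TLFS. */
#define HV_STATUS_SUCCESS                0x0000
#define HV_STATUS_INVALID_HYPERCALL_CODE 0x0002
#define HV_STATUS_INVALID_PARAMETER      0x0005

/*
 * Illustrative mirror of the errno -> HV_STATUS switch the patch adds.
 * In the real code -ERESTART returns HVM_HCALL_preempted before this
 * translation is reached, so it is omitted here.
 */
static uint16_t errno_to_hv_status(int rc)
{
    switch ( rc )
    {
    case 0:
        return HV_STATUS_SUCCESS;

    case -EOPNOTSUPP:
        return HV_STATUS_INVALID_HYPERCALL_CODE;

    case -EINVAL:
    default: /* anything unexpected is treated as a bad parameter */
        return HV_STATUS_INVALID_PARAMETER;
    }
}
```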

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Different formatting adjustment
---
 xen/arch/x86/hvm/viridian/viridian.c | 168 ++++++++++++++-------------
 1 file changed, 87 insertions(+), 81 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 15b1c96fe13b..d9864cdc0b7d 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -581,13 +581,72 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi(union hypercall_input *input,
+                      union hypercall_output *output,
+                      unsigned long input_params_gpa,
+                      unsigned long output_params_gpa)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    uint32_t vector;
+    uint64_t vcpu_mask;
+
+    /* Get input parameters. */
+    if ( input->fast )
+    {
+        if ( input_params_gpa >> 32 )
+            return -EINVAL;
+
+        vector = input_params_gpa;
+        vcpu_mask = output_params_gpa;
+    }
+    else
+    {
+        struct {
+            uint32_t vector;
+            uint8_t target_vtl;
+            uint8_t reserved_zero[3];
+            uint64_t vcpu_mask;
+        } input_params;
+
+        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                      sizeof(input_params)) != HVMTRANS_okay )
+            return -EINVAL;
+
+        if ( input_params.target_vtl ||
+             input_params.reserved_zero[0] ||
+             input_params.reserved_zero[1] ||
+             input_params.reserved_zero[2] )
+            return -EINVAL;
+
+        vector = input_params.vector;
+        vcpu_mask = input_params.vcpu_mask;
+    }
+
+    if ( vector < 0x10 || vector > 0xff )
+        return -EINVAL;
+
+    for_each_vcpu ( currd, v )
+    {
+        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+            return -EINVAL;
+
+        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+            continue;
+
+        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+    }
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
-    uint16_t status = HV_STATUS_SUCCESS;
+    int rc = 0;
     union hypercall_input input;
     union hypercall_output output = {};
 
@@ -600,11 +659,13 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         input_params_gpa = regs->rdx;
         output_params_gpa = regs->r8;
         break;
+
     case 4:
         input.raw = (regs->rdx << 32) | regs->eax;
         input_params_gpa = (regs->rbx << 32) | regs->ecx;
         output_params_gpa = (regs->rdi << 32) | regs->esi;
         break;
+
     default:
         goto out;
     }
@@ -616,92 +677,18 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * See section 14.5.1 of the specification.
          */
         do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
-        status = HV_STATUS_SUCCESS;
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-    {
-        int rc = hvcall_flush(&input, &output, input_params_gpa,
-                              output_params_gpa);
-
-        switch ( rc )
-        {
-        case 0:
-            break;
-
-        case -ERESTART:
-            return HVM_HCALL_preempted;
-
-        default:
-            ASSERT_UNREACHABLE();
-            /* Fallthrough */
-        case -EINVAL:
-            status = HV_STATUS_INVALID_PARAMETER;
-            break;
-        }
-
+        rc = hvcall_flush(&input, &output, input_params_gpa,
+                          output_params_gpa);
         break;
-    }
 
     case HVCALL_SEND_IPI:
-    {
-        struct vcpu *v;
-        uint32_t vector;
-        uint64_t vcpu_mask;
-
-        status = HV_STATUS_INVALID_PARAMETER;
-
-        /* Get input parameters. */
-        if ( input.fast )
-        {
-            if ( input_params_gpa >> 32 )
-                break;
-
-            vector = input_params_gpa;
-            vcpu_mask = output_params_gpa;
-        }
-        else
-        {
-            struct {
-                uint32_t vector;
-                uint8_t target_vtl;
-                uint8_t reserved_zero[3];
-                uint64_t vcpu_mask;
-            } input_params;
-
-            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                          sizeof(input_params)) !=
-                 HVMTRANS_okay )
-                break;
-
-            if ( input_params.target_vtl ||
-                 input_params.reserved_zero[0] ||
-                 input_params.reserved_zero[1] ||
-                 input_params.reserved_zero[2] )
-                break;
-
-            vector = input_params.vector;
-            vcpu_mask = input_params.vcpu_mask;
-        }
-
-        if ( vector < 0x10 || vector > 0xff )
-            break;
-
-        for_each_vcpu ( currd, v )
-        {
-            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-                break;
-
-            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-                continue;
-
-            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-        }
-
-        status = HV_STATUS_SUCCESS;
+        rc = hvcall_ipi(&input, &output, input_params_gpa,
+                        output_params_gpa);
         break;
-    }
 
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
@@ -714,12 +701,31 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * Given that return a status of 'invalid code' has not so far
          * caused any problems it's not worth logging.
          */
-        status = HV_STATUS_INVALID_HYPERCALL_CODE;
+        rc = -EOPNOTSUPP;
         break;
     }
 
  out:
-    output.result = status;
+    switch ( rc )
+    {
+    case 0:
+        break;
+
+    case -ERESTART:
+        return HVM_HCALL_preempted;
+
+    case -EOPNOTSUPP:
+        output.result = HV_STATUS_INVALID_HYPERCALL_CODE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
+        output.result = HV_STATUS_INVALID_PARAMETER;
+        break;
+    }
+
     switch (mode) {
     case 8:
         regs->rax = output.raw;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31834.62560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Q-0005BD-E9; Fri, 20 Nov 2020 09:49:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31834.62560; Fri, 20 Nov 2020 09:49:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32Q-0005Ax-6m; Fri, 20 Nov 2020 09:49:06 +0000
Received: by outflank-mailman (input) for mailman id 31834;
 Fri, 20 Nov 2020 09:49:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32O-0005AQ-Se
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32N-0002aT-F5; Fri, 20 Nov 2020 09:49:03 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32N-0003d1-3k; Fri, 20 Nov 2020 09:49:03 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=Mj4u5ITXwce574pXi1ANbw3qWtzS9xrY/6iHBRtpIyQ=; b=WWI1ucgvZ2HYijG6O8TgKHrHVd
	0zF5rrxTRrQ3T6HEra7Es5aEB+e6RuqgYRm3xmPMTlPxR/7g/gSsICeRFkDh48nP1guhhpsmiMK5/
	k44MAWmusNsYgO9rVDMqYiXclaS03jaWFC7/KjV965B8uaCv6gQ89ZeXVVxhmB7j0US4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 00/12] viridian: add support for ExProcessorMasks
Date: Fri, 20 Nov 2020 09:48:48 +0000
Message-Id: <20201120094900.1489-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (12):
  viridian: don't blindly write to 32-bit registers if 'mode' is invalid
  viridian: move flush hypercall implementation into separate function
  viridian: move IPI hypercall implementation into separate function
  viridian: introduce a per-cpu hypercall_vpmask and accessor
    functions...
  viridian: use hypercall_vpmask in hvcall_ipi()
  viridian: use softirq batching in hvcall_ipi()
  xen/include: import sizeof_field() macro from Linux stddef.h
  viridian: add ExProcessorMasks variants of the flush hypercalls
  viridian: add ExProcessorMasks variant of the IPI hypercall
  viridian: log initial invocation of each type of hypercall
  viridian: add a new '_HVMPV_ex_processor_masks' bit into
    HVM_PARAM_VIRIDIAN...
  xl / libxl: add 'ex_processor_mask' into
    'libxl_viridian_enlightenment'

 docs/man/xl.cfg.5.pod.in             |   8 +
 tools/include/libxl.h                |   7 +
 tools/libs/light/libxl_types.idl     |   1 +
 tools/libs/light/libxl_x86.c         |   3 +
 xen/arch/x86/hvm/viridian/viridian.c | 601 +++++++++++++++++++++------
 xen/include/asm-x86/hvm/viridian.h   |  10 +
 xen/include/public/hvm/params.h      |   7 +-
 xen/include/xen/compiler.h           |   8 +
 8 files changed, 522 insertions(+), 123 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 09:49:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 09:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31838.62614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32U-0005Ix-SY; Fri, 20 Nov 2020 09:49:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31838.62614; Fri, 20 Nov 2020 09:49:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg32U-0005In-Ny; Fri, 20 Nov 2020 09:49:10 +0000
Received: by outflank-mailman (input) for mailman id 31838;
 Fri, 20 Nov 2020 09:49:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg32T-0005Fv-82
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 09:49:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32T-0002b7-1R; Fri, 20 Nov 2020 09:49:09 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32S-0003d1-Pk; Fri, 20 Nov 2020 09:49:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ie9PO+o0rvIAgtu6WNO10yIOs6iLnaU4in65atV0VQo=; b=gnp2dlZA0e7aR/3BvFD3stzOWH
	A1BZx8iZK2Y4sf4HB69F3sRKG/RPFfATyGo4WyT/uDS4EznqJNSrgvjAeFem6DKqMrkIaySJmAkxK
	PoL1J7qBlrXVNZM1HzDbxVv7qFbxvNRjWxgM6/pp+TzKiRH/QvBiMYS9Cv3G+eeZ6zqs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 05/12] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Fri, 20 Nov 2020 09:48:53 +0000
Message-Id: <20201120094900.1489-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Drop the 'vp' loop now that vpmask_set() will do it internally
---
 xen/arch/x86/hvm/viridian/viridian.c | 43 +++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 334d8527ff59..d8d8ecc89c80 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -551,6 +551,25 @@ static bool vpmask_test(const struct hypercall_vpmask *vpmask,
     return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
+{
+    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+    /*
+     * If vp + 1 > HVM_MAX_VCPUS then find_next_bit() will return
+     * HVM_MAX_VCPUS, ensuring the for_each_vp ( ... ) loop terminates.
+     */
+    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+	for ( (vp) = vpmask_first(vpmask); \
+	      (vp) < HVM_MAX_VCPUS; \
+	      (vp) = vpmask_next(vpmask, vp) )
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -631,13 +650,21 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+    struct domain *currd = current->domain;
+    unsigned int vp;
+
+    for_each_vp ( vpmask, vp )
+        vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(union hypercall_input *input,
                       union hypercall_output *output,
                       unsigned long input_params_gpa,
                       unsigned long output_params_gpa)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     uint32_t vector;
     uint64_t vcpu_mask;
 
@@ -676,16 +703,10 @@ static int hvcall_ipi(union hypercall_input *input,
     if ( vector < 0x10 || vector > 0xff )
         return -EINVAL;
 
-    for_each_vcpu ( currd, v )
-    {
-        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-            return -EINVAL;
+    vpmask_empty(vpmask);
+    vpmask_set(vpmask, 0, vcpu_mask);
 
-        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-            continue;
-
-        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-    }
+    send_ipi(vpmask, vector);
 
     return 0;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:00:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31899.62674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3D5-0008Ru-Vp; Fri, 20 Nov 2020 10:00:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31899.62674; Fri, 20 Nov 2020 10:00:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3D5-0008Rd-SS; Fri, 20 Nov 2020 10:00:07 +0000
Received: by outflank-mailman (input) for mailman id 31899;
 Fri, 20 Nov 2020 10:00:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg3D5-0008RY-F4
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:00:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f64c7928-8212-47e3-a3d8-6aa5b63871e5;
 Fri, 20 Nov 2020 10:00:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9E5DCAC23;
 Fri, 20 Nov 2020 10:00:04 +0000 (UTC)
X-Inumbo-ID: f64c7928-8212-47e3-a3d8-6aa5b63871e5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605866404; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DmQ50/yXucWRgzVb9oJHTNHg7jZGOFcFMa1qRlKGoU8=;
	b=gzBCRV4iLy876FLKWdxwyEpk9QxqdmDDaZvz7qw4JBESbSV+6S9Jn6ASJCq0Qu5HvDEjyX
	FQZJsFFti8PtqbSz6Bbo4O6v1ssNAUwU4M47efX81f8VwAbHuzktJDUTvwpYMrNbM9QUcS
	7R5ykfC3QstSPwsc8ZZbSBiQCs3j6s0=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
Date: Fri, 20 Nov 2020 11:00:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201120092754.GH1508@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:27, Manuel Bouyer wrote:
> On Fri, Nov 20, 2020 at 09:59:57AM +0100, Jan Beulich wrote:
>> Well, anything coming through the LAPIC needs ack-ing (except for
>> the spurious interrupt of course), or else ISR won't get updated
>> and further interrupts at this or lower priority can't be serviced
>> (delivered) anymore. This includes interrupts originally coming
>> through the IO-APIC. But the same constraint / requirement exists
>> on baremetal.
> 
> OK, so even though I didn't see where this happens, it's happening.
> Is that what Xen uses as the ACK from dom0 for an IO-APIC
> interrupt, or is it something else (at the IO-APIC level)?

It's the traditional LAPIC based EOI mechanism that Xen intercepts
(as necessary) on the guest side and then translates (via
surprisingly many layers of calls) into the necessary EOI /
unmask / whatever at the hardware level. Our vIO-APIC
implementation so far doesn't support IO-APIC based EOI at all
(which is reflected in the IO-APIC version ID being 0x11).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31904.62685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Di-0000Aq-8l; Fri, 20 Nov 2020 10:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31904.62685; Fri, 20 Nov 2020 10:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Di-0000Ab-5c; Fri, 20 Nov 2020 10:00:46 +0000
Received: by outflank-mailman (input) for mailman id 31904;
 Fri, 20 Nov 2020 10:00:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg3Dg-0000AE-Na
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:00:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg3Dg-0002vf-9h; Fri, 20 Nov 2020 10:00:44 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32Y-0003d1-P4; Fri, 20 Nov 2020 09:49:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Xc3dEZvBV1+eTUG9QjtZCF0WCwEhmm9IyrdLoFO3fio=; b=EgvdqzPzkqvxm+WD2HNz97iqIl
	JjZQ6vw0ArmB5yqjwqUdelNFJbhUbdo3ZMWa9EHc/i75hm4tk2dR3H5s16L9kcfGeYwb+GxvGeDMf
	tHTfYzOc//lamGsnmJzkoq8pR/uuUQS/oslh4Xt/tGawgZQiohupymjwp9bmXqbeQm6k=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 10/12] viridian: log initial invocation of each type of hypercall
Date: Fri, 20 Nov 2020 09:48:58 +0000
Message-Id: <20201120094900.1489-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Use DECLARE_BITMAP() for 'hypercall_flags'
 - Use an enum for _HCALL_* values
---
 xen/arch/x86/hvm/viridian/viridian.c | 21 +++++++++++++++++++++
 xen/include/asm-x86/hvm/viridian.h   | 10 ++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 74227fecbcd4..de3f5f86777c 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -933,6 +933,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
+    struct viridian_domain *vd = currd->arch.hvm.viridian;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     int rc = 0;
@@ -962,6 +963,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     switch ( input.call_code )
     {
     case HVCALL_NOTIFY_LONG_SPIN_WAIT:
+        if ( !test_and_set_bit(_HCALL_spin_wait, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "d%d: VIRIDIAN HVCALL_NOTIFY_LONG_SPIN_WAIT\n",
+                   currd->domain_id);
+
         /*
          * See section 14.5.1 of the specification.
          */
@@ -970,22 +975,38 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+        if ( !test_and_set_bit(_HCALL_flush, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST\n",
+                   currd);
+
         rc = hvcall_flush(&input, &output, input_params_gpa,
                           output_params_gpa);
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        if ( !test_and_set_bit(_HCALL_flush_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX\n",
+                   currd);
+
         rc = hvcall_flush_ex(&input, &output, input_params_gpa,
                              output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI:
+        if ( !test_and_set_bit(_HCALL_ipi, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI\n",
+                   currd);
+
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI_EX:
+        if ( !test_and_set_bit(_HCALL_ipi_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI_EX\n",
+                   currd);
+
         rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
                            output_params_gpa);
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index cbf77d9c760b..4c8ff6e80b6f 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -55,10 +55,20 @@ struct viridian_time_ref_count
     int64_t off;
 };
 
+enum {
+    _HCALL_spin_wait,
+    _HCALL_flush,
+    _HCALL_flush_ex,
+    _HCALL_ipi,
+    _HCALL_ipi_ex,
+    _HCALL_nr /* must be last */
+};
+
 struct viridian_domain
 {
     union hv_guest_os_id guest_os_id;
     union hv_vp_assist_page_msr hypercall_gpa;
+    DECLARE_BITMAP(hypercall_flags, _HCALL_nr);
     struct viridian_time_ref_count time_ref_count;
     struct viridian_page reference_tsc;
 };
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31905.62695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Di-0000BR-Ni; Fri, 20 Nov 2020 10:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31905.62695; Fri, 20 Nov 2020 10:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Di-0000BC-Db; Fri, 20 Nov 2020 10:00:46 +0000
Received: by outflank-mailman (input) for mailman id 31905;
 Fri, 20 Nov 2020 10:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg3Dh-0000AK-2w
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg3Dg-0002vb-0J; Fri, 20 Nov 2020 10:00:44 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32b-0003d1-Bb; Fri, 20 Nov 2020 09:49:17 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/tzXLkeYgeohzzpZOobPIXPIDOQTKzNBQ+V78872kJQ=; b=aljZn49SbQde/x5Nfc4qojAcC9
	PmWnCmYLuahJRepbdlcp7D10Hf7/4GZGlmy4UacseYU3bW0hcFi4rzFS5VtVbEXZSyZ6HDF3UQIyW
	BgIbKsJb5C6OoEFWjSnKcH9N0A5eWc42xHT6tgBvcneNt0Dli+cysl18664ZNsNtqc4U=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 12/12] xl / libxl: add 'ex_processor_mask' into 'libxl_viridian_enlightenment'
Date: Fri, 20 Nov 2020 09:49:00 +0000
Message-Id: <20201120094900.1489-13-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Adding the new value into the enumeration makes it immediately available
to xl, so this patch adjusts the xl.cfg(5) documentation accordingly.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 docs/man/xl.cfg.5.pod.in         | 8 ++++++++
 tools/include/libxl.h            | 7 +++++++
 tools/libs/light/libxl_types.idl | 1 +
 tools/libs/light/libxl_x86.c     | 3 +++
 4 files changed, 19 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..3f0f8de1e988 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2318,6 +2318,14 @@ This set incorporates use of a hypercall for interprocessor interrupts.
 This enlightenment may improve performance of Windows guests with multiple
 virtual CPUs.
 
+=item B<ex_processor_masks>
+
+This set enables new hypercall variants taking a variably-sized sparse
+B<Virtual Processor Set> as an argument, rather than a simple 64-bit
+mask. Hence this enlightenment must be specified for guests with more
+than 64 vCPUs if B<hcall_remote_tlb_flush> and/or B<hcall_ipi> are also
+specified.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..eaffccb30f37 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS indicates that the
+ * 'ex_processor_masks' value is present in the viridian enlightenment
+ * enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..05324736b744 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -238,6 +238,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (7, "synic"),
     (8, "stimer"),
     (9, "hcall_ipi"),
+    (10, "ex_processor_masks"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index e18274cc10e2..86d272999d67 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -366,6 +366,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
         mask |= HVMPV_hcall_ipi;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_EX_PROCESSOR_MASKS))
+        mask |= HVMPV_ex_processor_masks;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31906.62705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Dj-0000Cf-61; Fri, 20 Nov 2020 10:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31906.62705; Fri, 20 Nov 2020 10:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Di-0000C8-RN; Fri, 20 Nov 2020 10:00:46 +0000
Received: by outflank-mailman (input) for mailman id 31906;
 Fri, 20 Nov 2020 10:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg3Dh-0000AQ-4k
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg3Dg-0002vd-7M; Fri, 20 Nov 2020 10:00:44 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg32a-0003d1-Ac; Fri, 20 Nov 2020 09:49:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/q8/4ovlXCcgIYS3O6ZCn2GAgqJ29SC66Zw3QGsg0QU=; b=IspVjfZySKMpAEaQTeTgVHxRx2
	bjbuIc5e+fq+zFF24VBF0Ua0/X0FOuBlTEIvA0C6amY8t+nqNVqut+Yb3071oVB0T7YfyTmpRWnrw
	pYLZl6P760bxMRRtG+JujIr21Vqm2BpQcJ8tJK/aBmMM+o9uCuPHBkIfd42xYaxGTd/A=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 11/12] viridian: add a new '_HVMPV_ex_processor_masks' bit into HVM_PARAM_VIRIDIAN...
Date: Fri, 20 Nov 2020 09:48:59 +0000
Message-Id: <20201120094900.1489-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and advertise ExProcessorMasks support if it is set.

Support is advertised by setting bit 11 in CPUID:40000004:EAX.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/viridian/viridian.c | 3 +++
 xen/include/public/hvm/params.h      | 7 ++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index de3f5f86777c..54b20188e326 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -84,6 +84,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
 #define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
+#define CPUID4A_EX_PROCESSOR_MASKS     (1 << 11)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -197,6 +198,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
+        if ( viridian_feature_mask(d) & HVMPV_ex_processor_masks )
+            res->a |= CPUID4A_EX_PROCESSOR_MASKS;
 
         /*
          * This value is the recommended number of attempts to try to
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 0e3fdca09646..3b0a0f45da53 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -164,6 +164,10 @@
 #define _HVMPV_hcall_ipi 9
 #define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
 
+/* Enable ExProcessorMasks */
+#define _HVMPV_ex_processor_masks 10
+#define HVMPV_ex_processor_masks (1 << _HVMPV_ex_processor_masks)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -174,7 +178,8 @@
          HVMPV_crash_ctl | \
          HVMPV_synic | \
          HVMPV_stimer | \
-         HVMPV_hcall_ipi)
+         HVMPV_hcall_ipi | \
+         HVMPV_ex_processor_masks)
 
 #endif
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:09:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:09:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31920.62721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Lj-0000i9-6b; Fri, 20 Nov 2020 10:09:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31920.62721; Fri, 20 Nov 2020 10:09:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3Lj-0000i2-3k; Fri, 20 Nov 2020 10:09:03 +0000
Received: by outflank-mailman (input) for mailman id 31920;
 Fri, 20 Nov 2020 10:09:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg3Lh-0000hx-FR
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:09:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 147d43fd-7334-4820-a187-120702d09170;
 Fri, 20 Nov 2020 10:08:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6ABBAE78;
 Fri, 20 Nov 2020 10:08:58 +0000 (UTC)
X-Inumbo-ID: 147d43fd-7334-4820-a187-120702d09170
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605866938; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8hf+ygfDKUNsG8QRruMSF4Vyc8o9h0epff8MdNWveAQ=;
	b=ASbNbfEjL+a3mYs5HZ9Y27uJfJ5EAq/CYcZfEDXqmTozEkWZNqwXtGHRZo769gAHE+pNzB
	H7Y81hEUk6Yaby8WxWm70RDMfVHA1nc89cJNqH5C8FFnr2qZu+QI3cY2d4ZpTi7dwvc2RQ
	Pa6z0LxAPg1Cgtu1ciKEZBbcaOGGiQM=
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org,
 xen-devel@lists.xenproject.org
References: <20201118005051.26115-1-sstabellini@kernel.org>
 <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com>
 <alpine.DEB.2.21.2011181241310.11739@sstabellini-ThinkPad-T480s>
 <3e8c03eb-ee3f-4439-90c2-acf340c7d8e7@suse.com>
 <alpine.DEB.2.21.2011191310210.11739@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8ff723d7-00e2-be35-48b0-dc4b932d35cc@suse.com>
Date: Fri, 20 Nov 2020 11:08:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011191310210.11739@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.11.2020 22:40, Stefano Stabellini wrote:
> On Thu, 19 Nov 2020, Jan Beulich wrote:
>> On 18.11.2020 22:00, Stefano Stabellini wrote:
>>> On Wed, 18 Nov 2020, Jan Beulich wrote:
>>>> On 18.11.2020 01:50, Stefano Stabellini wrote:
>>>>> 1) It is not obvious that "Configure standard Xen features (expert
>>>>> users)" is actually the famous EXPERT we keep talking about on xen-devel
>>>>
>>>> Which can be addressed by simply changing the one prompt line.
>>>>
>>>>> 2) It is not obvious when we need to enable EXPERT to get a specific
>>>>> feature
>>>>>
>>>>> In particular if you want to enable ACPI support so that you can boot
>>>>> Xen on an ACPI platform, you have to enable EXPERT first. But searching
>>>>> through the kconfig menu it is really not clear (type '/' and "ACPI"):
>>>>> nothing in the description tells you that you need to enable EXPERT to
>>>>> get the option.
>>>>
>>>> And what causes this to be different once you switch to UNSUPPORTED?
>>>
>>> Two things: firstly, it doesn't and shouldn't take an expert to enable
>>> ACPI support, even if ACPI support is experimental. So calling it
>>> UNSUPPORTED helps a lot. This is particularly relevant to the ARM Kconfig
>>> options changed by this patch. Secondly, this patch is adding
>>> "(UNSUPPORTED)" in the oneline prompt so that it becomes easy to match
>>> it with the option you need to enable.
>>
>> There's redundancy here then, which I think is in almost all cases
>> better to avoid. That's first and foremost because the two places
>> can go out of sync. Therefore, if the primary thing is to help
>> "make menuconfig" (which I admit I don't normally use, as it's
>> nothing that gets invoked implicitly by the build process afaict,
>> i.e. one has to actively invoke it), perhaps we should enhance
>> kconfig to attach at least a pre-determined subset of labels to
>> the prompts automatically?
>>
>> And second, also in reply to what you've been saying further down,
>> perhaps we would better go with a hierarchy of controls here, e.g.
>> EXPERT -> EXPERIMENTAL -> UNSUPPORTED?
> 
> Both these are good ideas worth discussing; somebody else made a similar
> suggestion some time back. I was already thinking this could be a great
> candidate for one of the first "working groups" as defined by George
> during the last community call because the topic is not purely
> technical: a working group could help getting alignment and make
> progress faster. We can propose it to George when he is back.
> 
> However, I don't think we need the working group to make progress on
> this limited patch that only addresses the lowest hanging fruit.
> 
> I'd like to suggest to make progress on this patch in its current form,
> and in parallel start a longer term discussion on how to do something
> like you suggested above.

Okay, I guess I can accept this. So FAOD I'm not objecting to the
change (with some suitable adjustments, as discussed), but I'm
then also not going to be the one to ack it. Nevertheless I'd like
to point out that doing such a partial solution may end up adding
confusion rather than reducing it. Much depends on how exactly
consumers interpret what we hand to them.
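
The hierarchy floated above could be sketched in Kconfig roughly as
follows (option names and prompts are assumptions for discussion, not
the actual patch):

```
config EXPERT
	bool "Configure EXPERT features"

config EXPERIMENTAL
	bool "Configure EXPERIMENTAL features"
	depends on EXPERT

config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	depends on EXPERIMENTAL
```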

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:38:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31932.62734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3oL-00047R-HL; Fri, 20 Nov 2020 10:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31932.62734; Fri, 20 Nov 2020 10:38:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg3oL-00047K-EL; Fri, 20 Nov 2020 10:38:37 +0000
Received: by outflank-mailman (input) for mailman id 31932;
 Fri, 20 Nov 2020 10:38:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GrQD=E2=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kg3oJ-00047F-Km
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:38:35 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50fb38a9-ed13-4321-8704-81cc7e4ca877;
 Fri, 20 Nov 2020 10:38:34 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AKAcTDi005801
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 20 Nov 2020 11:38:30 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 264D72E9CA8; Fri, 20 Nov 2020 11:38:24 +0100 (MET)
X-Inumbo-ID: 50fb38a9-ed13-4321-8704-81cc7e4ca877
Date: Fri, 20 Nov 2020 11:38:24 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201120103824.GJ1508@antioche.eu.org>
References: <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 20 Nov 2020 11:38:30 +0100 (MET)

On Fri, Nov 20, 2020 at 11:00:05AM +0100, Jan Beulich wrote:
> On 20.11.2020 10:27, Manuel Bouyer wrote:
> > On Fri, Nov 20, 2020 at 09:59:57AM +0100, Jan Beulich wrote:
> >> Well, anything coming through the LAPIC needs ack-ing (except for
> >> the spurious interrupt of course), or else ISR won't get updated
> >> and further interrupts at this or lower priority can't be serviced
> >> (delivered) anymore. This includes interrupts originally coming
> >> through the IO-APIC. But the same constraint / requirement exists
> >> on baremetal.
> > 
> > OK, so even if I didn't see where this happens, it's happening.
> > Is it what's Xen is using as ACK from the dom0 for a IOAPIC
> > interrupt, or is it something else (at the IOAPIC level) ?
> 
> It's the traditional LAPIC based EOI mechanism that Xen intercepts
> (as necessary) on the guest side and then translates (via
> surprisingly many layers of calls) into the necessary EOI /
> unmask / whatever at the hardware level. Our vIO-APIC
> implementation so far doesn't support IO-APIC based EOI at all
> (which is reflected in the IO-APIC version ID being 0x11).

OK.
I finally found where the EOI occurs (it's within a macro, so a simple grep
didn't show it).

When interrupts are not masked (e.g. via cli), the ioapic handler is called.
From here, two paths are possible:
a) the software interrupt priority level (called spl in BSD world) allows the
  driver's handler to run. In this case it's called, then the interrupt
  is unmasked at ioapic level, and EOI'd at lapic.
b) the software interrupt priority level doesn't allow the driver's handler to
  run. In this case, the interrupt is marked as pending in software,
  explicitly masked at the ioapic level and EOI'd at the lapic.
  Later, when the spl is lowered, the driver's interrupt handler is run,
  then the interrupt is unmasked at ioapic level and EOI'd at the lapic
  (this is the same path as a)). Here we may EOI the lapic twice, the
  second time when there is no hardware interrupt pending any more.

I suspect it's case b) which causes the problem with Xen.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:53:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31941.62748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg42e-0006Lf-UF; Fri, 20 Nov 2020 10:53:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31941.62748; Fri, 20 Nov 2020 10:53:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg42e-0006LY-QX; Fri, 20 Nov 2020 10:53:24 +0000
Received: by outflank-mailman (input) for mailman id 31941;
 Fri, 20 Nov 2020 10:53:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ulUS=E2=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kg42c-0006LT-Uy
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 10:53:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41e52736-4b7b-4545-823a-415df167cc6a;
 Fri, 20 Nov 2020 10:53:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 015E9AED9;
 Fri, 20 Nov 2020 10:53:20 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id AC96F1E130B; Fri, 20 Nov 2020 11:53:19 +0100 (CET)
X-Inumbo-ID: 41e52736-4b7b-4545-823a-415df167cc6a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Fri, 20 Nov 2020 11:53:19 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 15/20] block: merge struct block_device and struct
 hd_struct
Message-ID: <20201120105319.GA15537@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-16-hch@lst.de>
 <20201119143921.GX1981@quack2.suse.cz>
 <20201120091546.GE21715@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120091546.GE21715@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri 20-11-20 10:15:46, Christoph Hellwig wrote:
> On Thu, Nov 19, 2020 at 03:39:21PM +0100, Jan Kara wrote:
> > This patch is kind of difficult to review due to the size of mostly
> > mechanical changes mixed with not completely mechanical changes. Can we
> > perhaps split out the mechanical bits? E.g. the rq->part => rq->bdev
> > renaming is mechanical and notable part of the patch. Similarly the
> > part->foo => part->bd_foo bits...
> 
> We'd end up with really weird patches that way.  Never mind that I'm not
> even sure how we could mechanically do the renaming.

Well, I believe coccinelle should be able to do the renaming automatically.

> > Also I'm kind of wondering: AFAIU the new lifetime rules, gendisk holds
> > bdev reference and bdev is created on gendisk allocation so bdev lifetime is
> > strictly larger than gendisk lifetime. But what now keeps bdev->bd_disk
> > reference safe in presence device hot unplug? In most cases we are still
> > protected by gendisk reference taken in __blkdev_get() but how about
> > disk->lookup_sem and disk->flags dereferences before we actually grab the
> > reference?
> 
> Good question.  I'll need to think about this a bit more.

My thinking was that you could use

kobject_get_unless_zero(bdev->bd_device->kobj)

and after you hold this reference, you can do everything else safely. In
this case it is really useful that the device is embedded in block_dev and
not in gendisk itself...

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 10:59:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 10:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31948.62764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg48H-0006Zd-LU; Fri, 20 Nov 2020 10:59:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31948.62764; Fri, 20 Nov 2020 10:59:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg48H-0006ZW-I7; Fri, 20 Nov 2020 10:59:13 +0000
Received: by outflank-mailman (input) for mailman id 31948;
 Fri, 20 Nov 2020 10:59:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg48G-0006ZO-5M; Fri, 20 Nov 2020 10:59:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg48F-00048W-W9; Fri, 20 Nov 2020 10:59:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg48F-0004D5-MO; Fri, 20 Nov 2020 10:59:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kg48F-00073a-Lv; Fri, 20 Nov 2020 10:59:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8DA034qCOM62PoLf0t6VAcyjatP1urjIOIU1/hZFRvI=; b=Bz4QEdT6lVR3l/4P9bs7xwhmMW
	yDXGCG8+xbhwNHoFKj53+tPIYUFiSNao2HkO5q0ewH2cVbx+teuzyLVv4APyIj7b80yU/IDvjPIO1
	tkGgYUMQ04aJJ8yCn8fSu6Lh6gYIfOgt0JjyiEsD04UaLzxYMhSxLzfUria6ntvT0x3o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156895: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=846d22d54f24f336fb80d052338e0cd030d54fee
X-Osstest-Versions-That:
    xen=0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 10:59:11 +0000

flight 156895 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156895/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  846d22d54f24f336fb80d052338e0cd030d54fee
baseline version:
 xen                  0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9

Last test of basis   156889  2020-11-20 01:03:00 Z    0 days
Testing same since   156895  2020-11-20 08:02:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0ff2c7e5b4..846d22d54f  846d22d54f24f336fb80d052338e0cd030d54fee -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:06:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:06:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31957.62779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4Ey-0007lL-FR; Fri, 20 Nov 2020 11:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31957.62779; Fri, 20 Nov 2020 11:06:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4Ey-0007lE-Bp; Fri, 20 Nov 2020 11:06:08 +0000
Received: by outflank-mailman (input) for mailman id 31957;
 Fri, 20 Nov 2020 11:06:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4Ex-0007l6-9C; Fri, 20 Nov 2020 11:06:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4Ew-0004Ip-7U; Fri, 20 Nov 2020 11:06:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4Ev-0004QH-TW; Fri, 20 Nov 2020 11:06:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4Ev-0003YN-Sy; Fri, 20 Nov 2020 11:06:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K3f/S3Iy9y8mfCQwWObmV4hyw8hh0rAT0FXvJOWd9dE=; b=kLPteVqjq0HEh0zMPQsgTs8zlb
	6/2o+dCuJ/lekSYuHJnrsVOlSQCwY3G+98LpmN6xzoSoutHl40llQIl+aDRRNL7wPdwY/shZ+2zaL
	VpBkYcQkCzMQmGn7y1Pl7+0jg243A5AVsATQwtrXGWzpTvaf/mnfTbHROpk8iU0z/TLk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156886-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156886: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-saverestore.2:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c2e7554e1b85935d962127efa3c2a76483b0b3b6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 11:06:05 +0000

flight 156886 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156886/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install           fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 156871 REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop     fail in 156871 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 156871 pass in 156886
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen        fail pass in 156871
 test-amd64-amd64-i386-pvgrub 18 guest-saverestore.2        fail pass in 156871
 test-arm64-arm64-examine      8 reboot                     fail pass in 156871

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156871 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 156871 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c2e7554e1b85935d962127efa3c2a76483b0b3b6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  111 days
Failing since        152366  2020-08-01 20:49:34 Z  110 days  184 attempts
Testing same since   156871  2020-11-19 04:28:45 Z    1 days    2 attempts

------------------------------------------------------------
3528 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 675413 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:21:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31971.62794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4Tl-0001as-VI; Fri, 20 Nov 2020 11:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31971.62794; Fri, 20 Nov 2020 11:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4Tl-0001al-RB; Fri, 20 Nov 2020 11:21:25 +0000
Received: by outflank-mailman (input) for mailman id 31971;
 Fri, 20 Nov 2020 11:21:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ulUS=E2=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kg4Tk-0001ag-Eh
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:21:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35edf7cf-aaef-49ff-a730-73d122afc14b;
 Fri, 20 Nov 2020 11:21:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C4C79AED9;
 Fri, 20 Nov 2020 11:21:21 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 2825E1E130B; Fri, 20 Nov 2020 12:21:21 +0100 (CET)
X-Inumbo-ID: 35edf7cf-aaef-49ff-a730-73d122afc14b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Fri, 20 Nov 2020 12:21:21 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201120112121.GB15537@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-15-hch@lst.de>
 <20201119120525.GW1981@quack2.suse.cz>
 <20201120090820.GD21715@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120090820.GD21715@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri 20-11-20 10:08:20, Christoph Hellwig wrote:
> On Thu, Nov 19, 2020 at 01:05:25PM +0100, Jan Kara wrote:
> > > @@ -613,7 +613,7 @@ void guard_bio_eod(struct bio *bio)
> > >  	rcu_read_lock();
> > >  	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
> > >  	if (part)
> > > -		maxsector = part_nr_sects_read(part);
> > > +		maxsector = bdev_nr_sectors(part->bdev);
> > >  	else
> > >  		maxsector = get_capacity(bio->bi_disk);
> > 
> > I have to say that after these changes I find it a bit confusing that we
> > have get/set_capacity() and bdev_nr_sectors() / bdev_set_nr_sectors() and
> > they are all the same thing (i_size of the bdev). Is there a reason for the
> > distinction?
> 
> get_capacity/set_capacity are the existing interfaces that work on
> struct gendisk, unchanged from what we had before.  They also have
> lots of users, which makes them kinda awkward to touch.
> 
> bdev_nr_sectors is the public interface to query the size for any
> kind of struct block device, to be used by consumers of the block
> device interface.
> 
> bdev_set_nr_sectors is a private helper for the partitions core that
> avoids duplicating a bit of code, and works on partitions.

OK, I guess I'll get used to this...

> > > @@ -38,6 +38,16 @@ static void disk_add_events(struct gendisk *disk);
> > >  static void disk_del_events(struct gendisk *disk);
> > >  static void disk_release_events(struct gendisk *disk);
> > >  
> > > +void set_capacity(struct gendisk *disk, sector_t sectors)
> > > +{
> > > +	struct block_device *bdev = disk->part0.bdev;
> > > +
> > > +	spin_lock(&bdev->bd_size_lock);
> > > +	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
> > > +	spin_unlock(&bdev->bd_size_lock);
> > 
> > AFAICT bd_size_lock is pointless after these changes so we can just remove
> > it?
> 
> I don't think it is, as requiring bd_mutex for size updates leads to
> rather awkward lock ordering problems.

OK, let me ask differently: What is bd_size_lock protecting now? Ah, I see,
on 32-bit it is needed to prevent torn writes to i_size, right?

> > >  	if (capacity != size && capacity != 0 && size != 0) {
> > >  		char *envp[] = { "RESIZE=1", NULL };
> > >  
> > > +		pr_info("%s: detected capacity change from %lld to %lld\n",
> > > +		       disk->disk_name, size, capacity);
> > 
> > So we are now missing above message for transitions from / to 0 capacity?
> > Is there any other notification in the kernel log when e.g. media is
> > inserted into a CD-ROM drive? I remember using these messages for detecting
> > that...
> 
> True, I guess we should keep the messages for that case at least under
> some circumstances.  Let me take a closer look at what could make sense.
> 
> > Also what about GENHD_FL_HIDDEN devices? Are we sure we never set capacity
> > for them?
> 
> We absolutely set the capacity for them, as we have to.  And even use
> this interface.  But yes, I think we should skip sending the uevent for
> them.

Also previously we were not printing any messages for hidden devices and
now we do. I'm not sure whether that's intended or not.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:24:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:24:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31977.62806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4WX-0001lV-E2; Fri, 20 Nov 2020 11:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31977.62806; Fri, 20 Nov 2020 11:24:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4WX-0001lO-AV; Fri, 20 Nov 2020 11:24:17 +0000
Received: by outflank-mailman (input) for mailman id 31977;
 Fri, 20 Nov 2020 11:24:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4WV-0001lG-Qa; Fri, 20 Nov 2020 11:24:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4WV-0004fs-Fd; Fri, 20 Nov 2020 11:24:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4WV-0005IK-7R; Fri, 20 Nov 2020 11:24:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kg4WV-0000gC-6u; Fri, 20 Nov 2020 11:24:15 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wSMogNAwewOurVZn/WcMbVNTDvs5vAgEj85V84pvBGI=; b=k4tUevAC8mMLv68iqAk/WIEJ4W
	6ri/XDjCzShIKCBXvJHYxvo7RK3Iyj2xr664sEqBBduILOjLimuMDe8MXtRzpYSqSdnzULGfUPFaO
	QWtcZz7mEZV+WjtS/owRZ41yVGMYJ+WO2R7DH91Kba5IJWDIdYRVssxk6xjsy67bgSIg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156892: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b67c5267253d5d9d7beaf03408369e1bd4449104
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 11:24:15 +0000

flight 156892 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b67c5267253d5d9d7beaf03408369e1bd4449104
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  133 days
Failing since        151818  2020-07-11 04:18:52 Z  132 days  127 attempts
Testing same since   156892  2020-11-20 04:19:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28147 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31989.62833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sA-0004AJ-K9; Fri, 20 Nov 2020 11:46:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31989.62833; Fri, 20 Nov 2020 11:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sA-0004AC-G7; Fri, 20 Nov 2020 11:46:38 +0000
Received: by outflank-mailman (input) for mailman id 31989;
 Fri, 20 Nov 2020 11:46:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4s8-00049L-TX
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 33050813-2761-4fe2-999e-a5c2be4caa4a;
 Fri, 20 Nov 2020 11:46:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5C92BAD5C;
 Fri, 20 Nov 2020 11:46:34 +0000 (UTC)
X-Inumbo-ID: 33050813-2761-4fe2-999e-a5c2be4caa4a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872794; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L8ja12eEdwuLab45Hv+HnSogKKR0GYpH5GZdxj1aLqU=;
	b=HM77VhTXsHVpDUQPyE5fDruB2sTbtB5e6fzYt4oGwkUGhMnRFmm5qJ7Hh1Qqwh7tCGVSjW
	vxclagdoWydacKDLs9MYU0Ww2fo0mZsAUbXNR1eXUFtv5z9bc4jn9amGKNQ9Oy1fQIMrEy
	/C5Lv1AOCFYS3f6gbEVGkpBfnHQU47c=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 03/12] x86/pv: switch SWAPGS to ALTERNATIVE
Date: Fri, 20 Nov 2020 12:46:21 +0100
Message-Id: <20201120114630.13552-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SWAPGS is used only for interrupts coming from user mode or for
returning to user mode. So there is no reason to use the PARAVIRT
framework, as it can easily be replaced by an ALTERNATIVE depending
on X86_FEATURE_XENPV.

There are several instances using the PV-aware SWAPGS macro in paths
which are never executed in a Xen PV guest. Replace those with the
plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/entry/entry_64.S             | 10 +++++-----
 arch/x86/include/asm/irqflags.h       | 20 ++++++++------------
 arch/x86/include/asm/paravirt.h       | 20 --------------------
 arch/x86/include/asm/paravirt_types.h |  2 --
 arch/x86/kernel/asm-offsets_64.c      |  1 -
 arch/x86/kernel/paravirt.c            |  1 -
 arch/x86/kernel/paravirt_patch.c      |  3 ---
 arch/x86/xen/enlighten_pv.c           |  3 ---
 8 files changed, 13 insertions(+), 47 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index cad08703c4ad..a876204a73e0 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -669,7 +669,7 @@ native_irq_return_ldt:
 	 */
 
 	pushq	%rdi				/* Stash user RDI */
-	SWAPGS					/* to kernel GS */
+	swapgs					/* to kernel GS */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi	/* to kernel CR3 */
 
 	movq	PER_CPU_VAR(espfix_waddr), %rdi
@@ -699,7 +699,7 @@ native_irq_return_ldt:
 	orq	PER_CPU_VAR(espfix_stack), %rax
 
 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
-	SWAPGS					/* to user GS */
+	swapgs					/* to user GS */
 	popq	%rdi				/* Restore user RDI */
 
 	movq	%rax, %rsp
@@ -943,7 +943,7 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	ret
 
 .Lparanoid_entry_swapgs:
-	SWAPGS
+	swapgs
 
 	/*
 	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
@@ -1001,7 +1001,7 @@ SYM_CODE_START_LOCAL(paranoid_exit)
 	jnz		restore_regs_and_return_to_kernel
 
 	/* We are returning to a context with user GSBASE */
-	SWAPGS_UNSAFE_STACK
+	swapgs
 	jmp		restore_regs_and_return_to_kernel
 SYM_CODE_END(paranoid_exit)
 
@@ -1426,7 +1426,7 @@ nmi_no_fsgsbase:
 	jnz	nmi_restore
 
 nmi_swapgs:
-	SWAPGS_UNSAFE_STACK
+	swapgs
 
 nmi_restore:
 	POP_REGS
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 2dfc8d380dab..8c86edefa115 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -131,18 +131,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #define SAVE_FLAGS(x)		pushfq; popq %rax
 #endif
 
-#define SWAPGS	swapgs
-/*
- * Currently paravirt can't handle swapgs nicely when we
- * don't have a stack we can rely on (such as a user space
- * stack).  So we either find a way around these or just fault
- * and emulate if a guest tries to call swapgs directly.
- *
- * Either way, this is a good way to document that we don't
- * have a reliable stack. x86_64 only.
- */
-#define SWAPGS_UNSAFE_STACK	swapgs
-
 #define INTERRUPT_RETURN	jmp native_iret
 #define USERGS_SYSRET64				\
 	swapgs;					\
@@ -170,6 +158,14 @@ static __always_inline int arch_irqs_disabled(void)
 
 	return arch_irqs_disabled_flags(flags);
 }
+#else
+#ifdef CONFIG_X86_64
+#ifdef CONFIG_XEN_PV
+#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
+#else
+#define SWAPGS	swapgs
+#endif
+#endif
 #endif /* !__ASSEMBLY__ */
 
 #endif
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index d25cc6830e89..5647bcdba776 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -776,26 +776,6 @@ extern void default_banner(void);
 
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-/*
- * If swapgs is used while the userspace stack is still current,
- * there's no way to call a pvop.  The PV replacement *must* be
- * inlined, or the swapgs instruction must be trapped and emulated.
- */
-#define SWAPGS_UNSAFE_STACK						\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs), swapgs)
-
-/*
- * Note: swapgs is very special, and in practise is either going to be
- * implemented with a single "swapgs" instruction or something very
- * special.  Either way, we don't need to save any registers for
- * it.
- */
-#define SWAPGS								\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_CPU_swapgs);		\
-		 )
-
 #define USERGS_SYSRET64							\
 	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
 		  ANNOTATE_RETPOLINE_SAFE;				\
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 0fad9f61c76a..903d71884fa2 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -169,8 +169,6 @@ struct pv_cpu_ops {
 	   frame set up. */
 	void (*iret)(void);
 
-	void (*swapgs)(void);
-
 	void (*start_context_switch)(struct task_struct *prev);
 	void (*end_context_switch)(struct task_struct *next);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 828be792231e..1354bc30614d 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -15,7 +15,6 @@ int main(void)
 #ifdef CONFIG_PARAVIRT_XXL
 	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
 	       cpu.usergs_sysret64);
-	OFFSET(PV_CPU_swapgs, paravirt_patch_template, cpu.swapgs);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 6c3407ba6ee9..5e5fcf5c376d 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -312,7 +312,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
-	.cpu.swapgs		= native_swapgs,
 
 #ifdef CONFIG_X86_IOPL_IOPERM
 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index ace6e334cb39..7c518b08aa3c 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -28,7 +28,6 @@ struct patch_xxl {
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	cpu_usergs_sysret64[6];
-	const unsigned char	cpu_swapgs[3];
 	const unsigned char	mov64[3];
 };
 
@@ -43,7 +42,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
 	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
 				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
-	.cpu_swapgs		= { 0x0f, 0x01, 0xf8 },	// swapgs
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
 
@@ -86,7 +84,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
 	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
-	PATCH_CASE(cpu, swapgs, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 76616024129e..44bb18adfb51 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1085,9 +1085,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 #endif
 	.io_delay = xen_io_delay,
 
-	/* Xen takes care of %gs when switching to usermode for us */
-	.swapgs = paravirt_nop,
-
 	.start_context_switch = paravirt_start_context_switch,
 	.end_context_switch = xen_end_context_switch,
 };
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31988.62821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4s8-00048f-BZ; Fri, 20 Nov 2020 11:46:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31988.62821; Fri, 20 Nov 2020 11:46:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4s8-00048Y-8I; Fri, 20 Nov 2020 11:46:36 +0000
Received: by outflank-mailman (input) for mailman id 31988;
 Fri, 20 Nov 2020 11:46:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4s7-00048T-8j
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5e82017-5a27-4552-a6f1-80ff2084a9d8;
 Fri, 20 Nov 2020 11:46:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 92093ACB0;
 Fri, 20 Nov 2020 11:46:33 +0000 (UTC)
X-Inumbo-ID: a5e82017-5a27-4552-a6f1-80ff2084a9d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872793; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l/t/KHjCWXTPuvdOLof5qOkkQcBYZ5SnwZcODN8IUSA=;
	b=mUXTjK/RfpfB8qehe6baicIMnV1N6l1JWvONIHMdA/X+9o7SA97Orrl+x8nPSwZ1upvNM5
	37fX3mBrO5Ue+Qohc+WRIPluXUtMEIu7HjQQSTOSSjf1G6qiYL3EhJuxJS6auzL27F1TNM
	J5wWI7i+mQocaeKgcKsotzSzfd4F9Qs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 01/12] x86/xen: use specific Xen pv interrupt entry for MCE
Date: Fri, 20 Nov 2020 12:46:19 +0100
Message-Id: <20201120114630.13552-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests don't use IST. For machine check interrupts, switch to
the same model as already used for debug interrupts.
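
The dispatch model referred to above can be sketched in plain
userspace C. This is an illustration only; all identifiers below are
invented stand-ins, not the kernel symbols:

```c
#include <stdbool.h>

/* Userspace sketch of the dispatch this patch introduces: without
 * IST, the Xen PV #MC entry itself must pick the handler variant
 * that native code selects via the IST mechanism.  All names here
 * are mock stand-ins for illustration. */

static int mc_handled_by;  /* 1 = non-IST variant, 2 = regular variant */

static void mock_noist_exc_machine_check(void) { mc_handled_by = 1; }
static void mock_exc_machine_check(void)       { mc_handled_by = 2; }

/* mirrors the shape of xenpv_exc_machine_check(): coming from user
 * mode selects the non-IST handler, kernel mode the regular one */
static void mock_xenpv_exc_machine_check(bool from_user_mode)
{
    if (from_user_mode)
        mock_noist_exc_machine_check();
    else
        mock_exc_machine_check();
}
```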

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/idtentry.h |  3 +++
 arch/x86/xen/enlighten_pv.c     | 16 +++++++++++++++-
 arch/x86/xen/xen-asm.S          |  2 +-
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index b2442eb0ac2f..3505c0396fa5 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -588,6 +588,9 @@ DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
 #else
 DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	exc_machine_check);
 #endif
+#ifdef CONFIG_XEN_PV
+DECLARE_IDTENTRY_RAW(X86_TRAP_MC,	xenpv_exc_machine_check);
+#endif
 #endif
 
 /* NMI */
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4409306364dc..9f5e44c1f70a 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -583,6 +583,20 @@ DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
 		exc_debug(regs);
 }
 
+#ifdef CONFIG_X86_MCE
+DEFINE_IDTENTRY_RAW(xenpv_exc_machine_check)
+{
+	/*
+	 * There's no IST on Xen PV, but we still need to dispatch
+	 * to the correct handler.
+	 */
+	if (user_mode(regs))
+		noist_exc_machine_check(regs);
+	else
+		exc_machine_check(regs);
+}
+#endif
+
 struct trap_array_entry {
 	void (*orig)(void);
 	void (*xen)(void);
@@ -603,7 +617,7 @@ static struct trap_array_entry trap_array[] = {
 	TRAP_ENTRY_REDIR(exc_debug,			true  ),
 	TRAP_ENTRY(exc_double_fault,			true  ),
 #ifdef CONFIG_X86_MCE
-	TRAP_ENTRY(exc_machine_check,			true  ),
+	TRAP_ENTRY_REDIR(exc_machine_check,		true  ),
 #endif
 	TRAP_ENTRY_REDIR(exc_nmi,			true  ),
 	TRAP_ENTRY(exc_int3,				false ),
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 1cb0e84b9161..bc2586730a5b 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -172,7 +172,7 @@ xen_pv_trap asm_exc_spurious_interrupt_bug
 xen_pv_trap asm_exc_coprocessor_error
 xen_pv_trap asm_exc_alignment_check
 #ifdef CONFIG_X86_MCE
-xen_pv_trap asm_exc_machine_check
+xen_pv_trap asm_xenpv_exc_machine_check
 #endif /* CONFIG_X86_MCE */
 xen_pv_trap asm_exc_simd_coprocessor_error
 #ifdef CONFIG_IA32_EMULATION
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31990.62845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sE-0004DQ-1d; Fri, 20 Nov 2020 11:46:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31990.62845; Fri, 20 Nov 2020 11:46:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sD-0004DF-TV; Fri, 20 Nov 2020 11:46:41 +0000
Received: by outflank-mailman (input) for mailman id 31990;
 Fri, 20 Nov 2020 11:46:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sC-00048T-5F
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2d7855c-8201-44a5-b991-dd6dab2025d5;
 Fri, 20 Nov 2020 11:46:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E646FAD72;
 Fri, 20 Nov 2020 11:46:33 +0000 (UTC)
X-Inumbo-ID: e2d7855c-8201-44a5-b991-dd6dab2025d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872794; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DI15xw9ExyUJ7xw8dQLYWmC3PSDAqfO4l4ENDPXt7+Q=;
	b=mWw0Ws0NBqavgb45dBphRfdbFlhVmnXfWBXplGYA/FRvJK5vi2N8tlRc1IvDC3qLAxtJjW
	FN+C6T9+2sHEwpGV2HEJQT2CxFd4GiD8I0seVpnENFNj1FbuiKKkbx3lBpp6f9zW/qYDXT
	9pKdnSSIYQZhIRreOfbYufU434AZdmk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 02/12] x86/xen: use specific Xen pv interrupt entry for DF
Date: Fri, 20 Nov 2020 12:46:20 +0100
Message-Id: <20201120114630.13552-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen PV guests don't use IST. For double fault interrupts, switch to
the same model as used for NMI.

Correct a typo in a comment while copying it.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
V2:
- fix typo (Andy Lutomirski)
---
 arch/x86/include/asm/idtentry.h |  3 +++
 arch/x86/xen/enlighten_pv.c     | 10 ++++++++--
 arch/x86/xen/xen-asm.S          |  2 +-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 3505c0396fa5..b35825392547 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -611,6 +611,9 @@ DECLARE_IDTENTRY_RAW(X86_TRAP_DB,	xenpv_exc_debug);
 
 /* #DF */
 DECLARE_IDTENTRY_DF(X86_TRAP_DF,	exc_double_fault);
+#ifdef CONFIG_XEN_PV
+DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF,	xenpv_exc_double_fault);
+#endif
 
 /* #VC */
 #ifdef CONFIG_AMD_MEM_ENCRYPT
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 9f5e44c1f70a..76616024129e 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -567,10 +567,16 @@ void noist_exc_debug(struct pt_regs *regs);
 
 DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
 {
-	/* On Xen PV, NMI doesn't use IST.  The C part is the sane as native. */
+	/* On Xen PV, NMI doesn't use IST.  The C part is the same as native. */
 	exc_nmi(regs);
 }
 
+DEFINE_IDTENTRY_RAW_ERRORCODE(xenpv_exc_double_fault)
+{
+	/* On Xen PV, DF doesn't use IST.  The C part is the same as native. */
+	exc_double_fault(regs, error_code);
+}
+
 DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
 {
 	/*
@@ -615,7 +621,7 @@ struct trap_array_entry {
 
 static struct trap_array_entry trap_array[] = {
 	TRAP_ENTRY_REDIR(exc_debug,			true  ),
-	TRAP_ENTRY(exc_double_fault,			true  ),
+	TRAP_ENTRY_REDIR(exc_double_fault,		true  ),
 #ifdef CONFIG_X86_MCE
 	TRAP_ENTRY_REDIR(exc_machine_check,		true  ),
 #endif
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index bc2586730a5b..1d054c915046 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -161,7 +161,7 @@ xen_pv_trap asm_exc_overflow
 xen_pv_trap asm_exc_bounds
 xen_pv_trap asm_exc_invalid_op
 xen_pv_trap asm_exc_device_not_available
-xen_pv_trap asm_exc_double_fault
+xen_pv_trap asm_xenpv_exc_double_fault
 xen_pv_trap asm_exc_coproc_segment_overrun
 xen_pv_trap asm_exc_invalid_tss
 xen_pv_trap asm_exc_segment_not_present
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31991.62857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sF-0004FV-Bz; Fri, 20 Nov 2020 11:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31991.62857; Fri, 20 Nov 2020 11:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sF-0004FL-6i; Fri, 20 Nov 2020 11:46:43 +0000
Received: by outflank-mailman (input) for mailman id 31991;
 Fri, 20 Nov 2020 11:46:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sD-00049L-SA
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15885951-5e6d-42be-a5e6-42036b45ad3e;
 Fri, 20 Nov 2020 11:46:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC5ABAD77;
 Fri, 20 Nov 2020 11:46:34 +0000 (UTC)
X-Inumbo-ID: 15885951-5e6d-42be-a5e6-42036b45ad3e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872794; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=E6kI9Jqp1IKUIzlIa4vugFA0UvfSeHWadngHYxjXzwk=;
	b=vQkOPJIBvcqks+pwigIKkcnfFK592ZVUHBcwKEJ7JQ/XX5KM36CVFIt5SpksqD7xxGZ7FV
	oEao1DB3ykB9UBxilZT9XlY+NsDq4meItvFXFtW8RtVXFm6tw20eXLoqiUC4ZzYQTDeTwS
	33CldIvUNHVElndvKiS2dP2hsjMa2Ok=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 04/12] x86/xen: drop USERGS_SYSRET64 paravirt call
Date: Fri, 20 Nov 2020 12:46:22 +0100
Message-Id: <20201120114630.13552-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

USERGS_SYSRET64 is used to return from a syscall via sysret, but
a Xen PV guest will nevertheless use the iret hypercall, as there
is no sysret PV hypercall defined.

So instead of first testing all the prerequisites for doing a sysret
and then mangling the stack for Xen PV again in order to do an iret,
just use the iret exit path from the beginning.

This can easily be done via an ALTERNATIVE, as is already done for
the sysenter compat case.
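
Conceptually the patched site behaves as below; this is a userspace
sketch with invented names (the real change is an ALTERNATIVE on
X86_FEATURE_XENPV in entry_64.S, not C code):

```c
#include <stdbool.h>

/* Userspace illustration of the ALTERNATIVE's effect: with the Xen PV
 * feature flag set, the sysret-eligibility test is replaced outright
 * by an unconditional jump to the iret exit.  All identifiers are
 * stand-ins for this sketch. */

enum mock_exit { MOCK_EXIT_IRET = 0, MOCK_EXIT_SYSRET = 1 };

static bool mock_feature_xenpv;

static enum mock_exit mock_syscall_return_path(bool sysret_allowed)
{
    if (mock_feature_xenpv)
        return MOCK_EXIT_IRET;  /* patched: always the iret exit */

    /* native sequence: take the sysret fast path when permitted */
    return sysret_allowed ? MOCK_EXIT_SYSRET : MOCK_EXIT_IRET;
}
```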

Note that this drops Xen's optimization of not restoring a few
registers when returning to user mode, but the instructions saved in
the kernel appear to more than compensate for that (a kernel build in
a Xen PV guest was slightly faster with this patch applied).

While at it, remove the stale sysret32 remnants.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/entry/entry_64.S             | 22 +++++++++-------------
 arch/x86/include/asm/irqflags.h       |  6 ------
 arch/x86/include/asm/paravirt.h       |  5 -----
 arch/x86/include/asm/paravirt_types.h |  8 --------
 arch/x86/kernel/asm-offsets_64.c      |  2 --
 arch/x86/kernel/paravirt.c            |  5 +----
 arch/x86/kernel/paravirt_patch.c      |  4 ----
 arch/x86/xen/enlighten_pv.c           |  1 -
 arch/x86/xen/xen-asm.S                | 20 --------------------
 arch/x86/xen/xen-ops.h                |  2 --
 10 files changed, 10 insertions(+), 65 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a876204a73e0..df865eebd3d7 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -46,14 +46,6 @@
 .code64
 .section .entry.text, "ax"
 
-#ifdef CONFIG_PARAVIRT_XXL
-SYM_CODE_START(native_usergs_sysret64)
-	UNWIND_HINT_EMPTY
-	swapgs
-	sysretq
-SYM_CODE_END(native_usergs_sysret64)
-#endif /* CONFIG_PARAVIRT_XXL */
-
 /*
  * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
  *
@@ -123,12 +115,15 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
 	 * Try to use SYSRET instead of IRET if we're returning to
 	 * a completely clean 64-bit userspace context.  If we're not,
 	 * go to the slow exit path.
+	 * In the Xen PV case we must use iret anyway.
 	 */
-	movq	RCX(%rsp), %rcx
-	movq	RIP(%rsp), %r11
 
-	cmpq	%rcx, %r11	/* SYSRET requires RCX == RIP */
-	jne	swapgs_restore_regs_and_return_to_usermode
+	ALTERNATIVE __stringify( \
+		movq	RCX(%rsp), %rcx; \
+		movq	RIP(%rsp), %r11; \
+		cmpq	%rcx, %r11;	/* SYSRET requires RCX == RIP */ \
+		jne	swapgs_restore_regs_and_return_to_usermode), \
+	"jmp	swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
 
 	/*
 	 * On Intel CPUs, SYSRET with non-canonical RCX/RIP will #GP
@@ -215,7 +210,8 @@ syscall_return_via_sysret:
 
 	popq	%rdi
 	popq	%rsp
-	USERGS_SYSRET64
+	swapgs
+	sysretq
 SYM_CODE_END(entry_SYSCALL_64)
 
 /*
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 8c86edefa115..e585a4705b8d 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -132,12 +132,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #endif
 
 #define INTERRUPT_RETURN	jmp native_iret
-#define USERGS_SYSRET64				\
-	swapgs;					\
-	sysretq;
-#define USERGS_SYSRET32				\
-	swapgs;					\
-	sysretl
 
 #else
 #define INTERRUPT_RETURN		iret
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 5647bcdba776..8121cf9b8d81 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -776,11 +776,6 @@ extern void default_banner(void);
 
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-#define USERGS_SYSRET64							\
-	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  jmp PARA_INDIRECT(pv_ops+PV_CPU_usergs_sysret64);)
-
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
 	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 903d71884fa2..55d8b7950e61 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -157,14 +157,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-	/*
-	 * Switch to usermode gs and return to 64-bit usermode using
-	 * sysret.  Only used in 64-bit kernels to return to 64-bit
-	 * processes.  Usermode register state, including %rsp, must
-	 * already be restored.
-	 */
-	void (*usergs_sysret64)(void);
-
 	/* Normal iret.  Jump to this with the standard iret stack
 	   frame set up. */
 	void (*iret)(void);
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 1354bc30614d..b14533af7676 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -13,8 +13,6 @@ int main(void)
 {
 #ifdef CONFIG_PARAVIRT
 #ifdef CONFIG_PARAVIRT_XXL
-	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
-	       cpu.usergs_sysret64);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 5e5fcf5c376d..18560b71e717 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -135,8 +135,7 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	else if (opfunc == _paravirt_ident_64)
 		ret = paravirt_patch_ident_64(insn_buff, len);
 
-	else if (type == PARAVIRT_PATCH(cpu.iret) ||
-		 type == PARAVIRT_PATCH(cpu.usergs_sysret64))
+	else if (type == PARAVIRT_PATCH(cpu.iret))
 		/* If operation requires a jmp, then jmp */
 		ret = paravirt_patch_jmp(insn_buff, opfunc, addr, len);
 #endif
@@ -170,7 +169,6 @@ static u64 native_steal_clock(int cpu)
 
 /* These are in entry.S */
 extern void native_iret(void);
-extern void native_usergs_sysret64(void);
 
 static struct resource reserve_ioports = {
 	.start = 0,
@@ -310,7 +308,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.load_sp0		= native_load_sp0,
 
-	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
 
 #ifdef CONFIG_X86_IOPL_IOPERM
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index 7c518b08aa3c..2fada2c347c9 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -27,7 +27,6 @@ struct patch_xxl {
 	const unsigned char	mmu_write_cr3[3];
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
-	const unsigned char	cpu_usergs_sysret64[6];
 	const unsigned char	mov64[3];
 };
 
@@ -40,8 +39,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
 	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
-	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
-				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
 
@@ -83,7 +80,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
-	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 44bb18adfb51..5476423fc6d0 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1060,7 +1060,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 	.read_pmc = xen_read_pmc,
 
 	.iret = xen_iret,
-	.usergs_sysret64 = xen_sysret64,
 
 	.load_tr_desc = paravirt_nop,
 	.set_ldt = xen_set_ldt,
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 1d054c915046..c0630fd9f44e 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -214,26 +214,6 @@ SYM_CODE_START(xen_iret)
 	jmp hypercall_iret
 SYM_CODE_END(xen_iret)
 
-SYM_CODE_START(xen_sysret64)
-	/*
-	 * We're already on the usermode stack at this point, but
-	 * still with the kernel gs, so we can easily switch back.
-	 *
-	 * tss.sp2 is scratch space.
-	 */
-	movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq $__USER_DS
-	pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	pushq %r11
-	pushq $__USER_CS
-	pushq %rcx
-
-	pushq $VGCF_in_syscall
-	jmp hypercall_iret
-SYM_CODE_END(xen_sysret64)
-
 /*
  * Xen handles syscall callbacks much like ordinary exceptions, which
  * means we have:
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 9546c3384c75..b2fd80a01a36 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -138,8 +138,6 @@ __visible unsigned long xen_read_cr2_direct(void);
 
 /* These are not functions, and cannot be called normally */
 __visible void xen_iret(void);
-__visible void xen_sysret32(void);
-__visible void xen_sysret64(void);
 
 extern int xen_panic_handler_init(void);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31992.62869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sI-0004Jn-M9; Fri, 20 Nov 2020 11:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31992.62869; Fri, 20 Nov 2020 11:46:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sI-0004Je-Hn; Fri, 20 Nov 2020 11:46:46 +0000
Received: by outflank-mailman (input) for mailman id 31992;
 Fri, 20 Nov 2020 11:46:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sH-00048T-5X
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b44df1a2-1f54-4dad-aae4-857dc435e269;
 Fri, 20 Nov 2020 11:46:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C23BCAD57;
 Fri, 20 Nov 2020 11:46:33 +0000 (UTC)
X-Inumbo-ID: b44df1a2-1f54-4dad-aae4-857dc435e269
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872794; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=fjHe3+rmjY5cASW1XAPrcDHWzGbizvpE61ZYWB4j3cw=;
	b=rHmTXjwVBmFJKyhCEqOmhgg9FGZswY8WJdm/UueRf8DddoT49obCKWkkpulXZkEmbMyr53
	+tF6wEHiJQUjLKUN11yRtxzw2OPKHPWiM+6wvnAdRmBLFldnpbyUuBKOY+QFrHkVnZVwFO
	BIZuxt25L9jBmyI/EuK0mKwTZgHx17c=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org,
	kvm@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: [PATCH v2 00/12] x86: major paravirt cleanup
Date: Fri, 20 Nov 2020 12:46:18 +0100
Message-Id: <20201120114630.13552-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a major cleanup of the paravirt infrastructure aiming at
eliminating all custom code patching via paravirt patching.

This is achieved by using ALTERNATIVE instead, which makes it
possible to give objtool access to the patched-in instructions.

In order to remove most of the 32-bit special handling from pvops,
the time-related operations are switched to use static_call() instead.

At the end of this series, all that paravirt patching has to do is
replace indirect calls with direct ones. In a further step this could
be switched to static_call() too, but that would require a major
header file disentangling.
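
The direction described above can be approximated in userspace C. A
plain function pointer only approximates static_call() (the kernel
patches the call site itself), and all names below are invented for
this sketch:

```c
/* Userspace analogue of the pvops -> static_call direction: call
 * sites go through one updatable call target that is redirected once
 * at init, instead of being re-dispatched through a pvops structure
 * on every call.  All identifiers are stand-ins. */

static unsigned long long mock_native_sched_clock(void) { return 100; }
static unsigned long long mock_xen_sched_clock(void)    { return 200; }

/* stands in for the static call target; the real mechanism rewrites
 * the call instruction rather than loading a pointer */
static unsigned long long (*mock_pv_sched_clock)(void) =
    mock_native_sched_clock;

static void mock_pv_time_init_xen(void)
{
    /* one-time redirection, like static_call_update() */
    mock_pv_sched_clock = mock_xen_sched_clock;
}
```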

Note that an updated objtool is needed for this series; otherwise
lots of warnings about alternative instructions modifying the stack
will be issued during the build.

Changes in V2:
- added patches 5-12

Juergen Gross (12):
  x86/xen: use specific Xen pv interrupt entry for MCE
  x86/xen: use specific Xen pv interrupt entry for DF
  x86/pv: switch SWAPGS to ALTERNATIVE
  x86/xen: drop USERGS_SYSRET64 paravirt call
  x86: rework arch_local_irq_restore() to not use popf
  x86/paravirt: switch time pvops functions to use static_call()
  x86: add new features for paravirt patching
  x86/paravirt: remove no longer needed 32-bit pvops cruft
  x86/paravirt: switch iret pvops to ALTERNATIVE
  x86/paravirt: add new macros PVOP_ALT* supporting pvops in
    ALTERNATIVEs
  x86/paravirt: switch functions with custom code to ALTERNATIVE
  x86/paravirt: have only one paravirt patch function

 arch/x86/Kconfig                      |   1 +
 arch/x86/entry/entry_32.S             |   4 +-
 arch/x86/entry/entry_64.S             |  32 ++--
 arch/x86/include/asm/cpufeatures.h    |   3 +
 arch/x86/include/asm/idtentry.h       |   6 +
 arch/x86/include/asm/irqflags.h       |  51 ++----
 arch/x86/include/asm/mshyperv.h       |  11 --
 arch/x86/include/asm/paravirt.h       | 170 ++++++--------------
 arch/x86/include/asm/paravirt_time.h  |  38 +++++
 arch/x86/include/asm/paravirt_types.h | 222 ++++++++++++--------------
 arch/x86/kernel/Makefile              |   3 +-
 arch/x86/kernel/alternative.c         |  30 +++-
 arch/x86/kernel/asm-offsets.c         |   8 -
 arch/x86/kernel/asm-offsets_64.c      |   3 -
 arch/x86/kernel/cpu/vmware.c          |   5 +-
 arch/x86/kernel/head_64.S             |   2 -
 arch/x86/kernel/irqflags.S            |  11 --
 arch/x86/kernel/kvm.c                 |   3 +-
 arch/x86/kernel/kvmclock.c            |   3 +-
 arch/x86/kernel/paravirt.c            |  70 +++-----
 arch/x86/kernel/paravirt_patch.c      | 109 -------------
 arch/x86/kernel/tsc.c                 |   3 +-
 arch/x86/xen/enlighten_pv.c           |  36 +++--
 arch/x86/xen/irq.c                    |  23 ---
 arch/x86/xen/time.c                   |  12 +-
 arch/x86/xen/xen-asm.S                |  52 +-----
 arch/x86/xen/xen-ops.h                |   3 -
 drivers/clocksource/hyperv_timer.c    |   5 +-
 drivers/xen/time.c                    |   3 +-
 kernel/sched/sched.h                  |   1 +
 30 files changed, 325 insertions(+), 598 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt_time.h
 delete mode 100644 arch/x86/kernel/paravirt_patch.c

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31993.62881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sL-0004NI-1l; Fri, 20 Nov 2020 11:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31993.62881; Fri, 20 Nov 2020 11:46:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sK-0004N4-UJ; Fri, 20 Nov 2020 11:46:48 +0000
Received: by outflank-mailman (input) for mailman id 31993;
 Fri, 20 Nov 2020 11:46:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sI-00049L-SH
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3613ed9-1f60-4a86-909c-26afcd0fa3e7;
 Fri, 20 Nov 2020 11:46:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A269AD71;
 Fri, 20 Nov 2020 11:46:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872795; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g5xOP26TwpjiZHzcgL2X/t7QnTZh8kU5ge7RPv9nrzo=;
	b=iQQxpHisPeLnREvar/ecUCKlhH2RtPvnocLSa+FHHHZhkRefsjOfeVmy4ZxWmpHNYpqVVT
	vtxf7NMNzaP1irxky8OLhFe0LdjfLYw4G0RTGrHl3Dju4QU4Ku76OMazxYrhJjZo6ELPhx
	hmE/3qjOxoNPDuKM7bqlagrBeSyqlOg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use popf
Date: Fri, 20 Nov 2020 12:46:23 +0100
Message-Id: <20201120114630.13552-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

"popf" is a rather expensive operation, so don't use it for restoring
irq flags. Instead, test whether interrupts are enabled in the flags
parameter and enable interrupts via "sti" in that case.

As a result, the restore_fl paravirt op is no longer needed.

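The new restore logic can be sketched in plain C. This is a simplified
model, not the kernel code: `irq_enabled` and the stub
`arch_local_irq_enable()` stand in for the real CPU interrupt state and
the "sti" instruction; X86_EFLAGS_IF has its architectural value:

```c
#define X86_EFLAGS_IF 0x0200UL	/* interrupt-enable bit in EFLAGS */

static int irq_enabled;		/* stand-in for the CPU interrupt flag */

static void arch_local_irq_enable(void)
{
	irq_enabled = 1;	/* real kernel: "sti" (or a pv call) */
}

static int arch_irqs_disabled_flags(unsigned long flags)
{
	return !(flags & X86_EFLAGS_IF);
}

/* Restore without "popf": only the IF bit matters, and interrupts are
 * known to be disabled on entry, so there is never anything to turn
 * off here -- at most, they need to be re-enabled. */
static void arch_local_irq_restore(unsigned long flags)
{
	if (!arch_irqs_disabled_flags(flags))
		arch_local_irq_enable();
}
```

This is why the one-sided check suffices: a conditional "sti" replaces
the full flags-register rewrite that "popf" performed.
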
Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/irqflags.h       | 20 ++++++-------------
 arch/x86/include/asm/paravirt.h       |  5 -----
 arch/x86/include/asm/paravirt_types.h |  7 ++-----
 arch/x86/kernel/irqflags.S            | 11 -----------
 arch/x86/kernel/paravirt.c            |  1 -
 arch/x86/kernel/paravirt_patch.c      |  3 ---
 arch/x86/xen/enlighten_pv.c           |  2 --
 arch/x86/xen/irq.c                    | 23 ----------------------
 arch/x86/xen/xen-asm.S                | 28 ---------------------------
 arch/x86/xen/xen-ops.h                |  1 -
 10 files changed, 8 insertions(+), 93 deletions(-)

diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index e585a4705b8d..144d70ea4393 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -35,15 +35,6 @@ extern __always_inline unsigned long native_save_fl(void)
 	return flags;
 }
 
-extern inline void native_restore_fl(unsigned long flags);
-extern inline void native_restore_fl(unsigned long flags)
-{
-	asm volatile("push %0 ; popf"
-		     : /* no output */
-		     :"g" (flags)
-		     :"memory", "cc");
-}
-
 static __always_inline void native_irq_disable(void)
 {
 	asm volatile("cli": : :"memory");
@@ -79,11 +70,6 @@ static __always_inline unsigned long arch_local_save_flags(void)
 	return native_save_fl();
 }
 
-static __always_inline void arch_local_irq_restore(unsigned long flags)
-{
-	native_restore_fl(flags);
-}
-
 static __always_inline void arch_local_irq_disable(void)
 {
 	native_irq_disable();
@@ -152,6 +138,12 @@ static __always_inline int arch_irqs_disabled(void)
 
 	return arch_irqs_disabled_flags(flags);
 }
+
+static __always_inline void arch_local_irq_restore(unsigned long flags)
+{
+	if (!arch_irqs_disabled_flags(flags))
+		arch_local_irq_enable();
+}
 #else
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_XEN_PV
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 8121cf9b8d81..ce2b8c5aecde 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -648,11 +648,6 @@ static inline notrace unsigned long arch_local_save_flags(void)
 	return PVOP_CALLEE0(unsigned long, irq.save_fl);
 }
 
-static inline notrace void arch_local_irq_restore(unsigned long f)
-{
-	PVOP_VCALLEE1(irq.restore_fl, f);
-}
-
 static inline notrace void arch_local_irq_disable(void)
 {
 	PVOP_VCALLEE0(irq.irq_disable);
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 55d8b7950e61..2031631160d0 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -169,16 +169,13 @@ struct pv_cpu_ops {
 struct pv_irq_ops {
 #ifdef CONFIG_PARAVIRT_XXL
 	/*
-	 * Get/set interrupt state.  save_fl and restore_fl are only
-	 * expected to use X86_EFLAGS_IF; all other bits
-	 * returned from save_fl are undefined, and may be ignored by
-	 * restore_fl.
+	 * Get/set interrupt state.  save_fl is expected to use X86_EFLAGS_IF;
+	 * all other bits returned from save_fl are undefined.
 	 *
 	 * NOTE: These functions callers expect the callee to preserve
 	 * more registers than the standard C calling convention.
 	 */
 	struct paravirt_callee_save save_fl;
-	struct paravirt_callee_save restore_fl;
 	struct paravirt_callee_save irq_disable;
 	struct paravirt_callee_save irq_enable;
 
diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S
index 0db0375235b4..8ef35063964b 100644
--- a/arch/x86/kernel/irqflags.S
+++ b/arch/x86/kernel/irqflags.S
@@ -13,14 +13,3 @@ SYM_FUNC_START(native_save_fl)
 	ret
 SYM_FUNC_END(native_save_fl)
 EXPORT_SYMBOL(native_save_fl)
-
-/*
- * void native_restore_fl(unsigned long flags)
- * %eax/%rdi: flags
- */
-SYM_FUNC_START(native_restore_fl)
-	push %_ASM_ARG1
-	popf
-	ret
-SYM_FUNC_END(native_restore_fl)
-EXPORT_SYMBOL(native_restore_fl)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 18560b71e717..c60222ab8ab9 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -320,7 +320,6 @@ struct paravirt_patch_template pv_ops = {
 
 	/* Irq ops. */
 	.irq.save_fl		= __PV_IS_CALLEE_SAVE(native_save_fl),
-	.irq.restore_fl		= __PV_IS_CALLEE_SAVE(native_restore_fl),
 	.irq.irq_disable	= __PV_IS_CALLEE_SAVE(native_irq_disable),
 	.irq.irq_enable		= __PV_IS_CALLEE_SAVE(native_irq_enable),
 	.irq.safe_halt		= native_safe_halt,
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index 2fada2c347c9..abd27ec67397 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -25,7 +25,6 @@ struct patch_xxl {
 	const unsigned char	mmu_read_cr2[3];
 	const unsigned char	mmu_read_cr3[3];
 	const unsigned char	mmu_write_cr3[3];
-	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	mov64[3];
 };
@@ -37,7 +36,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.mmu_read_cr2		= { 0x0f, 0x20, 0xd0 },	// mov %cr2, %[re]ax
 	.mmu_read_cr3		= { 0x0f, 0x20, 0xd8 },	// mov %cr3, %[re]ax
 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
-	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
@@ -71,7 +69,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	switch (type) {
 
 #ifdef CONFIG_PARAVIRT_XXL
-	PATCH_CASE(irq, restore_fl, xxl, insn_buff, len);
 	PATCH_CASE(irq, save_fl, xxl, insn_buff, len);
 	PATCH_CASE(irq, irq_enable, xxl, insn_buff, len);
 	PATCH_CASE(irq, irq_disable, xxl, insn_buff, len);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 5476423fc6d0..32b295cc2716 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1022,8 +1022,6 @@ void __init xen_setup_vcpu_info_placement(void)
 	 */
 	if (xen_have_vcpu_info_placement) {
 		pv_ops.irq.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
-		pv_ops.irq.restore_fl =
-			__PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
 		pv_ops.irq.irq_disable =
 			__PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
 		pv_ops.irq.irq_enable =
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 850c93f346c7..dfa091d79c2e 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -42,28 +42,6 @@ asmlinkage __visible unsigned long xen_save_fl(void)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_save_fl);
 
-__visible void xen_restore_fl(unsigned long flags)
-{
-	struct vcpu_info *vcpu;
-
-	/* convert from IF type flag */
-	flags = !(flags & X86_EFLAGS_IF);
-
-	/* See xen_irq_enable() for why preemption must be disabled. */
-	preempt_disable();
-	vcpu = this_cpu_read(xen_vcpu);
-	vcpu->evtchn_upcall_mask = flags;
-
-	if (flags == 0) {
-		barrier(); /* unmask then check (avoid races) */
-		if (unlikely(vcpu->evtchn_upcall_pending))
-			xen_force_evtchn_callback();
-		preempt_enable();
-	} else
-		preempt_enable_no_resched();
-}
-PV_CALLEE_SAVE_REGS_THUNK(xen_restore_fl);
-
 asmlinkage __visible void xen_irq_disable(void)
 {
 	/* There's a one instruction preempt window here.  We need to
@@ -118,7 +96,6 @@ static void xen_halt(void)
 
 static const struct pv_irq_ops xen_irq_ops __initconst = {
 	.save_fl = PV_CALLEE_SAVE(xen_save_fl),
-	.restore_fl = PV_CALLEE_SAVE(xen_restore_fl),
 	.irq_disable = PV_CALLEE_SAVE(xen_irq_disable),
 	.irq_enable = PV_CALLEE_SAVE(xen_irq_enable),
 
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index c0630fd9f44e..1ea7e41044b5 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -72,34 +72,6 @@ SYM_FUNC_START(xen_save_fl_direct)
 	ret
 SYM_FUNC_END(xen_save_fl_direct)
 
-
-/*
- * In principle the caller should be passing us a value return from
- * xen_save_fl_direct, but for robustness sake we test only the
- * X86_EFLAGS_IF flag rather than the whole byte. After setting the
- * interrupt mask state, it checks for unmasked pending events and
- * enters the hypervisor to get them delivered if so.
- */
-SYM_FUNC_START(xen_restore_fl_direct)
-	FRAME_BEGIN
-	testw $X86_EFLAGS_IF, %di
-	setz PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
-	/*
-	 * Preempt here doesn't matter because that will deal with any
-	 * pending interrupts.  The pending check may end up being run
-	 * on the wrong CPU, but that doesn't hurt.
-	 */
-
-	/* check for unmasked and pending */
-	cmpw $0x0001, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending
-	jnz 1f
-	call check_events
-1:
-	FRAME_END
-	ret
-SYM_FUNC_END(xen_restore_fl_direct)
-
-
 /*
  * Force an event check by making a hypercall, but preserve regs
  * before making the call.
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index b2fd80a01a36..8d7ec49a35fb 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -131,7 +131,6 @@ static inline void __init xen_efi_init(struct boot_params *boot_params)
 __visible void xen_irq_enable_direct(void);
 __visible void xen_irq_disable_direct(void);
 __visible unsigned long xen_save_fl_direct(void);
-__visible void xen_restore_fl_direct(unsigned long);
 
 __visible unsigned long xen_read_cr2(void);
 __visible unsigned long xen_read_cr2_direct(void);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31994.62893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sN-0004SS-Pr; Fri, 20 Nov 2020 11:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31994.62893; Fri, 20 Nov 2020 11:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sN-0004SH-Jd; Fri, 20 Nov 2020 11:46:51 +0000
Received: by outflank-mailman (input) for mailman id 31994;
 Fri, 20 Nov 2020 11:46:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sM-00048T-5p
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09a2fe4f-3661-4538-83c4-66e0625e790d;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 49798ADA2;
 Fri, 20 Nov 2020 11:46:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872796; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pHPg6q6EuLm17Am105F0Q74YhZolcr+6t2Iv0ftBmuw=;
	b=ol/HAPpN2w1ftXNp3JzIliV1uKPPjEG5o5Tget6fb0qMwf9eSSvGE1l81h55rnC8ofF+pQ
	i/J3VRYu5fIxCAuNEelPwuOiv0rlmNfYdXbBDhB3BKhRJRLX1D1Nggw8FFys7f4MOE77Fj
	ka0e6jpAG7XmZPURJQ25hVohFPsiwqA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 07/12] x86: add new features for paravirt patching
Date: Fri, 20 Nov 2020 12:46:25 +0100
Message-Id: <20201120114630.13552-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To be able to switch paravirt patching from special-cased custom code
sequences to ALTERNATIVE handling, some new X86_FEATURE_* flags are
needed. This makes it possible to use the standard indirect pv call as
the default code and to patch in the non-Xen custom code sequence via
ALTERNATIVE patching later.

Make sure paravirt patching is performed before alternative patching.

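The reason inverted feature bits are needed: ALTERNATIVE replaces code
only when the named feature is *set*, so keying native code on "not
running as Xen PV" requires a NOT_XENPV bit. A minimal C model of that
boot-time decision (the names `alternative_pick` and
`not_xenpv_feature` are illustrative, not the kernel's):

```c
#include <stdbool.h>

enum variant { INDIRECT_PV_CALL, NATIVE_CODE };

/* ALTERNATIVE keeps the original instructions unless the feature is
 * set, in which case the replacement is patched in at boot. */
static enum variant alternative_pick(bool feature_set)
{
	return feature_set ? NATIVE_CODE : INDIRECT_PV_CALL;
}

/* Model of what paravirt_set_cap() does: force the inverted feature
 * exactly when the condition for native code holds. */
static bool not_xenpv_feature(bool running_on_xen_pv)
{
	return !running_on_xen_pv;
}
```

With the inverted bit, bare metal (and non-Xen hypervisors) get the
native instructions patched in, while Xen PV keeps the indirect call.
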
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/cpufeatures.h |  3 +++
 arch/x86/kernel/alternative.c      | 28 ++++++++++++++++++++++++++--
 2 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dad350d42ecf..ffa23c655412 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -237,6 +237,9 @@
 #define X86_FEATURE_VMCALL		( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
 #define X86_FEATURE_VMW_VMMCALL		( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
 #define X86_FEATURE_SEV_ES		( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
+#define X86_FEATURE_NOT_XENPV		( 8*32+21) /* "" Inverse of X86_FEATURE_XENPV */
+#define X86_FEATURE_NO_PVUNLOCK		( 8*32+22) /* "" No PV unlock function */
+#define X86_FEATURE_NO_VCPUPREEMPT	( 8*32+23) /* "" No PV vcpu_is_preempted function */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
 #define X86_FEATURE_FSGSBASE		( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 2400ad62f330..f8f9700719cf 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -593,6 +593,18 @@ int alternatives_text_reserved(void *start, void *end)
 #endif /* CONFIG_SMP */
 
 #ifdef CONFIG_PARAVIRT
+static void __init paravirt_set_cap(void)
+{
+	if (!boot_cpu_has(X86_FEATURE_XENPV))
+		setup_force_cpu_cap(X86_FEATURE_NOT_XENPV);
+
+	if (pv_is_native_spin_unlock())
+		setup_force_cpu_cap(X86_FEATURE_NO_PVUNLOCK);
+
+	if (pv_is_native_vcpu_is_preempted())
+		setup_force_cpu_cap(X86_FEATURE_NO_VCPUPREEMPT);
+}
+
 void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 				     struct paravirt_patch_site *end)
 {
@@ -616,6 +628,8 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 }
 extern struct paravirt_patch_site __start_parainstructions[],
 	__stop_parainstructions[];
+#else
+static void __init paravirt_set_cap(void) { }
 #endif	/* CONFIG_PARAVIRT */
 
 /*
@@ -723,6 +737,18 @@ void __init alternative_instructions(void)
 	 * patching.
 	 */
 
+	paravirt_set_cap();
+
+	/*
+	 * First patch paravirt functions, such that we overwrite the indirect
+	 * call with the direct call.
+	 */
+	apply_paravirt(__parainstructions, __parainstructions_end);
+
+	/*
+	 * Then patch alternatives, such that those paravirt calls that are in
+	 * alternatives can be overwritten by their immediate fragments.
+	 */
 	apply_alternatives(__alt_instructions, __alt_instructions_end);
 
 #ifdef CONFIG_SMP
@@ -741,8 +767,6 @@ void __init alternative_instructions(void)
 	}
 #endif
 
-	apply_paravirt(__parainstructions, __parainstructions_end);
-
 	restart_nmi();
 	alternatives_patched = 1;
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31996.62904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sP-0004WI-7U; Fri, 20 Nov 2020 11:46:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31996.62904; Fri, 20 Nov 2020 11:46:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sP-0004W1-2d; Fri, 20 Nov 2020 11:46:53 +0000
Received: by outflank-mailman (input) for mailman id 31996;
 Fri, 20 Nov 2020 11:46:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sN-00049L-SW
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cddb6a84-2b34-4457-ba77-fccafcf6d3da;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A72A5ADAA;
 Fri, 20 Nov 2020 11:46:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872796; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HUMrirDSxnra8Oma+gri0UEDo4d8qyXqhA5nloZ6vbU=;
	b=gUIrbTY9TW4PtgOvIXmi3Kinifx9I/MC/3Bm5fu/2nYpCQbcNhtCPe+olR5OMdkTDqCBSe
	YNppQla2QC8CZ4kW3bSq9kEZGLxurWenNCQXFT87CWxa/mYVwLiGE+4/QkoBJ+uPuZhRG9
	dPHk/vNhjv5xh4NOOfar3awwKshGFsU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>
Subject: [PATCH v2 08/12] x86/paravirt: remove no longer needed 32-bit pvops cruft
Date: Fri, 20 Nov 2020 12:46:26 +0100
Message-Id: <20201120114630.13552-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

PVOP_VCALL4() is only used for Xen PV, while PVOP_CALL4() isn't used
at all. Keep PVOP_CALL4() for 64-bit due to symmetry reasons.

This allows removing the 32-bit definitions of those macros, leading
to a substantial simplification of the paravirt macros, as those were
the only ones needing non-empty "pre" and "post" parameters.

PVOP_CALLEE2() and PVOP_VCALLEE2() are used nowhere, so remove them.

Another case that is no longer needed is the special handling of
return types larger than unsigned long. Replace that with a
BUILD_BUG_ON().

DISABLE_INTERRUPTS() is used in 32-bit code only, so it can just be
replaced by "cli".

INTERRUPT_RETURN in 32-bit code can be replaced by "iret".

GET_CR2_INTO_AX and ENABLE_INTERRUPTS are used nowhere, so they can
be removed.

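The runtime check for oversized return types becomes a compile-time
guard. The same idea can be sketched with C11's _Static_assert (in the
kernel this is BUILD_BUG_ON(); the macro name `PVOP_RET_FITS` here is
hypothetical):

```c
/* Compile-time guard: a pvop return value must fit in one register,
 * so the old 32-bit two-register (%edx:%eax) return path is dead.
 * _Static_assert is the plain C11 spelling of BUILD_BUG_ON(). */
#define PVOP_RET_FITS(rettype) \
	_Static_assert(sizeof(rettype) <= sizeof(unsigned long), \
		       #rettype " too large for a pvop return value")

PVOP_RET_FITS(unsigned long);
PVOP_RET_FITS(int);
/* A two-word struct would now fail at build time instead of silently
 * taking the removed %edx:%eax path. */
```

The guard costs nothing at runtime and turns a latent misuse into a
build failure.
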
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/entry_32.S             |   4 +-
 arch/x86/include/asm/irqflags.h       |   5 --
 arch/x86/include/asm/paravirt.h       |  46 +----------
 arch/x86/include/asm/paravirt_types.h | 112 ++++++++------------------
 arch/x86/kernel/asm-offsets.c         |   3 -
 arch/x86/kernel/head_64.S             |   2 -
 6 files changed, 35 insertions(+), 137 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index df8c017e6161..765487e57d6e 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -430,7 +430,7 @@
 	 * will soon execute iret and the tracer was already set to
 	 * the irqstate after the IRET:
 	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
+	cli
 	lss	(%esp), %esp			/* switch to espfix segment */
 .Lend_\@:
 #endif /* CONFIG_X86_ESPFIX32 */
@@ -1077,7 +1077,7 @@ restore_all_switch_stack:
 	 * when returning from IPI handler and when returning from
 	 * scheduler to user-space.
 	 */
-	INTERRUPT_RETURN
+	iret
 
 .section .fixup, "ax"
 SYM_CODE_START(asm_iret_error)
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 144d70ea4393..a0efbcd24b86 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -109,9 +109,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 }
 #else
 
-#define ENABLE_INTERRUPTS(x)	sti
-#define DISABLE_INTERRUPTS(x)	cli
-
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(x)		pushfq; popq %rax
@@ -119,8 +116,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 
 #define INTERRUPT_RETURN	jmp native_iret
 
-#else
-#define INTERRUPT_RETURN		iret
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 01b3e36462c3..1dd30c95505d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -692,6 +692,7 @@ extern void default_banner(void);
 	.if ((~(set)) & mask); pop %reg; .endif
 
 #ifdef CONFIG_X86_64
+#ifdef CONFIG_PARAVIRT_XXL
 
 #define PV_SAVE_REGS(set)			\
 	COND_PUSH(set, CLBR_RAX, rax);		\
@@ -717,46 +718,12 @@ extern void default_banner(void);
 #define PARA_PATCH(off)		((off) / 8)
 #define PARA_SITE(ptype, ops)	_PVSITE(ptype, ops, .quad, 8)
 #define PARA_INDIRECT(addr)	*addr(%rip)
-#else
-#define PV_SAVE_REGS(set)			\
-	COND_PUSH(set, CLBR_EAX, eax);		\
-	COND_PUSH(set, CLBR_EDI, edi);		\
-	COND_PUSH(set, CLBR_ECX, ecx);		\
-	COND_PUSH(set, CLBR_EDX, edx)
-#define PV_RESTORE_REGS(set)			\
-	COND_POP(set, CLBR_EDX, edx);		\
-	COND_POP(set, CLBR_ECX, ecx);		\
-	COND_POP(set, CLBR_EDI, edi);		\
-	COND_POP(set, CLBR_EAX, eax)
-
-#define PARA_PATCH(off)		((off) / 4)
-#define PARA_SITE(ptype, ops)	_PVSITE(ptype, ops, .long, 4)
-#define PARA_INDIRECT(addr)	*%cs:addr
-#endif
 
-#ifdef CONFIG_PARAVIRT_XXL
 #define INTERRUPT_RETURN						\
 	PARA_SITE(PARA_PATCH(PV_CPU_iret),				\
 		  ANNOTATE_RETPOLINE_SAFE;				\
 		  jmp PARA_INDIRECT(pv_ops+PV_CPU_iret);)
 
-#define DISABLE_INTERRUPTS(clobbers)					\
-	PARA_SITE(PARA_PATCH(PV_IRQ_irq_disable),			\
-		  PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);		\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_IRQ_irq_disable);	\
-		  PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
-
-#define ENABLE_INTERRUPTS(clobbers)					\
-	PARA_SITE(PARA_PATCH(PV_IRQ_irq_enable),			\
-		  PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);		\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_IRQ_irq_enable);		\
-		  PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
-#endif
-
-#ifdef CONFIG_X86_64
-#ifdef CONFIG_PARAVIRT_XXL
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
 	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
@@ -768,17 +735,6 @@ extern void default_banner(void);
 #endif /* CONFIG_PARAVIRT_XXL */
 #endif	/* CONFIG_X86_64 */
 
-#ifdef CONFIG_PARAVIRT_XXL
-
-#define GET_CR2_INTO_AX							\
-	PARA_SITE(PARA_PATCH(PV_MMU_read_cr2),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_MMU_read_cr2);		\
-		 )
-
-#endif /* CONFIG_PARAVIRT_XXL */
-
-
 #endif /* __ASSEMBLY__ */
 #else  /* CONFIG_PARAVIRT */
 # define default_banner x86_init_noop
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 01af7b944224..b86acbb6449f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -471,55 +471,34 @@ int paravirt_disable_iospace(void);
 	})
 
 
-#define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr,		\
-		      pre, post, ...)					\
+#define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr, ...)	\
 	({								\
-		rettype __ret;						\
 		PVOP_CALL_ARGS;						\
 		PVOP_TEST_NULL(op);					\
-		/* This is 32-bit specific, but is okay in 64-bit */	\
-		/* since this condition will never hold */		\
-		if (sizeof(rettype) > sizeof(unsigned long)) {		\
-			asm volatile(pre				\
-				     paravirt_alt(PARAVIRT_CALL)	\
-				     post				\
-				     : call_clbr, ASM_CALL_CONSTRAINT	\
-				     : paravirt_type(op),		\
-				       paravirt_clobber(clbr),		\
-				       ##__VA_ARGS__			\
-				     : "memory", "cc" extra_clbr);	\
-			__ret = (rettype)((((u64)__edx) << 32) | __eax); \
-		} else {						\
-			asm volatile(pre				\
-				     paravirt_alt(PARAVIRT_CALL)	\
-				     post				\
-				     : call_clbr, ASM_CALL_CONSTRAINT	\
-				     : paravirt_type(op),		\
-				       paravirt_clobber(clbr),		\
-				       ##__VA_ARGS__			\
-				     : "memory", "cc" extra_clbr);	\
-			__ret = (rettype)(__eax & PVOP_RETMASK(rettype));	\
-		}							\
-		__ret;							\
+		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
+		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
+			     : call_clbr, ASM_CALL_CONSTRAINT		\
+			     : paravirt_type(op),			\
+			       paravirt_clobber(clbr),			\
+			       ##__VA_ARGS__				\
+			     : "memory", "cc" extra_clbr);		\
+		(rettype)(__eax & PVOP_RETMASK(rettype));		\
 	})
 
-#define __PVOP_CALL(rettype, op, pre, post, ...)			\
+#define __PVOP_CALL(rettype, op, ...)					\
 	____PVOP_CALL(rettype, op, CLBR_ANY, PVOP_CALL_CLOBBERS,	\
-		      EXTRA_CLOBBERS, pre, post, ##__VA_ARGS__)
+		      EXTRA_CLOBBERS, ##__VA_ARGS__)
 
-#define __PVOP_CALLEESAVE(rettype, op, pre, post, ...)			\
+#define __PVOP_CALLEESAVE(rettype, op, ...)				\
 	____PVOP_CALL(rettype, op.func, CLBR_RET_REG,			\
-		      PVOP_CALLEE_CLOBBERS, ,				\
-		      pre, post, ##__VA_ARGS__)
+		      PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
 
 
-#define ____PVOP_VCALL(op, clbr, call_clbr, extra_clbr, pre, post, ...)	\
+#define ____PVOP_VCALL(op, clbr, call_clbr, extra_clbr, ...)		\
 	({								\
 		PVOP_VCALL_ARGS;					\
 		PVOP_TEST_NULL(op);					\
-		asm volatile(pre					\
-			     paravirt_alt(PARAVIRT_CALL)		\
-			     post					\
+		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
 			     : call_clbr, ASM_CALL_CONSTRAINT		\
 			     : paravirt_type(op),			\
 			       paravirt_clobber(clbr),			\
@@ -527,84 +506,57 @@ int paravirt_disable_iospace(void);
 			     : "memory", "cc" extra_clbr);		\
 	})
 
-#define __PVOP_VCALL(op, pre, post, ...)				\
+#define __PVOP_VCALL(op, ...)						\
 	____PVOP_VCALL(op, CLBR_ANY, PVOP_VCALL_CLOBBERS,		\
-		       VEXTRA_CLOBBERS,					\
-		       pre, post, ##__VA_ARGS__)
+		       VEXTRA_CLOBBERS, ##__VA_ARGS__)
 
-#define __PVOP_VCALLEESAVE(op, pre, post, ...)				\
+#define __PVOP_VCALLEESAVE(op, ...)					\
 	____PVOP_VCALL(op.func, CLBR_RET_REG,				\
-		      PVOP_VCALLEE_CLOBBERS, ,				\
-		      pre, post, ##__VA_ARGS__)
+		      PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
 
 
 
 #define PVOP_CALL0(rettype, op)						\
-	__PVOP_CALL(rettype, op, "", "")
+	__PVOP_CALL(rettype, op)
 #define PVOP_VCALL0(op)							\
-	__PVOP_VCALL(op, "", "")
+	__PVOP_VCALL(op)
 
 #define PVOP_CALLEE0(rettype, op)					\
-	__PVOP_CALLEESAVE(rettype, op, "", "")
+	__PVOP_CALLEESAVE(rettype, op)
 #define PVOP_VCALLEE0(op)						\
-	__PVOP_VCALLEESAVE(op, "", "")
+	__PVOP_VCALLEESAVE(op)
 
 
 #define PVOP_CALL1(rettype, op, arg1)					\
-	__PVOP_CALL(rettype, op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALL1(op, arg1)						\
-	__PVOP_VCALL(op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1))
 
 #define PVOP_CALLEE1(rettype, op, arg1)					\
-	__PVOP_CALLEESAVE(rettype, op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_CALLEESAVE(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALLEE1(op, arg1)						\
-	__PVOP_VCALLEESAVE(op, "", "", PVOP_CALL_ARG1(arg1))
+	__PVOP_VCALLEESAVE(op, PVOP_CALL_ARG1(arg1))
 
 
 #define PVOP_CALL2(rettype, op, arg1, arg2)				\
-	__PVOP_CALL(rettype, op, "", "", PVOP_CALL_ARG1(arg1),		\
-		    PVOP_CALL_ARG2(arg2))
+	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
 #define PVOP_VCALL2(op, arg1, arg2)					\
-	__PVOP_VCALL(op, "", "", PVOP_CALL_ARG1(arg1),			\
-		     PVOP_CALL_ARG2(arg2))
-
-#define PVOP_CALLEE2(rettype, op, arg1, arg2)				\
-	__PVOP_CALLEESAVE(rettype, op, "", "", PVOP_CALL_ARG1(arg1),	\
-			  PVOP_CALL_ARG2(arg2))
-#define PVOP_VCALLEE2(op, arg1, arg2)					\
-	__PVOP_VCALLEESAVE(op, "", "", PVOP_CALL_ARG1(arg1),		\
-			   PVOP_CALL_ARG2(arg2))
-
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2))
 
 #define PVOP_CALL3(rettype, op, arg1, arg2, arg3)			\
-	__PVOP_CALL(rettype, op, "", "", PVOP_CALL_ARG1(arg1),		\
+	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1),			\
 		    PVOP_CALL_ARG2(arg2), PVOP_CALL_ARG3(arg3))
 #define PVOP_VCALL3(op, arg1, arg2, arg3)				\
-	__PVOP_VCALL(op, "", "", PVOP_CALL_ARG1(arg1),			\
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1),				\
 		     PVOP_CALL_ARG2(arg2), PVOP_CALL_ARG3(arg3))
 
-/* This is the only difference in x86_64. We can make it much simpler */
-#ifdef CONFIG_X86_32
 #define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4)			\
 	__PVOP_CALL(rettype, op,					\
-		    "push %[_arg4];", "lea 4(%%esp),%%esp;",		\
-		    PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),		\
-		    PVOP_CALL_ARG3(arg3), [_arg4] "mr" ((u32)(arg4)))
-#define PVOP_VCALL4(op, arg1, arg2, arg3, arg4)				\
-	__PVOP_VCALL(op,						\
-		    "push %[_arg4];", "lea 4(%%esp),%%esp;",		\
-		    "0" ((u32)(arg1)), "1" ((u32)(arg2)),		\
-		    "2" ((u32)(arg3)), [_arg4] "mr" ((u32)(arg4)))
-#else
-#define PVOP_CALL4(rettype, op, arg1, arg2, arg3, arg4)			\
-	__PVOP_CALL(rettype, op, "", "",				\
 		    PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),		\
 		    PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4))
 #define PVOP_VCALL4(op, arg1, arg2, arg3, arg4)				\
-	__PVOP_VCALL(op, "", "",					\
-		     PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),	\
+	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1), PVOP_CALL_ARG2(arg2),	\
 		     PVOP_CALL_ARG3(arg3), PVOP_CALL_ARG4(arg4))
-#endif
 
 /* Lazy mode for batching updates / context switch */
 enum paravirt_lazy_mode {
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 70b7154f4bdd..736508004b30 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -63,10 +63,7 @@ static void __used common(void)
 
 #ifdef CONFIG_PARAVIRT_XXL
 	BLANK();
-	OFFSET(PV_IRQ_irq_disable, paravirt_patch_template, irq.irq_disable);
-	OFFSET(PV_IRQ_irq_enable, paravirt_patch_template, irq.irq_enable);
 	OFFSET(PV_CPU_iret, paravirt_patch_template, cpu.iret);
-	OFFSET(PV_MMU_read_cr2, paravirt_patch_template, mmu.read_cr2);
 #endif
 
 #ifdef CONFIG_XEN
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 3c417734790f..ccb3a16ae6d0 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -29,10 +29,8 @@
 #ifdef CONFIG_PARAVIRT_XXL
 #include <asm/asm-offsets.h>
 #include <asm/paravirt.h>
-#define GET_CR2_INTO(reg) GET_CR2_INTO_AX ; _ASM_MOV %_ASM_AX, reg
 #else
 #define INTERRUPT_RETURN iretq
-#define GET_CR2_INTO(reg) _ASM_MOV %cr2, reg
 #endif
 
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:46:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.31997.62917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sS-0004dP-TF; Fri, 20 Nov 2020 11:46:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 31997.62917; Fri, 20 Nov 2020 11:46:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sS-0004d0-Ka; Fri, 20 Nov 2020 11:46:56 +0000
Received: by outflank-mailman (input) for mailman id 31997;
 Fri, 20 Nov 2020 11:46:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sR-00048T-6F
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:46:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e8358de-21ba-476e-a593-0f36788454b9;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 10C79AD89;
 Fri, 20 Nov 2020 11:46:36 +0000 (UTC)
X-Inumbo-ID: 5e8358de-21ba-476e-a593-0f36788454b9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872796; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6h75cJ6hW/g0BHQC3u27wm1+Z2/S1ONImoNRdCX4YvI=;
	b=QP/BG5ujOKxmmUSP7oXldFmPl9nNGKmTun/Fr3P44ouLG0mnuu7kt4LHiIIq7QjKsTJQKK
	mJcL8UWeRWXKkYvNYE4Cn2iJV/YzTxrtpbsaMihX4lL+BR1qg44fQqOYL9/u8+g6B0POMG
	4B2TjS0ZEcL9tb8rgc9CjC/PMRMxCzA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: [PATCH v2 06/12] x86/paravirt: switch time pvops functions to use static_call()
Date: Fri, 20 Nov 2020 12:46:24 +0100
Message-Id: <20201120114630.13552-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The time pvops functions are the only ones left which might be
used in 32-bit mode and which return a 64-bit value.

Switch them to use the static_call() mechanism instead of pvops,
as this allows considerable simplification of the pvops
implementation.

Due to include hell this requires splitting the time interfaces
out into a new header file.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/Kconfig                      |  1 +
 arch/x86/include/asm/mshyperv.h       | 11 --------
 arch/x86/include/asm/paravirt.h       | 14 ----------
 arch/x86/include/asm/paravirt_time.h  | 38 +++++++++++++++++++++++++++
 arch/x86/include/asm/paravirt_types.h |  6 -----
 arch/x86/kernel/cpu/vmware.c          |  5 ++--
 arch/x86/kernel/kvm.c                 |  3 ++-
 arch/x86/kernel/kvmclock.c            |  3 ++-
 arch/x86/kernel/paravirt.c            | 16 ++++++++---
 arch/x86/kernel/tsc.c                 |  3 ++-
 arch/x86/xen/time.c                   | 12 ++++-----
 drivers/clocksource/hyperv_timer.c    |  5 ++--
 drivers/xen/time.c                    |  3 ++-
 kernel/sched/sched.h                  |  1 +
 14 files changed, 71 insertions(+), 50 deletions(-)
 create mode 100644 arch/x86/include/asm/paravirt_time.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..56775acc243e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -767,6 +767,7 @@ if HYPERVISOR_GUEST
 
 config PARAVIRT
 	bool "Enable paravirtualization code"
+	depends on HAVE_STATIC_CALL
 	help
 	  This changes the kernel so it can modify itself when it is run
 	  under a hypervisor, potentially improving performance significantly
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index ffc289992d1b..45942d420626 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -56,17 +56,6 @@ typedef int (*hyperv_fill_flush_list_func)(
 #define hv_get_raw_timer() rdtsc_ordered()
 #define hv_get_vector() HYPERVISOR_CALLBACK_VECTOR
 
-/*
- * Reference to pv_ops must be inline so objtool
- * detection of noinstr violations can work correctly.
- */
-static __always_inline void hv_setup_sched_clock(void *sched_clock)
-{
-#ifdef CONFIG_PARAVIRT
-	pv_ops.time.sched_clock = sched_clock;
-#endif
-}
-
 void hyperv_vector_handler(struct pt_regs *regs);
 
 static inline void hv_enable_stimer0_percpu_irq(int irq) {}
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index ce2b8c5aecde..01b3e36462c3 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -17,25 +17,11 @@
 #include <linux/cpumask.h>
 #include <asm/frame.h>
 
-static inline unsigned long long paravirt_sched_clock(void)
-{
-	return PVOP_CALL0(unsigned long long, time.sched_clock);
-}
-
-struct static_key;
-extern struct static_key paravirt_steal_enabled;
-extern struct static_key paravirt_steal_rq_enabled;
-
 __visible void __native_queued_spin_unlock(struct qspinlock *lock);
 bool pv_is_native_spin_unlock(void);
 __visible bool __native_vcpu_is_preempted(long cpu);
 bool pv_is_native_vcpu_is_preempted(void);
 
-static inline u64 paravirt_steal_clock(int cpu)
-{
-	return PVOP_CALL1(u64, time.steal_clock, cpu);
-}
-
 /* The paravirtualized I/O functions */
 static inline void slow_down_io(void)
 {
diff --git a/arch/x86/include/asm/paravirt_time.h b/arch/x86/include/asm/paravirt_time.h
new file mode 100644
index 000000000000..76cf94b7c899
--- /dev/null
+++ b/arch/x86/include/asm/paravirt_time.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_PARAVIRT_TIME_H
+#define _ASM_X86_PARAVIRT_TIME_H
+
+/* Time related para-virtualized functions. */
+
+#ifdef CONFIG_PARAVIRT
+
+#include <linux/types.h>
+#include <linux/jump_label.h>
+#include <linux/static_call.h>
+
+extern struct static_key paravirt_steal_enabled;
+extern struct static_key paravirt_steal_rq_enabled;
+
+u64 dummy_steal_clock(int cpu);
+u64 dummy_sched_clock(void);
+
+DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
+DECLARE_STATIC_CALL(pv_sched_clock, dummy_sched_clock);
+
+extern bool paravirt_using_native_sched_clock;
+
+void paravirt_set_sched_clock(u64 (*func)(void));
+
+static inline u64 paravirt_sched_clock(void)
+{
+	return static_call(pv_sched_clock)();
+}
+
+static inline u64 paravirt_steal_clock(int cpu)
+{
+	return static_call(pv_steal_clock)(cpu);
+}
+
+#endif /* CONFIG_PARAVIRT */
+
+#endif /* _ASM_X86_PARAVIRT_TIME_H */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 2031631160d0..01af7b944224 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -96,11 +96,6 @@ struct pv_lazy_ops {
 } __no_randomize_layout;
 #endif
 
-struct pv_time_ops {
-	unsigned long long (*sched_clock)(void);
-	unsigned long long (*steal_clock)(int cpu);
-} __no_randomize_layout;
-
 struct pv_cpu_ops {
 	/* hooks for various privileged instructions */
 	void (*io_delay)(void);
@@ -292,7 +287,6 @@ struct pv_lock_ops {
  * what to patch. */
 struct paravirt_patch_template {
 	struct pv_init_ops	init;
-	struct pv_time_ops	time;
 	struct pv_cpu_ops	cpu;
 	struct pv_irq_ops	irq;
 	struct pv_mmu_ops	mmu;
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index 924571fe5864..f265426a1c3e 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -34,6 +34,7 @@
 #include <asm/apic.h>
 #include <asm/vmware.h>
 #include <asm/svm.h>
+#include <asm/paravirt_time.h>
 
 #undef pr_fmt
 #define pr_fmt(fmt)	"vmware: " fmt
@@ -336,11 +337,11 @@ static void __init vmware_paravirt_ops_setup(void)
 	vmware_cyc2ns_setup();
 
 	if (vmw_sched_clock)
-		pv_ops.time.sched_clock = vmware_sched_clock;
+		paravirt_set_sched_clock(vmware_sched_clock);
 
 	if (vmware_is_stealclock_available()) {
 		has_steal_clock = true;
-		pv_ops.time.steal_clock = vmware_steal_clock;
+		static_call_update(pv_steal_clock, vmware_steal_clock);
 
 		/* We use reboot notifier only to disable steal clock */
 		register_reboot_notifier(&vmware_pv_reboot_nb);
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 7f57ede3cb8e..6c525fdd0312 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -38,6 +38,7 @@
 #include <asm/cpuidle_haltpoll.h>
 #include <asm/ptrace.h>
 #include <asm/svm.h>
+#include <asm/paravirt_time.h>
 
 DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
 
@@ -650,7 +651,7 @@ static void __init kvm_guest_init(void)
 
 	if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		has_steal_clock = 1;
-		pv_ops.time.steal_clock = kvm_steal_clock;
+		static_call_update(pv_steal_clock, kvm_steal_clock);
 	}
 
 	if (pv_tlb_flush_supported()) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 34b18f6eeb2c..02f60ee16f10 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -22,6 +22,7 @@
 #include <asm/x86_init.h>
 #include <asm/reboot.h>
 #include <asm/kvmclock.h>
+#include <asm/paravirt_time.h>
 
 static int kvmclock __initdata = 1;
 static int kvmclock_vsyscall __initdata = 1;
@@ -107,7 +108,7 @@ static inline void kvm_sched_clock_init(bool stable)
 	if (!stable)
 		clear_sched_clock_stable();
 	kvm_sched_clock_offset = kvm_clock_read();
-	pv_ops.time.sched_clock = kvm_sched_clock_read;
+	paravirt_set_sched_clock(kvm_sched_clock_read);
 
 	pr_info("kvm-clock: using sched offset of %llu cycles",
 		kvm_sched_clock_offset);
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index c60222ab8ab9..9f8aa18aa378 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -31,6 +31,7 @@
 #include <asm/special_insns.h>
 #include <asm/tlb.h>
 #include <asm/io_bitmap.h>
+#include <asm/paravirt_time.h>
 
 /*
  * nop stub, which must not clobber anything *including the stack* to
@@ -167,6 +168,17 @@ static u64 native_steal_clock(int cpu)
 	return 0;
 }
 
+DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
+DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
+
+bool paravirt_using_native_sched_clock = true;
+
+void paravirt_set_sched_clock(u64 (*func)(void))
+{
+	static_call_update(pv_sched_clock, func);
+	paravirt_using_native_sched_clock = (func == native_sched_clock);
+}
+
 /* These are in entry.S */
 extern void native_iret(void);
 
@@ -272,10 +284,6 @@ struct paravirt_patch_template pv_ops = {
 	/* Init ops. */
 	.init.patch		= native_patch,
 
-	/* Time ops. */
-	.time.sched_clock	= native_sched_clock,
-	.time.steal_clock	= native_steal_clock,
-
 	/* Cpu ops. */
 	.cpu.io_delay		= native_io_delay,
 
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index f70dffc2771f..d01245b770de 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -28,6 +28,7 @@
 #include <asm/intel-family.h>
 #include <asm/i8259.h>
 #include <asm/uv/uv.h>
+#include <asm/paravirt_time.h>
 
 unsigned int __read_mostly cpu_khz;	/* TSC clocks / usec, not used here */
 EXPORT_SYMBOL(cpu_khz);
@@ -254,7 +255,7 @@ unsigned long long sched_clock(void)
 
 bool using_native_sched_clock(void)
 {
-	return pv_ops.time.sched_clock == native_sched_clock;
+	return paravirt_using_native_sched_clock;
 }
 #else
 unsigned long long
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 91f5b330dcc6..17e62f4f69a9 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -18,6 +18,7 @@
 #include <linux/timekeeper_internal.h>
 
 #include <asm/pvclock.h>
+#include <asm/paravirt_time.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 
@@ -379,11 +380,6 @@ void xen_timer_resume(void)
 	}
 }
 
-static const struct pv_time_ops xen_time_ops __initconst = {
-	.sched_clock = xen_sched_clock,
-	.steal_clock = xen_steal_clock,
-};
-
 static struct pvclock_vsyscall_time_info *xen_clock __read_mostly;
 static u64 xen_clock_value_saved;
 
@@ -528,7 +524,8 @@ static void __init xen_time_init(void)
 void __init xen_init_time_ops(void)
 {
 	xen_sched_clock_offset = xen_clocksource_read();
-	pv_ops.time = xen_time_ops;
+	static_call_update(pv_steal_clock, xen_steal_clock);
+	paravirt_set_sched_clock(xen_sched_clock);
 
 	x86_init.timers.timer_init = xen_time_init;
 	x86_init.timers.setup_percpu_clockev = x86_init_noop;
@@ -570,7 +567,8 @@ void __init xen_hvm_init_time_ops(void)
 	}
 
 	xen_sched_clock_offset = xen_clocksource_read();
-	pv_ops.time = xen_time_ops;
+	static_call_update(pv_steal_clock, xen_steal_clock);
+	paravirt_set_sched_clock(xen_sched_clock);
 	x86_init.timers.setup_percpu_clockev = xen_time_init;
 	x86_cpuinit.setup_percpu_clockev = xen_hvm_setup_cpu_clockevents;
 
diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
index ba04cb381cd3..1ed79993fc50 100644
--- a/drivers/clocksource/hyperv_timer.c
+++ b/drivers/clocksource/hyperv_timer.c
@@ -21,6 +21,7 @@
 #include <clocksource/hyperv_timer.h>
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
+#include <asm/paravirt_time.h>
 
 static struct clock_event_device __percpu *hv_clock_event;
 static u64 hv_sched_clock_offset __ro_after_init;
@@ -445,7 +446,7 @@ static bool __init hv_init_tsc_clocksource(void)
 	clocksource_register_hz(&hyperv_cs_tsc, NSEC_PER_SEC/100);
 
 	hv_sched_clock_offset = hv_read_reference_counter();
-	hv_setup_sched_clock(read_hv_sched_clock_tsc);
+	paravirt_set_sched_clock(read_hv_sched_clock_tsc);
 
 	return true;
 }
@@ -470,6 +471,6 @@ void __init hv_init_clocksource(void)
 	clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100);
 
 	hv_sched_clock_offset = hv_read_reference_counter();
-	hv_setup_sched_clock(read_hv_sched_clock_msr);
+	static_call_update(pv_sched_clock, read_hv_sched_clock_msr);
 }
 EXPORT_SYMBOL_GPL(hv_init_clocksource);
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 108edbcbc040..b35ce88427c9 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -9,6 +9,7 @@
 #include <linux/slab.h>
 
 #include <asm/paravirt.h>
+#include <asm/paravirt_time.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
 
@@ -175,7 +176,7 @@ void __init xen_time_setup_guest(void)
 	xen_runstate_remote = !HYPERVISOR_vm_assist(VMASST_CMD_enable,
 					VMASST_TYPE_runstate_update_flag);
 
-	pv_ops.time.steal_clock = xen_steal_clock;
+	static_call_update(pv_steal_clock, xen_steal_clock);
 
 	static_key_slow_inc(&paravirt_steal_enabled);
 	if (xen_runstate_remote)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index df80bfcea92e..1687c5383797 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -71,6 +71,7 @@
 
 #ifdef CONFIG_PARAVIRT
 # include <asm/paravirt.h>
+# include <asm/paravirt_time.h>
 #endif
 
 #include "cpupri.h"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:47:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32004.62929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sX-0004m6-HV; Fri, 20 Nov 2020 11:47:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32004.62929; Fri, 20 Nov 2020 11:47:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sX-0004ls-Am; Fri, 20 Nov 2020 11:47:01 +0000
Received: by outflank-mailman (input) for mailman id 32004;
 Fri, 20 Nov 2020 11:47:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sW-00048T-6M
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:47:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23341d06-2be7-4e0c-9872-f24490181e15;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 18599ADCA;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
X-Inumbo-ID: 23341d06-2be7-4e0c-9872-f24490181e15
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872797; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F0FxjLQb6W8kGCEjKtE8wMSw1R1pxGMsEFzboyPVoeE=;
	b=bN5ciPN9/DNdp7hpMnvdVggS8h8vSPrE1dpBLTKIao4WCmPdxE/YcHFGwJD6CPBJG5IRvZ
	W1Jvtp8CM17/auMrVrr1BNDfXKPXB2nvcpHoPKafpkAJ3PLKyVDh3moVELGw+VGDI4iD1x
	TfiHTB9PfaDUKUzjFg2n8SehAt/bAl0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 09/12] x86/paravirt: switch iret pvops to ALTERNATIVE
Date: Fri, 20 Nov 2020 12:46:27 +0100
Message-Id: <20201120114630.13552-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The iret paravirt op is rather special, as it uses a jmp instead
of a call instruction. Switch it to ALTERNATIVE.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt.h       |  7 ++++---
 arch/x86/include/asm/paravirt_types.h |  5 +----
 arch/x86/kernel/asm-offsets.c         |  5 -----
 arch/x86/kernel/paravirt.c            | 26 ++------------------------
 arch/x86/xen/enlighten_pv.c           |  3 +--
 5 files changed, 8 insertions(+), 38 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 1dd30c95505d..62fbcd899539 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -720,9 +720,10 @@ extern void default_banner(void);
 #define PARA_INDIRECT(addr)	*addr(%rip)
 
 #define INTERRUPT_RETURN						\
-	PARA_SITE(PARA_PATCH(PV_CPU_iret),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  jmp PARA_INDIRECT(pv_ops+PV_CPU_iret);)
+	ANNOTATE_RETPOLINE_SAFE;					\
+	ALTERNATIVE_2 "jmp *paravirt_iret(%rip);",			\
+		      "jmp native_iret;", X86_FEATURE_NOT_XENPV,	\
+		      "jmp xen_iret;", X86_FEATURE_XENPV
 
 #ifdef CONFIG_DEBUG_ENTRY
 #define SAVE_FLAGS(clobbers)                                        \
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index b86acbb6449f..b66650dc869e 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -152,10 +152,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-	/* Normal iret.  Jump to this with the standard iret stack
-	   frame set up. */
-	void (*iret)(void);
-
 	void (*start_context_switch)(struct task_struct *prev);
 	void (*end_context_switch)(struct task_struct *next);
 #endif
@@ -295,6 +291,7 @@ struct paravirt_patch_template {
 
 extern struct pv_info pv_info;
 extern struct paravirt_patch_template pv_ops;
+extern void (*paravirt_iret)(void);
 
 #define PARAVIRT_PATCH(x)					\
 	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 736508004b30..ecd3fd6993d1 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -61,11 +61,6 @@ static void __used common(void)
 	OFFSET(IA32_RT_SIGFRAME_sigcontext, rt_sigframe_ia32, uc.uc_mcontext);
 #endif
 
-#ifdef CONFIG_PARAVIRT_XXL
-	BLANK();
-	OFFSET(PV_CPU_iret, paravirt_patch_template, cpu.iret);
-#endif
-
 #ifdef CONFIG_XEN
 	BLANK();
 	OFFSET(XEN_vcpu_info_mask, vcpu_info, evtchn_upcall_mask);
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 9f8aa18aa378..2ab547dd66c3 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -86,25 +86,6 @@ u64 notrace _paravirt_ident_64(u64 x)
 {
 	return x;
 }
-
-static unsigned paravirt_patch_jmp(void *insn_buff, const void *target,
-				   unsigned long addr, unsigned len)
-{
-	struct branch *b = insn_buff;
-	unsigned long delta = (unsigned long)target - (addr+5);
-
-	if (len < 5) {
-#ifdef CONFIG_RETPOLINE
-		WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
-#endif
-		return len;	/* call too long for patch site */
-	}
-
-	b->opcode = 0xe9;	/* jmp */
-	b->delta = delta;
-
-	return 5;
-}
 #endif
 
 DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
@@ -136,9 +117,6 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	else if (opfunc == _paravirt_ident_64)
 		ret = paravirt_patch_ident_64(insn_buff, len);
 
-	else if (type == PARAVIRT_PATCH(cpu.iret))
-		/* If operation requires a jmp, then jmp */
-		ret = paravirt_patch_jmp(insn_buff, opfunc, addr, len);
 #endif
 	else
 		/* Otherwise call the function. */
@@ -316,8 +294,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.load_sp0		= native_load_sp0,
 
-	.cpu.iret		= native_iret,
-
 #ifdef CONFIG_X86_IOPL_IOPERM
 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,
 	.cpu.update_io_bitmap		= native_tss_update_io_bitmap,
@@ -422,6 +398,8 @@ struct paravirt_patch_template pv_ops = {
 NOKPROBE_SYMBOL(native_get_debugreg);
 NOKPROBE_SYMBOL(native_set_debugreg);
 NOKPROBE_SYMBOL(native_load_idt);
+
+void (*paravirt_iret)(void) = native_iret;
 #endif
 
 EXPORT_SYMBOL(pv_ops);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 32b295cc2716..4716383c64a9 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1057,8 +1057,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 
 	.read_pmc = xen_read_pmc,
 
-	.iret = xen_iret,
-
 	.load_tr_desc = paravirt_nop,
 	.set_ldt = xen_set_ldt,
 	.load_gdt = xen_load_gdt,
@@ -1222,6 +1220,7 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	pv_info = xen_info;
 	pv_ops.init.patch = paravirt_patch_default;
 	pv_ops.cpu = xen_cpu_ops;
+	paravirt_iret = xen_iret;
 	xen_init_irq_ops();
 
 	/*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:47:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:47:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32014.62941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sb-0004tw-To; Fri, 20 Nov 2020 11:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32014.62941; Fri, 20 Nov 2020 11:47:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sb-0004th-OF; Fri, 20 Nov 2020 11:47:05 +0000
Received: by outflank-mailman (input) for mailman id 32014;
 Fri, 20 Nov 2020 11:47:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sb-00048T-6Z
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:47:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b373ac3-0389-4a7c-87dd-2830ea35c45d;
 Fri, 20 Nov 2020 11:46:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73BF2ADD9;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
X-Inumbo-ID: 6b373ac3-0389-4a7c-87dd-2830ea35c45d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872797; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sS1EJKt51h291wm5JVNdsHCnbqIXSSiBfe36LpIoCrA=;
	b=WVjP1cXrvEv6BcOuYYRRPDo3jLHXRzTbMmG8iz26tq7H/AD6XGaCaEuWLh1HYP0KzOQojC
	TL5gEUnwW2xZHsOxw7AUv5t25FlWOGVjL7woGx7jIIthaq+3F+CLPtjKswaNYB80qYpmvG
	3n9/6uZzIwC91SbNbyTtK8/S+fhiOK4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 10/12] x86/paravirt: add new macros PVOP_ALT* supporting pvops in ALTERNATIVEs
Date: Fri, 20 Nov 2020 12:46:28 +0100
Message-Id: <20201120114630.13552-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using paravirt patching for custom code sequences, add
support for using ALTERNATIVE handling combined with paravirt call
patching.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt_types.h | 62 +++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index b66650dc869e..1f8e1b76e78b 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -482,14 +482,39 @@ int paravirt_disable_iospace(void);
 		(rettype)(__eax & PVOP_RETMASK(rettype));		\
 	})
 
+#define ____PVOP_ALT_CALL(rettype, op, alt, cond, clbr, call_clbr,	\
+			  extra_clbr, ...)				\
+	({								\
+		PVOP_CALL_ARGS;						\
+		PVOP_TEST_NULL(op);					\
+		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
+		asm volatile(ALTERNATIVE(paravirt_alt(PARAVIRT_CALL),	\
+					 alt, cond)			\
+			     : call_clbr, ASM_CALL_CONSTRAINT		\
+			     : paravirt_type(op),			\
+			       paravirt_clobber(clbr),			\
+			       ##__VA_ARGS__				\
+			     : "memory", "cc" extra_clbr);		\
+		(rettype)(__eax & PVOP_RETMASK(rettype));		\
+	})
+
 #define __PVOP_CALL(rettype, op, ...)					\
 	____PVOP_CALL(rettype, op, CLBR_ANY, PVOP_CALL_CLOBBERS,	\
 		      EXTRA_CLOBBERS, ##__VA_ARGS__)
 
+#define __PVOP_ALT_CALL(rettype, op, alt, cond, ...)			\
+	____PVOP_ALT_CALL(rettype, op, alt, cond, CLBR_ANY,		\
+			  PVOP_CALL_CLOBBERS, EXTRA_CLOBBERS,		\
+			  ##__VA_ARGS__)
+
 #define __PVOP_CALLEESAVE(rettype, op, ...)				\
 	____PVOP_CALL(rettype, op.func, CLBR_RET_REG,			\
 		      PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
 
+#define __PVOP_ALT_CALLEESAVE(rettype, op, alt, cond, ...)		\
+	____PVOP_ALT_CALL(rettype, op.func, alt, cond, CLBR_RET_REG,	\
+			  PVOP_CALLEE_CLOBBERS, , ##__VA_ARGS__)
+
 
 #define ____PVOP_VCALL(op, clbr, call_clbr, extra_clbr, ...)		\
 	({								\
@@ -503,36 +528,73 @@ int paravirt_disable_iospace(void);
 			     : "memory", "cc" extra_clbr);		\
 	})
 
+#define ____PVOP_ALT_VCALL(op, alt, cond, clbr, call_clbr,		\
+			   extra_clbr, ...)				\
+	({								\
+		PVOP_VCALL_ARGS;					\
+		PVOP_TEST_NULL(op);					\
+		asm volatile(ALTERNATIVE(paravirt_alt(PARAVIRT_CALL),	\
+					 alt, cond)			\
+			     : call_clbr, ASM_CALL_CONSTRAINT		\
+			     : paravirt_type(op),			\
+			       paravirt_clobber(clbr),			\
+			       ##__VA_ARGS__				\
+			     : "memory", "cc" extra_clbr);		\
+	})
+
 #define __PVOP_VCALL(op, ...)						\
 	____PVOP_VCALL(op, CLBR_ANY, PVOP_VCALL_CLOBBERS,		\
 		       VEXTRA_CLOBBERS, ##__VA_ARGS__)
 
+#define __PVOP_ALT_VCALL(op, alt, cond, ...)				\
+	____PVOP_ALT_VCALL(op, alt, cond, CLBR_ANY,			\
+			   PVOP_VCALL_CLOBBERS, VEXTRA_CLOBBERS,	\
+			   ##__VA_ARGS__)
+
 #define __PVOP_VCALLEESAVE(op, ...)					\
 	____PVOP_VCALL(op.func, CLBR_RET_REG,				\
 		      PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
 
+#define __PVOP_ALT_VCALLEESAVE(op, alt, cond, ...)			\
+	____PVOP_ALT_VCALL(op.func, alt, cond, CLBR_RET_REG,		\
+			   PVOP_VCALLEE_CLOBBERS, , ##__VA_ARGS__)
+
 
 
 #define PVOP_CALL0(rettype, op)						\
 	__PVOP_CALL(rettype, op)
 #define PVOP_VCALL0(op)							\
 	__PVOP_VCALL(op)
+#define PVOP_ALT_CALL0(rettype, op, alt, cond)				\
+	__PVOP_ALT_CALL(rettype, op, alt, cond)
+#define PVOP_ALT_VCALL0(op, alt, cond)					\
+	__PVOP_ALT_VCALL(op, alt, cond)
 
 #define PVOP_CALLEE0(rettype, op)					\
 	__PVOP_CALLEESAVE(rettype, op)
 #define PVOP_VCALLEE0(op)						\
 	__PVOP_VCALLEESAVE(op)
+#define PVOP_ALT_CALLEE0(rettype, op, alt, cond)			\
+	__PVOP_ALT_CALLEESAVE(rettype, op, alt, cond)
+#define PVOP_ALT_VCALLEE0(op, alt, cond)				\
+	__PVOP_ALT_VCALLEESAVE(op, alt, cond)
 
 
 #define PVOP_CALL1(rettype, op, arg1)					\
 	__PVOP_CALL(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALL1(op, arg1)						\
 	__PVOP_VCALL(op, PVOP_CALL_ARG1(arg1))
+#define PVOP_ALT_VCALL1(op, arg1, alt, cond)				\
+	__PVOP_ALT_VCALL(op, alt, cond, PVOP_CALL_ARG1(arg1))
 
 #define PVOP_CALLEE1(rettype, op, arg1)					\
 	__PVOP_CALLEESAVE(rettype, op, PVOP_CALL_ARG1(arg1))
 #define PVOP_VCALLEE1(op, arg1)						\
 	__PVOP_VCALLEESAVE(op, PVOP_CALL_ARG1(arg1))
+#define PVOP_ALT_CALLEE1(rettype, op, arg1, alt, cond)			\
+	__PVOP_ALT_CALLEESAVE(rettype, op, alt, cond, PVOP_CALL_ARG1(arg1))
+#define PVOP_ALT_VCALLEE1(op, arg1, alt, cond)				\
+	__PVOP_ALT_VCALLEESAVE(op, alt, cond, PVOP_CALL_ARG1(arg1))
 
 
 #define PVOP_CALL2(rettype, op, arg1, arg2)				\
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:47:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:47:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32020.62953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sh-00052W-BE; Fri, 20 Nov 2020 11:47:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32020.62953; Fri, 20 Nov 2020 11:47:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sh-00052N-5t; Fri, 20 Nov 2020 11:47:11 +0000
Received: by outflank-mailman (input) for mailman id 32020;
 Fri, 20 Nov 2020 11:47:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sg-00048T-6s
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:47:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a84e2d65-5ee0-4c43-8545-5a201f8e134f;
 Fri, 20 Nov 2020 11:46:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CC0F6ADCD;
 Fri, 20 Nov 2020 11:46:37 +0000 (UTC)
X-Inumbo-ID: a84e2d65-5ee0-4c43-8545-5a201f8e134f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872797; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZDc33xEnygoFQchTLdLhL9JFXlNStMFG7ljURXXS5PI=;
	b=fmZ+YORWKumSs0M7SYUSpoE2kob7jkjo2CI7xPYb72keOEp+3+4UqS5DGug8VSeshYX+B/
	+GTiGGo/+kIyd3BrYoEzZcK0ttLuoe4gbs5CRgs4BrqkkxkSJRAOvByCRVcaU0bLRG/8h1
	pO6X8uFaPWJJsO8e294G7o8aYZLc5sk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH v2 11/12] x86/paravirt: switch functions with custom code to ALTERNATIVE
Date: Fri, 20 Nov 2020 12:46:29 +0100
Message-Id: <20201120114630.13552-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using paravirt patching for custom code sequences, use
ALTERNATIVE for the functions with custom code replacements.

Instead of patching a ud2 instruction for unpopulated vector entries
into the caller site, use a simple function just calling BUG() as a
replacement.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt.h       | 73 ++++++++++++++--------
 arch/x86/include/asm/paravirt_types.h |  1 -
 arch/x86/kernel/paravirt.c            | 16 ++---
 arch/x86/kernel/paravirt_patch.c      | 88 ---------------------------
 4 files changed, 54 insertions(+), 124 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 62fbcd899539..500c64d0cfcb 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -108,7 +108,8 @@ static inline void write_cr0(unsigned long x)
 
 static inline unsigned long read_cr2(void)
 {
-	return PVOP_CALLEE0(unsigned long, mmu.read_cr2);
+	return PVOP_ALT_CALLEE0(unsigned long, mmu.read_cr2,
+				"mov %%cr2, %%rax;", X86_FEATURE_NOT_XENPV);
 }
 
 static inline void write_cr2(unsigned long x)
@@ -118,12 +119,14 @@ static inline void write_cr2(unsigned long x)
 
 static inline unsigned long __read_cr3(void)
 {
-	return PVOP_CALL0(unsigned long, mmu.read_cr3);
+	return PVOP_ALT_CALL0(unsigned long, mmu.read_cr3,
+			      "mov %%cr3, %%rax;", X86_FEATURE_NOT_XENPV);
 }
 
 static inline void write_cr3(unsigned long x)
 {
-	PVOP_VCALL1(mmu.write_cr3, x);
+	PVOP_ALT_VCALL1(mmu.write_cr3, x,
+			"mov %%rdi, %%cr3", X86_FEATURE_NOT_XENPV);
 }
 
 static inline void __write_cr4(unsigned long x)
@@ -143,7 +146,7 @@ static inline void halt(void)
 
 static inline void wbinvd(void)
 {
-	PVOP_VCALL0(cpu.wbinvd);
+	PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", X86_FEATURE_NOT_XENPV);
 }
 
 static inline u64 paravirt_read_msr(unsigned msr)
@@ -357,22 +360,28 @@ static inline void paravirt_release_p4d(unsigned long pfn)
 
 static inline pte_t __pte(pteval_t val)
 {
-	return (pte_t) { PVOP_CALLEE1(pteval_t, mmu.make_pte, val) };
+	return (pte_t) { PVOP_ALT_CALLEE1(pteval_t, mmu.make_pte, val,
+					  "mov %%rdi, %%rax",
+					  X86_FEATURE_NOT_XENPV) };
 }
 
 static inline pteval_t pte_val(pte_t pte)
 {
-	return PVOP_CALLEE1(pteval_t, mmu.pte_val, pte.pte);
+	return PVOP_ALT_CALLEE1(pteval_t, mmu.pte_val, pte.pte,
+				"mov %%rdi, %%rax", X86_FEATURE_NOT_XENPV);
 }
 
 static inline pgd_t __pgd(pgdval_t val)
 {
-	return (pgd_t) { PVOP_CALLEE1(pgdval_t, mmu.make_pgd, val) };
+	return (pgd_t) { PVOP_ALT_CALLEE1(pgdval_t, mmu.make_pgd, val,
+					  "mov %%rdi, %%rax",
+					  X86_FEATURE_NOT_XENPV) };
 }
 
 static inline pgdval_t pgd_val(pgd_t pgd)
 {
-	return PVOP_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd);
+	return PVOP_ALT_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd,
+				"mov %%rdi, %%rax", X86_FEATURE_NOT_XENPV);
 }
 
 #define  __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
@@ -405,12 +414,15 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 
 static inline pmd_t __pmd(pmdval_t val)
 {
-	return (pmd_t) { PVOP_CALLEE1(pmdval_t, mmu.make_pmd, val) };
+	return (pmd_t) { PVOP_ALT_CALLEE1(pmdval_t, mmu.make_pmd, val,
+					  "mov %%rdi, %%rax",
+					  X86_FEATURE_NOT_XENPV) };
 }
 
 static inline pmdval_t pmd_val(pmd_t pmd)
 {
-	return PVOP_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd);
+	return PVOP_ALT_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd,
+				"mov %%rdi, %%rax", X86_FEATURE_NOT_XENPV);
 }
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
@@ -422,14 +434,16 @@ static inline pud_t __pud(pudval_t val)
 {
 	pudval_t ret;
 
-	ret = PVOP_CALLEE1(pudval_t, mmu.make_pud, val);
+	ret = PVOP_ALT_CALLEE1(pudval_t, mmu.make_pud, val,
+			       "mov %%rdi, %%rax", X86_FEATURE_NOT_XENPV);
 
 	return (pud_t) { ret };
 }
 
 static inline pudval_t pud_val(pud_t pud)
 {
-	return PVOP_CALLEE1(pudval_t, mmu.pud_val, pud.pud);
+	return PVOP_ALT_CALLEE1(pudval_t, mmu.pud_val, pud.pud,
+				"mov %%rdi, %%rax", X86_FEATURE_NOT_XENPV);
 }
 
 static inline void pud_clear(pud_t *pudp)
@@ -448,14 +462,17 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 
 static inline p4d_t __p4d(p4dval_t val)
 {
-	p4dval_t ret = PVOP_CALLEE1(p4dval_t, mmu.make_p4d, val);
+	p4dval_t ret = PVOP_ALT_CALLEE1(p4dval_t, mmu.make_p4d, val,
+					"mov %%rdi, %%rax",
+					X86_FEATURE_NOT_XENPV);
 
 	return (p4d_t) { ret };
 }
 
 static inline p4dval_t p4d_val(p4d_t p4d)
 {
-	return PVOP_CALLEE1(p4dval_t, mmu.p4d_val, p4d.p4d);
+	return PVOP_ALT_CALLEE1(p4dval_t, mmu.p4d_val, p4d.p4d,
+				"mov %%rdi, %%rax", X86_FEATURE_NOT_XENPV);
 }
 
 static inline void __set_pgd(pgd_t *pgdp, pgd_t pgd)
@@ -542,7 +559,9 @@ static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
 
 static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
 {
-	PVOP_VCALLEE1(lock.queued_spin_unlock, lock);
+	PVOP_ALT_VCALLEE1(lock.queued_spin_unlock, lock,
+			  "movb $0, (%%" _ASM_ARG1 ");",
+			  X86_FEATURE_NO_PVUNLOCK);
 }
 
 static __always_inline void pv_wait(u8 *ptr, u8 val)
@@ -557,7 +576,9 @@ static __always_inline void pv_kick(int cpu)
 
 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
-	return PVOP_CALLEE1(bool, lock.vcpu_is_preempted, cpu);
+	return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
+				"xor %%" _ASM_AX ", %%" _ASM_AX ";",
+				X86_FEATURE_NO_VCPUPREEMPT);
 }
 
 void __raw_callee_save___native_queued_spin_unlock(struct qspinlock *lock);
@@ -631,17 +652,18 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
 #ifdef CONFIG_PARAVIRT_XXL
 static inline notrace unsigned long arch_local_save_flags(void)
 {
-	return PVOP_CALLEE0(unsigned long, irq.save_fl);
+	return PVOP_ALT_CALLEE0(unsigned long, irq.save_fl,
+				"pushf; pop %%rax;", X86_FEATURE_NOT_XENPV);
 }
 
 static inline notrace void arch_local_irq_disable(void)
 {
-	PVOP_VCALLEE0(irq.irq_disable);
+	PVOP_ALT_VCALLEE0(irq.irq_disable, "cli;", X86_FEATURE_NOT_XENPV);
 }
 
 static inline notrace void arch_local_irq_enable(void)
 {
-	PVOP_VCALLEE0(irq.irq_enable);
+	PVOP_ALT_VCALLEE0(irq.irq_enable, "sti;", X86_FEATURE_NOT_XENPV);
 }
 
 static inline notrace unsigned long arch_local_irq_save(void)
@@ -726,12 +748,13 @@ extern void default_banner(void);
 		      "jmp xen_iret;", X86_FEATURE_XENPV
 
 #ifdef CONFIG_DEBUG_ENTRY
-#define SAVE_FLAGS(clobbers)                                        \
-	PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),			    \
-		  PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);        \
-		  ANNOTATE_RETPOLINE_SAFE;			    \
-		  call PARA_INDIRECT(pv_ops+PV_IRQ_save_fl);	    \
-		  PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
+#define SAVE_FLAGS(clobbers)						      \
+	ALTERNATIVE(PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),		      \
+			      PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE);      \
+			      ANNOTATE_RETPOLINE_SAFE;			      \
+			      call PARA_INDIRECT(pv_ops+PV_IRQ_save_fl);      \
+			      PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);), \
+		    "pushf; pop %rax;", X86_FEATURE_NOT_XENPV)
 #endif
 #endif /* CONFIG_PARAVIRT_XXL */
 #endif	/* CONFIG_X86_64 */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 1f8e1b76e78b..481d3b667005 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -323,7 +323,6 @@ extern void (*paravirt_iret)(void);
 /* Simple instruction patching code. */
 #define NATIVE_LABEL(a,x,b) "\n\t.globl " a #x "_" #b "\n" a #x "_" #b ":\n\t"
 
-unsigned paravirt_patch_ident_64(void *insn_buff, unsigned len);
 unsigned paravirt_patch_default(u8 type, void *insn_buff, unsigned long addr, unsigned len);
 unsigned paravirt_patch_insns(void *insn_buff, unsigned len, const char *start, const char *end);
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 2ab547dd66c3..db6ae7f7c14e 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -53,7 +53,10 @@ void __init default_banner(void)
 }
 
 /* Undefined instruction for dealing with missing ops pointers. */
-static const unsigned char ud2a[] = { 0x0f, 0x0b };
+static void paravirt_BUG(void)
+{
+	BUG();
+}
 
 struct branch {
 	unsigned char opcode;
@@ -107,17 +110,10 @@ unsigned paravirt_patch_default(u8 type, void *insn_buff,
 	unsigned ret;
 
 	if (opfunc == NULL)
-		/* If there's no function, patch it with a ud2a (BUG) */
-		ret = paravirt_patch_insns(insn_buff, len, ud2a, ud2a+sizeof(ud2a));
+		/* If there's no function, patch it with paravirt_BUG() */
+		ret = paravirt_patch_call(insn_buff, paravirt_BUG, addr, len);
 	else if (opfunc == _paravirt_nop)
 		ret = 0;
-
-#ifdef CONFIG_PARAVIRT_XXL
-	/* identity functions just return their single argument */
-	else if (opfunc == _paravirt_ident_64)
-		ret = paravirt_patch_ident_64(insn_buff, len);
-
-#endif
 	else
 		/* Otherwise call the function. */
 		ret = paravirt_patch_call(insn_buff, opfunc, addr, len);
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index abd27ec67397..10543dcc8211 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -4,96 +4,8 @@
 #include <asm/paravirt.h>
 #include <asm/asm-offsets.h>
 
-#define PSTART(d, m)							\
-	patch_data_##d.m
-
-#define PEND(d, m)							\
-	(PSTART(d, m) + sizeof(patch_data_##d.m))
-
-#define PATCH(d, m, insn_buff, len)						\
-	paravirt_patch_insns(insn_buff, len, PSTART(d, m), PEND(d, m))
-
-#define PATCH_CASE(ops, m, data, insn_buff, len)				\
-	case PARAVIRT_PATCH(ops.m):					\
-		return PATCH(data, ops##_##m, insn_buff, len)
-
-#ifdef CONFIG_PARAVIRT_XXL
-struct patch_xxl {
-	const unsigned char	irq_irq_disable[1];
-	const unsigned char	irq_irq_enable[1];
-	const unsigned char	irq_save_fl[2];
-	const unsigned char	mmu_read_cr2[3];
-	const unsigned char	mmu_read_cr3[3];
-	const unsigned char	mmu_write_cr3[3];
-	const unsigned char	cpu_wbinvd[2];
-	const unsigned char	mov64[3];
-};
-
-static const struct patch_xxl patch_data_xxl = {
-	.irq_irq_disable	= { 0xfa },		// cli
-	.irq_irq_enable		= { 0xfb },		// sti
-	.irq_save_fl		= { 0x9c, 0x58 },	// pushf; pop %[re]ax
-	.mmu_read_cr2		= { 0x0f, 0x20, 0xd0 },	// mov %cr2, %[re]ax
-	.mmu_read_cr3		= { 0x0f, 0x20, 0xd8 },	// mov %cr3, %[re]ax
-	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
-	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
-	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
-};
-
-unsigned int paravirt_patch_ident_64(void *insn_buff, unsigned int len)
-{
-	return PATCH(xxl, mov64, insn_buff, len);
-}
-# endif /* CONFIG_PARAVIRT_XXL */
-
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-struct patch_lock {
-	unsigned char queued_spin_unlock[3];
-	unsigned char vcpu_is_preempted[2];
-};
-
-static const struct patch_lock patch_data_lock = {
-	.vcpu_is_preempted	= { 0x31, 0xc0 },	// xor %eax, %eax
-
-# ifdef CONFIG_X86_64
-	.queued_spin_unlock	= { 0xc6, 0x07, 0x00 },	// movb $0, (%rdi)
-# else
-	.queued_spin_unlock	= { 0xc6, 0x00, 0x00 },	// movb $0, (%eax)
-# endif
-};
-#endif /* CONFIG_PARAVIRT_SPINLOCKS */
-
 unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 			  unsigned int len)
 {
-	switch (type) {
-
-#ifdef CONFIG_PARAVIRT_XXL
-	PATCH_CASE(irq, save_fl, xxl, insn_buff, len);
-	PATCH_CASE(irq, irq_enable, xxl, insn_buff, len);
-	PATCH_CASE(irq, irq_disable, xxl, insn_buff, len);
-
-	PATCH_CASE(mmu, read_cr2, xxl, insn_buff, len);
-	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
-	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
-
-	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
-#endif
-
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-	case PARAVIRT_PATCH(lock.queued_spin_unlock):
-		if (pv_is_native_spin_unlock())
-			return PATCH(lock, queued_spin_unlock, insn_buff, len);
-		break;
-
-	case PARAVIRT_PATCH(lock.vcpu_is_preempted):
-		if (pv_is_native_vcpu_is_preempted())
-			return PATCH(lock, vcpu_is_preempted, insn_buff, len);
-		break;
-#endif
-	default:
-		break;
-	}
-
 	return paravirt_patch_default(type, insn_buff, addr, len);
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 11:47:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 11:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32031.62965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sm-00059v-Nl; Fri, 20 Nov 2020 11:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32031.62965; Fri, 20 Nov 2020 11:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg4sm-00059h-Jh; Fri, 20 Nov 2020 11:47:16 +0000
Received: by outflank-mailman (input) for mailman id 32031;
 Fri, 20 Nov 2020 11:47:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg4sl-00048T-7C
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:47:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed6e6f4a-e6bf-40d0-86a5-06eeceed8e10;
 Fri, 20 Nov 2020 11:46:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 460FBADDB;
 Fri, 20 Nov 2020 11:46:38 +0000 (UTC)
X-Inumbo-ID: ed6e6f4a-e6bf-40d0-86a5-06eeceed8e10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605872798; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RvGJzF3kLsbrT/VFYKE6yOfFu7G7ZuC1Hu1vcGbSYR0=;
	b=NFyZJJGo7Rm2MSDuNGewrxDAq0Ask4NpUk48WrgYcF0spwnkO34hJz75+m3ma+FokdkL1B
	FjiHGB8NlDkZCUw1tIyBMEPrKwaVIdnUxNewB8hbvR4fR/YO+4L1JfzR1I+w/zExML29M1
	AZGUh6y9Gai0RPBULFe19SmUiV2I40E=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: peterz@infradead.org,
	luto@kernel.org,
	Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 12/12] x86/paravirt: have only one paravirt patch function
Date: Fri, 20 Nov 2020 12:46:30 +0100
Message-Id: <20201120114630.13552-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is no longer any need for different paravirt patch functions
for native and Xen. Eliminate native_patch() and rename
paravirt_patch_default() to paravirt_patch().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt_types.h | 19 +------------------
 arch/x86/kernel/Makefile              |  3 +--
 arch/x86/kernel/alternative.c         |  2 +-
 arch/x86/kernel/paravirt.c            |  7 ++-----
 arch/x86/kernel/paravirt_patch.c      | 11 -----------
 arch/x86/xen/enlighten_pv.c           |  1 -
 6 files changed, 5 insertions(+), 38 deletions(-)
 delete mode 100644 arch/x86/kernel/paravirt_patch.c

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 481d3b667005..bb79e21e1ead 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -74,19 +74,6 @@ struct pv_info {
 	const char *name;
 };
 
-struct pv_init_ops {
-	/*
-	 * Patch may replace one of the defined code sequences with
-	 * arbitrary code, subject to the same register constraints.
-	 * This generally means the code is not free to clobber any
-	 * registers other than EAX.  The patch function should return
-	 * the number of bytes of code generated, as we nop pad the
-	 * rest in generic code.
-	 */
-	unsigned (*patch)(u8 type, void *insn_buff,
-			  unsigned long addr, unsigned len);
-} __no_randomize_layout;
-
 #ifdef CONFIG_PARAVIRT_XXL
 struct pv_lazy_ops {
 	/* Set deferred update mode, used for batching operations. */
@@ -282,7 +269,6 @@ struct pv_lock_ops {
  * number for each function using the offset which we use to indicate
  * what to patch. */
 struct paravirt_patch_template {
-	struct pv_init_ops	init;
 	struct pv_cpu_ops	cpu;
 	struct pv_irq_ops	irq;
 	struct pv_mmu_ops	mmu;
@@ -323,10 +309,7 @@ extern void (*paravirt_iret)(void);
 /* Simple instruction patching code. */
 #define NATIVE_LABEL(a,x,b) "\n\t.globl " a #x "_" #b "\n" a #x "_" #b ":\n\t"
 
-unsigned paravirt_patch_default(u8 type, void *insn_buff, unsigned long addr, unsigned len);
-unsigned paravirt_patch_insns(void *insn_buff, unsigned len, const char *start, const char *end);
-
-unsigned native_patch(u8 type, void *insn_buff, unsigned long addr, unsigned len);
+unsigned paravirt_patch(u8 type, void *insn_buff, unsigned long addr, unsigned len);
 
 int paravirt_disable_iospace(void);
 
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 68608bd892c0..61f52f95670b 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -35,7 +35,6 @@ KASAN_SANITIZE_sev-es.o					:= n
 KCSAN_SANITIZE := n
 
 OBJECT_FILES_NON_STANDARD_test_nx.o			:= y
-OBJECT_FILES_NON_STANDARD_paravirt_patch.o		:= y
 
 ifdef CONFIG_FRAME_POINTER
 OBJECT_FILES_NON_STANDARD_ftrace_$(BITS).o		:= y
@@ -122,7 +121,7 @@ obj-$(CONFIG_AMD_NB)		+= amd_nb.o
 obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
-obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch.o
+obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 obj-$(CONFIG_X86_PMEM_LEGACY_DEVICE) += pmem.o
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index f8f9700719cf..7ed2c3992eb3 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -617,7 +617,7 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
 		BUG_ON(p->len > MAX_PATCH_LEN);
 		/* prep the buffer with the original instructions */
 		memcpy(insn_buff, p->instr, p->len);
-		used = pv_ops.init.patch(p->type, insn_buff, (unsigned long)p->instr, p->len);
+		used = paravirt_patch(p->type, insn_buff, (unsigned long)p->instr, p->len);
 
 		BUG_ON(used > p->len);
 
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index db6ae7f7c14e..f05404844245 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -99,8 +99,8 @@ void __init native_pv_lock_init(void)
 		static_branch_disable(&virt_spin_lock_key);
 }
 
-unsigned paravirt_patch_default(u8 type, void *insn_buff,
-				unsigned long addr, unsigned len)
+unsigned paravirt_patch(u8 type, void *insn_buff, unsigned long addr,
+			unsigned len)
 {
 	/*
 	 * Neat trick to map patch type back to the call within the
@@ -255,9 +255,6 @@ struct pv_info pv_info = {
 #define PTE_IDENT	__PV_IS_CALLEE_SAVE(_paravirt_ident_64)
 
 struct paravirt_patch_template pv_ops = {
-	/* Init ops. */
-	.init.patch		= native_patch,
-
 	/* Cpu ops. */
 	.cpu.io_delay		= native_io_delay,
 
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
deleted file mode 100644
index 10543dcc8211..000000000000
--- a/arch/x86/kernel/paravirt_patch.c
+++ /dev/null
@@ -1,11 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/stringify.h>
-
-#include <asm/paravirt.h>
-#include <asm/asm-offsets.h>
-
-unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
-			  unsigned int len)
-{
-	return paravirt_patch_default(type, insn_buff, addr, len);
-}
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4716383c64a9..66f83de4d9e0 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1218,7 +1218,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
-	pv_ops.init.patch = paravirt_patch_default;
 	pv_ops.cpu = xen_cpu_ops;
 	paravirt_iret = xen_iret;
 	xen_init_irq_ops();
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:00:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32072.62984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg555-0007EL-Af; Fri, 20 Nov 2020 11:59:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32072.62984; Fri, 20 Nov 2020 11:59:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg555-0007EE-7e; Fri, 20 Nov 2020 11:59:59 +0000
Received: by outflank-mailman (input) for mailman id 32072;
 Fri, 20 Nov 2020 11:59:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=++Ek=E2=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kg553-0007E9-Br
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 11:59:57 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1821b8a-e64f-496c-9a5d-6d8fac097a72;
 Fri, 20 Nov 2020 11:59:54 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kg54r-0004IG-6x; Fri, 20 Nov 2020 11:59:45 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 61B3B307062;
 Fri, 20 Nov 2020 12:59:43 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 41741200DF1A6; Fri, 20 Nov 2020 12:59:43 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=1IRDo4hBUd8j6ToyEOlhDczl8zQL0MZ1VcE/2Tj86V8=; b=lWgrloRSTuiGC5nk/4mHtAapdq
	VwjRJ9pV/B0T1qMIc8kscWWKsHYukR+oAG35rA52pWTnBDx5lko8Mg2mrSuBc9HXXKzFegH7oJGtp
	cwml6uFpFrBaQnbMehbVUamRufyImy2h3sQ01dQNaCyElJ40tKE7oY7DlyW91yiQQ3HL8MtyQUvbq
	My0bogQ20GyepoV5LrRr5SCuJg65biu6wXWYQzw+W+2vh2FY8g8nwuPKoIFrqvMl6beiZDTPBQfCH
	/pI3NOmklXlVteKMmE3U6Bp4Myx1Ojzlpwrt8x2Its6o0bCkEETXsUOqeydQ25qMtkGezRaNWRG5r
	37I6qZZA==;
Date: Fri, 20 Nov 2020 12:59:43 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
Message-ID: <20201120115943.GD3021@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-6-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
> +static __always_inline void arch_local_irq_restore(unsigned long flags)
> +{
> +	if (!arch_irqs_disabled_flags(flags))
> +		arch_local_irq_enable();
> +}

If someone were to write horrible code like:

	local_irq_disable();
	local_irq_save(flags);
	local_irq_enable();
	local_irq_restore(flags);

we'd be up some creek without a paddle... now I don't _think_ we have
genius code like that, but I'd feel safer if we can haz an assertion in
there somewhere...

Maybe something like:

#ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
	WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
#endif

At the end?


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:03:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:03:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32086.62999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg57y-0008AY-Ai; Fri, 20 Nov 2020 12:02:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32086.62999; Fri, 20 Nov 2020 12:02:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg57y-0008AR-7d; Fri, 20 Nov 2020 12:02:58 +0000
Received: by outflank-mailman (input) for mailman id 32086;
 Fri, 20 Nov 2020 12:02:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=++Ek=E2=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kg57x-0008A9-85
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:02:57 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f49863d4-8d81-4a8e-8d9e-9d260c9434e5;
 Fri, 20 Nov 2020 12:02:49 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kg572-0006MQ-D6; Fri, 20 Nov 2020 12:02:00 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id EAD53305C16;
 Fri, 20 Nov 2020 13:01:54 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id CD10B200DF1A6; Fri, 20 Nov 2020 13:01:54 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=xeukajkb0Dh8Aj05tLhXcYnJX0ZZcEiN+jLi0pnGRWQ=; b=eEI2XdW3SS3SvnEYkgrxIgrkkZ
	QESj0xks6qaTJAnkv/SrECagNU+vSqctcrq0xLMAWIfHAZKLk9ye2uO7gskiUylP8RbcybpWeUXL2
	uK6+gWyoCZxLIYAsj/8Cr6bDQjqDWnVAJEvPDRs9mQv3WAo/KAacUwgv54o4zvTL/2MI1ffWEmq47
	iGzPS/OuVWATcJv3FdWhjqRfpijJMY2zQfXpueb6tRHat5LxD9HkJ5lF4m33YjKhMqWD/w/svRfZq
	kIH2ZsgedGxLYnMj1h8GrezPVSEXeHN6F04tocRnYrqwZsV1M6RSBowt+3mIDzez5FevIapCnyvmm
	E0VqGnQQ==;
Date: Fri, 20 Nov 2020 13:01:54 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [PATCH v2 06/12] x86/paravirt: switch time pvops functions to
 use static_call()
Message-ID: <20201120120154.GE3021@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-7-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-7-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:24PM +0100, Juergen Gross wrote:
> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
> 
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification of the pvops implementation.
> 
> Due to include hell this requires splitting out the time interfaces
> into a new header file.

There's also this patch floating around; just in case that would come in
handy:

  https://lkml.kernel.org/r/20201110005609.40989-3-frederic@kernel.org


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:05:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:05:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32096.63015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5AI-0008Lw-Rd; Fri, 20 Nov 2020 12:05:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32096.63015; Fri, 20 Nov 2020 12:05:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5AI-0008Lp-O3; Fri, 20 Nov 2020 12:05:22 +0000
Received: by outflank-mailman (input) for mailman id 32096;
 Fri, 20 Nov 2020 12:05:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg5AH-0008Lk-5p
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:05:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78753ed9-5426-4e6b-a6bd-5f10bf42f020;
 Fri, 20 Nov 2020 12:05:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EDC4AC23;
 Fri, 20 Nov 2020 12:05:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605873919; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NzTiA4NG82ayN413wirwqqYr4HehcKAsF24Gn5hP2dU=;
	b=bmYyX8nILg+ZlY8sbtWE5w2bO2VV8WEs5T/iV+E9cNskIM3/nmOQdHogS6LmAUqDiCIN7o
	uep+xZsGp+4iiWlkS7R5O+A3tUuUCh3Zjoj5861nPuFhVVaUm7oGIsCOvz4fsp1+Fs7rrG
	1a0+AfThrxlQZZZfV/WGh6OWJYlmzC4=
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d2148d19-b8b3-1492-701b-3a85b761fd76@suse.com>
Date: Fri, 20 Nov 2020 13:05:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201120115943.GD3021@hirez.programming.kicks-ass.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ylZJJWu3CTRSDQL4pjinjf9KQnc2Mi8Rq"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ylZJJWu3CTRSDQL4pjinjf9KQnc2Mi8Rq
Content-Type: multipart/mixed; boundary="OJdSKNZQ7zTeFDk7fKB1vgvgQ8RAMAcBi";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <d2148d19-b8b3-1492-701b-3a85b761fd76@suse.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
In-Reply-To: <20201120115943.GD3021@hirez.programming.kicks-ass.net>

--OJdSKNZQ7zTeFDk7fKB1vgvgQ8RAMAcBi
Content-Type: multipart/mixed;
 boundary="------------67F8C41D891759989B303093"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------67F8C41D891759989B303093
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.11.20 12:59, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
>> +static __always_inline void arch_local_irq_restore(unsigned long flags)
>> +{
>> +	if (!arch_irqs_disabled_flags(flags))
>> +		arch_local_irq_enable();
>> +}
>
> If someone were to write horrible code like:
>
> 	local_irq_disable();
> 	local_irq_save(flags);
> 	local_irq_enable();
> 	local_irq_restore(flags);
>
> we'd be up some creek without a paddle... now I don't _think_ we have
> genius code like that, but I'd feel safer if we can haz an assertion in
> there somewhere...
>
> Maybe something like:
>
> #ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
> 	WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
> #endif
>
> At the end?
>

I'd be fine with that. I didn't add something like that because I
couldn't find a suitable CONFIG_ :-)


Juergen

--------------67F8C41D891759989B303093
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------67F8C41D891759989B303093--

--OJdSKNZQ7zTeFDk7fKB1vgvgQ8RAMAcBi--

--ylZJJWu3CTRSDQL4pjinjf9KQnc2Mi8Rq
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+3sP0FAwAAAAAACgkQsN6d1ii/Ey9w
0Af/dT6XeG9kNTyqYmy9/CPAH4Oth+lNpcfbnKD+N1KHS2cUBc7/EQ1Aqjbe2nDZKRK6vw/E21u8
U8mU0GsAvfwZltDYvmAtBfgOErYFp2LEjWHrNw+jAAFcCQB9VQewNhQ87VwBv3dHNCz5SkabXif9
2ql8lQg3+TvIHyMzqRvXeAHCiIWmvjWceaniEwjo8aVB21XrySpCpSBh7wI9bvD+66Y3rrl9wNTD
K+gLs8izRO4wUOrq8+OJDN+CE2NZIgcnUICM62DRs9EPa6RaDpfXa072Tggc1z0Y0JUElLSS/NDa
QULDL6tNrQjVAVhoV8PiR7Zvit8cbRocdGdIpSlcrA==
=bqp5
-----END PGP SIGNATURE-----

--ylZJJWu3CTRSDQL4pjinjf9KQnc2Mi8Rq--


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:08:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:08:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32104.63027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5Cu-0008VH-9N; Fri, 20 Nov 2020 12:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32104.63027; Fri, 20 Nov 2020 12:08:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5Cu-0008VA-5z; Fri, 20 Nov 2020 12:08:04 +0000
Received: by outflank-mailman (input) for mailman id 32104;
 Fri, 20 Nov 2020 12:08:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg5Cs-0008V5-UI
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:08:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cdab809-22e5-433e-87ab-4002bd8bd114;
 Fri, 20 Nov 2020 12:08:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5B91AFDB;
 Fri, 20 Nov 2020 12:08:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605874080; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fi5hxOosLe4ci8CACHcYoETyc/XuBOtY9iexkoP8wiU=;
	b=a37bqq6FVQfk9qmoQUIC8F5b1rg8hAsd/nFAvEv+0uWryKISA3u857vpvnoUEs8U2FyIRJ
	QxGZDTbHFZpBg4pohlE7UZQvxR9+3R+lntDjNSctEpkEUrDuwjlxVL7L1r27Uelrhbxkdq
	Lx8gvFdR71AmMsL9SFcxyQO7sWimq5I=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A5B91AFDB;
	Fri, 20 Nov 2020 12:08:00 +0000 (UTC)
Subject: Re: [PATCH v2 06/12] x86/paravirt: switch time pvops functions to use
 static_call()
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
 luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <sean.j.christopherson@intel.com>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel Lezcano <daniel.lezcano@linaro.org>,
 Juri Lelli <juri.lelli@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 Dietmar Eggemann <dietmar.eggemann@arm.com>,
 Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>,
 Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-7-jgross@suse.com>
 <20201120120154.GE3021@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <eab0567e-26b6-7482-b575-3430a34f61f4@suse.com>
Date: Fri, 20 Nov 2020 13:07:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201120120154.GE3021@hirez.programming.kicks-ass.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Js2C2dWgfc4nTTtJ5MoQYP8PDLusaIBXv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Js2C2dWgfc4nTTtJ5MoQYP8PDLusaIBXv
Content-Type: multipart/mixed; boundary="gBfRrUi6EIKy8OO8lvbihDGpshaEHf5sL";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
 luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Deep Shah <sdeep@vmware.com>, "VMware, Inc." <pv-drivers@vmware.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <sean.j.christopherson@intel.com>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel Lezcano <daniel.lezcano@linaro.org>,
 Juri Lelli <juri.lelli@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 Dietmar Eggemann <dietmar.eggemann@arm.com>,
 Steven Rostedt <rostedt@goodmis.org>, Ben Segall <bsegall@google.com>,
 Mel Gorman <mgorman@suse.de>, Daniel Bristot de Oliveira <bristot@redhat.com>
Message-ID: <eab0567e-26b6-7482-b575-3430a34f61f4@suse.com>
Subject: Re: [PATCH v2 06/12] x86/paravirt: switch time pvops functions to use
 static_call()
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-7-jgross@suse.com>
 <20201120120154.GE3021@hirez.programming.kicks-ass.net>
In-Reply-To: <20201120120154.GE3021@hirez.programming.kicks-ass.net>

--gBfRrUi6EIKy8OO8lvbihDGpshaEHf5sL
Content-Type: multipart/mixed;
 boundary="------------2D61E8643D7878B5993793A0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2D61E8643D7878B5993793A0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.11.20 13:01, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:24PM +0100, Juergen Gross wrote:
>> The time pvops functions are the only ones left which might be
>> used in 32-bit mode and which return a 64-bit value.
>>
>> Switch them to use the static_call() mechanism instead of pvops, as
>> this allows quite some simplification of the pvops implementation.
>>
>> Due to include hell this requires to split out the time interfaces
>> into a new header file.
>
> There's also this patch floating around; just in case that would come in
> handy:
>
>    https://lkml.kernel.org/r/20201110005609.40989-3-frederic@kernel.org
>

Ah, yes. This would make life much easier.


Juergen

--------------2D61E8643D7878B5993793A0
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2D61E8643D7878B5993793A0--

--gBfRrUi6EIKy8OO8lvbihDGpshaEHf5sL--

--Js2C2dWgfc4nTTtJ5MoQYP8PDLusaIBXv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+3sZ4FAwAAAAAACgkQsN6d1ii/Ey/3
jQgAi585ZnxsqQnHs2MM186NZwzBRkQkzP3jMvU0GbwSpGKausY1llB2/PNBmF1VqwRXspKl+AUA
3jFFImHpbvbwP1nMxRcNdBMN7lPp7ryuT3m0e07QfJ3iM5KieHKeQzWqTWOyTKagFuqZa6FHLYZi
8z8EE3js4klXNxhV7xSk9Gq2uG+6NK3GXrbz+cm+SsY4LV5KVXlq73RS/txcioX0XbMqy6R5GfTO
eTDPz0pch8fftn3VtYoNyp1VnnDd+63JkL1z2yjJZ0NbbjaE3rEMU4902KY0v2WDUMk+FC5/pltI
WRYnBzTbzsXf3edO/7A8t2TV1sUTI6PILETFoLuMyw==
=nK8b
-----END PGP SIGNATURE-----

--Js2C2dWgfc4nTTtJ5MoQYP8PDLusaIBXv--


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:08:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:08:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32105.63039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5D5-000090-Hq; Fri, 20 Nov 2020 12:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32105.63039; Fri, 20 Nov 2020 12:08:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5D5-00008t-EU; Fri, 20 Nov 2020 12:08:15 +0000
Received: by outflank-mailman (input) for mailman id 32105;
 Fri, 20 Nov 2020 12:08:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=++Ek=E2=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kg5D3-00008R-Rr
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:08:13 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94f38a82-f7f1-4ce8-b36f-00f0d6c02c46;
 Fri, 20 Nov 2020 12:08:11 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kg5Cx-0004e7-7B; Fri, 20 Nov 2020 12:08:07 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id DB2573069B1;
 Fri, 20 Nov 2020 13:08:05 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id C6997200EC4FD; Fri, 20 Nov 2020 13:08:05 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=++Ek=E2=infradead.org=peterz@srs-us1.protection.inumbo.net>)
	id 1kg5D3-00008R-Rr
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:08:13 +0000
X-Inumbo-ID: 94f38a82-f7f1-4ce8-b36f-00f0d6c02c46
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 94f38a82-f7f1-4ce8-b36f-00f0d6c02c46;
	Fri, 20 Nov 2020 12:08:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=vlXR+DeDm58p736gPlu4ZNiSJsmquQxprWhu8Q3iIL8=; b=fRHC2Nb6zcdTb+UFSiq7GQRUml
	BUcux3dn7RsKDlz4v+LiwCLL5XUhYCq1gUqZpb9juEnjSzn/4qQQIeiFL1LSLrh9HtqKFvu6oo2fy
	YaGX3kbhn8wEu45EEmSdWXGs1CGyP3hJ72OQrgns8cznCA9Yy75jioH9PKqiPbWVoPm3nex2Aw93R
	vbfyV+gyoMK/k6uyjezWSI2d4xiZaaa/qfdcjAV7GkIz2Ijn5O6uyxpTUYsOj7IwiraWKv9KDE44j
	HymhtOgRlYegJPkO3F0EUjLjhaP7ZarIEGVFdAeQafXl7tla9cn5tt/6O/8+i0vh5behC/4jxjuIo
	bAK0zS/w==;
Received: from j217100.upc-j.chello.nl ([24.132.217.100] helo=noisy.programming.kicks-ass.net)
	by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kg5Cx-0004e7-7B; Fri, 20 Nov 2020 12:08:07 +0000
Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate)
	by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id DB2573069B1;
	Fri, 20 Nov 2020 13:08:05 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
	id C6997200EC4FD; Fri, 20 Nov 2020 13:08:05 +0100 (CET)
Date: Fri, 20 Nov 2020 13:08:05 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>
Subject: Re: [PATCH v2 08/12] x86/paravirt: remove no longer needed 32-bit
 pvops cruft
Message-ID: <20201120120805.GF3021@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-9-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-9-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:26PM +0100, Juergen Gross wrote:
> +#define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr, ...)	\
>  	({								\
>  		PVOP_CALL_ARGS;						\
>  		PVOP_TEST_NULL(op);					\
> +		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
> +		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
> +			     : call_clbr, ASM_CALL_CONSTRAINT		\
> +			     : paravirt_type(op),			\
> +			       paravirt_clobber(clbr),			\
> +			       ##__VA_ARGS__				\
> +			     : "memory", "cc" extra_clbr);		\
> +		(rettype)(__eax & PVOP_RETMASK(rettype));		\
>  	})

This is now very similar to ____PVOP_VCALL() (note how PVOP_CALL_ARGS is
PVOP_VCALL_ARGS).

Could we get away with doing something horrible like:

#define ____PVOP_VCALL(X...) (void)____PVOP_CALL(long, X)

?


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:16:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:16:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32117.63051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5Kx-0001PD-ED; Fri, 20 Nov 2020 12:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32117.63051; Fri, 20 Nov 2020 12:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5Kx-0001P6-A6; Fri, 20 Nov 2020 12:16:23 +0000
Received: by outflank-mailman (input) for mailman id 32117;
 Fri, 20 Nov 2020 12:16:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg5Kv-0001P1-Sd
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:16:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f25d6969-f41b-49d9-9db2-7a6e1c36fb64;
 Fri, 20 Nov 2020 12:16:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C1E4AD5C;
 Fri, 20 Nov 2020 12:16:20 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kg5Kv-0001P1-Sd
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:16:21 +0000
X-Inumbo-ID: f25d6969-f41b-49d9-9db2-7a6e1c36fb64
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f25d6969-f41b-49d9-9db2-7a6e1c36fb64;
	Fri, 20 Nov 2020 12:16:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605874580; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YefhziZmkc//JJPRQ3LwkHDQuojnCOZiaUec095OI04=;
	b=ILqKfaohLJK8dicHB14Kgv92XrND/IT0Xe8mqjbqLssK5A+dRatdi5sXW7J6K+oCUoK7fz
	5p12xurhl/C/BUJ2g05j0XkM9uGNJk/Ds20Cw3oFy0nfvZZrSjqW2j0pGZZEBxo4KA4ZOi
	q/rkJsUnGvHyvBpLwcKIpeqmJDzgvH0=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1C1E4AD5C;
	Fri, 20 Nov 2020 12:16:20 +0000 (UTC)
Subject: Re: [PATCH v2 08/12] x86/paravirt: remove no longer needed 32-bit
 pvops cruft
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-9-jgross@suse.com>
 <20201120120805.GF3021@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <5c123e7f-299d-2a4c-3e30-878537d71546@suse.com>
Date: Fri, 20 Nov 2020 13:16:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201120120805.GF3021@hirez.programming.kicks-ass.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="a2dWOuOmbtLJqOPjTKdEeUzEma86pRL3J"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--a2dWOuOmbtLJqOPjTKdEeUzEma86pRL3J
Content-Type: multipart/mixed; boundary="PNbjwsjbyz3PMr6RwhSIwf66NdqTWYnze";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>
Message-ID: <5c123e7f-299d-2a4c-3e30-878537d71546@suse.com>
Subject: Re: [PATCH v2 08/12] x86/paravirt: remove no longer needed 32-bit
 pvops cruft
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-9-jgross@suse.com>
 <20201120120805.GF3021@hirez.programming.kicks-ass.net>
In-Reply-To: <20201120120805.GF3021@hirez.programming.kicks-ass.net>

--PNbjwsjbyz3PMr6RwhSIwf66NdqTWYnze
Content-Type: multipart/mixed;
 boundary="------------648324777AF01D39B6723412"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------648324777AF01D39B6723412
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.11.20 13:08, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:26PM +0100, Juergen Gross wrote:
>> +#define ____PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr, ...)	\
>>   	({								\
>>   		PVOP_CALL_ARGS;						\
>>   		PVOP_TEST_NULL(op);					\
>> +		BUILD_BUG_ON(sizeof(rettype) > sizeof(unsigned long));	\
>> +		asm volatile(paravirt_alt(PARAVIRT_CALL)		\
>> +			     : call_clbr, ASM_CALL_CONSTRAINT		\
>> +			     : paravirt_type(op),			\
>> +			       paravirt_clobber(clbr),			\
>> +			       ##__VA_ARGS__				\
>> +			     : "memory", "cc" extra_clbr);		\
>> +		(rettype)(__eax & PVOP_RETMASK(rettype));		\
>>   	})
>
> This is now very similar to ____PVOP_VCALL() (note how PVOP_CALL_ARGS is
> PVOP_VCALL_ARGS).
>
> Could we get away with doing something horrible like:
>
> #define ____PVOP_VCALL(X...) (void)____PVOP_CALL(long, X)
>
> ?

Oh, indeed. And in patch 9 the same could be done for the ALT variants.


Juergen

--------------648324777AF01D39B6723412
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------648324777AF01D39B6723412--

--PNbjwsjbyz3PMr6RwhSIwf66NdqTWYnze--

--a2dWOuOmbtLJqOPjTKdEeUzEma86pRL3J
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+3s5MFAwAAAAAACgkQsN6d1ii/Ey8N
oggAm+XTeQxxn6EH9H9b5kDull2RxB1owydVFJXiteBb0EzjXi2FD6bMGxgzfpeVsvT6e0yQgQN/
PgTwqfzvq5glClCCPcXnzvEJubefKlqCyqBD08LRKXTnP7Dfob9f5/lq8mI507rAQf/OWDpU0djv
gPe3gL9zbundVqMnCaFq3iTSa5jMUgAu3xqaKasSoHjEcFTBGIKYDaEzacdO5PmrsbZ4LZ/aBMow
8ZIUowt41UurQrspfV6OaGmOuALn3DLUnYOX8iEYziExmHG1OTeUBrGK8VfketjkjMwG8EiwEo/j
Wq3vMKtUqrycnxsC2MHef2FRwOBR0ngcXLM9FyZpUg==
=HHlv
-----END PGP SIGNATURE-----

--a2dWOuOmbtLJqOPjTKdEeUzEma86pRL3J--


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:45:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:45:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32136.63071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5n7-0004s6-17; Fri, 20 Nov 2020 12:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32136.63071; Fri, 20 Nov 2020 12:45:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5n6-0004rz-Tw; Fri, 20 Nov 2020 12:45:28 +0000
Received: by outflank-mailman (input) for mailman id 32136;
 Fri, 20 Nov 2020 12:45:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg5n5-0004rs-Hw
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:45:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e90de5e-eadb-499a-b01a-e402c7c062e9;
 Fri, 20 Nov 2020 12:45:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 12743ABD6;
 Fri, 20 Nov 2020 12:45:26 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kg5n5-0004rs-Hw
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:45:27 +0000
X-Inumbo-ID: 3e90de5e-eadb-499a-b01a-e402c7c062e9
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3e90de5e-eadb-499a-b01a-e402c7c062e9;
	Fri, 20 Nov 2020 12:45:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605876326; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=H7PHzkAOiCeSpzSd3mLSic9jVidXunJJhK7mu+ZIVNI=;
	b=aV4LmXmI3EUk38/E7y4qXGlwgF1Bh/heM1LyA3UhI2FtOrZW4XECD8f6f6Ab65RBwIOKrm
	uSmQ09kApC0mNZDetupLylvp2oRMmKuQtRSGJvIx3pFLprj1M6ZUE+sXl/UB+JBq4w+d2h
	YJbvUD0fVQpwhgjwhzUp+hcEOm96ZBk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 12743ABD6;
	Fri, 20 Nov 2020 12:45:26 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: E801 memory "map" use implies no ACPI
Message-ID: <18ca8671-1478-3dc8-7b91-041dbc18829f@suse.com>
Date: Fri, 20 Nov 2020 13:45:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

ACPI mandates use of E820 (or newer, e.g. EFI), and in fact firmware
has been observed to include E820_ACPI ranges in what E801 reports as
available (really "configured") memory.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: Alternatively we could drop all use of E801 (and older), since
     there shouldn't be any 64-bit systems not supporting the more
     modern E820.

--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -74,7 +74,7 @@ bool __read_mostly use_invpcid;
 unsigned long __read_mostly cr4_pv32_mask;
 
 /* **** Linux config option: propagated to domain0. */
-/* "acpi=off":    Sisables both ACPI table parsing and interpreter. */
+/* "acpi=off":    Disables both ACPI table parsing and interpreter. */
 /* "acpi=force":  Override the disable blacklist.                   */
 /* "acpi=ht":     Limit ACPI just to boot-time to enable HT.        */
 /* "acpi=noirq":  Disables ACPI interrupt routing.                  */
@@ -1069,6 +1069,7 @@ void __init noreturn __start_xen(unsigne
         e820_raw.map[1].size = bootsym(highmem_kb) << 10;
         e820_raw.map[1].type = E820_RAM;
         e820_raw.nr_map = 2;
+        disable_acpi();
     }
     else if ( mbi->flags & MBI_MEMLIMITS )
     {
@@ -1080,6 +1081,7 @@ void __init noreturn __start_xen(unsigne
         e820_raw.map[1].size = mbi->mem_upper << 10;
         e820_raw.map[1].type = E820_RAM;
         e820_raw.nr_map = 2;
+        disable_acpi();
     }
     else
         panic("Bootloader provided no memory information\n");


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:48:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:48:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32144.63083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5pj-00054h-EJ; Fri, 20 Nov 2020 12:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32144.63083; Fri, 20 Nov 2020 12:48:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5pj-00054a-Aw; Fri, 20 Nov 2020 12:48:11 +0000
Received: by outflank-mailman (input) for mailman id 32144;
 Fri, 20 Nov 2020 12:48:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg5ph-00052v-Ha
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:48:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 705721be-47c8-42c3-98f2-f8bc4662558a;
 Fri, 20 Nov 2020 12:48:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B54AEAC2F;
 Fri, 20 Nov 2020 12:48:06 +0000 (UTC)
X-Inumbo-ID: 705721be-47c8-42c3-98f2-f8bc4662558a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605876486; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/W5SKfLxS9P8Oy1kRO1B1GHv3bqQeOd6wIJNTG5W8Ek=;
	b=CNhFFOc/6PWwhGSiqBGd6pxgF4pkdlppFCpL8UOLoD/zbHG8bTCFZGM7FXx6Cd8II1I30P
	EDlzXHUT/JnyI7+YpNYsRQ7EQGOOQ+snqgxdz29sXob2lkOdMyffkP4pmDyrqv3myZctHg
	KzjvQX3kNlCXQkugBG+1k4Zmkqz4zxg=
Subject: Ping: [PATCH v2] x86/PV: make post-migration page state consistent
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
Message-ID: <b733914b-1bfd-d95d-470e-af3ca7a4f69f@suse.com>
Date: Fri, 20 Nov 2020 13:48:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.11.2020 08:56, Jan Beulich wrote:
> When a page table page gets de-validated, its type reference count drops
> to zero (and PGT_validated gets cleared), but its type remains intact.
> XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
> such pages. An intermediate write to such a page via e.g.
> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
> return. In libxc, the decision of which pages to normalize / localize
> depends solely on the type returned from the domctl. As a result,
> without further precautions, the guest won't be able to tell whether such
> a page has had its (apparent) PTE entries transitioned to the new MFNs.
> 
> Add a check of PGT_validated, thus consistently avoiding normalization /
> localization in the tool stack.
> 
> Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead
> of open-coding it.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Don't change type's type.

Ping?

> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -215,7 +215,7 @@ long arch_do_domctl(
>  
>          for ( i = 0; i < num; ++i )
>          {
> -            unsigned long gfn = 0, type = 0;
> +            unsigned long gfn = 0, type = XEN_DOMCTL_PFINFO_NOTAB;
>              struct page_info *page;
>              p2m_type_t t;
>  
> @@ -255,6 +255,8 @@ long arch_do_domctl(
>  
>                  if ( page->u.inuse.type_info & PGT_pinned )
>                      type |= XEN_DOMCTL_PFINFO_LPINTAB;
> +                else if ( !(page->u.inuse.type_info & PGT_validated) )
> +                    type = XEN_DOMCTL_PFINFO_NOTAB;
>  
>                  if ( page->count_info & PGC_broken )
>                      type = XEN_DOMCTL_PFINFO_BROKEN;
> 



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:50:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:50:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32149.63094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5rV-0005UG-QT; Fri, 20 Nov 2020 12:50:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32149.63094; Fri, 20 Nov 2020 12:50:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5rV-0005U9-Mx; Fri, 20 Nov 2020 12:50:01 +0000
Received: by outflank-mailman (input) for mailman id 32149;
 Fri, 20 Nov 2020 12:50:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg5rU-0005U3-Nn
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:50:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c9d0fa7-3621-4664-bc16-859c2f0339b2;
 Fri, 20 Nov 2020 12:50:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 441FCAC54;
 Fri, 20 Nov 2020 12:49:59 +0000 (UTC)
X-Inumbo-ID: 0c9d0fa7-3621-4664-bc16-859c2f0339b2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605876599; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t4Hh5f4iDAvskmK3yhQGv4nBpbYLAadWTv9j2Ooxa04=;
	b=URGxCB46a9ikHNFtOZjqW4jSc+vNs/+HwLIWrZTLk4Z+7uGc4FEFVkjl8Dgu6aubClvETp
	pieU9oramiW/o+pBBCGRbI51ebK5reHHZngNluLYiI7CpNcLk3ZvpI8W5dP6+1ZHlEv6s0
	aaA0cX7BzdkJ+HyzPDk0uv64eX8Kd6w=
Subject: Ping: [PATCH] gnttab: don't allocate status frame tracking array when
 "gnttab=max_ver:1"
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Cc: Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a484cc88-f41d-5d38-d098-4eda297569a1@suse.com>
Message-ID: <cfce3be1-c742-72b4-5a39-5b374d27705c@suse.com>
Date: Fri, 20 Nov 2020 13:50:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a484cc88-f41d-5d38-d098-4eda297569a1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.11.2020 16:55, Jan Beulich wrote:
> This array can be large when many grant frames are permitted; avoid
> allocating it when it's not going to be used anyway. Do so indirectly
> though, by making grant_to_status_frames() return zero in this case.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Ping?

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -446,11 +446,14 @@ static inline void active_entry_release(
>  
>  static inline unsigned int grant_to_status_frames(unsigned int grant_frames)
>  {
> +    if ( opt_gnttab_max_version < 2 )
> +        return 0;
>      return DIV_ROUND_UP(grant_frames * GRANT_PER_PAGE, GRANT_STATUS_PER_PAGE);
>  }
>  
>  static inline unsigned int status_to_grant_frames(unsigned int status_frames)
>  {
> +    ASSERT(opt_gnttab_max_version >= 2);
>      return DIV_ROUND_UP(status_frames * GRANT_STATUS_PER_PAGE, GRANT_PER_PAGE);
>  }
>  
> 



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 12:54:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 12:54:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32161.63111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5vv-0006OA-Ex; Fri, 20 Nov 2020 12:54:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32161.63111; Fri, 20 Nov 2020 12:54:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg5vv-0006O3-BO; Fri, 20 Nov 2020 12:54:35 +0000
Received: by outflank-mailman (input) for mailman id 32161;
 Fri, 20 Nov 2020 12:54:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=++Ek=E2=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kg5vu-0006Nv-26
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 12:54:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f926ee76-1607-49c5-8aee-179f3a58969d;
 Fri, 20 Nov 2020 12:54:31 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kg5v5-0000yt-TS; Fri, 20 Nov 2020 12:53:44 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 9B3B9306BCA;
 Fri, 20 Nov 2020 13:53:42 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 7A8C720244CF6; Fri, 20 Nov 2020 13:53:42 +0100 (CET)
X-Inumbo-ID: f926ee76-1607-49c5-8aee-179f3a58969d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ARS+JFM8lp7IU5S39whnGWAR9zPTjiLED++SVe3kkDg=; b=LFnlsnKEh1oJBjqJmqWEjBMDPA
	RhYAb819lhVkUmxOATMprP9zwrg4W8r9fnHYLaWcgSZc3ZZ+ibjAd2T+11VhwGcExv5iGdpTyCMCl
	mu8SvfQAmTH0Vap4aaLOeQ7Yz08iwbMPlGI8DH6PpXe94p9k+F+8oAIrNh0dTW6jn/MznD4fLHthg
	hRpx+wkXIeAkTOptyjsOsVhpg3FHxP6C9ShGQ7RFETLGfKWi3GUQcWQGmtIyrah0wRjprJzE6otgq
	BlItKpepcV2xggTTCc5EEQRfa/Ks4Hgph9aAqxMSO+H7U1UxN0JN5JhIiYLI2O9xDoxiQKOjkmQqX
	Ztj/Up7Q==;
Date: Fri, 20 Nov 2020 13:53:42 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201120125342.GC3040@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-1-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
>  30 files changed, 325 insertions(+), 598 deletions(-)

Much awesome! I'll try and get that objtool thing sorted.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:11:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:11:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32171.63127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6C4-0000I9-1S; Fri, 20 Nov 2020 13:11:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32171.63127; Fri, 20 Nov 2020 13:11:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6C3-0000I2-Ui; Fri, 20 Nov 2020 13:11:15 +0000
Received: by outflank-mailman (input) for mailman id 32171;
 Fri, 20 Nov 2020 13:11:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N4/k=E2=gmail.com=lambert.olivier@srs-us1.protection.inumbo.net>)
 id 1kg6C2-0000Hx-3N
 for xen-devel@lists.xen.org; Fri, 20 Nov 2020 13:11:14 +0000
Received: from mail-vk1-xa31.google.com (unknown [2607:f8b0:4864:20::a31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e746f718-d898-4057-86d9-1e63b0311efa;
 Fri, 20 Nov 2020 13:11:12 +0000 (UTC)
Received: by mail-vk1-xa31.google.com with SMTP id u16so2186730vkb.1
 for <xen-devel@lists.xen.org>; Fri, 20 Nov 2020 05:11:12 -0800 (PST)
X-Inumbo-ID: e746f718-d898-4057-86d9-1e63b0311efa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=Is4LlyNi/I66refyHQmdCDL6lW4ZWZ2icPR7xWsFFGM=;
        b=AChYZ+S1hdoa/8SzPN4oDXyK916yATiMb4kYWKHiG4sPYotW9Ia2wZ300UCZemcuCj
         9h273X3kmNZ4sr8llgyZyhzNuqjy1SLEN8mMt8CjP3UhrjQVN2HWxeAOzA1JOlk/Atiu
         +ltn9b7a4kCgbMDlvyGBC3pCf4yRC9aApF2IEoeOOo2Rh3ITj7NET5sS8ZT3IcxG97jv
         bzvILUnahm0kNhB14b5BRUJ7sciIrhz7B99juvumfxqI+p07ib9tkWoIL0zuMYrhO0nz
         eHIMA10tNKRP9UkOkbBlODW+MxB1cYBDumeb1Iws8FtHD/ib21oB5YbSRgjTpHxuu0go
         2Olw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=Is4LlyNi/I66refyHQmdCDL6lW4ZWZ2icPR7xWsFFGM=;
        b=H1Wc0vswr2d5TU3j87doh00mUBeInpzmktqu2jt2t56twgPgiaa0cSu95OgSmYx7md
         BNoe/v8MFQRbQ8atE4K/FNGgfbYEYXqj2gvaGFOvOdvU271P9tUDYrKQPcbUVn9/9N5P
         mV+nJKkWJVrzD4E+o6KD6B46EWdAldozn4LMhpElNJ4rWXzfTnotfwlzJualTfy350nN
         9DN5X0ceCljrmtGST6OTfJ5CaqPcDXkwgHo9cmB4+wCMLLeJlxwV+vGAeR7q/to2IYFw
         rnF8ccXn1zSmrUcoYpWU6eWlsTT5Etstrlz13IqmV41Cj/mSEYDrpmMTVSd6qOxknnRX
         dggw==
X-Gm-Message-State: AOAM531bIcsDtWVsBZ0IENCZMRRUsfmrSJnAWlPX7HuYQhTS+G5q8R/e
	rtrWXhE4+Y0tcjpVqGLVJMW6w/EDhZoiFeH7LYiWqbRscWq1oh0N
X-Google-Smtp-Source: ABdhPJy+Y5xPa/pgsmnUlgOSBA5KsoyKfAf7QayiS0Rk+ne6AIl3S6t+gupbBEt4CcqW1c73qC6Ec6fRr+rxNv0nYbU=
X-Received: by 2002:a1f:1c6:: with SMTP id 189mr12053599vkb.13.1605877871800;
 Fri, 20 Nov 2020 05:11:11 -0800 (PST)
MIME-Version: 1.0
References: <CACJ1ZNuJCgDkRHvH2gXqC5gWTJHdUQ9J4G-HBNFwKYZFaWpWuw@mail.gmail.com>
 <CACJ1ZNupvRX_fcGPWn3mm+3Lm4gT38M088tUc_sSUu8JeQg3Fg@mail.gmail.com>
 <CACJ1ZNu5Kdf72j1eTtdgTuSOjgkpeEWFM0cKB-54pxqwXuWCDQ@mail.gmail.com> <CACJ1ZNtfgNr9oz7stE=2iwijjAUtZLWR2u_xihFZeEk3Y7gYRQ@mail.gmail.com>
In-Reply-To: <CACJ1ZNtfgNr9oz7stE=2iwijjAUtZLWR2u_xihFZeEk3Y7gYRQ@mail.gmail.com>
From: Olivier Lambert <lambert.olivier@gmail.com>
Date: Fri, 20 Nov 2020 14:11:00 +0100
Message-ID: <CACJ1ZNtiTg874VDKeV2sLdk0qWzt79pVf3Bo4J+GZWPhZdxWFQ@mail.gmail.com>
Subject: Re: Schedule for OpenPOWER/Xen meeting
To: "<xen-devel@lists.xen.org>" <xen-devel@lists.xen.org>
Cc: damien.thenot@vates.fr
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Thanks everyone for participating in the meeting!

Here is a recap of the topics that were discussed (some partial/draft notes
from me and Damien at Vates, sorry if some are not entirely accurate):

* Page table implementation, hardware page walking, virtual memory in Xen
  * Multiple implementations: first a hashed tree structure, then later
    a radix tree, closer to x86 and Linux usage.
  * Multiple page size support: 1G, 2M, 64k, 4k
  * All I/O is memory-mapped

* Power arch hardware control and power management
  * Based on firmware called
    [OPAL](https://open-power.github.io/skiboot/doc/opal-spec.html), the
    Open Power Abstraction Layer

* Interrupts are different from x86
  * Exceptions are considered interrupts on Power
  * Interrupts coming from external hardware are called external interrupts
    - On [KVM](https://www.kernel.org/doc/html/latest/virt/kvm/devices/xive.html)
    - On [QEMU](https://www.qemu.org/docs/master/specs/ppc-spapr-xive.html)

* There is an existing hypervisor spec on Power
  * What kinds of hypercalls are needed to run a Linux guest on Power?
  * This spec needs to be implemented in Xen to be as compatible as possible

* Might be interesting to join the OpenPOWER Foundation

* Toolchain to use to develop on Power
  * LLVM/Clang or GCC cross-compiler
  * QEMU emulation of the Power architecture? Is it functional enough to
    begin working on it?

* Availability of test hardware
  * Adapting the current CI loop for new architectures

* Each PCIe root complex has its own IOMMU
  * It might be possible to disable them
  * Has an integrated error handling scheme

* Hardware information is provided by a device tree

That's all for our notes. Feel free to share if you have more content
or things to fix.


IMHO, it was very interesting, and it also brought good news:
1. There's a hypervisor specification for POWER, which will be
**really** useful if we want to implement Xen ("just" follow the spec)
2. POWER's design and virt mechanisms seem, at first glance,
surprisingly similar to the Xen way (an equivalent of the grant table
principle, etc.), and maybe it will be a great fit in the end!

There are still some technical docs to be published on the OpenPOWER
side, but they are willing to move forward relatively quickly.

In conclusion, I would say we are on the right track, and we might
have good surprises on how easily Xen can fit on POWER. Next steps?
Should we have a Xen meeting dedicated to that topic, or should we wait
for the next monthly community call? We could use that time to decide
on a kind of agenda for the next month and prepare some tasks to be done.

Thanks again!

Olivier.

For reference, here are the links we had during the meeting:
* https://openpowerfoundation.org/?resource_lib=power-isa-version-3-0
* OPAL, a combination of multiple parts:
  https://open-power.github.io/skiboot/doc/opal-spec.html#what-is-opal
* https://github.com/open-power/docs
* https://openpowerfoundation.org/?resource_lib=linux-on-power-architecture-reference-a-papr-linux-subset-review-draft
* https://www.kernel.org/doc/html/latest/virt/kvm/devices/xive.html
* https://www.qemu.org/docs/master/specs/ppc-spapr-xive.html


On Wed, Nov 18, 2020 at 16:17, Olivier Lambert
<lambert.olivier@gmail.com> wrote:
>
> Hi!
>
> So I managed to get an agenda with basic questions. The meeting is at
> the planned time (Nov the 19th, at 3PM central time, which is 9PM in
> UK and 10PM in Europe).
>
> Meeting place will be: https://ibm.webex.com/meet/mendy
>
> Don't forget to ping your colleagues/friends that aren't watching
> this mailing list actively, so they won't miss the meeting :)
>
> See you tomorrow!
>
> Olivier.
>
> On Thu, Nov 12, 2020 at 21:44, Olivier Lambert
> <lambert.olivier@gmail.com> wrote:
>>
>> Okay so before having the meeting webex/whatever link, I think it
>> would be more efficient to plan a kind of agenda, something we can
>> pass to the OpenPOWER team in the next few days. This way, they
>> could have some answers ready, allowing us to explore more things
>> interactively during the meeting.
>>
>> Feel free to participate in this thread (even if you won't be at the
>> meeting!), so we can gather and then organize a bit of what we'd
>> like to know/discuss during this meeting.
>>
>> So go ahead and start to throw questions :)
>>
>>
>> Thanks,
>>
>> Olivier.
>>
>>
>> On Thu, Nov 12, 2020 at 09:26, Olivier Lambert
>> <lambert.olivier@gmail.com> wrote:
>>>
>>> Thanks to everyone who participated in the poll. Due to the limited
>>> number of answers, I think it's wiser to go for the second option
>>> (Thursday the 19th), because everyone who already answered seems
>>> available that day. I'll confirm that to OpenPOWER. When it's
>>> confirmed, I'll do a recap here, ideally with the meeting place.
>>>
>>> Thanks,
>>>
>>> Olivier.
>>>
>>>
>>> On Tue, Nov 10, 2020 at 13:41, Olivier Lambert
>>> <lambert.olivier@gmail.com> wrote:
>>>>
>>>> Hi everyone,
>>>>
>>>> We got 2 potential dates for the initial tech meeting with at
>>>> least one OpenPOWER expert, so we can discuss the effort needed to
>>>> port Xen to this architecture.
>>>>
>>>> Because of time zones (on the OpenPOWER side, there's one guy in
>>>> Australia), we got 2 possible schedules in November:
>>>>
>>>> 1. 3pm CT on this Thursday the 12th (! this week)
>>>> 2. Or next week, Thursday the 19th
>>>>
>>>> I made a doodle-like poll so everyone can vote on their preferred
>>>> schedule: https://framadate.org/QQu5rYEOEYr4ZHc4
>>>>
>>>> Note: 3pm CT would mean 9pm UTC, 10pm UTC+1 (CET). But correct me
>>>> if I'm wrong.
>>>>
>>>> Reminder: the Cryptpad of the last Xen Community meeting contains
>>>> the list of people interested. If you are aware of someone
>>>> interested that could miss this email on this devel list, feel
>>>> free to forward it. Cryptpad link:
>>>> https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/
>>>>
>>>> Thank you and see you soon!
>>>>
>>>> Olivier.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:13:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:13:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32176.63139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6Dw-0000S5-Do; Fri, 20 Nov 2020 13:13:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32176.63139; Fri, 20 Nov 2020 13:13:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6Dw-0000Ry-Ag; Fri, 20 Nov 2020 13:13:12 +0000
Received: by outflank-mailman (input) for mailman id 32176;
 Fri, 20 Nov 2020 13:13:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg6Dv-0000Rs-1r
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:13:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa1d2455-4234-44a4-9a01-8dbf4009cf8b;
 Fri, 20 Nov 2020 13:13:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFB90AE9A;
 Fri, 20 Nov 2020 13:13:08 +0000 (UTC)
X-Inumbo-ID: aa1d2455-4234-44a4-9a01-8dbf4009cf8b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605877989; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=F0OTJuCOh4d9HajXsdk2UN4sq7a2Yi9zh8zUIqKzg3c=;
	b=NkjEXDCpCDQM3QxSYTKmmfN4J+l3AXBEzZRRj70jVxFuT6AtZFPXYZ59L7t9nzS70/Aw2A
	qKbL/spMR0JG5XMrAyeIvsFeJPvD7QeQHuC9WkovDS//HXcrqYyARnyXKtIYnhlyeHY5YV
	RjvpmxHYnFDtmCJcVorQNxmKVxHJ0Kc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen: add support for automatic debug key actions in case of crash
Date: Fri, 20 Nov 2020 14:13:06 +0100
Message-Id: <20201120131306.24388-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it would sometimes be nice to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible due to
the need to reboot/kexec sooner rather than later.

Add support for automatic debug key actions in case of a crash, which
can be activated via a boot or runtime parameter.

Depending on the type of crash the desired data might differ, so
support a separate setting for each possible crash type.

The parameters share the "crash-debug" prefix and have the following syntax:

  crash-debug-<type>=<string>

with <type> being one of:

  panic, hwdom, watchdog, kexeccmd, debugkey

and <string> a sequence of debug key characters with '+' having the
special semantics of a 10 millisecond pause.

So "crash-debug-watchdog=0+0qr" would result in special output in case
of a watchdog-triggered crash (dom0 state, 10 ms pause, dom0 state,
domain info, run queues).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- switched special character '.' to '+' (Jan Beulich)
- 10 ms instead of 1 s pause (Jan Beulich)
- added more text to the boot parameter description (Jan Beulich)
---
 docs/misc/xen-command-line.pandoc | 39 +++++++++++++++++++++++++++++++
 xen/common/kexec.c                |  8 ++++---
 xen/common/keyhandler.c           | 38 ++++++++++++++++++++++++++++++
 xen/common/shutdown.c             |  4 ++--
 xen/drivers/char/console.c        |  2 +-
 xen/include/xen/kexec.h           | 10 ++++++--
 xen/include/xen/keyhandler.h      | 11 +++++++++
 7 files changed, 104 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..c274351aeb 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -574,6 +574,45 @@ reduction of features at Xen's disposal to manage guests.
 ### cpuinfo (x86)
 > `= <boolean>`
 
+### crash-debug-debugkey
+### crash-debug-hwdom
+### crash-debug-kexeccmd
+### crash-debug-panic
+### crash-debug-watchdog
+> `= <string>`
+
+> Can be modified at runtime
+
+Specify debug-key actions in case of a crash. Each of the parameters applies
+to a different crash reason. The `<string>` is a sequence of debug key
+characters, with `+` having the special meaning of a 10 millisecond pause.
+
+`crash-debug-debugkey` will be used for crashes induced by the `C` debug
+key (i.e. manually induced crash).
+
+`crash-debug-hwdom` denotes a crash of dom0.
+
+`crash-debug-kexeccmd` is an explicit request of dom0 to continue with the
+kdump kernel via kexec.
+
+`crash-debug-panic` is a crash of the hypervisor.
+
+`crash-debug-watchdog` is a crash due to the watchdog timer expiring.
+
+It should be noted that dumping diagnostic data to the console can fail in
+multiple ways (missing data, hanging system, ...) depending on the reason
+for the crash, which might have left the hypervisor in a bad state.
+
+So e.g. `crash-debug-watchdog=0+0r` would dump dom0 state twice with 10
+milliseconds between the two state dumps, followed by the run queues of the
+hypervisor, if the system crashes due to a watchdog timeout.
+
+These parameters should be used carefully, as e.g. specifying
+`crash-debug-debugkey=C` would result in an endless loop. Depending on the
+reason for the system crash, triggering some debug key actions might
+result in a hang instead of dumping data and then doing a
+reboot or crash dump.
+
 ### crashinfo_maxaddr
 > `= <size>`
 
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 52cdc4ebc3..ebeee6405a 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -373,10 +373,12 @@ static int kexec_common_shutdown(void)
     return 0;
 }
 
-void kexec_crash(void)
+void kexec_crash(enum crash_reason reason)
 {
     int pos;
 
+    keyhandler_crash_action(reason);
+
     pos = (test_bit(KEXEC_FLAG_CRASH_POS, &kexec_flags) != 0);
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
@@ -409,7 +411,7 @@ static long kexec_reboot(void *_image)
 static void do_crashdump_trigger(unsigned char key)
 {
     printk("'%c' pressed -> triggering crashdump\n", key);
-    kexec_crash();
+    kexec_crash(CRASHREASON_DEBUGKEY);
     printk(" * no crash kernel loaded!\n");
 }
 
@@ -840,7 +842,7 @@ static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
         ret = continue_hypercall_on_cpu(0, kexec_reboot, image);
         break;
     case KEXEC_TYPE_CRASH:
-        kexec_crash(); /* Does not return */
+        kexec_crash(CRASHREASON_KEXECCMD); /* Does not return */
         break;
     }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 68364e987d..c2da7c18a8 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -3,7 +3,9 @@
  */
 
 #include <asm/regs.h>
+#include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/param.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
 #include <xen/console.h>
@@ -507,6 +509,42 @@ void __init initialize_keytable(void)
     }
 }
 
+#define CRASHACTION_SIZE  32
+static char crash_debug_panic[CRASHACTION_SIZE];
+static char crash_debug_hwdom[CRASHACTION_SIZE];
+static char crash_debug_watchdog[CRASHACTION_SIZE];
+static char crash_debug_kexeccmd[CRASHACTION_SIZE];
+static char crash_debug_debugkey[CRASHACTION_SIZE];
+
+static char *crash_action[CRASHREASON_N] = {
+    [CRASHREASON_PANIC] = crash_debug_panic,
+    [CRASHREASON_HWDOM] = crash_debug_hwdom,
+    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
+    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
+    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
+};
+
+string_runtime_param("crash-debug-panic", crash_debug_panic);
+string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
+string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
+string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
+string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
+
+void keyhandler_crash_action(enum crash_reason reason)
+{
+    const char *action = crash_action[reason];
+    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
+
+    while ( *action )
+    {
+        if ( *action == '+' )
+            mdelay(10);
+        else
+            handle_keypress(*action, regs);
+        action++;
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 912593915b..abde48aa4c 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -43,7 +43,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_crash:
         debugger_trap_immediate();
         printk("Hardware Dom%u crashed: ", hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_HWDOM);
         maybe_reboot();
         break; /* not reached */
 
@@ -56,7 +56,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_watchdog:
         printk("Hardware Dom%u shutdown: watchdog rebooting machine\n",
                hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_WATCHDOG);
         machine_restart(0);
         break; /* not reached */
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 861ad53a8f..acec277f5e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1271,7 +1271,7 @@ void panic(const char *fmt, ...)
 
     debugger_trap_immediate();
 
-    kexec_crash();
+    kexec_crash(CRASHREASON_PANIC);
 
     if ( opt_noreboot )
         machine_halt();
diff --git a/xen/include/xen/kexec.h b/xen/include/xen/kexec.h
index e85ba16405..9f7a912e97 100644
--- a/xen/include/xen/kexec.h
+++ b/xen/include/xen/kexec.h
@@ -1,6 +1,8 @@
 #ifndef __XEN_KEXEC_H__
 #define __XEN_KEXEC_H__
 
+#include <xen/keyhandler.h>
+
 #ifdef CONFIG_KEXEC
 
 #include <public/kexec.h>
@@ -48,7 +50,7 @@ void machine_kexec_unload(struct kexec_image *image);
 void machine_kexec_reserved(xen_kexec_reserve_t *reservation);
 void machine_reboot_kexec(struct kexec_image *image);
 void machine_kexec(struct kexec_image *image);
-void kexec_crash(void);
+void kexec_crash(enum crash_reason reason);
 void kexec_crash_save_cpu(void);
 struct crash_xen_info *kexec_crash_save_info(void);
 void machine_crash_shutdown(void);
@@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
 #define kexecing 0
 
 static inline void kexec_early_calculations(void) {}
-static inline void kexec_crash(void) {}
+static inline void kexec_crash(enum crash_reason reason)
+{
+    keyhandler_crash_action(reason);
+}
+
 static inline void kexec_crash_save_cpu(void) {}
 static inline void set_kexec_crash_area_size(u64 system_ram) {}
 
diff --git a/xen/include/xen/keyhandler.h b/xen/include/xen/keyhandler.h
index 5131e86cbc..dbf797a8b4 100644
--- a/xen/include/xen/keyhandler.h
+++ b/xen/include/xen/keyhandler.h
@@ -48,4 +48,15 @@ void register_irq_keyhandler(unsigned char key,
 /* Inject a keypress into the key-handling subsystem. */
 extern void handle_keypress(unsigned char key, struct cpu_user_regs *regs);
 
+enum crash_reason {
+    CRASHREASON_PANIC,
+    CRASHREASON_HWDOM,
+    CRASHREASON_WATCHDOG,
+    CRASHREASON_KEXECCMD,
+    CRASHREASON_DEBUGKEY,
+    CRASHREASON_N
+};
+
+void keyhandler_crash_action(enum crash_reason reason);
+
 #endif /* __XEN_KEYHANDLER_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32187.63175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PB-0001kB-5u; Fri, 20 Nov 2020 13:24:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32187.63175; Fri, 20 Nov 2020 13:24:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PB-0001k0-2r; Fri, 20 Nov 2020 13:24:49 +0000
Received: by outflank-mailman (input) for mailman id 32187;
 Fri, 20 Nov 2020 13:24:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6P9-0001i7-Eq
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P7-0007Cr-B6; Fri, 20 Nov 2020 13:24:45 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P7-00028m-0U; Fri, 20 Nov 2020 13:24:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=tMs2Q+q+Bnm5wtjajtQeaabWNgddOKx30PoGNnvP4Nw=; b=oEp7b6dZ3+ZtBLX0fRNZ6H4RMH
	iQcOY5SY+SYYo97SksNgjrgufy8ilaEDTX5SXst01KxUWSe3z4/toO18zRNL3atXp/EGfQUEJGoLn
	xUn2mYOgm98D0DNkC0mlOa155Fjl91GoY/3JRKzsG+6T9ig1jqP1u8aiBg+gyBaDJ6Vk=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Julien Grall <jgrall@amazon.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [PATCH v10 1/7] remove remaining uses of iommu_legacy_map/unmap
Date: Fri, 20 Nov 2020 13:24:34 +0000
Message-Id: <20201120132440.1141-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The 'legacy' functions do implicit flushing, so amend the callers to do the
appropriate flushing themselves.

Unfortunately, because of the structure of the P2M code, we cannot remove
the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
facilitates. Checking of this flag is now done only in relevant callers of
iommu_iotlb_flush(). Also, 'iommu_dont_flush_iotlb' is now declared
as bool (rather than bool_t) and setting/clearing it are no longer pointlessly
gated on is_iommu_enabled() returning true. (Arguably it is also pointless to
gate the call to iommu_iotlb_flush() on that condition - since it is a no-op
in that case - but the if clause allows the scope of a stack variable to be
restricted).

NOTE: The code in memory_add() now sets 'ret' if iommu_map() or
      iommu_iotlb_flush() fails.
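
As a stand-alone illustration (not part of the patch), the caller-side
pattern this conversion introduces can be sketched with mock functions;
`mock_map()`, `mock_flush()` and `map_and_maybe_flush()` below are
hypothetical names standing in for iommu_map()/iommu_iotlb_flush() and
the per-CPU 'iommu_dont_flush_iotlb' flag:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the new convention: the map call only accumulates flush
 * flags, and the caller flushes explicitly unless the "don't flush"
 * optimization defers it to a later batch flush. */
#define FLUSHF_ADDED 1u

static unsigned int flushes_done;

static int mock_map(unsigned int *flush_flags)
{
    *flush_flags |= FLUSHF_ADDED;   /* record that a flush is needed */
    return 0;
}

static int mock_flush(unsigned int flush_flags)
{
    if ( flush_flags )
        flushes_done++;
    return 0;
}

/* The caller-side pattern used throughout the patch. */
static int map_and_maybe_flush(bool dont_flush_iotlb)
{
    unsigned int flush_flags = 0;
    int rc = mock_map(&flush_flags);

    if ( !rc && !dont_flush_iotlb )
        rc = mock_flush(flush_flags);

    return rc;
}
```

When the flag is set, the flush is skipped here and must be performed by
the calling code once the flag is cleared again.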

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <jgrall@amazon.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>

v10:
 - Re-base

v9:
 - Moved check of 'iommu_dont_flush_iotlb' out of iommu_iotlb_flush() and
   into callers to avoid re-introducing a problem on Arm
 - Dropped Jan's R-b due to change

v6:
 - Fix formatting problem in memory_add()

v5:
 - Re-base
 - Removed failure case on overflow of unsigned int as it is no longer
   necessary

v3:
 - Same as v2; elected to implement batch flushing in the grant table code as
   a subsequent patch

v2:
 - Shorten the diff (mainly because of a prior patch introducing automatic
   flush-on-fail into iommu_map() and iommu_unmap())
---
 xen/arch/x86/mm.c               | 26 +++++++++++++++++++-------
 xen/arch/x86/mm/p2m-ept.c       | 20 ++++++++++++--------
 xen/arch/x86/mm/p2m-pt.c        | 16 ++++++++++++----
 xen/arch/x86/mm/p2m.c           | 25 +++++++++++++++++--------
 xen/arch/x86/x86_64/mm.c        | 20 +++++++-------------
 xen/common/grant_table.c        | 29 ++++++++++++++++++++++-------
 xen/common/memory.c             |  6 +++---
 xen/drivers/passthrough/iommu.c | 23 -----------------------
 xen/include/xen/iommu.h         | 21 +++++----------------
 9 files changed, 97 insertions(+), 89 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5a50339284c7..bb5f504b84e2 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2489,10 +2489,16 @@ static int cleanup_page_mappings(struct page_info *page)
 
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
-            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_unmap(d, _dfn(mfn), 1ul << PAGE_ORDER_4K, &flush_flags);
+            if ( !err && !this_cpu(iommu_dont_flush_iotlb) )
+                err = iommu_iotlb_flush(d, _dfn(mfn), 1ul << PAGE_ORDER_4K,
+                                        flush_flags);
 
             if ( !rc )
-                rc = rc2;
+                rc = err;
         }
 
         if ( likely(!is_special_page(page)) )
@@ -3014,14 +3020,20 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
             mfn_t mfn = page_to_mfn(page);
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
-                                        1ul << PAGE_ORDER_4K);
+                rc = iommu_unmap(d, dfn, 1ul << PAGE_ORDER_4K, &flush_flags);
             else
-                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
-                                      1ul << PAGE_ORDER_4K,
-                                      IOMMUF_readable | IOMMUF_writable);
+            {
+                rc = iommu_map(d, dfn, mfn, 1ul << PAGE_ORDER_4K,
+                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
+            }
+
+            if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
+                rc = iommu_iotlb_flush(d, dfn, 1ul << PAGE_ORDER_4K,
+                                       flush_flags);
 
             if ( unlikely(rc) )
             {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 975ab403f235..c04a30eecc65 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,15 +842,19 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) &&
          need_modify_vtd_table )
     {
-        if ( iommu_use_hap_pt(d) && !this_cpu(iommu_dont_flush_iotlb) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
-                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
-                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
-        else if ( need_iommu_pt_sync(d) )
+        unsigned int flush_flags = 0;
+
+        if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
-                iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
-                iommu_legacy_unmap(d, _dfn(gfn), 1ul << order);
+                iommu_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags,
+                          &flush_flags) :
+                iommu_unmap(d, _dfn(gfn), 1ul << order, &flush_flags);
+        else if ( iommu_use_hap_pt(d) )
+            flush_flags = (iommu_flags ? IOMMU_FLUSHF_added : 0) |
+                          (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
+
+        if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order, flush_flags);
     }
 
     unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 5fa0d30ce7d2..d8ffc6f7e078 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -741,10 +741,18 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( need_iommu_pt_sync(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
-        rc = iommu_pte_flags
-             ? iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << page_order,
-                                iommu_pte_flags)
-             : iommu_legacy_unmap(d, _dfn(gfn), 1ul << page_order);
+    {
+        unsigned int flush_flags = 0;
+
+        rc = iommu_pte_flags ?
+            iommu_map(d, _dfn(gfn), mfn, 1ul << page_order, iommu_pte_flags,
+                      &flush_flags) :
+            iommu_unmap(d, _dfn(gfn), 1ul << page_order, &flush_flags);
+
+        if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << page_order,
+                                   flush_flags);
+    }
 
     /*
      * Free old intermediate tables if necessary.  This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index d9cc1856bb80..8ee33b25ca72 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1351,11 +1351,15 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
+        unsigned int flush_flags = 0;
+
+        ret = iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), 1ul << PAGE_ORDER_4K,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K,
+                                    flush_flags);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1443,9 +1447,14 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
+        unsigned int flush_flags = 0;
+
+        ret = iommu_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K,
+                                    flush_flags);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index bce1561e1a80..7e9d16544915 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1284,21 +1284,15 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
          !iommu_use_hap_pt(hardware_domain) &&
          !need_iommu_pt_sync(hardware_domain) )
     {
-        for ( i = spfn; i < epfn; i++ )
-            if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
-                                  1ul << PAGE_ORDER_4K,
-                                  IOMMUF_readable | IOMMUF_writable) )
-                break;
-        if ( i != epfn )
-        {
-            while (i-- > old_max)
-                /* If statement to satisfy __must_check. */
-                if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
-                                        1ul << PAGE_ORDER_4K) )
-                    continue;
+        unsigned int flush_flags = 0;
+        unsigned long n = epfn - spfn;
 
+        ret = iommu_map(hardware_domain, _dfn(i), _mfn(i), n,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(hardware_domain, _dfn(i), n, flush_flags);
+        if ( ret )
             goto destroy_m2p;
-        }
     }
 
     /* We can't revert any more */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index a5d3ed8bdaac..beb6b2d40d68 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,11 +1225,22 @@ map_grant_ref(
             kind = IOMMUF_readable;
         else
             kind = 0;
-        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 1, kind) )
+
+        if ( kind )
         {
-            double_gt_unlock(lgt, rgt);
-            rc = GNTST_general_error;
-            goto undo_out;
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_map(ld, dfn, mfn, 1, kind, &flush_flags);
+            if ( !err )
+                err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+            if ( err )
+            {
+                double_gt_unlock(lgt, rgt);
+                rc = GNTST_general_error;
+                goto undo_out;
+            }
         }
     }
 
@@ -1473,19 +1484,23 @@ unmap_common(
     if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
     {
         unsigned int kind;
+        dfn_t dfn = _dfn(mfn_x(op->mfn));
+        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 1);
+            err = iommu_unmap(ld, dfn, 1, &flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 1,
-                                   IOMMUF_readable);
+            err = iommu_map(ld, dfn, op->mfn, 1, IOMMUF_readable,
+                            &flush_flags);
 
         double_gt_unlock(lgt, rgt);
 
+        if ( !err )
+            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
         if ( err )
             rc = GNTST_general_error;
     }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index df85b550a1b1..14137c68393c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -836,8 +836,8 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
     if ( is_iommu_enabled(d) )
     {
-       this_cpu(iommu_dont_flush_iotlb) = 1;
-       extra.ppage = &pages[0];
+        this_cpu(iommu_dont_flush_iotlb) = true;
+        extra.ppage = &pages[0];
     }
 
     while ( xatp->size > done )
@@ -867,7 +867,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         int ret;
         unsigned int i;
 
-        this_cpu(iommu_dont_flush_iotlb) = 0;
+        this_cpu(iommu_dont_flush_iotlb) = false;
 
         ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
                                 IOMMU_FLUSHF_modified);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 87f9a857bbae..a9da4d2b0645 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -282,18 +282,6 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     return rc;
 }
 
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned long page_count, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_count, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned long page_count,
                 unsigned int *flush_flags)
 {
@@ -338,17 +326,6 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned long page_count,
     return rc;
 }
 
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_count, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
 int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                       unsigned int *flags)
 {
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 191021870fed..244a11b9b494 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -151,16 +151,8 @@ int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
                              unsigned long page_count,
                              unsigned int *flush_flags);
-
-int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                                  unsigned long page_count,
-                                  unsigned int flags);
-int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
-                                    unsigned long page_count);
-
 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
-
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned long page_count,
                                    unsigned int flush_flags);
@@ -368,15 +360,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
- * avoid unecessary iotlb_flush in the low level IOMMU code.
- *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
- * this operation can be really expensive. This flag will be set by the
- * caller to notify the low level IOMMU code to avoid the iotlb flushes.
- * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
- * the caller.
+ * avoid unnecessary IOMMU flushing while updating the P2M.
+ * Setting the value to true will cause iommu_iotlb_flush() to return without
+ * actually performing a flush. A batch flush must therefore be done by the
+ * calling code after setting the value back to false.
  */
-DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb);
 
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32185.63151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6P9-0001iJ-MQ; Fri, 20 Nov 2020 13:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32185.63151; Fri, 20 Nov 2020 13:24:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6P9-0001iC-IG; Fri, 20 Nov 2020 13:24:47 +0000
Received: by outflank-mailman (input) for mailman id 32185;
 Fri, 20 Nov 2020 13:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6P8-0001hx-FG
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P5-0007Cp-KX; Fri, 20 Nov 2020 13:24:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P5-00028m-9S; Fri, 20 Nov 2020 13:24:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P8-0001hx-FG
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:46 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=oVnrEEire8FyLNYRbm2ismIr24k0ZF76I8uW1Vm4pKk=; b=gsWXffCRYU0/jGWsED+wTkY+0v
	QdekzfpikN32H9YHmeTe9gEhJFw80FTZUkt50mqhPwH8VytkyivnGVXupGfRPOriI5U5mqa0UfbQq
	PS6Akecd5F5XnuhsjvjbbxdFlNVS+wcxLCeFt9PZN6rDkHHH8t4IeuucM50xCtZQca3U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P5-0007Cp-KX; Fri, 20 Nov 2020 13:24:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P5-00028m-9S; Fri, 20 Nov 2020 13:24:43 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <jgrall@amazon.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 0/7] IOMMU cleanup
Date: Fri, 20 Nov 2020 13:24:33 +0000
Message-Id: <20201120132440.1141-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This is the remainder of the cleanup series deferred until XSA-346 and
XSA-347 were publicly disclosed.

Paul Durrant (7):
  remove remaining uses of iommu_legacy_map/unmap
  common/grant_table: batch flush I/O TLB
  iommu: remove the share_p2m operation
  iommu: stop calling IOMMU page tables 'p2m tables'
  vtd: use a bit field for root_entry
  vtd: use a bit field for context_entry
  vtd: use a bit field for dma_pte

 xen/arch/x86/mm.c                           |  26 ++-
 xen/arch/x86/mm/p2m-ept.c                   |  20 +-
 xen/arch/x86/mm/p2m-pt.c                    |  16 +-
 xen/arch/x86/mm/p2m.c                       |  28 ++-
 xen/arch/x86/x86_64/mm.c                    |  20 +-
 xen/common/grant_table.c                    | 208 ++++++++++++------
 xen/common/memory.c                         |   6 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  20 +-
 xen/drivers/passthrough/iommu.c             |  52 +----
 xen/drivers/passthrough/vtd/extern.h        |   2 +-
 xen/drivers/passthrough/vtd/iommu.c         | 220 +++++++++++---------
 xen/drivers/passthrough/vtd/iommu.h         | 113 ++++------
 xen/drivers/passthrough/vtd/utils.c         |  22 +-
 xen/drivers/passthrough/vtd/x86/ats.c       |  29 +--
 xen/drivers/passthrough/vtd/x86/vtd.c       |   2 +-
 xen/include/xen/iommu.h                     |  26 +--
 16 files changed, 429 insertions(+), 381 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <jgrall@amazon.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32186.63156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6P9-0001ik-Uo; Fri, 20 Nov 2020 13:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32186.63156; Fri, 20 Nov 2020 13:24:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6P9-0001ib-Q8; Fri, 20 Nov 2020 13:24:47 +0000
Received: by outflank-mailman (input) for mailman id 32186;
 Fri, 20 Nov 2020 13:24:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6P9-0001i2-00
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P8-0007Cw-Mn; Fri, 20 Nov 2020 13:24:46 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P8-00028m-EY; Fri, 20 Nov 2020 13:24:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P9-0001i2-00
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:47 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=q4puytQMj2zWKl5eZW6uhYT5yUYlYoMz5vBb3O91yY0=; b=AGCVfjdLJ32ta+FrdVF/jsfQ80
	n0cbiXX/1ANWP+O9n+W9M1JP/q98cHvBiMtMJ9o4+FrxVGJsDcQVxeErSpRKZl9Flf8U7yk+rnFST
	EYnJ4YvsJmuQ1BRj5EZe/UZ1snVMXkHePS/Y3J0wgUo1W1WiLyDAz31h5kAOtfsvAGqc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P8-0007Cw-Mn; Fri, 20 Nov 2020 13:24:46 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P8-00028m-EY; Fri, 20 Nov 2020 13:24:46 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v10 2/7] common/grant_table: batch flush I/O TLB
Date: Fri, 20 Nov 2020 13:24:35 +0000
Message-Id: <20201120132440.1141-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch avoids calling iommu_iotlb_flush() for each individual GNTTABOP and
instead calls iommu_iotlb_flush_all() at the end of a batch. This should mean
non-singleton map/unmap operations perform better.

NOTE: A batch is the number of operations done before a pre-emption check and,
      in the case of unmap, a TLB flush.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <julien@xen.org>
Reviewed-by: Wei Liu <wl@xen.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v6:
 - Fix spelling of 'preemption'
 - Drop unneeded 'currd' stack variable

v5:
 - Add batching to gnttab_map_grant_ref() to handle flushing before
   preemption check
 - Maintain per-op flushing in the case of singletons

v3:
 - New in v3
---
 xen/common/grant_table.c | 199 ++++++++++++++++++++++++++-------------
 1 file changed, 133 insertions(+), 66 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index beb6b2d40d68..1e3d7a2d33cb 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -241,7 +241,13 @@ struct gnttab_unmap_common {
     grant_ref_t ref;
 };
 
-/* Number of unmap operations that are done between each tlb flush */
+/* Number of map operations that are done between each preemption check */
+#define GNTTAB_MAP_BATCH_SIZE 32
+
+/*
+ * Number of unmap operations that are done between each tlb flush and
+ * preemption check.
+ */
 #define GNTTAB_UNMAP_BATCH_SIZE 32
 
 
@@ -979,7 +985,7 @@ static unsigned int mapkind(
 
 static void
 map_grant_ref(
-    struct gnttab_map_grant_ref *op)
+    struct gnttab_map_grant_ref *op, bool do_flush, unsigned int *flush_flags)
 {
     struct domain *ld, *rd, *owner = NULL;
     struct grant_table *lgt, *rgt;
@@ -1229,12 +1235,11 @@ map_grant_ref(
         if ( kind )
         {
             dfn_t dfn = _dfn(mfn_x(mfn));
-            unsigned int flush_flags = 0;
             int err;
 
-            err = iommu_map(ld, dfn, mfn, 1, kind, &flush_flags);
-            if ( !err )
-                err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+            err = iommu_map(ld, dfn, mfn, 1, kind, flush_flags);
+            if ( do_flush && !err )
+                err = iommu_iotlb_flush(ld, dfn, 1, *flush_flags);
             if ( err )
             {
                 double_gt_unlock(lgt, rgt);
@@ -1319,29 +1324,59 @@ static long
 gnttab_map_grant_ref(
     XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
-    int i;
-    struct gnttab_map_grant_ref op;
+    unsigned int done = 0;
+    int rc = 0;
 
-    for ( i = 0; i < count; i++ )
+    while ( count )
     {
-        if ( i && hypercall_preempt_check() )
-            return i;
+        unsigned int i, c = min_t(unsigned int, count, GNTTAB_MAP_BATCH_SIZE);
+        unsigned int flush_flags = 0;
 
-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
-            return -EFAULT;
+        for ( i = 0; i < c; i++ )
+        {
+            struct gnttab_map_grant_ref op;
 
-        map_grant_ref(&op);
+            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
+            {
+                rc = -EFAULT;
+                break;
+            }
 
-        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
-            return -EFAULT;
+            map_grant_ref(&op, c == 1, &flush_flags);
+
+            if ( unlikely(__copy_to_guest(uop, &op, 1)) )
+            {
+                rc = -EFAULT;
+                break;
+            }
+
+            guest_handle_add_offset(uop, 1);
+        }
+
+        if ( c > 1 )
+        {
+            int err = iommu_iotlb_flush_all(current->domain, flush_flags);
+
+            if ( !rc )
+                rc = err;
+        }
+
+        if ( rc )
+            break;
+
+        count -= c;
+        done += c;
+
+        if ( count && hypercall_preempt_check() )
+            return done;
     }
 
-    return 0;
+    return rc;
 }
 
 static void
 unmap_common(
-    struct gnttab_unmap_common *op)
+    struct gnttab_unmap_common *op, bool do_flush, unsigned int *flush_flags)
 {
     domid_t          dom;
     struct domain   *ld, *rd;
@@ -1485,22 +1520,20 @@ unmap_common(
     {
         unsigned int kind;
         dfn_t dfn = _dfn(mfn_x(op->mfn));
-        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_unmap(ld, dfn, 1, &flush_flags);
+            err = iommu_unmap(ld, dfn, 1, flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map(ld, dfn, op->mfn, 1, IOMMUF_readable,
-                            &flush_flags);
+            err = iommu_map(ld, dfn, op->mfn, 1, IOMMUF_readable, flush_flags);
 
         double_gt_unlock(lgt, rgt);
 
-        if ( !err )
-            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+        if ( do_flush && !err )
+            err = iommu_iotlb_flush(ld, dfn, 1, *flush_flags);
         if ( err )
             rc = GNTST_general_error;
     }
@@ -1599,8 +1632,8 @@ unmap_common_complete(struct gnttab_unmap_common *op)
 
 static void
 unmap_grant_ref(
-    struct gnttab_unmap_grant_ref *op,
-    struct gnttab_unmap_common *common)
+    struct gnttab_unmap_grant_ref *op, struct gnttab_unmap_common *common,
+    bool do_flush, unsigned int *flush_flags)
 {
     common->host_addr = op->host_addr;
     common->dev_bus_addr = op->dev_bus_addr;
@@ -1612,7 +1645,7 @@ unmap_grant_ref(
     common->rd = NULL;
     common->mfn = INVALID_MFN;
 
-    unmap_common(common);
+    unmap_common(common, do_flush, flush_flags);
     op->status = common->status;
 }
 
@@ -1621,31 +1654,55 @@ static long
 gnttab_unmap_grant_ref(
     XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
-    int i, c, partial_done, done = 0;
-    struct gnttab_unmap_grant_ref op;
-    struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+    struct domain *currd = current->domain;
+    unsigned int done = 0;
+    int rc = 0;
 
-    while ( count != 0 )
+    while ( count )
     {
-        c = min(count, (unsigned int)GNTTAB_UNMAP_BATCH_SIZE);
-        partial_done = 0;
+        struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+        unsigned int i, c, partial_done = 0;
+        unsigned int flush_flags = 0;
+
+        c = min_t(unsigned int, count, GNTTAB_UNMAP_BATCH_SIZE);
 
         for ( i = 0; i < c; i++ )
         {
+            struct gnttab_unmap_grant_ref op;
+
             if ( unlikely(__copy_from_guest(&op, uop, 1)) )
-                goto fault;
-            unmap_grant_ref(&op, &common[i]);
+            {
+                rc = -EFAULT;
+                break;
+            }
+
+            unmap_grant_ref(&op, &common[i], c == 1, &flush_flags);
             ++partial_done;
+
             if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
-                goto fault;
+            {
+                rc = -EFAULT;
+                break;
+            }
+
             guest_handle_add_offset(uop, 1);
         }
 
-        gnttab_flush_tlb(current->domain);
+        gnttab_flush_tlb(currd);
+        if ( c > 1 )
+        {
+            int err = iommu_iotlb_flush_all(currd, flush_flags);
+
+            if ( !rc )
+                rc = err;
+        }
 
         for ( i = 0; i < partial_done; i++ )
             unmap_common_complete(&common[i]);
 
+        if ( rc )
+            break;
+
         count -= c;
         done += c;
 
@@ -1653,20 +1710,13 @@ gnttab_unmap_grant_ref(
             return done;
     }
 
-    return 0;
-
-fault:
-    gnttab_flush_tlb(current->domain);
-
-    for ( i = 0; i < partial_done; i++ )
-        unmap_common_complete(&common[i]);
-    return -EFAULT;
+    return rc;
 }
 
 static void
 unmap_and_replace(
-    struct gnttab_unmap_and_replace *op,
-    struct gnttab_unmap_common *common)
+    struct gnttab_unmap_and_replace *op, struct gnttab_unmap_common *common,
+    bool do_flush, unsigned int *flush_flags)
 {
     common->host_addr = op->host_addr;
     common->new_addr = op->new_addr;
@@ -1678,7 +1728,7 @@ unmap_and_replace(
     common->rd = NULL;
     common->mfn = INVALID_MFN;
 
-    unmap_common(common);
+    unmap_common(common, do_flush, flush_flags);
     op->status = common->status;
 }
 
@@ -1686,31 +1736,55 @@ static long
 gnttab_unmap_and_replace(
     XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
-    int i, c, partial_done, done = 0;
-    struct gnttab_unmap_and_replace op;
-    struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+    struct domain *currd = current->domain;
+    unsigned int done = 0;
+    int rc = 0;
 
-    while ( count != 0 )
+    while ( count )
     {
-        c = min(count, (unsigned int)GNTTAB_UNMAP_BATCH_SIZE);
-        partial_done = 0;
+        struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+        unsigned int i, c, partial_done = 0;
+        unsigned int flush_flags = 0;
+
+        c = min_t(unsigned int, count, GNTTAB_UNMAP_BATCH_SIZE);
 
         for ( i = 0; i < c; i++ )
         {
+            struct gnttab_unmap_and_replace op;
+
             if ( unlikely(__copy_from_guest(&op, uop, 1)) )
-                goto fault;
-            unmap_and_replace(&op, &common[i]);
+            {
+                rc = -EFAULT;
+                break;
+            }
+
+            unmap_and_replace(&op, &common[i], c == 1, &flush_flags);
             ++partial_done;
+
             if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
-                goto fault;
+            {
+                rc = -EFAULT;
+                break;
+            }
+
             guest_handle_add_offset(uop, 1);
         }
 
-        gnttab_flush_tlb(current->domain);
+        gnttab_flush_tlb(currd);
+        if ( c > 1 )
+        {
+            int err = iommu_iotlb_flush_all(currd, flush_flags);
+
+            if ( !rc )
+                rc = err;
+        }
 
         for ( i = 0; i < partial_done; i++ )
             unmap_common_complete(&common[i]);
 
+        if ( rc )
+            break;
+
         count -= c;
         done += c;
 
@@ -1718,14 +1792,7 @@ gnttab_unmap_and_replace(
             return done;
     }
 
-    return 0;
-
-fault:
-    gnttab_flush_tlb(current->domain);
-
-    for ( i = 0; i < partial_done; i++ )
-        unmap_common_complete(&common[i]);
-    return -EFAULT;
+    return rc;
 }
 
 static int
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32188.63187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PC-0001ml-Nk; Fri, 20 Nov 2020 13:24:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32188.63187; Fri, 20 Nov 2020 13:24:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PC-0001mc-KH; Fri, 20 Nov 2020 13:24:50 +0000
Received: by outflank-mailman (input) for mailman id 32188;
 Fri, 20 Nov 2020 13:24:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6PB-0001lU-ND
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PA-0007D8-1U; Fri, 20 Nov 2020 13:24:48 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6P9-00028m-Os; Fri, 20 Nov 2020 13:24:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6PB-0001lU-ND
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=LpiQQ6giFOx1RkU7jU2YCkYAgdsKteSyHKcM7KDqpxY=; b=vI7wKTnKhqg/K9HaZpmJKEvT5i
	n+l2VTtyY+bwwhB4bueiz9/QUIWj/YcJTYVxZGGD5uS1z9WvQKoi3rQfGJhg1K3LOQl656Lwvggr6
	+1d2noB5k+/ChGPvYgsRqcJB3m22NK024x1Tu2f+1xkGEQaF70jsK8DFuqnUpFBM/rD0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6PA-0007D8-1U; Fri, 20 Nov 2020 13:24:48 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kg6P9-00028m-Os; Fri, 20 Nov 2020 13:24:47 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v10 3/7] iommu: remove the share_p2m operation
Date: Fri, 20 Nov 2020 13:24:36 +0000
Message-Id: <20201120132440.1141-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Sharing of HAP tables is now VT-d specific, so the operation is no longer
defined for the AMD IOMMU. There's also no need to pro-actively set
vtd.pgd_maddr when using shared EPT, as it is straightforward to define a
helper function that returns the appropriate value in the shared and
non-shared cases.

NOTE: This patch also modifies unmap_vtd_domain_page() to take a const
      pointer since the only thing it calls, unmap_domain_page(), also takes
      a const pointer.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v6:
 - Adjust code to return P2M paddr
 - Add removed comment back in

v5:
 - Pass 'nr_pt_levels' into domain_pgd_maddr() directly

v2:
 - Put the PGD level adjust into the helper function too, since it is
   irrelevant in the shared EPT case
---
 xen/arch/x86/mm/p2m.c                 |  3 -
 xen/drivers/passthrough/iommu.c       |  8 ---
 xen/drivers/passthrough/vtd/extern.h  |  2 +-
 xen/drivers/passthrough/vtd/iommu.c   | 90 +++++++++++++++------------
 xen/drivers/passthrough/vtd/x86/vtd.c |  2 +-
 xen/include/xen/iommu.h               |  3 -
 6 files changed, 52 insertions(+), 56 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8ee33b25ca72..34e37a9b1b5d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -727,9 +727,6 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     p2m->phys_table = pagetable_from_mfn(top_mfn);
 
-    if ( hap_enabled(d) )
-        iommu_share_p2m_table(d);
-
     p2m_unlock(p2m);
     return 0;
 }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index a9da4d2b0645..90748062e5bd 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -500,14 +500,6 @@ int iommu_do_domctl(
     return ret;
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    ASSERT(hap_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        iommu_get_ops()->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )
diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index ad6c5f907b8c..19a908ab4f71 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -72,7 +72,7 @@ void flush_all_cache(void);
 uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node);
 void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
-void unmap_vtd_domain_page(void *va);
+void unmap_vtd_domain_page(const void *va);
 int domain_context_mapping_one(struct domain *domain, struct vtd_iommu *iommu,
                                u8 bus, u8 devfn, const struct pci_dev *);
 int domain_context_unmap_one(struct domain *domain, struct vtd_iommu *iommu,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index f6c4021fd698..a76e60c99a58 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -318,6 +318,48 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     return pte_maddr;
 }
 
+static uint64_t domain_pgd_maddr(struct domain *d, unsigned int nr_pt_levels)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    uint64_t pgd_maddr;
+    unsigned int agaw;
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+
+    if ( iommu_use_hap_pt(d) )
+    {
+        pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
+
+        return pagetable_get_paddr(pgt);
+    }
+
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        /* Ensure we have pagetables allocated down to leaf PTE. */
+        addr_to_dma_page_maddr(d, 0, 1);
+
+        if ( !hd->arch.vtd.pgd_maddr )
+            return 0;
+    }
+
+    pgd_maddr = hd->arch.vtd.pgd_maddr;
+
+    /* Skip top levels of page tables for 2- and 3-level DRHDs. */
+    for ( agaw = level_to_agaw(4);
+          agaw != level_to_agaw(nr_pt_levels);
+          agaw-- )
+    {
+        const struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
+
+        pgd_maddr = dma_pte_addr(*p);
+        unmap_vtd_domain_page(p);
+        if ( !pgd_maddr )
+            return 0;
+    }
+
+    return pgd_maddr;
+}
+
 static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
 {
     u32 val;
@@ -1286,7 +1328,7 @@ int domain_context_mapping_one(
     struct context_entry *context, *context_entries;
     u64 maddr, pgd_maddr;
     u16 seg = iommu->drhd->segment;
-    int agaw, rc, ret;
+    int rc, ret;
     bool_t flush_dev_iotlb;
 
     ASSERT(pcidevs_locked());
@@ -1340,37 +1382,18 @@ int domain_context_mapping_one(
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
     {
         context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
-        agaw = level_to_agaw(iommu->nr_pt_levels);
     }
     else
     {
         spin_lock(&hd->arch.mapping_lock);
 
-        /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.vtd.pgd_maddr == 0 )
+        pgd_maddr = domain_pgd_maddr(domain, iommu->nr_pt_levels);
+        if ( !pgd_maddr )
         {
-            addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.vtd.pgd_maddr == 0 )
-            {
-            nomem:
-                spin_unlock(&hd->arch.mapping_lock);
-                spin_unlock(&iommu->lock);
-                unmap_vtd_domain_page(context_entries);
-                return -ENOMEM;
-            }
-        }
-
-        /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-        for ( agaw = level_to_agaw(4);
-              agaw != level_to_agaw(iommu->nr_pt_levels);
-              agaw-- )
-        {
-            struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
-            pgd_maddr = dma_pte_addr(*p);
-            unmap_vtd_domain_page(p);
-            if ( pgd_maddr == 0 )
-                goto nomem;
+            spin_unlock(&hd->arch.mapping_lock);
+            spin_unlock(&iommu->lock);
+            unmap_vtd_domain_page(context_entries);
+            return -ENOMEM;
         }
 
         context_set_address_root(*context, pgd_maddr);
@@ -1389,7 +1412,7 @@ int domain_context_mapping_one(
         return -EFAULT;
     }
 
-    context_set_address_width(*context, agaw);
+    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
     context_set_fault_enable(*context);
     context_set_present(*context);
     iommu_sync_cache(context, sizeof(struct context_entry));
@@ -1848,18 +1871,6 @@ static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }
 
-/*
- * set VT-d page table directory to EPT table if allowed
- */
-static void iommu_set_pgd(struct domain *d)
-{
-    mfn_t pgd_mfn;
-
-    pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.vtd.pgd_maddr =
-        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
-}
-
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
                                  const struct acpi_rmrr_unit *rmrr,
                                  u32 flag)
@@ -2718,7 +2729,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .adjust_irq_affinities = adjust_vtd_irq_affinities,
     .suspend = vtd_suspend,
     .resume = vtd_resume,
-    .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index bbe358dc36c7..6681dccd6970 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -42,7 +42,7 @@ void *map_vtd_domain_page(u64 maddr)
     return map_domain_page(_mfn(paddr_to_pfn(maddr)));
 }
 
-void unmap_vtd_domain_page(void *va)
+void unmap_vtd_domain_page(const void *va)
 {
     unmap_domain_page(va);
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 244a11b9b494..236c55af8921 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -269,7 +269,6 @@ struct iommu_ops {
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned long page_count,
@@ -346,8 +345,6 @@ void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
 
-void iommu_share_p2m_table(struct domain *d);
-
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32189.63199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PE-0001pa-3B; Fri, 20 Nov 2020 13:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32189.63199; Fri, 20 Nov 2020 13:24:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PD-0001pL-U2; Fri, 20 Nov 2020 13:24:51 +0000
Received: by outflank-mailman (input) for mailman id 32189;
 Fri, 20 Nov 2020 13:24:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6PC-0001mN-BT
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PB-0007DE-0t; Fri, 20 Nov 2020 13:24:49 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PA-00028m-PL; Fri, 20 Nov 2020 13:24:48 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=XCBmmmfKMZU4rLhu9vkKTLYVmgviUAD04twlLuMkZm8=; b=kQ3E56mMg/Nd6NDs/u8XlDGtGh
	kqZIU0SX2eIxBcZ26vYDPdzJfAmcMTlgfrwnQPT+BZ1xYY1NjrtoMHg3geqFCS31JxmhuUaO6wkVo
	EBUlhNHH30LY+1izMEA/lkQes7INvi8hNHuEtBF4WrMsmVKN1FT9QLmGJ3hLNYry9MmE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v10 4/7] iommu: stop calling IOMMU page tables 'p2m tables'
Date: Fri, 20 Nov 2020 13:24:37 +0000
Message-Id: <20201120132440.1141-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Referring to IOMMU page tables as 'p2m tables' is confusing and not consistent
with the terminology introduced with 'dfn_t'. Just call them IOMMU page tables.

Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.

NOTE: All calls to printk() have also been removed from
      iommu_dump_page_tables(); the implementation-specific code is now
      responsible for all output.
      The check for the global 'iommu_enabled' has also been replaced by an
      ASSERT since iommu_dump_page_tables() is not registered as a key handler
      unless IOMMU mappings are enabled.
      Error messages are now prefixed with the name of the function.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v6:
 - Cosmetic adjustment
 - Drop use of __func__

v5:
 - Make sure domain id is in the output
 - Use VTDPREFIX in output for consistency

v2:
 - Moved all output into implementation specific code
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 20 +++++++-------
 xen/drivers/passthrough/iommu.c             | 21 ++++-----------
 xen/drivers/passthrough/vtd/iommu.c         | 30 ++++++++++++---------
 xen/include/xen/iommu.h                     |  2 +-
 4 files changed, 33 insertions(+), 40 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 64c1fca7b5b6..42b5a5a9bec4 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -494,8 +494,8 @@ static int amd_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 
 #include <asm/io_apic.h>
 
-static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
-                                     paddr_t gpa, int indent)
+static void amd_dump_page_table_level(struct page_info *pg, int level,
+                                      paddr_t gpa, int indent)
 {
     paddr_t address;
     const union amd_iommu_pte *table_vaddr;
@@ -507,7 +507,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     table_vaddr = __map_domain_page(pg);
     if ( table_vaddr == NULL )
     {
-        printk("Failed to map IOMMU domain page %"PRIpaddr"\n", 
+        printk("AMD IOMMU failed to map domain page %"PRIpaddr"\n",
                 page_to_maddr(pg));
         return;
     }
@@ -524,7 +524,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
 
         if ( pde->next_level && (pde->next_level != (level - 1)) )
         {
-            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
+            printk("AMD IOMMU table error. next_level = %d, expected %d\n",
                    pde->next_level, level - 1);
 
             continue;
@@ -532,7 +532,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
 
         address = gpa + amd_offset_level_address(index, level);
         if ( pde->next_level >= 1 )
-            amd_dump_p2m_table_level(
+            amd_dump_page_table_level(
                 mfn_to_page(_mfn(pde->mfn)), pde->next_level,
                 address, indent + 1);
         else
@@ -545,16 +545,16 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     unmap_domain_page(table_vaddr);
 }
 
-static void amd_dump_p2m_table(struct domain *d)
+static void amd_dump_page_tables(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !hd->arch.amd.root_table )
         return;
 
-    printk("p2m table has %u levels\n", hd->arch.amd.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.amd.root_table,
-                             hd->arch.amd.paging_mode, 0, 0);
+    printk("AMD IOMMU %pd table has %u levels\n", d, hd->arch.amd.paging_mode);
+    amd_dump_page_table_level(hd->arch.amd.root_table,
+                              hd->arch.amd.paging_mode, 0, 0);
 }
 
 static const struct iommu_ops __initconstrel _iommu_ops = {
@@ -580,7 +580,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .crash_shutdown = amd_iommu_crash_shutdown,
-    .dump_p2m_table = amd_dump_p2m_table,
+    .dump_page_tables = amd_dump_page_tables,
 };
 
 static const struct iommu_init_ops __initconstrel _iommu_init_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 90748062e5bd..8fae77b59375 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -22,7 +22,7 @@
 #include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
-static void iommu_dump_p2m_table(unsigned char key);
+static void iommu_dump_page_tables(unsigned char key);
 
 unsigned int __read_mostly iommu_dev_iotlb_timeout = 1000;
 integer_param("iommu_dev_iotlb_timeout", iommu_dev_iotlb_timeout);
@@ -212,7 +212,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return;
 
-    register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
+    register_keyhandler('o', &iommu_dump_page_tables, "dump iommu page tables", 0);
 
     hd->platform_ops->hwdom_init(d);
 }
@@ -535,16 +535,12 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }
 
-static void iommu_dump_p2m_table(unsigned char key)
+static void iommu_dump_page_tables(unsigned char key)
 {
     struct domain *d;
     const struct iommu_ops *ops;
 
-    if ( !iommu_enabled )
-    {
-        printk("IOMMU not enabled!\n");
-        return;
-    }
+    ASSERT(iommu_enabled);
 
     ops = iommu_get_ops();
 
@@ -555,14 +551,7 @@ static void iommu_dump_p2m_table(unsigned char key)
         if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
             continue;
 
-        if ( iommu_use_hap_pt(d) )
-        {
-            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
-            continue;
-        }
-
-        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
-        ops->dump_p2m_table(d);
+        ops->dump_page_tables(d);
     }
 
     rcu_read_unlock(&domlist_read_lock);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a76e60c99a58..d136fe36883b 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2582,8 +2582,8 @@ static void vtd_resume(void)
     }
 }
 
-static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
-                                     int indent)
+static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
+                                      int indent)
 {
     paddr_t address;
     int i;
@@ -2596,7 +2596,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     pt_vaddr = map_vtd_domain_page(pt_maddr);
     if ( pt_vaddr == NULL )
     {
-        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        printk(VTDPREFIX " failed to map domain page %"PRIpaddr"\n",
+               pt_maddr);
         return;
     }
 
@@ -2612,8 +2613,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
 
         address = gpa + offset_level_address(i, level);
         if ( next_level >= 1 ) 
-            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
-                                     address, indent + 1);
+            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
+                                      address, indent + 1);
         else
             printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
@@ -2624,17 +2625,20 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     unmap_vtd_domain_page(pt_vaddr);
 }
 
-static void vtd_dump_p2m_table(struct domain *d)
+static void vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd;
+    const struct domain_iommu *hd = dom_iommu(d);
 
-    if ( list_empty(&acpi_drhd_units) )
+    if ( iommu_use_hap_pt(d) )
+    {
+        printk(VTDPREFIX " %pd sharing EPT table\n", d);
         return;
+    }
 
-    hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
-                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+    printk(VTDPREFIX" %pd table has %d levels\n", d,
+           agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
+                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }
 
 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2733,7 +2737,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
-    .dump_p2m_table = vtd_dump_p2m_table,
+    .dump_page_tables = vtd_dump_page_tables,
 };
 
 const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 236c55af8921..1a369c97c956 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -275,7 +275,7 @@ struct iommu_ops {
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-    void (*dump_p2m_table)(struct domain *d);
+    void (*dump_page_tables)(struct domain *d);
 
 #ifdef CONFIG_HAS_DEVICE_TREE
     /*
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32190.63206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PE-0001qj-JB; Fri, 20 Nov 2020 13:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32190.63206; Fri, 20 Nov 2020 13:24:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PE-0001qB-9v; Fri, 20 Nov 2020 13:24:52 +0000
Received: by outflank-mailman (input) for mailman id 32190;
 Fri, 20 Nov 2020 13:24:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6PD-0001nz-6n
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PB-0007DL-Qk; Fri, 20 Nov 2020 13:24:49 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PB-00028m-J9; Fri, 20 Nov 2020 13:24:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ot0EG7c7zJPZbN0sn3F+EUwxX2ndYclESdsFvCm5FbA=; b=n/nhi0UcrYGr5J0hwUOdprtcQ+
	gt77c4QRRa+jPxO/Wl3xhDTvzujADk7/SZOX1OWQlKEMpb0Hqcx7wybCXD7Lo6GpwCXJJpR7XdvOt
	pWVAbA88UKbbLKORG/g8wV0cft+H7Uiekh7e1as86tNm+6QR/bLgQnNIaABAntFJ2QKc=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v10 5/7] vtd: use a bit field for root_entry
Date: Fri, 20 Nov 2020 13:24:38 +0000
Message-Id: <20201120132440.1141-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This makes the code a little easier to read and also makes it more consistent
with iremap_entry.

Also take the opportunity to tidy up the implementation of device_in_domain().
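
As a rough standalone illustration (not the patch itself), the equivalence
between the removed macros and the new bit fields can be checked with a
snippet like the one below. It assumes a little-endian x86-64 compiler such
as GCC, which allocates bit fields from the least significant bit (the
layout the VT-d code relies on); the two helper functions are mine, added
only for the comparison:

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the new root_entry layout from this patch. */
struct root_entry {
    union {
        struct { uint64_t lo, hi; };
        struct {
            /* 0 - 63 */
            bool p:1;
            unsigned int reserved0:11;
            uint64_t ctp:52;

            /* 64 - 127 */
            uint64_t reserved1;
        };
    };
};

/* Old-style encoding, as set_root_value() + set_root_present() built it
 * (PAGE_MASK_4K written out as ~0xfff here). */
static uint64_t old_encoding(uint64_t maddr)
{
    return (maddr & ~0xfffULL) | 1;
}

/* New-style encoding via the bit fields. */
static uint64_t new_encoding(uint64_t maddr)
{
    struct root_entry root = {0};

    root.ctp = maddr >> 12; /* i.e. paddr_to_pfn(maddr) */
    root.p = true;

    return root.lo;
}
```

For any 4K-aligned context-table address both encodings produce identical
raw values, which is why the conversion is purely mechanical.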

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>

v10:
 - Small tweaks requested by Jan
 - Remove macros in favour of direct field access
 - Add missing barrier

v4:
 - New in v4
---
 xen/drivers/passthrough/vtd/iommu.c   |  9 +++++----
 xen/drivers/passthrough/vtd/iommu.h   | 25 ++++++++++++-------------
 xen/drivers/passthrough/vtd/utils.c   |  6 +++---
 xen/drivers/passthrough/vtd/x86/ats.c | 27 +++++++++++++++------------
 4 files changed, 35 insertions(+), 32 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..1a038541f0a3 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -237,7 +237,7 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
     ASSERT(spin_is_locked(&iommu->lock));
     root_entries = (struct root_entry *)map_vtd_domain_page(iommu->root_maddr);
     root = &root_entries[bus];
-    if ( !root_present(*root) )
+    if ( !root->p )
     {
         maddr = alloc_pgtable_maddr(1, iommu->node);
         if ( maddr == 0 )
@@ -245,11 +245,12 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
             unmap_vtd_domain_page(root_entries);
             return 0;
         }
-        set_root_value(*root, maddr);
-        set_root_present(*root);
+        root->ctp = paddr_to_pfn(maddr);
+        smp_wmb();
+        root->p = true;
         iommu_sync_cache(root, sizeof(struct root_entry));
     }
-    maddr = (u64) get_context_addr(*root);
+    maddr = pfn_to_paddr(root->ctp);
     unmap_vtd_domain_page(root_entries);
     return maddr;
 }
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 216791b3d634..b14628eec260 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -184,21 +184,20 @@
 #define dma_frcd_source_id(c) (c & 0xffff)
 #define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
 
-/*
- * 0: Present
- * 1-11: Reserved
- * 12-63: Context Ptr (12 - (haw-1))
- * 64-127: Reserved
- */
 struct root_entry {
-    u64    val;
-    u64    rsvd1;
+    union {
+        struct { uint64_t lo, hi; };
+        struct {
+            /* 0 - 63 */
+            bool p:1;
+            unsigned int reserved0:11;
+            uint64_t ctp:52;
+
+            /* 64 - 127 */
+            uint64_t reserved1;
+        };
+    };
 };
-#define root_present(root)    ((root).val & 1)
-#define set_root_present(root) do {(root).val |= 1;} while(0)
-#define get_context_addr(root) ((root).val & PAGE_MASK_4K)
-#define set_root_value(root, value) \
-    do {(root).val |= ((value) & PAGE_MASK_4K);} while(0)
 
 struct context_entry {
     u64 lo;
diff --git a/xen/drivers/passthrough/vtd/utils.c b/xen/drivers/passthrough/vtd/utils.c
index 4febcf506d8a..5f25a86a535c 100644
--- a/xen/drivers/passthrough/vtd/utils.c
+++ b/xen/drivers/passthrough/vtd/utils.c
@@ -112,15 +112,15 @@ void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64 gmfn)
         return;
     }
 
-    printk("    root_entry[%02x] = %"PRIx64"\n", bus, root_entry[bus].val);
-    if ( !root_present(root_entry[bus]) )
+    printk("    root_entry[%02x] = %"PRIx64"\n", bus, root_entry[bus].lo);
+    if ( !root_entry[bus].p )
     {
         unmap_vtd_domain_page(root_entry);
         printk("    root_entry[%02x] not present\n", bus);
         return;
     }
 
-    val = root_entry[bus].val;
+    val = pfn_to_paddr(root_entry[bus].ctp);
     unmap_vtd_domain_page(root_entry);
     ctxt_entry = map_vtd_domain_page(val);
     if ( ctxt_entry == NULL )
diff --git a/xen/drivers/passthrough/vtd/x86/ats.c b/xen/drivers/passthrough/vtd/x86/ats.c
index 04d702b1d6b1..fec969ef75bb 100644
--- a/xen/drivers/passthrough/vtd/x86/ats.c
+++ b/xen/drivers/passthrough/vtd/x86/ats.c
@@ -74,8 +74,8 @@ int ats_device(const struct pci_dev *pdev, const struct acpi_drhd_unit *drhd)
 static bool device_in_domain(const struct vtd_iommu *iommu,
                              const struct pci_dev *pdev, uint16_t did)
 {
-    struct root_entry *root_entry;
-    struct context_entry *ctxt_entry = NULL;
+    struct root_entry *root_entry, *root_entries;
+    struct context_entry *context_entry, *context_entries = NULL;
     unsigned int tt;
     bool found = false;
 
@@ -85,25 +85,28 @@ static bool device_in_domain(const struct vtd_iommu *iommu,
         return false;
     }
 
-    root_entry = map_vtd_domain_page(iommu->root_maddr);
-    if ( !root_present(root_entry[pdev->bus]) )
+    root_entries = (struct root_entry *)map_vtd_domain_page(iommu->root_maddr);
+    root_entry = &root_entries[pdev->bus];
+    if ( !root_entry->p )
         goto out;
 
-    ctxt_entry = map_vtd_domain_page(root_entry[pdev->bus].val);
-    if ( context_domain_id(ctxt_entry[pdev->devfn]) != did )
+    context_entries = map_vtd_domain_page(root_entry->ctp);
+    context_entry = &context_entries[pdev->devfn];
+    if ( context_domain_id(*context_entry) != did )
         goto out;
 
-    tt = context_translation_type(ctxt_entry[pdev->devfn]);
+    tt = context_translation_type(*context_entry);
     if ( tt != CONTEXT_TT_DEV_IOTLB )
         goto out;
 
     found = true;
-out:
-    if ( root_entry )
-        unmap_vtd_domain_page(root_entry);
 
-    if ( ctxt_entry )
-        unmap_vtd_domain_page(ctxt_entry);
+ out:
+    if ( root_entries )
+        unmap_vtd_domain_page(root_entries);
+
+    if ( context_entries )
+        unmap_vtd_domain_page(context_entries);
 
     return found;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32191.63220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PF-0001tQ-Mu; Fri, 20 Nov 2020 13:24:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32191.63220; Fri, 20 Nov 2020 13:24:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PF-0001sk-6v; Fri, 20 Nov 2020 13:24:53 +0000
Received: by outflank-mailman (input) for mailman id 32191;
 Fri, 20 Nov 2020 13:24:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6PE-0001pm-3O
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PC-0007DU-Ki; Fri, 20 Nov 2020 13:24:50 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PC-00028m-D4; Fri, 20 Nov 2020 13:24:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=m/FA9f+1NMog7KtW7VWghtLgO7OJlubNnvmswOBT8GE=; b=KNufUPXE/aiJSp3zDmY1xVcVhO
	5Tx3Oo7U0YZNC3A6EzoDlIfdJAD2iepFY1bwqZJJ+EEaYyGciqS0aUwB+6Mbd3wONHciGrAp5IMRt
	qoGjnjnQXlUEIhNB4su0r/nDuYYGcPhS4/9nDqvj2xkuneJTyFwsH9uAF7mvPXf8I5Jg=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v10 6/7] vtd: use a bit field for context_entry
Date: Fri, 20 Nov 2020 13:24:39 +0000
Message-Id: <20201120132440.1141-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This removes the need for much of the shifting and masking, and for several
magic numbers. On the whole it makes the code quite a bit more readable.
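
The barriers added in v10 ensure an entry's payload is visible before its
present bit. As a hedged sketch of that publish pattern (not Xen code:
smp_wmb() is modelled with a C11 release fence, and the entry layout is a
simplified stand-in, since the real consumer is the IOMMU hardware):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for a present-bit-guarded descriptor. */
struct entry {
    uint64_t payload;    /* address/width fields in the real code */
    atomic_bool present; /* consumers trust payload only once this is set */
};

static void entry_publish(struct entry *e, uint64_t payload)
{
    e->payload = payload;
    /* Equivalent in spirit to smp_wmb(): the payload store must be
     * observable before the present-bit store. */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&e->present, true, memory_order_relaxed);
}

static bool entry_read(const struct entry *e, uint64_t *payload)
{
    if ( !atomic_load_explicit(&e->present, memory_order_relaxed) )
        return false;
    /* Pairs with the release fence in entry_publish(). */
    atomic_thread_fence(memory_order_acquire);
    *payload = e->payload;
    return true;
}
```

In the patch itself there is no software acquire side: the ordering plus
iommu_sync_cache() is what keeps the hardware table walker from seeing a
present entry with stale fields.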

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>

v10:
 - Remove macros in favour of direct field access
 - Adjust field types
 - Add missing barriers

v4:
 - New in v4
---
 xen/drivers/passthrough/vtd/iommu.c   | 36 +++++++++++----------
 xen/drivers/passthrough/vtd/iommu.h   | 45 +++++++++++++--------------
 xen/drivers/passthrough/vtd/utils.c   | 10 +++---
 xen/drivers/passthrough/vtd/x86/ats.c |  6 ++--
 4 files changed, 47 insertions(+), 50 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 1a038541f0a3..fdb472ad6515 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -86,8 +86,6 @@ static int domain_iommu_domid(struct domain *d,
     return -1;
 }
 
-#define DID_FIELD_WIDTH 16
-#define DID_HIGH_OFFSET 8
 static int context_set_domain_id(struct context_entry *context,
                                  struct domain *d,
                                  struct vtd_iommu *iommu)
@@ -121,21 +119,22 @@ static int context_set_domain_id(struct context_entry *context,
     }
 
     set_bit(i, iommu->domid_bitmap);
-    context->hi |= (i & ((1 << DID_FIELD_WIDTH) - 1)) << DID_HIGH_OFFSET;
+    context->did = i;
+
     return 0;
 }
 
 static int context_get_domain_id(struct context_entry *context,
                                  struct vtd_iommu *iommu)
 {
-    unsigned long dom_index, nr_dom;
     int domid = -1;
 
     if (iommu && context)
     {
-        nr_dom = cap_ndoms(iommu->cap);
+        unsigned long dom_index, nr_dom;
 
-        dom_index = context_domain_id(*context);
+        nr_dom = cap_ndoms(iommu->cap);
+        dom_index = context->did;
 
         if ( dom_index < nr_dom && iommu->domid_map )
             domid = iommu->domid_map[dom_index];
@@ -1338,7 +1337,7 @@ int domain_context_mapping_one(
     context_entries = (struct context_entry *)map_vtd_domain_page(maddr);
     context = &context_entries[devfn];
 
-    if ( context_present(*context) )
+    if ( context->p )
     {
         int res = 0;
 
@@ -1382,7 +1381,7 @@ int domain_context_mapping_one(
 
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
     {
-        context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
+        context->tt = CONTEXT_TT_PASS_THRU;
     }
     else
     {
@@ -1397,11 +1396,11 @@ int domain_context_mapping_one(
             return -ENOMEM;
         }
 
-        context_set_address_root(*context, pgd_maddr);
+        context->slptptr = paddr_to_pfn(pgd_maddr);
         if ( ats_enabled && ecap_dev_iotlb(iommu->ecap) )
-            context_set_translation_type(*context, CONTEXT_TT_DEV_IOTLB);
+            context->tt = CONTEXT_TT_DEV_IOTLB;
         else
-            context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
+            context->tt = CONTEXT_TT_MULTI_LEVEL;
 
         spin_unlock(&hd->arch.mapping_lock);
     }
@@ -1413,9 +1412,10 @@ int domain_context_mapping_one(
         return -EFAULT;
     }
 
-    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
-    context_set_fault_enable(*context);
-    context_set_present(*context);
+    context->aw = level_to_agaw(iommu->nr_pt_levels);
+    context->fpd = false;
+    smp_wmb();
+    context->p = true;
     iommu_sync_cache(context, sizeof(struct context_entry));
     spin_unlock(&iommu->lock);
 
@@ -1567,17 +1567,19 @@ int domain_context_unmap_one(
     context_entries = (struct context_entry *)map_vtd_domain_page(maddr);
     context = &context_entries[devfn];
 
-    if ( !context_present(*context) )
+    if ( !context->p )
     {
         spin_unlock(&iommu->lock);
         unmap_vtd_domain_page(context_entries);
         return 0;
     }
 
-    context_clear_present(*context);
-    context_clear_entry(*context);
+    context->p = false;
+    smp_wmb();
     iommu_sync_cache(context, sizeof(struct context_entry));
 
+    context->val = 0; /* No need to sync; present bit is already cleared */
+
     iommu_domid= domain_iommu_domid(domain, iommu);
     if ( iommu_domid == -1 )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index b14628eec260..33b1abf98526 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -198,37 +198,34 @@ struct root_entry {
         };
     };
 };
+#define ROOT_ENTRY_NR (PAGE_SIZE_4K / sizeof(struct root_entry))
 
 struct context_entry {
-    u64 lo;
-    u64 hi;
-};
-#define ROOT_ENTRY_NR (PAGE_SIZE_4K/sizeof(struct root_entry))
-#define context_present(c) ((c).lo & 1)
-#define context_fault_disable(c) (((c).lo >> 1) & 1)
-#define context_translation_type(c) (((c).lo >> 2) & 3)
-#define context_address_root(c) ((c).lo & PAGE_MASK_4K)
-#define context_address_width(c) ((c).hi &  7)
-#define context_domain_id(c) (((c).hi >> 8) & ((1 << 16) - 1))
+    union {
+        __uint128_t val;
+        struct { uint64_t lo, hi; };
+        struct {
+            /* 0 - 63 */
+            bool p:1;
+            bool fpd:1;
+            uint64_t tt:2;
 
-#define context_set_present(c) do {(c).lo |= 1;} while(0)
-#define context_clear_present(c) do {(c).lo &= ~1;} while(0)
-#define context_set_fault_enable(c) \
-    do {(c).lo &= (((u64)-1) << 2) | 1;} while(0)
-
-#define context_set_translation_type(c, val) do { \
-        (c).lo &= (((u64)-1) << 4) | 3; \
-        (c).lo |= (val & 3) << 2; \
-    } while(0)
 #define CONTEXT_TT_MULTI_LEVEL 0
 #define CONTEXT_TT_DEV_IOTLB   1
 #define CONTEXT_TT_PASS_THRU   2
 
-#define context_set_address_root(c, val) \
-    do {(c).lo &= 0xfff; (c).lo |= (val) & PAGE_MASK_4K ;} while(0)
-#define context_set_address_width(c, val) \
-    do {(c).hi &= 0xfffffff8; (c).hi |= (val) & 7;} while(0)
-#define context_clear_entry(c) do {(c).lo = 0; (c).hi = 0;} while(0)
+            unsigned int reserved0:8;
+            uint64_t slptptr:52;
+
+            /* 64 - 127 */
+            unsigned int aw:3;
+            unsigned int ignored:4;
+            unsigned int reserved1:1;
+            unsigned int did:16;
+            uint64_t reserved2:40;
+        };
+    };
+};
 
 /* page table handling */
 #define LEVEL_STRIDE       (9)
diff --git a/xen/drivers/passthrough/vtd/utils.c b/xen/drivers/passthrough/vtd/utils.c
index 5f25a86a535c..4bca160bc663 100644
--- a/xen/drivers/passthrough/vtd/utils.c
+++ b/xen/drivers/passthrough/vtd/utils.c
@@ -129,17 +129,17 @@ void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64 gmfn)
         return;
     }
 
-    val = ctxt_entry[devfn].lo;
-    printk("    context[%02x] = %"PRIx64"_%"PRIx64"\n",
-           devfn, ctxt_entry[devfn].hi, val);
-    if ( !context_present(ctxt_entry[devfn]) )
+    printk("    context[%02x] = %"PRIx64"_%"PRIx64"\n", devfn,
+           ctxt_entry[devfn].hi, ctxt_entry[devfn].lo);
+    if ( !ctxt_entry[devfn].p )
     {
         unmap_vtd_domain_page(ctxt_entry);
         printk("    ctxt_entry[%02x] not present\n", devfn);
         return;
     }
 
-    level = agaw_to_level(context_address_width(ctxt_entry[devfn]));
+    level = agaw_to_level(ctxt_entry[devfn].aw);
+    val = pfn_to_paddr(ctxt_entry[devfn].slptptr);
     unmap_vtd_domain_page(ctxt_entry);
     if ( level != VTD_PAGE_TABLE_LEVEL_3 &&
          level != VTD_PAGE_TABLE_LEVEL_4)
diff --git a/xen/drivers/passthrough/vtd/x86/ats.c b/xen/drivers/passthrough/vtd/x86/ats.c
index fec969ef75bb..cb057ced3cf7 100644
--- a/xen/drivers/passthrough/vtd/x86/ats.c
+++ b/xen/drivers/passthrough/vtd/x86/ats.c
@@ -76,7 +76,6 @@ static bool device_in_domain(const struct vtd_iommu *iommu,
 {
     struct root_entry *root_entry, *root_entries;
     struct context_entry *context_entry, *context_entries = NULL;
-    unsigned int tt;
     bool found = false;
 
     if ( unlikely(!iommu->root_maddr) )
@@ -92,11 +91,10 @@ static bool device_in_domain(const struct vtd_iommu *iommu,
 
     context_entries = map_vtd_domain_page(root_entry->ctp);
     context_entry = &context_entries[pdev->devfn];
-    if ( context_domain_id(*context_entry) != did )
+    if ( context_entry->did != did )
         goto out;
 
-    tt = context_translation_type(*context_entry);
-    if ( tt != CONTEXT_TT_DEV_IOTLB )
+    if ( context_entry->tt != CONTEXT_TT_DEV_IOTLB )
         goto out;
 
     found = true;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 13:24:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 13:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32192.63227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PG-0001vF-AB; Fri, 20 Nov 2020 13:24:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32192.63227; Fri, 20 Nov 2020 13:24:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg6PF-0001ub-TV; Fri, 20 Nov 2020 13:24:53 +0000
Received: by outflank-mailman (input) for mailman id 32192;
 Fri, 20 Nov 2020 13:24:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1kg6PE-0001rA-KZ
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 13:24:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PD-0007Dc-Eg; Fri, 20 Nov 2020 13:24:51 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kg6PD-00028m-6r; Fri, 20 Nov 2020 13:24:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=6D3BpJmayyK/hItO29QkFAbr4QG8hgE67AhfliKmg4U=; b=udPLlgBud16lNy8EFNxWh2IO1t
	EvAJ7aMfqbGHrRFFqPYSwq8bm6twadfgwN1CO6ZBe+qIAvPdjPOyjd7admoLyZ325ky1Q8SGQDm3e
	xXXwjk0PoBRm23G7EvSYFYg3uAqoaq5BNNx1c3AywGUhyYaPDad5V0UFn8xt4glAfAQ4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v10 7/7] vtd: use a bit field for dma_pte
Date: Fri, 20 Nov 2020 13:24:40 +0000
Message-Id: <20201120132440.1141-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

As with the prior patch for context_entry, this removes the need for much
shifting and masking, as well as several magic numbers.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>

v10:
 - Remove macros in favour of direct field access
 - Adjust field types
 - Use write_atomic() to update the live PTE

v4:
 - New in v4
---
 xen/drivers/passthrough/vtd/iommu.c | 61 +++++++++++++++--------------
 xen/drivers/passthrough/vtd/iommu.h | 43 ++++++--------------
 xen/drivers/passthrough/vtd/utils.c |  6 +--
 3 files changed, 47 insertions(+), 63 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index fdb472ad6515..2389b9fc708d 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -281,7 +281,7 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
         offset = address_level_offset(addr, level);
         pte = &parent[offset];
 
-        pte_maddr = dma_pte_addr(*pte);
+        pte_maddr = pfn_to_paddr(pte->addr);
         if ( !pte_maddr )
         {
             struct page_info *pg;
@@ -294,14 +294,14 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
                 break;
 
             pte_maddr = page_to_maddr(pg);
-            dma_set_pte_addr(*pte, pte_maddr);
+            pte->addr = paddr_to_pfn(pte_maddr);
+            smp_wmb();
 
             /*
              * high level table always sets r/w, last level
              * page table control read/write
              */
-            dma_set_pte_readable(*pte);
-            dma_set_pte_writable(*pte);
+            pte->r = pte->w = true;
             iommu_sync_cache(pte, sizeof(struct dma_pte));
         }
 
@@ -351,7 +351,7 @@ static uint64_t domain_pgd_maddr(struct domain *d, unsigned int nr_pt_levels)
     {
         const struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
 
-        pgd_maddr = dma_pte_addr(*p);
+        pgd_maddr = pfn_to_paddr(p->addr);
         unmap_vtd_domain_page(p);
         if ( !pgd_maddr )
             return 0;
@@ -709,20 +709,23 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
     pte = page + address_level_offset(addr, 1);
 
-    if ( !dma_pte_present(*pte) )
+    if ( !pte->r && !pte->w )
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return;
     }
 
-    dma_clear_pte(*pte);
-    *flush_flags |= IOMMU_FLUSHF_modified;
+    pte->r = pte->w = false;
+    smp_wmb();
+    pte->val = 0;
 
     spin_unlock(&hd->arch.mapping_lock);
     iommu_sync_cache(pte, sizeof(struct dma_pte));
 
     unmap_vtd_domain_page(page);
+
+    *flush_flags |= IOMMU_FLUSHF_modified;
 }
 
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
@@ -1751,7 +1754,7 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
                                              unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page, *pte, old, new = {};
+    struct dma_pte *page, *pte, old, new;
     u64 pg_maddr;
     int rc = 0;
 
@@ -1775,15 +1778,12 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
     pte = &page[dfn_x(dfn) & LEVEL_MASK];
     old = *pte;
-
-    dma_set_pte_addr(new, mfn_to_maddr(mfn));
-    dma_set_pte_prot(new,
-                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
-                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
-
-    /* Set the SNP on leaf page table if Snoop Control available */
-    if ( iommu_snoop )
-        dma_set_pte_snp(new);
+    new = (struct dma_pte){
+        .r = flags & IOMMUF_readable,
+        .w = flags & IOMMUF_writable,
+        .snp = iommu_snoop,
+        .addr = mfn_x(mfn),
+    };
 
     if ( old.val == new.val )
     {
@@ -1792,14 +1792,14 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
         return 0;
     }
 
-    *pte = new;
+    write_atomic(&pte->val, new.val);
+    spin_unlock(&hd->arch.mapping_lock);
 
     iommu_sync_cache(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     *flush_flags |= IOMMU_FLUSHF_added;
-    if ( dma_pte_present(old) )
+    if ( old.r || old.w )
         *flush_flags |= IOMMU_FLUSHF_modified;
 
     return rc;
@@ -1851,12 +1851,12 @@ static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     unmap_vtd_domain_page(page);
     spin_unlock(&hd->arch.mapping_lock);
 
-    if ( !dma_pte_present(val) )
+    if ( !val.r && !val.w )
         return -ENOENT;
 
-    *mfn = maddr_to_mfn(dma_pte_addr(val));
-    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
-    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
+    *mfn = _mfn(val.addr);
+    *flags = val.r ? IOMMUF_readable : 0;
+    *flags |= val.w ? IOMMUF_writable : 0;
 
     return 0;
 }
@@ -2611,18 +2611,18 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
             process_pending_softirqs();
 
         pte = &pt_vaddr[i];
-        if ( !dma_pte_present(*pte) )
+        if ( !pte->r && !pte->w )
             continue;
 
         address = gpa + offset_level_address(i, level);
         if ( next_level >= 1 ) 
-            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
+            vtd_dump_page_table_level(pfn_to_paddr(pte->addr), next_level,
                                       address, indent + 1);
         else
             printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
                    (unsigned long)(address >> PAGE_SHIFT_4K),
-                   (unsigned long)(dma_pte_addr(*pte) >> PAGE_SHIFT_4K));
+                   (unsigned long)(pte->addr));
     }
 
     unmap_vtd_domain_page(pt_vaddr);
@@ -2690,8 +2690,9 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
         {
             struct dma_pte *pte = &parent[offset];
 
-            dma_set_pte_addr(*pte, maddr);
-            dma_set_pte_readable(*pte);
+            pte->addr = paddr_to_pfn(maddr);
+            smp_wmb();
+            pte->r = 1;
         }
         iommu_sync_cache(parent, PAGE_SIZE);
 
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 33b1abf98526..1b6123e0c947 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -244,38 +244,21 @@ struct context_entry {
 #define level_size(l) (1 << level_to_offset_bits(l))
 #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
 
-/*
- * 0: readable
- * 1: writable
- * 2-6: reserved
- * 7: super page
- * 8-11: available
- * 12-63: Host physcial address
- */
 struct dma_pte {
-    u64 val;
+    union {
+        uint64_t val;
+        struct {
+            bool r:1;
+            bool w:1;
+            unsigned int reserved0:1;
+            unsigned int ignored0:4;
+            bool ps:1;
+            unsigned int ignored1:3;
+            bool snp:1;
+            uint64_t addr:52;
+        };
+    };
 };
-#define DMA_PTE_READ (1)
-#define DMA_PTE_WRITE (2)
-#define DMA_PTE_PROT (DMA_PTE_READ | DMA_PTE_WRITE)
-#define DMA_PTE_SP   (1 << 7)
-#define DMA_PTE_SNP  (1 << 11)
-#define dma_clear_pte(p)    do {(p).val = 0;} while(0)
-#define dma_set_pte_readable(p) do {(p).val |= DMA_PTE_READ;} while(0)
-#define dma_set_pte_writable(p) do {(p).val |= DMA_PTE_WRITE;} while(0)
-#define dma_set_pte_superpage(p) do {(p).val |= DMA_PTE_SP;} while(0)
-#define dma_set_pte_snp(p)  do {(p).val |= DMA_PTE_SNP;} while(0)
-#define dma_set_pte_prot(p, prot) do { \
-        (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
-    } while (0)
-#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
-#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
-#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
-#define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
-#define dma_set_pte_addr(p, addr) do {\
-            (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
-#define dma_pte_present(p) (((p).val & DMA_PTE_PROT) != 0)
-#define dma_pte_superpage(p) (((p).val & DMA_PTE_SP) != 0)
 
 /* interrupt remap entry */
 struct iremap_entry {
diff --git a/xen/drivers/passthrough/vtd/utils.c b/xen/drivers/passthrough/vtd/utils.c
index 4bca160bc663..a78b02e6edc4 100644
--- a/xen/drivers/passthrough/vtd/utils.c
+++ b/xen/drivers/passthrough/vtd/utils.c
@@ -161,14 +161,14 @@ void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64 gmfn)
         unmap_vtd_domain_page(l);
         printk("    l%u[%03x] = %"PRIx64"\n", level, l_index, pte.val);
 
-        if ( !dma_pte_present(pte) )
+        if ( !pte.r && !pte.w )
         {
             printk("    l%u[%03x] not present\n", level, l_index);
             break;
         }
-        if ( dma_pte_superpage(pte) )
+        if ( pte.ps )
             break;
-        val = dma_pte_addr(pte);
+        val = pfn_to_paddr(pte.addr);
     } while ( --level );
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:19:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32241.63246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7FX-0008On-81; Fri, 20 Nov 2020 14:18:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32241.63246; Fri, 20 Nov 2020 14:18:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7FX-0008Og-4x; Fri, 20 Nov 2020 14:18:55 +0000
Received: by outflank-mailman (input) for mailman id 32241;
 Fri, 20 Nov 2020 14:18:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aCua=E2=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1kg7FV-0008Ob-Bg
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:18:53 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8ad6ee5-32b9-464a-b051-588dcfafd3c0;
 Fri, 20 Nov 2020 14:18:50 +0000 (UTC)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 20 Nov 2020 06:18:48 -0800
Received: from lkp-server01.sh.intel.com (HELO ee848a5b85f2) ([10.239.97.150])
 by fmsmga002.fm.intel.com with ESMTP; 20 Nov 2020 06:18:45 -0800
Received: from kbuild by ee848a5b85f2 with local (Exim 4.92)
 (envelope-from <lkp@intel.com>)
 id 1kg7FM-00009S-Iq; Fri, 20 Nov 2020 14:18:44 +0000
X-Inumbo-ID: b8ad6ee5-32b9-464a-b051-588dcfafd3c0
IronPort-SDR: PeIUWbubg5BHPA1jVcFc0m21Y+8OMcs4hSU+AoeZXIThUhsr1uTxGhuBl/4mqH+wGfjLmmdkf+
 r7UlnAE9nJ1w==
X-IronPort-AV: E=McAfee;i="6000,8403,9810"; a="158517921"
X-IronPort-AV: E=Sophos;i="5.78,356,1599548400"; 
   d="gz'50?scan'50,208,50";a="158517921"
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
IronPort-SDR: I+OqEMvW2wMOZwQKhrqNdw5ma1e4R4ERY/NFZIlabTeRUHmPbeufivCzWsGkNrE5T7HbCRa+TU
 FuCcbtVKZ5IA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,356,1599548400"; 
   d="gz'50?scan'50,208,50";a="363784085"
Date: Fri, 20 Nov 2020 22:18:13 +0800
From: kernel test robot <lkp@intel.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	x86@kernel.org, virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: kbuild-all@lists.01.org, clang-built-linux@googlegroups.com,
	peterz@infradead.org, luto@kernel.org,
	Juergen Gross <jgross@suse.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v2 12/12] x86/paravirt: have only one paravirt patch
 function
Message-ID: <202011202257.AlkwbqQV-lkp@intel.com>
References: <20201120114630.13552-13-jgross@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="OXfL5xGRrasGEqWY"
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-13-jgross@suse.com>
User-Agent: Mutt/1.10.1 (2018-07-13)


--OXfL5xGRrasGEqWY
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Juergen,

I love your patch! Perhaps something to improve:

[auto build test WARNING on v5.10-rc4]
[also build test WARNING on next-20201120]
[cannot apply to tip/x86/core xen-tip/linux-next tip/x86/asm]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934
base:    09162bc32c880a791c6c0668ce0745cf7958f576
config: x86_64-randconfig-r014-20201120 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 3ded927cf80ac519f9f9c4664fef08787f7c537d)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/340df5e02c66ec37486a1f31e6497a22dab65059
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934
        git checkout 340df5e02c66ec37486a1f31e6497a22dab65059
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> arch/x86/kernel/paravirt.c:124:10: warning: no previous prototype for function 'paravirt_patch_insns' [-Wmissing-prototypes]
   unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
            ^
   arch/x86/kernel/paravirt.c:124:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
   ^
   static 
   1 warning generated.

vim +/paravirt_patch_insns +124 arch/x86/kernel/paravirt.c

63f70270ccd981c arch/i386/kernel/paravirt.c Jeremy Fitzhardinge 2007-05-02  123  
1fc654cf6e04b40 arch/x86/kernel/paravirt.c  Ingo Molnar         2019-04-25 @124  unsigned paravirt_patch_insns(void *insn_buff, unsigned len,
63f70270ccd981c arch/i386/kernel/paravirt.c Jeremy Fitzhardinge 2007-05-02  125  			      const char *start, const char *end)
63f70270ccd981c arch/i386/kernel/paravirt.c Jeremy Fitzhardinge 2007-05-02  126  {
63f70270ccd981c arch/i386/kernel/paravirt.c Jeremy Fitzhardinge 2007-05-02  127  	unsigned insn_len = end - start;
63f70270ccd981c arch/i386/kernel/paravirt.c Jeremy Fitzhardinge 2007-05-02  128  
2777cae2b19d4a0 arch/x86/kernel/paravirt.c  Ingo Molnar         2019-04-25  129  	/* Alternative instruction is too large for the patch site and we cannot continue: */
2777cae2b19d4a0 arch/x86/kernel/paravirt.c  Ingo Molnar         2019-04-25  130  	BUG_ON(insn_len > len || start == NULL);
2777cae2b19d4a0 arch/x86/kernel/paravirt.c  Ingo Molnar         2019-04-25  131  
1fc654cf6e04b40 arch/x86/kernel/paravirt.c  Ingo Molnar         2019-04-25  132  	memcpy(insn_buff, start, insn_len);
139ec7c416248b9 arch/i386/kernel/paravirt.c Rusty Russell       2006-12-07  133  
139ec7c416248b9 arch/i386/kernel/paravirt.c Rusty Russell       2006-12-07  134  	return insn_len;
139ec7c416248b9 arch/i386/kernel/paravirt.c Rusty Russell       2006-12-07  135  }
139ec7c416248b9 arch/i386/kernel/paravirt.c Rusty Russell       2006-12-07  136  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--OXfL5xGRrasGEqWY
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

gpbUXS+kxg5qHWMXt2U6bZ3Dr4byyUGWuhergRtp0W5ovdriyPdnc+ZqIlLTxM11HfuyObRO
XeBZRwDxFtj39gW73HjljEdI8eduw5Ofp0jD1LG2GwB3e9UUuoYeIIAXS6EANKd+KXP8aVsI
DmApkzsxqrIcUMw0ZfhxCYf1Mr7+8ffrF4gy/3Rl118Oz3f3bXZlwM3A1m7bsa23bB1Ec9MY
CoqPjBRMFn/fA3Eiq5MFyf+ASruuwLJU+BbCt0m27l9h5fnwIyGtYvli1p6vfS9usE4/XcHg
uJr6GEeHDo71oGTW//7FxBvjjpOl828tGTVL0onaxJYHK083ABCUQmPbv6AyrLKXG4nDbWoQ
WLBmu2rO/dccnUWyD1P7S47hqrKcyL+r+nTopKmdONrSQbuXo9uw4d5Fc0RkENR5k7DPXezH
sH18U/sYRm4U6MoE0araBK1XU/vrEvlQ1ziwTFPij+Um/emovdeiGmcEIlESIfCUSJ7jsZou
vTayWN17EDOnBf6re5CS5HX3tBsJnftrHi77rN7Tr/ubt9fr3x/29geDZrak6NUL+uasLiqN
DscL9ssijPhaJpVJ5lurthmkMMjL47eIApNpiKkJ2dlW+8fnw7dZNWSBxrecyaKZjthX3FSk
bkiKMjTZmgD7kkxgIIpVPqmeAPeAiacp0tplPkbVPyOOODLAZ9IL/zLR3jWv8JYVPsDf+/FU
w620f5we9YXpDxzJ/khQHQjL1E142N7ONjCcIUOXieb1RKJm+jq9vSG3t+OuPrGv+rH4IRu9
cEG4JilainRtb+IS3UWrJirGx+oOq3JGx89dXBkxDzN4K+XJRrdke7zuBzNyeXl+8mtUAPbP
Bd0hJf1u5wiwTcJZUm7ILnB1SbbKveCbcvQuQMaqgza7MSgwRBq1rSdO5fj9ajn4Iy5075sK
FTbicxF1efqrJzVJCH0lOA8K4a7mTdoxX70veJm6h7hSVScNA3Pb1j+sqJxZPvK5xVuJFBcm
FLv8jz8ESA2VkvapCXsK+Gg4XVKZd6/BurDqGCQT9sFPGKy4Fxz2bYIvumvbK0ou95/TO0Ys
9F0H9+HunUFfDdupjyuCsT/K4W0BPvwG1LqsiP8Az+Zj8F7UihNWABcph4hrsCERCRDltNUf
THX/ayf1/vXv58OfeC3kX6f05iNb0dRlKwAUD+XjX+DNgmIU25YzkoZmupwo/ytkZf10kgrz
xsrB9Je5sK/iaRKtsTp8zM+Ec1X48z5paRL4YBcv/gCSYCF2KuYHJlH7v+5k/zb5MhPRYNhs
q9SmBkMGSWSajutmYuJH0hxxIVF0q2abqt61HEY3dR2WxAJCAmvPV2wiOew+XOv05ThSC94c
ow3DpgfAYzEk/XLJ0gCGTxOZmEj0WGq/XL8RBTJq0pnomv+fsytpctxW0n+lwif74HgitVEH
HyASktDiVgQkUXVhdFdpniui3dVRVZ7x+/eDBLgAYEKcmEPbpczEQqyJROYHO/tTUvoHsJKo
yGVCAriyX+QCVuDDFkqXf+770YZtaZ1MfNqaxo9uV+34f/zy/Pe31+df7NyzZOkckPpRd17Z
w/S8asc6nNZxnwclpOELwFm8STyHPPj61b2uXd3t2xXSuXYdMlau/FxnzJoszsToqyWtWVVY
2yt2nkiVXGmD4lrSUWo90u5UtVOKtevbHUHV+n4+p/tVk16mylNickvBQ6l0N5fp/YyyUo4d
39QGCDKws8KudVdGKo3KliW3vcxVDkxhbcVFudvyDlMuL0nsqSf4YsaeBbdK8F6Q3YQ3GhF4
HEcaekrYVizZY/qetprD0sBt1BZNQjM7pyRvolkY4J4VCY1zim9jaRrjUXNEkBTvuzpc4lmR
EkcBKA+Fr/hVWlxK4gE8o5TCNy1x7BVoD+XPgn9yjAEPJDlc6ciTHcBN/GV0huw+AoeOM5pZ
UdL8zC9MeKAwz4heYdZTAd5694Gs9Gx+8IW5J2z3wP0akK6pVD+9EukccDBhHfdJPVbCX0Ae
c2z1rEztt9op+DkrzqjEQJsgQwiGwx1iBpk4JZwzbAlWOy1AhfFrY4e0bh8tdaZFIPEtI3CM
oCRr1PkcU+eU0gJ2QI30a2vID5+3j0/H/Ku+7Sh8kH5qSleF3IILeWgpnAZvtfVR9g7D1MyN
8UGyiiS+VvXMuC3eNGQnm7fyLXy75hhjvhUXVtFUX/MPBe/2MKODkcNWz/hxu718PHy+PXy7
ye8Es9ULmKwe5GalBAbDVEeBIxOcdg4KL06BLMyGEi9MUvElfndkqHMV9MrGtBWp38qIwQp3
Rd4gEFxGOzNcSYppeWh86Ln5Dm/pkss9MsV3f6Xt7nAeto136yHgQIDdwThxVoWsnsYfGhyI
CEvBsop5/IiDKIq0W+bcq7EB9Ef1c3L779dnxPdJCzNumDraX30d4Lfc6LawWGT44V2JgDvZ
OKfONUjqr4UYZavs+b69WGZoHNOdHy0OrwUCxJQ5TRu7hgnZGvcgDYggpQGZ2HfVLan1NfGk
aWhcxaNUvMRmpZIvTfOqoiQmuLSWEZmbZbO94ANT1SHB9wvlDIhuF8BR/n7cKegeZAI4v4sT
trcDi9hoTEzdVsAiNECxWVmx4uzJSQ4yV7gkzvZjc8MSx/ZQ1WjdMJzmPHE4GowCGF0ZBCOp
54Frhb9TQMLjW40J0iqE/2ATYRi9vkGtfEfvpmxia/a4nOZJLJfL2R2BEVyrKcEPagzrKK6Y
PTy//fh8f/sOGJ2DP3O7CH28/vvHBVwKQTB+k3/wv3/+fHv/NN0S74np25O3bzLf1+/Avnmz
uSOlt76vLzeIOVfsodIAczzKa1q2d+/FW6BvHfrj5efb649Py5gHK2eeKNcnVBWxEvZZffzP
6+fzn3h72xP30qqmguIAZvdzG+ZFTKrEng1ZzDzIn1JUDu6RwlHGvz9/fX95+Pb++vLvm7ET
XQGcYhhh6mdThC5FDsPC8mLWZNSq0LIKfmBbG4OSlMzR0QZH0tfndpt8KMZ215N2djjQtETn
q1TrRVbubGu0okhdU2NZD0Y+QfKEpAV61VRWuqTe+Vg9LtHNs9419/ubHJnvQyvuLsoBwLwo
hXs9MjgX/2JYoXpp7dc2/ipE8u6lPTg8u9cpY3/ittK9wqrBCs/9farZSPr+3+R6TtoQP6zx
DNFuUWx6rmzPCU0H79k2baMv9nAbBIgRdXndCvvQCgyEGxU96nk+AdjnUwqYXVuWMsHMi9mK
7q07EP27YWE8ol2CESnLTJSTLq35XgC4vyqPrwRwnHcu0oscMTSP9ZUN3p2eydJHL7woldOY
4dmBNY5y1pIwxcOIIuhyMlaXQirWHn+/fW66OMOvRg5uZl6SK2IGUNYdo89Zy7Nq1/I8BTSn
bY2kzgSuqBSYnuEG1pYxnIBtdEAfoSltxbOljus8EgG4PbbDz6eGDD+pdwewlakVInUUrTcr
rBpBGC3upMyLtv4d3byqUfc0alrKowFvw987uLfPt+e37yZ2Z1628czaInDOKKYKWHStQrx+
PI9HKEmW4bKWSrnp42YQ29k3TJNTll1hWmH26W0GLtfmPb1c50ywDsF2WRdxYBg6JXFd1wGW
Zcw385AvZsZ8l5M0LTjgTEEMGovt9e0gJ32KdSIpE76Rx3hinzcZT8PNbDbHClescGbtYDTn
BTx/IXnLJYbr0ElsD8F6baiYHV3VYzMzHROzeDVfGvt+woNVZPzmFXFPSL325H/wqQY4UTll
kx3FEPzAtaepBK+tnM8lyRkmHof2rNS/5XiQdSNVEwZKndaOSVSu/pmhVHYdp+jy5BQaGEYt
0cWFaskZqVfRemmdojVnM4/rFVLPls0S0USbQ0nt72u5lAaz2QJdfZ3KGwvwdh3M1OAdKVHi
9s/Xjwf24+Pz/e+/FApuGxX4+f71xwfk8/D99cft4UVOwdef8KepYAk4Z6J1+X/kOx6pKeNz
mMW4SQ2uMxTgUomt+x1kjgmC2JEa82Q/UEVtTe6zVuXOUmvGtYbLo60myN8DOKOOfqloDBah
q2lso/EBX9DVsCZpDJEOaJn9uLcNLAeyJTlpCLPcF8w107LUsKT3z+VgotZC4zEPTHBlM3PF
EhgK5Ylj75TAFcVDMN8sHn6VWuXtIv/9Ni5O6rwUrKGGC0hLaYqDfaDuGTnqVTGwC341a3+3
IobFVA6JAtB/lN5oOqWSGEIdMwBr3ArD10vWQ/uxcIs22i+2hXpAC1eKYXtCOfAt+xOpcG2F
PqpYvju+HoIS/NpNfg/cifnuN32sc+3jgK7s0ce3ci6dEvw8sPfc/sn6cfcQPHyX/IsXHkuv
OOEVlPTmrHpGvd/mSX2mwnOFpazb7rAbKpVmnhBYqT06ibSN5VWuj6/f/oaXM7k+0hPDAd0y
EXT2lv9jEsPyDI71wh6Y8rSayGVmHhfW7kzTOf7dcr+k+AWBuJaHAkVdMMohCSmFjb7VkhSW
FkzYiQz21J5JVATzwOel0yVKSVwxWYhlguApiwuOwvOZSQUtHEgY6mgX7k4kUKdGM9OMPBU5
2hEks0w18mcUBEHjG4cljCY3onpI29R79KRrFihXjVwwGxbk0Q3hRtJVMf4BMMwKSz0lIvVd
mqeBl4HPSOD4Gn9qFJzkRmx/p6I0+TaKUHw5I7F+vM6eJNsFftW+jTNY/zwXp3mNN0bsG1WC
7Yscn46QGT4bNa4TqLe+hNiGaX9w7MDtbHPMYG+kgQTO4yxy5cZuH6xEZ3ay2lUcTjnYrHJ4
aRK/ETRFztMi271nzTJkKo9Myh5PrtkR+YoDTbl939mSGoGP8Z6Nd23PxsfYwD5jRgqzZqyq
bPNNzKPNPxPjPZYapg3piB+pzCTKKd6aYHHdwPtGuJqD62xGhom9UWgnxJShkMVGqvZedSgo
DXHPHC4734OhY+QHkBbUOoBtaThZd/pkP1RqsDQ4BMo6nMjFPKgYLBaFy7rGWS7yNA3Q5Yy2
YJmW3MzjH7fHb9sl3TPfWO1L4m5CA2fhLR1fCr9kE50lj/FnmtrGvXPmc+zgxz1ePj9esXex
zIJkKSQvrHGRpfWi8fiuSN5ydPA2ufxyl727TNSHxZU9CI48ihaeZ54laxnIbHGb45E/yaS1
5+zpFFq043xYL0m+Xswn9mKVktMMH+vZ1b5Oht/BzNNXO0rSfKK4nIi2sGE10SRcfefRPAon
Vkj5J5haLd2Qh56Rdq5R50U7u6rIiwxfGHK77kwqdhAhkkMgBdw3uerGOIdovpnZq2l4nO7h
/Cy3PmtJ188bU/QZQiNhcbRqDCB2E9uHjo2QX7JnuWMOlfqyHGVow14p3DTt2IS2WtKcQ7y6
ZTUtJre0x7TY26B+jymZ1zWuKTymXh1O5lnTvPGxH1E/dbMiJzAHZZaa9BiDFdDnllxlk0Oi
SqxPq1azxcSYrygccazdlXiO7FEw33g8iYElCnyiVFGw2kxVQo4PwtF5UoFnaYWyOMnkhm/5
tHDYmdyzFZKSmuApJqNI5ZlV/rPfSPE4pEl6s4NunBirnKU2CCmPN+Fsjl0tWKmsOSN/bjxI
25IVbCY6mmd2zCwtWexD7gbZTRB4ziHAXEytpbyI5Wy0wOFNrlDbhX0Jnykb3GTXnex3uUlZ
XjPquWuD4UFxA1kMzrS5Z7dgp4lKXPOi5HZEZHKJmzrdO7N3nFbQw0lYS6mmTKSyU4A/kNQv
IHqAe+IThGPJG+d5tvcB+bOpDswDww3cM0BTMIGBkRrZXtiTE0umKc1l6RtwvQCOCm9krq+P
zMzbCyVSM//S2cqkqWxrn8wuSTwQ96ws/fFdfAsqN27KOVx9Lq6gwCKPPLWeObyzwyNGQoRr
lJh6QuDKEqdzJ4Eq6fD28fn7x+vL7eHEt50RXUndbi+tOzJwOsds8vL15+ftfWz3vzjLXucR
3VwSzEgH4oNZMdPbEsYTB3u/OtyDDBaHpU8tsjPNTFd9k2VYihBud65GWM7zMS6r4szS7w8F
3Hjh/Vcxni2xu3wz0+GshDGp1Pu8bWoq/gi7IrZvs8XrVQiMyRnOMOHTTLrwyD9dE1NDMFnK
3klzZajQV73Kb/7h8gqu77+OwwR+A//6j9vt4fPPTgrx2Lv47kiyGky0+CJy+sIEPzWeGDY5
axb+KwZ108QZvl+p0AvEC31QfnmC3M/9+Pn3p/cakOXlyXy9JVcva+4gUN4NXNA8CC7xBcxo
CY3tcMw841gLZURUrHaFVIVPH7f374DE+gpvtf7X12f7CcE2PdzSOfWwBL4UV8ke15+e76Wi
Z40rYLScz11fJzjS67ZwXDE7mlzB8H3GECiXywh/5s0RwvTnQUQct3gVHkUw87zAYsmsJ2XC
YDUhk7QxXtUqwqPjesn0KOt7X2Rfek7dloQajp7wt15QxGS1CPBAWFMoWgQTXaFH7cS3ZdF8
ji8NRj71er7cTAjF+AoyCJRVEOJ26F4mpxfhubXsZSC0D8xUE8W1Z60JIVFcyIXgV92D1Cmf
HADska889yBDp13SxWw+MSxrMVkW+B8CzLx3XVDLjeEqoB50LHmIkBqS2jF+A2d79YF9dRJg
mpD/96hqg5w8fZDSfYfsnpw8szku4CPZ+KqcibFPUmAhDlrowKUpbL7m27Vjni4fbxZOQUny
WFGMShSn+HBkmP1jENoBhqZ7KTywz5n6+24Wnpre8efUAvIYmlJVyTtC2zhbbtb49Y+WiK+k
xP34NR9a1HWbsgTOvK5rQtzegOXSpQ3Dw/GmdNmg69/Zabn9GFdHaUhO5IjGGHNruxronh3T
EMC0+J4dF9uKIOXtd+ERLXBfoVdPFr8x0VUHzgmeustM99Sep1R7EmMszhJ6YXliO3n3bJFN
tQBTxtp7db6QqmImrk7PycheXXFg9QKAuKLaorVSzK0PYXAQA/wsj246fOGFJV88QB+90NOB
5ocTdjPdiyTbDd6jJKOxZ8sbKnGqtsW+IjvsjmEYbHw5CwKksUC5dIIjel5detA+jP5Jj3J4
SJ0L37x7wbKusGne83eckZXxYpCejwpAwzK+aApMYnB0iT31M6VYKY9zU1IHkssDkge7aBA7
buWPKaGS7gk/eQAntJhegGXryXM4dgxuvx4WYB5X1MQCNogQKFLSqg3wGMowJEiyjtaYvm0L
xXj+yorQZLXwZt8JNGK+nirlJFVcVseswgvbnsJgFszvMMONrx5w1AcwLxbn0XK2nKhJfI1i
kZFgMcML0/x9EHj5QvDSQfVHBKygmjF/MfbSR2R8XsWmbEI2szm+EbtiS+ze2BKCjbIq8Iof
SFbyA/PXmlLU9GuJ7ElK6lHwjCVSx3Pr3UyT2RolcOa+KBITqcyqvNyuzDfdTB5LmRxgnoR8
xa/rVeAp8ZQ/eYYBPYpdGIRrb2P5vFNtIcyzx5RQq0hziWYzTxW1gKMWmQLyDBcEkWcJtwRj
uY2gRm1LKuNBsMDrIteLHaA3s9InoH54eimnNfMMzey4Nt/9sJZJmo+iJK0mhsffxLKeYbEO
pqD6u4LQKl9W6u8Lm+5WvRJO9WwionVd+xeSizygB7WvMrAZQVxlwZnAtCy7Z4P5Oprf/S4m
wgCL4bEEeaymtqeXJDuczeo7a6eW8AwOzVzfZTbMV3iVNSaggDXFWarh3tGv54x7jimWlAjC
eejNo45WqMnb+oSSr5aztbdHn6hYheFUHzwptdrTBsUha3dTb2ezR770uBBYxQA0J8O0ztbQ
oHGNLVoUlVkk+7/Ij/Q6tmhKdSVY+HPcyk3bRBRoDZ3zejY8lunaeGNeHlEsR11HPeyb8lL5
csgyEi3QKDTNV7a7rdxZzHt9g5VQeN8d553Z1vb4bZtBpHKN3AoPYHonxFRYsaC4E1Vv1JUn
nryV9H7EsRZfNkjrwWt3GfFh1CmZK1W3KXck4iyYYTqo5kLISAovd8Ltt2CjIQNvs9zrn9Zw
NojcqUknq9r9jtxJ/c9b5ZKkGeCx+itVxrtoufar9qr3q0KQ6gqxltgA0dqab6YAdzXX3Dsf
ojcI/J30bsrV6XxRu6W3ZHvr0Sy5PISrDTJs44zMce2gTZhQUsLLQ6n8a0vGX1ydw5VcHPQ4
QC5slMBq2Ql4C9JyayOjll1lbOHsPYrk6EaKxjPMTqRYu9ncyUBS3G1P0cOkjRF05c2zeEsJ
R1XYeezBLRPX+FsmZnDQrOXCLXq57O6JDl/fXxSYAvtX8QDXaxZ0vPV9SJy5I6F+NiyaLUKX
KP/rPuGsGbGIwngdeDx5lEhJKp8VvBWIwfCMfL9mp2xrmbs1tSIXl9SGyyDCkpQ5ED5tkipu
7pWtL3q41dUn7gmJB/OPHRDcUZqcL5cRQk8XZs49mWanYHbEPLR6kZ3cl4M/jBhEbCj08YnY
Ray+2fzz6/vXZ/CfGIXBC2GtY2cfzvQmakpxNWZt+5C4j6jfiPkjXPbg8qnCbwWIDEAX6cY2
v72/fv0+fg21NcYY7xzajChczlCi3NrLCuIHaKIgcq03bkw5jURgDZWOFayWyxlpzkSScg/k
pCm/A3ss9qCIKRTrgENPpU0AequWZqiuyaA1qXz1z9TZAVspTam8ak6kEsbzBSa3gtesMtqL
oAXRWtA8Qf1Qra+76IfH0TwSHPDMqosIowjTQE2h1Hov3WoO1g+3/O3H70CTmahxp9yOTBAo
Ozl8fIqf01oJe98yiEZ/u7l+4bj/RcuGGy2Gg8y2EjyO89rjbdVJBCvG154DQyvULqVfBIH4
YNxjxBadFKs8XqyaXZW4Vtyyd1x+fDlVhpJi+S6l9ZQozIOnYI47C3QtVbqR0V2wur0yOV2c
xaJKuzsvN89cdr0CfPIEXefN3jME8uKp8MUXnMC3UHgQzAG6Rx6J8zurEAArWa+DGHT1NTJz
d/+XJHAMywWWr2JY7xaV41WuLB03mTYKOh7HX3fqYpkxsP0nqaV/AxXwEaSWLWw9V3EAvUM/
T4Zr3iDUvhahar0j6CWXkjMdyjSBs92oyAsBBGDPs/O6UnBSw4EHJX87qs9Q6uHSPpVnltoT
9euyrMg8jseDoHIPRIofJJzY4YGxJQvUfXyQ0M69CNkFcR14NSsPtMLUYLjdZk6sbHYhZ6yT
4M0QaklKytFpjW7anDWGzCDojvFDicY8yBG4jw8ULvPct3xFLP+h0J+ytWP7nbCapem1u/Dv
IAZH+phxUmh7uDpx9f4kdmIwReBBix4GTruVhTHih2ceGdUz8ZICTxbTvf0soqQqn5X2waZh
QIdx+zwiPt6BDa+u4s5vkpud6q6G2d/fP19/fr/9I1sAahv/+foTrTIkGi2yHT0V8WI+87w/
0MqUMdksFyjAkiXxj90CwJAtMyZmaR2XqQVqcvdjzPQack9pwHbGnUOIQSLpvrCed+qIsrpm
P/fnAcBOG1qwdfJ+kDlL+p9vH584dKSVOQuW86VboiSu5gixdolZsl6u3G7S1IYvogg7frUi
AFaApGwyj7oAfObcjJgsbl5VaEom3AJKxmrUFCR5ubLVhnYmLVF+zCZymkkHvMmBenI6lskT
4WY5Iq7msxFts6ptmhM60ZLKagxlqVBhEe9ilXNsqxPDCvGfj8/bXw/fAHNPJ3349S85UL7/
5+H217fbC3jh/6uV+l1qzc9yWP/m5h4DoJ/rSWnwE8rZPldASLai7DB5Ss5+boeK5DaHIbIl
V6kC/S9j19ElN46k/0qdti/bO/Tm0AckycyEik4E05Qu+WpK1dP1Ru5Jpdnu/fWLAGhgAqw+
SK8yvgAY8C4MRV05GZmpByrAqqY6G02tqzLNlJsM9CLdRKuXR8BwXzVyXlBonVB61Gl8/DrL
w2hjeW1VYGlhYjVm9SdfSL7wnSrn+Ycc8I+TtQQ60Fc3g1ruI+kY3z81Vv7d6x9yZpsyV3qM
MVfbc6NzijIKjnubFpDdMwRpcmiGIeAX7tRSq4TS06DTXnplgXn2DZadqUqiFNia+0Nl6S0g
PAanTLEhtD3MRQHwI4vDSIj1jjPDEXcKruuP8p+2cYtcQXp29/TpRTpgsw/JkLCoKZgS34s9
EiqDwiWugN5imvonLvfMNI3QRcp/iQC8r1+/26vg2PMyfH36N1qCsb/5cZbdCjMirGrkMRlZ
gcmAM6aOYu3x+PHjC9iA8CEpPvzjfzTjKkuepXi0hWOYcnSirdw0KQz8L+WGcXIuawGyk64Z
rgWWJFhesQqe0Kbog5B5mb5rNFEbYVc/9q7Y97D52WLiu+1heDjTyuECf2KrH9or4j/b/OLQ
XV3vTssHScuP6jW5d3htn9kqfpLnEzb+nDZzlVV7roa3Plk1DR3Z7jQ4fO9PbAeImErflIyf
Ht/keUdYD/F632Crqwt9Wy52agfKqrerf6QH+6NiHAx8qP54/HH37eXL0+v3T5gBoovF6olw
7CF2PyxYlNZ+7ABCF5C5gNxzAcrGAaYkec2pE0Rk3B4sCGVEm9gPVI7b5NTWSESH97qpnBzS
5llI5MAe2B57dxNgIa9fTNLt7BtUy/G/fA41XFMLojAv8dbjnIx1+fnx2ze+aRTbQWQ3Kgvb
lD02tctH+Qvpd1bp4KLblWKZANcdlQpT9SAgJd9lCUuvJrVqP0iVMJV6vmZxbNBs2+C5WLe9
qZevRwLFakcuUHwN+HVC4VXHqD/1M74X3cCqN8oqQy5AwN/UzU8s4SaMp0JHrNzapr5xw67j
spqwGw9Z0WNm1h7TTSRmWuijfvcEfKEtuLi0kl2YnxSm9PNyulV7y2FHUJ///MaXcLtWJyM5
u1El3eHzWRkIHjY8AruXTPStDMVtRGgnnehvJk1NWaTOg9nhx54WQeZ75l7dqCc5uvelXX+6
dLuSf9lvLrjVphzbLjVXgZqnJ0Gs+ywNraFqTPmykEL1w+yAdZAt5xG98CyJs2SjtwuO3PHe
rXI4CzTptNhdWah+OAcAR2OzBTkxzyPtxslukSVQyHZPX+5atNYbs6tVzXxv0dkDWMTzkdOM
uxvSSvIEkZV+KIswMJ1+KIFJzFJpwh8OfDEi2slbysq37mrceREiQVSI/+v/vkwHz+bxx6tp
fO3PAfrA7LLDWmVlKVkQqZ7JVcS/NBhgrtUrwg4UrQJEXrUc7NPjf57NIkxnXr51RgPEzQxM
XpibZCiWFxtSKlC2lSdwqLYBetLEAQSOFJkXO1Kot2U64LsAl1RheCtUH6Q6mLkqgp9q3qiI
NHMImWYOIbNKVajVET9Vx7ve/MpeG56XINYm+n4hUXbq+1pT8VDpW1HAVLbjxeWauC+JZMXm
g2l3RsoCon+O4Ml8LbBUgtsRY/BKsshSp4IyxERdH2IgTIvr8/AccYALd76Oe4l2wzxJcysu
gefjz8MzCzRhgqnPqQxq42t030EPbDrb6f42J+k5Gfm49OI2TImMnHbvg/SqTukGYGrWmfCx
xDYZJlc53k689XkT3Npzg5QT7F+wehE7AayoHPEdvgSUxG+xgN1EijslM1iQRhBI4CN1N2up
NqREK2+4xi6XrbJ0POss91x+WyUPYm9ucMCmSDdemRHHNfz6edFjlJvIOccxTGLfpkNVRHGa
IkgfJMLsy6DznhH58dUBqIdoFQhitDgApSFmNaZwxK7PxZnjc3Geeegwa3ZhlG43jrADyLd7
34GcDhU8TQY5+uy4dJcx9tRVav7IMOaReupcRC/zPFeVRMWUbPy8nWlpkqZbb3mXIHWgHl/5
8QhTuZvitJRppFoKaXRtiVyRxvcCrLg6R4xlCkDiAnIHoK78KuCr/VUB8iDyMGBMr74DiNwA
+nEOJIEDSNFQOhJyaSdNPCxMsflgxYs0CTCBrvS2Jy3o2fDNbY19/z4Db99bmTclOAkdDg+o
+OCPgDWYBc4qHbguQxOzvnLoDU4M47X3sZQF/4/Q4VYYT6EWY8mSwKWyPHP4yWa3Lau65lND
Y1fvZAigWenOGI3vedXtbABuWbx4j5VKXMAEe3wvtjLFYRq79FElz2w84/J0sOTFiiP6CDAz
HOrYz1iDCcuhwGPYeWPh4HsmgiblvXUj3ZEeEz9EBh7dNaRCheFI74gusbDwU6J7C7s2W4za
SSgdr4IBgwgnr7+sHN8V0VZh+cZ18IMAKW1N24qv1gggFhZkHpUAMvlNgLnn02DUsafCwVd1
ZIYBIPBxWaIgCByfi4Joe8oTPA5fUDoPvuFaRgLfwCRegm0hNBY/xyQVUIKdf1WOHG13joR+
6rATUZiS7dlHcIQu6ZJks3MJjhjpWwLIkZ4ipc6xJEUfegE6G48FbkC5JB1SPleESCdpEpSa
4lSsmzUpWvucvtVsdZNhI67R7WwV+hu9tckwJwsrjFUop+LDo8kxO1IFjoMQ2ZoJIMLGqACQ
yuuLLA0TRDQAogCt2HYs5EUTZXg4zoWxGPnQQRoSgBRrSw7wQzSyfWr7okmvV0wccbWd43NA
7/BGNadVLodN5DhicxonY7ssTg7/RMkFxm3qgy27iabi8wVa6RVf0CM0SqPCEfjYEONAAhcc
WLbgYjlKm63pZ2bJkWaR2C7E5z82jix1HIfXHJpkc27mGxg/yMrMdeRgaRZsjXPCS5/hkxZt
SYAav6oMeKfjSBg4nOKtcyJqZLrAx6bAJuax6X1sCAg60ryCnqF0LWyoSndM4k0fo04EZgbw
zFz0J3z3w8EkSwgCjH6AnZXOYxZgB7hLFqZpeMAkBCjzt7arwJH7yIAWQOACkHoVdGSQSjoc
qnQ1HQWv0ywemQtK2gMKJUF63LuQ6oieF+yHJJTFcVc2s1zhivZv65MuIwvUx61bV5ttvPd8
H/VLAouIHk15IoEPWqfh+szDRjJS5jA2npmqhp9bqxasEScrDDjNkYdbw37zTGbjMmUmXwYq
fEPdxoH2zMbLSqqEHrozF6rqbxfKKqxUKuMeDq8iGvdmIdUkImK78JG2mcSdO8K4KS8w7Eh7
EP+9kdEqnCsn+ZQA0d0LYmwcLH4InETMQHKTt9zX50+gYPf98+MnVN1ZhGIVHytqgl5sXLPk
1t/Du0HTL73ws5kF64pbObKZAR8fnDWMvOsbAgELls/ytrOZl1W24riZGV5Fyhus8iCD5jPx
YZZN8+AGX2odY3SnmbKynfYDXoHB1bjKus4PK47PHxyXwVNdd9q7oiFo1gBYzSUMNH7/+eUJ
lDJtt/dT0mZfWq6/gAYXcL5rkynqso9jNFSFSE3GIEs9Q/sdEOGo01NfaQR11qSwxLj2gXd1
+eXcl5ZixErTtThFOU39uIUYYsQMI6rHmpWobF9E7YhHoCtCVKN+Q/LpPs10Ezoj+BlshtEr
pQUMrS9p/mpERRV+eDXbYiLa1be8fyyC8O3+rSeMFvjzDsA8F37ecMgph/f7ExnuEQOEui90
VTYgGPpV68xl+hl2sNyK43j5u4wwX6AhpRfZJ+trrWlWRGwZ3kyv22YITPhoNvN9R9oPt6Lp
8PiKwGHqEwFNejnyMGJsfkGQE/TRX46r5WlMSwYKXmmS451gYciiTYYs9/B3qAUPsGPTguaY
WJyMa/8JfExC9Il7BpEsq3Yf+Dv08h9w8A9kpuHn9ZiPRuycIZJI3SC9fZZnMi2noYjHOHNl
xKoCmW8ZjdLE9HEmAd5XKtndzJFuXxMIahOr56uFZLpABvr9Q8a7inbZQ3bX2PMsQxQ11QMr
dAdGQB0pP9OGYXwFp2qu631grPsw3+hj8ITsiAkwfaZuMLNT0YaGph28ifperPtGE++kDu25
2Zma+/OCIcP0y1ZYj0o307MIfSqbC2WoEy65ZQlGzX0PpQY41e47HOHTjXrOnVUIsG3GjJGT
K+4k54BYa1v95lL7QRqi+ddNGDuiBojPv2+uzjqfFaH1DclAP3QtcbpCFfI0WYQ+qUxgaA73
SfvHqktTMXKlobyzvuREHYQCXI+YfqkmvK4d4pz34hJt/dzqJU28smPAnl4rXoNdPWrvOSsD
WMufhM+Qlp2aCs0djljihKVyrcVb+PiydTAUWzEefRlcIVKMWZbEKFTGYZ6hiNzhYpC9xVUw
bKOr1KelLowzBej9gsHiow1D2jiMY7S0pubmilBW5yHqy1jjSYLUJ3gOMDmn+HnCYHqr+EIV
aLuxgQUvYj0WYZzlLihJEwyydYJ0jM+lDihLohyvEAGi2w+dJ9dXUQPMsUcYjYdvvAJcummf
b7ig03DDC6sO8h3Z9sf7LIvRqobNFd49AQlc3+RYjF196yw52kzmCq4gBcmj2MO/2e9PH8wI
0RjbOcu8NxpT8GTojCGg3CHCMEa40b/KYm4XVaw5o+f0lYUFTU/09xIdZP7291ncZGmSOjKY
9oLbOdSH2Pfw+ZnvKGI/CQMHNm/EUCzQHvt0LPZUnWwTS515TvswHPPdck47LqSKMMMLF5Mr
e2PTpWCLTYYFmXsRHYldaeQ+Y0YKa+M1gDU3dg9Z00FXIu33gnbjh9oK7SDF7LJW9ScAAaAL
xJftACckBz1R6OuJf7i9Oy85Id+HS+WufXCkZaR96N5IfSRDj4rU8G3N/a50ZH1teixjlYVK
NcdNnqFomg0BRfWCLyZmNN/qz9eVc+UIZjoJtin0QHBLZ1krRkwcLfXIt4LUWVrbH6GKtqdz
5/TIC/YE5UAcsQehIcehIs0HV/C9Ybbi25KPHrqhr0+HrRIeTqR1eAPmQ3nkSVFP7bzN6q7r
waxA62VmZIOFBK7TWtbQcTT6JaOD0Rmuu+56K8+4TzeQqsPC1haVPTGIQJkCQQOwrDDYX2jO
RkVuxzTU1atEhJNTzaoMGFD5gGUgtOUDsewuJpv2YeujGpn3rlqrqxndlcNZeDdiVV2J2EST
dfDHl8f5UPX61zfdgmkqKmngonr6glMwGe7pNp4VEY2cSnqgI7Tq+c3cBgJWas6cWDm8mcVs
d+zORVioINko5sFW9czfONOyEiF8zbrmP0Cnt159fp1fPj5/jeqXLz//nCOQrk8bMp9zVCtL
5krTT9AKHVq04i2qmqdLmJTn5cS7lFdC8rzb0FZEdG0PFfZGK7LfX1o+G/+m2DlhhVC6kOLB
ai2iUY8Ij9oJl0cfQZxCbt79/vLp9fn788e7xx9cyk/PT6/w9+vdL3sB3H1WE/9i916winT3
FNnNSEn6UVu9JX2sSJxq2zbZK2mU6p40pIMgoDq/AbBqj7f2TgOY81JpMgt+TqLiL1TOxLhW
UYDbdSSY96VJMkLS1EuOdq77JFO15iVZ3mMZvW532gfGGW2lI51b0Bu+dqsv90qKRjxIK1BU
rzUm3x+Z3b8LsudLb+F4Ypl5yqp1vNVPA8jSfl8ZIvBb2gT83yzF1ryhWrpL0uOXp5dPnx6/
/4W8ccpZdByJeDySz+qDsPyWvHePP1+//rqMgX/+dfcL4RRJsHP+xZxjYOEWs4l8J//58eUr
n9qevoI173/fffv+9en5xw9w0gO+dD6//KlJJ7MYz+IK1JxyxpKkkR7IYwHyLMLPpRNHBeFZ
460WEyzo4VDiDevDSD2UTQ3NwtDLkE7C4jDCLolWuA4DYpWxPoeBR2gRhDsTO5XEDyNrAueH
EE2fcqWGuTWp90HKmv5q0sXGfjfubxJbVRP+VvNJny4lWxjNBuVjP5ndK8z+XVT2df1yZsHX
G7CcQJYhTjbnCUFOVNtajQw7H7vFAMwi/MZNcuzGzMdUBhc0TswvcmKS2N+6Zx6f3jY+1dRZ
wmVNtnhgRvUdbywqB7ZaTL0NLgD5mEKG1IQ4NonzOO1jP7I6kyDrl0gLkHoedrad8EuQ2Y02
XvLcs1pYUJGaBTp6eTAPgGsozS2UPgdd+VHr6UgHTv3UKmlxDeIs0lxoGL1Y+crzlyVvrJkC
7PZSwbMY77I++uCl4tbcAOQwQgdNmCOdAYAYvfaa8TzMcmu+IvdZ5tu948iywEPqbKkfpc5e
PvMZ5z/Pn5+/vN6B90mrYU59mURe6FsTqQSmK1vtO3ae60r1D8ny9JXz8HkOnoLQz8KElsbB
kVmTpTMHGeqgHO5ef37hq+yc7epr3IDkcv7y4+mZr7dfnr+CE9fnT9+UpGa1pqGHNF4TBylq
3DMt1/bOn0GIqJ6WXqCWbkMUWbSemgKuZTMxfTcyntr1GFP8/PH69fPL/z3fjWdZIdbuRfBP
j/bmuUhifEfg6+EhDDQL8i1QHep2vqnvRPNM9YKkgWKT7EopQEfKZgx0JTEDSxwlEZiuOKGj
QYI+9+pMfuiQ+f3oe77j09ci8ILMhcXa5baORZ5uLKpJc615Uofxo82Yug/tE1sRRSxT1xYN
JdfAV59C7Y6g2yWo+L7wPIfKoMWGbzksNlTXxRYpwAWuImel7wu+irn6UJYNLOFJraug6aMn
kmvxLfURGvixo1PTMfdDR6ce+ALh+B5v2dDzh72jSzZ+6fO6igJXuwiOHS9PhJ6o0MlHP27Z
ZysxbR2+P3774+Xph+3DnBwUgwn+Q2q76SRGmU7Q/OpL9bjDqN0tnQ/8GDfgl6eAsQsdwZtm
h6uSlIPt5pdw2hodaF3XFLKg778/fn6+++fP33/nE3qpJJjy3u/QykWTiXS7x6d/f3r51x+v
d/91VxelGcBoqUqO3YqaMDZd1a9VBAjmUxkugWsRC0tNh4r3hhDzl45lQ9V10Wr4mZF1p1Y1
OTF+LCEFFFJfNDqBVe+togJ9IJeGllQtKZA7xkB5HleplvnfLC+9Cn4crDgHQIbQw6AFKq70
HKbnIOx0OdzVJdwYuqUYuuK2d+dzroZdxyp30BEhk3n3uBDn9M78i7G+nUlNS8vIQM2q4UPo
sDvtrQY5wcXOgLTTqWkebPJUp7Mxh80ADSmjluCYTT3TwQaa/hR5vgjPowPrPZbWE1hv1R4U
wFlrBN5UHJW1CqQlacaeYKEgJMaSyCyaDLgkQm9hhTPzh97WkDa44gH3ZDlt1/fH8lexVVfn
uIWmDQdwbDVU4pKQT9Efqt8CL8r0D+zpUF2ow5BHlKpDvWNw5Co0ZaRItLRnu6PmyYWWq5uo
cajaw6gpf3Pc9ZR4gtxtESDH1eGr3MN/e36CgEeQwLo5BH4SjZWqhS5oxXC6GqJI4k0Pe6Mz
gOcQh1TkBJVulL2q72mr06SzaJNG+a8HU56iOx1QH7IANqTgbWyl4fNUSSFOqyNZITYWxucf
eIdRg5ABkbfMoRNOk9VPrNSteqoadkOjBwmwrgrVAb6gfTDiksp2bnZ0cPaCvb5oClrdDbQz
XdwrDGfK59AS0+gHlMtgxAsX1IdKJ1xIPXa9TgPn36xraWGJ9DC4ZmyAKdy561nRsTIzeUd2
aOwhwMYLbY/E6GT3VcsoH226MjYgdeE2gBR41XZnzDxB9rkDLRpewYbIDa+RoWtN4sOeb1/M
AS/f6Q/OGuFr9tCxbj8auXUQbMruJBA5k4pWcxapRU1AAOFTcXVvjSDSgo0Z70qurtdXIwFn
7lZKPophE+ZIBfHxBugjxkjjWwZ+bNNpjFApmkZr2Ek1xRVEeJABy1ODzA/n1vDgxKqGBQt9
3BQcp7avT8xqMleYBujgEL6bMOrqn6zhi/u77sHMV6VvzSUjdXZHPgoZL70p7XiEAFN2OAqF
5QTrzq1noZn2StvG9bkP/FwylWKizhQuv179Hx5KvpiYI0IaGN+Opx1KL7jUXTP9Mparutdu
7bA1b42thK3LIpATtYOcKLyKtSplR0c24lTH4SUzC+Cbi0ZErCm7SysjfKFHF8eXls2NKtm8
KWC7W3cs6K2m41hXt6rla5lSyYAjGg1APtU9taOfKAz8z9Zltwm4iFB8JOx2LEojc0cKaRUm
ahWYRMhO4yUf6P0ff/14eeKNWT/+pcW+WT7Rdr3I8FpUFHdVDaj0/e4q4kiO584UdqnsDTmM
j5DyUOHmr+NDv6XC0/H2ksd7/OYIN8niO4mRFtocPdMcsV+k23r2+vL0b6wul9Snlv0/ZU/S
3LjN7D2/QjWnpCp5Ean9kANFUhLH3IagZHkuLI2t8ahiS36S/CXz/frXDXDB0tDMO9nqbqwE
Gt1AL/gkDoLPOjFDCci1rDANmd8ZaATm83RdZxktEvTT/25gPvJjLa0G0y2BLUYzlxwiipRw
DugaQbMuwns4rwOFreJvcW1gjAlQ+vrrQL2vp3Pvy8vh+Pevzm98PRTLOcdDNe8Y1ppiOr1f
BcMFWSP5Ta60PB+en5VJ8nw/RMfnCPbugyS77/5+f0Pzl8vpZd+7vO33j9+UW3iaQmY8iyiN
5l5KnbshsOHKA64aoeNpITNejjIsxhCq0dS5oXmyCnmmOdJYgzIynIzcrVZbNHVnE9XvTMAt
ye5rpCvfgwpYOHBM6FaNBS0oR7Qzk0BOVNOUukTfbG+kRUSsiw/sdbN5EQXL0CjE7mg3OlGj
00+pCH0cmaeBbHRT+pWSuQQBGHZoPHWmlZa6GXHGvmi0GnQ4N8xoO6iF1QCBdJPYlYJzaRml
0qQirPWXAmE9BTFMxarJVOqsxQlbKvm16zMWYKpBUw3PvDJIaNPTPN5WGq7GiMQgILCkn9Bi
OVca/ASKGh700MFkqaYi7FDUfN5ja6b5aA2/UULxpF6xtZZgfFHVHWzn3xdpviQ+wx5Svyq3
akn4gRaN6qSLz4TWpYFU5Xy9UOz0mrax2kUU05cm67ogubYAAfrLJoSDHE6IB22NIZaF8QL7
Z1mcSLIKvVxfni0cuVOpJ9WtOag2pHae1tsgYqCUyHcRwXA4UeMMo9FHn3a6jRKcbT+KKlrp
qROioUIlPx3wn202v74GLjKc479G0trlCNw2yxBORsa8JXUHgwGH8UJ9HlfZQgk9JGNo63GJ
gqus5GCUQdQlJNFTNv1aY7KiaKECcjQzXoZpVHxSEQGazLWIbkmh9SVpyIIYUOP8TNVf1nWi
i/p+zFIwDcut1jFQl5heUbIYu2Sa0eJTNX/I8Vmii8wtvd7D9yas/yS06qFYJ9VOwpTy3N4E
uWwjwSOzRFkZzzWgToPVKa1waEqmGxC4uiWtBN5EsFrLqOUAU1I8PJ5Pl9PXa2/1/W1//mPT
e37fg8BIZO1bgYRcbMhN+qNamt6Ctt0mZW4YU+kto5QOwMvjBrW2oeIwoxZ3IqQkddcUWdLl
MLW9X8exl2bbloxSndGL2Y+lCwr4gQbacZYpKRQaQry+BvYgnZ9w1qDbiaik62ELtaenkWgS
bzsbqgZJEpZFowEZ+FyjGTn2CobUhpFI/MAPJ6r9l4xlIM71K596r5DbEe581OQYPpKre5ZH
aZ3XW6xEnsGRnd7PVCwhqCPclCijjmSra/xZqdnBgXIeBy1l9xZN1d/qO14UzzM5qo4vJ/eu
hR6FIoLBrSUxXTxbY9LMw2OPI3v57nl/5fkumbTlmofOH5BKpxlviQt6lmc+jCou6tF5QAHa
4nWPdqbmnAovLHw+lOTUFtakuZNSCBpViSbeXi/PlEJb5CAmCnFsifeACCC7LwgFwyV5kNqE
xGDwTVh/LhLqW+b3fmUiIXN25BnEf0MN7fHwFeY8UHVN7/Xl9AxgdvKVcTQP9gRalEOV78la
zMSK1/nzaff0eHq1lSPxIs3ANv9zcd7vL487WCifTufok62SH5Fy2sP/JFtbBQaOIz+9716g
a9a+k/hWRs4w5lGzVbYHUOn/1SpSRf+Nv5YXIFWiVcZ/6ntLohs/dBZFSOVlCbelz29If6mz
QYOKX6tT5hWLIOYB4D4qLnI1YsE8YO6qfiow+pWejq+VJ4xkN6Msy2oyM4JAhxgM1DAjHcbi
318T6Oy6AZfpSIn0VcOLcjqbDDyiJZaMRn3aIqumaC4I6Tu2TH6QjJQc4phnbb1YKD6GLazy
5yRY1VoVuK4aS9jVPRFMBPF3i2jBqVRwWURLkD/DgOyh+HfByDIGKW+VVTlah9UkrkzC7juT
lu7IEIi6ACUsK71szCUEV3t83L/sz6fXvZ7ezQu28WDiWkLmzRNPcegQv1WL2Hniw/rhN24x
DVXpA89Vt07gDUjLafh+RdCXHDQ4QLanlJ4zREODQJ1rkKprhLeNmAWHjxUNvtNEtyyYkWv8
but/vHMsxob+wFXzGSaJNxmORraQhIAdK8apiTcdKlH/Em82Gjl6RC4B1RoCkCV2ETcaJeOh
bf2xq/ITVt6BeEsGzQfM3BsphunayhKr7biD0xWTfT8dng9XzPB9OgKHNdfepD9zCqpfgHJn
yvgAMu6Pq2iBAX5AYPfiOKRc6YBuNpNkOi+IeJYXJQeJ72P0CEcFipCHwLgU6Go7keVfzAQx
nCg94yBL/HuOI7kysviBZnsMOsOY3AqYWmDoarf2afXZmU71FCYNmkdfVAeYems1fLw4EvQR
s3LryD4mJZ+8/tTxNRiDXSAZ/9pyf/GQXIN6YmnFUcgGWwPfLLJbC0pecovz6XjthccnVWoz
kLWI+PYCooRqwZP4Q3ekLPCOSqzdb/tX/nDF9kf08pLuAssYJjNfGW+qAhF+zgzMPAnHSlY8
/lvll77Ppo6y4CLvkxmkqxWC2KRvyZ/G/GBgRkbr0GjGUUR4wi9zW1qOnFkwm8/TGZ2y1Jgx
Yc91eKoBPeDGPR8kzNNRNaWqWbs4NJPE8sCtHrTdiy5Zv3wAJKyLV9+5YzKWN+XaPnUiqYHU
ThS1QhpXs3JxA1yv5yu6WPFVamOVo/6YtuHDoGNT6lUEEMPhWOaEo9HMLaq5J1vUcOigUADj
qVpsPBvrMWd9nq2YvFxnw6ErsY9k7A5Ul1TgcyOH5Il+Ppy4ymkEjAaaGY30YGDNdfOt+RMm
Bpjp+v31tTFGl6wLcAho0Amy0jJMte8l4l1qFqs6RkhZiuBgkAhxkTZK0PtW247v//d9f3z8
3mPfj9dv+8vhv/j4GQTszzyOGwVX3H3wG4fd9XT+MzhcrufDl3e8d5eX7E06Tph/2132f8RA
BqptfDq99X6Fdn7rfW37cZH6Idf9/y3ZmbnfHKGyM56/n0+Xx9PbHqauYbjtZM+TpWPJgbTY
esyFU56UuyT2sXwoMkVuTPL1oK/4RQkAuadFaVK45ChStozKJT6x3lzQ5rAF19zvXq7fpLOn
gZ6vvWJ33feS0/FwVY+lRTgc9ofaBhz0HTrmpEApfm1k9RJS7pHoz/vr4elw/U59Mi9xBw4l
8gWrUj3mVgFKadQ9K2DcvqPI2YrFE1pal1SUmVXJXDlBjfit87ZVubblLokmmhytoPREes0U
6dNR2z4Bm0ILh9f97vJ+Ft6W7zC90rebJ5EzVuQD/K2uxMU2Y9OJrKU1EJXuLtnKvnVRuqki
P8EcYn0aqi14wMBOGPOdoCjtMkKdyHonxCwZB4wWDm7MgjDv4O4m1DrC9BpebLk8DT7CcqDV
Si9Yg3wr55Pz0GFK/Y0RBiRAHrDZQJ4nDpnJn8Zjk4ESRHO+ciYyG8HfsqznJ0Avpz5GgBwO
Dn4P5JBz8Hus5KBd5q6X92WBXkCg7/2+8ijJI4Q7+nRJch+XWVjszvoOkStHYGRvRQ5x1KNa
1sbJhiSCvJDv3T8yz1Hc8oq86I9cTcsqRmRcw3gDH2/oq3HVve1wSJu41CglzGaaeQ4drzTL
y4HivJd7mIuwhkmb33HIkN2IGOrq9WBgCUkAe2W9iZhLc5jSZ4OhQwuBHDe5lemuhC82UvVN
DprSugLiJmSFgBmOZI/XNRs5U1cxU9z4aWyZf4GSMyhswoQriDpEDpe7icdKaIvP8GHcJmlX
zUpUViGeS3fPx/1VXE5ITKTZsnfTmez67931ZzNlC4tLrMRbpiRQC6vsLYHj0NdTSB2WWRKW
YaFKG4k/GLlD5Syr2SZvgQsPNz4rZsmayoEKNITaxQZZJANHZmcqvGXjzXMxNYu/tElD3l72
/2qCoQKvT7vHl8PR9iVkdS714yglZkqiEZejVZGV3NtB7ivZDu9BYyTY+6N3ue6OT6AtHPeq
NoC380Wxzkv65pYb5VG6Jl11fX4dQXoS8TOOz+8v8P/b6XLgIbSNaeBseog5W9R1/eMqFJH5
7XSFU/TQ3fZ2mpw7kbhswBzVtRv0Mi16EGpkcCRYdDWFDZR5rAuOlg6RnYWJuyoHfJzkM6f/
AzFZLS3UGQwqAZIEKTTM8/64n1A2lPMkd9WrGPytSzRBvALORNkfBTnIG/Tm1/2dcnnSIz/H
IAGKvhE7StoZ/ttI+pLHwG0sGV/YaGxzrQfUgA6ZU7Md3lv6TBgNLTdLq9ztj+m7vc+5B1LN
mPyKxqfqRL7j4fisfEGZySvI+qOf/j28ooyN2+TpgNvwcU/sMJRblLwU6ORaoIl/WG3ke/e5
46pbIbeZvRSLYDIZWl7EWLHQHekbzHY2IMPwAGKkMGeoQg2fBUfooE8H4opHg7i/1Tn4D6an
Niu4nF7QNtt+b99aDdykFNx2//qGNw7qRpT5XN8rqzpVIiVC6knXuxUcb2f9sUNZvwiU+tnK
BERk6qmVI6QX1hLYuyrVcYgbkEuXGp8kUJaUm8YmCSthTsWnCH725ufD0zPxAo2kvjdz/K0S
zQygJYiUw6kKW3h3oVLraXd+oiqNkHoiUvm01MYreLfk780ICFHxiUeoMb11mjxassVhkxoG
j1ZlRRrVtGsj9/y7epoaVpx5BWYK8SNXNQqv3aGjPPNLS5I2YGZhKYUCNcaTrx567P3LhdsZ
dINpksABWm4RX7bjZYJg6gzxk+oO030AmasXhZ9VvvUqd5om1YpFNLtUqLAaK5Wf+15u+thI
FHXOKOhuqDm8dIxBGX2r5PDsgJ5kLSeqKrw81l4iO4QiwQZxCKiPoU8GmJEf8uFHbYctCR1z
TN5lfqr9+evp/MpZ16u4iFIMH5sx3SBrFUzZEgLmcKh95GFjm8WzWxLJP49P59PhSZKt0qDI
ZG/0GlDNozTAYC65cnqr2AUl4WsVNC4DH74c0D/m92//1P/85/gk/vtgqx4bb42+LQ9sYjiS
oONRF28p8AvJmZr/bH1VxNXffe963j3yA1pnEKxUbD7hJ94DlBm+SERkCICWAtqoSr2wEY1B
wrFsXdSZYDItsWKHXYVeUc5Dj1qkEtmiLLS8nWLVlytyMokpkK6l8yXtNLFOIxwkqP9ZYXOr
Y1FmSWYbR5ZU3lylgf9TEWu6hvrZOi1VY2qnP8TUekFFCvuK2XPCrZU1U0LtFBEvGQf0oeK8
RfZZ8T1/FVb3WRHUflrSaSaCjsCcM3zkZ7IKFm7RLFNzi6ph1RwtRqssp+YAbfy5RWkk+zMn
sEkwRcGDBb9AW2y/eMhrFbMDb+BkKBXe3gKtLlodxXwdxWWUwsdcpl65LkImV044bQgQ+W05
hic+kurwzDo+rUFVpm9K12W2YENbsBmBtmEXa4zMQOMyGC4mEl6YroH+7vGb7MS0YHxFyNMg
lgimMGbqPAvEKmJltiw8ygK8odGSTTXgbI6nEgj8rJSXb90ncc5c9u9PJwx4vTcWbx2ZR7oQ
RMCd7nvEoZvEkoWMYzF9XRlrFeUeup1kaaRk0BK2u6soDgr5nVKUQIdt9BbGyVrrHfPzNYpA
airuu7BI5SE0DLwTjNVNxgF4KxFtYcNSRjeCYuuVZWEWjDCC+ZiS11frZVjGc7nxGsQnQtqr
IU+eWgCrlp+uGyfpZbT00jLytVLiD1++ijJkfl5J9oiYcLwSPk4UO0nDEnjXnUwlHYlNc9Jv
Wa/kv5VbWAGxTCxHDuXkywJSWXJjZVmJFCRSdI3vAiseN3Ttghqk5OBrIlxDcAQCkTq2IGI8
F/g6yCnneCChLk+WBbfqAgaZSSIUsmT9J86G0qBuGcLWaZH7+u9qCaeMNIs11ODWnX1DmK8q
Ui7zo4VSFf4WnIq6LedYDI90D/uAhT7w+3qC5WnhVPehdwcqF65o2mudU61zjMRjx/NNaOuI
ERKsg9KKRodHaSuv9Fg/GuFP9O/WCvSzwLOeRPZjaJbTXyqV/V7hRxtg7MPhcsJkY384H2Q0
5r3hDHg4mKgFW8zEjpFfEhTMVA30rOHoedeI6Gs+jYiyqFFJxjc6MqY5ikZELXGNZHCjDfom
TCP6mcGOxz9DREUiV0hmg7Hlo81ufLTZ4Cc+2mz4w9ank6HeBkj/uDBJIVwp6yihe3WUo6K4
36wKahpyaLBLgwc0eEiDRzR4bBu0bQE3+JmtIGmLrRBYeuhoXbzLomlVELC1CkPn8SJL5Ggw
DdgPQbr39Z4KDOhc64L2D2mJiswrI4+KV9WSPBRRHEe+2fbSC2O6bYybRKe6aigi6Dgd1qKl
SNdRSVXOZ0Lrs0EEms5dZDnYkGZdLmjH7yCmpHxQl/1MDmZWA6o0KxJQIj/zl8H21kO6sMqq
+0+yOKgoqcIEdP/4fsbLciNHEB5/shwNCnuB4S5RH1ZVGJBlGKgY8MmRrADNUhGp53VxYmS1
0hkGTWttIfhdBStQckMR5Y1+FwYRgyuiScj49WdZRL56dVKT0DcJNdJy1HIGU3IJD/ZQbA02
h861K68IwhTGsebu4/kDl4N8NTmwQXQDBaptHKs5y0wa7CHLPSUU3QKkUFSUxX0OednjofqA
lWBuwVUY50r8UgoNLZWrvz78eflyOP75ftmfX09P+z9EDPdWrKglDumzyBFfYpb89QHtM59O
/xx//7573f3+cto9vR2Ov192X/fQwcPT74fjdf+My/H3L29fP4gVerc/H/cvvW+789OeP4Z1
K/WXLqBQ73A8oG3V4b+72iq0kbN8rjahcl9tPHxzjzBwAobOlDgaSYVByOTbXwDB7Ph3sO9S
7YqsRcFXa2qnHx5VUmzCTod+eTzaaDO15AJsSBfA9yRKed9b5qhB26e4NQbX2UQ7cbh/s+ZG
1D9/f7tiuoPzvnc610H+pW/BiWFMSyWFmAJ2TXjoBSTQJGV3fpSv5OWsIcwiKxFI0QSapIV8
ZdbBSEIzwG/TcWtPPFvn7/LcpAagWQMoiARpF06ChCtPiDUKtz+lWCkFWwWYB1gxql8uHHea
rGMDka5jGkj1JOd/7X3hfwKiIOzlFZwvtPIlSPQ3Am3NRIm57pbxGu/SOVOUIvbm719eDo9/
/L3/3nvke+AZ44B/N5Z+wTyjysBcf6Hsvt/CghUxzNAvAkaFCmpGkVCzChx6E7qjEZnAyKCR
R+q9X7+hGcrjDvOBhUc+XDTP+edw/dbzLpfT44Gjgt11Z4zfl0OaN1MKsFedbgUihuf28yx+
QHNFYv8vI+bIZpoaAv5haVQxFhJsIvwUbYgJXnnASzfNSOfcmwBPuos5jrn5gfzF3ISpl4Qt
lLxsabphVhMX9wYsW8zJ7TKnXpdq7JbYpiBp3ReeyUzSlXXyO1Qzv3o3JApvsyUvi+rPhRFg
ynVCrWzG1KCM4tFtd/lm+yiJZ36VlQDqlW9vztNGFGqMt/aXq9lY4Q9cYhFwsJ7UQUZS6wHh
8OliLVq/1uUteVbNY+8udKm1IDA3llpNUG9vo0+l0w/kWEvNfiW7YV0s7ULAkDlykPfmMAko
2IgYTxLBDg1j/GsfU5EEFFtAsGzC3oHd0ZhoCxAD0uqoYSErzyGKIRj2BAspLb2jgTYFlcma
Vt7Ice1Id2R+KlGGAhNVJAOq2yXIjvOMesxrjspl4czMNu5zqmW+Qiq+nCvgwc1mECLi4e2b
Ghqn4duM6BhAq5IOkSxRNG3cWunZ/SIid49AdHfr5i6qKcT6vXFeehi0KTIP+AZh2wEtXhxZ
wDA7SoNVGLTuT3QMdXbt7UDCmXuWQ9WOmAQEz0Do7f4HZJisDjmowiC0F1/8QBpkXsw819zn
jTxhRdgGCnJwruTiUOH8/PtB2dsTIhH9+EuyxGylvM/IlV3DbR++QVv6rqKrwb0czVCjUcYn
NnmX7062sWs/8yL2SsuzSC3wfKYCqtTI6dBkOfFncwwAW5kn9GdWtkEpi93x6fTaS99fv+zP
jeOoen1Qc5mURZWfUxpgUMyXTVw8AmORPwSOjogokwgB0kQYwI88X32Itn/5A9EganSYh+fG
k5tG2OjMP0VcWAyGdTrU2+1D5udGlC70C4WXw5cz5tg6n96vhyMh+sXRvD5BCHjhDw0Fo35P
34ScpBaEyOKNkNQlR7HSmMqI0orgNWQFAnWzDUtprYlWZ6Pr6FS6m03driWwTHQr6RU8O47j
3OyqVWBUqrrVzZs1/FCDRKJWoNLX6+qeWKQee0iSEO+Y+fU0hmXvapWQ+Xoe1zRsPVfJtqP+
rPJDvKONfDT+ai2/unvnO59NMXfFBvFYi6ChjGuAdNKEbjWMyAQWLzkqkQSmhqM1VojJNoQ1
GNpq8c5EXdAwHz1Vv3Il/8IDmP9fZUeyG7cNvfcrgpxaoA3i1EiTQw4aSTOjRpu1eMa+CKkz
cI00ThDbhT+/byGlR/Jx7B4CxOQbkuLy9uXu5vqWPcCv/j5cfbm5vRa+juQYMg1YHIJV+Z3j
XRb29x9evvR68/3QJXJngt8HEFyE6fT1+7czZA7/yZLu4snFwIvFUnD98AwIwkrkRPXypfCi
esYW2SFXRY2LohJma7vHZRSplUWdJ93UYWJe6eWUeG57qwL4dswIKzbLOnMDS1+naEfomsrz
mpMgZV5Heut8mMahkH4FtmtdcKkr2JtV4bhadpl8q1h/JZ/qsVo5FcfYoiNLOs0e6GmBiSCl
NsJ2ec2ES9A5J63afbplj5kuX3sQqNRfI/tL1WzaspBfOo8BTxcofG1C7Bzklk5pWgwO/5ee
eBgjnVhSVukarHwYJ3cAJ/KZZH9hu5MDYw+gknx1oRnJHYBT5adJt4vzWQgBpxfrjbChqcNm
pcJDBPBpqBNJhSQ+6y8WF66kzppKfL4yJTB7yMJ6cWXYmuVh+yVidWAgSgeBADepQgODqIxN
bKPers8JDKUCTs0O/Pzd+0vsUD52AZ82l4W47qKjvHTSr8uOJnw2ihlyn3RdcsEvQtKvvkkL
eABAnglg6cJHBM9PesNzEyUhd54ltjs5CusckHDPWeFLW6ZO9lEK/KQlu6Pv70j5+7OsmwaQ
LRxM0+9sturFdg7AaSRLPw0E3GfMY7nflLxT4u6iN6nzbdmZxFhls3L/UuzvdWm8RO2Y5SUa
lOWqMes3sCia6qJqC6cGQ0MlvjZAh5wqd8hP2dM+z3rlDmzyAYtONutMnqr8DZUBnSTmWzco
3PpFPKj13ePJW68J7aY9lp8bvBPE+9Bi4IRj8Ju7RvaJn9bl2G89r4IZiEzdVer1kA11l8js
12j+rzcuLp1DKT1a65qWLUNDrd9/3Nzef+Howa+Hu+vQNYJLkdKmOYwbN6NPoG494wANIFWb
Esh2OdsI/4hCnI1FPnw4na+E4fSCEU6FuwX6xZqlZHmZaN70toSr5zfuNHvBT0AjVw2ytXnX
AZSswUHQ8M9UWpX7Ht3LWU1w88/ht/ubr4ZpuiPQK27/Ee48z2UkxKANK+CNqVu3TPT2QP81
EiNAsl3SrR1SuslWWOCmaAfdSySvyQRajahj2+apVqF23cGGTTB07ZcMhRvbAurFwCLV/7oD
cZnGBxjx5nMMP0Rvc3gF0pbKXwJcMDkEVUVfJYOsuuj30Jqmpi4v/DHWDUUDjTX/ICkLTNDw
ZiVfGzoxLJVz9RHY0RcT2rZOMuRnH/1PMi+5ebDZ4a+H62t0VChu7+5/PHx1C5VQMUVk42V0
pmicnST47D68fjwRTvECjmMuozdG+r7bFuP/zKfiXULjMk4AFUYl6bTKHQl9TWJeR4QIP8IN

From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:20:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:20:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32245.63259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Gm-00011d-Nj; Fri, 20 Nov 2020 14:20:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32245.63259; Fri, 20 Nov 2020 14:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Gm-00011W-Ka; Fri, 20 Nov 2020 14:20:12 +0000
Received: by outflank-mailman (input) for mailman id 32245;
 Fri, 20 Nov 2020 14:20:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BBs/=E2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kg7Gk-00011R-T9
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:20:11 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 833713cc-579f-47d7-adad-50e4ed289afa;
 Fri, 20 Nov 2020 14:20:09 +0000 (UTC)
X-Inumbo-ID: 833713cc-579f-47d7-adad-50e4ed289afa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605882010;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Fqba70I57e6KRrSTc8lVRghlr06RGYUwzyNMoEC4v8Y=;
  b=Z4PzhSIDGE/AMt32gSoV7HlAuSG5Xm3e0wOnU4I8FVTPi12zetvTJF3m
   W+SXET0voXtIZM3nt2lDBnRc3OS7xEfDUe/8p+R2XDoEV2QVrE1U8h6OX
   mPvgAf0k4jrDOz+HHE/qtw+7D2KGIxYGqWhSKSLuIKQ3fZTvK4wVvP4Hs
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: taekLhPy/xegJpCbbLPyk0QWtVOnZwIfwsiNlhu2OfMVLSXLqzx/pXa8cI92Mx38J22Qa29FSJ
 zQ+KKU3POUHpMHwzkK4erpHOfiHDGsS1tiTNXcF54wATgu9yKvBLCojlZj5aIgtj8H8xF/RmJj
 4Vy2WTa0sm4SHpomNJi1RoHWgSYqhVQKhHMjSduwfNPa5NqQ56MnXNJtqXrG4LL9u0kSz7o0LA
 EcFvQHFU+DmcKInwmYz25u9sy3ZNSADKFrdzQmIUD2t7+v1Bx3kaXpX0d+PqGKjKFRK2AgMB9o
 wGs=
X-SBRS: None
X-MesageID: 31588333
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,356,1599537600"; 
   d="scan'208";a="31588333"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wei.liu2@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Paul Durrant
	<paul.durrant@citrix.com>
Subject: [PATCH] amd-iommu: Fix Guest CR3 Table following c/s 3a7947b6901
Date: Fri, 20 Nov 2020 14:19:51 +0000
Message-ID: <20201120141951.13742-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

"amd-iommu: use a bitfield for DTE" renamed iommu_dte_set_guest_cr3()'s gcr3
parameter to gcr3_mfn but ended up with an off-by-PAGE_SIZE error when
extracting bits from the address.

First of all, get_guest_cr3_from_dte() and iommu_dte_set_guest_cr3()
are (almost) getters and setters for the same field, so should live together.

Rename them to dte_{get,set}_gcr3_table() to specifically avoid 'guest_cr3' in
the name.  This field actually points to a table in memory containing an array
of guest CR3 values.  As these functions are used for different logical
indirections, they shouldn't use gfn/mfn terminology for their parameters.
Switch them to use straight uint64_t full addresses.

Fixes: 3a7947b6901 ("amd-iommu: use a bitfield for DTE")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>

Rebase over several years worth of changes.

This code is unreachable, so completely untestable, but I think the end result
is better than it was previously.
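For readers without the tree to hand, the arithmetic at the heart of the fix can be illustrated with a small standalone sketch. This is hypothetical mirror code, not the Xen helpers themselves: GCR3_MASK() describes bit positions within the full address, so the value fed to MASK_EXTR() must be an address, not an MFN (address >> PAGE_SHIFT) - which is exactly the off-by-PAGE_SIZE the patch removes.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical standalone mirror of the patched helpers (not the Xen code).
 * The DTE splits bits 51:12 of the Guest CR3 Table pointer across three
 * fields; the masks below are expressed in *address* bit positions. */

#define GCR3_MASK(hi, lo) \
    (((UINT64_C(1) << ((hi) + 1)) - 1) & ~((UINT64_C(1) << (lo)) - 1))

/* Same shape as Xen's MASK_EXTR(): extract the masked field, shifted down. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

struct gcr3_fields {
    uint64_t trp_14_12, trp_30_15, trp_51_31;
};

/* Mirrors dte_set_gcr3_table(): takes a full address, not an MFN. */
static struct gcr3_fields set_gcr3_fields(uint64_t addr)
{
    struct gcr3_fields f = {
        MASK_EXTR(addr, GCR3_MASK(14, 12)),
        MASK_EXTR(addr, GCR3_MASK(30, 15)),
        MASK_EXTR(addr, GCR3_MASK(51, 31)),
    };

    return f;
}

/* Mirrors dte_get_gcr3_table(): reassembles the full address. */
static uint64_t get_gcr3_addr(struct gcr3_fields f)
{
    return (f.trp_51_31 << 31) | (f.trp_30_15 << 15) | (f.trp_14_12 << 12);
}
```

Passing an MFN into these address-based masks extracts the wrong bits, which is the bug the patch removes by standardising on full addresses throughout.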
---
 xen/drivers/passthrough/amd/iommu.h       |  2 --
 xen/drivers/passthrough/amd/iommu_guest.c | 36 +++++++++++++++++++++++++++----
 xen/drivers/passthrough/amd/iommu_map.c   | 21 ------------------
 3 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 28a44ceb85..ad089cb095 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -246,8 +246,6 @@ void amd_iommu_set_root_page_table(struct amd_iommu_dte *dte,
 				   uint8_t paging_mode, bool valid);
 void iommu_dte_add_device_entry(struct amd_iommu_dte *dte,
                                 const struct ivrs_mappings *ivrs_dev);
-void iommu_dte_set_guest_cr3(struct amd_iommu_dte *dte, uint16_t dom_id,
-                             uint64_t gcr3_mfn, bool gv, uint8_t glx);
 
 /* send cmd to iommu */
 void amd_iommu_flush_all_pages(struct domain *d);
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 2a3def9a5d..00c5ccd7b5 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -68,11 +68,39 @@ static void guest_iommu_disable(struct guest_iommu *iommu)
     iommu->enabled = 0;
 }
 
-static uint64_t get_guest_cr3_from_dte(struct amd_iommu_dte *dte)
+/*
+ * The Guest CR3 Table is a table written by the guest kernel, pointing at
+ * gCR3 values for PASID transactions to use.  The Device Table Entry points
+ * at a system physical address.
+ *
+ * However, these helpers deliberately use untyped parameters without
+ * reference to gfn/mfn because they are used both for programming the real
+ * IOMMU, and interpreting a guest's programming of its vIOMMU.
+ */
+static uint64_t dte_get_gcr3_table(const struct amd_iommu_dte *dte)
 {
     return (((uint64_t)dte->gcr3_trp_51_31 << 31) |
             (dte->gcr3_trp_30_15 << 15) |
-            (dte->gcr3_trp_14_12 << 12)) >> PAGE_SHIFT;
+            (dte->gcr3_trp_14_12 << 12));
+}
+
+static void dte_set_gcr3_table(struct amd_iommu_dte *dte, uint16_t dom_id,
+                               uint64_t addr, bool gv, uint8_t glx)
+{
+#define GCR3_MASK(hi, lo) (((1ul << ((hi) + 1)) - 1) & ~((1ul << (lo)) - 1))
+
+    /* I bit must be set when gcr3 is enabled */
+    dte->i = true;
+
+    dte->gcr3_trp_14_12 = MASK_EXTR(addr, GCR3_MASK(14, 12));
+    dte->gcr3_trp_30_15 = MASK_EXTR(addr, GCR3_MASK(30, 15));
+    dte->gcr3_trp_51_31 = MASK_EXTR(addr, GCR3_MASK(51, 31));
+
+    dte->domain_id = dom_id;
+    dte->glx = glx;
+    dte->gv = gv;
+
+#undef GCR3_MASK
 }
 
 static unsigned int host_domid(struct domain *d, uint64_t g_domid)
@@ -389,7 +417,7 @@ static int do_invalidate_dte(struct domain *d, cmd_entry_t *cmd)
     gdte = &dte_base[gbdf % (PAGE_SIZE / sizeof(struct amd_iommu_dte))];
 
     gdom_id = gdte->domain_id;
-    gcr3_gfn = get_guest_cr3_from_dte(gdte);
+    gcr3_gfn = dte_get_gcr3_table(gdte) >> PAGE_SHIFT;
     glx = gdte->glx;
     gv = gdte->gv;
 
@@ -419,7 +447,7 @@ static int do_invalidate_dte(struct domain *d, cmd_entry_t *cmd)
     mdte = &dte_base[req_id];
 
     spin_lock_irqsave(&iommu->lock, flags);
-    iommu_dte_set_guest_cr3(mdte, hdom_id, gcr3_mfn, gv, glx);
+    dte_set_gcr3_table(mdte, hdom_id, gcr3_mfn << PAGE_SHIFT, gv, glx);
 
     amd_iommu_flush_device(iommu, req_id);
     spin_unlock_irqrestore(&iommu->lock, flags);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index f773ab33fd..d3a8b1aec7 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -173,27 +173,6 @@ void __init iommu_dte_add_device_entry(struct amd_iommu_dte *dte,
     };
 }
 
-void iommu_dte_set_guest_cr3(struct amd_iommu_dte *dte, uint16_t dom_id,
-                             uint64_t gcr3_mfn, bool gv, uint8_t glx)
-{
-#define GCR3_MASK(hi, lo) (((1ul << ((hi) + 1)) - 1) & ~((1ul << (lo)) - 1))
-#define GCR3_SHIFT(lo) ((lo) - PAGE_SHIFT)
-
-    /* I bit must be set when gcr3 is enabled */
-    dte->i = true;
-
-    dte->gcr3_trp_14_12 = (gcr3_mfn & GCR3_MASK(14, 12)) >> GCR3_SHIFT(12);
-    dte->gcr3_trp_30_15 = (gcr3_mfn & GCR3_MASK(30, 15)) >> GCR3_SHIFT(15);
-    dte->gcr3_trp_51_31 = (gcr3_mfn & GCR3_MASK(51, 31)) >> GCR3_SHIFT(31);
-
-    dte->domain_id = dom_id;
-    dte->glx = glx;
-    dte->gv = gv;
-
-#undef GCR3_SHIFT
-#undef GCR3_MASK
-}
-
 /* Walk io page tables and build level page tables if necessary
  * {Re, un}mapping super page frames causes re-allocation of io
  * page tables.
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:20:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32246.63271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Gv-00014p-0b; Fri, 20 Nov 2020 14:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32246.63271; Fri, 20 Nov 2020 14:20:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Gu-00014i-Sw; Fri, 20 Nov 2020 14:20:20 +0000
Received: by outflank-mailman (input) for mailman id 32246;
 Fri, 20 Nov 2020 14:20:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg7Gu-00014T-5F
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:20:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65bd90f2-58a5-45f6-971a-0253e3bb38fa;
 Fri, 20 Nov 2020 14:20:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76A38AC23;
 Fri, 20 Nov 2020 14:20:18 +0000 (UTC)
X-Inumbo-ID: 65bd90f2-58a5-45f6-971a-0253e3bb38fa
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605882018; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pcqKyf2F69YpsegYySux8ebDCY1ioanHw5/or7/Lx8k=;
	b=H0Kmnhm9YbmqtRnbtIoTN9FyhUWdeOhBqFeGYWy9ai78thwzWgKYtVWDwMxBDOXM9MTSCw
	oqJAzhqAkXsPndopqGdn6gQVBautnU7gjR01DjKWAms6dTz78xT+ux6SrEyThJf2FT9oat
	lE5t+clvnbJNbUbd8VfWgEvHoJ4xKsg=
Subject: Re: [PATCH v2 01/12] viridian: don't blindly write to 32-bit
 registers if 'mode' is invalid
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0367ae3b-88a4-1a8d-b174-794b3fe61760@suse.com>
Date: Fri, 20 Nov 2020 15:20:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120094900.1489-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:48, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> If hvm_guest_x86_mode() returns something other than 8 or 4 then
> viridian_hypercall() will return immediately but, on the way out, will write
> back status as if 'mode' was 4. This patch simply makes it leave the registers
> alone.

IOW 16-bit protected mode and real mode aren't allowed to make
hypercalls (supported also by the earlier switch() in the
function)? But then, to achieve what you want, wouldn't it be
more direct to simply "return HVM_HCALL_completed;" straight
from that earlier switch()'s default case? At which point the
switch() you modify could become if/else? Anyway - you're the
maintainer, I'm just wondering ...
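The restructuring suggested above can be sketched in isolation. This is hypothetical stand-in code, not the real viridian_hypercall(): the point is only that returning from the earlier switch()'s default case leaves the register write-back unreachable for invalid modes.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the flow under discussion; 'mode' plays the
 * role of hvm_guest_x86_mode()'s result, and the return value is what the
 * guest's status register would hold afterwards.  Not the real Xen code. */
static uint64_t status_writeback(int mode, uint64_t status, uint64_t old_reg)
{
    switch ( mode )
    {
    case 8:
    case 4:
        break;
    default:
        /* 16-bit protected mode / real mode: bail out here, before any
         * write-back path can touch the register. */
        return old_reg;
    }

    /* ... hypercall dispatch would happen here ... */

    return mode == 8 ? status : (uint32_t)status;
}
```

Whether this is neater than an exit path via goto is the style question the thread leaves to the maintainer.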

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:25:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32258.63283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Lw-0001NC-Kw; Fri, 20 Nov 2020 14:25:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32258.63283; Fri, 20 Nov 2020 14:25:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Lw-0001N5-HI; Fri, 20 Nov 2020 14:25:32 +0000
Received: by outflank-mailman (input) for mailman id 32258;
 Fri, 20 Nov 2020 14:25:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WHVp=E2=amazon.co.uk=prvs=5864bad74=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kg7Lv-0001My-FK
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:25:31 +0000
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c00be2c-ca04-4a8a-af52-75e5c2ab1898;
 Fri, 20 Nov 2020 14:25:30 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 20 Nov 2020 14:25:12 +0000
Received: from EX13D32EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com (Postfix) with ESMTPS
 id A58B02824CF; Fri, 20 Nov 2020 14:25:10 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 20 Nov 2020 14:25:09 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 20 Nov 2020 14:25:09 +0000
X-Inumbo-ID: 0c00be2c-ca04-4a8a-af52-75e5c2ab1898
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1605882330; x=1637418330;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=31z40IrFiOkeTaig0H3MJc307/RGe0tvascgzgZUi5Q=;
  b=VbjmDegtZzz86Mg3f9BgHnHZwlnS/+j9F657WetcVfy9ilPUosaeNeUk
   XyKADGGn3Ovtgk/eBLkSl6cNLVFo0XV+zXmQLq6cO7O9DFKrv57fER9FK
   St+ydBryL1pFcgvmALC2rHDf0zj1YiuD809o5BHn0G1gcUjCy2xEIrdzS
   s=;
X-IronPort-AV: E=Sophos;i="5.78,356,1599523200"; 
   d="scan'208";a="89153437"
Subject: RE: [PATCH v2 01/12] viridian: don't blindly write to 32-bit registers if
 'mode' is invalid
Thread-Topic: [PATCH v2 01/12] viridian: don't blindly write to 32-bit registers if 'mode'
 is invalid
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQHWv0hWzbOqyITZRUa/qwL6gjpVQKnRErYQ
Date: Fri, 20 Nov 2020 14:25:09 +0000
Message-ID: <fcd68337f9fd496d9a87c5b84468330a@EX13D32EUC003.ant.amazon.com>
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-2-paul@xen.org>
 <0367ae3b-88a4-1a8d-b174-794b3fe61760@suse.com>
In-Reply-To: <0367ae3b-88a4-1a8d-b174-794b3fe61760@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.242]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 20 November 2020 14:20
> To: Paul Durrant <paul@xen.org>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH v2 01/12] viridian: don't blindly write to 32-bit registers is 'mode'
> is invalid
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 20.11.2020 10:48, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > If hvm_guest_x86_mode() returns something other than 8 or 4 then
> > viridian_hypercall() will return immediately but, on the way out, will write
> > back status as if 'mode' was 4. This patch simply makes it leave the registers
> > alone.
> 
> IOW 16-bit protected mode and real mode aren't allowed to make
> hypercalls (supported also be the earlier switch() in the
> function)?

I don't think running enlightened versions of OS/2 1.3 is really a use case :-)

> But then, to achieve what you want, wouldn't it be
> more direct to simply "return HVM_HCALL_completed;" straight
> from that earlier switch()'s default case? At which point the
> switch() you modify could become if/else? Anyway - you're the
> maintainer, I'm just wondering ...
> 

It could be done that way but I prefer the exit path via goto.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:32:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:32:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32264.63295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Sx-0002Xl-DE; Fri, 20 Nov 2020 14:32:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32264.63295; Fri, 20 Nov 2020 14:32:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Sx-0002Xe-9e; Fri, 20 Nov 2020 14:32:47 +0000
Received: by outflank-mailman (input) for mailman id 32264;
 Fri, 20 Nov 2020 14:32:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg7Sw-0002XZ-8o
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:32:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66ffa606-67d9-49c6-84c8-9344ccc0c705;
 Fri, 20 Nov 2020 14:32:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0CF12AC23;
 Fri, 20 Nov 2020 14:32:44 +0000 (UTC)
X-Inumbo-ID: 66ffa606-67d9-49c6-84c8-9344ccc0c705
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605882764; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=61pizMgb5lNHjux2ICoE/P5vIGQSORAcFMQOpQT2y5k=;
	b=UD9XwTegXfLHTsxDDutZ4zSgys8zFepaDsun53+lySVGOVz0S6QAsfJ6zWXaWWGW2iA/0z
	1uYGu5ycSP07bn9lzahjeK4jutNOWQngTqYkKuaRxMRhs3SpZG1Y1NovzvOyaWD6xdmmBd
	64dGa8WuUI0Wz/H02gArUg+wcfNwwrg=
Subject: Re: [PATCH] amd-iommu: Fix Guest CR3 Table following c/s 3a7947b6901
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul.durrant@citrix.com>
References: <20201120141951.13742-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dc7a4158-2c8e-5aaf-cc68-ab7f15454c77@suse.com>
Date: Fri, 20 Nov 2020 15:32:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120141951.13742-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 15:19, Andrew Cooper wrote:
> "amd-iommu: use a bitfield for DTE" renamed iommu_dte_set_guest_cr3()'s gcr3
> parameter to gcr3_mfn but ended up with an off-by-PAGE_SIZE error when
> extracting bits from the address.
> 
> First of all, get_guest_cr3_from_dte() and iommu_dte_set_guest_cr3()
> are (almost) getters and setters for the same field, so should live together.
> 
> Rename them to dte_{get,set}_gcr3_table() to specifically avoid 'guest_cr3' in
> the name.  This field actually points to a table in memory containing an array
> of guest CR3 values.  As these functions are used for different logical
> indirections, they shouldn't use gfn/mfn terminology for their parameters.
> Switch them to use straight uint64_t full addresses.

All of this still looks to belong to "First of all ..." - did you
mean to have more in here, but forgot to actually put it in?

> --- a/xen/drivers/passthrough/amd/iommu_guest.c
> +++ b/xen/drivers/passthrough/amd/iommu_guest.c
> @@ -68,11 +68,39 @@ static void guest_iommu_disable(struct guest_iommu *iommu)
>      iommu->enabled = 0;
>  }
>  
> -static uint64_t get_guest_cr3_from_dte(struct amd_iommu_dte *dte)
> +/*
> + * The Guest CR3 Table is a table written by the guest kernel, pointing at
> + * gCR3 values for PASID transactions to use.  The Device Table Entry points
> + * at a system physical address.
> + *
> + * However, these helpers deliberately use untyped parameters without
> + * reference to gfn/mfn because they are used both for programming the real
> + * IOMMU, and interpreting a guest's programming of its vIOMMU.
> + */
> +static uint64_t dte_get_gcr3_table(const struct amd_iommu_dte *dte)
>  {
>      return (((uint64_t)dte->gcr3_trp_51_31 << 31) |
>              (dte->gcr3_trp_30_15 << 15) |
> -            (dte->gcr3_trp_14_12 << 12)) >> PAGE_SHIFT;
> +            (dte->gcr3_trp_14_12 << 12));
> +}
> +
> +static void dte_set_gcr3_table(struct amd_iommu_dte *dte, uint16_t dom_id,
> +                               uint64_t addr, bool gv, uint8_t glx)
> +{
> +#define GCR3_MASK(hi, lo) (((1ul << ((hi) + 1)) - 1) & ~((1ul << (lo)) - 1))
> +
> +    /* I bit must be set when gcr3 is enabled */
> +    dte->i = true;
> +
> +    dte->gcr3_trp_14_12 = MASK_EXTR(addr, GCR3_MASK(14, 12));
> +    dte->gcr3_trp_30_15 = MASK_EXTR(addr, GCR3_MASK(30, 15));
> +    dte->gcr3_trp_51_31 = MASK_EXTR(addr, GCR3_MASK(51, 31));
> +
> +    dte->domain_id = dom_id;
> +    dte->glx = glx;
> +    dte->gv = gv;
> +
> +#undef GCR3_MASK
>  }

I realize the question is somewhat unrelated, but aren't we updating
a live DTE here? If so, are there no ordering requirements between the
writes? Might be worth putting in barrier(s) right on this occasion.
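The ordering concern can be made concrete with a toy model. This is a hedged sketch under stated assumptions: the struct, its field names, and the __atomic_thread_fence() stand-in for a write barrier are all illustrative, not Xen's actual DTE layout or barrier primitives.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy DTE: 'gv' is the bit a (hypothetical) IOMMU keys off when deciding
 * whether to consume the other fields. */
struct dte_sketch {
    uint64_t gcr3_fields;
    uint16_t domain_id;
    bool gv;
};

/* Stand-in write barrier; real code would use the platform's wmb()/smp_wmb(). */
#define wmb() __atomic_thread_fence(__ATOMIC_RELEASE)

/* Update a live entry so an observer never pairs old and new halves:
 * disable first, publish the new fields, then re-enable. */
static void dte_update_ordered(struct dte_sketch *dte,
                               uint64_t fields, uint16_t dom_id)
{
    dte->gv = false;
    wmb();

    dte->gcr3_fields = fields;
    dte->domain_id = dom_id;
    wmb();

    dte->gv = true;
}
```

Without the barriers (or with the enable bit written first), a concurrent reader of the live entry could combine the new enable bit with stale field contents.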

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:37:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32269.63307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Xi-0002jo-0t; Fri, 20 Nov 2020 14:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32269.63307; Fri, 20 Nov 2020 14:37:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7Xh-0002jh-Tq; Fri, 20 Nov 2020 14:37:41 +0000
Received: by outflank-mailman (input) for mailman id 32269;
 Fri, 20 Nov 2020 14:37:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BBs/=E2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kg7Xg-0002jB-QL
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:37:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a5f306d-93b5-4c06-ac77-92896595a593;
 Fri, 20 Nov 2020 14:37:39 +0000 (UTC)
X-Inumbo-ID: 4a5f306d-93b5-4c06-ac77-92896595a593
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1605883059;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=U71QbxcHC/PxwgSRJQN8Bud11hm3wMIaHc5A2f2lxO8=;
  b=YLPmSJLCzYqFTSr0rtZXqGR8AlPtAd7KQG2KGvgaspTbsj0QiXdrWFeL
   RPe+zQ6SJG1FI2bo2D9NrNBA9BCbRSqxTZ4AUiqeykoYctuGN3+U/VNsA
   22xybQ4JYcxS/FTJ2IoZ0uvO4C00gsBOjDn9mnTthIkRjJXzCoytbGtCx
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 40k0l0PZ79VBL+FnsYx10JSj/+FmicCQhDs2LQQQXmm+SNqkHb68q/gHL3rFjj3kQLnYtZuP3p
 OmGZgzQu4ZMDXlQfyw7WnZaDpI6PCoainjT/iew7XaZ8W6VyF8fiq9AV7mQezYZtI4LbIKo+/p
 iXgjBTxxt0cHnmP057YBFkWVv2TQcZy0/rWoThCM6yhxUcBuHXpxRjJW3HGA5WZRhO0R6CrDaY
 /yBey8lzhDjOmkkiY0Dx6bU5fWPH9t76nOghFnb8xGYqbNgPeT0rFzSGOa8gELEoIMmgCD8Yi0
 B8g=
X-SBRS: None
X-MesageID: 31960192
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,356,1599537600"; 
   d="scan'208";a="31960192"
Subject: Re: [PATCH] amd-iommu: Fix Guest CR3 Table following c/s 3a7947b6901
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>, "Paul
 Durrant" <paul.durrant@citrix.com>
References: <20201120141951.13742-1-andrew.cooper3@citrix.com>
 <dc7a4158-2c8e-5aaf-cc68-ab7f15454c77@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3e82cbbc-c121-aa14-c15b-ca3489f5cf2e@citrix.com>
Date: Fri, 20 Nov 2020 14:37:32 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <dc7a4158-2c8e-5aaf-cc68-ab7f15454c77@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 20/11/2020 14:32, Jan Beulich wrote:
> On 20.11.2020 15:19, Andrew Cooper wrote:
>> "amd-iommu: use a bitfield for DTE" renamed iommu_dte_set_guest_cr3()'s gcr3
>> parameter to gcr3_mfn but ended up with an off-by-PAGE_SIZE error when
>> extracting bits from the address.
>>
>> First of all, get_guest_cr3_from_dte() and iommu_dte_set_guest_cr3()
>> are (almost) getters and setters for the same field, so should live together.
>>
>> Rename them to dte_{get,set}_gcr3_table() to specifically avoid 'guest_cr3' in
>> the name.  This field actually points to a table in memory containing an array
>> of guest CR3 values.  As these functions are used for different logical
>> indirections, they shouldn't use gfn/mfn terminology for their parameters.
>> Switch them to use straight uint64_t full addresses.
> All of this still looks to belong to "First of all ..." - did you
> mean to have more in here, but forgot to actually put it in?

No - I deleted the bit which has caused this to be blocked on minutiae
for nearly 2 years.

>
>> --- a/xen/drivers/passthrough/amd/iommu_guest.c
>> +++ b/xen/drivers/passthrough/amd/iommu_guest.c
>> @@ -68,11 +68,39 @@ static void guest_iommu_disable(struct guest_iommu *iommu)
>>      iommu->enabled = 0;
>>  }
>>  
>> -static uint64_t get_guest_cr3_from_dte(struct amd_iommu_dte *dte)
>> +/*
>> + * The Guest CR3 Table is a table written by the guest kernel, pointing at
>> + * gCR3 values for PASID transactions to use.  The Device Table Entry points
>> + * at a system physical address.
>> + *
>> + * However, these helpers deliberately use untyped parameters without
>> + * reference to gfn/mfn because they are used both for programming the real
>> + * IOMMU, and interpreting a guest's programming of its vIOMMU.
>> + */
>> +static uint64_t dte_get_gcr3_table(const struct amd_iommu_dte *dte)
>>  {
>>      return (((uint64_t)dte->gcr3_trp_51_31 << 31) |
>>              (dte->gcr3_trp_30_15 << 15) |
>> -            (dte->gcr3_trp_14_12 << 12)) >> PAGE_SHIFT;
>> +            (dte->gcr3_trp_14_12 << 12));
>> +}
>> +
>> +static void dte_set_gcr3_table(struct amd_iommu_dte *dte, uint16_t dom_id,
>> +                               uint64_t addr, bool gv, uint8_t glx)
>> +{
>> +#define GCR3_MASK(hi, lo) (((1ul << ((hi) + 1)) - 1) & ~((1ul << (lo)) - 1))
>> +
>> +    /* I bit must be set when gcr3 is enabled */
>> +    dte->i = true;
>> +
>> +    dte->gcr3_trp_14_12 = MASK_EXTR(addr, GCR3_MASK(14, 12));
>> +    dte->gcr3_trp_30_15 = MASK_EXTR(addr, GCR3_MASK(30, 15));
>> +    dte->gcr3_trp_51_31 = MASK_EXTR(addr, GCR3_MASK(51, 31));
>> +
>> +    dte->domain_id = dom_id;
>> +    dte->glx = glx;
>> +    dte->gv = gv;
>> +
>> +#undef GCR3_MASK
>>  }
> I realize the question is somewhat unrelated, but aren't we updating
> a live DTE here? If so, are there no ordering requirements between the
> writes? Might be worth putting in barrier(s) right on this occasion.

I don't know.  Honestly, it's not relevant either as this is code motion.

This entire file is full of security holes.  None of it is fit for use
in its current form.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 14:46:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 14:46:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32277.63319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7fr-0003wJ-SE; Fri, 20 Nov 2020 14:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32277.63319; Fri, 20 Nov 2020 14:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg7fr-0003wC-P2; Fri, 20 Nov 2020 14:46:07 +0000
Received: by outflank-mailman (input) for mailman id 32277;
 Fri, 20 Nov 2020 14:46:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg7fq-0003w6-Jr
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 14:46:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb82e3a6-df55-48e7-8723-65ee043617ef;
 Fri, 20 Nov 2020 14:46:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1124AAA4F;
 Fri, 20 Nov 2020 14:46:01 +0000 (UTC)
X-Inumbo-ID: bb82e3a6-df55-48e7-8723-65ee043617ef
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605883561; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WEbxLpXtv266+hEkZUKOi5uY5gvzppRnL/CqrLHGRVE=;
	b=k1KLS/GgT86ffBf8FI/5wM0wzgLUevCJ+HBQpoK5l2Y4/rGY8YuBazQQ1ZbPabmsXDv7MN
	u5W5ySbGp0/RHfUEMT0WabACxgce/6Ql3jUi2jETcINQ3QKldFDappFweRkpXkmdvTWrGW
	E/BjwMHEX8KdkTktEGrnSs46zh8czns=
Subject: Re: [PATCH] amd-iommu: Fix Guest CR3 Table following c/s 3a7947b6901
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul.durrant@citrix.com>
References: <20201120141951.13742-1-andrew.cooper3@citrix.com>
 <dc7a4158-2c8e-5aaf-cc68-ab7f15454c77@suse.com>
 <3e82cbbc-c121-aa14-c15b-ca3489f5cf2e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <042d1387-99ac-c7ca-04f9-ba726361d5b7@suse.com>
Date: Fri, 20 Nov 2020 15:46:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <3e82cbbc-c121-aa14-c15b-ca3489f5cf2e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.11.2020 15:37, Andrew Cooper wrote:
> On 20/11/2020 14:32, Jan Beulich wrote:
>> On 20.11.2020 15:19, Andrew Cooper wrote:
>>> --- a/xen/drivers/passthrough/amd/iommu_guest.c
>>> +++ b/xen/drivers/passthrough/amd/iommu_guest.c
>>> @@ -68,11 +68,39 @@ static void guest_iommu_disable(struct guest_iommu *iommu)
>>>      iommu->enabled = 0;
>>>  }
>>>  
>>> -static uint64_t get_guest_cr3_from_dte(struct amd_iommu_dte *dte)
>>> +/*
>>> + * The Guest CR3 Table is a table written by the guest kernel, pointing at
>>> + * gCR3 values for PASID transactions to use.  The Device Table Entry points
>>> + * at a system physical address.
>>> + *
>>> + * However, these helpers deliberately use untyped parameters without
>>> + * reference to gfn/mfn because they are used both for programming the real
>>> + * IOMMU, and interpreting a guest's programming of its vIOMMU.
>>> + */
>>> +static uint64_t dte_get_gcr3_table(const struct amd_iommu_dte *dte)
>>>  {
>>>      return (((uint64_t)dte->gcr3_trp_51_31 << 31) |
>>>              (dte->gcr3_trp_30_15 << 15) |
>>> -            (dte->gcr3_trp_14_12 << 12)) >> PAGE_SHIFT;
>>> +            (dte->gcr3_trp_14_12 << 12));
>>> +}
>>> +
>>> +static void dte_set_gcr3_table(struct amd_iommu_dte *dte, uint16_t dom_id,
>>> +                               uint64_t addr, bool gv, uint8_t glx)
>>> +{
>>> +#define GCR3_MASK(hi, lo) (((1ul << ((hi) + 1)) - 1) & ~((1ul << (lo)) - 1))
>>> +
>>> +    /* I bit must be set when gcr3 is enabled */
>>> +    dte->i = true;
>>> +
>>> +    dte->gcr3_trp_14_12 = MASK_EXTR(addr, GCR3_MASK(14, 12));
>>> +    dte->gcr3_trp_30_15 = MASK_EXTR(addr, GCR3_MASK(30, 15));
>>> +    dte->gcr3_trp_51_31 = MASK_EXTR(addr, GCR3_MASK(51, 31));
>>> +
>>> +    dte->domain_id = dom_id;
>>> +    dte->glx = glx;
>>> +    dte->gv = gv;
>>> +
>>> +#undef GCR3_MASK
>>>  }
>> I realize the question is somewhat unrelated, but aren't we updating
>> a live DTE here? If so, are there no ordering requirements between the
>> writes? Might be worth putting in barrier(s) right on this occasion.
> 
> I don't know.  Honestly, it's not relevant either as this is code motion.

Well, okay:
Acked-by: Jan Beulich <jbeulich@suse.com>

> This entire file is full of security holes.  None of it is fit for use
> in its current form.

We're all aware of this, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:09:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:09:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32299.63334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg823-0006Je-VX; Fri, 20 Nov 2020 15:09:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32299.63334; Fri, 20 Nov 2020 15:09:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg823-0006JX-SR; Fri, 20 Nov 2020 15:09:03 +0000
Received: by outflank-mailman (input) for mailman id 32299;
 Fri, 20 Nov 2020 15:09:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg823-0006JS-6F
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:09:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a84c3203-3054-4c88-92bb-461f5f406404;
 Fri, 20 Nov 2020 15:09:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85138AB3D;
 Fri, 20 Nov 2020 15:09:01 +0000 (UTC)
X-Inumbo-ID: a84c3203-3054-4c88-92bb-461f5f406404
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605884941; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MKDHEUQBokxra7DPVEaE5cAh4CiadQJVXPCoj2eiT8s=;
	b=hnyHRe97c7xWMUlkBgZzWRl1UUpSI5OZdY/p0K0IKYWhc7Q7hyN0f1s6T8Zz1LSAGhX58s
	2MDOXMQOxuRaaLAKf5SeZzpJUWOvQTn3ZARHqyWwSWNQiw/Mc5ujSUNcRPCcl9n1TvSktB
	n4N+MRDpBjy5a7ceC0lXaOGq6l3q3Ho=
Subject: Re: [PATCH v2 05/12] viridian: use hypercall_vpmask in hvcall_ipi()
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-6-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a7e118c7-2b72-98f8-19a1-82667c47f44f@suse.com>
Date: Fri, 20 Nov 2020 16:09:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120094900.1489-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:48, Paul Durrant wrote:
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -551,6 +551,25 @@ static bool vpmask_test(const struct hypercall_vpmask *vpmask,
>      return test_bit(vp, vpmask->mask);
>  }
>  
> +static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)

Now this and ...

> +{
> +    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
> +}
> +
> +static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)

... this should really have pointers to const as parameters.

> @@ -631,13 +650,21 @@ static int hvcall_flush(union hypercall_input *input,
>      return 0;
>  }
>  
> +static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)

And I guess this one should, too.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:11:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32310.63349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg84l-0007Fv-Fv; Fri, 20 Nov 2020 15:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32310.63349; Fri, 20 Nov 2020 15:11:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg84l-0007Fo-Cd; Fri, 20 Nov 2020 15:11:51 +0000
Received: by outflank-mailman (input) for mailman id 32310;
 Fri, 20 Nov 2020 15:11:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg84k-0007Fj-D3
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:11:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f905bef7-efb1-4a46-a069-504a61eb0382;
 Fri, 20 Nov 2020 15:11:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 150D4AB3D;
 Fri, 20 Nov 2020 15:11:48 +0000 (UTC)
X-Inumbo-ID: f905bef7-efb1-4a46-a069-504a61eb0382
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605885108; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Z2q+0eVq72VvrhyYPm4CfCBl5ZsNx290iGgTWnTDA4E=;
	b=YjNq9S2eFzS0QsvuAcerqm01Qt/WrAqIjJaChJGKyz/ZSqckA30lQt6lWvruaxlvfGnq/Z
	nEk6ZjL+AulXVzrzpT/CCtQMsKdxcLRF80f24+TiX/4n/Er714+4bp0AZd/xLvVQNsvv44
	0ei7TmDxRwxr7VGLaR5YOZg73ZYxxI0=
Subject: Re: [PATCH v2 06/12] viridian: use softirq batching in hvcall_ipi()
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-7-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8dbcfe50-58a3-a519-03cb-1e7f11af567e@suse.com>
Date: Fri, 20 Nov 2020 16:11:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120094900.1489-7-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:48, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
> sending IPIs to a large number of processors. This patch modifies send_ipi()
> (the worker function called by hvcall_ipi()) to also make use of the
> mechanism when there are multiple bits set in the hypercall_vpmask.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:12:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:12:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32315.63360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg85T-0007M3-Pa; Fri, 20 Nov 2020 15:12:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32315.63360; Fri, 20 Nov 2020 15:12:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg85T-0007Lw-Me; Fri, 20 Nov 2020 15:12:35 +0000
Received: by outflank-mailman (input) for mailman id 32315;
 Fri, 20 Nov 2020 15:12:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WHVp=E2=amazon.co.uk=prvs=5864bad74=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kg85S-0007Lo-Al
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:12:34 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2fe4309-0a17-4d83-96c0-2113bf821642;
 Fri, 20 Nov 2020 15:12:33 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1e-a70de69e.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 20 Nov 2020 15:12:27 +0000
Received: from EX13D32EUC003.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1e-a70de69e.us-east-1.amazon.com (Postfix) with ESMTPS
 id 33532A20E0; Fri, 20 Nov 2020 15:12:26 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 20 Nov 2020 15:12:25 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 20 Nov 2020 15:12:25 +0000
X-Inumbo-ID: d2fe4309-0a17-4d83-96c0-2113bf821642
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1605885153; x=1637421153;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=NGvsQnBYaANVCJh02E68OqZwYy2JRHKJhBQQw90P4l8=;
  b=pknAlqtwYuSSsQlMPk3jtKShJzARMXzE/VIJGsNSwUAF02gtciC66NeK
   5290r6L9IvZ/KhrDrCoQyLfTGCwYfdE7wEbtLnxCpDfVDuAUmuuiPnLMN
   3qcM1QXZxaS8TQyqyJpijlOKppS0FKxELBBEi2NKxVxJfwB9zp2xuMuBc
   Q=;
X-IronPort-AV: E=Sophos;i="5.78,356,1599523200"; 
   d="scan'208";a="66254629"
Subject: RE: [PATCH v2 05/12] viridian: use hypercall_vpmask in hvcall_ipi()
Thread-Topic: [PATCH v2 05/12] viridian: use hypercall_vpmask in hvcall_ipi()
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQHWv08fwALhu38GGk2wLQdBAEFKPKnRIGSQ
Date: Fri, 20 Nov 2020 15:12:25 +0000
Message-ID: <90afbf7ee1214d79ad506a9e66f05c92@EX13D32EUC003.ant.amazon.com>
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-6-paul@xen.org>
 <a7e118c7-2b72-98f8-19a1-82667c47f44f@suse.com>
In-Reply-To: <a7e118c7-2b72-98f8-19a1-82667c47f44f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.242]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 20 November 2020 15:09
> To: Paul Durrant <paul@xen.org>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH v2 05/12] viridian: use hypercall_vpmask in hvcall_ipi()
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 20.11.2020 10:48, Paul Durrant wrote:
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -551,6 +551,25 @@ static bool vpmask_test(const struct hypercall_vpmask *vpmask,
> >      return test_bit(vp, vpmask->mask);
> >  }
> >
> > +static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
> 
> Now this and ...
> 
> > +{
> > +    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
> > +}
> > +
> > +static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)
> 
> ... this should really have pointers to const as parameters.
> 
> > @@ -631,13 +650,21 @@ static int hvcall_flush(union hypercall_input *input,
> >      return 0;
> >  }
> >
> > +static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
> 
> And I guess this one should, too.
> 

True, they can be const.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:15:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:15:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32331.63373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg88d-0007bz-9H; Fri, 20 Nov 2020 15:15:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32331.63373; Fri, 20 Nov 2020 15:15:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg88d-0007bs-69; Fri, 20 Nov 2020 15:15:51 +0000
Received: by outflank-mailman (input) for mailman id 32331;
 Fri, 20 Nov 2020 15:15:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg88c-0007bn-GC
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:15:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37a59104-6263-4f0d-8205-db936b0564f0;
 Fri, 20 Nov 2020 15:15:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E04B7AA4F;
 Fri, 20 Nov 2020 15:15:48 +0000 (UTC)
X-Inumbo-ID: 37a59104-6263-4f0d-8205-db936b0564f0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605885349; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AiHwbjLr5XvuI1n19K/8MnCFK+bMLxPzoLHD9sDQuWw=;
	b=W3kDFSgwxeK0VousTZKG40JMln/OPUZV5xjWiOG1vSl1NhA6US1Mn2XiHVvccvMyHzlOZC
	AgWWxEVRX8JOSz5DJsgSzutJf5CdvJ9VL5lbaECifE0OcLd3DaqbQ1cPiom5hkaXKYHonv
	x85kjffZyE7opFC/lVBNdQiAbx/9MwE=
Subject: Re: [PATCH v2 07/12] xen/include: import sizeof_field() macro from
 Linux stddef.h
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-8-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8ea259ed-0ebb-f0fb-9be1-cd0271a25bd4@suse.com>
Date: Fri, 20 Nov 2020 16:15:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120094900.1489-8-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:48, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Co-locate it with the definition of offsetof() (since this is also in stddef.h
> in the Linux kernel source). This macro will be needed in a subsequent patch.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

You don't fancy replacing in the tree what is now effectively
open-coding this construct, I guess? I'll try to remember to
do so once this has gone in...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:24:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:24:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32360.63385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8Gg-0000Cm-50; Fri, 20 Nov 2020 15:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32360.63385; Fri, 20 Nov 2020 15:24:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8Gg-0000Cf-1Q; Fri, 20 Nov 2020 15:24:10 +0000
Received: by outflank-mailman (input) for mailman id 32360;
 Fri, 20 Nov 2020 15:24:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=50fm=E2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kg8Ge-0000Ca-FF
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:24:08 +0000
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5cdac81-4685-430d-93f7-b3a546f37dce;
 Fri, 20 Nov 2020 15:24:07 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id 23so10391566wrc.8
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 07:24:07 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id h15sm5484176wrw.15.2020.11.20.07.24.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 20 Nov 2020 07:24:06 -0800 (PST)
X-Inumbo-ID: e5cdac81-4685-430d-93f7-b3a546f37dce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=XAQmQ3Q7smbXJ1DTvhlKedyyiFZtmtXTAwVefAdZfSE=;
        b=dzFzOzxqKI6twcvA7aLBc+OkJ1SgayREYuGqMV5/Ygn0f8omdmzMtR4IltXySM2lYu
         Lqwc057CAjGV074bv1dylaH+rbc3XkyQnvE4RnQFH5M5s2UiraHLWNARkSqD68XF5tV9
         4SljL2d3GofHNEnyQnNuUMF1DrhE+bnAaE/6a4MVBCY6zSzjVWPHcbiT0ESvsyXtrUOs
         h4Ed0FA8Ws0RJ/r73K86NJ+ngSmDnwS8oM3tKDMrOEO8ue0jYYJigcXZXcpKd1VHFWVf
         8bsCYqTxhgJHctD82Gwy0yK1U0z9rrd1GCckudgcBpPPQ3SEtj9O7mbnaieyF6Wxm4Xj
         w1fw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=XAQmQ3Q7smbXJ1DTvhlKedyyiFZtmtXTAwVefAdZfSE=;
        b=rXQneD3qgn1iIxC2orj+hlkLh1XQf/2oZ44aDoOkMrq1mL8nelfh1oBtOkv2P0lAVW
         lUoxwN0cv6devYWI8Xvcj0PQxRu/YV06ncKFsOr4v5oBnpQumRGWxZrGGPK5Q+oeNsHq
         8MQ2Zms+K40PHStlCPYhHa1rb3ltf+U/QQ9C78rYSksQGKDfJEGwJtFCElyAF9dSeqFs
         hTzBtKeWkx32mixEnCUwx8Onhq1hEVM+ddHgmSz9N9rQ5Dutr8i+Gtlop9wBKRqO9woZ
         59fA3pRblMfT5xSO0qgUOIVxua3WaMgouTY6wXhSf0GpGrg8vSgND1r2J50fbzO7mENP
         Mzzg==
X-Gm-Message-State: AOAM531/99Fq3EP1y3cjYY8CcDUIh4tUt2BH5DxTIlpFXvdRBwLrdJm8
	7OnNSL3+TQT682Je8RqaduI=
X-Google-Smtp-Source: ABdhPJyojglFDa60Ra+LaMqcK5ghmkUxPz4JsGD4kAZt8AwwEF6KWynm9KOwj8SYIFwMlM4Bwj4QRQ==
X-Received: by 2002:adf:f3c4:: with SMTP id g4mr16988826wrp.399.1605885846829;
        Fri, 20 Nov 2020 07:24:06 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20201120094900.1489-1-paul@xen.org> <20201120094900.1489-8-paul@xen.org> <8ea259ed-0ebb-f0fb-9be1-cd0271a25bd4@suse.com>
In-Reply-To: <8ea259ed-0ebb-f0fb-9be1-cd0271a25bd4@suse.com>
Subject: RE: [PATCH v2 07/12] xen/include: import sizeof_field() macro from Linux stddef.h
Date: Fri, 20 Nov 2020 15:24:05 -0000
Message-ID: <010201d6bf51$2f6bf9f0$8e43edd0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH2MKAKjLdIZTWPg9fOgT90SKBicQJG9BIUAuuzCmapaKsU8A==

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 20 November 2020 15:16
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v2 07/12] xen/include: import sizeof_field() macro from Linux stddef.h
> 
> On 20.11.2020 10:48, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Co-locate it with the definition of offsetof() (since this is also in stddef.h
> > in the Linux kernel source). This macro will be needed in a subsequent patch.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> You don't fancy replacing in the tree what is now effectively
> open-coding this construct, I guess? I'll try to remember to
> do so once this has gone in...
> 

I'll see what I can find after this series is in, unless you get there first :-)

  Paul



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:33:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:33:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32387.63397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8PE-0001JZ-0f; Fri, 20 Nov 2020 15:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32387.63397; Fri, 20 Nov 2020 15:32:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8PD-0001JS-Tr; Fri, 20 Nov 2020 15:32:59 +0000
Received: by outflank-mailman (input) for mailman id 32387;
 Fri, 20 Nov 2020 15:32:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg8PD-0001JN-0K
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:32:59 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 209cd0d0-1788-45b3-a500-a7bd4ee488a5;
 Fri, 20 Nov 2020 15:32:56 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id ED47B67373; Fri, 20 Nov 2020 16:32:53 +0100 (CET)
X-Inumbo-ID: 209cd0d0-1788-45b3-a500-a7bd4ee488a5
Date: Fri, 20 Nov 2020 16:32:53 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201120153253.GA18990@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-15-hch@lst.de> <20201119120525.GW1981@quack2.suse.cz> <20201120090820.GD21715@lst.de> <20201120112121.GB15537@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120112121.GB15537@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Nov 20, 2020 at 12:21:21PM +0100, Jan Kara wrote:
> > > AFAICT bd_size_lock is pointless after these changes so we can just remove
> > > it?
> > 
> > I don't think it is, as requiring bd_mutex for size updates leads to
> > rather awkward lock ordering problems.
> 
> OK, let me ask differently: What is bd_size_lock protecting now? Ah, I see,
> on 32-bit it is needed to prevent torn writes to i_size, right?

Exactly.  In theory we could skip it for 64-bit, but as updating the
size isn't a fast path, and struct block_device isn't super size-critical,
I'd rather keep the same code for 32-bit and 64-bit builds.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:33:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:33:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32388.63408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8PY-0001Pw-9m; Fri, 20 Nov 2020 15:33:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32388.63408; Fri, 20 Nov 2020 15:33:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8PY-0001Pp-6b; Fri, 20 Nov 2020 15:33:20 +0000
Received: by outflank-mailman (input) for mailman id 32388;
 Fri, 20 Nov 2020 15:33:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wRTa=E2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kg8PW-0001Pd-TM
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:33:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 44de94ec-779a-4d96-b299-cc50a3a745cd;
 Fri, 20 Nov 2020 15:33:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3E8AEAB3D;
 Fri, 20 Nov 2020 15:33:16 +0000 (UTC)
X-Inumbo-ID: 44de94ec-779a-4d96-b299-cc50a3a745cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605886396; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lvPtaEeeOLyizn57MRVVpkF+JbUFnbzP+zwdZtq3RtI=;
	b=b0LPOBtVuFKD1b+VNksejHAk6wXCWf/UxcOKYNZIpCfHx+V7A5WkoYTG9DucUzhrgQlely
	W3wQI0AgWy6ds8vEuNif5VDRS3oPoIABom3rFBQO/yJ5WsJqxnbtHd/f3Yxl2mn8IEJpzc
	Zvzw2p6R+wBl2l3AiCpZA/Jgov0oljY=
Subject: Re: [PATCH v2] tools/libs/ctrl: fix dumping of ballooned guest
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20201111100143.13820-1-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3720cde6-d138-a081-37be-b68103b8aa6f@suse.com>
Date: Fri, 20 Nov 2020 16:33:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201111100143.13820-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="EEkw1WHGfD0V2ByWa7hz60iMBLgApt5UG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--EEkw1WHGfD0V2ByWa7hz60iMBLgApt5UG
Content-Type: multipart/mixed; boundary="u8uuTPPLE78bmchdJ4j9gNPj9IdgGmhPT";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <3720cde6-d138-a081-37be-b68103b8aa6f@suse.com>
Subject: Re: [PATCH v2] tools/libs/ctrl: fix dumping of ballooned guest
References: <20201111100143.13820-1-jgross@suse.com>
In-Reply-To: <20201111100143.13820-1-jgross@suse.com>

--u8uuTPPLE78bmchdJ4j9gNPj9IdgGmhPT
Content-Type: multipart/mixed;
 boundary="------------C2110B62A65CC1945C7670A3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C2110B62A65CC1945C7670A3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.11.20 11:01, Juergen Gross wrote:
> A guest with memory < maxmem often can't be dumped via xl dump-core
> without an error message today:
> 
> xc: info: exceeded nr_pages (262144) losing pages
> 
> In case the last page of the guest isn't allocated the loop in
> xc_domain_dumpcore_via_callback() will always spit out this message,
> as the number of already dumped pages is tested before the next page
> is checked to be valid.
> 
> The guest's p2m_size might be lower than expected, so this should be
> tested in order to avoid reading past the end of it.
> 
> The guest might use high bits in p2m entries to flag special cases like
> foreign mappings. Entries with an MFN larger than the highest MFN of
> the host should be skipped.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

This is a real bug fix.

Can any maintainer please have a look?


Juergen

> ---
>   tools/libs/ctrl/xc_core.c | 42 +++++++++++++++++++++++++++++----------
>   1 file changed, 31 insertions(+), 11 deletions(-)
> 
> diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
> index e8c6fb96f9..b47ab2f6d8 100644
> --- a/tools/libs/ctrl/xc_core.c
> +++ b/tools/libs/ctrl/xc_core.c
> @@ -439,6 +439,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>       unsigned long i;
>       unsigned long j;
>       unsigned long nr_pages;
> +    unsigned long max_mfn;
>   
>       xc_core_memory_map_t *memory_map = NULL;
>       unsigned int nr_memory_map;
> @@ -577,6 +578,10 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>                                      &p2m, &dinfo->p2m_size);
>           if ( sts != 0 )
>               goto out;
> +
> +        sts = xc_maximum_ram_page(xch, &max_mfn);
> +        if ( sts != 0 )
> +            goto out;
>       }
>       else
>       {
> @@ -818,19 +823,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>           {
>               uint64_t gmfn;
>               void *vaddr;
> -
> -            if ( j >= nr_pages )
> -            {
> -                /*
> -                 * When live dump-mode (-L option) is specified,
> -                 * guest domain may increase memory.
> -                 */
> -                IPRINTF("exceeded nr_pages (%ld) losing pages", nr_pages);
> -                goto copy_done;
> -            }
>   
>               if ( !auto_translated_physmap )
>               {
> +                if ( i >= dinfo->p2m_size )
> +                    break;
> +
>                   if ( dinfo->guest_width >= sizeof(unsigned long) )
>                   {
>                       if ( dinfo->guest_width == sizeof(unsigned long) )
> @@ -846,6 +844,14 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>                       if ( gmfn == (uint32_t)INVALID_PFN )
>                          continue;
>                   }
> +                if ( gmfn > max_mfn )
> +                    continue;
> +
> +                if ( j >= nr_pages )
> +                {
> +                    j++;
> +                    continue;
> +                }
>   
>                   p2m_array[j].pfn = i;
>                   p2m_array[j].gmfn = gmfn;
> @@ -855,6 +861,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>                   if ( !xc_core_arch_gpfn_may_present(&arch_ctxt, i) )
>                       continue;
>   
> +                if ( j >= nr_pages )
> +                {
> +                    j++;
> +                    continue;
> +                }
> +
>                   gmfn = i;
>                   pfn_array[j] = i;
>               }
> @@ -879,7 +891,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
>           }
>       }
>   
> -copy_done:
> +    if ( j > nr_pages )
> +    {
> +        /*
> +         * When live dump-mode (-L option) is specified,
> +         * guest domain may increase memory.
> +         */
> +        IPRINTF("exceeded nr_pages (%ld) losing %ld pages", nr_pages, j - nr_pages);
> +    }
> +
>       sts = dump_rtn(xch, args, dump_mem_start, dump_mem - dump_mem_start);
>       if ( sts != 0 )
>           goto out;
> 


--------------C2110B62A65CC1945C7670A3
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C2110B62A65CC1945C7670A3--

--u8uuTPPLE78bmchdJ4j9gNPj9IdgGmhPT--

--EEkw1WHGfD0V2ByWa7hz60iMBLgApt5UG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+34bsFAwAAAAAACgkQsN6d1ii/Ey9F
2Qf/QfQOKXYPzLSlgvvgEHcOKgTHwW9bpT9CC7UKI8aUcmGKkcu/zP7hG2EmTjOHjfF8cKtb7jU6
X3w5TRxAbNZ/iwaKB54fYdn+T+g+ubcleP7c4B7S4eXrPzTM6+co7xw2WXF6zHK1RA0HD09+PtIw
CWFxCNGjFFMkSACrlJpUFE8RJkmNTG76kYFMRgXwcwu6Ab7PEPFazFIIapDrbki6k67iOIG40uvq
Nmci58CFtOoeyphgxvDEsQB0cNXvB0dUwWVkms/vrRxjSDEaB3jRhehUFexQPUihmx6BVCIZ+TI+
Zg58igktJqAPDHmMSU+dARM72P/gRdHnfmovly12Dg==
=Rl4r
-----END PGP SIGNATURE-----

--EEkw1WHGfD0V2ByWa7hz60iMBLgApt5UG--


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 15:47:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 15:47:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32435.63445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8co-0003Ip-4S; Fri, 20 Nov 2020 15:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32435.63445; Fri, 20 Nov 2020 15:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8co-0003Ii-0l; Fri, 20 Nov 2020 15:47:02 +0000
Received: by outflank-mailman (input) for mailman id 32435;
 Fri, 20 Nov 2020 15:47:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aCua=E2=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1kg8cm-0003IW-Iy
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 15:47:00 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2f5d191-a5a6-4eeb-baf7-0eca5618a972;
 Fri, 20 Nov 2020 15:46:57 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 20 Nov 2020 07:46:56 -0800
Received: from lkp-server01.sh.intel.com (HELO 00bc34107a07) ([10.239.97.150])
 by fmsmga001.fm.intel.com with ESMTP; 20 Nov 2020 07:46:53 -0800
Received: from kbuild by 00bc34107a07 with local (Exim 4.92)
 (envelope-from <lkp@intel.com>)
 id 1kg8ce-00000z-Co; Fri, 20 Nov 2020 15:46:52 +0000
X-Inumbo-ID: a2f5d191-a5a6-4eeb-baf7-0eca5618a972
IronPort-SDR: 0cH++OHaItLGjUI5E18A6MTPJ+8sajX0AZNtfkwd61W63uJ/T50GFJKWi6p02+JVHHvn31cth+
 Mp63r7jtNsig==
X-IronPort-AV: E=McAfee;i="6000,8403,9810"; a="151338137"
X-IronPort-AV: E=Sophos;i="5.78,357,1599548400"; 
   d="gz'50?scan'50,208,50";a="151338137"
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
IronPort-SDR: VQAc9lxCSRXV5x0tADO+kuuVUyNkXhvx0dX05aw+4lMm7sRJtVnbczdQV4FShxIuKnCK7h7e4y
 3VQ6Bm750+oQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,357,1599548400"; 
   d="gz'50?scan'50,208,50";a="431597175"
Date: Fri, 20 Nov 2020 23:46:41 +0800
From: kernel test robot <lkp@intel.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	x86@kernel.org, virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: kbuild-all@lists.01.org, peterz@infradead.org, luto@kernel.org,
	Juergen Gross <jgross@suse.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v2 11/12] x86/paravirt: switch functions with custom code
 to ALTERNATIVE
Message-ID: <202011202314.Q6WA9Qs0-lkp@intel.com>
References: <20201120114630.13552-12-jgross@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="UlVJffcvxoiEqYs2"
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-12-jgross@suse.com>
User-Agent: Mutt/1.10.1 (2018-07-13)


--UlVJffcvxoiEqYs2
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Juergen,

I love your patch! Perhaps something to improve:

[auto build test WARNING on v5.10-rc4]
[also build test WARNING on next-20201120]
[cannot apply to tip/x86/core xen-tip/linux-next tip/x86/asm]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934
base:    09162bc32c880a791c6c0668ce0745cf7958f576
config: x86_64-rhel-7.6-kselftests (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/fd8d46a7a2c51313ee14c43af60ff337279384ef
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934
        git checkout fd8d46a7a2c51313ee14c43af60ff337279384ef
        # save the attached .config to the linux build tree
        make W=1 ARCH=x86_64

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/misc/lkdtm/bugs.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--UlVJffcvxoiEqYs2
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

[.config.gz attachment: base64-encoded data omitted]
ImR9FxtkkXAowhA4FFkeeREjcwyBtlAo2hrGfvJii9a7RT41h6KtbVdhhEiIArHEdrtAIF2s
k3gTrZH+AGIZIt0v26SHNN0FZW2Fyhhl0vL9hhlV6RQbXKziKH6pnd95QLP1+P6PNLXIkjTT
CaF222qTVZs2bSOdAqOCb7jGkmQYFPg4d5ByaO95rp4Ozj7Z72ufp5WiKll9anpas2uETbQK
PaHbNJp4sZ6fWtrUbLX0KL9GIpav4yCauxbkRbharJFbiDjWxJbEjpcoNvUi+Amx9HBBfhRc
6TknChe/wNc5ked6bzLd+Epvo+USuxuBmmIdo5NQ1Hx65iWNusv4aTg/hrZmy8XyyinHiVbR
eoM5zg8kpyTdLhbIEAAR4teFLq2zYFbG+JCvA6xSdmwDhJFxMH52cUSEW9JqFMncCa2sIJHb
Q5FxAQDhkVmRgO4W6w5HhcFijjlyijUo/5AxFixZbooZDHaGSNwu2iId5feP1brrnNQtBh47
BQQiWqMT3rbs2s7gVy4uwlyTFoIwTmMzgqZDxDZxiG4SgdrMfVfCJzrGboW0JOECkcgA3uEX
mZJE11hrm2zmtD7tsUgwoa4t6mCB3lcEZl7kEiRzE8gJlthSA7hHFizqVTC3fs+UgJcAfiPj
yHW8JgiiheD2GDwOcb3TJY42mwg1G9Qo4iB1KwXE1osIfQhEFBNw9ICXGNDzeGxONMKcHxIt
InpI1LpEdAocxTfmEdFDSEx23GO96uApw9F14nbX4z4Bh4xBo2Tj2ttFoCvhhGhIDPsTBYIQ
2bnlJOjQsJa0lHmCbQxEWZE1fBzgXq881UCRQ+77gr1b2MSWMngAXxoqoj1CdlE9/OqAV45T
/aE6Q77Dur9QESPU6bFOuAc9lvDznh2kXgTiK0CI7QQzVhkKmHW7nbU7iaDBTFX8haOnblge
bfsmuxsoZweVFScZfMFZXfTb2+MXSO/w8hWLbiBTfoovmeREZxpcAOrrW3hWK+pxYX01y7Eq
6dOWc+CK7V0rfYMEGcW0+jlptFx0s90EArcfYnsMs9DY4ZOg0Bprerh0NFUyli4KEQ+klttH
vc7Ods8ea50c8a81ht7AvgX+0unv9Og3+tOGDJ5/0yvwgCirC7mvTti77kgj3WeF+1iflbA9
U6QJiBwtnBZ5bdN+H9GD+ZL4tpeHt4+fPz3/fVO/PL49fX18/vF2c3jmg/72bL77j8XrJlN1
w8ZwFstYoS/WO6v2LeJYe0lJC6H49NWhUosOxOj2+kBpA0FuZomUEfk8UXqZx4P2KOqudIck
dyfaZDASHJ+eVTxni2LA57QAjy01FRp0EywCe4KyXdLz+97SU5lQxceZWRfj4sli0bd6AhfG
69nTtk5C/ctMzZyaaqbPdLfhFRqNgKqbGcqPC9lzDuqpYB0tFhnbiTom97gMJG+zWt5riwgg
Y/Lz2vQEBvV2EO7tOuKNCTnWyHo81pymLwfPdBnSZhIXEsih5P3KQjkURJ7hlufeit+8XsiR
4ou3Pq08NYnUw8rMzF4bgIs2u40cLX403RVwhOB1g5hqTNMgUTnQeLNxgVsHWJDk+MHpJV95
Wc0vWBG6rwzeXWTULl7S7SLyT11Jk80iiL34AuIzh4FnBjoZHPTd19H06/c/H14fP008Lnl4
+aSxNghvlWCsrZW5WQYbpCvVcAqsGgbBuSvG6M6IyKK7XwEJ4ydmYeChX5DBDy89YE0gJGOb
KTOgTaj02YcKRfQYvKhJhOLMYCS7pCBIXQCeRi6IZIcT6qEe8fpOnhBcDEIWgcBPfbZqHDoM
qcOSovRg5XDMJnE3DeHt8tePbx8h9ZebAHpYtvvUkSMABk/uHsPDuhBCS73yJYoS5UkbxpuF
3xcOiES4/oXHGkkQpNvVJiguuGeNaKerw4U/4C6QFOAcj7ufiaGkBDa+tzigV6H3fVAjmeuE
IMG1MAPa8yg8onH1g0L7ApkKdF76qy6SgEsi3ez4BprZWa7DtSeOPGQSrgmjCT4CQPOaHVdN
rXLJtO9OpLlFfWoVaV4nysJcAzDT5Hy6p4iPnxxbEL8xh6ipYTMilwm3vAEspMUhJmxdJP3O
E/JfUN2xtccUGtDvSfmBc4nKl+cSaG75VW5mTuO4LmKPOfaE9y9ZgV97AobJfdcFy5Unl4Ii
2GzWW/+6FgSxJ2WyIoi3nkjSIz70j0Hgt1fKb3GbdoFv15EnsdKAnqs9K/dhsCvwTZV9EMEp
MAscKGyZF2sYfqnyZF3lyDrZrzgrwaf0lOyC5eIK00btwHV8u1p46hfoZNWuYj+eZcl8+4wu
N+vuCk0exjZD0dHFSlfEjiDneBWY2/uYr2Ocg5Jdt7o2YfySnHjMbgDdgsNqFK06iIvuyxcC
hHkdbWf2AtjRetwvVDN5MbMuSF540iBDJPFg4TFnlWHGfak75mKQi04Jghj3TZgIPGayw7D4
wGfOb1FFvL5CsPUMQSOYP+BHormDlBNxlht50kBc8uUimllMnGC9WF5ZbZBEdxPN0+RFtJrZ
ovIu5+M74IxlsxzS0A9VSWYnaKCZm59LES9njiSOjoJ5MUSRXGkkWi2u1bLd4i/z0xFeBIve
Yd561B+f8D1V1mQH0OSirh5NMuhYJ4BMDDkIN7TRQjk1yRAPXk822fRlNiI0xUUDfNgDX6Pw
92e8HlaV9ziClPcVjjmSpkYxRZJBoHINN8ltTd8VYynsYt/0VNqZY2WbpChmCovZO9MkY8aM
TiHwjW5mpfmbFqYT7tCVhmAJpOU4zVgnvECb9Qk1p0MG9TVATqQ1GFuWNkRPYAtz3DYZKT7o
64VDlZegasjo76Fq6vx0sPL66gQnUhKjthayAOtd5jM2RDqwqp/JcARYT2oVXl+3q7o+PeNi
LPShwt2DRNLqPuGLX6nyMHYmaAZV31e7sELwrwDxS2bK79LmLOJ2sSzPknbQhxePn54ehr3/
9vO7HmFbdY8UEFHWUTZKLJ/uvOJc/+wjSOmBtiSfoWgIePV5kCxF9JwSNbjx+vDC+WrCae60
zpC1qfj4/IJk0z3TNAM+ocWOU7NTCZP9XA/dmJ5308uT0ahRuWj0/PTp8XmZP3378Z8hAbrd
6nmZa1YcE0xFyRsXhIaBz53xz+05YCQlSc+uVsai2dMu43cAWlYNRPQ7oNbnkrQ9lTqjFMDd
aQ8vSAg0Lfi3PSCIc0HyvEr0ucPmyPhiY2wfZwbtjwTfxl0LSA2i/vTp76e3hy837VmreXo/
4Z+5KNC7D6BKPSqnoCUdn3NSt3D6xTpGxUWR82wEOBHYDML38SsHPKZy3sVv7LnvSYeTn/IM
+6xqwMiQdD5ga95a0OT2WSZ0rNbSh6RP0/aSj2GPf358+OrG6gdSuUqSnDDNsMFCWJmWNaID
k1EDNVCxWi9CE8Ta82LddSbwkMe6QetYW7/LyjsMzgGZXYdE1JQYpicTKm0TZt0oHZqsrQqG
1QvhRWuKNvk+gzfB9ygqh0RWuyTFe3TLK02wE0UjqUpqz6rEFKRBe1o0W/DCQsuUl3iBjqE6
r3Q7fAOhWy1biB4tU5MkXGw8mE1krwgNpVsTTSiWGfZOGqLc8pbC2I9DB8tFTdrtvBj0S8Jf
qwW6RiUK76BArfyotR+FjwpQa29bwcozGXdbTy8AkXgwkWf6wH5oia9ojguCCLOe1Wk4B4jx
qTyVXHhEl3W7DiIUXslglEhn2upU48ksNJpzvIrQBXlOFlGITgCX70mBITraiHjuCW0x9Ick
shlffUnsvnOQ1yF+wHuy3Ss2zVkgZqULhT800Xppd4J/tEu2c8bEwtC8ocvqOap1bSzIt4cv
z3/DmQWSv3O6yKL1ueFYR1JSYDuIjYkcpAIcCfNF99hjmSQ8ppzUHYtYruuFsqWdEbIO1cbK
NKiN+o9P04k9M3pyWsT69tShUoJ0xqeQjX9gSRdGgf5BDXCv3+xNDMkZ8ZWCubZQbbE27MV1
KFqXQsmqbFENnSUhGZnJrBXIux9GPN1B4jLd/XVAkVjvtlZAyCd4awOyFzZ/mNutTYo0zFGL
Ddb2qWj7RYAgks4zfIFQ97iZzhRb48CbOsKvd2cXfq43C92/SIeHSD2HOq7ZrQsvqzPno725
swekuNsj8LRtuWh0chGQf5sEyHfcbxcLpLcS7mhXBnSdtOflKkQw6SUMFkjPEip8rfsW7fV5
FWDflHzggu4GGX6WHEvKiG96zggMRhR4Rhph8PKeZcgAyWm9xpYZ9HWB9DXJ1mGE0GdJoLti
jsuBy+zId8qLLFxhzRZdHgQB27uYps3DuOtO6F4879gtrpoZSD6kgRXWRyMQ66/fndJD1pot
S0ya6S7zBZONNtZ22YVJKOK1JlWN8SgbP3NpB3LCAtNdTruZ/Tfwx388GAfLP+eOlayAyXPP
NgkXB4v39FA0GP9WKOQoUJhmDBjHnv96EwGQPz3+9fTt8dPNy8Onp2erz4aMQ2jDavyrAvpI
ktsGDwEtVhKjIe5errRO/D5s3XqVEuHh+9sPQ3dkzVmR3eOvHUpcqPJq3XleeNSxd1nFHl+8
gWCNP65NaPONye3/Hw+jsOXRgtGzYPhW3QDVk8vRKmlz/K1OKwCLw7uA9jtPWwrRi2j2/HKH
Wygo4Szr6KlQUSSv01UNnZXVig4PP6gUhG0UILkssQn+4/PPP1+ePs3Mc9IFjkAHMK90Fes+
x0o9K9OBJdSdRF5iFaOe6AM+RpqPfc1zxC7nW2tHmxTFIptdwKVdNxcMosVq6QqUnEKhsMJF
ndlKxH7XxkvrSOEgV4xlhGyCyKlXgdFhDjhX8h0wyCgFSvif6pq2SV6FuHRExrq3BFZy3gTB
oqeWblmCzREq0oqlJq08nKxHugmBweRqccHEPrckuAbDzZkTrTYXH4afFcH5nb2tLEkGIgzZ
8lrdBnY7dYsp5AoIXs6QKZEIE3as6lpXawvN7sF4WxMdSncNTc1AIDocjhW50L3nNisoxDj0
4susPdWQXZT/mGOr9SniX7DCTUHUzRbOsNssz/Dov/JBZlRV/zThbUZWm5UhE6gXHLrceCyt
JoLAY9QDJ2/js/QSQg/bed7fRN0F6aj411z7R9Lg3mYa3pf1dtffZpkn8r+QMwncEkq8fTE8
svU4mmvz6jndVf84I9ks1ngkzaGSPT/i8TFICml64RVvpLJiSA07SDgfn79+BRMB8Tzge6eC
I2gZOGy2PdvPB8k9lxIY6/e0KSCRgVVid9qH1u6c4MhjmIAXfPJrhpYYH5QclO8RKjTZuM2y
UAa/XHvA/Vnjm3AJYJSUfMGmLQpvzPD4I1ywyL1HoFrm02uoNML2E/KZCvmfWTrJd3+hQnie
nSOUJ16R/AEG9DfAuR6ck06MEZamvBkZnRVvuNd66iMSje+fXh4v/M/NP2iWZTdBtF3+03Pc
8vWYpbYyQwGlVhR5RtYDEEvQw7ePT1++PLz8RAzZpVDWtoQfmWpv0UaE7FV76+HH2/Pvr49f
Hj++8bvOnz9v/g/hEAlwa/4/jmzeqARxUoP4A65Knx4/PkMM1/+++f7yzO9Lr5CA4IEP4uvT
f4zeDfuVnFI9k7wCp2SzjAy/8xGxjT1xOEeKYLv1WNgpkoysl8EKN3bSSNBgXEoyZ3W0dLWI
CYuihSvIslWkq6cmaB6FBBlkfo7CBaFJGM0dvic+0mjpvxJfinizcZoFqB7wST3k1+GGFTVy
+RbWTLt2z6VfPNLxr313GaE+ZSOhvRI4A1uvVMSRIVq9Tj4ZMOhVuFYG4JU3b4fAKXC5YKJY
e+L7TBSxJ87reCsIcMv/Eb/CTTxH/HoOf8sWgSdCrFqfebzmw1jP0YgjAw1yqeORJdEm0Sre
eAxvh31dr4Ll7CYECo+LxkixWXiCMQ0qhjCe/VLtZesLtqsRzM00EMyqSc51F1lh+7SlCjvg
wdggyLrfBBvs2WMVLxfvbMsUdEM8fpupO9wgmxoQMe4FoO0TT9x4neJaHdHsMhEUHneHiWLl
cbsaKLZRvJ1jlOQ2jj3m+eojH1kc2tcBY9bHGdZm/ekrZ3X/fvz6+O3tBjL8OdN/qtP1chEF
zs1eIuLI/bpundPZ+ock4eLx9xfOYMGMFm0WOOlmFR6ZXv18DVL5mTY3bz++cblgqNaQvCDo
lPO9h5DwVlEpoDy9fnzkEsS3x2fIqfn45TtW9fgFNhEamEjxs1W42S7chewzWR4eRXt+z6Wp
zUQGocrfQdnDh6+PLw+8zDd+mmEKYKXMo6tZZk4LPnFzXEoQzB0XQLCa07UCweZaEx6fgZEg
utaHyOOuJwmqc7ielcyAYDXXBBDMHt6C4EofNlf6sFov5w7F6gzRLa/UMMsXBcF8J1drT1rT
gWATekJYjQQbjyvcSHDtW2yujWJzbSbjeRmmOm+v9WF7baqDKJ5d92e2XnsydCi+0W6LhUcT
olFEc1IGUPhSjowUtc+HZaRor/ajDYIr/TgvrvXjfHUs5/mxsGYRLerEE4FQ0pRVVS6Ca1TF
qqhmH2WalCSFx2NaUbxfLcvZ3q5u1wT3ZNYI5gQMTrDMksPcbuIkqx3BX/EURUFJjaePlARZ
G2e3cyuZrZJNVOAJV/BzSBxEOYdh2ZcG0WgVz84vud1Es7wqvWw3s2cXEMw+A3KCeLHpz3am
PjU2YwBSh/Ll4fWz/7QlaR2sV3NfFHy5PC6oI8F6uUa7YzY+ZuyZF14OLFjbalAtV44rWEjV
DeA03dBYadKlYRwvZBLL5ozWi9Rgqn0GA3lZ8Y/Xt+evT//7CC9AQk5z1ESCHrIo17mmCtVx
oFiJQz1IoIWNw+0cUr/juPVuAi92G+uBjA2k0GL7SgqkcfnR0QWjC9TWwiBqw0Xn6Tfg1p4B
C1zkxYV63FkLF0Se8dy1gWFrpeM6y3jYxK0MezcTt/Tiii7nBfXEAS5203qwyXLJ4oVvBuAm
sXaej/XlEHgGs0/4R/NMkMCFMzhPd1SLnpKZf4b2CZfKfbMXxw0Du0HPDLUnsl0sPCNhNAxW
njVP220QeZZkw7k94rY1frFoEZjGKNgyK4I04LO19MyHwO/4wJb69RLjMDrreX0U+vj9y/O3
N17kdchFK7xCX98evn16ePl084/Xhzd+IXt6e/znzV8aqeqGeLhsd4t4q+kvFXDtGLOBcfZ2
8R8EaD9nc+A6CBBSDrXswmDZd5ZFIf/UKYsCsdqxQX18+PPL483/dcO5NL91v708gRmUZ3hp
01l2iQN7TMI0tTpIzV0k+lLG8XITYsCxexz0O/uVuU66cOm8/QtgGFkttFFgNfoh518kWmNA
++utjsEyRL5eGMfud15g3zl0V4T4pNiKWDjzGy/iyJ30xSJeu6ShbSl4zljQbe3yaqumgdNd
iZJT67bK6+9seuKubVl8jQE32OeyJ4KvHHsVt4wfIRYdX9ZO/yE3KbGblvMlzvBxibU3//iV
Fc9qfrzb/QNY5wwkdIyQJdB4JxpXVIS9jKg9Zu2kfL3cxAE2pKXVi7Jr3RXIV/8KWf3Ryvq+
g233DgcnDngDYBRa20PmcAiD7hmyGoy1nYR5rtXHLEEZabR21hUXUsNFg0CXgW3DIsxibYNc
CQxRICgcEWYX26OWBrPgtFhhkU2ARNp693vHWkaJ2Y7iHtZuori2d9XCro/t7SJnOUQXks0x
JdfajI+nLeNtls8vb59vCL/tPX18+PbH7fPL48O3m3baRX8k4ixJ27O3Z3yFhgvbeL5qVmaA
6wEY2B9gl/Dbk80480PaRpFdqYKuUKgeZVuC+fezFxZs04XFuckpXoUhBuud53IFPy9zpOJg
5EaUpb/Ojrb29+M7K8a5YLhgRhPmofpf/5/abRMImOZwMnF0LyPXjHZwQdHqvnn+9uWnEr7+
qPPcbIADsIMIfDsWNv/VUOJKJ+/BWTL4Lg8X5Ju/nl+kOOFIMdG2u39vLYFydwxX9ggFFEvw
oJC1/T0EzFogkO9jaa9EAbRLS6C1GeHqGjkdO7D4kGMOgCPWPkNJu+PCoM3oOANYr1eWdEk7
fpVeWetZXBpCZ7EJdwmnf8eqObEI132JUiyp2tBv4nfMciwaeyKtryBU88tfDx8fb/6RlatF
GAb/1D3XHWOTgaMuhCRmnsY1rhvxXQ1EN9rn5y+vN2/w3vnvxy/P32++Pf6PV2g+FcX9wOEN
BYlrHSMqP7w8fP/89PHVNYYmh3oyQeQ/IGXhemmCRKwbE8QoMwFnSrSYMyI4zqHVXPTPB9KT
ZucAhN/+oT6xd+uljmIX2ibHrKm00J1pUxg/xFsXl9moCU35IE6dSFxqOVkKrMhBWmDp6Sc0
y/I92D1py5LjbgsGi6g2olEo+H43oZD2eJ8K1oLHa5VXh/u+yfZYrAcosBfBJcb47mZTElmd
s0aa3fGD1mxOEuQZue3r4z2k/sh8Q80rkvb8optOpoJ232sIn+Ip3raFOT0cIGz+anKAaK1V
bnb93JBimCOnHAY/ZEXPjmBMN87smIhePU/fcHZsqSq1CiBMZHLk0uParBjgjOaBmbVowJRd
LZRw2xhXkTt09pOOline100pAjWFofUdHq41sNlqQ9LM4z4BaL5z+UbyosvqdM7IyfM16dbw
W1OQwQekqXbZu99+c9AJqdtTk/VZ01TWppD4qpDGqD4CSIRQtxjmcG5xaH97Lg5jGOVPL1//
eOKYm/Txzx9///307W8jiMhQ7iI64P+eQDPj+GWQiLwB83TswrkzhIiXBard+yxpPSaWThnO
9pLbPiW/1JfDCbcHmKpVrGyeKq8unGmcOdduG5JkdcVZ+JX+yvbPu5yUt3125mvzV+ibUwnx
/vsafx1BPqf5meuX57+e+I3g8OPp0+Onm+r72xM/UR/AZNra/GL5igkdkheAbmKBLkGZIETE
bzqxOivTd1xYcSiPGWnaXUZaccA1Z5IDmUvHl3xW1O3YLpfUHBo49prs7gTGtbsTu78Q2r6L
sf4xfmjoQ3AIAMdyCqvt1MgzI0BmdG7mDD7N+a59EJz5EedhHOficth3JuuQMH4WJfb5dSjM
UBwKtuYwmy5ygKc0N0sS+4QuDuQQ2vXfdVaxXZUcmdVj2vCJA0HEhNekFKKPuoK8fv/y8POm
fvj2+OXV5jOClPNoVu84s7mHLCXViTeU8NVQoovdqs/oovR3+en0ZcIYXZqk193L06e/H53e
SQ902vF/dJvYjrdtdcitzawsa0typmfPikhowwX1/o6LMPb5eiiC8BR5XmhbWt4D0bGLo9UG
D/c20NCcbkNPVF+dJvIkvtdplp7gowNNQRdhHN15kh4ooiarSZ3hJ8xAw9rN6kpbnGQTrfwH
VWcvJX0N76pOvM96KfLsQBI0JgJ81E6GuqsaYfLPsMVXNTQrW8FjeshMcmtR5RRcdcpUpAyQ
j9svD18fb/788ddfXPhJbTdoLjUnRQppnqd69hCWoKX7ex2kM6RBWhWyKzIYXoFIaXPOGBJY
D5rcg+NBnjcyUp+JSKr6nldOHAQtuFy7y6lZhN0zvC5AoHUBQq9rGtcOJj+jh7LnJxAlmPPY
0GKlJ9Hag9P6njMd4RhsVVlUaaYEaIyFc4qW5qIvrcxK4n62zw8vn6STuGt3AZMj9ju66Di2
LnDzHCh4zzlluPC4pXEC0uDCDaC4AM+nCN+U4mux1ovkF8wA34cceYJ1g88UYIxpz/bUmu5y
6TE2ggviAVde7EXojBKcsrzTyIJUBOD34Uu+86m3+oaevTjqM3vjuDyLF6sNbu0CReGe70MW
pG0qb39n7jLwddv7IPQ2S1o8/gBME24nAxhy5nvOi6XemT/7p7XMKr6RqXeR3t43ODPmuCjd
eyfnXFVpVXnX0bmN16F3oC0XEDL/xvA5aYqt6q004bdS6vHPhOmD0Ol+JEtO/sFyqc67vnZc
ZOja5crPIkBwO3nCykLGHakP2TcVX6olLlPAWs34Wi2rwjtAUH+HaH5r2Nf3nLkaznViRYFl
kX9ONrbp42CQhR2YguPuHj7+68vT35/fbv7rJk/SIcaqo9LjOBXvUcYz1jsGuHy5XyzCZdh6
HEUETcG4zHPYe5I/CJL2HK0Wd3heFiCQMhr+3Qe8TxYEfJtW4bLwos+HQ7iMQoJleQX84Dhp
D58ULFpv9wePF4waPV/Pt/uZCZJCqhddtUXE5VPsHIHQyDk9HFvzI+kZfUaK2zYNPaZ/E1F9
wTR8E57U0sQNKXqXVEV/yTN8Y0x0jByJJ3+O1k5ax7HHDtGi8lhiT1RgsRgtrrUoqLCnFI2k
jlemO/2Es1PIYC2cV+Fik+OWrhPZLl0Hnhwk2sibpEtK/Cp4ZW8P4zqmBR1EtOT52+szv89/
Upc25eLqhjs5iJirrNITWMkXhHkw/39+Kkr2Ll7g+Ka6sHfhauSEDSmy3WkPmfmcmhEkX/kt
l5r7uuHicHM/T9tU7aAOn/goWqcShFtym4GeHH+JmZ+7kY1UB0Ocht/8jlOeut4bjECjccRM
lyTJT20YLnXPZ+etZijGqlOp50qGnz3EK7YynxlwUEZxPkP1HG5GLWUqFEiNCaqTwgH0WZ4a
tfTHS5rVJh3L7qZjR4M35PL/UnZtTW7byPqvTO3T7sNWRFKUqD2VBwikJGR4C0FKHL+wHGeS
da1jp+zZOvG/P2iApHBpkDovHqv7w62JSwNodBdCQzWJ8/FwdTrBbYTJ/Ql8yH+3KaMDTcOF
MVcNhjsT43l/Cc61e9E7BBP9WFPLLL7FVfIxW94gQnMcTev1ID0oUSn/MQrN8icf81We2i7F
9XpA4NqTlekVwv9weT5PT9xu+p0r9HRc6ZO19niHkVkUhLd225VfBzHuTDKH08yS2kKRHQKm
DYes0CB7N8Uo3yl2s1PSAJ1pyK5C0XUTux3tngK6iMMSSqSbpqi77SYYOtJYRVR1HsFJB06F
DE3OtXfRhB72A0SioFYXUn4WzPbWlFujDBEogbALVsFos9qaGLqqInKPbxQlIojcMHTBLo4x
g6u7tOx8oWMXpAx7TEmb5SCDUcMGLTPbbTHnzhCbwmFWqjRIkoNdE5KDaZ+3iYK9xa3JFJfF
2ziwBM7ZpbaEK5Yo1tcYTZ7DWHMq6ZJEtzyaaCFCizZOi274wYrkvWujyNxAa9xjq4wNjSSS
KG+WaV5RzPOynLHJJtCvUyVNulayRkP/cha7L3eUSLpdNuXbMMGeJoxMw+n8nSb237ch5bX5
/Wnbn6zapKTJiS3VMysdWk5eXKBKvUVSb7HUFlEoCsSiMIuQ0UsVnU0aK1N2rjAaQ6npTzi2
x8EWWUyLweY5QInuhDYy7DxKHkT7DUZ05oWMB4fI1z2BqXstvdNmNzEuR7pfslfAU5GgL17k
Cp7akypQrBEq1Jhgrxt6z0T7M8ujsKTf4FQr2+eqOQehnW9e5VbHyPvddrfNrPWxIBlvmyrC
qZiMhBJEzPA2QC2LMMbUUzWr9pfGTtCwumXmNa/OLbLIapEgHXYIKQ7trMF7P72yIxoBReqo
6lTLXuBIEtpzw0jEJlx5WFRxawBd+zB0KvRSnMBzoFYZuem6pP+UjgU0/0qy5xC7KxGIzCLW
TTqI3bW1ngNX2VQ5iZRG7aCFDi8JWD6gDR8zLNWdJ4Xx48YGSOeC0g7I0W9TotQTUTR4uXx2
q6rY6l7Rx+XsXBC0oYp/tafCO0tudT08dePg5UKAEWL3FY0v1jB72TW5dj+2ue6ioyHk2yG/
QExnm1ZncRmI+rMxe+rcm6TIwLBJjJMx+Be6+Z37sFvFJnNrINq60EWKWki7bJHOBwZFDjXr
bW+Zc0OhownNQzTgXfZjHITOPDqUF3snoOhQw3F82NOKdz8FnqC/W4TBcs1lkMFOZCEq1YTt
SLAJ3Cw63ocvLpkSRn72kLGJXGUVhGHuJtqBLzWXfGEnYm/GjzQ1jW4nMNzE7lxyXaUo8YKQ
W9EfxmBqFudKxDbCmqyhzjfWWIr/RB31R3O7KpZfr95b9Scs3J7sKhwO4ezcZElV8+w/Hjhm
xwr3tWPUFDz1bzz+OQ1gSzi1ByaGKypPCN8JBZ/V01ZeWTMJxLqeDlusvbTgTFapLkcGtnaW
agqmhsDz7xjvmOivVVSTlZUn/J/czLSFCtft/0a02EUyODkfbhfG29xjiSE7UCZ6QikNCATe
WeH5Fzp6eANb/9PX19dvH95/en2idTc/4hxNxu/Q0WcnkuRfhlehsdEnnovto+dyWwdxgjvV
NjLqxLTv73hzVnw9K16nDPf6oKOyR2pVMHpi+LXcBGNFLyvf4cZIix/CzA2++4XtQvDmHPqH
sirUd9YluSoyvLLUlvaQ7piYMdjAUExK2hqLDSZRInPSVgVMzCxEb6kWYIOllD6QYhzDeCue
xY71OfOz7cOymUVqL+v56GWd82cfi5beVPSUL8i6EIrP8iefceY9zpJEhhMpWG6fWTooDppH
/uyv3QQUGo5UIqQW+HAllLtgtxLwSacIxKZ/cDOfQjlC9dQNDF+HE1h0pfmLUPHK81CSwrsD
uyc8vrS0gSDvu+1GleEp/w6Mg0UghTs0fpPQffgwdBs/BC1If0g2hw1ECh/xvk4ypijlMdxW
oh/oW6KdMintw80+7J1ki4lSsg+DaE2OEprxJAp2D0HLSm04lrBieAsxhslyjoCS8sjDWHT4
Yis+0eMJpOyjeE8Wk0gZHDQwuh/SWtm3bhrfaFlIsihJkUBI55AsosRkJ7viLlLZHsJl4Wh4
8ScOtk4yTx+DhGj9H+ifdtqptAeTyvpu1lMU7fNwbOmV4+YEE4xXp3l5ddWvtvj44esX6ZH5
65fPcA/KwZbiCfQ/5VhUDwEz6QqPp3Lr00MM0H5Vcxhhah6HlZW07YKaqSVZV6v69lSfibcK
7/qhTTEjj/lbhXASI/e7P04ukORKg1hs3heR6QpqWUEXK1ew91h3maBd4HVP6QC5J3iyDvQ6
wzVAQZAMl9tjuNXqPW8Dj39dHRLgpokaZOvx4qdB4ni1oJ0nnoMO8bhlvkPiyGNzrkHiterm
NPaZHE6YYxp6zRJnTDtwittNzNtUHsW5x6WfiVkuSmGWRawwuImbiVmWINxf5SsfQmLi9RGi
cI/k9UCd9msy2oarItqGuzUJbUOPQZcBeazt+/W5AWB9vz6aBS4KPB4bdYznjYgBwd243iHg
2n6lJKUcLsziShN0NQe1dCP0glFMYcg4BEtarIyAhFvf1ZgCgKqJ555E4brwR9jatzxDsNGl
ioid03wZgOgoENPjOdqsDFC1A0h8N453yGHjinlWt7AaSGa8smZIkOmaHkMcTM/lZvn7yBY5
Cjs4dgL3Ciz3z4IXyUHsLm40VXFHPVbVE76mRbBLlocOYPbJYbWzSNyhfxi31qsAl+weyw9w
D+QXbXabR/KTuEfyE8IjD2UogQ/kGAfhX49kKHFr+YlB5TcgkYBcLPaBO1oEPdruCcKAnSZK
PiQYGTZHPvqo6Lq1FjsVz8seHRItTTjqXAMteacHNdHptm3QRN8hs7k840BHKT+34GNyeZiq
dwUDEf+yE1vZgHDWnAbPyZMLXt2miN1/GHmeBOiY3SZc7WATbnlSG88bUGm1JPK8LtAhHofb
dwgbOFne+bWEh/GKSiYxntBgOsYXg8zArGhTAhNvVpR6wOw9MSAMjOcBhoYRW4yVOkPwIo8v
/RlzIodkv4K5hwFancd07FpXm7EQIf1BZNhvH6+DRD9ei4fqkNI+2PoMEiWORyQM9xk2Nlqu
dOTlggC0sgmVIZdWdMlbkcSeiDE6ZGVrKCHrBXnCJWiQvecBpg7xvC3UIR7v9gYEf/GhQ1a2
FgBZmaAkZFV0a1OGhCzPGABJlicwAUk26+NihK0NCAHzxSoyIKud4rCi2UrIassOnngiBmS1
3xw84TomyLs8SjYr9X0nTxkPuzpcrjRo9XtPgJEZ0+4iT2gNA7LcMAHZrVQaTvxjz0tiHZOs
TBXq6gVzZWoiEN1QMWJ0QqzJLog2xOOYzTgrtVIrfQvebnjq1Au1dr4lljZYeZ1htkT8pWwv
YD3rGF/Ll6rIG9URIo9qj93sfPHCUvedlyBq1WDpcJQH1C/SFKw8txeD25Db/XcHaf/Q0073
L+NbM/7n6wdwuAgFO57wAE+2bWbeNUoqpZ10goK0SfEbUxYzcThhbrslW75m/O6QWONkxDvs
LlOyOrA2M5t8zPJnVtpNOGbgfueE68USwM5H+Hq++oITO/1xm6Ix8evFLotWDSceQxjF787E
zy4IJXmOufcAbt1UKXvOXrgtJmWw6C+0Dn1RWiRbCLJl12zgx02M6isS9WKZHgFR9MFzVTaM
m55qZ+qS1DNwubfAzlHvGYqV0aqwhZDllQ//TgjN/lLnrIBI7N7yz6cGN/GSzLxqWOXtm5dq
tLe9J5KUpfZe2ZXkKb7cyiLbXRI1ngJF++QgNYfD80tmEjoKDoSoSbyRvK1qW5pXlt2kRben
xPPL6KfKyItRklplsjazRf8TOTbYs2rgtTdWXoiV7XNWcibmP90VFdBzKm1rTXCepXZj8qys
rr7eASIZZz6EOuivMgyG+FEbYps5nq8M/KYrjnlWkzRcQp0P280S/3bJstwePcY0Ir5yIfqn
I/pCfOzG47RD8V9OOeG+2b7J1Ng2ZVUw2lS8OrUWGRbDJrMmzqLLWzZ1VqPsssXsohSn0Q3n
gVQ1hkW7nB6JWJuzRgxNowNo5KXxV2elkFiJ2Xwpdkvyl7K3ihSLQE5TlKhcHiH0+Tk5zob8
cIbxCkHnUNZYDDF5wndm1E4BD7Wd9boB3xnoqxDJrSglrdlGscg58uek4F15toiwSOqqEoTm
9XZcXmcZ+JJ6tmvIW8sU3uSJ0SB0Hf2VjWR0ZZ13FrHR3yTImQycuxHOjLuBmeivq3IZMqhh
ZpZbkKb9qXoZC7+3XaP78xVLcWXmJ6ZnnmVWL2svYkYsbFrT8XZ88KsVrNOXxkAHOuVQezzw
SER4epc1vqn0RmhlVenGWFG1mf09eyZGmycXKMAW3UTzi+3dSyrUTntB4mLlqJrh0h1ROhVi
qYrxl4kgee3oWYXQpMLQ2opN9iuIfi0V744fcW1fmfY7g10jjIjJBfJYkp3h7FEXLQXsStTe
wHBr62bw+e310xMTUz+ejTQlEuyxyrNc7ozZqVta3Ur13ASVlKek+W2LXjNNENWFim0Za1ux
lVPO1UxBOW7i5DMMZVCm1Ve+kcjkazTcO6t8oJHXDHZrXoD4b+k4H9H4pAENgfDhQs3vaVbP
ePks05WlWIRopt7FSicMfNrBmaFfoReMNutmlxrfGA3gLY7x1m77SWTMStbKSZ95vJXJfAxX
CF5Y1frFKHhy09LRNmce/7UTLmWcHOHT9mKSKkkOY9YjXFgI5fc7i+lMEMy3Kuq5z+wGVsgj
Jy8/hjpbdYn76Pzy7Q38iUyO4lPX4kp+892+32zgi3rq1UMPVR/cSCjp6fFMCWa8PCNUZ3BT
Cjq8r8h8dxx34Ggv7ikku1fPpjbgbFEIfGhbhNu20B252AZjaZFqS/qJ45fPelXQKptdo+/C
YHOpbbEbIMbrINj1i5iT6GTwumAJI1SjaBsGC5+4QmVYzc1xZVEtNVWfcjydp4PHikuV5nkS
OFU2EE0CwRwO+0UQVPFIC/wUYQJwjr+jmvjgdVQ+VNVR8zhTrtCe6Kf33765Z1By3Oo+a+QE
2Ug/zSbxllqoVsYhkuWUQtn415OUS1s14C7w19c/IfzCE7z2oZw9/fLft6dj/gyz68DTpz/e
f5/eBL3/9O3L0y+vT59fX399/fV/ROVfjZwur5/+lC9Z/vjy9fXp4+ffvpi1H3G61qCRvV5a
dIzzVHckyBmtLqwVb8qYtOREjqZMJuZJaLKGVqYzGU8NX8s6T/yftDiLp2mjx8CxeXGM837q
ippfKk+uJCed/gpb51VlZh1w6Nxn0hSehOOx1SBERD0SElPr0B13RphQ9Q50Pq2F3sv+eA/O
zDV///rMkdLEFqTcFhsfU1BZPT2e1fuIoF7H8e8bXwJyqfxrqGD7nd/L1SstPXq9rKscwann
PZtUB27Un1ww8fNFWfIF4tln/pkFpu+9eTcxSx3UQHyu6Djfh3bflW5urFGiXN9Q252Zxrsf
lZsDV3FdN5AuhrCGgvqCVQf8dUZG5DuNNx5ZYyx6ibYByrldxO77kjnDU3HB8gvO7bM8czWj
Ke9arIU9zhpHTJGg7KyoszPKObUpvF2vUOaVGTssjcNq/f20zsDxWXr2t2tiil20Mw2PtUyC
0GObbKLiCDP01HuNdKjqadMNp3cdSodD/ZqUQ+3MfwYf5+Wc4YzqyETvpbikCtqK3XwUesQk
3akut7+o+N4zAhUP4iiQxt2UaZhku/FVoO8g5XIVSnItPGKp8zDSg/lqrKpluyTGu/fPlHT4
uPi5IzlsJ1Emr2md9PayN/LICZ8XgCEkJPb3KSogzrKmIfCUPM90D2s65KU4VrlHhOh5qjHS
j1kjXfTh6XsxqVU+fXWaim4e+Ve1eT2gs4qSifXcm4x60vVwTjMUrae6N8Yvx6pcmak57wJH
5Rm/cOsbDV2d7pPTZh9ht2P61AtLsK40mHt2dB3LCrYLzfoIUmgtFyTtWrdjXrk9F+fZuWrN
SxFJpqndtGmepy97uvMv7fQFTs99vYCl1vmn3GrBQgD3d1YT4I5XbPNr2I9rlZH0oTiJLSPh
LcT8Onu/IRO7+uP1bM+SExlWeXMo5U6724aUNLuyY0PaCrtQk+2qbqRpWNU4qX1xeOR3u/Cs
VZugE+shrJIve+nJ4nSzc38RSXyrTvZOyrZ3+ijs28XfMA5634nJhTMK/4niTeQkH3nbncdM
R4qRlc/gHC1rliUgvl7FxWrlOxZr7ekTDvwRzZ72YENg6eMZOeeZk0UvNyqFPurqf3//9vHD
+09P+fvvRuC/ua5lVavENGO4+2vgwgHfcF06BwTVNbJf1mnntJ6aWMUQobVgK137UmeGVioJ
Q0trbDwqZke5eQYhfg+UohtPYEnfAm4RNd/FvvBpCsJbUfdgZzpMmT9B+/3P139SFVb8z0+v
f71+/SF91X498f/9+Pbh38ZLUCP7ouuHmkXQaTexrbBpEv7/FmTXkHx6e/36+f3b61Px5Vc0
PoaqD4QkzFv7bAOriidH60wGPB2rCInIlyn0wMrix3AEh44IaXJUm0wcLv0cWa7iAG4PW3Wk
XNAfePoDJHrkFBTy8Z1iAI+nF92L5EwS06nckHBuONW982s7mdiNVRcpBgRtusPQcsnbU2G3
W7FO8NfzugtQtyPHjvyk4NipEKmdfFG/VMChx73umwxIV0ZEFs5XvXYQRdukdfxC7bI6UXm2
E10GUz9kkT8rwRupLvxnb3vbil/Ykdi+SAxM4fE4fJdqn5UVZqRTZAUXapyhV040twOpnvj6
x5ev3/nbxw//wcbgnLorpdYsNJeuwFbXgtdNNQ+Xe3quaIvl+keAXQvZJwpNJZ85P8lTn3KI
kh7hNvFBU/Xgjse81Jc3GzKmgOGzfKYOjnEGBpImFrTKPdESJfLYgAZSggJ4ucGyXZ7NEAJS
OhBWAPkaMgdSY4EoJSsvotj0dnsn41vvie97Cy35NSWHxQw8F3Iq8zo6bLdunQTZ8y5k5Mcb
NGjIKO/sWg0FYbmTsaxs7AmdMQF2nocEEpASGoRbvvGYI6tMbp7oGvIbp2GywcxwJXdygrRV
J8Bm0paSXewJh6AAOY0PvrcY89eO/1roUvIo/ZdPHz//5+/BP+QC2pyPT2Mki/9+hqCryKX6
09/v1g//0CKmyAaDmlo4jSnyntY5frI6AZoMPzqVfHCz4+eWjO6T44IkWiaE0Y130ahA2q8f
f//dmGr0O057gpiuPi3/9QZP7IPHk3arLiNf7MPw2d1AFS22KhqQOaKmpyJ3aydfVagn4q0B
IrRlV9ZiGwsDB1OApybTxbg8XZCi//jn2/tfPr1+e3pT8r93vPL17bePoMdBAPHfPv7+9Hf4
TG/vv/7++mb3uvlziI0lZ4aHW7OdRHwu4hVDTSxrSxxWZm2aeSL2mNmB6Ti2OptyHW3c77tx
qaexI8uZJ0YXE/+WQnlA7dwzeOsNPszEppKLLZxm8CBZjpUGUC2MinEIUfLMiAiS6dM/Rya8
AxgK3VumZJwvGbdKUVHS/7Cyl1QV61g0FGL+MlTFkeBsH4e9VRJLwsM+dqiR4axzpIUuLYsC
l9pHiY2Lt27avemWdQQiBccBkjhyaHwMVGpRn3tHaizYlNiWVDLrMtWUnqal0ifod51Q0GC7
S4LE5UzKkEa6UKG9vuDEKXDJ376+fdj87V5LgAh2W13wIQZ8X88CXnktsjkgpiA8fZziq2pz
NgDFqnqae65NhyAfCNkKca/Th45lMuKFv9bNFd/bgZUX1BTR36Z05HiM32Wea8E7KKve4S+U
7pA+2WBK0gRIeRBtjCfAJmegYtrsGmx214H7rS+L/Xa4pdgRigba7a1uCPSC9LuD3vMnRsNj
GmEpGM/FEE18jBBJ0gt67JJreoI3lVibJGvjOZY1QJEJwiD6E3aDkSCMYhu0CSIPRQcpmz0Y
eMefo/AZawYXCv9hg9n6T4hTAf5dsLSN6FMBtuHVAHESIF9OJAwRcWdFtAnRTthcBQd/PqdD
PFuPOyRJPI8hZ3mkorMnzlCFs4SVoQriPyxnLiH4Ea4x2pZbISH4jkOHbJfrIiH49kGHHPAT
GWNwepxBzFI/7FHnM/fesFW9BOlgu/9j7VqaG8eR9H1/hSL2MhOxtSWR1OvQB4qkJJb4MkHJ
cl0YbltdpWhb8tpyTHt+/WYCfABkplwzMYfusvAl8UYiAeRjxFgyGkzAuT4zFAe53qmw2qwR
Yz3d5ONl0/mYaYnuHe6jnTT3p0eCz/c62rZsgiupdDj+dzRkzUpTLm+MdTP3iLwV0uQtK5w9
3V/gyPd8vbZenIo+h4HJYrge0dLHI4IHYPqY5Ky4EczGlXPY6xvG1CF7zXKGTj9dFJvRtHBn
VJmxMytmVJwOncAmWBamj+dEuognFlW7xY0DzI8Yj2zsDYl+wmEa1seh8+kLnsU+4UTLAv7q
cObGpFUcTm9wqv8kC00DH8+wRMf4sdtqPzfft6nMLSIQ9COUY7CzIFkZEcoxrYo7Ky+/kiAS
Jtp9F0E1vNyFnl/5jJJkpQkPMOMgpCbYUwenCkzdwo+NQ2IW7UuuyMo75fe75CbOSj/j6GSQ
0DVWrYxXMf1819JQ43GLdfA6cQqr1HZa1WQdPVhIDriqVRh+QhogiS1mabhABkG6k1sz+N7T
8XC6aIPvirvEK4t9NxMfg5oISmJsp0uZu9LWoc59sV32Ve5l/stQ1ywTtzLVeIWrPid7QEJl
nO6CMkmLcEkfuysyEURLrDn9JlkRrQM36xBU71SdZjSN9rQndXe7r5/pdfNB33GmM0og2whg
B5pArH7LyHG/Df+yp7MO0NHI95buCrm7o+lntmkwCEXwm6VFTAljHFYvDFGrgeyFSusIr1SY
4OaoZSBt8SIMXPkpCXUDoOHyVl7vq17B9TQwtOPQpVO4NBMy5I+rIAnzG+OhHiAfjqAVRGdd
unosQkwQQe6lwu4U4YWac2ajiCQo6HtM+V2+ZWJwIhovJxYVIhCx9a7vDXq3BCBM43grH1hH
HQSY883SNxM7JEkqP2/XnUzNzIewOg0juRK1a+A4drN+TsiV9/rAtsCKYuISjvE8/9xL6sVU
hRaWi7tMvti4ibuSsWjakmB3qmNJUiUBbPSI/F3GQaJ3iEo0XlfbtOqerVsogjDT2DLLBQb5
0VW0mrLjXpoKgPPcKyGOyZeaCpXhSWF5BbC6ZGxgIwc/I0cStaNhQhWRxkZkYudnt49kmlJH
a8uQidJcgCtpJzpPjCoZLaFFZZ9WdXD/zREdXb+d/7gM1h8vh9cvu8GP98PbhfAHUkelN353
g9lWqdsijESPth2sZgv4rHhZx/3hxIajRlcndb7a5NGScS6k+V25TossIi/VkFjeDwOvXEl5
rhMlFglw1QS7wlsb4SZVOd6GdrQC6FLrBiTGsDluUSFGAXhjqDpKqicbGPy3QIPTyqdLt6Wr
hL2Hl3DuJjLEcSlDYn1GhwJnl66RJuSkRupuHbIdugsR1/zOSDLgIl7sm52yxnhk2c5goJge
LEMzAS1Pyn3kFkEnXQnI3Sx3mczRqOc2ydJsi1n4VHdU85KYcm02qzy4W5AON0ThgrC2Mrbf
PBSxhXo29M6eor8U5jAezUZzi3rZBsiIhap+w4K/y6CDPC/OOKzYhCx2G5gQlm5Yh2Da1LIX
VNPz2XRkbQ3q2Wg2C+j3tLwQY2tIX2bsislkTN/6SGjSY2IhMPC3S2US0xz1JOQ+PByeDq/n
58OlcwB0Qa4cTSzmlqxCux6oqunRyVWVdLp/Ov8YXM6Dx+OP4+X+CZ/roCr9cqcz5h4JIKvr
ka4u8Vruevk1/Pvxy+Px9fCAsjVbk2Jqd6tilvdZbiq7+5f7ByA7PRx+qfkjxkcdQFOHrs7n
RaiDkawj/KNg8XG6/Dy8HTsVmM+Y21MJOWQF2JyVpd/h8o/z65+y1z7+eXj9n0H4/HJ4lNX1
mG4Yz7su56uifjGzanpfYLrDl4fXHx8DOR1xEYSeWVYwnXU9MzYzmctAPS8d3s5PyAV/YVwt
MbK6d5hVKZ9l05jjEwu5LWK5KEXMOjhUrLfsuZ+qFsfj6/n4aFRYrGnRMtT1tuGHfKyD4w4e
ZA1vAwB5sFliOrOGVKHtJ1ERlCs/nloO9ULVBOurTMcaXry8LYo7vNwti7RAexI4UIrfJk4f
R79yFWxb2o4F+3K2chdpyqhLJyE0UmSMOzDo+GJJf7kR0yFzcZ2FjjnDZe+v7t/+PFw0y8Xe
CK5csQkKECTcWAZTJPu2k41W1zCIfBSWOIlok3kWHZv9JjINZm+X1CjtZ5M2Glp7MVjPKoyl
dav7dYEf5SJOl4ZiQxQGKmYgoGQt11v3NghZWN3QYdYCz/y3aBUCIg17l4eUxXqb+EG+SCPt
GBLv46q67bgF7g1b8D5007hXr6bxQb72zZZCUlmbDTGfmP2lTDBWsW5hgr73ysjNOs7BZPK1
zCVuZI4pycJMDIIg89rsjVSD0Pf8hasJ9H4QRcCSFmFKJ8qvPyhAxHEH6BYvE/NFkfSStr2y
0plh8CtTzYpXKRit0sO427r9YwO6pvZNkx4FpBfJOIzSMl9uwkhnVdtvYSG2vebU6QVaphrn
21WGHMuTS552s5cpC1LtPJmVfUs0TDQncriIUWSlFoUPPNv1e7VUDwECo0RkWtao4bdBelPb
20iGNS1cTaWoqYVJJd8Klq6HKk2cxxPii1+gq7SRUaOKaLFJu4MOSNsJaIJwUN4EdzAmUdR3
riPVqURmlaTZRxU4Gv0c7pTyWffBISmA91rlrqse26GLgyRKqdi7Ck7dTZErTVcjfafWS7ul
bHOMa2uz7KwiKO0qMnaa5cEqZJzS1cRZntrlYlsUtMK5CHvzCtO6XNZTF/tSeZn0T6tcmPXn
aJV+oyv2y4GrFOm1CVpp1i+Kdp22s6cC173r+Q4Bx+uhRDhoateo8m4jIvh0VLeCyCdzE1c6
d+w3FN2zUYlYsLxGMV5ipJA2nciKUQsgzUDAyIna4Xu2VFOHSQQkSRF2dtKGMo7211ynVFPc
9NWlEnPGTq1SfEZnbJCSBB6hOyVdVImXw+FxIGRIukFxePh5OsP57KPV/aKsl6rc0a4N32og
d5mUd2M9d9xh/XpZ3aKKLYgXUhClT9VVpNAEn+7QG8hNHfyc7c0s9npOKipki56jwoy+z6qa
7m1ZsxKNgh9SLB5Znj7TvHWexkHzFb14Ytg83SS9Ol/kHRXGe20YMfzAa7ooTTdb7QmgJsSo
9iCja5dfSo+7ykS/VKpSpVNvh9Ha18hEOObiVnWoGO/fJpVDPzprRJ7vBdMhfQ2ikwmU1Usm
NrhGyBkfrG9FFiakuY33dH74cyDO768Ph74OCGQa7ApUoR3b2nEQf5bSokcftEXkN5TtnQWV
f7MjwO61SPdtLplnPNLWT/xAQx5R8a0qTHeuflJ1he6wT9G4+sWoSmplFHUgw7P/8WEgwUF2
/+MgFdEHoh8X8zNS/WyMJSlhh14gNUXlIM4VooB1tV1RJogVbay1FgO9d57cmqRyp2mdwFe5
kjq1fqj0G2LzelhLLsWOnk86TavEf0WHAgmXUZpld+Wty5bmuZH0OIc2nZ/km9+UeWC8EFbP
K3V7qoub5/Pl8PJ6fiCVXwJ0hIkqwMx1Te9jlenL89sPMr8sFpXWxkqaZ+cZ3X2KUD2t0UUb
RWjbewqHVxT6e8tYQCP+Jj7eLofnQXoaeD+PL38fvKGxzh8wVf3OdfAz7GSQjNHg9XbUNzYE
rL57U3si81kflfDi9Xz/+HB+5r4jcXWhuM++tjHqb86v4Q2XyWekyrrkf+M9l0EPk+DN+/0T
VI2tO4nr4+V1fH6op7zj0/H0Vy/P5nJBqhDtvC05N6iPG5envzQL2m0db25QAGlUatTPweoM
hKezvhNUULlKd3X4hjTxYQkmvnlcbslgPcqQvklXzqJo8eAhYIf/lBJNv0TWk92oPIGdhrv+
WqlbSdhTt12izm5kGcEexVRG4MFXVophhdpJM8Q3e/WMTqSV3sLgkC3A6rkZJOpE9Rkh2pim
CdrsUh43kHCzDJeS3KxkZadEagKEMiYa/rmkjt/a52aedU0EzpmGxDIzFrX/VbZpQFF9y5K0
te+N7y++lNGSX43SZhmuv49sZ8wG+alxLrqPxKd8/Lca5/JfxO6ICfIEkMWEKwPIYQLnLWJv
NB6qGyl6Hbq9N7wGsZlgUiik+EwPSow0PtBUV2V1StvvTklR1JC7D2lBbLMXPl3yZu9924yG
TNTo2LMt1ouBO3XG/KDXODeoiHMBggCbOYyVNGDzMXNAURjTlL0Hw00fkgCbWMxzNIhtNhtg
sdjMbCbSCWILt/uQ9Z95Wx7ORzldW3x5ZUKiITTn3kSn1oR/rp5zTAEgPsM5/fIPkMOEFQNo
MpyUobqAc3M3ipj1Z1DybGM65Vs1ncxKtl1TZnUjxPfGlDGYQW2AGW2cAtCcsdNAiIkhjdCc
Vpx0/bkz4coKS+AQ8C+9YtfhzGEim6/3XIi8MHGt/Z7NMyo8y5nSn0qMczaA2JwePIXRbYzd
/Who8dhoxCxlBdJTFjGLuTlBzGbs5vBeZsL0W+xltjWkxxAxhwnkhticyTNxt9MZYxlUyHEf
zkb0ONUwozRRw44YdsMRGBQja2TTfVjhw5kYXa3hyJqJIcP6K4rJSEwsempICihhRM8qBU/n
jHoBwEXkOWNmqHdhhi+r+PrOTffqcLPv4f+qhs/y9Xy6DILTo3kG7YHVgfflCY5AvX1iZjPs
bx17jjWma9jmpTL7eXiWDsWUpY1ZQhG5INOuKwGFZjqSJvieXiNaxMGE4bieJ2Yc63Fv8NaY
2bV9e1jyMAZ4yqXexypjBByRCQbZfZ91uW99HdjtLWWodHysDZVQ+cWDE/X59F//TYh46rQg
Q0g8M3B9fNAUbOn81cWJyGqoKdYUHkVW5d6Jf9CeuntZVGpYagLDXL5X05ITV8ZDxj4JIJuR
ABFi9+CxwzAhhLrKZTrE7abj8dxipi9iNo8xLgwBmlhOfkU0GU9mk6vwfHLlPDWeMtKqhDjB
azydsP3GBZlGaDpkO+CKNGSzWpezGXPM9IXDBY6GTX/EHRdQIJgwO1c8sWwOcvfjESMqeJkz
7XJIDZszuzPsIL4L+6TFelFSFOMxIxMpeMqdISt40j1zNAqMV9Zko0X7+P78/FHdrum7Sw+T
4PL18H/vh9PDR6MP+U/0iuT74msWRfWdq3oHkW8H95fz61f/+HZ5Pf7+jrqkHcXMXlRn4ymF
yUIZ9P68fzt8iYDs8DiIzueXwd+gCn8f/NFU8U2rolns0uHCukusOxxVnf7VEuvvPuk0g4H+
+Hg9vz2cXw5QdH+LlRcvQ5YVIjpitqka5RiivNJh+e8+FxZjLi9Bh+nORbwaMZku966wQAC3
uCCP1Ta3usvTzj1HnG3t4XjIssTqHkR9yV6DhMUKfeBcXTv94VB7+OH+6fJTk4Lq1NfLIFe+
N0/HS3f0loHjcGxQYjSzw7gwwysnFQRpDkBWSAP1NqgWvD8fH4+XD3LyxZbNSNL+umBY1Bql
fOZgA5jFaZIa0dPi0Oe8Qa0L0QuJ1kBbBhHhlLv7Qah7Q1j3V7dvKl0NYKjoJO75cP/2/np4
PoAw/g59TSxc7oKxQtnFJ9Ept8VLlL3wDGH5XbkqlTAneCz3qZhBV7HfNwRcDpt4zwgZYbIr
Qy92gOVcWcc6EVcGEgFDmFxlCBoNm49iGpGIJ76gBforg62c6R1//LyQa6dSF2SG8BtMdm6T
d/0tXmIwcyOyuRUEEHA3+g3FzXwxt7nZiOCcm4xialtMTRfrEaeqjxB3pIshwxmjYhLbXLQM
gGzmqgygyYS5E15llpsNmTsHBUKnDYe0UXatLBqKCHZD5n7IJGIc+0hwZFEOV/Q7/qgbU1Sl
Z3lqeEH7JtyRxdw851k+HDOMMCryMSN8RzuYWI7HqDq5e9it+B0JQfp0laQu6+AnzQqYk3R1
MmigNWRhEY5GXdsaDXIYZl9sbJtZPsAJtrtQMFJ/4QnbGdHbtcSmzDNANTcKGP4xc08osRmP
TZm8AXPGNt0/WzEezSzaXcHOSyJ2MBXI3APvgjiaDLlrEQlOGTCacG9032EaWL2Xx4oBmwxW
mTPf/zgdLurthGS9m9mc2zs3wzl3e1o9+sXuKrmyfbY07MOWu7JHn73lYQ5BkcYBRsi1u86+
7XHPKNHcsmQFeDm3UXGPvfHMsdnmdOm4JtV0eQyLh99zO2S93Gp7cGr81Mi2bvSNe08jvRLC
Hp6Op94c6Hd0mHhRmOgd3adR7+5lnhZ1iHlt/yfKkTWoXegOvqA12+kRjtunQ/d2TWri5tus
+PTlXrpwpKmqqtAFVlLICeR+6Z3r/vTj/Qn+fjm/HaUhp75AmjX1OblxRH05X0DuOZJqA2OL
4U6+GHHu6fDmxblyK+Mw0oHC+CsbboNGbMQwSsQ4Jiq/48StIovYYxjTcWSnwmCaR4cozuaj
HntmclZfq+uR18MbyqgkT1xkw8kwpm1MFnHGqjNEa+Dp9DbiZ8L+jM/JkD86d1tnzJwIvWzE
n3mzaDS6omugYJYjZxFwZOY2T4zZJzqAbHqyVWxYto6eHGPu3L/OrOGEbsb3zAXhlzaO7g1u
e/w4oSUtNebCnnf3cX1XNb6rZtD5r+MznniRNzwe35QxNpG3lGZZSTL00QgjLIJyxyzyBRsU
LwsTepbmSzQdZwR5kS+ZuxSxn7MS3x6awECQH+O5AOQjmzud7aKxHQ33/YnajOLVDv43zLHZ
Kzq01GZ4yCclqO3t8PyCF7EMP8Er9zkjsgKXDuNSRvFKvXTbiWVJXfgUQUwr/8fRfj6cMGK3
ArkX6xgOfcwjMUL0ui5gG2YmtYQYgRqv5EazMb1yqZ6sOWVSLHTmCD/RjIxgqYi4sd8lDn1a
h1NiqHvOoioUUBHQnAgpcCFmKbMYkaBISZse+W2Qa37GJDE6q6/CaLbLJQ66Ue9rLnCr2c7C
j75zdkzkzSYRjTIhWMugluBaRHCkknEzzNccJYLmN4OHn8cXwyirFhu7mMY5M9fbdFvdMrkA
I7rBDzSYikxJULHo9d1AvP/+JrWgW4m3cndWAtyNKhutYkymd34vLjdp4sp4biwVpJfZ3i2t
WRLL8G2fU2F+LJUHXZqx3nKQQtlNYM0DOA7RrNToiGaqoKK1pxtOVLZ5bhaVprv6FjAUKv0o
qBzxMyLhoj8mh1f0MytZ+bO6dacmxTWyxsOOa0xy+Fl6zBrFEHq9qrQ+MOqNIPHzNDS8SVVJ
5SJEVwF967quZ4tmV18kOz/UA5LWYc0zwxVbgj4KN8ZvL3JDbUEjRaE5KMAfOpgtNf0HVahM
++ik+e6+l4ahazWze3dfObUz0nS7/J1MeO4kdNpUp27IVKStzVW1eisX/vrPho2pt5fbweX1
/kGKYX3DTlFcs04q1uSgEVm2X6J7EHpvCyivEllcppnhDUW5EFGxlTkOJsKUfvwRURhzH8lz
stc3jm1veNMtktBLshemvD4rScchrWGIfM4+PsE2LJmGbv7hud46KG9T1O2SAUkML4UuSrQg
zcIJPXNzQdoUABamsek9JtgXVsnYxgFml6QdASBOqfuUkwlbAeWDrIR5apFFFC3wPRHuoepR
HxKBt83D4q5TMYeNP/Ft4RsxP/E3SwwFxAvZe4ZbsCCEXgKMafy3HlQBewlonuOwAcqWsdw5
mvs3SL/ZpoVrJund0BSGQE5PK4TSJELfu9I5IEt06+a0SQyCXN+slsIyWlMlSNtS9ILjR4bR
ceopnMhqUeSdnqlT6DY3KAwMiBu4vlY596TaEOfbpBRuAnQl7/JXUfOil8JdAROA7vS2uGBZ
AmfmHBAnYdTvj5aZWdwcwtrpLF79BnbmG2nkksGJpjttrFOq6JdppmHohLkeSi1sJGyvGOj3
rou3NUd/ndIdH+eEAiiwX8joU0uhfDZrW1g3IVQJ0pZMq67bpatTKpaHUnscCmDgidbKziqT
P9F9qTQBbTwNaMJ6DokVGa6aTuMVwK0YhRZ5EBjfLGNY/VTQBYVYnep5hTam6E5yKSRD7aQZ
SUvJX7X15akw0+0moFzFkjMuhdGK3Dv1fbucm1SY6X6Yo2sG+IcccYrWjW5d2G2XcBAw3aRQ
X6EsR++7GtEeZoZs/GeEcQC9mGZ9V7Le/cNP3df9UtS830xAR1+FMGe9AtahKNJV7tLCTU3F
s5eaIl2gjA5SKelUXdLgAjRGpE29UoBGxNS18Vco+0L1i/8lT+Ov/s6XEkZPwACJaT6ZDI0Z
9i2NwkATfL8DkT4lt/6ynlF1iXQp6lo5FV+XbvE12OP/k4KuB2BGHWIB3xkpuy4J/q6t1THa
GfrP/c2xpxQepuilHI6w/1/Zky23rez4Pl/h8tNMVe6ZeI09VX6gyJbUV9zMRZLzwtJxlMSV
eCkvc5P5+gHQTbIXNOP7cI4jAOy90QAaDVwd7l5u7+6MjFgmWdvMeeMaNT7E8fOGESh6UW+q
90pJe9m/fXk8+MqNCj6St7gBAVZ28gGCrTMNHNXVEdxfJCWtbc4yKUF+ttgTAXFIQYyFg9cM
CkyoeCnTpBK5+wXob1EVL2mftW7L47JFS0LcVEZNK1FZ8YidpGFNVno/uTNSIbZR01jhwxUY
+Esizrmw7Mt2AWfGzKxCg6j3xvkpsjkojJWwAgBTX5egDS/kAqP3xM5X6o/D02EXr6Oq03aj
Xgv318FQtaxVagUVZ8hiX0WFGWvDEm2UTODmYZwgQSCEXYY/BFSZtkH0bKKts4nmTMnsE9JY
O5MhcSwG/mkdrPRbCVROJjqN4rOA1tdtVC/NknqIkrTUQWRGi7LQ6lCdKJeSOWYlKLD5IuUL
0hQUSIrXcTlKlKtiNnnxQN5vJxf+WeUn9MtPP3ObzEAXTGnbz2xZn+uGN2sPFKdk5plR3J/P
gRc1Pa3IZiJJ2GiC44RU0SITICZqWQEKvToxRK1teBVmMgeOFEAW2cR+KcO463x7Ook9D2Mr


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 16:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 16:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32489.63509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8ph-0006Kk-4q; Fri, 20 Nov 2020 16:00:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32489.63509; Fri, 20 Nov 2020 16:00:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8ph-0006Kd-1b; Fri, 20 Nov 2020 16:00:21 +0000
Received: by outflank-mailman (input) for mailman id 32489;
 Fri, 20 Nov 2020 16:00:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XFvM=E2=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kg8pe-0006Fd-Ru
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:00:20 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c5f657c-938a-475d-a25f-05b23056ced8;
 Fri, 20 Nov 2020 16:00:15 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kg8pI-0005Bl-Jz; Fri, 20 Nov 2020 15:59:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=XFvM=E2=infradead.org=willy@srs-us1.protection.inumbo.net>)
	id 1kg8pe-0006Fd-Ru
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:00:20 +0000
X-Inumbo-ID: 2c5f657c-938a-475d-a25f-05b23056ced8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2c5f657c-938a-475d-a25f-05b23056ced8;
	Fri, 20 Nov 2020 16:00:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=K5DwT3XZlV3t3RqqSgbM35Y3HrzhNkm8y+Dk2IlqgSg=; b=dKbyJHe0CTU5aQ7qzS3y+R7HzS
	IKnP9HvhkePlkizqwksisFX9ItaAwMtV42JJBmP3ZX68rybYyhhyxN6HF+4s2kqLrzwOxCN590LXF
	ym0WaBVLMdq9rLFtADBHBTFGjcyPZEqJfC7pbYTLnera5pnBeS3gKB7QoEHsSi8lIOiJp2cbk5Dht
	3B0uQlIfZ8f/1YczBdK4QRRNeBKzTo4B1VHMay+VPY7OTUi8keSXIo2ShWdFB8ho8zwbUkIe8+1eP
	wuIgrB00p1rgPw8SIw6y1czXZtN+QrCR9MWZkdKvvhRmCPfud3TmejnBZGx0dJq2Oq4i98pjwEUY2
	dBwSlL3g==;
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kg8pI-0005Bl-Jz; Fri, 20 Nov 2020 15:59:56 +0000
Date: Fri, 20 Nov 2020 15:59:56 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>, Jens Axboe <axboe@kernel.dk>,
	Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201120155956.GB4327@casper.infradead.org>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-15-hch@lst.de>
 <20201119120525.GW1981@quack2.suse.cz>
 <20201120090820.GD21715@lst.de>
 <20201120112121.GB15537@quack2.suse.cz>
 <20201120153253.GA18990@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120153253.GA18990@lst.de>

On Fri, Nov 20, 2020 at 04:32:53PM +0100, Christoph Hellwig wrote:
> On Fri, Nov 20, 2020 at 12:21:21PM +0100, Jan Kara wrote:
> > > > AFAICT bd_size_lock is pointless after these changes so we can just remove
> > > > it?
> > > 
> > > I don't think it is, as requiring bd_mutex for size updates leads to
> > > rather awkward lock ordering problems.
> > 
> > OK, let me ask differently: What is bd_size_lock protecting now? Ah, I see,
> > on 32-bit it is needed to prevent torn writes to i_size, right?
> 
> Exactly.  In theory we could skip it for 64-bit, but as updating the
> size isn't a fast path, and struct block_device isn't super size critical
> I'd rather keep the same code for 32 vs 64-bit builds.

Is it better to switch to i_size_write() / i_size_read()?


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 16:01:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 16:01:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32493.63521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8r0-0006Rc-GU; Fri, 20 Nov 2020 16:01:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32493.63521; Fri, 20 Nov 2020 16:01:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8r0-0006RV-DH; Fri, 20 Nov 2020 16:01:42 +0000
Received: by outflank-mailman (input) for mailman id 32493;
 Fri, 20 Nov 2020 16:01:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kg8qz-0006RQ-D4
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:01:41 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc9f6f15-29c8-47c4-afe5-32f5be2f3bd0;
 Fri, 20 Nov 2020 16:01:40 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 776B867373; Fri, 20 Nov 2020 17:01:37 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=tRH+=E2=lst.de=hch@srs-us1.protection.inumbo.net>)
	id 1kg8qz-0006RQ-D4
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:01:41 +0000
X-Inumbo-ID: cc9f6f15-29c8-47c4-afe5-32f5be2f3bd0
Received: from verein.lst.de (unknown [213.95.11.211])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cc9f6f15-29c8-47c4-afe5-32f5be2f3bd0;
	Fri, 20 Nov 2020 16:01:40 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
	id 776B867373; Fri, 20 Nov 2020 17:01:37 +0100 (CET)
Date: Fri, 20 Nov 2020 17:01:37 +0100
From: Christoph Hellwig <hch@lst.de>
To: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>, Jan Kara <jack@suse.cz>,
	Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201120160137.GA20984@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-15-hch@lst.de> <20201119120525.GW1981@quack2.suse.cz> <20201120090820.GD21715@lst.de> <20201120112121.GB15537@quack2.suse.cz> <20201120153253.GA18990@lst.de> <20201120155956.GB4327@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120155956.GB4327@casper.infradead.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Nov 20, 2020 at 03:59:56PM +0000, Matthew Wilcox wrote:
> > Exactly.  In theory we could skip it for 64-bit, but as updating the
> > size isn't a fast path, and struct block_device isn't super size critical
> > I'd rather keep the same code for 32 vs 64-bit builds.
> 
> Is it better to switch to i_size_write() / i_size_read()?

I think we've stopped updating the size from contexts that can't block,
so it is worth pursuing. I'd rather do it after this series is done
and merged, though.


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 16:03:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 16:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32498.63533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8t9-0006d3-UB; Fri, 20 Nov 2020 16:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32498.63533; Fri, 20 Nov 2020 16:03:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8t9-0006cw-QB; Fri, 20 Nov 2020 16:03:55 +0000
Received: by outflank-mailman (input) for mailman id 32498;
 Fri, 20 Nov 2020 16:03:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kg8t8-0006cl-NW
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:03:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7eb679b1-11a5-4ea6-aa34-5fc261ea0c99;
 Fri, 20 Nov 2020 16:03:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B0D04ACC5;
 Fri, 20 Nov 2020 16:03:52 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xyTX=E2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kg8t8-0006cl-NW
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:03:54 +0000
X-Inumbo-ID: 7eb679b1-11a5-4ea6-aa34-5fc261ea0c99
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7eb679b1-11a5-4ea6-aa34-5fc261ea0c99;
	Fri, 20 Nov 2020 16:03:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1605888232; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kQuRyDTTo9Vnzyr/gEj6ClA7/m3i6E0omdsh9NeGv60=;
	b=jYVGhZuLaODWsNcgzLVP46xv5fNAN42vQibDSdhrHnJRSu0VbK+WdMwRG521atSZ9hDnG5
	fteSzFfs1vXA8LCHlHLaYZwSr/G8dSKrtoZvf8+IS7C5QUJKU7DcOJSx1PlQeWKgB90yxk
	elQ08DD+VmEJfgnQMvRGEyNoOpGQxEI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B0D04ACC5;
	Fri, 20 Nov 2020 16:03:52 +0000 (UTC)
Subject: Re: [PATCH v2 1/7] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
To: Julien Grall <julien@xen.org>
Cc: alex.bennee@linaro.org, masami.hiramatsu@linaro.org, ehem+xen@m5p.com,
 bertrand.marquis@arm.com, andre.przywara@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <96a97d2f-90dd-c4a7-5747-825c832ce56d@suse.com>
Date: Fri, 20 Nov 2020 17:03:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201023154156.6593-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.10.2020 17:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
> 
> Currently, the former are still containing x86 specific code.
> 
> To avoid this rather strange split, the generic helpers are reworked so
> they are arch-agnostic. This requires the introduction of a new helper
> __acpi_os_unmap_memory() that will undo any mapping done by
> __acpi_os_map_memory().
> 
> Currently, the arch-helper for unmap is basically a no-op so it only
> returns whether the mapping was arch specific. But this will change
> in the future.
> 
> Note that the x86 version of acpi_os_map_memory() was already able to
> map the 1MB region, hence there is no addition of new code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
> Tested-by: Rahul Singh <rahul.singh@arm.com>

This change breaks shutdown on x86. Either Dom0 no longer requests S5
(in which case I'd expect some data collection there to fail), or Xen
refuses the request. As a result, things go the machine_halt() path
instead. I've looked over the change again, but couldn't spot anything
yet which might explain the behavior. Yet reverting (just the non-Arm
parts, so I wouldn't have to revert multiple commits) made things
work again.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 16:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 16:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32506.63545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8yG-0006ps-F4; Fri, 20 Nov 2020 16:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32506.63545; Fri, 20 Nov 2020 16:09:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg8yG-0006pl-C7; Fri, 20 Nov 2020 16:09:12 +0000
Received: by outflank-mailman (input) for mailman id 32506;
 Fri, 20 Nov 2020 16:09:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kg8yF-0006pg-9y
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:09:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kg8yE-0002nV-6f; Fri, 20 Nov 2020 16:09:10 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kg8yD-0006L8-S6; Fri, 20 Nov 2020 16:09:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kg8yF-0006pg-9y
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:09:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=iT8QI1tbSBs5VikmYuAPyr3fJ3V67U4xYEXHCEdjmrw=; b=c5LiGrkhJo71vqVHLuD97zvQL+
	E5HT9x+OmnVuR0DYSVgaQPTPZVH/9YmgBvHjXCT++9el5FeD+wEiO1pRtQShZiFNM/dDYkHsfNts/
	AQDmniwsHitWVqhTZibnFAAWNsMrnxKqfzUaiKUC2o7+lAchC7lWBnBhbrwMRMWE4vgY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kg8yE-0002nV-6f; Fri, 20 Nov 2020 16:09:10 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kg8yD-0006L8-S6; Fri, 20 Nov 2020 16:09:10 +0000
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-5-julien@xen.org>
 <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org>
Date: Fri, 20 Nov 2020 16:09:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 20/11/2020 01:46, Stefano Stabellini wrote:
> On Thu, 19 Nov 2020, Julien Grall wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> At the moment, xen_pt_update_entry() only supports mappings at level 3
>> (i.e. 4KB mappings). While this is fine for most of the runtime helpers,
>> the boot code will require the use of superpage mappings.
>>
>> We don't want to allow superpage mappings by default as some of the
>> callers may expect small mappings (i.e. populate_pt_range()) or even
>> expect to unmap only a part of a superpage.
>>
>> To keep the code simple, a new flag _PAGE_BLOCK is introduced to
>> allow the caller to enable superpage mapping.
>>
>> As the code doesn't support all the combinations, xen_pt_check_entry()
>> is extended to take into account the cases we don't support when
>> using block mapping:
>>      - Replacing a table with a mapping. This may happen if a region was
>>      first mapped with a 4KB mapping and then later replaced with a 2MB
>>      (or 1GB) mapping
>>      - Removing/modifying a table. This may happen if a caller tries to
>>      remove a region with _PAGE_BLOCK set when it was created without it
>>
>> Note that the current restrictions mean that the caller must ensure that
>> _PAGE_BLOCK is consistently set/cleared across all the updates on a
>> given virtual region. This ought to be fine with the expected use-cases.
>>
>> More rework will be necessary if we wanted to remove the restrictions.
>>
>> Note that nr_mfns is now marked const as it is used for flushing the
>> TLBs and we don't want it to be modified.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 
> Thanks for the patch, you might want to update the Signed-off-by (even
> if you haven't changed the patch)

Yes, I realized it afterwards. I will update it in the next version.

>>   static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>> -                               mfn_t mfn, unsigned int flags)
>> +                               mfn_t mfn, unsigned int page_order,
>> +                               unsigned int flags)
>>   {
>>       int rc;
>>       unsigned int level;
>> -    /* We only support 4KB mapping (i.e level 3) for now */
>> -    unsigned int target = 3;
>> +    unsigned int target = 3 - (page_order / LPAE_SHIFT);
> 
> Given that page_order is not used for anything else in this function,
> wouldn't it be easier to just pass the target level to
> xen_pt_update_entry? Calculating target from page_order, when page_order
> is otherwise unused, it doesn't look like the most straightforward way
> to do it.

FWIW, this is the same approach as in __p2m_set_entry() 
(xen_pt_update_entry() is derived from it).

Anyway, in the caller, we need to know the size of the mapping. I would 
rather avoid having to keep two variables when one can "easily" be 
inferred from the other.

One possibility would be to introduce a static array level_orders 
(one already exists in the p2m) that would allow us to easily convert 
from a level to an order.

Let me see if that fits with my next plan (I am looking to add support 
for the contiguous bit as well).

> 
> 
>>       lpae_t *table;
>>       /*
>>        * The intermediate page tables are read-only when the MFN is not valid
>> @@ -1186,7 +1204,7 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>>       entry = table + offsets[level];
>>   
>>       rc = -EINVAL;
>> -    if ( !xen_pt_check_entry(*entry, mfn, flags) )
>> +    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
>>           goto out;
>>   
>>       /* If we are only populating page-table, then we are done. */
>> @@ -1204,8 +1222,11 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>>           {
>>               pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
>>   
>> -            /* Third level entries set pte.pt.table = 1 */
>> -            pte.pt.table = 1;
>> +            /*
>> +             * First and second level pages set pte.pt.table = 0, but
>> +             * third level entries set pte.pt.table = 1.
>> +             */
>> +            pte.pt.table = (level == 3);
>>           }
>>           else /* We are updating the permission => Copy the current pte. */
>>               pte = *entry;
>> @@ -1229,11 +1250,12 @@ static DEFINE_SPINLOCK(xen_pt_lock);
>>   
>>   static int xen_pt_update(unsigned long virt,
>>                            mfn_t mfn,
>> -                         unsigned long nr_mfns,
>> +                         const unsigned long nr_mfns,
>>                            unsigned int flags)
>>   {
>>       int rc = 0;
>> -    unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
>> +    unsigned long vfn = paddr_to_pfn(virt);
>> +    unsigned long left = nr_mfns;
> 
> Given that paddr_to_pfn is meant for physical addresses, I would rather
> opencode paddr_to_pfn using PAGE_SHIFT here. Again, just a suggestion.

paddr_to_pfn() is poorly named. It is meant to take any address and 
return the frame.

There are wrappers for machine addresses and guest addresses, but there 
is no equivalent for virtual addresses yet.

Long term, I would like to kill paddr_to_pfn() use on Arm in favor of 
the typesafe version. So I should probably not introduce a new one :).

I will open-code the shift.

> 
>>       /*
>>        * For arm32, page-tables are different on each CPUs. Yet, they share
>> @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
>>   
>>       spin_lock(&xen_pt_lock);
>>   
>> -    for ( ; addr < addr_end; addr += PAGE_SIZE )
>> +    while ( left )
>>       {
>> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
>> +        unsigned int order;
>> +        unsigned long mask;
>> +
>> +        /*
>> +         * Don't take into account the MFN when removing mapping (i.e
>> +         * MFN_INVALID) to calculate the correct target order.
>> +         *
>> +         * XXX: Support superpage mappings if nr is not aligned to a
>> +         * superpage size.
> 
> It would be good to add another sentence to explain that the checks
> below are simply based on masks and rely on the mfn, vfn, and also
> nr_mfn to be superpage aligned. (It took me some time to figure it out.)

I am not sure I understand what you wrote here. Could you suggest a 
sentence?

Regarding the TODO itself, we have the exact same one in the P2M code. I 
haven't found a clever way to deal with it yet. Any idea how this could 
be solved?
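For reference, the mask-based order calculation being discussed can be sketched roughly as below. This is a simplified stand-alone illustration, not the actual Xen code: the helper name xen_pt_mapping_order and the order constants are hypothetical, and the real code uses Xen's BIT()/mfn_t machinery. The point is that the check is purely alignment-based, which is why vfn, the MFN and the remaining count all need to be superpage-aligned for a superpage to be used:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical order names for a 4KB granule: a third-level entry maps
 * one page, a second-level block maps 2^9 pages (2MB) and a first-level
 * block maps 2^18 pages (1GB). */
#define SECOND_ORDER  9U
#define FIRST_ORDER   18U

/*
 * Pick the largest mapping order usable at this point. vfn, the mfn
 * (only considered when a real mapping is inserted, i.e. not for
 * removals where the MFN is invalid) and the number of frames still to
 * map must all allow a block mapping of that order. The test is a
 * simple mask of the low bits, hence the alignment requirement.
 */
static unsigned int xen_pt_mapping_order(unsigned long vfn,
                                         unsigned long mfn,
                                         bool mfn_valid,
                                         unsigned long left)
{
    unsigned long mask = vfn | (mfn_valid ? mfn : 0);

    if ( !(mask & ((1UL << FIRST_ORDER) - 1)) &&
         left >= (1UL << FIRST_ORDER) )
        return FIRST_ORDER;

    if ( !(mask & ((1UL << SECOND_ORDER) - 1)) &&
         left >= (1UL << SECOND_ORDER) )
        return SECOND_ORDER;

    return 0;
}
```

A caller would then map 1UL << order frames, advance vfn (and the MFN, when valid) by that amount and subtract it from left on each iteration.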

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 16:42:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 16:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32520.63557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg9UC-0002fS-31; Fri, 20 Nov 2020 16:42:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32520.63557; Fri, 20 Nov 2020 16:42:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg9UB-0002fL-Vu; Fri, 20 Nov 2020 16:42:11 +0000
Received: by outflank-mailman (input) for mailman id 32520;
 Fri, 20 Nov 2020 16:42:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg9UA-0002fC-DC; Fri, 20 Nov 2020 16:42:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg9UA-0003S3-3O; Fri, 20 Nov 2020 16:42:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kg9U9-0006ts-PS; Fri, 20 Nov 2020 16:42:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kg9U9-0003U3-Oy; Fri, 20 Nov 2020 16:42:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3t9XjYtEcVKS5NGfev3ZFfCYgTjd3I7IOxGNVKfUL3g=; b=eunv/74um5BxGMfp06TWgJiSGn
	XEWcwFNfM4tZtSX9FCnPmk1QComNqz7uUZBzBFBlQQUV/yJc5WBQ0kNIP5fnWMYmfYLpO6QkxCT7O
	5Nv5Hc3j+f5nV7v6IbvOut1C69dSOP1y00zaLZhy/bN6T2VnTM2i8p5ESX8p7JnGabt4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156890: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7fbd7e710323c8f4c5f6a38a8ae0e6726b5a4599
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 16:42:09 +0000

flight 156890 qemu-mainline real [real]
flight 156900 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156890/
http://logs.test-lab.xenproject.org/osstest/logs/156900/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7fbd7e710323c8f4c5f6a38a8ae0e6726b5a4599
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   92 days
Failing since        152659  2020-08-21 14:07:39 Z   91 days  193 attempts
Testing same since   156890  2020-11-20 01:06:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67249 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 16:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 16:52:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32528.63571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg9eG-0003rq-Ax; Fri, 20 Nov 2020 16:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32528.63571; Fri, 20 Nov 2020 16:52:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kg9eG-0003rj-7w; Fri, 20 Nov 2020 16:52:36 +0000
Received: by outflank-mailman (input) for mailman id 32528;
 Fri, 20 Nov 2020 16:52:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HtV5=E2=gmail.com=niveditas98@srs-us1.protection.inumbo.net>)
 id 1kg9eF-0003re-3M
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 16:52:35 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ed703fa-8215-404f-b5ab-093037792211;
 Fri, 20 Nov 2020 16:52:34 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id u4so9468503qkk.10
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 08:52:34 -0800 (PST)
Received: from rani.riverdale.lan ([2001:470:1f07:5f3::b55f])
 by smtp.gmail.com with ESMTPSA id e6sm637759qtb.57.2020.11.20.08.52.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Nov 2020 08:52:32 -0800 (PST)
X-Inumbo-ID: 4ed703fa-8215-404f-b5ab-093037792211
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:date:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=zg6zl9X6YJy4l7Imn9Y9548H4d9zP5iPhLHw2KjR81I=;
        b=KYTAe23ZfGGHGNAGHpdYB3UZEe8FVCqTVCQmDrAaYYcXWlNJH5iP+0o9R7u9ErRxc0
         8UTAiYyqjuJRoip43grTiRnshuQTsLEGjeAode6kQusufUzWWcu9lNzMRs+SLByYtcJt
         0Wcjmpr74BVpkZ7AnCaZDkUrMmpDol1ArR8LZuK45lIOENZv3uEJjL7p8IaCD9UG2lZ4
         TUTGLuu7SrwiZDu1Qx52MZrXN+igJ3dVBgYr4hDrHGE87Gw1paV6aElVEbHB5/ViuBqz
         Yl+FlX9xWX5xpbQNHtKJyEL5OyJsEbcDzUKY73y3RrF04mSR9lIjLrVS7LrX4RSucrk6
         H1nw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:date:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=zg6zl9X6YJy4l7Imn9Y9548H4d9zP5iPhLHw2KjR81I=;
        b=Nk8Q8cQ2uxpIJ0r+uGEjCu7Ucr7W3TikoLTO/+FqTcIrwy2h8TTXS1QfIotiKOlGjM
         oJIWJIgQSB//VLgITl2k4NaX1fqryuRBfeDS9qax/nu48/TxPGoBlP4fvz6eiygWXS4h
         El3lzovgn1Q6zrifgdQc4U4QQxj+P9dnMNy50B1mpXD3CPemgO001QUWB9483x9POtVP
         Nqzkd1YqEnzwYbQqhAr9nwt6HSYOkYS1zyWAj0mMj8hVyz3cZjvPdcenowBIhx03ID1W
         mrb+1dibrxk/n7dRoWA5eOwnB3dzePQnbddFTuEvPOr9fuXfa9v27IWYIYyVLsJRcKC1
         CRgg==
X-Gm-Message-State: AOAM530PoovHvrmjSwnH42LfOa5tlwWInVbraWmm7WrpG423nOLJ8Fu4
	34YLhyKUzPoic9V4UF6arns=
X-Google-Smtp-Source: ABdhPJzQXE8TGuFjICs2cksSw1/J1lR2Og/k6v3kk7zVcez12b9Lg4Iso3TVZEA0vEu0e9mBnNPnxQ==
X-Received: by 2002:a37:aa93:: with SMTP id t141mr16890603qke.400.1605891153722;
        Fri, 20 Nov 2020 08:52:33 -0800 (PST)
Sender: Arvind Sankar <niveditas98@gmail.com>
From: Arvind Sankar <nivedita@alum.mit.edu>
X-Google-Original-From: Arvind Sankar <arvind@rani.riverdale.lan>
Date: Fri, 20 Nov 2020 11:52:31 -0500
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, peterz@infradead.org,
	luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>
Subject: Re: [PATCH v2 08/12] x86/paravirt: remove no longer needed 32-bit
 pvops cruft
Message-ID: <20201120165231.GA1074066@rani.riverdale.lan>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-9-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-9-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:26PM +0100, Juergen Gross wrote:
> PVOP_VCALL4() is only used for Xen PV, while PVOP_CALL4() isn't used
> at all. Keep PVOP_CALL4() for 64 bits due to symmetry reasons.
> 
> This allows removing the 32-bit definitions of those macros, leading
> to a substantial simplification of the paravirt macros, as those were
> the only ones needing non-empty "pre" and "post" parameters.
> 
> PVOP_CALLEE2() and PVOP_VCALLEE2() are used nowhere, so remove them.
> 
> Another no longer needed case is special handling of return types
> larger than unsigned long. Replace that with a BUILD_BUG_ON().
> 
> DISABLE_INTERRUPTS() is used in 32-bit code only, so it can just be
> replaced by cli.
> 
> INTERRUPT_RETURN in 32-bit code can be replaced by iret.
> 
> GET_CR2_INTO_AX and ENABLE_INTERRUPTS are used nowhere, so they can
> be removed.

As a heads-up, GET_CR2_INTO_AX has been removed in tip:master already by
  31d854603305 ("x86/head/64: Remove unused GET_CR2_INTO() macro")


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 17:45:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 17:45:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32539.63589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgASt-00011j-IP; Fri, 20 Nov 2020 17:44:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32539.63589; Fri, 20 Nov 2020 17:44:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgASt-00011c-F7; Fri, 20 Nov 2020 17:44:55 +0000
Received: by outflank-mailman (input) for mailman id 32539;
 Fri, 20 Nov 2020 17:44:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kgASr-00011X-JS
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 17:44:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kgASp-0004hE-MG; Fri, 20 Nov 2020 17:44:51 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kgASp-00004y-9m; Fri, 20 Nov 2020 17:44:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=GFHFypjQIAns7bCBdheatPzTqbbWDfglKgB92b6DICo=; b=MebDd2bjGxYXwYY02QoOy4dgtf
	mvaDpv7XJhjWUIQv4+YFAwu2Jxohw1nmXpr49IwhdaBhssdxSMCuCPk0ykqc4Eb/oCnsyoltXKqZd
	R+wmIZH7qFpMA3RYzJPUHz6kjU09HsntlzeYQGx0r89n//3syWTStlrztDaYUHniCzAw=;
Subject: Re: [PATCH v2 1/7] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
To: Jan Beulich <jbeulich@suse.com>
Cc: alex.bennee@linaro.org, masami.hiramatsu@linaro.org, ehem+xen@m5p.com,
 bertrand.marquis@arm.com, andre.przywara@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-2-julien@xen.org>
 <96a97d2f-90dd-c4a7-5747-825c832ce56d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <dffb010c-c647-89e6-293a-0b2f4a910503@xen.org>
Date: Fri, 20 Nov 2020 17:44:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <96a97d2f-90dd-c4a7-5747-825c832ce56d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/11/2020 16:03, Jan Beulich wrote:
> On 23.10.2020 17:41, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
>> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
>>
>> Currently, the former are still containing x86 specific code.
>>
>> To avoid this rather strange split, the generic helpers are reworked so
>> they are arch-agnostic. This requires the introduction of a new helper
>> __acpi_os_unmap_memory() that will undo any mapping done by
>> __acpi_os_map_memory().
>>
>> Currently, the arch-helper for unmap is basically a no-op so it only
>> returns whether the mapping was arch specific. But this will change
>> in the future.
>>
>> Note that the x86 version of acpi_os_map_memory() was already able to
>> map the 1MB region, hence no new code needs to be added.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
>> Tested-by: Rahul Singh <rahul.singh@arm.com>
> 
> This change breaks shutdown on x86. Either Dom0 no longer requests S5
> (in which case I'd expect some data collection there to fail), or Xen
> refuses the request. As a result, things go the machine_halt() path
> instead. I've looked over the change again, but couldn't spot anything
> yet which might explain the behavior. Yet reverting (just the non-Arm
> parts, so I wouldn't have to revert multiple commits) made things
> work again.

Thank you for the report and sorry for the breakage.

When changing the behavior of __acpi_map_table(), I failed to realize 
that some x86 code calls it directly rather than going through 
acpi_os_map_memory(). This is the case for acpi_fadt_parse_sleep_info(), 
which detects whether ACPI can be used to put the system into a sleep state.

I am tempted to require all callers that need to map memory to use the 
generic implementation acpi_os_{,un}map_memory().

However, AFAICT, some of the callers (such as acpi_sleep_prepare()) use 
__acpi_map_table() because that function never failed before. With the 
generic function, all mappings after boot would go through vmap(), which 
may fail.

Would this new behavior be acceptable to you?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 18:29:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 18:29:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32550.63602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgB9b-0005bW-4h; Fri, 20 Nov 2020 18:29:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32550.63602; Fri, 20 Nov 2020 18:29:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgB9b-0005bP-1U; Fri, 20 Nov 2020 18:29:03 +0000
Received: by outflank-mailman (input) for mailman id 32550;
 Fri, 20 Nov 2020 18:29:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ma5r=E2=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kgB9Z-0005bK-PB
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:29:01 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.204])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9e69301-f83a-4884-8f65-f462eae4b1a1;
 Fri, 20 Nov 2020 18:29:00 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay04.hostedemail.com (Postfix) with ESMTP id 2DC80180A7FF1;
 Fri, 20 Nov 2020 18:29:00 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.133.149])
 (Authenticated sender: joe@perches.com)
 by omf05.hostedemail.com (Postfix) with ESMTPA;
 Fri, 20 Nov 2020 18:28:49 +0000 (UTC)
X-Inumbo-ID: e9e69301-f83a-4884-8f65-f462eae4b1a1
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 2,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1539:1593:1594:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:2731:2828:3138:3139:3140:3141:3142:3352:3622:3865:3866:3867:3870:3871:3874:4321:4362:5007:6742:6743:10004:10400:10848:11232:11658:11914:12297:12740:12760:12895:13069:13311:13357:13439:14180:14659:14721:21060:21067:21080:21627:21990:30012:30054:30070:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: woman67_620d0012734d
X-Filterd-Recvd-Size: 3843
Message-ID: <3e0bbb1644fe53d79322c2feb28ccaf3e20c0e94.camel@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: Joe Perches <joe@perches.com>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	linux-kernel@vger.kernel.org
Cc: alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
 bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
 cluster-devel@redhat.com, coreteam@netfilter.org,
 devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, 
 linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
 linux-crypto@vger.kernel.org, linux-decnet-user@lists.sourceforge.net, 
 linux-ext4@vger.kernel.org, linux-fbdev@vger.kernel.org, 
 linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
 linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
 linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
 linux-iio@vger.kernel.org, linux-input@vger.kernel.org, 
 linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
 linux-media@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org,
  linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org, 
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
 oss-drivers@netronome.com, patches@opensource.cirrus.com, 
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
 samba-technical@lists.samba.org, selinux@vger.kernel.org, 
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org,  Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>, Kees Cook <keescook@chromium.org>
Date: Fri, 20 Nov 2020 10:28:48 -0800
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Fri, 2020-11-20 at 12:21 -0600, Gustavo A. R. Silva wrote:
> Hi all,
> 
> This series aims to fix almost all remaining fall-through warnings in
> order to enable -Wimplicit-fallthrough for Clang.
> 
> In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> add multiple break/goto/return/fallthrough statements instead of just
> letting the code fall through to the next case.
> 
> Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> change[1] is meant to be reverted at some point. So, this patch helps
> to move in that direction.

This was a bit hard to parse for a second or three.

Thanks Gustavo.

How was this change done?




From xen-devel-bounces@lists.xenproject.org Fri Nov 20 18:40:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 18:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32561.63613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBKI-0007ap-8Y; Fri, 20 Nov 2020 18:40:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32561.63613; Fri, 20 Nov 2020 18:40:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBKI-0007ai-5T; Fri, 20 Nov 2020 18:40:06 +0000
Received: by outflank-mailman (input) for mailman id 32561;
 Fri, 20 Nov 2020 18:40:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ShNn=E2=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kgBKG-0007O8-Oc
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:40:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84a21b75-b21a-47c3-b7d7-62b12c91a21b;
 Fri, 20 Nov 2020 18:40:04 +0000 (UTC)
X-Inumbo-ID: 84a21b75-b21a-47c3-b7d7-62b12c91a21b
Subject: Re: [GIT PULL] xen: branch for v5.10-rc5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605897603;
	bh=yrhE7j00Mtsf9/fQt335X97bxVz/PrGLW3m6roM1KR4=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=08MvlJg85Q1TZ/e5dO04m95fWliauAvm09XpRu0pSGoM4dptkUo6grfMn3qH+fRyT
	 gjmliPDEsW56C8VhyKaN5nYjN2yw8xPf3es1h55RtC/bjRJygNg4sjlUlEpbA1+j+I
	 BR7aCx6LdR/L1cjax4HB6sE4jTUxu7OknBrzFDfo=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201120055947.613-1-jgross@suse.com>
References: <20201120055947.613-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20201120055947.613-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc5-tag
X-PR-Tracked-Commit-Id: 65cae18882f943215d0505ddc7e70495877308e6
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 4ccf7a01e805f04defd423fb410f47a13af76399
Message-Id: <160589760312.4306.8801434833258557230.pr-tracker-bot@kernel.org>
Date: Fri, 20 Nov 2020 18:40:03 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Fri, 20 Nov 2020 06:59:47 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc5-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/4ccf7a01e805f04defd423fb410f47a13af76399

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:02:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32579.63626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBg3-0001bm-4d; Fri, 20 Nov 2020 19:02:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32579.63626; Fri, 20 Nov 2020 19:02:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBg3-0001bf-1Q; Fri, 20 Nov 2020 19:02:35 +0000
Received: by outflank-mailman (input) for mailman id 32579;
 Fri, 20 Nov 2020 19:02:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5mWO=E2=embeddedor.com=gustavo@srs-us1.protection.inumbo.net>)
 id 1kgBg1-0001ba-VK
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 19:02:34 +0000
Received: from gateway33.websitewelcome.com (unknown [192.185.145.9])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0b041e7-4fc2-4772-96fa-54904629bbd4;
 Fri, 20 Nov 2020 19:02:31 +0000 (UTC)
Received: from cm17.websitewelcome.com (cm17.websitewelcome.com [100.42.49.20])
 by gateway33.websitewelcome.com (Postfix) with ESMTP id 7C769EC185
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 13:02:31 -0600 (CST)
Received: from gator4166.hostgator.com ([108.167.133.22]) by cmsmtp with SMTP
 id gBfzkUIxBAAk4gBfzkygMd; Fri, 20 Nov 2020 13:02:31 -0600
Received: from 187-162-31-110.static.axtel.net ([187.162.31.110]:52198
 helo=[192.168.15.4])
 by gator4166.hostgator.com with esmtpsa (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.93)
 (envelope-from <gustavo@embeddedor.com>)
 id 1kgBfw-0000VH-WA; Fri, 20 Nov 2020 13:02:29 -0600
X-Inumbo-ID: c0b041e7-4fc2-4772-96fa-54904629bbd4
X-Authority-Reason: nr=8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=embeddedor.com; s=default; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=+4VxQ3E1SMf3fqpPxA59SXnFTIYnqNYQQfNHAF4XRGw=; b=n09KqYnyKHrO7dveIUWc55mzig
	y4HUZZbrqh9oIGiAW16mTaR8pcvnAozTxI+rYVVIt9CxAErudOIqW44giqY0jAAxUq2Az5FkJRw8D
	DdYM3ceCwzW72Iyh3zFwKX+btf9zXkIFnOla0anfthw4DZBTFFqyoxwOvEOl2OJ26TOTLqWU9obCG
	intkFNaJRljaeNwR96RWCduzT1c1ht64s9kRw8R+vb7ftuKF/SC10c1DXlPqmQL512w8QCfyvTWtG
	dzgy+MeNJCgbGq47RynDwYUz9POm4OiFKWoqxvHGtR/GFZ1BW9GSDXleqWC/tOZ433VaTMSGJy7Ut
	yE2h40Yg==;
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Joe Perches <joe@perches.com>, "Gustavo A. R. Silva"
 <gustavoars@kernel.org>, linux-kernel@vger.kernel.org
Cc: alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org,
 bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org,
 cluster-devel@redhat.com, coreteam@netfilter.org,
 devel@driverdev.osuosl.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com,
 dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
 GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
 intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
 linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
 linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-arm-msm@vger.kernel.org, linux-atm-general@lists.sourceforge.net,
 linux-block@vger.kernel.org, linux-can@vger.kernel.org,
 linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
 linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
 linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
 linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
 linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
 linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
 linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-mm@kvack.org,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
 linux-security-module@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com, patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org, selinux@vger.kernel.org,
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net,
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>,
 Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda
 <ojeda@kernel.org>, Kees Cook <keescook@chromium.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <3e0bbb1644fe53d79322c2feb28ccaf3e20c0e94.camel@perches.com>
From: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Autocrypt: addr=gustavo@embeddedor.com; keydata=
 xsFNBFssHAwBEADIy3ZoPq3z5UpsUknd2v+IQud4TMJnJLTeXgTf4biSDSrXn73JQgsISBwG
 2Pm4wnOyEgYUyJd5tRWcIbsURAgei918mck3tugT7AQiTUN3/5aAzqe/4ApDUC+uWNkpNnSV
 tjOx1hBpla0ifywy4bvFobwSh5/I3qohxDx+c1obd8Bp/B/iaOtnq0inli/8rlvKO9hp6Z4e
 DXL3PlD0QsLSc27AkwzLEc/D3ZaqBq7ItvT9Pyg0z3Q+2dtLF00f9+663HVC2EUgP25J3xDd
 496SIeYDTkEgbJ7WYR0HYm9uirSET3lDqOVh1xPqoy+U9zTtuA9NQHVGk+hPcoazSqEtLGBk
 YE2mm2wzX5q2uoyptseSNceJ+HE9L+z1KlWW63HhddgtRGhbP8pj42bKaUSrrfDUsicfeJf6
 m1iJRu0SXYVlMruGUB1PvZQ3O7TsVfAGCv85pFipdgk8KQnlRFkYhUjLft0u7CL1rDGZWDDr
 NaNj54q2CX9zuSxBn9XDXvGKyzKEZ4NY1Jfw+TAMPCp4buawuOsjONi2X0DfivFY+ZsjAIcx
 qQMglPtKk/wBs7q2lvJ+pHpgvLhLZyGqzAvKM1sVtRJ5j+ARKA0w4pYs5a5ufqcfT7dN6TBk
 LXZeD9xlVic93Ju08JSUx2ozlcfxq+BVNyA+dtv7elXUZ2DrYwARAQABzStHdXN0YXZvIEEu
 IFIuIFNpbHZhIDxndXN0YXZvYXJzQGtlcm5lbC5vcmc+wsGrBBMBCAA+FiEEkmRahXBSurMI
 g1YvRwW0y0cG2zEFAl6zFvQCGyMFCQlmAYAFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AAIQkQ
 RwW0y0cG2zEWIQSSZFqFcFK6swiDVi9HBbTLRwbbMZsEEACWjJyXLjtTAF21Vuf1VDoGzitP
 oE69rq9UhXIGR+e0KACyIFoB9ibG/1j/ESMa0RPSwLpJDLgfvi/I18H/9cKtdo2uz0XNbDT8
 i3llIu0b43nzGIDzRudINBXC8Coeob+hrp/MMZueyzt0CUoAnY4XqpHQbQsTfTrpFeHT02Qz
 ITw6kTSmK7dNbJj2naH2vSrU11qGdU7aFzI7jnVvGgv4NVQLPxm/t4jTG1o+P1Xk4N6vKafP
 zqzkxj99JrUAPt+LyPS2VpNvmbSNq85PkQ9gpeTHpkio/D9SKsMW62njITPgy6M8TFAmx8JF
 ZAI6k8l1eU29F274WnlQ6ZokkJoNctwHa+88euWKHWUDolCmQpegJJ8932www83GLn1mdUZn
 NsymjFSdMWE+y8apWaV9QsDOKWf7pY2uBuE6GMPRhX7e7h5oQwa1lYeO2L9LTDeXkEOJe+hE
 qQdEEvkC/nok0eoRlBlZh433DQlv4+IvSsfN/uWld2TuQFyjDCLIm1CPRfe7z0TwiCM27F+O
 lHnUspCFSgpnrxqNH6CM4aj1EF4fEX+ZyknTSrKL9BGZ/qRz7Xe9ikU2/7M1ov6rOXCI4NR9
 THsNax6etxCBMzZs2bdMHMcajP5XdRsOIARuN08ytRjDolR2r8SkTN2YMwxodxNWWDC3V8X2
 RHZ4UwQw487BTQRbLBwMARAAsHCE31Ffrm6uig1BQplxMV8WnRBiZqbbsVJBH1AAh8tq2ULl
 7udfQo1bsPLGGQboJSVN9rckQQNahvHAIK8ZGfU4Qj8+CER+fYPp/MDZj+t0DbnWSOrG7z9H
 IZo6PR9z4JZza3Hn/35jFggaqBtuydHwwBANZ7A6DVY+W0COEU4of7CAahQo5NwYiwS0lGis
 LTqks5R0Vh+QpvDVfuaF6I8LUgQR/cSgLkR//V1uCEQYzhsoiJ3zc1HSRyOPotJTApqGBq80
 X0aCVj1LOiOF4rrdvQnj6iIlXQssdb+WhSYHeuJj1wD0ZlC7ds5zovXh+FfFl5qH5RFY/qVn
 3mNIVxeO987WSF0jh+T5ZlvUNdhedGndRmwFTxq2Li6GNMaolgnpO/CPcFpDjKxY/HBUSmaE
 9rNdAa1fCd4RsKLlhXda+IWpJZMHlmIKY8dlUybP+2qDzP2lY7kdFgPZRU+ezS/pzC/YTzAv
 CWM3tDgwoSl17vnZCr8wn2/1rKkcLvTDgiJLPCevqpTb6KFtZosQ02EGMuHQI6Zk91jbx96n
 rdsSdBLGH3hbvLvjZm3C+fNlVb9uvWbdznObqcJxSH3SGOZ7kCHuVmXUcqozol6ioMHMb+In
 rHPP16aVDTBTPEGwgxXI38f7SUEn+NpbizWdLNz2hc907DvoPm6HEGCanpcAEQEAAcLBZQQY
 AQgADwUCWywcDAIbDAUJCWYBgAAKCRBHBbTLRwbbMdsZEACUjmsJx2CAY+QSUMebQRFjKavw
 XB/xE7fTt2ahuhHT8qQ/lWuRQedg4baInw9nhoPE+VenOzhGeGlsJ0Ys52sdXvUjUocKgUQq
 6ekOHbcw919nO5L9J2ejMf/VC/quN3r3xijgRtmuuwZjmmi8ct24TpGeoBK4WrZGh/1hAYw4
 ieARvKvgjXRstcEqM5thUNkOOIheud/VpY+48QcccPKbngy//zNJWKbRbeVnimua0OpqRXhC
 rEVm/xomeOvl1WK1BVO7z8DjSdEBGzbV76sPDJb/fw+y+VWrkEiddD/9CSfgfBNOb1p1jVnT
 2mFgGneIWbU0zdDGhleI9UoQTr0e0b/7TU+Jo6TqwosP9nbk5hXw6uR5k5PF8ieyHVq3qatJ
 9K1jPkBr8YWtI5uNwJJjTKIA1jHlj8McROroxMdI6qZ/wZ1ImuylpJuJwCDCORYf5kW61fcr
 HEDlIvGc371OOvw6ejF8ksX5+L2zwh43l/pKkSVGFpxtMV6d6J3eqwTafL86YJWH93PN+ZUh
 6i6Rd2U/i8jH5WvzR57UeWxE4P8bQc0hNGrUsHQH6bpHV2lbuhDdqo+cM9ehGZEO3+gCDFmK
 rjspZjkJbB5Gadzvts5fcWGOXEvuT8uQSvl+vEL0g6vczsyPBtqoBLa9SNrSVtSixD1uOgyt
 AP7RWS474w==
Message-ID: <9f986394-125a-81f7-7696-fe1a9f4eb4f5@embeddedor.com>
Date: Fri, 20 Nov 2020 13:02:34 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3e0bbb1644fe53d79322c2feb28ccaf3e20c0e94.camel@perches.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - gator4166.hostgator.com
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - embeddedor.com
X-BWhitelist: no
X-Source-IP: 187.162.31.110
X-Source-L: No
X-Exim-ID: 1kgBfw-0000VH-WA
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: 187-162-31-110.static.axtel.net ([192.168.15.4]) [187.162.31.110]:52198
X-Source-Auth: gustavo@embeddedor.com
X-Email-Count: 72
X-Source-Cap: Z3V6aWRpbmU7Z3V6aWRpbmU7Z2F0b3I0MTY2Lmhvc3RnYXRvci5jb20=
X-Local-Domain: yes



On 11/20/20 12:28, Joe Perches wrote:
> On Fri, 2020-11-20 at 12:21 -0600, Gustavo A. R. Silva wrote:
>> Hi all,
>>
>> This series aims to fix almost all remaining fall-through warnings in
>> order to enable -Wimplicit-fallthrough for Clang.
>>
>> In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
>> add multiple break/goto/return/fallthrough statements instead of just
>> letting the code fall through to the next case.
>>
>> Notice that in order to enable -Wimplicit-fallthrough for Clang, this
>> change[1] is meant to be reverted at some point. So, this patch helps
>> to move in that direction.
> 
> This was a bit hard to parse for a second or three.
> 
> Thanks Gustavo.
> 
> How was this change done?

I audited each case individually in order to determine the best fit
for each situation. Depending on the surrounding logic, a goto or a
fallthrough sometimes makes more sense than merely a break.

Thanks
--
Gustavo


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:04:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:04:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32585.63638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBiI-0001m5-J0; Fri, 20 Nov 2020 19:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32585.63638; Fri, 20 Nov 2020 19:04:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBiI-0001ly-Eq; Fri, 20 Nov 2020 19:04:54 +0000
Received: by outflank-mailman (input) for mailman id 32585;
 Fri, 20 Nov 2020 19:04:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5mWO=E2=embeddedor.com=gustavo@srs-us1.protection.inumbo.net>)
 id 1kgBiH-0001lt-7a
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 19:04:53 +0000
Received: from gateway24.websitewelcome.com (unknown [192.185.51.59])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7613a9a8-e4d2-4982-8138-c85e5ac809bb;
 Fri, 20 Nov 2020 19:04:52 +0000 (UTC)
Received: from cm11.websitewelcome.com (cm11.websitewelcome.com [100.42.49.5])
 by gateway24.websitewelcome.com (Postfix) with ESMTP id CCEBF4B03
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 13:04:51 -0600 (CST)
Received: from gator4166.hostgator.com ([108.167.133.22]) by cmsmtp with SMTP
 id gBiFk7OvLnPrxgBiFkc0e2; Fri, 20 Nov 2020 13:04:51 -0600
Received: from 187-162-31-110.static.axtel.net ([187.162.31.110]:52360
 helo=[192.168.15.4])
 by gator4166.hostgator.com with esmtpsa (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.93)
 (envelope-from <gustavo@embeddedor.com>)
 id 1kgBiD-00024G-7d; Fri, 20 Nov 2020 13:04:49 -0600
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5mWO=E2=embeddedor.com=gustavo@srs-us1.protection.inumbo.net>)
	id 1kgBiH-0001lt-7a
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 19:04:53 +0000
X-Inumbo-ID: 7613a9a8-e4d2-4982-8138-c85e5ac809bb
Received: from gateway24.websitewelcome.com (unknown [192.185.51.59])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7613a9a8-e4d2-4982-8138-c85e5ac809bb;
	Fri, 20 Nov 2020 19:04:52 +0000 (UTC)
Received: from cm11.websitewelcome.com (cm11.websitewelcome.com [100.42.49.5])
	by gateway24.websitewelcome.com (Postfix) with ESMTP id CCEBF4B03
	for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 13:04:51 -0600 (CST)
Received: from gator4166.hostgator.com ([108.167.133.22])
	by cmsmtp with SMTP
	id gBiFk7OvLnPrxgBiFkc0e2; Fri, 20 Nov 2020 13:04:51 -0600
X-Authority-Reason: nr=8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=embeddedor.com; s=default; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=o6HouBgCeARvPDycxq+DQL9GTuV0FZaznhFf+B5lvvQ=; b=RC17bKQEy60oNUOsP9/+FoFba3
	7IOvK7tatxfYFulfXrzo1R1inJiR70+fGm96EOiyxuGoF10JW8F/HPs+jPpuI495euTAFQHufGSAV
	KFk2OzUW7hWgtf7v4LM6Poy7NupLGz6fEDbexZ2hS8wrxAWAlGQM9lxjgUTuzQh4+AEwwhtM/J+sO
	AoW8/bIYU33TR8wm0Dkj1ZL5b4OT77sQWi2RWgP14r2tlNn5M1PdILrguUL1Nl2Hy0yV5KIqCCVAo
	j0nfxV1qHzIcwVylK9t1AWmoK15QajwvXxprFA+S9YfES+wOdY+ET5oNf4zzGyJ24cDNT8LjZrCcC
	nhh+2ALA==;
Received: from 187-162-31-110.static.axtel.net ([187.162.31.110]:52360 helo=[192.168.15.4])
	by gator4166.hostgator.com with esmtpsa  (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
	(Exim 4.93)
	(envelope-from <gustavo@embeddedor.com>)
	id 1kgBiD-00024G-7d; Fri, 20 Nov 2020 13:04:49 -0600
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Jakub Kicinski <kuba@kernel.org>,
 "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org,
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
 linux-security-module@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com, patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org, selinux@vger.kernel.org,
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net,
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>,
 Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda
 <ojeda@kernel.org>, Joe Perches <joe@perches.com>,
 Kees Cook <keescook@chromium.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
From: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Autocrypt: addr=gustavo@embeddedor.com; keydata=
 xsFNBFssHAwBEADIy3ZoPq3z5UpsUknd2v+IQud4TMJnJLTeXgTf4biSDSrXn73JQgsISBwG
 2Pm4wnOyEgYUyJd5tRWcIbsURAgei918mck3tugT7AQiTUN3/5aAzqe/4ApDUC+uWNkpNnSV
 tjOx1hBpla0ifywy4bvFobwSh5/I3qohxDx+c1obd8Bp/B/iaOtnq0inli/8rlvKO9hp6Z4e
 DXL3PlD0QsLSc27AkwzLEc/D3ZaqBq7ItvT9Pyg0z3Q+2dtLF00f9+663HVC2EUgP25J3xDd
 496SIeYDTkEgbJ7WYR0HYm9uirSET3lDqOVh1xPqoy+U9zTtuA9NQHVGk+hPcoazSqEtLGBk
 YE2mm2wzX5q2uoyptseSNceJ+HE9L+z1KlWW63HhddgtRGhbP8pj42bKaUSrrfDUsicfeJf6
 m1iJRu0SXYVlMruGUB1PvZQ3O7TsVfAGCv85pFipdgk8KQnlRFkYhUjLft0u7CL1rDGZWDDr
 NaNj54q2CX9zuSxBn9XDXvGKyzKEZ4NY1Jfw+TAMPCp4buawuOsjONi2X0DfivFY+ZsjAIcx
 qQMglPtKk/wBs7q2lvJ+pHpgvLhLZyGqzAvKM1sVtRJ5j+ARKA0w4pYs5a5ufqcfT7dN6TBk
 LXZeD9xlVic93Ju08JSUx2ozlcfxq+BVNyA+dtv7elXUZ2DrYwARAQABzStHdXN0YXZvIEEu
 IFIuIFNpbHZhIDxndXN0YXZvYXJzQGtlcm5lbC5vcmc+wsGrBBMBCAA+FiEEkmRahXBSurMI
 g1YvRwW0y0cG2zEFAl6zFvQCGyMFCQlmAYAFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AAIQkQ
 RwW0y0cG2zEWIQSSZFqFcFK6swiDVi9HBbTLRwbbMZsEEACWjJyXLjtTAF21Vuf1VDoGzitP
 oE69rq9UhXIGR+e0KACyIFoB9ibG/1j/ESMa0RPSwLpJDLgfvi/I18H/9cKtdo2uz0XNbDT8
 i3llIu0b43nzGIDzRudINBXC8Coeob+hrp/MMZueyzt0CUoAnY4XqpHQbQsTfTrpFeHT02Qz
 ITw6kTSmK7dNbJj2naH2vSrU11qGdU7aFzI7jnVvGgv4NVQLPxm/t4jTG1o+P1Xk4N6vKafP
 zqzkxj99JrUAPt+LyPS2VpNvmbSNq85PkQ9gpeTHpkio/D9SKsMW62njITPgy6M8TFAmx8JF
 ZAI6k8l1eU29F274WnlQ6ZokkJoNctwHa+88euWKHWUDolCmQpegJJ8932www83GLn1mdUZn
 NsymjFSdMWE+y8apWaV9QsDOKWf7pY2uBuE6GMPRhX7e7h5oQwa1lYeO2L9LTDeXkEOJe+hE
 qQdEEvkC/nok0eoRlBlZh433DQlv4+IvSsfN/uWld2TuQFyjDCLIm1CPRfe7z0TwiCM27F+O
 lHnUspCFSgpnrxqNH6CM4aj1EF4fEX+ZyknTSrKL9BGZ/qRz7Xe9ikU2/7M1ov6rOXCI4NR9
 THsNax6etxCBMzZs2bdMHMcajP5XdRsOIARuN08ytRjDolR2r8SkTN2YMwxodxNWWDC3V8X2
 RHZ4UwQw487BTQRbLBwMARAAsHCE31Ffrm6uig1BQplxMV8WnRBiZqbbsVJBH1AAh8tq2ULl
 7udfQo1bsPLGGQboJSVN9rckQQNahvHAIK8ZGfU4Qj8+CER+fYPp/MDZj+t0DbnWSOrG7z9H
 IZo6PR9z4JZza3Hn/35jFggaqBtuydHwwBANZ7A6DVY+W0COEU4of7CAahQo5NwYiwS0lGis
 LTqks5R0Vh+QpvDVfuaF6I8LUgQR/cSgLkR//V1uCEQYzhsoiJ3zc1HSRyOPotJTApqGBq80
 X0aCVj1LOiOF4rrdvQnj6iIlXQssdb+WhSYHeuJj1wD0ZlC7ds5zovXh+FfFl5qH5RFY/qVn
 3mNIVxeO987WSF0jh+T5ZlvUNdhedGndRmwFTxq2Li6GNMaolgnpO/CPcFpDjKxY/HBUSmaE
 9rNdAa1fCd4RsKLlhXda+IWpJZMHlmIKY8dlUybP+2qDzP2lY7kdFgPZRU+ezS/pzC/YTzAv
 CWM3tDgwoSl17vnZCr8wn2/1rKkcLvTDgiJLPCevqpTb6KFtZosQ02EGMuHQI6Zk91jbx96n
 rdsSdBLGH3hbvLvjZm3C+fNlVb9uvWbdznObqcJxSH3SGOZ7kCHuVmXUcqozol6ioMHMb+In
 rHPP16aVDTBTPEGwgxXI38f7SUEn+NpbizWdLNz2hc907DvoPm6HEGCanpcAEQEAAcLBZQQY
 AQgADwUCWywcDAIbDAUJCWYBgAAKCRBHBbTLRwbbMdsZEACUjmsJx2CAY+QSUMebQRFjKavw
 XB/xE7fTt2ahuhHT8qQ/lWuRQedg4baInw9nhoPE+VenOzhGeGlsJ0Ys52sdXvUjUocKgUQq
 6ekOHbcw919nO5L9J2ejMf/VC/quN3r3xijgRtmuuwZjmmi8ct24TpGeoBK4WrZGh/1hAYw4
 ieARvKvgjXRstcEqM5thUNkOOIheud/VpY+48QcccPKbngy//zNJWKbRbeVnimua0OpqRXhC
 rEVm/xomeOvl1WK1BVO7z8DjSdEBGzbV76sPDJb/fw+y+VWrkEiddD/9CSfgfBNOb1p1jVnT
 2mFgGneIWbU0zdDGhleI9UoQTr0e0b/7TU+Jo6TqwosP9nbk5hXw6uR5k5PF8ieyHVq3qatJ
 9K1jPkBr8YWtI5uNwJJjTKIA1jHlj8McROroxMdI6qZ/wZ1ImuylpJuJwCDCORYf5kW61fcr
 HEDlIvGc371OOvw6ejF8ksX5+L2zwh43l/pKkSVGFpxtMV6d6J3eqwTafL86YJWH93PN+ZUh
 6i6Rd2U/i8jH5WvzR57UeWxE4P8bQc0hNGrUsHQH6bpHV2lbuhDdqo+cM9ehGZEO3+gCDFmK
 rjspZjkJbB5Gadzvts5fcWGOXEvuT8uQSvl+vEL0g6vczsyPBtqoBLa9SNrSVtSixD1uOgyt
 AP7RWS474w==
Message-ID: <4609d49b-4dd3-c017-b76e-a8a536871c05@embeddedor.com>
Date: Fri, 20 Nov 2020 13:04:55 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - gator4166.hostgator.com
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - embeddedor.com
X-BWhitelist: no
X-Source-IP: 187.162.31.110
X-Source-L: No
X-Exim-ID: 1kgBiD-00024G-7d
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: 187-162-31-110.static.axtel.net ([192.168.15.4]) [187.162.31.110]:52360
X-Source-Auth: gustavo@embeddedor.com
X-Email-Count: 149
X-Source-Cap: Z3V6aWRpbmU7Z3V6aWRpbmU7Z2F0b3I0MTY2Lmhvc3RnYXRvci5jb20=
X-Local-Domain: yes


Hi,

On 11/20/20 12:53, Jakub Kicinski wrote:
> On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:
>> This series aims to fix almost all remaining fall-through warnings in
>> order to enable -Wimplicit-fallthrough for Clang.
>>
>> In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
>> add multiple break/goto/return/fallthrough statements instead of just
>> letting the code fall through to the next case.
>>
>> Notice that in order to enable -Wimplicit-fallthrough for Clang, this
>> change[1] is meant to be reverted at some point. So, this patch helps
>> to move in that direction.
>>
>> Something important to mention is that there is currently a discrepancy
>> between GCC and Clang when dealing with switch fall-through to empty case
>> statements or to cases that only contain a break/continue/return
>> statement[2][3][4].
> 
> Are we sure we want to make this change? Was it discussed before?
> 
> Are there any bugs Clang's puritanical definition of fallthrough helped
> find?
> 
> IMVHO compiler warnings are supposed to warn about issues that could
> be bugs. Falling through to default: break; can hardly be a bug?!

The justification for this is explained in this same changelog text:

Now that the -Wimplicit-fallthrough option has been globally enabled[5],
any compiler should really warn on missing either a fallthrough annotation
or any of the other case-terminating statements (break/continue/return/
goto) when falling through to the next case statement. Making exceptions
to this introduces variation in case handling which may continue to lead
to bugs, misunderstandings, and a general lack of robustness. The point
of enabling options like -Wimplicit-fallthrough is to prevent human error
and aid developers in spotting bugs before their code is even built/
submitted/committed, therefore eliminating classes of bugs. So, in order
to really accomplish this, we should, and can, move in the direction of
addressing any error-prone scenarios and get rid of the unintentional
fallthrough bug-class in the kernel, entirely, even if there is some minor
redundancy. Better to have explicit case-ending statements than continue to
have exceptions where one must guess as to the right result. The compiler
will eliminate any actual redundancy.

Note that there is already a patch in mainline that addresses almost
40,000 of these issues[6].

[1] commit e2079e93f562c ("kbuild: Do not enable -Wimplicit-fallthrough for clang for now")
[2] ClangBuiltLinux#636
[3] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91432
[4] https://godbolt.org/z/xgkvIh
[5] commit a035d552a93b ("Makefile: Globally enable fall-through warning")
[6] commit 4169e889e588 ("include: jhash/signal: Fix fall-through warnings for Clang")

Thanks
--
Gustavo


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:20:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32575.63678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBww-0003MH-7z; Fri, 20 Nov 2020 19:20:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32575.63678; Fri, 20 Nov 2020 19:20:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwv-0003Lo-Uv; Fri, 20 Nov 2020 19:20:01 +0000
Received: by outflank-mailman (input) for mailman id 32575;
 Fri, 20 Nov 2020 18:53:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/9wm=E2=kernel.org=kuba@srs-us1.protection.inumbo.net>)
 id 1kgBXb-0000Y5-1A
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:53:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1dd19bca-f630-44b5-bb8f-72504ab66379;
 Fri, 20 Nov 2020 18:53:50 +0000 (UTC)
Received: from kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com (unknown
 [163.114.132.6])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D19712242B;
 Fri, 20 Nov 2020 18:53:45 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/9wm=E2=kernel.org=kuba@srs-us1.protection.inumbo.net>)
	id 1kgBXb-0000Y5-1A
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:53:51 +0000
X-Inumbo-ID: 1dd19bca-f630-44b5-bb8f-72504ab66379
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1dd19bca-f630-44b5-bb8f-72504ab66379;
	Fri, 20 Nov 2020 18:53:50 +0000 (UTC)
Received: from kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com (unknown [163.114.132.6])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id D19712242B;
	Fri, 20 Nov 2020 18:53:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605898429;
	bh=RlEnelajx5E1UvOiu4TwGuYZq41CYqRzy+OE2vpEZwM=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=y6BwGzsUDu1zr4tV0nGpfLuqNb1AQVwTl3HIlZXKXQmalkVaXzcKTw6PaiYzn6cs4
	 cCrQcjpmBltf5qc0pbll6lEfWSr4jv5MMDA/VHBdadKQQtZqXFk9pYOspqu5FdKf1A
	 HiIo+vycYYhwL3jW0KIV5eL5D/2xlLalPiHVxUuA=
Date: Fri, 20 Nov 2020 10:53:44 -0800
From: Jakub Kicinski <kuba@kernel.org>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org,
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
 linux-security-module@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com, patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org, selinux@vger.kernel.org,
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net,
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org, Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>, Kees Cook
 <keescook@chromium.org>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:
> This series aims to fix almost all remaining fall-through warnings in
> order to enable -Wimplicit-fallthrough for Clang.
> 
> In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> add multiple break/goto/return/fallthrough statements instead of just
> letting the code fall through to the next case.
> 
> Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> change[1] is meant to be reverted at some point. So, this patch helps
> to move in that direction.
> 
> Something important to mention is that there is currently a discrepancy
> between GCC and Clang when dealing with switch fall-through to empty case
> statements or to cases that only contain a break/continue/return
> statement[2][3][4].

Are we sure we want to make this change? Was it discussed before?

Are there any bugs Clang's puritanical definition of fallthrough helped
find?

IMVHO compiler warnings are supposed to warn about issues that could
be bugs. Falling through to default: break; can hardly be a bug?!


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:20:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32547.63649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwv-0003Ix-1E; Fri, 20 Nov 2020 19:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32547.63649; Fri, 20 Nov 2020 19:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwu-0003Iq-UO; Fri, 20 Nov 2020 19:20:00 +0000
Received: by outflank-mailman (input) for mailman id 32547;
 Fri, 20 Nov 2020 18:21:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M2Yq=E2=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1kgB2T-0005RH-Sb
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:21:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f7916a8-7739-4a7f-a827-622d5f25dcf8;
 Fri, 20 Nov 2020 18:21:41 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 44F062242B;
 Fri, 20 Nov 2020 18:21:33 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=M2Yq=E2=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
	id 1kgB2T-0005RH-Sb
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:21:41 +0000
X-Inumbo-ID: 9f7916a8-7739-4a7f-a827-622d5f25dcf8
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9f7916a8-7739-4a7f-a827-622d5f25dcf8;
	Fri, 20 Nov 2020 18:21:41 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 44F062242B;
	Fri, 20 Nov 2020 18:21:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605896500;
	bh=zypEenmW5uIwS3bf91X44vQpHhZ/gwwpABBFg9PXdgs=;
	h=Date:From:To:Cc:Subject:From;
	b=nJTP3XwDXBXE6z6mnOPSIRaB7LUA+fTNQC4crGQ0MB9NYdfKeothVG2nUhmCQkqWV
	 HRju6dmHoPHhGnor+9/d70zp/zSm3b3cQQg/t0Glx8hFQrCr+IRChnOcINpmrGrvux
	 v00dkQlV3Myz+BPYM8RwrCmoCIOOdaKMOJr+ablQ=
Date: Fri, 20 Nov 2020 12:21:39 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org,
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org,
	cluster-devel@redhat.com, coreteam@netfilter.org,
	devel@driverdev.osuosl.org, dm-devel@redhat.com,
	drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org,
	GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
	intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
	keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
	linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
	netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
	oss-drivers@netronome.com, patches@opensource.cirrus.com,
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
	samba-technical@lists.samba.org, selinux@vger.kernel.org,
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>,
	Kees Cook <keescook@chromium.org>,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>
Subject: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <cover.1605896059.git.gustavoars@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.9.4 (2018-02-28)

Hi all,

This series aims to fix almost all remaining fall-through warnings in
order to enable -Wimplicit-fallthrough for Clang.

In preparation for enabling -Wimplicit-fallthrough for Clang, this
series explicitly adds break/goto/return/fallthrough statements instead
of just letting the code fall through to the next case.

Notice that enabling -Wimplicit-fallthrough for Clang requires
eventually reverting this change[1]; this series helps move in that
direction.

Something important to mention is that there is currently a discrepancy
between GCC and Clang when dealing with switch fall-through to empty case
statements or to cases that only contain a break/continue/return
statement[2][3][4].

Now that the -Wimplicit-fallthrough option has been globally enabled[5],
any compiler should warn when either a fallthrough annotation or one of
the other case-terminating statements (break/continue/return/goto) is
missing before the next case statement. Making exceptions to this
introduces variation in case handling, which may continue to lead to
bugs, misunderstandings, and a general lack of robustness. The point of
enabling options like -Wimplicit-fallthrough is to prevent human error
and help developers spot bugs before their code is even built, submitted,
or committed, thereby eliminating whole classes of bugs. To really
accomplish this, we should, and can, address every error-prone scenario
and get rid of the unintentional fall-through bug class in the kernel
entirely, even at the cost of some minor redundancy. It is better to
have explicit case-ending statements than to keep exceptions where one
must guess at the right result. The compiler will eliminate any actual
redundancy.

Note that there is already a patch in mainline that addresses almost
40,000 of these issues[6].

I'm happy to carry this whole series in my own tree if people are OK
with it. :)

[1] commit e2079e93f562c ("kbuild: Do not enable -Wimplicit-fallthrough for clang for now")
[2] ClangBuiltLinux#636
[3] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91432
[4] https://godbolt.org/z/xgkvIh
[5] commit a035d552a93b ("Makefile: Globally enable fall-through warning")
[6] commit 4169e889e588 ("include: jhash/signal: Fix fall-through warnings for Clang")

Thanks!

Gustavo A. R. Silva (141):
  afs: Fix fall-through warnings for Clang
  ASoC: codecs: Fix fall-through warnings for Clang
  cifs: Fix fall-through warnings for Clang
  drm/amdgpu: Fix fall-through warnings for Clang
  drm/radeon: Fix fall-through warnings for Clang
  gfs2: Fix fall-through warnings for Clang
  gpio: Fix fall-through warnings for Clang
  IB/hfi1: Fix fall-through warnings for Clang
  igb: Fix fall-through warnings for Clang
  ima: Fix fall-through warnings for Clang
  ipv4: Fix fall-through warnings for Clang
  ixgbe: Fix fall-through warnings for Clang
  media: dvb-frontends: Fix fall-through warnings for Clang
  media: usb: dvb-usb-v2: Fix fall-through warnings for Clang
  netfilter: Fix fall-through warnings for Clang
  nfsd: Fix fall-through warnings for Clang
  nfs: Fix fall-through warnings for Clang
  qed: Fix fall-through warnings for Clang
  qlcnic: Fix fall-through warnings for Clang
  scsi: aic7xxx: Fix fall-through warnings for Clang
  scsi: aic94xx: Fix fall-through warnings for Clang
  scsi: bfa: Fix fall-through warnings for Clang
  staging: rtl8723bs: core: Fix fall-through warnings for Clang
  staging: vt6655: Fix fall-through warnings for Clang
  bnxt_en: Fix fall-through warnings for Clang
  ceph: Fix fall-through warnings for Clang
  drbd: Fix fall-through warnings for Clang
  drm/amd/display: Fix fall-through warnings for Clang
  e1000: Fix fall-through warnings for Clang
  ext2: Fix fall-through warnings for Clang
  ext4: Fix fall-through warnings for Clang
  floppy: Fix fall-through warnings for Clang
  fm10k: Fix fall-through warnings for Clang
  IB/mlx4: Fix fall-through warnings for Clang
  IB/qedr: Fix fall-through warnings for Clang
  ice: Fix fall-through warnings for Clang
  Input: pcspkr - Fix fall-through warnings for Clang
  isofs: Fix fall-through warnings for Clang
  ixgbevf: Fix fall-through warnings for Clang
  kprobes/x86: Fix fall-through warnings for Clang
  mm: Fix fall-through warnings for Clang
  net: 3c509: Fix fall-through warnings for Clang
  net: cassini: Fix fall-through warnings for Clang
  net/mlx4: Fix fall-through warnings for Clang
  net: mscc: ocelot: Fix fall-through warnings for Clang
  netxen_nic: Fix fall-through warnings for Clang
  nfp: Fix fall-through warnings for Clang
  perf/x86: Fix fall-through warnings for Clang
  pinctrl: Fix fall-through warnings for Clang
  RDMA/mlx5: Fix fall-through warnings for Clang
  reiserfs: Fix fall-through warnings for Clang
  security: keys: Fix fall-through warnings for Clang
  selinux: Fix fall-through warnings for Clang
  target: Fix fall-through warnings for Clang
  uprobes/x86: Fix fall-through warnings for Clang
  vxge: Fix fall-through warnings for Clang
  watchdog: Fix fall-through warnings for Clang
  xen-blkfront: Fix fall-through warnings for Clang
  regulator: as3722: Fix fall-through warnings for Clang
  habanalabs: Fix fall-through warnings for Clang
  tee: Fix fall-through warnings for Clang
  HID: usbhid: Fix fall-through warnings for Clang
  HID: input: Fix fall-through warnings for Clang
  ACPI: Fix fall-through warnings for Clang
  airo: Fix fall-through warnings for Clang
  ALSA: hdspm: Fix fall-through warnings for Clang
  ALSA: pcsp: Fix fall-through warnings for Clang
  ALSA: sb: Fix fall-through warnings for Clang
  ath5k: Fix fall-through warnings for Clang
  atm: fore200e: Fix fall-through warnings for Clang
  braille_console: Fix fall-through warnings for Clang
  can: peak_usb: Fix fall-through warnings for Clang
  carl9170: Fix fall-through warnings for Clang
  cfg80211: Fix fall-through warnings for Clang
  crypto: ccree - Fix fall-through warnings for Clang
  decnet: Fix fall-through warnings for Clang
  dm raid: Fix fall-through warnings for Clang
  drm/amd/pm: Fix fall-through warnings for Clang
  drm: Fix fall-through warnings for Clang
  drm/i915/gem: Fix fall-through warnings for Clang
  drm/nouveau/clk: Fix fall-through warnings for Clang
  drm/nouveau: Fix fall-through warnings for Clang
  drm/nouveau/therm: Fix fall-through warnings for Clang
  drm/via: Fix fall-through warnings for Clang
  firewire: core: Fix fall-through warnings for Clang
  hwmon: (corsair-cpro) Fix fall-through warnings for Clang
  hwmon: (max6621) Fix fall-through warnings for Clang
  i3c: master: cdns: Fix fall-through warnings for Clang
  ide: Fix fall-through warnings for Clang
  iio: adc: cpcap: Fix fall-through warnings for Clang
  iwlwifi: iwl-drv: Fix fall-through warnings for Clang
  libata: Fix fall-through warnings for Clang
  mac80211: Fix fall-through warnings for Clang
  media: atomisp: Fix fall-through warnings for Clang
  media: dvb_frontend: Fix fall-through warnings for Clang
  media: rcar_jpu: Fix fall-through warnings for Clang
  media: saa7134: Fix fall-through warnings for Clang
  mmc: sdhci-of-arasan: Fix fall-through warnings for Clang
  mt76: mt7615: Fix fall-through warnings for Clang
  mtd: cfi: Fix fall-through warnings for Clang
  mtd: mtdchar: Fix fall-through warnings for Clang
  mtd: onenand: Fix fall-through warnings for Clang
  mtd: rawnand: fsmc: Fix fall-through warnings for Clang
  mtd: rawnand: stm32_fmc2: Fix fall-through warnings for Clang
  net: ax25: Fix fall-through warnings for Clang
  net: bridge: Fix fall-through warnings for Clang
  net: core: Fix fall-through warnings for Clang
  netfilter: ipt_REJECT: Fix fall-through warnings for Clang
  net: netrom: Fix fall-through warnings for Clang
  net/packet: Fix fall-through warnings for Clang
  net: plip: Fix fall-through warnings for Clang
  net: rose: Fix fall-through warnings for Clang
  nl80211: Fix fall-through warnings for Clang
  phy: qcom-usb-hs: Fix fall-through warnings for Clang
  rds: Fix fall-through warnings for Clang
  rt2x00: Fix fall-through warnings for Clang
  rtl8xxxu: Fix fall-through warnings for Clang
  rtw88: Fix fall-through warnings for Clang
  rxrpc: Fix fall-through warnings for Clang
  scsi: aacraid: Fix fall-through warnings for Clang
  scsi: aha1740: Fix fall-through warnings for Clang
  scsi: csiostor: Fix fall-through warnings for Clang
  scsi: lpfc: Fix fall-through warnings for Clang
  scsi: stex: Fix fall-through warnings for Clang
  sctp: Fix fall-through warnings for Clang
  slimbus: messaging: Fix fall-through warnings for Clang
  staging: qlge: Fix fall-through warnings for Clang
  staging: vt6656: Fix fall-through warnings for Clang
  SUNRPC: Fix fall-through warnings for Clang
  tipc: Fix fall-through warnings for Clang
  tpm: Fix fall-through warnings for Clang
  ubi: Fix fall-through warnings for Clang
  usb: Fix fall-through warnings for Clang
  video: fbdev: lxfb_ops: Fix fall-through warnings for Clang
  video: fbdev: pm2fb: Fix fall-through warnings for Clang
  virtio_net: Fix fall-through warnings for Clang
  wcn36xx: Fix fall-through warnings for Clang
  xen/manage: Fix fall-through warnings for Clang
  xfrm: Fix fall-through warnings for Clang
  zd1201: Fix fall-through warnings for Clang
  Input: libps2 - Fix fall-through warnings for Clang

 arch/x86/events/core.c                                    | 2 +-
 arch/x86/kernel/kprobes/core.c                            | 1 +
 arch/x86/kernel/uprobes.c                                 | 2 ++
 drivers/accessibility/braille/braille_console.c           | 1 +
 drivers/acpi/sbshc.c                                      | 1 +
 drivers/ata/libata-eh.c                                   | 1 +
 drivers/atm/fore200e.c                                    | 1 +
 drivers/block/drbd/drbd_receiver.c                        | 1 +
 drivers/block/drbd/drbd_req.c                             | 1 +
 drivers/block/floppy.c                                    | 1 +
 drivers/block/xen-blkfront.c                              | 1 +
 drivers/char/tpm/eventlog/tpm1.c                          | 1 +
 drivers/crypto/ccree/cc_cipher.c                          | 3 +++
 drivers/firewire/core-topology.c                          | 1 +
 drivers/gpio/gpio-ath79.c                                 | 1 +
 drivers/gpio/gpiolib-acpi.c                               | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c                    | 1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c                     | 1 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c                     | 1 +
 drivers/gpu/drm/amd/amdgpu/vi.c                           | 1 +
 drivers/gpu/drm/amd/display/dc/bios/bios_parser.c         | 1 +
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c        | 2 ++
 drivers/gpu/drm/amd/display/dc/core/dc_link.c             | 1 +
 drivers/gpu/drm/amd/pm/powerplay/si_dpm.c                 | 2 +-
 .../gpu/drm/amd/pm/powerplay/smumgr/polaris10_smumgr.c    | 1 +
 drivers/gpu/drm/drm_bufs.c                                | 1 +
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c              | 1 +
 drivers/gpu/drm/nouveau/nouveau_bo.c                      | 1 +
 drivers/gpu/drm/nouveau/nouveau_connector.c               | 1 +
 drivers/gpu/drm/nouveau/nvkm/subdev/clk/nv50.c            | 1 +
 drivers/gpu/drm/nouveau/nvkm/subdev/therm/gf119.c         | 1 +
 drivers/gpu/drm/radeon/ci_dpm.c                           | 2 +-
 drivers/gpu/drm/radeon/r300.c                             | 1 +
 drivers/gpu/drm/radeon/si_dpm.c                           | 2 +-
 drivers/gpu/drm/via/via_irq.c                             | 1 +
 drivers/hid/hid-input.c                                   | 1 +
 drivers/hid/usbhid/hid-core.c                             | 2 ++
 drivers/hwmon/corsair-cpro.c                              | 1 +
 drivers/hwmon/max6621.c                                   | 2 +-
 drivers/i3c/master/i3c-master-cdns.c                      | 2 ++
 drivers/ide/siimage.c                                     | 1 +
 drivers/iio/adc/cpcap-adc.c                               | 1 +
 drivers/infiniband/hw/hfi1/qp.c                           | 1 +
 drivers/infiniband/hw/hfi1/tid_rdma.c                     | 5 +++++
 drivers/infiniband/hw/mlx4/mad.c                          | 1 +
 drivers/infiniband/hw/mlx5/qp.c                           | 1 +
 drivers/infiniband/hw/qedr/main.c                         | 1 +
 drivers/input/misc/pcspkr.c                               | 1 +
 drivers/input/serio/libps2.c                              | 2 +-
 drivers/md/dm-raid.c                                      | 1 +
 drivers/media/dvb-core/dvb_frontend.c                     | 1 +
 drivers/media/dvb-frontends/cx24120.c                     | 1 +
 drivers/media/dvb-frontends/dib0090.c                     | 2 ++
 drivers/media/dvb-frontends/drxk_hard.c                   | 1 +
 drivers/media/dvb-frontends/m88rs2000.c                   | 1 +
 drivers/media/pci/saa7134/saa7134-tvaudio.c               | 1 +
 drivers/media/platform/rcar_jpu.c                         | 1 +
 drivers/media/usb/dvb-usb-v2/af9015.c                     | 1 +
 drivers/media/usb/dvb-usb-v2/lmedm04.c                    | 1 +
 drivers/misc/habanalabs/gaudi/gaudi.c                     | 1 +
 drivers/mmc/host/sdhci-of-arasan.c                        | 4 ++++
 drivers/mtd/chips/cfi_cmdset_0001.c                       | 1 +
 drivers/mtd/chips/cfi_cmdset_0002.c                       | 2 ++
 drivers/mtd/chips/cfi_cmdset_0020.c                       | 2 ++
 drivers/mtd/mtdchar.c                                     | 1 +
 drivers/mtd/nand/onenand/onenand_samsung.c                | 1 +
 drivers/mtd/nand/raw/fsmc_nand.c                          | 1 +
 drivers/mtd/nand/raw/stm32_fmc2_nand.c                    | 2 ++
 drivers/mtd/ubi/build.c                                   | 1 +
 drivers/net/can/usb/peak_usb/pcan_usb_core.c              | 2 ++
 drivers/net/ethernet/3com/3c509.c                         | 1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt.c                 | 1 +
 drivers/net/ethernet/intel/e1000/e1000_hw.c               | 1 +
 drivers/net/ethernet/intel/fm10k/fm10k_mbx.c              | 2 ++
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c             | 1 +
 drivers/net/ethernet/intel/igb/e1000_phy.c                | 1 +
 drivers/net/ethernet/intel/igb/igb_ethtool.c              | 1 +
 drivers/net/ethernet/intel/igb/igb_ptp.c                  | 1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c            | 2 ++
 drivers/net/ethernet/intel/ixgbe/ixgbe_common.c           | 1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c              | 1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c              | 1 +
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c         | 1 +
 drivers/net/ethernet/mellanox/mlx4/resource_tracker.c     | 1 +
 drivers/net/ethernet/mscc/ocelot_vcap.c                   | 1 +
 drivers/net/ethernet/neterion/vxge/vxge-config.c          | 1 +
 drivers/net/ethernet/netronome/nfp/nfp_net_repr.c         | 1 +
 drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c      | 1 +
 drivers/net/ethernet/qlogic/qed/qed_l2.c                  | 1 +
 drivers/net/ethernet/qlogic/qed/qed_sriov.c               | 1 +
 drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c            | 1 +
 drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c          | 1 +
 drivers/net/ethernet/sun/cassini.c                        | 1 +
 drivers/net/plip/plip.c                                   | 2 ++
 drivers/net/virtio_net.c                                  | 1 +
 drivers/net/wireless/ath/ath5k/mac80211-ops.c             | 1 +
 drivers/net/wireless/ath/carl9170/tx.c                    | 1 +
 drivers/net/wireless/ath/wcn36xx/smd.c                    | 2 +-
 drivers/net/wireless/cisco/airo.c                         | 1 +
 drivers/net/wireless/intel/iwlwifi/iwl-drv.c              | 2 +-
 drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c        | 2 +-
 drivers/net/wireless/ralink/rt2x00/rt2x00queue.c          | 1 +
 drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c     | 8 ++++----
 drivers/net/wireless/realtek/rtw88/fw.c                   | 2 +-
 drivers/net/wireless/zydas/zd1201.c                       | 2 +-
 drivers/phy/qualcomm/phy-qcom-usb-hs.c                    | 1 +
 drivers/pinctrl/renesas/pinctrl-rza1.c                    | 1 +
 drivers/regulator/as3722-regulator.c                      | 3 ++-
 drivers/scsi/aacraid/commsup.c                            | 1 +
 drivers/scsi/aha1740.c                                    | 1 +
 drivers/scsi/aic7xxx/aic79xx_core.c                       | 4 +++-
 drivers/scsi/aic7xxx/aic7xxx_core.c                       | 4 ++--
 drivers/scsi/aic94xx/aic94xx_scb.c                        | 2 ++
 drivers/scsi/aic94xx/aic94xx_task.c                       | 2 ++
 drivers/scsi/bfa/bfa_fcs_lport.c                          | 2 +-
 drivers/scsi/bfa/bfa_ioc.c                                | 6 ++++--
 drivers/scsi/csiostor/csio_wr.c                           | 1 +
 drivers/scsi/lpfc/lpfc_bsg.c                              | 1 +
 drivers/scsi/stex.c                                       | 1 +
 drivers/slimbus/messaging.c                               | 1 +
 drivers/staging/media/atomisp/pci/runtime/isys/src/rx.c   | 1 +
 drivers/staging/qlge/qlge_main.c                          | 1 +
 drivers/staging/rtl8723bs/core/rtw_cmd.c                  | 1 +
 drivers/staging/rtl8723bs/core/rtw_mlme_ext.c             | 1 +
 drivers/staging/rtl8723bs/core/rtw_wlan_util.c            | 1 +
 drivers/staging/vt6655/device_main.c                      | 1 +
 drivers/staging/vt6655/rxtx.c                             | 2 ++
 drivers/staging/vt6656/main_usb.c                         | 1 +
 drivers/target/target_core_iblock.c                       | 1 +
 drivers/target/target_core_pr.c                           | 1 +
 drivers/tee/tee_core.c                                    | 1 +
 drivers/usb/gadget/function/f_fs.c                        | 2 ++
 drivers/usb/gadget/function/f_loopback.c                  | 2 +-
 drivers/usb/gadget/function/f_sourcesink.c                | 1 +
 drivers/usb/gadget/udc/dummy_hcd.c                        | 2 ++
 drivers/usb/host/fotg210-hcd.c                            | 2 +-
 drivers/usb/host/isp116x-hcd.c                            | 1 +
 drivers/usb/host/max3421-hcd.c                            | 1 +
 drivers/usb/host/oxu210hp-hcd.c                           | 1 +
 drivers/usb/misc/yurex.c                                  | 1 +
 drivers/usb/musb/tusb6010.c                               | 1 +
 drivers/usb/storage/ene_ub6250.c                          | 1 +
 drivers/usb/storage/uas.c                                 | 1 +
 drivers/video/fbdev/geode/lxfb_ops.c                      | 1 +
 drivers/video/fbdev/pm2fb.c                               | 1 +
 drivers/watchdog/machzwd.c                                | 1 +
 drivers/xen/manage.c                                      | 1 +
 fs/afs/cmservice.c                                        | 5 +++++
 fs/afs/fsclient.c                                         | 4 ++++
 fs/afs/vlclient.c                                         | 1 +
 fs/ceph/dir.c                                             | 2 ++
 fs/cifs/inode.c                                           | 1 +
 fs/cifs/sess.c                                            | 1 +
 fs/cifs/smbdirect.c                                       | 1 +
 fs/ext2/inode.c                                           | 1 +
 fs/ext4/super.c                                           | 1 +
 fs/gfs2/inode.c                                           | 2 ++
 fs/gfs2/recovery.c                                        | 1 +
 fs/isofs/rock.c                                           | 1 +
 fs/nfs/nfs3acl.c                                          | 1 +
 fs/nfs/nfs4client.c                                       | 1 +
 fs/nfs/nfs4proc.c                                         | 2 ++
 fs/nfs/nfs4state.c                                        | 1 +
 fs/nfs/pnfs.c                                             | 2 ++
 fs/nfsd/nfs4state.c                                       | 1 +
 fs/nfsd/nfsctl.c                                          | 1 +
 fs/reiserfs/namei.c                                       | 1 +
 mm/mm_init.c                                              | 1 +
 mm/vmscan.c                                               | 1 +
 net/ax25/af_ax25.c                                        | 1 +
 net/bridge/br_input.c                                     | 1 +
 net/core/dev.c                                            | 1 +
 net/decnet/dn_route.c                                     | 2 +-
 net/ipv4/ah4.c                                            | 1 +
 net/ipv4/esp4.c                                           | 1 +
 net/ipv4/fib_semantics.c                                  | 1 +
 net/ipv4/ip_vti.c                                         | 1 +
 net/ipv4/ipcomp.c                                         | 1 +
 net/ipv4/netfilter/ipt_REJECT.c                           | 1 +
 net/mac80211/cfg.c                                        | 2 ++
 net/netfilter/nf_conntrack_proto_dccp.c                   | 1 +
 net/netfilter/nf_tables_api.c                             | 1 +
 net/netfilter/nft_ct.c                                    | 1 +
 net/netrom/nr_route.c                                     | 4 ++++
 net/packet/af_packet.c                                    | 1 +
 net/rds/tcp_connect.c                                     | 1 +
 net/rds/threads.c                                         | 2 ++
 net/rose/rose_route.c                                     | 2 ++
 net/rxrpc/af_rxrpc.c                                      | 1 +
 net/sctp/input.c                                          | 3 ++-
 net/sunrpc/rpc_pipe.c                                     | 1 +
 net/sunrpc/xprtsock.c                                     | 1 +
 net/tipc/link.c                                           | 1 +
 net/wireless/nl80211.c                                    | 1 +
 net/wireless/util.c                                       | 1 +
 net/xfrm/xfrm_interface.c                                 | 1 +
 security/integrity/ima/ima_main.c                         | 1 +
 security/integrity/ima/ima_policy.c                       | 2 ++
 security/keys/process_keys.c                              | 1 +
 security/selinux/hooks.c                                  | 1 +
 sound/drivers/pcsp/pcsp_input.c                           | 1 +
 sound/isa/sb/sb8_main.c                                   | 1 +
 sound/pci/rme9652/hdspm.c                                 | 1 +
 sound/soc/codecs/adav80x.c                                | 1 +
 sound/soc/codecs/arizona.c                                | 1 +
 sound/soc/codecs/cs42l52.c                                | 1 +
 sound/soc/codecs/cs42l56.c                                | 1 +
 sound/soc/codecs/cs47l92.c                                | 1 +
 sound/soc/codecs/wm8962.c                                 | 1 +
 209 files changed, 264 insertions(+), 26 deletions(-)

-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:20:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32568.63665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwv-0003Ki-Q6; Fri, 20 Nov 2020 19:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32568.63665; Fri, 20 Nov 2020 19:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwv-0003KD-He; Fri, 20 Nov 2020 19:20:01 +0000
Received: by outflank-mailman (input) for mailman id 32568;
 Fri, 20 Nov 2020 18:40:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M2Yq=E2=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1kgBL0-0007od-TK
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:40:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cff039bd-886c-4309-9217-822aa8e61cd8;
 Fri, 20 Nov 2020 18:40:50 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C64FD2467D;
 Fri, 20 Nov 2020 18:40:48 +0000 (UTC)
X-Inumbo-ID: cff039bd-886c-4309-9217-822aa8e61cd8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605897649;
	bh=b9FdqbhetGxlSrLNwiAqLkeUTdEkk4mAKgqcmtTsp+U=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Hrmt+c+r6d/bWt+4ljBbSgqtFgarptX12kO+U2fLRv5VtqpaFFiUvoqLobkrL44vb
	 YzASrHzjRkeEDxzxAjUDereBGScx1uBnMz/YCJDvmwUPgBPkS2EvsTCQpisizSL5Lh
	 KeRWZ32/JPMy2/BEohFcbSIzB6HdVvQS5AJDIFaw=
Date: Fri, 20 Nov 2020 12:40:55 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>
Subject: [PATCH 138/141] xen/manage: Fix fall-through warnings for Clang
Message-ID: <5cfc00b1d8ed68eb2c2b6317806a0aa7e57d27f1.1605896060.git.gustavoars@kernel.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
User-Agent: Mutt/1.9.4 (2018-02-28)

In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.

Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
---
 drivers/xen/manage.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..374d36de7f5a 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -179,6 +179,7 @@ static int poweroff_nb(struct notifier_block *cb, unsigned long code, void *unus
 	case SYS_HALT:
 	case SYS_POWER_OFF:
 		shutting_down = SHUTDOWN_POWEROFF;
+		break;
 	default:
 		break;
 	}
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:20:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32557.63654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwv-0003JS-Be; Fri, 20 Nov 2020 19:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32557.63654; Fri, 20 Nov 2020 19:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgBwv-0003JI-6F; Fri, 20 Nov 2020 19:20:01 +0000
Received: by outflank-mailman (input) for mailman id 32557;
 Fri, 20 Nov 2020 18:32:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M2Yq=E2=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1kgBDL-0006fA-AT
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 18:32:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4aa6d7ba-f755-4dda-9a45-df51910e4b61;
 Fri, 20 Nov 2020 18:32:53 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E38B524197;
 Fri, 20 Nov 2020 18:32:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605897173;
	bh=GZOB0IrDdnCJYoRPlfA0gPLtj77XQYyYZviKM6e//7k=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=fxwEEcwauxahs/YS06KosdG+kfU0QYX/CNXJsLjFY/I/IEeXbs3S6BntDjjFzARac
	 ehmJRMC3REHriA8zRE/J/MIeGvJ2lsg9FlcRz6tF5irxnVizR03hvPO8OiHUmwm6qr
	 ZjzUvYeaIBKuXmmG1X82Jc6XpGzzMvcigHSA+hYc=
Date: Fri, 20 Nov 2020 12:32:58 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>
Subject: [PATCH 058/141] xen-blkfront: Fix fall-through warnings for Clang
Message-ID: <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
User-Agent: Mutt/1.9.4 (2018-02-28)

In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
by explicitly adding a break statement instead of letting the code fall
through to the next case.

Link: https://github.com/KSPP/linux/issues/115
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
---
 drivers/block/xen-blkfront.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 48629d3433b4..34b028be78ab 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2462,6 +2462,7 @@ static void blkback_changed(struct xenbus_device *dev,
 			break;
 		if (talk_to_blkback(dev, info))
 			break;
+		break;
 	case XenbusStateInitialising:
 	case XenbusStateInitialised:
 	case XenbusStateReconfiguring:
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32619.63697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgC7L-0005YH-9p; Fri, 20 Nov 2020 19:30:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32619.63697; Fri, 20 Nov 2020 19:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgC7L-0005YA-6l; Fri, 20 Nov 2020 19:30:47 +0000
Received: by outflank-mailman (input) for mailman id 32619;
 Fri, 20 Nov 2020 19:30:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dS1w=E2=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1kgC7J-0005Y5-9d
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 19:30:45 +0000
Received: from mail-pf1-x441.google.com (unknown [2607:f8b0:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63ce36ce-f553-459b-bcf9-fef181120414;
 Fri, 20 Nov 2020 19:30:43 +0000 (UTC)
Received: by mail-pf1-x441.google.com with SMTP id v5so4785493pff.10
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 11:30:43 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id d10sm4785681pjj.38.2020.11.20.11.30.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Nov 2020 11:30:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=3bv50j9tOMZCSWAChUvUk5K6TgooTRt3SRcQZBJ9fcA=;
        b=GgFl3K9IS/lWsdMjkEVVAtTSDzsQ0sxEOabPKwuHzNJyTA7s1nVN/P5Py+wtAIOvbE
         i43RryzoLL4QMDFVI6bDxTe0ngekUN0rycJ/u5dixn0o4ZWxiMdHtnF6M1zgV7bxdmjG
         OxnZTS8PQwcd6ZCwnahaxVB8GYQEw6f4nxFx4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=3bv50j9tOMZCSWAChUvUk5K6TgooTRt3SRcQZBJ9fcA=;
        b=C81oHXFBnc1R1VLX8TJSepa3ep/2K7xvYWasFP6JYRB6QuKqeJ1loLVIjkUqLRmGTr
         giDDRebxTD3OL63LifnIoa5caZYeObeYbqmGJGFnFf/F8rmBOCVWXzjp0z8wOaHR0OGS
         BL/UK4zj/YjujrV7wrtu9Yk75Xq7R381ahwjtruITSil2UOgnVvOsF9uwLHZ1EV9ImOF
         Gs29Uq2S89Crh16xhIXLINAjM3C2hpP4SvNK0faL4cw1jMJSPqyvQK+EEFPIFB+kOyPI
         2XFu8PApBpv7KGlWotx26iRpJCK+mrrpquuBC271+tzA6hpWb0fCoz31mg2kxO79nrFQ
         l2Sw==
X-Gm-Message-State: AOAM531RWUdY+z6mRBsghiPIOSNUxofkB2VwYSsbXERtIu3W3XUsdeFC
	VlryjWbkPV1MBcznxMx7cDAP8g==
X-Google-Smtp-Source: ABdhPJwl3VgOHP/MWhG5DtpaEL0Yyt6dBbRzpe/94BRgWnGkgcq6AJ5JXkZmDK0TJqDkyAhq9Y3Q+Q==
X-Received: by 2002:a63:5043:: with SMTP id q3mr17907345pgl.137.1605900643099;
        Fri, 20 Nov 2020 11:30:43 -0800 (PST)
Date: Fri, 20 Nov 2020 11:30:40 -0800
From: Kees Cook <keescook@chromium.org>
To: Jakub Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
	linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
	amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
	coreteam@netfilter.org, devel@driverdev.osuosl.org,
	dm-devel@redhat.com, drbd-dev@lists.linbit.com,
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
	netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
	oss-drivers@netronome.com, patches@opensource.cirrus.com,
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
	samba-technical@lists.samba.org, selinux@vger.kernel.org,
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <202011201129.B13FDB3C@keescook>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Fri, Nov 20, 2020 at 10:53:44AM -0800, Jakub Kicinski wrote:
> On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:
> > This series aims to fix almost all remaining fall-through warnings in
> > order to enable -Wimplicit-fallthrough for Clang.
> > 
> > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > add multiple break/goto/return/fallthrough statements instead of just
> > letting the code fall through to the next case.
> > 
> > Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> > change[1] is meant to be reverted at some point. So, this patch helps
> > to move in that direction.
> > 
> > Something important to mention is that there is currently a discrepancy
> > between GCC and Clang when dealing with switch fall-through to empty case
> > statements or to cases that only contain a break/continue/return
> > statement[2][3][4].
> 
> Are we sure we want to make this change? Was it discussed before?
> 
> Are there any bugs Clangs puritanical definition of fallthrough helped
> find?
> 
> IMVHO compiler warnings are supposed to warn about issues that could
> be bugs. Falling through to default: break; can hardly be a bug?!

It's certainly a place where the intent is not always clear. I think
this makes all the cases unambiguous, and doesn't impact the machine
code, since the compiler will happily optimize away any behavioral
redundancy.


-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:31:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:31:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32624.63709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgC7o-0005eK-JI; Fri, 20 Nov 2020 19:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32624.63709; Fri, 20 Nov 2020 19:31:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgC7o-0005eD-GD; Fri, 20 Nov 2020 19:31:16 +0000
Received: by outflank-mailman (input) for mailman id 32624;
 Fri, 20 Nov 2020 19:31:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgC7n-0005e2-O1; Fri, 20 Nov 2020 19:31:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgC7n-0006xI-F4; Fri, 20 Nov 2020 19:31:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgC7n-0006qS-6U; Fri, 20 Nov 2020 19:31:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgC7n-0003nS-5j; Fri, 20 Nov 2020 19:31:15 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fl4WFb/zdNuu3CqpjrqLHeUClU/d6nHiQQTFwgyYaIQ=; b=orEsJ2Qfxeb6PZpL0XxvHdy1kC
	LNVs9iL4tx3nlEqE58iiEYfdXD80VIcir/TMHZKHhSAHkCgn5QfGTqz02g0rfHwrZX8FxciDrb2Lb
	phNZ95WICUDJzZ6gpk95h4j1a6rM6J2xeSIOL7N/i+9rDPFCF9VWTGEZne7PxH/oEIno=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156893-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156893: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9
X-Osstest-Versions-That:
    xen=415f904254b7343a90db895134980cbb7f7f0479
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 19:31:15 +0000

flight 156893 xen-unstable real [real]
flight 156904 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156893/
http://logs.test-lab.xenproject.org/osstest/logs/156904/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 156882

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156882
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156882
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156882
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156882
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156882
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156882
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156882
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156882
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156882
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156882
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156882
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9
baseline version:
 xen                  415f904254b7343a90db895134980cbb7f7f0479

Last test of basis   156882  2020-11-19 15:08:36 Z    1 days
Testing same since   156893  2020-11-20 05:29:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0ff2c7e5b4ff9b3066d6cbba9adf95b948b418c9
Author: Julien Grall <julien.grall@arm.com>
Date:   Thu Nov 19 17:08:29 2020 +0000

    xen/arm: acpi: Allow Xen to boot with ACPI 5.1
    
    At the moment Xen requires the FADT ACPI table to be at least version
    6.0, apparently because of some reliance on other ACPI v6.0 features.
    
    But actually this is overzealous, and Xen works now fine with ACPI v5.1.
    
    Let's relax the version check for the FADT table to allow QEMU to
    run the hypervisor with ACPI.
    
    Signed-off-by: Julien Grall <julien.grall@arm.com>
    Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 9a3c25b6562416dfb3708b0e61a4e4761db06e4f
Author: Julien Grall <julien.grall@arm.com>
Date:   Thu Nov 19 17:08:28 2020 +0000

    xen/arm: gic: acpi: Use the correct length for the GICC structure
    
    The length of the GICC structure in the MADT ACPI table differs between
    version 5.1 and 6.0, although there are no other relevant differences.
    
    Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
    overcome this issue.
    
    Signed-off-by: Julien Grall <julien.grall@arm.com>
    Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

commit 1965c171a4a791bbed1cf84c097177117bb02dac
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Nov 19 17:08:27 2020 +0000

    xen/arm: gic: acpi: Guard helpers to build the MADT with CONFIG_ACPI
    
    gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific.
    
    While they build fine today, this will change in a follow-up patch.
    Rather than trying to fix the build on ACPI, it is best to avoid
    compiling the helpers and the associated callbacks when CONFIG_ACPI=n.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 19:51:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 19:51:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32636.63732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgCRj-00080Q-Gf; Fri, 20 Nov 2020 19:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32636.63732; Fri, 20 Nov 2020 19:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgCRj-00080J-Dd; Fri, 20 Nov 2020 19:51:51 +0000
Received: by outflank-mailman (input) for mailman id 32636;
 Fri, 20 Nov 2020 19:51:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/9wm=E2=kernel.org=kuba@srs-us1.protection.inumbo.net>)
 id 1kgCRi-00080E-0T
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 19:51:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b6c91e0-2be2-4f81-8acb-1de3e6c07031;
 Fri, 20 Nov 2020 19:51:49 +0000 (UTC)
Received: from kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com (unknown
 [163.114.132.6])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id ADA4D206B6;
 Fri, 20 Nov 2020 19:51:43 +0000 (UTC)
X-Inumbo-ID: 7b6c91e0-2be2-4f81-8acb-1de3e6c07031
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1605901908;
	bh=RBzOhaxiAPO90htsWQNreeN48VhmFALIIs8F+S8F2tg=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=0HJ898xky9A4AiX47c8E/kv2vloKTCLDsa1B08r0Ns0UI6ViLC8sEEmzvLmSWmHDR
	 rh+QM7asV5JMfRqDaW/zMVHUMAG8pWVLOHT3LXTHM6aAp9QNwMcpVcVlrfF8wRUqJ9
	 pWgdeTKSVxQ+8c5KHivLBQCp2OjpIRpNZteZuQdk=
Date: Fri, 20 Nov 2020 11:51:42 -0800
From: Jakub Kicinski <kuba@kernel.org>
To: Kees Cook <keescook@chromium.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org,
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
 linux-security-module@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com, patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org, selinux@vger.kernel.org,
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net,
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org, Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <202011201129.B13FDB3C@keescook>
References: <cover.1605896059.git.gustavoars@kernel.org>
	<20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	<202011201129.B13FDB3C@keescook>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Fri, 20 Nov 2020 11:30:40 -0800 Kees Cook wrote:
> On Fri, Nov 20, 2020 at 10:53:44AM -0800, Jakub Kicinski wrote:
> > On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:  
> > > This series aims to fix almost all remaining fall-through warnings in
> > > order to enable -Wimplicit-fallthrough for Clang.
> > > 
> > > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > > add multiple break/goto/return/fallthrough statements instead of just
> > > letting the code fall through to the next case.
> > > 
> > > Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> > > change[1] is meant to be reverted at some point. So, this patch helps
> > > to move in that direction.
> > > 
> > > Something important to mention is that there is currently a discrepancy
> > > between GCC and Clang when dealing with switch fall-through to empty case
> > > statements or to cases that only contain a break/continue/return
> > > statement[2][3][4].  
> > 
> > Are we sure we want to make this change? Was it discussed before?
> > 
> > Are there any bugs Clang's puritanical definition of fallthrough helped
> > find?
> > 
> > IMVHO compiler warnings are supposed to warn about issues that could
> > be bugs. Falling through to default: break; can hardly be a bug?!  
> 
> It's certainly a place where the intent is not always clear. I think
> this makes all the cases unambiguous, and doesn't impact the machine
> code, since the compiler will happily optimize away any behavioral
> redundancy.

If none of the 140 patches here fix a real bug, and there is no change
to machine code then it sounds to me like a W=2 kind of a warning.

I think clang is just being annoying here, but if I'm the only one who
feels this way chances are I'm wrong :)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 20:06:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 20:06:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32645.63743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgCfI-0000vO-OV; Fri, 20 Nov 2020 20:05:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32645.63743; Fri, 20 Nov 2020 20:05:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgCfI-0000vH-LM; Fri, 20 Nov 2020 20:05:52 +0000
Received: by outflank-mailman (input) for mailman id 32645;
 Fri, 20 Nov 2020 20:05:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ulUS=E2=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1kgCfH-0000vC-JF
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 20:05:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f869ad6-fd6c-412f-8998-b832ff1c357a;
 Fri, 20 Nov 2020 20:05:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2C646AE1F;
 Fri, 20 Nov 2020 20:05:49 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id C910C1E1319; Fri, 20 Nov 2020 21:05:48 +0100 (CET)
X-Inumbo-ID: 0f869ad6-fd6c-412f-8998-b832ff1c357a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Fri, 20 Nov 2020 21:05:48 +0100
From: Jan Kara <jack@suse.cz>
To: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>, Jan Kara <jack@suse.cz>,
	Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201120200548.GA27360@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-15-hch@lst.de>
 <20201119120525.GW1981@quack2.suse.cz>
 <20201120090820.GD21715@lst.de>
 <20201120112121.GB15537@quack2.suse.cz>
 <20201120153253.GA18990@lst.de>
 <20201120155956.GB4327@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120155956.GB4327@casper.infradead.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri 20-11-20 15:59:56, Matthew Wilcox wrote:
> On Fri, Nov 20, 2020 at 04:32:53PM +0100, Christoph Hellwig wrote:
> > On Fri, Nov 20, 2020 at 12:21:21PM +0100, Jan Kara wrote:
> > > > > AFAICT bd_size_lock is pointless after these changes so we can just remove
> > > > > it?
> > > > 
> > > > I don't think it is, as requiring bd_mutex for size updates leads to
> > > > rather awkward lock ordering problems.
> > > 
> > > OK, let me ask differently: What is bd_size_lock protecting now? Ah, I see,
> > > on 32-bit it is needed to prevent torn writes to i_size, right?
> > 
> > Exactly.  In theory we could skip it for 64-bit, but as updating the
> > size isn't a fast path, and struct block_device isn't super size critical
> > I'd rather keep the same code for 32 vs 64-bit builds.
> 
> Is it better to switch to i_size_write() / i_size_read()?

The code is already switched to it AFAICT (the lock is really only used in
the two places that write i_size). But the problem is that in theory two
i_size_write() calls can race in a way that the resulting stored i_size is a
mix of two stored sizes. Now I have a hard time imagining how this could
happen for a block device, and if two reconfigurations of a block device
could race like that we'd have larger problems anyway...

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 20:48:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 20:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32661.63762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgDKB-0005Nl-2J; Fri, 20 Nov 2020 20:48:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32661.63762; Fri, 20 Nov 2020 20:48:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgDKA-0005Ne-Uy; Fri, 20 Nov 2020 20:48:06 +0000
Received: by outflank-mailman (input) for mailman id 32661;
 Fri, 20 Nov 2020 20:48:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dS1w=E2=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1kgDK8-0005NZ-NT
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 20:48:04 +0000
Received: from mail-pl1-x642.google.com (unknown [2607:f8b0:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be337502-5f19-4bbe-b247-5a8fdb29c913;
 Fri, 20 Nov 2020 20:48:03 +0000 (UTC)
Received: by mail-pl1-x642.google.com with SMTP id 5so5468025plj.8
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 12:48:03 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id w11sm565810pfi.162.2020.11.20.12.48.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 20 Nov 2020 12:48:02 -0800 (PST)
X-Inumbo-ID: be337502-5f19-4bbe-b247-5a8fdb29c913
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=v0uQ3+ZvQ790GJbfbb1ESqfRrqQ38XoL7hho1t1Gb0k=;
        b=avd4TaFUM+Ab56B8iWTc3giej2JyPXQeNqj8Vqwh4pMfYMm6E7ROZ43KbUMGq3c2kN
         Y7wgImiWtwhH33QCuhXwX5xfYnbd8ZoAoymaiVLMsflRM4OrtMTi7raBVieCeB5e9/kv
         qCr4nXKui27aGXF6t1ziK2ispJ9UlbNRnpYGc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=v0uQ3+ZvQ790GJbfbb1ESqfRrqQ38XoL7hho1t1Gb0k=;
        b=YWBi26oMx3deAzVdSWpu1nP2l43mj9HMjdWn7Z8ZCzHYLG5rRRcdsFK08+MyQEGAOW
         nJWWcLFjAZKCj7aY3bdpHbR6OR9TwKnMcpZZQldCYvxj1oPcS7pdciOH2dFsRsqIb/6i
         0YlOG51cLeWGZkf1/do3ubxu9J8G8l0JUD2MIwR97zy7+7iOv9x5snnTi33xqdxVc722
         4JEm6r6BfDlB1gNdFkSDwvpyxJzRHCbWZlPC71etbQuhZ9aFvi2vamzwK9x3P+Ifd/fX
         JOvpS/ztrG2J0wVIpoqQUeYUhxHafCrnD/AXyMl9vRVVBJ7cCq70UrQ6ACzgIUcqhMmL
         Cpaw==
X-Gm-Message-State: AOAM532CmhrfLRxBpbxt+jU0PmQxHSGnm+dEuoKU22XjysBWeGxZZTeX
	cnsYZarrG/stJ4m4GEpXfCPdUg==
X-Google-Smtp-Source: ABdhPJwusp2Lb+4OqT162rurUKkT/e5T3DAOLIduq4rICrKRG7vcFewdIWNkE66BLMZ+8SMrIPoLqA==
X-Received: by 2002:a17:902:7890:b029:d8:bb20:518e with SMTP id q16-20020a1709027890b02900d8bb20518emr15184915pll.66.1605905283037;
        Fri, 20 Nov 2020 12:48:03 -0800 (PST)
Date: Fri, 20 Nov 2020 12:48:01 -0800
From: Kees Cook <keescook@chromium.org>
To: Jakub Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
	linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
	amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
	coreteam@netfilter.org, devel@driverdev.osuosl.org,
	dm-devel@redhat.com, drbd-dev@lists.linbit.com,
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
	netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
	oss-drivers@netronome.com, patches@opensource.cirrus.com,
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
	samba-technical@lists.samba.org, selinux@vger.kernel.org,
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <202011201244.78E002D5@keescook>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> On Fri, 20 Nov 2020 11:30:40 -0800 Kees Cook wrote:
> > On Fri, Nov 20, 2020 at 10:53:44AM -0800, Jakub Kicinski wrote:
> > > On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:  
> > > > This series aims to fix almost all remaining fall-through warnings in
> > > > order to enable -Wimplicit-fallthrough for Clang.
> > > > 
> > > > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > > > add multiple break/goto/return/fallthrough statements instead of just
> > > > letting the code fall through to the next case.
> > > > 
> > > > Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> > > > change[1] is meant to be reverted at some point. So, this patch helps
> > > > to move in that direction.
> > > > 
> > > > Something important to mention is that there is currently a discrepancy
> > > > between GCC and Clang when dealing with switch fall-through to empty case
> > > > statements or to cases that only contain a break/continue/return
> > > > statement[2][3][4].  
> > > 
> > > Are we sure we want to make this change? Was it discussed before?
> > > 
> > > Are there any bugs Clang's puritanical definition of fallthrough helped
> > > find?
> > > 
> > > IMVHO compiler warnings are supposed to warn about issues that could
> > > be bugs. Falling through to default: break; can hardly be a bug?!  
> > 
> > It's certainly a place where the intent is not always clear. I think
> > this makes all the cases unambiguous, and doesn't impact the machine
> > code, since the compiler will happily optimize away any behavioral
> > redundancy.
> 
> If none of the 140 patches here fix a real bug, and there is no change
> to machine code then it sounds to me like a W=2 kind of a warning.

I'd like to avoid splitting common -W options between default and W=2
just based on the compiler. Getting -Wimplicit-fallthrough enabled found
plenty of bugs, so making sure it works correctly for both compilers
feels justified to me. (This is just a subset of the same C language
short-coming.)

> I think clang is just being annoying here, but if I'm the only one who
> feels this way chances are I'm wrong :)

It's being pretty pedantic, but I don't think it's unreasonable to
explicitly state how every case ends. GCC's silence for the case of
"fall through to a break" doesn't really seem justified.

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 21:24:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 21:24:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32674.63774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgDt2-0001Lp-W8; Fri, 20 Nov 2020 21:24:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32674.63774; Fri, 20 Nov 2020 21:24:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgDt2-0001Li-Sl; Fri, 20 Nov 2020 21:24:08 +0000
Received: by outflank-mailman (input) for mailman id 32674;
 Fri, 20 Nov 2020 21:24:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgDt1-0001La-KR; Fri, 20 Nov 2020 21:24:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgDt1-0000wG-AR; Fri, 20 Nov 2020 21:24:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgDt0-0004CH-Un; Fri, 20 Nov 2020 21:24:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgDt0-0000C6-UI; Fri, 20 Nov 2020 21:24:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgDt1-0001La-KR; Fri, 20 Nov 2020 21:24:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FQ8AqBxsbsHQFfNu2VK1lOOyz3KtcPP8bDDHMmNIoZA=; b=XREnlJBgoR0Lctz/o0Ul3+Cp9Q
	Nl83dtI1TR4cKOrA5D6LdP1cTcFeW5fWrz4clQnxtw9NOwgQ/mnCPcIAPJUaXIuiFqfoSv7C0qSLA
	p04+973x4+CoVfWNf7yXQXwyrzxePwJyIJcGSp2dkAHNrRIB4SKAOKoKCuvbFTNz9Y+Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgDt1-0000wG-AR; Fri, 20 Nov 2020 21:24:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgDt0-0004CH-Un; Fri, 20 Nov 2020 21:24:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgDt0-0000C6-UI; Fri, 20 Nov 2020 21:24:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156898: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4d02da974ea85a62074efedf354e82778f910d82
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 21:24:06 +0000

flight 156898 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156898/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                4d02da974ea85a62074efedf354e82778f910d82
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  112 days
Failing since        152366  2020-08-01 20:49:34 Z  111 days  185 attempts
Testing same since   156898  2020-11-20 11:08:43 Z    0 days    1 attempts

------------------------------------------------------------
3555 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 679700 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 20 21:38:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 21:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32683.63789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgE75-0002dY-Hh; Fri, 20 Nov 2020 21:38:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32683.63789; Fri, 20 Nov 2020 21:38:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgE75-0002dR-Dm; Fri, 20 Nov 2020 21:38:39 +0000
Received: by outflank-mailman (input) for mailman id 32683;
 Fri, 20 Nov 2020 21:38:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mmwl=E2=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kgE73-0002dM-Km
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 21:38:37 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c962f002-2e17-4381-8882-e2b39e7b6164;
 Fri, 20 Nov 2020 21:38:36 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AKLc9ED004163;
 Fri, 20 Nov 2020 21:38:31 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 34t76mcr2p-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 20 Nov 2020 21:38:31 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AKLVUlF003267;
 Fri, 20 Nov 2020 21:36:31 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3020.oracle.com with ESMTP id 34umd3yqg2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 20 Nov 2020 21:36:31 +0000
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0AKLaREe006534;
 Fri, 20 Nov 2020 21:36:28 GMT
Received: from [10.74.102.87] (/10.74.102.87)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 20 Nov 2020 13:36:27 -0800
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Mmwl=E2=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
	id 1kgE73-0002dM-Km
	for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 21:38:37 +0000
X-Inumbo-ID: c962f002-2e17-4381-8882-e2b39e7b6164
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c962f002-2e17-4381-8882-e2b39e7b6164;
	Fri, 20 Nov 2020 21:38:36 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
	by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AKLc9ED004163;
	Fri, 20 Nov 2020 21:38:31 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=vdgWyMqTZ/KgANno+C5rCsxsLffEg7/4wEFhdg4BzwI=;
 b=hjCAAPiQuGKgg9PQlmF2CEhfXgsK4V3IL0TBllq3pwdLW2cAsCBNqTQ4ThGfPvnQoh9+
 V7EbbfgvR6mElEDy6mt+/BayXJ+D9umRbQoMUux/U76sqn2bBS0eFkGWVhKyhmvo+iOW
 pOZGAsOecys9/DjyqOootIsFGCbj54uTsMShNy2TgH7R72HHP4isQNzlEyT35/1CmhDI
 JCsQZTzIA37BdmeXrTiRI13i2DeCcMcrOP56wP6TIXdMpjjWdE4ef5e6oK3KN5BTnDFc
 Ak9rAB4fR9ngZI1whbbshi5ybA/yEtPOObnfChfOccclqBnRcR5YseEhI/aYwHBhJ2g3 3Q== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
	by aserp2120.oracle.com with ESMTP id 34t76mcr2p-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
	Fri, 20 Nov 2020 21:38:31 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
	by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0AKLVUlF003267;
	Fri, 20 Nov 2020 21:36:31 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
	by aserp3020.oracle.com with ESMTP id 34umd3yqg2-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
	Fri, 20 Nov 2020 21:36:31 +0000
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0AKLaREe006534;
	Fri, 20 Nov 2020 21:36:28 GMT
Received: from [10.74.102.87] (/10.74.102.87)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 20 Nov 2020 13:36:27 -0800
Subject: Re: [PATCH 058/141] xen-blkfront: Fix fall-through warnings for Clang
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
 <sstabellini@kernel.org>,
        Jens Axboe <axboe@kernel.dk>
Cc: xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
        linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
References: <cover.1605896059.git.gustavoars@kernel.org>
 <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <e8d67ea1-3d0d-509a-a2f1-cf1758bb373f@oracle.com>
Date: Fri, 20 Nov 2020 16:36:26 -0500
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9811 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxscore=0 phishscore=0
 spamscore=0 bulkscore=0 mlxlogscore=999 malwarescore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011200143
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9811 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 phishscore=0
 adultscore=0 priorityscore=1501 bulkscore=0 clxscore=1011 mlxlogscore=999
 malwarescore=0 mlxscore=0 spamscore=0 lowpriorityscore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011200143


On 11/20/20 1:32 PM, Gustavo A. R. Silva wrote:
> In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
> by explicitly adding a break statement instead of letting the code fall
> through to the next case.
>
> Link: https://github.com/KSPP/linux/issues/115
> Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
> ---
>  drivers/block/xen-blkfront.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 48629d3433b4..34b028be78ab 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -2462,6 +2462,7 @@ static void blkback_changed(struct xenbus_device *dev,
>  			break;
>  		if (talk_to_blkback(dev, info))
>  			break;
> +		break;
>  	case XenbusStateInitialising:
>  	case XenbusStateInitialised:
>  	case XenbusStateReconfiguring:


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


(for patch 138 as well)


Although I thought using the 'fallthrough' attribute was the more common approach.
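
For comparison, a standalone sketch (illustrative names; outside the kernel
the pseudo-keyword has to be defined by hand, roughly as
include/linux/compiler_attributes.h does) of the two idioms: 'fallthrough'
where the fall-through is deliberate, and an explicit break -- as in this
patch -- where it is not:

```c
#include <assert.h>

/* Rough equivalent of the kernel's fallthrough pseudo-keyword. */
#if defined(__has_attribute)
# if __has_attribute(__fallthrough__)
#  define fallthrough __attribute__((__fallthrough__))
# endif
#endif
#ifndef fallthrough
# define fallthrough do {} while (0)	/* fallback for older compilers */
#endif

static int accumulate(int state)
{
	int n = 0;

	switch (state) {
	case 2:
		n += 2;
		fallthrough;	/* deliberate: state 2 also does state 1's work */
	case 1:
		n += 1;
		break;		/* unintended fall-through gets a break instead */
	}
	return n;
}
```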


-boris



From xen-devel-bounces@lists.xenproject.org Fri Nov 20 22:43:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 20 Nov 2020 22:43:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32704.63810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgF77-0001oP-KD; Fri, 20 Nov 2020 22:42:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32704.63810; Fri, 20 Nov 2020 22:42:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgF77-0001oI-Fc; Fri, 20 Nov 2020 22:42:45 +0000
Received: by outflank-mailman (input) for mailman id 32704;
 Fri, 20 Nov 2020 22:42:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgF76-0001oA-DS; Fri, 20 Nov 2020 22:42:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgF76-0002Wp-95; Fri, 20 Nov 2020 22:42:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgF76-0007in-2T; Fri, 20 Nov 2020 22:42:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgF76-0007z8-20; Fri, 20 Nov 2020 22:42:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgF76-0001oA-DS; Fri, 20 Nov 2020 22:42:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DZTPPE5v17XHE6tH9/uegNaRXZ0+z1/K1PqvW5p2JnI=; b=j8PC6VP7akHNN0NNCbJ6R918dq
	MgHpa+E/NB/UXVa4fgEX0E6mTrWLsiNbQ0nktKMSaZAW0Ffnz7Bu9Ju+c3bvoMiFHcBve0iJgaa9R
	KtnS3131qTc3TimvtQmM4v1RhRNEK3Almgcnh7SnTtDEwWoX/DFvXvB4eNx4E5TmUWpk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgF76-0002Wp-95; Fri, 20 Nov 2020 22:42:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgF76-0007in-2T; Fri, 20 Nov 2020 22:42:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgF76-0007z8-20; Fri, 20 Nov 2020 22:42:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156907-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156907: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
X-Osstest-Versions-That:
    xen=846d22d54f24f336fb80d052338e0cd030d54fee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 20 Nov 2020 22:42:44 +0000

flight 156907 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156907/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad
baseline version:
 xen                  846d22d54f24f336fb80d052338e0cd030d54fee

Last test of basis   156895  2020-11-20 08:02:36 Z    0 days
Testing same since   156907  2020-11-20 20:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   846d22d54f..b659a5cebd  b659a5cebd611dbe698e63c03485b5fe8cd964ad -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 00:47:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 00:47:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32719.63827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgH3I-00074j-TC; Sat, 21 Nov 2020 00:46:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32719.63827; Sat, 21 Nov 2020 00:46:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgH3I-00074c-QE; Sat, 21 Nov 2020 00:46:56 +0000
Received: by outflank-mailman (input) for mailman id 32719;
 Sat, 21 Nov 2020 00:46:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgH3I-00074U-9r; Sat, 21 Nov 2020 00:46:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgH3I-0005bS-4O; Sat, 21 Nov 2020 00:46:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgH3H-0007kH-Q2; Sat, 21 Nov 2020 00:46:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgH3H-0003qq-PZ; Sat, 21 Nov 2020 00:46:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgH3I-00074U-9r; Sat, 21 Nov 2020 00:46:56 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H5xbob2NAy/2N8qU6+5Q8OuYdXWD1Q8wpxxwQzWMsyU=; b=v6FlzXS+J9vI6WPTuWR1VKah0U
	QNICEeRiwcU4oJasTGuUlkq81NXG2Vva8l67Da7IfStEk4ddalRXIOrpsUPomI9TdR+xFCKywr034
	SNxnVYVawXroIbZBoeKakBtcB0vCbRMvLid0pP9HmpSz3CzV2bBHGDc0Fk9wsblurnzM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgH3I-0005bS-4O; Sat, 21 Nov 2020 00:46:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgH3H-0007kH-Q2; Sat, 21 Nov 2020 00:46:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgH3H-0003qq-PZ; Sat, 21 Nov 2020 00:46:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156902: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7fbd7e710323c8f4c5f6a38a8ae0e6726b5a4599
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 00:46:55 +0000

flight 156902 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156902/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156890

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7fbd7e710323c8f4c5f6a38a8ae0e6726b5a4599
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   92 days
Failing since        152659  2020-08-21 14:07:39 Z   91 days  194 attempts
Testing same since   156890  2020-11-20 01:06:47 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67249 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 05:36:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 05:36:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32702.63852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgLZY-0001pN-Of; Sat, 21 Nov 2020 05:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32702.63852; Sat, 21 Nov 2020 05:36:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgLZY-0001pF-IK; Sat, 21 Nov 2020 05:36:32 +0000
Received: by outflank-mailman (input) for mailman id 32702;
 Fri, 20 Nov 2020 22:21:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ltmN=E2=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1kgEmX-00084x-QG
 for xen-devel@lists.xenproject.org; Fri, 20 Nov 2020 22:21:29 +0000
Received: from mail-yb1-xb41.google.com (unknown [2607:f8b0:4864:20::b41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6bf0283-a6ac-4448-bac2-1d96ba61b2f1;
 Fri, 20 Nov 2020 22:21:29 +0000 (UTC)
Received: by mail-yb1-xb41.google.com with SMTP id x17so9968621ybr.8
 for <xen-devel@lists.xenproject.org>; Fri, 20 Nov 2020 14:21:28 -0800 (PST)
X-Inumbo-ID: b6bf0283-a6ac-4448-bac2-1d96ba61b2f1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=29/AaAKKWIjlxKBDiu42mXe16/c3LUzQvTdn0Wctw9g=;
        b=th/UjqVcversiy9gEUr46cKRrhHywr0iTgf9mzUejm+T3/Jtu2c6ftjob7N1dnOe2S
         rGVMit/4Gqy7ZDqW+do8x9wI8RCPt/cmuWncILsoBuux3CBAtfG8gT+eTL8wiBVSM5EH
         rDgxqkIvK+vUXnfhgj71GxYIUQH1hInVNMbCl5+3prik9RGhi+hsmul85xXuf9uJVr2L
         NGLQGWDmOXXHg4l9XujW0HrVyacbE3Q1QSLAkLJ0BjE5AdFqhLyeU3T8/6+nJ9hH0JSR
         o1vAVeqr6Bt+/W6rGIwrn7PuvAvMGRmk1Avx4k6vjCg8etglRiDTMnimrqKcX/KEsNCP
         0CAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=29/AaAKKWIjlxKBDiu42mXe16/c3LUzQvTdn0Wctw9g=;
        b=tf0245agvlXQ5jkbIyvodtLJdX+fSZTYFBRHInfYzFSgjwckkFPBarQY6ODs9T12z+
         C5YNHS5xKLb0RJVmzxzzJ5WSTeOFMHCLLlbA2MRgo+QKmzZ1N97wkl50nBXHjoItA3Mo
         o3BevoAwJEg9B0Zxlz66DLwrZZ+KghecUZxnZJxeWF5+a7AoFJpb//WAsBlbFWOXlHYA
         MiV6NCPnegUudeQZKayJj5esxOuRx+GqHUtO/NpC8GDHN3Ue0kbIWYqZGCj+F0JpVzCm
         IhxYXbpp1PyGD7m10G8UNm4vWfYqiKX1ySERi0Vm76LjUl0pfrkkX5fGCkp28Nje3bHX
         TEsQ==
X-Gm-Message-State: AOAM530xa8+KLGPpK+K4/+HtVoKC0EbVTbD79BGKpQSfB9mr5wRzjepG
	lQA3ASzSmFyBSHDRdPOL1MLUqhZUOo//mePG2V0=
X-Google-Smtp-Source: ABdhPJzYUhkcjTyH5GIcOV5fWMsl+OUdzem471zLY9usW3RhV8izQIRbyESMTmZwFikqgbfOQQqVvSSQCZzBPJX4q90=
X-Received: by 2002:a5b:40e:: with SMTP id m14mr22113400ybp.33.1605910888617;
 Fri, 20 Nov 2020 14:21:28 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org>
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Fri, 20 Nov 2020 23:21:17 +0100
Message-ID: <CANiq72=E_gEVvqUUTSqU4zegC2=yZSTM4b=4G-iofp6d3=UgWQ@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: linux-kernel <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org, 
	amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com, coreteam@netfilter.org, 
	devel@driverdev.osuosl.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>, Kees Cook <keescook@chromium.org>
Content-Type: text/plain; charset="UTF-8"

Hi Gustavo,

On Fri, Nov 20, 2020 at 7:21 PM Gustavo A. R. Silva
<gustavoars@kernel.org> wrote:
>
> Hi all,
>
> This series aims to fix almost all remaining fall-through warnings in
> order to enable -Wimplicit-fallthrough for Clang.

Thanks for this.

Since this warning is reliable in all the compilers we care about, and we
are eventually getting rid of all the remaining cases, what about going
even further and making it an error right after?

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 06:59:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 06:59:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32763.63869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgMra-0002GP-UC; Sat, 21 Nov 2020 06:59:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32763.63869; Sat, 21 Nov 2020 06:59:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgMra-0002GI-RF; Sat, 21 Nov 2020 06:59:14 +0000
Received: by outflank-mailman (input) for mailman id 32763;
 Sat, 21 Nov 2020 06:59:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgMrZ-0002GA-Vo; Sat, 21 Nov 2020 06:59:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgMrZ-0001Kx-OD; Sat, 21 Nov 2020 06:59:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgMrZ-0001N2-GU; Sat, 21 Nov 2020 06:59:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgMrZ-0002hQ-G1; Sat, 21 Nov 2020 06:59:13 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PhI6r3a5FSj3ZPvmAMVA13lenjfLIbcds/Ehur3Se44=; b=OXrCSWXbFvnGkGHIFzy3FR3Cl7
	irIDB/zilL7JGf+Eo6+WtVCV3wsg/RZHym6juJylB+tOem1jUzOHQ5PPABi4dVAaMvduu0GETjzyP
	xysZX/GeXQN0dFlkl3AFJWutD9WyDBBhif0ZAG7ZHu6ImfqmIW/fdc7bNz0efxVF3wQc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156905-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156905: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=846d22d54f24f336fb80d052338e0cd030d54fee
X-Osstest-Versions-That:
    xen=415f904254b7343a90db895134980cbb7f7f0479
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 06:59:13 +0000

flight 156905 xen-unstable real [real]
flight 156916 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156905/
http://logs.test-lab.xenproject.org/osstest/logs/156916/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156916-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156882
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156882
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156882
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156882
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156882
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156882
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156882
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156882
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156882
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156882
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156882
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  846d22d54f24f336fb80d052338e0cd030d54fee
baseline version:
 xen                  415f904254b7343a90db895134980cbb7f7f0479

Last test of basis   156882  2020-11-19 15:08:36 Z    1 days
Failing since        156893  2020-11-20 05:29:38 Z    1 days    2 attempts
Testing same since   156905  2020-11-20 19:39:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   415f904254..846d22d54f  846d22d54f24f336fb80d052338e0cd030d54fee -> master


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 08:26:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 08:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32788.63913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgODp-0004Jj-UG; Sat, 21 Nov 2020 08:26:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32788.63913; Sat, 21 Nov 2020 08:26:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgODp-0004Jc-QC; Sat, 21 Nov 2020 08:26:17 +0000
Received: by outflank-mailman (input) for mailman id 32788;
 Sat, 21 Nov 2020 08:26:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgODp-0004JU-FG; Sat, 21 Nov 2020 08:26:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgODp-0003c6-3U; Sat, 21 Nov 2020 08:26:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgODo-0005A5-P9; Sat, 21 Nov 2020 08:26:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgODo-0005JK-OC; Sat, 21 Nov 2020 08:26:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4iN3vy2CrzfuiMiM/xEmj2ncI/qrxFXHPMJ4KZAx7tc=; b=j3eyXzRtLCcpkwo9a3WyDlglwM
	LjkLMoHguwjSRpME51ziVR+AsKco1q5fYJ5VLfQr2sK8lOT9AqGLJ3TSyIUfbneLhXCu3rpz/QGV4
	OOwAjRNwJWIoEy1uB/FaxF4oKRYYvJqSSILCvCUC76ftmEUcJxb1MBCzO2iXCFTk9epQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156909-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156909: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4ccf7a01e805f04defd423fb410f47a13af76399
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 08:26:16 +0000

flight 156909 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156909/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                4ccf7a01e805f04defd423fb410f47a13af76399
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  112 days
Failing since        152366  2020-08-01 20:49:34 Z  111 days  186 attempts
Testing same since   156909  2020-11-20 21:40:48 Z    0 days    1 attempts

------------------------------------------------------------
3562 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 681148 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 08:58:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 08:58:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32800.63928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgOiR-0007gA-K6; Sat, 21 Nov 2020 08:57:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32800.63928; Sat, 21 Nov 2020 08:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgOiR-0007g3-H2; Sat, 21 Nov 2020 08:57:55 +0000
Received: by outflank-mailman (input) for mailman id 32800;
 Sat, 21 Nov 2020 08:57:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOiQ-0007fv-DH; Sat, 21 Nov 2020 08:57:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOiQ-0004Gg-81; Sat, 21 Nov 2020 08:57:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOiQ-0006dq-0Z; Sat, 21 Nov 2020 08:57:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOiQ-0008AB-06; Sat, 21 Nov 2020 08:57:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qTNQga0iDKttPw+XtbnwvoXP9brgEDpjjSM71WjsRUc=; b=LaKm8BVmcfNxswCueEejUwnkuA
	mpAkdrFucBJUU5ASktFlhxRuDF25A/gqh+R+0s+2V2cIAPfr5d4tp7w4SXe2f5ONg23bWH8UWn0iE
	dKG6WKvEEaBggajExhqHgph/0buycxs68dPOd2H/7AM4JjzlM0/nQUvPocZY0bBU92Gw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156915-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156915: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=fd674c09688cedea07eb7c033cbb1eed8ff287f6
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 08:57:54 +0000

flight 156915 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156915/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              fd674c09688cedea07eb7c033cbb1eed8ff287f6
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  134 days
Failing since        151818  2020-07-11 04:18:52 Z  133 days  128 attempts
Testing same since   156915  2020-11-21 04:19:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28333 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 09:09:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 09:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32811.63943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgOt3-0000Vz-NV; Sat, 21 Nov 2020 09:08:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32811.63943; Sat, 21 Nov 2020 09:08:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgOt3-0000Vs-K4; Sat, 21 Nov 2020 09:08:53 +0000
Received: by outflank-mailman (input) for mailman id 32811;
 Sat, 21 Nov 2020 09:08:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOt2-0000Vk-EG; Sat, 21 Nov 2020 09:08:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOt2-0004W0-8X; Sat, 21 Nov 2020 09:08:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOt1-0006yC-Vk; Sat, 21 Nov 2020 09:08:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgOt1-0002tc-VB; Sat, 21 Nov 2020 09:08:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7WPsz5hJi3Bv+7K63Le79Zw3CrpzfmADrzvbUUZXkPE=; b=7PJTminrhzYoIO4EDGQwJczsWi
	Po/Q8xLlu3/izXASEb80a6jIQ3ZuDS6vkp7imcVKRZD5Dw9GHS7rsPlhnaEI3/NQLp0nKpoV6BPOd
	5Bde/9MXuFjO0ikKyuuhjDeFpm+Hv6DRMFj2BiqqhzrWc/DSKdH90ymYBucOEA6/YY+I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156913-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156913: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=47343af30435302c087027177613412a1a83e919
X-Osstest-Versions-That:
    ovmf=6c8dd15c4ae42501438a525ec41299f365f223cb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 09:08:51 +0000

flight 156913 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156913/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 47343af30435302c087027177613412a1a83e919
baseline version:
 ovmf                 6c8dd15c4ae42501438a525ec41299f365f223cb

Last test of basis   156879  2020-11-19 11:39:56 Z    1 days
Testing same since   156913  2020-11-21 01:54:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6c8dd15c4a..47343af304  47343af30435302c087027177613412a1a83e919 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 10:54:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 10:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32843.63964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgQWj-00041W-FE; Sat, 21 Nov 2020 10:53:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32843.63964; Sat, 21 Nov 2020 10:53:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgQWj-00041P-AR; Sat, 21 Nov 2020 10:53:57 +0000
Received: by outflank-mailman (input) for mailman id 32843;
 Sat, 21 Nov 2020 10:53:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lwRl=E3=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1kgQWh-00041K-W5
 for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 10:53:56 +0000
Received: from mail-il1-x12b.google.com (unknown [2607:f8b0:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2d037d6-8ec1-4c5e-a593-88bb9b58bd09;
 Sat, 21 Nov 2020 10:53:54 +0000 (UTC)
Received: by mail-il1-x12b.google.com with SMTP id h6so10909600ilj.8
 for <xen-devel@lists.xenproject.org>; Sat, 21 Nov 2020 02:53:54 -0800 (PST)
Received: from [100.64.72.31] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id c16sm3871734ilj.71.2020.11.21.02.53.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 21 Nov 2020 02:53:52 -0800 (PST)
X-Inumbo-ID: f2d037d6-8ec1-4c5e-a593-88bb9b58bd09
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:message-id:date
         :to:cc;
        bh=GJlmF9AKA/yoVwceev69lVTOpx+OM8K+hMVriKdWdws=;
        b=PVf+jWydh8M6TK099qKH6DFcAkK4p8sXpyLzSl8Vaz5mXIXOZKOoewmPtzDndjArFv
         xLyYMvKZ85NbAWmgWWxOPVsfaWlUwmhfrus3znMnHY+MH6Nqci7AECpe/EyK3nrT+h1i
         2mUATLBkw5AqIclRp4U+2WE/eyQf+gTe7KEYKRwXKPBK4ETq3amRKn7SrGJH4YX5uMgx
         oAoRmJGptAN22HMnHGBmRSrsJPV7YTyoMJYgcT7CVKl1rZWflY+flER1NGAd3q3MxYB3
         WjISPySNgitw/t/2vlwIPVPcPNSpI2phezi6vhK1eozWCOakdRzb/huv7e8i1xHxZgsk
         pIcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:message-id:date:to:cc;
        bh=GJlmF9AKA/yoVwceev69lVTOpx+OM8K+hMVriKdWdws=;
        b=UjjFgDaha5kS4mGhuQkn4RqzBJr14Z4ptSs9xaFK57MEfHHfuSi1r+zdXsY5PJV7Bs
         /BXR4ojX7+w+YKjv8PPSLzpPte+jFZ9D0qKes/T424jjinc84dCC4otcO0Ao2szcGjpX
         nXEnxJfHWPn+ekNyP2MoHnEo9qs9K9Vh98LLC6F9TR/mU6+vsLXMVem/Cy/GLz+IqVHt
         iIPfMTHFbm+jwHELY2FAlNxnFEHBri9zlIuA+Ybe2Wrw/DABaZF+9mcK40gQTT/FTfDD
         +sT0EFsmQBeQgdKFLXAlhK+sL1feOZRVOWFt2X74eDLa5mzEvwbETxCEcZR4J/+4UVdv
         CiLw==
X-Gm-Message-State: AOAM532hUpexuXnUh113+ESsv017x2qfipYFakmKc4WSd7Kvy1TToVgC
	usR7xm6cGXeDX84VdXqDsfqk8K2ymWXWUA==
X-Google-Smtp-Source: ABdhPJz7fhptF9TsGDp0E1iNwhZB4JzKpw8hAlw+PN3AkTNy9iQzL4a4y+KJ/vL4Zv9ctUcJMal+OA==
X-Received: by 2002:a92:6410:: with SMTP id y16mr28036175ilb.126.1605956033405;
        Sat, 21 Nov 2020 02:53:53 -0800 (PST)
Content-Type: multipart/alternative; boundary=Apple-Mail-D9968B9E-C459-45BF-8459-7167284D8E63
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
Message-Id: <868F968F-83B1-4AB7-A48F-FB4E7D3D5E41@gmail.com>
Date: Sat, 21 Nov 2020 05:53:50 -0500
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
 Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org,
 xen-devel@lists.xenproject.org,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Roman Shaposhnik <roman@zededa.com>,
 =?utf-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
X-Mailer: iPad Mail (18B92)


--Apple-Mail-D9968B9E-C459-45BF-8459-7167284D8E63
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Nov 20, 2020, at 05:09, Jan Beulich <jbeulich@suse.com> wrote:
> On 19.11.2020 22:40, Stefano Stabellini wrote:
>> On Thu, 19 Nov 2020, Jan Beulich wrote:
>>> On 18.11.2020 22:00, Stefano Stabellini wrote:
>>>> On Wed, 18 Nov 2020, Jan Beulich wrote:
>>>>> On 18.11.2020 01:50, Stefano Stabellini wrote:
>>>>>> 1) It is not obvious that "Configure standard Xen features (expert
>>>>>> users)" is actually the famous EXPERT we keep talking about on xen-devel
>>>>> Which can be addressed by simply changing the one prompt line.
>>>>>> 2) It is not obvious when we need to enable EXPERT to get a specific
>>>>>> feature
>>>>>> In particular if you want to enable ACPI support so that you can boot
>>>>>> Xen on an ACPI platform, you have to enable EXPERT first. But searching
>>>>>> through the kconfig menu it is really not clear (type '/' and "ACPI"):
>>>>>> nothing in the description tells you that you need to enable EXPERT to
>>>>>> get the option.
>>>>> And what causes this to be different once you switch to UNSUPPORTED?
>>>> Two things: firstly, it doesn't and shouldn't take an expert to enable
>>>> ACPI support, even if ACPI support is experimental. So calling it
>>>> UNSUPPORTED helps a lot. This is particularly relevant to the ARM Kconfig
>>>> options changed by this patch. Secondly, this patch is adding
>>>> "(UNSUPPORTED)" in the oneline prompt so that it becomes easy to match
>>>> it with the option you need to enable.
>>> There's redundancy here then, which I think is in almost all cases
>>> better to avoid. That's first and foremost because the two places
>>> can go out of sync. Therefore, if the primary thing is to help
>>> "make menuconfig" (which I admit I don't normally use, as it's
>>> nothing that gets invoked implicitly by the build process afaict,
>>> i.e. one has to actively invoke it), perhaps we should enhance
>>> kconfig to attach at least a pre-determined subset of labels to
>>> the prompts automatically?
>>> And second, also in reply to what you've been saying further down,
>>> perhaps we would better go with a hierarchy of controls here, e.g.
>>> EXPERT -> EXPERIMENTAL -> UNSUPPORTED?
>> 
>> Both these are good ideas worth discussing; somebody else made a similar
>> suggestion some time back. I was already thinking this could be a great
>> candidate for one of the first "working groups" as defined by George
>> during the last community call because the topic is not purely
>> technical: a working group could help getting alignment and make
>> progress faster. We can propose it to George when he is back.
>> 
>> However, I don't think we need the working group to make progress on
>> this limited patch that only addresses the lowest hanging fruit.
>> 
>> I'd like to suggest to make progress on this patch in its current form,
>> and in parallel start a longer term discussion on how to do something
>> like you suggested above.
> 
> Okay, I guess I can accept this. So FAOD I'm not objecting to the
> change (with some suitable adjustments, as discussed), but I'm
> then also not going to be the one to ack it. Nevertheless I'd like
> to point out that doing such a partial solution may end up adding
> confusion rather than reducing it.

Seconded.

> Much depends on how exactly consumers interpret what we hand to them.

How have Xen consumers changed during the last 15 years?  The UX (user
experience) community uses a technique called "user stories" [0][1][2].
It may be worth writing a few sentences about the users (distro
packagers? Xen derivatives? hypervisor developers? hosting companies?
malware developers?) whose needs are addressed by proposed changes.
One could then reason about UX of Xen feature selection, before and
after proposed changes.

Rich

[0] https://uxdict.io/design-thinking-methods-user-stories-3b9467313a04
[1] https://uxdesign.cc/fostering-ux-on-a-devops-culture-bb92716e3f43
[2] https://about.gitlab.com/blog/2020/03/27/how-we-utilize-user-stories-as-a-collaborative-design-tool/


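[Editor's note: the EXPERT gating and the proposed "(UNSUPPORTED)" prompt labelling discussed in the thread above can be sketched as a Kconfig fragment. This is a hypothetical illustration of the pattern, not the actual Xen Kconfig source; all symbol names and help text here are assumptions.]

```kconfig
# Hypothetical sketch only -- symbols and prompts are illustrative,
# not taken from the Xen tree.

config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	help
	  Expose options that carry no security support. Unlike a
	  generic EXPERT switch, the name itself tells the user what
	  enabling it means.

config ACPI
	# The "(UNSUPPORTED)" suffix in the one-line prompt is the
	# redundancy debated in the thread: it lets a '/' search in
	# "make menuconfig" reveal which gating option must be
	# enabled first, at the cost of possibly drifting out of
	# sync with the "if UNSUPPORTED" dependency below.
	bool "ACPI boot support (UNSUPPORTED)" if UNSUPPORTED
	help
	  Boot Xen on platforms that describe hardware via ACPI.
```

Under the alternative hierarchy Jan suggests (EXPERT -> EXPERIMENTAL -> UNSUPPORTED), each level would default to the one above it, and kconfig itself could append the label to the prompt automatically instead of hard-coding it.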
atch is adding</span><br></blockquote></blockquote></blockquote><blockquote t=
ype=3D"cite"><blockquote type=3D"cite"><blockquote type=3D"cite"><span>"(UNS=
UPPORTED)" in the oneline prompt so that it becomes easy to match</span><br>=
</blockquote></blockquote></blockquote><blockquote type=3D"cite"><blockquote=
 type=3D"cite"><blockquote type=3D"cite"><span>it with the option you need t=
o enable.</span><br></blockquote></blockquote></blockquote><blockquote type=3D=
"cite"><blockquote type=3D"cite"><span></span><br></blockquote></blockquote>=
<blockquote type=3D"cite"><blockquote type=3D"cite"><span>There's redundancy=
 here then, which I think is in almost all cases</span><br></blockquote></bl=
ockquote><blockquote type=3D"cite"><blockquote type=3D"cite"><span>better to=
 avoid. That's first and foremost because the two places</span><br></blockqu=
ote></blockquote><blockquote type=3D"cite"><blockquote type=3D"cite"><span>c=
an go out of sync. Therefore, if the primary thing is to help</span><br></bl=
ockquote></blockquote><blockquote type=3D"cite"><blockquote type=3D"cite"><s=
pan>"make menuconfig" (which I admit I don't normally use, as it's</span><br=
></blockquote></blockquote><blockquote type=3D"cite"><blockquote type=3D"cit=
e"><span>nothing that gets invoked implicitly by the build process afaict,</=
span><br></blockquote></blockquote><blockquote type=3D"cite"><blockquote typ=
e=3D"cite"><span>i.e. one has to actively invoke it), perhaps we should enha=
nce</span><br></blockquote></blockquote><blockquote type=3D"cite"><blockquot=
e type=3D"cite"><span>kconfig to attach at least a pre-determined subset of l=
abels to</span><br></blockquote></blockquote><blockquote type=3D"cite"><bloc=
kquote type=3D"cite"><span>the prompts automatically?</span><br></blockquote=
></blockquote><blockquote type=3D"cite"><blockquote type=3D"cite"><span></sp=
an><br></blockquote></blockquote><blockquote type=3D"cite"><blockquote type=3D=
"cite"><span>And second, also in reply to what you've been saying further do=
wn,</span><br></blockquote></blockquote><blockquote type=3D"cite"><blockquot=
e type=3D"cite"><span>perhaps we would better go with a hierarchy of control=
s here, e.g.</span><br></blockquote></blockquote><blockquote type=3D"cite"><=
blockquote type=3D"cite"><span>EXPERT -&gt; EXPERIMENTAL -&gt; UNSUPPORTED?<=
/span><br></blockquote></blockquote><blockquote type=3D"cite"><span></span><=
br></blockquote><blockquote type=3D"cite"><span>Both these are good ideas wo=
rth discussing; somebody else made a similar</span><br></blockquote><blockqu=
ote type=3D"cite"><span>suggestion some time back. I was already thinking th=
is could be a great</span><br></blockquote><blockquote type=3D"cite"><span>c=
andidate for one of the first "working groups" as defined by George</span><b=
r></blockquote><blockquote type=3D"cite"><span>during the last community cal=
l because the topic is not purely</span><br></blockquote><blockquote type=3D=
"cite"><span>technical: a working group could help getting alignment and mak=
e</span><br></blockquote><blockquote type=3D"cite"><span>progress faster. We=
 can propose it to George when he is back.</span><br></blockquote><blockquot=
e type=3D"cite"><span></span><br></blockquote><blockquote type=3D"cite"><spa=
n>However, I don't think we need the working group to make progress on</span=
><br></blockquote><blockquote type=3D"cite"><span>this limited patch that on=
ly addresses the lowest hanging fruit.</span><br></blockquote><blockquote ty=
pe=3D"cite"><span></span><br></blockquote><blockquote type=3D"cite"><span>I'=
d like to suggest to make progress on this patch in its current form,</span>=
<br></blockquote><blockquote type=3D"cite"><span>and in parallel start a lon=
ger term discussion on how to do something</span><br></blockquote><blockquot=
e type=3D"cite"><span>like you suggested above.</span><br></blockquote><span=
></span><br><span>Okay, I guess I can accept this. So FAOD I'm not objecting=
 to the</span><br><span>change (with some suitable adjustments, as discussed=
), but I'm</span><br><span>then also not going to be the one to ack it. Neve=
rtheless I'd like</span><br><span>to point out that doing such a partial sol=
ution may end up adding</span><br><span>confusion rather than reducing it. <=
/span></div></blockquote><div><br></div>Seconded.<div><br><blockquote type=3D=
"cite"><div dir=3D"ltr"><span>Much depends on how exactly&nbsp;</span><span>=
consumers interpret what we hand to them.</span><br></div></blockquote><br><=
div>How have Xen consumers changed during the last 15 years? &nbsp;The UX (u=
ser experience) community uses a technique called "user stories" [0][1][2]. &=
nbsp;It may be worth writing a few sentences about the users (distro package=
rs? Xen derivatives? hypervisor developers? hosting companies? malware devel=
opers?) whose needs are addressed by proposed changes. &nbsp;One could then r=
eason about UX of Xen feature selection, before and after proposed changes.<=
/div><div><br></div><div>Rich</div><div><br></div><div>[0]&nbsp;<a href=3D"h=
ttps://uxdict.io/design-thinking-methods-user-stories-3b9467313a04">https://=
uxdict.io/design-thinking-methods-user-stories-3b9467313a04</a></div><div>[1=
]&nbsp;<a href=3D"https://uxdesign.cc/fostering-ux-on-a-devops-culture-bb927=
16e3f43">https://uxdesign.cc/fostering-ux-on-a-devops-culture-bb92716e3f43</=
a></div><div>[2]&nbsp;<a href=3D"https://about.gitlab.com/blog/2020/03/27/ho=
w-we-utilize-user-stories-as-a-collaborative-design-tool/">https://about.git=
lab.com/blog/2020/03/27/how-we-utilize-user-stories-as-a-collaborative-desig=
n-tool/</a></div><div><br></div></div></div></div></div></body></html>=
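
[Editor's note: the prompt-tagging approach debated above can be sketched
in Kconfig terms. This is a minimal illustration only; the option name,
menu text, and the assumption that a gating symbol named UNSUPPORTED
exists are hypothetical, not taken from the actual patch.]

```kconfig
# Hypothetical sketch: a feature gated on UNSUPPORTED rather than
# EXPERT, with the status repeated in the one-line prompt so that a
# '/' search in menuconfig shows the label next to the match.
config ACPI
	bool "ACPI support (UNSUPPORTED)" if UNSUPPORTED
	depends on ARM_64
	help
	  Advanced Configuration and Power Interface support.

	  This feature is currently UNSUPPORTED.
```

The redundancy concern raised above is visible here: "(UNSUPPORTED)" in
the prompt and the `if UNSUPPORTED` condition must be kept in sync by
hand unless kconfig itself learns to attach such labels automatically.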

--Apple-Mail-D9968B9E-C459-45BF-8459-7167284D8E63--


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 14:07:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 14:07:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32888.63988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgTXh-0007nH-0Z; Sat, 21 Nov 2020 14:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32888.63988; Sat, 21 Nov 2020 14:07:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgTXg-0007nA-TC; Sat, 21 Nov 2020 14:07:08 +0000
Received: by outflank-mailman (input) for mailman id 32888;
 Sat, 21 Nov 2020 14:07:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgTXf-0007n1-Hf; Sat, 21 Nov 2020 14:07:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgTXf-0002Fx-7L; Sat, 21 Nov 2020 14:07:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgTXe-0007bT-TA; Sat, 21 Nov 2020 14:07:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgTXe-0007dC-Sh; Sat, 21 Nov 2020 14:07:06 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xOqz1/+/dSE1zBk6TSGUUAgCoqPsYbaAef8arbJfFRk=; b=VidovsLT3b1fGq54Ff/KxbzRt1
	VIZSmz3QiEAt39yMpLONR22skVpzlo9S1d23qQeEQ09ER5J7CioA4qk1PPdR8fwnnVnbTUF5ZVqyI
	Ni29D7MXlXeG8q/sMYTDbVjtkgTCXm4MjCBHA+2HqxOx4uy4Hk76J2XVPLPdy1j8n3S8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156912-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156912: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7761e07c3ffd8daad6fd933ad0bb03493080e193
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 14:07:06 +0000

flight 156912 qemu-mainline real [real]
flight 156923 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156912/
http://logs.test-lab.xenproject.org/osstest/logs/156923/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu 20 guest-localmigrate/x10 fail pass in 156923-retest

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7761e07c3ffd8daad6fd933ad0bb03493080e193
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   93 days
Failing since        152659  2020-08-21 14:07:39 Z   91 days  195 attempts
Testing same since   156912  2020-11-21 01:06:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67366 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 16:02:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 16:02:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32963.64041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgVL6-0004Ux-5X; Sat, 21 Nov 2020 16:02:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32963.64041; Sat, 21 Nov 2020 16:02:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgVL6-0004Uq-21; Sat, 21 Nov 2020 16:02:16 +0000
Received: by outflank-mailman (input) for mailman id 32963;
 Sat, 21 Nov 2020 16:02:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgVL4-0004Ui-PW; Sat, 21 Nov 2020 16:02:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgVL4-00056Z-FT; Sat, 21 Nov 2020 16:02:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgVL4-0005aB-9G; Sat, 21 Nov 2020 16:02:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgVL4-0002Tk-8m; Sat, 21 Nov 2020 16:02:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgVL4-0004Ui-PW; Sat, 21 Nov 2020 16:02:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w1yZLNaHc/Bsh88luLqFPlLGZm5IfNJwvL1G7E9UggU=; b=ajfR+pyOtLOpnVQr4kiHyrYqB7
	G/zbYqxLPD6YbyRMK/fzPOCmNQulcmNST9QgJq+Ao6lesXBnUoG6oADuzK9daS/H+DQaZtiFmsJeT
	gxrjnpShih82UqzTdGXlSPS+eYqJ4i5UAi5Kf3/EUFMcuBOjtT7ZQkbwoAfN51kzreKw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgVL4-00056Z-FT; Sat, 21 Nov 2020 16:02:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgVL4-0005aB-9G; Sat, 21 Nov 2020 16:02:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgVL4-0002Tk-8m; Sat, 21 Nov 2020 16:02:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156920-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156920: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e7bd0dd26db7e56aa8ca70132d6ea916ee6f3db0
X-Osstest-Versions-That:
    ovmf=47343af30435302c087027177613412a1a83e919
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 16:02:14 +0000

flight 156920 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156920/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e7bd0dd26db7e56aa8ca70132d6ea916ee6f3db0
baseline version:
 ovmf                 47343af30435302c087027177613412a1a83e919

Last test of basis   156913  2020-11-21 01:54:44 Z    0 days
Testing same since   156920  2020-11-21 09:10:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   47343af304..e7bd0dd26d  e7bd0dd26db7e56aa8ca70132d6ea916ee6f3db0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 16:24:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 16:24:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.32981.64060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgVgP-0006pf-8J; Sat, 21 Nov 2020 16:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 32981.64060; Sat, 21 Nov 2020 16:24:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgVgP-0006pY-5D; Sat, 21 Nov 2020 16:24:17 +0000
Received: by outflank-mailman (input) for mailman id 32981;
 Sat, 21 Nov 2020 16:24:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ondl=E3=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kgVgN-0006pT-Qb
 for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 16:24:15 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 828d2389-1b39-4ece-a8a3-db04261d0f36;
 Sat, 21 Nov 2020 16:24:14 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 951F467373; Sat, 21 Nov 2020 17:24:11 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ondl=E3=lst.de=hch@srs-us1.protection.inumbo.net>)
	id 1kgVgN-0006pT-Qb
	for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 16:24:15 +0000
X-Inumbo-ID: 828d2389-1b39-4ece-a8a3-db04261d0f36
Received: from verein.lst.de (unknown [213.95.11.211])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 828d2389-1b39-4ece-a8a3-db04261d0f36;
	Sat, 21 Nov 2020 16:24:14 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
	id 951F467373; Sat, 21 Nov 2020 17:24:11 +0100 (CET)
Date: Sat, 21 Nov 2020 17:24:11 +0100
From: Christoph Hellwig <hch@lst.de>
To: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>, Christoph Hellwig <hch@lst.de>,
	Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/20] block: remove the nr_sects field in struct
 hd_struct
Message-ID: <20201121162411.GA18475@lst.de>
References: <20201118084800.2339180-1-hch@lst.de> <20201118084800.2339180-15-hch@lst.de> <20201119120525.GW1981@quack2.suse.cz> <20201120090820.GD21715@lst.de> <20201120112121.GB15537@quack2.suse.cz> <20201120153253.GA18990@lst.de> <20201120155956.GB4327@casper.infradead.org> <20201120200548.GA27360@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120200548.GA27360@quack2.suse.cz>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Nov 20, 2020 at 09:05:48PM +0100, Jan Kara wrote:
> The code is already switched to it AFAICT (the lock is really only used in
> the two places that write i_size). But the problem is that in theory two
> i_size_write() calls can race in a way that the resulting stored i_size is a
> mix of two stored sizes. Now I have a hard time imagining how this could
> happen for a block device, and if two reconfigurations of a block device
> could race like that we'd have larger problems anyway...

Now that you mention it, yes - i_size_write needs to be under i_rwsem
or an equivalent lock.  We could look into using i_rwsem for block
devices as well, but for now the spinlock seems to be doing fine.  Note
that in current mainline we only have such a lock protecting i_size of
the block_device inode, but none for the size in hd_struct.


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 16:51:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 16:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33011.64088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgW6W-0001kO-MH; Sat, 21 Nov 2020 16:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33011.64088; Sat, 21 Nov 2020 16:51:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgW6W-0001kH-Iq; Sat, 21 Nov 2020 16:51:16 +0000
Received: by outflank-mailman (input) for mailman id 33011;
 Sat, 21 Nov 2020 16:51:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KIx3=E3=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kgW6U-0001kC-Jm
 for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 16:51:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0be25e19-0a35-4f47-8b96-3bc1f2cdf15b;
 Sat, 21 Nov 2020 16:51:12 +0000 (UTC)
Received: from mail-qk1-f198.google.com (mail-qk1-f198.google.com
 [209.85.222.198]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-401-X48RMHn8N1Swh15Ci0suvg-1; Sat, 21 Nov 2020 11:51:08 -0500
Received: by mail-qk1-f198.google.com with SMTP id 202so10918044qkl.9
 for <xen-devel@lists.xenproject.org>; Sat, 21 Nov 2020 08:51:08 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id j202sm4129196qke.108.2020.11.21.08.51.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 21 Nov 2020 08:51:06 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=KIx3=E3=redhat.com=trix@srs-us1.protection.inumbo.net>)
	id 1kgW6U-0001kC-Jm
	for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 16:51:14 +0000
X-Inumbo-ID: 0be25e19-0a35-4f47-8b96-3bc1f2cdf15b
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 0be25e19-0a35-4f47-8b96-3bc1f2cdf15b;
	Sat, 21 Nov 2020 16:51:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1605977471;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:content-type:content-type;
	bh=zLH0U0S3qPuM/4/c12zDySDpu72UCHtVP0nLwv0orcI=;
	b=AFEeC4LwJVQY50Rclp0n+cpskzGMKTYLo6JEX0eKNm0GvOoPPRr0PIlzZZR4c90sUF1AFa
	wWxnbbHvzUJD9uKYKMOZDMNEn0QtuGxgpp5z9GJnrDgiQ4ysveuwkSR6FRmHHm8xJN+LCI
	+l5BMm33isMWCeCIZOiQlxjVKkD6iAY=
Received: from mail-qk1-f198.google.com (mail-qk1-f198.google.com
 [209.85.222.198]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-401-X48RMHn8N1Swh15Ci0suvg-1; Sat, 21 Nov 2020 11:51:08 -0500
X-MC-Unique: X48RMHn8N1Swh15Ci0suvg-1
Received: by mail-qk1-f198.google.com with SMTP id 202so10918044qkl.9
        for <xen-devel@lists.xenproject.org>; Sat, 21 Nov 2020 08:51:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=zLH0U0S3qPuM/4/c12zDySDpu72UCHtVP0nLwv0orcI=;
        b=YTiUCAYWNw96vHft/4JnhlK8v65oiuz/WlCgXBBp7wDLhCVTxKyMxaEeFN4n/l/cI2
         A1orddk6zLBLa9lihnmBulfZPbJGiNBrqE69Ov3KAzjIZ2nCCQEJWRNvFr9M6pCD1s97
         KMwznW4u8+d3TNGSlyWJFiKy+4nOPPHIDeORDwZRBrpR2HuBlptI9flmLVTwfJd9ML4v
         UY5IMN947OQKLz3Va+Zr9AG9XJgjoJpyewl0X9KHQN8imOp0k+bKSNP2RmOAZUa1GlLX
         HBnt3J24Uif8fTQRMD6mV+9Lx5PdOK6C4+DvD7Txr8e5apJX4BeMHvck2/N7PjZiRNlC
         pGoQ==
X-Gm-Message-State: AOAM533/V+++i+3Gv7opWjyuGnlsaGUuLA1ZuwZyfRjfA60FxVGjC4rh
	AMFfqooNO+7JK+wjsH3nD4BluPVW0qcOZi/W0o71FtaWM66UDbsDvHl6/bSapIjB8UE4YzRZdeU
	CPugkgC4OUbjd7n7tB2IYH074W04=
X-Received: by 2002:a05:620a:15ce:: with SMTP id o14mr22885262qkm.231.1605977467694;
        Sat, 21 Nov 2020 08:51:07 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwJMB7Phx6GdQ/NzrSSCIOwTBjDbbn+gugDtHjIEeXte769cdu0l51J2LUn/+h4QxN+fnLujw==
X-Received: by 2002:a05:620a:15ce:: with SMTP id o14mr22885248qkm.231.1605977467468;
        Sat, 21 Nov 2020 08:51:07 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com. [75.142.250.213])
        by smtp.gmail.com with ESMTPSA id j202sm4129196qke.108.2020.11.21.08.51.03
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Sat, 21 Nov 2020 08:51:06 -0800 (PST)
From: trix@redhat.com
To: trix@redhat.com,
	joe@perches.com,
	clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	tboot-devel@lists.sourceforge.net,
	kvm@vger.kernel.org,
	linux-crypto@vger.kernel.org,
	linux-acpi@vger.kernel.org,
	devel@acpica.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	netdev@vger.kernel.org,
	linux-media@vger.kernel.org,
	MPT-FusionLinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org,
	linux-wireless@vger.kernel.org,
	ibm-acpi-devel@lists.sourceforge.net,
	platform-driver-x86@vger.kernel.org,
	linux-usb@vger.kernel.org,
	linux-omap@vger.kernel.org,
	linux-fbdev@vger.kernel.org,
	ecryptfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	cluster-devel@redhat.com,
	linux-mtd@lists.infradead.org,
	keyrings@vger.kernel.org,
	netfilter-devel@vger.kernel.org,
	coreteam@netfilter.org,
	alsa-devel@alsa-project.org,
	bpf@vger.kernel.org,
	linux-bluetooth@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	patches@opensource.cirrus.com
Subject: [RFC] MAINTAINERS tag for cleanup robot
Date: Sat, 21 Nov 2020 08:50:58 -0800
Message-Id: <20201121165058.1644182-1-trix@redhat.com>
X-Mailer: git-send-email 2.18.4
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="US-ASCII"

A difficult part of automating commits is composing the subsystem
preamble in the commit log.  For the ongoing effort of a fixer producing
one or two fixes per release, the 'treewide:' prefix does not seem
appropriate.

It would be better if each subsystem's normal prefix were used.
Unfortunately, the normal prefix is not consistent across the tree.

So I am looking for comments on adding a new tag to the MAINTAINERS file:

	D: Commit subsystem prefix

e.g. for FPGA DFL DRIVERS:

	D: fpga: dfl:

Continuing with cleaning up clang's -Wextra-semi-stmt warnings:

A significant number of warnings are caused by function-like macros with
a trailing semicolon in their definition.  For example:

#define FOO(a) a++; <-- extra, unneeded semicolon
void bar() {
	int v = 0;
	FOO(v);
}

Clang warns at the FOO(v); expansion site.  Instead of removing the
semicolon there, the fixer removes the semicolon from the macro
definition.  After the fixer runs, the code will be:

#define FOO(a) a++
void bar() {
	int v = 0;
	FOO(v);
}

The fixer review is
https://reviews.llvm.org/D91789

A run over allyesconfig for x86_64 finds 62 issues, 5 of which are false
positives.  The false positives are caused by macros passed to other
macros and by some macro expansions that did not actually have an extra
semicolon.

This cleans up about 1,000 of the current 10,000 -Wextra-semi-stmt
warnings in linux-next.

As an update to [RFC] clang tooling cleanup: this change adds
clang-tidy-fix as a top-level target and uses it to do the cleaning.
The next iteration will run a loop of cleaners, which will mean breaking
clang-tidy-fix out into its own processing function, 'run_fixers'.

Makefile: Add toplevel target clang-tidy-fix to makefile

Calls clang-tidy with the -fix option for a set of checkers that
programmatically fixes the kernel source in place, treewide.

Signed-off-by: Tom Rix <trix@redhat.com>
---
 Makefile                               |  7 ++++---
 scripts/clang-tools/run-clang-tools.py | 20 +++++++++++++++++---
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/Makefile b/Makefile
index 47a8add4dd28..57756dbb767b 100644
--- a/Makefile
+++ b/Makefile
@@ -1567,20 +1567,21 @@ help:
 	 echo  ''
 	@echo  'Static analysers:'
 	@echo  '  checkstack      - Generate a list of stack hogs'
 	@echo  '  versioncheck    - Sanity check on version.h usage'
 	@echo  '  includecheck    - Check for duplicate included header files'
 	@echo  '  export_report   - List the usages of all exported symbols'
 	@echo  '  headerdep       - Detect inclusion cycles in headers'
 	@echo  '  coccicheck      - Check with Coccinelle'
 	@echo  '  clang-analyzer  - Check with clang static analyzer'
 	@echo  '  clang-tidy      - Check with clang-tidy'
+	@echo  '  clang-tidy-fix  - Check and fix with clang-tidy'
 	@echo  ''
 	@echo  'Tools:'
 	@echo  '  nsdeps          - Generate missing symbol namespace dependencies'
 	@echo  ''
 	@echo  'Kernel selftest:'
 	@echo  '  kselftest         - Build and run kernel selftest'
 	@echo  '                      Build, install, and boot kernel before'
 	@echo  '                      running kselftest on it'
 	@echo  '                      Run as root for full coverage'
 	@echo  '  kselftest-all     - Build kernel selftest'
@@ -1842,30 +1843,30 @@ nsdeps: modules
 quiet_cmd_gen_compile_commands = GEN     $@
       cmd_gen_compile_commands = $(PYTHON3) $< -a $(AR) -o $@ $(filter-out $<, $(real-prereqs))
 
 $(extmod-prefix)compile_commands.json: scripts/clang-tools/gen_compile_commands.py \
 	$(if $(KBUILD_EXTMOD),,$(KBUILD_VMLINUX_OBJS) $(KBUILD_VMLINUX_LIBS)) \
 	$(if $(CONFIG_MODULES), $(MODORDER)) FORCE
 	$(call if_changed,gen_compile_commands)
 
 targets += $(extmod-prefix)compile_commands.json
 
-PHONY += clang-tidy clang-analyzer
+PHONY += clang-tidy-fix clang-tidy clang-analyzer
 
 ifdef CONFIG_CC_IS_CLANG
 quiet_cmd_clang_tools = CHECK   $<
       cmd_clang_tools = $(PYTHON3) $(srctree)/scripts/clang-tools/run-clang-tools.py $@ $<
 
-clang-tidy clang-analyzer: $(extmod-prefix)compile_commands.json
+clang-tidy-fix clang-tidy clang-analyzer: $(extmod-prefix)compile_commands.json
 	$(call cmd,clang_tools)
 else
-clang-tidy clang-analyzer:
+clang-tidy-fix clang-tidy clang-analyzer:
 	@echo "$@ requires CC=clang" >&2
 	@false
 endif
 
 # Scripts to check various things for consistency
 # ---------------------------------------------------------------------------
 
 PHONY += includecheck versioncheck coccicheck export_report
 
 includecheck:
diff --git a/scripts/clang-tools/run-clang-tools.py b/scripts/clang-tools/run-clang-tools.py
index fa7655c7cec0..c177ca822c56 100755
--- a/scripts/clang-tools/run-clang-tools.py
+++ b/scripts/clang-tools/run-clang-tools.py
@@ -22,43 +22,57 @@ def parse_arguments():
     Returns:
         args: Dict of parsed args
         Has keys: [path, type]
     """
     usage = """Run clang-tidy or the clang static-analyzer on a
         compilation database."""
     parser = argparse.ArgumentParser(description=usage)
 
     type_help = "Type of analysis to be performed"
     parser.add_argument("type",
-                        choices=["clang-tidy", "clang-analyzer"],
+                        choices=["clang-tidy-fix", "clang-tidy", "clang-analyzer"],
                         help=type_help)
     path_help = "Path to the compilation database to parse"
     parser.add_argument("path", type=str, help=path_help)
 
     return parser.parse_args()
 
 
 def init(l, a):
     global lock
     global args
     lock = l
     args = a
 
 
 def run_analysis(entry):
     # Disable all checks, then re-enable the ones we want
     checks = "-checks=-*,"
-    if args.type == "clang-tidy":
+    fix = ""
+    header_filter = ""
+    if args.type == "clang-tidy-fix":
+        checks += "linuxkernel-macro-trailing-semi"
+        #
+        # Fix this
+        # #define M(a) a++; <-- clang-tidy fixes the problem here
+        # int f() {
+        #   int v = 0;
+        #   M(v);  <-- clang reports problem here
+        #   return v;
+        # }
+        fix += "-fix"
+        header_filter += "-header-filter=.*"
+    elif args.type == "clang-tidy":
         checks += "linuxkernel-*"
     else:
         checks += "clang-analyzer-*"
-    p = subprocess.run(["clang-tidy", "-p", args.path, checks, entry["file"]],
+    p = subprocess.run(["clang-tidy", "-p", args.path, checks, header_filter, fix, entry["file"]],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        cwd=entry["directory"])
     with lock:
         sys.stderr.buffer.write(p.stdout)
 
 
 def main():
     args = parse_arguments()
 
-- 
2.18.4
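The type dispatch added by the patch can be summarized in a standalone sketch. Note that build_cmd is a hypothetical helper, not part of run-clang-tools.py; the check names and flags mirror the diff above, and the empty-argument filtering reflects the fact that clang-tidy should not be passed "" options:

```python
# Standalone sketch of the analysis-type dispatch in the patch above.
def build_cmd(analysis_type, db_path, source_file):
    checks = "-checks=-*,"  # disable all checks, then re-enable some
    extra = []
    if analysis_type == "clang-tidy-fix":
        checks += "linuxkernel-macro-trailing-semi"
        # -fix edits the macro definition, so headers must be writable
        # targets for fixes; -header-filter=.* allows that.
        extra = ["-header-filter=.*", "-fix"]
    elif analysis_type == "clang-tidy":
        checks += "linuxkernel-*"
    else:  # clang-analyzer
        checks += "clang-analyzer-*"
    # Only non-empty options are appended, so clang-tidy never sees "".
    return ["clang-tidy", "-p", db_path, checks] + extra + [source_file]
```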



From xen-devel-bounces@lists.xenproject.org Sat Nov 21 17:10:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 17:10:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33021.64100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgWOw-0003xF-8o; Sat, 21 Nov 2020 17:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33021.64100; Sat, 21 Nov 2020 17:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgWOw-0003x8-5n; Sat, 21 Nov 2020 17:10:18 +0000
Received: by outflank-mailman (input) for mailman id 33021;
 Sat, 21 Nov 2020 17:10:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pLDK=E3=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kgWOu-0003x3-LL
 for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 17:10:16 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.103])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8584925-c3f6-4189-aac3-6b588fe83b48;
 Sat, 21 Nov 2020 17:10:15 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay03.hostedemail.com (Postfix) with ESMTP id C09E2837F24A;
 Sat, 21 Nov 2020 17:10:14 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf05.hostedemail.com (Postfix) with ESMTPA;
 Sat, 21 Nov 2020 17:10:09 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pLDK=E3=perches.com=joe@srs-us1.protection.inumbo.net>)
	id 1kgWOu-0003x3-LL
	for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 17:10:16 +0000
X-Inumbo-ID: a8584925-c3f6-4189-aac3-6b588fe83b48
Received: from smtprelay.hostedemail.com (unknown [216.40.44.103])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a8584925-c3f6-4189-aac3-6b588fe83b48;
	Sat, 21 Nov 2020 17:10:15 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net [216.40.38.60])
	by smtprelay03.hostedemail.com (Postfix) with ESMTP id C09E2837F24A;
	Sat, 21 Nov 2020 17:10:14 +0000 (UTC)
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:800:967:973:982:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1542:1593:1594:1711:1730:1747:1777:1792:1801:2198:2199:2393:2525:2560:2563:2682:2685:2828:2859:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3354:3622:3865:3866:3867:3868:3870:3871:3872:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4043:4321:4605:5007:6117:6119:6742:6743:7809:7875:8660:9010:9025:10004:10400:10848:11232:11473:11658:11783:11914:12043:12295:12297:12555:12663:12679:12740:12895:12986:13148:13230:13439:13845:13894:14181:14659:14721:21080:21324:21451:21627:21811:21939:21987:30012:30054:30070:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:2,LUA_SUMMARY:none
X-HE-Tag: crook72_110ffeb27355
X-Filterd-Recvd-Size: 4090
Received: from XPS-9350.home (unknown [47.151.128.180])
	(Authenticated sender: joe@perches.com)
	by omf05.hostedemail.com (Postfix) with ESMTPA;
	Sat, 21 Nov 2020 17:10:09 +0000 (UTC)
Message-ID: <2105f0c05e9eae8bee8e17dcc5314474b3c0bc73.camel@perches.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: Joe Perches <joe@perches.com>
To: trix@redhat.com, clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net, 
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org,  devel@acpica.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sat, 21 Nov 2020 09:10:08 -0800
In-Reply-To: <20201121165058.1644182-1-trix@redhat.com>
References: <20201121165058.1644182-1-trix@redhat.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
> A difficult part of automating commits is composing the subsystem
> preamble in the commit log.  For the ongoing effort of a fixer producing
> one or two fixes a release the use of 'treewide:' does not seem appropriate.
> 
> It would be better if the normal prefix was used.  Unfortunately normal is
> not consistent across the tree.
> 
> So I am looking for comments for adding a new tag to the MAINTAINERS file
> 
> 	D: Commit subsystem prefix
> 
> ex/ for FPGA DFL DRIVERS
> 
> 	D: fpga: dfl:

I'm all for it.  Good luck with the effort.  It's not completely trivial.

>From a decade ago:

https://lore.kernel.org/lkml/1289919077.28741.50.camel@Joe-Laptop/

(and that thread started with extra semicolon patches too)

> Continuing with cleaning up clang's -Wextra-semi-stmt

> diff --git a/Makefile b/Makefile
[]
> @@ -1567,20 +1567,21 @@ help:
> 	 echo  ''
> 	@echo  'Static analysers:'
> 	@echo  '  checkstack      - Generate a list of stack hogs'
> 	@echo  '  versioncheck    - Sanity check on version.h usage'
> 	@echo  '  includecheck    - Check for duplicate included header files'
> 	@echo  '  export_report   - List the usages of all exported symbols'
> 	@echo  '  headerdep       - Detect inclusion cycles in headers'
> 	@echo  '  coccicheck      - Check with Coccinelle'
> 	@echo  '  clang-analyzer  - Check with clang static analyzer'
> 	@echo  '  clang-tidy      - Check with clang-tidy'
> +	@echo  '  clang-tidy-fix  - Check and fix with clang-tidy'

A pity the ordering of the code below isn't the same as the above.

> -PHONY += clang-tidy clang-analyzer
> +PHONY += clang-tidy-fix clang-tidy clang-analyzer
[]
> -clang-tidy clang-analyzer: $(extmod-prefix)compile_commands.json
> +clang-tidy-fix clang-tidy clang-analyzer: $(extmod-prefix)compile_commands.json
> 	$(call cmd,clang_tools)
> else
> -clang-tidy clang-analyzer:
> +clang-tidy-fix clang-tidy clang-analyzer:

[]

> diff --git a/scripts/clang-tools/run-clang-tools.py b/scripts/clang-tools/run-clang-tools.py
[]
> @@ -22,43 +22,57 @@ def parse_arguments():
[]
> parser.add_argument("type",
> -                        choices=["clang-tidy", "clang-analyzer"],
> +                        choices=["clang-tidy-fix", "clang-tidy", "clang-analyzer"],

etc...



From xen-devel-bounces@lists.xenproject.org Sat Nov 21 17:19:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 17:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33027.64112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgWXT-0004Gj-4h; Sat, 21 Nov 2020 17:19:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33027.64112; Sat, 21 Nov 2020 17:19:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgWXT-0004Gc-0w; Sat, 21 Nov 2020 17:19:07 +0000
Received: by outflank-mailman (input) for mailman id 33027;
 Sat, 21 Nov 2020 17:19:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3X3=E3=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgWXQ-0004GX-Je
 for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 17:19:05 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a3042af-d186-4004-ba3f-0e15793b4fc3;
 Sat, 21 Nov 2020 17:19:01 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 2C7E2128048D;
 Sat, 21 Nov 2020 09:19:00 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id dA_Y3gfv-HlR; Sat, 21 Nov 2020 09:19:00 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 584A71280481;
 Sat, 21 Nov 2020 09:18:58 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=v3X3=E3=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
	id 1kgWXQ-0004GX-Je
	for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 17:19:05 +0000
X-Inumbo-ID: 7a3042af-d186-4004-ba3f-0e15793b4fc3
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7a3042af-d186-4004-ba3f-0e15793b4fc3;
	Sat, 21 Nov 2020 17:19:01 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by bedivere.hansenpartnership.com (Postfix) with ESMTP id 2C7E2128048D;
	Sat, 21 Nov 2020 09:19:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1605979140;
	bh=oKjwoQQYqAGHYIcGa87tH3kfRh6eNDZ11DRWs/5n6Nc=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=Lvt0xf0A2UjzBqGR2epCNoE7DN6YV9pUrVXD+f22BVAeiOW2KxZsGCImkMcXBrgh/
	 zD0zBdigF+USoIPaRfSUxU4T/o3yBTwQz1ZuRUKYioWvma2ORezv7XJ+/UVZk14HDg
	 znn3A0Y1A4QkkRvYDhfOKnhgBoH92sfIP7bAjSz4=
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
	by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id dA_Y3gfv-HlR; Sat, 21 Nov 2020 09:19:00 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown [IPv6:2601:600:8280:66d1::527])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 584A71280481;
	Sat, 21 Nov 2020 09:18:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1605979139;
	bh=oKjwoQQYqAGHYIcGa87tH3kfRh6eNDZ11DRWs/5n6Nc=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=qZT/78tSZnguNV1yoYzSSmvqmDGgzNIuIpg2PbOws0zqwE5yu2CoK/DoUP3fxAE7/
	 cwr3zThASX8R91ADl4Ew6X2dNx0rQAVJjUjWHCFXFVZg8s5iaP8BFiZuwsxl12vuws
	 rCw1QQ3k1zT/j95+Y6p1vgihiOvqC5dVUCKPWDl0=
Message-ID: <5843ef910b0e86c00d9c0143dec20f93823b016b.camel@HansenPartnership.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: trix@redhat.com, joe@perches.com, clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net, 
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org,  devel@acpica.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sat, 21 Nov 2020 09:18:57 -0800
In-Reply-To: <20201121165058.1644182-1-trix@redhat.com>
References: <20201121165058.1644182-1-trix@redhat.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
> A difficult part of automating commits is composing the subsystem
> preamble in the commit log.  For the ongoing effort of a fixer
> producing
> one or two fixes a release the use of 'treewide:' does not seem
> appropriate.
> 
> It would be better if the normal prefix was used.  Unfortunately
> normal is
> not consistent across the tree.
> 
> 
> 	D: Commit subsystem prefix
> 
> ex/ for FPGA DFL DRIVERS
> 
> 	D: fpga: dfl:
> 

I've got to bet this is going to cause more issues than it solves. 
SCSI uses scsi: <driver>: for drivers, but not every driver has a
MAINTAINERS entry.  We use either scsi: or scsi: core: for mid-layer
things, but we're not consistent.  Block uses blk-<something>: for all
of its stuff, but almost no <something>s have a MAINTAINERS entry.  So
the next thing you're going to cause is an explosion of suggested
MAINTAINERS entries.

Has anyone actually complained about treewide:?

James




From xen-devel-bounces@lists.xenproject.org Sat Nov 21 17:31:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 17:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33034.64123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgWje-0006Lz-4W; Sat, 21 Nov 2020 17:31:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33034.64123; Sat, 21 Nov 2020 17:31:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgWje-0006Ls-1X; Sat, 21 Nov 2020 17:31:42 +0000
Received: by outflank-mailman (input) for mailman id 33034;
 Sat, 21 Nov 2020 17:31:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgWjc-0006Lk-MD; Sat, 21 Nov 2020 17:31:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgWjc-0000Mr-Ev; Sat, 21 Nov 2020 17:31:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgWjc-0000lM-4Z; Sat, 21 Nov 2020 17:31:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgWjc-00053v-46; Sat, 21 Nov 2020 17:31:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgWjc-0006Lk-MD; Sat, 21 Nov 2020 17:31:40 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Map/XfDlffUDAuuIGW++uMTsxlzY7JihgdHCQP1GneA=; b=wmCVy3ARCq7kaZVk6/oYo+/hv5
	Jh5KiksyZ39u8Mj+g4Y82Y5blH/7ZePI3ow4oVN/90uHuEfEK6yknBPs55ukvviufU2cDtxZ0eDPv
	hsGcOBtCsV5YuJCLgp706zywlgaTEa1AxZI0ozbE8B+YWBnB5LHSBQOE4go09h8UPGvs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgWjc-0000Mr-Ev; Sat, 21 Nov 2020 17:31:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgWjc-0000lM-4Z; Sat, 21 Nov 2020 17:31:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgWjc-00053v-46; Sat, 21 Nov 2020 17:31:40 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156918-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156918: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-raw:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
X-Osstest-Versions-That:
    xen=846d22d54f24f336fb80d052338e0cd030d54fee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 17:31:40 +0000

flight 156918 xen-unstable real [real]
flight 156927 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156918/
http://logs.test-lab.xenproject.org/osstest/logs/156927/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-raw    19 guest-localmigrate/x10 fail pass in 156927-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156905
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156905
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156905
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156905
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156905
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156905
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156905
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156905
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156905
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156905
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156905
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad
baseline version:
 xen                  846d22d54f24f336fb80d052338e0cd030d54fee

Last test of basis   156905  2020-11-20 19:39:39 Z    0 days
Testing same since   156918  2020-11-21 07:02:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   846d22d54f..b659a5cebd  b659a5cebd611dbe698e63c03485b5fe8cd964ad -> master


From xen-devel-bounces@lists.xenproject.org Sat Nov 21 18:02:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 18:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33048.64143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgXDO-0001KY-Pv; Sat, 21 Nov 2020 18:02:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33048.64143; Sat, 21 Nov 2020 18:02:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgXDO-0001KR-MW; Sat, 21 Nov 2020 18:02:26 +0000
Received: by outflank-mailman (input) for mailman id 33048;
 Sat, 21 Nov 2020 18:02:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pLDK=E3=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kgXDO-0001KM-1e
 for xen-devel@lists.xenproject.org; Sat, 21 Nov 2020 18:02:26 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a11ab67-b4ef-4a0f-b8c6-cef1181c2685;
 Sat, 21 Nov 2020 18:02:24 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay07.hostedemail.com (Postfix) with ESMTP id AF7D7181D3025;
 Sat, 21 Nov 2020 18:02:23 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf04.hostedemail.com (Postfix) with ESMTPA;
 Sat, 21 Nov 2020 18:02:18 +0000 (UTC)
X-Inumbo-ID: 5a11ab67-b4ef-4a0f-b8c6-cef1181c2685
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:800:960:967:973:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1542:1593:1594:1711:1730:1747:1777:1792:2393:2525:2560:2563:2682:2685:2828:2859:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3354:3622:3865:3866:3867:3868:3870:3871:3872:3873:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4321:5007:6117:6119:6742:6743:7809:7903:9025:10004:10400:10848:11027:11232:11658:11914:12043:12297:12663:12679:12740:12760:12895:13161:13229:13439:13845:14096:14097:14181:14659:14721:21080:21451:21627:21790:21987:30012:30054:30070:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: uncle36_3402e8c27356
X-Filterd-Recvd-Size: 3937
Message-ID: <f7643c9cb0a896f3ead65e86084b7c143e21ef43.camel@perches.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: Joe Perches <joe@perches.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>,
 trix@redhat.com,  clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net, 
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org,  devel@acpica.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sat, 21 Nov 2020 10:02:17 -0800
In-Reply-To: <5843ef910b0e86c00d9c0143dec20f93823b016b.camel@HansenPartnership.com>
References: <20201121165058.1644182-1-trix@redhat.com>
	 <5843ef910b0e86c00d9c0143dec20f93823b016b.camel@HansenPartnership.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sat, 2020-11-21 at 09:18 -0800, James Bottomley wrote:
> On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
> > A difficult part of automating commits is composing the subsystem
> > preamble in the commit log.  For the ongoing effort of a fixer
> > producing one or two fixes a release, the use of 'treewide:' does
> > not seem appropriate.
> > 
> > It would be better if the normal prefix was used.  Unfortunately
> > normal is not consistent across the tree.
> > 
> > 	D: Commit subsystem prefix
> > 
> > ex/ for FPGA DFL DRIVERS
> > 
> > 	D: fpga: dfl:
> 
> I've got to bet this is going to cause more issues than it solves.
> SCSI uses scsi: <driver>: for drivers but not every driver has a
> MAINTAINERS entry.  We use either scsi: or scsi: core: for mid-layer
> things, but we're not consistent.  Block uses blk-<something>: for all
> of its stuff but almost no <something>s have a MAINTAINERS entry.  So
> the next thing you're going to cause is an explosion of suggested
> MAINTAINERS entries.

And some changes require simultaneous updates across
multiple subsystems.

> Has anyone actually complained about treewide:?

It depends on what you mean by treewide:

If a treewide: patch is applied by some "higher level" maintainer,
then generally, no.

If the treewide patch is also cc'd to many individual maintainers,
then yes, many many times.

Mostly because, in their view, the patches cause churn, or because
the changes are not specific to their subsystem.

The treewide patch is sometimes dropped, sometimes broken up, and
generally not applied completely.

What would be useful in many cases like this is to compile the tree
before and after applying the treewide patch and verify that the
object code shows no logic change.

Unfortunately, gcc does not guarantee deterministic compilation, so
this isn't feasible, at least with gcc.  Does clang guarantee it?

I'm not sure it's possible:
https://blog.llvm.org/2019/11/deterministic-builds-with-clang-and-lld.html




From xen-devel-bounces@lists.xenproject.org Sat Nov 21 18:56:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 21 Nov 2020 18:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33066.64161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgY3c-0006n9-UI; Sat, 21 Nov 2020 18:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33066.64161; Sat, 21 Nov 2020 18:56:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgY3c-0006n2-Q0; Sat, 21 Nov 2020 18:56:24 +0000
Received: by outflank-mailman (input) for mailman id 33066;
 Sat, 21 Nov 2020 18:56:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgY3b-0006mu-Ab; Sat, 21 Nov 2020 18:56:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgY3b-00026l-1P; Sat, 21 Nov 2020 18:56:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgY3a-00041V-PB; Sat, 21 Nov 2020 18:56:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgY3a-0004ll-Ol; Sat, 21 Nov 2020 18:56:22 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6LeQao2g1+sl2xLpLNsnUDAGIs10qjm+68faMuBaZi0=; b=3iCjDmUcsH9mvfh5NdkC9OdOk1
	tmkL4S/tnWat3t7M+w4VKMVWhSgkfVocY8a7HvlxOSFdCjUal/HD60nP9tYbNlAcgdV17tjExXY9K
	3r3czYNnIlxDAgQORksbHpCXwG8aJMIz5qpFHQPbHp5l9mtdFsZN1RtAePCAcyBOiIcU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156919-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156919: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=27bba9c532a8d21050b94224ffd310ad0058c353
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 21 Nov 2020 18:56:22 +0000

flight 156919 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156919/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                27bba9c532a8d21050b94224ffd310ad0058c353
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  112 days
Failing since        152366  2020-08-01 20:49:34 Z  111 days  187 attempts
Testing same since   156919  2020-11-21 08:28:41 Z    0 days    1 attempts

------------------------------------------------------------
3564 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 681485 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 01:21:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 01:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33121.64198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kge4D-0002aJ-Ti; Sun, 22 Nov 2020 01:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33121.64198; Sun, 22 Nov 2020 01:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kge4D-0002aB-OC; Sun, 22 Nov 2020 01:21:25 +0000
Received: by outflank-mailman (input) for mailman id 33121;
 Sun, 22 Nov 2020 01:21:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kge4C-0002a3-O0; Sun, 22 Nov 2020 01:21:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kge4C-0006Dc-G9; Sun, 22 Nov 2020 01:21:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kge4C-00039T-65; Sun, 22 Nov 2020 01:21:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kge4C-0000CE-5Y; Sun, 22 Nov 2020 01:21:24 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7XkBDBzMPftleL+CFwCCfzfkjFM5WqmKrh4L6aDp0jc=; b=LEl+hExG/PKYpLHn+dUhssO1o3
	RNcQB+UF/qk8+Yh8Qd3Ctc7vEm8yNNpsWshl9abjNiSDf24ZTjYDcnWgxjgjQtf36hOnWVKhT6uQn
	HpU/wKjBkLZGS+7E25OUh1OzqPeYhWoHVuapkayKU/3y092aj89FmaiuDKswLCiXqmpQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156925-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156925: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3a232cccd2445e5d9e607a65a78cdbc33ff8a0f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 01:21:24 +0000

flight 156925 qemu-mainline real [real]
flight 156932 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156925/
http://logs.test-lab.xenproject.org/osstest/logs/156932/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e3a232cccd2445e5d9e607a65a78cdbc33ff8a0f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   93 days
Failing since        152659  2020-08-21 14:07:39 Z   92 days  196 attempts
Testing same since   156925  2020-11-21 14:09:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67422 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 02:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 02:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33137.64219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgfUs-00047s-2C; Sun, 22 Nov 2020 02:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33137.64219; Sun, 22 Nov 2020 02:53:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgfUr-00047l-UP; Sun, 22 Nov 2020 02:53:01 +0000
Received: by outflank-mailman (input) for mailman id 33137;
 Sun, 22 Nov 2020 02:53:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgfUq-00047d-5Q; Sun, 22 Nov 2020 02:53:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgfUp-00007R-Rt; Sun, 22 Nov 2020 02:52:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgfUp-0005vI-CP; Sun, 22 Nov 2020 02:52:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgfUp-0006Qd-Bv; Sun, 22 Nov 2020 02:52:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GnSZFeysHMVo0ZN4RptM3PujOXtjPmZHCYr8oiQJBeY=; b=trYLyUMdF0YgppIQ6aScCsLLlO
	A8oiGnVOfcLDJJ8ehFcrSiH0BpBsVcFNDyhf7UVq9OoGnTQP3V1av6Vhv2Ab25WBRwFCrJPDczHbz
	M9u0hrHsUvMqix/Z50t4b7fzQ1D61+RGaBpaJXFE2CQ1j/VWICJtRpB7yajXPKfRooxo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156929: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a349e4c659609fd20e4beea89e5c4a4038e33a95
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 02:52:59 +0000

flight 156929 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156929/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a349e4c659609fd20e4beea89e5c4a4038e33a95
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  113 days
Failing since        152366  2020-08-01 20:49:34 Z  112 days  188 attempts
Testing same since   156929  2020-11-21 19:09:40 Z    0 days    1 attempts

------------------------------------------------------------
3565 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 681922 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 03:23:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 03:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33148.64234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgfyZ-0007S7-Fa; Sun, 22 Nov 2020 03:23:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33148.64234; Sun, 22 Nov 2020 03:23:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgfyZ-0007S0-Cf; Sun, 22 Nov 2020 03:23:43 +0000
Received: by outflank-mailman (input) for mailman id 33148;
 Sun, 22 Nov 2020 03:23:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9StC=E4=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kgfyV-0007RH-HJ
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 03:23:42 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54e51dfc-e980-4aeb-87f2-886d8d9db842;
 Sun, 22 Nov 2020 03:23:35 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kgfxw-0001fc-2G; Sun, 22 Nov 2020 03:23:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=t7P1qjxLtvKyLv5lupmc5Zc5j0L3lqaoBXMlGIL9vWk=; b=BCb+FFzpwYCsAL5/GuZs0qQqU7
	2wigu715rAloO1MXfHFDjsI/DCCkjrJ0RLz4Myz+wD98+zNWw6243kPWNzvqzMn6Tn8JVpmOuQv+x
	7EKe3xartwVOhq4ran1ZhW92oaIukRN20h1OI3pPMusUrtae1AY4TqDUCPUPtiDgxz6eQOoVcrvcz
	Ara6lCihfF7KrYMxKPcv522GOw1XE7kmR9pLBWd69WCJIf1380JdPIessAMMrwuE8sqCmQUL7YEHL
	BXN0W+EBw/mKx6JrVL7roopk6K2PnHDj+T1EiV35semdke1klNMJQXZf1Gvzraa/TLsC310jvDxAq
	zJ5cWKdw==;
Date: Sun, 22 Nov 2020 03:23:04 +0000
From: Matthew Wilcox <willy@infradead.org>
To: trix@redhat.com
Cc: joe@perches.com, clang-built-linux@googlegroups.com,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net,
	kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-acpi@vger.kernel.org, devel@acpica.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org,
	linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org,
	ibm-acpi-devel@lists.sourceforge.net,
	platform-driver-x86@vger.kernel.org, linux-usb@vger.kernel.org,
	linux-omap@vger.kernel.org, linux-fbdev@vger.kernel.org,
	ecryptfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	cluster-devel@redhat.com, linux-mtd@lists.infradead.org,
	keyrings@vger.kernel.org, netfilter-devel@vger.kernel.org,
	coreteam@netfilter.org, alsa-devel@alsa-project.org,
	bpf@vger.kernel.org, linux-bluetooth@vger.kernel.org,
	linux-nfs@vger.kernel.org, patches@opensource.cirrus.com
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
Message-ID: <20201122032304.GE4327@casper.infradead.org>
References: <20201121165058.1644182-1-trix@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201121165058.1644182-1-trix@redhat.com>

On Sat, Nov 21, 2020 at 08:50:58AM -0800, trix@redhat.com wrote:
> The fixer review is
> https://reviews.llvm.org/D91789
> 
> A run over allyesconfig for x86_64 finds 62 issues, 5 are false positives.
> The false positives are caused by macros passed to other macros and by
> some macro expansions that did not have an extra semicolon.
> 
> This cleans up about 1,000 of the current 10,000 -Wextra-semi-stmt
> warnings in linux-next.

Are any of them not false-positives?  It's all very well to enable
stricter warnings, but if they don't fix any bugs, they're just churn.


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 06:56:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 06:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33169.64252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgjI0-0004vt-8q; Sun, 22 Nov 2020 06:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33169.64252; Sun, 22 Nov 2020 06:56:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgjI0-0004vm-5W; Sun, 22 Nov 2020 06:56:00 +0000
Received: by outflank-mailman (input) for mailman id 33169;
 Sun, 22 Nov 2020 06:55:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JHnk=E4=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kgjHy-0004vh-S0
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 06:55:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0052d6bb-2154-4a33-add4-31ff9b8c8229;
 Sun, 22 Nov 2020 06:55:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9C9CAC24;
 Sun, 22 Nov 2020 06:55:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606028156; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iPGIycc5SreX1tNqAnk1CrbGSvaixY401jSnfZ0OcO8=;
	b=YhoVc6yRJA7Hyd/6qRQ2RCeZ+gNklZ2SO9S+Pxnrycp3TpHrudIFIdz8h9htqfw1z0o0Da
	/WPtpBMxHLf1kadp09NCbwpwD7z0+gmXzGkyoeLkegl8dV6bF2bvdBtEW0aCpuIwNhe5kI
	02OXq2CHZxqbryAx1CzDsfpcPuui+pg=
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
To: Peter Zijlstra <peterz@infradead.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 luto@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
Date: Sun, 22 Nov 2020 07:55:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201120115943.GD3021@hirez.programming.kicks-ass.net>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="518rpEih7o1nPFHrAWjcWEEq1OxhTURnl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--518rpEih7o1nPFHrAWjcWEEq1OxhTURnl
Content-Type: multipart/mixed; boundary="jQq1T1YouRN2JZiH7iLQxRXnrey8Ohlrr";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 luto@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
In-Reply-To: <20201120115943.GD3021@hirez.programming.kicks-ass.net>

--jQq1T1YouRN2JZiH7iLQxRXnrey8Ohlrr
Content-Type: multipart/mixed;
 boundary="------------E1F431A6017EB6D5E463C990"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E1F431A6017EB6D5E463C990
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.11.20 12:59, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
>> +static __always_inline void arch_local_irq_restore(unsigned long flags)
>> +{
>> +	if (!arch_irqs_disabled_flags(flags))
>> +		arch_local_irq_enable();
>> +}
> 
> If someone were to write horrible code like:
> 
> 	local_irq_disable();
> 	local_irq_save(flags);
> 	local_irq_enable();
> 	local_irq_restore(flags);
> 
> we'd be up some creek without a paddle... now I don't _think_ we have
> genius code like that, but I'd feel safer if we can haz an assertion in
> there somewhere...
> 
> Maybe something like:
> 
> #ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
> 	WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
> #endif
> 
> At the end?

I'd like to, but using WARN_ON_ONCE() in include/asm/irqflags.h sounds
like a perfect recipe for include dependency hell.

We could use a plain asm("ud2") instead.


Juergen

--------------E1F431A6017EB6D5E463C990--

--jQq1T1YouRN2JZiH7iLQxRXnrey8Ohlrr--

--518rpEih7o1nPFHrAWjcWEEq1OxhTURnl--


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 07:11:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 07:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33176.64264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgjXD-0007A1-Ne; Sun, 22 Nov 2020 07:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33176.64264; Sun, 22 Nov 2020 07:11:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgjXD-00079u-Ki; Sun, 22 Nov 2020 07:11:43 +0000
Received: by outflank-mailman (input) for mailman id 33176;
 Sun, 22 Nov 2020 07:11:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgjXC-00079m-Tc; Sun, 22 Nov 2020 07:11:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgjXC-0005wo-Mm; Sun, 22 Nov 2020 07:11:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgjXC-0003Da-Dt; Sun, 22 Nov 2020 07:11:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgjXC-00082s-DM; Sun, 22 Nov 2020 07:11:42 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SvxBl60a5mkaTckq4E/L86E3sj4FOi9Qr5ASP8dd9Lk=; b=xoo8/a4fPUtzr7ZHBPzLm79bay
	gG8tYJsaoToKG7o/Tf8fn4vjPvkOjJQJx7tuswgyVGivzJ+ftjbti/0wb6b+1KsAAuFZquq+3Mlpk
	ZZY7OpcLuHUFQBZfZrCJ7PwkZmePtTjxNxqmtlM7sDKPLZl+WctHX4tkZxXILoafdWK4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156938-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156938: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=fd674c09688cedea07eb7c033cbb1eed8ff287f6
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 07:11:42 +0000

flight 156938 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156938/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              fd674c09688cedea07eb7c033cbb1eed8ff287f6
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  135 days
Failing since        151818  2020-07-11 04:18:52 Z  134 days  129 attempts
Testing same since   156915  2020-11-21 04:19:11 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28333 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 09:47:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 09:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33243.64285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kglxw-0006kh-Me; Sun, 22 Nov 2020 09:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33243.64285; Sun, 22 Nov 2020 09:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kglxw-0006ka-Iu; Sun, 22 Nov 2020 09:47:28 +0000
Received: by outflank-mailman (input) for mailman id 33243;
 Sun, 22 Nov 2020 09:47:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kglxv-0006kS-9t; Sun, 22 Nov 2020 09:47:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kglxu-0001D1-VD; Sun, 22 Nov 2020 09:47:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kglxu-0002er-Kq; Sun, 22 Nov 2020 09:47:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kglxu-0008Nz-KL; Sun, 22 Nov 2020 09:47:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kglxv-0006kS-9t; Sun, 22 Nov 2020 09:47:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Amkfqtll0X0YJ6lyampDoyMrC35f7F3Z1jXSiXVvwX4=; b=XrLc3Pes9qpCNHM42z2zyD+PQR
	VL8M+QwOSiYvHihbrTIQBzeEZM3X+QJyX2UKzfUPfSJ6qswOkVmT+WxoJArJKEmZ134UMBOXU8U4J
	V5pzvqKPCzJLSFE+a8Zc0N1uB0+qVHhRoHtmhxqIAMO3Eso9oi5Wj8CAmtZFxzdUonEA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kglxu-0001D1-VD; Sun, 22 Nov 2020 09:47:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kglxu-0002er-Kq; Sun, 22 Nov 2020 09:47:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kglxu-0008Nz-KL; Sun, 22 Nov 2020 09:47:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156941-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156941: all pass - PUSHED
X-Osstest-Versions-This:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
X-Osstest-Versions-That:
    xen=5505f5f8e7e805365cfe70b6a4af6115940bb749
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 09:47:26 +0000

flight 156941 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156941/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad
baseline version:
 xen                  5505f5f8e7e805365cfe70b6a4af6115940bb749

Last test of basis   156811  2020-11-15 09:18:26 Z    7 days
Testing same since   156941  2020-11-22 09:18:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Doug Goldstein <cardoe@cardoe.com>
  Edwin Török <edvin.torok@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Michal Orzel <michal.orzel@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5505f5f8e7..b659a5cebd  b659a5cebd611dbe698e63c03485b5fe8cd964ad -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 13:20:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 13:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33288.64306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgpHu-00042X-A4; Sun, 22 Nov 2020 13:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33288.64306; Sun, 22 Nov 2020 13:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgpHu-00042Q-6H; Sun, 22 Nov 2020 13:20:18 +0000
Received: by outflank-mailman (input) for mailman id 33288;
 Sun, 22 Nov 2020 13:20:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgpHs-00042I-Hw; Sun, 22 Nov 2020 13:20:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgpHs-0005Ws-Ao; Sun, 22 Nov 2020 13:20:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgpHs-0006y8-2p; Sun, 22 Nov 2020 13:20:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgpHs-0002JF-2K; Sun, 22 Nov 2020 13:20:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgpHs-00042I-Hw; Sun, 22 Nov 2020 13:20:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eSEYNVvj1JC1hvEbu5jq+yt7Cg7eKtV562k9v1mfm1k=; b=B+IPiMHF30WiillCu1YId+OLAA
	sMOVDfhq8gpanbW/LfCjt3PRc3N1BsbdUti/JMdDaacHbDNDqoOAWqBCsjK+sYMjkGuSmoO5vc9S0
	jwk/1HJVaGF2bnI6kI29vPucrKzDIaJJ4ReCviL+ySYg/WFNZm4IwYCYAhUpRTPPfubE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgpHs-0005Ws-Ao; Sun, 22 Nov 2020 13:20:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgpHs-0006y8-2p; Sun, 22 Nov 2020 13:20:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgpHs-0002JF-2K; Sun, 22 Nov 2020 13:20:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156934-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156934: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=e3a232cccd2445e5d9e607a65a78cdbc33ff8a0f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 13:20:16 +0000

flight 156934 qemu-mainline real [real]
flight 156944 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156934/
http://logs.test-lab.xenproject.org/osstest/logs/156944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds      3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                e3a232cccd2445e5d9e607a65a78cdbc33ff8a0f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   94 days
Failing since        152659  2020-08-21 14:07:39 Z   92 days  197 attempts
Testing same since   156925  2020-11-21 14:09:11 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67422 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 13:46:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 13:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33298.64320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgph5-0006Q2-Dv; Sun, 22 Nov 2020 13:46:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33298.64320; Sun, 22 Nov 2020 13:46:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgph5-0006Pv-Az; Sun, 22 Nov 2020 13:46:19 +0000
Received: by outflank-mailman (input) for mailman id 33298;
 Sun, 22 Nov 2020 13:46:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgph3-0006Pm-JV; Sun, 22 Nov 2020 13:46:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgph3-000626-CU; Sun, 22 Nov 2020 13:46:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgph3-0008AI-3r; Sun, 22 Nov 2020 13:46:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgph3-0006Jp-0u; Sun, 22 Nov 2020 13:46:17 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/yJg090He+7eDweB8ym5ddy9RJrSmDjZKejlUj/0W8w=; b=LZn1YVzwHmE0WeohjsDOk/EMu1
	596ZrQYfcizFuIquKdjELxCge4G4ptTEPtBt8ejOrwPMf5WQQTsPfrvQlXJtDOY5mysimBYT6rBpb
	w4DyNeXEMnFKPC2kdXCU9QT8dkvxSXORjI6CMAAOvgg4uFv3FWaOscEY61g0QLo8nxAI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156935-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156935: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
X-Osstest-Versions-That:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 13:46:17 +0000

flight 156935 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156935/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156918
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156918
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156918
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156918
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156918
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156918
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156918
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156918
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156918
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156918
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156918
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad
baseline version:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad

Last test of basis   156935  2020-11-22 01:51:26 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Nov 22 14:47:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 14:47:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33323.64342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgqdm-0004LD-HQ; Sun, 22 Nov 2020 14:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33323.64342; Sun, 22 Nov 2020 14:46:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgqdm-0004L6-DJ; Sun, 22 Nov 2020 14:46:58 +0000
Received: by outflank-mailman (input) for mailman id 33323;
 Sun, 22 Nov 2020 14:46:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=54+R=E4=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kgqdl-0004Ka-71
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 14:46:57 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a290f822-bb6b-47ee-a05f-17abc098a773;
 Sun, 22 Nov 2020 14:46:56 +0000 (UTC)
Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com
 [209.85.222.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-146-e4uQWsE8Onm26TxSK_V78g-1; Sun, 22 Nov 2020 09:46:52 -0500
Received: by mail-qk1-f197.google.com with SMTP id s9so12636390qks.2
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 06:46:52 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id x72sm6888242qkb.90.2020.11.22.06.46.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 22 Nov 2020 06:46:51 -0800 (PST)
X-Inumbo-ID: a290f822-bb6b-47ee-a05f-17abc098a773
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606056416;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=apju6gP6XSpkYwp4WdL57a/nFAvNI/QCKhQkuBxc3S8=;
	b=XA3Bdia/xK9rtALsHLhdMXNkl5EWPz83+inaq5KZnxbZp/pqE7ot4cuq9wfEbTWVu1jCkd
	fkI/qAlh4sU3l8dOrlq7C5hNdTdH0le7t+rDrYZ5+jjUTJP5ekv3G25710udgF6Q97joi/
	ih3DR460gM+B9YwlFm0ND3pgcFfhizQ=
X-MC-Unique: e4uQWsE8Onm26TxSK_V78g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=apju6gP6XSpkYwp4WdL57a/nFAvNI/QCKhQkuBxc3S8=;
        b=F46NCnsny/1zbFQ9jAnm3TE21zoEQeMdKtCgJHmesafGSoCmebeg6frfOmwSeVDzzH
         rOLpBdkaP8CcY6zrZFPZqKqJUGSt3b5iFtQJ5ex3B32hpbiV1VUzkpBX1Lj4H09VEPT1
         y+w+GANJyikW9hyzcqABWoK3e+GLwJ0oX5i7hNgetklUY++/l7HdQiT6ODJgNUcJ5heD
         dF9t9tlLhEOPWc202I4CKtiCBKk2ErVovkg8jwTWRci/xm8RpzaJlAhoLIvFU4A3UZui
         uLW4acUgRXjwLu0DyJlrQllBnpbwUL4jHkcpuumnMciNTxvodLyYqc22gg1j6Bx3m3Ir
         DVzA==
X-Gm-Message-State: AOAM533hjg++bbtjSaCnbwNQR8c8Wnj7jASg87wV+wdlYHxC9Tq7Ncs1
	jZUhrFjveoG6MJvp0vQ9eckxHgNrSeddDl19QAHNqfLXJ2ZSXyDZ5j5C8NqbtqgqESomelCDE02
	OlOWoe1+QObq5Tkxg1dENuCj6QqM=
X-Received: by 2002:ad4:476b:: with SMTP id d11mr26026190qvx.57.1606056412431;
        Sun, 22 Nov 2020 06:46:52 -0800 (PST)
X-Google-Smtp-Source: ABdhPJyPQ8vJIBgyJxmgPlUVzOaStXFRaD0Z+d8VDmnR7kdLyNkvwByAGPov006wc7+pJBCcgj+/zw==
X-Received: by 2002:ad4:476b:: with SMTP id d11mr26026152qvx.57.1606056412222;
        Sun, 22 Nov 2020 06:46:52 -0800 (PST)
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
To: Matthew Wilcox <willy@infradead.org>
Cc: joe@perches.com, clang-built-linux@googlegroups.com,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net,
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org, devel@acpica.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 netdev@vger.kernel.org, linux-media@vger.kernel.org,
 MPT-FusionLinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
 linux-wireless@vger.kernel.org, ibm-acpi-devel@lists.sourceforge.net,
 platform-driver-x86@vger.kernel.org, linux-usb@vger.kernel.org,
 linux-omap@vger.kernel.org, linux-fbdev@vger.kernel.org,
 ecryptfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 cluster-devel@redhat.com, linux-mtd@lists.infradead.org,
 keyrings@vger.kernel.org, netfilter-devel@vger.kernel.org,
 coreteam@netfilter.org, alsa-devel@alsa-project.org, bpf@vger.kernel.org,
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org,
 patches@opensource.cirrus.com
References: <20201121165058.1644182-1-trix@redhat.com>
 <20201122032304.GE4327@casper.infradead.org>
From: Tom Rix <trix@redhat.com>
Message-ID: <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>
Date: Sun, 22 Nov 2020 06:46:46 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201122032304.GE4327@casper.infradead.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11/21/20 7:23 PM, Matthew Wilcox wrote:
> On Sat, Nov 21, 2020 at 08:50:58AM -0800, trix@redhat.com wrote:
>> The fixer review is
>> https://reviews.llvm.org/D91789
>>
>> A run over allyesconfig for x86_64 finds 62 issues, 5 are false positives.
>> The false positives are caused by macros passed to other macros and by
>> some macro expansions that did not have an extra semicolon.
>>
>> This cleans up about 1,000 of the current 10,000 -Wextra-semi-stmt
>> warnings in linux-next.
> Are any of them not false-positives?  It's all very well to enable
> stricter warnings, but if they don't fix any bugs, they're just churn.
>
While enabling additional warnings may be a side effect of this effort,
the primary goal is to set up a cleanup robot, and after that a refactoring robot.

Tom



From xen-devel-bounces@lists.xenproject.org Sun Nov 22 14:56:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 14:56:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33333.64354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgqnO-0005Sk-Fp; Sun, 22 Nov 2020 14:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33333.64354; Sun, 22 Nov 2020 14:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgqnO-0005Sd-Cp; Sun, 22 Nov 2020 14:56:54 +0000
Received: by outflank-mailman (input) for mailman id 33333;
 Sun, 22 Nov 2020 14:56:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9StC=E4=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kgqnM-0005SY-2q
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 14:56:52 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cdcf57cc-e582-44d8-8c1f-fe20e2adadc4;
 Sun, 22 Nov 2020 14:56:47 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kgqn5-0000Ms-Pt; Sun, 22 Nov 2020 14:56:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=9StC=E4=infradead.org=willy@srs-us1.protection.inumbo.net>)
	id 1kgqnM-0005SY-2q
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 14:56:52 +0000
X-Inumbo-ID: cdcf57cc-e582-44d8-8c1f-fe20e2adadc4
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cdcf57cc-e582-44d8-8c1f-fe20e2adadc4;
	Sun, 22 Nov 2020 14:56:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=wxlelFGFCxCiVSAw6KoblZyEK+s/RwoBRdtshdQHcTA=; b=IHdP9cS5+aim9K2c8XsejJHGrM
	+S6gdmtkdU0PTIkPZFY+qbqZ8sMUEbhQL8b8Qdm+o2+njoM1Cm2cM6mbD/iY0TNSwZxJRLByIjVzg
	eNcrVpKlWHjnQQvQ1Go6iFjsUh68sqz8dumIdI2qfrFsd2XrVO5w3MRgBBmvjmOvnFaYJLzopUIl7
	R6ZVO/KTc63YeijcPbeFNlRWdMYtVgUZuoprtKEH7yjNmV1uGOzQEbR4NJQpt6J1Nu3N9hq7ljiYQ
	UIsENsOUDnACtwno8Bc2xaczmaEz8q4+sGKzgwN4pxETWG486xJZxN2UeB5LYfIlImw9WtEFIo9GQ
	DSwauRxg==;
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kgqn5-0000Ms-Pt; Sun, 22 Nov 2020 14:56:36 +0000
Date: Sun, 22 Nov 2020 14:56:35 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Tom Rix <trix@redhat.com>
Cc: joe@perches.com, clang-built-linux@googlegroups.com,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net,
	kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-acpi@vger.kernel.org, devel@acpica.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org,
	linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org,
	ibm-acpi-devel@lists.sourceforge.net,
	platform-driver-x86@vger.kernel.org, linux-usb@vger.kernel.org,
	linux-omap@vger.kernel.org, linux-fbdev@vger.kernel.org,
	ecryptfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	cluster-devel@redhat.com, linux-mtd@lists.infradead.org,
	keyrings@vger.kernel.org, netfilter-devel@vger.kernel.org,
	coreteam@netfilter.org, alsa-devel@alsa-project.org,
	bpf@vger.kernel.org, linux-bluetooth@vger.kernel.org,
	linux-nfs@vger.kernel.org, patches@opensource.cirrus.com
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
Message-ID: <20201122145635.GG4327@casper.infradead.org>
References: <20201121165058.1644182-1-trix@redhat.com>
 <20201122032304.GE4327@casper.infradead.org>
 <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>

On Sun, Nov 22, 2020 at 06:46:46AM -0800, Tom Rix wrote:
> 
> On 11/21/20 7:23 PM, Matthew Wilcox wrote:
> > On Sat, Nov 21, 2020 at 08:50:58AM -0800, trix@redhat.com wrote:
> >> The fixer review is
> >> https://reviews.llvm.org/D91789
> >>
> >> A run over allyesconfig for x86_64 finds 62 issues, 5 are false positives.
> >> The false positives are caused by macros passed to other macros and by
> >> some macro expansions that did not have an extra semicolon.
> >>
> >> This cleans up about 1,000 of the current 10,000 -Wextra-semi-stmt
> >> warnings in linux-next.
> > Are any of them not false-positives?  It's all very well to enable
> > stricter warnings, but if they don't fix any bugs, they're just churn.
> >
> While enabling additional warnings may be a side effect of this effort,
> the primary goal is to set up a cleanup robot, and after that a refactoring robot.

Why do we need such a thing?  Again, it sounds like more churn.
It's really annoying when I'm working on something important that gets
derailed by pointless churn.  Churn also makes it harder to backport
patches to earlier kernels.


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 16:11:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 16:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33360.64371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgrxE-0005mP-39; Sun, 22 Nov 2020 16:11:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33360.64371; Sun, 22 Nov 2020 16:11:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgrxD-0005mI-WF; Sun, 22 Nov 2020 16:11:08 +0000
Received: by outflank-mailman (input) for mailman id 33360;
 Sun, 22 Nov 2020 16:11:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=54+R=E4=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kgrxB-0005mC-TH
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:11:06 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 77ca28b9-cca2-4b60-86bf-1500a993fbfd;
 Sun, 22 Nov 2020 16:11:03 +0000 (UTC)
Received: from mail-qt1-f197.google.com (mail-qt1-f197.google.com
 [209.85.160.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-194-PnxnsHywOeO7SP8jZaQIGA-1; Sun, 22 Nov 2020 11:10:59 -0500
Received: by mail-qt1-f197.google.com with SMTP id c2so11750997qtx.3
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 08:10:59 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id l3sm2779806qth.13.2020.11.22.08.10.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 22 Nov 2020 08:10:58 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=54+R=E4=redhat.com=trix@srs-us1.protection.inumbo.net>)
	id 1kgrxB-0005mC-TH
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:11:06 +0000
X-Inumbo-ID: 77ca28b9-cca2-4b60-86bf-1500a993fbfd
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 77ca28b9-cca2-4b60-86bf-1500a993fbfd;
	Sun, 22 Nov 2020 16:11:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606061463;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=D4o3+oRulyAmQWJaTM5GlynW0/tmbkOzx3PzhPB5iTg=;
	b=JuTdFtHhsclSCFUt685nObhejVSN718q0xDzBI7qCmYl4/ObRaglwiPh1kkjwftUYKtjGT
	F9V7hg+6Ged1Ypc78HlKhU8yURVG+HfY97vOU+FDQTH2+EktMkCoujoBzuPIEBmLFdUcTH
	U0gcN5s+aBBBeO3IQtZ/9hGSdS8be9I=
Received: from mail-qt1-f197.google.com (mail-qt1-f197.google.com
 [209.85.160.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-194-PnxnsHywOeO7SP8jZaQIGA-1; Sun, 22 Nov 2020 11:10:59 -0500
X-MC-Unique: PnxnsHywOeO7SP8jZaQIGA-1
Received: by mail-qt1-f197.google.com with SMTP id c2so11750997qtx.3
        for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 08:10:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=D4o3+oRulyAmQWJaTM5GlynW0/tmbkOzx3PzhPB5iTg=;
        b=gP1uMiBDngimd93OcuEp54Y+rkueevHCceMK3hD9VlOTOM7Knzh1ELEFSCuc5RysmD
         hm0gpZiDy5+U862XF2bPNQqYISQ09AwT5D+C7bFVZDvmPN1qo7HXPd8v28ha+RBUlBy0
         Yki+MBlr942jdlvfvb3N70gWsAt4TNPW5WhEXEp28fmmrnfutsOS+qCUumTJP9EupP3s
         sudg2LaoUVu9tT7wxIXTtTlT/btn9TXJExR6dRicztRm777O0Y8b1SvxEo+VgIzX8G7l
         M09JAKf2ygdDMOAedLPRFm/Q7KrEwrRrZ9LXzm/5XLyNwnh2+//8qsquTtBPXbqjx9GQ
         ZOXA==
X-Gm-Message-State: AOAM530b8g/nuKW9dcOlciVK9QUFAUCWiq3sDccHVGC0/xfeTacZcUlK
	X/HS0tsW1Sco2J81Pc8RR4SdMPq7lYBmVOKuYNIR2LsSqOBSFmZgUmfs0xxbC2xyQ4zZdx3TYzI
	pgbeKvMAtRddhJD2uAvRkyvTsth8=
X-Received: by 2002:ac8:5a8c:: with SMTP id c12mr23364800qtc.97.1606061459263;
        Sun, 22 Nov 2020 08:10:59 -0800 (PST)
X-Google-Smtp-Source: ABdhPJw6JfmMkGlJyoaKx6ihYsYbDuDsWkyDwIosiC/apv5eylMrcIvHPxlHVgnFDCnun9WXh4UZ3w==
X-Received: by 2002:ac8:5a8c:: with SMTP id c12mr23364770qtc.97.1606061458997;
        Sun, 22 Nov 2020 08:10:58 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com. [75.142.250.213])
        by smtp.gmail.com with ESMTPSA id l3sm2779806qth.13.2020.11.22.08.10.54
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Sun, 22 Nov 2020 08:10:58 -0800 (PST)
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
To: Matthew Wilcox <willy@infradead.org>
Cc: joe@perches.com, clang-built-linux@googlegroups.com,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net,
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org, devel@acpica.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 netdev@vger.kernel.org, linux-media@vger.kernel.org,
 MPT-FusionLinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
 linux-wireless@vger.kernel.org, ibm-acpi-devel@lists.sourceforge.net,
 platform-driver-x86@vger.kernel.org, linux-usb@vger.kernel.org,
 linux-omap@vger.kernel.org, linux-fbdev@vger.kernel.org,
 ecryptfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 cluster-devel@redhat.com, linux-mtd@lists.infradead.org,
 keyrings@vger.kernel.org, netfilter-devel@vger.kernel.org,
 coreteam@netfilter.org, alsa-devel@alsa-project.org, bpf@vger.kernel.org,
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org,
 patches@opensource.cirrus.com
References: <20201121165058.1644182-1-trix@redhat.com>
 <20201122032304.GE4327@casper.infradead.org>
 <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>
 <20201122145635.GG4327@casper.infradead.org>
From: Tom Rix <trix@redhat.com>
Message-ID: <0819ce06-c462-d4df-d3d9-14931dc5aefc@redhat.com>
Date: Sun, 22 Nov 2020 08:10:53 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201122145635.GG4327@casper.infradead.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 11/22/20 6:56 AM, Matthew Wilcox wrote:
> On Sun, Nov 22, 2020 at 06:46:46AM -0800, Tom Rix wrote:
>> On 11/21/20 7:23 PM, Matthew Wilcox wrote:
>>> On Sat, Nov 21, 2020 at 08:50:58AM -0800, trix@redhat.com wrote:
>>>> The fixer review is
>>>> https://reviews.llvm.org/D91789
>>>>
>>>> A run over allyesconfig for x86_64 finds 62 issues, 5 are false positives.
>>>> The false positives are caused by macros passed to other macros and by
>>>> some macro expansions that did not have an extra semicolon.
>>>>
>>>> This cleans up about 1,000 of the current 10,000 -Wextra-semi-stmt
>>>> warnings in linux-next.
>>> Are any of them not false-positives?  It's all very well to enable
>>> stricter warnings, but if they don't fix any bugs, they're just churn.
>>>
>> While enabling additional warnings may be a side effect of this effort,
>> the primary goal is to set up a cleanup robot, and after that a refactoring robot.
> Why do we need such a thing?  Again, it sounds like more churn.
> It's really annoying when I'm working on something important that gets
> derailed by pointless churn.  Churn also makes it harder to backport
> patches to earlier kernels.
>
A refactoring example, moving the tree to consistent use of a new API, may help.

Consider

2efc459d06f1630001e3984854848a5647086232

sysfs: Add sysfs_emit and sysfs_emit_at to format sysfs output

A new API for formatting sysfs output.  How do we use it treewide?

Done manually, it would be a heroic effort requiring high-level maintainers to push it, and it would likely only get partially done.

If a programmatic refactoring fixit is written and validated on one subsystem, it can be run on all the subsystems.

The effort is a couple of weeks to write and validate the fixer, and hours to run it over the tree.

It won't be perfect, but it will be better than doing it manually.

Tom



From xen-devel-bounces@lists.xenproject.org Sun Nov 22 16:17:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 16:17:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33367.64384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgs31-0005zb-OW; Sun, 22 Nov 2020 16:17:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33367.64384; Sun, 22 Nov 2020 16:17:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgs31-0005zU-LK; Sun, 22 Nov 2020 16:17:07 +0000
Received: by outflank-mailman (input) for mailman id 33367;
 Sun, 22 Nov 2020 16:17:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P5EI=E4=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1kgs31-0005zP-1e
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:17:07 +0000
Received: from mail-pf1-x441.google.com (unknown [2607:f8b0:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b5abb6d-c332-4c79-8bb7-44f71d5276bd;
 Sun, 22 Nov 2020 16:17:06 +0000 (UTC)
Received: by mail-pf1-x441.google.com with SMTP id y7so12553044pfq.11
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 08:17:06 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id k4sm9841327pfg.130.2020.11.22.08.17.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 22 Nov 2020 08:17:04 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=P5EI=E4=chromium.org=keescook@srs-us1.protection.inumbo.net>)
	id 1kgs31-0005zP-1e
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:17:07 +0000
X-Inumbo-ID: 0b5abb6d-c332-4c79-8bb7-44f71d5276bd
Received: from mail-pf1-x441.google.com (unknown [2607:f8b0:4864:20::441])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0b5abb6d-c332-4c79-8bb7-44f71d5276bd;
	Sun, 22 Nov 2020 16:17:06 +0000 (UTC)
Received: by mail-pf1-x441.google.com with SMTP id y7so12553044pfq.11
        for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 08:17:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=9LoGd3XD212DnUOzzxWdBwAHKcFiABUM1eku/Z5s9PQ=;
        b=ECdUiFozoGotedNMltHxGvt7ELeQp/og9KGaJat0+erwcdPPWVCrU8KkW+JV4RYPeo
         GTWUobzmr0s313q/lzhn4jF5RxJP4nhZO/aj20hZaH8d/g/a456RbO+LKniOS4LntN7M
         GHx9cYZv8xWYKIg8n9C3ZJn+Q+L/6/s35hbx0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=9LoGd3XD212DnUOzzxWdBwAHKcFiABUM1eku/Z5s9PQ=;
        b=fq8MlgGqVgtcsMSZaZeDmOjgxhYj7N2bAe8L8sFma4PsWHiiBlNSTSCT+6OgObDDI3
         unEGzRSZ/NNB8KXQRPw77K3vh8R6M/IjeSpP9Srhm3Nb+MTR40hNz6HPFcfoSkUlKQ8+
         4MJl9ppq954yEuJ0saWnngSsh5kuuR1qId2Ga/5zyx3k/0xpi5Er6lCis+uSisycjk7T
         6fgNVpq58FYRwPDQfy2rShOPW4wiRQR4tIIEIxikk7jt9Vq0AdasEA4dijZHRVL+UQem
         Ondp4xgqGRATQ3bsZb0gTrQMv0X6VCbAPSTzCLZ8VGkVrj9n2ukeI2XdBnCdJoVhSY8d
         xMPQ==
X-Gm-Message-State: AOAM531ygPHJmHxgBHCpQvaZL3PICVsu6jbJnT4LzNesxSxrHjvaKH4/
	xZ22sb7c2f6wWcPdMT4IE/em2A==
X-Google-Smtp-Source: ABdhPJzjrfS3ZVuiz5fqtjIAbZtKQ5pfqPy3q+oKf7VtoFjoXLjA79CcRlXr5vTeWnitsMgA/HY0Vg==
X-Received: by 2002:a63:1d0b:: with SMTP id d11mr21383404pgd.368.1606061825374;
        Sun, 22 Nov 2020 08:17:05 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
        by smtp.gmail.com with ESMTPSA id k4sm9841327pfg.130.2020.11.22.08.17.03
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Sun, 22 Nov 2020 08:17:04 -0800 (PST)
Date: Sun, 22 Nov 2020 08:17:03 -0800
From: Kees Cook <keescook@chromium.org>
To: Jakub Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
	linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
	amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
	coreteam@netfilter.org, devel@driverdev.osuosl.org,
	dm-devel@redhat.com, drbd-dev@lists.linbit.com,
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
	netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
	oss-drivers@netronome.com, patches@opensource.cirrus.com,
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
	samba-technical@lists.samba.org, selinux@vger.kernel.org,
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <202011220816.8B6591A@keescook>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> On Fri, 20 Nov 2020 11:30:40 -0800 Kees Cook wrote:
> > On Fri, Nov 20, 2020 at 10:53:44AM -0800, Jakub Kicinski wrote:
> > > On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:  
> > > > This series aims to fix almost all remaining fall-through warnings in
> > > > order to enable -Wimplicit-fallthrough for Clang.
> > > > 
> > > > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > > > add multiple break/goto/return/fallthrough statements instead of just
> > > > letting the code fall through to the next case.
> > > > 
> > > > Notice that in order to enable -Wimplicit-fallthrough for Clang, this
> > > > change[1] is meant to be reverted at some point. So, this patch helps
> > > > to move in that direction.
> > > > 
> > > > Something important to mention is that there is currently a discrepancy
> > > > between GCC and Clang when dealing with switch fall-through to empty case
> > > > statements or to cases that only contain a break/continue/return
> > > > statement[2][3][4].  
> > > 
> > > Are we sure we want to make this change? Was it discussed before?
> > > 
> > > Are there any bugs Clang's puritanical definition of fallthrough helped
> > > find?
> > > 
> > > IMVHO compiler warnings are supposed to warn about issues that could
> > > be bugs. Falling through to default: break; can hardly be a bug?!  
> > 
> > It's certainly a place where the intent is not always clear. I think
> > this makes all the cases unambiguous, and doesn't impact the machine
> > code, since the compiler will happily optimize away any behavioral
> > redundancy.
> 
> If none of the 140 patches here fix a real bug, and there is no change
> to machine code then it sounds to me like a W=2 kind of a warning.

FWIW, this series has found at least one bug so far:
https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 16:33:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 16:33:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33377.64396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgsIo-00085u-SG; Sun, 22 Nov 2020 16:33:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33377.64396; Sun, 22 Nov 2020 16:33:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgsIo-00085n-MT; Sun, 22 Nov 2020 16:33:26 +0000
Received: by outflank-mailman (input) for mailman id 33377;
 Sun, 22 Nov 2020 16:33:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=54+R=E4=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kgsIn-00085i-Tw
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:33:25 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 770ac6b5-14be-4778-9677-10cb3ea9e164;
 Sun, 22 Nov 2020 16:33:25 +0000 (UTC)
Received: from mail-qv1-f71.google.com (mail-qv1-f71.google.com
 [209.85.219.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-275-kJllZgyeOMqBPpxRusX5Aw-1; Sun, 22 Nov 2020 11:33:22 -0500
Received: by mail-qv1-f71.google.com with SMTP id y21so11246387qve.7
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 08:33:21 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id 9sm7113466qke.6.2020.11.22.08.33.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 22 Nov 2020 08:33:20 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=54+R=E4=redhat.com=trix@srs-us1.protection.inumbo.net>)
	id 1kgsIn-00085i-Tw
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:33:25 +0000
X-Inumbo-ID: 770ac6b5-14be-4778-9677-10cb3ea9e164
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 770ac6b5-14be-4778-9677-10cb3ea9e164;
	Sun, 22 Nov 2020 16:33:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606062804;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eSFXaX0DZkpNbOwM/RCeF/BEd3n14fz/kurtk7pTmoY=;
	b=StCMHC/HyCDcHG808p/9thQCbNHLB97ek1T9oforhFojh55MW9LfrBZih5vkd7qgZ43dSF
	OZ3JOWRoVWjL7zUSFlP0LDpmoWD2sc3F8Zm34Dd/PaZ0RswUInINC4teeaarn0nRRR/QOI
	+dDEJjm8SJb3l7vs6UWTL1dZHykwyjY=
Received: from mail-qv1-f71.google.com (mail-qv1-f71.google.com
 [209.85.219.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-275-kJllZgyeOMqBPpxRusX5Aw-1; Sun, 22 Nov 2020 11:33:22 -0500
X-MC-Unique: kJllZgyeOMqBPpxRusX5Aw-1
Received: by mail-qv1-f71.google.com with SMTP id y21so11246387qve.7
        for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 08:33:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=eSFXaX0DZkpNbOwM/RCeF/BEd3n14fz/kurtk7pTmoY=;
        b=X2aJABvWDAx26JXxzHWkt2tAeont+JJQBt9WfSUxMVNYz+1LsHZSSv3dO33VGjZpuF
         7gaR1pZpCQj63Stwik+ASDBn/hMD5vBDyFngQREZ570kGP6eY6x9/MSWeMZF23HuiIHo
         dnpW/Ds9rTCIrleJ/Zsx1R/eL1Xv74+naFvXPc9vv60uRMtAUQTps88Y8WtvAXbQQ03Y
         DcdZvVKnOK0iyXKDr8Vi5ExURMD+G+B2vRFGA5iZJ1Z9d5hlF0RcL5o7WE3bStgpQA69
         S90PzuC0cTcXgUFGXDTM53bHfzPTH1NcD/shgT7PjFrzKOjjQAGHmqV2fOKFaEtJdXP5
         aqsw==
X-Gm-Message-State: AOAM532JnM1MaNGnVEY/WhjjFLQ0I3bDEITNMbwTo8hRCaYDIy0IB5bP
	1x9/OBvyQVL8dZ20V0iyQOtUj7DP8LxSshBrxTEJ7A9DCpFah2iRe3MqwRixfB2FOrUIPlQUtrG
	uGQX46BODQmYIfdnb1Rj0iDAqvQk=
X-Received: by 2002:a37:a783:: with SMTP id q125mr25815207qke.10.1606062801574;
        Sun, 22 Nov 2020 08:33:21 -0800 (PST)
X-Google-Smtp-Source: ABdhPJx2n++9V878KrbbQS+4X3uFT/lj12S/EG8yAhFKA6vFd9PzTR06otQkDsZPYmv5N+Hbwviu/A==
X-Received: by 2002:a37:a783:: with SMTP id q125mr25815168qke.10.1606062801138;
        Sun, 22 Nov 2020 08:33:21 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com. [75.142.250.213])
        by smtp.gmail.com with ESMTPSA id 9sm7113466qke.6.2020.11.22.08.33.17
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Sun, 22 Nov 2020 08:33:20 -0800 (PST)
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
To: Joe Perches <joe@perches.com>, clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net,
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org, devel@acpica.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 netdev@vger.kernel.org, linux-media@vger.kernel.org,
 MPT-FusionLinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
 linux-wireless@vger.kernel.org, ibm-acpi-devel@lists.sourceforge.net,
 platform-driver-x86@vger.kernel.org, linux-usb@vger.kernel.org,
 linux-omap@vger.kernel.org, linux-fbdev@vger.kernel.org,
 ecryptfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 cluster-devel@redhat.com, linux-mtd@lists.infradead.org,
 keyrings@vger.kernel.org, netfilter-devel@vger.kernel.org,
 coreteam@netfilter.org, alsa-devel@alsa-project.org, bpf@vger.kernel.org,
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org,
 patches@opensource.cirrus.com
References: <20201121165058.1644182-1-trix@redhat.com>
 <2105f0c05e9eae8bee8e17dcc5314474b3c0bc73.camel@perches.com>
From: Tom Rix <trix@redhat.com>
Message-ID: <6e8c1926-4209-8f10-d0f9-72c875a85a88@redhat.com>
Date: Sun, 22 Nov 2020 08:33:16 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <2105f0c05e9eae8bee8e17dcc5314474b3c0bc73.camel@perches.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 11/21/20 9:10 AM, Joe Perches wrote:
> On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
>> A difficult part of automating commits is composing the subsystem
>> preamble in the commit log.  For the ongoing effort of a fixer producing
>> one or two fixes a release, the use of 'treewide:' does not seem appropriate.
>>
>> It would be better if the normal prefix was used.  Unfortunately normal is
>> not consistent across the tree.
>>
>> So I am looking for comments for adding a new tag to the MAINTAINERS file
>>
>> 	D: Commit subsystem prefix
>>
>> ex/ for FPGA DFL DRIVERS
>>
>> 	D: fpga: dfl:
> I'm all for it.  Good luck with the effort.  It's not completely trivial.
>
> From a decade ago:
>
> https://lore.kernel.org/lkml/1289919077.28741.50.camel@Joe-Laptop/
>
> (and that thread started with extra semicolon patches too)

Reading the history, how about this.

get_maintainer.pl outputs a single prefix: if multiple files share the same prefix it works; if they don't, it's an error.

Another script, 'commit_one_file.sh', calls get_maintainer.pl to get the prefix and is called by run-clang-tools.py to get the fixer-specific message.

Minimizing the commits by combining similar subsystems is deferred until later.

In a steady state case, this should be uncommon.
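The prefix lookup described above can be sketched roughly as follows. This is a hypothetical illustration only: the 'D:' tag is still a proposal, today's get_maintainer.pl does not emit it, and the section name and file layout here are examples, not real tree contents.

```shell
#!/bin/sh
# Hypothetical sketch of the lookup commit_one_file.sh would do, assuming
# the proposed 'D:' tag exists in a MAINTAINERS-style file.
prefix_for_section() {
    # $1 = MAINTAINERS-style file, $2 = section name (e.g. "FPGA DFL DRIVERS")
    awk -v sect="$2" '
        $0 == sect          { in_sect = 1; next }  # section header found
        in_sect && /^D:/    { sub(/^D:[ \t]*/, ""); print; exit }
        in_sect && /^$/     { exit }               # section ended, no D: tag
    ' "$1"
}
```

commit_one_file.sh could then prepend the returned prefix to the fixer-specific message from run-clang-tools.py, and error out when the files in one commit resolve to different prefixes.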


>
>> Continuing with cleaning up clang's -Wextra-semi-stmt
>> diff --git a/Makefile b/Makefile
> []
>> @@ -1567,20 +1567,21 @@ help:
>>  	 echo  ''
>>  	@echo  'Static analysers:'
>>  	@echo  '  checkstack      - Generate a list of stack hogs'
>>  	@echo  '  versioncheck    - Sanity check on version.h usage'
>>  	@echo  '  includecheck    - Check for duplicate included header files'
>>  	@echo  '  export_report   - List the usages of all exported symbols'
>>  	@echo  '  headerdep       - Detect inclusion cycles in headers'
>>  	@echo  '  coccicheck      - Check with Coccinelle'
>>  	@echo  '  clang-analyzer  - Check with clang static analyzer'
>>  	@echo  '  clang-tidy      - Check with clang-tidy'
>> +	@echo  '  clang-tidy-fix  - Check and fix with clang-tidy'
> A pity the ordering of the code below isn't the same as the above.

Taken care of, thanks!

Tom




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 16:49:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 16:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33385.64407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgsYi-0000wT-Cv; Sun, 22 Nov 2020 16:49:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33385.64407; Sun, 22 Nov 2020 16:49:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgsYi-0000wM-9p; Sun, 22 Nov 2020 16:49:52 +0000
Received: by outflank-mailman (input) for mailman id 33385;
 Sun, 22 Nov 2020 16:49:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgsYf-0000wH-7L
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:49:50 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1fe6885-f3bc-4e2a-add9-3225fa569524;
 Sun, 22 Nov 2020 16:49:46 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id AB0CF1280302;
 Sun, 22 Nov 2020 08:49:44 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id iwIwMm9lBMHQ; Sun, 22 Nov 2020 08:49:44 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id CCD1012802EA;
 Sun, 22 Nov 2020 08:49:42 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
	id 1kgsYf-0000wH-7L
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 16:49:50 +0000
X-Inumbo-ID: d1fe6885-f3bc-4e2a-add9-3225fa569524
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d1fe6885-f3bc-4e2a-add9-3225fa569524;
	Sun, 22 Nov 2020 16:49:46 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by bedivere.hansenpartnership.com (Postfix) with ESMTP id AB0CF1280302;
	Sun, 22 Nov 2020 08:49:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606063784;
	bh=17TrMRtvefoo+nGHR97pCnj/lxz5M0xWRk4mlB4j/j4=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=DJQ3WuSCh5ONJkQLvrRDKWjtqFtZT1TAPnAUYm6nnSis8bRSxmxTUdD1PrB7UWicY
	 RxKUvvDgawnlhMMDvZIHrNHIQxzEk4H+L7edJ9WYAgYp3e2Z+uWjpWqDuwMfVruTvK
	 GP/WMd/p5KAU/iZA/nGFhNVXHTmLoWjH8aSdKt9E=
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
	by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id iwIwMm9lBMHQ; Sun, 22 Nov 2020 08:49:44 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown [IPv6:2601:600:8280:66d1::527])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id CCD1012802EA;
	Sun, 22 Nov 2020 08:49:42 -0800 (PST)
Message-ID: <751803306cd957d0e7ef6a4fc3dbf12ebceaba92.camel@HansenPartnership.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Tom Rix <trix@redhat.com>, Matthew Wilcox <willy@infradead.org>
Cc: joe@perches.com, clang-built-linux@googlegroups.com, 
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net, 
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org,  devel@acpica.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sun, 22 Nov 2020 08:49:41 -0800
In-Reply-To: <0819ce06-c462-d4df-d3d9-14931dc5aefc@redhat.com>
References: <20201121165058.1644182-1-trix@redhat.com>
	 <20201122032304.GE4327@casper.infradead.org>
	 <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>
	 <20201122145635.GG4327@casper.infradead.org>
	 <0819ce06-c462-d4df-d3d9-14931dc5aefc@redhat.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 08:10 -0800, Tom Rix wrote:
> On 11/22/20 6:56 AM, Matthew Wilcox wrote:
> > On Sun, Nov 22, 2020 at 06:46:46AM -0800, Tom Rix wrote:
> > > On 11/21/20 7:23 PM, Matthew Wilcox wrote:
> > > > On Sat, Nov 21, 2020 at 08:50:58AM -0800, trix@redhat.com
> > > > wrote:
> > > > > The fixer review is
> > > > > https://reviews.llvm.org/D91789
> > > > > 
> > > > > A run over allyesconfig for x86_64 finds 62 issues, 5 are
> > > > > false positives. The false positives are caused by macros
> > > > > passed to other macros and by some macro expansions that did
> > > > > not have an extra semicolon.
> > > > > 
> > > > > This cleans up about 1,000 of the current 10,000 -Wextra-
> > > > > semi-stmt warnings in linux-next.
> > > > Are any of them not false-positives?  It's all very well to
> > > > enable stricter warnings, but if they don't fix any bugs,
> > > > they're just churn.
> > > > 
> > > While enabling additional warnings may be a side effect of this
> > > effort
> > > 
> > > the primary goal is to set up a cleaning robot. After that a
> > > refactoring robot.
> > Why do we need such a thing?  Again, it sounds like more churn.
> > It's really annoying when I'm working on something important that
> > gets derailed by pointless churn.  Churn also makes it harder to
> > backport patches to earlier kernels.
> > 
> An example of refactoring toward treewide, consistent use of a new
> API may help.
> 
> Consider
> 
> 2efc459d06f1630001e3984854848a5647086232
> 
> sysfs: Add sysfs_emit and sysfs_emit_at to format sysfs output
> 
> A new api for printing in the sysfs.  How do we use it treewide ?
> 
> Done manually, it would be a heroic effort requiring high level
> maintainers pushing and likely only get partially done.
> 
> If a refactoring programmatic fixit is done and validated on one
> subsystem, it can run on all the subsystems.
> 
> The effort is a couple of weeks to write and validate the fixer,
> hours to run over the tree.
> 
> It won't be perfect but will be better than doing it manually.

Here's a thought: perhaps we don't.  sysfs_emit isn't a "new api"; it's a
minor rewrap of existing best practice.  The damage caused by the churn
of forcing its use everywhere would far outweigh any actual benefit
because pretty much every bug in this area has already been caught and
killed by existing tools.  We can enforce sysfs_emit going forwards
using tools like checkpatch but there's no benefit and a lot of harm to
be done by trying to churn the entire tree retrofitting it (both in
terms of review time wasted as well as patch series derailed).
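For reference, the "minor rewrap" amounts to a bounded formatter. Below is a userspace sketch of the idea only, not the kernel implementation: the real sysfs_emit() in fs/sysfs/file.c also warns when buf is not page-aligned, which this illustration omits.

```c
#include <stdarg.h>
#include <stdio.h>

#define PAGE_SIZE 4096  /* a sysfs show() buffer is one page */

/* Userspace sketch of the sysfs_emit() idea: vsnprintf bounded to one
 * page, so a show()-style callback cannot overrun its buffer.  The
 * name mirrors the kernel helper; the implementation is illustrative. */
static int sysfs_emit_sketch(char *buf, const char *fmt, ...)
{
    va_list args;
    int len;

    va_start(args, fmt);
    len = vsnprintf(buf, PAGE_SIZE, fmt, args);
    va_end(args);
    return len < PAGE_SIZE ? len : PAGE_SIZE - 1;
}
```

Callers that previously wrote snprintf(buf, PAGE_SIZE, ...) get the same output; the wrapper just centralizes the bound, which is why it is enforceable with checkpatch going forward rather than by treewide rewrites.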

James




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 16:58:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 16:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33396.64419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgsgW-0001yM-7G; Sun, 22 Nov 2020 16:57:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33396.64419; Sun, 22 Nov 2020 16:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgsgW-0001yF-4K; Sun, 22 Nov 2020 16:57:56 +0000
Received: by outflank-mailman (input) for mailman id 33396;
 Sun, 22 Nov 2020 16:57:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgsgU-0001y7-E4; Sun, 22 Nov 2020 16:57:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgsgU-00024f-42; Sun, 22 Nov 2020 16:57:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgsgT-00022C-LR; Sun, 22 Nov 2020 16:57:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgsgT-0008Pe-Ky; Sun, 22 Nov 2020 16:57:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgsgU-0001y7-E4; Sun, 22 Nov 2020 16:57:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mPUqy3SwoQgSWfalhLJsVJ6WxXTUKKdag5s2rh4u4ZU=; b=yNRmXg/T2Cn+ayUKK17PGc7fNQ
	xBC2zXBKM/Ed8nWz9WX+ExgcSRGKylN18+ve/1hpSqoVXajKx2Dq1p8MCwB3hscuMlR4KLj9XOTqW
	9x2+Ypri0b/Ik/+oAywc7Y4LlUB9Kfz75Wpx286hEvH+bdW0lYzMzRLN3o1DTfh0ztnI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgsgU-00024f-42; Sun, 22 Nov 2020 16:57:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgsgT-00022C-LR; Sun, 22 Nov 2020 16:57:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kgsgT-0008Pe-Ky; Sun, 22 Nov 2020 16:57:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156937: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a349e4c659609fd20e4beea89e5c4a4038e33a95
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 16:57:53 +0000

flight 156937 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 156929 pass in 156937
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen        fail pass in 156929
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 156929
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156929
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 156929
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156929
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 156929

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                a349e4c659609fd20e4beea89e5c4a4038e33a95
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  113 days
Failing since        152366  2020-08-01 20:49:34 Z  112 days  189 attempts
Testing same since   156929  2020-11-21 19:09:40 Z    0 days    2 attempts

------------------------------------------------------------
3565 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 681922 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 18:22:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 18:22:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33422.64435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu03-0002lp-NW; Sun, 22 Nov 2020 18:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33422.64435; Sun, 22 Nov 2020 18:22:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu03-0002li-KG; Sun, 22 Nov 2020 18:22:11 +0000
Received: by outflank-mailman (input) for mailman id 33422;
 Sun, 22 Nov 2020 18:22:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgu02-0002lc-Pm
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 18:22:10 +0000
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 97461730-f4f0-405c-a3d0-3d53ef9abaeb;
 Sun, 22 Nov 2020 18:22:06 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 5BB5B128028F;
 Sun, 22 Nov 2020 10:22:04 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id nGXBdIQjsZJa; Sun, 22 Nov 2020 10:22:04 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id DB46B1280287;
 Sun, 22 Nov 2020 10:22:00 -0800 (PST)
X-Inumbo-ID: 97461730-f4f0-405c-a3d0-3d53ef9abaeb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606069324;
	bh=h3sTlT+FA6NHXH38/foFLBJi59858PUlbpLzv2N/tQY=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=rW4pDHaLDXSFZXr6Cvxz8pUU95XYbgWiZrc9VupyznX2uzr45WZkkmPcoVv7UQLC7
	 fsMPdBOfQk9ceeuIggpWHLoyVFttsPe9E/go/w8zBhamKaY37ALEH+1JPMFR7KYHwj
	 pjW3T1Eqz9YAkW2FP12UvIfqAj7a03n4eq2U3CyI=
Message-ID: <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
 linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org,  devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com,  dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com,  GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org,  intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org,  linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org,  linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,  linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net,  linux-block@vger.kernel.org,
 linux-can@vger.kernel.org,  linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org,  linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org,  linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org,  linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org,  linux-mmc@vger.kernel.org,
 linux-mm@kvack.org, linux-mtd@lists.infradead.org, 
 linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
 linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org, 
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
 oss-drivers@netronome.com, patches@opensource.cirrus.com, 
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
 samba-technical@lists.samba.org, selinux@vger.kernel.org, 
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org,  Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Date: Sun, 22 Nov 2020 10:21:59 -0800
In-Reply-To: <202011220816.8B6591A@keescook>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 08:17 -0800, Kees Cook wrote:
> On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> > On Fri, 20 Nov 2020 11:30:40 -0800 Kees Cook wrote:
> > > On Fri, Nov 20, 2020 at 10:53:44AM -0800, Jakub Kicinski wrote:
> > > > On Fri, 20 Nov 2020 12:21:39 -0600 Gustavo A. R. Silva wrote:  
> > > > > This series aims to fix almost all remaining fall-through
> > > > > warnings in order to enable -Wimplicit-fallthrough for Clang.
> > > > > 
> > > > > In preparation to enable -Wimplicit-fallthrough for Clang,
> > > > > explicitly add multiple break/goto/return/fallthrough
> > > > > statements instead of just letting the code fall through to
> > > > > the next case.
> > > > > 
> > > > > Notice that in order to enable -Wimplicit-fallthrough for
> > > > > Clang, this change[1] is meant to be reverted at some point.
> > > > > So, this patch helps to move in that direction.
> > > > > 
> > > > > Something important to mention is that there is currently a
> > > > > discrepancy between GCC and Clang when dealing with switch
> > > > > fall-through to empty case statements or to cases that only
> > > > > contain a break/continue/return statement[2][3][4].  
> > > > 
> > > > Are we sure we want to make this change? Was it discussed
> > > > before?
> > > > 
> > > > Are there any bugs Clang's puritanical definition of fallthrough
> > > > helped find?
> > > > 
> > > > IMVHO compiler warnings are supposed to warn about issues that
> > > > could be bugs. Falling through to default: break; can hardly be
> > > > a bug?!  
> > > 
> > > It's certainly a place where the intent is not always clear. I
> > > think this makes all the cases unambiguous, and doesn't impact
> > > the machine code, since the compiler will happily optimize away
> > > any behavioral redundancy.
> > 
> > If none of the 140 patches here fix a real bug, and there is no
> > change to machine code then it sounds to me like a W=2 kind of a
> > warning.
> 
> FWIW, this series has found at least one bug so far:
> https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/


Well, it's a problem in an error leg, sure, but it's not a really
compelling reason for a 141 patch series, is it?  All that fixing this
error will do is get the driver to print "oh dear there's a problem"
under four more conditions than it previously did.

We've been at this for three years now with nearly a thousand patches,
firstly marking all the fall throughs with /* fall through */ and later
changing it to fallthrough.  At some point we do have to ask if the
effort is commensurate with the protection afforded.  Please tell me
our reward for all this effort isn't a single missing error print.

James




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 18:22:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 18:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33426.64447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu0d-0002qx-1N; Sun, 22 Nov 2020 18:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33426.64447; Sun, 22 Nov 2020 18:22:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu0c-0002qq-Ue; Sun, 22 Nov 2020 18:22:46 +0000
Received: by outflank-mailman (input) for mailman id 33426;
 Sun, 22 Nov 2020 18:22:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D2VG=E4=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kgu0c-0002qk-5O
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 18:22:46 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.233])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4067301f-669f-469b-95df-740ab119a442;
 Sun, 22 Nov 2020 18:22:42 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay05.hostedemail.com (Postfix) with ESMTP id 5998D18029124;
 Sun, 22 Nov 2020 18:22:42 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf13.hostedemail.com (Postfix) with ESMTPA;
 Sun, 22 Nov 2020 18:22:37 +0000 (UTC)
X-Inumbo-ID: 4067301f-669f-469b-95df-740ab119a442
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:800:960:967:973:982:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1535:1543:1593:1594:1605:1711:1730:1747:1777:1792:2393:2525:2560:2563:2682:2685:2828:2859:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3622:3865:3866:3867:3868:3870:3871:3872:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4043:4321:5007:6119:6742:6743:7809:7903:8660:9025:10004:10400:10848:11026:11232:11473:11658:11914:12043:12295:12296:12297:12555:12663:12740:12760:12895:12986:13095:13148:13161:13229:13230:13439:14181:14659:14721:14822:21080:21324:21394:21433:21451:21627:21740:21811:21939:21987:30041:30054:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:2,LUA_SUMMARY:none
X-HE-Tag: base07_180d0122735e
X-Filterd-Recvd-Size: 5347
Message-ID: <859bae8ddae3238116824192f6ddf1c91a381913.camel@perches.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: Joe Perches <joe@perches.com>
To: Tom Rix <trix@redhat.com>, clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net, 
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org,  devel@acpica.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sun, 22 Nov 2020 10:22:36 -0800
In-Reply-To: <6e8c1926-4209-8f10-d0f9-72c875a85a88@redhat.com>
References: <20201121165058.1644182-1-trix@redhat.com>
	 <2105f0c05e9eae8bee8e17dcc5314474b3c0bc73.camel@perches.com>
	 <6e8c1926-4209-8f10-d0f9-72c875a85a88@redhat.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 08:33 -0800, Tom Rix wrote:
> On 11/21/20 9:10 AM, Joe Perches wrote:
> > On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
> > > A difficult part of automating commits is composing the subsystem
> > > preamble in the commit log.  For the ongoing effort of a fixer producing
> > > one or two fixes a release, the use of 'treewide:' does not seem appropriate.
> > > 
> > > It would be better if the normal prefix was used.  Unfortunately normal is
> > > not consistent across the tree.
> > > 
> > > So I am looking for comments for adding a new tag to the MAINTAINERS file
> > > 
> > > 	D: Commit subsystem prefix
> > > 
> > > ex/ for FPGA DFL DRIVERS
> > > 
> > > 	D: fpga: dfl:
> > I'm all for it.  Good luck with the effort.  It's not completely trivial.
> > 
> > From a decade ago:
> > 
> > https://lore.kernel.org/lkml/1289919077.28741.50.camel@Joe-Laptop/
> > 
> > (and that thread started with extra semicolon patches too)
> 
> Reading the history, how about this.
> 
> get_maintainer.pl outputs a single prefix; if multiple files have the
> same prefix it works, and if they don't it's an error.
> 
> Another script, 'commit_one_file.sh', does the call to get_maintainer.pl
> to get the prefix and is called by run-clang-tools.py to get the
> fixer-specific message.

It's not about whether the script used is get_maintainer or any other
script; the question is really whether the MAINTAINERS file is the
appropriate place to store per-subsystem patch prefixes.

It is.

Then the question should be how the forms are described and what the
inheritance priority is.  My preference would be a default of
inheriting the parent base and adding basename(subsystem dirname).

Commit history seems to have standardized on using colons as the separator
between the commit prefix and the subject.

A good mechanism to explore how various subsystems have used prefixes in
the past might be something like:

$ git log --no-merges --pretty='%s' -<commit_count> <subsystem_path> | \
  perl -n -e 'print substr($_, 0, rindex($_, ":") + 1) . "\n";' | \
  sort | uniq -c | sort -rn

Using 10000 for commit_count and drivers/scsi for subsystem_path, the
top 40 entries are below:

About 1% don't have a colon, and there is no real consistency even
within individual drivers below scsi.  For instance, qla2xxx:

     1	    814 scsi: qla2xxx:
     2	    691 scsi: lpfc:
     3	    389 scsi: hisi_sas:
     4	    354 scsi: ufs:
     5	    339 scsi:
     6	    291 qla2xxx:
     7	    256 scsi: megaraid_sas:
     8	    249 scsi: mpt3sas:
     9	    200 hpsa:
    10	    190 scsi: aacraid:
    11	    174 lpfc:
    12	    153 scsi: qedf:
    13	    144 scsi: smartpqi:
    14	    139 scsi: cxlflash:
    15	    122 scsi: core:
    16	    110 [SCSI] qla2xxx:
    17	    108 ncr5380:
    18	     98 scsi: hpsa:
    19	     97 
    20	     89 treewide:
    21	     88 mpt3sas:
    22	     86 scsi: libfc:
    23	     85 scsi: qedi:
    24	     84 scsi: be2iscsi:
    25	     81 [SCSI] qla4xxx:
    26	     81 hisi_sas:
    27	     81 block:
    28	     75 megaraid_sas:
    29	     71 scsi: sd:
    30	     69 [SCSI] hpsa:
    31	     68 cxlflash:
    32	     65 scsi: libsas:
    33	     65 scsi: fnic:
    34	     61 scsi: scsi_debug:
    35	     60 scsi: arcmsr:
    36	     57 be2iscsi:
    37	     53 atp870u:
    38	     51 scsi: bfa:
    39	     50 scsi: storvsc:
    40	     48 sd:




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 18:23:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 18:23:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33432.64459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu1R-0002zr-CR; Sun, 22 Nov 2020 18:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33432.64459; Sun, 22 Nov 2020 18:23:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu1R-0002zk-82; Sun, 22 Nov 2020 18:23:37 +0000
Received: by outflank-mailman (input) for mailman id 33432;
 Sun, 22 Nov 2020 18:23:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D2VG=E4=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kgu1P-0002zb-DI
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 18:23:35 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.203])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f86001f0-9152-4ccb-993e-ce1e8c3a136e;
 Sun, 22 Nov 2020 18:23:34 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay05.hostedemail.com (Postfix) with ESMTP id 524371802912B;
 Sun, 22 Nov 2020 18:23:34 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf14.hostedemail.com (Postfix) with ESMTPA;
 Sun, 22 Nov 2020 18:23:29 +0000 (UTC)
X-Inumbo-ID: f86001f0-9152-4ccb-993e-ce1e8c3a136e
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 2,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1540:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:2828:3138:3139:3140:3141:3142:3352:3622:3865:3866:3867:3868:3870:3871:3872:4250:4321:5007:6691:6742:6743:7903:10004:10400:10848:11232:11658:11914:12296:12297:12740:12760:12895:13019:13069:13161:13229:13311:13357:13439:14040:14096:14097:14659:14721:21080:21324:21433:21451:21627:30012:30054:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: match16_380b6892735e
X-Filterd-Recvd-Size: 3051
Message-ID: <dec07021e7fc11a02b14c98b713ae2c6e2a4ca00.camel@perches.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: Joe Perches <joe@perches.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>, Tom Rix
	 <trix@redhat.com>, Matthew Wilcox <willy@infradead.org>
Cc: clang-built-linux@googlegroups.com, linux-hyperv@vger.kernel.org, 
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, 
 tboot-devel@lists.sourceforge.net, kvm@vger.kernel.org, 
 linux-crypto@vger.kernel.org, linux-acpi@vger.kernel.org, devel@acpica.org,
  amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sun, 22 Nov 2020 10:23:28 -0800
In-Reply-To: <751803306cd957d0e7ef6a4fc3dbf12ebceaba92.camel@HansenPartnership.com>
References: <20201121165058.1644182-1-trix@redhat.com>
	 <20201122032304.GE4327@casper.infradead.org>
	 <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>
	 <20201122145635.GG4327@casper.infradead.org>
	 <0819ce06-c462-d4df-d3d9-14931dc5aefc@redhat.com>
	 <751803306cd957d0e7ef6a4fc3dbf12ebceaba92.camel@HansenPartnership.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 08:49 -0800, James Bottomley wrote:
> We can enforce sysfs_emit going forwards
> using tools like checkpatch

It's not really possible for checkpatch to find or warn about
sysfs uses of sprintf. checkpatch is really just a trivial
line-by-line parser and has no concept of code intent.

It simply can't warn on every use of the sprintf family;
there are too many perfectly valid uses.

> but there's no benefit and a lot of harm to
> be done by trying to churn the entire tree

Single uses of sprintf for sysfs are not really a problem.

But there are likely still several possible overrun sprintf/snprintf
paths in sysfs.  Some of them are very obscure and unlikely to be
found by a robot, as the logic around sysfs buf uses can be fairly twisty.

But provably correct conversions IMO _should_ be done, and churn
considerations should generally carry less weight.





From xen-devel-bounces@lists.xenproject.org Sun Nov 22 18:25:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 18:25:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33436.64471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu3c-0003BV-Ok; Sun, 22 Nov 2020 18:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33436.64471; Sun, 22 Nov 2020 18:25:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgu3c-0003BO-La; Sun, 22 Nov 2020 18:25:52 +0000
Received: by outflank-mailman (input) for mailman id 33436;
 Sun, 22 Nov 2020 18:25:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D2VG=E4=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kgu3a-0003BI-SH
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 18:25:50 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.109])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3143e4ad-ef99-44c3-97d6-9178eba05b3d;
 Sun, 22 Nov 2020 18:25:50 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay06.hostedemail.com (Postfix) with ESMTP id 883C21800AEAD;
 Sun, 22 Nov 2020 18:25:49 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf09.hostedemail.com (Postfix) with ESMTPA;
 Sun, 22 Nov 2020 18:25:38 +0000 (UTC)
X-Inumbo-ID: 3143e4ad-ef99-44c3-97d6-9178eba05b3d
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 2,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1537:1561:1593:1594:1711:1714:1730:1747:1777:1792:2393:2559:2562:2828:3138:3139:3140:3141:3142:3622:3865:3866:4321:5007:6742:6743:10004:10400:10848:11232:11658:11914:12297:12740:12760:12895:13019:13069:13161:13229:13311:13357:13439:14659:21080:21450:21627:30054:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: stamp54_4305a342735e
X-Filterd-Recvd-Size: 3841
Message-ID: <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: Joe Perches <joe@perches.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>, Kees Cook
	 <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
 linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org,  devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com,  dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com,  GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org,  intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org,  linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org,  linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,  linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net,  linux-block@vger.kernel.org,
 linux-can@vger.kernel.org,  linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org,  linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org,  linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org,  linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org,  linux-mmc@vger.kernel.org,
 linux-mm@kvack.org, linux-mtd@lists.infradead.org, 
 linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
 linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org, 
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
 oss-drivers@netronome.com, patches@opensource.cirrus.com, 
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
 samba-technical@lists.samba.org, selinux@vger.kernel.org, 
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org,  Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>
Date: Sun, 22 Nov 2020 10:25:37 -0800
In-Reply-To: <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> Please tell me
> our reward for all this effort isn't a single missing error print.

There were quite literally dozens of logical defects found
by the fallthrough additions.  Very few were logging only.





From xen-devel-bounces@lists.xenproject.org Sun Nov 22 19:12:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 19:12:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33452.64483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgumx-0008JX-Ah; Sun, 22 Nov 2020 19:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33452.64483; Sun, 22 Nov 2020 19:12:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgumx-0008JQ-7m; Sun, 22 Nov 2020 19:12:43 +0000
Received: by outflank-mailman (input) for mailman id 33452;
 Sun, 22 Nov 2020 19:12:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgumv-0008JL-Tf
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 19:12:41 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f673d10d-689f-4e95-8b82-0ce987f19af3;
 Sun, 22 Nov 2020 19:12:35 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 96EF41280181;
 Sun, 22 Nov 2020 11:12:34 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id T_J7YGNeXPBb; Sun, 22 Nov 2020 11:12:34 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id D7CFF128010B;
 Sun, 22 Nov 2020 11:12:30 -0800 (PST)
X-Inumbo-ID: f673d10d-689f-4e95-8b82-0ce987f19af3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606072354;
	bh=RkMkP78xnGMXpQYwW6B64FZnxQgTZZOXvlSf0xxLQvs=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=cqs+SSmhVi1ZEA6KYpGlCOCKLLv9H3tmiNoTpcFF7F+1vb1QgbekNgnLlgvfKOfrE
	 OZFKROCwJ+4Dh9AKsWr2ZhTbG6lfWTbk5QBrrreOMOUbINEmr9+ncDNzytUg1VYukU
	 VtKrfA4fWPJSjG94bNM6AxKG7W4waC4yzJNdPcng=
Message-ID: <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Joe Perches <joe@perches.com>, Kees Cook <keescook@chromium.org>, Jakub
	Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
 linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org,  devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com,  dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com,  GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org,  intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org,  linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org,  linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,  linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net,  linux-block@vger.kernel.org,
 linux-can@vger.kernel.org,  linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org,  linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org,  linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org,  linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org,  linux-mmc@vger.kernel.org,
 linux-mm@kvack.org, linux-mtd@lists.infradead.org, 
 linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
 linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org, 
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
 oss-drivers@netronome.com, patches@opensource.cirrus.com, 
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
 samba-technical@lists.samba.org, selinux@vger.kernel.org, 
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org,  Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>
Date: Sun, 22 Nov 2020 11:12:30 -0800
In-Reply-To: <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 10:25 -0800, Joe Perches wrote:
> On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> > Please tell me our reward for all this effort isn't a single
> > missing error print.
> 
> There were quite literally dozens of logical defects found
> by the fallthrough additions.  Very few were logging only.

So can you give us the best examples (or indeed all of them, if someone
is keeping score)?  Hopefully this isn't a US election situation ...

James




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 19:23:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 19:23:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33459.64495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kguww-00011A-Gc; Sun, 22 Nov 2020 19:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33459.64495; Sun, 22 Nov 2020 19:23:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kguww-000113-CR; Sun, 22 Nov 2020 19:23:02 +0000
Received: by outflank-mailman (input) for mailman id 33459;
 Sun, 22 Nov 2020 19:23:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=D2VG=E4=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kguwv-00010y-9o
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 19:23:01 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.148])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4276b92a-1366-4c74-959d-5e426de671e6;
 Sun, 22 Nov 2020 19:23:00 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay06.hostedemail.com (Postfix) with ESMTP id 8A2DD18221869;
 Sun, 22 Nov 2020 19:22:59 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf17.hostedemail.com (Postfix) with ESMTPA;
 Sun, 22 Nov 2020 19:22:48 +0000 (UTC)
X-Inumbo-ID: 4276b92a-1366-4c74-959d-5e426de671e6
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:967:973:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1539:1593:1594:1711:1730:1747:1777:1792:2194:2199:2393:2525:2560:2563:2682:2685:2828:2859:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3352:3622:3865:3866:3871:3873:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4250:4321:5007:6742:6743:7903:8985:9025:9108:10004:10400:10848:11232:11658:11914:12043:12297:12555:12740:12760:12895:13069:13311:13357:13439:14181:14659:14721:21080:21450:21499:21627:30054:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:2,LUA_SUMMARY:none
X-HE-Tag: curve68_4013d692735f
X-Filterd-Recvd-Size: 4350
Message-ID: <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: Joe Perches <joe@perches.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>, Kees Cook
	 <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
 linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org,  devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com,  dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com,  GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org,  intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org,  linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org,  linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,  linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net,  linux-block@vger.kernel.org,
 linux-can@vger.kernel.org,  linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org,  linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org,  linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org,  linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org,  linux-mmc@vger.kernel.org,
 linux-mm@kvack.org, linux-mtd@lists.infradead.org, 
 linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
 linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org, 
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
 oss-drivers@netronome.com, patches@opensource.cirrus.com, 
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
 samba-technical@lists.samba.org, selinux@vger.kernel.org, 
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org,  Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>
Date: Sun, 22 Nov 2020 11:22:47 -0800
In-Reply-To: <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
	 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 11:12 -0800, James Bottomley wrote:
> On Sun, 2020-11-22 at 10:25 -0800, Joe Perches wrote:
> > On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> > > Please tell me our reward for all this effort isn't a single
> > > missing error print.
> > 
> > There were quite literally dozens of logical defects found
> > by the fallthrough additions.  Very few were logging only.
> 
> So can you give us the best examples (or indeed all of them if someone
> is keeping score)?  hopefully this isn't a US election situation ...

Gustavo?  Are you running for congress now?

https://lwn.net/Articles/794944/




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 19:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 19:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33465.64506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgv1b-0001Hm-2Z; Sun, 22 Nov 2020 19:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33465.64506; Sun, 22 Nov 2020 19:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgv1a-0001He-Vk; Sun, 22 Nov 2020 19:27:50 +0000
Received: by outflank-mailman (input) for mailman id 33465;
 Sun, 22 Nov 2020 19:27:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgv1Z-0001HS-60; Sun, 22 Nov 2020 19:27:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgv1Y-00059s-PW; Sun, 22 Nov 2020 19:27:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgv1Y-0007Zj-Dx; Sun, 22 Nov 2020 19:27:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgv1Y-000469-DA; Sun, 22 Nov 2020 19:27:48 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zcnr+M2z5ycv0fBA2Lb+z95QIgZlAcebA9POQ3SN9vQ=; b=YjCdkdi+VYDVe+hQKQkxbX1DtM
	sr3DHj+2zMCbdCLaTYdp9PUa2uHaSbLm0wRS8/0AtMsm73Jjry2wGNcpU9n26qe74RqeT24VMaq4t
	Nu567kfYETHxsTnRu731Tz70LGmiYXmVQpesTab5stdJ8vgkzWuPADWLYD+ga7XlIn2k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156942-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156942: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=fc8334619167ce90b6d3f76e3dce9284dbe14fa2
X-Osstest-Versions-That:
    linux=315443293a2d0d7c183ca6dd4624d9e4f8a7054a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 22 Nov 2020 19:27:48 +0000

flight 156942 linux-5.4 real [real]
flight 156949 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156942/
http://logs.test-lab.xenproject.org/osstest/logs/156949/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail pass in 156949-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore            fail  like 156861
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156874
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156874
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156874
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156874
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156874
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156874
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156874
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156874
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156874
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156874
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156874
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156874
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                fc8334619167ce90b6d3f76e3dce9284dbe14fa2
baseline version:
 linux                315443293a2d0d7c183ca6dd4624d9e4f8a7054a

Last test of basis   156874  2020-11-19 06:40:23 Z    3 days
Testing same since   156942  2020-11-22 09:40:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bodong Zhao <nopitydays@gmail.com>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Daniel Axtens <dja@axtens.net>
  David Edmondson <david.edmondson@oracle.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eran Ben Elisha <eranbe@mellanox.com>
  Eran Ben Elisha <eranbe@nvidia.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hauke Mehrtens <hauke@hauke-m.de>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Michael Ellerman <mpe@ellerman.id.au>
  Moshe Shemesh <moshe@mellanox.com>
  Nicholas Piggin <npiggin@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Paolo Bonzini <pbonzini@redhat.com>
  Parav Pandit <parav@mellanox.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Russell Currey <ruscur@russell.cc>
  Saeed Mahameed <saeedm@mellanox.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   315443293a2d..fc8334619167  fc8334619167ce90b6d3f76e3dce9284dbe14fa2 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 19:54:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 19:54:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33475.64527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgvR2-0005UQ-Cm; Sun, 22 Nov 2020 19:54:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33475.64527; Sun, 22 Nov 2020 19:54:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgvR2-0005UJ-9Z; Sun, 22 Nov 2020 19:54:08 +0000
Received: by outflank-mailman (input) for mailman id 33475;
 Sun, 22 Nov 2020 19:54:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgvR0-0005U9-Va
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 19:54:07 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e817c4d4-e280-44d8-8932-c8e33efce2ef;
 Sun, 22 Nov 2020 19:54:01 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 5AE091280408;
 Sun, 22 Nov 2020 11:53:59 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id KaMwHtHCHbnw; Sun, 22 Nov 2020 11:53:59 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id D0B171280404;
 Sun, 22 Nov 2020 11:53:55 -0800 (PST)
X-Inumbo-ID: e817c4d4-e280-44d8-8932-c8e33efce2ef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606074839;
	bh=VEGy54rcLCho40R+6JbprsRZooc9e7x1ylV8+ruCN0g=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=Xy60wiIzsyTeElUeeJd3QYsiHNZfzmGET/Nzo9eZo2OJxmb5EOyvszf5Q8Et12YN3
	 QzMj8C6lMBcV0iKMn2xQmIYhyRP6O8RGeJWdk6ZnR1Mz2fkvBJLWRT04tHjc221TnA
	 rtTRX0GCrDoOkJiVFq/y98T9XhefjbMkzX0sdCSc=
Message-ID: <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Joe Perches <joe@perches.com>, Kees Cook <keescook@chromium.org>, Jakub
	Kicinski <kuba@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
 linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org,  devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com,  dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com,  GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org,  intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org,  linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org,  linux-afs@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org,  linux-arm-msm@vger.kernel.org,
 linux-atm-general@lists.sourceforge.net,  linux-block@vger.kernel.org,
 linux-can@vger.kernel.org,  linux-cifs@vger.kernel.org,
 linux-crypto@vger.kernel.org,  linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org,  linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org,  linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org,  linux-mmc@vger.kernel.org,
 linux-mm@kvack.org, linux-mtd@lists.infradead.org, 
 linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
 linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org, 
 netdev@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
 oss-drivers@netronome.com, patches@opensource.cirrus.com, 
 rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
 samba-technical@lists.samba.org, selinux@vger.kernel.org, 
 target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 x86@kernel.org, xen-devel@lists.xenproject.org,
 linux-hardening@vger.kernel.org,  Nick Desaulniers
 <ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
 Miguel Ojeda <ojeda@kernel.org>
Date: Sun, 22 Nov 2020 11:53:55 -0800
In-Reply-To: <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
	 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
	 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 11:22 -0800, Joe Perches wrote:
> On Sun, 2020-11-22 at 11:12 -0800, James Bottomley wrote:
> > On Sun, 2020-11-22 at 10:25 -0800, Joe Perches wrote:
> > > On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> > > > Please tell me our reward for all this effort isn't a single
> > > > missing error print.
> > > 
> > > There were quite literally dozens of logical defects found
> > > by the fallthrough additions.  Very few were logging only.
> > 
> > So can you give us the best examples (or indeed all of them if
> > someone is keeping score)?  hopefully this isn't a US election
> > situation ...
> 
> Gustavo?  Are you running for congress now?
> 
> https://lwn.net/Articles/794944/

That's 21 reported fixes, of which about 50% seem to produce no change
in code behaviour at all, a quarter seem to have no user-visible
effect, and the remaining quarter produce unexpected errors on obscure
configuration parameters, which is why no-one really noticed them
before.

James




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 20:36:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 20:36:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33485.64544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgw5j-00035x-Ri; Sun, 22 Nov 2020 20:36:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33485.64544; Sun, 22 Nov 2020 20:36:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgw5j-00035q-NQ; Sun, 22 Nov 2020 20:36:11 +0000
Received: by outflank-mailman (input) for mailman id 33485;
 Sun, 22 Nov 2020 20:36:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0jIZ=E4=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1kgw5i-00035i-OB
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 20:36:10 +0000
Received: from mail-yb1-xb31.google.com (unknown [2607:f8b0:4864:20::b31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5b45d64-3f5d-439d-b491-8476191028e2;
 Sun, 22 Nov 2020 20:36:10 +0000 (UTC)
Received: by mail-yb1-xb31.google.com with SMTP id x17so14047820ybr.8
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 12:36:10 -0800 (PST)
X-Inumbo-ID: f5b45d64-3f5d-439d-b491-8476191028e2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=GG13N+h9bYKW0tA8PzZdEh0PqD5/qSPuWfMm4/MLZZE=;
        b=tX7AFJY3IoP+sTWjLWjwUeA0EiMqyjgmEMGUK52Uheybmur4GieXYHjq/4452d4+Q2
         I9IJc2W+KgP7eM5cLMrffBSaL1fq5VPLYq7a7Nqy7aqiJs+SWc7hYJy9lsWlIs20dLH2
         W28Iwaw2K1E1/9bR59jMmk/7Gq8vv14a82SqbrX8Cr26/AWqo5ergIUL6PfX6EI1DxrF
         H3tDAymEGdy6lnWgT39rAP3JOfP6UnfKa9FSSCeE7ggKiNT9+2hZ/9zdGiTOU+6HH+d5
         TmFQQAuEaRP9Z2Bh9bU0txdUhaZCbT+Ezs+qExtvq1zOJlrZzRYF6kJpZnKXIPWyTN/2
         bJqw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=GG13N+h9bYKW0tA8PzZdEh0PqD5/qSPuWfMm4/MLZZE=;
        b=j/aXgW7DLOtU+2TcuTdXKrACQI3AYt5/cf/S+wGFFutaQL0kIOXGwZtZ8kJjX3q7R2
         5WEdw821k410rQbklUS1ZMqcv+pNlTtuGkNvOLtvCm5QHvAUpvDCOYw93CMekefxBAXh
         MqRKOCQDFVAJ1n1fXXhYRz8jLIgqgKTz+Foes3Rw9K2ypGTWH+KTw0XBtE9rHiclT2UD
         1lzrrsUf65o5QRY39sRBcO+2z4i6T43jpCMYIvd9Aj+88FMif9m2jSJiqPuBv/OKbIm3
         TMbuV85UEvYB+L19702MbyNf8njC5qFv6EtDZY4GnSxraSTW4YpjA+Js+8wPgqyLIGs/
         SnaQ==
X-Gm-Message-State: AOAM531xYi3IrkA45j2oFLL7/AAoFIhmEcy08G4bM2NBuQbnx9wgG+tj
	8yQrwhs1u2mFYTSO6bGPN1Weq8Ot9njnuEVYZF4=
X-Google-Smtp-Source: ABdhPJzCLkP7XKvI+ogPcqXNjFlbBz0ulixnxLa8L+LTJiC1sb757UHHSouM0vJ9LX/4+Ocy8hzM6Anb9s4lPpy7cZY=
X-Received: by 2002:a25:6986:: with SMTP id e128mr4956056ybc.93.1606077369721;
 Sun, 22 Nov 2020 12:36:09 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
In-Reply-To: <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Sun, 22 Nov 2020 21:35:58 +0100
Message-ID: <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Nov 22, 2020 at 7:22 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> Well, it's a problem in an error leg, sure, but it's not a really
> compelling reason for a 141 patch series, is it?  All that fixing this
> error will do is get the driver to print "oh dear there's a problem"
> under four more conditions than it previously did.
>
> We've been at this for three years now with nearly a thousand patches,
> firstly marking all the fall throughs with /* fall through */ and later
> changing it to fallthrough.  At some point we do have to ask if the
> effort is commensurate with the protection afforded.  Please tell me
> our reward for all this effort isn't a single missing error print.

It isn't that much effort, is it? Plus we need to take into account
the future mistakes that it might prevent, too. So even if there were
zero problems found so far, it would still be a positive change.

I would agree if these changes were high risk, but they are almost
trivial.

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 21:45:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 21:45:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33503.64560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxAW-0002hm-3O; Sun, 22 Nov 2020 21:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33503.64560; Sun, 22 Nov 2020 21:45:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxAW-0002hf-0J; Sun, 22 Nov 2020 21:45:12 +0000
Received: by outflank-mailman (input) for mailman id 33503;
 Sun, 22 Nov 2020 21:45:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ookb=E4=kernel.org=luto@srs-us1.protection.inumbo.net>)
 id 1kgxAU-0002ha-Ik
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 21:45:10 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11f79419-10fd-421e-b73e-843b6a0110a1;
 Sun, 22 Nov 2020 21:45:08 +0000 (UTC)
Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com
 [209.85.128.41])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C9B702084C
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 21:45:07 +0000 (UTC)
Received: by mail-wm1-f41.google.com with SMTP id c198so14291657wmd.0
 for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 13:45:07 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ookb=E4=kernel.org=luto@srs-us1.protection.inumbo.net>)
	id 1kgxAU-0002ha-Ik
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 21:45:10 +0000
X-Inumbo-ID: 11f79419-10fd-421e-b73e-843b6a0110a1
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 11f79419-10fd-421e-b73e-843b6a0110a1;
	Sun, 22 Nov 2020 21:45:08 +0000 (UTC)
Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com [209.85.128.41])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id C9B702084C
	for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 21:45:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606081508;
	bh=j8kQr1sqydOsvjEY9shLB0wmUrzUic9iCe1dNmPahV0=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=2o1jrHuQqj+owOO+c9Zeqxrd9M6szemlUmJSqcWPhibowyO6pXzhHfe+VU2CVHSEx
	 6i9l1pH/OrjB/ycgmevivnDyFQfhPd0sDLvyPdoRmGLzLfeZdce65//c9hJPyiAb0C
	 IF/2dwA3JwvzN2WhaDRzodNq4IxJJOJIIJXD3gjA=
Received: by mail-wm1-f41.google.com with SMTP id c198so14291657wmd.0
        for <xen-devel@lists.xenproject.org>; Sun, 22 Nov 2020 13:45:07 -0800 (PST)
X-Gm-Message-State: AOAM533aLK3TddNWrYmdGe/ittJtb8tAeZ89VQTsA44PWNYO9rwBaUy/
	D/RkMYsx6mfVzVXgVJFBbKJ1I5M96is8nUhLtvwsng==
X-Google-Smtp-Source: ABdhPJwifakpgwUVTLbeDBYdoVmnAfRpiEVwGa2XrrCKUuyY/SYxT6I31Kv43oa6WFJkbjZ+4GFZn5n4pemXbXWt+C8=
X-Received: by 2002:a1c:2781:: with SMTP id n123mr3937832wmn.49.1606081506352;
 Sun, 22 Nov 2020 13:45:06 -0800 (PST)
MIME-Version: 1.0
References: <20201120114630.13552-1-jgross@suse.com> <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net> <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
In-Reply-To: <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
From: Andy Lutomirski <luto@kernel.org>
Date: Sun, 22 Nov 2020 13:44:53 -0800
X-Gmail-Original-Message-ID: <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
Message-ID: <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use popf
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	"VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>, 
	LKML <linux-kernel@vger.kernel.org>, 
	Linux Virtualization <virtualization@lists.linux-foundation.org>, Ingo Molnar <mingo@redhat.com>, 
	Borislav Petkov <bp@alien8.de>, Andrew Lutomirski <luto@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Thomas Gleixner <tglx@linutronix.de>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Nov 21, 2020 at 10:55 PM Jürgen Groß <jgross@suse.com> wrote:
>
> On 20.11.20 12:59, Peter Zijlstra wrote:
> > On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
> >> +static __always_inline void arch_local_irq_restore(unsigned long flags)
> >> +{
> >> +    if (!arch_irqs_disabled_flags(flags))
> >> +            arch_local_irq_enable();
> >> +}
> >
> > If someone were to write horrible code like:
> >
> >       local_irq_disable();
> >       local_irq_save(flags);
> >       local_irq_enable();
> >       local_irq_restore(flags);
> >
> > we'd be up some creek without a paddle... now I don't _think_ we have
> > genius code like that, but I'd feel safer if we can haz an assertion in
> > there somewhere...
> >
> > Maybe something like:
> >
> > #ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
> >       WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
> > #endif
> >
> > At the end?
>
> I'd like to, but using WARN_ON_ONCE() in include/asm/irqflags.h sounds
> like a perfect recipe for include dependency hell.
>
> We could use a plain asm("ud2") instead.

How about out-of-lining it:

#ifdef CONFIG_DEBUG_ENTRY
extern void warn_bogus_irqrestore(void);
#endif

static __always_inline void arch_local_irq_restore(unsigned long flags)
{
       if (!arch_irqs_disabled_flags(flags)) {
               arch_local_irq_enable();
       } else {
#ifdef CONFIG_DEBUG_ENTRY
               if (unlikely(arch_local_save_flags() & X86_EFLAGS_IF))
                       warn_bogus_irqrestore();
#endif
       }
}


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 22:11:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 22:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33515.64572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxZO-0005uR-5u; Sun, 22 Nov 2020 22:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33515.64572; Sun, 22 Nov 2020 22:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxZO-0005uK-1q; Sun, 22 Nov 2020 22:10:54 +0000
Received: by outflank-mailman (input) for mailman id 33515;
 Sun, 22 Nov 2020 22:10:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EgT9=E4=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kgxZM-0005uF-1T
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:10:52 +0000
Received: from asavdk4.altibox.net (unknown [109.247.116.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 630083b3-53cd-45e5-95a7-9052cdfbcdba;
 Sun, 22 Nov 2020 22:10:50 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk4.altibox.net (Postfix) with ESMTPS id 2C52280567;
 Sun, 22 Nov 2020 23:10:42 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=EgT9=E4=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
	id 1kgxZM-0005uF-1T
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:10:52 +0000
X-Inumbo-ID: 630083b3-53cd-45e5-95a7-9052cdfbcdba
Received: from asavdk4.altibox.net (unknown [109.247.116.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 630083b3-53cd-45e5-95a7-9052cdfbcdba;
	Sun, 22 Nov 2020 22:10:50 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by asavdk4.altibox.net (Postfix) with ESMTPS id 2C52280567;
	Sun, 22 Nov 2020 23:10:42 +0100 (CET)
Date: Sun, 22 Nov 2020 23:10:40 +0100
From: Sam Ravnborg <sam@ravnborg.org>
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>,
	alsa-devel@alsa-project.org,
	linux-atm-general@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-wireless@vger.kernel.org, linux-fbdev@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Nathan Chancellor <natechancellor@gmail.com>,
	linux-ide@vger.kernel.org, dm-devel@redhat.com,
	keyrings@vger.kernel.org, linux-mtd@lists.infradead.org,
	GR-everest-linux-l2@marvell.com, wcn36xx@lists.infradead.org,
	samba-technical@lists.samba.org, linux-i3c@lists.infradead.org,
	linux1394-devel@lists.sourceforge.net,
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net,
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org,
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com,
	Nick Desaulniers <ndesaulniers@google.com>,
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org,
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org,
	linux-security-module@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com,
	linux-acpi@vger.kernel.org, coreteam@netfilter.org,
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	tipc-discussion@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-media@vger.kernel.org, linux-watchdog@vger.kernel.org,
	selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org,
	linux-can@vger.kernel.org, linux-block@vger.kernel.org,
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org,
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org,
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	target-devel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-hwmon@vger.kernel.org, x86@kernel.org,
	linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com,
	linux-mm@kvack.org, netdev@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>,
	linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org,
	linux-crypto@vger.kernel.org, patches@opensource.cirrus.com,
	Joe Perches <joe@perches.com>, linux-integrity@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201122221040.GD566387@ravnborg.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=VafZwmh9 c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=kj9zAlcOel0A:10 a=VwQbUJbxAAAA:8 a=pGLkceISAAAA:8
	a=7T594MSkF3521FIrX4wA:9 a=CjuIK1q_8ugA:10 a=AjGcO6oz07-iQ99wixmX:22

Hi James.

> > > If none of the 140 patches here fix a real bug, and there is no
> > > change to machine code then it sounds to me like a W=2 kind of a
> > > warning.
> > 
> > FWIW, this series has found at least one bug so far:
> > https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/
> 
> 
> Well, it's a problem in an error leg, sure, but it's not a really
> compelling reason for a 141 patch series, is it?  All that fixing this
> error will do is get the driver to print "oh dear there's a problem"
> under four more conditions than it previously did.

You are asking the wrong question here.

You should ask how many hours could have been saved if all the bugs
people have been fighting with had been fixed *before* the code
hit the kernel at all.

My personal experience is that I, more than once, have had errors
related to a missing break in my code. So this warning is IMO a win.

And if we are only ~100 patches away from having it globally enabled then it is a
no-brainer in my book.

	Sam


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 22:34:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 22:34:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33526.64590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxvm-00086v-67; Sun, 22 Nov 2020 22:34:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33526.64590; Sun, 22 Nov 2020 22:34:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxvm-00086o-38; Sun, 22 Nov 2020 22:34:02 +0000
Received: by outflank-mailman (input) for mailman id 33526;
 Sun, 22 Nov 2020 22:34:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TnN1=E4=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1kgxvl-00086j-4Y
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:34:01 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 23bfc369-788e-4889-a1cb-fdcf3babc460;
 Sun, 22 Nov 2020 22:33:59 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id A588721F21;
 Sun, 22 Nov 2020 17:33:55 -0500 (EST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TnN1=E4=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
	id 1kgxvl-00086j-4Y
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:34:01 +0000
X-Inumbo-ID: 23bfc369-788e-4889-a1cb-fdcf3babc460
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 23bfc369-788e-4889-a1cb-fdcf3babc460;
	Sun, 22 Nov 2020 22:33:59 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
	by kvm5.telegraphics.com.au (Postfix) with ESMTP id A588721F21;
	Sun, 22 Nov 2020 17:33:55 -0500 (EST)
Date: Mon, 23 Nov 2020 09:33:55 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Joe Perches <joe@perches.com>
cc: James Bottomley <James.Bottomley@HansenPartnership.com>, 
    Tom Rix <trix@redhat.com>, Matthew Wilcox <willy@infradead.org>, 
    clang-built-linux@googlegroups.com, linux-hyperv@vger.kernel.org, 
    linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, 
    tboot-devel@lists.sourceforge.net, kvm@vger.kernel.org, 
    linux-crypto@vger.kernel.org, linux-acpi@vger.kernel.org, devel@acpica.org, 
    amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
    linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
    linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
    ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
    linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
    linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
    linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
    linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
    netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
    alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
    linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
    patches@opensource.cirrus.com
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
In-Reply-To: <dec07021e7fc11a02b14c98b713ae2c6e2a4ca00.camel@perches.com>
Message-ID: <alpine.LNX.2.23.453.2011230810210.7@nippy.intranet>
References: <20201121165058.1644182-1-trix@redhat.com>         <20201122032304.GE4327@casper.infradead.org>         <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>         <20201122145635.GG4327@casper.infradead.org>         <0819ce06-c462-d4df-d3d9-14931dc5aefc@redhat.com>
         <751803306cd957d0e7ef6a4fc3dbf12ebceaba92.camel@HansenPartnership.com> <dec07021e7fc11a02b14c98b713ae2c6e2a4ca00.camel@perches.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII


On Sun, 22 Nov 2020, Joe Perches wrote:

> On Sun, 2020-11-22 at 08:49 -0800, James Bottomley wrote:
> > We can enforce sysfs_emit going forwards
> > using tools like checkpatch
> 
> It's not really possible for checkpatch to find or warn about
> sysfs uses of sprintf. checkpatch is really just a trivial
> line-by-line parser and it has no concept of code intent.
> 

Checkpatch does suffer from the limitations of regular expressions. But 
that doesn't stop people from using it. Besides, Coccinelle can do 
analyses that can't be done with regular expressions, so it's moot.

> It just can't warn on every use of the sprintf family.
> There are just too many perfectly valid uses.
> 
> > but there's no benefit and a lot of harm to
> > be done by trying to churn the entire tree
> 
> Single uses of sprintf for sysfs is not really any problem.
> 
> But likely there are still several possible overrun sprintf/snprintf
> paths in sysfs.  Some of them are very obscure and unlikely to be
> found by a robot as the logic for sysfs buf uses can be fairly twisty.
> 

Logic errors of this kind are susceptible to fuzzing, formal methods, 
symbolic execution etc. No doubt there are other techniques that I don't 
know about.

> But provably correct conversions IMO _should_ be done and IMO churn 
> considerations should generally have less importance.
> 

Provably equivalent conversions are provably churn. So apparently you're 
advocating changes that are not provably equivalent.

These are patches for code that's not been shown to be buggy. Code 
which, after patching, can be shown to be free of a specific kind of 
theoretical bug. Hardly "provably correct".

The problem is, the class of theoretical bugs that can be avoided in this 
way is probably limitless, as is the review cost and the risk of 
accidental regressions. And the payoff is entirely theoretical.

Moreover, the patch review workload for skilled humans is being generated 
by the automation, which is completely backwards: the machine is supposed 
to be helping.


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 22:36:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 22:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33531.64602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxxt-0008Fd-Lt; Sun, 22 Nov 2020 22:36:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33531.64602; Sun, 22 Nov 2020 22:36:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgxxt-0008FW-HA; Sun, 22 Nov 2020 22:36:13 +0000
Received: by outflank-mailman (input) for mailman id 33531;
 Sun, 22 Nov 2020 22:36:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgxxs-0008FR-Bd
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:36:12 +0000
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c167ebdb-d760-4d7f-a39c-97f67b3ac309;
 Sun, 22 Nov 2020 22:36:06 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 3EEFD12808AA;
 Sun, 22 Nov 2020 14:36:05 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id YhaEdPXdRRpF; Sun, 22 Nov 2020 14:36:05 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id B820412808A7;
 Sun, 22 Nov 2020 14:36:01 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
	id 1kgxxs-0008FR-Bd
	for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:36:12 +0000
X-Inumbo-ID: c167ebdb-d760-4d7f-a39c-97f67b3ac309
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c167ebdb-d760-4d7f-a39c-97f67b3ac309;
	Sun, 22 Nov 2020 22:36:06 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by bedivere.hansenpartnership.com (Postfix) with ESMTP id 3EEFD12808AA;
	Sun, 22 Nov 2020 14:36:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606084565;
	bh=y5Oo39UQGhMKIHztx5i9osM+IVUxOt/AInbXjgVEmOM=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=it2DEm5qbX6tNKGabRZf0YZ7FJN256ppIXEFnMqNHN+XY9h76oTIzuZgEyEoREiKM
	 yiP/0zNVKPNW1kWiLTindkrOQ7bXlSJOTVsRohihqSOzq1tvOmtHcybKKU2pQsn+gK
	 kXbE7Tzelwaqt/71bFtBSSIQ6PZYHLF3M7J5PGe0=
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
	by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id YhaEdPXdRRpF; Sun, 22 Nov 2020 14:36:05 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown [IPv6:2601:600:8280:66d1::527])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id B820412808A7;
	Sun, 22 Nov 2020 14:36:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606084565;
	bh=y5Oo39UQGhMKIHztx5i9osM+IVUxOt/AInbXjgVEmOM=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=it2DEm5qbX6tNKGabRZf0YZ7FJN256ppIXEFnMqNHN+XY9h76oTIzuZgEyEoREiKM
	 yiP/0zNVKPNW1kWiLTindkrOQ7bXlSJOTVsRohihqSOzq1tvOmtHcybKKU2pQsn+gK
	 kXbE7Tzelwaqt/71bFtBSSIQ6PZYHLF3M7J5PGe0=
Message-ID: <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
 "Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel
 <linux-kernel@vger.kernel.org>,  alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org,  bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org,  cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, Linux ARM
 <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, Linux Crypto Mailing
 List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,  Ext4 Developers List
 <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org, Linux
 Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless
 <linux-wireless@vger.kernel.org>,  Network Development
 <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org,  op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com,  patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com,  reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org,  selinux@vger.kernel.org,
 target-devel@vger.kernel.org,  tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org,  linux-hardening@vger.kernel.org, Nick
 Desaulniers <ndesaulniers@google.com>,  Nathan Chancellor
 <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, Joe Perches
 <joe@perches.com>
Date: Sun, 22 Nov 2020 14:36:00 -0800
In-Reply-To: <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-11-22 at 21:35 +0100, Miguel Ojeda wrote:
> On Sun, Nov 22, 2020 at 7:22 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > Well, it's a problem in an error leg, sure, but it's not a really
> > compelling reason for a 141 patch series, is it?  All that fixing
> > this error will do is get the driver to print "oh dear there's a
> > problem" under four more conditions than it previously did.
> > 
> > We've been at this for three years now with nearly a thousand
> > patches, firstly marking all the fall throughs with /* fall through
> > */ and later changing it to fallthrough.  At some point we do have
> > to ask if the effort is commensurate with the protection
> > afforded.  Please tell me our reward for all this effort isn't a
> > single missing error print.
> 
> It isn't that much effort, is it?

Well, it seems to be three years of someone's time plus the maintainer
review time and series disruption of nearly a thousand patches.  Let's
be conservative and assume the producer spent about 30% of their time
on the series, and that it takes about 5-10 minutes per patch to
review, merge and for others to rework existing series.  So let's say
it's cost a person-year of a relatively junior engineer producing the
patches and say 100h of review and application time.  The latter is
likely the big-ticket item, because reviewer time is what we have in
least supply in the kernel (even though the producer time is some 20x
larger).

>  Plus we need to take into account the future mistakes that it might
> prevent, too. So even if there were zero problems found so far, it is
> still a positive change.

Well, the question I was asking is if it's worth the cost which I've
tried to outline above.

> I would agree if these changes were high risk, though; but they are
> almost trivial.

It's not about the risk of the changes, it's about the cost of
implementing them.  Even if you discount the producer time (which
someone gets to pay for and which, if I were the engineering manager,
I'd be unhappy about), the review/merge/rework time is pretty
significant in exchange for six minor bug fixes.  Fine: when a new
compiler warning comes along, it's certainly reasonable to see whether
we can benefit from it, and the fact that the compiler people think
it's worthwhile is enough evidence to assume so initially.  But at some
point you have to ask whether that assumption is supported by the
evidence we've accumulated over the time we've been using the warning.
And if the evidence doesn't support it, perhaps it is time to stop the
experiment.

James




From xen-devel-bounces@lists.xenproject.org Sun Nov 22 22:55:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 22:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33538.64614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgyFx-0001vp-BW; Sun, 22 Nov 2020 22:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33538.64614; Sun, 22 Nov 2020 22:54:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgyFx-0001vi-8N; Sun, 22 Nov 2020 22:54:53 +0000
Received: by outflank-mailman (input) for mailman id 33538;
 Sun, 22 Nov 2020 22:54:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TnN1=E4=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1kgyFv-0001vd-QP
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:54:51 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c37dab44-18c0-4949-8564-38411d5c7c86;
 Sun, 22 Nov 2020 22:54:51 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id BF98E29DB3;
 Sun, 22 Nov 2020 17:54:47 -0500 (EST)
X-Inumbo-ID: c37dab44-18c0-4949-8564-38411d5c7c86
Date: Mon, 23 Nov 2020 09:54:48 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
    Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    linux-kernel <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org, 
    amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
    ceph-devel@vger.kernel.org, cluster-devel@redhat.com, 
    coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com, 
    drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
    GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
    intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
    keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
    linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-arm-msm@vger.kernel.org, linux-atm-general@lists.sourceforge.net, 
    linux-block@vger.kernel.org, linux-can@vger.kernel.org, 
    linux-cifs@vger.kernel.org, 
    Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, 
    Ext4 Developers List <linux-ext4@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org, 
    linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org, 
    linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org, 
    linux-ide@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-input <linux-input@vger.kernel.org>, linux-integrity@vger.kernel.org, 
    linux-mediatek@lists.infradead.org, 
    Linux Media Mailing List <linux-media@vger.kernel.org>, 
    linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>, 
    linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
    linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
    linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
    linux-security-module@vger.kernel.org, 
    linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
    linux-watchdog@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    Network Development <netdev@vger.kernel.org>, 
    netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org, 
    op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com, 
    patches@opensource.cirrus.com, rds-devel@oss.oracle.com, 
    reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org, 
    selinux@vger.kernel.org, target-devel@vger.kernel.org, 
    tipc-discussion@lists.sourceforge.net, 
    usb-storage@lists.one-eyed-alien.net, 
    virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org, 
    Nick Desaulniers <ndesaulniers@google.com>, 
    Nathan Chancellor <natechancellor@gmail.com>, 
    Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
In-Reply-To: <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
Message-ID: <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII


On Sun, 22 Nov 2020, Miguel Ojeda wrote:

> 
> It isn't that much effort, is it? Plus we need to take into account 
> the future mistakes that it might prevent, too.

We should also take into account optimism about future improvements in 
tooling.

> So even if there were zero problems found so far, it is still a positive 
> change.
> 

It is if you want to spin it that way.

> I would agree if these changes were high risk, though; but they are 
> almost trivial.
> 

This is trivial:

 case 1:
 	this();
+	fallthrough;
 case 2:
 	that();

But what we inevitably get is changes like this:

 case 3:
 	this();
+	break;
 case 4:
 	hmmm();

Why? Mainly to silence the compiler. Also because the patch author argued 
successfully that they had found a theoretical bug, often in mature code.

But is anyone keeping score of the regressions? If unreported bugs count, 
what about unreported regressions?

> Cheers,
> Miguel
> 


From xen-devel-bounces@lists.xenproject.org Sun Nov 22 23:04:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 22 Nov 2020 23:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33559.64647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgyPZ-0003KZ-V5; Sun, 22 Nov 2020 23:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33559.64647; Sun, 22 Nov 2020 23:04:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgyPZ-0003KS-S7; Sun, 22 Nov 2020 23:04:49 +0000
Received: by outflank-mailman (input) for mailman id 33559;
 Sun, 22 Nov 2020 23:04:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08CP=E4=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kgyPX-0003KN-80
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 23:04:48 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f3f53ba-5a21-4a66-917e-69c99630a49f;
 Sun, 22 Nov 2020 23:04:42 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 5C1741280900;
 Sun, 22 Nov 2020 15:04:41 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id PvG_3ynFL_Uj; Sun, 22 Nov 2020 15:04:41 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 9178D12808F6;
 Sun, 22 Nov 2020 15:04:37 -0800 (PST)
X-Inumbo-ID: 1f3f53ba-5a21-4a66-917e-69c99630a49f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606086281;
	bh=ampKVWKUqLiKYyObj0dhEgltdPGbsuliUrstEBadWMw=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=WmZvrZ8SISP4O7CkmRRwRn7Ww4EqbFeoj9AudkGWHrTHPvBGVyYGXPtxxL5/3UBwZ
	 KEGMUiR7FBhAVO42W5uBkyouydambEWUMRvvMR32eyWutkJh8vdHwfKrPde3Z6lPQr
	 zwZuERjUvzNlbmlNByqn4M9h7sLDVk7BBiQeo3h4=
Message-ID: <c3371b7c15ed30b92e9bb8609ff65bdaa0ef61fa.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Finn Thain <fthain@telegraphics.com.au>, Miguel Ojeda
	 <miguel.ojeda.sandonis@gmail.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
 "Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel
 <linux-kernel@vger.kernel.org>,  alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org,  bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org,  cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, Linux ARM
 <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, Linux Crypto Mailing
 List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,  Ext4 Developers List
 <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org, Linux
 Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless
 <linux-wireless@vger.kernel.org>,  Network Development
 <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org,  op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com,  patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com,  reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org,  selinux@vger.kernel.org,
 target-devel@vger.kernel.org,  tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org,  linux-hardening@vger.kernel.org, Nick
 Desaulniers <ndesaulniers@google.com>,  Nathan Chancellor
 <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, Joe Perches
 <joe@perches.com>
Date: Sun, 22 Nov 2020 15:04:36 -0800
In-Reply-To: <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
	 <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-23 at 09:54 +1100, Finn Thain wrote:
> But is anyone keeping score of the regressions? If unreported bugs
> count, what about unreported regressions?

Well, I was curious about the former (obviously no tool will tell me
about the latter), so I asked git what patches had a fall-through
series named in a fixes tag and these three popped out:

9cf51446e686 bpf, powerpc: Fix misuse of fallthrough in bpf_jit_comp()
6a9dc5fd6170 lib: Revert use of fallthrough pseudo-keyword in lib/
91dbd73a1739 mips/oprofile: Fix fallthrough placement

I don't think any of these is fixing a significant problem, but they
did cost someone time and trouble to investigate.

James




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 00:00:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 00:00:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33573.64664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgzHS-0001b0-LB; Mon, 23 Nov 2020 00:00:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33573.64664; Mon, 23 Nov 2020 00:00:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kgzHS-0001at-Gz; Mon, 23 Nov 2020 00:00:30 +0000
Received: by outflank-mailman (input) for mailman id 33573;
 Mon, 23 Nov 2020 00:00:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgzHQ-0001al-S4; Mon, 23 Nov 2020 00:00:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgzHQ-0002yP-GS; Mon, 23 Nov 2020 00:00:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kgzHQ-0003iW-4z; Mon, 23 Nov 2020 00:00:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kgzHQ-0005HE-4S; Mon, 23 Nov 2020 00:00:28 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0IfIejsVe0URlg8wx9+QfhpjMdSVOUZEgH6fG2IURbM=; b=REtDbOPV/c7yNczIrPr1r5BRLF
	Ee0BMySnVIn/7EKrQ3eUQiabujIzZTwjdD67DotX1ukZlglQ36IvO4P48m0b5sGEbddJyRge5V6Gv
	RRiYt+VJOf5YV5aJl6Hsw8xTk+qdryNQ4AxAYdHsmy7v4GG9VMz5ujoKpxLKgtEEaWT4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156945-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156945: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e3a232cccd2445e5d9e607a65a78cdbc33ff8a0f
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 00:00:28 +0000

flight 156945 qemu-mainline real [real]
flight 156952 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156945/
http://logs.test-lab.xenproject.org/osstest/logs/156952/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e3a232cccd2445e5d9e607a65a78cdbc33ff8a0f
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   94 days
Failing since        152659  2020-08-21 14:07:39 Z   93 days  198 attempts
Testing same since   156925  2020-11-21 14:09:11 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67422 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 00:54:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 00:54:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33585.64685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh07M-0006oQ-5a; Mon, 23 Nov 2020 00:54:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33585.64685; Mon, 23 Nov 2020 00:54:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh07M-0006oJ-1r; Mon, 23 Nov 2020 00:54:08 +0000
Received: by outflank-mailman (input) for mailman id 33585;
 Mon, 23 Nov 2020 00:54:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eYGc=E5=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kh07K-0006oE-Ik
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 00:54:06 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.118])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c962261-e1a7-4c4a-8f70-940e7b13b1b2;
 Mon, 23 Nov 2020 00:54:05 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay05.hostedemail.com (Postfix) with ESMTP id 2D37318029125;
 Mon, 23 Nov 2020 00:54:05 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf02.hostedemail.com (Postfix) with ESMTPA;
 Mon, 23 Nov 2020 00:53:59 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=eYGc=E5=perches.com=joe@srs-us1.protection.inumbo.net>)
	id 1kh07K-0006oE-Ik
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 00:54:06 +0000
X-Inumbo-ID: 6c962261-e1a7-4c4a-8f70-940e7b13b1b2
Received: from smtprelay.hostedemail.com (unknown [216.40.44.118])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6c962261-e1a7-4c4a-8f70-940e7b13b1b2;
	Mon, 23 Nov 2020 00:54:05 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net [216.40.38.60])
	by smtprelay05.hostedemail.com (Postfix) with ESMTP id 2D37318029125;
	Mon, 23 Nov 2020 00:54:05 +0000 (UTC)
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 2,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:973:982:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1540:1593:1594:1711:1730:1747:1777:1792:2393:2553:2559:2562:2828:2911:3138:3139:3140:3141:3142:3352:3622:3865:3866:3867:3868:3870:3871:3872:4250:4321:4425:5007:6119:6691:6742:6743:7903:10004:10400:10848:11026:11232:11658:11914:12296:12297:12555:12740:12760:12895:13069:13161:13229:13311:13357:13439:14659:14721:21080:21433:21627:21740:30041:30054:30090:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: steam95_4513bd127361
X-Filterd-Recvd-Size: 3177
Received: from XPS-9350.home (unknown [47.151.128.180])
	(Authenticated sender: joe@perches.com)
	by omf02.hostedemail.com (Postfix) with ESMTPA;
	Mon, 23 Nov 2020 00:53:59 +0000 (UTC)
Message-ID: <21826b6d513c4d9ccc795179c1edb0df2361d870.camel@perches.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
From: Joe Perches <joe@perches.com>
To: Finn Thain <fthain@telegraphics.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>, Tom Rix
 <trix@redhat.com>, Matthew Wilcox <willy@infradead.org>, 
 clang-built-linux@googlegroups.com, linux-hyperv@vger.kernel.org, 
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, 
 tboot-devel@lists.sourceforge.net, kvm@vger.kernel.org, 
 linux-crypto@vger.kernel.org, linux-acpi@vger.kernel.org, devel@acpica.org,
  amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 intel-gfx@lists.freedesktop.org, netdev@vger.kernel.org, 
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-scsi@vger.kernel.org, linux-wireless@vger.kernel.org, 
 ibm-acpi-devel@lists.sourceforge.net, platform-driver-x86@vger.kernel.org, 
 linux-usb@vger.kernel.org, linux-omap@vger.kernel.org, 
 linux-fbdev@vger.kernel.org, ecryptfs@vger.kernel.org, 
 linux-fsdevel@vger.kernel.org, cluster-devel@redhat.com, 
 linux-mtd@lists.infradead.org, keyrings@vger.kernel.org, 
 netfilter-devel@vger.kernel.org, coreteam@netfilter.org, 
 alsa-devel@alsa-project.org, bpf@vger.kernel.org, 
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org, 
 patches@opensource.cirrus.com
Date: Sun, 22 Nov 2020 16:53:58 -0800
In-Reply-To: <alpine.LNX.2.23.453.2011230810210.7@nippy.intranet>
References: <20201121165058.1644182-1-trix@redhat.com>
	         <20201122032304.GE4327@casper.infradead.org>
	         <ddb08a27-3ca1-fb2e-d51f-4b471f1a56a3@redhat.com>
	         <20201122145635.GG4327@casper.infradead.org>
	         <0819ce06-c462-d4df-d3d9-14931dc5aefc@redhat.com>
	 <751803306cd957d0e7ef6a4fc3dbf12ebceaba92.camel@HansenPartnership.com>
	 <dec07021e7fc11a02b14c98b713ae2c6e2a4ca00.camel@perches.com>
	 <alpine.LNX.2.23.453.2011230810210.7@nippy.intranet>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-23 at 09:33 +1100, Finn Thain wrote:
> On Sun, 22 Nov 2020, Joe Perches wrote:

> > But provably correct conversions IMO _should_ be done and IMO churn 
> > considerations should generally have less importance.
[]
> Moreover, the patch review workload for skilled humans is being generated 
> by the automation, which is completely backwards: the machine is supposed 
> to be helping.

Which is why provable correctness matters.

Coccinelle transforms can be, but are not necessarily, provably correct.

The _show transforms done via the sysfs_emit_dev.cocci script are correct,
as in commit aa838896d87a ("drivers core: Use sysfs_emit and sysfs_emit_at
for show(device *...) functions").

Worthwhile?  A different question, but I think yes, as it reduces the
overall space of remaining sprintf overrun possibilities.
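For context, the conversion under discussion rewrites sprintf() calls in
sysfs show() handlers to sysfs_emit(), which bounds the write to PAGE_SIZE.
A simplified sketch of the kind of Coccinelle semantic patch involved
(illustrative only; the actual sysfs_emit_dev.cocci script covers many
more call patterns) might look like:

```cocci
// Sketch: convert sprintf() to sysfs_emit() in device show() handlers.
// sysfs_emit() enforces the PAGE_SIZE bound that sysfs guarantees,
// removing the possibility of an sprintf overrun in these functions.
@@
identifier d_show;
identifier dev, attr, buf;
@@

ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	<...
	return
-		sprintf(buf,
+		sysfs_emit(buf,
	...);
	...>
}
```

Because the transform only fires on functions with the exact show()
signature, where the output buffer is known to be a full sysfs page, the
substitution is behavior-preserving by construction -- which is what makes
this class of conversion mechanically checkable rather than just plausible.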




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 01:25:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 01:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33592.64697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh0ba-000664-L6; Mon, 23 Nov 2020 01:25:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33592.64697; Mon, 23 Nov 2020 01:25:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh0ba-00065v-CG; Mon, 23 Nov 2020 01:25:22 +0000
Received: by outflank-mailman (input) for mailman id 33592;
 Mon, 23 Nov 2020 01:25:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh0bY-00065n-JN; Mon, 23 Nov 2020 01:25:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh0bY-0000Pw-BL; Mon, 23 Nov 2020 01:25:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh0bY-0005mc-1b; Mon, 23 Nov 2020 01:25:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kh0bY-00023J-14; Mon, 23 Nov 2020 01:25:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kh0bY-00065n-JN; Mon, 23 Nov 2020 01:25:20 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ow5dmMNAztC0RqU8XgG9bA0YeygXNk7Jl5wKWfP+SOQ=; b=GFJ+k/o7E3uRknEd4BsOux4hfj
	XftxhAU+YCXW4PvjI8e59yxIarNZTJNRuVl1G+gSrwZ2Vse+HN0MhiOWTRn1U40YkUgQEjYdrF3bA
	NNLfljYrr5Nwi8rE0Q3/NfyqoB1XpB/6WYxl4/J+wL8vPH4KVRSJHb2TAaUmLI2fdrYo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156948-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156948: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-saverestore.2:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a349e4c659609fd20e4beea89e5c4a4038e33a95
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 01:25:20 +0000

flight 156948 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop    fail in 156937 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 156929 pass in 156948
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 156937 pass in 156929
 test-arm64-arm64-xl           8 xen-boot         fail in 156937 pass in 156948
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 156937 pass in 156948
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 156937 pass in 156948
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 156937 pass in 156948
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 156929
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156929
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 156937
 test-amd64-amd64-amd64-pvgrub 18 guest-saverestore.2       fail pass in 156937
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156937

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 156929 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 156937 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                a349e4c659609fd20e4beea89e5c4a4038e33a95
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  114 days
Failing since        152366  2020-08-01 20:49:34 Z  113 days  190 attempts
Testing same since   156929  2020-11-21 19:09:40 Z    1 days    3 attempts

------------------------------------------------------------
3565 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 681922 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 05:22:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 05:22:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33606.64718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh4Ic-0006H8-2b; Mon, 23 Nov 2020 05:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33606.64718; Mon, 23 Nov 2020 05:22:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh4Ib-0006Gz-Rh; Mon, 23 Nov 2020 05:22:01 +0000
Received: by outflank-mailman (input) for mailman id 33606;
 Mon, 23 Nov 2020 05:22:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xOkN=E5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kh4Ia-0006Gu-C2
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 05:22:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3290cf00-ff34-4a28-ab2e-3f70a4a07943;
 Mon, 23 Nov 2020 05:21:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C30A0AC0C;
 Mon, 23 Nov 2020 05:21:57 +0000 (UTC)
X-Inumbo-ID: 3290cf00-ff34-4a28-ab2e-3f70a4a07943
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606108917; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=bnlvBaRBDOwhFV9Frw/ljad6M3emL9wmQHU9HFSgjyg=;
	b=hp03zaAa6g9jzr7ZFVJcK9sSTowRPcxkxQEe9lmU/Iow1G3DAlxUFBpvBVKmZJWewAUzYp
	vGYrU8C7R3DG7nHlXCVKch/x7KxTgOIiJvaA3kRKTaVRHf6CnXlZA3TTNnQV9qi0tFvy5H
	GAR5vu9087dpuD1Mp71uPoXjdE0aTbI=
To: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel
 <xen-devel@lists.xenproject.org>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
 <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
Message-ID: <894a4cec-8426-4152-d391-474711040c5b@suse.com>
Date: Mon, 23 Nov 2020 06:21:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="KjjwiwVOCYamlMPkg53LvZQjNAFZkdRhT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--KjjwiwVOCYamlMPkg53LvZQjNAFZkdRhT
Content-Type: multipart/mixed; boundary="PYEGjNEoNCwI2vygBQpmaIoVRT2ThJktq";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel
 <xen-devel@lists.xenproject.org>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <894a4cec-8426-4152-d391-474711040c5b@suse.com>
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use
 popf
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-6-jgross@suse.com>
 <20201120115943.GD3021@hirez.programming.kicks-ass.net>
 <eb05e878-6334-8d19-496b-6572df67fc56@suse.com>
 <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>
In-Reply-To: <CALCETrXOGhXoOJpzhAMqD7iibi09WzbGk9SWVH7JzA=d5uarWA@mail.gmail.com>

--PYEGjNEoNCwI2vygBQpmaIoVRT2ThJktq
Content-Type: multipart/mixed;
 boundary="------------61EFA80FF3AEF2DEA88EC6CD"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------61EFA80FF3AEF2DEA88EC6CD
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 22.11.20 22:44, Andy Lutomirski wrote:
> On Sat, Nov 21, 2020 at 10:55 PM Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 20.11.20 12:59, Peter Zijlstra wrote:
>>> On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
>>>> +static __always_inline void arch_local_irq_restore(unsigned long flags)
>>>> +{
>>>> +    if (!arch_irqs_disabled_flags(flags))
>>>> +            arch_local_irq_enable();
>>>> +}
>>>
>>> If someone were to write horrible code like:
>>>
>>>        local_irq_disable();
>>>        local_irq_save(flags);
>>>        local_irq_enable();
>>>        local_irq_restore(flags);
>>>
>>> we'd be up some creek without a paddle... now I don't _think_ we have
>>> genius code like that, but I'd feel safer if we can haz an assertion in
>>> there somewhere...
>>>
>>> Maybe something like:
>>>
>>> #ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
>>>        WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
>>> #endif
>>>
>>> At the end?
>>
>> I'd like to, but using WARN_ON_ONCE() in include/asm/irqflags.h sounds
>> like a perfect recipe for include dependency hell.
>>
>> We could use a plain asm("ud2") instead.
>
> How about out-of-lining it:
>
> #ifdef CONFIG_DEBUG_ENTRY
> extern void warn_bogus_irqrestore();
> #endif
>
> static __always_inline void arch_local_irq_restore(unsigned long flags)
> {
>         if (!arch_irqs_disabled_flags(flags)) {
>                 arch_local_irq_enable();
>         } else {
> #ifdef CONFIG_DEBUG_ENTRY
>                 if (unlikely(arch_local_irq_save() & X86_EFLAGS_IF))
>                      warn_bogus_irqrestore();
> #endif
>         }
> }
>

This couldn't be a WARN_ON_ONCE() then (or it would be a catch all).
Another approach might be to open-code the WARN_ON_ONCE(), like:

#ifdef CONFIG_DEBUG_ENTRY
extern void warn_bogus_irqrestore(bool *once);
#endif

static __always_inline void arch_local_irq_restore(unsigned long flags)
{
	if (!arch_irqs_disabled_flags(flags))
		arch_local_irq_enable();
#ifdef CONFIG_DEBUG_ENTRY
	else {
		static bool once;

		if (unlikely(arch_local_irq_save() & X86_EFLAGS_IF))
			warn_bogus_irqrestore(&once);
	}
#endif
}


Juergen

--------------61EFA80FF3AEF2DEA88EC6CD
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"


--------------61EFA80FF3AEF2DEA88EC6CD--

--PYEGjNEoNCwI2vygBQpmaIoVRT2ThJktq--

--KjjwiwVOCYamlMPkg53LvZQjNAFZkdRhT
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"


--KjjwiwVOCYamlMPkg53LvZQjNAFZkdRhT--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 05:22:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 05:22:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33540.64730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh4J8-0006N8-Df; Mon, 23 Nov 2020 05:22:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33540.64730; Mon, 23 Nov 2020 05:22:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh4J8-0006N1-AG; Mon, 23 Nov 2020 05:22:34 +0000
Received: by outflank-mailman (input) for mailman id 33540;
 Sun, 22 Nov 2020 22:55:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWd+=E4=zal.aero=leo.krueger@srs-us1.protection.inumbo.net>)
 id 1kgyGK-00020K-Rl
 for xen-devel@lists.xenproject.org; Sun, 22 Nov 2020 22:55:17 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.116]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f0de731-60a7-4445-ad0a-737534314de0;
 Sun, 22 Nov 2020 22:55:11 +0000 (UTC)
Received: from HE1PR05MB4794.eurprd05.prod.outlook.com (2603:10a6:7:9b::11) by
 HE1PR05MB4538.eurprd05.prod.outlook.com (2603:10a6:7:9d::24) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3564.28; Sun, 22 Nov 2020 22:55:02 +0000
Received: from HE1PR05MB4794.eurprd05.prod.outlook.com
 ([fe80::7d6a:df13:ca6f:b173]) by HE1PR05MB4794.eurprd05.prod.outlook.com
 ([fe80::7d6a:df13:ca6f:b173%5]) with mapi id 15.20.3564.033; Sun, 22 Nov 2020
 22:55:02 +0000
X-Inumbo-ID: 0f0de731-60a7-4445-ad0a-737534314de0
From: Leo Krueger <leo.krueger@zal.aero>
To: Julien Grall <julien@xen.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
CC: Peng Fan <peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>,
	Cornelia Bruelhart <cornelia.bruelhart@zal.aero>,
	"oleksandr_andrushchenko@epam.com" <oleksandr_andrushchenko@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Subject: Re: Re: Re: Re: Re: Xen data from meta-virtualization layer
Thread-Topic: Re: Re: Re: Re: Xen data from meta-virtualization layer
Thread-Index:
 AdaxGoYJbW1LEL/jTUmK3sZDIRyPGAA9xNCAASdllGAAABZN8AAYnVUAACWhGAAAC0bcAAAAU02AAAYKpqgABBfLAADlnDFgADm6ywAAK1PyQAAEq7SAABo3VQAA3unxYA==
Date: Sun, 22 Nov 2020 22:55:02 +0000
Message-ID:
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
References:
 <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <HE1PR05MB47941E23CE053CE72F18867C8BEA0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
 <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
In-Reply-To: <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
Content-Type: multipart/mixed;
	boundary="_003_HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0HE1PR05MB4794eurp_"
MIME-Version: 1.0

--_003_HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0HE1PR05MB4794eurp_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

SGkgSnVsaWVuLA0KDQpmaW5hbGx5IEkgY291bGQgdHJ5IG91dCB3aGF0IHlvdSBzdWdnZXN0ZWQs
IHBsZWFzZSBmaW5kIG15IGFuc3dlcnMgaW5saW5lLg0KDQo+IC0tLS0tVXJzcHLDvG5nbGljaGUg
TmFjaHJpY2h0LS0tLS0NCj4gVm9uOiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPg0KPiBH
ZXNlbmRldDogTWl0dHdvY2gsIDE4LiBOb3ZlbWJlciAyMDIwIDEzOjI0DQo+IEFuOiBTdGVmYW5v
IFN0YWJlbGxpbmkgPHN0ZWZhbm8uc3RhYmVsbGluaUB4aWxpbnguY29tPjsgTGVvIEtydWVnZXIN
Cj4gPGxlby5rcnVlZ2VyQHphbC5hZXJvPg0KPiBDYzogUGVuZyBGYW4gPHBlbmcuZmFuQG54cC5j
b20+OyBicnVjZWFAeGlsaW54LmNvbTsgQ29ybmVsaWEgQnJ1ZWxoYXJ0DQo+IDxjb3JuZWxpYS5i
cnVlbGhhcnRAemFsLmFlcm8+OyBvbGVrc2FuZHJfYW5kcnVzaGNoZW5rb0BlcGFtLmNvbTsgeGVu
LQ0KPiBkZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZzsgQmVydHJhbmQuTWFycXVpc0Bhcm0uY29t
DQo+IEJldHJlZmY6IFJlOiBBVzogQVc6IEFXOiBBVzogWGVuIGRhdGEgZnJvbSBtZXRhLXZpcnR1
YWxpemF0aW9uIGxheWVyDQo+IA0KPiBIaSwNCj4gDQo+IE9uIDE3LzExLzIwMjAgMjM6NTMsIFN0
ZWZhbm8gU3RhYmVsbGluaSB3cm90ZToNCj4gPiBBZGRpbmcgQmVydHJhbmQsIE9sZWtzYW5kciwg
SnVsaWVuLCBhbmQgb3RoZXJzIC0tIHRoZXkgaGF2ZSBhIG1vcmUNCj4gPiByZWNlbnQgZXhwZXJp
ZW5jZSB3aXRoIEdJQ3YzIElUUyB0aGFuIG1lIGFuZCBtaWdodCBiZSBhYmxlIHRvIGhlbHAuDQo+
ID4gSSBhbSBhdHRhY2hpbmcgdGhlIGRldmljZSB0cmVlIExlbyBzZW50IGEgZmV3IGRheXMgYWdv
IGZvciByZWZlcmVuY2UuDQo+ID4NCj4gPg0KPiA+IFR5cGljYWxseSB3aGVuIHlvdSBjYW4gc2V0
IHRoZSBldGhlcm5ldCBsaW5rIHVwIGFuZCBubyBwYWNrZXRzIGFyZQ0KPiA+IGV4Y2hhbmdlZCBp
dCBpcyBiZWNhdXNlIG9mIGEgbWlzc2luZyBpbnRlcnJ1cHQuIEluIHRoaXMgY2FzZSBhIG1pc3Np
bmcNCj4gPiBNU0kuDQo+ID4NCj4gPiBCZXJ0cmFuZCwgSSBiZWxpZXZlIHlvdSB0cmllZCB0aGUg
R0lDIElUUyBkcml2ZXIgd2l0aCBQQ0kgZGV2aWNlcw0KPiA+IHJlY2VudGx5LiBJdCBpcyBleHBl
Y3RlZCB0byB3b3JrIGNvcnJlY3RseSB3aXRoIE1TSXMgaW4gRG9tMCwgcmlnaHQ/DQo+IA0KPiBP
U1NUZXN0IGhhcyBzb21lIGhhcmR3YXJlIChlLmcuIFRodW5kZXItWCkgd2hlcmUgSVRTIGlzIHJl
cXVpcmVkIHRvIGJvb3QNCj4gRG9tMC4gSSBoYXZlbid0IHNlZW4gYW55IGZhaWx1cmUgb24gcmVj
ZW50IFhlbi4gV2UgYXJlIHRlc3RpbmcgNC4xMSBhbmQNCj4gb253YXJkcyBvbiBUaHVuZGVyLVgu
DQo+IA0KPiBIb3dldmVyLCBpdCBtYXkgYmUgcG9zc2libGUgdGhhdCBzb21lIG1vcmUgd29yayBp
cyBuZWNlc3NhcnkgZm9yIG90aGVyDQo+IGhhcmR3YXJlIChlLmcuIHdvcmthcm91bmQsIG1pc3Np
bmcgY29kZS4uLikuIFNlZSBtb3JlIGJlbG93Lg0KPiANCj4gPg0KPiA+DQo+ID4NCj4gPiBPbiBU
dWUsIDE3IE5vdiAyMDIwLCBMZW8gS3J1ZWdlciB3cm90ZToNCj4gPj4gSGksDQo+ID4+DQo+ID4+
IEkgZW5hYmxlZCBDT05GSUdfSEFTX0lUUyAod2hhdCBhIHN0dXBpZCBtaXN0YWtlIGJ5IG1lIHRv
IG5vdCBzZXQgaXQNCj4gPj4gYmVmb3JlLi4uKSBidXQgdGhlbiBoYWQgdG8gYWRkIHRoZSBmb2xs
b3dpbmcgbm9kZSB0byBteSBkZXZpY2UgdHJlZQ0KPiA+Pg0KPiA+PiAJZ2ljX2xwaV9iYXNlOiBz
eXNjb25AMHg4MDAwMDAwMCB7DQo+ID4+IAkJY29tcGF0aWJsZSA9ICJnaWMtbHBpLWJhc2UiOw0K
PiANCj4gSSBjb3VsZG4ndCBmaW5kIHRoaXMgY29tcGF0aWJsZSBkZWZpbmVkL3VzZWQgaW4gTGlu
dXggNS4xMC1yYzQuIEBMZW8sIGNvdWxkDQo+IHlvdSBjbGFyaWZ5IHdoaWNoIGZsYXZvci92ZXJz
aW9uIG9mIExpbnV4IHlvdSBhcmUgdXNpbmc/DQoNCkl0IGlzIExpbnV4IDQuMTkgZnJvbSBZb2N0
byAoV2Fycm9yIHJlbGVhc2UpLiBYRU4gNC4xMy4yLg0KV2hpbGUgc2VhcmNoaW5nIGFyb3VuZCB0
aGUgSW50ZXJuZXQgZm9yIGFueSBzb2x1dGlvbiwgSSBjYW1lIGFjcm9zcyBbMF0gd2hpY2ggY29u
dGFpbmVkIHRoZSBnaWMtbHBpLWJhc2Ugbm9kZS4NClNvIEkganVzdCB0cmllZCBhZGRpbmcgaXQg
KHF1aXRlIGRlc3BlcmF0ZSBJIGtub3cpIGFuZCB2b2lsYSwgaXQgYXQgbGVhc3QgYnJvdWdodCBt
ZSBvbmUgc3RlcCBmdXJ0aGVyIChYRU4gZXhwb3NpbmcgdGhlIElUUykuLi4NCg0KPiANCj4gPj4g
CQlyZWcgPSA8MHgwIDB4ODAwMDAwMDAgMHgwIDB4MTAwMDAwPjsNCj4gPj4gCQltYXgtZ2ljLXJl
ZGlzdHJpYnV0b3JzID0gPDI+Ow0KPiA+PiAJfTsNCj4gPj4NCj4gPj4gdG8gc29tZWhvdyBjaGFu
Z2Ugc29tZXRoaW5nIGluIHJlZ2FyZCB0byB0aGUgSVRTIGFuZCBNU0kvTVNJLVgNCj4gPj4NCj4g
Pj4gKFhFTikgR0lDdjMgaW5pdGlhbGl6YXRpb246DQo+ID4+IChYRU4pICAgICAgIGdpY19kaXN0
X2FkZHI9MHgwMDAwMDAwNjAwMDAwMA0KPiA+PiAoWEVOKSAgICAgICBnaWNfbWFpbnRlbmFuY2Vf
aXJxPTI1DQo+ID4+IChYRU4pICAgICAgIGdpY19yZGlzdF9zdHJpZGU9MA0KPiA+PiAoWEVOKSAg
ICAgICBnaWNfcmRpc3RfcmVnaW9ucz0xDQo+ID4+IChYRU4pICAgICAgIHJlZGlzdHJpYnV0b3Ig
cmVnaW9uczoNCj4gPj4gKFhFTikgICAgICAgICAtIHJlZ2lvbiAwOiAweDAwMDAwMDA2MDQwMDAw
IC0gMHgwMDAwMDAwNjA4MDAwMA0KPiA+PiAoWEVOKSBHSUN2MzogdXNpbmcgYXQgbW9zdCA1NzM0
NCBMUElzIG9uIHRoZSBob3N0Lg0KPiA+PiAoWEVOKSBHSUN2MzogMjg4IGxpbmVzLCAoSUlEIDAw
MDExNDNiKS4NCj4gPj4gKFhFTikgR0lDdjM6IEZvdW5kIElUUyBAMHg2MDIwMDAwDQo+ID4+IChY
RU4pIHVzaW5nIG5vbi1jYWNoZWFibGUgSVRTIGNvbW1hbmQgcXVldWUNCj4gPj4gKFhFTikgR0lD
djM6IENQVTA6IEZvdW5kIHJlZGlzdHJpYnV0b3IgaW4gcmVnaW9uIDAgQDAwMDAwMDAwNDAwMWMw
MDANCj4gPj4NCj4gPj4gWyAgICAwLjAwMDAwMF0gR0lDdjM6IERpc3RyaWJ1dG9yIGhhcyBubyBS
YW5nZSBTZWxlY3RvciBzdXBwb3J0DQo+ID4+IFsgICAgMC4wMDAwMDBdIEdJQ3YzOiBubyBWTFBJ
IHN1cHBvcnQsIG5vIGRpcmVjdCBMUEkgc3VwcG9ydA0KPiA+PiBbICAgIDAuMDAwMDAwXSBJVFMg
W21lbSAweDA2MDIwMDAwLTB4MDYwM2ZmZmZdDQo+ID4+IFsgICAgMC4wMDAwMDBdIElUU0AweDAw
MDAwMDAwMDYwMjAwMDA6IGFsbG9jYXRlZCA2NTUzNiBEZXZpY2VzDQo+IEBkYzg4MDAwMCAoZmxh
dCwgZXN6IDgsIHBzeiA2NEssIHNociAxKQ0KPiA+PiBbICAgIDAuMDAwMDAwXSBJVFNAMHgwMDAw
MDAwMDA2MDIwMDAwOiBhbGxvY2F0ZWQgMzI3NjggSW50ZXJydXB0DQo+IENvbGxlY3Rpb25zIEBk
YzgyMDAwMCAoZmxhdCwgZXN6IDIsIHBzeiA2NEssIHNociAxKQ0KPiA+PiBbICAgIDAuMDAwMDAw
XSBHSUM6IHVzaW5nIExQSSBwcm9wZXJ0eSB0YWJsZSBAMHgwMDAwMDAwMGRjODMwMDAwDQo+ID4+
IFsgICAgMC4wMDAwMDBdIEdJQ3YzOiBDUFUwOiBmb3VuZCByZWRpc3RyaWJ1dG9yIDAgcmVnaW9u
DQo+IDA6MHgwMDAwMDAwMDA2MDQwMDAwDQo+ID4+IFsgICAgMC4wMDAwMDBdIENQVTA6IHVzaW5n
IExQSSBwZW5kaW5nIHRhYmxlIEAweDAwMDAwMDAwZGM4NDAwMDANCj4gPj4gLi4uDQo+ID4+IFsg
ICAgMC4wNDAwODBdIFBsYXRmb3JtIE1TSTogZ2ljLWl0cyBkb21haW4gY3JlYXRlZA0KPiA+PiBb
ICAgIDAuMDQwMTM2XSBQQ0kvTVNJOiAvaW50ZXJydXB0LWNvbnRyb2xsZXIvZ2ljLWl0cyBkb21h
aW4gY3JlYXRlZA0KPiA+PiBbICAgIDAuMDQwMTgxXSBmc2wtbWMgTVNJOiAvaW50ZXJydXB0LWNv
bnRyb2xsZXIvZ2ljLWl0cyBkb21haW4gY3JlYXRlZA0KPiA+Pg0KPiA+Pg0KPiA+PiBTdGlsbCBJ
IGFtIGVuZGluZyB1cCB3aXRoIHRoZSAiIEZhaWxlZCB0byBhZGQgLSBwYXNzdGhyb3VnaCBvciBN
U0kvTVNJLVgNCj4gbWlnaHQgZmFpbCEiIGxvZyBtZXNzYWdlcyBmb3Igc29tZSBQQ0kgZGV2aWNl
cywgYnV0IGF0IGxlYXN0IHRoZSBvbi1ib2FyZA0KPiBldGhlcm5ldCBwb3J0cyAoZnNsX2VuZXRj
ICkgYXJlIGluaXRpYWxpemVkLg0KPiA+PiBJIGNhbiBzZXQgdGhlIGxpbmsgdXAgYW5kIGEgbGlu
ayBpcyBzdWNjZXNzZnVsbHkgZXN0YWJsaXNoZWQuDQo+IA0KPiBUaGlzIG1lc3NhZ2UgaXMgbm9y
bWFsLiBYZW4gb24gQXJtIGlzIG5vdCB5ZXQgYXdhcmUgb2YgUENJIGRldmljZXMgYW5kDQo+IHRo
ZXJlZm9yZSB0aGUgaHlwZXJjYWxscyB0byBhZGQgUENJIGRldmljZXMgd2lsbCByZXR1cm4gLUVP
UE5PVFNVUFAuDQo+IA0KPiBIb3dldmVyLCB0aGlzIGlzIG5vdCBhbiBpc3N1ZSBiZWNhdXNlIHRo
ZSB2aXJ0dWFsIElUUyBpbXBsZW1lbnRhdGlvbiB3aWxsDQo+IGFsbG93IGRvbTAgdG8gY29uZmln
dXJlIGFueSBkZXZpY2VzLg0KPiANCj4gPj4NCj4gPj4gQnV0ICghKSBJIGNhbm5vdCByZWNlaXZl
IG9yIHRyYW5zbWl0IGFueXRoaW5nIChubyBlcnJvciBtZXNzYWdlLi4uKSBhbmQNCj4gdGhlcmUg
c2VlbSB0byBiZSBubyBpbnRlcnJ1cHRzOg0KPiA+Pg0KPiA+PiAyOTogICAgICAgICAgMCAgIElU
Uy1NU0kgICAxIEVkZ2UgICAgICBnYmUwLXJ4dHgwDQo+ID4+ICAgMzI6ICAgICAgICAgIDAgICBJ
VFMtTVNJIDgxOTIgRWRnZSAgICAgIHB0cF9xb3JpcQ0KPiA+Pg0KPiA+PiAoZnJvbSAvcHJvYy9p
bnRlcnJ1cHRzKS4NCj4gPj4NCj4gPj4gQW55IGlkZWEgb24gdGhpcyBvbmU/IEkga2VlcCBkaWdn
aW5nIGFuZCBjaGVja2luZyB3aGV0aGVyIG15IGRldmljZSB0cmVlDQo+IG5lZWRzIHNvbWUgYWRk
aXRpb25hbCBmaXhlcy4NCj4gDQo+IENhbiB5b3UgYXBwbHkgcGF0Y2ggWzFdIGFuZCBwcm92aWRl
IHRoZSBsb2dzPyBUaGlzIHdpbGwgZHVtcCBtb3JlDQo+IGluZm9ybWF0aW9uIGFib3V0IHRoZSBj
b21tYW5kIHJlY2VpdmVkIGJ5IHRoZSB2SVRTIGFuZCB0aGUgb25lIHNlbmQgdG8NCj4gdGhlIGhv
c3QgSVRTLg0KDQpGb3IgWEVOIDQuMTMuMiBJIGhhZCB0byBhZGFwdCB5b3VyIHBhdGNoIHNsaWdo
dGx5IFsxXSwgc2VlIGJlbG93ICh5ZXMgSSBrbm93LCBxdWl0ZSB1Z2x5IGluIHBhcnRzKS4NCkZp
bmQgYXR0YWNoZWQgdGhlIGJvb3QgbG9nIGFuZCBhbiBvdXRwdXQgb2YgInhsIGRtZXNnIiB3aGlj
aCBpcyB0cnVuY2F0ZWQgZHVlIHRvIHRoZSBsYXJnZSBhbW91bnQgb2YgbWVzc2FnZXMuDQoNCldo
ZW4gZW5hYmxpbmcgdGhlIG5ldHdvcmsgaW50ZXJmYWNlIChnYmUwKSwgdGhlIGZvbGxvd2luZyBv
dXRwdXQgaXMgdmlzaWJsZToNCg0Kcm9vdEBrb250cm9uLXNhbDI4On4jIGlwIGxpbmsgc2V0IHVw
IGRldiBnYmUwDQooWEVOKSB2Z2ljLXYzLWl0cy5jOjkwMjpkMHYwIHZJVFMgIGNtZCAweDBjOiAw
MDAwMDAxNzAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwDQooWEVOKSB2Z2ljLXYzLWl0cy5jOjkwMjpkMHYwIHZJVFMgIGNtZCAweDA1OiAw
MDAwMDAwMDAwMDAwMDA1IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwDQpbICAgMzQuMDM0NTk4XSBBdGhlcm9zIDgwMzEgZXRoZXJuZXQgMDAwMDowMDow
MC4zOjA1OiBhdHRhY2hlZCBQSFkgZHJpdmVyIFtBdGhlcm9zIDgwMzEgZXRoZXJuZXRdIChtaWlf
YnVzOnBoeV9hZGRyPTAwMDA6MDA6MDAuMzowNSwgaXJxPVBPTEwpDQpbICAgMzQuMDQxMTExXSA4
MDIxcTogYWRkaW5nIFZMQU4gMCB0byBIVyBmaWx0ZXIgb24gZGV2aWNlIGdiZTANClsgICAzNC4w
NDEyMDldIElQdjY6IEFERFJDT05GKE5FVERFVl9VUCk6IGdiZTA6IGxpbmsgaXMgbm90IHJlYWR5
DQpyb290QGtvbnRyb24tc2FsMjg6fiMgWyAgIDM1LjA0MTk1MV0gZnNsX2VuZXRjIDAwMDA6MDA6
MDAuMCBnYmUwOiBMaW5rIGlzIERvd24NClsgICAzOC4xMTQ0MjZdIGZzbF9lbmV0YyAwMDAwOjAw
OjAwLjAgZ2JlMDogTGluayBpcyBVcCAtIDFHYnBzL0Z1bGwgLSBmbG93IGNvbnRyb2wgb2ZmDQpb
ICAgMzguMTE0NTA4XSBJUHY2OiBBRERSQ09ORihORVRERVZfQ0hBTkdFKTogZ2JlMDogbGluayBi
ZWNvbWVzIHJlYWR5DQoNCkRvZXMgdGhhdCB0ZWxsIHlvdSBhbnl0aGluZz8NCg0KPiANCj4gTm90
ZSB0aGF0IFhlbiB3aWxsIG5lZWQgdG8gYmUgYnVpbGQgd2l0aCBDT05GSUdfREVCVUc9eSBpbiBv
cmRlciB0byBnZXQNCj4gc29tZSBvZiB0aGUgbWVzc2FnZXMuDQo+IA0KPiBbLi4uXQ0KPiANCj4g
Pj4+PiBbwqDCoMKgIDAuMDAwMDAwXSBHSUN2MzogRGlzdHJpYnV0b3IgaGFzIG5vIFJhbmdlIFNl
bGVjdG9yIHN1cHBvcnQNCj4gPj4+Pg0KPiA+Pj4+IFvCoMKgwqAgMC4wMDAwMDBdIEdJQ3YzOiBu
byBWTFBJIHN1cHBvcnQsIG5vIGRpcmVjdCBMUEkgc3VwcG9ydA0KPiA+Pj4+DQo+ID4+Pj4gW8Kg
wqDCoCAwLjAwMDAwMF0gR0lDdjM6IENQVTA6IGZvdW5kIHJlZGlzdHJpYnV0b3IgMCByZWdpb24N
Cj4gPj4+PiAwOjB4MDAwMDAwMDAwNjA0MDAwMA0KPiA+Pj4NCj4gPj4+ICJubyBWTFBJIHN1cHBv
cnQiIGlzIHZlcnkgc3VzcGljaW91cywgaXQgbG9va3MgbGlrZSBEb20wIGRvZXNuJ3QNCj4gPj4+
IGZpbmQgYW55IElUUyBzdXBwb3J0Lg0KPiBWTFBJIGlzIGEgZmVhdHVyZSB0aGF0IHdhcyBpbnRy
b2R1Y2VkIGluIEdJQ3Y0IHRvIGRpcmVjdGx5IGluamVjdCBMUEkgaW4gdGhlDQo+IGd1ZXN0LiBT
byB0aGlzIGlzIG5vcm1hbCB0byBzZWUgdGhpcyBtZXNzYWdlIHdoZW4gcnVubmluZyBvbiBYZW4g
YmVjYXVzZQ0KPiB3ZSBhcmUgZ29pbmcgdG8gb25seSBleHBvc2UgYSBHSUN2MyB0byBhIGRvbWFp
biB1bnRpbCBhdCBsZWFzdCB3ZSBzdXBwb3J0DQo+IG5lc3RlZCB2aXJ0Lg0KPiANCj4gSG93ZXZl
ciwgeW91IHdlcmUgcmlnaHQgYWJvdXQgdGhhdCBYZW4gZGlkbid0IGV4cG9zZSB0aGUgSVRTIGJl
Y2F1c2UgdGhlDQo+IGZvbGxvd2luZyBsaW5lcyB3ZXJlIG1pc3Npbmc6DQo+IA0KPiBbICAgIDAu
MDAwMDAwXSBJVFNAMHgwMDAwMDAwMDA2MDIwMDAwOiBhbGxvY2F0ZWQgNjU1MzYgRGV2aWNlcyBA
ZGM4ODAwMDANCj4gKGZsYXQsIGVzeiA4LCBwc3ogNjRLLCBzaHIgMSkNCj4gDQo+IENoZWVycywN
Cg0KQmVzdCByZWdhcmRzLA0KTGVvDQoNCj4gDQo+IFsxXQ0KPiBkaWZmIC0tZ2l0IGEveGVuL2Fy
Y2gvYXJtL2dpYy12My1pdHMuYyBiL3hlbi9hcmNoL2FybS9naWMtdjMtaXRzLmMgaW5kZXgNCj4g
OTU1OGJhZDk2YWMzLi44YTBhMDIzMDhlNzQgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL2FybS9n
aWMtdjMtaXRzLmMNCj4gKysrIGIveGVuL2FyY2gvYXJtL2dpYy12My1pdHMuYw0KPiBAQCAtODcs
NiArODcsMTAgQEAgc3RhdGljIGludCBpdHNfc2VuZF9jb21tYW5kKHN0cnVjdCBob3N0X2l0cyAq
aHdfaXRzLA0KPiBjb25zdCB2b2lkICppdHNfY21kKQ0KPiAgICAgICAvKiBObyBJVFMgY29tbWFu
ZHMgZnJvbSBhbiBpbnRlcnJ1cHQgaGFuZGxlciAoYXQgdGhlIG1vbWVudCkuICovDQo+ICAgICAg
IEFTU0VSVCghaW5faXJxKCkpOw0KPiANCj4gKyAgICBwcmludGsoWEVOTE9HX1dBUk5JTkcsICJw
SVRTICBjbWQgMHglMDJseDogJTAxNmx4ICUwMTZseCAlMDE2bHgNCj4gJTAxNmx4XG4iLA0KPiAr
ICAgICAgICAgICBpdHNfY21kX2dldF9jb21tYW5kKGNvbW1hbmQpLA0KPiArICAgICAgICAgICBj
b21tYW5kWzBdLCBjb21tYW5kWzFdLCBjb21tYW5kWzJdLCBjb21tYW5kWzNdKTsNCj4gKw0KPiAg
ICAgICBzcGluX2xvY2soJmh3X2l0cy0+Y21kX2xvY2spOw0KPiANCj4gICAgICAgZG8gew0KPiBk
aWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL2dpYy12My1scGkuYyBiL3hlbi9hcmNoL2FybS9naWMt
djMtbHBpLmMgaW5kZXgNCj4gODY5YmM5N2ZhMWFhLi5lN2M1YmNkOGQ0MjMgMTAwNjQ0DQo+IC0t
LSBhL3hlbi9hcmNoL2FybS9naWMtdjMtbHBpLmMNCj4gKysrIGIveGVuL2FyY2gvYXJtL2dpYy12
My1scGkuYw0KPiBAQCAtMTgzLDcgKzE4MywxMCBAQCB2b2lkIGdpY3YzX2RvX0xQSSh1bnNpZ25l
ZCBpbnQgbHBpKQ0KPiAgICAgICAvKiBGaW5kIG91dCBpZiBhIGd1ZXN0IG1hcHBlZCBzb21ldGhp
bmcgdG8gdGhpcyBwaHlzaWNhbCBMUEkuICovDQo+ICAgICAgIGhscGlwID0gZ2ljX2dldF9ob3N0
X2xwaShscGkpOw0KPiAgICAgICBpZiAoICFobHBpcCApDQo+ICsgICAgew0KPiArICAgICAgICBw
cmludGsoIiVzOiBSZWNlaXZlZCBMUEkgJXUgYnV0IGl0IGlzIG5vdCBtYXBwZWQ/XG4iLCBfX2Z1
bmNfXywNCj4gbHBpKTsNCj4gICAgICAgICAgIGdvdG8gb3V0Ow0KPiArICAgIH0NCj4gDQo+ICAg
ICAgIGhscGkuZGF0YSA9IHJlYWRfdTY0X2F0b21pYygmaGxwaXAtPmRhdGEpOw0KPiANCj4gQEAg
LTIyMiw2ICsyMjUsOSBAQCB2b2lkIGdpY3YzX2xwaV91cGRhdGVfaG9zdF9lbnRyeSh1aW50MzJf
dCBob3N0X2xwaSwNCj4gaW50IGRvbWFpbl9pZCwNCj4gICB7DQo+ICAgICAgIHVuaW9uIGhvc3Rf
bHBpICpobHBpcCwgaGxwaTsNCj4gDQo+ICsgICAgcHJpbnRrKCIlczogaG9zdF9scGkgJXUgZG9t
YWluICVkIHZpcnFfbHBpICV1XG4iLA0KPiArICAgICAgICAgICBfX2Z1bmNfXywgaG9zdF9scGks
IGRvbWFpbl9pZCwgdmlycV9scGkpOw0KPiArDQo+ICAgICAgIEFTU0VSVChob3N0X2xwaSA+PSBM
UElfT0ZGU0VUKTsNCj4gDQo+ICAgICAgIGhvc3RfbHBpIC09IExQSV9PRkZTRVQ7DQo+IGRpZmYg
LS1naXQgYS94ZW4vYXJjaC9hcm0vdmdpYy12My1pdHMuYyBiL3hlbi9hcmNoL2FybS92Z2ljLXYz
LWl0cy5jIGluZGV4DQo+IDU4ZDkzOWI4NWY5Mi4uODllZjEzN2IzZTZiIDEwMDY0NA0KPiAtLS0g
YS94ZW4vYXJjaC9hcm0vdmdpYy12My1pdHMuYw0KPiArKysgYi94ZW4vYXJjaC9hcm0vdmdpYy12
My1pdHMuYw0KPiBAQCAtODk3LDcgKzg5Nyw3IEBAIG91dF91bmxvY2s6DQo+IA0KPiAgIHN0YXRp
YyB2b2lkIGR1bXBfaXRzX2NvbW1hbmQodWludDY0X3QgKmNvbW1hbmQpDQo+ICAgew0KPiAtICAg
IGdkcHJpbnRrKFhFTkxPR19XQVJOSU5HLCAiICBjbWQgMHglMDJseDogJTAxNmx4ICUwMTZseCAl
MDE2bHgNCj4gJTAxNmx4XG4iLA0KPiArICAgIGdkcHJpbnRrKFhFTkxPR19XQVJOSU5HLCAidklU
UyAgY21kIDB4JTAybHg6ICUwMTZseCAlMDE2bHggJTAxNmx4DQo+ICUwMTZseFxuIiwNCj4gICAg
ICAgICAgICAgICAgaXRzX2NtZF9nZXRfY29tbWFuZChjb21tYW5kKSwNCj4gICAgICAgICAgICAg
ICAgY29tbWFuZFswXSwgY29tbWFuZFsxXSwgY29tbWFuZFsyXSwgY29tbWFuZFszXSk7DQo+ICAg
fQ0KPiBAQCAtOTI2LDYgKzkyNiw4IEBAIHN0YXRpYyBpbnQgdmdpY19pdHNfaGFuZGxlX2NtZHMo
c3RydWN0IGRvbWFpbiAqZCwNCj4gc3RydWN0IHZpcnRfaXRzICppdHMpDQo+ICAgICAgICAgICBp
ZiAoIHJldCApDQo+ICAgICAgICAgICAgICAgcmV0dXJuIHJldDsNCj4gDQo+ICsgICAgICAgIGR1
bXBfaXRzX2NvbW1hbmQoY29tbWFuZCk7DQo+ICsNCj4gICAgICAgICAgIHN3aXRjaCAoIGl0c19j
bWRfZ2V0X2NvbW1hbmQoY29tbWFuZCkgKQ0KPiAgICAgICAgICAgew0KPiAgICAgICAgICAgY2Fz
ZSBHSVRTX0NNRF9DTEVBUjoNCj4gDQo+IA0KPiAtLQ0KPiBKdWxpZW4gR3JhbGwNCg0KWzBdIGh0
dHBzOi8vd3d3Lm1haWwtYXJjaGl2ZS5jb20vdS1ib290QGxpc3RzLmRlbnguZGUvbXNnMzc5NzA4
Lmh0bWwNClsxXSANCmRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vZ2ljLXYzLWl0cy5jIGIveGVu
L2FyY2gvYXJtL2dpYy12My1pdHMuYw0KaW5kZXggOTU1OGJhZDk2YS4uZDE3NWJhNTJiMCAxMDA2
NDQNCi0tLSBhL3hlbi9hcmNoL2FybS9naWMtdjMtaXRzLmMNCisrKyBiL3hlbi9hcmNoL2FybS9n
aWMtdjMtaXRzLmMNCkBAIC04Nyw2ICs4NywxMCBAQCBzdGF0aWMgaW50IGl0c19zZW5kX2NvbW1h
bmQoc3RydWN0IGhvc3RfaXRzICpod19pdHMsIGNvbnN0IHZvaWQgKml0c19jbWQpDQogICAgIC8q
IE5vIElUUyBjb21tYW5kcyBmcm9tIGFuIGludGVycnVwdCBoYW5kbGVyIChhdCB0aGUgbW9tZW50
KS4gKi8NCiAgICAgQVNTRVJUKCFpbl9pcnEoKSk7DQoNCisgICAgcHJpbnRrKFhFTkxPR19XQVJO
SU5HICJwSVRTICBjbWQgMHglMDJseDogJTAxNmx4ICUwMTZseCAlMDE2bHggJTAxNmx4XG4iLA0K
KyAgICAgICAgKCgodWludDY0X3QgKikgaXRzX2NtZClbMF0gPj4gMCkgJiBHRU5NQVNLKDggLSAx
LCAwKSwNCisgICAgICAgICgodWludDY0X3QgKikgaXRzX2NtZClbMF0sICgodWludDY0X3QgKikg
aXRzX2NtZClbMV0sICgodWludDY0X3QgKikgaXRzX2NtZClbMl0sICgodWludDY0X3QgKikgaXRz
X2NtZClbM10pOw0KKw0KICAgICBzcGluX2xvY2soJmh3X2l0cy0+Y21kX2xvY2spOw0KDQogICAg
IGRvIHsNCmRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vZ2ljLXYzLWxwaS5jIGIveGVuL2FyY2gv
arm/gic-v3-lpi.c
index 78b9521b21..2c3b0fc9e5 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -181,8 +181,10 @@ void gicv3_do_LPI(unsigned int lpi)

     /* Find out if a guest mapped something to this physical LPI. */
     hlpip = gic_get_host_lpi(lpi);
-    if ( !hlpip )
+    if ( !hlpip ) {
+        printk("%s: Received LPI %u but it is not mapped?\n", __func__, lpi);
         goto out;
+    }

     hlpi.data = read_u64_atomic(&hlpip->data);

@@ -221,6 +223,9 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
 {
     union host_lpi *hlpip, hlpi;

+    printk("%s: host_lpi %u domain %d virt_lpi %u\n",
+        __func__, host_lpi, domain_id, virt_lpi);
+
     ASSERT(host_lpi >= LPI_OFFSET);

     host_lpi -= LPI_OFFSET;
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 6e153c698d..dd5081ef80 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -897,7 +897,7 @@ out_unlock:

 static void dump_its_command(uint64_t *command)
 {
-    gdprintk(XENLOG_WARNING, "  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
+    gdprintk(XENLOG_WARNING, "vITS  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
             its_cmd_get_command(command),
             command[0], command[1], command[2], command[3]);
 }
@@ -926,6 +926,8 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
        if ( ret )
            return ret;

+        dump_its_command(command);
+
        switch ( its_cmd_get_command(command) )
        {
        case GITS_CMD_CLEAR:

--_003_HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0HE1PR05MB4794eurp_
Content-Type: application/octet-stream; name="boot-xendebug.log"
Content-Description: boot-xendebug.log
Content-Disposition: attachment; filename="boot-xendebug.log"; size=26771;
	creation-date="Sun, 22 Nov 2020 22:45:56 GMT";
	modification-date="Sun, 22 Nov 2020 22:45:56 GMT"
Content-Transfer-Encoding: base64

 Xen 4.13.2
(XEN) Xen version 4.13.2 (@) (aarch64-poky-linux-gcc (GCC) 8.3.0) debug=y  Sun Nov 22 21:56:06 UTC 2020
(XEN) Latest ChangeSet: Fri Oct 30 12:24:39 2020 +0100 git:0060ac29bc-dirty
(XEN) build-id: 1f6c38235b06eb99a3e73b3846f910a4df9dc028
(XEN) Processor: 410fd083: "ARM Limited", variant: 0x0, part 0xd08, rev 0x3
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000001002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD GICv3-SysReg
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001124 0000000000000000
(XEN)   ISA Features:  0000000000010000 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00000131:10011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 03010066
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10201105 40000000 01260000 02102211
(XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00010001
(XEN) Using SMC Calling Convention v1.0
(XEN) Using PSCI v0.2
(XEN) SMP: Allowing 2 CPUs
(XEN) enabled workaround for: ARM erratum 1319537
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 25000 KHz
(XEN) GICv3 initialization:
(XEN)       gic_dist_addr=0x00000006000000
(XEN)       gic_maintenance_irq=25
(XEN)       gic_rdist_stride=0
(XEN)       gic_rdist_regions=1
(XEN)       redistributor regions:
(XEN)         - region 0: 0x00000006040000 - 0x00000006080000
(XEN) GICv3: using at most 57344 LPIs on the host.
(XEN) GICv3: 288 lines, (IID 0001143b).
(XEN) GICv3: Found ITS @0x6020000
(XEN) using non-cacheable ITS command queue
(XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000
(XEN) pITS  cmd 0x09: 0000000000000009 0000000000000000 8000000000000000 0000000000000000
(XEN) pITS  cmd 0x05: 0000000000000005 0000000000000000 0000000000000000 0000000000000000
(XEN) XSM Framework v1.0.0 initialized
(XEN) Initialising XSM SILO mode
(XEN) Using scheduler: null Scheduler (null)
(XEN) Initializing null scheduler
(XEN) WARNING: This is experimental software in development.
(XEN) Use at your own risk.
(XEN) Allocated console ring of 16 KiB.
(XEN) CPU0: Guest atomics will try 13 times before pausing the domain
(XEN) Bringing up CPU1
(XEN) GICv3: CPU1: Found redistributor in region 0 @000000004003c000
(XEN) pITS  cmd 0x09: 0000000000000009 0000000000000000 8000000000010001 0000000000000000
(XEN) pITS  cmd 0x05: 0000000000000005 0000000000000000 0000000000010000 0000000000000000
(XEN) CPU1: Guest atomics will try 12 times before pausing the domain
(XEN) CPU 1 booted.
(XEN) Brought up 2 CPUs
(XEN) I/O virtualisation disabled
(XEN) P2M: 44-bit IPA with 44-bit PA and 8-bit VMID
(XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
(XEN) alternatives: Patching with alt table 00000000002d40e8 -> 00000000002d48bc
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0000000081200000
(XEN) Allocating 1:1 mappings totalling 512MB for dom0:
(XEN) BANK[0] 0x000000c0000000-0x000000e0000000 (512MB)
(XEN) Grant table range: 0x00000081000000-0x00000081040000
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading zImage from 0000000081200000 to 00000000c0080000-00000000c1597008
(XEN) Loading d0 DTB to 0x00000000c8000000-0x00000000c8006cb6
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) ***************************************************
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***************************************************
(XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
(XEN) Please update your firmware.
(XEN) ***************************************************
(XEN) 3... 2... 1...
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) sched_null.c:344: 0 <-- d0v0
(XEN) Freed 344kB init memory.
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER24
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER28
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER32
(XEN) d0v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0
(XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x09: 0000000000000009 0000000000000000 8000000000000000 0000000000000000
(XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x05: 0000000000000005 0000000000000000 0000000000000000 0000000000000000
(XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x0d: 000000000000000d 0000000000000000 0000000000000000 0000000000000000
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd083]
[    0.000000] Linux version 4.19.68+g4f0ecdbd4f33 (oe-user@oe-host) (gcc version 8.3.0 (GCC)) #1 SMP PREEMPT Thu Oct 29 22:05:56 UTC 2020
[    0.000000] Machine model: Kontron SMARC-sAL28
[    0.000000] Xen 4.13 support found
[    0.000000] cma: Reserved 32 MiB at 0x00000000de000000
[    0.000000] NUMA: No NUMA configuration found
[    0.000000] NUMA: Faking a node at [mem 0x0000000000000000-0x00000000dfffffff]
[    0.000000] NUMA: NODE_DATA [mem 0xddfd5b40-0xddfd72ff]
[    0.000000] Zone ranges:
[    0.000000]   DMA32    [mem 0x00000000c0000000-0x00000000dfffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00000000c0000000-0x00000000dfffffff]
[    0.000000] Initmem setup node 0 [mem 0x00000000c0000000-0x00000000dfffffff]
[    0.000000] psci: probing for conduit method from DT.
[    0.000000] psci: PSCIv1.1 detected in firmware.
[    0.000000] psci: Using standard PSCI v0.2 function IDs
[    0.000000] psci: Trusted OS migration not required
[    0.000000] psci: SMC Calling Convention v1.1
[    0.000000] random: get_random_bytes called from start_kernel+0x94/0x3ec with crng_init=0
[    0.000000] percpu: Embedded 25 pages/cpu s62296 r8192 d31912 u102400
[    0.000000] Detected PIPT I-cache on CPU0
[    0.000000] CPU features: enabling workaround for EL2 vector hardening
[    0.000000] CPU features: enabling workaround for Speculative Store Bypass Disable
[    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
[    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 129024
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: root=/dev/mmcblk1p2 console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused rootwait
[    0.000000] Memory: 452116K/524288K available (12862K kernel code, 1628K rwdata, 5708K rodata, 1344K init, 899K bss, 39404K reserved, 32768K cma-reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] rcu: Preemptible hierarchical RCU implementation.
[    0.000000] rcu:     RCU restricting CPUs from NR_CPUS=64 to nr_cpu_ids=1.
[    0.000000]  Tasks RCU enabled.
[    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[    0.000000] GICv3: Distributor has no Range Selector support
[    0.000000] GICv3: no VLPI support, no direct LPI support
[    0.000000] ITS [mem 0x06020000-0x0603ffff]
[    0.000000] ITS@0x0000000006020000: allocated 65536 Devices @dc880000 (flat, esz 8, psz 64K, shr 1)
[    0.000000] ITS@0x0000000006020000: allocated 32768 Interrupt Collections @dc820000 (flat, esz 2, psz 64K, shr 1)
[    0.000000] GIC: using LPI property table @0x00000000dc830000
[    0.000000] GICv3: CPU0: found redistributor 0 region 0:0x0000000006040000
[    0.000000] CPU0: using LPI pending table @0x00000000dc840000
[    0.000000] arch_timer: cp15 timer(s) running at 25.00MHz (virt).
[    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
[    0.000003] sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
[    0.000230] Console: colour dummy device 80x25
[    0.000812] console [hvc0] enabled
[    0.000857] Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=100000)
[    0.000882] pid_max: default: 32768 minimum: 301
[    0.000927] Security Framework initialized
[    0.001178] Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
[    0.001293] Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.001323] Mount-cache hash table entries: 1024 (order: 1, 8192 bytes)
[    0.001342] Mountpoint-cache hash table entries: 1024 (order: 1, 8192 bytes)
[    0.024034] ASID allocator initialised with 32768 entries
[    0.024549] xen:grant_table: Grant tables using version 1 layout
[    0.024575] Grant table initialized
[    0.024606] xen:events: Using FIFO-based ABI
[    0.024646] Xen: initializing cpu0
[    0.032041] rcu: Hierarchical SRCU implementation.
[    0.040081] Platform MSI: gic-its domain created
[    0.040140] PCI/MSI: /interrupt-controller/gic-its domain created
[    0.040187] fsl-mc MSI: /interrupt-controller/gic-its domain created
[    0.048072] smp: Bringing up secondary CPUs ...
[    0.048086] smp: Brought up 1 node, 1 CPU
[    0.048097] SMP: Total of 1 processors activated.
[    0.048113] CPU features: detected: GIC system register CPU interface
[    0.048129] CPU features: detected: 32-bit EL0 Support
[    0.051116] CPU: All CPU(s) started at EL1
[    0.051133] alternatives: patching kernel code
[    0.051728] devtmpfs: initialized
[    0.056337] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.056374] futex hash table entries: 256 (order: 2, 16384 bytes)
[    0.056912] pinctrl core: initialized pinctrl subsystem
[    0.057704] NET: Registered protocol family 16
[    0.057908] audit: initializing netlink subsys (disabled)
[    0.058469] audit: type=2000 audit(0.056:1): state=initialized audit_enabled=0 res=1
[    0.058925] vdso: 2 pages (1 code @ (____ptrval____), 1 data @ (____ptrval____))
[    0.058947] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[    0.062295] DMA: preallocated 256 KiB pool for atomic allocations
[    0.062394] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
[    0.063142] software IO TLB: mapped [mem 0xdbc00000-0xdc000000] (4MB)
[    0.064269] Serial: AMBA PL011 UART driver
[    0.104947] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.108728] cryptd: max_cpu_qlen set to 1000
[    0.117877] xen:balloon: Initialising balloon driver
[    0.119767] vgaarb: loaded
[    0.120078] SCSI subsystem initialized
[    0.121057] usbcore: registered new interface driver usbfs
[    0.121130] usbcore: registered new interface driver hub
[    0.121204] usbcore: registered new device driver usb
[    0.121736] imx-i2c 2000000.i2c: scl-gpios not found
[    0.122155] of_dma_request_slave_channel: dma-names property of node '/soc/i2c@2000000' missing or empty
[    0.122179] i2c i2c-0: IMX I2C adapter registered
[    0.122328] imx-i2c 2030000.i2c: scl-gpios not found
[    0.122452] of_dma_request_slave_channel: dma-names property of node '/soc/i2c@2030000' missing or empty
[    0.122475] i2c i2c-1: IMX I2C adapter registered
[    0.122616] imx-i2c 2040000.i2c: scl-gpios not found
[    0.122802] of_dma_request_slave_channel: dma-names property of node '/soc/i2c@2040000' missing or empty
[    0.122823] i2c i2c-2: IMX I2C adapter registered
[    0.123273] pps_core: LinuxPPS API ver. 1 registered
[    0.123288] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.123346] PTP clock support registered
[    0.123546] EDAC MC: Ver: 3.0.0
[    0.126576] No BMan portals available!
[    0.126954] QMan: Allocated lookup table at (____ptrval____), entry count 65537
[    0.127075] No QMan portals available!
[    0.127223] No USDPAA memory, no 'fsl,usdpaa-mem' in device-tree
[    0.128688] Advanced Linux Sound Architecture Driver Initialized.
[    0.129396] clocksource: Switched to clocksource arch_sys_counter
[    0.129484] VFS: Disk quotas dquot_6.6.0
[    0.129525] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.138006] NET: Registered protocol family 2
[    0.138292] tcp_listen_portaddr_hash hash table entries: 256 (order: 0, 4096 bytes)
[    0.138406] TCP established hash table entries: 4096 (order: 3, 32768 bytes)
[    0.138440] TCP bind hash table entries: 4096 (order: 4, 65536 bytes)
[    0.138498] TCP: Hash tables configured (established 4096 bind 4096)
[    0.138567] UDP hash table entries: 256 (order: 1, 8192 bytes)
[    0.138589] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[    0.138654] NET: Registered protocol family 1
[    0.148017] RPC: Registered named UNIX socket transport module.
[    0.148040] RPC: Registered udp transport module.
[    0.148052] RPC: Registered tcp transport module.
[    0.148064] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.148406] kvm [1]: HYP mode not available
[    0.149251] Initialise system trusted keyrings
[    0.149762] workingset: timestamp_bits=44 max_order=17 bucket_order=0
[    0.152943] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    0.156914] NFS: Registering the id_resolver key type
[    0.156948] Key type id_resolver registered
[    0.156959] Key type id_legacy registered
[    0.156973] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    0.157009] jffs2: version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
[    0.157486] 9p: Installing v9fs 9p2000 file system support
[    0.161737] Key type asymmetric registered
[    0.161758] Asymmetric key parser 'x509' registered
[    0.161808] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 244)
[    0.161826] io scheduler noop registered
[    0.161837] io scheduler deadline registered
[    0.161931] io scheduler cfq registered (default)
[    0.161944] io scheduler mq-deadline registered
[    0.161957] io scheduler kyber registered
[    0.164994] pci-host-generic 1f0000000.pcie: host bridge /soc/pcie@1f0000000 ranges:
[    0.165029] pci-host-generic 1f0000000.pcie:   MEM 0x1f8000000..0x1f815ffff -> 0x00000000
[    0.165054] pci-host-generic 1f0000000.pcie:   MEM 0x1f8160000..0x1f81cffff -> 0x00000000
[    0.165077] pci-host-generic 1f0000000.pcie:   MEM 0x1f81d0000..0x1f81effff -> 0x00000000
[    0.165100] pci-host-generic 1f0000000.pcie:   MEM 0x1f81f0000..0x1f820ffff -> 0x00000000
[    0.165123] pci-host-generic 1f0000000.pcie:   MEM 0x1f8210000..0x1f822ffff -> 0x00000000
[    0.165146] pci-host-generic 1f0000000.pcie:   MEM 0x1f8230000..0x1f824ffff -> 0x00000000
[    0.165167] pci-host-generic 1f0000000.pcie:   MEM 0x1fc000000..0x1fc3fffff -> 0x00000000
[    0.165218] pci-host-generic 1f0000000.pcie: ECAM at [mem 0x1f0000000-0x1f00fffff] for [bus 00]
[    0.165305] pci-host-generic 1f0000000.pcie: PCI host bridge to bus 0000:00
[    0.165322] pci_bus 0000:00: root bus resource [bus 00]
[    0.165337] pci_bus 0000:00: root bus resource [mem 0x1f8000000-0x1f815ffff] (bus address [0x00000000-0x0015ffff])
[    0.165368] pci_bus 0000:00: root bus resource [mem 0x1f8160000-0x1f81cffff pref] (bus address [0x00000000-0x0006ffff])
[    0.168237] pci_bus 0000:00: root bus resource [mem 0x1f81d0000-0x1f81effff] (bus address [0x00000000-0x0001ffff])
[    0.168270] pci_bus 0000:00: root bus resource [mem 0x1f81f0000-0x1f820ffff pref] (bus address [0x00000000-0x0001ffff])
[    0.168293] pci_bus 0000:00: root bus resource [mem 0x1f8210000-0x1f822ffff] (bus address [0x00000000-0x0001ffff])
[    0.168316] pci_bus 0000:00: root bus resource [mem 0x1f8230000-0x1f824ffff pref] (bus address [0x00000000-0x0001ffff])
[    0.168339] pci_bus 0000:00: root bus resource [mem 0x1fc000000-0x1fc3fffff] (bus address [0x00000000-0x003fffff])
[    0.168463] pci 0000:00:00.0: VF(n) BAR0 space: [mem 0x1f81d0000-0x1f81effff 64bit] (contains BAR0 for 2 VFs)
[    0.168486] pci 0000:00:00.0: VF(n) BAR2 space: [mem 0x1f81f0000-0x1f820ffff 64bit pref] (contains BAR2 for 2 VFs)
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.168639] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
[    0.168805] pci 0000:00:00.1: VF(n) BAR0 space: [mem 0x1f8210000-0x1f822ffff 64bit] (contains BAR0 for 2 VFs)
[    0.168829] pci 0000:00:00.1: VF(n) BAR2 space: [mem 0x1f8230000-0x1f824ffff 64bit pref] (contains BAR2 for 2 VFs)
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.168931] pci 0000:00:00.1: Failed to add - passthrough or MSI/MSI-X might fail!
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.169138] pci 0000:00:00.2: Failed to add - passthrough or MSI/MSI-X might fail!
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.169332] pci 0000:00:00.3: Failed to add - passthrough or MSI/MSI-X might fail!
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.169613] pci 0000:00:00.4: Failed to add - passthrough or MSI/MSI-X might fail!
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.172781] pci 0000:00:00.5: Failed to add - passthrough or MSI/MSI-X might fail!
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.172995] pci 0000:00:00.6: Failed to add - passthrough or MSI/MSI-X might fail!
[    0.173964] OF: /soc/pcie@1f0000000: Invalid msi-map translation - no match for rid 0xf8 on           (null)
(XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
[    0.174044] pci 0000:00:1f.0: Failed to add - passthrough or MSI/MSI-X might fail!
[    0.174538] layerscape-pcie 3400000.pcie: host bridge /soc/pcie@3400000 ranges:
[    0.174570] layerscape-pcie 3400000.pcie:    IO 0x8000010000..0x800001ffff -> 0x00000000
[    0.174594] layerscape-pcie 3400000.pcie:   MEM 0x8040000000..0x807fffffff -> 0x40000000
[    0.174837] layerscape-pcie 3400000.pcie: PCI host bridge to bus 0001:00
[    0.174856] pci_bus 0001:00: root bus resource [bus 00-ff]
[    0.174870] pci_bus 0001:00: root bus resource [io  0x0000-0xffff]
[    0.174887] pci_bus 0001:00: root bus resource [mem 0x8040000000-0x807fffffff] (bus address [0x40000000-0x7fffffff])
[    0.175070] pci 0001:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
[    0.176593] pci 0001:00:00.0: BAR 6: assigned [mem 0x8040000000-0x80400007ff pref]
[    0.176613] pci 0001:00:00.0: PCI bridge to [bus 01]
[    0.177125] pcieport 0001:00:00.0: Signaling PME with IRQ 16
[    0.177281] pcieport 0001:00:00.0: AER enabled with IRQ 17
[    0.177874] layerscape-pcie 3500000.pcie: host bridge /soc/pcie@3500000 ranges:
[    0.177908] layerscape-pcie 3500000.pcie:    IO 0x8800010000..0x880001ffff -> 0x00000000
[    0.177939] layerscape-pcie 3500000.pcie:   MEM 0x8840000000..0x887fffffff -> 0x40000000
[    0.178174] layerscape-pcie 3500000.pcie: PCI host bridge to bus 0002:00
[    0.178193] pci_bus 0002:00: root bus resource [bus 00-ff]
[    0.178208] pci_bus 0002:00: root bus resource [io  0x10000-0x1ffff] (bus address [0x0000-0xffff])
[    0.178229] pci_bus 0002:00: root bus resource [mem 0x8840000000-0x887fffffff] (bus address [0x40000000-0x7fffffff])
[    0.178416] pci 0002:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
[    0.179940] pci 0002:00:00.0: BAR 6: assigned [mem 0x8840000000-0x88400007ff pref]
[    0.179961] pci 0002:00:00.0: PCI bridge to [bus 01]
[    0.180274] pcieport 0002:00:00.0: Signaling PME with IRQ 18
[    0.180458] pcieport 0002:00:00.0: AER enabled with IRQ 19
[    0.186395] Freescale LS2 console driver
[    0.186531] fsl-ls2-console: device fsl_mc_console registered
[    0.186618] fsl-ls2-console: device fsl_aiop_console registered
[    0.189325] xen:xen_evtchn: Event-channel device installed
[    0.196223] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.198204] SuperH (H)SCI(F) driver initialized
[    0.198499] msm_serial: driver initialized
[    0.198726] 2270000.serial: ttyLP0 at MMIO 0x2270000 (irq = 10, base_baud = 12500000) is a FSL_LPUART
[    0.203664] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    0.209143] loop: module loaded
[    0.209237] Invalid max_queues (4), will use default max: 1.
[    0.211203] at24 0-0050: 4096 byte 24c32 EEPROM, writable, 32 bytes/write
[    0.212609] at24 2-0050: 4096 byte 24c32 EEPROM, writable, 32 bytes/write
[    0.216406] sl28cpld 0-004a: registered IRQ 27
[    0.216426] sl28cpld 0-004a: successfully probed. CPLD version 18.
[    0.239712] m25p80 spi0.0: w25q32jw (4096 Kbytes)
[    0.240316] 10 fixed-partitions partitions found on MTD device 20c0000.spi
[    0.240335] Creating 10 MTD partitions on "20c0000.spi":
[    0.240351] 0x000000000000-0x000000010000 : "rcw"
[    0.240834] 0x000000010000-0x000000100000 : "failsafe bootloader"
[    0.241282] 0x000000100000-0x000000140000 : "failsafe DP firmware"
[    0.241792] 0x000000140000-0x0000001e0000 : "failsafe trusted firmware"
[    0.242252] 0x0000001e0000-0x000000200000 : "reserved"
[    0.242708] 0x000000200000-0x000000210000 : "configuration store"
[    0.243159] 0x000000210000-0x000000300000 : "bootloader"
[    0.243613] 0x000000300000-0x000000340000 : "DP firmware"
[    0.244069] 0x000000340000-0x0000003e0000 : "trusted firmware"
[    0.244511] 0x0000003e0000-0x000000400000 : "bootloader environment"
[    0.250047] libphy: Fixed MDIO Bus: probed
[    0.252046] tun: Universal TUN/TAP device driver, 1.6
[    0.256135] thunder_xcv, ver 1.0
[    0.256220] thunder_bgx, ver 1.0
[    0.256286] nicpf, ver 1.0
[    0.256506] Freescale FM module, FMD API version 21.1.0
[    0.256626] Freescale FM Ports module
[    0.256637] fsl_mac: fsl_mac: FSL FMan MAC API based driver
[    0.256737] fsl_dpa: FSL DPAA Ethernet driver
[    0.256825] fsl_advanced: FSL DPAA Advanced drivers:
[    0.256838] fsl_proxy: FSL DPAA Proxy initialization driver
[    0.256922] fsl_oh: FSL FMan Offline Parsing port driver
[    0.361406] fsl_enetc 0000:00:00.0: enabling device (0400 -> 0402)
[    0.361469] fsl_enetc 0000:00:00.0: no MAC address specified for SI1, using 42:14:fd:42:55:1d
[    0.361491] fsl_enetc 0000:00:00.0: no MAC address specified for SI2, using 1e:22:15:9a:c9:e9
(XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x08: 0000001700000008 0000000000000000 80000000db471200 0000000000000000
(XEN) pITS  cmd 0x08: 0000001700000008 0000000000000004 80000020f9de4600 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200000000000 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000000 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200100000001 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000001 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200200000002 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000002 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200300000003 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000003 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200400000004 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000004 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200500000005 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000005 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200600000006 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0c: 000000170000000c 0000000000000006 0000000000000000 0000000000000000
(XEN) pITS  cmd 0x0a: 000000170000000a 0000200700000007 0000000000000000 00000000000000[    1.656243] fsl_enetc 0000:00:00.1 gbe1: renamed from eth1
[    1.721814] fsl_enetc 0000:00:00.0 gbe0: renamed from eth0
[    3.011356] urandom_read: 3 callbacks suppressed
[    3.011360] random: udevd: uninitialized urandom read (16 bytes read)
[    3.011490] random: udevd: unin
aXRpYWxpemVkIHVyYW5kb20gcmVhZCAoMTYgYnl0ZXMgcmVhZCkNClsgICAgMy4wMTE1NDJdIHJh
bmRvbTogdWRldmQ6IHVuaW5pdGlhbGl6ZWQgdXJhbmRvbSByZWFkICgxNiBieXRlcyByZWFkKQ0K
WyAgICAzLjAxMjc4Ml0gdWRldmRbMTgxMl06IHNwZWNpZmllZCBncm91cCAna3ZtJyB1bmtub3du
DQpbICAgIDQuMDEwNjk4XSBGQVQtZnMgKG1tY2JsazFwMSk6IFZvbHVtZSB3YXMgbm90IHByb3Bl
cmx5IHVubW91bnRlZC4gU29tZSBkYXRhIG1heSBiZSBjb3JydXB0LiBQbGVhc2UgcnVuIGZzY2su
DQpbICAgIDYuMDQzNTg2XSBvYWtfcGNpOiBsb2FkaW5nIG91dC1vZi10cmVlIG1vZHVsZSB0YWlu
dHMga2VybmVsLg0KWyAgICA2LjA0OTQ5Nl0gTWFydmVsbCBQQ0llIFN3aXRjaCBEcml2ZXIgLSAo
b2FrKSB2ZXJzaW9uIDAuMC4wDQpbICAgIDYuMDQ5NTIzXSBDb3B5cmlnaHQgKGMpIE1hcnZlbGwg
LSAyMDE4DQpbICAgIDYuMDczNjQ4XSBFWFQ0LWZzIChtbWNibGsxcDIpOiByZS1tb3VudGVkLiBP
cHRzOiAobnVsbCkNClsgICAgNi41MTY5ODddIHJhbmRvbTogZGQ6IHVuaW5pdGlhbGl6ZWQgdXJh
bmRvbSByZWFkICg1MTIgYnl0ZXMgcmVhZCkNCkFMU0E6IFJlc3RvcmluZyBtaXhlciBzZXR0aW5n
cy4uLg0KL3Vzci9zYmluL2Fsc2FjdGw6IGxvYWRfc3RhdGU6MTczNTogTm8gc291bmRjYXJkcyBm
b3VuZC4uLg0KSU5JVDogRW50ZXJpbmcgcnVubGV2ZWw6IDUNCkNvbmZpZ3VyaW5nIG5ldHdvcmsg
aW50ZXJmYWNlcy4uLiBkb25lLg0KU3RhcnRpbmcgc3lzdGVtIG1lc3NhZ2UgYnVzOiBkYnVzLg0K
TW91bnRpbmcgY2dyb3Vwcy4uLkRvbmUNClN0YXJ0aW5nIERyb3BiZWFyIFNTSCBzZXJ2ZXI6IFsg
ICAgNi43NjQ2MDNdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTANClsgICAgNi43
Njk3MzhdIFNlZ21lbnQgUm91dGluZyB3aXRoIElQdjYNCmRyb3BiZWFyLg0KU3RhcnRpbmcgcnBj
YmluZCBkYWVtb24uLi5kb25lLg0KU3RhcnRpbmcgYmx1ZXRvb3RoOiBibHVldG9vdGhkLg0KU3Rh
cnRpbmcgZG9ja2VyZDogICAgICAgWyAgICA2LjkyMjk5M10gQmx1ZXRvb3RoOiBDb3JlIHZlciAy
LjIyDQpbICAgIDYuOTIzMDU3XSBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDMxDQpb
ICAgIDYuOTIzMDcwXSBCbHVldG9vdGg6IEhDSSBkZXZpY2UgYW5kIGNvbm5lY3Rpb24gbWFuYWdl
ciBpbml0aWFsaXplZA0KWyAgICA2LjkyMzA5MF0gQmx1ZXRvb3RoOiBIQ0kgc29ja2V0IGxheWVy
IGluaXRpYWxpemVkDQpbICAgIDYuOTIzMTEzXSBCbHVldG9vdGg6IEwyQ0FQIHNvY2tldCBsYXll
ciBpbml0aWFsaXplZA0KWyAgICA2LjkyMzEzN10gQmx1ZXRvb3RoOiBTQ08gc29ja2V0IGxheWVy
IGluaXRpYWxpemVkDQoNCg0KU3RhcnRpbmcgbnRwZDogZG9uZQ0KU3RhcnRpbmcgc3lzbG9nZC9r
bG9nZDogZG9uZQ0KU3RhcnRpbmcgaW50ZXJuZXQgc3VwZXJzZXJ2ZXI6IHhpbmV0ZC4NCiAqIFN0
YXJ0aW5nIEF2YWhpIG1ETlMvRE5TLVNEIERhZW1vbjogYXZhaGktZGFlbW9uICAgICAgICAgICAg
ICAgICAgICAgICBbIG9rIF0NClN0YXJ0aW5nIFRlbGVwaG9ueSBkYWVtb24NClN0YXJ0aW5nIC91
c3Ivc2Jpbi94ZW5zdG9yZWQuLi4NClNldHRpbmcgZG9tYWluIDAgbmFtZSwgZG9taWQgYW5kIEpT
T04gY29uZmlnLi4uDQpEb25lIHNldHRpbmcgdXAgRG9tMA0KU3RhcnRpbmcgeGVuY29uc29sZWQu
Li4NClN0YXJ0aW5nIFFFTVUgYXMgZGlzayBiYWNrZW5kIGZvciBkb20wDQovZXRjL3JjNS5kL1M4
MHhlbmNvbW1vbnM6IGxpbmUgNzM6IC91c3IvYmluL3FlbXUtc3lzdGVtLWkzODY6IE5vIHN1Y2gg
ZmlsZSBvciBkaXJlY3RvcnkNClN0YXJ0aW5nIGRvbWFpbiB3YXRjaGRvZyBkYWVtb246IHhlbndh
dGNoZG9nZCBzdGFydHVwDQoNCltkb25lXQ0KDQpQb2t5IChZb2N0byBQcm9qZWN0IFJlZmVyZW5j
ZSBEaXN0cm8pIDIuNy4xIGtvbnRyb24tc2FsMjggaHZjMA0KDQo=

--_003_HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0HE1PR05MB4794eurp_
Content-Type: text/plain; name="xen-debug-output.txt"
Content-Description: xen-debug-output.txt
Content-Disposition: attachment; filename="xen-debug-output.txt"; size=16003;
	creation-date="Sun, 22 Nov 2020 22:37:14 GMT";
	modification-date="Sun, 22 Nov 2020 22:37:14 GMT"
Content-Transfer-Encoding: base64

KSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTcwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDEzIDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAw
MDE3MDAwMDAwMGEgMDAwMDIwMTQwMDAwMDAxNCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxNzAwMDAwMDBjIDAwMDAwMDAwMDAw
MDAwMTQgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQg
MHgwYTogMDAwMDAwMTcwMDAwMDAwYSAwMDAwMjAxNTAwMDAwMDE1IDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE3MDAwMDAwMGMg
MDAwMDAwMDAwMDAwMDAxNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4p
IHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxNzAwMDAwMDBhIDAwMDAyMDE2MDAwMDAwMTYgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAw
MTcwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDE2IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE3MDAwMDAwMGEgMDAwMDIwMTcwMDAw
MDAxNyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAw
eDBjOiAwMDAwMDAxNzAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMTcgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTcwMDAwMDAwYSAw
MDAwMjAxODAwMDAwMDE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
cElUUyAgY21kIDB4MGM6IDAwMDAwMDE3MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxOCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCjAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4
MGM6IDAwMDAwMDE3MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxOSAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxNzAwMDAwMDBhIDAw
MDAyMDFhMDAwMDAwMWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBw
SVRTICBjbWQgMHgwYzogMDAwMDAwMTcwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDFhIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE3
MDAwMDAwMGEgMDAwMDIwMWIwMDAwMDAxYiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxNzAwMDAwMDBjIDAwMDAwMDAwMDAwMDAw
MWIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgw
YTogMDAwMDAwMTcwMDAwMDAwYSAwMDAwMjAxYzAwMDAwMDFjIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE3MDAwMDAwMGMgMDAw
MDAwMDAwMDAwMDAxYyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJ
VFMgIGNtZCAweDBhOiAwMDAwMDAxNzAwMDAwMDBhIDAwMDAyMDFkMDAwMDAwMWQgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTcw
MDAwMDAwYyAwMDAwMDAwMDAwMDAwMDFkIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE3MDAwMDAwMGEgMDAwMDIwMWUwMDAwMDAx
ZSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBj
OiAwMDAwMDAxNzAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMWUgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTcwMDAwMDAwYSAwMDAw
MjAxZjAwMDAwMDFmIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElU
UyAgY21kIDB4MGM6IDAwMDAwMDE3MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxZiAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDA1OiAwMDAwMDAwMDAw
MDAwMDA1IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
DQooWEVOKSB2Z2ljLXYzLWl0cy5jOjkwMjpkMHYwIHZJVFMgIGNtZCAweDBhOiAwMDAwMDAxNzAw
MDAwMDBhIDAwMDAyMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
DQooWEVOKSBnaWN2M19scGlfdXBkYXRlX2hvc3RfZW50cnk6IGhvc3RfbHBpIDgxOTIgZG9tYWlu
IDAgdmlydF9scGkgODE5Mg0KKFhFTikgdmdpYy12My1pdHMuYzo5MDI6ZDB2MCB2SVRTICBjbWQg
MHgwNTogMDAwMDAwMDAwMDAwMDAwNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgdmdpYy12My1pdHMuYzo5MDI6ZDB2MCB2SVRTICBjbWQg
MHgwYTogMDAwMDAwMTcwMDAwMDAwYSAwMDAwMjAwMTAwMDAwMDAxIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgZ2ljdjNfbHBpX3VwZGF0ZV9ob3N0X2VudHJ5OiBob3N0
X2xwaSA4MTkzIGRvbWFpbiAwIHZpcnRfbHBpIDgxOTMNCihYRU4pIHZnaWMtdjMtaXRzLmM6OTAy
OmQwdjAgdklUUyAgY21kIDB4MDU6IDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHZnaWMtdjMtaXRzLmM6OTAy
OmQwdjAgdklUUyAgY21kIDB4MDg6IDAwMDAwMDE4MDAwMDAwMDggMDAwMDAwMDAwMDAwMDAwMCA4
MDAwMDAwMGRiNDcxYTAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDA4OiAw
MDAwMDAxODAwMDAwMDA4IDAwMDAwMDAwMDAwMDAwMDQgODAwMDAwMjBmOWRlMjcwMCAwMDAwMDAw
MDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAy
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KMDAwMDAwMDAwDQoo
WEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAyMTAwMDAwMDAxIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAw
MDAwMDE4MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAwMDAyMDIy
MDAwMDAwMDIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBj
bWQgMHgwYzogMDAwMDAwMTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDAyIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE4MDAwMDAw
MGEgMDAwMDIwMjMwMDAwMDAwMyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihY
RU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxODAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMDMgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAw
MDAwMTgwMDAwMDAwYSAwMDAwMjAyNDAwMDAwMDA0IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAwMDAwMDAw
MDAwMDAwNCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNt
ZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAwMDAyMDI1MDAwMDAwMDUgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAwMDAw
YyAwMDAwMDAwMDAwMDAwMDA1IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhF
TikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE4MDAwMDAwMGEgMDAwMDIwMjYwMDAwMDAwNiAwMDAw
MDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAw
MDAxODAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMDYgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAyNzAw
MDAwMDA3IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21k
IDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAwNyAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBh
IDAwMDAyMDI4MDAwMDAwMDggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVO
KSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDA4IDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAw
MDE4MDAwMDAwMGEgMDAwMDIwMjkwMDAwMDAwOSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxODAwMDAwMDBjIDAwMDAwMDAwMDAw
MDAwMDkgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQg
MHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAyYTAwMDAwMDBhIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMg
MDAwMDAwMDAwMDAwMDAwYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4p
IHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAwMDAyMDJiMDAwMDAwMGIgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAw
MTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDBiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAwMDAwYyAw
MDAwMDAwMDAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
cElUUyAgY21kIDB4MGE6IDAwMDAwMDE4MDAwMDAwMGEgMDAwMDIwMmQwMDAwMDAwZCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAx
ODAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMGQgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAyZTAwMDAw
MDBlIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4
MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAwZSAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAw
MDAyMDJmMDAwMDAwMGYgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBw
SVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDBmIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE4
MDAwMDAwMGEgMDAwMDIwMzAwMDAwMDAxMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxODAwMDAwMDBjIDAwMDAwMDAwMDAwMDAw
MTAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgw
YTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAzMTAwMDAwMDExIDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAw
MDAwMDAwMDAwMDAxMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJ
VFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAwMDAyMDMyMDAwMDAwMTIgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgw
MDAwMDAwYyAwMDAwMDAwMDAwMDAwMDEyIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE4MDAwMDAwMGEgMDAwMDIwMzMwMDAwMDAx
MyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBj
OiAwMDAwMDAxODAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMTMgMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAw
MjAzNDAwMDAwMDE0IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElU
UyAgY21kIDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxNCAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAw
MDAwMDBhIDAwMDAyMDM1MDAwMDAwMTUgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
DQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDE1
IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6
IDAwMDAwMDE4MDAwMDAwMGEgMDAwMDIwMzYwMDAwMDAxNiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAw
MDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxODAwMDAwMDBjIDAwMDAw
MDAwMDAwMDAwMTYgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRT
ICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAzNzAwMDAwMDE3IDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTog
MDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAzODAwMDAwMDE4IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAwMDAw
MDAwMDAwMDAxOCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMg
IGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAwMDAyMDM5MDAwMDAwMTkgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAw
MDAwYyAwMDAwMDAwMDAwMDAwMDE5IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0K
KFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDE4MDAwMDAwMGEgMDAwMDIwM2EwMDAwMDAxYSAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAw
MDAwMDAxODAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMWEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAz
YjAwMDAwMDFiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAg
Y21kIDB4MGM6IDAwMDAwMDE4MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxYiAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAw
MDBhIDAwMDAyMDNjMDAwMDAwMWMgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQoo
WEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDFjIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAw
MDAwMDE4MDAwMDAwMGEgMDAwMDIwM2QwMDAwMDAxZCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxODAwMDAwMDBjIDAwMDAwMDAw
MDAwMDAwMWQgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBj
bWQgMHgwYTogMDAwMDAwMTgwMDAwMDAwYSAwMDAwMjAzZTAwMDAwMDFlIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDE4MDAwMDAw
MGMgMDAwMDAwMDAwMDAwMDAxZSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihY
RU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAwMDAwMDBhIDAwMDAyMDNmMDAwMDAwMWYgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAw
MDAwMTgwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDFmIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MDU6IDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHZnaWMtdjMt
aXRzLmM6OTAyOmQwdjAgdklUUyAgY21kIDB4MGE6IDAwMDAwMDE4MDAwMDAwMGEgMDAwMDIwMDIw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIGdpY3YzX2xw
aV91cGRhdGVfaG9zdF9lbnRyeTogaG9zdF9scGkgODIyNCBkb21haW4gMCB2aXJ0X2xwaSA4MTk0
DQooWEVOKSB2Z2ljLXYzLWl0cy5jOjkwMjpkMHYwIHZJVFMgIGNtZCAweDA1OiAwMDAwMDAwMDAw
MDAwMDA1IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
DQooWEVOKSB2Z2ljLXYzLWl0cy5jOjkwMjpkMHYwIHZJVFMgIGNtZCAweDBhOiAwMDAwMDAxODAw
MDAwMDBhIDAwMDAyMDAzMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
DQooWEVOKSBnaWN2M19scGlfdXBkYXRlX2hvc3RfZW50cnk6IGhvc3RfbHBpIDgyMjUgZG9tYWlu
IDAgdmlydF9scGkgODE5NQ0KKFhFTikgdmdpYy12My1pdHMuYzo5MDI6ZDB2MCB2SVRTICBjbWQg
MHgwNTogMDAwMDAwMDAwMDAwMDAwNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KMDAwMDAwMGRiYjY5MjAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4p
IHBJVFMgIGNtZCAweDA4OiAwMDAwMDAxYjAwMDAwMDA4IDAwMDAwMDAwMDAwMDAwMDQgODAwMDAw
MjBmOWRlMTEwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAw
MWIwMDAwMDAwYSAwMDAwMjA0MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAw
eDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAwMDAyMDQxMDAwMDAwMDEgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMWIwMDAwMDAwYyAw
MDAwMDAwMDAwMDAwMDAxIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
cElUUyAgY21kIDB4MGE6IDAwMDAwMDFiMDAwMDAwMGEgMDAwMDIwNDIwMDAwMDAwMiAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAx
YjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMDIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA0MzAwMDAw
MDAzIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4
MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAwMyAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAw
MDAyMDQ0MDAwMDAwMDQgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBw
SVRTICBjbWQgMHgwYzogMDAwMDAwMWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDA0IDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDFi
MDAwMDAwMGEgMDAwMDIwNDUwMDAwMDAwNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAw
MDUgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgw
YTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA0NjAwMDAwMDA2IDAwMDAwMDAwMDAwMDAwMDAgMDAw
MDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAw
MDAwMDAwMDAwMDAwNiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJ
VFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAwMDAyMDQ3MDAwMDAwMDcgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMWIw
MDAwMDAwYyAwMDAwMDAwMDAwMDAwMDA3IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDFiMDAwMDAwMGEgMDAwMDIwNDgwMDAwMDAw
OCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBj
OiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMDggMDAwMDAwMDAwMDAwMDAwMCAwMDAw
MDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAw
MjA0OTAwMDAwMDA5IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElU
UyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAwOSAwMDAwMDAwMDAw
MDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAw
MDAwMDBhIDAwMDAyMDRhMDAwMDAwMGEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAw
DQowMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAwMDAy
MDRiMDAwMDAwMGIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRT
ICBjbWQgMHgwYzogMDAwMDAwMWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDBiIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDFiMDAw
MDAwMGEgMDAwMDIwNGMwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAN
CihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMGMg
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTog
MDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA0ZDAwMDAwMDBkIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAw
MDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAw
MDAwMDAwMDAwZCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMg
IGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAwMDAyMDRlMDAwMDAwMGUgMDAwMDAwMDAwMDAw
MDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMWIwMDAw
MDAwYyAwMDAwMDAwMDAwMDAwMDBlIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0K
KFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDFiMDAwMDAwMGEgMDAwMDIwNGYwMDAwMDAwZiAw
MDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAw
MDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMGYgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAw
MDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA1
MDAwMDAwMDEwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAg
Y21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxMCAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAw
MDBhIDAwMDAyMDUxMDAwMDAwMTEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQoo
WEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDExIDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAw
MDAwMDFiMDAwMDAwMGEgMDAwMDIwNTIwMDAwMDAxMiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAw
MDAwMDAwMTIgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBj
bWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA1MzAwMDAwMDEzIDAwMDAwMDAwMDAwMDAw
MDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAw
MGMgMDAwMDAwMDAwMDAwMDAxMyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihY
RU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAwMDAyMDU0MDAwMDAwMTQgMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAw
MDAwMWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDE0IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDFiMDAwMDAwMGEgMDAwMDIwNTUw
MDAwMDAxNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNt
ZCAweDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMTUgMDAwMDAwMDAwMDAwMDAw
MCAwMDAwMDAwMDAwMDAwMDAwDQowMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAw
MDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMTYgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAw
MDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA1NzAw
MDAwMDE3IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21k
IDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxNyAwMDAwMDAwMDAwMDAwMDAw
IDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBh
IDAwMDAyMDU4MDAwMDAwMTggMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVO
KSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDE4IDAwMDAw
MDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAw
MDFiMDAwMDAwMGEgMDAwMDIwNTkwMDAwMDAxOSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAw
MDAwMTkgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQg
MHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjA1YTAwMDAwMDFhIDAwMDAwMDAwMDAwMDAwMDAg
MDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMg
MDAwMDAwMDAwMDAwMDAxYSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4p
IHBJVFMgIGNtZCAweDBhOiAwMDAwMDAxYjAwMDAwMDBhIDAwMDAyMDViMDAwMDAwMWIgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAw
MWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAwMDFiIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KKFhFTikgcElUUyAgY21kIDB4MGE6IDAwMDAwMDFiMDAwMDAwMGEgMDAwMDIwNWMwMDAw
MDAxYyAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAw
eDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAwMDAwMDAwMDAwMDAwMWMgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAw
MDAwMjA1ZDAwMDAwMDFkIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikg
cElUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAxZCAwMDAwMDAw
MDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBhOiAwMDAwMDAx
YjAwMDAwMDBhIDAwMDAyMDVlMDAwMDAwMWUgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwDQooWEVOKSBwSVRTICBjbWQgMHgwYzogMDAwMDAwMWIwMDAwMDAwYyAwMDAwMDAwMDAwMDAw
MDFlIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgcElUUyAgY21kIDB4
MGE6IDAwMDAwMDFiMDAwMDAwMGEgMDAwMDIwNWYwMDAwMDAxZiAwMDAwMDAwMDAwMDAwMDAwIDAw
MDAwMDAwMDAwMDAwMDANCihYRU4pIHBJVFMgIGNtZCAweDBjOiAwMDAwMDAxYjAwMDAwMDBjIDAw
MDAwMDAwMDAwMDAwMWYgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSBw
SVRTICBjbWQgMHgwNTogMDAwMDAwMDAwMDAwMDAwNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgdmdpYy12My1pdHMuYzo5MDI6ZDB2MCB2
SVRTICBjbWQgMHgwYTogMDAwMDAwMWIwMDAwMDAwYSAwMDAwMjAwNDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgZ2ljdjNfbHBpX3VwZGF0ZV9ob3N0X2Vu
dHJ5OiBob3N0X2xwaSA4MjU2IGRvbWFpbiAwIHZpcnRfbHBpIDgxOTYNCihYRU4pIHZnaWMtdjMt
aXRzLmM6OTAyOmQwdjAgdklUUyAgY21kIDB4MDU6IDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHZnaWMtdjMt
aXRzLmM6OTAyOmQwdjAgdklUUyAgY21kIDB4MGM6IDAwMDAwMDFiMDAwMDAwMGMgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHZnaWMtdjMt
aXRzLmM6OTAyOmQwdjAgdklUUyAgY21kIDB4MDU6IDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHZnaWMtdjMt
aXRzLmM6OTAyOmQwdjAgdklUUyAgY21kIDB4MGM6IDAwMDAwMDE3MDAwMDAwMGMgMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIHZnaWMtdjMt
aXRzLmM6OTAyOmQwdjAgdklUUyAgY21kIDB4MDU6IDAwMDAwMDAwMDAwMDAwMDUgMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCg==

--_003_HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0HE1PR05MB4794eurp_--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 07:29:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 07:29:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33632.64755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh6Hg-00023W-Jm; Mon, 23 Nov 2020 07:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33632.64755; Mon, 23 Nov 2020 07:29:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh6Hg-00023P-GX; Mon, 23 Nov 2020 07:29:12 +0000
Received: by outflank-mailman (input) for mailman id 33632;
 Mon, 23 Nov 2020 07:29:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xOkN=E5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kh6He-00023K-S2
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 07:29:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d2eae3d-3400-4d94-a564-a9f22a0bfcd8;
 Mon, 23 Nov 2020 07:29:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B892ABCE;
 Mon, 23 Nov 2020 07:29:08 +0000 (UTC)
X-Inumbo-ID: 2d2eae3d-3400-4d94-a564-a9f22a0bfcd8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606116548; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=i09SCvlePJ2J/OPXmW03ImaqfNDdSHiM7gHhLVc0sAE=;
	b=drX6B+4WzXHC6ZlBxsWb0tsYNdZVzjP//rp8+kt67jiJmKBel8HfI+VOjUFhA+WtAwT9+X
	Ia7lLtNiDN4o6vKmzptkuen14hUiHs5SOI3C+pteEjR/+xROTHldKi5yfzqt8LWwGlvxbw
	Y7xlaUA0ysrHyaI2Ikkgo64K1zxZQyA=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <77067fa0-d902-9091-50e0-d6e15e34b159@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
Message-ID: <70b4c68e-4131-1543-fa66-0efd743a055a@suse.com>
Date: Mon, 23 Nov 2020 08:29:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <77067fa0-d902-9091-50e0-d6e15e34b159@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Hd5XsRLGsgwwVLZo82mlVFTJfc1TcY5Pl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Hd5XsRLGsgwwVLZo82mlVFTJfc1TcY5Pl
Content-Type: multipart/mixed; boundary="vqkhtjnuIxrozriPkBil9AOpzBexo3UD3";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Message-ID: <70b4c68e-4131-1543-fa66-0efd743a055a@suse.com>
Subject: Re: [PATCH v6 2/3] xen/evtchn: rework per event channel lock
References: <20201109163826.13035-1-jgross@suse.com>
 <20201109163826.13035-3-jgross@suse.com>
 <77067fa0-d902-9091-50e0-d6e15e34b159@suse.com>
In-Reply-To: <77067fa0-d902-9091-50e0-d6e15e34b159@suse.com>

--vqkhtjnuIxrozriPkBil9AOpzBexo3UD3
Content-Type: multipart/mixed;
 boundary="------------578FAFE8DF50A2D49AE1D294"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------578FAFE8DF50A2D49AE1D294
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.11.20 14:19, Jan Beulich wrote:
> On 09.11.2020 17:38, Juergen Gross wrote:
>> Currently the lock for a single event channel needs to be taken with
>> interrupts off, which causes deadlocks in some cases.
>>
>> Rework the per event channel lock to be non-blocking for the case of
>> sending an event, removing the need to disable interrupts when taking
>> the lock.
>>
>> The lock is needed for avoiding races between event channel state
>> changes (creation, closing, binding) against normal operations (set
>> pending, [un]masking, priority changes).
>>
>> Use a rwlock, but with some restrictions:
>>
>> - Changing the state of an event channel (creation, closing, binding)
>>    needs to use write_lock(), with ASSERT()ing that the lock is taken
>>    as writer only when the state of the event channel either before or
>>    after the locked region is appropriate (either free or unbound).
>>
>> - Sending an event needs to use read_trylock() mostly; if the lock
>>    cannot be obtained, the operation is omitted. This is needed as
>>    sending an event can happen with interrupts off (at least in some
>>    cases).
>>
>> - Dumping the event channel state for debug purposes uses
>>    read_trylock(), too, in order to avoid blocking in case the lock is
>>    taken as writer for a long time.
>>
>> - All other cases can use read_lock().
>
> One of the implications is that racing invocations of ->set_pending()
> are now possible for the same port. Beyond what I said in reply to
> 0/3 already, I'm afraid there are (latent) issues:
>
> 1) The update of ->pending (or basically any bitfield in struct
> evtchn, or yet more generically any field getting updated in a read-
> modify-write fashion) is no longer generally safe in any of the
> hooks called with just a read lock held. ->pending itself is not an
> issue now merely because it shares storage only with xen_consumer,
> which won't get updated once a port was bound.

This is fragile.

We should put the pending indicator into a dedicated byte.

> 2) Of two racing sends, one may now complete without the port
> actually having got fully recorded as linked in the FIFO code. This
> is because the party losing the race of setting EVTCHN_FIFO_LINKED
> will return early, without regard to whether the winner has made
> enough progress. (Of course this is possible only with an
> intermediate queue change, as only then the lock would become
> available to the second of the senders early enough.)

No, I don't think this is limited to a queue change. If a caller of
evtchn_fifo_set_pending() is interrupted after setting
EVTCHN_FIFO_PENDING, and a second caller then makes it to setting
EVTCHN_FIFO_LINKED, the first caller won't even try to take the queue
lock, resulting in evtchn_check_pollers() being called before the
event has been properly put into the queue.

I'd suggest extending the FIFO queue lock region in order to mitigate
this problem.
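The window can be modelled sequentially (a toy model with invented
names, not the actual evtchn_fifo_set_pending()): once one sender has
set LINKED, any sender that merely got as far as PENDING returns without
ever touching the queue lock:

```c
#include <stdbool.h>

#define PENDING (1u << 0)
#define LINKED  (1u << 1)

static unsigned int word;        /* per-event control word */
static int queue_lock_taken;     /* how often the queue lock was taken */

/* Toy model of the set-pending flow: the sender that loses the race to
 * set LINKED bails out early and never takes the queue lock, so it may
 * signal pollers before the winner has finished linking the event. */
static void set_pending_model(void)
{
    word |= PENDING;
    /* a real sender can be interrupted right here ... */
    if (word & LINKED)
        return;                  /* lost the LINKED race: early exit */
    word |= LINKED;
    queue_lock_taken++;          /* winner links the event under lock */
}
```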

>
> I've gone through other functions called from this path and didn't
> find any further race potential there, but I'm not entirely certain
> I didn't miss anything.

I can prepare a patch if you agree with my ideas.


Juergen

--------------578FAFE8DF50A2D49AE1D294
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------578FAFE8DF50A2D49AE1D294--

--vqkhtjnuIxrozriPkBil9AOpzBexo3UD3--

--Hd5XsRLGsgwwVLZo82mlVFTJfc1TcY5Pl
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+7ZMMFAwAAAAAACgkQsN6d1ii/Ey+Y
owf/XRhml7vg/AN5HHVWcom+FP9smUNe5+HQP0aQCqukx+Q2wuXyD8vmS8S05zUn82lqjEPYFbbi
7dHq9uUpIawxiAWHrdaPD3DEABklJLYUtW/pwp8QOjBoLEWqknFn7KGNW+u0FoXVg9knFUvCnfpM
hkSOUhsIv0LSs1m19ylOwhH5B9QH+pHvAsk/OFPjnsDeaoBbzzVkFs2RwJsWWo7afZQUf3QZYRH4
5zK9mKa137gm7/lYM9Xx2H4329cPeOSXjG3lXIizNLI2s6CwM/4uTJ64qdIsV2Fg/G4UO3Ba8ZZm
CYzfpHqB1EpZU0izeLIW+KdGSy9/Rc/7WFolqS8obQ==
=z5CC
-----END PGP SIGNATURE-----

--Hd5XsRLGsgwwVLZo82mlVFTJfc1TcY5Pl--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 08:31:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 08:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33647.64774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh7Fh-0001DT-Jx; Mon, 23 Nov 2020 08:31:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33647.64774; Mon, 23 Nov 2020 08:31:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh7Fh-0001DM-Gj; Mon, 23 Nov 2020 08:31:13 +0000
Received: by outflank-mailman (input) for mailman id 33647;
 Mon, 23 Nov 2020 08:31:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kh7Fg-0001DH-4Y
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 08:31:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb9f1317-a422-4eb8-bffd-1882866552fe;
 Mon, 23 Nov 2020 08:31:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 92EEDAFFB;
 Mon, 23 Nov 2020 08:31:09 +0000 (UTC)
X-Inumbo-ID: fb9f1317-a422-4eb8-bffd-1882866552fe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606120270; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cMEKD6XKjxxI1BMkdYnOKXh/k36eVZVR4ToH07KhWZg=;
	b=u5kdi7J8KUbzWBWlwC2aQe0MijDcrSAVwrzFo66McB9N1MHCGajVmnCEWIPNEbiHy8ueB2
	Tih5DrXxvF+xISCQcqx9UUGByFmh19ABGsIruY5iPcuUcCofIe0fUGhxTZri/vFtRKYmHN
	2vtnhAZgN52j06I9tRhETx3rvLoQCcc=
Subject: Re: [PATCH v2 1/7] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
To: Julien Grall <julien@xen.org>
Cc: alex.bennee@linaro.org, masami.hiramatsu@linaro.org, ehem+xen@m5p.com,
 bertrand.marquis@arm.com, andre.przywara@arm.com, Rahul.Singh@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-2-julien@xen.org>
 <96a97d2f-90dd-c4a7-5747-825c832ce56d@suse.com>
 <dffb010c-c647-89e6-293a-0b2f4a910503@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1d88f199-e8d0-0293-69ff-51b01da6caa7@suse.com>
Date: Mon, 23 Nov 2020 09:31:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <dffb010c-c647-89e6-293a-0b2f4a910503@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 18:44, Julien Grall wrote:
> Hi Jan,
> 
> On 20/11/2020 16:03, Jan Beulich wrote:
>> On 23.10.2020 17:41, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
>>> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
>>>
>>> Currently, the former are still containing x86 specific code.
>>>
>>> To avoid this rather strange split, the generic helpers are reworked so
>>> they are arch-agnostic. This requires the introduction of a new helper
>>> __acpi_os_unmap_memory() that will undo any mapping done by
>>> __acpi_os_map_memory().
>>>
>>> Currently, the arch-helper for unmap is basically a no-op so it only
>>> returns whether the mapping was arch specific. But this will change
>>> in the future.
>>>
>>> Note that the x86 version of acpi_os_map_memory() was already able to
>>> map the 1MB region. Hence there is no addition of new code.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
>>> Tested-by: Rahul Singh <rahul.singh@arm.com>
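The split described in the commit message can be pictured roughly like
this (hypothetical names and stubbed mappers; in the real code the arch
hook is __acpi_os_map_memory() and the generic fallback is vmap()):

```c
#include <stddef.h>

typedef unsigned long paddr_t;

/* Arch hook: may satisfy the request itself (e.g. via fixmap for
 * special ranges) and returns NULL when it does not handle the range.
 * Here we pretend the arch handles the low 1MB directly. */
static void *arch_map_stub(paddr_t phys, size_t len)
{
    (void)len;
    return phys < (1UL << 20) ? (void *)(0x1000000UL + phys) : NULL;
}

/* Generic fallback, standing in for vmap(); may fail after boot. */
static void *generic_map_stub(paddr_t phys, size_t len)
{
    (void)len;
    return (void *)(0x2000000UL + phys);
}

/* Arch-agnostic helper: offer the mapping to the arch first, and fall
 * back to the generic mapper only when the arch declined. */
static void *acpi_map_sketch(paddr_t phys, size_t len)
{
    void *p = arch_map_stub(phys, len);
    return p ? p : generic_map_stub(phys, len);
}
```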
>>
>> This change breaks shutdown on x86. Either Dom0 no longer requests S5
>> (in which case I'd expect some data collection there to fail), or Xen
>> refuses the request. As a result, things go the machine_halt() path
>> instead. I've looked over the change again, but couldn't spot anything
>> yet which might explain the behavior. Yet reverting (just the non-Arm
>> parts, so I wouldn't have to revert multiple commits) made things
>> work again.
> 
> Thank you for the report and sorry for the breakage.
> 
> When changing the behavior of __acpi_map_table(), I failed to realize 
> that some x86 code will call it directly rather than 
> acpi_os_map_memory(). This is the case for acpi_fadt_parse_sleep_info(),
> which detects whether ACPI can be used to put the system in a sleep state.
> 
> I am tempted to require all callers that need to map memory to use
> the generic implementation acpi_os_{,un}map_memory().
> 
> However, AFAICT, some of the callers (such as acpi_sleep_prepare()) are 
> using __acpi_map_table() because the function never failed before. By 
> using the generic function, all mappings after boot will use vmap(),
> which may fail.

The FACS mapping can be (and perhaps should long have been) switched
to acpi_os_map_memory(). The other two direct uses of the function,
however, will require more care. I'm in the process of making a
series, also noticing some further shortcomings of the FACS handling.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 08:50:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 08:50:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33653.64786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh7Y2-00039v-77; Mon, 23 Nov 2020 08:50:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33653.64786; Mon, 23 Nov 2020 08:50:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh7Y2-00039n-0q; Mon, 23 Nov 2020 08:50:10 +0000
Received: by outflank-mailman (input) for mailman id 33653;
 Mon, 23 Nov 2020 08:50:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7Y0-00039U-Ds; Mon, 23 Nov 2020 08:50:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7Y0-0002xI-02; Mon, 23 Nov 2020 08:50:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7Xz-00043s-Mq; Mon, 23 Nov 2020 08:50:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7Xz-00035f-MK; Mon, 23 Nov 2020 08:50:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0oTykOIy5LTsjQsg3/+MCcW21nsbpWFEL/9da6tN/78=; b=nBZFk4iV8MxZNYAWnSQhJv//y6
	i2Z3IH6QFylS2Glpjm3v3acVUlHdFwsnJEHq3X+m136QegwAtBq28OVIBBvMm9FccOhBgddHDM+4q
	wTDHv/76k7Q3xqRTEtkGBI3FvfqIOMxPO7fe1fohQtRZ8824ThM263zBYQJQhEmQU+NM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156958-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156958: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=fd674c09688cedea07eb7c033cbb1eed8ff287f6
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 08:50:07 +0000

flight 156958 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156958/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              fd674c09688cedea07eb7c033cbb1eed8ff287f6
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  136 days
Failing since        151818  2020-07-11 04:18:52 Z  135 days  130 attempts
Testing same since   156915  2020-11-21 04:19:11 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28333 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 09:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 09:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33674.64801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh7vV-0005Za-8f; Mon, 23 Nov 2020 09:14:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33674.64801; Mon, 23 Nov 2020 09:14:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh7vV-0005ZT-5L; Mon, 23 Nov 2020 09:14:25 +0000
Received: by outflank-mailman (input) for mailman id 33674;
 Mon, 23 Nov 2020 09:14:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7vT-0005ZL-PC; Mon, 23 Nov 2020 09:14:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7vT-0003UN-GZ; Mon, 23 Nov 2020 09:14:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7vT-0005ev-6s; Mon, 23 Nov 2020 09:14:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kh7vT-0005pu-6M; Mon, 23 Nov 2020 09:14:23 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6pxuZMBpF8BaWn1GBU1ybljZjoiqRjgIvmhFqEHyNHg=; b=wY8A7g4Yheuo2LUfnKbTf0Q+tQ
	Zc0LcGqVLHxMbS3slNZN/j2LAq6bZR0BbsXxz3d1LjEQ9mm++CWQcJ1tMID4P0lvI16y6fGLOkHhT
	tXfGp3h6xgdn7Duvu3Wu+L7n6lKzlEIGsaBDroSgT6YVfsAlPMifgtK51zd4IX3xes+w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156953-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156953: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8cc30eb1400fc01f2b139cdd3dc524f8b84dbe07
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 09:14:23 +0000

flight 156953 qemu-mainline real [real]
flight 156960 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156953/
http://logs.test-lab.xenproject.org/osstest/logs/156960/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8cc30eb1400fc01f2b139cdd3dc524f8b84dbe07
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   95 days
Failing since        152659  2020-08-21 14:07:39 Z   93 days  199 attempts
Testing same since   156953  2020-11-23 00:08:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67462 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 09:49:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 09:49:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33686.64816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh8TT-0000MO-Ek; Mon, 23 Nov 2020 09:49:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33686.64816; Mon, 23 Nov 2020 09:49:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh8TT-0000MH-9z; Mon, 23 Nov 2020 09:49:31 +0000
Received: by outflank-mailman (input) for mailman id 33686;
 Mon, 23 Nov 2020 09:49:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kh8TS-0000MC-5D
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 09:49:30 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83da4162-00ac-4b4d-96f6-cf13af9158f5;
 Mon, 23 Nov 2020 09:49:28 +0000 (UTC)
X-Inumbo-ID: 83da4162-00ac-4b4d-96f6-cf13af9158f5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606124968;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=JAP52AdonB3Z7Dh+6YuV32toCEepximl/yiUayUcmPI=;
  b=doOUMlf/kyFDvwIBPPGUvzwhyYM4XlKgNuP4hXEvXwBN6ky2r0Bc8IT/
   e3NbX8JKxiU8Rb0IL0IkbLlBejdaIIilP0EY7fI96aQkcDfRwlRH18oqV
   oH0NImBvlEItZhXdGvuQ1Hz0PldBvfVsNhsCPtwmhDXCZgyLFUqrxo803
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: TOBUjuq1fzHaVhzdt5eMkZuBUJL851VcAtdqAE+Jxrz60PvJmouCVm6wE8Egi8vep/mrDLP88C
 +msOXakPrCYIZQaKjqEFZyuuk7SBaSuzm8eWe6XbG+93I4C4gaMgMs0xKmtrR8gND/ZVwGty2M
 GFtR09/zVXOTRqEiRf1DrvHnzGq0Y5hCrsHdArovH/Prf7ooGSDedxrjF6qgDxFoGcuUKGD+9R
 FD8QYrZkK4ZNKx7zvysLHAYyJGUMCEhmki2B0m58gdetDG6dTgcihZc7IP2xTq+0Mzns9Qx3ht
 T5o=
X-SBRS: None
X-MesageID: 32883893
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32883893"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Cy4hQmYkr5U8k2Hg3yHcBK2wOgUz36CBFxsuXnu9OrFd7p3GpZsU8plDwUBSfMqBcDIfiQ4FXUEBZoPG+Cxw/0zPvNgi6xGO6meW4jRpVDnfJxYmmFIlC1oLvI3Y35T8S9qWpV7N2cyzDnE6Jolu7RapdXFxMzpNiV5XKw6bS/L6pRLiaUl3/6rJPhdBnrpVL1GxKkvyh9lVSpAabn7KNDTA+rSel8sb+OmioUVv0BFe4ht/awn1gm5ZZMXwv/YoxJRkXPutb/ND15k3yvaFu2w/7wkDkGA2PRwTaEmuO8fwt3cn1VA3xTCdfXuf1k9S85MtcMFG2fc2EiqwqKXDCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PJFMd475iwVIU2mGpNJbjXvsEuSEI59CiBzjPZm+gac=;
 b=Jqcygkd5aVibyw1QmZC3VVc/szyVI0+8O49vq3mRPrQRfOnJJoiT+qlnljL/N9SCzwcGkWUQAokJH1SHYkjXXGusfdZ7Z8oAml3oKcYrK4D8MALNZ5zKrQ/Ib+jCoQn8uWmcf8H9f6ry/ySUZFtviA12wXD09npHnmBUXGgsb7QFSOlM3/t/CJKYUSoFceqdcN1CfMvlIzbaAZ2AuJYD0LCPvaA2/RL32SsFTnltzSY0FeLAf50wNanr32bh7YHpn156XMwNV3WTZgXaHc2jTZ2NYxapuUU39u/AhiJZtuOgDadP90PZdZOumkgZRI6V6q/Z47yL620u5X8NzKeAFg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PJFMd475iwVIU2mGpNJbjXvsEuSEI59CiBzjPZm+gac=;
 b=X8h2OrSe2Ua+xwjG6xujhMfdonWL5sBsNzCSnhtzrjLPBHmgYB+XbhdIcsDqjxpNT27rE//cXuMlwixoBTiY32Nx+dNqEhXNaRVh9CYxn3BAkoaQZPMHaoGz9yxSEh8g3X/o4HGMxkjOEIXCX2OeWslxRCv5jBbSbM6HPLL9a6Q=
Date: Mon, 23 Nov 2020 10:49:18 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123094918.xat7oka47nehqsng@Air-de-Roger>
References: <20201118121403.GC3126@antioche.eu.org>
 <20201118143928.hvamuf7t7jycsrzb@Air-de-Roger>
 <bb2b6182-f3a6-61e5-ee70-90a65ae56435@suse.com>
 <20201119141915.igyb7djkw47rf2dt@Air-de-Roger>
 <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <5e637a72-085d-45b9-aa5c-01e138c81875@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5e637a72-085d-45b9-aa5c-01e138c81875@suse.com>
X-ClientProxiedBy: MRXP264CA0048.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1c06ff43-8254-480c-6755-08d88f950f78
X-MS-TrafficTypeDiagnostic: DM5PR03MB3209:
X-Microsoft-Antispam-PRVS: <DM5PR03MB3209C60F211EAFE8479E480F8FFC0@DM5PR03MB3209.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: mq/qzPiWwuC7dBazRhnK/365vjX2rXO5J48lhMvyXXs6i6NdTYQUfFgx6SpfMo69QEE2zyu9Wry/Mdmmr1YcO9N89p4KCY6Al2+UhgUHxU1v57j8VMxJDGFnxHEnhcprDyluLoyA2AAItpK7Jzx7v8UMSLtLSR8g8yXudPPWLZwEB5SH9cPJsn+VOl2lAggr8Omb2xaclJPrPCthPzhSY7OK8Sz5sWelYKJhoCOi7P16uwpwjTXblN0QDeaK0jR24f06FRKNJjYLxGFiIZoD6OpY6dD7zKCwWqEfhKUPmB1gRzI7fDQyYdDTCclv3JeixbWMqnk7IrZXBGPbWZfQKg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(39850400004)(396003)(376002)(366004)(346002)(83380400001)(1076003)(33716001)(2906002)(66946007)(9686003)(6916009)(85182001)(478600001)(4326008)(86362001)(316002)(6496006)(5660300002)(16526019)(8676002)(26005)(8936002)(6666004)(66556008)(53546011)(6486002)(186003)(956004)(66476007);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: J5A4NfHPJghWuxWObKd10XKzJGpSLwksUljXmTThzShWtOKfBzHcSlJs3DZhiGJItHi7P3L7rN63W3dC849JdsnxLDM3DCn05JcTrqt6u3naQHdjn690oXlcDEmegAV4KRZgiZCGlcOTRKPw4Otp5DlDbK+ihmG0jENobwhQ+TYiXd4vMXeeqa9R5sLu2WYSC87FYBWz14WCkCFm4N6PL20QyaaBkTm/t1kj6gEGo/+HOK/toi+u/BsxWNjdi0ZRCwjEEbtQgJ46AWgRYjYEsZolVLjV8WXmZx0oshNVd+aEW9ePq65NAtZMUYfxartPMYW8Ss0oeSb0LK+fVf/gdiw9d3ISeMVDcx+guEUVVJb8hkrKWMW37a6Sr8vIB0/Z1NbYSc5rckbl4tPjsxjcd9yj8qLxyeKtRVopSRE8XIHji5u0lsrxRhosox06zYdUEE0oMY4lHaBr8zcO66DY3Q4Kphq4cDBzhpUFLMCMRp37T2NJrNMD+8s+0PgZLMXK/LmEu+kJIir/iUYKxuliiqzBObUSi/NIJwkRh+pqr8JkHZ+5+GpfN88NDfLarIIFyKIWB3tYXYw3B99f5WQDgd3EPBnKeqdAc5qMkt2mBtrTh7hIhFVnnRXeLtVICCYNGNkx3eQ/ujmhOmHmQkzS4iQPHhaGbR80WqEjGR3SomH2zwt9YGFLk5yT9Xf6U0VZr2bEP4pkcZEiKVlhzEVy9QdsKDbLFTO+HVl/AW30fX6Xomvq1dLNMNg+5E3h5I/bpoCTn1P0uX2hSy7ehKMiAkg8QTRzFE0oyGcH3UUHDmvL5emTyz2v6svhgXEOt8d8hnXcJV1SJxxeZ9TVvM1g14ASHXTPiFE/KV1OvJBQUZDqT3pWNv44Dgix70Hmp4ZGxHcVPCS8iQdRfoYkmdrx5w==
X-MS-Exchange-CrossTenant-Network-Message-Id: 1c06ff43-8254-480c-6755-08d88f950f78
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 09:49:24.6548
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B/2SaKMISNtZT6rMHR4H893sLXeiKFgVu+YKMDFknokh+MDvIy/JjXEDjI516zPg9ESz32+B3JkhxfBeWbegjw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3209
X-OriginatorOrg: citrix.com

On Fri, Nov 20, 2020 at 09:54:42AM +0100, Jan Beulich wrote:
> On 20.11.2020 09:28, Roger Pau Monné wrote:
> > On Fri, Nov 20, 2020 at 09:09:51AM +0100, Jan Beulich wrote:
> >> On 19.11.2020 18:57, Manuel Bouyer wrote:
> >>> I added an ASSERT() after the printf to get a stack trace, and got:
> >>> db{0}> call ioapic_dump_raw^M
> >>> Register dump of ioapic0^M
> >>> [  13.0193374] 00 08000000 00170011 08000000(XEN) vioapic.c:141:d0v0 apic_mem_readl:undefined ioregsel 3
> >>> (XEN) vioapic.c:512:vioapic_irq_positive_edge: vioapic_deliver 2
> >>> (XEN) Assertion '!print' failed at vioapic.c:512
> >>> (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:   C   ]----
> >>> (XEN) CPU:    0
> >>> (XEN) RIP:    e008:[<ffff82d0402c4164>] vioapic_irq_positive_edge+0x14e/0x150
> >>> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
> >>> (XEN) rax: ffff82d0405c806c   rbx: ffff830836650580   rcx: 0000000000000000
> >>> (XEN) rdx: ffff8300688bffff   rsi: 000000000000000a   rdi: ffff82d0404b36b8
> >>> (XEN) rbp: ffff8300688bfde0   rsp: ffff8300688bfdc0   r8:  0000000000000004
> >>> (XEN) r9:  0000000000000032   r10: 0000000000000000   r11: 00000000fffffffd
> >>> (XEN) r12: ffff8308366dc000   r13: 0000000000000022   r14: ffff8308366dc31c
> >>> (XEN) r15: ffff8308366d1d80   cr0: 0000000080050033   cr4: 00000000003526e0
> >>> (XEN) cr3: 00000008366c9000   cr2: 0000000000000000
> >>> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> >>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> >>> (XEN) Xen code around <ffff82d0402c4164> (vioapic_irq_positive_edge+0x14e/0x150):
> >>> (XEN)  3d 10 be 1d 00 00 74 c2 <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
> >>> (XEN) Xen stack trace from rsp=ffff8300688bfdc0:
> >>> (XEN)    0000000200000086 ffff8308366dc000 0000000000000022 0000000000000000
> >>> (XEN)    ffff8300688bfe08 ffff82d0402bcc33 ffff8308366dc000 0000000000000022
> >>> (XEN)    0000000000000001 ffff8300688bfe40 ffff82d0402bd18f ffff830835a7eb98
> >>> (XEN)    ffff8308366dc000 ffff830835a7eb40 ffff8300688bfe68 0100100100100100
> >>> (XEN)    ffff8300688bfea0 ffff82d04026f6e1 ffff830835a7eb30 ffff8308366dc0f4
> >>> (XEN)    ffff830835a7eb40 ffff8300688bfe68 ffff8300688bfe68 ffff82d0405cec80
> >>> (XEN)    ffffffffffffffff ffff82d0405cec80 0000000000000000 ffff82d0405d6c80
> >>> (XEN)    ffff8300688bfed8 ffff82d04022b6fa ffff83083663f000 ffff83083663f000
> >>> (XEN)    0000000000000000 0000000000000000 0000000a7c62165b ffff8300688bfee8
> >>> (XEN)    ffff82d04022b798 ffff8300688bfe08 ffff82d0402a4bcb 0000000000000000
> >>> (XEN)    0000000000000206 ffff8316da86e61c ffff8316da86e600 ffff938031fd47c0
> >>> (XEN)    0000000000000003 0000000000000400 ff889e8da08f928a 0000000000000000
> >>> (XEN)    0000000000000002 0000000000000100 000000000000b86e ffff93803237f010
> >>> (XEN)    0000000000000000 ffff8316da86e61c 0000beef0000beef ffffffff80555918
> >>> (XEN)    000000bf0000beef 0000000000000046 ffff938031fd4790 000000000000beef
> >>> (XEN)    000000000000beef 000000000000beef 000000000000beef 000000000000beef
> >>> (XEN)    0000e01000000000 ffff83083663f000 0000000000000000 00000000003526e0
> >>> (XEN)    0000000000000000 0000000000000000 0000060100000001 0000000000000000
> >>> (XEN) Xen call trace:
> >>> (XEN)    [<ffff82d0402c4164>] R vioapic_irq_positive_edge+0x14e/0x150
> >>> (XEN)    [<ffff82d0402bcc33>] F arch/x86/hvm/irq.c#assert_gsi+0x5e/0x7b
> >>> (XEN)    [<ffff82d0402bd18f>] F hvm_gsi_assert+0x62/0x77
> >>> (XEN)    [<ffff82d04026f6e1>] F drivers/passthrough/io.c#dpci_softirq+0x261/0x29e
> >>> (XEN)    [<ffff82d04022b6fa>] F common/softirq.c#__do_softirq+0x8a/0xbf
> >>> (XEN)    [<ffff82d04022b798>] F do_softirq+0x13/0x15
> >>> (XEN)    [<ffff82d0402a4bcb>] F vmx_asm_do_vmentry+0x2b/0x30
> >>> (XEN) 
> >>> (XEN) 
> >>> (XEN) ****************************************
> >>> (XEN) Panic on CPU 0:
> >>> (XEN) Assertion '!print' failed at vioapic.c:512
> >>> (XEN) ****************************************
> >>
> >> Right, this was the expected path after what you've sent prior to this.
> >> Which turned my attention back to the 'i' debug key output you had sent
> >> the other day. There we have
> >>
> >> (XEN)    IRQ:  34 vec:51 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
> >>
> >> i.e. at that point we're waiting for Dom0 to signal it's done handling
> >> the IRQ. There is, however, a timer associated with this. Yet that's
> >> actually to prevent the system getting stuck, i.e. the "in-flight"
> >> state ought to clear 1ms later (when that timer expires), and hence
> >> ought to be pretty unlikely to catch when non-zero _and_ something's
> >> actually stuck.
> > 
> > I somehow assumed the interrupt was in-flight because the printing to
> > the Xen console caused one to be injected, and thus dom0 didn't have
> > time to Ack it yet.
> 
> By "injected" you mean from Xen into Dom0, or by the hardware for Xen
> to handle? (I ask because I think I saw you use the term also for the
> latter case, in some context.) If the former, then something would
> need to have caused Xen to inject it, while in the latter case there
> would need to have been a reason that it didn't get delivered earlier.

Sorry, wrote this in a hurry and didn't realize it could be
misleading. I meant injected from hardware to Xen, which would then
also be injected from Xen to dom0.

I would expect softirqs to be running normally (as you already asked,
and Manuel confirmed that the watchdog is not triggering).

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 09:57:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 09:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33695.64828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh8bP-0001Ro-8B; Mon, 23 Nov 2020 09:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33695.64828; Mon, 23 Nov 2020 09:57:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh8bP-0001Rh-5D; Mon, 23 Nov 2020 09:57:43 +0000
Received: by outflank-mailman (input) for mailman id 33695;
 Mon, 23 Nov 2020 09:57:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kh8bO-0001Rc-0u
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 09:57:42 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b0f31a6-0d6b-4e6b-8438-f59cbbc32c30;
 Mon, 23 Nov 2020 09:57:40 +0000 (UTC)
X-Inumbo-ID: 7b0f31a6-0d6b-4e6b-8438-f59cbbc32c30
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606125460;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=FACOP9clLsqQRO2M9eoube6dA+RVyfetgtZuOvtMFuo=;
  b=BQLtG0QSUhKcicBJSoXjpy0EQ4TICGtsc2Z5OUNtm+QS1u2uoPVjjKlY
   Sj4P0TNw6GMV5wZW7XHxkfjy4Lyr1t6/+HY87hLQcFH49uxomYapUyOC6
   ZihV/UPfxcSx0HLZ3/Yt6/3eYW6iI+LbdiNwPLSlrqvAi/C1CCaFEhXeS
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +Qc82t5JRKzqzJRCJ+u7kUzebj3VM7I8RacvfF4bkSQ74mRFVrlLClbz/CARfa51WAXdEEc/PZ
 /QbEVxs0cJ7LfqWG8H6FfDidq7TIBTV6O4mSWuALF3ngfJOWoQhRUFbh76epx09iTcl5c8i3mq
 IFvmm+K1Dyz4kEG2bs92G2M9mGmMwI0UGneWA3VEO39R8Z4bQyKDxZh/+S8QL6EMIHRXzZVqHQ
 sDhkqpFndOwTH/c7sVw5Q3GEDYXiquHI3BCMBdBU+B/CU0pdLcrk/lcuM9RKWxZfoNGzktSb3Z
 Yeo=
X-SBRS: None
X-MesageID: 31960572
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31960572"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UiJkjNlbv5isphXqBD6pLOjy3J4UT5FXjV2MpNGgp3ysB2Up0ErUFmDtFU3VfPMef0k/Y57tTurcLwlVdP3Tp7G6+CkYlsLA4Du9SwFWw/jQAMYFd0NZq1YRa88YZfrUPr4yKrXQc4P0boiGDpVgbmTDWjkb2dG9zzQLD42MRZ3MPccACKc1WFgkw5EI2rr+0oKX9mnfERAkVXQYL6+UyvMBPPO0cd2/EcxFeWXn5arZQyN81eNJEYgfn3Mj9wfEesT1Z5lRZVb6amTXTfqoFXmkNyvLrTnkAcTNvw+6Vi/132TGjjBmCFCnNOzyIbVFmOx2E5xehSXCaZLcl8TBbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hwAq/saEr/OrWnzPV+cn/1xvxOm5C3fwrhPvbNNontc=;
 b=ALTrKs7GhprmMCnQp7QdDI9hlGnd6nZksWbr0qjNhTvdLIAhvUWKhajfxPslyRsYIugfJDcXlxYDWx6hvOD6GLNDuJc3pJYzELP7iazJpEkq/uhqh0D4SNPe6hCwZSPfZl0lbj/bDEmJ3g/chWqCEWLXzTtmfCE9PA+Wt5rPc5AofKO4Mse5N8SAcD7ea2N6rIwBcnCYFGRCq1BeoxnenhwLG8fuxY4O0SMlMQWiS8KSvnwGaBlkkDkvxzDqSRD7WB9mmZoLdK6XG0KPyupjRaP/K9DXfxirs2EyMWYTB8CcRYF3F9Z5ilaWKDL9mCqlSdP7xdGxbwzF8a0NdvGGzg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hwAq/saEr/OrWnzPV+cn/1xvxOm5C3fwrhPvbNNontc=;
 b=t3joNi+0UKr9RBitbME7oXNXb59IJLL8vDyfKfANLP5iR/8YBg+PYlSrQ7bdSCY8pYLqVpZ3G20CWtvuBKIt1Y6PWbE+U4YoCamffHiqtjeg4X+ElC9relSFp9yzBuchUzzqKN9acraz88nVqEybM6iitvoqUHg8lA3x5ALSfmY=
Date: Mon, 23 Nov 2020 10:57:13 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
References: <20201119155718.GB4104@antioche.eu.org>
 <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201120103824.GJ1508@antioche.eu.org>
X-ClientProxiedBy: LO2P265CA0302.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e457a894-4116-48d4-62ed-08d88f962960
X-MS-TrafficTypeDiagnostic: DM5PR03MB3371:
X-Microsoft-Antispam-PRVS: <DM5PR03MB3371DEC7E709EFB6230246DF8FFC0@DM5PR03MB3371.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: fhp68/h6a9zBmPKH1aZ7UR94h71fLNOA0mwf5Y8vgP0z8gDIxkFA9C/7SUzEVYkXeT/8UDoJNL0WBxMbzsyoI5E5b9OXz2+qI+O6/dBr2MooSiWVwOUOSWZ9VzJ+X0ln4B8JqsuG9D/GMN8D2tLli+l2izTfRWvYLextnkxiz57Ane5/8Z/PKYO+qJJiP1Cd+449GDU7RqGWkueQDh1MqynN8ze/nJoQXUJmCVZZvhBCe0JLUfUmSwJinec+kTX2ZnF2zIONm4JhGdWgQ2SstBbbyDRXpkTpDBpJTHYRvzqIHQTnG1mjJuT0y/k6BxicM+qy9FXe3U0H/zvDPJlc8A==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(396003)(366004)(346002)(376002)(136003)(39850400004)(16526019)(6496006)(6916009)(9686003)(83380400001)(53546011)(1076003)(8936002)(26005)(186003)(85182001)(66476007)(66556008)(4326008)(33716001)(2906002)(316002)(86362001)(6666004)(66946007)(6486002)(8676002)(956004)(5660300002)(478600001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: pYDPpQ9xjG6RHXHsan0Ge+mNHWEcmp6prIS3esD5t68/Ckebwb9po6bxRHCgEJo8B9yfUKnakGtr0jirhYjA0J3mFe+tPJ1NwOP40yMWVcpyRqSu41v+J0hHTC2YXCkNKPqZR0uKzLtWgDkpyPb++eTiWFi+CcD6cvxecf1T5ZMdrd/Sj4LR76YS88ZYQ6VSqmznY/Y8oqb6WH2DPvNxMJK6DaIw+TR22+UUNL3FlpCti+CiJfJJ+WSQ6kdFqPVcBIyzHMmEXDlYMnk7bFcd8GCNataontDRqPxzzXcr47AkzDkWanwKTtsCSH6R2PA1UbavRe3WJBvskTsmNn00PJSiehMJlcEoxs5I5Vd0++BOPW37VTYeggFNeTODNe5+W2L673tZoh4W41Ll8JAbRfOi9RXvMsei8DHEpil2oGQyubKLJD9I59wej13b2hPHITCd2+MGHFXUcu3hQhyID14b19aG34ufZk/U4kDAExPzWzdMBdklekUDcY+58mGerkfr0zJ96Noocs1F6YSwSAq9aTFT8cCAjjSGafQcOci8ZK7UFRIQj7Pfn9ov2jolcDsGDi/xhdwxifNvmnlvdwNVJCFuWdtyzBPdhE8zGwin+wU+gJmXNFKoiBYVlimrTXaleVsANcnDP3GAb3MXQWj1diShU3SK+GOSuMX42bDwhHpM8plv/+DJy2d5p1ey+NcgD5Ik+XqPddZG5Rb4s27WMlRKU+Mu7UV8io5FWxbtGjZIgeIYS05z1wKGT1E0JuTae51L33el7tnrmmOUyDROo76GFfEurHBiE44MzYbWapHyh50SyhzEgj90GPUOJv7XakjPZKAl3cjNK8mlHwn7W0Lmnk8dRX6/L5FWc0t/e/3OgCVw5lUGtMhXMszYCV3EozFmS6bCH9n4bqSW+g==
X-MS-Exchange-CrossTenant-Network-Message-Id: e457a894-4116-48d4-62ed-08d88f962960
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 09:57:17.6238
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1csaF3NxAtqpW+QE0glJfB4qFgZBVHjRTsORgEqFalxFE+6ENjMKTa5TwBOIwH2gL7a58UHGi//sHGmkVnFmyg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3371
X-OriginatorOrg: citrix.com

On Fri, Nov 20, 2020 at 11:38:24AM +0100, Manuel Bouyer wrote:
> On Fri, Nov 20, 2020 at 11:00:05AM +0100, Jan Beulich wrote:
> > On 20.11.2020 10:27, Manuel Bouyer wrote:
> > > On Fri, Nov 20, 2020 at 09:59:57AM +0100, Jan Beulich wrote:
> > >> Well, anything coming through the LAPIC needs ack-ing (except for
> > >> the spurious interrupt of course), or else ISR won't get updated
> > >> and further interrupts at this or lower priority can't be serviced
> > >> (delivered) anymore. This includes interrupts originally coming
> > >> through the IO-APIC. But the same constraint / requirement exists
> > >> on baremetal.
> > > 
> > > OK, so even if I didn't see where this happens, it's happening.
> > > Is it what's Xen is using as ACK from the dom0 for a IOAPIC
> > > interrupt, or is it something else (at the IOAPIC level) ?
> > 
> > It's the traditional LAPIC based EOI mechanism that Xen intercepts
> > (as necessary) on the guest side and then translates (via
> > surprisingly many layers of calls) into the necessary EOI /
> > unmask / whatever at the hardware level. Our vIO-APIC
> > implementation so far doesn't support IO-APIC based EOI at all
> > (which is reflected in the IO-APIC version ID being 0x11).
> 
> OK.
> I finally found where the EOI occurs (it's within a macro so a simple grep
> didn't show it).
> 
> When interrupts are not masked (e.g. via cli), the ioapic handler is called.
> From here, 2 paths can happen:
> a) the software interrupt priority level (called spl in BSD world) allows the
>   driver's handler to run. In this case it's called, then the interrupt
>   is unmasked at ioapic level, and EOI'd at lapic.
> b) the software interrupt priority level doesn't allow this driver's handler to
>   run. In this case, the interrupt is marked as pending in software,
>   explicitly masked at the ioapic level and EOI'd at lapic.
>   Later, when the spl is lowered, the driver's interrupt handler is run,
>   then the interrupt is unmasked at ioapic level, and EOI'd at lapic
>   (this is the same path as a)). here we may EOI the lapic twice, and the
>   second time when there's no hardware interrupt pending any more.
> 
> I suspect it's case b) which causes the problem with Xen.

Case b) should be handled fine AFAICT. If there's no interrupt pending
in the lapic ISR the EOI is just a noop. If there's somehow another
vector pending in the ISR you might actually be EOIing the wrong
vector, and that would be a bug in NetBSD. I certainly don't know
enough about NetBSD's interrupt model to tell whether this second EOI
is merely unnecessary or whether it could cause issues.

Can you confirm that disabling this second, unneeded EOI actually
solves the problem?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 10:28:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 10:28:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.33710.64846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh95Q-0004hp-RB; Mon, 23 Nov 2020 10:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 33710.64846; Mon, 23 Nov 2020 10:28:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh95Q-0004hi-O7; Mon, 23 Nov 2020 10:28:44 +0000
Received: by outflank-mailman (input) for mailman id 33710;
 Mon, 23 Nov 2020 10:28:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kh95Q-0004hd-02
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 10:28:44 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0336085d-d0c2-4e49-945a-6030380555ee;
 Mon, 23 Nov 2020 10:28:42 +0000 (UTC)
X-Inumbo-ID: 0336085d-d0c2-4e49-945a-6030380555ee
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606127322;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=JbFcyG1geXIbPfSiPoanX3UaC0NrUBzcTd5k+xwL+Jc=;
  b=PbgqbSC9Gjpgze46Iqg3t78MWTpfrH3d9RuxUX7Bf8uFsReM1ykOb4Dz
   vWacvEQO0JHQD/y16SDNUOMlorcIwfdWEc0ZMak+sC2Kg0Nv0Fv70G27z
   ZzVIWlUWtBCHmgSyl31DM5O9SyuTfqnBxAgr5BIitjgxVPeu2hgujrOUa
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 1bRO8SMn5awFSZa2v6QL32ET6Q2hSNLaaaQojI3Z8n2HQPy1+E2JO8qtxlT0YcHIoS1Rle+H3H
 x3aMbeteU5UIRld8ky1VMyxRXqZ9r14Y0+bvjLcd7Z3S52FISkx/L7XIG19Yx2k3oy5RKq1E44
 EaqLEjR8GMJLBb5KFDxy0cIkpr5WjA8y9z6P1dZ+m89Y3eoFlKqRrKAG1pPNJHrUpVuzCpahsK
 zA8FiLmO87FVAAAQka8vL7BIb9Dy9LIO+yamSwXCFVF9l6LuNaND54rf14IH/sH2OAD7tx/Ui2
 uZc=
X-SBRS: None
X-MesageID: 31701223
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31701223"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RpiVSaRiGyZg21ibnRO0KSCIurVQM/clHyYxyuM89nNyeIrCwYA5ne/Vkg6lJ9M1s0q/DchPCgb19BzhRjerAclNmQc2j45wEmvHMecpS65lps9RBovFuWZI/y4mUox/oh0gMeYIy9dKl5YvzCUywz6NrbH8hk1XO9yyVWMzvfwGCyT7W7ZrhrUj0WPRZE+Ib0XGO3giKO24Z3JnqqleOqcGjCqUcuhoOoHzLFRpG6w1rcBVgzYOF7DOQeTlI9zjbcz0wK80zHzVOoPUgIzUfv2sGeKNskHovi5q1JWLrtFLf0WRZ20drKf0VoA1jbElBfo6cqitguwm6xOVoDwm0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SMyS7uxn/ezwxNaFplls4x7G6ok6YjvqFLqIhuFoSZ0=;
 b=cLaquo7TuvaRO9KJdOKnwk8V6l15TqjLdR5Hpe3/KkjgJNXf9mvdO8vEofIm1PndwwOLfWH+pUR7dzdZOkrhtWPmYinVGwGrkDlp4UdK4ZsmnI9NBHjEYs2rj5i5q9qxCY8Lk7zV869IcTzr1KcnfZC4gtqsFupM74UDR973P72CZ8evlheC3yUT05AN02ggl3AXsraFpEVn3/ugZ4GW+n7JBqKVQH6XvJL2/DPZ84eQyIOy8FVclSJh+RxjeM7OYUfbnD3DrhAD7u0dO1LwdFrG/RZ+qvCkEvpY22SRg3l7kB0dg1OFLR23Zap9plzCQflbiX/H4xZ/U32iv7Qlpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SMyS7uxn/ezwxNaFplls4x7G6ok6YjvqFLqIhuFoSZ0=;
 b=qfnIhiBZtLPGBXfYvrQ6zAkmyttl3h6Zas4SVoTh7qwYfD5UsAMepioB7o9C1LF5TFq8ejyYuvyllIC65NnlTf7QW4JY15li5C/OgmIYthOSt9hwT9mmxq8VOSKhKdFzcfjQe+oGXJmc5+LBX2Qz3j80yxm3wMsbR1MJRevjJFw=
Date: Mon, 23 Nov 2020 11:28:32 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	<xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-hardening@vger.kernel.org>
Subject: Re: [PATCH 058/141] xen-blkfront: Fix fall-through warnings for Clang
Message-ID: <20201123102832.4f33hkwuaas4vs7m@Air-de-Roger>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
X-ClientProxiedBy: LO2P265CA0054.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 26298a66-f6f2-4f2d-9a57-08d88f9a8996
X-MS-TrafficTypeDiagnostic: DS7PR03MB5542:
X-Microsoft-Antispam-PRVS: <DS7PR03MB5542198ED6A174566F6C72B98FFC0@DS7PR03MB5542.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: dcI4qZkWGiVmqxcroJHhkO5UxXMu3vgvX8q6o3oT/oiYslGLgPWVUuJuLKbvqEBI14UXoXKt0mJslSvG6wlHl1NFtpcOPNbnA5sU2ZvUkm92yGvGiNr3xdOUCpfApac2QgVR2aKMku3dsPlLT/OeLboihv9HtHK12stqfdq1yJKNkUE6EQZ+ozB8oA/Jg+5XLYXm2MYrtZ68uSIb7fFKTNz9Gjs71nPKMXRy+2QLeOzexwCc1UPFbdmnBSEzEv6azd+PExaRyv9LRdiGpk9w3o9MUQtlHA9I54BxoIs3hA5Bnfko05DGSbish30ibNlzFui8brOh19pYP0JZGCMWNwtAn3n+NYxrzCro7RdCLg8ZqW1JMpo/Jo2FvaSYrU0K7YQhTntMR/sKYIsZf4oKmw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(39860400002)(136003)(366004)(396003)(346002)(66476007)(66556008)(2906002)(1076003)(85182001)(54906003)(6486002)(9686003)(86362001)(33716001)(956004)(316002)(8676002)(83380400001)(4326008)(8936002)(66946007)(186003)(7416002)(6916009)(6666004)(966005)(16526019)(5660300002)(6496006)(26005)(478600001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: fdOccS6pzuN3DpyBkAd1oRENCi8q2xL81AvVXXkb6Xe9PqvOkYawaR6mS788hRZPOt08zzPfPL2Ze34HTMuUWqHiNntG3BgmXaXsUpXBbsC/w3q96YpuZ25wjxSHgmlcHoHSixLt/F1PQUuURJQt0jVGjtTqRmYiJ2etXg+W3pHgrUbM5E2+u3DdbwyxiaKk9+f2vTe80N6cV2n8TVVX3bT5XL4OgR69mUyHH0ly7HKJVnsHK8uOAOOojgVkiH0hBbJBOm0VjJsHUAUKCKVPUm6hnl9wrzoMpVig0V4cMr7K+Tgwldjkaz4LarFrVeXZJXaVcPLHTdxGp5PpWVuHBjA+jQmRlHQDeue9UGK5nHk3qPyIVcCPY0w9Rg5xcLLxcBbmMYuKo+8AvHnORp/kiMbhOzWW7gmz0AXRAPv+xO4r2dVRuMd/KqJd4Mb3yLr29Q1FJC+MeE/jcKKBuwzSDoIfTdD8rM1RbFmBlHQbEBHqwDQi45DRzZ7rzxPA2QenggNOvGBJXXP7GU91F2jyS7z/IURzY3aVoawTUxyMAZ22lROzqGJ/M13aTNDU2O0yCY5jkfGrVx5Bxv3ZNCySREpSQX6QUdFMAXaVvb46gW3Gazm1GVioYyrg11tIZj6HYyBGZIFel0OepC/smObiye3TsnCU1+6vy7UhmuQH7Y7N0m2fDZ7U2hwTEE7Rw8JwDRR5eeSaU6kHCLfspT+WZI0Bz7UH4EEymQ3RG8h0XhpdmdkFmTnp4YTDrzVCRrU/ZyJS6pQ/XAAAJ8gCcvE+ZSgpITJLnaj43b/afFwBJhC0d1X7gioL18ckzAj5nut/itGR1GeJ9hH6RyYWa6tRAreJGW1nkSJ/+Zw6NNqDEh5J+ELQoT6P4RZiwiMuanX/fdclF4w/DeEkC+2i+zFzPw==
X-MS-Exchange-CrossTenant-Network-Message-Id: 26298a66-f6f2-4f2d-9a57-08d88f9a8996
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 10:28:37.1129
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: R9+ul0BdrXUu6zD/tKMdHexwz+V6N4izkJGWirzuawsbwQq0hBThA+b5fm6gWXiwwi6DI0o5OcoEoHpazSyL+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5542
X-OriginatorOrg: citrix.com

On Fri, Nov 20, 2020 at 12:32:58PM -0600, Gustavo A. R. Silva wrote:
> In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
> by explicitly adding a break statement instead of letting the code fall
> through to the next case.
> 
> Link: https://github.com/KSPP/linux/issues/115
> Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
> ---
>  drivers/block/xen-blkfront.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 48629d3433b4..34b028be78ab 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -2462,6 +2462,7 @@ static void blkback_changed(struct xenbus_device *dev,
>  			break;
>  		if (talk_to_blkback(dev, info))
>  			break;
> +		break;

I would have added a fallthrough annotation instead, like it's done below
for XenbusStateClosed.

Also, FWIW, I think clang's fallthrough warnings are a bit too verbose.
Falling through to a bare break, as in the case here, shouldn't cause a
warning IMO; falling through to anything other than a break should indeed
trigger one.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:17:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:17:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34023.64857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh9py-0001ZJ-Ly; Mon, 23 Nov 2020 11:16:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34023.64857; Mon, 23 Nov 2020 11:16:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kh9py-0001ZC-Ik; Mon, 23 Nov 2020 11:16:50 +0000
Received: by outflank-mailman (input) for mailman id 34023;
 Mon, 23 Nov 2020 11:16:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh9px-0001Z4-JR; Mon, 23 Nov 2020 11:16:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh9px-00063a-5s; Mon, 23 Nov 2020 11:16:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kh9pw-0003fr-Su; Mon, 23 Nov 2020 11:16:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kh9pw-0006I0-SO; Mon, 23 Nov 2020 11:16:48 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ToGQRLt0py1lek6AADstW0PU8xseTt52VbNdYT8tyZ0=; b=JOsYeHj7ABxdQwZeZ6EcWuY0b1
	ZeAYa4WOWh5KJ6mvkykqghWjiA/3BJI4M2OEogGdJayKygwfUq3y4FbnYwiX5oO6x3b1cZm+aPEj9
	07Z39Su1N6H9tn2MwZhoqpqP2AmcAsyoSSnX7k69w1udHVXOn4HhBFb0XSwdFsQdJtyM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156955-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156955: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=418baf2c28f3473039f2f7377760bd8f6897ae18
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 11:16:48 +0000

flight 156955 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156955/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                418baf2c28f3473039f2f7377760bd8f6897ae18
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  114 days
Failing since        152366  2020-08-01 20:49:34 Z  113 days  191 attempts
Testing same since   156955  2020-11-23 01:40:54 Z    0 days    1 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 683939 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:33:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34035.64873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khA5a-0003Id-4A; Mon, 23 Nov 2020 11:32:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34035.64873; Mon, 23 Nov 2020 11:32:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khA5a-0003IW-0T; Mon, 23 Nov 2020 11:32:58 +0000
Received: by outflank-mailman (input) for mailman id 34035;
 Mon, 23 Nov 2020 11:32:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MzqB=E5=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khA5Y-0003IR-TS
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 11:32:56 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c83555a2-1431-4a71-9a52-25b4ed801777;
 Mon, 23 Nov 2020 11:32:53 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ANBWkIf006181
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 23 Nov 2020 12:32:47 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id BA9822E9CAC; Mon, 23 Nov 2020 12:32:41 +0100 (MET)
X-Inumbo-ID: c83555a2-1431-4a71-9a52-25b4ed801777
Date: Mon, 23 Nov 2020 12:32:41 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123113241.GE2520@antioche.eu.org>
References: <20201119165734.GA4903@antioche.eu.org>
 <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 23 Nov 2020 12:32:48 +0100 (MET)

On Mon, Nov 23, 2020 at 10:57:13AM +0100, Roger Pau Monné wrote:
> > [...]
> > OK.
> > I finally found where the EOI occurs (it's within a macro, so a simple
> > grep didn't show it).
> > 
> > When interrupts are not masked (e.g. via cli), the ioapic handler is
> > called. From there, two paths are possible:
> > a) the software interrupt priority level (called spl in the BSD world)
> >   allows the driver's handler to run. In this case it's called, then the
> >   interrupt is unmasked at the ioapic level, and EOI'd at the lapic.
> > b) the software interrupt priority level doesn't allow this driver's
> >   handler to run. In this case, the interrupt is marked as pending in
> >   software, explicitly masked at the ioapic level, and EOI'd at the lapic.
> >   Later, when the spl is lowered, the driver's interrupt handler is run,
> >   then the interrupt is unmasked at the ioapic level, and EOI'd at the
> >   lapic (this is the same path as a)). Here we may EOI the lapic twice,
> >   the second time when there is no hardware interrupt pending any more.
> > 
> > I suspect it's case b) which causes the problem with Xen.
> 
> Case b) should be handled fine AFAICT. If there's no interrupt pending
> in the lapic ISR the EOI is just a noop. If there's somehow another
> vector pending in the ISR you might actually be EOIing the wrong vector,
> and thus this would be a bug in NetBSD. I certainly don't know enough
> about the NetBSD interrupt model to tell whether this second EOI is
> merely unnecessary or whether it could cause issues.
> 
> Can you confirm that disabling this second, unneeded EOI does
> solve the problem?

I tried this, and it didn't change anything ...

I'm out of ideas to try.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:33:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:33:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34036.64885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khA5v-0003OL-Cu; Mon, 23 Nov 2020 11:33:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34036.64885; Mon, 23 Nov 2020 11:33:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khA5v-0003OE-9G; Mon, 23 Nov 2020 11:33:19 +0000
Received: by outflank-mailman (input) for mailman id 34036;
 Mon, 23 Nov 2020 11:33:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khA5u-0003Nv-3Z
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 11:33:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d8f3a34-8ff3-4ea1-8170-84fc69e6ed6f;
 Mon, 23 Nov 2020 11:33:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 81559AC41;
 Mon, 23 Nov 2020 11:33:16 +0000 (UTC)
X-Inumbo-ID: 9d8f3a34-8ff3-4ea1-8170-84fc69e6ed6f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606131196; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=SDONeI7DyKyFUimEW6bd/Hu8WdJdN9YQcEQmWQvjWKs=;
	b=eSNbLbZeieuC8Bs9yq1hcybeAzibyyOGgkloaJ8rW8WSTb2+i27NZiohfViVHMO8WRTSBW
	/gpF0o6z/jps0I3pLb523Lm75OPiy5yI8p93rPbD2ct+gdS6Y5i/WvOO1EM5ORUcrlJpbx
	KVl17wvz9hpgKaynUAp0wBuPWpL/yq4=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/DMI: fix SMBIOS pointer range check
Message-ID: <7823a8e0-6388-08f6-0ce6-36bd7139ff54@suse.com>
Date: Mon, 23 Nov 2020 12:33:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Ever since its introduction this check has been using an inverted
relational operator.

Fixes: 54057a28f22b ("x86: support SMBIOS v3")
Signed-off-by: Jan Beulich <JBeulich@suse.com>

--- a/xen/arch/x86/dmi_scan.c
+++ b/xen/arch/x86/dmi_scan.c
@@ -357,7 +357,7 @@ static int __init dmi_iterate(void (*dec
 			memcpy_fromio(&smbios3, q, sizeof(smbios3));
 			if (memcmp(smbios3.anchor, "_SM3_", 5) ||
 			    smbios3.length < sizeof(smbios3) ||
-			    q < p + 0x10000 - smbios3.length ||
+			    q > p + 0x10000 - smbios3.length ||
 			    !dmi_checksum(q, smbios3.length))
 				smbios3.length = 0;
 		}


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:41:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:41:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34049.64897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khADh-0004MR-C5; Mon, 23 Nov 2020 11:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34049.64897; Mon, 23 Nov 2020 11:41:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khADh-0004MK-8g; Mon, 23 Nov 2020 11:41:21 +0000
Received: by outflank-mailman (input) for mailman id 34049;
 Mon, 23 Nov 2020 11:41:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKME=E5=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khADf-0004MF-T1
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 11:41:20 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.42]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f11d01f2-cf40-4b58-a9b7-82036ed10515;
 Mon, 23 Nov 2020 11:41:17 +0000 (UTC)
Received: from AM5PR0601CA0072.eurprd06.prod.outlook.com (2603:10a6:206::37)
 by AM6PR08MB4803.eurprd08.prod.outlook.com (2603:10a6:20b:c4::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Mon, 23 Nov
 2020 11:41:14 +0000
Received: from AM5EUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::ef) by AM5PR0601CA0072.outlook.office365.com
 (2603:10a6:206::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21 via Frontend
 Transport; Mon, 23 Nov 2020 11:41:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT019.mail.protection.outlook.com (10.152.16.104) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Mon, 23 Nov 2020 11:41:13 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Mon, 23 Nov 2020 11:41:13 +0000
Received: from 606f07e76f97.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CF5C4126-6716-40F7-A5D1-8031FDA1008B.1; 
 Mon, 23 Nov 2020 11:41:08 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 606f07e76f97.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 23 Nov 2020 11:41:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6233.eurprd08.prod.outlook.com (2603:10a6:10:204::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Mon, 23 Nov
 2020 11:41:07 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Mon, 23 Nov 2020
 11:41:07 +0000
X-Inumbo-ID: f11d01f2-cf40-4b58-a9b7-82036ed10515
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VtD46QMeINkduY2XywzbAXnHVDQRRObMhdnVyogng64=;
 b=FV+jnDKMNpLR7omwhNds4kzScKQfb7jN4lx+Fmeb+q0ZU4ug5g604Xq/EnOpmhykpPtfiwa0D8+rWy5IDozIpLpFfHQrqqbqye/SiW2AiyF1y1t/WWuU3RHe+jbiORrylmyrW3SKpetyHGtN9TDn0JoKJ+vwlEImNZwK/6zfHVk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f6388205ddd3974d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jM0Ycbhc8p2Hl0iDcjWAmKWYUOoJZ/jjkh99rd7H3pSobOnw0O30C2Vh7JJnBiZ5yXhBz7Ggh5lPy1P5L9NpDo4clYL2LnwOIN3afm3mOWR/1zuoCGmOHJ78krTB9lEZYvZsnPoqxGpDCh6AqlbbG17cUc3jatNPGBoCJZn9Y5l6l8DOOF40IT+cb0uBaeIkoQI0GgUW/g0wn7CdRzAMiUr36r6Ft84wIg0GjslGXBoKpA7St5lHLUspGba8FBm07V42QfKaDfKWep6pT2fpOLNi6uUwybBbNc4vAn1erTkrSEr9yuneJxv/pjmbjOJoUytrDahZo/o3R68ZxsWfVw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VtD46QMeINkduY2XywzbAXnHVDQRRObMhdnVyogng64=;
 b=EnTxXjBEjaxqjYUnJcVp5EOcWeR0yq9g1lqykw6SEgvFgP56m4qojSaAoNZ5mh7CQS0DBvQK9fvUHDK4bLyZrW46dOcEr2sj2ge7jWZRuwmZ5R6wJcMHPQdPQcBcqJ2i7XeE9KS25lZwJKFUn/ynrHNEG4S4liYG4YaM+h0nacUCKcJIy30rESNJytkr8+gjwbGMJTNMt7xVPdZamZUp3+KEACR0YxuE4xGe6AirCGdx6fDshILonJIvxt7vJ+ygz0VoDGq74bHiIJNzxGFaFSk7C8MDYZBq/SS5eg8lX261uX5BEcONyOsKvC4J0DOhgGRV0kEHZgejmPspa5J71g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Leo Krueger <leo.krueger@zal.aero>
CC: Julien Grall <julien@xen.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>, Peng Fan <peng.fan@nxp.com>,
	"brucea@xilinx.com" <brucea@xilinx.com>, Cornelia Bruelhart
	<cornelia.bruelhart@zal.aero>, "oleksandr_andrushchenko@epam.com"
	<oleksandr_andrushchenko@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Xen data from meta-virtualization layer
Thread-Topic: Xen data from meta-virtualization layer
Thread-Index: AQHWwY2IpqlJM9tSokSREK4Y63jM5A==
Date: Mon, 23 Nov 2020 11:41:07 +0000
Message-ID: <50B2EEEF-4BF2-4511-98D5-F165A70E2EC6@arm.com>
References:
 <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <HE1PR05MB47941E23CE053CE72F18867C8BEA0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
 <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
In-Reply-To:
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: zal.aero; dkim=none (message not signed)
 header.d=none;zal.aero; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 95ff3ff2-7381-4660-809a-08d88fa4ae96
x-ms-traffictypediagnostic: DBBPR08MB6233:|AM6PR08MB4803:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4803C2596395402549A0F59EFCFC0@AM6PR08MB4803.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:169;OLM:169;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <39A058DAB3691D42B6EC3FEE89584A8D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6233
Original-Authentication-Results: zal.aero; dkim=none (message not signed)
 header.d=none;zal.aero; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	37573c1b-67f5-4bc4-c2af-08d88fa4aaa7
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 11:41:13.8403
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 95ff3ff2-7381-4660-809a-08d88fa4ae96
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4803

SGVsbG8gLA0KDQo+IE9uIDIyIE5vdiAyMDIwLCBhdCAxMDo1NSBwbSwgTGVvIEtydWVnZXIgPGxl
by5rcnVlZ2VyQHphbC5hZXJvPiB3cm90ZToNCj4gDQo+IEhpIEp1bGllbiwNCj4gDQo+IGZpbmFs
bHkgSSBjb3VsZCB0cnkgb3V0IHdoYXQgeW91IHN1Z2dlc3RlZCwgcGxlYXNlIGZpbmQgbXkgYW5z
d2VycyBpbmxpbmUuDQo+IA0KPj4gLS0tLS1VcnNwcsO8bmdsaWNoZSBOYWNocmljaHQtLS0tLQ0K
Pj4gVm9uOiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPg0KPj4gR2VzZW5kZXQ6IE1pdHR3
b2NoLCAxOC4gTm92ZW1iZXIgMjAyMCAxMzoyNA0KPj4gQW46IFN0ZWZhbm8gU3RhYmVsbGluaSA8
c3RlZmFuby5zdGFiZWxsaW5pQHhpbGlueC5jb20+OyBMZW8gS3J1ZWdlcg0KPj4gPGxlby5rcnVl
Z2VyQHphbC5hZXJvPg0KPj4gQ2M6IFBlbmcgRmFuIDxwZW5nLmZhbkBueHAuY29tPjsgYnJ1Y2Vh
QHhpbGlueC5jb207IENvcm5lbGlhIEJydWVsaGFydA0KPj4gPGNvcm5lbGlhLmJydWVsaGFydEB6
YWwuYWVybz47IG9sZWtzYW5kcl9hbmRydXNoY2hlbmtvQGVwYW0uY29tOyB4ZW4tDQo+PiBkZXZl
bEBsaXN0cy54ZW5wcm9qZWN0Lm9yZzsgQmVydHJhbmQuTWFycXVpc0Bhcm0uY29tDQo+PiBCZXRy
ZWZmOiBSZTogQVc6IEFXOiBBVzogQVc6IFhlbiBkYXRhIGZyb20gbWV0YS12aXJ0dWFsaXphdGlv
biBsYXllcg0KPj4gDQo+PiBIaSwNCj4+IA0KPj4gT24gMTcvMTEvMjAyMCAyMzo1MywgU3RlZmFu
byBTdGFiZWxsaW5pIHdyb3RlOg0KPj4+IEFkZGluZyBCZXJ0cmFuZCwgT2xla3NhbmRyLCBKdWxp
ZW4sIGFuZCBvdGhlcnMgLS0gdGhleSBoYXZlIGEgbW9yZQ0KPj4+IHJlY2VudCBleHBlcmllbmNl
IHdpdGggR0lDdjMgSVRTIHRoYW4gbWUgYW5kIG1pZ2h0IGJlIGFibGUgdG8gaGVscC4NCj4+PiBJ
IGFtIGF0dGFjaGluZyB0aGUgZGV2aWNlIHRyZWUgTGVvIHNlbnQgYSBmZXcgZGF5cyBhZ28gZm9y
IHJlZmVyZW5jZS4NCj4+PiANCj4+PiANCj4+PiBUeXBpY2FsbHkgd2hlbiB5b3UgY2FuIHNldCB0
aGUgZXRoZXJuZXQgbGluayB1cCBhbmQgbm8gcGFja2V0cyBhcmUNCj4+PiBleGNoYW5nZWQgaXQg
aXMgYmVjYXVzZSBvZiBhIG1pc3NpbmcgaW50ZXJydXB0LiBJbiB0aGlzIGNhc2UgYSBtaXNzaW5n
DQo+Pj4gTVNJLg0KPj4+IA0KPj4+IEJlcnRyYW5kLCBJIGJlbGlldmUgeW91IHRyaWVkIHRoZSBH
SUMgSVRTIGRyaXZlciB3aXRoIFBDSSBkZXZpY2VzDQo+Pj4gcmVjZW50bHkuIEl0IGlzIGV4cGVj
dGVkIHRvIHdvcmsgY29ycmVjdGx5IHdpdGggTVNJcyBpbiBEb20wLCByaWdodD8NCj4+IA0KPj4g
T1NTVGVzdCBoYXMgc29tZSBoYXJkd2FyZSAoZS5nLiBUaHVuZGVyLVgpIHdoZXJlIElUUyBpcyBy
ZXF1aXJlZCB0byBib290DQo+PiBEb20wLiBJIGhhdmVuJ3Qgc2VlbiBhbnkgZmFpbHVyZSBvbiBy
ZWNlbnQgWGVuLiBXZSBhcmUgdGVzdGluZyA0LjExIGFuZA0KPj4gb253YXJkcyBvbiBUaHVuZGVy
LVguDQo+PiANCj4+IEhvd2V2ZXIsIGl0IG1heSBiZSBwb3NzaWJsZSB0aGF0IHNvbWUgbW9yZSB3
b3JrIGlzIG5lY2Vzc2FyeSBmb3Igb3RoZXINCj4+IGhhcmR3YXJlIChlLmcuIHdvcmthcm91bmQs
IG1pc3NpbmcgY29kZS4uLikuIFNlZSBtb3JlIGJlbG93Lg0KPj4gDQo+Pj4gDQo+Pj4gDQo+Pj4g
DQo+Pj4gT24gVHVlLCAxNyBOb3YgMjAyMCwgTGVvIEtydWVnZXIgd3JvdGU6DQo+Pj4+IEhpLA0K
Pj4+PiANCj4+Pj4gSSBlbmFibGVkIENPTkZJR19IQVNfSVRTICh3aGF0IGEgc3R1cGlkIG1pc3Rh
a2UgYnkgbWUgdG8gbm90IHNldCBpdA0KPj4+PiBiZWZvcmUuLi4pIGJ1dCB0aGVuIGhhZCB0byBh
ZGQgdGhlIGZvbGxvd2luZyBub2RlIHRvIG15IGRldmljZSB0cmVlDQo+Pj4+IA0KPj4+PiAJZ2lj
X2xwaV9iYXNlOiBzeXNjb25AMHg4MDAwMDAwMCB7DQo+Pj4+IAkJY29tcGF0aWJsZSA9ICJnaWMt
bHBpLWJhc2UiOw0KPj4gDQo+PiBJIGNvdWxkbid0IGZpbmQgdGhpcyBjb21wYXRpYmxlIGRlZmlu
ZWQvdXNlZCBpbiBMaW51eCA1LjEwLXJjNC4gQExlbywgY291bGQNCj4+IHlvdSBjbGFyaWZ5IHdo
aWNoIGZsYXZvci92ZXJzaW9uIG9mIExpbnV4IHlvdSBhcmUgdXNpbmc/DQo+IA0KPiBJdCBpcyBM
aW51eCA0LjE5IGZyb20gWW9jdG8gKFdhcnJvciByZWxlYXNlKS4gWEVOIDQuMTMuMi4NCj4gV2hp
bGUgc2VhcmNoaW5nIGFyb3VuZCB0aGUgSW50ZXJuZXQgZm9yIGFueSBzb2x1dGlvbiwgSSBjYW1l
IGFjcm9zcyBbMF0gd2hpY2ggY29udGFpbmVkIHRoZSBnaWMtbHBpLWJhc2Ugbm9kZS4NCj4gU28g
SSBqdXN0IHRyaWVkIGFkZGluZyBpdCAocXVpdGUgZGVzcGVyYXRlIEkga25vdykgYW5kIHZvaWxh
LCBpdCBhdCBsZWFzdCBicm91Z2h0IG1lIG9uZSBzdGVwIGZ1cnRoZXIgKFhFTiBleHBvc2luZyB0
aGUgSVRTKS4uLg0KPiANCj4+IA0KPj4+PiAJCXJlZyA9IDwweDAgMHg4MDAwMDAwMCAweDAgMHgx
MDAwMDA+Ow0KPj4+PiAJCW1heC1naWMtcmVkaXN0cmlidXRvcnMgPSA8Mj47DQo+Pj4+IAl9Ow0K
Pj4+PiANCj4+Pj4gdG8gc29tZWhvdyBjaGFuZ2Ugc29tZXRoaW5nIGluIHJlZ2FyZCB0byB0aGUg
SVRTIGFuZCBNU0kvTVNJLVgNCj4+Pj4gDQo+Pj4+IChYRU4pIEdJQ3YzIGluaXRpYWxpemF0aW9u
Og0KPj4+PiAoWEVOKSAgICAgICBnaWNfZGlzdF9hZGRyPTB4MDAwMDAwMDYwMDAwMDANCj4+Pj4g
KFhFTikgICAgICAgZ2ljX21haW50ZW5hbmNlX2lycT0yNQ0KPj4+PiAoWEVOKSAgICAgICBnaWNf
cmRpc3Rfc3RyaWRlPTANCj4+Pj4gKFhFTikgICAgICAgZ2ljX3JkaXN0X3JlZ2lvbnM9MQ0KPj4+
PiAoWEVOKSAgICAgICByZWRpc3RyaWJ1dG9yIHJlZ2lvbnM6DQo+Pj4+IChYRU4pICAgICAgICAg
LSByZWdpb24gMDogMHgwMDAwMDAwNjA0MDAwMCAtIDB4MDAwMDAwMDYwODAwMDANCj4+Pj4gKFhF
TikgR0lDdjM6IHVzaW5nIGF0IG1vc3QgNTczNDQgTFBJcyBvbiB0aGUgaG9zdC4NCj4+Pj4gKFhF
TikgR0lDdjM6IDI4OCBsaW5lcywgKElJRCAwMDAxMTQzYikuDQo+Pj4+IChYRU4pIEdJQ3YzOiBG
b3VuZCBJVFMgQDB4NjAyMDAwMA0KPj4+PiAoWEVOKSB1c2luZyBub24tY2FjaGVhYmxlIElUUyBj
b21tYW5kIHF1ZXVlDQo+Pj4+IChYRU4pIEdJQ3YzOiBDUFUwOiBGb3VuZCByZWRpc3RyaWJ1dG9y
IGluIHJlZ2lvbiAwIEAwMDAwMDAwMDQwMDFjMDAwDQo+Pj4+IA0KPj4+PiBbICAgIDAuMDAwMDAw
XSBHSUN2MzogRGlzdHJpYnV0b3IgaGFzIG5vIFJhbmdlIFNlbGVjdG9yIHN1cHBvcnQNCj4+Pj4g
WyAgICAwLjAwMDAwMF0gR0lDdjM6IG5vIFZMUEkgc3VwcG9ydCwgbm8gZGlyZWN0IExQSSBzdXBw
b3J0DQo+Pj4+IFsgICAgMC4wMDAwMDBdIElUUyBbbWVtIDB4MDYwMjAwMDAtMHgwNjAzZmZmZl0N
Cj4+Pj4gWyAgICAwLjAwMDAwMF0gSVRTQDB4MDAwMDAwMDAwNjAyMDAwMDogYWxsb2NhdGVkIDY1
NTM2IERldmljZXMNCj4+IEBkYzg4MDAwMCAoZmxhdCwgZXN6IDgsIHBzeiA2NEssIHNociAxKQ0K
Pj4+PiBbICAgIDAuMDAwMDAwXSBJVFNAMHgwMDAwMDAwMDA2MDIwMDAwOiBhbGxvY2F0ZWQgMzI3
NjggSW50ZXJydXB0DQo+PiBDb2xsZWN0aW9ucyBAZGM4MjAwMDAgKGZsYXQsIGVzeiAyLCBwc3og
NjRLLCBzaHIgMSkNCj4+Pj4gWyAgICAwLjAwMDAwMF0gR0lDOiB1c2luZyBMUEkgcHJvcGVydHkg
dGFibGUgQDB4MDAwMDAwMDBkYzgzMDAwMA0KPj4+PiBbICAgIDAuMDAwMDAwXSBHSUN2MzogQ1BV
MDogZm91bmQgcmVkaXN0cmlidXRvciAwIHJlZ2lvbg0KPj4gMDoweDAwMDAwMDAwMDYwNDAwMDAN
Cj4+Pj4gWyAgICAwLjAwMDAwMF0gQ1BVMDogdXNpbmcgTFBJIHBlbmRpbmcgdGFibGUgQDB4MDAw
MDAwMDBkYzg0MDAwMA0KPj4+PiAuLi4NCj4+Pj4gWyAgICAwLjA0MDA4MF0gUGxhdGZvcm0gTVNJ
OiBnaWMtaXRzIGRvbWFpbiBjcmVhdGVkDQo+Pj4+IFsgICAgMC4wNDAxMzZdIFBDSS9NU0k6IC9p
bnRlcnJ1cHQtY29udHJvbGxlci9naWMtaXRzIGRvbWFpbiBjcmVhdGVkDQo+Pj4+IFsgICAgMC4w
NDAxODFdIGZzbC1tYyBNU0k6IC9pbnRlcnJ1cHQtY29udHJvbGxlci9naWMtaXRzIGRvbWFpbiBj
cmVhdGVkDQo+Pj4+IA0KPj4+PiANCj4+Pj4gU3RpbGwgSSBhbSBlbmRpbmcgdXAgd2l0aCB0aGUg
IiBGYWlsZWQgdG8gYWRkIC0gcGFzc3Rocm91Z2ggb3IgTVNJL01TSS1YDQo+PiBtaWdodCBmYWls
ISIgbG9nIG1lc3NhZ2VzIGZvciBzb21lIFBDSSBkZXZpY2VzLCBidXQgYXQgbGVhc3QgdGhlIG9u
LWJvYXJkDQo+PiBldGhlcm5ldCBwb3J0cyAoZnNsX2VuZXRjICkgYXJlIGluaXRpYWxpemVkLg0K
Pj4+PiBJIGNhbiBzZXQgdGhlIGxpbmsgdXAgYW5kIGEgbGluayBpcyBzdWNjZXNzZnVsbHkgZXN0
YWJsaXNoZWQuDQo+PiANCj4+IFRoaXMgbWVzc2FnZSBpcyBub3JtYWwuIFhlbiBvbiBBcm0gaXMg
bm90IHlldCBhd2FyZSBvZiBQQ0kgZGV2aWNlcyBhbmQNCj4+IHRoZXJlZm9yZSB0aGUgaHlwZXJj
YWxscyB0byBhZGQgUENJIGRldmljZXMgd2lsbCByZXR1cm4gLUVPUE5PVFNVUFAuDQo+PiANCj4+
IEhvd2V2ZXIsIHRoaXMgaXMgbm90IGFuIGlzc3VlIGJlY2F1c2UgdGhlIHZpcnR1YWwgSVRTIGlt
cGxlbWVudGF0aW9uIHdpbGwNCj4+IGFsbG93IGRvbTAgdG8gY29uZmlndXJlIGFueSBkZXZpY2Vz
Lg0KPj4gDQo+Pj4+IA0KPj4+PiBCdXQgKCEpIEkgY2Fubm90IHJlY2VpdmUgb3IgdHJhbnNtaXQg
YW55dGhpbmcgKG5vIGVycm9yIG1lc3NhZ2UuLi4pIGFuZA0KPj4gdGhlcmUgc2VlbSB0byBiZSBu
byBpbnRlcnJ1cHRzOg0KPj4+PiANCj4+Pj4gMjk6ICAgICAgICAgIDAgICBJVFMtTVNJICAgMSBF
ZGdlICAgICAgZ2JlMC1yeHR4MA0KPj4+PiAgMzI6ICAgICAgICAgIDAgICBJVFMtTVNJIDgxOTIg
RWRnZSAgICAgIHB0cF9xb3JpcQ0KPj4+PiANCj4+Pj4gKGZyb20gL3Byb2MvaW50ZXJydXB0cyku
DQo+Pj4+IA0KPj4+PiBBbnkgaWRlYSBvbiB0aGlzIG9uZT8gSSBrZWVwIGRpZ2dpbmcgYW5kIGNo
ZWNraW5nIHdoZXRoZXIgbXkgZGV2aWNlIHRyZWUNCj4+IG5lZWRzIHNvbWUgYWRkaXRpb25hbCBm
aXhlcy4NCj4+IA0KPj4gQ2FuIHlvdSBhcHBseSBwYXRjaCBbMV0gYW5kIHByb3ZpZGUgdGhlIGxv
Z3M/IFRoaXMgd2lsbCBkdW1wIG1vcmUNCj4+IGluZm9ybWF0aW9uIGFib3V0IHRoZSBjb21tYW5k
IHJlY2VpdmVkIGJ5IHRoZSB2SVRTIGFuZCB0aGUgb25lIHNlbmQgdG8NCj4+IHRoZSBob3N0IElU
Uy4NCj4gDQo+IEZvciBYRU4gNC4xMy4yIEkgaGFkIHRvIGFkYXB0IHlvdXIgcGF0Y2ggc2xpZ2h0
bHkgWzFdLCBzZWUgYmVsb3cgKHllcyBJIGtub3csIHF1aXRlIHVnbHkgaW4gcGFydHMpLg0KPiBG
aW5kIGF0dGFjaGVkIHRoZSBib290IGxvZyBhbmQgYW4gb3V0cHV0IG9mICJ4bCBkbWVzZyIgd2hp
Y2ggaXMgdHJ1bmNhdGVkIGR1ZSB0byB0aGUgbGFyZ2UgYW1vdW50IG9mIG1lc3NhZ2VzLg0KPiAN
Cj4gV2hlbiBlbmFibGluZyB0aGUgbmV0d29yayBpbnRlcmZhY2UgKGdiZTApLCB0aGUgZm9sbG93
aW5nIG91dHB1dCBpcyB2aXNpYmxlOg0KPiANCj4gcm9vdEBrb250cm9uLXNhbDI4On4jIGlwIGxp
bmsgc2V0IHVwIGRldiBnYmUwDQo+IChYRU4pIHZnaWMtdjMtaXRzLmM6OTAyOmQwdjAgdklUUyAg
Y21kIDB4MGM6IDAwMDAwMDE3MDAwMDAwMGMgMDAwMDAwMDAwMDAwMDAwMSAwMDAwMDAwMDAwMDAw
MDAwIDAwMDAwMDAwMDAwMDAwMDANCj4gKFhFTikgdmdpYy12My1pdHMuYzo5MDI6ZDB2MCB2SVRT
ICBjbWQgMHgwNTogMDAwMDAwMDAwMDAwMDAwNSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAw
MDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KPiBbICAgMzQuMDM0NTk4XSBBdGhlcm9zIDgwMzEgZXRo
ZXJuZXQgMDAwMDowMDowMC4zOjA1OiBhdHRhY2hlZCBQSFkgZHJpdmVyIFtBdGhlcm9zIDgwMzEg
ZXRoZXJuZXRdIChtaWlfYnVzOnBoeV9hZGRyPTAwMDA6MDA6MDAuMzowNSwgaXJxPVBPTEwpDQo+
IFsgICAzNC4wNDExMTFdIDgwMjFxOiBhZGRpbmcgVkxBTiAwIHRvIEhXIGZpbHRlciBvbiBkZXZp
Y2UgZ2JlMA0KPiBbICAgMzQuMDQxMjA5XSBJUHY2OiBBRERSQ09ORihORVRERVZfVVApOiBnYmUw
OiBsaW5rIGlzIG5vdCByZWFkeQ0KPiByb290QGtvbnRyb24tc2FsMjg6fiMgWyAgIDM1LjA0MTk1
MV0gZnNsX2VuZXRjIDAwMDA6MDA6MDAuMCBnYmUwOiBMaW5rIGlzIERvd24NCj4gWyAgIDM4LjEx
NDQyNl0gZnNsX2VuZXRjIDAwMDA6MDA6MDAuMCBnYmUwOiBMaW5rIGlzIFVwIC0gMUdicHMvRnVs
bCAtIGZsb3cgY29udHJvbCBvZmYNCj4gWyAgIDM4LjExNDUwOF0gSVB2NjogQUREUkNPTkYoTkVU
REVWX0NIQU5HRSk6IGdiZTA6IGxpbmsgYmVjb21lcyByZWFkeQ0KPiANCj4gRG9lcyB0aGF0IHRl
bGwgeW91IGFueXRoaW5nPw0KPiANCg0KSSBqdXN0IGNoZWNrZWQgdGhlIGxvZ3Mgc2hhcmVkLCB3
aGF0IEkgZm91bmQgb3V0IHRoYXQgdGhlcmXigJlzIGlzIGFuIGVycm9yIHdoaWxlIGJvb3Rpbmcg
to configure the MSI for the PCI device because of that there will be case that Device Id generate out-of-band is not mapped correctly to ITS device table created while initialising the MSI for the device. 
I might be wrong let someone else also comments on this. 

 
[    0.173964] OF: /soc/pcie@1f0000000: Invalid msi-map translation - no match for rid 0xf8 on           (null)
 
Regards,
Rahul

>> 
>> Note that Xen will need to be build with CONFIG_DEBUG=y in order to get
>> some of the messages.
>> 
>> [...]
>> 
>>>>>> [    0.000000] GICv3: Distributor has no Range Selector support
>>>>>> 
>>>>>> [    0.000000] GICv3: no VLPI support, no direct LPI support
>>>>>> 
>>>>>> [    0.000000] GICv3: CPU0: found redistributor 0 region
>>>>>> 0:0x0000000006040000
>>>>> 
>>>>> "no VLPI support" is very suspicious, it looks like Dom0 doesn't
>>>>> find any ITS support.
>> VLPI is a feature that was introduced in GICv4 to directly inject LPI in the
>> guest. So this is normal to see this message when running on Xen because
>> we are going to only expose a GICv3 to a domain until at least we support
>> nested virt.
>> 
>> However, you were right about that Xen didn't expose the ITS because the
>> following lines were missing:
>> 
>> [    0.000000] ITS@0x0000000006020000: allocated 65536 Devices @dc880000
>> (flat, esz 8, psz 64K, shr 1)
>> 
>> Cheers,
> 
> Best regards,
> Leo
> 
>> 
>> [1]
>> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c index
>> 9558bad96ac3..8a0a02308e74 100644
>> --- a/xen/arch/arm/gic-v3-its.c
>> +++ b/xen/arch/arm/gic-v3-its.c
>> @@ -87,6 +87,10 @@ static int its_send_command(struct host_its *hw_its,
>> const void *its_cmd)
>>      /* No ITS commands from an interrupt handler (at the moment). */
>>      ASSERT(!in_irq());
>> 
>> +    printk(XENLOG_WARNING, "pITS  cmd 0x%02lx: %016lx %016lx %016lx
>> %016lx\n",
>> +           its_cmd_get_command(command),
>> +           command[0], command[1], command[2], command[3]);
>> +
>>      spin_lock(&hw_its->cmd_lock);
>> 
>>      do {
>> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c index
>> 869bc97fa1aa..e7c5bcd8d423 100644
>> --- a/xen/arch/arm/gic-v3-lpi.c
>> +++ b/xen/arch/arm/gic-v3-lpi.c
>> @@ -183,7 +183,10 @@ void gicv3_do_LPI(unsigned int lpi)
>>      /* Find out if a guest mapped something to this physical LPI. */
>>      hlpip = gic_get_host_lpi(lpi);
>>      if ( !hlpip )
>> +    {
>> +        printk("%s: Received LPI %u but it is not mapped?\n", __func__,
>> lpi);
>>          goto out;
>> +    }
>> 
>>      hlpi.data = read_u64_atomic(&hlpip->data);
>> 
>> @@ -222,6 +225,9 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi,
>> int domain_id,
>>  {
>>      union host_lpi *hlpip, hlpi;
>> 
>> +    printk("%s: host_lpi %u domain %d virq_lpi %u\n",
>> +           __func__, host_lpi, domain_id, virq_lpi);
>> +
>>      ASSERT(host_lpi >= LPI_OFFSET);
>> 
>>      host_lpi -= LPI_OFFSET;
>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c index
>> 58d939b85f92..89ef137b3e6b 100644
>> --- a/xen/arch/arm/vgic-v3-its.c
>> +++ b/xen/arch/arm/vgic-v3-its.c
>> @@ -897,7 +897,7 @@ out_unlock:
>> 
>>  static void dump_its_command(uint64_t *command)
>>  {
>> -    gdprintk(XENLOG_WARNING, "  cmd 0x%02lx: %016lx %016lx %016lx
>> %016lx\n",
>> +    gdprintk(XENLOG_WARNING, "vITS  cmd 0x%02lx: %016lx %016lx %016lx
>> %016lx\n",
>>               its_cmd_get_command(command),
>>               command[0], command[1], command[2], command[3]);
>>  }
>> @@ -926,6 +926,8 @@ static int vgic_its_handle_cmds(struct domain *d,
>> struct virt_its *its)
>>          if ( ret )
>>              return ret;
>> 
>> +        dump_its_command(command);
>> +
>>          switch ( its_cmd_get_command(command) )
>>          {
>>          case GITS_CMD_CLEAR:
>> 
>> 
>> --
>> Julien Grall
> 
> [0] https://www.mail-archive.com/u-boot@lists.denx.de/msg379708.html
> [1] 
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index 9558bad96a..d175ba52b0 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -87,6 +87,10 @@ static int its_send_command(struct host_its *hw_its, const void *its_cmd)
>     /* No ITS commands from an interrupt handler (at the moment). */
>     ASSERT(!in_irq());
> 
> +    printk(XENLOG_WARNING "pITS  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
> +        (((uint64_t *) its_cmd)[0] >> 0) & GENMASK(8 - 1, 0),
> +        ((uint64_t *) its_cmd)[0], ((uint64_t *) its_cmd)[1], ((uint64_t *) its_cmd)[2], ((uint64_t *) its_cmd)[3]);
> +
>     spin_lock(&hw_its->cmd_lock);
> 
>     do {
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index 78b9521b21..2c3b0fc9e5 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -181,8 +181,10 @@ void gicv3_do_LPI(unsigned int lpi)
> 
>     /* Find out if a guest mapped something to this physical LPI. */
>     hlpip = gic_get_host_lpi(lpi);
> -    if ( !hlpip )
> +    if ( !hlpip ) {
> +        printk("%s: Received LPI %u but it is not mapped?\n", __func__, lpi);
>         goto out;
> +    }
> 
>     hlpi.data = read_u64_atomic(&hlpip->data);
> 
> @@ -221,6 +223,9 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
> {
>     union host_lpi *hlpip, hlpi;
> 
> +    printk("%s: host_lpi %u domain %d virt_lpi %u\n",
> +        __func__, host_lpi, domain_id, virt_lpi);
> +
>     ASSERT(host_lpi >= LPI_OFFSET);
> 
>     host_lpi -= LPI_OFFSET;
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 6e153c698d..dd5081ef80 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -897,7 +897,7 @@ out_unlock:
> 
> static void dump_its_command(uint64_t *command)
> {
> -    gdprintk(XENLOG_WARNING, "  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
> +    gdprintk(XENLOG_WARNING, "vITS  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
>              its_cmd_get_command(command),
>              command[0], command[1], command[2], command[3]);
> }
> @@ -926,6 +926,8 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>         if ( ret )
>             return ret;
> 
> +        dump_its_command(command);
> +
>         switch ( its_cmd_get_command(command) )
>         {
>         case GITS_CMD_CLEAR:
> <boot-xendebug.log><xen-debug-output.txt>


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:44:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:44:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34064.64912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAGQ-0004Yv-SL; Mon, 23 Nov 2020 11:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34064.64912; Mon, 23 Nov 2020 11:44:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAGQ-0004Yo-PQ; Mon, 23 Nov 2020 11:44:10 +0000
Received: by outflank-mailman (input) for mailman id 34064;
 Mon, 23 Nov 2020 11:44:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaDe=E5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khAGO-0004Yj-Ve
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 11:44:09 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 565f2a2e-50cc-4dab-ac4f-871ad59c73a0;
 Mon, 23 Nov 2020 11:44:08 +0000 (UTC)
X-Inumbo-ID: 565f2a2e-50cc-4dab-ac4f-871ad59c73a0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606131848;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=gM0BV9SZvN21gQf6gMU83YW93bJ7J73jbMVa9MDje70=;
  b=NwAJe9vWt6pIbOHN0k1b8/Fl5fExtV/2qkcpE1d+t7Y8M/9pPwsU0aSN
   gVY2UkMllvnr3sQ3k4vVEpZusskEd4iEkkv3ltxuMCx6JYjRe14IjnkNv
   77Fs8Bf/GerElHo4Dya+cJWctWg9ZbAkDX27miLu0WnTIUPMfg86g2Knh
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Ax8TINSluyw8DGB8pKBWMZTKlZvlYDF1Z0432ly6bdx1G63ffkfo7NtHfpTZnAjirQORnSDO23
 erCB0jZV5ZAbwjbc15FcvfibFtf5xE1mqsX0gTcW0zJ4SWgf7W1Smc/sPi7IdoypFpjCOlz0Ym
 Z6Y/geI9R/2U7kl1aANHMeVTlVxX104VWxyApea0YmRsvlO+xcOy36dQnsKPGdZoIjfomSiifl
 lFllwi9eeocfVruERgfr7HkB1MN6/1IgEa8kavwDQe3F1Lc7KgnqH3MPSnJwAwXNHIgtMIhR3V
 UJg=
X-SBRS: None
X-MesageID: 31704912
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31704912"
Subject: Re: [PATCH] x86/DMI: fix SMBIOS pointer range check
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
References: <7823a8e0-6388-08f6-0ce6-36bd7139ff54@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c1cca0e5-d457-3e9c-17e6-1764471e1a3c@citrix.com>
Date: Mon, 23 Nov 2020 11:44:02 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7823a8e0-6388-08f6-0ce6-36bd7139ff54@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 23/11/2020 11:33, Jan Beulich wrote:
> Forever since its introduction this has been using an inverted relation
> operator.
>
> Fixes: 54057a28f22b ("x86: support SMBIOS v3")
> Signed-off-by: Jan Beulich <JBeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:49:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34075.64924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAL4-0004n1-IW; Mon, 23 Nov 2020 11:48:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34075.64924; Mon, 23 Nov 2020 11:48:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAL4-0004mu-FS; Mon, 23 Nov 2020 11:48:58 +0000
Received: by outflank-mailman (input) for mailman id 34075;
 Mon, 23 Nov 2020 11:48:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khAL3-0004mp-55
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 11:48:57 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6efa7576-de90-4d40-a0e0-e31db62d4860;
 Mon, 23 Nov 2020 11:48:55 +0000 (UTC)
X-Inumbo-ID: 6efa7576-de90-4d40-a0e0-e31db62d4860
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606132136;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=cUWVsEO7n22giDc6fslcuMPM4yHuaj5zi9uNbbapgAg=;
  b=XCI9VdCI94kgCtYpw10DUt/nNUSRjvuI3FPZR/vONtcjebENMvhFFfTr
   OmGRASB1UtX5zMfZ8nM0YM/fSQXObAq9PugBYA9ljkJ+KPHMSrszuDmQQ
   A8R6qWBV54A5aD7TusHSq2F3SB+9FlPsoanW65DFDoLMwgZTdUZh2CXKq
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: oxEEeMk8Z75SMSuS7hG46ZdJ7NSuWcxehSCnA4BZ+PLmFspGHv6C1hbGsZDO5qO8lVKfjvNYo9
 ub4Y9f0f6OjPvTzpYTO8UQi8MPtWbvrIQgl0C8QQClj+SIDbchd1/gmUuJfwq+z2vDEs8kLqzj
 HaWFE71+fX/g0n5c+1zvHIFJhG5u6DMvCSHPtW4hXiv+dTWRDYkcvZK+IYeHpV6md64wWGa0vg
 lBYBVf7wY5FpTMmyDPFsz7YyQxKG/WU7aQPzGp2rPwybY4Tu3FpmIVdp/LLPsWVZOwvtGXYa7U
 E1k=
X-SBRS: None
X-MesageID: 31733249
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31733249"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Snqi20AnYD3JAR1yrjTm6DDhiqzGmbR5nx2hIl/sQA/x14mv2kiqKHus7tFYeuvAGe8MvudkI01FrzHe4vQHS5J9qqBleASBbDKvRs4COOCUUQpLpnp0aN9MB5ZLChFZQ1ooaml3xHI2CSNX2dZbb/nydsGV8sb9VHWgKYt1AlKdvJfp02UlbSJsjm5tZ54z7ng/bRGoIhOXIFKyQQqu/Y3gM5NGNXRNvyWk19ZxNvT//ptLx4llb5StXKi41gqWf9WlZtWLSOJkDRG2I4YiJ8kjnEtyrUnwuVSctMuXHb8p1wODfmHUJ33YD8y6MGv0pi9DkPQMNcoSUTK8d55N+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=db88IQDK/BpAmsJMUM22U2nhGSmJcS5NbW0gVk4oxLg=;
 b=D6Y2MwaXIGY4cWmRk1eMjX/ljIySkTF6mGWq7eCfq9bb38sx0kNVVpVRiRx5/t5Jl2+fvrAUUV17KGqbFyLE9bbbimkVbLLomV7QfP6eNWB4aMVyPEjks3cBmhdXsgsCifEcBhDcOhmIqtEOcgL64ntBqjIM8flZf4CUoGFvtkz6BCCEU7iAd/MPKyE9JZTyn2dOFNPZyl2V4dYV9hy59mJCzZ5oYtm9Ot3DCtFOa9WtOcVkh5qfF3ozSZSbihR6jgzJlOKL7RvClGJtFK70u113fn7fa0sVXLBcrmzYIZOpPNTDVjUV/F/pNpis5gLEIldGH66BYR4+PxA0Wanf8g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=db88IQDK/BpAmsJMUM22U2nhGSmJcS5NbW0gVk4oxLg=;
 b=cFQo0JvfdbMWlcH5h3Lb5Cu+1Pl9gVwIZuTocNVKEwKnp8biVKgnIkbRBPaxTTgZE5c1kii4z+5W/+Gk0gYP55ltcTJQTToGDLCg3k4mvau1ZNIuAvOm87S5Ni1lefSweEgClt/g+oKkMNuI7PvKPgewIBUo3U2Hu93o8Lr1/WQ=
Date: Mon, 23 Nov 2020 12:48:43 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] x86/PV: make post-migration page state consistent
Message-ID: <20201123114843.ocjwlv4wkukcdcgf@Air-de-Roger>
References: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
X-ClientProxiedBy: LO3P265CA0004.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::9) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fad81775-fade-4f84-0482-08d88fa5bfcc
X-MS-TrafficTypeDiagnostic: DM6PR03MB5084:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5084B264FB44AD4F547AED648FFC0@DM6PR03MB5084.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 8Pli99rWW7RqusZTV3mOC1tZ8DxoWfLAgT0LuND0/PGvmTP39lGBg8KknkrFVK2GGn2nRqvvtIokqpr4jjoB5xkEcnTsnoCir4oXUs/rO7ho42uc2q31+yi10bTJWJB7jJMQfGQmQYaRMvpD9MePVWVfxBIpeFxZT5THQnrFDO1B6tfPMNYBDPupBXt5Rb0m6/1xizETpszYMtHOE8jct5SZcyeCvi77qOGXep7FDFJW4SqP9u8MuaczxOtJrtlgt2NN6q6ZV143IA6DHTEux8//jBZvGq+GAAEVopwbK6SJAvMAnvD1KxW41WI0Vu12w7e9AXru1Sec6BgQMzdLkg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(396003)(39860400002)(366004)(376002)(346002)(86362001)(6666004)(1076003)(4326008)(66476007)(66556008)(478600001)(956004)(8936002)(186003)(8676002)(316002)(6496006)(33716001)(85182001)(16526019)(6486002)(5660300002)(66946007)(9686003)(6916009)(83380400001)(2906002)(54906003)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: hsRNjIvZVMKfbih5MHNDqPAZCAuvMBBoIdfPIImqlTk/Yo8mnpO5kPJgZpAQnCY6QwiAs7AIYfdLFDxk7Lgf1IKco5FeZ/Mga0pw306fD/k8fWFvRICpfJXvOI5Mg/Kd+8zctLQVB2a5frlwajfuNLM6MO1VP5cQh59NV8EqbtKNbuQDFkwbvhORAIbZqCE6WF6FUgMoSfkM5/VlZdjAz3PcJLUAfWw8ZP+6V4U4XBGD3t/yYw9M2NRaIqxyXWhgWjpUq09rfxwpSl6p4dWbOCb1S/uGcbl0i1ahbmOQ1V4L9AazUIYLLVmhnhEDUU+hSHKSt/j7B/eRMUfow3L4n0JFh+zUpXqaZoCdziM4AAbYMcqzHCMjpD5IzEia0PBbbn81yAbJi5b2pBd0OKEOodmvV+41y4AKvALCmQIRglzlxTw6BPpPeHEy0UPRNNkbQIOqiCTm5R6DrCiUaGgaNe2uC7SGDld9uVaEAEhMUuLUzSY99qiu9ghz2fA0sf6vsDuQaHosthTcFpXVU8H/rTKCVlE0Z2KhUhnIcf1jN/z4CcTpdanR1PaOj3mgEHRyl0MuDucVQcAUS0BG1X4HO+C4YpjiJ4aZDO4VI/N9BrpJ2/nNOEmQRNH5Er6z3TIJI6tPNblBPATYHhWvbVGfiw7WUSaT3qkbpF6Fe7f6V8iDQ9l3mYldFzYrFJqCvN7mftQ8fDgDBl4k9vrpGFUwtS0V312Blt4nRmV9CIH9QfqnHuAwatvH9siaKaqEgYDd3btZs1TmkAgRevhpDWfJx3ChK1tniFDuqgmsqSyJyIlMr9efM238hGGdmuxkTku36Hhy3EkGhYr/2RI6YGKcw/OGOrQuAbahllvQuYt2ZG5IzZmSnYRHvQ5JyzZAUI2c/TaT7HCwdRGi7iVTUCRJwA==
X-MS-Exchange-CrossTenant-Network-Message-Id: fad81775-fade-4f84-0482-08d88fa5bfcc
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 11:48:52.5449
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Bi1JeH5BapResXekZ5QYD5BSk6jpZ3tOBeyGyuacOcLxOjgKz/v9k6y2Mvn8quCBDycP5ep2iqh1RC/IXdFosA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5084
X-OriginatorOrg: citrix.com

On Wed, Nov 04, 2020 at 08:56:50AM +0100, Jan Beulich wrote:
> When a page table page gets de-validated, its type reference count drops
> to zero (and PGT_validated gets cleared), but its type remains intact.
> XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
> such pages. An intermediate write to such a page via e.g.
> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
> return. In libxc the decision which pages to normalize / localize
> depends solely on the type returned from the domctl. As a result without
> further precautions the guest won't be able to tell whether such a page
> has had its (apparent) PTE entries transitioned to the new MFNs.
> 
> Add a check of PGT_validated, thus consistently avoiding normalization /
> localization in the tool stack.
> 
> Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead of
> open coding it.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Maybe the switch could be avoided if the page is not validated or
broken? Not that I care that much.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 11:54:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 11:54:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34086.64937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAQS-0005gU-Dx; Mon, 23 Nov 2020 11:54:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34086.64937; Mon, 23 Nov 2020 11:54:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAQS-0005gN-AN; Mon, 23 Nov 2020 11:54:32 +0000
Received: by outflank-mailman (input) for mailman id 34086;
 Mon, 23 Nov 2020 11:54:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKME=E5=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khAQR-0005gI-36
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 11:54:31 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.86]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 374306a6-4143-48a4-b376-bfaefafa5202;
 Mon, 23 Nov 2020 11:54:28 +0000 (UTC)
Received: from AM5PR0202CA0019.eurprd02.prod.outlook.com
 (2603:10a6:203:69::29) by AM6PR08MB4215.eurprd08.prod.outlook.com
 (2603:10a6:20b:90::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.30; Mon, 23 Nov
 2020 11:54:22 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:69:cafe::21) by AM5PR0202CA0019.outlook.office365.com
 (2603:10a6:203:69::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Mon, 23 Nov 2020 11:54:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Mon, 23 Nov 2020 11:54:22 +0000
Received: ("Tessian outbound 39167997cde8:v71");
 Mon, 23 Nov 2020 11:54:22 +0000
Received: from 57e2c3b58326.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 381B64E1-8FA9-4B68-8594-5453DF850408.1; 
 Mon, 23 Nov 2020 11:54:16 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 57e2c3b58326.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 23 Nov 2020 11:54:16 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB8PR08MB5242.eurprd08.prod.outlook.com (2603:10a6:10:e8::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.30; Mon, 23 Nov
 2020 11:54:14 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Mon, 23 Nov 2020
 11:54:14 +0000
X-Inumbo-ID: 374306a6-4143-48a4-b376-bfaefafa5202
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=94dE/Vdt2QNAbG4KO6BqSxenWzr86t/S2QxJ/IFq3vs=;
 b=jFBdv2GWmXNeTROOmAU4+HWXXwIfmOdZL8z8wgDMeT9WLehXO8rWoRr+PHzav7tj08fnkamM5maOUMRr7HpSW9CTPilZulEUR5x42Vk5l9qnbyvV7ye43xZaraXzftIa9MMEId46dM/4j0RobrdU0j1PedZSzJ4alLF6ysAQ86w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 807cf874092e0f2a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SjodCmwVFRuORotAIp6KQb7tH48Xt1UY4eCmtH5snc69ZIHyUGsvUp5LkA7yZhXdNuTjqlw9DwwoUFip1hYNMbmxf4uvpP7dKEmgvy2FnvH+skcqiaOiCaiCC09bf0nLmAfWrfjOYcqMi5BrhhzaOwM+nfisMXo3zzJ49joO32+vs80xhBgoW5TYzyf06DHNIrgDS8VEzby+9uwKiaWD9qZuAua4qjwUNivBAQpvGFUNBNrnX1ix+YZJMiMnxHQdkkv05zU9IraHgkjrcIuHFGuS8uK//JtnqOQn8YMtztx+/kDMREQSvXHjHQP+p/71xA7LtgyAbVXaq+BnUZpFjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=94dE/Vdt2QNAbG4KO6BqSxenWzr86t/S2QxJ/IFq3vs=;
 b=d4eLFJoMRoZIi1joApcorPo3qEdt54qm+w2yg5ynzWlgu9+9nQcv/8JRiht9/F+KHbgfKmNSyC/G27ANETHfexf3xyZpU/lwZLPaqeYX3thVV5mcDmEoVlmJtm1KDtXodan7/w52+YoNrlCUx+bcjU15usmu5UDDgPdpx/9RAI55Q4H0JhlyExBXY3URBmGeVI/0qRqY4+Wheg9bvwKMVHtK+H0Gd8+R8x5uWhbVq+b/c1bD6gyttcfhAqH2WwdbscYM7FhWragh1RHanc593QT4hZeftZfM8vuigpB8Fj2qzi/IxNP/Zz4XNfXEC8sMpu/6rtbMD/hRiVk+AuzREg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall.oss@gmail.com>, Jan Beulich
	<jbeulich@suse.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
Thread-Topic: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
Thread-Index:
 AQHWvBOve/67EvsNk0ODGsWoVvEIcanODQaAgAEhKgCAAARQAIAACQ0AgAAGfYCAAF5RAIAAgX4AgAADeICAAAbGgIAFenqA
Date: Mon, 23 Nov 2020 11:54:13 +0000
Message-ID: <37511625-C475-497B-BA83-B762687148BF@arm.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f39e4ec0-4e41-44a3-a5f0-08d88fa684a1
x-ms-traffictypediagnostic: DB8PR08MB5242:|AM6PR08MB4215:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB42153BFBFF453E119D567E58FCFC0@AM6PR08MB4215.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <4BEAF2DB5E47A84BAC890A603CB67BB4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5242
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9c2cb5f6-48a1-46b1-5696-08d88fa67f92
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 11:54:22.4445
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f39e4ec0-4e41-44a3-a5f0-08d88fa684a1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4215

Hello Jan,

> On 20 Nov 2020, at 12:14 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 19 Nov 2020, Julien Grall wrote:
>> On Thu, 19 Nov 2020, 23:38 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>>      On Thu, 19 Nov 2020, Rahul Singh wrote:
>>>> On 19/11/2020 09:53, Jan Beulich wrote:
>>>>> On 19.11.2020 10:21, Julien Grall wrote:
>>>>>> Hi Jan,
>>>>>> 
>>>>>> On 19/11/2020 09:05, Jan Beulich wrote:
>>>>>>> On 18.11.2020 16:50, Julien Grall wrote:
>>>>>>>> On 16/11/2020 12:25, Rahul Singh wrote:
>>>>>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>>>>>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>>>>>>>> because ARM platforms do not have full PCI support available.
>>>>>>>>    >
>>>>>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
>>>>>>>>> ns16550 PCI for X86.
>>>>>>>>> 
>>>>>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>>>>>>>> disabled by default, once we have proper support for NS16550 PCI for
>>>>>>>>> ARM we can enable it.
>>>>>>>>> 
>>>>>>>>> No functional change.
>>>>>>>> 
>>>>>>>> NIT: I would say "No functional change intended" to make clear this is
>>>>>>>> an expectation and hopefully will be correct :).
>>>>>>>> 
>>>>>>>> Regarding the commit message itself, I would suggest the following to
>>>>>>>> address Jan's concern:
>>>>>>> 
>>>>>>> While indeed this is a much better description, I continue to think
>>>>>>> that the proposed Kconfig option is undesirable to have.
>>>>>> 
>>>>>> I am yet to see an argument into why we should keep the PCI code
>>>>>> compiled on Arm when there will be no-use....
>>>>> Well, see my patch suppressing building of quite a part of it.
>>>> 
>>>> I will let Rahul figuring out whether your patch series is sufficient to fix compilation issues (this is what matters right
>>      now).
>>> 
>>> I just checked the compilation error for ARM after enabling the HAS_PCI on ARM. I am observing the same compilation error
>>      what I observed previously.
>>> There are two new errors related to struct uart_config and struct part_param as those struct defined globally but used under
>>      X86 flags.
>>> 
>>> At top level:
>>> ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
>>>   static const struct ns16550_config __initconst uart_config[] =
>>>                                                  ^~~~~~~~~~
>>> ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
>>>   static const struct ns16550_config_param __initconst uart_param[] = {
>>> 
>>> 
>>>> 
>>>>>>> Either,
>>>>>>> following the patch I've just sent, truly x86-specific things (at
>>>>>>> least as far as current state goes - if any of this was to be
>>>>>>> re-used by a future port, suitable further abstraction may be
>>>>>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>>>>>>> hooks), or the HAS_PCI_MSI proposal would at least want further
>>>>>>> investigating as to its feasibility to address the issues at hand.
>>>>>> 
>>>>>> I would be happy with CONFIG_X86, despite the fact that this is only
>>>>>> deferring the problem.
>>>>>> 
>>>>>> Regarding HAS_PCI_MSI, I don't really see the point of introducing given
>>>>>> that we are not going to use NS16550 PCI on Arm in the forseeable
>>>>>> future.
>>>>> And I continue to fail to see what would guarantee this: As soon
>>>>> as you can plug in such a card into an Arm system, people will
>>>>> want to be able use it. That's why we had to add support for it
>>>>> on x86, after all.
>>>> 
>>>> Well, plug-in PCI cards on Arm has been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI
>>      support.
>>>> 
>>>> This is probably because SBSA compliant server should always provide an SBSA UART (a cut-down version of the PL011). So why
>>      would bother to lose a PCI slot for yet another UART?
>>>> 
>>>>>>> So why do we need a finer graine Kconfig?
>>>>> Because most of the involved code is indeed MSI-related?
>>>> 
>>>> Possibly, yet it would not be necessary if we don't want NS16550 PCI support...
>>> 
>>> To fix compilation error on ARM as per the discussion there are below options please suggest which one to use to proceed
>>      further.
>>> 
>>> 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config options. This helps also non-x86 architecture in the future not to
>>      have compilation error
>>> what we are observing now when HAS_PCI is enabled.
>>> 
>>> 2. Guard the remaining x86 specific code with CONFIG_X86 and introduce the new CONFIG_HAS_PCI_MSI options to fix the MSI
>>      related compilation error.
>>> Once we have proper support for MSI and PCI for ARM  (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig ) I am not sure if
>>      NS16550 PCI will work out of the box on ARM .In that case, we might need to come back again to fix NS16550 driver. 
>> 
>> 
>>      It doesn't matter too much to me, let's just choose one option so that you
>>      get unblocked soon.
>> 
>>      It looks like Jan prefers option 2) and both Julien and I are OK with
>>      it. So let's do 2). Jan, please confirm too :-)
>> 
>> 
>> Please don't put words in my mouth... 
> 
> Sorry Julien, I misinterpreted one of your previous comments. Sometimes
> it is difficult to do things by email. It is good that you clarified as
> my goal was to reach an agreement.
> 
> 
>> I think introducing HAS_PCI_MSI is short sighted.
>> 
>> There are no clear benefits of it when NS16550 PCI support is not going to be enable in the foreseeable future.
> 
> I agree
> 
> 
>> I would be ok with moving everything under CONFIG_X86. IHMO this is still shortsighted but at least we don't introduce a config that's not
>> going to help Arm or other any architecture to disable completely PCI support in NS16550.
> 
> So you are suggesting a new option:
> 
> 3. Guard the remaining x86 specific code *and* the MSI related
> compilation errors with CONFIG_X86
> 
> Is that right?
> 
> 
> My preference is actually option 1) but this series is already at v3 and
> I don't think this decision is as important as much as unblocking
> Rahul, so I am OK with the other alternatives too.
> 
> I tend to agree with you that 3) is better than 2) for the reasons you
> wrote above.


Can you please provide your suggestion how to proceed on this so that I can send my next patch.
I am waiting for your reply if you are also ok for the options 3.

Thanks in advance.

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:08:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34113.64948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAde-0006nd-3J; Mon, 23 Nov 2020 12:08:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34113.64948; Mon, 23 Nov 2020 12:08:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAde-0006nW-07; Mon, 23 Nov 2020 12:08:10 +0000
Received: by outflank-mailman (input) for mailman id 34113;
 Mon, 23 Nov 2020 12:08:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khAdc-0006mg-6j
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:08:08 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18cbe7dd-4d42-4d43-87ad-77d1413812dd;
 Mon, 23 Nov 2020 12:08:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606133287;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=2LP87x5kjG2xivPZkwr9dApE+dAaOSQIhfqR9PmApl4=;
  b=KDHTFSsoYyn8ozmNptcYn+Gau5zxg7BAJlFWFrbj61NB8qf24Zoz0hdr
   RMaGbxw1B4NmZT8dTbR18qjTOvQNKBEm54PyA7/DhTByMvCoPAsDF/xr4
   d6bF7/9nGjoe8s3Sv0IPQraAR5buOSfkqsqCU3BKGIsMiJyGMdyXLrLEK
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 32891285
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32891285"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WoPX1qJc0JnbIiBaSzttsS3w5EeV45naksqI+2K/ha7I8rWAUpB+APd5pPkubK7hR4whD6Kj3dX1L8raBwEWADg9ZoAWwG4zqS4v90bm9U9redRHCtb5zIefCli/phW203sJDUmu73QdOUSYN4Fqh9t/TStL7GwHy+wkCKUiCWePv/cevAqKxMLRSG+tyMJe+9yb8rHN2FVwGasQOzn6MHihG23p3sNEJdLHKUvEh/2OgsZyq7b1aFc8+fdxusqOLIuenV4EN2lnASMyZYGZZ4y27MBQwMgcWIPTRj2r0ATfaZPN1y1KEANfqO5vbf9SOC8kvR28GPFnUzc++FH16Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hZfya7UYdqHgTdqJL+cOXpho71GTWHJAI5CqVUxcWcY=;
 b=fh25HphKfHP1B7nmXoLyZ0cq3GyMXHiweesUZTPTZs9ckLcA5KXcmiQac+coy3PZ3tQ5bD0lHN4sFP55QApvNUyzuC3p0jrljn7ZQgfrFJUav5sqBzFB1Tg2gfocokR0NNuIIM/eyNuvycxKzA1eS6ka2MfchfVrUIUyxhpZzArXeMHAqOL+JMFJZiWGbU1UWV1bc4R+foHOLzuevoiOj7vv9RwregfH6J3T2HviUsxLR0n/wRusGLY2taMrKpTRgtt5Rpo4+QzLGzkoHIBL7kaTvqrlTCZg6g0RsV2jfYbQjiuNeJiQcq4YkRd9rNSSpf2iY0NKk0kXFH44c2fbHQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hZfya7UYdqHgTdqJL+cOXpho71GTWHJAI5CqVUxcWcY=;
 b=lvqq/1VuIDkKrffyYuITkRocSR5cu0KMCEflbqmUR4VDsZbsJ7ml70QQmu7++BOOKMrSEbJyZFH9aeQmrHPQyJnvpHAgMpa14AQLPXqvDICZcfxrq2Fnyd1BH7BzRGIQNfQku/89JEMT+ZddE+YOmrau19LpZOYGs2K3GOCJ/hY=
Date: Mon, 23 Nov 2020 13:07:58 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86: E801 memory "map" use implies no ACPI
Message-ID: <20201123120758.aefatbbaqy5s7l3t@Air-de-Roger>
References: <18ca8671-1478-3dc8-7b91-041dbc18829f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <18ca8671-1478-3dc8-7b91-041dbc18829f@suse.com>
X-ClientProxiedBy: MRXP264CA0020.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::32) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 59625594-2b0e-4f08-b159-08d88fa86df8
X-MS-TrafficTypeDiagnostic: DM6PR03MB5273:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB5273923B47ED60EE89FD0BF28FFC0@DM6PR03MB5273.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 59625594-2b0e-4f08-b159-08d88fa86df8
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 12:08:03.6283
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uJvSZa0CF3YKwF1Vtag80PwXqcLoKbLiU6BY5bDs+LAPC81cLaoCvQm8ACjdH9jbQUz/DHoEfvlrQF+ZjX81Ng==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5273
X-OriginatorOrg: citrix.com

On Fri, Nov 20, 2020 at 01:45:19PM +0100, Jan Beulich wrote:
> ACPI mandates use of E820 (or newer, e.g. EFI), and in fact firmware
> has been observed to include E820_ACPI ranges in what E801 reports as
> available (really "configured") memory.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> TBD: Alternatively we could drop all use of E801 (and older), since
>      there shouldn't be any 64-bit systems not supporting the more
>      modern E820.

That seems fine to me, but I don't want to force you to do the work.
If you want to go that route it seems better to me, as I imagine it
would mostly be code removal.


Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:26:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:26:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34145.64964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAvT-0000B7-NY; Mon, 23 Nov 2020 12:26:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34145.64964; Mon, 23 Nov 2020 12:26:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAvT-0000B0-Kf; Mon, 23 Nov 2020 12:26:35 +0000
Received: by outflank-mailman (input) for mailman id 34145;
 Mon, 23 Nov 2020 12:26:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaDe=E5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khAvS-0000Av-MO
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:26:34 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8e59d50-e4f0-428d-b443-f176b10ec7e4;
 Mon, 23 Nov 2020 12:26:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606134393;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=X3ywSJwCxr8loBrZm4rgWehNqaqAQQSzGvAtJ9NS+Qc=;
  b=Kc5a8JwPwIVVtk2rqV0Yl5B2cJ2ibHsaY/B+v8Ws3xEDybmmcKNi3/8d
   VNEIbAg6Coey1rNTkGPaecyAjfEGifC6Wo8bRsn7sABtGFu6cYzEGYdVm
   9bV9C91Yv3NLbzWWK4sw53wZSXGo+rv9IoiQsGOKHz/pYjQ6d1gX7mrqS
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: None
X-MesageID: 32078809
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32078809"
Subject: Re: Ping: [PATCH v2] x86/PV: make post-migration page state
 consistent
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>
References: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
 <b733914b-1bfd-d95d-470e-af3ca7a4f69f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e2ac69e3-64ef-5362-427b-7e52735ea834@citrix.com>
Date: Mon, 23 Nov 2020 12:26:27 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b733914b-1bfd-d95d-470e-af3ca7a4f69f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 20/11/2020 12:48, Jan Beulich wrote:
> On 04.11.2020 08:56, Jan Beulich wrote:
>> When a page table page gets de-validated, its type reference count drops
>> to zero (and PGT_validated gets cleared), but its type remains intact.
>> XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
>> such pages. An intermediate write to such a page via e.g.
>> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
>> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
>> return. In libxc the decision which pages to normalize / localize
>> depends solely on the type returned from the domctl. As a result without
>> further precautions the guest won't be able to tell whether such a page
>> has had its (apparent) PTE entries transitioned to the new MFNs.
>>
>> Add a check of PGT_validated, thus consistently avoiding normalization /
>> localization in the tool stack.
>>
>> Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead
>> of open coding it.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Don't change type's type.
> Ping?

Ping what?  There is still nothing addressing my concerns from v1.

To re-iterate - this is a very subtle change, in a very complicated
piece of migration.  As the problems described do not manifest in
practice, it is vital to understand why.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:30:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:30:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34151.64976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAyz-00012Z-7n; Mon, 23 Nov 2020 12:30:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34151.64976; Mon, 23 Nov 2020 12:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khAyz-00012S-4h; Mon, 23 Nov 2020 12:30:13 +0000
Received: by outflank-mailman (input) for mailman id 34151;
 Mon, 23 Nov 2020 12:30:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khAyx-00012N-Hj
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:30:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ef8067c-7322-4542-90b0-25c2110cd060;
 Mon, 23 Nov 2020 12:30:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB675AFC1;
 Mon, 23 Nov 2020 12:30:09 +0000 (UTC)
X-Inumbo-ID: 0ef8067c-7322-4542-90b0-25c2110cd060
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606134610; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qwFVAuzV3tht+QwpySCOfxOj21TQQI0/ENeuahmUz04=;
	b=l/MptpJqYzVd/Z0Zx6LcI0GBmg01gnYsVBb9UY5y1U+frBUPRjBz84mmOIbsWkI/VizFGu
	lhHzR2l6XMlMCyyq3UfSaeeIb43t5ncgky2B2q3KhXhvNNQYhfCeC3QBBehtOau7wLwMW/
	N58ZVTEIAD+tVBx2ui8Nzk57skaDHus=
Subject: Re: [PATCH v2] x86/PV: make post-migration page state consistent
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
 <20201123114843.ocjwlv4wkukcdcgf@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c9578226-91b6-c621-2e61-dfeb7cbfdbe5@suse.com>
Date: Mon, 23 Nov 2020 13:30:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201123114843.ocjwlv4wkukcdcgf@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 12:48, Roger Pau Monné wrote:
> On Wed, Nov 04, 2020 at 08:56:50AM +0100, Jan Beulich wrote:
>> When a page table page gets de-validated, its type reference count drops
>> to zero (and PGT_validated gets cleared), but its type remains intact.
>> XEN_DOMCTL_getpageframeinfo3 therefore continued to report the prior
>> usage for such pages. An intermediate write to such a page via e.g.
>> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
>> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
>> return. In libxc the decision of which pages to normalize / localize
>> depends solely on the type returned from the domctl. As a result, without
>> further precautions the guest won't be able to tell whether such a page
>> has had its (apparent) PTE entries transitioned to the new MFNs.
>>
>> Add a check of PGT_validated, thus consistently avoiding normalization /
>> localization in the tool stack.
>>
>> Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead
>> of open coding it.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> Maybe the switch could be avoided if the page is not validated or
> broken? Not that I care that much.

It certainly could be, but it didn't seem worth the code churn
to me.

Jan
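
[Editorial sketch, not part of the original thread.] The guarded type
reporting described in the quoted commit message can be shown in isolation.
This is a minimal sketch with made-up flag values and a hypothetical helper
name; the real PGT_* and XEN_DOMCTL_PFINFO_* definitions live in Xen's
headers, and the real logic sits in the domctl handler:

```c
#include <assert.h>

/* Illustrative stand-ins only: real values come from Xen's headers. */
#define PGT_validated            (1u << 0)   /* hypothetical bit position */
#define XEN_DOMCTL_PFINFO_NOTAB  0u
#define XEN_DOMCTL_PFINFO_L1TAB  1u          /* hypothetical value */

/* Sketch: a de-validated page table page (PGT_validated clear) is
 * reported as NOTAB, so the tool stack consistently skips
 * normalization / localization for it. */
static unsigned int report_type(unsigned int page_type, unsigned int type_flags)
{
    if ( !(type_flags & PGT_validated) )
        return XEN_DOMCTL_PFINFO_NOTAB;
    return page_type;
}
```

With this shape, a page whose type count dropped to zero is indistinguishable
from a plain writable page as far as the migration tool stack is concerned,
which is exactly the consistency the patch aims for.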


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:37:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34161.64989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB6O-0001HS-1C; Mon, 23 Nov 2020 12:37:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34161.64989; Mon, 23 Nov 2020 12:37:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB6N-0001HL-UB; Mon, 23 Nov 2020 12:37:51 +0000
Received: by outflank-mailman (input) for mailman id 34161;
 Mon, 23 Nov 2020 12:37:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaDe=E5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khB6M-0001HG-9N
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:37:50 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79b253d9-3b0a-4bdf-a611-c6bf5ea0b1ff;
 Mon, 23 Nov 2020 12:37:49 +0000 (UTC)
X-Inumbo-ID: 79b253d9-3b0a-4bdf-a611-c6bf5ea0b1ff
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606135069;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=HlVpSfs5Kq92ct6jbEauNLakfVPoNjRZN6RNY9JUIP4=;
  b=e7RqMcMwqMGNMIP6TK1xr0toK7pHSttwCyA4/ps1R1t5eZMT4TL0essl
   /PuwEjr5FF+XfE49dgF0NL0d36Qrf4goNj0rYJwOjh90Y5e3gIjNh4LD8
   e8zd6DgWKD/s3VjSA1jxiravBGBSnkjmPt8Y6lvI/7K1jrcYA52O6crg7
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LfTT8jqM+ceZyq4OIVrVm7MuMP5CMQWAov+SIlguifH+On8vrjS0240yx4yyRpUYxY/p7Qj5sK
 nr0UF9zlIu+Xeub1HXHEjP5wuE5wCvMQOgC62MxNcCshNP0oyXqbaP4uS8gNnyck/NmQkm47co
 AoMeTtyQNBHr5Nh+wihWIOldLFgkvNGz/fGaAVqAmQ7u+JhpEgDBLLjtv/+A2l45OCkHVpebVS
 vzOST8IM+W2pny7EKeGiqoBf6d+PJ83RaRHSwIryVA+qlVHAodE8fwMjnCGS6YSbIbTBa2eP+0
 RXU=
X-SBRS: None
X-MesageID: 32079416
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32079416"
Subject: Re: [PATCH] x86: E801 memory "map" use implies no ACPI
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <18ca8671-1478-3dc8-7b91-041dbc18829f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cff9ccc4-965a-18e9-1ac3-9779e39c2e62@citrix.com>
Date: Mon, 23 Nov 2020 12:37:43 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <18ca8671-1478-3dc8-7b91-041dbc18829f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 20/11/2020 12:45, Jan Beulich wrote:
> ACPI mandates use of E820 (or newer, e.g. EFI), and in fact firmware
> has been observed to include E820_ACPI ranges in what E801 reports as
> available (really "configured") memory.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: Alternatively we could drop all use of E801 (and older), since
>      there shouldn't be any 64-bit systems not supporting the more
>      modern E820.

I'd definitely be in favour of deleting the legacy logic.  The very fact
that firmware has been observed to include E820_ACPI in E801 maps shows
that the change here isn't correct in practice.

I think we should go further and depend on the bootloader providing the
memory/video/etc. details, which would also rip out a lot of 16-bit
handling code in the trampoline.

Judging by the context below, I think we should also drop various ACPI
related options.  Given its ubiquity these days, turning various bits of
ACPI off is only going to make problems worse.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:38:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34166.65000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB6r-0001OR-AO; Mon, 23 Nov 2020 12:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34166.65000; Mon, 23 Nov 2020 12:38:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB6r-0001OK-7J; Mon, 23 Nov 2020 12:38:21 +0000
Received: by outflank-mailman (input) for mailman id 34166;
 Mon, 23 Nov 2020 12:38:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khB6p-0001OB-Dy
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:38:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39d23983-c48b-4001-a1e7-fae065c71bc6;
 Mon, 23 Nov 2020 12:38:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F1A44ABCE;
 Mon, 23 Nov 2020 12:38:17 +0000 (UTC)
X-Inumbo-ID: 39d23983-c48b-4001-a1e7-fae065c71bc6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606135098; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NOhIeP5/zFABIK+Xst34MyxyojektpL89knqwSFWA1c=;
	b=fHQxkzShBQizuNau3BioNKKo7OWsWAsRKhP32x3ik8RHqLUsoVDzNP9bggvZDqwgWQ5b1x
	9S1VFy99A+f1kwTECtkX2CzX+xxT2rNsx9M8/dMWo91Tdwevlih+gZyltHjnVd6/VPUosz
	lgiOd+d+n+FfAeNJlWnBXal/RJIMCs8=
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/4] x86: ACPI and DMI table mapping fixes
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Date: Mon, 23 Nov 2020 13:38:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The first three patches fix fallout from the re-work of
acpi_os_{,un}map_memory(): Direct uses of __acpi_map_table() are now
no longer valid once we've reached SYS_STATE_boot. This was originally
noticed through system shutdown no longer working (patch 1), but the
problem clearly extends beyond this (patches 2 and 3). The last patch relaxes
things such that entering S5 would still work even if there was a
problem with FACS (information collected from there is only needed for
entering S3 and, once we support it, S4 via S4BIOS_REQ).

1: x86/ACPI: fix mapping of FACS
2: x86/ACPI: fix S3 wakeup vector mapping
3: x86/DMI: fix table mapping when one lives above 1Mb
4: x86/ACPI: don't invalidate S5 data when S3 wakeup vector cannot be determined

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:39:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:39:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34173.65012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB8P-0001Xr-Ml; Mon, 23 Nov 2020 12:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34173.65012; Mon, 23 Nov 2020 12:39:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB8P-0001Xj-JI; Mon, 23 Nov 2020 12:39:57 +0000
Received: by outflank-mailman (input) for mailman id 34173;
 Mon, 23 Nov 2020 12:39:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khB8P-0001Xd-5F
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:39:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 417f9d19-27c2-48c4-9f41-f122c49c5d15;
 Mon, 23 Nov 2020 12:39:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A64AFAC75;
 Mon, 23 Nov 2020 12:39:55 +0000 (UTC)
X-Inumbo-ID: 417f9d19-27c2-48c4-9f41-f122c49c5d15
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606135195; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cPfib+BwguvslmqXZRj+FnJjCsQqD71Psr1mDnpBDDc=;
	b=k3Np63Do2MCnyD8xk+WTEnRBJA35RjVU22z3Mh+L4BM0BLKavbR+FNg5DSJBzMClD6SFph
	51LFSTNJh9zxGHvQWgbbiW3hcjdeoWX1tRQ1vz0ZLR/IN9dEG5MO9vaCuuZZPT+Gx7czEo
	GqBFhi55fU8QwLR82/GYWPcya0ROkpo=
Subject: [PATCH 1/4] x86/ACPI: fix mapping of FACS
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Message-ID: <81a8c2f0-ae9b-98e0-f5c5-d32b423db491@suse.com>
Date: Mon, 23 Nov 2020 13:39:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

acpi_fadt_parse_sleep_info() runs when the system is already in
SYS_STATE_boot. Hence its direct call to __acpi_map_table() won't work
anymore. This call should probably have been replaced long ago already,
as the layering violation hasn't been necessary for quite some time.

Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/boot.c
+++ b/xen/arch/x86/acpi/boot.c
@@ -422,8 +422,7 @@ acpi_fadt_parse_sleep_info(struct acpi_t
 	if (!facs_pa)
 		goto bad;
 
-	facs = (struct acpi_table_facs *)
-		__acpi_map_table(facs_pa, sizeof(struct acpi_table_facs));
+	facs = acpi_os_map_memory(facs_pa, sizeof(*facs));
 	if (!facs)
 		goto bad;
 
@@ -448,11 +447,16 @@ acpi_fadt_parse_sleep_info(struct acpi_t
 		offsetof(struct acpi_table_facs, firmware_waking_vector);
 	acpi_sinfo.vector_width = 32;
 
+	acpi_os_unmap_memory(facs, sizeof(*facs));
+
 	printk(KERN_INFO PREFIX
 	       "            wakeup_vec[%"PRIx64"], vec_size[%x]\n",
 	       acpi_sinfo.wakeup_vector, acpi_sinfo.vector_width);
 	return;
-bad:
+
+ bad:
+	if (facs)
+		acpi_os_unmap_memory(facs, sizeof(*facs));
 	memset(&acpi_sinfo, 0,
 	       offsetof(struct acpi_sleep_info, sleep_control));
 	memset(&acpi_sinfo.sleep_status + 1, 0,
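
[Editorial sketch, not part of the original patch.] The cleanup discipline
the hunk above introduces (unmap on the success path, plus a guarded unmap
on every error path reached after a successful map) can be illustrated
standalone. The mock map/unmap functions below merely stand in for
acpi_os_map_memory()/acpi_os_unmap_memory(); all names are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock stand-ins tracking how many mappings are currently live. */
static int live_mappings;

static void *mock_map(uint64_t pa, size_t len)
{
    static char backing[64];
    (void)pa; (void)len;
    ++live_mappings;
    return backing;
}

static void mock_unmap(const void *va, size_t len)
{
    (void)va; (void)len;
    --live_mappings;
}

/* Sketch of the control flow: every path reaching "bad" after a
 * successful map must unmap, matching the patch's guarded cleanup. */
static int parse(uint64_t facs_pa, int fail_midway)
{
    void *facs = NULL;

    if ( !facs_pa )
        goto bad;

    facs = mock_map(facs_pa, 64);
    if ( !facs )
        goto bad;

    if ( fail_midway )
        goto bad;

    mock_unmap(facs, 64);
    return 0;

 bad:
    if ( facs )
        mock_unmap(facs, 64);
    return -1;
}
```

Whatever path parse() takes, no mapping leaks: that is the invariant the
switch from __acpi_map_table() to a proper map/unmap pair restores.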



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34174.65025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB8h-0002Ic-VZ; Mon, 23 Nov 2020 12:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34174.65025; Mon, 23 Nov 2020 12:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB8h-0002IV-Sa; Mon, 23 Nov 2020 12:40:15 +0000
Received: by outflank-mailman (input) for mailman id 34174;
 Mon, 23 Nov 2020 12:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khB8g-0002I5-Cl
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:40:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7678cbe-34fb-4a3d-979a-983d358e2936;
 Mon, 23 Nov 2020 12:40:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B2FA5AC66;
 Mon, 23 Nov 2020 12:40:12 +0000 (UTC)
X-Inumbo-ID: c7678cbe-34fb-4a3d-979a-983d358e2936
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606135212; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wbsqmr/l1bE1HyCRA6+0mV3xj2TzTTHW3TbItze53ho=;
	b=JXRx3OrLUhCfeSMYr2aweRmyPeOtOJ5Zq5ly7vvMFSqVX6P6kWXMt2DFt+9XbMOd8pALe5
	57XePsQeZmtNUgSp6DfXGzWp94f5krxxZz+agu3WtN4H4aDixFNYsA7f/FjmsNOO5R4paW
	0H642uy5fugBuMKVTGpimHnsUCrsfnU=
Subject: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Message-ID: <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
Date: Mon, 23 Nov 2020 13:40:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Use of __acpi_map_table() here was at least close to an abuse already
before, but it will now consistently return NULL here. Drop the layering
violation and use set_fixmap() directly. Re-use of the ACPI fixmap area
is hopefully going to remain "fine" for the time being.

Add checks to acpi_enter_sleep(): The vector now needs to be contained
within a single page, but the ACPI spec requires 64-byte alignment of
FACS anyway. Also bail if no wakeup vector was determined in the first
place, in part as preparation for a subsequent relaxation change.

Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/boot.c
+++ b/xen/arch/x86/acpi/boot.c
@@ -443,6 +443,11 @@ acpi_fadt_parse_sleep_info(struct acpi_t
 			"FACS is shorter than ACPI spec allow: %#x",
 			facs->length);
 
+	if (facs_pa % 64)
+		printk(KERN_WARNING PREFIX
+			"FACS is not 64-byte aligned: %#lx",
+			facs_pa);
+
 	acpi_sinfo.wakeup_vector = facs_pa + 
 		offsetof(struct acpi_table_facs, firmware_waking_vector);
 	acpi_sinfo.vector_width = 32;
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
     if ( state != ACPI_STATE_S3 )
         return;
 
-    wakeup_vector_va = __acpi_map_table(
-        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
-
     /* TBoot will set resume vector itself (when it is safe to do so). */
     if ( tboot_in_measured_env() )
         return;
 
+    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
+    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
+                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
+
     if ( acpi_sinfo.vector_width == 32 )
         *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
     else
         *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
+
+    clear_fixmap(FIX_ACPI_END);
 }
 
 static void acpi_sleep_post(u32 state) {}
@@ -331,6 +334,12 @@ static long enter_state_helper(void *dat
  */
 int acpi_enter_sleep(struct xenpf_enter_acpi_sleep *sleep)
 {
+    if ( sleep->sleep_state == ACPI_STATE_S3 &&
+         (!acpi_sinfo.wakeup_vector || !acpi_sinfo.vector_width ||
+          (PAGE_OFFSET(acpi_sinfo.wakeup_vector) >
+           PAGE_SIZE - acpi_sinfo.vector_width / 8)) )
+        return -EOPNOTSUPP;
+
     if ( sleep->flags & XENPF_ACPI_SLEEP_EXTENDED )
     {
         if ( !acpi_sinfo.sleep_control.address ||
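
[Editorial sketch, not part of the original patch.] The new
acpi_enter_sleep() guard boils down to simple offset arithmetic: the wakeup
vector, vector_width / 8 bytes wide, must lie entirely within the single
fixmap page. A standalone version under a locally re-derived PAGE_OFFSET
(the real macro comes from Xen's page.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE      4096u
#define PAGE_OFFSET(x) ((uint64_t)(x) & (PAGE_SIZE - 1))

/* Sketch of the containment test added to acpi_enter_sleep(): a
 * zero vector or width is rejected, as is a vector whose last byte
 * would fall beyond the end of the mapped page. */
static bool vector_fits_page(uint64_t wakeup_vector, unsigned int vector_width)
{
    if ( !wakeup_vector || !vector_width )
        return false;
    return PAGE_OFFSET(wakeup_vector) <= PAGE_SIZE - vector_width / 8;
}
```

Since the ACPI spec requires 64-byte alignment of FACS and the vector sits
at a small fixed offset inside it, a well-formed firmware table always
passes this check; the guard only trips on malformed or absent data.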



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:40:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34182.65037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB93-0002QS-Di; Mon, 23 Nov 2020 12:40:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34182.65037; Mon, 23 Nov 2020 12:40:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB93-0002QK-A8; Mon, 23 Nov 2020 12:40:37 +0000
Received: by outflank-mailman (input) for mailman id 34182;
 Mon, 23 Nov 2020 12:40:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khB92-0002OQ-Cj
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:40:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a14cf33e-5462-4b5e-b240-2c95284fbd29;
 Mon, 23 Nov 2020 12:40:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 943D8ABCE;
 Mon, 23 Nov 2020 12:40:30 +0000 (UTC)
X-Inumbo-ID: a14cf33e-5462-4b5e-b240-2c95284fbd29
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606135230; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SVkCF63cSzSDN2HfqCHv8cKlqji3PTKbOSNyrWOJa6o=;
	b=gbIT+V0AFH7qMN6Ub5jgkUvEvx8DXDNRxGyOWXzKF/Bx2OUqxpU2HIwp7y3Vv4nbkBz+zL
	LRqV+jYO/NhD1Y8zDaW0nizoEzr4nnkme5v7mqHFqEUy9RdBG98ngzFM3wgVZ3Fddmkhae
	HroC8qSqZTCOrVfpbv103VIajGMpLkk=
Subject: [PATCH 3/4] x86/DMI: fix table mapping when one lives above 1Mb
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Message-ID: <53cd4ae3-d806-c3ad-02fd-317a09f15a24@suse.com>
Date: Mon, 23 Nov 2020 13:40:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Use of __acpi_map_table() is kind of an abuse here, and doesn't work
anymore for the majority of cases if any of the tables lives outside the
first Mb. Keep this (ab)use only prior to reaching SYS_STATE_boot,
primarily to avoid needing to audit whether any of the calls here can
happen this early in the first place; quite likely this isn't necessary
at all - at least dmi_scan_machine() gets called late enough.

For the "normal" case, call __vmap() directly, despite effectively
duplicating acpi_os_map_memory(). There's one difference though: We
shouldn't need to establish UC- mappings, WP or r/o WB mappings ought to
be fine, as the tables are going to live in either RAM or ROM. Short of
having PAGE_HYPERVISOR_WP and wanting to map the tables r/o anyway, use
the latter of the two options. The r/o mapping implies some
constification of code elsewhere in the file. For code touched anyway
also switch to void (where possible) or uint8_t.

Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
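
[Editorial sketch, not part of the original patch.] The offset arithmetic
the new bt_ioremap() relies on can be checked in isolation: the page count
passed to __vmap() has to cover the in-page offset plus the table length.
The PFN_UP/PAGE_OFFSET helpers below are local re-definitions for
illustration; the real ones come from Xen's page handling headers:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT     12
#define PAGE_SIZE      ((uintptr_t)1 << PAGE_SHIFT)
#define PAGE_OFFSET(x) ((uintptr_t)(x) & (PAGE_SIZE - 1))
#define PFN_UP(x)      (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

/* Sketch of the sizing used in bt_ioremap(): a table that starts
 * near the end of a page needs an extra page mapped, even when its
 * length alone would fit in one. */
static unsigned int pages_needed(uintptr_t addr, unsigned int len)
{
    return (unsigned int)PFN_UP(PAGE_OFFSET(addr) + len);
}
```

This is why the patch maps PFN_UP(offs + len) pages and then adds offs back
to the returned virtual address, rather than mapping PFN_UP(len) pages.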

--- a/xen/arch/x86/dmi_scan.c
+++ b/xen/arch/x86/dmi_scan.c
@@ -12,8 +12,6 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 
-#define bt_ioremap(b,l)  ((void *)__acpi_map_table(b,l))
-#define bt_iounmap(b,l)  ((void)0)
 #define memcpy_fromio    memcpy
 #define alloc_bootmem(l) xmalloc_bytes(l)
 
@@ -111,9 +109,32 @@ enum dmi_entry_type {
 #define dmi_printk(x)
 #endif
 
-static char * __init dmi_string(struct dmi_header *dm, u8 s)
+static const void *__init bt_ioremap(paddr_t addr, unsigned int len)
 {
-	char *bp=(char *)dm;
+    mfn_t mfn = _mfn(PFN_DOWN(addr));
+    unsigned int offs = PAGE_OFFSET(addr);
+
+    if ( addr + len <= MB(1) )
+        return __va(addr);
+
+    if ( system_state < SYS_STATE_boot )
+        return __acpi_map_table(addr, len);
+
+    return __vmap(&mfn, PFN_UP(offs + len), 1, 1, PAGE_HYPERVISOR_RO,
+                  VMAP_DEFAULT) + offs;
+}
+
+static void __init bt_iounmap(const void *ptr, unsigned int len)
+{
+    if ( (unsigned long)ptr < DIRECTMAP_VIRT_START &&
+         system_state >= SYS_STATE_boot )
+        vunmap(ptr);
+}
+
+static const char *__init dmi_string(const struct dmi_header *dm, uint8_t s)
+{
+	const char *bp = (const void *)dm;
+
 	bp+=dm->length;
 	if(!s)
 		return "";
@@ -133,11 +154,10 @@ static char * __init dmi_string(struct d
  */
  
 static int __init dmi_table(paddr_t base, u32 len, int num,
-			    void (*decode)(struct dmi_header *))
+			    void (*decode)(const struct dmi_header *))
 {
-	u8 *buf;
-	struct dmi_header *dm;
-	u8 *data;
+	const uint8_t *buf, *data;
+	const struct dmi_header *dm;
 	int i=0;
 		
 	buf = bt_ioremap(base, len);
@@ -301,7 +321,7 @@ typedef union {
 
 static int __init _dmi_iterate(const struct dmi_eps *dmi,
 			       const smbios_eps_u smbios,
-			       void (*decode)(struct dmi_header *))
+			       void (*decode)(const struct dmi_header *))
 {
 	int num;
 	u32 len;
@@ -335,7 +355,7 @@ static int __init _dmi_iterate(const str
 	return dmi_table(base, len, num, decode);
 }
 
-static int __init dmi_iterate(void (*decode)(struct dmi_header *))
+static int __init dmi_iterate(void (*decode)(const struct dmi_header *))
 {
 	struct dmi_eps dmi;
 	struct smbios3_eps smbios3;
@@ -370,7 +390,7 @@ static int __init dmi_iterate(void (*dec
 	return -1;
 }
 
-static int __init dmi_efi_iterate(void (*decode)(struct dmi_header *))
+static int __init dmi_efi_iterate(void (*decode)(const struct dmi_header *))
 {
 	int ret = -1;
 
@@ -433,10 +453,11 @@ static char *__initdata dmi_ident[DMI_ST
  *	Save a DMI string
  */
  
-static void __init dmi_save_ident(struct dmi_header *dm, int slot, int string)
+static void __init dmi_save_ident(const struct dmi_header *dm, int slot, int string)
 {
-	char *d = (char*)dm;
-	char *p = dmi_string(dm, d[string]);
+	const char *d = (const void *)dm;
+	const char *p = dmi_string(dm, d[string]);
+
 	if(p==NULL || *p == 0)
 		return;
 	if (dmi_ident[slot])
@@ -629,10 +650,10 @@ static const struct dmi_blacklist __init
  *	out of here.
  */
 
-static void __init dmi_decode(struct dmi_header *dm)
+static void __init dmi_decode(const struct dmi_header *dm)
 {
 #ifdef DMI_DEBUG
-	u8 *data = (u8 *)dm;
+	const uint8_t *data = (const void *)dm;
 #endif
 	
 	switch(dm->type)



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:41:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34190.65048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB9a-0002YF-ME; Mon, 23 Nov 2020 12:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34190.65048; Mon, 23 Nov 2020 12:41:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khB9a-0002Y8-J0; Mon, 23 Nov 2020 12:41:10 +0000
Received: by outflank-mailman (input) for mailman id 34190;
 Mon, 23 Nov 2020 12:41:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khB9Y-0002Xw-KH
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:41:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05c52c33-af8b-431e-a2d7-2b1f5c93c1a3;
 Mon, 23 Nov 2020 12:41:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 203E0AC60;
 Mon, 23 Nov 2020 12:41:07 +0000 (UTC)
X-Inumbo-ID: 05c52c33-af8b-431e-a2d7-2b1f5c93c1a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606135267; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YhDy7Mg6pP0gtjGsaM0dFyUyU3hPq8Z+OxGSvcjCo4c=;
	b=ZJ4yp331J2ZsZW67BKFgAMSZCYQmldzGDUPUwJNFQiG9m9CTVIopKsCXxW+gvRPuggRnqS
	gxZzLKL4OEDmEmRW/v0FJcsV1M5Jo/raux4VaJUCI12TKjuRX0ZlrKCp/ZPvyTn53DUryk
	iM8pnPZEtfWu5FVvoq77kiOzvKyilpE=
Subject: [PATCH 4/4] x86/ACPI: don't invalidate S5 data when S3 wakeup vector
 cannot be determined
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Message-ID: <d2b9d231-8a05-6164-66f8-74d7bfe4b40f@suse.com>
Date: Mon, 23 Nov 2020 13:41:06 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

We can be more tolerant as long as the data collected from FACS is only
needed to enter S3. A prior change already added suitable checking to
acpi_enter_sleep().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/boot.c
+++ b/xen/arch/x86/acpi/boot.c
@@ -420,22 +420,22 @@ acpi_fadt_parse_sleep_info(struct acpi_t
 		facs_pa = (uint64_t)fadt->facs;
 	}
 	if (!facs_pa)
-		goto bad;
+		return;
 
 	facs = acpi_os_map_memory(facs_pa, sizeof(*facs));
 	if (!facs)
-		goto bad;
+		return;
 
 	if (strncmp(facs->signature, "FACS", 4)) {
 		printk(KERN_ERR PREFIX "Invalid FACS signature %.4s\n",
 			facs->signature);
-		goto bad;
+		goto done;
 	}
 
 	if (facs->length < 24) {
 		printk(KERN_ERR PREFIX "Invalid FACS table length: %#x",
 			facs->length);
-		goto bad;
+		goto done;
 	}
 
 	if (facs->length < 64)
@@ -452,6 +452,7 @@ acpi_fadt_parse_sleep_info(struct acpi_t
 		offsetof(struct acpi_table_facs, firmware_waking_vector);
 	acpi_sinfo.vector_width = 32;
 
+ done:
 	acpi_os_unmap_memory(facs, sizeof(*facs));
 
 	printk(KERN_INFO PREFIX



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:50:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34203.65061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBIn-0003Z6-Ju; Mon, 23 Nov 2020 12:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34203.65061; Mon, 23 Nov 2020 12:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBIn-0003Yz-Gm; Mon, 23 Nov 2020 12:50:41 +0000
Received: by outflank-mailman (input) for mailman id 34203;
 Mon, 23 Nov 2020 12:50:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBIm-0003Yu-Sw
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:50:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9af8a926-941b-4b3b-b27d-c480e5efbaec;
 Mon, 23 Nov 2020 12:50:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22C56AC2E;
 Mon, 23 Nov 2020 12:50:39 +0000 (UTC)
X-Inumbo-ID: 9af8a926-941b-4b3b-b27d-c480e5efbaec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606135839; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B92wNGMBZyqdB83JFdpLlQToKw+/MZ3Rz0BnXyrEf24=;
	b=OrE3mqM894tK8rhjsI+bVI4cPO9D/XuqWsgAWvq340T6wHZKlZ5le5dcBvwNZN5mXvYHPM
	83IpaxXC53sZc2ptyzFLNRP0oxVc+g51ThytqZogsJYCYZVo7hy2JRnhpB+CE8RcaONEhI
	udu76q2vNI9YzbJOh6xZVDSmkAOiBv4=
Subject: Re: Ping: [PATCH v2] x86/PV: make post-migration page state
 consistent
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <07ebce3c-4dcf-bc9e-6d82-7f3def486ab8@suse.com>
 <b733914b-1bfd-d95d-470e-af3ca7a4f69f@suse.com>
 <e2ac69e3-64ef-5362-427b-7e52735ea834@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8df7e434-6cea-54ca-4f24-09cbd692ad3b@suse.com>
Date: Mon, 23 Nov 2020 13:50:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <e2ac69e3-64ef-5362-427b-7e52735ea834@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 13:26, Andrew Cooper wrote:
> On 20/11/2020 12:48, Jan Beulich wrote:
>> On 04.11.2020 08:56, Jan Beulich wrote:
>>> When a page table page gets de-validated, its type reference count drops
>>> to zero (and PGT_validated gets cleared), but its type remains intact.
>>> XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
>>> such pages. An intermediate write to such a page via e.g.
>>> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
>>> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
>>> return. In libxc the decision which pages to normalize / localize
>>> depends solely on the type returned from the domctl. As a result without
>>> further precautions the guest won't be able to tell whether such a page
>>> has had its (apparent) PTE entries transitioned to the new MFNs.
>>>
>>> Add a check of PGT_validated, thus consistently avoiding normalization /
>>> localization in the tool stack.
>>>
>>> Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead
>>> of open coding it.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> v2: Don't change type's type.
>> Ping?
> 
> Ping what?  There is still nothing addressing my concerns from v1.

I did reply to your concerns on Sep 11th, and then replied to this
reply of mine another time on Sep 28th. Neither of these got any
response from you, hence I had to conclude - after two further
pings on v1 - that they were satisfactory to you. Now you say they
weren't, but without saying in which way, so I still wouldn't know
what to change in the description.

On the code change itself you did say "... so this is probably a
good change", so I understood your remaining concern to be merely
with the description. Maybe I misunderstood this aspect, too?

> To re-iterate - this is a very subtle change, in a very complicated
> piece of migration.  As the problems described do not manifest in
> practice, it is vital to understand why.

Until now it has been my understanding that they just don't happen
to manifest, because guests know to behave themselves (read: pin,
first and foremost, all their page tables, which means we wouldn't
in practice run into ones with an in-flight state change).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 12:51:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 12:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34209.65073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBJU-0003ew-TT; Mon, 23 Nov 2020 12:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34209.65073; Mon, 23 Nov 2020 12:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBJU-0003eo-QB; Mon, 23 Nov 2020 12:51:24 +0000
Received: by outflank-mailman (input) for mailman id 34209;
 Mon, 23 Nov 2020 12:51:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khBJT-0003ei-Gm
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 12:51:23 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31a95a6f-9ede-4aad-97ab-3266d758226c;
 Mon, 23 Nov 2020 12:51:22 +0000 (UTC)
X-Inumbo-ID: 31a95a6f-9ede-4aad-97ab-3266d758226c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606135882;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=SfC3p/z9eWWdbWa/c5ZLmvKkAg+PRbhplWH4zRvZ6tY=;
  b=VcSrpFLosPffYvtnOpZTqvv0mqDvqasUsbQ4KQQ0H1UaJPtHDG7kgGi6
   79OaOmO+L/3wYCD7M+2amW5n0YSaJ+urs6k9kE5Zw7mdVBLZR49aOyW+O
   3c91E2HMyD2iwHbK72B+Qj0TrFJUdQWwvmj2ss2ulXSb3KK3zK99tHhhd
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 2D09qRZfCGM8GLFzpPQhyuTjolrFcwmMot1c+i67v30RLY3E8pUkb+LeYHG/9BMtg8tgq8bbtf
 DR4qkLXlRDEaiWZuwaWR69vcOG1PimAob5mI6PQRFXzyL9/EFbBoc7r3oVw2UFsXMx6pJf6+Pa
 7FvUXOJ9MwtZl7l1ehi0hPLAyVc7wWqcwFAJvPaG9mHxVFtLDUnyhP3PnMy1I95Q94PBv0lFFb
 Ou7bIKAmSEEjVO7B5TmuhKlnbFJ3M3fXuWV21Ff7WWL50+27ZTs0yeFjj8U64C0ghTmYh8uZTf
 cxs=
X-SBRS: None
X-MesageID: 31708105
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31708105"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hrv/o9PbOAi6UxfPCsN8M30AGXRqZF2fkEjLZlYuxmtqaIJHthfVD/CDr6P9HKHglNBkEVfCFEaQtA9nGe+SxsmcHpMXJYJ7VG6IIIEavI2WeEM3kmHjTkgBPErYWG1SarRJ8Z0fbiMRVkpAfaHCswEug6RQqd1Azi4hkTWawJGqhSiP0Wup+N0J+0IG7Z8f3Xl4gLDgv7v15nDhIX/slnhm5YZkEy6+Hl5h6EN+SnJVkmVwlEhKC+gH1dcc83+HG/a0sUAvKEebbxmZMLzCPXg+3OlSBv+NA2NOdunJP7AMzYI2HzG5Od8Bcsts/tcc3b3sM9wJ7jkNuByNwT8ORA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wS4mvdF9pF3kroJXsX5KKF0x+deMXeeFrt+fAWdTMlc=;
 b=OVAVbj4dW95SxhnKVo3NU30BVdyTMczYVkNr8bEsoAtNK+DPIiJrCsObBt2hxu8x4aGeX2OvglD9YU5fISIdjCRDJ0IFGR0ZASSTAqrHgzmGkRpy/qTHiIh9rlQVyWayX0QxxThLn1gBcqn3VCZHlQrIhWZ+jq5w0aT08R05drrR3T02anU8tbLcc1wMyEocVFSecwatpIzBZpG8iSsFys6iIUghvCt6IBHSbUyzurEMiz5PGa4rqtDqr+ImYyVGzCfLJD/H06SVWrmQlN9awHnZSDf6lR1/37ptSv3VGb8EcUZUccdx1lhD7TbZ/cTN9wyhwXgGe+nrlI7GmC11gw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wS4mvdF9pF3kroJXsX5KKF0x+deMXeeFrt+fAWdTMlc=;
 b=pAKucgd4XzFTW17K1IMFTCPXle6yJ5iBmSnT+2UMHw4ArXmCu2rPyAonzw50ulBdKhQFreXbB0t89wifOyTf+I+zOdXs6ObdIS0wysVRdUtCncca//qsPbFt6hOt5R98tD9FoaA9T1WnIqhoLs6eRe5yz4gZZpsC3Vt1lz0manU=
Date: Mon, 23 Nov 2020 13:51:12 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
References: <20201119175733.GA6067@antioche.eu.org>
 <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201123113241.GE2520@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0173.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::12)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7b6be81e-432f-472c-e769-08d88fae7821
X-MS-TrafficTypeDiagnostic: DS7PR03MB5607:
X-Microsoft-Antispam-PRVS: <DS7PR03MB5607AA64FBCC3735E07786728FFC0@DS7PR03MB5607.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: wAYhAsCgu+r8UfPR06RqUfsedtDOjXeNiSSKJdsN7je9lPhVOKuf7gqJs+BU6XWsmC0mGQdgn3aCM0ShI3Y21Qg+xyjUwWWfXjTnfy0yza83nE1VuzTLUI91IbAC1+T1S6W1gTddDuhgtu1d7HWiYV/nYuWHKlYgYRnOQDV9zpWgOrb8CMX7/pXJphvLUuzcotyyH1dxtSlws0kxPnLu9GstxuC548ONNxWFitO8hQwGC+xWYakP5WWLWbuO9UxMAiFmDYCLQclqumZGk5yXbxcdNhy0Dv6Q2hVel/aL6EyPkCa89vuHhJi5DmpAIXlk6Hz+AP8FJVvoOwrquElx3g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(366004)(376002)(346002)(136003)(396003)(8676002)(9686003)(86362001)(26005)(5660300002)(83380400001)(6916009)(316002)(956004)(66946007)(6486002)(66476007)(85182001)(66556008)(1076003)(186003)(478600001)(16526019)(4326008)(6666004)(2906002)(6496006)(8936002)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: M5F770z8JJ5BE3Q+nzQEs1oj0FJ5OgD2uLMlt8rM9o1BZd617fCO3k1rj/NrjAjm3xrn7ICBZ/hGStWQjb4j0BiNnJijCOOx9SNKERhNohWNhZnGJhBsiU47yAWATfDTEP6sjxjWe8F+EClkjzNbBBkNUvRbjK22jEeQVCjea6ch3zIs6BHuJyf4+bcU6XD4ZkMg6Mz/IRPdWp2pmXnUuYfRSqJpLH46zL5S7eyASTNhYhfpxUr+mjtmjBVXHq0P3N6QY7D91wgB9EPA4NCDYpJNpxhn9ipQ4NDDWBqBuUOk8CCku2ZIyRCC4IFF82uM7Ct2PcBP0DkaB4IyMV6dQTc0GxULw2W2ZK6QgqMmUfhPFz2W6buhC6uOJ2hUuadG7PDGHkrVpfhGrsN1Yfxt1660AEdMBoR9Ntgg3O0xw+gBvm2cX0G06DLCnrEFCUEPh2p7pT1zEN1iB69NCFe8gPJgtap3wZ+u8swQlXaUTB8yG70ojkAO724Erya4RO/QIvhdvyI3bsJ6t/QhpgFq9KesA7N9Wdk6q+PvHSJItAjkFOElq8GOgB8tYB0CW+RaFFTC6+yIOz3RRkDaLJ7pm6DNId8IVA0BUN5MM6R6q0u49jxqh8s7GWLVHDCL7R1UUmh5f1+pEzmLYNpjUP9H9gQaPynxtCwTiLCO03PyXcCKQOPDsTKbgLV8bG2Cue98X1qP4919lttgmb1EfingZUFcHFho41XORZihCYsl2/uZ9hAYudBvBOSfpvfHb79t918sBOXBA5cHtUVzKDJsWW4mULZnvJKTh39e9Fpgxyqxz/Ra4Ai4E5JiAlTu4rEUTL5I6e2bfy1lyXCFPgr4JaNhg0xb6xDZwUB8in+W/x4z6OiSwDjdMrl33rBG0zY5GgvsFvbMSldv1USwF9hjyQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 7b6be81e-432f-472c-e769-08d88fae7821
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 12:51:17.6699
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 76XGQAHkkMK1DMQFuht1AtlNe/osqXRUouawicdW5qqRBcGh9hOkbbhzBV2oNTZQvqgJZdHEJ7CeerEuq4JtoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5607
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 12:32:41PM +0100, Manuel Bouyer wrote:
> On Mon, Nov 23, 2020 at 10:57:13AM +0100, Roger Pau Monné wrote:
> > > [...]
> > > OK.
> > > I finally found where the EOI occurs (it's within a macro so a simple grep
> > > didn't show it).
> > > 
> > > When interrupts are not masked (e.g. via cli), the ioapic handler is called.
> > > From here, 2 paths can happen:
> > > a) the software interrupt priority level (called spl in BSD world) allows the
> > >   driver's handler to run. In this case it's called, then the interrupt
> > >   is unmasked at ioapic level, and EOI'd at lapic.
> > > b) the software interrupt priority level doesn't allow this driver's handler to
> > >   run. In this case, the interrupt is marked as pending in software,
> > >   explicitly masked at the ioapic level and EOI'd at lapic.
> > >   Later, when the spl is lowered, the driver's interrupt handler is run,
> > >   then the interrupt is unmasked at ioapic level, and EOI'd at lapic
> > >   (this is the same path as a)). here we may EOI the lapic twice, and the
> > >   second time when there's no hardware interrupt pending any more.
> > > 
> > > I suspect it's case b) which causes the problem with Xen.
> > 
> > Case b) should be handled fine AFAICT. If there's no interrupt pending
> > in the lapic ISR the EOI is just a noop. If there's somehow another
> > vector pending in the ISR you might actually be EOIing the wrong vector,
> > and thus this would be a bug in NetBSD. I certainly don't know enough of
> > NetBSD's interrupt model to tell whether this second EOI is merely
> > unnecessary or whether it could cause issues.
> > 
> > Can you actually assert that disabling this second unneeded EOI does
> > solve the problem?
> 
> I tried this, and it didn't change anything ...
> 
> I'm out of ideas to try.

Hm, yes, it's quite weird. Do you know whether a NetBSD kernel can be
multibooted from pxelinux with Xen? I would like to see if I can
reproduce this myself.

I also have the following patch, which will print a warning message
when GSI 34 is injected from hardware or when Xen performs an EOI
(either from a timeout or when reacting to a guest one). I would
expect at least the interrupt-injection warning to trigger together
with the existing message.

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..bbd141a3d9 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -43,7 +43,12 @@
 /* HACK: Route IRQ0 only to VCPU0 to prevent time jumps. */
 #define IRQ0_SPECIAL_ROUTING 1
 
-static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int irq);
+static void _vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int irq);
+
+bool __read_mostly irqprint;
+#define vioapic_deliver(vioapic, irq) ({\
+    WARN_ON(irqprint && vioapic->base_gsi + irq == 34); \
+    _vioapic_deliver(vioapic, irq); })
 
 static struct hvm_vioapic *addr_vioapic(const struct domain *d,
                                         unsigned long addr)
@@ -119,6 +124,16 @@ static uint32_t vioapic_read_indirect(const struct hvm_vioapic *vioapic)
 
         if ( redir_index >= vioapic->nr_pins )
         {
+            switch ( vioapic->ioregsel )
+            {
+            case 3:
+                irqprint = true;
+                break;
+
+            case 0xf:
+                irqprint = false;
+                break;
+            }
             gdprintk(XENLOG_WARNING, "apic_mem_readl:undefined ioregsel %x\n",
                      vioapic->ioregsel);
             break;
@@ -391,7 +406,7 @@ static void ioapic_inj_irq(
     vlapic_set_irq(target, vector, trig_mode);
 }
 
-static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
+static void _vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 {
     uint16_t dest = vioapic->redirtbl[pin].fields.dest_id;
     uint8_t dest_mode = vioapic->redirtbl[pin].fields.dest_mode;
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..91fb99d271 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1388,10 +1388,12 @@ static void set_eoi_ready(void *data)
     flush_ready_eoi();
 }
 
+extern bool irqprint;
 void pirq_guest_eoi(struct pirq *pirq)
 {
     struct irq_desc *desc;
 
+    WARN_ON(irqprint && pirq->pirq == 34);
     ASSERT(local_irq_is_enabled());
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
     if ( desc )
@@ -1837,6 +1839,8 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    WARN_ON(irqprint && desc->irq == 34);
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:03:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:03:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34217.65085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBVR-0004jI-21; Mon, 23 Nov 2020 13:03:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34217.65085; Mon, 23 Nov 2020 13:03:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBVQ-0004jB-Uc; Mon, 23 Nov 2020 13:03:44 +0000
Received: by outflank-mailman (input) for mailman id 34217;
 Mon, 23 Nov 2020 13:03:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rez2=E5=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1khBVP-0004j1-Iu
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:03:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a195bb2-1734-4483-b0bb-6e01aa6f56c3;
 Mon, 23 Nov 2020 13:03:42 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 191D120758;
 Mon, 23 Nov 2020 13:03:35 +0000 (UTC)
X-Inumbo-ID: 4a195bb2-1734-4483-b0bb-6e01aa6f56c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606136621;
	bh=J/TET3MqSRKQBkZDLmw3offBuTNq8xblR6VPj5c7KTQ=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=huMDpxd34lY8zl81VJt6WrkzA8VbZ3WZKfCc+01YUsnQkEm+dvBfrNL343ZTscxgS
	 4cn7RRAIuR/6lyK6TO0qxACy3TNrSBuTodAx+s4Q2YpvApK9inZpqsbsSdtJmbV9Zx
	 YwwBYqtSxUH9kHvWkiEz2t98c4vnYVnAJ6qqQTEg=
Date: Mon, 23 Nov 2020 07:03:48 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Joe Perches <joe@perches.com>, Kees Cook <keescook@chromium.org>,
	Jakub Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org,
	linux-atm-general@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-wireless@vger.kernel.org, linux-fbdev@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Nathan Chancellor <natechancellor@gmail.com>,
	linux-ide@vger.kernel.org, dm-devel@redhat.com,
	keyrings@vger.kernel.org, linux-mtd@lists.infradead.org,
	GR-everest-linux-l2@marvell.com, wcn36xx@lists.infradead.org,
	samba-technical@lists.samba.org, linux-i3c@lists.infradead.org,
	linux1394-devel@lists.sourceforge.net,
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net,
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org,
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com,
	Nick Desaulniers <ndesaulniers@google.com>,
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org,
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org,
	linux-security-module@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com,
	linux-acpi@vger.kernel.org, coreteam@netfilter.org,
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	tipc-discussion@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-media@vger.kernel.org, linux-watchdog@vger.kernel.org,
	selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org,
	linux-can@vger.kernel.org, linux-block@vger.kernel.org,
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org,
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org,
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org,
	x86@kernel.org, linux-nfs@vger.kernel.org,
	GR-Linux-NIC-Dev@marvell.com, linux-mm@kvack.org,
	netdev@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
	linux-mmc@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org,
	netfilter-devel@vger.kernel.org, linux-crypto@vger.kernel.org,
	patches@opensource.cirrus.com, linux-integrity@vger.kernel.org,
	target-devel@vger.kernel.org, linux-hardening@vger.kernel.org,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
Message-ID: <20201123130348.GA3119@embeddedor>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Sun, Nov 22, 2020 at 11:53:55AM -0800, James Bottomley wrote:
> On Sun, 2020-11-22 at 11:22 -0800, Joe Perches wrote:
> > On Sun, 2020-11-22 at 11:12 -0800, James Bottomley wrote:
> > > On Sun, 2020-11-22 at 10:25 -0800, Joe Perches wrote:
> > > > On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> > > > > Please tell me our reward for all this effort isn't a single
> > > > > missing error print.
> > > > 
> > > > There were quite literally dozens of logical defects found
> > > > by the fallthrough additions.  Very few were logging only.
> > > 
> > > So can you give us the best examples (or indeed all of them if
> > > someone is keeping score)?  hopefully this isn't a US election
> > > situation ...
> > 
> > Gustavo?  Are you running for congress now?
> > 
> > https://lwn.net/Articles/794944/
> 
> That's 21 reported fixes of which about 50% seem to produce no change
> in code behaviour at all, a quarter seem to have no user visible effect
> with the remaining quarter producing unexpected errors on obscure
> configuration parameters, which is why no-one really noticed them
> before.

The really important point here is the number of bugs this has prevented
and will prevent in the future. See an example of this below:

https://lore.kernel.org/linux-iio/20190813135802.GB27392@kroah.com/

This work is still relevant, even if the total number of issues/bugs
we find in the process is zero (which is not the case).
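
As a hedged illustration of the class of defect involved (a hypothetical
example, not one of the 141 patches): a switch statement that silently
falls through one case into the next.

```c
#include <assert.h>

/* Hypothetical example: map a permission character to its mode bits.
 * If the "break" after the 'w' case were omitted, control would fall
 * through into the 'x' case and silently set an extra bit -- exactly
 * the kind of defect Clang's -Wimplicit-fallthrough surfaces at build
 * time, forcing an explicit break or fallthrough annotation. */
static int mode_bits(char c)
{
    int bits = 0;

    switch (c) {
    case 'r':
        bits |= 4;
        break;
    case 'w':
        bits |= 2;
        break;  /* dropping this line would also set bit 0 below */
    case 'x':
        bits |= 1;
        break;
    }

    return bits;
}
```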

"The sucky thing about doing hard work to deploy hardening is that the
result is totally invisible by definition (things not happening) [..]"
- Dmitry Vyukov

Thanks
--
Gustavo







From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:07:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:07:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34226.65101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBYv-0004um-O1; Mon, 23 Nov 2020 13:07:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34226.65101; Mon, 23 Nov 2020 13:07:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBYv-0004uf-K9; Mon, 23 Nov 2020 13:07:21 +0000
Received: by outflank-mailman (input) for mailman id 34226;
 Mon, 23 Nov 2020 13:07:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBYt-0004uW-Td
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:07:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71f9fffa-fa58-4f14-9a61-f23540af3381;
 Mon, 23 Nov 2020 13:07:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45A5DAF40;
 Mon, 23 Nov 2020 13:07:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606136838; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SG87VMAsY8N17K1dmsmWhSR2FvjwB2NZUALLHbpffng=;
	b=gq8l43NEgSqWOEbo7Ep9zAtsGsWKRh8RjNVicrs4ba0lLNc+rYiCtGY+azAA7qBCXT2FjI
	BY+9NmDYCWmxr7Iwz6BLpl0QU68EXUe8R12OLAr+SwBKJzrBMKc8XOsSKJw4JdyZz1oEMa
	UrRErnzA3JsFADlaFn/3K3M8aglnQnU=
Subject: Re: [PATCH] x86: E801 memory "map" use implies no ACPI
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <18ca8671-1478-3dc8-7b91-041dbc18829f@suse.com>
 <cff9ccc4-965a-18e9-1ac3-9779e39c2e62@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <293cc4e1-af95-586c-d908-a392aa9179ef@suse.com>
Date: Mon, 23 Nov 2020 14:07:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cff9ccc4-965a-18e9-1ac3-9779e39c2e62@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 13:37, Andrew Cooper wrote:
> On 20/11/2020 12:45, Jan Beulich wrote:
>> ACPI mandates use of E820 (or newer, e.g. EFI), and in fact firmware
>> has been observed to include E820_ACPI ranges in what E801 reports as
>> available (really "configured") memory.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> TBD: Alternatively we could drop all use of E801 (and older), since
>>      there shouldn't be any 64-bit systems not supporting the more
>>      modern E820.
> 
> I'd definitely be in favour of deleting the legacy logic.  The very fact
> that firmware has been observed to include E820_ACPI in E801 maps shows
> that the change here isn't correct in practice.

Mind explaining yourself? I don't see how the change here wouldn't be
correct in practice. Inclusion of ACPI table space in E801 data simply
means either the OS is ACPI-enabled, in which case it won't use E801,
or it's not (or use of ACPI was disabled there by some means), in which
case it's fine to re-use the memory as normal RAM.
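
The argument reduces to a simple precedence rule; a minimal sketch
(hypothetical names, not Xen's actual boot code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the precedence argument above, not Xen's
 * actual code.  ACPI mandates E820 (or newer), so an ACPI-enabled OS
 * never consults E801; E801 is only a legacy fallback, and on that
 * path its reported memory may be used as normal RAM. */
enum memmap_source { MEMMAP_E820, MEMMAP_E801 };

static enum memmap_source pick_memmap(bool e820_present, bool acpi_in_use)
{
    if (acpi_in_use || e820_present)
        return MEMMAP_E820;  /* modern path: ACPI implies E820 */
    return MEMMAP_E801;      /* legacy path: ACPI not in use */
}
```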

> I think we should go further and depend on the bootloader providing the
> memory/video/etc details, which also rips out a lot of 16bit handling
> code in the trampoline.

I wouldn't go this far just yet. We've not been using boot loader
provided memory map data so far, so a first step would imo be to
validate that the boot loader provided data is consistent with the raw
E820 one. We don't want to risk getting screwed up by an old, buggy
boot loader.
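
That validation step could look something like this minimal sketch
(hypothetical types and names, not actual Xen code): compare the boot
loader provided map entry by entry against the raw E820 data, and
reject the former on any mismatch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory range layout, not Xen's actual representation. */
struct mem_range {
    uint64_t start;
    uint64_t size;
    uint32_t type;
};

/* Accept the boot loader map only if it matches the raw E820 map
 * entry for entry; otherwise the caller falls back to the raw data. */
static bool maps_consistent(const struct mem_range *bl, size_t bl_n,
                            const struct mem_range *raw, size_t raw_n)
{
    if (bl_n != raw_n)
        return false;
    for (size_t i = 0; i < bl_n; i++)
        if (bl[i].start != raw[i].start ||
            bl[i].size != raw[i].size ||
            bl[i].type != raw[i].type)
            return false;
    return true;
}
```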

> Judging by the context below, I think we should also drop various ACPI
> related options.  Given its ubiquity these days, turning various bits of
> ACPI off is only going to make problems worse.

Probably, but that's an orthogonal change (and iirc I did recently
suggest the same in a different context). I have to admit that I'd be
wary of regressions if we did so, and of whether anyone in fact uses
any of these options. At the very least I think we had better verbosely
deprecate these (sub)options first, for one or even two release cycles.
I still recall us (SUSE) dropping a custom patch we had been carrying
(and which was rejected upstream) changing the default IO-APIC ack mode
for single IO-APIC systems to old-style, resulting in an instant report
that some director's or even VP's laptop broke. Hence, despite the
upstream rejection, we've now been carrying this change for perhaps
over 10 years. Any change in such areas feels like it could suffer
similar fallout.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:12:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:12:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34233.65113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBeC-0005n4-DQ; Mon, 23 Nov 2020 13:12:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34233.65113; Mon, 23 Nov 2020 13:12:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBeC-0005mx-9N; Mon, 23 Nov 2020 13:12:48 +0000
Received: by outflank-mailman (input) for mailman id 34233;
 Mon, 23 Nov 2020 13:12:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBeB-0005ms-2T
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:12:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 399ceed5-0165-4fbc-84cf-4abea1a741e3;
 Mon, 23 Nov 2020 13:12:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 65E4EACD5;
 Mon, 23 Nov 2020 13:12:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606137165; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=twROV6lJ9RY5ZXELEEcNrUSlo6tUc/6IjVY7g9rYQ58=;
	b=AFQ4R5v6o/2Td25yy0j5VSEy0PImSnoJ6mWYbpDPdlQocz5dVBTHcFMO6of6ghoPcAXU0+
	ldJ1/HKy/QA/R1qDxD365bCmbOj9eozU1NSNiOuR8sZrSXNVSt9xjs0ukls4C1pLm55zxE
	5oFYOEOW9aYneSLtrvlqn9wgSbLPhj4=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/3] ns16550: #ifdef-ary
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
Message-ID: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Date: Mon, 23 Nov 2020 14:12:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: move PCI arrays next to the function using them
2: "com<N>=" command line options are x86-specific
3: drop stray "#ifdef CONFIG_HAS_PCI"

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34237.65125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBfp-0005vd-Oh; Mon, 23 Nov 2020 13:14:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34237.65125; Mon, 23 Nov 2020 13:14:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBfp-0005vW-L0; Mon, 23 Nov 2020 13:14:29 +0000
Received: by outflank-mailman (input) for mailman id 34237;
 Mon, 23 Nov 2020 13:14:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBfo-0005vQ-Ne
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:14:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10ca4d60-cc42-42bd-b60e-3bbdded2d894;
 Mon, 23 Nov 2020 13:14:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B2B26ACF1;
 Mon, 23 Nov 2020 13:14:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606137265; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FtIXZJ0HqJDtlmSLnhGEm6Ve8l/UI7mpJ78L1OTqVUg=;
	b=s1fMI/4aztigtUGG4QzR3gKdcGIQcvh0VSFkf6h4rMQXg08FBgPEuyrrT5FnmF824zZgUK
	nhLwIwjDbWaauGlhTMrUmkCWjYCOQGunViN8Ta/F+Zy77a/THkgemcB8fHITfqt+1LMu1p
	JRY9EDoVLUTf789YrpLGjXrUvv43neU=
Subject: [PATCH v2 1/3] ns16550: move PCI arrays next to the function using
 them
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Message-ID: <b47b5557-ad67-5bf4-45ce-c305ee5da977@suse.com>
Date: Mon, 23 Nov 2020 14:14:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Pure code motion; no functional change intended.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -153,312 +153,6 @@ struct ns16550_config_param {
     unsigned int uart_offset;
     unsigned int first_offset;
 };
-
-/*
- * Create lookup tables for specific devices. It is assumed that if
- * the device found is MMIO, then you have indexed it here. Else, the
- * driver does nothing for MMIO based devices.
- */
-static const struct ns16550_config_param __initconst uart_param[] = {
-    [param_default] = {
-        .reg_width = 1,
-        .lsr_mask = UART_LSR_THRE,
-        .max_ports = 1,
-    },
-    [param_trumanage] = {
-        .reg_shift = 2,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = (UART_LSR_THRE | UART_LSR_TEMT),
-        .mmio = 1,
-        .max_ports = 1,
-    },
-    [param_oxford] = {
-        .base_baud = 4000000,
-        .uart_offset = 0x200,
-        .first_offset = 0x1000,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = UART_LSR_THRE,
-        .mmio = 1,
-        .max_ports = 1, /* It can do more, but we would need more custom code.*/
-    },
-    [param_oxford_2port] = {
-        .base_baud = 4000000,
-        .uart_offset = 0x200,
-        .first_offset = 0x1000,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = UART_LSR_THRE,
-        .mmio = 1,
-        .max_ports = 2,
-    },
-    [param_pericom_1port] = {
-        .base_baud = 921600,
-        .uart_offset = 8,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = UART_LSR_THRE,
-        .bar0 = 1,
-        .max_ports = 1,
-    },
-    [param_pericom_2port] = {
-        .base_baud = 921600,
-        .uart_offset = 8,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = UART_LSR_THRE,
-        .bar0 = 1,
-        .max_ports = 2,
-    },
-    /*
-     * Of the two following ones, we can't really use all of their ports,
-     * unless ns16550_com[] would get grown.
-     */
-    [param_pericom_4port] = {
-        .base_baud = 921600,
-        .uart_offset = 8,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = UART_LSR_THRE,
-        .bar0 = 1,
-        .max_ports = 4,
-    },
-    [param_pericom_8port] = {
-        .base_baud = 921600,
-        .uart_offset = 8,
-        .reg_width = 1,
-        .fifo_size = 16,
-        .lsr_mask = UART_LSR_THRE,
-        .bar0 = 1,
-        .max_ports = 8,
-    }
-};
-static const struct ns16550_config __initconst uart_config[] =
-{
-    /* Broadcom TruManage device */
-    {
-        .vendor_id = PCI_VENDOR_ID_BROADCOM,
-        .dev_id = 0x160a,
-        .param = param_trumanage,
-    },
-    /* OXPCIe952 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc11b,
-        .param = param_oxford,
-    },
-    /* OXPCIe952 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc11f,
-        .param = param_oxford,
-    },
-    /* OXPCIe952 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc138,
-        .param = param_oxford,
-    },
-    /* OXPCIe952 2 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc158,
-        .param = param_oxford_2port,
-    },
-    /* OXPCIe952 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc13d,
-        .param = param_oxford,
-    },
-    /* OXPCIe952 2 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc15d,
-        .param = param_oxford_2port,
-    },
-    /* OXPCIe952 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc40b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc40f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc41b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc41f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc42b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc42f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc43b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc43f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc44b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc44f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc45b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc45f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc46b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc46f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc47b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc47f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc48b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc48f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc49b,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc49f,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc4ab,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc4af,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc4bb,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc4bf,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc4cb,
-        .param = param_oxford,
-    },
-    /* OXPCIe200 1 Native UART  */
-    {
-        .vendor_id = PCI_VENDOR_ID_OXSEMI,
-        .dev_id = 0xc4cf,
-        .param = param_oxford,
-    },
-    /* Pericom PI7C9X7951 Uno UART */
-    {
-        .vendor_id = PCI_VENDOR_ID_PERICOM,
-        .dev_id = 0x7951,
-        .param = param_pericom_1port
-    },
-    /* Pericom PI7C9X7952 Duo UART */
-    {
-        .vendor_id = PCI_VENDOR_ID_PERICOM,
-        .dev_id = 0x7952,
-        .param = param_pericom_2port
-    },
-    /* Pericom PI7C9X7954 Quad UART */
-    {
-        .vendor_id = PCI_VENDOR_ID_PERICOM,
-        .dev_id = 0x7954,
-        .param = param_pericom_4port
-    },
-    /* Pericom PI7C9X7958 Octal UART */
-    {
-        .vendor_id = PCI_VENDOR_ID_PERICOM,
-        .dev_id = 0x7958,
-        .param = param_pericom_8port
-    }
-};
 #endif
 
 static void ns16550_delayed_resume(void *data);
@@ -1045,6 +739,314 @@ static int __init check_existence(struct
 }
 
 #ifdef CONFIG_HAS_PCI
+
+/*
+ * Create lookup tables for specific devices. It is assumed that if
+ * the device found is MMIO, then you have indexed it here. Else, the
+ * driver does nothing for MMIO based devices.
+ */
+static const struct ns16550_config_param __initconst uart_param[] = {
+    [param_default] = {
+        .reg_width = 1,
+        .lsr_mask = UART_LSR_THRE,
+        .max_ports = 1,
+    },
+    [param_trumanage] = {
+        .reg_shift = 2,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = (UART_LSR_THRE | UART_LSR_TEMT),
+        .mmio = 1,
+        .max_ports = 1,
+    },
+    [param_oxford] = {
+        .base_baud = 4000000,
+        .uart_offset = 0x200,
+        .first_offset = 0x1000,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = UART_LSR_THRE,
+        .mmio = 1,
+        .max_ports = 1, /* It can do more, but we would need more custom code.*/
+    },
+    [param_oxford_2port] = {
+        .base_baud = 4000000,
+        .uart_offset = 0x200,
+        .first_offset = 0x1000,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = UART_LSR_THRE,
+        .mmio = 1,
+        .max_ports = 2,
+    },
+    [param_pericom_1port] = {
+        .base_baud = 921600,
+        .uart_offset = 8,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = UART_LSR_THRE,
+        .bar0 = 1,
+        .max_ports = 1,
+    },
+    [param_pericom_2port] = {
+        .base_baud = 921600,
+        .uart_offset = 8,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = UART_LSR_THRE,
+        .bar0 = 1,
+        .max_ports = 2,
+    },
+    /*
+     * Of the two following ones, we can't really use all of their ports,
+     * unless ns16550_com[] would get grown.
+     */
+    [param_pericom_4port] = {
+        .base_baud = 921600,
+        .uart_offset = 8,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = UART_LSR_THRE,
+        .bar0 = 1,
+        .max_ports = 4,
+    },
+    [param_pericom_8port] = {
+        .base_baud = 921600,
+        .uart_offset = 8,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = UART_LSR_THRE,
+        .bar0 = 1,
+        .max_ports = 8,
+    }
+};
+
+static const struct ns16550_config __initconst uart_config[] =
+{
+    /* Broadcom TruManage device */
+    {
+        .vendor_id = PCI_VENDOR_ID_BROADCOM,
+        .dev_id = 0x160a,
+        .param = param_trumanage,
+    },
+    /* OXPCIe952 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc11b,
+        .param = param_oxford,
+    },
+    /* OXPCIe952 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc11f,
+        .param = param_oxford,
+    },
+    /* OXPCIe952 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc138,
+        .param = param_oxford,
+    },
+    /* OXPCIe952 2 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc158,
+        .param = param_oxford_2port,
+    },
+    /* OXPCIe952 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc13d,
+        .param = param_oxford,
+    },
+    /* OXPCIe952 2 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc15d,
+        .param = param_oxford_2port,
+    },
+    /* OXPCIe952 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc40b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc40f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc41b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc41f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc42b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc42f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc43b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc43f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc44b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc44f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc45b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc45f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc46b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc46f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc47b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc47f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc48b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc48f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc49b,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc49f,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc4ab,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc4af,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc4bb,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc4bf,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc4cb,
+        .param = param_oxford,
+    },
+    /* OXPCIe200 1 Native UART  */
+    {
+        .vendor_id = PCI_VENDOR_ID_OXSEMI,
+        .dev_id = 0xc4cf,
+        .param = param_oxford,
+    },
+    /* Pericom PI7C9X7951 Uno UART */
+    {
+        .vendor_id = PCI_VENDOR_ID_PERICOM,
+        .dev_id = 0x7951,
+        .param = param_pericom_1port
+    },
+    /* Pericom PI7C9X7952 Duo UART */
+    {
+        .vendor_id = PCI_VENDOR_ID_PERICOM,
+        .dev_id = 0x7952,
+        .param = param_pericom_2port
+    },
+    /* Pericom PI7C9X7954 Quad UART */
+    {
+        .vendor_id = PCI_VENDOR_ID_PERICOM,
+        .dev_id = 0x7954,
+        .param = param_pericom_4port
+    },
+    /* Pericom PI7C9X7958 Octal UART */
+    {
+        .vendor_id = PCI_VENDOR_ID_PERICOM,
+        .dev_id = 0x7958,
+        .param = param_pericom_8port
+    }
+};
+
 static int __init
 pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
 {
@@ -1211,7 +1213,8 @@ pci_uart_config(struct ns16550 *uart, bo
 
     return 0;
 }
-#endif
+
+#endif /* CONFIG_HAS_PCI */
 
 /*
  * Used to parse name value pairs and return which value it is along with
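
As a side note on the device table above: matching a PCI serial card against uart_config[] boils down to a linear scan on vendor/device ID. A standalone sketch of that kind of lookup (struct layout and param indices are illustrative, not the driver's actual code):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical cut-down mirror of uart_config[]; IDs are real PCI IDs,
 * but the param values here are arbitrary illustration. */
struct uart_entry {
    uint16_t vendor_id;
    uint16_t dev_id;
    int param;                      /* index into a parameter table */
};

static const struct uart_entry table[] = {
    { 0x14e4, 0x160a, 1 },          /* Broadcom TruManage */
    { 0x12d8, 0x7952, 2 },          /* Pericom PI7C9X7952 */
};

/* Return the param index for a matching device, or -1 if unknown. */
static int lookup_param(uint16_t vendor, uint16_t dev)
{
    for ( size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++ )
        if ( table[i].vendor_id == vendor && table[i].dev_id == dev )
            return table[i].param;

    return -1;
}
```

An unmatched device simply falls through to the generic probing path; the table only supplies per-chip quirks such as base baud and FIFO size.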



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:14:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34239.65137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBg9-00060y-0x; Mon, 23 Nov 2020 13:14:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34239.65137; Mon, 23 Nov 2020 13:14:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBg8-00060r-Te; Mon, 23 Nov 2020 13:14:48 +0000
Received: by outflank-mailman (input) for mailman id 34239;
 Mon, 23 Nov 2020 13:14:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBg7-00060b-4u
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:14:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbc8f054-9cab-4c43-bad8-eeee34dc9f79;
 Mon, 23 Nov 2020 13:14:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 52413AC2E;
 Mon, 23 Nov 2020 13:14:45 +0000 (UTC)
X-Inumbo-ID: cbc8f054-9cab-4c43-bad8-eeee34dc9f79
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606137285; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0FIY0qtw5BehydKHuoFVwmLca+rCcxPomo1pnRTPLJA=;
	b=QtQ5S0xqRADBewNZa7h9fQAWbDDvtAy0n4cVMh+OxqP+KdQygoyQtjSNejEggxFLqaAomz
	eh/f+I0OdteFxBx7vdLg7e2S6zbB0qy39KB2G8ir9r3VxLttL7cpGQRR22fePEhgnE1JgG
	OmnL9HtpRRq11343RcRD8LN1DcIN2Nk=
Subject: [PATCH v2 2/3] ns16550: "com<N>=" command line options are
 x86-specific
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Message-ID: <bfa07fc2-9151-402f-3b73-dedf8280cb66@suse.com>
Date: Mon, 23 Nov 2020 14:14:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Pure code motion (plus the addition of "#ifdef CONFIG_X86"); no
functional change intended.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base over new earlier patch.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -318,8 +318,8 @@ Interrupts.  Specifying zero disables CM
 Flag to indicate whether to probe for a CMOS Real Time Clock irrespective of
 ACPI indicating none to be there.
 
-### com1
-### com2
+### com1 (x86)
+### com2 (x86)
 > `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`
 
 Both option `com1` and `com2` follow the same format.
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -31,38 +31,6 @@
 #include <asm/fixmap.h>
 #endif
 
-/*
- * Configure serial port with a string:
- *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
- * The tail of the string can be omitted if platform defaults are sufficient.
- * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
- * can be specified in place of a numeric baud rate. Polled mode is specified
- * by requesting irq 0.
- */
-static char __initdata opt_com1[128] = "";
-static char __initdata opt_com2[128] = "";
-string_param("com1", opt_com1);
-string_param("com2", opt_com2);
-
-enum serial_param_type {
-    baud,
-    clock_hz,
-    data_bits,
-    io_base,
-    irq,
-    parity,
-    reg_shift,
-    reg_width,
-    stop_bits,
-#ifdef CONFIG_HAS_PCI
-    bridge_bdf,
-    device,
-    port_bdf,
-#endif
-    /* List all parameters before this line. */
-    num_serial_params
-};
-
 static struct ns16550 {
     int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
     u64 io_base;   /* I/O port or memory-mapped I/O address. */
@@ -98,32 +66,6 @@ static struct ns16550 {
 #endif
 } ns16550_com[2] = { { 0 } };
 
-struct serial_param_var {
-    char name[12];
-    enum serial_param_type type;
-};
-
-/*
- * Enum struct keeping a table of all accepted parameter names for parsing
- * com_console_options for serial port com1 and com2.
- */
-static const struct serial_param_var __initconst sp_vars[] = {
-    {"baud", baud},
-    {"clock-hz", clock_hz},
-    {"data-bits", data_bits},
-    {"io-base", io_base},
-    {"irq", irq},
-    {"parity", parity},
-    {"reg-shift", reg_shift},
-    {"reg-width", reg_width},
-    {"stop-bits", stop_bits},
-#ifdef CONFIG_HAS_PCI
-    {"bridge", bridge_bdf},
-    {"dev", device},
-    {"port", port_bdf},
-#endif
-};
-
 #ifdef CONFIG_HAS_PCI
 struct ns16550_config {
     u16 vendor_id;
@@ -674,6 +616,19 @@ static struct uart_driver __read_mostly
 #endif
 };
 
+static void ns16550_init_common(struct ns16550 *uart)
+{
+    uart->clock_hz  = UART_CLOCK_HZ;
+
+    /* Default is no transmit FIFO. */
+    uart->fifo_size = 1;
+
+    /* Default lsr_mask = UART_LSR_THRE */
+    uart->lsr_mask  = UART_LSR_THRE;
+}
+
+#ifdef CONFIG_X86
+
 static int __init parse_parity_char(int c)
 {
     switch ( c )
@@ -1217,6 +1172,64 @@ pci_uart_config(struct ns16550 *uart, bo
 #endif /* CONFIG_HAS_PCI */
 
 /*
+ * Configure serial port with a string:
+ *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
+ * The tail of the string can be omitted if platform defaults are sufficient.
+ * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
+ * can be specified in place of a numeric baud rate. Polled mode is specified
+ * by requesting irq 0.
+ */
+static char __initdata opt_com1[128] = "";
+static char __initdata opt_com2[128] = "";
+string_param("com1", opt_com1);
+string_param("com2", opt_com2);
+
+enum serial_param_type {
+    baud,
+    clock_hz,
+    data_bits,
+    io_base,
+    irq,
+    parity,
+    reg_shift,
+    reg_width,
+    stop_bits,
+#ifdef CONFIG_HAS_PCI
+    bridge_bdf,
+    device,
+    port_bdf,
+#endif
+    /* List all parameters before this line. */
+    num_serial_params
+};
+
+struct serial_param_var {
+    char name[12];
+    enum serial_param_type type;
+};
+
+/*
+ * Enum struct keeping a table of all accepted parameter names for parsing
+ * com_console_options for serial port com1 and com2.
+ */
+static const struct serial_param_var __initconst sp_vars[] = {
+    {"baud", baud},
+    {"clock-hz", clock_hz},
+    {"data-bits", data_bits},
+    {"io-base", io_base},
+    {"irq", irq},
+    {"parity", parity},
+    {"reg-shift", reg_shift},
+    {"reg-width", reg_width},
+    {"stop-bits", stop_bits},
+#ifdef CONFIG_HAS_PCI
+    {"bridge", bridge_bdf},
+    {"dev", device},
+    {"port", port_bdf},
+#endif
+};
+
+/*
  * Used to parse name value pairs and return which value it is along with
  * pointer for the extracted value.
  */
@@ -1504,17 +1517,6 @@ static void __init ns16550_parse_port_co
     serial_register_uart(uart - ns16550_com, &ns16550_driver, uart);
 }
 
-static void ns16550_init_common(struct ns16550 *uart)
-{
-    uart->clock_hz  = UART_CLOCK_HZ;
-
-    /* Default is no transmit FIFO. */
-    uart->fifo_size = 1;
-
-    /* Default lsr_mask = UART_LSR_THRE */
-    uart->lsr_mask  = UART_LSR_THRE;
-}
-
 void __init ns16550_init(int index, struct ns16550_defaults *defaults)
 {
     struct ns16550 *uart;
@@ -1541,6 +1543,8 @@ void __init ns16550_init(int index, stru
     ns16550_parse_port_config(uart, (index == 0) ? opt_com1 : opt_com2);
 }
 
+#endif /* CONFIG_X86 */
+
 #ifdef CONFIG_HAS_DEVICE_TREE
 static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
                                        const void *data)
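
The sp_vars[] table moved by this patch is consulted when splitting `name=value` pairs, as the comment above the parser says. A simplified standalone sketch of that kind of lookup (cut down to three parameters; not the actual Xen parser):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative subset of the enum and table moved in the patch above. */
enum serial_param_type {
    baud,
    clock_hz,
    irq,
    /* List all parameters before this line. */
    num_serial_params
};

struct serial_param_var {
    char name[12];
    enum serial_param_type type;
};

static const struct serial_param_var sp_vars[] = {
    { "baud", baud },
    { "clock-hz", clock_hz },
    { "irq", irq },
};

/*
 * Split "name=value": point *value past the '=' (NULL if absent) and
 * return the matched parameter type, or num_serial_params on no match.
 */
static enum serial_param_type get_param(const char *s, const char **value)
{
    const char *eq = strchr(s, '=');
    size_t len = eq ? (size_t)(eq - s) : strlen(s);

    *value = eq ? eq + 1 : NULL;

    for ( size_t i = 0; i < sizeof(sp_vars) / sizeof(sp_vars[0]); i++ )
        if ( strlen(sp_vars[i].name) == len &&
             !strncmp(s, sp_vars[i].name, len) )
            return sp_vars[i].type;

    return num_serial_params;
}
```

Comparing the name length before strncmp() prevents "baud" from also matching a hypothetical "baud-x" key.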



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:15:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34245.65149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBgX-000686-Du; Mon, 23 Nov 2020 13:15:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34245.65149; Mon, 23 Nov 2020 13:15:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBgX-00067z-AE; Mon, 23 Nov 2020 13:15:13 +0000
Received: by outflank-mailman (input) for mailman id 34245;
 Mon, 23 Nov 2020 13:15:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBgV-00067e-53
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:15:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a519533-cd01-4a1c-98a4-41afd169b46e;
 Mon, 23 Nov 2020 13:15:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A19B6AF16;
 Mon, 23 Nov 2020 13:15:09 +0000 (UTC)
X-Inumbo-ID: 4a519533-cd01-4a1c-98a4-41afd169b46e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606137309; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=STV/VFDycICb3sCl4qYDaDzw0LD/V4afIeeRpkzJDFg=;
	b=ihSX8z9trpD6NXkBPAkTnI3mf4ZpNO4txDGsE/Imf5s3WcEo9j+CvRbChyRnRLqsPlrWnc
	hWEOKLTT6q6VgbuhC+mvwDXzL6zEetW58XYpqRgXrarmoNTsTdigewmt3sOtsU50tghF/Z
	DyiJzqtng6Db2njxg91v379bazxWGFM=
Subject: [PATCH v2 3/3] ns16550: drop stray "#ifdef CONFIG_HAS_PCI"
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Message-ID: <c5cf7b83-9948-dd87-dfe0-40d36df0db70@suse.com>
Date: Mon, 23 Nov 2020 14:15:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no point wrapping the function invocation when
- the function body is already suitably wrapped,
- the function itself is unconditionally available.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -662,9 +662,7 @@ static int __init check_existence(struct
     return 1; /* Everything is MMIO */
 #endif
 
-#ifdef CONFIG_HAS_PCI
     pci_serial_early_init(uart);
-#endif
 
     /*
      * Do a simple existence test first; if we fail this,
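
The reason the guard can go away is the usual compiled-out-stub pattern: when the feature is disabled, a definition with the same signature that does nothing is provided, so call sites need no #ifdef. A generic sketch under an invented MY_CONFIG_HAS_PCI switch (not the real Xen symbols):

```c
/*
 * MY_CONFIG_HAS_PCI stands in for a Kconfig symbol; flip it to 1 to
 * pick up the "real" implementation instead of the stub.
 */
#define MY_CONFIG_HAS_PCI 0

#if MY_CONFIG_HAS_PCI
static int pci_serial_early_init_sketch(void)
{
    /* Real early PCI setup would go here. */
    return 1;
}
#else
static int pci_serial_early_init_sketch(void)
{
    return 0;                       /* Compiled out: nothing to do. */
}
#endif

static int check_existence_sketch(void)
{
    /* No #ifdef needed around the call itself. */
    return pci_serial_early_init_sketch();
}
```

Keeping the conditional in one place (the definition) rather than at every call site is what makes the stray #ifdef removable.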



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34257.65161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBl3-0006Oj-W2; Mon, 23 Nov 2020 13:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34257.65161; Mon, 23 Nov 2020 13:19:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBl3-0006Oc-T5; Mon, 23 Nov 2020 13:19:53 +0000
Received: by outflank-mailman (input) for mailman id 34257;
 Mon, 23 Nov 2020 13:19:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBl3-0006OX-Am
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:19:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c56346e-bb5e-4458-aff1-872c2868977d;
 Mon, 23 Nov 2020 13:19:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 425A9ABCE;
 Mon, 23 Nov 2020 13:19:51 +0000 (UTC)
X-Inumbo-ID: 8c56346e-bb5e-4458-aff1-872c2868977d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606137591; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F0WQb5XrXmpz9R+OLh9HDDVQgae/e9HlQcjbwZQYgq0=;
	b=e4vRUVDoUIhR2LiGgXBXzZ0576jMPLbP07EtjJaXUpl1hO9FQK+b+X7ThfWJEhnwzVIYNK
	4He18YCUqhT2rs9lSdhd4yFKz0ijbvkRJV4qca8u0Z9Xwo5H5ue15HZFvZbGCu/RWSXw0b
	nFF3kKXotfYidHBIZw+4O7FGnyCiK8w=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Julien Grall <julien.grall.oss@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
 <37511625-C475-497B-BA83-B762687148BF@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f24a1db6-64a6-96d2-d67c-dc1b03c9cc49@suse.com>
Date: Mon, 23 Nov 2020 14:19:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <37511625-C475-497B-BA83-B762687148BF@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Rahul,

On 23.11.2020 12:54, Rahul Singh wrote:
> Hello Jan,

as an aside - it helps if you also put the addressee of your mail
on the To list.

>> On 20 Nov 2020, at 12:14 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Thu, 19 Nov 2020, Julien Grall wrote:
>>> On Thu, 19 Nov 2020, 23:38 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>>>      On Thu, 19 Nov 2020, Rahul Singh wrote:
>>>>> On 19/11/2020 09:53, Jan Beulich wrote:
>>>>>> On 19.11.2020 10:21, Julien Grall wrote:
>>>>>>> Hi Jan,
>>>>>>>
>>>>>>> On 19/11/2020 09:05, Jan Beulich wrote:
>>>>>>>> On 18.11.2020 16:50, Julien Grall wrote:
>>>>>>>>> On 16/11/2020 12:25, Rahul Singh wrote:
>>>>>>>>>> The NS16550 driver has PCI support that is under the HAS_PCI flag. When HAS_PCI
>>>>>>>>>> is enabled for ARM, a compilation error is observed for the ARM architecture
>>>>>>>>>> because ARM platforms do not have full PCI support available.
>>>>>>>>>    >
>>>>>>>>>> Introduce a new Kconfig option, CONFIG_HAS_NS16550_PCI, to support
>>>>>>>>>> ns16550 PCI for X86.
>>>>>>>>>>
>>>>>>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>>>>>>>>> disabled by default, once we have proper support for NS16550 PCI for
>>>>>>>>>> ARM we can enable it.
>>>>>>>>>>
>>>>>>>>>> No functional change.
>>>>>>>>>
>>>>>>>>> NIT: I would say "No functional change intended" to make clear this is
>>>>>>>>> an expectation and hopefully will be correct :).
>>>>>>>>>
>>>>>>>>> Regarding the commit message itself, I would suggest the following to
>>>>>>>>> address Jan's concern:
>>>>>>>>
>>>>>>>> While indeed this is a much better description, I continue to think
>>>>>>>> that the proposed Kconfig option is undesirable to have.
>>>>>>>
>>>>>>> I have yet to see an argument as to why we should keep the PCI code
>>>>>>> compiled on Arm when there will be no-use....
>>>>>> Well, see my patch suppressing building of quite a part of it.
>>>>>
>>>>> I will let Rahul figure out whether your patch series is sufficient to fix the compilation issues (this is what matters right
>>>      now).
>>>>
>>>> I just checked the compilation errors for ARM after enabling HAS_PCI on ARM. I am observing the same compilation errors
>>>      as I observed previously.
>>>> There are two new errors related to struct uart_config and struct uart_param, as those structs are defined globally but used under
>>>      X86 flags.
>>>>
>>>> At top level:
>>>> ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
>>>>   static const struct ns16550_config __initconst uart_config[] =
>>>>                                                  ^~~~~~~~~~~
>>>> ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
>>>>   static const struct ns16550_config_param __initconst uart_param[] = {
>>>>
>>>>
>>>>>
>>>>>>>> Either,
>>>>>>>> following the patch I've just sent, truly x86-specific things (at
>>>>>>>> least as far as current state goes - if any of this was to be
>>>>>>>> re-used by a future port, suitable further abstraction may be
>>>>>>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>>>>>>>> hooks), or the HAS_PCI_MSI proposal would at least want further
>>>>>>>> investigating as to its feasibility to address the issues at hand.
>>>>>>>
>>>>>>> I would be happy with CONFIG_X86, despite the fact that this is only
>>>>>>> deferring the problem.
>>>>>>>
>>>>>>> Regarding HAS_PCI_MSI, I don't really see the point of introducing given
>>>>>>> that we are not going to use NS16550 PCI on Arm in the forseeable
>>>>>>> future.
>>>>>> And I continue to fail to see what would guarantee this: As soon
>>>>>> as you can plug such a card into an Arm system, people will
>>>>>> want to be able to use it. That's why we had to add support for it
>>>>>> on x86, after all.
>>>>>
>>>>> Well, plug-in PCI cards on Arm have been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI
>>>      support.
>>>>>
>>>>> This is probably because an SBSA compliant server should always provide an SBSA UART (a cut-down version of the PL011). So why
>>>      would one bother to lose a PCI slot for yet another UART?
>>>>>
>>>>>>>> So why do we need a finer grained Kconfig?
>>>>>> Because most of the involved code is indeed MSI-related?
>>>>>
>>>>> Possibly, yet it would not be necessary if we don't want NS16550 PCI support...
>>>>
>>>> To fix the compilation errors on ARM, as per the discussion there are the below options; please suggest which one to use to proceed
>>>      further.
>>>>
>>>> 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config option. This also helps non-x86 architectures in the future not to
>>>      have the compilation errors
>>>> we are observing now when HAS_PCI is enabled.
>>>>
>>>> 2. Guard the remaining x86 specific code with CONFIG_X86 and introduce the new CONFIG_HAS_PCI_MSI option to fix the MSI
>>>      related compilation errors.
>>>> Once we have proper support for MSI and PCI for ARM (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig), I am not sure if
>>>      NS16550 PCI will work out of the box on ARM. In that case, we might need to come back again to fix the NS16550 driver.
>>>
>>>
>>>      It doesn't matter too much to me, let's just choose one option so that you
>>>      get unblocked soon.
>>>
>>>      It looks like Jan prefers option 2) and both Julien and I are OK with
>>>      it. So let's do 2). Jan, please confirm too :-)
>>>
>>>
>>> Please don't put words in my mouth... 
>>
>> Sorry Julien, I misinterpreted one of your previous comments. Sometimes
>> it is difficult to do things by email. It is good that you clarified as
>> my goal was to reach an agreement.
>>
>>
>>> I think introducing HAS_PCI_MSI is short sighted.
>>>
>>> There are no clear benefits of it when NS16550 PCI support is not going to be enabled in the foreseeable future.
>>
>> I agree
>>
>>
>>> I would be ok with moving everything under CONFIG_X86. IMHO this is still shortsighted, but at least we don't introduce a config that's not
>>> going to help Arm or any other architecture to completely disable PCI support in NS16550.
>>
>> So you are suggesting a new option:
>>
>> 3. Guard the remaining x86 specific code *and* the MSI related
>> compilation errors with CONFIG_X86
>>
>> Is that right?
>>
>>
>> My preference is actually option 1) but this series is already at v3 and
>> I don't think this decision is as important as much as unblocking
>> Rahul, so I am OK with the other alternatives too.
>>
>> I tend to agree with you that 3) is better than 2) for the reasons you
>> wrote above.
> 
> 
> Can you please provide your suggestion on how to proceed on this so that I can send my next patch?
> I am waiting for your reply to confirm whether you are also ok with option 3.

I can live with 3, I guess, but I still think a separate PCI_MSI
control would be better. Please realize though that things also
depend on how the change is going to look in the end, i.e.
I'm not going to assure you this is my final view on it. In any
event I've just sent v2 of my series, which I consider a prereq
of yours.

Jan
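
The -Werror=unused-const-variable failures quoted earlier in this thread reduce to a const table that is defined unconditionally but referenced only under an arch guard. A minimal reproduction of the pattern, with MY_CONFIG_X86 standing in for CONFIG_X86 and invented names (option 3 amounts to putting the table under the same guard as its only user):

```c
/*
 * Build with -Werror=unused-const-variable to see the error class:
 * with the #if removed but first_entry() still guarded, the table
 * would be defined yet never referenced on non-x86 builds.
 */
#define MY_CONFIG_X86 1             /* stand-in for CONFIG_X86 */

#if MY_CONFIG_X86
static const int uart_config_sketch[] = { 10, 20, 30 };

static int first_entry(void)
{
    return uart_config_sketch[0];
}
#else
static int first_entry(void)
{
    return 0;                       /* Table compiled out together with its user. */
}
#endif
```

Guarding definition and use with the same symbol is why moving both tables under CONFIG_X86 silences the warning without any behavioural change on x86.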


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:23:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:23:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34267.65173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBoW-0007FV-IF; Mon, 23 Nov 2020 13:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34267.65173; Mon, 23 Nov 2020 13:23:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBoW-0007FN-DV; Mon, 23 Nov 2020 13:23:28 +0000
Received: by outflank-mailman (input) for mailman id 34267;
 Mon, 23 Nov 2020 13:23:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBoV-0007FD-EC
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:23:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 601600ac-bd94-402e-a552-eca6ca2eaaad;
 Mon, 23 Nov 2020 13:23:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF0F6ACC6;
 Mon, 23 Nov 2020 13:23:25 +0000 (UTC)
X-Inumbo-ID: 601600ac-bd94-402e-a552-eca6ca2eaaad
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606137806; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ckY06JkNzZUVtS4pxyVz0NwIbNY7+mHt0ItbeV5rr0g=;
	b=S5pjf58xRZR31IwC5B9ka71XPJl4r/Z59hN3QGRFDipOehQ/lk7PbNhyKxfJku3GaDWIBL
	wgY/H+W44YmuQiMI64jQR1e84ZAXmmxgZG56MNuksiVxOrRsQ9qPLVbMw1IZzKDYSm8wfQ
	UGxt3ePzw/Y66aYneBT2356XF6ynBv0=
Subject: =?UTF-8?Q?Ping=c2=b2=3a_=5bPATCH_2/2=5d_tools/libs=3a_fix_uninstall?=
 =?UTF-8?Q?_rule_for_header_files?=
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
 <74c629db-0f63-aba0-f294-9668c29b8f70@suse.com>
 <5495896C-2AD6-413E-A1A6-D9994F10D391@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <937093c2-3d7a-af83-8919-b74caff2663e@suse.com>
Date: Mon, 23 Nov 2020 14:23:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <5495896C-2AD6-413E-A1A6-D9994F10D391@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.2020 16:24, Bertrand Marquis wrote:
>> On 19 Oct 2020, at 08:21, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> This again was working right only as long as $(LIBHEADER) consisted of
>> just one entry.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> The change is obviously fixing a bug :-) and the double $ is required to protect from make.

I'll give it a day or two more to get an ack (or any negative
form of feedback), but I guess I'll go ahead and commit this
with just Bertrand's R-b otherwise.

Jan
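Bertrand's note about the double $ refers to make's own expansion: a literal shell `$` inside a recipe must be written `$$` so that make passes it through to the shell. A hypothetical stand-alone illustration (not the actual uninstall rule being fixed here):

```shell
# Hypothetical demo of why "$$" is needed in a make recipe that loops
# over several headers: make expands "$" itself, so the shell loop
# variable must be written "$$h" to survive make's own expansion.
mk=$(mktemp)
printf 'LIBHEADER := a.h b.h\nuninstall:\n\tfor h in $(LIBHEADER); do echo "rm -f $(DESTDIR)/include/$$h"; done\n' > "$mk"
make -s -f "$mk" uninstall DESTDIR=/usr
rm -f "$mk"
```

Running this prints one `rm -f` line per header, which only works because `$$h` reached the shell as `$h`.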


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:27:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:27:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34274.65185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBrv-0007PJ-0q; Mon, 23 Nov 2020 13:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34274.65185; Mon, 23 Nov 2020 13:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBru-0007PC-U5; Mon, 23 Nov 2020 13:26:58 +0000
Received: by outflank-mailman (input) for mailman id 34274;
 Mon, 23 Nov 2020 13:26:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBrt-0007P7-Jx
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:26:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe2199b3-d489-481a-909a-c1b57fabc7f1;
 Mon, 23 Nov 2020 13:26:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4179AC2E;
 Mon, 23 Nov 2020 13:26:55 +0000 (UTC)
X-Inumbo-ID: fe2199b3-d489-481a-909a-c1b57fabc7f1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138016; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ermRl1yZ3mVSGF+O9FQRzJSxMX/19Is0gCXXBe7D0qQ=;
	b=oOFoGa1/XjQI1xXTaviyBRs30PTssMo2ZlRzH3gkpE38wyvAJI72E18R0x8x/2EuYSBU7j
	B7xvv3gZ+K4VRwYVo4G/2o440HlPqH/Q+1lrnxqNieA1eBQclE3BMefExOAePN3TRj7lxx
	jJcj4YvhM0PXbU1KJfcqZrzTzm9z3iM=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/5] evtchn: (not so) recent XSAs follow-on
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Date: Mon, 23 Nov 2020 14:26:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

These are grouped into a series largely because of their origin,
not so much because there are heavy dependencies among them.
Compared to v2, there's a new patch resulting from review feedback,
and the last patch should be fully usable now. See also the
individual patches.

1: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq()
2: avoid access tearing for ->virq_to_evtchn[] accesses
3: convert vIRQ lock to an r/w one
4: convert domain event lock to an r/w one
5: don't call Xen consumer callback with per-channel lock held

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:28:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:28:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34279.65196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBsz-0007Vh-BF; Mon, 23 Nov 2020 13:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34279.65196; Mon, 23 Nov 2020 13:28:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBsz-0007Va-8N; Mon, 23 Nov 2020 13:28:05 +0000
Received: by outflank-mailman (input) for mailman id 34279;
 Mon, 23 Nov 2020 13:28:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBsx-0007VT-8a
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:28:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 382e91eb-a16f-4163-8d26-460ff8fdb498;
 Mon, 23 Nov 2020 13:28:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75DD1ABCE;
 Mon, 23 Nov 2020 13:28:01 +0000 (UTC)
X-Inumbo-ID: 382e91eb-a16f-4163-8d26-460ff8fdb498
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138081; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Wx22CUcVcpFAZagcGFpuRTrgWXF49Q2PbjSMmqNqzXk=;
	b=u36KuH/KcLIsybJGTvI1VETWVGHZMWqssoCH7PVsnAJFYaYQYvB9OT9Ubm/Y9HLyEUZFQG
	9exaNxmqNQ1bQcraK2BBsoxqaG3FPMLqWQ4vFjZ4SW1h2wghx93moXxYLepTpcT4+VE5oZ
	0Zb5zf7mAwu6f0mi6zfJQB1yHR5l5Cs=
Subject: [PATCH v3 1/5] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Message-ID: <d709a9c3-dbe2-65c6-2c2f-6a12f486335d@suse.com>
Date: Mon, 23 Nov 2020 14:28:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The per-vCPU virq_lock, which is being held anyway, together with there
not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
is zero, provide sufficient guarantees. Undo the lock addition done for
XSA-343 (commit e045199c7c9c "evtchn: address races with
evtchn_reset()"). Update the description next to struct evtchn_port_ops
accordingly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base.
v2: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -809,7 +809,6 @@ void send_guest_vcpu_virq(struct vcpu *v
     unsigned long flags;
     int port;
     struct domain *d;
-    struct evtchn *chn;
 
     ASSERT(!virq_is_global(virq));
 
@@ -820,12 +819,7 @@ void send_guest_vcpu_virq(struct vcpu *v
         goto out;
 
     d = v->domain;
-    chn = evtchn_from_port(d, port);
-    if ( evtchn_read_trylock(chn) )
-    {
-        evtchn_port_set_pending(d, v->vcpu_id, chn);
-        evtchn_read_unlock(chn);
-    }
+    evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -854,11 +848,7 @@ void send_guest_global_virq(struct domai
         goto out;
 
     chn = evtchn_from_port(d, port);
-    if ( evtchn_read_trylock(chn) )
-    {
-        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-        evtchn_read_unlock(chn);
-    }
+    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -192,9 +192,16 @@ int evtchn_reset(struct domain *d, bool
  * Low-level event channel port ops.
  *
  * All hooks have to be called with a lock held which prevents the channel
- * from changing state. This may be the domain event lock, the per-channel
- * lock, or in the case of sending interdomain events also the other side's
- * per-channel lock. Exceptions apply in certain cases for the PV shim.
+ * from changing state. This may be
+ * - the domain event lock,
+ * - the per-channel lock,
+ * - in the case of sending interdomain events the other side's per-channel
+ *   lock,
+ * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
+ *   combination with the ordering enforced through how the vCPU's
+ *   virq_to_evtchn[] gets updated),
+ * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
+ * Exceptions apply in certain cases for the PV shim.
  */
 struct evtchn_port_ops {
     void (*init)(struct domain *d, struct evtchn *evtchn);



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:28:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34282.65209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBtH-0007cC-KM; Mon, 23 Nov 2020 13:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34282.65209; Mon, 23 Nov 2020 13:28:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBtH-0007c4-HN; Mon, 23 Nov 2020 13:28:23 +0000
Received: by outflank-mailman (input) for mailman id 34282;
 Mon, 23 Nov 2020 13:28:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBtH-0007bu-10
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:28:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4042d843-f1e3-460f-8058-9a1bce6b1987;
 Mon, 23 Nov 2020 13:28:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9301EAC2E;
 Mon, 23 Nov 2020 13:28:21 +0000 (UTC)
X-Inumbo-ID: 4042d843-f1e3-460f-8058-9a1bce6b1987
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138101; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iIpBVVXINJLxyaNZ9rqCJ2+moiWnSKhQgi3f+gHmkr8=;
	b=EgFofeHvZq1Fx66zNAN6E4Q4VugkMnnfcWFLj8KxcOTmhCeOukza8WnCGFLzVnKTFcm/ir
	zAEEJF6COD36UNpZwf58ys3aWyZ9/KswF/a4pii1PwWvM20Qm0dvqAHlcuRxBNbCEvUHdT
	+3Akux17DQg3KI22vTYdsBRKKcrarW0=
Subject: [PATCH v3 2/5] evtchn: avoid access tearing for ->virq_to_evtchn[]
 accesses
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Message-ID: <ce6ce543-d57a-4111-2e66-871c4f4633a8@suse.com>
Date: Mon, 23 Nov 2020 14:28:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Use {read,write}_atomic() to exclude any eventualities, in particular
observing that accesses aren't all happening under a consistent lock.

Requested-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -446,7 +446,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
 
     spin_lock(&d->event_lock);
 
-    if ( v->virq_to_evtchn[virq] != 0 )
+    if ( read_atomic(&v->virq_to_evtchn[virq]) )
         ERROR_EXIT(-EEXIST);
 
     if ( port != 0 )
@@ -474,7 +474,8 @@ int evtchn_bind_virq(evtchn_bind_virq_t
 
     evtchn_write_unlock(chn);
 
-    v->virq_to_evtchn[virq] = bind->port = port;
+    bind->port = port;
+    write_atomic(&v->virq_to_evtchn[virq], port);
 
  out:
     spin_unlock(&d->event_lock);
@@ -660,9 +661,9 @@ int evtchn_close(struct domain *d1, int
     case ECS_VIRQ:
         for_each_vcpu ( d1, v )
         {
-            if ( v->virq_to_evtchn[chn1->u.virq] != port1 )
+            if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) != port1 )
                 continue;
-            v->virq_to_evtchn[chn1->u.virq] = 0;
+            write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
             spin_barrier(&v->virq_lock);
         }
         break;
@@ -801,7 +802,7 @@ bool evtchn_virq_enabled(const struct vc
     if ( virq_is_global(virq) && v->vcpu_id )
         v = domain_vcpu(v->domain, 0);
 
-    return v->virq_to_evtchn[virq];
+    return read_atomic(&v->virq_to_evtchn[virq]);
 }
 
 void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
@@ -814,7 +815,7 @@ void send_guest_vcpu_virq(struct vcpu *v
 
     spin_lock_irqsave(&v->virq_lock, flags);
 
-    port = v->virq_to_evtchn[virq];
+    port = read_atomic(&v->virq_to_evtchn[virq]);
     if ( unlikely(port == 0) )
         goto out;
 
@@ -843,7 +844,7 @@ void send_guest_global_virq(struct domai
 
     spin_lock_irqsave(&v->virq_lock, flags);
 
-    port = v->virq_to_evtchn[virq];
+    port = read_atomic(&v->virq_to_evtchn[virq]);
     if ( unlikely(port == 0) )
         goto out;
 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:28:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:28:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34288.65221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBte-0007kL-3K; Mon, 23 Nov 2020 13:28:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34288.65221; Mon, 23 Nov 2020 13:28:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBte-0007kE-0D; Mon, 23 Nov 2020 13:28:46 +0000
Received: by outflank-mailman (input) for mailman id 34288;
 Mon, 23 Nov 2020 13:28:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBtc-0007k0-Ty
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:28:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1efc338a-1c06-4c6c-b64a-8e394327c95a;
 Mon, 23 Nov 2020 13:28:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4089EACC6;
 Mon, 23 Nov 2020 13:28:43 +0000 (UTC)
X-Inumbo-ID: 1efc338a-1c06-4c6c-b64a-8e394327c95a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138123; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=u2CaRag6/oUqYllh9jVd06ZD4oOB0HabLrsdZBZjXfs=;
	b=koWTdlgvz4DaO2jfk3SOuhmqC1wZ9ssswhghaExLgcbamSKMFK7KHqaiIZuaBDIML+5ZpJ
	f/JWg62SDspsjzxd2s0574g2S9PSEuff+9DkVPmlvnw1Z5NO1fRkMAU4wc52WvJTg1iefG
	AuERTJ6ZiQmbcNZ6k8AAUuWsgNIfAxA=
Subject: [PATCH v3 3/5] evtchn: convert vIRQ lock to an r/w one
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Message-ID: <d2461bd6-fb2f-447f-11c6-bd8afd573d7b@suse.com>
Date: Mon, 23 Nov 2020 14:28:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no need to serialize all sending of vIRQ-s; all that's needed
is serialization against the closing of the respective event channels
(so far by means of a barrier). To facilitate the conversion, switch to
an ordinary write locked region in evtchn_close().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base over added new earlier patch.
v2: Don't introduce/use rw_barrier() here. Add comment to
    evtchn_bind_virq(). Re-base.

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -160,7 +160,7 @@ struct vcpu *vcpu_create(struct domain *
     v->vcpu_id = vcpu_id;
     v->dirty_cpu = VCPU_CPU_CLEAN;
 
-    spin_lock_init(&v->virq_lock);
+    rwlock_init(&v->virq_lock);
 
     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -475,6 +475,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     evtchn_write_unlock(chn);
 
     bind->port = port;
+    /*
+     * If anything, the update of virq_to_evtchn[] would need guarding by
+     * virq_lock, but since this is the last action here, there's no strict
+     * need to acquire the lock. Hence holding event_lock isn't helpful
+     * anymore at this point, but we utilize that its unlocking acts as the
+     * otherwise necessary smp_wmb() here.
+     */
     write_atomic(&v->virq_to_evtchn[virq], port);
 
  out:
@@ -661,10 +668,12 @@ int evtchn_close(struct domain *d1, int
     case ECS_VIRQ:
         for_each_vcpu ( d1, v )
         {
-            if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) != port1 )
-                continue;
-            write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
-            spin_barrier(&v->virq_lock);
+            unsigned long flags;
+
+            write_lock_irqsave(&v->virq_lock, flags);
+            if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) == port1 )
+                write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
+            write_unlock_irqrestore(&v->virq_lock, flags);
         }
         break;
 
@@ -813,7 +822,7 @@ void send_guest_vcpu_virq(struct vcpu *v
 
     ASSERT(!virq_is_global(virq));
 
-    spin_lock_irqsave(&v->virq_lock, flags);
+    read_lock_irqsave(&v->virq_lock, flags);
 
     port = read_atomic(&v->virq_to_evtchn[virq]);
     if ( unlikely(port == 0) )
@@ -823,7 +832,7 @@ void send_guest_vcpu_virq(struct vcpu *v
     evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
 
  out:
-    spin_unlock_irqrestore(&v->virq_lock, flags);
+    read_unlock_irqrestore(&v->virq_lock, flags);
 }
 
 void send_guest_global_virq(struct domain *d, uint32_t virq)
@@ -842,7 +851,7 @@ void send_guest_global_virq(struct domai
     if ( unlikely(v == NULL) )
         return;
 
-    spin_lock_irqsave(&v->virq_lock, flags);
+    read_lock_irqsave(&v->virq_lock, flags);
 
     port = read_atomic(&v->virq_to_evtchn[virq]);
     if ( unlikely(port == 0) )
@@ -852,7 +861,7 @@ void send_guest_global_virq(struct domai
     evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
 
  out:
-    spin_unlock_irqrestore(&v->virq_lock, flags);
+    read_unlock_irqrestore(&v->virq_lock, flags);
 }
 
 void send_guest_pirq(struct domain *d, const struct pirq *pirq)
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -238,7 +238,7 @@ struct vcpu
 
     /* IRQ-safe virq_lock protects against delivering VIRQ to stale evtchn. */
     evtchn_port_t    virq_to_evtchn[NR_VIRQS];
-    spinlock_t       virq_lock;
+    rwlock_t         virq_lock;
 
     /* Tasklet for continue_hypercall_on_cpu(). */
     struct tasklet   continue_hypercall_tasklet;



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:29:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:29:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34293.65232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBu5-0007rb-DG; Mon, 23 Nov 2020 13:29:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34293.65232; Mon, 23 Nov 2020 13:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBu5-0007rU-9A; Mon, 23 Nov 2020 13:29:13 +0000
Received: by outflank-mailman (input) for mailman id 34293;
 Mon, 23 Nov 2020 13:29:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBu3-0007rB-J7
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:29:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21a62d6e-6f53-4610-ab6f-f624c3a00f84;
 Mon, 23 Nov 2020 13:29:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53929ACC6;
 Mon, 23 Nov 2020 13:29:08 +0000 (UTC)
X-Inumbo-ID: 21a62d6e-6f53-4610-ab6f-f624c3a00f84
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138148; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zYGWGZcvIzB6bNUNFtqGuAY55PoIprvcCYGCok4/knU=;
	b=LLiVfDZn/mJAL9+XMS8xT1a12vtyga7qtUil+/B5bvVJd2yDK2ik02gomF/qPYU3NWiKKU
	QJ9tCCH9ejw7rPCuHA08NZ/e/XMMzX4bjlUxR5z321usgemvqHtFa61kQSwzq01pdnjLQb
	r5zQmn6Zeu+Q3AeYH3N0f6dLboZoc5U=
Subject: [PATCH v3 4/5] evtchn: convert domain event lock to an r/w one
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Message-ID: <a333387e-f9e5-7051-569a-1a9a37da53ca@suse.com>
Date: Mon, 23 Nov 2020 14:29:07 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
across pCPUs) and the uses in EOI handling in PCI pass-through code,
serializing perhaps an entire domain isn't helpful when no state (which
isn't e.g. further protected by the per-channel lock) changes.

Unfortunately this implies dropping lock profiling for this lock, until
r/w locks may gain support for such functionality.

While ->notify_vcpu_id is now meant to be consistently updated with the
per-channel lock held, an extension applies to ECS_PIRQ: The field is
also guaranteed not to change with the per-domain event lock held for
writing. Therefore the link_pirq_port() call from evtchn_bind_pirq()
could in principle be moved out of the per-channel locked regions, but
this further code churn didn't seem worth it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base.
v2: Consistently lock for writing in evtchn_reset(). Fix error path in
    pci_clean_dpci_irqs(). Lock for writing in pt_irq_time_out(),
    hvm_dirq_assist(), hvm_dpci_eoi(), and hvm_dpci_isairq_eoi(). Move
    rw_barrier() introduction here. Re-base over changes earlier in the
    series.
---
RFC: Wouldn't flask_get_peer_sid() better use the per-channel lock?

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -917,7 +917,7 @@ int arch_domain_soft_reset(struct domain
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     for ( i = 0; i < d->nr_pirqs ; i++ )
     {
         if ( domain_pirq_to_emuirq(d, i) != IRQ_UNBOUND )
@@ -927,7 +927,7 @@ int arch_domain_soft_reset(struct domain
                 break;
         }
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( ret )
         return ret;
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -528,9 +528,9 @@ void hvm_migrate_pirqs(struct vcpu *v)
     if ( !is_iommu_enabled(d) || !hvm_domain_irq(d)->dpci )
        return;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     pt_pirq_iterate(d, migrate_pirq, v);
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static bool hvm_get_pending_event(struct vcpu *v, struct x86_event *info)
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -404,9 +404,9 @@ int hvm_inject_msi(struct domain *d, uin
             {
                 int rc;
 
-                spin_lock(&d->event_lock);
+                write_lock(&d->event_lock);
                 rc = map_domain_emuirq_pirq(d, pirq, IRQ_MSI_EMU);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 if ( rc )
                     return rc;
                 info = pirq_info(d, pirq);
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -203,9 +203,9 @@ static int vioapic_hwdom_map_gsi(unsigne
     {
         gprintk(XENLOG_WARNING, "vioapic: error binding GSI %u: %d\n",
                 gsi, ret);
-        spin_lock(&currd->event_lock);
+        write_lock(&currd->event_lock);
         unmap_domain_pirq(currd, pirq);
-        spin_unlock(&currd->event_lock);
+        write_unlock(&currd->event_lock);
     }
     pcidevs_unlock();
 
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -465,7 +465,7 @@ int msixtbl_pt_register(struct domain *d
     int r = -EINVAL;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !msixtbl_initialised(d) )
         return -ENODEV;
@@ -535,7 +535,7 @@ void msixtbl_pt_unregister(struct domain
     struct msixtbl_entry *entry;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !msixtbl_initialised(d) )
         return;
@@ -589,13 +589,13 @@ void msixtbl_pt_cleanup(struct domain *d
     if ( !msixtbl_initialised(d) )
         return;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     list_for_each_entry_safe( entry, temp,
                               &d->arch.hvm.msixtbl_list, list )
         del_msixtbl_entry(entry);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 void msix_write_completion(struct vcpu *v)
@@ -719,9 +719,9 @@ int vpci_msi_arch_update(struct vpci_msi
                          msi->arch.pirq, msi->mask);
     if ( rc )
     {
-        spin_lock(&pdev->domain->event_lock);
+        write_lock(&pdev->domain->event_lock);
         unmap_domain_pirq(pdev->domain, msi->arch.pirq);
-        spin_unlock(&pdev->domain->event_lock);
+        write_unlock(&pdev->domain->event_lock);
         pcidevs_unlock();
         msi->arch.pirq = INVALID_PIRQ;
         return rc;
@@ -760,9 +760,9 @@ static int vpci_msi_enable(const struct
     rc = vpci_msi_update(pdev, data, address, vectors, pirq, mask);
     if ( rc )
     {
-        spin_lock(&pdev->domain->event_lock);
+        write_lock(&pdev->domain->event_lock);
         unmap_domain_pirq(pdev->domain, pirq);
-        spin_unlock(&pdev->domain->event_lock);
+        write_unlock(&pdev->domain->event_lock);
         pcidevs_unlock();
         return rc;
     }
@@ -807,9 +807,9 @@ static void vpci_msi_disable(const struc
         ASSERT(!rc);
     }
 
-    spin_lock(&pdev->domain->event_lock);
+    write_lock(&pdev->domain->event_lock);
     unmap_domain_pirq(pdev->domain, pirq);
-    spin_unlock(&pdev->domain->event_lock);
+    write_unlock(&pdev->domain->event_lock);
     pcidevs_unlock();
 }
 
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2413,10 +2413,10 @@ int ioapic_guest_write(unsigned long phy
     }
     if ( pirq >= 0 )
     {
-        spin_lock(&hardware_domain->event_lock);
+        write_lock(&hardware_domain->event_lock);
         ret = map_domain_pirq(hardware_domain, pirq, irq,
                               MAP_PIRQ_TYPE_GSI, NULL);
-        spin_unlock(&hardware_domain->event_lock);
+        write_unlock(&hardware_domain->event_lock);
         if ( ret < 0 )
             return ret;
     }
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1536,7 +1536,7 @@ int pirq_guest_bind(struct vcpu *v, stru
     irq_guest_action_t *action, *newaction = NULL;
     int                 rc = 0;
 
-    WARN_ON(!spin_is_locked(&v->domain->event_lock));
+    WARN_ON(!rw_is_write_locked(&v->domain->event_lock));
     BUG_ON(!local_irq_is_enabled());
 
  retry:
@@ -1756,7 +1756,7 @@ void pirq_guest_unbind(struct domain *d,
     struct irq_desc *desc;
     int irq = 0;
 
-    WARN_ON(!spin_is_locked(&d->event_lock));
+    WARN_ON(!rw_is_write_locked(&d->event_lock));
 
     BUG_ON(!local_irq_is_enabled());
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
@@ -1793,7 +1793,7 @@ static bool pirq_guest_force_unbind(stru
     unsigned int i;
     bool bound = false;
 
-    WARN_ON(!spin_is_locked(&d->event_lock));
+    WARN_ON(!rw_is_write_locked(&d->event_lock));
 
     BUG_ON(!local_irq_is_enabled());
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
@@ -2037,7 +2037,7 @@ int get_free_pirq(struct domain *d, int
 {
     int i;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( type == MAP_PIRQ_TYPE_GSI )
     {
@@ -2062,7 +2062,7 @@ int get_free_pirqs(struct domain *d, uns
 {
     unsigned int i, found = 0;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     for ( i = d->nr_pirqs - 1; i >= nr_irqs_gsi; --i )
         if ( is_free_pirq(d, pirq_info(d, i)) )
@@ -2090,7 +2090,7 @@ int map_domain_pirq(
     DECLARE_BITMAP(prepared, MAX_MSI_IRQS) = {};
     DECLARE_BITMAP(granted, MAX_MSI_IRQS) = {};
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !irq_access_permitted(current->domain, irq))
         return -EPERM;
@@ -2309,7 +2309,7 @@ int unmap_domain_pirq(struct domain *d,
         return -EINVAL;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     info = pirq_info(d, pirq);
     if ( !info || (irq = info->arch.irq) <= 0 )
@@ -2436,13 +2436,13 @@ void free_domain_pirqs(struct domain *d)
     int i;
 
     pcidevs_lock();
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     for ( i = 0; i < d->nr_pirqs; i++ )
         if ( domain_pirq_to_irq(d, i) > 0 )
             unmap_domain_pirq(d, i);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
 }
 
@@ -2683,7 +2683,7 @@ int map_domain_emuirq_pirq(struct domain
     int old_emuirq = IRQ_UNBOUND, old_pirq = IRQ_UNBOUND;
     struct pirq *info;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !is_hvm_domain(d) )
         return -EINVAL;
@@ -2749,7 +2749,7 @@ int unmap_domain_pirq_emuirq(struct doma
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     emuirq = domain_pirq_to_emuirq(d, pirq);
     if ( emuirq == IRQ_UNBOUND )
@@ -2797,7 +2797,7 @@ static int allocate_pirq(struct domain *
 {
     int current_pirq;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
     current_pirq = domain_irq_to_pirq(d, irq);
     if ( pirq < 0 )
     {
@@ -2869,7 +2869,7 @@ int allocate_and_map_gsi_pirq(struct dom
     }
 
     /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     pirq = allocate_pirq(d, index, *pirq_p, irq, MAP_PIRQ_TYPE_GSI, NULL);
     if ( pirq < 0 )
     {
@@ -2882,7 +2882,7 @@ int allocate_and_map_gsi_pirq(struct dom
         *pirq_p = pirq;
 
  done:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return ret;
 }
@@ -2923,7 +2923,7 @@ int allocate_and_map_msi_pirq(struct dom
 
     pcidevs_lock();
     /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     pirq = allocate_pirq(d, index, *pirq_p, irq, type, &msi->entry_nr);
     if ( pirq < 0 )
     {
@@ -2936,7 +2936,7 @@ int allocate_and_map_msi_pirq(struct dom
         *pirq_p = pirq;
 
  done:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
     if ( ret )
     {
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -34,7 +34,7 @@ static int physdev_hvm_map_pirq(
 
     ASSERT(!is_hardware_domain(d));
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     switch ( type )
     {
     case MAP_PIRQ_TYPE_GSI: {
@@ -84,7 +84,7 @@ static int physdev_hvm_map_pirq(
         break;
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return ret;
 }
 
@@ -154,18 +154,18 @@ int physdev_unmap_pirq(domid_t domid, in
 
     if ( is_hvm_domain(d) && has_pirq(d) )
     {
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         if ( domain_pirq_to_emuirq(d, pirq) != IRQ_UNBOUND )
             ret = unmap_domain_pirq_emuirq(d, pirq);
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         if ( domid == DOMID_SELF || ret )
             goto free_domain;
     }
 
     pcidevs_lock();
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     ret = unmap_domain_pirq(d, pirq);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
 
  free_domain:
@@ -192,10 +192,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         ret = -EINVAL;
         if ( eoi.irq >= currd->nr_pirqs )
             break;
-        spin_lock(&currd->event_lock);
+        read_lock(&currd->event_lock);
         pirq = pirq_info(currd, eoi.irq);
         if ( !pirq ) {
-            spin_unlock(&currd->event_lock);
+            read_unlock(&currd->event_lock);
             break;
         }
         if ( currd->arch.auto_unmask )
@@ -214,7 +214,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
                     && hvm_irq->gsi_assert_count[gsi] )
                 send_guest_pirq(currd, pirq);
         }
-        spin_unlock(&currd->event_lock);
+        read_unlock(&currd->event_lock);
         ret = 0;
         break;
     }
@@ -626,7 +626,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( copy_from_guest(&out, arg, 1) != 0 )
             break;
 
-        spin_lock(&currd->event_lock);
+        write_lock(&currd->event_lock);
 
         ret = get_free_pirq(currd, out.type);
         if ( ret >= 0 )
@@ -639,7 +639,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
                 ret = -ENOMEM;
         }
 
-        spin_unlock(&currd->event_lock);
+        write_unlock(&currd->event_lock);
 
         if ( ret >= 0 )
         {
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -448,7 +448,7 @@ static long pv_shim_event_channel_op(int
         if ( rc )                                                           \
             break;                                                          \
                                                                             \
-        spin_lock(&d->event_lock);                                          \
+        write_lock(&d->event_lock);                                         \
         rc = evtchn_allocate_port(d, op.port_field);                        \
         if ( rc )                                                           \
         {                                                                   \
@@ -457,7 +457,7 @@ static long pv_shim_event_channel_op(int
         }                                                                   \
         else                                                                \
             evtchn_reserve(d, op.port_field);                               \
-        spin_unlock(&d->event_lock);                                        \
+        write_unlock(&d->event_lock);                                       \
                                                                             \
         if ( !rc && __copy_to_guest(arg, &op, 1) )                          \
             rc = -EFAULT;                                                   \
@@ -585,11 +585,11 @@ static long pv_shim_event_channel_op(int
         if ( rc )
             break;
 
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         rc = evtchn_allocate_port(d, ipi.port);
         if ( rc )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
 
             close.port = ipi.port;
             BUG_ON(xen_hypercall_event_channel_op(EVTCHNOP_close, &close));
@@ -598,7 +598,7 @@ static long pv_shim_event_channel_op(int
 
         evtchn_assign_vcpu(d, ipi.port, ipi.vcpu);
         evtchn_reserve(d, ipi.port);
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         if ( __copy_to_guest(arg, &ipi, 1) )
             rc = -EFAULT;
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -294,7 +294,7 @@ static long evtchn_alloc_unbound(evtchn_
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( (port = get_free_port(d)) < 0 )
         ERROR_EXIT_DOM(port, d);
@@ -317,7 +317,7 @@ static long evtchn_alloc_unbound(evtchn_
 
  out:
     check_free_port(d, port);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     rcu_unlock_domain(d);
 
     return rc;
@@ -363,14 +363,14 @@ static long evtchn_bind_interdomain(evtc
     /* Avoid deadlock by first acquiring lock of domain with smaller id. */
     if ( ld < rd )
     {
-        spin_lock(&ld->event_lock);
-        spin_lock(&rd->event_lock);
+        write_lock(&ld->event_lock);
+        read_lock(&rd->event_lock);
     }
     else
     {
         if ( ld != rd )
-            spin_lock(&rd->event_lock);
-        spin_lock(&ld->event_lock);
+            read_lock(&rd->event_lock);
+        write_lock(&ld->event_lock);
     }
 
     if ( (lport = get_free_port(ld)) < 0 )
@@ -411,9 +411,9 @@ static long evtchn_bind_interdomain(evtc
 
  out:
     check_free_port(ld, lport);
-    spin_unlock(&ld->event_lock);
+    write_unlock(&ld->event_lock);
     if ( ld != rd )
-        spin_unlock(&rd->event_lock);
+        read_unlock(&rd->event_lock);
     
     rcu_unlock_domain(rd);
 
@@ -444,7 +444,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     if ( (v = domain_vcpu(d, vcpu)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( read_atomic(&v->virq_to_evtchn[virq]) )
         ERROR_EXIT(-EEXIST);
@@ -485,7 +485,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     write_atomic(&v->virq_to_evtchn[virq], port);
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -501,7 +501,7 @@ static long evtchn_bind_ipi(evtchn_bind_
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( (port = get_free_port(d)) < 0 )
         ERROR_EXIT(port);
@@ -519,7 +519,7 @@ static long evtchn_bind_ipi(evtchn_bind_
     bind->port = port;
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -565,7 +565,7 @@ static long evtchn_bind_pirq(evtchn_bind
     if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) )
         return -EPERM;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( pirq_to_evtchn(d, pirq) != 0 )
         ERROR_EXIT(-EEXIST);
@@ -605,7 +605,7 @@ static long evtchn_bind_pirq(evtchn_bind
 
  out:
     check_free_port(d, port);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -620,7 +620,7 @@ int evtchn_close(struct domain *d1, int
     long           rc = 0;
 
  again:
-    spin_lock(&d1->event_lock);
+    write_lock(&d1->event_lock);
 
     if ( !port_is_valid(d1, port1) )
     {
@@ -690,13 +690,11 @@ int evtchn_close(struct domain *d1, int
                 BUG();
 
             if ( d1 < d2 )
-            {
-                spin_lock(&d2->event_lock);
-            }
+                read_lock(&d2->event_lock);
             else if ( d1 != d2 )
             {
-                spin_unlock(&d1->event_lock);
-                spin_lock(&d2->event_lock);
+                write_unlock(&d1->event_lock);
+                read_lock(&d2->event_lock);
                 goto again;
             }
         }
@@ -743,11 +741,11 @@ int evtchn_close(struct domain *d1, int
     if ( d2 != NULL )
     {
         if ( d1 != d2 )
-            spin_unlock(&d2->event_lock);
+            read_unlock(&d2->event_lock);
         put_domain(d2);
     }
 
-    spin_unlock(&d1->event_lock);
+    write_unlock(&d1->event_lock);
 
     return rc;
 }
@@ -963,7 +961,7 @@ int evtchn_status(evtchn_status_t *statu
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
 
     if ( !port_is_valid(d, port) )
     {
@@ -1016,7 +1014,7 @@ int evtchn_status(evtchn_status_t *statu
     status->vcpu = chn->notify_vcpu_id;
 
  out:
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
     rcu_unlock_domain(d);
 
     return rc;
@@ -1034,7 +1032,7 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( !port_is_valid(d, port) )
     {
@@ -1078,7 +1076,7 @@ long evtchn_bind_vcpu(unsigned int port,
     }
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -1124,7 +1122,7 @@ int evtchn_reset(struct domain *d, bool
     if ( d != current->domain && !d->controller_pause_count )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     /*
      * If we are resuming, then start where we stopped. Otherwise, check
@@ -1135,7 +1133,7 @@ int evtchn_reset(struct domain *d, bool
     if ( i > d->next_evtchn )
         d->next_evtchn = i;
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( !i )
         return -EBUSY;
@@ -1147,14 +1145,14 @@ int evtchn_reset(struct domain *d, bool
         /* NB: Choice of frequency is arbitrary. */
         if ( !(i & 0x3f) && hypercall_preempt_check() )
         {
-            spin_lock(&d->event_lock);
+            write_lock(&d->event_lock);
             d->next_evtchn = i;
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -ERESTART;
         }
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     d->next_evtchn = 0;
 
@@ -1167,7 +1165,7 @@ int evtchn_reset(struct domain *d, bool
         evtchn_2l_init(d);
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -1357,7 +1355,7 @@ int alloc_unbound_xen_event_channel(
     struct evtchn *chn;
     int            port, rc;
 
-    spin_lock(&ld->event_lock);
+    write_lock(&ld->event_lock);
 
     port = rc = get_free_port(ld);
     if ( rc < 0 )
@@ -1385,7 +1383,7 @@ int alloc_unbound_xen_event_channel(
 
  out:
     check_free_port(ld, port);
-    spin_unlock(&ld->event_lock);
+    write_unlock(&ld->event_lock);
 
     return rc < 0 ? rc : port;
 }
@@ -1473,7 +1471,8 @@ int evtchn_init(struct domain *d, unsign
         return -ENOMEM;
     d->valid_evtchns = EVTCHNS_PER_BUCKET;
 
-    spin_lock_init_prof(d, event_lock);
+    rwlock_init(&d->event_lock);
+
     if ( get_free_port(d) != 0 )
     {
         free_evtchn_bucket(d, d->evtchn);
@@ -1500,7 +1499,7 @@ int evtchn_destroy(struct domain *d)
 
     /* After this barrier no new event-channel allocations can occur. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&d->event_lock);
+    rw_barrier(&d->event_lock);
 
     /* Close all existing event channels. */
     for ( i = d->valid_evtchns; --i; )
@@ -1558,13 +1557,13 @@ void evtchn_move_pirqs(struct vcpu *v)
     unsigned int port;
     struct evtchn *chn;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
     {
         chn = evtchn_from_port(d, port);
         pirq_set_affinity(d, chn->u.pirq.irq, mask);
     }
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 
@@ -1577,7 +1576,7 @@ static void domain_dump_evtchn_info(stru
            "Polling vCPUs: {%*pbl}\n"
            "    port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
 
     for ( port = 1; port_is_valid(d, port); ++port )
     {
@@ -1624,7 +1623,7 @@ static void domain_dump_evtchn_info(stru
         }
     }
 
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static void dump_evtchn_info(unsigned char key)
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -561,7 +561,7 @@ int evtchn_fifo_init_control(struct evtc
     if ( offset & (8 - 1) )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     /*
      * If this is the first control block, setup an empty event array
@@ -593,13 +593,13 @@ int evtchn_fifo_init_control(struct evtc
     else
         rc = map_control_block(v, gfn, offset);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 
  error:
     evtchn_fifo_destroy(d);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return rc;
 }
 
@@ -652,9 +652,9 @@ int evtchn_fifo_expand_array(const struc
     if ( !d->evtchn_fifo )
         return -EOPNOTSUPP;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     rc = add_page_to_event_array(d, expand_array->array_gfn);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -105,7 +105,7 @@ static void pt_pirq_softirq_reset(struct
 {
     struct domain *d = pirq_dpci->dom;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     switch ( cmpxchg(&pirq_dpci->state, 1 << STATE_SCHED, 0) )
     {
@@ -162,7 +162,7 @@ static void pt_irq_time_out(void *data)
     const struct hvm_irq_dpci *dpci;
     const struct dev_intx_gsi_link *digl;
 
-    spin_lock(&irq_map->dom->event_lock);
+    write_lock(&irq_map->dom->event_lock);
 
     if ( irq_map->flags & HVM_IRQ_DPCI_IDENTITY_GSI )
     {
@@ -177,7 +177,7 @@ static void pt_irq_time_out(void *data)
         hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
-        spin_unlock(&irq_map->dom->event_lock);
+        write_unlock(&irq_map->dom->event_lock);
         return;
     }
 
@@ -185,7 +185,7 @@ static void pt_irq_time_out(void *data)
     if ( unlikely(!dpci) )
     {
         ASSERT_UNREACHABLE();
-        spin_unlock(&irq_map->dom->event_lock);
+        write_unlock(&irq_map->dom->event_lock);
         return;
     }
     list_for_each_entry ( digl, &irq_map->digl_list, list )
@@ -204,7 +204,7 @@ static void pt_irq_time_out(void *data)
 
     pt_pirq_iterate(irq_map->dom, pt_irq_guest_eoi, NULL);
 
-    spin_unlock(&irq_map->dom->event_lock);
+    write_unlock(&irq_map->dom->event_lock);
 }
 
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *d)
@@ -288,7 +288,7 @@ int pt_irq_create_bind(
         return -EINVAL;
 
  restart:
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( !hvm_irq_dpci && !is_hardware_domain(d) )
@@ -304,7 +304,7 @@ int pt_irq_create_bind(
         hvm_irq_dpci = xzalloc(struct hvm_irq_dpci);
         if ( hvm_irq_dpci == NULL )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -ENOMEM;
         }
         for ( i = 0; i < NR_HVM_DOMU_IRQS; i++ )
@@ -316,7 +316,7 @@ int pt_irq_create_bind(
     info = pirq_get_info(d, pirq);
     if ( !info )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -ENOMEM;
     }
     pirq_dpci = pirq_dpci(info);
@@ -331,7 +331,7 @@ int pt_irq_create_bind(
      */
     if ( pt_pirq_softirq_active(pirq_dpci) )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         cpu_relax();
         goto restart;
     }
@@ -389,7 +389,7 @@ int pt_irq_create_bind(
                 pirq_dpci->dom = NULL;
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return rc;
             }
         }
@@ -399,7 +399,7 @@ int pt_irq_create_bind(
 
             if ( (pirq_dpci->flags & mask) != mask )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return -EBUSY;
             }
 
@@ -423,7 +423,7 @@ int pt_irq_create_bind(
 
         dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
         pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         pirq_dpci->gmsi.posted = false;
         vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
@@ -483,7 +483,7 @@ int pt_irq_create_bind(
 
             if ( !digl || !girq )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 xfree(girq);
                 xfree(digl);
                 return -ENOMEM;
@@ -510,7 +510,7 @@ int pt_irq_create_bind(
             if ( pt_irq_bind->irq_type != PT_IRQ_TYPE_PCI ||
                  pirq >= hvm_domain_irq(d)->nr_gsis )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
 
                 return -EINVAL;
             }
@@ -546,7 +546,7 @@ int pt_irq_create_bind(
 
                     if ( mask < 0 || trigger_mode < 0 )
                     {
-                        spin_unlock(&d->event_lock);
+                        write_unlock(&d->event_lock);
 
                         ASSERT_UNREACHABLE();
                         return -EINVAL;
@@ -594,14 +594,14 @@ int pt_irq_create_bind(
                 }
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 xfree(girq);
                 xfree(digl);
                 return rc;
             }
         }
 
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         if ( iommu_verbose )
         {
@@ -619,7 +619,7 @@ int pt_irq_create_bind(
     }
 
     default:
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -EOPNOTSUPP;
     }
 
@@ -672,13 +672,13 @@ int pt_irq_destroy_bind(
         return -EOPNOTSUPP;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci && !is_hardware_domain(d) )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -EINVAL;
     }
 
@@ -711,7 +711,7 @@ int pt_irq_destroy_bind(
 
         if ( girq )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -EINVAL;
         }
 
@@ -755,7 +755,7 @@ int pt_irq_destroy_bind(
         pirq_cleanup_check(pirq, d);
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( what && iommu_verbose )
     {
@@ -799,7 +799,7 @@ int pt_pirq_iterate(struct domain *d,
     unsigned int pirq = 0, n, i;
     struct pirq *pirqs[8];
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_locked(&d->event_lock));
 
     do {
         n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
@@ -880,9 +880,9 @@ void hvm_dpci_msi_eoi(struct domain *d,
          (!hvm_domain_irq(d)->dpci && !is_hardware_domain(d)) )
        return;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
@@ -893,7 +893,7 @@ static void hvm_dirq_assist(struct domai
         return;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     if ( test_and_clear_bool(pirq_dpci->masked) )
     {
         struct pirq *pirq = dpci_pirq(pirq_dpci);
@@ -947,7 +947,7 @@ static void hvm_dirq_assist(struct domai
     }
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 static void hvm_pirq_eoi(struct pirq *pirq,
@@ -1012,7 +1012,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
 
     if ( is_hardware_domain(d) )
     {
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         hvm_gsi_eoi(d, guest_gsi, ent);
         goto unlock;
     }
@@ -1023,7 +1023,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
         return;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci )
@@ -1033,7 +1033,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
         __hvm_dpci_eoi(d, girq, ent);
 
 unlock:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 /*
--- a/xen/common/rwlock.c
+++ b/xen/common/rwlock.c
@@ -102,6 +102,14 @@ void queue_write_lock_slowpath(rwlock_t
     spin_unlock(&lock->lock);
 }
 
+void _rw_barrier(rwlock_t *lock)
+{
+    check_barrier(&lock->lock.debug);
+    smp_mb();
+    while ( _rw_is_locked(lock) )
+        arch_lock_relax();
+    smp_mb();
+}
 
 static DEFINE_PER_CPU(cpumask_t, percpu_rwlock_readers);
 
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -65,7 +65,7 @@ void check_lock(union lock_debug *debug,
     }
 }
 
-static void check_barrier(union lock_debug *debug)
+void check_barrier(union lock_debug *debug)
 {
     if ( unlikely(atomic_read(&spin_debug) <= 0) )
         return;
@@ -108,7 +108,6 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
 
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -878,7 +878,7 @@ static int pci_clean_dpci_irqs(struct do
     if ( !is_hvm_domain(d) )
         return 0;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( hvm_irq_dpci != NULL )
     {
@@ -896,14 +896,14 @@ static int pci_clean_dpci_irqs(struct do
             ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
         if ( ret )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return ret;
         }
 
         hvm_domain_irq(d)->dpci = NULL;
         free_hvm_irq_dpci(hvm_irq_dpci);
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return 0;
 }
 
--- a/xen/drivers/passthrough/vtd/x86/hvm.c
+++ b/xen/drivers/passthrough/vtd/x86/hvm.c
@@ -54,7 +54,7 @@ void hvm_dpci_isairq_eoi(struct domain *
     if ( !is_iommu_enabled(d) )
         return;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     dpci = domain_get_irq_dpci(d);
 
@@ -63,5 +63,5 @@ void hvm_dpci_isairq_eoi(struct domain *
         /* Multiple mirq may be mapped to one isa irq */
         pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -247,6 +247,8 @@ static inline int _rw_is_write_locked(rw
     return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
 }
 
+void _rw_barrier(rwlock_t *lock);
+
 #define read_lock(l)                  _read_lock(l)
 #define read_lock_irq(l)              _read_lock_irq(l)
 #define read_lock_irqsave(l, f)                                 \
@@ -276,6 +278,7 @@ static inline int _rw_is_write_locked(rw
 #define rw_is_locked(l)               _rw_is_locked(l)
 #define rw_is_write_locked(l)         _rw_is_write_locked(l)
 
+#define rw_barrier(l)                 _rw_barrier(l)
 
 typedef struct percpu_rwlock percpu_rwlock_t;
 
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -376,7 +376,7 @@ struct domain
     unsigned int     xen_evtchns;
     /* Port to resume from in evtchn_reset(), when in a continuation. */
     unsigned int     next_evtchn;
-    spinlock_t       event_lock;
+    rwlock_t         event_lock;
     const struct evtchn_port_ops *evtchn_port_ops;
     struct evtchn_fifo_domain *evtchn_fifo;
 
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -22,12 +22,14 @@ union lock_debug {
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
 void check_lock(union lock_debug *debug, bool try);
+void check_barrier(union lock_debug *debug);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
 #define check_lock(l, t) ((void)0)
+#define check_barrier(l) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -555,7 +555,7 @@ static int flask_get_peer_sid(struct xen
     struct evtchn *chn;
     struct domain_security_struct *dsec;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
 
     if ( !port_is_valid(d, arg->evtchn) )
         goto out;
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen
     rv = 0;
 
  out:
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
     return rv;
 }
 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:30:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:30:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34300.65245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBvI-0000HT-Tm; Mon, 23 Nov 2020 13:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34300.65245; Mon, 23 Nov 2020 13:30:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khBvI-0000HL-Qf; Mon, 23 Nov 2020 13:30:28 +0000
Received: by outflank-mailman (input) for mailman id 34300;
 Mon, 23 Nov 2020 13:30:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khBvI-0000HE-0F
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:30:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 287d2545-a77c-426f-8d65-7c70bb0fa7d7;
 Mon, 23 Nov 2020 13:30:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5CD23AF16;
 Mon, 23 Nov 2020 13:30:26 +0000 (UTC)
X-Inumbo-ID: 287d2545-a77c-426f-8d65-7c70bb0fa7d7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138226; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7jt0BkRlKG8ZX6o9qxDhFaVFYDZdwpFBoupgoJURDZ8=;
	b=BG9k2RjRcZu8KM66M5gYSdqbg/ZJjaxOQBOZ1vp66TrSez4PyekKHZ4NKfM6EnVGUa1+kI
	8wZSo+8viK66yeY0eBzxSnMbVbwtVlJSdKHbPWURRT0cARKTGaq8IomDc3aB64DgbG9hoH
	JEoVOivGheiKmUT1htOCisPPxKpxh4s=
Subject: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
 Alexandru Isaila <aisaila@bitdefender.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Message-ID: <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
Date: Mon, 23 Nov 2020 14:30:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While there don't appear to be any problems with this right now, the
lock order implications from holding the lock can be very difficult to
follow (and may be easy to violate unknowingly). The present callbacks
don't (and no such callback should) have any need for the lock to be
held.

However, vm_event_disable() frees the structures used by the respective
callbacks and isn't otherwise synchronized with invocations of these
callbacks, so maintain a count of in-progress calls, which
evtchn_close() waits to drop to zero before freeing the port (and
dropping the lock).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Should we make this accounting optional, to be requested through a new
parameter to alloc_unbound_xen_event_channel(), or derived from other
than the default callback being requested?
---
v3: Drain callbacks before proceeding with closing. Re-base.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -397,6 +397,7 @@ static long evtchn_bind_interdomain(evtc
     
     rchn->u.interdomain.remote_dom  = ld;
     rchn->u.interdomain.remote_port = lport;
+    atomic_set(&rchn->u.interdomain.active_calls, 0);
     rchn->state                     = ECS_INTERDOMAIN;
 
     /*
@@ -720,6 +721,10 @@ int evtchn_close(struct domain *d1, int
 
         double_evtchn_lock(chn1, chn2);
 
+        if ( consumer_is_xen(chn1) )
+            while ( atomic_read(&chn1->u.interdomain.active_calls) )
+                cpu_relax();
+
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
@@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
         rport = lchn->u.interdomain.remote_port;
         rchn  = evtchn_from_port(rd, rport);
         if ( consumer_is_xen(rchn) )
+        {
+            /* Don't keep holding the lock for the call below. */
+            atomic_inc(&rchn->u.interdomain.active_calls);
+            evtchn_read_unlock(lchn);
             xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
-        else
-            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
+            atomic_dec(&rchn->u.interdomain.active_calls);
+            return 0;
+        }
+        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
         break;
     case ECS_IPI:
         evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -104,6 +104,7 @@ struct evtchn
         } unbound;     /* state == ECS_UNBOUND */
         struct {
             evtchn_port_t  remote_port;
+            atomic_t       active_calls;
             struct domain *remote_dom;
         } interdomain; /* state == ECS_INTERDOMAIN */
         struct {



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:42:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:42:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34314.65262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC6e-0001NK-3H; Mon, 23 Nov 2020 13:42:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34314.65262; Mon, 23 Nov 2020 13:42:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC6e-0001ND-0E; Mon, 23 Nov 2020 13:42:12 +0000
Received: by outflank-mailman (input) for mailman id 34314;
 Mon, 23 Nov 2020 13:42:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khC6d-0001N7-5o
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:42:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 141e6cd0-5d26-4321-b570-e5e371b9e97a;
 Mon, 23 Nov 2020 13:42:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8290FACC6;
 Mon, 23 Nov 2020 13:42:09 +0000 (UTC)
X-Inumbo-ID: 141e6cd0-5d26-4321-b570-e5e371b9e97a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606138929; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=yvT1rtFNJqHFdld4N2rCK8jcitLajPpc2UIYfJuNRJA=;
	b=XdP9jIXLpiTfwJMa/1oObmMdJGlc8I2on9+Xsi0sk0tNWObdr/iLSWETskIUko3fBDfUyk
	6xA2hm9TtMb6VBTlQaYYu0d6fUSc/GvELlo6NXdxPnhQ+R0u2s1XkAc5bso0jqvvDwRxDQ
	RyC+Imy9avWUxVFAe1zGJI2UqQeFAvo=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/7] x86: some assembler macro rework
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Date: Mon, 23 Nov 2020 14:42:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Parts of this were discussed in the context of Andrew's CET-SS work.
Further parts simply fit the underlying picture. And a few patches
towards the end get attached here simply because of their dependency.
What is now patch 7 has been moved to the end of the series, in the
hope of at least unblocking the rest.

Most patches in principle have acks / R-b-s which would allow them
to go in. However, there is still controversy over the naming of the
newly introduced header in patch 1 (which subsequent patches then
add to). There hasn't been a name suggestion which would - imo -
truly represent an improvement, and I've explained why I think this
seemingly ambiguous name is actually intentionally very similar to
its sibling's. To prevent this series from being stuck on this any
longer, I'll give it a few more days for better suggestions (or
vetos) to surface, and otherwise commit what I have suitable tags for.

It's also still not really clear to me what - if any - changes to
make to patch 7. As said there, I'd be willing to drop some of the
changes made, but not all. Prior discussion hasn't led to a clear
understanding on my part of what should be kept or dropped. It may
have looked as though the entire patch was meant to go away, but I
don't think I can agree with that.

1: replace __ASM_{CL,ST}AC
2: drop ASM_{CL,ST}AC
3: fold indirect_thunk_asm.h into asm-defns.h
4: guard against straight-line speculation past RET
5: limit amount of INT3 in IND_THUNK_*
6: make guarding against straight-line speculation optional
7: reduce CET-SS related #ifdef-ary

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:43:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:43:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34320.65274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC7q-0001Vm-Dy; Mon, 23 Nov 2020 13:43:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34320.65274; Mon, 23 Nov 2020 13:43:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC7q-0001Vf-AY; Mon, 23 Nov 2020 13:43:26 +0000
Received: by outflank-mailman (input) for mailman id 34320;
 Mon, 23 Nov 2020 13:43:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khC7p-0001Va-8E
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:43:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6fc5328-1d54-4315-bc9f-f5c811249af7;
 Mon, 23 Nov 2020 13:43:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B0CDAD09;
 Mon, 23 Nov 2020 13:43:23 +0000 (UTC)
X-Inumbo-ID: d6fc5328-1d54-4315-bc9f-f5c811249af7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139003; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AnKusbjZAyoMSt2heKF01++irq1SZLWnDVB4dzApjOw=;
	b=qj1tBgwOGrrzSoQ4Xejg5QdkYCx8Hj35SUlSaCzaxAWGSpvl3n+7Peb7H9JNvSsJWGxYgZ
	4E++k2jQEnxLgumyQ8xC7M5i2EBu59oJODssZIWUi8EBQlMxL+9/U/uxeXle/wfm9DRwKU
	cz1ltrXZ8/ZFVL50lNtpypKDf8CIEzc=
Subject: [PATCH v3 1/7] x86: replace __ASM_{CL,ST}AC
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <b48024f5-47c6-ed82-2201-e99dcf4ef8db@suse.com>
Date: Mon, 23 Nov 2020 14:43:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Introduce proper assembler macros instead, enabled only when the
assembler itself doesn't support the insns. To avoid duplicating the
macros for assembly and C files, have them processed into asm-macros.h.
This in turn requires adding a multiple inclusion guard when generating
that header.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -243,7 +243,10 @@ $(BASEDIR)/include/asm-x86/asm-macros.h:
 	echo '#if 0' >$@.new
 	echo '.if 0' >>$@.new
 	echo '#endif' >>$@.new
+	echo '#ifndef __ASM_MACROS_H__' >>$@.new
+	echo '#define __ASM_MACROS_H__' >>$@.new
 	echo 'asm ( ".include \"$@\"" );' >>$@.new
+	echo '#endif /* __ASM_MACROS_H__ */' >>$@.new
 	echo '#if 0' >>$@.new
 	echo '.endif' >>$@.new
 	cat $< >>$@.new
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
 $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
 $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
 $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
+$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
--- a/xen/arch/x86/asm-macros.c
+++ b/xen/arch/x86/asm-macros.c
@@ -1 +1,2 @@
+#include <asm/asm-defns.h>
 #include <asm/alternative-asm.h>
--- /dev/null
+++ b/xen/include/asm-x86/asm-defns.h
@@ -0,0 +1,9 @@
+#ifndef HAVE_AS_CLAC_STAC
+.macro clac
+    .byte 0x0f, 0x01, 0xca
+.endm
+
+.macro stac
+    .byte 0x0f, 0x01, 0xcb
+.endm
+#endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -13,10 +13,12 @@
 #include <asm/alternative.h>
 
 #ifdef __ASSEMBLY__
+#include <asm/asm-defns.h>
 #ifndef CONFIG_INDIRECT_THUNK
 .equ CONFIG_INDIRECT_THUNK, 0
 #endif
 #else
+#include <asm/asm-macros.h>
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
       __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
@@ -200,34 +202,27 @@ register unsigned long current_stack_poi
 
 #endif
 
-/* "Raw" instruction opcodes */
-#define __ASM_CLAC      ".byte 0x0f,0x01,0xca"
-#define __ASM_STAC      ".byte 0x0f,0x01,0xcb"
-
 #ifdef __ASSEMBLY__
 .macro ASM_STAC
-    ALTERNATIVE "", __ASM_STAC, X86_FEATURE_XEN_SMAP
+    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
 .endm
 .macro ASM_CLAC
-    ALTERNATIVE "", __ASM_CLAC, X86_FEATURE_XEN_SMAP
+    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 .endm
 #else
 static always_inline void clac(void)
 {
     /* Note: a barrier is implicit in alternative() */
-    alternative("", __ASM_CLAC, X86_FEATURE_XEN_SMAP);
+    alternative("", "clac", X86_FEATURE_XEN_SMAP);
 }
 
 static always_inline void stac(void)
 {
     /* Note: a barrier is implicit in alternative() */
-    alternative("", __ASM_STAC, X86_FEATURE_XEN_SMAP);
+    alternative("", "stac", X86_FEATURE_XEN_SMAP);
 }
 #endif
 
-#undef __ASM_STAC
-#undef __ASM_CLAC
-
 #ifdef __ASSEMBLY__
 .macro SAVE_ALL op, compat=0
 .ifeqs "\op", "CLAC"



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:43:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:43:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34323.65286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC8A-0001bI-MV; Mon, 23 Nov 2020 13:43:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34323.65286; Mon, 23 Nov 2020 13:43:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC8A-0001bA-J8; Mon, 23 Nov 2020 13:43:46 +0000
Received: by outflank-mailman (input) for mailman id 34323;
 Mon, 23 Nov 2020 13:43:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khC89-0001az-Et
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:43:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 931f676e-68dc-4108-bbf8-6dbf17fe81ba;
 Mon, 23 Nov 2020 13:43:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A77AAD80;
 Mon, 23 Nov 2020 13:43:43 +0000 (UTC)
X-Inumbo-ID: 931f676e-68dc-4108-bbf8-6dbf17fe81ba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139023; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PgSMLKtdyjHrf/GCQK6MQcXchHhGvrCH3R9vQSxzTCE=;
	b=DIT2NspTL8RJcMxk39xvf87OhlaFUfeOZBiw0acedlGWS1rvPBzw8reRrIWnasEMa4KjdQ
	MqcUeXR7y4dR8l9jGHjtG3bnW4hRKIW6PdskEmuqyzxNF7o6KT8ZZxkD+oCG+d4gEyCe0+
	8xb2RasYsk4TL8WIr9G9Beu8jqAYXTo=
Subject: [PATCH v3 2/7] x86: drop ASM_{CL,ST}AC
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <81f34bf0-e802-ae11-3a46-9ba45b17fe3f@suse.com>
Date: Mon, 23 Nov 2020 14:43:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Use ALTERNATIVE directly, such that at the use sites it is visible that
alternative code patching is in use. Similarly avoid hiding the fact in
SAVE_ALL.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2: Further adjust comment in asm_domain_crash_synchronous().

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2200,9 +2200,8 @@ void activate_debugregs(const struct vcp
 void asm_domain_crash_synchronous(unsigned long addr)
 {
     /*
-     * We need clear AC bit here because in entry.S AC is set
-     * by ASM_STAC to temporarily allow accesses to user pages
-     * which is prevented by SMAP by default.
+     * We need to clear the AC bit here because the exception fixup logic
+     * may leave user accesses enabled.
      *
      * For some code paths, where this function is called, clac()
      * is not needed, but adding clac() here instead of each place
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -12,7 +12,7 @@
 #include <irq_vectors.h>
 
 ENTRY(entry_int82)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
         movl  $HYPERCALL_VECTOR, 4(%rsp)
         SAVE_ALL compat=1 /* DPL1 gate, restricted to 32bit PV guests only. */
@@ -286,7 +286,7 @@ ENTRY(compat_int80_direct_trap)
 compat_create_bounce_frame:
         ASSERT_INTERRUPTS_ENABLED
         mov   %fs,%edi
-        ASM_STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
         testb $2,UREGS_cs+8(%rsp)
         jz    1f
         /* Push new frame at registered guest-OS stack base. */
@@ -333,7 +333,7 @@ compat_create_bounce_frame:
         movl  TRAPBOUNCE_error_code(%rdx),%eax
 .Lft8:  movl  %eax,%fs:(%rsi)           # ERROR CODE
 1:
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         /* Rewrite our stack frame and return to guest-OS mode. */
         /* IA32 Ref. Vol. 3: TF, VM, RF and NT flags are cleared on trap. */
         andl  $~(X86_EFLAGS_VM|X86_EFLAGS_RF|\
@@ -379,7 +379,7 @@ compat_crash_page_fault_4:
         addl  $4,%esi
 compat_crash_page_fault:
 .Lft14: mov   %edi,%fs
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         movl  %esi,%edi
         call  show_page_walk
         jmp   dom_crash_sync_extable
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -280,7 +280,7 @@ ENTRY(sysenter_entry)
         pushq $0
         pushfq
 GLOBAL(sysenter_eflags_saved)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $3 /* ring 3 null cs */
         pushq $0 /* null rip */
         pushq $0
@@ -333,7 +333,7 @@ UNLIKELY_END(sysenter_gpf)
         jmp   .Lbounce_exception
 
 ENTRY(int80_direct_trap)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
         movl  $0x80, 4(%rsp)
         SAVE_ALL
@@ -452,7 +452,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
 
         subq  $7*8,%rsi
         movq  UREGS_ss+8(%rsp),%rax
-        ASM_STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
         movq  VCPU_domain(%rbx),%rdi
         STORE_GUEST_STACK(rax,6)        # SS
         movq  UREGS_rsp+8(%rsp),%rax
@@ -490,7 +490,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
         STORE_GUEST_STACK(rax,1)        # R11
         movq  UREGS_rcx+8(%rsp),%rax
         STORE_GUEST_STACK(rax,0)        # RCX
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 
 #undef STORE_GUEST_STACK
 
@@ -532,11 +532,11 @@ domain_crash_page_fault_2x8:
 domain_crash_page_fault_1x8:
         addq  $8,%rsi
 domain_crash_page_fault_0x8:
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         movq  %rsi,%rdi
         call  show_page_walk
 ENTRY(dom_crash_sync_extable)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         # Get out of the guest-save area of the stack.
         GET_STACK_END(ax)
         leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
@@ -594,7 +594,8 @@ UNLIKELY_END(exit_cr3)
         iretq
 
 ENTRY(common_interrupt)
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -626,7 +627,8 @@ ENTRY(page_fault)
         movl  $TRAP_page_fault,4(%rsp)
 /* No special register assumptions. */
 GLOBAL(handle_exception)
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -831,7 +833,8 @@ ENTRY(entry_CP)
 ENTRY(double_fault)
         movl  $TRAP_double_fault,4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
-        SAVE_ALL STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -864,7 +867,8 @@ ENTRY(nmi)
         pushq $0
         movl  $TRAP_nmi,4(%rsp)
 handle_ist_exception:
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -200,16 +200,6 @@ register unsigned long current_stack_poi
         UNLIKELY_END_SECTION "\n"          \
         ".Llikely." #tag ".%=:"
 
-#endif
-
-#ifdef __ASSEMBLY__
-.macro ASM_STAC
-    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
-.endm
-.macro ASM_CLAC
-    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
-.endm
-#else
 static always_inline void clac(void)
 {
     /* Note: a barrier is implicit in alternative() */
@@ -224,18 +214,7 @@ static always_inline void stac(void)
 #endif
 
 #ifdef __ASSEMBLY__
-.macro SAVE_ALL op, compat=0
-.ifeqs "\op", "CLAC"
-        ASM_CLAC
-.else
-.ifeqs "\op", "STAC"
-        ASM_STAC
-.else
-.ifnb \op
-        .err
-.endif
-.endif
-.endif
+.macro SAVE_ALL compat=0
         addq  $-(UREGS_error_code-UREGS_r15), %rsp
         cld
         movq  %rdi,UREGS_rdi(%rsp)



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:44:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:44:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34329.65298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC8b-0001iJ-W8; Mon, 23 Nov 2020 13:44:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34329.65298; Mon, 23 Nov 2020 13:44:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC8b-0001iC-T8; Mon, 23 Nov 2020 13:44:13 +0000
Received: by outflank-mailman (input) for mailman id 34329;
 Mon, 23 Nov 2020 13:44:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a5JY=E5=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1khC8Y-0001hs-D5
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:44:13 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 357b0d76-9572-4816-93df-74b42e054565;
 Mon, 23 Nov 2020 13:44:04 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khC7m-0008NA-Cg; Mon, 23 Nov 2020 13:43:22 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A2CC6307958;
 Mon, 23 Nov 2020 14:43:17 +0100 (CET)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 87D77200C8D27; Mon, 23 Nov 2020 14:43:17 +0100 (CET)

X-Inumbo-ID: 357b0d76-9572-4816-93df-74b42e054565
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=RydA/us2lnbAjoJ7xmSIIjKk6qIoneLQLLcDUjJYnxM=; b=U9RjAS+s/uGM6Pd4fbRXzPe1KM
	4cjU70YB/r0s6veegEZX2/ID9UVs2g2w7Wrmga4Av7HoJvyzB8HCjxBt0lh6n1TqwfJ4H8BPNl89O
	KIhaST/RMq5ls4KSapEf/Q6y1MdBZKQM+ClOUtBpZEeAcAx2GszgOUlSRnA9b4WaKmlsnyRCywUnh
	vXDHsNJ6jumKfmyLNMg/Aeo8j8lSlrdZ0BpCfqzMFFUESs/Ck5qBZ6Po2Vxy3P3PKNtMfg9nJ2J5z
	CnS28YMDzG9RwB2Zw8XvsCQJKU3GVz9fc12LIsoBs7G9XkQj9WAL8reynr3QvIcEFav40d0mVYp/L
	oe9fweFQ==;
Date: Mon, 23 Nov 2020 14:43:17 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, luto@kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: Re: [PATCH v2 00/12] x86: major paravirt cleanup
Message-ID: <20201123134317.GE3092@hirez.programming.kicks-ass.net>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120125342.GC3040@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120125342.GC3040@hirez.programming.kicks-ass.net>

On Fri, Nov 20, 2020 at 01:53:42PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
> >  30 files changed, 325 insertions(+), 598 deletions(-)
> 
> Much awesome ! I'll try and get that objtool thing sorted.

This seems to work for me. It isn't 100% accurate, because it doesn't
know about the direct call instruction, but I can either fudge that, or
switching to static_call() will cure it.

It's not exactly pretty, but it should be straightforward.

Index: linux-2.6/tools/objtool/check.c
===================================================================
--- linux-2.6.orig/tools/objtool/check.c
+++ linux-2.6/tools/objtool/check.c
@@ -1090,6 +1090,32 @@ static int handle_group_alt(struct objto
 		return -1;
 	}
 
+	/*
+	 * Add the filler NOP, required for alternative CFI.
+	 */
+	if (special_alt->group && special_alt->new_len < special_alt->orig_len) {
+		struct instruction *nop = malloc(sizeof(*nop));
+		if (!nop) {
+			WARN("malloc failed");
+			return -1;
+		}
+		memset(nop, 0, sizeof(*nop));
+		INIT_LIST_HEAD(&nop->alts);
+		INIT_LIST_HEAD(&nop->stack_ops);
+		init_cfi_state(&nop->cfi);
+
+		nop->sec = last_new_insn->sec;
+		nop->ignore = last_new_insn->ignore;
+		nop->func = last_new_insn->func;
+		nop->alt_group = alt_group;
+		nop->offset = last_new_insn->offset + last_new_insn->len;
+		nop->type = INSN_NOP;
+		nop->len = special_alt->orig_len - special_alt->new_len;
+
+		list_add(&nop->list, &last_new_insn->list);
+		last_new_insn = nop;
+	}
+
 	if (fake_jump)
 		list_add(&fake_jump->list, &last_new_insn->list);
 
@@ -2190,18 +2216,12 @@ static int handle_insn_ops(struct instru
 	struct stack_op *op;
 
 	list_for_each_entry(op, &insn->stack_ops, list) {
-		struct cfi_state old_cfi = state->cfi;
 		int res;
 
 		res = update_cfi_state(insn, &state->cfi, op);
 		if (res)
 			return res;
 
-		if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) {
-			WARN_FUNC("alternative modifies stack", insn->sec, insn->offset);
-			return -1;
-		}
-
 		if (op->dest.type == OP_DEST_PUSHF) {
 			if (!state->uaccess_stack) {
 				state->uaccess_stack = 1;
@@ -2399,19 +2419,137 @@ static int validate_return(struct symbol
  * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED
  * states which then results in ORC entries, which we just said we didn't want.
  *
- * Avoid them by copying the CFI entry of the first instruction into the whole
- * alternative.
+ * Avoid them by copying the CFI entry of the first instruction into the hole.
  */
-static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn)
+static void __fill_alt_cfi(struct objtool_file *file, struct instruction *insn)
 {
 	struct instruction *first_insn = insn;
 	int alt_group = insn->alt_group;
 
-	sec_for_each_insn_continue(file, insn) {
+	sec_for_each_insn_from(file, insn) {
 		if (insn->alt_group != alt_group)
 			break;
-		insn->cfi = first_insn->cfi;
+
+		if (!insn->visited)
+			insn->cfi = first_insn->cfi;
+	}
+}
+
+static void fill_alt_cfi(struct objtool_file *file, struct instruction *alt_insn)
+{
+	struct alternative *alt;
+
+	__fill_alt_cfi(file, alt_insn);
+
+	list_for_each_entry(alt, &alt_insn->alts, list)
+		__fill_alt_cfi(file, alt->insn);
+}
+
+static struct instruction *
+__find_unwind(struct objtool_file *file,
+	      struct instruction *insn, unsigned long offset)
+{
+	int alt_group = insn->alt_group;
+	struct instruction *next;
+	unsigned long off = 0;
+
+	while ((off + insn->len) <= offset) {
+		next = next_insn_same_sec(file, insn);
+		if (next && next->alt_group != alt_group)
+			next = NULL;
+
+		if (!next)
+			break;
+
+		off += insn->len;
+		insn = next;
 	}
+
+	return insn;
+}
+
+struct instruction *
+find_alt_unwind(struct objtool_file *file,
+		struct instruction *alt_insn, unsigned long offset)
+{
+	struct instruction *fit;
+	struct alternative *alt;
+	unsigned long fit_off;
+
+	fit = __find_unwind(file, alt_insn, offset);
+	fit_off = (fit->offset - alt_insn->offset);
+
+	list_for_each_entry(alt, &alt_insn->alts, list) {
+		struct instruction *x;
+		unsigned long x_off;
+
+		x = __find_unwind(file, alt->insn, offset);
+		x_off = (x->offset - alt->insn->offset);
+
+		if (fit_off < x_off) {
+			fit = x;
+			fit_off = x_off;
+
+		} else if (fit_off == x_off &&
+			   memcmp(&fit->cfi, &x->cfi, sizeof(struct cfi_state))) {
+
+			char *_str1 = offstr(fit->sec, fit->offset);
+			char *_str2 = offstr(x->sec, x->offset);
+			WARN("%s: equal-offset incompatible alternative: %s\n", _str1, _str2);
+			free(_str1);
+			free(_str2);
+			return fit;
+		}
+	}
+
+	return fit;
+}
+
+static int __validate_unwind(struct objtool_file *file,
+			     struct instruction *alt_insn,
+			     struct instruction *insn)
+{
+	int alt_group = insn->alt_group;
+	struct instruction *unwind;
+	unsigned long offset = 0;
+
+	sec_for_each_insn_from(file, insn) {
+		if (insn->alt_group != alt_group)
+			break;
+
+		unwind = find_alt_unwind(file, alt_insn, offset);
+
+		if (memcmp(&insn->cfi, &unwind->cfi, sizeof(struct cfi_state))) {
+
+			char *_str1 = offstr(insn->sec, insn->offset);
+			char *_str2 = offstr(unwind->sec, unwind->offset);
+			WARN("%s: unwind incompatible alternative: %s (%ld)\n",
+			     _str1, _str2, offset);
+			free(_str1);
+			free(_str2);
+			return 1;
+		}
+
+		offset += insn->len;
+	}
+
+	return 0;
+}
+
+static int validate_alt_unwind(struct objtool_file *file,
+			       struct instruction *alt_insn)
+{
+	struct alternative *alt;
+
+	if (__validate_unwind(file, alt_insn, alt_insn))
+		return 1;
+
+	list_for_each_entry(alt, &alt_insn->alts, list) {
+		if (__validate_unwind(file, alt_insn, alt->insn))
+			return 1;
+	}
+
+	return 0;
 }
 
 /*
@@ -2423,9 +2561,10 @@ static void fill_alternative_cfi(struct
 static int validate_branch(struct objtool_file *file, struct symbol *func,
 			   struct instruction *insn, struct insn_state state)
 {
+	struct instruction *next_insn, *alt_insn = NULL;
 	struct alternative *alt;
-	struct instruction *next_insn;
 	struct section *sec;
+	int alt_group = 0;
 	u8 visited;
 	int ret;
 
@@ -2480,8 +2619,10 @@ static int validate_branch(struct objtoo
 				}
 			}
 
-			if (insn->alt_group)
-				fill_alternative_cfi(file, insn);
+			if (insn->alt_group) {
+				alt_insn = insn;
+				alt_group = insn->alt_group;
+			}
 
 			if (skip_orig)
 				return 0;
@@ -2613,6 +2754,17 @@ static int validate_branch(struct objtoo
 		}
 
 		insn = next_insn;
+
+		if (alt_insn && insn->alt_group != alt_group) {
+			alt_insn->alt_end = insn;
+
+			fill_alt_cfi(file, alt_insn);
+
+			if (validate_alt_unwind(file, alt_insn))
+				return 1;
+
+			alt_insn = NULL;
+		}
 	}
 
 	return 0;
Index: linux-2.6/tools/objtool/check.h
===================================================================
--- linux-2.6.orig/tools/objtool/check.h
+++ linux-2.6/tools/objtool/check.h
@@ -40,6 +40,7 @@ struct instruction {
 	struct instruction *first_jump_src;
 	struct reloc *jump_table;
 	struct list_head alts;
+	struct instruction *alt_end;
 	struct symbol *func;
 	struct list_head stack_ops;
 	struct cfi_state cfi;
@@ -54,6 +55,10 @@ static inline bool is_static_jump(struct
 	       insn->type == INSN_JUMP_UNCONDITIONAL;
 }
 
+struct instruction *
+find_alt_unwind(struct objtool_file *file,
+		struct instruction *alt_insn, unsigned long offset);
+
 struct instruction *find_insn(struct objtool_file *file,
 			      struct section *sec, unsigned long offset);
 
Index: linux-2.6/tools/objtool/orc_gen.c
===================================================================
--- linux-2.6.orig/tools/objtool/orc_gen.c
+++ linux-2.6/tools/objtool/orc_gen.c
@@ -12,75 +12,86 @@
 #include "check.h"
 #include "warn.h"
 
-int create_orc(struct objtool_file *file)
+static int create_orc_insn(struct objtool_file *file, struct instruction *insn)
 {
-	struct instruction *insn;
+	struct orc_entry *orc = &insn->orc;
+	struct cfi_reg *cfa = &insn->cfi.cfa;
+	struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+
+	orc->end = insn->cfi.end;
+
+	if (cfa->base == CFI_UNDEFINED) {
+		orc->sp_reg = ORC_REG_UNDEFINED;
+		return 0;
+	}
 
-	for_each_insn(file, insn) {
-		struct orc_entry *orc = &insn->orc;
-		struct cfi_reg *cfa = &insn->cfi.cfa;
-		struct cfi_reg *bp = &insn->cfi.regs[CFI_BP];
+	switch (cfa->base) {
+	case CFI_SP:
+		orc->sp_reg = ORC_REG_SP;
+		break;
+	case CFI_SP_INDIRECT:
+		orc->sp_reg = ORC_REG_SP_INDIRECT;
+		break;
+	case CFI_BP:
+		orc->sp_reg = ORC_REG_BP;
+		break;
+	case CFI_BP_INDIRECT:
+		orc->sp_reg = ORC_REG_BP_INDIRECT;
+		break;
+	case CFI_R10:
+		orc->sp_reg = ORC_REG_R10;
+		break;
+	case CFI_R13:
+		orc->sp_reg = ORC_REG_R13;
+		break;
+	case CFI_DI:
+		orc->sp_reg = ORC_REG_DI;
+		break;
+	case CFI_DX:
+		orc->sp_reg = ORC_REG_DX;
+		break;
+	default:
+		WARN_FUNC("unknown CFA base reg %d",
+			  insn->sec, insn->offset, cfa->base);
+		return -1;
+	}
 
-		if (!insn->sec->text)
-			continue;
+	switch(bp->base) {
+	case CFI_UNDEFINED:
+		orc->bp_reg = ORC_REG_UNDEFINED;
+		break;
+	case CFI_CFA:
+		orc->bp_reg = ORC_REG_PREV_SP;
+		break;
+	case CFI_BP:
+		orc->bp_reg = ORC_REG_BP;
+		break;
+	default:
+		WARN_FUNC("unknown BP base reg %d",
+			  insn->sec, insn->offset, bp->base);
+		return -1;
+	}
 
-		orc->end = insn->cfi.end;
+	orc->sp_offset = cfa->offset;
+	orc->bp_offset = bp->offset;
+	orc->type = insn->cfi.type;
 
-		if (cfa->base == CFI_UNDEFINED) {
-			orc->sp_reg = ORC_REG_UNDEFINED;
-			continue;
-		}
+	return 0;
+}
 
-		switch (cfa->base) {
-		case CFI_SP:
-			orc->sp_reg = ORC_REG_SP;
-			break;
-		case CFI_SP_INDIRECT:
-			orc->sp_reg = ORC_REG_SP_INDIRECT;
-			break;
-		case CFI_BP:
-			orc->sp_reg = ORC_REG_BP;
-			break;
-		case CFI_BP_INDIRECT:
-			orc->sp_reg = ORC_REG_BP_INDIRECT;
-			break;
-		case CFI_R10:
-			orc->sp_reg = ORC_REG_R10;
-			break;
-		case CFI_R13:
-			orc->sp_reg = ORC_REG_R13;
-			break;
-		case CFI_DI:
-			orc->sp_reg = ORC_REG_DI;
-			break;
-		case CFI_DX:
-			orc->sp_reg = ORC_REG_DX;
-			break;
-		default:
-			WARN_FUNC("unknown CFA base reg %d",
-				  insn->sec, insn->offset, cfa->base);
-			return -1;
-		}
+int create_orc(struct objtool_file *file)
+{
+	struct instruction *insn;
 
-		switch(bp->base) {
-		case CFI_UNDEFINED:
-			orc->bp_reg = ORC_REG_UNDEFINED;
-			break;
-		case CFI_CFA:
-			orc->bp_reg = ORC_REG_PREV_SP;
-			break;
-		case CFI_BP:
-			orc->bp_reg = ORC_REG_BP;
-			break;
-		default:
-			WARN_FUNC("unknown BP base reg %d",
-				  insn->sec, insn->offset, bp->base);
-			return -1;
-		}
+	for_each_insn(file, insn) {
+		int ret;
+	       
+		if (!insn->sec->text)
+			continue;
 
-		orc->sp_offset = cfa->offset;
-		orc->bp_offset = bp->offset;
-		orc->type = insn->cfi.type;
+		ret = create_orc_insn(file, insn);
+		if (ret)
+			return ret;
 	}
 
 	return 0;
@@ -166,6 +177,28 @@ int create_orc_sections(struct objtool_f
 
 		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
+
+			if (insn->alt_end) {
+				unsigned int offset, alt_len;
+				struct instruction *unwind;
+
+				alt_len = insn->alt_end->offset - insn->offset;
+				for (offset = 0; offset < alt_len; offset++) {
+					unwind = find_alt_unwind(file, insn, offset);
+					/* XXX: skipped earlier ! */
+					create_orc_insn(file, unwind);
+					if (!prev_insn ||
+					    memcmp(&unwind->orc, &prev_insn->orc,
+						   sizeof(struct orc_entry))) {
+						idx++;
+//						WARN_FUNC("ORC @ %d/%d", sec, insn->offset+offset, offset, alt_len);
+					}
+					prev_insn = unwind;
+				}
+
+				insn = insn->alt_end;
+			}
+
 			if (!prev_insn ||
 			    memcmp(&insn->orc, &prev_insn->orc,
 				   sizeof(struct orc_entry))) {
@@ -203,6 +236,31 @@ int create_orc_sections(struct objtool_f
 
 		prev_insn = NULL;
 		sec_for_each_insn(file, sec, insn) {
+
+			if (insn->alt_end) {
+				unsigned int offset, alt_len;
+				struct instruction *unwind;
+
+				alt_len = insn->alt_end->offset - insn->offset;
+				for (offset = 0; offset < alt_len; offset++) {
+					unwind = find_alt_unwind(file, insn, offset);
+					if (!prev_insn ||
+					    memcmp(&unwind->orc, &prev_insn->orc,
+						   sizeof(struct orc_entry))) {
+
+						if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx,
+								     insn->sec, insn->offset + offset,
+								     &unwind->orc))
+							return -1;
+
+						idx++;
+					}
+					prev_insn = unwind;
+				}
+
+				insn = insn->alt_end;
+			}
+
 			if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc,
 						 sizeof(struct orc_entry))) {
 


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:44:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:44:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34330.65310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC8e-0001kM-CA; Mon, 23 Nov 2020 13:44:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34330.65310; Mon, 23 Nov 2020 13:44:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC8e-0001kF-8I; Mon, 23 Nov 2020 13:44:16 +0000
Received: by outflank-mailman (input) for mailman id 34330;
 Mon, 23 Nov 2020 13:44:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khC8c-0001jS-Nt
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:44:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1be9698c-6ac1-4ab8-9de4-04ea58a5fcc4;
 Mon, 23 Nov 2020 13:44:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E3A64AC0C;
 Mon, 23 Nov 2020 13:44:12 +0000 (UTC)
X-Inumbo-ID: 1be9698c-6ac1-4ab8-9de4-04ea58a5fcc4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139053; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WBhxnrqI/ea3YaLwkwfUd+kbHRlyoh/3QMt0/b0VBtU=;
	b=Hk4DtJHgoyaRgau9Umhf8UEB71yZ5+l+p6PXVzfplk0gQdnErlVbuB5QUVcwkfZ+oE2iXi
	G9/j1UwbyAPrXqQf32osNY1Co6iSS5SqVqDDNKRJ1YT1hLMtkafCdnqk1moSuLLDvt/tbG
	T8bjzhhHyF/W5zM0USlN3rOmFdomQXc=
Subject: [PATCH v3 3/7] x86: fold indirect_thunk_asm.h into asm-defns.h
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <80152a45-0737-eff4-d2ee-6630ffbf34b9@suse.com>
Date: Mon, 23 Nov 2020 14:44:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

There's little point in having two separate headers both getting
included by asm_defns.h. In particular, this reduces the number of
places where asm(".include ...") has to be suitably guarded in such
dual-use headers.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -139,7 +139,7 @@ ifeq ($(TARGET_ARCH),x86)
 t1 = $(call as-insn,$(CC),".L0: .L1: .skip (.L1 - .L0)",,-no-integrated-as)
 
 # Check whether clang asm()-s support .include.
-t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/indirect_thunk_asm.h\"",,-no-integrated-as)
+t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/asm-defns.h\"",,-no-integrated-as)
 
 # Check whether clang keeps .macro-s between asm()-s:
 # https://bugs.llvm.org/show_bug.cgi?id=36110
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -7,3 +7,40 @@
     .byte 0x0f, 0x01, 0xcb
 .endm
 #endif
+
+.macro INDIRECT_BRANCH insn:req arg:req
+/*
+ * Create an indirect branch.  insn is one of call/jmp, arg is a single
+ * register.
+ *
+ * With no compiler support, this degrades into a plain indirect call/jmp.
+ * With compiler support, dispatch to the correct __x86_indirect_thunk_*
+ */
+    .if CONFIG_INDIRECT_THUNK == 1
+
+        $done = 0
+        .irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
+        .ifeqs "\arg", "%r\reg"
+            \insn __x86_indirect_thunk_r\reg
+            $done = 1
+           .exitm
+        .endif
+        .endr
+
+        .if $done != 1
+            .error "Bad register arg \arg"
+        .endif
+
+    .else
+        \insn *\arg
+    .endif
+.endm
+
+/* Convenience wrappers. */
+.macro INDIRECT_CALL arg:req
+    INDIRECT_BRANCH call \arg
+.endm
+
+.macro INDIRECT_JMP arg:req
+    INDIRECT_BRANCH jmp \arg
+.endm
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -22,7 +22,6 @@
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
       __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
-#include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
 void ret_from_intr(void);
--- a/xen/include/asm-x86/indirect_thunk_asm.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Trickery to allow this header to be included at the C level, to permit
- * proper dependency tracking in .*.o.d files, while still having it contain
- * assembler only macros.
- */
-#ifndef __ASSEMBLY__
-# if 0
-  .if 0
-# endif
-asm ( "\t.include \"asm/indirect_thunk_asm.h\"" );
-# if 0
-  .endif
-# endif
-#else
-
-.macro INDIRECT_BRANCH insn:req arg:req
-/*
- * Create an indirect branch.  insn is one of call/jmp, arg is a single
- * register.
- *
- * With no compiler support, this degrades into a plain indirect call/jmp.
- * With compiler support, dispatch to the correct __x86_indirect_thunk_*
- */
-    .if CONFIG_INDIRECT_THUNK == 1
-
-        $done = 0
-        .irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
-        .ifeqs "\arg", "%r\reg"
-            \insn __x86_indirect_thunk_r\reg
-            $done = 1
-           .exitm
-        .endif
-        .endr
-
-        .if $done != 1
-            .error "Bad register arg \arg"
-        .endif
-
-    .else
-        \insn *\arg
-    .endif
-.endm
-
-/* Convenience wrappers. */
-.macro INDIRECT_CALL arg:req
-    INDIRECT_BRANCH call \arg
-.endm
-
-.macro INDIRECT_JMP arg:req
-    INDIRECT_BRANCH jmp \arg
-.endm
-
-#endif



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:44:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34338.65322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC98-0001vw-Lz; Mon, 23 Nov 2020 13:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34338.65322; Mon, 23 Nov 2020 13:44:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC98-0001vo-Ig; Mon, 23 Nov 2020 13:44:46 +0000
Received: by outflank-mailman (input) for mailman id 34338;
 Mon, 23 Nov 2020 13:44:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khC97-0001vW-27
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:44:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbf8f3b7-055f-420e-9797-ded7925fee27;
 Mon, 23 Nov 2020 13:44:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6F810AD8D;
 Mon, 23 Nov 2020 13:44:43 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khC97-0001vW-27
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:44:45 +0000
X-Inumbo-ID: fbf8f3b7-055f-420e-9797-ded7925fee27
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id fbf8f3b7-055f-420e-9797-ded7925fee27;
	Mon, 23 Nov 2020 13:44:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139083; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m5Zmmmf0MXsKoaKGmPTD4yW/lR2RJgU9NvmMhz/MhpE=;
	b=Ey5Ku3DHR8NVCbvpfRLtc84MjFMjwVKLLNCFcTzI6kAUDf+faOxgUX7eKWf6oIhece4UHC
	6m6+9rW0Leee4EWeUgpq+SnEIjTJRPwUilV6U4nvMz8wdRsZyMb5rfra3eTPQPOfpsvYW/
	8lMTprNy4ww7Tnzu+urckAK78ZlszFg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6F810AD8D;
	Mon, 23 Nov 2020 13:44:43 +0000 (UTC)
Subject: [PATCH v3 4/7] x86: guard against straight-line speculation past RET
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <0f717e5d-c1f7-ff0d-e136-16cea6b77de3@suse.com>
Date: Mon, 23 Nov 2020 14:44:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Under certain conditions CPUs can speculate into the instruction stream
past a RET instruction. Guard against this just as 3b7dab93f240
("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
did, by inserting an "INT $3" insn. Only the mechanics of achieving
this differ: a set of macros gets introduced to post-process RET insns
issued by the compiler (or living in assembly files).

Unfortunately for clang this requires a further feature its built-in
assembler doesn't support: we need to be able to override insn mnemonics
produced by the compiler (which may be impossible if, internally,
assembly mnemonics never get generated).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
TBD: Would be nice to avoid the additions in .init.text, but a query to
     the binutils folks regarding the ability to identify the section
     stuff (submitted by Peter Zijlstra over a year ago:
     https://sourceware.org/pipermail/binutils/2019-July/107528.html)
     has been left without helpful replies.
---
v4: Drop left-over checking of clang for \(text) handling.
v3: Use .byte 0xc[23] instead of the nested macros.
v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -145,7 +145,10 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
 # https://bugs.llvm.org/show_bug.cgi?id=36110
 t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
 
-CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
+# Check whether macros can override insn mnemonics in inline assembly.
+t4 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
+
+CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3),$(t4))
 endif
 
 CLANG_FLAGS += -Werror=unknown-warning-option
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -44,3 +44,19 @@
 .macro INDIRECT_JMP arg:req
     INDIRECT_BRANCH jmp \arg
 .endm
+
+/*
+ * To guard against speculation past RET, insert a breakpoint insn
+ * immediately after them.
+ */
+.macro ret operand:vararg
+    retq \operand
+.endm
+.macro retq operand:vararg
+    .ifb \operand
+    .byte 0xc3
+    .else
+    .byte 0xc2
+    .word \operand
+    .endif
+.endm
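As an illustration (not part of the patch): the `.ifb` branch in the retq macro above distinguishes the two near-RET encodings, plain RET (opcode 0xC3) versus RET imm16 (0xC2 followed by a little-endian 16-bit immediate, the `.word` directive). A minimal Python sketch of those encodings:

```python
import struct

def encode_ret(pop_bytes=None) -> bytes:
    """Emit the same bytes as the retq macro above: plain RET is
    opcode 0xC3; RET imm16 is 0xC2 plus a little-endian 16-bit
    immediate giving the number of stack bytes to pop."""
    if pop_bytes is None:
        return b"\xc3"
    return b"\xc2" + struct.pack("<H", pop_bytes)

print(encode_ret().hex())   # c3
print(encode_ret(8).hex())  # c20800
```

Emitting the raw bytes rather than the mnemonic is what keeps the macro from recursing into itself (cf. the v3 note about replacing the nested macros with `.byte 0xc[23]`).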



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:45:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:45:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34347.65334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC9x-00025x-1F; Mon, 23 Nov 2020 13:45:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34347.65334; Mon, 23 Nov 2020 13:45:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khC9w-00025q-Tw; Mon, 23 Nov 2020 13:45:36 +0000
Received: by outflank-mailman (input) for mailman id 34347;
 Mon, 23 Nov 2020 13:45:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khC9v-00025f-DZ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:45:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88cc19ee-b5b7-4f6f-aeae-258e2af5d9ea;
 Mon, 23 Nov 2020 13:45:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6960ABCE;
 Mon, 23 Nov 2020 13:45:33 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khC9v-00025f-DZ
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:45:35 +0000
X-Inumbo-ID: 88cc19ee-b5b7-4f6f-aeae-258e2af5d9ea
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 88cc19ee-b5b7-4f6f-aeae-258e2af5d9ea;
	Mon, 23 Nov 2020 13:45:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139134; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cGukHYFeZjwNwwKmI/A8FWjkYK+Lh6/hnHJ4QJdVKlA=;
	b=UGPdq806MwOeFU4A8nfqF6OoZBySvnaH8YlLUj+QFDYWEe9Wel1Pa8Y7bXS6CPDhIHMVeb
	h4k4ylbhbK8sQoBlB9ap3KbNSAl3+1ZzfMABH4sE5CmGKNe294qz5lf89KuKDjBAZdJtKX
	2P4YzKM2NS8jBejprz54cAZs1cqzczA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E6960ABCE;
	Mon, 23 Nov 2020 13:45:33 +0000 (UTC)
Subject: [PATCH v3 5/7] x86: limit amount of INT3 in IND_THUNK_*
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <dbbc4ec4-799c-47c8-2cf9-4d7188e254e1@suse.com>
Date: Mon, 23 Nov 2020 14:45:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

There's no point in having every replacement variant also specify the
INT3 - just have it once in the base macro. When patching, NOPs will get
inserted, and these are fine to speculate through (until the INT3 is
reached).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
I also wonder whether the LFENCE in IND_THUNK_RETPOLINE couldn't be
replaced by INT3 as well. Of course the effect would be marginal, as the
size of the thunk will still be 16 bytes when including the tail padding
resulting from alignment.
---
v3: Add comment.
v2: New.

--- a/xen/arch/x86/indirect-thunk.S
+++ b/xen/arch/x86/indirect-thunk.S
@@ -11,6 +11,9 @@
 
 #include <asm/asm_defns.h>
 
+/* Don't transform the "ret" further down. */
+.purgem ret
+
 .macro IND_THUNK_RETPOLINE reg:req
         call 2f
 1:
@@ -24,12 +27,10 @@
 .macro IND_THUNK_LFENCE reg:req
         lfence
         jmp *%\reg
-        int3 /* Halt straight-line speculation */
 .endm
 
 .macro IND_THUNK_JMP reg:req
         jmp *%\reg
-        int3 /* Halt straight-line speculation */
 .endm
 
 /*
@@ -44,6 +45,8 @@ ENTRY(__x86_indirect_thunk_\reg)
         __stringify(IND_THUNK_LFENCE \reg), X86_FEATURE_IND_THUNK_LFENCE, \
         __stringify(IND_THUNK_JMP \reg),    X86_FEATURE_IND_THUNK_JMP
 
+        int3 /* Halt straight-line speculation */
+
         .size __x86_indirect_thunk_\reg, . - __x86_indirect_thunk_\reg
         .type __x86_indirect_thunk_\reg, @function
 .endm
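As a rough illustration (the concrete byte values are mine, not from the patch): when a shorter replacement is patched into the alternative site, the remainder of the site is filled with NOPs, and the single INT3 now placed once after the ALTERNATIVE covers every variant, since speculation runs harmlessly through the NOPs and stops at the INT3.

```python
NOP, INT3 = 0x90, 0xCC

def patch_alternative(site_len: int, replacement: bytes) -> bytes:
    """Mimic alternatives patching as described above: pad the
    replacement with NOPs up to the site length, then append the one
    INT3 that now lives in the base macro after the site."""
    assert len(replacement) <= site_len
    return replacement + bytes([NOP]) * (site_len - len(replacement)) + bytes([INT3])

# IND_THUNK_JMP %rax is just "jmp *%rax" (bytes ff e0), shorter than
# a hypothetical 5-byte site; the tail becomes NOPs, then INT3.
patched = patch_alternative(5, b"\xff\xe0")
```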



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:46:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:46:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34351.65346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCAJ-0002CI-Bc; Mon, 23 Nov 2020 13:45:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34351.65346; Mon, 23 Nov 2020 13:45:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCAJ-0002CB-6u; Mon, 23 Nov 2020 13:45:59 +0000
Received: by outflank-mailman (input) for mailman id 34351;
 Mon, 23 Nov 2020 13:45:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCAI-0002C4-R3
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:45:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcb645b7-d493-414d-8ed5-da826e2edc67;
 Mon, 23 Nov 2020 13:45:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6277DAD19;
 Mon, 23 Nov 2020 13:45:57 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khCAI-0002C4-R3
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:45:58 +0000
X-Inumbo-ID: fcb645b7-d493-414d-8ed5-da826e2edc67
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fcb645b7-d493-414d-8ed5-da826e2edc67;
	Mon, 23 Nov 2020 13:45:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139157; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=w7uDOn4N0IoJ51us1LkK9OZYghwqQsMbEk0Oi9fZxmc=;
	b=vVX4omisYEgIILBK7CbJImhtB56Q0TRg2zdFwOd5W9a35SD3MCmc8tUfmQHZCCX6gHZVw8
	1BRDnqx24cDcwiNY2Dn44KlU9EZc00YAKS2vyeFVjjmhrxvtsyH/t5X0Qsi/+vRbLF3Iul
	HKGyOPtTmLW6CgpA3rze81SEmbGnoM8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6277DAD19;
	Mon, 23 Nov 2020 13:45:57 +0000 (UTC)
Subject: [PATCH v3 6/7] x86: make guarding against straight-line speculation
 optional
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <f05d9ee9-2cf0-f2e5-b6e2-7846b9dfec12@suse.com>
Date: Mon, 23 Nov 2020 14:45:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Put insertion of INT3 behind CONFIG_SPECULATIVE_HARDEN_BRANCH
conditionals.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.

--- a/xen/arch/x86/indirect-thunk.S
+++ b/xen/arch/x86/indirect-thunk.S
@@ -11,8 +11,10 @@
 
 #include <asm/asm_defns.h>
 
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
 /* Don't transform the "ret" further down. */
 .purgem ret
+#endif
 
 .macro IND_THUNK_RETPOLINE reg:req
         call 2f
@@ -45,7 +47,9 @@ ENTRY(__x86_indirect_thunk_\reg)
         __stringify(IND_THUNK_LFENCE \reg), X86_FEATURE_IND_THUNK_LFENCE, \
         __stringify(IND_THUNK_JMP \reg),    X86_FEATURE_IND_THUNK_JMP
 
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
         int3 /* Halt straight-line speculation */
+#endif
 
         .size __x86_indirect_thunk_\reg, . - __x86_indirect_thunk_\reg
         .type __x86_indirect_thunk_\reg, @function
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -45,6 +45,8 @@
     INDIRECT_BRANCH jmp \arg
 .endm
 
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
+
 /*
  * To guard against speculation past RET, insert a breakpoint insn
  * immediately after them.
@@ -60,3 +62,5 @@
     .word \operand
     .endif
 .endm
+
+#endif



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:46:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34360.65358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCAk-0002Ji-KA; Mon, 23 Nov 2020 13:46:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34360.65358; Mon, 23 Nov 2020 13:46:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCAk-0002Ja-Gs; Mon, 23 Nov 2020 13:46:26 +0000
Received: by outflank-mailman (input) for mailman id 34360;
 Mon, 23 Nov 2020 13:46:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCAj-0002JO-Lw
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:46:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 985749d8-5b92-4062-a5bc-1b08fd081024;
 Mon, 23 Nov 2020 13:46:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF7CAAC23;
 Mon, 23 Nov 2020 13:46:23 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khCAj-0002JO-Lw
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:46:25 +0000
X-Inumbo-ID: 985749d8-5b92-4062-a5bc-1b08fd081024
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 985749d8-5b92-4062-a5bc-1b08fd081024;
	Mon, 23 Nov 2020 13:46:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139184; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vDqW5RTe5KhXmgkVYzMyjwwQbOdd2k70RljFlzhoaCU=;
	b=pFHfdZa+cuPa90DerVTHkiQI8Vm/AJhQHtwMNNMS0fTtFJdermuXy7tWML8godkCDBjkbv
	f0wuDWwgEu5nU6CCIaEXvaVf0QitqWL67X+iqppFshfVSoYWej/fW5yw1B1jVe5lIkeV4Z
	T/l5zpYzVLCjODWlInwYnEy8sH+lBlo=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EF7CAAC23;
	Mon, 23 Nov 2020 13:46:23 +0000 (UTC)
Subject: [PATCH v3 7/7] x86: reduce CET-SS related #ifdef-ary
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <3b55e0f7-a9ad-73f4-bf2c-99053f7886e3@suse.com>
Date: Mon, 23 Nov 2020 14:46:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
to introduce a number of #ifdef-s to make the build work with older tool
chains. Introduce an assembler macro covering for tool chains that don't
know of CET-SS, allowing those conditionals where SETSSBSY alone is the
problem to be dropped again.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v4: Move to end of series.
---
Now that I've done this I'm no longer sure which direction is better to
follow: On one hand this introduces dead code (even if just NOPs) into
CET-SS-disabled builds. On the other hand it is a step towards breaking
the tool chain version dependency of the feature.

I've also dropped conditionals around bigger chunks of code; while I
think that's preferable, I'm open to undoing those parts.

--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -31,7 +31,6 @@ ENTRY(__high_start)
         jz      .L_bsp
 
         /* APs.  Set up shadow stacks before entering C. */
-#ifdef CONFIG_XEN_SHSTK
         testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
                 CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
         je      .L_ap_shstk_done
@@ -55,7 +54,6 @@ ENTRY(__high_start)
         mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
         mov     %rcx, %cr4
         setssbsy
-#endif
 
 .L_ap_shstk_done:
         call    start_secondary
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -668,7 +668,7 @@ static void __init noreturn reinit_bsp_s
     stack_base[0] = stack;
     memguard_guard_stack(stack);
 
-    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
+    if ( cpu_has_xen_shstk )
     {
         wrmsrl(MSR_PL0_SSP,
                (unsigned long)stack + (PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8);
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -197,9 +197,7 @@ ENTRY(cr4_pv32_restore)
 
 /* See lstar_enter for entry register state. */
 ENTRY(cstar_enter)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         CR4_PV32_RESTORE
         movq  8(%rsp),%rax /* Restore %rax. */
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -236,9 +236,7 @@ iret_exit_to_guest:
  * %ss must be saved into the space left by the trampoline.
  */
 ENTRY(lstar_enter)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         movq  8(%rsp),%rax /* Restore %rax. */
         movq  $FLAT_KERNEL_SS,8(%rsp)
@@ -272,9 +270,7 @@ ENTRY(lstar_enter)
         jmp   test_all_events
 
 ENTRY(sysenter_entry)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         pushq $FLAT_USER_SS
         pushq $0
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -8,6 +8,12 @@
 .endm
 #endif
 
+#ifndef CONFIG_HAS_AS_CET_SS
+.macro setssbsy
+    .byte 0xf3, 0x0f, 0x01, 0xe8
+.endm
+#endif
+
 .macro INDIRECT_BRANCH insn:req arg:req
 /*
  * Create an indirect branch.  insn is one of call/jmp, arg is a single
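A small sketch (illustration only) of why the commit message can say "No change to generated code": a CET-SS-aware assembler translates the setssbsy mnemonic into exactly the four opcode bytes F3 0F 01 E8 that the fallback `.macro` above hard-codes via `.byte`, so both paths produce identical output.

```python
def emit_setssbsy(assembler_knows_cet_ss: bool) -> bytes:
    # Encoding a CET-SS-aware GAS produces for the "setssbsy" mnemonic.
    mnemonic = b"\xf3\x0f\x01\xe8"
    # What the fallback .macro emits via .byte on older tool chains.
    fallback = bytes([0xF3, 0x0F, 0x01, 0xE8])
    return mnemonic if assembler_knows_cet_ss else fallback
```

Only the tool chain requirement changes; the binary stays the same, which is what lets the CONFIG_XEN_SHSTK #ifdef-s around the ALTERNATIVE/setssbsy uses be dropped.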



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:48:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:48:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34371.65369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCCK-0002WC-0A; Mon, 23 Nov 2020 13:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34371.65369; Mon, 23 Nov 2020 13:48:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCCJ-0002W5-TM; Mon, 23 Nov 2020 13:48:03 +0000
Received: by outflank-mailman (input) for mailman id 34371;
 Mon, 23 Nov 2020 13:48:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCCI-0002Vu-F2
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:48:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39254c2f-0256-4c53-b71f-b28ad8bb66dc;
 Mon, 23 Nov 2020 13:48:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 51A11AD79;
 Mon, 23 Nov 2020 13:48:00 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khCCI-0002Vu-F2
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:48:02 +0000
X-Inumbo-ID: 39254c2f-0256-4c53-b71f-b28ad8bb66dc
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 39254c2f-0256-4c53-b71f-b28ad8bb66dc;
	Mon, 23 Nov 2020 13:48:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139280; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=40ADJm/3sEW4KdQfL076qWzLzocoau+nfpYBAnALOL0=;
	b=XDd/dtaU4OEOFneSCQMl61KYdeQVoQbQMIZra9YhPk/V83lF8H+oGIlOsOZrpbMSHu4fjp
	Ckq+joqIkCQtBwJDcJ2gdtpRybKJf06E0hTTvoJLGDmwa7l92bsKtfp1aTH77cSSda+E++
	61djAcW/TKSbXut/eKEOLoDqsCVoBys=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 51A11AD79;
	Mon, 23 Nov 2020 13:48:00 +0000 (UTC)
Subject: [really v4] Re: [PATCH v3 0/7] x86: some assembler macro rework
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Message-ID: <08b82025-a2ce-1e72-5301-4ceadeaba873@suse.com>
Date: Mon, 23 Nov 2020 14:47:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e7d1472-dd37-8ed3-ec2f-ce954ea61dfd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.11.2020 14:42, Jan Beulich wrote:
> 1: replace __ASM_{CL,ST}AC
> 2: drop ASM_{CL,ST}AC
> 3: fold indirect_thunk_asm.h into asm-defns.h
> 4: guard against straight-line speculation past RET
> 5: limit amount of INT3 in IND_THUNK_*
> 6: make guarding against straight-line speculation optional
> 7: reduce CET-SS related #ifdef-ary

I'm sorry - I realized only after sending that this really is v4
of the series. I don't think it's worth re-sending, though.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:50:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:50:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34381.65382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCEE-0002p6-Dp; Mon, 23 Nov 2020 13:50:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34381.65382; Mon, 23 Nov 2020 13:50:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCEE-0002oz-AB; Mon, 23 Nov 2020 13:50:02 +0000
Received: by outflank-mailman (input) for mailman id 34381;
 Mon, 23 Nov 2020 13:50:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCEC-0002gu-W5
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:50:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c950dab6-87f7-4743-9a93-f12026a5e057;
 Mon, 23 Nov 2020 13:50:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91D0CAC0C;
 Mon, 23 Nov 2020 13:49:59 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khCEC-0002gu-W5
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:50:01 +0000
X-Inumbo-ID: c950dab6-87f7-4743-9a93-f12026a5e057
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c950dab6-87f7-4743-9a93-f12026a5e057;
	Mon, 23 Nov 2020 13:50:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139399; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2MYR1tCC68AyvWxk3zsBK94D4ktPHWPLPWkA84rnIpc=;
	b=SzOSC2Zx2T80Yye7d/v+73IUNfdgr1+F0HyTNd7/BGDuV69JGDLQwfrbh0WS9ST9/Mr8Xy
	ywQb5f8WXrdYh1M9W5ULLn9gM/0WUtPr1+5/XVD84PUE0DRiIHDVb/QdZuzrMuXu75k6sA
	O3Qh4VF4e2DwTjVISmyTIeyP+k0fmbw=
Subject: Ping: [PATCH 0/5] x86/PV: memory management consistency and minor
 relaxations
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Message-ID: <5534e9bc-7c89-53ba-1459-4064c209584d@suse.com>
Date: Mon, 23 Nov 2020 14:49:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <10a01f61-197b-7df4-192d-917fe135df70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.11.2020 11:54, Jan Beulich wrote:
> Especially the latter three patches provide only small possible
> gains, from all I can tell. I nevertheless wanted to put up the
> entire set for consideration.
> 
> 1: consistently inline {,un}adjust_guest_l<N>e()
> 2: fold redundant calls to adjust_guest_l<N>e()
> 3: _PAGE_RW changes may take fast path of mod_l[234]_entry()
> 4: restrict TLB flushing after mod_l[234]_entry()
> 5: avoid TLB flushing after mod_l3_entry()

Ping?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:56:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:56:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34389.65399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCKL-0003bk-EH; Mon, 23 Nov 2020 13:56:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34389.65399; Mon, 23 Nov 2020 13:56:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCKL-0003bd-BH; Mon, 23 Nov 2020 13:56:21 +0000
Received: by outflank-mailman (input) for mailman id 34389;
 Mon, 23 Nov 2020 13:56:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCKJ-0003bY-Tf
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:56:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8d7b347-ff7f-458d-bc3d-98884cd39800;
 Mon, 23 Nov 2020 13:56:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69C67ABCE;
 Mon, 23 Nov 2020 13:56:17 +0000 (UTC)
X-Inumbo-ID: a8d7b347-ff7f-458d-bc3d-98884cd39800
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139777; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fkAx5Sa8Xg9v4MjlIS31hFT/eN0ekMNmGsh79spZ71g=;
	b=ZsMCYQFRvZkFNbAq8Yitzckfs6K4FM2bTbKYQ+7qkERTCNkV9YoM0uHQZc/fAfqqMd9zgF
	JOpMUXVG5DUNLw8N7CLGbZxmjSTBkerorWZAl+xZh6V5mneKl+WwbsL6erzDNyWMeXMpKS
	rZV0attO4KIG5eT9oDeJla3TcVmIckA=
Subject: Ping: [PATCH] x86/PV: conditionally avoid raising #GP for early guest
 MSR accesses
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7e69db81-cee7-3c7b-be64-4f5ff50fbe9c@suse.com>
 <cf814663-0319-6a30-f3a2-dc43432eedb1@citrix.com>
 <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
Message-ID: <b58ed34b-2850-342f-eb80-74ab84dc9332@suse.com>
Date: Mon, 23 Nov 2020 14:56:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cf24a63e-afe9-be6a-3ab9-cc65e19a7a0f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.11.2020 11:50, Jan Beulich wrote:
> On 03.11.2020 18:31, Andrew Cooper wrote:
>> On 03/11/2020 17:06, Jan Beulich wrote:
>>> Prior to 4.15, Linux, when running in PV mode, did not install a #GP
>>> handler early enough to cover, for example, the rdmsrl_safe() of
>>> MSR_K8_TSEG_ADDR in bsp_init_amd() (not to speak of the unguarded read
>>> of MSR_K7_HWCR later in the same function). The respective change
>>> (42b3a4cb5609 "x86/xen: Support early interrupts in xen pv guests") was
>>> backported to 4.14, but no further - presumably because it wasn't
>>> really easy due to other dependencies.
>>>
>>> Therefore, to prevent our change in the handling of guest MSR accesses
>>> to render PV Linux 4.13 and older unusable on at least AMD systems, make
>>> the raising of #GP on these paths conditional upon the guest having
>>> installed a handler. Producing zero for reads and discarding writes
>>> isn't necessarily correct and may trip code trying to detect presence of
>>> MSRs early, but since such detection logic won't work without a #GP
>>> handler anyway, this ought to be a fair workaround.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> I appreciate that we probably have to do something, but I don't think
>> this is a wise move.
> 
> I wouldn't call it wise either, but I'm afraid something along
> these lines is necessary.
> 
>> Linux is fundamentally buggy.  It is deliberately looking for a
>> potential #GP fault given its use of rdmsrl_safe().  The reason this bug
>> stayed hidden for so long was as a consequence of Xen's inappropriate
>> MSR handling for guests, and the reasons for changing Xen's behaviour
>> still stand.
> 
> I agree.
> 
>> This change, in particular, does not apply to any explicitly handled
>> MSRs, and therefore is not a comprehensive fix.
> 
> But it's intentional that this deals with the situation in a
> generic way, not on a per-MSR basis. If we did as you suggest
> further down, we'd have to audit all Linux versions up to 4.14
> for similar issues with other MSRs. I don't think this would
> be a practical thing to do, and I also don't think that leaving
> things as they are until we have concrete reports of problems
> is a viable option either.
> 
> Adding explicit handling for the two offending MSRs (and any
> possible further ones we discover) would imo only be to avoid
> issuing the respective log messages.
> 
>>   Nor is it robust to
>> someone adding code to explicitly handle the impacted MSRs at a later
>> date (which we are likely to need to do for HWCR), and which would
>> reintroduce this failure to boot.
> 
> I'm afraid I don't understand. Looking at the two functions
> the patch alters, only X86EMUL_OKAY is used in return statements
> other than the final one. If this model is to be followed by
> future additions (which I think it ought to be; perhaps we
> should add comments to this effect), the code introduced here
> will take care of the situation nevertheless.
> 
>> We should have the impacted MSRs handled explicitly, with a note stating
>> this was a bug in Linux 4.14 and older.  We already have workarounds for
>> similar bugs in Windows, and it also gives us a timeline for eventually
>> removing support for obsolete workarounds, rather than having a "now and
>> in the future, we'll explicitly tolerate broken PV behaviour for one bug
>> back in ancient Linux".
> 
> Comparing with Windows isn't very helpful; the patch here is
> specifically about PV, and would help other OSes as well in
> case they would have missed setting up exceptions early in
> just the PV-on-Xen case. For the HVM case I'd indeed rather
> see us go the route we've gone for Windows, if need be.

As can be seen from this reply, we're not in agreement on what to
do here. But we need to do something. I'm not sure how to get
discussions like this one unstuck ...

Besides this suggestion of yours, I also continue to have
trouble seeing what good it will do to record an exception for
injection into a guest when we know it hasn't installed a handler
yet.
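
To make the trade-off concrete, the proposed behaviour can be sketched
as a toy model (hypothetical names and structure; not the actual Xen
code, which hangs off the real guest_rdmsr()/guest_wrmsr() paths):

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the proposed workaround: only raise #GP for an MSR
 * access without explicit handling once the PV guest has installed a
 * #GP handler; before that, reads yield zero and writes are dropped. */
struct pv_vcpu {
    bool gp_handler_installed;   /* has the guest registered a #GP handler? */
};

/* Returns true if #GP should be injected; otherwise completes the
 * read by storing zero into *val. */
static bool unhandled_msr_read(const struct pv_vcpu *v, uint32_t msr,
                               uint64_t *val)
{
    (void)msr;                   /* any MSR lacking explicit handling */
    if (v->gp_handler_installed)
        return true;             /* guest can cope: raise #GP */
    *val = 0;                    /* early boot: read as zero */
    return false;
}

/* Writes follow the same rule: discard instead of faulting early on. */
static bool unhandled_msr_write(const struct pv_vcpu *v, uint32_t msr,
                                uint64_t val)
{
    (void)msr;
    (void)val;
    return v->gp_handler_installed;  /* true => raise #GP */
}
```

This mirrors the trade-off described in the patch description: reading
as zero isn't architecturally correct, but MSR-probing code running
without a #GP handler couldn't have worked anyway.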

Jan

> There's one adjustment to the logic here that I've been
> considering, but I'm still undecided due to the downsides:
> Without exposing the value, we could make the decision to zap
> the #GP dependent upon us being able to read the MSR.
> 
> The other possible adjustment would be to avoid issuing two
> log messages for the same operation (affecting debug builds
> only). But the code structure (which isn't really consistent
> about when the pre-existing message would get issued)
> doesn't directly lend itself to such an adjustment without
> altering the behavior for some of the MSRs explicitly
> handled.
> 
> As a tangent, while discussing this situation, please let's
> not forget about this code in Linux:
> 
> static u64 xen_read_msr(unsigned int msr)
> {
> 	/*
> 	 * This will silently swallow a #GP from RDMSR.  It may be worth
> 	 * changing that.
> 	 */
> 	int err;
> 
> 	return xen_read_msr_safe(msr, &err);
> }
> 
> static void xen_write_msr(unsigned int msr, unsigned low, unsigned high)
> {
> 	/*
> 	 * This will silently swallow a #GP from WRMSR.  It may be worth
> 	 * changing that.
> 	 */
> 	xen_write_msr_safe(msr, low, high);
> }
> 
> Imo this "silent swallowing" has always been the wrong thing
> to do, and hence ought to be dropped. Of course right now it
> saves the kernel from dying on the HWCR read.
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 13:58:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 13:58:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34395.65412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCM8-0003l9-R6; Mon, 23 Nov 2020 13:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34395.65412; Mon, 23 Nov 2020 13:58:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCM8-0003l2-Nj; Mon, 23 Nov 2020 13:58:12 +0000
Received: by outflank-mailman (input) for mailman id 34395;
 Mon, 23 Nov 2020 13:58:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCM7-0003kw-Ri
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 13:58:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2e01039-815e-46a7-9af3-aec410442773;
 Mon, 23 Nov 2020 13:58:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4F319ABE4;
 Mon, 23 Nov 2020 13:58:10 +0000 (UTC)
X-Inumbo-ID: a2e01039-815e-46a7-9af3-aec410442773
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606139890; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/lYFTUsX+0/pVFdYdfVYe2ygcWjaAE030Av5jYiLRLo=;
	b=jz6QdI5Gq1MJXVHRM7slJ0g3kxMEzsYtmzRJeCKv69fzH1Mwc2nkgm2Jvk9CLx7C2m+/k/
	gl3bOz8P55EqBKd7KlEF2Xi3JFutARu9bmxNUt6ZAX1gmDSiD8025681w/YSf6fnliNTa+
	C1y5A2qjsSCnd5taDaaoPCV+hJuqLWY=
Subject: Re: [PATCH] libxg: don't use max policy in xc_cpuid_xend_policy()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <4fa05759-24ac-5ff3-3db9-94537f6be95d@suse.com>
 <20201106155839.vnhdqcptbpkbzfly@liuwe-devbox-debian-v2>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bab94d0a-89e8-18d8-9098-e30781c3a2e9@suse.com>
Date: Mon, 23 Nov 2020 14:58:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201106155839.vnhdqcptbpkbzfly@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.2020 16:58, Wei Liu wrote:
> On Thu, Nov 05, 2020 at 04:56:53PM +0100, Jan Beulich wrote:
>> Using max undermines the separation between default and max. For example,
>> turning off AVX512F on an MPX-capable system silently turns on MPX,
>> despite this not being part of the default policy anymore. Since the
>> information is used only for determining what to convert 'x' to (but not
>> to e.g. validate '1' settings), the effect of this change is identical
>> for guests with (suitable) "cpuid=" settings to that of the changes
>> separating default from max and then converting (e.g.) MPX from being
>> part of default to only being part of max for guests without (affected)
>> "cpuid=" settings.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I will defer this to Andrew.

Andrew?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:00:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:00:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34403.65424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCOd-0004fV-A2; Mon, 23 Nov 2020 14:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34403.65424; Mon, 23 Nov 2020 14:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCOd-0004fO-5U; Mon, 23 Nov 2020 14:00:47 +0000
Received: by outflank-mailman (input) for mailman id 34403;
 Mon, 23 Nov 2020 14:00:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCOc-0004fJ-4z
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:00:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ecaf108-f42d-4e10-9de6-853d9cccf809;
 Mon, 23 Nov 2020 14:00:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A2A39ABCE;
 Mon, 23 Nov 2020 14:00:44 +0000 (UTC)
X-Inumbo-ID: 9ecaf108-f42d-4e10-9de6-853d9cccf809
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606140044; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T7xEL+nZ6ORkadDdGG1Wgx62xuWkiAD7wj8c6vYeF/g=;
	b=uFUYn8TWtNYLMiWFU7teWNOxFsfmW9bH+p6HcgxO+TP2NuWYO0ktIMamMtemadL+Ds03nx
	ujQEX39mbyPYSyxGIL6oeGnk49TuiM5iuU4nLyxMRpHuOuDkyilippiro+7QHa3xaLaB3i
	xOmfZFYcyNWu0t/7ze5JAhosR9cMgyo=
Subject: Ping: [PATCH v2 6/9] x86/p2m: avoid unnecessary calls of
 write_p2m_entry_pre() hook
From: Jan Beulich <jbeulich@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4b63025f-164c-2e93-3d54-7a7f145ad046@suse.com>
 <3386a823-5560-9cf3-5711-219d5bd0e54e@suse.com>
Message-ID: <775760fa-03c8-5c2e-317b-e89e5b8d789a@suse.com>
Date: Mon, 23 Nov 2020 15:00:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <3386a823-5560-9cf3-5711-219d5bd0e54e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.11.2020 10:37, Jan Beulich wrote:
> When shattering a large page, we first construct the new page table page
> and only then hook it up. The "pre" hook in this case does nothing, for
> the page starting out all blank. Avoid 512 calls into shadow code in
> this case by passing in INVALID_GFN, indicating the page being updated
> is (not yet) associated with any GFN. (The alternative to this change
> would be to actually pass in a correct GFN, which can't be all the same
> on every loop iteration.)
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.

Ping?

Thanks, Jan

> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -134,7 +134,7 @@ static int write_p2m_entry(struct p2m_do
>  
>          paging_lock(d);
>  
> -        if ( p2m->write_p2m_entry_pre )
> +        if ( p2m->write_p2m_entry_pre && gfn != gfn_x(INVALID_GFN) )
>              p2m->write_p2m_entry_pre(d, gfn, p, new, level);
>  
>          oflags = l1e_get_flags(*p);
> @@ -290,7 +290,8 @@ p2m_next_level(struct p2m_domain *p2m, v
>          {
>              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>                                       flags);
> -            rc = write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +            rc = write_p2m_entry(p2m, gfn_x(INVALID_GFN), l1_entry + i,
> +                                 new_entry, level);
>              if ( rc )
>              {
>                  unmap_domain_page(l1_entry);
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:05:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:05:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34410.65436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCTS-0004rp-Sn; Mon, 23 Nov 2020 14:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34410.65436; Mon, 23 Nov 2020 14:05:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCTS-0004ri-PC; Mon, 23 Nov 2020 14:05:46 +0000
Received: by outflank-mailman (input) for mailman id 34410;
 Mon, 23 Nov 2020 14:05:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pQc=E5=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khCTQ-0004rd-PF
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:05:44 +0000
Received: from mail-yb1-xb44.google.com (unknown [2607:f8b0:4864:20::b44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 916664b4-7914-47a5-977f-dd2c9181dbd2;
 Mon, 23 Nov 2020 14:05:43 +0000 (UTC)
Received: by mail-yb1-xb44.google.com with SMTP id t33so16054313ybd.0
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 06:05:43 -0800 (PST)
X-Inumbo-ID: 916664b4-7914-47a5-977f-dd2c9181dbd2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=RhHqKgQbKUW77nc47JuvCnp+w8QNxENxQLSt6AHkTqQ=;
        b=uc3VE1PZNnY/Z1NgZXLeWe/Nj5hsoBfQkeeHXaE+d0SDr9xNRMPYxU1o6fpuaiqkgi
         yuFjhawxyOxFbziEfkWs4inb92LCIVTnNTVXAL7657JtY5jUPnHae9XC4JONvfltcDzK
         9TpDS0ylXwfesoyru6or5tLuj2Wgq4fxc0XGG5evkxw7F5K63x1NbbMukm854FcfQLy0
         gnTDe+NWIPcxyPxl6ZwlkcZY1OnasK1C98JFaIzSzrlrdcg6icgY2nCNokwGspTvBpMG
         u0c2fJxhgJsKPBZAzgP85ZG8VhKJUulmNcJ8sZ+phgCZ9U4trQ3IF/NnqsJiuQ1qY+5Q
         UH8Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=RhHqKgQbKUW77nc47JuvCnp+w8QNxENxQLSt6AHkTqQ=;
        b=IgZ/ORoME/QGAsRjzZmR1CBMJ/clqW4c9oiYxD5BDQkxL/SIX6Bhkv8qH2r0POv41O
         HD+S9H5u1t/dvOQ41KAqHKQZ/Ul4Ri3Ze5Y7GTD1XWYZjmegLJhYigukJo5DZISgc0Ju
         AddqpnhKBPbqes3VFxBXFRWn7tVl7TrlWKFUekhmpRmcqRusgzOMBUKGAuPDOld2qab3
         C67XkEUBVqPQ18xnJ6Xu9QUgWWBntLb/OKq9/4R+dmM4jAkjiX9/v0y7gmVCMywV9+Gi
         /3E0Kt73cPkd03SN+uDtOuyqbgGfwfbZUp61B1KIYSq1dLClMQseTu/Q4DKstIxyyH2a
         ktPg==
X-Gm-Message-State: AOAM531Cc1hku+sYjRtrQ8qR7b5acLYjwVnyrFYbkgR9FxRNiwgSfzsp
	hchJ1k/GDHGc/y812LpjfHHcjso0ndf7lHvwX3c=
X-Google-Smtp-Source: ABdhPJz9DsZ58e7OIIOr/VE9Xtax3PWaLuFuRyVLpjTsCzIYcuPGWJiVUhGusztX9v02ET+47HU3GtURC6oS5LfC9Lw=
X-Received: by 2002:a5b:40e:: with SMTP id m14mr35121900ybp.33.1606140343388;
 Mon, 23 Nov 2020 06:05:43 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com> <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
In-Reply-To: <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Mon, 23 Nov 2020 15:05:31 +0100
Message-ID: <CANiq72=z+tmuey9wj3Kk7wX5s0hTHpsQdLhAqcOVNrHon6xn5Q@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Finn Thain <fthain@telegraphics.com.au>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
	Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Nov 22, 2020 at 11:54 PM Finn Thain <fthain@telegraphics.com.au> wrote:
>
> We should also take into account optimism about future improvements in
> tooling.

Not sure what you mean here. There is no reliable way to guess what
the intention was with a missing fallthrough, even if you parsed
whitespace and indentation.

> It is if you want to spin it that way.

How is that a "spin"? It is a fact that we won't get *implicit*
fallthrough mistakes anymore (in particular if we make it a hard
error).

> But what we inevitably get is changes like this:
>
>  case 3:
>         this();
> +       break;
>  case 4:
>         hmmm();
>
> Why? Mainly to silence the compiler. Also because the patch author argued
> successfully that they had found a theoretical bug, often in mature code.

If someone changes control flow, that is on them. Every kernel
developer knows what `break` does.

> But is anyone keeping score of the regressions? If unreported bugs count,
> what about unreported regressions?

Introducing `fallthrough` does not change semantics. If you are really
keen, you can always compare the object files, because the generated
code shouldn't change.
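
For concreteness, the explicit annotation style under discussion looks
like this (a minimal sketch with a hypothetical classify() function;
`fallthrough` is defined here directly as the statement attribute that
the kernel's macro expands to on GCC 7+ and Clang):

```c
/* Sketch of explicit fall-through annotation, assuming a GCC 7+ or
 * Clang compiler that supports the fallthrough statement attribute. */
#define fallthrough __attribute__((__fallthrough__))

static int classify(int n)
{
    int score = 0;

    switch (n) {
    case 3:
        score += 10;
        fallthrough;    /* intentional: case 3 also does case 4's work */
    case 4:
        score += 1;     /* without the annotation above, building with
                         * -Wimplicit-fallthrough would warn here */
        break;
    default:
        break;
    }
    return score;
}
```

Compiling with -Wimplicit-fallthrough then flags every case that falls
through without such an annotation, which is exactly what distinguishes
an intentional fall-through from a forgotten `break`.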

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:20:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:20:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34418.65448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khChM-0006Y9-5t; Mon, 23 Nov 2020 14:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34418.65448; Mon, 23 Nov 2020 14:20:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khChM-0006Y2-2v; Mon, 23 Nov 2020 14:20:08 +0000
Received: by outflank-mailman (input) for mailman id 34418;
 Mon, 23 Nov 2020 14:20:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pQc=E5=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khChL-0006UD-19
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:20:07 +0000
Received: from mail-yb1-xb42.google.com (unknown [2607:f8b0:4864:20::b42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4095cb29-3c24-4e18-89b6-b4f387cf8a58;
 Mon, 23 Nov 2020 14:20:06 +0000 (UTC)
Received: by mail-yb1-xb42.google.com with SMTP id v92so16085706ybi.4
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 06:20:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=WUYMqcUnfpAQa1YuH9tQ3ze5bp2bxaoGLXc9Sg/470Y=;
        b=b0LkeT2q71Z3peIccxL7MkU5QadaCN3igdEC89IE4ykmdOxIlhuoo/0+H7pQCoNmlh
         0UX19Z7soasUpz2fDZHX56luUWrH4GLKAJ9K28HwPu9km7qlcvasqfBffaQW+LtXvh6a
         fVP4J8wQFxbi1QWFB10Wsq9dLONxRShLcqQtcaktrZCy3tSRV5R4FOw2MSdgwNuCxNwd
         cKQMyE/jYgmlc9Qm972BZKz9xJaasT5iW6gpZgai8YpCh1sxJNgZFzlfCpv21Fvd7rwb
         akOsznbnFT4mJT95mXFDUPnplTdAJirWAcm8YfzHFRAfOGn9Vk91PuRcq7JipLelDPMB
         VWgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=WUYMqcUnfpAQa1YuH9tQ3ze5bp2bxaoGLXc9Sg/470Y=;
        b=WEVpTVA0ExJe7F3FkrQ+gV5WOeNpNy7EgvnBfVaqYI1Aritms6j2zKWXZ97rwlqePN
         aQPDn2p1+U9ZSmheUB18SEtcNE6RFE4O0cdOyTgjLRuyDVQL+5PkAckHMzlgNFu7iQvw
         8F66MiIB62VDdTwdnQoLAsGO/Mooe54jf9LIT+C2x2NDJXrHo8tvTNZUY4xqihVJ+aCW
         bVspBgEhyYdB/dqc8pjDOD+016M7ucJooX3TYrUeHabNeNZLLqMmyxVdXVCyRuQwpqac
         dxIwebxtFPf2sBSIc0HpyKJWa5euiVcCGljKbTGqJswJscQ8ix3uVcNu61oHxVln5oqy
         1jQA==
X-Gm-Message-State: AOAM5335RSXeXrz/fKugIfPdsH0VtC8/mudHPYnUrSZ+Hl0R7G0LckFT
	hAdqUReZHlMQTFEVapWZlwoybRCN3mec4N72yaU=
X-Google-Smtp-Source: ABdhPJyiJqjBIpEzWlk5pyqpoGG3+KpoWdKnlyza2YA6ODhXnRhATytwh5Bq+iGOzNqc5gs+zuqHC8iB1cjfDTXU/ik=
X-Received: by 2002:a25:bcc7:: with SMTP id l7mr32380985ybm.115.1606141205830;
 Mon, 23 Nov 2020 06:20:05 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com> <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
In-Reply-To: <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Mon, 23 Nov 2020 15:19:55 +0100
Message-ID: <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Nov 22, 2020 at 11:36 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> Well, it seems to be three years of someone's time plus the maintainer
> review time and series disruption of nearly a thousand patches.  Let's
> be conservative and assume the producer worked about 30% on the series
> and it takes about 5-10 minutes per patch to review, merge and for
> others to rework existing series.  So let's say it's cost a person year
> of a relatively junior engineer producing the patches and say 100h of
> review and application time.  The latter is likely the big ticket item
> because it's what we have in least supply in the kernel (even though
> it's 20x vs the producer time).

How are you arriving at such numbers? It is a total of ~200 trivial lines.

> It's not about the risk of the changes it's about the cost of
> implementing them.  Even if you discount the producer time (which
> someone gets to pay for, and if I were the engineering manager, I'd be
> unhappy about), the review/merge/rework time is pretty significant in
> exchange for six minor bug fixes.  Fine, when a new compiler warning
> comes along it's certainly reasonable to see if we can benefit from it
> and the fact that the compiler people think it's worthwhile is enough
> evidence to assume this initially.  But at some point you have to ask
> whether that assumption is supported by the evidence we've accumulated
> over the time we've been using it.  And if the evidence doesn't support
> it perhaps it is time to stop the experiment.

Maintainers routinely review 1-line trivial patches, not to mention
internal API changes, etc.

If some company does not want to pay for that, that's fine, but they
don't get to be maintainers and claim `Supported`.

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34424.65459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCir-0006gW-Mk; Mon, 23 Nov 2020 14:21:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34424.65459; Mon, 23 Nov 2020 14:21:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCir-0006gP-Jc; Mon, 23 Nov 2020 14:21:41 +0000
Received: by outflank-mailman (input) for mailman id 34424;
 Mon, 23 Nov 2020 14:21:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCiq-0006gA-Im
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:21:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d513835-2e39-4bab-9ec9-afc2661fe1a7;
 Mon, 23 Nov 2020 14:21:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B225DAF54;
 Mon, 23 Nov 2020 14:21:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141294; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ZoKvCu5J16lRfvmkYOYXSTPWjH5bbUhuTfmyeH+XFCc=;
	b=ms4yIJIYbnZicKngg8mNcwOmI85jFX+bn2nf41GnObWH5IOOo5Ua9O9W5uFsU2MqW8Wmg8
	rxXJCx167MXz+WJCemb3tOIqZ0J8/NmjuWFi8JsbcXL7bJ5EJ0tj/mB/0KpfK5DEFdjvBF
	FDynt3bi7e7k0MNQe11UMvwRD+IkjYU=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 00/17] xvmalloc() / x86 xstate area / x86 CPUID / AMX
 beginnings
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Date: Mon, 23 Nov 2020 15:21:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While the sub-groups may seem somewhat unrelated, there are
inter-dependencies (logical, functional, or contextual) between them.
The majority of the patches are new in v2; one previously standalone
patch has been integrated here.

01: mm: check for truncation in vmalloc_type()
02: mm: introduce xvmalloc() et al and use for grant table allocations
03: x86/xstate: use xvzalloc() for save area allocation
04: x86/xstate: re-size save area when CPUID policy changes
05: x86/xstate: re-use valid_xcr0() for boot-time checks
06: x86/xstate: drop xstate_offsets[] and xstate_sizes[]
07: x86/xstate: replace xsave_cntxt_size and drop XCNTXT_MASK
08: x86/xstate: avoid accounting for unsupported components
09: x86: use xvmalloc() for extended context buffer allocations
10: x86/xstate: enable AMX components
11: x86/CPUID: adjust extended leaves out of range clearing
12: x86/CPUID: shrink max_{,sub}leaf fields according to actual leaf contents
13: x86/CPUID: move bounding of max_{,sub}leaf fields to library code
14: x86/CPUID: enable AMX leaves
15: x86emul: introduce X86EMUL_FPU_tile
16: x86emul: support TILERELEASE
17: x86emul: support {LD,ST}TILECFG

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34429.65471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCkK-0006qE-2l; Mon, 23 Nov 2020 14:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34429.65471; Mon, 23 Nov 2020 14:23:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCkJ-0006q7-Vx; Mon, 23 Nov 2020 14:23:11 +0000
Received: by outflank-mailman (input) for mailman id 34429;
 Mon, 23 Nov 2020 14:23:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCkI-0006pI-DW
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:23:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e20a46a5-a302-45d1-89ed-37c5c2fcae21;
 Mon, 23 Nov 2020 14:23:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B75F5AC23;
 Mon, 23 Nov 2020 14:23:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141388; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2fPG7s3RVZ6z8F+rd3NAtEN5uxWJlCimqeKe7pK6kmg=;
	b=mKy0e5uMoZ5S3ZAFw6jSf7ANERrttbAHKAj2dboOr2KIcyUexOsj/absppgkhHaQA7raXT
	k+Q20HfPkX+DKY4qSe5FplL5C4yZaX2SXr2JYha8AnFwvNADt9vlFWVJmg370IzUuBy1Qs
	y7wR1grjwgab4b7t4t5ip6Ah06XYVAQ=
Subject: [PATCH v2 01/17] mm: check for truncation in vmalloc_type()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <5adb0089-ef9b-cfe2-db0d-7142eccc914f@suse.com>
Date: Mon, 23 Nov 2020 15:23:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While it's currently implied by the checking xmalloc_array() does,
let's make this more explicit in the function itself. As a result,
neither of the involved local variables needs to have size_t type
anymore. This brings them in line with the rest of the code in this
file.

Requested-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -242,13 +242,15 @@ void vunmap(const void *va)
 static void *vmalloc_type(size_t size, enum vmap_region type)
 {
     mfn_t *mfn;
-    size_t pages, i;
+    unsigned int i, pages = PFN_UP(size);
     struct page_info *pg;
     void *va;
 
     ASSERT(size);
 
-    pages = PFN_UP(size);
+    if ( PFN_DOWN(size) > pages )
+        return NULL;
+
     mfn = xmalloc_array(mfn_t, pages);
     if ( mfn == NULL )
         return NULL;



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:23:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:23:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34434.65484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCkp-0006wr-Co; Mon, 23 Nov 2020 14:23:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34434.65484; Mon, 23 Nov 2020 14:23:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCkp-0006wk-9c; Mon, 23 Nov 2020 14:23:43 +0000
Received: by outflank-mailman (input) for mailman id 34434;
 Mon, 23 Nov 2020 14:23:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCko-0006we-PV
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:23:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a41489f-005d-4e85-af81-b52ce8d1ccbb;
 Mon, 23 Nov 2020 14:23:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7797DAFC6;
 Mon, 23 Nov 2020 14:23:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141420; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=npXvc83H7VL7EyMc4+byhWI4gtJoq7f1VRBIByZhCIc=;
	b=aXiQO0b53zHtBPUt4MZqazy/x+S1ffiv8ja8MUl3UGecbfKgMFH4Rpmw+cJQXW+cx09Wz+
	TytS+zpoQNZ5Xzf1x2bsB5KB2+Wb3kxhyrEmF+686o8CeoeP3errPRzPyW4XoXQRbVSKz7
	bbia4KD9iD8X5DdQpp1yNoixEjM5Qwg=
Subject: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
Date: Mon, 23 Nov 2020 15:23:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

All of the array allocations in grant_table_init() can exceed a page's
worth of memory, which xmalloc()-based interfaces aren't really suitable
for after boot. We also don't need any of these allocations to be
physically contiguous. Introduce interfaces that dynamically switch
between xmalloc() et al and vmalloc() et al, based on requested size,
and use them instead.

All the wrappers in the new header get cloned mostly verbatim from
xmalloc.h, with the sole adjustment to switch unsigned long to size_t
for sizes and to unsigned int for alignments.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Actually edit a copy-and-pasted comment in xvmalloc.h which was
    meant to be edited from the beginning.
---
I'm unconvinced by the mention of "physically contiguous" in the
comment at the top of the new header: I don't think xmalloc() provides
such a guarantee. Any use assuming so would look (latently) broken to
me.

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -37,7 +37,7 @@
 #include <xen/iommu.h>
 #include <xen/paging.h>
 #include <xen/keyhandler.h>
-#include <xen/vmap.h>
+#include <xen/xvmalloc.h>
 #include <xen/nospec.h>
 #include <xsm/xsm.h>
 #include <asm/flushtlb.h>
@@ -1925,27 +1925,28 @@ int grant_table_init(struct domain *d, i
     d->grant_table = gt;
 
     /* Active grant table. */
-    gt->active = xzalloc_array(struct active_grant_entry *,
-                               max_nr_active_grant_frames(gt));
+    gt->active = xvzalloc_array(struct active_grant_entry *,
+                                max_nr_active_grant_frames(gt));
     if ( gt->active == NULL )
         goto out;
 
     /* Tracking of mapped foreign frames table */
     if ( gt->max_maptrack_frames )
     {
-        gt->maptrack = vzalloc(gt->max_maptrack_frames * sizeof(*gt->maptrack));
+        gt->maptrack = xvzalloc_array(struct grant_mapping *,
+                                      gt->max_maptrack_frames);
         if ( gt->maptrack == NULL )
             goto out;
     }
 
     /* Shared grant table. */
-    gt->shared_raw = xzalloc_array(void *, gt->max_grant_frames);
+    gt->shared_raw = xvzalloc_array(void *, gt->max_grant_frames);
     if ( gt->shared_raw == NULL )
         goto out;
 
     /* Status pages for grant table - for version 2 */
-    gt->status = xzalloc_array(grant_status_t *,
-                               grant_to_status_frames(gt->max_grant_frames));
+    gt->status = xvzalloc_array(grant_status_t *,
+                                grant_to_status_frames(gt->max_grant_frames));
     if ( gt->status == NULL )
         goto out;
 
@@ -3870,19 +3871,19 @@ grant_table_destroy(
 
     for ( i = 0; i < nr_grant_frames(t); i++ )
         free_xenheap_page(t->shared_raw[i]);
-    xfree(t->shared_raw);
+    xvfree(t->shared_raw);
 
     for ( i = 0; i < nr_maptrack_frames(t); i++ )
         free_xenheap_page(t->maptrack[i]);
-    vfree(t->maptrack);
+    xvfree(t->maptrack);
 
     for ( i = 0; i < nr_active_grant_frames(t); i++ )
         free_xenheap_page(t->active[i]);
-    xfree(t->active);
+    xvfree(t->active);
 
     for ( i = 0; i < nr_status_frames(t); i++ )
         free_xenheap_page(t->status[i]);
-    xfree(t->status);
+    xvfree(t->status);
 
     xfree(t);
     d->grant_table = NULL;
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -7,6 +7,7 @@
 #include <xen/spinlock.h>
 #include <xen/types.h>
 #include <xen/vmap.h>
+#include <xen/xvmalloc.h>
 #include <asm/page.h>
 
 static DEFINE_SPINLOCK(vm_lock);
@@ -301,11 +302,29 @@ void *vzalloc(size_t size)
     return p;
 }
 
-void vfree(void *va)
+static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
 {
-    unsigned int i, pages;
+    unsigned int i;
     struct page_info *pg;
     PAGE_LIST_HEAD(pg_list);
+
+    ASSERT(pages);
+
+    for ( i = 0; i < pages; i++ )
+    {
+        pg = vmap_to_page(va + i * PAGE_SIZE);
+        ASSERT(pg);
+        page_list_add(pg, &pg_list);
+    }
+    vunmap(va);
+
+    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
+        free_domheap_page(pg);
+}
+
+void vfree(void *va)
+{
+    unsigned int pages;
     enum vmap_region type = VMAP_DEFAULT;
 
     if ( !va )
@@ -317,18 +336,71 @@ void vfree(void *va)
         type = VMAP_XEN;
         pages = vm_size(va, type);
     }
-    ASSERT(pages);
 
-    for ( i = 0; i < pages; i++ )
+    _vfree(va, pages, type);
+}
+
+void xvfree(void *va)
+{
+    unsigned int pages = vm_size(va, VMAP_DEFAULT);
+
+    if ( pages )
+        _vfree(va, pages, VMAP_DEFAULT);
+    else
+        xfree(va);
+}
+
+void *_xvmalloc(size_t size, unsigned int align)
+{
+    ASSERT(align <= PAGE_SIZE);
+    return size <= PAGE_SIZE ? _xmalloc(size, align) : vmalloc(size);
+}
+
+void *_xvzalloc(size_t size, unsigned int align)
+{
+    ASSERT(align <= PAGE_SIZE);
+    return size <= PAGE_SIZE ? _xzalloc(size, align) : vzalloc(size);
+}
+
+void *_xvrealloc(void *va, size_t size, unsigned int align)
+{
+    size_t pages = vm_size(va, VMAP_DEFAULT);
+    void *ptr;
+
+    ASSERT(align <= PAGE_SIZE);
+
+    if ( !pages )
     {
-        struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);
+        if ( size <= PAGE_SIZE )
+            return _xrealloc(va, size, align);
 
-        ASSERT(page);
-        page_list_add(page, &pg_list);
+        ptr = vmalloc(size);
+        if ( ptr && va && va != ZERO_BLOCK_PTR )
+        {
+            /*
+             * xmalloc-based allocations up to PAGE_SIZE don't cross page
+             * boundaries. Therefore, without needing to know the exact
+             * prior allocation size, simply copy the entire tail of the
+             * page containing the earlier allocation.
+             */
+            memcpy(ptr, va, PAGE_SIZE - PAGE_OFFSET(va));
+            xfree(va);
+        }
+    }
+    else if ( pages == PFN_UP(size) )
+        ptr = va;
+    else
+    {
+        ptr = _xvmalloc(size, align);
+        if ( ptr )
+        {
+            memcpy(ptr, va, min(size, pages << PAGE_SHIFT));
+            vfree(va);
+        }
+        else if ( pages > PFN_UP(size) )
+            ptr = va;
     }
-    vunmap(va);
 
-    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
-        free_domheap_page(pg);
+    return ptr;
 }
 #endif
--- /dev/null
+++ b/xen/include/xen/xvmalloc.h
@@ -0,0 +1,73 @@
+
+#ifndef __XVMALLOC_H__
+#define __XVMALLOC_H__
+
+#include <xen/cache.h>
+#include <xen/types.h>
+
+/*
+ * Xen malloc/free-style interface for allocations possibly exceeding a page's
+ * worth of memory, as long as there's no need to have physically contiguous
+ * memory allocated.  These should be used in preference to xmalloc() et al
+ * whenever the size is not known to be constrained to at most a single page.
+ */
+
+/* Allocate space for typed object. */
+#define xvmalloc(_type) ((_type *)_xvmalloc(sizeof(_type), __alignof__(_type)))
+#define xvzalloc(_type) ((_type *)_xvzalloc(sizeof(_type), __alignof__(_type)))
+
+/* Allocate space for array of typed objects. */
+#define xvmalloc_array(_type, _num) \
+    ((_type *)_xvmalloc_array(sizeof(_type), __alignof__(_type), _num))
+#define xvzalloc_array(_type, _num) \
+    ((_type *)_xvzalloc_array(sizeof(_type), __alignof__(_type), _num))
+
+/* Allocate space for a structure with a flexible array of typed objects. */
+#define xvzalloc_flex_struct(type, field, nr) \
+    ((type *)_xvzalloc(offsetof(type, field[nr]), __alignof__(type)))
+
+#define xvmalloc_flex_struct(type, field, nr) \
+    ((type *)_xvmalloc(offsetof(type, field[nr]), __alignof__(type)))
+
+/* Re-allocate space for a structure with a flexible array of typed objects. */
+#define xvrealloc_flex_struct(ptr, field, nr)                          \
+    ((typeof(ptr))_xvrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \
+                             __alignof__(typeof(*(ptr)))))
+
+/* Allocate untyped storage. */
+#define xvmalloc_bytes(_bytes) _xvmalloc(_bytes, SMP_CACHE_BYTES)
+#define xvzalloc_bytes(_bytes) _xvzalloc(_bytes, SMP_CACHE_BYTES)
+
+/* Free any of the above. */
+extern void xvfree(void *);
+
+/* Free an allocation, and zero the pointer to it. */
+#define XVFREE(p) do { \
+    xvfree(p);         \
+    (p) = NULL;        \
+} while ( false )
+
+/* Underlying functions */
+extern void *_xvmalloc(size_t size, unsigned int align);
+extern void *_xvzalloc(size_t size, unsigned int align);
+extern void *_xvrealloc(void *ptr, size_t size, unsigned int align);
+
+static inline void *_xvmalloc_array(
+    size_t size, unsigned int align, unsigned long num)
+{
+    /* Check for overflow. */
+    if ( size && num > UINT_MAX / size )
+        return NULL;
+    return _xvmalloc(size * num, align);
+}
+
+static inline void *_xvzalloc_array(
+    size_t size, unsigned int align, unsigned long num)
+{
+    /* Check for overflow. */
+    if ( size && num > UINT_MAX / size )
+        return NULL;
+    return _xvzalloc(size * num, align);
+}
+
+#endif /* __XVMALLOC_H__ */
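As an aside on the interface added above: the overflow guard used by _xvmalloc_array() can be exercised standalone. The sketch below is not part of the patch; it uses plain malloc() in place of _xvmalloc() and drops the alignment parameter, keeping only the UINT_MAX-bounded check from the header.

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/*
 * Sketch of the overflow guard in _xvmalloc_array() above, with plain
 * malloc() standing in for _xvmalloc().  An allocation whose total byte
 * count would exceed UINT_MAX is refused up front, before size * num
 * can wrap.
 */
static void *xvmalloc_array_sketch(size_t size, unsigned long num)
{
    /* Check for overflow. */
    if ( size && num > UINT_MAX / size )
        return NULL;
    return malloc(size * num);
}
```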



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:28:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:28:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34443.65496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCoi-00078y-Uc; Mon, 23 Nov 2020 14:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34443.65496; Mon, 23 Nov 2020 14:27:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCoi-00078r-RL; Mon, 23 Nov 2020 14:27:44 +0000
Received: by outflank-mailman (input) for mailman id 34443;
 Mon, 23 Nov 2020 14:27:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCoh-00078m-Ge
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:27:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f5b63af-c3c9-47a2-9db1-b01128e29233;
 Mon, 23 Nov 2020 14:27:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01043AC23;
 Mon, 23 Nov 2020 14:27:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141662; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CX7ycSIbHLxk9/beplw9uWXkdYfiG5DPYy83G5AM4JE=;
	b=MKmQG8/AN3wFaxoMGrWE/Cw/M/LHuRlZbDGfLe55vl/6C2/05eiid8YJZx0rWMW5m9aQZF
	TWKdupSzEbgOJ45dAeCZRiCRaSJUNsMp8gkBHnZC23L1YJqd2LAco3j/8OcaRSJNivTpUp
	6R9f2qGeWbJaY8jDbT9XTY7oln1wh44=
Subject: [PATCH v2 03/17] x86/xstate: use xvzalloc() for save area allocation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <2f79bfd4-779f-de9f-2a6c-a393ce3cdfc8@suse.com>
Date: Mon, 23 Nov 2020 15:27:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This is in preparation for the area size exceeding a page's worth of
space, as will happen with AMX as well as Architectural LBR.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -8,6 +8,7 @@
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
+#include <xen/xvmalloc.h>
 #include <asm/current.h>
 #include <asm/processor.h>
 #include <asm/hvm/support.h>
@@ -522,7 +523,7 @@ int xstate_alloc_save_area(struct vcpu *
 
     /* XSAVE/XRSTOR requires the save area be 64-byte-boundary aligned. */
     BUILD_BUG_ON(__alignof(*save_area) < 64);
-    save_area = _xzalloc(size, __alignof(*save_area));
+    save_area = _xvzalloc(size, __alignof(*save_area));
     if ( save_area == NULL )
         return -ENOMEM;
 
@@ -543,8 +544,7 @@ int xstate_alloc_save_area(struct vcpu *
 
 void xstate_free_save_area(struct vcpu *v)
 {
-    xfree(v->arch.xsave_area);
-    v->arch.xsave_area = NULL;
+    XVFREE(v->arch.xsave_area);
 }
 
 static unsigned int _xstate_ctxt_size(u64 xcr0)
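The XVFREE() now used in xstate_free_save_area() both frees the allocation and zeroes the pointer, so a stale pointer cannot be freed or dereferenced again. Outside of Xen the pattern can be modeled with free() standing in for xvfree():

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Model of Xen's XVFREE(): free the allocation and NULL the pointer in
 * one step.  A second invocation then degenerates to the harmless
 * free(NULL).  free() stands in for xvfree() here.
 */
#define XVFREE(p) do { \
    free(p);           \
    (p) = NULL;        \
} while ( 0 )
```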



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:28:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34447.65508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCpE-0007FP-7v; Mon, 23 Nov 2020 14:28:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34447.65508; Mon, 23 Nov 2020 14:28:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCpE-0007FI-3p; Mon, 23 Nov 2020 14:28:16 +0000
Received: by outflank-mailman (input) for mailman id 34447;
 Mon, 23 Nov 2020 14:28:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCpC-0007FC-I9
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:28:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60da0572-9c1a-4bcb-84db-d55a58a12e0f;
 Mon, 23 Nov 2020 14:28:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1BFDBB019;
 Mon, 23 Nov 2020 14:28:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141692; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/nn+/ySe+jWNSGLmxXglGChmECf472LynLy5PEksSzo=;
	b=LLjyypCvAyLtSGSuWSwZUtMucDR/PVey7td52kwLuuSDMMqkSK6krPser7vFma68i3UgR0
	pvjRLfKK3lt+Wwy69Q3UvdhipFErnCkgPMhj+cFVTynKD3K0vitA8B+XBe9hcjZp0sEZGn
	ZKysPT3NmqFcpK7Tu5w9/urcQhDv4uY=
Subject: [PATCH v2 04/17] x86/xstate: re-size save area when CPUID policy
 changes
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <f7aa9ca4-788e-ca18-2dd8-5f9b5a61661f@suse.com>
Date: Mon, 23 Nov 2020 15:28:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

vCPU-s get maximum-size areas allocated initially. Hidden (and in
particular default-off) features may allow a smaller area to
suffice.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Use 1ul instead of 1ull.
---
This could be shrunk further if we used XSAVEC (or really used XSAVES),
as then the holes between components wouldn't need covering as well. But
since we currently use neither of the two in practice, this would
require more work than just adding the alternative size calculation here.

Seeing that both vcpu_init_fpu() and cpuid_policy_updated() get called
from arch_vcpu_create(), I'm not sure we really need this two-stage
approach - the slightly longer period of time during which
v->arch.xsave_area would remain NULL doesn't look all that problematic.
But since xstate_alloc_save_area() gets called for idle vCPU-s, it has
to stay anyway in some form, so the extra code churn may not be worth
it.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -294,7 +294,21 @@ void update_guest_memory_policy(struct v
     }
 }
 
-void domain_cpu_policy_changed(struct domain *d)
+/*
+ * Called during vcpu construction, and each time the toolstack changes the
+ * CPUID configuration for the domain.
+ */
+static int __must_check cpuid_policy_updated(struct vcpu *v)
+{
+    int rc = xstate_update_save_area(v);
+
+    if ( !rc && is_hvm_vcpu(v) )
+        hvm_cpuid_policy_changed(v);
+
+    return rc;
+}
+
+int domain_cpu_policy_changed(struct domain *d)
 {
     const struct cpuid_policy *p = d->arch.cpuid;
     struct vcpu *v;
@@ -452,13 +466,18 @@ void domain_cpu_policy_changed(struct do
 
     for_each_vcpu ( d, v )
     {
-        cpuid_policy_updated(v);
+        int rc = cpuid_policy_updated(v);
+
+        if ( rc )
+            return rc;
 
         /* If PMU version is zero then the guest doesn't have VPMU */
         if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
              p->basic.pmu_version == 0 )
             vpmu_destroy(v);
     }
+
+    return 0;
 }
 
 #ifndef CONFIG_BIGMEM
@@ -597,7 +616,7 @@ int arch_vcpu_create(struct vcpu *v)
     {
         vpmu_initialise(v);
 
-        cpuid_policy_updated(v);
+        rc = cpuid_policy_updated(v);
     }
 
     return rc;
@@ -841,9 +860,9 @@ int arch_domain_create(struct domain *d,
      */
     d->arch.x87_fip_width = cpu_has_fpu_sel ? 0 : 8;
 
-    domain_cpu_policy_changed(d);
-
-    return 0;
+    rc = domain_cpu_policy_changed(d);
+    if ( !rc )
+        return 0;
 
  fail:
     d->is_dying = DOMDYING_dead;
@@ -2434,16 +2453,6 @@ int domain_relinquish_resources(struct d
     return 0;
 }
 
-/*
- * Called during vcpu construction, and each time the toolstack changes the
- * CPUID configuration for the domain.
- */
-void cpuid_policy_updated(struct vcpu *v)
-{
-    if ( is_hvm_vcpu(v) )
-        hvm_cpuid_policy_changed(v);
-}
-
 void arch_dump_domain_info(struct domain *d)
 {
     paging_dump_domain_info(d);
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -91,7 +91,7 @@ static int update_domain_cpu_policy(stru
     recalculate_cpuid_policy(d);
 
     /* Recalculate relevant dom/vcpu state now the policy has changed. */
-    domain_cpu_policy_changed(d);
+    ret = domain_cpu_policy_changed(d);
 
  out:
     /* Free whichever cpuid/msr structs are not installed in struct domain. */
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -541,6 +541,41 @@ int xstate_alloc_save_area(struct vcpu *
 
     return 0;
 }
+
+int xstate_update_save_area(struct vcpu *v)
+{
+    unsigned int i, size, old;
+    struct xsave_struct *save_area;
+    uint64_t xcr0_max = cpuid_policy_xcr0_max(v->domain->arch.cpuid);
+
+    ASSERT(!is_idle_vcpu(v));
+
+    if ( !cpu_has_xsave )
+        return 0;
+
+    if ( v->arch.xcr0_accum & ~xcr0_max )
+        return -EBUSY;
+
+    for ( size = old = XSTATE_AREA_MIN_SIZE, i = 2; i < xstate_features; ++i )
+    {
+        if ( xcr0_max & (1ul << i) )
+            size = max(size, xstate_offsets[i] + xstate_sizes[i]);
+        if ( v->arch.xcr0_accum & (1ul << i) )
+            old = max(old, xstate_offsets[i] + xstate_sizes[i]);
+    }
+
+    save_area = _xvrealloc(v->arch.xsave_area, size, __alignof(*save_area));
+    if ( !save_area )
+        return -ENOMEM;
+
+    ASSERT(old <= size);
+    memset((void *)save_area + old, 0, size - old);
+
+    v->arch.xsave_area = save_area;
+    v->arch.fpu_ctxt = &v->arch.xsave_area->fpu_sse;
+
+    return 0;
+}
 
 void xstate_free_save_area(struct vcpu *v)
 {
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -78,8 +78,6 @@ void toggle_guest_mode(struct vcpu *);
 /* x86/64: toggle guest page tables between kernel and user modes. */
 void toggle_guest_pt(struct vcpu *);
 
-void cpuid_policy_updated(struct vcpu *v);
-
 /*
  * Initialise a hypercall-transfer page. The given pointer must be mapped
  * in Xen virtual address space (accesses are not validated or checked).
@@ -667,7 +665,7 @@ struct guest_memory_policy
 void update_guest_memory_policy(struct vcpu *v,
                                 struct guest_memory_policy *policy);
 
-void domain_cpu_policy_changed(struct domain *d);
+int __must_check domain_cpu_policy_changed(struct domain *d);
 
 bool update_runstate_area(struct vcpu *);
 bool update_secondary_system_time(struct vcpu *,
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -106,6 +106,7 @@ void compress_xsave_states(struct vcpu *
 /* extended state init and cleanup functions */
 void xstate_free_save_area(struct vcpu *v);
 int xstate_alloc_save_area(struct vcpu *v);
+int xstate_update_save_area(struct vcpu *v);
 void xstate_init(struct cpuinfo_x86 *c);
 unsigned int xstate_ctxt_size(u64 xcr0);
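The size computation in xstate_update_save_area() above can be modeled standalone: the area must reach to the end of the highest-numbered enabled component, with components 0 and 1 covered by the legacy region and XSAVE header. The offset/size table below is an illustrative placeholder, not real CPUID leaf 0xD data.

```c
#include <assert.h>
#include <stdint.h>

/* Legacy FXSAVE region (512 bytes) plus XSAVE header (64 bytes), as in Xen. */
#define XSTATE_AREA_MIN_SIZE (512u + 64u)

/*
 * Illustrative per-component layout.  Component 2 would be YMM on real
 * hardware; the numbers here are placeholders, not real CPUID values.
 */
static const unsigned int offsets[] = { 0, 160, 576, 832 };
static const unsigned int sizes[]   = { 160, 256, 256, 64 };

/*
 * Mirror of the sizing loop in xstate_update_save_area(): take the
 * maximum of offset + size over all components enabled in xcr0_max,
 * starting from component 2.
 */
static unsigned int area_size(uint64_t xcr0_max)
{
    unsigned int i, size = XSTATE_AREA_MIN_SIZE;

    for ( i = 2; i < sizeof(offsets) / sizeof(offsets[0]); ++i )
        if ( xcr0_max & (1ull << i) )
            size = size > offsets[i] + sizes[i] ? size
                                                : offsets[i] + sizes[i];

    return size;
}
```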
 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:28:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:28:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34453.65519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCpp-0007Nf-JV; Mon, 23 Nov 2020 14:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34453.65519; Mon, 23 Nov 2020 14:28:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCpp-0007NY-GZ; Mon, 23 Nov 2020 14:28:53 +0000
Received: by outflank-mailman (input) for mailman id 34453;
 Mon, 23 Nov 2020 14:28:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCpo-0007NR-BU
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:28:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17b88f0e-fb39-4d76-a764-5f487489d6af;
 Mon, 23 Nov 2020 14:28:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7B7B1AC0C;
 Mon, 23 Nov 2020 14:28:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141730; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/2IjvyVaTtBoXKw+aBkmUgzfUhgQtRJHM3VJQSAXVi0=;
	b=L7Ay3yAv+eM/6Wp6ZELrjbzSrvAWpDny4Dz2Bnl8h21I5fOGfUjA5eLLNVlVMVePIZdefU
	/HiBV2okpQit4u6pSNfCdItfIxSRDXLXYDHbiOfYDcR7HvzK1Q0BgD3aPE7pjDYr7bZjFB
	60TJ+OsWxEKwA1zC0RkYQH1DynBR6O8=
Subject: [PATCH v2 05/17] x86/xstate: re-use valid_xcr0() for boot-time checks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <ddb0309a-4e6b-a6e8-9b54-7b2005f87112@suse.com>
Date: Mon, 23 Nov 2020 15:28:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Instead of (just partially) open-coding it, re-use the function after
suitably moving it up.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -611,6 +611,34 @@ unsigned int xstate_ctxt_size(u64 xcr0)
     return _xstate_ctxt_size(xcr0);
 }
 
+static bool valid_xcr0(uint64_t xcr0)
+{
+    /* FP must be unconditionally set. */
+    if ( !(xcr0 & X86_XCR0_FP) )
+        return false;
+
+    /* YMM depends on SSE. */
+    if ( (xcr0 & X86_XCR0_YMM) && !(xcr0 & X86_XCR0_SSE) )
+        return false;
+
+    if ( xcr0 & (X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM) )
+    {
+        /* OPMASK, ZMM, and HI_ZMM require YMM. */
+        if ( !(xcr0 & X86_XCR0_YMM) )
+            return false;
+
+        /* OPMASK, ZMM, and HI_ZMM must be the same. */
+        if ( ~xcr0 & (X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM) )
+            return false;
+    }
+
+    /* BNDREGS and BNDCSR must be the same. */
+    if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
+        return false;
+
+    return true;
+}
+
 /* Collect the information of processor's extended state */
 void xstate_init(struct cpuinfo_x86 *c)
 {
@@ -646,10 +674,9 @@ void xstate_init(struct cpuinfo_x86 *c)
     }
 
     cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
-
-    BUG_ON((eax & XSTATE_FP_SSE) != XSTATE_FP_SSE);
-    BUG_ON((eax & X86_XCR0_YMM) && !(eax & X86_XCR0_SSE));
     feature_mask = (((u64)edx << 32) | eax) & XCNTXT_MASK;
+    BUG_ON(!valid_xcr0(feature_mask));
+    BUG_ON(!(feature_mask & X86_XCR0_SSE));
 
     /*
      * Set CR4_OSXSAVE and run "cpuid" to get xsave_cntxt_size.
@@ -679,31 +706,6 @@ void xstate_init(struct cpuinfo_x86 *c)
         BUG();
 }
 
-static bool valid_xcr0(u64 xcr0)
-{
-    /* FP must be unconditionally set. */
-    if ( !(xcr0 & X86_XCR0_FP) )
-        return false;
-
-    /* YMM depends on SSE. */
-    if ( (xcr0 & X86_XCR0_YMM) && !(xcr0 & X86_XCR0_SSE) )
-        return false;
-
-    if ( xcr0 & (X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM) )
-    {
-        /* OPMASK, ZMM, and HI_ZMM require YMM. */
-        if ( !(xcr0 & X86_XCR0_YMM) )
-            return false;
-
-        /* OPMASK, ZMM, and HI_ZMM must be the same. */
-        if ( ~xcr0 & (X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM) )
-            return false;
-    }
-
-    /* BNDREGS and BNDCSR must be the same. */
-    return !(xcr0 & X86_XCR0_BNDREGS) == !(xcr0 & X86_XCR0_BNDCSR);
-}
-
 int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
                     const struct xsave_hdr *hdr)
 {



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:29:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:29:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34459.65532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCqU-0007UW-T7; Mon, 23 Nov 2020 14:29:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34459.65532; Mon, 23 Nov 2020 14:29:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCqU-0007UP-Q4; Mon, 23 Nov 2020 14:29:34 +0000
Received: by outflank-mailman (input) for mailman id 34459;
 Mon, 23 Nov 2020 14:29:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCqT-0007UF-VE
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:29:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8240edfc-1097-492e-848d-1604c4052fea;
 Mon, 23 Nov 2020 14:29:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0CFC2AD1E;
 Mon, 23 Nov 2020 14:29:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141772; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=topsFTtubCCTfjnREywJuAhrQ3UsXaerCVcSXj//Ras=;
	b=PLpqinmHAaaLZEN0XUryDve53WCXXATMcXpFZyXbZIh2f88ulib03wYKuNaDtHUhooDgff
	D4JD8BjR+p/luOL89a9XpJUiVW4Wiwie5Cl+cdvmbZQFWJO3IH2WFfbVGn5Bxre2ZILR1S
	N/8FN//78wWKZzhL6TtN78v4sYXlW5I=
Subject: [PATCH v2 06/17] x86/xstate: drop xstate_offsets[] and xstate_sizes[]
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <6d6d1c7f-3d17-5031-ad31-600cff88c55c@suse.com>
Date: Mon, 23 Nov 2020 15:29:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

They're redundant with the respective fields of the raw CPUID policy;
there's no need to keep two copies of the same data. This also breaks
recalculate_xstate()'s dependency on xstate_init(), allowing host CPUID
policy calculation to be moved together with that of the raw one (which
a subsequent change will require anyway).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -498,6 +498,8 @@ void identify_cpu(struct cpuinfo_x86 *c)
 	}
 
 	/* Now the feature flags better reflect actual CPU features! */
+	if (c == &boot_cpu_data)
+		init_host_cpuid();
 
 	xstate_init(c);
 
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -167,32 +167,32 @@ static void recalculate_xstate(struct cp
     {
         xstates |= X86_XCR0_YMM;
         xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_YMM_POS] +
-                          xstate_sizes[X86_XCR0_YMM_POS]);
+                          xstate_offset(X86_XCR0_YMM_POS) +
+                          xstate_size(X86_XCR0_YMM_POS));
     }
 
     if ( p->feat.mpx )
     {
         xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
         xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
-                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
+                          xstate_offset(X86_XCR0_BNDCSR_POS) +
+                          xstate_size(X86_XCR0_BNDCSR_POS));
     }
 
     if ( p->feat.avx512f )
     {
         xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
         xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
-                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
+                          xstate_offset(X86_XCR0_HI_ZMM_POS) +
+                          xstate_size(X86_XCR0_HI_ZMM_POS));
     }
 
     if ( p->feat.pku )
     {
         xstates |= X86_XCR0_PKRU;
         xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_PKRU_POS] +
-                          xstate_sizes[X86_XCR0_PKRU_POS]);
+                          xstate_offset(X86_XCR0_PKRU_POS) +
+                          xstate_size(X86_XCR0_PKRU_POS));
     }
 
     p->xstate.max_size  =  xstate_size;
@@ -215,8 +215,8 @@ static void recalculate_xstate(struct cp
         if ( !(xstates & curr_xstate) )
             continue;
 
-        p->xstate.comp[i].size   = xstate_sizes[i];
-        p->xstate.comp[i].offset = xstate_offsets[i];
+        p->xstate.comp[i].size   = xstate_size(i);
+        p->xstate.comp[i].offset = xstate_offset(i);
         p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
         p->xstate.comp[i].align  = curr_xstate & xstate_align;
     }
@@ -512,10 +512,16 @@ static void __init calculate_hvm_def_pol
     recalculate_xstate(p);
 }
 
-void __init init_guest_cpuid(void)
+void __init init_host_cpuid(void)
 {
     calculate_raw_policy();
     calculate_host_policy();
+}
+
+void __init init_guest_cpuid(void)
+{
+    /* Do this a 2nd time to account for setup_{clear,force}_cpu_cap() uses. */
+    calculate_host_policy();
 
     if ( IS_ENABLED(CONFIG_PV) )
     {
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -9,6 +9,7 @@
 #include <xen/percpu.h>
 #include <xen/sched.h>
 #include <xen/xvmalloc.h>
+#include <asm/cpuid.h>
 #include <asm/current.h>
 #include <asm/processor.h>
 #include <asm/hvm/support.h>
@@ -26,8 +27,6 @@ static u32 __read_mostly xsave_cntxt_siz
 /* A 64-bit bitmask of the XSAVE/XRSTOR features supported by processor. */
 u64 __read_mostly xfeature_mask;
 
-unsigned int *__read_mostly xstate_offsets;
-unsigned int *__read_mostly xstate_sizes;
 u64 __read_mostly xstate_align;
 static unsigned int __read_mostly xstate_features;
 
@@ -93,34 +92,19 @@ static int setup_xstate_features(bool bs
     unsigned int leaf, eax, ebx, ecx, edx;
 
     if ( bsp )
-    {
         xstate_features = flsl(xfeature_mask);
-        xstate_offsets = xzalloc_array(unsigned int, xstate_features);
-        if ( !xstate_offsets )
-            return -ENOMEM;
-
-        xstate_sizes = xzalloc_array(unsigned int, xstate_features);
-        if ( !xstate_sizes )
-            return -ENOMEM;
-    }
 
     for ( leaf = 2; leaf < xstate_features; leaf++ )
     {
-        if ( bsp )
-        {
-            cpuid_count(XSTATE_CPUID, leaf, &xstate_sizes[leaf],
-                        &xstate_offsets[leaf], &ecx, &edx);
-            if ( ecx & XSTATE_ALIGN64 )
-                __set_bit(leaf, &xstate_align);
-        }
+        cpuid_count(XSTATE_CPUID, leaf, &eax,
+                    &ebx, &ecx, &edx);
+        BUG_ON(eax != xstate_size(leaf));
+        BUG_ON(ebx != xstate_offset(leaf));
+
+        if ( bsp && (ecx & XSTATE_ALIGN64) )
+            __set_bit(leaf, &xstate_align);
         else
-        {
-            cpuid_count(XSTATE_CPUID, leaf, &eax,
-                        &ebx, &ecx, &edx);
-            BUG_ON(eax != xstate_sizes[leaf]);
-            BUG_ON(ebx != xstate_offsets[leaf]);
             BUG_ON(!(ecx & XSTATE_ALIGN64) != !test_bit(leaf, &xstate_align));
-        }
     }
 
     return 0;
@@ -150,7 +134,7 @@ static void setup_xstate_comp(uint16_t *
             if ( test_bit(i, &xstate_align) )
                 offset = ROUNDUP(offset, 64);
             comp_offsets[i] = offset;
-            offset += xstate_sizes[i];
+            offset += xstate_size(i);
         }
     }
     ASSERT(offset <= xsave_cntxt_size);
@@ -213,10 +197,10 @@ void expand_xsave_states(struct vcpu *v,
          * comp_offsets[] information, something is very broken.
          */
         BUG_ON(!comp_offsets[index]);
-        BUG_ON((xstate_offsets[index] + xstate_sizes[index]) > size);
+        BUG_ON((xstate_offset(index) + xstate_size(index)) > size);
 
-        memcpy(dest + xstate_offsets[index], src + comp_offsets[index],
-               xstate_sizes[index]);
+        memcpy(dest + xstate_offset(index), src + comp_offsets[index],
+               xstate_size(index));
 
         valid &= ~feature;
     }
@@ -279,10 +263,10 @@ void compress_xsave_states(struct vcpu *
          * comp_offset[] information, something is very broken.
          */
         BUG_ON(!comp_offsets[index]);
-        BUG_ON((xstate_offsets[index] + xstate_sizes[index]) > size);
+        BUG_ON((xstate_offset(index) + xstate_size(index)) > size);
 
-        memcpy(dest + comp_offsets[index], src + xstate_offsets[index],
-               xstate_sizes[index]);
+        memcpy(dest + comp_offsets[index], src + xstate_offset(index),
+               xstate_size(index));
 
         valid &= ~feature;
     }
@@ -516,8 +500,8 @@ int xstate_alloc_save_area(struct vcpu *
         unsigned int i;
 
         for ( size = 0, i = 2; i < xstate_features; ++i )
-            if ( size < xstate_sizes[i] )
-                size = xstate_sizes[i];
+            if ( size < xstate_size(i) )
+                size = xstate_size(i);
         size += XSTATE_AREA_MIN_SIZE;
     }
 
@@ -560,9 +544,9 @@ int xstate_update_save_area(struct vcpu
     for ( size = old = XSTATE_AREA_MIN_SIZE, i = 2; i < xstate_features; ++i )
     {
         if ( xcr0_max & (1ul << i) )
-            size = max(size, xstate_offsets[i] + xstate_sizes[i]);
+            size = max(size, xstate_offset(i) + xstate_size(i));
         if ( v->arch.xcr0_accum & (1ul << i) )
-            old = max(old, xstate_offsets[i] + xstate_sizes[i]);
+            old = max(old, xstate_offset(i) + xstate_size(i));
     }
 
     save_area = _xvrealloc(v->arch.xsave_area, size, __alignof(*save_area));
@@ -821,7 +805,7 @@ uint64_t read_bndcfgu(void)
               : "=m" (*xstate)
               : "a" (X86_XCR0_BNDCSR), "d" (0), "D" (xstate) );
 
-        bndcsr = (void *)xstate + xstate_offsets[X86_XCR0_BNDCSR_POS];
+        bndcsr = (void *)xstate + xstate_offset(X86_XCR0_BNDCSR_POS);
     }
 
     if ( cr0 & X86_CR0_TS )
--- a/xen/include/asm-x86/cpuid.h
+++ b/xen/include/asm-x86/cpuid.h
@@ -16,6 +16,7 @@
 extern const uint32_t known_features[FSCAPINTS];
 extern const uint32_t special_features[FSCAPINTS];
 
+void init_host_cpuid(void);
 void init_guest_cpuid(void);
 
 /*
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -44,8 +44,9 @@ extern uint32_t mxcsr_mask;
 
 extern u64 xfeature_mask;
 extern u64 xstate_align;
-extern unsigned int *xstate_offsets;
-extern unsigned int *xstate_sizes;
+
+#define xstate_offset(n) (raw_cpuid_policy.xstate.comp[n].offset)
+#define xstate_size(n)   (raw_cpuid_policy.xstate.comp[n].size)
 
 /* extended state save area */
 struct __attribute__((aligned (64))) xsave_struct



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34464.65543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCqy-0007q4-6o; Mon, 23 Nov 2020 14:30:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34464.65543; Mon, 23 Nov 2020 14:30:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCqy-0007px-38; Mon, 23 Nov 2020 14:30:04 +0000
Received: by outflank-mailman (input) for mailman id 34464;
 Mon, 23 Nov 2020 14:30:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCqx-0007ni-Es
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:30:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2667c35f-2fd9-4e18-b94b-ac9affba3010;
 Mon, 23 Nov 2020 14:30:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 70395ACD5;
 Mon, 23 Nov 2020 14:30:01 +0000 (UTC)
X-Inumbo-ID: 2667c35f-2fd9-4e18-b94b-ac9affba3010
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141801; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1rKdh3/bOWbvpOmzUer7mtwBmGntPL/PTs0xVbXua64=;
	b=CqMStz0V7owpn6kmtqENXaUaRIJv9VilTlsw8f+K1ucjbS9nhaahNcEaN6E7NtnPjR14Ex
	7X7ACFcYPm3TUGpj+2sCylYUo83A8S67VXFMGmHKxY1Ez6Pt50NCk8Gqk0tPsa41QyBH0H
	S8KH+ZMt88W4953TOsQq9v10F/Gewgc=
Subject: [PATCH v2 07/17] x86/xstate: replace xsave_cntxt_size and drop
 XCNTXT_MASK
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <61ff0ac8-a1ab-b0d4-e466-34b47f13a734@suse.com>
Date: Mon, 23 Nov 2020 15:30:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

XCNTXT_MASK is effectively embedded in recalculate_xstate(), and
xsave_cntxt_size is redundant with the host CPUID policy's
xstate.max_size field.

Use the host CPUID policy as input (requiring it to be calculated
earlier), thus allowing e.g. "cpuid=no-avx512f" to also avoid
allocating space for ZMM and mask register state.

Also drop a stale part of an adjacent comment.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -20,9 +20,10 @@
 /*
  * Maximum size (in byte) of the XSAVE/XRSTOR save area required by all
  * the supported and enabled features on the processor, including the
- * XSAVE.HEADER. We only enable XCNTXT_MASK that we have known.
+ * XSAVE.HEADER. We only enable cpuid_policy_xcr0_max(&host_cpuid_policy).
+ * Note that this identifier should not be usable as an lvalue.
  */
-static u32 __read_mostly xsave_cntxt_size;
+#define xsave_cntxt_size (host_cpuid_policy.xstate.max_size | 0)
 
 /* A 64-bit bitmask of the XSAVE/XRSTOR features supported by processor. */
 u64 __read_mostly xfeature_mask;
@@ -577,8 +578,23 @@ static unsigned int _xstate_ctxt_size(u6
     ASSERT(ok);
     cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
     ASSERT(ebx <= ecx);
-    ok = set_xcr0(act_xcr0);
-    ASSERT(ok);
+
+    /*
+     * When called the very first time from xstate_init(), act_xcr0 (as read
+     * from per-CPU data) is still zero. xstate_init() wants this function to
+     * leave xfeature_mask in place, so avoid restoration in this case (which
+     * would fail anyway).
+     */
+    if ( act_xcr0 )
+    {
+        ok = set_xcr0(act_xcr0);
+        ASSERT(ok);
+    }
+    else
+    {
+        BUG_ON(!ok);
+        ASSERT(xcr0 == xfeature_mask);
+    }
 
     return ebx;
 }
@@ -650,42 +666,35 @@ void xstate_init(struct cpuinfo_x86 *c)
         return;
 
     if ( (bsp && !use_xsave) ||
-         boot_cpu_data.cpuid_level < XSTATE_CPUID )
+         c->cpuid_level < XSTATE_CPUID )
     {
         BUG_ON(!bsp);
         setup_clear_cpu_cap(X86_FEATURE_XSAVE);
         return;
     }
 
-    cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
-    feature_mask = (((u64)edx << 32) | eax) & XCNTXT_MASK;
-    BUG_ON(!valid_xcr0(feature_mask));
-    BUG_ON(!(feature_mask & X86_XCR0_SSE));
-
-    /*
-     * Set CR4_OSXSAVE and run "cpuid" to get xsave_cntxt_size.
-     */
-    set_in_cr4(X86_CR4_OSXSAVE);
-    if ( !set_xcr0(feature_mask) )
-        BUG();
-
     if ( bsp )
     {
+        feature_mask = cpuid_policy_xcr0_max(&host_cpuid_policy);
+        BUG_ON(!valid_xcr0(feature_mask));
+        BUG_ON(!(feature_mask & X86_XCR0_SSE));
+
         xfeature_mask = feature_mask;
-        /*
-         * xsave_cntxt_size is the max size required by enabled features.
-         * We know FP/SSE and YMM about eax, and nothing about edx at present.
-         */
-        xsave_cntxt_size = _xstate_ctxt_size(feature_mask);
+        /* xsave_cntxt_size is the max size required by enabled features. */
         printk("xstate: size: %#x and states: %#"PRIx64"\n",
-               xsave_cntxt_size, xfeature_mask);
-    }
-    else
-    {
-        BUG_ON(xfeature_mask != feature_mask);
-        BUG_ON(xsave_cntxt_size != _xstate_ctxt_size(feature_mask));
+               xsave_cntxt_size, feature_mask);
+
+        set_in_cr4(X86_CR4_OSXSAVE);
     }
 
+    cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
+    feature_mask = (((uint64_t)edx << 32) | eax) & xfeature_mask;
+    BUG_ON(xfeature_mask != feature_mask);
+
+    /* This has the side effect of set_xcr0(feature_mask). */
+    if ( xsave_cntxt_size != _xstate_ctxt_size(feature_mask) )
+        BUG();
+
     if ( setup_xstate_features(bsp) && bsp )
         BUG();
 }
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -30,9 +30,6 @@ extern uint32_t mxcsr_mask;
 #define XSTATE_AREA_MIN_SIZE      (FXSAVE_SIZE + XSAVE_HDR_SIZE)
 
 #define XSTATE_FP_SSE  (X86_XCR0_FP | X86_XCR0_SSE)
-#define XCNTXT_MASK    (X86_XCR0_FP | X86_XCR0_SSE | X86_XCR0_YMM | \
-                        X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM | \
-                        XSTATE_NONLAZY)
 
 #define XSTATE_ALL     (~(1ULL << 63))
 #define XSTATE_NONLAZY (X86_XCR0_BNDREGS | X86_XCR0_BNDCSR | X86_XCR0_PKRU)



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:30:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34466.65556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCrC-0008MS-Gf; Mon, 23 Nov 2020 14:30:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34466.65556; Mon, 23 Nov 2020 14:30:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCrC-0008MI-DC; Mon, 23 Nov 2020 14:30:18 +0000
Received: by outflank-mailman (input) for mailman id 34466;
 Mon, 23 Nov 2020 14:30:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khCrA-0008Lq-IA
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:30:16 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c82c84c-7e9e-4739-8bb8-35aa458c212d;
 Mon, 23 Nov 2020 14:30:15 +0000 (UTC)
X-Inumbo-ID: 7c82c84c-7e9e-4739-8bb8-35aa458c212d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606141815;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=gFXqga1oMk4Udo3cl2Q8/5xxLWu0sY5heonPRRyRyuY=;
  b=W9akWsDyDo+NQc6Ysfqq9Y37MDEj6XEvEP13Ld6pBf//W6yAlhj9Akwk
   k+Tkc5DCm8qSkvbxG6V+DQRNKUnvtu81lS1Nashef8jMzG0E7xAUdfx1S
   dRRcnwzjLmChHGZtfnLIUbpvVgGgzR101v3OJ2ApB7yZw4FbkXbvzXGZr
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: It0JTbDD6Aym5pd3q298kS9/wQZlL2w8Eyv4pTiSa79r9CfQq3FHpwUVH8BHF+MAIARWZRMfDK
 rKTbw9bUg9vd0pQ08FwuKa/bwAthGLWBizU14aLFaf7/UO/VwGBcgysLlGIrj5j1la0nQ95nbq
 OfmQta2A4vx+7L3juLYw0JuBsT88IDonLygnE9Z75Ct/x5+BLi08fxjquL6xg/46TSXapUOFyN
 p1H2v6BoKRSyXZi01Zj06/fiR7d2Thd63Ck0hE6gXv8iXHujTdBWOqxuoUADEcVU7VSjF8xHAR
 yh4=
X-SBRS: None
X-MesageID: 31758649
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31758649"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=S1D6ufz8RKlDn2ka7t6Xpxa9Xas245gK3IqVj9/1DLfHIOKvG1tvjKIumSXQ8tnhfKMJAYI8yQ6lAFcuuLBkaEKkUxCqeiG4ZSMLGW01sKxJt90SW0FdF5QxPKnK6QRr4Uc3yB3cxVrodAU6XsRNt+vtO6anX0u5m2xnaHgxOEx5JZ5wTRntDLFZcKN+x3uwGlkPEKkdyCYEC5dfVMx4BFqFdokiBgPXtvBXhew9cMLlao1gJT2Ro0bHMcIS7d6XptkaRhZZ2o438ee+0E/z1ruTOdHYiYzs2nAegAFZ0nW0gVjgXDPs852ItWvYbEYhMCDlDzbxQQ3WXUoTqJc9Jg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VfNA8hwIVqbVBxLUtTRL6SHTSdqw9MhH+OlkaXnc0u0=;
 b=EaogWZciDN+/EW7drvm4l4RTRXtwiDYqOi5ui3ZLjuEPO5v3EfOa/rRquqAKQ8TRYiE4VdwL4KWGTU5zJmKYGS5/KVHMzq5Gyi4T0HNZj8WQGdN47Wrq6obg1dtyyTNzAqpmDh5RtryCsW6e1Y1cG55A7o7v3giBeHqryFw33T9poO65uFPHC2v5wEpsvC5FGs0yUjuqvm+PW7X1vNuNtAr1W6GXmhCgJ3fuOAJTcOmQaRY5CSvbQW81Va9MFWz5LrQQ+9NWNCvMalvAPRf7oLZRFQcClNC98EBVKJovjGjUpoQe6lbumdCjpxImOyWlYH9IkntugpNmgsCdBWd9ZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VfNA8hwIVqbVBxLUtTRL6SHTSdqw9MhH+OlkaXnc0u0=;
 b=hQ5dEsg2DG6JceeFYplFnjHDaWe+ZnSkgQJtdCUg/eQSLocfmQ5xKOpRJm4LsSvENYeWGxnAzYzIKadLx4Z8FaYvWJDYcQ8LyFrIGV9zi0CRdIUXdBZ2Lqg6Bb8QsG2foFvFigiqEhwCCzceOmubri/1dueiDSU7gt3Y9J1JrHQ=
Date: Mon, 23 Nov 2020 15:30:07 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH 1/4] x86/ACPI: fix mapping of FACS
Message-ID: <20201123143007.avmqwbwxfyk6owpr@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <81a8c2f0-ae9b-98e0-f5c5-d32b423db491@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <81a8c2f0-ae9b-98e0-f5c5-d32b423db491@suse.com>
X-ClientProxiedBy: LO2P265CA0015.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:62::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 17a4936c-1020-4e99-e7a8-08d88fbc4974
X-MS-TrafficTypeDiagnostic: DS7PR03MB5608:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB560833DD9209CCC2D94A8E2F8FFC0@DS7PR03MB5608.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: qTx6Ym7UE8EUCkTCyqcZJa7H2N/GH4apx8WTmdSvLNweZncAuHQfL8ln2206apPz2kFdyt/m+mujRZr6+cp6JI7PGB4ZdUntbu1amDKkJRI31LNF3OXr5N9dthsKVR4KQI1tKzr0d3+vfYDduSRQHM06eCTZHVI8dnmmjFYqpJV/fJI3Ny8PU/rMSLquvlCdtiRmR9SazuxwEY5XSZNzl8Wiztx8Qt+A+yCQhky45eJOZmZsltH5m/SuGcC7BQ0kIIO28ttCz4GOpYZQ7RZjIktegrfmC/OZJmxRKYUkt+lxAgOcxYhA0Z1XQoTqi35bnSu/qmxU2vVwL9KqWjjIdQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(39860400002)(366004)(346002)(396003)(136003)(376002)(54906003)(478600001)(6666004)(6916009)(33716001)(316002)(4744005)(186003)(16526019)(8936002)(956004)(86362001)(26005)(85182001)(1076003)(2906002)(9686003)(4326008)(66476007)(8676002)(6496006)(66946007)(5660300002)(6486002)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: z0OtOHsB6/eTct0NslN1Hq38TXxVFkVBq/brf9jjEalO2eBVjASjMs+WBh03q2DulfrpgkSWokMCBGAJywbbfhIAYblcpWIp/oKJQs+sH7T0RtJpCa5+cpqLXwfZRq7jm82l+XH+p3VFuDLpV4q9PYbore1uagJqr2gmbsyn9xjj+kxJNIdorBWKVH1lhzQnuGRWDD4xHpiQVePQTrZ6bLn7MBp1RWWDjandWaofNZcme2+9EqhJ7ybqLPExQ7F06qajXNiFYqX3iUxwoehRvmzj8jZ5xd4haUSAI39885WxRzHZhRtnECUulB1kwNWPgaRO1CfS3tEeWVbL1R0IOKp47dMeVEa0Twwpdn0K+Ez/dRgGa1EFx4s7S9tclJnofkB5NVHCwN7FnmG7oQ/CeV5pdM21KFAqGrbrl5TqzEhU5vPcMEt+XvhZkDdjZZHsazZ70u5GDjO46EY1YI/g2V53stsgX0stw1AXYwkWwFMN44lfHpovJvD1d8Oej/iGKHKL7jDshSuarPa3yTnpo6/8Ob8ipGxsXcC3HNOJJqYrLIL3w/LSdEHL8SHl896B4KqcoWh1kvAd7+MhP3AxmxiD5V3g7DzapzIigflKKBx5uUUZVIbLia8ZidEOHhZQJ0q4Xde6YmszskPEX0f9kQobU8IeuaSwvhuYzrG22FMWQ+fe9E9gX3eGva0Pf/2EgyV3m4b7pSKtv7fB8Sx46CHbxqIDEwQqttJdJpSVNFFVzEDYHWvIC2tPFjOBnGcCkX9AIELyUPRxoLTH6D2vhdURlOVdeaJtftb5n53fNuhhcbaqFzC/+9d4SSQvykXedYTdO7k5PmM6DxhhyozyQKj5/9cUazMQ8YgWKbZdUbQkVpVIojhTJDrjPbwtcBxusJA+zJSkjNDVmNRvyXbZxA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 17a4936c-1020-4e99-e7a8-08d88fbc4974
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 14:30:12.3649
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mZFtiJOKqi4Dp12r2jWMeeeUEqZ3IKHx7aT3L6fwRtI+XzRWtC0r3NQj2a1P50OQwlZkFNfhmt215WaqQFjYPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5608
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 01:39:55PM +0100, Jan Beulich wrote:
> acpi_fadt_parse_sleep_info() runs when the system is already in
> SYS_STATE_boot. Hence its direct call to __acpi_map_table() won't work
> anymore. This call should probably have been replaced long ago already,
> as the layering violation hasn't been necessary for quite some time.
> 
> Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34475.65568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCrj-0008WO-VS; Mon, 23 Nov 2020 14:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34475.65568; Mon, 23 Nov 2020 14:30:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCrj-0008WH-RV; Mon, 23 Nov 2020 14:30:51 +0000
Received: by outflank-mailman (input) for mailman id 34475;
 Mon, 23 Nov 2020 14:30:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCrh-0008Vw-MU
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:30:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 895ee0fe-0416-4462-8814-7929f6e73c74;
 Mon, 23 Nov 2020 14:30:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4D3ADACD5;
 Mon, 23 Nov 2020 14:30:48 +0000 (UTC)
X-Inumbo-ID: 895ee0fe-0416-4462-8814-7929f6e73c74
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141848; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=x6t5Vob9tcf8+2MIVaNLOirct/IJx1acn361i7fk3Us=;
	b=Lc2vFDkXPjVOUok7Qeipvlor9wheCJ+8XYn2vpwjBGTFlMY45vzTswNzZuz2KC12cbVDQL
	gp5vkxqIWNA5nPhun32M3l7/lHFOSpAmWcZavIoQIsrkXcTyYYEm4+4n2uzaWlDwrYPAnv
	PygbW5FTcotpqDzpo7sHnHPiPAgok/I=
Subject: [PATCH v2 08/17] x86/xstate: avoid accounting for unsupported
 components
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <8794fa5b-a584-44af-9cba-a2641779a427@suse.com>
Date: Mon, 23 Nov 2020 15:30:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no point in including unsupported components in the size
calculations of xstate_{alloc,update}_save_area().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -501,8 +501,12 @@ int xstate_alloc_save_area(struct vcpu *
         unsigned int i;
 
         for ( size = 0, i = 2; i < xstate_features; ++i )
+        {
+            if ( !(xfeature_mask & (1ul << i)) )
+                continue;
             if ( size < xstate_size(i) )
                 size = xstate_size(i);
+        }
         size += XSTATE_AREA_MIN_SIZE;
     }
 
@@ -544,6 +548,8 @@ int xstate_update_save_area(struct vcpu
 
     for ( size = old = XSTATE_AREA_MIN_SIZE, i = 2; i < xstate_features; ++i )
     {
+        if ( !(xfeature_mask & (1ul << i)) )
+            continue;
         if ( xcr0_max & (1ul << i) )
             size = max(size, xstate_offset(i) + xstate_size(i));
         if ( v->arch.xcr0_accum & (1ul << i) )



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:31:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:31:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34483.65580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCsF-0000Ct-7a; Mon, 23 Nov 2020 14:31:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34483.65580; Mon, 23 Nov 2020 14:31:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCsF-0000Cl-4V; Mon, 23 Nov 2020 14:31:23 +0000
Received: by outflank-mailman (input) for mailman id 34483;
 Mon, 23 Nov 2020 14:31:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCsD-0000Ce-T1
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:31:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24754e3a-a5a6-41cf-8762-b3a6221cee56;
 Mon, 23 Nov 2020 14:31:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EACAACD5;
 Mon, 23 Nov 2020 14:31:20 +0000 (UTC)
X-Inumbo-ID: 24754e3a-a5a6-41cf-8762-b3a6221cee56
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141880; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IsGbx7nHAowOELB1I3S+0clwo01gLUbz8o/IEuZBhpE=;
	b=rwSZO9BPFZHHcFHWKBUzLbQPaBugEUg808iZMW88IY0qavsHxBEnfIvAMiZ2+btxdPkBCF
	RUBcemqpJHiZ3FwEEqKIC65HjJiEdnveeq+X4TLWmkZwURhovAPfdemP7HL0HKN1nfID/0
	W6v/VmD7k9T7/MtMlXdRkCALwfeLldA=
Subject: [PATCH v2 09/17] x86: use xvmalloc() for extended context buffer
 allocations
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <1e352dd9-016c-d15e-c1bb-64fb28df8b73@suse.com>
Date: Mon, 23 Nov 2020 15:31:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This is in preparation for the buffer sizes exceeding a page's worth of
space, as will happen with AMX as well as Architectural LBR.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,6 +30,7 @@
 #include <xsm/xsm.h>
 #include <xen/iommu.h>
 #include <xen/vm_event.h>
+#include <xen/xvmalloc.h>
 #include <public/vm_event.h>
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
@@ -331,7 +332,7 @@ long arch_do_domctl(
             goto sethvmcontext_out;
 
         ret = -ENOMEM;
-        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
+        if ( (c.data = xvmalloc_bytes(c.size)) == NULL )
             goto sethvmcontext_out;
 
         ret = -EFAULT;
@@ -343,7 +344,7 @@ long arch_do_domctl(
         domain_unpause(d);
 
     sethvmcontext_out:
-        xfree(c.data);
+        xvfree(c.data);
         break;
     }
 
@@ -373,7 +374,7 @@ long arch_do_domctl(
 
         /* Allocate our own marshalling buffer */
         ret = -ENOMEM;
-        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
+        if ( (c.data = xvmalloc_bytes(c.size)) == NULL )
             goto gethvmcontext_out;
 
         domain_pause(d);
@@ -386,7 +387,7 @@ long arch_do_domctl(
 
     gethvmcontext_out:
         copyback = true;
-        xfree(c.data);
+        xvfree(c.data);
         break;
     }
 
@@ -904,7 +905,7 @@ long arch_do_domctl(
             if ( !ret && size > PV_XSAVE_HDR_SIZE )
             {
                 unsigned int xsave_size = size - PV_XSAVE_HDR_SIZE;
-                void *xsave_area = xmalloc_bytes(xsave_size);
+                void *xsave_area = xvmalloc_bytes(xsave_size);
 
                 if ( !xsave_area )
                 {
@@ -918,7 +919,7 @@ long arch_do_domctl(
                 if ( copy_to_guest_offset(evc->buffer, offset, xsave_area,
                                           xsave_size) )
                      ret = -EFAULT;
-                xfree(xsave_area);
+                xvfree(xsave_area);
            }
 
             vcpu_unpause(v);
@@ -938,7 +939,7 @@ long arch_do_domctl(
                  evc->size > PV_XSAVE_SIZE(xfeature_mask) )
                 goto vcpuextstate_out;
 
-            receive_buf = xmalloc_bytes(evc->size);
+            receive_buf = xvmalloc_bytes(evc->size);
             if ( !receive_buf )
             {
                 ret = -ENOMEM;
@@ -948,7 +949,7 @@ long arch_do_domctl(
                                         offset, evc->size) )
             {
                 ret = -EFAULT;
-                xfree(receive_buf);
+                xvfree(receive_buf);
                 goto vcpuextstate_out;
             }
 
@@ -966,7 +967,7 @@ long arch_do_domctl(
                 ret = 0;
             if ( ret )
             {
-                xfree(receive_buf);
+                xvfree(receive_buf);
                 goto vcpuextstate_out;
             }
 
@@ -994,7 +995,7 @@ long arch_do_domctl(
                 vcpu_unpause(v);
             }
 
-            xfree(receive_buf);
+            xvfree(receive_buf);
         }
 
 #undef PV_XSAVE_HDR_SIZE
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -23,6 +23,7 @@
 #include <xen/guest_access.h>
 #include <xen/softirq.h>
 #include <xen/version.h>
+#include <xen/xvmalloc.h>
 
 #include <asm/hvm/support.h>
 
@@ -154,7 +155,7 @@ int hvm_save_one(struct domain *d, unsig
     else
         v = d->vcpu[instance];
     ctxt.size = hvm_sr_handlers[typecode].size;
-    ctxt.data = xmalloc_bytes(ctxt.size);
+    ctxt.data = xvmalloc_bytes(ctxt.size);
     if ( !ctxt.data )
         return -ENOMEM;
 
@@ -200,7 +201,7 @@ int hvm_save_one(struct domain *d, unsig
     else
         domain_unpause(d);
 
-    xfree(ctxt.data);
+    xvfree(ctxt.data);
     return rv;
 }
 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:32:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:32:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34492.65592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCsv-0000Mr-I3; Mon, 23 Nov 2020 14:32:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34492.65592; Mon, 23 Nov 2020 14:32:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCsv-0000Mk-Ex; Mon, 23 Nov 2020 14:32:05 +0000
Received: by outflank-mailman (input) for mailman id 34492;
 Mon, 23 Nov 2020 14:32:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCsu-0000Ma-AR
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:32:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2efc230-2365-4696-8c19-d6b22003b213;
 Mon, 23 Nov 2020 14:32:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CE42AC23;
 Mon, 23 Nov 2020 14:32:02 +0000 (UTC)
X-Inumbo-ID: f2efc230-2365-4696-8c19-d6b22003b213
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141922; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=C/br/wr1vs+jHb0x3GkA7iy/FWtl0oeRJ/XrBSDPWX4=;
	b=RYldOfDTrJG4kDD9DBVRUMeZ15TD7Kb42zNBgJWfT7PiW5bCU8aKw8miOwPLVPiTT+pJhk
	gcvMOtZB6oOfEldfRSiLRmP0YivGRBCy50GMZmFhPJ0j3/zSWw99jSErv7x0p/YLJgQtzB
	YdWuIvZzK9QquxX0tZWsCoGKx1CqXNE=
Subject: [PATCH v2 10/17] x86/xstate: enable AMX components
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <5490ecab-8fe1-aa46-95e0-4ad6c6318879@suse.com>
Date: Mon, 23 Nov 2020 15:32:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

These being controlled by XCR0, enabling support is relatively
straightforward. Note however that they won't see any use until their
dependent ISA extension CPUID flags get exposed, not least because of
the somewhat inverted way recalculate_xstate() handles the
dependencies.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -219,6 +219,9 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"md-clear",     0x00000007,  0, CPUID_REG_EDX, 10,  1},
         {"serialize",    0x00000007,  0, CPUID_REG_EDX, 14,  1},
         {"cet-ibt",      0x00000007,  0, CPUID_REG_EDX, 20,  1},
+        {"amx-bf16",     0x00000007,  0, CPUID_REG_EDX, 22,  1},
+        {"amx-tile",     0x00000007,  0, CPUID_REG_EDX, 24,  1},
+        {"amx-int8",     0x00000007,  0, CPUID_REG_EDX, 25,  1},
         {"ibrsb",        0x00000007,  0, CPUID_REG_EDX, 26,  1},
         {"stibp",        0x00000007,  0, CPUID_REG_EDX, 27,  1},
         {"l1d-flush",    0x00000007,  0, CPUID_REG_EDX, 28,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -167,7 +167,8 @@ static const char *const str_7d0[32] =
 
     [18] = "pconfig",
     [20] = "cet-ibt",
-
+    [22] = "amx-bf16",
+    [24] = "amx-tile",      [25] = "amx-int8",
     [26] = "ibrsb",         [27] = "stibp",
     [28] = "l1d-flush",     [29] = "arch-caps",
     [30] = "core-caps",     [31] = "ssbd",
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -195,6 +195,14 @@ static void recalculate_xstate(struct cp
                           xstate_size(X86_XCR0_PKRU_POS));
     }
 
+    if ( p->feat.amx_tile )
+    {
+        xstates |= X86_XCR0_TILECFG | X86_XCR0_TILEDATA;
+        xstate_size = max(xstate_size,
+                          xstate_offset(X86_XCR0_TILEDATA_POS) +
+                          xstate_size(X86_XCR0_TILEDATA_POS));
+    }
+
     p->xstate.max_size  =  xstate_size;
     p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
     p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -642,6 +642,10 @@ static bool valid_xcr0(uint64_t xcr0)
     if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
         return false;
 
+    /* TILECFG and TILEDATA must be the same. */
+    if ( !(xcr0 & X86_XCR0_TILECFG) != !(xcr0 & X86_XCR0_TILEDATA) )
+        return false;
+
     return true;
 }
 
--- a/xen/include/asm-x86/x86-defns.h
+++ b/xen/include/asm-x86/x86-defns.h
@@ -96,6 +96,10 @@
 #define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
 #define X86_XCR0_PKRU_POS         9
 #define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
+#define X86_XCR0_TILECFG_POS      17
+#define X86_XCR0_TILECFG          (1ULL << X86_XCR0_TILECFG_POS)
+#define X86_XCR0_TILEDATA_POS     18
+#define X86_XCR0_TILEDATA         (1ULL << X86_XCR0_TILEDATA_POS)
 #define X86_XCR0_LWP_POS          62
 #define X86_XCR0_LWP              (1ULL << X86_XCR0_LWP_POS)
 
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -265,6 +265,9 @@ XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /
 XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*a  SERIALIZE insn */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
+XEN_CPUFEATURE(AMX_BF16,      9*32+22) /*   AMX BFloat16 instructions */
+XEN_CPUFEATURE(AMX_TILE,      9*32+24) /*   AMX tile architecture */
+XEN_CPUFEATURE(AMX_INT8,      9*32+25) /*   AMX 8-bit integer instructions */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -222,7 +222,7 @@ def crunch_numbers(state):
         # instruction groups which are specified to require XSAVE for state
         # management.
         XSAVE: [XSAVEOPT, XSAVEC, XGETBV1, XSAVES,
-                AVX, MPX, PKU, LWP],
+                AVX, MPX, PKU, AMX_TILE, LWP],
 
         # AVX is taken to mean hardware support for 256bit registers (which in
         # practice depends on the VEX prefix to encode), and the instructions
@@ -288,6 +288,11 @@ def crunch_numbers(state):
 
         # In principle the TSXLDTRK insns could also be considered independent.
         RTM: [TSXLDTRK],
+
+        # AMX-TILE means hardware support for tile registers and general non-
+        # computational instructions.  All further AMX features are built on top
+        # of AMX-TILE.
+        AMX_TILE: [AMX_BF16, AMX_INT8],
     }
 
     deep_features = tuple(sorted(deps.keys()))



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:32:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:32:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34495.65604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCtF-0000SF-TH; Mon, 23 Nov 2020 14:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34495.65604; Mon, 23 Nov 2020 14:32:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCtF-0000S7-O4; Mon, 23 Nov 2020 14:32:25 +0000
Received: by outflank-mailman (input) for mailman id 34495;
 Mon, 23 Nov 2020 14:32:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MzqB=E5=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khCtE-0000Ro-0L
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:32:24 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa94b267-4d49-45a0-8355-46e92fb224d1;
 Mon, 23 Nov 2020 14:32:20 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ANEVtj7027034
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 23 Nov 2020 15:31:56 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 81F5F2E9CAC; Mon, 23 Nov 2020 15:31:50 +0100 (MET)
X-Inumbo-ID: fa94b267-4d49-45a0-8355-46e92fb224d1
Date: Mon, 23 Nov 2020 15:31:50 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123143150.GG2520@antioche.eu.org>
References: <1a50e1e2-b69c-afd6-a179-316231512004@suse.com>
 <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 23 Nov 2020 15:31:57 +0100 (MET)

On Mon, Nov 23, 2020 at 01:51:12PM +0100, Roger Pau Monné wrote:
> Hm, yes, it's quite weird. Do you know whether a NetBSD kernel can be
> multibooted from pxelinux with Xen? I would like to see if I can
> reproduce this myself.

Yes, if Xen+linux can boot, Xen+netbsd should boot too.
In a previous mail I wrote:
In case it helps, I put my Xen and netbsd kernels at
http://www-soc.lip6.fr/~bouyer/netbsd-dom0-pvh/
I boot it from the NetBSD boot loader with:
menu=Boot Xen PVH:load /netbsd-test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug
I guess with grub this would be:
kernel /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug
module /netbsd-test console=com0 root=dk0 -vx

(yes, com2 for xen and com0 for netbsd, that's not a bug :)
You can enter the NetBSD debugger with
+++++
You can then enter commands, like
sh ev /i
to see the interrupt counters.

> 
> I have the following patch also which will print a warning message
> when GSI 34 is injected from hardware or when Xen performs an EOI
> (either from a time out or when reacting to a guest one). I would
> expect at least the interrupt injection one to trigger together with
> the existing message.

It's quite verbose. I put the full log at
http://www-soc.lip6.fr/~bouyer/xen-log4.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:32:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:32:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34499.65616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCtU-0000Xm-57; Mon, 23 Nov 2020 14:32:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34499.65616; Mon, 23 Nov 2020 14:32:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCtU-0000Xf-0n; Mon, 23 Nov 2020 14:32:40 +0000
Received: by outflank-mailman (input) for mailman id 34499;
 Mon, 23 Nov 2020 14:32:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCtR-0000Wz-U9
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:32:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62aab5dd-6404-4f46-bcf3-de15ee685f36;
 Mon, 23 Nov 2020 14:32:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69685AC23;
 Mon, 23 Nov 2020 14:32:36 +0000 (UTC)
X-Inumbo-ID: 62aab5dd-6404-4f46-bcf3-de15ee685f36
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141956; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jzhBWTNHQY0ajJi4n63Xj/QfqXX25ivh2dwdvwKI0Bs=;
	b=IevHifplCTH5SXR+OiqrdnoyQMgAtZ1l5ZnNoL/wBZXkYlsClhQkihvQW9u6P9T92oqDeZ
	2fgN2kV+Ku/uylj/2s/aJDYDiMgvAiQyNmfZ/NCRyt8iRwy1ralnCRM5GNkR1xLJRjJXha
	YH4rYoy5e1O4ZZ6lbTnifO7clE/1Eek=
Subject: [PATCH v2 11/17] x86/CPUID: adjust extended leaves out of range
 clearing
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <0f04f568-e55a-ef20-aa97-fbb199dfae37@suse.com>
Date: Mon, 23 Nov 2020 15:32:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

A maximum extended leaf input value with the high half different from
0x8000 should not be considered valid; all leaves should be cleared in
this case.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Integrate into series.

--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -516,11 +516,22 @@ static void test_cpuid_out_of_range_clea
             },
         },
         {
+            .name = "no extd",
+            .nr_markers = 0,
+            .p = {
+                /* Clears all markers. */
+                .extd.max_leaf = 0,
+
+                .extd.vendor_ebx = 0xc2,
+                .extd.raw_fms = 0xc2,
+            },
+        },
+        {
             .name = "extd",
             .nr_markers = 1,
             .p = {
                 /* Retains marker in leaf 0.  Clears others. */
-                .extd.max_leaf = 0,
+                .extd.max_leaf = 0x80000000,
                 .extd.vendor_ebx = 0xc2,
 
                 .extd.raw_fms = 0xc2,
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -232,7 +232,9 @@ void x86_cpuid_policy_clear_out_of_range
                     ARRAY_SIZE(p->xstate.raw) - 1);
     }
 
-    zero_leaves(p->extd.raw, (p->extd.max_leaf & 0xffff) + 1,
+    zero_leaves(p->extd.raw,
+                ((p->extd.max_leaf >> 16) == 0x8000
+                 ? (p->extd.max_leaf & 0xffff) + 1 : 0),
                 ARRAY_SIZE(p->extd.raw) - 1);
 }
 


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:33:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:33:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34511.65628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCtv-0000hR-F8; Mon, 23 Nov 2020 14:33:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34511.65628; Mon, 23 Nov 2020 14:33:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCtv-0000hH-AM; Mon, 23 Nov 2020 14:33:07 +0000
Received: by outflank-mailman (input) for mailman id 34511;
 Mon, 23 Nov 2020 14:33:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCtu-0000h8-O1
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:33:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3947a626-3e1b-499c-befe-1caf7951f3a7;
 Mon, 23 Nov 2020 14:33:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 813B1AC23;
 Mon, 23 Nov 2020 14:33:04 +0000 (UTC)
X-Inumbo-ID: 3947a626-3e1b-499c-befe-1caf7951f3a7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606141984; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7ScbkEt/httVYD5V87cUng7tX2G5V1THImA0dtVNriM=;
	b=Pmf6i45Fb6TxN2XdsbP2IjRhwWUIMhsUirusJBYCING3leubeZ7gAk27yLR0ykjcocwfIt
	vX+uefIjeeEMbRp2ODhAR0Tw3BhCBSWaUKnBnw6xoFdSqCCEtA8ZivAHTW5lo6qP8lZflB
	fwt8RlgSQj+uIfYlU027XIvihCbMT1U=
Subject: [PATCH v2 12/17] x86/CPUID: shrink max_{,sub}leaf fields according to
 actual leaf contents
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <2aaffa0e-e17f-6581-6003-e58d2c9fc1d7@suse.com>
Date: Mon, 23 Nov 2020 15:33:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Zapping leaf data for out-of-range leaves is just one half of it: to
avoid guests (bogusly or worse) inferring information from mere leaf
presence, also shrink the maximum indicators such that the respective
trailing entry is not all blank (unless of course it's the initial
subleaf of a leaf that's not the final one).

This is also in preparation for bumping the maximum basic leaf we
support, to ensure guests which don't have the related features exposed
won't observe a change in behavior.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -8,10 +8,13 @@
 #include <err.h>
 
 #include <xen-tools/libs.h>
+#include <xen/asm/x86-defns.h>
 #include <xen/asm/x86-vendors.h>
 #include <xen/lib/x86/cpu-policy.h>
 #include <xen/domctl.h>
 
+#define XSTATE_FP_SSE  (X86_XCR0_FP | X86_XCR0_SSE)
+
 static unsigned int nr_failures;
 #define fail(fmt, ...)                          \
 ({                                              \
@@ -564,6 +567,103 @@ static void test_cpuid_out_of_range_clea
     }
 }
 
+static void test_cpuid_maximum_leaf_shrinking(void)
+{
+    static const struct test {
+        const char *name;
+        struct cpuid_policy p;
+    } tests[] = {
+        {
+            .name = "basic",
+            .p = {
+                /* Very basic information only. */
+                .basic.max_leaf = 1,
+                .basic.raw_fms = 0xc2,
+            },
+        },
+        {
+            .name = "cache",
+            .p = {
+                /* Cache subleaves present. */
+                .basic.max_leaf = 4,
+                .cache.subleaf[0].type = 1,
+            },
+        },
+        {
+            .name = "feat#0",
+            .p = {
+                /* Subleaf 0 only with some valid bit. */
+                .basic.max_leaf = 7,
+                .feat.max_subleaf = 0,
+                .feat.fsgsbase = 1,
+            },
+        },
+        {
+            .name = "feat#1",
+            .p = {
+                /* Subleaf 1 only with some valid bit. */
+                .basic.max_leaf = 7,
+                .feat.max_subleaf = 1,
+                .feat.avx_vnni = 1,
+            },
+        },
+        {
+            .name = "topo",
+            .p = {
+                /* Topology subleaves present. */
+                .basic.max_leaf = 0xb,
+                .topo.subleaf[0].type = 1,
+            },
+        },
+        {
+            .name = "xstate",
+            .p = {
+                /* First subleaf always valid (and then non-zero). */
+                .basic.max_leaf = 0xd,
+                .xstate.xcr0_low = XSTATE_FP_SSE,
+            },
+        },
+        {
+            .name = "extd",
+            .p = {
+                /* Commonly available information only. */
+                .extd.max_leaf = 0x80000008,
+                .extd.maxphysaddr = 0x28,
+                .extd.maxlinaddr = 0x30,
+            },
+        },
+    };
+
+    printf("Testing CPUID maximum leaf shrinking:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        const struct test *t = &tests[i];
+        struct cpuid_policy *p = memdup(&t->p);
+
+        p->basic.max_leaf = ARRAY_SIZE(p->basic.raw) - 1;
+        p->feat.max_subleaf = ARRAY_SIZE(p->feat.raw) - 1;
+        p->extd.max_leaf = 0x80000000 | (ARRAY_SIZE(p->extd.raw) - 1);
+
+        x86_cpuid_policy_shrink_max_leaves(p);
+
+        /* Check the resulting max (sub)leaf values against expectations. */
+        if ( p->basic.max_leaf != t->p.basic.max_leaf )
+             fail("  Test %s basic fail - expected %#x, got %#x\n",
+                  t->name, t->p.basic.max_leaf, p->basic.max_leaf);
+
+        if ( p->extd.max_leaf != t->p.extd.max_leaf )
+             fail("  Test %s extd fail - expected %#x, got %#x\n",
+                  t->name, t->p.extd.max_leaf, p->extd.max_leaf);
+
+        if ( p->feat.max_subleaf != t->p.feat.max_subleaf )
+             fail("  Test %s feat fail - expected %#x, got %#x\n",
+                  t->name, t->p.feat.max_subleaf, p->feat.max_subleaf);
+
+        free(p);
+    }
+}
+
 static void test_is_compatible_success(void)
 {
     static struct test {
@@ -679,6 +779,7 @@ int main(int argc, char **argv)
     test_cpuid_serialise_success();
     test_cpuid_deserialise_failure();
     test_cpuid_out_of_range_clearing();
+    test_cpuid_maximum_leaf_shrinking();
 
     test_msr_serialise_success();
     test_msr_deserialise_failure();
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -346,6 +346,8 @@ static void __init calculate_host_policy
         p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
                                (1u << SVM_FEATURE_TSCRATEMSR));
     }
+
+    x86_cpuid_policy_shrink_max_leaves(p);
 }
 
 static void __init guest_common_default_feature_adjustments(uint32_t *fs)
@@ -415,6 +417,8 @@ static void __init calculate_pv_max_poli
     recalculate_xstate(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
+
+    x86_cpuid_policy_shrink_max_leaves(p);
 }
 
 static void __init calculate_pv_def_policy(void)
@@ -435,6 +439,8 @@ static void __init calculate_pv_def_poli
     sanitise_featureset(pv_featureset);
     cpuid_featureset_to_policy(pv_featureset, p);
     recalculate_xstate(p);
+
+    x86_cpuid_policy_shrink_max_leaves(p);
 }
 
 static void __init calculate_hvm_max_policy(void)
@@ -494,6 +500,8 @@ static void __init calculate_hvm_max_pol
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
+
+    x86_cpuid_policy_shrink_max_leaves(p);
 }
 
 static void __init calculate_hvm_def_policy(void)
@@ -518,6 +526,8 @@ static void __init calculate_hvm_def_pol
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
+
+    x86_cpuid_policy_shrink_max_leaves(p);
 }
 
 void __init init_host_cpuid(void)
@@ -704,6 +714,8 @@ void recalculate_cpuid_policy(struct dom
 
     if ( !p->extd.page1gb )
         p->extd.raw[0x19] = EMPTY_LEAF;
+
+    x86_cpuid_policy_shrink_max_leaves(p);
 }
 
 int init_domain_cpuid_policy(struct domain *d)
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -121,7 +121,9 @@ void cpuid_viridian_leaves(const struct
     switch ( leaf )
     {
     case 0:
-        res->a = 0x40000006; /* Maximum leaf */
+        /* Maximum leaf */
+        cpuid_viridian_leaves(v, 0x40000006, 0, res);
+        res->a = res->a | res->b | res->c | res->d ? 0x40000006 : 0x40000004;
         memcpy(&res->b, "Micr", 4);
         memcpy(&res->c, "osof", 4);
         memcpy(&res->d, "t Hv", 4);
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -964,13 +964,15 @@ void cpuid_hypervisor_leaves(const struc
     uint32_t base = is_viridian_domain(d) ? 0x40000100 : 0x40000000;
     uint32_t idx  = leaf - base;
     unsigned int limit = is_viridian_domain(d) ? p->hv2_limit : p->hv_limit;
+    unsigned int dflt = is_pv_domain(d) ? XEN_CPUID_MAX_PV_NUM_LEAVES
+                                        : XEN_CPUID_MAX_HVM_NUM_LEAVES;
 
     if ( limit == 0 )
         /* Default number of leaves */
-        limit = XEN_CPUID_MAX_NUM_LEAVES;
+        limit = dflt;
     else
         /* Clamp toolstack value between 2 and MAX_NUM_LEAVES. */
-        limit = min(max(limit, 2u), XEN_CPUID_MAX_NUM_LEAVES + 0u);
+        limit = min(max(limit, 2u), dflt);
 
     if ( idx > limit )
         return;
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -113,6 +113,10 @@
 /* Max. address width in bits taking memory hotplug into account. */
 #define XEN_CPUID_MACHINE_ADDRESS_WIDTH_MASK (0xffu << 0)
 
-#define XEN_CPUID_MAX_NUM_LEAVES 5
+#define XEN_CPUID_MAX_PV_NUM_LEAVES 5
+#define XEN_CPUID_MAX_HVM_NUM_LEAVES 4
+#define XEN_CPUID_MAX_NUM_LEAVES \
+    (XEN_CPUID_MAX_PV_NUM_LEAVES > XEN_CPUID_MAX_HVM_NUM_LEAVES ? \
+     XEN_CPUID_MAX_PV_NUM_LEAVES : XEN_CPUID_MAX_HVM_NUM_LEAVES)
 
 #endif /* __XEN_PUBLIC_ARCH_X86_CPUID_H__ */
--- a/xen/include/xen/lib/x86/cpuid.h
+++ b/xen/include/xen/lib/x86/cpuid.h
@@ -351,6 +351,13 @@ void x86_cpuid_policy_fill_native(struct
  */
 void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
 
+/**
+ * Shrink max leaf/subleaf values such that the last respective valid entry
+ * isn't all blank.  While permitted by the spec, such extraneous leaves may
+ * provide undue "hints" to guests.
+ */
+void x86_cpuid_policy_shrink_max_leaves(struct cpuid_policy *p);
+
 #ifdef __XEN__
 #include <public/arch-x86/xen.h>
 typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -238,6 +238,45 @@ void x86_cpuid_policy_clear_out_of_range
                 ARRAY_SIZE(p->extd.raw) - 1);
 }
 
+void x86_cpuid_policy_shrink_max_leaves(struct cpuid_policy *p)
+{
+    unsigned int i;
+
+    p->basic.raw[0x4] = p->cache.raw[0];
+
+    for ( i = p->feat.max_subleaf; i; --i )
+        if ( p->feat.raw[i].a | p->feat.raw[i].b |
+             p->feat.raw[i].c | p->feat.raw[i].d )
+            break;
+    p->feat.max_subleaf = i;
+    p->basic.raw[0x7] = p->feat.raw[0];
+
+    p->basic.raw[0xb] = p->topo.raw[0];
+
+    /*
+     * Due to the way xstate gets handled in the hypervisor (see
+     * recalculate_xstate()) there is (for now at least) no need to fiddle
+     * with the xstate subleaves (IOW we assume they're already in consistent
+     * shape, for coming from either hardware or recalculate_xstate()).
+     */
+    p->basic.raw[0xd] = p->xstate.raw[0];
+
+    for ( i = p->basic.max_leaf; i; --i )
+        if ( p->basic.raw[i].a | p->basic.raw[i].b |
+             p->basic.raw[i].c | p->basic.raw[i].d )
+            break;
+    p->basic.max_leaf = i;
+
+    for ( i = p->extd.max_leaf & 0xffff; i; --i )
+        if ( p->extd.raw[i].a | p->extd.raw[i].b |
+             p->extd.raw[i].c | p->extd.raw[i].d )
+            break;
+    if ( i | p->extd.raw[0].b | p->extd.raw[0].c | p->extd.raw[0].d )
+        p->extd.max_leaf = 0x80000000 | i;
+    else
+        p->extd.max_leaf = 0;
+}
+
 const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
 {
     static const uint32_t deep_features[] = INIT_DEEP_FEATURES;



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:33:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34521.65640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCuR-0000sB-Sn; Mon, 23 Nov 2020 14:33:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34521.65640; Mon, 23 Nov 2020 14:33:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCuR-0000s4-P7; Mon, 23 Nov 2020 14:33:39 +0000
Received: by outflank-mailman (input) for mailman id 34521;
 Mon, 23 Nov 2020 14:33:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCuQ-0000rn-Et
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:33:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c2fdd20-ba1b-449e-a865-66f7e18be1f2;
 Mon, 23 Nov 2020 14:33:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B8B2AC23;
 Mon, 23 Nov 2020 14:33:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606142016; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Yr7qr3FzccOgQJxvEx3GRtjgApqjrqxvbBS1Z4WMing=;
	b=M5dKwH8+fn+ULaVyx+E2eFvSVa5N4fg1gnfNcHaO/QjHgbPvNioKQ0JaWyBMKHzhNfzfO/
	GS4dEdcNzxBZbAdzFDmv0TiaxmhA6Cp1CbIJctDHqGcgZwv6IPamde2JhkwxLgLetAzLtj
	U6b57EH3VcmiypjAgd5rtj+OBqCgTvA=
Subject: [PATCH v2 13/17] x86/CPUID: move bounding of max_{,sub}leaf fields to
 library code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <c7476bc5-d164-b417-4596-eff7562dff65@suse.com>
Date: Mon, 23 Nov 2020 15:33:36 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Break out this logic from calculate_host_policy() to also use it in the
x86 emulator harness, where subsequently we'll want to avoid open-coding
AMX maximum palette bounding.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -79,6 +79,7 @@ bool emul_test_init(void)
     unsigned long sp;
 
     x86_cpuid_policy_fill_native(&cp);
+    x86_cpuid_policy_bound_max_leaves(&cp);
 
     /*
      * The emulator doesn't use these instructions, so can always emulate
@@ -91,6 +92,8 @@ bool emul_test_init(void)
     cp.feat.rdpid = true;
     cp.extd.clzero = true;
 
+    x86_cpuid_policy_shrink_max_leaves(&cp);
+
     if ( cpu_has_xsave )
     {
         unsigned int tmp, ebx;
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -319,12 +319,7 @@ static void __init calculate_host_policy
 
     *p = raw_cpuid_policy;
 
-    p->basic.max_leaf =
-        min_t(uint32_t, p->basic.max_leaf,   ARRAY_SIZE(p->basic.raw) - 1);
-    p->feat.max_subleaf =
-        min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
-    p->extd.max_leaf = 0x80000000 | min_t(uint32_t, p->extd.max_leaf & 0xffff,
-                                          ARRAY_SIZE(p->extd.raw) - 1);
+    x86_cpuid_policy_bound_max_leaves(p);
 
     cpuid_featureset_to_policy(boot_cpu_data.x86_capability, p);
     recalculate_xstate(p);
--- a/xen/include/xen/lib/x86/cpuid.h
+++ b/xen/include/xen/lib/x86/cpuid.h
@@ -352,6 +352,12 @@ void x86_cpuid_policy_fill_native(struct
 void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
 
 /**
+ * Bound max leaf/subleaf values according to the capacity of the respective
+ * arrays in struct cpuid_policy.
+ */
+void x86_cpuid_policy_bound_max_leaves(struct cpuid_policy *p);
+
+/**
  * Shrink max leaf/subleaf values such that the last respective valid entry
  * isn't all blank.  While permitted by the spec, such extraneous leaves may
  * provide undue "hints" to guests.
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -238,6 +238,16 @@ void x86_cpuid_policy_clear_out_of_range
                 ARRAY_SIZE(p->extd.raw) - 1);
 }
 
+void x86_cpuid_policy_bound_max_leaves(struct cpuid_policy *p)
+{
+    p->basic.max_leaf =
+        min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
+    p->feat.max_subleaf =
+        min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
+    p->extd.max_leaf = 0x80000000 | min_t(uint32_t, p->extd.max_leaf & 0xffff,
+                                          ARRAY_SIZE(p->extd.raw) - 1);
+}
+
 void x86_cpuid_policy_shrink_max_leaves(struct cpuid_policy *p)
 {
     unsigned int i;



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:34:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34530.65652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCvB-00011d-7A; Mon, 23 Nov 2020 14:34:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34530.65652; Mon, 23 Nov 2020 14:34:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCvB-00011W-3f; Mon, 23 Nov 2020 14:34:25 +0000
Received: by outflank-mailman (input) for mailman id 34530;
 Mon, 23 Nov 2020 14:34:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCv9-00011G-Rx
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:34:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09dbe9db-f991-4003-a1a4-1a94be257658;
 Mon, 23 Nov 2020 14:34:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35739AC23;
 Mon, 23 Nov 2020 14:34:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606142061; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QxKWZSZRQygyQ3FnflcgF2rPGZueav8ba8ZpNwpn4ek=;
	b=I4TGxMkXXysN7+oG1cIUayExp/6JStkvdjjLqsNeT7Ha2cgwMDrCjsdGxQ+CcurTuR0173
	YZPOSYTpbTvtFjVyrA1lN0ICKpEPoqiw8lv+FPloNZI6+SargsJLHovr4z18G1STHWqs09
	1w0OSU8ybrq49i7KH43ffSNOn/qvZu8=
Subject: [PATCH v2 14/17] x86/CPUID: enable AMX leaves
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <e083de34-bd5d-7059-8e69-66129974cb55@suse.com>
Date: Mon, 23 Nov 2020 15:34:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This requires bumping the number of basic leaves we support. Apart from
this, the logic is modeled as closely as possible on that of leaf 7
handling.

The checks in x86_cpu_policies_are_compatible() may be more strict than
they ultimately need to be, but I'd rather start being on the safe side.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
It's not clear to me to what extent libxl_cpuid.c would want extending:
it doesn't appear to offer a way to override the maximum subleaf of
leaf 7. In fact I can't spot a max extended leaf override mechanism
either.

--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -190,6 +190,40 @@ static void test_cpuid_serialise_success
             },
             .nr_leaves = 4 + 0xd + 1 + 1,
         },
+
+        /* Leaf 0x1d serialisation stops at max_palette. */
+        {
+            .name = "empty leaf 0x1d",
+            .p = {
+                .basic.max_leaf = 0x1d,
+            },
+            .nr_leaves = 4 + 0x1d + 1,
+        },
+        {
+            .name = "partial leaf 0x1d",
+            .p = {
+                .basic.max_leaf = 0x1d,
+                .tile.max_palette = 1,
+            },
+            .nr_leaves = 4 + 0x1d + 1 + 1,
+        },
+
+        /* Leaf 0x1e serialisation stops at 0. */
+        {
+            .name = "empty leaf 0x1e",
+            .p = {
+                .basic.max_leaf = 0x1e,
+            },
+            .nr_leaves = 4 + 0x1e + 1,
+        },
+        {
+            .name = "partial leaf 0x1e",
+            .p = {
+                .basic.max_leaf = 0x1e,
+                .tmul.maxk = 16,
+            },
+            .nr_leaves = 4 + 0x1e + 1,
+        },
     };
 
     printf("Testing CPUID serialise success:\n");
@@ -321,6 +355,14 @@ static void test_cpuid_deserialise_failu
             .leaf = { .leaf = 0xd, .subleaf = CPUID_GUEST_NR_XSTATE },
         },
         {
+            .name = "OoB tile leaf",
+            .leaf = { .leaf = 0x1d, .subleaf = CPUID_GUEST_NR_PALETTE },
+        },
+        {
+            .name = "OoB tmul leaf",
+            .leaf = { .leaf = 0x1e, .subleaf = CPUID_GUEST_NR_TMUL },
+        },
+        {
             .name = "OoB extd leaf",
             .leaf = { .leaf = 0x80000000 | CPUID_GUEST_NR_EXTD },
         },
@@ -432,6 +474,8 @@ static void test_cpuid_out_of_range_clea
                 .topo.raw[0].a = 0xc2,
                 .xstate.raw[0].a = 0xc2,
                 .xstate.raw[1].a = 0xc2,
+                .tile.raw[0].a = 0xc2,
+                .tmul.raw[0].a = 0xc2,
             },
         },
         {
@@ -447,6 +491,8 @@ static void test_cpuid_out_of_range_clea
                 .topo.raw[0].a = 0xc2,
                 .xstate.raw[0].a = 0xc2,
                 .xstate.raw[1].a = 0xc2,
+                .tile.raw[0].a = 0xc2,
+                .tmul.raw[0].a = 0xc2,
             },
         },
         {
@@ -461,6 +507,8 @@ static void test_cpuid_out_of_range_clea
                 .topo.raw[0].a = 0xc2,
                 .xstate.raw[0].a = 0xc2,
                 .xstate.raw[1].a = 0xc2,
+                .tile.raw[0].a = 0xc2,
+                .tmul.raw[0].a = 0xc2,
             },
         },
         {
@@ -474,6 +522,8 @@ static void test_cpuid_out_of_range_clea
                 .topo.raw[1].b = 0xc2,
                 .xstate.raw[0].a = 0xc2,
                 .xstate.raw[1].a = 0xc2,
+                .tile.raw[0].a = 0xc2,
+                .tmul.raw[0].a = 0xc2,
             },
         },
         {
@@ -488,6 +538,8 @@ static void test_cpuid_out_of_range_clea
 
                 .xstate.raw[2].b = 0xc2,
                 .xstate.raw[3].b = 0xc2,
+                .tile.raw[0].a = 0xc2,
+                .tmul.raw[0].a = 0xc2,
             },
         },
         {
@@ -530,6 +582,34 @@ static void test_cpuid_out_of_range_clea
             },
         },
         {
+            .name = "tile no palette",
+            .nr_markers = 0,
+            .p = {
+                /* First two subleaves invalid as a pair.  Others cleared. */
+                .basic.max_leaf = 0x1d,
+                .xstate.xcr0_low = XSTATE_FP_SSE,
+
+                .tile.raw[0].a = 0xc2,
+                .tile.raw[1].b = 0xc2,
+                .tmul.raw[0].a = 0xc2,
+            },
+        },
+        {
+            .name = "tile palette 1",
+            .nr_markers = 1,
+            .p = {
+                /* First two subleaves valid as a pair.  Others cleared. */
+                .basic.max_leaf = 0x1d,
+                .feat.amx_tile = 1,
+                .xstate.xcr0_low = XSTATE_FP_SSE | X86_XCR0_TILECFG |
+                                   X86_XCR0_TILEDATA,
+                .tile.raw[0].a = 1,
+                .tile.raw[1].b = 0xc2,
+
+                .tmul.raw[0].a = 0xc2,
+            },
+        },
+        {
             .name = "extd",
             .nr_markers = 1,
             .p = {
@@ -624,6 +704,24 @@ static void test_cpuid_maximum_leaf_shri
             },
         },
         {
+            .name = "tile",
+            .p = {
+                /* Subleaf 1 only with some valid value. */
+                .basic.max_leaf = 0x1d,
+                .tile.raw[0].a = 1,
+                .tile.raw[1].a = 1024,
+            },
+        },
+        {
+            .name = "tmul",
+            .p = {
+                /* Subleaf 0 only with some valid values. */
+                .basic.max_leaf = 0x1e,
+                .tmul.maxk = 16,
+                .tmul.maxn = 16,
+            },
+        },
+        {
             .name = "extd",
             .p = {
                 /* Commonly available information only. */
@@ -643,6 +741,7 @@ static void test_cpuid_maximum_leaf_shri
 
         p->basic.max_leaf = ARRAY_SIZE(p->basic.raw) - 1;
         p->feat.max_subleaf = ARRAY_SIZE(p->feat.raw) - 1;
+        p->tile.max_palette = ARRAY_SIZE(p->tile.raw) - 1;
         p->extd.max_leaf = 0x80000000 | (ARRAY_SIZE(p->extd.raw) - 1);
 
         x86_cpuid_policy_shrink_max_leaves(p);
@@ -660,6 +759,10 @@ static void test_cpuid_maximum_leaf_shri
              fail("  Test %s feat fail - expected %#x, got %#x\n",
                   t->name, t->p.feat.max_subleaf, p->feat.max_subleaf);
 
+        if ( p->tile.max_palette != t->p.tile.max_palette )
+             fail("  Test %s tile fail - expected %#x, got %#x\n",
+                  t->name, t->p.tile.max_palette, p->tile.max_palette);
+
         free(p);
     }
 }
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -230,6 +230,29 @@ static void recalculate_xstate(struct cp
     }
 }
 
+static void recalculate_tile(struct cpuid_policy *p)
+{
+    unsigned int i;
+
+    if ( !p->feat.amx_tile )
+    {
+        memset(&p->tile, 0, sizeof(p->tile));
+        memset(&p->tmul, 0, sizeof(p->tmul));
+        return;
+    }
+
+    p->tile.raw[0].b = p->tile.raw[0].c = p->tile.raw[0].d = 0;
+
+    for ( i = 1; i <= p->tile.max_palette; ++i )
+    {
+        p->tile.raw[i].c &= 0x0000ffff;
+        p->tile.raw[i].d = 0;
+    }
+
+    p->tmul.raw[0].a = p->tmul.raw[0].c = p->tmul.raw[0].d = 0;
+    p->tmul.raw[0].b &= 0x00ffffff;
+}
+
 /*
  * Misc adjustments to the policy.  Mostly clobbering reserved fields and
  * duplicating shared fields.  Intentionally hidden fields are annotated.
@@ -249,6 +272,8 @@ static void recalculate_misc(struct cpui
 
     p->basic.raw[0xc] = EMPTY_LEAF;
 
+    zero_leaves(p->basic.raw, 0xe, 0x1c);
+
     p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
 
     /* Most of Power/RAS hidden from guests. */
@@ -323,6 +348,7 @@ static void __init calculate_host_policy
 
     cpuid_featureset_to_policy(boot_cpu_data.x86_capability, p);
     recalculate_xstate(p);
+    recalculate_tile(p);
     recalculate_misc(p);
 
     /* When vPMU is disabled, drop it from the host policy. */
@@ -410,6 +436,7 @@ static void __init calculate_pv_max_poli
     sanitise_featureset(pv_featureset);
     cpuid_featureset_to_policy(pv_featureset, p);
     recalculate_xstate(p);
+    recalculate_tile(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
 
@@ -434,6 +461,7 @@ static void __init calculate_pv_def_poli
     sanitise_featureset(pv_featureset);
     cpuid_featureset_to_policy(pv_featureset, p);
     recalculate_xstate(p);
+    recalculate_tile(p);
 
     x86_cpuid_policy_shrink_max_leaves(p);
 }
@@ -495,6 +523,7 @@ static void __init calculate_hvm_max_pol
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
+    recalculate_tile(p);
 
     x86_cpuid_policy_shrink_max_leaves(p);
 }
@@ -521,6 +550,7 @@ static void __init calculate_hvm_def_pol
     sanitise_featureset(hvm_featureset);
     cpuid_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
+    recalculate_tile(p);
 
     x86_cpuid_policy_shrink_max_leaves(p);
 }
@@ -591,6 +621,7 @@ void recalculate_cpuid_policy(struct dom
 
     p->basic.max_leaf   = min(p->basic.max_leaf,   max->basic.max_leaf);
     p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
+    p->tile.max_palette = min(p->tile.max_palette, max->tile.max_palette);
     p->extd.max_leaf    = 0x80000000 | min(p->extd.max_leaf & 0xffff,
                                            ((p->x86_vendor & (X86_VENDOR_AMD |
                                                               X86_VENDOR_HYGON))
@@ -681,6 +712,7 @@ void recalculate_cpuid_policy(struct dom
     p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
 
     recalculate_xstate(p);
+    recalculate_tile(p);
     recalculate_misc(p);
 
     for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
@@ -803,6 +835,22 @@ void guest_cpuid(const struct vcpu *v, u
             *res = array_access_nospec(p->xstate.raw, subleaf);
             break;
 
+        case 0x1d:
+            ASSERT(p->tile.max_palette < ARRAY_SIZE(p->tile.raw));
+            if ( subleaf > min_t(uint32_t, p->tile.max_palette,
+                                 ARRAY_SIZE(p->tile.raw) - 1) )
+                return;
+
+            *res = array_access_nospec(p->tile.raw, subleaf);
+            break;
+
+        case 0x1e:
+            if ( subleaf >= ARRAY_SIZE(p->tmul.raw) )
+                return;
+
+            *res = array_access_nospec(p->tmul.raw, subleaf);
+            break;
+
         default:
             *res = array_access_nospec(p->basic.raw, leaf);
             break;
@@ -1136,6 +1184,8 @@ static void __init __maybe_unused build_
                  sizeof(raw_cpuid_policy.feat.raw));
     BUILD_BUG_ON(sizeof(raw_cpuid_policy.xstate) !=
                  sizeof(raw_cpuid_policy.xstate.raw));
+    BUILD_BUG_ON(sizeof(raw_cpuid_policy.tile) !=
+                 sizeof(raw_cpuid_policy.tile.raw));
     BUILD_BUG_ON(sizeof(raw_cpuid_policy.extd) !=
                  sizeof(raw_cpuid_policy.extd.raw));
 }
--- a/xen/include/xen/lib/x86/cpuid.h
+++ b/xen/include/xen/lib/x86/cpuid.h
@@ -78,11 +78,13 @@ unsigned int x86_cpuid_lookup_vendor(uin
  */
 const char *x86_cpuid_vendor_to_str(unsigned int vendor);
 
-#define CPUID_GUEST_NR_BASIC      (0xdu + 1)
+#define CPUID_GUEST_NR_BASIC      (0x1eu + 1)
 #define CPUID_GUEST_NR_CACHE      (5u + 1)
 #define CPUID_GUEST_NR_FEAT       (1u + 1)
 #define CPUID_GUEST_NR_TOPO       (1u + 1)
 #define CPUID_GUEST_NR_XSTATE     (62u + 1)
+#define CPUID_GUEST_NR_PALETTE    (1u + 1)
+#define CPUID_GUEST_NR_TMUL       (0u + 1)
 #define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
 #define CPUID_GUEST_NR_EXTD_AMD   (0x1cu + 1)
 #define CPUID_GUEST_NR_EXTD       MAX(CPUID_GUEST_NR_EXTD_INTEL, \
@@ -225,6 +227,35 @@ struct cpuid_policy
         } comp[CPUID_GUEST_NR_XSTATE];
     } xstate;
 
+    /* Structured tile information leaf: 0x00000001d[xx] */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_PALETTE];
+        struct {
+            /* Subleaf 0. */
+            uint32_t max_palette;
+            uint32_t /* b */:32, /* c */:32, /* d */:32;
+        };
+
+        /* Per-palette common state.  Valid for i >= 1. */
+        struct {
+            uint16_t tot_bytes, bytes_per_tile;
+            uint16_t bytes_per_row, num_regs;
+            uint16_t max_rows, :16;
+            uint32_t /* d */:32;
+        } palette[CPUID_GUEST_NR_PALETTE];
+    } tile;
+
+    /* Structured tmul information leaf: 0x00000001e[xx] */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_TMUL];
+        struct {
+            /* Subleaf 0. */
+            uint32_t /* a */:32;
+            uint32_t maxk:8, maxn:16, :8;
+            uint32_t /* c */:32, /* d */:32;
+        };
+    } tmul;
+
     /* Extended leaves: 0x800000xx */
     union {
         struct cpuid_leaf raw[CPUID_GUEST_NR_EXTD];
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -170,6 +170,18 @@ void x86_cpuid_policy_fill_native(struct
         }
     }
 
+    if ( p->basic.max_leaf >= 0x1d )
+    {
+        cpuid_count_leaf(0x1d, 0, &p->tile.raw[0]);
+
+        for ( i = 1; i <= MIN(p->tile.max_palette,
+                              ARRAY_SIZE(p->tile.raw) - 1); ++i )
+            cpuid_count_leaf(0x1d, i, &p->tile.raw[i]);
+    }
+
+    if ( p->basic.max_leaf >= 0x1e )
+        cpuid_count_leaf(0x1e, 0, &p->tmul.raw[0]);
+
     /* Extended leaves. */
     cpuid_leaf(0x80000000, &p->extd.raw[0]);
     for ( i = 1; i <= MIN(p->extd.max_leaf & 0xffffU,
@@ -232,6 +244,19 @@ void x86_cpuid_policy_clear_out_of_range
                     ARRAY_SIZE(p->xstate.raw) - 1);
     }
 
+    if ( p->basic.max_leaf < 0x1d ||
+         (cpuid_policy_xstates(p) &
+          (X86_XCR0_TILECFG | X86_XCR0_TILEDATA)) !=
+         (X86_XCR0_TILECFG | X86_XCR0_TILEDATA) )
+        memset(p->tile.raw, 0, sizeof(p->tile.raw));
+    else
+        zero_leaves(p->tile.raw, p->tile.max_palette + 1,
+                    ARRAY_SIZE(p->tile.raw) - 1);
+
+    if ( p->basic.max_leaf < 0x1e || !p->tile.max_palette ||
+         (!p->feat.amx_int8 && !p->feat.amx_bf16) )
+        memset(p->tmul.raw, 0, sizeof(p->tmul.raw));
+
     zero_leaves(p->extd.raw,
                 ((p->extd.max_leaf >> 16) == 0x8000
                  ? (p->extd.max_leaf & 0xffff) + 1 : 0),
@@ -244,6 +269,8 @@ void x86_cpuid_policy_bound_max_leaves(s
         min_t(uint32_t, p->basic.max_leaf, ARRAY_SIZE(p->basic.raw) - 1);
     p->feat.max_subleaf =
         min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
+    p->tile.max_palette =
+        min_t(uint32_t, p->tile.max_palette, ARRAY_SIZE(p->tile.raw) - 1);
     p->extd.max_leaf = 0x80000000 | min_t(uint32_t, p->extd.max_leaf & 0xffff,
                                           ARRAY_SIZE(p->extd.raw) - 1);
 }
@@ -271,6 +298,21 @@ void x86_cpuid_policy_shrink_max_leaves(
      */
     p->basic.raw[0xd] = p->xstate.raw[0];
 
+    for ( i = p->tile.max_palette; i; --i )
+        if ( p->tile.raw[i].a | p->tile.raw[i].b |
+             p->tile.raw[i].c | p->tile.raw[i].d )
+            break;
+    if ( i )
+        p->tile.max_palette = i;
+    else
+    {
+        ASSERT(!p->feat.amx_tile);
+        zero_leaves(p->tile.raw, 0, 0);
+    }
+    p->basic.raw[0x1d] = p->tile.raw[0];
+
+    p->basic.raw[0x1e] = p->tmul.raw[0];
+
     for ( i = p->basic.max_leaf; i; --i )
         if ( p->basic.raw[i].a | p->basic.raw[i].b |
              p->basic.raw[i].c | p->basic.raw[i].d )
@@ -404,6 +446,19 @@ int x86_cpuid_copy_to_buffer(const struc
             break;
         }
 
+        case 0x1d:
+            for ( subleaf = 0;
+                  subleaf <= MIN(p->tile.max_palette,
+                                 ARRAY_SIZE(p->tile.raw) - 1); ++subleaf )
+                COPY_LEAF(leaf, subleaf, &p->tile.raw[subleaf]);
+            break;
+
+        case 0x1e:
+            for ( subleaf = 0;
+                  subleaf <= ARRAY_SIZE(p->tmul.raw) - 1; ++subleaf )
+                COPY_LEAF(leaf, subleaf, &p->tmul.raw[subleaf]);
+            break;
+
         default:
             COPY_LEAF(leaf, XEN_CPUID_NO_SUBLEAF, &p->basic.raw[leaf]);
             break;
@@ -496,6 +551,20 @@ int x86_cpuid_copy_from_buffer(struct cp
                 array_access_nospec(p->xstate.raw, data.subleaf) = l;
                 break;
 
+            case 0x1d:
+                if ( data.subleaf >= ARRAY_SIZE(p->tile.raw) )
+                    goto out_of_range;
+
+                array_access_nospec(p->tile.raw, data.subleaf) = l;
+                break;
+
+            case 0x1e:
+                if ( data.subleaf >= ARRAY_SIZE(p->tmul.raw) )
+                    goto out_of_range;
+
+                array_access_nospec(p->tmul.raw, data.subleaf) = l;
+                break;
+
             default:
                 if ( data.subleaf != XEN_CPUID_NO_SUBLEAF )
                     goto out_of_range;
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -7,6 +7,7 @@ int x86_cpu_policies_are_compatible(cons
                                     struct cpu_policy_errors *err)
 {
     struct cpu_policy_errors e = INIT_CPU_POLICY_ERRORS;
+    unsigned int i;
     int ret = -EINVAL;
 
 #define NA XEN_CPUID_NO_SUBLEAF
@@ -21,6 +22,31 @@ int x86_cpu_policies_are_compatible(cons
     if ( guest->cpuid->feat.max_subleaf > host->cpuid->feat.max_subleaf )
         FAIL_CPUID(7, 0);
 
+    if ( (guest->cpuid->feat.amx_tile && !guest->cpuid->tile.max_palette) ||
+         guest->cpuid->tile.max_palette > host->cpuid->tile.max_palette )
+        FAIL_CPUID(0x1d, 0);
+
+    for ( i = 1; i <= guest->cpuid->tile.max_palette; ++i )
+    {
+        const typeof(guest->cpuid->tile.palette[0]) *gt, *ht;
+
+        gt = &guest->cpuid->tile.palette[i];
+        ht = &host->cpuid->tile.palette[i];
+
+        if ( gt->tot_bytes != ht->tot_bytes ||
+             gt->bytes_per_tile != ht->bytes_per_tile ||
+             gt->bytes_per_row != ht->bytes_per_row ||
+             !gt->num_regs || gt->num_regs > ht->num_regs ||
+             !gt->max_rows || gt->max_rows > ht->max_rows )
+            FAIL_CPUID(0x1d, i);
+    }
+
+    if ( ((guest->cpuid->feat.amx_int8 || guest->cpuid->feat.amx_bf16) &&
+          (!guest->cpuid->tmul.maxk || !guest->cpuid->tmul.maxn)) ||
+         guest->cpuid->tmul.maxk > host->cpuid->tmul.maxk ||
+         guest->cpuid->tmul.maxn > host->cpuid->tmul.maxn )
+        FAIL_CPUID(0x1e, 0);
+
     if ( guest->cpuid->extd.max_leaf > host->cpuid->extd.max_leaf )
         FAIL_CPUID(0x80000000, NA);
 
--- a/xen/lib/x86/private.h
+++ b/xen/lib/x86/private.h
@@ -17,13 +17,17 @@
 
 #else
 
+#include <assert.h>
 #include <errno.h>
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stddef.h>
 #include <string.h>
 
+#define ASSERT assert
+
 #include <xen/asm/msr-index.h>
+#include <xen/asm/x86-defns.h>
 #include <xen/asm/x86-vendors.h>
 
 #include <xen-tools/libs.h>



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34542.65664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCwi-0001Ct-Kc; Mon, 23 Nov 2020 14:36:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34542.65664; Mon, 23 Nov 2020 14:36:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCwi-0001Cm-G4; Mon, 23 Nov 2020 14:36:00 +0000
Received: by outflank-mailman (input) for mailman id 34542;
 Mon, 23 Nov 2020 14:35:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCwg-0001Ch-Pt
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:35:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bca35d8d-a5fb-45c3-ab03-fb8450ed5b2a;
 Mon, 23 Nov 2020 14:35:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 56C35ACD5;
 Mon, 23 Nov 2020 14:35:57 +0000 (UTC)
X-Inumbo-ID: bca35d8d-a5fb-45c3-ab03-fb8450ed5b2a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606142157; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=34ofaxpzjh8AyHvZBsnreMoOPw77UuBjoo4CEQ9cQ6Q=;
	b=SJ1UK5yM/go1LSkVuhzHDE/ZMT7yPPj4nuzcY74a3T8873g8Jk5xNKayxOEN9KV/HqPlo/
	bBQyPAz1TGlTBnP/4gtAFenD8IQwrWjy0nou4Ibn6d/aL/0REqVRtSJGON0EaiO3gdk33a
	7i4rLepni/cnIIeHamgVdsSzj8eDdM0=
Subject: [PATCH v2 15/17] x86emul: introduce X86EMUL_FPU_tile
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <2a1e1a7e-d29c-ef4e-8741-ceb46ee58dde@suse.com>
Date: Mon, 23 Nov 2020 15:35:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This will be used by AMX insns. Note that CR0.TS behavior is only
assumed to be similar to that of AVX* insns, as the ISA extensions
document (as of rev 041) doesn't specify it either way. But since XFD is
not supposed to be used for lazy context restore, it's unlikely to work
any other way.
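
As a hedged illustration of the gating above (a standalone sketch, not
Xen code; the bit positions follow the SDM's XCR0 assignments for the
two tile state components, and the helper name is made up here), the
check added to _get_fpu() amounts to:

```c
#include <stdbool.h>
#include <stdint.h>

/* XCR0 bit positions for the two AMX state components. */
#define X86_XCR0_TILECFG  (1ULL << 17)
#define X86_XCR0_TILEDATA (1ULL << 18)

/*
 * Illustrative helper: AMX insns are usable only when both tile state
 * components are enabled in XCR0 together, mirroring the patch's
 * X86EMUL_FPU_tile case in _get_fpu().
 */
static bool amx_usable(uint64_t xcr0)
{
    return (xcr0 & (X86_XCR0_TILECFG | X86_XCR0_TILEDATA)) ==
           (X86_XCR0_TILECFG | X86_XCR0_TILEDATA);
}
```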

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1420,6 +1420,12 @@ static int _get_fpu(
             return X86EMUL_UNHANDLEABLE;
         break;
 
+    case X86EMUL_FPU_tile:
+        ASSERT(mode_64bit());
+        if ( !(xcr0 & X86_XCR0_TILECFG) || !(xcr0 & X86_XCR0_TILEDATA) )
+            return X86EMUL_UNHANDLEABLE;
+        break;
+
     default:
         break;
     }
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -172,6 +172,7 @@ enum x86_emulate_fpu_type {
     X86EMUL_FPU_ymm, /* AVX/XOP instruction set (%ymm0-%ymm7/15) */
     X86EMUL_FPU_opmask, /* AVX512 opmask instruction set (%k0-%k7) */
     X86EMUL_FPU_zmm, /* AVX512 instruction set (%zmm0-%zmm7/31) */
+    X86EMUL_FPU_tile, /* AMX instruction set (%tmm0-%tmmN, tilecfg) */
     /* This sentinel will never be passed to ->get_fpu(). */
     X86EMUL_FPU_none
 };



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:36:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34548.65676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCxA-0001KG-0h; Mon, 23 Nov 2020 14:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34548.65676; Mon, 23 Nov 2020 14:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCx9-0001K9-Tu; Mon, 23 Nov 2020 14:36:27 +0000
Received: by outflank-mailman (input) for mailman id 34548;
 Mon, 23 Nov 2020 14:36:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCx8-0001Jm-Jh
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:36:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c066fc8b-2b11-40bf-82ea-686599a4b2ef;
 Mon, 23 Nov 2020 14:36:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0371ADD6;
 Mon, 23 Nov 2020 14:36:24 +0000 (UTC)
X-Inumbo-ID: c066fc8b-2b11-40bf-82ea-686599a4b2ef
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606142184; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=r3g4BskFXFpCG96kBCYAqDugoPJIjg5Q3ivacDS2QB8=;
	b=U0iR6XkTniNv+RdfUInl8gokCgn7R7EBABm33ruRxk85NEsELHmiZdTXg4FdGl13sm7BlL
	aEWChRKq1Cq/em/4wjGB1TdQfTQe2Bew4jkt195IYCH3WDFudwN+HAwjYSo5GQyKfTJXQF
	Z+5IACIyP1MVSsFSNjfbo19OoyVLVrM=
Subject: [PATCH v2 16/17] x86emul: support TILERELEASE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <0efaaac8-e304-a1dc-d5cc-7081dc9f945e@suse.com>
Date: Mon, 23 Nov 2020 15:36:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This is relatively straightforward, and hence well suited for
introducing a few other general pieces.

Testing of this will be added once a sensible test can be put together,
i.e. once support for other insns is in place as well.
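
For reference, the fixed encoding being recognized is
VEX.128.NP.0F38.W0 49 with ModRM 0xC0, i.e. the byte sequence
C4 E2 78 49 C0 (as used in the test harness). A minimal standalone
sketch of that match, assuming the 3-byte VEX field layout and not
reflecting the emulator's actual decode path, could look like:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Check whether a 5-byte sequence is the fixed TILERELEASE encoding:
 * 3-byte VEX prefix, 0F38 map, W=0, L=0, no SIMD prefix, opcode 0x49,
 * ModRM 0xC0.
 */
static bool is_tilerelease(const uint8_t insn[5])
{
    if ( insn[0] != 0xc4 )          /* 3-byte VEX prefix */
        return false;
    if ( (insn[1] & 0x1f) != 0x02 ) /* mmmmm = 2: 0f38 opcode map */
        return false;
    if ( insn[2] & 0x80 )           /* VEX.W must be 0 */
        return false;
    if ( insn[2] & 0x04 )           /* VEX.L must be 0 (128-bit) */
        return false;
    if ( insn[2] & 0x03 )           /* pp = 0: no SIMD prefix */
        return false;
    return insn[3] == 0x49 && insn[4] == 0xc0;
}
```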

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1335,6 +1335,7 @@ static const struct vex {
     { { 0x45 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsrlv{d,q} */
     { { 0x46 }, 2, T, R, pfx_66, W0, Ln }, /* vpsravd */
     { { 0x47 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsllv{d,q} */
+    { { 0x49, 0xc0 }, 2, F, N, pfx_no, W0, L0 }, /* tilerelease */
     { { 0x50 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusd */
     { { 0x51 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusds */
     { { 0x52 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpwssd */
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -247,6 +247,9 @@ int emul_test_get_fpu(
             break;
     default:
         return X86EMUL_UNHANDLEABLE;
+
+    case X86EMUL_FPU_tile:
+        return cpu_has_amx_tile ? X86EMUL_OKAY : X86EMUL_UNHANDLEABLE;
     }
     return X86EMUL_OKAY;
 }
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -475,6 +475,7 @@ static const struct ext0f38_table {
     [0x43] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x44] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
     [0x45 ... 0x47] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x49] = { .simd_size = simd_other, .two_op = 1 },
     [0x4c] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
     [0x4d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x4e] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
@@ -2014,6 +2015,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
+#define vcpu_has_amx_tile()    (ctxt->cpuid->feat.amx_tile)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
@@ -9460,6 +9462,24 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         goto simd_0f_avx;
 
+    case X86EMUL_OPC_VEX(0x0f38, 0x49):
+        generate_exception_if(!mode_64bit() || vex.l || vex.w, EXC_UD);
+        if ( ea.type == OP_REG )
+        {
+            switch ( modrm )
+            {
+            case 0xc0: /* tilerelease */
+                host_and_vcpu_must_have(amx_tile);
+                get_fpu(X86EMUL_FPU_tile);
+                op_bytes = 1; /* fake */
+                goto simd_0f_common;
+
+            default:
+                goto unrecognized_insn;
+            }
+        }
+        goto unimplemented_insn;
+
     case X86EMUL_OPC_VEX_66(0x0f38, 0x50): /* vpdpbusd [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x51): /* vpdpbusds [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x52): /* vpdpwssd [xy]mm/mem,[xy]mm,[xy]mm */
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -131,6 +131,7 @@
 #define cpu_has_avx512_vp2intersect boot_cpu_has(X86_FEATURE_AVX512_VP2INTERSECT)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
 #define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
+#define cpu_has_amx_tile        boot_cpu_has(X86_FEATURE_AMX_TILE)
 
 /* CPUID level 0x00000007:1.eax */
 #define cpu_has_avx_vnni        boot_cpu_has(X86_FEATURE_AVX_VNNI)



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:37:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:37:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34558.65688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCxl-0001SE-AQ; Mon, 23 Nov 2020 14:37:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34558.65688; Mon, 23 Nov 2020 14:37:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCxl-0001S7-6P; Mon, 23 Nov 2020 14:37:05 +0000
Received: by outflank-mailman (input) for mailman id 34558;
 Mon, 23 Nov 2020 14:37:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khCxk-0001S2-DU
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:37:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6836569-30e1-4496-bef1-0b7b2567b3f6;
 Mon, 23 Nov 2020 14:37:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6769AAD1E;
 Mon, 23 Nov 2020 14:37:02 +0000 (UTC)
X-Inumbo-ID: f6836569-30e1-4496-bef1-0b7b2567b3f6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606142222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5oy8cDRDAJz5j/V8PTEeRrB9FvSG2sdhoCKudh+oRWk=;
	b=EtCE+v8PnSPfcTbvFkK/x8NivyV4ijk+LCq9YRH/DGAlzG/XN1dSz4f6bagw6diqex9B1V
	frJ748136zv9LBo53esfod8TWvqXk+W4Q314nbBGdyEuqgxGnrlFvlD9e3XKDEqOlR0w6x
	3Wegc4g58syJ5/n+/Q4NfVpnXSKiF2M=
Subject: [PATCH v2 17/17] x86emul: support {LD,ST}TILECFG
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Message-ID: <9e10e975-7a5a-8ac9-219f-ebc9a5e373ee@suse.com>
Date: Mon, 23 Nov 2020 15:37:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While ver 041 of the ISA extensions doc also specifies
xcr0_supports_palette() returning false as one of the #GP(0) reasons for
LDTILECFG, the earlier #UD conditions appear to make this check fully
dead.
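
As a standalone sketch of what the patch enforces (mirroring the
64-byte TILECFG layout and the #GP(0) conditions implemented below;
the palette parameters are passed in by hand here, whereas the patch
reads them from CPUID leaf 0x1d):

```c
#include <stdbool.h>
#include <stdint.h>

/* 64-byte TILECFG memory layout as loaded/stored by {LD,ST}TILECFG. */
struct tilecfg {
    uint8_t palette, start_row;
    uint8_t res[14];    /* reserved, must be zero */
    uint16_t colsb[16]; /* bytes per row, per tile register */
    uint8_t rows[16];   /* rows per tile register */
};

/* Sketch of the LDTILECFG #GP(0) checks for a given palette's limits. */
static bool tilecfg_valid(const struct tilecfg *cfg,
                          unsigned int max_palette, unsigned int num_regs,
                          unsigned int bytes_per_row, unsigned int max_rows)
{
    unsigned int i;

    if ( cfg->palette > max_palette )
        return false;
    if ( !cfg->palette ) /* palette 0: remaining fields don't matter */
        return true;

    for ( i = 0; i < sizeof(cfg->res); ++i )
        if ( cfg->res[i] )
            return false;

    /* Valid registers: within bounds, or both fields zero together. */
    for ( i = 0; i < num_regs; ++i )
        if ( cfg->colsb[i] > bytes_per_row || cfg->rows[i] > max_rows ||
             !cfg->colsb[i] != !cfg->rows[i] )
            return false;

    /* All remaining entries must be zero. */
    for ( ; i < 16; ++i )
        if ( cfg->colsb[i] || cfg->rows[i] )
            return false;

    return true;
}
```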

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
SDE: -spr

--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1335,6 +1335,8 @@ static const struct vex {
     { { 0x45 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsrlv{d,q} */
     { { 0x46 }, 2, T, R, pfx_66, W0, Ln }, /* vpsravd */
     { { 0x47 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsllv{d,q} */
+    { { 0x49, 0x00 }, 2, F, R, pfx_no, W0, L0 }, /* ldtilecfg */
+    { { 0x49, 0x00 }, 2, F, W, pfx_66, W0, L0 }, /* sttilecfg */
     { { 0x49, 0xc0 }, 2, F, N, pfx_no, W0, L0 }, /* tilerelease */
     { { 0x50 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusd */
     { { 0x51 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusds */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -898,6 +898,16 @@ int main(int argc, char **argv)
     int rc;
 #ifdef __x86_64__
     unsigned int vendor_native;
+    static const struct {
+        uint8_t palette, start_row;
+        uint8_t res[14];
+        uint16_t colsb[16];
+        uint8_t rows[16];
+    } tilecfg = {
+        .palette = 1,
+        .colsb = { 2, 4, 5, 3 },
+        .rows = { 2, 4, 3, 5 },
+    };
 #else
     unsigned int bcdres_native, bcdres_emul;
 #endif
@@ -4463,6 +4473,74 @@ int main(int argc, char **argv)
         printf("skipped\n");
 
 #ifdef __x86_64__
+    printf("%-40s", "Testing tilerelease;sttilecfg 4(%rcx)...");
+    if ( stack_exec && cpu_has_amx_tile )
+    {
+        decl_insn(tilerelease);
+
+        asm volatile ( put_insn(tilerelease,
+                                /* tilerelease */
+                                ".byte 0xC4, 0xE2, 0x78, 0x49, 0xC0;"
+                                /* sttilecfg 4(%0) */
+                                ".byte 0xC4, 0xE2, 0x79, 0x49, 0x41, 0x04")
+                                :: "c" (NULL) );
+
+        memset(res, ~0, 72);
+        set_insn(tilerelease);
+        regs.ecx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc == X86EMUL_OKAY )
+            rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(tilerelease) ||
+             ~res[0] || ~res[17] || memchr_inv(res + 1, 0, 64) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing ldtilecfg (%rdx)...");
+    if ( stack_exec && cpu_has_amx_tile )
+    {
+        decl_insn(ldtilecfg);
+
+        asm volatile ( put_insn(ldtilecfg,
+                                /* ldtilecfg (%0) */
+                                ".byte 0xC4, 0xE2, 0x78, 0x49, 0x02")
+                                :: "d" (NULL) );
+
+        set_insn(ldtilecfg);
+        regs.edx = (unsigned long)&tilecfg;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(ldtilecfg) )
+            goto fail;
+        printf("pending\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing sttilecfg -4(%rcx)...");
+    if ( stack_exec && cpu_has_amx_tile )
+    {
+        decl_insn(sttilecfg);
+
+        asm volatile ( put_insn(sttilecfg,
+                                /* sttilecfg -4(%0) */
+                                ".byte 0xC4, 0xE2, 0x79, 0x49, 0x41, 0xfc")
+                                :: "c" (NULL) );
+
+        memset(res, ~0, 72);
+        set_insn(sttilecfg);
+        regs.ecx = (unsigned long)(res + 2);
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(sttilecfg) ||
+             ~res[0] || ~res[17] || memcmp(res + 1, &tilecfg, 64) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing vzeroupper (compat)...");
     if ( cpu_has_avx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -67,6 +67,17 @@
 
 #define is_canonical_address(x) (((int64_t)(x) >> 47) == ((int64_t)(x) >> 63))
 
+static inline void *memchr_inv(const void *s, int c, size_t n)
+{
+    const unsigned char *p = s;
+
+    while ( n-- )
+        if ( (unsigned char)c != *p++ )
+            return (void *)(p - 1);
+
+    return NULL;
+}
+
 extern uint32_t mxcsr_mask;
 extern struct cpuid_policy cp;
 
@@ -170,6 +181,8 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_avx512_vp2intersect (cp.feat.avx512_vp2intersect && xcr0_mask(0xe6))
 #define cpu_has_serialize  cp.feat.serialize
+#define cpu_has_amx_tile   (cp.feat.amx_tile && \
+                            xcr0_mask(X86_XCR0_TILECFG | X86_XCR0_TILEDATA))
 #define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -957,6 +957,12 @@ typedef union {
     uint64_t __attribute__ ((aligned(16))) xmm[2];
     uint64_t __attribute__ ((aligned(32))) ymm[4];
     uint64_t __attribute__ ((aligned(64))) zmm[8];
+    struct {
+        uint8_t palette, start_row;
+        uint8_t res[14];
+        uint16_t colsb[16];
+        uint8_t rows[16];
+    } tilecfg;
     uint32_t data32[16];
 } mmval_t;
 
@@ -2848,6 +2854,10 @@ x86_decode_0f38(
         state->simd_size = simd_scalar_vexw;
         break;
 
+    case X86EMUL_OPC_VEX_66(0, 0x49): /* sttilecfg */
+        state->desc = DstMem | SrcImplicit | Mov;
+        break;
+
     case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */
     case X86EMUL_OPC_EVEX_66(0, 0x7b): /* vpbroadcastw */
     case X86EMUL_OPC_EVEX_66(0, 0x7c): /* vpbroadcast{d,q} */
@@ -9478,7 +9488,66 @@ x86_emulate(
                 goto unrecognized_insn;
             }
         }
-        goto unimplemented_insn;
+
+        switch ( modrm_reg & 7 )
+        {
+        case 0: /* ldtilecfg mem */
+            generate_exception_if(vex.reg != 0xf, EXC_UD);
+            host_and_vcpu_must_have(amx_tile);
+            get_fpu(X86EMUL_FPU_tile);
+            rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64, ctxt);
+            if ( rc != X86EMUL_OKAY )
+                goto done;
+            generate_exception_if((mmvalp->tilecfg.palette >
+                                   ctxt->cpuid->tile.max_palette),
+                                  EXC_GP, 0);
+            if ( mmvalp->tilecfg.palette )
+            {
+                const typeof(*ctxt->cpuid->tile.palette) *palette;
+
+                generate_exception_if(memchr_inv(mmvalp->tilecfg.res, 0,
+                                                 sizeof(mmvalp->tilecfg.res)),
+                                      EXC_GP, 0);
+
+                /*
+                 * Parameters for valid registers must be within bounds, or
+                 * both be zero at the same time.
+                 */
+                palette = &ctxt->cpuid->tile.palette[mmvalp->tilecfg.palette];
+                for ( i = 0; i < palette->num_regs; ++i )
+                    generate_exception_if(((mmvalp->tilecfg.colsb[i] >
+                                            palette->bytes_per_row) ||
+                                           (mmvalp->tilecfg.rows[i] >
+                                            palette->max_rows) ||
+                                           (!mmvalp->tilecfg.colsb[i] !=
+                                            !mmvalp->tilecfg.rows[i])),
+                                          EXC_GP, 0);
+
+                /* All remaining entries must be zero. */
+                for ( ; i < 16; ++i )
+                    generate_exception_if((mmvalp->tilecfg.colsb[i] ||
+                                           mmvalp->tilecfg.rows[i]),
+                                          EXC_GP, 0);
+            }
+            op_bytes = 64;
+            goto simd_0f_common;
+        }
+        goto unrecognized_insn;
+
+    case X86EMUL_OPC_VEX_66(0x0f38, 0x49):
+        generate_exception_if(!mode_64bit() || vex.l || vex.w, EXC_UD);
+        if ( ea.type == OP_REG )
+            goto unrecognized_insn;
+
+        switch ( modrm_reg & 7 )
+        {
+        case 0: /* sttilecfg mem */
+            host_and_vcpu_must_have(amx_tile);
+            get_fpu(X86EMUL_FPU_tile);
+            op_bytes = 64;
+            goto simd_0f_common;
+        }
+        goto unrecognized_insn;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x50): /* vpdpbusd [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x51): /* vpdpbusds [xy]mm/mem,[xy]mm,[xy]mm */



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:39:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34567.65700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCzj-0001er-NW; Mon, 23 Nov 2020 14:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34567.65700; Mon, 23 Nov 2020 14:39:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khCzj-0001ek-Iw; Mon, 23 Nov 2020 14:39:07 +0000
Received: by outflank-mailman (input) for mailman id 34567;
 Mon, 23 Nov 2020 14:39:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dDmC=E5=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1khCzi-0001ef-CR
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:39:06 +0000
Received: from mail-lj1-x22b.google.com (unknown [2a00:1450:4864:20::22b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e5f8726-d788-4e7b-9752-7eb3fed2083d;
 Mon, 23 Nov 2020 14:39:05 +0000 (UTC)
Received: by mail-lj1-x22b.google.com with SMTP id f24so5368441ljk.13
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 06:39:05 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id x123sm1406452lfa.154.2020.11.23.06.39.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 23 Nov 2020 06:39:03 -0800 (PST)
X-Inumbo-ID: 3e5f8726-d788-4e7b-9752-7eb3fed2083d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=iLACVdXQ9kDpgteS7S8jPt+5lGYTIBzruTxQ2C6tPk4=;
        b=cZspmf3cAjT2CaNqU0iYH10U+2vKhvKKP3HIwFLrjha4cCoJ4eLEeAEwGfeQwxf+8y
         qTzCEvKUf2+/8aUBslnRxM2e8ehO7sndTTHcUlL2Pvd7WHIQjA5pojs8XmEtvN++eN8v
         bN2/YCDhDXtLVr6F1Y6k7GfRPlWdnSwrkInbfiJRuwjXFtJ2zn0x6u1pbUuQHrxRKVeo
         J6+mz4O9Ovfx2r6aC40oXqo0tLY8Y6wLXvYNhK6TblBLJKSdE54benqHa5yX+sy8QKMi
         +FLCpIVHEVKls5Q3LOkLM0sR2fT3SHYKWqxV5h7jL9JXevrp7WjTpu6PM4k8sx4BdsfB
         x8og==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=iLACVdXQ9kDpgteS7S8jPt+5lGYTIBzruTxQ2C6tPk4=;
        b=KFcd1XFssBzmOkXtvit7S0OS96Ksz6X2kgb2PAHybOmI1O3nKhZj6BlEMUyrRtTSSC
         81VPTlxBBKtQg7erSVVPPCUYvX23olkRG00yHP/sqCnjnbsNes+mEZsSN0chKqWd+CjF
         y7WGJvdS2osQKBbtHNTs5WhbpM+W5Erufrxd6Usz7EZiAuBizx7JS7Hw3wG03Lb7ZOEh
         1lLtS2Uw4jq2BZ2gf8dPlCxWt6K2yqWlbAk8aFpXQkfCQDtPe+LkIt7R+FBEK7amIsa9
         ZwCb2cUV8CJxs82UnAm2bViawqs3KzZNFa2ew4TKgruf3781rqu64ecrjx2Jr/rdvTtP
         K95Q==
X-Gm-Message-State: AOAM532jnTNP3uaHdRf/L+cwhx7+02x7T6NKMbSHT62SFyCaVicPlcK1
	/Ke/9fV3YHHg1Qy+DWoSBC8e1wmZGcY/Ew==
X-Google-Smtp-Source: ABdhPJwXhfHRhMozPV7Sw/jeg7YKbhyp1adzES0tsdUvg/Qi0enhQC6m/0NiGSD2mdTPUoVWJ6MqOg==
X-Received: by 2002:a2e:b536:: with SMTP id z22mr13626210ljm.177.1606142343916;
        Mon, 23 Nov 2020 06:39:03 -0800 (PST)
Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
From: Oleksandr <olekstysh@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
 <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
 <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com>
Message-ID: <04a81b7e-213a-968b-048c-dfa68b6e3b0d@gmail.com>
Date: Mon, 23 Nov 2020 16:39:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


Hi Jan.


As agreed, below is the list of proposed renamings (and new names) within 
the current series.

If there are no objections, I will follow the proposed renaming; if there 
are any, please let me know.


1. Global (existing):
hvm_map_mem_type_to_ioreq_server     -> ioreq_server_map_mem_type
hvm_select_ioreq_server              -> ioreq_server_select
hvm_send_ioreq                       -> ioreq_send
hvm_ioreq_init                       -> ioreq_init
hvm_destroy_all_ioreq_servers        -> ioreq_server_destroy_all
hvm_all_ioreq_servers_add_vcpu       -> ioreq_server_add_vcpu_all
hvm_all_ioreq_servers_remove_vcpu    -> ioreq_server_remove_vcpu_all
hvm_broadcast_ioreq                  -> ioreq_broadcast
hvm_create_ioreq_server              -> ioreq_server_create
hvm_get_ioreq_server_info            -> ioreq_server_get_info
hvm_map_io_range_to_ioreq_server     -> ioreq_server_map_io_range
hvm_unmap_io_range_from_ioreq_server -> ioreq_server_unmap_io_range
hvm_set_ioreq_server_state           -> ioreq_server_set_state
hvm_destroy_ioreq_server             -> ioreq_server_destroy
hvm_get_ioreq_server_frame           -> ioreq_server_get_frame
hvm_ioreq_needs_completion           -> ioreq_needs_completion
hvm_mmio_first_byte                  -> ioreq_mmio_first_byte
hvm_mmio_last_byte                   -> ioreq_mmio_last_byte
send_invalidate_req                  -> ioreq_signal_mapcache_invalidate

handle_hvm_io_completion             -> handle_io_completion
hvm_io_pending                       -> io_pending


2. Global (new):
arch_io_completion
arch_ioreq_server_map_pages
arch_ioreq_server_unmap_pages
arch_ioreq_server_enable
arch_ioreq_server_disable
arch_ioreq_server_destroy
arch_ioreq_server_map_mem_type
arch_ioreq_server_destroy_all
arch_ioreq_server_get_type_addr
arch_ioreq_init
domain_has_ioreq_server


3. Local (existing) in common ioreq.c:
hvm_alloc_ioreq_mfn               -> ioreq_alloc_mfn
hvm_free_ioreq_mfn                -> ioreq_free_mfn
hvm_update_ioreq_evtchn           -> ioreq_update_evtchn
hvm_ioreq_server_add_vcpu         -> ioreq_server_add_vcpu
hvm_ioreq_server_remove_vcpu      -> ioreq_server_remove_vcpu
hvm_ioreq_server_remove_all_vcpus -> ioreq_server_remove_all_vcpus
hvm_ioreq_server_alloc_pages      -> ioreq_server_alloc_pages
hvm_ioreq_server_free_pages       -> ioreq_server_free_pages
hvm_ioreq_server_free_rangesets   -> ioreq_server_free_rangesets
hvm_ioreq_server_alloc_rangesets  -> ioreq_server_alloc_rangesets
hvm_ioreq_server_enable           -> ioreq_server_enable
hvm_ioreq_server_disable          -> ioreq_server_disable
hvm_ioreq_server_init             -> ioreq_server_init
hvm_ioreq_server_deinit           -> ioreq_server_deinit
hvm_send_buffered_ioreq           -> ioreq_send_buffered

hvm_wait_for_io                   -> wait_for_io

4. Local (existing) in x86 ioreq.c:
Everything related to the legacy interface (hvm_alloc_legacy_ioreq_gfn, 
etc.) is going to remain as is.



-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 14:56:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 14:56:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34581.65712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDGF-0003OB-6B; Mon, 23 Nov 2020 14:56:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34581.65712; Mon, 23 Nov 2020 14:56:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDGF-0003O4-2L; Mon, 23 Nov 2020 14:56:11 +0000
Received: by outflank-mailman (input) for mailman id 34581;
 Mon, 23 Nov 2020 14:56:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDGD-0003Nz-JV
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 14:56:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6bd6944-e9b3-43bb-9b57-535183e8b5e9;
 Mon, 23 Nov 2020 14:56:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 97C36ACD5;
 Mon, 23 Nov 2020 14:56:06 +0000 (UTC)
X-Inumbo-ID: b6bd6944-e9b3-43bb-9b57-535183e8b5e9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606143366; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CaMDCDhPysB8w6KfnUoI0F/Qbt77/bl/GFHDOpG5NT0=;
	b=cym4jwAPdYLMevmYI1quJbbjv8pAClM58R1c+1Vc+ZNR2uJjITu1FVA5qjs1d30pIb3lnr
	tisbiKdiI2cQlcIKJ47d0zJTn0AnNnMUmjHqupihVBtoEge4yCNVoVmFXahrxkqtPAoEC6
	To9SbJi70FHcRnEWG8aaJKMiNiZY1QE=
Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: Oleksandr <olekstysh@gmail.com>, Paul Durrant <paul@xen.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
 <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
 <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com>
 <04a81b7e-213a-968b-048c-dfa68b6e3b0d@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <96e6622c-08b3-ff85-75f1-14c8b7cd6d6e@suse.com>
Date: Mon, 23 Nov 2020 15:56:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <04a81b7e-213a-968b-048c-dfa68b6e3b0d@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 15:39, Oleksandr wrote:
> As it was agreed, below the list of proposed renaming (naming) within 
> current series.

Thanks for compiling this. A couple of suggestions for consideration:

> 1. Global (existing):
> hvm_map_mem_type_to_ioreq_server     -> ioreq_server_map_mem_type
> hvm_select_ioreq_server              -> ioreq_server_select
> hvm_send_ioreq                       -> ioreq_send
> hvm_ioreq_init                       -> ioreq_init

ioreq_domain_init() (or, imo less desirable, domain_ioreq_init())?

> hvm_destroy_all_ioreq_servers        -> ioreq_server_destroy_all
> hvm_all_ioreq_servers_add_vcpu       -> ioreq_server_add_vcpu_all
> hvm_all_ioreq_servers_remove_vcpu    -> ioreq_server_remove_vcpu_all
> hvm_broadcast_ioreq                  -> ioreq_broadcast
> hvm_create_ioreq_server              -> ioreq_server_create
> hvm_get_ioreq_server_info            -> ioreq_server_get_info
> hvm_map_io_range_to_ioreq_server     -> ioreq_server_map_io_range
> hvm_unmap_io_range_from_ioreq_server -> ioreq_server_unmap_io_range
> hvm_set_ioreq_server_state           -> ioreq_server_set_state
> hvm_destroy_ioreq_server             -> ioreq_server_destroy
> hvm_get_ioreq_server_frame           -> ioreq_server_get_frame
> hvm_ioreq_needs_completion           -> ioreq_needs_completion
> hvm_mmio_first_byte                  -> ioreq_mmio_first_byte
> hvm_mmio_last_byte                   -> ioreq_mmio_last_byte
> send_invalidate_req                  -> ioreq_signal_mapcache_invalidate
> 
> handle_hvm_io_completion             -> handle_io_completion

For this one I'm not sure what to suggest, but I'm not overly happy
with the name.

> hvm_io_pending                       -> io_pending

vcpu_ioreq_pending() or vcpu_any_ioreq_pending()?

> 2. Global (new):
> arch_io_completion
> arch_ioreq_server_map_pages
> arch_ioreq_server_unmap_pages
> arch_ioreq_server_enable
> arch_ioreq_server_disable
> arch_ioreq_server_destroy
> arch_ioreq_server_map_mem_type
> arch_ioreq_server_destroy_all
> arch_ioreq_server_get_type_addr
> arch_ioreq_init

Assuming this is the arch hook of the similarly named function
further up, a similar adjustment may then be wanted here.

> domain_has_ioreq_server
> 
> 
> 3. Local (existing) in common ioreq.c:
> hvm_alloc_ioreq_mfn               -> ioreq_alloc_mfn
> hvm_free_ioreq_mfn                -> ioreq_free_mfn

These two are server functions, so should imo be ioreq_server_...().

However, if they're static (as they are now), no distinguishing
prefix is strictly necessary, i.e. alloc_mfn() and free_mfn() may
be fine. The two names may be too short for Paul's taste, though.
Some similar shortening may be possible for some or all of the ones
below here.

Jan

> hvm_update_ioreq_evtchn           -> ioreq_update_evtchn
> hvm_ioreq_server_add_vcpu         -> ioreq_server_add_vcpu
> hvm_ioreq_server_remove_vcpu      -> ioreq_server_remove_vcpu
> hvm_ioreq_server_remove_all_vcpus -> ioreq_server_remove_all_vcpus
> hvm_ioreq_server_alloc_pages      -> ioreq_server_alloc_pages
> hvm_ioreq_server_free_pages       -> ioreq_server_free_pages
> hvm_ioreq_server_free_rangesets   -> ioreq_server_free_rangesets
> hvm_ioreq_server_alloc_rangesets  -> ioreq_server_alloc_rangesets
> hvm_ioreq_server_enable           -> ioreq_server_enable
> hvm_ioreq_server_disable          -> ioreq_server_disable
> hvm_ioreq_server_init             -> ioreq_server_init
> hvm_ioreq_server_deinit           -> ioreq_server_deinit
> hvm_send_buffered_ioreq           -> ioreq_send_buffered
> 
> hvm_wait_for_io                   -> wait_for_io
> 
> 4. Local (existing) in x86 ioreq.c:
> Everything related to the legacy interface (hvm_alloc_legacy_ioreq_gfn, 
> etc.) is going to remain as is.
> 
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:00:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34591.65724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDKn-0004Iv-RM; Mon, 23 Nov 2020 15:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34591.65724; Mon, 23 Nov 2020 15:00:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDKn-0004Io-OE; Mon, 23 Nov 2020 15:00:53 +0000
Received: by outflank-mailman (input) for mailman id 34591;
 Mon, 23 Nov 2020 15:00:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDKm-0004Ij-T9
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:00:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df3cb675-74ef-42cf-ac82-a6d0e724fbe2;
 Mon, 23 Nov 2020 15:00:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F29BAD7C;
 Mon, 23 Nov 2020 15:00:51 +0000 (UTC)
X-Inumbo-ID: df3cb675-74ef-42cf-ac82-a6d0e724fbe2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606143651; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=HYKRgqgTqjq0czPpd7ZSoeRjVFRO+iwY4Agy3Dgqao4=;
	b=rNLVU5TFJxWvHcAJ6Y6/6VHH+d0B569R7mDCL+jce1EwaZTVlvvlNgOZ8/EUIcN2UBvjwN
	bVZyVoCgx2Y4oCjI/qgWYUbKDTbjkqV0kzyFxWsYUUMJ+Ky+H69c8Ieb1HFPdAsuybVJRd
	eSymHOtJydUkhuTJ0n1+6pSpFelwrEo=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] x86/IRQ: a little bit of tidying
Message-ID: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
Date: Mon, 23 Nov 2020 16:00:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: drop three unused variables
2: reduce casting involved in guest action retrieval

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:02:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34598.65736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDLp-0004Pt-6B; Mon, 23 Nov 2020 15:01:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34598.65736; Mon, 23 Nov 2020 15:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDLp-0004Pm-2W; Mon, 23 Nov 2020 15:01:57 +0000
Received: by outflank-mailman (input) for mailman id 34598;
 Mon, 23 Nov 2020 15:01:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDLn-0004Pe-Jp
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:01:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd456b9a-936d-46a8-b5da-2a135e65e8c0;
 Mon, 23 Nov 2020 15:01:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13045AD7C;
 Mon, 23 Nov 2020 15:01:54 +0000 (UTC)
X-Inumbo-ID: dd456b9a-936d-46a8-b5da-2a135e65e8c0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606143714; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=i+7w6mRETpLiN9gIS/CB5Ox8gdBTEBBgqW1Djhev8xk=;
	b=syqth9XSc/mUBVBEYhesPojU45Qb0OIARL9ykjTHe2nkP3UCHZlyDbqbhSqstqof35oSjg
	9yFJUOmn4iHAvntmhg4kNhamyt1hg0WAbRM06eLYRK2p0OcIEc08jZIhlSsfoCFbjDbr3G
	7ucNGiDLi1PsoLJfvWuXHWbAiVNqz3M=
Subject: [PATCH v2 1/2] x86/IRQ: drop three unused variables
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
Message-ID: <1f4956f8-1d84-b7e1-65ab-d6b78b178a02@suse.com>
Date: Mon, 23 Nov 2020 16:01:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

I didn't bother figuring out which commit(s) should have deleted them
when removing their last uses.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Yet one more.

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1402,7 +1402,6 @@ void desc_guest_eoi(struct irq_desc *des
 {
     irq_guest_action_t *action;
     cpumask_t           cpu_eoi_map;
-    int                 irq;
 
     if ( !(desc->status & IRQ_GUEST) )
     {
@@ -1411,7 +1410,6 @@ void desc_guest_eoi(struct irq_desc *des
     }
 
     action = (irq_guest_action_t *)desc->action;
-    irq = desc - irq_desc;
 
     if ( unlikely(!test_and_clear_bool(pirq->masked)) ||
          unlikely(--action->in_flight != 0) )
@@ -1531,7 +1529,6 @@ int pirq_shared(struct domain *d, int pi
 
 int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
 {
-    unsigned int        irq;
     struct irq_desc         *desc;
     irq_guest_action_t *action, *newaction = NULL;
     int                 rc = 0;
@@ -1548,7 +1545,6 @@ int pirq_guest_bind(struct vcpu *v, stru
     }
 
     action = (irq_guest_action_t *)desc->action;
-    irq = desc - irq_desc;
 
     if ( !(desc->status & IRQ_GUEST) )
     {
@@ -1663,13 +1659,11 @@ int pirq_guest_bind(struct vcpu *v, stru
 static irq_guest_action_t *__pirq_guest_unbind(
     struct domain *d, struct pirq *pirq, struct irq_desc *desc)
 {
-    unsigned int        irq;
     irq_guest_action_t *action;
     cpumask_t           cpu_eoi_map;
     int                 i;
 
     action = (irq_guest_action_t *)desc->action;
-    irq = desc - irq_desc;
 
     if ( unlikely(action == NULL) )
     {



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:02:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:02:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34599.65747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDM6-0004Tt-E7; Mon, 23 Nov 2020 15:02:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34599.65747; Mon, 23 Nov 2020 15:02:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDM6-0004Tl-Ak; Mon, 23 Nov 2020 15:02:14 +0000
Received: by outflank-mailman (input) for mailman id 34599;
 Mon, 23 Nov 2020 15:02:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDM5-0004TX-7C
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:02:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6917ec34-258d-44f8-88ca-c337457f2ffb;
 Mon, 23 Nov 2020 15:02:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 490E0ACD5;
 Mon, 23 Nov 2020 15:02:11 +0000 (UTC)
X-Inumbo-ID: 6917ec34-258d-44f8-88ca-c337457f2ffb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606143731; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FeXeUubaQzbsqNUm4P0TNbxNAkoVC7iXWI1ErzBtwJE=;
	b=dzc3866MEIqaIwMbKLZTn/m6MGANimcPuL5QcPLiUd3G7HwuHz0uhk0tJGWSvAHKIuDdAL
	aAUDb03z7pKYgH3I0uKsl8GjbpX89iibjeiiyNbq7xdtKVaiBfJrnmjTonsCqYrbsZsU5P
	TtFk4Rgb2odkFb10FCu631gm3dEjZPU=
Subject: [PATCH v2 2/2] x86/IRQ: reduce casting involved in guest action
 retrieval
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
Message-ID: <2f817015-29f6-a3d5-0a3a-e9f0c91ea1c6@suse.com>
Date: Mon, 23 Nov 2020 16:02:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <935d31ab-cb65-02d7-a624-d5e047316389@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Introduce a helper function covering both the IRQ_GUEST check and the
cast involved in obtaining the (correctly typed) pointer. Where possible
add const and/or reduce variable scope.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1042,6 +1042,11 @@ typedef struct {
     struct domain *guest[IRQ_MAX_GUESTS];
 } irq_guest_action_t;
 
+static irq_guest_action_t *guest_action(const struct irq_desc *desc)
+{
+    return desc->status & IRQ_GUEST ? (void *)desc->action : NULL;
+}
+
 /*
  * Stack of interrupts awaiting EOI on each CPU. These must be popped in
  * order, as only the current highest-priority pending irq can be EOIed.
@@ -1111,11 +1116,9 @@ static void irq_guest_eoi_timer_fn(void
 
     spin_lock_irq(&desc->lock);
     
-    if ( !(desc->status & IRQ_GUEST) )
+    if ( !(action = guest_action(desc)) )
         goto out;
 
-    action = (irq_guest_action_t *)desc->action;
-
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
     /*
@@ -1351,16 +1354,15 @@ static void flush_ready_eoi(void)
     pending_eoi_sp(peoi) = sp+1;
 }
 
-static void __set_eoi_ready(struct irq_desc *desc)
+static void __set_eoi_ready(const struct irq_desc *desc)
 {
-    irq_guest_action_t *action = (irq_guest_action_t *)desc->action;
+    irq_guest_action_t *action = guest_action(desc);
     struct pending_eoi *peoi = this_cpu(pending_eoi);
     int                 irq, sp;
 
     irq = desc - irq_desc;
 
-    if ( !(desc->status & IRQ_GUEST) ||
-         (action->in_flight != 0) ||
+    if ( !action || action->in_flight ||
          !cpumask_test_and_clear_cpu(smp_processor_id(),
                                      action->cpu_eoi_map) )
         return;
@@ -1400,18 +1402,11 @@ void pirq_guest_eoi(struct pirq *pirq)
 
 void desc_guest_eoi(struct irq_desc *desc, struct pirq *pirq)
 {
-    irq_guest_action_t *action;
+    irq_guest_action_t *action = guest_action(desc);
     cpumask_t           cpu_eoi_map;
 
-    if ( !(desc->status & IRQ_GUEST) )
-    {
-        spin_unlock_irq(&desc->lock);
-        return;
-    }
-
-    action = (irq_guest_action_t *)desc->action;
-
-    if ( unlikely(!test_and_clear_bool(pirq->masked)) ||
+    if ( unlikely(!action) ||
+         unlikely(!test_and_clear_bool(pirq->masked)) ||
          unlikely(--action->in_flight != 0) )
     {
         spin_unlock_irq(&desc->lock);
@@ -1510,8 +1505,8 @@ static int irq_acktype(const struct irq_
 
 int pirq_shared(struct domain *d, int pirq)
 {
-    struct irq_desc         *desc;
-    irq_guest_action_t *action;
+    struct irq_desc    *desc;
+    const irq_guest_action_t *action;
     unsigned long       flags;
     int                 shared;
 
@@ -1519,8 +1514,8 @@ int pirq_shared(struct domain *d, int pi
     if ( desc == NULL )
         return 0;
 
-    action = (irq_guest_action_t *)desc->action;
-    shared = ((desc->status & IRQ_GUEST) && (action->nr_guests > 1));
+    action = guest_action(desc);
+    shared = (action && (action->nr_guests > 1));
 
     spin_unlock_irqrestore(&desc->lock, flags);
 
@@ -1544,9 +1539,7 @@ int pirq_guest_bind(struct vcpu *v, stru
         goto out;
     }
 
-    action = (irq_guest_action_t *)desc->action;
-
-    if ( !(desc->status & IRQ_GUEST) )
+    if ( !(action = guest_action(desc)) )
     {
         if ( desc->action != NULL )
         {
@@ -1659,21 +1652,18 @@ int pirq_guest_bind(struct vcpu *v, stru
 static irq_guest_action_t *__pirq_guest_unbind(
     struct domain *d, struct pirq *pirq, struct irq_desc *desc)
 {
-    irq_guest_action_t *action;
+    irq_guest_action_t *action = guest_action(desc);
     cpumask_t           cpu_eoi_map;
     int                 i;
 
-    action = (irq_guest_action_t *)desc->action;
-
     if ( unlikely(action == NULL) )
     {
         dprintk(XENLOG_G_WARNING, "dom%d: pirq %d: desc->action is NULL!\n",
                 d->domain_id, pirq->pirq);
+        BUG_ON(!(desc->status & IRQ_GUEST));
         return NULL;
     }
 
-    BUG_ON(!(desc->status & IRQ_GUEST));
-
     for ( i = 0; (i < action->nr_guests) && (action->guest[i] != d); i++ )
         continue;
     BUG_ON(i == action->nr_guests);
@@ -1793,14 +1783,12 @@ static bool pirq_guest_force_unbind(stru
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
     BUG_ON(desc == NULL);
 
-    if ( !(desc->status & IRQ_GUEST) )
-        goto out;
-
-    action = (irq_guest_action_t *)desc->action;
+    action = guest_action(desc);
     if ( unlikely(action == NULL) )
     {
-        dprintk(XENLOG_G_WARNING, "dom%d: pirq %d: desc->action is NULL!\n",
-            d->domain_id, pirq->pirq);
+        if ( desc->status & IRQ_GUEST )
+            dprintk(XENLOG_G_WARNING, "%pd: pirq %d: desc->action is NULL!\n",
+                    d, pirq->pirq);
         goto out;
     }
 
@@ -1827,7 +1815,7 @@ static bool pirq_guest_force_unbind(stru
 
 static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
 {
-    irq_guest_action_t *action = (irq_guest_action_t *)desc->action;
+    irq_guest_action_t *action = guest_action(desc);
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
@@ -2444,7 +2432,6 @@ static void dump_irqs(unsigned char key)
 {
     int i, irq, pirq;
     struct irq_desc *desc;
-    irq_guest_action_t *action;
     struct domain *d;
     const struct pirq *info;
     unsigned long flags;
@@ -2454,6 +2441,8 @@ static void dump_irqs(unsigned char key)
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
+        const irq_guest_action_t *action;
+
         if ( !(irq & 0x1f) )
             process_pending_softirqs();
 
@@ -2473,10 +2462,9 @@ static void dump_irqs(unsigned char key)
         if ( ssid )
             printk("Z=%-25s ", ssid);
 
-        if ( desc->status & IRQ_GUEST )
+        action = guest_action(desc);
+        if ( action )
         {
-            action = (irq_guest_action_t *)desc->action;
-
             printk("in-flight=%d%c",
                    action->in_flight, action->nr_guests ? ' ' : '\n');
 
@@ -2651,17 +2639,15 @@ void fixup_irqs(const cpumask_t *mask, b
 void fixup_eoi(void)
 {
     unsigned int irq, sp;
-    struct irq_desc *desc;
-    irq_guest_action_t *action;
     struct pending_eoi *peoi;
 
     /* Clean up cpu_eoi_map of every interrupt to exclude this CPU. */
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
-        desc = irq_to_desc(irq);
-        if ( !(desc->status & IRQ_GUEST) )
+        irq_guest_action_t *action = guest_action(irq_to_desc(irq));
+
+        if ( !action )
             continue;
-        action = (irq_guest_action_t *)desc->action;
         cpumask_clear_cpu(smp_processor_id(), action->cpu_eoi_map);
     }
 



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:15:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:15:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34618.65759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDYq-0005Yp-Ja; Mon, 23 Nov 2020 15:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34618.65759; Mon, 23 Nov 2020 15:15:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDYq-0005Yi-GV; Mon, 23 Nov 2020 15:15:24 +0000
Received: by outflank-mailman (input) for mailman id 34618;
 Mon, 23 Nov 2020 15:15:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+T/s=E5=amacapital.net=luto@srs-us1.protection.inumbo.net>)
 id 1khDYo-0005Yd-H4
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:15:22 +0000
Received: from mail-pl1-x641.google.com (unknown [2607:f8b0:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 424a5339-3006-4123-93b3-d38537d01dce;
 Mon, 23 Nov 2020 15:15:21 +0000 (UTC)
Received: by mail-pl1-x641.google.com with SMTP id u2so8975243pls.10
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 07:15:21 -0800 (PST)
Received: from ?IPv6:2601:646:c200:1ef2:344b:912a:a361:223f?
 ([2601:646:c200:1ef2:344b:912a:a361:223f])
 by smtp.gmail.com with ESMTPSA id g3sm3460848pfr.145.2020.11.23.07.15.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 23 Nov 2020 07:15:19 -0800 (PST)
X-Inumbo-ID: 424a5339-3006-4123-93b3-d38537d01dce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=amacapital-net.20150623.gappssmtp.com; s=20150623;
        h=content-transfer-encoding:from:mime-version:subject:date:message-id
         :references:cc:in-reply-to:to;
        bh=9/7B9fYOBF3iBNDP75oo3XjDT3J214mKbuz8gALB4vU=;
        b=yVT8mRm3Z3LX4kK+1gmW0OdDkKPQsxQqCiNqqyao3OeXSp3AeU/oUDBO2LFha72Ykg
         hfakFOwwGzMWTxhbJ9u4ZRrLs9E+AUi3bUdWX7FF9C1AL9DI7uzuPx703IMYlUpEfQ0R
         cKbQk2gp7fkskTTopdejNkIPV40k3NdJdEWpv8NlYlb+lwXNFQ7m4QTsPwnXAY41XDjo
         ie2xHVuANYQFTwR5JV4GsFGoOojFILHnydRahEGFMrSdyzuTcX1MyaggfyX5GhzN48f7
         dxYqGPnL5YK+lGPBgsFICbFcJ3HdWvHqDNfvP4QmHS7Z6Td+jfStdZ8S8kzdPOmMP3XW
         IYQA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:date:message-id:references:cc:in-reply-to:to;
        bh=9/7B9fYOBF3iBNDP75oo3XjDT3J214mKbuz8gALB4vU=;
        b=dN79wc+dvNJL3H8x3vysj9hmavR02IV29FS/DdYmacDmOp003B5+BB5ikD/0Ugy3ce
         u5nIhuHaE/eGJTSrrU1HUgyhiYhB8xLx+xUU6+lbGbJkvxsLRL7vpPW1+Ea37VVfwp7G
         mW72awwNhnJpoZEzrCZVeDxTJuA7K2aJ3UXlbtmURx2CWC6GwP28kF3g12LZrdcKbAOo
         QCIC0iFuZOChFKd+syM3t+kE+b76cs27rbAFeBSxvBPkMryU7gcrYh4Qa8gHYN8qH+Bh
         Fn42f1pmCtyDyriD12SDf8tQps7PuKg80Ka5h4QU8Lln/ZpHRCgSuzkINti5sXAR/0te
         SCeA==
X-Gm-Message-State: AOAM530/lKMb8HiCTNWf1qAo7VaI0uepWo74lY4AmQmggzjoF5nQn/7l
	UWrMlRcpafP58ifALntTyXDYkg==
X-Google-Smtp-Source: ABdhPJzPXkVRGeAzX8ONsYwMK6/RMdfFNxyslupj1imDDyoMb4VPUrwfTY+HgnKVxxx/unH4Oe4Vhw==
X-Received: by 2002:a17:902:d207:b029:da:13cb:e182 with SMTP id t7-20020a170902d207b02900da13cbe182mr1831784ply.69.1606144520547;
        Mon, 23 Nov 2020 07:15:20 -0800 (PST)
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
From: Andy Lutomirski <luto@amacapital.net>
Mime-Version: 1.0 (1.0)
Subject: Re: [PATCH v2 05/12] x86: rework arch_local_irq_restore() to not use popf
Date: Mon, 23 Nov 2020 07:15:18 -0800
Message-Id: <DD12CB44-457E-4FE1-8240-B334B785A93C@amacapital.net>
References: <894a4cec-8426-4152-d391-474711040c5b@suse.com>
Cc: Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "VMware, Inc." <pv-drivers@vmware.com>, X86 ML <x86@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
In-Reply-To: <894a4cec-8426-4152-d391-474711040c5b@suse.com>
To: =?utf-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
X-Mailer: iPhone Mail (18B92)





> On Nov 22, 2020, at 9:22 PM, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 22.11.20 22:44, Andy Lutomirski wrote:
>>> On Sat, Nov 21, 2020 at 10:55 PM Jürgen Groß <jgross@suse.com> wrote:
>>>
>>> On 20.11.20 12:59, Peter Zijlstra wrote:
>>>> On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
>>>>> +static __always_inline void arch_local_irq_restore(unsigned long flags)
>>>>> +{
>>>>> +    if (!arch_irqs_disabled_flags(flags))
>>>>> +            arch_local_irq_enable();
>>>>> +}
>>>>
>>>> If someone were to write horrible code like:
>>>>
>>>>       local_irq_disable();
>>>>       local_irq_save(flags);
>>>>       local_irq_enable();
>>>>       local_irq_restore(flags);
>>>>
>>>> we'd be up some creek without a paddle... now I don't _think_ we have
>>>> genius code like that, but I'd feel safer if we can haz an assertion in
>>>> there somewhere...
>>>>
>>>> Maybe something like:
>>>>
>>>> #ifdef CONFIG_DEBUG_ENTRY // for lack of something saner
>>>>       WARN_ON_ONCE((arch_local_save_flags() ^ flags) & X86_EFLAGS_IF);
>>>> #endif
>>>>
>>>> At the end?
>>>
>>> I'd like to, but using WARN_ON_ONCE() in include/asm/irqflags.h sounds
>>> like a perfect recipe for include dependency hell.
>>>
>>> We could use a plain asm("ud2") instead.
>> How about out-of-lining it:
>> #ifdef CONFIG_DEBUG_ENTRY
>> extern void warn_bogus_irqrestore();
>> #endif
>> static __always_inline void arch_local_irq_restore(unsigned long flags)
>> {
>>        if (!arch_irqs_disabled_flags(flags)) {
>>                arch_local_irq_enable();
>>        } else {
>> #ifdef CONFIG_DEBUG_ENTRY
>>                if (unlikely(arch_local_irq_save() & X86_EFLAGS_IF))
>>                     warn_bogus_irqrestore();
>> #endif
>> }
> 
> This couldn't be a WARN_ON_ONCE() then (or it would be a catch all).

If you put the WARN_ON_ONCE in the out-of-line helper, it should work reasonably well.

> Another approach might be to open-code the WARN_ON_ONCE(), like:
> 
> #ifdef CONFIG_DEBUG_ENTRY
> extern void warn_bogus_irqrestore(bool *once);
> #endif
> 
> static __always_inline void arch_local_irq_restore(unsigned long flags)
> {
>    if (!arch_irqs_disabled_flags(flags))
>        arch_local_irq_enable();
> #ifdef CONFIG_DEBUG_ENTRY
>    {
>        static bool once;
> 
>        if (unlikely(arch_local_irq_save() & X86_EFLAGS_IF))
>            warn_bogus_irqrestore(&once);
>    }
> #endif
> }
> 

I don't know precisely what a static variable in an __always_inline function will do, but I imagine it will be, at best, erratic, especially when modules are involved.

> 
> Juergen
> <OpenPGP_0xB0DE9DD628BF132F.asc>


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:16:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:16:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34623.65771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDZV-0005et-UM; Mon, 23 Nov 2020 15:16:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34623.65771; Mon, 23 Nov 2020 15:16:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDZV-0005em-Qe; Mon, 23 Nov 2020 15:16:05 +0000
Received: by outflank-mailman (input) for mailman id 34623;
 Mon, 23 Nov 2020 15:16:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDZU-0005ee-FH
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:16:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07c78f70-22cc-4a71-a6d4-feb5a7f4c358;
 Mon, 23 Nov 2020 15:16:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CE71DAF9E;
 Mon, 23 Nov 2020 15:16:02 +0000 (UTC)
X-Inumbo-ID: 07c78f70-22cc-4a71-a6d4-feb5a7f4c358
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144562; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ONPDiUZ6wtOlhqSQwk8qXYvId75rESm8eY+xvhUcyuw=;
	b=MkuvRFTKtuIrmR6bXJv+lkwFbk8DseLa9UyvhInGSL8c3Rfjnp2JuDAEC64z9/st2szCem
	Hmqb9gL29QUEre+CLZa4DrEg92GozVBRA4GuK9Pbu8YLuN+hCrih1OcXcQMTikBIADoQ/F
	g/q0EO2t2c73H3Lud188vWM7DqLLrbw=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/8] xen: beginnings of moving library-like code into an
 archive
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
Message-ID: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Date: Mon, 23 Nov 2020 16:16:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In a few cases we link in library-like functions even when they're not
actually needed. While we could use a Kconfig option for each one of
them, I think the better approach for such generic code is to build it
always (making sure a build issue can't be introduced for it in any
configuration, however exotic) and then put it into an archive, from
which the linker picks up only what is needed. The series here presents
a first few tiny steps towards that goal.

Note that we can't use thin archives yet, due to our tool chain
(binutils) baseline being too low.

Further almost immediate steps I'd like to take if the approach
meets no opposition are
- split and move the rest of common/lib.c,
- split and move common/string.c, dropping the need for all the
  __HAVE_ARCH_* (implying possible per-arch archives then need to
  be specified ahead of lib/lib.a on the linker command lines),
- move common/libelf/ and common/libfdt/.

v3 has a new 1st patch and some review feedback addressed. See
individual patches.

1: xen: fix build when $(obj-y) consists of just blanks
2: lib: collect library files in an archive
3: lib: move list sorting code
4: lib: move parse_size_and_unit()
5: lib: move init_constructors()
6: lib: move rbtree code
7: lib: move bsearch code
8: lib: move sort code

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:21:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34634.65783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDeC-0006YU-Gf; Mon, 23 Nov 2020 15:20:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34634.65783; Mon, 23 Nov 2020 15:20:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDeC-0006YN-Dh; Mon, 23 Nov 2020 15:20:56 +0000
Received: by outflank-mailman (input) for mailman id 34634;
 Mon, 23 Nov 2020 15:20:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDeB-0006YI-0G
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:20:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00327f36-3284-496e-a046-bd44e03cc72f;
 Mon, 23 Nov 2020 15:20:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0EE9BAFC1;
 Mon, 23 Nov 2020 15:20:53 +0000 (UTC)
X-Inumbo-ID: 00327f36-3284-496e-a046-bd44e03cc72f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144853; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jLxsRc45eh7UkRXObTXpepNvE2qyML0sy1x3+LOrMrU=;
	b=k16gwm0wi2qK6YZWAoiu+PkDnP8FvHs/UO62GKE0xjxPSoa5xAIb1+r0jCUN9MbAZeXnM2
	SxIq98z5NlP0LHrWdMs2WpMMTawp2zydAc8MY9kgDT6ubqqQEJB0apGraDjRGXbqu4b4kW
	ZaV8ZFz49CFhcc5hLEx8mVjtvejZfck=
Subject: [PATCH v3 1/8] xen: fix build when $(obj-y) consists of just blanks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <511be84d-9a13-17ae-f3d9-d6daf9c02711@suse.com>
Date: Mon, 23 Nov 2020 16:20:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This case can occur when combining empty lists

obj-y :=
...
obj-y += $(empty)

or

obj-y := $(empty) $(empty)

where (only) blanks would accumulate. This was only a latent issue until
now, but would become an active one for Arm once lib/ gets populated,
with all the respective objects going into the to-be-introduced lib.a.

Also address a related issue on this occasion: when an empty built_in.o
gets created, .built_in.o.d will have its dependencies recorded. If, on
a subsequent incremental build, an actual constituent of built_in.o
appeared, the $(filter-out ) would leave these recorded dependencies in
place. But of course the linker won't know what to do with C header
files. (The apparent alternative of not passing $(c_flags) or $(a_flags)
would not be reliable afaict, as among these flags there may be some
which affect information conveyed via the object file to the linker;
the linker, finding inconsistent flags across object files, may then
error out.) Using just $(obj-y) won't work either: it breaks when the
same object file is listed more than once.
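The whitespace pitfall that motivates the $(strip ...) change is easy to reproduce with a toy GNU make Makefile (unrelated to Xen's build system):

```shell
set -e
dir=$(mktemp -d); trap 'rm -rf "$dir"' EXIT; cd "$dir"

cat > Makefile <<'EOF'
empty :=
space := $(empty) $(empty)
obj-y := $(space)

ifeq ($(obj-y),)
plain := empty
else
plain := blank-but-not-empty
endif

ifeq ($(strip $(obj-y)),)
stripped := empty
else
stripped := non-empty
endif

all: ; @echo "plain=$(plain) strip=$(stripped)"
EOF

make -s
```

A variable built only from empty expansions can end up holding blanks; ifeq does not strip the result of the expansion, so only the $(strip ...) form sees the value as empty.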

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/Rules.mk | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/Rules.mk b/xen/Rules.mk
index 333e19bec343..d5e5eb33de39 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -130,13 +130,13 @@ c_flags += $(CFLAGS-y)
 a_flags += $(CFLAGS-y) $(AFLAGS-y)
 
 built_in.o: $(obj-y) $(extra-y)
-ifeq ($(obj-y),)
+ifeq ($(strip $(obj-y)),)
 	$(CC) $(c_flags) -c -x c /dev/null -o $@
 else
 ifeq ($(CONFIG_LTO),y)
-	$(LD_LTO) -r -o $@ $(filter-out $(extra-y),$^)
+	$(LD_LTO) -r -o $@ $(filter $(obj-y),$^)
 else
-	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
+	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter $(obj-y),$^)
 endif
 endif
 
@@ -145,10 +145,10 @@ targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
 targets += $(MAKECMDGOALS)
 
 built_in_bin.o: $(obj-bin-y) $(extra-y)
-ifeq ($(obj-bin-y),)
+ifeq ($(strip $(obj-bin-y)),)
 	$(CC) $(a_flags) -c -x assembler /dev/null -o $@
 else
-	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
+	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter $(obj-bin-y),$^)
 endif
 
 # Force execution of pattern rules (for which PHONY cannot be directly used).
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:21:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34637.65796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDed-0006eC-VE; Mon, 23 Nov 2020 15:21:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34637.65796; Mon, 23 Nov 2020 15:21:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDed-0006e5-Ru; Mon, 23 Nov 2020 15:21:23 +0000
Received: by outflank-mailman (input) for mailman id 34637;
 Mon, 23 Nov 2020 15:21:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDec-0006du-J9
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:21:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1625aba3-5ee8-4a01-9fc2-df14e972cbcb;
 Mon, 23 Nov 2020 15:21:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91B61AF9E;
 Mon, 23 Nov 2020 15:21:20 +0000 (UTC)
X-Inumbo-ID: 1625aba3-5ee8-4a01-9fc2-df14e972cbcb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144880; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cuO4qVJ1llY0PSDhrVuUhRfZB05Gf7cWsKZi8aM03oE=;
	b=lVpak4STOi/mO4KPxsT9oV8uBjAXmOU+ouQke8fqwI3v8KtrGsZapuH4IfJ5hsAU4Vfjfo
	DEUo7okw7j6LVAog0Yj37lAwfnK8iqAkFv2dfOhMxt3R8pcwQCYOAWnA0NL8qz/kz+CRXB
	RCMrQGqh//x4DRVBkB7na8+qka856E8=
Subject: [PATCH v3 2/8] lib: collect library files in an archive
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <21714b83-8619-5aa9-be5b-3015d05a26a4@suse.com>
Date: Mon, 23 Nov 2020 16:21:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT,
which exist just to avoid bloating binaries when only some architectures
and/or configurations need generic library routines, combine the objects
under lib/ into an archive, from which the linker can then pick out the
necessary objects.

Note that we can't use thin archives just yet, until we've raised the
minimum required binutils version suitably.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/Rules.mk          | 29 +++++++++++++++++++++++++----
 xen/arch/arm/Makefile |  6 +++---
 xen/arch/x86/Makefile |  8 ++++----
 xen/lib/Makefile      |  3 ++-
 4 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/xen/Rules.mk b/xen/Rules.mk
index d5e5eb33de39..aba6ca2a90f5 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -41,12 +41,16 @@ ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
 
+ALL_LIBS-y               := $(BASEDIR)/lib/lib.a
+
 # Initialise some variables
+lib-y :=
 targets :=
 CFLAGS-y :=
 AFLAGS-y :=
 
 ALL_OBJS := $(ALL_OBJS-y)
+ALL_LIBS := $(ALL_LIBS-y)
 
 SPECIAL_DATA_SECTIONS := rodata $(foreach a,1 2 4 8 16, \
                                             $(foreach w,1 2 4, \
@@ -60,7 +64,14 @@ include Makefile
 # ---------------------------------------------------------------------------
 
 quiet_cmd_ld = LD      $@
-cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
+cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
+               --start-group $(filter %.a,$(real-prereqs)) --end-group
+
+# Archive
+# ---------------------------------------------------------------------------
+
+quiet_cmd_ar = AR      $@
+cmd_ar = rm -f $@; $(AR) cPrs $@ $(real-prereqs)
 
 # Objcopy
 # ---------------------------------------------------------------------------
@@ -86,6 +97,10 @@ obj-y    := $(patsubst %/, %/built_in.o, $(obj-y))
 # tell kbuild to descend
 subdir-obj-y := $(filter %/built_in.o, $(obj-y))
 
+# Libraries are always collected in one lib file.
+# Filter out objects already built-in
+lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))
+
 $(filter %.init.o,$(obj-y) $(obj-bin-y) $(extra-y)): CFLAGS-y += -DINIT_SECTIONS_ONLY
 
 ifeq ($(CONFIG_COVERAGE),y)
@@ -129,7 +144,7 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
 c_flags += $(CFLAGS-y)
 a_flags += $(CFLAGS-y) $(AFLAGS-y)
 
-built_in.o: $(obj-y) $(extra-y)
+built_in.o: $(obj-y) $(if $(strip $(lib-y)),lib.a) $(extra-y)
 ifeq ($(strip $(obj-y)),)
 	$(CC) $(c_flags) -c -x c /dev/null -o $@
 else
@@ -140,8 +155,14 @@ else
 endif
 endif
 
+lib.a: $(lib-y) FORCE
+	$(call if_changed,ar)
+
 targets += built_in.o
-targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
+ifneq ($(strip $(lib-y)),)
+targets += lib.a
+endif
+targets += $(filter-out $(subdir-obj-y), $(obj-y) $(lib-y)) $(extra-y)
 targets += $(MAKECMDGOALS)
 
 built_in_bin.o: $(obj-bin-y) $(extra-y)
@@ -155,7 +176,7 @@ endif
 PHONY += FORCE
 FORCE:
 
-%/built_in.o: FORCE
+%/built_in.o %/lib.a: FORCE
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C $* built_in.o
 
 %/built_in_bin.o: FORCE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bbc3..612a83b315c8 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -90,14 +90,14 @@ endif
 
 ifeq ($(CONFIG_LTO),y)
 # Gather all LTO objects together
-prelink_lto.o: $(ALL_OBJS)
-	$(LD_LTO) -r -o $@ $^
+prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
+	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
 
 # Link it with all the binary objects
 prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o
 	$(call if_changed,ld)
 else
-prelink.o: $(ALL_OBJS) FORCE
+prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
 	$(call if_changed,ld)
 endif
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 9b368632fb43..8f2180485b2b 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -132,8 +132,8 @@ EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o
 
 ifeq ($(CONFIG_LTO),y)
 # Gather all LTO objects together
-prelink_lto.o: $(ALL_OBJS)
-	$(LD_LTO) -r -o $@ $^
+prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
+	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
 
 # Link it with all the binary objects
 prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $(EFI_OBJS-y) FORCE
@@ -142,10 +142,10 @@ prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $
 prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o FORCE
 	$(call if_changed,ld)
 else
-prelink.o: $(ALL_OBJS) $(EFI_OBJS-y) FORCE
+prelink.o: $(ALL_OBJS) $(ALL_LIBS) $(EFI_OBJS-y) FORCE
 	$(call if_changed,ld)
 
-prelink-efi.o: $(ALL_OBJS) FORCE
+prelink-efi.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
 	$(call if_changed,ld)
 endif
 
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 53b1da025e0d..b8814361d63e 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,2 +1,3 @@
-obj-y += ctype.o
 obj-$(CONFIG_X86) += x86/
+
+lib-y += ctype.o



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:21:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34643.65808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDf9-0006kr-9O; Mon, 23 Nov 2020 15:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34643.65808; Mon, 23 Nov 2020 15:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDf9-0006kk-5C; Mon, 23 Nov 2020 15:21:55 +0000
Received: by outflank-mailman (input) for mailman id 34643;
 Mon, 23 Nov 2020 15:21:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDf7-0006kV-Dm
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:21:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e3d3f61-ee19-4ff6-b288-ca9cb0580c30;
 Mon, 23 Nov 2020 15:21:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A4AAFB00D;
 Mon, 23 Nov 2020 15:21:51 +0000 (UTC)
X-Inumbo-ID: 9e3d3f61-ee19-4ff6-b288-ca9cb0580c30
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144911; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/ldUqwFP54C4R/7w401QZG+FeOKXlfpX2Gg3oaRW57A=;
	b=qiLShgIg0egBAZqvag3aTqfWVsVr6aQi7sHrBUi1VHu6DKM9gPe38Kua5oMXFYE4Eq7Vo0
	a5EPQnG6qHxNRK0OdH+Gf7Y5W9u8JJNgRdBoqwgEm3uTMezxKV9jTTE6Ijl2G7bBf0A9fx
	bYSGyqYiPy3yF2TYewyibcq6zm6bOjA=
Subject: [PATCH v3 3/8] lib: move list sorting code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <9e855f2f-c654-6515-ae4f-9c69859c1c88@suse.com>
Date: Mon, 23 Nov 2020 16:21:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Always build the source file: since it goes into an archive, it still
won't be linked into final binaries when not needed. This way possible
build breakage is easier to notice, and it is more consistent with other
library-style code (e.g. sort() or bsearch()) being built
unconditionally.

While moving the source file, take the opportunity and drop the
pointless EXPORT_SYMBOL() and an unnecessary #include.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/Kconfig                        | 4 +---
 xen/common/Kconfig                          | 3 ---
 xen/common/Makefile                         | 1 -
 xen/lib/Makefile                            | 1 +
 xen/{common/list_sort.c => lib/list-sort.c} | 2 --
 5 files changed, 2 insertions(+), 9 deletions(-)
 rename xen/{common/list_sort.c => lib/list-sort.c} (98%)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index f5b1bcda0323..38b6c31ba5dd 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -56,9 +56,7 @@ config HVM
         def_bool y
 
 config NEW_VGIC
-	bool
-	prompt "Use new VGIC implementation"
-	select NEEDS_LIST_SORT
+	bool "Use new VGIC implementation"
 	---help---
 
 	This is an alternative implementation of the ARM GIC interrupt
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3e2cf2508899..0661328a99e7 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -66,9 +66,6 @@ config MEM_ACCESS
 config NEEDS_LIBELF
 	bool
 
-config NEEDS_LIST_SORT
-	bool
-
 menu "Speculative hardening"
 
 config SPECULATIVE_HARDEN_ARRAY
diff --git a/xen/common/Makefile b/xen/common/Makefile
index d109f279a490..332e7d667cec 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -21,7 +21,6 @@ obj-y += keyhandler.o
 obj-$(CONFIG_KEXEC) += kexec.o
 obj-$(CONFIG_KEXEC) += kimage.o
 obj-y += lib.o
-obj-$(CONFIG_NEEDS_LIST_SORT) += list_sort.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o livepatch_elf.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-y += memory.o
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index b8814361d63e..764f3624b5f9 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,3 +1,4 @@
 obj-$(CONFIG_X86) += x86/
 
 lib-y += ctype.o
+lib-y += list-sort.o
diff --git a/xen/common/list_sort.c b/xen/lib/list-sort.c
similarity index 98%
rename from xen/common/list_sort.c
rename to xen/lib/list-sort.c
index af2b2f6519f1..f8d8bbf28178 100644
--- a/xen/common/list_sort.c
+++ b/xen/lib/list-sort.c
@@ -15,7 +15,6 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/lib.h>
 #include <xen/list.h>
 
 #define MAX_LIST_LENGTH_BITS 20
@@ -154,4 +153,3 @@ void list_sort(void *priv, struct list_head *head,
 
 	merge_and_restore_back_links(priv, cmp, head, part[max_lev], list);
 }
-EXPORT_SYMBOL(list_sort);



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:22:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:22:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34647.65820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDff-0006rE-HX; Mon, 23 Nov 2020 15:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34647.65820; Mon, 23 Nov 2020 15:22:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDff-0006r7-EQ; Mon, 23 Nov 2020 15:22:27 +0000
Received: by outflank-mailman (input) for mailman id 34647;
 Mon, 23 Nov 2020 15:22:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDff-0006r0-1K
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:22:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b20ca46-c071-4c66-af73-a01d9b8724bb;
 Mon, 23 Nov 2020 15:22:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 52C8CAFD8;
 Mon, 23 Nov 2020 15:22:25 +0000 (UTC)
X-Inumbo-ID: 8b20ca46-c071-4c66-af73-a01d9b8724bb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144945; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hAAsFxAwY1+LMn7V37iA5/bcKSLQ7I5O5Ww6wo1JjgI=;
	b=eLAR4Po6k9wxgEstIlr7BI75HvZw2KPCxVJw8xQ6BECGmLTNvihsJFNuSCqBvPq5q5o9KK
	ZrCGzdDy//vGRg/OZB4Ze+bIPSF2oCBccHRLptBa6nLUuPBkFJkAZ0aQjtEW+NJrfCEWVe
	pwfsznhTU1/M3SzQqlYzXclBYv7Lr7Q=
Subject: [PATCH v3 4/8] lib: move parse_size_and_unit()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <489b3707-9e9c-184b-3e1f-1d28fd1fb0ee@suse.com>
Date: Mon, 23 Nov 2020 16:22:24 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... into its own compilation unit, so it can be built into an archive.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/lib.c     | 39 ----------------------------------
 xen/lib/Makefile     |  1 +
 xen/lib/parse-size.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+), 39 deletions(-)
 create mode 100644 xen/lib/parse-size.c

diff --git a/xen/common/lib.c b/xen/common/lib.c
index a224efa8f6e8..6cfa332142a5 100644
--- a/xen/common/lib.c
+++ b/xen/common/lib.c
@@ -423,45 +423,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
 #endif
 }
 
-unsigned long long parse_size_and_unit(const char *s, const char **ps)
-{
-    unsigned long long ret;
-    const char *s1;
-
-    ret = simple_strtoull(s, &s1, 0);
-
-    switch ( *s1 )
-    {
-    case 'T': case 't':
-        ret <<= 10;
-        /* fallthrough */
-    case 'G': case 'g':
-        ret <<= 10;
-        /* fallthrough */
-    case 'M': case 'm':
-        ret <<= 10;
-        /* fallthrough */
-    case 'K': case 'k':
-        ret <<= 10;
-        /* fallthrough */
-    case 'B': case 'b':
-        s1++;
-        break;
-    case '%':
-        if ( ps )
-            break;
-        /* fallthrough */
-    default:
-        ret <<= 10; /* default to kB */
-        break;
-    }
-
-    if ( ps != NULL )
-        *ps = s1;
-
-    return ret;
-}
-
 typedef void (*ctor_func_t)(void);
 extern const ctor_func_t __ctors_start[], __ctors_end[];
 
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 764f3624b5f9..99f857540c99 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -2,3 +2,4 @@ obj-$(CONFIG_X86) += x86/
 
 lib-y += ctype.o
 lib-y += list-sort.o
+lib-y += parse-size.o
diff --git a/xen/lib/parse-size.c b/xen/lib/parse-size.c
new file mode 100644
index 000000000000..ec980cadfff3
--- /dev/null
+++ b/xen/lib/parse-size.c
@@ -0,0 +1,50 @@
+#include <xen/lib.h>
+
+unsigned long long parse_size_and_unit(const char *s, const char **ps)
+{
+    unsigned long long ret;
+    const char *s1;
+
+    ret = simple_strtoull(s, &s1, 0);
+
+    switch ( *s1 )
+    {
+    case 'T': case 't':
+        ret <<= 10;
+        /* fallthrough */
+    case 'G': case 'g':
+        ret <<= 10;
+        /* fallthrough */
+    case 'M': case 'm':
+        ret <<= 10;
+        /* fallthrough */
+    case 'K': case 'k':
+        ret <<= 10;
+        /* fallthrough */
+    case 'B': case 'b':
+        s1++;
+        break;
+    case '%':
+        if ( ps )
+            break;
+        /* fallthrough */
+    default:
+        ret <<= 10; /* default to kB */
+        break;
+    }
+
+    if ( ps != NULL )
+        *ps = s1;
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:22:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:22:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34653.65831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDg2-0006y3-Ql; Mon, 23 Nov 2020 15:22:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34653.65831; Mon, 23 Nov 2020 15:22:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDg2-0006xw-NL; Mon, 23 Nov 2020 15:22:50 +0000
Received: by outflank-mailman (input) for mailman id 34653;
 Mon, 23 Nov 2020 15:22:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDg1-0006xj-Ve
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:22:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 542f9c41-7242-4cec-abaf-df9059b01856;
 Mon, 23 Nov 2020 15:22:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6E2D3B01E;
 Mon, 23 Nov 2020 15:22:48 +0000 (UTC)
X-Inumbo-ID: 542f9c41-7242-4cec-abaf-df9059b01856
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144968; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8fmfCVpbzWpimEPZJkVHrvDAFBSfAhhMSPnv3DhM26s=;
	b=cAP4j0Jlb/yiQ5VcN3xmT3RACuC6QRyEliTufHXhlkCWrOFQixEZmfxEvq3Msqin4xANcX
	83rnHESxwcz5adoXjHz5Sw01QyysMAwr73SWc+rdvz+bt36DKPg4i6Zw0XsTegqmD+l1Sl
	ubM2RYM0QTAwHx7mh9zBLlZju+VKhUA=
Subject: [PATCH v3 5/8] lib: move init_constructors()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <c67ca263-8a82-d0c8-e6e1-6afdeeb9df8c@suse.com>
Date: Mon, 23 Nov 2020 16:22:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... into its own compilation unit, as it is unrelated to the other
contents of common/lib.c.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/lib.c | 14 --------------
 xen/lib/Makefile |  1 +
 xen/lib/ctors.c  | 25 +++++++++++++++++++++++++
 3 files changed, 26 insertions(+), 14 deletions(-)
 create mode 100644 xen/lib/ctors.c

diff --git a/xen/common/lib.c b/xen/common/lib.c
index 6cfa332142a5..f5ca179a0af4 100644
--- a/xen/common/lib.c
+++ b/xen/common/lib.c
@@ -1,6 +1,5 @@
 #include <xen/lib.h>
 #include <xen/types.h>
-#include <xen/init.h>
 #include <asm/byteorder.h>
 
 /*
@@ -423,19 +422,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
 #endif
 }
 
-typedef void (*ctor_func_t)(void);
-extern const ctor_func_t __ctors_start[], __ctors_end[];
-
-void __init init_constructors(void)
-{
-    const ctor_func_t *f;
-    for ( f = __ctors_start; f < __ctors_end; ++f )
-        (*f)();
-
-    /* Putting this here seems as good (or bad) as any other place. */
-    BUILD_BUG_ON(sizeof(size_t) != sizeof(ssize_t));
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 99f857540c99..72c72fffecf2 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_X86) += x86/
 
+lib-y += ctors.o
 lib-y += ctype.o
 lib-y += list-sort.o
 lib-y += parse-size.o
diff --git a/xen/lib/ctors.c b/xen/lib/ctors.c
new file mode 100644
index 000000000000..5bdc591cd50a
--- /dev/null
+++ b/xen/lib/ctors.c
@@ -0,0 +1,25 @@
+#include <xen/init.h>
+#include <xen/lib.h>
+
+typedef void (*ctor_func_t)(void);
+extern const ctor_func_t __ctors_start[], __ctors_end[];
+
+void __init init_constructors(void)
+{
+    const ctor_func_t *f;
+    for ( f = __ctors_start; f < __ctors_end; ++f )
+        (*f)();
+
+    /* Putting this here seems as good (or bad) as any other place. */
+    BUILD_BUG_ON(sizeof(size_t) != sizeof(ssize_t));
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:23:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34660.65843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDgQ-00077E-4v; Mon, 23 Nov 2020 15:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34660.65843; Mon, 23 Nov 2020 15:23:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDgQ-000776-1e; Mon, 23 Nov 2020 15:23:14 +0000
Received: by outflank-mailman (input) for mailman id 34660;
 Mon, 23 Nov 2020 15:23:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDgO-00073t-Oc
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:23:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1bc9a156-806b-4d8c-9393-0b56d5cc388c;
 Mon, 23 Nov 2020 15:23:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 89CDCAE42;
 Mon, 23 Nov 2020 15:23:10 +0000 (UTC)
X-Inumbo-ID: 1bc9a156-806b-4d8c-9393-0b56d5cc388c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606144990; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YDkXTDAOobi6xJ5cnvbE+OQk8dJ5rRtfEWvkVEPSe/g=;
	b=fHNjIIwoFNC7QCFV0gmsGacLniA7WxmVKEa9wJhKeTuyLL28Nyd7eRk6ErHjwG0nr8aXCu
	XxzOy6heVKvj/P6p8XUsLV7LkhtiBUJIWjZg7tvO7PqnFkL3Svje7YoDdAB8mcqGbbcP4+
	paH8s42GVGmzdI0cA+ZE5fQBrwvhbOU=
Subject: [PATCH v3 6/8] lib: move rbtree code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <749adfdd-70d6-c653-7fcf-dad13fd8463f@suse.com>
Date: Mon, 23 Nov 2020 16:23:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Build this code into an archive, with the result that it no longer gets
linked into final x86 binaries. This saves about 1.5k of dead code.

While moving the source file, take the opportunity and drop the
pointless EXPORT_SYMBOL() and an instance of trailing whitespace.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/Makefile          | 1 -
 xen/lib/Makefile             | 1 +
 xen/{common => lib}/rbtree.c | 9 +--------
 3 files changed, 2 insertions(+), 9 deletions(-)
 rename xen/{common => lib}/rbtree.c (98%)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index 332e7d667cec..d65c9fe9cb4e 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -33,7 +33,6 @@ obj-y += preempt.o
 obj-y += random.o
 obj-y += rangeset.o
 obj-y += radix-tree.o
-obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
 obj-y += shutdown.o
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 72c72fffecf2..b0fe8c72acf5 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -4,3 +4,4 @@ lib-y += ctors.o
 lib-y += ctype.o
 lib-y += list-sort.o
 lib-y += parse-size.o
+lib-y += rbtree.o
diff --git a/xen/common/rbtree.c b/xen/lib/rbtree.c
similarity index 98%
rename from xen/common/rbtree.c
rename to xen/lib/rbtree.c
index 9f5498a89d4e..95e045d52461 100644
--- a/xen/common/rbtree.c
+++ b/xen/lib/rbtree.c
@@ -25,7 +25,7 @@
 #include <xen/rbtree.h>
 
 /*
- * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree 
+ * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
  *
  *  1) A node is either red or black
  *  2) The root is black
@@ -223,7 +223,6 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 		}
 	}
 }
-EXPORT_SYMBOL(rb_insert_color);
 
 static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 {
@@ -467,7 +466,6 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 	if (rebalance)
 		__rb_erase_color(rebalance, root);
 }
-EXPORT_SYMBOL(rb_erase);
 
 /*
  * This function returns the first node (in sort order) of the tree.
@@ -483,7 +481,6 @@ struct rb_node *rb_first(const struct rb_root *root)
 		n = n->rb_left;
 	return n;
 }
-EXPORT_SYMBOL(rb_first);
 
 struct rb_node *rb_last(const struct rb_root *root)
 {
@@ -496,7 +493,6 @@ struct rb_node *rb_last(const struct rb_root *root)
 		n = n->rb_right;
 	return n;
 }
-EXPORT_SYMBOL(rb_last);
 
 struct rb_node *rb_next(const struct rb_node *node)
 {
@@ -528,7 +524,6 @@ struct rb_node *rb_next(const struct rb_node *node)
 
 	return parent;
 }
-EXPORT_SYMBOL(rb_next);
 
 struct rb_node *rb_prev(const struct rb_node *node)
 {
@@ -557,7 +552,6 @@ struct rb_node *rb_prev(const struct rb_node *node)
 
 	return parent;
 }
-EXPORT_SYMBOL(rb_prev);
 
 void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 		     struct rb_root *root)
@@ -574,4 +568,3 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 	/* Copy the pointers/colour from the victim to the replacement */
 	*new = *victim;
 }
-EXPORT_SYMBOL(rb_replace_node);



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:23:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:23:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34670.65855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDgn-0007F2-F6; Mon, 23 Nov 2020 15:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34670.65855; Mon, 23 Nov 2020 15:23:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDgn-0007Ev-C4; Mon, 23 Nov 2020 15:23:37 +0000
Received: by outflank-mailman (input) for mailman id 34670;
 Mon, 23 Nov 2020 15:23:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDgl-0007EG-Ha
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:23:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa93e399-45c3-45b4-8d0d-19fd44d70e05;
 Mon, 23 Nov 2020 15:23:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EA1BAF9E;
 Mon, 23 Nov 2020 15:23:33 +0000 (UTC)
X-Inumbo-ID: fa93e399-45c3-45b4-8d0d-19fd44d70e05
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606145013; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=23oE8NeUoAj9T9FiFgiIcEmaLzXlZefKvQvHgfauRUg=;
	b=RjjNqObEtvRblsBMrTgZSuG0MDW0CYjNGXDJYciTv0qRDsjQc6MB8CuUscfPDTs8PqsRS7
	VMjqLrLI+cRjq9sYcX7TICbCz6H1sAmT5xXSR8ZBWmk/2xtKmWHywazV64NPJwOK14/Do+
	ntGtkPwlSvQeuBaxNuLPcCqfFZwKVNo=
Subject: [PATCH v3 7/8] lib: move bsearch code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <0b676274-e7a0-9591-ddc9-7c7d5c0e1e2d@suse.com>
Date: Mon, 23 Nov 2020 16:23:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Convert this code to an inline function (backed by an out-of-line
instance in an archive, in case the compiler decides against inlining),
which results in it no longer being present in final x86 binaries. This
eliminates a small amount of dead code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/Makefile        |  1 -
 xen/common/bsearch.c       | 51 --------------------------------------
 xen/include/xen/compiler.h |  1 +
 xen/include/xen/lib.h      | 42 ++++++++++++++++++++++++++++++-
 xen/lib/Makefile           |  1 +
 xen/lib/bsearch.c          | 13 ++++++++++
 6 files changed, 56 insertions(+), 53 deletions(-)
 delete mode 100644 xen/common/bsearch.c
 create mode 100644 xen/lib/bsearch.c

diff --git a/xen/common/Makefile b/xen/common/Makefile
index d65c9fe9cb4e..e8ce23acea67 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,6 +1,5 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
-obj-y += bsearch.o
 obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
diff --git a/xen/common/bsearch.c b/xen/common/bsearch.c
deleted file mode 100644
index 7090930aab5c..000000000000
--- a/xen/common/bsearch.c
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * A generic implementation of binary search for the Linux kernel
- *
- * Copyright (C) 2008-2009 Ksplice, Inc.
- * Author: Tim Abbott <tabbott@ksplice.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; version 2.
- */
-
-#include <xen/lib.h>
-
-/*
- * bsearch - binary search an array of elements
- * @key: pointer to item being searched for
- * @base: pointer to first element to search
- * @num: number of elements
- * @size: size of each element
- * @cmp: pointer to comparison function
- *
- * This function does a binary search on the given array.  The
- * contents of the array should already be in ascending sorted order
- * under the provided comparison function.
- *
- * Note that the key need not have the same type as the elements in
- * the array, e.g. key could be a string and the comparison function
- * could compare the string with the struct's name field.  However, if
- * the key and elements in the array are of the same type, you can use
- * the same comparison function for both sort() and bsearch().
- */
-void *bsearch(const void *key, const void *base, size_t num, size_t size,
-	      int (*cmp)(const void *key, const void *elt))
-{
-	size_t start = 0, end = num;
-	int result;
-
-	while (start < end) {
-		size_t mid = start + (end - start) / 2;
-
-		result = cmp(key, base + mid * size);
-		if (result < 0)
-			end = mid;
-		else if (result > 0)
-			start = mid + 1;
-		else
-			return (void *)base + mid * size;
-	}
-
-	return NULL;
-}
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c0e0ee9f27be..2b7acdf3b188 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -12,6 +12,7 @@
 
 #define inline        __inline__
 #define always_inline __inline__ __attribute__ ((__always_inline__))
+#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
 #define noinline      __attribute__((__noinline__))
 
 #define noreturn      __attribute__((__noreturn__))
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index a9679c913d5c..48429b69b8df 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -204,8 +204,48 @@ void dump_execstate(struct cpu_user_regs *);
 
 void init_constructors(void);
 
+/*
+ * bsearch - binary search an array of elements
+ * @key: pointer to item being searched for
+ * @base: pointer to first element to search
+ * @num: number of elements
+ * @size: size of each element
+ * @cmp: pointer to comparison function
+ *
+ * This function does a binary search on the given array.  The
+ * contents of the array should already be in ascending sorted order
+ * under the provided comparison function.
+ *
+ * Note that the key need not have the same type as the elements in
+ * the array, e.g. key could be a string and the comparison function
+ * could compare the string with the struct's name field.  However, if
+ * the key and elements in the array are of the same type, you can use
+ * the same comparison function for both sort() and bsearch().
+ */
+#ifndef BSEARCH_IMPLEMENTATION
+extern gnu_inline
+#endif
 void *bsearch(const void *key, const void *base, size_t num, size_t size,
-              int (*cmp)(const void *key, const void *elt));
+              int (*cmp)(const void *key, const void *elt))
+{
+    size_t start = 0, end = num;
+    int result;
+
+    while ( start < end )
+    {
+        size_t mid = start + (end - start) / 2;
+
+        result = cmp(key, base + mid * size);
+        if ( result < 0 )
+            end = mid;
+        else if ( result > 0 )
+            start = mid + 1;
+        else
+            return (void *)base + mid * size;
+    }
+
+    return NULL;
+}
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index b0fe8c72acf5..f12dab7a737a 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_X86) += x86/
 
+lib-y += bsearch.o
 lib-y += ctors.o
 lib-y += ctype.o
 lib-y += list-sort.o
diff --git a/xen/lib/bsearch.c b/xen/lib/bsearch.c
new file mode 100644
index 000000000000..149f7feafd1f
--- /dev/null
+++ b/xen/lib/bsearch.c
@@ -0,0 +1,13 @@
+/*
+ * A generic implementation of binary search for the Linux kernel
+ *
+ * Copyright (C) 2008-2009 Ksplice, Inc.
+ * Author: Tim Abbott <tabbott@ksplice.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; version 2.
+ */
+
+#define BSEARCH_IMPLEMENTATION
+#include <xen/lib.h>



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:24:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34676.65868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDhD-0007O5-Tv; Mon, 23 Nov 2020 15:24:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34676.65868; Mon, 23 Nov 2020 15:24:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDhD-0007Nx-Q7; Mon, 23 Nov 2020 15:24:03 +0000
Received: by outflank-mailman (input) for mailman id 34676;
 Mon, 23 Nov 2020 15:24:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDhD-0007Nk-3B
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:24:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7c51524-a4e1-4201-a3c3-4e2a0362897b;
 Mon, 23 Nov 2020 15:24:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9EBE7AC23;
 Mon, 23 Nov 2020 15:24:01 +0000 (UTC)
X-Inumbo-ID: d7c51524-a4e1-4201-a3c3-4e2a0362897b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606145041; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=w0KlMYiiW0WboZScCSGBoeGaSU9YjcD5lFMncp+ZtPw=;
	b=Xq4CA1XdpUFDcZy4KorttAc/zYWs+iDVnlavwZOdcJ3QkM96U03rfqf7UTyTH991ZOOPSw
	TfaL626urcyZPycV82CCKE4UILOAOi/7ZNOXlmknassHLKq95NYRFQZGOBQSCBlZWIyddh
	Pv7nkKz/qTJNkpvM6YyPxWu1MQclMpA=
Subject: [PATCH v3 8/8] lib: move sort code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Message-ID: <84bd9aaf-e94d-da6b-f2f9-c2da64df5312@suse.com>
Date: Mon, 23 Nov 2020 16:24:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1a6bac6a-7d83-f5b6-c5b9-8b3b39824d40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Build this code into an archive, partly paralleling bsearch().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/Makefile        | 1 -
 xen/lib/Makefile           | 1 +
 xen/{common => lib}/sort.c | 0
 3 files changed, 1 insertion(+), 1 deletion(-)
 rename xen/{common => lib}/sort.c (100%)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8ce23acea67..7a4e652b575e 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -36,7 +36,6 @@ obj-y += rcupdate.o
 obj-y += rwlock.o
 obj-y += shutdown.o
 obj-y += softirq.o
-obj-y += sort.o
 obj-y += smp.o
 obj-y += spinlock.o
 obj-y += stop_machine.o
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index f12dab7a737a..42cf7a1164ef 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -6,3 +6,4 @@ lib-y += ctype.o
 lib-y += list-sort.o
 lib-y += parse-size.o
 lib-y += rbtree.o
+lib-y += sort.o
diff --git a/xen/common/sort.c b/xen/lib/sort.c
similarity index 100%
rename from xen/common/sort.c
rename to xen/lib/sort.c



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:25:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:25:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34689.65880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDiC-0007Yu-8U; Mon, 23 Nov 2020 15:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34689.65880; Mon, 23 Nov 2020 15:25:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDiC-0007Ym-55; Mon, 23 Nov 2020 15:25:04 +0000
Received: by outflank-mailman (input) for mailman id 34689;
 Mon, 23 Nov 2020 15:25:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khDiB-0007Yd-3I
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:25:03 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d762adb-0ce2-4a6e-bfba-7831b717e676;
 Mon, 23 Nov 2020 15:25:01 +0000 (UTC)
X-Inumbo-ID: 2d762adb-0ce2-4a6e-bfba-7831b717e676
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606145100;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=Ib90u/e+eUtTaYO/KZU1Z4//0YdJtvxkIPd9eDEj66Q=;
  b=Hx/cwgfLnE92RFSDsgNRBb+0DWHqveHlnLJqbundSsq3k8ZHoXO2HHOZ
   +dWV0lUhdahCy/S7bD6ZoS7+ULUtNY0NokuV5k6lP/JJnCr12FDUeLZQh
   5Tb/ek8X7bgQeqXnwqCeUKDNup1HWpSlfTFUFKigHkoZ1e/aVkbDBywRk
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: BrZe3/oKhLRGhrzpF7iAOMaLLlpsXVbDtuzsJ08pyHEWqTgvtCpQFH83+p1s6fVhN8ZOqk9Hc9
 8cWuxBFl8OaMvA+v5xOMiYW6+nOWZgtBRSNMZuaPK6s2XdUGvSyCiEtKLHZGnnPGXwvdltZYRr
 VcQ40IhLSESWvU1IFBG5o7rG1CDfBJ3DvoHkM5NfhaDhBTdpddaCgTpwc8G/rBFFxQxv+4nABZ
 vT/hCDlsdDqW5v7vyEAXH+2murT6iVpLZN/Eq3DxiPsiZQJsSFUxqjV8TIVGrYGslWvBiRbcJD
 ruk=
X-SBRS: None
X-MesageID: 32911325
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32911325"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UJ8sJQrgAFeNoUz2h/seYIafOurN4TyFZLWZi6o1Nwwl55MBe6VirYox4MI1eOvwVfcQ6mobrbV+sk2J0agOonC50k/0pIZXlCZkwGi6qERXZxZT1WRNByubtBsh5yLmyGMpgNa7BocI5pRSD8UOI6sgXSOLIF/MH+8EEYbOVoyv9DoMTiBBlmWxsey1R2CfaHGXX85xTWrYeDVzi+Cw8NlakPcnCJUwda9k8iu/nxRcJftCwZOCyKqkSMP+gTfLuljQE28lVsLh9IKQx0eGMBTRyNAYvfncDyuSe7VC+CW/rzhs+3p3l4y7hRk1xbmsFJyE4NPQSaqvs/5yYWymgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jqa7ZKI8l4naPhLwRnvT0E+Zu0CVzlLTZRP53nFoKpE=;
 b=j7VlZDWEPebLOpNiixQ5BTnZQp44aOKMSwxFHbIWVrC5quXmoA/QdmnI//klQLZl1SQ/n2OrXmDi/2OVnlErvjpKXPZr7BzcjhIKYklsv3jC6bzapFPNtLtQ47SuEKjsuA4ekfurLjgMJPeIra+CyOLqPrhLpm3pCyKs2CioJHHgynsq9OgHryLnjHykOzvt6JqkajeFH6DI3gai3C83A98Zyu7PkZ4Q2/nbfoEdQfZ0duBPOTbr5csGLxc6YNKCy11YQwzNjie1ldmKMjDgm8WkJjE3IWMJAtcT3vOOSFxBDpHgeFa49lNa4hKCaiYbU+UppPC/Ao4+X/NlQp5X+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jqa7ZKI8l4naPhLwRnvT0E+Zu0CVzlLTZRP53nFoKpE=;
 b=WHVGNvNjcc2l1CqESTHpkqE344dSC+zK15Q/6Pm3kjwlJtwzAfHgrMIt3aU5dsKpKW/fgBlMOkcL5UMq2Zkc+vaz0s/3KwSbqq0/cXqCyM0gujgOm9W1XoNt5zsKB7zNua88YiPQg9jmldjZdfDrkSoJO0uHttzjZP5Xk0qLz5o=
Date: Mon, 23 Nov 2020 16:24:54 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
Message-ID: <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
X-ClientProxiedBy: MR2P264CA0130.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::22) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 250508b1-9116-43f8-2610-08d88fc3f015
X-MS-TrafficTypeDiagnostic: DM6PR03MB3481:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB34815D9260263FE3244C623B8FFC0@DM6PR03MB3481.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ojLH8U+xe+WSb9S6av6XpaNJUHrUcRjXrfMjtbYBCrcEz4sy9DS2gh5nHn1t/YJo4V+fmEQxA/zQnG9AqEloEL3HlDxFj62d5yw5SP/TBxmD/kzJx7UMWwU4jZ5+j4/7vwg+aTR14WohU2uVH9jXYXIa6krv7qErqA3uEpsmYzWVU1bkr+t4whQQUm2Fk4UIuWusKsXGHUmsxFO0XTgIxcLKJOPyjO0pUIedX8t2ZOPS2sEghqHFQFdPX7nyLF2Mh710atVh5gECzOiaEQ+GmKUpF/GjcJdWx3fN3c9+K1b7c8EJ/6MGe7HZs5cuBrPiYWSXqO42AYJAvK8Zg39fxA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(39860400002)(366004)(346002)(136003)(376002)(5660300002)(85182001)(2906002)(6916009)(1076003)(956004)(478600001)(66556008)(66476007)(66946007)(6486002)(6496006)(54906003)(8676002)(9686003)(86362001)(33716001)(316002)(186003)(16526019)(83380400001)(4326008)(26005)(6666004)(8936002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: oF1Etf6fow4FI6ydalvy3iYg85b69TzDHkM2ptuJ9qd8BIAIxy2XBJb3FIAV6sipssmfazlfkTimJ4rOjLHF76OSsJ3XqJrpEY+v8P6mTAifnClHgDq/Qne7GCBdq6fubOPnEnMXbzIHfNdBjWM2cflDMleqSY7DALOpOL7kD3zsQBV395SclQzBxWW9ZEESahcek7i8WxI2VMmaB9kV6+X3Xh+lsC/pDJJczxwEXlLa96h6Z716UdHBNLvRj5m2pz00RM2o97Ylf+w1tslbJc9OVNMIU8dGq2LEZW/bFq2kRwbNBsuH019NjuN7GFIsuky3L5lOA82mL/VuXYFkzeel0FCm+UopnLhbfrGZLaHBHB63KFBVa/U7t/X8FdIaaLqXPv/hMgBfxVIoKYyGZsfd0e3KQF/ERAqh2yy/IYzbDcs7XOvOhv/6y+TB14NNMwA1MpJh5I3PsMUqjvKP0e/FN5gt5uLUIiwGAesz/auPAPnTOwXGOLpTBlHOC60bKjfbjnkzgtPlhHZxU+dDdPfFnwVyR/dDNiVGvN4CVmBUrbaUi8t1uE/B0Zia154mMKDCZy7t+KSfrcv3bn5J3kn76CsnMZq7Nr+NLlIx8u5s8NmaBJHPHNhvAwSHxrxyOL1Icvb3dToW0dL+TxViK3G/uicl3I1UHHrtwrgAWZrHKcEaU2wbSz1lHwOyzZv6VXXWf3O+i1klsYgjkv44Q+PVDnhEfFuAP4lElw4WND+lNjcxiSR1VO7PMzA+j0Klo7y6PYXbYHlacwX/WDzQhLwdnij8qymgA9CdjHVqqRTFXt1vPTQPy32m5kMamDbFc1+kHXM90lOYpzy8YzVSCM5najHOpkQ9ybTBAvQgtrqdj187AcUCeufjlLjeJhoEF3g50G2ylbctDBY/CQt+0A==
X-MS-Exchange-CrossTenant-Network-Message-Id: 250508b1-9116-43f8-2610-08d88fc3f015
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 15:24:58.4846
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CBfqvtvH0wqd71LSBQ5B626GznajZPz44ptqbE8fYCsd0/QIO4XtdCOqsABDWh3AnY8A/PE3yV9T8xUD33Yutw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3481
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
> Use of __acpi_map_table() here was at least close to an abuse already
> before, but it will now consistently return NULL here. Drop the layering
> violation and use set_fixmap() directly. Re-use of the ACPI fixmap area
> is hopefully going to remain "fine" for the time being.
> 
> Add checks to acpi_enter_sleep(): The vector now needs to be contained
> within a single page, but the ACPI spec requires 64-byte alignment of
> FACS anyway. Also bail if no wakeup vector was determined in the first
> place, in part as preparation for a subsequent relaxation change.
> 
> Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -443,6 +443,11 @@ acpi_fadt_parse_sleep_info(struct acpi_t
>  			"FACS is shorter than ACPI spec allow: %#x",
>  			facs->length);
>  
> +	if (facs_pa % 64)
> +		printk(KERN_WARNING PREFIX
> +			"FACS is not 64-byte aligned: %#lx",
> +			facs_pa);
> +
>  	acpi_sinfo.wakeup_vector = facs_pa + 
>  		offsetof(struct acpi_table_facs, firmware_waking_vector);
>  	acpi_sinfo.vector_width = 32;
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>      if ( state != ACPI_STATE_S3 )
>          return;
>  
> -    wakeup_vector_va = __acpi_map_table(
> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
> -
>      /* TBoot will set resume vector itself (when it is safe to do so). */
>      if ( tboot_in_measured_env() )
>          return;
>  
> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
> +
>      if ( acpi_sinfo.vector_width == 32 )
>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>      else
>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
> +
> +    clear_fixmap(FIX_ACPI_END);

Why not use vmap here instead of the fixmap?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:30:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34703.65892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDn6-0008TS-SQ; Mon, 23 Nov 2020 15:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34703.65892; Mon, 23 Nov 2020 15:30:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDn6-0008TL-PE; Mon, 23 Nov 2020 15:30:08 +0000
Received: by outflank-mailman (input) for mailman id 34703;
 Mon, 23 Nov 2020 15:30:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qJrE=E5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khDn5-0008PW-PQ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:30:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5aef0e5-ecb5-42af-8cfc-c36dde18149d;
 Mon, 23 Nov 2020 15:30:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0C92FAFFA;
 Mon, 23 Nov 2020 15:30:06 +0000 (UTC)
X-Inumbo-ID: d5aef0e5-ecb5-42af-8cfc-c36dde18149d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606145406; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b+iUpz5dQLBuIyN2tKisN5WrWt6YFNR4JXd+JNMkhks=;
	b=NFkr9zxXBXUqFA42/vKh1+G+0M6+89ryUTlgh501Xys7DhHWj7LgRAxrXMvN/8NONZptmX
	bzJdjSctfRz3BlZwgmG1smbXdOZdeNKydRzyUtU7vY+qM33C51XpAMNUjhbIv67oujsBuK
	zRt/ecBG1+VDUI3bFPc1yrTDpnh7ZAw=
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
Date: Mon, 23 Nov 2020 16:30:05 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 16:24, Roger Pau Monné wrote:
> On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
>> --- a/xen/arch/x86/acpi/power.c
>> +++ b/xen/arch/x86/acpi/power.c
>> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>>      if ( state != ACPI_STATE_S3 )
>>          return;
>>  
>> -    wakeup_vector_va = __acpi_map_table(
>> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
>> -
>>      /* TBoot will set resume vector itself (when it is safe to do so). */
>>      if ( tboot_in_measured_env() )
>>          return;
>>  
>> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
>> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
>> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
>> +
>>      if ( acpi_sinfo.vector_width == 32 )
>>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>      else
>>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>> +
>> +    clear_fixmap(FIX_ACPI_END);
> 
> Why not use vmap here instead of the fixmap?

Considering the S3 path is relatively fragile (as in: we end up
breaking it more often than about anything else), I wanted to
make as small a change as possible. Hence I decided to stick
with the fixmap use that was (indirectly) in place before as well.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:41:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:41:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34715.65904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDyC-000124-Uu; Mon, 23 Nov 2020 15:41:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34715.65904; Mon, 23 Nov 2020 15:41:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khDyC-00011x-Rh; Mon, 23 Nov 2020 15:41:36 +0000
Received: by outflank-mailman (input) for mailman id 34715;
 Mon, 23 Nov 2020 15:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khDyA-00011r-QL
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:41:34 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 864a8188-4f21-442a-b23f-68d85667882c;
 Mon, 23 Nov 2020 15:41:33 +0000 (UTC)
X-Inumbo-ID: 864a8188-4f21-442a-b23f-68d85667882c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606146093;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=MEJWVYFAiq7vnKvLr3M9QQUYhWI1k4NAiDnCKn4ITYU=;
  b=W5DOHAn/m2J2yhinvTDK/UIW5zQ/7+d5mQoPHhEI0+E/7B+v6jzgEobg
   mSC/N6UHW5Gg/ze3DQjp4u6/agEE9LvzJ2GCbnJJj7y8LK7oWMje1N5OL
   8WZwrZb1PkNj+knmijkW6QuPn+5pZaZjix9a4UX0n69zumUnyIgWeiHFe
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 7qCbsb1jt0B1kFWijrnms59X6SVbH6TOtGVWcWsubV9GWy7ll8BaGLp/wHxszReQIaNcvr7yRU
 jFegfXEFjNtRSqqOuIQ+t2ZlKhqfLnMcnbBTVAXnUwnWC55JXsBKusoJqCS8I3E8uK+ItwFPIU
 A3oVNbCkjs/jqWZIKZa2StbFmct6pDOqyWRpOWXzb8Y1KTPkFsX98TN3IdXJ0bHnhE1g9OpaAO
 f0CegCGILy6K02n3Nu42eMNZ8bdtRl6W2BBpvCTIH3bmvAvpFKdC2LQw6SELCcOdGO0ozTXlw0
 xjE=
X-SBRS: None
X-MesageID: 31766015
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31766015"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mzfFyPon5P8vIQpmKuHkEzzBZYKEMcRbDw2go5fufI7X2BfgqD2Td8rZejWal3qDcmrUSHQY/llqEilo0wznP8ZscvzoR3hoydmJA3aK+//wguCwCAe+tpH6clK7eQrXS3eaHYDVec6JcjxtebrkutxUcz2ROiWa3LDhoEaGHNUUf3Jfs0iw+PBJHwWkq+/nUzdwUbr2n0tuahcHoPHcLp0QWATz1J4m2Og9lMO+S72zbiROYPZQ+KaI2QBsz7sUEdS81msI9mo6Z+ip84TUSu1/2aaq3t+Z0q4ucAK1Qi1cyLL7PKv6inv2gD0v4FVJHTO+MGlu47vkXcUWdVie8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=g8m5e98SQ6tCHfxh9kP9bnoce95sGLFrftHvjBmBMuo=;
 b=Q9D6e0Knl5QC0URNBM+NiQJmJ7B1AM7PJ7gA1w3TW1nvhUw68LhOM6Xe1VkJATd7tYHEsc8KgOc6MAN+s3SF5MU1rxSl5JK6wY+XZ5Yb1gxBno5VF35peSID5O6YJDgTbM5bSubK0N3Md5STel52ZkSxbxjTAYJspBUyRZFmHwHPa41ZAyvxPsOyWVkJpL85NGjRDQpfG++Jwt3VQ9jPccnAqvrX6Kt0rp2bQjUboThe08GLhLY71H8V6rWJsCW58UJbKQWTfZkWFJbVTtVmOvDb8n8wvHBG0/NKzuT13fSKUthymnC13U7bMYA2k44dLufg6z1duIF63M7rGx/oKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=g8m5e98SQ6tCHfxh9kP9bnoce95sGLFrftHvjBmBMuo=;
 b=e/YiBQWMGUm++Y3mZE2bvu7iSNK50RnlbhoT/hmnWXregAzLRXbD4dctRMzUWQfRl2qxPZS4xqYMc62ZAnftWDCTg2AjmxJMKI5w85419+jqz9ObwJvV1HqQnLPwv420dACCDxq4pyuI61AUrOsHOGZAAm4TreM1DMxuSFta2DA=
Date: Mon, 23 Nov 2020 16:41:25 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH 3/4] x86/DMI: fix table mapping when one lives above 1Mb
Message-ID: <20201123154125.pvfatxnpkgxsz7sm@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <53cd4ae3-d806-c3ad-02fd-317a09f15a24@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <53cd4ae3-d806-c3ad-02fd-317a09f15a24@suse.com>
X-ClientProxiedBy: LO2P265CA0393.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:f::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 54453d40-dea2-4d6f-0192-08d88fc63f7a
X-MS-TrafficTypeDiagnostic: DM6PR03MB3740:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB37404D09869161A33367575E8FFC0@DM6PR03MB3740.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: bNt1i8hTdTjCITUcv40u3iGHZ+iL/QNtekU/sggFVXtFZiZy1Dsb7mZ1vep3heHa0Hw1rC0n7TgKOstuC+jURycrVR4mWgt/Nl5lDZgRhmh2iRDJn5PDoeq0Yz+FtshVNeJtbcE5zzvjA8ToTaYRrd1tEHzJZbNzCE8Qb8zrVXevV7TJE5Cr10QpEmdLRyriEuXJwKEgcNEMkpnvMtmooX/kgWbzhi4qjNk+CMvhUtf7p4vSOvQXz3oCs9pwu1geoPPQaVNNJUboHA9H9nfsXrwRIsT+o1/O0QM/5kVAMCNJ/TwlowQyhIxX0PoD84oeHwV77vGTO2XDidXTfJUblA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(39860400002)(136003)(376002)(346002)(396003)(9686003)(6666004)(6496006)(33716001)(478600001)(316002)(5660300002)(54906003)(66946007)(956004)(66476007)(66556008)(16526019)(186003)(26005)(86362001)(1076003)(85182001)(8936002)(8676002)(6916009)(6486002)(4326008)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 58rUxKqLK/e6IvBRLIJmQH5By/oy9uxNWN8g5olVXtMnNqn50hEf7Jyg6KQWaFcmpeRIv2Xjg8pzuEA6w2wZPI/Mw4THuME8Ew5uGaowaQq5YfaPdrvCPC32PudzeA1Hm9b4TlCxdsSB+0MxLw90M3JzJtukIO/zZ3qlD3MyYFrvbkQaUUQxR88xGvtrG32VDYK9uIvA/4y9f4rynx2awheC3XyU3ArflVnuyghd/TlfAUN9KDEWdnvlDFvDpZZ/ULsPEET2IqlzYZxOtzDy4v5Vad8TIRhI3EohMGhIqWW6ROcXOXh4YWL+9h06bsMYOrGHg56QyqxbsPyNhYcUOQYeg2ZxYJ8wvRN6SbNFpCOnMmBg+TuDTn0Hmmxjf8o24MzZrCha8rQXoRsxzjMPmz4681Xne4RNsLp8jaaeWFaYNz30aTbjF9MuDe2ZImGju9swqUG4azgOt3ppeQspbXLpaqewj/EQFdxK//VlRVt/IdRHgakIWAukj0INcn/UVbKzu9/l0hdxGB52aV0W3scF33ujEAILQI/YhHYsI/lYuNMxZOa9oA9Lj4ZCvYGJmTS3OdWWik4WrzYXooGQ8E1SguQ+peAn7o0FMnAcDoqKkszryOjq8FztI1UKCQFYauhzR/CEKQPvFoyc9k9gA2p73smStBto/Z5UXj+uhkH68OL1wZ6BWVYzTYNWXWX7w6jfn12Xq4sDqO6TDjGvnPi3NQbqrs2uqGUMMHr6XExRcVjccaZMbdZC8wbFMKVWWJNchF253PFdTczQRYBNfsJ9y4kdoZSFdbi0FcPEfbfxBRz5V/VGKV/oYnJOnv8qjKIh8jQxzByhQhmsR6Vq1yxVu9V1Ybg2GaeIEc+pozFZ1VnZImA5OeP8BD3EmOB2bkmuIzQarTmMZM6EGVxW0g==
X-MS-Exchange-CrossTenant-Network-Message-Id: 54453d40-dea2-4d6f-0192-08d88fc63f7a
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 15:41:30.6770
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iKCDgRyNI9YuFmNTUPWcHl6V12p9blcDwD78rgJgUWZTQ9MXDO3SYp915/tMa2Fx96+g+JnNqJUnz607pGquJw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3740
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 01:40:30PM +0100, Jan Beulich wrote:
> Use of __acpi_map_table() is kind of an abuse here, and doesn't work
> anymore for the majority of cases if any of the tables lives outside the
> low first Mb. Keep this (ab)use only prior to reaching SYS_STATE_boot,
> primarily to avoid needing to audit whether any of the calls here can
> happen this early in the first place; quite likely this isn't necessary
> at all - at least dmi_scan_machine() gets called late enough.
> 
> For the "normal" case, call __vmap() directly, despite effectively
> duplicating acpi_os_map_memory(). There's one difference though: We
> shouldn't need to establish UC- mappings, WP or r/o WB mappings ought to
> be fine, as the tables are going to live in either RAM or ROM. Short of
> having PAGE_HYPERVISOR_WP and wanting to map the tables r/o anyway, use
> the latter of the two options. The r/o mapping implies some
> constification of code elsewhere in the file. For code touched anyway
> also switch to void (where possible) or uint8_t.
> 
> Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:44:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:44:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34721.65916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khE1K-0001DZ-Ei; Mon, 23 Nov 2020 15:44:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34721.65916; Mon, 23 Nov 2020 15:44:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khE1K-0001DS-Bh; Mon, 23 Nov 2020 15:44:50 +0000
Received: by outflank-mailman (input) for mailman id 34721;
 Mon, 23 Nov 2020 15:44:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khE1J-0001DN-Au
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:44:49 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9fc51ee-7db9-4c79-9c04-5f68bd615732;
 Mon, 23 Nov 2020 15:44:47 +0000 (UTC)
X-Inumbo-ID: a9fc51ee-7db9-4c79-9c04-5f68bd615732
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606146287;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=iMeadjRxQcZlD2zTEb+imI1bmEGKVKytoWBZ5ZeE2+w=;
  b=GDjzQ/Y8lJK3LBU87KVQJi+KHgk12zNufFWjgb8zna2v2ojI8Lkmf2uD
   llOKYZLamZ/NvTrJlFbQGEj122XMWzVrbd4/6WX7Gbos13HFiCPgisEhJ
   afAfFebf1lGPnvZBS0872jHO0SZtFhyr/UeM7CD5bS6emVIWmTW7qRxVn
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: SGVF6Jdj7E3ZrWBACYftE+CWJszS3Ln3/A4rBf2mz9dtWCPkuwJwBwD3qH2P/9q+w32u2bjXUJ
 kt4K/6urFJ5zkbMeL1BRiNZIxEcVzs4wylfHA3UQxIb69nXN55rTHeTUR6RUs/Q4emxjW0mTgP
 eOwsOYepgfNRNl6OyY6MkegLGiOdYfOgvQVtRtKJpE5Nu8HLEOGEcy+LTBrwbsG10whxlsiLKl
 89c7Ff2pj6CJTpnSXZO08Zf/PLYuQKJd+7J+HMi8Xd+V2xStOCj4bs+KAeTFkShK/18oSLaJxl
 fmw=
X-SBRS: None
X-MesageID: 32913229
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32913229"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A93fW7brkgzwEzfuvKBfYRcEpviZLCST6lqXOhFvENAeVUg23zJ46BjlDfjB2licb7PvtbSte2pV2ieggXCt3mypQ+6TSj/wV8LsaWKPyoUEsnXwpe29tHXx/5R5A7VZ3u2wvduqKOUoRuzS+BvWpx/6qeZSRUF0FbGe8z3nXQmJNMUzoo1SMWrVqWlddNYphl0QBv2wpCuXZoY6sCgEIrjptkAE/nb7hRF2/b9KmJTAkQ47pxQJpf4xP5ZUf4vrGJ1u/4pJeM2CZOuLuGk4Hr/+6Qdq7GmVUWbrqd08kRdsZHMU1i7TxzG/iCetldUnuKydaDV5Q1I+4Sn+KPE9zQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C1Wd4GzeYtWnis+nIYyELt1FZdGc5LtAW9yKBZSKUQE=;
 b=ATKQcZxANPykSJMIhqL0uTcRyPOooWILjnA2DXlQ5Fcp5a4CRrNRnX/lmxBa6i0kHMo9cu0JUdpKs39pn3zg/KdkSMthzo3WqH7GQyO/HKHbJg/dPwZKvLmk5whlLab2z2LvmnNcWqX+r9utz5g6HzTbw6T+TEOLDfu3sNwMj+L791QtYmPnH7lgktY7yQcDRkIkz9iuMKGB/sT0CMCKXx1zyOpT0KF6RC4hpG3owOMkuSsF3w17psVYC0lEMAKDmH96+lGW5zhnF8JKySVHmVHv52GH/kfraalifSQStQ1hjTnChLxIvjSj0fadYvItY4TmSJmZ1xnubpz32DgIaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C1Wd4GzeYtWnis+nIYyELt1FZdGc5LtAW9yKBZSKUQE=;
 b=io3OGhObNTm+258lVI0Vv9+G/nfe+IWrZDloXRaNtM4QDTlwKhbG+dPT0+WEmkyHtlbxrK4YRXUKY6Uc4xxw7UuPET8eujzqKj2dModXKMplXj0aZOg5AoINRIhya9tixUVSUuYcaEUvyMjuGhVMg/9nQpcPo55OknvpzRDqYEM=
Date: Mon, 23 Nov 2020 16:44:38 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH 4/4] x86/ACPI: don't invalidate S5 data when S3 wakeup
 vector cannot be determined
Message-ID: <20201123154438.r4dspkhbmgos5j7n@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <d2b9d231-8a05-6164-66f8-74d7bfe4b40f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d2b9d231-8a05-6164-66f8-74d7bfe4b40f@suse.com>
X-ClientProxiedBy: MR2P264CA0014.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a363ef01-2813-44fb-df15-08d88fc6b2e5
X-MS-TrafficTypeDiagnostic: DM5PR03MB2633:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB263381619EAEC7335E0572FF8FFC0@DM5PR03MB2633.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Lby4VmSivWS0hw1GCxxdpD2E6CPFpXJyATsGDvJdrcxowNWODeg81SyZWTM+tAidMWLgy5tvxP2HlonRv65VI6T4dhHYbJgHs0JyRmFyOs70Ku4+Ai6T4yx0gfYZgAdVjbHLfwV6S6Ni2GmWIF4/8zdXSuAeeoVbnVXh/kBbTnA54QYe2/PDuJsqSM+hSjHJif4t6bN/HJUuwonAnnrw3pSHCvFXbC0kp+vWsnhVlBODCWo6NBCLdjKOsg8T3hwjTgLWvRatl5adwWtI1WZBoJJWak/aYJlairyy7KMvgoiWPSybdPeoXn00A+aTQFEL
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(376002)(346002)(39860400002)(396003)(366004)(9686003)(6496006)(6916009)(6666004)(6486002)(33716001)(8936002)(4326008)(2906002)(1076003)(8676002)(478600001)(66556008)(316002)(4744005)(54906003)(66946007)(956004)(85182001)(5660300002)(86362001)(16526019)(186003)(66476007)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: rTuF6NHP3JdKnqTV39Y/BWd7YbwmxtMwzUNg99Vjn06qPb14IjyHPe0exExz2LttfmdLw4cjwcgYpRyylsdmN5c6qOUsB9PoURRbml0XZ2RkCpnSuuWRQEbL781F43KYeDkLLHlQJjdFJfLWKFTiGqRvRuUUEeU00lV617KuLm6Mwzj/gI2lCluTrMTrxY0Y36boYoD33uy5WtHbziGWFiM+OgMS/j7sdrK1rcTDkNQ1j7pwvNg1RcirnTsNbOZ/FaV+cwErgyKkXEx00+A4EZs1pAgl5sve1Rx+mUj2MmcaK019jd6ZqSmpAl2cSgv52pP7FMlL/55nZZSvqJjioAZE2+/f/aXYPGM4+Dt9X6WQbxxk/8qaO9S+z3JW/dtgW6HB2KHtOdCpBX2r5MHNsEIGAZT/PoJqYwVCWl9b07BKhP9SmXSu6JrckhcShLSV90QF536/tf19Dq7LfI+5/ZKeyXbkUwe3itq4UCDZzcmQJMLeMQZLPx7JtInEUUF/vMrr2auMZKe2Un5WGEgUOf7+zRBR3pB2vMwDF0Gmrq4k/4PlKAZ8A1P99bBLc5/pW/eKsgxoWDm6iDTpd+fVlmIIxLG8Ocy4Oj+dq1VZUV8kBd436GwCv+epl4k8sgYYFMft7WvWqK2OP/AlGVwdIx6bn0Ox+NLdwFzHnyH+Ac2f+x+QYoPrJx3IosiVn4s5BygBaxNECBJ3WDWKbXkr8iIzzNUz8KC+sayD9LptHEXgezcDb4kO+m0LULWmMidEVMTMW9Sil+ZqNvLY5MsvvJ72fTUoM7af8W23LiF6gmumh/4yMLd/zKWuXovYgwcJVqe1jAn0zd6/lyvg1nV4AgANaw5+5TFTAQyJ/CIr6iwvIbDPypaKZ7EI6h0ZCAJciYiaFdkqAshF0GG54hASdw==
X-MS-Exchange-CrossTenant-Network-Message-Id: a363ef01-2813-44fb-df15-08d88fc6b2e5
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 15:44:44.1825
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bD/AGpwmBRA8FgJh8x4WNLCby8lL1og+DwI1dMoZg3F3jAUUup0rzuy5WgFxxw3yfRIUF/m0mmiQIcfScI5oCw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2633
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 01:41:06PM +0100, Jan Beulich wrote:
> We can be more tolerant as long as the data collected from FACS is only
> needed to enter S3. A prior change already added suitable checking to
> acpi_enter_sleep().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:47:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34728.65928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khE4D-0001P6-3u; Mon, 23 Nov 2020 15:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34728.65928; Mon, 23 Nov 2020 15:47:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khE4D-0001Oz-0u; Mon, 23 Nov 2020 15:47:49 +0000
Received: by outflank-mailman (input) for mailman id 34728;
 Mon, 23 Nov 2020 15:47:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dDmC=E5=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1khE4B-0001Oq-C1
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:47:47 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 414d1422-316d-42d9-9bc4-a15bb4475700;
 Mon, 23 Nov 2020 15:47:46 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id 142so18466511ljj.10
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 07:47:46 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id s62sm3552lja.102.2020.11.23.07.47.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 23 Nov 2020 07:47:44 -0800 (PST)
X-Inumbo-ID: 414d1422-316d-42d9-9bc4-a15bb4475700
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=6BjATHCxirlMscvoXaspmm4DKGb45kqaNww0aQhBLrQ=;
        b=uZdvRNn1LYALcK93XCRoJlD9kYiCNAtQGcM/ihVDybp6lXIjsBGgJXV7ZJqjp1Sc/K
         8ERB4r/Wioa/fSTuQWerlp72SCjqSVgld38wTHOe36kj4BJTILm7aY1cW2XrGYrd7wJb
         l/7RxNG4lVyp2/vroymXZZIsG6wiuYw5CBvNvizf+tdytsLcxbzleEJKWMt56vgv0DNO
         YUPZi2UdGn3t5J5cjZlPt4E/qCK7YOXx0R2Q9bmXPMwGfeviMetA+9r5eCu1ur/cxtoi
         1FU62uMfxZnvD7z3IpoPlQoGV6B0coqQedK5ezspYCrvDsdfeZoybjKBfWhunBU4C+bx
         aQAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=6BjATHCxirlMscvoXaspmm4DKGb45kqaNww0aQhBLrQ=;
        b=GpRO7tB5WOVsV07oruhJSmq+68lRy+Vl8Tv6I7owqHstFTaReTjAlzOCoWTLnLFm3S
         WViRQypoQ2w62C1u/dvqlPGLSglxsqISViINi8/nhLbwpcdqDlwx6IqhfdvR5vYw46Uz
         k0/IZJg0MzUN1v/Gdn77gZmpIkxe1colF8xvzPLBTvM/DZTwvks417BxOSSmkNlFvXkn
         WErx6/f7MOIe6ZtWoMJByaWs6y8J2SDe0XiDB6g6k9e1XfTtRgoLUM2uwIDmnR02SJ+V
         qiKbbfcFBJcFGXSIgrUfeOJV3dNg6k3Zn+n8RiuFFVxdxf/AqMezv3sEkF5OCh9CO0Bk
         Lv4A==
X-Gm-Message-State: AOAM531NkMq44BF0O2wc2IwgndiBB8q2wxmqg1hjR54d3Z9QAJblmjz0
	fsKQ3iEh13u61YTneTDCnWYCkTcUlXoKvA==
X-Google-Smtp-Source: ABdhPJxg0rSQzCab1r/3SntdBdL5F6WJpzjZ1Zuek553WQZKxfgF82EOdsSRqvPgwyUh+AZVzsBr6A==
X-Received: by 2002:a2e:86c1:: with SMTP id n1mr65040ljj.351.1606146464679;
        Mon, 23 Nov 2020 07:47:44 -0800 (PST)
Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
 <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
 <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com>
 <04a81b7e-213a-968b-048c-dfa68b6e3b0d@gmail.com>
 <96e6622c-08b3-ff85-75f1-14c8b7cd6d6e@suse.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <30c01448-d4f2-803e-1569-5e806f830efc@gmail.com>
Date: Mon, 23 Nov 2020 17:47:38 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <96e6622c-08b3-ff85-75f1-14c8b7cd6d6e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 23.11.20 16:56, Jan Beulich wrote:

Hi Jan, Paul

> On 23.11.2020 15:39, Oleksandr wrote:
>> As it was agreed, below the list of proposed renaming (naming) within
>> current series.
> Thanks for compiling this. A couple of suggestions for consideration:
>
>> 1. Global (existing):
>> hvm_map_mem_type_to_ioreq_server     -> ioreq_server_map_mem_type
>> hvm_select_ioreq_server              -> ioreq_server_select
>> hvm_send_ioreq                       -> ioreq_send
>> hvm_ioreq_init                       -> ioreq_init
> ioreq_domain_init() (or, imo less desirable domain_ioreq_init())?
On Arm (for example) I see two variants present:
1. Names that start with the subsystem:
- tee_domain_init
- iommu_domain_init


2. Names with the subsystem in the middle:
- domain_io_init
- domain_vuart_init
- domain_vtimer_init

If there is no rule and it is merely a matter of taste, then I would
use ioreq_domain_init(), so arch_ioreq_init() would become
arch_ioreq_domain_init().

>
>> hvm_destroy_all_ioreq_servers        -> ioreq_server_destroy_all
>> hvm_all_ioreq_servers_add_vcpu       -> ioreq_server_add_vcpu_all
>> hvm_all_ioreq_servers_remove_vcpu    -> ioreq_server_remove_vcpu_all
>> hvm_broadcast_ioreq                  -> ioreq_broadcast
>> hvm_create_ioreq_server              -> ioreq_server_create
>> hvm_get_ioreq_server_info            -> ioreq_server_get_info
>> hvm_map_io_range_to_ioreq_server     -> ioreq_server_map_io_range
>> hvm_unmap_io_range_from_ioreq_server -> ioreq_server_unmap_io_range
>> hvm_set_ioreq_server_state           -> ioreq_server_set_state
>> hvm_destroy_ioreq_server             -> ioreq_server_destroy
>> hvm_get_ioreq_server_frame           -> ioreq_server_get_frame
>> hvm_ioreq_needs_completion           -> ioreq_needs_completion
>> hvm_mmio_first_byte                  -> ioreq_mmio_first_byte
>> hvm_mmio_last_byte                   -> ioreq_mmio_last_byte
>> send_invalidate_req                  -> ioreq_signal_mapcache_invalidate
>>
>> handle_hvm_io_completion             -> handle_io_completion
> For this one I'm not sure what to suggest, but I'm not overly happy
> with the name.

I also failed to find a better name. Perhaps an ioreq_ or vcpu_ioreq_
prefix should be added here?


>
>> hvm_io_pending                       -> io_pending
> vcpu_ioreq_pending() or vcpu_any_ioreq_pending()?

I am fine with vcpu_ioreq_pending()


>
>> 2. Global (new):
>> arch_io_completion
>> arch_ioreq_server_map_pages
>> arch_ioreq_server_unmap_pages
>> arch_ioreq_server_enable
>> arch_ioreq_server_disable
>> arch_ioreq_server_destroy
>> arch_ioreq_server_map_mem_type
>> arch_ioreq_server_destroy_all
>> arch_ioreq_server_get_type_addr
>> arch_ioreq_init
> Assuming this is the arch hook of the similarly named function
> further up, a similar adjustment may then be wanted here.

Yes.


>
>> domain_has_ioreq_server
>>
>>
>> 3. Local (existing) in common ioreq.c:
>> hvm_alloc_ioreq_mfn               -> ioreq_alloc_mfn
>> hvm_free_ioreq_mfn                -> ioreq_free_mfn
> These two are server functions, so should imo be ioreq_server_...().

ok, but ...


> However, if they're static (as they're now), no distinguishing
> prefix is strictly necessary, i.e. alloc_mfn() and free_mfn() may
> be fine. The two names may be too short for Paul's taste, though.
> Some similar shortening may be possible for some or all of the ones


... In general I would be fine with any option. However, if we apply the
shortening rule to all of them, we are going to end up with single-word
function names (enable, init, etc.). So I would prefer to leave the locals
as they are (but dropping the hvm prefixes, of course, and using
ioreq_server_alloc_mfn/ioreq_server_free_mfn for clarity).

Paul, Jan what do you think?


> below here.
>
> Jan
>
>> hvm_update_ioreq_evtchn           -> ioreq_update_evtchn
>> hvm_ioreq_server_add_vcpu         -> ioreq_server_add_vcpu
>> hvm_ioreq_server_remove_vcpu      -> ioreq_server_remove_vcpu
>> hvm_ioreq_server_remove_all_vcpus -> ioreq_server_remove_all_vcpus
>> hvm_ioreq_server_alloc_pages      -> ioreq_server_alloc_pages
>> hvm_ioreq_server_free_pages       -> ioreq_server_free_pages
>> hvm_ioreq_server_free_rangesets   -> ioreq_server_free_rangesets
>> hvm_ioreq_server_alloc_rangesets  -> ioreq_server_alloc_rangesets
>> hvm_ioreq_server_enable           -> ioreq_server_enable
>> hvm_ioreq_server_disable          -> ioreq_server_disable
>> hvm_ioreq_server_init             -> ioreq_server_init
>> hvm_ioreq_server_deinit           -> ioreq_server_deinit
>> hvm_send_buffered_ioreq           -> ioreq_send_buffered
>>
>> hvm_wait_for_io                   -> wait_for_io
>>
>> 4. Local (existing) in x86 ioreq.c:
>> Everything related to legacy interface (hvm_alloc_legacy_ioreq_gfn, etc)
>> are going
>> to remain as is.
>>
>>
>>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:52:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:52:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34739.65940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khE8u-0002Hz-N8; Mon, 23 Nov 2020 15:52:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34739.65940; Mon, 23 Nov 2020 15:52:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khE8u-0002Hs-Jy; Mon, 23 Nov 2020 15:52:40 +0000
Received: by outflank-mailman (input) for mailman id 34739;
 Mon, 23 Nov 2020 15:52:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IF/R=E5=intel.com=jani.nikula@srs-us1.protection.inumbo.net>)
 id 1khE8t-0002Hn-HG
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:52:39 +0000
Received: from mga12.intel.com (unknown [192.55.52.136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c815b950-555c-40b8-afe9-08ffb359d729;
 Mon, 23 Nov 2020 15:52:36 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Nov 2020 07:52:35 -0800
Received: from suygunge-mobl.ger.corp.intel.com (HELO localhost)
 ([10.249.40.108])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Nov 2020 07:52:23 -0800
X-Inumbo-ID: c815b950-555c-40b8-afe9-08ffb359d729
IronPort-SDR: nWTYa8g0rWcbxhpszlWq+vd69NOYVvmzmokM+9CYLSZ1l9yziEjZJoG+FRqiTknlvuVS6VPCEy
 kGUiTNsx2QGQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9813"; a="151049419"
X-IronPort-AV: E=Sophos;i="5.78,363,1599548400"; 
   d="scan'208";a="151049419"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: o6LRPZSLFHblJ72nLIxI7QtuEJsSPQd11jVZTPC9b01xjBsR6V5ML2IOh4GsWUUm1pKbyVJ0KQ
 V9qBxJqg/0zw==
X-IronPort-AV: E=Sophos;i="5.78,363,1599548400"; 
   d="scan'208";a="546463497"
From: Jani Nikula <jani.nikula@linux.intel.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>, trix@redhat.com, joe@perches.com, clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org, platform-driver-x86@vger.kernel.org, ibm-acpi-devel@lists.sourceforge.net, keyrings@vger.kernel.org, linux-mtd@lists.infradead.org, linux-scsi@vger.kernel.org, amd-gfx@lists.freedesktop.org, cluster-devel@redhat.com, linux-acpi@vger.kernel.org, tboot-devel@lists.sourceforge.net, coreteam@netfilter.org, xen-devel@lists.xenproject.org, MPT-FusionLinux.pdl@broadcom.com, linux-media@vger.kernel.org, alsa-devel@alsa-project.org, intel-gfx@lists.freedesktop.org, ecryptfs@vger.kernel.org, linux-omap@vger.kernel.org, devel@acpica.org, linux-nfs@vger.kernel.org, netdev@vger.kernel.org, linux-usb@vger.kernel.org, linux-wireless@vger.kernel.org, linux-kernel@vger.kernel.org, linux-bluetooth@vger.kernel.org, netfilter-devel@vger.kernel.org, linux-crypto@vger.kernel.org, patches@opensource.cirrus.com, linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
In-Reply-To: <5843ef910b0e86c00d9c0143dec20f93823b016b.camel@HansenPartnership.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
References: <20201121165058.1644182-1-trix@redhat.com> <5843ef910b0e86c00d9c0143dec20f93823b016b.camel@HansenPartnership.com>
Date: Mon, 23 Nov 2020 17:52:20 +0200
Message-ID: <87y2ism5or.fsf@intel.com>
MIME-Version: 1.0
Content-Type: text/plain

On Sat, 21 Nov 2020, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
>> A difficult part of automating commits is composing the subsystem
>> preamble in the commit log.  For the ongoing effort of a fixer
>> producing one or two fixes a release, the use of 'treewide:' does not
>> seem appropriate.
>> 
>> It would be better if the normal prefix were used.  Unfortunately,
>> the normal prefix is not consistent across the tree.
>> 
>> 
>> 	D: Commit subsystem prefix
>> 
>> ex/ for FPGA DFL DRIVERS
>> 
>> 	D: fpga: dfl:
>> 
>
> I've got to bet this is going to cause more issues than it solves.

Agreed.

> SCSI uses scsi: <driver>: for drivers but not every driver has a
> MAINTAINERS entry.  We use either scsi: or scsi: core: for mid layer
> things, but we're not consistent.  Block uses blk-<something>: for all
> of its stuff but almost no <something>s have a MAINTAINERS entry.  So
> the next thing you're going to cause is an explosion of suggested
> MAINTAINERS entries.

On the one hand, adoption of new MAINTAINERS entries has been really
slow. Look at B, C, or P, for instance. On the other hand, if this were
to get adopted, you'll potentially get conflicting prefixes for patches
touching multiple files. Then what?

I'm guessing a script looking at git log could come up with better
suggestions for prefixes, via a popularity contest, than manually
maintained MAINTAINERS entries. It might not always get it right, but
then human outsiders aren't always going to get it right either.

Now you'll only need Someone(tm) to write the script. ;)

Something quick like this:

git log --since="1 year ago" --pretty=format:%s -- <FILES> |\
	grep -v "^\(Merge\|Revert\)" |\
	sed 's/:[^:]*$//' |\
	sort | uniq -c | sort -rn | head -5

already gives me results that really aren't worse than some of the
prefixes invented by drive-by contributors.
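A rough Python equivalent of the popularity contest above, operating on subject lines already extracted from git log (the function name is my own invention, not an existing tool):

```python
from collections import Counter

def suggest_prefixes(subjects, top=5):
    """Rank commit-subject prefixes by frequency.

    Mirrors the shell pipeline: skip Merge/Revert commits, strip the
    text after the last colon, and count the remaining prefixes.
    Unlike the sed version, subjects without a colon are skipped
    rather than counted whole.
    """
    prefixes = []
    for subject in subjects:
        if subject.startswith(("Merge", "Revert")):
            continue
        head, sep, _ = subject.rpartition(":")
        if sep:
            prefixes.append(head)
    return Counter(prefixes).most_common(top)
```

Feeding it the output of `git log --pretty=format:%s -- <FILES>`, split on newlines, gives the same kind of top-5 ranking as the pipeline.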

> Has anyone actually complained about treewide:?

As Joe said, I'd feel silly applying patches to drivers with that
prefix. If it gets applied by someone else higher up, literally
treewide, then no complaints.

BR,
Jani.


-- 
Jani Nikula, Intel Open Source Graphics Center


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:54:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:54:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34746.65951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEAc-0002RH-2a; Mon, 23 Nov 2020 15:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34746.65951; Mon, 23 Nov 2020 15:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEAb-0002RA-Vk; Mon, 23 Nov 2020 15:54:25 +0000
Received: by outflank-mailman (input) for mailman id 34746;
 Mon, 23 Nov 2020 15:54:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1hMa=E5=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1khEAa-0002R5-Pa
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:54:24 +0000
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6e539e6-f9fd-4a3e-b479-5c067959b420;
 Mon, 23 Nov 2020 15:54:23 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id p8so19090620wrx.5
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 07:54:23 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id v64sm17806354wme.25.2020.11.23.07.54.21
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 23 Nov 2020 07:54:22 -0800 (PST)
X-Inumbo-ID: d6e539e6-f9fd-4a3e-b479-5c067959b420
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=dtLp3z3OszpC4Cvul1IHj72ypy0NqjbhBISXM5maT7k=;
        b=P+5izuE0fIgR9dOhxFtO1vk2J6BriMf8+U0AaC01Itb5Q9wi61roe6kaiW0I5M6NQS
         NLxFunhsMF/CJuu4VWt/ORHI0RJ5VPdb8/PQg7avMuoPgh/3MOYPDg/6C5A/9KxDDFyj
         1WXCTujOz2xod9/c3sM7ENBsqBoe+mR0aj72Fdf6l1JVBE/VLjhebhxCXqXdZ6LhGl0n
         3cu/Wo/UxfnoaiC4PuZ+1JWwJ8ID9sZnuC00mov6wJfy76V33Y224MfElj13/62QZLFW
         bR9cIGIDOly7IgGdE2V1ikKgPt7Dw+KQK+aqrXs8lC0OX2YD7eyp7+zEOODfCAadAJ0P
         RH/w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=dtLp3z3OszpC4Cvul1IHj72ypy0NqjbhBISXM5maT7k=;
        b=RUN5Q4zRckV3l455CvocJ5L9uz1rwpxawr+bLLLSMxzFwdmjF9Q10Z3yZmd0bIdKp/
         zLmdw1VCRVy87CXRfBCG+WKBJ+8voVV1iZ6lYR1FzWojRJt2EO3U5K4X2Kf1iI4iazgZ
         KhBCO+z/YCGqFXX6cyZjokfSqAm8jJcDbEarZt6faSVK5SnEO7GmzpX+SKKRzdA90hjY
         hXb9Zg+KB9iZQKG8KMkbiLFAhJKyhi+fpZ+s7yiE58AEuVyW6rgYDoT1pDiGvOMYBerk
         gLGChZ0aLMlRfuXzMwC6adTS9CYW1oth4S7L2tkgoSG5n8JXyrHGswlsohQxa5LBcwzT
         rcBA==
X-Gm-Message-State: AOAM532kdMynBCZWSN9rroWoR1w9/iBo6ur1ZgRtH/0e7KwoTx2qfDkt
	MKpVnx2Q4u2fqZpcO8q7F+o=
X-Google-Smtp-Source: ABdhPJyPp8dyWgfITSgE+EOR/rQ5F3MH+VXMWaCyOolFYEJbX0ezODzgJe6FxrVtYfjzZ0mFOWRqrA==
X-Received: by 2002:adf:e544:: with SMTP id z4mr297428wrm.83.1606146862784;
        Mon, 23 Nov 2020 07:54:22 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr'" <olekstysh@gmail.com>,
	"'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Julien Grall'" <julien.grall@arm.com>,
	<xen-devel@lists.xenproject.org>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-13-git-send-email-olekstysh@gmail.com> <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com> <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com> <04a81b7e-213a-968b-048c-dfa68b6e3b0d@gmail.com> <96e6622c-08b3-ff85-75f1-14c8b7cd6d6e@suse.com> <30c01448-d4f2-803e-1569-5e806f830efc@gmail.com>
In-Reply-To: <30c01448-d4f2-803e-1569-5e806f830efc@gmail.com>
Subject: RE: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Mon, 23 Nov 2020 15:54:20 -0000
Message-ID: <002101d6c1b0$e906a520$bb13ef60$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAHGHN6+A53Zm3YCEAEcxwGeQu0fAlZCbOYBa5sxg6pHcMug

> -----Original Message-----
> From: Oleksandr <olekstysh@gmail.com>
> Sent: 23 November 2020 15:48
> To: Jan Beulich <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian
> <kevin.tian@intel.com>; Julien Grall <julien.grall@arm.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
>
> On 23.11.20 16:56, Jan Beulich wrote:
>
> Hi Jan, Paul
>
> > On 23.11.2020 15:39, Oleksandr wrote:
> >> As it was agreed, below is the list of proposed renaming (naming) within
> >> the current series.
> > Thanks for compiling this. A couple of suggestions for consideration:
> >
> >> 1. Global (existing):
> >> hvm_map_mem_type_to_ioreq_server     -> ioreq_server_map_mem_type
> >> hvm_select_ioreq_server              -> ioreq_server_select
> >> hvm_send_ioreq                       -> ioreq_send
> >> hvm_ioreq_init                       -> ioreq_init
> > ioreq_domain_init() (or, imo less desirable, domain_ioreq_init())?
> On Arm (for example) I see two variants present:
> 1. Those that start with the subsystem:
> - tee_domain_init
> - iommu_domain_init
>
> 2. Those with the subsystem in the middle:
> - domain_io_init
> - domain_vuart_init
> - domain_vtimer_init
>
> If there is no rule and it is a matter of taste, then I would use
> ioreq_domain_init(), so arch_ioreq_init() wants to be arch_ioreq_domain_init().
>
> >
> >> hvm_destroy_all_ioreq_servers        -> ioreq_server_destroy_all
> >> hvm_all_ioreq_servers_add_vcpu       -> ioreq_server_add_vcpu_all
> >> hvm_all_ioreq_servers_remove_vcpu    -> ioreq_server_remove_vcpu_all
> >> hvm_broadcast_ioreq                  -> ioreq_broadcast
> >> hvm_create_ioreq_server              -> ioreq_server_create
> >> hvm_get_ioreq_server_info            -> ioreq_server_get_info
> >> hvm_map_io_range_to_ioreq_server     -> ioreq_server_map_io_range
> >> hvm_unmap_io_range_from_ioreq_server -> ioreq_server_unmap_io_range
> >> hvm_set_ioreq_server_state           -> ioreq_server_set_state
> >> hvm_destroy_ioreq_server             -> ioreq_server_destroy
> >> hvm_get_ioreq_server_frame           -> ioreq_server_get_frame
> >> hvm_ioreq_needs_completion           -> ioreq_needs_completion
> >> hvm_mmio_first_byte                  -> ioreq_mmio_first_byte
> >> hvm_mmio_last_byte                   -> ioreq_mmio_last_byte
> >> send_invalidate_req                  -> ioreq_signal_mapcache_invalidate
> >>
> >> handle_hvm_io_completion             -> handle_io_completion
> > For this one I'm not sure what to suggest, but I'm not overly happy
> > with the name.
>
> I also failed to find a better name. Probably an ioreq_ or vcpu_ioreq_
> prefix wants to be added here?
>
> >
> >> hvm_io_pending                       -> io_pending
> > vcpu_ioreq_pending() or vcpu_any_ioreq_pending()?
>
> I am fine with vcpu_ioreq_pending()
>

...in which case vcpu_ioreq_handle_completion() seems like a reasonable choice.

>
> >
> >> 2. Global (new):
> >> arch_io_completion
> >> arch_ioreq_server_map_pages
> >> arch_ioreq_server_unmap_pages
> >> arch_ioreq_server_enable
> >> arch_ioreq_server_disable
> >> arch_ioreq_server_destroy
> >> arch_ioreq_server_map_mem_type
> >> arch_ioreq_server_destroy_all
> >> arch_ioreq_server_get_type_addr
> >> arch_ioreq_init
> > Assuming this is the arch hook of the similarly named function
> > further up, a similar adjustment may then be wanted here.
>
> Yes.
>
> >
> >> domain_has_ioreq_server
> >>
> >>
> >> 3. Local (existing) in common ioreq.c:
> >> hvm_alloc_ioreq_mfn               -> ioreq_alloc_mfn
> >> hvm_free_ioreq_mfn                -> ioreq_free_mfn
> > These two are server functions, so should imo be ioreq_server_...().
>
> ok, but ...
>
> > However, if they're static (as they're now), no distinguishing
> > prefix is strictly necessary, i.e. alloc_mfn() and free_mfn() may
> > be fine. The two names may be too short for Paul's taste, though.
> > Some similar shortening may be possible for some or all of the ones
>
> ... In general I would be fine with any option. However, applying the
> shortening rule everywhere, we would end up with single-word function
> names (enable, init, etc). So I would prefer to leave the locals as they
> are (but dropping the hvm prefixes, of course, and clarifying them as
> ioreq_server_alloc_mfn/ioreq_server_free_mfn).
>
> Paul, Jan, what do you think?

I prefer ioreq_server_alloc_mfn/ioreq_server_free_mfn. The problem with
shortening is that function names become ambiguous within the source
base and hence harder to find.

  Paul



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:55:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:55:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34756.65964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEC4-0002Zt-EQ; Mon, 23 Nov 2020 15:55:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34756.65964; Mon, 23 Nov 2020 15:55:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEC4-0002Zm-Az; Mon, 23 Nov 2020 15:55:56 +0000
Received: by outflank-mailman (input) for mailman id 34756;
 Mon, 23 Nov 2020 15:55:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khEC3-0002Zd-AW; Mon, 23 Nov 2020 15:55:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khEC3-0003hM-51; Mon, 23 Nov 2020 15:55:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khEC2-0000OY-Oy; Mon, 23 Nov 2020 15:55:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khEC2-00079K-OB; Mon, 23 Nov 2020 15:55:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bo4yzmFFqx0pFPQXP3WaqFd9tA1jPfTJHe1gdGjTuPw=; b=VWKbHiBh1lq0PPHpyStZO4G4Qi
	623Yx6kfF1MOT5r1DnKpUQG5DVinmDyuta3lOk6vNDU9gcZQLN28DzXRJJtUVpURpzTOKy1GaUM/L
	0XHBm8V111K45moQsqdY+EUn4cUaJMwgWUgqVZTUTPfo4QuWXR7Hb4Ig1VoR1y0+S9tw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156956-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156956: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
X-Osstest-Versions-That:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 15:55:54 +0000

flight 156956 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156956/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156935
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156935
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 156935

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156935
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156935
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156935
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156935
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156935
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156935
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156935
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156935
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156935
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156935
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156935
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad
baseline version:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad

Last test of basis   156956  2020-11-23 01:51:33 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 15:58:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 15:58:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34767.65979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEEQ-0002m3-4G; Mon, 23 Nov 2020 15:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34767.65979; Mon, 23 Nov 2020 15:58:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEEQ-0002lw-0g; Mon, 23 Nov 2020 15:58:22 +0000
Received: by outflank-mailman (input) for mailman id 34767;
 Mon, 23 Nov 2020 15:58:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6H7V=E5=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1khEEN-0002lr-RG
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 15:58:20 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af4dd6ea-9eb5-43ab-bce7-4d3f98027596;
 Mon, 23 Nov 2020 15:58:11 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 8958812803A6;
 Mon, 23 Nov 2020 07:58:10 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id pNOg0X0CRuCM; Mon, 23 Nov 2020 07:58:10 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 1690C12802D9;
 Mon, 23 Nov 2020 07:58:07 -0800 (PST)
X-Inumbo-ID: af4dd6ea-9eb5-43ab-bce7-4d3f98027596
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606147090;
	bh=lBjYTVUJkmNX+Ql1BcMOHCS5uLgrfuyir/PBRp4vAgY=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=tw2BKkTzWzFWEQdrm6zgBKJtIh2PRfv5Mb0TfkjZKqSr5KeTmWZWgl8VEPx5No8bX
	 gdEQ1NYoCzrZ51ueWl3PIAwT19fSirwyiz4cqIKbNqhoqeMTMjHgs4jilQhkGLvH0x
	 +YA09wNIiuW9eCRS9chAzVTlxjJYgA69RXyn6sLU=
Message-ID: <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
 "Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel
 <linux-kernel@vger.kernel.org>,  alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org,  bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org,  cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, Linux ARM
 <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, Linux Crypto Mailing
 List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,  Ext4 Developers List
 <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org, Linux
 Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless
 <linux-wireless@vger.kernel.org>,  Network Development
 <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org,  op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com,  patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com,  reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org,  selinux@vger.kernel.org,
 target-devel@vger.kernel.org,  tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org,  linux-hardening@vger.kernel.org, Nick
 Desaulniers <ndesaulniers@google.com>,  Nathan Chancellor
 <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, Joe Perches
 <joe@perches.com>
Date: Mon, 23 Nov 2020 07:58:06 -0800
In-Reply-To: <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
	 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
	 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-23 at 15:19 +0100, Miguel Ojeda wrote:
> On Sun, Nov 22, 2020 at 11:36 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > Well, it seems to be three years of someone's time plus the
> > maintainer review time and series disruption of nearly a thousand
> > patches.  Let's be conservative and assume the producer worked
> > about 30% on the series and it takes about 5-10 minutes per patch
> > to review, merge and for others to rework existing series.  So
> > let's say it's cost a person year of a relatively junior engineer
> > producing the patches and say 100h of review and application
> > time.  The latter is likely the big ticket item because it's what
> > we have in least supply in the kernel (even though it's 20x vs the
> > producer time).
> 
> How are you arriving at such numbers? It is a total of ~200 trivial
> lines.

Well, I used git.  It says that as of today in Linus' tree we have 889
patches related to fall-throughs, and the first series went in in
October 2017 ... ignoring a couple of outliers back to February.
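
[Put as rough arithmetic — the 889 patch count comes from the git query
above; the 5-10 minutes per patch is the assumption made earlier in the
thread, not a measurement:]

```c
/* Back-of-the-envelope version of the review-cost estimate: multiply
 * the patch count by an assumed per-patch review time and convert to
 * hours.  Both inputs are the thread's own rough figures. */
static long review_hours(long patches, long minutes_per_patch)
{
    return patches * minutes_per_patch / 60;
}
/* review_hours(889, 5) gives 74 and review_hours(889, 10) gives 148,
 * bracketing the "say 100h of review" figure. */
```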

> > It's not about the risk of the changes, it's about the cost of
> > implementing them.  Even if you discount the producer time (which
> > someone gets to pay for, and if I were the engineering manager, I'd
> > be unhappy about), the review/merge/rework time is pretty
> > significant in exchange for six minor bug fixes.  Fine, when a new
> > compiler warning comes along it's certainly reasonable to see if we
> > can benefit from it and the fact that the compiler people think
> > it's worthwhile is enough evidence to assume this initially.  But
> > at some point you have to ask whether that assumption is supported
> > by the evidence we've accumulated over the time we've been using
> > it.  And if the evidence doesn't support it perhaps it is time to
> > stop the experiment.
> 
> Maintainers routinely review 1-line trivial patches, not to mention
> internal API changes, etc.

We're also complaining about the inability to recruit maintainers:

https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/

And burn out:

http://antirez.com/news/129

The whole crux of your argument seems to be that maintainers' time isn't
important, so we should accept all trivial patches ... I'm pushing back
on that assumption in two places: firstly the valuelessness of the time,
and secondly that all trivial patches are valuable.

> If some company does not want to pay for that, that's fine, but they
> don't get to be maintainers and claim `Supported`.

What I'm actually trying to articulate is a way of measuring value of
the patch vs cost ... it has nothing really to do with who foots the
actual bill.

One thesis I'm actually starting to formulate is that this continual
devaluing of maintainers is why we have so much difficulty keeping and
recruiting them.

James





From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:01:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34775.65991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEGu-00048E-Hb; Mon, 23 Nov 2020 16:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34775.65991; Mon, 23 Nov 2020 16:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEGu-000487-EQ; Mon, 23 Nov 2020 16:00:56 +0000
Received: by outflank-mailman (input) for mailman id 34775;
 Mon, 23 Nov 2020 16:00:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rrc6=E5=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1khEGr-000482-VE
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:00:54 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.22])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 597aee59-c80c-41a2-91b6-6e76a3bed4e2;
 Mon, 23 Nov 2020 16:00:52 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.4 DYNA|AUTH)
 with ESMTPSA id V0b6ccwANG0hrJD
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 23 Nov 2020 17:00:43 +0100 (CET)
X-Inumbo-ID: 597aee59-c80c-41a2-91b6-6e76a3bed4e2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1606147252;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=VUdBJc6ahXffD4ePwK9dNsIUxMg8YOVPnQnEip+4khE=;
	b=Ssnb0hVzCicshlGVT6JWrG7eI3G7C2ENLQ+++cShganlf85itSQ3K+FwqWBbzKsG/u
	qi8r2UKwwmvdrAJwdn4oCfHiMgLc6PC1+0sHsK01VcohMiy+JB8oBd0l6NZyWilIXFU3
	RC9jkN03r3VohN3/1pdLVpmrULEXmtx4AU0iT40rQGdthyZSmFAHoKbGo5Qy6Tpf+5bq
	9GFD/kCsv76oSihRmTii71Uffv68Qe3qR5wKVSdtOVahf7IOvyzQkJY6cWskI40BX6VM
	tGf8HC30e+YVAmPJmsPmSFq3IuMBAHEzi9lpP/gV862gWJzchuVRBS7VI6N3IVSslT+w
	4voA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+uX"
X-RZG-CLASS-ID: mo00
Date: Mon, 23 Nov 2020 17:00:31 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 00/23] reduce overhead during live migration
Message-ID: <20201123170031.147efdb1.olaf@aepfle.de>
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/aTcd5+wmI6HVz+GTz8amsC8"; protocol="application/pgp-signature"

--Sig_/aTcd5+wmI6HVz+GTz8amsC8
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

There was no feedback to this series within the past three weeks.

Please review this series.

Thanks,
Olaf

Am Thu, 29 Oct 2020 18:19:40 +0100
schrieb Olaf Hering <olaf@aepfle.de>:

> The current live migration code can easily saturate a 1Gb link.
> There is still room for improvement with faster network connections,
> even with this series reviewed and applied.
> See the description of patch #6.
> 
> Olaf
> 
> Olaf Hering (23):
>   tools: add readv_exact to libxenctrl
>   tools: add xc_is_known_page_type to libxenctrl
>   tools: use xc_is_known_page_type
>   tools: unify type checking for data pfns in migration stream
>   tools: show migration transfer rate in send_dirty_pages
>   tools/guest: prepare to allocate arrays once
>   tools/guest: save: move batch_pfns
>   tools/guest: save: move mfns array
>   tools/guest: save: move types array
>   tools/guest: save: move errors array
>   tools/guest: save: move iov array
>   tools/guest: save: move rec_pfns array
>   tools/guest: save: move guest_data array
>   tools/guest: save: move local_pages array
>   tools/guest: restore: move pfns array
>   tools/guest: restore: move types array
>   tools/guest: restore: move mfns array
>   tools/guest: restore: move map_errs array
>   tools/guest: restore: move mfns array in populate_pfns
>   tools/guest: restore: move pfns array in populate_pfns
>   tools/guest: restore: split record processing
>   tools/guest: restore: split handle_page_data
>   tools/guest: restore: write data directly into guest
> 
>  tools/libs/ctrl/xc_private.c          |  54 ++-
>  tools/libs/ctrl/xc_private.h          |  34 ++
>  tools/libs/guest/xg_sr_common.c       |  33 +-
>  tools/libs/guest/xg_sr_common.h       |  86 +++-
>  tools/libs/guest/xg_sr_restore.c      | 562 +++++++++++++++++---------
>  tools/libs/guest/xg_sr_save.c         | 158 ++++----
>  tools/libs/guest/xg_sr_save_x86_hvm.c |   5 +-
>  tools/libs/guest/xg_sr_save_x86_pv.c  |  31 +-
>  8 files changed, 666 insertions(+), 297 deletions(-)
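
[The first patch in the list adds a readv_exact() helper to libxenctrl.
As a rough sketch only — assuming it mirrors the retry loop of the
existing read_exact() helper in xc_private.c, which this cover letter
does not show — such a helper might look like:]

```c
#include <errno.h>
#include <stddef.h>
#include <sys/uio.h>
#include <unistd.h>

/* Hypothetical sketch of readv_exact(): keep calling readv() until
 * every byte described by the iovec array has arrived, advancing past
 * entries that are completely filled and adjusting the one that was
 * only partially filled.  Names and error handling are illustrative,
 * not the actual libxenctrl implementation. */
static int readv_exact(int fd, struct iovec *iov, int iovcnt)
{
    while (iovcnt > 0) {
        ssize_t rc = readv(fd, iov, iovcnt);

        if (rc < 0) {
            if (errno == EINTR)
                continue;           /* interrupted: retry unchanged */
            return -1;
        }
        if (rc == 0) {              /* unexpected EOF */
            errno = EIO;
            return -1;
        }
        /* Skip iovec entries that are now completely filled. */
        while (iovcnt > 0 && (size_t)rc >= iov->iov_len) {
            rc -= iov->iov_len;
            iov++;
            iovcnt--;
        }
        if (iovcnt > 0) {           /* partially-filled entry */
            iov->iov_base = (char *)iov->iov_base + rc;
            iov->iov_len -= rc;
        }
    }
    return 0;
}
```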

--Sig_/aTcd5+wmI6HVz+GTz8amsC8
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+73KAACgkQ86SN7mm1
DoCrxA//WUzp3ahy/fcxMkrL3ZqAp4DAFOGGG0lTIZwPmgpP+cRzCiWCRbt9Ky6R
4DOYKw/JLndXc6sxvhuKoFdK0SBRidIFkFlIjymkvvtCpE1mk6mHhh5lcEusvtfh
URII1DfazKhlc71o7KvQPKGOEVWx4LjwMDvyPtkJzbfamB9dH5WyKpG/UtB6dWVy
oiJ/20TJqEN/oj4Z3T70jCyWnxrpH2XIpofmBFIsYdvSfufoZJfNiPPnBGzc0PVC
GWHy2qU6CzpmrKlwVLaIiO87FFLPFknDeiqMIemaPst9ZG9V6YoR2v+E/dKuffWa
rcFaIehNu74l13P1iCZf2Vkd25/E3/Xqjt78kb35gQUwsBCbVqt+xLwkNPrsVTr9
fW3d5rEs0dJLyszZnFnHgcEbqcWjO/K3DskoE9C5OZ+KEzV8UG5P5mwMedrFfs0g
GRe65lJfrDB66jbgzZDoAq3MU/kfnYC3/F5rAU7/OU/4izl6tQDUQ9aoahdND1l+
YyBRDN7pdyjbGcsucyx7p5W9HJ2JEOO6g94v1BvS4Zf+PxdnRXn+tOPmajYaq/BX
/pQcDoSriwNCDqzMW6pWy3iPOqyErn6icMmJ5AW5fv+aRx4+2I8PNvqPNeFGgAo6
QndxMmGW0aJxyDkewP08/t40Fpi7m0MIYLKderI/xnVj/sKINS8=
=HdJZ
-----END PGP SIGNATURE-----

--Sig_/aTcd5+wmI6HVz+GTz8amsC8--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:05:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:05:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34783.66003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEKo-0004Jz-2r; Mon, 23 Nov 2020 16:04:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34783.66003; Mon, 23 Nov 2020 16:04:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEKn-0004Js-Ve; Mon, 23 Nov 2020 16:04:57 +0000
Received: by outflank-mailman (input) for mailman id 34783;
 Mon, 23 Nov 2020 16:04:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/KCf=E5=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1khEKn-0004Jn-C5
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:04:57 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21ebb239-01de-4132-8be8-45ca20f0855b;
 Mon, 23 Nov 2020 16:04:56 +0000 (UTC)
X-Inumbo-ID: 21ebb239-01de-4132-8be8-45ca20f0855b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606147496;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=m43E1j1pgce7TcObGftnisojvbl9tRjPYzsFht/m8E8=;
  b=KpvAT03qCDz3FKAlGuZ1CIzLKxnGJngW76c7yEfPm0lxju954iwaGrfg
   MwvYj2zGw2fIM8UuGBGTCmzJtza4KCFQ+zBw6Sh4YP663UAUkceSQ3/VC
   zcFQ+oA98Yaf2Bv++CeI9QJSTxcAEYzkWs0rVfoViSfWJtsa7U0cFo0LH
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Gqqm3vfdwr9zFqbi6kea/7+Q2JJDIadQjbPWzpMtYnYPxvHXa22K75WQvRqZYgKSClFtZztZNt
 TGRvDaWhItenmQWjXI+K7YW4P3i6rGw8BxhutRr3vSQRGpuO4FBjL3UOX6Qa9GXhWultR42DRL
 08o7BFkEPPzqoR5wKK+4oWds3ip/RP0nsR1lzyRK4/prsG4lFQYFt75mKwuL1j7FsfGlZGTX9e
 nSnQgYPaG5TBAVe9mhbyp+M+2OmgX4K+zxg6PLABaRQA1cljzuAJDge+bfm4rG1C2xC63Ryu7c
 DKU=
X-SBRS: None
X-MesageID: 31728320
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31728320"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Paul Durrant <paul@xen.org>
Subject: [PATCH] MAINTAINERS: Propose Ian Jackson as new release manager
Date: Mon, 23 Nov 2020 16:04:00 +0000
Message-ID: <20201123160400.1273386-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Ian Jackson has agreed to be the release manager for 4.15.  Signify
this by giving him maintainership over CHANGELOG.md.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Paul Durrant <paul@xen.org>
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index dab38a6a14..a9872df1de 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -250,7 +250,7 @@ F:	xen/include/public/arch-arm/
 F:	xen/include/public/arch-arm.h
 
 Change Log
-M:	Paul Durrant <paul@xen.org>
+M:	Ian Jackson <ian.jackson@citrix.com>
 R:	Community Manager <community.manager@xenproject.org>
 S:	Maintained
 F:	CHANGELOG.md
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:08:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34790.66015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khENo-0004Tc-I3; Mon, 23 Nov 2020 16:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34790.66015; Mon, 23 Nov 2020 16:08:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khENo-0004TV-Es; Mon, 23 Nov 2020 16:08:04 +0000
Received: by outflank-mailman (input) for mailman id 34790;
 Mon, 23 Nov 2020 16:08:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khENn-0004TQ-Hw
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:08:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a748d045-e4e2-46f3-88d8-bb1f4ae36658;
 Mon, 23 Nov 2020 16:08:02 +0000 (UTC)
X-Inumbo-ID: a748d045-e4e2-46f3-88d8-bb1f4ae36658
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606147682;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=8+ZKc+k1ZmmB3rxiFd3LuQWvhFPGf4Er4HSCxSJXD5o=;
  b=OrtDNfl9UUQp/B4NfF20F1cJL5N/nLawZ0Hc1HA5RLEAgEHQgikcacX8
   CbIueuOp3oDg61XQ/O+Qswrh6oi/BtDyDQm9QIMihz2ipIf0xnZsLMdhp
   opOSs6Pvw2/FSj/mvjH/TEUzv1EUSvCUC3Kqa4KWmGLOyCI8KrhCctAzT
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: KlRta09svDEbrKFHohYIm9jEuN01XoIKs6mE2wbn6jj+wN8eIi3VDQ7fMmOxnug1UlERvzAQP0
 0DePRvQ51Xk8i9GiQgib7DBoShroII9hCs47BnD2V0YYQEmHmjMBIC8u/ICs0u4a+zdSeRRrR/
 w+ph35bb/QauBgYOqrNDj/u1NXEHa9K1hulBYJlbBwuEHEQCVblwk+8dvTaO5M4hbWiNlDjtPE
 G7DPV8aOY9TSzqIDFytj42LYUfJL5689M1jg8iTbV8LmqA//4Q2QAKrpoPyxLyf7g634cDfpwZ
 XNY=
X-SBRS: None
X-MesageID: 31728834
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="31728834"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZdXf9HL7hf813Qy82MUefN7erm2yjocB0cuYjwK1vv3BVsvPQNDbn2Myun6nlLTYrfbc+elIwjBjHLXxBdo1DYmudAhjcyw4RBoQV2hiCxmoiaEa5dAK98kgEy2B1VM2v75tCVqyi9fFKU2EBI4q4S77qfvzcxzYkFf4GR4aacrqZLv9tOgFXGDJ7uc8kuQpdT/FicFZUnneKPYvG9IardpZiCxN9jnM0cCWsjLVrFxWTJjL5shEfQIez+KeSNET+/kHO9YWCvVpaqRHLpDTreZv7DDJEKTHHr2R3BH5jnP9PDwZ37PAzm0BpBbzmhl/PxOaib2lZYsfVwrtsettQw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/WhAvPdIgkF2fb97bY+SwGofAoF7RVa+5/6DK1whcMk=;
 b=FeFypgMjyb8C0kjTBojP+SEc68hQpui3w1qqIWTPLsgvlODUxWMZwr/2R/T2iwGXenHsKxIx5KkI+xBKfdGqajfMCFmpLX6w9FP6iXx6pU2JvrGrRrUdLrlukaKHbMS7po+/y7APFq2LNon7yE24pyfBnxuICMr+2KceBIsYxvP0+oXG98IjrpLLaveATNW+rFCH+ter5uxQ6+U7TkM3gXc7KAIclf4FFLJeCtejJxlXnPZgLN2KDp4WYoaJKeT6U6AQDBHtXHwT9TD9hei+WQdcAHBJ9q09aGfBGNaWOa+BRMnLjn1QCjni6rz3rxg6w7gzF7tts5A2K6G3RQ4/lw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/WhAvPdIgkF2fb97bY+SwGofAoF7RVa+5/6DK1whcMk=;
 b=afklY2d7mudh31SAA287HVlF1aWAt37tKrpB3CLs69nxBTnVSSRz1v83zCJ13veDCqdQo98isGo2bH6mIZq8Dq+f2RyVJA/lX2T0kZ9X+A5f0qBnqnTAdb0xP5Zh7PgkQ+fU+78QZL6AmNccx9/zfMZi5EH+VTgFyTBhio1lrKw=
Date: Mon, 23 Nov 2020 17:07:52 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
Message-ID: <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
 <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
X-ClientProxiedBy: LO2P265CA0365.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0067c905-0c3d-4a97-1a0a-08d88fc9f163
X-MS-TrafficTypeDiagnostic: DM6PR03MB3483:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB34835E3A0FF56E860C3772FC8FFC0@DM6PR03MB3483.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: u/J3vKmu+IhdIGmzIqhbBpL1EpUvZrJ7TypBQCBkiGCSInlxOqVT7TfAgPyqklownaAfcCx5JiZpLqskodLbw66tlmlxevQDwWBJxqPk8XLWlsOCBDZE8y4ZzAexca1YR+H1NKcE0cwHb4IoOgyhMfVvHwZw51rt6r0nCUyGV61Al6rZx0dIjeWDeX/FT9CXb9c5+zU9PEnzbBYeejbnlx4wiYYuLTRCRJNRjvYzSOnx5wezuw3CngGdEpx5Z8Y5tmhuNKEFeo56skg6t7rl8NmprmDss7pwCZkpO7Jf0Q6SAzEVwa8iIxz4mJaaqg/T6rhMkt+RF0jOu8m9S99csw==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(366004)(396003)(376002)(136003)(346002)(66946007)(66476007)(66556008)(33716001)(8676002)(6666004)(9686003)(186003)(16526019)(316002)(26005)(478600001)(5660300002)(956004)(85182001)(6916009)(53546011)(2906002)(4326008)(1076003)(54906003)(8936002)(6496006)(86362001)(6486002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 2NW3Qosd7hXg+AIGcBrBUZyh54Ucj1i8DBZumoG4B37jVpEvxD7PGtiqXiAQ/sZp+ytyBEmUKX81gqcTjbYB1+w8QCQAkHzQkrLxLPHNWCmyUtb6MIoKcPMv1k4l5F4Ups+um7u0rIrcBk8SNSmn2l2j630Hl0O2n2RM3osvSLqn9M1STg6/ha8LI316/Kq3zhK48jDHY8JMOf+07BARn94fl7djQPHP8PrZT0p8FHZEy7dpbh2vs/bxAwzrjfZpzBzNdYh+is6s4rHzrVw5XjDnc7TpFKR5EHB2G3I5iaDlfx7Mwv2A+9h/HFSygWcNMkQSkSxjLCnKUzJMVwJgmqK99GXD8Fsi2EzbGddTLtKQFfsQ7Pkc5oScE5eifs60XnNTMavWxiEPczFVQUWJ5y0MIjrsTSg5hUk/HB8L1ZGA7OoQMtX4bimC19XFSlRh+btGGPw88q8CB9xGIoYLvRN9f5PNTfDVYzlA0oL5RzKRbQEZefDNkv7TMnh5i3yJX8H0K8Fy8alGfbmjAKI8McULvRRtJWmYpNddqGDheuGjNcKbZScqfeNaREJpps2P4mG0Isqy9vTPimQoNwljU+aAIMc7/8vJE4CbY2rbl94QYUVzhz0FmZl9RZHKes662xj7EnAx41pEpaAT6n5XUeg9DhzvZzQ43EoBXg9p9L8NDfZStun/lR1ntkuj1aGp/UaGFTENoExMo7mLeMaJuJuXAsObzRXFviqczVudZ6y6Niomh/GWEKpFmzs0ZLVqumHiY50NjRlk47QwyyKnl0HgpXYMjk25sBnqX4EQOJH4LNZXsOjU4ddd8QeRzHkDUpJ3LaC2AxDNpOV65qu9xo09zp8wfijoU39lsrF+1t7hBF5ZEuhbYbDSfvIbxVkYcfDOYVr4U/xnmfFa+l38pg==
X-MS-Exchange-CrossTenant-Network-Message-Id: 0067c905-0c3d-4a97-1a0a-08d88fc9f163
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 16:07:57.5968
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ENU8FbwpSFnF0KhLORW341SKGHz0aeAlZ3/20ypNjvyshiXJJ14OYsOqJH5/SD5K5qWIS7Uz08MNDNJEYpQf0w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3483
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 04:30:05PM +0100, Jan Beulich wrote:
> On 23.11.2020 16:24, Roger Pau Monné wrote:
> > On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
> >> --- a/xen/arch/x86/acpi/power.c
> >> +++ b/xen/arch/x86/acpi/power.c
> >> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
> >>      if ( state != ACPI_STATE_S3 )
> >>          return;
> >>  
> >> -    wakeup_vector_va = __acpi_map_table(
> >> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
> >> -
> >>      /* TBoot will set resume vector itself (when it is safe to do so). */
> >>      if ( tboot_in_measured_env() )
> >>          return;
> >>  
> >> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
> >> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
> >> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
> >> +
> >>      if ( acpi_sinfo.vector_width == 32 )
> >>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
> >>      else
> >>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
> >> +
> >> +    clear_fixmap(FIX_ACPI_END);
> > 
> > Why not use vmap here instead of the fixmap?
> 
> Considering the S3 path is relatively fragile (as in: we end up
> breaking it more often than about anything else) I wanted to
> make as little of a change as possible. Hence I decided to stick
> to the fixmap use that was (indirectly) used before as well.

Unless there's a restriction requiring the ACPI fixmap entry, I would
just switch to vmap: it's used extensively in the code and is less
likely to trigger issues in the future, or else a bunch of other stuff
would also be broken.

IMO doing the mapping differently here when it's not required will end
up making this code more fragile in the long run.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:10:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:10:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34798.66027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEPr-0005J6-VM; Mon, 23 Nov 2020 16:10:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34798.66027; Mon, 23 Nov 2020 16:10:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEPr-0005Iz-SB; Mon, 23 Nov 2020 16:10:11 +0000
Received: by outflank-mailman (input) for mailman id 34798;
 Mon, 23 Nov 2020 16:10:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khEPq-0005Iu-7I
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:10:10 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2777f2bc-c262-4b61-95c4-94fb0909ce57;
 Mon, 23 Nov 2020 16:10:09 +0000 (UTC)
X-Inumbo-ID: 2777f2bc-c262-4b61-95c4-94fb0909ce57
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606147808;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=ryx6tvbDSn4Pgt6hgVpjI4WmhMm0V87W8eAjRkqQYCs=;
  b=S4BIQiI0SLcFSbybwnY3QkUb8QwiE4iocJ0U0oEl0i2em9kHpTAey7UM
   WoTQYCP+8zGwAymiYGBWORxAiaUEsiGGFoJMMGPZMDf1vuJpkKHHSxboZ
   str9ZQHOhgX1AxQEIsZ5Cnq3qKe/RGibbAn5OSsog1SZRhhl6xEtU9BIY
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 7sNxluc5xYVLmZVFvE4hP3uLIWy513TgFvZpPa8Nwg2lPu0wwSHdkW237ndP6ARk/R/i2oIG2F
 ZD5kNUjM5g8JR97a+AF4GEr0jKaPShUrSHqdS8VojpFGZIolEyCLpFY5QOWEQG08HJ52SZRy56
 p/vmiOxhKJLo8Wj5LKPGRjC2xoUAPcukncPvZo81zkduRo92/sMZy11IxK9peNOUI79/VlL7MV
 E4aLp20PoQWh93tZFX64Wq2J58e+biCaiSx6v5gHujcGIHAuVPX+pwcJ+elZGX4m8Adg13IYNS
 ijc=
X-SBRS: None
X-MesageID: 32100465
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32100465"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hK7DVLoiZTYgE3lc1DvHIQDCXa7EBYx1yBDu/5gigFnk9dmgXAeh7Fbag9G3vlDdvjYc0Q8lKLsmqUBrHYEm/D419sujG0MNWEA7MF5k0UdKmnTpDESKs3fKdRVcDrhXIkj1GEuGDTQbB+qiZP6PCElOklGTrQnUQIaLlkSUWTxW0F74OOjjgo/fjGkWYMalxYFo6xS9pQDiyyo/32YCv0ZyPLhfZpQVJ+nwJdBWKjoIpVc3/HGOyD10dq6VX5qpfRpTxxu5e99N4Y3ysg5KmdRZGby3jz+3rH5SnEeTTX4beceSPmS1fjt8YKBMqJkHjDcD8bZhPTUp6+oggz9s+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6L+vGl7XkwkUhbmdK7E3v/6ATjVKN0IXMowwJC+cO5Y=;
 b=bKcLNyLMw93L07b+OP1Vek+qn86UwazYsXqcIlBGXog3Pg7UcFrg77cinSMN0Bm9MxdvZd8JRkfE+45Omjt1CUZiRUYhIAUv7F/dPuQ9WTbpCrqoZQdpNyZ5czaPet8kJMwIgzzL1FktXyW8r1utcsXe7gNpahVBYXBSHwtuGOe/WNUilOuIvo8TlGhQ1mIhPGcOtuY0IdXWcRohyiljwHjN1QbZ73qBzsi3MdbBVVHoycjw9CFNM02hz4MYTT3C6AejQ4jmeAPtJLLJMMEgPwIOmmlHz72Vadb/d6+7WOI7BA7G6OJhAXYFpOGth/UqJHkbRqZTrvqc/bnAOmIvMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6L+vGl7XkwkUhbmdK7E3v/6ATjVKN0IXMowwJC+cO5Y=;
 b=Z05cBux0nN91VoFf0/t1nfRtlcLaKBfsjK8e+o26/j2V1IviR6qmCLvPFeR7BVBCn5NTumiI0GX2OERGVmXC608dXKkGsaIh1QAMhl4QVWasX1bIJknlN7mAOZ5wZtpGWCyxN70DBy/nWU7Ho7t/UBjQ5uVCRRsy3xgwOgUYoMA=
Date: Mon, 23 Nov 2020 17:09:43 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <ian.jackson@citrix.com>,
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] MAINTAINERS: Propose Ian Jackson as new release manager
Message-ID: <20201123160943.heq6jwcofvauwfjs@Air-de-Roger>
References: <20201123160400.1273386-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201123160400.1273386-1-george.dunlap@citrix.com>
X-ClientProxiedBy: LO2P123CA0090.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 95ab9a56-5a7e-4abc-5eb7-08d88fca33d2
X-MS-TrafficTypeDiagnostic: DM5PR03MB3369:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3369332CB1E4BF2E6AFEB1EA8FFC0@DM5PR03MB3369.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: BL9hnLG9JpcPP8hx8R4NB7prPqvFLOUNjwBK/E4cdCCbl4+yqVdLl328RZOOdepJwzl/ROsa383cO86sCQn6xdVPW6/qGqgQHeh/Ux2WQThjK12w96Fll5xWXZA6HO/DLpNDBnD3bBi7Xi7y2N15odHk78jTu7j0duoOsto9f6g7EwHWPjOHwPAY5+jOL741kkIr2glrDqLJFzQBeStQtDFeH3y4eKhyZ+SiJ2B76jdrlM8GCPREzuqq/2MoflFBwPSiLsmIKfSN6NS6hgdIMoYPLctEph0eD659f9UZrn82dLAb+eBat82mdRKSAgBfnfn5UpieX7CUEOj8UOlxGA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(376002)(346002)(39860400002)(136003)(366004)(66476007)(66556008)(86362001)(83380400001)(4326008)(8676002)(6496006)(5660300002)(6636002)(8936002)(54906003)(6486002)(2906002)(6862004)(316002)(85182001)(9686003)(6666004)(33716001)(66946007)(26005)(956004)(16526019)(186003)(4744005)(478600001)(1076003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 0qAgL1S2V/HjUEaKccXj6rgy5d50ihTyeH88HC+KIVrJSvWzuaD+KIyG965WcSzAppuzBSj+uliFWxzO9Gg7GUYTYD09JQ9UVCzsUpHnwo6+JoCxFNeJqveQxu/Eq4ZvSG6+3/RwKBxchjnMiH8YZkaNdhoflV8w1qySysVjwBvdu1zsVruW25LM5PywNztGniHzbOSCsS7W/iuokekyPF579RywOjEVyCasKin7sJUGD9FTVhF9dvlE01fojPwjZzsTiVF79O+NyVJ5H+WcDoIHH7avzHLoBizDLs+xxe7Iva2MlKAJjnjDk48VRzYiKa+FHp4/38naVYzhJJ3qPxYNS9SComF7OPmvP5DSq3elydb6n3hTQYIHqWFtGW6lHTGA2OFstnQ5/CpW2G0Bp8Zr4lBpzI7kyKE1Ygl3nyZwQg7qtlD0D+DzboPB2A2E1EzCZgpog0LHgE8S/JAp99MeGsJNnjAYe7bGFbWdHtHRXiiygZKH74D4kt6g8HqIrzWNeJXykt2d9iFNwintj1aMElHpwakIBsqADlJtl31xk5pudXh3rhKVfqqu60MnNVkzOHAGfmGNxA+H+BnyPRRLrxvhaW0qVdbVW68JpgOpwtzGD0xRVHiQyWskjTIvOwEDRRREwwMGyDfOOKoteSrPUSXm1W+SOuUQLSUyANs7nTN4LIvOBDr+y1Ev34Yjmse++vXPiWlQHX5p/kVtiIE95y+WljpwAiaMAqZgPx5yM6uwA4lgC69b/xfYdE8EI6X4ABe3MbNawbXp57iDPRSgRewDtrhiaj8vG8udEjRenC0Zp+tyOhttvJiQdWyTxAcGSXfCzvHxv6DVEF9Id6JOjuCP4oiagpv+5R+/xF2YmJL9fes7eBcQc2umuu07HGglL3hOY9cS6MiI66ldEA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 95ab9a56-5a7e-4abc-5eb7-08d88fca33d2
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 16:09:48.9690
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: K7AIEjILeKO6fPhK44+JfJRZEzBsxz2egKgGcEh49IksBSscFrMfSmEWuxcAKN9FqPHo6VyQcW1455dfgCIDTg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3369
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 04:04:00PM +0000, George Dunlap wrote:
> Ian Jackson has agreed to be the release manager for 4.15.  Signify
> this by giving him maintainership over CHANGELOG.md.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Congratulations!


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:14:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34809.66039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEU2-0005W8-L4; Mon, 23 Nov 2020 16:14:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34809.66039; Mon, 23 Nov 2020 16:14:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEU2-0005W1-I5; Mon, 23 Nov 2020 16:14:30 +0000
Received: by outflank-mailman (input) for mailman id 34809;
 Mon, 23 Nov 2020 16:14:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaDe=E5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khEU0-0005Vw-JU
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:14:28 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b69dc372-cbeb-4cc8-b3fe-d2f9d882f0ae;
 Mon, 23 Nov 2020 16:14:27 +0000 (UTC)
X-Inumbo-ID: b69dc372-cbeb-4cc8-b3fe-d2f9d882f0ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606148067;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=mFjkAR5xlPZJepJuz48GvEajQvwesyj66AWmS6z40oY=;
  b=VWHzCUZIplIZSFwKS0uevGKM0cGUJUhUWU9skbsyVYu8UFtDnOtCs1HR
   3PpjybUm2nGiD6V+pLj64pAfYGUJHJgytx9D8SaiFSxSZDrvG/x8QohyJ
   VY/AoNOmagPQMtqgiZO+7LlUAnDVVf4AfoQSEbr27fpKD9iPvBjw1TWAX
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QB4QenefA4x6DchwyXOHMn3V5uWHRlQfYIbLRrUQovGBujfLnFbOjaTB5xe7fp7ZUVTAqjoT2S
 rl/rzBXr2H/zCHy3DBwQ4hf5/XuU3SZQ3lqVSjOdgfyKXqPlApwrhDOebGkPNVczjKPfq9pWyl
 l6+JWklCfi37AMbbp9cT8KOTIEeoi7AIdRYGxc/DnYqc0F1pRXoBMzMRBB2QsrUAFJ9d4/LVg4
 EihUOHa5wZrBqdiZ1XDTTsAUL92Vf7NZCE6rwaGS3s6c30vYmRjh47qsqBVcxMdoSzdZsEHD4a
 B0I=
X-SBRS: None
X-MesageID: 32101061
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32101061"
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Jan Beulich
	<jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
 <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
 <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fe2ec163-c6c7-12d6-0c89-57a238514e25@citrix.com>
Date: Mon, 23 Nov 2020 16:14:12 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 23/11/2020 16:07, Roger Pau Monné wrote:
> On Mon, Nov 23, 2020 at 04:30:05PM +0100, Jan Beulich wrote:
>> On 23.11.2020 16:24, Roger Pau Monné wrote:
>>> On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/acpi/power.c
>>>> +++ b/xen/arch/x86/acpi/power.c
>>>> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>>>>      if ( state != ACPI_STATE_S3 )
>>>>          return;
>>>>  
>>>> -    wakeup_vector_va = __acpi_map_table(
>>>> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
>>>> -
>>>>      /* TBoot will set resume vector itself (when it is safe to do so). */
>>>>      if ( tboot_in_measured_env() )
>>>>          return;
>>>>  
>>>> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
>>>> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
>>>> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
>>>> +
>>>>      if ( acpi_sinfo.vector_width == 32 )
>>>>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>      else
>>>>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>> +
>>>> +    clear_fixmap(FIX_ACPI_END);
>>> Why not use vmap here instead of the fixmap?
>> Considering the S3 path is relatively fragile (as in: we end up
>> breaking it more often than about anything else) I wanted to
>> make as little of a change as possible. Hence I decided to stick
>> to the fixmap use that was (indirectly) used before as well.
> Unless there's a restriction requiring the ACPI fixmap entry, I would
> just switch to vmap: it's used extensively in the code and is less
> likely to trigger issues in the future, or else a bunch of other stuff
> would also be broken.
>
> IMO doing the mapping differently here when it's not required will end
> up making this code more fragile in the long run.

We can't enter S3 at all until dom0 has booted, as one detail has to
come from AML.

Therefore, we're fully up and running by this point, and vmap() will be
fine.

However, why are we re-writing the wakeup vector every time?  It's fixed
by the position of the trampoline, so we'd actually simplify the S3 path
by only setting it up once.

~Andrew

(The fix for fragility is to actually test it, not to shy away from
making changes.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:16:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:16:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34815.66051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEVV-0005dj-2Q; Mon, 23 Nov 2020 16:16:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34815.66051; Mon, 23 Nov 2020 16:16:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEVU-0005da-Tw; Mon, 23 Nov 2020 16:16:00 +0000
Received: by outflank-mailman (input) for mailman id 34815;
 Mon, 23 Nov 2020 16:16:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1hMa=E5=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1khEVU-0005dV-Cc
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:16:00 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7791be02-7c69-4cb5-9519-eb7a1b0b8520;
 Mon, 23 Nov 2020 16:15:59 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id i2so125135wrs.4
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 08:15:59 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id c64sm17093555wmd.41.2020.11.23.08.15.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 23 Nov 2020 08:15:58 -0800 (PST)
X-Inumbo-ID: 7791be02-7c69-4cb5-9519-eb7a1b0b8520
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=DUIdO66xWcFwCrqDsQq3zk+eXo4iY2dtdw/XxFTcOTg=;
        b=qJ6OAgiJ2OlB4skX/QIUCdiaDbedeSmjhkQfvIArgBY385uhfhHE9LtOG9qhwD2+HX
         o/ckq8iaRPsrVVB1bXTd7Sm3uo53msF8QRppEU/AvarCEIULKKwI3nemTqUeT/h8HnA4
         nkt8rI6N24gkqFRG79kQNaZ1OnIci/KAdAqAVJQZgrzfHN6oE0dciJos939q3K6g8OnU
         8+2QH5svfTSB7+jl5eXL9cau20yfTlf3fl8rhU7UJ1flRSihsq7QnqpbT55sj5t9zDmV
         eDnobfAFVB+ovNwMSwBOheueWApVWiIpY1bmHyDcz485JrN+lGeGUWq9KnDEG2ak/cNI
         3sFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=DUIdO66xWcFwCrqDsQq3zk+eXo4iY2dtdw/XxFTcOTg=;
        b=HGQq0xJz6wgY2aA20v5U8xCu4QBsqhCB05Osz6Kjf368biJby7Ev6Ir5NAO0eR1cw2
         1gRQRx4l8e4ws4+mppoWtJzOXp8v/Dv6h4tNHRJEKLC+yjABY5b79/HHWBjOgoopy10v
         Gqoqjxb9spOO/VwwC+FbnI2aL/b8DWeF8EAuMgQJK3T84GMIMd945ugPoEfr0Uc0iXgd
         mjAEUNgQNjmxDUOvEJJjg4huqtrdQwWEsyI3SO9qlX0Xp3CdhtRWgGyghrMIBOKoJvPA
         j6waPrOkfn7q4xzgPBuz124eOhu6GaPsllnAmBJvyz3fWuFKrhlD0sKNsI4M231kaN08
         gJqA==
X-Gm-Message-State: AOAM531CMCh0tRasGc+DWly8gEd9uJZIFKWAmrGpKHaNpUisan0lvvxQ
	0RDL3fgwF+P2w1PmWn0hDdE=
X-Google-Smtp-Source: ABdhPJwx1Ij0RqLsfIxMRkS6BsWpc0syflT4bsEJbfJnRP4b54MRJiS2FRSgMd7Dd6hOglK13eYdGg==
X-Received: by 2002:a5d:690a:: with SMTP id t10mr418108wru.203.1606148158618;
        Mon, 23 Nov 2020 08:15:58 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
        by smtp.gmail.com with ESMTPSA id c64sm17093555wmd.41.2020.11.23.08.15.57
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Mon, 23 Nov 2020 08:15:58 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'George Dunlap'" <george.dunlap@citrix.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Ian Jackson'" <ian.jackson@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Roger Pau Monne'" <roger.pau@citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien@xen.org>
References: <20201123160400.1273386-1-george.dunlap@citrix.com>
In-Reply-To: <20201123160400.1273386-1-george.dunlap@citrix.com>
Subject: RE: [PATCH] MAINTAINERS: Propose Ian Jackson as new release manager
Date: Mon, 23 Nov 2020 16:15:56 -0000
Message-ID: <002601d6c1b3$ed622fb0$c8268f10$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQMoFzxQ8z2DYHxeStd58zb4GUnL8aczOQRA

> -----Original Message-----
> From: George Dunlap <george.dunlap@citrix.com>
> Sent: 23 November 2020 16:04
> To: xen-devel@lists.xenproject.org
> Cc: George Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@citrix.com>; Wei Liu
> <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Jan Beulich <jbeulich@suse.com>; Roger Pau
> Monne <roger.pau@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien@xen.org>; Paul Durrant <paul@xen.org>
> Subject: [PATCH] MAINTAINERS: Propose Ian Jackson as new release manager
> 
> Ian Jackson has agreed to be the release manager for 4.15.  Signify
> this by giving him maintainership over CHANGELOG.md.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Thank you, Ian.

Acked-by: Paul Durrant <paul@xen.org>

> ---
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Roger Pau Monne <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> ---
>  MAINTAINERS | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index dab38a6a14..a9872df1de 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -250,7 +250,7 @@ F:	xen/include/public/arch-arm/
>  F:	xen/include/public/arch-arm.h
> 
>  Change Log
> -M:	Paul Durrant <paul@xen.org>
> +M:	Ian Jackson <ian.jackson@citrix.com>
>  R:	Community Manager <community.manager@xenproject.org>
>  S:	Maintained
>  F:	CHANGELOG.md
> --
> 2.25.1




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:17:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:17:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34822.66063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEWg-0005lZ-BS; Mon, 23 Nov 2020 16:17:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34822.66063; Mon, 23 Nov 2020 16:17:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEWg-0005lS-8H; Mon, 23 Nov 2020 16:17:14 +0000
Received: by outflank-mailman (input) for mailman id 34822;
 Mon, 23 Nov 2020 16:17:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zhGQ=E5=gmail.com=lukas.bulwahn@srs-us1.protection.inumbo.net>)
 id 1khEWf-0005lN-CQ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:17:13 +0000
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1a3b8c9-2715-401c-9188-36a186621ab8;
 Mon, 23 Nov 2020 16:17:12 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id r9so18650138ioo.7
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 08:17:12 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zhGQ=E5=gmail.com=lukas.bulwahn@srs-us1.protection.inumbo.net>)
	id 1khEWf-0005lN-CQ
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:17:13 +0000
X-Inumbo-ID: e1a3b8c9-2715-401c-9188-36a186621ab8
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e1a3b8c9-2715-401c-9188-36a186621ab8;
	Mon, 23 Nov 2020 16:17:12 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id r9so18650138ioo.7
        for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 08:17:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=9G9wehhbpTZzWWOvGr3rWAgWQ0geLC9Yp77M/YC6Idw=;
        b=jzsPhOCPkH8JcubkL+ilpCqrpmJxzR/ZZhmza1K1c0eddk7+QY69lMXZ1zjj7yUobv
         UPOat94Oznz5N+aYQtyK2tx44PHGHvaT6c5ydFaOoT5387NjxS4giDE0liSNJKSpvdGp
         HKiV67o8ZfM+QomP6uHJNCHgME5k654Lm9OlI5uZEZV/BZyGXhrMcp4osA9RqzKjR7Ze
         0dIcMPHMz9M6e1KbFfvtyGGjldyt3D137VsBHg9GBRp12UjhpWPsI6mtFPtV54IoaC+K
         mSdXyd8g5UOodS4awURhueIsrAq2XHUWM9vJndBZGt3fLtR88qq6iY1NEsTPrUzb2MVL
         n+0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=9G9wehhbpTZzWWOvGr3rWAgWQ0geLC9Yp77M/YC6Idw=;
        b=A79bGu7fBptxF62byK7FXc0JZrkMG8D5L2KsxwOsl1lwIDBQEgfE9Hsnn9BK1HoihA
         76PDtnH6d2LDOsbR81LrfT72O1vyfX/YPNL1LNUCzXPhMeB5d3TYvMYq6AiTvQQTo+Ut
         aLfLM8YesilbozUBnEEEdNMLSM505aDAQ8ldqLh48LDXDyp8zU73GsEGFszqN2Ik8BBd
         zRtVV3rNfd7pnp8/pgmjq4Tr5kBtJavnbtjTGe27NG2ozIdktvu6DTjrs1IvWbo1fhsL
         j0JkkBI/UG/tWqARZ4tkpOEAI97a4gQ5MjE4g4KbvI6HooUpO9oAC25gfCiZrGCNm9Zk
         5XKg==
X-Gm-Message-State: AOAM533KaOS6N+0adQtHXmAmCaGWrAMnskmjZUx0UbKERj5JRktFK3o4
	4cB/ecoSZVG4+ll5XoLOBjEZ42sV0bVUZOtCJlI=
X-Google-Smtp-Source: ABdhPJw5wxDBX8QpwE2CghMfFtPIAtnmG9XIUqmbpxXD9wxvuYGnavpxLlav0NBfuecvTCMrGKLgN0Lee5ktV7i8vQ8=
X-Received: by 2002:a05:6602:22c7:: with SMTP id e7mr415585ioe.114.1606148231969;
 Mon, 23 Nov 2020 08:17:11 -0800 (PST)
MIME-Version: 1.0
References: <20201121165058.1644182-1-trix@redhat.com> <5843ef910b0e86c00d9c0143dec20f93823b016b.camel@HansenPartnership.com>
 <87y2ism5or.fsf@intel.com>
In-Reply-To: <87y2ism5or.fsf@intel.com>
From: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Date: Mon, 23 Nov 2020 17:17:00 +0100
Message-ID: <CAKXUXMydH+VtMeuftPRgCg_PYm2iChOMkUYjO=QTG=NRM3QFiw@mail.gmail.com>
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
To: Jani Nikula <jani.nikula@linux.intel.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, Tom Rix <trix@redhat.com>, 
	Joe Perches <joe@perches.com>, clang-built-linux <clang-built-linux@googlegroups.com>, 
	linux-hyperv@vger.kernel.org, kvm@vger.kernel.org, 
	linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org, 
	platform-driver-x86@vger.kernel.org, ibm-acpi-devel@lists.sourceforge.net, 
	"open list:ASYMMETRIC KEYS" <keyrings@vger.kernel.org>, linux-mtd@lists.infradead.org, 
	linux-scsi@vger.kernel.org, amd-gfx@lists.freedesktop.org, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, 
	tboot-devel@lists.sourceforge.net, coreteam@netfilter.org, 
	xen-devel@lists.xenproject.org, MPT-FusionLinux.pdl@broadcom.com, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, alsa-devel@alsa-project.org, 
	intel-gfx@lists.freedesktop.org, ecryptfs@vger.kernel.org, 
	linux-omap@vger.kernel.org, devel@acpica.org, linux-nfs@vger.kernel.org, 
	Netdev <netdev@vger.kernel.org>, linux-usb@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-bluetooth@vger.kernel.org, 
	netfilter-devel@vger.kernel.org, linux-crypto@vger.kernel.org, 
	patches@opensource.cirrus.com, linux-fsdevel@vger.kernel.org, 
	bpf@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 23, 2020 at 4:52 PM Jani Nikula <jani.nikula@linux.intel.com> wrote:
>
> On Sat, 21 Nov 2020, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
> >> A difficult part of automating commits is composing the subsystem
> >> preamble in the commit log.  For the ongoing effort of a fixer
> >> producing
> >> one or two fixes a release the use of 'treewide:' does not seem
> >> appropriate.
> >>
> >> It would be better if the normal prefix was used.  Unfortunately
> >> normal is
> >> not consistent across the tree.
> >>
> >>
> >>      D: Commit subsystem prefix
> >>
> >> ex/ for FPGA DFL DRIVERS
> >>
> >>      D: fpga: dfl:
> >>
> >
> > I've got to bet this is going to cause more issues than it solves.
>
> Agreed.
>

Tom, this is a problem only kernel janitors encounter; all other
developers really do not have this issue. The time spent on creating
the patch is much larger than the amount saved if the commit log
header line prefix were derived automatically. I believe Julia
Lawall, Arnd Bergmann and Nathan Chancellor, as long-term
high-frequency janitors, already have scripted approaches to this
issue. Maybe they simply need to share these scripts with you, and
you consolidate them and share them with everyone?

Also, making clean-up patches cumbersome has a positive side as well:
maintainers are not swamped with fully automated patch submissions.
There have been some bad experiences with such submitters in the
past...

> > SCSI uses scsi: <driver>: for drivers but not every driver has a
> > MAINTAINERS entry.  We use either scsi: or scsi: core: for mid layer
> > things, but we're not consistent.  Block uses blk-<something>: for all
> > of its stuff but almost no <something>s have a MAINTAINERS entry.  So
> > the next thing you're going to cause is an explosion of suggested
> > MAINTAINERS entries.
>
> On the one hand, adoption of new MAINTAINERS entries has been really
> slow. Look at B, C, or P, for instance. On the other hand, if this were
> to get adopted, you'll potentially get conflicting prefixes for patches
> touching multiple files. Then what?
>
> I'm guessing a script looking at git log could come up with better
> suggestions for prefixes via popularity contest than manually maintained
> MAINTAINERS entries. It might not always get it right, but then human
> outsiders aren't going to always get it right either.
>
> Now you'll only need Someone(tm) to write the script. ;)
>
> Something quick like this:
>
> git log --since={1year} --pretty=format:%s -- <FILES> |\
>         grep -v "^\(Merge\|Revert\)" |\
>         sed 's/:[^:]*$//' |\
>         sort | uniq -c | sort -rn | head -5
>
> already gives me results that really aren't worse than some of the
> prefixes invented by drive-by contributors.
>

I agree; I do not see the need to introduce something in MAINTAINERS.
From my observations maintaining MAINTAINERS, there is already
sufficient work in the adoption and maintenance of the existing
entries without yet another additional entry. Some entries are
outdated or wrong, and the janitorial task of cleaning those up is
already enough work for the janitors involved and enough churn for
the maintainers involved. So a machine-learned approach like the one
above is probably good enough; if you think you need more complex
rules, try to learn them from the data at hand... certainly a nice
task for machine learning on commit message prefixes.

Lukas


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:25:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:25:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34833.66075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEeI-0006ib-5y; Mon, 23 Nov 2020 16:25:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34833.66075; Mon, 23 Nov 2020 16:25:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEeI-0006iU-2d; Mon, 23 Nov 2020 16:25:06 +0000
Received: by outflank-mailman (input) for mailman id 34833;
 Mon, 23 Nov 2020 16:25:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Zne=E5=gmail.com=rjwysocki@srs-us1.protection.inumbo.net>)
 id 1khEeG-0006iP-Cw
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:25:04 +0000
Received: from mail-ot1-f67.google.com (unknown [209.85.210.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7fd4e75-15da-47d6-a01e-dff62d973210;
 Mon, 23 Nov 2020 16:25:03 +0000 (UTC)
Received: by mail-ot1-f67.google.com with SMTP id n11so16428742ota.2
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 08:25:03 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+Zne=E5=gmail.com=rjwysocki@srs-us1.protection.inumbo.net>)
	id 1khEeG-0006iP-Cw
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:25:04 +0000
X-Inumbo-ID: f7fd4e75-15da-47d6-a01e-dff62d973210
Received: from mail-ot1-f67.google.com (unknown [209.85.210.67])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f7fd4e75-15da-47d6-a01e-dff62d973210;
	Mon, 23 Nov 2020 16:25:03 +0000 (UTC)
Received: by mail-ot1-f67.google.com with SMTP id n11so16428742ota.2
        for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 08:25:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=SWzdE85KMSmQ5K6aYMgWVQjn8/tst2VI3dLUnv/55SM=;
        b=hjSIx2MRHiQ5/pfgGNPQdO9DTATY345/PLpLEi9jq+6oWscmJWtdxCCNHVf+e5vZMe
         yaYm1c5YF74vxb5AggaoW5WmgaXiUfhIP8Y6+9FqXgUGyLG8e9X1CoDcvEkKv6dFsA2P
         5mZucN20icZ3K3xnanZ48CR5LvQ4sP5hs3FMQPocCy4dpG7LHd3sWsONS06lcxfQ9QqR
         W0L+35IEOe4lhFCAfAhM74b4pipVd8lhgeGlZYn9KjVhAolL3AuIJ2sPoJ4Emwy3L0Iz
         Ck8ancE5wh3IW7ggJV8iUXqp5nOOF0VjDZsCySxWEmecHnX/wPItJ+/DJshuQJzAsNWn
         3Elg==
X-Gm-Message-State: AOAM531oquS/Bn7NjrTnByKFdLxFGveziGO2O11djz3j6tBMyjAhRrGU
	1pBl7EeIhX2TR3xMN02VoA8mnN5soePloGsRgeM=
X-Google-Smtp-Source: ABdhPJxW9u67TbnPp8ObIChv5a6F+ncd8SoV+BFvs894Bl4MJzyAX0TVAjkpawJr+z1OvIzzPyEcIsCefAg0WelXSFg=
X-Received: by 2002:a9d:16f:: with SMTP id 102mr68959otu.206.1606148702991;
 Mon, 23 Nov 2020 08:25:02 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com> <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
In-Reply-To: <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Mon, 23 Nov 2020 17:24:51 +0100
Message-ID: <CAJZ5v0jJ6GFm4LFCR2V3qvD9rZrVw=pXyXSjSWPYtQudg-F3xg@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>, Kees Cook <keescook@chromium.org>, 
	Jakub Kicinski <kuba@kernel.org>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	linux-kernel <linux-kernel@vger.kernel.org>, 
	"moderated list:SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEM..." <alsa-devel@alsa-project.org>, amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel <dri-devel@lists.freedesktop.org>, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx <intel-gfx@lists.freedesktop.org>, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, 
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>, linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-arm-msm <linux-arm-msm@vger.kernel.org>, linux-atm-general@lists.sourceforge.net, 
	linux-block@vger.kernel.org, linux-can@vger.kernel.org, 
	linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	"open list:FRAMEBUFFER LAYER" <linux-fbdev@vger.kernel.org>, linux-geode@lists.infradead.org, 
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org, 
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org, 
	"open list:LIBATA SUBSYSTEM (Serial and Parallel ATA drivers)" <linux-ide@vger.kernel.org>, linux-iio@vger.kernel.org, 
	linux-input <linux-input@vger.kernel.org>, linux-integrity@vger.kernel.org, 
	"moderated list:ARM/Mediatek SoC..." <linux-mediatek@lists.infradead.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc <linux-mmc@vger.kernel.org>, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, 
	"open list:TARGET SUBSYSTEM" <linux-scsi@vger.kernel.org>, linux-sctp@vger.kernel.org, 
	linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, 
	"open list:ULTRA-WIDEBAND (UWB) SUBSYSTEM:" <linux-usb@vger.kernel.org>, linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau <nouveau@lists.freedesktop.org>, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 23, 2020 at 4:58 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Mon, 2020-11-23 at 15:19 +0100, Miguel Ojeda wrote:
> > On Sun, Nov 22, 2020 at 11:36 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:

[cut]

> >
> > Maintainers routinely review 1-line trivial patches, not to mention
> > internal API changes, etc.
>
> We're also complaining about the inability to recruit maintainers:
>
> https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
>
> And burn out:
>
> http://antirez.com/news/129

Right.

> The whole crux of your argument seems to be maintainers' time isn't
> important so we should accept all trivial patches ... I'm pushing back
> on that assumption in two places, firstly the valuelessness of the time
> and secondly that all trivial patches are valuable.
>
> > If some company does not want to pay for that, that's fine, but they
> > don't get to be maintainers and claim `Supported`.
>
> What I'm actually trying to articulate is a way of measuring value of
> the patch vs cost ... it has nothing really to do with who foots the
> actual bill.
>
> One thesis I'm actually starting to formulate is that this continual
> devaluing of maintainers is why we have so much difficulty keeping and
> recruiting them.

Absolutely.

This is just one of the factors involved, but a significant one IMV.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:31:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:31:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34843.66086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEkj-0007c9-Tr; Mon, 23 Nov 2020 16:31:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34843.66086; Mon, 23 Nov 2020 16:31:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEkj-0007c2-QU; Mon, 23 Nov 2020 16:31:45 +0000
Received: by outflank-mailman (input) for mailman id 34843;
 Mon, 23 Nov 2020 16:31:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6H7V=E5=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1khEki-0007bx-9X
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:31:44 +0000
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00706bbb-d52e-4f7c-a3a8-1c6873558f91;
 Mon, 23 Nov 2020 16:31:36 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 7EDBD12808F6;
 Mon, 23 Nov 2020 08:31:35 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id S05__PC1UR2d; Mon, 23 Nov 2020 08:31:35 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 9FAA112808A8;
 Mon, 23 Nov 2020 08:31:31 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6H7V=E5=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
	id 1khEki-0007bx-9X
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:31:44 +0000
X-Inumbo-ID: 00706bbb-d52e-4f7c-a3a8-1c6873558f91
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 00706bbb-d52e-4f7c-a3a8-1c6873558f91;
	Mon, 23 Nov 2020 16:31:36 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by bedivere.hansenpartnership.com (Postfix) with ESMTP id 7EDBD12808F6;
	Mon, 23 Nov 2020 08:31:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606149095;
	bh=+IhPqZ/v6VfDyyXzj4lMu9axEsbedJZyqBpFfwsfG+g=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=taYTQdLWt1uKXwIt/3Ve/mEupTdq+Wcpdv+UXp5WQMTWxY34l98m0qHLcdAiwuo2t
	 gwBg46qri78QHRql74q8THMzP+7WPx9XqttvrPch20gBcYUMT4pLXQarcLhIoin1Gp
	 Z+ziweydBKwdaV8ZmrW12X55c5G6vUR8Kiznotik=
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
	by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id S05__PC1UR2d; Mon, 23 Nov 2020 08:31:35 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown [IPv6:2601:600:8280:66d1::527])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 9FAA112808A8;
	Mon, 23 Nov 2020 08:31:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606149095;
	bh=+IhPqZ/v6VfDyyXzj4lMu9axEsbedJZyqBpFfwsfG+g=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=taYTQdLWt1uKXwIt/3Ve/mEupTdq+Wcpdv+UXp5WQMTWxY34l98m0qHLcdAiwuo2t
	 gwBg46qri78QHRql74q8THMzP+7WPx9XqttvrPch20gBcYUMT4pLXQarcLhIoin1Gp
	 Z+ziweydBKwdaV8ZmrW12X55c5G6vUR8Kiznotik=
Message-ID: <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Joe Perches <joe@perches.com>, Kees Cook <keescook@chromium.org>, Jakub
 Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org,
 linux-atm-general@lists.sourceforge.net,  reiserfs-devel@vger.kernel.org,
 linux-iio@vger.kernel.org,  linux-wireless@vger.kernel.org,
 linux-fbdev@vger.kernel.org,  dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, Nathan Chancellor <natechancellor@gmail.com>,
 linux-ide@vger.kernel.org, dm-devel@redhat.com,  keyrings@vger.kernel.org,
 linux-mtd@lists.infradead.org,  GR-everest-linux-l2@marvell.com,
 wcn36xx@lists.infradead.org,  samba-technical@lists.samba.org,
 linux-i3c@lists.infradead.org,  linux1394-devel@lists.sourceforge.net,
 linux-afs@lists.infradead.org,  usb-storage@lists.one-eyed-alien.net,
 drbd-dev@lists.linbit.com,  devel@driverdev.osuosl.org,
 linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com,  Nick Desaulniers
 <ndesaulniers@google.com>, linux-scsi@vger.kernel.org,
 linux-rdma@vger.kernel.org,  oss-drivers@netronome.com,
 bridge@lists.linux-foundation.org,  linux-security-module@vger.kernel.org,
 amd-gfx@lists.freedesktop.org,  linux-stm32@st-md-mailman.stormreply.com,
 cluster-devel@redhat.com,  linux-acpi@vger.kernel.org,
 coreteam@netfilter.org,  intel-wired-lan@lists.osuosl.org,
 linux-input@vger.kernel.org, Miguel Ojeda <ojeda@kernel.org>,
 tipc-discussion@lists.sourceforge.net,  linux-ext4@vger.kernel.org,
 linux-media@vger.kernel.org,  linux-watchdog@vger.kernel.org,
 selinux@vger.kernel.org,  linux-arm-msm@vger.kernel.org,
 intel-gfx@lists.freedesktop.org,  linux-geode@lists.infradead.org,
 linux-can@vger.kernel.org,  linux-block@vger.kernel.org,
 linux-gpio@vger.kernel.org,  op-tee@lists.trustedfirmware.org,
 linux-mediatek@lists.infradead.org,  xen-devel@lists.xenproject.org,
 nouveau@lists.freedesktop.org,  linux-hams@vger.kernel.org,
 ceph-devel@vger.kernel.org,  virtualization@lists.linux-foundation.org, 
 linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, 
 x86@kernel.org, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
 linux-mm@kvack.org, netdev@vger.kernel.org, 
 linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 linux-crypto@vger.kernel.org, patches@opensource.cirrus.com, 
 linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
 linux-hardening@vger.kernel.org, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>,  Greg KH <gregkh@linuxfoundation.org>
Date: Mon, 23 Nov 2020 08:31:30 -0800
In-Reply-To: <20201123130348.GA3119@embeddedor>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
	 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
	 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
	 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
	 <20201123130348.GA3119@embeddedor>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-23 at 07:03 -0600, Gustavo A. R. Silva wrote:
> On Sun, Nov 22, 2020 at 11:53:55AM -0800, James Bottomley wrote:
> > On Sun, 2020-11-22 at 11:22 -0800, Joe Perches wrote:
> > > On Sun, 2020-11-22 at 11:12 -0800, James Bottomley wrote:
> > > > On Sun, 2020-11-22 at 10:25 -0800, Joe Perches wrote:
> > > > > On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> > > > > > Please tell me our reward for all this effort isn't a
> > > > > > single missing error print.
> > > > > 
> > > > > There were quite literally dozens of logical defects found
> > > > > by the fallthrough additions.  Very few were logging only.
> > > > 
> > > > So can you give us the best examples (or indeed all of them if
> > > > someone is keeping score)?  hopefully this isn't a US election
> > > > situation ...
> > > 
> > > Gustavo?  Are you running for congress now?
> > > 
> > > https://lwn.net/Articles/794944/
> > 
> > That's 21 reported fixes, of which about half seem to produce no
> > change in code behaviour at all, a quarter seem to have no user-
> > visible effect, and the remaining quarter produce unexpected errors
> > on obscure configuration parameters, which is why no-one really
> > noticed them before.
> 
> The really important point here is the number of bugs this has
> prevented and will prevent in the future. See an example of this,
> below:
> 
> https://lore.kernel.org/linux-iio/20190813135802.GB27392@kroah.com/

I think this falls into the same category as the other six bugs: it
changes the output/input for parameters but no-one has really noticed,
usually because the command is obscure or the bias effect is minor.

> This work is still relevant, even if the total number of issues/bugs
> we find in the process is zero (which is not the case).

Really, no ... something which produces no improvement has no value at
all ... we really shouldn't be wasting maintainer time with it because
it has a cost to merge.  I'm not sure we understand where the balance
lies in value vs cost to merge but I am confident in the zero value
case.

> "The sucky thing about doing hard work to deploy hardening is that
> the result is totally invisible by definition (things not happening)
> [..]"
> - Dmitry Vyukov

Really, no.  Something that can't be measured at all doesn't exist.

And actually hardening is one of those things you can measure (which I
do have to admit isn't true for everything in the security space) ...
it's number of exploitable bugs found before you did it vs number of
exploitable bugs found after you did it.  Usually hardening eliminates
a class of bug, so the way I've measured hardening before is to go
through the CVE list for the last couple of years for product X, find
all the bugs that are of the class we're looking to eliminate and say
if we had hardened X against this class of bug we'd have eliminated Y%
of the exploits.  It can be quite impressive if Y is a suitably big
number.

James




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:32:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:32:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34851.66098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khElt-0007jf-Cm; Mon, 23 Nov 2020 16:32:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34851.66098; Mon, 23 Nov 2020 16:32:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khElt-0007jY-9b; Mon, 23 Nov 2020 16:32:57 +0000
Received: by outflank-mailman (input) for mailman id 34851;
 Mon, 23 Nov 2020 16:32:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eYGc=E5=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1khElr-0007jT-VS
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:32:56 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.163])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a6d61e26-9f67-4593-84c8-ad8ad8de68bf;
 Mon, 23 Nov 2020 16:32:54 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay07.hostedemail.com (Postfix) with ESMTP id A722B181D3025;
 Mon, 23 Nov 2020 16:32:53 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf03.hostedemail.com (Postfix) with ESMTPA;
 Mon, 23 Nov 2020 16:32:42 +0000 (UTC)
X-Inumbo-ID: a6d61e26-9f67-4593-84c8-ad8ad8de68bf
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:960:967:973:988:989:1260:1263:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1535:1541:1593:1594:1711:1730:1747:1777:1792:2393:2525:2565:2682:2685:2740:2828:2859:2912:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3353:3622:3653:3865:3866:3867:3868:3870:3871:3872:3873:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4321:5007:6119:6742:6743:7903:9025:9388:10004:10400:10848:10946:11026:11232:11658:11914:12043:12049:12297:12438:12663:12740:12760:12895:13069:13161:13172:13229:13311:13357:13439:13972:14096:14097:14181:14659:14721:14764:21080:21451:21627:21781:21788:21809:21990:30034:30041:30054:30060:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: can43_5c1502d27366
X-Filterd-Recvd-Size: 5503
Message-ID: <32dc7423124b51da4e144e931bf099a368ab50a8.camel@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: Joe Perches <joe@perches.com>
To: James Bottomley <James.Bottomley@HansenPartnership.com>, Miguel Ojeda
	 <miguel.ojeda.sandonis@gmail.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
 "Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel
 <linux-kernel@vger.kernel.org>,  alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org,  bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org,  cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, Linux ARM
 <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, Linux Crypto Mailing
 List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,  Ext4 Developers List
 <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org, Linux
 Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless
 <linux-wireless@vger.kernel.org>,  Network Development
 <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org,  op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com,  patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com,  reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org,  selinux@vger.kernel.org,
 target-devel@vger.kernel.org,  tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org,  linux-hardening@vger.kernel.org, Nick
 Desaulniers <ndesaulniers@google.com>,  Nathan Chancellor
 <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>
Date: Mon, 23 Nov 2020 08:32:41 -0800
In-Reply-To: <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
	 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
	 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
	 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-23 at 07:58 -0800, James Bottomley wrote:
> We're also complaining about the inability to recruit maintainers:
> 
> https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
> 
> And burn out:
> 
> http://antirez.com/news/129

https://www.wired.com/story/open-source-coders-few-tired/

> What I'm actually trying to articulate is a way of measuring value of
> the patch vs cost ... it has nothing really to do with who foots the
> actual bill.

It's unclear how to measure the value of consistency.

But one way that costs can be reduced is by automation and _not_
involving maintainers when the patch itself is provably correct.

> One thesis I'm actually starting to formulate is that this continual
> devaluing of maintainers is why we have so much difficulty keeping and
> recruiting them.

The Linux kernel has something like 1500 different maintainers listed
in the MAINTAINERS file.  That's not a trivial number.

$ git grep '^M:' MAINTAINERS | sort | uniq -c | wc -l
1543
$ git grep '^M:' MAINTAINERS| cut -f1 -d'<' | sort | uniq -c | wc -l
1446

I think the question you are asking is about trust and how it
affects development.

And back to that Wired story, the actual number of what you might
consider to be maintainers is likely less than 10% of the
numbers listed above.




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 16:36:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 16:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34861.66114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEpG-0007yM-Vb; Mon, 23 Nov 2020 16:36:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34861.66114; Mon, 23 Nov 2020 16:36:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khEpG-0007yF-Rk; Mon, 23 Nov 2020 16:36:26 +0000
Received: by outflank-mailman (input) for mailman id 34861;
 Mon, 23 Nov 2020 16:36:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dDmC=E5=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1khEpG-0007yA-0r
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 16:36:26 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c173b3b0-9329-4d6b-a938-e3e53edaadee;
 Mon, 23 Nov 2020 16:36:25 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id s30so24654040lfc.4
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 08:36:24 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id i10sm1427063lfo.19.2020.11.23.08.36.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 23 Nov 2020 08:36:23 -0800 (PST)
X-Inumbo-ID: c173b3b0-9329-4d6b-a938-e3e53edaadee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=/K5iUHatmuqEhjbGJrlXqgis7vMnx7mGfGzhYkWi4Lk=;
        b=hpag8Ohy0eqJW7lFDqpoS7fwWHLBaWodIwFKCn73f1bp4ILV6Rt4XLoGzQFArD0ZFe
         aJD1i09SNG1Y1ghDcmaD7rQueu1uENJSlC2EA7GEoSBgm6hdNT+AbVCAkL+nSWQOxXgl
         u/1lXYy3bTZxgzt7Ox70Vh5870q0UmIPTLTv1g+RDPX8eKZEwwv0lWcLpVMs9PkDbCYj
         oldrWJq8OvBR+E3LCX9SnrdRWT9CyeTMJN6CvunD/ipPV3ddXZuWwubq6W60g9M9n5r+
         ZJPVZ3v2aZO07DiJC4oVmNZdjWueck2lCRDnlWg7e/e9AoMQ2GpMM/KKaolA4+PhPb/L
         9qPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=/K5iUHatmuqEhjbGJrlXqgis7vMnx7mGfGzhYkWi4Lk=;
        b=tcTPVpDsmeK8zDFV2cjbZbf7wuSGXOzbvt/gCF8w86JYg9Qsw12G1PIRjxr4XbZw4E
         eltiP3INSbo4UjPmB+gfv1FoctbzTeEfk56v2nEotz963rcGLDswMpkpXZ/Vy5Uvui3w
         q0uIzpyM2G5KOPl55IcaFYFhrMJtpQt8B8G0KaomJD7SPJUeMlP1RKhrdQczuQNrZyHu
         C3y7FXjeDXSLGOwj6M/l6RlUwiqloiBUc7+z0OOfPoERMGaDVfemi3L9kf13P8WtP7h+
         YUELhJGtY7C/br4KOywNcdm4B3sGR/HlAxz9yIKhurdukuq7pGl+xjTtj4dOAm/TL0e8
         m7VA==
X-Gm-Message-State: AOAM531M271SeTlYwywlXS7VFPy5bHyIL6I9FT/ONSQ2KvaYGrVHOUH9
	VxmmqCyFmHTAw1c4DWAd0IIYNUGBz3ig8w==
X-Google-Smtp-Source: ABdhPJz76K7PsT0Uou9OKoNF4Et8bXBU8xHH7QuobTFBx8CsBz/ywiEB+w/2LmaEIV51m62KMtTV0g==
X-Received: by 2002:a19:c714:: with SMTP id x20mr12641037lff.537.1606149383746;
        Mon, 23 Nov 2020 08:36:23 -0800 (PST)
Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved
 function names
To: paul@xen.org
Cc: 'Jan Beulich' <jbeulich@suse.com>,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Kevin Tian'
 <kevin.tian@intel.com>, 'Julien Grall' <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
 <e3064b77-71c3-9d8d-2324-6839895101f4@suse.com>
 <d3b6623c-683d-2845-78c3-a114193b0ce4@gmail.com>
 <04a81b7e-213a-968b-048c-dfa68b6e3b0d@gmail.com>
 <96e6622c-08b3-ff85-75f1-14c8b7cd6d6e@suse.com>
 <30c01448-d4f2-803e-1569-5e806f830efc@gmail.com>
 <002101d6c1b0$e906a520$bb13ef60$@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <0b502895-e715-4dee-6476-8e25f3a7a3ec@gmail.com>
Date: Mon, 23 Nov 2020 18:36:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002101d6c1b0$e906a520$bb13ef60$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 23.11.20 17:54, Paul Durrant wrote:

Hi Paul

>> -----Original Message-----
>> From: Oleksandr <olekstysh@gmail.com>
>> Sent: 23 November 2020 15:48
>> To: Jan Beulich <jbeulich@suse.com>; Paul Durrant <paul@xen.org>
>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
>> Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
>> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano
>> Stabellini <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian
>> <kevin.tian@intel.com>; Julien Grall <julien.grall@arm.com>; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
>>
>>
>> On 23.11.20 16:56, Jan Beulich wrote:
>>
>> Hi Jan, Paul
>>
>>> On 23.11.2020 15:39, Oleksandr wrote:
>>>> As it was agreed, below the list of proposed renaming (naming) within
>>>> current series.
>>> Thanks for compiling this. A couple of suggestions for consideration:
>>>
>>>> 1. Global (existing):
>>>> hvm_map_mem_type_to_ioreq_server     -> ioreq_server_map_mem_type
>>>> hvm_select_ioreq_server              -> ioreq_server_select
>>>> hvm_send_ioreq                       -> ioreq_send
>>>> hvm_ioreq_init                       -> ioreq_init
>>> ioreq_domain_init() (or, imo less desirable domain_ioreq_init())?
>> On Arm (for example) I see two variants are present:
>> 1. That starts with subsystem:
>> - tee_domain_init
>> - iommu_domain_init
>>
>>
>> 2. Where the subsystem is in the middle:
>> - domain_io_init
>> - domain_vuart_init
>> - domain_vtimer_init
>>
>> If there is no rule and it is a matter of taste, then I would use
>> ioreq_domain_init(),
>> so arch_ioreq_init() should become arch_ioreq_domain_init().
>>
>>>> hvm_destroy_all_ioreq_servers        -> ioreq_server_destroy_all
>>>> hvm_all_ioreq_servers_add_vcpu       -> ioreq_server_add_vcpu_all
>>>> hvm_all_ioreq_servers_remove_vcpu    -> ioreq_server_remove_vcpu_all
>>>> hvm_broadcast_ioreq                  -> ioreq_broadcast
>>>> hvm_create_ioreq_server              -> ioreq_server_create
>>>> hvm_get_ioreq_server_info            -> ioreq_server_get_info
>>>> hvm_map_io_range_to_ioreq_server     -> ioreq_server_map_io_range
>>>> hvm_unmap_io_range_from_ioreq_server -> ioreq_server_unmap_io_range
>>>> hvm_set_ioreq_server_state           -> ioreq_server_set_state
>>>> hvm_destroy_ioreq_server             -> ioreq_server_destroy
>>>> hvm_get_ioreq_server_frame           -> ioreq_server_get_frame
>>>> hvm_ioreq_needs_completion           -> ioreq_needs_completion
>>>> hvm_mmio_first_byte                  -> ioreq_mmio_first_byte
>>>> hvm_mmio_last_byte                   -> ioreq_mmio_last_byte
>>>> send_invalidate_req                  -> ioreq_signal_mapcache_invalidate
>>>>
>>>> handle_hvm_io_completion             -> handle_io_completion
>>> For this one I'm not sure what to suggest, but I'm not overly happy
>>> with the name.
>> I also failed to find a better name. Probably an ioreq_ or vcpu_ioreq_
>> prefix should be added here?
>>
>>
>>>> hvm_io_pending                       -> io_pending
>>> vcpu_ioreq_pending() or vcpu_any_ioreq_pending()?
>> I am fine with vcpu_ioreq_pending()
>>
> ...in which case vcpu_ioreq_handle_completion() seems like a reasonable choice.

ok, will rename here ...


>
>>>> 2. Global (new):
>>>> arch_io_completion

and here arch_vcpu_ioreq_completion() (without "handle" in the middle).



>>>> arch_ioreq_server_map_pages
>>>> arch_ioreq_server_unmap_pages
>>>> arch_ioreq_server_enable
>>>> arch_ioreq_server_disable
>>>> arch_ioreq_server_destroy
>>>> arch_ioreq_server_map_mem_type
>>>> arch_ioreq_server_destroy_all
>>>> arch_ioreq_server_get_type_addr
>>>> arch_ioreq_init
>>> Assuming this is the arch hook of the similarly named function
>>> further up, a similar adjustment may then be wanted here.
>> Yes.
>>
>>
>>>> domain_has_ioreq_server
>>>>
>>>>
>>>> 3. Local (existing) in common ioreq.c:
>>>> hvm_alloc_ioreq_mfn               -> ioreq_alloc_mfn
>>>> hvm_free_ioreq_mfn                -> ioreq_free_mfn
>>> These two are server functions, so should imo be ioreq_server_...().
>> ok, but ...
>>
>>
>>> However, if they're static (as they're now), no distinguishing
>>> prefix is strictly necessary, i.e. alloc_mfn() and free_mfn() may
>>> be fine. The two names may be too short for Paul's taste, though.
>>> Some similar shortening may be possible for some or all of the ones
>>
>> ... In general I would be fine with any option. However, if we apply the
>> shortening rule everywhere,
>> we are going to end up with single-word function names (enable, init, etc.).
>> So I would prefer to leave the locals as they are (dropping the hvm
>> prefixes, of course, and
>> clarifying ioreq_server_alloc_mfn/ioreq_server_free_mfn).
>>
>> Paul, Jan what do you think?
> I prefer ioreq_server_alloc_mfn/ioreq_server_free_mfn. The problem with shortening is that function names become ambiguous within the source base and hence harder to find.

Got it.


Thank you

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:06:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34873.66129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFI7-0002FA-8A; Mon, 23 Nov 2020 17:06:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34873.66129; Mon, 23 Nov 2020 17:06:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFI7-0002F3-4m; Mon, 23 Nov 2020 17:06:15 +0000
Received: by outflank-mailman (input) for mailman id 34873;
 Mon, 23 Nov 2020 17:06:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ut/=E5=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1khFI5-0002Ey-UJ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:06:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2e3560cf-3647-426e-9b95-86125a1de11e;
 Mon, 23 Nov 2020 17:06:11 +0000 (UTC)
Received: from mail-qv1-f72.google.com (mail-qv1-f72.google.com
 [209.85.219.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-177-wZephkW_NCajj8fxvey0QQ-1; Mon, 23 Nov 2020 12:06:09 -0500
Received: by mail-qv1-f72.google.com with SMTP id t14so13427563qvc.13
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 09:06:09 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id o187sm10226153qkb.120.2020.11.23.09.06.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 23 Nov 2020 09:06:07 -0800 (PST)
X-Inumbo-ID: 2e3560cf-3647-426e-9b95-86125a1de11e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606151171;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ETdBmUCF01QVQJOZ+sQu9TUGQeVKu9xOEmE1E5gBvfU=;
	b=BtEofZ03By1sp6OyH8lSI6IT4Bb/Q48OsWV2XnBXmdgQcceGh9rKSaQV86czSHO4W4CJHn
	vUaUaH4n0l1WmtKl1NY4JjZQEtZdHR6F0okj/kLdyZp+LnqjvTzaYPmjipu9+IS6RcbTNg
	lanzsvwsZXvmVS2IMlU/0SstZMlkjqs=
X-MC-Unique: wZephkW_NCajj8fxvey0QQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=ETdBmUCF01QVQJOZ+sQu9TUGQeVKu9xOEmE1E5gBvfU=;
        b=AFnjmI7l7Q/wflePjF28tRpUmuHqwFuAEMfgQf6zReXwgPd6W0bGksYGHW8WPPF+FD
         lseisESDouCSBLmWzJEP7KSW1IYcn7xaPgMPN2E9a8unX4WJdnkfa13Zcg8fl14lADYC
         /ABwwjn5kqbg21sWbeLtcAZTkH2OaIyMhdCPJgH9WgI5cucL1k6EgOAHBLErjVLgfLux
         h8mKLHjymDuLdrxp6Hd+UmzwaWeaB15VmqLriZdbNhc7EkU+6NT9FkoH01Ary5S+H7bM
         IOI9Im857Zu5NgFYlI+0MzQlkL08VoWW3v2MBF5Ue+mEP+4n8RXOJcrEyYY0poDy4gYC
         K5mQ==
X-Gm-Message-State: AOAM530UWmKadx6MQovM+uFYaF4CMwhi1iIjzE1JxEaSSlnU150yDQew
	MxxlXKUPJWRpecmbgAfN7FPpv6zPCBBPFLrnTOaR1aU5CrRXy1fnERJD+pQNrafugvx7Cr4YB2x
	Ih3sCdhTLTwameb6xAD8xIo6z0Kg=
X-Received: by 2002:ac8:5d53:: with SMTP id g19mr70868qtx.354.1606151168818;
        Mon, 23 Nov 2020 09:06:08 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwHR8oVpP3xv7xpCkK6lH4mawBfXgRI3GL2dEiLGp13/vfLrDKV7SBtsWnvpv2iFDtHltekRw==
X-Received: by 2002:ac8:5d53:: with SMTP id g19mr70839qtx.354.1606151168572;
        Mon, 23 Nov 2020 09:06:08 -0800 (PST)
Subject: Re: [RFC] MAINTAINERS tag for cleanup robot
To: Joe Perches <joe@perches.com>, clang-built-linux@googlegroups.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, tboot-devel@lists.sourceforge.net,
 kvm@vger.kernel.org, linux-crypto@vger.kernel.org,
 linux-acpi@vger.kernel.org, devel@acpica.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 netdev@vger.kernel.org, linux-media@vger.kernel.org,
 MPT-FusionLinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
 linux-wireless@vger.kernel.org, ibm-acpi-devel@lists.sourceforge.net,
 platform-driver-x86@vger.kernel.org, linux-usb@vger.kernel.org,
 linux-omap@vger.kernel.org, linux-fbdev@vger.kernel.org,
 ecryptfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 cluster-devel@redhat.com, linux-mtd@lists.infradead.org,
 keyrings@vger.kernel.org, netfilter-devel@vger.kernel.org,
 coreteam@netfilter.org, alsa-devel@alsa-project.org, bpf@vger.kernel.org,
 linux-bluetooth@vger.kernel.org, linux-nfs@vger.kernel.org,
 patches@opensource.cirrus.com
References: <20201121165058.1644182-1-trix@redhat.com>
 <2105f0c05e9eae8bee8e17dcc5314474b3c0bc73.camel@perches.com>
 <6e8c1926-4209-8f10-d0f9-72c875a85a88@redhat.com>
 <859bae8ddae3238116824192f6ddf1c91a381913.camel@perches.com>
From: Tom Rix <trix@redhat.com>
Message-ID: <88eeba27-ee36-df63-8cd9-3cccbe5e0850@redhat.com>
Date: Mon, 23 Nov 2020 09:06:03 -0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <859bae8ddae3238116824192f6ddf1c91a381913.camel@perches.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 11/22/20 10:22 AM, Joe Perches wrote:
> On Sun, 2020-11-22 at 08:33 -0800, Tom Rix wrote:
>> On 11/21/20 9:10 AM, Joe Perches wrote:
>>> On Sat, 2020-11-21 at 08:50 -0800, trix@redhat.com wrote:
>>>> A difficult part of automating commits is composing the subsystem
>>>> preamble in the commit log.  For the ongoing effort of a fixer producing
>>>> one or two fixes per release, the use of 'treewide:' does not seem appropriate.
>>>>
>>>> It would be better if the normal prefix were used.  Unfortunately, normal is
>>>> not consistent across the tree.
>>>>
>>>> So I am looking for comments on adding a new tag to the MAINTAINERS file
>>>>
>>>> 	D: Commit subsystem prefix
>>>>
>>>> ex/ for FPGA DFL DRIVERS
>>>>
>>>> 	D: fpga: dfl:
>>> I'm all for it.  Good luck with the effort.  It's not completely trivial.
>>>
>>> From a decade ago:
>>>
>>> https://lore.kernel.org/lkml/1289919077.28741.50.camel@Joe-Laptop/
>>>
>>> (and that thread started with extra semicolon patches too)
>> Reading the history, how about this:
>>
>> get_maintainer.pl outputs a single prefix; if multiple files have the
>> same prefix it works, and if they don't it's an error.
>>
>> Another script, 'commit_one_file.sh', calls get_maintainer.pl
>> to get the prefix and is called by run-clang-tools.py to get the
>> fixer-specific message.
> It's not about whether the script used is get_maintainer or any other script;
> the question is really whether the MAINTAINERS file is the appropriate place
> to store per-subsystem patch specific prefixes.
>
> It is.
>
> Then the question should be how the forms are described and what the
> inheritance priority is.  My preference would be a default of
> inheriting the parent base and adding basename(subsystem dirname).
>
> Commit history seems to have standardized on using colons as the separator
> between the commit prefix and the subject.
>
> A good mechanism to explore how various subsystems have used prefixes in
> the past might be something like:
>
> $ git log --no-merges --pretty='%s' -<commit_count> <subsystem_path> | \
>   perl -n -e 'print substr($_, 0, rindex($_, ":") + 1) . "\n";' | \
>   sort | uniq -c | sort -rn

Thanks, I have shamelessly stolen this line and limited the commits to the maintainer.

I will post something once the generation of the prefixes is done.

Tom



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:06:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34874.66140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFIF-0002HN-GD; Mon, 23 Nov 2020 17:06:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34874.66140; Mon, 23 Nov 2020 17:06:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFIF-0002H9-D3; Mon, 23 Nov 2020 17:06:23 +0000
Received: by outflank-mailman (input) for mailman id 34874;
 Mon, 23 Nov 2020 17:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/SOx=E5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khFIE-0002Gk-11
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:06:22 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd7b5ca9-072e-4126-9b2b-042ca67fce6c;
 Mon, 23 Nov 2020 17:06:20 +0000 (UTC)
X-Inumbo-ID: bd7b5ca9-072e-4126-9b2b-042ca67fce6c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606151180;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=hVzk91sE/ycAQyMp9sp9SsR7dFa6lWuJDB8t4pwhI0k=;
  b=R5MXi5FWts8T0d7QLOKlHIpUuiH29Xpi9JOOEIZTGE9uP6VWPhia2ly+
   f78xrLOSuIng8zXdLS2kb+dY5rHllIZPDb5l2Mb32WzCCVEYarHSFqb6c
   KsUckrKt4l2cwqP1QnIlu7WJrC/AliOs9IOMHpjHmnCBcoRDjE4JjL58f
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: k/RH6pOlc7qwYfdZdpvTXOmLD/L9pWCGcRuWRJ6POCh57kARWR8OmH77mHIlz11L/e6ckqB943
 CV1MsEoFb5j3YPIBIVKCtIBAgNfNrd36meS42Jz/L9O8Hub5peZvy2O0/4YgjUiE4PkviOp3uW
 m3YVN8co9gmEugxjmmFBr238SomsSzey12Ofdkdxo0b9/7hhByz/X9uK01aYMqwf4/NtvsU8Db
 4OdRePx6wZNVL6UCoMSKIK6Jz7ZJvE1b/GKR9IduIZY1xBG3XWbNgPbcMDwEUAXH8cVcqwyI0L
 nLA=
X-SBRS: None
X-MesageID: 32106154
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,363,1599537600"; 
   d="scan'208";a="32106154"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TXrTd1JsZEs0sM6HzVoMBwiwRxXGuk5OhhrxLMpNDptISylPd4VZtxyEecAdngTBc5Xq9w2PVe9+kj605hJ9xWedq+JYarlJevKVh5wakdE1qdvqahdqb0i6vDA9OHniK973nyzDevOKy7e65/kE/yyneII0pEU99+HBRXF4Ctl5hC588J5nlgbe6RrtfVU/2wgwUWj2zFLPyg5eBySvo7pyKWGq/lUm7W+x6qNLU/rO8DNNVM/9xPk2EMD3asaHti7HxSIflGPM+z/cMB5f3AaAcj6H8EsjDobpmD+Uc3VBYh+5Q4WRsUX/lfYKQ2o+vHKa8CdBDQl1dv7TGkDnmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=asUtqppFQO1pDKssJig/X4Cohpfx++RKZRdzC55z5hk=;
 b=iwSPFmSyXPWvzSicgYzmngRj8jkzWhGap+e4JuolymSF07C7HDcY8ZC4iplLtKsItJZCm9YKAlILfIO3jqrJ5wvIDdptdNVZfGxz8E9lzNVFdqzGEINU53hN7zlsGm2uonWwTPC2lw1P0Wv1TOyHilueTKbGreDcnVJC5Z0Gc0VbgWzskBpsib9flEcXBLjzbjivL1dxoJPKZ0NgpIZfg8qfWMfohg6DH5hj0Dxm1Lc5/LLth+BB8nN+cjGXoaXr3k9HCXJotwA0IbqtIrRJLYjwo5UQZlQggw2l5J6qELlMSm1MZNe+2/5qPRls9pcD/QTe7fIvhpDDOTWWwZegPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=asUtqppFQO1pDKssJig/X4Cohpfx++RKZRdzC55z5hk=;
 b=AjMI4uNBc/Cf83rnEMx3HRUAnOoTVkSOrjBiZLkrzU9BCyu5PSCK+5FF2ZE1DzgsL61loaXZKmjgTbmXsNX1mBhfuiRlViFLbWHqXkoNO739SmNv2k7qGr5UYphXqv2mWGR6RepmDGG413qGuEtghIDknOJP1NxSH8ORo/WQGwA=
Date: Mon, 23 Nov 2020 18:06:10 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
References: <20201120082855.5z4cibcd5djlwmgp@Air-de-Roger>
 <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201123143150.GG2520@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0019.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2c1a49a1-52be-4998-a676-08d88fd216d1
X-MS-TrafficTypeDiagnostic: DM6PR03MB3739:
X-Microsoft-Antispam-PRVS: <DM6PR03MB3739A2EBAE6AE6164C158CB68FFC0@DM6PR03MB3739.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: CkK/MF5qLUoOXFCsvTgxP+k5+SlxRXkU+sR37uylOlxCrb/eSJ1e4+3lc9A/DuAJywSqai7z5xrYn/E9PVH/DIwZcOIeLYFgiYvOfRwZhoTUj/eTJYn3N3LuRZzeIazYez85QzL9NXMq74ACmOQ99wnKvCxJCTAEn8FogRuonbpuX3fojUG5VZ1nmo+SYYTpo99Y4RISKNsoDU3QMtRXVE0pOiiSqbLraMmuo0sgeuEPChLroAKK8nNXvzK5yIufUlSXiymsGMWu5LitlxFhTRzc23eq7olQ3/XdbiYs9ds01GrlDRckf4kY5rTS3WFHvdj0H9DSz8HvAvVVSVyTG4K/0tdcBWQdEVhqzxvjuXi6h2hNI+SuTbkGnYbOB08+dX/K0PewULXTZarHqmYTvA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(366004)(39860400002)(136003)(346002)(376002)(9686003)(86362001)(83380400001)(316002)(6486002)(1076003)(6916009)(5660300002)(186003)(956004)(966005)(16526019)(478600001)(26005)(66476007)(66556008)(66946007)(4326008)(8936002)(8676002)(6496006)(2906002)(6666004)(85182001)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: HVwc/PFgxsHRGsKVENhFNvCZh2Dq1MmcGxW3V0Gb7KYEeCK5LlUkVUxUUIIxLpWO29BltL+1KhWFNF+6QlnOq2ZAqhQvhxQudCu7Y2GY+gkiB7P1419i/z58cAUAe4Nkrb8LCo5OQmGaeuEaFkBpHbn7YoOgTGa43SmlUi8JjT/yXKq1WNNv84cGwLbmBaId7v7uqBu+7wM9p3ptZArGnkFxTGWme6ADhTjs14bYY1Hj4rPQFD4waOs9dyz5FIyi8VVcTozlEcxbL6/0+oHkprfABJbaW/qaODVbwFMtLslaKdmNRutWHDJB1/OWhKKVMcIoDrY9PtgsuuapnD5WfEStZbfczeTMMPsii69XtvOfV/owEREr0kWxdZgx2lWjBYuK0oF38meN2GW9MF2L/hOINUrggyWvhvnTT/OeWvq8WehWQpLUqc8naTRMjEH8+5Ccg4Ou+yEk8ST1jw8iiuHy3tgXglXXsZ/tWKvC5tWHFQ9bSp88bga7Sc0BRZ6V5cMSE62HRZS3RIcR/ciHCZeaazZXpMUlJpgMxvvxQL6R/uQe8RY7N6zrdzIZbznDtfDhLKfyufI8o08IGKyvsTI25oPHkDHypTlgSeM6jWVg1cWibIjP7K6AjCpqKT5x+g9bWvYkWIuKLb6uoz/aI5iLJuSyvdHFjOAeGU+VHpihw3XLPq3qNw/yptr9Ig8zgPOjhogd9u7GwL1rDj0edgUINJ10zJSLovT7HIT9j4xlAef7kSmyswWdjuxUbtRfO9uyn6PzBEjhD6sjPJxqqpz0tyofJnjJx+b+XneoT0XVolp4W4h/HN7kj0ZRvT2wNUFsO5k1sTt+xvMVu4/cpfyJ/mix2EbLf/VvnwpsWA2Jt1PhZwRtGDqMdpnMIC0MAY/l0Gl+jWkDm+gNmVhMBQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 2c1a49a1-52be-4998-a676-08d88fd216d1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Nov 2020 17:06:16.3688
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U2gbw10tY0M3CyOq4sFQwhE45ybumTAMO9n54K6qVLjJ9d/00a69jVu9UGK8uEs282Ji8jWLPTE6vkQpIRGVaA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3739
X-OriginatorOrg: citrix.com

On Mon, Nov 23, 2020 at 03:31:50PM +0100, Manuel Bouyer wrote:
> On Mon, Nov 23, 2020 at 01:51:12PM +0100, Roger Pau Monné wrote:
> > Hm, yes, it's quite weird. Do you know whether a NetBSD kernel can be
> > multibooted from pxelinux with Xen? I would like to see if I can
> > reproduce this myself.
> 
> Yes, if Xen+linux can boot, Xen+netbsd should boot too.
> In a previous mail I wrote:
> In case it helps, I put my Xen and netbsd kernels at
> http://www-soc.lip6.fr/~bouyer/netbsd-dom0-pvh/
> I boot it from the NetBSD boot loader with:
> menu=Boot Xen PVH:load /netbsd-test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug
> I guess with grub this would be
> kernel /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug
> module /netbsd-test console=com0 root=dk0 -vx
> 
> (yes, com2 for xen and com0 for netbsd, that's not a bug :)
> You can enter the NetBSD debugger with
> +++++
> you can then enter commands, like
> sh ev /i
> to see the interrupt counters
> 
> > 
> > I have the following patch also which will print a warning message
> > when GSI 34 is injected from hardware or when Xen performs an EOI
> > (either from a time out or when reacting to a guest one). I would
> > expect at least the interrupt injection one to trigger together with
> > the existing message.
> 
> It's quite verbose. I put the full log at
> http://www-soc.lip6.fr/~bouyer/xen-log4.txt

OK, I'm afraid this is likely too verbose and messes with the timings.

I've been looking (again) into the code, and I found something weird
that I think could be related to the issue you are seeing, but I
haven't yet managed to boot the provided NetBSD kernel in order to
check whether it solves the issue (or even whether I'm able to
reproduce it). Would you mind giving the patch below a try?

Thanks, Roger.
---8<---
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..ebd6c8e933 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -174,7 +174,6 @@ static void pt_irq_time_out(void *data)
          * In the identity mapped case the EOI can also be done now, this way
          * the iteration over the list of domain pirqs is avoided.
          */
-        hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
         spin_unlock(&irq_map->dom->event_lock);



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:08:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:08:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34886.66152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFK7-0002Wj-1b; Mon, 23 Nov 2020 17:08:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34886.66152; Mon, 23 Nov 2020 17:08:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFK6-0002Wc-UZ; Mon, 23 Nov 2020 17:08:18 +0000
Received: by outflank-mailman (input) for mailman id 34886;
 Mon, 23 Nov 2020 17:08:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khFK5-0002WW-Sd
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:08:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khFK5-0005nQ-RE
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:08:17 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khFK5-0005xo-QS
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:08:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1khFJx-0004JM-Rg; Mon, 23 Nov 2020 17:08:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=s7qzcMd4ZKa+8U82fxWH/zht3/KC7larJAj01W7JFbs=; b=o9FNcy3zazJrNTUvnc9Z49IdSe
	yY/TD+EeMmZf7VbLH2a3YFcjwV64CyWXFi0O/1+bj3WGwNnZthf81Zx5VrHueH9EfYHEjDOhrgtcd
	L+YL+KtRVu5DeAMUQcO9BeB77F8ZWmE46VV5O09J2njl9dgNj+2nnGmkOgL/G/cjP0ks=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24507.60537.640007.567348@mariner.uk.xensource.com>
Date: Mon, 23 Nov 2020 17:08:09 +0000
To: George Dunlap <george.dunlap@citrix.com>,
    Juergen Gross <jgross@suse.com>,
    Wei Liu <wl@xen.org>,
    Paul Durrant <paul@xen.org>
Cc: <xen-devel@lists.xenproject.org>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Roger Pau Monne <roger.pau@citrix.com>,
    Stefano Stabellini <sstabellini@kernel.org>,
    "Julien  Grall" <julien@xen.org>
Subject: Re: [PATCH] MAINTINERS: Propose Ian Jackson as new release manager
In-Reply-To: <20201123160400.1273386-1-george.dunlap@citrix.com>
References: <20201123160400.1273386-1-george.dunlap@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

George Dunlap writes ("[PATCH] MAINTINERS: Propose Ian Jackson as new release manager"):
> Ian Jackson has agreed to be the release manager for 4.15.  Signify
> this by giving him maintainership over CHANGELOG.md.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Obviously that signifies my consent but I think it needs more acks.

Wei, Juergen, Paul, I think I am likely to ask you some questions.
Any tips etc would be welcome.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34896.66168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFPB-0003QS-N0; Mon, 23 Nov 2020 17:13:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34896.66168; Mon, 23 Nov 2020 17:13:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFPB-0003QL-Jd; Mon, 23 Nov 2020 17:13:33 +0000
Received: by outflank-mailman (input) for mailman id 34896;
 Mon, 23 Nov 2020 17:13:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khFPA-0003QG-TY
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:13:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khFP9-0005uO-Ps; Mon, 23 Nov 2020 17:13:31 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khFP9-0006TX-Cu; Mon, 23 Nov 2020 17:13:31 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XrE+ALSM1RWKwNoUwriGqCl0llXfg3e+F8zsvRlu8YI=; b=cd6VKHUkWSJN5qqXYp76pntEaN
	eFh0+IvhUXnFL+SifmKBqwLCyx5wYlM8/FfGctTjHZWeGc493b9yEa8s1WcV978zszPZWlWx1p9Aj
	ucmTRL7a1Go3yJ3FiQQIGUbLwBMN8zL0GSqsZK+8VP4MEB7++mfE1+fTRjxkvYcPbj04=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>, Jan Beulich <jbeulich@suse.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <6d2dae58-bfe5-596e-7850-a20ed54e1a81@xen.org>
Date: Mon, 23 Nov 2020 17:13:29 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Stefano,

On 20/11/2020 00:14, Stefano Stabellini wrote:
> On Thu, 19 Nov 2020, Julien Grall wrote:
>> On Thu, 19 Nov 2020, 23:38 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>>        On Thu, 19 Nov 2020, Rahul Singh wrote:
>>        > > On 19/11/2020 09:53, Jan Beulich wrote:
>>        > >> On 19.11.2020 10:21, Julien Grall wrote:
>>        > >>> Hi Jan,
>>        > >>>
>>        > >>> On 19/11/2020 09:05, Jan Beulich wrote:
>>        > >>>> On 18.11.2020 16:50, Julien Grall wrote:
>>        > >>>>> On 16/11/2020 12:25, Rahul Singh wrote:
>>        > >>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
>>        > >>>>>> is enabled for ARM, compilation error is observed for ARM architecture
>>        > >>>>>> because ARM platforms do not have full PCI support available.
>>        > >>>>>   >
>>        > >>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
>>        > >>>>>> ns16550 PCI for X86.
>>        > >>>>>>
>>        > >>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
>>        > >>>>>> disabled by default, once we have proper support for NS16550 PCI for
>>        > >>>>>> ARM we can enable it.
>>        > >>>>>>
>>        > >>>>>> No functional change.
>>        > >>>>>
>>        > >>>>> NIT: I would say "No functional change intended" to make clear this is
>>        > >>>>> an expectation and hopefully will be correct :).
>>        > >>>>>
>>        > >>>>> Regarding the commit message itself, I would suggest the following to
>>        > >>>>> address Jan's concern:
>>        > >>>>
>>        > >>>> While indeed this is a much better description, I continue to think
>>        > >>>> that the proposed Kconfig option is undesirable to have.
>>        > >>>
>>        > >>> I am yet to see an argument into why we should keep the PCI code
>>        > >>> compiled on Arm when there will be no-use....
>>        > >> Well, see my patch suppressing building of quite a part of it.
>>        > >
>>        > > I will let Rahul figuring out whether your patch series is sufficient to fix compilation issues (this is what matters right
>>        now).
>>        >
>>        > I just checked the compilation errors for ARM after enabling HAS_PCI on ARM. I am observing the same compilation errors
>>        that I observed previously.
>>        > There are two new errors related to struct uart_config and struct uart_param, as those structs are defined globally but only used under
>>        X86 flags.
>>        >
>>        > At top level:
>>        > ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
>>        >  static const struct ns16550_config __initconst uart_config[] =
>>        >                                                 ^~~~~~~~~~~
>>        > ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
>>        >  static const struct ns16550_config_param __initconst uart_param[] = {
>>        >
>>        >
>>        > >
>>        > >>>> Either,
>>        > >>>> following the patch I've just sent, truly x86-specific things (at
>>        > >>>> least as far as current state goes - if any of this was to be
>>        > >>>> re-used by a future port, suitable further abstraction may be
>>        > >>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
>>        > >>>> hooks), or the HAS_PCI_MSI proposal would at least want further
>>        > >>>> investigating as to its feasibility to address the issues at hand.
>>        > >>>
>>        > >>> I would be happy with CONFIG_X86, despite the fact that this is only
>>        > >>> deferring the problem.
>>        > >>>
>>        > >>> Regarding HAS_PCI_MSI, I don't really see the point of introducing given
>>        > >>> that we are not going to use NS16550 PCI on Arm in the forseeable
>>        > >>> future.
>>        > >> And I continue to fail to see what would guarantee this: As soon
>>        > >> as you can plug in such a card into an Arm system, people will
>>        > >> want to be able use it. That's why we had to add support for it
>>        > >> on x86, after all.
>>        > >
>>        > > Well, plug-in PCI cards on Arm have been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI
>>        support.
>>        > >
>>        > > This is probably because an SBSA-compliant server should always provide an SBSA UART (a cut-down version of the PL011). So why
>>        bother losing a PCI slot for yet another UART?
>>        > >
>>        > >> >> So why do we need a finer grained Kconfig?
>>        > >> Because most of the involved code is indeed MSI-related?
>>        > >
>>        > > Possibly, yet it would not be necessary if we don't want NS16550 PCI support...
>>        >
>>        > To fix the compilation errors on ARM, as per the discussion there are the options below; please suggest which one to use to proceed
>>        further.
>>        >
>>        > 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config option. This also helps non-x86 architectures in the future avoid
>>        the compilation errors
>>        > we are observing now when HAS_PCI is enabled.
>>        >
>>        > 2. Guard the remaining x86-specific code with CONFIG_X86 and introduce the new CONFIG_HAS_PCI_MSI option to fix the MSI-
>>        related compilation errors.
>>        > Once we have proper support for MSI and PCI for ARM (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig) I am not sure if
>>        NS16550 PCI will work out of the box on ARM. In that case, we might need to come back again to fix the NS16550 driver.
>>
>>
>>        It doesn't matter too much to me, let's just choose one option so that you
>>        get unblocked soon.
>>
>>        It looks like Jan prefers option 2) and both Julien and I are OK with
>>        it. So let's do 2). Jan, please confirm too :-)
>>
>>
>> Please don't put words in my mouth...
> 
> Sorry Julien, I misinterpreted one of your previous comments. Sometimes
> it is difficult to do things by email. It is good that you clarified as
> my goal was to reach an agreement.

No worries. I would like to apologize for being harsher than I would have 
wanted in my reply. The thread has had a lot of back and forth so far.

> 
> 
>> I think introducing HAS_PCI_MSI is short sighted.
>>
>> There are no clear benefits of it when NS16550 PCI support is not going to be enable in the foreseeable future.
> 
> I agree
> 
> 
>> I would be ok with moving everything under CONFIG_X86. IMHO this is still shortsighted, but at least we don't introduce a config that's not
>> going to help Arm or any other architecture to completely disable PCI support in NS16550.
> 
> So you are suggesting a new option:
> 
> 3. Guard the remaining x86 specific code *and* the MSI related
> compilation errors with CONFIG_X86
> 
> Is that right?

That's correct.

> My preference is actually option 1), but this series is already at v3 and
> I don't think this decision is as important as unblocking
> Rahul, so I am OK with the other alternatives too.

In order, my preferences are 1) 3) 2). AFAICT...

> 
> I tend to agree with you that 3) is better than 2) for the reasons you
> wrote above.

... this is the same order as yours. Although I probably have a stronger 
dislike for 2), because I feel it has been pushed for the wrong reasons 
(e.g. a matter of taste) so far.

My view on 2) can change if Jan provides enough insight into why one 
would want NS16550 PCI enabled by default on Arm but MSI disabled.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:14:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:14:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34901.66180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFPw-0003Wp-1T; Mon, 23 Nov 2020 17:14:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34901.66180; Mon, 23 Nov 2020 17:14:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFPv-0003Wi-UG; Mon, 23 Nov 2020 17:14:19 +0000
Received: by outflank-mailman (input) for mailman id 34901;
 Mon, 23 Nov 2020 17:14:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khFPu-0003Wc-8z
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:14:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khFPs-0005vF-Sn; Mon, 23 Nov 2020 17:14:16 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khFPs-0006Xa-Mt; Mon, 23 Nov 2020 17:14:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=sDQ1P3GMiyM1/UNVRYUxlXFHl7wGylJrDOUJKtg10Co=; b=W+gkknvNm70lhnAkWGMcmSPrfD
	QJiuqYqL62ycnRJxOyYSzqMwBcLtfoB0m40PGdSxoaCxR/vYSnNMZBvHGnu9Ne53p3TPZbFtpn2qh
	MQZR9i64gATiE1Gdrk99GFiQHuL71rHzrExFss4VvfDJq8Z9isMag2n+HVOEa7d2r00E=;
Subject: Re: [PATCH] MAINTINERS: Propose Ian Jackson as new release manager
To: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>
References: <20201123160400.1273386-1-george.dunlap@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4aa1cc9b-ca25-e72a-21b2-35fcb661d66d@xen.org>
Date: Mon, 23 Nov 2020 17:14:13 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201123160400.1273386-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi George,

NIT: s/MAINTINERS/MAINTAINERS/

On 23/11/2020 16:04, George Dunlap wrote:
> Ian Jackson has agreed to be the release manager for 4.15.  Signify
> this by giving him maintainership over CHANGELOG.md.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:40:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:40:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34917.66192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFoS-0005QB-2G; Mon, 23 Nov 2020 17:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34917.66192; Mon, 23 Nov 2020 17:39:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFoR-0005Q4-VM; Mon, 23 Nov 2020 17:39:39 +0000
Received: by outflank-mailman (input) for mailman id 34917;
 Mon, 23 Nov 2020 17:39:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MzqB=E5=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khFoQ-0005Pz-Pv
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:39:38 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c572b9d5-6aa0-4e8c-b6d4-b7a3602bc39a;
 Mon, 23 Nov 2020 17:39:37 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ANHdUO4021625
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 23 Nov 2020 18:39:31 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id A28602E9CAC; Mon, 23 Nov 2020 18:39:25 +0100 (MET)
X-Inumbo-ID: c572b9d5-6aa0-4e8c-b6d4-b7a3602bc39a
Date: Mon, 23 Nov 2020 18:39:25 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201123173925.GG4662@antioche.eu.org>
References: <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 23 Nov 2020 18:39:32 +0100 (MET)

On Mon, Nov 23, 2020 at 06:06:10PM +0100, Roger Pau Monné wrote:
> OK, I'm afraid this is likely too verbose and messes with the timings.
> 
> I've been looking (again) into the code, and I found something weird
> that I think could be related to the issue you are seeing, but I haven't
> managed to boot the provided NetBSD kernel in order to verify
> whether it solves the issue or not (or even whether I'm able to
> repro it). Would you mind giving the patch below a try?

With this, I get the same hang, but the Xen outputs don't wake up the interrupt
any more. The NetBSD counter shows only one interrupt for ioapic2 pin 2,
whereas I would normally have about 8 by the time of the hang.

So, now it looks like interrupts are blocked forever. At
http://www-soc.lip6.fr/~bouyer/xen-log5.txt
you'll find the output of the 'i' key.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34927.66221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtm-0006Kb-Km; Mon, 23 Nov 2020 17:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34927.66221; Mon, 23 Nov 2020 17:45:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtm-0006KL-B0; Mon, 23 Nov 2020 17:45:10 +0000
Received: by outflank-mailman (input) for mailman id 34927;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006Ih-8f
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0006YP-7D; Mon, 23 Nov 2020 17:45:07 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0000at-3n; Mon, 23 Nov 2020 17:45:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=yMHep/dhCDaK4WgXxVfnOxAemfKui/VNIp6TlUXURAA=; b=tApZNdSKetGwP2W9kVbtQQDEi
	czKBzJO6ayHtWf+bRcHZ3ZuCH37uUIlvjJ+sVnMwh9OtuPZ4k71j9Y8s+6V2yeeIfnUsNkPe7VtH4
	fEbSlNc5ywto+3xepNqtwbg7PJbkl98cO+VMm8OFKI4bDU+o6kHlZ5MPe1rwe/Yjyphhw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 02/23] libxl: make libxl__device_list() work correctly for LIBXL__DEVICE_KIND_PCI...
Date: Mon, 23 Nov 2020 17:44:42 +0000
Message-Id: <20201123174503.6800-3-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... devices.

Currently there is an assumption built into libxl__device_list() that device
backends are fully enumerated under the '/libxl' path in xenstore. This is
not the case for PCI backend devices, which are only properly enumerated
under '/local/domain/0/backend'.

This patch adds a new get_path() method to libxl__device_type to allow a
backend implementation (such as PCI) to specify the xenstore path where
devices are enumerated and modifies libxl__device_list() to use this method
if it is available. Also, if the get_num() method is defined then the
from_xenstore() method expects to be passed the backend path without the device
number concatenated, so this issue is also rectified.

Having made libxl__device_list() work correctly, this patch removes the
open-coded libxl_device_pci_list() in favour of an evaluation of the
LIBXL_DEFINE_DEVICE_LIST() macro. This has the side-effect of also defining
libxl_device_pci_list_free(), which will be used in subsequent patches.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v3:
 - New in v3 (replacing "libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices")
---
 tools/include/libxl.h             |  7 +++++
 tools/libs/light/libxl_device.c   | 66 +++++++++++++++++++++------------------
 tools/libs/light/libxl_internal.h |  2 ++
 tools/libs/light/libxl_pci.c      | 29 +++++------------
 4 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index fbe4c81ba5..ee52d3cf7e 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -452,6 +452,12 @@
 #define LIBXL_HAVE_CONFIG_PCIS 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_LIST_FREE indicates that the
+ * libxl_device_pci_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2321,6 +2327,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
 /*
  * Turns the current process into a backend device service daemon
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e081faf9a9..ac173a043d 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -2011,7 +2011,7 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
     void *r = NULL;
     void *list = NULL;
     void *item = NULL;
-    char *libxl_path;
+    char *path;
     char **dir = NULL;
     unsigned int ndirs = 0;
     unsigned int ndevs = 0;
@@ -2019,42 +2019,46 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
 
     *num = 0;
 
-    libxl_path = GCSPRINTF("%s/device/%s",
-                           libxl__xs_libxl_path(gc, domid),
-                           libxl__device_kind_to_string(dt->type));
-
-    dir = libxl__xs_directory(gc, XBT_NULL, libxl_path, &ndirs);
+    if (dt->get_path) {
+        rc = dt->get_path(gc, domid, &path);
+        if (rc) goto out;
+    } else {
+        path = GCSPRINTF("%s/device/%s",
+                         libxl__xs_libxl_path(gc, domid),
+                         libxl__device_kind_to_string(dt->type));
+    }
 
-    if (dir && ndirs) {
-        if (dt->get_num) {
-            if (ndirs != 1) {
-                LOGD(ERROR, domid, "multiple entries in %s\n", libxl_path);
-                rc = ERROR_FAIL;
-                goto out;
-            }
-            rc = dt->get_num(gc, GCSPRINTF("%s/%s", libxl_path, *dir), &ndevs);
-            if (rc) goto out;
-        } else {
+    if (dt->get_num) {
+        rc = dt->get_num(gc, path, &ndevs);
+        if (rc) goto out;
+    } else {
+        dir = libxl__xs_directory(gc, XBT_NULL, path, &ndirs);
+        if (dir && ndirs)
             ndevs = ndirs;
-        }
-        list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
-        item = list;
+    }
 
-        while (*num < ndevs) {
-            dt->init(item);
+    if (!ndevs)
+        return NULL;
 
-            if (dt->from_xenstore) {
-                int nr = dt->get_num ? *num : atoi(*dir);
-                char *device_libxl_path = GCSPRINTF("%s/%s", libxl_path, *dir);
-                rc = dt->from_xenstore(gc, device_libxl_path, nr, item);
-                if (rc) goto out;
-            }
+    list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
+    item = list;
 
-            item = (uint8_t *)item + dt->dev_elem_size;
-            ++(*num);
-            if (!dt->get_num)
-                ++dir;
+    while (*num < ndevs) {
+        dt->init(item);
+
+        if (dt->from_xenstore) {
+            int nr = dt->get_num ? *num : atoi(*dir);
+            char *device_path = dt->get_num ? path :
+                GCSPRINTF("%s/%d", path, nr);
+
+            rc = dt->from_xenstore(gc, device_path, nr, item);
+            if (rc) goto out;
         }
+
+        item = (uint8_t *)item + dt->dev_elem_size;
+        ++(*num);
+        if (!dt->get_num)
+            ++dir;
     }
 
     r = list;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 3e70ff639b..ecee61b541 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3917,6 +3917,7 @@ typedef int (*device_dm_needed_fn_t)(void *, unsigned);
 typedef void (*device_update_config_fn_t)(libxl__gc *, void *, void *);
 typedef int (*device_update_devid_fn_t)(libxl__gc *, uint32_t, void *);
 typedef int (*device_get_num_fn_t)(libxl__gc *, const char *, unsigned int *);
+typedef int (*device_get_path_fn_t)(libxl__gc *, uint32_t, char **);
 typedef int (*device_from_xenstore_fn_t)(libxl__gc *, const char *,
                                          libxl_devid, void *);
 typedef int (*device_set_xenstore_config_fn_t)(libxl__gc *, uint32_t, void *,
@@ -3941,6 +3942,7 @@ struct libxl__device_type {
     device_update_config_fn_t       update_config;
     device_update_devid_fn_t        update_devid;
     device_get_num_fn_t             get_num;
+    device_get_path_fn_t            get_path;
     device_from_xenstore_fn_t       from_xenstore;
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2ff1c64a31..9d44b28f0a 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -2393,29 +2393,13 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
     return rc;
 }
 
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
+static int libxl__device_pci_get_path(libxl__gc *gc, uint32_t domid,
+                                      char **path)
 {
-    GC_INIT(ctx);
-    char *be_path;
-    unsigned int n, i;
-    libxl_device_pci *pcis = NULL;
-
-    *num = 0;
-
-    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
-                                                LIBXL__DEVICE_KIND_PCI);
-    if (libxl__device_pci_get_num(gc, be_path, &n))
-        goto out;
+    *path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                              LIBXL__DEVICE_KIND_PCI);
 
-    pcis = calloc(n, sizeof(libxl_device_pci));
-
-    for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
-
-    *num = n;
-out:
-    GC_FREE;
-    return pcis;
+    return 0;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
@@ -2492,10 +2476,13 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
     return COMPARE_PCI(d1, d2);
 }
 
+LIBXL_DEFINE_DEVICE_LIST(pci)
+
 #define libxl__device_pci_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
+    .get_path = libxl__device_pci_get_path,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34928.66239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtn-0006Mg-H5; Mon, 23 Nov 2020 17:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34928.66239; Mon, 23 Nov 2020 17:45:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtn-0006MJ-4t; Mon, 23 Nov 2020 17:45:11 +0000
Received: by outflank-mailman (input) for mailman id 34928;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006Im-ML
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0006Yd-QW; Mon, 23 Nov 2020 17:45:07 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0000at-Nl; Mon, 23 Nov 2020 17:45:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=woMUqgZMx7At7jFsiX2KySgK2cVzSMaxkSnE1C583H4=; b=bm1X1qImy3nH5EeAFveIsjQaJ
	sZ8qwmFTdaYPnRnACqa10woviAAgF5m9V4hh3s/A1Mh+b32idFR6hJaZaTCoultxoVwmf+3SMeJIX
	Rl98+UfPR0EnOgQPJVNkRYF6BXamkPBvSAqGMl3uFNObvHXbNeDq0/MID764CzeSASloI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 05/23] libxl: s/detatched/detached in libxl_pci.c
Date: Mon, 23 Nov 2020 17:44:45 +0000
Message-Id: <20201123174503.6800-6-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Simple spelling correction. Purely cosmetic fix.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 50c96cbfa6..de617e95eb 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1864,7 +1864,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
 static void pci_remove_timeout(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
-static void pci_remove_detatched(libxl__egc *egc,
+static void pci_remove_detached(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
@@ -1978,7 +1978,7 @@ skip1:
 skip_irq:
     rc = 0;
 out_fail:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -2002,7 +2002,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del(libxl__egc *egc,
@@ -2028,7 +2028,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
@@ -2051,7 +2051,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
@@ -2067,7 +2067,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
@@ -2127,7 +2127,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     }
 
 out:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
@@ -2146,12 +2146,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
      * error */
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
-static void pci_remove_detatched(libxl__egc *egc,
-                                 pci_remove_state *prs,
-                                 int rc)
+static void pci_remove_detached(libxl__egc *egc,
+                                pci_remove_state *prs,
+                                int rc)
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34925.66204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtl-0006JN-OG; Mon, 23 Nov 2020 17:45:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34925.66204; Mon, 23 Nov 2020 17:45:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtl-0006JG-K9; Mon, 23 Nov 2020 17:45:09 +0000
Received: by outflank-mailman (input) for mailman id 34925;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006IX-44
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFti-0006YN-W4; Mon, 23 Nov 2020 17:45:06 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFti-0000at-QS; Mon, 23 Nov 2020 17:45:06 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=WvyItcKZ4OKbkkykEic1UVBZaut7cfBmGpJNtPhNblA=; b=H/GrDeC6izy5iTF0LTA5//vx5
	NMeAk5vfH1l/FnDHfbEhnGIvv3+MCg2+Go6zt561iTDrqYVq9XxXCw0zRdMRrnApMTSc/1AruD/qd
	GMPhLbnEUeIH0LqX9sz/Kv6aW+yfrl+eCUdQS8HshlhlMiovFc8sMkbaAA8IP/v0sKeqY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Mon, 23 Nov 2020 17:44:41 +0000
Message-Id: <20201123174503.6800-2-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The seemingly arbitrary mixture of 'pci' and 'pcidev' in libxl_pci.c is
confusing, and it also compromises the use of macros shared with other device
types. Indeed, it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
of this duality.

This patch purges 'pcidev' from the libxl code, allowing all uses of
DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
and hence allowing the former to be removed.

For consistency the xl and libs/util code is also modified, although in that
case the change is purely cosmetic.

NOTE: Some of the more gross formatting errors (such as lack of spaces after
      keywords) that came into context have been fixed in libxl_pci.c.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h             |  17 +-
 tools/libs/light/libxl_create.c   |   6 +-
 tools/libs/light/libxl_dm.c       |  18 +-
 tools/libs/light/libxl_internal.h |  45 ++-
 tools/libs/light/libxl_pci.c      | 582 +++++++++++++++++++-------------------
 tools/libs/light/libxl_types.idl  |   2 +-
 tools/libs/util/libxlu_pci.c      |  36 +--
 tools/xl/xl_parse.c               |  28 +-
 tools/xl/xl_pci.c                 |  68 ++---
 tools/xl/xl_sxp.c                 |  12 +-
 10 files changed, 409 insertions(+), 405 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446..fbe4c81ba5 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -445,6 +445,13 @@
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
 /*
+ * LIBXL_HAVE_CONFIG_PCIS indicates that the 'pcidevs' and 'num_pcidevs'
+ * fields in libxl_domain_config have been renamed to 'pcis' and 'num_pcis'
+ * respectively.
+ */
+#define LIBXL_HAVE_CONFIG_PCIS 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2300,15 +2307,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
 
 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
@@ -2352,8 +2359,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519..1f5052c520 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1100,7 +1100,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
         goto error_out;
     }
 
-    bool need_pt = d_config->num_pcidevs || d_config->num_dtdevs;
+    bool need_pt = d_config->num_pcis || d_config->num_dtdevs;
     if (c_info->passthrough == LIBXL_PASSTHROUGH_DEFAULT) {
         c_info->passthrough = need_pt
             ? LIBXL_PASSTHROUGH_ENABLED : LIBXL_PASSTHROUGH_DISABLED;
@@ -1141,7 +1141,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
      * assignment when PoD is enabled.
      */
     if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV &&
-        d_config->num_pcidevs && pod_enabled) {
+        d_config->num_pcis && pod_enabled) {
         ret = ERROR_INVAL;
         LOGD(ERROR, domid,
              "PCI device assignment for HVM guest failed due to PoD enabled");
@@ -1817,7 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__vtpm_devtype,
     &libxl__usbctrl_devtype,
     &libxl__usbdev_devtype,
-    &libxl__pcidev_devtype,
+    &libxl__pci_devtype,
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3da83259c0..8ebe1b60c9 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -442,7 +442,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
 
     /* Might not expose rdm. */
     if (strategy == LIBXL_RDM_RESERVE_STRATEGY_IGNORE &&
-        !d_config->num_pcidevs)
+        !d_config->num_pcis)
         return 0;
 
     /* Query all RDM entries in this platform */
@@ -469,13 +469,13 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     }
 
     /* Query RDM entries per-device */
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcidevs[i].domain;
-        bus = d_config->pcidevs[i].bus;
-        devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                          d_config->pcidevs[i].func);
+        seg = d_config->pcis[i].domain;
+        bus = d_config->pcis[i].bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].dev,
+                          d_config->pcis[i].func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
@@ -488,7 +488,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
         assert(xrdm);
 
         rc = libxl__device_pci_setdefault(gc, DOMID_INVALID,
-                                          &d_config->pcidevs[i], false);
+                                          &d_config->pcis[i], false);
         if (rc)
             goto out;
 
@@ -516,7 +516,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                      * global policy in this case.
                      */
                     d_config->rdms[j].policy
-                        = d_config->pcidevs[i].rdm_policy;
+                        = d_config->pcis[i].rdm_policy;
                     new = false;
                     break;
                 }
@@ -526,7 +526,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                 add_rdm_entry(gc, d_config,
                               pfn_to_paddr(xrdm[n].start_pfn),
                               pfn_to_paddr(xrdm[n].nr_pages),
-                              d_config->pcidevs[i].rdm_policy);
+                              d_config->pcis[i].rdm_policy);
         }
     }
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9b50..3e70ff639b 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1709,7 +1709,7 @@ _hidden int libxl__pci_topology_init(libxl__gc *gc,
 /* from libxl_pci */
 
 _hidden void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                                   libxl_device_pci *pcidev, bool starting,
+                                   libxl_device_pci *pci, bool starting,
                                    libxl__ao_device *aodev);
 _hidden void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                            libxl__multidev *);
@@ -3945,30 +3945,27 @@ struct libxl__device_type {
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
 
-#define DEFINE_DEVICE_TYPE_STRUCT_X(name, sname, kind, ...)                    \
-    const libxl__device_type libxl__ ## name ## _devtype = {                   \
-        .type          = LIBXL__DEVICE_KIND_ ## kind,                       \
-        .ptr_offset    = offsetof(libxl_domain_config, name ## s),             \
-        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),     \
-        .dev_elem_size = sizeof(libxl_device_ ## sname),                       \
-        .add           = libxl__add_ ## name ## s,                             \
-        .set_default   = (device_set_default_fn_t)                             \
-                         libxl__device_ ## sname ## _setdefault,               \
-        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name,   \
-        .init          = (device_init_fn_t)libxl_device_ ## sname ## _init,    \
-        .copy          = (device_copy_fn_t)libxl_device_ ## sname ## _copy,    \
-        .dispose       = (device_dispose_fn_t)                                 \
-                         libxl_device_ ## sname ## _dispose,                   \
-        .compare       = (device_compare_fn_t)                                 \
-                         libxl_device_ ## sname ## _compare,                   \
-        .update_devid  = (device_update_devid_fn_t)                            \
-                         libxl__device_ ## sname ## _update_devid,             \
-        __VA_ARGS__                                                            \
+#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                           \
+    const libxl__device_type libxl__ ## name ## _devtype = {                 \
+        .type          = LIBXL__DEVICE_KIND_ ## kind,                        \
+        .ptr_offset    = offsetof(libxl_domain_config, name ## s),           \
+        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),   \
+        .dev_elem_size = sizeof(libxl_device_ ## name),                      \
+        .add           = libxl__add_ ## name ## s,                           \
+        .set_default   = (device_set_default_fn_t)                           \
+                         libxl__device_ ## name ## _setdefault,              \
+        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name, \
+        .init          = (device_init_fn_t)libxl_device_ ## name ## _init,   \
+        .copy          = (device_copy_fn_t)libxl_device_ ## name ## _copy,   \
+        .dispose       = (device_dispose_fn_t)                               \
+                         libxl_device_ ## name ## _dispose,                  \
+        .compare       = (device_compare_fn_t)                               \
+                         libxl_device_ ## name ## _compare,                  \
+        .update_devid  = (device_update_devid_fn_t)                          \
+                         libxl__device_ ## name ## _update_devid,            \
+        __VA_ARGS__                                                          \
     }
 
-#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                             \
-    DEFINE_DEVICE_TYPE_STRUCT_X(name, name, kind, __VA_ARGS__)
-
 static inline void **libxl__device_type_get_ptr(
     const libxl__device_type *dt, const libxl_domain_config *d_config)
 {
@@ -3995,7 +3992,7 @@ extern const libxl__device_type libxl__nic_devtype;
 extern const libxl__device_type libxl__vtpm_devtype;
 extern const libxl__device_type libxl__usbctrl_devtype;
 extern const libxl__device_type libxl__usbdev_devtype;
-extern const libxl__device_type libxl__pcidev_devtype;
+extern const libxl__device_type libxl__pci_devtype;
 extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bc5843b137..2ff1c64a31 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,51 +25,51 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pcidev_encode_bdf(libxl_device_pci *pcidev)
+static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pcidev->domain << 16;
-    value |= (pcidev->bus & 0xff) << 8;
-    value |= (pcidev->dev & 0x1f) << 3;
-    value |= (pcidev->func & 0x7);
+    value = pci->domain << 16;
+    value |= (pci->bus & 0xff) << 8;
+    value |= (pci->dev & 0x1f) << 3;
+    value |= (pci->func & 0x7);
 
     return value;
 }
 
-static void pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                            unsigned int bus, unsigned int dev,
+                            unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
 }
 
 static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             flexarray_t *back,
                                             int num,
-                                            const libxl_device_pci *pcidev)
+                                            const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
-    if (pcidev->vdevfn)
-        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pcidev->vdevfn));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    if (pci->vdevfn)
+        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
               GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pcidev->msitranslate, pcidev->power_mgmt,
-                             pcidev->permissive));
+                             pci->msitranslate, pci->power_mgmt,
+                             pci->permissive));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
-static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
-                                      const libxl_device_pci *pcidev,
-                                      libxl__device *device)
+static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
+                                   const libxl_device_pci *pci,
+                                   libxl__device *device)
 {
     device->backend_devid = 0;
     device->backend_domid = 0;
@@ -80,7 +80,7 @@ static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
 }
 
 static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pcidev,
+                                     const libxl_device_pci *pci,
                                      int num)
 {
     flexarray_t *front = NULL;
@@ -94,15 +94,15 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
     LOGD(DEBUG, domid, "Creating pci backend");
 
     /* add pci device */
-    libxl__device_from_pcidev(gc, domid, pcidev, &device);
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
     flexarray_append_pair(back, "online", "1");
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pcidev++)
-        libxl_create_pci_backend_device(gc, back, i, pcidev);
+    for (i = 0; i < num; i++, pci++)
+        libxl_create_pci_backend_device(gc, back, i, pci);
 
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
@@ -116,7 +116,7 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           uint32_t domid,
-                                          const libxl_device_pci *pcidev,
+                                          const libxl_device_pci *pci,
                                           bool starting)
 {
     flexarray_t *back;
@@ -136,7 +136,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pcidev, 1);
+        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -151,7 +151,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
     num = atoi(num_devs);
-    libxl_create_pci_backend_device(gc, back, num, pcidev);
+    libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
     if (!starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
@@ -170,8 +170,8 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
-        device_add_domain_config(gc, &d_config, &libxl__pcidev_devtype,
-                                 pcidev);
+        device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
+                                 pci);
 
         rc = libxl__dm_check_start(gc, &d_config, domid);
         if (rc) goto out;
@@ -201,7 +201,7 @@ out:
     return rc;
 }
 
-static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev)
+static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *be_path, *num_devs_path, *num_devs, *xsdev, *tmp, *tmppath;
@@ -231,8 +231,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pcidev->domain && bus == pcidev->bus &&
-            pcidev->dev == dev && pcidev->func == func) {
+        if (domain == pci->domain && bus == pci->bus &&
+            pci->dev == dev && pci->func == func) {
             break;
         }
     }
@@ -350,7 +350,7 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
                     *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
                     if (*list == NULL)
                         return ERROR_NOMEM;
-                    pcidev_struct_fill(*list + *num, dom, bus, dev, func, 0);
+                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
                     (*num)++;
                 }
             }
@@ -361,8 +361,8 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
     return 0;
 }
 
-static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
-                       int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
+                           int dom, int bus, int dev, int func)
 {
     int i;
 
@@ -383,7 +383,7 @@ static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
 static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pcidev)
+                           libxl_device_pci *pci)
 {
     int rc, fd;
     char *buf;
@@ -394,8 +394,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus,
-                    pcidev->dev, pcidev->func);
+    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
+                    pci->dev, pci->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -411,7 +411,7 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new, *assigned;
     struct dirent *de;
     DIR *dir;
     int r, num_assigned;
@@ -436,40 +436,40 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pcidev_in_array(assigned, num_assigned, dom, bus, dev, func))
+        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
             continue;
 
-        new = realloc(pcidevs, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcidevs = new;
-        new = pcidevs + *num;
+        pcis = new;
+        new = pcis + *num;
 
         memset(new, 0, sizeof(*new));
-        pcidev_struct_fill(new, dom, bus, dev, func, 0);
+        pci_struct_fill(new, dom, bus, dev, func, 0);
         (*num)++;
     }
 
     closedir(dir);
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func);
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -483,7 +483,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pcidev) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -495,11 +495,11 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
     return 0;
 }
 
-static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -507,7 +507,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -515,18 +515,18 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_vendor;
 }
 
-static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -534,7 +534,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -542,25 +542,25 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_device;
 }
 
-static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                     pci->domain, pci->bus, pci->dev, pci->func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -569,7 +569,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
     }
 
@@ -588,16 +588,16 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     uint16_t pt_vendor, pt_device;
     unsigned long class;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
-        pt_vendor = sysfs_dev_get_vendor(gc, pcidev);
-        pt_device = sysfs_dev_get_device(gc, pcidev);
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
+        libxl_device_pci *pci = &d_config->pcis[i];
+        pt_vendor = sysfs_dev_get_vendor(gc, pci);
+        pt_device = sysfs_dev_get_device(gc, pci);
 
         if (pt_vendor == 0xffff || pt_device == 0xffff ||
             pt_vendor != 0x8086)
             continue;
 
-        if (sysfs_dev_get_class(gc, pcidev, &class))
+        if (sysfs_dev_get_class(gc, pci, &class))
             continue;
         if (class == 0x030000)
             return true;
@@ -621,8 +621,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
+/* Scan through /sys/.../pciback/slots looking for pci's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
 {
     FILE *f;
     int rc = 0;
@@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
 
-    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if(dom == pcidev->domain
-           && bus == pcidev->bus
-           && dev == pcidev->dev
-           && func == pcidev->func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
+        if (dom == pci->domain
+            && bus == pci->bus
+            && dev == pci->dev
+            && func == pci->func) {
             rc = 1;
             goto out;
         }
@@ -649,7 +649,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
     int rc;
@@ -665,8 +665,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pcidev->domain, pcidev->bus,
-                      pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus,
+                      pci->dev, pci->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -677,40 +677,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
 {
     int rc;
 
-    if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcidev) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pcidev) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -721,49 +721,49 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 #define PCIBACK_INFO_PATH "/libxl/pciback"
 
 static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             char *driver_path)
 {
     char *path;
 
     path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pcidev->domain,
-                     pcidev->bus,
-                     pcidev->dev,
-                     pcidev->func);
+                     pci->domain,
+                     pci->bus,
+                     pci->dev,
+                     pci->func);
     if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
         LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
     }
 }
 
 static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     return libxl__xs_read(gc, XBT_NULL,
                           GCSPRINTF(
                            PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func));
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func));
 }
 
 static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL,
           GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pcidev->domain,
-                    pcidev->bus,
-                    pcidev->dev,
-                    pcidev->func) );
+                    pci->domain,
+                    pci->bus,
+                    pci->dev,
+                    pci->func) );
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -773,10 +773,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pcidev->domain;
-    bus = pcidev->bus;
-    dev = pcidev->dev;
-    func = pcidev->func;
+    dom = pci->domain;
+    bus = pci->bus;
+    dev = pci->dev;
+    func = pci->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -786,7 +786,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pcidev);
+    rc = pciback_dev_is_assigned(gc, pci);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -796,7 +796,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pcidev, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -805,9 +805,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pcidev, driver_path);
+            pci_assignable_driver_path_write(gc, pci, driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pcidev)) != NULL ) {
+                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,10 +815,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pcidev);
+        pci_assignable_driver_path_remove(gc, pci);
     }
 
-    if ( pciback_dev_assign(gc, pcidev) ) {
+    if ( pciback_dev_assign(gc, pci) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -829,7 +829,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -840,7 +840,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pcidev,
+                                               libxl_device_pci *pci,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -848,24 +848,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcidev->domain, pcidev->bus,
-            pcidev->dev, pcidev->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc=pciback_dev_is_assigned(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pcidev);
+        pciback_dev_unassign(gc, pci);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pcidev);
+    driver_path = pci_assignable_driver_path_read(gc, pci);
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -873,12 +873,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pcidev) < 0 ) {
+                                 pci) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pcidev);
+            pci_assignable_driver_path_remove(gc, pci);
         }
     } else {
         if ( rebind ) {
@@ -890,26 +890,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
 
     GC_FREE;
     return rc;
@@ -920,7 +920,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
  * driver. It also initialises a bit-mask of which function numbers are present
  * on that device.
 */
-static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsigned int *func_mask)
+static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigned int *func_mask)
 {
     struct dirent *de;
     DIR *dir;
@@ -940,11 +940,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pcidev->domain != dom )
+        if ( pci->domain != dom )
             continue;
-        if ( pcidev->bus != bus )
+        if ( pci->bus != bus )
             continue;
-        if ( pcidev->dev != dev )
+        if ( pci->dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -979,7 +979,7 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 }
 
 static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
-                                 libxl_device_pci *pcidev)
+                                 libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc = 0;
@@ -991,15 +991,15 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    if (pcidev->vdevfn) {
+    if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pcidev->domain, pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->vdevfn, pcidev->msitranslate,
-                         pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pcidev->domain,  pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->msitranslate, pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1010,7 +1010,7 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     if ( rc < 0 )
         LOGD(ERROR, domid, "qemu refused to add device: %s", vdevfn);
-    else if ( sscanf(vdevfn, "0x%x", &pcidev->vdevfn) != 1 ) {
+    else if ( sscanf(vdevfn, "0x%x", &pci->vdevfn) != 1 ) {
         LOGD(ERROR, domid, "wrong format for the vdevfn: '%s'", vdevfn);
         rc = -1;
     }
@@ -1054,7 +1054,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     int pci_domid;
 } pci_add_state;
 
@@ -1072,7 +1072,7 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pcidev,
+                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1082,7 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pcidev = pcidev;
+    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1128,7 +1128,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1136,7 +1136,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_add_xenstore(gc, domid, pcidev);
+    rc = qemu_pci_add_xenstore(gc, domid, pci);
 out:
     pci_add_dm_done(egc, pas, rc); /* must be last */
 }
@@ -1149,7 +1149,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1160,14 +1160,14 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
-    if (pcidev->vdevfn) {
+                           "%04x:%02x:%02x.%01x", pci->domain,
+                           pci->bus, pci->dev, pci->func);
+    if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
-                               PCI_SLOT(pcidev->vdevfn),
-                               PCI_FUNC(pcidev->vdevfn));
+                               PCI_SLOT(pci->vdevfn),
+                               PCI_FUNC(pci->vdevfn));
     }
     /*
      * Version of QEMU prior to the XSA-131 fix did not support
@@ -1179,7 +1179,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
      * set the permissive flag if it is true. Users of older QEMU
      * have no reason to set the flag so this is ok.
      */
-    if (pcidev->permissive)
+    if (pci->permissive)
         libxl__qmp_param_add_bool(gc, &args, "permissive", true);
 
     qmp->ao = pas->aodev->ao;
@@ -1230,7 +1230,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1251,7 +1251,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1283,7 +1283,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
              }
              dev_func = libxl__json_object_get_integer(o);
 
-             pcidev->vdevfn = PCI_DEVFN(dev_slot, dev_func);
+             pci->vdevfn = PCI_DEVFN(dev_slot, dev_func);
 
              rc = 0;
              goto out;
@@ -1331,7 +1331,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1342,8 +1342,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                           pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1383,8 +1383,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                                pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                                pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1411,9 +1411,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
-    if (pcidev->permissive) {
+    if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1422,14 +1422,14 @@ static void pci_add_dm_done(libxl__egc *egc,
 
 out_no_irq:
     if (!isstubdom) {
-        if (pcidev->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
+        if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
-        } else if (pcidev->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
+        } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
             LOGED(ERROR, domainid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1438,7 +1438,7 @@ out_no_irq:
     }
 
     if (!starting && !libxl_get_stubdom_id(CTX, domid))
-        rc = libxl__device_pci_add_xenstore(gc, domid, pcidev, starting);
+        rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
 out:
@@ -1493,7 +1493,7 @@ int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -1504,24 +1504,24 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_ADD;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_add(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_add(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
-static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
+static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
     for (i = 0; i < num; i++) {
-        if (pcidevs[i].domain == pcidev->domain &&
-            pcidevs[i].bus == pcidev->bus &&
-            pcidevs[i].dev == pcidev->dev &&
-            pcidevs[i].func == pcidev->func)
+        if (pcis[i].domain == pci->domain &&
+            pcis[i].bus == pci->bus &&
+            pcis[i].dev == pci->dev &&
+            pcis[i].func == pci->func)
             break;
     }
-    free(pcidevs);
+    free(pcis);
     return i != num;
 }
 
@@ -1535,7 +1535,7 @@ static void device_pci_add_done(libxl__egc *egc,
     pci_add_state *, int rc);
 
 void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                           libxl_device_pci *pcidev, bool starting,
+                           libxl_device_pci *pci, bool starting,
                            libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
@@ -1545,9 +1545,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pcidev to be used by callbacks */
-    aodev->device_config = pcidev;
-    aodev->device_type = &libxl__pcidev_devtype;
+    /* Store *pci to be used by callbacks */
+    aodev->device_config = pci;
+    aodev->device_type = &libxl__pci_devtype;
 
     GCNEW(pas);
     pas->aodev = aodev;
@@ -1556,29 +1556,29 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+                 pci->domain, pci->bus, pci->dev, pci->func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
         }
     }
 
-    rc = libxl__device_pci_setdefault(gc, domid, pcidev, !starting);
+    rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pcidev->seize && !pciback_dev_is_assigned(gc, pcidev)) {
-        rc = libxl__device_pci_assignable_add(gc, pcidev, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
+        rc = libxl__device_pci_assignable_add(gc, pci, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pcidev_assignable(ctx, pcidev)) {
+    if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1589,25 +1589,25 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
              "cannot determine if device is assigned, refusing to continue");
         goto out;
     }
-    if ( is_pcidev_in_array(assigned, num_assigned, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
+                         pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domid, "PCI device already attached to a domain");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pcidev_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
         return;
     }
 
@@ -1664,42 +1664,42 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     /* Convenience aliases */
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) goto out;
 
-    orig_vdev = pcidev->vdevfn & ~7U;
+    orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( !(pcidev->vdevfn >> 3) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( !(pci->vdevfn >> 3) ) {
             LOGD(ERROR, domid, "Must specify a v-slot for multi-function devices");
             rc = ERROR_INVAL;
             goto out;
         }
-        if ( pci_multifunction_check(gc, pcidev, &pfunc_mask) ) {
+        if ( pci_multifunction_check(gc, pci, &pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= pfunc_mask;
+        pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pcidev->func);
+        pfunc_mask = (1 << pci->func);
     }
 
-    for(rc = 0, i = 7; i >= 0; --i) {
+    for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
                 /* if not passing through multiple devices in a block make
                  * sure that virtual function number 0 is always used otherwise
                  * guest won't see the device
                  */
-                pcidev->vdevfn = orig_vdev;
+                pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pcidev, pas); /* must be last */
+            do_pci_add(egc, domid, pci, pas); /* must be last */
             return;
         }
     }
@@ -1715,13 +1715,13 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) {
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+             pci->domain, pci->bus, pci->dev, pci->func,
              rc);
     }
     aodev->rc = rc;
@@ -1733,16 +1733,16 @@ typedef struct {
     libxl__ao_device *outer_aodev;
     libxl_domain_config *d_config;
     libxl_domid domid;
-} add_pcidevs_state;
+} add_pcis_state;
 
-static void add_pcidevs_done(libxl__egc *, libxl__multidev *, int rc);
+static void add_pcis_done(libxl__egc *, libxl__multidev *, int rc);
 
-static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                               libxl_domain_config *d_config,
-                               libxl__multidev *multidev)
+static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
+                            libxl_domain_config *d_config,
+                            libxl__multidev *multidev)
 {
     AO_GC;
-    add_pcidevs_state *apds;
+    add_pcis_state *apds;
     int i;
 
     /* We need to start a new multidev in order to be able to execute
@@ -1752,23 +1752,23 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     apds->outer_aodev = libxl__multidev_prepare(multidev);
     apds->d_config = d_config;
     apds->domid = domid;
-    apds->multidev.callback = add_pcidevs_done;
+    apds->multidev.callback = add_pcis_done;
     libxl__multidev_begin(ao, &apds->multidev);
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         libxl__ao_device *aodev = libxl__multidev_prepare(&apds->multidev);
-        libxl__device_pci_add(egc, domid, &d_config->pcidevs[i],
+        libxl__device_pci_add(egc, domid, &d_config->pcis[i],
                               true, aodev);
     }
 
     libxl__multidev_prepared(egc, &apds->multidev, 0);
 }
 
-static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
+static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
                              int rc)
 {
     EGC_GC;
-    add_pcidevs_state *apds = CONTAINER_OF(multidev, *apds, multidev);
+    add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
 
     /* Convenience aliases */
     libxl_domain_config *d_config = apds->d_config;
@@ -1777,9 +1777,9 @@ static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
 
     if (rc) goto out;
 
-    if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
+    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
+        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
+                                       d_config->num_pcis);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
@@ -1792,7 +1792,7 @@ out:
 }
 
 static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
-                                    libxl_device_pci *pcidev, int force)
+                                    libxl_device_pci *pci, int force)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *state;
@@ -1804,12 +1804,12 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
+                     pci->bus, pci->dev, pci->func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
-    if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
+    if ( !force && (pci->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
         if (libxl__wait_for_device_model_deprecated(gc, domid, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
@@ -1830,7 +1830,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1844,7 +1844,7 @@ typedef struct pci_remove_state {
 } pci_remove_state;
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
-    uint32_t domid, libxl_device_pci *pcidev, bool force,
+    uint32_t domid, libxl_device_pci *pci, bool force,
     libxl__ao_device *aodev);
 static void device_pci_remove_common_next(libxl__egc *egc,
     pci_remove_state *prs, int rc);
@@ -1869,7 +1869,7 @@ static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
 static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pcidev, int force,
+                          libxl_device_pci *pci, int force,
                           pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
@@ -1887,8 +1887,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl__ptr_add(gc, assigned);
 
     rc = ERROR_INVAL;
-    if ( !is_pcidev_in_array(assigned, num, pcidev->domain,
-                      pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( !is_pci_in_array(assigned, num, pci->domain,
+                          pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1917,8 +1917,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                                     pcidev->bus, pcidev->dev, pcidev->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                                     pci->bus, pci->dev, pci->func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1953,8 +1953,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                               pcidev->bus, pcidev->dev, pcidev->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                               pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1988,7 +1988,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1996,7 +1996,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_remove_xenstore(gc, domid, pcidev, prs->force);
+    rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
     pci_remove_detatched(egc, prs, rc);
@@ -2010,7 +2010,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2018,7 +2018,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2080,14 +2080,14 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     if (rc) goto out;
 
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2135,10 +2135,10 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2156,7 +2156,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2170,30 +2170,30 @@ static void pci_remove_detatched(libxl__egc *egc,
     isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
 
     /* don't do multiple resets while some functions are still passed through */
-    if ( (pcidev->vdevfn & 0x7) == 0 ) {
-        libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    if ((pci->vdevfn & 0x7) == 0) {
+        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
         libxl__ao_device *const stubdom_aodev = &prs->stubdom_aodev;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
 
         libxl__prepare_ao_device(ao, stubdom_aodev);
         stubdom_aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
         stubdom_aodev->callback = pci_remove_stubdom_done;
         stubdom_aodev->update_json = prs->aodev->update_json;
-        libxl__device_pci_remove_common(egc, stubdomid, pcidev_s,
+        libxl__device_pci_remove_common(egc, stubdomid, pci_s,
                                         prs->force, stubdom_aodev);
         return;
     }
@@ -2219,14 +2219,14 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
                                             uint32_t domid,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             bool force,
                                             libxl__ao_device *aodev)
 {
@@ -2237,7 +2237,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pcidev = pcidev;
+    prs->pci = pci;
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2247,16 +2247,16 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
-    prs->orig_vdev = pcidev->vdevfn & ~7U;
+    prs->orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( pci_multifunction_check(gc, pcidev, &prs->pfunc_mask) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( pci_multifunction_check(gc, pci, &prs->pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= prs->pfunc_mask;
-    }else{
-        prs->pfunc_mask = (1 << pcidev->func);
+        pci->vfunc_mask &= prs->pfunc_mask;
+    } else {
+        prs->pfunc_mask = (1 << pci->func);
     }
 
     rc = 0;
@@ -2273,7 +2273,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2284,13 +2284,13 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         const int i = prs->next_func;
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
-                pcidev->vdevfn = orig_vdev;
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
+                pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pcidev, prs->force, prs);
+            do_pci_remove(egc, domid, pci, prs->force, prs);
             return;
         }
     }
@@ -2306,7 +2306,7 @@ out:
 }
 
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
 
 {
@@ -2318,12 +2318,12 @@ int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -2334,7 +2334,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, true, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, true, aodev);
     return AO_INPROGRESS;
 }
 
@@ -2353,7 +2353,7 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
     if (s)
         vdevfn = strtol(s, (char **) NULL, 16);
 
-    pcidev_struct_fill(pci, domain, bus, dev, func, vdevfn);
+    pci_struct_fill(pci, domain, bus, dev, func, vdevfn);
 
     s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/opts-%d", be_path, nr));
     if (s) {
@@ -2398,7 +2398,7 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     GC_INIT(ctx);
     char *be_path;
     unsigned int n, i;
-    libxl_device_pci *pcidevs = NULL;
+    libxl_device_pci *pcis = NULL;
 
     *num = 0;
 
@@ -2407,28 +2407,28 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     if (libxl__device_pci_get_num(gc, be_path, &n))
         goto out;
 
-    pcidevs = calloc(n, sizeof(libxl_device_pci));
+    pcis = calloc(n, sizeof(libxl_device_pci));
 
     for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcidevs + i);
+        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
 
     *num = n;
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
     STATE_AO_GC(multidev->ao);
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(CTX, domid, &num);
-    if ( pcidevs == NULL )
+    pcis = libxl_device_pci_list(CTX, domid, &num);
+    if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcidevs);
+    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2436,7 +2436,7 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
          * devices by the time we even get here!
          */
         libxl__ao_device *aodev = libxl__multidev_prepare(multidev);
-        libxl__device_pci_remove_common(egc, domid, pcidevs + i, true,
+        libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
 }
@@ -2449,13 +2449,13 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
     if (!libxl_defbool_val(d_config->b_info.u.hvm.gfx_passthru))
         return 0;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
         uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
         uint32_t stubdom_domid;
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
+        libxl_device_pci *pci = &d_config->pcis[i];
         unsigned long pci_device_class;
 
-        if (sysfs_dev_get_class(gc, pcidev, &pci_device_class))
+        if (sysfs_dev_get_class(gc, pci, &pci_device_class))
             continue;
         if (pci_device_class != 0x030000) /* VGA class */
             continue;
@@ -2494,7 +2494,7 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
 
 #define libxl__device_pci_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT_X(pcidev, pci, PCI,
+DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f399..20f8dd7cfa 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -940,7 +940,7 @@ libxl_domain_config = Struct("domain_config", [
 
     ("disks", Array(libxl_device_disk, "num_disks")),
     ("nics", Array(libxl_device_nic, "num_nics")),
-    ("pcidevs", Array(libxl_device_pci, "num_pcidevs")),
+    ("pcis", Array(libxl_device_pci, "num_pcis")),
     ("rdms", Array(libxl_device_rdm, "num_rdms")),
     ("dtdevs", Array(libxl_device_dtdev, "num_dtdevs")),
     ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 12fc0b3a7f..1d38fffce3 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -23,15 +23,15 @@ static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
     return 0;
 }
 
-static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                           unsigned int bus, unsigned int dev,
+                           unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
     return 0;
 }
 
@@ -47,7 +47,7 @@ static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
 #define STATE_RDM_STRATEGY      10
 #define STATE_RESERVE_POLICY    11
 #define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str)
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
 {
     unsigned state = STATE_DOMAIN;
     unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
@@ -110,11 +110,11 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 }
                 *ptr = '\0';
                 if ( !strcmp(tok, "*") ) {
-                    pcidev->vfunc_mask = LIBXL_PCI_FUNC_ALL;
+                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
                 }else{
                     if ( hex_convert(tok, &func, 0x7) )
                         goto parse_error;
-                    pcidev->vfunc_mask = (1 << 0);
+                    pci->vfunc_mask = (1 << 0);
                 }
                 tok = ptr + 1;
             }
@@ -141,18 +141,18 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
                 *ptr = '\0';
                 if ( !strcmp(optkey, "msitranslate") ) {
-                    pcidev->msitranslate = atoi(tok);
+                    pci->msitranslate = atoi(tok);
                 }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pcidev->power_mgmt = atoi(tok);
+                    pci->power_mgmt = atoi(tok);
                 }else if ( !strcmp(optkey, "permissive") ) {
-                    pcidev->permissive = atoi(tok);
+                    pci->permissive = atoi(tok);
                 }else if ( !strcmp(optkey, "seize") ) {
-                    pcidev->seize = atoi(tok);
+                    pci->seize = atoi(tok);
                 } else if (!strcmp(optkey, "rdm_policy")) {
                     if (!strcmp(tok, "strict")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
                     } else if (!strcmp(tok, "relaxed")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
                     } else {
                         XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
                                           " policy: 'strict' or 'relaxed'.",
@@ -175,7 +175,7 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
     assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
 
     /* Just a pretty way to fill in the values */
-    pcidev_struct_fill(pcidev, dom, bus, dev, func, vslot << 3);
+    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
 
     free(buf2);
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c..0765780d9f 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1470,24 +1470,24 @@ void parse_config_data(const char *config_source,
     }
 
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
+        d_config->num_pcis = 0;
+        d_config->pcis = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
-            libxl_device_pci *pcidev;
-
-            pcidev = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
-                                               d_config->num_pcidevs,
-                                               libxl_device_pci_init);
-            pcidev->msitranslate = pci_msitranslate;
-            pcidev->power_mgmt = pci_power_mgmt;
-            pcidev->permissive = pci_permissive;
-            pcidev->seize = pci_seize;
+            libxl_device_pci *pci;
+
+            pci = ARRAY_EXTEND_INIT_NODEVID(d_config->pcis,
+                                            d_config->num_pcis,
+                                            libxl_device_pci_init);
+            pci->msitranslate = pci_msitranslate;
+            pci->power_mgmt = pci_power_mgmt;
+            pci->permissive = pci_permissive;
+            pci->seize = pci_seize;
             /*
              * Like other pci option, the per-device policy always follows
              * the global policy by default.
              */
-            pcidev->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pcidev, buf);
+            pci->rdm_policy = b_info->u.hvm.rdm.policy;
+            e = xlu_pci_parse_bdf(config, pci, buf);
             if (e) {
                 fprintf(stderr,
                         "unable to parse PCI BDF `%s' for passthrough\n",
@@ -1495,7 +1495,7 @@ void parse_config_data(const char *config_source,
                 exit(-e);
             }
         }
-        if (d_config->num_pcidevs && c_info->type == LIBXL_DOMAIN_TYPE_PV)
+        if (d_config->num_pcis && c_info->type == LIBXL_DOMAIN_TYPE_PV)
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 58345bdae2..34fcf5a4fa 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -24,20 +24,20 @@
 
 static void pcilist(uint32_t domid)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(ctx, domid, &num);
-    if (pcidevs == NULL)
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (pcis == NULL)
         return;
     printf("Vdev Device\n");
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
-               (pcidevs[i].vdevfn >> 3) & 0x1f, pcidevs[i].vdevfn & 0x7,
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pcilist(int argc, char **argv)
@@ -57,28 +57,28 @@ int main_pcilist(int argc, char **argv)
 
 static int pcidetach(uint32_t domid, const char *bdf, int force)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
     if (force) {
-        if (libxl_device_pci_destroy(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
     } else {
-        if (libxl_device_pci_remove(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_remove(ctx, domid, &pci, 0))
             r = 1;
     }
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -108,24 +108,24 @@ int main_pcidetach(int argc, char **argv)
 
 static int pciattach(uint32_t domid, const char *bdf, const char *vs)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_add(ctx, domid, &pcidev, 0))
+    if (libxl_device_pci_add(ctx, domid, &pci, 0))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -155,19 +155,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcidevs == NULL )
+    if ( pcis == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -184,24 +184,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -226,24 +226,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a001570..b03e348ffb 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -190,16 +190,16 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t)\n");
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcidevs[i].domain, d_config->pcidevs[i].bus,
-               d_config->pcidevs[i].dev, d_config->pcidevs[i].func,
-               d_config->pcidevs[i].vdevfn);
+               d_config->pcis[i].domain, d_config->pcis[i].bus,
+               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
-               d_config->pcidevs[i].msitranslate,
-               d_config->pcidevs[i].power_mgmt);
+               d_config->pcis[i].msitranslate,
+               d_config->pcis[i].power_mgmt);
         fprintf(fh, "\t\t)\n");
         fprintf(fh, "\t)\n");
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34929.66247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtn-0006Nm-Uw; Mon, 23 Nov 2020 17:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34929.66247; Mon, 23 Nov 2020 17:45:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtn-0006NM-JH; Mon, 23 Nov 2020 17:45:11 +0000
Received: by outflank-mailman (input) for mailman id 34929;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006Ir-NR
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0006YV-L4; Mon, 23 Nov 2020 17:45:07 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0000at-HN; Mon, 23 Nov 2020 17:45:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=dpUixXjI/I4QIesMLF+10yxwU7Nt8aYiH/KlAxrUXlM=; b=o+392nWlICupgPcT/zrNrfMCS
	/ycRGM7S2GfEgkxNlTwlOMQxjZ+CA16dawSkPJRpIelIMUxYl8TxZU2eSArY0FPM0v/geuYKi22K1
	Qw5CawOnvsIbLpkpVXCyARhNLIghJcajAOLzURbtici+MdeN+aX6io4wr9g8gxUrqxPSI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 04/23] libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
Date: Mon, 23 Nov 2020 17:44:44 +0000
Message-Id: <20201123174503.6800-5-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Other parameters, such as 'msitranslate' and 'permissive', are dealt with,
but 'rdm_policy' appears to have been completely missed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index da01c77ba2..50c96cbfa6 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
-              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pci->msitranslate, pci->power_mgmt,
-                             pci->permissive));
+              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
+                        pci->msitranslate, pci->power_mgmt,
+                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
@@ -2374,6 +2374,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
             } else if (!strcmp(p, "permissive")) {
                 p = strtok_r(NULL, ",=", &saveptr);
                 pci->permissive = atoi(p);
+            } else if (!strcmp(p, "rdm_policy")) {
+                p = strtok_r(NULL, ",=", &saveptr);
+                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
             }
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34926.66210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtm-0006Jx-5W; Mon, 23 Nov 2020 17:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34926.66210; Mon, 23 Nov 2020 17:45:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtl-0006Jm-VA; Mon, 23 Nov 2020 17:45:09 +0000
Received: by outflank-mailman (input) for mailman id 34926;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006Ic-6L
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFti-0006YL-Qb; Mon, 23 Nov 2020 17:45:06 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFti-0000at-JD; Mon, 23 Nov 2020 17:45:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0006Ic-6L
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=W2WouK8f1AdKprWs2W2fVtTJuFUfHceJRHCSdCahaSQ=; b=fUCA4TBGOf2b6jb/qeOLgg0SFg
	Q5df14mTwOIuvUYTAR2xTtV8VRkRDxTBsr54jNVnmUFGLxPPIHDNIWVdPpP4vI3wKiG4HEBj+6RK0
	AQuBUz+ndN0sAY1RFUgh8xqzLbJXd3MdZJuQp5karpOTl/dMi2vwYqyuLb27np8lpmng=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFti-0006YL-Qb; Mon, 23 Nov 2020 17:45:06 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFti-0000at-JD; Mon, 23 Nov 2020 17:45:06 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 00/23] xl / libxl: named PCI pass-through devices
Date: Mon, 23 Nov 2020 17:44:40 +0000
Message-Id: <20201123174503.6800-1-paul@xen.org>
X-Mailer: git-send-email 2.11.0

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (23):
  xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  libxl: make libxl__device_list() work correctly for
    LIBXL__DEVICE_KIND_PCI...
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  libxl: modify
    libxl_device_pci_assignable_add/remove/list/list_free()...
  docs/man: modify xl(1) in preparation for naming of assignable devices
  xl / libxl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  xl / libxl: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 +++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +--
 tools/golang/xenlight/helpers.gen.go |   77 ++-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   67 ++-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_create.c      |    6 +-
 tools/libs/light/libxl_device.c      |   66 ++-
 tools/libs/light/libxl_dm.c          |   18 +-
 tools/libs/light/libxl_internal.h    |   55 +-
 tools/libs/light/libxl_pci.c         | 1048 ++++++++++++++++++----------------
 tools/libs/light/libxl_types.idl     |   19 +-
 tools/libs/util/libxlu_pci.c         |  359 ++++++------
 tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   30 +-
 tools/xl/xl_pci.c                    |  163 +++---
 tools/xl/xl_sxp.c                    |   12 +-
 19 files changed, 1367 insertions(+), 929 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod
---
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34930.66256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFto-0006P6-Hg; Mon, 23 Nov 2020 17:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34930.66256; Mon, 23 Nov 2020 17:45:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFto-0006OX-2z; Mon, 23 Nov 2020 17:45:12 +0000
Received: by outflank-mailman (input) for mailman id 34930;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006Iw-PQ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0006Yh-0b; Mon, 23 Nov 2020 17:45:08 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0000at-U9; Mon, 23 Nov 2020 17:45:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0006Iw-PQ
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=QpeMsaNDODuqzK/3ooOCFKLwmrQNIi9uPeHe1VKXhiI=; b=4AA71pzOYbruA2LTRlY/huEYI
	w6QfVxPvHhFP1y29Q8iGK/O0XhnPBdGQDZvtyhwaEOLFnQh0BVC0TiBbczx0kq/OY+t0oi5WH24X/
	Rtu28U357CSPJF/5/GtAl1jJAy1PRUzkLSAF42a4DcjLN1sT89LfdKy5Le9RQObbRJS+w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0006Yh-0b; Mon, 23 Nov 2020 17:45:08 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtj-0000at-U9; Mon, 23 Nov 2020 17:45:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 06/23] libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
Date: Mon, 23 Nov 2020 17:44:46 +0000
Message-Id: <20201123174503.6800-7-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Both 'domid' and 'pci' are available in 'pci_remove_state', so there is no
need to also pass them as separate arguments.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index de617e95eb..41e4b2b571 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1871,14 +1871,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
-static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pci, int force,
-                          pci_remove_state *prs)
+static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_device_pci *assigned;
+    uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
+    libxl_device_pci *pci = prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
@@ -2275,7 +2275,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_domid domid = prs->domid;
     libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
@@ -2293,7 +2292,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
             } else {
                 pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pci, prs->force, prs);
+            do_pci_remove(egc, prs);
             return;
         }
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34931.66267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtp-0006Ra-DF; Mon, 23 Nov 2020 17:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34931.66267; Mon, 23 Nov 2020 17:45:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFto-0006Qm-RP; Mon, 23 Nov 2020 17:45:12 +0000
Received: by outflank-mailman (input) for mailman id 34931;
 Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtk-0006Iy-Ps
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0006YT-Dc; Mon, 23 Nov 2020 17:45:07 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtj-0000at-Ay; Mon, 23 Nov 2020 17:45:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0006Iy-Ps
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=zmhShPhhJnaAiA904102UbZsZbkJVaSxT/H2EVj5rXI=; b=ok+/T+YjAZ/HN9yeNgNBknn4X
	CsgmrOKs9bw35BOOsFnEasss/C0ztnQsriKS/Omjp6ceVnrukkF5sHf6rviLw+Z436JvkaHK8n5CJ
	UawmsOjOrNwP8TrKUnDU1nVDFVCuNP7G6uAMVlLqK5ZdXr1TSHWteMSfLdOBV5Of7YWLY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtj-0006YT-Dc; Mon, 23 Nov 2020 17:45:07 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtj-0000at-Ay; Mon, 23 Nov 2020 17:45:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 03/23] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Mon, 23 Nov 2020 17:44:43 +0000
Message-Id: <20201123174503.6800-4-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently libxl__device_pci_add_xenstore() is broken in that it does not
update the domain's configuration for the first device added (which causes
creation of the overall backend area in xenstore). This can be easily observed
by running 'xl list -l' after adding a single device: the device will be
missing.

This patch fixes the problem and adds a DEBUG log line to allow easy
verification that the domain configuration is being modified. Also, the use
of libxl__device_generic_add() is dropped as it leads to a confusing situation
where only partial backend information is written under the xenstore
'/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
information in xenstore is under '/local/domain/0/backend' (the '0' being
hard-coded).

NOTE: This patch includes a whitespace fix in add_pcis_done().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v2:
 - Avoid having two completely different ways of adding devices into xenstore

v3:
 - Revert some changes from v2, as there is confusion over the use of the
   libxl and backend xenstore paths which needs to be fixed
---
 tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++++++---------------------
 1 file changed, 45 insertions(+), 42 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 9d44b28f0a..da01c77ba2 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
     device->kind = LIBXL__DEVICE_KIND_PCI;
 }
 
-static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pci,
-                                     int num)
+static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
+                                      uint32_t domid, const libxl_device_pci *pci)
 {
-    flexarray_t *front = NULL;
-    flexarray_t *back = NULL;
-    libxl__device device;
-    int i;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    flexarray_t *front, *back;
+    char *fe_path, *be_path;
+    struct xs_permissions fe_perms[2], be_perms[2];
+
+    LOGD(DEBUG, domid, "Creating pci backend");
 
     front = flexarray_make(gc, 16, 1);
     back = flexarray_make(gc, 16, 1);
 
-    LOGD(DEBUG, domid, "Creating pci backend");
-
-    /* add pci device */
-    libxl__device_from_pci(gc, domid, pci, &device);
+    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
+                                                 LIBXL__DEVICE_KIND_PCI);
+    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                                LIBXL__DEVICE_KIND_PCI);
 
+    flexarray_append_pair(back, "frontend", fe_path);
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
-    flexarray_append_pair(back, "online", "1");
+    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pci++)
-        libxl_create_pci_backend_device(gc, back, i, pci);
+    be_perms[0].id = 0;
+    be_perms[0].perms = XS_PERM_NONE;
+    be_perms[1].id = domid;
+    be_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, be_path);
+    xs_mkdir(ctx->xsh, t, be_path);
+    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
+                       ARRAY_SIZE(be_perms));
+    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
-    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
+    flexarray_append_pair(front, "backend", be_path);
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
     flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
 
-    return libxl__device_generic_add(gc, XBT_NULL, &device,
-                                     libxl__xs_kvs_of_flexarray(gc, back),
-                                     libxl__xs_kvs_of_flexarray(gc, front),
-                                     NULL);
+    fe_perms[0].id = domid;
+    fe_perms[0].perms = XS_PERM_NONE;
+    fe_perms[1].id = 0;
+    fe_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, fe_path);
+    xs_mkdir(ctx->xsh, t, fe_path);
+    xs_set_permissions(ctx->xsh, t, fe_path,
+                       fe_perms, ARRAY_SIZE(fe_perms));
+    libxl__xs_writev(gc, t, fe_path, libxl__xs_kvs_of_flexarray(gc, front));
 }
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
@@ -135,8 +151,6 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
-    if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -150,17 +164,17 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     back = flexarray_make(gc, 16, 1);
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
-    num = atoi(num_devs);
+    num = num_devs ? atoi(num_devs) : 0;
     libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
-    if (!starting)
+    if (num && !starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
 
     /*
      * Stubdomin config is derived from its target domain, it doesn't have
      * its own file.
      */
-    if (!is_stubdomain) {
+    if (!is_stubdomain && !starting) {
         lock = libxl__lock_domain_userdata(gc, domid);
         if (!lock) {
             rc = ERROR_LOCK_FAIL;
@@ -170,6 +184,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
+        LOGD(DEBUG, domid, "Adding new pci device to config");
         device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
                                  pci);
 
@@ -186,6 +201,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             if (rc) goto out;
         }
 
+        /* This is the first device, so create the backend */
+        if (!num_devs)
+            libxl__create_pci_backend(gc, t, domid, pci);
+
         libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
         rc = libxl__xs_transaction_commit(gc, &t);
@@ -1437,7 +1456,7 @@ out_no_irq:
         }
     }
 
-    if (!starting && !libxl_get_stubdom_id(CTX, domid))
+    if (!libxl_get_stubdom_id(CTX, domid))
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
@@ -1765,28 +1784,12 @@ static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
 }
 
 static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
-                             int rc)
+                          int rc)
 {
     EGC_GC;
     add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
-
-    /* Convenience aliases */
-    libxl_domain_config *d_config = apds->d_config;
-    libxl_domid domid = apds->domid;
     libxl__ao_device *aodev = apds->outer_aodev;
 
-    if (rc) goto out;
-
-    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
-                                       d_config->num_pcis);
-        if (rc < 0) {
-            LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
-            goto out;
-        }
-    }
-
-out:
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34932.66278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtq-0006TC-7J; Mon, 23 Nov 2020 17:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34932.66278; Mon, 23 Nov 2020 17:45:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtp-0006Se-I7; Mon, 23 Nov 2020 17:45:13 +0000
Received: by outflank-mailman (input) for mailman id 34932;
 Mon, 23 Nov 2020 17:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtl-0006J6-2z
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0006Yx-JX; Mon, 23 Nov 2020 17:45:08 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0000at-H2; Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtl-0006J6-2z
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=ydZtYAWQIkmj6i2skveH66kfmosN7aYsaYp/dzWVx0Q=; b=QRFg5o7rk895NbrgBzIoFiA9s
	7U5fputeiBGyEiKB3H8jFHAoUcbjGNK5yMkqLWQo2V2X6XE5fy/vzvYDzZOhERnUWI/Tl234zpAK+
	9WTaFxxi2e0bq8NgkQnAVqWVXUFOxZ30C+lsUWnZBsaD0zJIIJWRFcuT9KnvmrCzL6aQk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0006Yx-JX; Mon, 23 Nov 2020 17:45:08 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0000at-H2; Mon, 23 Nov 2020 17:45:08 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 09/23] libxl: remove unnecessary check from libxl__device_pci_add()
Date: Mon, 23 Nov 2020 17:44:49 +0000
Message-Id: <20201123174503.6800-10-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The code currently checks explicitly whether the device is already assigned,
but this is actually unnecessary as assigned devices do not form part of
the list returned by libxl_device_pci_assignable_list() and hence the
libxl_pci_assignable() test would have already failed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index a5d5d2e78b..ec101f255f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1555,8 +1555,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 {
     STATE_AO_GC(aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
-    int num_assigned, rc;
+    int rc;
     int stubdomid = 0;
     pci_add_state *pas;
 
@@ -1595,19 +1594,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
-    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if ( rc ) {
-        LOGD(ERROR, domid,
-             "cannot determine if device is assigned, refusing to continue");
-        goto out;
-    }
-    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
-                         pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domid, "PCI device already attached to a domain");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34933.66287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtr-0006WF-3H; Mon, 23 Nov 2020 17:45:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34933.66287; Mon, 23 Nov 2020 17:45:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtq-0006VA-Dy; Mon, 23 Nov 2020 17:45:14 +0000
Received: by outflank-mailman (input) for mailman id 34933;
 Mon, 23 Nov 2020 17:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtl-0006JB-5T
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0006Yv-Ds; Mon, 23 Nov 2020 17:45:08 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0000at-Ae; Mon, 23 Nov 2020 17:45:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=EIPA9+ZEASvzq7mL3kgJN1+orOrgFwgTrTG56YTc2Qo=; b=alnstYDprqGWpWE7pKIzZXfj0
	a+q0hUmEGTc6ymdmazlluf9If1BM60o3YI55GRS9S/jUjRaVEAeTl3+ReQSuoen80B7tpnlyAly7x
	ti71fu4o1xYo75OmjEV14tZ3OywhdCUSKqL1Mw9jazkJRjiLPxiNqGub/uZXL53A+n+Vc=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 08/23] libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
Date: Mon, 23 Nov 2020 17:44:48 +0000
Message-Id: <20201123174503.6800-9-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

For the purposes of re-binding a device to its previous driver
libxl__device_pci_assignable_add() writes the driver path into xenstore.
This path is then read back in libxl__device_pci_assignable_remove().

The functions that support writing to and reading from xenstore are
currently dedicated to this purpose, and hence the node name 'driver_path'
is hard-coded. This patch generalizes these utility functions and passes
'driver_path' as an argument. Subsequent patches will invoke them to
access other nodes.

NOTE: Because the functions will have a broader use (other than storing a
      driver path in lieu of pciback) the base xenstore path is also
      changed from '/libxl/pciback' to '/libxl/pci'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 66 +++++++++++++++++++++-----------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 77edd27345..a5d5d2e78b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -737,48 +737,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCIBACK_INFO_PATH "/libxl/pciback"
+#define PCI_INFO_PATH "/libxl/pci"
 
-static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            char *driver_path)
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    char *path;
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
 
-    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pci->domain,
-                     pci->bus,
-                     pci->dev,
-                     pci->func);
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
+    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
+        LOGE(WARN, "Write of %s to node %s failed.", val, path);
     }
 }
 
-static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    return libxl__xs_read(gc, XBT_NULL,
-                          GCSPRINTF(
-                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func));
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
 {
+    char *path = pci_info_xs_path(gc, pci, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL,
-          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pci->domain,
-                    pci->bus,
-                    pci->dev,
-                    pci->func) );
+    xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
@@ -824,9 +822,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pci, driver_path);
+            pci_info_xs_write(gc, pci, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
+                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -834,7 +832,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pci);
+        pci_info_xs_remove(gc, pci, "driver_path");
     }
 
     if ( pciback_dev_assign(gc, pci) ) {
@@ -884,7 +882,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pci);
+    driver_path = pci_info_xs_read(gc, pci, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -897,7 +895,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pci);
+            pci_info_xs_remove(gc, pci, "driver_path");
         }
     } else {
         if ( rebind ) {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 17:45:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 17:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.34934.66302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFts-0006ZU-65; Mon, 23 Nov 2020 17:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 34934.66302; Mon, 23 Nov 2020 17:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khFtr-0006YP-I1; Mon, 23 Nov 2020 17:45:15 +0000
Received: by outflank-mailman (input) for mailman id 34934;
 Mon, 23 Nov 2020 17:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khFtl-0006JQ-NJ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 17:45:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0006Yo-6k; Mon, 23 Nov 2020 17:45:08 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0000at-4G; Mon, 23 Nov 2020 17:45:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=gOoYvYBdUT6Nx0tE3SEPd8rpiDKo5xSEp4PAlyL6C2k=; b=19D3PZ71froCvMd0/cWe6upTV
	8yrsNLXtTcsFgdxmsyayoYwCnORvsSAhizZq4SVB6Sfxsh0AvG/VYvbx8apzcJ095obQNdJPsi0/Z
	4Cox3BzkjsppqniWq2t2Yvf8o1zOGzbjtB72FGjzxPmHiL3P9xNQdw6We9WkRJKW+vxRQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 07/23] libxl: stop using aodev->device_config in libxl__device_pci_add()...
Date: Mon, 23 Nov 2020 17:44:47 +0000
Message-Id: <20201123174503.6800-8-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to hold a pointer to the device.

There is already a 'pci' field in 'pci_add_state', so simply use that from
the start. This also allows the 'pci' (#3) argument to be dropped from
do_pci_add().

NOTE: This patch also changes the type of the 'pci_domid' field in
      'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
      given what the field is used for.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 41e4b2b571..77edd27345 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1074,7 +1074,7 @@ typedef struct pci_add_state {
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
     libxl_device_pci *pci;
-    int pci_domid;
+    libxl_domid pci_domid;
 } pci_add_state;
 
 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1091,7 +1091,6 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1101,7 +1100,6 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1564,13 +1562,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pci to be used by callbacks */
-    aodev->device_config = pci;
-    aodev->device_type = &libxl__pci_devtype;
-
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
+    pas->pci = pci;
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1624,9 +1619,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         GCNEW(pci_s);
         libxl_device_pci_init(pci_s);
         libxl_device_pci_copy(CTX, pci_s, pci);
+        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pas); /* must be last */
         return;
     }
 
@@ -1681,9 +1677,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     int i;
 
     /* Convenience aliases */
-    libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1718,7 +1713,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
                 pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pci, pas); /* must be last */
+            do_pci_add(egc, domid, pas); /* must be last */
             return;
         }
     }
@@ -1734,7 +1729,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35003.66332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8s-0000nr-Un; Mon, 23 Nov 2020 18:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35003.66332; Mon, 23 Nov 2020 18:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8s-0000na-NE; Mon, 23 Nov 2020 18:00:46 +0000
Received: by outflank-mailman (input) for mailman id 35003;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000m8-3s
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070l-9n; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtn-0000at-Eg; Mon, 23 Nov 2020 17:45:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=z+TBph7KfTXS8lKGFLMVlUTRo1VQXbbt6hL6Ha7NBXk=; b=bKFYgvDLuvdd90sf0SQRz7YZW
	Gl3EccU45hARkXirfJb3fYJjKejcjZ2LAt1qj8UxFq4TL5XLSYVKhTrbkaMQj+3G5Rzi18zyTLpqA
	1eu1mKFyY8v09JBzZiDlsm9JqB5cj5owkxMy4Him1rUH83JGowtmNq0f1ZHCwYq7uuIfU=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 22/23] docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
Date: Mon, 23 Nov 2020 17:45:02 +0000
Message-Id: <20201123174503.6800-23-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Since assignable devices can be named, a subsequent patch will support use
of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
this case the name will be used to look up the 'bdf' in the list of assignable
(or assigned) devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 4dd73bc498..db3360307c 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -51,7 +51,7 @@ is not specified, or if it is specified with an empty value (whether
 positionally or explicitly).
 
 B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
-B<bdf> will be ignored.
+B<bdf> or B<name> will be ignored.
 
 =head1 Positional Parameters
 
@@ -70,7 +70,11 @@ B<*> to indicate all functions of a multi-function device.
 
 =item Default Value
 
-None. This parameter is mandatory as it identifies the device.
+None. This parameter is mandatory in its positional form. As a non-positional
+parameter it is also mandatory unless a B<name> parameter is present, in
+which case B<bdf> must not be present since the B<name> will be used to find
+the B<bdf> in the list of assignable devices. See L<xl(1)> for more information
+on naming assignable devices.
 
 =back
 
@@ -194,4 +198,21 @@ B<NOTE>: This overrides the global B<rdm> option.
 
 =back
 
+=item B<name>=I<STRING>
+
+=over 4
+
+=item Description
+
+This is the name given when the B<BDF> was made assignable. See L<xl(1)> for
+more information on naming assignable devices.
+
+=item Default Value
+
+None. This parameter must not be present if a B<bdf> parameter is present.
+If a B<bdf> parameter is not present then B<name> is mandatory as it is
+required to look up the B<BDF> in the list of assignable devices.
+
+=back
+
 =back
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35006.66360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8u-0000qC-1P; Mon, 23 Nov 2020 18:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35006.66360; Mon, 23 Nov 2020 18:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8t-0000pp-RS; Mon, 23 Nov 2020 18:00:47 +0000
Received: by outflank-mailman (input) for mailman id 35006;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mN-7q
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-000713-JQ; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtl-0000at-DZ; Mon, 23 Nov 2020 17:45:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=QgHDaVAQBuzu/wVXFL4ANbr3u6TzV6QgoeQN2pH/gEg=; b=5h8Sdvp0Wa16jCGuk2sR6KlGM
	dQHdHViUYRgn+5SPQwmPSWuNZwOLs1ueC2iW6zwHYaNDwpivdIIQArY3iaKyN9tC3qxy1pRZpjnqs
	EcIRVAQiIVJ5HZvsfAL+uaaGDOmaiD58Y3jpppzOYy+7N8qV2OmG/Hc9nLGedXfyLWsvU=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 13/23] libxl: use COMPARE_PCI() macro in is_pci_in_array()...
Date: Mon, 23 Nov 2020 17:44:53 +0000
Message-Id: <20201123174503.6800-14-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... rather than an open-coded equivalent.

This patch tidies up is_pci_in_array(), making it take a single
'libxl_device_pci' argument rather than separate domain, bus, device and
function arguments. The already-available COMPARE_PCI() macro can then be
used, and the function is also changed to return 'bool' rather than 'int'.

The patch likewise modifies libxl_pci_assignable() to use is_pci_in_array()
rather than its own open-coded equivalent, and changes its return type from
'int' to 'bool'.

NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
      comparison, which should always have been the case.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_internal.h |  7 ++++---
 tools/libs/light/libxl_pci.c      | 38 +++++++++++++-------------------------
 2 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index ecee61b541..02f8a3179c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,9 +4746,10 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->func == (b)->func &&    \
-                           (a)->bus == (b)->bus &&      \
-                           (a)->dev == (b)->dev)
+#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+                           (a)->bus == (b)->bus &&       \
+                           (a)->dev == (b)->dev &&       \
+                           (a)->func == (b)->func)
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 5a3352c2ec..e0b616fe18 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,24 +336,17 @@ retry_transaction2:
     return 0;
 }
 
-static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
-                           int dom, int bus, int dev, int func)
+static bool is_pci_in_array(libxl_device_pci *pcis, int num,
+                            libxl_device_pci *pci)
 {
     int i;
 
-    for(i = 0; i < num_assigned; i++) {
-        if ( assigned[i].domain != dom )
-            continue;
-        if ( assigned[i].bus != bus )
-            continue;
-        if ( assigned[i].dev != dev )
-            continue;
-        if ( assigned[i].func != func )
-            continue;
-        return 1;
+    for (i = 0; i < num; i++) {
+        if (COMPARE_PCI(pci, &pcis[i]))
+            break;
     }
 
-    return 0;
+    return i < num;
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
@@ -1487,21 +1480,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
     libxl_device_pci *pcis;
-    int num, i;
+    int num;
+    bool assignable;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    for (i = 0; i < num; i++) {
-        if (pcis[i].domain == pci->domain &&
-            pcis[i].bus == pci->bus &&
-            pcis[i].dev == pci->dev &&
-            pcis[i].func == pci->func)
-            break;
-    }
+    assignable = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_assignable_list_free(pcis, num);
-    return i != num;
+
+    return assignable;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1834,8 +1823,7 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         goto out_fail;
     }
 
-    attached = is_pci_in_array(pcis, num, pci->domain,
-                               pci->bus, pci->dev, pci->func);
+    attached = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35002.66324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8s-0000nI-Gm; Mon, 23 Nov 2020 18:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35002.66324; Mon, 23 Nov 2020 18:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8s-0000nB-Di; Mon, 23 Nov 2020 18:00:46 +0000
Received: by outflank-mailman (input) for mailman id 35002;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000m3-1d
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070h-3C; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtn-0000at-Ls; Mon, 23 Nov 2020 17:45:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000m3-1d
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=76KlTk8bHaA9SDHeFdHL0duSfvVvDvb9nsw14DdiG9Q=; b=plm5b0KVLxpKbyBRR7SpwCDiM
	iSSQe1fwCPOUoU4I3ghimoXuLVD5PutUpMeV+iGz+t9liqsw32kiEjtfDSunMSIsQk8lErcS6IolU
	t2cuZieZaiLcCY3UtpJfas+nY4x98nWg/4Ce+ehQBnQ3gvJMBj+li26JrAJ8qg+zG3RXo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00070h-3C; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtn-0000at-Ls; Mon, 23 Nov 2020 17:45:11 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 23/23] xl / libxl: support 'xl pci-attach/detach' by name
Date: Mon, 23 Nov 2020 17:45:03 +0000
Message-Id: <20201123174503.6800-24-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a 'name' field into the idl for 'libxl_device_pci', and
libxlu_pci_parse_spec_string() is modified to parse the new 'name'
parameter of PCI_SPEC_STRING as detailed in the updated documentation in
xl-pci-configuration(5).

If the 'name' field is non-NULL then both libxl_device_pci_add() and
libxl_device_pci_remove() will use it to look up the device BDF in
the list of assignable devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h            |  6 ++++
 tools/libs/light/libxl_pci.c     | 67 +++++++++++++++++++++++++++++++++++++---
 tools/libs/light/libxl_types.idl |  1 +
 tools/libs/util/libxlu_pci.c     |  7 ++++-
 4 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4025d3a3d4..5b55a20155 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -485,6 +485,12 @@
 #define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_NAME indicates that the 'name' field of
+ * libxl_device_pci is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_NAME 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index a1c9ae0d5b..986fb11d5c 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -60,6 +60,10 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             int num,
                                             const libxl_device_pci *pci)
 {
+    if (pci->name) {
+        flexarray_append(back, GCSPRINTF("name-%d", num));
+        flexarray_append(back, GCSPRINTF("%s", pci->name));
+    }
     flexarray_append(back, GCSPRINTF("key-%d", num));
     flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
@@ -284,6 +288,7 @@ retry_transaction:
 
 retry_transaction2:
     t = xs_transaction_start(ctx->xsh);
+    xs_rm(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/state-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/key-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/dev-%d", be_path, i));
@@ -322,6 +327,12 @@ retry_transaction2:
             xs_write(ctx->xsh, t, GCSPRINTF("%s/vdevfn-%d", be_path, j - 1), tmp, strlen(tmp));
             xs_rm(ctx->xsh, t, tmppath);
         }
+        tmppath = GCSPRINTF("%s/name-%d", be_path, j);
+        tmp = libxl__xs_read(gc, t, tmppath);
+        if (tmp) {
+            xs_write(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, j - 1), tmp, strlen(tmp));
+            xs_rm(ctx->xsh, t, tmppath);
+        }
     }
     if (!xs_transaction_end(ctx->xsh, t, 0))
         if (errno == EAGAIN)
@@ -1619,6 +1630,23 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &pci->bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
         rc = xc_test_assign_device(ctx->xch, domid,
                                    pci_encode_bdf(&pci->bdf));
@@ -1767,11 +1795,19 @@ static void device_pci_add_done(libxl__egc *egc,
     libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
-        LOGD(ERROR, domid,
-             "libxl__device_pci_add  failed for "
-             "PCI device %x:%x:%x.%x (rc %d)",
-             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
-             rc);
+        if (pci->name) {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device '%s' (rc %d)",
+                 pci->name,
+                 rc);
+        } else {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device %x:%x:%x.%x (rc %d)",
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                 rc);
+        }
         pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
@@ -2288,6 +2324,23 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &prs->pci.bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     prs->orig_vdev = pci->vdevfn & ~7U;
 
     if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
@@ -2422,6 +2475,10 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
 
+    s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/name-%d", be_path, nr));
+    if (s)
+        pci->name = strdup(s);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2c441142fb..44bad36f1c 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -778,6 +778,7 @@ libxl_pci_bdf = Struct("pci_bdf", [
 
 libxl_device_pci = Struct("device_pci", [
     ("bdf", libxl_pci_bdf),
+    ("name", string),
     ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index a8b6ce5427..543a1f80e9 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -151,6 +151,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
 {
     const char *ptr = str;
     bool bdf_present = false;
+    bool name_present = false;
     int ret;
 
     /* Attempt to parse 'bdf' as positional parameter */
@@ -193,6 +194,10 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             pcidev->power_mgmt = atoi(val);
         } else if (!strcmp(key, "rdm_policy")) {
             ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else if (!strcmp(key, "name")) {
+            name_present = true;
+            pcidev->name = strdup(val);
+            if (!pcidev->name) ret = ERROR_NOMEM;
         } else {
             XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
             ret = ERROR_INVAL;
@@ -205,7 +210,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             return ret;
     }
 
-    if (!bdf_present)
+    if (!(bdf_present ^ name_present))
         return ERROR_INVAL;
 
     return 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35005.66353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8t-0000pN-PS; Mon, 23 Nov 2020 18:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35005.66353; Mon, 23 Nov 2020 18:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8t-0000oy-F6; Mon, 23 Nov 2020 18:00:47 +0000
Received: by outflank-mailman (input) for mailman id 35005;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mI-63
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070j-6t; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0000at-Ud; Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mI-63
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=gs+2F3wLageZmiAxkHoKR705R9GKQDZVMvtOetNICfQ=; b=TVbw8i00rv/itOOVZaDubEkXy
	38eLG0I6WGXSx6cSCzEPYLYKZBI6p4AQGPdd4YlwYoKi+1iOC5SP2mh5z9bo5khrjYLTA9zdj3OO8
	M6zbKuNBGUILTyKNpEL0iPrDrjWs2ASdjeVBhXslyWAHHiv6PT37neWwqAXH71ghGeFXY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00070j-6t; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0000at-Ud; Mon, 23 Nov 2020 17:45:08 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 11/23] libxl: make sure callers of libxl_device_pci_list() free the list after use
Date: Mon, 23 Nov 2020 17:44:51 +0000
Message-Id: <20201123174503.6800-12-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced libxl_device_pci_list_free() which should be used
by callers of libxl_device_pci_list() to properly dispose of the exported
'libxl_device_pci' types and free the memory holding them. Whilst all
current callers do ensure the memory is freed, only the code in xl's
pcilist() function actually calls libxl_device_pci_dispose(). As it stands
this laxity does not lead to any memory leaks, but the simple addition of,
e.g., a 'string' into the idl definition of 'libxl_device_pci' would lead
to leaks.

This patch makes sure all callers of libxl_device_pci_list() can call
libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
sure these are properly disposed at the end of the operations) rather
than keeping pointers to the structures returned by libxl_device_pci_list().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_pci.c | 68 ++++++++++++++++++++++++--------------------
 tools/xl/xl_pci.c            |  3 +-
 2 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index d3c7a547c3..0f41939d1f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1025,7 +1025,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     libxl_domid pci_domid;
 } pci_add_state;
 
@@ -1097,7 +1097,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1118,7 +1118,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1199,7 +1199,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1300,7 +1300,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1516,7 +1516,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
-    pas->pci = pci;
+
+    libxl_device_pci_copy(CTX, &pas->pci, pci);
+    pci = &pas->pci;
+
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1555,12 +1558,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pci_s;
-
-        GCNEW(pci_s);
-        libxl_device_pci_init(pci_s);
-        libxl_device_pci_copy(CTX, pci_s, pci);
-        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
         do_pci_add(egc, stubdomid, pas); /* must be last */
@@ -1619,7 +1616,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1670,7 +1667,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
@@ -1680,6 +1677,7 @@ static void device_pci_add_done(libxl__egc *egc,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -1770,7 +1768,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1812,23 +1810,26 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
+    libxl_device_pci *pcis;
+    bool attached;
     uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
-    libxl_device_pci *pci = prs->pci;
+    libxl_device_pci *pci = &prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
-    assigned = libxl_device_pci_list(ctx, domid, &num);
-    if (assigned == NULL) {
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (!pcis) {
         rc = ERROR_FAIL;
         goto out_fail;
     }
-    libxl__ptr_add(gc, assigned);
+
+    attached = is_pci_in_array(pcis, num, pci->domain,
+                               pci->bus, pci->dev, pci->func);
+    libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-    if ( !is_pci_in_array(assigned, num, pci->domain,
-                          pci->bus, pci->dev, pci->func) ) {
+    if (!attached) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1928,7 +1929,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1950,7 +1951,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2020,7 +2021,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     if (rc) goto out;
 
@@ -2075,7 +2076,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
          PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
@@ -2096,7 +2097,7 @@ static void pci_remove_detached(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2159,7 +2160,7 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, &prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
@@ -2177,7 +2178,10 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pci = pci;
+
+    libxl_device_pci_copy(CTX, &prs->pci, pci);
+    pci = &prs->pci;
+
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2212,7 +2216,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2243,6 +2247,7 @@ out:
 
     if (!rc) pci_info_xs_remove(gc, pci, "domid");
 
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -2357,7 +2362,6 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
     pcis = libxl_device_pci_list(CTX, domid, &num);
     if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2368,6 +2372,8 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
         libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
+
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 34fcf5a4fa..7c0f102ac7 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -35,9 +35,8 @@ static void pcilist(uint32_t domid)
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int main_pcilist(int argc, char **argv)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35004.66342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8t-0000oa-BD; Mon, 23 Nov 2020 18:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35004.66342; Mon, 23 Nov 2020 18:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8t-0000oG-2R; Mon, 23 Nov 2020 18:00:47 +0000
Received: by outflank-mailman (input) for mailman id 35004;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mD-4x
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070p-Bv; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtl-0000at-Jy; Mon, 23 Nov 2020 17:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mD-4x
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=GMhuBQvWwKt0oiSVX55rbWxIi0temtvls6NwYa6kuYM=; b=uKuJQjHSU4jonhkQ5fwLD95SN
	8DuE+OJxRR/ThyS1jCByHuT//rt5HnTeDv9ueF8dEPlTVudn5DEMrp6Ahm7vUIeqiUeFUmxSw7L1G
	vj5tqeU7Cb/fmgnrx1bt01rwIAxnEVUZDpJ/BnTEcw3lGXw2omMv62sYQRjDb3dPkb3L4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00070p-Bv; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtl-0000at-Jy; Mon, 23 Nov 2020 17:45:09 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 14/23] docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
Date: Mon, 23 Nov 2020 17:44:54 +0000
Message-Id: <20201123174503.6800-15-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and put it into a new xl-pci-configuration(5) manpage, akin to the
xl-network-configuration(5) and xl-disk-configuration(5) manpages.

This patch moves the content of the section verbatim. A subsequent patch
will improve the documentation, once it is in its new location.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 78 +++++++++++++++++++++++++++++++++++++
 docs/man/xl.cfg.5.pod.in            | 68 +-------------------------------
 2 files changed, 79 insertions(+), 67 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
new file mode 100644
index 0000000000..72a27bd95d
--- /dev/null
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -0,0 +1,78 @@
+=encoding utf8
+
+=head1 NAME
+
+xl-pci-configuration - XL PCI Configuration Syntax
+
+=head1 SYNTAX
+
+This document specifies the format for B<PCI_SPEC_STRING> which is used by
+the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+
+Each B<PCI_SPEC_STRING> has the form of
+B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<[DDDD:]BB:DD.F>
+
+Identifies the PCI device from the host perspective in the domain
+(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
+the same scheme as used in the output of B<lspci(1)> for the device in
+question.
+
+Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
+is zero and it is optional here also. You may specify the function
+(B<F>) as B<*> to indicate all functions.
+
+=item B<@VSLOT>
+
+Specifies the virtual slot where the guest will see this
+device. This is equivalent to the B<DD> which the guest sees. In a
+guest B<DDDD> and B<BB> are C<0000:00>.
+
+=item B<permissive=BOOLEAN>
+
+By default pciback only allows PV guests to write "known safe" values
+into PCI configuration space, likewise QEMU (both qemu-xen and
+qemu-xen-traditional) imposes the same constraint on HVM guests.
+However, many devices require writes to other areas of the configuration space
+in order to operate properly.  This option tells the backend (pciback or QEMU)
+to allow all writes to the PCI configuration space of this device by this
+domain.
+
+B<This option should be enabled with caution:> it gives the guest much
+more control over the device, which may have security or stability
+implications.  It is recommended to only enable this option for
+trusted VMs under administrator's control.
+
+=item B<msitranslate=BOOLEAN>
+
+Specifies that MSI-INTx translation should be turned on for the PCI
+device. When enabled, MSI-INTx translation will always enable MSI on
+the PCI device regardless of whether the guest uses INTx or MSI. Some
+device drivers, such as NVIDIA's, detect an inconsistency and do not
+function when this option is enabled. Therefore the default is false (0).
+
+=item B<seize=BOOLEAN>
+
+Tells B<xl> to automatically attempt to re-assign a device to
+pciback if it is not already assigned.
+
+B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+system device, such as a network or a disk controller being used by
+dom0 without confirmation.  Please use with care.
+
+=item B<power_mgmt=BOOLEAN>
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device. The default is false (0).
+
+=item B<rdm_policy=STRING>
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option but just specific to a given device. The default is "relaxed".
+
+Note: this would override global B<rdm> option.
+
+=back
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..b00644e852 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1101,73 +1101,7 @@ option is valid only when the B<controller> option is specified.
 =item B<pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]>
 
 Specifies the host PCI devices to passthrough to this guest.
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
-
-=over 4
-
-=item B<[DDDD:]BB:DD.F>
-
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
-
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
-
-=item B<@VSLOT>
-
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
-
-=item B<permissive=BOOLEAN>
-
-By default pciback only allows PV guests to write "known safe" values
-into PCI configuration space, likewise QEMU (both qemu-xen and
-qemu-xen-traditional) imposes the same constraint on HVM guests.
-However, many devices require writes to other areas of the configuration space
-in order to operate properly.  This option tells the backend (pciback or QEMU)
-to allow all writes to the PCI configuration space of this device by this
-domain.
-
-B<This option should be enabled with caution:> it gives the guest much
-more control over the device, which may have security or stability
-implications.  It is recommended to only enable this option for
-trusted VMs under administrator's control.
-
-=item B<msitranslate=BOOLEAN>
-
-Specifies that MSI-INTx translation should be turned on for the PCI
-device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
-function when this option is enabled. Therefore the default is false (0).
-
-=item B<seize=BOOLEAN>
-
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
-
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
-system device, such as a network or a disk controller being used by
-dom0 without confirmation.  Please use with care.
-
-=item B<power_mgmt=BOOLEAN>
-
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
-
-=back
+See L<xl-pci-configuration(5)> for more details.
 
 =item B<pci_permissive=BOOLEAN>
 
-- 
2.11.0
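[Editor's note: the B<PCI_SPEC_STRING> grammar moved into L<xl-pci-configuration(5)> by the patch above can be illustrated with a short, hypothetical xl domain configuration fragment; the device addresses, vslot, and options below are invented purely for illustration.]

```
# Pass three host devices through to the guest:
#  - 0000:04:00.0 with default options (domain "0000:" may be omitted)
#  - 04:00.1 at guest virtual slot 6, with permissive config-space writes
#  - all functions of 07:00, seizing them from their dom0 drivers
pci = [ "0000:04:00.0", "04:00.1@6,permissive=1", "07:00.*,seize=1" ]
```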



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35009.66398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8v-0000vB-V9; Mon, 23 Nov 2020 18:00:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35009.66398; Mon, 23 Nov 2020 18:00:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8v-0000ua-Gt; Mon, 23 Nov 2020 18:00:49 +0000
Received: by outflank-mailman (input) for mailman id 35009;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mc-E2
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00071P-TD; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtm-0000at-Vj; Mon, 23 Nov 2020 17:45:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mc-E2
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=+FpU4YZrBWh/cVU+pi5ZnHT9t/65wsl1ql6zxUHKlbw=; b=GQiCuT9imAlHwhjUgSCq38fcx
	kjNc1Bsxa3/MZoklq85E4TnDpVpMcu/9n8EFBZxQQD6T6InRpbElE49SHo4tvtXw9ROfmkRHgTyzw
	hzWdFgACW2GVKlfblVfv5Zr/pdcaR4uX18HxTe2Lmh5Pi5g0gkdlicQcgRPAh6UI1uG30=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00071P-TD; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtm-0000at-Vj; Mon, 23 Nov 2020 17:45:11 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 20/23] docs/man: modify xl(1) in preparation for naming of assignable devices
Date: Mon, 23 Nov 2020 17:45:00 +0000
Message-Id: <20201123174503.6800-21-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to allow a name to be specified to
'xl pci-assignable-add' such that the assignable device may be referred to
by that name in subsequent operations.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index c5fbce3b5c..0822a58428 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1595,19 +1595,23 @@ List virtual network interfaces for a domain.
 
 =over 4
 
-=item B<pci-assignable-list>
+=item B<pci-assignable-list> [I<-n>]
 
 List all the B<BDF> of assignable PCI devices. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+specified then any name supplied when the device was made assignable
+will also be displayed.
 
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
-=item B<pci-assignable-add> I<BDF>
+=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
 
 Make the device at B<BDF> assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+supplied then the assignable device entry will be named with the
+given B<NAME>.
 
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
@@ -1622,10 +1626,11 @@ not to do this on a device critical to domain 0's operation, such as
 storage controllers, network interfaces, or GPUs that are currently
 being used.
 
-=item B<pci-assignable-remove> [I<-r>] I<BDF>
+=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
 
-Make the device at B<BDF> not assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+Make a device non-assignable to guests. The device may be identified
+either by its B<BDF> or the B<NAME> supplied when the device was made
+assignable. See L<xl-pci-configuration(5)> for more information.
 
 This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35010.66412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8w-0000xK-S1; Mon, 23 Nov 2020 18:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35010.66412; Mon, 23 Nov 2020 18:00:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8w-0000wZ-9Z; Mon, 23 Nov 2020 18:00:50 +0000
Received: by outflank-mailman (input) for mailman id 35010;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mh-Hi
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070t-G3; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtm-0000at-9L; Mon, 23 Nov 2020 17:45:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mh-Hi
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=8fS6RrvvdnuNUI/KdahI5cH7K76QmmNtKDWTvY4ag4k=; b=F1v4i8wXRK5RYACrnqI0QCW3u
	u+a9PDTSdB9N5wLo6gXkWVBsY5e2xnGh3IbcOjHxl3cWYgse6VIOx/vtCgrUgQcG+y34fJP+xMb31
	Bnx1jC9rYm4UY10x8ZdEnjiKuglPoDr3hOBQM9JuUJ5rr9I5NcBA7rd6YBemBX3QmCo8Y=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00070t-G3; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtm-0000at-9L; Mon, 23 Nov 2020 17:45:10 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 17/23] libxl: introduce 'libxl_pci_bdf' in the idl...
Date: Mon, 23 Nov 2020 17:44:57 +0000
Message-Id: <20201123174503.6800-18-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and use in 'libxl_device_pci'

This patch is preparatory work for restricting the type passed to functions
that only require BDF information, rather than passing a 'libxl_device_pci'
structure which is only partially filled. In this patch only the minimal
mechanical changes necessary to deal with the structural changes are made.
Subsequent patches will adjust the code to make better use of the new type.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/helpers.gen.go |  77 ++++++++++++------
 tools/golang/xenlight/types.gen.go   |   8 +-
 tools/include/libxl.h                |   6 ++
 tools/libs/light/libxl_dm.c          |   8 +-
 tools/libs/light/libxl_internal.h    |   3 +-
 tools/libs/light/libxl_pci.c         | 148 +++++++++++++++++------------------
 tools/libs/light/libxl_types.idl     |  16 ++--
 tools/libs/util/libxlu_pci.c         |   8 +-
 tools/xl/xl_pci.c                    |   6 +-
 tools/xl/xl_sxp.c                    |   4 +-
 10 files changed, 167 insertions(+), 117 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index c8605994e7..b7230f693c 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1999,6 +1999,41 @@ xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
  return nil
  }
 
+// NewPciBdf returns an instance of PciBdf initialized with defaults.
+func NewPciBdf() (*PciBdf, error) {
+var (
+x PciBdf
+xc C.libxl_pci_bdf)
+
+C.libxl_pci_bdf_init(&xc)
+defer C.libxl_pci_bdf_dispose(&xc)
+
+if err := x.fromC(&xc); err != nil {
+return nil, err }
+
+return &x, nil}
+
+func (x *PciBdf) fromC(xc *C.libxl_pci_bdf) error {
+ x.Func = byte(xc._func)
+x.Dev = byte(xc.dev)
+x.Bus = byte(xc.bus)
+x.Domain = int(xc.domain)
+
+ return nil}
+
+func (x *PciBdf) toC(xc *C.libxl_pci_bdf) (err error){defer func(){
+if err != nil{
+C.libxl_pci_bdf_dispose(xc)}
+}()
+
+xc._func = C.uint8_t(x.Func)
+xc.dev = C.uint8_t(x.Dev)
+xc.bus = C.uint8_t(x.Bus)
+xc.domain = C.int(x.Domain)
+
+ return nil
+ }
+
 // NewDevicePci returns an instance of DevicePci initialized with defaults.
 func NewDevicePci() (*DevicePci, error) {
 var (
@@ -2014,10 +2049,9 @@ return nil, err }
 return &x, nil}
 
 func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
- x.Func = byte(xc._func)
-x.Dev = byte(xc.dev)
-x.Bus = byte(xc.bus)
-x.Domain = int(xc.domain)
+ if err := x.Bdf.fromC(&xc.bdf);err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 x.Vdevfn = uint32(xc.vdevfn)
 x.VfuncMask = uint32(xc.vfunc_mask)
 x.Msitranslate = bool(xc.msitranslate)
@@ -2033,10 +2067,9 @@ if err != nil{
 C.libxl_device_pci_dispose(xc)}
 }()
 
-xc._func = C.uint8_t(x.Func)
-xc.dev = C.uint8_t(x.Dev)
-xc.bus = C.uint8_t(x.Bus)
-xc.domain = C.int(x.Domain)
+if err := x.Bdf.toC(&xc.bdf); err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 xc.vdevfn = C.uint32_t(x.Vdevfn)
 xc.vfunc_mask = C.uint32_t(x.VfuncMask)
 xc.msitranslate = C.bool(x.Msitranslate)
@@ -2766,13 +2799,13 @@ if err := x.Nics[i].fromC(&v); err != nil {
 return fmt.Errorf("converting field Nics: %v", err) }
 }
 }
-x.Pcidevs = nil
-if n := int(xc.num_pcidevs); n > 0 {
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
-x.Pcidevs = make([]DevicePci, n)
-for i, v := range cPcidevs {
-if err := x.Pcidevs[i].fromC(&v); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err) }
+x.Pcis = nil
+if n := int(xc.num_pcis); n > 0 {
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:n:n]
+x.Pcis = make([]DevicePci, n)
+for i, v := range cPcis {
+if err := x.Pcis[i].fromC(&v); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err) }
 }
 }
 x.Rdms = nil
@@ -2922,13 +2955,13 @@ return fmt.Errorf("converting field Nics: %v", err)
 }
 }
 }
-if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
-xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs)*C.sizeof_libxl_device_pci))
-xc.num_pcidevs = C.int(numPcidevs)
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
-for i,v := range x.Pcidevs {
-if err := v.toC(&cPcidevs[i]); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err)
+if numPcis := len(x.Pcis); numPcis > 0 {
+xc.pcis = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcis)*C.sizeof_libxl_device_pci))
+xc.num_pcis = C.int(numPcis)
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:numPcis:numPcis]
+for i,v := range x.Pcis {
+if err := v.toC(&cPcis[i]); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err)
 }
 }
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b4c5df0f2c..bc62ae8ce9 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -707,11 +707,15 @@ ColoCheckpointHost string
 ColoCheckpointPort string
 }
 
-type DevicePci struct {
+type PciBdf struct {
 Func byte
 Dev byte
 Bus byte
 Domain int
+}
+
+type DevicePci struct {
+Bdf PciBdf
 Vdevfn uint32
 VfuncMask uint32
 Msitranslate bool
@@ -896,7 +900,7 @@ CInfo DomainCreateInfo
 BInfo DomainBuildInfo
 Disks []DeviceDisk
 Nics []DeviceNic
-Pcidevs []DevicePci
+Pcis []DevicePci
 Rdms []DeviceRdm
 Dtdevs []DeviceDtdev
 Vfbs []DeviceVfb
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 8225809d94..5edacccbd1 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -464,6 +464,12 @@
 #define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
 
 /*
+ * LIBXL_HAVE_PCI_BDF indicates that the 'libxl_pci_bdf' type is defined
+ * and is embedded in the 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_BDF 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 8ebe1b60c9..a25bf23834 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -472,10 +472,10 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcis[i].domain;
-        bus = d_config->pcis[i].bus;
-        devfn = PCI_DEVFN(d_config->pcis[i].dev,
-                          d_config->pcis[i].func);
+        seg = d_config->pcis[i].bdf.domain;
+        bus = d_config->pcis[i].bdf.bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].bdf.dev,
+                          d_config->pcis[i].bdf.func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 02f8a3179c..da12d92209 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,10 +4746,11 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+#define COMPARE_BDF(a, b) ((a)->domain == (b)->domain && \
                            (a)->bus == (b)->bus &&       \
                            (a)->dev == (b)->dev &&       \
                            (a)->func == (b)->func)
+#define COMPARE_PCI(a, b) COMPARE_BDF(&((a)->bdf), &((b)->bdf))
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e0b616fe18..3cfba0e527 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,10 +29,10 @@ static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pci->domain << 16;
-    value |= (pci->bus & 0xff) << 8;
-    value |= (pci->dev & 0x1f) << 3;
-    value |= (pci->func & 0x7);
+    value = pci->bdf.domain << 16;
+    value |= (pci->bdf.bus & 0xff) << 8;
+    value |= (pci->bdf.dev & 0x1f) << 3;
+    value |= (pci->bdf.func & 0x7);
 
     return value;
 }
@@ -41,10 +41,10 @@ static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
 }
 
@@ -54,9 +54,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     if (pci->vdevfn)
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
@@ -250,8 +250,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pci->domain && bus == pci->bus &&
-            pci->dev == dev && pci->func == func) {
+        if (domain == pci->bdf.domain && bus == pci->bdf.bus &&
+            pci->bdf.dev == dev && pci->bdf.func == func) {
             break;
         }
     }
@@ -362,8 +362,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
+    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+                    pci->bdf.dev, pci->bdf.func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -383,10 +383,10 @@ static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 }
 
 
@@ -484,10 +484,10 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
+                           pci->bdf.domain,
+                           pci->bdf.bus,
+                           pci->bdf.dev,
+                           pci->bdf.func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -517,7 +517,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -525,7 +525,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -533,7 +533,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -544,7 +544,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -552,7 +552,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -560,7 +560,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -571,14 +571,14 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pci->domain, pci->bus, pci->dev, pci->func);
+                     pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -587,7 +587,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
     }
 
@@ -654,10 +654,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
+        if (dom == pci->bdf.domain
+            && bus == pci->bdf.bus
+            && dev == pci->bdf.dev
+            && func == pci->bdf.func) {
             rc = 1;
             goto out;
         }
@@ -683,8 +683,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus,
+                      pci->bdf.dev, pci->bdf.func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -747,10 +747,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
+    dom = pci->bdf.domain;
+    bus = pci->bdf.bus;
+    dev = pci->bdf.dev;
+    func = pci->bdf.func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -824,8 +824,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
-            pci->dev, pci->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+            pci->bdf.dev, pci->bdf.func);
         return ERROR_FAIL;
     }
 
@@ -914,11 +914,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigne
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pci->domain != dom )
+        if ( pci->bdf.domain != dom )
             continue;
-        if ( pci->bus != bus )
+        if ( pci->bdf.bus != bus )
             continue;
-        if ( pci->dev != dev )
+        if ( pci->bdf.dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -967,13 +967,13 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
     if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pci->domain, pci->bus, pci->dev,
-                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->bdf.domain, pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->vdevfn, pci->msitranslate,
                          pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pci->domain,  pci->bus, pci->dev,
-                         pci->func, pci->msitranslate, pci->power_mgmt);
+                         pci->bdf.domain,  pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1132,10 +1132,10 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+                           "%04x:%02x:%02x.%01x", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
                                PCI_SLOT(pci->vdevfn),
@@ -1223,7 +1223,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1314,8 +1314,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1355,8 +1355,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                                pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1527,7 +1527,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pci->domain, pci->bus, pci->dev, pci->func,
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
@@ -1545,7 +1545,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1553,7 +1553,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
-    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+    libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
@@ -1634,13 +1634,13 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
         pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pci->func);
+        pfunc_mask = (1 << pci->bdf.func);
     }
 
     for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 /* if not passing through multiple devices in a block make
@@ -1672,7 +1672,7 @@ static void device_pci_add_done(libxl__egc *egc,
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pci->domain, pci->bus, pci->dev, pci->func,
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
@@ -1741,8 +1741,8 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
-                     pci->bus, pci->dev, pci->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->bdf.domain,
+                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
@@ -1856,8 +1856,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                                     pci->bus, pci->dev, pci->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1892,8 +1892,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                               pci->bus, pci->dev, pci->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                               pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1957,7 +1957,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2026,7 +2026,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2077,7 +2077,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
+         PCI_PT_QDEV_ID, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2110,7 +2110,7 @@ static void pci_remove_detached(libxl__egc *egc,
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
-        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+        libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     }
 
     if (!isstubdom) {
@@ -2198,7 +2198,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
         }
         pci->vfunc_mask &= prs->pfunc_mask;
     } else {
-        prs->pfunc_mask = (1 << pci->func);
+        prs->pfunc_mask = (1 << pci->bdf.func);
     }
 
     rc = 0;
@@ -2226,7 +2226,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 pci->vdevfn = orig_vdev;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 20f8dd7cfa..2c441142fb 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -769,18 +769,22 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_pci_bdf = Struct("pci_bdf", [
+    ("func", uint8),
+    ("dev", uint8),
+    ("bus", uint8),
+    ("domain", integer),
+    ])
+
 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
     ("power_mgmt", bool),
     ("permissive", bool),
     ("seize", bool),
-    ("rdm_policy",      libxl_rdm_reserve_policy),
+    ("rdm_policy", libxl_rdm_reserve_policy),
     ])
 
 libxl_device_rdm = Struct("device_rdm", [
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 1d38fffce3..5c107f2642 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -27,10 +27,10 @@ static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                            unsigned int bus, unsigned int dev,
                            unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
     return 0;
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f71498cbb5..b6dc7c2840 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -34,7 +34,8 @@ static void pcilist(uint32_t domid)
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_list_free(pcis, num);
 }
@@ -163,7 +164,8 @@ static void pciassignable_list(void)
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_assignable_list_free(pcis, num);
 }
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index b03e348ffb..95180b60df 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -194,8 +194,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcis[i].domain, d_config->pcis[i].bus,
-               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].bdf.domain, d_config->pcis[i].bdf.bus,
+               d_config->pcis[i].bdf.dev, d_config->pcis[i].bdf.func,
                d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
                d_config->pcis[i].msitranslate,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35008.66385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8v-0000sj-7I; Mon, 23 Nov 2020 18:00:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35008.66385; Mon, 23 Nov 2020 18:00:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8u-0000rt-Qc; Mon, 23 Nov 2020 18:00:48 +0000
Received: by outflank-mailman (input) for mailman id 35008;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mX-DF
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070r-Dd; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtm-0000at-0a; Mon, 23 Nov 2020 17:45:10 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=YcXfMQGt7JCycdtTvU5ue8RUHb6lcyzpJs/F2AZGOVM=; b=LaCR8JlKTnIY8XPMT1QVGLDtH
	EVQqVETA0EWBL+UWNfT19D7HEt6t+GGAI1TZVPxT4n/hZv5vneixIls+pso2Aq29CtafMBjbAWhnl
	bZ3dE252zZNT2SzxHuUB8eXDgBKaAXqi/kL+kngG4TSStn4GJ/ZQzMQRhPjWKetbcarPY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 16/23] docs/man: fix xl(1) documentation for 'pci' operations
Date: Mon, 23 Nov 2020 17:44:56 +0000
Message-Id: <20201123174503.6800-17-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently the documentation completely fails to mention the existence of
PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
'pci-assignable-add/remove' take <BDF> arguments whereas 'pci-attach/detach'
take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
patch).
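
As an illustration of the distinction (the device address and domain name
below are made up), the two argument forms look like:

```sh
# pci-assignable-add/remove take a bare BDF:
xl pci-assignable-add 0000:03:00.0
xl pci-assignable-remove -r 0000:03:00.0

# pci-attach/detach take a PCI_SPEC_STRING: a BDF optionally followed
# by key=value options such as permissive or rdm_policy:
xl pci-attach guest-name 0000:03:00.0,permissive=1,rdm_policy=relaxed
```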

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index f92bacfa72..c5fbce3b5c 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1597,14 +1597,18 @@ List virtual network interfaces for a domain.
 
 =item B<pci-assignable-list>
 
-List all the assignable PCI devices.
+List the B<BDF>s of all assignable PCI devices. See
+L<xl-pci-configuration(5)> for more information.
+
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
 =item B<pci-assignable-add> I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF assignable to guests.
+Make the device at B<BDF> assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
 first be unbound, and the original driver stored so that it can be
@@ -1620,8 +1624,10 @@ being used.
 
 =item B<pci-assignable-remove> [I<-r>] I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF not assignable to
-guests.  This will at least unbind the device from pciback, and
+Make the device at B<BDF> not assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
+This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
 option is specified, it will also attempt to re-bind the device to its
 original driver, making it usable by Domain 0 again.  If the device is
@@ -1637,15 +1643,15 @@ As always, this should only be done if you trust the guest, or are
 confident that the particular device you're re-assigning to dom0 will
 cancel all in-flight DMA on FLR.
 
-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-plug a new pass-through pci device to the specified domain.
-B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+Hot-plug a new pass-through pci device to the specified domain. See
+L<xl-pci-configuration(5)> for more information.
 
-=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
+=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
-Bus/Device/Function of the physical device to be removed from the guest domain.
+Hot-unplug a pci device that was previously passed through to a domain. See
+L<xl-pci-configuration(5)> for more information.
 
 B<OPTIONS>
 
@@ -1660,7 +1666,7 @@ even without guest domain's collaboration.
 
 =item B<pci-list> I<domain-id>
 
-List pass-through pci devices for a domain.
+List the B<BDF>s of pci devices passed through to a domain.
 
 =back
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35011.66425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8x-0000zu-VM; Mon, 23 Nov 2020 18:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35011.66425; Mon, 23 Nov 2020 18:00:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8x-0000z8-6s; Mon, 23 Nov 2020 18:00:51 +0000
Received: by outflank-mailman (input) for mailman id 35011;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mm-LP
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00070v-Hh; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtm-0000at-GY; Mon, 23 Nov 2020 17:45:10 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=FS2lsRq/N5rr0lREQKsZJxqwgY3o2rJbmA8WgHHgsnw=; b=CxiKYpU0sI4det2CUzdumTyfX
	F7mBEePaFs5Dcfu6XK+6VtenPfxXBNuGCU4Mq7YZ0DwNZRh1rWP65wItgnKT6rcofUVvWXIaR9lfq
	oa7+olOWIF/7Gh5VTM8B1tcGhrevHG0qesH+G2WLeSBHShWrWVxtXQAxH+gzUXvqA/bdE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 18/23] libxlu: introduce xlu_pci_parse_spec_string()
Date: Mon, 23 Nov 2020 17:44:58 +0000
Message-Id: <20201123174503.6800-19-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch largely re-writes the code that parses a PCI_SPEC_STRING, which
is now entered via the newly introduced function. The new parser also deals
with 'bdf' and 'vslot' as non-positional parameters, as per the
documentation in xl-pci-configuration(5).

The existing xlu_pci_parse_bdf() function remains, but now strictly parses
BDF values. Some existing callers of xlu_pci_parse_bdf() are
modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).

NOTE: Usage text in xl_cmdtable.c and error messages are also modified
      appropriately.

Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxlutil.h    |   8 +-
 tools/libs/util/libxlu_pci.c | 354 +++++++++++++++++++++++--------------------
 tools/xl/xl_cmdtable.c       |   4 +-
 tools/xl/xl_parse.c          |   4 +-
 tools/xl/xl_pci.c            |  37 +++--
 5 files changed, 220 insertions(+), 187 deletions(-)

diff --git a/tools/include/libxlutil.h b/tools/include/libxlutil.h
index 92e35c5462..cdd6aab4f8 100644
--- a/tools/include/libxlutil.h
+++ b/tools/include/libxlutil.h
@@ -109,9 +109,15 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
    */
 
 /*
+ * PCI BDF
+ */
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str);
+
+/*
  * PCI specification parsing
  */
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pci,
+                              const char *str);
 
 /*
  * RDM parsing
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 5c107f2642..a8b6ce5427 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -1,5 +1,7 @@
 #define _GNU_SOURCE
 
+#include <ctype.h>
+
 #include "libxlu_internal.h"
 #include "libxlu_disk_l.h"
 #include "libxlu_disk_i.h"
@@ -9,185 +11,213 @@
 #define XLU__PCI_ERR(_c, _x, _a...) \
     if((_c) && (_c)->report) fprintf((_c)->report, _x, ##_a)
 
-static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
+static int parse_bdf(libxl_pci_bdf *bdfp, uint32_t *vfunc_maskp,
+                     const char *str, const char **endp)
 {
-    unsigned long ret;
-    char *end;
-
-    ret = strtoul(str, &end, 16);
-    if ( end == str || *end != '\0' )
-        return -1;
-    if ( ret & ~mask )
-        return -1;
-    *val = (unsigned int)ret & mask;
+    const char *ptr = str;
+    unsigned int colons = 0;
+    unsigned int domain, bus, dev, func;
+    int n;
+
+    /* Count occurrences of ':' to determine presence/absence of the 'domain' */
+    while (isxdigit(*ptr) || *ptr == ':') {
+        if (*ptr == ':')
+            colons++;
+        ptr++;
+    }
+
+    ptr = str;
+    switch (colons) {
+    case 1:
+        domain = 0;
+        if (sscanf(ptr, "%x:%x.%n", &bus, &dev, &n) != 2)
+            return ERROR_INVAL;
+        break;
+    case 2:
+        if (sscanf(ptr, "%x:%x:%x.%n", &domain, &bus, &dev, &n) != 3)
+            return ERROR_INVAL;
+        break;
+    default:
+        return ERROR_INVAL;
+    }
+
+    if (domain > 0xffff || bus > 0xff || dev > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+    if (*ptr == '*') {
+        if (!vfunc_maskp)
+            return ERROR_INVAL;
+        *vfunc_maskp = LIBXL_PCI_FUNC_ALL;
+        func = 0;
+        ptr++;
+    } else {
+        if (sscanf(ptr, "%x%n", &func, &n) != 1)
+            return ERROR_INVAL;
+        if (func > 7)
+            return ERROR_INVAL;
+        if (vfunc_maskp)
+            *vfunc_maskp = 1;
+        ptr += n;
+    }
+
+    bdfp->domain = domain;
+    bdfp->bus = bus;
+    bdfp->dev = dev;
+    bdfp->func = func;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
-                           unsigned int bus, unsigned int dev,
-                           unsigned int func, unsigned int vdevfn)
+static int parse_vslot(uint32_t *vdevfnp, const char *str, const char **endp)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
-    pci->vdevfn = vdevfn;
+    const char *ptr = str;
+    unsigned int val;
+    int n;
+
+    if (sscanf(ptr, "%x%n", &val, &n) != 1)
+        return ERROR_INVAL;
+
+    if (val > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+
+    *vdevfnp = val << 3;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-#define STATE_DOMAIN    0
-#define STATE_BUS       1
-#define STATE_DEV       2
-#define STATE_FUNC      3
-#define STATE_VSLOT     4
-#define STATE_OPTIONS_K 6
-#define STATE_OPTIONS_V 7
-#define STATE_TERMINAL  8
-#define STATE_TYPE      9
-#define STATE_RDM_STRATEGY      10
-#define STATE_RESERVE_POLICY    11
-#define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
+static int parse_key_val(char **keyp, char **valp, const char *str,
+                         const char **endp)
 {
-    unsigned state = STATE_DOMAIN;
-    unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
-    char *buf2, *tok, *ptr, *end, *optkey = NULL;
+    const char *ptr = str;
+    char *key, *val;
+
+    while (*ptr != '=' && *ptr != '\0')
+        ptr++;
 
-    if ( NULL == (buf2 = ptr = strdup(str)) )
+    if (*ptr == '\0')
+        return ERROR_INVAL;
+
+    key = strndup(str, ptr - str);
+    if (!key)
         return ERROR_NOMEM;
 
-    for(tok = ptr, end = ptr + strlen(ptr) + 1; ptr < end; ptr++) {
-        switch(state) {
-        case STATE_DOMAIN:
-            if ( *ptr == ':' ) {
-                state = STATE_BUS;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dom, 0xffff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_BUS:
-            if ( *ptr == ':' ) {
-                state = STATE_DEV;
-                *ptr = '\0';
-                if ( hex_convert(tok, &bus, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }else if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( dom & ~0xff )
-                    goto parse_error;
-                bus = dom;
-                dom = 0;
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_DEV:
-            if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_FUNC:
-            if ( *ptr == '\0' || *ptr == '@' || *ptr == ',' ) {
-                switch( *ptr ) {
-                case '\0':
-                    state = STATE_TERMINAL;
-                    break;
-                case '@':
-                    state = STATE_VSLOT;
-                    break;
-                case ',':
-                    state = STATE_OPTIONS_K;
-                    break;
-                }
-                *ptr = '\0';
-                if ( !strcmp(tok, "*") ) {
-                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
-                }else{
-                    if ( hex_convert(tok, &func, 0x7) )
-                        goto parse_error;
-                    pci->vfunc_mask = (1 << 0);
-                }
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_VSLOT:
-            if ( *ptr == '\0' || *ptr == ',' ) {
-                state = ( *ptr == ',' ) ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( hex_convert(tok, &vslot, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_K:
-            if ( *ptr == '=' ) {
-                state = STATE_OPTIONS_V;
-                *ptr = '\0';
-                optkey = tok;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_V:
-            if ( *ptr == ',' || *ptr == '\0' ) {
-                state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( !strcmp(optkey, "msitranslate") ) {
-                    pci->msitranslate = atoi(tok);
-                }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pci->power_mgmt = atoi(tok);
-                }else if ( !strcmp(optkey, "permissive") ) {
-                    pci->permissive = atoi(tok);
-                }else if ( !strcmp(optkey, "seize") ) {
-                    pci->seize = atoi(tok);
-                } else if (!strcmp(optkey, "rdm_policy")) {
-                    if (!strcmp(tok, "strict")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                    } else if (!strcmp(tok, "relaxed")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                    } else {
-                        XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
-                                          " policy: 'strict' or 'relaxed'.",
-                                     tok);
-                        goto parse_error;
-                    }
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown PCI BDF option: %s", optkey);
-                }
-                tok = ptr + 1;
-            }
-        default:
-            break;
+    str = ++ptr; /* skip '=' */
+    while (*ptr != ',' && *ptr != '\0')
+        ptr++;
+
+    val = strndup(str, ptr - str);
+    if (!val) {
+        free(key);
+        return ERROR_NOMEM;
+    }
+
+    if (*ptr == ',')
+        ptr++;
+
+    *keyp = key;
+    *valp = val;
+    *endp = ptr;
+
+    return 0;
+}
+
+static int parse_rdm_policy(XLU_Config *cfg, libxl_rdm_reserve_policy *policy,
+                            const char *str)
+{
+    int ret = libxl_rdm_reserve_policy_from_string(str, policy);
+
+    if (ret)
+        XLU__PCI_ERR(cfg, "Unknown RDM policy: %s", str);
+
+    return ret;
+}
+
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str)
+{
+    return parse_bdf(bdf, NULL, str, NULL);
+}
+
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
+                              const char *str)
+{
+    const char *ptr = str;
+    bool bdf_present = false;
+    int ret;
+
+    /* Attempt to parse 'bdf' as a positional parameter */
+    ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, ptr, &ptr);
+    if (!ret) {
+        bdf_present = true;
+
+        /* Check whether 'vslot' is present */
+        if (*ptr == '@') {
+            ret = parse_vslot(&pcidev->vdevfn, ++ptr, &ptr);
+            if (ret)
+                return ret;
         }
+        if (*ptr == ',')
+            ptr++;
+        else if (*ptr != '\0')
+            return ERROR_INVAL;
     }
 
-    if ( tok != ptr || state != STATE_TERMINAL )
-        goto parse_error;
+    /* Parse the rest as 'key=val' pairs */
+    while (*ptr != '\0') {
+        char *key, *val;
 
-    assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
+        ret = parse_key_val(&key, &val, ptr, &ptr);
+        if (ret)
+            return ret;
 
-    /* Just a pretty way to fill in the values */
-    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
+        if (!strcmp(key, "bdf")) {
+            ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, val, NULL);
+            bdf_present = !ret;
+        } else if (!strcmp(key, "vslot")) {
+            ret = parse_vslot(&pcidev->vdevfn, val, NULL);
+        } else if (!strcmp(key, "permissive")) {
+            pcidev->permissive = atoi(val);
+        } else if (!strcmp(key, "msitranslate")) {
+            pcidev->msitranslate = atoi(val);
+        } else if (!strcmp(key, "seize")) {
+            pcidev->seize = atoi(val);
+        } else if (!strcmp(key, "power_mgmt")) {
+            pcidev->power_mgmt = atoi(val);
+        } else if (!strcmp(key, "rdm_policy")) {
+            ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else {
+            XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
+            ret = ERROR_INVAL;
+        }
 
-    free(buf2);
+        free(key);
+        free(val);
 
-    return 0;
+        if (ret)
+            return ret;
+    }
 
-parse_error:
-    free(buf2);
-    return ERROR_INVAL;
+    if (!bdf_present)
+        return ERROR_INVAL;
+
+    return 0;
 }
 
 int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 {
+#define STATE_TYPE           0
+#define STATE_RDM_STRATEGY   1
+#define STATE_RESERVE_POLICY 2
+#define STATE_TERMINAL       3
+
     unsigned state = STATE_TYPE;
     char *buf2, *tok, *ptr, *end;
 
@@ -227,15 +257,8 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
             if (*ptr == ',' || *ptr == '\0') {
                 state = *ptr == ',' ? STATE_TYPE : STATE_TERMINAL;
                 *ptr = '\0';
-                if (!strcmp(tok, "strict")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                } else if (!strcmp(tok, "relaxed")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown RDM property policy value: %s",
-                                 tok);
+                if (!parse_rdm_policy(cfg, &rdm->policy, tok))
                     goto parse_error;
-                }
                 tok = ptr + 1;
             }
         default:
@@ -253,6 +276,11 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 parse_error:
     free(buf2);
     return ERROR_INVAL;
+
+#undef STATE_TYPE
+#undef STATE_RDM_STRATEGY
+#undef STATE_RESERVE_POLICY
+#undef STATE_TERMINAL
 }
 
 /*
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927..2ee0c49673 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -90,12 +90,12 @@ struct cmd_spec cmd_table[] = {
     { "pci-attach",
       &main_pciattach, 0, 1,
       "Insert a new pass-through pci device",
-      "<Domain> <BDF> [Virtual Slot]",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-detach",
       &main_pcidetach, 0, 1,
       "Remove a domain's pass-through pci device",
-      "<Domain> <BDF>",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-list",
       &main_pcilist, 0, 0,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 0765780d9f..6a4703e745 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1487,10 +1487,10 @@ void parse_config_data(const char *config_source,
              * the global policy by default.
              */
             pci->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pci, buf);
+            e = xlu_pci_parse_spec_string(config, pci, buf);
             if (e) {
                 fprintf(stderr,
-                        "unable to parse PCI BDF `%s' for passthrough\n",
+                        "unable to parse PCI_SPEC_STRING `%s' for passthrough\n",
                         buf);
                 exit(-e);
             }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index b6dc7c2840..9c24496cb2 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -55,7 +55,7 @@ int main_pcilist(int argc, char **argv)
     return 0;
 }
 
-static int pcidetach(uint32_t domid, const char *bdf, int force)
+static int pcidetach(uint32_t domid, const char *spec_string, int force)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -66,8 +66,9 @@ static int pcidetach(uint32_t domid, const char *bdf, int force)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-detach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
     if (force) {
@@ -89,7 +90,7 @@ int main_pcidetach(int argc, char **argv)
     uint32_t domid;
     int opt;
     int force = 0;
-    const char *bdf = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
     case 'f':
@@ -98,15 +99,15 @@ int main_pcidetach(int argc, char **argv)
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (pcidetach(domid, bdf, force))
+    if (pcidetach(domid, spec_string, force))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciattach(uint32_t domid, const char *bdf, const char *vs)
+static int pciattach(uint32_t domid, const char *spec_string)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -117,8 +118,9 @@ static int pciattach(uint32_t domid, const char *bdf, const char *vs)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-attach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
 
@@ -135,19 +137,16 @@ int main_pciattach(int argc, char **argv)
 {
     uint32_t domid;
     int opt;
-    const char *bdf = NULL, *vs = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
         /* No options */
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
-
-    if (optind + 1 < argc)
-        vs = argv[optind + 2];
+    spec_string = argv[optind + 1];
 
-    if (pciattach(domid, bdf, vs))
+    if (pciattach(domid, spec_string))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
@@ -193,8 +192,8 @@ static int pciassignable_add(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
@@ -235,8 +234,8 @@ static int pciassignable_remove(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35007.66374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8u-0000r2-MO; Mon, 23 Nov 2020 18:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35007.66374; Mon, 23 Nov 2020 18:00:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8u-0000qb-98; Mon, 23 Nov 2020 18:00:48 +0000
Received: by outflank-mailman (input) for mailman id 35007;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mS-CJ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-000717-L0; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtk-0000at-NQ; Mon, 23 Nov 2020 17:45:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mS-CJ
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=cM4E5HSO7y4rsBvC0ulob6owEH5QJoWwuEB72xnydjQ=; b=ZjNrMud0il4tCG3UnQeRrha6J
	UNx9VtQCsB/GoTbhyH/m7CLPhaiCk5fNeaW+TL3R66T4Vr0QGO+3KHWekt0c39Wpqovi0WicpArrG
	4rpcan3fZ448ucPzuLZcYyo2eDy+mIBn1eZeKcMJp2LqBSRnANC8JRhfX8yXIz6d2uQqo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-000717-L0; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtk-0000at-NQ; Mon, 23 Nov 2020 17:45:08 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 10/23] libxl: remove get_all_assigned_devices() from libxl_pci.c
Date: Mon, 23 Nov 2020 17:44:50 +0000
Message-Id: <20201123174503.6800-11-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Use of this function is a very inefficient way to check whether a device
has already been assigned.

This patch adds code that saves the domain id in xenstore at the point of
assignment, and removes it again when the device is de-assigned (or the
domain is destroyed). It is then straightforward to check whether a device
has been assigned by checking whether it has a saved domain id.

NOTE: To facilitate the xenstore check it is necessary to move the
      pci_info_xs_read() earlier in libxl_pci.c. To keep related functions
      together, the rest of the pci_info_xs_XXX() functions are moved too.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 149 ++++++++++++++++---------------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index ec101f255f..d3c7a547c3 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,50 +336,6 @@ retry_transaction2:
     return 0;
 }
 
-static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
-{
-    char **domlist;
-    unsigned int nd = 0, i;
-
-    *list = NULL;
-    *num = 0;
-
-    domlist = libxl__xs_directory(gc, XBT_NULL, "/local/domain", &nd);
-    for(i = 0; i < nd; i++) {
-        char *path, *num_devs;
-
-        path = GCSPRINTF("/local/domain/0/backend/%s/%s/0/num_devs",
-                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                         domlist[i]);
-        num_devs = libxl__xs_read(gc, XBT_NULL, path);
-        if ( num_devs ) {
-            int ndev = atoi(num_devs), j;
-            char *devpath, *bdf;
-
-            for(j = 0; j < ndev; j++) {
-                devpath = GCSPRINTF("/local/domain/0/backend/%s/%s/0/dev-%u",
-                                    libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                                    domlist[i], j);
-                bdf = libxl__xs_read(gc, XBT_NULL, devpath);
-                if ( bdf ) {
-                    unsigned dom, bus, dev, func;
-                    if ( sscanf(bdf, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
-                        continue;
-
-                    *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
-                    if (*list == NULL)
-                        return ERROR_NOMEM;
-                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
-                    (*num)++;
-                }
-            }
-        }
-    }
-    libxl__ptr_add(gc, *list);
-
-    return 0;
-}
-
 static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
                            int dom, int bus, int dev, int func)
 {
@@ -427,19 +383,58 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
     return 0;
 }
 
+#define PCI_INFO_PATH "/libxl/pci"
+
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
 
     *num = 0;
 
-    r = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if (r) goto out;
-
     dir = opendir(SYSFS_PCIBACK_DRIVER);
     if (NULL == dir) {
         if (errno == ENOENT) {
@@ -455,9 +450,6 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
-            continue;
-
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
@@ -467,6 +459,10 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
         memset(new, 0, sizeof(*new));
         pci_struct_fill(new, dom, bus, dev, func, 0);
+
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+            continue;
+
         (*num)++;
     }
 
@@ -737,48 +733,6 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCI_INFO_PATH "/libxl/pci"
-
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    return node ?
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
-                  node) :
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
-}
-
-
-static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node, const char *val)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", val, path);
-    }
-}
-
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    return libxl__xs_read(gc, XBT_NULL, path);
-}
-
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
-                               const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-
-    /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL, path);
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
@@ -1594,6 +1548,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
+    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    if (rc) goto out;
+
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
@@ -1721,6 +1678,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->domain, pci->bus, pci->dev, pci->func,
              rc);
+        pci_info_xs_remove(gc, pci, "domid");
     }
     aodev->rc = rc;
     aodev->callback(egc, aodev);
@@ -2282,6 +2240,9 @@ out:
     libxl__xswait_stop(gc, &prs->xswait);
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
+
+    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35012.66434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8y-00013r-RD; Mon, 23 Nov 2020 18:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35012.66434; Mon, 23 Nov 2020 18:00:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8y-000126-7M; Mon, 23 Nov 2020 18:00:52 +0000
Received: by outflank-mailman (input) for mailman id 35012;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mr-PX
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00071H-Q5; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtl-0000at-QK; Mon, 23 Nov 2020 17:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mr-PX
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=z6HgaaJxE43aYUfbefthFXj4bV6TUshSIIqyN3QunIw=; b=WKA6HRQs8Jvwkj8kYCCZlo0Pz
	+G5B9onTWOXSwgcwuaP1C35lJX0hvb8MCuvL3p4nMwk1NYV5RF7gXVRE8xHFpRip5UkxaafpI1ZTu
	i1Yn/4+dqIExp/Kq8TbIiLtPQvUplDmgi/bjaliHaOu2TGWSxo3xeKluqkjb4alwqO+/Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00071H-Q5; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtl-0000at-QK; Mon, 23 Nov 2020 17:45:09 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 15/23] docs/man: improve documentation of PCI_SPEC_STRING...
Date: Mon, 23 Nov 2020 17:44:55 +0000
Message-Id: <20201123174503.6800-16-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and prepare for adding support for non-positional parsing of 'bdf' and
'vslot' in a subsequent patch.

Also document 'BDF' as a first-class parameter type and fix the documentation
to state that the default value of 'rdm_policy' is actually 'strict', not
'relaxed', as can be seen in libxl__device_pci_setdefault().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 177 ++++++++++++++++++++++++++++++------
 1 file changed, 148 insertions(+), 29 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 72a27bd95d..4dd73bc498 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -6,32 +6,105 @@ xl-pci-configuration - XL PCI Configuration Syntax
 
 =head1 SYNTAX
 
-This document specifies the format for B<PCI_SPEC_STRING> which is used by
-the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+This document specifies the format for B<BDF> and B<PCI_SPEC_STRING> which are
+used by the L<xl.cfg(5)> pci configuration option, and related L<xl(1)>
+commands.
 
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+A B<BDF> has the following form:
+
+    [DDDD:]BB:SS.F
+
+B<DDDD> is the domain number, B<BB> is the bus number, B<SS> is the device (or
+slot) number, and B<F> is the function number. This is the same scheme as
+used in the output of L<lspci(1)> for the device in question. By default
+L<lspci(1)> will omit the domain (B<DDDD>) if it is zero and hence a zero
+value for domain may also be omitted when specifying a B<BDF>.
+
+Each B<PCI_SPEC_STRING> has one of the following forms:
+
+=over 4
+
+    [<bdf>[@<vslot>],][<key>=<value>,]*
+    [<key>=<value>,]*
+
+=back
+
+For example, these strings are equivalent:
 
 =over 4
 
-=item B<[DDDD:]BB:DD.F>
+    36:00.0@20,seize=1
+    36:00.0,vslot=20,seize=1
+    bdf=36:00.0,vslot=20,seize=1
 
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
+=back
+
+More formally, the string is a series of comma-separated keyword/value
+pairs, flags and positional parameters.  Parameters which are not bare
+keywords and which do not contain "=" symbols are assigned to the
+positional parameters, in the order specified below.  The positional
+parameters may also be specified by name.
+
+Each parameter may be specified at most once, either as a positional
+parameter or a named parameter.  Default values apply if the parameter
+is not specified, or if it is specified with an empty value (whether
+positionally or explicitly).
+
+B<NOTE>: In the context of B<xl pci-detach> (see L<xl(1)>), parameters other
+than B<bdf> will be ignored.
+B<bdf> will be ignored.
+
+=head1 Positional Parameters
+
+=over 4
+
+=item B<bdf>=I<BDF>
+
+=over 4
 
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
+=item Description
 
-=item B<@VSLOT>
+This identifies the PCI device from the host perspective.
 
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
+In the context of a B<PCI_SPEC_STRING> you may specify the function (B<F>) as
+B<*> to indicate all functions of a multi-function device.
 
-=item B<permissive=BOOLEAN>
+=item Default Value
+
+None. This parameter is mandatory as it identifies the device.
+
+=back
+
+=item B<vslot>=I<NUMBER>
+
+=over 4
+
+=item Description
+
+Specifies the virtual slot (device) number where the guest will see this
+device. For example, running L<lspci(1)> in a Linux guest where B<vslot>
+was specified as C<8> would identify the device as C<00:08.0>. Virtual domain
+and bus numbers are always 0.
+
+B<NOTE:> This parameter is always parsed as a hexadecimal value.
+
+=item Default Value
+
+None. This parameter is not mandatory. An available B<vslot> will be selected
+if this parameter is not specified.
+
+=back
+
+=back
+
+=head1 Other Parameters and Flags
+
+=over 4
+
+=item B<permissive>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 By default pciback only allows PV guests to write "known safe" values
 into PCI configuration space, likewise QEMU (both qemu-xen and
@@ -46,33 +119,79 @@ more control over the device, which may have security or stability
 implications.  It is recommended to only enable this option for
 trusted VMs under administrator's control.
 
-=item B<msitranslate=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<msitranslate>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 Specifies that MSI-INTx translation should be turned on for the PCI
 device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
+the PCI device regardless of whether the guest uses INTx or MSI.
+
+=item Default Value
+
+Some device drivers, such as NVIDIA's, detect an inconsistency and do not
 function when this option is enabled. Therefore the default is false (0).
 
-=item B<seize=BOOLEAN>
+=back
+
+=item B<seize>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
+Tells L<xl(1)> to automatically attempt to make the device assignable to
+guests if that has not already been done by the B<pci-assignable-add>
+command.
 
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+B<WARNING:> If you set this option, L<xl(1)> will gladly re-assign a critical
+system device, such as a network or disk controller in use by dom0,
+without confirmation.  Please use with care.
 
-=item B<power_mgmt=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<power_mgmt>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
+D0-D3hot power management states for the PCI device.
+
+=item Default Value
+
+0
 
-=item B<rdm_policy=STRING>
+=back
+
+=item B<rdm_policy>=I<STRING>
+
+=over 4
+
+=item Description
 
 B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
+option in L<xl.cfg(5)> but just specific to a given device.
 
-Note: this would override global B<rdm> option.
+B<NOTE>: This overrides the global B<rdm> option.
+
+=item Default Value
+
+"strict"
+
+=back
 
 =back
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35013.66448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG90-00016x-HE; Mon, 23 Nov 2020 18:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35013.66448; Mon, 23 Nov 2020 18:00:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG8z-00015T-Dd; Mon, 23 Nov 2020 18:00:53 +0000
Received: by outflank-mailman (input) for mailman id 35013;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000mw-RN
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00071B-Oc; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtm-0000at-PK; Mon, 23 Nov 2020 17:45:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000mw-RN
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=4LIIgPLaZ7owad3kxi5/a8tOUYDqOVCOVfSVh7XjWYQ=; b=jFr8hOSMe3GoL6yhE0bjzIQcY
	M7OP1k/DqWcD7HPqz1Acd03PUlZdQd8HOgnBPW/lvwmtDvXQChs2sRutOjfJaBOCoezLTnB4EdPDr
	mXnkydfn788YTan8wUkMxuh72w7MkH8CSnZz/AFH2vRwHxPB+/O8lTXDFW4IoZlHABHG8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00071B-Oc; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtm-0000at-PK; Mon, 23 Nov 2020 17:45:10 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 19/23] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
Date: Mon, 23 Nov 2020 17:44:59 +0000
Message-Id: <20201123174503.6800-20-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.

This patch modifies the API and callers accordingly. It also modifies
several internal functions in libxl_pci.c that support the API to also use
'libxl_pci_bdf'.

NOTE: The OCaml bindings are adjusted to accommodate the interface change.
      It should therefore not affect compatibility with OCaml-based utilities.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  15 ++-
 tools/libs/light/libxl_pci.c         | 215 +++++++++++++++++++----------------
 tools/ocaml/libs/xl/xenlight_stubs.c |  15 ++-
 tools/xl/xl_pci.c                    |  32 +++---
 4 files changed, 157 insertions(+), 120 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5edacccbd1..5703fdf367 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -470,6 +470,13 @@
 #define LIBXL_HAVE_PCI_BDF 1
 
 /*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
+ * libxl_device_pci_assignable_add/remove/list/list_free() functions all
+ * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 3cfba0e527..f9ace1faec 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,26 +25,33 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pci_encode_bdf(libxl_device_pci *pci)
+static unsigned int pci_encode_bdf(libxl_pci_bdf *pcibdf)
 {
     unsigned int value;
 
-    value = pci->bdf.domain << 16;
-    value |= (pci->bdf.bus & 0xff) << 8;
-    value |= (pci->bdf.dev & 0x1f) << 3;
-    value |= (pci->bdf.func & 0x7);
+    value = pcibdf->domain << 16;
+    value |= (pcibdf->bus & 0xff) << 8;
+    value |= (pcibdf->dev & 0x1f) << 3;
+    value |= (pcibdf->func & 0x7);
 
     return value;
 }
 
+static void pcibdf_struct_fill(libxl_pci_bdf *pcibdf, unsigned int domain,
+                               unsigned int bus, unsigned int dev,
+                               unsigned int func)
+{
+    pcibdf->domain = domain;
+    pcibdf->bus = bus;
+    pcibdf->dev = dev;
+    pcibdf->func = func;
+}
+
 static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
+    pcibdf_struct_fill(&pci->bdf, domain, bus, dev, func);
     pci->vdevfn = vdevfn;
 }
 
@@ -350,8 +357,8 @@ static bool is_pci_in_array(libxl_device_pci *pcis, int num,
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
+static int sysfs_write_bdf(libxl__gc *gc, const char *sysfs_path,
+                           libxl_pci_bdf *pcibdf)
 {
     int rc, fd;
     char *buf;
@@ -362,8 +369,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-                    pci->bdf.dev, pci->bdf.func);
+    buf = GCSPRINTF(PCI_BDF, pcibdf->domain, pcibdf->bus,
+                    pcibdf->dev, pcibdf->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -378,22 +385,22 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 
 #define PCI_INFO_PATH "/libxl/pci"
 
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_path(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
 }
 
 
-static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+static int pci_info_xs_write(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node, const char *val)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
 
     if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
@@ -401,28 +408,28 @@ static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
     return rc;
 }
 
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_read(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
 
     return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                                const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new;
+    libxl_pci_bdf *pcibdfs = NULL, *new;
     struct dirent *de;
     DIR *dir;
 
@@ -443,15 +450,15 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcibdfs, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcis = new;
-        new = pcis + *num;
+        pcibdfs = new;
+        new = pcibdfs + *num;
 
-        libxl_device_pci_init(new);
-        pci_struct_fill(new, dom, bus, dev, func, 0);
+        libxl_pci_bdf_init(new);
+        pcibdf_struct_fill(new, dom, bus, dev, func);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
             continue;
@@ -462,32 +469,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
     closedir(dir);
 out:
     GC_FREE;
-    return pcis;
+    return pcibdfs;
 }
 
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num)
 {
     int i;
 
     for (i = 0; i < num; i++)
-        libxl_device_pci_dispose(&list[i]);
+        libxl_pci_bdf_dispose(&list[i]);
 
     free(list);
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->bdf.domain,
-                           pci->bdf.bus,
-                           pci->bdf.dev,
-                           pci->bdf.func);
+                           pcibdf->domain,
+                           pcibdf->bus,
+                           pcibdf->dev,
+                           pcibdf->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -501,7 +508,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -639,8 +646,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
+/* Scan through /sys/.../pciback/slots looking for BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     FILE *f;
     int rc = 0;
@@ -653,11 +660,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
         return ERROR_FAIL;
     }
 
-    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->bdf.domain
-            && bus == pci->bdf.bus
-            && dev == pci->bdf.dev
-            && func == pci->bdf.func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
+        if (dom == pcibdf->domain
+           && bus == pcibdf->bus
+           && dev == pcibdf->dev
+           && func == pcibdf->func) {
             rc = 1;
             goto out;
         }
@@ -667,7 +674,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     char * spath;
     int rc;
@@ -683,8 +690,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->bdf.domain, pci->bdf.bus,
-                      pci->bdf.dev, pci->bdf.func);
+                      pcibdf->domain, pcibdf->bus,
+                      pcibdf->dev, pcibdf->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -695,40 +702,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_assign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     int rc;
 
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pcibdf)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcibdf) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pcibdf) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -737,7 +744,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pci,
+                                            libxl_pci_bdf *pcibdf,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -747,10 +754,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->bdf.domain;
-    bus = pci->bdf.bus;
-    dev = pci->bdf.dev;
-    func = pci->bdf.func;
+    dom = pcibdf->domain;
+    bus = pcibdf->bus;
+    dev = pcibdf->dev;
+    func = pcibdf->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -760,7 +767,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
+    rc = pciback_dev_is_assigned(gc, pcibdf);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -770,7 +777,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -779,9 +786,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
+            pci_info_xs_write(gc, pcibdf, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
+                     pci_info_xs_read(gc, pcibdf, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -789,10 +796,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
+        pci_info_xs_remove(gc, pcibdf, "driver_path");
     }
 
-    if ( pciback_dev_assign(gc, pci) ) {
+    if ( pciback_dev_assign(gc, pcibdf) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -803,7 +810,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -814,7 +821,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pci,
+                                               libxl_pci_bdf *pcibdf,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -822,24 +829,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-            pci->bdf.dev, pci->bdf.func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcibdf->domain,
+            pcibdf->bus, pcibdf->dev, pcibdf->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pcibdf)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
+        pciback_dev_unassign(gc, pcibdf);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    driver_path = pci_info_xs_read(gc, pcibdf, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -847,12 +854,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
+                                 pcibdf) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_info_xs_remove(gc, pci, "driver_path");
+            pci_info_xs_remove(gc, pcibdf, "driver_path");
         }
     } else {
         if ( rebind ) {
@@ -864,26 +871,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
@@ -1385,7 +1392,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+                             &pci->bdf) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1401,7 +1408,8 @@ out_no_irq:
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(&pci->bdf),
+                             flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1480,15 +1488,28 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static int is_bdf_in_array(libxl_pci_bdf *pcibdfs, int num,
+                           libxl_pci_bdf *pcibdf)
 {
-    libxl_device_pci *pcis;
+    int i;
+
+    for(i = 0; i < num; i++) {
+        if (COMPARE_BDF(pcibdf, &pcibdfs[i]))
+            break;
+    }
+
+    return i < num;
+}
+
+static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
+{
+    libxl_pci_bdf *pcibdfs;
     int num;
     bool assignable;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
-    assignable = is_pci_in_array(pcis, num, pci);
-    libxl_device_pci_assignable_list_free(pcis, num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
+    assignable = is_bdf_in_array(pcibdfs, num, pcibdf);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 
     return assignable;
 }
@@ -1523,7 +1544,8 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
+        rc = xc_test_assign_device(ctx->xch, domid,
+                                   pci_encode_bdf(&pci->bdf));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
@@ -1537,20 +1559,20 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
-        rc = libxl__device_pci_assignable_add(gc, pci, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pci_assignable(ctx, pci)) {
+    if (!is_bdf_assignable(ctx, &pci->bdf)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    rc = pci_info_xs_write(gc, &pci->bdf, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
     libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
@@ -1674,7 +1696,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
-        pci_info_xs_remove(gc, pci, "domid");
+        pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
@@ -2114,7 +2136,8 @@ static void pci_remove_detached(libxl__egc *egc,
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
+        rc = xc_deassign_device(CTX->xch, domid,
+                                pci_encode_bdf(&pci->bdf));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
@@ -2243,7 +2266,7 @@ out:
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
 
-    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+    if (!rc) pci_info_xs_remove(gc, &pci->bdf, "domid");
 
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d..2388f23869 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,7 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -861,7 +861,7 @@ value stub_xl_device_pci_assignable_remove(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_remove(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_remove(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -876,7 +876,7 @@ value stub_xl_device_pci_assignable_list(value ctx)
 {
 	CAMLparam1(ctx);
 	CAMLlocal2(list, temp);
-	libxl_device_pci *c_list;
+	libxl_pci_bdf *c_list;
 	int i, nb;
 	uint32_t c_domid;
 
@@ -889,11 +889,18 @@ value stub_xl_device_pci_assignable_list(value ctx)
 
 	list = temp = Val_emptylist;
 	for (i = 0; i < nb; i++) {
+		libxl_device_pci pci;
+
+		libxl_device_pci_init(&pci);
+		libxl_pci_bdf_copy(CTX, &pci.bdf, &c_list[i]);
+
 		list = caml_alloc_small(2, Tag_cons);
 		Field(list, 0) = Val_int(0);
 		Field(list, 1) = temp;
 		temp = list;
-		Store_field(list, 0, Val_device_pci(&c_list[i]));
+		Store_field(list, 0, Val_device_pci(&pci));
+
+		libxl_device_pci_dispose(&pci);
 	}
 	libxl_device_pci_assignable_list_free(c_list, nb);
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 9c24496cb2..37708b4eb1 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -154,19 +154,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcis == NULL )
+    if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
-               pcis[i].bdf.func);
+               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
+               pcibdfs[i].func);
     }
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -183,24 +183,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -225,24 +225,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35014.66462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG91-0001B3-Qt; Mon, 23 Nov 2020 18:00:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35014.66462; Mon, 23 Nov 2020 18:00:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG91-0001A4-1O; Mon, 23 Nov 2020 18:00:55 +0000
Received: by outflank-mailman (input) for mailman id 35014;
 Mon, 23 Nov 2020 18:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000n1-SV
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-000719-N8; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtl-0000at-7A; Mon, 23 Nov 2020 17:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000n1-SV
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Ni3v8rNOQnYX+FJEz4ZSjie9Y3E4/QZ6KFSYwE8qkX0=; b=HUbcLcri8euWWCAQek9iap8pf
	gb9/NsWal+w/ACsywDUUFgGvNEd77afn1hL3FXYDMABhIAAUL7f6O4K+zx69kluOvWtVulr7DQ4ig
	PtfOnRHlfWmW1kTtMMWq2RuyOTcPsk2pPIXWBbAzjDOh2Jw5SFv4HLe8LgbxIpS4VFM9g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-000719-N8; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtl-0000at-7A; Mon, 23 Nov 2020 17:45:09 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 12/23] libxl: add libxl_device_pci_assignable_list_free()...
Date: Mon, 23 Nov 2020 17:44:52 +0000
Message-Id: <20201123174503.6800-13-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to be used by callers of libxl_device_pci_assignable_list().

Currently there is no API for callers of libxl_device_pci_assignable_list()
to free the list. The xl function pciassignable_list() calls
libxl_device_pci_dispose() on each element of the returned list, but
libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
of libxl_device_pci_assignable_list() call libxl_device_pci_init().

This patch adds the new API function, makes sure it is used everywhere and
also modifies libxl_device_pci_assignable_list() to initialize list
entries rather than just zeroing them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl_pci.c         | 14 ++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +--
 tools/xl/xl_pci.c                    |  3 +--
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ee52d3cf7e..8225809d94 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -458,6 +458,12 @@
 #define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE indicates that the
+ * libxl_device_pci_assignable_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2369,6 +2375,7 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0f41939d1f..5a3352c2ec 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -457,7 +457,7 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         pcis = new;
         new = pcis + *num;
 
-        memset(new, 0, sizeof(*new));
+        libxl_device_pci_init(new);
         pci_struct_fill(new, dom, bus, dev, func, 0);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
@@ -472,6 +472,16 @@ out:
     return pcis;
 }
 
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    int i;
+
+    for (i = 0; i < num; i++)
+        libxl_device_pci_dispose(&list[i]);
+
+    free(list);
+}
+
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
 static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
@@ -1490,7 +1500,7 @@ static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
             pcis[i].func == pci->func)
             break;
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
     return i != num;
 }
 
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 1181971da4..352a00134d 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -894,9 +894,8 @@ value stub_xl_device_pci_assignable_list(value ctx)
 		Field(list, 1) = temp;
 		temp = list;
 		Store_field(list, 0, Val_device_pci(&c_list[i]));
-		libxl_device_pci_dispose(&c_list[i]);
 	}
-	free(c_list);
+	libxl_device_pci_assignable_list_free(c_list, nb);
 
 	CAMLreturn(list);
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 7c0f102ac7..f71498cbb5 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -164,9 +164,8 @@ static void pciassignable_list(void)
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:00:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35015.66472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG93-0001Ez-Dy; Mon, 23 Nov 2020 18:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35015.66472; Mon, 23 Nov 2020 18:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khG92-0001Dp-B9; Mon, 23 Nov 2020 18:00:56 +0000
Received: by outflank-mailman (input) for mailman id 35015;
 Mon, 23 Nov 2020 18:00:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khG8r-0000n6-W1
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khG8q-00071L-Rf; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khFtn-0000at-8G; Mon, 23 Nov 2020 17:45:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8r-0000n6-W1
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:00:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=g4vlrHOO7l/XQjq8yeT79RpnipWh6aBaWnJezRVSrUw=; b=LX7dN7lqiKt5xN25kuRAwlDME
	7EwIhV9x95chJflfVcX6MxKsQ7+v5PefyuBMf6REgfCQfwOYcR6U7F0RCpALMn2J6uOmN1dYQGIUZ
	KhCVVkE9aE27TptUeKIhYmDLjp9wIQoNC5IcrX/DVP+u3Q7KYX/OTxNc65Ya9lFyxGQ+U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khG8q-00071L-Rf; Mon, 23 Nov 2020 18:00:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khFtn-0000at-8G; Mon, 23 Nov 2020 17:45:11 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 21/23] xl / libxl: support naming of assignable devices
Date: Mon, 23 Nov 2020 17:45:01 +0000
Message-Id: <20201123174503.6800-22-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
References: <20201123174503.6800-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch modifies libxl_device_pci_assignable_add() to take an optional
'name' argument, which (if supplied) is saved into xenstore and can hence be
used to refer to the now-assignable BDF in subsequent operations. To
facilitate this, a new libxl_device_pci_assignable_name2bdf() function is
added.

The xl code is modified to allow a name to be specified in the
'pci-assignable-add' operation and also allow an option to be specified to
'pci-assignable-list' requesting that names be displayed. The latter is
facilitated by a new libxl_device_pci_assignable_bdf2name() function. Finally
xl 'pci-assignable-remove' is modified so that either a name or BDF can be
supplied. The supplied 'identifier' is first assumed to be a name, but if
libxl_device_pci_assignable_name2bdf() fails to find a matching BDF the
identifier itself will be parsed as a BDF. Names may only include printable
characters and may not include whitespace.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                | 19 +++++++-
 tools/libs/light/libxl_pci.c         | 86 +++++++++++++++++++++++++++++++++---
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +-
 tools/xl/xl_cmdtable.c               | 12 +++--
 tools/xl/xl_pci.c                    | 84 ++++++++++++++++++++++++-----------
 5 files changed, 166 insertions(+), 38 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5703fdf367..4025d3a3d4 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -477,6 +477,14 @@
 #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
 
 /*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_NAME indicates that the
+ * libxl_device_pci_assignable_add() function takes a 'name' argument
+ * and that the libxl_device_pci_assignable_name2bdf() and
+ * libxl_device_pci_assignable_bdf2name() functions are defined.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2385,11 +2393,18 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    const char *name, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                       int rebind);
 libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name);
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf);
+
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
 int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index f9ace1faec..a1c9ae0d5b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -745,6 +745,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_pci_bdf *pcibdf,
+                                            const char *name,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -753,6 +754,23 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     int rc;
     struct stat st;
 
+    /* Sanitise any name that was passed */
+    if (name) {
+        unsigned int i, n = strlen(name);
+
+        if (n > 64) { /* Reasonable upper bound on name length */
+            LOG(ERROR, "Name too long");
+            return ERROR_FAIL;
+        }
+
+        for (i = 0; i < n; i++) {
+            if (!isgraph(name[i])) {
+                LOG(ERROR, "Names may only include printable characters");
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     /* Local copy for convenience */
     dom = pcibdf->domain;
     bus = pcibdf->bus;
@@ -773,7 +791,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
     if ( rc ) {
         LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto quarantine;
+        goto name;
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
@@ -804,7 +822,12 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-quarantine:
+name:
+    if (name)
+        pci_info_xs_write(gc, pcibdf, "name", name);
+    else
+        pci_info_xs_remove(gc, pcibdf, "name");
+
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -868,16 +891,18 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
         }
     }
 
+    pci_info_xs_remove(gc, pcibdf, "name");
+
     return 0;
 }
 
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
-                                    int rebind)
+                                    const char *name, int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, name, rebind);
 
     GC_FREE;
     return rc;
@@ -896,6 +921,57 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
     return rc;
 }
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name)
+{
+    GC_INIT(ctx);
+    char **bdfs;
+    libxl_pci_bdf *pcibdf;
+    unsigned int i, n;
+
+    bdfs = libxl__xs_directory(gc, XBT_NULL, PCI_INFO_PATH, &n);
+    if (!n)
+        goto out;
+
+    pcibdf = calloc(1, sizeof(*pcibdf));
+    if (!pcibdf)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        unsigned dom, bus, dev, func;
+        const char *tmp;
+
+        if (sscanf(bdfs[i], PCI_BDF_XSPATH, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        pcibdf_struct_fill(pcibdf, dom, bus, dev, func);
+
+        tmp = pci_info_xs_read(gc, pcibdf, "name");
+        if (tmp && !strcmp(tmp, name))
+            goto out;
+    }
+
+    free(pcibdf);
+    pcibdf = NULL;
+
+out:
+    GC_FREE;
+    return pcibdf;
+}
+
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf)
+{
+    GC_INIT(ctx);
+    char *name = NULL, *tmp = pci_info_xs_read(gc, pcibdf, "name");
+
+    if (tmp)
+        name = strdup(tmp);
+
+    GC_FREE;
+    return name;
+}
+
 /*
  * This function checks that all functions of a device are bound to pciback
  * driver. It also initialises a bit-mask of which function numbers are present
@@ -1560,7 +1636,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     if (rc) goto out;
 
     if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
-        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, NULL, 1);
         if ( rc )
             goto out;
     }
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2388f23869..96bb4655e0 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,8 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, NULL,
+					      c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 2ee0c49673..9e9aa448e2 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -105,21 +105,25 @@ struct cmd_spec cmd_table[] = {
     { "pci-assignable-add",
       &main_pciassignable_add, 0, 1,
       "Make a device assignable for pci-passthru",
-      "<BDF>",
+      "[options] <BDF>",
+      "-n NAME, --name=NAME    Name the assignable device.\n"
       "-h                      Print this help.\n"
     },
     { "pci-assignable-remove",
       &main_pciassignable_remove, 0, 1,
       "Remove a device from being assignable",
-      "[options] <BDF>",
+      "[options] <BDF>|NAME",
       "-h                      Print this help.\n"
       "-r                      Attempt to re-assign the device to the\n"
-      "                        original driver"
+      "                        original driver."
     },
     { "pci-assignable-list",
       &main_pciassignable_list, 0, 0,
       "List all the assignable pci devices",
-      "",
+      "[options]",
+      "-h                      Print this help.\n"
+      "-n, --show-names        Display assignable device names where\n"
+      "                        supplied.\n"
     },
     { "pause",
       &main_pause, 0, 1,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 37708b4eb1..f1b58b3976 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -152,7 +152,7 @@ int main_pciattach(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
-static void pciassignable_list(void)
+static void pciassignable_list(bool show_names)
 {
     libxl_pci_bdf *pcibdfs;
     int num, i;
@@ -162,9 +162,15 @@ static void pciassignable_list(void)
     if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
-        printf("%04x:%02x:%02x.%01x\n",
-               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
-               pcibdfs[i].func);
+        libxl_pci_bdf *pcibdf = &pcibdfs[i];
+        char *name = show_names ?
+            libxl_device_pci_assignable_bdf2name(ctx, pcibdf) : NULL;
+
+        printf("%04x:%02x:%02x.%01x %s\n",
+               pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
+               name ?: "");
+
+        free(name);
     }
     libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
@@ -172,16 +178,23 @@ static void pciassignable_list(void)
 int main_pciassignable_list(int argc, char **argv)
 {
     int opt;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-        /* No options */
+    static struct option opts[] = {
+        {"show-names", 0, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    bool show_names = false;
+
+    SWITCH_FOREACH_OPT(opt, "n", opts, "pci-assignable-list", 0) {
+    case 'n':
+        show_names = true;
+        break;
     }
 
-    pciassignable_list();
+    pciassignable_list(show_names);
     return 0;
 }
 
-static int pciassignable_add(const char *bdf, int rebind)
+static int pciassignable_add(const char *bdf, const char *name, int rebind)
 {
     libxl_pci_bdf pcibdf;
     XLU_Config *config;
@@ -197,7 +210,7 @@ static int pciassignable_add(const char *bdf, int rebind)
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, name, rebind))
         r = 1;
 
     libxl_pci_bdf_dispose(&pcibdf);
@@ -210,39 +223,58 @@ int main_pciassignable_add(int argc, char **argv)
 {
     int opt;
     const char *bdf = NULL;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-        /* No options */
+    static struct option opts[] = {
+        {"name", 1, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    const char *name = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "n:", opts, "pci-assignable-add", 0) {
+    case 'n':
+        name = optarg;
+        break;
     }
 
     bdf = argv[optind];
 
-    if (pciassignable_add(bdf, 1))
+    if (pciassignable_add(bdf, name, 1))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciassignable_remove(const char *bdf, int rebind)
+static int pciassignable_remove(const char *ident, int rebind)
 {
-    libxl_pci_bdf pcibdf;
+    libxl_pci_bdf *pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_pci_bdf_init(&pcibdf);
-
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
-        exit(2);
+    pcibdf = libxl_device_pci_assignable_name2bdf(ctx, ident);
+    if (!pcibdf) {
+        pcibdf = calloc(1, sizeof(*pcibdf));
+
+        if (!pcibdf) {
+            fprintf(stderr,
+                    "pci-assignable-remove: failed to allocate memory\n");
+            exit(2);
+        }
+
+        libxl_pci_bdf_init(pcibdf);
+        if (xlu_pci_parse_bdf(config, pcibdf, ident)) {
+            fprintf(stderr,
+                    "pci-assignable-remove: malformed BDF '%s'\n", ident);
+            exit(2);
+        }
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, pcibdf, rebind))
         r = 1;
 
-    libxl_pci_bdf_dispose(&pcibdf);
+    libxl_pci_bdf_dispose(pcibdf);
+    free(pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -251,7 +283,7 @@ static int pciassignable_remove(const char *bdf, int rebind)
 int main_pciassignable_remove(int argc, char **argv)
 {
     int opt;
-    const char *bdf = NULL;
+    const char *ident = NULL;
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
@@ -260,9 +292,9 @@ int main_pciassignable_remove(int argc, char **argv)
         break;
     }
 
-    bdf = argv[optind];
+    ident = argv[optind];
 
-    if (pciassignable_remove(bdf, rebind))
+    if (pciassignable_remove(ident, rebind))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:27:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35114.66492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khGYa-0004KN-Gr; Mon, 23 Nov 2020 18:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35114.66492; Mon, 23 Nov 2020 18:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khGYa-0004KG-Dn; Mon, 23 Nov 2020 18:27:20 +0000
Received: by outflank-mailman (input) for mailman id 35114;
 Mon, 23 Nov 2020 18:27:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khGYZ-0004K8-GQ
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:27:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khGYX-0007a9-VS; Mon, 23 Nov 2020 18:27:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khGYX-0003rA-Jg; Mon, 23 Nov 2020 18:27:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khGYZ-0004K8-GQ
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:27:19 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=v8AYlSkV+k3q8JSiD8YvMwRTa5fXhluaknB6u6qK634=; b=LvHzQOnPsJEjHZwEzfec+omYV4
	QCmj/yex+J0N13O5i7EOH7010nUidzredCawIlB+6hSsjVwzBdixgcsCkpxPb62RpfLmbCSdk9Ix9
	i7f02EvB26axUZG9mh3Wb7OB6ap0VnBn15GRxRdVZ4VGNK3g7PKJl2UIZQ6xpRcLp7is=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khGYX-0007a9-VS; Mon, 23 Nov 2020 18:27:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khGYX-0003rA-Jg; Mon, 23 Nov 2020 18:27:17 +0000
Subject: Re: AW: AW: AW: AW: AW: Xen data from meta-virtualization layer
To: Leo Krueger <leo.krueger@zal.aero>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Cc: Peng Fan <peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>,
 Cornelia Bruelhart <cornelia.bruelhart@zal.aero>,
 "oleksandr_andrushchenko@epam.com" <oleksandr_andrushchenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
References: <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
 <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b67581c6-6682-5059-55d1-a9c695a8cdc3@xen.org>
Date: Mon, 23 Nov 2020 18:27:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 22/11/2020 22:55, Leo Krueger wrote:
> Hi Julien,

Hi Leo,

> 
> Finally I could try out what you suggested; please find my answers inline.

Thank you for sending the logs!

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Wednesday, 18 November 2020 13:24
>> To: Stefano Stabellini <stefano.stabellini@xilinx.com>; Leo Krueger
>> <leo.krueger@zal.aero>
>> Cc: Peng Fan <peng.fan@nxp.com>; brucea@xilinx.com; Cornelia Bruelhart
>> <cornelia.bruelhart@zal.aero>; oleksandr_andrushchenko@epam.com; xen-
>> devel@lists.xenproject.org; Bertrand.Marquis@arm.com
>> Subject: Re: AW: AW: AW: AW: Xen data from meta-virtualization layer
>>
>> Hi,
>>
>> On 17/11/2020 23:53, Stefano Stabellini wrote:
>>> Adding Bertrand, Oleksandr, Julien, and others -- they have a more
>>> recent experience with GICv3 ITS than me and might be able to help.
>>> I am attaching the device tree Leo sent a few days ago for reference.
>>>
>>>
>>> Typically when you can set the ethernet link up and no packets are
>>> exchanged it is because of a missing interrupt. In this case a missing
>>> MSI.
>>>
>>> Bertrand, I believe you tried the GIC ITS driver with PCI devices
>>> recently. It is expected to work correctly with MSIs in Dom0, right?
>>
>> OSSTest has some hardware (e.g. Thunder-X) where ITS is required to boot
>> Dom0. I haven't seen any failure on recent Xen. We are testing 4.11 and
>> onwards on Thunder-X.
>>
>> However, it may be possible that some more work is necessary for other
>> hardware (e.g. workaround, missing code...). See more below.
>>
>>>
>>>
>>>
>>> On Tue, 17 Nov 2020, Leo Krueger wrote:
>>>> Hi,
>>>>
>>>> I enabled CONFIG_HAS_ITS (what a stupid mistake by me to not set it
>>>> before...) but then had to add the following node to my device tree
>>>>
>>>> 	gic_lpi_base: syscon@0x80000000 {
>>>> 		compatible = "gic-lpi-base";
>>
>> I couldn't find this compatible defined/used in Linux 5.10-rc4. @Leo, could
>> you clarify which flavor/version of Linux you are using?
> 
> It is Linux 4.19 from Yocto (Warrior release). XEN 4.13.2.

Do you have a link to the Linux tree? Are there any additional patches 
on top of vanilla?

> While searching around the Internet for any solution, I came across [0] which contained the gic-lpi-base node.
> So I just tried adding it (quite desperate I know) and voila, it at least brought me one step further (XEN exposing the ITS)...

I am slightly confused as to how this would help. Xen and, AFAICT, 
Linux don't understand gic-lpi-base. Do you have modifications in your 
Linux to use it?

Looking at the DT changes in [0], it looks like the node is not a child 
of gic@. So I think Xen will map the region to Dom0.

There are two things I notice:
   1) This region is RAM, but I can't find any reserved-memory node for 
it. Is there any specific code in Linux to reserve it?
   2) The implementation in U-Boot seems to suggest that the firmware 
will configure the LPIs and then enable them. If that's the case, then 
Xen needs to re-use the tables from the DT rather than allocating new 
ones. However, I would have expected an error message in the log:

    "GICv3: CPUx: Cannot initialize LPIs"
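If the region does need protecting from normal allocation, the usual 
way on the Linux side is a reserved-memory node. A hypothetical sketch 
only (the node name and size are illustrative; the base address matches 
the syscon@0x80000000 node quoted earlier, not anything from Leo's 
actual DT):

```dts
/* Illustrative only: keep the LPI table region out of normal RAM
 * allocation. Address and size are assumptions, not from the thread. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	gic_lpi_tables: lpi-tables@80000000 {
		reg = <0x0 0x80000000 0x0 0x100000>;
		no-map;
	};
};
```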

At least Xen should not expose gic-lpi-base to the kernel, but I will 
wait for more details about the Linux kernel used before commenting more.

I would also be interested in more details about the failure when 
gic-lpi-base is not added to your DT. In particular, I would like to 
understand why Xen would not expose the ITS, as we don't parse that node.

[...]

> For XEN 4.13.2 I had to adapt your patch slightly [1], see below (yes I know, quite ugly in parts).

No worries, debug patches are not meant to be nice to read ;).

> Find attached the boot log and an output of "xl dmesg" which is truncated due to the large amount of messages.
> 
> When enabling the network interface (gbe0), the following output is visible:
> 
> root@kontron-sal28:~# ip link set up dev gbe0
> (XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x0c: 000000170000000c 0000000000000001 0000000000000000 0000000000000000
> (XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x05: 0000000000000005 0000000000000000 0000000000000000 0000000000000000

0xc is INV and 0x5 is SYNC. Most likely the driver unmasks the 
interrupt by writing to the property table (accesses are not trapped by 
Xen) and then requests an invalidation of the cached state.
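For anyone reading the raw trace, the command type is encoded in bits 
[7:0] of the first doubleword of each ITS command. A quick Python 
sketch to decode lines like the ones above (the opcode table is 
transcribed from the GICv3 architecture spec, not from Xen's sources):

```python
# Decode the command opcode from the first doubleword (DW0) of an ITS
# command, as printed by the vgic-v3-its.c debug message. The opcode is
# in bits [7:0] of DW0; this table is a subset from the GICv3 spec.
ITS_COMMANDS = {
    0x01: "MOVI", 0x03: "INT", 0x04: "CLEAR", 0x05: "SYNC",
    0x08: "MAPD", 0x09: "MAPC", 0x0a: "MAPTI", 0x0b: "MAPI",
    0x0c: "INV", 0x0d: "INVALL", 0x0e: "MOVALL", 0x0f: "DISCARD",
}

def decode_its_command(dw0: int) -> str:
    """Return the mnemonic for the opcode in a 64-bit ITS command DW0."""
    return ITS_COMMANDS.get(dw0 & 0xff, f"UNKNOWN({dw0 & 0xff:#04x})")

# The two commands from the log above:
print(decode_its_command(0x000000170000000c))  # INV
print(decode_its_command(0x0000000000000005))  # SYNC
```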

> [   34.034598] Atheros 8031 ethernet 0000:00:00.3:05: attached PHY driver [Atheros 8031 ethernet] (mii_bus:phy_addr=0000:00:00.3:05, irq=POLL)
> [   34.041111] 8021q: adding VLAN 0 to HW filter on device gbe0
> [   34.041209] IPv6: ADDRCONF(NETDEV_UP): gbe0: link is not ready
> root@kontron-sal28:~# [   35.041951] fsl_enetc 0000:00:00.0 gbe0: Link is Down
> [   38.114426] fsl_enetc 0000:00:00.0 gbe0: Link is Up - 1Gbps/Full - flow control off
> [   38.114508] IPv6: ADDRCONF(NETDEV_CHANGE): gbe0: link becomes ready
> 
> Does that tell you anything?

It is at least a good sign because it means Linux is able to 
initialize/talk to the vITS.

I would lean towards one (or more) issues with the pITS and/or the 
device tree exposed to Linux. I am not entirely sure what exactly... I 
think having more details about the Linux setup would be helpful.

I will reply on Rahul's e-mail separately.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:42:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:42:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35122.66504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khGmj-000635-TY; Mon, 23 Nov 2020 18:41:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35122.66504; Mon, 23 Nov 2020 18:41:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khGmj-00062y-QM; Mon, 23 Nov 2020 18:41:57 +0000
Received: by outflank-mailman (input) for mailman id 35122;
 Mon, 23 Nov 2020 18:41:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khGmi-00062t-1G
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:41:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khGmg-0007un-HU; Mon, 23 Nov 2020 18:41:54 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khGmg-0004wA-9P; Mon, 23 Nov 2020 18:41:54 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=20n6of72vjnzsOaDGpjvEGBeHQ8aEYL1sayx0U86rec=; b=BTQyY1mBryRsF93eYlYiqXvDb9
	OMP8KCgjrwoejOpKRN7ZB0eUNih4Tzj85EZr5RyATOCrOS8IiPHj8n1KKVg6ajJGgwm7/SGRXL6se
	rM0ToMQVlSVUB+m6XKy7wWOXJneNIJRiKHgfyRhBCh+H5bqXV/fshQc2Fs2JAjNfG2xI=;
Subject: Re: Xen data from meta-virtualization layer
To: Rahul Singh <Rahul.Singh@arm.com>, Leo Krueger <leo.krueger@zal.aero>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Peng Fan <peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>,
 Cornelia Bruelhart <cornelia.bruelhart@zal.aero>,
 "oleksandr_andrushchenko@epam.com" <oleksandr_andrushchenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
 <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <50B2EEEF-4BF2-4511-98D5-F165A70E2EC6@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <15608dee-1f95-2b08-9fe5-cd015f4d1cdf@xen.org>
Date: Mon, 23 Nov 2020 18:41:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <50B2EEEF-4BF2-4511-98D5-F165A70E2EC6@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 23/11/2020 11:41, Rahul Singh wrote:
> Hello ,

Hi Rahul,

>> On 22 Nov 2020, at 10:55 pm, Leo Krueger <leo.krueger@zal.aero> wrote:
>> root@kontron-sal28:~# ip link set up dev gbe0
>> (XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x0c: 000000170000000c 0000000000000001 0000000000000000 0000000000000000
>> (XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x05: 0000000000000005 0000000000000000 0000000000000000 0000000000000000
>> [   34.034598] Atheros 8031 ethernet 0000:00:00.3:05: attached PHY driver [Atheros 8031 ethernet] (mii_bus:phy_addr=0000:00:00.3:05, irq=POLL)
>> [   34.041111] 8021q: adding VLAN 0 to HW filter on device gbe0
>> [   34.041209] IPv6: ADDRCONF(NETDEV_UP): gbe0: link is not ready
>> root@kontron-sal28:~# [   35.041951] fsl_enetc 0000:00:00.0 gbe0: Link is Down
>> [   38.114426] fsl_enetc 0000:00:00.0 gbe0: Link is Up - 1Gbps/Full - flow control off
>> [   38.114508] IPv6: ADDRCONF(NETDEV_CHANGE): gbe0: link becomes ready
>>
>> Does that tell you anything?
>>
> 
> I just checked the shared logs. What I found is that there is an error while booting when configuring the MSI for the PCI device; because of that, the Device ID generated out-of-band may not be mapped correctly to the ITS device table created while initialising the MSI for the device.
> I might be wrong, so let someone else also comment on this.

I think there might be multiple issues. You spotted one below :).

> [    0.173964] OF: /soc/pcie@1f0000000: Invalid msi-map translation - no match for rid 0xf8 on           (null)

Leo, just to confirm: this error message is not seen when booting 
Linux on bare metal?
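As an aside, the rid 0xf8 in that message is a standard PCI requester 
ID (bus in bits [15:8], device in [7:3], function in [2:0]), so it can 
be decoded with a few shifts. A small sketch (the helper name is mine, 
not from Linux):

```python
def rid_to_bdf(rid: int) -> str:
    """Split a 16-bit PCI requester ID into bus:device.function form."""
    bus = (rid >> 8) & 0xff
    dev = (rid >> 3) & 0x1f
    fn = rid & 0x7
    return f"{bus:02x}:{dev:02x}.{fn}"

print(rid_to_bdf(0xf8))  # 00:1f.0
```

So the msi-map failure is for device 00:1f.0 on that segment, which may 
help narrow down which entry is missing from the map.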

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 18:56:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 18:56:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35130.66516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khH0Z-00074D-6R; Mon, 23 Nov 2020 18:56:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35130.66516; Mon, 23 Nov 2020 18:56:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khH0Z-000746-1v; Mon, 23 Nov 2020 18:56:15 +0000
Received: by outflank-mailman (input) for mailman id 35130;
 Mon, 23 Nov 2020 18:56:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pQc=E5=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khH0X-000741-Ip
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 18:56:13 +0000
Received: from mail-yb1-xb41.google.com (unknown [2607:f8b0:4864:20::b41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ea3ab6f-d25f-4f22-850e-06b44c785b68;
 Mon, 23 Nov 2020 18:56:12 +0000 (UTC)
Received: by mail-yb1-xb41.google.com with SMTP id 2so16859257ybc.12
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 10:56:12 -0800 (PST)
X-Inumbo-ID: 4ea3ab6f-d25f-4f22-850e-06b44c785b68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=KNG9rf3jiBLOHJ1Qe+5HClUHwDC8G2v5PJoNSRMQj2o=;
        b=ZA1WcoeOdsWbRfOoumZEJTLTRH6V2Lpq2CDbZ0VVs1hbOT/vd/v8/YkJaULhb4MkV5
         MHmDtgQZ7Y6vQOoRafNcjdab1m8jmYbh8Ox0xcyAJF866JXyArBCoNYzebFkQV1wRZF3
         r3hM3WSnIq7Ht5VQ2PIwvurJMfamtV7PLgYZxEoHoT74qEv3IGeXPryfDRdu7AW/qetQ
         L1ocOXaYsoIrsq1AVQ8cgaa4G2qWRkZviQ+mOBHOVW/MFUti3ALLJAr2MKUeWh+s4BW6
         tAJsdEl41qUIuUvcW5DdLnVlmhWMC8lZFuXyyo4x1R3eX3DaWLHh1JzDrRZDt/CN6qga
         036w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=KNG9rf3jiBLOHJ1Qe+5HClUHwDC8G2v5PJoNSRMQj2o=;
        b=mdEyKoFaBnLfIEt61bzk0FujwNNJPAE11KkmI8FFlXeH6s33/pjmuhEQVMemcevFb0
         SmT0TP/HIHWynbLxcFblxw6Wfq5EXzn+TnUigHSJUMVXJeLkeZAPWTpuTRkUngviK4pL
         eL4G5fX41+OfwbsXNUNhSNPnNQqAFlITDhmwxWEgu6GCe17G4vUvzUp9b/Ofp2E6B4+Y
         Rz65yt7Nt0SR1JoayEzf2/tE4eaNXAmjgOmP9d9m5w8RnyCr+FPQg+zcgy5vMNHSvkQ4
         4IrtNVeBtlf2+0zKCVk5dh7wi3UVrM3qWVXss2B6E5tM4wMYarMnOJdg35l/HQgpcTv/
         QTXA==
X-Gm-Message-State: AOAM532Tk7wz2J1xhBBCli8QUhC3ZYx6DLRJiOvnIXma92PYyVMhaGvA
	bRlyP3JBBTqBnrFYrGe7p+lUg1Ygb7e7InKSwuM=
X-Google-Smtp-Source: ABdhPJyhWLCSkpSmtD0p55Cmpr9Ao1aJs0IYHWLu4Tcyj9q39OBvqgrIxMZMaEy7w1zacpD3mVr5R93EsjzwMBkGuSA=
X-Received: by 2002:a25:df55:: with SMTP id w82mr977719ybg.135.1606157772316;
 Mon, 23 Nov 2020 10:56:12 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com> <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
In-Reply-To: <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Mon, 23 Nov 2020 19:56:01 +0100
Message-ID: <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 23, 2020 at 4:58 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> Well, I used git.  It says that as of today in Linus' tree we have 889
> patches related to fall-throughs and the first series went in in
> October 2017 ... ignoring a couple of outliers back to February.

I can see ~10k insertions over ~1k commits and 15 years that mention a
fallthrough in the entire repo. That is including some commits (like
the biggest one, 960 insertions) that have nothing to do with C
fallthrough. A single kernel release has an order of magnitude more
changes than this...

But if we do the math: for an author, at even 1 minute per line changed
and assuming nothing can be automated at all, it would take 1 month of
work. For maintainers, a couple of trivial lines is noise compared to
many other patches.
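Spelled out, that estimate is just (using the ~10k-insertion figure 
from above and an assumed 8-hour working day):

```python
# Back-of-the-envelope check of the "1 month of work" estimate:
# ~10k changed lines at 1 minute per line, nothing automated.
lines_changed = 10_000
minutes_per_line = 1

hours = lines_changed * minutes_per_line / 60
working_days = hours / 8  # assuming 8-hour days

print(f"~{hours:.0f} hours, i.e. ~{working_days:.0f} working days")
```

That comes to roughly 21 working days, i.e. about one month.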

In fact, this discussion probably took more time than the time it
would take to review the 200 lines. :-)

> We're also complaining about the inability to recruit maintainers:
>
> https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
>
> And burn out:
>
> http://antirez.com/news/129

Accepting trivial and useful 1-line patches is not what makes a
voluntary maintainer quit... Thankless work with demanding deadlines is.

> The whole crux of your argument seems to be maintainers' time isn't
> important so we should accept all trivial patches

I have not said that, at all. In fact, I am a voluntary one and I
welcome patches like this. It takes very little effort on my side to
review them, and it helps the kernel overall. Paid maintainers are the
ones who can take care of big features/reviews.

> What I'm actually trying to articulate is a way of measuring value of
> the patch vs cost ... it has nothing really to do with who foots the
> actual bill.

I understand your point, but you were the one putting it in terms of a
junior FTE. In my view, 1 month of work (worst case) is very much worth
removing a class of errors from a critical codebase.

> One thesis I'm actually starting to formulate is that this continual
> devaluing of maintainers is why we have so much difficulty keeping and
> recruiting them.

That may very well be true, but I don't feel anybody has devalued
maintainers in this discussion.

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 19:04:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 19:04:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35139.66528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khH8i-00081s-0y; Mon, 23 Nov 2020 19:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35139.66528; Mon, 23 Nov 2020 19:04:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khH8h-00081l-U7; Mon, 23 Nov 2020 19:04:39 +0000
Received: by outflank-mailman (input) for mailman id 35139;
 Mon, 23 Nov 2020 19:04:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xOkN=E5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khH8h-00081g-90
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 19:04:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0b14898-5a65-4ba4-8ef9-d232c4a3f0f1;
 Mon, 23 Nov 2020 19:04:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 822DDAC41;
 Mon, 23 Nov 2020 19:04:36 +0000 (UTC)
X-Inumbo-ID: c0b14898-5a65-4ba4-8ef9-d232c4a3f0f1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606158276; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=s3ouLa9RJIbqH4FhgXgE1Dst59qBpgo5CefVFmZ55ag=;
	b=oS9oYDCMT3jlYSkOFZSnjQzaU35+kHdBIAFfPx5y8O6e+7Rd0dgRBaGt5jCbkCPNRL/Th8
	XM0N/TQ8Q84IggsAMF3fCAKoAPgRZ0XYRflaKegwFayb19R76chwV1qYSoA6EChxvt0QbY
	pG+v1WC7N0QizMyqsn/XA/02bTRwQj4=
Subject: Re: [PATCH] MAINTINERS: Propose Ian Jackson as new release manager
To: Ian Jackson <iwj@xenproject.org>, George Dunlap
 <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20201123160400.1273386-1-george.dunlap@citrix.com>
 <24507.60537.640007.567348@mariner.uk.xensource.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <25b93d61-b52c-c333-2583-07b5d03692b8@suse.com>
Date: Mon, 23 Nov 2020 20:04:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <24507.60537.640007.567348@mariner.uk.xensource.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="RbsJ6C35zk97y4RuVSYXKEJCDvswJGwu1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--RbsJ6C35zk97y4RuVSYXKEJCDvswJGwu1
Content-Type: multipart/mixed; boundary="UkBBn4ep6qKbB6UcvhpTDXSxzMTeFi3bH";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Ian Jackson <iwj@xenproject.org>, George Dunlap
 <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Message-ID: <25b93d61-b52c-c333-2583-07b5d03692b8@suse.com>
Subject: Re: [PATCH] MAINTINERS: Propose Ian Jackson as new release manager
References: <20201123160400.1273386-1-george.dunlap@citrix.com>
 <24507.60537.640007.567348@mariner.uk.xensource.com>
In-Reply-To: <24507.60537.640007.567348@mariner.uk.xensource.com>

--UkBBn4ep6qKbB6UcvhpTDXSxzMTeFi3bH
Content-Type: multipart/mixed;
 boundary="------------F4FCA6AB20BDD98D93C6403B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------F4FCA6AB20BDD98D93C6403B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 23.11.20 18:08, Ian Jackson wrote:
> George Dunlap writes ("[PATCH] MAINTINERS: Propose Ian Jackson as new r=
elease manager"):
>> Ian Jackson has agreed to be the release manager for 4.15.  Signify
>> this by giving him maintainership over CHANGELOG.md.
>=20
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
>=20
> Obviously that signifies my consent but I think it needs more acks.
>=20
> Wei, Juergen, Paul, I think I am likely to ask you some questions.
> Any tips etc would be welcome.

Fine with me. :-)


Juergen

--------------F4FCA6AB20BDD98D93C6403B
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------F4FCA6AB20BDD98D93C6403B--

--UkBBn4ep6qKbB6UcvhpTDXSxzMTeFi3bH--

--RbsJ6C35zk97y4RuVSYXKEJCDvswJGwu1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+8B8MFAwAAAAAACgkQsN6d1ii/Ey/k
Mwf9FPem376Mc0hndA7HbuDIoqJCnmCH5s4F2Ubfueng4Va1QiduWwe47eesx+7W9Be8IEPIc16V
YuWgAhrsY/ttad6h5fFTZOlHimpDc8xKSurxw4sesRVPVT4+n/1y3WH/9Nh/akWvPPPaGAFEjYdI
KPML8jbqg65buOg9Pe4mFbr/7kRM4MuhU87O8NQOw4TCafaTOvBZB1SpCt0pgVcO7IqMNT7ACL+z
liRPHiNaqJexze5jLuGQ0x+KQuK4xzd+yqlHCvcx79bX4iwfrf8G1Syw4QSqOzedwXokOkAtmSDk
s3R9tK6gCNwhoA1NEkY4REB3hmNijQnHEHmyo8UD2g==
=V6Xy
-----END PGP SIGNATURE-----

--RbsJ6C35zk97y4RuVSYXKEJCDvswJGwu1--


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 20:04:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 20:04:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35151.66546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khI42-0004oT-JF; Mon, 23 Nov 2020 20:03:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35151.66546; Mon, 23 Nov 2020 20:03:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khI42-0004oM-FE; Mon, 23 Nov 2020 20:03:54 +0000
Received: by outflank-mailman (input) for mailman id 35151;
 Mon, 23 Nov 2020 20:03:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UxIT=E5=nvidia.com=jgg@srs-us1.protection.inumbo.net>)
 id 1khI41-0004oH-FW
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 20:03:53 +0000
Received: from hqnvemgate25.nvidia.com (unknown [216.228.121.64])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38838641-c63d-4d9c-af1d-ffaab802aa77;
 Mon, 23 Nov 2020 20:03:52 +0000 (UTC)
Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by
 hqnvemgate25.nvidia.com (using TLS: TLSv1.2, AES256-SHA)
 id <B5fbc15a70004>; Mon, 23 Nov 2020 12:03:51 -0800
Received: from HQMAIL107.nvidia.com (172.20.187.13) by HQMAIL107.nvidia.com
 (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 23 Nov
 2020 20:03:49 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.103)
 by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Mon, 23 Nov 2020 20:03:49 +0000
Received: from DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12)
 by DM6PR12MB4338.namprd12.prod.outlook.com (2603:10b6:5:2a2::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Mon, 23 Nov
 2020 20:03:48 +0000
Received: from DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::e40c:730c:156c:2ef9]) by DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::e40c:730c:156c:2ef9%7]) with mapi id 15.20.3589.022; Mon, 23 Nov 2020
 20:03:48 +0000
Received: from mlx.ziepe.ca (156.34.48.30) by
 MN2PR03CA0013.namprd03.prod.outlook.com (2603:10b6:208:23a::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Mon, 23 Nov 2020 20:03:47 +0000
Received: from jgg by mlx with local (Exim 4.94)	(envelope-from
 <jgg@nvidia.com>)	id 1khI3t-000A35-Tb; Mon, 23 Nov 2020 16:03:45 -0400
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=UxIT=E5=nvidia.com=jgg@srs-us1.protection.inumbo.net>)
	id 1khI41-0004oH-FW
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 20:03:53 +0000
X-Inumbo-ID: 38838641-c63d-4d9c-af1d-ffaab802aa77
Received: from hqnvemgate25.nvidia.com (unknown [216.228.121.64])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 38838641-c63d-4d9c-af1d-ffaab802aa77;
	Mon, 23 Nov 2020 20:03:52 +0000 (UTC)
Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate25.nvidia.com (using TLS: TLSv1.2, AES256-SHA)
	id <B5fbc15a70004>; Mon, 23 Nov 2020 12:03:51 -0800
Received: from HQMAIL107.nvidia.com (172.20.187.13) by HQMAIL107.nvidia.com
 (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Mon, 23 Nov
 2020 20:03:49 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.103)
 by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Mon, 23 Nov 2020 20:03:49 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gITd84wMHHzcOptjQRg1Bi4wKYLoloErjGXTzbxbsYYxXpRm6DfDjP1G8JsqPPruG8n8djHpWT3ChykgCoTQdTlPHkj05TPw7WZ4Y46HlI8bprZC3XuF3n009Te/qaTwPxc9ef3s3wxgnUStlvtZrJvP5WQhh3MIKLFTGEWjhLXWcgs1VmoV9q6ndrBwWgPhsRBIC9rKh9qqm9cf9Ujr9sks/ml2cZ4bW16uPSJVifE1ke5RuUikXPZ60YcpNVw15sbmeBPfJ8v059YAkVfr8AOpsBgi+OymMySTg/JYVNFtVJ2pGo3M9pC5txLp474ztgTCR2D9RfvqUQOXT+42RA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QophqWBVIGhmInjMMmoN1JCz5yEuED07MVw87J8AVUY=;
 b=dpOj+p56VpZQgFejSTc+TjZx9PpIbmIlWcJPfXyA8QiV6hyerQ1PNIg37pZm/OoPMM7dRNU+MPO2Sxmva5Z6iKtLQpQNkcM79tS52of8WjxHEmReB+Qc/VB0gzy26dU2FtXMoEzn5Rx6YHRp97uhlWeSk7Nsi1BUrieORD+G9yqkgKBmwQvC726s6EkJ0s32uNc7iMGffyCjKbyBtSLggJX9G9WEJ0m0GYQlYWRE7aVXL+Iy16bXoksvn7nW7YXP74v45GAqxPt7EElquzCZ4kXurFfWXXWM9ThsUobGLm/yb7wWVAmTEM+ttWZwc1mR7P3I+RsId2H9o2Xiqq+zAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;
 dkim=pass header.d=nvidia.com; arc=none
Received: from DM6PR12MB3834.namprd12.prod.outlook.com (2603:10b6:5:14a::12)
 by DM6PR12MB4338.namprd12.prod.outlook.com (2603:10b6:5:2a2::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Mon, 23 Nov
 2020 20:03:48 +0000
Received: from DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::e40c:730c:156c:2ef9]) by DM6PR12MB3834.namprd12.prod.outlook.com
 ([fe80::e40c:730c:156c:2ef9%7]) with mapi id 15.20.3589.022; Mon, 23 Nov 2020
 20:03:48 +0000
Date: Mon, 23 Nov 2020 16:03:45 -0400
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Gustavo A. R. Silva" <gustavoars@kernel.org>
CC: <linux-kernel@vger.kernel.org>, <alsa-devel@alsa-project.org>,
	<amd-gfx@lists.freedesktop.org>, <bridge@lists.linux-foundation.org>,
	<ceph-devel@vger.kernel.org>, <cluster-devel@redhat.com>,
	<coreteam@netfilter.org>, <devel@driverdev.osuosl.org>,
	<dm-devel@redhat.com>, <drbd-dev@lists.linbit.com>,
	<dri-devel@lists.freedesktop.org>, <GR-everest-linux-l2@marvell.com>,
	<GR-Linux-NIC-Dev@marvell.com>, <intel-gfx@lists.freedesktop.org>,
	<intel-wired-lan@lists.osuosl.org>, <keyrings@vger.kernel.org>,
	<linux1394-devel@lists.sourceforge.net>, <linux-acpi@vger.kernel.org>,
	<linux-afs@lists.infradead.org>, <linux-arm-kernel@lists.infradead.org>,
	<linux-arm-msm@vger.kernel.org>, <linux-atm-general@lists.sourceforge.net>,
	<linux-block@vger.kernel.org>, <linux-can@vger.kernel.org>,
	<linux-cifs@vger.kernel.org>, <linux-crypto@vger.kernel.org>,
	<linux-decnet-user@lists.sourceforge.net>, <linux-ext4@vger.kernel.org>,
	<linux-fbdev@vger.kernel.org>, <linux-geode@lists.infradead.org>,
	<linux-gpio@vger.kernel.org>, <linux-hams@vger.kernel.org>,
	<linux-hwmon@vger.kernel.org>, <linux-i3c@lists.infradead.org>,
	<linux-ide@vger.kernel.org>, <linux-iio@vger.kernel.org>,
	<linux-input@vger.kernel.org>, <linux-integrity@vger.kernel.org>,
	<linux-mediatek@lists.infradead.org>, <linux-media@vger.kernel.org>,
	<linux-mmc@vger.kernel.org>, <linux-mm@kvack.org>,
	<linux-mtd@lists.infradead.org>, <linux-nfs@vger.kernel.org>,
	<linux-rdma@vger.kernel.org>, <linux-renesas-soc@vger.kernel.org>,
	<linux-scsi@vger.kernel.org>, <linux-sctp@vger.kernel.org>,
	<linux-security-module@vger.kernel.org>,
	<linux-stm32@st-md-mailman.stormreply.com>, <linux-usb@vger.kernel.org>,
	<linux-watchdog@vger.kernel.org>, <linux-wireless@vger.kernel.org>,
	<netdev@vger.kernel.org>, <netfilter-devel@vger.kernel.org>,
	<nouveau@lists.freedesktop.org>, <op-tee@lists.trustedfirmware.org>,
	<oss-drivers@netronome.com>, <patches@opensource.cirrus.com>,
	<rds-devel@oss.oracle.com>, <reiserfs-devel@vger.kernel.org>,
	<samba-technical@lists.samba.org>, <selinux@vger.kernel.org>,
	<target-devel@vger.kernel.org>, <tipc-discussion@lists.sourceforge.net>,
	<usb-storage@lists.one-eyed-alien.net>,
	<virtualization@lists.linux-foundation.org>, <wcn36xx@lists.infradead.org>,
	<x86@kernel.org>, <xen-devel@lists.xenproject.org>,
	<linux-hardening@vger.kernel.org>, Nick Desaulniers
	<ndesaulniers@google.com>, Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>, Kees Cook
	<keescook@chromium.org>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201123200345.GA38546@nvidia.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1605896059.git.gustavoars@kernel.org>
X-ClientProxiedBy: MN2PR03CA0013.namprd03.prod.outlook.com
 (2603:10b6:208:23a::18) To DM6PR12MB3834.namprd12.prod.outlook.com
 (2603:10b6:5:14a::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from mlx.ziepe.ca (156.34.48.30) by MN2PR03CA0013.namprd03.prod.outlook.com (2603:10b6:208:23a::18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend Transport; Mon, 23 Nov 2020 20:03:47 +0000
Received: from jgg by mlx with local (Exim 4.94)	(envelope-from <jgg@nvidia.com>)	id 1khI3t-000A35-Tb; Mon, 23 Nov 2020 16:03:45 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1;
	t=1606161831; bh=QophqWBVIGhmInjMMmoN1JCz5yEuED07MVw87J8AVUY=;
	h=ARC-Seal:ARC-Message-Signature:ARC-Authentication-Results:Date:
	 From:To:CC:Subject:Message-ID:References:Content-Type:
	 Content-Disposition:In-Reply-To:X-ClientProxiedBy:MIME-Version:
	 X-MS-Exchange-MessageSentRepresentingType;
	b=Zr1EZlr7FGouweCXJ2A3YJZ8lxsTazMwmiIDkNNgeYuPc4M3hA0h9guNHLXrnnLeX
	 Dp0jtpGLpYuZZsYit0m8+Y/3Pgk+U78P2KDuhjfei0oh+kHbQnRfzB2jD1Wu7rVyZ8
	 A2iuCgvA8hhwNVx8Bo/l4LfRAECKvf8eJj6um7c8+wyJ6oFgyijvPixB8Xcq6YNTLj
	 o7o09Zdo2SkPJV9Ld82VvGAW1KENwGx8qxL8L4kHw5xGizl/kk/4FLfOCs8mx17bXD
	 N2PIS7AsaPoH2bHogxWrZ7vcH6YOCMGYKk/oZQ1BhSoaDoH96AMZAs9BCirfcyYEMq
	 3EMRDyReptNPA==

On Fri, Nov 20, 2020 at 12:21:39PM -0600, Gustavo A. R. Silva wrote:

>   IB/hfi1: Fix fall-through warnings for Clang
>   IB/mlx4: Fix fall-through warnings for Clang
>   IB/qedr: Fix fall-through warnings for Clang
>   RDMA/mlx5: Fix fall-through warnings for Clang

I picked these four to the rdma tree, thanks

Jason


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 20:05:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 20:05:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35158.66558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khI5H-0004vt-1F; Mon, 23 Nov 2020 20:05:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35158.66558; Mon, 23 Nov 2020 20:05:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khI5G-0004vm-Tz; Mon, 23 Nov 2020 20:05:10 +0000
Received: by outflank-mailman (input) for mailman id 35158;
 Mon, 23 Nov 2020 20:05:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khI5F-0004vd-4s; Mon, 23 Nov 2020 20:05:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khI5E-0001F4-QH; Mon, 23 Nov 2020 20:05:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khI5E-0001cX-Hk; Mon, 23 Nov 2020 20:05:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khI5E-0005oC-HG; Mon, 23 Nov 2020 20:05:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khI5F-0004vd-4s; Mon, 23 Nov 2020 20:05:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oT7sZFm0T5y5w+QC4mtYtKr5sr7vSuW4eMVjLkv8jdY=; b=MRmNjP91hrAMgTPCv26F7hecWv
	1IZFWdFlqVHow1w/kPra/e2MoCIm08j3Aw3lS0GEPRjdmaWtZFYE4BST0LmKg3QixZBby/HXathx+
	9fzQsDD85HcJD00jBMz48t3HXkDsGW2BSiXTYibw4A2kHeZ5bxpKJXkjHXqrFTzj9tPY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khI5E-0001F4-QH; Mon, 23 Nov 2020 20:05:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khI5E-0001cX-Hk; Mon, 23 Nov 2020 20:05:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khI5E-0005oC-HG; Mon, 23 Nov 2020 20:05:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156962-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156962: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8cc30eb1400fc01f2b139cdd3dc524f8b84dbe07
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 20:05:08 +0000

flight 156962 qemu-mainline real [real]
flight 156968 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156962/
http://logs.test-lab.xenproject.org/osstest/logs/156968/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8cc30eb1400fc01f2b139cdd3dc524f8b84dbe07
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   95 days
Failing since        152659  2020-08-21 14:07:39 Z   94 days  200 attempts
Testing same since   156953  2020-11-23 00:08:31 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 67462 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 20:38:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 20:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35173.66572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khIbC-0007dI-Oa; Mon, 23 Nov 2020 20:38:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35173.66572; Mon, 23 Nov 2020 20:38:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khIbC-0007d5-Kz; Mon, 23 Nov 2020 20:38:10 +0000
Received: by outflank-mailman (input) for mailman id 35173;
 Mon, 23 Nov 2020 20:38:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6H7V=E5=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1khIbB-0007cA-NX
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 20:38:09 +0000
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47906707-fe2a-4def-8cdc-54b4d5c9c0b7;
 Mon, 23 Nov 2020 20:38:04 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 81FC7128092C;
 Mon, 23 Nov 2020 12:38:03 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id Fm687JPabQpA; Mon, 23 Nov 2020 12:38:03 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id EBC5C128091E;
 Mon, 23 Nov 2020 12:37:59 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6H7V=E5=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
	id 1khIbB-0007cA-NX
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 20:38:09 +0000
X-Inumbo-ID: 47906707-fe2a-4def-8cdc-54b4d5c9c0b7
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 47906707-fe2a-4def-8cdc-54b4d5c9c0b7;
	Mon, 23 Nov 2020 20:38:04 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by bedivere.hansenpartnership.com (Postfix) with ESMTP id 81FC7128092C;
	Mon, 23 Nov 2020 12:38:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606163883;
	bh=+EDGs3PYzl3z47JpXWUueALZlElPDdJywkYLk/HcIjg=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=Tyy0xQy0htMQEdpfMUvFUuPG04g7ZXvYvYsCjWoq+QOlUp2WQfo8Vk+CnXXw5nkQT
	 a3Wz7+ONj/4K4WJ6m4qOiNdEl9e5tbHlW07s/zxEoMhv+eMdbQKfvYZ25zqNb6Olj/
	 onXIz2W3FBWOnXIoTYXwnsUNPzdRLL+aS2e3QsY4=
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
	by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id Fm687JPabQpA; Mon, 23 Nov 2020 12:38:03 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown [IPv6:2601:600:8280:66d1::527])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id EBC5C128091E;
	Mon, 23 Nov 2020 12:37:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606163883;
	bh=+EDGs3PYzl3z47JpXWUueALZlElPDdJywkYLk/HcIjg=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=Tyy0xQy0htMQEdpfMUvFUuPG04g7ZXvYvYsCjWoq+QOlUp2WQfo8Vk+CnXXw5nkQT
	 a3Wz7+ONj/4K4WJ6m4qOiNdEl9e5tbHlW07s/zxEoMhv+eMdbQKfvYZ25zqNb6Olj/
	 onXIz2W3FBWOnXIoTYXwnsUNPzdRLL+aS2e3QsY4=
Message-ID: <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
 "Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel
 <linux-kernel@vger.kernel.org>,  alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org,  bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org,  cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, Linux ARM
 <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, Linux Crypto Mailing
 List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,  Ext4 Developers List
 <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org, Linux
 Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless
 <linux-wireless@vger.kernel.org>,  Network Development
 <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org,  op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com,  patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com,  reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org,  selinux@vger.kernel.org,
 target-devel@vger.kernel.org,  tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org,  linux-hardening@vger.kernel.org, Nick
 Desaulniers <ndesaulniers@google.com>,  Nathan Chancellor
 <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, Joe Perches
 <joe@perches.com>
Date: Mon, 23 Nov 2020 12:37:58 -0800
In-Reply-To: <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
	 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
	 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
	 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
	 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-23 at 19:56 +0100, Miguel Ojeda wrote:
> On Mon, Nov 23, 2020 at 4:58 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> > Well, I used git.  It says that as of today in Linus' tree we have
> > 889 patches related to fall throughs and the first series went in
> > in October 2017 ... ignoring a couple of outliers back to February.
> 
> I can see ~10k insertions over ~1k commits and 15 years that mention
> a fallthrough in the entire repo. That is including some commits
> (like the biggest one, 960 insertions) that have nothing to do with C
> fallthrough. A single kernel release has an order of magnitude more
> changes than this...
> 
> But if we do the math, for an author, at even 1 minute per line
> change and assuming nothing can be automated at all, it would take 1
> month of work. For maintainers, a couple of trivial lines is noise
> compared to many other patches.

So you think a one line patch should take one minute to produce ... I
really don't think that's grounded in reality.  I suppose a one line
patch only takes a minute to merge with b4 if no-one reviews or tests
it, but that's not really desirable.

> In fact, this discussion probably took more time than the time it
> would take to review the 200 lines. :-)

I'm framing the discussion in terms of the whole series of changes we
have done for fall through, both what's in the tree currently (889
patches) and in terms of the producer and the consumer.  That's what I
used for my figures for cost.

> > We're also complaining about the inability to recruit maintainers:
> > 
> > https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
> > 
> > And burn out:
> > 
> > http://antirez.com/news/129
> 
> Accepting trivial and useful 1-line patches

Part of what I'm trying to measure is the "and useful" bit because
that's not a given.

> is not what makes a voluntary maintainer quit...

So the proverb "straw which broke the camel's back" uniquely doesn't
apply to maintainers?

>  Thankless work with demanding deadlines is.

That's another potential reason, but it doesn't make other reasons less
valid.

> > The whole crux of your argument seems to be maintainers' time isn't
> > important so we should accept all trivial patches
> 
> I have not said that, at all. In fact, I am a voluntary one and I
> welcome patches like this. It takes very little effort on my side to
> review and it helps the kernel overall.

Well, you know, subsystems are very different in terms of the amount of
patches a maintainer has to process per release cycle of the kernel. 
If a maintainer is close to capacity, additional patches, however
trivial, become a problem.  If a maintainer has spare cycles, trivial
patches may look easy.

> Paid maintainers are the ones that can take care of big
> features/reviews.
> 
> > What I'm actually trying to articulate is a way of measuring value
> > of the patch vs cost ... it has nothing really to do with who foots
> > the actual bill.
> 
> I understand your point, but you were the one putting it in terms of
> a junior FTE.

No, I evaluated the producer side in terms of an FTE.  What we're
mostly arguing about here is the consumer side: the maintainers and
people who have to rework their patch sets. I estimated that at 100h.

>  In my view, 1 month-work (worst case) is very much worth
> removing a class of errors from a critical codebase.
> 
> > One thesis I'm actually starting to formulate is that this
> > continual devaluing of maintainers is why we have so much
> > difficulty keeping and recruiting them.
> 
> That may very well be true, but I don't feel anybody has devalued
> maintainers in this discussion.

You seem to be saying that because you find it easy to merge trivial
patches, everyone should.  I'm reminded of a friend long ago who
thought being a Tees River Pilot was a sinecure because he could
navigate the Tees blindfold.  What he forgot, of course, is that just
because it's easy with a trawler doesn't mean it's easy with an oil
tanker.  In fact it takes longer to qualify as a Tees River Pilot than
it does to get a PhD.

James




From xen-devel-bounces@lists.xenproject.org Mon Nov 23 21:38:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 21:38:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35188.66591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khJXf-0004Ov-My; Mon, 23 Nov 2020 21:38:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35188.66591; Mon, 23 Nov 2020 21:38:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khJXf-0004Oo-It; Mon, 23 Nov 2020 21:38:35 +0000
Received: by outflank-mailman (input) for mailman id 35188;
 Mon, 23 Nov 2020 21:38:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khJXd-0004Og-JG; Mon, 23 Nov 2020 21:38:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khJXd-000384-BC; Mon, 23 Nov 2020 21:38:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khJXc-0004Kg-UV; Mon, 23 Nov 2020 21:38:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khJXc-00073g-Tz; Mon, 23 Nov 2020 21:38:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khJXd-0004Og-JG; Mon, 23 Nov 2020 21:38:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pfoe4ff5Hf0lSzjbEdAXY+Sq8bbT+GKzI58fZpYAwFs=; b=WzFL4fueJIWzo11SoCDUn+iYm5
	uqyA5J2tCixmpxAtjYAL5GtLZ6jCToslAlRewG5n4X/v+WAQhtU3/JigeIrZ8n5xadkkRaJTG4El8
	ibFfd0hoKbkYN5ILuheeSDmSQ4QhCFRnc34iQQXy8jhzYXELos05RNUs4mUkzAT2zlYA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khJXd-000384-BC; Mon, 23 Nov 2020 21:38:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khJXc-0004Kg-UV; Mon, 23 Nov 2020 21:38:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khJXc-00073g-Tz; Mon, 23 Nov 2020 21:38:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156964-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156964: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=418baf2c28f3473039f2f7377760bd8f6897ae18
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 23 Nov 2020 21:38:32 +0000

flight 156964 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156964/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install fail in 156955 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 156955 pass in 156964
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen        fail pass in 156955
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 156955
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156955
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 156955

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156955 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 156955 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 156955 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                418baf2c28f3473039f2f7377760bd8f6897ae18
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  115 days
Failing since        152366  2020-08-01 20:49:34 Z  114 days  192 attempts
Testing same since   156955  2020-11-23 01:40:54 Z    0 days    2 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 683939 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:18:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35198.66606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKAY-0007s6-Rj; Mon, 23 Nov 2020 22:18:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35198.66606; Mon, 23 Nov 2020 22:18:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKAY-0007rz-Oa; Mon, 23 Nov 2020 22:18:46 +0000
Received: by outflank-mailman (input) for mailman id 35198;
 Mon, 23 Nov 2020 22:18:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaDe=E5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khKAX-0007ru-EB
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:18:45 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0fc3a25-5a5f-49b5-94e0-2c652fb07d6f;
 Mon, 23 Nov 2020 22:18:43 +0000 (UTC)
X-Inumbo-ID: b0fc3a25-5a5f-49b5-94e0-2c652fb07d6f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606169923;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=jAAfu4xO7vv46wl9cxSRuvWJTJrfxpupa07NGhSrHQg=;
  b=gBcbnJLsxTnLXzc04KvkF//hK7GQA1ygOdD6vZNiTxQzbUooOdujMvhF
   3tYUcMclAy2P5JJV3hiuQAueahekI89qdVX95AyAFN6J7UtfMsNjMEEi7
   uUib9sMFQtfmtnQphsGUwaZypJ9BTf+3YeOMzdGL5Y7h1/odSXKmuTY2N
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LXVVTD0u9I9HJrqOHSOBu/Xiunsk07/qHR1sr5L9Iz9Ayxt+DDTwmmwVyokdzbf10czdSLNMR5
 FMWi/ctDrqPN8koI19uUTlSHvSO6E2BndLx+AzDGFCBwzyFdJUuW2U650Yc/MtFVnPOOvrZwrX
 Zy94HTQwj5tLdabrX2BeeV3QQsEATzsFcUcSlv8jW2MoKEZrUuaHCU/8028xksRUzmhU3Cq9aE
 h9uIEF0aRHbuvQeTnlq7Q9EZZNMVCuoRKDt7pCsbz2rccCDM+pgvVJMtBu15iCsDsDbXir9UgX
 3rc=
X-SBRS: None
X-MesageID: 31796836
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,364,1599537600"; 
   d="scan'208";a="31796836"
Subject: Re: [PATCH v3 00/23] xl / libxl: named PCI pass-through devices
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Nick Rosbrook <rosbrookn@ainfosec.com>, Wei
 Liu <wl@xen.org>
References: <20201123174503.6800-1-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <822734cf-a048-2f53-940a-9f5ccf9df40f@citrix.com>
Date: Mon, 23 Nov 2020 22:18:21 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201123174503.6800-1-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 23/11/2020 17:44, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Paul Durrant (23):
>   xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
>   libxl: make libxl__device_list() work correctly for
>     LIBXL__DEVICE_KIND_PCI...
>   libxl: Make sure devices added by pci-attach are reflected in the
>     config
>   libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
>   libxl: s/detatched/detached in libxl_pci.c
>   libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
>   libxl: stop using aodev->device_config in libxl__device_pci_add()...
>   libxl: generalise 'driver_path' xenstore access functions in
>     libxl_pci.c
>   libxl: remove unnecessary check from libxl__device_pci_add()
>   libxl: remove get_all_assigned_devices() from libxl_pci.c
>   libxl: make sure callers of libxl_device_pci_list() free the list
>     after use
>   libxl: add libxl_device_pci_assignable_list_free()...
>   libxl: use COMPARE_PCI() macro is_pci_in_array()...
>   docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
>     manpage...
>   docs/man: improve documentation of PCI_SPEC_STRING...
>   docs/man: fix xl(1) documentation for 'pci' operations
>   libxl: introduce 'libxl_pci_bdf' in the idl...
>   libxlu: introduce xlu_pci_parse_spec_string()
>   libxl: modify
>     libxl_device_pci_assignable_add/remove/list/list_free()...
>   docs/man: modify xl(1) in preparation for naming of assignable devices
>   xl / libxl: support naming of assignable devices
>   docs/man: modify xl-pci-configuration(5) to add 'name' field to
>     PCI_SPEC_STRING
>   xl / libxl: support 'xl pci-attach/detach' by name

We're trying to get the CI loop up and running.  It's not emailing
xen-devel yet, but it has found a real error somewhere in this series.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/220153571

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:27:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:27:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35209.66618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKJQ-0000NY-1j; Mon, 23 Nov 2020 22:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35209.66618; Mon, 23 Nov 2020 22:27:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKJP-0000NR-Ug; Mon, 23 Nov 2020 22:27:55 +0000
Received: by outflank-mailman (input) for mailman id 35209;
 Mon, 23 Nov 2020 22:27:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VxlT=E5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khKJO-0000NM-N5
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:27:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09d3b6af-6795-497d-b1c3-f78e4847d379;
 Mon, 23 Nov 2020 22:27:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1B99020715;
 Mon, 23 Nov 2020 22:27:53 +0000 (UTC)
X-Inumbo-ID: 09d3b6af-6795-497d-b1c3-f78e4847d379
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606170473;
	bh=XTVgf6MJsIORC0sxL3htLEwzrBRdJZChC4NEG8kN+Ug=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XRwAlmz74nRf162Rcwqvgh1OPWdRni+yM43ZANzvbmfP2N0CctHYseLbWeVmBwx1L
	 alXg0wID0r4YAvr9e+uu4/Z5FnuAcMQi6SjHb8fyPuwqW2kgiSineG/zK96vgk5+r5
	 2+NlcAG8re81ZGnMOWMY7ZIzEj1Nu6J3253WKaWM=
Date: Mon, 23 Nov 2020 14:27:52 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
In-Reply-To: <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org>
Message-ID: <alpine.DEB.2.21.2011231409050.7979@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-5-julien@xen.org> <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s> <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 Nov 2020, Julien Grall wrote:
> > >       /*
> > >        * For arm32, page-tables are different on each CPUs. Yet, they
> > > share
> > > @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
> > >         spin_lock(&xen_pt_lock);
> > >   -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> > > +    while ( left )
> > >       {
> > > -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> > > +        unsigned int order;
> > > +        unsigned long mask;
> > > +
> > > +        /*
> > > +         * Don't take into account the MFN when removing mapping (i.e
> > > +         * MFN_INVALID) to calculate the correct target order.
> > > +         *
> > > +         * XXX: Support superpage mappings if nr is not aligned to a
> > > +         * superpage size.
> > 
> > It would be good to add another sentence to explain that the checks
> > below are simply based on masks and rely on the mfn, vfn, and also
> > nr_mfn to be superpage aligned. (It took me some time to figure it out.)
> 
> I am not sure to understand what you wrote here. Could you suggest a sentence?

Something like the following:

/*
 * Don't take into account the MFN when removing mapping (i.e
 * MFN_INVALID) to calculate the correct target order.
 *
 * This loop relies on mfn, vfn, and nr_mfn all being superpage
 * aligned, and it uses `mask' to check for that.
 *
 * XXX: Support superpage mappings if nr_mfn is not aligned to a
 * superpage size.
 */


> Regarding the TODO itself, we have the exact same one in the P2M code. I
> couldn't find a clever way to deal with it yet. Any idea how this could be
> solved?
 
I was thinking of a loop that starts with the highest possible superpage
size that virt and mfn are aligned to, and that is also smaller than or
equal to nr_mfn. So rather than using the mask to also make sure nr_mfn
is aligned, I would only use the mask to check that mfn and virt are
aligned. Then, we only need to check that superpage_size <= left.

Concrete example: virt and mfn are 2MB aligned, and nr_mfn is 5MB (1280
4K pages). We allocate 2MB superpages until only 1MB is left. At that
point superpage_size <= left fails and we go down to 4K allocations.

Would that work?


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:31:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:31:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35215.66630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKN6-0001GI-Lb; Mon, 23 Nov 2020 22:31:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35215.66630; Mon, 23 Nov 2020 22:31:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKN6-0001GB-H1; Mon, 23 Nov 2020 22:31:44 +0000
Received: by outflank-mailman (input) for mailman id 35215;
 Mon, 23 Nov 2020 22:31:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jwan=E5=zal.aero=leo.krueger@srs-us1.protection.inumbo.net>)
 id 1khKN4-0001G6-Jd
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:31:42 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.139]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ba826bf-e936-4d92-a21b-a78ba13b54f9;
 Mon, 23 Nov 2020 22:31:40 +0000 (UTC)
Received: from HE1PR05MB4794.eurprd05.prod.outlook.com (2603:10a6:7:9b::11) by
 HE1PR0501MB2393.eurprd05.prod.outlook.com (2603:10a6:3:69::11) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.28; Mon, 23 Nov 2020 22:31:36 +0000
Received: from HE1PR05MB4794.eurprd05.prod.outlook.com
 ([fe80::7d6a:df13:ca6f:b173]) by HE1PR05MB4794.eurprd05.prod.outlook.com
 ([fe80::7d6a:df13:ca6f:b173%5]) with mapi id 15.20.3564.033; Mon, 23 Nov 2020
 22:31:36 +0000
X-Inumbo-ID: 2ba826bf-e936-4d92-a21b-a78ba13b54f9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=meZn+AAaOc/RywqRHrWtVWIGTj9RfdDpbXtYtR4osqimVjxNhJ4qJHOBrjA0VU7yLo4Og1y0XDgp2a43ykAceWPnQZQd/sbzyvpxQlTYXyovRE9kmk16REMiOMyZrn6DZmUlbGvxzHdg45RFaVMMv44EA4KA9abAIcdjoxZAEVCbQHa2P0bNxVYg1uUaxDV6wKWtGAdCy6PpOszy5ETq7gJXHavmHu0NihkgwlRakS6raU1bacdi9+b8ViM1ID9lLE16rhfJTm1LmH8C8PRAPjnVA9ek+wszKPKTYvR68fe6UWmJb4tWXI55kPfHX6jVxMlJ2VkzWC7nF6xpRMARiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Nj7oRowiAmeJhJJ2mdzzoyIh6f2du/9/9BoNG80YEU0=;
 b=l2dVLIhmbdYJrqwmS6AlOeNfOZizLzJrGHH2Liv3e3I3933oNiekRA8rYTQdmlFtviVsWPN+ak10Oi/yvetyCGSoxgn8behEFtLkeNTu66xOto3VvSewGaPDv2qIKubvo6abl11chhX92EIMk6pcAHRvur1wa5SkOR1WwEc0o5bY56KKuG4o44riazSRKu7qRC6nrfhcdSVAVKXIAsZjDIJwajEAHXJwQjJS7eGfEuLwX85Vc9YehHaCZcN9wUx2mOaWxl4Rdip9taiqm3aXpjwa/O/0cjJZ8SPZt51gXARQm2nw5acS5ubJWR9pOnlG8hRHYS2SB9VB5lYr0lsQ5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=zal.aero; dmarc=pass action=none header.from=zal.aero;
 dkim=pass header.d=zal.aero; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=zalgmbh.onmicrosoft.com; s=selector2-zalgmbh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Nj7oRowiAmeJhJJ2mdzzoyIh6f2du/9/9BoNG80YEU0=;
 b=T7PYsjpS0lA6N2ssHrPutsX4ZYuLgm182i6DbwMu+tDGHUuJnHp3Q9pPCOWTsOSYXTbV8F5byVxwsLoHuJRuKGgoxCLP1/sK9gchHX0LhUMPX1lFO2rhCMc1UkWan/uMcuWpDvnwnyPz0TJgzumvM0KmXMwoD6fXKoplMAO45Fs=
From: Leo Krueger <leo.krueger@zal.aero>
To: Julien Grall <julien@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
CC: Stefano Stabellini <stefano.stabellini@xilinx.com>, Peng Fan
	<peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>, Cornelia
 Bruelhart <cornelia.bruelhart@zal.aero>, "oleksandr_andrushchenko@epam.com"
	<oleksandr_andrushchenko@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: AW: Xen data from meta-virtualization layer
Thread-Topic: Xen data from meta-virtualization layer
Thread-Index: AQHWwY2XkGXatLUPJUSwHO7jlfFZt6nWDYMAgAA/5AA=
Date: Mon, 23 Nov 2020 22:31:35 +0000
Message-ID:
 <HE1PR05MB4794A2DA40D46770D971AD598BFC0@HE1PR05MB4794.eurprd05.prod.outlook.com>
References:
 <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
 <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <50B2EEEF-4BF2-4511-98D5-F165A70E2EC6@arm.com>
 <15608dee-1f95-2b08-9fe5-cd015f4d1cdf@xen.org>
In-Reply-To: <15608dee-1f95-2b08-9fe5-cd015f4d1cdf@xen.org>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=zal.aero;
x-originating-ip: [2003:e4:3f2c:9500:8d15:373c:6c7a:4573]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: abdc43bc-b676-49e4-6bf9-08d88fff89ac
x-ms-traffictypediagnostic: HE1PR0501MB2393:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <HE1PR0501MB239364410B7E964A81FD35738BFC0@HE1PR0501MB2393.eurprd05.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: zal.aero
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: HE1PR05MB4794.eurprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: abdc43bc-b676-49e4-6bf9-08d88fff89ac
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Nov 2020 22:31:36.0636
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: dd36ff89-3bc0-4d3d-b543-76f454a3c8e5
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: jQXUNC+D+dJQ3T8mtiHd9YZy/N4LfQ6ZLAjFacl9oNLZv9FXLIsGMUvFUpcB4aPgbo3v5Qz7PQUQz/XFBVLULw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0501MB2393

Hi,

Thanks for your effort!

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Monday, 23 November 2020 19:42
> To: Rahul Singh <Rahul.Singh@arm.com>; Leo Krueger
> <leo.krueger@zal.aero>
> Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>; Peng Fan
> <peng.fan@nxp.com>; brucea@xilinx.com; Cornelia Bruelhart
> <cornelia.bruelhart@zal.aero>; oleksandr_andrushchenko@epam.com;
> xen-devel@lists.xenproject.org; Bertrand Marquis
> <Bertrand.Marquis@arm.com>
> Subject: Re: Xen data from meta-virtualization layer
>
>
> On 23/11/2020 11:41, Rahul Singh wrote:
> > Hello,
>
> Hi Rahul,
>
> >> On 22 Nov 2020, at 10:55 pm, Leo Krueger <leo.krueger@zal.aero> wrote:
> >> root@kontron-sal28:~# ip link set up dev gbe0
> >> (XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x0c: 000000170000000c
> >> 0000000000000001 0000000000000000 0000000000000000
> >> (XEN) vgic-v3-its.c:902:d0v0 vITS  cmd 0x05: 0000000000000005
> >> 0000000000000000 0000000000000000 0000000000000000
> >> [   34.034598] Atheros 8031 ethernet 0000:00:00.3:05: attached PHY driver
> >> [Atheros 8031 ethernet] (mii_bus:phy_addr=0000:00:00.3:05, irq=POLL)
> >> [   34.041111] 8021q: adding VLAN 0 to HW filter on device gbe0
> >> [   34.041209] IPv6: ADDRCONF(NETDEV_UP): gbe0: link is not ready
> >> root@kontron-sal28:~# [   35.041951] fsl_enetc 0000:00:00.0 gbe0: Link is
> >> Down
> >> [   38.114426] fsl_enetc 0000:00:00.0 gbe0: Link is Up - 1Gbps/Full - flow
> >> control off
> >> [   38.114508] IPv6: ADDRCONF(NETDEV_CHANGE): gbe0: link becomes
> >> ready
> >>
> >> Does that tell you anything?
> >>
> >
> > I just checked the logs shared; what I found is that there is an error
> > while booting when configuring the MSI for the PCI device. Because of
> > that, the Device Id generated out-of-band may not be mapped correctly to
> > the ITS device table created while initialising the MSI for the device.
> > I might be wrong; let someone else also comment on this.
>
> I think there might be multiple issues. You spotted one below :).
>
> > [    0.173964] OF: /soc/pcie@1f0000000: Invalid msi-map translation - no
> > match for rid 0xf8 on           (null)
>
> Leo, just to confirm, this error message is not spotted when booting Linux on
> baremetal?

In fact it is:

[    0.160077] OF: /soc/pcie@1f0000000: Invalid msi-map translation - no match for rid 0xf8 on           (null)

But everything works as expected here:

110:      34288          0   ITS-MSI   1 Edge      gbe0-rxtx0
111:          0       6196   ITS-MSI   2 Edge      gbe0-rxtx1

> Cheers,

Best wishes

> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:39:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:39:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35224.66642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKTt-0001Va-EX; Mon, 23 Nov 2020 22:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35224.66642; Mon, 23 Nov 2020 22:38:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKTt-0001VT-Ah; Mon, 23 Nov 2020 22:38:45 +0000
Received: by outflank-mailman (input) for mailman id 35224;
 Mon, 23 Nov 2020 22:38:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VxlT=E5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khKTs-0001VO-3N
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:38:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85b93523-35bb-448b-90e2-e5378130d2da;
 Mon, 23 Nov 2020 22:38:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9EB58206B7;
 Mon, 23 Nov 2020 22:38:41 +0000 (UTC)
X-Inumbo-ID: 85b93523-35bb-448b-90e2-e5378130d2da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606171122;
	bh=J8yaeI82deNDReC9zVtqWLtxHJahijABZ89UbZIKDGo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=VsU7u8B5aSiFAj0P2ep8UYezT5NdV34xPp9TQT/PHkJvOMo1P67Cy3IoH3Im3Axmp
	 JyKj4Aao0N+V3JQkEGrv2axnsXoTiQsyg+w7MQtPYfLcNu7np6/8O4oO356sgKbaaM
	 P9+IjZT5+q2FjJr9CEjUdHpSWFFEZ+EsYqHpw+mw=
Date: Mon, 23 Nov 2020 14:38:40 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Rahul Singh <Rahul.Singh@arm.com>, 
    Julien Grall <julien.grall.oss@gmail.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Wei Liu <wl@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
In-Reply-To: <f24a1db6-64a6-96d2-d67c-dc1b03c9cc49@suse.com>
Message-ID: <alpine.DEB.2.21.2011231436350.7979@sstabellini-ThinkPad-T480s>
References: <cover.1605527997.git.rahul.singh@arm.com> <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com> <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org> <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com> <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com> <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org> <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com> <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s> <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s> <37511625-C475-497B-BA83-B762687148BF@arm.com> <f24a1db6-64a6-96d2-d67c-dc1b03c9cc49@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-897275416-1606171023=:7979"
Content-ID: <alpine.DEB.2.21.2011231437420.7979@sstabellini-ThinkPad-T480s>


On Mon, 23 Nov 2020, Jan Beulich wrote:
> Rahul,
> 
> On 23.11.2020 12:54, Rahul Singh wrote:
> > Hello Jan,
> 
> as an aside - it helps if you also put the addressee of your mail
> on the To list.
> 
> >> On 20 Nov 2020, at 12:14 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>
> >> On Thu, 19 Nov 2020, Julien Grall wrote:
> >>> On Thu, 19 Nov 2020, 23:38 Stefano Stabellini, <sstabellini@kernel.org> wrote:
> >>>      On Thu, 19 Nov 2020, Rahul Singh wrote:
> >>>>> On 19/11/2020 09:53, Jan Beulich wrote:
> >>>>>> On 19.11.2020 10:21, Julien Grall wrote:
> >>>>>>> Hi Jan,
> >>>>>>>
> >>>>>>> On 19/11/2020 09:05, Jan Beulich wrote:
> >>>>>>>> On 18.11.2020 16:50, Julien Grall wrote:
> >>>>>>>>> On 16/11/2020 12:25, Rahul Singh wrote:
> >>>>>>>>>> NS16550 driver has PCI support that is under HAS_PCI flag. When HAS_PCI
> >>>>>>>>>> is enabled for ARM, compilation error is observed for ARM architecture
> >>>>>>>>>> because ARM platforms do not have full PCI support available.
> >>>>>>>>>    >
> >>>>>>>>>> Introducing new kconfig option CONFIG_HAS_NS16550_PCI to support
> >>>>>>>>>> ns16550 PCI for X86.
> >>>>>>>>>>
> >>>>>>>>>> For X86 platforms it is enabled by default. For ARM platforms it is
> >>>>>>>>>> disabled by default, once we have proper support for NS16550 PCI for
> >>>>>>>>>> ARM we can enable it.
> >>>>>>>>>>
> >>>>>>>>>> No functional change.
> >>>>>>>>>
> >>>>>>>>> NIT: I would say "No functional change intended" to make clear this is
> >>>>>>>>> an expectation and hopefully will be correct :).
> >>>>>>>>>
> >>>>>>>>> Regarding the commit message itself, I would suggest the following to
> >>>>>>>>> address Jan's concern:
> >>>>>>>>
> >>>>>>>> While indeed this is a much better description, I continue to think
> >>>>>>>> that the proposed Kconfig option is undesirable to have.
> >>>>>>>
> >>>>>>> I have yet to see an argument for why we should keep the PCI code
> >>>>>>> compiled on Arm when it will be of no use....
> >>>>>> Well, see my patch suppressing building of quite a part of it.
> >>>>>
> >>>>> I will let Rahul figure out whether your patch series is sufficient to fix the compilation issues (this is what matters right
> >>>      now).
> >>>>
> >>>> I just checked the compilation errors for ARM after enabling HAS_PCI on ARM. I am observing the same compilation errors
> >>>      that I observed previously.
> >>>> There are two new errors related to struct uart_config and struct uart_param, as those structs are defined globally but only used under
> >>>      X86 flags.
> >>>>
> >>>> At top level:
> >>>> ns16550.c:179:48: error: ‘uart_config’ defined but not used [-Werror=unused-const-variable=]
> >>>>   static const struct ns16550_config __initconst uart_config[] =
> >>>>                                                  ^~~~~~~~~~~
> >>>> ns16550.c:104:54: error: ‘uart_param’ defined but not used [-Werror=unused-const-variable=]
> >>>>   static const struct ns16550_config_param __initconst uart_param[] = {
> >>>>
> >>>>
> >>>>>
> >>>>>>>> Either,
> >>>>>>>> following the patch I've just sent, truly x86-specific things (at
> >>>>>>>> least as far as current state goes - if any of this was to be
> >>>>>>>> re-used by a future port, suitable further abstraction may be
> >>>>>>>> needed) should be guarded by CONFIG_X86 (or abstracted into arch
> >>>>>>>> hooks), or the HAS_PCI_MSI proposal would at least want further
> >>>>>>>> investigating as to its feasibility to address the issues at hand.
> >>>>>>>
> >>>>>>> I would be happy with CONFIG_X86, despite the fact that this is only
> >>>>>>> deferring the problem.
> >>>>>>>
> >>>>>>> Regarding HAS_PCI_MSI, I don't really see the point of introducing it given
> >>>>>>> that we are not going to use NS16550 PCI on Arm in the foreseeable
> >>>>>>> future.
> >>>>>> And I continue to fail to see what would guarantee this: As soon
> >>>>>> as you can plug such a card into an Arm system, people will
> >>>>>> want to be able to use it. That's why we had to add support for it
> >>>>>> on x86, after all.
> >>>>>
> >>>>> Well, plug-in PCI cards on Arm have been available for quite a while... Yet I haven't heard anyone asking for NS16550 PCI
> >>>      support.
> >>>>>
> >>>>> This is probably because an SBSA-compliant server should always provide an SBSA UART (a cut-down version of the PL011). So why
> >>>      bother losing a PCI slot for yet another UART?
> >>>>>
> >>>>>>>> So why do we need a finer-grained Kconfig?
> >>>>>> Because most of the involved code is indeed MSI-related?
> >>>>>
> >>>>> Possibly, yet it would not be necessary if we don't want NS16550 PCI support...
> >>>>
> >>>> To fix the compilation errors on ARM, as per the discussion, there are the options below; please suggest which one to use to proceed
> >>>      further.
> >>>>
> >>>> 1. Use the newly introduced CONFIG_HAS_NS16550_PCI config option. This also helps non-x86 architectures in the future avoid
> >>>      the compilation errors
> >>>> we are observing now when HAS_PCI is enabled.
> >>>>
> >>>> 2. Guard the remaining x86-specific code with CONFIG_X86 and introduce the new CONFIG_HAS_PCI_MSI option to fix the MSI
> >>>      related compilation errors.
> >>>> Once we have proper support for MSI and PCI for ARM (HAS_PCI_MSI and HAS_PCI enabled for ARM in Kconfig), I am not sure if
> >>>      NS16550 PCI will work out of the box on ARM. In that case, we might need to come back again to fix the NS16550 driver.
> >>>
> >>>
> >>>      It doesn't matter too much to me, let's just choose one option so that you
> >>>      get unblocked soon.
> >>>
> >>>      It looks like Jan prefers option 2) and both Julien and I are OK with
> >>>      it. So let's do 2). Jan, please confirm too :-)
> >>>
> >>>
> >>> Please don't put words in my mouth... 
> >>
> >> Sorry Julien, I misinterpreted one of your previous comments. Sometimes
> >> it is difficult to do things by email. It is good that you clarified as
> >> my goal was to reach an agreement.
> >>
> >>
> >>> I think introducing HAS_PCI_MSI is short-sighted.
> >>>
> >>> There are no clear benefits of it when NS16550 PCI support is not going to be enabled in the foreseeable future.
> >>
> >> I agree
> >>
> >>
> >>> I would be ok with moving everything under CONFIG_X86. IMHO this is still short-sighted, but at least we don't introduce a config that's not
> >>> going to help Arm or any other architecture completely disable PCI support in NS16550.
> >>
> >> So you are suggesting a new option:
> >>
> >> 3. Guard the remaining x86 specific code *and* the MSI related
> >> compilation errors with CONFIG_X86
> >>
> >> Is that right?
> >>
> >>
> >> My preference is actually option 1), but this series is already at v3 and
> >> I don't think this decision is as important as unblocking
> >> Rahul, so I am OK with the other alternatives too.
> >>
> >> I tend to agree with you that 3) is better than 2) for the reasons you
> >> wrote above.
> > 
> > 
> > Can you please suggest how to proceed so that I can send my next patch?
> > I am waiting for your reply to confirm whether you are also OK with option 3.
> 
> I can live with 3, I guess, but I still think a separate PCI_MSI
> control would be better. Please realize though that things also
> depend on how the change is going to look in the end, i.e.
> I'm not going to assure you this is my final view on it. In any
> event I've just sent v2 of my series, which I consider a prereq
> of yours.

It is great that we have a way forward.

I'll try to have a look at your series -- it looks pretty
straightforward.
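As a side note, a minimal standalone sketch of what option 3's CONFIG_X86
guard could look like for the tables named in the error messages above.
The struct layout, IDs, and the accessor are placeholders invented for the
example, not the real ns16550.c definitions:

```c
#include <stddef.h>

/* Placeholder layout; the real one lives in xen/drivers/char/ns16550.c. */
struct ns16550_config {
    unsigned short vendor_id;
    unsigned short dev_id;
};

#ifdef CONFIG_X86
/*
 * Only built on x86: the sole users of this table are the PCI code
 * paths, so on Arm -Werror=unused-const-variable would reject it.
 */
static const struct ns16550_config uart_config[] = {
    { 0x8086, 0x0936 },            /* placeholder IDs, not real data */
};
#endif

/* Example accessor: how many PCI uart entries were built in. */
size_t ns16550_pci_table_entries(void)
{
#ifdef CONFIG_X86
    return sizeof(uart_config) / sizeof(uart_config[0]);
#else
    return 0;                      /* table compiled out entirely */
#endif
}
```

Built without CONFIG_X86 defined, the table (and the unused-variable
warning it triggers) disappears from the Arm build.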


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:42:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:42:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35231.66654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKX4-0002L9-TD; Mon, 23 Nov 2020 22:42:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35231.66654; Mon, 23 Nov 2020 22:42:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKX4-0002L2-Q3; Mon, 23 Nov 2020 22:42:02 +0000
Received: by outflank-mailman (input) for mailman id 35231;
 Mon, 23 Nov 2020 22:42:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VxlT=E5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khKX3-0002Kw-6c
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:42:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 541bf3ed-facc-44ee-93b0-9dd8b1ceea74;
 Mon, 23 Nov 2020 22:42:00 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 09397206B7;
 Mon, 23 Nov 2020 22:41:58 +0000 (UTC)
X-Inumbo-ID: 541bf3ed-facc-44ee-93b0-9dd8b1ceea74
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606171319;
	bh=+5QGZAfUOAQQd7G+TYaZK2GT2F5H4rT/WGvl5K7sRf0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GzWITx+r+VYQXS0PC/SD4W5yyE6it/YhflEkM076Zumy+9zAqw1ZX5Biia/3MXQwZ
	 xTExKMKzq9gVhLF9XjKZNf51JUbVCCW5+85HrSz89oXWnJImLthmcTIY9mi0hbEAiz
	 pwF5dpu4QBj66QD3qsLEVQG1l6OpeEsxnh6URQgI=
Date: Mon, 23 Nov 2020 14:41:58 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    Bertrand.Marquis@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    george.dunlap@citrix.com, iwj@xenproject.org, julien@xen.org, wl@xen.org, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen: EXPERT clean-up and introduce UNSUPPORTED
In-Reply-To: <8ff723d7-00e2-be35-48b0-dc4b932d35cc@suse.com>
Message-ID: <alpine.DEB.2.21.2011231440070.7979@sstabellini-ThinkPad-T480s>
References: <20201118005051.26115-1-sstabellini@kernel.org> <eb6b32c3-c7e2-1e36-f492-0c00cc170ce2@suse.com> <alpine.DEB.2.21.2011181241310.11739@sstabellini-ThinkPad-T480s> <3e8c03eb-ee3f-4439-90c2-acf340c7d8e7@suse.com> <alpine.DEB.2.21.2011191310210.11739@sstabellini-ThinkPad-T480s>
 <8ff723d7-00e2-be35-48b0-dc4b932d35cc@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 20 Nov 2020, Jan Beulich wrote:
> On 19.11.2020 22:40, Stefano Stabellini wrote:
> > On Thu, 19 Nov 2020, Jan Beulich wrote:
> >> On 18.11.2020 22:00, Stefano Stabellini wrote:
> >>> On Wed, 18 Nov 2020, Jan Beulich wrote:
> >>>> On 18.11.2020 01:50, Stefano Stabellini wrote:
> >>>>> 1) It is not obvious that "Configure standard Xen features (expert
> >>>>> users)" is actually the famous EXPERT we keep talking about on xen-devel
> >>>>
> >>>> Which can be addressed by simply changing the one prompt line.
> >>>>
> >>>>> 2) It is not obvious when we need to enable EXPERT to get a specific
> >>>>> feature
> >>>>>
> >>>>> In particular if you want to enable ACPI support so that you can boot
> >>>>> Xen on an ACPI platform, you have to enable EXPERT first. But searching
> >>>>> through the kconfig menu it is really not clear (type '/' and "ACPI"):
> >>>>> nothing in the description tells you that you need to enable EXPERT to
> >>>>> get the option.
> >>>>
> >>>> And what causes this to be different once you switch to UNSUPPORTED?
> >>>
> >>> Two things: firstly, it doesn't and shouldn't take an expert to enable
> >>> ACPI support, even if ACPI support is experimental. So calling it
> >>> UNSUPPORTED helps a lot. This is particularly relevant to the ARM Kconfig
> >>> options changed by this patch. Secondly, this patch is adding
> >>> "(UNSUPPORTED)" in the oneline prompt so that it becomes easy to match
> >>> it with the option you need to enable.
> >>
> >> There's redundancy here then, which I think is in almost all cases
> >> better to avoid. That's first and foremost because the two places
> >> can go out of sync. Therefore, if the primary thing is to help
> >> "make menuconfig" (which I admit I don't normally use, as it's
> >> nothing that gets invoked implicitly by the build process afaict,
> >> i.e. one has to actively invoke it), perhaps we should enhance
> >> kconfig to attach at least a pre-determined subset of labels to
> >> the prompts automatically?
> >>
> >> And second, also in reply to what you've been saying further down,
> >> perhaps we would better go with a hierarchy of controls here, e.g.
> >> EXPERT -> EXPERIMENTAL -> UNSUPPORTED?
> > 
> > Both these are good ideas worth discussing; somebody else made a similar
> > suggestion some time back. I was already thinking this could be a great
> > candidate for one of the first "working groups" as defined by George
> > during the last community call because the topic is not purely
> > technical: a working group could help getting alignment and make
> > progress faster. We can propose it to George when he is back.
> > 
> > However, I don't think we need the working group to make progress on
> > this limited patch that only addresses the lowest hanging fruit.
> > 
> > I'd like to suggest to make progress on this patch in its current form,
> > and in parallel start a longer term discussion on how to do something
> > like you suggested above.
> 
> Okay, I guess I can accept this. So FAOD I'm not objecting to the
> change (with some suitable adjustments, as discussed), but I'm
> then also not going to be the one to ack it. Nevertheless I'd like
> to point out that doing such a partial solution may end up adding
> confusion rather than reducing it. Much depends on how exactly
> consumers interpret what we hand to them.

Thank you Jan. I'll clarify the patch and address your comments. I'll
also try to get the attention of one of the other maintainers for the
ack.
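
For readers following along, the kind of Kconfig change being discussed could look roughly like this (a sketch with assumed option names and help text, not the actual patch):

```
config UNSUPPORTED
	bool "Configure UNSUPPORTED features"
	default EXPERT
	help
	  Allow enabling features that are not security supported by the
	  Xen project.

config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support (UNSUPPORTED)" if UNSUPPORTED
	help
	  Advanced Configuration and Power Interface support. Tagging the
	  one-line prompt with "(UNSUPPORTED)" makes the option easy to
	  match when searching menuconfig with '/'.
```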


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:49:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:49:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35240.66666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKeI-0002bL-Oq; Mon, 23 Nov 2020 22:49:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35240.66666; Mon, 23 Nov 2020 22:49:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKeI-0002bE-Ld; Mon, 23 Nov 2020 22:49:30 +0000
Received: by outflank-mailman (input) for mailman id 35240;
 Mon, 23 Nov 2020 22:49:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khKeH-0002b9-5b
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:49:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khKeG-0004Z9-2A; Mon, 23 Nov 2020 22:49:28 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khKeF-0005zt-R4; Mon, 23 Nov 2020 22:49:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5MNW1inTob+ISExQ5qkJRx9m4GK2Yllrafe4Wg6R6aU=; b=K6Xx+Q7ux+tNHwU1Cjnerz6ggG
	s+mJtj/tIDzGZFalWyRYdZ+RtI4nb10dMLxszsDHcT7c1JJ7rrMjfhfLTUVnvEprulmWUuCvKKzcQ
	xPPzv0TgVfwPcZbq4PBUmd/4DBLzwjFdtTyeSMXA6qV5CGA+UwU8ZHenB8MhNtePq80o=;
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
Date: Mon, 23 Nov 2020 22:49:25 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 19/11/2020 10:27, Jan Beulich wrote:
> On 18.11.2020 19:09, Julien Grall wrote:
>> On 23/10/2020 11:19, Jan Beulich wrote:
>>> --- a/xen/include/xen/compiler.h
>>> +++ b/xen/include/xen/compiler.h
>>> @@ -12,6 +12,7 @@
>>>    
>>>    #define inline        __inline__
>>>    #define always_inline __inline__ __attribute__ ((__always_inline__))
>>> +#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
>>
>> bsearch() is only used by Arm and I haven't seen anyone so far
>> complaining about the perf of I/O emulation.
>>
>> Therefore, I am not convinced that there is enough justification to
>> introduce a GNU attribute just for this patch.
> 
> Please settle this with Andrew: He had asked for the function to
> become inline. I don't view making it static inline in the header
> as an option here - if the compiler decides to not inline it, we
> should not end up with multiple instances in different CUs.

That's the downside of static inline... but then why is it suddenly a 
problem for this helper?

> And
> without making it static inline the attribute needs adding; at
> least I'm unaware of an alternative which works with the various
> compiler versions.

The question we have to answer is: What is the gain with this approach?

If it is not quantifiable, then introducing a compiler-specific attribute 
is not an option.

IIRC, there are only two callers (all in Arm code) of this function. 
Even inlined, I don't believe you would drastically reduce the number of 
instructions compared to a full-blown version. To be generous, I would 
say you may save ~20 instructions per copy.

Therefore, so far, the compiler-specific attribute doesn't look 
justified to me. As usual, I am happy to be proven wrong.
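
For context, the helper under discussion is a kernel-style generic bsearch(). A minimal, self-contained sketch of such a helper — illustrative only, not Xen's actual lib code, and named xbsearch() here to avoid clashing with the C library's bsearch() — looks like this:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of a kernel-style generic binary search over a sorted array.
 * The name xbsearch and the integer comparator are assumptions made
 * for this example.
 */
static int cmp_int(const void *key, const void *elt)
{
    return *(const int *)key - *(const int *)elt;
}

static void *xbsearch(const void *key, const void *base, size_t num,
                      size_t size,
                      int (*cmp)(const void *key, const void *elt))
{
    size_t start = 0, end = num;

    while ( start < end )
    {
        size_t mid = start + (end - start) / 2;
        const char *elt = (const char *)base + mid * size;
        int result = cmp(key, elt);

        if ( result < 0 )
            end = mid;          /* key sorts before this element */
        else if ( result > 0 )
            start = mid + 1;    /* key sorts after this element */
        else
            return (void *)elt; /* exact match */
    }

    return NULL;
}
```

How much inlining such a helper saves depends mostly on whether the compiler can specialise the comparator-driven loop at each call site.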

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 22:53:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 22:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35248.66677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKiT-0003Sw-9k; Mon, 23 Nov 2020 22:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35248.66677; Mon, 23 Nov 2020 22:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khKiT-0003Sp-6o; Mon, 23 Nov 2020 22:53:49 +0000
Received: by outflank-mailman (input) for mailman id 35248;
 Mon, 23 Nov 2020 22:53:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rez2=E5=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1khKiR-0003SJ-Fj
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 22:53:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b314eb7-5d78-44bf-8276-561e9b09ebf7;
 Mon, 23 Nov 2020 22:53:46 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E0DFC206D8;
 Mon, 23 Nov 2020 22:53:44 +0000 (UTC)
X-Inumbo-ID: 6b314eb7-5d78-44bf-8276-561e9b09ebf7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606172025;
	bh=Y754Uxsb+oRdteJzxeLGJZ0Nr6qJ2w8yPhIjK1E9ygw=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Hd8DEuKDlU/OBnXmphx1Bjt66POq5VZAkAJuwm05bXMWgCP0KNMsZhuX1pGccgVAQ
	 CuJqNVszqNMvfApMAr7ZRwkEnkR59sOEF5TtNw+vX34N/EMP+A/lHmONMXhkr1ttvy
	 rNUTWmbmeyjT6VfheHX8QTYB5CtlRKh4XbeKEb9c=
Date: Mon, 23 Nov 2020 16:53:59 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: boris.ostrovsky@oracle.com
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH 058/141] xen-blkfront: Fix fall-through warnings for Clang
Message-ID: <20201123225359.GP21644@embeddedor>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <33057688012c34dd60315ad765ff63f070e98c0c.1605896059.git.gustavoars@kernel.org>
 <e8d67ea1-3d0d-509a-a2f1-cf1758bb373f@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e8d67ea1-3d0d-509a-a2f1-cf1758bb373f@oracle.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Fri, Nov 20, 2020 at 04:36:26PM -0500, boris.ostrovsky@oracle.com wrote:
> 
> On 11/20/20 1:32 PM, Gustavo A. R. Silva wrote:
> > In preparation to enable -Wimplicit-fallthrough for Clang, fix a warning
> > by explicitly adding a break statement instead of letting the code fall
> > through to the next case.
> >
> > Link: https://github.com/KSPP/linux/issues/115
> > Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
> > ---
> >  drivers/block/xen-blkfront.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 48629d3433b4..34b028be78ab 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -2462,6 +2462,7 @@ static void blkback_changed(struct xenbus_device *dev,
> >  			break;
> >  		if (talk_to_blkback(dev, info))
> >  			break;
> > +		break;
> >  	case XenbusStateInitialising:
> >  	case XenbusStateInitialised:
> >  	case XenbusStateReconfiguring:
> 
> 
> Reviewed-by Boris Ostrovsky <boris.ostrovsky@oracle.com>
> 
> 
> (for patch 138 as well)

Thank you for both reviews, Boris.

> Although I thought using 'fallthrough' attribute was the more common approach.

I've got it. I will consider that for a future patch.
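
The two alternatives being compared can be illustrated side by side (a generic sketch, not the actual xen-blkfront state machine):

```c
#include <assert.h>

/*
 * Illustration of the two ways to silence -Wimplicit-fallthrough:
 * an explicit break, or marking an intentional fall-through with the
 * fallthrough statement attribute. The states and action counts here
 * are made up for the example.
 */
enum state { ST_INIT, ST_CONNECT, ST_CLOSE };

static int handle(enum state s)
{
    int actions = 0;

    switch ( s )
    {
    case ST_INIT:
        actions++;      /* initialise */
        break;          /* explicit break: no warning, no fall-through */
    case ST_CONNECT:
        actions++;      /* connect... */
        __attribute__((__fallthrough__)); /* ...then deliberately close too */
    case ST_CLOSE:
        actions++;      /* tear down */
        break;
    }

    return actions;
}
```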

Thanks
--
Gustavo


From xen-devel-bounces@lists.xenproject.org Mon Nov 23 23:24:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 23 Nov 2020 23:24:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35258.66690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLBh-00066j-Ts; Mon, 23 Nov 2020 23:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35258.66690; Mon, 23 Nov 2020 23:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLBh-00066c-Qq; Mon, 23 Nov 2020 23:24:01 +0000
Received: by outflank-mailman (input) for mailman id 35258;
 Mon, 23 Nov 2020 23:24:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khLBh-00066X-5Q
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 23:24:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khLBf-0005HI-NX; Mon, 23 Nov 2020 23:23:59 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khLBf-0008Fz-DU; Mon, 23 Nov 2020 23:23:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=qfsA+1qWKMzMEUrAFmz6oH4DoGmUhg+nVT9wBBuQga8=; b=B8n3TbqfUSI0qJRel7k4ozlAuX
	qe0JhidJEktDCkmf1zxmwesiGasfYrTkEuoiM6QjQH1SiH0uQ+WO/hlEutyOnTtmCcr6E9c5syII5
	pIZYDxkLPG30cRPKbZxj5VqBr+akOhELcbaRWSWscg9d+5UlKk2okh+VyVklSadY7w0o=;
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-5-julien@xen.org>
 <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s>
 <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org>
 <alpine.DEB.2.21.2011231409050.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <eff4cb40-ac90-940c-aa97-16a5021386d3@xen.org>
Date: Mon, 23 Nov 2020 23:23:57 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011231409050.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/11/2020 22:27, Stefano Stabellini wrote:
> On Fri, 20 Nov 2020, Julien Grall wrote:
>>>>        /*
>>>>         * For arm32, page-tables are different on each CPUs. Yet, they
>>>> share
>>>> @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
>>>>          spin_lock(&xen_pt_lock);
>>>>    -    for ( ; addr < addr_end; addr += PAGE_SIZE )
>>>> +    while ( left )
>>>>        {
>>>> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
>>>> +        unsigned int order;
>>>> +        unsigned long mask;
>>>> +
>>>> +        /*
>>>> +         * Don't take into account the MFN when removing mapping (i.e
>>>> +         * MFN_INVALID) to calculate the correct target order.
>>>> +         *
>>>> +         * XXX: Support superpage mappings if nr is not aligned to a
>>>> +         * superpage size.
>>>
>>> It would be good to add another sentence to explain that the checks
>>> below are simply based on masks and rely on the mfn, vfn, and also
>>> nr_mfn to be superpage aligned. (It took me some time to figure it out.)
>>
>> I am not sure to understand what you wrote here. Could you suggest a sentence?
> 
> Something like the following:
> 
> /*
>   * Don't take into account the MFN when removing mapping (i.e
>   * MFN_INVALID) to calculate the correct target order.
>   *
>   * This loop relies on mfn, vfn, and nr_mfn, to be all superpage
>   * aligned, and it uses `mask' to check for that.

Unfortunately, I am still not sure I understand this comment.
The loop can deal with any (super)page size (4KB, 2MB, 1GB). There is 
no assumption about the alignment of mfn, vfn, or nr_mfn.

By OR-ing the three components together, we can find the maximum size 
that can be used for the mapping.

So can you clarify what you mean?

>   *
>   * XXX: Support superpage mappings if nr_mfn is not aligned to a
>   * superpage size.
>   */
> 
> 
>> Regarding the TODO itself, we have the exact same one in the P2M code. I
>> couldn't find a clever way to deal with it yet. Any idea how this could be
>> solved?
>   
> I was thinking of a loop that start with the highest possible superpage
> size that virt and mfn are aligned to, and also smaller or equal to
> nr_mfn. So rather than using the mask to also make sure nr_mfns is
> aligned, I would only use the mask to check that mfn and virt are
> aligned. Then, we only need to check that superpage_size <= left.
> 
> Concrete example: virt and mfn are 2MB aligned, nr_mfn is 5MB / 1280 4K
> pages. We allocate 2MB superpages until onlt 1MB is left. At that point
> superpage_size <= left fails and we go down to 4K allocations.
> 
> Would that work?

Unfortunately no. AFAICT, your assumption is that vfn/mfn are originally 
aligned to the highest possible superpage size. There are situations 
where this is not the case.

To give a concrete example, at the moment the RAM is mapped using 1GB 
superpages in Xen. But in the future, we will only want to map RAM 
regions in the directmap that haven't been marked as reserved [1].

Those reserved regions don't have architectural alignment or placement.

I will use an over-exaggerated example (or maybe not :)).

Imagine you have 4GB of RAM starting at 0. The HW/software engineer 
decided to place a 2MB reserved region starting at 512MB.

As a result we would want to map two RAM regions:
    1) 0 to 512MB
    2) 514MB to 4GB

I will only focus on 2). In the ideal situation, we would want to map
    a) 514MB to 1GB using 2MB superpage
    b) 1GB to 4GB using 1GB superpage

We don't want to use 2MB superpages throughout because this will 
increase TLB pressure (we want to avoid Xen using too many TLB entries) 
and also increase the size of the page-tables.

Therefore, we want to select the best size for each iteration. For now, 
the only solution I can come up with is to OR vfn/mfn and then use a 
series of checks comparing the mask against nr_mfn.
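
The per-iteration size selection described above can be sketched like this (illustrative only — not Xen's actual xen_pt_update(), and limited to the classic 4KB/2MB/1GB sizes with a 4KB granule):

```c
#include <assert.h>

/* Page orders for a 4KB granule: 2MB = 2^9 pages, 1GB = 2^18 pages. */
#define ORDER_2M  9u
#define ORDER_1G  18u

/*
 * Pick the largest mapping order usable for this iteration: both the
 * virtual frame number and the machine frame number must be aligned
 * to the candidate size, and at least that many frames must be left
 * to map.
 */
static unsigned int mapping_order(unsigned long vfn, unsigned long mfn,
                                  unsigned long left)
{
    unsigned long mask = vfn | mfn;

    if ( !(mask & ((1UL << ORDER_1G) - 1)) && left >= (1UL << ORDER_1G) )
        return ORDER_1G;
    if ( !(mask & ((1UL << ORDER_2M) - 1)) && left >= (1UL << ORDER_2M) )
        return ORDER_2M;
    return 0; /* fall back to 4KB pages */
}
```

In the 514MB-to-4GB example above, this selects 2MB mappings until the address reaches the 1GB boundary and then switches to 1GB mappings, at the cost of one alignment check per supported size.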

In addition to the "classic" mappings (i.e. 4KB, 2MB, 1GB), I would like 
to explore contiguous mappings (e.g. 64KB, 32MB) to further reduce TLB 
pressure. Note that a processor may or may not take advantage of 
contiguous mappings to reduce the number of TLB entries used.

This will unfortunately increase the number of checks. I will try to 
come up with a patch and we can discuss from there.

Cheers,

[1] Reserved regions may be marked as uncacheable, and therefore we 
shouldn't map them in the Xen address space, to avoid breaking cache 
coherency.

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 00:02:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 00:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35269.66708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLmr-0001k4-G0; Tue, 24 Nov 2020 00:02:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35269.66708; Tue, 24 Nov 2020 00:02:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLmr-0001jx-CK; Tue, 24 Nov 2020 00:02:25 +0000
Received: by outflank-mailman (input) for mailman id 35269;
 Tue, 24 Nov 2020 00:02:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PpH5=E6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khLmp-0001js-Mf
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 00:02:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80521caf-2d02-437e-ab3c-bf7d9f048533;
 Tue, 24 Nov 2020 00:02:22 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7D51720728;
 Tue, 24 Nov 2020 00:02:21 +0000 (UTC)
X-Inumbo-ID: 80521caf-2d02-437e-ab3c-bf7d9f048533
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606176142;
	bh=AcOJif3NNxO7h1CzhhcsAQ9DmGfREJcUX6jhbBVffpQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=0DMH828VJD4L9WZsaV70fy/B+qvE04fY8Yj88bRPYYVEJiaqiBjlBGyW2VaY7QYAa
	 VSX74SeKvGPJ6G43DsLXePk8vEzrnwzXPecAujn6LuadI4OlX5rb3xGXOPZ1mMLaRQ
	 HaTME881tU0AL/2gmsFHTbcnIwdxeEwZlA1zEqLc=
Date: Mon, 23 Nov 2020 16:02:20 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: [PATCH v2 1/3] ns16550: move PCI arrays next to the function
 using them
In-Reply-To: <b47b5557-ad67-5bf4-45ce-c305ee5da977@suse.com>
Message-ID: <alpine.DEB.2.21.2011231602071.7979@sstabellini-ThinkPad-T480s>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com> <b47b5557-ad67-5bf4-45ce-c305ee5da977@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Nov 2020, Jan Beulich wrote:
> Pure code motion; no functional change intended.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v2: New.
> 
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -153,312 +153,6 @@ struct ns16550_config_param {
>      unsigned int uart_offset;
>      unsigned int first_offset;
>  };
> -
> -/*
> - * Create lookup tables for specific devices. It is assumed that if
> - * the device found is MMIO, then you have indexed it here. Else, the
> - * driver does nothing for MMIO based devices.
> - */
> -static const struct ns16550_config_param __initconst uart_param[] = {
> -    [param_default] = {
> -        .reg_width = 1,
> -        .lsr_mask = UART_LSR_THRE,
> -        .max_ports = 1,
> -    },
> -    [param_trumanage] = {
> -        .reg_shift = 2,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = (UART_LSR_THRE | UART_LSR_TEMT),
> -        .mmio = 1,
> -        .max_ports = 1,
> -    },
> -    [param_oxford] = {
> -        .base_baud = 4000000,
> -        .uart_offset = 0x200,
> -        .first_offset = 0x1000,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = UART_LSR_THRE,
> -        .mmio = 1,
> -        .max_ports = 1, /* It can do more, but we would need more custom code.*/
> -    },
> -    [param_oxford_2port] = {
> -        .base_baud = 4000000,
> -        .uart_offset = 0x200,
> -        .first_offset = 0x1000,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = UART_LSR_THRE,
> -        .mmio = 1,
> -        .max_ports = 2,
> -    },
> -    [param_pericom_1port] = {
> -        .base_baud = 921600,
> -        .uart_offset = 8,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = UART_LSR_THRE,
> -        .bar0 = 1,
> -        .max_ports = 1,
> -    },
> -    [param_pericom_2port] = {
> -        .base_baud = 921600,
> -        .uart_offset = 8,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = UART_LSR_THRE,
> -        .bar0 = 1,
> -        .max_ports = 2,
> -    },
> -    /*
> -     * Of the two following ones, we can't really use all of their ports,
> -     * unless ns16550_com[] would get grown.
> -     */
> -    [param_pericom_4port] = {
> -        .base_baud = 921600,
> -        .uart_offset = 8,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = UART_LSR_THRE,
> -        .bar0 = 1,
> -        .max_ports = 4,
> -    },
> -    [param_pericom_8port] = {
> -        .base_baud = 921600,
> -        .uart_offset = 8,
> -        .reg_width = 1,
> -        .fifo_size = 16,
> -        .lsr_mask = UART_LSR_THRE,
> -        .bar0 = 1,
> -        .max_ports = 8,
> -    }
> -};
> -static const struct ns16550_config __initconst uart_config[] =
> -{
> -    /* Broadcom TruManage device */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_BROADCOM,
> -        .dev_id = 0x160a,
> -        .param = param_trumanage,
> -    },
> -    /* OXPCIe952 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc11b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe952 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc11f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe952 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc138,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe952 2 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc158,
> -        .param = param_oxford_2port,
> -    },
> -    /* OXPCIe952 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc13d,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe952 2 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc15d,
> -        .param = param_oxford_2port,
> -    },
> -    /* OXPCIe952 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc40b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc40f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc41b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc41f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc42b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc42f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc43b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc43f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc44b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc44f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc45b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc45f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc46b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc46f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc47b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc47f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc48b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc48f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc49b,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc49f,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc4ab,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc4af,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc4bb,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc4bf,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc4cb,
> -        .param = param_oxford,
> -    },
> -    /* OXPCIe200 1 Native UART  */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> -        .dev_id = 0xc4cf,
> -        .param = param_oxford,
> -    },
> -    /* Pericom PI7C9X7951 Uno UART */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_PERICOM,
> -        .dev_id = 0x7951,
> -        .param = param_pericom_1port
> -    },
> -    /* Pericom PI7C9X7952 Duo UART */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_PERICOM,
> -        .dev_id = 0x7952,
> -        .param = param_pericom_2port
> -    },
> -    /* Pericom PI7C9X7954 Quad UART */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_PERICOM,
> -        .dev_id = 0x7954,
> -        .param = param_pericom_4port
> -    },
> -    /* Pericom PI7C9X7958 Octal UART */
> -    {
> -        .vendor_id = PCI_VENDOR_ID_PERICOM,
> -        .dev_id = 0x7958,
> -        .param = param_pericom_8port
> -    }
> -};
>  #endif
>  
>  static void ns16550_delayed_resume(void *data);
> @@ -1045,6 +739,314 @@ static int __init check_existence(struct
>  }
>  
>  #ifdef CONFIG_HAS_PCI
> +
> +/*
> + * Create lookup tables for specific devices. It is assumed that if
> + * the device found is MMIO, then you have indexed it here. Else, the
> + * driver does nothing for MMIO based devices.
> + */
> +static const struct ns16550_config_param __initconst uart_param[] = {
> +    [param_default] = {
> +        .reg_width = 1,
> +        .lsr_mask = UART_LSR_THRE,
> +        .max_ports = 1,
> +    },
> +    [param_trumanage] = {
> +        .reg_shift = 2,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = (UART_LSR_THRE | UART_LSR_TEMT),
> +        .mmio = 1,
> +        .max_ports = 1,
> +    },
> +    [param_oxford] = {
> +        .base_baud = 4000000,
> +        .uart_offset = 0x200,
> +        .first_offset = 0x1000,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = UART_LSR_THRE,
> +        .mmio = 1,
> +        .max_ports = 1, /* It can do more, but we would need more custom code.*/
> +    },
> +    [param_oxford_2port] = {
> +        .base_baud = 4000000,
> +        .uart_offset = 0x200,
> +        .first_offset = 0x1000,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = UART_LSR_THRE,
> +        .mmio = 1,
> +        .max_ports = 2,
> +    },
> +    [param_pericom_1port] = {
> +        .base_baud = 921600,
> +        .uart_offset = 8,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = UART_LSR_THRE,
> +        .bar0 = 1,
> +        .max_ports = 1,
> +    },
> +    [param_pericom_2port] = {
> +        .base_baud = 921600,
> +        .uart_offset = 8,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = UART_LSR_THRE,
> +        .bar0 = 1,
> +        .max_ports = 2,
> +    },
> +    /*
> +     * Of the two following ones, we can't really use all of their ports,
> +     * unless ns16550_com[] would get grown.
> +     */
> +    [param_pericom_4port] = {
> +        .base_baud = 921600,
> +        .uart_offset = 8,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = UART_LSR_THRE,
> +        .bar0 = 1,
> +        .max_ports = 4,
> +    },
> +    [param_pericom_8port] = {
> +        .base_baud = 921600,
> +        .uart_offset = 8,
> +        .reg_width = 1,
> +        .fifo_size = 16,
> +        .lsr_mask = UART_LSR_THRE,
> +        .bar0 = 1,
> +        .max_ports = 8,
> +    }
> +};
> +
> +static const struct ns16550_config __initconst uart_config[] =
> +{
> +    /* Broadcom TruManage device */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_BROADCOM,
> +        .dev_id = 0x160a,
> +        .param = param_trumanage,
> +    },
> +    /* OXPCIe952 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc11b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe952 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc11f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe952 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc138,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe952 2 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc158,
> +        .param = param_oxford_2port,
> +    },
> +    /* OXPCIe952 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc13d,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe952 2 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc15d,
> +        .param = param_oxford_2port,
> +    },
> +    /* OXPCIe952 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc40b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc40f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc41b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc41f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc42b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc42f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc43b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc43f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc44b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc44f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc45b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc45f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc46b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc46f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc47b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc47f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc48b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc48f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc49b,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc49f,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc4ab,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc4af,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc4bb,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc4bf,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc4cb,
> +        .param = param_oxford,
> +    },
> +    /* OXPCIe200 1 Native UART  */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
> +        .dev_id = 0xc4cf,
> +        .param = param_oxford,
> +    },
> +    /* Pericom PI7C9X7951 Uno UART */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
> +        .dev_id = 0x7951,
> +        .param = param_pericom_1port
> +    },
> +    /* Pericom PI7C9X7952 Duo UART */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
> +        .dev_id = 0x7952,
> +        .param = param_pericom_2port
> +    },
> +    /* Pericom PI7C9X7954 Quad UART */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
> +        .dev_id = 0x7954,
> +        .param = param_pericom_4port
> +    },
> +    /* Pericom PI7C9X7958 Octal UART */
> +    {
> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
> +        .dev_id = 0x7958,
> +        .param = param_pericom_8port
> +    }
> +};
> +
>  static int __init
>  pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>  {
> @@ -1211,7 +1213,8 @@ pci_uart_config(struct ns16550 *uart, bo
>  
>      return 0;
>  }
> -#endif
> +
> +#endif /* CONFIG_HAS_PCI */
>  
>  /*
>   * Used to parse name value pairs and return which value it is along with
> 
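The table pair being moved above (uart_param[] indexed by an enum, uart_config[] matched by PCI vendor/device ID) amounts to a simple linear lookup with a default fallback. A minimal standalone sketch of that lookup follows; the struct names, param indices, and fallback-to-default behaviour here are illustrative assumptions, not the Xen driver's API (the vendor IDs are the usual PCI-SIG assignments):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical mirror of the matching done against uart_config[]:
 * scan a (vendor, device) table and return an index into a
 * uart_param[]-style table, falling back to a default entry. */
struct uart_id {
    uint16_t vendor_id;
    uint16_t dev_id;
    int param;
};

static const struct uart_id ids[] = {
    { 0x14e4, 0x160a, 1 },  /* Broadcom TruManage -> trumanage params */
    { 0x1415, 0xc11b, 2 },  /* OXPCIe952          -> oxford params */
    { 0x12d8, 0x7952, 5 },  /* Pericom PI7C9X7952 -> 2-port params */
};

int lookup_uart_param(uint16_t vendor, uint16_t device)
{
    for ( size_t i = 0; i < sizeof(ids) / sizeof(ids[0]); i++ )
        if ( ids[i].vendor_id == vendor && ids[i].dev_id == device )
            return ids[i].param;
    return 0; /* default parameter set for unrecognized devices */
}
```

Unmatched devices fall through to index 0, mirroring how the driver applies param_default when no table entry fits.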


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 00:11:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 00:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35277.66720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLvb-0002hL-Hz; Tue, 24 Nov 2020 00:11:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35277.66720; Tue, 24 Nov 2020 00:11:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLvb-0002hE-Ek; Tue, 24 Nov 2020 00:11:27 +0000
Received: by outflank-mailman (input) for mailman id 35277;
 Tue, 24 Nov 2020 00:11:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PpH5=E6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khLva-0002h3-Bg
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 00:11:26 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 356b6fb5-cdb7-4e30-9c1e-bb3d8825ff54;
 Tue, 24 Nov 2020 00:11:25 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2E9D620729;
 Tue, 24 Nov 2020 00:11:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606176684;
	bh=/HuuCVOyJQhQ+dppQE7kCCPgDIgGdcU1N2XEH7loqTM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=K8xNpn9aqFKuFo1wK23p5plJ5EY6s6UyTxdAodPw8P3USYoUF4WORAXv+NvG3onx7
	 8ZwaSPGYcxwqa4Rz/xjQWHIWxCKZBfRpcAc6gJcxN71PdJ7ZzsIK2l47U8ONRXuiyB
	 47wx58jrTCS0BDfH17fiipguKlxsDPu/1AnPU7bo=
Date: Mon, 23 Nov 2020 16:11:23 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: [PATCH v2 2/3] ns16550: "com<N>=" command line options are
 x86-specific
In-Reply-To: <bfa07fc2-9151-402f-3b73-dedf8280cb66@suse.com>
Message-ID: <alpine.DEB.2.21.2011231608120.7979@sstabellini-ThinkPad-T480s>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com> <bfa07fc2-9151-402f-3b73-dedf8280cb66@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Nov 2020, Jan Beulich wrote:
> Pure code motion (plus the addition of "#ifdef CONFIG_X86"); no
> functional change intended.
> 
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Great cleanup

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v2: Re-base over new earlier patch.
> 
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -318,8 +318,8 @@ Interrupts.  Specifying zero disables CM
>  Flag to indicate whether to probe for a CMOS Real Time Clock irrespective of
>  ACPI indicating none to be there.
>  
> -### com1
> -### com2
> +### com1 (x86)
> +### com2 (x86)
>  > `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`
>  
>  Both option `com1` and `com2` follow the same format.
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -31,38 +31,6 @@
>  #include <asm/fixmap.h>
>  #endif
>  
> -/*
> - * Configure serial port with a string:
> - *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
> - * The tail of the string can be omitted if platform defaults are sufficient.
> - * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
> - * can be specified in place of a numeric baud rate. Polled mode is specified
> - * by requesting irq 0.
> - */
> -static char __initdata opt_com1[128] = "";
> -static char __initdata opt_com2[128] = "";
> -string_param("com1", opt_com1);
> -string_param("com2", opt_com2);
> -
> -enum serial_param_type {
> -    baud,
> -    clock_hz,
> -    data_bits,
> -    io_base,
> -    irq,
> -    parity,
> -    reg_shift,
> -    reg_width,
> -    stop_bits,
> -#ifdef CONFIG_HAS_PCI
> -    bridge_bdf,
> -    device,
> -    port_bdf,
> -#endif
> -    /* List all parameters before this line. */
> -    num_serial_params
> -};
> -
>  static struct ns16550 {
>      int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
>      u64 io_base;   /* I/O port or memory-mapped I/O address. */
> @@ -98,32 +66,6 @@ static struct ns16550 {
>  #endif
>  } ns16550_com[2] = { { 0 } };
>  
> -struct serial_param_var {
> -    char name[12];
> -    enum serial_param_type type;
> -};
> -
> -/*
> - * Enum struct keeping a table of all accepted parameter names for parsing
> - * com_console_options for serial port com1 and com2.
> - */
> -static const struct serial_param_var __initconst sp_vars[] = {
> -    {"baud", baud},
> -    {"clock-hz", clock_hz},
> -    {"data-bits", data_bits},
> -    {"io-base", io_base},
> -    {"irq", irq},
> -    {"parity", parity},
> -    {"reg-shift", reg_shift},
> -    {"reg-width", reg_width},
> -    {"stop-bits", stop_bits},
> -#ifdef CONFIG_HAS_PCI
> -    {"bridge", bridge_bdf},
> -    {"dev", device},
> -    {"port", port_bdf},
> -#endif
> -};
> -
>  #ifdef CONFIG_HAS_PCI
>  struct ns16550_config {
>      u16 vendor_id;
> @@ -674,6 +616,19 @@ static struct uart_driver __read_mostly
>  #endif
>  };
>  
> +static void ns16550_init_common(struct ns16550 *uart)
> +{
> +    uart->clock_hz  = UART_CLOCK_HZ;
> +
> +    /* Default is no transmit FIFO. */
> +    uart->fifo_size = 1;
> +
> +    /* Default lsr_mask = UART_LSR_THRE */
> +    uart->lsr_mask  = UART_LSR_THRE;
> +}
> +
> +#ifdef CONFIG_X86
> +
>  static int __init parse_parity_char(int c)
>  {
>      switch ( c )
> @@ -1217,6 +1172,64 @@ pci_uart_config(struct ns16550 *uart, bo
>  #endif /* CONFIG_HAS_PCI */
>  
>  /*
> + * Configure serial port with a string:
> + *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
> + * The tail of the string can be omitted if platform defaults are sufficient.
> + * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
> + * can be specified in place of a numeric baud rate. Polled mode is specified
> + * by requesting irq 0.
> + */
> +static char __initdata opt_com1[128] = "";
> +static char __initdata opt_com2[128] = "";
> +string_param("com1", opt_com1);
> +string_param("com2", opt_com2);
> +
> +enum serial_param_type {
> +    baud,
> +    clock_hz,
> +    data_bits,
> +    io_base,
> +    irq,
> +    parity,
> +    reg_shift,
> +    reg_width,
> +    stop_bits,
> +#ifdef CONFIG_HAS_PCI
> +    bridge_bdf,
> +    device,
> +    port_bdf,
> +#endif
> +    /* List all parameters before this line. */
> +    num_serial_params
> +};
> +
> +struct serial_param_var {
> +    char name[12];
> +    enum serial_param_type type;
> +};
> +
> +/*
> + * Enum struct keeping a table of all accepted parameter names for parsing
> + * com_console_options for serial port com1 and com2.
> + */
> +static const struct serial_param_var __initconst sp_vars[] = {
> +    {"baud", baud},
> +    {"clock-hz", clock_hz},
> +    {"data-bits", data_bits},
> +    {"io-base", io_base},
> +    {"irq", irq},
> +    {"parity", parity},
> +    {"reg-shift", reg_shift},
> +    {"reg-width", reg_width},
> +    {"stop-bits", stop_bits},
> +#ifdef CONFIG_HAS_PCI
> +    {"bridge", bridge_bdf},
> +    {"dev", device},
> +    {"port", port_bdf},
> +#endif
> +};
> +
> +/*
>   * Used to parse name value pairs and return which value it is along with
>   * pointer for the extracted value.
>   */
> @@ -1504,17 +1517,6 @@ static void __init ns16550_parse_port_co
>      serial_register_uart(uart - ns16550_com, &ns16550_driver, uart);
>  }
>  
> -static void ns16550_init_common(struct ns16550 *uart)
> -{
> -    uart->clock_hz  = UART_CLOCK_HZ;
> -
> -    /* Default is no transmit FIFO. */
> -    uart->fifo_size = 1;
> -
> -    /* Default lsr_mask = UART_LSR_THRE */
> -    uart->lsr_mask  = UART_LSR_THRE;
> -}
> -
>  void __init ns16550_init(int index, struct ns16550_defaults *defaults)
>  {
>      struct ns16550 *uart;
> @@ -1541,6 +1543,8 @@ void __init ns16550_init(int index, stru
>      ns16550_parse_port_config(uart, (index == 0) ? opt_com1 : opt_com2);
>  }
>  
> +#endif /* CONFIG_X86 */
> +
>  #ifdef CONFIG_HAS_DEVICE_TREE
>  static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
>                                         const void *data)
> 
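The "com<N>=" option format moved by this patch encodes data bits, parity, and stop bits as a compact DPS token (e.g. the "8n1" in "com1=115200,8n1,0x3f8,4"). A minimal sketch of splitting such a token, assuming the usual character conventions; this is illustrative, not the driver's actual parser:

```c
/* Hypothetical DPS-token splitter: "8n1" -> 8 data bits, no parity,
 * 1 stop bit.  Accepted parity characters follow common UART usage:
 * n(one), o(dd), e(ven), m(ark), s(pace). */
struct dps {
    int data_bits;
    char parity;
    int stop_bits;
};

int parse_dps(const char *s, struct dps *out)
{
    if ( s[0] < '5' || s[0] > '8' )
        return -1;
    out->data_bits = s[0] - '0';

    switch ( s[1] )
    {
    case 'n': case 'o': case 'e': case 'm': case 's':
        out->parity = s[1];
        break;
    default:
        return -1;
    }

    if ( s[2] != '1' && s[2] != '2' )
        return -1;
    out->stop_bits = s[2] - '0';

    return 0;
}
```

In the real option string the DPS token is optional, as is everything after the baud rate; this sketch only covers the token itself.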


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 00:11:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 00:11:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35278.66732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLvi-0002jp-Qc; Tue, 24 Nov 2020 00:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35278.66732; Tue, 24 Nov 2020 00:11:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khLvi-0002jh-NG; Tue, 24 Nov 2020 00:11:34 +0000
Received: by outflank-mailman (input) for mailman id 35278;
 Tue, 24 Nov 2020 00:11:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PpH5=E6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khLvh-0002jO-Hu
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 00:11:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bd65b08-e5c0-4bbd-898b-17334643333c;
 Tue, 24 Nov 2020 00:11:32 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D056420729;
 Tue, 24 Nov 2020 00:11:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606176692;
	bh=oS/Xzsk9XvLaGNEPERRM2FAqt1U7+QT5J4rRjREQehQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qUJtNEWk/4onZ3846GyRqO+0fhO6NIsGV12F+Gn7V3HryNN5Xf5DvObz+y/kxj+mB
	 +bf+1awEmmscC7nr/tqQGzyFRmrwHCmipRGSOKTFdxCEuwjQ2JV78iSIdQakFWhOb5
	 bhbKBDeHTIwI8r1wAso7U8ubduAKRi538/denH94=
Date: Mon, 23 Nov 2020 16:11:31 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: [PATCH v2 3/3] ns16550: drop stray "#ifdef CONFIG_HAS_PCI"
In-Reply-To: <c5cf7b83-9948-dd87-dfe0-40d36df0db70@suse.com>
Message-ID: <alpine.DEB.2.21.2011231610110.7979@sstabellini-ThinkPad-T480s>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com> <c5cf7b83-9948-dd87-dfe0-40d36df0db70@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Nov 2020, Jan Beulich wrote:
> There's no point wrapping the function invocation when
> - the function body is already suitably wrapped,
> - the function itself is unconditionally available.
> 
> Reported-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -662,9 +662,7 @@ static int __init check_existence(struct
>      return 1; /* Everything is MMIO */
>  #endif
>  
> -#ifdef CONFIG_HAS_PCI
>      pci_serial_early_init(uart);
> -#endif
>  
>      /*
>       * Do a simple existence test first; if we fail this,
> 
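The rationale in the commit message reflects a common pattern: when a feature is compiled out, the function is still made available as an empty stub, so call sites need no #ifdef of their own. A minimal sketch of the pattern (the macro and function names here are illustrative, not Xen's; HAS_PCI_DEMO is deliberately left undefined so the stub path is taken):

```c
/* If the feature macro is defined, the real implementation is built;
 * otherwise an inline stub takes its place.  Callers invoke the
 * function unconditionally either way. */
#ifdef HAS_PCI_DEMO
static int early_init_demo(void)
{
    /* ... real PCI setup would go here ... */
    return 1;
}
#else
static inline int early_init_demo(void)
{
    return 0; /* stub: nothing to do without PCI support */
}
#endif
```

Because the conditional lives at the definition, wrapping the call site in the same #ifdef is redundant, which is exactly what the patch removes.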


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 00:25:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 00:25:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35293.66744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khM9G-0003pT-3b; Tue, 24 Nov 2020 00:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35293.66744; Tue, 24 Nov 2020 00:25:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khM9F-0003pM-W2; Tue, 24 Nov 2020 00:25:33 +0000
Received: by outflank-mailman (input) for mailman id 35293;
 Tue, 24 Nov 2020 00:25:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PpH5=E6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khM9E-0003pH-N8
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 00:25:32 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15bd4e62-52f7-49d0-b69d-fda49432d0fc;
 Tue, 24 Nov 2020 00:25:31 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9B70220729;
 Tue, 24 Nov 2020 00:25:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606177530;
	bh=zRsMQ1sdMyKyuXMkMtUSFWYf/9u5wU1Aw7W/E8hs25o=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=0/54lLXhqIcRaCOiYEzIl7O2kG3JRXpA42CtsNMs+l1GhhcSu2rxL42iCWfu7Wvoe
	 YaYMG8zOLz4dbCP54KKlVavby9LalyjXlV12FAc/IiVDNxEPa298rS9lCaYeeH9WMI
	 c8hsOjQu686uaC8O9StyxjpThkLDKY0PyiTP9CZ0=
Date: Mon, 23 Nov 2020 16:25:29 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
In-Reply-To: <eff4cb40-ac90-940c-aa97-16a5021386d3@xen.org>
Message-ID: <alpine.DEB.2.21.2011231612330.7979@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-5-julien@xen.org> <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s> <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org> <alpine.DEB.2.21.2011231409050.7979@sstabellini-ThinkPad-T480s>
 <eff4cb40-ac90-940c-aa97-16a5021386d3@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 23 Nov 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 23/11/2020 22:27, Stefano Stabellini wrote:
> > On Fri, 20 Nov 2020, Julien Grall wrote:
> > > > >        /*
> > > > >         * For arm32, page-tables are different on each CPUs. Yet, they
> > > > > share
> > > > > @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
> > > > >          spin_lock(&xen_pt_lock);
> > > > >    -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> > > > > +    while ( left )
> > > > >        {
> > > > > -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> > > > > +        unsigned int order;
> > > > > +        unsigned long mask;
> > > > > +
> > > > > +        /*
> > > > > +         * Don't take into account the MFN when removing mapping (i.e
> > > > > +         * MFN_INVALID) to calculate the correct target order.
> > > > > +         *
> > > > > +         * XXX: Support superpage mappings if nr is not aligned to a
> > > > > +         * superpage size.
> > > > 
> > > > It would be good to add another sentence to explain that the checks
> > > > below are simply based on masks and rely on the mfn, vfn, and also
> > > > nr_mfn to be superpage aligned. (It took me some time to figure it out.)
> > > 
> > > I am not sure I understand what you wrote here. Could you suggest a
> > > sentence?
> > 
> > Something like the following:
> > 
> > /*
> >   * Don't take into account the MFN when removing mapping (i.e
> >   * MFN_INVALID) to calculate the correct target order.
> >   *
> >   * This loop relies on mfn, vfn, and nr_mfn, to be all superpage
> >   * aligned, and it uses `mask' to check for that.
> 
> Unfortunately, I am still not sure I understand this comment.
> The loop can deal with any (super)page size (4KB, 2MB, 1GB). There is no
> assumption of any alignment for mfn, vfn and nr_mfn.
> 
> By OR-ing the 3 components together, we can use the result to find the
> maximum size that can be used for the mapping.
> 
> So can you clarify what you mean?

In pseudo-code:

  mask = mfn | vfn | nr_mfns;
  if (mask & ((1<<FIRST_ORDER) - 1))
  if (mask & ((1<<SECOND_ORDER) - 1))
  if (mask & ((1<<THIRD_ORDER) - 1))
  ...

As you wrote the mask is used to find the max size that can be used for
the mapping.

But let's take nr_mfns out of the equation for a moment for clarity:

  mask = mfn | vfn;
  if (mask & ((1<<FIRST_ORDER) - 1))
  if (mask & ((1<<SECOND_ORDER) - 1))
  if (mask & ((1<<THIRD_ORDER) - 1))
  ...

How would you describe this check? I'd call this an alignment check,
is it not?


> >   *
> >   * XXX: Support superpage mappings if nr_mfn is not aligned to a
> >   * superpage size.
> >   */
> > 
> > 
> > > Regarding the TODO itself, we have the exact same one in the P2M code. I
> > > couldn't find a clever way to deal with it yet. Any idea how this could be
> > > solved?
> >
> > I was thinking of a loop that starts with the highest possible superpage
> > size that virt and mfn are aligned to, and also smaller than or equal to
> > nr_mfn. So rather than using the mask to also make sure nr_mfns is
> > aligned, I would only use the mask to check that mfn and virt are
> > aligned. Then, we only need to check that superpage_size <= left.
> > 
> > Concrete example: virt and mfn are 2MB aligned, nr_mfn is 5MB / 1280 4K
> > pages. We allocate 2MB superpages until only 1MB is left. At that point
> > superpage_size <= left fails and we go down to 4K allocations.
> > 
> > Would that work?
> 
> Unfortunately no. AFAICT, your assumption is that vfn/mfn are originally
> aligned to the highest possible superpage size. There are situations where
> this is not the case.

Yes, I was assuming that vfn/mfn are originally aligned to the highest
possible superpage size. It is more difficult without that assumption
:-)


> To give a concrete example, at the moment the RAM is mapped using 1GB
> superpages in Xen. But in the future, we will only want to map RAM regions in
> the directmap that haven't been marked as reserved [1].
> 
> Those reserved regions don't have architectural alignment or placement.
> 
> I will use an over-exaggerated example (or maybe not :)).
> 
> Imagine you have 4GB of RAM starting at 0. The HW/Software engineer decided to
> place a 2MB reserved region starting at 512MB.
> 
> As a result we would want to map two RAM regions:
>    1) 0 to 512MB
>    2) 514MB to 4GB
> 
> I will only focus on 2). In the ideal situation, we would want to map
>    a) 514MB to 1GB using 2MB superpage
>    b) 1GB to 4GB using 1GB superpage
> 
> We don't want to use only 2MB superpages because this will increase TLB
> pressure (we want to avoid Xen using too many TLB entries) and also increase
> the size of the page-tables.
> 
> Therefore, we want to select the best size for each iteration. For now, the
> only solution I can come up with is to OR vfn/mfn and then use a series of
> checks to compare the mask and nr_mfn.

Yeah, that's more or less what I was imagining too. Maybe we could use
ffs and friends to avoid or simplify some of those checks.


> In addition to the "classic" mappings (i.e. 4KB, 2MB, 1GB). I would like to
> explore contiguous mapping (e.g. 64KB, 32MB) to further reduce the TLBs
> pressure. Note that a processor may or may not take advantage of contiguous
> mapping to reduce the number of TLBs used.
> 
> This will unfortunately increase the number of checks. I will try to come up
> with a patch and we can discuss from there.

OK


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 00:41:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 00:41:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35303.66756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMOD-0005Xf-Cy; Tue, 24 Nov 2020 00:41:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35303.66756; Tue, 24 Nov 2020 00:41:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMOD-0005XY-9H; Tue, 24 Nov 2020 00:41:01 +0000
Received: by outflank-mailman (input) for mailman id 35303;
 Tue, 24 Nov 2020 00:40:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpYA=E6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khMOB-0005XR-7I
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 00:40:59 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 734d45c5-fe77-4abd-be52-f55b66f8e497;
 Tue, 24 Nov 2020 00:40:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606178457;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=hDAXSFYCTwV7+OA3EDO2Of56ILPq9gGqRN4D8Fc0RUg=;
  b=WEuA2VUQsq+fLhKfdC/SRbYjik9xr6lJbP9+T2Kdjfj08f+A4HTE1Uhq
   b2am9fOup7h0vyC4VoaXZagB1BmC6LFKbezeuyI1OEHD2mPdnz8o4UwaC
   HtFTRIU6z2IHXOZKSXh255VZvkW3n8FiAkI0WgxjCR+K4xinTEkDAFNhx
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 23uAuWnWj9ztOiIbw/OfThm+MbJqABlR73RviENEroa6b2kOXIZ5959DbpDA1Dd1FDzUgnFqqc
 n+9U+xJq8T8wONqSBJt4cx2m4WD3DQr6rtNnqTBdxtuzVyhu0mRq8LjcMz7hpnlXEl0s0apWcd
 +Wj1J4ojM0uGj3xmCTxBwn0E4DTGpw4EVPLUzbCnhMXpW054EcgL6WbnhRINGQhDG0/NeN6gVd
 Yq1M3Gi4rYr5Rty2Kc4bf5bnlN/o+R/hsQ5T4p/1bP/ZM1QIBYpdgOuzhxDhc0AuRA4TQPtO+D
 Ito=
X-SBRS: None
X-MesageID: 32134937
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,364,1599537600"; 
   d="scan'208";a="32134937"
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
Date: Tue, 24 Nov 2020 00:40:50 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 23/11/2020 22:49, Julien Grall wrote:
> Hi Jan,
>
> On 19/11/2020 10:27, Jan Beulich wrote:
>> On 18.11.2020 19:09, Julien Grall wrote:
>>> On 23/10/2020 11:19, Jan Beulich wrote:
>>>> --- a/xen/include/xen/compiler.h
>>>> +++ b/xen/include/xen/compiler.h
>>>> @@ -12,6 +12,7 @@
>>>>       #define inline        __inline__
>>>>    #define always_inline __inline__ __attribute__
>>>> ((__always_inline__))
>>>> +#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
>>>
>>> bsearch() is only used by Arm and I haven't seen anyone so far
>>> complaining about the perf of I/O emulation.
>>>
>>> Therefore, I am not convinced that there is enough justification to
>>> introduce a GNU attribute just for this patch.
>>
>> Please settle this with Andrew: He had asked for the function to
>> become inline. I don't view making it static inline in the header
>> as an option here - if the compiler decides not to inline it, we
>> should not end up with multiple instances in different CUs.
>
> That's the downside of static inline... but then why is it suddenly a
> problem with this helper?
>
>> And
>> without making it static inline the attribute needs adding; at
>> least I'm unaware of an alternative which works with the various
>> compiler versions.
>
> The question we have to answer is: What is the gain with this approach?

Substantial.

>
> If it is not quantifiable, then introducing a compiler-specific
> attribute is not an option.
>
> IIRC, there are only two callers (all in Arm code) of this function.
> Even inlined, I don't believe you would drastically reduce the number
> of instructions compared to a full-blown version. To be generous, I
> would say you may save ~20 instructions per copy.
>
> Therefore, so far, the compiler-specific attribute doesn't look
> justified to me. As usual, I am happy to be proven wrong.

There is a very good reason why this is the classic example used for
extern inline in various libcs.

The gains are from the compiler being able to optimise away the function
pointer(s) entirely.  Instead of working on opaque objects, it can see
the accesses directly, implement compares as straight array reads (for
sorting, the swap() call turns into memcpy()), and, because it can see
all the memory accesses, it doesn't have to assume that every call to
cmp() modifies arbitrary data in the array (i.e. it doesn't have to
reload the objects from memory on every iteration).

extern inline allows the compiler full flexibility to judge whether
inlining is a net win, based on optimisation settings and observing what
the practical memory access pattern would be from not inlining.

extern inline is the appropriate thing to use here, except for the big
note in the GCC manual saying "always use gnu_inline in this case", which
appears to be working around a change in the C99 standard that forces
any non-static inline to emit a body even when it's not called, due to
rules about global symbols.

Therefore, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Some further observations:

For arch/arm/io.c, the handlers are sorted, so find_mmio_handler() will
be O(lg n), but it will surely be faster with the inlined version, and
this is the fastpath.

register_mmio_handler() OTOH is massively expensive, because sort()
turns the array into a heap and back into an array on every insertion,
just to insert an entry into an already sorted array.  It would be more
efficient to library-fy the work I did for VT-x MSR load/save lists
(again, extern inline) and reuse
"insert_$FOO_into_sorted_list_of_FOOs()" which is a search, single
memmove() to make a gap, and a memcpy() into place.

When you compile io.c with this patch in place, the delta is:

add/remove: 0/1 grow/shrink: 1/0 up/down: 92/-164 (-72)
Function                                     old     new   delta
try_handle_mmio                              720     812     +92
bsearch                                      164       -    -164
Total: Before=992489, After=992417, chg -0.01%

The reason cmp_mmio_handler (140 bytes) doesn't drop out is that it
is referenced by register_mmio_handler()'s call to sort().  All in all,
the inlined version is less than 1/3 the size of the out-of-lined
version, but I haven't characterised it further than that.


On a totally separate point, I wonder if we'd be better off compiling
with -fgnu89-inline, because I can't see any case where we'd want the C99
inline semantics anywhere in Xen.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 00:58:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 00:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35311.66768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMfL-0006cO-UP; Tue, 24 Nov 2020 00:58:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35311.66768; Tue, 24 Nov 2020 00:58:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMfL-0006cH-RN; Tue, 24 Nov 2020 00:58:43 +0000
Received: by outflank-mailman (input) for mailman id 35311;
 Tue, 24 Nov 2020 00:58:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpYA=E6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khMfK-0006cC-Qb
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 00:58:42 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce88a05e-d02b-4883-989b-0dccbfe11bfc;
 Tue, 24 Nov 2020 00:58:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606179521;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=AXk/zYR173btMvz+fOH+QB2vsjO3ci4oUuKHh0S1jBc=;
  b=UTVFd4MYtGWT32BNHsFI3B0kpNABSWgRk6rKlhavvjV/7GFzffgAGz41
   LPQesFRI9kkVtASQsPXoKedZoCqXKjbhxvdgKJAFXG5x5UmpppFRgWKAG
   rtF4JTicVyUyINroErDACtj5hVoQ9g9LQf94etRf7D7Ww+iT++7A267XO
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 6K45fDnuX7TsL/uMrd1UGzoTP9lDxlozUYwdwgMcu+Vn/qruYU3plmNu/YBrw4I3/qsCYCPKqF
 2Gk/JzBVrtI8L84wNNYpDVQD8jfg4N1zuEiMochvgyB5tPfeEVhu+S3HtJx3AMcCG39rWsnoZ7
 16hxb/JqPDNTqOaAw2/uk8JQnsdLxtuPA0C9bX37JPs/HmZrluSHVc3yWD5ACDNA2YCKBhqsO9
 ulgrH2WNSfg0xCuyIYGcPBBN+Jv1qx8FTvKSmIzVhIMLMt26GoIICjed41ctTu1zkXRcGY0Mxy
 zm4=
X-SBRS: None
X-MesageID: 32135498
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,364,1599537600"; 
   d="scan'208";a="32135498"
Subject: Re: [PATCH v2 4/8] lib: move parse_size_and_unit()
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1bd906ff-0b37-07de-75ab-84a169151c2d@citrix.com>
Date: Tue, 24 Nov 2020 00:58:12 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 23/10/2020 11:17, Jan Beulich wrote:
> ... into its own CU, to build it into an archive.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/common/lib.c     | 39 ----------------------------------
>  xen/lib/Makefile     |  1 +
>  xen/lib/parse-size.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 51 insertions(+), 39 deletions(-)
>  create mode 100644 xen/lib/parse-size.c

What is the point of turning this into a library?  It isn't a leaf
function (calls simple_strtoull()) and doesn't have any plausible
way of losing all its callers in various configurations (given its
direct use by the cmdline parsing logic).

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 01:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 01:06:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35319.66779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMmA-0003fx-PN; Tue, 24 Nov 2020 01:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35319.66779; Tue, 24 Nov 2020 01:05:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMmA-0003fq-ML; Tue, 24 Nov 2020 01:05:46 +0000
Received: by outflank-mailman (input) for mailman id 35319;
 Tue, 24 Nov 2020 01:05:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s6aG=E6=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1khMm9-0003fl-FW
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 01:05:45 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.133])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 792fb3e0-d08c-41cd-bde2-922febcbc97d;
 Tue, 24 Nov 2020 01:05:43 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay08.hostedemail.com (Postfix) with ESMTP id C6794182CED28;
 Tue, 24 Nov 2020 01:05:42 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.128.180])
 (Authenticated sender: joe@perches.com)
 by omf04.hostedemail.com (Postfix) with ESMTPA;
 Tue, 24 Nov 2020 01:05:31 +0000 (UTC)
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 2,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:973:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1538:1567:1593:1594:1711:1714:1730:1747:1777:1792:2393:2559:2562:2828:3138:3139:3140:3141:3142:3622:3865:3867:3868:3872:3874:4321:5007:6119:6742:6743:7903:10004:10400:10848:11658:11914:12297:12740:12760:12895:13069:13311:13357:13439:14659:21080:21627:30012:30054:30060:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: sea70_4d178da27369
X-Filterd-Recvd-Size: 4565
Received: from XPS-9350.home (unknown [47.151.128.180])
	(Authenticated sender: joe@perches.com)
	by omf04.hostedemail.com (Postfix) with ESMTPA;
	Tue, 24 Nov 2020 01:05:31 +0000 (UTC)
Message-ID: <e72a1aaef8673553a3ee9dfa033d6e893e00abcd.camel@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
From: Joe Perches <joe@perches.com>
To: Finn Thain <fthain@telegraphics.com.au>, Miguel Ojeda
	 <miguel.ojeda.sandonis@gmail.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, Kees Cook
 <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, "Gustavo A. R.
 Silva" <gustavoars@kernel.org>, linux-kernel
 <linux-kernel@vger.kernel.org>,  alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org,  bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org,  cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org,  dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, Linux ARM
 <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, Linux Crypto Mailing
 List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,  Ext4 Developers List
 <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org,  linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org,  linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org,  linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>,
 linux-integrity@vger.kernel.org,  linux-mediatek@lists.infradead.org, Linux
 Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-security-module@vger.kernel.org, 
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
 linux-watchdog@vger.kernel.org, linux-wireless
 <linux-wireless@vger.kernel.org>,  Network Development
 <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org,
 nouveau@lists.freedesktop.org,  op-tee@lists.trustedfirmware.org,
 oss-drivers@netronome.com,  patches@opensource.cirrus.com,
 rds-devel@oss.oracle.com,  reiserfs-devel@vger.kernel.org,
 samba-technical@lists.samba.org,  selinux@vger.kernel.org,
 target-devel@vger.kernel.org,  tipc-discussion@lists.sourceforge.net,
 usb-storage@lists.one-eyed-alien.net, 
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org,  linux-hardening@vger.kernel.org, Nick
 Desaulniers <ndesaulniers@google.com>,  Nathan Chancellor
 <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>
Date: Mon, 23 Nov 2020 17:05:30 -0800
In-Reply-To: <alpine.LNX.2.23.453.2011241036520.7@nippy.intranet>
References: <cover.1605896059.git.gustavoars@kernel.org>
	 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
	 <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
	 <CANiq72=z+tmuey9wj3Kk7wX5s0hTHpsQdLhAqcOVNrHon6xn5Q@mail.gmail.com>
	 <alpine.LNX.2.23.453.2011241036520.7@nippy.intranet>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.38.1-1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Tue, 2020-11-24 at 11:58 +1100, Finn Thain wrote:
> it's not for me to prove that such patches don't affect code 
> generation. That's for the patch author and (unfortunately) for reviewers.

Ideally, that proof would be provided by the compilation system itself
and not patch authors nor reviewers nor maintainers.

Unfortunately gcc does not guarantee repeatability or deterministic output.
To my knowledge, neither does clang.




From xen-devel-bounces@lists.xenproject.org Tue Nov 24 01:18:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 01:18:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35325.66792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMxx-0004eK-UM; Tue, 24 Nov 2020 01:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35325.66792; Tue, 24 Nov 2020 01:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khMxx-0004eD-R1; Tue, 24 Nov 2020 01:17:57 +0000
Received: by outflank-mailman (input) for mailman id 35325;
 Tue, 24 Nov 2020 01:17:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1khMxv-0004e8-S4
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 01:17:55 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 06df1276-6f68-4ffe-83aa-6eb82f589ee5;
 Tue, 24 Nov 2020 01:17:54 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id 0EF842A8E0;
 Mon, 23 Nov 2020 19:58:39 -0500 (EST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
	id 1khMxv-0004e8-S4
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 01:17:55 +0000
X-Inumbo-ID: 06df1276-6f68-4ffe-83aa-6eb82f589ee5
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 06df1276-6f68-4ffe-83aa-6eb82f589ee5;
	Tue, 24 Nov 2020 01:17:54 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
	by kvm5.telegraphics.com.au (Postfix) with ESMTP id 0EF842A8E0;
	Mon, 23 Nov 2020 19:58:39 -0500 (EST)
Date: Tue, 24 Nov 2020 11:58:37 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
    Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    linux-kernel <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org, 
    amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
    ceph-devel@vger.kernel.org, cluster-devel@redhat.com, 
    coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com, 
    drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
    GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
    intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
    keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
    linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-arm-msm@vger.kernel.org, linux-atm-general@lists.sourceforge.net, 
    linux-block@vger.kernel.org, linux-can@vger.kernel.org, 
    linux-cifs@vger.kernel.org, 
    Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, 
    Ext4 Developers List <linux-ext4@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org, 
    linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org, 
    linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org, 
    linux-ide@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-input <linux-input@vger.kernel.org>, linux-integrity@vger.kernel.org, 
    linux-mediatek@lists.infradead.org, 
    Linux Media Mailing List <linux-media@vger.kernel.org>, 
    linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>, 
    linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
    linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
    linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
    linux-security-module@vger.kernel.org, 
    linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
    linux-watchdog@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    Network Development <netdev@vger.kernel.org>, 
    netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org, 
    op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com, 
    patches@opensource.cirrus.com, rds-devel@oss.oracle.com, 
    reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org, 
    selinux@vger.kernel.org, target-devel@vger.kernel.org, 
    tipc-discussion@lists.sourceforge.net, 
    usb-storage@lists.one-eyed-alien.net, 
    virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org, 
    Nick Desaulniers <ndesaulniers@google.com>, 
    Nathan Chancellor <natechancellor@gmail.com>, 
    Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
In-Reply-To: <CANiq72=z+tmuey9wj3Kk7wX5s0hTHpsQdLhAqcOVNrHon6xn5Q@mail.gmail.com>
Message-ID: <alpine.LNX.2.23.453.2011241036520.7@nippy.intranet>
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com> <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>
 <CANiq72=z+tmuey9wj3Kk7wX5s0hTHpsQdLhAqcOVNrHon6xn5Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII


On Mon, 23 Nov 2020, Miguel Ojeda wrote:

> On Mon, 23 Nov 2020, Finn Thain wrote:
> 
> > On Sun, 22 Nov 2020, Miguel Ojeda wrote:
> > 
> > > 
> > > It isn't that much effort, is it? Plus we need to take into 
> > > account the future mistakes that it might prevent, too.
> > 
> > We should also take into account optimism about future improvements 
> > in tooling.
> > 
> Not sure what you mean here. There is no reliable way to guess what the 
> intention was with a missing fallthrough, even if you parsed whitespace 
> and indentation.
> 

What I meant was that you've used pessimism as if it was fact.

For example, "There is no way to guess what the effect would be if the 
compiler trained programmers to add a knee-jerk 'break' statement to avoid 
a warning".

Moreover, what I meant was that preventing programmer mistakes is a 
problem to be solved by development tools. The idea that retro-fitting new 
language constructs onto mature code is somehow necessary to "prevent 
future mistakes" is entirely questionable.
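[For context, the "new language construct" at issue is a thin macro over a
compiler attribute. A simplified sketch, modelled on the kernel's
include/linux/compiler_attributes.h (exact details vary by kernel version
and compiler), with a hypothetical demo() function showing its use:]

```c
/* Simplified sketch of the `fallthrough` construct under discussion,
 * modelled on the kernel's definition; not the verbatim kernel source. */
#if defined(__has_attribute)
# if __has_attribute(__fallthrough__)
#  define fallthrough __attribute__((__fallthrough__))
# endif
#endif
#ifndef fallthrough
# define fallthrough do {} while (0) /* fallthrough */
#endif

/* The annotation marks an intentional fall-through, so
 * -Wimplicit-fallthrough stays silent without changing control flow. */
static int demo(int x)
{
	int n = 0;

	switch (x) {
	case 1:
		n += 1;
		fallthrough;	/* deliberately continues into case 2 */
	case 2:
		n += 2;
		break;
	default:
		n = -1;
	}
	return n;
}
```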

> > > So even if there were zero problems found so far, it is still a 
> > > positive change.
> > > 
> > 
> > It is if you want to spin it that way.
> > 
> How is that a "spin"? It is a fact that we won't get *implicit* 
> fallthrough mistakes anymore (in particular if we make it a hard error).
> 

Perhaps "handwaving" is a better term?

> > > I would agree if these changes were high risk, though; but they are 
> > > almost trivial.
> > > 
> > 
> > This is trivial:
> > 
> >  case 1:
> >         this();
> > +       fallthrough;
> >  case 2:
> >         that();
> > 
> > But what we inevitably get is changes like this:
> > 
> >  case 3:
> >         this();
> > +       break;
> >  case 4:
> >         hmmm();
> > 
> > Why? Mainly to silence the compiler. Also because the patch author 
> > argued successfully that they had found a theoretical bug, often in 
> > mature code.
> > 
> If someone changes control flow, that is on them. Every kernel developer 
> knows what `break` does.
> 

Sure. And if you put -Wimplicit-fallthrough into the Makefile and if that 
leads to well-intentioned patches that cause regressions, it is partly on 
you.

Have you ever considered the overall cost of the countless 
-Wpresume-incompetence flags?

Perhaps you pay the power bill for a build farm that produces logs that 
no-one reads? Perhaps you've run git bisect, knowing that the compiler 
messages are not interesting? Or compiled software written in a language 
that generates impenetrable messages? If so, here's a tip:

# grep CFLAGS /etc/portage/make.conf 
CFLAGS="... -Wno-all -Wno-extra ..."
CXXFLAGS="${CFLAGS}"

Now allow me some pessimism: the hardware upgrades, gigawatt hours and 
wait time attributable to obligatory static analyses are a net loss.

> > But is anyone keeping score of the regressions? If unreported bugs 
> > count, what about unreported regressions?
> > 
> Introducing `fallthrough` does not change semantics. If you are really 
> keen, you can always compare the objects because the generated code 
> shouldn't change.
> 

No, it's not for me to prove that such patches don't affect code 
generation. That's for the patch author and (unfortunately) for reviewers.
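[A sketch of the kind of check being asked for here, assuming you have the
object file built before and after a patch; paths are hypothetical, and real
kernel builds generally need reproducibility measures first (fixed
timestamps, stripped build IDs) before a byte comparison means anything:]

```shell
# Compare two object files byte-for-byte. Identical output is strong
# evidence the patch (e.g. adding `fallthrough;`) was codegen-neutral;
# a difference is not proof of a semantic change until build noise
# (timestamps, build IDs) has been normalised away.
codegen_identical() {
    cmp -s "$1" "$2" && echo "identical" || echo "differs"
}

# Hypothetical usage:
#   codegen_identical before/drivers/foo.o after/drivers/foo.o
```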

> Cheers,
> Miguel
> 


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 01:33:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 01:33:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35332.66804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khNCe-0006LW-8X; Tue, 24 Nov 2020 01:33:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35332.66804; Tue, 24 Nov 2020 01:33:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khNCe-0006LP-5B; Tue, 24 Nov 2020 01:33:08 +0000
Received: by outflank-mailman (input) for mailman id 35332;
 Tue, 24 Nov 2020 01:33:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EY5B=E6=google.com=ndesaulniers@srs-us1.protection.inumbo.net>)
 id 1khNCc-0006LK-SP
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 01:33:06 +0000
Received: from mail-pf1-x441.google.com (unknown [2607:f8b0:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b332b676-615d-4d5e-b992-7df8553dfda5;
 Tue, 24 Nov 2020 01:33:06 +0000 (UTC)
Received: by mail-pf1-x441.google.com with SMTP id t8so16801014pfg.8
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 17:33:05 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=EY5B=E6=google.com=ndesaulniers@srs-us1.protection.inumbo.net>)
	id 1khNCc-0006LK-SP
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 01:33:06 +0000
X-Inumbo-ID: b332b676-615d-4d5e-b992-7df8553dfda5
Received: from mail-pf1-x441.google.com (unknown [2607:f8b0:4864:20::441])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b332b676-615d-4d5e-b992-7df8553dfda5;
	Tue, 24 Nov 2020 01:33:06 +0000 (UTC)
Received: by mail-pf1-x441.google.com with SMTP id t8so16801014pfg.8
        for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 17:33:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=1Fpt+3NGNjcuXPoKd+YKzvfWCCI+i2QS11ln4xb/K2c=;
        b=mZ10EKN9Q6HwSmECmmsUHbUHF/oTLT4c/YpQVGCE0gPLnlz6SgHQW3ieQ7zeUHJnUi
         XUPs+F1GoZaFLBKWU3CvMTPUTS8a5+RdM3NBjgdsJpNB5L7Gee8a5rDblh6bnWIWT9TK
         uir+AjWjaGJnRAA0K84IM0yu+WQnaSePUaJNJPiE7LfLBhB3Pd0A4gWblql3Lao7defI
         n9Uw15itvZVSId22hyle+f6GM1M4THjCwMgL7v1hyU5f7oNm+YC3/Fk6KnTIGBq7Lgby
         +eM5Ju8Jh9l5BxBe3HwjA5bt+bnUjVm7e9zM9zQUlIbVcSVLiqKYS2m+arIIzBAIXu/j
         s+uw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=1Fpt+3NGNjcuXPoKd+YKzvfWCCI+i2QS11ln4xb/K2c=;
        b=iFS0Zy7U4ZMdt3kHxvZvz/9bfIxbsTzp8zmXB787WuvqR66m63jedXIKqjolsQProd
         uk8DV1za5lCwPrftWdpcTMDxFoLHpgStAHPT4pU/ChlR0c/afcNYeIplgjoZZeOrBlRb
         m4L6ul3Eev1160Y6I5pmEIg6zfT+6KxDtEGQ1Wm94tJMsHobEEJ7DWwxIrIw3MC2wHbL
         rXJ817xaeJS7XnXEswhSW8Jc3c/1sYS3dJsV89XG/30kaMacINKshyJqJc3XAXJZrhS4
         jqRSKtlMGF6WftcitFGjkEkE16Z8j1afmffHT+Y0eMUz0nLXWbWdoM/67+AHvOMLHEkE
         e9oQ==
X-Gm-Message-State: AOAM5327ikfkZsj/Vp5iPmuCUFJRFwWpznYt9KGZZd2JbAIa62z1M4Zq
	YiLsd90TmHR2Iwwj+g0SDIaZMEIE9i1y3X2gYzEYzw==
X-Google-Smtp-Source: ABdhPJwQDF2vxX46wbajF4ioOOwzM/J33jC4qlEIQ0nX0CJ8Ae2/iYNAtfiNkgC6UbM9BT3sKlieSNPFWrTquhIIwSk=
X-Received: by 2002:a65:6a4e:: with SMTP id o14mr1859973pgu.263.1606181584110;
 Mon, 23 Nov 2020 17:33:04 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
In-Reply-To: <202011220816.8B6591A@keescook>
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Mon, 23 Nov 2020 17:32:51 -0800
Message-ID: <CAKwvOdntVfXj2WRR5n6Kw7BfG7FdKpTeHeh5nPu5AzwVMhOHTg@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Kees Cook <keescook@chromium.org>
Cc: Jakub Kicinski <kuba@kernel.org>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	LKML <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, bridge@lists.linux-foundation.org, 
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com, coreteam@netfilter.org, 
	devel@driverdev.osuosl.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel <dri-devel@lists.freedesktop.org>, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-arm-msm <linux-arm-msm@vger.kernel.org>, linux-atm-general@lists.sourceforge.net, 
	linux-block@vger.kernel.org, linux-can@vger.kernel.org, 
	linux-cifs@vger.kernel.org, 
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	linux-ext4@vger.kernel.org, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input@vger.kernel.org, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	linux-media@vger.kernel.org, linux-mmc@vger.kernel.org, 
	Linux Memory Management List <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nathan Chancellor <natechancellor@gmail.com>, 
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Sun, Nov 22, 2020 at 8:17 AM Kees Cook <keescook@chromium.org> wrote:
>
> On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> > If none of the 140 patches here fix a real bug, and there is no change
> > to machine code then it sounds to me like a W=2 kind of a warning.
>
> FWIW, this series has found at least one bug so far:
> https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/

So it looks like the bulk of these are:
switch (x) {
  case 0:
    ++x;
  default:
    break;
}

I have a patch that fixes those up for clang:
https://reviews.llvm.org/D91895

There are 3 other cases that don't quite match between GCC and Clang that I
observe in the kernel:
switch (x) {
  case 0:
    ++x;
  default:
    goto y;
}
y:;

switch (x) {
  case 0:
    ++x;
  default:
    return;
}

switch (x) {
  case 0:
    ++x;
  default:
    ;
}

Based on your link, and Nathan's comment on my patch, maybe Clang
should continue to warn for the above (at least the `default: return;`
case) and GCC should change?  While the last case looks harmless,
there were only 1 or 2 across the tree in my limited configuration
testing; I really think we should just add `break`s for those.
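[The first pattern above can be reproduced in a few lines; a minimal
standalone sketch (not kernel code, function name is made up) of the
fall-into-`default: break;` shape where GCC and Clang have diverged on
whether -Wimplicit-fallthrough should fire:]

```c
/* case 0 falls into a default that only breaks. Behaviour is identical
 * with or without an explicit `break;` after ++x, which is why adding
 * one merely to silence the warning is codegen-neutral here. */
static int bump(int x)
{
	switch (x) {
	case 0:
		++x;
		/* implicit fall-through into default */
	default:
		break;
	}
	return x;
}
```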
-- 
Thanks,
~Nick Desaulniers


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 02:49:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 02:49:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35343.66822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khONl-0004Li-2m; Tue, 24 Nov 2020 02:48:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35343.66822; Tue, 24 Nov 2020 02:48:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khONk-0004Lb-Uu; Tue, 24 Nov 2020 02:48:40 +0000
Received: by outflank-mailman (input) for mailman id 35343;
 Tue, 24 Nov 2020 02:48:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1khONj-0004LW-LG
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 02:48:39 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0ff3c3fd-67c8-4c3f-94a2-bc2cd37df513;
 Tue, 24 Nov 2020 02:48:38 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id EF15F2AA0D;
 Mon, 23 Nov 2020 21:48:35 -0500 (EST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
	id 1khONj-0004LW-LG
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 02:48:39 +0000
X-Inumbo-ID: 0ff3c3fd-67c8-4c3f-94a2-bc2cd37df513
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 0ff3c3fd-67c8-4c3f-94a2-bc2cd37df513;
	Tue, 24 Nov 2020 02:48:38 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
	by kvm5.telegraphics.com.au (Postfix) with ESMTP id EF15F2AA0D;
	Mon, 23 Nov 2020 21:48:35 -0500 (EST)
Date: Tue, 24 Nov 2020 13:48:34 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Joe Perches <joe@perches.com>
cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>, 
    James Bottomley <James.Bottomley@hansenpartnership.com>, 
    Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    linux-kernel <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org, 
    amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org, 
    ceph-devel@vger.kernel.org, cluster-devel@redhat.com, 
    coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com, 
    drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org, 
    GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com, 
    intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org, 
    keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net, 
    linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-arm-msm@vger.kernel.org, linux-atm-general@lists.sourceforge.net, 
    linux-block@vger.kernel.org, linux-can@vger.kernel.org, 
    linux-cifs@vger.kernel.org, 
    Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, 
    Ext4 Developers List <linux-ext4@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org, 
    linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org, 
    linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org, 
    linux-ide@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-input <linux-input@vger.kernel.org>, linux-integrity@vger.kernel.org, 
    linux-mediatek@lists.infradead.org, 
    Linux Media Mailing List <linux-media@vger.kernel.org>, 
    linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>, 
    linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org, 
    linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org, 
    linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org, 
    linux-security-module@vger.kernel.org, 
    linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
    linux-watchdog@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    Network Development <netdev@vger.kernel.org>, 
    netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org, 
    op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com, 
    patches@opensource.cirrus.com, rds-devel@oss.oracle.com, 
    reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org, 
    selinux@vger.kernel.org, target-devel@vger.kernel.org, 
    tipc-discussion@lists.sourceforge.net, 
    usb-storage@lists.one-eyed-alien.net, 
    virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org, 
    Nick Desaulniers <ndesaulniers@google.com>, 
    Nathan Chancellor <natechancellor@gmail.com>, 
    Miguel Ojeda <ojeda@kernel.org>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
In-Reply-To: <e72a1aaef8673553a3ee9dfa033d6e893e00abcd.camel@perches.com>
Message-ID: <alpine.LNX.2.23.453.2011241210310.7@nippy.intranet>
References: <cover.1605896059.git.gustavoars@kernel.org>  <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>  <202011201129.B13FDB3C@keescook>  <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>  <202011220816.8B6591A@keescook>
  <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>  <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>  <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet>  <CANiq72=z+tmuey9wj3Kk7wX5s0hTHpsQdLhAqcOVNrHon6xn5Q@mail.gmail.com>
  <alpine.LNX.2.23.453.2011241036520.7@nippy.intranet> <e72a1aaef8673553a3ee9dfa033d6e893e00abcd.camel@perches.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII


On Mon, 23 Nov 2020, Joe Perches wrote:

> On Tue, 2020-11-24 at 11:58 +1100, Finn Thain wrote:
> > it's not for me to prove that such patches don't affect code 
> > generation. That's for the patch author and (unfortunately) for 
> > reviewers.
> 
> Ideally, that proof would be provided by the compilation system itself 
> and not patch authors nor reviewers nor maintainers.
> 
> Unfortunately gcc does not guarantee repeatability or deterministic 
> output. To my knowledge, neither does clang.
> 

Yes, I've said the same thing myself. But having attempted it, I now think 
this is a hard problem. YMMV.

https://lore.kernel.org/linux-scsi/alpine.LNX.2.22.394.2004281017310.12@nippy.intranet/
https://lore.kernel.org/linux-scsi/alpine.LNX.2.22.394.2005211358460.8@nippy.intranet/


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 04:51:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 04:51:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35355.66834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khQIA-000715-9W; Tue, 24 Nov 2020 04:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35355.66834; Tue, 24 Nov 2020 04:51:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khQIA-00070y-4f; Tue, 24 Nov 2020 04:51:02 +0000
Received: by outflank-mailman (input) for mailman id 35355;
 Tue, 24 Nov 2020 04:51:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khQI9-00070q-43; Tue, 24 Nov 2020 04:51:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khQI8-0000Qo-RN; Tue, 24 Nov 2020 04:51:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khQI8-0000Ru-J6; Tue, 24 Nov 2020 04:51:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khQI8-0002SU-Ic; Tue, 24 Nov 2020 04:51:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khQI9-00070q-43; Tue, 24 Nov 2020 04:51:01 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L6iJWa2m5rnfeaht36AtQB9X7IB4luhVB8WdU7sW/0c=; b=M32s6oB7U7qipVk3sfspw+4mei
	UPBa2DxwpkAYzesNbLyuY6DzaHhlrH26hhTxTrao3l4Yy+Lauf8ww8/tzv/j7Q3HugHY1752SSswk
	g8k+3bl9zS3yQ+stHmNiqb9S/rhDbpTujsMC/gwyCyql/hjRZgxOKHPv9wl1ZDAHZ5tY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khQI8-0000Qo-RN; Tue, 24 Nov 2020 04:51:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khQI8-0000Ru-J6; Tue, 24 Nov 2020 04:51:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khQI8-0002SU-Ic; Tue, 24 Nov 2020 04:51:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156970-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156970: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=fb764373eaf7f65fd9e85377736f83aae09817b2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 04:51:00 +0000

flight 156970 qemu-mainline real [real]
flight 156976 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156970/
http://logs.test-lab.xenproject.org/osstest/logs/156976/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                fb764373eaf7f65fd9e85377736f83aae09817b2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   95 days
Failing since        152659  2020-08-21 14:07:39 Z   94 days  201 attempts
Testing same since   156970  2020-11-23 20:07:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 68790 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 05:37:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 05:37:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35256.66855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khR0a-0002UT-4H; Tue, 24 Nov 2020 05:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35256.66855; Tue, 24 Nov 2020 05:36:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khR0Z-0002UM-VV; Tue, 24 Nov 2020 05:36:56 +0000
Received: by outflank-mailman (input) for mailman id 35256;
 Mon, 23 Nov 2020 23:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FBUm=E5=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
 id 1khKph-0004Nl-6z
 for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 23:01:17 +0000
Received: from GBR01-CWL-obe.outbound.protection.outlook.com (unknown
 [40.107.11.138]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57239568-1c56-4dd2-b27e-8f870e3faafe;
 Mon, 23 Nov 2020 23:01:14 +0000 (UTC)
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:83::20)
 by LNXP265MB1177.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:87::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.25; Mon, 23 Nov
 2020 23:01:12 +0000
Received: from LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b]) by LNXP265MB0924.GBRP265.PROD.OUTLOOK.COM
 ([fe80::7956:7dd:b840:ac1b%6]) with mapi id 15.20.3564.034; Mon, 23 Nov 2020
 23:01:12 +0000
Received: from broadband.bt.com (2a00:23c6:751d:7701:1f1a:39af:4235:7681) by
 LO2P265CA0246.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:8a::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Mon, 23 Nov 2020 23:01:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FBUm=E5=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
	id 1khKph-0004Nl-6z
	for xen-devel@lists.xenproject.org; Mon, 23 Nov 2020 23:01:17 +0000
X-Inumbo-ID: 57239568-1c56-4dd2-b27e-8f870e3faafe
Received: from GBR01-CWL-obe.outbound.protection.outlook.com (unknown [40.107.11.138])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 57239568-1c56-4dd2-b27e-8f870e3faafe;
	Mon, 23 Nov 2020 23:01:14 +0000 (UTC)
Date: Mon, 23 Nov 2020 23:01:09 +0000 (GMT)
From: Michael Young <m.a.young@durham.ac.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: xen-devel@lists.xenproject.org
Subject: Re: zstd compressed kernels
In-Reply-To: <71d36766-1258-0a79-02ff-d888a41e431e@citrix.com>
Message-ID: <6edb6c99-4289-b991-c767-333e376ce66@austen3.home>
References: <1abcd9d-428f-93d-b63d-996ef4592723@austen3.home> <71d36766-1258-0a79-02ff-d888a41e431e@citrix.com>
Content-Type: multipart/mixed; boundary="8323328-1543918517-1606172471=:3753"
MIME-Version: 1.0

--8323328-1543918517-1606172471=:3753
Content-Type: text/plain; charset=US-ASCII; format=flowed

On Tue, 17 Nov 2020, Andrew Cooper wrote:

> If you're willing to have a go:
>
> For dom0 support, port Linux's decompressor into xen/common/ and plumb
> it into xen/common/decompress.c
>
> For domU's, tools/libs/guest/xg_dom_bzimageloader.c and
> xc_dom_probe_bzimage_kernel()

Here is what I have so far. It works for me with a dom0 boot, though only 
after I reduced the maximum decompressed size (out_len) from LONG_MAX to 
INT_MAX. The patches aren't intended to be final, and I suspect there may 
need to be adjustments for guest support.

 	Michael Young
--8323328-1543918517-1606172471=:3753
Content-Type: text/plain; charset=US-ASCII; name=0001-import-zstd-decompress-code-from-fedora-kernel-kerne.patch
Content-Transfer-Encoding: BASE64
Content-ID: <716075c8-bcc5-b44b-a261-4b103cb7a97e@austen3.home>
Content-Description:
Content-Disposition: attachment; filename=0001-import-zstd-decompress-code-from-fedora-kernel-kerne.patch

RnJvbSAxZDRkNzZhOTk3MWZiYzM1ODg3NWQ5NTU0M2Q2YWE3ZDhlNzI3YmE0IE1vbiBTZXAgMTcg
MDA6MDA6MDAgMjAwMQ0KTWVzc2FnZS1JZDogPDFkNGQ3NmE5OTcxZmJjMzU4ODc1ZDk1NTQzZDZh
YTdkOGU3MjdiYTQuMTYwNjE3MDgzNC5naXQubS5hLnlvdW5nQGR1cmhhbS5hYy51az4NCkZyb206
IE1pY2hhZWwgWW91bmcgPG0uYS55b3VuZ0BkdXJoYW0uYWMudWs+DQpEYXRlOiBTYXQsIDIxIE5v
diAyMDIwIDIwOjI4OjU0ICswMDAwDQpTdWJqZWN0OiBbWEVOIFBBVENIIDEvMl0gaW1wb3J0IHpz
dGQgZGVjb21wcmVzcyBjb2RlIGZyb20gZmVkb3JhIGtlcm5lbA0KIGtlcm5lbC01LjkuOC0yMDAu
ZmMzMw0KDQotLS0NCiB4ZW4vY29tbW9uL2RlY29tcHJlc3NfdW56c3RkLmMgICB8ICAzNDUgKysr
Kw0KIHhlbi9jb21tb24veHhoYXNoLmMgICAgICAgICAgICAgIHwgIDUwMCArKysrKysNCiB4ZW4v
Y29tbW9uL3pzdGQvYml0c3RyZWFtLmggICAgICB8ICAzNzkgKysrKysNCiB4ZW4vY29tbW9uL3pz
dGQvZGVjb21wcmVzcy5jICAgICB8IDI1MzEgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
DQogeGVuL2NvbW1vbi96c3RkL2VudHJvcHlfY29tbW9uLmMgfCAgMjQzICsrKw0KIHhlbi9jb21t
b24venN0ZC9lcnJvcl9wcml2YXRlLmggIHwgICA1MyArDQogeGVuL2NvbW1vbi96c3RkL2ZzZS5o
ICAgICAgICAgICAgfCAgNTc1ICsrKysrKysNCiB4ZW4vY29tbW9uL3pzdGQvZnNlX2RlY29tcHJl
c3MuYyB8ICAzMjUgKysrKw0KIHhlbi9jb21tb24venN0ZC9odWYuaCAgICAgICAgICAgIHwgIDIx
MiArKysNCiB4ZW4vY29tbW9uL3pzdGQvaHVmX2RlY29tcHJlc3MuYyB8ICA5NjAgKysrKysrKysr
KysNCiB4ZW4vY29tbW9uL3pzdGQvbWVtLmggICAgICAgICAgICB8ICAxNTEgKysNCiB4ZW4vY29t
bW9uL3pzdGQvenN0ZF9jb21tb24uYyAgICB8ICAgNzUgKw0KIHhlbi9jb21tb24venN0ZC96c3Rk
X2ludGVybmFsLmggIHwgIDI3MyArKysrDQogeGVuL2NvbW1vbi96c3RkL3pzdGRfb3B0LmggICAg
ICAgfCAxMDE0ICsrKysrKysrKysrKw0KIHhlbi9pbmNsdWRlL3hlbi94eGhhc2guaCAgICAgICAg
IHwgIDI1OSArKysNCiB4ZW4vaW5jbHVkZS94ZW4venN0ZC5oICAgICAgICAgICB8IDExNTcgKysr
KysrKysrKysrKysNCiAxNiBmaWxlcyBjaGFuZ2VkLCA5MDUyIGluc2VydGlvbnMoKykNCiBjcmVh
dGUgbW9kZSAxMDA2NDQgeGVuL2NvbW1vbi9kZWNvbXByZXNzX3VuenN0ZC5jDQogY3JlYXRlIG1v
ZGUgMTAwNjQ0IHhlbi9jb21tb24veHhoYXNoLmMNCiBjcmVhdGUgbW9kZSAxMDA2NDQgeGVuL2Nv
bW1vbi96c3RkL2JpdHN0cmVhbS5oDQogY3JlYXRlIG1vZGUgMTAwNjQ0IHhlbi9jb21tb24venN0
ZC9kZWNvbXByZXNzLmMNCiBjcmVhdGUgbW9kZSAxMDA2NDQgeGVuL2NvbW1vbi96c3RkL2VudHJv
cHlfY29tbW9uLmMNCiBjcmVhdGUgbW9kZSAxMDA2NDQgeGVuL2NvbW1vbi96c3RkL2Vycm9yX3By
aXZhdGUuaA0KIGNyZWF0ZSBtb2RlIDEwMDY0NCB4ZW4vY29tbW9uL3pzdGQvZnNlLmgNCiBjcmVh
dGUgbW9kZSAxMDA2NDQgeGVuL2NvbW1vbi96c3RkL2ZzZV9kZWNvbXByZXNzLmMNCiBjcmVhdGUg
bW9kZSAxMDA2NDQgeGVuL2NvbW1vbi96c3RkL2h1Zi5oDQogY3JlYXRlIG1vZGUgMTAwNjQ0IHhl
bi9jb21tb24venN0ZC9odWZfZGVjb21wcmVzcy5jDQogY3JlYXRlIG1vZGUgMTAwNjQ0IHhlbi9j
b21tb24venN0ZC9tZW0uaA0KIGNyZWF0ZSBtb2RlIDEwMDY0NCB4ZW4vY29tbW9uL3pzdGQvenN0
ZF9jb21tb24uYw0KIGNyZWF0ZSBtb2RlIDEwMDY0NCB4ZW4vY29tbW9uL3pzdGQvenN0ZF9pbnRl
cm5hbC5oDQogY3JlYXRlIG1vZGUgMTAwNjQ0IHhlbi9jb21tb24venN0ZC96c3RkX29wdC5oDQog
Y3JlYXRlIG1vZGUgMTAwNjQ0IHhlbi9pbmNsdWRlL3hlbi94eGhhc2guaA0KIGNyZWF0ZSBtb2Rl
IDEwMDY0NCB4ZW4vaW5jbHVkZS94ZW4venN0ZC5oDQoNCmRpZmYgLS1naXQgYS94ZW4vY29tbW9u
L2RlY29tcHJlc3NfdW56c3RkLmMgYi94ZW4vY29tbW9uL2RlY29tcHJlc3NfdW56c3RkLmMNCm5l
dyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAwMDAwMDAwLi4wYWQyYzE1NDc5DQotLS0gL2Rl
di9udWxsDQorKysgYi94ZW4vY29tbW9uL2RlY29tcHJlc3NfdW56c3RkLmMNCkBAIC0wLDAgKzEs
MzQ1IEBADQorLy8gU1BEWC1MaWNlbnNlLUlkZW50aWZpZXI6IEdQTC0yLjANCisNCisvKg0KKyAq
IEltcG9ydGFudCBub3RlcyBhYm91dCBpbi1wbGFjZSBkZWNvbXByZXNzaW9uDQorICoNCisgKiBB
dCBsZWFzdCBvbiB4ODYsIHRoZSBrZXJuZWwgaXMgZGVjb21wcmVzc2VkIGluIHBsYWNlOiB0aGUg
Y29tcHJlc3NlZCBkYXRhDQorICogaXMgcGxhY2VkIHRvIHRoZSBlbmQgb2YgdGhlIG91dHB1dCBi
dWZmZXIsIGFuZCB0aGUgZGVjb21wcmVzc29yIG92ZXJ3cml0ZXMNCisgKiBtb3N0IG9mIHRoZSBj
b21wcmVzc2VkIGRhdGEuIFRoZXJlIG11c3QgYmUgZW5vdWdoIHNhZmV0eSBtYXJnaW4gdG8NCisg
KiBndWFyYW50ZWUgdGhhdCB0aGUgd3JpdGUgcG9zaXRpb24gaXMgYWx3YXlzIGJlaGluZCB0aGUg
cmVhZCBwb3NpdGlvbi4NCisgKg0KKyAqIFRoZSBzYWZldHkgbWFyZ2luIGZvciBaU1REIHdpdGgg
YSAxMjggS0IgYmxvY2sgc2l6ZSBpcyBjYWxjdWxhdGVkIGJlbG93Lg0KKyAqIE5vdGUgdGhhdCB0
aGUgbWFyZ2luIHdpdGggWlNURCBpcyBiaWdnZXIgdGhhbiB3aXRoIEdaSVAgb3IgWFohDQorICoN
CisgKiBUaGUgd29yc3QgY2FzZSBmb3IgaW4tcGxhY2UgZGVjb21wcmVzc2lvbiBpcyB0aGF0IHRo
ZSBiZWdpbm5pbmcgb2YNCisgKiB0aGUgZmlsZSBpcyBjb21wcmVzc2VkIGV4dHJlbWVseSB3ZWxs
LCBhbmQgdGhlIHJlc3Qgb2YgdGhlIGZpbGUgaXMNCisgKiB1bmNvbXByZXNzaWJsZS4gVGh1cywg
d2UgbXVzdCBsb29rIGZvciB3b3JzdC1jYXNlIGV4cGFuc2lvbiB3aGVuIHRoZQ0KKyAqIGNvbXBy
ZXNzb3IgaXMgZW5jb2RpbmcgdW5jb21wcmVzc2libGUgZGF0YS4NCisgKg0KKyAqIFRoZSBzdHJ1
Y3R1cmUgb2YgdGhlIC56c3QgZmlsZSBpbiBjYXNlIG9mIGEgY29tcHJlc2VkIGtlcm5lbCBpcyBh
cyBmb2xsb3dzLg0KKyAqIE1heGltdW0gc2l6ZXMgKGFzIGJ5dGVzKSBvZiB0aGUgZmllbGRzIGFy
ZSBpbiBwYXJlbnRoZXNpcy4NCisgKg0KKyAqICAgIEZyYW1lIEhlYWRlcjogKDE4KQ0KKyAqICAg
IEJsb2NrczogKE4pDQorICogICAgQ2hlY2tzdW06ICg0KQ0KKyAqDQorICogVGhlIGZyYW1lIGhl
YWRlciBhbmQgY2hlY2tzdW0gb3ZlcmhlYWQgaXMgYXQgbW9zdCAyMiBieXRlcy4NCisgKg0KKyAq
IFpTVEQgc3RvcmVzIHRoZSBkYXRhIGluIGJsb2Nrcy4gRWFjaCBibG9jayBoYXMgYSBoZWFkZXIg
d2hvc2Ugc2l6ZSBpcw0KKyAqIGEgMyBieXRlcy4gQWZ0ZXIgdGhlIGJsb2NrIGhlYWRlciwgdGhl
cmUgaXMgdXAgdG8gMTI4IEtCIG9mIHBheWxvYWQuDQorICogVGhlIG1heGltdW0gdW5jb21wcmVz
c2VkIHNpemUgb2YgdGhlIHBheWxvYWQgaXMgMTI4IEtCLiBUaGUgbWluaW11bQ0KKyAqIHVuY29t
cHJlc3NlZCBzaXplIG9mIHRoZSBwYXlsb2FkIGlzIG5ldmVyIGxlc3MgdGhhbiB0aGUgcGF5bG9h
ZCBzaXplDQorICogKGV4Y2x1ZGluZyB0aGUgYmxvY2sgaGVhZGVyKS4NCisgKg0KKyAqIFRoZSBh
c3N1bXB0aW9uLCB0aGF0IHRoZSB1bmNvbXByZXNzZWQgc2l6ZSBvZiB0aGUgcGF5bG9hZCBpcyBu
ZXZlcg0KKyAqIHNtYWxsZXIgdGhhbiB0aGUgcGF5bG9hZCBpdHNlbGYsIGlzIHZhbGlkIG9ubHkg
d2hlbiB0YWxraW5nIGFib3V0DQorICogdGhlIHBheWxvYWQgYXMgYSB3aG9sZS4gSXQgaXMgcG9z
c2libGUgdGhhdCB0aGUgcGF5bG9hZCBoYXMgcGFydHMgd2hlcmUNCisgKiB0aGUgZGVjb21wcmVz
c29yIGNvbnN1bWVzIG1vcmUgaW5wdXQgdGhhbiBpdCBwcm9kdWNlcyBvdXRwdXQuIENhbGN1bGF0
aW5nDQorICogdGhlIHdvcnN0IGNhc2UgZm9yIHRoaXMgd291bGQgYmUgdHJpY2t5LiBJbnN0ZWFk
IG9mIHRyeWluZyB0byBkbyB0aGF0LA0KKyAqIGxldCdzIHNpbXBseSBtYWtlIHN1cmUgdGhhdCB0
aGUgZGVjb21wcmVzc29yIG5ldmVyIG92ZXJ3cml0ZXMgYW55IGJ5dGVzDQorICogb2YgdGhlIHBh
eWxvYWQgd2hpY2ggaXQgaXMgY3VycmVudGx5IHJlYWRpbmcuDQorICoNCisgKiBOb3cgd2UgaGF2
ZSBlbm91Z2ggaW5mb3JtYXRpb24gdG8gY2FsY3VsYXRlIHRoZSBzYWZldHkgbWFyZ2luLiBXZSBu
ZWVkDQorICogICAtIDIyIGJ5dGVzIGZvciB0aGUgLnpzdCBmaWxlIGZvcm1hdCBoZWFkZXJzOw0K
KyAqICAgLSAzIGJ5dGVzIHBlciBldmVyeSAxMjggS2lCIG9mIHVuY29tcHJlc3NlZCBzaXplIChv
bmUgYmxvY2sgaGVhZGVyIHBlcg0KKyAqICAgICBibG9jayk7IGFuZA0KKyAqICAgLSAxMjggS2lC
IChiaWdnZXN0IHBvc3NpYmxlIHpzdGQgYmxvY2sgc2l6ZSkgdG8gbWFrZSBzdXJlIHRoYXQgdGhl
DQorICogICAgIGRlY29tcHJlc3NvciBuZXZlciBvdmVyd3JpdGVzIGFueXRoaW5nIGZyb20gdGhl
IGJsb2NrIGl0IGlzIGN1cnJlbnRseQ0KKyAqICAgICByZWFkaW5nLg0KKyAqDQorICogV2UgZ2V0
IHRoZSBmb2xsb3dpbmcgZm9ybXVsYToNCisgKg0KKyAqICAgIHNhZmV0eV9tYXJnaW4gPSAyMiAr
IHVuY29tcHJlc3NlZF9zaXplICogMyAvIDEzMTA3MiArIDEzMTA3Mg0KKyAqICAgICAgICAgICAg
ICAgICA8PSAyMiArICh1bmNvbXByZXNzZWRfc2l6ZSA+PiAxNSkgKyAxMzEwNzINCisgKi8NCisN
CisvKg0KKyAqIFByZWJvb3QgZW52aXJvbm1lbnRzICNpbmNsdWRlICJwYXRoL3RvL2RlY29tcHJl
c3NfdW56c3RkLmMiLg0KKyAqIEFsbCBvZiB0aGUgc291cmNlIGZpbGVzIHdlIGRlcGVuZCBvbiBt
dXN0IGJlICNpbmNsdWRlZC4NCisgKiB6c3RkJ3Mgb25seSBzb3VyY2UgZGVwZW5kZW55IGlzIHh4
aGFzaCwgd2hpY2ggaGFzIG5vIHNvdXJjZQ0KKyAqIGRlcGVuZGVuY2llcy4NCisgKg0KKyAqIFdo
ZW4gVU5aU1REX1BSRUJPT1QgaXMgZGVmaW5lZCB3ZSBkZWNsYXJlIF9fZGVjb21wcmVzcygpLCB3
aGljaCBpcw0KKyAqIHVzZWQgZm9yIGtlcm5lbCBkZWNvbXByZXNzaW9uLCBpbnN0ZWFkIG9mIHVu
enN0ZCgpLg0KKyAqDQorICogRGVmaW5lIF9fRElTQUJMRV9FWFBPUlRTIGluIHByZWJvb3QgZW52
aXJvbm1lbnRzIHRvIHByZXZlbnQgc3ltYm9scw0KKyAqIGZyb20geHhoYXNoIGFuZCB6c3RkIGZy
b20gYmVpbmcgZXhwb3J0ZWQgYnkgdGhlIEVYUE9SVF9TWU1CT0wgbWFjcm8uDQorICovDQorI2lm
ZGVmIFNUQVRJQw0KKyMgZGVmaW5lIFVOWlNURF9QUkVCT09UDQorIyBpbmNsdWRlICJ4eGhhc2gu
YyINCisjIGluY2x1ZGUgInpzdGQvZW50cm9weV9jb21tb24uYyINCisjIGluY2x1ZGUgInpzdGQv
ZnNlX2RlY29tcHJlc3MuYyINCisjIGluY2x1ZGUgInpzdGQvaHVmX2RlY29tcHJlc3MuYyINCisj
IGluY2x1ZGUgInpzdGQvenN0ZF9jb21tb24uYyINCisjIGluY2x1ZGUgInpzdGQvZGVjb21wcmVz
cy5jIg0KKyNlbmRpZg0KKw0KKyNpbmNsdWRlIDxsaW51eC9kZWNvbXByZXNzL21tLmg+DQorI2lu
Y2x1ZGUgPGxpbnV4L2tlcm5lbC5oPg0KKyNpbmNsdWRlIDxsaW51eC96c3RkLmg+DQorDQorLyog
MTI4TUIgaXMgdGhlIG1heGltdW0gd2luZG93IHNpemUgc3VwcG9ydGVkIGJ5IHpzdGQuICovDQor
I2RlZmluZSBaU1REX1dJTkRPV1NJWkVfTUFYCSgxIDw8IFpTVERfV0lORE9XTE9HX01BWCkNCisv
Kg0KKyAqIFNpemUgb2YgdGhlIGlucHV0IGFuZCBvdXRwdXQgYnVmZmVycyBpbiBtdWx0aS1jYWxs
IG1vZGUuDQorICogUGljayBhIGxhcmdlciBzaXplIGJlY2F1c2UgaXQgaXNuJ3QgdXNlZCBkdXJp
bmcga2VybmVsIGRlY29tcHJlc3Npb24sDQorICogc2luY2UgdGhhdCBpcyBzaW5nbGUgcGFzcywg
YW5kIHdlIGhhdmUgdG8gYWxsb2NhdGUgYSBsYXJnZSBidWZmZXIgZm9yDQorICogenN0ZCdzIHdp
bmRvdyBhbnl3YXkuIFRoZSBsYXJnZXIgc2l6ZSBzcGVlZHMgdXAgaW5pdHJhbWZzIGRlY29tcHJl
c3Npb24uDQorICovDQorI2RlZmluZSBaU1REX0lPQlVGX1NJWkUJCSgxIDw8IDE3KQ0KKw0KK3N0
YXRpYyBpbnQgSU5JVCBoYW5kbGVfenN0ZF9lcnJvcihzaXplX3QgcmV0LCB2b2lkICgqZXJyb3Ip
KGNoYXIgKngpKQ0KK3sNCisJY29uc3QgaW50IGVyciA9IFpTVERfZ2V0RXJyb3JDb2RlKHJldCk7
DQorDQorCWlmICghWlNURF9pc0Vycm9yKHJldCkpDQorCQlyZXR1cm4gMDsNCisNCisJc3dpdGNo
IChlcnIpIHsNCisJY2FzZSBaU1REX2Vycm9yX21lbW9yeV9hbGxvY2F0aW9uOg0KKwkJZXJyb3Io
IlpTVEQgZGVjb21wcmVzc29yIHJhbiBvdXQgb2YgbWVtb3J5Iik7DQorCQlicmVhazsNCisJY2Fz
ZSBaU1REX2Vycm9yX3ByZWZpeF91bmtub3duOg0KKwkJZXJyb3IoIklucHV0IGlzIG5vdCBpbiB0
aGUgWlNURCBmb3JtYXQgKHdyb25nIG1hZ2ljIGJ5dGVzKSIpOw0KKwkJYnJlYWs7DQorCWNhc2Ug
WlNURF9lcnJvcl9kc3RTaXplX3Rvb1NtYWxsOg0KKwljYXNlIFpTVERfZXJyb3JfY29ycnVwdGlv
bl9kZXRlY3RlZDoNCisJY2FzZSBaU1REX2Vycm9yX2NoZWNrc3VtX3dyb25nOg0KKwkJZXJyb3Io
IlpTVEQtY29tcHJlc3NlZCBkYXRhIGlzIGNvcnJ1cHQiKTsNCisJCWJyZWFrOw0KKwlkZWZhdWx0
Og0KKwkJZXJyb3IoIlpTVEQtY29tcHJlc3NlZCBkYXRhIGlzIHByb2JhYmx5IGNvcnJ1cHQiKTsN
CisJCWJyZWFrOw0KKwl9DQorCXJldHVybiAtMTsNCit9DQorDQorLyoNCisgKiBIYW5kbGUgdGhl
IGNhc2Ugd2hlcmUgd2UgaGF2ZSB0aGUgZW50aXJlIGlucHV0IGFuZCBvdXRwdXQgaW4gb25lIHNl
Z21lbnQuDQorICogV2UgY2FuIGFsbG9jYXRlIGxlc3MgbWVtb3J5IChubyBjaXJjdWxhciBidWZm
ZXIgZm9yIHRoZSBzbGlkaW5nIHdpbmRvdyksDQorICogYW5kIGF2b2lkIHNvbWUgbWVtY3B5KCkg
Y2FsbHMuDQorICovDQorc3RhdGljIGludCBJTklUIGRlY29tcHJlc3Nfc2luZ2xlKGNvbnN0IHU4
ICppbl9idWYsIGxvbmcgaW5fbGVuLCB1OCAqb3V0X2J1ZiwNCisJCQkJICBsb25nIG91dF9sZW4s
IGxvbmcgKmluX3BvcywNCisJCQkJICB2b2lkICgqZXJyb3IpKGNoYXIgKngpKQ0KK3sNCisJY29u
c3Qgc2l6ZV90IHdrc3Bfc2l6ZSA9IFpTVERfREN0eFdvcmtzcGFjZUJvdW5kKCk7DQorCXZvaWQg
Kndrc3AgPSBsYXJnZV9tYWxsb2Mod2tzcF9zaXplKTsNCisJWlNURF9EQ3R4ICpkY3R4ID0gWlNU
RF9pbml0REN0eCh3a3NwLCB3a3NwX3NpemUpOw0KKwlpbnQgZXJyOw0KKwlzaXplX3QgcmV0Ow0K
Kw0KKwlpZiAoZGN0eCA9PSBOVUxMKSB7DQorCQllcnJvcigiT3V0IG9mIG1lbW9yeSB3aGlsZSBh
bGxvY2F0aW5nIFpTVERfREN0eCIpOw0KKwkJZXJyID0gLTE7DQorCQlnb3RvIG91dDsNCisJfQ0K
KwkvKg0KKwkgKiBGaW5kIG91dCBob3cgbGFyZ2UgdGhlIGZyYW1lIGFjdHVhbGx5IGlzLCB0aGVy
ZSBtYXkgYmUganVuayBhdA0KKwkgKiB0aGUgZW5kIG9mIHRoZSBmcmFtZSB0aGF0IFpTVERfZGVj
b21wcmVzc0RDdHgoKSBjYW4ndCBoYW5kbGUuDQorCSAqLw0KKwlyZXQgPSBaU1REX2ZpbmRGcmFt
ZUNvbXByZXNzZWRTaXplKGluX2J1ZiwgaW5fbGVuKTsNCisJZXJyID0gaGFuZGxlX3pzdGRfZXJy
b3IocmV0LCBlcnJvcik7DQorCWlmIChlcnIpDQorCQlnb3RvIG91dDsNCisJaW5fbGVuID0gKGxv
bmcpcmV0Ow0KKw0KKwlyZXQgPSBaU1REX2RlY29tcHJlc3NEQ3R4KGRjdHgsIG91dF9idWYsIG91
dF9sZW4sIGluX2J1ZiwgaW5fbGVuKTsNCisJZXJyID0gaGFuZGxlX3pzdGRfZXJyb3IocmV0LCBl
cnJvcik7DQorCWlmIChlcnIpDQorCQlnb3RvIG91dDsNCisNCisJaWYgKGluX3BvcyAhPSBOVUxM
KQ0KKwkJKmluX3BvcyA9IGluX2xlbjsNCisNCisJZXJyID0gMDsNCitvdXQ6DQorCWlmICh3a3Nw
ICE9IE5VTEwpDQorCQlsYXJnZV9mcmVlKHdrc3ApOw0KKwlyZXR1cm4gZXJyOw0KK30NCisNCitz
dGF0aWMgaW50IElOSVQgX191bnpzdGQodW5zaWduZWQgY2hhciAqaW5fYnVmLCBsb25nIGluX2xl
biwNCisJCQkgbG9uZyAoKmZpbGwpKHZvaWQqLCB1bnNpZ25lZCBsb25nKSwNCisJCQkgbG9uZyAo
KmZsdXNoKSh2b2lkKiwgdW5zaWduZWQgbG9uZyksDQorCQkJIHVuc2lnbmVkIGNoYXIgKm91dF9i
dWYsIGxvbmcgb3V0X2xlbiwNCisJCQkgbG9uZyAqaW5fcG9zLA0KKwkJCSB2b2lkICgqZXJyb3Ip
KGNoYXIgKngpKQ0KK3sNCisJWlNURF9pbkJ1ZmZlciBpbjsNCisJWlNURF9vdXRCdWZmZXIgb3V0
Ow0KKwlaU1REX2ZyYW1lUGFyYW1zIHBhcmFtczsNCisJdm9pZCAqaW5fYWxsb2NhdGVkID0gTlVM
TDsNCisJdm9pZCAqb3V0X2FsbG9jYXRlZCA9IE5VTEw7DQorCXZvaWQgKndrc3AgPSBOVUxMOw0K
KwlzaXplX3Qgd2tzcF9zaXplOw0KKwlaU1REX0RTdHJlYW0gKmRzdHJlYW07DQorCWludCBlcnI7
DQorCXNpemVfdCByZXQ7DQorDQorCWlmIChvdXRfbGVuID09IDApDQorCQlvdXRfbGVuID0gTE9O
R19NQVg7IC8qIG5vIGxpbWl0ICovDQorDQorCWlmIChmaWxsID09IE5VTEwgJiYgZmx1c2ggPT0g
TlVMTCkNCisJCS8qDQorCQkgKiBXZSBjYW4gZGVjb21wcmVzcyBmYXN0ZXIgYW5kIHdpdGggbGVz
cyBtZW1vcnkgd2hlbiB3ZSBoYXZlIGENCisJCSAqIHNpbmdsZSBjaHVuay4NCisJCSAqLw0KKwkJ
cmV0dXJuIGRlY29tcHJlc3Nfc2luZ2xlKGluX2J1ZiwgaW5fbGVuLCBvdXRfYnVmLCBvdXRfbGVu
LA0KKwkJCQkJIGluX3BvcywgZXJyb3IpOw0KKw0KKwkvKg0KKwkgKiBJZiBpbl9idWYgaXMgbm90
IHByb3ZpZGVkLCB3ZSBtdXN0IGJlIHVzaW5nIGZpbGwoKSwgc28gYWxsb2NhdGUNCisJICogYSBs
YXJnZSBlbm91Z2ggYnVmZmVyLiBJZiBpdCBpcyBwcm92aWRlZCwgaXQgbXVzdCBiZSBhdCBsZWFz
dA0KKwkgKiBaU1REX0lPQlVGX1NJWkUgbGFyZ2UuDQorCSAqLw0KKwlpZiAoaW5fYnVmID09IE5V
TEwpIHsNCisJCWluX2FsbG9jYXRlZCA9IGxhcmdlX21hbGxvYyhaU1REX0lPQlVGX1NJWkUpOw0K
KwkJaWYgKGluX2FsbG9jYXRlZCA9PSBOVUxMKSB7DQorCQkJZXJyb3IoIk91dCBvZiBtZW1vcnkg
d2hpbGUgYWxsb2NhdGluZyBpbnB1dCBidWZmZXIiKTsNCisJCQllcnIgPSAtMTsNCisJCQlnb3Rv
IG91dDsNCisJCX0NCisJCWluX2J1ZiA9IGluX2FsbG9jYXRlZDsNCisJCWluX2xlbiA9IDA7DQor
CX0NCisJLyogUmVhZCB0aGUgZmlyc3QgY2h1bmssIHNpbmNlIHdlIG5lZWQgdG8gZGVjb2RlIHRo
ZSBmcmFtZSBoZWFkZXIuICovDQorCWlmIChmaWxsICE9IE5VTEwpDQorCQlpbl9sZW4gPSBmaWxs
KGluX2J1ZiwgWlNURF9JT0JVRl9TSVpFKTsNCisJaWYgKGluX2xlbiA8IDApIHsNCisJCWVycm9y
KCJaU1RELWNvbXByZXNzZWQgZGF0YSBpcyB0cnVuY2F0ZWQiKTsNCisJCWVyciA9IC0xOw0KKwkJ
Z290byBvdXQ7DQorCX0NCisJLyogU2V0IHRoZSBmaXJzdCBub24tZW1wdHkgaW5wdXQgYnVmZmVy
LiAqLw0KKwlpbi5zcmMgPSBpbl9idWY7DQorCWluLnBvcyA9IDA7DQorCWluLnNpemUgPSBpbl9s
ZW47DQorCS8qIEFsbG9jYXRlIHRoZSBvdXRwdXQgYnVmZmVyIGlmIHdlIGFyZSB1c2luZyBmbHVz
aCgpLiAqLw0KKwlpZiAoZmx1c2ggIT0gTlVMTCkgew0KKwkJb3V0X2FsbG9jYXRlZCA9IGxhcmdl
X21hbGxvYyhaU1REX0lPQlVGX1NJWkUpOw0KKwkJaWYgKG91dF9hbGxvY2F0ZWQgPT0gTlVMTCkg
ew0KKwkJCWVycm9yKCJPdXQgb2YgbWVtb3J5IHdoaWxlIGFsbG9jYXRpbmcgb3V0cHV0IGJ1ZmZl
ciIpOw0KKwkJCWVyciA9IC0xOw0KKwkJCWdvdG8gb3V0Ow0KKwkJfQ0KKwkJb3V0X2J1ZiA9IG91
dF9hbGxvY2F0ZWQ7DQorCQlvdXRfbGVuID0gWlNURF9JT0JVRl9TSVpFOw0KKwl9DQorCS8qIFNl
dCB0aGUgb3V0cHV0IGJ1ZmZlci4gKi8NCisJb3V0LmRzdCA9IG91dF9idWY7DQorCW91dC5wb3Mg
PSAwOw0KKwlvdXQuc2l6ZSA9IG91dF9sZW47DQorDQorCS8qDQorCSAqIFdlIG5lZWQgdG8ga25v
dyB0aGUgd2luZG93IHNpemUgdG8gYWxsb2NhdGUgdGhlIFpTVERfRFN0cmVhbS4NCisJICogU2lu
Y2Ugd2UgYXJlIHN0cmVhbWluZywgd2UgbmVlZCB0byBhbGxvY2F0ZSBhIGJ1ZmZlciBmb3IgdGhl
IHNsaWRpbmcNCisJICogd2luZG93LiBUaGUgd2luZG93IHNpemUgdmFyaWVzIGZyb20gMSBLQiB0
byBaU1REX1dJTkRPV1NJWkVfTUFYDQorCSAqICg4IE1CKSwgc28gaXQgaXMgaW1wb3J0YW50IHRv
IHVzZSB0aGUgYWN0dWFsIHZhbHVlIHNvIGFzIG5vdCB0bw0KKwkgKiB3YXN0ZSBtZW1vcnkgd2hl
biBpdCBpcyBzbWFsbGVyLg0KKwkgKi8NCisJcmV0ID0gWlNURF9nZXRGcmFtZVBhcmFtcygmcGFy
YW1zLCBpbi5zcmMsIGluLnNpemUpOw0KKwllcnIgPSBoYW5kbGVfenN0ZF9lcnJvcihyZXQsIGVy
cm9yKTsNCisJaWYgKGVycikNCisJCWdvdG8gb3V0Ow0KKwlpZiAocmV0ICE9IDApIHsNCisJCWVy
cm9yKCJaU1RELWNvbXByZXNzZWQgZGF0YSBoYXMgYW4gaW5jb21wbGV0ZSBmcmFtZSBoZWFkZXIi
KTsNCisJCWVyciA9IC0xOw0KKwkJZ290byBvdXQ7DQorCX0NCisJaWYgKHBhcmFtcy53aW5kb3dT
aXplID4gWlNURF9XSU5ET1dTSVpFX01BWCkgew0KKwkJZXJyb3IoIlpTVEQtY29tcHJlc3NlZCBk
YXRhIGhhcyB0b28gbGFyZ2UgYSB3aW5kb3cgc2l6ZSIpOw0KKwkJZXJyID0gLTE7DQorCQlnb3Rv
IG91dDsNCisJfQ0KKw0KKwkvKg0KKwkgKiBBbGxvY2F0ZSB0aGUgWlNURF9EU3RyZWFtIG5vdyB0
aGF0IHdlIGtub3cgaG93IG11Y2ggbWVtb3J5IGlzDQorCSAqIHJlcXVpcmVkLg0KKwkgKi8NCisJ
d2tzcF9zaXplID0gWlNURF9EU3RyZWFtV29ya3NwYWNlQm91bmQocGFyYW1zLndpbmRvd1NpemUp
Ow0KKwl3a3NwID0gbGFyZ2VfbWFsbG9jKHdrc3Bfc2l6ZSk7DQorCWRzdHJlYW0gPSBaU1REX2lu
aXREU3RyZWFtKHBhcmFtcy53aW5kb3dTaXplLCB3a3NwLCB3a3NwX3NpemUpOw0KKwlpZiAoZHN0
cmVhbSA9PSBOVUxMKSB7DQorCQllcnJvcigiT3V0IG9mIG1lbW9yeSB3aGlsZSBhbGxvY2F0aW5n
IFpTVERfRFN0cmVhbSIpOw0KKwkJZXJyID0gLTE7DQorCQlnb3RvIG91dDsNCisJfQ0KKw0KKwkv
Kg0KKwkgKiBEZWNvbXByZXNzaW9uIGxvb3A6DQorCSAqIFJlYWQgbW9yZSBkYXRhIGlmIG5lY2Vz
c2FyeSAoZXJyb3IgaWYgbm8gbW9yZSBkYXRhIGNhbiBiZSByZWFkKS4NCisJICogQ2FsbCB0aGUg
ZGVjb21wcmVzc2lvbiBmdW5jdGlvbiwgd2hpY2ggcmV0dXJucyAwIHdoZW4gZmluaXNoZWQuDQor
CSAqIEZsdXNoIGFueSBkYXRhIHByb2R1Y2VkIGlmIHVzaW5nIGZsdXNoKCkuDQorCSAqLw0KKwlp
ZiAoaW5fcG9zICE9IE5VTEwpDQorCQkqaW5fcG9zID0gMDsNCisJZG8gew0KKwkJLyoNCisJCSAq
IElmIHdlIG5lZWQgdG8gcmVsb2FkIGRhdGEsIGVpdGhlciB3ZSBoYXZlIGZpbGwoKSBhbmQgY2Fu
DQorCQkgKiB0cnkgdG8gZ2V0IG1vcmUgZGF0YSwgb3Igd2UgZG9uJ3QgYW5kIHRoZSBpbnB1dCBp
cyB0cnVuY2F0ZWQuDQorCQkgKi8NCisJCWlmIChpbi5wb3MgPT0gaW4uc2l6ZSkgew0KKwkJCWlm
IChpbl9wb3MgIT0gTlVMTCkNCisJCQkJKmluX3BvcyArPSBpbi5wb3M7DQorCQkJaW5fbGVuID0g
ZmlsbCA/IGZpbGwoaW5fYnVmLCBaU1REX0lPQlVGX1NJWkUpIDogLTE7DQorCQkJaWYgKGluX2xl
biA8IDApIHsNCisJCQkJZXJyb3IoIlpTVEQtY29tcHJlc3NlZCBkYXRhIGlzIHRydW5jYXRlZCIp
Ow0KKwkJCQllcnIgPSAtMTsNCisJCQkJZ290byBvdXQ7DQorCQkJfQ0KKwkJCWluLnBvcyA9IDA7
DQorCQkJaW4uc2l6ZSA9IGluX2xlbjsNCisJCX0NCisJCS8qIFJldHVybnMgemVybyB3aGVuIHRo
ZSBmcmFtZSBpcyBjb21wbGV0ZS4gKi8NCisJCXJldCA9IFpTVERfZGVjb21wcmVzc1N0cmVhbShk
c3RyZWFtLCAmb3V0LCAmaW4pOw0KKwkJZXJyID0gaGFuZGxlX3pzdGRfZXJyb3IocmV0LCBlcnJv
cik7DQorCQlpZiAoZXJyKQ0KKwkJCWdvdG8gb3V0Ow0KKwkJLyogRmx1c2ggYWxsIG9mIHRoZSBk
YXRhIHByb2R1Y2VkIGlmIHVzaW5nIGZsdXNoKCkuICovDQorCQlpZiAoZmx1c2ggIT0gTlVMTCAm
JiBvdXQucG9zID4gMCkgew0KKwkJCWlmIChvdXQucG9zICE9IGZsdXNoKG91dC5kc3QsIG91dC5w
b3MpKSB7DQorCQkJCWVycm9yKCJGYWlsZWQgdG8gZmx1c2goKSIpOw0KKwkJCQllcnIgPSAtMTsN
CisJCQkJZ290byBvdXQ7DQorCQkJfQ0KKwkJCW91dC5wb3MgPSAwOw0KKwkJfQ0KKwl9IHdoaWxl
IChyZXQgIT0gMCk7DQorDQorCWlmIChpbl9wb3MgIT0gTlVMTCkNCisJCSppbl9wb3MgKz0gaW4u
cG9zOw0KKw0KKwllcnIgPSAwOw0KK291dDoNCisJaWYgKGluX2FsbG9jYXRlZCAhPSBOVUxMKQ0K
KwkJbGFyZ2VfZnJlZShpbl9hbGxvY2F0ZWQpOw0KKwlpZiAob3V0X2FsbG9jYXRlZCAhPSBOVUxM
KQ0KKwkJbGFyZ2VfZnJlZShvdXRfYWxsb2NhdGVkKTsNCisJaWYgKHdrc3AgIT0gTlVMTCkNCisJ
CWxhcmdlX2ZyZWUod2tzcCk7DQorCXJldHVybiBlcnI7DQorfQ0KKw0KKyNpZm5kZWYgVU5aU1RE
X1BSRUJPT1QNCitTVEFUSUMgaW50IElOSVQgdW56c3RkKHVuc2lnbmVkIGNoYXIgKmJ1ZiwgbG9u
ZyBsZW4sDQorCQkgICAgICAgbG9uZyAoKmZpbGwpKHZvaWQqLCB1bnNpZ25lZCBsb25nKSwNCisJ
CSAgICAgICBsb25nICgqZmx1c2gpKHZvaWQqLCB1bnNpZ25lZCBsb25nKSwNCisJCSAgICAgICB1
bnNpZ25lZCBjaGFyICpvdXRfYnVmLA0KKwkJICAgICAgIGxvbmcgKnBvcywNCisJCSAgICAgICB2
b2lkICgqZXJyb3IpKGNoYXIgKngpKQ0KK3sNCisJcmV0dXJuIF9fdW56c3RkKGJ1ZiwgbGVuLCBm
aWxsLCBmbHVzaCwgb3V0X2J1ZiwgMCwgcG9zLCBlcnJvcik7DQorfQ0KKyNlbHNlDQorU1RBVElD
IGludCBJTklUIF9fZGVjb21wcmVzcyh1bnNpZ25lZCBjaGFyICpidWYsIGxvbmcgbGVuLA0KKwkJ
CSAgICAgbG9uZyAoKmZpbGwpKHZvaWQqLCB1bnNpZ25lZCBsb25nKSwNCisJCQkgICAgIGxvbmcg
KCpmbHVzaCkodm9pZCosIHVuc2lnbmVkIGxvbmcpLA0KKwkJCSAgICAgdW5zaWduZWQgY2hhciAq
b3V0X2J1ZiwgbG9uZyBvdXRfbGVuLA0KKwkJCSAgICAgbG9uZyAqcG9zLA0KKwkJCSAgICAgdm9p
ZCAoKmVycm9yKShjaGFyICp4KSkNCit7DQorCXJldHVybiBfX3VuenN0ZChidWYsIGxlbiwgZmls
bCwgZmx1c2gsIG91dF9idWYsIG91dF9sZW4sIHBvcywgZXJyb3IpOw0KK30NCisjZW5kaWYNCmRp
ZmYgLS1naXQgYS94ZW4vY29tbW9uL3h4aGFzaC5jIGIveGVuL2NvbW1vbi94eGhhc2guYw0KbmV3
IGZpbGUgbW9kZSAxMDA2NDQNCmluZGV4IDAwMDAwMDAwMDAuLmQ1YmI5ZmYxMDYNCi0tLSAvZGV2
L251bGwNCisrKyBiL3hlbi9jb21tb24veHhoYXNoLmMNCkBAIC0wLDAgKzEsNTAwIEBADQorLyoN
CisgKiB4eEhhc2ggLSBFeHRyZW1lbHkgRmFzdCBIYXNoIGFsZ29yaXRobQ0KKyAqIENvcHlyaWdo
dCAoQykgMjAxMi0yMDE2LCBZYW5uIENvbGxldC4NCisgKg0KKyAqIEJTRCAyLUNsYXVzZSBMaWNl
bnNlIChodHRwOi8vd3d3Lm9wZW5zb3VyY2Uub3JnL2xpY2Vuc2VzL2JzZC1saWNlbnNlLnBocCkN
CisgKg0KKyAqIFJlZGlzdHJpYnV0aW9uIGFuZCB1c2UgaW4gc291cmNlIGFuZCBiaW5hcnkgZm9y
bXMsIHdpdGggb3Igd2l0aG91dA0KKyAqIG1vZGlmaWNhdGlvbiwgYXJlIHBlcm1pdHRlZCBwcm92
aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucyBhcmUNCisgKiBtZXQ6DQorICoNCisg
KiAgICogUmVkaXN0cmlidXRpb25zIG9mIHNvdXJjZSBjb2RlIG11c3QgcmV0YWluIHRoZSBhYm92
ZSBjb3B5cmlnaHQNCisgKiAgICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQg
dGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLg0KKyAqICAgKiBSZWRpc3RyaWJ1dGlvbnMgaW4gYmlu
YXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlDQorICogICAgIGNvcHlyaWdodCBub3Rp
Y2UsIHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIN
CisgKiAgICAgaW4gdGhlIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92
aWRlZCB3aXRoIHRoZQ0KKyAqICAgICBkaXN0cmlidXRpb24uDQorICoNCisgKiBUSElTIFNPRlRX
QVJFIElTIFBST1ZJREVEIEJZIFRIRSBDT1BZUklHSFQgSE9MREVSUyBBTkQgQ09OVFJJQlVUT1JT
DQorICogIkFTIElTIiBBTkQgQU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNM
VURJTkcsIEJVVCBOT1QNCisgKiBMSU1JVEVEIFRPLCBUSEUgSU1QTElFRCBXQVJSQU5USUVTIE9G
IE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1INCisgKiBBIFBBUlRJQ1VMQVIgUFVSUE9T
RSBBUkUgRElTQ0xBSU1FRC4gSU4gTk8gRVZFTlQgU0hBTEwgVEhFIENPUFlSSUdIVA0KKyAqIE9X
TkVSIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUgRk9SIEFOWSBESVJFQ1QsIElORElSRUNULCBJ
TkNJREVOVEFMLA0KKyAqIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTCBEQU1B
R0VTIChJTkNMVURJTkcsIEJVVCBOT1QNCisgKiBMSU1JVEVEIFRPLCBQUk9DVVJFTUVOVCBPRiBT
VUJTVElUVVRFIEdPT0RTIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwNCisgKiBEQVRBLCBPUiBQ
Uk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pIEhPV0VWRVIgQ0FVU0VEIEFORCBPTiBB
TlkNCisgKiBUSEVPUlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1Qg
TElBQklMSVRZLCBPUiBUT1JUDQorICogKElOQ0xVRElORyBORUdMSUdFTkNFIE9SIE9USEVSV0lT
RSkgQVJJU0lORyBJTiBBTlkgV0FZIE9VVCBPRiBUSEUgVVNFDQorICogT0YgVEhJUyBTT0ZUV0FS
RSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBPRiBTVUNIIERBTUFHRS4NCisg
Kg0KKyAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0
ZSBpdCBhbmQvb3IgbW9kaWZ5IGl0IHVuZGVyDQorICogdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJzaW9uIDIgYXMgcHVibGlzaGVkIGJ5IHRoZQ0KKyAqIEZy
ZWUgU29mdHdhcmUgRm91bmRhdGlvbi4gVGhpcyBwcm9ncmFtIGlzIGR1YWwtbGljZW5zZWQ7IHlv
dSBtYXkgc2VsZWN0DQorICogZWl0aGVyIHZlcnNpb24gMiBvZiB0aGUgR05VIEdlbmVyYWwgUHVi
bGljIExpY2Vuc2UgKCJHUEwiKSBvciBCU0QgbGljZW5zZQ0KKyAqICgiQlNEIikuDQorICoNCisg
KiBZb3UgY2FuIGNvbnRhY3QgdGhlIGF1dGhvciBhdDoNCisgKiAtIHh4SGFzaCBob21lcGFnZTog
aHR0cHM6Ly9jeWFuNDk3My5naXRodWIuaW8veHhIYXNoLw0KKyAqIC0geHhIYXNoIHNvdXJjZSBy
ZXBvc2l0b3J5OiBodHRwczovL2dpdGh1Yi5jb20vQ3lhbjQ5NzMveHhIYXNoDQorICovDQorDQor
I2luY2x1ZGUgPGFzbS91bmFsaWduZWQuaD4NCisjaW5jbHVkZSA8bGludXgvZXJybm8uaD4NCisj
aW5jbHVkZSA8bGludXgvY29tcGlsZXIuaD4NCisjaW5jbHVkZSA8bGludXgva2VybmVsLmg+DQor
I2luY2x1ZGUgPGxpbnV4L21vZHVsZS5oPg0KKyNpbmNsdWRlIDxsaW51eC9zdHJpbmcuaD4NCisj
aW5jbHVkZSA8bGludXgveHhoYXNoLmg+DQorDQorLyotKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKg0KKyAqIE1hY3Jvcw0KKyAqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKi8NCisjZGVmaW5lIHh4aF9yb3RsMzIoeCwgcikgKCh4IDw8IHIpIHwgKHggPj4g
KDMyIC0gcikpKQ0KKyNkZWZpbmUgeHhoX3JvdGw2NCh4LCByKSAoKHggPDwgcikgfCAoeCA+PiAo
NjQgLSByKSkpDQorDQorI2lmZGVmIF9fTElUVExFX0VORElBTg0KKyMgZGVmaW5lIFhYSF9DUFVf
TElUVExFX0VORElBTiAxDQorI2Vsc2UNCisjIGRlZmluZSBYWEhfQ1BVX0xJVFRMRV9FTkRJQU4g
MA0KKyNlbmRpZg0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioN
CisgKiBDb25zdGFudHMNCisgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiov
DQorc3RhdGljIGNvbnN0IHVpbnQzMl90IFBSSU1FMzJfMSA9IDI2NTQ0MzU3NjFVOw0KK3N0YXRp
YyBjb25zdCB1aW50MzJfdCBQUklNRTMyXzIgPSAyMjQ2ODIyNTE5VTsNCitzdGF0aWMgY29uc3Qg
dWludDMyX3QgUFJJTUUzMl8zID0gMzI2NjQ4OTkxN1U7DQorc3RhdGljIGNvbnN0IHVpbnQzMl90
IFBSSU1FMzJfNCA9ICA2NjgyNjUyNjNVOw0KK3N0YXRpYyBjb25zdCB1aW50MzJfdCBQUklNRTMy
XzUgPSAgMzc0NzYxMzkzVTsNCisNCitzdGF0aWMgY29uc3QgdWludDY0X3QgUFJJTUU2NF8xID0g
MTE0MDA3MTQ3ODUwNzQ2OTQ3OTFVTEw7DQorc3RhdGljIGNvbnN0IHVpbnQ2NF90IFBSSU1FNjRf
MiA9IDE0MDI5NDY3MzY2ODk3MDE5NzI3VUxMOw0KK3N0YXRpYyBjb25zdCB1aW50NjRfdCBQUklN
RTY0XzMgPSAgMTYwOTU4NzkyOTM5MjgzOTE2MVVMTDsNCitzdGF0aWMgY29uc3QgdWludDY0X3Qg
UFJJTUU2NF80ID0gIDk2NTAwMjkyNDIyODc4Mjg1NzlVTEw7DQorc3RhdGljIGNvbnN0IHVpbnQ2
NF90IFBSSU1FNjRfNSA9ICAyODcwMTc3NDUwMDEyNjAwMjYxVUxMOw0KKw0KKy8qLSoqKioqKioq
KioqKioqKioqKioqKioqKioqDQorICogIFV0aWxzDQorICoqKioqKioqKioqKioqKioqKioqKioq
KioqKi8NCit2b2lkIHh4aDMyX2NvcHlfc3RhdGUoc3RydWN0IHh4aDMyX3N0YXRlICpkc3QsIGNv
bnN0IHN0cnVjdCB4eGgzMl9zdGF0ZSAqc3JjKQ0KK3sNCisJbWVtY3B5KGRzdCwgc3JjLCBzaXpl
b2YoKmRzdCkpOw0KK30NCitFWFBPUlRfU1lNQk9MKHh4aDMyX2NvcHlfc3RhdGUpOw0KKw0KK3Zv
aWQgeHhoNjRfY29weV9zdGF0ZShzdHJ1Y3QgeHhoNjRfc3RhdGUgKmRzdCwgY29uc3Qgc3RydWN0
IHh4aDY0X3N0YXRlICpzcmMpDQorew0KKwltZW1jcHkoZHN0LCBzcmMsIHNpemVvZigqZHN0KSk7
DQorfQ0KK0VYUE9SVF9TWU1CT0woeHhoNjRfY29weV9zdGF0ZSk7DQorDQorLyotKioqKioqKioq
KioqKioqKioqKioqKioqKioqDQorICogU2ltcGxlIEhhc2ggRnVuY3Rpb25zDQorICoqKioqKioq
KioqKioqKioqKioqKioqKioqKiovDQorc3RhdGljIHVpbnQzMl90IHh4aDMyX3JvdW5kKHVpbnQz
Ml90IHNlZWQsIGNvbnN0IHVpbnQzMl90IGlucHV0KQ0KK3sNCisJc2VlZCArPSBpbnB1dCAqIFBS
SU1FMzJfMjsNCisJc2VlZCA9IHh4aF9yb3RsMzIoc2VlZCwgMTMpOw0KKwlzZWVkICo9IFBSSU1F
MzJfMTsNCisJcmV0dXJuIHNlZWQ7DQorfQ0KKw0KK3VpbnQzMl90IHh4aDMyKGNvbnN0IHZvaWQg
KmlucHV0LCBjb25zdCBzaXplX3QgbGVuLCBjb25zdCB1aW50MzJfdCBzZWVkKQ0KK3sNCisJY29u
c3QgdWludDhfdCAqcCA9IChjb25zdCB1aW50OF90ICopaW5wdXQ7DQorCWNvbnN0IHVpbnQ4X3Qg
KmJfZW5kID0gcCArIGxlbjsNCisJdWludDMyX3QgaDMyOw0KKw0KKwlpZiAobGVuID49IDE2KSB7
DQorCQljb25zdCB1aW50OF90ICpjb25zdCBsaW1pdCA9IGJfZW5kIC0gMTY7DQorCQl1aW50MzJf
dCB2MSA9IHNlZWQgKyBQUklNRTMyXzEgKyBQUklNRTMyXzI7DQorCQl1aW50MzJfdCB2MiA9IHNl
ZWQgKyBQUklNRTMyXzI7DQorCQl1aW50MzJfdCB2MyA9IHNlZWQgKyAwOw0KKwkJdWludDMyX3Qg
djQgPSBzZWVkIC0gUFJJTUUzMl8xOw0KKw0KKwkJZG8gew0KKwkJCXYxID0geHhoMzJfcm91bmQo
djEsIGdldF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJCXYyID0geHhoMzJf
cm91bmQodjIsIGdldF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJCXYzID0g
eHhoMzJfcm91bmQodjMsIGdldF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJ
CXY0ID0geHhoMzJfcm91bmQodjQsIGdldF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0
Ow0KKwkJfSB3aGlsZSAocCA8PSBsaW1pdCk7DQorDQorCQloMzIgPSB4eGhfcm90bDMyKHYxLCAx
KSArIHh4aF9yb3RsMzIodjIsIDcpICsNCisJCQl4eGhfcm90bDMyKHYzLCAxMikgKyB4eGhfcm90
bDMyKHY0LCAxOCk7DQorCX0gZWxzZSB7DQorCQloMzIgPSBzZWVkICsgUFJJTUUzMl81Ow0KKwl9
DQorDQorCWgzMiArPSAodWludDMyX3QpbGVuOw0KKw0KKwl3aGlsZSAocCArIDQgPD0gYl9lbmQp
IHsNCisJCWgzMiArPSBnZXRfdW5hbGlnbmVkX2xlMzIocCkgKiBQUklNRTMyXzM7DQorCQloMzIg
PSB4eGhfcm90bDMyKGgzMiwgMTcpICogUFJJTUUzMl80Ow0KKwkJcCArPSA0Ow0KKwl9DQorDQor
CXdoaWxlIChwIDwgYl9lbmQpIHsNCisJCWgzMiArPSAoKnApICogUFJJTUUzMl81Ow0KKwkJaDMy
ID0geHhoX3JvdGwzMihoMzIsIDExKSAqIFBSSU1FMzJfMTsNCisJCXArKzsNCisJfQ0KKw0KKwlo
MzIgXj0gaDMyID4+IDE1Ow0KKwloMzIgKj0gUFJJTUUzMl8yOw0KKwloMzIgXj0gaDMyID4+IDEz
Ow0KKwloMzIgKj0gUFJJTUUzMl8zOw0KKwloMzIgXj0gaDMyID4+IDE2Ow0KKw0KKwlyZXR1cm4g
aDMyOw0KK30NCitFWFBPUlRfU1lNQk9MKHh4aDMyKTsNCisNCitzdGF0aWMgdWludDY0X3QgeHho
NjRfcm91bmQodWludDY0X3QgYWNjLCBjb25zdCB1aW50NjRfdCBpbnB1dCkNCit7DQorCWFjYyAr
PSBpbnB1dCAqIFBSSU1FNjRfMjsNCisJYWNjID0geHhoX3JvdGw2NChhY2MsIDMxKTsNCisJYWNj
ICo9IFBSSU1FNjRfMTsNCisJcmV0dXJuIGFjYzsNCit9DQorDQorc3RhdGljIHVpbnQ2NF90IHh4
aDY0X21lcmdlX3JvdW5kKHVpbnQ2NF90IGFjYywgdWludDY0X3QgdmFsKQ0KK3sNCisJdmFsID0g
eHhoNjRfcm91bmQoMCwgdmFsKTsNCisJYWNjIF49IHZhbDsNCisJYWNjID0gYWNjICogUFJJTUU2
NF8xICsgUFJJTUU2NF80Ow0KKwlyZXR1cm4gYWNjOw0KK30NCisNCit1aW50NjRfdCB4eGg2NChj
b25zdCB2b2lkICppbnB1dCwgY29uc3Qgc2l6ZV90IGxlbiwgY29uc3QgdWludDY0X3Qgc2VlZCkN
Cit7DQorCWNvbnN0IHVpbnQ4X3QgKnAgPSAoY29uc3QgdWludDhfdCAqKWlucHV0Ow0KKwljb25z
dCB1aW50OF90ICpjb25zdCBiX2VuZCA9IHAgKyBsZW47DQorCXVpbnQ2NF90IGg2NDsNCisNCisJ
aWYgKGxlbiA+PSAzMikgew0KKwkJY29uc3QgdWludDhfdCAqY29uc3QgbGltaXQgPSBiX2VuZCAt
IDMyOw0KKwkJdWludDY0X3QgdjEgPSBzZWVkICsgUFJJTUU2NF8xICsgUFJJTUU2NF8yOw0KKwkJ
dWludDY0X3QgdjIgPSBzZWVkICsgUFJJTUU2NF8yOw0KKwkJdWludDY0X3QgdjMgPSBzZWVkICsg
MDsNCisJCXVpbnQ2NF90IHY0ID0gc2VlZCAtIFBSSU1FNjRfMTsNCisNCisJCWRvIHsNCisJCQl2
MSA9IHh4aDY0X3JvdW5kKHYxLCBnZXRfdW5hbGlnbmVkX2xlNjQocCkpOw0KKwkJCXAgKz0gODsN
CisJCQl2MiA9IHh4aDY0X3JvdW5kKHYyLCBnZXRfdW5hbGlnbmVkX2xlNjQocCkpOw0KKwkJCXAg
Kz0gODsNCisJCQl2MyA9IHh4aDY0X3JvdW5kKHYzLCBnZXRfdW5hbGlnbmVkX2xlNjQocCkpOw0K
KwkJCXAgKz0gODsNCisJCQl2NCA9IHh4aDY0X3JvdW5kKHY0LCBnZXRfdW5hbGlnbmVkX2xlNjQo
cCkpOw0KKwkJCXAgKz0gODsNCisJCX0gd2hpbGUgKHAgPD0gbGltaXQpOw0KKw0KKwkJaDY0ID0g
eHhoX3JvdGw2NCh2MSwgMSkgKyB4eGhfcm90bDY0KHYyLCA3KSArDQorCQkJeHhoX3JvdGw2NCh2
MywgMTIpICsgeHhoX3JvdGw2NCh2NCwgMTgpOw0KKwkJaDY0ID0geHhoNjRfbWVyZ2Vfcm91bmQo
aDY0LCB2MSk7DQorCQloNjQgPSB4eGg2NF9tZXJnZV9yb3VuZChoNjQsIHYyKTsNCisJCWg2NCA9
IHh4aDY0X21lcmdlX3JvdW5kKGg2NCwgdjMpOw0KKwkJaDY0ID0geHhoNjRfbWVyZ2Vfcm91bmQo
aDY0LCB2NCk7DQorDQorCX0gZWxzZSB7DQorCQloNjQgID0gc2VlZCArIFBSSU1FNjRfNTsNCisJ
fQ0KKw0KKwloNjQgKz0gKHVpbnQ2NF90KWxlbjsNCisNCisJd2hpbGUgKHAgKyA4IDw9IGJfZW5k
KSB7DQorCQljb25zdCB1aW50NjRfdCBrMSA9IHh4aDY0X3JvdW5kKDAsIGdldF91bmFsaWduZWRf
bGU2NChwKSk7DQorDQorCQloNjQgXj0gazE7DQorCQloNjQgPSB4eGhfcm90bDY0KGg2NCwgMjcp
ICogUFJJTUU2NF8xICsgUFJJTUU2NF80Ow0KKwkJcCArPSA4Ow0KKwl9DQorDQorCWlmIChwICsg
NCA8PSBiX2VuZCkgew0KKwkJaDY0IF49ICh1aW50NjRfdCkoZ2V0X3VuYWxpZ25lZF9sZTMyKHAp
KSAqIFBSSU1FNjRfMTsNCisJCWg2NCA9IHh4aF9yb3RsNjQoaDY0LCAyMykgKiBQUklNRTY0XzIg
KyBQUklNRTY0XzM7DQorCQlwICs9IDQ7DQorCX0NCisNCisJd2hpbGUgKHAgPCBiX2VuZCkgew0K
KwkJaDY0IF49ICgqcCkgKiBQUklNRTY0XzU7DQorCQloNjQgPSB4eGhfcm90bDY0KGg2NCwgMTEp
ICogUFJJTUU2NF8xOw0KKwkJcCsrOw0KKwl9DQorDQorCWg2NCBePSBoNjQgPj4gMzM7DQorCWg2
NCAqPSBQUklNRTY0XzI7DQorCWg2NCBePSBoNjQgPj4gMjk7DQorCWg2NCAqPSBQUklNRTY0XzM7
DQorCWg2NCBePSBoNjQgPj4gMzI7DQorDQorCXJldHVybiBoNjQ7DQorfQ0KK0VYUE9SVF9TWU1C
T0woeHhoNjQpOw0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqDQorICogQWR2YW5jZWQgSGFzaCBGdW5jdGlvbnMNCisgKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KK3ZvaWQgeHhoMzJfcmVz
ZXQoc3RydWN0IHh4aDMyX3N0YXRlICpzdGF0ZVB0ciwgY29uc3QgdWludDMyX3Qgc2VlZCkNCit7
DQorCS8qIHVzZSBhIGxvY2FsIHN0YXRlIGZvciBtZW1jcHkoKSB0byBhdm9pZCBzdHJpY3QtYWxp
YXNpbmcgd2FybmluZ3MgKi8NCisJc3RydWN0IHh4aDMyX3N0YXRlIHN0YXRlOw0KKw0KKwltZW1z
ZXQoJnN0YXRlLCAwLCBzaXplb2Yoc3RhdGUpKTsNCisJc3RhdGUudjEgPSBzZWVkICsgUFJJTUUz
Ml8xICsgUFJJTUUzMl8yOw0KKwlzdGF0ZS52MiA9IHNlZWQgKyBQUklNRTMyXzI7DQorCXN0YXRl
LnYzID0gc2VlZCArIDA7DQorCXN0YXRlLnY0ID0gc2VlZCAtIFBSSU1FMzJfMTsNCisJbWVtY3B5
KHN0YXRlUHRyLCAmc3RhdGUsIHNpemVvZihzdGF0ZSkpOw0KK30NCitFWFBPUlRfU1lNQk9MKHh4
aDMyX3Jlc2V0KTsNCisNCit2b2lkIHh4aDY0X3Jlc2V0KHN0cnVjdCB4eGg2NF9zdGF0ZSAqc3Rh
dGVQdHIsIGNvbnN0IHVpbnQ2NF90IHNlZWQpDQorew0KKwkvKiB1c2UgYSBsb2NhbCBzdGF0ZSBm
b3IgbWVtY3B5KCkgdG8gYXZvaWQgc3RyaWN0LWFsaWFzaW5nIHdhcm5pbmdzICovDQorCXN0cnVj
dCB4eGg2NF9zdGF0ZSBzdGF0ZTsNCisNCisJbWVtc2V0KCZzdGF0ZSwgMCwgc2l6ZW9mKHN0YXRl
KSk7DQorCXN0YXRlLnYxID0gc2VlZCArIFBSSU1FNjRfMSArIFBSSU1FNjRfMjsNCisJc3RhdGUu
djIgPSBzZWVkICsgUFJJTUU2NF8yOw0KKwlzdGF0ZS52MyA9IHNlZWQgKyAwOw0KKwlzdGF0ZS52
NCA9IHNlZWQgLSBQUklNRTY0XzE7DQorCW1lbWNweShzdGF0ZVB0ciwgJnN0YXRlLCBzaXplb2Yo
c3RhdGUpKTsNCit9DQorRVhQT1JUX1NZTUJPTCh4eGg2NF9yZXNldCk7DQorDQoraW50IHh4aDMy
X3VwZGF0ZShzdHJ1Y3QgeHhoMzJfc3RhdGUgKnN0YXRlLCBjb25zdCB2b2lkICppbnB1dCwgY29u
c3Qgc2l6ZV90IGxlbikNCit7DQorCWNvbnN0IHVpbnQ4X3QgKnAgPSAoY29uc3QgdWludDhfdCAq
KWlucHV0Ow0KKwljb25zdCB1aW50OF90ICpjb25zdCBiX2VuZCA9IHAgKyBsZW47DQorDQorCWlm
IChpbnB1dCA9PSBOVUxMKQ0KKwkJcmV0dXJuIC1FSU5WQUw7DQorDQorCXN0YXRlLT50b3RhbF9s
ZW5fMzIgKz0gKHVpbnQzMl90KWxlbjsNCisJc3RhdGUtPmxhcmdlX2xlbiB8PSAobGVuID49IDE2
KSB8IChzdGF0ZS0+dG90YWxfbGVuXzMyID49IDE2KTsNCisNCisJaWYgKHN0YXRlLT5tZW1zaXpl
ICsgbGVuIDwgMTYpIHsgLyogZmlsbCBpbiB0bXAgYnVmZmVyICovDQorCQltZW1jcHkoKHVpbnQ4
X3QgKikoc3RhdGUtPm1lbTMyKSArIHN0YXRlLT5tZW1zaXplLCBpbnB1dCwgbGVuKTsNCisJCXN0
YXRlLT5tZW1zaXplICs9ICh1aW50MzJfdClsZW47DQorCQlyZXR1cm4gMDsNCisJfQ0KKw0KKwlp
ZiAoc3RhdGUtPm1lbXNpemUpIHsgLyogc29tZSBkYXRhIGxlZnQgZnJvbSBwcmV2aW91cyB1cGRh
dGUgKi8NCisJCWNvbnN0IHVpbnQzMl90ICpwMzIgPSBzdGF0ZS0+bWVtMzI7DQorDQorCQltZW1j
cHkoKHVpbnQ4X3QgKikoc3RhdGUtPm1lbTMyKSArIHN0YXRlLT5tZW1zaXplLCBpbnB1dCwNCisJ
CQkxNiAtIHN0YXRlLT5tZW1zaXplKTsNCisNCisJCXN0YXRlLT52MSA9IHh4aDMyX3JvdW5kKHN0
YXRlLT52MSwgZ2V0X3VuYWxpZ25lZF9sZTMyKHAzMikpOw0KKwkJcDMyKys7DQorCQlzdGF0ZS0+
djIgPSB4eGgzMl9yb3VuZChzdGF0ZS0+djIsIGdldF91bmFsaWduZWRfbGUzMihwMzIpKTsNCisJ
CXAzMisrOw0KKwkJc3RhdGUtPnYzID0geHhoMzJfcm91bmQoc3RhdGUtPnYzLCBnZXRfdW5hbGln
bmVkX2xlMzIocDMyKSk7DQorCQlwMzIrKzsNCisJCXN0YXRlLT52NCA9IHh4aDMyX3JvdW5kKHN0
YXRlLT52NCwgZ2V0X3VuYWxpZ25lZF9sZTMyKHAzMikpOw0KKwkJcDMyKys7DQorDQorCQlwICs9
IDE2LXN0YXRlLT5tZW1zaXplOw0KKwkJc3RhdGUtPm1lbXNpemUgPSAwOw0KKwl9DQorDQorCWlm
IChwIDw9IGJfZW5kIC0gMTYpIHsNCisJCWNvbnN0IHVpbnQ4X3QgKmNvbnN0IGxpbWl0ID0gYl9l
bmQgLSAxNjsNCisJCXVpbnQzMl90IHYxID0gc3RhdGUtPnYxOw0KKwkJdWludDMyX3QgdjIgPSBz
dGF0ZS0+djI7DQorCQl1aW50MzJfdCB2MyA9IHN0YXRlLT52MzsNCisJCXVpbnQzMl90IHY0ID0g
c3RhdGUtPnY0Ow0KKw0KKwkJZG8gew0KKwkJCXYxID0geHhoMzJfcm91bmQodjEsIGdldF91bmFs
aWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJCXYyID0geHhoMzJfcm91bmQodjIsIGdl
dF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJCXYzID0geHhoMzJfcm91bmQo
djMsIGdldF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJCXY0ID0geHhoMzJf
cm91bmQodjQsIGdldF91bmFsaWduZWRfbGUzMihwKSk7DQorCQkJcCArPSA0Ow0KKwkJfSB3aGls
ZSAocCA8PSBsaW1pdCk7DQorDQorCQlzdGF0ZS0+djEgPSB2MTsNCisJCXN0YXRlLT52MiA9IHYy
Ow0KKwkJc3RhdGUtPnYzID0gdjM7DQorCQlzdGF0ZS0+djQgPSB2NDsNCisJfQ0KKw0KKwlpZiAo
cCA8IGJfZW5kKSB7DQorCQltZW1jcHkoc3RhdGUtPm1lbTMyLCBwLCAoc2l6ZV90KShiX2VuZC1w
KSk7DQorCQlzdGF0ZS0+bWVtc2l6ZSA9ICh1aW50MzJfdCkoYl9lbmQtcCk7DQorCX0NCisNCisJ
cmV0dXJuIDA7DQorfQ0KK0VYUE9SVF9TWU1CT0woeHhoMzJfdXBkYXRlKTsNCisNCit1aW50MzJf
dCB4eGgzMl9kaWdlc3QoY29uc3Qgc3RydWN0IHh4aDMyX3N0YXRlICpzdGF0ZSkNCit7DQorCWNv
bnN0IHVpbnQ4X3QgKnAgPSAoY29uc3QgdWludDhfdCAqKXN0YXRlLT5tZW0zMjsNCisJY29uc3Qg
dWludDhfdCAqY29uc3QgYl9lbmQgPSAoY29uc3QgdWludDhfdCAqKShzdGF0ZS0+bWVtMzIpICsN
CisJCXN0YXRlLT5tZW1zaXplOw0KKwl1aW50MzJfdCBoMzI7DQorDQorCWlmIChzdGF0ZS0+bGFy
Z2VfbGVuKSB7DQorCQloMzIgPSB4eGhfcm90bDMyKHN0YXRlLT52MSwgMSkgKyB4eGhfcm90bDMy
KHN0YXRlLT52MiwgNykgKw0KKwkJCXh4aF9yb3RsMzIoc3RhdGUtPnYzLCAxMikgKyB4eGhfcm90
bDMyKHN0YXRlLT52NCwgMTgpOw0KKwl9IGVsc2Ugew0KKwkJaDMyID0gc3RhdGUtPnYzIC8qID09
IHNlZWQgKi8gKyBQUklNRTMyXzU7DQorCX0NCisNCisJaDMyICs9IHN0YXRlLT50b3RhbF9sZW5f
MzI7DQorDQorCXdoaWxlIChwICsgNCA8PSBiX2VuZCkgew0KKwkJaDMyICs9IGdldF91bmFsaWdu
ZWRfbGUzMihwKSAqIFBSSU1FMzJfMzsNCisJCWgzMiA9IHh4aF9yb3RsMzIoaDMyLCAxNykgKiBQ
UklNRTMyXzQ7DQorCQlwICs9IDQ7DQorCX0NCisNCisJd2hpbGUgKHAgPCBiX2VuZCkgew0KKwkJ
aDMyICs9ICgqcCkgKiBQUklNRTMyXzU7DQorCQloMzIgPSB4eGhfcm90bDMyKGgzMiwgMTEpICog
UFJJTUUzMl8xOw0KKwkJcCsrOw0KKwl9DQorDQorCWgzMiBePSBoMzIgPj4gMTU7DQorCWgzMiAq
PSBQUklNRTMyXzI7DQorCWgzMiBePSBoMzIgPj4gMTM7DQorCWgzMiAqPSBQUklNRTMyXzM7DQor
CWgzMiBePSBoMzIgPj4gMTY7DQorDQorCXJldHVybiBoMzI7DQorfQ0KK0VYUE9SVF9TWU1CT0wo
eHhoMzJfZGlnZXN0KTsNCisNCitpbnQgeHhoNjRfdXBkYXRlKHN0cnVjdCB4eGg2NF9zdGF0ZSAq
c3RhdGUsIGNvbnN0IHZvaWQgKmlucHV0LCBjb25zdCBzaXplX3QgbGVuKQ0KK3sNCisJY29uc3Qg
dWludDhfdCAqcCA9IChjb25zdCB1aW50OF90ICopaW5wdXQ7DQorCWNvbnN0IHVpbnQ4X3QgKmNv
bnN0IGJfZW5kID0gcCArIGxlbjsNCisNCisJaWYgKGlucHV0ID09IE5VTEwpDQorCQlyZXR1cm4g
LUVJTlZBTDsNCisNCisJc3RhdGUtPnRvdGFsX2xlbiArPSBsZW47DQorDQorCWlmIChzdGF0ZS0+
bWVtc2l6ZSArIGxlbiA8IDMyKSB7IC8qIGZpbGwgaW4gdG1wIGJ1ZmZlciAqLw0KKwkJbWVtY3B5
KCgodWludDhfdCAqKXN0YXRlLT5tZW02NCkgKyBzdGF0ZS0+bWVtc2l6ZSwgaW5wdXQsIGxlbik7
DQorCQlzdGF0ZS0+bWVtc2l6ZSArPSAodWludDMyX3QpbGVuOw0KKwkJcmV0dXJuIDA7DQorCX0N
CisNCisJaWYgKHN0YXRlLT5tZW1zaXplKSB7IC8qIHRtcCBidWZmZXIgaXMgZnVsbCAqLw0KKwkJ
dWludDY0X3QgKnA2NCA9IHN0YXRlLT5tZW02NDsNCisNCisJCW1lbWNweSgoKHVpbnQ4X3QgKilw
NjQpICsgc3RhdGUtPm1lbXNpemUsIGlucHV0LA0KKwkJCTMyIC0gc3RhdGUtPm1lbXNpemUpOw0K
Kw0KKwkJc3RhdGUtPnYxID0geHhoNjRfcm91bmQoc3RhdGUtPnYxLCBnZXRfdW5hbGlnbmVkX2xl
NjQocDY0KSk7DQorCQlwNjQrKzsNCisJCXN0YXRlLT52MiA9IHh4aDY0X3JvdW5kKHN0YXRlLT52
MiwgZ2V0X3VuYWxpZ25lZF9sZTY0KHA2NCkpOw0KKwkJcDY0Kys7DQorCQlzdGF0ZS0+djMgPSB4
eGg2NF9yb3VuZChzdGF0ZS0+djMsIGdldF91bmFsaWduZWRfbGU2NChwNjQpKTsNCisJCXA2NCsr
Ow0KKwkJc3RhdGUtPnY0ID0geHhoNjRfcm91bmQoc3RhdGUtPnY0LCBnZXRfdW5hbGlnbmVkX2xl
NjQocDY0KSk7DQorDQorCQlwICs9IDMyIC0gc3RhdGUtPm1lbXNpemU7DQorCQlzdGF0ZS0+bWVt
c2l6ZSA9IDA7DQorCX0NCisNCisJaWYgKHAgKyAzMiA8PSBiX2VuZCkgew0KKwkJY29uc3QgdWlu
dDhfdCAqY29uc3QgbGltaXQgPSBiX2VuZCAtIDMyOw0KKwkJdWludDY0X3QgdjEgPSBzdGF0ZS0+
djE7DQorCQl1aW50NjRfdCB2MiA9IHN0YXRlLT52MjsNCisJCXVpbnQ2NF90IHYzID0gc3RhdGUt
PnYzOw0KKwkJdWludDY0X3QgdjQgPSBzdGF0ZS0+djQ7DQorDQorCQlkbyB7DQorCQkJdjEgPSB4
eGg2NF9yb3VuZCh2MSwgZ2V0X3VuYWxpZ25lZF9sZTY0KHApKTsNCisJCQlwICs9IDg7DQorCQkJ
djIgPSB4eGg2NF9yb3VuZCh2MiwgZ2V0X3VuYWxpZ25lZF9sZTY0KHApKTsNCisJCQlwICs9IDg7
DQorCQkJdjMgPSB4eGg2NF9yb3VuZCh2MywgZ2V0X3VuYWxpZ25lZF9sZTY0KHApKTsNCisJCQlw
ICs9IDg7DQorCQkJdjQgPSB4eGg2NF9yb3VuZCh2NCwgZ2V0X3VuYWxpZ25lZF9sZTY0KHApKTsN
CisJCQlwICs9IDg7DQorCQl9IHdoaWxlIChwIDw9IGxpbWl0KTsNCisNCisJCXN0YXRlLT52MSA9
IHYxOw0KKwkJc3RhdGUtPnYyID0gdjI7DQorCQlzdGF0ZS0+djMgPSB2MzsNCisJCXN0YXRlLT52
NCA9IHY0Ow0KKwl9DQorDQorCWlmIChwIDwgYl9lbmQpIHsNCisJCW1lbWNweShzdGF0ZS0+bWVt
NjQsIHAsIChzaXplX3QpKGJfZW5kLXApKTsNCisJCXN0YXRlLT5tZW1zaXplID0gKHVpbnQzMl90
KShiX2VuZCAtIHApOw0KKwl9DQorDQorCXJldHVybiAwOw0KK30NCitFWFBPUlRfU1lNQk9MKHh4
aDY0X3VwZGF0ZSk7DQorDQordWludDY0X3QgeHhoNjRfZGlnZXN0KGNvbnN0IHN0cnVjdCB4eGg2
NF9zdGF0ZSAqc3RhdGUpDQorew0KKwljb25zdCB1aW50OF90ICpwID0gKGNvbnN0IHVpbnQ4X3Qg
KilzdGF0ZS0+bWVtNjQ7DQorCWNvbnN0IHVpbnQ4X3QgKmNvbnN0IGJfZW5kID0gKGNvbnN0IHVp
bnQ4X3QgKilzdGF0ZS0+bWVtNjQgKw0KKwkJc3RhdGUtPm1lbXNpemU7DQorCXVpbnQ2NF90IGg2
NDsNCisNCisJaWYgKHN0YXRlLT50b3RhbF9sZW4gPj0gMzIpIHsNCisJCWNvbnN0IHVpbnQ2NF90
IHYxID0gc3RhdGUtPnYxOw0KKwkJY29uc3QgdWludDY0X3QgdjIgPSBzdGF0ZS0+djI7DQorCQlj
b25zdCB1aW50NjRfdCB2MyA9IHN0YXRlLT52MzsNCisJCWNvbnN0IHVpbnQ2NF90IHY0ID0gc3Rh
dGUtPnY0Ow0KKw0KKwkJaDY0ID0geHhoX3JvdGw2NCh2MSwgMSkgKyB4eGhfcm90bDY0KHYyLCA3
KSArDQorCQkJeHhoX3JvdGw2NCh2MywgMTIpICsgeHhoX3JvdGw2NCh2NCwgMTgpOw0KKwkJaDY0
ID0geHhoNjRfbWVyZ2Vfcm91bmQoaDY0LCB2MSk7DQorCQloNjQgPSB4eGg2NF9tZXJnZV9yb3Vu
ZChoNjQsIHYyKTsNCisJCWg2NCA9IHh4aDY0X21lcmdlX3JvdW5kKGg2NCwgdjMpOw0KKwkJaDY0
ID0geHhoNjRfbWVyZ2Vfcm91bmQoaDY0LCB2NCk7DQorCX0gZWxzZSB7DQorCQloNjQgID0gc3Rh
dGUtPnYzICsgUFJJTUU2NF81Ow0KKwl9DQorDQorCWg2NCArPSAodWludDY0X3Qpc3RhdGUtPnRv
dGFsX2xlbjsNCisNCisJd2hpbGUgKHAgKyA4IDw9IGJfZW5kKSB7DQorCQljb25zdCB1aW50NjRf
dCBrMSA9IHh4aDY0X3JvdW5kKDAsIGdldF91bmFsaWduZWRfbGU2NChwKSk7DQorDQorCQloNjQg
Xj0gazE7DQorCQloNjQgPSB4eGhfcm90bDY0KGg2NCwgMjcpICogUFJJTUU2NF8xICsgUFJJTUU2
NF80Ow0KKwkJcCArPSA4Ow0KKwl9DQorDQorCWlmIChwICsgNCA8PSBiX2VuZCkgew0KKwkJaDY0
IF49ICh1aW50NjRfdCkoZ2V0X3VuYWxpZ25lZF9sZTMyKHApKSAqIFBSSU1FNjRfMTsNCisJCWg2
NCA9IHh4aF9yb3RsNjQoaDY0LCAyMykgKiBQUklNRTY0XzIgKyBQUklNRTY0XzM7DQorCQlwICs9
IDQ7DQorCX0NCisNCisJd2hpbGUgKHAgPCBiX2VuZCkgew0KKwkJaDY0IF49ICgqcCkgKiBQUklN
RTY0XzU7DQorCQloNjQgPSB4eGhfcm90bDY0KGg2NCwgMTEpICogUFJJTUU2NF8xOw0KKwkJcCsr
Ow0KKwl9DQorDQorCWg2NCBePSBoNjQgPj4gMzM7DQorCWg2NCAqPSBQUklNRTY0XzI7DQorCWg2
NCBePSBoNjQgPj4gMjk7DQorCWg2NCAqPSBQUklNRTY0XzM7DQorCWg2NCBePSBoNjQgPj4gMzI7
DQorDQorCXJldHVybiBoNjQ7DQorfQ0KK0VYUE9SVF9TWU1CT0woeHhoNjRfZGlnZXN0KTsNCisN
CitNT0RVTEVfTElDRU5TRSgiRHVhbCBCU0QvR1BMIik7DQorTU9EVUxFX0RFU0NSSVBUSU9OKCJ4
eEhhc2giKTsNCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvYml0c3RyZWFtLmggYi94ZW4v
Y29tbW9uL3pzdGQvYml0c3RyZWFtLmgNCm5ldyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAw
MDAwMDAwLi4zYTQ5Nzg0ZDVjDQotLS0gL2Rldi9udWxsDQorKysgYi94ZW4vY29tbW9uL3pzdGQv
Yml0c3RyZWFtLmgNCkBAIC0wLDAgKzEsMzc5IEBADQorLyoNCisgKiBiaXRzdHJlYW0NCisgKiBQ
YXJ0IG9mIEZTRSBsaWJyYXJ5DQorICogaGVhZGVyIGZpbGUgKHRvIGluY2x1ZGUpDQorICogQ29w
eXJpZ2h0IChDKSAyMDEzLTIwMTYsIFlhbm4gQ29sbGV0Lg0KKyAqDQorICogQlNEIDItQ2xhdXNl
IExpY2Vuc2UgKGh0dHA6Ly93d3cub3BlbnNvdXJjZS5vcmcvbGljZW5zZXMvYnNkLWxpY2Vuc2Uu
cGhwKQ0KKyAqDQorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFy
eSBmb3Jtcywgd2l0aCBvciB3aXRob3V0DQorICogbW9kaWZpY2F0aW9uLCBhcmUgcGVybWl0dGVk
IHByb3ZpZGVkIHRoYXQgdGhlIGZvbGxvd2luZyBjb25kaXRpb25zIGFyZQ0KKyAqIG1ldDoNCisg
Kg0KKyAqICAgKiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhl
IGFib3ZlIGNvcHlyaWdodA0KKyAqIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5k
IHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lci4NCisgKiAgICogUmVkaXN0cmlidXRpb25zIGluIGJp
bmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZQ0KKyAqIGNvcHlyaWdodCBub3RpY2Us
IHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXINCisg
KiBpbiB0aGUgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdp
dGggdGhlDQorICogZGlzdHJpYnV0aW9uLg0KKyAqDQorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9W
SURFRCBCWSBUSEUgQ09QWVJJR0hUIEhPTERFUlMgQU5EIENPTlRSSUJVVE9SUw0KKyAqICJBUyBJ
UyIgQU5EIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQg
Tk9UDQorICogTElNSVRFRCBUTywgVEhFIElNUExJRUQgV0FSUkFOVElFUyBPRiBNRVJDSEFOVEFC
SUxJVFkgQU5EIEZJVE5FU1MgRk9SDQorICogQSBQQVJUSUNVTEFSIFBVUlBPU0UgQVJFIERJU0NM
QUlNRUQuIElOIE5PIEVWRU5UIFNIQUxMIFRIRSBDT1BZUklHSFQNCisgKiBPV05FUiBPUiBDT05U
UklCVVRPUlMgQkUgTElBQkxFIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwN
CisgKiBTUEVDSUFMLCBFWEVNUExBUlksIE9SIENPTlNFUVVFTlRJQUwgREFNQUdFUyAoSU5DTFVE
SU5HLCBCVVQgTk9UDQorICogTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBH
T09EUyBPUiBTRVJWSUNFUzsgTE9TUyBPRiBVU0UsDQorICogREFUQSwgT1IgUFJPRklUUzsgT1Ig
QlVTSU5FU1MgSU5URVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZDQorICogVEhF
T1JZIE9GIExJQUJJTElUWSwgV0hFVEhFUiBJTiBDT05UUkFDVCwgU1RSSUNUIExJQUJJTElUWSwg
T1IgVE9SVA0KKyAqIChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcg
SU4gQU5ZIFdBWSBPVVQgT0YgVEhFIFVTRQ0KKyAqIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYg
QURWSVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YgU1VDSCBEQU1BR0UuDQorICoNCisgKiBUaGlz
IHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29y
IG1vZGlmeSBpdCB1bmRlcg0KKyAqIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgdmVyc2lvbiAyIGFzIHB1Ymxpc2hlZCBieSB0aGUNCisgKiBGcmVlIFNvZnR3YXJl
IEZvdW5kYXRpb24uIFRoaXMgcHJvZ3JhbSBpcyBkdWFsLWxpY2Vuc2VkOyB5b3UgbWF5IHNlbGVj
dA0KKyAqIGVpdGhlciB2ZXJzaW9uIDIgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNl
ICgiR1BMIikgb3IgQlNEIGxpY2Vuc2UNCisgKiAoIkJTRCIpLg0KKyAqDQorICogWW91IGNhbiBj
b250YWN0IHRoZSBhdXRob3IgYXQgOg0KKyAqIC0gU291cmNlIHJlcG9zaXRvcnkgOiBodHRwczov
L2dpdGh1Yi5jb20vQ3lhbjQ5NzMvRmluaXRlU3RhdGVFbnRyb3B5DQorICovDQorI2lmbmRlZiBC
SVRTVFJFQU1fSF9NT0RVTEUNCisjZGVmaW5lIEJJVFNUUkVBTV9IX01PRFVMRQ0KKw0KKy8qDQor
KiAgVGhpcyBBUEkgY29uc2lzdHMgb2Ygc21hbGwgdW5pdGFyeSBmdW5jdGlvbnMsIHdoaWNoIG11
c3QgYmUgaW5saW5lZCBmb3IgYmVzdCBwZXJmb3JtYW5jZS4NCisqICBTaW5jZSBsaW5rLXRpbWUt
b3B0aW1pemF0aW9uIGlzIG5vdCBhdmFpbGFibGUgZm9yIGFsbCBjb21waWxlcnMsDQorKiAgdGhl
c2UgZnVuY3Rpb25zIGFyZSBkZWZpbmVkIGludG8gYSAuaCB0byBiZSBpbmNsdWRlZC4NCisqLw0K
Kw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisqICBEZXBl
bmRlbmNpZXMNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQor
I2luY2x1ZGUgImVycm9yX3ByaXZhdGUuaCIgLyogZXJyb3IgY29kZXMgYW5kIG1lc3NhZ2VzICov
DQorI2luY2x1ZGUgIm1lbS5oIgkgICAvKiB1bmFsaWduZWQgYWNjZXNzIHJvdXRpbmVzICovDQor
DQorLyo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQ0KKyogIFRhcmdl
dCBzcGVjaWZpYw0KKz09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Ki8N
CisjZGVmaW5lIFNUUkVBTV9BQ0NVTVVMQVRPUl9NSU5fMzIgMjUNCisjZGVmaW5lIFNUUkVBTV9B
Q0NVTVVMQVRPUl9NSU5fNjQgNTcNCisjZGVmaW5lIFNUUkVBTV9BQ0NVTVVMQVRPUl9NSU4gKChV
MzIpKFpTVERfMzJiaXRzKCkgPyBTVFJFQU1fQUNDVU1VTEFUT1JfTUlOXzMyIDogU1RSRUFNX0FD
Q1VNVUxBVE9SX01JTl82NCkpDQorDQorLyotKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqDQorKiAgYml0U3RyZWFtIGVuY29kaW5nIEFQSSAod3JpdGUgZm9yd2FyZCkN
CisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKiBiaXRT
dHJlYW0gY2FuIG1peCBpbnB1dCBmcm9tIG11bHRpcGxlIHNvdXJjZXMuDQorKiAgQSBjcml0aWNh
bCBwcm9wZXJ0eSBvZiB0aGVzZSBzdHJlYW1zIGlzIHRoYXQgdGhleSBlbmNvZGUgYW5kIGRlY29k
ZSBpbiAqKnJldmVyc2UqKiBkaXJlY3Rpb24uDQorKiAgU28gdGhlIGZpcnN0IGJpdCBzZXF1ZW5j
ZSB5b3UgYWRkIHdpbGwgYmUgdGhlIGxhc3QgdG8gYmUgcmVhZCwgbGlrZSBhIExJRk8gc3RhY2su
DQorKi8NCit0eXBlZGVmIHN0cnVjdCB7DQorCXNpemVfdCBiaXRDb250YWluZXI7DQorCWludCBi
aXRQb3M7DQorCWNoYXIgKnN0YXJ0UHRyOw0KKwljaGFyICpwdHI7DQorCWNoYXIgKmVuZFB0cjsN
Cit9IEJJVF9DU3RyZWFtX3Q7DQorDQorWlNURF9TVEFUSUMgc2l6ZV90IEJJVF9pbml0Q1N0cmVh
bShCSVRfQ1N0cmVhbV90ICpiaXRDLCB2b2lkICpkc3RCdWZmZXIsIHNpemVfdCBkc3RDYXBhY2l0
eSk7DQorWlNURF9TVEFUSUMgdm9pZCBCSVRfYWRkQml0cyhCSVRfQ1N0cmVhbV90ICpiaXRDLCBz
aXplX3QgdmFsdWUsIHVuc2lnbmVkIG5iQml0cyk7DQorWlNURF9TVEFUSUMgdm9pZCBCSVRfZmx1
c2hCaXRzKEJJVF9DU3RyZWFtX3QgKmJpdEMpOw0KK1pTVERfU1RBVElDIHNpemVfdCBCSVRfY2xv
c2VDU3RyZWFtKEJJVF9DU3RyZWFtX3QgKmJpdEMpOw0KKw0KKy8qIFN0YXJ0IHdpdGggaW5pdENT
dHJlYW0sIHByb3ZpZGluZyB0aGUgc2l6ZSBvZiBidWZmZXIgdG8gd3JpdGUgaW50by4NCisqICBi
aXRTdHJlYW0gd2lsbCBuZXZlciB3cml0ZSBvdXRzaWRlIG9mIHRoaXMgYnVmZmVyLg0KKyogIGBk
c3RDYXBhY2l0eWAgbXVzdCBiZSA+PSBzaXplb2YoYml0RC0+Yml0Q29udGFpbmVyKSwgb3RoZXJ3
aXNlIEByZXR1cm4gd2lsbCBiZSBhbiBlcnJvciBjb2RlLg0KKyoNCisqICBiaXRzIGFyZSBmaXJz
dCBhZGRlZCB0byBhIGxvY2FsIHJlZ2lzdGVyLg0KKyogIExvY2FsIHJlZ2lzdGVyIGlzIHNpemVf
dCwgaGVuY2UgNjQtYml0cyBvbiA2NC1iaXRzIHN5c3RlbXMsIG9yIDMyLWJpdHMgb24gMzItYml0
cyBzeXN0ZW1zLg0KKyogIFdyaXRpbmcgZGF0YSBpbnRvIG1lbW9yeSBpcyBhbiBleHBsaWNpdCBv
cGVyYXRpb24sIHBlcmZvcm1lZCBieSB0aGUgZmx1c2hCaXRzIGZ1bmN0aW9uLg0KKyogIEhlbmNl
IGtlZXAgdHJhY2sgaG93IG1hbnkgYml0cyBhcmUgcG90ZW50aWFsbHkgc3RvcmVkIGludG8gbG9j
YWwgcmVnaXN0ZXIgdG8gYXZvaWQgcmVnaXN0ZXIgb3ZlcmZsb3cuDQorKiAgQWZ0ZXIgYSBmbHVz
aEJpdHMsIGEgbWF4aW11bSBvZiA3IGJpdHMgbWlnaHQgc3RpbGwgYmUgc3RvcmVkIGludG8gbG9j
YWwgcmVnaXN0ZXIuDQorKg0KKyogIEF2b2lkIHN0b3JpbmcgZWxlbWVudHMgb2YgbW9yZSB0aGFu
IDI0IGJpdHMgaWYgeW91IHdhbnQgY29tcGF0aWJpbGl0eSB3aXRoIDMyLWJpdHMgYml0c3RyZWFt
IHJlYWRlcnMuDQorKg0KKyogIExhc3Qgb3BlcmF0aW9uIGlzIHRvIGNsb3NlIHRoZSBiaXRTdHJl
YW0uDQorKiAgVGhlIGZ1bmN0aW9uIHJldHVybnMgdGhlIGZpbmFsIHNpemUgb2YgQ1N0cmVhbSBp
biBieXRlcy4NCisqICBJZiBkYXRhIGNvdWxkbid0IGZpdCBpbnRvIGBkc3RCdWZmZXJgLCBpdCB3
aWxsIHJldHVybiBhIDAgKCA9PSBub3Qgc3RvcmFibGUpDQorKi8NCisNCisvKi0qKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIGJpdFN0cmVhbSBkZWNvZGlu
ZyBBUEkgKHJlYWQgYmFja3dhcmQpDQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKi8NCit0eXBlZGVmIHN0cnVjdCB7DQorCXNpemVfdCBiaXRDb250YWluZXI7
DQorCXVuc2lnbmVkIGJpdHNDb25zdW1lZDsNCisJY29uc3QgY2hhciAqcHRyOw0KKwljb25zdCBj
aGFyICpzdGFydDsNCit9IEJJVF9EU3RyZWFtX3Q7DQorDQordHlwZWRlZiBlbnVtIHsNCisJQklU
X0RTdHJlYW1fdW5maW5pc2hlZCA9IDAsDQorCUJJVF9EU3RyZWFtX2VuZE9mQnVmZmVyID0gMSwN
CisJQklUX0RTdHJlYW1fY29tcGxldGVkID0gMiwNCisJQklUX0RTdHJlYW1fb3ZlcmZsb3cgPSAz
DQorfSBCSVRfRFN0cmVhbV9zdGF0dXM7IC8qIHJlc3VsdCBvZiBCSVRfcmVsb2FkRFN0cmVhbSgp
ICovDQorLyogMSwyLDQsOCB3b3VsZCBiZSBiZXR0ZXIgZm9yIGJpdG1hcCBjb21iaW5hdGlvbnMs
IGJ1dCBzbG93cyBkb3duIHBlcmZvcm1hbmNlIGEgYml0IC4uLiA6KCAqLw0KKw0KK1pTVERfU1RB
VElDIHNpemVfdCBCSVRfaW5pdERTdHJlYW0oQklUX0RTdHJlYW1fdCAqYml0RCwgY29uc3Qgdm9p
ZCAqc3JjQnVmZmVyLCBzaXplX3Qgc3JjU2l6ZSk7DQorWlNURF9TVEFUSUMgc2l6ZV90IEJJVF9y
ZWFkQml0cyhCSVRfRFN0cmVhbV90ICpiaXRELCB1bnNpZ25lZCBuYkJpdHMpOw0KK1pTVERfU1RB
VElDIEJJVF9EU3RyZWFtX3N0YXR1cyBCSVRfcmVsb2FkRFN0cmVhbShCSVRfRFN0cmVhbV90ICpi
aXREKTsNCitaU1REX1NUQVRJQyB1bnNpZ25lZCBCSVRfZW5kT2ZEU3RyZWFtKGNvbnN0IEJJVF9E
U3RyZWFtX3QgKmJpdEQpOw0KKw0KKy8qIFN0YXJ0IGJ5IGludm9raW5nIEJJVF9pbml0RFN0cmVh
bSgpLg0KKyogIEEgY2h1bmsgb2YgdGhlIGJpdFN0cmVhbSBpcyB0aGVuIHN0b3JlZCBpbnRvIGEg
bG9jYWwgcmVnaXN0ZXIuDQorKiAgTG9jYWwgcmVnaXN0ZXIgc2l6ZSBpcyA2NC1iaXRzIG9uIDY0
LWJpdHMgc3lzdGVtcywgMzItYml0cyBvbiAzMi1iaXRzIHN5c3RlbXMgKHNpemVfdCkuDQorKiAg
WW91IGNhbiB0aGVuIHJldHJpZXZlIGJpdEZpZWxkcyBzdG9yZWQgaW50byB0aGUgbG9jYWwgcmVn
aXN0ZXIsICoqaW4gcmV2ZXJzZSBvcmRlcioqLg0KKyogIExvY2FsIHJlZ2lzdGVyIGlzIGV4cGxp
Y2l0bHkgcmVsb2FkZWQgZnJvbSBtZW1vcnkgYnkgdGhlIEJJVF9yZWxvYWREU3RyZWFtKCkgbWV0
aG9kLg0KKyogIEEgcmVsb2FkIGd1YXJhbnRlZSBhIG1pbmltdW0gb2YgKCg4KnNpemVvZihiaXRE
LT5iaXRDb250YWluZXIpKS03KSBiaXRzIHdoZW4gaXRzIHJlc3VsdCBpcyBCSVRfRFN0cmVhbV91
bmZpbmlzaGVkLg0KKyogIE90aGVyd2lzZSwgaXQgY2FuIGJlIGxlc3MgdGhhbiB0aGF0LCBzbyBw
cm9jZWVkIGFjY29yZGluZ2x5Lg0KKyogIENoZWNraW5nIGlmIERTdHJlYW0gaGFzIHJlYWNoZWQg
aXRzIGVuZCBjYW4gYmUgcGVyZm9ybWVkIHdpdGggQklUX2VuZE9mRFN0cmVhbSgpLg0KKyovDQor
DQorLyotKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIHVuc2Fm
ZSBBUEkNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorWlNU
RF9TVEFUSUMgdm9pZCBCSVRfYWRkQml0c0Zhc3QoQklUX0NTdHJlYW1fdCAqYml0Qywgc2l6ZV90
IHZhbHVlLCB1bnNpZ25lZCBuYkJpdHMpOw0KKy8qIGZhc3RlciwgYnV0IHdvcmtzIG9ubHkgaWYg
dmFsdWUgaXMgImNsZWFuIiwgbWVhbmluZyBhbGwgaGlnaCBiaXRzIGFib3ZlIG5iQml0cyBhcmUg
MCAqLw0KKw0KK1pTVERfU1RBVElDIHZvaWQgQklUX2ZsdXNoQml0c0Zhc3QoQklUX0NTdHJlYW1f
dCAqYml0Qyk7DQorLyogdW5zYWZlIHZlcnNpb247IGRvZXMgbm90IGNoZWNrIGJ1ZmZlciBvdmVy
ZmxvdyAqLw0KKw0KK1pTVERfU1RBVElDIHNpemVfdCBCSVRfcmVhZEJpdHNGYXN0KEJJVF9EU3Ry
ZWFtX3QgKmJpdEQsIHVuc2lnbmVkIG5iQml0cyk7DQorLyogZmFzdGVyLCBidXQgd29ya3Mgb25s
eSBpZiBuYkJpdHMgPj0gMSAqLw0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgSW50ZXJuYWwgZnVuY3Rpb25z
DQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKi8NCitaU1REX1NUQVRJQyB1bnNpZ25lZCBCSVRfaGlnaGJpdDMyKHJlZ2lzdGVy
IFUzMiB2YWwpIHsgcmV0dXJuIDMxIC0gX19idWlsdGluX2Nseih2YWwpOyB9DQorDQorLyo9PT09
PSAgICBMb2NhbCBDb25zdGFudHMgICA9PT09PSovDQorc3RhdGljIGNvbnN0IHVuc2lnbmVkIEJJ
VF9tYXNrW10gPSB7MCwgICAgICAgMSwgICAgICAgMywgICAgICAgNywJMHhGLCAgICAgIDB4MUYs
ICAgICAweDNGLCAgICAgMHg3RiwgICAgICAweEZGLA0KKwkJCQkgICAgMHgxRkYsICAgMHgzRkYs
ICAgMHg3RkYsICAgMHhGRkYsICAgIDB4MUZGRiwgICAweDNGRkYsICAgMHg3RkZGLCAgIDB4RkZG
RiwgICAgMHgxRkZGRiwNCisJCQkJICAgIDB4M0ZGRkYsIDB4N0ZGRkYsIDB4RkZGRkYsIDB4MUZG
RkZGLCAweDNGRkZGRiwgMHg3RkZGRkYsIDB4RkZGRkZGLCAweDFGRkZGRkYsIDB4M0ZGRkZGRn07
IC8qIHVwIHRvIDI2IGJpdHMgKi8NCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIGJpdFN0cmVhbSBlbmNvZGlu
Zw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKiovDQorLyohIEJJVF9pbml0Q1N0cmVhbSgpIDoNCisgKiAgYGRzdENhcGFjaXR5
YCBtdXN0IGJlID4gc2l6ZW9mKHZvaWQqKQ0KKyAqICBAcmV0dXJuIDogMCBpZiBzdWNjZXNzLA0K
KwkJCSAgb3RoZXJ3aXNlIGFuIGVycm9yIGNvZGUgKGNhbiBiZSB0ZXN0ZWQgdXNpbmcgRVJSX2lz
RXJyb3IoKSApICovDQorWlNURF9TVEFUSUMgc2l6ZV90IEJJVF9pbml0Q1N0cmVhbShCSVRfQ1N0
cmVhbV90ICpiaXRDLCB2b2lkICpzdGFydFB0ciwgc2l6ZV90IGRzdENhcGFjaXR5KQ0KK3sNCisJ
Yml0Qy0+Yml0Q29udGFpbmVyID0gMDsNCisJYml0Qy0+Yml0UG9zID0gMDsNCisJYml0Qy0+c3Rh
cnRQdHIgPSAoY2hhciAqKXN0YXJ0UHRyOw0KKwliaXRDLT5wdHIgPSBiaXRDLT5zdGFydFB0cjsN
CisJYml0Qy0+ZW5kUHRyID0gYml0Qy0+c3RhcnRQdHIgKyBkc3RDYXBhY2l0eSAtIHNpemVvZihi
aXRDLT5wdHIpOw0KKwlpZiAoZHN0Q2FwYWNpdHkgPD0gc2l6ZW9mKGJpdEMtPnB0cikpDQorCQly
ZXR1cm4gRVJST1IoZHN0U2l6ZV90b29TbWFsbCk7DQorCXJldHVybiAwOw0KK30NCisNCisvKiEg
QklUX2FkZEJpdHMoKSA6DQorCWNhbiBhZGQgdXAgdG8gMjYgYml0cyBpbnRvIGBiaXRDYC4NCisJ
RG9lcyBub3QgY2hlY2sgZm9yIHJlZ2lzdGVyIG92ZXJmbG93ICEgKi8NCitaU1REX1NUQVRJQyB2
b2lkIEJJVF9hZGRCaXRzKEJJVF9DU3RyZWFtX3QgKmJpdEMsIHNpemVfdCB2YWx1ZSwgdW5zaWdu
ZWQgbmJCaXRzKQ0KK3sNCisJYml0Qy0+Yml0Q29udGFpbmVyIHw9ICh2YWx1ZSAmIEJJVF9tYXNr
W25iQml0c10pIDw8IGJpdEMtPmJpdFBvczsNCisJYml0Qy0+Yml0UG9zICs9IG5iQml0czsNCit9
DQorDQorLyohIEJJVF9hZGRCaXRzRmFzdCgpIDoNCisgKiAgd29ya3Mgb25seSBpZiBgdmFsdWVg
IGlzIF9jbGVhbl8sIG1lYW5pbmcgYWxsIGhpZ2ggYml0cyBhYm92ZSBuYkJpdHMgYXJlIDAgKi8N
CitaU1REX1NUQVRJQyB2b2lkIEJJVF9hZGRCaXRzRmFzdChCSVRfQ1N0cmVhbV90ICpiaXRDLCBz
aXplX3QgdmFsdWUsIHVuc2lnbmVkIG5iQml0cykNCit7DQorCWJpdEMtPmJpdENvbnRhaW5lciB8
PSB2YWx1ZSA8PCBiaXRDLT5iaXRQb3M7DQorCWJpdEMtPmJpdFBvcyArPSBuYkJpdHM7DQorfQ0K
Kw0KKy8qISBCSVRfZmx1c2hCaXRzRmFzdCgpIDoNCisgKiAgdW5zYWZlIHZlcnNpb247IGRvZXMg
bm90IGNoZWNrIGJ1ZmZlciBvdmVyZmxvdyAqLw0KK1pTVERfU1RBVElDIHZvaWQgQklUX2ZsdXNo
Qml0c0Zhc3QoQklUX0NTdHJlYW1fdCAqYml0QykNCit7DQorCXNpemVfdCBjb25zdCBuYkJ5dGVz
ID0gYml0Qy0+Yml0UG9zID4+IDM7DQorCVpTVERfd3JpdGVMRVNUKGJpdEMtPnB0ciwgYml0Qy0+
Yml0Q29udGFpbmVyKTsNCisJYml0Qy0+cHRyICs9IG5iQnl0ZXM7DQorCWJpdEMtPmJpdFBvcyAm
PSA3Ow0KKwliaXRDLT5iaXRDb250YWluZXIgPj49IG5iQnl0ZXMgKiA4OyAvKiBpZiBiaXRQb3Mg
Pj0gc2l6ZW9mKGJpdENvbnRhaW5lcikqOCAtLT4gdW5kZWZpbmVkIGJlaGF2aW9yICovDQorfQ0K
Kw0KKy8qISBCSVRfZmx1c2hCaXRzKCkgOg0KKyAqICBzYWZlIHZlcnNpb247IGNoZWNrIGZvciBi
dWZmZXIgb3ZlcmZsb3csIGFuZCBwcmV2ZW50cyBpdC4NCisgKiAgbm90ZSA6IGRvZXMgbm90IHNp
Z25hbCBidWZmZXIgb3ZlcmZsb3cuIFRoaXMgd2lsbCBiZSByZXZlYWxlZCBsYXRlciBvbiB1c2lu
ZyBCSVRfY2xvc2VDU3RyZWFtKCkgKi8NCitaU1REX1NUQVRJQyB2b2lkIEJJVF9mbHVzaEJpdHMo
QklUX0NTdHJlYW1fdCAqYml0QykNCit7DQorCXNpemVfdCBjb25zdCBuYkJ5dGVzID0gYml0Qy0+
Yml0UG9zID4+IDM7DQorCVpTVERfd3JpdGVMRVNUKGJpdEMtPnB0ciwgYml0Qy0+Yml0Q29udGFp
bmVyKTsNCisJYml0Qy0+cHRyICs9IG5iQnl0ZXM7DQorCWlmIChiaXRDLT5wdHIgPiBiaXRDLT5l
bmRQdHIpDQorCQliaXRDLT5wdHIgPSBiaXRDLT5lbmRQdHI7DQorCWJpdEMtPmJpdFBvcyAmPSA3
Ow0KKwliaXRDLT5iaXRDb250YWluZXIgPj49IG5iQnl0ZXMgKiA4OyAvKiBpZiBiaXRQb3MgPj0g
c2l6ZW9mKGJpdENvbnRhaW5lcikqOCAtLT4gdW5kZWZpbmVkIGJlaGF2aW9yICovDQorfQ0KKw0K
Ky8qISBCSVRfY2xvc2VDU3RyZWFtKCkgOg0KKyAqICBAcmV0dXJuIDogc2l6ZSBvZiBDU3RyZWFt
LCBpbiBieXRlcywNCisJCQkgIG9yIDAgaWYgaXQgY291bGQgbm90IGZpdCBpbnRvIGRzdEJ1ZmZl
ciAqLw0KK1pTVERfU1RBVElDIHNpemVfdCBCSVRfY2xvc2VDU3RyZWFtKEJJVF9DU3RyZWFtX3Qg
KmJpdEMpDQorew0KKwlCSVRfYWRkQml0c0Zhc3QoYml0QywgMSwgMSk7IC8qIGVuZE1hcmsgKi8N
CisJQklUX2ZsdXNoQml0cyhiaXRDKTsNCisNCisJaWYgKGJpdEMtPnB0ciA+PSBiaXRDLT5lbmRQ
dHIpDQorCQlyZXR1cm4gMDsgLyogZG9lc24ndCBmaXQgd2l0aGluIGF1dGhvcml6ZWQgYnVkZ2V0
IDogY2FuY2VsICovDQorDQorCXJldHVybiAoYml0Qy0+cHRyIC0gYml0Qy0+c3RhcnRQdHIpICsg
KGJpdEMtPmJpdFBvcyA+IDApOw0KK30NCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogYml0U3RyZWFtIGRlY29kaW5nDQor
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Ki8NCisvKiEgQklUX2luaXREU3RyZWFtKCkgOg0KKyogICBJbml0aWFsaXplIGEgQklUX0RTdHJl
YW1fdC4NCisqICAgYGJpdERgIDogYSBwb2ludGVyIHRvIGFuIGFscmVhZHkgYWxsb2NhdGVkIEJJ
VF9EU3RyZWFtX3Qgc3RydWN0dXJlLg0KKyogICBgc3JjU2l6ZWAgbXVzdCBiZSB0aGUgKmV4YWN0
KiBzaXplIG9mIHRoZSBiaXRTdHJlYW0sIGluIGJ5dGVzLg0KKyogICBAcmV0dXJuIDogc2l6ZSBv
ZiBzdHJlYW0gKD09IHNyY1NpemUpIG9yIGFuIGVycm9yQ29kZSBpZiBhIHByb2JsZW0gaXMgZGV0
ZWN0ZWQNCisqLw0KK1pTVERfU1RBVElDIHNpemVfdCBCSVRfaW5pdERTdHJlYW0oQklUX0RTdHJl
YW1fdCAqYml0RCwgY29uc3Qgdm9pZCAqc3JjQnVmZmVyLCBzaXplX3Qgc3JjU2l6ZSkNCit7DQor
CWlmIChzcmNTaXplIDwgMSkgew0KKwkJbWVtc2V0KGJpdEQsIDAsIHNpemVvZigqYml0RCkpOw0K
KwkJcmV0dXJuIEVSUk9SKHNyY1NpemVfd3JvbmcpOw0KKwl9DQorDQorCWlmIChzcmNTaXplID49
IHNpemVvZihiaXRELT5iaXRDb250YWluZXIpKSB7IC8qIG5vcm1hbCBjYXNlICovDQorCQliaXRE
LT5zdGFydCA9IChjb25zdCBjaGFyICopc3JjQnVmZmVyOw0KKwkJYml0RC0+cHRyID0gKGNvbnN0
IGNoYXIgKilzcmNCdWZmZXIgKyBzcmNTaXplIC0gc2l6ZW9mKGJpdEQtPmJpdENvbnRhaW5lcik7
DQorCQliaXRELT5iaXRDb250YWluZXIgPSBaU1REX3JlYWRMRVNUKGJpdEQtPnB0cik7DQorCQl7
DQorCQkJQllURSBjb25zdCBsYXN0Qnl0ZSA9ICgoY29uc3QgQllURSAqKXNyY0J1ZmZlcilbc3Jj
U2l6ZSAtIDFdOw0KKwkJCWJpdEQtPmJpdHNDb25zdW1lZCA9IGxhc3RCeXRlID8gOCAtIEJJVF9o
aWdoYml0MzIobGFzdEJ5dGUpIDogMDsgLyogZW5zdXJlcyBiaXRzQ29uc3VtZWQgaXMgYWx3YXlz
IHNldCAqLw0KKwkJCWlmIChsYXN0Qnl0ZSA9PSAwKQ0KKwkJCQlyZXR1cm4gRVJST1IoR0VORVJJ
Qyk7IC8qIGVuZE1hcmsgbm90IHByZXNlbnQgKi8NCisJCX0NCisJfSBlbHNlIHsNCisJCWJpdEQt
PnN0YXJ0ID0gKGNvbnN0IGNoYXIgKilzcmNCdWZmZXI7DQorCQliaXRELT5wdHIgPSBiaXRELT5z
dGFydDsNCisJCWJpdEQtPmJpdENvbnRhaW5lciA9ICooY29uc3QgQllURSAqKShiaXRELT5zdGFy
dCk7DQorCQlzd2l0Y2ggKHNyY1NpemUpIHsNCisJCWNhc2UgNzogYml0RC0+Yml0Q29udGFpbmVy
ICs9IChzaXplX3QpKCgoY29uc3QgQllURSAqKShzcmNCdWZmZXIpKVs2XSkgPDwgKHNpemVvZihi
aXRELT5iaXRDb250YWluZXIpICogOCAtIDE2KTsNCisJCQkvKiBmYWxsIHRocm91Z2ggKi8NCisJ
CWNhc2UgNjogYml0RC0+Yml0Q29udGFpbmVyICs9IChzaXplX3QpKCgoY29uc3QgQllURSAqKShz
cmNCdWZmZXIpKVs1XSkgPDwgKHNpemVvZihiaXRELT5iaXRDb250YWluZXIpICogOCAtIDI0KTsN
CisJCQkvKiBmYWxsIHRocm91Z2ggKi8NCisJCWNhc2UgNTogYml0RC0+Yml0Q29udGFpbmVyICs9
IChzaXplX3QpKCgoY29uc3QgQllURSAqKShzcmNCdWZmZXIpKVs0XSkgPDwgKHNpemVvZihiaXRE
LT5iaXRDb250YWluZXIpICogOCAtIDMyKTsNCisJCQkvKiBmYWxsIHRocm91Z2ggKi8NCisJCWNh
c2UgNDogYml0RC0+Yml0Q29udGFpbmVyICs9IChzaXplX3QpKCgoY29uc3QgQllURSAqKShzcmNC
dWZmZXIpKVszXSkgPDwgMjQ7DQorCQkJLyogZmFsbCB0aHJvdWdoICovDQorCQljYXNlIDM6IGJp
dEQtPmJpdENvbnRhaW5lciArPSAoc2l6ZV90KSgoKGNvbnN0IEJZVEUgKikoc3JjQnVmZmVyKSlb
Ml0pIDw8IDE2Ow0KKwkJCS8qIGZhbGwgdGhyb3VnaCAqLw0KKwkJY2FzZSAyOiBiaXRELT5iaXRD
b250YWluZXIgKz0gKHNpemVfdCkoKChjb25zdCBCWVRFICopKHNyY0J1ZmZlcikpWzFdKSA8PCA4
Ow0KKwkJZGVmYXVsdDo7DQorCQl9DQorCQl7DQorCQkJQllURSBjb25zdCBsYXN0Qnl0ZSA9ICgo
Y29uc3QgQllURSAqKXNyY0J1ZmZlcilbc3JjU2l6ZSAtIDFdOw0KKwkJCWJpdEQtPmJpdHNDb25z
dW1lZCA9IGxhc3RCeXRlID8gOCAtIEJJVF9oaWdoYml0MzIobGFzdEJ5dGUpIDogMDsNCisJCQlp
ZiAobGFzdEJ5dGUgPT0gMCkNCisJCQkJcmV0dXJuIEVSUk9SKEdFTkVSSUMpOyAvKiBlbmRNYXJr
IG5vdCBwcmVzZW50ICovDQorCQl9DQorCQliaXRELT5iaXRzQ29uc3VtZWQgKz0gKFUzMikoc2l6
ZW9mKGJpdEQtPmJpdENvbnRhaW5lcikgLSBzcmNTaXplKSAqIDg7DQorCX0NCisNCisJcmV0dXJu
IHNyY1NpemU7DQorfQ0KKw0KK1pTVERfU1RBVElDIHNpemVfdCBCSVRfZ2V0VXBwZXJCaXRzKHNp
emVfdCBiaXRDb250YWluZXIsIFUzMiBjb25zdCBzdGFydCkgeyByZXR1cm4gYml0Q29udGFpbmVy
ID4+IHN0YXJ0OyB9DQorDQorWlNURF9TVEFUSUMgc2l6ZV90IEJJVF9nZXRNaWRkbGVCaXRzKHNp
emVfdCBiaXRDb250YWluZXIsIFUzMiBjb25zdCBzdGFydCwgVTMyIGNvbnN0IG5iQml0cykgeyBy
ZXR1cm4gKGJpdENvbnRhaW5lciA+PiBzdGFydCkgJiBCSVRfbWFza1tuYkJpdHNdOyB9DQorDQor
WlNURF9TVEFUSUMgc2l6ZV90IEJJVF9nZXRMb3dlckJpdHMoc2l6ZV90IGJpdENvbnRhaW5lciwg
VTMyIGNvbnN0IG5iQml0cykgeyByZXR1cm4gYml0Q29udGFpbmVyICYgQklUX21hc2tbbmJCaXRz
XTsgfQ0KKw0KKy8qISBCSVRfbG9va0JpdHMoKSA6DQorICogIFByb3ZpZGVzIG5leHQgbiBiaXRz
IGZyb20gbG9jYWwgcmVnaXN0ZXIuDQorICogIGxvY2FsIHJlZ2lzdGVyIGlzIG5vdCBtb2RpZmll
ZC4NCisgKiAgT24gMzItYml0cywgbWF4TmJCaXRzPT0yNC4NCisgKiAgT24gNjQtYml0cywgbWF4
TmJCaXRzPT01Ni4NCisgKiAgQHJldHVybiA6IHZhbHVlIGV4dHJhY3RlZA0KKyAqLw0KK1pTVERf
U1RBVElDIHNpemVfdCBCSVRfbG9va0JpdHMoY29uc3QgQklUX0RTdHJlYW1fdCAqYml0RCwgVTMy
IG5iQml0cykNCit7DQorCVUzMiBjb25zdCBiaXRNYXNrID0gc2l6ZW9mKGJpdEQtPmJpdENvbnRh
aW5lcikgKiA4IC0gMTsNCisJcmV0dXJuICgoYml0RC0+Yml0Q29udGFpbmVyIDw8IChiaXRELT5i
aXRzQ29uc3VtZWQgJiBiaXRNYXNrKSkgPj4gMSkgPj4gKChiaXRNYXNrIC0gbmJCaXRzKSAmIGJp
dE1hc2spOw0KK30NCisNCisvKiEgQklUX2xvb2tCaXRzRmFzdCgpIDoNCisqICAgdW5zYWZlIHZl
cnNpb247IG9ubHkgd29ya3Mgb25seSBpZiBuYkJpdHMgPj0gMSAqLw0KK1pTVERfU1RBVElDIHNp
emVfdCBCSVRfbG9va0JpdHNGYXN0KGNvbnN0IEJJVF9EU3RyZWFtX3QgKmJpdEQsIFUzMiBuYkJp
dHMpDQorew0KKwlVMzIgY29uc3QgYml0TWFzayA9IHNpemVvZihiaXRELT5iaXRDb250YWluZXIp
ICogOCAtIDE7DQorCXJldHVybiAoYml0RC0+Yml0Q29udGFpbmVyIDw8IChiaXRELT5iaXRzQ29u
c3VtZWQgJiBiaXRNYXNrKSkgPj4gKCgoYml0TWFzayArIDEpIC0gbmJCaXRzKSAmIGJpdE1hc2sp
Ow0KK30NCisNCitaU1REX1NUQVRJQyB2b2lkIEJJVF9za2lwQml0cyhCSVRfRFN0cmVhbV90ICpi
aXRELCBVMzIgbmJCaXRzKSB7IGJpdEQtPmJpdHNDb25zdW1lZCArPSBuYkJpdHM7IH0NCisNCisv
KiEgQklUX3JlYWRCaXRzKCkgOg0KKyAqICBSZWFkIChjb25zdW1lKSBuZXh0IG4gYml0cyBmcm9t
IGxvY2FsIHJlZ2lzdGVyIGFuZCB1cGRhdGUuDQorICogIFBheSBhdHRlbnRpb24gdG8gbm90IHJl
YWQgbW9yZSB0aGFuIG5iQml0cyBjb250YWluZWQgaW50byBsb2NhbCByZWdpc3Rlci4NCisgKiAg
QHJldHVybiA6IGV4dHJhY3RlZCB2YWx1ZS4NCisgKi8NCitaU1REX1NUQVRJQyBzaXplX3QgQklU
X3JlYWRCaXRzKEJJVF9EU3RyZWFtX3QgKmJpdEQsIFUzMiBuYkJpdHMpDQorew0KKwlzaXplX3Qg
Y29uc3QgdmFsdWUgPSBCSVRfbG9va0JpdHMoYml0RCwgbmJCaXRzKTsNCisJQklUX3NraXBCaXRz
KGJpdEQsIG5iQml0cyk7DQorCXJldHVybiB2YWx1ZTsNCit9DQorDQorLyohIEJJVF9yZWFkQml0
c0Zhc3QoKSA6DQorKiAgIHVuc2FmZSB2ZXJzaW9uOyBvbmx5IHdvcmtzIG9ubHkgaWYgbmJCaXRz
ID49IDEgKi8NCitaU1REX1NUQVRJQyBzaXplX3QgQklUX3JlYWRCaXRzRmFzdChCSVRfRFN0cmVh
bV90ICpiaXRELCBVMzIgbmJCaXRzKQ0KK3sNCisJc2l6ZV90IGNvbnN0IHZhbHVlID0gQklUX2xv
b2tCaXRzRmFzdChiaXRELCBuYkJpdHMpOw0KKwlCSVRfc2tpcEJpdHMoYml0RCwgbmJCaXRzKTsN
CisJcmV0dXJuIHZhbHVlOw0KK30NCisNCisvKiEgQklUX3JlbG9hZERTdHJlYW0oKSA6DQorKiAg
IFJlZmlsbCBgYml0RGAgZnJvbSBidWZmZXIgcHJldmlvdXNseSBzZXQgaW4gQklUX2luaXREU3Ry
ZWFtKCkgLg0KKyogICBUaGlzIGZ1bmN0aW9uIGlzIHNhZmUsIGl0IGd1YXJhbnRlZXMgaXQgd2ls
bCBub3QgcmVhZCBiZXlvbmQgc3JjIGJ1ZmZlci4NCisqICAgQHJldHVybiA6IHN0YXR1cyBvZiBg
QklUX0RTdHJlYW1fdGAgaW50ZXJuYWwgcmVnaXN0ZXIuDQorCQkJICBpZiBzdGF0dXMgPT0gQklU
X0RTdHJlYW1fdW5maW5pc2hlZCwgaW50ZXJuYWwgcmVnaXN0ZXIgaXMgZmlsbGVkIHdpdGggPj0g
KHNpemVvZihiaXRELT5iaXRDb250YWluZXIpKjggLSA3KSBiaXRzICovDQorWlNURF9TVEFUSUMg
QklUX0RTdHJlYW1fc3RhdHVzIEJJVF9yZWxvYWREU3RyZWFtKEJJVF9EU3RyZWFtX3QgKmJpdEQp
DQorew0KKwlpZiAoYml0RC0+Yml0c0NvbnN1bWVkID4gKHNpemVvZihiaXRELT5iaXRDb250YWlu
ZXIpICogOCkpIC8qIHNob3VsZCBub3QgaGFwcGVuID0+IGNvcnJ1cHRpb24gZGV0ZWN0ZWQgKi8N
CisJCXJldHVybiBCSVRfRFN0cmVhbV9vdmVyZmxvdzsNCisNCisJaWYgKGJpdEQtPnB0ciA+PSBi
aXRELT5zdGFydCArIHNpemVvZihiaXRELT5iaXRDb250YWluZXIpKSB7DQorCQliaXRELT5wdHIg
LT0gYml0RC0+Yml0c0NvbnN1bWVkID4+IDM7DQorCQliaXRELT5iaXRzQ29uc3VtZWQgJj0gNzsN
CisJCWJpdEQtPmJpdENvbnRhaW5lciA9IFpTVERfcmVhZExFU1QoYml0RC0+cHRyKTsNCisJCXJl
dHVybiBCSVRfRFN0cmVhbV91bmZpbmlzaGVkOw0KKwl9DQorCWlmIChiaXRELT5wdHIgPT0gYml0
RC0+c3RhcnQpIHsNCisJCWlmIChiaXRELT5iaXRzQ29uc3VtZWQgPCBzaXplb2YoYml0RC0+Yml0
Q29udGFpbmVyKSAqIDgpDQorCQkJcmV0dXJuIEJJVF9EU3RyZWFtX2VuZE9mQnVmZmVyOw0KKwkJ
cmV0dXJuIEJJVF9EU3RyZWFtX2NvbXBsZXRlZDsNCisJfQ0KKwl7DQorCQlVMzIgbmJCeXRlcyA9
IGJpdEQtPmJpdHNDb25zdW1lZCA+PiAzOw0KKwkJQklUX0RTdHJlYW1fc3RhdHVzIHJlc3VsdCA9
IEJJVF9EU3RyZWFtX3VuZmluaXNoZWQ7DQorCQlpZiAoYml0RC0+cHRyIC0gbmJCeXRlcyA8IGJp
dEQtPnN0YXJ0KSB7DQorCQkJbmJCeXRlcyA9IChVMzIpKGJpdEQtPnB0ciAtIGJpdEQtPnN0YXJ0
KTsgLyogcHRyID4gc3RhcnQgKi8NCisJCQlyZXN1bHQgPSBCSVRfRFN0cmVhbV9lbmRPZkJ1ZmZl
cjsNCisJCX0NCisJCWJpdEQtPnB0ciAtPSBuYkJ5dGVzOw0KKwkJYml0RC0+Yml0c0NvbnN1bWVk
IC09IG5iQnl0ZXMgKiA4Ow0KKwkJYml0RC0+Yml0Q29udGFpbmVyID0gWlNURF9yZWFkTEVTVChi
aXRELT5wdHIpOyAvKiByZW1pbmRlciA6IHNyY1NpemUgPiBzaXplb2YoYml0RCkgKi8NCisJCXJl
dHVybiByZXN1bHQ7DQorCX0NCit9DQorDQorLyohIEJJVF9lbmRPZkRTdHJlYW0oKSA6DQorKiAg
IEByZXR1cm4gVGVsbHMgaWYgRFN0cmVhbSBoYXMgZXhhY3RseSByZWFjaGVkIGl0cyBlbmQgKGFs
bCBiaXRzIGNvbnN1bWVkKS4NCisqLw0KK1pTVERfU1RBVElDIHVuc2lnbmVkIEJJVF9lbmRPZkRT
dHJlYW0oY29uc3QgQklUX0RTdHJlYW1fdCAqRFN0cmVhbSkNCit7DQorCXJldHVybiAoKERTdHJl
YW0tPnB0ciA9PSBEU3RyZWFtLT5zdGFydCkgJiYgKERTdHJlYW0tPmJpdHNDb25zdW1lZCA9PSBz
aXplb2YoRFN0cmVhbS0+Yml0Q29udGFpbmVyKSAqIDgpKTsNCit9DQorDQorI2VuZGlmIC8qIEJJ
VFNUUkVBTV9IX01PRFVMRSAqLw0KZGlmZiAtLWdpdCBhL3hlbi9jb21tb24venN0ZC9kZWNvbXBy
ZXNzLmMgYi94ZW4vY29tbW9uL3pzdGQvZGVjb21wcmVzcy5jDQpuZXcgZmlsZSBtb2RlIDEwMDY0
NA0KaW5kZXggMDAwMDAwMDAwMC4uZGI2NzYxZWE0ZA0KLS0tIC9kZXYvbnVsbA0KKysrIGIveGVu
L2NvbW1vbi96c3RkL2RlY29tcHJlc3MuYw0KQEAgLTAsMCArMSwyNTMxIEBADQorLyoqDQorICog
Q29weXJpZ2h0IChjKSAyMDE2LXByZXNlbnQsIFlhbm4gQ29sbGV0LCBGYWNlYm9vaywgSW5jLg0K
KyAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuDQorICoNCisgKiBUaGlzIHNvdXJjZSBjb2RlIGlzIGxp
Y2Vuc2VkIHVuZGVyIHRoZSBCU0Qtc3R5bGUgbGljZW5zZSBmb3VuZCBpbiB0aGUNCisgKiBMSUNF
TlNFIGZpbGUgaW4gdGhlIHJvb3QgZGlyZWN0b3J5IG9mIGh0dHBzOi8vZ2l0aHViLmNvbS9mYWNl
Ym9vay96c3RkLg0KKyAqIEFuIGFkZGl0aW9uYWwgZ3JhbnQgb2YgcGF0ZW50IHJpZ2h0cyBjYW4g
YmUgZm91bmQgaW4gdGhlIFBBVEVOVFMgZmlsZSBpbiB0aGUNCisgKiBzYW1lIGRpcmVjdG9yeS4N
CisgKg0KKyAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJp
YnV0ZSBpdCBhbmQvb3IgbW9kaWZ5IGl0IHVuZGVyDQorICogdGhlIHRlcm1zIG9mIHRoZSBHTlUg
R2VuZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJzaW9uIDIgYXMgcHVibGlzaGVkIGJ5IHRoZQ0KKyAq
IEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4gVGhpcyBwcm9ncmFtIGlzIGR1YWwtbGljZW5zZWQ7
IHlvdSBtYXkgc2VsZWN0DQorICogZWl0aGVyIHZlcnNpb24gMiBvZiB0aGUgR05VIEdlbmVyYWwg
UHVibGljIExpY2Vuc2UgKCJHUEwiKSBvciBCU0QgbGljZW5zZQ0KKyAqICgiQlNEIikuDQorICov
DQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqDQorKiAgVHVuaW5nIHBhcmFtZXRlcnMNCisqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKiEN
CisqICBNQVhXSU5ET1dTSVpFX0RFRkFVTFQgOg0KKyogIG1heGltdW0gd2luZG93IHNpemUgYWNj
ZXB0ZWQgYnkgRFN0cmVhbSwgYnkgZGVmYXVsdC4NCisqICBGcmFtZXMgcmVxdWlyaW5nIG1vcmUg
bWVtb3J5IHdpbGwgYmUgcmVqZWN0ZWQuDQorKi8NCisjaWZuZGVmIFpTVERfTUFYV0lORE9XU0la
RV9ERUZBVUxUDQorI2RlZmluZSBaU1REX01BWFdJTkRPV1NJWkVfREVGQVVMVCAoKDEgPDwgWlNU
RF9XSU5ET1dMT0dfTUFYKSArIDEpIC8qIGRlZmluZWQgd2l0aGluIHpzdGQuaCAqLw0KKyNlbmRp
Zg0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioNCisqICBEZXBlbmRlbmNpZXMNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorI2luY2x1ZGUgImZzZS5oIg0KKyNpbmNs
dWRlICJodWYuaCINCisjaW5jbHVkZSAibWVtLmgiIC8qIGxvdyBsZXZlbCBtZW1vcnkgcm91dGlu
ZXMgKi8NCisjaW5jbHVkZSAienN0ZF9pbnRlcm5hbC5oIg0KKyNpbmNsdWRlIDxsaW51eC9rZXJu
ZWwuaD4NCisjaW5jbHVkZSA8bGludXgvbW9kdWxlLmg+DQorI2luY2x1ZGUgPGxpbnV4L3N0cmlu
Zy5oPiAvKiBtZW1jcHksIG1lbW1vdmUsIG1lbXNldCAqLw0KKw0KKyNkZWZpbmUgWlNURF9QUkVG
RVRDSChwdHIpIF9fYnVpbHRpbl9wcmVmZXRjaChwdHIsIDAsIDApDQorDQorLyotKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIE1hY3Jvcw0KKyoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVmaW5lIFpTVERfaXNFcnJvciBFUlJfaXNF
cnJvciAvKiBmb3IgaW5saW5pbmcgKi8NCisjZGVmaW5lIEZTRV9pc0Vycm9yIEVSUl9pc0Vycm9y
DQorI2RlZmluZSBIVUZfaXNFcnJvciBFUlJfaXNFcnJvcg0KKw0KKy8qXyoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisqICBNZW1vcnkgb3Bl
cmF0aW9ucw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKiovDQorc3RhdGljIHZvaWQgWlNURF9jb3B5NCh2b2lkICpkc3QsIGNvbnN0IHZv
aWQgKnNyYykgeyBtZW1jcHkoZHN0LCBzcmMsIDQpOyB9DQorDQorLyotKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogICBDb250
ZXh0IG1hbmFnZW1lbnQNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKiovDQordHlwZWRlZiBlbnVtIHsNCisJWlNURGRzX2dldEZy
YW1lSGVhZGVyU2l6ZSwNCisJWlNURGRzX2RlY29kZUZyYW1lSGVhZGVyLA0KKwlaU1REZHNfZGVj
b2RlQmxvY2tIZWFkZXIsDQorCVpTVERkc19kZWNvbXByZXNzQmxvY2ssDQorCVpTVERkc19kZWNv
bXByZXNzTGFzdEJsb2NrLA0KKwlaU1REZHNfY2hlY2tDaGVja3N1bSwNCisJWlNURGRzX2RlY29k
ZVNraXBwYWJsZUhlYWRlciwNCisJWlNURGRzX3NraXBGcmFtZQ0KK30gWlNURF9kU3RhZ2U7DQor
DQordHlwZWRlZiBzdHJ1Y3Qgew0KKwlGU0VfRFRhYmxlIExMVGFibGVbRlNFX0RUQUJMRV9TSVpF
X1UzMihMTEZTRUxvZyldOw0KKwlGU0VfRFRhYmxlIE9GVGFibGVbRlNFX0RUQUJMRV9TSVpFX1Uz
MihPZmZGU0VMb2cpXTsNCisJRlNFX0RUYWJsZSBNTFRhYmxlW0ZTRV9EVEFCTEVfU0laRV9VMzIo
TUxGU0VMb2cpXTsNCisJSFVGX0RUYWJsZSBodWZUYWJsZVtIVUZfRFRBQkxFX1NJWkUoSHVmTG9n
KV07IC8qIGNhbiBhY2NvbW1vZGF0ZSBIVUZfZGVjb21wcmVzczRYICovDQorCVU2NCB3b3Jrc3Bh
Y2VbSFVGX0RFQ09NUFJFU1NfV09SS1NQQUNFX1NJWkVfVTMyIC8gMl07DQorCVUzMiByZXBbWlNU
RF9SRVBfTlVNXTsNCit9IFpTVERfZW50cm9weVRhYmxlc190Ow0KKw0KK3N0cnVjdCBaU1REX0RD
dHhfcyB7DQorCWNvbnN0IEZTRV9EVGFibGUgKkxMVHB0cjsNCisJY29uc3QgRlNFX0RUYWJsZSAq
TUxUcHRyOw0KKwljb25zdCBGU0VfRFRhYmxlICpPRlRwdHI7DQorCWNvbnN0IEhVRl9EVGFibGUg
KkhVRnB0cjsNCisJWlNURF9lbnRyb3B5VGFibGVzX3QgZW50cm9weTsNCisJY29uc3Qgdm9pZCAq
cHJldmlvdXNEc3RFbmQ7IC8qIGRldGVjdCBjb250aW51aXR5ICovDQorCWNvbnN0IHZvaWQgKmJh
c2U7CSAgIC8qIHN0YXJ0IG9mIGN1cnIgc2VnbWVudCAqLw0KKwljb25zdCB2b2lkICp2QmFzZTsJ
ICAvKiB2aXJ0dWFsIHN0YXJ0IG9mIHByZXZpb3VzIHNlZ21lbnQgaWYgaXQgd2FzIGp1c3QgYmVm
b3JlIGN1cnIgb25lICovDQorCWNvbnN0IHZvaWQgKmRpY3RFbmQ7CS8qIGVuZCBvZiBwcmV2aW91
cyBzZWdtZW50ICovDQorCXNpemVfdCBleHBlY3RlZDsNCisJWlNURF9mcmFtZVBhcmFtcyBmUGFy
YW1zOw0KKwlibG9ja1R5cGVfZSBiVHlwZTsgLyogdXNlZCBpbiBaU1REX2RlY29tcHJlc3NDb250
aW51ZSgpLCB0byB0cmFuc2ZlciBibG9ja1R5cGUgYmV0d2VlbiBoZWFkZXIgZGVjb2RpbmcgYW5k
IGJsb2NrIGRlY29kaW5nIHN0YWdlcyAqLw0KKwlaU1REX2RTdGFnZSBzdGFnZTsNCisJVTMyIGxp
dEVudHJvcHk7DQorCVUzMiBmc2VFbnRyb3B5Ow0KKwlzdHJ1Y3QgeHhoNjRfc3RhdGUgeHhoU3Rh
dGU7DQorCXNpemVfdCBoZWFkZXJTaXplOw0KKwlVMzIgZGljdElEOw0KKwljb25zdCBCWVRFICps
aXRQdHI7DQorCVpTVERfY3VzdG9tTWVtIGN1c3RvbU1lbTsNCisJc2l6ZV90IGxpdFNpemU7DQor
CXNpemVfdCBybGVTaXplOw0KKwlCWVRFIGxpdEJ1ZmZlcltaU1REX0JMT0NLU0laRV9BQlNPTFVU
RU1BWCArIFdJTERDT1BZX09WRVJMRU5HVEhdOw0KKwlCWVRFIGhlYWRlckJ1ZmZlcltaU1REX0ZS
QU1FSEVBREVSU0laRV9NQVhdOw0KK307IC8qIHR5cGVkZWYnZCB0byBaU1REX0RDdHggd2l0aGlu
ICJ6c3RkLmgiICovDQorDQorc2l6ZV90IFpTVERfREN0eFdvcmtzcGFjZUJvdW5kKHZvaWQpIHsg
cmV0dXJuIFpTVERfQUxJR04oc2l6ZW9mKFpTVERfc3RhY2spKSArIFpTVERfQUxJR04oc2l6ZW9m
KFpTVERfREN0eCkpOyB9DQorDQorc2l6ZV90IFpTVERfZGVjb21wcmVzc0JlZ2luKFpTVERfREN0
eCAqZGN0eCkNCit7DQorCWRjdHgtPmV4cGVjdGVkID0gWlNURF9mcmFtZUhlYWRlclNpemVfcHJl
Zml4Ow0KKwlkY3R4LT5zdGFnZSA9IFpTVERkc19nZXRGcmFtZUhlYWRlclNpemU7DQorCWRjdHgt
PnByZXZpb3VzRHN0RW5kID0gTlVMTDsNCisJZGN0eC0+YmFzZSA9IE5VTEw7DQorCWRjdHgtPnZC
YXNlID0gTlVMTDsNCisJZGN0eC0+ZGljdEVuZCA9IE5VTEw7DQorCWRjdHgtPmVudHJvcHkuaHVm
VGFibGVbMF0gPSAoSFVGX0RUYWJsZSkoKEh1ZkxvZykqMHgxMDAwMDAxKTsgLyogY292ZXIgYm90
aCBsaXR0bGUgYW5kIGJpZyBlbmRpYW4gKi8NCisJZGN0eC0+bGl0RW50cm9weSA9IGRjdHgtPmZz
ZUVudHJvcHkgPSAwOw0KKwlkY3R4LT5kaWN0SUQgPSAwOw0KKwlaU1REX1NUQVRJQ19BU1NFUlQo
c2l6ZW9mKGRjdHgtPmVudHJvcHkucmVwKSA9PSBzaXplb2YocmVwU3RhcnRWYWx1ZSkpOw0KKwlt
ZW1jcHkoZGN0eC0+ZW50cm9weS5yZXAsIHJlcFN0YXJ0VmFsdWUsIHNpemVvZihyZXBTdGFydFZh
bHVlKSk7IC8qIGluaXRpYWwgcmVwY29kZXMgKi8NCisJZGN0eC0+TExUcHRyID0gZGN0eC0+ZW50
cm9weS5MTFRhYmxlOw0KKwlkY3R4LT5NTFRwdHIgPSBkY3R4LT5lbnRyb3B5Lk1MVGFibGU7DQor
CWRjdHgtPk9GVHB0ciA9IGRjdHgtPmVudHJvcHkuT0ZUYWJsZTsNCisJZGN0eC0+SFVGcHRyID0g
ZGN0eC0+ZW50cm9weS5odWZUYWJsZTsNCisJcmV0dXJuIDA7DQorfQ0KKw0KK1pTVERfREN0eCAq
WlNURF9jcmVhdGVEQ3R4X2FkdmFuY2VkKFpTVERfY3VzdG9tTWVtIGN1c3RvbU1lbSkNCit7DQor
CVpTVERfREN0eCAqZGN0eDsNCisNCisJaWYgKCFjdXN0b21NZW0uY3VzdG9tQWxsb2MgfHwgIWN1
c3RvbU1lbS5jdXN0b21GcmVlKQ0KKwkJcmV0dXJuIE5VTEw7DQorDQorCWRjdHggPSAoWlNURF9E
Q3R4ICopWlNURF9tYWxsb2Moc2l6ZW9mKFpTVERfREN0eCksIGN1c3RvbU1lbSk7DQorCWlmICgh
ZGN0eCkNCisJCXJldHVybiBOVUxMOw0KKwltZW1jcHkoJmRjdHgtPmN1c3RvbU1lbSwgJmN1c3Rv
bU1lbSwgc2l6ZW9mKGN1c3RvbU1lbSkpOw0KKwlaU1REX2RlY29tcHJlc3NCZWdpbihkY3R4KTsN
CisJcmV0dXJuIGRjdHg7DQorfQ0KKw0KK1pTVERfREN0eCAqWlNURF9pbml0REN0eCh2b2lkICp3
b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3sNCisJWlNURF9jdXN0b21NZW0gY29u
c3Qgc3RhY2tNZW0gPSBaU1REX2luaXRTdGFjayh3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0K
KwlyZXR1cm4gWlNURF9jcmVhdGVEQ3R4X2FkdmFuY2VkKHN0YWNrTWVtKTsNCit9DQorDQorc2l6
ZV90IFpTVERfZnJlZURDdHgoWlNURF9EQ3R4ICpkY3R4KQ0KK3sNCisJaWYgKGRjdHggPT0gTlVM
TCkNCisJCXJldHVybiAwOyAvKiBzdXBwb3J0IGZyZWUgb24gTlVMTCAqLw0KKwlaU1REX2ZyZWUo
ZGN0eCwgZGN0eC0+Y3VzdG9tTWVtKTsNCisJcmV0dXJuIDA7IC8qIHJlc2VydmVkIGFzIGEgcG90
ZW50aWFsIGVycm9yIGNvZGUgaW4gdGhlIGZ1dHVyZSAqLw0KK30NCisNCit2b2lkIFpTVERfY29w
eURDdHgoWlNURF9EQ3R4ICpkc3REQ3R4LCBjb25zdCBaU1REX0RDdHggKnNyY0RDdHgpDQorew0K
KwlzaXplX3QgY29uc3Qgd29ya1NwYWNlU2l6ZSA9IChaU1REX0JMT0NLU0laRV9BQlNPTFVURU1B
WCArIFdJTERDT1BZX09WRVJMRU5HVEgpICsgWlNURF9mcmFtZUhlYWRlclNpemVfbWF4Ow0KKwlt
ZW1jcHkoZHN0REN0eCwgc3JjREN0eCwgc2l6ZW9mKFpTVERfREN0eCkgLSB3b3JrU3BhY2VTaXpl
KTsgLyogbm8gbmVlZCB0byBjb3B5IHdvcmtzcGFjZSAqLw0KK30NCisNCitzdGF0aWMgdm9pZCBa
U1REX3JlZkREaWN0KFpTVERfREN0eCAqZHN0REN0eCwgY29uc3QgWlNURF9ERGljdCAqZGRpY3Qp
Ow0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioNCisqICAgRGVjb21wcmVzc2lvbiBzZWN0aW9uDQorKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKw0K
Ky8qISBaU1REX2lzRnJhbWUoKSA6DQorICogIFRlbGxzIGlmIHRoZSBjb250ZW50IG9mIGBidWZm
ZXJgIHN0YXJ0cyB3aXRoIGEgdmFsaWQgRnJhbWUgSWRlbnRpZmllci4NCisgKiAgTm90ZSA6IEZy
YW1lIElkZW50aWZpZXIgaXMgNCBieXRlcy4gSWYgYHNpemUgPCA0YCwgQHJldHVybiB3aWxsIGFs
d2F5cyBiZSAwLg0KKyAqICBOb3RlIDIgOiBMZWdhY3kgRnJhbWUgSWRlbnRpZmllcnMgYXJlIGNv
bnNpZGVyZWQgdmFsaWQgb25seSBpZiBMZWdhY3kgU3VwcG9ydCBpcyBlbmFibGVkLg0KKyAqICBO
b3RlIDMgOiBTa2lwcGFibGUgRnJhbWUgSWRlbnRpZmllcnMgYXJlIGNvbnNpZGVyZWQgdmFsaWQu
ICovDQordW5zaWduZWQgWlNURF9pc0ZyYW1lKGNvbnN0IHZvaWQgKmJ1ZmZlciwgc2l6ZV90IHNp
emUpDQorew0KKwlpZiAoc2l6ZSA8IDQpDQorCQlyZXR1cm4gMDsNCisJew0KKwkJVTMyIGNvbnN0
IG1hZ2ljID0gWlNURF9yZWFkTEUzMihidWZmZXIpOw0KKwkJaWYgKG1hZ2ljID09IFpTVERfTUFH
SUNOVU1CRVIpDQorCQkJcmV0dXJuIDE7DQorCQlpZiAoKG1hZ2ljICYgMHhGRkZGRkZGMFUpID09
IFpTVERfTUFHSUNfU0tJUFBBQkxFX1NUQVJUKQ0KKwkJCXJldHVybiAxOw0KKwl9DQorCXJldHVy
biAwOw0KK30NCisNCisvKiogWlNURF9mcmFtZUhlYWRlclNpemUoKSA6DQorKiAgIHNyY1NpemUg
bXVzdCBiZSA+PSBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9wcmVmaXguDQorKiAgIEByZXR1cm4gOiBz
aXplIG9mIHRoZSBGcmFtZSBIZWFkZXIgKi8NCitzdGF0aWMgc2l6ZV90IFpTVERfZnJhbWVIZWFk
ZXJTaXplKGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQorew0KKwlpZiAoc3JjU2l6
ZSA8IFpTVERfZnJhbWVIZWFkZXJTaXplX3ByZWZpeCkNCisJCXJldHVybiBFUlJPUihzcmNTaXpl
X3dyb25nKTsNCisJew0KKwkJQllURSBjb25zdCBmaGQgPSAoKGNvbnN0IEJZVEUgKilzcmMpWzRd
Ow0KKwkJVTMyIGNvbnN0IGRpY3RJRCA9IGZoZCAmIDM7DQorCQlVMzIgY29uc3Qgc2luZ2xlU2Vn
bWVudCA9IChmaGQgPj4gNSkgJiAxOw0KKwkJVTMyIGNvbnN0IGZjc0lkID0gZmhkID4+IDY7DQor
CQlyZXR1cm4gWlNURF9mcmFtZUhlYWRlclNpemVfcHJlZml4ICsgIXNpbmdsZVNlZ21lbnQgKyBa
U1REX2RpZF9maWVsZFNpemVbZGljdElEXSArIFpTVERfZmNzX2ZpZWxkU2l6ZVtmY3NJZF0gKyAo
c2luZ2xlU2VnbWVudCAmJiAhZmNzSWQpOw0KKwl9DQorfQ0KKw0KKy8qKiBaU1REX2dldEZyYW1l
UGFyYW1zKCkgOg0KKyogICBkZWNvZGUgRnJhbWUgSGVhZGVyLCBvciByZXF1aXJlIGxhcmdlciBg
c3JjU2l6ZWAuDQorKiAgIEByZXR1cm4gOiAwLCBgZnBhcmFtc1B0cmAgaXMgY29ycmVjdGx5IGZp
bGxlZCwNCisqICAgICAgICAgICAgPjAsIGBzcmNTaXplYCBpcyB0b28gc21hbGwsIHJlc3VsdCBp
cyBleHBlY3RlZCBgc3JjU2l6ZWAsDQorKiAgICAgICAgICAgICBvciBhbiBlcnJvciBjb2RlLCB3
aGljaCBjYW4gYmUgdGVzdGVkIHVzaW5nIFpTVERfaXNFcnJvcigpICovDQorc2l6ZV90IFpTVERf
Z2V0RnJhbWVQYXJhbXMoWlNURF9mcmFtZVBhcmFtcyAqZnBhcmFtc1B0ciwgY29uc3Qgdm9pZCAq
c3JjLCBzaXplX3Qgc3JjU2l6ZSkNCit7DQorCWNvbnN0IEJZVEUgKmlwID0gKGNvbnN0IEJZVEUg
KilzcmM7DQorDQorCWlmIChzcmNTaXplIDwgWlNURF9mcmFtZUhlYWRlclNpemVfcHJlZml4KQ0K
KwkJcmV0dXJuIFpTVERfZnJhbWVIZWFkZXJTaXplX3ByZWZpeDsNCisJaWYgKFpTVERfcmVhZExF
MzIoc3JjKSAhPSBaU1REX01BR0lDTlVNQkVSKSB7DQorCQlpZiAoKFpTVERfcmVhZExFMzIoc3Jj
KSAmIDB4RkZGRkZGRjBVKSA9PSBaU1REX01BR0lDX1NLSVBQQUJMRV9TVEFSVCkgew0KKwkJCWlm
IChzcmNTaXplIDwgWlNURF9za2lwcGFibGVIZWFkZXJTaXplKQ0KKwkJCQlyZXR1cm4gWlNURF9z
a2lwcGFibGVIZWFkZXJTaXplOyAvKiBtYWdpYyBudW1iZXIgKyBza2lwcGFibGUgZnJhbWUgbGVu
Z3RoICovDQorCQkJbWVtc2V0KGZwYXJhbXNQdHIsIDAsIHNpemVvZigqZnBhcmFtc1B0cikpOw0K
KwkJCWZwYXJhbXNQdHItPmZyYW1lQ29udGVudFNpemUgPSBaU1REX3JlYWRMRTMyKChjb25zdCBj
aGFyICopc3JjICsgNCk7DQorCQkJZnBhcmFtc1B0ci0+d2luZG93U2l6ZSA9IDA7IC8qIHdpbmRv
d1NpemU9PTAgbWVhbnMgYSBmcmFtZSBpcyBza2lwcGFibGUgKi8NCisJCQlyZXR1cm4gMDsNCisJ
CX0NCisJCXJldHVybiBFUlJPUihwcmVmaXhfdW5rbm93bik7DQorCX0NCisNCisJLyogZW5zdXJl
IHRoZXJlIGlzIGVub3VnaCBgc3JjU2l6ZWAgdG8gZnVsbHkgcmVhZC9kZWNvZGUgZnJhbWUgaGVh
ZGVyICovDQorCXsNCisJCXNpemVfdCBjb25zdCBmaHNpemUgPSBaU1REX2ZyYW1lSGVhZGVyU2l6
ZShzcmMsIHNyY1NpemUpOw0KKwkJaWYgKHNyY1NpemUgPCBmaHNpemUpDQorCQkJcmV0dXJuIGZo
c2l6ZTsNCisJfQ0KKw0KKwl7DQorCQlCWVRFIGNvbnN0IGZoZEJ5dGUgPSBpcFs0XTsNCisJCXNp
emVfdCBwb3MgPSA1Ow0KKwkJVTMyIGNvbnN0IGRpY3RJRFNpemVDb2RlID0gZmhkQnl0ZSAmIDM7
DQorCQlVMzIgY29uc3QgY2hlY2tzdW1GbGFnID0gKGZoZEJ5dGUgPj4gMikgJiAxOw0KKwkJVTMy
IGNvbnN0IHNpbmdsZVNlZ21lbnQgPSAoZmhkQnl0ZSA+PiA1KSAmIDE7DQorCQlVMzIgY29uc3Qg
ZmNzSUQgPSBmaGRCeXRlID4+IDY7DQorCQlVMzIgY29uc3Qgd2luZG93U2l6ZU1heCA9IDFVIDw8
IFpTVERfV0lORE9XTE9HX01BWDsNCisJCVUzMiB3aW5kb3dTaXplID0gMDsNCisJCVUzMiBkaWN0
SUQgPSAwOw0KKwkJVTY0IGZyYW1lQ29udGVudFNpemUgPSAwOw0KKwkJaWYgKChmaGRCeXRlICYg
MHgwOCkgIT0gMCkNCisJCQlyZXR1cm4gRVJST1IoZnJhbWVQYXJhbWV0ZXJfdW5zdXBwb3J0ZWQp
OyAvKiByZXNlcnZlZCBiaXRzLCB3aGljaCBtdXN0IGJlIHplcm8gKi8NCisJCWlmICghc2luZ2xl
U2VnbWVudCkgew0KKwkJCUJZVEUgY29uc3Qgd2xCeXRlID0gaXBbcG9zKytdOw0KKwkJCVUzMiBj
b25zdCB3aW5kb3dMb2cgPSAod2xCeXRlID4+IDMpICsgWlNURF9XSU5ET1dMT0dfQUJTT0xVVEVN
SU47DQorCQkJaWYgKHdpbmRvd0xvZyA+IFpTVERfV0lORE9XTE9HX01BWCkNCisJCQkJcmV0dXJu
IEVSUk9SKGZyYW1lUGFyYW1ldGVyX3dpbmRvd1Rvb0xhcmdlKTsgLyogYXZvaWRzIGlzc3VlIHdp
dGggMSA8PCB3aW5kb3dMb2cgKi8NCisJCQl3aW5kb3dTaXplID0gKDFVIDw8IHdpbmRvd0xvZyk7
DQorCQkJd2luZG93U2l6ZSArPSAod2luZG93U2l6ZSA+PiAzKSAqICh3bEJ5dGUgJiA3KTsNCisJ
CX0NCisNCisJCXN3aXRjaCAoZGljdElEU2l6ZUNvZGUpIHsNCisJCWRlZmF1bHQ6IC8qIGltcG9z
c2libGUgKi8NCisJCWNhc2UgMDogYnJlYWs7DQorCQljYXNlIDE6DQorCQkJZGljdElEID0gaXBb
cG9zXTsNCisJCQlwb3MrKzsNCisJCQlicmVhazsNCisJCWNhc2UgMjoNCisJCQlkaWN0SUQgPSBa
U1REX3JlYWRMRTE2KGlwICsgcG9zKTsNCisJCQlwb3MgKz0gMjsNCisJCQlicmVhazsNCisJCWNh
c2UgMzoNCisJCQlkaWN0SUQgPSBaU1REX3JlYWRMRTMyKGlwICsgcG9zKTsNCisJCQlwb3MgKz0g
NDsNCisJCQlicmVhazsNCisJCX0NCisJCXN3aXRjaCAoZmNzSUQpIHsNCisJCWRlZmF1bHQ6IC8q
IGltcG9zc2libGUgKi8NCisJCWNhc2UgMDoNCisJCQlpZiAoc2luZ2xlU2VnbWVudCkNCisJCQkJ
ZnJhbWVDb250ZW50U2l6ZSA9IGlwW3Bvc107DQorCQkJYnJlYWs7DQorCQljYXNlIDE6IGZyYW1l
Q29udGVudFNpemUgPSBaU1REX3JlYWRMRTE2KGlwICsgcG9zKSArIDI1NjsgYnJlYWs7DQorCQlj
YXNlIDI6IGZyYW1lQ29udGVudFNpemUgPSBaU1REX3JlYWRMRTMyKGlwICsgcG9zKTsgYnJlYWs7
DQorCQljYXNlIDM6IGZyYW1lQ29udGVudFNpemUgPSBaU1REX3JlYWRMRTY0KGlwICsgcG9zKTsg
YnJlYWs7DQorCQl9DQorCQlpZiAoIXdpbmRvd1NpemUpDQorCQkJd2luZG93U2l6ZSA9IChVMzIp
ZnJhbWVDb250ZW50U2l6ZTsNCisJCWlmICh3aW5kb3dTaXplID4gd2luZG93U2l6ZU1heCkNCisJ
CQlyZXR1cm4gRVJST1IoZnJhbWVQYXJhbWV0ZXJfd2luZG93VG9vTGFyZ2UpOw0KKwkJZnBhcmFt
c1B0ci0+ZnJhbWVDb250ZW50U2l6ZSA9IGZyYW1lQ29udGVudFNpemU7DQorCQlmcGFyYW1zUHRy
LT53aW5kb3dTaXplID0gd2luZG93U2l6ZTsNCisJCWZwYXJhbXNQdHItPmRpY3RJRCA9IGRpY3RJ
RDsNCisJCWZwYXJhbXNQdHItPmNoZWNrc3VtRmxhZyA9IGNoZWNrc3VtRmxhZzsNCisJfQ0KKwly
ZXR1cm4gMDsNCit9DQorDQorLyoqIFpTVERfZ2V0RnJhbWVDb250ZW50U2l6ZSgpIDoNCisqICAg
Y29tcGF0aWJsZSB3aXRoIGxlZ2FjeSBtb2RlDQorKiAgIEByZXR1cm4gOiBkZWNvbXByZXNzZWQg
c2l6ZSBvZiB0aGUgc2luZ2xlIGZyYW1lIHBvaW50ZWQgdG8gYmUgYHNyY2AgaWYga25vd24sIG90
aGVyd2lzZQ0KKyogICAgICAgICAgICAgLSBaU1REX0NPTlRFTlRTSVpFX1VOS05PV04gaWYgdGhl
IHNpemUgY2Fubm90IGJlIGRldGVybWluZWQNCisqICAgICAgICAgICAgIC0gWlNURF9DT05URU5U
U0laRV9FUlJPUiBpZiBhbiBlcnJvciBvY2N1cnJlZCAoZS5nLiBpbnZhbGlkIG1hZ2ljIG51bWJl
ciwgc3JjU2l6ZSB0b28gc21hbGwpICovDQordW5zaWduZWQgbG9uZyBsb25nIFpTVERfZ2V0RnJh
bWVDb250ZW50U2l6ZShjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KK3sNCisJew0K
KwkJWlNURF9mcmFtZVBhcmFtcyBmUGFyYW1zOw0KKwkJaWYgKFpTVERfZ2V0RnJhbWVQYXJhbXMo
JmZQYXJhbXMsIHNyYywgc3JjU2l6ZSkgIT0gMCkNCisJCQlyZXR1cm4gWlNURF9DT05URU5UU0la
RV9FUlJPUjsNCisJCWlmIChmUGFyYW1zLndpbmRvd1NpemUgPT0gMCkgew0KKwkJCS8qIEVpdGhl
ciBza2lwcGFibGUgb3IgZW1wdHkgZnJhbWUsIHNpemUgPT0gMCBlaXRoZXIgd2F5ICovDQorCQkJ
cmV0dXJuIDA7DQorCQl9IGVsc2UgaWYgKGZQYXJhbXMuZnJhbWVDb250ZW50U2l6ZSAhPSAwKSB7
DQorCQkJcmV0dXJuIGZQYXJhbXMuZnJhbWVDb250ZW50U2l6ZTsNCisJCX0gZWxzZSB7DQorCQkJ
cmV0dXJuIFpTVERfQ09OVEVOVFNJWkVfVU5LTk9XTjsNCisJCX0NCisJfQ0KK30NCisNCisvKiog
WlNURF9maW5kRGVjb21wcmVzc2VkU2l6ZSgpIDoNCisgKiAgY29tcGF0aWJsZSB3aXRoIGxlZ2Fj
eSBtb2RlDQorICogIGBzcmNTaXplYCBtdXN0IGJlIHRoZSBleGFjdCBsZW5ndGggb2Ygc29tZSBu
dW1iZXIgb2YgWlNURCBjb21wcmVzc2VkIGFuZC9vcg0KKyAqICAgICAgc2tpcHBhYmxlIGZyYW1l
cw0KKyAqICBAcmV0dXJuIDogZGVjb21wcmVzc2VkIHNpemUgb2YgdGhlIGZyYW1lcyBjb250YWlu
ZWQgKi8NCit1bnNpZ25lZCBsb25nIGxvbmcgWlNURF9maW5kRGVjb21wcmVzc2VkU2l6ZShjb25z
dCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KK3sNCisJew0KKwkJdW5zaWduZWQgbG9uZyBs
b25nIHRvdGFsRHN0U2l6ZSA9IDA7DQorCQl3aGlsZSAoc3JjU2l6ZSA+PSBaU1REX2ZyYW1lSGVh
ZGVyU2l6ZV9wcmVmaXgpIHsNCisJCQljb25zdCBVMzIgbWFnaWNOdW1iZXIgPSBaU1REX3JlYWRM
RTMyKHNyYyk7DQorDQorCQkJaWYgKChtYWdpY051bWJlciAmIDB4RkZGRkZGRjBVKSA9PSBaU1RE
X01BR0lDX1NLSVBQQUJMRV9TVEFSVCkgew0KKwkJCQlzaXplX3Qgc2tpcHBhYmxlU2l6ZTsNCisJ
CQkJaWYgKHNyY1NpemUgPCBaU1REX3NraXBwYWJsZUhlYWRlclNpemUpDQorCQkJCQlyZXR1cm4g
RVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorCQkJCXNraXBwYWJsZVNpemUgPSBaU1REX3JlYWRMRTMy
KChjb25zdCBCWVRFICopc3JjICsgNCkgKyBaU1REX3NraXBwYWJsZUhlYWRlclNpemU7DQorCQkJ
CWlmIChzcmNTaXplIDwgc2tpcHBhYmxlU2l6ZSkgew0KKwkJCQkJcmV0dXJuIFpTVERfQ09OVEVO
VFNJWkVfRVJST1I7DQorCQkJCX0NCisNCisJCQkJc3JjID0gKGNvbnN0IEJZVEUgKilzcmMgKyBz
a2lwcGFibGVTaXplOw0KKwkJCQlzcmNTaXplIC09IHNraXBwYWJsZVNpemU7DQorCQkJCWNvbnRp
bnVlOw0KKwkJCX0NCisNCisJCQl7DQorCQkJCXVuc2lnbmVkIGxvbmcgbG9uZyBjb25zdCByZXQg
PSBaU1REX2dldEZyYW1lQ29udGVudFNpemUoc3JjLCBzcmNTaXplKTsNCisJCQkJaWYgKHJldCA+
PSBaU1REX0NPTlRFTlRTSVpFX0VSUk9SKQ0KKwkJCQkJcmV0dXJuIHJldDsNCisNCisJCQkJLyog
Y2hlY2sgZm9yIG92ZXJmbG93ICovDQorCQkJCWlmICh0b3RhbERzdFNpemUgKyByZXQgPCB0b3Rh
bERzdFNpemUpDQorCQkJCQlyZXR1cm4gWlNURF9DT05URU5UU0laRV9FUlJPUjsNCisJCQkJdG90
YWxEc3RTaXplICs9IHJldDsNCisJCQl9DQorCQkJew0KKwkJCQlzaXplX3QgY29uc3QgZnJhbWVT
cmNTaXplID0gWlNURF9maW5kRnJhbWVDb21wcmVzc2VkU2l6ZShzcmMsIHNyY1NpemUpOw0KKwkJ
CQlpZiAoWlNURF9pc0Vycm9yKGZyYW1lU3JjU2l6ZSkpIHsNCisJCQkJCXJldHVybiBaU1REX0NP
TlRFTlRTSVpFX0VSUk9SOw0KKwkJCQl9DQorDQorCQkJCXNyYyA9IChjb25zdCBCWVRFICopc3Jj
ICsgZnJhbWVTcmNTaXplOw0KKwkJCQlzcmNTaXplIC09IGZyYW1lU3JjU2l6ZTsNCisJCQl9DQor
CQl9DQorDQorCQlpZiAoc3JjU2l6ZSkgew0KKwkJCXJldHVybiBaU1REX0NPTlRFTlRTSVpFX0VS
Uk9SOw0KKwkJfQ0KKw0KKwkJcmV0dXJuIHRvdGFsRHN0U2l6ZTsNCisJfQ0KK30NCisNCisvKiog
WlNURF9kZWNvZGVGcmFtZUhlYWRlcigpIDoNCisqICAgYGhlYWRlclNpemVgIG11c3QgYmUgdGhl
IHNpemUgcHJvdmlkZWQgYnkgWlNURF9mcmFtZUhlYWRlclNpemUoKS4NCisqICAgQHJldHVybiA6
IDAgaWYgc3VjY2Vzcywgb3IgYW4gZXJyb3IgY29kZSwgd2hpY2ggY2FuIGJlIHRlc3RlZCB1c2lu
ZyBaU1REX2lzRXJyb3IoKSAqLw0KK3N0YXRpYyBzaXplX3QgWlNURF9kZWNvZGVGcmFtZUhlYWRl
cihaU1REX0RDdHggKmRjdHgsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IGhlYWRlclNpemUpDQor
ew0KKwlzaXplX3QgY29uc3QgcmVzdWx0ID0gWlNURF9nZXRGcmFtZVBhcmFtcygmKGRjdHgtPmZQ
YXJhbXMpLCBzcmMsIGhlYWRlclNpemUpOw0KKwlpZiAoWlNURF9pc0Vycm9yKHJlc3VsdCkpDQor
CQlyZXR1cm4gcmVzdWx0OyAvKiBpbnZhbGlkIGhlYWRlciAqLw0KKwlpZiAocmVzdWx0ID4gMCkN
CisJCXJldHVybiBFUlJPUihzcmNTaXplX3dyb25nKTsgLyogaGVhZGVyU2l6ZSB0b28gc21hbGwg
Ki8NCisJaWYgKGRjdHgtPmZQYXJhbXMuZGljdElEICYmIChkY3R4LT5kaWN0SUQgIT0gZGN0eC0+
ZlBhcmFtcy5kaWN0SUQpKQ0KKwkJcmV0dXJuIEVSUk9SKGRpY3Rpb25hcnlfd3JvbmcpOw0KKwlp
ZiAoZGN0eC0+ZlBhcmFtcy5jaGVja3N1bUZsYWcpDQorCQl4eGg2NF9yZXNldCgmZGN0eC0+eHho
U3RhdGUsIDApOw0KKwlyZXR1cm4gMDsNCit9DQorDQordHlwZWRlZiBzdHJ1Y3Qgew0KKwlibG9j
a1R5cGVfZSBibG9ja1R5cGU7DQorCVUzMiBsYXN0QmxvY2s7DQorCVUzMiBvcmlnU2l6ZTsNCit9
IGJsb2NrUHJvcGVydGllc190Ow0KKw0KKy8qISBaU1REX2dldGNCbG9ja1NpemUoKSA6DQorKiAg
IFByb3ZpZGVzIHRoZSBzaXplIG9mIGNvbXByZXNzZWQgYmxvY2sgZnJvbSBibG9jayBoZWFkZXIg
YHNyY2AgKi8NCitzaXplX3QgWlNURF9nZXRjQmxvY2tTaXplKGNvbnN0IHZvaWQgKnNyYywgc2l6
ZV90IHNyY1NpemUsIGJsb2NrUHJvcGVydGllc190ICpicFB0cikNCit7DQorCWlmIChzcmNTaXpl
IDwgWlNURF9ibG9ja0hlYWRlclNpemUpDQorCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7
DQorCXsNCisJCVUzMiBjb25zdCBjQmxvY2tIZWFkZXIgPSBaU1REX3JlYWRMRTI0KHNyYyk7DQor
CQlVMzIgY29uc3QgY1NpemUgPSBjQmxvY2tIZWFkZXIgPj4gMzsNCisJCWJwUHRyLT5sYXN0Qmxv
Y2sgPSBjQmxvY2tIZWFkZXIgJiAxOw0KKwkJYnBQdHItPmJsb2NrVHlwZSA9IChibG9ja1R5cGVf
ZSkoKGNCbG9ja0hlYWRlciA+PiAxKSAmIDMpOw0KKwkJYnBQdHItPm9yaWdTaXplID0gY1NpemU7
IC8qIG9ubHkgdXNlZnVsIGZvciBSTEUgKi8NCisJCWlmIChicFB0ci0+YmxvY2tUeXBlID09IGJ0
X3JsZSkNCisJCQlyZXR1cm4gMTsNCisJCWlmIChicFB0ci0+YmxvY2tUeXBlID09IGJ0X3Jlc2Vy
dmVkKQ0KKwkJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCXJldHVybiBj
U2l6ZTsNCisJfQ0KK30NCisNCitzdGF0aWMgc2l6ZV90IFpTVERfY29weVJhd0Jsb2NrKHZvaWQg
KmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXpl
KQ0KK3sNCisJaWYgKHNyY1NpemUgPiBkc3RDYXBhY2l0eSkNCisJCXJldHVybiBFUlJPUihkc3RT
aXplX3Rvb1NtYWxsKTsNCisJbWVtY3B5KGRzdCwgc3JjLCBzcmNTaXplKTsNCisJcmV0dXJuIHNy
Y1NpemU7DQorfQ0KKw0KK3N0YXRpYyBzaXplX3QgWlNURF9zZXRSbGVCbG9jayh2b2lkICpkc3Qs
IHNpemVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgc2l6
ZV90IHJlZ2VuU2l6ZSkNCit7DQorCWlmIChzcmNTaXplICE9IDEpDQorCQlyZXR1cm4gRVJST1Io
c3JjU2l6ZV93cm9uZyk7DQorCWlmIChyZWdlblNpemUgPiBkc3RDYXBhY2l0eSkNCisJCXJldHVy
biBFUlJPUihkc3RTaXplX3Rvb1NtYWxsKTsNCisJbWVtc2V0KGRzdCwgKihjb25zdCBCWVRFICop
c3JjLCByZWdlblNpemUpOw0KKwlyZXR1cm4gcmVnZW5TaXplOw0KK30NCisNCisvKiEgWlNURF9k
ZWNvZGVMaXRlcmFsc0Jsb2NrKCkgOg0KKwlAcmV0dXJuIDogbmIgb2YgYnl0ZXMgcmVhZCBmcm9t
IHNyYyAoPCBzcmNTaXplICkgKi8NCitzaXplX3QgWlNURF9kZWNvZGVMaXRlcmFsc0Jsb2NrKFpT
VERfREN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkgLyogbm90ZSA6
IHNyY1NpemUgPCBCTE9DS1NJWkUgKi8NCit7DQorCWlmIChzcmNTaXplIDwgTUlOX0NCTE9DS19T
SVpFKQ0KKwkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKw0KKwl7DQorCQlj
b25zdCBCWVRFICpjb25zdCBpc3RhcnQgPSAoY29uc3QgQllURSAqKXNyYzsNCisJCXN5bWJvbEVu
Y29kaW5nVHlwZV9lIGNvbnN0IGxpdEVuY1R5cGUgPSAoc3ltYm9sRW5jb2RpbmdUeXBlX2UpKGlz
dGFydFswXSAmIDMpOw0KKw0KKwkJc3dpdGNoIChsaXRFbmNUeXBlKSB7DQorCQljYXNlIHNldF9y
ZXBlYXQ6DQorCQkJaWYgKGRjdHgtPmxpdEVudHJvcHkgPT0gMCkNCisJCQkJcmV0dXJuIEVSUk9S
KGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJCQkvKiBmYWxsIHRocm91Z2ggKi8NCisJCWNhc2Ug
c2V0X2NvbXByZXNzZWQ6DQorCQkJaWYgKHNyY1NpemUgPCA1KQ0KKwkJCQlyZXR1cm4gRVJST1Io
Y29ycnVwdGlvbl9kZXRlY3RlZCk7IC8qIHNyY1NpemUgPj0gTUlOX0NCTE9DS19TSVpFID09IDM7
IGhlcmUgd2UgbmVlZCB1cCB0byA1IGZvciBjYXNlIDMgKi8NCisJCQl7DQorCQkJCXNpemVfdCBs
aFNpemUsIGxpdFNpemUsIGxpdENTaXplOw0KKwkJCQlVMzIgc2luZ2xlU3RyZWFtID0gMDsNCisJ
CQkJVTMyIGNvbnN0IGxobENvZGUgPSAoaXN0YXJ0WzBdID4+IDIpICYgMzsNCisJCQkJVTMyIGNv
bnN0IGxoYyA9IFpTVERfcmVhZExFMzIoaXN0YXJ0KTsNCisJCQkJc3dpdGNoIChsaGxDb2RlKSB7
DQorCQkJCWNhc2UgMDoNCisJCQkJY2FzZSAxOg0KKwkJCQlkZWZhdWx0OiAvKiBub3RlIDogZGVm
YXVsdCBpcyBpbXBvc3NpYmxlLCBzaW5jZSBsaGxDb2RlIGludG8gWzAuLjNdICovDQorCQkJCQkv
KiAyIC0gMiAtIDEwIC0gMTAgKi8NCisJCQkJCXNpbmdsZVN0cmVhbSA9ICFsaGxDb2RlOw0KKwkJ
CQkJbGhTaXplID0gMzsNCisJCQkJCWxpdFNpemUgPSAobGhjID4+IDQpICYgMHgzRkY7DQorCQkJ
CQlsaXRDU2l6ZSA9IChsaGMgPj4gMTQpICYgMHgzRkY7DQorCQkJCQlicmVhazsNCisJCQkJY2Fz
ZSAyOg0KKwkJCQkJLyogMiAtIDIgLSAxNCAtIDE0ICovDQorCQkJCQlsaFNpemUgPSA0Ow0KKwkJ
CQkJbGl0U2l6ZSA9IChsaGMgPj4gNCkgJiAweDNGRkY7DQorCQkJCQlsaXRDU2l6ZSA9IGxoYyA+
PiAxODsNCisJCQkJCWJyZWFrOw0KKwkJCQljYXNlIDM6DQorCQkJCQkvKiAyIC0gMiAtIDE4IC0g
MTggKi8NCisJCQkJCWxoU2l6ZSA9IDU7DQorCQkJCQlsaXRTaXplID0gKGxoYyA+PiA0KSAmIDB4
M0ZGRkY7DQorCQkJCQlsaXRDU2l6ZSA9IChsaGMgPj4gMjIpICsgKGlzdGFydFs0XSA8PCAxMCk7
DQorCQkJCQlicmVhazsNCisJCQkJfQ0KKwkJCQlpZiAobGl0U2l6ZSA+IFpTVERfQkxPQ0tTSVpF
X0FCU09MVVRFTUFYKQ0KKwkJCQkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0K
KwkJCQlpZiAobGl0Q1NpemUgKyBsaFNpemUgPiBzcmNTaXplKQ0KKwkJCQkJcmV0dXJuIEVSUk9S
KGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKw0KKwkJCQlpZiAoSFVGX2lzRXJyb3IoDQorCQkJCQko
bGl0RW5jVHlwZSA9PSBzZXRfcmVwZWF0KQ0KKwkJCQkJICAgID8gKHNpbmdsZVN0cmVhbSA/IEhV
Rl9kZWNvbXByZXNzMVhfdXNpbmdEVGFibGUoZGN0eC0+bGl0QnVmZmVyLCBsaXRTaXplLCBpc3Rh
cnQgKyBsaFNpemUsIGxpdENTaXplLCBkY3R4LT5IVUZwdHIpDQorCQkJCQkJCSAgICA6IEhVRl9k
ZWNvbXByZXNzNFhfdXNpbmdEVGFibGUoZGN0eC0+bGl0QnVmZmVyLCBsaXRTaXplLCBpc3RhcnQg
KyBsaFNpemUsIGxpdENTaXplLCBkY3R4LT5IVUZwdHIpKQ0KKwkJCQkJICAgIDogKHNpbmdsZVN0
cmVhbQ0KKwkJCQkJCSAgID8gSFVGX2RlY29tcHJlc3MxWDJfREN0eF93a3NwKGRjdHgtPmVudHJv
cHkuaHVmVGFibGUsIGRjdHgtPmxpdEJ1ZmZlciwgbGl0U2l6ZSwgaXN0YXJ0ICsgbGhTaXplLCBs
aXRDU2l6ZSwNCisJCQkJCQkJCQkJIGRjdHgtPmVudHJvcHkud29ya3NwYWNlLCBzaXplb2YoZGN0
eC0+ZW50cm9weS53b3Jrc3BhY2UpKQ0KKwkJCQkJCSAgIDogSFVGX2RlY29tcHJlc3M0WF9odWZP
bmx5X3drc3AoZGN0eC0+ZW50cm9weS5odWZUYWJsZSwgZGN0eC0+bGl0QnVmZmVyLCBsaXRTaXpl
LCBpc3RhcnQgKyBsaFNpemUsIGxpdENTaXplLA0KKwkJCQkJCQkJCQkgICBkY3R4LT5lbnRyb3B5
LndvcmtzcGFjZSwgc2l6ZW9mKGRjdHgtPmVudHJvcHkud29ya3NwYWNlKSkpKSkNCisJCQkJCXJl
dHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisNCisJCQkJZGN0eC0+bGl0UHRyID0g
ZGN0eC0+bGl0QnVmZmVyOw0KKwkJCQlkY3R4LT5saXRTaXplID0gbGl0U2l6ZTsNCisJCQkJZGN0
eC0+bGl0RW50cm9weSA9IDE7DQorCQkJCWlmIChsaXRFbmNUeXBlID09IHNldF9jb21wcmVzc2Vk
KQ0KKwkJCQkJZGN0eC0+SFVGcHRyID0gZGN0eC0+ZW50cm9weS5odWZUYWJsZTsNCisJCQkJbWVt
c2V0KGRjdHgtPmxpdEJ1ZmZlciArIGRjdHgtPmxpdFNpemUsIDAsIFdJTERDT1BZX09WRVJMRU5H
VEgpOw0KKwkJCQlyZXR1cm4gbGl0Q1NpemUgKyBsaFNpemU7DQorCQkJfQ0KKw0KKwkJY2FzZSBz
ZXRfYmFzaWM6IHsNCisJCQlzaXplX3QgbGl0U2l6ZSwgbGhTaXplOw0KKwkJCVUzMiBjb25zdCBs
aGxDb2RlID0gKChpc3RhcnRbMF0pID4+IDIpICYgMzsNCisJCQlzd2l0Y2ggKGxobENvZGUpIHsN
CisJCQljYXNlIDA6DQorCQkJY2FzZSAyOg0KKwkJCWRlZmF1bHQ6IC8qIG5vdGUgOiBkZWZhdWx0
IGlzIGltcG9zc2libGUsIHNpbmNlIGxobENvZGUgaW50byBbMC4uM10gKi8NCisJCQkJbGhTaXpl
ID0gMTsNCisJCQkJbGl0U2l6ZSA9IGlzdGFydFswXSA+PiAzOw0KKwkJCQlicmVhazsNCisJCQlj
YXNlIDE6DQorCQkJCWxoU2l6ZSA9IDI7DQorCQkJCWxpdFNpemUgPSBaU1REX3JlYWRMRTE2KGlz
dGFydCkgPj4gNDsNCisJCQkJYnJlYWs7DQorCQkJY2FzZSAzOg0KKwkJCQlsaFNpemUgPSAzOw0K
KwkJCQlsaXRTaXplID0gWlNURF9yZWFkTEUyNChpc3RhcnQpID4+IDQ7DQorCQkJCWJyZWFrOw0K
KwkJCX0NCisNCisJCQlpZiAobGhTaXplICsgbGl0U2l6ZSArIFdJTERDT1BZX09WRVJMRU5HVEgg
PiBzcmNTaXplKSB7IC8qIHJpc2sgcmVhZGluZyBiZXlvbmQgc3JjIGJ1ZmZlciB3aXRoIHdpbGRj
b3B5ICovDQorCQkJCWlmIChsaXRTaXplICsgbGhTaXplID4gc3JjU2l6ZSkNCisJCQkJCXJldHVy
biBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCQkJbWVtY3B5KGRjdHgtPmxpdEJ1ZmZl
ciwgaXN0YXJ0ICsgbGhTaXplLCBsaXRTaXplKTsNCisJCQkJZGN0eC0+bGl0UHRyID0gZGN0eC0+
bGl0QnVmZmVyOw0KKwkJCQlkY3R4LT5saXRTaXplID0gbGl0U2l6ZTsNCisJCQkJbWVtc2V0KGRj
dHgtPmxpdEJ1ZmZlciArIGRjdHgtPmxpdFNpemUsIDAsIFdJTERDT1BZX09WRVJMRU5HVEgpOw0K
KwkJCQlyZXR1cm4gbGhTaXplICsgbGl0U2l6ZTsNCisJCQl9DQorCQkJLyogZGlyZWN0IHJlZmVy
ZW5jZSBpbnRvIGNvbXByZXNzZWQgc3RyZWFtICovDQorCQkJZGN0eC0+bGl0UHRyID0gaXN0YXJ0
ICsgbGhTaXplOw0KKwkJCWRjdHgtPmxpdFNpemUgPSBsaXRTaXplOw0KKwkJCXJldHVybiBsaFNp
emUgKyBsaXRTaXplOw0KKwkJfQ0KKw0KKwkJY2FzZSBzZXRfcmxlOiB7DQorCQkJVTMyIGNvbnN0
IGxobENvZGUgPSAoKGlzdGFydFswXSkgPj4gMikgJiAzOw0KKwkJCXNpemVfdCBsaXRTaXplLCBs
aFNpemU7DQorCQkJc3dpdGNoIChsaGxDb2RlKSB7DQorCQkJY2FzZSAwOg0KKwkJCWNhc2UgMjoN
CisJCQlkZWZhdWx0OiAvKiBub3RlIDogZGVmYXVsdCBpcyBpbXBvc3NpYmxlLCBzaW5jZSBsaGxD
b2RlIGludG8gWzAuLjNdICovDQorCQkJCWxoU2l6ZSA9IDE7DQorCQkJCWxpdFNpemUgPSBpc3Rh
cnRbMF0gPj4gMzsNCisJCQkJYnJlYWs7DQorCQkJY2FzZSAxOg0KKwkJCQlsaFNpemUgPSAyOw0K
KwkJCQlsaXRTaXplID0gWlNURF9yZWFkTEUxNihpc3RhcnQpID4+IDQ7DQorCQkJCWJyZWFrOw0K
KwkJCWNhc2UgMzoNCisJCQkJbGhTaXplID0gMzsNCisJCQkJbGl0U2l6ZSA9IFpTVERfcmVhZExF
MjQoaXN0YXJ0KSA+PiA0Ow0KKwkJCQlpZiAoc3JjU2l6ZSA8IDQpDQorCQkJCQlyZXR1cm4gRVJS
T1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7IC8qIHNyY1NpemUgPj0gTUlOX0NCTE9DS19TSVpFID09
IDM7IGhlcmUgd2UgbmVlZCBsaFNpemUrMSA9IDQgKi8NCisJCQkJYnJlYWs7DQorCQkJfQ0KKwkJ
CWlmIChsaXRTaXplID4gWlNURF9CTE9DS1NJWkVfQUJTT0xVVEVNQVgpDQorCQkJCXJldHVybiBF
UlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCQltZW1zZXQoZGN0eC0+bGl0QnVmZmVyLCBp
c3RhcnRbbGhTaXplXSwgbGl0U2l6ZSArIFdJTERDT1BZX09WRVJMRU5HVEgpOw0KKwkJCWRjdHgt
PmxpdFB0ciA9IGRjdHgtPmxpdEJ1ZmZlcjsNCisJCQlkY3R4LT5saXRTaXplID0gbGl0U2l6ZTsN
CisJCQlyZXR1cm4gbGhTaXplICsgMTsNCisJCX0NCisJCWRlZmF1bHQ6DQorCQkJcmV0dXJuIEVS
Uk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOyAvKiBpbXBvc3NpYmxlICovDQorCQl9DQorCX0NCit9
DQorDQordHlwZWRlZiB1bmlvbiB7DQorCUZTRV9kZWNvZGVfdCByZWFsRGF0YTsNCisJVTMyIGFs
aWduZWRCeTQ7DQorfSBGU0VfZGVjb2RlX3Q0Ow0KKw0KK3N0YXRpYyBjb25zdCBGU0VfZGVjb2Rl
X3Q0IExMX2RlZmF1bHREVGFibGVbKDEgPDwgTExfREVGQVVMVE5PUk1MT0cpICsgMV0gPSB7DQor
ICAgIHt7TExfREVGQVVMVE5PUk1MT0csIDEsIDF9fSwgLyogaGVhZGVyIDogdGFibGVMb2csIGZh
c3RNb2RlLCBmYXN0TW9kZSAqLw0KKyAgICB7ezAsIDAsIDR9fSwJCSAvKiAwIDogYmFzZSwgc3lt
Ym9sLCBiaXRzICovDQorICAgIHt7MTYsIDAsIDR9fSwNCisgICAge3szMiwgMSwgNX19LA0KKyAg
ICB7ezAsIDMsIDV9fSwNCisgICAge3swLCA0LCA1fX0sDQorICAgIHt7MCwgNiwgNX19LA0KKyAg
ICB7ezAsIDcsIDV9fSwNCisgICAge3swLCA5LCA1fX0sDQorICAgIHt7MCwgMTAsIDV9fSwNCisg
ICAge3swLCAxMiwgNX19LA0KKyAgICB7ezAsIDE0LCA2fX0sDQorICAgIHt7MCwgMTYsIDV9fSwN
CisgICAge3swLCAxOCwgNX19LA0KKyAgICB7ezAsIDE5LCA1fX0sDQorICAgIHt7MCwgMjEsIDV9
fSwNCisgICAge3swLCAyMiwgNX19LA0KKyAgICB7ezAsIDI0LCA1fX0sDQorICAgIHt7MzIsIDI1
LCA1fX0sDQorICAgIHt7MCwgMjYsIDV9fSwNCisgICAge3swLCAyNywgNn19LA0KKyAgICB7ezAs
IDI5LCA2fX0sDQorICAgIHt7MCwgMzEsIDZ9fSwNCisgICAge3szMiwgMCwgNH19LA0KKyAgICB7
ezAsIDEsIDR9fSwNCisgICAge3swLCAyLCA1fX0sDQorICAgIHt7MzIsIDQsIDV9fSwNCisgICAg
e3swLCA1LCA1fX0sDQorICAgIHt7MzIsIDcsIDV9fSwNCisgICAge3swLCA4LCA1fX0sDQorICAg
IHt7MzIsIDEwLCA1fX0sDQorICAgIHt7MCwgMTEsIDV9fSwNCisgICAge3swLCAxMywgNn19LA0K
KyAgICB7ezMyLCAxNiwgNX19LA0KKyAgICB7ezAsIDE3LCA1fX0sDQorICAgIHt7MzIsIDE5LCA1
fX0sDQorICAgIHt7MCwgMjAsIDV9fSwNCisgICAge3szMiwgMjIsIDV9fSwNCisgICAge3swLCAy
MywgNX19LA0KKyAgICB7ezAsIDI1LCA0fX0sDQorICAgIHt7MTYsIDI1LCA0fX0sDQorICAgIHt7
MzIsIDI2LCA1fX0sDQorICAgIHt7MCwgMjgsIDZ9fSwNCisgICAge3swLCAzMCwgNn19LA0KKyAg
ICB7ezQ4LCAwLCA0fX0sDQorICAgIHt7MTYsIDEsIDR9fSwNCisgICAge3szMiwgMiwgNX19LA0K
KyAgICB7ezMyLCAzLCA1fX0sDQorICAgIHt7MzIsIDUsIDV9fSwNCisgICAge3szMiwgNiwgNX19
LA0KKyAgICB7ezMyLCA4LCA1fX0sDQorICAgIHt7MzIsIDksIDV9fSwNCisgICAge3szMiwgMTEs
IDV9fSwNCisgICAge3szMiwgMTIsIDV9fSwNCisgICAge3swLCAxNSwgNn19LA0KKyAgICB7ezMy
LCAxNywgNX19LA0KKyAgICB7ezMyLCAxOCwgNX19LA0KKyAgICB7ezMyLCAyMCwgNX19LA0KKyAg
ICB7ezMyLCAyMSwgNX19LA0KKyAgICB7ezMyLCAyMywgNX19LA0KKyAgICB7ezMyLCAyNCwgNX19
LA0KKyAgICB7ezAsIDM1LCA2fX0sDQorICAgIHt7MCwgMzQsIDZ9fSwNCisgICAge3swLCAzMywg
Nn19LA0KKyAgICB7ezAsIDMyLCA2fX0sDQorfTsgLyogTExfZGVmYXVsdERUYWJsZSAqLw0KKw0K
K3N0YXRpYyBjb25zdCBGU0VfZGVjb2RlX3Q0IE1MX2RlZmF1bHREVGFibGVbKDEgPDwgTUxfREVG
QVVMVE5PUk1MT0cpICsgMV0gPSB7DQorICAgIHt7TUxfREVGQVVMVE5PUk1MT0csIDEsIDF9fSwg
LyogaGVhZGVyIDogdGFibGVMb2csIGZhc3RNb2RlLCBmYXN0TW9kZSAqLw0KKyAgICB7ezAsIDAs
IDZ9fSwJCSAvKiAwIDogYmFzZSwgc3ltYm9sLCBiaXRzICovDQorICAgIHt7MCwgMSwgNH19LA0K
KyAgICB7ezMyLCAyLCA1fX0sDQorICAgIHt7MCwgMywgNX19LA0KKyAgICB7ezAsIDUsIDV9fSwN
CisgICAge3swLCA2LCA1fX0sDQorICAgIHt7MCwgOCwgNX19LA0KKyAgICB7ezAsIDEwLCA2fX0s
DQorICAgIHt7MCwgMTMsIDZ9fSwNCisgICAge3swLCAxNiwgNn19LA0KKyAgICB7ezAsIDE5LCA2
fX0sDQorICAgIHt7MCwgMjIsIDZ9fSwNCisgICAge3swLCAyNSwgNn19LA0KKyAgICB7ezAsIDI4
LCA2fX0sDQorICAgIHt7MCwgMzEsIDZ9fSwNCisgICAge3swLCAzMywgNn19LA0KKyAgICB7ezAs
IDM1LCA2fX0sDQorICAgIHt7MCwgMzcsIDZ9fSwNCisgICAge3swLCAzOSwgNn19LA0KKyAgICB7
ezAsIDQxLCA2fX0sDQorICAgIHt7MCwgNDMsIDZ9fSwNCisgICAge3swLCA0NSwgNn19LA0KKyAg
ICB7ezE2LCAxLCA0fX0sDQorICAgIHt7MCwgMiwgNH19LA0KKyAgICB7ezMyLCAzLCA1fX0sDQor
ICAgIHt7MCwgNCwgNX19LA0KKyAgICB7ezMyLCA2LCA1fX0sDQorICAgIHt7MCwgNywgNX19LA0K
KyAgICB7ezAsIDksIDZ9fSwNCisgICAge3swLCAxMiwgNn19LA0KKyAgICB7ezAsIDE1LCA2fX0s
DQorICAgIHt7MCwgMTgsIDZ9fSwNCisgICAge3swLCAyMSwgNn19LA0KKyAgICB7ezAsIDI0LCA2
fX0sDQorICAgIHt7MCwgMjcsIDZ9fSwNCisgICAge3swLCAzMCwgNn19LA0KKyAgICB7ezAsIDMy
LCA2fX0sDQorICAgIHt7MCwgMzQsIDZ9fSwNCisgICAge3swLCAzNiwgNn19LA0KKyAgICB7ezAs
IDM4LCA2fX0sDQorICAgIHt7MCwgNDAsIDZ9fSwNCisgICAge3swLCA0MiwgNn19LA0KKyAgICB7
ezAsIDQ0LCA2fX0sDQorICAgIHt7MzIsIDEsIDR9fSwNCisgICAge3s0OCwgMSwgNH19LA0KKyAg
ICB7ezE2LCAyLCA0fX0sDQorICAgIHt7MzIsIDQsIDV9fSwNCisgICAge3szMiwgNSwgNX19LA0K
KyAgICB7ezMyLCA3LCA1fX0sDQorICAgIHt7MzIsIDgsIDV9fSwNCisgICAge3swLCAxMSwgNn19
LA0KKyAgICB7ezAsIDE0LCA2fX0sDQorICAgIHt7MCwgMTcsIDZ9fSwNCisgICAge3swLCAyMCwg
Nn19LA0KKyAgICB7ezAsIDIzLCA2fX0sDQorICAgIHt7MCwgMjYsIDZ9fSwNCisgICAge3swLCAy
OSwgNn19LA0KKyAgICB7ezAsIDUyLCA2fX0sDQorICAgIHt7MCwgNTEsIDZ9fSwNCisgICAge3sw
LCA1MCwgNn19LA0KKyAgICB7ezAsIDQ5LCA2fX0sDQorICAgIHt7MCwgNDgsIDZ9fSwNCisgICAg
e3swLCA0NywgNn19LA0KKyAgICB7ezAsIDQ2LCA2fX0sDQorfTsgLyogTUxfZGVmYXVsdERUYWJs
ZSAqLw0KKw0KK3N0YXRpYyBjb25zdCBGU0VfZGVjb2RlX3Q0IE9GX2RlZmF1bHREVGFibGVbKDEg
PDwgT0ZfREVGQVVMVE5PUk1MT0cpICsgMV0gPSB7DQorICAgIHt7T0ZfREVGQVVMVE5PUk1MT0cs
IDEsIDF9fSwgLyogaGVhZGVyIDogdGFibGVMb2csIGZhc3RNb2RlLCBmYXN0TW9kZSAqLw0KKyAg
ICB7ezAsIDAsIDV9fSwJCSAvKiAwIDogYmFzZSwgc3ltYm9sLCBiaXRzICovDQorICAgIHt7MCwg
NiwgNH19LA0KKyAgICB7ezAsIDksIDV9fSwNCisgICAge3swLCAxNSwgNX19LA0KKyAgICB7ezAs
IDIxLCA1fX0sDQorICAgIHt7MCwgMywgNX19LA0KKyAgICB7ezAsIDcsIDR9fSwNCisgICAge3sw
LCAxMiwgNX19LA0KKyAgICB7ezAsIDE4LCA1fX0sDQorICAgIHt7MCwgMjMsIDV9fSwNCisgICAg
e3swLCA1LCA1fX0sDQorICAgIHt7MCwgOCwgNH19LA0KKyAgICB7ezAsIDE0LCA1fX0sDQorICAg
IHt7MCwgMjAsIDV9fSwNCisgICAge3swLCAyLCA1fX0sDQorICAgIHt7MTYsIDcsIDR9fSwNCisg
ICAge3swLCAxMSwgNX19LA0KKyAgICB7ezAsIDE3LCA1fX0sDQorICAgIHt7MCwgMjIsIDV9fSwN
CisgICAge3swLCA0LCA1fX0sDQorICAgIHt7MTYsIDgsIDR9fSwNCisgICAge3swLCAxMywgNX19
LA0KKyAgICB7ezAsIDE5LCA1fX0sDQorICAgIHt7MCwgMSwgNX19LA0KKyAgICB7ezE2LCA2LCA0
fX0sDQorICAgIHt7MCwgMTAsIDV9fSwNCisgICAge3swLCAxNiwgNX19LA0KKyAgICB7ezAsIDI4
LCA1fX0sDQorICAgIHt7MCwgMjcsIDV9fSwNCisgICAge3swLCAyNiwgNX19LA0KKyAgICB7ezAs
IDI1LCA1fX0sDQorICAgIHt7MCwgMjQsIDV9fSwNCit9OyAvKiBPRl9kZWZhdWx0RFRhYmxlICov
DQorDQorLyohIFpTVERfYnVpbGRTZXFUYWJsZSgpIDoNCisJQHJldHVybiA6IG5iIGJ5dGVzIHJl
YWQgZnJvbSBzcmMsDQorCQkJICBvciBhbiBlcnJvciBjb2RlIGlmIGl0IGZhaWxzLCB0ZXN0YWJs
ZSB3aXRoIFpTVERfaXNFcnJvcigpDQorKi8NCitzdGF0aWMgc2l6ZV90IFpTVERfYnVpbGRTZXFU
YWJsZShGU0VfRFRhYmxlICpEVGFibGVTcGFjZSwgY29uc3QgRlNFX0RUYWJsZSAqKkRUYWJsZVB0
ciwgc3ltYm9sRW5jb2RpbmdUeXBlX2UgdHlwZSwgVTMyIG1heCwgVTMyIG1heExvZywgY29uc3Qg
dm9pZCAqc3JjLA0KKwkJCQkgc2l6ZV90IHNyY1NpemUsIGNvbnN0IEZTRV9kZWNvZGVfdDQgKmRl
ZmF1bHRUYWJsZSwgVTMyIGZsYWdSZXBlYXRUYWJsZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qg
d29ya3NwYWNlU2l6ZSkNCit7DQorCWNvbnN0IHZvaWQgKmNvbnN0IHRtcFB0ciA9IGRlZmF1bHRU
YWJsZTsgLyogYnlwYXNzIHN0cmljdCBhbGlhc2luZyAqLw0KKwlzd2l0Y2ggKHR5cGUpIHsNCisJ
Y2FzZSBzZXRfcmxlOg0KKwkJaWYgKCFzcmNTaXplKQ0KKwkJCXJldHVybiBFUlJPUihzcmNTaXpl
X3dyb25nKTsNCisJCWlmICgoKihjb25zdCBCWVRFICopc3JjKSA+IG1heCkNCisJCQlyZXR1cm4g
RVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQlGU0VfYnVpbGREVGFibGVfcmxlKERUYWJs
ZVNwYWNlLCAqKGNvbnN0IEJZVEUgKilzcmMpOw0KKwkJKkRUYWJsZVB0ciA9IERUYWJsZVNwYWNl
Ow0KKwkJcmV0dXJuIDE7DQorCWNhc2Ugc2V0X2Jhc2ljOiAqRFRhYmxlUHRyID0gKGNvbnN0IEZT
RV9EVGFibGUgKil0bXBQdHI7IHJldHVybiAwOw0KKwljYXNlIHNldF9yZXBlYXQ6DQorCQlpZiAo
IWZsYWdSZXBlYXRUYWJsZSkNCisJCQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7
DQorCQlyZXR1cm4gMDsNCisJZGVmYXVsdDogLyogaW1wb3NzaWJsZSAqLw0KKwljYXNlIHNldF9j
b21wcmVzc2VkOiB7DQorCQlVMzIgdGFibGVMb2c7DQorCQlTMTYgKm5vcm0gPSAoUzE2ICopd29y
a3NwYWNlOw0KKwkJc2l6ZV90IGNvbnN0IHNwYWNlVXNlZDMyID0gQUxJR04oc2l6ZW9mKFMxNikg
KiAoTWF4U2VxICsgMSksIHNpemVvZihVMzIpKSA+PiAyOw0KKw0KKwkJaWYgKChzcGFjZVVzZWQz
MiA8PCAyKSA+IHdvcmtzcGFjZVNpemUpDQorCQkJcmV0dXJuIEVSUk9SKEdFTkVSSUMpOw0KKwkJ
d29ya3NwYWNlID0gKFUzMiAqKXdvcmtzcGFjZSArIHNwYWNlVXNlZDMyOw0KKwkJd29ya3NwYWNl
U2l6ZSAtPSAoc3BhY2VVc2VkMzIgPDwgMik7DQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IGhlYWRl
clNpemUgPSBGU0VfcmVhZE5Db3VudChub3JtLCAmbWF4LCAmdGFibGVMb2csIHNyYywgc3JjU2l6
ZSk7DQorCQkJaWYgKEZTRV9pc0Vycm9yKGhlYWRlclNpemUpKQ0KKwkJCQlyZXR1cm4gRVJST1Io
Y29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQkJaWYgKHRhYmxlTG9nID4gbWF4TG9nKQ0KKwkJCQly
ZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQkJRlNFX2J1aWxkRFRhYmxlX3dr
c3AoRFRhYmxlU3BhY2UsIG5vcm0sIG1heCwgdGFibGVMb2csIHdvcmtzcGFjZSwgd29ya3NwYWNl
U2l6ZSk7DQorCQkJKkRUYWJsZVB0ciA9IERUYWJsZVNwYWNlOw0KKwkJCXJldHVybiBoZWFkZXJT
aXplOw0KKwkJfQ0KKwl9DQorCX0NCit9DQorDQorc2l6ZV90IFpTVERfZGVjb2RlU2VxSGVhZGVy
cyhaU1REX0RDdHggKmRjdHgsIGludCAqbmJTZXFQdHIsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90
IHNyY1NpemUpDQorew0KKwljb25zdCBCWVRFICpjb25zdCBpc3RhcnQgPSAoY29uc3QgQllURSAq
Y29uc3Qpc3JjOw0KKwljb25zdCBCWVRFICpjb25zdCBpZW5kID0gaXN0YXJ0ICsgc3JjU2l6ZTsN
CisJY29uc3QgQllURSAqaXAgPSBpc3RhcnQ7DQorDQorCS8qIGNoZWNrICovDQorCWlmIChzcmNT
aXplIDwgTUlOX1NFUVVFTkNFU19TSVpFKQ0KKwkJcmV0dXJuIEVSUk9SKHNyY1NpemVfd3Jvbmcp
Ow0KKw0KKwkvKiBTZXFIZWFkICovDQorCXsNCisJCWludCBuYlNlcSA9ICppcCsrOw0KKwkJaWYg
KCFuYlNlcSkgew0KKwkJCSpuYlNlcVB0ciA9IDA7DQorCQkJcmV0dXJuIDE7DQorCQl9DQorCQlp
ZiAobmJTZXEgPiAweDdGKSB7DQorCQkJaWYgKG5iU2VxID09IDB4RkYpIHsNCisJCQkJaWYgKGlw
ICsgMiA+IGllbmQpDQorCQkJCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorCQkJCW5i
U2VxID0gWlNURF9yZWFkTEUxNihpcCkgKyBMT05HTkJTRVEsIGlwICs9IDI7DQorCQkJfSBlbHNl
IHsNCisJCQkJaWYgKGlwID49IGllbmQpDQorCQkJCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9u
Zyk7DQorCQkJCW5iU2VxID0gKChuYlNlcSAtIDB4ODApIDw8IDgpICsgKmlwKys7DQorCQkJfQ0K
KwkJfQ0KKwkJKm5iU2VxUHRyID0gbmJTZXE7DQorCX0NCisNCisJLyogRlNFIHRhYmxlIGRlc2Ny
aXB0b3JzICovDQorCWlmIChpcCArIDQgPiBpZW5kKQ0KKwkJcmV0dXJuIEVSUk9SKHNyY1NpemVf
d3JvbmcpOyAvKiBtaW5pbXVtIHBvc3NpYmxlIHNpemUgKi8NCisJew0KKwkJc3ltYm9sRW5jb2Rp
bmdUeXBlX2UgY29uc3QgTEx0eXBlID0gKHN5bWJvbEVuY29kaW5nVHlwZV9lKSgqaXAgPj4gNik7
DQorCQlzeW1ib2xFbmNvZGluZ1R5cGVfZSBjb25zdCBPRnR5cGUgPSAoc3ltYm9sRW5jb2RpbmdU
eXBlX2UpKCgqaXAgPj4gNCkgJiAzKTsNCisJCXN5bWJvbEVuY29kaW5nVHlwZV9lIGNvbnN0IE1M
dHlwZSA9IChzeW1ib2xFbmNvZGluZ1R5cGVfZSkoKCppcCA+PiAyKSAmIDMpOw0KKwkJaXArKzsN
CisNCisJCS8qIEJ1aWxkIERUYWJsZXMgKi8NCisJCXsNCisJCQlzaXplX3QgY29uc3QgbGxoU2l6
ZSA9IFpTVERfYnVpbGRTZXFUYWJsZShkY3R4LT5lbnRyb3B5LkxMVGFibGUsICZkY3R4LT5MTFRw
dHIsIExMdHlwZSwgTWF4TEwsIExMRlNFTG9nLCBpcCwgaWVuZCAtIGlwLA0KKwkJCQkJCQkJICBM
TF9kZWZhdWx0RFRhYmxlLCBkY3R4LT5mc2VFbnRyb3B5LCBkY3R4LT5lbnRyb3B5LndvcmtzcGFj
ZSwgc2l6ZW9mKGRjdHgtPmVudHJvcHkud29ya3NwYWNlKSk7DQorCQkJaWYgKFpTVERfaXNFcnJv
cihsbGhTaXplKSkNCisJCQkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKwkJ
CWlwICs9IGxsaFNpemU7DQorCQl9DQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IG9maFNpemUgPSBa
U1REX2J1aWxkU2VxVGFibGUoZGN0eC0+ZW50cm9weS5PRlRhYmxlLCAmZGN0eC0+T0ZUcHRyLCBP
RnR5cGUsIE1heE9mZiwgT2ZmRlNFTG9nLCBpcCwgaWVuZCAtIGlwLA0KKwkJCQkJCQkJICBPRl9k
ZWZhdWx0RFRhYmxlLCBkY3R4LT5mc2VFbnRyb3B5LCBkY3R4LT5lbnRyb3B5LndvcmtzcGFjZSwg
c2l6ZW9mKGRjdHgtPmVudHJvcHkud29ya3NwYWNlKSk7DQorCQkJaWYgKFpTVERfaXNFcnJvcihv
ZmhTaXplKSkNCisJCQkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKwkJCWlw
ICs9IG9maFNpemU7DQorCQl9DQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IG1saFNpemUgPSBaU1RE
X2J1aWxkU2VxVGFibGUoZGN0eC0+ZW50cm9weS5NTFRhYmxlLCAmZGN0eC0+TUxUcHRyLCBNTHR5
cGUsIE1heE1MLCBNTEZTRUxvZywgaXAsIGllbmQgLSBpcCwNCisJCQkJCQkJCSAgTUxfZGVmYXVs
dERUYWJsZSwgZGN0eC0+ZnNlRW50cm9weSwgZGN0eC0+ZW50cm9weS53b3Jrc3BhY2UsIHNpemVv
ZihkY3R4LT5lbnRyb3B5LndvcmtzcGFjZSkpOw0KKwkJCWlmIChaU1REX2lzRXJyb3IobWxoU2l6
ZSkpDQorCQkJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCQlpcCArPSBt
bGhTaXplOw0KKwkJfQ0KKwl9DQorDQorCXJldHVybiBpcCAtIGlzdGFydDsNCit9DQorDQordHlw
ZWRlZiBzdHJ1Y3Qgew0KKwlzaXplX3QgbGl0TGVuZ3RoOw0KKwlzaXplX3QgbWF0Y2hMZW5ndGg7
DQorCXNpemVfdCBvZmZzZXQ7DQorCWNvbnN0IEJZVEUgKm1hdGNoOw0KK30gc2VxX3Q7DQorDQor
dHlwZWRlZiBzdHJ1Y3Qgew0KKwlCSVRfRFN0cmVhbV90IERTdHJlYW07DQorCUZTRV9EU3RhdGVf
dCBzdGF0ZUxMOw0KKwlGU0VfRFN0YXRlX3Qgc3RhdGVPZmZiOw0KKwlGU0VfRFN0YXRlX3Qgc3Rh
dGVNTDsNCisJc2l6ZV90IHByZXZPZmZzZXRbWlNURF9SRVBfTlVNXTsNCisJY29uc3QgQllURSAq
YmFzZTsNCisJc2l6ZV90IHBvczsNCisJdVB0ckRpZmYgZ290b0RpY3Q7DQorfSBzZXFTdGF0ZV90
Ow0KKw0KK0ZPUkNFX05PSU5MSU5FDQorc2l6ZV90IFpTVERfZXhlY1NlcXVlbmNlTGFzdDcoQllU
RSAqb3AsIEJZVEUgKmNvbnN0IG9lbmQsIHNlcV90IHNlcXVlbmNlLCBjb25zdCBCWVRFICoqbGl0
UHRyLCBjb25zdCBCWVRFICpjb25zdCBsaXRMaW1pdCwgY29uc3QgQllURSAqY29uc3QgYmFzZSwN
CisJCQkgICAgICBjb25zdCBCWVRFICpjb25zdCB2QmFzZSwgY29uc3QgQllURSAqY29uc3QgZGlj
dEVuZCkNCit7DQorCUJZVEUgKmNvbnN0IG9MaXRFbmQgPSBvcCArIHNlcXVlbmNlLmxpdExlbmd0
aDsNCisJc2l6ZV90IGNvbnN0IHNlcXVlbmNlTGVuZ3RoID0gc2VxdWVuY2UubGl0TGVuZ3RoICsg
c2VxdWVuY2UubWF0Y2hMZW5ndGg7DQorCUJZVEUgKmNvbnN0IG9NYXRjaEVuZCA9IG9wICsgc2Vx
dWVuY2VMZW5ndGg7IC8qIHJpc2sgOiBhZGRyZXNzIHNwYWNlIG92ZXJmbG93ICgzMi1iaXRzKSAq
Lw0KKwlCWVRFICpjb25zdCBvZW5kX3cgPSBvZW5kIC0gV0lMRENPUFlfT1ZFUkxFTkdUSDsNCisJ
Y29uc3QgQllURSAqY29uc3QgaUxpdEVuZCA9ICpsaXRQdHIgKyBzZXF1ZW5jZS5saXRMZW5ndGg7
DQorCWNvbnN0IEJZVEUgKm1hdGNoID0gb0xpdEVuZCAtIHNlcXVlbmNlLm9mZnNldDsNCisNCisJ
LyogY2hlY2sgKi8NCisJaWYgKG9NYXRjaEVuZCA+IG9lbmQpDQorCQlyZXR1cm4gRVJST1IoZHN0
U2l6ZV90b29TbWFsbCk7IC8qIGxhc3QgbWF0Y2ggbXVzdCBzdGFydCBhdCBhIG1pbmltdW0gZGlz
dGFuY2Ugb2YgV0lMRENPUFlfT1ZFUkxFTkdUSCBmcm9tIG9lbmQgKi8NCisJaWYgKGlMaXRFbmQg
PiBsaXRMaW1pdCkNCisJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsgLyogb3Zl
ci1yZWFkIGJleW9uZCBsaXQgYnVmZmVyICovDQorCWlmIChvTGl0RW5kIDw9IG9lbmRfdykNCisJ
CXJldHVybiBFUlJPUihHRU5FUklDKTsgLyogUHJlY29uZGl0aW9uICovDQorDQorCS8qIGNvcHkg
bGl0ZXJhbHMgKi8NCisJaWYgKG9wIDwgb2VuZF93KSB7DQorCQlaU1REX3dpbGRjb3B5KG9wLCAq
bGl0UHRyLCBvZW5kX3cgLSBvcCk7DQorCQkqbGl0UHRyICs9IG9lbmRfdyAtIG9wOw0KKwkJb3Ag
PSBvZW5kX3c7DQorCX0NCisJd2hpbGUgKG9wIDwgb0xpdEVuZCkNCisJCSpvcCsrID0gKigqbGl0
UHRyKSsrOw0KKw0KKwkvKiBjb3B5IE1hdGNoICovDQorCWlmIChzZXF1ZW5jZS5vZmZzZXQgPiAo
c2l6ZV90KShvTGl0RW5kIC0gYmFzZSkpIHsNCisJCS8qIG9mZnNldCBiZXlvbmQgcHJlZml4ICov
DQorCQlpZiAoc2VxdWVuY2Uub2Zmc2V0ID4gKHNpemVfdCkob0xpdEVuZCAtIHZCYXNlKSkNCisJ
CQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQltYXRjaCA9IGRpY3RFbmQg
LSAoYmFzZSAtIG1hdGNoKTsNCisJCWlmIChtYXRjaCArIHNlcXVlbmNlLm1hdGNoTGVuZ3RoIDw9
IGRpY3RFbmQpIHsNCisJCQltZW1tb3ZlKG9MaXRFbmQsIG1hdGNoLCBzZXF1ZW5jZS5tYXRjaExl
bmd0aCk7DQorCQkJcmV0dXJuIHNlcXVlbmNlTGVuZ3RoOw0KKwkJfQ0KKwkJLyogc3BhbiBleHRE
aWN0ICYgY3VyclByZWZpeFNlZ21lbnQgKi8NCisJCXsNCisJCQlzaXplX3QgY29uc3QgbGVuZ3Ro
MSA9IGRpY3RFbmQgLSBtYXRjaDsNCisJCQltZW1tb3ZlKG9MaXRFbmQsIG1hdGNoLCBsZW5ndGgx
KTsNCisJCQlvcCA9IG9MaXRFbmQgKyBsZW5ndGgxOw0KKwkJCXNlcXVlbmNlLm1hdGNoTGVuZ3Ro
IC09IGxlbmd0aDE7DQorCQkJbWF0Y2ggPSBiYXNlOw0KKwkJfQ0KKwl9DQorCXdoaWxlIChvcCA8
IG9NYXRjaEVuZCkNCisJCSpvcCsrID0gKm1hdGNoKys7DQorCXJldHVybiBzZXF1ZW5jZUxlbmd0
aDsNCit9DQorDQorc3RhdGljIHNlcV90IFpTVERfZGVjb2RlU2VxdWVuY2Uoc2VxU3RhdGVfdCAq
c2VxU3RhdGUpDQorew0KKwlzZXFfdCBzZXE7DQorDQorCVUzMiBjb25zdCBsbENvZGUgPSBGU0Vf
cGVla1N5bWJvbCgmc2VxU3RhdGUtPnN0YXRlTEwpOw0KKwlVMzIgY29uc3QgbWxDb2RlID0gRlNF
X3BlZWtTeW1ib2woJnNlcVN0YXRlLT5zdGF0ZU1MKTsNCisJVTMyIGNvbnN0IG9mQ29kZSA9IEZT
RV9wZWVrU3ltYm9sKCZzZXFTdGF0ZS0+c3RhdGVPZmZiKTsgLyogPD0gbWF4T2ZmLCBieSB0YWJs
ZSBjb25zdHJ1Y3Rpb24gKi8NCisNCisJVTMyIGNvbnN0IGxsQml0cyA9IExMX2JpdHNbbGxDb2Rl
XTsNCisJVTMyIGNvbnN0IG1sQml0cyA9IE1MX2JpdHNbbWxDb2RlXTsNCisJVTMyIGNvbnN0IG9m
Qml0cyA9IG9mQ29kZTsNCisJVTMyIGNvbnN0IHRvdGFsQml0cyA9IGxsQml0cyArIG1sQml0cyAr
IG9mQml0czsNCisNCisJc3RhdGljIGNvbnN0IFUzMiBMTF9iYXNlW01heExMICsgMV0gPSB7MCwg
IDEsICAyLCAgMywgIDQsICA1LCAgNiwgIDcsICA4LCAgICA5LCAgICAgMTAsICAgIDExLCAgICAx
MiwgICAgMTMsICAgICAxNCwgICAgIDE1LCAgICAgMTYsICAgICAxOCwNCisJCQkJCSAgICAgICAy
MCwgMjIsIDI0LCAyOCwgMzIsIDQwLCA0OCwgNjQsIDB4ODAsIDB4MTAwLCAweDIwMCwgMHg0MDAs
IDB4ODAwLCAweDEwMDAsIDB4MjAwMCwgMHg0MDAwLCAweDgwMDAsIDB4MTAwMDB9Ow0KKw0KKwlz
dGF0aWMgY29uc3QgVTMyIE1MX2Jhc2VbTWF4TUwgKyAxXSA9IHszLCAgNCwgIDUsICA2LCAgNywg
IDgsICA5LCAgMTAsICAgMTEsICAgIDEyLCAgICAxMywgICAgMTQsICAgIDE1LCAgICAgMTYsICAg
ICAxNywgICAgIDE4LCAgICAgMTksICAgICAyMCwNCisJCQkJCSAgICAgICAyMSwgMjIsIDIzLCAy
NCwgMjUsIDI2LCAyNywgMjgsICAgMjksICAgIDMwLCAgICAzMSwgICAgMzIsICAgIDMzLCAgICAg
MzQsICAgICAzNSwgICAgIDM3LCAgICAgMzksICAgICA0MSwNCisJCQkJCSAgICAgICA0MywgNDcs
IDUxLCA1OSwgNjcsIDgzLCA5OSwgMHg4MywgMHgxMDMsIDB4MjAzLCAweDQwMywgMHg4MDMsIDB4
MTAwMywgMHgyMDAzLCAweDQwMDMsIDB4ODAwMywgMHgxMDAwM307DQorDQorCXN0YXRpYyBjb25z
dCBVMzIgT0ZfYmFzZVtNYXhPZmYgKyAxXSA9IHswLCAgICAgICAxLAkxLAk1LAkweEQsICAgICAg
MHgxRCwgICAgICAweDNELCAgICAgIDB4N0QsICAgICAgMHhGRCwgICAgIDB4MUZELA0KKwkJCQkJ
CTB4M0ZELCAgIDB4N0ZELCAgICAweEZGRCwgICAgMHgxRkZELCAgIDB4M0ZGRCwgICAweDdGRkQs
ICAgIDB4RkZGRCwgICAgMHgxRkZGRCwgICAweDNGRkZELCAgMHg3RkZGRCwNCisJCQkJCQkweEZG
RkZELCAweDFGRkZGRCwgMHgzRkZGRkQsIDB4N0ZGRkZELCAweEZGRkZGRCwgMHgxRkZGRkZELCAw
eDNGRkZGRkQsIDB4N0ZGRkZGRCwgMHhGRkZGRkZEfTsNCisNCisJLyogc2VxdWVuY2UgKi8NCisJ
ew0KKwkJc2l6ZV90IG9mZnNldDsNCisJCWlmICghb2ZDb2RlKQ0KKwkJCW9mZnNldCA9IDA7DQor
CQllbHNlIHsNCisJCQlvZmZzZXQgPSBPRl9iYXNlW29mQ29kZV0gKyBCSVRfcmVhZEJpdHNGYXN0
KCZzZXFTdGF0ZS0+RFN0cmVhbSwgb2ZCaXRzKTsgLyogPD0gIChaU1REX1dJTkRPV0xPR19NQVgt
MSkgYml0cyAqLw0KKwkJCWlmIChaU1REXzMyYml0cygpKQ0KKwkJCQlCSVRfcmVsb2FkRFN0cmVh
bSgmc2VxU3RhdGUtPkRTdHJlYW0pOw0KKwkJfQ0KKw0KKwkJaWYgKG9mQ29kZSA8PSAxKSB7DQor
CQkJb2Zmc2V0ICs9IChsbENvZGUgPT0gMCk7DQorCQkJaWYgKG9mZnNldCkgew0KKwkJCQlzaXpl
X3QgdGVtcCA9IChvZmZzZXQgPT0gMykgPyBzZXFTdGF0ZS0+cHJldk9mZnNldFswXSAtIDEgOiBz
ZXFTdGF0ZS0+cHJldk9mZnNldFtvZmZzZXRdOw0KKwkJCQl0ZW1wICs9ICF0ZW1wOyAvKiAwIGlz
IG5vdCB2YWxpZDsgaW5wdXQgaXMgY29ycnVwdGVkOyBmb3JjZSBvZmZzZXQgdG8gMSAqLw0KKwkJ
CQlpZiAob2Zmc2V0ICE9IDEpDQorCQkJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFsyXSA9IHNlcVN0
YXRlLT5wcmV2T2Zmc2V0WzFdOw0KKwkJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFsxXSA9IHNlcVN0
YXRlLT5wcmV2T2Zmc2V0WzBdOw0KKwkJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFswXSA9IG9mZnNl
dCA9IHRlbXA7DQorCQkJfSBlbHNlIHsNCisJCQkJb2Zmc2V0ID0gc2VxU3RhdGUtPnByZXZPZmZz
ZXRbMF07DQorCQkJfQ0KKwkJfSBlbHNlIHsNCisJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFsyXSA9
IHNlcVN0YXRlLT5wcmV2T2Zmc2V0WzFdOw0KKwkJCXNlcVN0YXRlLT5wcmV2T2Zmc2V0WzFdID0g
c2VxU3RhdGUtPnByZXZPZmZzZXRbMF07DQorCQkJc2VxU3RhdGUtPnByZXZPZmZzZXRbMF0gPSBv
ZmZzZXQ7DQorCQl9DQorCQlzZXEub2Zmc2V0ID0gb2Zmc2V0Ow0KKwl9DQorDQorCXNlcS5tYXRj
aExlbmd0aCA9IE1MX2Jhc2VbbWxDb2RlXSArICgobWxDb2RlID4gMzEpID8gQklUX3JlYWRCaXRz
RmFzdCgmc2VxU3RhdGUtPkRTdHJlYW0sIG1sQml0cykgOiAwKTsgLyogPD0gIDE2IGJpdHMgKi8N
CisJaWYgKFpTVERfMzJiaXRzKCkgJiYgKG1sQml0cyArIGxsQml0cyA+IDI0KSkNCisJCUJJVF9y
ZWxvYWREU3RyZWFtKCZzZXFTdGF0ZS0+RFN0cmVhbSk7DQorDQorCXNlcS5saXRMZW5ndGggPSBM
TF9iYXNlW2xsQ29kZV0gKyAoKGxsQ29kZSA+IDE1KSA/IEJJVF9yZWFkQml0c0Zhc3QoJnNlcVN0
YXRlLT5EU3RyZWFtLCBsbEJpdHMpIDogMCk7IC8qIDw9ICAxNiBiaXRzICovDQorCWlmIChaU1RE
XzMyYml0cygpIHx8ICh0b3RhbEJpdHMgPiA2NCAtIDcgLSAoTExGU0VMb2cgKyBNTEZTRUxvZyAr
IE9mZkZTRUxvZykpKQ0KKwkJQklUX3JlbG9hZERTdHJlYW0oJnNlcVN0YXRlLT5EU3RyZWFtKTsN
CisNCisJLyogQU5TIHN0YXRlIHVwZGF0ZSAqLw0KKwlGU0VfdXBkYXRlU3RhdGUoJnNlcVN0YXRl
LT5zdGF0ZUxMLCAmc2VxU3RhdGUtPkRTdHJlYW0pOyAvKiA8PSAgOSBiaXRzICovDQorCUZTRV91
cGRhdGVTdGF0ZSgmc2VxU3RhdGUtPnN0YXRlTUwsICZzZXFTdGF0ZS0+RFN0cmVhbSk7IC8qIDw9
ICA5IGJpdHMgKi8NCisJaWYgKFpTVERfMzJiaXRzKCkpDQorCQlCSVRfcmVsb2FkRFN0cmVhbSgm
c2VxU3RhdGUtPkRTdHJlYW0pOwkJICAgLyogPD0gMTggYml0cyAqLw0KKwlGU0VfdXBkYXRlU3Rh
dGUoJnNlcVN0YXRlLT5zdGF0ZU9mZmIsICZzZXFTdGF0ZS0+RFN0cmVhbSk7IC8qIDw9ICA4IGJp
dHMgKi8NCisNCisJc2VxLm1hdGNoID0gTlVMTDsNCisNCisJcmV0dXJuIHNlcTsNCit9DQorDQor
Rk9SQ0VfSU5MSU5FDQorc2l6ZV90IFpTVERfZXhlY1NlcXVlbmNlKEJZVEUgKm9wLCBCWVRFICpj
b25zdCBvZW5kLCBzZXFfdCBzZXF1ZW5jZSwgY29uc3QgQllURSAqKmxpdFB0ciwgY29uc3QgQllU
RSAqY29uc3QgbGl0TGltaXQsIGNvbnN0IEJZVEUgKmNvbnN0IGJhc2UsDQorCQkJIGNvbnN0IEJZ
VEUgKmNvbnN0IHZCYXNlLCBjb25zdCBCWVRFICpjb25zdCBkaWN0RW5kKQ0KK3sNCisJQllURSAq
Y29uc3Qgb0xpdEVuZCA9IG9wICsgc2VxdWVuY2UubGl0TGVuZ3RoOw0KKwlzaXplX3QgY29uc3Qg
c2VxdWVuY2VMZW5ndGggPSBzZXF1ZW5jZS5saXRMZW5ndGggKyBzZXF1ZW5jZS5tYXRjaExlbmd0
aDsNCisJQllURSAqY29uc3Qgb01hdGNoRW5kID0gb3AgKyBzZXF1ZW5jZUxlbmd0aDsgLyogcmlz
ayA6IGFkZHJlc3Mgc3BhY2Ugb3ZlcmZsb3cgKDMyLWJpdHMpICovDQorCUJZVEUgKmNvbnN0IG9l
bmRfdyA9IG9lbmQgLSBXSUxEQ09QWV9PVkVSTEVOR1RIOw0KKwljb25zdCBCWVRFICpjb25zdCBp
TGl0RW5kID0gKmxpdFB0ciArIHNlcXVlbmNlLmxpdExlbmd0aDsNCisJY29uc3QgQllURSAqbWF0
Y2ggPSBvTGl0RW5kIC0gc2VxdWVuY2Uub2Zmc2V0Ow0KKw0KKwkvKiBjaGVjayAqLw0KKwlpZiAo
b01hdGNoRW5kID4gb2VuZCkNCisJCXJldHVybiBFUlJPUihkc3RTaXplX3Rvb1NtYWxsKTsgLyog
bGFzdCBtYXRjaCBtdXN0IHN0YXJ0IGF0IGEgbWluaW11bSBkaXN0YW5jZSBvZiBXSUxEQ09QWV9P
VkVSTEVOR1RIIGZyb20gb2VuZCAqLw0KKwlpZiAoaUxpdEVuZCA+IGxpdExpbWl0KQ0KKwkJcmV0
dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOyAvKiBvdmVyLXJlYWQgYmV5b25kIGxpdCBi
dWZmZXIgKi8NCisJaWYgKG9MaXRFbmQgPiBvZW5kX3cpDQorCQlyZXR1cm4gWlNURF9leGVjU2Vx
dWVuY2VMYXN0NyhvcCwgb2VuZCwgc2VxdWVuY2UsIGxpdFB0ciwgbGl0TGltaXQsIGJhc2UsIHZC
YXNlLCBkaWN0RW5kKTsNCisNCisJLyogY29weSBMaXRlcmFscyAqLw0KKwlaU1REX2NvcHk4KG9w
LCAqbGl0UHRyKTsNCisJaWYgKHNlcXVlbmNlLmxpdExlbmd0aCA+IDgpDQorCQlaU1REX3dpbGRj
b3B5KG9wICsgOCwgKCpsaXRQdHIpICsgOCwNCisJCQkgICAgICBzZXF1ZW5jZS5saXRMZW5ndGgg
LSA4KTsgLyogbm90ZSA6IHNpbmNlIG9MaXRFbmQgPD0gb2VuZC1XSUxEQ09QWV9PVkVSTEVOR1RI
LCBubyByaXNrIG9mIG92ZXJ3cml0ZSBiZXlvbmQgb2VuZCAqLw0KKwlvcCA9IG9MaXRFbmQ7DQor
CSpsaXRQdHIgPSBpTGl0RW5kOyAvKiB1cGRhdGUgZm9yIG5leHQgc2VxdWVuY2UgKi8NCisNCisJ
LyogY29weSBNYXRjaCAqLw0KKwlpZiAoc2VxdWVuY2Uub2Zmc2V0ID4gKHNpemVfdCkob0xpdEVu
ZCAtIGJhc2UpKSB7DQorCQkvKiBvZmZzZXQgYmV5b25kIHByZWZpeCAqLw0KKwkJaWYgKHNlcXVl
bmNlLm9mZnNldCA+IChzaXplX3QpKG9MaXRFbmQgLSB2QmFzZSkpDQorCQkJcmV0dXJuIEVSUk9S
KGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKwkJbWF0Y2ggPSBkaWN0RW5kICsgKG1hdGNoIC0gYmFz
ZSk7DQorCQlpZiAobWF0Y2ggKyBzZXF1ZW5jZS5tYXRjaExlbmd0aCA8PSBkaWN0RW5kKSB7DQor
CQkJbWVtbW92ZShvTGl0RW5kLCBtYXRjaCwgc2VxdWVuY2UubWF0Y2hMZW5ndGgpOw0KKwkJCXJl
dHVybiBzZXF1ZW5jZUxlbmd0aDsNCisJCX0NCisJCS8qIHNwYW4gZXh0RGljdCAmIGN1cnJQcmVm
aXhTZWdtZW50ICovDQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IGxlbmd0aDEgPSBkaWN0RW5kIC0g
bWF0Y2g7DQorCQkJbWVtbW92ZShvTGl0RW5kLCBtYXRjaCwgbGVuZ3RoMSk7DQorCQkJb3AgPSBv
TGl0RW5kICsgbGVuZ3RoMTsNCisJCQlzZXF1ZW5jZS5tYXRjaExlbmd0aCAtPSBsZW5ndGgxOw0K
KwkJCW1hdGNoID0gYmFzZTsNCisJCQlpZiAob3AgPiBvZW5kX3cgfHwgc2VxdWVuY2UubWF0Y2hM
ZW5ndGggPCBNSU5NQVRDSCkgew0KKwkJCQlVMzIgaTsNCisJCQkJZm9yIChpID0gMDsgaSA8IHNl
cXVlbmNlLm1hdGNoTGVuZ3RoOyArK2kpDQorCQkJCQlvcFtpXSA9IG1hdGNoW2ldOw0KKwkJCQly
ZXR1cm4gc2VxdWVuY2VMZW5ndGg7DQorCQkJfQ0KKwkJfQ0KKwl9DQorCS8qIFJlcXVpcmVtZW50
OiBvcCA8PSBvZW5kX3cgJiYgc2VxdWVuY2UubWF0Y2hMZW5ndGggPj0gTUlOTUFUQ0ggKi8NCisN
CisJLyogbWF0Y2ggd2l0aGluIHByZWZpeCAqLw0KKwlpZiAoc2VxdWVuY2Uub2Zmc2V0IDwgOCkg
ew0KKwkJLyogY2xvc2UgcmFuZ2UgbWF0Y2gsIG92ZXJsYXAgKi8NCisJCXN0YXRpYyBjb25zdCBV
MzIgZGVjMzJ0YWJsZVtdID0gezAsIDEsIDIsIDEsIDQsIDQsIDQsIDR9OyAgIC8qIGFkZGVkICov
DQorCQlzdGF0aWMgY29uc3QgaW50IGRlYzY0dGFibGVbXSA9IHs4LCA4LCA4LCA3LCA4LCA5LCAx
MCwgMTF9OyAvKiBzdWJ0cmFjdGVkICovDQorCQlpbnQgY29uc3Qgc3ViMiA9IGRlYzY0dGFibGVb
c2VxdWVuY2Uub2Zmc2V0XTsNCisJCW9wWzBdID0gbWF0Y2hbMF07DQorCQlvcFsxXSA9IG1hdGNo
WzFdOw0KKwkJb3BbMl0gPSBtYXRjaFsyXTsNCisJCW9wWzNdID0gbWF0Y2hbM107DQorCQltYXRj
aCArPSBkZWMzMnRhYmxlW3NlcXVlbmNlLm9mZnNldF07DQorCQlaU1REX2NvcHk0KG9wICsgNCwg
bWF0Y2gpOw0KKwkJbWF0Y2ggLT0gc3ViMjsNCisJfSBlbHNlIHsNCisJCVpTVERfY29weTgob3As
IG1hdGNoKTsNCisJfQ0KKwlvcCArPSA4Ow0KKwltYXRjaCArPSA4Ow0KKw0KKwlpZiAob01hdGNo
RW5kID4gb2VuZCAtICgxNiAtIE1JTk1BVENIKSkgew0KKwkJaWYgKG9wIDwgb2VuZF93KSB7DQor
CQkJWlNURF93aWxkY29weShvcCwgbWF0Y2gsIG9lbmRfdyAtIG9wKTsNCisJCQltYXRjaCArPSBv
ZW5kX3cgLSBvcDsNCisJCQlvcCA9IG9lbmRfdzsNCisJCX0NCisJCXdoaWxlIChvcCA8IG9NYXRj
aEVuZCkNCisJCQkqb3ArKyA9ICptYXRjaCsrOw0KKwl9IGVsc2Ugew0KKwkJWlNURF93aWxkY29w
eShvcCwgbWF0Y2gsIChwdHJkaWZmX3Qpc2VxdWVuY2UubWF0Y2hMZW5ndGggLSA4KTsgLyogd29y
a3MgZXZlbiBpZiBtYXRjaExlbmd0aCA8IDggKi8NCisJfQ0KKwlyZXR1cm4gc2VxdWVuY2VMZW5n
dGg7DQorfQ0KKw0KK3N0YXRpYyBzaXplX3QgWlNURF9kZWNvbXByZXNzU2VxdWVuY2VzKFpTVERf
REN0eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qgdm9pZCAqc2Vx
U3RhcnQsIHNpemVfdCBzZXFTaXplKQ0KK3sNCisJY29uc3QgQllURSAqaXAgPSAoY29uc3QgQllU
RSAqKXNlcVN0YXJ0Ow0KKwljb25zdCBCWVRFICpjb25zdCBpZW5kID0gaXAgKyBzZXFTaXplOw0K
KwlCWVRFICpjb25zdCBvc3RhcnQgPSAoQllURSAqIGNvbnN0KWRzdDsNCisJQllURSAqY29uc3Qg
b2VuZCA9IG9zdGFydCArIG1heERzdFNpemU7DQorCUJZVEUgKm9wID0gb3N0YXJ0Ow0KKwljb25z
dCBCWVRFICpsaXRQdHIgPSBkY3R4LT5saXRQdHI7DQorCWNvbnN0IEJZVEUgKmNvbnN0IGxpdEVu
ZCA9IGxpdFB0ciArIGRjdHgtPmxpdFNpemU7DQorCWNvbnN0IEJZVEUgKmNvbnN0IGJhc2UgPSAo
Y29uc3QgQllURSAqKShkY3R4LT5iYXNlKTsNCisJY29uc3QgQllURSAqY29uc3QgdkJhc2UgPSAo
Y29uc3QgQllURSAqKShkY3R4LT52QmFzZSk7DQorCWNvbnN0IEJZVEUgKmNvbnN0IGRpY3RFbmQg
PSAoY29uc3QgQllURSAqKShkY3R4LT5kaWN0RW5kKTsNCisJaW50IG5iU2VxOw0KKw0KKwkvKiBC
dWlsZCBEZWNvZGluZyBUYWJsZXMgKi8NCisJew0KKwkJc2l6ZV90IGNvbnN0IHNlcUhTaXplID0g
WlNURF9kZWNvZGVTZXFIZWFkZXJzKGRjdHgsICZuYlNlcSwgaXAsIHNlcVNpemUpOw0KKwkJaWYg
KFpTVERfaXNFcnJvcihzZXFIU2l6ZSkpDQorCQkJcmV0dXJuIHNlcUhTaXplOw0KKwkJaXAgKz0g
c2VxSFNpemU7DQorCX0NCisNCisJLyogUmVnZW4gc2VxdWVuY2VzICovDQorCWlmIChuYlNlcSkg
ew0KKwkJc2VxU3RhdGVfdCBzZXFTdGF0ZTsNCisJCWRjdHgtPmZzZUVudHJvcHkgPSAxOw0KKwkJ
ew0KKwkJCVUzMiBpOw0KKwkJCWZvciAoaSA9IDA7IGkgPCBaU1REX1JFUF9OVU07IGkrKykNCisJ
CQkJc2VxU3RhdGUucHJldk9mZnNldFtpXSA9IGRjdHgtPmVudHJvcHkucmVwW2ldOw0KKwkJfQ0K
KwkJQ0hFQ0tfRShCSVRfaW5pdERTdHJlYW0oJnNlcVN0YXRlLkRTdHJlYW0sIGlwLCBpZW5kIC0g
aXApLCBjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCUZTRV9pbml0RFN0YXRlKCZzZXFTdGF0ZS5z
dGF0ZUxMLCAmc2VxU3RhdGUuRFN0cmVhbSwgZGN0eC0+TExUcHRyKTsNCisJCUZTRV9pbml0RFN0
YXRlKCZzZXFTdGF0ZS5zdGF0ZU9mZmIsICZzZXFTdGF0ZS5EU3RyZWFtLCBkY3R4LT5PRlRwdHIp
Ow0KKwkJRlNFX2luaXREU3RhdGUoJnNlcVN0YXRlLnN0YXRlTUwsICZzZXFTdGF0ZS5EU3RyZWFt
LCBkY3R4LT5NTFRwdHIpOw0KKw0KKwkJZm9yICg7IChCSVRfcmVsb2FkRFN0cmVhbSgmKHNlcVN0
YXRlLkRTdHJlYW0pKSA8PSBCSVRfRFN0cmVhbV9jb21wbGV0ZWQpICYmIG5iU2VxOykgew0KKwkJ
CW5iU2VxLS07DQorCQkJew0KKwkJCQlzZXFfdCBjb25zdCBzZXF1ZW5jZSA9IFpTVERfZGVjb2Rl
U2VxdWVuY2UoJnNlcVN0YXRlKTsNCisJCQkJc2l6ZV90IGNvbnN0IG9uZVNlcVNpemUgPSBaU1RE
X2V4ZWNTZXF1ZW5jZShvcCwgb2VuZCwgc2VxdWVuY2UsICZsaXRQdHIsIGxpdEVuZCwgYmFzZSwg
dkJhc2UsIGRpY3RFbmQpOw0KKwkJCQlpZiAoWlNURF9pc0Vycm9yKG9uZVNlcVNpemUpKQ0KKwkJ
CQkJcmV0dXJuIG9uZVNlcVNpemU7DQorCQkJCW9wICs9IG9uZVNlcVNpemU7DQorCQkJfQ0KKwkJ
fQ0KKw0KKwkJLyogY2hlY2sgaWYgcmVhY2hlZCBleGFjdCBlbmQgKi8NCisJCWlmIChuYlNlcSkN
CisJCQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQkvKiBzYXZlIHJlcHMg
Zm9yIG5leHQgYmxvY2sgKi8NCisJCXsNCisJCQlVMzIgaTsNCisJCQlmb3IgKGkgPSAwOyBpIDwg
WlNURF9SRVBfTlVNOyBpKyspDQorCQkJCWRjdHgtPmVudHJvcHkucmVwW2ldID0gKFUzMikoc2Vx
U3RhdGUucHJldk9mZnNldFtpXSk7DQorCQl9DQorCX0NCisNCisJLyogbGFzdCBsaXRlcmFsIHNl
Z21lbnQgKi8NCisJew0KKwkJc2l6ZV90IGNvbnN0IGxhc3RMTFNpemUgPSBsaXRFbmQgLSBsaXRQ
dHI7DQorCQlpZiAobGFzdExMU2l6ZSA+IChzaXplX3QpKG9lbmQgLSBvcCkpDQorCQkJcmV0dXJu
IEVSUk9SKGRzdFNpemVfdG9vU21hbGwpOw0KKwkJbWVtY3B5KG9wLCBsaXRQdHIsIGxhc3RMTFNp
emUpOw0KKwkJb3AgKz0gbGFzdExMU2l6ZTsNCisJfQ0KKw0KKwlyZXR1cm4gb3AgLSBvc3RhcnQ7
DQorfQ0KKw0KK0ZPUkNFX0lOTElORSBzZXFfdCBaU1REX2RlY29kZVNlcXVlbmNlTG9uZ19nZW5l
cmljKHNlcVN0YXRlX3QgKnNlcVN0YXRlLCBpbnQgY29uc3QgbG9uZ09mZnNldHMpDQorew0KKwlz
ZXFfdCBzZXE7DQorDQorCVUzMiBjb25zdCBsbENvZGUgPSBGU0VfcGVla1N5bWJvbCgmc2VxU3Rh
dGUtPnN0YXRlTEwpOw0KKwlVMzIgY29uc3QgbWxDb2RlID0gRlNFX3BlZWtTeW1ib2woJnNlcVN0
YXRlLT5zdGF0ZU1MKTsNCisJVTMyIGNvbnN0IG9mQ29kZSA9IEZTRV9wZWVrU3ltYm9sKCZzZXFT
dGF0ZS0+c3RhdGVPZmZiKTsgLyogPD0gbWF4T2ZmLCBieSB0YWJsZSBjb25zdHJ1Y3Rpb24gKi8N
CisNCisJVTMyIGNvbnN0IGxsQml0cyA9IExMX2JpdHNbbGxDb2RlXTsNCisJVTMyIGNvbnN0IG1s
Qml0cyA9IE1MX2JpdHNbbWxDb2RlXTsNCisJVTMyIGNvbnN0IG9mQml0cyA9IG9mQ29kZTsNCisJ
VTMyIGNvbnN0IHRvdGFsQml0cyA9IGxsQml0cyArIG1sQml0cyArIG9mQml0czsNCisNCisJc3Rh
dGljIGNvbnN0IFUzMiBMTF9iYXNlW01heExMICsgMV0gPSB7MCwgIDEsICAyLCAgMywgIDQsICA1
LCAgNiwgIDcsICA4LCAgICA5LCAgICAgMTAsICAgIDExLCAgICAxMiwgICAgMTMsICAgICAxNCwg
ICAgIDE1LCAgICAgMTYsICAgICAxOCwNCisJCQkJCSAgICAgICAyMCwgMjIsIDI0LCAyOCwgMzIs
IDQwLCA0OCwgNjQsIDB4ODAsIDB4MTAwLCAweDIwMCwgMHg0MDAsIDB4ODAwLCAweDEwMDAsIDB4
MjAwMCwgMHg0MDAwLCAweDgwMDAsIDB4MTAwMDB9Ow0KKw0KKwlzdGF0aWMgY29uc3QgVTMyIE1M
X2Jhc2VbTWF4TUwgKyAxXSA9IHszLCAgNCwgIDUsICA2LCAgNywgIDgsICA5LCAgMTAsICAgMTEs
ICAgIDEyLCAgICAxMywgICAgMTQsICAgIDE1LCAgICAgMTYsICAgICAxNywgICAgIDE4LCAgICAg
MTksICAgICAyMCwNCisJCQkJCSAgICAgICAyMSwgMjIsIDIzLCAyNCwgMjUsIDI2LCAyNywgMjgs
ICAgMjksICAgIDMwLCAgICAzMSwgICAgMzIsICAgIDMzLCAgICAgMzQsICAgICAzNSwgICAgIDM3
LCAgICAgMzksICAgICA0MSwNCisJCQkJCSAgICAgICA0MywgNDcsIDUxLCA1OSwgNjcsIDgzLCA5
OSwgMHg4MywgMHgxMDMsIDB4MjAzLCAweDQwMywgMHg4MDMsIDB4MTAwMywgMHgyMDAzLCAweDQw
MDMsIDB4ODAwMywgMHgxMDAwM307DQorDQorCXN0YXRpYyBjb25zdCBVMzIgT0ZfYmFzZVtNYXhP
ZmYgKyAxXSA9IHswLCAgICAgICAxLAkxLAk1LAkweEQsICAgICAgMHgxRCwgICAgICAweDNELCAg
ICAgIDB4N0QsICAgICAgMHhGRCwgICAgIDB4MUZELA0KKwkJCQkJCTB4M0ZELCAgIDB4N0ZELCAg
ICAweEZGRCwgICAgMHgxRkZELCAgIDB4M0ZGRCwgICAweDdGRkQsICAgIDB4RkZGRCwgICAgMHgx
RkZGRCwgICAweDNGRkZELCAgMHg3RkZGRCwNCisJCQkJCQkweEZGRkZELCAweDFGRkZGRCwgMHgz
RkZGRkQsIDB4N0ZGRkZELCAweEZGRkZGRCwgMHgxRkZGRkZELCAweDNGRkZGRkQsIDB4N0ZGRkZG
RCwgMHhGRkZGRkZEfTsNCisNCisJLyogc2VxdWVuY2UgKi8NCisJew0KKwkJc2l6ZV90IG9mZnNl
dDsNCisJCWlmICghb2ZDb2RlKQ0KKwkJCW9mZnNldCA9IDA7DQorCQllbHNlIHsNCisJCQlpZiAo
bG9uZ09mZnNldHMpIHsNCisJCQkJaW50IGNvbnN0IGV4dHJhQml0cyA9IG9mQml0cyAtIE1JTihv
ZkJpdHMsIFNUUkVBTV9BQ0NVTVVMQVRPUl9NSU4pOw0KKwkJCQlvZmZzZXQgPSBPRl9iYXNlW29m
Q29kZV0gKyAoQklUX3JlYWRCaXRzRmFzdCgmc2VxU3RhdGUtPkRTdHJlYW0sIG9mQml0cyAtIGV4
dHJhQml0cykgPDwgZXh0cmFCaXRzKTsNCisJCQkJaWYgKFpTVERfMzJiaXRzKCkgfHwgZXh0cmFC
aXRzKQ0KKwkJCQkJQklUX3JlbG9hZERTdHJlYW0oJnNlcVN0YXRlLT5EU3RyZWFtKTsNCisJCQkJ
aWYgKGV4dHJhQml0cykNCisJCQkJCW9mZnNldCArPSBCSVRfcmVhZEJpdHNGYXN0KCZzZXFTdGF0
ZS0+RFN0cmVhbSwgZXh0cmFCaXRzKTsNCisJCQl9IGVsc2Ugew0KKwkJCQlvZmZzZXQgPSBPRl9i
YXNlW29mQ29kZV0gKyBCSVRfcmVhZEJpdHNGYXN0KCZzZXFTdGF0ZS0+RFN0cmVhbSwgb2ZCaXRz
KTsgLyogPD0gIChaU1REX1dJTkRPV0xPR19NQVgtMSkgYml0cyAqLw0KKwkJCQlpZiAoWlNURF8z
MmJpdHMoKSkNCisJCQkJCUJJVF9yZWxvYWREU3RyZWFtKCZzZXFTdGF0ZS0+RFN0cmVhbSk7DQor
CQkJfQ0KKwkJfQ0KKw0KKwkJaWYgKG9mQ29kZSA8PSAxKSB7DQorCQkJb2Zmc2V0ICs9IChsbENv
ZGUgPT0gMCk7DQorCQkJaWYgKG9mZnNldCkgew0KKwkJCQlzaXplX3QgdGVtcCA9IChvZmZzZXQg
PT0gMykgPyBzZXFTdGF0ZS0+cHJldk9mZnNldFswXSAtIDEgOiBzZXFTdGF0ZS0+cHJldk9mZnNl
dFtvZmZzZXRdOw0KKwkJCQl0ZW1wICs9ICF0ZW1wOyAvKiAwIGlzIG5vdCB2YWxpZDsgaW5wdXQg
aXMgY29ycnVwdGVkOyBmb3JjZSBvZmZzZXQgdG8gMSAqLw0KKwkJCQlpZiAob2Zmc2V0ICE9IDEp
DQorCQkJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFsyXSA9IHNlcVN0YXRlLT5wcmV2T2Zmc2V0WzFd
Ow0KKwkJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFsxXSA9IHNlcVN0YXRlLT5wcmV2T2Zmc2V0WzBd
Ow0KKwkJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFswXSA9IG9mZnNldCA9IHRlbXA7DQorCQkJfSBl
bHNlIHsNCisJCQkJb2Zmc2V0ID0gc2VxU3RhdGUtPnByZXZPZmZzZXRbMF07DQorCQkJfQ0KKwkJ
fSBlbHNlIHsNCisJCQlzZXFTdGF0ZS0+cHJldk9mZnNldFsyXSA9IHNlcVN0YXRlLT5wcmV2T2Zm
c2V0WzFdOw0KKwkJCXNlcVN0YXRlLT5wcmV2T2Zmc2V0WzFdID0gc2VxU3RhdGUtPnByZXZPZmZz
ZXRbMF07DQorCQkJc2VxU3RhdGUtPnByZXZPZmZzZXRbMF0gPSBvZmZzZXQ7DQorCQl9DQorCQlz
ZXEub2Zmc2V0ID0gb2Zmc2V0Ow0KKwl9DQorDQorCXNlcS5tYXRjaExlbmd0aCA9IE1MX2Jhc2Vb
bWxDb2RlXSArICgobWxDb2RlID4gMzEpID8gQklUX3JlYWRCaXRzRmFzdCgmc2VxU3RhdGUtPkRT
dHJlYW0sIG1sQml0cykgOiAwKTsgLyogPD0gIDE2IGJpdHMgKi8NCisJaWYgKFpTVERfMzJiaXRz
KCkgJiYgKG1sQml0cyArIGxsQml0cyA+IDI0KSkNCisJCUJJVF9yZWxvYWREU3RyZWFtKCZzZXFT
dGF0ZS0+RFN0cmVhbSk7DQorDQorCXNlcS5saXRMZW5ndGggPSBMTF9iYXNlW2xsQ29kZV0gKyAo
KGxsQ29kZSA+IDE1KSA/IEJJVF9yZWFkQml0c0Zhc3QoJnNlcVN0YXRlLT5EU3RyZWFtLCBsbEJp
dHMpIDogMCk7IC8qIDw9ICAxNiBiaXRzICovDQorCWlmIChaU1REXzMyYml0cygpIHx8ICh0b3Rh
bEJpdHMgPiA2NCAtIDcgLSAoTExGU0VMb2cgKyBNTEZTRUxvZyArIE9mZkZTRUxvZykpKQ0KKwkJ
QklUX3JlbG9hZERTdHJlYW0oJnNlcVN0YXRlLT5EU3RyZWFtKTsNCisNCisJew0KKwkJc2l6ZV90
IGNvbnN0IHBvcyA9IHNlcVN0YXRlLT5wb3MgKyBzZXEubGl0TGVuZ3RoOw0KKwkJc2VxLm1hdGNo
ID0gc2VxU3RhdGUtPmJhc2UgKyBwb3MgLSBzZXEub2Zmc2V0OyAvKiBzaW5nbGUgbWVtb3J5IHNl
Z21lbnQgKi8NCisJCWlmIChzZXEub2Zmc2V0ID4gcG9zKQ0KKwkJCXNlcS5tYXRjaCArPSBzZXFT
dGF0ZS0+Z290b0RpY3Q7IC8qIHNlcGFyYXRlIG1lbW9yeSBzZWdtZW50ICovDQorCQlzZXFTdGF0
ZS0+cG9zID0gcG9zICsgc2VxLm1hdGNoTGVuZ3RoOw0KKwl9DQorDQorCS8qIEFOUyBzdGF0ZSB1
cGRhdGUgKi8NCisJRlNFX3VwZGF0ZVN0YXRlKCZzZXFTdGF0ZS0+c3RhdGVMTCwgJnNlcVN0YXRl
LT5EU3RyZWFtKTsgLyogPD0gIDkgYml0cyAqLw0KKwlGU0VfdXBkYXRlU3RhdGUoJnNlcVN0YXRl
LT5zdGF0ZU1MLCAmc2VxU3RhdGUtPkRTdHJlYW0pOyAvKiA8PSAgOSBiaXRzICovDQorCWlmICha
U1REXzMyYml0cygpKQ0KKwkJQklUX3JlbG9hZERTdHJlYW0oJnNlcVN0YXRlLT5EU3RyZWFtKTsJ
CSAgIC8qIDw9IDE4IGJpdHMgKi8NCisJRlNFX3VwZGF0ZVN0YXRlKCZzZXFTdGF0ZS0+c3RhdGVP
ZmZiLCAmc2VxU3RhdGUtPkRTdHJlYW0pOyAvKiA8PSAgOCBiaXRzICovDQorDQorCXJldHVybiBz
ZXE7DQorfQ0KKw0KK3N0YXRpYyBzZXFfdCBaU1REX2RlY29kZVNlcXVlbmNlTG9uZyhzZXFTdGF0
ZV90ICpzZXFTdGF0ZSwgdW5zaWduZWQgY29uc3Qgd2luZG93U2l6ZSkNCit7DQorCWlmIChaU1RE
X2hpZ2hiaXQzMih3aW5kb3dTaXplKSA+IFNUUkVBTV9BQ0NVTVVMQVRPUl9NSU4pIHsNCisJCXJl
dHVybiBaU1REX2RlY29kZVNlcXVlbmNlTG9uZ19nZW5lcmljKHNlcVN0YXRlLCAxKTsNCisJfSBl
bHNlIHsNCisJCXJldHVybiBaU1REX2RlY29kZVNlcXVlbmNlTG9uZ19nZW5lcmljKHNlcVN0YXRl
LCAwKTsNCisJfQ0KK30NCisNCitGT1JDRV9JTkxJTkUNCitzaXplX3QgWlNURF9leGVjU2VxdWVu
Y2VMb25nKEJZVEUgKm9wLCBCWVRFICpjb25zdCBvZW5kLCBzZXFfdCBzZXF1ZW5jZSwgY29uc3Qg
QllURSAqKmxpdFB0ciwgY29uc3QgQllURSAqY29uc3QgbGl0TGltaXQsIGNvbnN0IEJZVEUgKmNv
bnN0IGJhc2UsDQorCQkJICAgICBjb25zdCBCWVRFICpjb25zdCB2QmFzZSwgY29uc3QgQllURSAq
Y29uc3QgZGljdEVuZCkNCit7DQorCUJZVEUgKmNvbnN0IG9MaXRFbmQgPSBvcCArIHNlcXVlbmNl
LmxpdExlbmd0aDsNCisJc2l6ZV90IGNvbnN0IHNlcXVlbmNlTGVuZ3RoID0gc2VxdWVuY2UubGl0
TGVuZ3RoICsgc2VxdWVuY2UubWF0Y2hMZW5ndGg7DQorCUJZVEUgKmNvbnN0IG9NYXRjaEVuZCA9
IG9wICsgc2VxdWVuY2VMZW5ndGg7IC8qIHJpc2sgOiBhZGRyZXNzIHNwYWNlIG92ZXJmbG93ICgz
Mi1iaXRzKSAqLw0KKwlCWVRFICpjb25zdCBvZW5kX3cgPSBvZW5kIC0gV0lMRENPUFlfT1ZFUkxF
TkdUSDsNCisJY29uc3QgQllURSAqY29uc3QgaUxpdEVuZCA9ICpsaXRQdHIgKyBzZXF1ZW5jZS5s
aXRMZW5ndGg7DQorCWNvbnN0IEJZVEUgKm1hdGNoID0gc2VxdWVuY2UubWF0Y2g7DQorDQorCS8q
IGNoZWNrICovDQorCWlmIChvTWF0Y2hFbmQgPiBvZW5kKQ0KKwkJcmV0dXJuIEVSUk9SKGRzdFNp
emVfdG9vU21hbGwpOyAvKiBsYXN0IG1hdGNoIG11c3Qgc3RhcnQgYXQgYSBtaW5pbXVtIGRpc3Rh
bmNlIG9mIFdJTERDT1BZX09WRVJMRU5HVEggZnJvbSBvZW5kICovDQorCWlmIChpTGl0RW5kID4g
bGl0TGltaXQpDQorCQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7IC8qIG92ZXIt
cmVhZCBiZXlvbmQgbGl0IGJ1ZmZlciAqLw0KKwlpZiAob0xpdEVuZCA+IG9lbmRfdykNCisJCXJl
dHVybiBaU1REX2V4ZWNTZXF1ZW5jZUxhc3Q3KG9wLCBvZW5kLCBzZXF1ZW5jZSwgbGl0UHRyLCBs
aXRMaW1pdCwgYmFzZSwgdkJhc2UsIGRpY3RFbmQpOw0KKw0KKwkvKiBjb3B5IExpdGVyYWxzICov
DQorCVpTVERfY29weTgob3AsICpsaXRQdHIpOw0KKwlpZiAoc2VxdWVuY2UubGl0TGVuZ3RoID4g
OCkNCisJCVpTVERfd2lsZGNvcHkob3AgKyA4LCAoKmxpdFB0cikgKyA4LA0KKwkJCSAgICAgIHNl
cXVlbmNlLmxpdExlbmd0aCAtIDgpOyAvKiBub3RlIDogc2luY2Ugb0xpdEVuZCA8PSBvZW5kLVdJ
TERDT1BZX09WRVJMRU5HVEgsIG5vIHJpc2sgb2Ygb3ZlcndyaXRlIGJleW9uZCBvZW5kICovDQor
CW9wID0gb0xpdEVuZDsNCisJKmxpdFB0ciA9IGlMaXRFbmQ7IC8qIHVwZGF0ZSBmb3IgbmV4dCBz
ZXF1ZW5jZSAqLw0KKw0KKwkvKiBjb3B5IE1hdGNoICovDQorCWlmIChzZXF1ZW5jZS5vZmZzZXQg
PiAoc2l6ZV90KShvTGl0RW5kIC0gYmFzZSkpIHsNCisJCS8qIG9mZnNldCBiZXlvbmQgcHJlZml4
ICovDQorCQlpZiAoc2VxdWVuY2Uub2Zmc2V0ID4gKHNpemVfdCkob0xpdEVuZCAtIHZCYXNlKSkN
CisJCQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQlpZiAobWF0Y2ggKyBz
ZXF1ZW5jZS5tYXRjaExlbmd0aCA8PSBkaWN0RW5kKSB7DQorCQkJbWVtbW92ZShvTGl0RW5kLCBt
YXRjaCwgc2VxdWVuY2UubWF0Y2hMZW5ndGgpOw0KKwkJCXJldHVybiBzZXF1ZW5jZUxlbmd0aDsN
CisJCX0NCisJCS8qIHNwYW4gZXh0RGljdCAmIGN1cnJQcmVmaXhTZWdtZW50ICovDQorCQl7DQor
CQkJc2l6ZV90IGNvbnN0IGxlbmd0aDEgPSBkaWN0RW5kIC0gbWF0Y2g7DQorCQkJbWVtbW92ZShv
TGl0RW5kLCBtYXRjaCwgbGVuZ3RoMSk7DQorCQkJb3AgPSBvTGl0RW5kICsgbGVuZ3RoMTsNCisJ
CQlzZXF1ZW5jZS5tYXRjaExlbmd0aCAtPSBsZW5ndGgxOw0KKwkJCW1hdGNoID0gYmFzZTsNCisJ
CQlpZiAob3AgPiBvZW5kX3cgfHwgc2VxdWVuY2UubWF0Y2hMZW5ndGggPCBNSU5NQVRDSCkgew0K
KwkJCQlVMzIgaTsNCisJCQkJZm9yIChpID0gMDsgaSA8IHNlcXVlbmNlLm1hdGNoTGVuZ3RoOyAr
K2kpDQorCQkJCQlvcFtpXSA9IG1hdGNoW2ldOw0KKwkJCQlyZXR1cm4gc2VxdWVuY2VMZW5ndGg7
DQorCQkJfQ0KKwkJfQ0KKwl9DQorCS8qIFJlcXVpcmVtZW50OiBvcCA8PSBvZW5kX3cgJiYgc2Vx
dWVuY2UubWF0Y2hMZW5ndGggPj0gTUlOTUFUQ0ggKi8NCisNCisJLyogbWF0Y2ggd2l0aGluIHBy
ZWZpeCAqLw0KKwlpZiAoc2VxdWVuY2Uub2Zmc2V0IDwgOCkgew0KKwkJLyogY2xvc2UgcmFuZ2Ug
bWF0Y2gsIG92ZXJsYXAgKi8NCisJCXN0YXRpYyBjb25zdCBVMzIgZGVjMzJ0YWJsZVtdID0gezAs
IDEsIDIsIDEsIDQsIDQsIDQsIDR9OyAgIC8qIGFkZGVkICovDQorCQlzdGF0aWMgY29uc3QgaW50
IGRlYzY0dGFibGVbXSA9IHs4LCA4LCA4LCA3LCA4LCA5LCAxMCwgMTF9OyAvKiBzdWJ0cmFjdGVk
ICovDQorCQlpbnQgY29uc3Qgc3ViMiA9IGRlYzY0dGFibGVbc2VxdWVuY2Uub2Zmc2V0XTsNCisJ
CW9wWzBdID0gbWF0Y2hbMF07DQorCQlvcFsxXSA9IG1hdGNoWzFdOw0KKwkJb3BbMl0gPSBtYXRj
aFsyXTsNCisJCW9wWzNdID0gbWF0Y2hbM107DQorCQltYXRjaCArPSBkZWMzMnRhYmxlW3NlcXVl
bmNlLm9mZnNldF07DQorCQlaU1REX2NvcHk0KG9wICsgNCwgbWF0Y2gpOw0KKwkJbWF0Y2ggLT0g
c3ViMjsNCisJfSBlbHNlIHsNCisJCVpTVERfY29weTgob3AsIG1hdGNoKTsNCisJfQ0KKwlvcCAr
PSA4Ow0KKwltYXRjaCArPSA4Ow0KKw0KKwlpZiAob01hdGNoRW5kID4gb2VuZCAtICgxNiAtIE1J
Tk1BVENIKSkgew0KKwkJaWYgKG9wIDwgb2VuZF93KSB7DQorCQkJWlNURF93aWxkY29weShvcCwg
bWF0Y2gsIG9lbmRfdyAtIG9wKTsNCisJCQltYXRjaCArPSBvZW5kX3cgLSBvcDsNCisJCQlvcCA9
IG9lbmRfdzsNCisJCX0NCisJCXdoaWxlIChvcCA8IG9NYXRjaEVuZCkNCisJCQkqb3ArKyA9ICpt
YXRjaCsrOw0KKwl9IGVsc2Ugew0KKwkJWlNURF93aWxkY29weShvcCwgbWF0Y2gsIChwdHJkaWZm
X3Qpc2VxdWVuY2UubWF0Y2hMZW5ndGggLSA4KTsgLyogd29ya3MgZXZlbiBpZiBtYXRjaExlbmd0
aCA8IDggKi8NCisJfQ0KKwlyZXR1cm4gc2VxdWVuY2VMZW5ndGg7DQorfQ0KKw0KK3N0YXRpYyBz
aXplX3QgWlNURF9kZWNvbXByZXNzU2VxdWVuY2VzTG9uZyhaU1REX0RDdHggKmRjdHgsIHZvaWQg
KmRzdCwgc2l6ZV90IG1heERzdFNpemUsIGNvbnN0IHZvaWQgKnNlcVN0YXJ0LCBzaXplX3Qgc2Vx
U2l6ZSkNCit7DQorCWNvbnN0IEJZVEUgKmlwID0gKGNvbnN0IEJZVEUgKilzZXFTdGFydDsNCisJ
Y29uc3QgQllURSAqY29uc3QgaWVuZCA9IGlwICsgc2VxU2l6ZTsNCisJQllURSAqY29uc3Qgb3N0
YXJ0ID0gKEJZVEUgKiBjb25zdClkc3Q7DQorCUJZVEUgKmNvbnN0IG9lbmQgPSBvc3RhcnQgKyBt
YXhEc3RTaXplOw0KKwlCWVRFICpvcCA9IG9zdGFydDsNCisJY29uc3QgQllURSAqbGl0UHRyID0g
ZGN0eC0+bGl0UHRyOw0KKwljb25zdCBCWVRFICpjb25zdCBsaXRFbmQgPSBsaXRQdHIgKyBkY3R4
LT5saXRTaXplOw0KKwljb25zdCBCWVRFICpjb25zdCBiYXNlID0gKGNvbnN0IEJZVEUgKikoZGN0
eC0+YmFzZSk7DQorCWNvbnN0IEJZVEUgKmNvbnN0IHZCYXNlID0gKGNvbnN0IEJZVEUgKikoZGN0
eC0+dkJhc2UpOw0KKwljb25zdCBCWVRFICpjb25zdCBkaWN0RW5kID0gKGNvbnN0IEJZVEUgKiko
ZGN0eC0+ZGljdEVuZCk7DQorCXVuc2lnbmVkIGNvbnN0IHdpbmRvd1NpemUgPSBkY3R4LT5mUGFy
YW1zLndpbmRvd1NpemU7DQorCWludCBuYlNlcTsNCisNCisJLyogQnVpbGQgRGVjb2RpbmcgVGFi
bGVzICovDQorCXsNCisJCXNpemVfdCBjb25zdCBzZXFIU2l6ZSA9IFpTVERfZGVjb2RlU2VxSGVh
ZGVycyhkY3R4LCAmbmJTZXEsIGlwLCBzZXFTaXplKTsNCisJCWlmIChaU1REX2lzRXJyb3Ioc2Vx
SFNpemUpKQ0KKwkJCXJldHVybiBzZXFIU2l6ZTsNCisJCWlwICs9IHNlcUhTaXplOw0KKwl9DQor
DQorCS8qIFJlZ2VuIHNlcXVlbmNlcyAqLw0KKwlpZiAobmJTZXEpIHsNCisjZGVmaW5lIFNUT1JF
RF9TRVFTIDQNCisjZGVmaW5lIFNUT1NFUV9NQVNLIChTVE9SRURfU0VRUyAtIDEpDQorI2RlZmlu
ZSBBRFZBTkNFRF9TRVFTIDQNCisJCXNlcV90ICpzZXF1ZW5jZXMgPSAoc2VxX3QgKilkY3R4LT5l
bnRyb3B5LndvcmtzcGFjZTsNCisJCWludCBjb25zdCBzZXFBZHZhbmNlID0gTUlOKG5iU2VxLCBB
RFZBTkNFRF9TRVFTKTsNCisJCXNlcVN0YXRlX3Qgc2VxU3RhdGU7DQorCQlpbnQgc2VxTmI7DQor
CQlaU1REX1NUQVRJQ19BU1NFUlQoc2l6ZW9mKGRjdHgtPmVudHJvcHkud29ya3NwYWNlKSA+PSBz
aXplb2Yoc2VxX3QpICogU1RPUkVEX1NFUVMpOw0KKwkJZGN0eC0+ZnNlRW50cm9weSA9IDE7DQor
CQl7DQorCQkJVTMyIGk7DQorCQkJZm9yIChpID0gMDsgaSA8IFpTVERfUkVQX05VTTsgaSsrKQ0K
KwkJCQlzZXFTdGF0ZS5wcmV2T2Zmc2V0W2ldID0gZGN0eC0+ZW50cm9weS5yZXBbaV07DQorCQl9
DQorCQlzZXFTdGF0ZS5iYXNlID0gYmFzZTsNCisJCXNlcVN0YXRlLnBvcyA9IChzaXplX3QpKG9w
IC0gYmFzZSk7DQorCQlzZXFTdGF0ZS5nb3RvRGljdCA9ICh1UHRyRGlmZilkaWN0RW5kIC0gKHVQ
dHJEaWZmKWJhc2U7IC8qIGNhc3QgdG8gYXZvaWQgdW5kZWZpbmVkIGJlaGF2aW91ciAqLw0KKwkJ
Q0hFQ0tfRShCSVRfaW5pdERTdHJlYW0oJnNlcVN0YXRlLkRTdHJlYW0sIGlwLCBpZW5kIC0gaXAp
LCBjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCUZTRV9pbml0RFN0YXRlKCZzZXFTdGF0ZS5zdGF0
ZUxMLCAmc2VxU3RhdGUuRFN0cmVhbSwgZGN0eC0+TExUcHRyKTsNCisJCUZTRV9pbml0RFN0YXRl
KCZzZXFTdGF0ZS5zdGF0ZU9mZmIsICZzZXFTdGF0ZS5EU3RyZWFtLCBkY3R4LT5PRlRwdHIpOw0K
KwkJRlNFX2luaXREU3RhdGUoJnNlcVN0YXRlLnN0YXRlTUwsICZzZXFTdGF0ZS5EU3RyZWFtLCBk
Y3R4LT5NTFRwdHIpOw0KKw0KKwkJLyogcHJlcGFyZSBpbiBhZHZhbmNlICovDQorCQlmb3IgKHNl
cU5iID0gMDsgKEJJVF9yZWxvYWREU3RyZWFtKCZzZXFTdGF0ZS5EU3RyZWFtKSA8PSBCSVRfRFN0
cmVhbV9jb21wbGV0ZWQpICYmIHNlcU5iIDwgc2VxQWR2YW5jZTsgc2VxTmIrKykgew0KKwkJCXNl
cXVlbmNlc1tzZXFOYl0gPSBaU1REX2RlY29kZVNlcXVlbmNlTG9uZygmc2VxU3RhdGUsIHdpbmRv
d1NpemUpOw0KKwkJfQ0KKwkJaWYgKHNlcU5iIDwgc2VxQWR2YW5jZSkNCisJCQlyZXR1cm4gRVJS
T1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorDQorCQkvKiBkZWNvZGUgYW5kIGRlY29tcHJlc3Mg
Ki8NCisJCWZvciAoOyAoQklUX3JlbG9hZERTdHJlYW0oJihzZXFTdGF0ZS5EU3RyZWFtKSkgPD0g
QklUX0RTdHJlYW1fY29tcGxldGVkKSAmJiBzZXFOYiA8IG5iU2VxOyBzZXFOYisrKSB7DQorCQkJ
c2VxX3QgY29uc3Qgc2VxdWVuY2UgPSBaU1REX2RlY29kZVNlcXVlbmNlTG9uZygmc2VxU3RhdGUs
IHdpbmRvd1NpemUpOw0KKwkJCXNpemVfdCBjb25zdCBvbmVTZXFTaXplID0NCisJCQkgICAgWlNU
RF9leGVjU2VxdWVuY2VMb25nKG9wLCBvZW5kLCBzZXF1ZW5jZXNbKHNlcU5iIC0gQURWQU5DRURf
U0VRUykgJiBTVE9TRVFfTUFTS10sICZsaXRQdHIsIGxpdEVuZCwgYmFzZSwgdkJhc2UsIGRpY3RF
bmQpOw0KKwkJCWlmIChaU1REX2lzRXJyb3Iob25lU2VxU2l6ZSkpDQorCQkJCXJldHVybiBvbmVT
ZXFTaXplOw0KKwkJCVpTVERfUFJFRkVUQ0goc2VxdWVuY2UubWF0Y2gpOw0KKwkJCXNlcXVlbmNl
c1tzZXFOYiAmIFNUT1NFUV9NQVNLXSA9IHNlcXVlbmNlOw0KKwkJCW9wICs9IG9uZVNlcVNpemU7
DQorCQl9DQorCQlpZiAoc2VxTmIgPCBuYlNlcSkNCisJCQlyZXR1cm4gRVJST1IoY29ycnVwdGlv
bl9kZXRlY3RlZCk7DQorDQorCQkvKiBmaW5pc2ggcXVldWUgKi8NCisJCXNlcU5iIC09IHNlcUFk
dmFuY2U7DQorCQlmb3IgKDsgc2VxTmIgPCBuYlNlcTsgc2VxTmIrKykgew0KKwkJCXNpemVfdCBj
b25zdCBvbmVTZXFTaXplID0gWlNURF9leGVjU2VxdWVuY2VMb25nKG9wLCBvZW5kLCBzZXF1ZW5j
ZXNbc2VxTmIgJiBTVE9TRVFfTUFTS10sICZsaXRQdHIsIGxpdEVuZCwgYmFzZSwgdkJhc2UsIGRp
Y3RFbmQpOw0KKwkJCWlmIChaU1REX2lzRXJyb3Iob25lU2VxU2l6ZSkpDQorCQkJCXJldHVybiBv
bmVTZXFTaXplOw0KKwkJCW9wICs9IG9uZVNlcVNpemU7DQorCQl9DQorDQorCQkvKiBzYXZlIHJl
cHMgZm9yIG5leHQgYmxvY2sgKi8NCisJCXsNCisJCQlVMzIgaTsNCisJCQlmb3IgKGkgPSAwOyBp
IDwgWlNURF9SRVBfTlVNOyBpKyspDQorCQkJCWRjdHgtPmVudHJvcHkucmVwW2ldID0gKFUzMiko
c2VxU3RhdGUucHJldk9mZnNldFtpXSk7DQorCQl9DQorCX0NCisNCisJLyogbGFzdCBsaXRlcmFs
IHNlZ21lbnQgKi8NCisJew0KKwkJc2l6ZV90IGNvbnN0IGxhc3RMTFNpemUgPSBsaXRFbmQgLSBs
aXRQdHI7DQorCQlpZiAobGFzdExMU2l6ZSA+IChzaXplX3QpKG9lbmQgLSBvcCkpDQorCQkJcmV0
dXJuIEVSUk9SKGRzdFNpemVfdG9vU21hbGwpOw0KKwkJbWVtY3B5KG9wLCBsaXRQdHIsIGxhc3RM
TFNpemUpOw0KKwkJb3AgKz0gbGFzdExMU2l6ZTsNCisJfQ0KKw0KKwlyZXR1cm4gb3AgLSBvc3Rh
cnQ7DQorfQ0KKw0KK3N0YXRpYyBzaXplX3QgWlNURF9kZWNvbXByZXNzQmxvY2tfaW50ZXJuYWwo
WlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9p
ZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCit7IC8qIGJsb2NrVHlwZSA9PSBibG9ja0NvbXByZXNz
ZWQgKi8NCisJY29uc3QgQllURSAqaXAgPSAoY29uc3QgQllURSAqKXNyYzsNCisNCisJaWYgKHNy
Y1NpemUgPj0gWlNURF9CTE9DS1NJWkVfQUJTT0xVVEVNQVgpDQorCQlyZXR1cm4gRVJST1Ioc3Jj
U2l6ZV93cm9uZyk7DQorDQorCS8qIERlY29kZSBsaXRlcmFscyBzZWN0aW9uICovDQorCXsNCisJ
CXNpemVfdCBjb25zdCBsaXRDU2l6ZSA9IFpTVERfZGVjb2RlTGl0ZXJhbHNCbG9jayhkY3R4LCBz
cmMsIHNyY1NpemUpOw0KKwkJaWYgKFpTVERfaXNFcnJvcihsaXRDU2l6ZSkpDQorCQkJcmV0dXJu
IGxpdENTaXplOw0KKwkJaXAgKz0gbGl0Q1NpemU7DQorCQlzcmNTaXplIC09IGxpdENTaXplOw0K
Kwl9DQorCWlmIChzaXplb2Yoc2l6ZV90KSA+IDQpIC8qIGRvIG5vdCBlbmFibGUgcHJlZmV0Y2hp
bmcgb24gMzItYml0cyB4ODYsIGFzIGl0J3MgcGVyZm9ybWFuY2UgZGV0cmltZW50YWwgKi8NCisJ
CQkJLyogbGlrZWx5IGJlY2F1c2Ugb2YgcmVnaXN0ZXIgcHJlc3N1cmUgKi8NCisJCQkJLyogaWYg
dGhhdCdzIHRoZSBjb3JyZWN0IGNhdXNlLCB0aGVuIDMyLWJpdHMgQVJNIHNob3VsZCBiZSBhZmZl
Y3RlZCBkaWZmZXJlbnRseSAqLw0KKwkJCQkvKiBpdCB3b3VsZCBiZSBnb29kIHRvIHRlc3QgdGhp
cyBvbiBBUk0gcmVhbCBoYXJkd2FyZSwgdG8gc2VlIGlmIHByZWZldGNoIHZlcnNpb24gaW1wcm92
ZXMgc3BlZWQgKi8NCisJCWlmIChkY3R4LT5mUGFyYW1zLndpbmRvd1NpemUgPiAoMSA8PCAyMykp
DQorCQkJcmV0dXJuIFpTVERfZGVjb21wcmVzc1NlcXVlbmNlc0xvbmcoZGN0eCwgZHN0LCBkc3RD
YXBhY2l0eSwgaXAsIHNyY1NpemUpOw0KKwlyZXR1cm4gWlNURF9kZWNvbXByZXNzU2VxdWVuY2Vz
KGRjdHgsIGRzdCwgZHN0Q2FwYWNpdHksIGlwLCBzcmNTaXplKTsNCit9DQorDQorc3RhdGljIHZv
aWQgWlNURF9jaGVja0NvbnRpbnVpdHkoWlNURF9EQ3R4ICpkY3R4LCBjb25zdCB2b2lkICpkc3Qp
DQorew0KKwlpZiAoZHN0ICE9IGRjdHgtPnByZXZpb3VzRHN0RW5kKSB7IC8qIG5vdCBjb250aWd1
b3VzICovDQorCQlkY3R4LT5kaWN0RW5kID0gZGN0eC0+cHJldmlvdXNEc3RFbmQ7DQorCQlkY3R4
LT52QmFzZSA9IChjb25zdCBjaGFyICopZHN0IC0gKChjb25zdCBjaGFyICopKGRjdHgtPnByZXZp
b3VzRHN0RW5kKSAtIChjb25zdCBjaGFyICopKGRjdHgtPmJhc2UpKTsNCisJCWRjdHgtPmJhc2Ug
PSBkc3Q7DQorCQlkY3R4LT5wcmV2aW91c0RzdEVuZCA9IGRzdDsNCisJfQ0KK30NCisNCitzaXpl
X3QgWlNURF9kZWNvbXByZXNzQmxvY2soWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVf
dCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCit7DQorCXNp
emVfdCBkU2l6ZTsNCisJWlNURF9jaGVja0NvbnRpbnVpdHkoZGN0eCwgZHN0KTsNCisJZFNpemUg
PSBaU1REX2RlY29tcHJlc3NCbG9ja19pbnRlcm5hbChkY3R4LCBkc3QsIGRzdENhcGFjaXR5LCBz
cmMsIHNyY1NpemUpOw0KKwlkY3R4LT5wcmV2aW91c0RzdEVuZCA9IChjaGFyICopZHN0ICsgZFNp
emU7DQorCXJldHVybiBkU2l6ZTsNCit9DQorDQorLyoqIFpTVERfaW5zZXJ0QmxvY2soKSA6DQor
CWluc2VydCBgc3JjYCBibG9jayBpbnRvIGBkY3R4YCBoaXN0b3J5LiBVc2VmdWwgdG8gdHJhY2sg
dW5jb21wcmVzc2VkIGJsb2Nrcy4gKi8NCitzaXplX3QgWlNURF9pbnNlcnRCbG9jayhaU1REX0RD
dHggKmRjdHgsIGNvbnN0IHZvaWQgKmJsb2NrU3RhcnQsIHNpemVfdCBibG9ja1NpemUpDQorew0K
KwlaU1REX2NoZWNrQ29udGludWl0eShkY3R4LCBibG9ja1N0YXJ0KTsNCisJZGN0eC0+cHJldmlv
dXNEc3RFbmQgPSAoY29uc3QgY2hhciAqKWJsb2NrU3RhcnQgKyBibG9ja1NpemU7DQorCXJldHVy
biBibG9ja1NpemU7DQorfQ0KKw0KK3NpemVfdCBaU1REX2dlbmVyYXRlTnhCeXRlcyh2b2lkICpk
c3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgQllURSBieXRlLCBzaXplX3QgbGVuZ3RoKQ0KK3sNCisJ
aWYgKGxlbmd0aCA+IGRzdENhcGFjaXR5KQ0KKwkJcmV0dXJuIEVSUk9SKGRzdFNpemVfdG9vU21h
bGwpOw0KKwltZW1zZXQoZHN0LCBieXRlLCBsZW5ndGgpOw0KKwlyZXR1cm4gbGVuZ3RoOw0KK30N
CisNCisvKiogWlNURF9maW5kRnJhbWVDb21wcmVzc2VkU2l6ZSgpIDoNCisgKiAgY29tcGF0aWJs
ZSB3aXRoIGxlZ2FjeSBtb2RlDQorICogIGBzcmNgIG11c3QgcG9pbnQgdG8gdGhlIHN0YXJ0IG9m
IGEgWlNURCBmcmFtZSwgWlNURCBsZWdhY3kgZnJhbWUsIG9yIHNraXBwYWJsZSBmcmFtZQ0KKyAq
ICBgc3JjU2l6ZWAgbXVzdCBiZSBhdCBsZWFzdCBhcyBsYXJnZSBhcyB0aGUgZnJhbWUgY29udGFp
bmVkDQorICogIEByZXR1cm4gOiB0aGUgY29tcHJlc3NlZCBzaXplIG9mIHRoZSBmcmFtZSBzdGFy
dGluZyBhdCBgc3JjYCAqLw0KK3NpemVfdCBaU1REX2ZpbmRGcmFtZUNvbXByZXNzZWRTaXplKGNv
bnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQorew0KKwlpZiAoc3JjU2l6ZSA+PSBaU1RE
X3NraXBwYWJsZUhlYWRlclNpemUgJiYgKFpTVERfcmVhZExFMzIoc3JjKSAmIDB4RkZGRkZGRjBV
KSA9PSBaU1REX01BR0lDX1NLSVBQQUJMRV9TVEFSVCkgew0KKwkJcmV0dXJuIFpTVERfc2tpcHBh
YmxlSGVhZGVyU2l6ZSArIFpTVERfcmVhZExFMzIoKGNvbnN0IEJZVEUgKilzcmMgKyA0KTsNCisJ
fSBlbHNlIHsNCisJCWNvbnN0IEJZVEUgKmlwID0gKGNvbnN0IEJZVEUgKilzcmM7DQorCQljb25z
dCBCWVRFICpjb25zdCBpcHN0YXJ0ID0gaXA7DQorCQlzaXplX3QgcmVtYWluaW5nU2l6ZSA9IHNy
Y1NpemU7DQorCQlaU1REX2ZyYW1lUGFyYW1zIGZQYXJhbXM7DQorDQorCQlzaXplX3QgY29uc3Qg
aGVhZGVyU2l6ZSA9IFpTVERfZnJhbWVIZWFkZXJTaXplKGlwLCByZW1haW5pbmdTaXplKTsNCisJ
CWlmIChaU1REX2lzRXJyb3IoaGVhZGVyU2l6ZSkpDQorCQkJcmV0dXJuIGhlYWRlclNpemU7DQor
DQorCQkvKiBGcmFtZSBIZWFkZXIgKi8NCisJCXsNCisJCQlzaXplX3QgY29uc3QgcmV0ID0gWlNU
RF9nZXRGcmFtZVBhcmFtcygmZlBhcmFtcywgaXAsIHJlbWFpbmluZ1NpemUpOw0KKwkJCWlmICha
U1REX2lzRXJyb3IocmV0KSkNCisJCQkJcmV0dXJuIHJldDsNCisJCQlpZiAocmV0ID4gMCkNCisJ
CQkJcmV0dXJuIEVSUk9SKHNyY1NpemVfd3JvbmcpOw0KKwkJfQ0KKw0KKwkJaXAgKz0gaGVhZGVy
U2l6ZTsNCisJCXJlbWFpbmluZ1NpemUgLT0gaGVhZGVyU2l6ZTsNCisNCisJCS8qIExvb3Agb24g
ZWFjaCBibG9jayAqLw0KKwkJd2hpbGUgKDEpIHsNCisJCQlibG9ja1Byb3BlcnRpZXNfdCBibG9j
a1Byb3BlcnRpZXM7DQorCQkJc2l6ZV90IGNvbnN0IGNCbG9ja1NpemUgPSBaU1REX2dldGNCbG9j
a1NpemUoaXAsIHJlbWFpbmluZ1NpemUsICZibG9ja1Byb3BlcnRpZXMpOw0KKwkJCWlmIChaU1RE
X2lzRXJyb3IoY0Jsb2NrU2l6ZSkpDQorCQkJCXJldHVybiBjQmxvY2tTaXplOw0KKw0KKwkJCWlm
IChaU1REX2Jsb2NrSGVhZGVyU2l6ZSArIGNCbG9ja1NpemUgPiByZW1haW5pbmdTaXplKQ0KKwkJ
CQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorDQorCQkJaXAgKz0gWlNURF9ibG9ja0hl
YWRlclNpemUgKyBjQmxvY2tTaXplOw0KKwkJCXJlbWFpbmluZ1NpemUgLT0gWlNURF9ibG9ja0hl
YWRlclNpemUgKyBjQmxvY2tTaXplOw0KKw0KKwkJCWlmIChibG9ja1Byb3BlcnRpZXMubGFzdEJs
b2NrKQ0KKwkJCQlicmVhazsNCisJCX0NCisNCisJCWlmIChmUGFyYW1zLmNoZWNrc3VtRmxhZykg
eyAvKiBGcmFtZSBjb250ZW50IGNoZWNrc3VtICovDQorCQkJaWYgKHJlbWFpbmluZ1NpemUgPCA0
KQ0KKwkJCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorCQkJaXAgKz0gNDsNCisJCQly
ZW1haW5pbmdTaXplIC09IDQ7DQorCQl9DQorDQorCQlyZXR1cm4gaXAgLSBpcHN0YXJ0Ow0KKwl9
DQorfQ0KKw0KKy8qISBaU1REX2RlY29tcHJlc3NGcmFtZSgpIDoNCisqICAgQGRjdHggbXVzdCBi
ZSBwcm9wZXJseSBpbml0aWFsaXplZCAqLw0KK3N0YXRpYyBzaXplX3QgWlNURF9kZWNvbXByZXNz
RnJhbWUoWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgY29u
c3Qgdm9pZCAqKnNyY1B0ciwgc2l6ZV90ICpzcmNTaXplUHRyKQ0KK3sNCisJY29uc3QgQllURSAq
aXAgPSAoY29uc3QgQllURSAqKSgqc3JjUHRyKTsNCisJQllURSAqY29uc3Qgb3N0YXJ0ID0gKEJZ
VEUgKiBjb25zdClkc3Q7DQorCUJZVEUgKmNvbnN0IG9lbmQgPSBvc3RhcnQgKyBkc3RDYXBhY2l0
eTsNCisJQllURSAqb3AgPSBvc3RhcnQ7DQorCXNpemVfdCByZW1haW5pbmdTaXplID0gKnNyY1Np
emVQdHI7DQorDQorCS8qIGNoZWNrICovDQorCWlmIChyZW1haW5pbmdTaXplIDwgWlNURF9mcmFt
ZUhlYWRlclNpemVfbWluICsgWlNURF9ibG9ja0hlYWRlclNpemUpDQorCQlyZXR1cm4gRVJST1Io
c3JjU2l6ZV93cm9uZyk7DQorDQorCS8qIEZyYW1lIEhlYWRlciAqLw0KKwl7DQorCQlzaXplX3Qg
Y29uc3QgZnJhbWVIZWFkZXJTaXplID0gWlNURF9mcmFtZUhlYWRlclNpemUoaXAsIFpTVERfZnJh
bWVIZWFkZXJTaXplX3ByZWZpeCk7DQorCQlpZiAoWlNURF9pc0Vycm9yKGZyYW1lSGVhZGVyU2l6
ZSkpDQorCQkJcmV0dXJuIGZyYW1lSGVhZGVyU2l6ZTsNCisJCWlmIChyZW1haW5pbmdTaXplIDwg
ZnJhbWVIZWFkZXJTaXplICsgWlNURF9ibG9ja0hlYWRlclNpemUpDQorCQkJcmV0dXJuIEVSUk9S
KHNyY1NpemVfd3JvbmcpOw0KKwkJQ0hFQ0tfRihaU1REX2RlY29kZUZyYW1lSGVhZGVyKGRjdHgs
IGlwLCBmcmFtZUhlYWRlclNpemUpKTsNCisJCWlwICs9IGZyYW1lSGVhZGVyU2l6ZTsNCisJCXJl
bWFpbmluZ1NpemUgLT0gZnJhbWVIZWFkZXJTaXplOw0KKwl9DQorDQorCS8qIExvb3Agb24gZWFj
aCBibG9jayAqLw0KKwl3aGlsZSAoMSkgew0KKwkJc2l6ZV90IGRlY29kZWRTaXplOw0KKwkJYmxv
Y2tQcm9wZXJ0aWVzX3QgYmxvY2tQcm9wZXJ0aWVzOw0KKwkJc2l6ZV90IGNvbnN0IGNCbG9ja1Np
emUgPSBaU1REX2dldGNCbG9ja1NpemUoaXAsIHJlbWFpbmluZ1NpemUsICZibG9ja1Byb3BlcnRp
ZXMpOw0KKwkJaWYgKFpTVERfaXNFcnJvcihjQmxvY2tTaXplKSkNCisJCQlyZXR1cm4gY0Jsb2Nr
U2l6ZTsNCisNCisJCWlwICs9IFpTVERfYmxvY2tIZWFkZXJTaXplOw0KKwkJcmVtYWluaW5nU2l6
ZSAtPSBaU1REX2Jsb2NrSGVhZGVyU2l6ZTsNCisJCWlmIChjQmxvY2tTaXplID4gcmVtYWluaW5n
U2l6ZSkNCisJCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorDQorCQlzd2l0Y2ggKGJs
b2NrUHJvcGVydGllcy5ibG9ja1R5cGUpIHsNCisJCWNhc2UgYnRfY29tcHJlc3NlZDogZGVjb2Rl
ZFNpemUgPSBaU1REX2RlY29tcHJlc3NCbG9ja19pbnRlcm5hbChkY3R4LCBvcCwgb2VuZCAtIG9w
LCBpcCwgY0Jsb2NrU2l6ZSk7IGJyZWFrOw0KKwkJY2FzZSBidF9yYXc6IGRlY29kZWRTaXplID0g
WlNURF9jb3B5UmF3QmxvY2sob3AsIG9lbmQgLSBvcCwgaXAsIGNCbG9ja1NpemUpOyBicmVhazsN
CisJCWNhc2UgYnRfcmxlOiBkZWNvZGVkU2l6ZSA9IFpTVERfZ2VuZXJhdGVOeEJ5dGVzKG9wLCBv
ZW5kIC0gb3AsICppcCwgYmxvY2tQcm9wZXJ0aWVzLm9yaWdTaXplKTsgYnJlYWs7DQorCQljYXNl
IGJ0X3Jlc2VydmVkOg0KKwkJZGVmYXVsdDogcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0
ZWQpOw0KKwkJfQ0KKw0KKwkJaWYgKFpTVERfaXNFcnJvcihkZWNvZGVkU2l6ZSkpDQorCQkJcmV0
dXJuIGRlY29kZWRTaXplOw0KKwkJaWYgKGRjdHgtPmZQYXJhbXMuY2hlY2tzdW1GbGFnKQ0KKwkJ
CXh4aDY0X3VwZGF0ZSgmZGN0eC0+eHhoU3RhdGUsIG9wLCBkZWNvZGVkU2l6ZSk7DQorCQlvcCAr
PSBkZWNvZGVkU2l6ZTsNCisJCWlwICs9IGNCbG9ja1NpemU7DQorCQlyZW1haW5pbmdTaXplIC09
IGNCbG9ja1NpemU7DQorCQlpZiAoYmxvY2tQcm9wZXJ0aWVzLmxhc3RCbG9jaykNCisJCQlicmVh
azsNCisJfQ0KKw0KKwlpZiAoZGN0eC0+ZlBhcmFtcy5jaGVja3N1bUZsYWcpIHsgLyogRnJhbWUg
Y29udGVudCBjaGVja3N1bSB2ZXJpZmljYXRpb24gKi8NCisJCVUzMiBjb25zdCBjaGVja0NhbGMg
PSAoVTMyKXh4aDY0X2RpZ2VzdCgmZGN0eC0+eHhoU3RhdGUpOw0KKwkJVTMyIGNoZWNrUmVhZDsN
CisJCWlmIChyZW1haW5pbmdTaXplIDwgNCkNCisJCQlyZXR1cm4gRVJST1IoY2hlY2tzdW1fd3Jv
bmcpOw0KKwkJY2hlY2tSZWFkID0gWlNURF9yZWFkTEUzMihpcCk7DQorCQlpZiAoY2hlY2tSZWFk
ICE9IGNoZWNrQ2FsYykNCisJCQlyZXR1cm4gRVJST1IoY2hlY2tzdW1fd3JvbmcpOw0KKwkJaXAg
Kz0gNDsNCisJCXJlbWFpbmluZ1NpemUgLT0gNDsNCisJfQ0KKw0KKwkvKiBBbGxvdyBjYWxsZXIg
dG8gZ2V0IHNpemUgcmVhZCAqLw0KKwkqc3JjUHRyID0gaXA7DQorCSpzcmNTaXplUHRyID0gcmVt
YWluaW5nU2l6ZTsNCisJcmV0dXJuIG9wIC0gb3N0YXJ0Ow0KK30NCisNCitzdGF0aWMgY29uc3Qg
dm9pZCAqWlNURF9ERGljdERpY3RDb250ZW50KGNvbnN0IFpTVERfRERpY3QgKmRkaWN0KTsNCitz
dGF0aWMgc2l6ZV90IFpTVERfRERpY3REaWN0U2l6ZShjb25zdCBaU1REX0REaWN0ICpkZGljdCk7
DQorDQorc3RhdGljIHNpemVfdCBaU1REX2RlY29tcHJlc3NNdWx0aUZyYW1lKFpTVERfREN0eCAq
ZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKnNyYywgc2l6
ZV90IHNyY1NpemUsIGNvbnN0IHZvaWQgKmRpY3QsIHNpemVfdCBkaWN0U2l6ZSwNCisJCQkJCWNv
bnN0IFpTVERfRERpY3QgKmRkaWN0KQ0KK3sNCisJdm9pZCAqY29uc3QgZHN0c3RhcnQgPSBkc3Q7
DQorDQorCWlmIChkZGljdCkgew0KKwkJaWYgKGRpY3QpIHsNCisJCQkvKiBwcm9ncmFtbWVyIGVy
cm9yLCB0aGVzZSB0d28gY2FzZXMgc2hvdWxkIGJlIG11dHVhbGx5IGV4Y2x1c2l2ZSAqLw0KKwkJ
CXJldHVybiBFUlJPUihHRU5FUklDKTsNCisJCX0NCisNCisJCWRpY3QgPSBaU1REX0REaWN0RGlj
dENvbnRlbnQoZGRpY3QpOw0KKwkJZGljdFNpemUgPSBaU1REX0REaWN0RGljdFNpemUoZGRpY3Qp
Ow0KKwl9DQorDQorCXdoaWxlIChzcmNTaXplID49IFpTVERfZnJhbWVIZWFkZXJTaXplX3ByZWZp
eCkgew0KKwkJVTMyIG1hZ2ljTnVtYmVyOw0KKw0KKwkJbWFnaWNOdW1iZXIgPSBaU1REX3JlYWRM
RTMyKHNyYyk7DQorCQlpZiAobWFnaWNOdW1iZXIgIT0gWlNURF9NQUdJQ05VTUJFUikgew0KKwkJ
CWlmICgobWFnaWNOdW1iZXIgJiAweEZGRkZGRkYwVSkgPT0gWlNURF9NQUdJQ19TS0lQUEFCTEVf
U1RBUlQpIHsNCisJCQkJc2l6ZV90IHNraXBwYWJsZVNpemU7DQorCQkJCWlmIChzcmNTaXplIDwg
WlNURF9za2lwcGFibGVIZWFkZXJTaXplKQ0KKwkJCQkJcmV0dXJuIEVSUk9SKHNyY1NpemVfd3Jv
bmcpOw0KKwkJCQlza2lwcGFibGVTaXplID0gWlNURF9yZWFkTEUzMigoY29uc3QgQllURSAqKXNy
YyArIDQpICsgWlNURF9za2lwcGFibGVIZWFkZXJTaXplOw0KKwkJCQlpZiAoc3JjU2l6ZSA8IHNr
aXBwYWJsZVNpemUpIHsNCisJCQkJCXJldHVybiBFUlJPUihzcmNTaXplX3dyb25nKTsNCisJCQkJ
fQ0KKw0KKwkJCQlzcmMgPSAoY29uc3QgQllURSAqKXNyYyArIHNraXBwYWJsZVNpemU7DQorCQkJ
CXNyY1NpemUgLT0gc2tpcHBhYmxlU2l6ZTsNCisJCQkJY29udGludWU7DQorCQkJfSBlbHNlIHsN
CisJCQkJcmV0dXJuIEVSUk9SKHByZWZpeF91bmtub3duKTsNCisJCQl9DQorCQl9DQorDQorCQlp
ZiAoZGRpY3QpIHsNCisJCQkvKiB3ZSB3ZXJlIGNhbGxlZCBmcm9tIFpTVERfZGVjb21wcmVzc191
c2luZ0REaWN0ICovDQorCQkJWlNURF9yZWZERGljdChkY3R4LCBkZGljdCk7DQorCQl9IGVsc2Ug
ew0KKwkJCS8qIHRoaXMgd2lsbCBpbml0aWFsaXplIGNvcnJlY3RseSB3aXRoIG5vIGRpY3QgaWYg
ZGljdCA9PSBOVUxMLCBzbw0KKwkJCSAqIHVzZSB0aGlzIGluIGFsbCBjYXNlcyBidXQgZGRpY3Qg
Ki8NCisJCQlDSEVDS19GKFpTVERfZGVjb21wcmVzc0JlZ2luX3VzaW5nRGljdChkY3R4LCBkaWN0
LCBkaWN0U2l6ZSkpOw0KKwkJfQ0KKwkJWlNURF9jaGVja0NvbnRpbnVpdHkoZGN0eCwgZHN0KTsN
CisNCisJCXsNCisJCQljb25zdCBzaXplX3QgcmVzID0gWlNURF9kZWNvbXByZXNzRnJhbWUoZGN0
eCwgZHN0LCBkc3RDYXBhY2l0eSwgJnNyYywgJnNyY1NpemUpOw0KKwkJCWlmIChaU1REX2lzRXJy
b3IocmVzKSkNCisJCQkJcmV0dXJuIHJlczsNCisJCQkvKiBkb24ndCBuZWVkIHRvIGJvdW5kcyBj
aGVjayB0aGlzLCBaU1REX2RlY29tcHJlc3NGcmFtZSB3aWxsIGhhdmUNCisJCQkgKiBhbHJlYWR5
ICovDQorCQkJZHN0ID0gKEJZVEUgKilkc3QgKyByZXM7DQorCQkJZHN0Q2FwYWNpdHkgLT0gcmVz
Ow0KKwkJfQ0KKwl9DQorDQorCWlmIChzcmNTaXplKQ0KKwkJcmV0dXJuIEVSUk9SKHNyY1NpemVf
d3JvbmcpOyAvKiBpbnB1dCBub3QgZW50aXJlbHkgY29uc3VtZWQgKi8NCisNCisJcmV0dXJuIChC
WVRFICopZHN0IC0gKEJZVEUgKilkc3RzdGFydDsNCit9DQorDQorc2l6ZV90IFpTVERfZGVjb21w
cmVzc191c2luZ0RpY3QoWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBh
Y2l0eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgY29uc3Qgdm9pZCAqZGljdCwg
c2l6ZV90IGRpY3RTaXplKQ0KK3sNCisJcmV0dXJuIFpTVERfZGVjb21wcmVzc011bHRpRnJhbWUo
ZGN0eCwgZHN0LCBkc3RDYXBhY2l0eSwgc3JjLCBzcmNTaXplLCBkaWN0LCBkaWN0U2l6ZSwgTlVM
TCk7DQorfQ0KKw0KK3NpemVfdCBaU1REX2RlY29tcHJlc3NEQ3R4KFpTVERfREN0eCAqZGN0eCwg
dm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNy
Y1NpemUpDQorew0KKwlyZXR1cm4gWlNURF9kZWNvbXByZXNzX3VzaW5nRGljdChkY3R4LCBkc3Qs
IGRzdENhcGFjaXR5LCBzcmMsIHNyY1NpemUsIE5VTEwsIDApOw0KK30NCisNCisvKi0qKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogICBBZHZhbmNlZCBTdHJlYW1pbmcg
RGVjb21wcmVzc2lvbiBBUEkNCisqICAgQnVmZmVybGVzcyBhbmQgc3luY2hyb25vdXMNCisqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KK3NpemVfdCBaU1REX25leHRT
cmNTaXplVG9EZWNvbXByZXNzKFpTVERfREN0eCAqZGN0eCkgeyByZXR1cm4gZGN0eC0+ZXhwZWN0
ZWQ7IH0NCisNCitaU1REX25leHRJbnB1dFR5cGVfZSBaU1REX25leHRJbnB1dFR5cGUoWlNURF9E
Q3R4ICpkY3R4KQ0KK3sNCisJc3dpdGNoIChkY3R4LT5zdGFnZSkgew0KKwlkZWZhdWx0OiAvKiBz
aG91bGQgbm90IGhhcHBlbiAqLw0KKwljYXNlIFpTVERkc19nZXRGcmFtZUhlYWRlclNpemU6DQor
CWNhc2UgWlNURGRzX2RlY29kZUZyYW1lSGVhZGVyOiByZXR1cm4gWlNURG5pdF9mcmFtZUhlYWRl
cjsNCisJY2FzZSBaU1REZHNfZGVjb2RlQmxvY2tIZWFkZXI6IHJldHVybiBaU1REbml0X2Jsb2Nr
SGVhZGVyOw0KKwljYXNlIFpTVERkc19kZWNvbXByZXNzQmxvY2s6IHJldHVybiBaU1REbml0X2Js
b2NrOw0KKwljYXNlIFpTVERkc19kZWNvbXByZXNzTGFzdEJsb2NrOiByZXR1cm4gWlNURG5pdF9s
YXN0QmxvY2s7DQorCWNhc2UgWlNURGRzX2NoZWNrQ2hlY2tzdW06IHJldHVybiBaU1REbml0X2No
ZWNrc3VtOw0KKwljYXNlIFpTVERkc19kZWNvZGVTa2lwcGFibGVIZWFkZXI6DQorCWNhc2UgWlNU
RGRzX3NraXBGcmFtZTogcmV0dXJuIFpTVERuaXRfc2tpcHBhYmxlRnJhbWU7DQorCX0NCit9DQor
DQoraW50IFpTVERfaXNTa2lwRnJhbWUoWlNURF9EQ3R4ICpkY3R4KSB7IHJldHVybiBkY3R4LT5z
dGFnZSA9PSBaU1REZHNfc2tpcEZyYW1lOyB9IC8qIGZvciB6YnVmZiAqLw0KKw0KKy8qKiBaU1RE
X2RlY29tcHJlc3NDb250aW51ZSgpIDoNCisqICAgQHJldHVybiA6IG5iIG9mIGJ5dGVzIGdlbmVy
YXRlZCBpbnRvIGBkc3RgIChuZWNlc3NhcmlseSA8PSBgZHN0Q2FwYWNpdHkpDQorKiAgICAgICAg
ICAgICBvciBhbiBlcnJvciBjb2RlLCB3aGljaCBjYW4gYmUgdGVzdGVkIHVzaW5nIFpTVERfaXNF
cnJvcigpICovDQorc2l6ZV90IFpTVERfZGVjb21wcmVzc0NvbnRpbnVlKFpTVERfREN0eCAqZGN0
eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90
IHNyY1NpemUpDQorew0KKwkvKiBTYW5pdHkgY2hlY2sgKi8NCisJaWYgKHNyY1NpemUgIT0gZGN0
eC0+ZXhwZWN0ZWQpDQorCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorCWlmIChkc3RD
YXBhY2l0eSkNCisJCVpTVERfY2hlY2tDb250aW51aXR5KGRjdHgsIGRzdCk7DQorDQorCXN3aXRj
aCAoZGN0eC0+c3RhZ2UpIHsNCisJY2FzZSBaU1REZHNfZ2V0RnJhbWVIZWFkZXJTaXplOg0KKwkJ
aWYgKHNyY1NpemUgIT0gWlNURF9mcmFtZUhlYWRlclNpemVfcHJlZml4KQ0KKwkJCXJldHVybiBF
UlJPUihzcmNTaXplX3dyb25nKTsJCQkJCS8qIGltcG9zc2libGUgKi8NCisJCWlmICgoWlNURF9y
ZWFkTEUzMihzcmMpICYgMHhGRkZGRkZGMFUpID09IFpTVERfTUFHSUNfU0tJUFBBQkxFX1NUQVJU
KSB7IC8qIHNraXBwYWJsZSBmcmFtZSAqLw0KKwkJCW1lbWNweShkY3R4LT5oZWFkZXJCdWZmZXIs
IHNyYywgWlNURF9mcmFtZUhlYWRlclNpemVfcHJlZml4KTsNCisJCQlkY3R4LT5leHBlY3RlZCA9
IFpTVERfc2tpcHBhYmxlSGVhZGVyU2l6ZSAtIFpTVERfZnJhbWVIZWFkZXJTaXplX3ByZWZpeDsg
LyogbWFnaWMgbnVtYmVyICsgc2tpcHBhYmxlIGZyYW1lIGxlbmd0aCAqLw0KKwkJCWRjdHgtPnN0
YWdlID0gWlNURGRzX2RlY29kZVNraXBwYWJsZUhlYWRlcjsNCisJCQlyZXR1cm4gMDsNCisJCX0N
CisJCWRjdHgtPmhlYWRlclNpemUgPSBaU1REX2ZyYW1lSGVhZGVyU2l6ZShzcmMsIFpTVERfZnJh
bWVIZWFkZXJTaXplX3ByZWZpeCk7DQorCQlpZiAoWlNURF9pc0Vycm9yKGRjdHgtPmhlYWRlclNp
emUpKQ0KKwkJCXJldHVybiBkY3R4LT5oZWFkZXJTaXplOw0KKwkJbWVtY3B5KGRjdHgtPmhlYWRl
ckJ1ZmZlciwgc3JjLCBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9wcmVmaXgpOw0KKwkJaWYgKGRjdHgt
PmhlYWRlclNpemUgPiBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9wcmVmaXgpIHsNCisJCQlkY3R4LT5l
eHBlY3RlZCA9IGRjdHgtPmhlYWRlclNpemUgLSBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9wcmVmaXg7
DQorCQkJZGN0eC0+c3RhZ2UgPSBaU1REZHNfZGVjb2RlRnJhbWVIZWFkZXI7DQorCQkJcmV0dXJu
IDA7DQorCQl9DQorCQlkY3R4LT5leHBlY3RlZCA9IDA7IC8qIG5vdCBuZWNlc3NhcnkgdG8gY29w
eSBtb3JlICovDQorCQkvKiBmYWxsIHRocm91Z2ggKi8NCisNCisJY2FzZSBaU1REZHNfZGVjb2Rl
RnJhbWVIZWFkZXI6DQorCQltZW1jcHkoZGN0eC0+aGVhZGVyQnVmZmVyICsgWlNURF9mcmFtZUhl
YWRlclNpemVfcHJlZml4LCBzcmMsIGRjdHgtPmV4cGVjdGVkKTsNCisJCUNIRUNLX0YoWlNURF9k
ZWNvZGVGcmFtZUhlYWRlcihkY3R4LCBkY3R4LT5oZWFkZXJCdWZmZXIsIGRjdHgtPmhlYWRlclNp
emUpKTsNCisJCWRjdHgtPmV4cGVjdGVkID0gWlNURF9ibG9ja0hlYWRlclNpemU7DQorCQlkY3R4
LT5zdGFnZSA9IFpTVERkc19kZWNvZGVCbG9ja0hlYWRlcjsNCisJCXJldHVybiAwOw0KKw0KKwlj
YXNlIFpTVERkc19kZWNvZGVCbG9ja0hlYWRlcjogew0KKwkJYmxvY2tQcm9wZXJ0aWVzX3QgYnA7
DQorCQlzaXplX3QgY29uc3QgY0Jsb2NrU2l6ZSA9IFpTVERfZ2V0Y0Jsb2NrU2l6ZShzcmMsIFpT
VERfYmxvY2tIZWFkZXJTaXplLCAmYnApOw0KKwkJaWYgKFpTVERfaXNFcnJvcihjQmxvY2tTaXpl
KSkNCisJCQlyZXR1cm4gY0Jsb2NrU2l6ZTsNCisJCWRjdHgtPmV4cGVjdGVkID0gY0Jsb2NrU2l6
ZTsNCisJCWRjdHgtPmJUeXBlID0gYnAuYmxvY2tUeXBlOw0KKwkJZGN0eC0+cmxlU2l6ZSA9IGJw
Lm9yaWdTaXplOw0KKwkJaWYgKGNCbG9ja1NpemUpIHsNCisJCQlkY3R4LT5zdGFnZSA9IGJwLmxh
c3RCbG9jayA/IFpTVERkc19kZWNvbXByZXNzTGFzdEJsb2NrIDogWlNURGRzX2RlY29tcHJlc3NC
bG9jazsNCisJCQlyZXR1cm4gMDsNCisJCX0NCisJCS8qIGVtcHR5IGJsb2NrICovDQorCQlpZiAo
YnAubGFzdEJsb2NrKSB7DQorCQkJaWYgKGRjdHgtPmZQYXJhbXMuY2hlY2tzdW1GbGFnKSB7DQor
CQkJCWRjdHgtPmV4cGVjdGVkID0gNDsNCisJCQkJZGN0eC0+c3RhZ2UgPSBaU1REZHNfY2hlY2tD
aGVja3N1bTsNCisJCQl9IGVsc2Ugew0KKwkJCQlkY3R4LT5leHBlY3RlZCA9IDA7IC8qIGVuZCBv
ZiBmcmFtZSAqLw0KKwkJCQlkY3R4LT5zdGFnZSA9IFpTVERkc19nZXRGcmFtZUhlYWRlclNpemU7
DQorCQkJfQ0KKwkJfSBlbHNlIHsNCisJCQlkY3R4LT5leHBlY3RlZCA9IDM7IC8qIGdvIGRpcmVj
dGx5IHRvIG5leHQgaGVhZGVyICovDQorCQkJZGN0eC0+c3RhZ2UgPSBaU1REZHNfZGVjb2RlQmxv
Y2tIZWFkZXI7DQorCQl9DQorCQlyZXR1cm4gMDsNCisJfQ0KKwljYXNlIFpTVERkc19kZWNvbXBy
ZXNzTGFzdEJsb2NrOg0KKwljYXNlIFpTVERkc19kZWNvbXByZXNzQmxvY2s6IHsNCisJCXNpemVf
dCByU2l6ZTsNCisJCXN3aXRjaCAoZGN0eC0+YlR5cGUpIHsNCisJCWNhc2UgYnRfY29tcHJlc3Nl
ZDogclNpemUgPSBaU1REX2RlY29tcHJlc3NCbG9ja19pbnRlcm5hbChkY3R4LCBkc3QsIGRzdENh
cGFjaXR5LCBzcmMsIHNyY1NpemUpOyBicmVhazsNCisJCWNhc2UgYnRfcmF3OiByU2l6ZSA9IFpT
VERfY29weVJhd0Jsb2NrKGRzdCwgZHN0Q2FwYWNpdHksIHNyYywgc3JjU2l6ZSk7IGJyZWFrOw0K
KwkJY2FzZSBidF9ybGU6IHJTaXplID0gWlNURF9zZXRSbGVCbG9jayhkc3QsIGRzdENhcGFjaXR5
LCBzcmMsIHNyY1NpemUsIGRjdHgtPnJsZVNpemUpOyBicmVhazsNCisJCWNhc2UgYnRfcmVzZXJ2
ZWQ6IC8qIHNob3VsZCBuZXZlciBoYXBwZW4gKi8NCisJCWRlZmF1bHQ6IHJldHVybiBFUlJPUihj
b3JydXB0aW9uX2RldGVjdGVkKTsNCisJCX0NCisJCWlmIChaU1REX2lzRXJyb3IoclNpemUpKQ0K
KwkJCXJldHVybiByU2l6ZTsNCisJCWlmIChkY3R4LT5mUGFyYW1zLmNoZWNrc3VtRmxhZykNCisJ
CQl4eGg2NF91cGRhdGUoJmRjdHgtPnh4aFN0YXRlLCBkc3QsIHJTaXplKTsNCisNCisJCWlmIChk
Y3R4LT5zdGFnZSA9PSBaU1REZHNfZGVjb21wcmVzc0xhc3RCbG9jaykgeyAvKiBlbmQgb2YgZnJh
bWUgKi8NCisJCQlpZiAoZGN0eC0+ZlBhcmFtcy5jaGVja3N1bUZsYWcpIHsJLyogYW5vdGhlciBy
b3VuZCBmb3IgZnJhbWUgY2hlY2tzdW0gKi8NCisJCQkJZGN0eC0+ZXhwZWN0ZWQgPSA0Ow0KKwkJ
CQlkY3R4LT5zdGFnZSA9IFpTVERkc19jaGVja0NoZWNrc3VtOw0KKwkJCX0gZWxzZSB7DQorCQkJ
CWRjdHgtPmV4cGVjdGVkID0gMDsgLyogZW5kcyBoZXJlICovDQorCQkJCWRjdHgtPnN0YWdlID0g
WlNURGRzX2dldEZyYW1lSGVhZGVyU2l6ZTsNCisJCQl9DQorCQl9IGVsc2Ugew0KKwkJCWRjdHgt
PnN0YWdlID0gWlNURGRzX2RlY29kZUJsb2NrSGVhZGVyOw0KKwkJCWRjdHgtPmV4cGVjdGVkID0g
WlNURF9ibG9ja0hlYWRlclNpemU7DQorCQkJZGN0eC0+cHJldmlvdXNEc3RFbmQgPSAoY2hhciAq
KWRzdCArIHJTaXplOw0KKwkJfQ0KKwkJcmV0dXJuIHJTaXplOw0KKwl9DQorCWNhc2UgWlNURGRz
X2NoZWNrQ2hlY2tzdW06IHsNCisJCVUzMiBjb25zdCBoMzIgPSAoVTMyKXh4aDY0X2RpZ2VzdCgm
ZGN0eC0+eHhoU3RhdGUpOw0KKwkJVTMyIGNvbnN0IGNoZWNrMzIgPSBaU1REX3JlYWRMRTMyKHNy
Yyk7IC8qIHNyY1NpemUgPT0gNCwgZ3VhcmFudGVlZCBieSBkY3R4LT5leHBlY3RlZCAqLw0KKwkJ
aWYgKGNoZWNrMzIgIT0gaDMyKQ0KKwkJCXJldHVybiBFUlJPUihjaGVja3N1bV93cm9uZyk7DQor
CQlkY3R4LT5leHBlY3RlZCA9IDA7DQorCQlkY3R4LT5zdGFnZSA9IFpTVERkc19nZXRGcmFtZUhl
YWRlclNpemU7DQorCQlyZXR1cm4gMDsNCisJfQ0KKwljYXNlIFpTVERkc19kZWNvZGVTa2lwcGFi
bGVIZWFkZXI6IHsNCisJCW1lbWNweShkY3R4LT5oZWFkZXJCdWZmZXIgKyBaU1REX2ZyYW1lSGVh
ZGVyU2l6ZV9wcmVmaXgsIHNyYywgZGN0eC0+ZXhwZWN0ZWQpOw0KKwkJZGN0eC0+ZXhwZWN0ZWQg
PSBaU1REX3JlYWRMRTMyKGRjdHgtPmhlYWRlckJ1ZmZlciArIDQpOw0KKwkJZGN0eC0+c3RhZ2Ug
PSBaU1REZHNfc2tpcEZyYW1lOw0KKwkJcmV0dXJuIDA7DQorCX0NCisJY2FzZSBaU1REZHNfc2tp
cEZyYW1lOiB7DQorCQlkY3R4LT5leHBlY3RlZCA9IDA7DQorCQlkY3R4LT5zdGFnZSA9IFpTVERk
c19nZXRGcmFtZUhlYWRlclNpemU7DQorCQlyZXR1cm4gMDsNCisJfQ0KKwlkZWZhdWx0Og0KKwkJ
cmV0dXJuIEVSUk9SKEdFTkVSSUMpOyAvKiBpbXBvc3NpYmxlICovDQorCX0NCit9DQorDQorc3Rh
dGljIHNpemVfdCBaU1REX3JlZkRpY3RDb250ZW50KFpTVERfREN0eCAqZGN0eCwgY29uc3Qgdm9p
ZCAqZGljdCwgc2l6ZV90IGRpY3RTaXplKQ0KK3sNCisJZGN0eC0+ZGljdEVuZCA9IGRjdHgtPnBy
ZXZpb3VzRHN0RW5kOw0KKwlkY3R4LT52QmFzZSA9IChjb25zdCBjaGFyICopZGljdCAtICgoY29u
c3QgY2hhciAqKShkY3R4LT5wcmV2aW91c0RzdEVuZCkgLSAoY29uc3QgY2hhciAqKShkY3R4LT5i
YXNlKSk7DQorCWRjdHgtPmJhc2UgPSBkaWN0Ow0KKwlkY3R4LT5wcmV2aW91c0RzdEVuZCA9IChj
b25zdCBjaGFyICopZGljdCArIGRpY3RTaXplOw0KKwlyZXR1cm4gMDsNCit9DQorDQorLyogWlNU
RF9sb2FkRW50cm9weSgpIDoNCisgKiBkaWN0IDogbXVzdCBwb2ludCBhdCBiZWdpbm5pbmcgb2Yg
YSB2YWxpZCB6c3RkIGRpY3Rpb25hcnkNCisgKiBAcmV0dXJuIDogc2l6ZSBvZiBlbnRyb3B5IHRh
YmxlcyByZWFkICovDQorc3RhdGljIHNpemVfdCBaU1REX2xvYWRFbnRyb3B5KFpTVERfZW50cm9w
eVRhYmxlc190ICplbnRyb3B5LCBjb25zdCB2b2lkICpjb25zdCBkaWN0LCBzaXplX3QgY29uc3Qg
ZGljdFNpemUpDQorew0KKwljb25zdCBCWVRFICpkaWN0UHRyID0gKGNvbnN0IEJZVEUgKilkaWN0
Ow0KKwljb25zdCBCWVRFICpjb25zdCBkaWN0RW5kID0gZGljdFB0ciArIGRpY3RTaXplOw0KKw0K
KwlpZiAoZGljdFNpemUgPD0gOCkNCisJCXJldHVybiBFUlJPUihkaWN0aW9uYXJ5X2NvcnJ1cHRl
ZCk7DQorCWRpY3RQdHIgKz0gODsgLyogc2tpcCBoZWFkZXIgPSBtYWdpYyArIGRpY3RJRCAqLw0K
Kw0KKwl7DQorCQlzaXplX3QgY29uc3QgaFNpemUgPSBIVUZfcmVhZERUYWJsZVg0X3drc3AoZW50
cm9weS0+aHVmVGFibGUsIGRpY3RQdHIsIGRpY3RFbmQgLSBkaWN0UHRyLCBlbnRyb3B5LT53b3Jr
c3BhY2UsIHNpemVvZihlbnRyb3B5LT53b3Jrc3BhY2UpKTsNCisJCWlmIChIVUZfaXNFcnJvciho
U2l6ZSkpDQorCQkJcmV0dXJuIEVSUk9SKGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJCWRpY3RQ
dHIgKz0gaFNpemU7DQorCX0NCisNCisJew0KKwkJc2hvcnQgb2ZmY29kZU5Db3VudFtNYXhPZmYg
KyAxXTsNCisJCVUzMiBvZmZjb2RlTWF4VmFsdWUgPSBNYXhPZmYsIG9mZmNvZGVMb2c7DQorCQlz
aXplX3QgY29uc3Qgb2ZmY29kZUhlYWRlclNpemUgPSBGU0VfcmVhZE5Db3VudChvZmZjb2RlTkNv
dW50LCAmb2ZmY29kZU1heFZhbHVlLCAmb2ZmY29kZUxvZywgZGljdFB0ciwgZGljdEVuZCAtIGRp
Y3RQdHIpOw0KKwkJaWYgKEZTRV9pc0Vycm9yKG9mZmNvZGVIZWFkZXJTaXplKSkNCisJCQlyZXR1
cm4gRVJST1IoZGljdGlvbmFyeV9jb3JydXB0ZWQpOw0KKwkJaWYgKG9mZmNvZGVMb2cgPiBPZmZG
U0VMb2cpDQorCQkJcmV0dXJuIEVSUk9SKGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJCUNIRUNL
X0UoRlNFX2J1aWxkRFRhYmxlX3drc3AoZW50cm9weS0+T0ZUYWJsZSwgb2ZmY29kZU5Db3VudCwg
b2ZmY29kZU1heFZhbHVlLCBvZmZjb2RlTG9nLCBlbnRyb3B5LT53b3Jrc3BhY2UsIHNpemVvZihl
bnRyb3B5LT53b3Jrc3BhY2UpKSwgZGljdGlvbmFyeV9jb3JydXB0ZWQpOw0KKwkJZGljdFB0ciAr
PSBvZmZjb2RlSGVhZGVyU2l6ZTsNCisJfQ0KKw0KKwl7DQorCQlzaG9ydCBtYXRjaGxlbmd0aE5D
b3VudFtNYXhNTCArIDFdOw0KKwkJdW5zaWduZWQgbWF0Y2hsZW5ndGhNYXhWYWx1ZSA9IE1heE1M
LCBtYXRjaGxlbmd0aExvZzsNCisJCXNpemVfdCBjb25zdCBtYXRjaGxlbmd0aEhlYWRlclNpemUg
PSBGU0VfcmVhZE5Db3VudChtYXRjaGxlbmd0aE5Db3VudCwgJm1hdGNobGVuZ3RoTWF4VmFsdWUs
ICZtYXRjaGxlbmd0aExvZywgZGljdFB0ciwgZGljdEVuZCAtIGRpY3RQdHIpOw0KKwkJaWYgKEZT
RV9pc0Vycm9yKG1hdGNobGVuZ3RoSGVhZGVyU2l6ZSkpDQorCQkJcmV0dXJuIEVSUk9SKGRpY3Rp
b25hcnlfY29ycnVwdGVkKTsNCisJCWlmIChtYXRjaGxlbmd0aExvZyA+IE1MRlNFTG9nKQ0KKwkJ
CXJldHVybiBFUlJPUihkaWN0aW9uYXJ5X2NvcnJ1cHRlZCk7DQorCQlDSEVDS19FKEZTRV9idWls
ZERUYWJsZV93a3NwKGVudHJvcHktPk1MVGFibGUsIG1hdGNobGVuZ3RoTkNvdW50LCBtYXRjaGxl
bmd0aE1heFZhbHVlLCBtYXRjaGxlbmd0aExvZywgZW50cm9weS0+d29ya3NwYWNlLCBzaXplb2Yo
ZW50cm9weS0+d29ya3NwYWNlKSksIGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJCWRpY3RQdHIg
Kz0gbWF0Y2hsZW5ndGhIZWFkZXJTaXplOw0KKwl9DQorDQorCXsNCisJCXNob3J0IGxpdGxlbmd0
aE5Db3VudFtNYXhMTCArIDFdOw0KKwkJdW5zaWduZWQgbGl0bGVuZ3RoTWF4VmFsdWUgPSBNYXhM
TCwgbGl0bGVuZ3RoTG9nOw0KKwkJc2l6ZV90IGNvbnN0IGxpdGxlbmd0aEhlYWRlclNpemUgPSBG
U0VfcmVhZE5Db3VudChsaXRsZW5ndGhOQ291bnQsICZsaXRsZW5ndGhNYXhWYWx1ZSwgJmxpdGxl
bmd0aExvZywgZGljdFB0ciwgZGljdEVuZCAtIGRpY3RQdHIpOw0KKwkJaWYgKEZTRV9pc0Vycm9y
KGxpdGxlbmd0aEhlYWRlclNpemUpKQ0KKwkJCXJldHVybiBFUlJPUihkaWN0aW9uYXJ5X2NvcnJ1
cHRlZCk7DQorCQlpZiAobGl0bGVuZ3RoTG9nID4gTExGU0VMb2cpDQorCQkJcmV0dXJuIEVSUk9S
KGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJCUNIRUNLX0UoRlNFX2J1aWxkRFRhYmxlX3drc3Ao
ZW50cm9weS0+TExUYWJsZSwgbGl0bGVuZ3RoTkNvdW50LCBsaXRsZW5ndGhNYXhWYWx1ZSwgbGl0
bGVuZ3RoTG9nLCBlbnRyb3B5LT53b3Jrc3BhY2UsIHNpemVvZihlbnRyb3B5LT53b3Jrc3BhY2Up
KSwgZGljdGlvbmFyeV9jb3JydXB0ZWQpOw0KKwkJZGljdFB0ciArPSBsaXRsZW5ndGhIZWFkZXJT
aXplOw0KKwl9DQorDQorCWlmIChkaWN0UHRyICsgMTIgPiBkaWN0RW5kKQ0KKwkJcmV0dXJuIEVS
Uk9SKGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJew0KKwkJaW50IGk7DQorCQlzaXplX3QgY29u
c3QgZGljdENvbnRlbnRTaXplID0gKHNpemVfdCkoZGljdEVuZCAtIChkaWN0UHRyICsgMTIpKTsN
CisJCWZvciAoaSA9IDA7IGkgPCAzOyBpKyspIHsNCisJCQlVMzIgY29uc3QgcmVwID0gWlNURF9y
ZWFkTEUzMihkaWN0UHRyKTsNCisJCQlkaWN0UHRyICs9IDQ7DQorCQkJaWYgKHJlcCA9PSAwIHx8
IHJlcCA+PSBkaWN0Q29udGVudFNpemUpDQorCQkJCXJldHVybiBFUlJPUihkaWN0aW9uYXJ5X2Nv
cnJ1cHRlZCk7DQorCQkJZW50cm9weS0+cmVwW2ldID0gcmVwOw0KKwkJfQ0KKwl9DQorDQorCXJl
dHVybiBkaWN0UHRyIC0gKGNvbnN0IEJZVEUgKilkaWN0Ow0KK30NCisNCitzdGF0aWMgc2l6ZV90
IFpTVERfZGVjb21wcmVzc19pbnNlcnREaWN0aW9uYXJ5KFpTVERfREN0eCAqZGN0eCwgY29uc3Qg
dm9pZCAqZGljdCwgc2l6ZV90IGRpY3RTaXplKQ0KK3sNCisJaWYgKGRpY3RTaXplIDwgOCkNCisJ
CXJldHVybiBaU1REX3JlZkRpY3RDb250ZW50KGRjdHgsIGRpY3QsIGRpY3RTaXplKTsNCisJew0K
KwkJVTMyIGNvbnN0IG1hZ2ljID0gWlNURF9yZWFkTEUzMihkaWN0KTsNCisJCWlmIChtYWdpYyAh
PSBaU1REX0RJQ1RfTUFHSUMpIHsNCisJCQlyZXR1cm4gWlNURF9yZWZEaWN0Q29udGVudChkY3R4
LCBkaWN0LCBkaWN0U2l6ZSk7IC8qIHB1cmUgY29udGVudCBtb2RlICovDQorCQl9DQorCX0NCisJ
ZGN0eC0+ZGljdElEID0gWlNURF9yZWFkTEUzMigoY29uc3QgY2hhciAqKWRpY3QgKyA0KTsNCisN
CisJLyogbG9hZCBlbnRyb3B5IHRhYmxlcyAqLw0KKwl7DQorCQlzaXplX3QgY29uc3QgZVNpemUg
PSBaU1REX2xvYWRFbnRyb3B5KCZkY3R4LT5lbnRyb3B5LCBkaWN0LCBkaWN0U2l6ZSk7DQorCQlp
ZiAoWlNURF9pc0Vycm9yKGVTaXplKSkNCisJCQlyZXR1cm4gRVJST1IoZGljdGlvbmFyeV9jb3Jy
dXB0ZWQpOw0KKwkJZGljdCA9IChjb25zdCBjaGFyICopZGljdCArIGVTaXplOw0KKwkJZGljdFNp
emUgLT0gZVNpemU7DQorCX0NCisJZGN0eC0+bGl0RW50cm9weSA9IGRjdHgtPmZzZUVudHJvcHkg
PSAxOw0KKw0KKwkvKiByZWZlcmVuY2UgZGljdGlvbmFyeSBjb250ZW50ICovDQorCXJldHVybiBa
U1REX3JlZkRpY3RDb250ZW50KGRjdHgsIGRpY3QsIGRpY3RTaXplKTsNCit9DQorDQorc2l6ZV90
IFpTVERfZGVjb21wcmVzc0JlZ2luX3VzaW5nRGljdChaU1REX0RDdHggKmRjdHgsIGNvbnN0IHZv
aWQgKmRpY3QsIHNpemVfdCBkaWN0U2l6ZSkNCit7DQorCUNIRUNLX0YoWlNURF9kZWNvbXByZXNz
QmVnaW4oZGN0eCkpOw0KKwlpZiAoZGljdCAmJiBkaWN0U2l6ZSkNCisJCUNIRUNLX0UoWlNURF9k
ZWNvbXByZXNzX2luc2VydERpY3Rpb25hcnkoZGN0eCwgZGljdCwgZGljdFNpemUpLCBkaWN0aW9u
YXJ5X2NvcnJ1cHRlZCk7DQorCXJldHVybiAwOw0KK30NCisNCisvKiA9PT09PT0gICBaU1REX0RE
aWN0ICAgPT09PT09ICovDQorDQorc3RydWN0IFpTVERfRERpY3RfcyB7DQorCXZvaWQgKmRpY3RC
dWZmZXI7DQorCWNvbnN0IHZvaWQgKmRpY3RDb250ZW50Ow0KKwlzaXplX3QgZGljdFNpemU7DQor
CVpTVERfZW50cm9weVRhYmxlc190IGVudHJvcHk7DQorCVUzMiBkaWN0SUQ7DQorCVUzMiBlbnRy
b3B5UHJlc2VudDsNCisJWlNURF9jdXN0b21NZW0gY01lbTsNCit9OyAvKiB0eXBlZGVmJ2QgdG8g
WlNURF9ERGljdCB3aXRoaW4gInpzdGQuaCIgKi8NCisNCitzaXplX3QgWlNURF9ERGljdFdvcmtz
cGFjZUJvdW5kKHZvaWQpIHsgcmV0dXJuIFpTVERfQUxJR04oc2l6ZW9mKFpTVERfc3RhY2spKSAr
IFpTVERfQUxJR04oc2l6ZW9mKFpTVERfRERpY3QpKTsgfQ0KKw0KK3N0YXRpYyBjb25zdCB2b2lk
ICpaU1REX0REaWN0RGljdENvbnRlbnQoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpIHsgcmV0dXJu
IGRkaWN0LT5kaWN0Q29udGVudDsgfQ0KKw0KK3N0YXRpYyBzaXplX3QgWlNURF9ERGljdERpY3RT
aXplKGNvbnN0IFpTVERfRERpY3QgKmRkaWN0KSB7IHJldHVybiBkZGljdC0+ZGljdFNpemU7IH0N
CisNCitzdGF0aWMgdm9pZCBaU1REX3JlZkREaWN0KFpTVERfREN0eCAqZHN0REN0eCwgY29uc3Qg
WlNURF9ERGljdCAqZGRpY3QpDQorew0KKwlaU1REX2RlY29tcHJlc3NCZWdpbihkc3REQ3R4KTsg
LyogaW5pdCAqLw0KKwlpZiAoZGRpY3QpIHsJCSAgICAgICAvKiBzdXBwb3J0IHJlZkREaWN0IG9u
IE5VTEwgKi8NCisJCWRzdERDdHgtPmRpY3RJRCA9IGRkaWN0LT5kaWN0SUQ7DQorCQlkc3REQ3R4
LT5iYXNlID0gZGRpY3QtPmRpY3RDb250ZW50Ow0KKwkJZHN0REN0eC0+dkJhc2UgPSBkZGljdC0+
ZGljdENvbnRlbnQ7DQorCQlkc3REQ3R4LT5kaWN0RW5kID0gKGNvbnN0IEJZVEUgKilkZGljdC0+
ZGljdENvbnRlbnQgKyBkZGljdC0+ZGljdFNpemU7DQorCQlkc3REQ3R4LT5wcmV2aW91c0RzdEVu
ZCA9IGRzdERDdHgtPmRpY3RFbmQ7DQorCQlpZiAoZGRpY3QtPmVudHJvcHlQcmVzZW50KSB7DQor
CQkJZHN0REN0eC0+bGl0RW50cm9weSA9IDE7DQorCQkJZHN0REN0eC0+ZnNlRW50cm9weSA9IDE7
DQorCQkJZHN0REN0eC0+TExUcHRyID0gZGRpY3QtPmVudHJvcHkuTExUYWJsZTsNCisJCQlkc3RE
Q3R4LT5NTFRwdHIgPSBkZGljdC0+ZW50cm9weS5NTFRhYmxlOw0KKwkJCWRzdERDdHgtPk9GVHB0
ciA9IGRkaWN0LT5lbnRyb3B5Lk9GVGFibGU7DQorCQkJZHN0REN0eC0+SFVGcHRyID0gZGRpY3Qt
PmVudHJvcHkuaHVmVGFibGU7DQorCQkJZHN0REN0eC0+ZW50cm9weS5yZXBbMF0gPSBkZGljdC0+
ZW50cm9weS5yZXBbMF07DQorCQkJZHN0REN0eC0+ZW50cm9weS5yZXBbMV0gPSBkZGljdC0+ZW50
cm9weS5yZXBbMV07DQorCQkJZHN0REN0eC0+ZW50cm9weS5yZXBbMl0gPSBkZGljdC0+ZW50cm9w
eS5yZXBbMl07DQorCQl9IGVsc2Ugew0KKwkJCWRzdERDdHgtPmxpdEVudHJvcHkgPSAwOw0KKwkJ
CWRzdERDdHgtPmZzZUVudHJvcHkgPSAwOw0KKwkJfQ0KKwl9DQorfQ0KKw0KK3N0YXRpYyBzaXpl
X3QgWlNURF9sb2FkRW50cm9weV9pbkREaWN0KFpTVERfRERpY3QgKmRkaWN0KQ0KK3sNCisJZGRp
Y3QtPmRpY3RJRCA9IDA7DQorCWRkaWN0LT5lbnRyb3B5UHJlc2VudCA9IDA7DQorCWlmIChkZGlj
dC0+ZGljdFNpemUgPCA4KQ0KKwkJcmV0dXJuIDA7DQorCXsNCisJCVUzMiBjb25zdCBtYWdpYyA9
IFpTVERfcmVhZExFMzIoZGRpY3QtPmRpY3RDb250ZW50KTsNCisJCWlmIChtYWdpYyAhPSBaU1RE
X0RJQ1RfTUFHSUMpDQorCQkJcmV0dXJuIDA7IC8qIHB1cmUgY29udGVudCBtb2RlICovDQorCX0N
CisJZGRpY3QtPmRpY3RJRCA9IFpTVERfcmVhZExFMzIoKGNvbnN0IGNoYXIgKilkZGljdC0+ZGlj
dENvbnRlbnQgKyA0KTsNCisNCisJLyogbG9hZCBlbnRyb3B5IHRhYmxlcyAqLw0KKwlDSEVDS19F
KFpTVERfbG9hZEVudHJvcHkoJmRkaWN0LT5lbnRyb3B5LCBkZGljdC0+ZGljdENvbnRlbnQsIGRk
aWN0LT5kaWN0U2l6ZSksIGRpY3Rpb25hcnlfY29ycnVwdGVkKTsNCisJZGRpY3QtPmVudHJvcHlQ
cmVzZW50ID0gMTsNCisJcmV0dXJuIDA7DQorfQ0KKw0KK3N0YXRpYyBaU1REX0REaWN0ICpaU1RE
X2NyZWF0ZUREaWN0X2FkdmFuY2VkKGNvbnN0IHZvaWQgKmRpY3QsIHNpemVfdCBkaWN0U2l6ZSwg
dW5zaWduZWQgYnlSZWZlcmVuY2UsIFpTVERfY3VzdG9tTWVtIGN1c3RvbU1lbSkNCit7DQorCWlm
ICghY3VzdG9tTWVtLmN1c3RvbUFsbG9jIHx8ICFjdXN0b21NZW0uY3VzdG9tRnJlZSkNCisJCXJl
dHVybiBOVUxMOw0KKw0KKwl7DQorCQlaU1REX0REaWN0ICpjb25zdCBkZGljdCA9IChaU1REX0RE
aWN0ICopWlNURF9tYWxsb2Moc2l6ZW9mKFpTVERfRERpY3QpLCBjdXN0b21NZW0pOw0KKwkJaWYg
KCFkZGljdCkNCisJCQlyZXR1cm4gTlVMTDsNCisJCWRkaWN0LT5jTWVtID0gY3VzdG9tTWVtOw0K
Kw0KKwkJaWYgKChieVJlZmVyZW5jZSkgfHwgKCFkaWN0KSB8fCAoIWRpY3RTaXplKSkgew0KKwkJ
CWRkaWN0LT5kaWN0QnVmZmVyID0gTlVMTDsNCisJCQlkZGljdC0+ZGljdENvbnRlbnQgPSBkaWN0
Ow0KKwkJfSBlbHNlIHsNCisJCQl2b2lkICpjb25zdCBpbnRlcm5hbEJ1ZmZlciA9IFpTVERfbWFs
bG9jKGRpY3RTaXplLCBjdXN0b21NZW0pOw0KKwkJCWlmICghaW50ZXJuYWxCdWZmZXIpIHsNCisJ
CQkJWlNURF9mcmVlRERpY3QoZGRpY3QpOw0KKwkJCQlyZXR1cm4gTlVMTDsNCisJCQl9DQorCQkJ
bWVtY3B5KGludGVybmFsQnVmZmVyLCBkaWN0LCBkaWN0U2l6ZSk7DQorCQkJZGRpY3QtPmRpY3RC
dWZmZXIgPSBpbnRlcm5hbEJ1ZmZlcjsNCisJCQlkZGljdC0+ZGljdENvbnRlbnQgPSBpbnRlcm5h
bEJ1ZmZlcjsNCisJCX0NCisJCWRkaWN0LT5kaWN0U2l6ZSA9IGRpY3RTaXplOw0KKwkJZGRpY3Qt
PmVudHJvcHkuaHVmVGFibGVbMF0gPSAoSFVGX0RUYWJsZSkoKEh1ZkxvZykqMHgxMDAwMDAxKTsg
LyogY292ZXIgYm90aCBsaXR0bGUgYW5kIGJpZyBlbmRpYW4gKi8NCisJCS8qIHBhcnNlIGRpY3Rp
b25hcnkgY29udGVudCAqLw0KKwkJew0KKwkJCXNpemVfdCBjb25zdCBlcnJvckNvZGUgPSBaU1RE
X2xvYWRFbnRyb3B5X2luRERpY3QoZGRpY3QpOw0KKwkJCWlmIChaU1REX2lzRXJyb3IoZXJyb3JD
b2RlKSkgew0KKwkJCQlaU1REX2ZyZWVERGljdChkZGljdCk7DQorCQkJCXJldHVybiBOVUxMOw0K
KwkJCX0NCisJCX0NCisNCisJCXJldHVybiBkZGljdDsNCisJfQ0KK30NCisNCisvKiEgWlNURF9p
bml0RERpY3QoKSA6DQorKiAgIENyZWF0ZSBhIGRpZ2VzdGVkIGRpY3Rpb25hcnksIHRvIHN0YXJ0
IGRlY29tcHJlc3Npb24gd2l0aG91dCBzdGFydHVwIGRlbGF5Lg0KKyogICBgZGljdGAgY29udGVu
dCBpcyBjb3BpZWQgaW5zaWRlIEREaWN0Lg0KKyogICBDb25zZXF1ZW50bHksIGBkaWN0YCBjYW4g
YmUgcmVsZWFzZWQgYWZ0ZXIgYFpTVERfRERpY3RgIGNyZWF0aW9uICovDQorWlNURF9ERGljdCAq
WlNURF9pbml0RERpY3QoY29uc3Qgdm9pZCAqZGljdCwgc2l6ZV90IGRpY3RTaXplLCB2b2lkICp3
b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3sNCisJWlNURF9jdXN0b21NZW0gY29u
c3Qgc3RhY2tNZW0gPSBaU1REX2luaXRTdGFjayh3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0K
KwlyZXR1cm4gWlNURF9jcmVhdGVERGljdF9hZHZhbmNlZChkaWN0LCBkaWN0U2l6ZSwgMSwgc3Rh
Y2tNZW0pOw0KK30NCisNCitzaXplX3QgWlNURF9mcmVlRERpY3QoWlNURF9ERGljdCAqZGRpY3Qp
DQorew0KKwlpZiAoZGRpY3QgPT0gTlVMTCkNCisJCXJldHVybiAwOyAvKiBzdXBwb3J0IGZyZWUg
b24gTlVMTCAqLw0KKwl7DQorCQlaU1REX2N1c3RvbU1lbSBjb25zdCBjTWVtID0gZGRpY3QtPmNN
ZW07DQorCQlaU1REX2ZyZWUoZGRpY3QtPmRpY3RCdWZmZXIsIGNNZW0pOw0KKwkJWlNURF9mcmVl
KGRkaWN0LCBjTWVtKTsNCisJCXJldHVybiAwOw0KKwl9DQorfQ0KKw0KKy8qISBaU1REX2dldERp
Y3RJRF9mcm9tRGljdCgpIDoNCisgKiAgUHJvdmlkZXMgdGhlIGRpY3RJRCBzdG9yZWQgd2l0aGlu
IGRpY3Rpb25hcnkuDQorICogIGlmIEByZXR1cm4gPT0gMCwgdGhlIGRpY3Rpb25hcnkgaXMgbm90
IGNvbmZvcm1hbnQgd2l0aCBac3RhbmRhcmQgc3BlY2lmaWNhdGlvbi4NCisgKiAgSXQgY2FuIHN0
aWxsIGJlIGxvYWRlZCwgYnV0IGFzIGEgY29udGVudC1vbmx5IGRpY3Rpb25hcnkuICovDQordW5z
aWduZWQgWlNURF9nZXREaWN0SURfZnJvbURpY3QoY29uc3Qgdm9pZCAqZGljdCwgc2l6ZV90IGRp
Y3RTaXplKQ0KK3sNCisJaWYgKGRpY3RTaXplIDwgOCkNCisJCXJldHVybiAwOw0KKwlpZiAoWlNU
RF9yZWFkTEUzMihkaWN0KSAhPSBaU1REX0RJQ1RfTUFHSUMpDQorCQlyZXR1cm4gMDsNCisJcmV0
dXJuIFpTVERfcmVhZExFMzIoKGNvbnN0IGNoYXIgKilkaWN0ICsgNCk7DQorfQ0KKw0KKy8qISBa
U1REX2dldERpY3RJRF9mcm9tRERpY3QoKSA6DQorICogIFByb3ZpZGVzIHRoZSBkaWN0SUQgb2Yg
dGhlIGRpY3Rpb25hcnkgbG9hZGVkIGludG8gYGRkaWN0YC4NCisgKiAgSWYgQHJldHVybiA9PSAw
LCB0aGUgZGljdGlvbmFyeSBpcyBub3QgY29uZm9ybWFudCB0byBac3RhbmRhcmQgc3BlY2lmaWNh
dGlvbiwgb3IgZW1wdHkuDQorICogIE5vbi1jb25mb3JtYW50IGRpY3Rpb25hcmllcyBjYW4gc3Rp
bGwgYmUgbG9hZGVkLCBidXQgYXMgY29udGVudC1vbmx5IGRpY3Rpb25hcmllcy4gKi8NCit1bnNp
Z25lZCBaU1REX2dldERpY3RJRF9mcm9tRERpY3QoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpDQor
ew0KKwlpZiAoZGRpY3QgPT0gTlVMTCkNCisJCXJldHVybiAwOw0KKwlyZXR1cm4gWlNURF9nZXRE
aWN0SURfZnJvbURpY3QoZGRpY3QtPmRpY3RDb250ZW50LCBkZGljdC0+ZGljdFNpemUpOw0KK30N
CisNCisvKiEgWlNURF9nZXREaWN0SURfZnJvbUZyYW1lKCkgOg0KKyAqICBQcm92aWRlcyB0aGUg
ZGljdElEIHJlcXVpcmVkIHRvIGRlY29tcHJlc3NlZCB0aGUgZnJhbWUgc3RvcmVkIHdpdGhpbiBg
c3JjYC4NCisgKiAgSWYgQHJldHVybiA9PSAwLCB0aGUgZGljdElEIGNvdWxkIG5vdCBiZSBkZWNv
ZGVkLg0KKyAqICBUaGlzIGNvdWxkIGZvciBvbmUgb2YgdGhlIGZvbGxvd2luZyByZWFzb25zIDoN
CisgKiAgLSBUaGUgZnJhbWUgZG9lcyBub3QgcmVxdWlyZSBhIGRpY3Rpb25hcnkgdG8gYmUgZGVj
b2RlZCAobW9zdCBjb21tb24gY2FzZSkuDQorICogIC0gVGhlIGZyYW1lIHdhcyBidWlsdCB3aXRo
IGRpY3RJRCBpbnRlbnRpb25hbGx5IHJlbW92ZWQuIFdoYXRldmVyIGRpY3Rpb25hcnkgaXMgbmVj
ZXNzYXJ5IGlzIGEgaGlkZGVuIGluZm9ybWF0aW9uLg0KKyAqICAgIE5vdGUgOiB0aGlzIHVzZSBj
YXNlIGFsc28gaGFwcGVucyB3aGVuIHVzaW5nIGEgbm9uLWNvbmZvcm1hbnQgZGljdGlvbmFyeS4N
CisgKiAgLSBgc3JjU2l6ZWAgaXMgdG9vIHNtYWxsLCBhbmQgYXMgYSByZXN1bHQsIHRoZSBmcmFt
ZSBoZWFkZXIgY291bGQgbm90IGJlIGRlY29kZWQgKG9ubHkgcG9zc2libGUgaWYgYHNyY1NpemUg
PCBaU1REX0ZSQU1FSEVBREVSU0laRV9NQVhgKS4NCisgKiAgLSBUaGlzIGlzIG5vdCBhIFpzdGFu
ZGFyZCBmcmFtZS4NCisgKiAgV2hlbiBpZGVudGlmeWluZyB0aGUgZXhhY3QgZmFpbHVyZSBjYXVz
ZSwgaXQncyBwb3NzaWJsZSB0byB1c2VkIFpTVERfZ2V0RnJhbWVQYXJhbXMoKSwgd2hpY2ggd2ls
bCBwcm92aWRlIGEgbW9yZSBwcmVjaXNlIGVycm9yIGNvZGUuICovDQordW5zaWduZWQgWlNURF9n
ZXREaWN0SURfZnJvbUZyYW1lKGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQorew0K
KwlaU1REX2ZyYW1lUGFyYW1zIHpmcCA9IHswLCAwLCAwLCAwfTsNCisJc2l6ZV90IGNvbnN0IGhF
cnJvciA9IFpTVERfZ2V0RnJhbWVQYXJhbXMoJnpmcCwgc3JjLCBzcmNTaXplKTsNCisJaWYgKFpT
VERfaXNFcnJvcihoRXJyb3IpKQ0KKwkJcmV0dXJuIDA7DQorCXJldHVybiB6ZnAuZGljdElEOw0K
K30NCisNCisvKiEgWlNURF9kZWNvbXByZXNzX3VzaW5nRERpY3QoKSA6DQorKiAgIERlY29tcHJl
c3Npb24gdXNpbmcgYSBwcmUtZGlnZXN0ZWQgRGljdGlvbmFyeQ0KKyogICBVc2UgZGljdGlvbmFy
eSB3aXRob3V0IHNpZ25pZmljYW50IG92ZXJoZWFkLiAqLw0KK3NpemVfdCBaU1REX2RlY29tcHJl
c3NfdXNpbmdERGljdChaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFj
aXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCBjb25zdCBaU1REX0REaWN0ICpk
ZGljdCkNCit7DQorCS8qIHBhc3MgY29udGVudCBhbmQgc2l6ZSBpbiBjYXNlIGxlZ2FjeSBmcmFt
ZXMgYXJlIGVuY291bnRlcmVkICovDQorCXJldHVybiBaU1REX2RlY29tcHJlc3NNdWx0aUZyYW1l
KGRjdHgsIGRzdCwgZHN0Q2FwYWNpdHksIHNyYywgc3JjU2l6ZSwgTlVMTCwgMCwgZGRpY3QpOw0K
K30NCisNCisvKj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0NCisqICAgU3Ry
ZWFtaW5nIGRlY29tcHJlc3Npb24NCisqPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09Ki8NCisNCit0eXBlZGVmIGVudW0geyB6ZHNzX2luaXQsIHpkc3NfbG9hZEhlYWRlciwgemRz
c19yZWFkLCB6ZHNzX2xvYWQsIHpkc3NfZmx1c2ggfSBaU1REX2RTdHJlYW1TdGFnZTsNCisNCisv
KiAqKiogUmVzb3VyY2UgbWFuYWdlbWVudCAqKiogKi8NCitzdHJ1Y3QgWlNURF9EU3RyZWFtX3Mg
ew0KKwlaU1REX0RDdHggKmRjdHg7DQorCVpTVERfRERpY3QgKmRkaWN0TG9jYWw7DQorCWNvbnN0
IFpTVERfRERpY3QgKmRkaWN0Ow0KKwlaU1REX2ZyYW1lUGFyYW1zIGZQYXJhbXM7DQorCVpTVERf
ZFN0cmVhbVN0YWdlIHN0YWdlOw0KKwljaGFyICppbkJ1ZmY7DQorCXNpemVfdCBpbkJ1ZmZTaXpl
Ow0KKwlzaXplX3QgaW5Qb3M7DQorCXNpemVfdCBtYXhXaW5kb3dTaXplOw0KKwljaGFyICpvdXRC
dWZmOw0KKwlzaXplX3Qgb3V0QnVmZlNpemU7DQorCXNpemVfdCBvdXRTdGFydDsNCisJc2l6ZV90
IG91dEVuZDsNCisJc2l6ZV90IGJsb2NrU2l6ZTsNCisJQllURSBoZWFkZXJCdWZmZXJbWlNURF9G
UkFNRUhFQURFUlNJWkVfTUFYXTsgLyogdG1wIGJ1ZmZlciB0byBzdG9yZSBmcmFtZSBoZWFkZXIg
Ki8NCisJc2l6ZV90IGxoU2l6ZTsNCisJWlNURF9jdXN0b21NZW0gY3VzdG9tTWVtOw0KKwl2b2lk
ICpsZWdhY3lDb250ZXh0Ow0KKwlVMzIgcHJldmlvdXNMZWdhY3lWZXJzaW9uOw0KKwlVMzIgbGVn
YWN5VmVyc2lvbjsNCisJVTMyIGhvc3RhZ2VCeXRlOw0KK307IC8qIHR5cGVkZWYnZCB0byBaU1RE
X0RTdHJlYW0gd2l0aGluICJ6c3RkLmgiICovDQorDQorc2l6ZV90IFpTVERfRFN0cmVhbVdvcmtz
cGFjZUJvdW5kKHNpemVfdCBtYXhXaW5kb3dTaXplKQ0KK3sNCisJc2l6ZV90IGNvbnN0IGJsb2Nr
U2l6ZSA9IE1JTihtYXhXaW5kb3dTaXplLCBaU1REX0JMT0NLU0laRV9BQlNPTFVURU1BWCk7DQor
CXNpemVfdCBjb25zdCBpbkJ1ZmZTaXplID0gYmxvY2tTaXplOw0KKwlzaXplX3QgY29uc3Qgb3V0
QnVmZlNpemUgPSBtYXhXaW5kb3dTaXplICsgYmxvY2tTaXplICsgV0lMRENPUFlfT1ZFUkxFTkdU
SCAqIDI7DQorCXJldHVybiBaU1REX0RDdHhXb3Jrc3BhY2VCb3VuZCgpICsgWlNURF9BTElHTihz
aXplb2YoWlNURF9EU3RyZWFtKSkgKyBaU1REX0FMSUdOKGluQnVmZlNpemUpICsgWlNURF9BTElH
TihvdXRCdWZmU2l6ZSk7DQorfQ0KKw0KK3N0YXRpYyBaU1REX0RTdHJlYW0gKlpTVERfY3JlYXRl
RFN0cmVhbV9hZHZhbmNlZChaU1REX2N1c3RvbU1lbSBjdXN0b21NZW0pDQorew0KKwlaU1REX0RT
dHJlYW0gKnpkczsNCisNCisJaWYgKCFjdXN0b21NZW0uY3VzdG9tQWxsb2MgfHwgIWN1c3RvbU1l
bS5jdXN0b21GcmVlKQ0KKwkJcmV0dXJuIE5VTEw7DQorDQorCXpkcyA9IChaU1REX0RTdHJlYW0g
KilaU1REX21hbGxvYyhzaXplb2YoWlNURF9EU3RyZWFtKSwgY3VzdG9tTWVtKTsNCisJaWYgKHpk
cyA9PSBOVUxMKQ0KKwkJcmV0dXJuIE5VTEw7DQorCW1lbXNldCh6ZHMsIDAsIHNpemVvZihaU1RE
X0RTdHJlYW0pKTsNCisJbWVtY3B5KCZ6ZHMtPmN1c3RvbU1lbSwgJmN1c3RvbU1lbSwgc2l6ZW9m
KFpTVERfY3VzdG9tTWVtKSk7DQorCXpkcy0+ZGN0eCA9IFpTVERfY3JlYXRlREN0eF9hZHZhbmNl
ZChjdXN0b21NZW0pOw0KKwlpZiAoemRzLT5kY3R4ID09IE5VTEwpIHsNCisJCVpTVERfZnJlZURT
dHJlYW0oemRzKTsNCisJCXJldHVybiBOVUxMOw0KKwl9DQorCXpkcy0+c3RhZ2UgPSB6ZHNzX2lu
aXQ7DQorCXpkcy0+bWF4V2luZG93U2l6ZSA9IFpTVERfTUFYV0lORE9XU0laRV9ERUZBVUxUOw0K
KwlyZXR1cm4gemRzOw0KK30NCisNCitaU1REX0RTdHJlYW0gKlpTVERfaW5pdERTdHJlYW0oc2l6
ZV90IG1heFdpbmRvd1NpemUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUp
DQorew0KKwlaU1REX2N1c3RvbU1lbSBjb25zdCBzdGFja01lbSA9IFpTVERfaW5pdFN0YWNrKHdv
cmtzcGFjZSwgd29ya3NwYWNlU2l6ZSk7DQorCVpTVERfRFN0cmVhbSAqemRzID0gWlNURF9jcmVh
dGVEU3RyZWFtX2FkdmFuY2VkKHN0YWNrTWVtKTsNCisJaWYgKCF6ZHMpIHsNCisJCXJldHVybiBO
VUxMOw0KKwl9DQorDQorCXpkcy0+bWF4V2luZG93U2l6ZSA9IG1heFdpbmRvd1NpemU7DQorCXpk
cy0+c3RhZ2UgPSB6ZHNzX2xvYWRIZWFkZXI7DQorCXpkcy0+bGhTaXplID0gemRzLT5pblBvcyA9
IHpkcy0+b3V0U3RhcnQgPSB6ZHMtPm91dEVuZCA9IDA7DQorCVpTVERfZnJlZUREaWN0KHpkcy0+
ZGRpY3RMb2NhbCk7DQorCXpkcy0+ZGRpY3RMb2NhbCA9IE5VTEw7DQorCXpkcy0+ZGRpY3QgPSB6
ZHMtPmRkaWN0TG9jYWw7DQorCXpkcy0+bGVnYWN5VmVyc2lvbiA9IDA7DQorCXpkcy0+aG9zdGFn
ZUJ5dGUgPSAwOw0KKw0KKwl7DQorCQlzaXplX3QgY29uc3QgYmxvY2tTaXplID0gTUlOKHpkcy0+
bWF4V2luZG93U2l6ZSwgWlNURF9CTE9DS1NJWkVfQUJTT0xVVEVNQVgpOw0KKwkJc2l6ZV90IGNv
bnN0IG5lZWRlZE91dFNpemUgPSB6ZHMtPm1heFdpbmRvd1NpemUgKyBibG9ja1NpemUgKyBXSUxE
Q09QWV9PVkVSTEVOR1RIICogMjsNCisNCisJCXpkcy0+aW5CdWZmID0gKGNoYXIgKilaU1REX21h
bGxvYyhibG9ja1NpemUsIHpkcy0+Y3VzdG9tTWVtKTsNCisJCXpkcy0+aW5CdWZmU2l6ZSA9IGJs
b2NrU2l6ZTsNCisJCXpkcy0+b3V0QnVmZiA9IChjaGFyICopWlNURF9tYWxsb2MobmVlZGVkT3V0
U2l6ZSwgemRzLT5jdXN0b21NZW0pOw0KKwkJemRzLT5vdXRCdWZmU2l6ZSA9IG5lZWRlZE91dFNp
emU7DQorCQlpZiAoemRzLT5pbkJ1ZmYgPT0gTlVMTCB8fCB6ZHMtPm91dEJ1ZmYgPT0gTlVMTCkg
ew0KKwkJCVpTVERfZnJlZURTdHJlYW0oemRzKTsNCisJCQlyZXR1cm4gTlVMTDsNCisJCX0NCisJ
fQ0KKwlyZXR1cm4gemRzOw0KK30NCisNCitaU1REX0RTdHJlYW0gKlpTVERfaW5pdERTdHJlYW1f
dXNpbmdERGljdChzaXplX3QgbWF4V2luZG93U2l6ZSwgY29uc3QgWlNURF9ERGljdCAqZGRpY3Qs
IHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0KKwlaU1REX0RTdHJl
YW0gKnpkcyA9IFpTVERfaW5pdERTdHJlYW0obWF4V2luZG93U2l6ZSwgd29ya3NwYWNlLCB3b3Jr
c3BhY2VTaXplKTsNCisJaWYgKHpkcykgew0KKwkJemRzLT5kZGljdCA9IGRkaWN0Ow0KKwl9DQor
CXJldHVybiB6ZHM7DQorfQ0KKw0KK3NpemVfdCBaU1REX2ZyZWVEU3RyZWFtKFpTVERfRFN0cmVh
bSAqemRzKQ0KK3sNCisJaWYgKHpkcyA9PSBOVUxMKQ0KKwkJcmV0dXJuIDA7IC8qIHN1cHBvcnQg
ZnJlZSBvbiBudWxsICovDQorCXsNCisJCVpTVERfY3VzdG9tTWVtIGNvbnN0IGNNZW0gPSB6ZHMt
PmN1c3RvbU1lbTsNCisJCVpTVERfZnJlZURDdHgoemRzLT5kY3R4KTsNCisJCXpkcy0+ZGN0eCA9
IE5VTEw7DQorCQlaU1REX2ZyZWVERGljdCh6ZHMtPmRkaWN0TG9jYWwpOw0KKwkJemRzLT5kZGlj
dExvY2FsID0gTlVMTDsNCisJCVpTVERfZnJlZSh6ZHMtPmluQnVmZiwgY01lbSk7DQorCQl6ZHMt
PmluQnVmZiA9IE5VTEw7DQorCQlaU1REX2ZyZWUoemRzLT5vdXRCdWZmLCBjTWVtKTsNCisJCXpk
cy0+b3V0QnVmZiA9IE5VTEw7DQorCQlaU1REX2ZyZWUoemRzLCBjTWVtKTsNCisJCXJldHVybiAw
Ow0KKwl9DQorfQ0KKw0KKy8qICoqKiBJbml0aWFsaXphdGlvbiAqKiogKi8NCisNCitzaXplX3Qg
WlNURF9EU3RyZWFtSW5TaXplKHZvaWQpIHsgcmV0dXJuIFpTVERfQkxPQ0tTSVpFX0FCU09MVVRF
TUFYICsgWlNURF9ibG9ja0hlYWRlclNpemU7IH0NCitzaXplX3QgWlNURF9EU3RyZWFtT3V0U2l6
ZSh2b2lkKSB7IHJldHVybiBaU1REX0JMT0NLU0laRV9BQlNPTFVURU1BWDsgfQ0KKw0KK3NpemVf
dCBaU1REX3Jlc2V0RFN0cmVhbShaU1REX0RTdHJlYW0gKnpkcykNCit7DQorCXpkcy0+c3RhZ2Ug
PSB6ZHNzX2xvYWRIZWFkZXI7DQorCXpkcy0+bGhTaXplID0gemRzLT5pblBvcyA9IHpkcy0+b3V0
U3RhcnQgPSB6ZHMtPm91dEVuZCA9IDA7DQorCXpkcy0+bGVnYWN5VmVyc2lvbiA9IDA7DQorCXpk
cy0+aG9zdGFnZUJ5dGUgPSAwOw0KKwlyZXR1cm4gWlNURF9mcmFtZUhlYWRlclNpemVfcHJlZml4
Ow0KK30NCisNCisvKiAqKioqKiAgIERlY29tcHJlc3Npb24gICAqKioqKiAqLw0KKw0KK1pTVERf
U1RBVElDIHNpemVfdCBaU1REX2xpbWl0Q29weSh2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0
eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCit7DQorCXNpemVfdCBjb25zdCBs
ZW5ndGggPSBNSU4oZHN0Q2FwYWNpdHksIHNyY1NpemUpOw0KKwltZW1jcHkoZHN0LCBzcmMsIGxl
bmd0aCk7DQorCXJldHVybiBsZW5ndGg7DQorfQ0KKw0KK3NpemVfdCBaU1REX2RlY29tcHJlc3NT
dHJlYW0oWlNURF9EU3RyZWFtICp6ZHMsIFpTVERfb3V0QnVmZmVyICpvdXRwdXQsIFpTVERfaW5C
dWZmZXIgKmlucHV0KQ0KK3sNCisJY29uc3QgY2hhciAqY29uc3QgaXN0YXJ0ID0gKGNvbnN0IGNo
YXIgKikoaW5wdXQtPnNyYykgKyBpbnB1dC0+cG9zOw0KKwljb25zdCBjaGFyICpjb25zdCBpZW5k
ID0gKGNvbnN0IGNoYXIgKikoaW5wdXQtPnNyYykgKyBpbnB1dC0+c2l6ZTsNCisJY29uc3QgY2hh
ciAqaXAgPSBpc3RhcnQ7DQorCWNoYXIgKmNvbnN0IG9zdGFydCA9IChjaGFyICopKG91dHB1dC0+
ZHN0KSArIG91dHB1dC0+cG9zOw0KKwljaGFyICpjb25zdCBvZW5kID0gKGNoYXIgKikob3V0cHV0
LT5kc3QpICsgb3V0cHV0LT5zaXplOw0KKwljaGFyICpvcCA9IG9zdGFydDsNCisJVTMyIHNvbWVN
b3JlV29yayA9IDE7DQorDQorCXdoaWxlIChzb21lTW9yZVdvcmspIHsNCisJCXN3aXRjaCAoemRz
LT5zdGFnZSkgew0KKwkJY2FzZSB6ZHNzX2luaXQ6DQorCQkJWlNURF9yZXNldERTdHJlYW0oemRz
KTsgLyogdHJhbnNwYXJlbnQgcmVzZXQgb24gc3RhcnRpbmcgZGVjb2RpbmcgYSBuZXcgZnJhbWUg
Ki8NCisJCQkvKiBmYWxsIHRocm91Z2ggKi8NCisNCisJCWNhc2UgemRzc19sb2FkSGVhZGVyOiB7
DQorCQkJc2l6ZV90IGNvbnN0IGhTaXplID0gWlNURF9nZXRGcmFtZVBhcmFtcygmemRzLT5mUGFy
YW1zLCB6ZHMtPmhlYWRlckJ1ZmZlciwgemRzLT5saFNpemUpOw0KKwkJCWlmIChaU1REX2lzRXJy
b3IoaFNpemUpKQ0KKwkJCQlyZXR1cm4gaFNpemU7DQorCQkJaWYgKGhTaXplICE9IDApIHsJCQkJ
ICAgLyogbmVlZCBtb3JlIGlucHV0ICovDQorCQkJCXNpemVfdCBjb25zdCB0b0xvYWQgPSBoU2l6
ZSAtIHpkcy0+bGhTaXplOyAvKiBpZiBoU2l6ZSE9MCwgaFNpemUgPiB6ZHMtPmxoU2l6ZSAqLw0K
KwkJCQlpZiAodG9Mb2FkID4gKHNpemVfdCkoaWVuZCAtIGlwKSkgewkvKiBub3QgZW5vdWdoIGlu
cHV0IHRvIGxvYWQgZnVsbCBoZWFkZXIgKi8NCisJCQkJCW1lbWNweSh6ZHMtPmhlYWRlckJ1ZmZl
ciArIHpkcy0+bGhTaXplLCBpcCwgaWVuZCAtIGlwKTsNCisJCQkJCXpkcy0+bGhTaXplICs9IGll
bmQgLSBpcDsNCisJCQkJCWlucHV0LT5wb3MgPSBpbnB1dC0+c2l6ZTsNCisJCQkJCXJldHVybiAo
TUFYKFpTVERfZnJhbWVIZWFkZXJTaXplX21pbiwgaFNpemUpIC0gemRzLT5saFNpemUpICsNCisJ
CQkJCSAgICAgICBaU1REX2Jsb2NrSGVhZGVyU2l6ZTsgLyogcmVtYWluaW5nIGhlYWRlciBieXRl
cyArIG5leHQgYmxvY2sgaGVhZGVyICovDQorCQkJCX0NCisJCQkJbWVtY3B5KHpkcy0+aGVhZGVy
QnVmZmVyICsgemRzLT5saFNpemUsIGlwLCB0b0xvYWQpOw0KKwkJCQl6ZHMtPmxoU2l6ZSA9IGhT
aXplOw0KKwkJCQlpcCArPSB0b0xvYWQ7DQorCQkJCWJyZWFrOw0KKwkJCX0NCisNCisJCQkvKiBj
aGVjayBmb3Igc2luZ2xlLXBhc3MgbW9kZSBvcHBvcnR1bml0eSAqLw0KKwkJCWlmICh6ZHMtPmZQ
YXJhbXMuZnJhbWVDb250ZW50U2l6ZSAmJiB6ZHMtPmZQYXJhbXMud2luZG93U2l6ZSAvKiBza2lw
cGFibGUgZnJhbWUgaWYgPT0gMCAqLw0KKwkJCSAgICAmJiAoVTY0KShzaXplX3QpKG9lbmQgLSBv
cCkgPj0gemRzLT5mUGFyYW1zLmZyYW1lQ29udGVudFNpemUpIHsNCisJCQkJc2l6ZV90IGNvbnN0
IGNTaXplID0gWlNURF9maW5kRnJhbWVDb21wcmVzc2VkU2l6ZShpc3RhcnQsIGllbmQgLSBpc3Rh
cnQpOw0KKwkJCQlpZiAoY1NpemUgPD0gKHNpemVfdCkoaWVuZCAtIGlzdGFydCkpIHsNCisJCQkJ
CXNpemVfdCBjb25zdCBkZWNvbXByZXNzZWRTaXplID0gWlNURF9kZWNvbXByZXNzX3VzaW5nRERp
Y3QoemRzLT5kY3R4LCBvcCwgb2VuZCAtIG9wLCBpc3RhcnQsIGNTaXplLCB6ZHMtPmRkaWN0KTsN
CisJCQkJCWlmIChaU1REX2lzRXJyb3IoZGVjb21wcmVzc2VkU2l6ZSkpDQorCQkJCQkJcmV0dXJu
IGRlY29tcHJlc3NlZFNpemU7DQorCQkJCQlpcCA9IGlzdGFydCArIGNTaXplOw0KKwkJCQkJb3Ag
Kz0gZGVjb21wcmVzc2VkU2l6ZTsNCisJCQkJCXpkcy0+ZGN0eC0+ZXhwZWN0ZWQgPSAwOw0KKwkJ
CQkJemRzLT5zdGFnZSA9IHpkc3NfaW5pdDsNCisJCQkJCXNvbWVNb3JlV29yayA9IDA7DQorCQkJ
CQlicmVhazsNCisJCQkJfQ0KKwkJCX0NCisNCisJCQkvKiBDb25zdW1lIGhlYWRlciAqLw0KKwkJ
CVpTVERfcmVmRERpY3QoemRzLT5kY3R4LCB6ZHMtPmRkaWN0KTsNCisJCQl7DQorCQkJCXNpemVf
dCBjb25zdCBoMVNpemUgPSBaU1REX25leHRTcmNTaXplVG9EZWNvbXByZXNzKHpkcy0+ZGN0eCk7
IC8qID09IFpTVERfZnJhbWVIZWFkZXJTaXplX3ByZWZpeCAqLw0KKwkJCQlDSEVDS19GKFpTVERf
ZGVjb21wcmVzc0NvbnRpbnVlKHpkcy0+ZGN0eCwgTlVMTCwgMCwgemRzLT5oZWFkZXJCdWZmZXIs
IGgxU2l6ZSkpOw0KKwkJCQl7DQorCQkJCQlzaXplX3QgY29uc3QgaDJTaXplID0gWlNURF9uZXh0
U3JjU2l6ZVRvRGVjb21wcmVzcyh6ZHMtPmRjdHgpOw0KKwkJCQkJQ0hFQ0tfRihaU1REX2RlY29t
cHJlc3NDb250aW51ZSh6ZHMtPmRjdHgsIE5VTEwsIDAsIHpkcy0+aGVhZGVyQnVmZmVyICsgaDFT
aXplLCBoMlNpemUpKTsNCisJCQkJfQ0KKwkJCX0NCisNCisJCQl6ZHMtPmZQYXJhbXMud2luZG93
U2l6ZSA9IE1BWCh6ZHMtPmZQYXJhbXMud2luZG93U2l6ZSwgMVUgPDwgWlNURF9XSU5ET1dMT0df
QUJTT0xVVEVNSU4pOw0KKwkJCWlmICh6ZHMtPmZQYXJhbXMud2luZG93U2l6ZSA+IHpkcy0+bWF4
V2luZG93U2l6ZSkNCisJCQkJcmV0dXJuIEVSUk9SKGZyYW1lUGFyYW1ldGVyX3dpbmRvd1Rvb0xh
cmdlKTsNCisNCisJCQkvKiBCdWZmZXJzIGFyZSBwcmVhbGxvY2F0ZWQsIGJ1dCBkb3VibGUgY2hl
Y2sgKi8NCisJCQl7DQorCQkJCXNpemVfdCBjb25zdCBibG9ja1NpemUgPSBNSU4oemRzLT5tYXhX
aW5kb3dTaXplLCBaU1REX0JMT0NLU0laRV9BQlNPTFVURU1BWCk7DQorCQkJCXNpemVfdCBjb25z
dCBuZWVkZWRPdXRTaXplID0gemRzLT5tYXhXaW5kb3dTaXplICsgYmxvY2tTaXplICsgV0lMRENP
UFlfT1ZFUkxFTkdUSCAqIDI7DQorCQkJCWlmICh6ZHMtPmluQnVmZlNpemUgPCBibG9ja1NpemUp
IHsNCisJCQkJCXJldHVybiBFUlJPUihHRU5FUklDKTsNCisJCQkJfQ0KKwkJCQlpZiAoemRzLT5v
dXRCdWZmU2l6ZSA8IG5lZWRlZE91dFNpemUpIHsNCisJCQkJCXJldHVybiBFUlJPUihHRU5FUklD
KTsNCisJCQkJfQ0KKwkJCQl6ZHMtPmJsb2NrU2l6ZSA9IGJsb2NrU2l6ZTsNCisJCQl9DQorCQkJ
emRzLT5zdGFnZSA9IHpkc3NfcmVhZDsNCisJCX0NCisJCQkvKiBmYWxsIHRocm91Z2ggKi8NCisN
CisJCWNhc2UgemRzc19yZWFkOiB7DQorCQkJc2l6ZV90IGNvbnN0IG5lZWRlZEluU2l6ZSA9IFpT
VERfbmV4dFNyY1NpemVUb0RlY29tcHJlc3MoemRzLT5kY3R4KTsNCisJCQlpZiAobmVlZGVkSW5T
aXplID09IDApIHsgLyogZW5kIG9mIGZyYW1lICovDQorCQkJCXpkcy0+c3RhZ2UgPSB6ZHNzX2lu
aXQ7DQorCQkJCXNvbWVNb3JlV29yayA9IDA7DQorCQkJCWJyZWFrOw0KKwkJCX0NCisJCQlpZiAo
KHNpemVfdCkoaWVuZCAtIGlwKSA+PSBuZWVkZWRJblNpemUpIHsgLyogZGVjb2RlIGRpcmVjdGx5
IGZyb20gc3JjICovDQorCQkJCWNvbnN0IGludCBpc1NraXBGcmFtZSA9IFpTVERfaXNTa2lwRnJh
bWUoemRzLT5kY3R4KTsNCisJCQkJc2l6ZV90IGNvbnN0IGRlY29kZWRTaXplID0gWlNURF9kZWNv
bXByZXNzQ29udGludWUoemRzLT5kY3R4LCB6ZHMtPm91dEJ1ZmYgKyB6ZHMtPm91dFN0YXJ0LA0K
KwkJCQkJCQkJCQkgICAoaXNTa2lwRnJhbWUgPyAwIDogemRzLT5vdXRCdWZmU2l6ZSAtIHpkcy0+
b3V0U3RhcnQpLCBpcCwgbmVlZGVkSW5TaXplKTsNCisJCQkJaWYgKFpTVERfaXNFcnJvcihkZWNv
ZGVkU2l6ZSkpDQorCQkJCQlyZXR1cm4gZGVjb2RlZFNpemU7DQorCQkJCWlwICs9IG5lZWRlZElu
U2l6ZTsNCisJCQkJaWYgKCFkZWNvZGVkU2l6ZSAmJiAhaXNTa2lwRnJhbWUpDQorCQkJCQlicmVh
azsgLyogdGhpcyB3YXMganVzdCBhIGhlYWRlciAqLw0KKwkJCQl6ZHMtPm91dEVuZCA9IHpkcy0+
b3V0U3RhcnQgKyBkZWNvZGVkU2l6ZTsNCisJCQkJemRzLT5zdGFnZSA9IHpkc3NfZmx1c2g7DQor
CQkJCWJyZWFrOw0KKwkJCX0NCisJCQlpZiAoaXAgPT0gaWVuZCkgew0KKwkJCQlzb21lTW9yZVdv
cmsgPSAwOw0KKwkJCQlicmVhazsNCisJCQl9IC8qIG5vIG1vcmUgaW5wdXQgKi8NCisJCQl6ZHMt
PnN0YWdlID0gemRzc19sb2FkOw0KKwkJCS8qIHBhc3MtdGhyb3VnaCAqLw0KKwkJfQ0KKwkJCS8q
IGZhbGwgdGhyb3VnaCAqLw0KKw0KKwkJY2FzZSB6ZHNzX2xvYWQ6IHsNCisJCQlzaXplX3QgY29u
c3QgbmVlZGVkSW5TaXplID0gWlNURF9uZXh0U3JjU2l6ZVRvRGVjb21wcmVzcyh6ZHMtPmRjdHgp
Ow0KKwkJCXNpemVfdCBjb25zdCB0b0xvYWQgPSBuZWVkZWRJblNpemUgLSB6ZHMtPmluUG9zOyAv
KiBzaG91bGQgYWx3YXlzIGJlIDw9IHJlbWFpbmluZyBzcGFjZSB3aXRoaW4gaW5CdWZmICovDQor
CQkJc2l6ZV90IGxvYWRlZFNpemU7DQorCQkJaWYgKHRvTG9hZCA+IHpkcy0+aW5CdWZmU2l6ZSAt
IHpkcy0+aW5Qb3MpDQorCQkJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsgLyog
c2hvdWxkIG5ldmVyIGhhcHBlbiAqLw0KKwkJCWxvYWRlZFNpemUgPSBaU1REX2xpbWl0Q29weSh6
ZHMtPmluQnVmZiArIHpkcy0+aW5Qb3MsIHRvTG9hZCwgaXAsIGllbmQgLSBpcCk7DQorCQkJaXAg
Kz0gbG9hZGVkU2l6ZTsNCisJCQl6ZHMtPmluUG9zICs9IGxvYWRlZFNpemU7DQorCQkJaWYgKGxv
YWRlZFNpemUgPCB0b0xvYWQpIHsNCisJCQkJc29tZU1vcmVXb3JrID0gMDsNCisJCQkJYnJlYWs7
DQorCQkJfSAvKiBub3QgZW5vdWdoIGlucHV0LCB3YWl0IGZvciBtb3JlICovDQorDQorCQkJLyog
ZGVjb2RlIGxvYWRlZCBpbnB1dCAqLw0KKwkJCXsNCisJCQkJY29uc3QgaW50IGlzU2tpcEZyYW1l
ID0gWlNURF9pc1NraXBGcmFtZSh6ZHMtPmRjdHgpOw0KKwkJCQlzaXplX3QgY29uc3QgZGVjb2Rl
ZFNpemUgPSBaU1REX2RlY29tcHJlc3NDb250aW51ZSh6ZHMtPmRjdHgsIHpkcy0+b3V0QnVmZiAr
IHpkcy0+b3V0U3RhcnQsIHpkcy0+b3V0QnVmZlNpemUgLSB6ZHMtPm91dFN0YXJ0LA0KKwkJCQkJ
CQkJCQkgICB6ZHMtPmluQnVmZiwgbmVlZGVkSW5TaXplKTsNCisJCQkJaWYgKFpTVERfaXNFcnJv
cihkZWNvZGVkU2l6ZSkpDQorCQkJCQlyZXR1cm4gZGVjb2RlZFNpemU7DQorCQkJCXpkcy0+aW5Q
b3MgPSAwOyAvKiBpbnB1dCBpcyBjb25zdW1lZCAqLw0KKwkJCQlpZiAoIWRlY29kZWRTaXplICYm
ICFpc1NraXBGcmFtZSkgew0KKwkJCQkJemRzLT5zdGFnZSA9IHpkc3NfcmVhZDsNCisJCQkJCWJy
ZWFrOw0KKwkJCQl9IC8qIHRoaXMgd2FzIGp1c3QgYSBoZWFkZXIgKi8NCisJCQkJemRzLT5vdXRF
bmQgPSB6ZHMtPm91dFN0YXJ0ICsgZGVjb2RlZFNpemU7DQorCQkJCXpkcy0+c3RhZ2UgPSB6ZHNz
X2ZsdXNoOw0KKwkJCQkvKiBwYXNzLXRocm91Z2ggKi8NCisJCQl9DQorCQl9DQorCQkJLyogZmFs
bCB0aHJvdWdoICovDQorDQorCQljYXNlIHpkc3NfZmx1c2g6IHsNCisJCQlzaXplX3QgY29uc3Qg
dG9GbHVzaFNpemUgPSB6ZHMtPm91dEVuZCAtIHpkcy0+b3V0U3RhcnQ7DQorCQkJc2l6ZV90IGNv
bnN0IGZsdXNoZWRTaXplID0gWlNURF9saW1pdENvcHkob3AsIG9lbmQgLSBvcCwgemRzLT5vdXRC
dWZmICsgemRzLT5vdXRTdGFydCwgdG9GbHVzaFNpemUpOw0KKwkJCW9wICs9IGZsdXNoZWRTaXpl
Ow0KKwkJCXpkcy0+b3V0U3RhcnQgKz0gZmx1c2hlZFNpemU7DQorCQkJaWYgKGZsdXNoZWRTaXpl
ID09IHRvRmx1c2hTaXplKSB7IC8qIGZsdXNoIGNvbXBsZXRlZCAqLw0KKwkJCQl6ZHMtPnN0YWdl
ID0gemRzc19yZWFkOw0KKwkJCQlpZiAoemRzLT5vdXRTdGFydCArIHpkcy0+YmxvY2tTaXplID4g
emRzLT5vdXRCdWZmU2l6ZSkNCisJCQkJCXpkcy0+b3V0U3RhcnQgPSB6ZHMtPm91dEVuZCA9IDA7
DQorCQkJCWJyZWFrOw0KKwkJCX0NCisJCQkvKiBjYW5ub3QgY29tcGxldGUgZmx1c2ggKi8NCisJ
CQlzb21lTW9yZVdvcmsgPSAwOw0KKwkJCWJyZWFrOw0KKwkJfQ0KKwkJZGVmYXVsdDoNCisJCQly
ZXR1cm4gRVJST1IoR0VORVJJQyk7IC8qIGltcG9zc2libGUgKi8NCisJCX0NCisJfQ0KKw0KKwkv
KiByZXN1bHQgKi8NCisJaW5wdXQtPnBvcyArPSAoc2l6ZV90KShpcCAtIGlzdGFydCk7DQorCW91
dHB1dC0+cG9zICs9IChzaXplX3QpKG9wIC0gb3N0YXJ0KTsNCisJew0KKwkJc2l6ZV90IG5leHRT
cmNTaXplSGludCA9IFpTVERfbmV4dFNyY1NpemVUb0RlY29tcHJlc3MoemRzLT5kY3R4KTsNCisJ
CWlmICghbmV4dFNyY1NpemVIaW50KSB7CQkJICAgIC8qIGZyYW1lIGZ1bGx5IGRlY29kZWQgKi8N
CisJCQlpZiAoemRzLT5vdXRFbmQgPT0gemRzLT5vdXRTdGFydCkgeyAvKiBvdXRwdXQgZnVsbHkg
Zmx1c2hlZCAqLw0KKwkJCQlpZiAoemRzLT5ob3N0YWdlQnl0ZSkgew0KKwkJCQkJaWYgKGlucHV0
LT5wb3MgPj0gaW5wdXQtPnNpemUpIHsNCisJCQkJCQl6ZHMtPnN0YWdlID0gemRzc19yZWFkOw0K
KwkJCQkJCXJldHVybiAxOw0KKwkJCQkJfQkgICAgIC8qIGNhbid0IHJlbGVhc2UgaG9zdGFnZSAo
bm90IHByZXNlbnQpICovDQorCQkJCQlpbnB1dC0+cG9zKys7IC8qIHJlbGVhc2UgaG9zdGFnZSAq
Lw0KKwkJCQl9DQorCQkJCXJldHVybiAwOw0KKwkJCX0NCisJCQlpZiAoIXpkcy0+aG9zdGFnZUJ5
dGUpIHsgLyogb3V0cHV0IG5vdCBmdWxseSBmbHVzaGVkOyBrZWVwIGxhc3QgYnl0ZSBhcyBob3N0
YWdlOyB3aWxsIGJlIHJlbGVhc2VkIHdoZW4gYWxsIG91dHB1dCBpcyBmbHVzaGVkICovDQorCQkJ
CWlucHV0LT5wb3MtLTsgICAgLyogbm90ZSA6IHBvcyA+IDAsIG90aGVyd2lzZSwgaW1wb3NzaWJs
ZSB0byBmaW5pc2ggcmVhZGluZyBsYXN0IGJsb2NrICovDQorCQkJCXpkcy0+aG9zdGFnZUJ5dGUg
PSAxOw0KKwkJCX0NCisJCQlyZXR1cm4gMTsNCisJCX0NCisJCW5leHRTcmNTaXplSGludCArPSBa
U1REX2Jsb2NrSGVhZGVyU2l6ZSAqIChaU1REX25leHRJbnB1dFR5cGUoemRzLT5kY3R4KSA9PSBa
U1REbml0X2Jsb2NrKTsgLyogcHJlbG9hZCBoZWFkZXIgb2YgbmV4dCBibG9jayAqLw0KKwkJaWYg
KHpkcy0+aW5Qb3MgPiBuZXh0U3JjU2l6ZUhpbnQpDQorCQkJcmV0dXJuIEVSUk9SKEdFTkVSSUMp
OyAvKiBzaG91bGQgbmV2ZXIgaGFwcGVuICovDQorCQluZXh0U3JjU2l6ZUhpbnQgLT0gemRzLT5p
blBvczsgLyogYWxyZWFkeSBsb2FkZWQqLw0KKwkJcmV0dXJuIG5leHRTcmNTaXplSGludDsNCisJ
fQ0KK30NCisNCitFWFBPUlRfU1lNQk9MKFpTVERfREN0eFdvcmtzcGFjZUJvdW5kKTsNCitFWFBP
UlRfU1lNQk9MKFpTVERfaW5pdERDdHgpOw0KK0VYUE9SVF9TWU1CT0woWlNURF9kZWNvbXByZXNz
REN0eCk7DQorRVhQT1JUX1NZTUJPTChaU1REX2RlY29tcHJlc3NfdXNpbmdEaWN0KTsNCisNCitF
WFBPUlRfU1lNQk9MKFpTVERfRERpY3RXb3Jrc3BhY2VCb3VuZCk7DQorRVhQT1JUX1NZTUJPTCha
U1REX2luaXRERGljdCk7DQorRVhQT1JUX1NZTUJPTChaU1REX2RlY29tcHJlc3NfdXNpbmdERGlj
dCk7DQorDQorRVhQT1JUX1NZTUJPTChaU1REX0RTdHJlYW1Xb3Jrc3BhY2VCb3VuZCk7DQorRVhQ
T1JUX1NZTUJPTChaU1REX2luaXREU3RyZWFtKTsNCitFWFBPUlRfU1lNQk9MKFpTVERfaW5pdERT
dHJlYW1fdXNpbmdERGljdCk7DQorRVhQT1JUX1NZTUJPTChaU1REX3Jlc2V0RFN0cmVhbSk7DQor
RVhQT1JUX1NZTUJPTChaU1REX2RlY29tcHJlc3NTdHJlYW0pOw0KK0VYUE9SVF9TWU1CT0woWlNU
RF9EU3RyZWFtSW5TaXplKTsNCitFWFBPUlRfU1lNQk9MKFpTVERfRFN0cmVhbU91dFNpemUpOw0K
Kw0KK0VYUE9SVF9TWU1CT0woWlNURF9maW5kRnJhbWVDb21wcmVzc2VkU2l6ZSk7DQorRVhQT1JU
X1NZTUJPTChaU1REX2dldEZyYW1lQ29udGVudFNpemUpOw0KK0VYUE9SVF9TWU1CT0woWlNURF9m
aW5kRGVjb21wcmVzc2VkU2l6ZSk7DQorDQorRVhQT1JUX1NZTUJPTChaU1REX2lzRnJhbWUpOw0K
K0VYUE9SVF9TWU1CT0woWlNURF9nZXREaWN0SURfZnJvbURpY3QpOw0KK0VYUE9SVF9TWU1CT0wo
WlNURF9nZXREaWN0SURfZnJvbUREaWN0KTsNCitFWFBPUlRfU1lNQk9MKFpTVERfZ2V0RGljdElE
X2Zyb21GcmFtZSk7DQorDQorRVhQT1JUX1NZTUJPTChaU1REX2dldEZyYW1lUGFyYW1zKTsNCitF
WFBPUlRfU1lNQk9MKFpTVERfZGVjb21wcmVzc0JlZ2luKTsNCitFWFBPUlRfU1lNQk9MKFpTVERf
ZGVjb21wcmVzc0JlZ2luX3VzaW5nRGljdCk7DQorRVhQT1JUX1NZTUJPTChaU1REX2NvcHlEQ3R4
KTsNCitFWFBPUlRfU1lNQk9MKFpTVERfbmV4dFNyY1NpemVUb0RlY29tcHJlc3MpOw0KK0VYUE9S
VF9TWU1CT0woWlNURF9kZWNvbXByZXNzQ29udGludWUpOw0KK0VYUE9SVF9TWU1CT0woWlNURF9u
ZXh0SW5wdXRUeXBlKTsNCisNCitFWFBPUlRfU1lNQk9MKFpTVERfZGVjb21wcmVzc0Jsb2NrKTsN
CitFWFBPUlRfU1lNQk9MKFpTVERfaW5zZXJ0QmxvY2spOw0KKw0KK01PRFVMRV9MSUNFTlNFKCJE
dWFsIEJTRC9HUEwiKTsNCitNT0RVTEVfREVTQ1JJUFRJT04oIlpzdGQgRGVjb21wcmVzc29yIik7
DQpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi96c3RkL2VudHJvcHlfY29tbW9uLmMgYi94ZW4vY29t
bW9uL3pzdGQvZW50cm9weV9jb21tb24uYw0KbmV3IGZpbGUgbW9kZSAxMDA2NDQNCmluZGV4IDAw
MDAwMDAwMDAuLjJiMGE2NDNjMzINCi0tLSAvZGV2L251bGwNCisrKyBiL3hlbi9jb21tb24venN0
ZC9lbnRyb3B5X2NvbW1vbi5jDQpAQCAtMCwwICsxLDI0MyBAQA0KKy8qDQorICogQ29tbW9uIGZ1
bmN0aW9ucyBvZiBOZXcgR2VuZXJhdGlvbiBFbnRyb3B5IGxpYnJhcnkNCisgKiBDb3B5cmlnaHQg
KEMpIDIwMTYsIFlhbm4gQ29sbGV0Lg0KKyAqDQorICogQlNEIDItQ2xhdXNlIExpY2Vuc2UgKGh0
dHA6Ly93d3cub3BlbnNvdXJjZS5vcmcvbGljZW5zZXMvYnNkLWxpY2Vuc2UucGhwKQ0KKyAqDQor
ICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFyeSBmb3Jtcywgd2l0
aCBvciB3aXRob3V0DQorICogbW9kaWZpY2F0aW9uLCBhcmUgcGVybWl0dGVkIHByb3ZpZGVkIHRo
YXQgdGhlIGZvbGxvd2luZyBjb25kaXRpb25zIGFyZQ0KKyAqIG1ldDoNCisgKg0KKyAqICAgKiBS
ZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3ZlIGNvcHly
aWdodA0KKyAqIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dp
bmcgZGlzY2xhaW1lci4NCisgKiAgICogUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBmb3JtIG11
c3QgcmVwcm9kdWNlIHRoZSBhYm92ZQ0KKyAqIGNvcHlyaWdodCBub3RpY2UsIHRoaXMgbGlzdCBv
ZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXINCisgKiBpbiB0aGUgZG9j
dW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGggdGhlDQorICog
ZGlzdHJpYnV0aW9uLg0KKyAqDQorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBUSEUg
Q09QWVJJR0hUIEhPTERFUlMgQU5EIENPTlRSSUJVVE9SUw0KKyAqICJBUyBJUyIgQU5EIEFOWSBF
WFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQgTk9UDQorICogTElN
SVRFRCBUTywgVEhFIElNUExJRUQgV0FSUkFOVElFUyBPRiBNRVJDSEFOVEFCSUxJVFkgQU5EIEZJ
VE5FU1MgRk9SDQorICogQSBQQVJUSUNVTEFSIFBVUlBPU0UgQVJFIERJU0NMQUlNRUQuIElOIE5P
IEVWRU5UIFNIQUxMIFRIRSBDT1BZUklHSFQNCisgKiBPV05FUiBPUiBDT05UUklCVVRPUlMgQkUg
TElBQkxFIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwNCisgKiBTUEVDSUFM
LCBFWEVNUExBUlksIE9SIENPTlNFUVVFTlRJQUwgREFNQUdFUyAoSU5DTFVESU5HLCBCVVQgTk9U
DQorICogTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUyBPUiBTRVJW
SUNFUzsgTE9TUyBPRiBVU0UsDQorICogREFUQSwgT1IgUFJPRklUUzsgT1IgQlVTSU5FU1MgSU5U
RVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZDQorICogVEhFT1JZIE9GIExJQUJJ
TElUWSwgV0hFVEhFUiBJTiBDT05UUkFDVCwgU1RSSUNUIExJQUJJTElUWSwgT1IgVE9SVA0KKyAq
IChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWSBP
VVQgT0YgVEhFIFVTRQ0KKyAqIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURWSVNFRCBPRiBU
SEUgUE9TU0lCSUxJVFkgT0YgU1VDSCBEQU1BR0UuDQorICoNCisgKiBUaGlzIHByb2dyYW0gaXMg
ZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeSBpdCB1
bmRlcg0KKyAqIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgdmVy
c2lvbiAyIGFzIHB1Ymxpc2hlZCBieSB0aGUNCisgKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24u
IFRoaXMgcHJvZ3JhbSBpcyBkdWFsLWxpY2Vuc2VkOyB5b3UgbWF5IHNlbGVjdA0KKyAqIGVpdGhl
ciB2ZXJzaW9uIDIgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlICgiR1BMIikgb3Ig
QlNEIGxpY2Vuc2UNCisgKiAoIkJTRCIpLg0KKyAqDQorICogWW91IGNhbiBjb250YWN0IHRoZSBh
dXRob3IgYXQgOg0KKyAqIC0gU291cmNlIHJlcG9zaXRvcnkgOiBodHRwczovL2dpdGh1Yi5jb20v
Q3lhbjQ5NzMvRmluaXRlU3RhdGVFbnRyb3B5DQorICovDQorDQorLyogKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKg0KKyogIERlcGVuZGVuY2llcw0KKyoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjaW5jbHVkZSAiZXJyb3JfcHJpdmF0ZS5oIiAv
KiBFUlJfKiwgRVJST1IgKi8NCisjaW5jbHVkZSAiZnNlLmgiDQorI2luY2x1ZGUgImh1Zi5oIg0K
KyNpbmNsdWRlICJtZW0uaCINCisNCisvKj09PSAgIFZlcnNpb24gICA9PT0qLw0KK3Vuc2lnbmVk
IEZTRV92ZXJzaW9uTnVtYmVyKHZvaWQpIHsgcmV0dXJuIEZTRV9WRVJTSU9OX05VTUJFUjsgfQ0K
Kw0KKy8qPT09ICAgRXJyb3IgTWFuYWdlbWVudCAgID09PSovDQordW5zaWduZWQgRlNFX2lzRXJy
b3Ioc2l6ZV90IGNvZGUpIHsgcmV0dXJuIEVSUl9pc0Vycm9yKGNvZGUpOyB9DQorDQordW5zaWdu
ZWQgSFVGX2lzRXJyb3Ioc2l6ZV90IGNvZGUpIHsgcmV0dXJuIEVSUl9pc0Vycm9yKGNvZGUpOyB9
DQorDQorLyotKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioNCisqICBGU0UgTkNvdW50IGVuY29kaW5nLWRlY29kaW5nDQorKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Ki8NCitzaXplX3QgRlNFX3JlYWROQ291bnQoc2hvcnQgKm5vcm1hbGl6ZWRDb3VudGVyLCB1bnNp
Z25lZCAqbWF4U1ZQdHIsIHVuc2lnbmVkICp0YWJsZUxvZ1B0ciwgY29uc3Qgdm9pZCAqaGVhZGVy
QnVmZmVyLCBzaXplX3QgaGJTaXplKQ0KK3sNCisJY29uc3QgQllURSAqY29uc3QgaXN0YXJ0ID0g
KGNvbnN0IEJZVEUgKiloZWFkZXJCdWZmZXI7DQorCWNvbnN0IEJZVEUgKmNvbnN0IGllbmQgPSBp
c3RhcnQgKyBoYlNpemU7DQorCWNvbnN0IEJZVEUgKmlwID0gaXN0YXJ0Ow0KKwlpbnQgbmJCaXRz
Ow0KKwlpbnQgcmVtYWluaW5nOw0KKwlpbnQgdGhyZXNob2xkOw0KKwlVMzIgYml0U3RyZWFtOw0K
KwlpbnQgYml0Q291bnQ7DQorCXVuc2lnbmVkIGNoYXJudW0gPSAwOw0KKwlpbnQgcHJldmlvdXMw
ID0gMDsNCisNCisJaWYgKGhiU2l6ZSA8IDQpDQorCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9u
Zyk7DQorCWJpdFN0cmVhbSA9IFpTVERfcmVhZExFMzIoaXApOw0KKwluYkJpdHMgPSAoYml0U3Ry
ZWFtICYgMHhGKSArIEZTRV9NSU5fVEFCTEVMT0c7IC8qIGV4dHJhY3QgdGFibGVMb2cgKi8NCisJ
aWYgKG5iQml0cyA+IEZTRV9UQUJMRUxPR19BQlNPTFVURV9NQVgpDQorCQlyZXR1cm4gRVJST1Io
dGFibGVMb2dfdG9vTGFyZ2UpOw0KKwliaXRTdHJlYW0gPj49IDQ7DQorCWJpdENvdW50ID0gNDsN
CisJKnRhYmxlTG9nUHRyID0gbmJCaXRzOw0KKwlyZW1haW5pbmcgPSAoMSA8PCBuYkJpdHMpICsg
MTsNCisJdGhyZXNob2xkID0gMSA8PCBuYkJpdHM7DQorCW5iQml0cysrOw0KKw0KKwl3aGlsZSAo
KHJlbWFpbmluZyA+IDEpICYgKGNoYXJudW0gPD0gKm1heFNWUHRyKSkgew0KKwkJaWYgKHByZXZp
b3VzMCkgew0KKwkJCXVuc2lnbmVkIG4wID0gY2hhcm51bTsNCisJCQl3aGlsZSAoKGJpdFN0cmVh
bSAmIDB4RkZGRikgPT0gMHhGRkZGKSB7DQorCQkJCW4wICs9IDI0Ow0KKwkJCQlpZiAoaXAgPCBp
ZW5kIC0gNSkgew0KKwkJCQkJaXAgKz0gMjsNCisJCQkJCWJpdFN0cmVhbSA9IFpTVERfcmVhZExF
MzIoaXApID4+IGJpdENvdW50Ow0KKwkJCQl9IGVsc2Ugew0KKwkJCQkJYml0U3RyZWFtID4+PSAx
NjsNCisJCQkJCWJpdENvdW50ICs9IDE2Ow0KKwkJCQl9DQorCQkJfQ0KKwkJCXdoaWxlICgoYml0
U3RyZWFtICYgMykgPT0gMykgew0KKwkJCQluMCArPSAzOw0KKwkJCQliaXRTdHJlYW0gPj49IDI7
DQorCQkJCWJpdENvdW50ICs9IDI7DQorCQkJfQ0KKwkJCW4wICs9IGJpdFN0cmVhbSAmIDM7DQor
CQkJYml0Q291bnQgKz0gMjsNCisJCQlpZiAobjAgPiAqbWF4U1ZQdHIpDQorCQkJCXJldHVybiBF
UlJPUihtYXhTeW1ib2xWYWx1ZV90b29TbWFsbCk7DQorCQkJd2hpbGUgKGNoYXJudW0gPCBuMCkN
CisJCQkJbm9ybWFsaXplZENvdW50ZXJbY2hhcm51bSsrXSA9IDA7DQorCQkJaWYgKChpcCA8PSBp
ZW5kIC0gNykgfHwgKGlwICsgKGJpdENvdW50ID4+IDMpIDw9IGllbmQgLSA0KSkgew0KKwkJCQlp
cCArPSBiaXRDb3VudCA+PiAzOw0KKwkJCQliaXRDb3VudCAmPSA3Ow0KKwkJCQliaXRTdHJlYW0g
PSBaU1REX3JlYWRMRTMyKGlwKSA+PiBiaXRDb3VudDsNCisJCQl9IGVsc2Ugew0KKwkJCQliaXRT
dHJlYW0gPj49IDI7DQorCQkJfQ0KKwkJfQ0KKwkJew0KKwkJCWludCBjb25zdCBtYXggPSAoMiAq
IHRocmVzaG9sZCAtIDEpIC0gcmVtYWluaW5nOw0KKwkJCWludCBjb3VudDsNCisNCisJCQlpZiAo
KGJpdFN0cmVhbSAmICh0aHJlc2hvbGQgLSAxKSkgPCAoVTMyKW1heCkgew0KKwkJCQljb3VudCA9
IGJpdFN0cmVhbSAmICh0aHJlc2hvbGQgLSAxKTsNCisJCQkJYml0Q291bnQgKz0gbmJCaXRzIC0g
MTsNCisJCQl9IGVsc2Ugew0KKwkJCQljb3VudCA9IGJpdFN0cmVhbSAmICgyICogdGhyZXNob2xk
IC0gMSk7DQorCQkJCWlmIChjb3VudCA+PSB0aHJlc2hvbGQpDQorCQkJCQljb3VudCAtPSBtYXg7
DQorCQkJCWJpdENvdW50ICs9IG5iQml0czsNCisJCQl9DQorDQorCQkJY291bnQtLTsJCQkJIC8q
IGV4dHJhIGFjY3VyYWN5ICovDQorCQkJcmVtYWluaW5nIC09IGNvdW50IDwgMCA/IC1jb3VudCA6
IGNvdW50OyAvKiAtMSBtZWFucyArMSAqLw0KKwkJCW5vcm1hbGl6ZWRDb3VudGVyW2NoYXJudW0r
K10gPSAoc2hvcnQpY291bnQ7DQorCQkJcHJldmlvdXMwID0gIWNvdW50Ow0KKwkJCXdoaWxlIChy
ZW1haW5pbmcgPCB0aHJlc2hvbGQpIHsNCisJCQkJbmJCaXRzLS07DQorCQkJCXRocmVzaG9sZCA+
Pj0gMTsNCisJCQl9DQorDQorCQkJaWYgKChpcCA8PSBpZW5kIC0gNykgfHwgKGlwICsgKGJpdENv
dW50ID4+IDMpIDw9IGllbmQgLSA0KSkgew0KKwkJCQlpcCArPSBiaXRDb3VudCA+PiAzOw0KKwkJ
CQliaXRDb3VudCAmPSA3Ow0KKwkJCX0gZWxzZSB7DQorCQkJCWJpdENvdW50IC09IChpbnQpKDgg
KiAoaWVuZCAtIDQgLSBpcCkpOw0KKwkJCQlpcCA9IGllbmQgLSA0Ow0KKwkJCX0NCisJCQliaXRT
dHJlYW0gPSBaU1REX3JlYWRMRTMyKGlwKSA+PiAoYml0Q291bnQgJiAzMSk7DQorCQl9DQorCX0g
Lyogd2hpbGUgKChyZW1haW5pbmc+MSkgJiAoY2hhcm51bTw9Km1heFNWUHRyKSkgKi8NCisJaWYg
KHJlbWFpbmluZyAhPSAxKQ0KKwkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0K
KwlpZiAoYml0Q291bnQgPiAzMikNCisJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVk
KTsNCisJKm1heFNWUHRyID0gY2hhcm51bSAtIDE7DQorDQorCWlwICs9IChiaXRDb3VudCArIDcp
ID4+IDM7DQorCXJldHVybiBpcCAtIGlzdGFydDsNCit9DQorDQorLyohIEhVRl9yZWFkU3RhdHMo
KSA6DQorCVJlYWQgY29tcGFjdCBIdWZmbWFuIHRyZWUsIHNhdmVkIGJ5IEhVRl93cml0ZUNUYWJs
ZSgpLg0KKwlgaHVmZldlaWdodGAgaXMgZGVzdGluYXRpb24gYnVmZmVyLg0KKwlgcmFua1N0YXRz
YCBpcyBhc3N1bWVkIHRvIGJlIGEgdGFibGUgb2YgYXQgbGVhc3QgSFVGX1RBQkxFTE9HX01BWCBV
MzIuDQorCUByZXR1cm4gOiBzaXplIHJlYWQgZnJvbSBgc3JjYCAsIG9yIGFuIGVycm9yIENvZGUg
Lg0KKwlOb3RlIDogTmVlZGVkIGJ5IEhVRl9yZWFkQ1RhYmxlKCkgYW5kIEhVRl9yZWFkRFRhYmxl
WD8oKSAuDQorKi8NCitzaXplX3QgSFVGX3JlYWRTdGF0c193a3NwKEJZVEUgKmh1ZmZXZWlnaHQs
IHNpemVfdCBod1NpemUsIFUzMiAqcmFua1N0YXRzLCBVMzIgKm5iU3ltYm9sc1B0ciwgVTMyICp0
YWJsZUxvZ1B0ciwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgdm9pZCAqd29ya3Nw
YWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCit7DQorCVUzMiB3ZWlnaHRUb3RhbDsNCisJY29u
c3QgQllURSAqaXAgPSAoY29uc3QgQllURSAqKXNyYzsNCisJc2l6ZV90IGlTaXplOw0KKwlzaXpl
X3Qgb1NpemU7DQorDQorCWlmICghc3JjU2l6ZSkNCisJCXJldHVybiBFUlJPUihzcmNTaXplX3dy
b25nKTsNCisJaVNpemUgPSBpcFswXTsNCisJLyogbWVtc2V0KGh1ZmZXZWlnaHQsIDAsIGh3U2l6
ZSk7ICAgKi8gLyogaXMgbm90IG5lY2Vzc2FyeSwgZXZlbiB0aG91Z2ggc29tZSBhbmFseXplciBj
b21wbGFpbiAuLi4gKi8NCisNCisJaWYgKGlTaXplID49IDEyOCkgeyAvKiBzcGVjaWFsIGhlYWRl
ciAqLw0KKwkJb1NpemUgPSBpU2l6ZSAtIDEyNzsNCisJCWlTaXplID0gKChvU2l6ZSArIDEpIC8g
Mik7DQorCQlpZiAoaVNpemUgKyAxID4gc3JjU2l6ZSkNCisJCQlyZXR1cm4gRVJST1Ioc3JjU2l6
ZV93cm9uZyk7DQorCQlpZiAob1NpemUgPj0gaHdTaXplKQ0KKwkJCXJldHVybiBFUlJPUihjb3Jy
dXB0aW9uX2RldGVjdGVkKTsNCisJCWlwICs9IDE7DQorCQl7DQorCQkJVTMyIG47DQorCQkJZm9y
IChuID0gMDsgbiA8IG9TaXplOyBuICs9IDIpIHsNCisJCQkJaHVmZldlaWdodFtuXSA9IGlwW24g
LyAyXSA+PiA0Ow0KKwkJCQlodWZmV2VpZ2h0W24gKyAxXSA9IGlwW24gLyAyXSAmIDE1Ow0KKwkJ
CX0NCisJCX0NCisJfSBlbHNlIHsJCQkJCQkgLyogaGVhZGVyIGNvbXByZXNzZWQgd2l0aCBGU0Ug
KG5vcm1hbCBjYXNlKSAqLw0KKwkJaWYgKGlTaXplICsgMSA+IHNyY1NpemUpDQorCQkJcmV0dXJu
IEVSUk9SKHNyY1NpemVfd3JvbmcpOw0KKwkJb1NpemUgPSBGU0VfZGVjb21wcmVzc193a3NwKGh1
ZmZXZWlnaHQsIGh3U2l6ZSAtIDEsIGlwICsgMSwgaVNpemUsIDYsIHdvcmtzcGFjZSwgd29ya3Nw
YWNlU2l6ZSk7IC8qIG1heCAoaHdTaXplLTEpIHZhbHVlcyBkZWNvZGVkLCBhcyBsYXN0IG9uZSBp
cyBpbXBsaWVkICovDQorCQlpZiAoRlNFX2lzRXJyb3Iob1NpemUpKQ0KKwkJCXJldHVybiBvU2l6
ZTsNCisJfQ0KKw0KKwkvKiBjb2xsZWN0IHdlaWdodCBzdGF0cyAqLw0KKwltZW1zZXQocmFua1N0
YXRzLCAwLCAoSFVGX1RBQkxFTE9HX01BWCArIDEpICogc2l6ZW9mKFUzMikpOw0KKwl3ZWlnaHRU
b3RhbCA9IDA7DQorCXsNCisJCVUzMiBuOw0KKwkJZm9yIChuID0gMDsgbiA8IG9TaXplOyBuKysp
IHsNCisJCQlpZiAoaHVmZldlaWdodFtuXSA+PSBIVUZfVEFCTEVMT0dfTUFYKQ0KKwkJCQlyZXR1
cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQorCQkJcmFua1N0YXRzW2h1ZmZXZWlnaHRb
bl1dKys7DQorCQkJd2VpZ2h0VG90YWwgKz0gKDEgPDwgaHVmZldlaWdodFtuXSkgPj4gMTsNCisJ
CX0NCisJfQ0KKwlpZiAod2VpZ2h0VG90YWwgPT0gMCkNCisJCXJldHVybiBFUlJPUihjb3JydXB0
aW9uX2RldGVjdGVkKTsNCisNCisJLyogZ2V0IGxhc3Qgbm9uLW51bGwgc3ltYm9sIHdlaWdodCAo
aW1wbGllZCwgdG90YWwgbXVzdCBiZSAyXm4pICovDQorCXsNCisJCVUzMiBjb25zdCB0YWJsZUxv
ZyA9IEJJVF9oaWdoYml0MzIod2VpZ2h0VG90YWwpICsgMTsNCisJCWlmICh0YWJsZUxvZyA+IEhV
Rl9UQUJMRUxPR19NQVgpDQorCQkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0K
KwkJKnRhYmxlTG9nUHRyID0gdGFibGVMb2c7DQorCQkvKiBkZXRlcm1pbmUgbGFzdCB3ZWlnaHQg
Ki8NCisJCXsNCisJCQlVMzIgY29uc3QgdG90YWwgPSAxIDw8IHRhYmxlTG9nOw0KKwkJCVUzMiBj
b25zdCByZXN0ID0gdG90YWwgLSB3ZWlnaHRUb3RhbDsNCisJCQlVMzIgY29uc3QgdmVyaWYgPSAx
IDw8IEJJVF9oaWdoYml0MzIocmVzdCk7DQorCQkJVTMyIGNvbnN0IGxhc3RXZWlnaHQgPSBCSVRf
aGlnaGJpdDMyKHJlc3QpICsgMTsNCisJCQlpZiAodmVyaWYgIT0gcmVzdCkNCisJCQkJcmV0dXJu
IEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOyAvKiBsYXN0IHZhbHVlIG11c3QgYmUgYSBjbGVh
biBwb3dlciBvZiAyICovDQorCQkJaHVmZldlaWdodFtvU2l6ZV0gPSAoQllURSlsYXN0V2VpZ2h0
Ow0KKwkJCXJhbmtTdGF0c1tsYXN0V2VpZ2h0XSsrOw0KKwkJfQ0KKwl9DQorDQorCS8qIGNoZWNr
IHRyZWUgY29uc3RydWN0aW9uIHZhbGlkaXR5ICovDQorCWlmICgocmFua1N0YXRzWzFdIDwgMikg
fHwgKHJhbmtTdGF0c1sxXSAmIDEpKQ0KKwkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0
ZWQpOyAvKiBieSBjb25zdHJ1Y3Rpb24gOiBhdCBsZWFzdCAyIGVsdHMgb2YgcmFuayAxLCBtdXN0
IGJlIGV2ZW4gKi8NCisNCisJLyogcmVzdWx0cyAqLw0KKwkqbmJTeW1ib2xzUHRyID0gKFUzMiko
b1NpemUgKyAxKTsNCisJcmV0dXJuIGlTaXplICsgMTsNCit9DQpkaWZmIC0tZ2l0IGEveGVuL2Nv
bW1vbi96c3RkL2Vycm9yX3ByaXZhdGUuaCBiL3hlbi9jb21tb24venN0ZC9lcnJvcl9wcml2YXRl
LmgNCm5ldyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAwMDAwMDAwLi4xYTYwYjMxZjcwDQot
LS0gL2Rldi9udWxsDQorKysgYi94ZW4vY29tbW9uL3pzdGQvZXJyb3JfcHJpdmF0ZS5oDQpAQCAt
MCwwICsxLDUzIEBADQorLyoqDQorICogQ29weXJpZ2h0IChjKSAyMDE2LXByZXNlbnQsIFlhbm4g
Q29sbGV0LCBGYWNlYm9vaywgSW5jLg0KKyAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuDQorICoNCisg
KiBUaGlzIHNvdXJjZSBjb2RlIGlzIGxpY2Vuc2VkIHVuZGVyIHRoZSBCU0Qtc3R5bGUgbGljZW5z
ZSBmb3VuZCBpbiB0aGUNCisgKiBMSUNFTlNFIGZpbGUgaW4gdGhlIHJvb3QgZGlyZWN0b3J5IG9m
IGh0dHBzOi8vZ2l0aHViLmNvbS9mYWNlYm9vay96c3RkLg0KKyAqIEFuIGFkZGl0aW9uYWwgZ3Jh
bnQgb2YgcGF0ZW50IHJpZ2h0cyBjYW4gYmUgZm91bmQgaW4gdGhlIFBBVEVOVFMgZmlsZSBpbiB0
aGUNCisgKiBzYW1lIGRpcmVjdG9yeS4NCisgKg0KKyAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNv
ZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5IGl0IHVuZGVyDQor
ICogdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJzaW9uIDIg
YXMgcHVibGlzaGVkIGJ5IHRoZQ0KKyAqIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4gVGhpcyBw
cm9ncmFtIGlzIGR1YWwtbGljZW5zZWQ7IHlvdSBtYXkgc2VsZWN0DQorICogZWl0aGVyIHZlcnNp
b24gMiBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgKCJHUEwiKSBvciBCU0QgbGlj
ZW5zZQ0KKyAqICgiQlNEIikuDQorICovDQorDQorLyogTm90ZSA6IHRoaXMgbW9kdWxlIGlzIGV4
cGVjdGVkIHRvIHJlbWFpbiBwcml2YXRlLCBkbyBub3QgZXhwb3NlIGl0ICovDQorDQorI2lmbmRl
ZiBFUlJPUl9IX01PRFVMRQ0KKyNkZWZpbmUgRVJST1JfSF9NT0RVTEUNCisNCisvKiAqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgRGVwZW5kZW5jaWVzDQorKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKyNpbmNsdWRlIDxsaW51
eC90eXBlcy5oPiAvKiBzaXplX3QgKi8NCisjaW5jbHVkZSA8bGludXgvenN0ZC5oPiAgLyogZW51
bSBsaXN0ICovDQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Kg0KKyogIENvbXBpbGVyLXNwZWNpZmljDQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqLw0KKyNkZWZpbmUgRVJSX1NUQVRJQyBzdGF0aWMgX19hdHRyaWJ1dGVfXygo
dW51c2VkKSkNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
DQorKiAgQ3VzdG9taXphdGlvbiAoZXJyb3JfcHVibGljLmgpDQorKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqLw0KK3R5cGVkZWYgWlNURF9FcnJvckNvZGUgRVJSX2Vu
dW07DQorI2RlZmluZSBQUkVGSVgobmFtZSkgWlNURF9lcnJvcl8jI25hbWUNCisNCisvKi0qKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgRXJyb3IgY29kZXMgaGFu
ZGxpbmcNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorI2Rl
ZmluZSBFUlJPUihuYW1lKSAoKHNpemVfdCktUFJFRklYKG5hbWUpKQ0KKw0KK0VSUl9TVEFUSUMg
dW5zaWduZWQgRVJSX2lzRXJyb3Ioc2l6ZV90IGNvZGUpIHsgcmV0dXJuIChjb2RlID4gRVJST1Io
bWF4Q29kZSkpOyB9DQorDQorRVJSX1NUQVRJQyBFUlJfZW51bSBFUlJfZ2V0RXJyb3JDb2RlKHNp
emVfdCBjb2RlKQ0KK3sNCisJaWYgKCFFUlJfaXNFcnJvcihjb2RlKSkNCisJCXJldHVybiAoRVJS
X2VudW0pMDsNCisJcmV0dXJuIChFUlJfZW51bSkoMCAtIGNvZGUpOw0KK30NCisNCisjZW5kaWYg
LyogRVJST1JfSF9NT0RVTEUgKi8NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvZnNlLmgg
Yi94ZW4vY29tbW9uL3pzdGQvZnNlLmgNCm5ldyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAw
MDAwMDAwLi43NDYwYWIwNGIxDQotLS0gL2Rldi9udWxsDQorKysgYi94ZW4vY29tbW9uL3pzdGQv
ZnNlLmgNCkBAIC0wLDAgKzEsNTc1IEBADQorLyoNCisgKiBGU0UgOiBGaW5pdGUgU3RhdGUgRW50
cm9weSBjb2RlYw0KKyAqIFB1YmxpYyBQcm90b3R5cGVzIGRlY2xhcmF0aW9uDQorICogQ29weXJp
Z2h0IChDKSAyMDEzLTIwMTYsIFlhbm4gQ29sbGV0Lg0KKyAqDQorICogQlNEIDItQ2xhdXNlIExp
Y2Vuc2UgKGh0dHA6Ly93d3cub3BlbnNvdXJjZS5vcmcvbGljZW5zZXMvYnNkLWxpY2Vuc2UucGhw
KQ0KKyAqDQorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFyeSBm
b3Jtcywgd2l0aCBvciB3aXRob3V0DQorICogbW9kaWZpY2F0aW9uLCBhcmUgcGVybWl0dGVkIHBy
b3ZpZGVkIHRoYXQgdGhlIGZvbGxvd2luZyBjb25kaXRpb25zIGFyZQ0KKyAqIG1ldDoNCisgKg0K
KyAqICAgKiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFi
b3ZlIGNvcHlyaWdodA0KKyAqIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRo
ZSBmb2xsb3dpbmcgZGlzY2xhaW1lci4NCisgKiAgICogUmVkaXN0cmlidXRpb25zIGluIGJpbmFy
eSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZQ0KKyAqIGNvcHlyaWdodCBub3RpY2UsIHRo
aXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXINCisgKiBp
biB0aGUgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGgg
dGhlDQorICogZGlzdHJpYnV0aW9uLg0KKyAqDQorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURF
RCBCWSBUSEUgQ09QWVJJR0hUIEhPTERFUlMgQU5EIENPTlRSSUJVVE9SUw0KKyAqICJBUyBJUyIg
QU5EIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQgTk9U
DQorICogTElNSVRFRCBUTywgVEhFIElNUExJRUQgV0FSUkFOVElFUyBPRiBNRVJDSEFOVEFCSUxJ
VFkgQU5EIEZJVE5FU1MgRk9SDQorICogQSBQQVJUSUNVTEFSIFBVUlBPU0UgQVJFIERJU0NMQUlN
RUQuIElOIE5PIEVWRU5UIFNIQUxMIFRIRSBDT1BZUklHSFQNCisgKiBPV05FUiBPUiBDT05UUklC
VVRPUlMgQkUgTElBQkxFIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwNCisg
KiBTUEVDSUFMLCBFWEVNUExBUlksIE9SIENPTlNFUVVFTlRJQUwgREFNQUdFUyAoSU5DTFVESU5H
LCBCVVQgTk9UDQorICogTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09E
UyBPUiBTRVJWSUNFUzsgTE9TUyBPRiBVU0UsDQorICogREFUQSwgT1IgUFJPRklUUzsgT1IgQlVT
SU5FU1MgSU5URVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZDQorICogVEhFT1JZ
IE9GIExJQUJJTElUWSwgV0hFVEhFUiBJTiBDT05UUkFDVCwgU1RSSUNUIExJQUJJTElUWSwgT1Ig
VE9SVA0KKyAqIChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4g
QU5ZIFdBWSBPVVQgT0YgVEhFIFVTRQ0KKyAqIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURW
SVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YgU1VDSCBEQU1BR0UuDQorICoNCisgKiBUaGlzIHBy
b2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1v
ZGlmeSBpdCB1bmRlcg0KKyAqIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExp
Y2Vuc2UgdmVyc2lvbiAyIGFzIHB1Ymxpc2hlZCBieSB0aGUNCisgKiBGcmVlIFNvZnR3YXJlIEZv
dW5kYXRpb24uIFRoaXMgcHJvZ3JhbSBpcyBkdWFsLWxpY2Vuc2VkOyB5b3UgbWF5IHNlbGVjdA0K
KyAqIGVpdGhlciB2ZXJzaW9uIDIgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlICgi
R1BMIikgb3IgQlNEIGxpY2Vuc2UNCisgKiAoIkJTRCIpLg0KKyAqDQorICogWW91IGNhbiBjb250
YWN0IHRoZSBhdXRob3IgYXQgOg0KKyAqIC0gU291cmNlIHJlcG9zaXRvcnkgOiBodHRwczovL2dp
dGh1Yi5jb20vQ3lhbjQ5NzMvRmluaXRlU3RhdGVFbnRyb3B5DQorICovDQorI2lmbmRlZiBGU0Vf
SA0KKyNkZWZpbmUgRlNFX0gNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKg0KKyogIERlcGVuZGVuY2llcw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKi8NCisjaW5jbHVkZSA8bGludXgvdHlwZXMuaD4gLyogc2l6ZV90LCBw
dHJkaWZmX3QgKi8NCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKg0KKyogIEZTRV9QVUJMSUNfQVBJIDogY29udHJvbCBsaWJyYXJ5IHN5bWJvbHMgdmlzaWJp
bGl0eQ0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVm
aW5lIEZTRV9QVUJMSUNfQVBJDQorDQorLyotLS0tLS0gICBWZXJzaW9uICAgLS0tLS0tKi8NCisj
ZGVmaW5lIEZTRV9WRVJTSU9OX01BSk9SIDANCisjZGVmaW5lIEZTRV9WRVJTSU9OX01JTk9SIDkN
CisjZGVmaW5lIEZTRV9WRVJTSU9OX1JFTEVBU0UgMA0KKw0KKyNkZWZpbmUgRlNFX0xJQl9WRVJT
SU9OIEZTRV9WRVJTSU9OX01BSk9SLkZTRV9WRVJTSU9OX01JTk9SLkZTRV9WRVJTSU9OX1JFTEVB
U0UNCisjZGVmaW5lIEZTRV9RVU9URShzdHIpICNzdHINCisjZGVmaW5lIEZTRV9FWFBBTkRfQU5E
X1FVT1RFKHN0cikgRlNFX1FVT1RFKHN0cikNCisjZGVmaW5lIEZTRV9WRVJTSU9OX1NUUklORyBG
U0VfRVhQQU5EX0FORF9RVU9URShGU0VfTElCX1ZFUlNJT04pDQorDQorI2RlZmluZSBGU0VfVkVS
U0lPTl9OVU1CRVIgKEZTRV9WRVJTSU9OX01BSk9SICogMTAwICogMTAwICsgRlNFX1ZFUlNJT05f
TUlOT1IgKiAxMDAgKyBGU0VfVkVSU0lPTl9SRUxFQVNFKQ0KK0ZTRV9QVUJMSUNfQVBJIHVuc2ln
bmVkIEZTRV92ZXJzaW9uTnVtYmVyKHZvaWQpOyAvKio8IGxpYnJhcnkgdmVyc2lvbiBudW1iZXI7
IHRvIGJlIHVzZWQgd2hlbiBjaGVja2luZyBkbGwgdmVyc2lvbiAqLw0KKw0KKy8qLSoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgVG9vbCBmdW5jdGlvbnMNCisq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorRlNFX1BVQkxJQ19B
UEkgc2l6ZV90IEZTRV9jb21wcmVzc0JvdW5kKHNpemVfdCBzaXplKTsgLyogbWF4aW11bSBjb21w
cmVzc2VkIHNpemUgKi8NCisNCisvKiBFcnJvciBNYW5hZ2VtZW50ICovDQorRlNFX1BVQkxJQ19B
UEkgdW5zaWduZWQgRlNFX2lzRXJyb3Ioc2l6ZV90IGNvZGUpOyAvKiB0ZWxscyBpZiBhIHJldHVy
biB2YWx1ZSBpcyBhbiBlcnJvciBjb2RlICovDQorDQorLyotKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioNCisqICBGU0UgZGV0YWlsZWQgQVBJDQorKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKy8qIQ0KK0ZTRV9jb21wcmVzcygpIGRv
ZXMgdGhlIGZvbGxvd2luZzoNCisxLiBjb3VudCBzeW1ib2wgb2NjdXJyZW5jZSBmcm9tIHNvdXJj
ZVtdIGludG8gdGFibGUgY291bnRbXQ0KKzIuIG5vcm1hbGl6ZSBjb3VudGVycyBzbyB0aGF0IHN1
bShjb3VudFtdKSA9PSBQb3dlcl9vZl8yICgyXnRhYmxlTG9nKQ0KKzMuIHNhdmUgbm9ybWFsaXpl
ZCBjb3VudGVycyB0byBtZW1vcnkgYnVmZmVyIHVzaW5nIHdyaXRlTkNvdW50KCkNCis0LiBidWls
ZCBlbmNvZGluZyB0YWJsZSAnQ1RhYmxlJyBmcm9tIG5vcm1hbGl6ZWQgY291bnRlcnMNCis1LiBl
bmNvZGUgdGhlIGRhdGEgc3RyZWFtIHVzaW5nIGVuY29kaW5nIHRhYmxlICdDVGFibGUnDQorDQor
RlNFX2RlY29tcHJlc3MoKSBkb2VzIHRoZSBmb2xsb3dpbmc6DQorMS4gcmVhZCBub3JtYWxpemVk
IGNvdW50ZXJzIHdpdGggcmVhZE5Db3VudCgpDQorMi4gYnVpbGQgZGVjb2RpbmcgdGFibGUgJ0RU
YWJsZScgZnJvbSBub3JtYWxpemVkIGNvdW50ZXJzDQorMy4gZGVjb2RlIHRoZSBkYXRhIHN0cmVh
bSB1c2luZyBkZWNvZGluZyB0YWJsZSAnRFRhYmxlJw0KKw0KK1RoZSBmb2xsb3dpbmcgQVBJIGFs
bG93cyB0YXJnZXRpbmcgc3BlY2lmaWMgc3ViLWZ1bmN0aW9ucyBmb3IgYWR2YW5jZWQgdGFza3Mu
DQorRm9yIGV4YW1wbGUsIGl0J3MgcG9zc2libGUgdG8gY29tcHJlc3Mgc2V2ZXJhbCBibG9ja3Mg
dXNpbmcgdGhlIHNhbWUgJ0NUYWJsZScsDQorb3IgdG8gc2F2ZSBhbmQgcHJvdmlkZSBub3JtYWxp
emVkIGRpc3RyaWJ1dGlvbiB1c2luZyBleHRlcm5hbCBtZXRob2QuDQorKi8NCisNCisvKiAqKiog
Q09NUFJFU1NJT04gKioqICovDQorLyohIEZTRV9vcHRpbWFsVGFibGVMb2coKToNCisJZHluYW1p
Y2FsbHkgZG93bnNpemUgJ3RhYmxlTG9nJyB3aGVuIGNvbmRpdGlvbnMgYXJlIG1ldC4NCisJSXQg
c2F2ZXMgQ1BVIHRpbWUsIGJ5IHVzaW5nIHNtYWxsZXIgdGFibGVzLCB3aGlsZSBwcmVzZXJ2aW5n
IG9yIGV2ZW4gaW1wcm92aW5nIGNvbXByZXNzaW9uIHJhdGlvLg0KKwlAcmV0dXJuIDogcmVjb21t
ZW5kZWQgdGFibGVMb2cgKG5lY2Vzc2FyaWx5IDw9ICdtYXhUYWJsZUxvZycpICovDQorRlNFX1BV
QkxJQ19BUEkgdW5zaWduZWQgRlNFX29wdGltYWxUYWJsZUxvZyh1bnNpZ25lZCBtYXhUYWJsZUxv
Zywgc2l6ZV90IHNyY1NpemUsIHVuc2lnbmVkIG1heFN5bWJvbFZhbHVlKTsNCisNCisvKiEgRlNF
X25vcm1hbGl6ZUNvdW50KCk6DQorCW5vcm1hbGl6ZSBjb3VudHMgc28gdGhhdCBzdW0oY291bnRb
XSkgPT0gUG93ZXJfb2ZfMiAoMl50YWJsZUxvZykNCisJJ25vcm1hbGl6ZWRDb3VudGVyJyBpcyBh
IHRhYmxlIG9mIHNob3J0LCBvZiBtaW5pbXVtIHNpemUgKG1heFN5bWJvbFZhbHVlKzEpLg0KKwlA
cmV0dXJuIDogdGFibGVMb2csDQorCQkJICBvciBhbiBlcnJvckNvZGUsIHdoaWNoIGNhbiBiZSB0
ZXN0ZWQgdXNpbmcgRlNFX2lzRXJyb3IoKSAqLw0KK0ZTRV9QVUJMSUNfQVBJIHNpemVfdCBGU0Vf
bm9ybWFsaXplQ291bnQoc2hvcnQgKm5vcm1hbGl6ZWRDb3VudGVyLCB1bnNpZ25lZCB0YWJsZUxv
ZywgY29uc3QgdW5zaWduZWQgKmNvdW50LCBzaXplX3Qgc3JjU2l6ZSwgdW5zaWduZWQgbWF4U3lt
Ym9sVmFsdWUpOw0KKw0KKy8qISBGU0VfTkNvdW50V3JpdGVCb3VuZCgpOg0KKwlQcm92aWRlcyB0
aGUgbWF4aW11bSBwb3NzaWJsZSBzaXplIG9mIGFuIEZTRSBub3JtYWxpemVkIHRhYmxlLCBnaXZl
biAnbWF4U3ltYm9sVmFsdWUnIGFuZCAndGFibGVMb2cnLg0KKwlUeXBpY2FsbHkgdXNlZnVsIGZv
ciBhbGxvY2F0aW9uIHB1cnBvc2UuICovDQorRlNFX1BVQkxJQ19BUEkgc2l6ZV90IEZTRV9OQ291
bnRXcml0ZUJvdW5kKHVuc2lnbmVkIG1heFN5bWJvbFZhbHVlLCB1bnNpZ25lZCB0YWJsZUxvZyk7
DQorDQorLyohIEZTRV93cml0ZU5Db3VudCgpOg0KKwlDb21wYWN0bHkgc2F2ZSAnbm9ybWFsaXpl
ZENvdW50ZXInIGludG8gJ2J1ZmZlcicuDQorCUByZXR1cm4gOiBzaXplIG9mIHRoZSBjb21wcmVz
c2VkIHRhYmxlLA0KKwkJCSAgb3IgYW4gZXJyb3JDb2RlLCB3aGljaCBjYW4gYmUgdGVzdGVkIHVz
aW5nIEZTRV9pc0Vycm9yKCkuICovDQorRlNFX1BVQkxJQ19BUEkgc2l6ZV90IEZTRV93cml0ZU5D
b3VudCh2b2lkICpidWZmZXIsIHNpemVfdCBidWZmZXJTaXplLCBjb25zdCBzaG9ydCAqbm9ybWFs
aXplZENvdW50ZXIsIHVuc2lnbmVkIG1heFN5bWJvbFZhbHVlLCB1bnNpZ25lZCB0YWJsZUxvZyk7
DQorDQorLyohIENvbnN0cnVjdG9yIGFuZCBEZXN0cnVjdG9yIG9mIEZTRV9DVGFibGUuDQorCU5v
dGUgdGhhdCBGU0VfQ1RhYmxlIHNpemUgZGVwZW5kcyBvbiAndGFibGVMb2cnIGFuZCAnbWF4U3lt
Ym9sVmFsdWUnICovDQordHlwZWRlZiB1bnNpZ25lZCBGU0VfQ1RhYmxlOyAvKiBkb24ndCBhbGxv
Y2F0ZSB0aGF0LiBJdCdzIG9ubHkgbWVhbnQgdG8gYmUgbW9yZSByZXN0cmljdGl2ZSB0aGFuIHZv
aWQqICovDQorDQorLyohIEZTRV9jb21wcmVzc191c2luZ0NUYWJsZSgpOg0KKwlDb21wcmVzcyBg
c3JjYCB1c2luZyBgY3RgIGludG8gYGRzdGAgd2hpY2ggbXVzdCBiZSBhbHJlYWR5IGFsbG9jYXRl
ZC4NCisJQHJldHVybiA6IHNpemUgb2YgY29tcHJlc3NlZCBkYXRhICg8PSBgZHN0Q2FwYWNpdHlg
KSwNCisJCQkgIG9yIDAgaWYgY29tcHJlc3NlZCBkYXRhIGNvdWxkIG5vdCBmaXQgaW50byBgZHN0
YCwNCisJCQkgIG9yIGFuIGVycm9yQ29kZSwgd2hpY2ggY2FuIGJlIHRlc3RlZCB1c2luZyBGU0Vf
aXNFcnJvcigpICovDQorRlNFX1BVQkxJQ19BUEkgc2l6ZV90IEZTRV9jb21wcmVzc191c2luZ0NU
YWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXpl
X3Qgc3JjU2l6ZSwgY29uc3QgRlNFX0NUYWJsZSAqY3QpOw0KKw0KKy8qIQ0KK1R1dG9yaWFsIDoN
CistLS0tLS0tLS0tDQorVGhlIGZpcnN0IHN0ZXAgaXMgdG8gY291bnQgYWxsIHN5bWJvbHMuIEZT
RV9jb3VudCgpIGRvZXMgdGhpcyBqb2IgdmVyeSBmYXN0Lg0KK1Jlc3VsdCB3aWxsIGJlIHNhdmVk
IGludG8gJ2NvdW50JywgYSB0YWJsZSBvZiB1bnNpZ25lZCBpbnQsIHdoaWNoIG11c3QgYmUgYWxy
ZWFkeSBhbGxvY2F0ZWQsIGFuZCBoYXZlICdtYXhTeW1ib2xWYWx1ZVB0clswXSsxJyBjZWxscy4N
Cisnc3JjJyBpcyBhIHRhYmxlIG9mIGJ5dGVzIG9mIHNpemUgJ3NyY1NpemUnLiBBbGwgdmFsdWVz
IHdpdGhpbiAnc3JjJyBNVVNUIGJlIDw9IG1heFN5bWJvbFZhbHVlUHRyWzBdDQorbWF4U3ltYm9s
VmFsdWVQdHJbMF0gd2lsbCBiZSB1cGRhdGVkLCB3aXRoIGl0cyByZWFsIHZhbHVlIChuZWNlc3Nh
cmlseSA8PSBvcmlnaW5hbCB2YWx1ZSkNCitGU0VfY291bnQoKSB3aWxsIHJldHVybiB0aGUgbnVt
YmVyIG9mIG9jY3VycmVuY2Ugb2YgdGhlIG1vc3QgZnJlcXVlbnQgc3ltYm9sLg0KK1RoaXMgY2Fu
IGJlIHVzZWQgdG8ga25vdyBpZiB0aGVyZSBpcyBhIHNpbmdsZSBzeW1ib2wgd2l0aGluICdzcmMn
LCBhbmQgdG8gcXVpY2tseSBldmFsdWF0ZSBpdHMgY29tcHJlc3NpYmlsaXR5Lg0KK0lmIHRoZXJl
IGlzIGFuIGVycm9yLCB0aGUgZnVuY3Rpb24gd2lsbCByZXR1cm4gYW4gRXJyb3JDb2RlICh3aGlj
aCBjYW4gYmUgdGVzdGVkIHVzaW5nIEZTRV9pc0Vycm9yKCkpLg0KKw0KK1RoZSBuZXh0IHN0ZXAg
aXMgdG8gbm9ybWFsaXplIHRoZSBmcmVxdWVuY2llcy4NCitGU0Vfbm9ybWFsaXplQ291bnQoKSB3
aWxsIGVuc3VyZSB0aGF0IHN1bSBvZiBmcmVxdWVuY2llcyBpcyA9PSAyIF4ndGFibGVMb2cnLg0K
K0l0IGFsc28gZ3VhcmFudGVlcyBhIG1pbmltdW0gb2YgMSB0byBhbnkgU3ltYm9sIHdpdGggZnJl
cXVlbmN5ID49IDEuDQorWW91IGNhbiB1c2UgJ3RhYmxlTG9nJz09MCB0byBtZWFuICJ1c2UgZGVm
YXVsdCB0YWJsZUxvZyB2YWx1ZSIuDQorSWYgeW91IGFyZSB1bnN1cmUgb2Ygd2hpY2ggdGFibGVM
b2cgdmFsdWUgdG8gdXNlLCB5b3UgY2FuIGFzayBGU0Vfb3B0aW1hbFRhYmxlTG9nKCksDQord2hp
Y2ggd2lsbCBwcm92aWRlIHRoZSBvcHRpbWFsIHZhbGlkIHRhYmxlTG9nIGdpdmVuIHNvdXJjZVNp
emUsIG1heFN5bWJvbFZhbHVlLCBhbmQgYSB1c2VyLWRlZmluZWQgbWF4aW11bSAoMCBtZWFucyAi
ZGVmYXVsdCIpLg0KKw0KK1RoZSByZXN1bHQgb2YgRlNFX25vcm1hbGl6ZUNvdW50KCkgd2lsbCBi
ZSBzYXZlZCBpbnRvIGEgdGFibGUsDQorY2FsbGVkICdub3JtYWxpemVkQ291bnRlcicsIHdoaWNo
IGlzIGEgdGFibGUgb2Ygc2lnbmVkIHNob3J0Lg0KKydub3JtYWxpemVkQ291bnRlcicgbXVzdCBi
ZSBhbHJlYWR5IGFsbG9jYXRlZCwgYW5kIGhhdmUgYXQgbGVhc3QgJ21heFN5bWJvbFZhbHVlKzEn
IGNlbGxzLg0KK1RoZSByZXR1cm4gdmFsdWUgaXMgdGFibGVMb2cgaWYgZXZlcnl0aGluZyBwcm9j
ZWVkZWQgYXMgZXhwZWN0ZWQuDQorSXQgaXMgMCBpZiB0aGVyZSBpcyBhIHNpbmdsZSBzeW1ib2wg
d2l0aGluIGRpc3RyaWJ1dGlvbi4NCitJZiB0aGVyZSBpcyBhbiBlcnJvciAoZXg6IGludmFsaWQg
dGFibGVMb2cgdmFsdWUpLCB0aGUgZnVuY3Rpb24gd2lsbCByZXR1cm4gYW4gRXJyb3JDb2RlICh3
aGljaCBjYW4gYmUgdGVzdGVkIHVzaW5nIEZTRV9pc0Vycm9yKCkpLg0KKw0KKydub3JtYWxpemVk
Q291bnRlcicgY2FuIGJlIHNhdmVkIGluIGEgY29tcGFjdCBtYW5uZXIgdG8gYSBtZW1vcnkgYXJl
YSB1c2luZyBGU0Vfd3JpdGVOQ291bnQoKS4NCisnYnVmZmVyJyBtdXN0IGJlIGFscmVhZHkgYWxs
b2NhdGVkLg0KK0ZvciBndWFyYW50ZWVkIHN1Y2Nlc3MsIGJ1ZmZlciBzaXplIG11c3QgYmUgYXQg
bGVhc3QgRlNFX2hlYWRlckJvdW5kKCkuDQorVGhlIHJlc3VsdCBvZiB0aGUgZnVuY3Rpb24gaXMg
dGhlIG51bWJlciBvZiBieXRlcyB3cml0dGVuIGludG8gJ2J1ZmZlcicuDQorSWYgdGhlcmUgaXMg
YW4gZXJyb3IsIHRoZSBmdW5jdGlvbiB3aWxsIHJldHVybiBhbiBFcnJvckNvZGUgKHdoaWNoIGNh
biBiZSB0ZXN0ZWQgdXNpbmcgRlNFX2lzRXJyb3IoKTsgZXggOiBidWZmZXIgc2l6ZSB0b28gc21h
bGwpLg0KKw0KKydub3JtYWxpemVkQ291bnRlcicgY2FuIHRoZW4gYmUgdXNlZCB0byBjcmVhdGUg
dGhlIGNvbXByZXNzaW9uIHRhYmxlICdDVGFibGUnLg0KK1RoZSBzcGFjZSByZXF1aXJlZCBieSAn
Q1RhYmxlJyBtdXN0IGJlIGFscmVhZHkgYWxsb2NhdGVkLCB1c2luZyBGU0VfY3JlYXRlQ1RhYmxl
KCkuDQorWW91IGNhbiB0aGVuIHVzZSBGU0VfYnVpbGRDVGFibGUoKSB0byBmaWxsICdDVGFibGUn
Lg0KK0lmIHRoZXJlIGlzIGFuIGVycm9yLCBib3RoIGZ1bmN0aW9ucyB3aWxsIHJldHVybiBhbiBF
cnJvckNvZGUgKHdoaWNoIGNhbiBiZSB0ZXN0ZWQgdXNpbmcgRlNFX2lzRXJyb3IoKSkuDQorDQor
J0NUYWJsZScgY2FuIHRoZW4gYmUgdXNlZCB0byBjb21wcmVzcyAnc3JjJywgd2l0aCBGU0VfY29t
cHJlc3NfdXNpbmdDVGFibGUoKS4NCitTaW1pbGFyIHRvIEZTRV9jb3VudCgpLCB0aGUgY29udmVu
dGlvbiBpcyB0aGF0ICdzcmMnIGlzIGFzc3VtZWQgdG8gYmUgYSB0YWJsZSBvZiBjaGFyIG9mIHNp
emUgJ3NyY1NpemUnDQorVGhlIGZ1bmN0aW9uIHJldHVybnMgdGhlIHNpemUgb2YgY29tcHJlc3Nl
ZCBkYXRhICh3aXRob3V0IGhlYWRlciksIG5lY2Vzc2FyaWx5IDw9IGBkc3RDYXBhY2l0eWAuDQor
SWYgaXQgcmV0dXJucyAnMCcsIGNvbXByZXNzZWQgZGF0YSBjb3VsZCBub3QgZml0IGludG8gJ2Rz
dCcuDQorSWYgdGhlcmUgaXMgYW4gZXJyb3IsIHRoZSBmdW5jdGlvbiB3aWxsIHJldHVybiBhbiBF
cnJvckNvZGUgKHdoaWNoIGNhbiBiZSB0ZXN0ZWQgdXNpbmcgRlNFX2lzRXJyb3IoKSkuDQorKi8N
CisNCisvKiAqKiogREVDT01QUkVTU0lPTiAqKiogKi8NCisNCisvKiEgRlNFX3JlYWROQ291bnQo
KToNCisJUmVhZCBjb21wYWN0bHkgc2F2ZWQgJ25vcm1hbGl6ZWRDb3VudGVyJyBmcm9tICdyQnVm
ZmVyJy4NCisJQHJldHVybiA6IHNpemUgcmVhZCBmcm9tICdyQnVmZmVyJywNCisJCQkgIG9yIGFu
IGVycm9yQ29kZSwgd2hpY2ggY2FuIGJlIHRlc3RlZCB1c2luZyBGU0VfaXNFcnJvcigpLg0KKwkJ
CSAgbWF4U3ltYm9sVmFsdWVQdHJbMF0gYW5kIHRhYmxlTG9nUHRyWzBdIHdpbGwgYWxzbyBiZSB1
cGRhdGVkIHdpdGggdGhlaXIgcmVzcGVjdGl2ZSB2YWx1ZXMgKi8NCitGU0VfUFVCTElDX0FQSSBz
aXplX3QgRlNFX3JlYWROQ291bnQoc2hvcnQgKm5vcm1hbGl6ZWRDb3VudGVyLCB1bnNpZ25lZCAq
bWF4U3ltYm9sVmFsdWVQdHIsIHVuc2lnbmVkICp0YWJsZUxvZ1B0ciwgY29uc3Qgdm9pZCAqckJ1
ZmZlciwgc2l6ZV90IHJCdWZmU2l6ZSk7DQorDQorLyohIENvbnN0cnVjdG9yIGFuZCBEZXN0cnVj
dG9yIG9mIEZTRV9EVGFibGUuDQorCU5vdGUgdGhhdCBpdHMgc2l6ZSBkZXBlbmRzIG9uICd0YWJs
ZUxvZycgKi8NCit0eXBlZGVmIHVuc2lnbmVkIEZTRV9EVGFibGU7IC8qIGRvbid0IGFsbG9jYXRl
IHRoYXQuIEl0J3MganVzdCBhIHdheSB0byBiZSBtb3JlIHJlc3RyaWN0aXZlIHRoYW4gdm9pZCog
Ki8NCisNCisvKiEgRlNFX2J1aWxkRFRhYmxlKCk6DQorCUJ1aWxkcyAnZHQnLCB3aGljaCBtdXN0
IGJlIGFscmVhZHkgYWxsb2NhdGVkLCB1c2luZyBGU0VfY3JlYXRlRFRhYmxlKCkuDQorCXJldHVy
biA6IDAsIG9yIGFuIGVycm9yQ29kZSwgd2hpY2ggY2FuIGJlIHRlc3RlZCB1c2luZyBGU0VfaXNF
cnJvcigpICovDQorRlNFX1BVQkxJQ19BUEkgc2l6ZV90IEZTRV9idWlsZERUYWJsZV93a3NwKEZT
RV9EVGFibGUgKmR0LCBjb25zdCBzaG9ydCAqbm9ybWFsaXplZENvdW50ZXIsIHVuc2lnbmVkIG1h
eFN5bWJvbFZhbHVlLCB1bnNpZ25lZCB0YWJsZUxvZywgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qg
d29ya3NwYWNlU2l6ZSk7DQorDQorLyohIEZTRV9kZWNvbXByZXNzX3VzaW5nRFRhYmxlKCk6DQor
CURlY29tcHJlc3MgY29tcHJlc3NlZCBzb3VyY2UgYGNTcmNgIG9mIHNpemUgYGNTcmNTaXplYCB1
c2luZyBgZHRgDQorCWludG8gYGRzdGAgd2hpY2ggbXVzdCBiZSBhbHJlYWR5IGFsbG9jYXRlZC4N
CisJQHJldHVybiA6IHNpemUgb2YgcmVnZW5lcmF0ZWQgZGF0YSAobmVjZXNzYXJpbHkgPD0gYGRz
dENhcGFjaXR5YCksDQorCQkJICBvciBhbiBlcnJvckNvZGUsIHdoaWNoIGNhbiBiZSB0ZXN0ZWQg
dXNpbmcgRlNFX2lzRXJyb3IoKSAqLw0KK0ZTRV9QVUJMSUNfQVBJIHNpemVfdCBGU0VfZGVjb21w
cmVzc191c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9p
ZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBGU0VfRFRhYmxlICpkdCk7DQorDQorLyoh
DQorVHV0b3JpYWwgOg0KKy0tLS0tLS0tLS0NCisoTm90ZSA6IHRoZXNlIGZ1bmN0aW9ucyBvbmx5
IGRlY29tcHJlc3MgRlNFLWNvbXByZXNzZWQgYmxvY2tzLg0KKyBJZiBibG9jayBpcyB1bmNvbXBy
ZXNzZWQsIHVzZSBtZW1jcHkoKSBpbnN0ZWFkDQorIElmIGJsb2NrIGlzIGEgc2luZ2xlIHJlcGVh
dGVkIGJ5dGUsIHVzZSBtZW1zZXQoKSBpbnN0ZWFkICkNCisNCitUaGUgZmlyc3Qgc3RlcCBpcyB0
byBvYnRhaW4gdGhlIG5vcm1hbGl6ZWQgZnJlcXVlbmNpZXMgb2Ygc3ltYm9scy4NCitUaGlzIGNh
biBiZSBwZXJmb3JtZWQgYnkgRlNFX3JlYWROQ291bnQoKSBpZiBpdCB3YXMgc2F2ZWQgdXNpbmcg
RlNFX3dyaXRlTkNvdW50KCkuDQorJ25vcm1hbGl6ZWRDb3VudGVyJyBtdXN0IGJlIGFscmVhZHkg
YWxsb2NhdGVkLCBhbmQgaGF2ZSBhdCBsZWFzdCAnbWF4U3ltYm9sVmFsdWVQdHJbMF0rMScgY2Vs
bHMgb2Ygc2lnbmVkIHNob3J0Lg0KK0luIHByYWN0aWNlLCB0aGF0IG1lYW5zIGl0J3MgbmVjZXNz
YXJ5IHRvIGtub3cgJ21heFN5bWJvbFZhbHVlJyBiZWZvcmVoYW5kLA0KK29yIHNpemUgdGhlIHRh
YmxlIHRvIGhhbmRsZSB3b3JzdCBjYXNlIHNpdHVhdGlvbnMgKHR5cGljYWxseSAyNTYpLg0KK0ZT
RV9yZWFkTkNvdW50KCkgd2lsbCBwcm92aWRlICd0YWJsZUxvZycgYW5kICdtYXhTeW1ib2xWYWx1
ZScuDQorVGhlIHJlc3VsdCBvZiBGU0VfcmVhZE5Db3VudCgpIGlzIHRoZSBudW1iZXIgb2YgYnl0
ZXMgcmVhZCBmcm9tICdyQnVmZmVyJy4NCitOb3RlIHRoYXQgJ3JCdWZmZXJTaXplJyBtdXN0IGJl
IGF0IGxlYXN0IDQgYnl0ZXMsIGV2ZW4gaWYgdXNlZnVsIGluZm9ybWF0aW9uIGlzIGxlc3MgdGhh
biB0aGF0Lg0KK0lmIHRoZXJlIGlzIGFuIGVycm9yLCB0aGUgZnVuY3Rpb24gd2lsbCByZXR1cm4g
YW4gZXJyb3IgY29kZSwgd2hpY2ggY2FuIGJlIHRlc3RlZCB1c2luZyBGU0VfaXNFcnJvcigpLg0K
Kw0KK1RoZSBuZXh0IHN0ZXAgaXMgdG8gYnVpbGQgdGhlIGRlY29tcHJlc3Npb24gdGFibGVzICdG
U0VfRFRhYmxlJyBmcm9tICdub3JtYWxpemVkQ291bnRlcicuDQorVGhpcyBpcyBwZXJmb3JtZWQg
YnkgdGhlIGZ1bmN0aW9uIEZTRV9idWlsZERUYWJsZSgpLg0KK1RoZSBzcGFjZSByZXF1aXJlZCBi
eSAnRlNFX0RUYWJsZScgbXVzdCBiZSBhbHJlYWR5IGFsbG9jYXRlZCB1c2luZyBGU0VfY3JlYXRl
RFRhYmxlKCkuDQorSWYgdGhlcmUgaXMgYW4gZXJyb3IsIHRoZSBmdW5jdGlvbiB3aWxsIHJldHVy
biBhbiBlcnJvciBjb2RlLCB3aGljaCBjYW4gYmUgdGVzdGVkIHVzaW5nIEZTRV9pc0Vycm9yKCku
DQorDQorYEZTRV9EVGFibGVgIGNhbiB0aGVuIGJlIHVzZWQgdG8gZGVjb21wcmVzcyBgY1NyY2As
IHdpdGggRlNFX2RlY29tcHJlc3NfdXNpbmdEVGFibGUoKS4NCitgY1NyY1NpemVgIG11c3QgYmUg
c3RyaWN0bHkgY29ycmVjdCwgb3RoZXJ3aXNlIGRlY29tcHJlc3Npb24gd2lsbCBmYWlsLg0KK0ZT
RV9kZWNvbXByZXNzX3VzaW5nRFRhYmxlKCkgcmVzdWx0IHdpbGwgdGVsbCBob3cgbWFueSBieXRl
cyB3ZXJlIHJlZ2VuZXJhdGVkICg8PWBkc3RDYXBhY2l0eWApLg0KK0lmIHRoZXJlIGlzIGFuIGVy
cm9yLCB0aGUgZnVuY3Rpb24gd2lsbCByZXR1cm4gYW4gZXJyb3IgY29kZSwgd2hpY2ggY2FuIGJl
IHRlc3RlZCB1c2luZyBGU0VfaXNFcnJvcigpLiAoZXg6IGRzdCBidWZmZXIgdG9vIHNtYWxsKQ0K
KyovDQorDQorLyogKioqIERlcGVuZGVuY3kgKioqICovDQorI2luY2x1ZGUgImJpdHN0cmVhbS5o
Ig0KKw0KKy8qICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAg
U3RhdGljIGFsbG9jYXRpb24NCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqLw0KKy8qIEZTRSBidWZmZXIgYm91bmRzICovDQorI2RlZmluZSBGU0VfTkNPVU5UQk9V
TkQgNTEyDQorI2RlZmluZSBGU0VfQkxPQ0tCT1VORChzaXplKSAoc2l6ZSArIChzaXplID4+IDcp
KQ0KKyNkZWZpbmUgRlNFX0NPTVBSRVNTQk9VTkQoc2l6ZSkgKEZTRV9OQ09VTlRCT1VORCArIEZT
RV9CTE9DS0JPVU5EKHNpemUpKSAvKiBNYWNybyB2ZXJzaW9uLCB1c2VmdWwgZm9yIHN0YXRpYyBh
bGxvY2F0aW9uICovDQorDQorLyogSXQgaXMgcG9zc2libGUgdG8gc3RhdGljYWxseSBhbGxvY2F0
ZSBGU0UgQ1RhYmxlL0RUYWJsZSBhcyBhIHRhYmxlIG9mIEZTRV9DVGFibGUvRlNFX0RUYWJsZSB1
c2luZyBiZWxvdyBtYWNyb3MgKi8NCisjZGVmaW5lIEZTRV9DVEFCTEVfU0laRV9VMzIobWF4VGFi
bGVMb2csIG1heFN5bWJvbFZhbHVlKSAoMSArICgxIDw8IChtYXhUYWJsZUxvZyAtIDEpKSArICgo
bWF4U3ltYm9sVmFsdWUgKyAxKSAqIDIpKQ0KKyNkZWZpbmUgRlNFX0RUQUJMRV9TSVpFX1UzMiht
YXhUYWJsZUxvZykgKDEgKyAoMSA8PCBtYXhUYWJsZUxvZykpDQorDQorLyogKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisqICBGU0UgYWR2YW5jZWQgQVBJDQorKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKiBGU0VfY291bnRf
d2tzcCgpIDoNCisgKiBTYW1lIGFzIEZTRV9jb3VudCgpLCBidXQgdXNpbmcgYW4gZXh0ZXJuYWxs
eSBwcm92aWRlZCBzY3JhdGNoIGJ1ZmZlci4NCisgKiBgd29ya1NwYWNlYCBzaXplIG11c3QgYmUg
dGFibGUgb2YgPj0gYDEwMjRgIHVuc2lnbmVkDQorICovDQorc2l6ZV90IEZTRV9jb3VudF93a3Nw
KHVuc2lnbmVkICpjb3VudCwgdW5zaWduZWQgKm1heFN5bWJvbFZhbHVlUHRyLCBjb25zdCB2b2lk
ICpzb3VyY2UsIHNpemVfdCBzb3VyY2VTaXplLCB1bnNpZ25lZCAqd29ya1NwYWNlKTsNCisNCisv
KiBGU0VfY291bnRGYXN0X3drc3AoKSA6DQorICogU2FtZSBhcyBGU0VfY291bnRGYXN0KCksIGJ1
dCB1c2luZyBhbiBleHRlcm5hbGx5IHByb3ZpZGVkIHNjcmF0Y2ggYnVmZmVyLg0KKyAqIGB3b3Jr
U3BhY2VgIG11c3QgYmUgYSB0YWJsZSBvZiBtaW5pbXVtIGAxMDI0YCB1bnNpZ25lZA0KKyAqLw0K
K3NpemVfdCBGU0VfY291bnRGYXN0X3drc3AodW5zaWduZWQgKmNvdW50LCB1bnNpZ25lZCAqbWF4
U3ltYm9sVmFsdWVQdHIsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIHVuc2lnbmVk
ICp3b3JrU3BhY2UpOw0KKw0KKy8qISBGU0VfY291bnRfc2ltcGxlDQorICogU2FtZSBhcyBGU0Vf
Y291bnRGYXN0KCksIGJ1dCBkb2VzIG5vdCB1c2UgYW55IGFkZGl0aW9uYWwgbWVtb3J5IChub3Qg
ZXZlbiBvbiBzdGFjaykuDQorICogVGhpcyBmdW5jdGlvbiBpcyB1bnNhZmUsIGFuZCB3aWxsIHNl
Z2ZhdWx0IGlmIGFueSB2YWx1ZSB3aXRoaW4gYHNyY2AgaXMgYD4gKm1heFN5bWJvbFZhbHVlUHRy
YCAocHJlc3VtaW5nIGl0J3MgYWxzbyB0aGUgc2l6ZSBvZiBgY291bnRgKS4NCisqLw0KK3NpemVf
dCBGU0VfY291bnRfc2ltcGxlKHVuc2lnbmVkICpjb3VudCwgdW5zaWduZWQgKm1heFN5bWJvbFZh
bHVlUHRyLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKTsNCisNCit1bnNpZ25lZCBG
U0Vfb3B0aW1hbFRhYmxlTG9nX2ludGVybmFsKHVuc2lnbmVkIG1heFRhYmxlTG9nLCBzaXplX3Qg
c3JjU2l6ZSwgdW5zaWduZWQgbWF4U3ltYm9sVmFsdWUsIHVuc2lnbmVkIG1pbnVzKTsNCisvKio8
IHNhbWUgYXMgRlNFX29wdGltYWxUYWJsZUxvZygpLCB3aGljaCB1c2VkIGBtaW51cz09MmAgKi8N
CisNCitzaXplX3QgRlNFX2J1aWxkQ1RhYmxlX3JhdyhGU0VfQ1RhYmxlICpjdCwgdW5zaWduZWQg
bmJCaXRzKTsNCisvKio8IGJ1aWxkIGEgZmFrZSBGU0VfQ1RhYmxlLCBkZXNpZ25lZCBmb3IgYSBm
bGF0IGRpc3RyaWJ1dGlvbiwgd2hlcmUgZWFjaCBzeW1ib2wgdXNlcyBuYkJpdHMgKi8NCisNCitz
aXplX3QgRlNFX2J1aWxkQ1RhYmxlX3JsZShGU0VfQ1RhYmxlICpjdCwgdW5zaWduZWQgY2hhciBz
eW1ib2xWYWx1ZSk7DQorLyoqPCBidWlsZCBhIGZha2UgRlNFX0NUYWJsZSwgZGVzaWduZWQgdG8g
Y29tcHJlc3MgYWx3YXlzIHRoZSBzYW1lIHN5bWJvbFZhbHVlICovDQorDQorLyogRlNFX2J1aWxk
Q1RhYmxlX3drc3AoKSA6DQorICogU2FtZSBhcyBGU0VfYnVpbGRDVGFibGUoKSwgYnV0IHVzaW5n
IGFuIGV4dGVybmFsbHkgYWxsb2NhdGVkIHNjcmF0Y2ggYnVmZmVyIChgd29ya1NwYWNlYCkuDQor
ICogYHdrc3BTaXplYCBtdXN0IGJlID49IGAoMTw8dGFibGVMb2cpYC4NCisgKi8NCitzaXplX3Qg
RlNFX2J1aWxkQ1RhYmxlX3drc3AoRlNFX0NUYWJsZSAqY3QsIGNvbnN0IHNob3J0ICpub3JtYWxp
emVkQ291bnRlciwgdW5zaWduZWQgbWF4U3ltYm9sVmFsdWUsIHVuc2lnbmVkIHRhYmxlTG9nLCB2
b2lkICp3b3JrU3BhY2UsIHNpemVfdCB3a3NwU2l6ZSk7DQorDQorc2l6ZV90IEZTRV9idWlsZERU
YWJsZV9yYXcoRlNFX0RUYWJsZSAqZHQsIHVuc2lnbmVkIG5iQml0cyk7DQorLyoqPCBidWlsZCBh
IGZha2UgRlNFX0RUYWJsZSwgZGVzaWduZWQgdG8gcmVhZCBhIGZsYXQgZGlzdHJpYnV0aW9uIHdo
ZXJlIGVhY2ggc3ltYm9sIHVzZXMgbmJCaXRzICovDQorDQorc2l6ZV90IEZTRV9idWlsZERUYWJs
ZV9ybGUoRlNFX0RUYWJsZSAqZHQsIHVuc2lnbmVkIGNoYXIgc3ltYm9sVmFsdWUpOw0KKy8qKjwg
YnVpbGQgYSBmYWtlIEZTRV9EVGFibGUsIGRlc2lnbmVkIHRvIGFsd2F5cyBnZW5lcmF0ZSB0aGUg
c2FtZSBzeW1ib2xWYWx1ZSAqLw0KKw0KK3NpemVfdCBGU0VfZGVjb21wcmVzc193a3NwKHZvaWQg
KmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1Np
emUsIHVuc2lnbmVkIG1heExvZywgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6
ZSk7DQorLyoqPCBzYW1lIGFzIEZTRV9kZWNvbXByZXNzKCksIHVzaW5nIGFuIGV4dGVybmFsbHkg
YWxsb2NhdGVkIGB3b3JrU3BhY2VgIHByb2R1Y2VkIHdpdGggYEZTRV9EVEFCTEVfU0laRV9VMzIo
bWF4TG9nKWAgKi8NCisNCisvKiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKg0KKyogIEZTRSBzeW1ib2wgY29tcHJlc3Npb24gQVBJDQorKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKiENCisgICBUaGlzIEFQSSBjb25zaXN0cyBv
ZiBzbWFsbCB1bml0YXJ5IGZ1bmN0aW9ucywgd2hpY2ggaGlnaGx5IGJlbmVmaXQgZnJvbSBiZWlu
ZyBpbmxpbmVkLg0KKyAgIEhlbmNlIHRoZWlyIGJvZHkgYXJlIGluY2x1ZGVkIGluIG5leHQgc2Vj
dGlvbi4NCisqLw0KK3R5cGVkZWYgc3RydWN0IHsNCisJcHRyZGlmZl90IHZhbHVlOw0KKwljb25z
dCB2b2lkICpzdGF0ZVRhYmxlOw0KKwljb25zdCB2b2lkICpzeW1ib2xUVDsNCisJdW5zaWduZWQg
c3RhdGVMb2c7DQorfSBGU0VfQ1N0YXRlX3Q7DQorDQorc3RhdGljIHZvaWQgRlNFX2luaXRDU3Rh
dGUoRlNFX0NTdGF0ZV90ICpDU3RhdGVQdHIsIGNvbnN0IEZTRV9DVGFibGUgKmN0KTsNCisNCitz
dGF0aWMgdm9pZCBGU0VfZW5jb2RlU3ltYm9sKEJJVF9DU3RyZWFtX3QgKmJpdEMsIEZTRV9DU3Rh
dGVfdCAqQ1N0YXRlUHRyLCB1bnNpZ25lZCBzeW1ib2wpOw0KKw0KK3N0YXRpYyB2b2lkIEZTRV9m
bHVzaENTdGF0ZShCSVRfQ1N0cmVhbV90ICpiaXRDLCBjb25zdCBGU0VfQ1N0YXRlX3QgKkNTdGF0
ZVB0cik7DQorDQorLyoqPA0KK1RoZXNlIGZ1bmN0aW9ucyBhcmUgaW5uZXIgY29tcG9uZW50cyBv
ZiBGU0VfY29tcHJlc3NfdXNpbmdDVGFibGUoKS4NCitUaGV5IGFsbG93IHRoZSBjcmVhdGlvbiBv
ZiBjdXN0b20gc3RyZWFtcywgbWl4aW5nIG11bHRpcGxlIHRhYmxlcyBhbmQgYml0IHNvdXJjZXMu
DQorDQorQSBrZXkgcHJvcGVydHkgdG8ga2VlcCBpbiBtaW5kIGlzIHRoYXQgZW5jb2RpbmcgYW5k
IGRlY29kaW5nIGFyZSBkb25lICoqaW4gcmV2ZXJzZSBkaXJlY3Rpb24qKi4NCitTbyB0aGUgZmly
c3Qgc3ltYm9sIHlvdSB3aWxsIGVuY29kZSBpcyB0aGUgbGFzdCB5b3Ugd2lsbCBkZWNvZGUsIGxp
a2UgYSBMSUZPIHN0YWNrLg0KKw0KK1lvdSB3aWxsIG5lZWQgYSBmZXcgdmFyaWFibGVzIHRvIHRy
YWNrIHlvdXIgQ1N0cmVhbS4gVGhleSBhcmUgOg0KKw0KK0ZTRV9DVGFibGUgICAgY3Q7ICAgICAg
ICAgLy8gUHJvdmlkZWQgYnkgRlNFX2J1aWxkQ1RhYmxlKCkNCitCSVRfQ1N0cmVhbV90IGJpdFN0
cmVhbTsgIC8vIGJpdFN0cmVhbSB0cmFja2luZyBzdHJ1Y3R1cmUNCitGU0VfQ1N0YXRlX3QgIHN0
YXRlOyAgICAgIC8vIFN0YXRlIHRyYWNraW5nIHN0cnVjdHVyZSAoY2FuIGhhdmUgc2V2ZXJhbCkN
CisNCisNCitUaGUgZmlyc3QgdGhpbmcgdG8gZG8gaXMgdG8gaW5pdCBiaXRTdHJlYW0gYW5kIHN0
YXRlLg0KKwlzaXplX3QgZXJyb3JDb2RlID0gQklUX2luaXRDU3RyZWFtKCZiaXRTdHJlYW0sIGRz
dEJ1ZmZlciwgbWF4RHN0U2l6ZSk7DQorCUZTRV9pbml0Q1N0YXRlKCZzdGF0ZSwgY3QpOw0KKw0K
K05vdGUgdGhhdCBCSVRfaW5pdENTdHJlYW0oKSBjYW4gcHJvZHVjZSBhbiBlcnJvciBjb2RlLCBz
byBpdHMgcmVzdWx0IHNob3VsZCBiZSB0ZXN0ZWQsIHVzaW5nIEZTRV9pc0Vycm9yKCk7DQorWW91
IGNhbiB0aGVuIGVuY29kZSB5b3VyIGlucHV0IGRhdGEsIGJ5dGUgYWZ0ZXIgYnl0ZS4NCitGU0Vf
ZW5jb2RlU3ltYm9sKCkgb3V0cHV0cyBhIG1heGltdW0gb2YgJ3RhYmxlTG9nJyBiaXRzIGF0IGEg
dGltZS4NCitSZW1lbWJlciBkZWNvZGluZyB3aWxsIGJlIGRvbmUgaW4gcmV2ZXJzZSBkaXJlY3Rp
b24uDQorCUZTRV9lbmNvZGVCeXRlKCZiaXRTdHJlYW0sICZzdGF0ZSwgc3ltYm9sKTsNCisNCitB
dCBhbnkgdGltZSwgeW91IGNhbiBhbHNvIGFkZCBhbnkgYml0IHNlcXVlbmNlLg0KK05vdGUgOiBt
YXhpbXVtIGFsbG93ZWQgbmJCaXRzIGlzIDI1LCBmb3IgY29tcGF0aWJpbGl0eSB3aXRoIDMyLWJp
dHMgZGVjb2RlcnMNCisJQklUX2FkZEJpdHMoJmJpdFN0cmVhbSwgYml0RmllbGQsIG5iQml0cyk7
DQorDQorVGhlIGFib3ZlIG1ldGhvZHMgZG9uJ3QgY29tbWl0IGRhdGEgdG8gbWVtb3J5LCB0aGV5
IGp1c3Qgc3RvcmUgaXQgaW50byBsb2NhbCByZWdpc3RlciwgZm9yIHNwZWVkLg0KK0xvY2FsIHJl
Z2lzdGVyIHNpemUgaXMgNjQtYml0cyBvbiA2NC1iaXRzIHN5c3RlbXMsIDMyLWJpdHMgb24gMzIt
Yml0cyBzeXN0ZW1zIChzaXplX3QpLg0KK1dyaXRpbmcgZGF0YSB0byBtZW1vcnkgaXMgYSBtYW51
YWwgb3BlcmF0aW9uLCBwZXJmb3JtZWQgYnkgdGhlIGZsdXNoQml0cyBmdW5jdGlvbi4NCisJQklU
X2ZsdXNoQml0cygmYml0U3RyZWFtKTsNCisNCitZb3VyIGxhc3QgRlNFIGVuY29kaW5nIG9wZXJh
dGlvbiBzaGFsbCBiZSB0byBmbHVzaCB5b3VyIGxhc3Qgc3RhdGUgdmFsdWUocykuDQorCUZTRV9m
bHVzaFN0YXRlKCZiaXRTdHJlYW0sICZzdGF0ZSk7DQorDQorRmluYWxseSwgeW91IG11c3QgY2xv
c2UgdGhlIGJpdFN0cmVhbS4NCitUaGUgZnVuY3Rpb24gcmV0dXJucyB0aGUgc2l6ZSBvZiBDU3Ry
ZWFtIGluIGJ5dGVzLg0KK0lmIGRhdGEgY291bGRuJ3QgZml0IGludG8gZHN0QnVmZmVyLCBpdCB3
aWxsIHJldHVybiBhIDAgKCA9PSBub3QgY29tcHJlc3NpYmxlKQ0KK0lmIHRoZXJlIGlzIGFuIGVy
cm9yLCBpdCByZXR1cm5zIGFuIGVycm9yQ29kZSAod2hpY2ggY2FuIGJlIHRlc3RlZCB1c2luZyBG
U0VfaXNFcnJvcigpKS4NCisJc2l6ZV90IHNpemUgPSBCSVRfY2xvc2VDU3RyZWFtKCZiaXRTdHJl
YW0pOw0KKyovDQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioNCisqICBGU0Ugc3ltYm9sIGRlY29tcHJlc3Npb24gQVBJDQorKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKi8NCit0eXBlZGVmIHN0cnVjdCB7DQorCXNpemVfdCBz
dGF0ZTsNCisJY29uc3Qgdm9pZCAqdGFibGU7IC8qIHByZWNpc2UgdGFibGUgbWF5IHZhcnksIGRl
cGVuZGluZyBvbiBVMTYgKi8NCit9IEZTRV9EU3RhdGVfdDsNCisNCitzdGF0aWMgdm9pZCBGU0Vf
aW5pdERTdGF0ZShGU0VfRFN0YXRlX3QgKkRTdGF0ZVB0ciwgQklUX0RTdHJlYW1fdCAqYml0RCwg
Y29uc3QgRlNFX0RUYWJsZSAqZHQpOw0KKw0KK3N0YXRpYyB1bnNpZ25lZCBjaGFyIEZTRV9kZWNv
ZGVTeW1ib2woRlNFX0RTdGF0ZV90ICpEU3RhdGVQdHIsIEJJVF9EU3RyZWFtX3QgKmJpdEQpOw0K
Kw0KK3N0YXRpYyB1bnNpZ25lZCBGU0VfZW5kT2ZEU3RhdGUoY29uc3QgRlNFX0RTdGF0ZV90ICpE
U3RhdGVQdHIpOw0KKw0KKy8qKjwNCitMZXQncyBub3cgZGVjb21wb3NlIEZTRV9kZWNvbXByZXNz
X3VzaW5nRFRhYmxlKCkgaW50byBpdHMgdW5pdGFyeSBjb21wb25lbnRzLg0KK1lvdSB3aWxsIGRl
Y29kZSBGU0UtZW5jb2RlZCBzeW1ib2xzIGZyb20gdGhlIGJpdFN0cmVhbSwNCithbmQgYWxzbyBh
bnkgb3RoZXIgYml0RmllbGRzIHlvdSBwdXQgaW4sICoqaW4gcmV2ZXJzZSBvcmRlcioqLg0KKw0K
K1lvdSB3aWxsIG5lZWQgYSBmZXcgdmFyaWFibGVzIHRvIHRyYWNrIHlvdXIgYml0U3RyZWFtLiBU
aGV5IGFyZSA6DQorDQorQklUX0RTdHJlYW1fdCBEU3RyZWFtOyAgICAvLyBTdHJlYW0gY29udGV4
dA0KK0ZTRV9EU3RhdGVfdCAgRFN0YXRlOyAgICAgLy8gU3RhdGUgY29udGV4dC4gTXVsdGlwbGUg
b25lcyBhcmUgcG9zc2libGUNCitGU0VfRFRhYmxlKiAgIERUYWJsZVB0cjsgIC8vIERlY29kaW5n
IHRhYmxlLCBwcm92aWRlZCBieSBGU0VfYnVpbGREVGFibGUoKQ0KKw0KK1RoZSBmaXJzdCB0aGlu
ZyB0byBkbyBpcyB0byBpbml0IHRoZSBiaXRTdHJlYW0uDQorCWVycm9yQ29kZSA9IEJJVF9pbml0
RFN0cmVhbSgmRFN0cmVhbSwgc3JjQnVmZmVyLCBzcmNTaXplKTsNCisNCitZb3Ugc2hvdWxkIHRo
ZW4gcmV0cmlldmUgeW91ciBpbml0aWFsIHN0YXRlKHMpDQorKGluIHJldmVyc2UgZmx1c2hpbmcg
b3JkZXIgaWYgeW91IGhhdmUgc2V2ZXJhbCBvbmVzKSA6DQorCWVycm9yQ29kZSA9IEZTRV9pbml0
RFN0YXRlKCZEU3RhdGUsICZEU3RyZWFtLCBEVGFibGVQdHIpOw0KKw0KK1lvdSBjYW4gdGhlbiBk
ZWNvZGUgeW91ciBkYXRhLCBzeW1ib2wgYWZ0ZXIgc3ltYm9sLg0KK0ZvciBpbmZvcm1hdGlvbiB0
aGUgbWF4aW11bSBudW1iZXIgb2YgYml0cyByZWFkIGJ5IEZTRV9kZWNvZGVTeW1ib2woKSBpcyAn
dGFibGVMb2cnLg0KK0tlZXAgaW4gbWluZCB0aGF0IHN5bWJvbHMgYXJlIGRlY29kZWQgaW4gcmV2
ZXJzZSBvcmRlciwgbGlrZSBhIExJRk8gc3RhY2sgKGxhc3QgaW4sIGZpcnN0IG91dCkuDQorCXVu
c2lnbmVkIGNoYXIgc3ltYm9sID0gRlNFX2RlY29kZVN5bWJvbCgmRFN0YXRlLCAmRFN0cmVhbSk7
DQorDQorWW91IGNhbiByZXRyaWV2ZSBhbnkgYml0ZmllbGQgeW91IGV2ZW50dWFsbHkgc3RvcmVk
IGludG8gdGhlIGJpdFN0cmVhbSAoaW4gcmV2ZXJzZSBvcmRlcikNCitOb3RlIDogbWF4aW11bSBh
bGxvd2VkIG5iQml0cyBpcyAyNSwgZm9yIDMyLWJpdHMgY29tcGF0aWJpbGl0eQ0KKwlzaXplX3Qg
Yml0RmllbGQgPSBCSVRfcmVhZEJpdHMoJkRTdHJlYW0sIG5iQml0cyk7DQorDQorQWxsIGFib3Zl
IG9wZXJhdGlvbnMgb25seSByZWFkIGZyb20gbG9jYWwgcmVnaXN0ZXIgKHdoaWNoIHNpemUgZGVw
ZW5kcyBvbiBzaXplX3QpLg0KK1JlZnVlbGluZyB0aGUgcmVnaXN0ZXIgZnJvbSBtZW1vcnkgaXMg
bWFudWFsbHkgcGVyZm9ybWVkIGJ5IHRoZSByZWxvYWQgbWV0aG9kLg0KKwllbmRTaWduYWwgPSBG
U0VfcmVsb2FkRFN0cmVhbSgmRFN0cmVhbSk7DQorDQorQklUX3JlbG9hZERTdHJlYW0oKSByZXN1
bHQgdGVsbHMgaWYgdGhlcmUgaXMgc3RpbGwgc29tZSBtb3JlIGRhdGEgdG8gcmVhZCBmcm9tIERT
dHJlYW0uDQorQklUX0RTdHJlYW1fdW5maW5pc2hlZCA6IHRoZXJlIGlzIHN0aWxsIHNvbWUgZGF0
YSBsZWZ0IGludG8gdGhlIERTdHJlYW0uDQorQklUX0RTdHJlYW1fZW5kT2ZCdWZmZXIgOiBEc3Ry
ZWFtIHJlYWNoZWQgZW5kIG9mIGJ1ZmZlci4gSXRzIGNvbnRhaW5lciBtYXkgbm8gbG9uZ2VyIGJl
IGNvbXBsZXRlbHkgZmlsbGVkLg0KK0JJVF9EU3RyZWFtX2NvbXBsZXRlZCA6IERzdHJlYW0gcmVh
Y2hlZCBpdHMgZXhhY3QgZW5kLCBjb3JyZXNwb25kaW5nIGluIGdlbmVyYWwgdG8gZGVjb21wcmVz
c2lvbiBjb21wbGV0ZWQuDQorQklUX0RTdHJlYW1fdG9vRmFyIDogRHN0cmVhbSB3ZW50IHRvbyBm
YXIuIERlY29tcHJlc3Npb24gcmVzdWx0IGlzIGNvcnJ1cHRlZC4NCisNCitXaGVuIHJlYWNoaW5n
IGVuZCBvZiBidWZmZXIgKEJJVF9EU3RyZWFtX2VuZE9mQnVmZmVyKSwgcHJvZ3Jlc3Mgc2xvd2x5
LCBub3RhYmx5IGlmIHlvdSBkZWNvZGUgbXVsdGlwbGUgc3ltYm9scyBwZXIgbG9vcCwNCit0byBw
cm9wZXJseSBkZXRlY3QgdGhlIGV4YWN0IGVuZCBvZiBzdHJlYW0uDQorQWZ0ZXIgZWFjaCBkZWNv
ZGVkIHN5bWJvbCwgY2hlY2sgaWYgRFN0cmVhbSBpcyBmdWxseSBjb25zdW1lZCB1c2luZyB0aGlz
IHNpbXBsZSB0ZXN0IDoNCisJQklUX3JlbG9hZERTdHJlYW0oJkRTdHJlYW0pID49IEJJVF9EU3Ry
ZWFtX2NvbXBsZXRlZA0KKw0KK1doZW4gaXQncyBkb25lLCB2ZXJpZnkgZGVjb21wcmVzc2lvbiBp
cyBmdWxseSBjb21wbGV0ZWQsIGJ5IGNoZWNraW5nIGJvdGggRFN0cmVhbSBhbmQgdGhlIHJlbGV2
YW50IHN0YXRlcy4NCitDaGVja2luZyBpZiBEU3RyZWFtIGhhcyByZWFjaGVkIGl0cyBlbmQgaXMg
cGVyZm9ybWVkIGJ5IDoNCisJQklUX2VuZE9mRFN0cmVhbSgmRFN0cmVhbSk7DQorQ2hlY2sgYWxz
byB0aGUgc3RhdGVzLiBUaGVyZSBtaWdodCBiZSBzb21lIHN5bWJvbHMgbGVmdCB0aGVyZSwgaWYg
c29tZSBoaWdoIHByb2JhYmlsaXR5IG9uZXMgKD41MCUpIGFyZSBwb3NzaWJsZS4NCisJRlNFX2Vu
ZE9mRFN0YXRlKCZEU3RhdGUpOw0KKyovDQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioNCisqICBGU0UgdW5zYWZlIEFQSQ0KKyoqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKiovDQorc3RhdGljIHVuc2lnbmVkIGNoYXIgRlNFX2Rl
Y29kZVN5bWJvbEZhc3QoRlNFX0RTdGF0ZV90ICpEU3RhdGVQdHIsIEJJVF9EU3RyZWFtX3QgKmJp
dEQpOw0KKy8qIGZhc3RlciwgYnV0IHdvcmtzIG9ubHkgaWYgbmJCaXRzIGlzIGFsd2F5cyA+PSAx
IChvdGhlcndpc2UsIHJlc3VsdCB3aWxsIGJlIGNvcnJ1cHRlZCkgKi8NCisNCisvKiAqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIEltcGxlbWVudGF0aW9uIG9m
IGlubGluZWQgZnVuY3Rpb25zDQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKi8NCit0eXBlZGVmIHN0cnVjdCB7DQorCWludCBkZWx0YUZpbmRTdGF0ZTsNCisJVTMy
IGRlbHRhTmJCaXRzOw0KK30gRlNFX3N5bWJvbENvbXByZXNzaW9uVHJhbnNmb3JtOyAvKiB0b3Rh
bCA4IGJ5dGVzICovDQorDQorWlNURF9TVEFUSUMgdm9pZCBGU0VfaW5pdENTdGF0ZShGU0VfQ1N0
YXRlX3QgKnN0YXRlUHRyLCBjb25zdCBGU0VfQ1RhYmxlICpjdCkNCit7DQorCWNvbnN0IHZvaWQg
KnB0ciA9IGN0Ow0KKwljb25zdCBVMTYgKnUxNnB0ciA9IChjb25zdCBVMTYgKilwdHI7DQorCWNv
bnN0IFUzMiB0YWJsZUxvZyA9IFpTVERfcmVhZDE2KHB0cik7DQorCXN0YXRlUHRyLT52YWx1ZSA9
IChwdHJkaWZmX3QpMSA8PCB0YWJsZUxvZzsNCisJc3RhdGVQdHItPnN0YXRlVGFibGUgPSB1MTZw
dHIgKyAyOw0KKwlzdGF0ZVB0ci0+c3ltYm9sVFQgPSAoKGNvbnN0IFUzMiAqKWN0ICsgMSArICh0
YWJsZUxvZyA/ICgxIDw8ICh0YWJsZUxvZyAtIDEpKSA6IDEpKTsNCisJc3RhdGVQdHItPnN0YXRl
TG9nID0gdGFibGVMb2c7DQorfQ0KKw0KKy8qISBGU0VfaW5pdENTdGF0ZTIoKSA6DQorKiAgIFNh
bWUgYXMgRlNFX2luaXRDU3RhdGUoKSwgYnV0IHRoZSBmaXJzdCBzeW1ib2wgdG8gaW5jbHVkZSAo
d2hpY2ggd2lsbCBiZSB0aGUgbGFzdCB0byBiZSByZWFkKQ0KKyogICB1c2VzIHRoZSBzbWFsbGVz
dCBzdGF0ZSB2YWx1ZSBwb3NzaWJsZSwgc2F2aW5nIHRoZSBjb3N0IG9mIHRoaXMgc3ltYm9sICov
DQorWlNURF9TVEFUSUMgdm9pZCBGU0VfaW5pdENTdGF0ZTIoRlNFX0NTdGF0ZV90ICpzdGF0ZVB0
ciwgY29uc3QgRlNFX0NUYWJsZSAqY3QsIFUzMiBzeW1ib2wpDQorew0KKwlGU0VfaW5pdENTdGF0
ZShzdGF0ZVB0ciwgY3QpOw0KKwl7DQorCQljb25zdCBGU0Vfc3ltYm9sQ29tcHJlc3Npb25UcmFu
c2Zvcm0gc3ltYm9sVFQgPSAoKGNvbnN0IEZTRV9zeW1ib2xDb21wcmVzc2lvblRyYW5zZm9ybSAq
KShzdGF0ZVB0ci0+c3ltYm9sVFQpKVtzeW1ib2xdOw0KKwkJY29uc3QgVTE2ICpzdGF0ZVRhYmxl
ID0gKGNvbnN0IFUxNiAqKShzdGF0ZVB0ci0+c3RhdGVUYWJsZSk7DQorCQlVMzIgbmJCaXRzT3V0
ID0gKFUzMikoKHN5bWJvbFRULmRlbHRhTmJCaXRzICsgKDEgPDwgMTUpKSA+PiAxNik7DQorCQlz
dGF0ZVB0ci0+dmFsdWUgPSAobmJCaXRzT3V0IDw8IDE2KSAtIHN5bWJvbFRULmRlbHRhTmJCaXRz
Ow0KKwkJc3RhdGVQdHItPnZhbHVlID0gc3RhdGVUYWJsZVsoc3RhdGVQdHItPnZhbHVlID4+IG5i
Qml0c091dCkgKyBzeW1ib2xUVC5kZWx0YUZpbmRTdGF0ZV07DQorCX0NCit9DQorDQorWlNURF9T
VEFUSUMgdm9pZCBGU0VfZW5jb2RlU3ltYm9sKEJJVF9DU3RyZWFtX3QgKmJpdEMsIEZTRV9DU3Rh
dGVfdCAqc3RhdGVQdHIsIFUzMiBzeW1ib2wpDQorew0KKwljb25zdCBGU0Vfc3ltYm9sQ29tcHJl
c3Npb25UcmFuc2Zvcm0gc3ltYm9sVFQgPSAoKGNvbnN0IEZTRV9zeW1ib2xDb21wcmVzc2lvblRy
YW5zZm9ybSAqKShzdGF0ZVB0ci0+c3ltYm9sVFQpKVtzeW1ib2xdOw0KKwljb25zdCBVMTYgKmNv
bnN0IHN0YXRlVGFibGUgPSAoY29uc3QgVTE2ICopKHN0YXRlUHRyLT5zdGF0ZVRhYmxlKTsNCisJ
VTMyIG5iQml0c091dCA9IChVMzIpKChzdGF0ZVB0ci0+dmFsdWUgKyBzeW1ib2xUVC5kZWx0YU5i
Qml0cykgPj4gMTYpOw0KKwlCSVRfYWRkQml0cyhiaXRDLCBzdGF0ZVB0ci0+dmFsdWUsIG5iQml0
c091dCk7DQorCXN0YXRlUHRyLT52YWx1ZSA9IHN0YXRlVGFibGVbKHN0YXRlUHRyLT52YWx1ZSA+
PiBuYkJpdHNPdXQpICsgc3ltYm9sVFQuZGVsdGFGaW5kU3RhdGVdOw0KK30NCisNCitaU1REX1NU
QVRJQyB2b2lkIEZTRV9mbHVzaENTdGF0ZShCSVRfQ1N0cmVhbV90ICpiaXRDLCBjb25zdCBGU0Vf
Q1N0YXRlX3QgKnN0YXRlUHRyKQ0KK3sNCisJQklUX2FkZEJpdHMoYml0Qywgc3RhdGVQdHItPnZh
bHVlLCBzdGF0ZVB0ci0+c3RhdGVMb2cpOw0KKwlCSVRfZmx1c2hCaXRzKGJpdEMpOw0KK30NCisN
CisvKiA9PT09PT0gICAgRGVjb21wcmVzc2lvbiAgICA9PT09PT0gKi8NCisNCit0eXBlZGVmIHN0
cnVjdCB7DQorCVUxNiB0YWJsZUxvZzsNCisJVTE2IGZhc3RNb2RlOw0KK30gRlNFX0RUYWJsZUhl
YWRlcjsgLyogc2l6ZW9mIFUzMiAqLw0KKw0KK3R5cGVkZWYgc3RydWN0IHsNCisJdW5zaWduZWQg
c2hvcnQgbmV3U3RhdGU7DQorCXVuc2lnbmVkIGNoYXIgc3ltYm9sOw0KKwl1bnNpZ25lZCBjaGFy
IG5iQml0czsNCit9IEZTRV9kZWNvZGVfdDsgLyogc2l6ZSA9PSBVMzIgKi8NCisNCitaU1REX1NU
QVRJQyB2b2lkIEZTRV9pbml0RFN0YXRlKEZTRV9EU3RhdGVfdCAqRFN0YXRlUHRyLCBCSVRfRFN0
cmVhbV90ICpiaXRELCBjb25zdCBGU0VfRFRhYmxlICpkdCkNCit7DQorCWNvbnN0IHZvaWQgKnB0
ciA9IGR0Ow0KKwljb25zdCBGU0VfRFRhYmxlSGVhZGVyICpjb25zdCBEVGFibGVIID0gKGNvbnN0
IEZTRV9EVGFibGVIZWFkZXIgKilwdHI7DQorCURTdGF0ZVB0ci0+c3RhdGUgPSBCSVRfcmVhZEJp
dHMoYml0RCwgRFRhYmxlSC0+dGFibGVMb2cpOw0KKwlCSVRfcmVsb2FkRFN0cmVhbShiaXREKTsN
CisJRFN0YXRlUHRyLT50YWJsZSA9IGR0ICsgMTsNCit9DQorDQorWlNURF9TVEFUSUMgQllURSBG
U0VfcGVla1N5bWJvbChjb25zdCBGU0VfRFN0YXRlX3QgKkRTdGF0ZVB0cikNCit7DQorCUZTRV9k
ZWNvZGVfdCBjb25zdCBESW5mbyA9ICgoY29uc3QgRlNFX2RlY29kZV90ICopKERTdGF0ZVB0ci0+
dGFibGUpKVtEU3RhdGVQdHItPnN0YXRlXTsNCisJcmV0dXJuIERJbmZvLnN5bWJvbDsNCit9DQor
DQorWlNURF9TVEFUSUMgdm9pZCBGU0VfdXBkYXRlU3RhdGUoRlNFX0RTdGF0ZV90ICpEU3RhdGVQ
dHIsIEJJVF9EU3RyZWFtX3QgKmJpdEQpDQorew0KKwlGU0VfZGVjb2RlX3QgY29uc3QgREluZm8g
PSAoKGNvbnN0IEZTRV9kZWNvZGVfdCAqKShEU3RhdGVQdHItPnRhYmxlKSlbRFN0YXRlUHRyLT5z
dGF0ZV07DQorCVUzMiBjb25zdCBuYkJpdHMgPSBESW5mby5uYkJpdHM7DQorCXNpemVfdCBjb25z
dCBsb3dCaXRzID0gQklUX3JlYWRCaXRzKGJpdEQsIG5iQml0cyk7DQorCURTdGF0ZVB0ci0+c3Rh
dGUgPSBESW5mby5uZXdTdGF0ZSArIGxvd0JpdHM7DQorfQ0KKw0KK1pTVERfU1RBVElDIEJZVEUg
RlNFX2RlY29kZVN5bWJvbChGU0VfRFN0YXRlX3QgKkRTdGF0ZVB0ciwgQklUX0RTdHJlYW1fdCAq
Yml0RCkNCit7DQorCUZTRV9kZWNvZGVfdCBjb25zdCBESW5mbyA9ICgoY29uc3QgRlNFX2RlY29k
ZV90ICopKERTdGF0ZVB0ci0+dGFibGUpKVtEU3RhdGVQdHItPnN0YXRlXTsNCisJVTMyIGNvbnN0
IG5iQml0cyA9IERJbmZvLm5iQml0czsNCisJQllURSBjb25zdCBzeW1ib2wgPSBESW5mby5zeW1i
b2w7DQorCXNpemVfdCBjb25zdCBsb3dCaXRzID0gQklUX3JlYWRCaXRzKGJpdEQsIG5iQml0cyk7
DQorDQorCURTdGF0ZVB0ci0+c3RhdGUgPSBESW5mby5uZXdTdGF0ZSArIGxvd0JpdHM7DQorCXJl
dHVybiBzeW1ib2w7DQorfQ0KKw0KKy8qISBGU0VfZGVjb2RlU3ltYm9sRmFzdCgpIDoNCisJdW5z
YWZlLCBvbmx5IHdvcmtzIGlmIG5vIHN5bWJvbCBoYXMgYSBwcm9iYWJpbGl0eSA+IDUwJSAqLw0K
K1pTVERfU1RBVElDIEJZVEUgRlNFX2RlY29kZVN5bWJvbEZhc3QoRlNFX0RTdGF0ZV90ICpEU3Rh
dGVQdHIsIEJJVF9EU3RyZWFtX3QgKmJpdEQpDQorew0KKwlGU0VfZGVjb2RlX3QgY29uc3QgRElu
Zm8gPSAoKGNvbnN0IEZTRV9kZWNvZGVfdCAqKShEU3RhdGVQdHItPnRhYmxlKSlbRFN0YXRlUHRy
LT5zdGF0ZV07DQorCVUzMiBjb25zdCBuYkJpdHMgPSBESW5mby5uYkJpdHM7DQorCUJZVEUgY29u
c3Qgc3ltYm9sID0gREluZm8uc3ltYm9sOw0KKwlzaXplX3QgY29uc3QgbG93Qml0cyA9IEJJVF9y
ZWFkQml0c0Zhc3QoYml0RCwgbmJCaXRzKTsNCisNCisJRFN0YXRlUHRyLT5zdGF0ZSA9IERJbmZv
Lm5ld1N0YXRlICsgbG93Qml0czsNCisJcmV0dXJuIHN5bWJvbDsNCit9DQorDQorWlNURF9TVEFU
SUMgdW5zaWduZWQgRlNFX2VuZE9mRFN0YXRlKGNvbnN0IEZTRV9EU3RhdGVfdCAqRFN0YXRlUHRy
KSB7IHJldHVybiBEU3RhdGVQdHItPnN0YXRlID09IDA7IH0NCisNCisvKiAqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIFR1
bmluZyBwYXJhbWV0ZXJzDQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKiFNRU1PUllfVVNBR0UgOg0KKyogIE1lbW9y
eSB1c2FnZSBmb3JtdWxhIDogTi0+Ml5OIEJ5dGVzIChleGFtcGxlcyA6IDEwIC0+IDFLQjsgMTIg
LT4gNEtCIDsgMTYgLT4gNjRLQjsgMjAgLT4gMU1COyBldGMuKQ0KKyogIEluY3JlYXNpbmcgbWVt
b3J5IHVzYWdlIGltcHJvdmVzIGNvbXByZXNzaW9uIHJhdGlvDQorKiAgUmVkdWNlZCBtZW1vcnkg
dXNhZ2UgY2FuIGltcHJvdmUgc3BlZWQsIGR1ZSB0byBjYWNoZSBlZmZlY3QNCisqICBSZWNvbW1l
bmRlZCBtYXggdmFsdWUgaXMgMTQsIGZvciAxNktCLCB3aGljaCBuaWNlbHkgZml0cyBpbnRvIElu
dGVsIHg4NiBMMSBjYWNoZSAqLw0KKyNpZm5kZWYgRlNFX01BWF9NRU1PUllfVVNBR0UNCisjZGVm
aW5lIEZTRV9NQVhfTUVNT1JZX1VTQUdFIDE0DQorI2VuZGlmDQorI2lmbmRlZiBGU0VfREVGQVVM
VF9NRU1PUllfVVNBR0UNCisjZGVmaW5lIEZTRV9ERUZBVUxUX01FTU9SWV9VU0FHRSAxMw0KKyNl
bmRpZg0KKw0KKy8qIUZTRV9NQVhfU1lNQk9MX1ZBTFVFIDoNCisqICBNYXhpbXVtIHN5bWJvbCB2
YWx1ZSBhdXRob3JpemVkLg0KKyogIFJlcXVpcmVkIGZvciBwcm9wZXIgc3RhY2sgYWxsb2NhdGlv
biAqLw0KKyNpZm5kZWYgRlNFX01BWF9TWU1CT0xfVkFMVUUNCisjZGVmaW5lIEZTRV9NQVhfU1lN
Qk9MX1ZBTFVFIDI1NQ0KKyNlbmRpZg0KKw0KKy8qICoqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgdGVtcGxhdGUgZnVuY3Rp
b25zIHR5cGUgJiBzdWZmaXgNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKyNkZWZpbmUgRlNFX0ZVTkNUSU9OX1RZUEUg
QllURQ0KKyNkZWZpbmUgRlNFX0ZVTkNUSU9OX0VYVEVOU0lPTg0KKyNkZWZpbmUgRlNFX0RFQ09E
RV9UWVBFIEZTRV9kZWNvZGVfdA0KKw0KKy8qICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIENvbnN0YW50cw0KKyoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqLw0KKyNkZWZpbmUgRlNFX01BWF9UQUJMRUxPRyAoRlNFX01BWF9NRU1PUllfVVNBR0UgLSAy
KQ0KKyNkZWZpbmUgRlNFX01BWF9UQUJMRVNJWkUgKDFVIDw8IEZTRV9NQVhfVEFCTEVMT0cpDQor
I2RlZmluZSBGU0VfTUFYVEFCTEVTSVpFX01BU0sgKEZTRV9NQVhfVEFCTEVTSVpFIC0gMSkNCisj
ZGVmaW5lIEZTRV9ERUZBVUxUX1RBQkxFTE9HIChGU0VfREVGQVVMVF9NRU1PUllfVVNBR0UgLSAy
KQ0KKyNkZWZpbmUgRlNFX01JTl9UQUJMRUxPRyA1DQorDQorI2RlZmluZSBGU0VfVEFCTEVMT0df
QUJTT0xVVEVfTUFYIDE1DQorI2lmIEZTRV9NQVhfVEFCTEVMT0cgPiBGU0VfVEFCTEVMT0dfQUJT
T0xVVEVfTUFYDQorI2Vycm9yICJGU0VfTUFYX1RBQkxFTE9HID4gRlNFX1RBQkxFTE9HX0FCU09M
VVRFX01BWCBpcyBub3Qgc3VwcG9ydGVkIg0KKyNlbmRpZg0KKw0KKyNkZWZpbmUgRlNFX1RBQkxF
U1RFUCh0YWJsZVNpemUpICgodGFibGVTaXplID4+IDEpICsgKHRhYmxlU2l6ZSA+PiAzKSArIDMp
DQorDQorI2VuZGlmIC8qIEZTRV9IICovDQpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi96c3RkL2Zz
ZV9kZWNvbXByZXNzLmMgYi94ZW4vY29tbW9uL3pzdGQvZnNlX2RlY29tcHJlc3MuYw0KbmV3IGZp
bGUgbW9kZSAxMDA2NDQNCmluZGV4IDAwMDAwMDAwMDAuLjBiMzUzNTMwZmINCi0tLSAvZGV2L251
bGwNCisrKyBiL3hlbi9jb21tb24venN0ZC9mc2VfZGVjb21wcmVzcy5jDQpAQCAtMCwwICsxLDMy
NSBAQA0KKy8qDQorICogRlNFIDogRmluaXRlIFN0YXRlIEVudHJvcHkgZGVjb2Rlcg0KKyAqIENv
cHlyaWdodCAoQykgMjAxMy0yMDE1LCBZYW5uIENvbGxldC4NCisgKg0KKyAqIEJTRCAyLUNsYXVz
ZSBMaWNlbnNlIChodHRwOi8vd3d3Lm9wZW5zb3VyY2Uub3JnL2xpY2Vuc2VzL2JzZC1saWNlbnNl
LnBocCkNCisgKg0KKyAqIFJlZGlzdHJpYnV0aW9uIGFuZCB1c2UgaW4gc291cmNlIGFuZCBiaW5h
cnkgZm9ybXMsIHdpdGggb3Igd2l0aG91dA0KKyAqIG1vZGlmaWNhdGlvbiwgYXJlIHBlcm1pdHRl
ZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucyBhcmUNCisgKiBtZXQ6DQor
ICoNCisgKiAgICogUmVkaXN0cmlidXRpb25zIG9mIHNvdXJjZSBjb2RlIG11c3QgcmV0YWluIHRo
ZSBhYm92ZSBjb3B5cmlnaHQNCisgKiBub3RpY2UsIHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFu
ZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIuDQorICogICAqIFJlZGlzdHJpYnV0aW9ucyBpbiBi
aW5hcnkgZm9ybSBtdXN0IHJlcHJvZHVjZSB0aGUgYWJvdmUNCisgKiBjb3B5cmlnaHQgbm90aWNl
LCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyDQor
ICogaW4gdGhlIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3
aXRoIHRoZQ0KKyAqIGRpc3RyaWJ1dGlvbi4NCisgKg0KKyAqIFRISVMgU09GVFdBUkUgSVMgUFJP
VklERUQgQlkgVEhFIENPUFlSSUdIVCBIT0xERVJTIEFORCBDT05UUklCVVRPUlMNCisgKiAiQVMg
SVMiIEFORCBBTlkgRVhQUkVTUyBPUiBJTVBMSUVEIFdBUlJBTlRJRVMsIElOQ0xVRElORywgQlVU
IE5PVA0KKyAqIExJTUlURUQgVE8sIFRIRSBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRB
QklMSVRZIEFORCBGSVRORVNTIEZPUg0KKyAqIEEgUEFSVElDVUxBUiBQVVJQT1NFIEFSRSBESVND
TEFJTUVELiBJTiBOTyBFVkVOVCBTSEFMTCBUSEUgQ09QWVJJR0hUDQorICogT1dORVIgT1IgQ09O
VFJJQlVUT1JTIEJFIExJQUJMRSBGT1IgQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lERU5UQUws
DQorICogU1BFQ0lBTCwgRVhFTVBMQVJZLCBPUiBDT05TRVFVRU5USUFMIERBTUFHRVMgKElOQ0xV
RElORywgQlVUIE5PVA0KKyAqIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUg
R09PRFMgT1IgU0VSVklDRVM7IExPU1MgT0YgVVNFLA0KKyAqIERBVEEsIE9SIFBST0ZJVFM7IE9S
IEJVU0lORVNTIElOVEVSUlVQVElPTikgSE9XRVZFUiBDQVVTRUQgQU5EIE9OIEFOWQ0KKyAqIFRI
RU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIgSU4gQ09OVFJBQ1QsIFNUUklDVCBMSUFCSUxJVFks
IE9SIFRPUlQNCisgKiAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5H
IElOIEFOWSBXQVkgT1VUIE9GIFRIRSBVU0UNCisgKiBPRiBUSElTIFNPRlRXQVJFLCBFVkVOIElG
IEFEVklTRUQgT0YgVEhFIFBPU1NJQklMSVRZIE9GIFNVQ0ggREFNQUdFLg0KKyAqDQorICogVGhp
cyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9v
ciBtb2RpZnkgaXQgdW5kZXINCisgKiB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1Ymxp
YyBMaWNlbnNlIHZlcnNpb24gMiBhcyBwdWJsaXNoZWQgYnkgdGhlDQorICogRnJlZSBTb2Z0d2Fy
ZSBGb3VuZGF0aW9uLiBUaGlzIHByb2dyYW0gaXMgZHVhbC1saWNlbnNlZDsgeW91IG1heSBzZWxl
Y3QNCisgKiBlaXRoZXIgdmVyc2lvbiAyIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5z
ZSAoIkdQTCIpIG9yIEJTRCBsaWNlbnNlDQorICogKCJCU0QiKS4NCisgKg0KKyAqIFlvdSBjYW4g
Y29udGFjdCB0aGUgYXV0aG9yIGF0IDoNCisgKiAtIFNvdXJjZSByZXBvc2l0b3J5IDogaHR0cHM6
Ly9naXRodWIuY29tL0N5YW40OTczL0Zpbml0ZVN0YXRlRW50cm9weQ0KKyAqLw0KKw0KKy8qICoq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqDQorKiAgQ29tcGlsZXIgc3BlY2lmaWNzDQorKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVmaW5lIEZPUkNFX0lO
TElORSBzdGF0aWMgX19hbHdheXNfaW5saW5lDQorDQorLyogKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisqICBJbmNsdWRlcw0K
KyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKiovDQorI2luY2x1ZGUgImJpdHN0cmVhbS5oIg0KKyNpbmNsdWRlICJmc2UuaCINCisj
aW5jbHVkZSAienN0ZF9pbnRlcm5hbC5oIg0KKyNpbmNsdWRlIDxsaW51eC9jb21waWxlci5oPg0K
KyNpbmNsdWRlIDxsaW51eC9rZXJuZWwuaD4NCisjaW5jbHVkZSA8bGludXgvc3RyaW5nLmg+IC8q
IG1lbWNweSwgbWVtc2V0ICovDQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisqICBFcnJvciBNYW5hZ2VtZW50DQor
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKi8NCisjZGVmaW5lIEZTRV9pc0Vycm9yIEVSUl9pc0Vycm9yDQorI2RlZmluZSBGU0Vf
U1RBVElDX0FTU0VSVChjKSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KKwl7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0K
KwkJZW51bSB7IEZTRV9zdGF0aWNfYXNzZXJ0ID0gMSAvIChpbnQpKCEhKGMpKSB9OyBcDQorCX0g
LyogdXNlIG9ubHkgKmFmdGVyKiB2YXJpYWJsZSBkZWNsYXJhdGlvbnMgKi8NCisNCisvKiAqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Kg0KKyogIFRlbXBsYXRlcw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKiovDQorLyoNCisgIGRlc2lnbmVkIHRvIGJlIGluY2x1
ZGVkDQorICBmb3IgdHlwZS1zcGVjaWZpYyBmdW5jdGlvbnMgKHRlbXBsYXRlIGVtdWxhdGlvbiBp
biBDKQ0KKyAgT2JqZWN0aXZlIGlzIHRvIHdyaXRlIHRoZXNlIGZ1bmN0aW9ucyBvbmx5IG9uY2Us
IGZvciBpbXByb3ZlZCBtYWludGVuYW5jZQ0KKyovDQorDQorLyogc2FmZXR5IGNoZWNrcyAqLw0K
KyNpZm5kZWYgRlNFX0ZVTkNUSU9OX0VYVEVOU0lPTg0KKyNlcnJvciAiRlNFX0ZVTkNUSU9OX0VY
VEVOU0lPTiBtdXN0IGJlIGRlZmluZWQiDQorI2VuZGlmDQorI2lmbmRlZiBGU0VfRlVOQ1RJT05f
VFlQRQ0KKyNlcnJvciAiRlNFX0ZVTkNUSU9OX1RZUEUgbXVzdCBiZSBkZWZpbmVkIg0KKyNlbmRp
Zg0KKw0KKy8qIEZ1bmN0aW9uIG5hbWVzICovDQorI2RlZmluZSBGU0VfQ0FUKFgsIFkpIFgjI1kN
CisjZGVmaW5lIEZTRV9GVU5DVElPTl9OQU1FKFgsIFkpIEZTRV9DQVQoWCwgWSkNCisjZGVmaW5l
IEZTRV9UWVBFX05BTUUoWCwgWSkgRlNFX0NBVChYLCBZKQ0KKw0KKy8qIEZ1bmN0aW9uIHRlbXBs
YXRlcyAqLw0KKw0KK3NpemVfdCBGU0VfYnVpbGREVGFibGVfd2tzcChGU0VfRFRhYmxlICpkdCwg
Y29uc3Qgc2hvcnQgKm5vcm1hbGl6ZWRDb3VudGVyLCB1bnNpZ25lZCBtYXhTeW1ib2xWYWx1ZSwg
dW5zaWduZWQgdGFibGVMb2csIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUp
DQorew0KKwl2b2lkICpjb25zdCB0ZFB0ciA9IGR0ICsgMTsgLyogYmVjYXVzZSAqZHQgaXMgdW5z
aWduZWQsIDMyLWJpdHMgYWxpZ25lZCBvbiAzMi1iaXRzICovDQorCUZTRV9ERUNPREVfVFlQRSAq
Y29uc3QgdGFibGVEZWNvZGUgPSAoRlNFX0RFQ09ERV9UWVBFICopKHRkUHRyKTsNCisJVTE2ICpz
eW1ib2xOZXh0ID0gKFUxNiAqKXdvcmtzcGFjZTsNCisNCisJVTMyIGNvbnN0IG1heFNWMSA9IG1h
eFN5bWJvbFZhbHVlICsgMTsNCisJVTMyIGNvbnN0IHRhYmxlU2l6ZSA9IDEgPDwgdGFibGVMb2c7
DQorCVUzMiBoaWdoVGhyZXNob2xkID0gdGFibGVTaXplIC0gMTsNCisNCisJLyogU2FuaXR5IENo
ZWNrcyAqLw0KKwlpZiAod29ya3NwYWNlU2l6ZSA8IHNpemVvZihVMTYpICogKEZTRV9NQVhfU1lN
Qk9MX1ZBTFVFICsgMSkpDQorCQlyZXR1cm4gRVJST1IodGFibGVMb2dfdG9vTGFyZ2UpOw0KKwlp
ZiAobWF4U3ltYm9sVmFsdWUgPiBGU0VfTUFYX1NZTUJPTF9WQUxVRSkNCisJCXJldHVybiBFUlJP
UihtYXhTeW1ib2xWYWx1ZV90b29MYXJnZSk7DQorCWlmICh0YWJsZUxvZyA+IEZTRV9NQVhfVEFC
TEVMT0cpDQorCQlyZXR1cm4gRVJST1IodGFibGVMb2dfdG9vTGFyZ2UpOw0KKw0KKwkvKiBJbml0
LCBsYXkgZG93biBsb3dwcm9iIHN5bWJvbHMgKi8NCisJew0KKwkJRlNFX0RUYWJsZUhlYWRlciBE
VGFibGVIOw0KKwkJRFRhYmxlSC50YWJsZUxvZyA9IChVMTYpdGFibGVMb2c7DQorCQlEVGFibGVI
LmZhc3RNb2RlID0gMTsNCisJCXsNCisJCQlTMTYgY29uc3QgbGFyZ2VMaW1pdCA9IChTMTYpKDEg
PDwgKHRhYmxlTG9nIC0gMSkpOw0KKwkJCVUzMiBzOw0KKwkJCWZvciAocyA9IDA7IHMgPCBtYXhT
VjE7IHMrKykgew0KKwkJCQlpZiAobm9ybWFsaXplZENvdW50ZXJbc10gPT0gLTEpIHsNCisJCQkJ
CXRhYmxlRGVjb2RlW2hpZ2hUaHJlc2hvbGQtLV0uc3ltYm9sID0gKEZTRV9GVU5DVElPTl9UWVBF
KXM7DQorCQkJCQlzeW1ib2xOZXh0W3NdID0gMTsNCisJCQkJfSBlbHNlIHsNCisJCQkJCWlmIChu
b3JtYWxpemVkQ291bnRlcltzXSA+PSBsYXJnZUxpbWl0KQ0KKwkJCQkJCURUYWJsZUguZmFzdE1v
ZGUgPSAwOw0KKwkJCQkJc3ltYm9sTmV4dFtzXSA9IG5vcm1hbGl6ZWRDb3VudGVyW3NdOw0KKwkJ
CQl9DQorCQkJfQ0KKwkJfQ0KKwkJbWVtY3B5KGR0LCAmRFRhYmxlSCwgc2l6ZW9mKERUYWJsZUgp
KTsNCisJfQ0KKw0KKwkvKiBTcHJlYWQgc3ltYm9scyAqLw0KKwl7DQorCQlVMzIgY29uc3QgdGFi
bGVNYXNrID0gdGFibGVTaXplIC0gMTsNCisJCVUzMiBjb25zdCBzdGVwID0gRlNFX1RBQkxFU1RF
UCh0YWJsZVNpemUpOw0KKwkJVTMyIHMsIHBvc2l0aW9uID0gMDsNCisJCWZvciAocyA9IDA7IHMg
PCBtYXhTVjE7IHMrKykgew0KKwkJCWludCBpOw0KKwkJCWZvciAoaSA9IDA7IGkgPCBub3JtYWxp
emVkQ291bnRlcltzXTsgaSsrKSB7DQorCQkJCXRhYmxlRGVjb2RlW3Bvc2l0aW9uXS5zeW1ib2wg
PSAoRlNFX0ZVTkNUSU9OX1RZUEUpczsNCisJCQkJcG9zaXRpb24gPSAocG9zaXRpb24gKyBzdGVw
KSAmIHRhYmxlTWFzazsNCisJCQkJd2hpbGUgKHBvc2l0aW9uID4gaGlnaFRocmVzaG9sZCkNCisJ
CQkJCXBvc2l0aW9uID0gKHBvc2l0aW9uICsgc3RlcCkgJiB0YWJsZU1hc2s7IC8qIGxvd3Byb2Ig
YXJlYSAqLw0KKwkJCX0NCisJCX0NCisJCWlmIChwb3NpdGlvbiAhPSAwKQ0KKwkJCXJldHVybiBF
UlJPUihHRU5FUklDKTsgLyogcG9zaXRpb24gbXVzdCByZWFjaCBhbGwgY2VsbHMgb25jZSwgb3Ro
ZXJ3aXNlIG5vcm1hbGl6ZWRDb3VudGVyIGlzIGluY29ycmVjdCAqLw0KKwl9DQorDQorCS8qIEJ1
aWxkIERlY29kaW5nIHRhYmxlICovDQorCXsNCisJCVUzMiB1Ow0KKwkJZm9yICh1ID0gMDsgdSA8
IHRhYmxlU2l6ZTsgdSsrKSB7DQorCQkJRlNFX0ZVTkNUSU9OX1RZUEUgY29uc3Qgc3ltYm9sID0g
KEZTRV9GVU5DVElPTl9UWVBFKSh0YWJsZURlY29kZVt1XS5zeW1ib2wpOw0KKwkJCVUxNiBuZXh0
U3RhdGUgPSBzeW1ib2xOZXh0W3N5bWJvbF0rKzsNCisJCQl0YWJsZURlY29kZVt1XS5uYkJpdHMg
PSAoQllURSkodGFibGVMb2cgLSBCSVRfaGlnaGJpdDMyKChVMzIpbmV4dFN0YXRlKSk7DQorCQkJ
dGFibGVEZWNvZGVbdV0ubmV3U3RhdGUgPSAoVTE2KSgobmV4dFN0YXRlIDw8IHRhYmxlRGVjb2Rl
W3VdLm5iQml0cykgLSB0YWJsZVNpemUpOw0KKwkJfQ0KKwl9DQorDQorCXJldHVybiAwOw0KK30N
CisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqDQorKiAgRGVjb21wcmVzc2lvbiAoQnl0ZSBzeW1ib2xzKQ0KKyoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCitzaXplX3QgRlNF
X2J1aWxkRFRhYmxlX3JsZShGU0VfRFRhYmxlICpkdCwgQllURSBzeW1ib2xWYWx1ZSkNCit7DQor
CXZvaWQgKnB0ciA9IGR0Ow0KKwlGU0VfRFRhYmxlSGVhZGVyICpjb25zdCBEVGFibGVIID0gKEZT
RV9EVGFibGVIZWFkZXIgKilwdHI7DQorCXZvaWQgKmRQdHIgPSBkdCArIDE7DQorCUZTRV9kZWNv
ZGVfdCAqY29uc3QgY2VsbCA9IChGU0VfZGVjb2RlX3QgKilkUHRyOw0KKw0KKwlEVGFibGVILT50
YWJsZUxvZyA9IDA7DQorCURUYWJsZUgtPmZhc3RNb2RlID0gMDsNCisNCisJY2VsbC0+bmV3U3Rh
dGUgPSAwOw0KKwljZWxsLT5zeW1ib2wgPSBzeW1ib2xWYWx1ZTsNCisJY2VsbC0+bmJCaXRzID0g
MDsNCisNCisJcmV0dXJuIDA7DQorfQ0KKw0KK3NpemVfdCBGU0VfYnVpbGREVGFibGVfcmF3KEZT
RV9EVGFibGUgKmR0LCB1bnNpZ25lZCBuYkJpdHMpDQorew0KKwl2b2lkICpwdHIgPSBkdDsNCisJ
RlNFX0RUYWJsZUhlYWRlciAqY29uc3QgRFRhYmxlSCA9IChGU0VfRFRhYmxlSGVhZGVyICopcHRy
Ow0KKwl2b2lkICpkUHRyID0gZHQgKyAxOw0KKwlGU0VfZGVjb2RlX3QgKmNvbnN0IGRpbmZvID0g
KEZTRV9kZWNvZGVfdCAqKWRQdHI7DQorCWNvbnN0IHVuc2lnbmVkIHRhYmxlU2l6ZSA9IDEgPDwg
bmJCaXRzOw0KKwljb25zdCB1bnNpZ25lZCB0YWJsZU1hc2sgPSB0YWJsZVNpemUgLSAxOw0KKwlj
b25zdCB1bnNpZ25lZCBtYXhTVjEgPSB0YWJsZU1hc2sgKyAxOw0KKwl1bnNpZ25lZCBzOw0KKw0K
KwkvKiBTYW5pdHkgY2hlY2tzICovDQorCWlmIChuYkJpdHMgPCAxKQ0KKwkJcmV0dXJuIEVSUk9S
KEdFTkVSSUMpOyAvKiBtaW4gc2l6ZSAqLw0KKw0KKwkvKiBCdWlsZCBEZWNvZGluZyBUYWJsZSAq
Lw0KKwlEVGFibGVILT50YWJsZUxvZyA9IChVMTYpbmJCaXRzOw0KKwlEVGFibGVILT5mYXN0TW9k
ZSA9IDE7DQorCWZvciAocyA9IDA7IHMgPCBtYXhTVjE7IHMrKykgew0KKwkJZGluZm9bc10ubmV3
U3RhdGUgPSAwOw0KKwkJZGluZm9bc10uc3ltYm9sID0gKEJZVEUpczsNCisJCWRpbmZvW3NdLm5i
Qml0cyA9IChCWVRFKW5iQml0czsNCisJfQ0KKw0KKwlyZXR1cm4gMDsNCit9DQorDQorRk9SQ0Vf
SU5MSU5FIHNpemVfdCBGU0VfZGVjb21wcmVzc191c2luZ0RUYWJsZV9nZW5lcmljKHZvaWQgKmRz
dCwgc2l6ZV90IG1heERzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwg
Y29uc3QgRlNFX0RUYWJsZSAqZHQsDQorCQkJCQkJICAgICAgIGNvbnN0IHVuc2lnbmVkIGZhc3Qp
DQorew0KKwlCWVRFICpjb25zdCBvc3RhcnQgPSAoQllURSAqKWRzdDsNCisJQllURSAqb3AgPSBv
c3RhcnQ7DQorCUJZVEUgKmNvbnN0IG9tYXggPSBvcCArIG1heERzdFNpemU7DQorCUJZVEUgKmNv
bnN0IG9saW1pdCA9IG9tYXggLSAzOw0KKw0KKwlCSVRfRFN0cmVhbV90IGJpdEQ7DQorCUZTRV9E
U3RhdGVfdCBzdGF0ZTE7DQorCUZTRV9EU3RhdGVfdCBzdGF0ZTI7DQorDQorCS8qIEluaXQgKi8N
CisJQ0hFQ0tfRihCSVRfaW5pdERTdHJlYW0oJmJpdEQsIGNTcmMsIGNTcmNTaXplKSk7DQorDQor
CUZTRV9pbml0RFN0YXRlKCZzdGF0ZTEsICZiaXRELCBkdCk7DQorCUZTRV9pbml0RFN0YXRlKCZz
dGF0ZTIsICZiaXRELCBkdCk7DQorDQorI2RlZmluZSBGU0VfR0VUU1lNQk9MKHN0YXRlUHRyKSBm
YXN0ID8gRlNFX2RlY29kZVN5bWJvbEZhc3Qoc3RhdGVQdHIsICZiaXREKSA6IEZTRV9kZWNvZGVT
eW1ib2woc3RhdGVQdHIsICZiaXREKQ0KKw0KKwkvKiA0IHN5bWJvbHMgcGVyIGxvb3AgKi8NCisJ
Zm9yICg7IChCSVRfcmVsb2FkRFN0cmVhbSgmYml0RCkgPT0gQklUX0RTdHJlYW1fdW5maW5pc2hl
ZCkgJiAob3AgPCBvbGltaXQpOyBvcCArPSA0KSB7DQorCQlvcFswXSA9IEZTRV9HRVRTWU1CT0wo
JnN0YXRlMSk7DQorDQorCQlpZiAoRlNFX01BWF9UQUJMRUxPRyAqIDIgKyA3ID4gc2l6ZW9mKGJp
dEQuYml0Q29udGFpbmVyKSAqIDgpIC8qIFRoaXMgdGVzdCBtdXN0IGJlIHN0YXRpYyAqLw0KKwkJ
CUJJVF9yZWxvYWREU3RyZWFtKCZiaXREKTsNCisNCisJCW9wWzFdID0gRlNFX0dFVFNZTUJPTCgm
c3RhdGUyKTsNCisNCisJCWlmIChGU0VfTUFYX1RBQkxFTE9HICogNCArIDcgPiBzaXplb2YoYml0
RC5iaXRDb250YWluZXIpICogOCkgLyogVGhpcyB0ZXN0IG11c3QgYmUgc3RhdGljICovDQorCQl7
DQorCQkJaWYgKEJJVF9yZWxvYWREU3RyZWFtKCZiaXREKSA+IEJJVF9EU3RyZWFtX3VuZmluaXNo
ZWQpIHsNCisJCQkJb3AgKz0gMjsNCisJCQkJYnJlYWs7DQorCQkJfQ0KKwkJfQ0KKw0KKwkJb3Bb
Ml0gPSBGU0VfR0VUU1lNQk9MKCZzdGF0ZTEpOw0KKw0KKwkJaWYgKEZTRV9NQVhfVEFCTEVMT0cg
KiAyICsgNyA+IHNpemVvZihiaXRELmJpdENvbnRhaW5lcikgKiA4KSAvKiBUaGlzIHRlc3QgbXVz
dCBiZSBzdGF0aWMgKi8NCisJCQlCSVRfcmVsb2FkRFN0cmVhbSgmYml0RCk7DQorDQorCQlvcFsz
XSA9IEZTRV9HRVRTWU1CT0woJnN0YXRlMik7DQorCX0NCisNCisJLyogdGFpbCAqLw0KKwkvKiBu
b3RlIDogQklUX3JlbG9hZERTdHJlYW0oJmJpdEQpID49IEZTRV9EU3RyZWFtX3BhcnRpYWxseUZp
bGxlZDsgRW5kcyBhdCBleGFjdGx5IEJJVF9EU3RyZWFtX2NvbXBsZXRlZCAqLw0KKwl3aGlsZSAo
MSkgew0KKwkJaWYgKG9wID4gKG9tYXggLSAyKSkNCisJCQlyZXR1cm4gRVJST1IoZHN0U2l6ZV90
b29TbWFsbCk7DQorCQkqb3ArKyA9IEZTRV9HRVRTWU1CT0woJnN0YXRlMSk7DQorCQlpZiAoQklU
X3JlbG9hZERTdHJlYW0oJmJpdEQpID09IEJJVF9EU3RyZWFtX292ZXJmbG93KSB7DQorCQkJKm9w
KysgPSBGU0VfR0VUU1lNQk9MKCZzdGF0ZTIpOw0KKwkJCWJyZWFrOw0KKwkJfQ0KKw0KKwkJaWYg
KG9wID4gKG9tYXggLSAyKSkNCisJCQlyZXR1cm4gRVJST1IoZHN0U2l6ZV90b29TbWFsbCk7DQor
CQkqb3ArKyA9IEZTRV9HRVRTWU1CT0woJnN0YXRlMik7DQorCQlpZiAoQklUX3JlbG9hZERTdHJl
YW0oJmJpdEQpID09IEJJVF9EU3RyZWFtX292ZXJmbG93KSB7DQorCQkJKm9wKysgPSBGU0VfR0VU
U1lNQk9MKCZzdGF0ZTEpOw0KKwkJCWJyZWFrOw0KKwkJfQ0KKwl9DQorDQorCXJldHVybiBvcCAt
IG9zdGFydDsNCit9DQorDQorc2l6ZV90IEZTRV9kZWNvbXByZXNzX3VzaW5nRFRhYmxlKHZvaWQg
KmRzdCwgc2l6ZV90IG9yaWdpbmFsU2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNT
aXplLCBjb25zdCBGU0VfRFRhYmxlICpkdCkNCit7DQorCWNvbnN0IHZvaWQgKnB0ciA9IGR0Ow0K
Kwljb25zdCBGU0VfRFRhYmxlSGVhZGVyICpEVGFibGVIID0gKGNvbnN0IEZTRV9EVGFibGVIZWFk
ZXIgKilwdHI7DQorCWNvbnN0IFUzMiBmYXN0TW9kZSA9IERUYWJsZUgtPmZhc3RNb2RlOw0KKw0K
KwkvKiBzZWxlY3QgZmFzdCBtb2RlIChzdGF0aWMpICovDQorCWlmIChmYXN0TW9kZSkNCisJCXJl
dHVybiBGU0VfZGVjb21wcmVzc191c2luZ0RUYWJsZV9nZW5lcmljKGRzdCwgb3JpZ2luYWxTaXpl
LCBjU3JjLCBjU3JjU2l6ZSwgZHQsIDEpOw0KKwlyZXR1cm4gRlNFX2RlY29tcHJlc3NfdXNpbmdE
VGFibGVfZ2VuZXJpYyhkc3QsIG9yaWdpbmFsU2l6ZSwgY1NyYywgY1NyY1NpemUsIGR0LCAwKTsN
Cit9DQorDQorc2l6ZV90IEZTRV9kZWNvbXByZXNzX3drc3Aodm9pZCAqZHN0LCBzaXplX3QgZHN0
Q2FwYWNpdHksIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgdW5zaWduZWQgbWF4
TG9nLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3sNCisJY29uc3Qg
QllURSAqY29uc3QgaXN0YXJ0ID0gKGNvbnN0IEJZVEUgKiljU3JjOw0KKwljb25zdCBCWVRFICpp
cCA9IGlzdGFydDsNCisJdW5zaWduZWQgdGFibGVMb2c7DQorCXVuc2lnbmVkIG1heFN5bWJvbFZh
bHVlID0gRlNFX01BWF9TWU1CT0xfVkFMVUU7DQorCXNpemVfdCBOQ291bnRMZW5ndGg7DQorDQor
CUZTRV9EVGFibGUgKmR0Ow0KKwlzaG9ydCAqY291bnRpbmc7DQorCXNpemVfdCBzcGFjZVVzZWQz
MiA9IDA7DQorDQorCUZTRV9TVEFUSUNfQVNTRVJUKHNpemVvZihGU0VfRFRhYmxlKSA9PSBzaXpl
b2YoVTMyKSk7DQorDQorCWR0ID0gKEZTRV9EVGFibGUgKikoKFUzMiAqKXdvcmtzcGFjZSArIHNw
YWNlVXNlZDMyKTsNCisJc3BhY2VVc2VkMzIgKz0gRlNFX0RUQUJMRV9TSVpFX1UzMihtYXhMb2cp
Ow0KKwljb3VudGluZyA9IChzaG9ydCAqKSgoVTMyICopd29ya3NwYWNlICsgc3BhY2VVc2VkMzIp
Ow0KKwlzcGFjZVVzZWQzMiArPSBBTElHTihzaXplb2Yoc2hvcnQpICogKEZTRV9NQVhfU1lNQk9M
X1ZBTFVFICsgMSksIHNpemVvZihVMzIpKSA+PiAyOw0KKw0KKwlpZiAoKHNwYWNlVXNlZDMyIDw8
IDIpID4gd29ya3NwYWNlU2l6ZSkNCisJCXJldHVybiBFUlJPUih0YWJsZUxvZ190b29MYXJnZSk7
DQorCXdvcmtzcGFjZSA9IChVMzIgKil3b3Jrc3BhY2UgKyBzcGFjZVVzZWQzMjsNCisJd29ya3Nw
YWNlU2l6ZSAtPSAoc3BhY2VVc2VkMzIgPDwgMik7DQorDQorCS8qIG5vcm1hbCBGU0UgZGVjb2Rp
bmcgbW9kZSAqLw0KKwlOQ291bnRMZW5ndGggPSBGU0VfcmVhZE5Db3VudChjb3VudGluZywgJm1h
eFN5bWJvbFZhbHVlLCAmdGFibGVMb2csIGlzdGFydCwgY1NyY1NpemUpOw0KKwlpZiAoRlNFX2lz
RXJyb3IoTkNvdW50TGVuZ3RoKSkNCisJCXJldHVybiBOQ291bnRMZW5ndGg7DQorCS8vIGlmIChO
Q291bnRMZW5ndGggPj0gY1NyY1NpemUpIHJldHVybiBFUlJPUihzcmNTaXplX3dyb25nKTsgICAv
KiB0b28gc21hbGwgaW5wdXQgc2l6ZTsgc3VwcG9zZWQgdG8gYmUgYWxyZWFkeSBjaGVja2VkIGlu
IE5Db3VudExlbmd0aCwgb25seSByZW1haW5pbmcNCisJLy8gY2FzZSA6IE5Db3VudExlbmd0aD09
Y1NyY1NpemUgKi8NCisJaWYgKHRhYmxlTG9nID4gbWF4TG9nKQ0KKwkJcmV0dXJuIEVSUk9SKHRh
YmxlTG9nX3Rvb0xhcmdlKTsNCisJaXAgKz0gTkNvdW50TGVuZ3RoOw0KKwljU3JjU2l6ZSAtPSBO
Q291bnRMZW5ndGg7DQorDQorCUNIRUNLX0YoRlNFX2J1aWxkRFRhYmxlX3drc3AoZHQsIGNvdW50
aW5nLCBtYXhTeW1ib2xWYWx1ZSwgdGFibGVMb2csIHdvcmtzcGFjZSwgd29ya3NwYWNlU2l6ZSkp
Ow0KKw0KKwlyZXR1cm4gRlNFX2RlY29tcHJlc3NfdXNpbmdEVGFibGUoZHN0LCBkc3RDYXBhY2l0
eSwgaXAsIGNTcmNTaXplLCBkdCk7IC8qIGFsd2F5cyByZXR1cm4sIGV2ZW4gaWYgaXQgaXMgYW4g
ZXJyb3IgY29kZSAqLw0KK30NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvaHVmLmggYi94
ZW4vY29tbW9uL3pzdGQvaHVmLmgNCm5ldyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAwMDAw
MDAwLi4yMTQzZGEyOGQ5DQotLS0gL2Rldi9udWxsDQorKysgYi94ZW4vY29tbW9uL3pzdGQvaHVm
LmgNCkBAIC0wLDAgKzEsMjEyIEBADQorLyoNCisgKiBIdWZmbWFuIGNvZGVyLCBwYXJ0IG9mIE5l
dyBHZW5lcmF0aW9uIEVudHJvcHkgbGlicmFyeQ0KKyAqIGhlYWRlciBmaWxlDQorICogQ29weXJp
Z2h0IChDKSAyMDEzLTIwMTYsIFlhbm4gQ29sbGV0Lg0KKyAqDQorICogQlNEIDItQ2xhdXNlIExp
Y2Vuc2UgKGh0dHA6Ly93d3cub3BlbnNvdXJjZS5vcmcvbGljZW5zZXMvYnNkLWxpY2Vuc2UucGhw
KQ0KKyAqDQorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFyeSBm
b3Jtcywgd2l0aCBvciB3aXRob3V0DQorICogbW9kaWZpY2F0aW9uLCBhcmUgcGVybWl0dGVkIHBy
b3ZpZGVkIHRoYXQgdGhlIGZvbGxvd2luZyBjb25kaXRpb25zIGFyZQ0KKyAqIG1ldDoNCisgKg0K
KyAqICAgKiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFi
b3ZlIGNvcHlyaWdodA0KKyAqIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRo
ZSBmb2xsb3dpbmcgZGlzY2xhaW1lci4NCisgKiAgICogUmVkaXN0cmlidXRpb25zIGluIGJpbmFy
eSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZQ0KKyAqIGNvcHlyaWdodCBub3RpY2UsIHRo
aXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXINCisgKiBp
biB0aGUgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGgg
dGhlDQorICogZGlzdHJpYnV0aW9uLg0KKyAqDQorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURF
RCBCWSBUSEUgQ09QWVJJR0hUIEhPTERFUlMgQU5EIENPTlRSSUJVVE9SUw0KKyAqICJBUyBJUyIg
QU5EIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQgTk9U
DQorICogTElNSVRFRCBUTywgVEhFIElNUExJRUQgV0FSUkFOVElFUyBPRiBNRVJDSEFOVEFCSUxJ
VFkgQU5EIEZJVE5FU1MgRk9SDQorICogQSBQQVJUSUNVTEFSIFBVUlBPU0UgQVJFIERJU0NMQUlN
RUQuIElOIE5PIEVWRU5UIFNIQUxMIFRIRSBDT1BZUklHSFQNCisgKiBPV05FUiBPUiBDT05UUklC
VVRPUlMgQkUgTElBQkxFIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwNCisg
KiBTUEVDSUFMLCBFWEVNUExBUlksIE9SIENPTlNFUVVFTlRJQUwgREFNQUdFUyAoSU5DTFVESU5H
LCBCVVQgTk9UDQorICogTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09E
UyBPUiBTRVJWSUNFUzsgTE9TUyBPRiBVU0UsDQorICogREFUQSwgT1IgUFJPRklUUzsgT1IgQlVT
SU5FU1MgSU5URVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZDQorICogVEhFT1JZ
IE9GIExJQUJJTElUWSwgV0hFVEhFUiBJTiBDT05UUkFDVCwgU1RSSUNUIExJQUJJTElUWSwgT1Ig
VE9SVA0KKyAqIChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4g
QU5ZIFdBWSBPVVQgT0YgVEhFIFVTRQ0KKyAqIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURW
SVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YgU1VDSCBEQU1BR0UuDQorICoNCisgKiBUaGlzIHBy
b2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1v
ZGlmeSBpdCB1bmRlcg0KKyAqIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExp
Y2Vuc2UgdmVyc2lvbiAyIGFzIHB1Ymxpc2hlZCBieSB0aGUNCisgKiBGcmVlIFNvZnR3YXJlIEZv
dW5kYXRpb24uIFRoaXMgcHJvZ3JhbSBpcyBkdWFsLWxpY2Vuc2VkOyB5b3UgbWF5IHNlbGVjdA0K
KyAqIGVpdGhlciB2ZXJzaW9uIDIgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlICgi
R1BMIikgb3IgQlNEIGxpY2Vuc2UNCisgKiAoIkJTRCIpLg0KKyAqDQorICogWW91IGNhbiBjb250
YWN0IHRoZSBhdXRob3IgYXQgOg0KKyAqIC0gU291cmNlIHJlcG9zaXRvcnkgOiBodHRwczovL2dp
dGh1Yi5jb20vQ3lhbjQ5NzMvRmluaXRlU3RhdGVFbnRyb3B5DQorICovDQorI2lmbmRlZiBIVUZf
SF8yOTg3MzQyMzQNCisjZGVmaW5lIEhVRl9IXzI5ODczNDIzNA0KKw0KKy8qICoqKiBEZXBlbmRl
bmNpZXMgKioqICovDQorI2luY2x1ZGUgPGxpbnV4L3R5cGVzLmg+IC8qIHNpemVfdCAqLw0KKw0K
Ky8qICoqKiAgIFRvb2wgZnVuY3Rpb25zICoqKiAqLw0KKyNkZWZpbmUgSFVGX0JMT0NLU0laRV9N
QVggKDEyOCAqIDEwMjQpIC8qKjwgbWF4aW11bSBpbnB1dCBzaXplIGZvciBhIHNpbmdsZSBibG9j
ayBjb21wcmVzc2VkIHdpdGggSFVGX2NvbXByZXNzICovDQorc2l6ZV90IEhVRl9jb21wcmVzc0Jv
dW5kKHNpemVfdCBzaXplKTsgLyoqPCBtYXhpbXVtIGNvbXByZXNzZWQgc2l6ZSAod29yc3QgY2Fz
ZSkgKi8NCisNCisvKiBFcnJvciBNYW5hZ2VtZW50ICovDQordW5zaWduZWQgSFVGX2lzRXJyb3Io
c2l6ZV90IGNvZGUpOyAvKio8IHRlbGxzIGlmIGEgcmV0dXJuIHZhbHVlIGlzIGFuIGVycm9yIGNv
ZGUgKi8NCisNCisvKiAqKiogICBBZHZhbmNlZCBmdW5jdGlvbiAgICoqKiAqLw0KKw0KKy8qKiBI
VUZfY29tcHJlc3M0WF93a3NwKCkgOg0KKyogICBTYW1lIGFzIEhVRl9jb21wcmVzczIoKSwgYnV0
IHVzZXMgZXh0ZXJuYWxseSBhbGxvY2F0ZWQgYHdvcmtTcGFjZWAsIHdoaWNoIG11c3QgYmUgYSB0
YWJsZSBvZiA+PSAxMDI0IHVuc2lnbmVkICovDQorc2l6ZV90IEhVRl9jb21wcmVzczRYX3drc3Ao
dm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6
ZSwgdW5zaWduZWQgbWF4U3ltYm9sVmFsdWUsIHVuc2lnbmVkIHRhYmxlTG9nLCB2b2lkICp3b3Jr
U3BhY2UsDQorCQkJICAgc2l6ZV90IHdrc3BTaXplKTsgLyoqPCBgd29ya1NwYWNlYCBtdXN0IGJl
IGEgdGFibGUgb2YgYXQgbGVhc3QgSFVGX0NPTVBSRVNTX1dPUktTUEFDRV9TSVpFX1UzMiB1bnNp
Z25lZCAqLw0KKw0KKy8qICoqKiBEZXBlbmRlbmNpZXMgKioqICovDQorI2luY2x1ZGUgIm1lbS5o
IiAvKiBVMzIgKi8NCisNCisvKiAqKiogQ29uc3RhbnRzICoqKiAqLw0KKyNkZWZpbmUgSFVGX1RB
QkxFTE9HX01BWCAxMiAgICAgLyogbWF4IGNvbmZpZ3VyZWQgdGFibGVMb2cgKGZvciBzdGF0aWMg
YWxsb2NhdGlvbik7IGNhbiBiZSBtb2RpZmllZCB1cCB0byBIVUZfQUJTT0xVVEVNQVhfVEFCTEVM
T0cgKi8NCisjZGVmaW5lIEhVRl9UQUJMRUxPR19ERUZBVUxUIDExIC8qIHRhYmxlTG9nIGJ5IGRl
ZmF1bHQsIHdoZW4gbm90IHNwZWNpZmllZCAqLw0KKyNkZWZpbmUgSFVGX1NZTUJPTFZBTFVFX01B
WCAyNTUNCisNCisjZGVmaW5lIEhVRl9UQUJMRUxPR19BQlNPTFVURU1BWCAxNSAvKiBhYnNvbHV0
ZSBsaW1pdCBvZiBIVUZfTUFYX1RBQkxFTE9HLiBCZXlvbmQgdGhhdCB2YWx1ZSwgY29kZSBkb2Vz
IG5vdCB3b3JrICovDQorI2lmIChIVUZfVEFCTEVMT0dfTUFYID4gSFVGX1RBQkxFTE9HX0FCU09M
VVRFTUFYKQ0KKyNlcnJvciAiSFVGX1RBQkxFTE9HX01BWCBpcyB0b28gbGFyZ2UgISINCisjZW5k
aWYNCisNCisvKiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAg
U3RhdGljIGFsbG9jYXRpb24NCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKiovDQorLyogSFVGIGJ1ZmZlciBib3VuZHMgKi8NCisjZGVmaW5lIEhVRl9DVEFCTEVCT1VO
RCAxMjkNCisjZGVmaW5lIEhVRl9CTE9DS0JPVU5EKHNpemUpIChzaXplICsgKHNpemUgPj4gOCkg
KyA4KQkJCSAvKiBvbmx5IHRydWUgaWYgaW5jb21wcmVzc2libGUgcHJlLWZpbHRlcmVkIHdpdGgg
ZmFzdCBoZXVyaXN0aWMgKi8NCisjZGVmaW5lIEhVRl9DT01QUkVTU0JPVU5EKHNpemUpIChIVUZf
Q1RBQkxFQk9VTkQgKyBIVUZfQkxPQ0tCT1VORChzaXplKSkgLyogTWFjcm8gdmVyc2lvbiwgdXNl
ZnVsIGZvciBzdGF0aWMgYWxsb2NhdGlvbiAqLw0KKw0KKy8qIHN0YXRpYyBhbGxvY2F0aW9uIG9m
IEhVRidzIENvbXByZXNzaW9uIFRhYmxlICovDQorI2RlZmluZSBIVUZfQ1JFQVRFX1NUQVRJQ19D
VEFCTEUobmFtZSwgbWF4U3ltYm9sVmFsdWUpIFwNCisJVTMyIG5hbWUjI2hiW21heFN5bWJvbFZh
bHVlICsgMV07ICAgICAgICAgICAgICBcDQorCXZvaWQgKm5hbWUjI2h2ID0gJihuYW1lIyNoYik7
ICAgICAgICAgICAgICAgICAgXA0KKwlIVUZfQ0VsdCAqbmFtZSA9IChIVUZfQ0VsdCAqKShuYW1l
IyNodikgLyogbm8gZmluYWwgOyAqLw0KKw0KKy8qIHN0YXRpYyBhbGxvY2F0aW9uIG9mIEhVRidz
IERUYWJsZSAqLw0KK3R5cGVkZWYgVTMyIEhVRl9EVGFibGU7DQorI2RlZmluZSBIVUZfRFRBQkxF
X1NJWkUobWF4VGFibGVMb2cpICgxICsgKDEgPDwgKG1heFRhYmxlTG9nKSkpDQorI2RlZmluZSBI
VUZfQ1JFQVRFX1NUQVRJQ19EVEFCTEVYMihEVGFibGUsIG1heFRhYmxlTG9nKSBIVUZfRFRhYmxl
IERUYWJsZVtIVUZfRFRBQkxFX1NJWkUoKG1heFRhYmxlTG9nKS0xKV0gPSB7KChVMzIpKChtYXhU
YWJsZUxvZyktMSkgKiAweDAxMDAwMDAxKX0NCisjZGVmaW5lIEhVRl9DUkVBVEVfU1RBVElDX0RU
QUJMRVg0KERUYWJsZSwgbWF4VGFibGVMb2cpIEhVRl9EVGFibGUgRFRhYmxlW0hVRl9EVEFCTEVf
U0laRShtYXhUYWJsZUxvZyldID0geygoVTMyKShtYXhUYWJsZUxvZykqMHgwMTAwMDAwMSl9DQor
DQorLyogVGhlIHdvcmtzcGFjZSBtdXN0IGhhdmUgYWxpZ25tZW50IGF0IGxlYXN0IDQgYW5kIGJl
IGF0IGxlYXN0IHRoaXMgbGFyZ2UgKi8NCisjZGVmaW5lIEhVRl9DT01QUkVTU19XT1JLU1BBQ0Vf
U0laRSAoNiA8PCAxMCkNCisjZGVmaW5lIEhVRl9DT01QUkVTU19XT1JLU1BBQ0VfU0laRV9VMzIg
KEhVRl9DT01QUkVTU19XT1JLU1BBQ0VfU0laRSAvIHNpemVvZihVMzIpKQ0KKw0KKy8qIFRoZSB3
b3Jrc3BhY2UgbXVzdCBoYXZlIGFsaWdubWVudCBhdCBsZWFzdCA0IGFuZCBiZSBhdCBsZWFzdCB0
aGlzIGxhcmdlICovDQorI2RlZmluZSBIVUZfREVDT01QUkVTU19XT1JLU1BBQ0VfU0laRSAoMyA8
PCAxMCkNCisjZGVmaW5lIEhVRl9ERUNPTVBSRVNTX1dPUktTUEFDRV9TSVpFX1UzMiAoSFVGX0RF
Q09NUFJFU1NfV09SS1NQQUNFX1NJWkUgLyBzaXplb2YoVTMyKSkNCisNCisvKiAqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgQWR2YW5jZWQgZGVjb21wcmVzc2lv
biBmdW5jdGlvbnMNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiov
DQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFhfREN0eF93a3NwKEhVRl9EVGFibGUgKmRjdHgsIHZv
aWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6
ZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSk7IC8qKjwgZGVjb2RlcyBS
TEUgYW5kIHVuY29tcHJlc3NlZCAqLw0KK3NpemVfdCBIVUZfZGVjb21wcmVzczRYX2h1Zk9ubHlf
d2tzcChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2
b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwNCisJCQkJc2l6ZV90
IHdvcmtzcGFjZVNpemUpOwkJCQkJCQkgICAgICAgLyoqPCBjb25zaWRlcnMgUkxFIGFuZCB1bmNv
bXByZXNzZWQgYXMgZXJyb3JzICovDQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFgyX0RDdHhfd2tz
cChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lk
ICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwNCisJCQkJICAgc2l6ZV90
IHdvcmtzcGFjZVNpemUpOyAvKio8IHNpbmdsZS1zeW1ib2wgZGVjb2RlciAqLw0KK3NpemVfdCBI
VUZfZGVjb21wcmVzczRYNF9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqZGN0eCwgdm9pZCAqZHN0LCBz
aXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCB2b2lkICp3
b3Jrc3BhY2UsDQorCQkJCSAgIHNpemVfdCB3b3Jrc3BhY2VTaXplKTsgLyoqPCBkb3VibGUtc3lt
Ym9scyBkZWNvZGVyICovDQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKg0KKyogIEhVRiBkZXRhaWxlZCBBUEkNCisqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKiovDQorLyohDQorSFVGX2NvbXByZXNzKCkgZG9lcyB0aGUgZm9sbG93
aW5nOg0KKzEuIGNvdW50IHN5bWJvbCBvY2N1cnJlbmNlIGZyb20gc291cmNlW10gaW50byB0YWJs
ZSBjb3VudFtdIHVzaW5nIEZTRV9jb3VudCgpDQorMi4gKG9wdGlvbmFsKSByZWZpbmUgdGFibGVM
b2cgdXNpbmcgSFVGX29wdGltYWxUYWJsZUxvZygpDQorMy4gYnVpbGQgSHVmZm1hbiB0YWJsZSBm
cm9tIGNvdW50IHVzaW5nIEhVRl9idWlsZENUYWJsZSgpDQorNC4gc2F2ZSBIdWZmbWFuIHRhYmxl
IHRvIG1lbW9yeSBidWZmZXIgdXNpbmcgSFVGX3dyaXRlQ1RhYmxlX3drc3AoKQ0KKzUuIGVuY29k
ZSB0aGUgZGF0YSBzdHJlYW0gdXNpbmcgSFVGX2NvbXByZXNzNFhfdXNpbmdDVGFibGUoKQ0KKw0K
K1RoZSBmb2xsb3dpbmcgQVBJIGFsbG93cyB0YXJnZXRpbmcgc3BlY2lmaWMgc3ViLWZ1bmN0aW9u
cyBmb3IgYWR2YW5jZWQgdGFza3MuDQorRm9yIGV4YW1wbGUsIGl0J3MgcG9zc2libGUgdG8gY29t
cHJlc3Mgc2V2ZXJhbCBibG9ja3MgdXNpbmcgdGhlIHNhbWUgJ0NUYWJsZScsDQorb3IgdG8gc2F2
ZSBhbmQgcmVnZW5lcmF0ZSAnQ1RhYmxlJyB1c2luZyBleHRlcm5hbCBtZXRob2RzLg0KKyovDQor
LyogRlNFX2NvdW50KCkgOiBmaW5kIGl0IHdpdGhpbiAiZnNlLmgiICovDQordW5zaWduZWQgSFVG
X29wdGltYWxUYWJsZUxvZyh1bnNpZ25lZCBtYXhUYWJsZUxvZywgc2l6ZV90IHNyY1NpemUsIHVu
c2lnbmVkIG1heFN5bWJvbFZhbHVlKTsNCit0eXBlZGVmIHN0cnVjdCBIVUZfQ0VsdF9zIEhVRl9D
RWx0OyAvKiBpbmNvbXBsZXRlIHR5cGUgKi8NCitzaXplX3QgSFVGX3dyaXRlQ1RhYmxlX3drc3Ao
dm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3QgSFVGX0NFbHQgKkNUYWJsZSwgdW5z
aWduZWQgbWF4U3ltYm9sVmFsdWUsIHVuc2lnbmVkIGh1ZmZMb2csIHZvaWQgKndvcmtzcGFjZSwg
c2l6ZV90IHdvcmtzcGFjZVNpemUpOw0KK3NpemVfdCBIVUZfY29tcHJlc3M0WF91c2luZ0NUYWJs
ZSh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNT
aXplLCBjb25zdCBIVUZfQ0VsdCAqQ1RhYmxlKTsNCisNCit0eXBlZGVmIGVudW0gew0KKwlIVUZf
cmVwZWF0X25vbmUsICAvKio8IENhbm5vdCB1c2UgdGhlIHByZXZpb3VzIHRhYmxlICovDQorCUhV
Rl9yZXBlYXRfY2hlY2ssIC8qKjwgQ2FuIHVzZSB0aGUgcHJldmlvdXMgdGFibGUgYnV0IGl0IG11
c3QgYmUgY2hlY2tlZC4gTm90ZSA6IFRoZSBwcmV2aW91cyB0YWJsZSBtdXN0IGhhdmUgYmVlbiBj
b25zdHJ1Y3RlZCBieSBIVUZfY29tcHJlc3N7MSwNCisJCQkgICAgIDR9WF9yZXBlYXQgKi8NCisJ
SFVGX3JlcGVhdF92YWxpZCAgLyoqPCBDYW4gdXNlIHRoZSBwcmV2aW91cyB0YWJsZSBhbmQgaXQg
aXMgYXN1bWVkIHRvIGJlIHZhbGlkICovDQorfSBIVUZfcmVwZWF0Ow0KKy8qKiBIVUZfY29tcHJl
c3M0WF9yZXBlYXQoKSA6DQorKiAgIFNhbWUgYXMgSFVGX2NvbXByZXNzNFhfd2tzcCgpLCBidXQg
Y29uc2lkZXJzIHVzaW5nIGh1ZlRhYmxlIGlmICpyZXBlYXQgIT0gSFVGX3JlcGVhdF9ub25lLg0K
KyogICBJZiBpdCB1c2VzIGh1ZlRhYmxlIGl0IGRvZXMgbm90IG1vZGlmeSBodWZUYWJsZSBvciBy
ZXBlYXQuDQorKiAgIElmIGl0IGRvZXNuJ3QsIGl0IHNldHMgKnJlcGVhdCA9IEhVRl9yZXBlYXRf
bm9uZSwgYW5kIGl0IHNldHMgaHVmVGFibGUgdG8gdGhlIHRhYmxlIHVzZWQuDQorKiAgIElmIHBy
ZWZlclJlcGVhdCB0aGVuIHRoZSBvbGQgdGFibGUgd2lsbCBhbHdheXMgYmUgdXNlZCBpZiB2YWxp
ZC4gKi8NCitzaXplX3QgSFVGX2NvbXByZXNzNFhfcmVwZWF0KHZvaWQgKmRzdCwgc2l6ZV90IGRz
dFNpemUsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIHVuc2lnbmVkIG1heFN5bWJv
bFZhbHVlLCB1bnNpZ25lZCB0YWJsZUxvZywgdm9pZCAqd29ya1NwYWNlLA0KKwkJCSAgICAgc2l6
ZV90IHdrc3BTaXplLCBIVUZfQ0VsdCAqaHVmVGFibGUsIEhVRl9yZXBlYXQgKnJlcGVhdCwNCisJ
CQkgICAgIGludCBwcmVmZXJSZXBlYXQpOyAvKio8IGB3b3JrU3BhY2VgIG11c3QgYmUgYSB0YWJs
ZSBvZiBhdCBsZWFzdCBIVUZfQ09NUFJFU1NfV09SS1NQQUNFX1NJWkVfVTMyIHVuc2lnbmVkICov
DQorDQorLyoqIEhVRl9idWlsZENUYWJsZV93a3NwKCkgOg0KKyAqICBTYW1lIGFzIEhVRl9idWls
ZENUYWJsZSgpLCBidXQgdXNpbmcgZXh0ZXJuYWxseSBhbGxvY2F0ZWQgc2NyYXRjaCBidWZmZXIu
DQorICogIGB3b3JrU3BhY2VgIG11c3QgYmUgYWxpZ25lZCBvbiA0LWJ5dGVzIGJvdW5kYXJpZXMs
IGFuZCBiZSBhdCBsZWFzdCBhcyBsYXJnZSBhcyBhIHRhYmxlIG9mIDEwMjQgdW5zaWduZWQuDQor
ICovDQorc2l6ZV90IEhVRl9idWlsZENUYWJsZV93a3NwKEhVRl9DRWx0ICp0cmVlLCBjb25zdCBV
MzIgKmNvdW50LCBVMzIgbWF4U3ltYm9sVmFsdWUsIFUzMiBtYXhOYkJpdHMsIHZvaWQgKndvcmtT
cGFjZSwgc2l6ZV90IHdrc3BTaXplKTsNCisNCisvKiEgSFVGX3JlYWRTdGF0cygpIDoNCisJUmVh
ZCBjb21wYWN0IEh1ZmZtYW4gdHJlZSwgc2F2ZWQgYnkgSFVGX3dyaXRlQ1RhYmxlKCkuDQorCWBo
dWZmV2VpZ2h0YCBpcyBkZXN0aW5hdGlvbiBidWZmZXIuDQorCUByZXR1cm4gOiBzaXplIHJlYWQg
ZnJvbSBgc3JjYCAsIG9yIGFuIGVycm9yIENvZGUgLg0KKwlOb3RlIDogTmVlZGVkIGJ5IEhVRl9y
ZWFkQ1RhYmxlKCkgYW5kIEhVRl9yZWFkRFRhYmxlWG4oKSAuICovDQorc2l6ZV90IEhVRl9yZWFk
U3RhdHNfd2tzcChCWVRFICpodWZmV2VpZ2h0LCBzaXplX3QgaHdTaXplLCBVMzIgKnJhbmtTdGF0
cywgVTMyICpuYlN5bWJvbHNQdHIsIFUzMiAqdGFibGVMb2dQdHIsIGNvbnN0IHZvaWQgKnNyYywg
c2l6ZV90IHNyY1NpemUsDQorCQkJICB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VT
aXplKTsNCisNCisvKiogSFVGX3JlYWRDVGFibGUoKSA6DQorKiAgIExvYWRpbmcgYSBDVGFibGUg
c2F2ZWQgd2l0aCBIVUZfd3JpdGVDVGFibGUoKSAqLw0KK3NpemVfdCBIVUZfcmVhZENUYWJsZV93
a3NwKEhVRl9DRWx0ICpDVGFibGUsIHVuc2lnbmVkIG1heFN5bWJvbFZhbHVlLCBjb25zdCB2b2lk
ICpzcmMsIHNpemVfdCBzcmNTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VT
aXplKTsNCisNCisvKg0KK0hVRl9kZWNvbXByZXNzKCkgZG9lcyB0aGUgZm9sbG93aW5nOg0KKzEu
IHNlbGVjdCB0aGUgZGVjb21wcmVzc2lvbiBhbGdvcml0aG0gKFgyLCBYNCkgYmFzZWQgb24gcHJl
LWNvbXB1dGVkIGhldXJpc3RpY3MNCisyLiBidWlsZCBIdWZmbWFuIHRhYmxlIGZyb20gc2F2ZSwg
dXNpbmcgSFVGX3JlYWREVGFibGVYbigpDQorMy4gZGVjb2RlIDEgb3IgNCBzZWdtZW50cyBpbiBw
YXJhbGxlbCB1c2luZyBIVUZfZGVjb21wcmVzc1NYbl91c2luZ0RUYWJsZQ0KKyovDQorDQorLyoq
IEhVRl9zZWxlY3REZWNvZGVyKCkgOg0KKyogICBUZWxscyB3aGljaCBkZWNvZGVyIGlzIGxpa2Vs
eSB0byBkZWNvZGUgZmFzdGVyLA0KKyogICBiYXNlZCBvbiBhIHNldCBvZiBwcmUtZGV0ZXJtaW5l
ZCBtZXRyaWNzLg0KKyogICBAcmV0dXJuIDogMD09SFVGX2RlY29tcHJlc3M0WDIsIDE9PUhVRl9k
ZWNvbXByZXNzNFg0IC4NCisqICAgQXNzdW1wdGlvbiA6IDAgPCBjU3JjU2l6ZSA8IGRzdFNpemUg
PD0gMTI4IEtCICovDQorVTMyIEhVRl9zZWxlY3REZWNvZGVyKHNpemVfdCBkc3RTaXplLCBzaXpl
X3QgY1NyY1NpemUpOw0KKw0KK3NpemVfdCBIVUZfcmVhZERUYWJsZVgyX3drc3AoSFVGX0RUYWJs
ZSAqRFRhYmxlLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCB2b2lkICp3b3Jrc3Bh
Y2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKTsNCitzaXplX3QgSFVGX3JlYWREVGFibGVYNF93a3Nw
KEhVRl9EVGFibGUgKkRUYWJsZSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgdm9p
ZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSk7DQorDQorc2l6ZV90IEhVRl9kZWNv
bXByZXNzNFhfdXNpbmdEVGFibGUodm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qg
dm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpOw0K
K3NpemVfdCBIVUZfZGVjb21wcmVzczRYMl91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBt
YXhEc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0IEhVRl9E
VGFibGUgKkRUYWJsZSk7DQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFg0X3VzaW5nRFRhYmxlKHZv
aWQgKmRzdCwgc2l6ZV90IG1heERzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3Jj
U2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAqRFRhYmxlKTsNCisNCisvKiBzaW5nbGUgc3RyZWFtIHZh
cmlhbnRzICovDQorDQorc2l6ZV90IEhVRl9jb21wcmVzczFYX3drc3Aodm9pZCAqZHN0LCBzaXpl
X3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgdW5zaWduZWQgbWF4
U3ltYm9sVmFsdWUsIHVuc2lnbmVkIHRhYmxlTG9nLCB2b2lkICp3b3JrU3BhY2UsDQorCQkJICAg
c2l6ZV90IHdrc3BTaXplKTsgLyoqPCBgd29ya1NwYWNlYCBtdXN0IGJlIGEgdGFibGUgb2YgYXQg
bGVhc3QgSFVGX0NPTVBSRVNTX1dPUktTUEFDRV9TSVpFX1UzMiB1bnNpZ25lZCAqLw0KK3NpemVf
dCBIVUZfY29tcHJlc3MxWF91c2luZ0NUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBj
b25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCBjb25zdCBIVUZfQ0VsdCAqQ1RhYmxlKTsN
CisvKiogSFVGX2NvbXByZXNzMVhfcmVwZWF0KCkgOg0KKyogICBTYW1lIGFzIEhVRl9jb21wcmVz
czFYX3drc3AoKSwgYnV0IGNvbnNpZGVycyB1c2luZyBodWZUYWJsZSBpZiAqcmVwZWF0ICE9IEhV
Rl9yZXBlYXRfbm9uZS4NCisqICAgSWYgaXQgdXNlcyBodWZUYWJsZSBpdCBkb2VzIG5vdCBtb2Rp
ZnkgaHVmVGFibGUgb3IgcmVwZWF0Lg0KKyogICBJZiBpdCBkb2Vzbid0LCBpdCBzZXRzICpyZXBl
YXQgPSBIVUZfcmVwZWF0X25vbmUsIGFuZCBpdCBzZXRzIGh1ZlRhYmxlIHRvIHRoZSB0YWJsZSB1
c2VkLg0KKyogICBJZiBwcmVmZXJSZXBlYXQgdGhlbiB0aGUgb2xkIHRhYmxlIHdpbGwgYWx3YXlz
IGJlIHVzZWQgaWYgdmFsaWQuICovDQorc2l6ZV90IEhVRl9jb21wcmVzczFYX3JlcGVhdCh2b2lk
ICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCB1
bnNpZ25lZCBtYXhTeW1ib2xWYWx1ZSwgdW5zaWduZWQgdGFibGVMb2csIHZvaWQgKndvcmtTcGFj
ZSwNCisJCQkgICAgIHNpemVfdCB3a3NwU2l6ZSwgSFVGX0NFbHQgKmh1ZlRhYmxlLCBIVUZfcmVw
ZWF0ICpyZXBlYXQsDQorCQkJICAgICBpbnQgcHJlZmVyUmVwZWF0KTsgLyoqPCBgd29ya1NwYWNl
YCBtdXN0IGJlIGEgdGFibGUgb2YgYXQgbGVhc3QgSFVGX0NPTVBSRVNTX1dPUktTUEFDRV9TSVpF
X1UzMiB1bnNpZ25lZCAqLw0KKw0KK3NpemVfdCBIVUZfZGVjb21wcmVzczFYX0RDdHhfd2tzcChI
VUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpj
U3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNp
emUpOw0KK3NpemVfdCBIVUZfZGVjb21wcmVzczFYMl9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqZGN0
eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNT
cmNTaXplLCB2b2lkICp3b3Jrc3BhY2UsDQorCQkJCSAgIHNpemVfdCB3b3Jrc3BhY2VTaXplKTsg
LyoqPCBzaW5nbGUtc3ltYm9sIGRlY29kZXIgKi8NCitzaXplX3QgSFVGX2RlY29tcHJlc3MxWDRf
REN0eF93a3NwKEhVRl9EVGFibGUgKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNv
bnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLA0KKwkJCQkg
ICBzaXplX3Qgd29ya3NwYWNlU2l6ZSk7IC8qKjwgZG91YmxlLXN5bWJvbHMgZGVjb2RlciAqLw0K
Kw0KK3NpemVfdCBIVUZfZGVjb21wcmVzczFYX3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90
IG1heERzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwNCisJCQkJICAg
IGNvbnN0IEhVRl9EVGFibGUgKkRUYWJsZSk7IC8qKjwgYXV0b21hdGljIHNlbGVjdGlvbiBvZiBz
aW5nIG9yIGRvdWJsZSBzeW1ib2wgZGVjb2RlciwgYmFzZWQgb24gRFRhYmxlICovDQorc2l6ZV90
IEhVRl9kZWNvbXByZXNzMVgyX3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90IG1heERzdFNp
emUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAq
RFRhYmxlKTsNCitzaXplX3QgSFVGX2RlY29tcHJlc3MxWDRfdXNpbmdEVGFibGUodm9pZCAqZHN0
LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBj
b25zdCBIVUZfRFRhYmxlICpEVGFibGUpOw0KKw0KKyNlbmRpZiAvKiBIVUZfSF8yOTg3MzQyMzQg
Ki8NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvaHVmX2RlY29tcHJlc3MuYyBiL3hlbi9j
b21tb24venN0ZC9odWZfZGVjb21wcmVzcy5jDQpuZXcgZmlsZSBtb2RlIDEwMDY0NA0KaW5kZXgg
MDAwMDAwMDAwMC4uNjUyNjQ4MjA0Nw0KLS0tIC9kZXYvbnVsbA0KKysrIGIveGVuL2NvbW1vbi96
c3RkL2h1Zl9kZWNvbXByZXNzLmMNCkBAIC0wLDAgKzEsOTYwIEBADQorLyoNCisgKiBIdWZmbWFu
IGRlY29kZXIsIHBhcnQgb2YgTmV3IEdlbmVyYXRpb24gRW50cm9weSBsaWJyYXJ5DQorICogQ29w
eXJpZ2h0IChDKSAyMDEzLTIwMTYsIFlhbm4gQ29sbGV0Lg0KKyAqDQorICogQlNEIDItQ2xhdXNl
IExpY2Vuc2UgKGh0dHA6Ly93d3cub3BlbnNvdXJjZS5vcmcvbGljZW5zZXMvYnNkLWxpY2Vuc2Uu
cGhwKQ0KKyAqDQorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFy
eSBmb3Jtcywgd2l0aCBvciB3aXRob3V0DQorICogbW9kaWZpY2F0aW9uLCBhcmUgcGVybWl0dGVk
IHByb3ZpZGVkIHRoYXQgdGhlIGZvbGxvd2luZyBjb25kaXRpb25zIGFyZQ0KKyAqIG1ldDoNCisg
Kg0KKyAqICAgKiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhl
IGFib3ZlIGNvcHlyaWdodA0KKyAqIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5k
IHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lci4NCisgKiAgICogUmVkaXN0cmlidXRpb25zIGluIGJp
bmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZQ0KKyAqIGNvcHlyaWdodCBub3RpY2Us
IHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXINCisg
KiBpbiB0aGUgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdp
dGggdGhlDQorICogZGlzdHJpYnV0aW9uLg0KKyAqDQorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9W
SURFRCBCWSBUSEUgQ09QWVJJR0hUIEhPTERFUlMgQU5EIENPTlRSSUJVVE9SUw0KKyAqICJBUyBJ
UyIgQU5EIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQg
Tk9UDQorICogTElNSVRFRCBUTywgVEhFIElNUExJRUQgV0FSUkFOVElFUyBPRiBNRVJDSEFOVEFC
SUxJVFkgQU5EIEZJVE5FU1MgRk9SDQorICogQSBQQVJUSUNVTEFSIFBVUlBPU0UgQVJFIERJU0NM
QUlNRUQuIElOIE5PIEVWRU5UIFNIQUxMIFRIRSBDT1BZUklHSFQNCisgKiBPV05FUiBPUiBDT05U
UklCVVRPUlMgQkUgTElBQkxFIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwN
CisgKiBTUEVDSUFMLCBFWEVNUExBUlksIE9SIENPTlNFUVVFTlRJQUwgREFNQUdFUyAoSU5DTFVE
SU5HLCBCVVQgTk9UDQorICogTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBH
T09EUyBPUiBTRVJWSUNFUzsgTE9TUyBPRiBVU0UsDQorICogREFUQSwgT1IgUFJPRklUUzsgT1Ig
QlVTSU5FU1MgSU5URVJSVVBUSU9OKSBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZDQorICogVEhF
T1JZIE9GIExJQUJJTElUWSwgV0hFVEhFUiBJTiBDT05UUkFDVCwgU1RSSUNUIExJQUJJTElUWSwg
T1IgVE9SVA0KKyAqIChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcg
SU4gQU5ZIFdBWSBPVVQgT0YgVEhFIFVTRQ0KKyAqIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYg
QURWSVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YgU1VDSCBEQU1BR0UuDQorICoNCisgKiBUaGlz
IHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29y
IG1vZGlmeSBpdCB1bmRlcg0KKyAqIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgdmVyc2lvbiAyIGFzIHB1Ymxpc2hlZCBieSB0aGUNCisgKiBGcmVlIFNvZnR3YXJl
IEZvdW5kYXRpb24uIFRoaXMgcHJvZ3JhbSBpcyBkdWFsLWxpY2Vuc2VkOyB5b3UgbWF5IHNlbGVj
dA0KKyAqIGVpdGhlciB2ZXJzaW9uIDIgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNl
ICgiR1BMIikgb3IgQlNEIGxpY2Vuc2UNCisgKiAoIkJTRCIpLg0KKyAqDQorICogWW91IGNhbiBj
b250YWN0IHRoZSBhdXRob3IgYXQgOg0KKyAqIC0gU291cmNlIHJlcG9zaXRvcnkgOiBodHRwczov
L2dpdGh1Yi5jb20vQ3lhbjQ5NzMvRmluaXRlU3RhdGVFbnRyb3B5DQorICovDQorDQorLyogKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioNCisqICBDb21waWxlciBzcGVjaWZpY3MNCisqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKyNkZWZpbmUgRk9SQ0VfSU5M
SU5FIHN0YXRpYyBfX2Fsd2F5c19pbmxpbmUNCisNCisvKiAqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIERlcGVuZGVuY2ll
cw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKiovDQorI2luY2x1ZGUgImJpdHN0cmVhbS5oIiAvKiBCSVRfKiAqLw0KKyNpbmNs
dWRlICJmc2UuaCIgICAgICAgLyogaGVhZGVyIGNvbXByZXNzaW9uICovDQorI2luY2x1ZGUgImh1
Zi5oIg0KKyNpbmNsdWRlIDxsaW51eC9jb21waWxlci5oPg0KKyNpbmNsdWRlIDxsaW51eC9rZXJu
ZWwuaD4NCisjaW5jbHVkZSA8bGludXgvc3RyaW5nLmg+IC8qIG1lbWNweSwgbWVtc2V0ICovDQor
DQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioNCisqICBFcnJvciBNYW5hZ2VtZW50DQorKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVmaW5lIEhV
Rl9TVEFUSUNfQVNTRVJUKGMpICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcDQor
CXsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBc
DQorCQllbnVtIHsgSFVGX3N0YXRpY19hc3NlcnQgPSAxIC8gKGludCkoISEoYykpIH07IFwNCisJ
fSAvKiB1c2Ugb25seSAqYWZ0ZXIqIHZhcmlhYmxlIGRlY2xhcmF0aW9ucyAqLw0KKw0KKy8qLSoq
KioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKiAgZ2VuZXJpYyBEVGFibGVEZXNjICAgICAg
ICovDQorLyotKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKw0KK3R5cGVkZWYgc3RydWN0
IHsNCisJQllURSBtYXhUYWJsZUxvZzsNCisJQllURSB0YWJsZVR5cGU7DQorCUJZVEUgdGFibGVM
b2c7DQorCUJZVEUgcmVzZXJ2ZWQ7DQorfSBEVGFibGVEZXNjOw0KKw0KK3N0YXRpYyBEVGFibGVE
ZXNjIEhVRl9nZXREVGFibGVEZXNjKGNvbnN0IEhVRl9EVGFibGUgKnRhYmxlKQ0KK3sNCisJRFRh
YmxlRGVzYyBkdGQ7DQorCW1lbWNweSgmZHRkLCB0YWJsZSwgc2l6ZW9mKGR0ZCkpOw0KKwlyZXR1
cm4gZHRkOw0KK30NCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorLyogIHNp
bmdsZS1zeW1ib2wgZGVjb2RpbmcgICAqLw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioq
Ki8NCisNCit0eXBlZGVmIHN0cnVjdCB7DQorCUJZVEUgYnl0ZTsNCisJQllURSBuYkJpdHM7DQor
fSBIVUZfREVsdFgyOyAvKiBzaW5nbGUtc3ltYm9sIGRlY29kaW5nICovDQorDQorc2l6ZV90IEhV
Rl9yZWFkRFRhYmxlWDJfd2tzcChIVUZfRFRhYmxlICpEVGFibGUsIGNvbnN0IHZvaWQgKnNyYywg
c2l6ZV90IHNyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQor
ew0KKwlVMzIgdGFibGVMb2cgPSAwOw0KKwlVMzIgbmJTeW1ib2xzID0gMDsNCisJc2l6ZV90IGlT
aXplOw0KKwl2b2lkICpjb25zdCBkdFB0ciA9IERUYWJsZSArIDE7DQorCUhVRl9ERWx0WDIgKmNv
bnN0IGR0ID0gKEhVRl9ERWx0WDIgKilkdFB0cjsNCisNCisJVTMyICpyYW5rVmFsOw0KKwlCWVRF
ICpodWZmV2VpZ2h0Ow0KKwlzaXplX3Qgc3BhY2VVc2VkMzIgPSAwOw0KKw0KKwlyYW5rVmFsID0g
KFUzMiAqKXdvcmtzcGFjZSArIHNwYWNlVXNlZDMyOw0KKwlzcGFjZVVzZWQzMiArPSBIVUZfVEFC
TEVMT0dfQUJTT0xVVEVNQVggKyAxOw0KKwlodWZmV2VpZ2h0ID0gKEJZVEUgKikoKFUzMiAqKXdv
cmtzcGFjZSArIHNwYWNlVXNlZDMyKTsNCisJc3BhY2VVc2VkMzIgKz0gQUxJR04oSFVGX1NZTUJP
TFZBTFVFX01BWCArIDEsIHNpemVvZihVMzIpKSA+PiAyOw0KKw0KKwlpZiAoKHNwYWNlVXNlZDMy
IDw8IDIpID4gd29ya3NwYWNlU2l6ZSkNCisJCXJldHVybiBFUlJPUih0YWJsZUxvZ190b29MYXJn
ZSk7DQorCXdvcmtzcGFjZSA9IChVMzIgKil3b3Jrc3BhY2UgKyBzcGFjZVVzZWQzMjsNCisJd29y
a3NwYWNlU2l6ZSAtPSAoc3BhY2VVc2VkMzIgPDwgMik7DQorDQorCUhVRl9TVEFUSUNfQVNTRVJU
KHNpemVvZihEVGFibGVEZXNjKSA9PSBzaXplb2YoSFVGX0RUYWJsZSkpOw0KKwkvKiBtZW1zZXQo
aHVmZldlaWdodCwgMCwgc2l6ZW9mKGh1ZmZXZWlnaHQpKTsgKi8gLyogaXMgbm90IG5lY2Vzc2Fy
eSwgZXZlbiB0aG91Z2ggc29tZSBhbmFseXplciBjb21wbGFpbiAuLi4gKi8NCisNCisJaVNpemUg
PSBIVUZfcmVhZFN0YXRzX3drc3AoaHVmZldlaWdodCwgSFVGX1NZTUJPTFZBTFVFX01BWCArIDEs
IHJhbmtWYWwsICZuYlN5bWJvbHMsICZ0YWJsZUxvZywgc3JjLCBzcmNTaXplLCB3b3Jrc3BhY2Us
IHdvcmtzcGFjZVNpemUpOw0KKwlpZiAoSFVGX2lzRXJyb3IoaVNpemUpKQ0KKwkJcmV0dXJuIGlT
aXplOw0KKw0KKwkvKiBUYWJsZSBoZWFkZXIgKi8NCisJew0KKwkJRFRhYmxlRGVzYyBkdGQgPSBI
VUZfZ2V0RFRhYmxlRGVzYyhEVGFibGUpOw0KKwkJaWYgKHRhYmxlTG9nID4gKFUzMikoZHRkLm1h
eFRhYmxlTG9nICsgMSkpDQorCQkJcmV0dXJuIEVSUk9SKHRhYmxlTG9nX3Rvb0xhcmdlKTsgLyog
RFRhYmxlIHRvbyBzbWFsbCwgSHVmZm1hbiB0cmVlIGNhbm5vdCBmaXQgaW4gKi8NCisJCWR0ZC50
YWJsZVR5cGUgPSAwOw0KKwkJZHRkLnRhYmxlTG9nID0gKEJZVEUpdGFibGVMb2c7DQorCQltZW1j
cHkoRFRhYmxlLCAmZHRkLCBzaXplb2YoZHRkKSk7DQorCX0NCisNCisJLyogQ2FsY3VsYXRlIHN0
YXJ0aW5nIHZhbHVlIGZvciBlYWNoIHJhbmsgKi8NCisJew0KKwkJVTMyIG4sIG5leHRSYW5rU3Rh
cnQgPSAwOw0KKwkJZm9yIChuID0gMTsgbiA8IHRhYmxlTG9nICsgMTsgbisrKSB7DQorCQkJVTMy
IGNvbnN0IGN1cnIgPSBuZXh0UmFua1N0YXJ0Ow0KKwkJCW5leHRSYW5rU3RhcnQgKz0gKHJhbmtW
YWxbbl0gPDwgKG4gLSAxKSk7DQorCQkJcmFua1ZhbFtuXSA9IGN1cnI7DQorCQl9DQorCX0NCisN
CisJLyogZmlsbCBEVGFibGUgKi8NCisJew0KKwkJVTMyIG47DQorCQlmb3IgKG4gPSAwOyBuIDwg
bmJTeW1ib2xzOyBuKyspIHsNCisJCQlVMzIgY29uc3QgdyA9IGh1ZmZXZWlnaHRbbl07DQorCQkJ
VTMyIGNvbnN0IGxlbmd0aCA9ICgxIDw8IHcpID4+IDE7DQorCQkJVTMyIHU7DQorCQkJSFVGX0RF
bHRYMiBEOw0KKwkJCUQuYnl0ZSA9IChCWVRFKW47DQorCQkJRC5uYkJpdHMgPSAoQllURSkodGFi
bGVMb2cgKyAxIC0gdyk7DQorCQkJZm9yICh1ID0gcmFua1ZhbFt3XTsgdSA8IHJhbmtWYWxbd10g
KyBsZW5ndGg7IHUrKykNCisJCQkJZHRbdV0gPSBEOw0KKwkJCXJhbmtWYWxbd10gKz0gbGVuZ3Ro
Ow0KKwkJfQ0KKwl9DQorDQorCXJldHVybiBpU2l6ZTsNCit9DQorDQorc3RhdGljIEJZVEUgSFVG
X2RlY29kZVN5bWJvbFgyKEJJVF9EU3RyZWFtX3QgKkRzdHJlYW0sIGNvbnN0IEhVRl9ERWx0WDIg
KmR0LCBjb25zdCBVMzIgZHRMb2cpDQorew0KKwlzaXplX3QgY29uc3QgdmFsID0gQklUX2xvb2tC
aXRzRmFzdChEc3RyZWFtLCBkdExvZyk7IC8qIG5vdGUgOiBkdExvZyA+PSAxICovDQorCUJZVEUg
Y29uc3QgYyA9IGR0W3ZhbF0uYnl0ZTsNCisJQklUX3NraXBCaXRzKERzdHJlYW0sIGR0W3ZhbF0u
bmJCaXRzKTsNCisJcmV0dXJuIGM7DQorfQ0KKw0KKyNkZWZpbmUgSFVGX0RFQ09ERV9TWU1CT0xY
Ml8wKHB0ciwgRFN0cmVhbVB0cikgKnB0cisrID0gSFVGX2RlY29kZVN5bWJvbFgyKERTdHJlYW1Q
dHIsIGR0LCBkdExvZykNCisNCisjZGVmaW5lIEhVRl9ERUNPREVfU1lNQk9MWDJfMShwdHIsIERT
dHJlYW1QdHIpICAgICAgICAgXA0KKwlpZiAoWlNURF82NGJpdHMoKSB8fCAoSFVGX1RBQkxFTE9H
X01BWCA8PSAxMikpIFwNCisJSFVGX0RFQ09ERV9TWU1CT0xYMl8wKHB0ciwgRFN0cmVhbVB0cikN
CisNCisjZGVmaW5lIEhVRl9ERUNPREVfU1lNQk9MWDJfMihwdHIsIERTdHJlYW1QdHIpIFwNCisJ
aWYgKFpTVERfNjRiaXRzKCkpICAgICAgICAgICAgICAgICAgICAgXA0KKwlIVUZfREVDT0RFX1NZ
TUJPTFgyXzAocHRyLCBEU3RyZWFtUHRyKQ0KKw0KK0ZPUkNFX0lOTElORSBzaXplX3QgSFVGX2Rl
Y29kZVN0cmVhbVgyKEJZVEUgKnAsIEJJVF9EU3RyZWFtX3QgKmNvbnN0IGJpdERQdHIsIEJZVEUg
KmNvbnN0IHBFbmQsIGNvbnN0IEhVRl9ERWx0WDIgKmNvbnN0IGR0LCBjb25zdCBVMzIgZHRMb2cp
DQorew0KKwlCWVRFICpjb25zdCBwU3RhcnQgPSBwOw0KKw0KKwkvKiB1cCB0byA0IHN5bWJvbHMg
YXQgYSB0aW1lICovDQorCXdoaWxlICgoQklUX3JlbG9hZERTdHJlYW0oYml0RFB0cikgPT0gQklU
X0RTdHJlYW1fdW5maW5pc2hlZCkgJiYgKHAgPD0gcEVuZCAtIDQpKSB7DQorCQlIVUZfREVDT0RF
X1NZTUJPTFgyXzIocCwgYml0RFB0cik7DQorCQlIVUZfREVDT0RFX1NZTUJPTFgyXzEocCwgYml0
RFB0cik7DQorCQlIVUZfREVDT0RFX1NZTUJPTFgyXzIocCwgYml0RFB0cik7DQorCQlIVUZfREVD
T0RFX1NZTUJPTFgyXzAocCwgYml0RFB0cik7DQorCX0NCisNCisJLyogY2xvc2VyIHRvIHRoZSBl
bmQgKi8NCisJd2hpbGUgKChCSVRfcmVsb2FkRFN0cmVhbShiaXREUHRyKSA9PSBCSVRfRFN0cmVh
bV91bmZpbmlzaGVkKSAmJiAocCA8IHBFbmQpKQ0KKwkJSFVGX0RFQ09ERV9TWU1CT0xYMl8wKHAs
IGJpdERQdHIpOw0KKw0KKwkvKiBubyBtb3JlIGRhdGEgdG8gcmV0cmlldmUgZnJvbSBiaXRzdHJl
YW0sIGhlbmNlIG5vIG5lZWQgdG8gcmVsb2FkICovDQorCXdoaWxlIChwIDwgcEVuZCkNCisJCUhV
Rl9ERUNPREVfU1lNQk9MWDJfMChwLCBiaXREUHRyKTsNCisNCisJcmV0dXJuIHBFbmQgLSBwU3Rh
cnQ7DQorfQ0KKw0KK3N0YXRpYyBzaXplX3QgSFVGX2RlY29tcHJlc3MxWDJfdXNpbmdEVGFibGVf
aW50ZXJuYWwodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6
ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorew0KKwlCWVRFICpvcCA9
IChCWVRFICopZHN0Ow0KKwlCWVRFICpjb25zdCBvZW5kID0gb3AgKyBkc3RTaXplOw0KKwljb25z
dCB2b2lkICpkdFB0ciA9IERUYWJsZSArIDE7DQorCWNvbnN0IEhVRl9ERWx0WDIgKmNvbnN0IGR0
ID0gKGNvbnN0IEhVRl9ERWx0WDIgKilkdFB0cjsNCisJQklUX0RTdHJlYW1fdCBiaXREOw0KKwlE
VGFibGVEZXNjIGNvbnN0IGR0ZCA9IEhVRl9nZXREVGFibGVEZXNjKERUYWJsZSk7DQorCVUzMiBj
b25zdCBkdExvZyA9IGR0ZC50YWJsZUxvZzsNCisNCisJew0KKwkJc2l6ZV90IGNvbnN0IGVycm9y
Q29kZSA9IEJJVF9pbml0RFN0cmVhbSgmYml0RCwgY1NyYywgY1NyY1NpemUpOw0KKwkJaWYgKEhV
Rl9pc0Vycm9yKGVycm9yQ29kZSkpDQorCQkJcmV0dXJuIGVycm9yQ29kZTsNCisJfQ0KKw0KKwlI
VUZfZGVjb2RlU3RyZWFtWDIob3AsICZiaXRELCBvZW5kLCBkdCwgZHRMb2cpOw0KKw0KKwkvKiBj
aGVjayAqLw0KKwlpZiAoIUJJVF9lbmRPZkRTdHJlYW0oJmJpdEQpKQ0KKwkJcmV0dXJuIEVSUk9S
KGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKw0KKwlyZXR1cm4gZHN0U2l6ZTsNCit9DQorDQorc2l6
ZV90IEhVRl9kZWNvbXByZXNzMVgyX3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNp
emUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAq
RFRhYmxlKQ0KK3sNCisJRFRhYmxlRGVzYyBkdGQgPSBIVUZfZ2V0RFRhYmxlRGVzYyhEVGFibGUp
Ow0KKwlpZiAoZHRkLnRhYmxlVHlwZSAhPSAwKQ0KKwkJcmV0dXJuIEVSUk9SKEdFTkVSSUMpOw0K
KwlyZXR1cm4gSFVGX2RlY29tcHJlc3MxWDJfdXNpbmdEVGFibGVfaW50ZXJuYWwoZHN0LCBkc3RT
aXplLCBjU3JjLCBjU3JjU2l6ZSwgRFRhYmxlKTsNCit9DQorDQorc2l6ZV90IEhVRl9kZWNvbXBy
ZXNzMVgyX0RDdHhfd2tzcChIVUZfRFRhYmxlICpEQ3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RT
aXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwg
c2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0KKwljb25zdCBCWVRFICppcCA9IChjb25zdCBCWVRF
ICopY1NyYzsNCisNCisJc2l6ZV90IGNvbnN0IGhTaXplID0gSFVGX3JlYWREVGFibGVYMl93a3Nw
KERDdHgsIGNTcmMsIGNTcmNTaXplLCB3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0KKwlpZiAo
SFVGX2lzRXJyb3IoaFNpemUpKQ0KKwkJcmV0dXJuIGhTaXplOw0KKwlpZiAoaFNpemUgPj0gY1Ny
Y1NpemUpDQorCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorCWlwICs9IGhTaXplOw0K
KwljU3JjU2l6ZSAtPSBoU2l6ZTsNCisNCisJcmV0dXJuIEhVRl9kZWNvbXByZXNzMVgyX3VzaW5n
RFRhYmxlX2ludGVybmFsKGRzdCwgZHN0U2l6ZSwgaXAsIGNTcmNTaXplLCBEQ3R4KTsNCit9DQor
DQorc3RhdGljIHNpemVfdCBIVUZfZGVjb21wcmVzczRYMl91c2luZ0RUYWJsZV9pbnRlcm5hbCh2
b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1Np
emUsIGNvbnN0IEhVRl9EVGFibGUgKkRUYWJsZSkNCit7DQorCS8qIENoZWNrICovDQorCWlmIChj
U3JjU2l6ZSA8IDEwKQ0KKwkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOyAvKiBz
dHJpY3QgbWluaW11bSA6IGp1bXAgdGFibGUgKyAxIGJ5dGUgcGVyIHN0cmVhbSAqLw0KKw0KKwl7
DQorCQljb25zdCBCWVRFICpjb25zdCBpc3RhcnQgPSAoY29uc3QgQllURSAqKWNTcmM7DQorCQlC
WVRFICpjb25zdCBvc3RhcnQgPSAoQllURSAqKWRzdDsNCisJCUJZVEUgKmNvbnN0IG9lbmQgPSBv
c3RhcnQgKyBkc3RTaXplOw0KKwkJY29uc3Qgdm9pZCAqY29uc3QgZHRQdHIgPSBEVGFibGUgKyAx
Ow0KKwkJY29uc3QgSFVGX0RFbHRYMiAqY29uc3QgZHQgPSAoY29uc3QgSFVGX0RFbHRYMiAqKWR0
UHRyOw0KKw0KKwkJLyogSW5pdCAqLw0KKwkJQklUX0RTdHJlYW1fdCBiaXREMTsNCisJCUJJVF9E
U3RyZWFtX3QgYml0RDI7DQorCQlCSVRfRFN0cmVhbV90IGJpdEQzOw0KKwkJQklUX0RTdHJlYW1f
dCBiaXRENDsNCisJCXNpemVfdCBjb25zdCBsZW5ndGgxID0gWlNURF9yZWFkTEUxNihpc3RhcnQp
Ow0KKwkJc2l6ZV90IGNvbnN0IGxlbmd0aDIgPSBaU1REX3JlYWRMRTE2KGlzdGFydCArIDIpOw0K
KwkJc2l6ZV90IGNvbnN0IGxlbmd0aDMgPSBaU1REX3JlYWRMRTE2KGlzdGFydCArIDQpOw0KKwkJ
c2l6ZV90IGNvbnN0IGxlbmd0aDQgPSBjU3JjU2l6ZSAtIChsZW5ndGgxICsgbGVuZ3RoMiArIGxl
bmd0aDMgKyA2KTsNCisJCWNvbnN0IEJZVEUgKmNvbnN0IGlzdGFydDEgPSBpc3RhcnQgKyA2OyAv
KiBqdW1wVGFibGUgKi8NCisJCWNvbnN0IEJZVEUgKmNvbnN0IGlzdGFydDIgPSBpc3RhcnQxICsg
bGVuZ3RoMTsNCisJCWNvbnN0IEJZVEUgKmNvbnN0IGlzdGFydDMgPSBpc3RhcnQyICsgbGVuZ3Ro
MjsNCisJCWNvbnN0IEJZVEUgKmNvbnN0IGlzdGFydDQgPSBpc3RhcnQzICsgbGVuZ3RoMzsNCisJ
CWNvbnN0IHNpemVfdCBzZWdtZW50U2l6ZSA9IChkc3RTaXplICsgMykgLyA0Ow0KKwkJQllURSAq
Y29uc3Qgb3BTdGFydDIgPSBvc3RhcnQgKyBzZWdtZW50U2l6ZTsNCisJCUJZVEUgKmNvbnN0IG9w
U3RhcnQzID0gb3BTdGFydDIgKyBzZWdtZW50U2l6ZTsNCisJCUJZVEUgKmNvbnN0IG9wU3RhcnQ0
ID0gb3BTdGFydDMgKyBzZWdtZW50U2l6ZTsNCisJCUJZVEUgKm9wMSA9IG9zdGFydDsNCisJCUJZ
VEUgKm9wMiA9IG9wU3RhcnQyOw0KKwkJQllURSAqb3AzID0gb3BTdGFydDM7DQorCQlCWVRFICpv
cDQgPSBvcFN0YXJ0NDsNCisJCVUzMiBlbmRTaWduYWw7DQorCQlEVGFibGVEZXNjIGNvbnN0IGR0
ZCA9IEhVRl9nZXREVGFibGVEZXNjKERUYWJsZSk7DQorCQlVMzIgY29uc3QgZHRMb2cgPSBkdGQu
dGFibGVMb2c7DQorDQorCQlpZiAobGVuZ3RoNCA+IGNTcmNTaXplKQ0KKwkJCXJldHVybiBFUlJP
Uihjb3JydXB0aW9uX2RldGVjdGVkKTsgLyogb3ZlcmZsb3cgKi8NCisJCXsNCisJCQlzaXplX3Qg
Y29uc3QgZXJyb3JDb2RlID0gQklUX2luaXREU3RyZWFtKCZiaXREMSwgaXN0YXJ0MSwgbGVuZ3Ro
MSk7DQorCQkJaWYgKEhVRl9pc0Vycm9yKGVycm9yQ29kZSkpDQorCQkJCXJldHVybiBlcnJvckNv
ZGU7DQorCQl9DQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IGVycm9yQ29kZSA9IEJJVF9pbml0RFN0
cmVhbSgmYml0RDIsIGlzdGFydDIsIGxlbmd0aDIpOw0KKwkJCWlmIChIVUZfaXNFcnJvcihlcnJv
ckNvZGUpKQ0KKwkJCQlyZXR1cm4gZXJyb3JDb2RlOw0KKwkJfQ0KKwkJew0KKwkJCXNpemVfdCBj
b25zdCBlcnJvckNvZGUgPSBCSVRfaW5pdERTdHJlYW0oJmJpdEQzLCBpc3RhcnQzLCBsZW5ndGgz
KTsNCisJCQlpZiAoSFVGX2lzRXJyb3IoZXJyb3JDb2RlKSkNCisJCQkJcmV0dXJuIGVycm9yQ29k
ZTsNCisJCX0NCisJCXsNCisJCQlzaXplX3QgY29uc3QgZXJyb3JDb2RlID0gQklUX2luaXREU3Ry
ZWFtKCZiaXRENCwgaXN0YXJ0NCwgbGVuZ3RoNCk7DQorCQkJaWYgKEhVRl9pc0Vycm9yKGVycm9y
Q29kZSkpDQorCQkJCXJldHVybiBlcnJvckNvZGU7DQorCQl9DQorDQorCQkvKiAxNi0zMiBzeW1i
b2xzIHBlciBsb29wICg0LTggc3ltYm9scyBwZXIgc3RyZWFtKSAqLw0KKwkJZW5kU2lnbmFsID0g
QklUX3JlbG9hZERTdHJlYW0oJmJpdEQxKSB8IEJJVF9yZWxvYWREU3RyZWFtKCZiaXREMikgfCBC
SVRfcmVsb2FkRFN0cmVhbSgmYml0RDMpIHwgQklUX3JlbG9hZERTdHJlYW0oJmJpdEQ0KTsNCisJ
CWZvciAoOyAoZW5kU2lnbmFsID09IEJJVF9EU3RyZWFtX3VuZmluaXNoZWQpICYmIChvcDQgPCAo
b2VuZCAtIDcpKTspIHsNCisJCQlIVUZfREVDT0RFX1NZTUJPTFgyXzIob3AxLCAmYml0RDEpOw0K
KwkJCUhVRl9ERUNPREVfU1lNQk9MWDJfMihvcDIsICZiaXREMik7DQorCQkJSFVGX0RFQ09ERV9T
WU1CT0xYMl8yKG9wMywgJmJpdEQzKTsNCisJCQlIVUZfREVDT0RFX1NZTUJPTFgyXzIob3A0LCAm
Yml0RDQpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDJfMShvcDEsICZiaXREMSk7DQorCQkJSFVG
X0RFQ09ERV9TWU1CT0xYMl8xKG9wMiwgJmJpdEQyKTsNCisJCQlIVUZfREVDT0RFX1NZTUJPTFgy
XzEob3AzLCAmYml0RDMpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDJfMShvcDQsICZiaXRENCk7
DQorCQkJSFVGX0RFQ09ERV9TWU1CT0xYMl8yKG9wMSwgJmJpdEQxKTsNCisJCQlIVUZfREVDT0RF
X1NZTUJPTFgyXzIob3AyLCAmYml0RDIpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDJfMihvcDMs
ICZiaXREMyk7DQorCQkJSFVGX0RFQ09ERV9TWU1CT0xYMl8yKG9wNCwgJmJpdEQ0KTsNCisJCQlI
VUZfREVDT0RFX1NZTUJPTFgyXzAob3AxLCAmYml0RDEpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9M
WDJfMChvcDIsICZiaXREMik7DQorCQkJSFVGX0RFQ09ERV9TWU1CT0xYMl8wKG9wMywgJmJpdEQz
KTsNCisJCQlIVUZfREVDT0RFX1NZTUJPTFgyXzAob3A0LCAmYml0RDQpOw0KKwkJCWVuZFNpZ25h
bCA9IEJJVF9yZWxvYWREU3RyZWFtKCZiaXREMSkgfCBCSVRfcmVsb2FkRFN0cmVhbSgmYml0RDIp
IHwgQklUX3JlbG9hZERTdHJlYW0oJmJpdEQzKSB8IEJJVF9yZWxvYWREU3RyZWFtKCZiaXRENCk7
DQorCQl9DQorDQorCQkvKiBjaGVjayBjb3JydXB0aW9uICovDQorCQlpZiAob3AxID4gb3BTdGFy
dDIpDQorCQkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKwkJaWYgKG9wMiA+
IG9wU3RhcnQzKQ0KKwkJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisJCWlm
IChvcDMgPiBvcFN0YXJ0NCkNCisJCQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7
DQorCQkvKiBub3RlIDogb3A0IHN1cHBvc2VkIGFscmVhZHkgdmVyaWZpZWQgd2l0aGluIG1haW4g
bG9vcCAqLw0KKw0KKwkJLyogZmluaXNoIGJpdFN0cmVhbXMgb25lIGJ5IG9uZSAqLw0KKwkJSFVG
X2RlY29kZVN0cmVhbVgyKG9wMSwgJmJpdEQxLCBvcFN0YXJ0MiwgZHQsIGR0TG9nKTsNCisJCUhV
Rl9kZWNvZGVTdHJlYW1YMihvcDIsICZiaXREMiwgb3BTdGFydDMsIGR0LCBkdExvZyk7DQorCQlI
VUZfZGVjb2RlU3RyZWFtWDIob3AzLCAmYml0RDMsIG9wU3RhcnQ0LCBkdCwgZHRMb2cpOw0KKwkJ
SFVGX2RlY29kZVN0cmVhbVgyKG9wNCwgJmJpdEQ0LCBvZW5kLCBkdCwgZHRMb2cpOw0KKw0KKwkJ
LyogY2hlY2sgKi8NCisJCWVuZFNpZ25hbCA9IEJJVF9lbmRPZkRTdHJlYW0oJmJpdEQxKSAmIEJJ
VF9lbmRPZkRTdHJlYW0oJmJpdEQyKSAmIEJJVF9lbmRPZkRTdHJlYW0oJmJpdEQzKSAmIEJJVF9l
bmRPZkRTdHJlYW0oJmJpdEQ0KTsNCisJCWlmICghZW5kU2lnbmFsKQ0KKwkJCXJldHVybiBFUlJP
Uihjb3JydXB0aW9uX2RldGVjdGVkKTsNCisNCisJCS8qIGRlY29kZWQgc2l6ZSAqLw0KKwkJcmV0
dXJuIGRzdFNpemU7DQorCX0NCit9DQorDQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFgyX3VzaW5n
RFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVf
dCBjU3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAqRFRhYmxlKQ0KK3sNCisJRFRhYmxlRGVzYyBk
dGQgPSBIVUZfZ2V0RFRhYmxlRGVzYyhEVGFibGUpOw0KKwlpZiAoZHRkLnRhYmxlVHlwZSAhPSAw
KQ0KKwkJcmV0dXJuIEVSUk9SKEdFTkVSSUMpOw0KKwlyZXR1cm4gSFVGX2RlY29tcHJlc3M0WDJf
dXNpbmdEVGFibGVfaW50ZXJuYWwoZHN0LCBkc3RTaXplLCBjU3JjLCBjU3JjU2l6ZSwgRFRhYmxl
KTsNCit9DQorDQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFgyX0RDdHhfd2tzcChIVUZfRFRhYmxl
ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXpl
X3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0K
Kwljb25zdCBCWVRFICppcCA9IChjb25zdCBCWVRFICopY1NyYzsNCisNCisJc2l6ZV90IGNvbnN0
IGhTaXplID0gSFVGX3JlYWREVGFibGVYMl93a3NwKGRjdHgsIGNTcmMsIGNTcmNTaXplLCB3b3Jr
c3BhY2UsIHdvcmtzcGFjZVNpemUpOw0KKwlpZiAoSFVGX2lzRXJyb3IoaFNpemUpKQ0KKwkJcmV0
dXJuIGhTaXplOw0KKwlpZiAoaFNpemUgPj0gY1NyY1NpemUpDQorCQlyZXR1cm4gRVJST1Ioc3Jj
U2l6ZV93cm9uZyk7DQorCWlwICs9IGhTaXplOw0KKwljU3JjU2l6ZSAtPSBoU2l6ZTsNCisNCisJ
cmV0dXJuIEhVRl9kZWNvbXByZXNzNFgyX3VzaW5nRFRhYmxlX2ludGVybmFsKGRzdCwgZHN0U2l6
ZSwgaXAsIGNTcmNTaXplLCBkY3R4KTsNCit9DQorDQorLyogKioqKioqKioqKioqKioqKioqKioq
KioqKi8NCisvKiBkb3VibGUtc3ltYm9scyBkZWNvZGluZyAqLw0KKy8qICoqKioqKioqKioqKioq
KioqKioqKioqKiovDQordHlwZWRlZiBzdHJ1Y3Qgew0KKwlVMTYgc2VxdWVuY2U7DQorCUJZVEUg
bmJCaXRzOw0KKwlCWVRFIGxlbmd0aDsNCit9IEhVRl9ERWx0WDQ7IC8qIGRvdWJsZS1zeW1ib2xz
IGRlY29kaW5nICovDQorDQordHlwZWRlZiBzdHJ1Y3Qgew0KKwlCWVRFIHN5bWJvbDsNCisJQllU
RSB3ZWlnaHQ7DQorfSBzb3J0ZWRTeW1ib2xfdDsNCisNCisvKiBIVUZfZmlsbERUYWJsZVg0TGV2
ZWwyKCkgOg0KKyAqIGByYW5rVmFsT3JpZ2luYCBtdXN0IGJlIGEgdGFibGUgb2YgYXQgbGVhc3Qg
KEhVRl9UQUJMRUxPR19NQVggKyAxKSBVMzIgKi8NCitzdGF0aWMgdm9pZCBIVUZfZmlsbERUYWJs
ZVg0TGV2ZWwyKEhVRl9ERWx0WDQgKkRUYWJsZSwgVTMyIHNpemVMb2csIGNvbnN0IFUzMiBjb25z
dW1lZCwgY29uc3QgVTMyICpyYW5rVmFsT3JpZ2luLCBjb25zdCBpbnQgbWluV2VpZ2h0LA0KKwkJ
CQkgICBjb25zdCBzb3J0ZWRTeW1ib2xfdCAqc29ydGVkU3ltYm9scywgY29uc3QgVTMyIHNvcnRl
ZExpc3RTaXplLCBVMzIgbmJCaXRzQmFzZWxpbmUsIFUxNiBiYXNlU2VxKQ0KK3sNCisJSFVGX0RF
bHRYNCBERWx0Ow0KKwlVMzIgcmFua1ZhbFtIVUZfVEFCTEVMT0dfTUFYICsgMV07DQorDQorCS8q
IGdldCBwcmUtY2FsY3VsYXRlZCByYW5rVmFsICovDQorCW1lbWNweShyYW5rVmFsLCByYW5rVmFs
T3JpZ2luLCBzaXplb2YocmFua1ZhbCkpOw0KKw0KKwkvKiBmaWxsIHNraXBwZWQgdmFsdWVzICov
DQorCWlmIChtaW5XZWlnaHQgPiAxKSB7DQorCQlVMzIgaSwgc2tpcFNpemUgPSByYW5rVmFsW21p
bldlaWdodF07DQorCQlaU1REX3dyaXRlTEUxNigmKERFbHQuc2VxdWVuY2UpLCBiYXNlU2VxKTsN
CisJCURFbHQubmJCaXRzID0gKEJZVEUpKGNvbnN1bWVkKTsNCisJCURFbHQubGVuZ3RoID0gMTsN
CisJCWZvciAoaSA9IDA7IGkgPCBza2lwU2l6ZTsgaSsrKQ0KKwkJCURUYWJsZVtpXSA9IERFbHQ7
DQorCX0NCisNCisJLyogZmlsbCBEVGFibGUgKi8NCisJew0KKwkJVTMyIHM7DQorCQlmb3IgKHMg
PSAwOyBzIDwgc29ydGVkTGlzdFNpemU7IHMrKykgeyAvKiBub3RlIDogc29ydGVkU3ltYm9scyBh
bHJlYWR5IHNraXBwZWQgKi8NCisJCQljb25zdCBVMzIgc3ltYm9sID0gc29ydGVkU3ltYm9sc1tz
XS5zeW1ib2w7DQorCQkJY29uc3QgVTMyIHdlaWdodCA9IHNvcnRlZFN5bWJvbHNbc10ud2VpZ2h0
Ow0KKwkJCWNvbnN0IFUzMiBuYkJpdHMgPSBuYkJpdHNCYXNlbGluZSAtIHdlaWdodDsNCisJCQlj
b25zdCBVMzIgbGVuZ3RoID0gMSA8PCAoc2l6ZUxvZyAtIG5iQml0cyk7DQorCQkJY29uc3QgVTMy
IHN0YXJ0ID0gcmFua1ZhbFt3ZWlnaHRdOw0KKwkJCVUzMiBpID0gc3RhcnQ7DQorCQkJY29uc3Qg
VTMyIGVuZCA9IHN0YXJ0ICsgbGVuZ3RoOw0KKw0KKwkJCVpTVERfd3JpdGVMRTE2KCYoREVsdC5z
ZXF1ZW5jZSksIChVMTYpKGJhc2VTZXEgKyAoc3ltYm9sIDw8IDgpKSk7DQorCQkJREVsdC5uYkJp
dHMgPSAoQllURSkobmJCaXRzICsgY29uc3VtZWQpOw0KKwkJCURFbHQubGVuZ3RoID0gMjsNCisJ
CQlkbyB7DQorCQkJCURUYWJsZVtpKytdID0gREVsdDsNCisJCQl9IHdoaWxlIChpIDwgZW5kKTsg
Lyogc2luY2UgbGVuZ3RoID49IDEgKi8NCisNCisJCQlyYW5rVmFsW3dlaWdodF0gKz0gbGVuZ3Ro
Ow0KKwkJfQ0KKwl9DQorfQ0KKw0KK3R5cGVkZWYgVTMyIHJhbmtWYWxfdFtIVUZfVEFCTEVMT0df
TUFYXVtIVUZfVEFCTEVMT0dfTUFYICsgMV07DQordHlwZWRlZiBVMzIgcmFua1ZhbENvbF90W0hV
Rl9UQUJMRUxPR19NQVggKyAxXTsNCisNCitzdGF0aWMgdm9pZCBIVUZfZmlsbERUYWJsZVg0KEhV
Rl9ERWx0WDQgKkRUYWJsZSwgY29uc3QgVTMyIHRhcmdldExvZywgY29uc3Qgc29ydGVkU3ltYm9s
X3QgKnNvcnRlZExpc3QsIGNvbnN0IFUzMiBzb3J0ZWRMaXN0U2l6ZSwgY29uc3QgVTMyICpyYW5r
U3RhcnQsDQorCQkJICAgICByYW5rVmFsX3QgcmFua1ZhbE9yaWdpbiwgY29uc3QgVTMyIG1heFdl
aWdodCwgY29uc3QgVTMyIG5iQml0c0Jhc2VsaW5lKQ0KK3sNCisJVTMyIHJhbmtWYWxbSFVGX1RB
QkxFTE9HX01BWCArIDFdOw0KKwljb25zdCBpbnQgc2NhbGVMb2cgPSBuYkJpdHNCYXNlbGluZSAt
IHRhcmdldExvZzsgLyogbm90ZSA6IHRhcmdldExvZyA+PSBzcmNMb2csIGhlbmNlIHNjYWxlTG9n
IDw9IDEgKi8NCisJY29uc3QgVTMyIG1pbkJpdHMgPSBuYkJpdHNCYXNlbGluZSAtIG1heFdlaWdo
dDsNCisJVTMyIHM7DQorDQorCW1lbWNweShyYW5rVmFsLCByYW5rVmFsT3JpZ2luLCBzaXplb2Yo
cmFua1ZhbCkpOw0KKw0KKwkvKiBmaWxsIERUYWJsZSAqLw0KKwlmb3IgKHMgPSAwOyBzIDwgc29y
dGVkTGlzdFNpemU7IHMrKykgew0KKwkJY29uc3QgVTE2IHN5bWJvbCA9IHNvcnRlZExpc3Rbc10u
c3ltYm9sOw0KKwkJY29uc3QgVTMyIHdlaWdodCA9IHNvcnRlZExpc3Rbc10ud2VpZ2h0Ow0KKwkJ
Y29uc3QgVTMyIG5iQml0cyA9IG5iQml0c0Jhc2VsaW5lIC0gd2VpZ2h0Ow0KKwkJY29uc3QgVTMy
IHN0YXJ0ID0gcmFua1ZhbFt3ZWlnaHRdOw0KKwkJY29uc3QgVTMyIGxlbmd0aCA9IDEgPDwgKHRh
cmdldExvZyAtIG5iQml0cyk7DQorDQorCQlpZiAodGFyZ2V0TG9nIC0gbmJCaXRzID49IG1pbkJp
dHMpIHsgLyogZW5vdWdoIHJvb20gZm9yIGEgc2Vjb25kIHN5bWJvbCAqLw0KKwkJCVUzMiBzb3J0
ZWRSYW5rOw0KKwkJCWludCBtaW5XZWlnaHQgPSBuYkJpdHMgKyBzY2FsZUxvZzsNCisJCQlpZiAo
bWluV2VpZ2h0IDwgMSkNCisJCQkJbWluV2VpZ2h0ID0gMTsNCisJCQlzb3J0ZWRSYW5rID0gcmFu
a1N0YXJ0W21pbldlaWdodF07DQorCQkJSFVGX2ZpbGxEVGFibGVYNExldmVsMihEVGFibGUgKyBz
dGFydCwgdGFyZ2V0TG9nIC0gbmJCaXRzLCBuYkJpdHMsIHJhbmtWYWxPcmlnaW5bbmJCaXRzXSwg
bWluV2VpZ2h0LCBzb3J0ZWRMaXN0ICsgc29ydGVkUmFuaywNCisJCQkJCSAgICAgICBzb3J0ZWRM
aXN0U2l6ZSAtIHNvcnRlZFJhbmssIG5iQml0c0Jhc2VsaW5lLCBzeW1ib2wpOw0KKwkJfSBlbHNl
IHsNCisJCQlIVUZfREVsdFg0IERFbHQ7DQorCQkJWlNURF93cml0ZUxFMTYoJihERWx0LnNlcXVl
bmNlKSwgc3ltYm9sKTsNCisJCQlERWx0Lm5iQml0cyA9IChCWVRFKShuYkJpdHMpOw0KKwkJCURF
bHQubGVuZ3RoID0gMTsNCisJCQl7DQorCQkJCVUzMiBjb25zdCBlbmQgPSBzdGFydCArIGxlbmd0
aDsNCisJCQkJVTMyIHU7DQorCQkJCWZvciAodSA9IHN0YXJ0OyB1IDwgZW5kOyB1KyspDQorCQkJ
CQlEVGFibGVbdV0gPSBERWx0Ow0KKwkJCX0NCisJCX0NCisJCXJhbmtWYWxbd2VpZ2h0XSArPSBs
ZW5ndGg7DQorCX0NCit9DQorDQorc2l6ZV90IEhVRl9yZWFkRFRhYmxlWDRfd2tzcChIVUZfRFRh
YmxlICpEVGFibGUsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIHZvaWQgKndvcmtz
cGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0KKwlVMzIgdGFibGVMb2csIG1heFcsIHNp
emVPZlNvcnQsIG5iU3ltYm9sczsNCisJRFRhYmxlRGVzYyBkdGQgPSBIVUZfZ2V0RFRhYmxlRGVz
YyhEVGFibGUpOw0KKwlVMzIgY29uc3QgbWF4VGFibGVMb2cgPSBkdGQubWF4VGFibGVMb2c7DQor
CXNpemVfdCBpU2l6ZTsNCisJdm9pZCAqZHRQdHIgPSBEVGFibGUgKyAxOyAvKiBmb3JjZSBjb21w
aWxlciB0byBhdm9pZCBzdHJpY3QtYWxpYXNpbmcgKi8NCisJSFVGX0RFbHRYNCAqY29uc3QgZHQg
PSAoSFVGX0RFbHRYNCAqKWR0UHRyOw0KKwlVMzIgKnJhbmtTdGFydDsNCisNCisJcmFua1ZhbENv
bF90ICpyYW5rVmFsOw0KKwlVMzIgKnJhbmtTdGF0czsNCisJVTMyICpyYW5rU3RhcnQwOw0KKwlz
b3J0ZWRTeW1ib2xfdCAqc29ydGVkU3ltYm9sOw0KKwlCWVRFICp3ZWlnaHRMaXN0Ow0KKwlzaXpl
X3Qgc3BhY2VVc2VkMzIgPSAwOw0KKw0KKwlIVUZfU1RBVElDX0FTU0VSVCgoc2l6ZW9mKHJhbmtW
YWxDb2xfdCkgJiAzKSA9PSAwKTsNCisNCisJcmFua1ZhbCA9IChyYW5rVmFsQ29sX3QgKikoKFUz
MiAqKXdvcmtzcGFjZSArIHNwYWNlVXNlZDMyKTsNCisJc3BhY2VVc2VkMzIgKz0gKHNpemVvZihy
YW5rVmFsQ29sX3QpICogSFVGX1RBQkxFTE9HX01BWCkgPj4gMjsNCisJcmFua1N0YXRzID0gKFUz
MiAqKXdvcmtzcGFjZSArIHNwYWNlVXNlZDMyOw0KKwlzcGFjZVVzZWQzMiArPSBIVUZfVEFCTEVM
T0dfTUFYICsgMTsNCisJcmFua1N0YXJ0MCA9IChVMzIgKil3b3Jrc3BhY2UgKyBzcGFjZVVzZWQz
MjsNCisJc3BhY2VVc2VkMzIgKz0gSFVGX1RBQkxFTE9HX01BWCArIDI7DQorCXNvcnRlZFN5bWJv
bCA9IChzb3J0ZWRTeW1ib2xfdCAqKSgoVTMyICopd29ya3NwYWNlICsgc3BhY2VVc2VkMzIpOw0K
KwlzcGFjZVVzZWQzMiArPSBBTElHTihzaXplb2Yoc29ydGVkU3ltYm9sX3QpICogKEhVRl9TWU1C
T0xWQUxVRV9NQVggKyAxKSwgc2l6ZW9mKFUzMikpID4+IDI7DQorCXdlaWdodExpc3QgPSAoQllU
RSAqKSgoVTMyICopd29ya3NwYWNlICsgc3BhY2VVc2VkMzIpOw0KKwlzcGFjZVVzZWQzMiArPSBB
TElHTihIVUZfU1lNQk9MVkFMVUVfTUFYICsgMSwgc2l6ZW9mKFUzMikpID4+IDI7DQorDQorCWlm
ICgoc3BhY2VVc2VkMzIgPDwgMikgPiB3b3Jrc3BhY2VTaXplKQ0KKwkJcmV0dXJuIEVSUk9SKHRh
YmxlTG9nX3Rvb0xhcmdlKTsNCisJd29ya3NwYWNlID0gKFUzMiAqKXdvcmtzcGFjZSArIHNwYWNl
VXNlZDMyOw0KKwl3b3Jrc3BhY2VTaXplIC09IChzcGFjZVVzZWQzMiA8PCAyKTsNCisNCisJcmFu
a1N0YXJ0ID0gcmFua1N0YXJ0MCArIDE7DQorCW1lbXNldChyYW5rU3RhdHMsIDAsIHNpemVvZihV
MzIpICogKDIgKiBIVUZfVEFCTEVMT0dfTUFYICsgMiArIDEpKTsNCisNCisJSFVGX1NUQVRJQ19B
U1NFUlQoc2l6ZW9mKEhVRl9ERWx0WDQpID09IHNpemVvZihIVUZfRFRhYmxlKSk7IC8qIGlmIGNv
bXBpbGVyIGZhaWxzIGhlcmUsIGFzc2VydGlvbiBpcyB3cm9uZyAqLw0KKwlpZiAobWF4VGFibGVM
b2cgPiBIVUZfVEFCTEVMT0dfTUFYKQ0KKwkJcmV0dXJuIEVSUk9SKHRhYmxlTG9nX3Rvb0xhcmdl
KTsNCisJLyogbWVtc2V0KHdlaWdodExpc3QsIDAsIHNpemVvZih3ZWlnaHRMaXN0KSk7ICovIC8q
IGlzIG5vdCBuZWNlc3NhcnksIGV2ZW4gdGhvdWdoIHNvbWUgYW5hbHl6ZXIgY29tcGxhaW4gLi4u
ICovDQorDQorCWlTaXplID0gSFVGX3JlYWRTdGF0c193a3NwKHdlaWdodExpc3QsIEhVRl9TWU1C
T0xWQUxVRV9NQVggKyAxLCByYW5rU3RhdHMsICZuYlN5bWJvbHMsICZ0YWJsZUxvZywgc3JjLCBz
cmNTaXplLCB3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0KKwlpZiAoSFVGX2lzRXJyb3IoaVNp
emUpKQ0KKwkJcmV0dXJuIGlTaXplOw0KKw0KKwkvKiBjaGVjayByZXN1bHQgKi8NCisJaWYgKHRh
YmxlTG9nID4gbWF4VGFibGVMb2cpDQorCQlyZXR1cm4gRVJST1IodGFibGVMb2dfdG9vTGFyZ2Up
OyAvKiBEVGFibGUgY2FuJ3QgZml0IGNvZGUgZGVwdGggKi8NCisNCisJLyogZmluZCBtYXhXZWln
aHQgKi8NCisJZm9yIChtYXhXID0gdGFibGVMb2c7IHJhbmtTdGF0c1ttYXhXXSA9PSAwOyBtYXhX
LS0pIHsNCisJfSAvKiBuZWNlc3NhcmlseSBmaW5kcyBhIHNvbHV0aW9uIGJlZm9yZSAwICovDQor
DQorCS8qIEdldCBzdGFydCBpbmRleCBvZiBlYWNoIHdlaWdodCAqLw0KKwl7DQorCQlVMzIgdywg
bmV4dFJhbmtTdGFydCA9IDA7DQorCQlmb3IgKHcgPSAxOyB3IDwgbWF4VyArIDE7IHcrKykgew0K
KwkJCVUzMiBjdXJyID0gbmV4dFJhbmtTdGFydDsNCisJCQluZXh0UmFua1N0YXJ0ICs9IHJhbmtT
dGF0c1t3XTsNCisJCQlyYW5rU3RhcnRbd10gPSBjdXJyOw0KKwkJfQ0KKwkJcmFua1N0YXJ0WzBd
ID0gbmV4dFJhbmtTdGFydDsgLyogcHV0IGFsbCAwdyBzeW1ib2xzIGF0IHRoZSBlbmQgb2Ygc29y
dGVkIGxpc3QqLw0KKwkJc2l6ZU9mU29ydCA9IG5leHRSYW5rU3RhcnQ7DQorCX0NCisNCisJLyog
c29ydCBzeW1ib2xzIGJ5IHdlaWdodCAqLw0KKwl7DQorCQlVMzIgczsNCisJCWZvciAocyA9IDA7
IHMgPCBuYlN5bWJvbHM7IHMrKykgew0KKwkJCVUzMiBjb25zdCB3ID0gd2VpZ2h0TGlzdFtzXTsN
CisJCQlVMzIgY29uc3QgciA9IHJhbmtTdGFydFt3XSsrOw0KKwkJCXNvcnRlZFN5bWJvbFtyXS5z
eW1ib2wgPSAoQllURSlzOw0KKwkJCXNvcnRlZFN5bWJvbFtyXS53ZWlnaHQgPSAoQllURSl3Ow0K
KwkJfQ0KKwkJcmFua1N0YXJ0WzBdID0gMDsgLyogZm9yZ2V0IDB3IHN5bWJvbHM7IHRoaXMgaXMg
YmVnaW5uaW5nIG9mIHdlaWdodCgxKSAqLw0KKwl9DQorDQorCS8qIEJ1aWxkIHJhbmtWYWwgKi8N
CisJew0KKwkJVTMyICpjb25zdCByYW5rVmFsMCA9IHJhbmtWYWxbMF07DQorCQl7DQorCQkJaW50
IGNvbnN0IHJlc2NhbGUgPSAobWF4VGFibGVMb2cgLSB0YWJsZUxvZykgLSAxOyAvKiB0YWJsZUxv
ZyA8PSBtYXhUYWJsZUxvZyAqLw0KKwkJCVUzMiBuZXh0UmFua1ZhbCA9IDA7DQorCQkJVTMyIHc7
DQorCQkJZm9yICh3ID0gMTsgdyA8IG1heFcgKyAxOyB3KyspIHsNCisJCQkJVTMyIGN1cnIgPSBu
ZXh0UmFua1ZhbDsNCisJCQkJbmV4dFJhbmtWYWwgKz0gcmFua1N0YXRzW3ddIDw8ICh3ICsgcmVz
Y2FsZSk7DQorCQkJCXJhbmtWYWwwW3ddID0gY3VycjsNCisJCQl9DQorCQl9DQorCQl7DQorCQkJ
VTMyIGNvbnN0IG1pbkJpdHMgPSB0YWJsZUxvZyArIDEgLSBtYXhXOw0KKwkJCVUzMiBjb25zdW1l
ZDsNCisJCQlmb3IgKGNvbnN1bWVkID0gbWluQml0czsgY29uc3VtZWQgPCBtYXhUYWJsZUxvZyAt
IG1pbkJpdHMgKyAxOyBjb25zdW1lZCsrKSB7DQorCQkJCVUzMiAqY29uc3QgcmFua1ZhbFB0ciA9
IHJhbmtWYWxbY29uc3VtZWRdOw0KKwkJCQlVMzIgdzsNCisJCQkJZm9yICh3ID0gMTsgdyA8IG1h
eFcgKyAxOyB3KyspIHsNCisJCQkJCXJhbmtWYWxQdHJbd10gPSByYW5rVmFsMFt3XSA+PiBjb25z
dW1lZDsNCisJCQkJfQ0KKwkJCX0NCisJCX0NCisJfQ0KKw0KKwlIVUZfZmlsbERUYWJsZVg0KGR0
LCBtYXhUYWJsZUxvZywgc29ydGVkU3ltYm9sLCBzaXplT2ZTb3J0LCByYW5rU3RhcnQwLCByYW5r
VmFsLCBtYXhXLCB0YWJsZUxvZyArIDEpOw0KKw0KKwlkdGQudGFibGVMb2cgPSAoQllURSltYXhU
YWJsZUxvZzsNCisJZHRkLnRhYmxlVHlwZSA9IDE7DQorCW1lbWNweShEVGFibGUsICZkdGQsIHNp
emVvZihkdGQpKTsNCisJcmV0dXJuIGlTaXplOw0KK30NCisNCitzdGF0aWMgVTMyIEhVRl9kZWNv
ZGVTeW1ib2xYNCh2b2lkICpvcCwgQklUX0RTdHJlYW1fdCAqRFN0cmVhbSwgY29uc3QgSFVGX0RF
bHRYNCAqZHQsIGNvbnN0IFUzMiBkdExvZykNCit7DQorCXNpemVfdCBjb25zdCB2YWwgPSBCSVRf
bG9va0JpdHNGYXN0KERTdHJlYW0sIGR0TG9nKTsgLyogbm90ZSA6IGR0TG9nID49IDEgKi8NCisJ
bWVtY3B5KG9wLCBkdCArIHZhbCwgMik7DQorCUJJVF9za2lwQml0cyhEU3RyZWFtLCBkdFt2YWxd
Lm5iQml0cyk7DQorCXJldHVybiBkdFt2YWxdLmxlbmd0aDsNCit9DQorDQorc3RhdGljIFUzMiBI
VUZfZGVjb2RlTGFzdFN5bWJvbFg0KHZvaWQgKm9wLCBCSVRfRFN0cmVhbV90ICpEU3RyZWFtLCBj
b25zdCBIVUZfREVsdFg0ICpkdCwgY29uc3QgVTMyIGR0TG9nKQ0KK3sNCisJc2l6ZV90IGNvbnN0
IHZhbCA9IEJJVF9sb29rQml0c0Zhc3QoRFN0cmVhbSwgZHRMb2cpOyAvKiBub3RlIDogZHRMb2cg
Pj0gMSAqLw0KKwltZW1jcHkob3AsIGR0ICsgdmFsLCAxKTsNCisJaWYgKGR0W3ZhbF0ubGVuZ3Ro
ID09IDEpDQorCQlCSVRfc2tpcEJpdHMoRFN0cmVhbSwgZHRbdmFsXS5uYkJpdHMpOw0KKwllbHNl
IHsNCisJCWlmIChEU3RyZWFtLT5iaXRzQ29uc3VtZWQgPCAoc2l6ZW9mKERTdHJlYW0tPmJpdENv
bnRhaW5lcikgKiA4KSkgew0KKwkJCUJJVF9za2lwQml0cyhEU3RyZWFtLCBkdFt2YWxdLm5iQml0
cyk7DQorCQkJaWYgKERTdHJlYW0tPmJpdHNDb25zdW1lZCA+IChzaXplb2YoRFN0cmVhbS0+Yml0
Q29udGFpbmVyKSAqIDgpKQ0KKwkJCQkvKiB1Z2x5IGhhY2s7IHdvcmtzIG9ubHkgYmVjYXVzZSBp
dCdzIHRoZSBsYXN0IHN5bWJvbC4gTm90ZSA6IGNhbid0IGVhc2lseSBleHRyYWN0IG5iQml0cyBm
cm9tIGp1c3QgdGhpcyBzeW1ib2wgKi8NCisJCQkJRFN0cmVhbS0+Yml0c0NvbnN1bWVkID0gKHNp
emVvZihEU3RyZWFtLT5iaXRDb250YWluZXIpICogOCk7DQorCQl9DQorCX0NCisJcmV0dXJuIDE7
DQorfQ0KKw0KKyNkZWZpbmUgSFVGX0RFQ09ERV9TWU1CT0xYNF8wKHB0ciwgRFN0cmVhbVB0cikg
cHRyICs9IEhVRl9kZWNvZGVTeW1ib2xYNChwdHIsIERTdHJlYW1QdHIsIGR0LCBkdExvZykNCisN
CisjZGVmaW5lIEhVRl9ERUNPREVfU1lNQk9MWDRfMShwdHIsIERTdHJlYW1QdHIpICAgICAgICAg
XA0KKwlpZiAoWlNURF82NGJpdHMoKSB8fCAoSFVGX1RBQkxFTE9HX01BWCA8PSAxMikpIFwNCisJ
cHRyICs9IEhVRl9kZWNvZGVTeW1ib2xYNChwdHIsIERTdHJlYW1QdHIsIGR0LCBkdExvZykNCisN
CisjZGVmaW5lIEhVRl9ERUNPREVfU1lNQk9MWDRfMihwdHIsIERTdHJlYW1QdHIpIFwNCisJaWYg
KFpTVERfNjRiaXRzKCkpICAgICAgICAgICAgICAgICAgICAgXA0KKwlwdHIgKz0gSFVGX2RlY29k
ZVN5bWJvbFg0KHB0ciwgRFN0cmVhbVB0ciwgZHQsIGR0TG9nKQ0KKw0KK0ZPUkNFX0lOTElORSBz
aXplX3QgSFVGX2RlY29kZVN0cmVhbVg0KEJZVEUgKnAsIEJJVF9EU3RyZWFtX3QgKmJpdERQdHIs
IEJZVEUgKmNvbnN0IHBFbmQsIGNvbnN0IEhVRl9ERWx0WDQgKmNvbnN0IGR0LCBjb25zdCBVMzIg
ZHRMb2cpDQorew0KKwlCWVRFICpjb25zdCBwU3RhcnQgPSBwOw0KKw0KKwkvKiB1cCB0byA4IHN5
bWJvbHMgYXQgYSB0aW1lICovDQorCXdoaWxlICgoQklUX3JlbG9hZERTdHJlYW0oYml0RFB0cikg
PT0gQklUX0RTdHJlYW1fdW5maW5pc2hlZCkgJiAocCA8IHBFbmQgLSAoc2l6ZW9mKGJpdERQdHIt
PmJpdENvbnRhaW5lcikgLSAxKSkpIHsNCisJCUhVRl9ERUNPREVfU1lNQk9MWDRfMihwLCBiaXRE
UHRyKTsNCisJCUhVRl9ERUNPREVfU1lNQk9MWDRfMShwLCBiaXREUHRyKTsNCisJCUhVRl9ERUNP
REVfU1lNQk9MWDRfMihwLCBiaXREUHRyKTsNCisJCUhVRl9ERUNPREVfU1lNQk9MWDRfMChwLCBi
aXREUHRyKTsNCisJfQ0KKw0KKwkvKiBjbG9zZXIgdG8gZW5kIDogdXAgdG8gMiBzeW1ib2xzIGF0
IGEgdGltZSAqLw0KKwl3aGlsZSAoKEJJVF9yZWxvYWREU3RyZWFtKGJpdERQdHIpID09IEJJVF9E
U3RyZWFtX3VuZmluaXNoZWQpICYgKHAgPD0gcEVuZCAtIDIpKQ0KKwkJSFVGX0RFQ09ERV9TWU1C
T0xYNF8wKHAsIGJpdERQdHIpOw0KKw0KKwl3aGlsZSAocCA8PSBwRW5kIC0gMikNCisJCUhVRl9E
RUNPREVfU1lNQk9MWDRfMChwLCBiaXREUHRyKTsgLyogbm8gbmVlZCB0byByZWxvYWQgOiByZWFj
aGVkIHRoZSBlbmQgb2YgRFN0cmVhbSAqLw0KKw0KKwlpZiAocCA8IHBFbmQpDQorCQlwICs9IEhV
Rl9kZWNvZGVMYXN0U3ltYm9sWDQocCwgYml0RFB0ciwgZHQsIGR0TG9nKTsNCisNCisJcmV0dXJu
IHAgLSBwU3RhcnQ7DQorfQ0KKw0KK3N0YXRpYyBzaXplX3QgSFVGX2RlY29tcHJlc3MxWDRfdXNp
bmdEVGFibGVfaW50ZXJuYWwodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAq
Y1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorew0KKwlC
SVRfRFN0cmVhbV90IGJpdEQ7DQorDQorCS8qIEluaXQgKi8NCisJew0KKwkJc2l6ZV90IGNvbnN0
IGVycm9yQ29kZSA9IEJJVF9pbml0RFN0cmVhbSgmYml0RCwgY1NyYywgY1NyY1NpemUpOw0KKwkJ
aWYgKEhVRl9pc0Vycm9yKGVycm9yQ29kZSkpDQorCQkJcmV0dXJuIGVycm9yQ29kZTsNCisJfQ0K
Kw0KKwkvKiBkZWNvZGUgKi8NCisJew0KKwkJQllURSAqY29uc3Qgb3N0YXJ0ID0gKEJZVEUgKilk
c3Q7DQorCQlCWVRFICpjb25zdCBvZW5kID0gb3N0YXJ0ICsgZHN0U2l6ZTsNCisJCWNvbnN0IHZv
aWQgKmNvbnN0IGR0UHRyID0gRFRhYmxlICsgMTsgLyogZm9yY2UgY29tcGlsZXIgdG8gbm90IHVz
ZSBzdHJpY3QtYWxpYXNpbmcgKi8NCisJCWNvbnN0IEhVRl9ERWx0WDQgKmNvbnN0IGR0ID0gKGNv
bnN0IEhVRl9ERWx0WDQgKilkdFB0cjsNCisJCURUYWJsZURlc2MgY29uc3QgZHRkID0gSFVGX2dl
dERUYWJsZURlc2MoRFRhYmxlKTsNCisJCUhVRl9kZWNvZGVTdHJlYW1YNChvc3RhcnQsICZiaXRE
LCBvZW5kLCBkdCwgZHRkLnRhYmxlTG9nKTsNCisJfQ0KKw0KKwkvKiBjaGVjayAqLw0KKwlpZiAo
IUJJVF9lbmRPZkRTdHJlYW0oJmJpdEQpKQ0KKwkJcmV0dXJuIEVSUk9SKGNvcnJ1cHRpb25fZGV0
ZWN0ZWQpOw0KKw0KKwkvKiBkZWNvZGVkIHNpemUgKi8NCisJcmV0dXJuIGRzdFNpemU7DQorfQ0K
Kw0KK3NpemVfdCBIVUZfZGVjb21wcmVzczFYNF91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVf
dCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0IEhVRl9E
VGFibGUgKkRUYWJsZSkNCit7DQorCURUYWJsZURlc2MgZHRkID0gSFVGX2dldERUYWJsZURlc2Mo
RFRhYmxlKTsNCisJaWYgKGR0ZC50YWJsZVR5cGUgIT0gMSkNCisJCXJldHVybiBFUlJPUihHRU5F
UklDKTsNCisJcmV0dXJuIEhVRl9kZWNvbXByZXNzMVg0X3VzaW5nRFRhYmxlX2ludGVybmFsKGRz
dCwgZHN0U2l6ZSwgY1NyYywgY1NyY1NpemUsIERUYWJsZSk7DQorfQ0KKw0KK3NpemVfdCBIVUZf
ZGVjb21wcmVzczFYNF9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqREN0eCwgdm9pZCAqZHN0LCBzaXpl
X3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCB2b2lkICp3b3Jr
c3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3sNCisJY29uc3QgQllURSAqaXAgPSAoY29u
c3QgQllURSAqKWNTcmM7DQorDQorCXNpemVfdCBjb25zdCBoU2l6ZSA9IEhVRl9yZWFkRFRhYmxl
WDRfd2tzcChEQ3R4LCBjU3JjLCBjU3JjU2l6ZSwgd29ya3NwYWNlLCB3b3Jrc3BhY2VTaXplKTsN
CisJaWYgKEhVRl9pc0Vycm9yKGhTaXplKSkNCisJCXJldHVybiBoU2l6ZTsNCisJaWYgKGhTaXpl
ID49IGNTcmNTaXplKQ0KKwkJcmV0dXJuIEVSUk9SKHNyY1NpemVfd3JvbmcpOw0KKwlpcCArPSBo
U2l6ZTsNCisJY1NyY1NpemUgLT0gaFNpemU7DQorDQorCXJldHVybiBIVUZfZGVjb21wcmVzczFY
NF91c2luZ0RUYWJsZV9pbnRlcm5hbChkc3QsIGRzdFNpemUsIGlwLCBjU3JjU2l6ZSwgREN0eCk7
DQorfQ0KKw0KK3N0YXRpYyBzaXplX3QgSFVGX2RlY29tcHJlc3M0WDRfdXNpbmdEVGFibGVfaW50
ZXJuYWwodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90
IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorew0KKwlpZiAoY1NyY1NpemUg
PCAxMCkNCisJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsgLyogc3RyaWN0IG1p
bmltdW0gOiBqdW1wIHRhYmxlICsgMSBieXRlIHBlciBzdHJlYW0gKi8NCisNCisJew0KKwkJY29u
c3QgQllURSAqY29uc3QgaXN0YXJ0ID0gKGNvbnN0IEJZVEUgKiljU3JjOw0KKwkJQllURSAqY29u
c3Qgb3N0YXJ0ID0gKEJZVEUgKilkc3Q7DQorCQlCWVRFICpjb25zdCBvZW5kID0gb3N0YXJ0ICsg
ZHN0U2l6ZTsNCisJCWNvbnN0IHZvaWQgKmNvbnN0IGR0UHRyID0gRFRhYmxlICsgMTsNCisJCWNv
bnN0IEhVRl9ERWx0WDQgKmNvbnN0IGR0ID0gKGNvbnN0IEhVRl9ERWx0WDQgKilkdFB0cjsNCisN
CisJCS8qIEluaXQgKi8NCisJCUJJVF9EU3RyZWFtX3QgYml0RDE7DQorCQlCSVRfRFN0cmVhbV90
IGJpdEQyOw0KKwkJQklUX0RTdHJlYW1fdCBiaXREMzsNCisJCUJJVF9EU3RyZWFtX3QgYml0RDQ7
DQorCQlzaXplX3QgY29uc3QgbGVuZ3RoMSA9IFpTVERfcmVhZExFMTYoaXN0YXJ0KTsNCisJCXNp
emVfdCBjb25zdCBsZW5ndGgyID0gWlNURF9yZWFkTEUxNihpc3RhcnQgKyAyKTsNCisJCXNpemVf
dCBjb25zdCBsZW5ndGgzID0gWlNURF9yZWFkTEUxNihpc3RhcnQgKyA0KTsNCisJCXNpemVfdCBj
b25zdCBsZW5ndGg0ID0gY1NyY1NpemUgLSAobGVuZ3RoMSArIGxlbmd0aDIgKyBsZW5ndGgzICsg
Nik7DQorCQljb25zdCBCWVRFICpjb25zdCBpc3RhcnQxID0gaXN0YXJ0ICsgNjsgLyoganVtcFRh
YmxlICovDQorCQljb25zdCBCWVRFICpjb25zdCBpc3RhcnQyID0gaXN0YXJ0MSArIGxlbmd0aDE7
DQorCQljb25zdCBCWVRFICpjb25zdCBpc3RhcnQzID0gaXN0YXJ0MiArIGxlbmd0aDI7DQorCQlj
b25zdCBCWVRFICpjb25zdCBpc3RhcnQ0ID0gaXN0YXJ0MyArIGxlbmd0aDM7DQorCQlzaXplX3Qg
Y29uc3Qgc2VnbWVudFNpemUgPSAoZHN0U2l6ZSArIDMpIC8gNDsNCisJCUJZVEUgKmNvbnN0IG9w
U3RhcnQyID0gb3N0YXJ0ICsgc2VnbWVudFNpemU7DQorCQlCWVRFICpjb25zdCBvcFN0YXJ0MyA9
IG9wU3RhcnQyICsgc2VnbWVudFNpemU7DQorCQlCWVRFICpjb25zdCBvcFN0YXJ0NCA9IG9wU3Rh
cnQzICsgc2VnbWVudFNpemU7DQorCQlCWVRFICpvcDEgPSBvc3RhcnQ7DQorCQlCWVRFICpvcDIg
PSBvcFN0YXJ0MjsNCisJCUJZVEUgKm9wMyA9IG9wU3RhcnQzOw0KKwkJQllURSAqb3A0ID0gb3BT
dGFydDQ7DQorCQlVMzIgZW5kU2lnbmFsOw0KKwkJRFRhYmxlRGVzYyBjb25zdCBkdGQgPSBIVUZf
Z2V0RFRhYmxlRGVzYyhEVGFibGUpOw0KKwkJVTMyIGNvbnN0IGR0TG9nID0gZHRkLnRhYmxlTG9n
Ow0KKw0KKwkJaWYgKGxlbmd0aDQgPiBjU3JjU2l6ZSkNCisJCQlyZXR1cm4gRVJST1IoY29ycnVw
dGlvbl9kZXRlY3RlZCk7IC8qIG92ZXJmbG93ICovDQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IGVy
cm9yQ29kZSA9IEJJVF9pbml0RFN0cmVhbSgmYml0RDEsIGlzdGFydDEsIGxlbmd0aDEpOw0KKwkJ
CWlmIChIVUZfaXNFcnJvcihlcnJvckNvZGUpKQ0KKwkJCQlyZXR1cm4gZXJyb3JDb2RlOw0KKwkJ
fQ0KKwkJew0KKwkJCXNpemVfdCBjb25zdCBlcnJvckNvZGUgPSBCSVRfaW5pdERTdHJlYW0oJmJp
dEQyLCBpc3RhcnQyLCBsZW5ndGgyKTsNCisJCQlpZiAoSFVGX2lzRXJyb3IoZXJyb3JDb2RlKSkN
CisJCQkJcmV0dXJuIGVycm9yQ29kZTsNCisJCX0NCisJCXsNCisJCQlzaXplX3QgY29uc3QgZXJy
b3JDb2RlID0gQklUX2luaXREU3RyZWFtKCZiaXREMywgaXN0YXJ0MywgbGVuZ3RoMyk7DQorCQkJ
aWYgKEhVRl9pc0Vycm9yKGVycm9yQ29kZSkpDQorCQkJCXJldHVybiBlcnJvckNvZGU7DQorCQl9
DQorCQl7DQorCQkJc2l6ZV90IGNvbnN0IGVycm9yQ29kZSA9IEJJVF9pbml0RFN0cmVhbSgmYml0
RDQsIGlzdGFydDQsIGxlbmd0aDQpOw0KKwkJCWlmIChIVUZfaXNFcnJvcihlcnJvckNvZGUpKQ0K
KwkJCQlyZXR1cm4gZXJyb3JDb2RlOw0KKwkJfQ0KKw0KKwkJLyogMTYtMzIgc3ltYm9scyBwZXIg
bG9vcCAoNC04IHN5bWJvbHMgcGVyIHN0cmVhbSkgKi8NCisJCWVuZFNpZ25hbCA9IEJJVF9yZWxv
YWREU3RyZWFtKCZiaXREMSkgfCBCSVRfcmVsb2FkRFN0cmVhbSgmYml0RDIpIHwgQklUX3JlbG9h
ZERTdHJlYW0oJmJpdEQzKSB8IEJJVF9yZWxvYWREU3RyZWFtKCZiaXRENCk7DQorCQlmb3IgKDsg
KGVuZFNpZ25hbCA9PSBCSVRfRFN0cmVhbV91bmZpbmlzaGVkKSAmIChvcDQgPCAob2VuZCAtIChz
aXplb2YoYml0RDQuYml0Q29udGFpbmVyKSAtIDEpKSk7KSB7DQorCQkJSFVGX0RFQ09ERV9TWU1C
T0xYNF8yKG9wMSwgJmJpdEQxKTsNCisJCQlIVUZfREVDT0RFX1NZTUJPTFg0XzIob3AyLCAmYml0
RDIpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDRfMihvcDMsICZiaXREMyk7DQorCQkJSFVGX0RF
Q09ERV9TWU1CT0xYNF8yKG9wNCwgJmJpdEQ0KTsNCisJCQlIVUZfREVDT0RFX1NZTUJPTFg0XzEo
b3AxLCAmYml0RDEpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDRfMShvcDIsICZiaXREMik7DQor
CQkJSFVGX0RFQ09ERV9TWU1CT0xYNF8xKG9wMywgJmJpdEQzKTsNCisJCQlIVUZfREVDT0RFX1NZ
TUJPTFg0XzEob3A0LCAmYml0RDQpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDRfMihvcDEsICZi
aXREMSk7DQorCQkJSFVGX0RFQ09ERV9TWU1CT0xYNF8yKG9wMiwgJmJpdEQyKTsNCisJCQlIVUZf
REVDT0RFX1NZTUJPTFg0XzIob3AzLCAmYml0RDMpOw0KKwkJCUhVRl9ERUNPREVfU1lNQk9MWDRf
MihvcDQsICZiaXRENCk7DQorCQkJSFVGX0RFQ09ERV9TWU1CT0xYNF8wKG9wMSwgJmJpdEQxKTsN
CisJCQlIVUZfREVDT0RFX1NZTUJPTFg0XzAob3AyLCAmYml0RDIpOw0KKwkJCUhVRl9ERUNPREVf
U1lNQk9MWDRfMChvcDMsICZiaXREMyk7DQorCQkJSFVGX0RFQ09ERV9TWU1CT0xYNF8wKG9wNCwg
JmJpdEQ0KTsNCisNCisJCQllbmRTaWduYWwgPSBCSVRfcmVsb2FkRFN0cmVhbSgmYml0RDEpIHwg
QklUX3JlbG9hZERTdHJlYW0oJmJpdEQyKSB8IEJJVF9yZWxvYWREU3RyZWFtKCZiaXREMykgfCBC
SVRfcmVsb2FkRFN0cmVhbSgmYml0RDQpOw0KKwkJfQ0KKw0KKwkJLyogY2hlY2sgY29ycnVwdGlv
biAqLw0KKwkJaWYgKG9wMSA+IG9wU3RhcnQyKQ0KKwkJCXJldHVybiBFUlJPUihjb3JydXB0aW9u
X2RldGVjdGVkKTsNCisJCWlmIChvcDIgPiBvcFN0YXJ0MykNCisJCQlyZXR1cm4gRVJST1IoY29y
cnVwdGlvbl9kZXRlY3RlZCk7DQorCQlpZiAob3AzID4gb3BTdGFydDQpDQorCQkJcmV0dXJuIEVS
Uk9SKGNvcnJ1cHRpb25fZGV0ZWN0ZWQpOw0KKwkJLyogbm90ZSA6IG9wNCBhbHJlYWR5IHZlcmlm
aWVkIHdpdGhpbiBtYWluIGxvb3AgKi8NCisNCisJCS8qIGZpbmlzaCBiaXRTdHJlYW1zIG9uZSBi
eSBvbmUgKi8NCisJCUhVRl9kZWNvZGVTdHJlYW1YNChvcDEsICZiaXREMSwgb3BTdGFydDIsIGR0
LCBkdExvZyk7DQorCQlIVUZfZGVjb2RlU3RyZWFtWDQob3AyLCAmYml0RDIsIG9wU3RhcnQzLCBk
dCwgZHRMb2cpOw0KKwkJSFVGX2RlY29kZVN0cmVhbVg0KG9wMywgJmJpdEQzLCBvcFN0YXJ0NCwg
ZHQsIGR0TG9nKTsNCisJCUhVRl9kZWNvZGVTdHJlYW1YNChvcDQsICZiaXRENCwgb2VuZCwgZHQs
IGR0TG9nKTsNCisNCisJCS8qIGNoZWNrICovDQorCQl7DQorCQkJVTMyIGNvbnN0IGVuZENoZWNr
ID0gQklUX2VuZE9mRFN0cmVhbSgmYml0RDEpICYgQklUX2VuZE9mRFN0cmVhbSgmYml0RDIpICYg
QklUX2VuZE9mRFN0cmVhbSgmYml0RDMpICYgQklUX2VuZE9mRFN0cmVhbSgmYml0RDQpOw0KKwkJ
CWlmICghZW5kQ2hlY2spDQorCQkJCXJldHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsN
CisJCX0NCisNCisJCS8qIGRlY29kZWQgc2l6ZSAqLw0KKwkJcmV0dXJuIGRzdFNpemU7DQorCX0N
Cit9DQorDQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFg0X3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwg
c2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3Qg
SFVGX0RUYWJsZSAqRFRhYmxlKQ0KK3sNCisJRFRhYmxlRGVzYyBkdGQgPSBIVUZfZ2V0RFRhYmxl
RGVzYyhEVGFibGUpOw0KKwlpZiAoZHRkLnRhYmxlVHlwZSAhPSAxKQ0KKwkJcmV0dXJuIEVSUk9S
KEdFTkVSSUMpOw0KKwlyZXR1cm4gSFVGX2RlY29tcHJlc3M0WDRfdXNpbmdEVGFibGVfaW50ZXJu
YWwoZHN0LCBkc3RTaXplLCBjU3JjLCBjU3JjU2l6ZSwgRFRhYmxlKTsNCit9DQorDQorc2l6ZV90
IEhVRl9kZWNvbXByZXNzNFg0X0RDdHhfd2tzcChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3Qs
IHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQg
KndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0KKwljb25zdCBCWVRFICppcCA9
IChjb25zdCBCWVRFICopY1NyYzsNCisNCisJc2l6ZV90IGhTaXplID0gSFVGX3JlYWREVGFibGVY
NF93a3NwKGRjdHgsIGNTcmMsIGNTcmNTaXplLCB3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0K
KwlpZiAoSFVGX2lzRXJyb3IoaFNpemUpKQ0KKwkJcmV0dXJuIGhTaXplOw0KKwlpZiAoaFNpemUg
Pj0gY1NyY1NpemUpDQorCQlyZXR1cm4gRVJST1Ioc3JjU2l6ZV93cm9uZyk7DQorCWlwICs9IGhT
aXplOw0KKwljU3JjU2l6ZSAtPSBoU2l6ZTsNCisNCisJcmV0dXJuIEhVRl9kZWNvbXByZXNzNFg0
X3VzaW5nRFRhYmxlX2ludGVybmFsKGRzdCwgZHN0U2l6ZSwgaXAsIGNTcmNTaXplLCBkY3R4KTsN
Cit9DQorDQorLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorLyogR2VuZXJp
YyBkZWNvbXByZXNzaW9uIHNlbGVjdG9yICovDQorLyogKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKiovDQorDQorc2l6ZV90IEhVRl9kZWNvbXByZXNzMVhfdXNpbmdEVGFibGUodm9pZCAq
ZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXpl
LCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorew0KKwlEVGFibGVEZXNjIGNvbnN0IGR0ZCA9
IEhVRl9nZXREVGFibGVEZXNjKERUYWJsZSk7DQorCXJldHVybiBkdGQudGFibGVUeXBlID8gSFVG
X2RlY29tcHJlc3MxWDRfdXNpbmdEVGFibGVfaW50ZXJuYWwoZHN0LCBtYXhEc3RTaXplLCBjU3Jj
LCBjU3JjU2l6ZSwgRFRhYmxlKQ0KKwkJCSAgICAgOiBIVUZfZGVjb21wcmVzczFYMl91c2luZ0RU
YWJsZV9pbnRlcm5hbChkc3QsIG1heERzdFNpemUsIGNTcmMsIGNTcmNTaXplLCBEVGFibGUpOw0K
K30NCisNCitzaXplX3QgSFVGX2RlY29tcHJlc3M0WF91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNp
emVfdCBtYXhEc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0
IEhVRl9EVGFibGUgKkRUYWJsZSkNCit7DQorCURUYWJsZURlc2MgY29uc3QgZHRkID0gSFVGX2dl
dERUYWJsZURlc2MoRFRhYmxlKTsNCisJcmV0dXJuIGR0ZC50YWJsZVR5cGUgPyBIVUZfZGVjb21w
cmVzczRYNF91c2luZ0RUYWJsZV9pbnRlcm5hbChkc3QsIG1heERzdFNpemUsIGNTcmMsIGNTcmNT
aXplLCBEVGFibGUpDQorCQkJICAgICA6IEhVRl9kZWNvbXByZXNzNFgyX3VzaW5nRFRhYmxlX2lu
dGVybmFsKGRzdCwgbWF4RHN0U2l6ZSwgY1NyYywgY1NyY1NpemUsIERUYWJsZSk7DQorfQ0KKw0K
K3R5cGVkZWYgc3RydWN0IHsNCisJVTMyIHRhYmxlVGltZTsNCisJVTMyIGRlY29kZTI1NlRpbWU7
DQorfSBhbGdvX3RpbWVfdDsNCitzdGF0aWMgY29uc3QgYWxnb190aW1lX3QgYWxnb1RpbWVbMTYg
LyogUXVhbnRpemF0aW9uICovXVszIC8qIHNpbmdsZSwgZG91YmxlLCBxdWFkICovXSA9IHsNCisg
ICAgLyogc2luZ2xlLCBkb3VibGUsIHF1YWQgKi8NCisgICAge3swLCAwfSwgezEsIDF9LCB7Miwg
Mn19LAkJICAgICAvKiBRPT0wIDogaW1wb3NzaWJsZSAqLw0KKyAgICB7ezAsIDB9LCB7MSwgMX0s
IHsyLCAyfX0sCQkgICAgIC8qIFE9PTEgOiBpbXBvc3NpYmxlICovDQorICAgIHt7MzgsIDEzMH0s
IHsxMzEzLCA3NH0sIHsyMTUxLCAzOH19LCAgICAgLyogUSA9PSAyIDogMTItMTglICovDQorICAg
IHt7NDQ4LCAxMjh9LCB7MTM1MywgNzR9LCB7MjIzOCwgNDF9fSwgICAgLyogUSA9PSAzIDogMTgt
MjUlICovDQorICAgIHt7NTU2LCAxMjh9LCB7MTM1MywgNzR9LCB7MjIzOCwgNDd9fSwgICAgLyog
USA9PSA0IDogMjUtMzIlICovDQorICAgIHt7NzE0LCAxMjh9LCB7MTQxOCwgNzR9LCB7MjQzNiwg
NTN9fSwgICAgLyogUSA9PSA1IDogMzItMzglICovDQorICAgIHt7ODgzLCAxMjh9LCB7MTQzNywg
NzR9LCB7MjQ2NCwgNjF9fSwgICAgLyogUSA9PSA2IDogMzgtNDQlICovDQorICAgIHt7ODk3LCAx
Mjh9LCB7MTUxNSwgNzV9LCB7MjYyMiwgNjh9fSwgICAgLyogUSA9PSA3IDogNDQtNTAlICovDQor
ICAgIHt7OTI2LCAxMjh9LCB7MTYxMywgNzV9LCB7MjczMCwgNzV9fSwgICAgLyogUSA9PSA4IDog
NTAtNTYlICovDQorICAgIHt7OTQ3LCAxMjh9LCB7MTcyOSwgNzd9LCB7MzM1OSwgNzd9fSwgICAg
LyogUSA9PSA5IDogNTYtNjIlICovDQorICAgIHt7MTEwNywgMTI4fSwgezIwODMsIDgxfSwgezQw
MDYsIDg0fX0sICAgLyogUSA9PTEwIDogNjItNjklICovDQorICAgIHt7MTE3NywgMTI4fSwgezIz
NzksIDg3fSwgezQ3ODUsIDg4fX0sICAgLyogUSA9PTExIDogNjktNzUlICovDQorICAgIHt7MTI0
MiwgMTI4fSwgezI0MTUsIDkzfSwgezUxNTUsIDg0fX0sICAgLyogUSA9PTEyIDogNzUtODElICov
DQorICAgIHt7MTM0OSwgMTI4fSwgezI2NDQsIDEwNn0sIHs1MjYwLCAxMDZ9fSwgLyogUSA9PTEz
IDogODEtODclICovDQorICAgIHt7MTQ1NSwgMTI4fSwgezI0MjIsIDEyNH0sIHs0MTc0LCAxMjR9
fSwgLyogUSA9PTE0IDogODctOTMlICovDQorICAgIHt7NzIyLCAxMjh9LCB7MTg5MSwgMTQ1fSwg
ezE5MzYsIDE0Nn19LCAgLyogUSA9PTE1IDogOTMtOTklICovDQorfTsNCisNCisvKiogSFVGX3Nl
bGVjdERlY29kZXIoKSA6DQorKiAgIFRlbGxzIHdoaWNoIGRlY29kZXIgaXMgbGlrZWx5IHRvIGRl
Y29kZSBmYXN0ZXIsDQorKiAgIGJhc2VkIG9uIGEgc2V0IG9mIHByZS1kZXRlcm1pbmVkIG1ldHJp
Y3MuDQorKiAgIEByZXR1cm4gOiAwPT1IVUZfZGVjb21wcmVzczRYMiwgMT09SFVGX2RlY29tcHJl
c3M0WDQgLg0KKyogICBBc3N1bXB0aW9uIDogMCA8IGNTcmNTaXplIDwgZHN0U2l6ZSA8PSAxMjgg
S0IgKi8NCitVMzIgSFVGX3NlbGVjdERlY29kZXIoc2l6ZV90IGRzdFNpemUsIHNpemVfdCBjU3Jj
U2l6ZSkNCit7DQorCS8qIGRlY29kZXIgdGltaW5nIGV2YWx1YXRpb24gKi8NCisJVTMyIGNvbnN0
IFEgPSAoVTMyKShjU3JjU2l6ZSAqIDE2IC8gZHN0U2l6ZSk7IC8qIFEgPCAxNiBzaW5jZSBkc3RT
aXplID4gY1NyY1NpemUgKi8NCisJVTMyIGNvbnN0IEQyNTYgPSAoVTMyKShkc3RTaXplID4+IDgp
Ow0KKwlVMzIgY29uc3QgRFRpbWUwID0gYWxnb1RpbWVbUV1bMF0udGFibGVUaW1lICsgKGFsZ29U
aW1lW1FdWzBdLmRlY29kZTI1NlRpbWUgKiBEMjU2KTsNCisJVTMyIERUaW1lMSA9IGFsZ29UaW1l
W1FdWzFdLnRhYmxlVGltZSArIChhbGdvVGltZVtRXVsxXS5kZWNvZGUyNTZUaW1lICogRDI1Nik7
DQorCURUaW1lMSArPSBEVGltZTEgPj4gMzsgLyogYWR2YW50YWdlIHRvIGFsZ29yaXRobSB1c2lu
ZyBsZXNzIG1lbW9yeSwgZm9yIGNhY2hlIGV2aWN0aW9uICovDQorDQorCXJldHVybiBEVGltZTEg
PCBEVGltZTA7DQorfQ0KKw0KK3R5cGVkZWYgc2l6ZV90ICgqZGVjb21wcmVzc2lvbkFsZ28pKHZv
aWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6
ZSk7DQorDQorc2l6ZV90IEhVRl9kZWNvbXByZXNzNFhfREN0eF93a3NwKEhVRl9EVGFibGUgKmRj
dHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBj
U3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCit7DQorCS8q
IHZhbGlkYXRpb24gY2hlY2tzICovDQorCWlmIChkc3RTaXplID09IDApDQorCQlyZXR1cm4gRVJS
T1IoZHN0U2l6ZV90b29TbWFsbCk7DQorCWlmIChjU3JjU2l6ZSA+IGRzdFNpemUpDQorCQlyZXR1
cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7IC8qIGludmFsaWQgKi8NCisJaWYgKGNTcmNT
aXplID09IGRzdFNpemUpIHsNCisJCW1lbWNweShkc3QsIGNTcmMsIGRzdFNpemUpOw0KKwkJcmV0
dXJuIGRzdFNpemU7DQorCX0gLyogbm90IGNvbXByZXNzZWQgKi8NCisJaWYgKGNTcmNTaXplID09
IDEpIHsNCisJCW1lbXNldChkc3QsICooY29uc3QgQllURSAqKWNTcmMsIGRzdFNpemUpOw0KKwkJ
cmV0dXJuIGRzdFNpemU7DQorCX0gLyogUkxFICovDQorDQorCXsNCisJCVUzMiBjb25zdCBhbGdv
TmIgPSBIVUZfc2VsZWN0RGVjb2Rlcihkc3RTaXplLCBjU3JjU2l6ZSk7DQorCQlyZXR1cm4gYWxn
b05iID8gSFVGX2RlY29tcHJlc3M0WDRfREN0eF93a3NwKGRjdHgsIGRzdCwgZHN0U2l6ZSwgY1Ny
YywgY1NyY1NpemUsIHdvcmtzcGFjZSwgd29ya3NwYWNlU2l6ZSkNCisJCQkgICAgICA6IEhVRl9k
ZWNvbXByZXNzNFgyX0RDdHhfd2tzcChkY3R4LCBkc3QsIGRzdFNpemUsIGNTcmMsIGNTcmNTaXpl
LCB3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0KKwl9DQorfQ0KKw0KK3NpemVfdCBIVUZfZGVj
b21wcmVzczRYX2h1Zk9ubHlfd2tzcChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVf
dCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtz
cGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0KKwkvKiB2YWxpZGF0aW9uIGNoZWNrcyAq
Lw0KKwlpZiAoZHN0U2l6ZSA9PSAwKQ0KKwkJcmV0dXJuIEVSUk9SKGRzdFNpemVfdG9vU21hbGwp
Ow0KKwlpZiAoKGNTcmNTaXplID49IGRzdFNpemUpIHx8IChjU3JjU2l6ZSA8PSAxKSkNCisJCXJl
dHVybiBFUlJPUihjb3JydXB0aW9uX2RldGVjdGVkKTsgLyogaW52YWxpZCAqLw0KKw0KKwl7DQor
CQlVMzIgY29uc3QgYWxnb05iID0gSFVGX3NlbGVjdERlY29kZXIoZHN0U2l6ZSwgY1NyY1NpemUp
Ow0KKwkJcmV0dXJuIGFsZ29OYiA/IEhVRl9kZWNvbXByZXNzNFg0X0RDdHhfd2tzcChkY3R4LCBk
c3QsIGRzdFNpemUsIGNTcmMsIGNTcmNTaXplLCB3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpDQor
CQkJICAgICAgOiBIVUZfZGVjb21wcmVzczRYMl9EQ3R4X3drc3AoZGN0eCwgZHN0LCBkc3RTaXpl
LCBjU3JjLCBjU3JjU2l6ZSwgd29ya3NwYWNlLCB3b3Jrc3BhY2VTaXplKTsNCisJfQ0KK30NCisN
CitzaXplX3QgSFVGX2RlY29tcHJlc3MxWF9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqZGN0eCwgdm9p
ZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXpl
LCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3sNCisJLyogdmFsaWRh
dGlvbiBjaGVja3MgKi8NCisJaWYgKGRzdFNpemUgPT0gMCkNCisJCXJldHVybiBFUlJPUihkc3RT
aXplX3Rvb1NtYWxsKTsNCisJaWYgKGNTcmNTaXplID4gZHN0U2l6ZSkNCisJCXJldHVybiBFUlJP
Uihjb3JydXB0aW9uX2RldGVjdGVkKTsgLyogaW52YWxpZCAqLw0KKwlpZiAoY1NyY1NpemUgPT0g
ZHN0U2l6ZSkgew0KKwkJbWVtY3B5KGRzdCwgY1NyYywgZHN0U2l6ZSk7DQorCQlyZXR1cm4gZHN0
U2l6ZTsNCisJfSAvKiBub3QgY29tcHJlc3NlZCAqLw0KKwlpZiAoY1NyY1NpemUgPT0gMSkgew0K
KwkJbWVtc2V0KGRzdCwgKihjb25zdCBCWVRFICopY1NyYywgZHN0U2l6ZSk7DQorCQlyZXR1cm4g
ZHN0U2l6ZTsNCisJfSAvKiBSTEUgKi8NCisNCisJew0KKwkJVTMyIGNvbnN0IGFsZ29OYiA9IEhV
Rl9zZWxlY3REZWNvZGVyKGRzdFNpemUsIGNTcmNTaXplKTsNCisJCXJldHVybiBhbGdvTmIgPyBI
VUZfZGVjb21wcmVzczFYNF9EQ3R4X3drc3AoZGN0eCwgZHN0LCBkc3RTaXplLCBjU3JjLCBjU3Jj
U2l6ZSwgd29ya3NwYWNlLCB3b3Jrc3BhY2VTaXplKQ0KKwkJCSAgICAgIDogSFVGX2RlY29tcHJl
c3MxWDJfREN0eF93a3NwKGRjdHgsIGRzdCwgZHN0U2l6ZSwgY1NyYywgY1NyY1NpemUsIHdvcmtz
cGFjZSwgd29ya3NwYWNlU2l6ZSk7DQorCX0NCit9DQpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi96
c3RkL21lbS5oIGIveGVuL2NvbW1vbi96c3RkL21lbS5oDQpuZXcgZmlsZSBtb2RlIDEwMDY0NA0K
aW5kZXggMDAwMDAwMDAwMC4uOTNkN2EyYzM3Nw0KLS0tIC9kZXYvbnVsbA0KKysrIGIveGVuL2Nv
bW1vbi96c3RkL21lbS5oDQpAQCAtMCwwICsxLDE1MSBAQA0KKy8qKg0KKyAqIENvcHlyaWdodCAo
YykgMjAxNi1wcmVzZW50LCBZYW5uIENvbGxldCwgRmFjZWJvb2ssIEluYy4NCisgKiBBbGwgcmln
aHRzIHJlc2VydmVkLg0KKyAqDQorICogVGhpcyBzb3VyY2UgY29kZSBpcyBsaWNlbnNlZCB1bmRl
ciB0aGUgQlNELXN0eWxlIGxpY2Vuc2UgZm91bmQgaW4gdGhlDQorICogTElDRU5TRSBmaWxlIGlu
IHRoZSByb290IGRpcmVjdG9yeSBvZiBodHRwczovL2dpdGh1Yi5jb20vZmFjZWJvb2svenN0ZC4N
CisgKiBBbiBhZGRpdGlvbmFsIGdyYW50IG9mIHBhdGVudCByaWdodHMgY2FuIGJlIGZvdW5kIGlu
IHRoZSBQQVRFTlRTIGZpbGUgaW4gdGhlDQorICogc2FtZSBkaXJlY3RvcnkuDQorICoNCisgKiBU
aGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5k
L29yIG1vZGlmeSBpdCB1bmRlcg0KKyAqIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVi
bGljIExpY2Vuc2UgdmVyc2lvbiAyIGFzIHB1Ymxpc2hlZCBieSB0aGUNCisgKiBGcmVlIFNvZnR3
YXJlIEZvdW5kYXRpb24uIFRoaXMgcHJvZ3JhbSBpcyBkdWFsLWxpY2Vuc2VkOyB5b3UgbWF5IHNl
bGVjdA0KKyAqIGVpdGhlciB2ZXJzaW9uIDIgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNl
bnNlICgiR1BMIikgb3IgQlNEIGxpY2Vuc2UNCisgKiAoIkJTRCIpLg0KKyAqLw0KKw0KKyNpZm5k
ZWYgTUVNX0hfTU9EVUxFDQorI2RlZmluZSBNRU1fSF9NT0RVTEUNCisNCisvKi0qKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgRGVwZW5kZW5jaWVzDQorKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKyNpbmNsdWRlIDxhc20vdW5h
bGlnbmVkLmg+DQorI2luY2x1ZGUgPGxpbnV4L3N0cmluZy5oPiAvKiBtZW1jcHkgKi8NCisjaW5j
bHVkZSA8bGludXgvdHlwZXMuaD4gIC8qIHNpemVfdCwgcHRyZGlmZl90ICovDQorDQorLyotKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIENvbXBpbGVyIHNwZWNp
Zmljcw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVm
aW5lIFpTVERfU1RBVElDIHN0YXRpYyBpbmxpbmUNCisNCisvKi0qKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIEJhc2ljIFR5
cGVzDQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKiovDQordHlwZWRlZiB1aW50OF90IEJZVEU7DQordHlwZWRlZiB1aW50MTZf
dCBVMTY7DQordHlwZWRlZiBpbnQxNl90IFMxNjsNCit0eXBlZGVmIHVpbnQzMl90IFUzMjsNCit0
eXBlZGVmIGludDMyX3QgUzMyOw0KK3R5cGVkZWYgdWludDY0X3QgVTY0Ow0KK3R5cGVkZWYgaW50
NjRfdCBTNjQ7DQordHlwZWRlZiBwdHJkaWZmX3QgaVB0ckRpZmY7DQordHlwZWRlZiB1aW50cHRy
X3QgdVB0ckRpZmY7DQorDQorLyotKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioNCisqICBNZW1vcnkgSS9PDQorKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQor
WlNURF9TVEFUSUMgdW5zaWduZWQgWlNURF8zMmJpdHModm9pZCkgeyByZXR1cm4gc2l6ZW9mKHNp
emVfdCkgPT0gNDsgfQ0KK1pTVERfU1RBVElDIHVuc2lnbmVkIFpTVERfNjRiaXRzKHZvaWQpIHsg
cmV0dXJuIHNpemVvZihzaXplX3QpID09IDg7IH0NCisNCisjaWYgZGVmaW5lZChfX0xJVFRMRV9F
TkRJQU4pDQorI2RlZmluZSBaU1REX0xJVFRMRV9FTkRJQU4gMQ0KKyNlbHNlDQorI2RlZmluZSBa
U1REX0xJVFRMRV9FTkRJQU4gMA0KKyNlbmRpZg0KKw0KK1pTVERfU1RBVElDIHVuc2lnbmVkIFpT
VERfaXNMaXR0bGVFbmRpYW4odm9pZCkgeyByZXR1cm4gWlNURF9MSVRUTEVfRU5ESUFOOyB9DQor
DQorWlNURF9TVEFUSUMgVTE2IFpTVERfcmVhZDE2KGNvbnN0IHZvaWQgKm1lbVB0cikgeyByZXR1
cm4gZ2V0X3VuYWxpZ25lZCgoY29uc3QgVTE2ICopbWVtUHRyKTsgfQ0KKw0KK1pTVERfU1RBVElD
IFUzMiBaU1REX3JlYWQzMihjb25zdCB2b2lkICptZW1QdHIpIHsgcmV0dXJuIGdldF91bmFsaWdu
ZWQoKGNvbnN0IFUzMiAqKW1lbVB0cik7IH0NCisNCitaU1REX1NUQVRJQyBVNjQgWlNURF9yZWFk
NjQoY29uc3Qgdm9pZCAqbWVtUHRyKSB7IHJldHVybiBnZXRfdW5hbGlnbmVkKChjb25zdCBVNjQg
KiltZW1QdHIpOyB9DQorDQorWlNURF9TVEFUSUMgc2l6ZV90IFpTVERfcmVhZFNUKGNvbnN0IHZv
aWQgKm1lbVB0cikgeyByZXR1cm4gZ2V0X3VuYWxpZ25lZCgoY29uc3Qgc2l6ZV90ICopbWVtUHRy
KTsgfQ0KKw0KK1pTVERfU1RBVElDIHZvaWQgWlNURF93cml0ZTE2KHZvaWQgKm1lbVB0ciwgVTE2
IHZhbHVlKSB7IHB1dF91bmFsaWduZWQodmFsdWUsIChVMTYgKiltZW1QdHIpOyB9DQorDQorWlNU
RF9TVEFUSUMgdm9pZCBaU1REX3dyaXRlMzIodm9pZCAqbWVtUHRyLCBVMzIgdmFsdWUpIHsgcHV0
X3VuYWxpZ25lZCh2YWx1ZSwgKFUzMiAqKW1lbVB0cik7IH0NCisNCitaU1REX1NUQVRJQyB2b2lk
IFpTVERfd3JpdGU2NCh2b2lkICptZW1QdHIsIFU2NCB2YWx1ZSkgeyBwdXRfdW5hbGlnbmVkKHZh
bHVlLCAoVTY0ICopbWVtUHRyKTsgfQ0KKw0KKy8qPT09IExpdHRsZSBlbmRpYW4gci93ID09PSov
DQorDQorWlNURF9TVEFUSUMgVTE2IFpTVERfcmVhZExFMTYoY29uc3Qgdm9pZCAqbWVtUHRyKSB7
IHJldHVybiBnZXRfdW5hbGlnbmVkX2xlMTYobWVtUHRyKTsgfQ0KKw0KK1pTVERfU1RBVElDIHZv
aWQgWlNURF93cml0ZUxFMTYodm9pZCAqbWVtUHRyLCBVMTYgdmFsKSB7IHB1dF91bmFsaWduZWRf
bGUxNih2YWwsIG1lbVB0cik7IH0NCisNCitaU1REX1NUQVRJQyBVMzIgWlNURF9yZWFkTEUyNChj
b25zdCB2b2lkICptZW1QdHIpIHsgcmV0dXJuIFpTVERfcmVhZExFMTYobWVtUHRyKSArICgoKGNv
bnN0IEJZVEUgKiltZW1QdHIpWzJdIDw8IDE2KTsgfQ0KKw0KK1pTVERfU1RBVElDIHZvaWQgWlNU
RF93cml0ZUxFMjQodm9pZCAqbWVtUHRyLCBVMzIgdmFsKQ0KK3sNCisJWlNURF93cml0ZUxFMTYo
bWVtUHRyLCAoVTE2KXZhbCk7DQorCSgoQllURSAqKW1lbVB0cilbMl0gPSAoQllURSkodmFsID4+
IDE2KTsNCit9DQorDQorWlNURF9TVEFUSUMgVTMyIFpTVERfcmVhZExFMzIoY29uc3Qgdm9pZCAq
bWVtUHRyKSB7IHJldHVybiBnZXRfdW5hbGlnbmVkX2xlMzIobWVtUHRyKTsgfQ0KKw0KK1pTVERf
U1RBVElDIHZvaWQgWlNURF93cml0ZUxFMzIodm9pZCAqbWVtUHRyLCBVMzIgdmFsMzIpIHsgcHV0
X3VuYWxpZ25lZF9sZTMyKHZhbDMyLCBtZW1QdHIpOyB9DQorDQorWlNURF9TVEFUSUMgVTY0IFpT
VERfcmVhZExFNjQoY29uc3Qgdm9pZCAqbWVtUHRyKSB7IHJldHVybiBnZXRfdW5hbGlnbmVkX2xl
NjQobWVtUHRyKTsgfQ0KKw0KK1pTVERfU1RBVElDIHZvaWQgWlNURF93cml0ZUxFNjQodm9pZCAq
bWVtUHRyLCBVNjQgdmFsNjQpIHsgcHV0X3VuYWxpZ25lZF9sZTY0KHZhbDY0LCBtZW1QdHIpOyB9
DQorDQorWlNURF9TVEFUSUMgc2l6ZV90IFpTVERfcmVhZExFU1QoY29uc3Qgdm9pZCAqbWVtUHRy
KQ0KK3sNCisJaWYgKFpTVERfMzJiaXRzKCkpDQorCQlyZXR1cm4gKHNpemVfdClaU1REX3JlYWRM
RTMyKG1lbVB0cik7DQorCWVsc2UNCisJCXJldHVybiAoc2l6ZV90KVpTVERfcmVhZExFNjQobWVt
UHRyKTsNCit9DQorDQorWlNURF9TVEFUSUMgdm9pZCBaU1REX3dyaXRlTEVTVCh2b2lkICptZW1Q
dHIsIHNpemVfdCB2YWwpDQorew0KKwlpZiAoWlNURF8zMmJpdHMoKSkNCisJCVpTVERfd3JpdGVM
RTMyKG1lbVB0ciwgKFUzMil2YWwpOw0KKwllbHNlDQorCQlaU1REX3dyaXRlTEU2NChtZW1QdHIs
IChVNjQpdmFsKTsNCit9DQorDQorLyo9PT0gQmlnIGVuZGlhbiByL3cgPT09Ki8NCisNCitaU1RE
X1NUQVRJQyBVMzIgWlNURF9yZWFkQkUzMihjb25zdCB2b2lkICptZW1QdHIpIHsgcmV0dXJuIGdl
dF91bmFsaWduZWRfYmUzMihtZW1QdHIpOyB9DQorDQorWlNURF9TVEFUSUMgdm9pZCBaU1REX3dy
aXRlQkUzMih2b2lkICptZW1QdHIsIFUzMiB2YWwzMikgeyBwdXRfdW5hbGlnbmVkX2JlMzIodmFs
MzIsIG1lbVB0cik7IH0NCisNCitaU1REX1NUQVRJQyBVNjQgWlNURF9yZWFkQkU2NChjb25zdCB2
b2lkICptZW1QdHIpIHsgcmV0dXJuIGdldF91bmFsaWduZWRfYmU2NChtZW1QdHIpOyB9DQorDQor
WlNURF9TVEFUSUMgdm9pZCBaU1REX3dyaXRlQkU2NCh2b2lkICptZW1QdHIsIFU2NCB2YWw2NCkg
eyBwdXRfdW5hbGlnbmVkX2JlNjQodmFsNjQsIG1lbVB0cik7IH0NCisNCitaU1REX1NUQVRJQyBz
aXplX3QgWlNURF9yZWFkQkVTVChjb25zdCB2b2lkICptZW1QdHIpDQorew0KKwlpZiAoWlNURF8z
MmJpdHMoKSkNCisJCXJldHVybiAoc2l6ZV90KVpTVERfcmVhZEJFMzIobWVtUHRyKTsNCisJZWxz
ZQ0KKwkJcmV0dXJuIChzaXplX3QpWlNURF9yZWFkQkU2NChtZW1QdHIpOw0KK30NCisNCitaU1RE
X1NUQVRJQyB2b2lkIFpTVERfd3JpdGVCRVNUKHZvaWQgKm1lbVB0ciwgc2l6ZV90IHZhbCkNCit7
DQorCWlmIChaU1REXzMyYml0cygpKQ0KKwkJWlNURF93cml0ZUJFMzIobWVtUHRyLCAoVTMyKXZh
bCk7DQorCWVsc2UNCisJCVpTVERfd3JpdGVCRTY0KG1lbVB0ciwgKFU2NCl2YWwpOw0KK30NCisN
CisvKiBmdW5jdGlvbiBzYWZlIG9ubHkgZm9yIGNvbXBhcmlzb25zICovDQorWlNURF9TVEFUSUMg
VTMyIFpTVERfcmVhZE1JTk1BVENIKGNvbnN0IHZvaWQgKm1lbVB0ciwgVTMyIGxlbmd0aCkNCit7
DQorCXN3aXRjaCAobGVuZ3RoKSB7DQorCWRlZmF1bHQ6DQorCWNhc2UgNDogcmV0dXJuIFpTVERf
cmVhZDMyKG1lbVB0cik7DQorCWNhc2UgMzoNCisJCWlmIChaU1REX2lzTGl0dGxlRW5kaWFuKCkp
DQorCQkJcmV0dXJuIFpTVERfcmVhZDMyKG1lbVB0cikgPDwgODsNCisJCWVsc2UNCisJCQlyZXR1
cm4gWlNURF9yZWFkMzIobWVtUHRyKSA+PiA4Ow0KKwl9DQorfQ0KKw0KKyNlbmRpZiAvKiBNRU1f
SF9NT0RVTEUgKi8NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvenN0ZF9jb21tb24uYyBi
L3hlbi9jb21tb24venN0ZC96c3RkX2NvbW1vbi5jDQpuZXcgZmlsZSBtb2RlIDEwMDY0NA0KaW5k
ZXggMDAwMDAwMDAwMC4uYTI4MjYyNGVlMQ0KLS0tIC9kZXYvbnVsbA0KKysrIGIveGVuL2NvbW1v
bi96c3RkL3pzdGRfY29tbW9uLmMNCkBAIC0wLDAgKzEsNzUgQEANCisvKioNCisgKiBDb3B5cmln
aHQgKGMpIDIwMTYtcHJlc2VudCwgWWFubiBDb2xsZXQsIEZhY2Vib29rLCBJbmMuDQorICogQWxs
IHJpZ2h0cyByZXNlcnZlZC4NCisgKg0KKyAqIFRoaXMgc291cmNlIGNvZGUgaXMgbGljZW5zZWQg
dW5kZXIgdGhlIEJTRC1zdHlsZSBsaWNlbnNlIGZvdW5kIGluIHRoZQ0KKyAqIExJQ0VOU0UgZmls
ZSBpbiB0aGUgcm9vdCBkaXJlY3Rvcnkgb2YgaHR0cHM6Ly9naXRodWIuY29tL2ZhY2Vib29rL3pz
dGQuDQorICogQW4gYWRkaXRpb25hbCBncmFudCBvZiBwYXRlbnQgcmlnaHRzIGNhbiBiZSBmb3Vu
ZCBpbiB0aGUgUEFURU5UUyBmaWxlIGluIHRoZQ0KKyAqIHNhbWUgZGlyZWN0b3J5Lg0KKyAqDQor
ICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0
IGFuZC9vciBtb2RpZnkgaXQgdW5kZXINCisgKiB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFs
IFB1YmxpYyBMaWNlbnNlIHZlcnNpb24gMiBhcyBwdWJsaXNoZWQgYnkgdGhlDQorICogRnJlZSBT
b2Z0d2FyZSBGb3VuZGF0aW9uLiBUaGlzIHByb2dyYW0gaXMgZHVhbC1saWNlbnNlZDsgeW91IG1h
eSBzZWxlY3QNCisgKiBlaXRoZXIgdmVyc2lvbiAyIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMg
TGljZW5zZSAoIkdQTCIpIG9yIEJTRCBsaWNlbnNlDQorICogKCJCU0QiKS4NCisgKi8NCisNCisv
Ki0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgRGVwZW5kZW5jaWVz
DQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKyNpbmNsdWRlICJl
cnJvcl9wcml2YXRlLmgiDQorI2luY2x1ZGUgInpzdGRfaW50ZXJuYWwuaCIgLyogZGVjbGFyYXRp
b24gb2YgWlNURF9pc0Vycm9yLCBaU1REX2dldEVycm9yTmFtZSwgWlNURF9nZXRFcnJvckNvZGUs
IFpTVERfZ2V0RXJyb3JTdHJpbmcsIFpTVERfdmVyc2lvbk51bWJlciAqLw0KKyNpbmNsdWRlIDxs
aW51eC9rZXJuZWwuaD4NCisNCisvKj0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIEN1c3RvbSBhbGxvY2F0b3INCisqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqLw0KKw0KKyNkZWZpbmUgc3RhY2tfcHVzaChzdGFjaywgc2l6ZSkgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcDQorCSh7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgXA0KKwkJdm9pZCAqY29uc3QgcHRyID0gWlNURF9QVFJfQUxJ
R04oKHN0YWNrKS0+cHRyKTsgXA0KKwkJKHN0YWNrKS0+cHRyID0gKGNoYXIgKilwdHIgKyAoc2l6
ZSk7ICAgICAgICAgICAgXA0KKwkJKHN0YWNrKS0+cHRyIDw9IChzdGFjayktPmVuZCA/IHB0ciA6
IE5VTEw7ICAgICAgXA0KKwl9KQ0KKw0KK1pTVERfY3VzdG9tTWVtIFpTVERfaW5pdFN0YWNrKHZv
aWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorew0KKwlaU1REX2N1c3RvbU1l
bSBzdGFja01lbSA9IHtaU1REX3N0YWNrQWxsb2MsIFpTVERfc3RhY2tGcmVlLCB3b3Jrc3BhY2V9
Ow0KKwlaU1REX3N0YWNrICpzdGFjayA9IChaU1REX3N0YWNrICopd29ya3NwYWNlOw0KKwkvKiBW
ZXJpZnkgcHJlY29uZGl0aW9ucyAqLw0KKwlpZiAoIXdvcmtzcGFjZSB8fCB3b3Jrc3BhY2VTaXpl
IDwgc2l6ZW9mKFpTVERfc3RhY2spIHx8IHdvcmtzcGFjZSAhPSBaU1REX1BUUl9BTElHTih3b3Jr
c3BhY2UpKSB7DQorCQlaU1REX2N1c3RvbU1lbSBlcnJvciA9IHtOVUxMLCBOVUxMLCBOVUxMfTsN
CisJCXJldHVybiBlcnJvcjsNCisJfQ0KKwkvKiBJbml0aWFsaXplIHRoZSBzdGFjayAqLw0KKwlz
dGFjay0+cHRyID0gd29ya3NwYWNlOw0KKwlzdGFjay0+ZW5kID0gKGNoYXIgKil3b3Jrc3BhY2Ug
KyB3b3Jrc3BhY2VTaXplOw0KKwlzdGFja19wdXNoKHN0YWNrLCBzaXplb2YoWlNURF9zdGFjaykp
Ow0KKwlyZXR1cm4gc3RhY2tNZW07DQorfQ0KKw0KK3ZvaWQgKlpTVERfc3RhY2tBbGxvY0FsbCh2
b2lkICpvcGFxdWUsIHNpemVfdCAqc2l6ZSkNCit7DQorCVpTVERfc3RhY2sgKnN0YWNrID0gKFpT
VERfc3RhY2sgKilvcGFxdWU7DQorCSpzaXplID0gKEJZVEUgY29uc3QgKilzdGFjay0+ZW5kIC0g
KEJZVEUgKilaU1REX1BUUl9BTElHTihzdGFjay0+cHRyKTsNCisJcmV0dXJuIHN0YWNrX3B1c2go
c3RhY2ssICpzaXplKTsNCit9DQorDQordm9pZCAqWlNURF9zdGFja0FsbG9jKHZvaWQgKm9wYXF1
ZSwgc2l6ZV90IHNpemUpDQorew0KKwlaU1REX3N0YWNrICpzdGFjayA9IChaU1REX3N0YWNrICop
b3BhcXVlOw0KKwlyZXR1cm4gc3RhY2tfcHVzaChzdGFjaywgc2l6ZSk7DQorfQ0KK3ZvaWQgWlNU
RF9zdGFja0ZyZWUodm9pZCAqb3BhcXVlLCB2b2lkICphZGRyZXNzKQ0KK3sNCisJKHZvaWQpb3Bh
cXVlOw0KKwkodm9pZClhZGRyZXNzOw0KK30NCisNCit2b2lkICpaU1REX21hbGxvYyhzaXplX3Qg
c2l6ZSwgWlNURF9jdXN0b21NZW0gY3VzdG9tTWVtKSB7IHJldHVybiBjdXN0b21NZW0uY3VzdG9t
QWxsb2MoY3VzdG9tTWVtLm9wYXF1ZSwgc2l6ZSk7IH0NCisNCit2b2lkIFpTVERfZnJlZSh2b2lk
ICpwdHIsIFpTVERfY3VzdG9tTWVtIGN1c3RvbU1lbSkNCit7DQorCWlmIChwdHIgIT0gTlVMTCkN
CisJCWN1c3RvbU1lbS5jdXN0b21GcmVlKGN1c3RvbU1lbS5vcGFxdWUsIHB0cik7DQorfQ0KZGlm
ZiAtLWdpdCBhL3hlbi9jb21tb24venN0ZC96c3RkX2ludGVybmFsLmggYi94ZW4vY29tbW9uL3pz
dGQvenN0ZF9pbnRlcm5hbC5oDQpuZXcgZmlsZSBtb2RlIDEwMDY0NA0KaW5kZXggMDAwMDAwMDAw
MC4uZGFjNzUzMzk3Zg0KLS0tIC9kZXYvbnVsbA0KKysrIGIveGVuL2NvbW1vbi96c3RkL3pzdGRf
aW50ZXJuYWwuaA0KQEAgLTAsMCArMSwyNzMgQEANCisvKioNCisgKiBDb3B5cmlnaHQgKGMpIDIw
MTYtcHJlc2VudCwgWWFubiBDb2xsZXQsIEZhY2Vib29rLCBJbmMuDQorICogQWxsIHJpZ2h0cyBy
ZXNlcnZlZC4NCisgKg0KKyAqIFRoaXMgc291cmNlIGNvZGUgaXMgbGljZW5zZWQgdW5kZXIgdGhl
IEJTRC1zdHlsZSBsaWNlbnNlIGZvdW5kIGluIHRoZQ0KKyAqIExJQ0VOU0UgZmlsZSBpbiB0aGUg
cm9vdCBkaXJlY3Rvcnkgb2YgaHR0cHM6Ly9naXRodWIuY29tL2ZhY2Vib29rL3pzdGQuDQorICog
QW4gYWRkaXRpb25hbCBncmFudCBvZiBwYXRlbnQgcmlnaHRzIGNhbiBiZSBmb3VuZCBpbiB0aGUg
UEFURU5UUyBmaWxlIGluIHRoZQ0KKyAqIHNhbWUgZGlyZWN0b3J5Lg0KKyAqDQorICogVGhpcyBw
cm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vciBt
b2RpZnkgaXQgdW5kZXINCisgKiB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBM
aWNlbnNlIHZlcnNpb24gMiBhcyBwdWJsaXNoZWQgYnkgdGhlDQorICogRnJlZSBTb2Z0d2FyZSBG
b3VuZGF0aW9uLiBUaGlzIHByb2dyYW0gaXMgZHVhbC1saWNlbnNlZDsgeW91IG1heSBzZWxlY3QN
CisgKiBlaXRoZXIgdmVyc2lvbiAyIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSAo
IkdQTCIpIG9yIEJTRCBsaWNlbnNlDQorICogKCJCU0QiKS4NCisgKi8NCisNCisjaWZuZGVmIFpT
VERfQ0NPTU1PTl9IX01PRFVMRQ0KKyNkZWZpbmUgWlNURF9DQ09NTU9OX0hfTU9EVUxFDQorDQor
LyotKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Kg0KKyogIENvbXBpbGVyIHNwZWNpZmljcw0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVmaW5lIEZPUkNFX0lOTElORSBzdGF0
aWMgX19hbHdheXNfaW5saW5lDQorI2RlZmluZSBGT1JDRV9OT0lOTElORSBzdGF0aWMgbm9pbmxp
bmUNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgRGVw
ZW5kZW5jaWVzDQorKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKyNp
bmNsdWRlICJlcnJvcl9wcml2YXRlLmgiDQorI2luY2x1ZGUgIm1lbS5oIg0KKyNpbmNsdWRlIDxs
aW51eC9jb21waWxlci5oPg0KKyNpbmNsdWRlIDxsaW51eC9rZXJuZWwuaD4NCisjaW5jbHVkZSA8
bGludXgveHhoYXNoLmg+DQorI2luY2x1ZGUgPGxpbnV4L3pzdGQuaD4NCisNCisvKi0qKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgc2hhcmVkIG1hY3Jvcw0KKyoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisjZGVmaW5lIE1JTihhLCBiKSAo
KGEpIDwgKGIpID8gKGEpIDogKGIpKQ0KKyNkZWZpbmUgTUFYKGEsIGIpICgoYSkgPiAoYikgPyAo
YSkgOiAoYikpDQorI2RlZmluZSBDSEVDS19GKGYpICAgICAgICAgICAgICAgICAgICAgICBcDQor
CXsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisJCXNpemVfdCBjb25zdCBlcnJj
b2QgPSBmOyBcDQorCQlpZiAoRVJSX2lzRXJyb3IoZXJyY29kKSkgXA0KKwkJCXJldHVybiBlcnJj
b2Q7ICAgXA0KKwl9IC8qIGNoZWNrIGFuZCBGb3J3YXJkIGVycm9yIGNvZGUgKi8NCisjZGVmaW5l
IENIRUNLX0UoZiwgZSkgICAgICAgICAgICAgICAgICAgIFwNCisJeyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgXA0KKwkJc2l6ZV90IGNvbnN0IGVycmNvZCA9IGY7IFwNCisJCWlmIChF
UlJfaXNFcnJvcihlcnJjb2QpKSBcDQorCQkJcmV0dXJuIEVSUk9SKGUpOyBcDQorCX0gLyogY2hl
Y2sgYW5kIHNlbmQgRXJyb3IgY29kZSAqLw0KKyNkZWZpbmUgWlNURF9TVEFUSUNfQVNTRVJUKGMp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcDQorCXsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KKwkJZW51bSB7IFpTVERf
c3RhdGljX2Fzc2VydCA9IDEgLyAoaW50KSghIShjKSkgfTsgXA0KKwl9DQorDQorLyotKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyogIENvbW1vbiBjb25zdGFudHMNCisq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorI2RlZmluZSBaU1REX09Q
VF9OVU0gKDEgPDwgMTIpDQorI2RlZmluZSBaU1REX0RJQ1RfTUFHSUMgMHhFQzMwQTQzNyAvKiB2
MC43KyAqLw0KKw0KKyNkZWZpbmUgWlNURF9SRVBfTlVNIDMJCSAgICAgIC8qIG51bWJlciBvZiBy
ZXBjb2RlcyAqLw0KKyNkZWZpbmUgWlNURF9SRVBfQ0hFQ0sgKFpTVERfUkVQX05VTSkgLyogbnVt
YmVyIG9mIHJlcGNvZGVzIHRvIGNoZWNrIGJ5IHRoZSBvcHRpbWFsIHBhcnNlciAqLw0KKyNkZWZp
bmUgWlNURF9SRVBfTU9WRSAoWlNURF9SRVBfTlVNIC0gMSkNCisjZGVmaW5lIFpTVERfUkVQX01P
VkVfT1BUIChaU1REX1JFUF9OVU0pDQorc3RhdGljIGNvbnN0IFUzMiByZXBTdGFydFZhbHVlW1pT
VERfUkVQX05VTV0gPSB7MSwgNCwgOH07DQorDQorI2RlZmluZSBLQiAqKDEgPDwgMTApDQorI2Rl
ZmluZSBNQiAqKDEgPDwgMjApDQorI2RlZmluZSBHQiAqKDFVIDw8IDMwKQ0KKw0KKyNkZWZpbmUg
QklUNyAxMjgNCisjZGVmaW5lIEJJVDYgNjQNCisjZGVmaW5lIEJJVDUgMzINCisjZGVmaW5lIEJJ
VDQgMTYNCisjZGVmaW5lIEJJVDEgMg0KKyNkZWZpbmUgQklUMCAxDQorDQorI2RlZmluZSBaU1RE
X1dJTkRPV0xPR19BQlNPTFVURU1JTiAxMA0KK3N0YXRpYyBjb25zdCBzaXplX3QgWlNURF9mY3Nf
ZmllbGRTaXplWzRdID0gezAsIDIsIDQsIDh9Ow0KK3N0YXRpYyBjb25zdCBzaXplX3QgWlNURF9k
aWRfZmllbGRTaXplWzRdID0gezAsIDEsIDIsIDR9Ow0KKw0KKyNkZWZpbmUgWlNURF9CTE9DS0hF
QURFUlNJWkUgMyAvKiBDIHN0YW5kYXJkIGRvZXNuJ3QgYWxsb3cgYHN0YXRpYyBjb25zdGAgdmFy
aWFibGUgdG8gYmUgaW5pdCB1c2luZyBhbm90aGVyIGBzdGF0aWMgY29uc3RgIHZhcmlhYmxlICov
DQorc3RhdGljIGNvbnN0IHNpemVfdCBaU1REX2Jsb2NrSGVhZGVyU2l6ZSA9IFpTVERfQkxPQ0tI
RUFERVJTSVpFOw0KK3R5cGVkZWYgZW51bSB7IGJ0X3JhdywgYnRfcmxlLCBidF9jb21wcmVzc2Vk
LCBidF9yZXNlcnZlZCB9IGJsb2NrVHlwZV9lOw0KKw0KKyNkZWZpbmUgTUlOX1NFUVVFTkNFU19T
SVpFIDEJCQkJCQkJCQkgIC8qIG5iU2VxPT0wICovDQorI2RlZmluZSBNSU5fQ0JMT0NLX1NJWkUg
KDEgLypsaXRDU2l6ZSovICsgMSAvKiBSTEUgb3IgUkFXICovICsgTUlOX1NFUVVFTkNFU19TSVpF
IC8qIG5iU2VxPT0wICovKSAvKiBmb3IgYSBub24tbnVsbCBibG9jayAqLw0KKw0KKyNkZWZpbmUg
SHVmTG9nIDEyDQordHlwZWRlZiBlbnVtIHsgc2V0X2Jhc2ljLCBzZXRfcmxlLCBzZXRfY29tcHJl
c3NlZCwgc2V0X3JlcGVhdCB9IHN5bWJvbEVuY29kaW5nVHlwZV9lOw0KKw0KKyNkZWZpbmUgTE9O
R05CU0VRIDB4N0YwMA0KKw0KKyNkZWZpbmUgTUlOTUFUQ0ggMw0KKyNkZWZpbmUgRVFVQUxfUkVB
RDMyIDQNCisNCisjZGVmaW5lIExpdGJpdHMgOA0KKyNkZWZpbmUgTWF4TGl0ICgoMSA8PCBMaXRi
aXRzKSAtIDEpDQorI2RlZmluZSBNYXhNTCA1Mg0KKyNkZWZpbmUgTWF4TEwgMzUNCisjZGVmaW5l
IE1heE9mZiAyOA0KKyNkZWZpbmUgTWF4U2VxIE1BWChNYXhMTCwgTWF4TUwpIC8qIEFzc3VtcHRp
b24gOiBNYXhPZmYgPCBNYXhMTCxNYXhNTCAqLw0KKyNkZWZpbmUgTUxGU0VMb2cgOQ0KKyNkZWZp
bmUgTExGU0VMb2cgOQ0KKyNkZWZpbmUgT2ZmRlNFTG9nIDgNCisNCitzdGF0aWMgY29uc3QgVTMy
IExMX2JpdHNbTWF4TEwgKyAxXSA9IHswLCAwLCAwLCAwLCAwLCAwLCAwLCAwLCAwLCAwLCAwLCAw
LCAwLCAwLCAwLCAwLCAxLCAxLCAxLCAxLCAyLCAyLCAzLCAzLCA0LCA2LCA3LCA4LCA5LCAxMCwg
MTEsIDEyLCAxMywgMTQsIDE1LCAxNn07DQorc3RhdGljIGNvbnN0IFMxNiBMTF9kZWZhdWx0Tm9y
bVtNYXhMTCArIDFdID0gezQsIDMsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDEs
IDEsIDEsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDIsIDMsIDIsIDEsIDEsIDEsIDEsIDEsIC0x
LCAtMSwgLTEsIC0xfTsNCisjZGVmaW5lIExMX0RFRkFVTFROT1JNTE9HIDYgLyogZm9yIHN0YXRp
YyBhbGxvY2F0aW9uICovDQorc3RhdGljIGNvbnN0IFUzMiBMTF9kZWZhdWx0Tm9ybUxvZyA9IExM
X0RFRkFVTFROT1JNTE9HOw0KKw0KK3N0YXRpYyBjb25zdCBVMzIgTUxfYml0c1tNYXhNTCArIDFd
ID0gezAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAsIDAs
IDAsIDAsICAwLCAgMCwgIDAsICAwLCAgMCwgIDAsIDAsDQorCQkJCSAgICAgICAwLCAwLCAwLCAw
LCAwLCAxLCAxLCAxLCAxLCAyLCAyLCAzLCAzLCA0LCA0LCA1LCA3LCA4LCA5LCAxMCwgMTEsIDEy
LCAxMywgMTQsIDE1LCAxNn07DQorc3RhdGljIGNvbnN0IFMxNiBNTF9kZWZhdWx0Tm9ybVtNYXhN
TCArIDFdID0gezEsIDQsIDMsIDIsIDIsIDIsIDIsIDIsIDIsIDEsIDEsIDEsIDEsIDEsIDEsIDEs
IDEsIDEsIDEsIDEsICAxLCAgMSwgIDEsICAxLCAgMSwgIDEsIDEsDQorCQkJCQkgICAgICAxLCAx
LCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAxLCAtMSwg
LTEsIC0xLCAtMSwgLTEsIC0xLCAtMX07DQorI2RlZmluZSBNTF9ERUZBVUxUTk9STUxPRyA2IC8q
IGZvciBzdGF0aWMgYWxsb2NhdGlvbiAqLw0KK3N0YXRpYyBjb25zdCBVMzIgTUxfZGVmYXVsdE5v
cm1Mb2cgPSBNTF9ERUZBVUxUTk9STUxPRzsNCisNCitzdGF0aWMgY29uc3QgUzE2IE9GX2RlZmF1
bHROb3JtW01heE9mZiArIDFdID0gezEsIDEsIDEsIDEsIDEsIDEsIDIsIDIsIDIsIDEsIDEsIDEs
IDEsIDEsIDEsIDEsIDEsIDEsIDEsIDEsIDEsIDEsIDEsIDEsIC0xLCAtMSwgLTEsIC0xLCAtMX07
DQorI2RlZmluZSBPRl9ERUZBVUxUTk9STUxPRyA1IC8qIGZvciBzdGF0aWMgYWxsb2NhdGlvbiAq
Lw0KK3N0YXRpYyBjb25zdCBVMzIgT0ZfZGVmYXVsdE5vcm1Mb2cgPSBPRl9ERUZBVUxUTk9STUxP
RzsNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQor
KiAgU2hhcmVkIGZ1bmN0aW9ucyB0byBpbmNsdWRlIGZvciBpbmxpbmluZw0KKyoqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCitaU1REX1NUQVRJQyB2b2lkIFpT
VERfY29weTgodm9pZCAqZHN0LCBjb25zdCB2b2lkICpzcmMpIHsNCisJLyoNCisJICogenN0ZCBy
ZWxpZXMgaGVhdmlseSBvbiBnY2MgYmVpbmcgYWJsZSB0byBhbmFseXplIGFuZCBpbmxpbmUgdGhp
cw0KKwkgKiBtZW1jcHkoKSBjYWxsLCBzaW5jZSBpdCBpcyBjYWxsZWQgaW4gYSB0aWdodCBsb29w
LiBQcmVib290IG1vZGUNCisJICogaXMgY29tcGlsZWQgaW4gZnJlZXN0YW5kaW5nIG1vZGUsIHdo
aWNoIHN0b3BzIGdjYyBmcm9tIGFuYWx5emluZw0KKwkgKiBtZW1jcHkoKS4gVXNlIF9fYnVpbHRp
bl9tZW1jcHkoKSB0byB0ZWxsIGdjYyB0byBhbmFseXplIHRoaXMgYXMgYQ0KKwkgKiByZWd1bGFy
IG1lbWNweSgpLg0KKwkgKi8NCisJX19idWlsdGluX21lbWNweShkc3QsIHNyYywgOCk7DQorfQ0K
Ky8qISBaU1REX3dpbGRjb3B5KCkgOg0KKyogICBjdXN0b20gdmVyc2lvbiBvZiBtZW1jcHkoKSwg
Y2FuIGNvcHkgdXAgdG8gNyBieXRlcyB0b28gbWFueSAoOCBieXRlcyBpZiBsZW5ndGg9PTApICov
DQorI2RlZmluZSBXSUxEQ09QWV9PVkVSTEVOR1RIIDgNCitaU1REX1NUQVRJQyB2b2lkIFpTVERf
d2lsZGNvcHkodm9pZCAqZHN0LCBjb25zdCB2b2lkICpzcmMsIHB0cmRpZmZfdCBsZW5ndGgpDQor
ew0KKwljb25zdCBCWVRFKiBpcCA9IChjb25zdCBCWVRFKilzcmM7DQorCUJZVEUqIG9wID0gKEJZ
VEUqKWRzdDsNCisJQllURSogY29uc3Qgb2VuZCA9IG9wICsgbGVuZ3RoOw0KKyNpZiBkZWZpbmVk
KEdDQ19WRVJTSU9OKSAmJiBHQ0NfVkVSU0lPTiA+PSA3MDAwMCAmJiBHQ0NfVkVSU0lPTiA8IDcw
MjAwDQorCS8qDQorCSAqIFdvcmsgYXJvdW5kIGh0dHBzOi8vZ2NjLmdudS5vcmcvYnVnemlsbGEv
c2hvd19idWcuY2dpP2lkPTgxMzg4Lg0KKwkgKiBBdm9pZCB0aGUgYmFkIGNhc2Ugd2hlcmUgdGhl
IGxvb3Agb25seSBydW5zIG9uY2UgYnkgaGFuZGxpbmcgdGhlDQorCSAqIHNwZWNpYWwgY2FzZSBz
ZXBhcmF0ZWx5LiBUaGlzIGRvZXNuJ3QgdHJpZ2dlciB0aGUgYnVnIGJlY2F1c2UgaXQNCisJICog
ZG9lc24ndCBpbnZvbHZlIHBvaW50ZXIvaW50ZWdlciBvdmVyZmxvdy4NCisJICovDQorCWlmIChs
ZW5ndGggPD0gOCkNCisJCXJldHVybiBaU1REX2NvcHk4KGRzdCwgc3JjKTsNCisjZW5kaWYNCisJ
ZG8gew0KKwkJWlNURF9jb3B5OChvcCwgaXApOw0KKwkJb3AgKz0gODsNCisJCWlwICs9IDg7DQor
CX0gd2hpbGUgKG9wIDwgb2VuZCk7DQorfQ0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioNCisqICBQcml2YXRlIGludGVyZmFjZXMNCisqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQordHlwZWRlZiBzdHJ1Y3QgWlNU
RF9zdGF0c19zIFpTVERfc3RhdHNfdDsNCisNCit0eXBlZGVmIHN0cnVjdCB7DQorCVUzMiBvZmY7
DQorCVUzMiBsZW47DQorfSBaU1REX21hdGNoX3Q7DQorDQordHlwZWRlZiBzdHJ1Y3Qgew0KKwlV
MzIgcHJpY2U7DQorCVUzMiBvZmY7DQorCVUzMiBtbGVuOw0KKwlVMzIgbGl0bGVuOw0KKwlVMzIg
cmVwW1pTVERfUkVQX05VTV07DQorfSBaU1REX29wdGltYWxfdDsNCisNCit0eXBlZGVmIHN0cnVj
dCBzZXFEZWZfcyB7DQorCVUzMiBvZmZzZXQ7DQorCVUxNiBsaXRMZW5ndGg7DQorCVUxNiBtYXRj
aExlbmd0aDsNCit9IHNlcURlZjsNCisNCit0eXBlZGVmIHN0cnVjdCB7DQorCXNlcURlZiAqc2Vx
dWVuY2VzU3RhcnQ7DQorCXNlcURlZiAqc2VxdWVuY2VzOw0KKwlCWVRFICpsaXRTdGFydDsNCisJ
QllURSAqbGl0Ow0KKwlCWVRFICpsbENvZGU7DQorCUJZVEUgKm1sQ29kZTsNCisJQllURSAqb2ZD
b2RlOw0KKwlVMzIgbG9uZ0xlbmd0aElEOyAvKiAwID09IG5vIGxvbmdMZW5ndGg7IDEgPT0gTGl0
LmxvbmdMZW5ndGg7IDIgPT0gTWF0Y2gubG9uZ0xlbmd0aDsgKi8NCisJVTMyIGxvbmdMZW5ndGhQ
b3M7DQorCS8qIG9wdCAqLw0KKwlaU1REX29wdGltYWxfdCAqcHJpY2VUYWJsZTsNCisJWlNURF9t
YXRjaF90ICptYXRjaFRhYmxlOw0KKwlVMzIgKm1hdGNoTGVuZ3RoRnJlcTsNCisJVTMyICpsaXRM
ZW5ndGhGcmVxOw0KKwlVMzIgKmxpdEZyZXE7DQorCVUzMiAqb2ZmQ29kZUZyZXE7DQorCVUzMiBt
YXRjaExlbmd0aFN1bTsNCisJVTMyIG1hdGNoU3VtOw0KKwlVMzIgbGl0TGVuZ3RoU3VtOw0KKwlV
MzIgbGl0U3VtOw0KKwlVMzIgb2ZmQ29kZVN1bTsNCisJVTMyIGxvZzJtYXRjaExlbmd0aFN1bTsN
CisJVTMyIGxvZzJtYXRjaFN1bTsNCisJVTMyIGxvZzJsaXRMZW5ndGhTdW07DQorCVUzMiBsb2cy
bGl0U3VtOw0KKwlVMzIgbG9nMm9mZkNvZGVTdW07DQorCVUzMiBmYWN0b3I7DQorCVUzMiBzdGF0
aWNQcmljZXM7DQorCVUzMiBjYWNoZWRQcmljZTsNCisJVTMyIGNhY2hlZExpdExlbmd0aDsNCisJ
Y29uc3QgQllURSAqY2FjaGVkTGl0ZXJhbHM7DQorfSBzZXFTdG9yZV90Ow0KKw0KK2NvbnN0IHNl
cVN0b3JlX3QgKlpTVERfZ2V0U2VxU3RvcmUoY29uc3QgWlNURF9DQ3R4ICpjdHgpOw0KK3ZvaWQg
WlNURF9zZXFUb0NvZGVzKGNvbnN0IHNlcVN0b3JlX3QgKnNlcVN0b3JlUHRyKTsNCitpbnQgWlNU
RF9pc1NraXBGcmFtZShaU1REX0RDdHggKmRjdHgpOw0KKw0KKy8qPSBDdXN0b20gbWVtb3J5IGFs
bG9jYXRpb24gZnVuY3Rpb25zICovDQordHlwZWRlZiB2b2lkICooKlpTVERfYWxsb2NGdW5jdGlv
bikodm9pZCAqb3BhcXVlLCBzaXplX3Qgc2l6ZSk7DQordHlwZWRlZiB2b2lkICgqWlNURF9mcmVl
RnVuY3Rpb24pKHZvaWQgKm9wYXF1ZSwgdm9pZCAqYWRkcmVzcyk7DQordHlwZWRlZiBzdHJ1Y3Qg
ew0KKwlaU1REX2FsbG9jRnVuY3Rpb24gY3VzdG9tQWxsb2M7DQorCVpTVERfZnJlZUZ1bmN0aW9u
IGN1c3RvbUZyZWU7DQorCXZvaWQgKm9wYXF1ZTsNCit9IFpTVERfY3VzdG9tTWVtOw0KKw0KK3Zv
aWQgKlpTVERfbWFsbG9jKHNpemVfdCBzaXplLCBaU1REX2N1c3RvbU1lbSBjdXN0b21NZW0pOw0K
K3ZvaWQgWlNURF9mcmVlKHZvaWQgKnB0ciwgWlNURF9jdXN0b21NZW0gY3VzdG9tTWVtKTsNCisN
CisvKj09PT09PSBzdGFjayBhbGxvY2F0aW9uICA9PT09PT0qLw0KKw0KK3R5cGVkZWYgc3RydWN0
IHsNCisJdm9pZCAqcHRyOw0KKwljb25zdCB2b2lkICplbmQ7DQorfSBaU1REX3N0YWNrOw0KKw0K
KyNkZWZpbmUgWlNURF9BTElHTih4KSBBTElHTih4LCBzaXplb2Yoc2l6ZV90KSkNCisjZGVmaW5l
IFpTVERfUFRSX0FMSUdOKHApIFBUUl9BTElHTihwLCBzaXplb2Yoc2l6ZV90KSkNCisNCitaU1RE
X2N1c3RvbU1lbSBaU1REX2luaXRTdGFjayh2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3Bh
Y2VTaXplKTsNCisNCit2b2lkICpaU1REX3N0YWNrQWxsb2NBbGwodm9pZCAqb3BhcXVlLCBzaXpl
X3QgKnNpemUpOw0KK3ZvaWQgKlpTVERfc3RhY2tBbGxvYyh2b2lkICpvcGFxdWUsIHNpemVfdCBz
aXplKTsNCit2b2lkIFpTVERfc3RhY2tGcmVlKHZvaWQgKm9wYXF1ZSwgdm9pZCAqYWRkcmVzcyk7
DQorDQorLyo9PT09PT0gIGNvbW1vbiBmdW5jdGlvbiAgPT09PT09Ki8NCisNCitaU1REX1NUQVRJ
QyBVMzIgWlNURF9oaWdoYml0MzIoVTMyIHZhbCkgeyByZXR1cm4gMzEgLSBfX2J1aWx0aW5fY2x6
KHZhbCk7IH0NCisNCisvKiBoaWRkZW4gZnVuY3Rpb25zICovDQorDQorLyogWlNURF9pbnZhbGlk
YXRlUmVwQ29kZXMoKSA6DQorICogZW5zdXJlcyBuZXh0IGNvbXByZXNzaW9uIHdpbGwgbm90IHVz
ZSByZXBjb2RlcyBmcm9tIHByZXZpb3VzIGJsb2NrLg0KKyAqIE5vdGUgOiBvbmx5IHdvcmtzIHdp
dGggcmVndWxhciB2YXJpYW50Ow0KKyAqICAgICAgICBkbyBub3QgdXNlIHdpdGggZXh0RGljdCB2
YXJpYW50ICEgKi8NCit2b2lkIFpTVERfaW52YWxpZGF0ZVJlcENvZGVzKFpTVERfQ0N0eCAqY2N0
eCk7DQorDQorc2l6ZV90IFpTVERfZnJlZUNDdHgoWlNURF9DQ3R4ICpjY3R4KTsNCitzaXplX3Qg
WlNURF9mcmVlREN0eChaU1REX0RDdHggKmRjdHgpOw0KK3NpemVfdCBaU1REX2ZyZWVDRGljdCha
U1REX0NEaWN0ICpjZGljdCk7DQorc2l6ZV90IFpTVERfZnJlZUREaWN0KFpTVERfRERpY3QgKmNk
aWN0KTsNCitzaXplX3QgWlNURF9mcmVlQ1N0cmVhbShaU1REX0NTdHJlYW0gKnpjcyk7DQorc2l6
ZV90IFpTVERfZnJlZURTdHJlYW0oWlNURF9EU3RyZWFtICp6ZHMpOw0KKw0KKyNlbmRpZiAvKiBa
U1REX0NDT01NT05fSF9NT0RVTEUgKi8NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvenN0
ZF9vcHQuaCBiL3hlbi9jb21tb24venN0ZC96c3RkX29wdC5oDQpuZXcgZmlsZSBtb2RlIDEwMDY0
NA0KaW5kZXggMDAwMDAwMDAwMC4uNTVlMWI0Y2JhOA0KLS0tIC9kZXYvbnVsbA0KKysrIGIveGVu
L2NvbW1vbi96c3RkL3pzdGRfb3B0LmgNCkBAIC0wLDAgKzEsMTAxNCBAQA0KKy8qKg0KKyAqIENv
cHlyaWdodCAoYykgMjAxNi1wcmVzZW50LCBQcnplbXlzbGF3IFNraWJpbnNraSwgWWFubiBDb2xs
ZXQsIEZhY2Vib29rLCBJbmMuDQorICogQWxsIHJpZ2h0cyByZXNlcnZlZC4NCisgKg0KKyAqIFRo
aXMgc291cmNlIGNvZGUgaXMgbGljZW5zZWQgdW5kZXIgdGhlIEJTRC1zdHlsZSBsaWNlbnNlIGZv
dW5kIGluIHRoZQ0KKyAqIExJQ0VOU0UgZmlsZSBpbiB0aGUgcm9vdCBkaXJlY3Rvcnkgb2YgaHR0
cHM6Ly9naXRodWIuY29tL2ZhY2Vib29rL3pzdGQuDQorICogQW4gYWRkaXRpb25hbCBncmFudCBv
ZiBwYXRlbnQgcmlnaHRzIGNhbiBiZSBmb3VuZCBpbiB0aGUgUEFURU5UUyBmaWxlIGluIHRoZQ0K
KyAqIHNhbWUgZGlyZWN0b3J5Lg0KKyAqDQorICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdh
cmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkgaXQgdW5kZXINCisgKiB0
aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIHZlcnNpb24gMiBhcyBw
dWJsaXNoZWQgYnkgdGhlDQorICogRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLiBUaGlzIHByb2dy
YW0gaXMgZHVhbC1saWNlbnNlZDsgeW91IG1heSBzZWxlY3QNCisgKiBlaXRoZXIgdmVyc2lvbiAy
IG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSAoIkdQTCIpIG9yIEJTRCBsaWNlbnNl
DQorICogKCJCU0QiKS4NCisgKi8NCisNCisvKiBOb3RlIDogdGhpcyBmaWxlIGlzIGludGVuZGVk
IHRvIGJlIGluY2x1ZGVkIHdpdGhpbiB6c3RkX2NvbXByZXNzLmMgKi8NCisNCisjaWZuZGVmIFpT
VERfT1BUX0hfOTE4NDIzOTg3NDMNCisjZGVmaW5lIFpTVERfT1BUX0hfOTE4NDIzOTg3NDMNCisN
CisjZGVmaW5lIFpTVERfTElURlJFUV9BREQgMg0KKyNkZWZpbmUgWlNURF9GUkVRX0RJViA0DQor
I2RlZmluZSBaU1REX01BWF9QUklDRSAoMSA8PCAzMCkNCisNCisvKi0qKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqDQorKiAgUHJpY2UgZnVuY3Rpb25zIGZvciBvcHRpbWFsIHBh
cnNlcg0KKyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCitGT1JDRV9J
TkxJTkUgdm9pZCBaU1REX3NldExvZzJQcmljZXMoc2VxU3RvcmVfdCAqc3NQdHIpDQorew0KKwlz
c1B0ci0+bG9nMm1hdGNoTGVuZ3RoU3VtID0gWlNURF9oaWdoYml0MzIoc3NQdHItPm1hdGNoTGVu
Z3RoU3VtICsgMSk7DQorCXNzUHRyLT5sb2cybGl0TGVuZ3RoU3VtID0gWlNURF9oaWdoYml0MzIo
c3NQdHItPmxpdExlbmd0aFN1bSArIDEpOw0KKwlzc1B0ci0+bG9nMmxpdFN1bSA9IFpTVERfaGln
aGJpdDMyKHNzUHRyLT5saXRTdW0gKyAxKTsNCisJc3NQdHItPmxvZzJvZmZDb2RlU3VtID0gWlNU
RF9oaWdoYml0MzIoc3NQdHItPm9mZkNvZGVTdW0gKyAxKTsNCisJc3NQdHItPmZhY3RvciA9IDEg
KyAoKHNzUHRyLT5saXRTdW0gPj4gNSkgLyBzc1B0ci0+bGl0TGVuZ3RoU3VtKSArICgoc3NQdHIt
PmxpdFN1bSA8PCAxKSAvIChzc1B0ci0+bGl0U3VtICsgc3NQdHItPm1hdGNoU3VtKSk7DQorfQ0K
Kw0KK1pTVERfU1RBVElDIHZvaWQgWlNURF9yZXNjYWxlRnJlcXMoc2VxU3RvcmVfdCAqc3NQdHIs
IGNvbnN0IEJZVEUgKnNyYywgc2l6ZV90IHNyY1NpemUpDQorew0KKwl1bnNpZ25lZCB1Ow0KKw0K
Kwlzc1B0ci0+Y2FjaGVkTGl0ZXJhbHMgPSBOVUxMOw0KKwlzc1B0ci0+Y2FjaGVkUHJpY2UgPSBz
c1B0ci0+Y2FjaGVkTGl0TGVuZ3RoID0gMDsNCisJc3NQdHItPnN0YXRpY1ByaWNlcyA9IDA7DQor
DQorCWlmIChzc1B0ci0+bGl0TGVuZ3RoU3VtID09IDApIHsNCisJCWlmIChzcmNTaXplIDw9IDEw
MjQpDQorCQkJc3NQdHItPnN0YXRpY1ByaWNlcyA9IDE7DQorDQorCQlmb3IgKHUgPSAwOyB1IDw9
IE1heExpdDsgdSsrKQ0KKwkJCXNzUHRyLT5saXRGcmVxW3VdID0gMDsNCisJCWZvciAodSA9IDA7
IHUgPCBzcmNTaXplOyB1KyspDQorCQkJc3NQdHItPmxpdEZyZXFbc3JjW3VdXSsrOw0KKw0KKwkJ
c3NQdHItPmxpdFN1bSA9IDA7DQorCQlzc1B0ci0+bGl0TGVuZ3RoU3VtID0gTWF4TEwgKyAxOw0K
KwkJc3NQdHItPm1hdGNoTGVuZ3RoU3VtID0gTWF4TUwgKyAxOw0KKwkJc3NQdHItPm9mZkNvZGVT
dW0gPSAoTWF4T2ZmICsgMSk7DQorCQlzc1B0ci0+bWF0Y2hTdW0gPSAoWlNURF9MSVRGUkVRX0FE
RCA8PCBMaXRiaXRzKTsNCisNCisJCWZvciAodSA9IDA7IHUgPD0gTWF4TGl0OyB1KyspIHsNCisJ
CQlzc1B0ci0+bGl0RnJlcVt1XSA9IDEgKyAoc3NQdHItPmxpdEZyZXFbdV0gPj4gWlNURF9GUkVR
X0RJVik7DQorCQkJc3NQdHItPmxpdFN1bSArPSBzc1B0ci0+bGl0RnJlcVt1XTsNCisJCX0NCisJ
CWZvciAodSA9IDA7IHUgPD0gTWF4TEw7IHUrKykNCisJCQlzc1B0ci0+bGl0TGVuZ3RoRnJlcVt1
XSA9IDE7DQorCQlmb3IgKHUgPSAwOyB1IDw9IE1heE1MOyB1KyspDQorCQkJc3NQdHItPm1hdGNo
TGVuZ3RoRnJlcVt1XSA9IDE7DQorCQlmb3IgKHUgPSAwOyB1IDw9IE1heE9mZjsgdSsrKQ0KKwkJ
CXNzUHRyLT5vZmZDb2RlRnJlcVt1XSA9IDE7DQorCX0gZWxzZSB7DQorCQlzc1B0ci0+bWF0Y2hM
ZW5ndGhTdW0gPSAwOw0KKwkJc3NQdHItPmxpdExlbmd0aFN1bSA9IDA7DQorCQlzc1B0ci0+b2Zm
Q29kZVN1bSA9IDA7DQorCQlzc1B0ci0+bWF0Y2hTdW0gPSAwOw0KKwkJc3NQdHItPmxpdFN1bSA9
IDA7DQorDQorCQlmb3IgKHUgPSAwOyB1IDw9IE1heExpdDsgdSsrKSB7DQorCQkJc3NQdHItPmxp
dEZyZXFbdV0gPSAxICsgKHNzUHRyLT5saXRGcmVxW3VdID4+IChaU1REX0ZSRVFfRElWICsgMSkp
Ow0KKwkJCXNzUHRyLT5saXRTdW0gKz0gc3NQdHItPmxpdEZyZXFbdV07DQorCQl9DQorCQlmb3Ig
KHUgPSAwOyB1IDw9IE1heExMOyB1KyspIHsNCisJCQlzc1B0ci0+bGl0TGVuZ3RoRnJlcVt1XSA9
IDEgKyAoc3NQdHItPmxpdExlbmd0aEZyZXFbdV0gPj4gKFpTVERfRlJFUV9ESVYgKyAxKSk7DQor
CQkJc3NQdHItPmxpdExlbmd0aFN1bSArPSBzc1B0ci0+bGl0TGVuZ3RoRnJlcVt1XTsNCisJCX0N
CisJCWZvciAodSA9IDA7IHUgPD0gTWF4TUw7IHUrKykgew0KKwkJCXNzUHRyLT5tYXRjaExlbmd0
aEZyZXFbdV0gPSAxICsgKHNzUHRyLT5tYXRjaExlbmd0aEZyZXFbdV0gPj4gWlNURF9GUkVRX0RJ
Vik7DQorCQkJc3NQdHItPm1hdGNoTGVuZ3RoU3VtICs9IHNzUHRyLT5tYXRjaExlbmd0aEZyZXFb
dV07DQorCQkJc3NQdHItPm1hdGNoU3VtICs9IHNzUHRyLT5tYXRjaExlbmd0aEZyZXFbdV0gKiAo
dSArIDMpOw0KKwkJfQ0KKwkJc3NQdHItPm1hdGNoU3VtICo9IFpTVERfTElURlJFUV9BREQ7DQor
CQlmb3IgKHUgPSAwOyB1IDw9IE1heE9mZjsgdSsrKSB7DQorCQkJc3NQdHItPm9mZkNvZGVGcmVx
W3VdID0gMSArIChzc1B0ci0+b2ZmQ29kZUZyZXFbdV0gPj4gWlNURF9GUkVRX0RJVik7DQorCQkJ
c3NQdHItPm9mZkNvZGVTdW0gKz0gc3NQdHItPm9mZkNvZGVGcmVxW3VdOw0KKwkJfQ0KKwl9DQor
DQorCVpTVERfc2V0TG9nMlByaWNlcyhzc1B0cik7DQorfQ0KKw0KK0ZPUkNFX0lOTElORSBVMzIg
WlNURF9nZXRMaXRlcmFsUHJpY2Uoc2VxU3RvcmVfdCAqc3NQdHIsIFUzMiBsaXRMZW5ndGgsIGNv
bnN0IEJZVEUgKmxpdGVyYWxzKQ0KK3sNCisJVTMyIHByaWNlLCB1Ow0KKw0KKwlpZiAoc3NQdHIt
PnN0YXRpY1ByaWNlcykNCisJCXJldHVybiBaU1REX2hpZ2hiaXQzMigoVTMyKWxpdExlbmd0aCAr
IDEpICsgKGxpdExlbmd0aCAqIDYpOw0KKw0KKwlpZiAobGl0TGVuZ3RoID09IDApDQorCQlyZXR1
cm4gc3NQdHItPmxvZzJsaXRMZW5ndGhTdW0gLSBaU1REX2hpZ2hiaXQzMihzc1B0ci0+bGl0TGVu
Z3RoRnJlcVswXSArIDEpOw0KKw0KKwkvKiBsaXRlcmFscyAqLw0KKwlpZiAoc3NQdHItPmNhY2hl
ZExpdGVyYWxzID09IGxpdGVyYWxzKSB7DQorCQlVMzIgY29uc3QgYWRkaXRpb25hbCA9IGxpdExl
bmd0aCAtIHNzUHRyLT5jYWNoZWRMaXRMZW5ndGg7DQorCQljb25zdCBCWVRFICpsaXRlcmFsczIg
PSBzc1B0ci0+Y2FjaGVkTGl0ZXJhbHMgKyBzc1B0ci0+Y2FjaGVkTGl0TGVuZ3RoOw0KKwkJcHJp
Y2UgPSBzc1B0ci0+Y2FjaGVkUHJpY2UgKyBhZGRpdGlvbmFsICogc3NQdHItPmxvZzJsaXRTdW07
DQorCQlmb3IgKHUgPSAwOyB1IDwgYWRkaXRpb25hbDsgdSsrKQ0KKwkJCXByaWNlIC09IFpTVERf
aGlnaGJpdDMyKHNzUHRyLT5saXRGcmVxW2xpdGVyYWxzMlt1XV0gKyAxKTsNCisJCXNzUHRyLT5j
YWNoZWRQcmljZSA9IHByaWNlOw0KKwkJc3NQdHItPmNhY2hlZExpdExlbmd0aCA9IGxpdExlbmd0
aDsNCisJfSBlbHNlIHsNCisJCXByaWNlID0gbGl0TGVuZ3RoICogc3NQdHItPmxvZzJsaXRTdW07
DQorCQlmb3IgKHUgPSAwOyB1IDwgbGl0TGVuZ3RoOyB1KyspDQorCQkJcHJpY2UgLT0gWlNURF9o
aWdoYml0MzIoc3NQdHItPmxpdEZyZXFbbGl0ZXJhbHNbdV1dICsgMSk7DQorDQorCQlpZiAobGl0
TGVuZ3RoID49IDEyKSB7DQorCQkJc3NQdHItPmNhY2hlZExpdGVyYWxzID0gbGl0ZXJhbHM7DQor
CQkJc3NQdHItPmNhY2hlZFByaWNlID0gcHJpY2U7DQorCQkJc3NQdHItPmNhY2hlZExpdExlbmd0
aCA9IGxpdExlbmd0aDsNCisJCX0NCisJfQ0KKw0KKwkvKiBsaXRlcmFsIExlbmd0aCAqLw0KKwl7
DQorCQljb25zdCBCWVRFIExMX2RlbHRhQ29kZSA9IDE5Ow0KKwkJY29uc3QgQllURSBsbENvZGUg
PSAobGl0TGVuZ3RoID4gNjMpID8gKEJZVEUpWlNURF9oaWdoYml0MzIobGl0TGVuZ3RoKSArIExM
X2RlbHRhQ29kZSA6IExMX0NvZGVbbGl0TGVuZ3RoXTsNCisJCXByaWNlICs9IExMX2JpdHNbbGxD
b2RlXSArIHNzUHRyLT5sb2cybGl0TGVuZ3RoU3VtIC0gWlNURF9oaWdoYml0MzIoc3NQdHItPmxp
dExlbmd0aEZyZXFbbGxDb2RlXSArIDEpOw0KKwl9DQorDQorCXJldHVybiBwcmljZTsNCit9DQor
DQorRk9SQ0VfSU5MSU5FIFUzMiBaU1REX2dldFByaWNlKHNlcVN0b3JlX3QgKnNlcVN0b3JlUHRy
LCBVMzIgbGl0TGVuZ3RoLCBjb25zdCBCWVRFICpsaXRlcmFscywgVTMyIG9mZnNldCwgVTMyIG1h
dGNoTGVuZ3RoLCBjb25zdCBpbnQgdWx0cmEpDQorew0KKwkvKiBvZmZzZXQgKi8NCisJVTMyIHBy
aWNlOw0KKwlCWVRFIGNvbnN0IG9mZkNvZGUgPSAoQllURSlaU1REX2hpZ2hiaXQzMihvZmZzZXQg
KyAxKTsNCisNCisJaWYgKHNlcVN0b3JlUHRyLT5zdGF0aWNQcmljZXMpDQorCQlyZXR1cm4gWlNU
RF9nZXRMaXRlcmFsUHJpY2Uoc2VxU3RvcmVQdHIsIGxpdExlbmd0aCwgbGl0ZXJhbHMpICsgWlNU
RF9oaWdoYml0MzIoKFUzMiltYXRjaExlbmd0aCArIDEpICsgMTYgKyBvZmZDb2RlOw0KKw0KKwlw
cmljZSA9IG9mZkNvZGUgKyBzZXFTdG9yZVB0ci0+bG9nMm9mZkNvZGVTdW0gLSBaU1REX2hpZ2hi
aXQzMihzZXFTdG9yZVB0ci0+b2ZmQ29kZUZyZXFbb2ZmQ29kZV0gKyAxKTsNCisJaWYgKCF1bHRy
YSAmJiBvZmZDb2RlID49IDIwKQ0KKwkJcHJpY2UgKz0gKG9mZkNvZGUgLSAxOSkgKiAyOw0KKw0K
KwkvKiBtYXRjaCBMZW5ndGggKi8NCisJew0KKwkJY29uc3QgQllURSBNTF9kZWx0YUNvZGUgPSAz
NjsNCisJCWNvbnN0IEJZVEUgbWxDb2RlID0gKG1hdGNoTGVuZ3RoID4gMTI3KSA/IChCWVRFKVpT
VERfaGlnaGJpdDMyKG1hdGNoTGVuZ3RoKSArIE1MX2RlbHRhQ29kZSA6IE1MX0NvZGVbbWF0Y2hM
ZW5ndGhdOw0KKwkJcHJpY2UgKz0gTUxfYml0c1ttbENvZGVdICsgc2VxU3RvcmVQdHItPmxvZzJt
YXRjaExlbmd0aFN1bSAtIFpTVERfaGlnaGJpdDMyKHNlcVN0b3JlUHRyLT5tYXRjaExlbmd0aEZy
ZXFbbWxDb2RlXSArIDEpOw0KKwl9DQorDQorCXJldHVybiBwcmljZSArIFpTVERfZ2V0TGl0ZXJh
bFByaWNlKHNlcVN0b3JlUHRyLCBsaXRMZW5ndGgsIGxpdGVyYWxzKSArIHNlcVN0b3JlUHRyLT5m
YWN0b3I7DQorfQ0KKw0KK1pTVERfU1RBVElDIHZvaWQgWlNURF91cGRhdGVQcmljZShzZXFTdG9y
ZV90ICpzZXFTdG9yZVB0ciwgVTMyIGxpdExlbmd0aCwgY29uc3QgQllURSAqbGl0ZXJhbHMsIFUz
MiBvZmZzZXQsIFUzMiBtYXRjaExlbmd0aCkNCit7DQorCVUzMiB1Ow0KKw0KKwkvKiBsaXRlcmFs
cyAqLw0KKwlzZXFTdG9yZVB0ci0+bGl0U3VtICs9IGxpdExlbmd0aCAqIFpTVERfTElURlJFUV9B
REQ7DQorCWZvciAodSA9IDA7IHUgPCBsaXRMZW5ndGg7IHUrKykNCisJCXNlcVN0b3JlUHRyLT5s
aXRGcmVxW2xpdGVyYWxzW3VdXSArPSBaU1REX0xJVEZSRVFfQUREOw0KKw0KKwkvKiBsaXRlcmFs
IExlbmd0aCAqLw0KKwl7DQorCQljb25zdCBCWVRFIExMX2RlbHRhQ29kZSA9IDE5Ow0KKwkJY29u
c3QgQllURSBsbENvZGUgPSAobGl0TGVuZ3RoID4gNjMpID8gKEJZVEUpWlNURF9oaWdoYml0MzIo
bGl0TGVuZ3RoKSArIExMX2RlbHRhQ29kZSA6IExMX0NvZGVbbGl0TGVuZ3RoXTsNCisJCXNlcVN0
b3JlUHRyLT5saXRMZW5ndGhGcmVxW2xsQ29kZV0rKzsNCisJCXNlcVN0b3JlUHRyLT5saXRMZW5n
dGhTdW0rKzsNCisJfQ0KKw0KKwkvKiBtYXRjaCBvZmZzZXQgKi8NCisJew0KKwkJQllURSBjb25z
dCBvZmZDb2RlID0gKEJZVEUpWlNURF9oaWdoYml0MzIob2Zmc2V0ICsgMSk7DQorCQlzZXFTdG9y
ZVB0ci0+b2ZmQ29kZVN1bSsrOw0KKwkJc2VxU3RvcmVQdHItPm9mZkNvZGVGcmVxW29mZkNvZGVd
Kys7DQorCX0NCisNCisJLyogbWF0Y2ggTGVuZ3RoICovDQorCXsNCisJCWNvbnN0IEJZVEUgTUxf
ZGVsdGFDb2RlID0gMzY7DQorCQljb25zdCBCWVRFIG1sQ29kZSA9IChtYXRjaExlbmd0aCA+IDEy
NykgPyAoQllURSlaU1REX2hpZ2hiaXQzMihtYXRjaExlbmd0aCkgKyBNTF9kZWx0YUNvZGUgOiBN
TF9Db2RlW21hdGNoTGVuZ3RoXTsNCisJCXNlcVN0b3JlUHRyLT5tYXRjaExlbmd0aEZyZXFbbWxD
b2RlXSsrOw0KKwkJc2VxU3RvcmVQdHItPm1hdGNoTGVuZ3RoU3VtKys7DQorCX0NCisNCisJWlNU
RF9zZXRMb2cyUHJpY2VzKHNlcVN0b3JlUHRyKTsNCit9DQorDQorI2RlZmluZSBTRVRfUFJJQ0Uo
cG9zLCBtbGVuXywgb2Zmc2V0XywgbGl0bGVuXywgcHJpY2VfKSAgICAgICAgICAgXA0KKwl7ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0K
KwkJd2hpbGUgKGxhc3RfcG9zIDwgcG9zKSB7ICAgICAgICAgICAgICAgICAgICAgICAgICBcDQor
CQkJb3B0W2xhc3RfcG9zICsgMV0ucHJpY2UgPSBaU1REX01BWF9QUklDRTsgXA0KKwkJCWxhc3Rf
cG9zKys7ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisJCX0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KKwkJb3B0W3Bvc10ubWxlbiA9
IG1sZW5fOyAgICAgICAgICAgICAgICAgICAgICAgICAgICBcDQorCQlvcHRbcG9zXS5vZmYgPSBv
ZmZzZXRfOyAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisJCW9wdFtwb3NdLmxpdGxlbiA9
IGxpdGxlbl87ICAgICAgICAgICAgICAgICAgICAgICAgXA0KKwkJb3B0W3Bvc10ucHJpY2UgPSBw
cmljZV87ICAgICAgICAgICAgICAgICAgICAgICAgICBcDQorCX0NCisNCisvKiBVcGRhdGUgaGFz
aFRhYmxlMyB1cCB0byBpcCAoZXhjbHVkZWQpDQorICAgQXNzdW1wdGlvbiA6IGFsd2F5cyB3aXRo
aW4gcHJlZml4IChpLmUuIG5vdCB3aXRoaW4gZXh0RGljdCkgKi8NCitGT1JDRV9JTkxJTkUNCitV
MzIgWlNURF9pbnNlcnRBbmRGaW5kRmlyc3RJbmRleEhhc2gzKFpTVERfQ0N0eCAqemMsIGNvbnN0
IEJZVEUgKmlwKQ0KK3sNCisJVTMyICpjb25zdCBoYXNoVGFibGUzID0gemMtPmhhc2hUYWJsZTM7
DQorCVUzMiBjb25zdCBoYXNoTG9nMyA9IHpjLT5oYXNoTG9nMzsNCisJY29uc3QgQllURSAqY29u
c3QgYmFzZSA9IHpjLT5iYXNlOw0KKwlVMzIgaWR4ID0gemMtPm5leHRUb1VwZGF0ZTM7DQorCWNv
bnN0IFUzMiB0YXJnZXQgPSB6Yy0+bmV4dFRvVXBkYXRlMyA9IChVMzIpKGlwIC0gYmFzZSk7DQor
CWNvbnN0IHNpemVfdCBoYXNoMyA9IFpTVERfaGFzaDNQdHIoaXAsIGhhc2hMb2czKTsNCisNCisJ
d2hpbGUgKGlkeCA8IHRhcmdldCkgew0KKwkJaGFzaFRhYmxlM1taU1REX2hhc2gzUHRyKGJhc2Ug
KyBpZHgsIGhhc2hMb2czKV0gPSBpZHg7DQorCQlpZHgrKzsNCisJfQ0KKw0KKwlyZXR1cm4gaGFz
aFRhYmxlM1toYXNoM107DQorfQ0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioNCisqICBCaW5hcnkgVHJlZSBzZWFyY2gNCisqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKiovDQorc3RhdGljIFUzMiBaU1REX2luc2VydEJ0QW5kR2V0QWxsTWF0
Y2hlcyhaU1REX0NDdHggKnpjLCBjb25zdCBCWVRFICpjb25zdCBpcCwgY29uc3QgQllURSAqY29u
c3QgaUxpbWl0LCBVMzIgbmJDb21wYXJlcywgY29uc3QgVTMyIG1scywgVTMyIGV4dERpY3QsDQor
CQkJCQkgWlNURF9tYXRjaF90ICptYXRjaGVzLCBjb25zdCBVMzIgbWluTWF0Y2hMZW4pDQorew0K
Kwljb25zdCBCWVRFICpjb25zdCBiYXNlID0gemMtPmJhc2U7DQorCWNvbnN0IFUzMiBjdXJyID0g
KFUzMikoaXAgLSBiYXNlKTsNCisJY29uc3QgVTMyIGhhc2hMb2cgPSB6Yy0+cGFyYW1zLmNQYXJh
bXMuaGFzaExvZzsNCisJY29uc3Qgc2l6ZV90IGggPSBaU1REX2hhc2hQdHIoaXAsIGhhc2hMb2cs
IG1scyk7DQorCVUzMiAqY29uc3QgaGFzaFRhYmxlID0gemMtPmhhc2hUYWJsZTsNCisJVTMyIG1h
dGNoSW5kZXggPSBoYXNoVGFibGVbaF07DQorCVUzMiAqY29uc3QgYnQgPSB6Yy0+Y2hhaW5UYWJs
ZTsNCisJY29uc3QgVTMyIGJ0TG9nID0gemMtPnBhcmFtcy5jUGFyYW1zLmNoYWluTG9nIC0gMTsN
CisJY29uc3QgVTMyIGJ0TWFzayA9ICgxVSA8PCBidExvZykgLSAxOw0KKwlzaXplX3QgY29tbW9u
TGVuZ3RoU21hbGxlciA9IDAsIGNvbW1vbkxlbmd0aExhcmdlciA9IDA7DQorCWNvbnN0IEJZVEUg
KmNvbnN0IGRpY3RCYXNlID0gemMtPmRpY3RCYXNlOw0KKwljb25zdCBVMzIgZGljdExpbWl0ID0g
emMtPmRpY3RMaW1pdDsNCisJY29uc3QgQllURSAqY29uc3QgZGljdEVuZCA9IGRpY3RCYXNlICsg
ZGljdExpbWl0Ow0KKwljb25zdCBCWVRFICpjb25zdCBwcmVmaXhTdGFydCA9IGJhc2UgKyBkaWN0
TGltaXQ7DQorCWNvbnN0IFUzMiBidExvdyA9IGJ0TWFzayA+PSBjdXJyID8gMCA6IGN1cnIgLSBi
dE1hc2s7DQorCWNvbnN0IFUzMiB3aW5kb3dMb3cgPSB6Yy0+bG93TGltaXQ7DQorCVUzMiAqc21h
bGxlclB0ciA9IGJ0ICsgMiAqIChjdXJyICYgYnRNYXNrKTsNCisJVTMyICpsYXJnZXJQdHIgPSBi
dCArIDIgKiAoY3VyciAmIGJ0TWFzaykgKyAxOw0KKwlVMzIgbWF0Y2hFbmRJZHggPSBjdXJyICsg
ODsNCisJVTMyIGR1bW15MzI7IC8qIHRvIGJlIG51bGxpZmllZCBhdCB0aGUgZW5kICovDQorCVUz
MiBtbnVtID0gMDsNCisNCisJY29uc3QgVTMyIG1pbk1hdGNoID0gKG1scyA9PSAzKSA/IDMgOiA0
Ow0KKwlzaXplX3QgYmVzdExlbmd0aCA9IG1pbk1hdGNoTGVuIC0gMTsNCisNCisJaWYgKG1pbk1h
dGNoID09IDMpIHsgLyogSEMzIG1hdGNoIGZpbmRlciAqLw0KKwkJVTMyIGNvbnN0IG1hdGNoSW5k
ZXgzID0gWlNURF9pbnNlcnRBbmRGaW5kRmlyc3RJbmRleEhhc2gzKHpjLCBpcCk7DQorCQlpZiAo
bWF0Y2hJbmRleDMgPiB3aW5kb3dMb3cgJiYgKGN1cnIgLSBtYXRjaEluZGV4MyA8ICgxIDw8IDE4
KSkpIHsNCisJCQljb25zdCBCWVRFICptYXRjaDsNCisJCQlzaXplX3QgY3Vyck1sID0gMDsNCisJ
CQlpZiAoKCFleHREaWN0KSB8fCBtYXRjaEluZGV4MyA+PSBkaWN0TGltaXQpIHsNCisJCQkJbWF0
Y2ggPSBiYXNlICsgbWF0Y2hJbmRleDM7DQorCQkJCWlmIChtYXRjaFtiZXN0TGVuZ3RoXSA9PSBp
cFtiZXN0TGVuZ3RoXSkNCisJCQkJCWN1cnJNbCA9IFpTVERfY291bnQoaXAsIG1hdGNoLCBpTGlt
aXQpOw0KKwkJCX0gZWxzZSB7DQorCQkJCW1hdGNoID0gZGljdEJhc2UgKyBtYXRjaEluZGV4MzsN
CisJCQkJaWYgKFpTVERfcmVhZE1JTk1BVENIKG1hdGNoLCBNSU5NQVRDSCkgPT0NCisJCQkJICAg
IFpTVERfcmVhZE1JTk1BVENIKGlwLCBNSU5NQVRDSCkpIC8qIGFzc3VtcHRpb24gOiBtYXRjaElu
ZGV4MyA8PSBkaWN0TGltaXQtNCAoYnkgdGFibGUgY29uc3RydWN0aW9uKSAqLw0KKwkJCQkJY3Vy
ck1sID0gWlNURF9jb3VudF8yc2VnbWVudHMoaXAgKyBNSU5NQVRDSCwgbWF0Y2ggKyBNSU5NQVRD
SCwgaUxpbWl0LCBkaWN0RW5kLCBwcmVmaXhTdGFydCkgKyBNSU5NQVRDSDsNCisJCQl9DQorDQor
CQkJLyogc2F2ZSBiZXN0IHNvbHV0aW9uICovDQorCQkJaWYgKGN1cnJNbCA+IGJlc3RMZW5ndGgp
IHsNCisJCQkJYmVzdExlbmd0aCA9IGN1cnJNbDsNCisJCQkJbWF0Y2hlc1ttbnVtXS5vZmYgPSBa
U1REX1JFUF9NT1ZFX09QVCArIGN1cnIgLSBtYXRjaEluZGV4MzsNCisJCQkJbWF0Y2hlc1ttbnVt
XS5sZW4gPSAoVTMyKWN1cnJNbDsNCisJCQkJbW51bSsrOw0KKwkJCQlpZiAoY3Vyck1sID4gWlNU
RF9PUFRfTlVNKQ0KKwkJCQkJZ290byB1cGRhdGU7DQorCQkJCWlmIChpcCArIGN1cnJNbCA9PSBp
TGltaXQpDQorCQkJCQlnb3RvIHVwZGF0ZTsgLyogYmVzdCBwb3NzaWJsZSwgYW5kIGF2b2lkIHJl
YWQgb3ZlcmZsb3cqLw0KKwkJCX0NCisJCX0NCisJfQ0KKw0KKwloYXNoVGFibGVbaF0gPSBjdXJy
OyAvKiBVcGRhdGUgSGFzaCBUYWJsZSAqLw0KKw0KKwl3aGlsZSAobmJDb21wYXJlcy0tICYmICht
YXRjaEluZGV4ID4gd2luZG93TG93KSkgew0KKwkJVTMyICpuZXh0UHRyID0gYnQgKyAyICogKG1h
dGNoSW5kZXggJiBidE1hc2spOw0KKwkJc2l6ZV90IG1hdGNoTGVuZ3RoID0gTUlOKGNvbW1vbkxl
bmd0aFNtYWxsZXIsIGNvbW1vbkxlbmd0aExhcmdlcik7IC8qIGd1YXJhbnRlZWQgbWluaW11bSBu
YiBvZiBjb21tb24gYnl0ZXMgKi8NCisJCWNvbnN0IEJZVEUgKm1hdGNoOw0KKw0KKwkJaWYgKCgh
ZXh0RGljdCkgfHwgKG1hdGNoSW5kZXggKyBtYXRjaExlbmd0aCA+PSBkaWN0TGltaXQpKSB7DQor
CQkJbWF0Y2ggPSBiYXNlICsgbWF0Y2hJbmRleDsNCisJCQlpZiAobWF0Y2hbbWF0Y2hMZW5ndGhd
ID09IGlwW21hdGNoTGVuZ3RoXSkgew0KKwkJCQltYXRjaExlbmd0aCArPSBaU1REX2NvdW50KGlw
ICsgbWF0Y2hMZW5ndGggKyAxLCBtYXRjaCArIG1hdGNoTGVuZ3RoICsgMSwgaUxpbWl0KSArIDE7
DQorCQkJfQ0KKwkJfSBlbHNlIHsNCisJCQltYXRjaCA9IGRpY3RCYXNlICsgbWF0Y2hJbmRleDsN
CisJCQltYXRjaExlbmd0aCArPSBaU1REX2NvdW50XzJzZWdtZW50cyhpcCArIG1hdGNoTGVuZ3Ro
LCBtYXRjaCArIG1hdGNoTGVuZ3RoLCBpTGltaXQsIGRpY3RFbmQsIHByZWZpeFN0YXJ0KTsNCisJ
CQlpZiAobWF0Y2hJbmRleCArIG1hdGNoTGVuZ3RoID49IGRpY3RMaW1pdCkNCisJCQkJbWF0Y2gg
PSBiYXNlICsgbWF0Y2hJbmRleDsgLyogdG8gcHJlcGFyZSBmb3IgbmV4dCB1c2FnZSBvZiBtYXRj
aFttYXRjaExlbmd0aF0gKi8NCisJCX0NCisNCisJCWlmIChtYXRjaExlbmd0aCA+IGJlc3RMZW5n
dGgpIHsNCisJCQlpZiAobWF0Y2hMZW5ndGggPiBtYXRjaEVuZElkeCAtIG1hdGNoSW5kZXgpDQor
CQkJCW1hdGNoRW5kSWR4ID0gbWF0Y2hJbmRleCArIChVMzIpbWF0Y2hMZW5ndGg7DQorCQkJYmVz
dExlbmd0aCA9IG1hdGNoTGVuZ3RoOw0KKwkJCW1hdGNoZXNbbW51bV0ub2ZmID0gWlNURF9SRVBf
TU9WRV9PUFQgKyBjdXJyIC0gbWF0Y2hJbmRleDsNCisJCQltYXRjaGVzW21udW1dLmxlbiA9IChV
MzIpbWF0Y2hMZW5ndGg7DQorCQkJbW51bSsrOw0KKwkJCWlmIChtYXRjaExlbmd0aCA+IFpTVERf
T1BUX05VTSkNCisJCQkJYnJlYWs7DQorCQkJaWYgKGlwICsgbWF0Y2hMZW5ndGggPT0gaUxpbWl0
KSAvKiBlcXVhbCA6IG5vIHdheSB0byBrbm93IGlmIGluZiBvciBzdXAgKi8NCisJCQkJYnJlYWs7
CQkJLyogZHJvcCwgdG8gZ3VhcmFudGVlIGNvbnNpc3RlbmN5IChtaXNzIGEgbGl0dGxlIGJpdCBv
ZiBjb21wcmVzc2lvbikgKi8NCisJCX0NCisNCisJCWlmIChtYXRjaFttYXRjaExlbmd0aF0gPCBp
cFttYXRjaExlbmd0aF0pIHsNCisJCQkvKiBtYXRjaCBpcyBzbWFsbGVyIHRoYW4gY3VyciAqLw0K
KwkJCSpzbWFsbGVyUHRyID0gbWF0Y2hJbmRleDsJICAvKiB1cGRhdGUgc21hbGxlciBpZHggKi8N
CisJCQljb21tb25MZW5ndGhTbWFsbGVyID0gbWF0Y2hMZW5ndGg7IC8qIGFsbCBzbWFsbGVyIHdp
bGwgbm93IGhhdmUgYXQgbGVhc3QgdGhpcyBndWFyYW50ZWVkIGNvbW1vbiBsZW5ndGggKi8NCisJ
CQlpZiAobWF0Y2hJbmRleCA8PSBidExvdykgew0KKwkJCQlzbWFsbGVyUHRyID0gJmR1bW15MzI7
DQorCQkJCWJyZWFrOw0KKwkJCX0JCQkgIC8qIGJleW9uZCB0cmVlIHNpemUsIHN0b3AgdGhlIHNl
YXJjaCAqLw0KKwkJCXNtYWxsZXJQdHIgPSBuZXh0UHRyICsgMTsgLyogbmV3ICJzbWFsbGVyIiA9
PiBsYXJnZXIgb2YgbWF0Y2ggKi8NCisJCQltYXRjaEluZGV4ID0gbmV4dFB0clsxXTsgIC8qIG5l
dyBtYXRjaEluZGV4IGxhcmdlciB0aGFuIHByZXZpb3VzIChjbG9zZXIgdG8gY3VycikgKi8NCisJ
CX0gZWxzZSB7DQorCQkJLyogbWF0Y2ggaXMgbGFyZ2VyIHRoYW4gY3VyciAqLw0KKwkJCSpsYXJn
ZXJQdHIgPSBtYXRjaEluZGV4Ow0KKwkJCWNvbW1vbkxlbmd0aExhcmdlciA9IG1hdGNoTGVuZ3Ro
Ow0KKwkJCWlmIChtYXRjaEluZGV4IDw9IGJ0TG93KSB7DQorCQkJCWxhcmdlclB0ciA9ICZkdW1t
eTMyOw0KKwkJCQlicmVhazsNCisJCQl9IC8qIGJleW9uZCB0cmVlIHNpemUsIHN0b3AgdGhlIHNl
YXJjaCAqLw0KKwkJCWxhcmdlclB0ciA9IG5leHRQdHI7DQorCQkJbWF0Y2hJbmRleCA9IG5leHRQ
dHJbMF07DQorCQl9DQorCX0NCisNCisJKnNtYWxsZXJQdHIgPSAqbGFyZ2VyUHRyID0gMDsNCisN
Cit1cGRhdGU6DQorCXpjLT5uZXh0VG9VcGRhdGUgPSAobWF0Y2hFbmRJZHggPiBjdXJyICsgOCkg
PyBtYXRjaEVuZElkeCAtIDggOiBjdXJyICsgMTsNCisJcmV0dXJuIG1udW07DQorfQ0KKw0KKy8q
KiBUcmVlIHVwZGF0ZXIsIHByb3ZpZGluZyBiZXN0IG1hdGNoICovDQorc3RhdGljIFUzMiBaU1RE
X0J0R2V0QWxsTWF0Y2hlcyhaU1REX0NDdHggKnpjLCBjb25zdCBCWVRFICpjb25zdCBpcCwgY29u
c3QgQllURSAqY29uc3QgaUxpbWl0LCBjb25zdCBVMzIgbWF4TmJBdHRlbXB0cywgY29uc3QgVTMy
IG1scywgWlNURF9tYXRjaF90ICptYXRjaGVzLA0KKwkJCQljb25zdCBVMzIgbWluTWF0Y2hMZW4p
DQorew0KKwlpZiAoaXAgPCB6Yy0+YmFzZSArIHpjLT5uZXh0VG9VcGRhdGUpDQorCQlyZXR1cm4g
MDsgLyogc2tpcHBlZCBhcmVhICovDQorCVpTVERfdXBkYXRlVHJlZSh6YywgaXAsIGlMaW1pdCwg
bWF4TmJBdHRlbXB0cywgbWxzKTsNCisJcmV0dXJuIFpTVERfaW5zZXJ0QnRBbmRHZXRBbGxNYXRj
aGVzKHpjLCBpcCwgaUxpbWl0LCBtYXhOYkF0dGVtcHRzLCBtbHMsIDAsIG1hdGNoZXMsIG1pbk1h
dGNoTGVuKTsNCit9DQorDQorc3RhdGljIFUzMiBaU1REX0J0R2V0QWxsTWF0Y2hlc19zZWxlY3RN
TFMoWlNURF9DQ3R4ICp6YywgLyogSW5kZXggdGFibGUgd2lsbCBiZSB1cGRhdGVkICovDQorCQkJ
CQkgIGNvbnN0IEJZVEUgKmlwLCBjb25zdCBCWVRFICpjb25zdCBpSGlnaExpbWl0LCBjb25zdCBV
MzIgbWF4TmJBdHRlbXB0cywgY29uc3QgVTMyIG1hdGNoTGVuZ3RoU2VhcmNoLA0KKwkJCQkJICBa
U1REX21hdGNoX3QgKm1hdGNoZXMsIGNvbnN0IFUzMiBtaW5NYXRjaExlbikNCit7DQorCXN3aXRj
aCAobWF0Y2hMZW5ndGhTZWFyY2gpIHsNCisJY2FzZSAzOiByZXR1cm4gWlNURF9CdEdldEFsbE1h
dGNoZXMoemMsIGlwLCBpSGlnaExpbWl0LCBtYXhOYkF0dGVtcHRzLCAzLCBtYXRjaGVzLCBtaW5N
YXRjaExlbik7DQorCWRlZmF1bHQ6DQorCWNhc2UgNDogcmV0dXJuIFpTVERfQnRHZXRBbGxNYXRj
aGVzKHpjLCBpcCwgaUhpZ2hMaW1pdCwgbWF4TmJBdHRlbXB0cywgNCwgbWF0Y2hlcywgbWluTWF0
Y2hMZW4pOw0KKwljYXNlIDU6IHJldHVybiBaU1REX0J0R2V0QWxsTWF0Y2hlcyh6YywgaXAsIGlI
aWdoTGltaXQsIG1heE5iQXR0ZW1wdHMsIDUsIG1hdGNoZXMsIG1pbk1hdGNoTGVuKTsNCisJY2Fz
ZSA3Og0KKwljYXNlIDY6IHJldHVybiBaU1REX0J0R2V0QWxsTWF0Y2hlcyh6YywgaXAsIGlIaWdo
TGltaXQsIG1heE5iQXR0ZW1wdHMsIDYsIG1hdGNoZXMsIG1pbk1hdGNoTGVuKTsNCisJfQ0KK30N
CisNCisvKiogVHJlZSB1cGRhdGVyLCBwcm92aWRpbmcgYmVzdCBtYXRjaCAqLw0KK3N0YXRpYyBV
MzIgWlNURF9CdEdldEFsbE1hdGNoZXNfZXh0RGljdChaU1REX0NDdHggKnpjLCBjb25zdCBCWVRF
ICpjb25zdCBpcCwgY29uc3QgQllURSAqY29uc3QgaUxpbWl0LCBjb25zdCBVMzIgbWF4TmJBdHRl
bXB0cywgY29uc3QgVTMyIG1scywNCisJCQkJCVpTVERfbWF0Y2hfdCAqbWF0Y2hlcywgY29uc3Qg
VTMyIG1pbk1hdGNoTGVuKQ0KK3sNCisJaWYgKGlwIDwgemMtPmJhc2UgKyB6Yy0+bmV4dFRvVXBk
YXRlKQ0KKwkJcmV0dXJuIDA7IC8qIHNraXBwZWQgYXJlYSAqLw0KKwlaU1REX3VwZGF0ZVRyZWVf
ZXh0RGljdCh6YywgaXAsIGlMaW1pdCwgbWF4TmJBdHRlbXB0cywgbWxzKTsNCisJcmV0dXJuIFpT
VERfaW5zZXJ0QnRBbmRHZXRBbGxNYXRjaGVzKHpjLCBpcCwgaUxpbWl0LCBtYXhOYkF0dGVtcHRz
LCBtbHMsIDEsIG1hdGNoZXMsIG1pbk1hdGNoTGVuKTsNCit9DQorDQorc3RhdGljIFUzMiBaU1RE
X0J0R2V0QWxsTWF0Y2hlc19zZWxlY3RNTFNfZXh0RGljdChaU1REX0NDdHggKnpjLCAvKiBJbmRl
eCB0YWJsZSB3aWxsIGJlIHVwZGF0ZWQgKi8NCisJCQkJCQkgIGNvbnN0IEJZVEUgKmlwLCBjb25z
dCBCWVRFICpjb25zdCBpSGlnaExpbWl0LCBjb25zdCBVMzIgbWF4TmJBdHRlbXB0cywgY29uc3Qg
VTMyIG1hdGNoTGVuZ3RoU2VhcmNoLA0KKwkJCQkJCSAgWlNURF9tYXRjaF90ICptYXRjaGVzLCBj
b25zdCBVMzIgbWluTWF0Y2hMZW4pDQorew0KKwlzd2l0Y2ggKG1hdGNoTGVuZ3RoU2VhcmNoKSB7
DQorCWNhc2UgMzogcmV0dXJuIFpTVERfQnRHZXRBbGxNYXRjaGVzX2V4dERpY3QoemMsIGlwLCBp
SGlnaExpbWl0LCBtYXhOYkF0dGVtcHRzLCAzLCBtYXRjaGVzLCBtaW5NYXRjaExlbik7DQorCWRl
ZmF1bHQ6DQorCWNhc2UgNDogcmV0dXJuIFpTVERfQnRHZXRBbGxNYXRjaGVzX2V4dERpY3QoemMs
IGlwLCBpSGlnaExpbWl0LCBtYXhOYkF0dGVtcHRzLCA0LCBtYXRjaGVzLCBtaW5NYXRjaExlbik7
DQorCWNhc2UgNTogcmV0dXJuIFpTVERfQnRHZXRBbGxNYXRjaGVzX2V4dERpY3QoemMsIGlwLCBp
SGlnaExpbWl0LCBtYXhOYkF0dGVtcHRzLCA1LCBtYXRjaGVzLCBtaW5NYXRjaExlbik7DQorCWNh
c2UgNzoNCisJY2FzZSA2OiByZXR1cm4gWlNURF9CdEdldEFsbE1hdGNoZXNfZXh0RGljdCh6Yywg
aXAsIGlIaWdoTGltaXQsIG1heE5iQXR0ZW1wdHMsIDYsIG1hdGNoZXMsIG1pbk1hdGNoTGVuKTsN
CisJfQ0KK30NCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorKiAgT3B0
aW1hbCBwYXJzZXINCisqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQorRk9SQ0Vf
SU5MSU5FDQordm9pZCBaU1REX2NvbXByZXNzQmxvY2tfb3B0X2dlbmVyaWMoWlNURF9DQ3R4ICpj
dHgsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIGNvbnN0IGludCB1bHRyYSkNCit7
DQorCXNlcVN0b3JlX3QgKnNlcVN0b3JlUHRyID0gJihjdHgtPnNlcVN0b3JlKTsNCisJY29uc3Qg
QllURSAqY29uc3QgaXN0YXJ0ID0gKGNvbnN0IEJZVEUgKilzcmM7DQorCWNvbnN0IEJZVEUgKmlw
ID0gaXN0YXJ0Ow0KKwljb25zdCBCWVRFICphbmNob3IgPSBpc3RhcnQ7DQorCWNvbnN0IEJZVEUg
KmNvbnN0IGllbmQgPSBpc3RhcnQgKyBzcmNTaXplOw0KKwljb25zdCBCWVRFICpjb25zdCBpbGlt
aXQgPSBpZW5kIC0gODsNCisJY29uc3QgQllURSAqY29uc3QgYmFzZSA9IGN0eC0+YmFzZTsNCisJ
Y29uc3QgQllURSAqY29uc3QgcHJlZml4U3RhcnQgPSBiYXNlICsgY3R4LT5kaWN0TGltaXQ7DQor
DQorCWNvbnN0IFUzMiBtYXhTZWFyY2hlcyA9IDFVIDw8IGN0eC0+cGFyYW1zLmNQYXJhbXMuc2Vh
cmNoTG9nOw0KKwljb25zdCBVMzIgc3VmZmljaWVudF9sZW4gPSBjdHgtPnBhcmFtcy5jUGFyYW1z
LnRhcmdldExlbmd0aDsNCisJY29uc3QgVTMyIG1scyA9IGN0eC0+cGFyYW1zLmNQYXJhbXMuc2Vh
cmNoTGVuZ3RoOw0KKwljb25zdCBVMzIgbWluTWF0Y2ggPSAoY3R4LT5wYXJhbXMuY1BhcmFtcy5z
ZWFyY2hMZW5ndGggPT0gMykgPyAzIDogNDsNCisNCisJWlNURF9vcHRpbWFsX3QgKm9wdCA9IHNl
cVN0b3JlUHRyLT5wcmljZVRhYmxlOw0KKwlaU1REX21hdGNoX3QgKm1hdGNoZXMgPSBzZXFTdG9y
ZVB0ci0+bWF0Y2hUYWJsZTsNCisJY29uc3QgQllURSAqaW5yOw0KKwlVMzIgb2Zmc2V0LCByZXBb
WlNURF9SRVBfTlVNXTsNCisNCisJLyogaW5pdCAqLw0KKwljdHgtPm5leHRUb1VwZGF0ZTMgPSBj
dHgtPm5leHRUb1VwZGF0ZTsNCisJWlNURF9yZXNjYWxlRnJlcXMoc2VxU3RvcmVQdHIsIChjb25z
dCBCWVRFICopc3JjLCBzcmNTaXplKTsNCisJaXAgKz0gKGlwID09IHByZWZpeFN0YXJ0KTsNCisJ
ew0KKwkJVTMyIGk7DQorCQlmb3IgKGkgPSAwOyBpIDwgWlNURF9SRVBfTlVNOyBpKyspDQorCQkJ
cmVwW2ldID0gY3R4LT5yZXBbaV07DQorCX0NCisNCisJLyogTWF0Y2ggTG9vcCAqLw0KKwl3aGls
ZSAoaXAgPCBpbGltaXQpIHsNCisJCVUzMiBjdXIsIG1hdGNoX251bSwgbGFzdF9wb3MsIGxpdGxl
biwgcHJpY2U7DQorCQlVMzIgdSwgbWxlbiwgYmVzdF9tbGVuLCBiZXN0X29mZiwgbGl0TGVuZ3Ro
Ow0KKwkJbWVtc2V0KG9wdCwgMCwgc2l6ZW9mKFpTVERfb3B0aW1hbF90KSk7DQorCQlsYXN0X3Bv
cyA9IDA7DQorCQlsaXRsZW4gPSAoVTMyKShpcCAtIGFuY2hvcik7DQorDQorCQkvKiBjaGVjayBy
ZXBDb2RlICovDQorCQl7DQorCQkJVTMyIGksIGxhc3RfaSA9IFpTVERfUkVQX0NIRUNLICsgKGlw
ID09IGFuY2hvcik7DQorCQkJZm9yIChpID0gKGlwID09IGFuY2hvcik7IGkgPCBsYXN0X2k7IGkr
Kykgew0KKwkJCQljb25zdCBTMzIgcmVwQ3VyID0gKGkgPT0gWlNURF9SRVBfTU9WRV9PUFQpID8g
KHJlcFswXSAtIDEpIDogcmVwW2ldOw0KKwkJCQlpZiAoKHJlcEN1ciA+IDApICYmIChyZXBDdXIg
PCAoUzMyKShpcCAtIHByZWZpeFN0YXJ0KSkgJiYNCisJCQkJICAgIChaU1REX3JlYWRNSU5NQVRD
SChpcCwgbWluTWF0Y2gpID09IFpTVERfcmVhZE1JTk1BVENIKGlwIC0gcmVwQ3VyLCBtaW5NYXRj
aCkpKSB7DQorCQkJCQltbGVuID0gKFUzMilaU1REX2NvdW50KGlwICsgbWluTWF0Y2gsIGlwICsg
bWluTWF0Y2ggLSByZXBDdXIsIGllbmQpICsgbWluTWF0Y2g7DQorCQkJCQlpZiAobWxlbiA+IHN1
ZmZpY2llbnRfbGVuIHx8IG1sZW4gPj0gWlNURF9PUFRfTlVNKSB7DQorCQkJCQkJYmVzdF9tbGVu
ID0gbWxlbjsNCisJCQkJCQliZXN0X29mZiA9IGk7DQorCQkJCQkJY3VyID0gMDsNCisJCQkJCQls
YXN0X3BvcyA9IDE7DQorCQkJCQkJZ290byBfc3RvcmVTZXF1ZW5jZTsNCisJCQkJCX0NCisJCQkJ
CWJlc3Rfb2ZmID0gaSAtIChpcCA9PSBhbmNob3IpOw0KKwkJCQkJZG8gew0KKwkJCQkJCXByaWNl
ID0gWlNURF9nZXRQcmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBhbmNob3IsIGJlc3Rfb2ZmLCBt
bGVuIC0gTUlOTUFUQ0gsIHVsdHJhKTsNCisJCQkJCQlpZiAobWxlbiA+IGxhc3RfcG9zIHx8IHBy
aWNlIDwgb3B0W21sZW5dLnByaWNlKQ0KKwkJCQkJCQlTRVRfUFJJQ0UobWxlbiwgbWxlbiwgaSwg
bGl0bGVuLCBwcmljZSk7IC8qIG5vdGUgOiBtYWNybyBtb2RpZmllcyBsYXN0X3BvcyAqLw0KKwkJ
CQkJCW1sZW4tLTsNCisJCQkJCX0gd2hpbGUgKG1sZW4gPj0gbWluTWF0Y2gpOw0KKwkJCQl9DQor
CQkJfQ0KKwkJfQ0KKw0KKwkJbWF0Y2hfbnVtID0gWlNURF9CdEdldEFsbE1hdGNoZXNfc2VsZWN0
TUxTKGN0eCwgaXAsIGllbmQsIG1heFNlYXJjaGVzLCBtbHMsIG1hdGNoZXMsIG1pbk1hdGNoKTsN
CisNCisJCWlmICghbGFzdF9wb3MgJiYgIW1hdGNoX251bSkgew0KKwkJCWlwKys7DQorCQkJY29u
dGludWU7DQorCQl9DQorDQorCQlpZiAobWF0Y2hfbnVtICYmIChtYXRjaGVzW21hdGNoX251bSAt
IDFdLmxlbiA+IHN1ZmZpY2llbnRfbGVuIHx8IG1hdGNoZXNbbWF0Y2hfbnVtIC0gMV0ubGVuID49
IFpTVERfT1BUX05VTSkpIHsNCisJCQliZXN0X21sZW4gPSBtYXRjaGVzW21hdGNoX251bSAtIDFd
LmxlbjsNCisJCQliZXN0X29mZiA9IG1hdGNoZXNbbWF0Y2hfbnVtIC0gMV0ub2ZmOw0KKwkJCWN1
ciA9IDA7DQorCQkJbGFzdF9wb3MgPSAxOw0KKwkJCWdvdG8gX3N0b3JlU2VxdWVuY2U7DQorCQl9
DQorDQorCQkvKiBzZXQgcHJpY2VzIHVzaW5nIG1hdGNoZXMgYXQgcG9zaXRpb24gPSAwICovDQor
CQliZXN0X21sZW4gPSAobGFzdF9wb3MpID8gbGFzdF9wb3MgOiBtaW5NYXRjaDsNCisJCWZvciAo
dSA9IDA7IHUgPCBtYXRjaF9udW07IHUrKykgew0KKwkJCW1sZW4gPSAodSA+IDApID8gbWF0Y2hl
c1t1IC0gMV0ubGVuICsgMSA6IGJlc3RfbWxlbjsNCisJCQliZXN0X21sZW4gPSBtYXRjaGVzW3Vd
LmxlbjsNCisJCQl3aGlsZSAobWxlbiA8PSBiZXN0X21sZW4pIHsNCisJCQkJcHJpY2UgPSBaU1RE
X2dldFByaWNlKHNlcVN0b3JlUHRyLCBsaXRsZW4sIGFuY2hvciwgbWF0Y2hlc1t1XS5vZmYgLSAx
LCBtbGVuIC0gTUlOTUFUQ0gsIHVsdHJhKTsNCisJCQkJaWYgKG1sZW4gPiBsYXN0X3BvcyB8fCBw
cmljZSA8IG9wdFttbGVuXS5wcmljZSkNCisJCQkJCVNFVF9QUklDRShtbGVuLCBtbGVuLCBtYXRj
aGVzW3VdLm9mZiwgbGl0bGVuLCBwcmljZSk7IC8qIG5vdGUgOiBtYWNybyBtb2RpZmllcyBsYXN0
X3BvcyAqLw0KKwkJCQltbGVuKys7DQorCQkJfQ0KKwkJfQ0KKw0KKwkJaWYgKGxhc3RfcG9zIDwg
bWluTWF0Y2gpIHsNCisJCQlpcCsrOw0KKwkJCWNvbnRpbnVlOw0KKwkJfQ0KKw0KKwkJLyogaW5p
dGlhbGl6ZSBvcHRbMF0gKi8NCisJCXsNCisJCQlVMzIgaTsNCisJCQlmb3IgKGkgPSAwOyBpIDwg
WlNURF9SRVBfTlVNOyBpKyspDQorCQkJCW9wdFswXS5yZXBbaV0gPSByZXBbaV07DQorCQl9DQor
CQlvcHRbMF0ubWxlbiA9IDE7DQorCQlvcHRbMF0ubGl0bGVuID0gbGl0bGVuOw0KKw0KKwkJLyog
Y2hlY2sgZnVydGhlciBwb3NpdGlvbnMgKi8NCisJCWZvciAoY3VyID0gMTsgY3VyIDw9IGxhc3Rf
cG9zOyBjdXIrKykgew0KKwkJCWluciA9IGlwICsgY3VyOw0KKw0KKwkJCWlmIChvcHRbY3VyIC0g
MV0ubWxlbiA9PSAxKSB7DQorCQkJCWxpdGxlbiA9IG9wdFtjdXIgLSAxXS5saXRsZW4gKyAxOw0K
KwkJCQlpZiAoY3VyID4gbGl0bGVuKSB7DQorCQkJCQlwcmljZSA9IG9wdFtjdXIgLSBsaXRsZW5d
LnByaWNlICsgWlNURF9nZXRMaXRlcmFsUHJpY2Uoc2VxU3RvcmVQdHIsIGxpdGxlbiwgaW5yIC0g
bGl0bGVuKTsNCisJCQkJfSBlbHNlDQorCQkJCQlwcmljZSA9IFpTVERfZ2V0TGl0ZXJhbFByaWNl
KHNlcVN0b3JlUHRyLCBsaXRsZW4sIGFuY2hvcik7DQorCQkJfSBlbHNlIHsNCisJCQkJbGl0bGVu
ID0gMTsNCisJCQkJcHJpY2UgPSBvcHRbY3VyIC0gMV0ucHJpY2UgKyBaU1REX2dldExpdGVyYWxQ
cmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBpbnIgLSAxKTsNCisJCQl9DQorDQorCQkJaWYgKGN1
ciA+IGxhc3RfcG9zIHx8IHByaWNlIDw9IG9wdFtjdXJdLnByaWNlKQ0KKwkJCQlTRVRfUFJJQ0Uo
Y3VyLCAxLCAwLCBsaXRsZW4sIHByaWNlKTsNCisNCisJCQlpZiAoY3VyID09IGxhc3RfcG9zKQ0K
KwkJCQlicmVhazsNCisNCisJCQlpZiAoaW5yID4gaWxpbWl0KSAvKiBsYXN0IG1hdGNoIG11c3Qg
c3RhcnQgYXQgYSBtaW5pbXVtIGRpc3RhbmNlIG9mIDggZnJvbSBvZW5kICovDQorCQkJCWNvbnRp
bnVlOw0KKw0KKwkJCW1sZW4gPSBvcHRbY3VyXS5tbGVuOw0KKwkJCWlmIChvcHRbY3VyXS5vZmYg
PiBaU1REX1JFUF9NT1ZFX09QVCkgew0KKwkJCQlvcHRbY3VyXS5yZXBbMl0gPSBvcHRbY3VyIC0g
bWxlbl0ucmVwWzFdOw0KKwkJCQlvcHRbY3VyXS5yZXBbMV0gPSBvcHRbY3VyIC0gbWxlbl0ucmVw
WzBdOw0KKwkJCQlvcHRbY3VyXS5yZXBbMF0gPSBvcHRbY3VyXS5vZmYgLSBaU1REX1JFUF9NT1ZF
X09QVDsNCisJCQl9IGVsc2Ugew0KKwkJCQlvcHRbY3VyXS5yZXBbMl0gPSAob3B0W2N1cl0ub2Zm
ID4gMSkgPyBvcHRbY3VyIC0gbWxlbl0ucmVwWzFdIDogb3B0W2N1ciAtIG1sZW5dLnJlcFsyXTsN
CisJCQkJb3B0W2N1cl0ucmVwWzFdID0gKG9wdFtjdXJdLm9mZiA+IDApID8gb3B0W2N1ciAtIG1s
ZW5dLnJlcFswXSA6IG9wdFtjdXIgLSBtbGVuXS5yZXBbMV07DQorCQkJCW9wdFtjdXJdLnJlcFsw
XSA9DQorCQkJCSAgICAoKG9wdFtjdXJdLm9mZiA9PSBaU1REX1JFUF9NT1ZFX09QVCkgJiYgKG1s
ZW4gIT0gMSkpID8gKG9wdFtjdXIgLSBtbGVuXS5yZXBbMF0gLSAxKSA6IChvcHRbY3VyIC0gbWxl
bl0ucmVwW29wdFtjdXJdLm9mZl0pOw0KKwkJCX0NCisNCisJCQliZXN0X21sZW4gPSBtaW5NYXRj
aDsNCisJCQl7DQorCQkJCVUzMiBpLCBsYXN0X2kgPSBaU1REX1JFUF9DSEVDSyArIChtbGVuICE9
IDEpOw0KKwkJCQlmb3IgKGkgPSAob3B0W2N1cl0ubWxlbiAhPSAxKTsgaSA8IGxhc3RfaTsgaSsr
KSB7IC8qIGNoZWNrIHJlcCAqLw0KKwkJCQkJY29uc3QgUzMyIHJlcEN1ciA9IChpID09IFpTVERf
UkVQX01PVkVfT1BUKSA/IChvcHRbY3VyXS5yZXBbMF0gLSAxKSA6IG9wdFtjdXJdLnJlcFtpXTsN
CisJCQkJCWlmICgocmVwQ3VyID4gMCkgJiYgKHJlcEN1ciA8IChTMzIpKGluciAtIHByZWZpeFN0
YXJ0KSkgJiYNCisJCQkJCSAgICAoWlNURF9yZWFkTUlOTUFUQ0goaW5yLCBtaW5NYXRjaCkgPT0g
WlNURF9yZWFkTUlOTUFUQ0goaW5yIC0gcmVwQ3VyLCBtaW5NYXRjaCkpKSB7DQorCQkJCQkJbWxl
biA9IChVMzIpWlNURF9jb3VudChpbnIgKyBtaW5NYXRjaCwgaW5yICsgbWluTWF0Y2ggLSByZXBD
dXIsIGllbmQpICsgbWluTWF0Y2g7DQorDQorCQkJCQkJaWYgKG1sZW4gPiBzdWZmaWNpZW50X2xl
biB8fCBjdXIgKyBtbGVuID49IFpTVERfT1BUX05VTSkgew0KKwkJCQkJCQliZXN0X21sZW4gPSBt
bGVuOw0KKwkJCQkJCQliZXN0X29mZiA9IGk7DQorCQkJCQkJCWxhc3RfcG9zID0gY3VyICsgMTsN
CisJCQkJCQkJZ290byBfc3RvcmVTZXF1ZW5jZTsNCisJCQkJCQl9DQorDQorCQkJCQkJYmVzdF9v
ZmYgPSBpIC0gKG9wdFtjdXJdLm1sZW4gIT0gMSk7DQorCQkJCQkJaWYgKG1sZW4gPiBiZXN0X21s
ZW4pDQorCQkJCQkJCWJlc3RfbWxlbiA9IG1sZW47DQorDQorCQkJCQkJZG8gew0KKwkJCQkJCQlp
ZiAob3B0W2N1cl0ubWxlbiA9PSAxKSB7DQorCQkJCQkJCQlsaXRsZW4gPSBvcHRbY3VyXS5saXRs
ZW47DQorCQkJCQkJCQlpZiAoY3VyID4gbGl0bGVuKSB7DQorCQkJCQkJCQkJcHJpY2UgPSBvcHRb
Y3VyIC0gbGl0bGVuXS5wcmljZSArIFpTVERfZ2V0UHJpY2Uoc2VxU3RvcmVQdHIsIGxpdGxlbiwg
aW5yIC0gbGl0bGVuLA0KKwkJCQkJCQkJCQkJCQkJCWJlc3Rfb2ZmLCBtbGVuIC0gTUlOTUFUQ0gs
IHVsdHJhKTsNCisJCQkJCQkJCX0gZWxzZQ0KKwkJCQkJCQkJCXByaWNlID0gWlNURF9nZXRQcmlj
ZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBhbmNob3IsIGJlc3Rfb2ZmLCBtbGVuIC0gTUlOTUFUQ0gs
IHVsdHJhKTsNCisJCQkJCQkJfSBlbHNlIHsNCisJCQkJCQkJCWxpdGxlbiA9IDA7DQorCQkJCQkJ
CQlwcmljZSA9IG9wdFtjdXJdLnByaWNlICsgWlNURF9nZXRQcmljZShzZXFTdG9yZVB0ciwgMCwg
TlVMTCwgYmVzdF9vZmYsIG1sZW4gLSBNSU5NQVRDSCwgdWx0cmEpOw0KKwkJCQkJCQl9DQorDQor
CQkJCQkJCWlmIChjdXIgKyBtbGVuID4gbGFzdF9wb3MgfHwgcHJpY2UgPD0gb3B0W2N1ciArIG1s
ZW5dLnByaWNlKQ0KKwkJCQkJCQkJU0VUX1BSSUNFKGN1ciArIG1sZW4sIG1sZW4sIGksIGxpdGxl
biwgcHJpY2UpOw0KKwkJCQkJCQltbGVuLS07DQorCQkJCQkJfSB3aGlsZSAobWxlbiA+PSBtaW5N
YXRjaCk7DQorCQkJCQl9DQorCQkJCX0NCisJCQl9DQorDQorCQkJbWF0Y2hfbnVtID0gWlNURF9C
dEdldEFsbE1hdGNoZXNfc2VsZWN0TUxTKGN0eCwgaW5yLCBpZW5kLCBtYXhTZWFyY2hlcywgbWxz
LCBtYXRjaGVzLCBiZXN0X21sZW4pOw0KKw0KKwkJCWlmIChtYXRjaF9udW0gPiAwICYmIChtYXRj
aGVzW21hdGNoX251bSAtIDFdLmxlbiA+IHN1ZmZpY2llbnRfbGVuIHx8IGN1ciArIG1hdGNoZXNb
bWF0Y2hfbnVtIC0gMV0ubGVuID49IFpTVERfT1BUX05VTSkpIHsNCisJCQkJYmVzdF9tbGVuID0g
bWF0Y2hlc1ttYXRjaF9udW0gLSAxXS5sZW47DQorCQkJCWJlc3Rfb2ZmID0gbWF0Y2hlc1ttYXRj
aF9udW0gLSAxXS5vZmY7DQorCQkJCWxhc3RfcG9zID0gY3VyICsgMTsNCisJCQkJZ290byBfc3Rv
cmVTZXF1ZW5jZTsNCisJCQl9DQorDQorCQkJLyogc2V0IHByaWNlcyB1c2luZyBtYXRjaGVzIGF0
IHBvc2l0aW9uID0gY3VyICovDQorCQkJZm9yICh1ID0gMDsgdSA8IG1hdGNoX251bTsgdSsrKSB7
DQorCQkJCW1sZW4gPSAodSA+IDApID8gbWF0Y2hlc1t1IC0gMV0ubGVuICsgMSA6IGJlc3RfbWxl
bjsNCisJCQkJYmVzdF9tbGVuID0gbWF0Y2hlc1t1XS5sZW47DQorDQorCQkJCXdoaWxlIChtbGVu
IDw9IGJlc3RfbWxlbikgew0KKwkJCQkJaWYgKG9wdFtjdXJdLm1sZW4gPT0gMSkgew0KKwkJCQkJ
CWxpdGxlbiA9IG9wdFtjdXJdLmxpdGxlbjsNCisJCQkJCQlpZiAoY3VyID4gbGl0bGVuKQ0KKwkJ
CQkJCQlwcmljZSA9IG9wdFtjdXIgLSBsaXRsZW5dLnByaWNlICsgWlNURF9nZXRQcmljZShzZXFT
dG9yZVB0ciwgbGl0bGVuLCBpcCArIGN1ciAtIGxpdGxlbiwNCisJCQkJCQkJCQkJCQkJbWF0Y2hl
c1t1XS5vZmYgLSAxLCBtbGVuIC0gTUlOTUFUQ0gsIHVsdHJhKTsNCisJCQkJCQllbHNlDQorCQkJ
CQkJCXByaWNlID0gWlNURF9nZXRQcmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBhbmNob3IsIG1h
dGNoZXNbdV0ub2ZmIC0gMSwgbWxlbiAtIE1JTk1BVENILCB1bHRyYSk7DQorCQkJCQl9IGVsc2Ug
ew0KKwkJCQkJCWxpdGxlbiA9IDA7DQorCQkJCQkJcHJpY2UgPSBvcHRbY3VyXS5wcmljZSArIFpT
VERfZ2V0UHJpY2Uoc2VxU3RvcmVQdHIsIDAsIE5VTEwsIG1hdGNoZXNbdV0ub2ZmIC0gMSwgbWxl
biAtIE1JTk1BVENILCB1bHRyYSk7DQorCQkJCQl9DQorDQorCQkJCQlpZiAoY3VyICsgbWxlbiA+
IGxhc3RfcG9zIHx8IChwcmljZSA8IG9wdFtjdXIgKyBtbGVuXS5wcmljZSkpDQorCQkJCQkJU0VU
X1BSSUNFKGN1ciArIG1sZW4sIG1sZW4sIG1hdGNoZXNbdV0ub2ZmLCBsaXRsZW4sIHByaWNlKTsN
CisNCisJCQkJCW1sZW4rKzsNCisJCQkJfQ0KKwkJCX0NCisJCX0NCisNCisJCWJlc3RfbWxlbiA9
IG9wdFtsYXN0X3Bvc10ubWxlbjsNCisJCWJlc3Rfb2ZmID0gb3B0W2xhc3RfcG9zXS5vZmY7DQor
CQljdXIgPSBsYXN0X3BvcyAtIGJlc3RfbWxlbjsNCisNCisJLyogc3RvcmUgc2VxdWVuY2UgKi8N
Citfc3RvcmVTZXF1ZW5jZTogLyogY3VyLCBsYXN0X3BvcywgYmVzdF9tbGVuLCBiZXN0X29mZiBo
YXZlIHRvIGJlIHNldCAqLw0KKwkJb3B0WzBdLm1sZW4gPSAxOw0KKw0KKwkJd2hpbGUgKDEpIHsN
CisJCQltbGVuID0gb3B0W2N1cl0ubWxlbjsNCisJCQlvZmZzZXQgPSBvcHRbY3VyXS5vZmY7DQor
CQkJb3B0W2N1cl0ubWxlbiA9IGJlc3RfbWxlbjsNCisJCQlvcHRbY3VyXS5vZmYgPSBiZXN0X29m
ZjsNCisJCQliZXN0X21sZW4gPSBtbGVuOw0KKwkJCWJlc3Rfb2ZmID0gb2Zmc2V0Ow0KKwkJCWlm
IChtbGVuID4gY3VyKQ0KKwkJCQlicmVhazsNCisJCQljdXIgLT0gbWxlbjsNCisJCX0NCisNCisJ
CWZvciAodSA9IDA7IHUgPD0gbGFzdF9wb3M7KSB7DQorCQkJdSArPSBvcHRbdV0ubWxlbjsNCisJ
CX0NCisNCisJCWZvciAoY3VyID0gMDsgY3VyIDwgbGFzdF9wb3M7KSB7DQorCQkJbWxlbiA9IG9w
dFtjdXJdLm1sZW47DQorCQkJaWYgKG1sZW4gPT0gMSkgew0KKwkJCQlpcCsrOw0KKwkJCQljdXIr
KzsNCisJCQkJY29udGludWU7DQorCQkJfQ0KKwkJCW9mZnNldCA9IG9wdFtjdXJdLm9mZjsNCisJ
CQljdXIgKz0gbWxlbjsNCisJCQlsaXRMZW5ndGggPSAoVTMyKShpcCAtIGFuY2hvcik7DQorDQor
CQkJaWYgKG9mZnNldCA+IFpTVERfUkVQX01PVkVfT1BUKSB7DQorCQkJCXJlcFsyXSA9IHJlcFsx
XTsNCisJCQkJcmVwWzFdID0gcmVwWzBdOw0KKwkJCQlyZXBbMF0gPSBvZmZzZXQgLSBaU1REX1JF
UF9NT1ZFX09QVDsNCisJCQkJb2Zmc2V0LS07DQorCQkJfSBlbHNlIHsNCisJCQkJaWYgKG9mZnNl
dCAhPSAwKSB7DQorCQkJCQliZXN0X29mZiA9IChvZmZzZXQgPT0gWlNURF9SRVBfTU9WRV9PUFQp
ID8gKHJlcFswXSAtIDEpIDogKHJlcFtvZmZzZXRdKTsNCisJCQkJCWlmIChvZmZzZXQgIT0gMSkN
CisJCQkJCQlyZXBbMl0gPSByZXBbMV07DQorCQkJCQlyZXBbMV0gPSByZXBbMF07DQorCQkJCQly
ZXBbMF0gPSBiZXN0X29mZjsNCisJCQkJfQ0KKwkJCQlpZiAobGl0TGVuZ3RoID09IDApDQorCQkJ
CQlvZmZzZXQtLTsNCisJCQl9DQorDQorCQkJWlNURF91cGRhdGVQcmljZShzZXFTdG9yZVB0ciwg
bGl0TGVuZ3RoLCBhbmNob3IsIG9mZnNldCwgbWxlbiAtIE1JTk1BVENIKTsNCisJCQlaU1REX3N0
b3JlU2VxKHNlcVN0b3JlUHRyLCBsaXRMZW5ndGgsIGFuY2hvciwgb2Zmc2V0LCBtbGVuIC0gTUlO
TUFUQ0gpOw0KKwkJCWFuY2hvciA9IGlwID0gaXAgKyBtbGVuOw0KKwkJfQ0KKwl9IC8qIGZvciAo
Y3VyPTA7IGN1ciA8IGxhc3RfcG9zOyApICovDQorDQorCS8qIFNhdmUgcmVwcyBmb3IgbmV4dCBi
bG9jayAqLw0KKwl7DQorCQlpbnQgaTsNCisJCWZvciAoaSA9IDA7IGkgPCBaU1REX1JFUF9OVU07
IGkrKykNCisJCQljdHgtPnJlcFRvQ29uZmlybVtpXSA9IHJlcFtpXTsNCisJfQ0KKw0KKwkvKiBM
YXN0IExpdGVyYWxzICovDQorCXsNCisJCXNpemVfdCBjb25zdCBsYXN0TExTaXplID0gaWVuZCAt
IGFuY2hvcjsNCisJCW1lbWNweShzZXFTdG9yZVB0ci0+bGl0LCBhbmNob3IsIGxhc3RMTFNpemUp
Ow0KKwkJc2VxU3RvcmVQdHItPmxpdCArPSBsYXN0TExTaXplOw0KKwl9DQorfQ0KKw0KK0ZPUkNF
X0lOTElORQ0KK3ZvaWQgWlNURF9jb21wcmVzc0Jsb2NrX29wdF9leHREaWN0X2dlbmVyaWMoWlNU
RF9DQ3R4ICpjdHgsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIGNvbnN0IGludCB1
bHRyYSkNCit7DQorCXNlcVN0b3JlX3QgKnNlcVN0b3JlUHRyID0gJihjdHgtPnNlcVN0b3JlKTsN
CisJY29uc3QgQllURSAqY29uc3QgaXN0YXJ0ID0gKGNvbnN0IEJZVEUgKilzcmM7DQorCWNvbnN0
IEJZVEUgKmlwID0gaXN0YXJ0Ow0KKwljb25zdCBCWVRFICphbmNob3IgPSBpc3RhcnQ7DQorCWNv
bnN0IEJZVEUgKmNvbnN0IGllbmQgPSBpc3RhcnQgKyBzcmNTaXplOw0KKwljb25zdCBCWVRFICpj
b25zdCBpbGltaXQgPSBpZW5kIC0gODsNCisJY29uc3QgQllURSAqY29uc3QgYmFzZSA9IGN0eC0+
YmFzZTsNCisJY29uc3QgVTMyIGxvd2VzdEluZGV4ID0gY3R4LT5sb3dMaW1pdDsNCisJY29uc3Qg
VTMyIGRpY3RMaW1pdCA9IGN0eC0+ZGljdExpbWl0Ow0KKwljb25zdCBCWVRFICpjb25zdCBwcmVm
aXhTdGFydCA9IGJhc2UgKyBkaWN0TGltaXQ7DQorCWNvbnN0IEJZVEUgKmNvbnN0IGRpY3RCYXNl
ID0gY3R4LT5kaWN0QmFzZTsNCisJY29uc3QgQllURSAqY29uc3QgZGljdEVuZCA9IGRpY3RCYXNl
ICsgZGljdExpbWl0Ow0KKw0KKwljb25zdCBVMzIgbWF4U2VhcmNoZXMgPSAxVSA8PCBjdHgtPnBh
cmFtcy5jUGFyYW1zLnNlYXJjaExvZzsNCisJY29uc3QgVTMyIHN1ZmZpY2llbnRfbGVuID0gY3R4
LT5wYXJhbXMuY1BhcmFtcy50YXJnZXRMZW5ndGg7DQorCWNvbnN0IFUzMiBtbHMgPSBjdHgtPnBh
cmFtcy5jUGFyYW1zLnNlYXJjaExlbmd0aDsNCisJY29uc3QgVTMyIG1pbk1hdGNoID0gKGN0eC0+
cGFyYW1zLmNQYXJhbXMuc2VhcmNoTGVuZ3RoID09IDMpID8gMyA6IDQ7DQorDQorCVpTVERfb3B0
aW1hbF90ICpvcHQgPSBzZXFTdG9yZVB0ci0+cHJpY2VUYWJsZTsNCisJWlNURF9tYXRjaF90ICpt
YXRjaGVzID0gc2VxU3RvcmVQdHItPm1hdGNoVGFibGU7DQorCWNvbnN0IEJZVEUgKmlucjsNCisN
CisJLyogaW5pdCAqLw0KKwlVMzIgb2Zmc2V0LCByZXBbWlNURF9SRVBfTlVNXTsNCisJew0KKwkJ
VTMyIGk7DQorCQlmb3IgKGkgPSAwOyBpIDwgWlNURF9SRVBfTlVNOyBpKyspDQorCQkJcmVwW2ld
ID0gY3R4LT5yZXBbaV07DQorCX0NCisNCisJY3R4LT5uZXh0VG9VcGRhdGUzID0gY3R4LT5uZXh0
VG9VcGRhdGU7DQorCVpTVERfcmVzY2FsZUZyZXFzKHNlcVN0b3JlUHRyLCAoY29uc3QgQllURSAq
KXNyYywgc3JjU2l6ZSk7DQorCWlwICs9IChpcCA9PSBwcmVmaXhTdGFydCk7DQorDQorCS8qIE1h
dGNoIExvb3AgKi8NCisJd2hpbGUgKGlwIDwgaWxpbWl0KSB7DQorCQlVMzIgY3VyLCBtYXRjaF9u
dW0sIGxhc3RfcG9zLCBsaXRsZW4sIHByaWNlOw0KKwkJVTMyIHUsIG1sZW4sIGJlc3RfbWxlbiwg
YmVzdF9vZmYsIGxpdExlbmd0aDsNCisJCVUzMiBjdXJyID0gKFUzMikoaXAgLSBiYXNlKTsNCisJ
CW1lbXNldChvcHQsIDAsIHNpemVvZihaU1REX29wdGltYWxfdCkpOw0KKwkJbGFzdF9wb3MgPSAw
Ow0KKwkJb3B0WzBdLmxpdGxlbiA9IChVMzIpKGlwIC0gYW5jaG9yKTsNCisNCisJCS8qIGNoZWNr
IHJlcENvZGUgKi8NCisJCXsNCisJCQlVMzIgaSwgbGFzdF9pID0gWlNURF9SRVBfQ0hFQ0sgKyAo
aXAgPT0gYW5jaG9yKTsNCisJCQlmb3IgKGkgPSAoaXAgPT0gYW5jaG9yKTsgaSA8IGxhc3RfaTsg
aSsrKSB7DQorCQkJCWNvbnN0IFMzMiByZXBDdXIgPSAoaSA9PSBaU1REX1JFUF9NT1ZFX09QVCkg
PyAocmVwWzBdIC0gMSkgOiByZXBbaV07DQorCQkJCWNvbnN0IFUzMiByZXBJbmRleCA9IChVMzIp
KGN1cnIgLSByZXBDdXIpOw0KKwkJCQljb25zdCBCWVRFICpjb25zdCByZXBCYXNlID0gcmVwSW5k
ZXggPCBkaWN0TGltaXQgPyBkaWN0QmFzZSA6IGJhc2U7DQorCQkJCWNvbnN0IEJZVEUgKmNvbnN0
IHJlcE1hdGNoID0gcmVwQmFzZSArIHJlcEluZGV4Ow0KKwkJCQlpZiAoKHJlcEN1ciA+IDAgJiYg
cmVwQ3VyIDw9IChTMzIpY3VycikgJiYNCisJCQkJICAgICgoKFUzMikoKGRpY3RMaW1pdCAtIDEp
IC0gcmVwSW5kZXgpID49IDMpICYgKHJlcEluZGV4ID4gbG93ZXN0SW5kZXgpKSAvKiBpbnRlbnRp
b25hbCBvdmVyZmxvdyAqLw0KKwkJCQkgICAgJiYgKFpTVERfcmVhZE1JTk1BVENIKGlwLCBtaW5N
YXRjaCkgPT0gWlNURF9yZWFkTUlOTUFUQ0gocmVwTWF0Y2gsIG1pbk1hdGNoKSkpIHsNCisJCQkJ
CS8qIHJlcGNvZGUgZGV0ZWN0ZWQgd2Ugc2hvdWxkIHRha2UgaXQgKi8NCisJCQkJCWNvbnN0IEJZ
VEUgKmNvbnN0IHJlcEVuZCA9IHJlcEluZGV4IDwgZGljdExpbWl0ID8gZGljdEVuZCA6IGllbmQ7
DQorCQkJCQltbGVuID0gKFUzMilaU1REX2NvdW50XzJzZWdtZW50cyhpcCArIG1pbk1hdGNoLCBy
ZXBNYXRjaCArIG1pbk1hdGNoLCBpZW5kLCByZXBFbmQsIHByZWZpeFN0YXJ0KSArIG1pbk1hdGNo
Ow0KKw0KKwkJCQkJaWYgKG1sZW4gPiBzdWZmaWNpZW50X2xlbiB8fCBtbGVuID49IFpTVERfT1BU
X05VTSkgew0KKwkJCQkJCWJlc3RfbWxlbiA9IG1sZW47DQorCQkJCQkJYmVzdF9vZmYgPSBpOw0K
KwkJCQkJCWN1ciA9IDA7DQorCQkJCQkJbGFzdF9wb3MgPSAxOw0KKwkJCQkJCWdvdG8gX3N0b3Jl
U2VxdWVuY2U7DQorCQkJCQl9DQorDQorCQkJCQliZXN0X29mZiA9IGkgLSAoaXAgPT0gYW5jaG9y
KTsNCisJCQkJCWxpdGxlbiA9IG9wdFswXS5saXRsZW47DQorCQkJCQlkbyB7DQorCQkJCQkJcHJp
Y2UgPSBaU1REX2dldFByaWNlKHNlcVN0b3JlUHRyLCBsaXRsZW4sIGFuY2hvciwgYmVzdF9vZmYs
IG1sZW4gLSBNSU5NQVRDSCwgdWx0cmEpOw0KKwkJCQkJCWlmIChtbGVuID4gbGFzdF9wb3MgfHwg
cHJpY2UgPCBvcHRbbWxlbl0ucHJpY2UpDQorCQkJCQkJCVNFVF9QUklDRShtbGVuLCBtbGVuLCBp
LCBsaXRsZW4sIHByaWNlKTsgLyogbm90ZSA6IG1hY3JvIG1vZGlmaWVzIGxhc3RfcG9zICovDQor
CQkJCQkJbWxlbi0tOw0KKwkJCQkJfSB3aGlsZSAobWxlbiA+PSBtaW5NYXRjaCk7DQorCQkJCX0N
CisJCQl9DQorCQl9DQorDQorCQltYXRjaF9udW0gPSBaU1REX0J0R2V0QWxsTWF0Y2hlc19zZWxl
Y3RNTFNfZXh0RGljdChjdHgsIGlwLCBpZW5kLCBtYXhTZWFyY2hlcywgbWxzLCBtYXRjaGVzLCBt
aW5NYXRjaCk7IC8qIGZpcnN0IHNlYXJjaCAoZGVwdGggMCkgKi8NCisNCisJCWlmICghbGFzdF9w
b3MgJiYgIW1hdGNoX251bSkgew0KKwkJCWlwKys7DQorCQkJY29udGludWU7DQorCQl9DQorDQor
CQl7DQorCQkJVTMyIGk7DQorCQkJZm9yIChpID0gMDsgaSA8IFpTVERfUkVQX05VTTsgaSsrKQ0K
KwkJCQlvcHRbMF0ucmVwW2ldID0gcmVwW2ldOw0KKwkJfQ0KKwkJb3B0WzBdLm1sZW4gPSAxOw0K
Kw0KKwkJaWYgKG1hdGNoX251bSAmJiAobWF0Y2hlc1ttYXRjaF9udW0gLSAxXS5sZW4gPiBzdWZm
aWNpZW50X2xlbiB8fCBtYXRjaGVzW21hdGNoX251bSAtIDFdLmxlbiA+PSBaU1REX09QVF9OVU0p
KSB7DQorCQkJYmVzdF9tbGVuID0gbWF0Y2hlc1ttYXRjaF9udW0gLSAxXS5sZW47DQorCQkJYmVz
dF9vZmYgPSBtYXRjaGVzW21hdGNoX251bSAtIDFdLm9mZjsNCisJCQljdXIgPSAwOw0KKwkJCWxh
c3RfcG9zID0gMTsNCisJCQlnb3RvIF9zdG9yZVNlcXVlbmNlOw0KKwkJfQ0KKw0KKwkJYmVzdF9t
bGVuID0gKGxhc3RfcG9zKSA/IGxhc3RfcG9zIDogbWluTWF0Y2g7DQorDQorCQkvKiBzZXQgcHJp
Y2VzIHVzaW5nIG1hdGNoZXMgYXQgcG9zaXRpb24gPSAwICovDQorCQlmb3IgKHUgPSAwOyB1IDwg
bWF0Y2hfbnVtOyB1KyspIHsNCisJCQltbGVuID0gKHUgPiAwKSA/IG1hdGNoZXNbdSAtIDFdLmxl
biArIDEgOiBiZXN0X21sZW47DQorCQkJYmVzdF9tbGVuID0gbWF0Y2hlc1t1XS5sZW47DQorCQkJ
bGl0bGVuID0gb3B0WzBdLmxpdGxlbjsNCisJCQl3aGlsZSAobWxlbiA8PSBiZXN0X21sZW4pIHsN
CisJCQkJcHJpY2UgPSBaU1REX2dldFByaWNlKHNlcVN0b3JlUHRyLCBsaXRsZW4sIGFuY2hvciwg
bWF0Y2hlc1t1XS5vZmYgLSAxLCBtbGVuIC0gTUlOTUFUQ0gsIHVsdHJhKTsNCisJCQkJaWYgKG1s
ZW4gPiBsYXN0X3BvcyB8fCBwcmljZSA8IG9wdFttbGVuXS5wcmljZSkNCisJCQkJCVNFVF9QUklD
RShtbGVuLCBtbGVuLCBtYXRjaGVzW3VdLm9mZiwgbGl0bGVuLCBwcmljZSk7DQorCQkJCW1sZW4r
KzsNCisJCQl9DQorCQl9DQorDQorCQlpZiAobGFzdF9wb3MgPCBtaW5NYXRjaCkgew0KKwkJCWlw
Kys7DQorCQkJY29udGludWU7DQorCQl9DQorDQorCQkvKiBjaGVjayBmdXJ0aGVyIHBvc2l0aW9u
cyAqLw0KKwkJZm9yIChjdXIgPSAxOyBjdXIgPD0gbGFzdF9wb3M7IGN1cisrKSB7DQorCQkJaW5y
ID0gaXAgKyBjdXI7DQorDQorCQkJaWYgKG9wdFtjdXIgLSAxXS5tbGVuID09IDEpIHsNCisJCQkJ
bGl0bGVuID0gb3B0W2N1ciAtIDFdLmxpdGxlbiArIDE7DQorCQkJCWlmIChjdXIgPiBsaXRsZW4p
IHsNCisJCQkJCXByaWNlID0gb3B0W2N1ciAtIGxpdGxlbl0ucHJpY2UgKyBaU1REX2dldExpdGVy
YWxQcmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBpbnIgLSBsaXRsZW4pOw0KKwkJCQl9IGVsc2UN
CisJCQkJCXByaWNlID0gWlNURF9nZXRMaXRlcmFsUHJpY2Uoc2VxU3RvcmVQdHIsIGxpdGxlbiwg
YW5jaG9yKTsNCisJCQl9IGVsc2Ugew0KKwkJCQlsaXRsZW4gPSAxOw0KKwkJCQlwcmljZSA9IG9w
dFtjdXIgLSAxXS5wcmljZSArIFpTVERfZ2V0TGl0ZXJhbFByaWNlKHNlcVN0b3JlUHRyLCBsaXRs
ZW4sIGluciAtIDEpOw0KKwkJCX0NCisNCisJCQlpZiAoY3VyID4gbGFzdF9wb3MgfHwgcHJpY2Ug
PD0gb3B0W2N1cl0ucHJpY2UpDQorCQkJCVNFVF9QUklDRShjdXIsIDEsIDAsIGxpdGxlbiwgcHJp
Y2UpOw0KKw0KKwkJCWlmIChjdXIgPT0gbGFzdF9wb3MpDQorCQkJCWJyZWFrOw0KKw0KKwkJCWlm
IChpbnIgPiBpbGltaXQpIC8qIGxhc3QgbWF0Y2ggbXVzdCBzdGFydCBhdCBhIG1pbmltdW0gZGlz
dGFuY2Ugb2YgOCBmcm9tIG9lbmQgKi8NCisJCQkJY29udGludWU7DQorDQorCQkJbWxlbiA9IG9w
dFtjdXJdLm1sZW47DQorCQkJaWYgKG9wdFtjdXJdLm9mZiA+IFpTVERfUkVQX01PVkVfT1BUKSB7
DQorCQkJCW9wdFtjdXJdLnJlcFsyXSA9IG9wdFtjdXIgLSBtbGVuXS5yZXBbMV07DQorCQkJCW9w
dFtjdXJdLnJlcFsxXSA9IG9wdFtjdXIgLSBtbGVuXS5yZXBbMF07DQorCQkJCW9wdFtjdXJdLnJl
cFswXSA9IG9wdFtjdXJdLm9mZiAtIFpTVERfUkVQX01PVkVfT1BUOw0KKwkJCX0gZWxzZSB7DQor
CQkJCW9wdFtjdXJdLnJlcFsyXSA9IChvcHRbY3VyXS5vZmYgPiAxKSA/IG9wdFtjdXIgLSBtbGVu
XS5yZXBbMV0gOiBvcHRbY3VyIC0gbWxlbl0ucmVwWzJdOw0KKwkJCQlvcHRbY3VyXS5yZXBbMV0g
PSAob3B0W2N1cl0ub2ZmID4gMCkgPyBvcHRbY3VyIC0gbWxlbl0ucmVwWzBdIDogb3B0W2N1ciAt
IG1sZW5dLnJlcFsxXTsNCisJCQkJb3B0W2N1cl0ucmVwWzBdID0NCisJCQkJICAgICgob3B0W2N1
cl0ub2ZmID09IFpTVERfUkVQX01PVkVfT1BUKSAmJiAobWxlbiAhPSAxKSkgPyAob3B0W2N1ciAt
IG1sZW5dLnJlcFswXSAtIDEpIDogKG9wdFtjdXIgLSBtbGVuXS5yZXBbb3B0W2N1cl0ub2ZmXSk7
DQorCQkJfQ0KKw0KKwkJCWJlc3RfbWxlbiA9IG1pbk1hdGNoOw0KKwkJCXsNCisJCQkJVTMyIGks
IGxhc3RfaSA9IFpTVERfUkVQX0NIRUNLICsgKG1sZW4gIT0gMSk7DQorCQkJCWZvciAoaSA9ICht
bGVuICE9IDEpOyBpIDwgbGFzdF9pOyBpKyspIHsNCisJCQkJCWNvbnN0IFMzMiByZXBDdXIgPSAo
aSA9PSBaU1REX1JFUF9NT1ZFX09QVCkgPyAob3B0W2N1cl0ucmVwWzBdIC0gMSkgOiBvcHRbY3Vy
XS5yZXBbaV07DQorCQkJCQljb25zdCBVMzIgcmVwSW5kZXggPSAoVTMyKShjdXJyICsgY3VyIC0g
cmVwQ3VyKTsNCisJCQkJCWNvbnN0IEJZVEUgKmNvbnN0IHJlcEJhc2UgPSByZXBJbmRleCA8IGRp
Y3RMaW1pdCA/IGRpY3RCYXNlIDogYmFzZTsNCisJCQkJCWNvbnN0IEJZVEUgKmNvbnN0IHJlcE1h
dGNoID0gcmVwQmFzZSArIHJlcEluZGV4Ow0KKwkJCQkJaWYgKChyZXBDdXIgPiAwICYmIHJlcEN1
ciA8PSAoUzMyKShjdXJyICsgY3VyKSkgJiYNCisJCQkJCSAgICAoKChVMzIpKChkaWN0TGltaXQg
LSAxKSAtIHJlcEluZGV4KSA+PSAzKSAmIChyZXBJbmRleCA+IGxvd2VzdEluZGV4KSkgLyogaW50
ZW50aW9uYWwgb3ZlcmZsb3cgKi8NCisJCQkJCSAgICAmJiAoWlNURF9yZWFkTUlOTUFUQ0goaW5y
LCBtaW5NYXRjaCkgPT0gWlNURF9yZWFkTUlOTUFUQ0gocmVwTWF0Y2gsIG1pbk1hdGNoKSkpIHsN
CisJCQkJCQkvKiByZXBjb2RlIGRldGVjdGVkICovDQorCQkJCQkJY29uc3QgQllURSAqY29uc3Qg
cmVwRW5kID0gcmVwSW5kZXggPCBkaWN0TGltaXQgPyBkaWN0RW5kIDogaWVuZDsNCisJCQkJCQlt
bGVuID0gKFUzMilaU1REX2NvdW50XzJzZWdtZW50cyhpbnIgKyBtaW5NYXRjaCwgcmVwTWF0Y2gg
KyBtaW5NYXRjaCwgaWVuZCwgcmVwRW5kLCBwcmVmaXhTdGFydCkgKyBtaW5NYXRjaDsNCisNCisJ
CQkJCQlpZiAobWxlbiA+IHN1ZmZpY2llbnRfbGVuIHx8IGN1ciArIG1sZW4gPj0gWlNURF9PUFRf
TlVNKSB7DQorCQkJCQkJCWJlc3RfbWxlbiA9IG1sZW47DQorCQkJCQkJCWJlc3Rfb2ZmID0gaTsN
CisJCQkJCQkJbGFzdF9wb3MgPSBjdXIgKyAxOw0KKwkJCQkJCQlnb3RvIF9zdG9yZVNlcXVlbmNl
Ow0KKwkJCQkJCX0NCisNCisJCQkJCQliZXN0X29mZiA9IGkgLSAob3B0W2N1cl0ubWxlbiAhPSAx
KTsNCisJCQkJCQlpZiAobWxlbiA+IGJlc3RfbWxlbikNCisJCQkJCQkJYmVzdF9tbGVuID0gbWxl
bjsNCisNCisJCQkJCQlkbyB7DQorCQkJCQkJCWlmIChvcHRbY3VyXS5tbGVuID09IDEpIHsNCisJ
CQkJCQkJCWxpdGxlbiA9IG9wdFtjdXJdLmxpdGxlbjsNCisJCQkJCQkJCWlmIChjdXIgPiBsaXRs
ZW4pIHsNCisJCQkJCQkJCQlwcmljZSA9IG9wdFtjdXIgLSBsaXRsZW5dLnByaWNlICsgWlNURF9n
ZXRQcmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBpbnIgLSBsaXRsZW4sDQorCQkJCQkJCQkJCQkJ
CQkJYmVzdF9vZmYsIG1sZW4gLSBNSU5NQVRDSCwgdWx0cmEpOw0KKwkJCQkJCQkJfSBlbHNlDQor
CQkJCQkJCQkJcHJpY2UgPSBaU1REX2dldFByaWNlKHNlcVN0b3JlUHRyLCBsaXRsZW4sIGFuY2hv
ciwgYmVzdF9vZmYsIG1sZW4gLSBNSU5NQVRDSCwgdWx0cmEpOw0KKwkJCQkJCQl9IGVsc2Ugew0K
KwkJCQkJCQkJbGl0bGVuID0gMDsNCisJCQkJCQkJCXByaWNlID0gb3B0W2N1cl0ucHJpY2UgKyBa
U1REX2dldFByaWNlKHNlcVN0b3JlUHRyLCAwLCBOVUxMLCBiZXN0X29mZiwgbWxlbiAtIE1JTk1B
VENILCB1bHRyYSk7DQorCQkJCQkJCX0NCisNCisJCQkJCQkJaWYgKGN1ciArIG1sZW4gPiBsYXN0
X3BvcyB8fCBwcmljZSA8PSBvcHRbY3VyICsgbWxlbl0ucHJpY2UpDQorCQkJCQkJCQlTRVRfUFJJ
Q0UoY3VyICsgbWxlbiwgbWxlbiwgaSwgbGl0bGVuLCBwcmljZSk7DQorCQkJCQkJCW1sZW4tLTsN
CisJCQkJCQl9IHdoaWxlIChtbGVuID49IG1pbk1hdGNoKTsNCisJCQkJCX0NCisJCQkJfQ0KKwkJ
CX0NCisNCisJCQltYXRjaF9udW0gPSBaU1REX0J0R2V0QWxsTWF0Y2hlc19zZWxlY3RNTFNfZXh0
RGljdChjdHgsIGluciwgaWVuZCwgbWF4U2VhcmNoZXMsIG1scywgbWF0Y2hlcywgbWluTWF0Y2gp
Ow0KKw0KKwkJCWlmIChtYXRjaF9udW0gPiAwICYmIChtYXRjaGVzW21hdGNoX251bSAtIDFdLmxl
biA+IHN1ZmZpY2llbnRfbGVuIHx8IGN1ciArIG1hdGNoZXNbbWF0Y2hfbnVtIC0gMV0ubGVuID49
IFpTVERfT1BUX05VTSkpIHsNCisJCQkJYmVzdF9tbGVuID0gbWF0Y2hlc1ttYXRjaF9udW0gLSAx
XS5sZW47DQorCQkJCWJlc3Rfb2ZmID0gbWF0Y2hlc1ttYXRjaF9udW0gLSAxXS5vZmY7DQorCQkJ
CWxhc3RfcG9zID0gY3VyICsgMTsNCisJCQkJZ290byBfc3RvcmVTZXF1ZW5jZTsNCisJCQl9DQor
DQorCQkJLyogc2V0IHByaWNlcyB1c2luZyBtYXRjaGVzIGF0IHBvc2l0aW9uID0gY3VyICovDQor
CQkJZm9yICh1ID0gMDsgdSA8IG1hdGNoX251bTsgdSsrKSB7DQorCQkJCW1sZW4gPSAodSA+IDAp
ID8gbWF0Y2hlc1t1IC0gMV0ubGVuICsgMSA6IGJlc3RfbWxlbjsNCisJCQkJYmVzdF9tbGVuID0g
bWF0Y2hlc1t1XS5sZW47DQorDQorCQkJCXdoaWxlIChtbGVuIDw9IGJlc3RfbWxlbikgew0KKwkJ
CQkJaWYgKG9wdFtjdXJdLm1sZW4gPT0gMSkgew0KKwkJCQkJCWxpdGxlbiA9IG9wdFtjdXJdLmxp
dGxlbjsNCisJCQkJCQlpZiAoY3VyID4gbGl0bGVuKQ0KKwkJCQkJCQlwcmljZSA9IG9wdFtjdXIg
LSBsaXRsZW5dLnByaWNlICsgWlNURF9nZXRQcmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBpcCAr
IGN1ciAtIGxpdGxlbiwNCisJCQkJCQkJCQkJCQkJbWF0Y2hlc1t1XS5vZmYgLSAxLCBtbGVuIC0g
TUlOTUFUQ0gsIHVsdHJhKTsNCisJCQkJCQllbHNlDQorCQkJCQkJCXByaWNlID0gWlNURF9nZXRQ
cmljZShzZXFTdG9yZVB0ciwgbGl0bGVuLCBhbmNob3IsIG1hdGNoZXNbdV0ub2ZmIC0gMSwgbWxl
biAtIE1JTk1BVENILCB1bHRyYSk7DQorCQkJCQl9IGVsc2Ugew0KKwkJCQkJCWxpdGxlbiA9IDA7
DQorCQkJCQkJcHJpY2UgPSBvcHRbY3VyXS5wcmljZSArIFpTVERfZ2V0UHJpY2Uoc2VxU3RvcmVQ
dHIsIDAsIE5VTEwsIG1hdGNoZXNbdV0ub2ZmIC0gMSwgbWxlbiAtIE1JTk1BVENILCB1bHRyYSk7
DQorCQkJCQl9DQorDQorCQkJCQlpZiAoY3VyICsgbWxlbiA+IGxhc3RfcG9zIHx8IChwcmljZSA8
IG9wdFtjdXIgKyBtbGVuXS5wcmljZSkpDQorCQkJCQkJU0VUX1BSSUNFKGN1ciArIG1sZW4sIG1s
ZW4sIG1hdGNoZXNbdV0ub2ZmLCBsaXRsZW4sIHByaWNlKTsNCisNCisJCQkJCW1sZW4rKzsNCisJ
CQkJfQ0KKwkJCX0NCisJCX0gLyogZm9yIChjdXIgPSAxOyBjdXIgPD0gbGFzdF9wb3M7IGN1cisr
KSAqLw0KKw0KKwkJYmVzdF9tbGVuID0gb3B0W2xhc3RfcG9zXS5tbGVuOw0KKwkJYmVzdF9vZmYg
PSBvcHRbbGFzdF9wb3NdLm9mZjsNCisJCWN1ciA9IGxhc3RfcG9zIC0gYmVzdF9tbGVuOw0KKw0K
KwkvKiBzdG9yZSBzZXF1ZW5jZSAqLw0KK19zdG9yZVNlcXVlbmNlOiAvKiBjdXIsIGxhc3RfcG9z
LCBiZXN0X21sZW4sIGJlc3Rfb2ZmIGhhdmUgdG8gYmUgc2V0ICovDQorCQlvcHRbMF0ubWxlbiA9
IDE7DQorDQorCQl3aGlsZSAoMSkgew0KKwkJCW1sZW4gPSBvcHRbY3VyXS5tbGVuOw0KKwkJCW9m
ZnNldCA9IG9wdFtjdXJdLm9mZjsNCisJCQlvcHRbY3VyXS5tbGVuID0gYmVzdF9tbGVuOw0KKwkJ
CW9wdFtjdXJdLm9mZiA9IGJlc3Rfb2ZmOw0KKwkJCWJlc3RfbWxlbiA9IG1sZW47DQorCQkJYmVz
dF9vZmYgPSBvZmZzZXQ7DQorCQkJaWYgKG1sZW4gPiBjdXIpDQorCQkJCWJyZWFrOw0KKwkJCWN1
ciAtPSBtbGVuOw0KKwkJfQ0KKw0KKwkJZm9yICh1ID0gMDsgdSA8PSBsYXN0X3BvczspIHsNCisJ
CQl1ICs9IG9wdFt1XS5tbGVuOw0KKwkJfQ0KKw0KKwkJZm9yIChjdXIgPSAwOyBjdXIgPCBsYXN0
X3BvczspIHsNCisJCQltbGVuID0gb3B0W2N1cl0ubWxlbjsNCisJCQlpZiAobWxlbiA9PSAxKSB7
DQorCQkJCWlwKys7DQorCQkJCWN1cisrOw0KKwkJCQljb250aW51ZTsNCisJCQl9DQorCQkJb2Zm
c2V0ID0gb3B0W2N1cl0ub2ZmOw0KKwkJCWN1ciArPSBtbGVuOw0KKwkJCWxpdExlbmd0aCA9IChV
MzIpKGlwIC0gYW5jaG9yKTsNCisNCisJCQlpZiAob2Zmc2V0ID4gWlNURF9SRVBfTU9WRV9PUFQp
IHsNCisJCQkJcmVwWzJdID0gcmVwWzFdOw0KKwkJCQlyZXBbMV0gPSByZXBbMF07DQorCQkJCXJl
cFswXSA9IG9mZnNldCAtIFpTVERfUkVQX01PVkVfT1BUOw0KKwkJCQlvZmZzZXQtLTsNCisJCQl9
IGVsc2Ugew0KKwkJCQlpZiAob2Zmc2V0ICE9IDApIHsNCisJCQkJCWJlc3Rfb2ZmID0gKG9mZnNl
dCA9PSBaU1REX1JFUF9NT1ZFX09QVCkgPyAocmVwWzBdIC0gMSkgOiAocmVwW29mZnNldF0pOw0K
KwkJCQkJaWYgKG9mZnNldCAhPSAxKQ0KKwkJCQkJCXJlcFsyXSA9IHJlcFsxXTsNCisJCQkJCXJl
cFsxXSA9IHJlcFswXTsNCisJCQkJCXJlcFswXSA9IGJlc3Rfb2ZmOw0KKwkJCQl9DQorDQorCQkJ
CWlmIChsaXRMZW5ndGggPT0gMCkNCisJCQkJCW9mZnNldC0tOw0KKwkJCX0NCisNCisJCQlaU1RE
X3VwZGF0ZVByaWNlKHNlcVN0b3JlUHRyLCBsaXRMZW5ndGgsIGFuY2hvciwgb2Zmc2V0LCBtbGVu
IC0gTUlOTUFUQ0gpOw0KKwkJCVpTVERfc3RvcmVTZXEoc2VxU3RvcmVQdHIsIGxpdExlbmd0aCwg
YW5jaG9yLCBvZmZzZXQsIG1sZW4gLSBNSU5NQVRDSCk7DQorCQkJYW5jaG9yID0gaXAgPSBpcCAr
IG1sZW47DQorCQl9DQorCX0gLyogZm9yIChjdXI9MDsgY3VyIDwgbGFzdF9wb3M7ICkgKi8NCisN
CisJLyogU2F2ZSByZXBzIGZvciBuZXh0IGJsb2NrICovDQorCXsNCisJCWludCBpOw0KKwkJZm9y
IChpID0gMDsgaSA8IFpTVERfUkVQX05VTTsgaSsrKQ0KKwkJCWN0eC0+cmVwVG9Db25maXJtW2ld
ID0gcmVwW2ldOw0KKwl9DQorDQorCS8qIExhc3QgTGl0ZXJhbHMgKi8NCisJew0KKwkJc2l6ZV90
IGxhc3RMTFNpemUgPSBpZW5kIC0gYW5jaG9yOw0KKwkJbWVtY3B5KHNlcVN0b3JlUHRyLT5saXQs
IGFuY2hvciwgbGFzdExMU2l6ZSk7DQorCQlzZXFTdG9yZVB0ci0+bGl0ICs9IGxhc3RMTFNpemU7
DQorCX0NCit9DQorDQorI2VuZGlmIC8qIFpTVERfT1BUX0hfOTE4NDIzOTg3NDMgKi8NCmRpZmYg
LS1naXQgYS94ZW4vaW5jbHVkZS94ZW4veHhoYXNoLmggYi94ZW4vaW5jbHVkZS94ZW4veHhoYXNo
LmgNCm5ldyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAwMDAwMDAwLi5kZjQyNTExNDM4DQot
LS0gL2Rldi9udWxsDQorKysgYi94ZW4vaW5jbHVkZS94ZW4veHhoYXNoLmgNCkBAIC0wLDAgKzEs
MjU5IEBADQorLyoNCisgKiB4eEhhc2ggLSBFeHRyZW1lbHkgRmFzdCBIYXNoIGFsZ29yaXRobQ0K
KyAqIENvcHlyaWdodCAoQykgMjAxMi0yMDE2LCBZYW5uIENvbGxldC4NCisgKg0KKyAqIEJTRCAy
LUNsYXVzZSBMaWNlbnNlIChodHRwOi8vd3d3Lm9wZW5zb3VyY2Uub3JnL2xpY2Vuc2VzL2JzZC1s
aWNlbnNlLnBocCkNCisgKg0KKyAqIFJlZGlzdHJpYnV0aW9uIGFuZCB1c2UgaW4gc291cmNlIGFu
ZCBiaW5hcnkgZm9ybXMsIHdpdGggb3Igd2l0aG91dA0KKyAqIG1vZGlmaWNhdGlvbiwgYXJlIHBl
cm1pdHRlZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucyBhcmUNCisgKiBt
ZXQ6DQorICoNCisgKiAgICogUmVkaXN0cmlidXRpb25zIG9mIHNvdXJjZSBjb2RlIG11c3QgcmV0
YWluIHRoZSBhYm92ZSBjb3B5cmlnaHQNCisgKiAgICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29u
ZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLg0KKyAqICAgKiBSZWRpc3RyaWJ1
dGlvbnMgaW4gYmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlDQorICogICAgIGNv
cHlyaWdodCBub3RpY2UsIHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5n
IGRpc2NsYWltZXINCisgKiAgICAgaW4gdGhlIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1h
dGVyaWFscyBwcm92aWRlZCB3aXRoIHRoZQ0KKyAqICAgICBkaXN0cmlidXRpb24uDQorICoNCisg
KiBUSElTIFNPRlRXQVJFIElTIFBST1ZJREVEIEJZIFRIRSBDT1BZUklHSFQgSE9MREVSUyBBTkQg
Q09OVFJJQlVUT1JTDQorICogIkFTIElTIiBBTkQgQU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJS
QU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QNCisgKiBMSU1JVEVEIFRPLCBUSEUgSU1QTElFRCBX
QVJSQU5USUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1INCisgKiBBIFBBUlRJ
Q1VMQVIgUFVSUE9TRSBBUkUgRElTQ0xBSU1FRC4gSU4gTk8gRVZFTlQgU0hBTEwgVEhFIENPUFlS
SUdIVA0KKyAqIE9XTkVSIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUgRk9SIEFOWSBESVJFQ1Qs
IElORElSRUNULCBJTkNJREVOVEFMLA0KKyAqIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VR
VUVOVElBTCBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QNCisgKiBMSU1JVEVEIFRPLCBQUk9D
VVJFTUVOVCBPRiBTVUJTVElUVVRFIEdPT0RTIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwNCisg
KiBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pIEhPV0VWRVIgQ0FV
U0VEIEFORCBPTiBBTlkNCisgKiBUSEVPUlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRS
QUNULCBTVFJJQ1QgTElBQklMSVRZLCBPUiBUT1JUDQorICogKElOQ0xVRElORyBORUdMSUdFTkNF
IE9SIE9USEVSV0lTRSkgQVJJU0lORyBJTiBBTlkgV0FZIE9VVCBPRiBUSEUgVVNFDQorICogT0Yg
VEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBPRiBTVUNI
IERBTUFHRS4NCisgKg0KKyAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJlOyB5b3UgY2Fu
IHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5IGl0IHVuZGVyDQorICogdGhlIHRlcm1zIG9m
IHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJzaW9uIDIgYXMgcHVibGlzaGVkIGJ5
IHRoZQ0KKyAqIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4gVGhpcyBwcm9ncmFtIGlzIGR1YWwt
bGljZW5zZWQ7IHlvdSBtYXkgc2VsZWN0DQorICogZWl0aGVyIHZlcnNpb24gMiBvZiB0aGUgR05V
IEdlbmVyYWwgUHVibGljIExpY2Vuc2UgKCJHUEwiKSBvciBCU0QgbGljZW5zZQ0KKyAqICgiQlNE
IikuDQorICoNCisgKiBZb3UgY2FuIGNvbnRhY3QgdGhlIGF1dGhvciBhdDoNCisgKiAtIHh4SGFz
aCBob21lcGFnZTogaHR0cHM6Ly9jeWFuNDk3My5naXRodWIuaW8veHhIYXNoLw0KKyAqIC0geHhI
YXNoIHNvdXJjZSByZXBvc2l0b3J5OiBodHRwczovL2dpdGh1Yi5jb20vQ3lhbjQ5NzMveHhIYXNo
DQorICovDQorDQorLyoNCisgKiBOb3RpY2UgZXh0cmFjdGVkIGZyb20geHhIYXNoIGhvbWVwYWdl
Og0KKyAqDQorICogeHhIYXNoIGlzIGFuIGV4dHJlbWVseSBmYXN0IEhhc2ggYWxnb3JpdGhtLCBy
dW5uaW5nIGF0IFJBTSBzcGVlZCBsaW1pdHMuDQorICogSXQgYWxzbyBzdWNjZXNzZnVsbHkgcGFz
c2VzIGFsbCB0ZXN0cyBmcm9tIHRoZSBTTUhhc2hlciBzdWl0ZS4NCisgKg0KKyAqIENvbXBhcmlz
b24gKHNpbmdsZSB0aHJlYWQsIFdpbmRvd3MgU2V2ZW4gMzIgYml0cywgdXNpbmcgU01IYXNoZXIg
b24gYSBDb3JlIDINCisgKiBEdW8gQDNHSHopDQorICoNCisgKiBOYW1lICAgICAgICAgICAgU3Bl
ZWQgICAgICAgUS5TY29yZSAgIEF1dGhvcg0KKyAqIHh4SGFzaCAgICAgICAgICA1LjQgR0IvcyAg
ICAgMTANCisgKiBDcmFwV293ICAgICAgICAgMy4yIEdCL3MgICAgICAyICAgICAgIEFuZHJldw0K
KyAqIE11bXVySGFzaCAzYSAgICAyLjcgR0IvcyAgICAgMTAgICAgICAgQXVzdGluIEFwcGxlYnkN
CisgKiBTcG9va3lIYXNoICAgICAgMi4wIEdCL3MgICAgIDEwICAgICAgIEJvYiBKZW5raW5zDQor
ICogU0JveCAgICAgICAgICAgIDEuNCBHQi9zICAgICAgOSAgICAgICBCcmV0IE11bHZleQ0KKyAq
IExvb2t1cDMgICAgICAgICAxLjIgR0IvcyAgICAgIDkgICAgICAgQm9iIEplbmtpbnMNCisgKiBT
dXBlckZhc3RIYXNoICAgMS4yIEdCL3MgICAgICAxICAgICAgIFBhdWwgSHNpZWgNCisgKiBDaXR5
SGFzaDY0ICAgICAgMS4wNSBHQi9zICAgIDEwICAgICAgIFBpa2UgJiBBbGFrdWlqYWxhDQorICog
Rk5WICAgICAgICAgICAgIDAuNTUgR0IvcyAgICAgNSAgICAgICBGb3dsZXIsIE5vbGwsIFZvDQor
ICogQ1JDMzIgICAgICAgICAgIDAuNDMgR0IvcyAgICAgOQ0KKyAqIE1ENS0zMiAgICAgICAgICAw
LjMzIEdCL3MgICAgMTAgICAgICAgUm9uYWxkIEwuIFJpdmVzdA0KKyAqIFNIQTEtMzIgICAgICAg
ICAwLjI4IEdCL3MgICAgMTANCisgKg0KKyAqIFEuU2NvcmUgaXMgYSBtZWFzdXJlIG9mIHF1YWxp
dHkgb2YgdGhlIGhhc2ggZnVuY3Rpb24uDQorICogSXQgZGVwZW5kcyBvbiBzdWNjZXNzZnVsbHkg
cGFzc2luZyBTTUhhc2hlciB0ZXN0IHNldC4NCisgKiAxMCBpcyBhIHBlcmZlY3Qgc2NvcmUuDQor
ICoNCisgKiBBIDY0LWJpdHMgdmVyc2lvbiwgbmFtZWQgeHhoNjQgb2ZmZXJzIG11Y2ggYmV0dGVy
IHNwZWVkLA0KKyAqIGJ1dCBmb3IgNjQtYml0cyBhcHBsaWNhdGlvbnMgb25seS4NCisgKiBOYW1l
ICAgICBTcGVlZCBvbiA2NCBiaXRzICAgIFNwZWVkIG9uIDMyIGJpdHMNCisgKiB4eGg2NCAgICAg
ICAxMy44IEdCL3MgICAgICAgICAgICAxLjkgR0Ivcw0KKyAqIHh4aDMyICAgICAgICA2LjggR0Iv
cyAgICAgICAgICAgIDYuMCBHQi9zDQorICovDQorDQorI2lmbmRlZiBYWEhBU0hfSA0KKyNkZWZp
bmUgWFhIQVNIX0gNCisNCisjaW5jbHVkZSA8bGludXgvdHlwZXMuaD4NCisNCisvKi0qKioqKioq
KioqKioqKioqKioqKioqKioqKioqDQorICogU2ltcGxlIEhhc2ggRnVuY3Rpb25zDQorICoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqLw0KKw0KKy8qKg0KKyAqIHh4aDMyKCkgLSBjYWxjdWxh
dGUgdGhlIDMyLWJpdCBoYXNoIG9mIHRoZSBpbnB1dCB3aXRoIGEgZ2l2ZW4gc2VlZC4NCisgKg0K
KyAqIEBpbnB1dDogIFRoZSBkYXRhIHRvIGhhc2guDQorICogQGxlbmd0aDogVGhlIGxlbmd0aCBv
ZiB0aGUgZGF0YSB0byBoYXNoLg0KKyAqIEBzZWVkOiAgIFRoZSBzZWVkIGNhbiBiZSB1c2VkIHRv
IGFsdGVyIHRoZSByZXN1bHQgcHJlZGljdGFibHkuDQorICoNCisgKiBTcGVlZCBvbiBDb3JlIDIg
RHVvIEAgMyBHSHogKHNpbmdsZSB0aHJlYWQsIFNNSGFzaGVyIGJlbmNobWFyaykgOiA1LjQgR0Iv
cw0KKyAqDQorICogUmV0dXJuOiAgVGhlIDMyLWJpdCBoYXNoIG9mIHRoZSBkYXRhLg0KKyAqLw0K
K3VpbnQzMl90IHh4aDMyKGNvbnN0IHZvaWQgKmlucHV0LCBzaXplX3QgbGVuZ3RoLCB1aW50MzJf
dCBzZWVkKTsNCisNCisvKioNCisgKiB4eGg2NCgpIC0gY2FsY3VsYXRlIHRoZSA2NC1iaXQgaGFz
aCBvZiB0aGUgaW5wdXQgd2l0aCBhIGdpdmVuIHNlZWQuDQorICoNCisgKiBAaW5wdXQ6ICBUaGUg
ZGF0YSB0byBoYXNoLg0KKyAqIEBsZW5ndGg6IFRoZSBsZW5ndGggb2YgdGhlIGRhdGEgdG8gaGFz
aC4NCisgKiBAc2VlZDogICBUaGUgc2VlZCBjYW4gYmUgdXNlZCB0byBhbHRlciB0aGUgcmVzdWx0
IHByZWRpY3RhYmx5Lg0KKyAqDQorICogVGhpcyBmdW5jdGlvbiBydW5zIDJ4IGZhc3RlciBvbiA2
NC1iaXQgc3lzdGVtcywgYnV0IHNsb3dlciBvbiAzMi1iaXQgc3lzdGVtcy4NCisgKg0KKyAqIFJl
dHVybjogIFRoZSA2NC1iaXQgaGFzaCBvZiB0aGUgZGF0YS4NCisgKi8NCit1aW50NjRfdCB4eGg2
NChjb25zdCB2b2lkICppbnB1dCwgc2l6ZV90IGxlbmd0aCwgdWludDY0X3Qgc2VlZCk7DQorDQor
LyoqDQorICogeHhoYXNoKCkgLSBjYWxjdWxhdGUgd29yZHNpemUgaGFzaCBvZiB0aGUgaW5wdXQg
d2l0aCBhIGdpdmVuIHNlZWQNCisgKiBAaW5wdXQ6ICBUaGUgZGF0YSB0byBoYXNoLg0KKyAqIEBs
ZW5ndGg6IFRoZSBsZW5ndGggb2YgdGhlIGRhdGEgdG8gaGFzaC4NCisgKiBAc2VlZDogICBUaGUg
c2VlZCBjYW4gYmUgdXNlZCB0byBhbHRlciB0aGUgcmVzdWx0IHByZWRpY3RhYmx5Lg0KKyAqDQor
ICogSWYgdGhlIGhhc2ggZG9lcyBub3QgbmVlZCB0byBiZSBjb21wYXJhYmxlIGJldHdlZW4gbWFj
aGluZXMgd2l0aA0KKyAqIGRpZmZlcmVudCB3b3JkIHNpemVzLCB0aGlzIGZ1bmN0aW9uIHdpbGwg
Y2FsbCB3aGljaGV2ZXIgb2YgeHhoMzIoKQ0KKyAqIG9yIHh4aDY0KCkgaXMgZmFzdGVyLg0KKyAq
DQorICogUmV0dXJuOiAgd29yZHNpemUgaGFzaCBvZiB0aGUgZGF0YS4NCisgKi8NCisNCitzdGF0
aWMgaW5saW5lIHVuc2lnbmVkIGxvbmcgeHhoYXNoKGNvbnN0IHZvaWQgKmlucHV0LCBzaXplX3Qg
bGVuZ3RoLA0KKwkJCQkgICB1aW50NjRfdCBzZWVkKQ0KK3sNCisjaWYgQklUU19QRVJfTE9ORyA9
PSA2NA0KKyAgICAgICByZXR1cm4geHhoNjQoaW5wdXQsIGxlbmd0aCwgc2VlZCk7DQorI2Vsc2UN
CisgICAgICAgcmV0dXJuIHh4aDMyKGlucHV0LCBsZW5ndGgsIHNlZWQpOw0KKyNlbmRpZg0KK30N
CisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorICogU3RyZWFtaW5nIEhhc2gg
RnVuY3Rpb25zDQorICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KKw0KKy8qDQorICog
VGhlc2UgZGVmaW5pdGlvbnMgYXJlIG9ubHkgbWVhbnQgdG8gYWxsb3cgYWxsb2NhdGlvbiBvZiBY
WEggc3RhdGUNCisgKiBzdGF0aWNhbGx5LCBvbiBzdGFjaywgb3IgaW4gYSBzdHJ1Y3QgZm9yIGV4
YW1wbGUuDQorICogRG8gbm90IHVzZSBtZW1iZXJzIGRpcmVjdGx5Lg0KKyAqLw0KKw0KKy8qKg0K
KyAqIHN0cnVjdCB4eGgzMl9zdGF0ZSAtIHByaXZhdGUgeHhoMzIgc3RhdGUsIGRvIG5vdCB1c2Ug
bWVtYmVycyBkaXJlY3RseQ0KKyAqLw0KK3N0cnVjdCB4eGgzMl9zdGF0ZSB7DQorCXVpbnQzMl90
IHRvdGFsX2xlbl8zMjsNCisJdWludDMyX3QgbGFyZ2VfbGVuOw0KKwl1aW50MzJfdCB2MTsNCisJ
dWludDMyX3QgdjI7DQorCXVpbnQzMl90IHYzOw0KKwl1aW50MzJfdCB2NDsNCisJdWludDMyX3Qg
bWVtMzJbNF07DQorCXVpbnQzMl90IG1lbXNpemU7DQorfTsNCisNCisvKioNCisgKiBzdHJ1Y3Qg
eHhoMzJfc3RhdGUgLSBwcml2YXRlIHh4aDY0IHN0YXRlLCBkbyBub3QgdXNlIG1lbWJlcnMgZGly
ZWN0bHkNCisgKi8NCitzdHJ1Y3QgeHhoNjRfc3RhdGUgew0KKwl1aW50NjRfdCB0b3RhbF9sZW47
DQorCXVpbnQ2NF90IHYxOw0KKwl1aW50NjRfdCB2MjsNCisJdWludDY0X3QgdjM7DQorCXVpbnQ2
NF90IHY0Ow0KKwl1aW50NjRfdCBtZW02NFs0XTsNCisJdWludDMyX3QgbWVtc2l6ZTsNCit9Ow0K
Kw0KKy8qKg0KKyAqIHh4aDMyX3Jlc2V0KCkgLSByZXNldCB0aGUgeHhoMzIgc3RhdGUgdG8gc3Rh
cnQgYSBuZXcgaGFzaGluZyBvcGVyYXRpb24NCisgKg0KKyAqIEBzdGF0ZTogVGhlIHh4aDMyIHN0
YXRlIHRvIHJlc2V0Lg0KKyAqIEBzZWVkOiAgSW5pdGlhbGl6ZSB0aGUgaGFzaCBzdGF0ZSB3aXRo
IHRoaXMgc2VlZC4NCisgKg0KKyAqIENhbGwgdGhpcyBmdW5jdGlvbiBvbiBhbnkgeHhoMzJfc3Rh
dGUgdG8gcHJlcGFyZSBmb3IgYSBuZXcgaGFzaGluZyBvcGVyYXRpb24uDQorICovDQordm9pZCB4
eGgzMl9yZXNldChzdHJ1Y3QgeHhoMzJfc3RhdGUgKnN0YXRlLCB1aW50MzJfdCBzZWVkKTsNCisN
CisvKioNCisgKiB4eGgzMl91cGRhdGUoKSAtIGhhc2ggdGhlIGRhdGEgZ2l2ZW4gYW5kIHVwZGF0
ZSB0aGUgeHhoMzIgc3RhdGUNCisgKg0KKyAqIEBzdGF0ZTogIFRoZSB4eGgzMiBzdGF0ZSB0byB1
cGRhdGUuDQorICogQGlucHV0OiAgVGhlIGRhdGEgdG8gaGFzaC4NCisgKiBAbGVuZ3RoOiBUaGUg
bGVuZ3RoIG9mIHRoZSBkYXRhIHRvIGhhc2guDQorICoNCisgKiBBZnRlciBjYWxsaW5nIHh4aDMy
X3Jlc2V0KCkgY2FsbCB4eGgzMl91cGRhdGUoKSBhcyBtYW55IHRpbWVzIGFzIG5lY2Vzc2FyeS4N
CisgKg0KKyAqIFJldHVybjogIFplcm8gb24gc3VjY2Vzcywgb3RoZXJ3aXNlIGFuIGVycm9yIGNv
ZGUuDQorICovDQoraW50IHh4aDMyX3VwZGF0ZShzdHJ1Y3QgeHhoMzJfc3RhdGUgKnN0YXRlLCBj
b25zdCB2b2lkICppbnB1dCwgc2l6ZV90IGxlbmd0aCk7DQorDQorLyoqDQorICogeHhoMzJfZGln
ZXN0KCkgLSBwcm9kdWNlIHRoZSBjdXJyZW50IHh4aDMyIGhhc2gNCisgKg0KKyAqIEBzdGF0ZTog
UHJvZHVjZSB0aGUgY3VycmVudCB4eGgzMiBoYXNoIG9mIHRoaXMgc3RhdGUuDQorICoNCisgKiBB
IGhhc2ggdmFsdWUgY2FuIGJlIHByb2R1Y2VkIGF0IGFueSB0aW1lLiBJdCBpcyBzdGlsbCBwb3Nz
aWJsZSB0byBjb250aW51ZQ0KKyAqIGluc2VydGluZyBpbnB1dCBpbnRvIHRoZSBoYXNoIHN0YXRl
IGFmdGVyIGEgY2FsbCB0byB4eGgzMl9kaWdlc3QoKSwgYW5kDQorICogZ2VuZXJhdGUgbmV3IGhh
c2hlcyBsYXRlciBvbiwgYnkgY2FsbGluZyB4eGgzMl9kaWdlc3QoKSBhZ2Fpbi4NCisgKg0KKyAq
IFJldHVybjogVGhlIHh4aDMyIGhhc2ggc3RvcmVkIGluIHRoZSBzdGF0ZS4NCisgKi8NCit1aW50
MzJfdCB4eGgzMl9kaWdlc3QoY29uc3Qgc3RydWN0IHh4aDMyX3N0YXRlICpzdGF0ZSk7DQorDQor
LyoqDQorICogeHhoNjRfcmVzZXQoKSAtIHJlc2V0IHRoZSB4eGg2NCBzdGF0ZSB0byBzdGFydCBh
IG5ldyBoYXNoaW5nIG9wZXJhdGlvbg0KKyAqDQorICogQHN0YXRlOiBUaGUgeHhoNjQgc3RhdGUg
dG8gcmVzZXQuDQorICogQHNlZWQ6ICBJbml0aWFsaXplIHRoZSBoYXNoIHN0YXRlIHdpdGggdGhp
cyBzZWVkLg0KKyAqLw0KK3ZvaWQgeHhoNjRfcmVzZXQoc3RydWN0IHh4aDY0X3N0YXRlICpzdGF0
ZSwgdWludDY0X3Qgc2VlZCk7DQorDQorLyoqDQorICogeHhoNjRfdXBkYXRlKCkgLSBoYXNoIHRo
ZSBkYXRhIGdpdmVuIGFuZCB1cGRhdGUgdGhlIHh4aDY0IHN0YXRlDQorICogQHN0YXRlOiAgVGhl
IHh4aDY0IHN0YXRlIHRvIHVwZGF0ZS4NCisgKiBAaW5wdXQ6ICBUaGUgZGF0YSB0byBoYXNoLg0K
KyAqIEBsZW5ndGg6IFRoZSBsZW5ndGggb2YgdGhlIGRhdGEgdG8gaGFzaC4NCisgKg0KKyAqIEFm
dGVyIGNhbGxpbmcgeHhoNjRfcmVzZXQoKSBjYWxsIHh4aDY0X3VwZGF0ZSgpIGFzIG1hbnkgdGlt
ZXMgYXMgbmVjZXNzYXJ5Lg0KKyAqDQorICogUmV0dXJuOiAgWmVybyBvbiBzdWNjZXNzLCBvdGhl
cndpc2UgYW4gZXJyb3IgY29kZS4NCisgKi8NCitpbnQgeHhoNjRfdXBkYXRlKHN0cnVjdCB4eGg2
NF9zdGF0ZSAqc3RhdGUsIGNvbnN0IHZvaWQgKmlucHV0LCBzaXplX3QgbGVuZ3RoKTsNCisNCisv
KioNCisgKiB4eGg2NF9kaWdlc3QoKSAtIHByb2R1Y2UgdGhlIGN1cnJlbnQgeHhoNjQgaGFzaA0K
KyAqDQorICogQHN0YXRlOiBQcm9kdWNlIHRoZSBjdXJyZW50IHh4aDY0IGhhc2ggb2YgdGhpcyBz
dGF0ZS4NCisgKg0KKyAqIEEgaGFzaCB2YWx1ZSBjYW4gYmUgcHJvZHVjZWQgYXQgYW55IHRpbWUu
IEl0IGlzIHN0aWxsIHBvc3NpYmxlIHRvIGNvbnRpbnVlDQorICogaW5zZXJ0aW5nIGlucHV0IGlu
dG8gdGhlIGhhc2ggc3RhdGUgYWZ0ZXIgYSBjYWxsIHRvIHh4aDY0X2RpZ2VzdCgpLCBhbmQNCisg
KiBnZW5lcmF0ZSBuZXcgaGFzaGVzIGxhdGVyIG9uLCBieSBjYWxsaW5nIHh4aDY0X2RpZ2VzdCgp
IGFnYWluLg0KKyAqDQorICogUmV0dXJuOiBUaGUgeHhoNjQgaGFzaCBzdG9yZWQgaW4gdGhlIHN0
YXRlLg0KKyAqLw0KK3VpbnQ2NF90IHh4aDY0X2RpZ2VzdChjb25zdCBzdHJ1Y3QgeHhoNjRfc3Rh
dGUgKnN0YXRlKTsNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyAqIFV0aWxz
DQorICoqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisNCisvKioNCisgKiB4eGgzMl9jb3B5
X3N0YXRlKCkgLSBjb3B5IHRoZSBzb3VyY2Ugc3RhdGUgaW50byB0aGUgZGVzdGluYXRpb24gc3Rh
dGUNCisgKg0KKyAqIEBzcmM6IFRoZSBzb3VyY2UgeHhoMzIgc3RhdGUuDQorICogQGRzdDogVGhl
IGRlc3RpbmF0aW9uIHh4aDMyIHN0YXRlLg0KKyAqLw0KK3ZvaWQgeHhoMzJfY29weV9zdGF0ZShz
dHJ1Y3QgeHhoMzJfc3RhdGUgKmRzdCwgY29uc3Qgc3RydWN0IHh4aDMyX3N0YXRlICpzcmMpOw0K
Kw0KKy8qKg0KKyAqIHh4aDY0X2NvcHlfc3RhdGUoKSAtIGNvcHkgdGhlIHNvdXJjZSBzdGF0ZSBp
bnRvIHRoZSBkZXN0aW5hdGlvbiBzdGF0ZQ0KKyAqDQorICogQHNyYzogVGhlIHNvdXJjZSB4eGg2
NCBzdGF0ZS4NCisgKiBAZHN0OiBUaGUgZGVzdGluYXRpb24geHhoNjQgc3RhdGUuDQorICovDQor
dm9pZCB4eGg2NF9jb3B5X3N0YXRlKHN0cnVjdCB4eGg2NF9zdGF0ZSAqZHN0LCBjb25zdCBzdHJ1
Y3QgeHhoNjRfc3RhdGUgKnNyYyk7DQorDQorI2VuZGlmIC8qIFhYSEFTSF9IICovDQpkaWZmIC0t
Z2l0IGEveGVuL2luY2x1ZGUveGVuL3pzdGQuaCBiL3hlbi9pbmNsdWRlL3hlbi96c3RkLmgNCm5l
dyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAwMDAwMDAwLi4yNDk1NzVlMjQ4DQotLS0gL2Rl
di9udWxsDQorKysgYi94ZW4vaW5jbHVkZS94ZW4venN0ZC5oDQpAQCAtMCwwICsxLDExNTcgQEAN
CisvKg0KKyAqIENvcHlyaWdodCAoYykgMjAxNi1wcmVzZW50LCBZYW5uIENvbGxldCwgRmFjZWJv
b2ssIEluYy4NCisgKiBBbGwgcmlnaHRzIHJlc2VydmVkLg0KKyAqDQorICogVGhpcyBzb3VyY2Ug
Y29kZSBpcyBsaWNlbnNlZCB1bmRlciB0aGUgQlNELXN0eWxlIGxpY2Vuc2UgZm91bmQgaW4gdGhl
DQorICogTElDRU5TRSBmaWxlIGluIHRoZSByb290IGRpcmVjdG9yeSBvZiBodHRwczovL2dpdGh1
Yi5jb20vZmFjZWJvb2svenN0ZC4NCisgKiBBbiBhZGRpdGlvbmFsIGdyYW50IG9mIHBhdGVudCBy
aWdodHMgY2FuIGJlIGZvdW5kIGluIHRoZSBQQVRFTlRTIGZpbGUgaW4gdGhlDQorICogc2FtZSBk
aXJlY3RvcnkuDQorICoNCisgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNh
biByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeSBpdCB1bmRlcg0KKyAqIHRoZSB0ZXJtcyBv
ZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgdmVyc2lvbiAyIGFzIHB1Ymxpc2hlZCBi
eSB0aGUNCisgKiBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uIFRoaXMgcHJvZ3JhbSBpcyBkdWFs
LWxpY2Vuc2VkOyB5b3UgbWF5IHNlbGVjdA0KKyAqIGVpdGhlciB2ZXJzaW9uIDIgb2YgdGhlIEdO
VSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlICgiR1BMIikgb3IgQlNEIGxpY2Vuc2UNCisgKiAoIkJT
RCIpLg0KKyAqLw0KKw0KKyNpZm5kZWYgWlNURF9IDQorI2RlZmluZSBaU1REX0gNCisNCisvKiA9
PT09PT0gICBEZXBlbmRlbmN5ICAgPT09PT09Ki8NCisjaW5jbHVkZSA8bGludXgvdHlwZXMuaD4g
ICAvKiBzaXplX3QgKi8NCisNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyAqIEludHJv
ZHVjdGlvbg0KKyAqDQorICogenN0ZCwgc2hvcnQgZm9yIFpzdGFuZGFyZCwgaXMgYSBmYXN0IGxv
c3NsZXNzIGNvbXByZXNzaW9uIGFsZ29yaXRobSwNCisgKiB0YXJnZXRpbmcgcmVhbC10aW1lIGNv
bXByZXNzaW9uIHNjZW5hcmlvcyBhdCB6bGliLWxldmVsIGFuZCBiZXR0ZXINCisgKiBjb21wcmVz
c2lvbiByYXRpb3MuIFRoZSB6c3RkIGNvbXByZXNzaW9uIGxpYnJhcnkgcHJvdmlkZXMgaW4tbWVt
b3J5DQorICogY29tcHJlc3Npb24gYW5kIGRlY29tcHJlc3Npb24gZnVuY3Rpb25zLiBUaGUgbGli
cmFyeSBzdXBwb3J0cyBjb21wcmVzc2lvbg0KKyAqIGxldmVscyBmcm9tIDEgdXAgdG8gWlNURF9t
YXhDTGV2ZWwoKSB3aGljaCBpcyAyMi4gTGV2ZWxzID49IDIwLCBsYWJlbGVkDQorICogdWx0cmEs
IHNob3VsZCBiZSB1c2VkIHdpdGggY2F1dGlvbiwgYXMgdGhleSByZXF1aXJlIG1vcmUgbWVtb3J5
Lg0KKyAqIENvbXByZXNzaW9uIGNhbiBiZSBkb25lIGluOg0KKyAqICAtIGEgc2luZ2xlIHN0ZXAs
IHJldXNpbmcgYSBjb250ZXh0IChkZXNjcmliZWQgYXMgRXhwbGljaXQgbWVtb3J5IG1hbmFnZW1l
bnQpDQorICogIC0gdW5ib3VuZGVkIG11bHRpcGxlIHN0ZXBzIChkZXNjcmliZWQgYXMgU3RyZWFt
aW5nIGNvbXByZXNzaW9uKQ0KKyAqIFRoZSBjb21wcmVzc2lvbiByYXRpbyBhY2hpZXZhYmxlIG9u
IHNtYWxsIGRhdGEgY2FuIGJlIGhpZ2hseSBpbXByb3ZlZCB1c2luZw0KKyAqIGNvbXByZXNzaW9u
IHdpdGggYSBkaWN0aW9uYXJ5IGluOg0KKyAqICAtIGEgc2luZ2xlIHN0ZXAgKGRlc2NyaWJlZCBh
cyBTaW1wbGUgZGljdGlvbmFyeSBBUEkpDQorICogIC0gYSBzaW5nbGUgc3RlcCwgcmV1c2luZyBh
IGRpY3Rpb25hcnkgKGRlc2NyaWJlZCBhcyBGYXN0IGRpY3Rpb25hcnkgQVBJKQ0KKyAqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKiovDQorDQorLyo9PT09PT0gIEhlbHBlciBmdW5jdGlvbnMgID09PT09PSov
DQorDQorLyoqDQorICogZW51bSBaU1REX0Vycm9yQ29kZSAtIHpzdGQgZXJyb3IgY29kZXMNCisg
Kg0KKyAqIEZ1bmN0aW9ucyB0aGF0IHJldHVybiBzaXplX3QgY2FuIGJlIGNoZWNrZWQgZm9yIGVy
cm9ycyB1c2luZyBaU1REX2lzRXJyb3IoKQ0KKyAqIGFuZCB0aGUgWlNURF9FcnJvckNvZGUgY2Fu
IGJlIGV4dHJhY3RlZCB1c2luZyBaU1REX2dldEVycm9yQ29kZSgpLg0KKyAqLw0KK3R5cGVkZWYg
ZW51bSB7DQorCVpTVERfZXJyb3Jfbm9fZXJyb3IsDQorCVpTVERfZXJyb3JfR0VORVJJQywNCisJ
WlNURF9lcnJvcl9wcmVmaXhfdW5rbm93biwNCisJWlNURF9lcnJvcl92ZXJzaW9uX3Vuc3VwcG9y
dGVkLA0KKwlaU1REX2Vycm9yX3BhcmFtZXRlcl91bmtub3duLA0KKwlaU1REX2Vycm9yX2ZyYW1l
UGFyYW1ldGVyX3Vuc3VwcG9ydGVkLA0KKwlaU1REX2Vycm9yX2ZyYW1lUGFyYW1ldGVyX3Vuc3Vw
cG9ydGVkQnkzMmJpdHMsDQorCVpTVERfZXJyb3JfZnJhbWVQYXJhbWV0ZXJfd2luZG93VG9vTGFy
Z2UsDQorCVpTVERfZXJyb3JfY29tcHJlc3Npb25QYXJhbWV0ZXJfdW5zdXBwb3J0ZWQsDQorCVpT
VERfZXJyb3JfaW5pdF9taXNzaW5nLA0KKwlaU1REX2Vycm9yX21lbW9yeV9hbGxvY2F0aW9uLA0K
KwlaU1REX2Vycm9yX3N0YWdlX3dyb25nLA0KKwlaU1REX2Vycm9yX2RzdFNpemVfdG9vU21hbGws
DQorCVpTVERfZXJyb3Jfc3JjU2l6ZV93cm9uZywNCisJWlNURF9lcnJvcl9jb3JydXB0aW9uX2Rl
dGVjdGVkLA0KKwlaU1REX2Vycm9yX2NoZWNrc3VtX3dyb25nLA0KKwlaU1REX2Vycm9yX3RhYmxl
TG9nX3Rvb0xhcmdlLA0KKwlaU1REX2Vycm9yX21heFN5bWJvbFZhbHVlX3Rvb0xhcmdlLA0KKwla
U1REX2Vycm9yX21heFN5bWJvbFZhbHVlX3Rvb1NtYWxsLA0KKwlaU1REX2Vycm9yX2RpY3Rpb25h
cnlfY29ycnVwdGVkLA0KKwlaU1REX2Vycm9yX2RpY3Rpb25hcnlfd3JvbmcsDQorCVpTVERfZXJy
b3JfZGljdGlvbmFyeUNyZWF0aW9uX2ZhaWxlZCwNCisJWlNURF9lcnJvcl9tYXhDb2RlDQorfSBa
U1REX0Vycm9yQ29kZTsNCisNCisvKioNCisgKiBaU1REX21heENMZXZlbCgpIC0gbWF4aW11bSBj
b21wcmVzc2lvbiBsZXZlbCBhdmFpbGFibGUNCisgKg0KKyAqIFJldHVybjogTWF4aW11bSBjb21w
cmVzc2lvbiBsZXZlbCBhdmFpbGFibGUuDQorICovDQoraW50IFpTVERfbWF4Q0xldmVsKHZvaWQp
Ow0KKy8qKg0KKyAqIFpTVERfY29tcHJlc3NCb3VuZCgpIC0gbWF4aW11bSBjb21wcmVzc2VkIHNp
emUgaW4gd29yc3QgY2FzZSBzY2VuYXJpbw0KKyAqIEBzcmNTaXplOiBUaGUgc2l6ZSBvZiB0aGUg
ZGF0YSB0byBjb21wcmVzcy4NCisgKg0KKyAqIFJldHVybjogICBUaGUgbWF4aW11bSBjb21wcmVz
c2VkIHNpemUgaW4gdGhlIHdvcnN0IGNhc2Ugc2NlbmFyaW8uDQorICovDQorc2l6ZV90IFpTVERf
Y29tcHJlc3NCb3VuZChzaXplX3Qgc3JjU2l6ZSk7DQorLyoqDQorICogWlNURF9pc0Vycm9yKCkg
LSB0ZWxscyBpZiBhIHNpemVfdCBmdW5jdGlvbiByZXN1bHQgaXMgYW4gZXJyb3IgY29kZQ0KKyAq
IEBjb2RlOiAgVGhlIGZ1bmN0aW9uIHJlc3VsdCB0byBjaGVjayBmb3IgZXJyb3IuDQorICoNCisg
KiBSZXR1cm46IE5vbi16ZXJvIGlmZiB0aGUgY29kZSBpcyBhbiBlcnJvci4NCisgKi8NCitzdGF0
aWMgX19hdHRyaWJ1dGVfXygodW51c2VkKSkgdW5zaWduZWQgaW50IFpTVERfaXNFcnJvcihzaXpl
X3QgY29kZSkNCit7DQorCXJldHVybiBjb2RlID4gKHNpemVfdCktWlNURF9lcnJvcl9tYXhDb2Rl
Ow0KK30NCisvKioNCisgKiBaU1REX2dldEVycm9yQ29kZSgpIC0gdHJhbnNsYXRlcyBhbiBlcnJv
ciBmdW5jdGlvbiByZXN1bHQgdG8gYSBaU1REX0Vycm9yQ29kZQ0KKyAqIEBmdW5jdGlvblJlc3Vs
dDogVGhlIHJlc3VsdCBvZiBhIGZ1bmN0aW9uIGZvciB3aGljaCBaU1REX2lzRXJyb3IoKSBpcyB0
cnVlLg0KKyAqDQorICogUmV0dXJuOiAgICAgICAgICBUaGUgWlNURF9FcnJvckNvZGUgY29ycmVz
cG9uZGluZyB0byB0aGUgZnVuY3Rpb25SZXN1bHQgb3IgMA0KKyAqICAgICAgICAgICAgICAgICAg
aWYgdGhlIGZ1bmN0aW9uUmVzdWx0IGlzbid0IGFuIGVycm9yLg0KKyAqLw0KK3N0YXRpYyBfX2F0
dHJpYnV0ZV9fKCh1bnVzZWQpKSBaU1REX0Vycm9yQ29kZSBaU1REX2dldEVycm9yQ29kZSgNCisJ
c2l6ZV90IGZ1bmN0aW9uUmVzdWx0KQ0KK3sNCisJaWYgKCFaU1REX2lzRXJyb3IoZnVuY3Rpb25S
ZXN1bHQpKQ0KKwkJcmV0dXJuIChaU1REX0Vycm9yQ29kZSkwOw0KKwlyZXR1cm4gKFpTVERfRXJy
b3JDb2RlKSgwIC0gZnVuY3Rpb25SZXN1bHQpOw0KK30NCisNCisvKioNCisgKiBlbnVtIFpTVERf
c3RyYXRlZ3kgLSB6c3RkIGNvbXByZXNzaW9uIHNlYXJjaCBzdHJhdGVneQ0KKyAqDQorICogRnJv
bSBmYXN0ZXIgdG8gc3Ryb25nZXIuDQorICovDQordHlwZWRlZiBlbnVtIHsNCisJWlNURF9mYXN0
LA0KKwlaU1REX2RmYXN0LA0KKwlaU1REX2dyZWVkeSwNCisJWlNURF9sYXp5LA0KKwlaU1REX2xh
enkyLA0KKwlaU1REX2J0bGF6eTIsDQorCVpTVERfYnRvcHQsDQorCVpTVERfYnRvcHQyDQorfSBa
U1REX3N0cmF0ZWd5Ow0KKw0KKy8qKg0KKyAqIHN0cnVjdCBaU1REX2NvbXByZXNzaW9uUGFyYW1l
dGVycyAtIHpzdGQgY29tcHJlc3Npb24gcGFyYW1ldGVycw0KKyAqIEB3aW5kb3dMb2c6ICAgIExv
ZyBvZiB0aGUgbGFyZ2VzdCBtYXRjaCBkaXN0YW5jZS4gTGFyZ2VyIG1lYW5zIG1vcmUNCisgKiAg
ICAgICAgICAgICAgICBjb21wcmVzc2lvbiwgYW5kIG1vcmUgbWVtb3J5IG5lZWRlZCBkdXJpbmcg
ZGVjb21wcmVzc2lvbi4NCisgKiBAY2hhaW5Mb2c6ICAgICBGdWxseSBzZWFyY2hlZCBzZWdtZW50
LiBMYXJnZXIgbWVhbnMgbW9yZSBjb21wcmVzc2lvbiwgc2xvd2VyLA0KKyAqICAgICAgICAgICAg
ICAgIGFuZCBtb3JlIG1lbW9yeSAodXNlbGVzcyBmb3IgZmFzdCkuDQorICogQGhhc2hMb2c6ICAg
ICAgRGlzcGF0Y2ggdGFibGUuIExhcmdlciBtZWFucyBtb3JlIGNvbXByZXNzaW9uLA0KKyAqICAg
ICAgICAgICAgICAgIHNsb3dlciwgYW5kIG1vcmUgbWVtb3J5Lg0KKyAqIEBzZWFyY2hMb2c6ICAg
IE51bWJlciBvZiBzZWFyY2hlcy4gTGFyZ2VyIG1lYW5zIG1vcmUgY29tcHJlc3Npb24gYW5kIHNs
b3dlci4NCisgKiBAc2VhcmNoTGVuZ3RoOiBNYXRjaCBsZW5ndGggc2VhcmNoZWQuIExhcmdlciBt
ZWFucyBmYXN0ZXIgZGVjb21wcmVzc2lvbiwNCisgKiAgICAgICAgICAgICAgICBzb21ldGltZXMg
bGVzcyBjb21wcmVzc2lvbi4NCisgKiBAdGFyZ2V0TGVuZ3RoOiBBY2NlcHRhYmxlIG1hdGNoIHNp
emUgZm9yIG9wdGltYWwgcGFyc2VyIChvbmx5KS4gTGFyZ2VyIG1lYW5zDQorICogICAgICAgICAg
ICAgICAgbW9yZSBjb21wcmVzc2lvbiwgYW5kIHNsb3dlci4NCisgKiBAc3RyYXRlZ3k6ICAgICBU
aGUgenN0ZCBjb21wcmVzc2lvbiBzdHJhdGVneS4NCisgKi8NCit0eXBlZGVmIHN0cnVjdCB7DQor
CXVuc2lnbmVkIGludCB3aW5kb3dMb2c7DQorCXVuc2lnbmVkIGludCBjaGFpbkxvZzsNCisJdW5z
aWduZWQgaW50IGhhc2hMb2c7DQorCXVuc2lnbmVkIGludCBzZWFyY2hMb2c7DQorCXVuc2lnbmVk
IGludCBzZWFyY2hMZW5ndGg7DQorCXVuc2lnbmVkIGludCB0YXJnZXRMZW5ndGg7DQorCVpTVERf
c3RyYXRlZ3kgc3RyYXRlZ3k7DQorfSBaU1REX2NvbXByZXNzaW9uUGFyYW1ldGVyczsNCisNCisv
KioNCisgKiBzdHJ1Y3QgWlNURF9mcmFtZVBhcmFtZXRlcnMgLSB6c3RkIGZyYW1lIHBhcmFtZXRl
cnMNCisgKiBAY29udGVudFNpemVGbGFnOiBDb250cm9scyB3aGV0aGVyIGNvbnRlbnQgc2l6ZSB3
aWxsIGJlIHByZXNlbnQgaW4gdGhlIGZyYW1lDQorICogICAgICAgICAgICAgICAgICAgaGVhZGVy
ICh3aGVuIGtub3duKS4NCisgKiBAY2hlY2tzdW1GbGFnOiAgICBDb250cm9scyB3aGV0aGVyIGEg
MzItYml0IGNoZWNrc3VtIGlzIGdlbmVyYXRlZCBhdCB0aGUgZW5kDQorICogICAgICAgICAgICAg
ICAgICAgb2YgdGhlIGZyYW1lIGZvciBlcnJvciBkZXRlY3Rpb24uDQorICogQG5vRGljdElERmxh
ZzogICAgQ29udHJvbHMgd2hldGhlciBkaWN0SUQgd2lsbCBiZSBzYXZlZCBpbnRvIHRoZSBmcmFt
ZSBoZWFkZXINCisgKiAgICAgICAgICAgICAgICAgICB3aGVuIHVzaW5nIGRpY3Rpb25hcnkgY29t
cHJlc3Npb24uDQorICoNCisgKiBUaGUgZGVmYXVsdCB2YWx1ZSBpcyBhbGwgZmllbGRzIHNldCB0
byAwLg0KKyAqLw0KK3R5cGVkZWYgc3RydWN0IHsNCisJdW5zaWduZWQgaW50IGNvbnRlbnRTaXpl
RmxhZzsNCisJdW5zaWduZWQgaW50IGNoZWNrc3VtRmxhZzsNCisJdW5zaWduZWQgaW50IG5vRGlj
dElERmxhZzsNCit9IFpTVERfZnJhbWVQYXJhbWV0ZXJzOw0KKw0KKy8qKg0KKyAqIHN0cnVjdCBa
U1REX3BhcmFtZXRlcnMgLSB6c3RkIHBhcmFtZXRlcnMNCisgKiBAY1BhcmFtczogVGhlIGNvbXBy
ZXNzaW9uIHBhcmFtZXRlcnMuDQorICogQGZQYXJhbXM6IFRoZSBmcmFtZSBwYXJhbWV0ZXJzLg0K
KyAqLw0KK3R5cGVkZWYgc3RydWN0IHsNCisJWlNURF9jb21wcmVzc2lvblBhcmFtZXRlcnMgY1Bh
cmFtczsNCisJWlNURF9mcmFtZVBhcmFtZXRlcnMgZlBhcmFtczsNCit9IFpTVERfcGFyYW1ldGVy
czsNCisNCisvKioNCisgKiBaU1REX2dldENQYXJhbXMoKSAtIHJldHVybnMgWlNURF9jb21wcmVz
c2lvblBhcmFtZXRlcnMgZm9yIHNlbGVjdGVkIGxldmVsDQorICogQGNvbXByZXNzaW9uTGV2ZWw6
IFRoZSBjb21wcmVzc2lvbiBsZXZlbCBmcm9tIDEgdG8gWlNURF9tYXhDTGV2ZWwoKS4NCisgKiBA
ZXN0aW1hdGVkU3JjU2l6ZTogVGhlIGVzdGltYXRlZCBzb3VyY2Ugc2l6ZSB0byBjb21wcmVzcyBv
ciAwIGlmIHVua25vd24uDQorICogQGRpY3RTaXplOiAgICAgICAgIFRoZSBkaWN0aW9uYXJ5IHNp
emUgb3IgMCBpZiBhIGRpY3Rpb25hcnkgaXNuJ3QgYmVpbmcgdXNlZC4NCisgKg0KKyAqIFJldHVy
bjogICAgICAgICAgICBUaGUgc2VsZWN0ZWQgWlNURF9jb21wcmVzc2lvblBhcmFtZXRlcnMuDQor
ICovDQorWlNURF9jb21wcmVzc2lvblBhcmFtZXRlcnMgWlNURF9nZXRDUGFyYW1zKGludCBjb21w
cmVzc2lvbkxldmVsLA0KKwl1bnNpZ25lZCBsb25nIGxvbmcgZXN0aW1hdGVkU3JjU2l6ZSwgc2l6
ZV90IGRpY3RTaXplKTsNCisNCisvKioNCisgKiBaU1REX2dldFBhcmFtcygpIC0gcmV0dXJucyBa
U1REX3BhcmFtZXRlcnMgZm9yIHNlbGVjdGVkIGxldmVsDQorICogQGNvbXByZXNzaW9uTGV2ZWw6
IFRoZSBjb21wcmVzc2lvbiBsZXZlbCBmcm9tIDEgdG8gWlNURF9tYXhDTGV2ZWwoKS4NCisgKiBA
ZXN0aW1hdGVkU3JjU2l6ZTogVGhlIGVzdGltYXRlZCBzb3VyY2Ugc2l6ZSB0byBjb21wcmVzcyBv
ciAwIGlmIHVua25vd24uDQorICogQGRpY3RTaXplOiAgICAgICAgIFRoZSBkaWN0aW9uYXJ5IHNp
emUgb3IgMCBpZiBhIGRpY3Rpb25hcnkgaXNuJ3QgYmVpbmcgdXNlZC4NCisgKg0KKyAqIFRoZSBz
YW1lIGFzIFpTVERfZ2V0Q1BhcmFtcygpIGV4Y2VwdCBhbHNvIHNlbGVjdHMgdGhlIGRlZmF1bHQg
ZnJhbWUNCisgKiBwYXJhbWV0ZXJzIChhbGwgemVybykuDQorICoNCisgKiBSZXR1cm46ICAgICAg
ICAgICAgVGhlIHNlbGVjdGVkIFpTVERfcGFyYW1ldGVycy4NCisgKi8NCitaU1REX3BhcmFtZXRl
cnMgWlNURF9nZXRQYXJhbXMoaW50IGNvbXByZXNzaW9uTGV2ZWwsDQorCXVuc2lnbmVkIGxvbmcg
bG9uZyBlc3RpbWF0ZWRTcmNTaXplLCBzaXplX3QgZGljdFNpemUpOw0KKw0KKy8qLSoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisgKiBFeHBsaWNpdCBtZW1vcnkgbWFuYWdl
bWVudA0KKyAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisNCisvKioN
CisgKiBaU1REX0NDdHhXb3Jrc3BhY2VCb3VuZCgpIC0gYW1vdW50IG9mIG1lbW9yeSBuZWVkZWQg
dG8gaW5pdGlhbGl6ZSBhIFpTVERfQ0N0eA0KKyAqIEBjUGFyYW1zOiBUaGUgY29tcHJlc3Npb24g
cGFyYW1ldGVycyB0byBiZSB1c2VkIGZvciBjb21wcmVzc2lvbi4NCisgKg0KKyAqIElmIG11bHRp
cGxlIGNvbXByZXNzaW9uIHBhcmFtZXRlcnMgbWlnaHQgYmUgdXNlZCwgdGhlIGNhbGxlciBtdXN0
IGNhbGwNCisgKiBaU1REX0NDdHhXb3Jrc3BhY2VCb3VuZCgpIGZvciBlYWNoIHNldCBvZiBwYXJh
bWV0ZXJzIGFuZCB1c2UgdGhlIG1heGltdW0NCisgKiBzaXplLg0KKyAqDQorICogUmV0dXJuOiAg
IEEgbG93ZXIgYm91bmQgb24gdGhlIHNpemUgb2YgdGhlIHdvcmtzcGFjZSB0aGF0IGlzIHBhc3Nl
ZCB0bw0KKyAqICAgICAgICAgICBaU1REX2luaXRDQ3R4KCkuDQorICovDQorc2l6ZV90IFpTVERf
Q0N0eFdvcmtzcGFjZUJvdW5kKFpTVERfY29tcHJlc3Npb25QYXJhbWV0ZXJzIGNQYXJhbXMpOw0K
Kw0KKy8qKg0KKyAqIHN0cnVjdCBaU1REX0NDdHggLSB0aGUgenN0ZCBjb21wcmVzc2lvbiBjb250
ZXh0DQorICoNCisgKiBXaGVuIGNvbXByZXNzaW5nIG1hbnkgdGltZXMgaXQgaXMgcmVjb21tZW5k
ZWQgdG8gYWxsb2NhdGUgYSBjb250ZXh0IGp1c3Qgb25jZQ0KKyAqIGFuZCByZXVzZSBpdCBmb3Ig
ZWFjaCBzdWNjZXNzaXZlIGNvbXByZXNzaW9uIG9wZXJhdGlvbi4NCisgKi8NCit0eXBlZGVmIHN0
cnVjdCBaU1REX0NDdHhfcyBaU1REX0NDdHg7DQorLyoqDQorICogWlNURF9pbml0Q0N0eCgpIC0g
aW5pdGlhbGl6ZSBhIHpzdGQgY29tcHJlc3Npb24gY29udGV4dA0KKyAqIEB3b3Jrc3BhY2U6ICAg
ICBUaGUgd29ya3NwYWNlIHRvIGVtcGxhY2UgdGhlIGNvbnRleHQgaW50by4gSXQgbXVzdCBvdXRs
aXZlDQorICogICAgICAgICAgICAgICAgIHRoZSByZXR1cm5lZCBjb250ZXh0Lg0KKyAqIEB3b3Jr
c3BhY2VTaXplOiBUaGUgc2l6ZSBvZiB3b3Jrc3BhY2UuIFVzZSBaU1REX0NDdHhXb3Jrc3BhY2VC
b3VuZCgpIHRvDQorICogICAgICAgICAgICAgICAgIGRldGVybWluZSBob3cgbGFyZ2UgdGhlIHdv
cmtzcGFjZSBtdXN0IGJlLg0KKyAqDQorICogUmV0dXJuOiAgICAgICAgIEEgY29tcHJlc3Npb24g
Y29udGV4dCBlbXBsYWNlZCBpbnRvIHdvcmtzcGFjZS4NCisgKi8NCitaU1REX0NDdHggKlpTVERf
aW5pdENDdHgodm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSk7DQorDQorLyoq
DQorICogWlNURF9jb21wcmVzc0NDdHgoKSAtIGNvbXByZXNzIHNyYyBpbnRvIGRzdA0KKyAqIEBj
dHg6ICAgICAgICAgVGhlIGNvbnRleHQuIE11c3QgaGF2ZSBiZWVuIGluaXRpYWxpemVkIHdpdGgg
YSB3b3Jrc3BhY2UgYXQNCisgKiAgICAgICAgICAgICAgIGxlYXN0IGFzIGxhcmdlIGFzIFpTVERf
Q0N0eFdvcmtzcGFjZUJvdW5kKHBhcmFtcy5jUGFyYW1zKS4NCisgKiBAZHN0OiAgICAgICAgIFRo
ZSBidWZmZXIgdG8gY29tcHJlc3Mgc3JjIGludG8uDQorICogQGRzdENhcGFjaXR5OiBUaGUgc2l6
ZSBvZiB0aGUgZGVzdGluYXRpb24gYnVmZmVyLiBNYXkgYmUgYW55IHNpemUsIGJ1dA0KKyAqICAg
ICAgICAgICAgICAgWlNURF9jb21wcmVzc0JvdW5kKHNyY1NpemUpIGlzIGd1YXJhbnRlZWQgdG8g
YmUgbGFyZ2UgZW5vdWdoLg0KKyAqIEBzcmM6ICAgICAgICAgVGhlIGRhdGEgdG8gY29tcHJlc3Mu
DQorICogQHNyY1NpemU6ICAgICBUaGUgc2l6ZSBvZiB0aGUgZGF0YSB0byBjb21wcmVzcy4NCisg
KiBAcGFyYW1zOiAgICAgIFRoZSBwYXJhbWV0ZXJzIHRvIHVzZSBmb3IgY29tcHJlc3Npb24uIFNl
ZSBaU1REX2dldFBhcmFtcygpLg0KKyAqDQorICogUmV0dXJuOiAgICAgICBUaGUgY29tcHJlc3Nl
ZCBzaXplIG9yIGFuIGVycm9yLCB3aGljaCBjYW4gYmUgY2hlY2tlZCB1c2luZw0KKyAqICAgICAg
ICAgICAgICAgWlNURF9pc0Vycm9yKCkuDQorICovDQorc2l6ZV90IFpTVERfY29tcHJlc3NDQ3R4
KFpTVERfQ0N0eCAqY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwNCisJY29uc3Qg
dm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgWlNURF9wYXJhbWV0ZXJzIHBhcmFtcyk7DQorDQor
LyoqDQorICogWlNURF9EQ3R4V29ya3NwYWNlQm91bmQoKSAtIGFtb3VudCBvZiBtZW1vcnkgbmVl
ZGVkIHRvIGluaXRpYWxpemUgYSBaU1REX0RDdHgNCisgKg0KKyAqIFJldHVybjogQSBsb3dlciBi
b3VuZCBvbiB0aGUgc2l6ZSBvZiB0aGUgd29ya3NwYWNlIHRoYXQgaXMgcGFzc2VkIHRvDQorICog
ICAgICAgICBaU1REX2luaXREQ3R4KCkuDQorICovDQorc2l6ZV90IFpTVERfREN0eFdvcmtzcGFj
ZUJvdW5kKHZvaWQpOw0KKw0KKy8qKg0KKyAqIHN0cnVjdCBaU1REX0RDdHggLSB0aGUgenN0ZCBk
ZWNvbXByZXNzaW9uIGNvbnRleHQNCisgKg0KKyAqIFdoZW4gZGVjb21wcmVzc2luZyBtYW55IHRp
bWVzIGl0IGlzIHJlY29tbWVuZGVkIHRvIGFsbG9jYXRlIGEgY29udGV4dCBqdXN0DQorICogb25j
ZSBhbmQgcmV1c2UgaXQgZm9yIGVhY2ggc3VjY2Vzc2l2ZSBkZWNvbXByZXNzaW9uIG9wZXJhdGlv
bi4NCisgKi8NCit0eXBlZGVmIHN0cnVjdCBaU1REX0RDdHhfcyBaU1REX0RDdHg7DQorLyoqDQor
ICogWlNURF9pbml0REN0eCgpIC0gaW5pdGlhbGl6ZSBhIHpzdGQgZGVjb21wcmVzc2lvbiBjb250
ZXh0DQorICogQHdvcmtzcGFjZTogICAgIFRoZSB3b3Jrc3BhY2UgdG8gZW1wbGFjZSB0aGUgY29u
dGV4dCBpbnRvLiBJdCBtdXN0IG91dGxpdmUNCisgKiAgICAgICAgICAgICAgICAgdGhlIHJldHVy
bmVkIGNvbnRleHQuDQorICogQHdvcmtzcGFjZVNpemU6IFRoZSBzaXplIG9mIHdvcmtzcGFjZS4g
VXNlIFpTVERfREN0eFdvcmtzcGFjZUJvdW5kKCkgdG8NCisgKiAgICAgICAgICAgICAgICAgZGV0
ZXJtaW5lIGhvdyBsYXJnZSB0aGUgd29ya3NwYWNlIG11c3QgYmUuDQorICoNCisgKiBSZXR1cm46
ICAgICAgICAgQSBkZWNvbXByZXNzaW9uIGNvbnRleHQgZW1wbGFjZWQgaW50byB3b3Jrc3BhY2Uu
DQorICovDQorWlNURF9EQ3R4ICpaU1REX2luaXREQ3R4KHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90
IHdvcmtzcGFjZVNpemUpOw0KKw0KKy8qKg0KKyAqIFpTVERfZGVjb21wcmVzc0RDdHgoKSAtIGRl
Y29tcHJlc3MgenN0ZCBjb21wcmVzc2VkIHNyYyBpbnRvIGRzdA0KKyAqIEBjdHg6ICAgICAgICAg
VGhlIGRlY29tcHJlc3Npb24gY29udGV4dC4NCisgKiBAZHN0OiAgICAgICAgIFRoZSBidWZmZXIg
dG8gZGVjb21wcmVzcyBzcmMgaW50by4NCisgKiBAZHN0Q2FwYWNpdHk6IFRoZSBzaXplIG9mIHRo
ZSBkZXN0aW5hdGlvbiBidWZmZXIuIE11c3QgYmUgYXQgbGVhc3QgYXMgbGFyZ2UNCisgKiAgICAg
ICAgICAgICAgIGFzIHRoZSBkZWNvbXByZXNzZWQgc2l6ZS4gSWYgdGhlIGNhbGxlciBjYW5ub3Qg
dXBwZXIgYm91bmQgdGhlDQorICogICAgICAgICAgICAgICBkZWNvbXByZXNzZWQgc2l6ZSwgdGhl
biBpdCdzIGJldHRlciB0byB1c2UgdGhlIHN0cmVhbWluZyBBUEkuDQorICogQHNyYzogICAgICAg
ICBUaGUgenN0ZCBjb21wcmVzc2VkIGRhdGEgdG8gZGVjb21wcmVzcy4gTXVsdGlwbGUgY29uY2F0
ZW5hdGVkDQorICogICAgICAgICAgICAgICBmcmFtZXMgYW5kIHNraXBwYWJsZSBmcmFtZXMgYXJl
IGFsbG93ZWQuDQorICogQHNyY1NpemU6ICAgICBUaGUgZXhhY3Qgc2l6ZSBvZiB0aGUgZGF0YSB0
byBkZWNvbXByZXNzLg0KKyAqDQorICogUmV0dXJuOiAgICAgICBUaGUgZGVjb21wcmVzc2VkIHNp
emUgb3IgYW4gZXJyb3IsIHdoaWNoIGNhbiBiZSBjaGVja2VkIHVzaW5nDQorICogICAgICAgICAg
ICAgICBaU1REX2lzRXJyb3IoKS4NCisgKi8NCitzaXplX3QgWlNURF9kZWNvbXByZXNzREN0eCha
U1REX0RDdHggKmN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksDQorCWNvbnN0IHZv
aWQgKnNyYywgc2l6ZV90IHNyY1NpemUpOw0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioq
Kg0KKyAqIFNpbXBsZSBkaWN0aW9uYXJ5IEFQSQ0KKyAqKioqKioqKioqKioqKioqKioqKioqKioq
Ki8NCisNCisvKioNCisgKiBaU1REX2NvbXByZXNzX3VzaW5nRGljdCgpIC0gY29tcHJlc3Mgc3Jj
IGludG8gZHN0IHVzaW5nIGEgZGljdGlvbmFyeQ0KKyAqIEBjdHg6ICAgICAgICAgVGhlIGNvbnRl
eHQuIE11c3QgaGF2ZSBiZWVuIGluaXRpYWxpemVkIHdpdGggYSB3b3Jrc3BhY2UgYXQNCisgKiAg
ICAgICAgICAgICAgIGxlYXN0IGFzIGxhcmdlIGFzIFpTVERfQ0N0eFdvcmtzcGFjZUJvdW5kKHBh
cmFtcy5jUGFyYW1zKS4NCisgKiBAZHN0OiAgICAgICAgIFRoZSBidWZmZXIgdG8gY29tcHJlc3Mg
c3JjIGludG8uDQorICogQGRzdENhcGFjaXR5OiBUaGUgc2l6ZSBvZiB0aGUgZGVzdGluYXRpb24g
YnVmZmVyLiBNYXkgYmUgYW55IHNpemUsIGJ1dA0KKyAqICAgICAgICAgICAgICAgWlNURF9jb21w
cmVzc0JvdW5kKHNyY1NpemUpIGlzIGd1YXJhbnRlZWQgdG8gYmUgbGFyZ2UgZW5vdWdoLg0KKyAq
IEBzcmM6ICAgICAgICAgVGhlIGRhdGEgdG8gY29tcHJlc3MuDQorICogQHNyY1NpemU6ICAgICBU
aGUgc2l6ZSBvZiB0aGUgZGF0YSB0byBjb21wcmVzcy4NCisgKiBAZGljdDogICAgICAgIFRoZSBk
aWN0aW9uYXJ5IHRvIHVzZSBmb3IgY29tcHJlc3Npb24uDQorICogQGRpY3RTaXplOiAgICBUaGUg
c2l6ZSBvZiB0aGUgZGljdGlvbmFyeS4NCisgKiBAcGFyYW1zOiAgICAgIFRoZSBwYXJhbWV0ZXJz
IHRvIHVzZSBmb3IgY29tcHJlc3Npb24uIFNlZSBaU1REX2dldFBhcmFtcygpLg0KKyAqDQorICog
Q29tcHJlc3Npb24gdXNpbmcgYSBwcmVkZWZpbmVkIGRpY3Rpb25hcnkuIFRoZSBzYW1lIGRpY3Rp
b25hcnkgbXVzdCBiZSB1c2VkDQorICogZHVyaW5nIGRlY29tcHJlc3Npb24uDQorICoNCisgKiBS
ZXR1cm46ICAgICAgIFRoZSBjb21wcmVzc2VkIHNpemUgb3IgYW4gZXJyb3IsIHdoaWNoIGNhbiBi
ZSBjaGVja2VkIHVzaW5nDQorICogICAgICAgICAgICAgICBaU1REX2lzRXJyb3IoKS4NCisgKi8N
CitzaXplX3QgWlNURF9jb21wcmVzc191c2luZ0RpY3QoWlNURF9DQ3R4ICpjdHgsIHZvaWQgKmRz
dCwgc2l6ZV90IGRzdENhcGFjaXR5LA0KKwljb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXpl
LCBjb25zdCB2b2lkICpkaWN0LCBzaXplX3QgZGljdFNpemUsDQorCVpTVERfcGFyYW1ldGVycyBw
YXJhbXMpOw0KKw0KKy8qKg0KKyAqIFpTVERfZGVjb21wcmVzc191c2luZ0RpY3QoKSAtIGRlY29t
cHJlc3Mgc3JjIGludG8gZHN0IHVzaW5nIGEgZGljdGlvbmFyeQ0KKyAqIEBjdHg6ICAgICAgICAg
VGhlIGRlY29tcHJlc3Npb24gY29udGV4dC4NCisgKiBAZHN0OiAgICAgICAgIFRoZSBidWZmZXIg
dG8gZGVjb21wcmVzcyBzcmMgaW50by4NCisgKiBAZHN0Q2FwYWNpdHk6IFRoZSBzaXplIG9mIHRo
ZSBkZXN0aW5hdGlvbiBidWZmZXIuIE11c3QgYmUgYXQgbGVhc3QgYXMgbGFyZ2UNCisgKiAgICAg
ICAgICAgICAgIGFzIHRoZSBkZWNvbXByZXNzZWQgc2l6ZS4gSWYgdGhlIGNhbGxlciBjYW5ub3Qg
dXBwZXIgYm91bmQgdGhlDQorICogICAgICAgICAgICAgICBkZWNvbXByZXNzZWQgc2l6ZSwgdGhl
biBpdCdzIGJldHRlciB0byB1c2UgdGhlIHN0cmVhbWluZyBBUEkuDQorICogQHNyYzogICAgICAg
ICBUaGUgenN0ZCBjb21wcmVzc2VkIGRhdGEgdG8gZGVjb21wcmVzcy4gTXVsdGlwbGUgY29uY2F0
ZW5hdGVkDQorICogICAgICAgICAgICAgICBmcmFtZXMgYW5kIHNraXBwYWJsZSBmcmFtZXMgYXJl
IGFsbG93ZWQuDQorICogQHNyY1NpemU6ICAgICBUaGUgZXhhY3Qgc2l6ZSBvZiB0aGUgZGF0YSB0
byBkZWNvbXByZXNzLg0KKyAqIEBkaWN0OiAgICAgICAgVGhlIGRpY3Rpb25hcnkgdG8gdXNlIGZv
ciBkZWNvbXByZXNzaW9uLiBUaGUgc2FtZSBkaWN0aW9uYXJ5DQorICogICAgICAgICAgICAgICBt
dXN0J3ZlIGJlZW4gdXNlZCB0byBjb21wcmVzcyB0aGUgZGF0YS4NCisgKiBAZGljdFNpemU6ICAg
IFRoZSBzaXplIG9mIHRoZSBkaWN0aW9uYXJ5Lg0KKyAqDQorICogUmV0dXJuOiAgICAgICBUaGUg
ZGVjb21wcmVzc2VkIHNpemUgb3IgYW4gZXJyb3IsIHdoaWNoIGNhbiBiZSBjaGVja2VkIHVzaW5n
DQorICogICAgICAgICAgICAgICBaU1REX2lzRXJyb3IoKS4NCisgKi8NCitzaXplX3QgWlNURF9k
ZWNvbXByZXNzX3VzaW5nRGljdChaU1REX0RDdHggKmN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0
Q2FwYWNpdHksDQorCWNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIGNvbnN0IHZvaWQg
KmRpY3QsIHNpemVfdCBkaWN0U2l6ZSk7DQorDQorLyotKioqKioqKioqKioqKioqKioqKioqKioq
KioNCisgKiBGYXN0IGRpY3Rpb25hcnkgQVBJDQorICoqKioqKioqKioqKioqKioqKioqKioqKioq
Ki8NCisNCisvKioNCisgKiBaU1REX0NEaWN0V29ya3NwYWNlQm91bmQoKSAtIG1lbW9yeSBuZWVk
ZWQgdG8gaW5pdGlhbGl6ZSBhIFpTVERfQ0RpY3QNCisgKiBAY1BhcmFtczogVGhlIGNvbXByZXNz
aW9uIHBhcmFtZXRlcnMgdG8gYmUgdXNlZCBmb3IgY29tcHJlc3Npb24uDQorICoNCisgKiBSZXR1
cm46ICAgQSBsb3dlciBib3VuZCBvbiB0aGUgc2l6ZSBvZiB0aGUgd29ya3NwYWNlIHRoYXQgaXMg
cGFzc2VkIHRvDQorICogICAgICAgICAgIFpTVERfaW5pdENEaWN0KCkuDQorICovDQorc2l6ZV90
IFpTVERfQ0RpY3RXb3Jrc3BhY2VCb3VuZChaU1REX2NvbXByZXNzaW9uUGFyYW1ldGVycyBjUGFy
YW1zKTsNCisNCisvKioNCisgKiBzdHJ1Y3QgWlNURF9DRGljdCAtIGEgZGlnZXN0ZWQgZGljdGlv
bmFyeSB0byBiZSB1c2VkIGZvciBjb21wcmVzc2lvbg0KKyAqLw0KK3R5cGVkZWYgc3RydWN0IFpT
VERfQ0RpY3RfcyBaU1REX0NEaWN0Ow0KKw0KKy8qKg0KKyAqIFpTVERfaW5pdENEaWN0KCkgLSBp
bml0aWFsaXplIGEgZGlnZXN0ZWQgZGljdGlvbmFyeSBmb3IgY29tcHJlc3Npb24NCisgKiBAZGlj
dEJ1ZmZlcjogICAgVGhlIGRpY3Rpb25hcnkgdG8gZGlnZXN0LiBUaGUgYnVmZmVyIGlzIHJlZmVy
ZW5jZWQgYnkgdGhlDQorICogICAgICAgICAgICAgICAgIFpTVERfQ0RpY3Qgc28gaXQgbXVzdCBv
dXRsaXZlIHRoZSByZXR1cm5lZCBaU1REX0NEaWN0Lg0KKyAqIEBkaWN0U2l6ZTogICAgICBUaGUg
c2l6ZSBvZiB0aGUgZGljdGlvbmFyeS4NCisgKiBAcGFyYW1zOiAgICAgICAgVGhlIHBhcmFtZXRl
cnMgdG8gdXNlIGZvciBjb21wcmVzc2lvbi4gU2VlIFpTVERfZ2V0UGFyYW1zKCkuDQorICogQHdv
cmtzcGFjZTogICAgIFRoZSB3b3Jrc3BhY2UuIEl0IG11c3Qgb3V0bGl2ZSB0aGUgcmV0dXJuZWQg
WlNURF9DRGljdC4NCisgKiBAd29ya3NwYWNlU2l6ZTogVGhlIHdvcmtzcGFjZSBzaXplLiBNdXN0
IGJlIGF0IGxlYXN0DQorICogICAgICAgICAgICAgICAgIFpTVERfQ0RpY3RXb3Jrc3BhY2VCb3Vu
ZChwYXJhbXMuY1BhcmFtcykuDQorICoNCisgKiBXaGVuIGNvbXByZXNzaW5nIG11bHRpcGxlIG1l
c3NhZ2VzIC8gYmxvY2tzIHdpdGggdGhlIHNhbWUgZGljdGlvbmFyeSBpdCBpcw0KKyAqIHJlY29t
bWVuZGVkIHRvIGxvYWQgaXQganVzdCBvbmNlLiBUaGUgWlNURF9DRGljdCBtZXJlbHkgcmVmZXJl
bmNlcyB0aGUNCisgKiBkaWN0QnVmZmVyLCBzbyBpdCBtdXN0IG91dGxpdmUgdGhlIHJldHVybmVk
IFpTVERfQ0RpY3QuDQorICoNCisgKiBSZXR1cm46ICAgICAgICAgVGhlIGRpZ2VzdGVkIGRpY3Rp
b25hcnkgZW1wbGFjZWQgaW50byB3b3Jrc3BhY2UuDQorICovDQorWlNURF9DRGljdCAqWlNURF9p
bml0Q0RpY3QoY29uc3Qgdm9pZCAqZGljdEJ1ZmZlciwgc2l6ZV90IGRpY3RTaXplLA0KKwlaU1RE
X3BhcmFtZXRlcnMgcGFyYW1zLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXpl
KTsNCisNCisvKioNCisgKiBaU1REX2NvbXByZXNzX3VzaW5nQ0RpY3QoKSAtIGNvbXByZXNzIHNy
YyBpbnRvIGRzdCB1c2luZyBhIFpTVERfQ0RpY3QNCisgKiBAY3R4OiAgICAgICAgIFRoZSBjb250
ZXh0LiBNdXN0IGhhdmUgYmVlbiBpbml0aWFsaXplZCB3aXRoIGEgd29ya3NwYWNlIGF0DQorICog
ICAgICAgICAgICAgICBsZWFzdCBhcyBsYXJnZSBhcyBaU1REX0NDdHhXb3Jrc3BhY2VCb3VuZChj
UGFyYW1zKSB3aGVyZQ0KKyAqICAgICAgICAgICAgICAgY1BhcmFtcyBhcmUgdGhlIGNvbXByZXNz
aW9uIHBhcmFtZXRlcnMgdXNlZCB0byBpbml0aWFsaXplIHRoZQ0KKyAqICAgICAgICAgICAgICAg
Y2RpY3QuDQorICogQGRzdDogICAgICAgICBUaGUgYnVmZmVyIHRvIGNvbXByZXNzIHNyYyBpbnRv
Lg0KKyAqIEBkc3RDYXBhY2l0eTogVGhlIHNpemUgb2YgdGhlIGRlc3RpbmF0aW9uIGJ1ZmZlci4g
TWF5IGJlIGFueSBzaXplLCBidXQNCisgKiAgICAgICAgICAgICAgIFpTVERfY29tcHJlc3NCb3Vu
ZChzcmNTaXplKSBpcyBndWFyYW50ZWVkIHRvIGJlIGxhcmdlIGVub3VnaC4NCisgKiBAc3JjOiAg
ICAgICAgIFRoZSBkYXRhIHRvIGNvbXByZXNzLg0KKyAqIEBzcmNTaXplOiAgICAgVGhlIHNpemUg
b2YgdGhlIGRhdGEgdG8gY29tcHJlc3MuDQorICogQGNkaWN0OiAgICAgICBUaGUgZGlnZXN0ZWQg
ZGljdGlvbmFyeSB0byB1c2UgZm9yIGNvbXByZXNzaW9uLg0KKyAqIEBwYXJhbXM6ICAgICAgVGhl
IHBhcmFtZXRlcnMgdG8gdXNlIGZvciBjb21wcmVzc2lvbi4gU2VlIFpTVERfZ2V0UGFyYW1zKCku
DQorICoNCisgKiBDb21wcmVzc2lvbiB1c2luZyBhIGRpZ2VzdGVkIGRpY3Rpb25hcnkuIFRoZSBz
YW1lIGRpY3Rpb25hcnkgbXVzdCBiZSB1c2VkDQorICogZHVyaW5nIGRlY29tcHJlc3Npb24uDQor
ICoNCisgKiBSZXR1cm46ICAgICAgIFRoZSBjb21wcmVzc2VkIHNpemUgb3IgYW4gZXJyb3IsIHdo
aWNoIGNhbiBiZSBjaGVja2VkIHVzaW5nDQorICogICAgICAgICAgICAgICBaU1REX2lzRXJyb3Io
KS4NCisgKi8NCitzaXplX3QgWlNURF9jb21wcmVzc191c2luZ0NEaWN0KFpTVERfQ0N0eCAqY2N0
eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksDQorCWNvbnN0IHZvaWQgKnNyYywgc2l6
ZV90IHNyY1NpemUsIGNvbnN0IFpTVERfQ0RpY3QgKmNkaWN0KTsNCisNCisNCisvKioNCisgKiBa
U1REX0REaWN0V29ya3NwYWNlQm91bmQoKSAtIG1lbW9yeSBuZWVkZWQgdG8gaW5pdGlhbGl6ZSBh
IFpTVERfRERpY3QNCisgKg0KKyAqIFJldHVybjogIEEgbG93ZXIgYm91bmQgb24gdGhlIHNpemUg
b2YgdGhlIHdvcmtzcGFjZSB0aGF0IGlzIHBhc3NlZCB0bw0KKyAqICAgICAgICAgIFpTVERfaW5p
dEREaWN0KCkuDQorICovDQorc2l6ZV90IFpTVERfRERpY3RXb3Jrc3BhY2VCb3VuZCh2b2lkKTsN
CisNCisvKioNCisgKiBzdHJ1Y3QgWlNURF9ERGljdCAtIGEgZGlnZXN0ZWQgZGljdGlvbmFyeSB0
byBiZSB1c2VkIGZvciBkZWNvbXByZXNzaW9uDQorICovDQordHlwZWRlZiBzdHJ1Y3QgWlNURF9E
RGljdF9zIFpTVERfRERpY3Q7DQorDQorLyoqDQorICogWlNURF9pbml0RERpY3QoKSAtIGluaXRp
YWxpemUgYSBkaWdlc3RlZCBkaWN0aW9uYXJ5IGZvciBkZWNvbXByZXNzaW9uDQorICogQGRpY3RC
dWZmZXI6ICAgIFRoZSBkaWN0aW9uYXJ5IHRvIGRpZ2VzdC4gVGhlIGJ1ZmZlciBpcyByZWZlcmVu
Y2VkIGJ5IHRoZQ0KKyAqICAgICAgICAgICAgICAgICBaU1REX0REaWN0IHNvIGl0IG11c3Qgb3V0
bGl2ZSB0aGUgcmV0dXJuZWQgWlNURF9ERGljdC4NCisgKiBAZGljdFNpemU6ICAgICAgVGhlIHNp
emUgb2YgdGhlIGRpY3Rpb25hcnkuDQorICogQHdvcmtzcGFjZTogICAgIFRoZSB3b3Jrc3BhY2Uu
IEl0IG11c3Qgb3V0bGl2ZSB0aGUgcmV0dXJuZWQgWlNURF9ERGljdC4NCisgKiBAd29ya3NwYWNl
U2l6ZTogVGhlIHdvcmtzcGFjZSBzaXplLiBNdXN0IGJlIGF0IGxlYXN0DQorICogICAgICAgICAg
ICAgICAgIFpTVERfRERpY3RXb3Jrc3BhY2VCb3VuZCgpLg0KKyAqDQorICogV2hlbiBkZWNvbXBy
ZXNzaW5nIG11bHRpcGxlIG1lc3NhZ2VzIC8gYmxvY2tzIHdpdGggdGhlIHNhbWUgZGljdGlvbmFy
eSBpdCBpcw0KKyAqIHJlY29tbWVuZGVkIHRvIGxvYWQgaXQganVzdCBvbmNlLiBUaGUgWlNURF9E
RGljdCBtZXJlbHkgcmVmZXJlbmNlcyB0aGUNCisgKiBkaWN0QnVmZmVyLCBzbyBpdCBtdXN0IG91
dGxpdmUgdGhlIHJldHVybmVkIFpTVERfRERpY3QuDQorICoNCisgKiBSZXR1cm46ICAgICAgICAg
VGhlIGRpZ2VzdGVkIGRpY3Rpb25hcnkgZW1wbGFjZWQgaW50byB3b3Jrc3BhY2UuDQorICovDQor
WlNURF9ERGljdCAqWlNURF9pbml0RERpY3QoY29uc3Qgdm9pZCAqZGljdEJ1ZmZlciwgc2l6ZV90
IGRpY3RTaXplLA0KKwl2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKTsNCisN
CisvKioNCisgKiBaU1REX2RlY29tcHJlc3NfdXNpbmdERGljdCgpIC0gZGVjb21wcmVzcyBzcmMg
aW50byBkc3QgdXNpbmcgYSBaU1REX0REaWN0DQorICogQGN0eDogICAgICAgICBUaGUgZGVjb21w
cmVzc2lvbiBjb250ZXh0Lg0KKyAqIEBkc3Q6ICAgICAgICAgVGhlIGJ1ZmZlciB0byBkZWNvbXBy
ZXNzIHNyYyBpbnRvLg0KKyAqIEBkc3RDYXBhY2l0eTogVGhlIHNpemUgb2YgdGhlIGRlc3RpbmF0
aW9uIGJ1ZmZlci4gTXVzdCBiZSBhdCBsZWFzdCBhcyBsYXJnZQ0KKyAqICAgICAgICAgICAgICAg
YXMgdGhlIGRlY29tcHJlc3NlZCBzaXplLiBJZiB0aGUgY2FsbGVyIGNhbm5vdCB1cHBlciBib3Vu
ZCB0aGUNCisgKiAgICAgICAgICAgICAgIGRlY29tcHJlc3NlZCBzaXplLCB0aGVuIGl0J3MgYmV0
dGVyIHRvIHVzZSB0aGUgc3RyZWFtaW5nIEFQSS4NCisgKiBAc3JjOiAgICAgICAgIFRoZSB6c3Rk
IGNvbXByZXNzZWQgZGF0YSB0byBkZWNvbXByZXNzLiBNdWx0aXBsZSBjb25jYXRlbmF0ZWQNCisg
KiAgICAgICAgICAgICAgIGZyYW1lcyBhbmQgc2tpcHBhYmxlIGZyYW1lcyBhcmUgYWxsb3dlZC4N
CisgKiBAc3JjU2l6ZTogICAgIFRoZSBleGFjdCBzaXplIG9mIHRoZSBkYXRhIHRvIGRlY29tcHJl
c3MuDQorICogQGRkaWN0OiAgICAgICBUaGUgZGlnZXN0ZWQgZGljdGlvbmFyeSB0byB1c2UgZm9y
IGRlY29tcHJlc3Npb24uIFRoZSBzYW1lDQorICogICAgICAgICAgICAgICBkaWN0aW9uYXJ5IG11
c3QndmUgYmVlbiB1c2VkIHRvIGNvbXByZXNzIHRoZSBkYXRhLg0KKyAqDQorICogUmV0dXJuOiAg
ICAgICBUaGUgZGVjb21wcmVzc2VkIHNpemUgb3IgYW4gZXJyb3IsIHdoaWNoIGNhbiBiZSBjaGVj
a2VkIHVzaW5nDQorICogICAgICAgICAgICAgICBaU1REX2lzRXJyb3IoKS4NCisgKi8NCitzaXpl
X3QgWlNURF9kZWNvbXByZXNzX3VzaW5nRERpY3QoWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3Qs
DQorCXNpemVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwN
CisJY29uc3QgWlNURF9ERGljdCAqZGRpY3QpOw0KKw0KKw0KKy8qLSoqKioqKioqKioqKioqKioq
KioqKioqKioqDQorICogU3RyZWFtaW5nDQorICoqKioqKioqKioqKioqKioqKioqKioqKioqKi8N
CisNCisvKioNCisgKiBzdHJ1Y3QgWlNURF9pbkJ1ZmZlciAtIGlucHV0IGJ1ZmZlciBmb3Igc3Ry
ZWFtaW5nDQorICogQHNyYzogIFN0YXJ0IG9mIHRoZSBpbnB1dCBidWZmZXIuDQorICogQHNpemU6
IFNpemUgb2YgdGhlIGlucHV0IGJ1ZmZlci4NCisgKiBAcG9zOiAgUG9zaXRpb24gd2hlcmUgcmVh
ZGluZyBzdG9wcGVkLiBXaWxsIGJlIHVwZGF0ZWQuDQorICogICAgICAgIE5lY2Vzc2FyaWx5IDAg
PD0gcG9zIDw9IHNpemUuDQorICovDQordHlwZWRlZiBzdHJ1Y3QgWlNURF9pbkJ1ZmZlcl9zIHsN
CisJY29uc3Qgdm9pZCAqc3JjOw0KKwlzaXplX3Qgc2l6ZTsNCisJc2l6ZV90IHBvczsNCit9IFpT
VERfaW5CdWZmZXI7DQorDQorLyoqDQorICogc3RydWN0IFpTVERfb3V0QnVmZmVyIC0gb3V0cHV0
IGJ1ZmZlciBmb3Igc3RyZWFtaW5nDQorICogQGRzdDogIFN0YXJ0IG9mIHRoZSBvdXRwdXQgYnVm
ZmVyLg0KKyAqIEBzaXplOiBTaXplIG9mIHRoZSBvdXRwdXQgYnVmZmVyLg0KKyAqIEBwb3M6ICBQ
b3NpdGlvbiB3aGVyZSB3cml0aW5nIHN0b3BwZWQuIFdpbGwgYmUgdXBkYXRlZC4NCisgKiAgICAg
ICAgTmVjZXNzYXJpbHkgMCA8PSBwb3MgPD0gc2l6ZS4NCisgKi8NCit0eXBlZGVmIHN0cnVjdCBa
U1REX291dEJ1ZmZlcl9zIHsNCisJdm9pZCAqZHN0Ow0KKwlzaXplX3Qgc2l6ZTsNCisJc2l6ZV90
IHBvczsNCit9IFpTVERfb3V0QnVmZmVyOw0KKw0KKw0KKw0KKy8qLSoqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqDQorICogU3RyZWFtaW5nIGNvbXByZXNzaW9uIC0gSG93VG8NCisgKg0KKyAqIEEgWlNURF9D
U3RyZWFtIG9iamVjdCBpcyByZXF1aXJlZCB0byB0cmFjayBzdHJlYW1pbmcgb3BlcmF0aW9uLg0K
KyAqIFVzZSBaU1REX2luaXRDU3RyZWFtKCkgdG8gaW5pdGlhbGl6ZSBhIFpTVERfQ1N0cmVhbSBv
YmplY3QuDQorICogWlNURF9DU3RyZWFtIG9iamVjdHMgY2FuIGJlIHJldXNlZCBtdWx0aXBsZSB0
aW1lcyBvbiBjb25zZWN1dGl2ZSBjb21wcmVzc2lvbg0KKyAqIG9wZXJhdGlvbnMuIEl0IGlzIHJl
Y29tbWVuZGVkIHRvIHJlLXVzZSBaU1REX0NTdHJlYW0gaW4gc2l0dWF0aW9ucyB3aGVyZSBtYW55
DQorICogc3RyZWFtaW5nIG9wZXJhdGlvbnMgd2lsbCBiZSBhY2hpZXZlZCBjb25zZWN1dGl2ZWx5
LiBVc2Ugb25lIHNlcGFyYXRlDQorICogWlNURF9DU3RyZWFtIHBlciB0aHJlYWQgZm9yIHBhcmFs
bGVsIGV4ZWN1dGlvbi4NCisgKg0KKyAqIFVzZSBaU1REX2NvbXByZXNzU3RyZWFtKCkgcmVwZXRp
dGl2ZWx5IHRvIGNvbnN1bWUgaW5wdXQgc3RyZWFtLg0KKyAqIFRoZSBmdW5jdGlvbiB3aWxsIGF1
dG9tYXRpY2FsbHkgdXBkYXRlIGJvdGggYHBvc2AgZmllbGRzLg0KKyAqIE5vdGUgdGhhdCBpdCBt
YXkgbm90IGNvbnN1bWUgdGhlIGVudGlyZSBpbnB1dCwgaW4gd2hpY2ggY2FzZSBgcG9zIDwgc2l6
ZWAsDQorICogYW5kIGl0J3MgdXAgdG8gdGhlIGNhbGxlciB0byBwcmVzZW50IGFnYWluIHJlbWFp
bmluZyBkYXRhLg0KKyAqIEl0IHJldHVybnMgYSBoaW50IGZvciB0aGUgcHJlZmVycmVkIG51bWJl
ciBvZiBieXRlcyB0byB1c2UgYXMgYW4gaW5wdXQgZm9yDQorICogdGhlIG5leHQgZnVuY3Rpb24g
Y2FsbC4NCisgKg0KKyAqIEF0IGFueSBtb21lbnQsIGl0J3MgcG9zc2libGUgdG8gZmx1c2ggd2hh
dGV2ZXIgZGF0YSByZW1haW5zIHdpdGhpbiBpbnRlcm5hbA0KKyAqIGJ1ZmZlciwgdXNpbmcgWlNU
RF9mbHVzaFN0cmVhbSgpLiBgb3V0cHV0LT5wb3NgIHdpbGwgYmUgdXBkYXRlZC4gVGhlcmUgbWln
aHQNCisgKiBzdGlsbCBiZSBzb21lIGNvbnRlbnQgbGVmdCB3aXRoaW4gdGhlIGludGVybmFsIGJ1
ZmZlciBpZiBgb3V0cHV0LT5zaXplYCBpcw0KKyAqIHRvbyBzbWFsbC4gSXQgcmV0dXJucyB0aGUg
bnVtYmVyIG9mIGJ5dGVzIGxlZnQgaW4gdGhlIGludGVybmFsIGJ1ZmZlciBhbmQNCisgKiBtdXN0
IGJlIGNhbGxlZCB1bnRpbCBpdCByZXR1cm5zIDAuDQorICoNCisgKiBaU1REX2VuZFN0cmVhbSgp
IGluc3RydWN0cyB0byBmaW5pc2ggYSBmcmFtZS4gSXQgd2lsbCBwZXJmb3JtIGEgZmx1c2ggYW5k
DQorICogd3JpdGUgZnJhbWUgZXBpbG9ndWUuIFRoZSBlcGlsb2d1ZSBpcyByZXF1aXJlZCBmb3Ig
ZGVjb2RlcnMgdG8gY29uc2lkZXIgYQ0KKyAqIGZyYW1lIGNvbXBsZXRlZC4gU2ltaWxhciB0byBa
U1REX2ZsdXNoU3RyZWFtKCksIGl0IG1heSBub3QgYmUgYWJsZSB0byBmbHVzaA0KKyAqIHRoZSBm
dWxsIGNvbnRlbnQgaWYgYG91dHB1dC0+c2l6ZWAgaXMgdG9vIHNtYWxsLiBJbiB3aGljaCBjYXNl
LCBjYWxsIGFnYWluDQorICogWlNURF9lbmRTdHJlYW0oKSB0byBjb21wbGV0ZSB0aGUgZmx1c2gu
IEl0IHJldHVybnMgdGhlIG51bWJlciBvZiBieXRlcyBsZWZ0DQorICogaW4gdGhlIGludGVybmFs
IGJ1ZmZlciBhbmQgbXVzdCBiZSBjYWxsZWQgdW50aWwgaXQgcmV0dXJucyAwLg0KKyAqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKiovDQorDQorLyoqDQorICogWlNURF9DU3RyZWFtV29ya3NwYWNlQm91bmQo
KSAtIG1lbW9yeSBuZWVkZWQgdG8gaW5pdGlhbGl6ZSBhIFpTVERfQ1N0cmVhbQ0KKyAqIEBjUGFy
YW1zOiBUaGUgY29tcHJlc3Npb24gcGFyYW1ldGVycyB0byBiZSB1c2VkIGZvciBjb21wcmVzc2lv
bi4NCisgKg0KKyAqIFJldHVybjogICBBIGxvd2VyIGJvdW5kIG9uIHRoZSBzaXplIG9mIHRoZSB3
b3Jrc3BhY2UgdGhhdCBpcyBwYXNzZWQgdG8NCisgKiAgICAgICAgICAgWlNURF9pbml0Q1N0cmVh
bSgpIGFuZCBaU1REX2luaXRDU3RyZWFtX3VzaW5nQ0RpY3QoKS4NCisgKi8NCitzaXplX3QgWlNU
RF9DU3RyZWFtV29ya3NwYWNlQm91bmQoWlNURF9jb21wcmVzc2lvblBhcmFtZXRlcnMgY1BhcmFt
cyk7DQorDQorLyoqDQorICogc3RydWN0IFpTVERfQ1N0cmVhbSAtIHRoZSB6c3RkIHN0cmVhbWlu
ZyBjb21wcmVzc2lvbiBjb250ZXh0DQorICovDQordHlwZWRlZiBzdHJ1Y3QgWlNURF9DU3RyZWFt
X3MgWlNURF9DU3RyZWFtOw0KKw0KKy8qPT09PT0gWlNURF9DU3RyZWFtIG1hbmFnZW1lbnQgZnVu
Y3Rpb25zID09PT09Ki8NCisvKioNCisgKiBaU1REX2luaXRDU3RyZWFtKCkgLSBpbml0aWFsaXpl
IGEgenN0ZCBzdHJlYW1pbmcgY29tcHJlc3Npb24gY29udGV4dA0KKyAqIEBwYXJhbXM6ICAgICAg
ICAgVGhlIHpzdGQgY29tcHJlc3Npb24gcGFyYW1ldGVycy4NCisgKiBAcGxlZGdlZFNyY1NpemU6
IElmIHBhcmFtcy5mUGFyYW1zLmNvbnRlbnRTaXplRmxhZyA9PSAxIHRoZW4gdGhlIGNhbGxlciBt
dXN0DQorICogICAgICAgICAgICAgICAgICBwYXNzIHRoZSBzb3VyY2Ugc2l6ZSAoemVybyBtZWFu
cyBlbXB0eSBzb3VyY2UpLiBPdGhlcndpc2UsDQorICogICAgICAgICAgICAgICAgICB0aGUgY2Fs
bGVyIG1heSBvcHRpb25hbGx5IHBhc3MgdGhlIHNvdXJjZSBzaXplLCBvciB6ZXJvIGlmDQorICog
ICAgICAgICAgICAgICAgICB1bmtub3duLg0KKyAqIEB3b3Jrc3BhY2U6ICAgICAgVGhlIHdvcmtz
cGFjZSB0byBlbXBsYWNlIHRoZSBjb250ZXh0IGludG8uIEl0IG11c3Qgb3V0bGl2ZQ0KKyAqICAg
ICAgICAgICAgICAgICAgdGhlIHJldHVybmVkIGNvbnRleHQuDQorICogQHdvcmtzcGFjZVNpemU6
ICBUaGUgc2l6ZSBvZiB3b3Jrc3BhY2UuDQorICogICAgICAgICAgICAgICAgICBVc2UgWlNURF9D
U3RyZWFtV29ya3NwYWNlQm91bmQocGFyYW1zLmNQYXJhbXMpIHRvIGRldGVybWluZQ0KKyAqICAg
ICAgICAgICAgICAgICAgaG93IGxhcmdlIHRoZSB3b3Jrc3BhY2UgbXVzdCBiZS4NCisgKg0KKyAq
IFJldHVybjogICAgICAgICAgVGhlIHpzdGQgc3RyZWFtaW5nIGNvbXByZXNzaW9uIGNvbnRleHQu
DQorICovDQorWlNURF9DU3RyZWFtICpaU1REX2luaXRDU3RyZWFtKFpTVERfcGFyYW1ldGVycyBw
YXJhbXMsDQorCXVuc2lnbmVkIGxvbmcgbG9uZyBwbGVkZ2VkU3JjU2l6ZSwgdm9pZCAqd29ya3Nw
YWNlLA0KKwlzaXplX3Qgd29ya3NwYWNlU2l6ZSk7DQorDQorLyoqDQorICogWlNURF9pbml0Q1N0
cmVhbV91c2luZ0NEaWN0KCkgLSBpbml0aWFsaXplIGEgc3RyZWFtaW5nIGNvbXByZXNzaW9uIGNv
bnRleHQNCisgKiBAY2RpY3Q6ICAgICAgICAgIFRoZSBkaWdlc3RlZCBkaWN0aW9uYXJ5IHRvIHVz
ZSBmb3IgY29tcHJlc3Npb24uDQorICogQHBsZWRnZWRTcmNTaXplOiBPcHRpb25hbGx5IHRoZSBz
b3VyY2Ugc2l6ZSwgb3IgemVybyBpZiB1bmtub3duLg0KKyAqIEB3b3Jrc3BhY2U6ICAgICAgVGhl
IHdvcmtzcGFjZSB0byBlbXBsYWNlIHRoZSBjb250ZXh0IGludG8uIEl0IG11c3Qgb3V0bGl2ZQ0K
KyAqICAgICAgICAgICAgICAgICAgdGhlIHJldHVybmVkIGNvbnRleHQuDQorICogQHdvcmtzcGFj
ZVNpemU6ICBUaGUgc2l6ZSBvZiB3b3Jrc3BhY2UuIENhbGwgWlNURF9DU3RyZWFtV29ya3NwYWNl
Qm91bmQoKQ0KKyAqICAgICAgICAgICAgICAgICAgd2l0aCB0aGUgY1BhcmFtcyB1c2VkIHRvIGlu
aXRpYWxpemUgdGhlIGNkaWN0IHRvIGRldGVybWluZQ0KKyAqICAgICAgICAgICAgICAgICAgaG93
IGxhcmdlIHRoZSB3b3Jrc3BhY2UgbXVzdCBiZS4NCisgKg0KKyAqIFJldHVybjogICAgICAgICAg
VGhlIHpzdGQgc3RyZWFtaW5nIGNvbXByZXNzaW9uIGNvbnRleHQuDQorICovDQorWlNURF9DU3Ry
ZWFtICpaU1REX2luaXRDU3RyZWFtX3VzaW5nQ0RpY3QoY29uc3QgWlNURF9DRGljdCAqY2RpY3Qs
DQorCXVuc2lnbmVkIGxvbmcgbG9uZyBwbGVkZ2VkU3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLA0K
KwlzaXplX3Qgd29ya3NwYWNlU2l6ZSk7DQorDQorLyo9PT09PSBTdHJlYW1pbmcgY29tcHJlc3Np
b24gZnVuY3Rpb25zID09PT09Ki8NCisvKioNCisgKiBaU1REX3Jlc2V0Q1N0cmVhbSgpIC0gcmVz
ZXQgdGhlIGNvbnRleHQgdXNpbmcgcGFyYW1ldGVycyBmcm9tIGNyZWF0aW9uDQorICogQHpjczog
ICAgICAgICAgICBUaGUgenN0ZCBzdHJlYW1pbmcgY29tcHJlc3Npb24gY29udGV4dCB0byByZXNl
dC4NCisgKiBAcGxlZGdlZFNyY1NpemU6IE9wdGlvbmFsbHkgdGhlIHNvdXJjZSBzaXplLCBvciB6
ZXJvIGlmIHVua25vd24uDQorICoNCisgKiBSZXNldHMgdGhlIGNvbnRleHQgdXNpbmcgdGhlIHBh
cmFtZXRlcnMgZnJvbSBjcmVhdGlvbi4gU2tpcHMgZGljdGlvbmFyeQ0KKyAqIGxvYWRpbmcsIHNp
bmNlIGl0IGNhbiBiZSByZXVzZWQuIElmIGBwbGVkZ2VkU3JjU2l6ZWAgaXMgbm9uLXplcm8gdGhl
IGZyYW1lDQorICogY29udGVudCBzaXplIGlzIGFsd2F5cyB3cml0dGVuIGludG8gdGhlIGZyYW1l
IGhlYWRlci4NCisgKg0KKyAqIFJldHVybjogICAgICAgICAgWmVybyBvciBhbiBlcnJvciwgd2hp
Y2ggY2FuIGJlIGNoZWNrZWQgdXNpbmcgWlNURF9pc0Vycm9yKCkuDQorICovDQorc2l6ZV90IFpT
VERfcmVzZXRDU3RyZWFtKFpTVERfQ1N0cmVhbSAqemNzLCB1bnNpZ25lZCBsb25nIGxvbmcgcGxl
ZGdlZFNyY1NpemUpOw0KKy8qKg0KKyAqIFpTVERfY29tcHJlc3NTdHJlYW0oKSAtIHN0cmVhbWlu
ZyBjb21wcmVzcyBzb21lIG9mIGlucHV0IGludG8gb3V0cHV0DQorICogQHpjczogICAgVGhlIHpz
dGQgc3RyZWFtaW5nIGNvbXByZXNzaW9uIGNvbnRleHQuDQorICogQG91dHB1dDogRGVzdGluYXRp
b24gYnVmZmVyLiBgb3V0cHV0LT5wb3NgIGlzIHVwZGF0ZWQgdG8gaW5kaWNhdGUgaG93IG11Y2gN
CisgKiAgICAgICAgICBjb21wcmVzc2VkIGRhdGEgd2FzIHdyaXR0ZW4uDQorICogQGlucHV0OiAg
U291cmNlIGJ1ZmZlci4gYGlucHV0LT5wb3NgIGlzIHVwZGF0ZWQgdG8gaW5kaWNhdGUgaG93IG11
Y2ggZGF0YSB3YXMNCisgKiAgICAgICAgICByZWFkLiBOb3RlIHRoYXQgaXQgbWF5IG5vdCBjb25z
dW1lIHRoZSBlbnRpcmUgaW5wdXQsIGluIHdoaWNoIGNhc2UNCisgKiAgICAgICAgICBgaW5wdXQt
PnBvcyA8IGlucHV0LT5zaXplYCwgYW5kIGl0J3MgdXAgdG8gdGhlIGNhbGxlciB0byBwcmVzZW50
DQorICogICAgICAgICAgcmVtYWluaW5nIGRhdGEgYWdhaW4uDQorICoNCisgKiBUaGUgYGlucHV0
YCBhbmQgYG91dHB1dGAgYnVmZmVycyBtYXkgYmUgYW55IHNpemUuIEd1YXJhbnRlZWQgdG8gbWFr
ZSBzb21lDQorICogZm9yd2FyZCBwcm9ncmVzcyBpZiBgaW5wdXRgIGFuZCBgb3V0cHV0YCBhcmUg
bm90IGVtcHR5Lg0KKyAqDQorICogUmV0dXJuOiAgQSBoaW50IGZvciB0aGUgbnVtYmVyIG9mIGJ5
dGVzIHRvIHVzZSBhcyB0aGUgaW5wdXQgZm9yIHRoZSBuZXh0DQorICogICAgICAgICAgZnVuY3Rp
b24gY2FsbCBvciBhbiBlcnJvciwgd2hpY2ggY2FuIGJlIGNoZWNrZWQgdXNpbmcNCisgKiAgICAg
ICAgICBaU1REX2lzRXJyb3IoKS4NCisgKi8NCitzaXplX3QgWlNURF9jb21wcmVzc1N0cmVhbSha
U1REX0NTdHJlYW0gKnpjcywgWlNURF9vdXRCdWZmZXIgKm91dHB1dCwNCisJWlNURF9pbkJ1ZmZl
ciAqaW5wdXQpOw0KKy8qKg0KKyAqIFpTVERfZmx1c2hTdHJlYW0oKSAtIGZsdXNoIGludGVybmFs
IGJ1ZmZlcnMgaW50byBvdXRwdXQNCisgKiBAemNzOiAgICBUaGUgenN0ZCBzdHJlYW1pbmcgY29t
cHJlc3Npb24gY29udGV4dC4NCisgKiBAb3V0cHV0OiBEZXN0aW5hdGlvbiBidWZmZXIuIGBvdXRw
dXQtPnBvc2AgaXMgdXBkYXRlZCB0byBpbmRpY2F0ZSBob3cgbXVjaA0KKyAqICAgICAgICAgIGNv
bXByZXNzZWQgZGF0YSB3YXMgd3JpdHRlbi4NCisgKg0KKyAqIFpTVERfZmx1c2hTdHJlYW0oKSBt
dXN0IGJlIGNhbGxlZCB1bnRpbCBpdCByZXR1cm5zIDAsIG1lYW5pbmcgYWxsIHRoZSBkYXRhDQor
ICogaGFzIGJlZW4gZmx1c2hlZC4gU2luY2UgWlNURF9mbHVzaFN0cmVhbSgpIGNhdXNlcyBhIGJs
b2NrIHRvIGJlIGVuZGVkLA0KKyAqIGNhbGxpbmcgaXQgdG9vIG9mdGVuIHdpbGwgZGVncmFkZSB0
aGUgY29tcHJlc3Npb24gcmF0aW8uDQorICoNCisgKiBSZXR1cm46ICBUaGUgbnVtYmVyIG9mIGJ5
dGVzIHN0aWxsIHByZXNlbnQgd2l0aGluIGludGVybmFsIGJ1ZmZlcnMgb3IgYW4NCisgKiAgICAg
ICAgICBlcnJvciwgd2hpY2ggY2FuIGJlIGNoZWNrZWQgdXNpbmcgWlNURF9pc0Vycm9yKCkuDQor
ICovDQorc2l6ZV90IFpTVERfZmx1c2hTdHJlYW0oWlNURF9DU3RyZWFtICp6Y3MsIFpTVERfb3V0
QnVmZmVyICpvdXRwdXQpOw0KKy8qKg0KKyAqIFpTVERfZW5kU3RyZWFtKCkgLSBmbHVzaCBpbnRl
cm5hbCBidWZmZXJzIGludG8gb3V0cHV0IGFuZCBlbmQgdGhlIGZyYW1lDQorICogQHpjczogICAg
VGhlIHpzdGQgc3RyZWFtaW5nIGNvbXByZXNzaW9uIGNvbnRleHQuDQorICogQG91dHB1dDogRGVz
dGluYXRpb24gYnVmZmVyLiBgb3V0cHV0LT5wb3NgIGlzIHVwZGF0ZWQgdG8gaW5kaWNhdGUgaG93
IG11Y2gNCisgKiAgICAgICAgICBjb21wcmVzc2VkIGRhdGEgd2FzIHdyaXR0ZW4uDQorICoNCisg
KiBaU1REX2VuZFN0cmVhbSgpIG11c3QgYmUgY2FsbGVkIHVudGlsIGl0IHJldHVybnMgMCwgbWVh
bmluZyBhbGwgdGhlIGRhdGEgaGFzDQorICogYmVlbiBmbHVzaGVkIGFuZCB0aGUgZnJhbWUgZXBp
bG9ndWUgaGFzIGJlZW4gd3JpdHRlbi4NCisgKg0KKyAqIFJldHVybjogIFRoZSBudW1iZXIgb2Yg
Ynl0ZXMgc3RpbGwgcHJlc2VudCB3aXRoaW4gaW50ZXJuYWwgYnVmZmVycyBvciBhbg0KKyAqICAg
ICAgICAgIGVycm9yLCB3aGljaCBjYW4gYmUgY2hlY2tlZCB1c2luZyBaU1REX2lzRXJyb3IoKS4N
CisgKi8NCitzaXplX3QgWlNURF9lbmRTdHJlYW0oWlNURF9DU3RyZWFtICp6Y3MsIFpTVERfb3V0
QnVmZmVyICpvdXRwdXQpOw0KKw0KKy8qKg0KKyAqIFpTVERfQ1N0cmVhbUluU2l6ZSgpIC0gcmVj
b21tZW5kZWQgc2l6ZSBmb3IgdGhlIGlucHV0IGJ1ZmZlcg0KKyAqDQorICogUmV0dXJuOiBUaGUg
cmVjb21tZW5kZWQgc2l6ZSBmb3IgdGhlIGlucHV0IGJ1ZmZlci4NCisgKi8NCitzaXplX3QgWlNU
RF9DU3RyZWFtSW5TaXplKHZvaWQpOw0KKy8qKg0KKyAqIFpTVERfQ1N0cmVhbU91dFNpemUoKSAt
IHJlY29tbWVuZGVkIHNpemUgZm9yIHRoZSBvdXRwdXQgYnVmZmVyDQorICoNCisgKiBXaGVuIHRo
ZSBvdXRwdXQgYnVmZmVyIGlzIGF0IGxlYXN0IHRoaXMgbGFyZ2UsIGl0IGlzIGd1YXJhbnRlZWQg
dG8gYmUgbGFyZ2UNCisgKiBlbm91Z2ggdG8gZmx1c2ggYXQgbGVhc3Qgb25lIGNvbXBsZXRlIGNv
bXByZXNzZWQgYmxvY2suDQorICoNCisgKiBSZXR1cm46IFRoZSByZWNvbW1lbmRlZCBzaXplIGZv
ciB0aGUgb3V0cHV0IGJ1ZmZlci4NCisgKi8NCitzaXplX3QgWlNURF9DU3RyZWFtT3V0U2l6ZSh2
b2lkKTsNCisNCisNCisNCisvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKyAqIFN0cmVhbWluZyBk
ZWNvbXByZXNzaW9uIC0gSG93VG8NCisgKg0KKyAqIEEgWlNURF9EU3RyZWFtIG9iamVjdCBpcyBy
ZXF1aXJlZCB0byB0cmFjayBzdHJlYW1pbmcgb3BlcmF0aW9ucy4NCisgKiBVc2UgWlNURF9pbml0
RFN0cmVhbSgpIHRvIGluaXRpYWxpemUgYSBaU1REX0RTdHJlYW0gb2JqZWN0Lg0KKyAqIFpTVERf
RFN0cmVhbSBvYmplY3RzIGNhbiBiZSByZS11c2VkIG11bHRpcGxlIHRpbWVzLg0KKyAqDQorICog
VXNlIFpTVERfZGVjb21wcmVzc1N0cmVhbSgpIHJlcGV0aXRpdmVseSB0byBjb25zdW1lIHlvdXIg
aW5wdXQuDQorICogVGhlIGZ1bmN0aW9uIHdpbGwgdXBkYXRlIGJvdGggYHBvc2AgZmllbGRzLg0K
KyAqIElmIGBpbnB1dC0+cG9zIDwgaW5wdXQtPnNpemVgLCBzb21lIGlucHV0IGhhcyBub3QgYmVl
biBjb25zdW1lZC4NCisgKiBJdCdzIHVwIHRvIHRoZSBjYWxsZXIgdG8gcHJlc2VudCBhZ2FpbiBy
ZW1haW5pbmcgZGF0YS4NCisgKiBJZiBgb3V0cHV0LT5wb3MgPCBvdXRwdXQtPnNpemVgLCBkZWNv
ZGVyIGhhcyBmbHVzaGVkIGV2ZXJ5dGhpbmcgaXQgY291bGQuDQorICogUmV0dXJucyAwIGlmZiBh
IGZyYW1lIGlzIGNvbXBsZXRlbHkgZGVjb2RlZCBhbmQgZnVsbHkgZmx1c2hlZC4NCisgKiBPdGhl
cndpc2UgaXQgcmV0dXJucyBhIHN1Z2dlc3RlZCBuZXh0IGlucHV0IHNpemUgdGhhdCB3aWxsIG5l
dmVyIGxvYWQgbW9yZQ0KKyAqIHRoYW4gdGhlIGN1cnJlbnQgZnJhbWUuDQorICoqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKi8NCisNCisvKioNCisgKiBaU1REX0RTdHJlYW1Xb3Jrc3BhY2VCb3VuZCgpIC0g
bWVtb3J5IG5lZWRlZCB0byBpbml0aWFsaXplIGEgWlNURF9EU3RyZWFtDQorICogQG1heFdpbmRv
d1NpemU6IFRoZSBtYXhpbXVtIHdpbmRvdyBzaXplIGFsbG93ZWQgZm9yIGNvbXByZXNzZWQgZnJh
bWVzLg0KKyAqDQorICogUmV0dXJuOiAgICAgICAgIEEgbG93ZXIgYm91bmQgb24gdGhlIHNpemUg
b2YgdGhlIHdvcmtzcGFjZSB0aGF0IGlzIHBhc3NlZCB0bw0KKyAqICAgICAgICAgICAgICAgICBa
U1REX2luaXREU3RyZWFtKCkgYW5kIFpTVERfaW5pdERTdHJlYW1fdXNpbmdERGljdCgpLg0KKyAq
Lw0KK3NpemVfdCBaU1REX0RTdHJlYW1Xb3Jrc3BhY2VCb3VuZChzaXplX3QgbWF4V2luZG93U2l6
ZSk7DQorDQorLyoqDQorICogc3RydWN0IFpTVERfRFN0cmVhbSAtIHRoZSB6c3RkIHN0cmVhbWlu
ZyBkZWNvbXByZXNzaW9uIGNvbnRleHQNCisgKi8NCit0eXBlZGVmIHN0cnVjdCBaU1REX0RTdHJl
YW1fcyBaU1REX0RTdHJlYW07DQorLyo9PT09PSBaU1REX0RTdHJlYW0gbWFuYWdlbWVudCBmdW5j
dGlvbnMgPT09PT0qLw0KKy8qKg0KKyAqIFpTVERfaW5pdERTdHJlYW0oKSAtIGluaXRpYWxpemUg
YSB6c3RkIHN0cmVhbWluZyBkZWNvbXByZXNzaW9uIGNvbnRleHQNCisgKiBAbWF4V2luZG93U2l6
ZTogVGhlIG1heGltdW0gd2luZG93IHNpemUgYWxsb3dlZCBmb3IgY29tcHJlc3NlZCBmcmFtZXMu
DQorICogQHdvcmtzcGFjZTogICAgIFRoZSB3b3Jrc3BhY2UgdG8gZW1wbGFjZSB0aGUgY29udGV4
dCBpbnRvLiBJdCBtdXN0IG91dGxpdmUNCisgKiAgICAgICAgICAgICAgICAgdGhlIHJldHVybmVk
IGNvbnRleHQuDQorICogQHdvcmtzcGFjZVNpemU6IFRoZSBzaXplIG9mIHdvcmtzcGFjZS4NCisg
KiAgICAgICAgICAgICAgICAgVXNlIFpTVERfRFN0cmVhbVdvcmtzcGFjZUJvdW5kKG1heFdpbmRv
d1NpemUpIHRvIGRldGVybWluZQ0KKyAqICAgICAgICAgICAgICAgICBob3cgbGFyZ2UgdGhlIHdv
cmtzcGFjZSBtdXN0IGJlLg0KKyAqDQorICogUmV0dXJuOiAgICAgICAgIFRoZSB6c3RkIHN0cmVh
bWluZyBkZWNvbXByZXNzaW9uIGNvbnRleHQuDQorICovDQorWlNURF9EU3RyZWFtICpaU1REX2lu
aXREU3RyZWFtKHNpemVfdCBtYXhXaW5kb3dTaXplLCB2b2lkICp3b3Jrc3BhY2UsDQorCXNpemVf
dCB3b3Jrc3BhY2VTaXplKTsNCisvKioNCisgKiBaU1REX2luaXREU3RyZWFtX3VzaW5nRERpY3Qo
KSAtIGluaXRpYWxpemUgc3RyZWFtaW5nIGRlY29tcHJlc3Npb24gY29udGV4dA0KKyAqIEBtYXhX
aW5kb3dTaXplOiBUaGUgbWF4aW11bSB3aW5kb3cgc2l6ZSBhbGxvd2VkIGZvciBjb21wcmVzc2Vk
IGZyYW1lcy4NCisgKiBAZGRpY3Q6ICAgICAgICAgVGhlIGRpZ2VzdGVkIGRpY3Rpb25hcnkgdG8g
dXNlIGZvciBkZWNvbXByZXNzaW9uLg0KKyAqIEB3b3Jrc3BhY2U6ICAgICBUaGUgd29ya3NwYWNl
IHRvIGVtcGxhY2UgdGhlIGNvbnRleHQgaW50by4gSXQgbXVzdCBvdXRsaXZlDQorICogICAgICAg
ICAgICAgICAgIHRoZSByZXR1cm5lZCBjb250ZXh0Lg0KKyAqIEB3b3Jrc3BhY2VTaXplOiBUaGUg
c2l6ZSBvZiB3b3Jrc3BhY2UuDQorICogICAgICAgICAgICAgICAgIFVzZSBaU1REX0RTdHJlYW1X
b3Jrc3BhY2VCb3VuZChtYXhXaW5kb3dTaXplKSB0byBkZXRlcm1pbmUNCisgKiAgICAgICAgICAg
ICAgICAgaG93IGxhcmdlIHRoZSB3b3Jrc3BhY2UgbXVzdCBiZS4NCisgKg0KKyAqIFJldHVybjog
ICAgICAgICBUaGUgenN0ZCBzdHJlYW1pbmcgZGVjb21wcmVzc2lvbiBjb250ZXh0Lg0KKyAqLw0K
K1pTVERfRFN0cmVhbSAqWlNURF9pbml0RFN0cmVhbV91c2luZ0REaWN0KHNpemVfdCBtYXhXaW5k
b3dTaXplLA0KKwljb25zdCBaU1REX0REaWN0ICpkZGljdCwgdm9pZCAqd29ya3NwYWNlLCBzaXpl
X3Qgd29ya3NwYWNlU2l6ZSk7DQorDQorLyo9PT09PSBTdHJlYW1pbmcgZGVjb21wcmVzc2lvbiBm
dW5jdGlvbnMgPT09PT0qLw0KKy8qKg0KKyAqIFpTVERfcmVzZXREU3RyZWFtKCkgLSByZXNldCB0
aGUgY29udGV4dCB1c2luZyBwYXJhbWV0ZXJzIGZyb20gY3JlYXRpb24NCisgKiBAemRzOiAgIFRo
ZSB6c3RkIHN0cmVhbWluZyBkZWNvbXByZXNzaW9uIGNvbnRleHQgdG8gcmVzZXQuDQorICoNCisg
KiBSZXNldHMgdGhlIGNvbnRleHQgdXNpbmcgdGhlIHBhcmFtZXRlcnMgZnJvbSBjcmVhdGlvbi4g
U2tpcHMgZGljdGlvbmFyeQ0KKyAqIGxvYWRpbmcsIHNpbmNlIGl0IGNhbiBiZSByZXVzZWQuDQor
ICoNCisgKiBSZXR1cm46IFplcm8gb3IgYW4gZXJyb3IsIHdoaWNoIGNhbiBiZSBjaGVja2VkIHVz
aW5nIFpTVERfaXNFcnJvcigpLg0KKyAqLw0KK3NpemVfdCBaU1REX3Jlc2V0RFN0cmVhbShaU1RE
X0RTdHJlYW0gKnpkcyk7DQorLyoqDQorICogWlNURF9kZWNvbXByZXNzU3RyZWFtKCkgLSBzdHJl
YW1pbmcgZGVjb21wcmVzcyBzb21lIG9mIGlucHV0IGludG8gb3V0cHV0DQorICogQHpkczogICAg
VGhlIHpzdGQgc3RyZWFtaW5nIGRlY29tcHJlc3Npb24gY29udGV4dC4NCisgKiBAb3V0cHV0OiBE
ZXN0aW5hdGlvbiBidWZmZXIuIGBvdXRwdXQucG9zYCBpcyB1cGRhdGVkIHRvIGluZGljYXRlIGhv
dyBtdWNoDQorICogICAgICAgICAgZGVjb21wcmVzc2VkIGRhdGEgd2FzIHdyaXR0ZW4uDQorICog
QGlucHV0OiAgU291cmNlIGJ1ZmZlci4gYGlucHV0LnBvc2AgaXMgdXBkYXRlZCB0byBpbmRpY2F0
ZSBob3cgbXVjaCBkYXRhIHdhcw0KKyAqICAgICAgICAgIHJlYWQuIE5vdGUgdGhhdCBpdCBtYXkg
bm90IGNvbnN1bWUgdGhlIGVudGlyZSBpbnB1dCwgaW4gd2hpY2ggY2FzZQ0KKyAqICAgICAgICAg
IGBpbnB1dC5wb3MgPCBpbnB1dC5zaXplYCwgYW5kIGl0J3MgdXAgdG8gdGhlIGNhbGxlciB0byBw
cmVzZW50DQorICogICAgICAgICAgcmVtYWluaW5nIGRhdGEgYWdhaW4uDQorICoNCisgKiBUaGUg
YGlucHV0YCBhbmQgYG91dHB1dGAgYnVmZmVycyBtYXkgYmUgYW55IHNpemUuIEd1YXJhbnRlZWQg
dG8gbWFrZSBzb21lDQorICogZm9yd2FyZCBwcm9ncmVzcyBpZiBgaW5wdXRgIGFuZCBgb3V0cHV0
YCBhcmUgbm90IGVtcHR5Lg0KKyAqIFpTVERfZGVjb21wcmVzc1N0cmVhbSgpIHdpbGwgbm90IGNv
bnN1bWUgdGhlIGxhc3QgYnl0ZSBvZiB0aGUgZnJhbWUgdW50aWwNCisgKiB0aGUgZW50aXJlIGZy
YW1lIGlzIGZsdXNoZWQuDQorICoNCisgKiBSZXR1cm46ICBSZXR1cm5zIDAgaWZmIGEgZnJhbWUg
aXMgY29tcGxldGVseSBkZWNvZGVkIGFuZCBmdWxseSBmbHVzaGVkLg0KKyAqICAgICAgICAgIE90
aGVyd2lzZSByZXR1cm5zIGEgaGludCBmb3IgdGhlIG51bWJlciBvZiBieXRlcyB0byB1c2UgYXMg
dGhlIGlucHV0DQorICogICAgICAgICAgZm9yIHRoZSBuZXh0IGZ1bmN0aW9uIGNhbGwgb3IgYW4g
ZXJyb3IsIHdoaWNoIGNhbiBiZSBjaGVja2VkIHVzaW5nDQorICogICAgICAgICAgWlNURF9pc0Vy
cm9yKCkuIFRoZSBzaXplIGhpbnQgd2lsbCBuZXZlciBsb2FkIG1vcmUgdGhhbiB0aGUgZnJhbWUu
DQorICovDQorc2l6ZV90IFpTVERfZGVjb21wcmVzc1N0cmVhbShaU1REX0RTdHJlYW0gKnpkcywg
WlNURF9vdXRCdWZmZXIgKm91dHB1dCwNCisJWlNURF9pbkJ1ZmZlciAqaW5wdXQpOw0KKw0KKy8q
Kg0KKyAqIFpTVERfRFN0cmVhbUluU2l6ZSgpIC0gcmVjb21tZW5kZWQgc2l6ZSBmb3IgdGhlIGlu
cHV0IGJ1ZmZlcg0KKyAqDQorICogUmV0dXJuOiBUaGUgcmVjb21tZW5kZWQgc2l6ZSBmb3IgdGhl
IGlucHV0IGJ1ZmZlci4NCisgKi8NCitzaXplX3QgWlNURF9EU3RyZWFtSW5TaXplKHZvaWQpOw0K
Ky8qKg0KKyAqIFpTVERfRFN0cmVhbU91dFNpemUoKSAtIHJlY29tbWVuZGVkIHNpemUgZm9yIHRo
ZSBvdXRwdXQgYnVmZmVyDQorICoNCisgKiBXaGVuIHRoZSBvdXRwdXQgYnVmZmVyIGlzIGF0IGxl
YXN0IHRoaXMgbGFyZ2UsIGl0IGlzIGd1YXJhbnRlZWQgdG8gYmUgbGFyZ2UNCisgKiBlbm91Z2gg
dG8gZmx1c2ggYXQgbGVhc3Qgb25lIGNvbXBsZXRlIGRlY29tcHJlc3NlZCBibG9jay4NCisgKg0K
KyAqIFJldHVybjogVGhlIHJlY29tbWVuZGVkIHNpemUgZm9yIHRoZSBvdXRwdXQgYnVmZmVyLg0K
KyAqLw0KK3NpemVfdCBaU1REX0RTdHJlYW1PdXRTaXplKHZvaWQpOw0KKw0KKw0KKy8qIC0tLSBD
b25zdGFudHMgLS0tKi8NCisjZGVmaW5lIFpTVERfTUFHSUNOVU1CRVIgICAgICAgICAgICAweEZE
MkZCNTI4ICAgLyogPj0gdjAuOC4wICovDQorI2RlZmluZSBaU1REX01BR0lDX1NLSVBQQUJMRV9T
VEFSVCAgMHgxODREMkE1MFUNCisNCisjZGVmaW5lIFpTVERfQ09OVEVOVFNJWkVfVU5LTk9XTiAo
MFVMTCAtIDEpDQorI2RlZmluZSBaU1REX0NPTlRFTlRTSVpFX0VSUk9SICAgKDBVTEwgLSAyKQ0K
Kw0KKyNkZWZpbmUgWlNURF9XSU5ET1dMT0dfTUFYXzMyICAyNw0KKyNkZWZpbmUgWlNURF9XSU5E
T1dMT0dfTUFYXzY0ICAyNw0KKyNkZWZpbmUgWlNURF9XSU5ET1dMT0dfTUFYIFwNCisJKCh1bnNp
Z25lZCBpbnQpKHNpemVvZihzaXplX3QpID09IDQgXA0KKwkJPyBaU1REX1dJTkRPV0xPR19NQVhf
MzIgXA0KKwkJOiBaU1REX1dJTkRPV0xPR19NQVhfNjQpKQ0KKyNkZWZpbmUgWlNURF9XSU5ET1dM
T0dfTUlOIDEwDQorI2RlZmluZSBaU1REX0hBU0hMT0dfTUFYIFpTVERfV0lORE9XTE9HX01BWA0K
KyNkZWZpbmUgWlNURF9IQVNITE9HX01JTiAgICAgICAgNg0KKyNkZWZpbmUgWlNURF9DSEFJTkxP
R19NQVggICAgIChaU1REX1dJTkRPV0xPR19NQVgrMSkNCisjZGVmaW5lIFpTVERfQ0hBSU5MT0df
TUlOICAgICAgWlNURF9IQVNITE9HX01JTg0KKyNkZWZpbmUgWlNURF9IQVNITE9HM19NQVggICAg
ICAxNw0KKyNkZWZpbmUgWlNURF9TRUFSQ0hMT0dfTUFYICAgIChaU1REX1dJTkRPV0xPR19NQVgt
MSkNCisjZGVmaW5lIFpTVERfU0VBUkNITE9HX01JTiAgICAgIDENCisvKiBvbmx5IGZvciBaU1RE
X2Zhc3QsIG90aGVyIHN0cmF0ZWdpZXMgYXJlIGxpbWl0ZWQgdG8gNiAqLw0KKyNkZWZpbmUgWlNU
RF9TRUFSQ0hMRU5HVEhfTUFYICAgNw0KKy8qIG9ubHkgZm9yIFpTVERfYnRvcHQsIG90aGVyIHN0
cmF0ZWdpZXMgYXJlIGxpbWl0ZWQgdG8gNCAqLw0KKyNkZWZpbmUgWlNURF9TRUFSQ0hMRU5HVEhf
TUlOICAgMw0KKyNkZWZpbmUgWlNURF9UQVJHRVRMRU5HVEhfTUlOICAgNA0KKyNkZWZpbmUgWlNU
RF9UQVJHRVRMRU5HVEhfTUFYIDk5OQ0KKw0KKy8qIGZvciBzdGF0aWMgYWxsb2NhdGlvbiAqLw0K
KyNkZWZpbmUgWlNURF9GUkFNRUhFQURFUlNJWkVfTUFYIDE4DQorI2RlZmluZSBaU1REX0ZSQU1F
SEVBREVSU0laRV9NSU4gIDYNCitzdGF0aWMgY29uc3Qgc2l6ZV90IFpTVERfZnJhbWVIZWFkZXJT
aXplX3ByZWZpeCA9IDU7DQorc3RhdGljIGNvbnN0IHNpemVfdCBaU1REX2ZyYW1lSGVhZGVyU2l6
ZV9taW4gPSBaU1REX0ZSQU1FSEVBREVSU0laRV9NSU47DQorc3RhdGljIGNvbnN0IHNpemVfdCBa
U1REX2ZyYW1lSGVhZGVyU2l6ZV9tYXggPSBaU1REX0ZSQU1FSEVBREVSU0laRV9NQVg7DQorLyog
bWFnaWMgbnVtYmVyICsgc2tpcHBhYmxlIGZyYW1lIGxlbmd0aCAqLw0KK3N0YXRpYyBjb25zdCBz
aXplX3QgWlNURF9za2lwcGFibGVIZWFkZXJTaXplID0gODsNCisNCisNCisvKi0qKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqDQorICogQ29tcHJlc3NlZCBzaXplIGZ1bmN0aW9u
cw0KKyAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisNCisvKioNCisg
KiBaU1REX2ZpbmRGcmFtZUNvbXByZXNzZWRTaXplKCkgLSByZXR1cm5zIHRoZSBzaXplIG9mIGEg
Y29tcHJlc3NlZCBmcmFtZQ0KKyAqIEBzcmM6ICAgICBTb3VyY2UgYnVmZmVyLiBJdCBzaG91bGQg
cG9pbnQgdG8gdGhlIHN0YXJ0IG9mIGEgenN0ZCBlbmNvZGVkIGZyYW1lDQorICogICAgICAgICAg
IG9yIGEgc2tpcHBhYmxlIGZyYW1lLg0KKyAqIEBzcmNTaXplOiBUaGUgc2l6ZSBvZiB0aGUgc291
cmNlIGJ1ZmZlci4gSXQgbXVzdCBiZSBhdCBsZWFzdCBhcyBsYXJnZSBhcyB0aGUNCisgKiAgICAg
ICAgICAgc2l6ZSBvZiB0aGUgZnJhbWUuDQorICoNCisgKiBSZXR1cm46ICAgVGhlIGNvbXByZXNz
ZWQgc2l6ZSBvZiB0aGUgZnJhbWUgcG9pbnRlZCB0byBieSBgc3JjYCBvciBhbiBlcnJvciwNCisg
KiAgICAgICAgICAgd2hpY2ggY2FuIGJlIGNoZWNrIHdpdGggWlNURF9pc0Vycm9yKCkuDQorICog
ICAgICAgICAgIFN1aXRhYmxlIHRvIHBhc3MgdG8gWlNURF9kZWNvbXByZXNzKCkgb3Igc2ltaWxh
ciBmdW5jdGlvbnMuDQorICovDQorc2l6ZV90IFpTVERfZmluZEZyYW1lQ29tcHJlc3NlZFNpemUo
Y29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSk7DQorDQorLyotKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKg0KKyAqIERlY29tcHJlc3NlZCBzaXplIGZ1bmN0aW9ucw0K
KyAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCisvKioNCisgKiBaU1RE
X2dldEZyYW1lQ29udGVudFNpemUoKSAtIHJldHVybnMgdGhlIGNvbnRlbnQgc2l6ZSBpbiBhIHpz
dGQgZnJhbWUgaGVhZGVyDQorICogQHNyYzogICAgIEl0IHNob3VsZCBwb2ludCB0byB0aGUgc3Rh
cnQgb2YgYSB6c3RkIGVuY29kZWQgZnJhbWUuDQorICogQHNyY1NpemU6IFRoZSBzaXplIG9mIHRo
ZSBzb3VyY2UgYnVmZmVyLiBJdCBtdXN0IGJlIGF0IGxlYXN0IGFzIGxhcmdlIGFzIHRoZQ0KKyAq
ICAgICAgICAgICBmcmFtZSBoZWFkZXIuIGBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9tYXhgIGlzIGFs
d2F5cyBsYXJnZSBlbm91Z2guDQorICoNCisgKiBSZXR1cm46ICAgVGhlIGZyYW1lIGNvbnRlbnQg
c2l6ZSBzdG9yZWQgaW4gdGhlIGZyYW1lIGhlYWRlciBpZiBrbm93bi4NCisgKiAgICAgICAgICAg
YFpTVERfQ09OVEVOVFNJWkVfVU5LTk9XTmAgaWYgdGhlIGNvbnRlbnQgc2l6ZSBpc24ndCBzdG9y
ZWQgaW4gdGhlDQorICogICAgICAgICAgIGZyYW1lIGhlYWRlci4gYFpTVERfQ09OVEVOVFNJWkVf
RVJST1JgIG9uIGludmFsaWQgaW5wdXQuDQorICovDQordW5zaWduZWQgbG9uZyBsb25nIFpTVERf
Z2V0RnJhbWVDb250ZW50U2l6ZShjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKTsNCisN
CisvKioNCisgKiBaU1REX2ZpbmREZWNvbXByZXNzZWRTaXplKCkgLSByZXR1cm5zIGRlY29tcHJl
c3NlZCBzaXplIG9mIGEgc2VyaWVzIG9mIGZyYW1lcw0KKyAqIEBzcmM6ICAgICBJdCBzaG91bGQg
cG9pbnQgdG8gdGhlIHN0YXJ0IG9mIGEgc2VyaWVzIG9mIHpzdGQgZW5jb2RlZCBhbmQvb3INCisg
KiAgICAgICAgICAgc2tpcHBhYmxlIGZyYW1lcy4NCisgKiBAc3JjU2l6ZTogVGhlIGV4YWN0IHNp
emUgb2YgdGhlIHNlcmllcyBvZiBmcmFtZXMuDQorICoNCisgKiBJZiBhbnkgenN0ZCBlbmNvZGVk
IGZyYW1lIGluIHRoZSBzZXJpZXMgZG9lc24ndCBoYXZlIHRoZSBmcmFtZSBjb250ZW50IHNpemUN
CisgKiBzZXQsIGBaU1REX0NPTlRFTlRTSVpFX1VOS05PV05gIGlzIHJldHVybmVkLiBCdXQgZnJh
bWUgY29udGVudCBzaXplIGlzIGFsd2F5cw0KKyAqIHNldCB3aGVuIHVzaW5nIFpTVERfY29tcHJl
c3MoKS4gVGhlIGRlY29tcHJlc3NlZCBzaXplIGNhbiBiZSB2ZXJ5IGxhcmdlLg0KKyAqIElmIHRo
ZSBzb3VyY2UgaXMgdW50cnVzdGVkLCB0aGUgZGVjb21wcmVzc2VkIHNpemUgY291bGQgYmUgd3Jv
bmcgb3INCisgKiBpbnRlbnRpb25hbGx5IG1vZGlmaWVkLiBBbHdheXMgZW5zdXJlIHRoZSByZXN1
bHQgZml0cyB3aXRoaW4gdGhlDQorICogYXBwbGljYXRpb24ncyBhdXRob3JpemVkIGxpbWl0cy4g
WlNURF9maW5kRGVjb21wcmVzc2VkU2l6ZSgpIGhhbmRsZXMgbXVsdGlwbGUNCisgKiBmcmFtZXMs
IGFuZCBzbyBpdCBtdXN0IHRyYXZlcnNlIHRoZSBpbnB1dCB0byByZWFkIGVhY2ggZnJhbWUgaGVh
ZGVyLiBUaGlzIGlzDQorICogZWZmaWNpZW50IGFzIG1vc3Qgb2YgdGhlIGRhdGEgaXMgc2tpcHBl
ZCwgaG93ZXZlciBpdCBkb2VzIG1lYW4gdGhhdCBhbGwgZnJhbWUNCisgKiBkYXRhIG11c3QgYmUg
cHJlc2VudCBhbmQgdmFsaWQuDQorICoNCisgKiBSZXR1cm46ICAgRGVjb21wcmVzc2VkIHNpemUg
b2YgYWxsIHRoZSBkYXRhIGNvbnRhaW5lZCBpbiB0aGUgZnJhbWVzIGlmIGtub3duLg0KKyAqICAg
ICAgICAgICBgWlNURF9DT05URU5UU0laRV9VTktOT1dOYCBpZiB0aGUgZGVjb21wcmVzc2VkIHNp
emUgaXMgdW5rbm93bi4NCisgKiAgICAgICAgICAgYFpTVERfQ09OVEVOVFNJWkVfRVJST1JgIGlm
IGFuIGVycm9yIG9jY3VycmVkLg0KKyAqLw0KK3Vuc2lnbmVkIGxvbmcgbG9uZyBaU1REX2ZpbmRE
ZWNvbXByZXNzZWRTaXplKGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpOw0KKw0KKy8q
LSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCisgKiBBZHZhbmNlZCBjb21w
cmVzc2lvbiBmdW5jdGlvbnMNCisgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KiovDQorLyoqDQorICogWlNURF9jaGVja0NQYXJhbXMoKSAtIGVuc3VyZSBwYXJhbWV0ZXIgdmFs
dWVzIHJlbWFpbiB3aXRoaW4gYXV0aG9yaXplZCByYW5nZQ0KKyAqIEBjUGFyYW1zOiBUaGUgenN0
ZCBjb21wcmVzc2lvbiBwYXJhbWV0ZXJzLg0KKyAqDQorICogUmV0dXJuOiAgIFplcm8gb3IgYW4g
ZXJyb3IsIHdoaWNoIGNhbiBiZSBjaGVja2VkIHVzaW5nIFpTVERfaXNFcnJvcigpLg0KKyAqLw0K
K3NpemVfdCBaU1REX2NoZWNrQ1BhcmFtcyhaU1REX2NvbXByZXNzaW9uUGFyYW1ldGVycyBjUGFy
YW1zKTsNCisNCisvKioNCisgKiBaU1REX2FkanVzdENQYXJhbXMoKSAtIG9wdGltaXplIHBhcmFt
ZXRlcnMgZm9yIGEgZ2l2ZW4gc3JjU2l6ZSBhbmQgZGljdFNpemUNCisgKiBAc3JjU2l6ZTogIE9w
dGlvbmFsbHkgdGhlIGVzdGltYXRlZCBzb3VyY2Ugc2l6ZSwgb3IgemVybyBpZiB1bmtub3duLg0K
KyAqIEBkaWN0U2l6ZTogT3B0aW9uYWxseSB0aGUgZXN0aW1hdGVkIGRpY3Rpb25hcnkgc2l6ZSwg
b3IgemVybyBpZiB1bmtub3duLg0KKyAqDQorICogUmV0dXJuOiAgICBUaGUgb3B0aW1pemVkIHBh
cmFtZXRlcnMuDQorICovDQorWlNURF9jb21wcmVzc2lvblBhcmFtZXRlcnMgWlNURF9hZGp1c3RD
UGFyYW1zKA0KKwlaU1REX2NvbXByZXNzaW9uUGFyYW1ldGVycyBjUGFyYW1zLCB1bnNpZ25lZCBs
b25nIGxvbmcgc3JjU2l6ZSwNCisJc2l6ZV90IGRpY3RTaXplKTsNCisNCisvKi0tLSBBZHZhbmNl
ZCBkZWNvbXByZXNzaW9uIGZ1bmN0aW9ucyAtLS0qLw0KKw0KKy8qKg0KKyAqIFpTVERfaXNGcmFt
ZSgpIC0gcmV0dXJucyB0cnVlIGlmZiB0aGUgYnVmZmVyIHN0YXJ0cyB3aXRoIGEgdmFsaWQgZnJh
bWUNCisgKiBAYnVmZmVyOiBUaGUgc291cmNlIGJ1ZmZlciB0byBjaGVjay4NCisgKiBAc2l6ZTog
ICBUaGUgc2l6ZSBvZiB0aGUgc291cmNlIGJ1ZmZlciwgbXVzdCBiZSBhdCBsZWFzdCA0IGJ5dGVz
Lg0KKyAqDQorICogUmV0dXJuOiBUcnVlIGlmZiB0aGUgYnVmZmVyIHN0YXJ0cyB3aXRoIGEgenN0
ZCBvciBza2lwcGFibGUgZnJhbWUgaWRlbnRpZmllci4NCisgKi8NCit1bnNpZ25lZCBpbnQgWlNU
RF9pc0ZyYW1lKGNvbnN0IHZvaWQgKmJ1ZmZlciwgc2l6ZV90IHNpemUpOw0KKw0KKy8qKg0KKyAq
IFpTVERfZ2V0RGljdElEX2Zyb21EaWN0KCkgLSByZXR1cm5zIHRoZSBkaWN0aW9uYXJ5IGlkIHN0
b3JlZCBpbiBhIGRpY3Rpb25hcnkNCisgKiBAZGljdDogICAgIFRoZSBkaWN0aW9uYXJ5IGJ1ZmZl
ci4NCisgKiBAZGljdFNpemU6IFRoZSBzaXplIG9mIHRoZSBkaWN0aW9uYXJ5IGJ1ZmZlci4NCisg
Kg0KKyAqIFJldHVybjogICAgVGhlIGRpY3Rpb25hcnkgaWQgc3RvcmVkIHdpdGhpbiB0aGUgZGlj
dGlvbmFyeSBvciAwIGlmIHRoZQ0KKyAqICAgICAgICAgICAgZGljdGlvbmFyeSBpcyBub3QgYSB6
c3RkIGRpY3Rpb25hcnkuIElmIGl0IHJldHVybnMgMCB0aGUNCisgKiAgICAgICAgICAgIGRpY3Rp
b25hcnkgY2FuIHN0aWxsIGJlIGxvYWRlZCBhcyBhIGNvbnRlbnQtb25seSBkaWN0aW9uYXJ5Lg0K
KyAqLw0KK3Vuc2lnbmVkIGludCBaU1REX2dldERpY3RJRF9mcm9tRGljdChjb25zdCB2b2lkICpk
aWN0LCBzaXplX3QgZGljdFNpemUpOw0KKw0KKy8qKg0KKyAqIFpTVERfZ2V0RGljdElEX2Zyb21E
RGljdCgpIC0gcmV0dXJucyB0aGUgZGljdGlvbmFyeSBpZCBzdG9yZWQgaW4gYSBaU1REX0REaWN0
DQorICogQGRkaWN0OiBUaGUgZGRpY3QgdG8gZmluZCB0aGUgaWQgb2YuDQorICoNCisgKiBSZXR1
cm46IFRoZSBkaWN0aW9uYXJ5IGlkIHN0b3JlZCB3aXRoaW4gYGRkaWN0YCBvciAwIGlmIHRoZSBk
aWN0aW9uYXJ5IGlzIG5vdA0KKyAqICAgICAgICAgYSB6c3RkIGRpY3Rpb25hcnkuIElmIGl0IHJl
dHVybnMgMCBgZGRpY3RgIHdpbGwgYmUgbG9hZGVkIGFzIGENCisgKiAgICAgICAgIGNvbnRlbnQt
b25seSBkaWN0aW9uYXJ5Lg0KKyAqLw0KK3Vuc2lnbmVkIGludCBaU1REX2dldERpY3RJRF9mcm9t
RERpY3QoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpOw0KKw0KKy8qKg0KKyAqIFpTVERfZ2V0RGlj
dElEX2Zyb21GcmFtZSgpIC0gcmV0dXJucyB0aGUgZGljdGlvbmFyeSBpZCBzdG9yZWQgaW4gYSB6
c3RkIGZyYW1lDQorICogQHNyYzogICAgIFNvdXJjZSBidWZmZXIuIEl0IG11c3QgYmUgYSB6c3Rk
IGVuY29kZWQgZnJhbWUuDQorICogQHNyY1NpemU6IFRoZSBzaXplIG9mIHRoZSBzb3VyY2UgYnVm
ZmVyLiBJdCBtdXN0IGJlIGF0IGxlYXN0IGFzIGxhcmdlIGFzIHRoZQ0KKyAqICAgICAgICAgICBm
cmFtZSBoZWFkZXIuIGBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9tYXhgIGlzIGFsd2F5cyBsYXJnZSBl
bm91Z2guDQorICoNCisgKiBSZXR1cm46ICAgVGhlIGRpY3Rpb25hcnkgaWQgcmVxdWlyZWQgdG8g
ZGVjb21wcmVzcyB0aGUgZnJhbWUgc3RvcmVkIHdpdGhpbg0KKyAqICAgICAgICAgICBgc3JjYCBv
ciAwIGlmIHRoZSBkaWN0aW9uYXJ5IGlkIGNvdWxkIG5vdCBiZSBkZWNvZGVkLiBJdCBjYW4gcmV0
dXJuDQorICogICAgICAgICAgIDAgaWYgdGhlIGZyYW1lIGRvZXMgbm90IHJlcXVpcmUgYSBkaWN0
aW9uYXJ5LCB0aGUgZGljdGlvbmFyeSBpZA0KKyAqICAgICAgICAgICB3YXNuJ3Qgc3RvcmVkIGlu
IHRoZSBmcmFtZSwgYHNyY2AgaXMgbm90IGEgenN0ZCBmcmFtZSwgb3IgYHNyY1NpemVgDQorICog
ICAgICAgICAgIGlzIHRvbyBzbWFsbC4NCisgKi8NCit1bnNpZ25lZCBpbnQgWlNURF9nZXREaWN0
SURfZnJvbUZyYW1lKGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpOw0KKw0KKy8qKg0K
KyAqIHN0cnVjdCBaU1REX2ZyYW1lUGFyYW1zIC0genN0ZCBmcmFtZSBwYXJhbWV0ZXJzIHN0b3Jl
ZCBpbiB0aGUgZnJhbWUgaGVhZGVyDQorICogQGZyYW1lQ29udGVudFNpemU6IFRoZSBmcmFtZSBj
b250ZW50IHNpemUsIG9yIDAgaWYgbm90IHByZXNlbnQuDQorICogQHdpbmRvd1NpemU6ICAgICAg
IFRoZSB3aW5kb3cgc2l6ZSwgb3IgMCBpZiB0aGUgZnJhbWUgaXMgYSBza2lwcGFibGUgZnJhbWUu
DQorICogQGRpY3RJRDogICAgICAgICAgIFRoZSBkaWN0aW9uYXJ5IGlkLCBvciAwIGlmIG5vdCBw
cmVzZW50Lg0KKyAqIEBjaGVja3N1bUZsYWc6ICAgICBXaGV0aGVyIGEgY2hlY2tzdW0gd2FzIHVz
ZWQuDQorICovDQordHlwZWRlZiBzdHJ1Y3Qgew0KKwl1bnNpZ25lZCBsb25nIGxvbmcgZnJhbWVD
b250ZW50U2l6ZTsNCisJdW5zaWduZWQgaW50IHdpbmRvd1NpemU7DQorCXVuc2lnbmVkIGludCBk
aWN0SUQ7DQorCXVuc2lnbmVkIGludCBjaGVja3N1bUZsYWc7DQorfSBaU1REX2ZyYW1lUGFyYW1z
Ow0KKw0KKy8qKg0KKyAqIFpTVERfZ2V0RnJhbWVQYXJhbXMoKSAtIGV4dHJhY3RzIHBhcmFtZXRl
cnMgZnJvbSBhIHpzdGQgb3Igc2tpcHBhYmxlIGZyYW1lDQorICogQGZwYXJhbXNQdHI6IE9uIHN1
Y2Nlc3MgdGhlIGZyYW1lIHBhcmFtZXRlcnMgYXJlIHdyaXR0ZW4gaGVyZS4NCisgKiBAc3JjOiAg
ICAgICAgVGhlIHNvdXJjZSBidWZmZXIuIEl0IG11c3QgcG9pbnQgdG8gYSB6c3RkIG9yIHNraXBw
YWJsZSBmcmFtZS4NCisgKiBAc3JjU2l6ZTogICAgVGhlIHNpemUgb2YgdGhlIHNvdXJjZSBidWZm
ZXIuIGBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9tYXhgIGlzDQorICogICAgICAgICAgICAgIGFsd2F5
cyBsYXJnZSBlbm91Z2ggdG8gc3VjY2VlZC4NCisgKg0KKyAqIFJldHVybjogICAgICAwIG9uIHN1
Y2Nlc3MuIElmIG1vcmUgZGF0YSBpcyByZXF1aXJlZCBpdCByZXR1cm5zIGhvdyBtYW55IGJ5dGVz
DQorICogICAgICAgICAgICAgIG11c3QgYmUgcHJvdmlkZWQgdG8gbWFrZSBmb3J3YXJkIHByb2dy
ZXNzLiBPdGhlcndpc2UgaXQgcmV0dXJucw0KKyAqICAgICAgICAgICAgICBhbiBlcnJvciwgd2hp
Y2ggY2FuIGJlIGNoZWNrZWQgdXNpbmcgWlNURF9pc0Vycm9yKCkuDQorICovDQorc2l6ZV90IFpT
VERfZ2V0RnJhbWVQYXJhbXMoWlNURF9mcmFtZVBhcmFtcyAqZnBhcmFtc1B0ciwgY29uc3Qgdm9p
ZCAqc3JjLA0KKwlzaXplX3Qgc3JjU2l6ZSk7DQorDQorLyotKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioN
CisgKiBCdWZmZXItbGVzcyBhbmQgc3luY2hyb25vdXMgaW5uZXIgc3RyZWFtaW5nIGZ1bmN0aW9u
cw0KKyAqDQorICogVGhpcyBpcyBhbiBhZHZhbmNlZCBBUEksIGdpdmluZyBmdWxsIGNvbnRyb2wg
b3ZlciBidWZmZXIgbWFuYWdlbWVudCwgZm9yDQorICogdXNlcnMgd2hpY2ggbmVlZCBkaXJlY3Qg
Y29udHJvbCBvdmVyIG1lbW9yeS4NCisgKiBCdXQgaXQncyBhbHNvIGEgY29tcGxleCBvbmUsIHdp
dGggbWFueSByZXN0cmljdGlvbnMgKGRvY3VtZW50ZWQgYmVsb3cpLg0KKyAqIFByZWZlciB1c2lu
ZyBub3JtYWwgc3RyZWFtaW5nIEFQSSBmb3IgYW4gZWFzaWVyIGV4cGVyaWVuY2UNCisgKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqLw0KKw0KKy8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQorICogQnVmZmVy
LWxlc3Mgc3RyZWFtaW5nIGNvbXByZXNzaW9uIChzeW5jaHJvbm91cyBtb2RlKQ0KKyAqDQorICog
QSBaU1REX0NDdHggb2JqZWN0IGlzIHJlcXVpcmVkIHRvIHRyYWNrIHN0cmVhbWluZyBvcGVyYXRp
b25zLg0KKyAqIFVzZSBaU1REX2luaXRDQ3R4KCkgdG8gaW5pdGlhbGl6ZSBhIGNvbnRleHQuDQor
ICogWlNURF9DQ3R4IG9iamVjdCBjYW4gYmUgcmUtdXNlZCBtdWx0aXBsZSB0aW1lcyB3aXRoaW4g
c3VjY2Vzc2l2ZSBjb21wcmVzc2lvbg0KKyAqIG9wZXJhdGlvbnMuDQorICoNCisgKiBTdGFydCBi
eSBpbml0aWFsaXppbmcgYSBjb250ZXh0Lg0KKyAqIFVzZSBaU1REX2NvbXByZXNzQmVnaW4oKSwg
b3IgWlNURF9jb21wcmVzc0JlZ2luX3VzaW5nRGljdCgpIGZvciBkaWN0aW9uYXJ5DQorICogY29t
cHJlc3Npb24sDQorICogb3IgWlNURF9jb21wcmVzc0JlZ2luX2FkdmFuY2VkKCksIGZvciBmaW5l
ciBwYXJhbWV0ZXIgY29udHJvbC4NCisgKiBJdCdzIGFsc28gcG9zc2libGUgdG8gZHVwbGljYXRl
IGEgcmVmZXJlbmNlIGNvbnRleHQgd2hpY2ggaGFzIGFscmVhZHkgYmVlbg0KKyAqIGluaXRpYWxp
emVkLCB1c2luZyBaU1REX2NvcHlDQ3R4KCkNCisgKg0KKyAqIFRoZW4sIGNvbnN1bWUgeW91ciBp
bnB1dCB1c2luZyBaU1REX2NvbXByZXNzQ29udGludWUoKS4NCisgKiBUaGVyZSBhcmUgc29tZSBp
bXBvcnRhbnQgY29uc2lkZXJhdGlvbnMgdG8ga2VlcCBpbiBtaW5kIHdoZW4gdXNpbmcgdGhpcw0K
KyAqIGFkdmFuY2VkIGZ1bmN0aW9uIDoNCisgKiAtIFpTVERfY29tcHJlc3NDb250aW51ZSgpIGhh
cyBubyBpbnRlcm5hbCBidWZmZXIuIEl0IHVzZXMgZXh0ZXJuYWxseSBwcm92aWRlZA0KKyAqICAg
YnVmZmVyIG9ubHkuDQorICogLSBJbnRlcmZhY2UgaXMgc3luY2hyb25vdXMgOiBpbnB1dCBpcyBj
b25zdW1lZCBlbnRpcmVseSBhbmQgcHJvZHVjZSAxKw0KKyAqICAgKG9yIG1vcmUpIGNvbXByZXNz
ZWQgYmxvY2tzLg0KKyAqIC0gQ2FsbGVyIG11c3QgZW5zdXJlIHRoZXJlIGlzIGVub3VnaCBzcGFj
ZSBpbiBgZHN0YCB0byBzdG9yZSBjb21wcmVzc2VkIGRhdGENCisgKiAgIHVuZGVyIHdvcnN0IGNh
c2Ugc2NlbmFyaW8uIFdvcnN0IGNhc2UgZXZhbHVhdGlvbiBpcyBwcm92aWRlZCBieQ0KKyAqICAg
WlNURF9jb21wcmVzc0JvdW5kKCkuDQorICogICBaU1REX2NvbXByZXNzQ29udGludWUoKSBkb2Vz
bid0IGd1YXJhbnRlZSByZWNvdmVyIGFmdGVyIGEgZmFpbGVkDQorICogICBjb21wcmVzc2lvbi4N
CisgKiAtIFpTVERfY29tcHJlc3NDb250aW51ZSgpIHByZXN1bWVzIHByaW9yIGlucHV0ICoqKmlz
IHN0aWxsIGFjY2Vzc2libGUgYW5kDQorICogICB1bm1vZGlmaWVkKioqICh1cCB0byBtYXhpbXVt
IGRpc3RhbmNlIHNpemUsIHNlZSBXaW5kb3dMb2cpLg0KKyAqICAgSXQgcmVtZW1iZXJzIGFsbCBw
cmV2aW91cyBjb250aWd1b3VzIGJsb2NrcywgcGx1cyBvbmUgc2VwYXJhdGVkIG1lbW9yeQ0KKyAq
ICAgc2VnbWVudCAod2hpY2ggY2FuIGl0c2VsZiBjb25zaXN0cyBvZiBtdWx0aXBsZSBjb250aWd1
b3VzIGJsb2NrcykNCisgKiAtIFpTVERfY29tcHJlc3NDb250aW51ZSgpIGRldGVjdHMgdGhhdCBw
cmlvciBpbnB1dCBoYXMgYmVlbiBvdmVyd3JpdHRlbiB3aGVuDQorICogICBgc3JjYCBidWZmZXIg
b3ZlcmxhcHMuIEluIHdoaWNoIGNhc2UsIGl0IHdpbGwgImRpc2NhcmQiIHRoZSByZWxldmFudCBt
ZW1vcnkNCisgKiAgIHNlY3Rpb24gZnJvbSBpdHMgaGlzdG9yeS4NCisgKg0KKyAqIEZpbmlzaCBh
IGZyYW1lIHdpdGggWlNURF9jb21wcmVzc0VuZCgpLCB3aGljaCB3aWxsIHdyaXRlIHRoZSBsYXN0
IGJsb2NrKHMpDQorICogYW5kIG9wdGlvbmFsIGNoZWNrc3VtLiBJdCdzIHBvc3NpYmxlIHRvIHVz
ZSBzcmNTaXplPT0wLCBpbiB3aGljaCBjYXNlLCBpdA0KKyAqIHdpbGwgd3JpdGUgYSBmaW5hbCBl
bXB0eSBibG9jayB0byBlbmQgdGhlIGZyYW1lLiBXaXRob3V0IGxhc3QgYmxvY2sgbWFyaywNCisg
KiBmcmFtZXMgd2lsbCBiZSBjb25zaWRlcmVkIHVuZmluaXNoZWQgKGNvcnJ1cHRlZCkgYnkgZGVj
b2RlcnMuDQorICoNCisgKiBgWlNURF9DQ3R4YCBvYmplY3QgY2FuIGJlIHJlLXVzZWQgKFpTVERf
Y29tcHJlc3NCZWdpbigpKSB0byBjb21wcmVzcyBzb21lIG5ldw0KKyAqIGZyYW1lLg0KKyAqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKiovDQorDQorLyo9PT09PSAgIEJ1ZmZlci1sZXNzIHN0cmVhbWluZyBj
b21wcmVzc2lvbiBmdW5jdGlvbnMgID09PT09Ki8NCitzaXplX3QgWlNURF9jb21wcmVzc0JlZ2lu
KFpTVERfQ0N0eCAqY2N0eCwgaW50IGNvbXByZXNzaW9uTGV2ZWwpOw0KK3NpemVfdCBaU1REX2Nv
bXByZXNzQmVnaW5fdXNpbmdEaWN0KFpTVERfQ0N0eCAqY2N0eCwgY29uc3Qgdm9pZCAqZGljdCwN
CisJc2l6ZV90IGRpY3RTaXplLCBpbnQgY29tcHJlc3Npb25MZXZlbCk7DQorc2l6ZV90IFpTVERf
Y29tcHJlc3NCZWdpbl9hZHZhbmNlZChaU1REX0NDdHggKmNjdHgsIGNvbnN0IHZvaWQgKmRpY3Qs
DQorCXNpemVfdCBkaWN0U2l6ZSwgWlNURF9wYXJhbWV0ZXJzIHBhcmFtcywNCisJdW5zaWduZWQg
bG9uZyBsb25nIHBsZWRnZWRTcmNTaXplKTsNCitzaXplX3QgWlNURF9jb3B5Q0N0eChaU1REX0ND
dHggKmNjdHgsIGNvbnN0IFpTVERfQ0N0eCAqcHJlcGFyZWRDQ3R4LA0KKwl1bnNpZ25lZCBsb25n
IGxvbmcgcGxlZGdlZFNyY1NpemUpOw0KK3NpemVfdCBaU1REX2NvbXByZXNzQmVnaW5fdXNpbmdD
RGljdChaU1REX0NDdHggKmNjdHgsIGNvbnN0IFpTVERfQ0RpY3QgKmNkaWN0LA0KKwl1bnNpZ25l
ZCBsb25nIGxvbmcgcGxlZGdlZFNyY1NpemUpOw0KK3NpemVfdCBaU1REX2NvbXByZXNzQ29udGlu
dWUoWlNURF9DQ3R4ICpjY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwNCisJY29u
c3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSk7DQorc2l6ZV90IFpTVERfY29tcHJlc3NFbmQo
WlNURF9DQ3R4ICpjY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwNCisJY29uc3Qg
dm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSk7DQorDQorDQorDQorLyotKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioNCisgKiBCdWZmZXItbGVzcyBzdHJlYW1pbmcgZGVjb21wcmVzc2lvbiAoc3luY2hyb25v
dXMgbW9kZSkNCisgKg0KKyAqIEEgWlNURF9EQ3R4IG9iamVjdCBpcyByZXF1aXJlZCB0byB0cmFj
ayBzdHJlYW1pbmcgb3BlcmF0aW9ucy4NCisgKiBVc2UgWlNURF9pbml0REN0eCgpIHRvIGluaXRp
YWxpemUgYSBjb250ZXh0Lg0KKyAqIEEgWlNURF9EQ3R4IG9iamVjdCBjYW4gYmUgcmUtdXNlZCBt
dWx0aXBsZSB0aW1lcy4NCisgKg0KKyAqIEZpcnN0IHR5cGljYWwgb3BlcmF0aW9uIGlzIHRvIHJl
dHJpZXZlIGZyYW1lIHBhcmFtZXRlcnMsIHVzaW5nDQorICogWlNURF9nZXRGcmFtZVBhcmFtcygp
LiBJdCBmaWxscyBhIFpTVERfZnJhbWVQYXJhbXMgc3RydWN0dXJlIHdoaWNoIHByb3ZpZGUNCisg
KiBpbXBvcnRhbnQgaW5mb3JtYXRpb24gdG8gY29ycmVjdGx5IGRlY29kZSB0aGUgZnJhbWUsIHN1
Y2ggYXMgdGhlIG1pbmltdW0NCisgKiByb2xsaW5nIGJ1ZmZlciBzaXplIHRvIGFsbG9jYXRlIHRv
IGRlY29tcHJlc3MgZGF0YSAoYHdpbmRvd1NpemVgKSwgYW5kIHRoZQ0KKyAqIGRpY3Rpb25hcnkg
SUQgdXNlZC4NCisgKiBOb3RlOiBjb250ZW50IHNpemUgaXMgb3B0aW9uYWwsIGl0IG1heSBub3Qg
YmUgcHJlc2VudC4gMCBtZWFucyB1bmtub3duLg0KKyAqIE5vdGUgdGhhdCB0aGVzZSB2YWx1ZXMg
Y291bGQgYmUgd3JvbmcsIGVpdGhlciBiZWNhdXNlIG9mIGRhdGEgbWFsZm9ybWF0aW9uLA0KKyAq
IG9yIGJlY2F1c2UgYW4gYXR0YWNrZXIgaXMgc3Bvb2ZpbmcgZGVsaWJlcmF0ZSBmYWxzZSBpbmZv
cm1hdGlvbi4gQXMgYQ0KKyAqIGNvbnNlcXVlbmNlLCBjaGVjayB0aGF0IHZhbHVlcyByZW1haW4g
d2l0aGluIHZhbGlkIGFwcGxpY2F0aW9uIHJhbmdlLA0KKyAqIGVzcGVjaWFsbHkgYHdpbmRvd1Np
emVgLCBiZWZvcmUgYWxsb2NhdGlvbi4gRWFjaCBhcHBsaWNhdGlvbiBjYW4gc2V0IGl0cyBvd24N
CisgKiBsaW1pdCwgZGVwZW5kaW5nIG9uIGxvY2FsIHJlc3RyaWN0aW9ucy4gRm9yIGV4dGVuZGVk
IGludGVyb3BlcmFiaWxpdHksIGl0IGlzDQorICogcmVjb21tZW5kZWQgdG8gc3VwcG9ydCBhdCBs
ZWFzdCA4IE1CLg0KKyAqIEZyYW1lIHBhcmFtZXRlcnMgYXJlIGV4dHJhY3RlZCBmcm9tIHRoZSBi
ZWdpbm5pbmcgb2YgdGhlIGNvbXByZXNzZWQgZnJhbWUuDQorICogRGF0YSBmcmFnbWVudCBtdXN0
IGJlIGxhcmdlIGVub3VnaCB0byBlbnN1cmUgc3VjY2Vzc2Z1bCBkZWNvZGluZywgdHlwaWNhbGx5
DQorICogYFpTVERfZnJhbWVIZWFkZXJTaXplX21heGAgYnl0ZXMuDQorICogUmVzdWx0OiAwOiBz
dWNjZXNzZnVsIGRlY29kaW5nLCB0aGUgYFpTVERfZnJhbWVQYXJhbXNgIHN0cnVjdHVyZSBpcyBm
aWxsZWQuDQorICogICAgICAgID4wOiBgc3JjU2l6ZWAgaXMgdG9vIHNtYWxsLCBwcm92aWRlIGF0
IGxlYXN0IHRoaXMgbWFueSBieXRlcy4NCisgKiAgICAgICAgZXJyb3JDb2RlLCB3aGljaCBjYW4g
YmUgdGVzdGVkIHVzaW5nIFpTVERfaXNFcnJvcigpLg0KKyAqDQorICogU3RhcnQgZGVjb21wcmVz
c2lvbiwgd2l0aCBaU1REX2RlY29tcHJlc3NCZWdpbigpIG9yDQorICogWlNURF9kZWNvbXByZXNz
QmVnaW5fdXNpbmdEaWN0KCkuIEFsdGVybmF0aXZlbHksIHlvdSBjYW4gY29weSBhIHByZXBhcmVk
DQorICogY29udGV4dCwgdXNpbmcgWlNURF9jb3B5REN0eCgpLg0KKyAqDQorICogVGhlbiB1c2Ug
WlNURF9uZXh0U3JjU2l6ZVRvRGVjb21wcmVzcygpIGFuZCBaU1REX2RlY29tcHJlc3NDb250aW51
ZSgpDQorICogYWx0ZXJuYXRpdmVseS4NCisgKiBaU1REX25leHRTcmNTaXplVG9EZWNvbXByZXNz
KCkgdGVsbHMgaG93IG1hbnkgYnl0ZXMgdG8gcHJvdmlkZSBhcyAnc3JjU2l6ZScNCisgKiB0byBa
U1REX2RlY29tcHJlc3NDb250aW51ZSgpLg0KKyAqIFpTVERfZGVjb21wcmVzc0NvbnRpbnVlKCkg
cmVxdWlyZXMgdGhpcyBfZXhhY3RfIGFtb3VudCBvZiBieXRlcywgb3IgaXQgd2lsbA0KKyAqIGZh
aWwuDQorICoNCisgKiBUaGUgcmVzdWx0IG9mIFpTVERfZGVjb21wcmVzc0NvbnRpbnVlKCkgaXMg
dGhlIG51bWJlciBvZiBieXRlcyByZWdlbmVyYXRlZA0KKyAqIHdpdGhpbiAnZHN0JyAobmVjZXNz
YXJpbHkgPD0gZHN0Q2FwYWNpdHkpLiBJdCBjYW4gYmUgemVybywgd2hpY2ggaXMgbm90IGFuDQor
ICogZXJyb3I7IGl0IGp1c3QgbWVhbnMgWlNURF9kZWNvbXByZXNzQ29udGludWUoKSBoYXMgZGVj
b2RlZCBzb21lIG1ldGFkYXRhDQorICogaXRlbS4gSXQgY2FuIGFsc28gYmUgYW4gZXJyb3IgY29k
ZSwgd2hpY2ggY2FuIGJlIHRlc3RlZCB3aXRoIFpTVERfaXNFcnJvcigpLg0KKyAqDQorICogWlNU
RF9kZWNvbXByZXNzQ29udGludWUoKSBuZWVkcyBwcmV2aW91cyBkYXRhIGJsb2NrcyBkdXJpbmcg
ZGVjb21wcmVzc2lvbiwgdXANCisgKiB0byBgd2luZG93U2l6ZWAuIFRoZXkgc2hvdWxkIHByZWZl
cmFibHkgYmUgbG9jYXRlZCBjb250aWd1b3VzbHksIHByaW9yIHRvDQorICogY3VycmVudCBibG9j
ay4gQWx0ZXJuYXRpdmVseSwgYSByb3VuZCBidWZmZXIgb2Ygc3VmZmljaWVudCBzaXplIGlzIGFs
c28NCisgKiBwb3NzaWJsZS4gU3VmZmljaWVudCBzaXplIGlzIGRldGVybWluZWQgYnkgZnJhbWUg
cGFyYW1ldGVycy4NCisgKiBaU1REX2RlY29tcHJlc3NDb250aW51ZSgpIGlzIHZlcnkgc2Vuc2l0
aXZlIHRvIGNvbnRpZ3VpdHksIGlmIDIgYmxvY2tzIGRvbid0DQorICogZm9sbG93IGVhY2ggb3Ro
ZXIsIG1ha2Ugc3VyZSB0aGF0IGVpdGhlciB0aGUgY29tcHJlc3NvciBicmVha3MgY29udGlndWl0
eSBhdA0KKyAqIHRoZSBzYW1lIHBsYWNlLCBvciB0aGF0IHByZXZpb3VzIGNvbnRpZ3VvdXMgc2Vn
bWVudCBpcyBsYXJnZSBlbm91Z2ggdG8NCisgKiBwcm9wZXJseSBoYW5kbGUgbWF4aW11bSBiYWNr
LXJlZmVyZW5jZS4NCisgKg0KKyAqIEEgZnJhbWUgaXMgZnVsbHkgZGVjb2RlZCB3aGVuIFpTVERf
bmV4dFNyY1NpemVUb0RlY29tcHJlc3MoKSByZXR1cm5zIHplcm8uDQorICogQ29udGV4dCBjYW4g
dGhlbiBiZSByZXNldCB0byBzdGFydCBhIG5ldyBkZWNvbXByZXNzaW9uLg0KKyAqDQorICogTm90
ZTogaXQncyBwb3NzaWJsZSB0byBrbm93IGlmIG5leHQgaW5wdXQgdG8gcHJlc2VudCBpcyBhIGhl
YWRlciBvciBhIGJsb2NrLA0KKyAqIHVzaW5nIFpTVERfbmV4dElucHV0VHlwZSgpLiBUaGlzIGlu
Zm9ybWF0aW9uIGlzIG5vdCByZXF1aXJlZCB0byBwcm9wZXJseQ0KKyAqIGRlY29kZSBhIGZyYW1l
Lg0KKyAqDQorICogPT0gU3BlY2lhbCBjYXNlOiBza2lwcGFibGUgZnJhbWVzID09DQorICoNCisg
KiBTa2lwcGFibGUgZnJhbWVzIGFsbG93IGludGVncmF0aW9uIG9mIHVzZXItZGVmaW5lZCBkYXRh
IGludG8gYSBmbG93IG9mDQorICogY29uY2F0ZW5hdGVkIGZyYW1lcy4gU2tpcHBhYmxlIGZyYW1l
cyB3aWxsIGJlIGlnbm9yZWQgKHNraXBwZWQpIGJ5IGENCisgKiBkZWNvbXByZXNzb3IuIFRoZSBm
b3JtYXQgb2Ygc2tpcHBhYmxlIGZyYW1lcyBpcyBhcyBmb2xsb3dzOg0KKyAqIGEpIFNraXBwYWJs
ZSBmcmFtZSBJRCAtIDQgQnl0ZXMsIExpdHRsZSBlbmRpYW4gZm9ybWF0LCBhbnkgdmFsdWUgZnJv
bQ0KKyAqICAgIDB4MTg0RDJBNTAgdG8gMHgxODREMkE1Rg0KKyAqIGIpIEZyYW1lIFNpemUgLSA0
IEJ5dGVzLCBMaXR0bGUgZW5kaWFuIGZvcm1hdCwgdW5zaWduZWQgMzItYml0cw0KKyAqIGMpIEZy
YW1lIENvbnRlbnQgLSBhbnkgY29udGVudCAoVXNlciBEYXRhKSBvZiBsZW5ndGggZXF1YWwgdG8g
RnJhbWUgU2l6ZQ0KKyAqIEZvciBza2lwcGFibGUgZnJhbWVzIFpTVERfZGVjb21wcmVzc0NvbnRp
bnVlKCkgYWx3YXlzIHJldHVybnMgMC4NCisgKiBGb3Igc2tpcHBhYmxlIGZyYW1lcyBaU1REX2dl
dEZyYW1lUGFyYW1zKCkgcmV0dXJucyBmcGFyYW1zUHRyLT53aW5kb3dMb2c9PTANCisgKiB3aGF0
IG1lYW5zIHRoYXQgYSBmcmFtZSBpcyBza2lwcGFibGUuDQorICogTm90ZTogSWYgZnBhcmFtc1B0
ci0+ZnJhbWVDb250ZW50U2l6ZT09MCwgaXQgaXMgYW1iaWd1b3VzOiB0aGUgZnJhbWUgbWlnaHQN
CisgKiAgICAgICBhY3R1YWxseSBiZSBhIHpzdGQgZW5jb2RlZCBmcmFtZSB3aXRoIG5vIGNvbnRl
bnQuIEZvciBwdXJwb3NlcyBvZg0KKyAqICAgICAgIGRlY29tcHJlc3Npb24sIGl0IGlzIHZhbGlk
IGluIGJvdGggY2FzZXMgdG8gc2tpcCB0aGUgZnJhbWUgdXNpbmcNCisgKiAgICAgICBaU1REX2Zp
bmRGcmFtZUNvbXByZXNzZWRTaXplKCkgdG8gZmluZCBpdHMgc2l6ZSBpbiBieXRlcy4NCisgKiBJ
dCBhbHNvIHJldHVybnMgZnJhbWUgc2l6ZSBhcyBmcGFyYW1zUHRyLT5mcmFtZUNvbnRlbnRTaXpl
Lg0KKyAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKiovDQorDQorLyo9PT09PSAgIEJ1ZmZlci1sZXNzIHN0
cmVhbWluZyBkZWNvbXByZXNzaW9uIGZ1bmN0aW9ucyAgPT09PT0qLw0KK3NpemVfdCBaU1REX2Rl
Y29tcHJlc3NCZWdpbihaU1REX0RDdHggKmRjdHgpOw0KK3NpemVfdCBaU1REX2RlY29tcHJlc3NC
ZWdpbl91c2luZ0RpY3QoWlNURF9EQ3R4ICpkY3R4LCBjb25zdCB2b2lkICpkaWN0LA0KKwlzaXpl
X3QgZGljdFNpemUpOw0KK3ZvaWQgICBaU1REX2NvcHlEQ3R4KFpTVERfREN0eCAqZGN0eCwgY29u
c3QgWlNURF9EQ3R4ICpwcmVwYXJlZERDdHgpOw0KK3NpemVfdCBaU1REX25leHRTcmNTaXplVG9E
ZWNvbXByZXNzKFpTVERfREN0eCAqZGN0eCk7DQorc2l6ZV90IFpTVERfZGVjb21wcmVzc0NvbnRp
bnVlKFpTVERfREN0eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksDQorCWNv
bnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpOw0KK3R5cGVkZWYgZW51bSB7DQorCVpTVERu
aXRfZnJhbWVIZWFkZXIsDQorCVpTVERuaXRfYmxvY2tIZWFkZXIsDQorCVpTVERuaXRfYmxvY2ss
DQorCVpTVERuaXRfbGFzdEJsb2NrLA0KKwlaU1REbml0X2NoZWNrc3VtLA0KKwlaU1REbml0X3Nr
aXBwYWJsZUZyYW1lDQorfSBaU1REX25leHRJbnB1dFR5cGVfZTsNCitaU1REX25leHRJbnB1dFR5
cGVfZSBaU1REX25leHRJbnB1dFR5cGUoWlNURF9EQ3R4ICpkY3R4KTsNCisNCisvKi0qKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKg0KKyAqIEJsb2NrIGZ1bmN0aW9ucw0KKyAqDQorICogQmxvY2sgZnVuY3Rp
b25zIHByb2R1Y2UgYW5kIGRlY29kZSByYXcgenN0ZCBibG9ja3MsIHdpdGhvdXQgZnJhbWUgbWV0
YWRhdGEuDQorICogRnJhbWUgbWV0YWRhdGEgY29zdCBpcyB0eXBpY2FsbHkgfjE4IGJ5dGVzLCB3
aGljaCBjYW4gYmUgbm9uLW5lZ2xpZ2libGUgZm9yDQorICogdmVyeSBzbWFsbCBibG9ja3MgKDwg
MTAwIGJ5dGVzKS4gVXNlciB3aWxsIGhhdmUgdG8gdGFrZSBpbiBjaGFyZ2UgcmVxdWlyZWQNCisg
KiBpbmZvcm1hdGlvbiB0byByZWdlbmVyYXRlIGRhdGEsIHN1Y2ggYXMgY29tcHJlc3NlZCBhbmQg
Y29udGVudCBzaXplcy4NCisgKg0KKyAqIEEgZmV3IHJ1bGVzIHRvIHJlc3BlY3Q6DQorICogLSBD
b21wcmVzc2luZyBhbmQgZGVjb21wcmVzc2luZyByZXF1aXJlIGEgY29udGV4dCBzdHJ1Y3R1cmUN
CisgKiAgICsgVXNlIFpTVERfaW5pdENDdHgoKSBhbmQgWlNURF9pbml0REN0eCgpDQorICogLSBJ
dCBpcyBuZWNlc3NhcnkgdG8gaW5pdCBjb250ZXh0IGJlZm9yZSBzdGFydGluZw0KKyAqICAgKyBj
b21wcmVzc2lvbiA6IFpTVERfY29tcHJlc3NCZWdpbigpDQorICogICArIGRlY29tcHJlc3Npb24g
OiBaU1REX2RlY29tcHJlc3NCZWdpbigpDQorICogICArIHZhcmlhbnRzIF91c2luZ0RpY3QoKSBh
cmUgYWxzbyBhbGxvd2VkDQorICogICArIGNvcHlDQ3R4KCkgYW5kIGNvcHlEQ3R4KCkgd29yayB0
b28NCisgKiAtIEJsb2NrIHNpemUgaXMgbGltaXRlZCwgaXQgbXVzdCBiZSA8PSBaU1REX2dldEJs
b2NrU2l6ZU1heCgpDQorICogICArIElmIHlvdSBuZWVkIHRvIGNvbXByZXNzIG1vcmUsIGN1dCBk
YXRhIGludG8gbXVsdGlwbGUgYmxvY2tzDQorICogICArIENvbnNpZGVyIHVzaW5nIHRoZSByZWd1
bGFyIFpTVERfY29tcHJlc3MoKSBpbnN0ZWFkLCBhcyBmcmFtZSBtZXRhZGF0YQ0KKyAqICAgICBj
b3N0cyBiZWNvbWUgbmVnbGlnaWJsZSB3aGVuIHNvdXJjZSBzaXplIGlzIGxhcmdlLg0KKyAqIC0g
V2hlbiBhIGJsb2NrIGlzIGNvbnNpZGVyZWQgbm90IGNvbXByZXNzaWJsZSBlbm91Z2gsIFpTVERf
Y29tcHJlc3NCbG9jaygpDQorICogICByZXN1bHQgd2lsbCBiZSB6ZXJvLiBJbiB3aGljaCBjYXNl
LCBub3RoaW5nIGlzIHByb2R1Y2VkIGludG8gYGRzdGAuDQorICogICArIFVzZXIgbXVzdCB0ZXN0
IGZvciBzdWNoIG91dGNvbWUgYW5kIGRlYWwgZGlyZWN0bHkgd2l0aCB1bmNvbXByZXNzZWQgZGF0
YQ0KKyAqICAgKyBaU1REX2RlY29tcHJlc3NCbG9jaygpIGRvZXNuJ3QgYWNjZXB0IHVuY29tcHJl
c3NlZCBkYXRhIGFzIGlucHV0ISEhDQorICogICArIEluIGNhc2Ugb2YgbXVsdGlwbGUgc3VjY2Vz
c2l2ZSBibG9ja3MsIGRlY29kZXIgbXVzdCBiZSBpbmZvcm1lZCBvZg0KKyAqICAgICB1bmNvbXBy
ZXNzZWQgYmxvY2sgZXhpc3RlbmNlIHRvIGZvbGxvdyBwcm9wZXIgaGlzdG9yeS4gVXNlDQorICog
ICAgIFpTVERfaW5zZXJ0QmxvY2soKSBpbiBzdWNoIGEgY2FzZS4NCisgKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqLw0KKw0KKy8qIERlZmluZSBmb3Igc3RhdGljIGFsbG9jYXRpb24gKi8NCisjZGVmaW5l
IFpTVERfQkxPQ0tTSVpFX0FCU09MVVRFTUFYICgxMjggKiAxMDI0KQ0KKy8qPT09PT0gICBSYXcg
enN0ZCBibG9jayBmdW5jdGlvbnMgID09PT09Ki8NCitzaXplX3QgWlNURF9nZXRCbG9ja1NpemVN
YXgoWlNURF9DQ3R4ICpjY3R4KTsNCitzaXplX3QgWlNURF9jb21wcmVzc0Jsb2NrKFpTVERfQ0N0
eCAqY2N0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksDQorCWNvbnN0IHZvaWQgKnNy
Yywgc2l6ZV90IHNyY1NpemUpOw0KK3NpemVfdCBaU1REX2RlY29tcHJlc3NCbG9jayhaU1REX0RD
dHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LA0KKwljb25zdCB2b2lkICpz
cmMsIHNpemVfdCBzcmNTaXplKTsNCitzaXplX3QgWlNURF9pbnNlcnRCbG9jayhaU1REX0RDdHgg
KmRjdHgsIGNvbnN0IHZvaWQgKmJsb2NrU3RhcnQsDQorCXNpemVfdCBibG9ja1NpemUpOw0KKw0K
KyNlbmRpZiAgLyogWlNURF9IICovDQotLSANCjIuMjYuMg0KDQo=

--8323328-1543918517-1606172471=:3753
Content-Type: text/plain; charset=US-ASCII; name=0002-adapt-kernel-code-for-use-in-xen-dom0-only-so-far.patch
Content-Transfer-Encoding: BASE64
Content-ID: <4062107-ca43-451-b053-683b7bc67bec@austen3.home>
Content-Description:
Content-Disposition: attachment; filename=0002-adapt-kernel-code-for-use-in-xen-dom0-only-so-far.patch

RnJvbSAzMTAyMDQ5OGEwMmY5NzdmM2I1N2VhNWE3NmYyNTFhZDEzM2FmYjQ5IE1vbiBTZXAgMTcg
MDA6MDA6MDAgMjAwMQ0KTWVzc2FnZS1JZDogPDMxMDIwNDk4YTAyZjk3N2YzYjU3ZWE1YTc2ZjI1
MWFkMTMzYWZiNDkuMTYwNjE3MDgzNi5naXQubS5hLnlvdW5nQGR1cmhhbS5hYy51az4NCkluLVJl
cGx5LVRvOiA8MWQ0ZDc2YTk5NzFmYmMzNTg4NzVkOTU1NDNkNmFhN2Q4ZTcyN2JhNC4xNjA2MTcw
ODM0LmdpdC5tLmEueW91bmdAZHVyaGFtLmFjLnVrPg0KUmVmZXJlbmNlczogPDFkNGQ3NmE5OTcx
ZmJjMzU4ODc1ZDk1NTQzZDZhYTdkOGU3MjdiYTQuMTYwNjE3MDgzNC5naXQubS5hLnlvdW5nQGR1
cmhhbS5hYy51az4NCkZyb206IE1pY2hhZWwgWW91bmcgPG0uYS55b3VuZ0BkdXJoYW0uYWMudWs+
DQpEYXRlOiBNb24sIDIzIE5vdiAyMDIwIDE5OjE3OjA3ICswMDAwDQpTdWJqZWN0OiBbWEVOIFBB
VENIIDIvMl0gYWRhcHQga2VybmVsIGNvZGUgZm9yIHVzZSBpbiB4ZW4gKGRvbTAgb25seSBzbyBm
YXIpDQoNCi0tLQ0KIHhlbi9jb21tb24vTWFrZWZpbGUgICAgICAgICAgICAgICAgICAgICAgICAg
IHwgICAyICstDQogeGVuL2NvbW1vbi9kZWNvbXByZXNzLmMgICAgICAgICAgICAgICAgICAgICAg
fCAgIDMgKw0KIHhlbi9jb21tb24ve2RlY29tcHJlc3NfdW56c3RkLmMgPT4gdW56c3RkLmN9IHwg
IDY5ICsrKy0tLS0NCiB4ZW4vY29tbW9uL3h4aGFzaC5jICAgICAgICAgICAgICAgICAgICAgICAg
ICB8ICA1MCArKy0tLS0NCiB4ZW4vY29tbW9uL3pzdGQvZGVjb21wcmVzcy5jICAgICAgICAgICAg
ICAgICB8IDE4MCArKysrKysrLS0tLS0tLS0tLS0tDQogeGVuL2NvbW1vbi96c3RkL2VudHJvcHlf
Y29tbW9uLmMgICAgICAgICAgICAgfCAgMTAgKy0NCiB4ZW4vY29tbW9uL3pzdGQvZXJyb3JfcHJp
dmF0ZS5oICAgICAgICAgICAgICB8ICAgNCArLQ0KIHhlbi9jb21tb24venN0ZC9mc2UuaCAgICAg
ICAgICAgICAgICAgICAgICAgIHwgICAyICstDQogeGVuL2NvbW1vbi96c3RkL2ZzZV9kZWNvbXBy
ZXNzLmMgICAgICAgICAgICAgfCAgMTYgKy0NCiB4ZW4vY29tbW9uL3pzdGQvaHVmLmggICAgICAg
ICAgICAgICAgICAgICAgICB8ICAgMiArLQ0KIHhlbi9jb21tb24venN0ZC9odWZfZGVjb21wcmVz
cy5jICAgICAgICAgICAgIHwgIDU4ICsrKy0tLQ0KIHhlbi9jb21tb24venN0ZC9tZW0uaCAgICAg
ICAgICAgICAgICAgICAgICAgIHwgICA2ICstDQogeGVuL2NvbW1vbi96c3RkL3ByaXZhdGUuaCAg
ICAgICAgICAgICAgICAgICAgfCAxMDUgKysrKysrKysrKysNCiB4ZW4vY29tbW9uL3pzdGQvenN0
ZF9jb21tb24uYyAgICAgICAgICAgICAgICB8ICAxMyArLQ0KIHhlbi9jb21tb24venN0ZC96c3Rk
X2ludGVybmFsLmggICAgICAgICAgICAgIHwgIDE0ICstDQogeGVuL2luY2x1ZGUveGVuL2RlY29t
cHJlc3MuaCAgICAgICAgICAgICAgICAgfCAgIDIgKy0NCiB4ZW4vaW5jbHVkZS94ZW4veHhoYXNo
LmggICAgICAgICAgICAgICAgICAgICB8ICAgMiArLQ0KIHhlbi9pbmNsdWRlL3hlbi96c3RkLmgg
ICAgICAgICAgICAgICAgICAgICAgIHwgICAyICstDQogMTggZmlsZXMgY2hhbmdlZCwgMjgyIGlu
c2VydGlvbnMoKyksIDI1OCBkZWxldGlvbnMoLSkNCiByZW5hbWUgeGVuL2NvbW1vbi97ZGVjb21w
cmVzc191bnpzdGQuYyA9PiB1bnpzdGQuY30gKDg2JSkNCiBjcmVhdGUgbW9kZSAxMDA2NDQgeGVu
L2NvbW1vbi96c3RkL3ByaXZhdGUuaA0KDQpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi9NYWtlZmls
ZSBiL3hlbi9jb21tb24vTWFrZWZpbGUNCmluZGV4IGQxMDlmMjc5YTQuLjViYTA5ZjA0YWMgMTAw
NjQ0DQotLS0gYS94ZW4vY29tbW9uL01ha2VmaWxlDQorKysgYi94ZW4vY29tbW9uL01ha2VmaWxl
DQpAQCAtNTksNyArNTksNyBAQCBvYmotYmluLXkgKz0gd2FybmluZy5pbml0Lm8NCiBvYmotJChD
T05GSUdfWEVOT1BST0YpICs9IHhlbm9wcm9mLm8NCiBvYmoteSArPSB4bWFsbG9jX3Rsc2Yubw0K
IA0KLW9iai1iaW4tJChDT05GSUdfWDg2KSArPSAkKGZvcmVhY2ggbixkZWNvbXByZXNzIGJ1bnpp
cDIgdW54eiB1bmx6bWEgbHpvIHVubHpvIHVubHo0IGVhcmx5Y3BpbywkKG4pLmluaXQubykNCitv
YmotYmluLSQoQ09ORklHX1g4NikgKz0gJChmb3JlYWNoIG4sZGVjb21wcmVzcyBidW56aXAyIHVu
eHogdW5sem1hIGx6byB1bmx6byB1bmx6NCB1bnpzdGQgZWFybHljcGlvLCQobikuaW5pdC5vKQ0K
IA0KIG9iai0kKENPTkZJR19DT01QQVQpICs9ICQoYWRkcHJlZml4IGNvbXBhdC8sZG9tYWluLm8g
a2VybmVsLm8gbWVtb3J5Lm8gbXVsdGljYWxsLm8geGxhdC5vKQ0KIA0KZGlmZiAtLWdpdCBhL3hl
bi9jb21tb24vZGVjb21wcmVzcy5jIGIveGVuL2NvbW1vbi9kZWNvbXByZXNzLmMNCmluZGV4IDlk
NmUwYzRhYjAuLjBkYTI3YjBhYjYgMTAwNjQ0DQotLS0gYS94ZW4vY29tbW9uL2RlY29tcHJlc3Mu
Yw0KKysrIGIveGVuL2NvbW1vbi9kZWNvbXByZXNzLmMNCkBAIC0zMSw1ICszMSw4IEBAIGludCBf
X2luaXQgZGVjb21wcmVzcyh2b2lkICppbmJ1ZiwgdW5zaWduZWQgaW50IGxlbiwgdm9pZCAqb3V0
YnVmKQ0KICAgICBpZiAoIGxlbiA+PSAyICYmICFtZW1jbXAoaW5idWYsICJceDAyXHgyMSIsIDIp
ICkNCiAJcmV0dXJuIHVubHo0KGluYnVmLCBsZW4sIE5VTEwsIE5VTEwsIG91dGJ1ZiwgTlVMTCwg
ZXJyb3IpOw0KIA0KKyAgICBpZiAoIGxlbiA+PSA0ICYmICFtZW1jbXAoaW5idWYsICJcMDUwXDI2
NVwwNTdcMzc1IiwgNCkgKQ0KKwlyZXR1cm4gdW56c3RkKGluYnVmLCBsZW4sIE5VTEwsIE5VTEws
IG91dGJ1ZiwgTlVMTCwgZXJyb3IpOw0KKw0KICAgICByZXR1cm4gMTsNCiB9DQpkaWZmIC0tZ2l0
IGEveGVuL2NvbW1vbi9kZWNvbXByZXNzX3VuenN0ZC5jIGIveGVuL2NvbW1vbi91bnpzdGQuYw0K
c2ltaWxhcml0eSBpbmRleCA4NiUNCnJlbmFtZSBmcm9tIHhlbi9jb21tb24vZGVjb21wcmVzc191
bnpzdGQuYw0KcmVuYW1lIHRvIHhlbi9jb21tb24vdW56c3RkLmMNCmluZGV4IDBhZDJjMTU0Nzku
LmEyYzM4MmZkZGMgMTAwNjQ0DQotLS0gYS94ZW4vY29tbW9uL2RlY29tcHJlc3NfdW56c3RkLmMN
CisrKyBiL3hlbi9jb21tb24vdW56c3RkLmMNCkBAIC0xLDUgKzEsMyBAQA0KLS8vIFNQRFgtTGlj
ZW5zZS1JZGVudGlmaWVyOiBHUEwtMi4wDQotDQogLyoNCiAgKiBJbXBvcnRhbnQgbm90ZXMgYWJv
dXQgaW4tcGxhY2UgZGVjb21wcmVzc2lvbg0KICAqDQpAQCAtNTEsNiArNDksMTAgQEANCiAgKg0K
ICAqICAgIHNhZmV0eV9tYXJnaW4gPSAyMiArIHVuY29tcHJlc3NlZF9zaXplICogMyAvIDEzMTA3
MiArIDEzMTA3Mg0KICAqICAgICAgICAgICAgICAgICA8PSAyMiArICh1bmNvbXByZXNzZWRfc2l6
ZSA+PiAxNSkgKyAxMzEwNzINCisgKg0KKyAqIFRoaXMgcHJvZ3JhbSBpcyBmcmVlIHNvZnR3YXJl
OyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5DQorICogaXQgdW5kZXIgdGhl
IHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJzaW9uIDIgYXMNCisg
KiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRhdGlvbi4NCiAgKi8NCiANCiAv
Kg0KQEAgLTY1LDE5ICs2NywxNiBAQA0KICAqIERlZmluZSBfX0RJU0FCTEVfRVhQT1JUUyBpbiBw
cmVib290IGVudmlyb25tZW50cyB0byBwcmV2ZW50IHN5bWJvbHMNCiAgKiBmcm9tIHh4aGFzaCBh
bmQgenN0ZCBmcm9tIGJlaW5nIGV4cG9ydGVkIGJ5IHRoZSBFWFBPUlRfU1lNQk9MIG1hY3JvLg0K
ICAqLw0KLSNpZmRlZiBTVEFUSUMNCi0jIGRlZmluZSBVTlpTVERfUFJFQk9PVA0KLSMgaW5jbHVk
ZSAieHhoYXNoLmMiDQotIyBpbmNsdWRlICJ6c3RkL2VudHJvcHlfY29tbW9uLmMiDQotIyBpbmNs
dWRlICJ6c3RkL2ZzZV9kZWNvbXByZXNzLmMiDQotIyBpbmNsdWRlICJ6c3RkL2h1Zl9kZWNvbXBy
ZXNzLmMiDQotIyBpbmNsdWRlICJ6c3RkL3pzdGRfY29tbW9uLmMiDQotIyBpbmNsdWRlICJ6c3Rk
L2RlY29tcHJlc3MuYyINCi0jZW5kaWYNCiANCi0jaW5jbHVkZSA8bGludXgvZGVjb21wcmVzcy9t
bS5oPg0KLSNpbmNsdWRlIDxsaW51eC9rZXJuZWwuaD4NCi0jaW5jbHVkZSA8bGludXgvenN0ZC5o
Pg0KKyNpbmNsdWRlICJkZWNvbXByZXNzLmgiDQorI2luY2x1ZGUgInh4aGFzaC5jIg0KKyNpbmNs
dWRlICJ6c3RkL2VudHJvcHlfY29tbW9uLmMiDQorI2luY2x1ZGUgInpzdGQvZnNlX2RlY29tcHJl
c3MuYyINCisjaW5jbHVkZSAienN0ZC9odWZfZGVjb21wcmVzcy5jIg0KKyNpbmNsdWRlICJ6c3Rk
L3pzdGRfY29tbW9uLmMiDQorI2luY2x1ZGUgInpzdGQvZGVjb21wcmVzcy5jIg0KKw0KKyNpbmNs
dWRlIDx4ZW4venN0ZC5oPg0KIA0KIC8qIDEyOE1CIGlzIHRoZSBtYXhpbXVtIHdpbmRvdyBzaXpl
IHN1cHBvcnRlZCBieSB6c3RkLiAqLw0KICNkZWZpbmUgWlNURF9XSU5ET1dTSVpFX01BWAkoMSA8
PCBaU1REX1dJTkRPV0xPR19NQVgpDQpAQCAtODksNyArODgsNyBAQA0KICAqLw0KICNkZWZpbmUg
WlNURF9JT0JVRl9TSVpFCQkoMSA8PCAxNykNCiANCi1zdGF0aWMgaW50IElOSVQgaGFuZGxlX3pz
dGRfZXJyb3Ioc2l6ZV90IHJldCwgdm9pZCAoKmVycm9yKShjaGFyICp4KSkNCitzdGF0aWMgaW50
IElOSVQgaGFuZGxlX3pzdGRfZXJyb3Ioc2l6ZV90IHJldCwgdm9pZCAoKmVycm9yKShjb25zdCBj
aGFyICp4KSkNCiB7DQogCWNvbnN0IGludCBlcnIgPSBaU1REX2dldEVycm9yQ29kZShyZXQpOw0K
IA0KQEAgLTEyMCw5ICsxMTksOSBAQCBzdGF0aWMgaW50IElOSVQgaGFuZGxlX3pzdGRfZXJyb3Io
c2l6ZV90IHJldCwgdm9pZCAoKmVycm9yKShjaGFyICp4KSkNCiAgKiBXZSBjYW4gYWxsb2NhdGUg
bGVzcyBtZW1vcnkgKG5vIGNpcmN1bGFyIGJ1ZmZlciBmb3IgdGhlIHNsaWRpbmcgd2luZG93KSwN
CiAgKiBhbmQgYXZvaWQgc29tZSBtZW1jcHkoKSBjYWxscy4NCiAgKi8NCi1zdGF0aWMgaW50IElO
SVQgZGVjb21wcmVzc19zaW5nbGUoY29uc3QgdTggKmluX2J1ZiwgbG9uZyBpbl9sZW4sIHU4ICpv
dXRfYnVmLA0KLQkJCQkgIGxvbmcgb3V0X2xlbiwgbG9uZyAqaW5fcG9zLA0KLQkJCQkgIHZvaWQg
KCplcnJvcikoY2hhciAqeCkpDQorc3RhdGljIGludCBJTklUIGRlY29tcHJlc3Nfc2luZ2xlKGNv
bnN0IHU4ICppbl9idWYsIHVuc2lnbmVkIGludCBpbl9sZW4sIHU4ICpvdXRfYnVmLA0KKwkJCQkg
IGxvbmcgb3V0X2xlbiwgdW5zaWduZWQgaW50ICppbl9wb3MsDQorCQkJCSAgdm9pZCAoKmVycm9y
KShjb25zdCBjaGFyICp4KSkNCiB7DQogCWNvbnN0IHNpemVfdCB3a3NwX3NpemUgPSBaU1REX0RD
dHhXb3Jrc3BhY2VCb3VuZCgpOw0KIAl2b2lkICp3a3NwID0gbGFyZ2VfbWFsbG9jKHdrc3Bfc2l6
ZSk7DQpAQCAtMTYwLDEyICsxNTksMTIgQEAgb3V0Og0KIAlyZXR1cm4gZXJyOw0KIH0NCiANCi1z
dGF0aWMgaW50IElOSVQgX191bnpzdGQodW5zaWduZWQgY2hhciAqaW5fYnVmLCBsb25nIGluX2xl
biwNCi0JCQkgbG9uZyAoKmZpbGwpKHZvaWQqLCB1bnNpZ25lZCBsb25nKSwNCi0JCQkgbG9uZyAo
KmZsdXNoKSh2b2lkKiwgdW5zaWduZWQgbG9uZyksDQorc3RhdGljIGludCBJTklUIF9fdW56c3Rk
KHVuc2lnbmVkIGNoYXIgKmluX2J1ZiwgdW5zaWduZWQgaW50IGluX2xlbiwNCisJCQkgaW50ICgq
ZmlsbCkodm9pZCosIHVuc2lnbmVkIGludCksDQorCQkJIGludCAoKmZsdXNoKSh2b2lkKiwgdW5z
aWduZWQgaW50KSwNCiAJCQkgdW5zaWduZWQgY2hhciAqb3V0X2J1ZiwgbG9uZyBvdXRfbGVuLA0K
LQkJCSBsb25nICppbl9wb3MsDQotCQkJIHZvaWQgKCplcnJvcikoY2hhciAqeCkpDQorCQkJIHVu
c2lnbmVkIGludCAqaW5fcG9zLA0KKwkJCSB2b2lkICgqZXJyb3IpKGNvbnN0IGNoYXIgKngpKQ0K
IHsNCiAJWlNURF9pbkJ1ZmZlciBpbjsNCiAJWlNURF9vdXRCdWZmZXIgb3V0Ow0KQEAgLTE3OSw3
ICsxNzgsNyBAQCBzdGF0aWMgaW50IElOSVQgX191bnpzdGQodW5zaWduZWQgY2hhciAqaW5fYnVm
LCBsb25nIGluX2xlbiwNCiAJc2l6ZV90IHJldDsNCiANCiAJaWYgKG91dF9sZW4gPT0gMCkNCi0J
CW91dF9sZW4gPSBMT05HX01BWDsgLyogbm8gbGltaXQgKi8NCisJCW91dF9sZW4gPSBJTlRfTUFY
OyAvKiBubyBsaW1pdCAqLw0KIA0KIAlpZiAoZmlsbCA9PSBOVUxMICYmIGZsdXNoID09IE5VTEwp
DQogCQkvKg0KQEAgLTMyMiwyNCArMzIxLDEyIEBAIG91dDoNCiAJcmV0dXJuIGVycjsNCiB9DQog
DQotI2lmbmRlZiBVTlpTVERfUFJFQk9PVA0KLVNUQVRJQyBpbnQgSU5JVCB1bnpzdGQodW5zaWdu
ZWQgY2hhciAqYnVmLCBsb25nIGxlbiwNCi0JCSAgICAgICBsb25nICgqZmlsbCkodm9pZCosIHVu
c2lnbmVkIGxvbmcpLA0KLQkJICAgICAgIGxvbmcgKCpmbHVzaCkodm9pZCosIHVuc2lnbmVkIGxv
bmcpLA0KK1NUQVRJQyBpbnQgSU5JVCB1bnpzdGQodW5zaWduZWQgY2hhciAqYnVmLCB1bnNpZ25l
ZCBpbnQgbGVuLA0KKwkJICAgICAgIGludCAoKmZpbGwpKHZvaWQqLCB1bnNpZ25lZCBpbnQpLA0K
KwkJICAgICAgIGludCAoKmZsdXNoKSh2b2lkKiwgdW5zaWduZWQgaW50KSwNCiAJCSAgICAgICB1
bnNpZ25lZCBjaGFyICpvdXRfYnVmLA0KLQkJICAgICAgIGxvbmcgKnBvcywNCi0JCSAgICAgICB2
b2lkICgqZXJyb3IpKGNoYXIgKngpKQ0KKwkJICAgICAgIHVuc2lnbmVkIGludCAqcG9zLA0KKwkJ
ICAgICAgIHZvaWQgKCplcnJvcikoY29uc3QgY2hhciAqeCkpDQogew0KIAlyZXR1cm4gX191bnpz
dGQoYnVmLCBsZW4sIGZpbGwsIGZsdXNoLCBvdXRfYnVmLCAwLCBwb3MsIGVycm9yKTsNCiB9DQot
I2Vsc2UNCi1TVEFUSUMgaW50IElOSVQgX19kZWNvbXByZXNzKHVuc2lnbmVkIGNoYXIgKmJ1Ziwg
bG9uZyBsZW4sDQotCQkJICAgICBsb25nICgqZmlsbCkodm9pZCosIHVuc2lnbmVkIGxvbmcpLA0K
LQkJCSAgICAgbG9uZyAoKmZsdXNoKSh2b2lkKiwgdW5zaWduZWQgbG9uZyksDQotCQkJICAgICB1
bnNpZ25lZCBjaGFyICpvdXRfYnVmLCBsb25nIG91dF9sZW4sDQotCQkJICAgICBsb25nICpwb3Ms
DQotCQkJICAgICB2b2lkICgqZXJyb3IpKGNoYXIgKngpKQ0KLXsNCi0JcmV0dXJuIF9fdW56c3Rk
KGJ1ZiwgbGVuLCBmaWxsLCBmbHVzaCwgb3V0X2J1Ziwgb3V0X2xlbiwgcG9zLCBlcnJvcik7DQot
fQ0KLSNlbmRpZg0KZGlmZiAtLWdpdCBhL3hlbi9jb21tb24veHhoYXNoLmMgYi94ZW4vY29tbW9u
L3h4aGFzaC5jDQppbmRleCBkNWJiOWZmMTA2Li4zYWIzZTAxODU5IDEwMDY0NA0KLS0tIGEveGVu
L2NvbW1vbi94eGhhc2guYw0KKysrIGIveGVuL2NvbW1vbi94eGhhc2guYw0KQEAgLTM4LDEzICsz
OCwxMCBAQA0KICAqIC0geHhIYXNoIHNvdXJjZSByZXBvc2l0b3J5OiBodHRwczovL2dpdGh1Yi5j
b20vQ3lhbjQ5NzMveHhIYXNoDQogICovDQogDQotI2luY2x1ZGUgPGFzbS91bmFsaWduZWQuaD4N
Ci0jaW5jbHVkZSA8bGludXgvZXJybm8uaD4NCi0jaW5jbHVkZSA8bGludXgvY29tcGlsZXIuaD4N
Ci0jaW5jbHVkZSA8bGludXgva2VybmVsLmg+DQotI2luY2x1ZGUgPGxpbnV4L21vZHVsZS5oPg0K
LSNpbmNsdWRlIDxsaW51eC9zdHJpbmcuaD4NCi0jaW5jbHVkZSA8bGludXgveHhoYXNoLmg+DQor
I2luY2x1ZGUgPHhlbi9zdHJpbmcuaD4NCisjaW5jbHVkZSA8eGVuL2Vycm5vLmg+DQorI2luY2x1
ZGUgPHhlbi94eGhhc2guaD4NCisjaW5jbHVkZSAienN0ZC9wcml2YXRlLmgiDQogDQogLyotKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KICAqIE1hY3Jvcw0KQEAgLTc2LDIy
ICs3MywyMCBAQCBzdGF0aWMgY29uc3QgdWludDY0X3QgUFJJTUU2NF81ID0gIDI4NzAxNzc0NTAw
MTI2MDAyNjFVTEw7DQogLyotKioqKioqKioqKioqKioqKioqKioqKioqKioNCiAgKiAgVXRpbHMN
CiAgKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLXZvaWQgeHhoMzJfY29weV9zdGF0ZShz
dHJ1Y3QgeHhoMzJfc3RhdGUgKmRzdCwgY29uc3Qgc3RydWN0IHh4aDMyX3N0YXRlICpzcmMpDQor
dm9pZCBJTklUIHh4aDMyX2NvcHlfc3RhdGUoc3RydWN0IHh4aDMyX3N0YXRlICpkc3QsIGNvbnN0
IHN0cnVjdCB4eGgzMl9zdGF0ZSAqc3JjKQ0KIHsNCiAJbWVtY3B5KGRzdCwgc3JjLCBzaXplb2Yo
KmRzdCkpOw0KIH0NCi1FWFBPUlRfU1lNQk9MKHh4aDMyX2NvcHlfc3RhdGUpOw0KIA0KLXZvaWQg
eHhoNjRfY29weV9zdGF0ZShzdHJ1Y3QgeHhoNjRfc3RhdGUgKmRzdCwgY29uc3Qgc3RydWN0IHh4
aDY0X3N0YXRlICpzcmMpDQordm9pZCBJTklUIHh4aDY0X2NvcHlfc3RhdGUoc3RydWN0IHh4aDY0
X3N0YXRlICpkc3QsIGNvbnN0IHN0cnVjdCB4eGg2NF9zdGF0ZSAqc3JjKQ0KIHsNCiAJbWVtY3B5
KGRzdCwgc3JjLCBzaXplb2YoKmRzdCkpOw0KIH0NCi1FWFBPUlRfU1lNQk9MKHh4aDY0X2NvcHlf
c3RhdGUpOw0KIA0KIC8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKg0KICAqIFNpbXBsZSBI
YXNoIEZ1bmN0aW9ucw0KICAqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLXN0YXRpYyB1
aW50MzJfdCB4eGgzMl9yb3VuZCh1aW50MzJfdCBzZWVkLCBjb25zdCB1aW50MzJfdCBpbnB1dCkN
CitzdGF0aWMgdWludDMyX3QgSU5JVCB4eGgzMl9yb3VuZCh1aW50MzJfdCBzZWVkLCBjb25zdCB1
aW50MzJfdCBpbnB1dCkNCiB7DQogCXNlZWQgKz0gaW5wdXQgKiBQUklNRTMyXzI7DQogCXNlZWQg
PSB4eGhfcm90bDMyKHNlZWQsIDEzKTsNCkBAIC05OSw3ICs5NCw3IEBAIHN0YXRpYyB1aW50MzJf
dCB4eGgzMl9yb3VuZCh1aW50MzJfdCBzZWVkLCBjb25zdCB1aW50MzJfdCBpbnB1dCkNCiAJcmV0
dXJuIHNlZWQ7DQogfQ0KIA0KLXVpbnQzMl90IHh4aDMyKGNvbnN0IHZvaWQgKmlucHV0LCBjb25z
dCBzaXplX3QgbGVuLCBjb25zdCB1aW50MzJfdCBzZWVkKQ0KK3VpbnQzMl90IElOSVQgeHhoMzIo
Y29uc3Qgdm9pZCAqaW5wdXQsIGNvbnN0IHNpemVfdCBsZW4sIGNvbnN0IHVpbnQzMl90IHNlZWQp
DQogew0KIAljb25zdCB1aW50OF90ICpwID0gKGNvbnN0IHVpbnQ4X3QgKilpbnB1dDsNCiAJY29u
c3QgdWludDhfdCAqYl9lbmQgPSBwICsgbGVuOw0KQEAgLTE1MSw5ICsxNDYsOCBAQCB1aW50MzJf
dCB4eGgzMihjb25zdCB2b2lkICppbnB1dCwgY29uc3Qgc2l6ZV90IGxlbiwgY29uc3QgdWludDMy
X3Qgc2VlZCkNCiANCiAJcmV0dXJuIGgzMjsNCiB9DQotRVhQT1JUX1NZTUJPTCh4eGgzMik7DQog
DQotc3RhdGljIHVpbnQ2NF90IHh4aDY0X3JvdW5kKHVpbnQ2NF90IGFjYywgY29uc3QgdWludDY0
X3QgaW5wdXQpDQorc3RhdGljIHVpbnQ2NF90IElOSVQgeHhoNjRfcm91bmQodWludDY0X3QgYWNj
LCBjb25zdCB1aW50NjRfdCBpbnB1dCkNCiB7DQogCWFjYyArPSBpbnB1dCAqIFBSSU1FNjRfMjsN
CiAJYWNjID0geHhoX3JvdGw2NChhY2MsIDMxKTsNCkBAIC0xNjEsNyArMTU1LDcgQEAgc3RhdGlj
IHVpbnQ2NF90IHh4aDY0X3JvdW5kKHVpbnQ2NF90IGFjYywgY29uc3QgdWludDY0X3QgaW5wdXQp
DQogCXJldHVybiBhY2M7DQogfQ0KIA0KLXN0YXRpYyB1aW50NjRfdCB4eGg2NF9tZXJnZV9yb3Vu
ZCh1aW50NjRfdCBhY2MsIHVpbnQ2NF90IHZhbCkNCitzdGF0aWMgdWludDY0X3QgSU5JVCB4eGg2
NF9tZXJnZV9yb3VuZCh1aW50NjRfdCBhY2MsIHVpbnQ2NF90IHZhbCkNCiB7DQogCXZhbCA9IHh4
aDY0X3JvdW5kKDAsIHZhbCk7DQogCWFjYyBePSB2YWw7DQpAQCAtMTY5LDcgKzE2Myw3IEBAIHN0
YXRpYyB1aW50NjRfdCB4eGg2NF9tZXJnZV9yb3VuZCh1aW50NjRfdCBhY2MsIHVpbnQ2NF90IHZh
bCkNCiAJcmV0dXJuIGFjYzsNCiB9DQogDQotdWludDY0X3QgeHhoNjQoY29uc3Qgdm9pZCAqaW5w
dXQsIGNvbnN0IHNpemVfdCBsZW4sIGNvbnN0IHVpbnQ2NF90IHNlZWQpDQordWludDY0X3QgSU5J
VCB4eGg2NChjb25zdCB2b2lkICppbnB1dCwgY29uc3Qgc2l6ZV90IGxlbiwgY29uc3QgdWludDY0
X3Qgc2VlZCkNCiB7DQogCWNvbnN0IHVpbnQ4X3QgKnAgPSAoY29uc3QgdWludDhfdCAqKWlucHV0
Ow0KIAljb25zdCB1aW50OF90ICpjb25zdCBiX2VuZCA9IHAgKyBsZW47DQpAQCAtMjM0LDEyICsy
MjgsMTEgQEAgdWludDY0X3QgeHhoNjQoY29uc3Qgdm9pZCAqaW5wdXQsIGNvbnN0IHNpemVfdCBs
ZW4sIGNvbnN0IHVpbnQ2NF90IHNlZWQpDQogDQogCXJldHVybiBoNjQ7DQogfQ0KLUVYUE9SVF9T
WU1CT0woeHhoNjQpOw0KIA0KIC8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqDQogICogQWR2YW5jZWQgSGFzaCBGdW5jdGlvbnMNCiAgKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLXZvaWQgeHhoMzJf
cmVzZXQoc3RydWN0IHh4aDMyX3N0YXRlICpzdGF0ZVB0ciwgY29uc3QgdWludDMyX3Qgc2VlZCkN
Cit2b2lkIElOSVQgeHhoMzJfcmVzZXQoc3RydWN0IHh4aDMyX3N0YXRlICpzdGF0ZVB0ciwgY29u
c3QgdWludDMyX3Qgc2VlZCkNCiB7DQogCS8qIHVzZSBhIGxvY2FsIHN0YXRlIGZvciBtZW1jcHko
KSB0byBhdm9pZCBzdHJpY3QtYWxpYXNpbmcgd2FybmluZ3MgKi8NCiAJc3RydWN0IHh4aDMyX3N0
YXRlIHN0YXRlOw0KQEAgLTI1MSw5ICsyNDQsOCBAQCB2b2lkIHh4aDMyX3Jlc2V0KHN0cnVjdCB4
eGgzMl9zdGF0ZSAqc3RhdGVQdHIsIGNvbnN0IHVpbnQzMl90IHNlZWQpDQogCXN0YXRlLnY0ID0g
c2VlZCAtIFBSSU1FMzJfMTsNCiAJbWVtY3B5KHN0YXRlUHRyLCAmc3RhdGUsIHNpemVvZihzdGF0
ZSkpOw0KIH0NCi1FWFBPUlRfU1lNQk9MKHh4aDMyX3Jlc2V0KTsNCiANCi12b2lkIHh4aDY0X3Jl
c2V0KHN0cnVjdCB4eGg2NF9zdGF0ZSAqc3RhdGVQdHIsIGNvbnN0IHVpbnQ2NF90IHNlZWQpDQor
dm9pZCBJTklUIHh4aDY0X3Jlc2V0KHN0cnVjdCB4eGg2NF9zdGF0ZSAqc3RhdGVQdHIsIGNvbnN0
IHVpbnQ2NF90IHNlZWQpDQogew0KIAkvKiB1c2UgYSBsb2NhbCBzdGF0ZSBmb3IgbWVtY3B5KCkg
dG8gYXZvaWQgc3RyaWN0LWFsaWFzaW5nIHdhcm5pbmdzICovDQogCXN0cnVjdCB4eGg2NF9zdGF0
ZSBzdGF0ZTsNCkBAIC0yNjUsOSArMjU3LDggQEAgdm9pZCB4eGg2NF9yZXNldChzdHJ1Y3QgeHho
NjRfc3RhdGUgKnN0YXRlUHRyLCBjb25zdCB1aW50NjRfdCBzZWVkKQ0KIAlzdGF0ZS52NCA9IHNl
ZWQgLSBQUklNRTY0XzE7DQogCW1lbWNweShzdGF0ZVB0ciwgJnN0YXRlLCBzaXplb2Yoc3RhdGUp
KTsNCiB9DQotRVhQT1JUX1NZTUJPTCh4eGg2NF9yZXNldCk7DQogDQotaW50IHh4aDMyX3VwZGF0
ZShzdHJ1Y3QgeHhoMzJfc3RhdGUgKnN0YXRlLCBjb25zdCB2b2lkICppbnB1dCwgY29uc3Qgc2l6
ZV90IGxlbikNCitpbnQgSU5JVCB4eGgzMl91cGRhdGUoc3RydWN0IHh4aDMyX3N0YXRlICpzdGF0
ZSwgY29uc3Qgdm9pZCAqaW5wdXQsIGNvbnN0IHNpemVfdCBsZW4pDQogew0KIAljb25zdCB1aW50
OF90ICpwID0gKGNvbnN0IHVpbnQ4X3QgKilpbnB1dDsNCiAJY29uc3QgdWludDhfdCAqY29uc3Qg
Yl9lbmQgPSBwICsgbGVuOw0KQEAgLTMzNCw5ICszMjUsOCBAQCBpbnQgeHhoMzJfdXBkYXRlKHN0
cnVjdCB4eGgzMl9zdGF0ZSAqc3RhdGUsIGNvbnN0IHZvaWQgKmlucHV0LCBjb25zdCBzaXplX3Qg
bGVuKQ0KIA0KIAlyZXR1cm4gMDsNCiB9DQotRVhQT1JUX1NZTUJPTCh4eGgzMl91cGRhdGUpOw0K
IA0KLXVpbnQzMl90IHh4aDMyX2RpZ2VzdChjb25zdCBzdHJ1Y3QgeHhoMzJfc3RhdGUgKnN0YXRl
KQ0KK3VpbnQzMl90IElOSVQgeHhoMzJfZGlnZXN0KGNvbnN0IHN0cnVjdCB4eGgzMl9zdGF0ZSAq
c3RhdGUpDQogew0KIAljb25zdCB1aW50OF90ICpwID0gKGNvbnN0IHVpbnQ4X3QgKilzdGF0ZS0+
bWVtMzI7DQogCWNvbnN0IHVpbnQ4X3QgKmNvbnN0IGJfZW5kID0gKGNvbnN0IHVpbnQ4X3QgKiko
c3RhdGUtPm1lbTMyKSArDQpAQCAtMzcyLDkgKzM2Miw4IEBAIHVpbnQzMl90IHh4aDMyX2RpZ2Vz
dChjb25zdCBzdHJ1Y3QgeHhoMzJfc3RhdGUgKnN0YXRlKQ0KIA0KIAlyZXR1cm4gaDMyOw0KIH0N
Ci1FWFBPUlRfU1lNQk9MKHh4aDMyX2RpZ2VzdCk7DQogDQotaW50IHh4aDY0X3VwZGF0ZShzdHJ1
Y3QgeHhoNjRfc3RhdGUgKnN0YXRlLCBjb25zdCB2b2lkICppbnB1dCwgY29uc3Qgc2l6ZV90IGxl
bikNCitpbnQgSU5JVCB4eGg2NF91cGRhdGUoc3RydWN0IHh4aDY0X3N0YXRlICpzdGF0ZSwgY29u
c3Qgdm9pZCAqaW5wdXQsIGNvbnN0IHNpemVfdCBsZW4pDQogew0KIAljb25zdCB1aW50OF90ICpw
ID0gKGNvbnN0IHVpbnQ4X3QgKilpbnB1dDsNCiAJY29uc3QgdWludDhfdCAqY29uc3QgYl9lbmQg
PSBwICsgbGVuOw0KQEAgLTQzOSw5ICs0MjgsOCBAQCBpbnQgeHhoNjRfdXBkYXRlKHN0cnVjdCB4
eGg2NF9zdGF0ZSAqc3RhdGUsIGNvbnN0IHZvaWQgKmlucHV0LCBjb25zdCBzaXplX3QgbGVuKQ0K
IA0KIAlyZXR1cm4gMDsNCiB9DQotRVhQT1JUX1NZTUJPTCh4eGg2NF91cGRhdGUpOw0KIA0KLXVp
bnQ2NF90IHh4aDY0X2RpZ2VzdChjb25zdCBzdHJ1Y3QgeHhoNjRfc3RhdGUgKnN0YXRlKQ0KK3Vp
bnQ2NF90IElOSVQgeHhoNjRfZGlnZXN0KGNvbnN0IHN0cnVjdCB4eGg2NF9zdGF0ZSAqc3RhdGUp
DQogew0KIAljb25zdCB1aW50OF90ICpwID0gKGNvbnN0IHVpbnQ4X3QgKilzdGF0ZS0+bWVtNjQ7
DQogCWNvbnN0IHVpbnQ4X3QgKmNvbnN0IGJfZW5kID0gKGNvbnN0IHVpbnQ4X3QgKilzdGF0ZS0+
bWVtNjQgKw0KQEAgLTQ5NCw3ICs0ODIsMyBAQCB1aW50NjRfdCB4eGg2NF9kaWdlc3QoY29uc3Qg
c3RydWN0IHh4aDY0X3N0YXRlICpzdGF0ZSkNCiANCiAJcmV0dXJuIGg2NDsNCiB9DQotRVhQT1JU
X1NZTUJPTCh4eGg2NF9kaWdlc3QpOw0KLQ0KLU1PRFVMRV9MSUNFTlNFKCJEdWFsIEJTRC9HUEwi
KTsNCi1NT0RVTEVfREVTQ1JJUFRJT04oInh4SGFzaCIpOw0KZGlmZiAtLWdpdCBhL3hlbi9jb21t
b24venN0ZC9kZWNvbXByZXNzLmMgYi94ZW4vY29tbW9uL3pzdGQvZGVjb21wcmVzcy5jDQppbmRl
eCBkYjY3NjFlYTRkLi44ZTYyN2Q4ODFhIDEwMDY0NA0KLS0tIGEveGVuL2NvbW1vbi96c3RkL2Rl
Y29tcHJlc3MuYw0KKysrIGIveGVuL2NvbW1vbi96c3RkL2RlY29tcHJlc3MuYw0KQEAgLTMzLDkg
KzMzLDcgQEANCiAjaW5jbHVkZSAiaHVmLmgiDQogI2luY2x1ZGUgIm1lbS5oIiAvKiBsb3cgbGV2
ZWwgbWVtb3J5IHJvdXRpbmVzICovDQogI2luY2x1ZGUgInpzdGRfaW50ZXJuYWwuaCINCi0jaW5j
bHVkZSA8bGludXgva2VybmVsLmg+DQotI2luY2x1ZGUgPGxpbnV4L21vZHVsZS5oPg0KLSNpbmNs
dWRlIDxsaW51eC9zdHJpbmcuaD4gLyogbWVtY3B5LCBtZW1tb3ZlLCBtZW1zZXQgKi8NCisjaW5j
bHVkZSA8eGVuL3N0cmluZy5oPiAvKiBtZW1jcHksIG1lbW1vdmUsIG1lbXNldCAqLw0KIA0KICNk
ZWZpbmUgWlNURF9QUkVGRVRDSChwdHIpIF9fYnVpbHRpbl9wcmVmZXRjaChwdHIsIDAsIDApDQog
DQpAQCAtNDksNyArNDcsNyBAQA0KIC8qXyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioNCiAqICBNZW1vcnkgb3BlcmF0aW9ucw0KICoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQotc3Rh
dGljIHZvaWQgWlNURF9jb3B5NCh2b2lkICpkc3QsIGNvbnN0IHZvaWQgKnNyYykgeyBtZW1jcHko
ZHN0LCBzcmMsIDQpOyB9DQorc3RhdGljIHZvaWQgSU5JVCBaU1REX2NvcHk0KHZvaWQgKmRzdCwg
Y29uc3Qgdm9pZCAqc3JjKSB7IG1lbWNweShkc3QsIHNyYywgNCk7IH0NCiANCiAvKi0qKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQog
KiAgIENvbnRleHQgbWFuYWdlbWVudA0KQEAgLTEwMSw5ICs5OSw5IEBAIHN0cnVjdCBaU1REX0RD
dHhfcyB7DQogCUJZVEUgaGVhZGVyQnVmZmVyW1pTVERfRlJBTUVIRUFERVJTSVpFX01BWF07DQog
fTsgLyogdHlwZWRlZidkIHRvIFpTVERfREN0eCB3aXRoaW4gInpzdGQuaCIgKi8NCiANCi1zaXpl
X3QgWlNURF9EQ3R4V29ya3NwYWNlQm91bmQodm9pZCkgeyByZXR1cm4gWlNURF9BTElHTihzaXpl
b2YoWlNURF9zdGFjaykpICsgWlNURF9BTElHTihzaXplb2YoWlNURF9EQ3R4KSk7IH0NCitzaXpl
X3QgSU5JVCBaU1REX0RDdHhXb3Jrc3BhY2VCb3VuZCh2b2lkKSB7IHJldHVybiBaU1REX0FMSUdO
KHNpemVvZihaU1REX3N0YWNrKSkgKyBaU1REX0FMSUdOKHNpemVvZihaU1REX0RDdHgpKTsgfQ0K
IA0KLXNpemVfdCBaU1REX2RlY29tcHJlc3NCZWdpbihaU1REX0RDdHggKmRjdHgpDQorc2l6ZV90
IElOSVQgWlNURF9kZWNvbXByZXNzQmVnaW4oWlNURF9EQ3R4ICpkY3R4KQ0KIHsNCiAJZGN0eC0+
ZXhwZWN0ZWQgPSBaU1REX2ZyYW1lSGVhZGVyU2l6ZV9wcmVmaXg7DQogCWRjdHgtPnN0YWdlID0g
WlNURGRzX2dldEZyYW1lSGVhZGVyU2l6ZTsNCkBAIC0xMjMsNyArMTIxLDcgQEAgc2l6ZV90IFpT
VERfZGVjb21wcmVzc0JlZ2luKFpTVERfREN0eCAqZGN0eCkNCiAJcmV0dXJuIDA7DQogfQ0KIA0K
LVpTVERfREN0eCAqWlNURF9jcmVhdGVEQ3R4X2FkdmFuY2VkKFpTVERfY3VzdG9tTWVtIGN1c3Rv
bU1lbSkNCitaU1REX0RDdHggSU5JVCAqWlNURF9jcmVhdGVEQ3R4X2FkdmFuY2VkKFpTVERfY3Vz
dG9tTWVtIGN1c3RvbU1lbSkNCiB7DQogCVpTVERfREN0eCAqZGN0eDsNCiANCkBAIC0xMzgsMTMg
KzEzNiwxMyBAQCBaU1REX0RDdHggKlpTVERfY3JlYXRlREN0eF9hZHZhbmNlZChaU1REX2N1c3Rv
bU1lbSBjdXN0b21NZW0pDQogCXJldHVybiBkY3R4Ow0KIH0NCiANCi1aU1REX0RDdHggKlpTVERf
aW5pdERDdHgodm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCitaU1REX0RD
dHggSU5JVCAqWlNURF9pbml0REN0eCh2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VT
aXplKQ0KIHsNCiAJWlNURF9jdXN0b21NZW0gY29uc3Qgc3RhY2tNZW0gPSBaU1REX2luaXRTdGFj
ayh3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0KIAlyZXR1cm4gWlNURF9jcmVhdGVEQ3R4X2Fk
dmFuY2VkKHN0YWNrTWVtKTsNCiB9DQogDQotc2l6ZV90IFpTVERfZnJlZURDdHgoWlNURF9EQ3R4
ICpkY3R4KQ0KK3NpemVfdCBJTklUIFpTVERfZnJlZURDdHgoWlNURF9EQ3R4ICpkY3R4KQ0KIHsN
CiAJaWYgKGRjdHggPT0gTlVMTCkNCiAJCXJldHVybiAwOyAvKiBzdXBwb3J0IGZyZWUgb24gTlVM
TCAqLw0KQEAgLTE1MiwxMyArMTUwLDEzIEBAIHNpemVfdCBaU1REX2ZyZWVEQ3R4KFpTVERfREN0
eCAqZGN0eCkNCiAJcmV0dXJuIDA7IC8qIHJlc2VydmVkIGFzIGEgcG90ZW50aWFsIGVycm9yIGNv
ZGUgaW4gdGhlIGZ1dHVyZSAqLw0KIH0NCiANCi12b2lkIFpTVERfY29weURDdHgoWlNURF9EQ3R4
ICpkc3REQ3R4LCBjb25zdCBaU1REX0RDdHggKnNyY0RDdHgpDQordm9pZCBJTklUIFpTVERfY29w
eURDdHgoWlNURF9EQ3R4ICpkc3REQ3R4LCBjb25zdCBaU1REX0RDdHggKnNyY0RDdHgpDQogew0K
IAlzaXplX3QgY29uc3Qgd29ya1NwYWNlU2l6ZSA9IChaU1REX0JMT0NLU0laRV9BQlNPTFVURU1B
WCArIFdJTERDT1BZX09WRVJMRU5HVEgpICsgWlNURF9mcmFtZUhlYWRlclNpemVfbWF4Ow0KIAlt
ZW1jcHkoZHN0REN0eCwgc3JjREN0eCwgc2l6ZW9mKFpTVERfREN0eCkgLSB3b3JrU3BhY2VTaXpl
KTsgLyogbm8gbmVlZCB0byBjb3B5IHdvcmtzcGFjZSAqLw0KIH0NCiANCi1zdGF0aWMgdm9pZCBa
U1REX3JlZkREaWN0KFpTVERfREN0eCAqZHN0REN0eCwgY29uc3QgWlNURF9ERGljdCAqZGRpY3Qp
Ow0KK3N0YXRpYyB2b2lkIElOSVQgWlNURF9yZWZERGljdChaU1REX0RDdHggKmRzdERDdHgsIGNv
bnN0IFpTVERfRERpY3QgKmRkaWN0KTsNCiANCiAvKi0qKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQogKiAgIERlY29tcHJlc3Npb24g
c2VjdGlvbg0KQEAgLTE2OSw3ICsxNjcsNyBAQCBzdGF0aWMgdm9pZCBaU1REX3JlZkREaWN0KFpT
VERfREN0eCAqZHN0REN0eCwgY29uc3QgWlNURF9ERGljdCAqZGRpY3QpOw0KICAqICBOb3RlIDog
RnJhbWUgSWRlbnRpZmllciBpcyA0IGJ5dGVzLiBJZiBgc2l6ZSA8IDRgLCBAcmV0dXJuIHdpbGwg
YWx3YXlzIGJlIDAuDQogICogIE5vdGUgMiA6IExlZ2FjeSBGcmFtZSBJZGVudGlmaWVycyBhcmUg
Y29uc2lkZXJlZCB2YWxpZCBvbmx5IGlmIExlZ2FjeSBTdXBwb3J0IGlzIGVuYWJsZWQuDQogICog
IE5vdGUgMyA6IFNraXBwYWJsZSBGcmFtZSBJZGVudGlmaWVycyBhcmUgY29uc2lkZXJlZCB2YWxp
ZC4gKi8NCi11bnNpZ25lZCBaU1REX2lzRnJhbWUoY29uc3Qgdm9pZCAqYnVmZmVyLCBzaXplX3Qg
c2l6ZSkNCit1bnNpZ25lZCBJTklUIFpTVERfaXNGcmFtZShjb25zdCB2b2lkICpidWZmZXIsIHNp
emVfdCBzaXplKQ0KIHsNCiAJaWYgKHNpemUgPCA0KQ0KIAkJcmV0dXJuIDA7DQpAQCAtMTg2LDcg
KzE4NCw3IEBAIHVuc2lnbmVkIFpTVERfaXNGcmFtZShjb25zdCB2b2lkICpidWZmZXIsIHNpemVf
dCBzaXplKQ0KIC8qKiBaU1REX2ZyYW1lSGVhZGVyU2l6ZSgpIDoNCiAqICAgc3JjU2l6ZSBtdXN0
IGJlID49IFpTVERfZnJhbWVIZWFkZXJTaXplX3ByZWZpeC4NCiAqICAgQHJldHVybiA6IHNpemUg
b2YgdGhlIEZyYW1lIEhlYWRlciAqLw0KLXN0YXRpYyBzaXplX3QgWlNURF9mcmFtZUhlYWRlclNp
emUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCitzdGF0aWMgc2l6ZV90IElOSVQg
WlNURF9mcmFtZUhlYWRlclNpemUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCiB7
DQogCWlmIChzcmNTaXplIDwgWlNURF9mcmFtZUhlYWRlclNpemVfcHJlZml4KQ0KIAkJcmV0dXJu
IEVSUk9SKHNyY1NpemVfd3JvbmcpOw0KQEAgLTIwNCw3ICsyMDIsNyBAQCBzdGF0aWMgc2l6ZV90
IFpTVERfZnJhbWVIZWFkZXJTaXplKGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQog
KiAgIEByZXR1cm4gOiAwLCBgZnBhcmFtc1B0cmAgaXMgY29ycmVjdGx5IGZpbGxlZCwNCiAqICAg
ICAgICAgICAgPjAsIGBzcmNTaXplYCBpcyB0b28gc21hbGwsIHJlc3VsdCBpcyBleHBlY3RlZCBg
c3JjU2l6ZWAsDQogKiAgICAgICAgICAgICBvciBhbiBlcnJvciBjb2RlLCB3aGljaCBjYW4gYmUg
dGVzdGVkIHVzaW5nIFpTVERfaXNFcnJvcigpICovDQotc2l6ZV90IFpTVERfZ2V0RnJhbWVQYXJh
bXMoWlNURF9mcmFtZVBhcmFtcyAqZnBhcmFtc1B0ciwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qg
c3JjU2l6ZSkNCitzaXplX3QgSU5JVCBaU1REX2dldEZyYW1lUGFyYW1zKFpTVERfZnJhbWVQYXJh
bXMgKmZwYXJhbXNQdHIsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQogew0KIAlj
b25zdCBCWVRFICppcCA9IChjb25zdCBCWVRFICopc3JjOw0KIA0KQEAgLTI5NCw3ICsyOTIsNyBA
QCBzaXplX3QgWlNURF9nZXRGcmFtZVBhcmFtcyhaU1REX2ZyYW1lUGFyYW1zICpmcGFyYW1zUHRy
LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdA0KICogICBAcmV0dXJuIDogZGVjb21wcmVzc2VkIHNp
emUgb2YgdGhlIHNpbmdsZSBmcmFtZSBwb2ludGVkIHRvIGJlIGBzcmNgIGlmIGtub3duLCBvdGhl
cndpc2UNCiAqICAgICAgICAgICAgIC0gWlNURF9DT05URU5UU0laRV9VTktOT1dOIGlmIHRoZSBz
aXplIGNhbm5vdCBiZSBkZXRlcm1pbmVkDQogKiAgICAgICAgICAgICAtIFpTVERfQ09OVEVOVFNJ
WkVfRVJST1IgaWYgYW4gZXJyb3Igb2NjdXJyZWQgKGUuZy4gaW52YWxpZCBtYWdpYyBudW1iZXIs
IHNyY1NpemUgdG9vIHNtYWxsKSAqLw0KLXVuc2lnbmVkIGxvbmcgbG9uZyBaU1REX2dldEZyYW1l
Q29udGVudFNpemUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCit1bnNpZ25lZCBs
b25nIGxvbmcgSU5JVCBaU1REX2dldEZyYW1lQ29udGVudFNpemUoY29uc3Qgdm9pZCAqc3JjLCBz
aXplX3Qgc3JjU2l6ZSkNCiB7DQogCXsNCiAJCVpTVERfZnJhbWVQYXJhbXMgZlBhcmFtczsNCkBA
IC0zMTYsNyArMzE0LDcgQEAgdW5zaWduZWQgbG9uZyBsb25nIFpTVERfZ2V0RnJhbWVDb250ZW50
U2l6ZShjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KICAqICBgc3JjU2l6ZWAgbXVz
dCBiZSB0aGUgZXhhY3QgbGVuZ3RoIG9mIHNvbWUgbnVtYmVyIG9mIFpTVEQgY29tcHJlc3NlZCBh
bmQvb3INCiAgKiAgICAgIHNraXBwYWJsZSBmcmFtZXMNCiAgKiAgQHJldHVybiA6IGRlY29tcHJl
c3NlZCBzaXplIG9mIHRoZSBmcmFtZXMgY29udGFpbmVkICovDQotdW5zaWduZWQgbG9uZyBsb25n
IFpTVERfZmluZERlY29tcHJlc3NlZFNpemUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6
ZSkNCit1bnNpZ25lZCBsb25nIGxvbmcgSU5JVCBaU1REX2ZpbmREZWNvbXByZXNzZWRTaXplKGNv
bnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQogew0KIAl7DQogCQl1bnNpZ25lZCBsb25n
IGxvbmcgdG90YWxEc3RTaXplID0gMDsNCkBAIC0zNjksNyArMzY3LDcgQEAgdW5zaWduZWQgbG9u
ZyBsb25nIFpTVERfZmluZERlY29tcHJlc3NlZFNpemUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qg
c3JjU2l6ZSkNCiAvKiogWlNURF9kZWNvZGVGcmFtZUhlYWRlcigpIDoNCiAqICAgYGhlYWRlclNp
emVgIG11c3QgYmUgdGhlIHNpemUgcHJvdmlkZWQgYnkgWlNURF9mcmFtZUhlYWRlclNpemUoKS4N
CiAqICAgQHJldHVybiA6IDAgaWYgc3VjY2Vzcywgb3IgYW4gZXJyb3IgY29kZSwgd2hpY2ggY2Fu
IGJlIHRlc3RlZCB1c2luZyBaU1REX2lzRXJyb3IoKSAqLw0KLXN0YXRpYyBzaXplX3QgWlNURF9k
ZWNvZGVGcmFtZUhlYWRlcihaU1REX0RDdHggKmRjdHgsIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90
IGhlYWRlclNpemUpDQorc3RhdGljIHNpemVfdCBJTklUIFpTVERfZGVjb2RlRnJhbWVIZWFkZXIo
WlNURF9EQ3R4ICpkY3R4LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBoZWFkZXJTaXplKQ0KIHsN
CiAJc2l6ZV90IGNvbnN0IHJlc3VsdCA9IFpTVERfZ2V0RnJhbWVQYXJhbXMoJihkY3R4LT5mUGFy
YW1zKSwgc3JjLCBoZWFkZXJTaXplKTsNCiAJaWYgKFpTVERfaXNFcnJvcihyZXN1bHQpKQ0KQEAg
LTM5MSw3ICszODksNyBAQCB0eXBlZGVmIHN0cnVjdCB7DQogDQogLyohIFpTVERfZ2V0Y0Jsb2Nr
U2l6ZSgpIDoNCiAqICAgUHJvdmlkZXMgdGhlIHNpemUgb2YgY29tcHJlc3NlZCBibG9jayBmcm9t
IGJsb2NrIGhlYWRlciBgc3JjYCAqLw0KLXNpemVfdCBaU1REX2dldGNCbG9ja1NpemUoY29uc3Qg
dm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgYmxvY2tQcm9wZXJ0aWVzX3QgKmJwUHRyKQ0KK3Np
emVfdCBJTklUIFpTVERfZ2V0Y0Jsb2NrU2l6ZShjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNT
aXplLCBibG9ja1Byb3BlcnRpZXNfdCAqYnBQdHIpDQogew0KIAlpZiAoc3JjU2l6ZSA8IFpTVERf
YmxvY2tIZWFkZXJTaXplKQ0KIAkJcmV0dXJuIEVSUk9SKHNyY1NpemVfd3JvbmcpOw0KQEAgLTQw
OSw3ICs0MDcsNyBAQCBzaXplX3QgWlNURF9nZXRjQmxvY2tTaXplKGNvbnN0IHZvaWQgKnNyYywg
c2l6ZV90IHNyY1NpemUsIGJsb2NrUHJvcGVydGllc190ICpicA0KIAl9DQogfQ0KIA0KLXN0YXRp
YyBzaXplX3QgWlNURF9jb3B5UmF3QmxvY2sodm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHks
IGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQorc3RhdGljIHNpemVfdCBJTklUIFpT
VERfY29weVJhd0Jsb2NrKHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lk
ICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KIHsNCiAJaWYgKHNyY1NpemUgPiBkc3RDYXBhY2l0eSkN
CiAJCXJldHVybiBFUlJPUihkc3RTaXplX3Rvb1NtYWxsKTsNCkBAIC00MTcsNyArNDE1LDcgQEAg
c3RhdGljIHNpemVfdCBaU1REX2NvcHlSYXdCbG9jayh2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBh
Y2l0eSwgY29uc3Qgdm9pZCAqc3JjLA0KIAlyZXR1cm4gc3JjU2l6ZTsNCiB9DQogDQotc3RhdGlj
IHNpemVfdCBaU1REX3NldFJsZUJsb2NrKHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBj
b25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCBzaXplX3QgcmVnZW5TaXplKQ0KK3N0YXRp
YyBzaXplX3QgSU5JVCBaU1REX3NldFJsZUJsb2NrKHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFj
aXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCBzaXplX3QgcmVnZW5TaXplKQ0K
IHsNCiAJaWYgKHNyY1NpemUgIT0gMSkNCiAJCXJldHVybiBFUlJPUihzcmNTaXplX3dyb25nKTsN
CkBAIC00MjksNyArNDI3LDcgQEAgc3RhdGljIHNpemVfdCBaU1REX3NldFJsZUJsb2NrKHZvaWQg
KmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHMNCiANCiAvKiEgWlNU
RF9kZWNvZGVMaXRlcmFsc0Jsb2NrKCkgOg0KIAlAcmV0dXJuIDogbmIgb2YgYnl0ZXMgcmVhZCBm
cm9tIHNyYyAoPCBzcmNTaXplICkgKi8NCi1zaXplX3QgWlNURF9kZWNvZGVMaXRlcmFsc0Jsb2Nr
KFpTVERfREN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkgLyogbm90
ZSA6IHNyY1NpemUgPCBCTE9DS1NJWkUgKi8NCitzaXplX3QgSU5JVCBaU1REX2RlY29kZUxpdGVy
YWxzQmxvY2soWlNURF9EQ3R4ICpkY3R4LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXpl
KSAvKiBub3RlIDogc3JjU2l6ZSA8IEJMT0NLU0laRSAqLw0KIHsNCiAJaWYgKHNyY1NpemUgPCBN
SU5fQ0JMT0NLX1NJWkUpDQogCQlyZXR1cm4gRVJST1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7DQpA
QCAtNzQ5LDcgKzc0Nyw3IEBAIHN0YXRpYyBjb25zdCBGU0VfZGVjb2RlX3Q0IE9GX2RlZmF1bHRE
VGFibGVbKDEgPDwgT0ZfREVGQVVMVE5PUk1MT0cpICsgMV0gPSB7DQogCUByZXR1cm4gOiBuYiBi
eXRlcyByZWFkIGZyb20gc3JjLA0KIAkJCSAgb3IgYW4gZXJyb3IgY29kZSBpZiBpdCBmYWlscywg
dGVzdGFibGUgd2l0aCBaU1REX2lzRXJyb3IoKQ0KICovDQotc3RhdGljIHNpemVfdCBaU1REX2J1
aWxkU2VxVGFibGUoRlNFX0RUYWJsZSAqRFRhYmxlU3BhY2UsIGNvbnN0IEZTRV9EVGFibGUgKipE
VGFibGVQdHIsIHN5bWJvbEVuY29kaW5nVHlwZV9lIHR5cGUsIFUzMiBtYXgsIFUzMiBtYXhMb2cs
IGNvbnN0IHZvaWQgKnNyYywNCitzdGF0aWMgc2l6ZV90IElOSVQgWlNURF9idWlsZFNlcVRhYmxl
KEZTRV9EVGFibGUgKkRUYWJsZVNwYWNlLCBjb25zdCBGU0VfRFRhYmxlICoqRFRhYmxlUHRyLCBz
eW1ib2xFbmNvZGluZ1R5cGVfZSB0eXBlLCBVMzIgbWF4LCBVMzIgbWF4TG9nLCBjb25zdCB2b2lk
ICpzcmMsDQogCQkJCSBzaXplX3Qgc3JjU2l6ZSwgY29uc3QgRlNFX2RlY29kZV90NCAqZGVmYXVs
dFRhYmxlLCBVMzIgZmxhZ1JlcGVhdFRhYmxlLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jr
c3BhY2VTaXplKQ0KIHsNCiAJY29uc3Qgdm9pZCAqY29uc3QgdG1wUHRyID0gZGVmYXVsdFRhYmxl
OyAvKiBieXBhc3Mgc3RyaWN0IGFsaWFzaW5nICovDQpAQCAtNzkxLDcgKzc4OSw3IEBAIHN0YXRp
YyBzaXplX3QgWlNURF9idWlsZFNlcVRhYmxlKEZTRV9EVGFibGUgKkRUYWJsZVNwYWNlLCBjb25z
dCBGU0VfRFRhYmxlICoqRFRhDQogCX0NCiB9DQogDQotc2l6ZV90IFpTVERfZGVjb2RlU2VxSGVh
ZGVycyhaU1REX0RDdHggKmRjdHgsIGludCAqbmJTZXFQdHIsIGNvbnN0IHZvaWQgKnNyYywgc2l6
ZV90IHNyY1NpemUpDQorc2l6ZV90IElOSVQgWlNURF9kZWNvZGVTZXFIZWFkZXJzKFpTVERfREN0
eCAqZGN0eCwgaW50ICpuYlNlcVB0ciwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkN
CiB7DQogCWNvbnN0IEJZVEUgKmNvbnN0IGlzdGFydCA9IChjb25zdCBCWVRFICpjb25zdClzcmM7
DQogCWNvbnN0IEJZVEUgKmNvbnN0IGllbmQgPSBpc3RhcnQgKyBzcmNTaXplOw0KQEAgLTg3Nyw3
ICs4NzUsNyBAQCB0eXBlZGVmIHN0cnVjdCB7DQogfSBzZXFTdGF0ZV90Ow0KIA0KIEZPUkNFX05P
SU5MSU5FDQotc2l6ZV90IFpTVERfZXhlY1NlcXVlbmNlTGFzdDcoQllURSAqb3AsIEJZVEUgKmNv
bnN0IG9lbmQsIHNlcV90IHNlcXVlbmNlLCBjb25zdCBCWVRFICoqbGl0UHRyLCBjb25zdCBCWVRF
ICpjb25zdCBsaXRMaW1pdCwgY29uc3QgQllURSAqY29uc3QgYmFzZSwNCitzaXplX3QgSU5JVCBa
U1REX2V4ZWNTZXF1ZW5jZUxhc3Q3KEJZVEUgKm9wLCBCWVRFICpjb25zdCBvZW5kLCBzZXFfdCBz
ZXF1ZW5jZSwgY29uc3QgQllURSAqKmxpdFB0ciwgY29uc3QgQllURSAqY29uc3QgbGl0TGltaXQs
IGNvbnN0IEJZVEUgKmNvbnN0IGJhc2UsDQogCQkJICAgICAgY29uc3QgQllURSAqY29uc3QgdkJh
c2UsIGNvbnN0IEJZVEUgKmNvbnN0IGRpY3RFbmQpDQogew0KIAlCWVRFICpjb25zdCBvTGl0RW5k
ID0gb3AgKyBzZXF1ZW5jZS5saXRMZW5ndGg7DQpAQCAtOTI4LDcgKzkyNiw3IEBAIHNpemVfdCBa
U1REX2V4ZWNTZXF1ZW5jZUxhc3Q3KEJZVEUgKm9wLCBCWVRFICpjb25zdCBvZW5kLCBzZXFfdCBz
ZXF1ZW5jZSwgY29uc3QNCiAJcmV0dXJuIHNlcXVlbmNlTGVuZ3RoOw0KIH0NCiANCi1zdGF0aWMg
c2VxX3QgWlNURF9kZWNvZGVTZXF1ZW5jZShzZXFTdGF0ZV90ICpzZXFTdGF0ZSkNCitzdGF0aWMg
c2VxX3QgSU5JVCBaU1REX2RlY29kZVNlcXVlbmNlKHNlcVN0YXRlX3QgKnNlcVN0YXRlKQ0KIHsN
CiAJc2VxX3Qgc2VxOw0KIA0KQEAgLTEwOTAsNyArMTA4OCw3IEBAIHNpemVfdCBaU1REX2V4ZWNT
ZXF1ZW5jZShCWVRFICpvcCwgQllURSAqY29uc3Qgb2VuZCwgc2VxX3Qgc2VxdWVuY2UsIGNvbnN0
IEJZVEUNCiAJcmV0dXJuIHNlcXVlbmNlTGVuZ3RoOw0KIH0NCiANCi1zdGF0aWMgc2l6ZV90IFpT
VERfZGVjb21wcmVzc1NlcXVlbmNlcyhaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90
IG1heERzdFNpemUsIGNvbnN0IHZvaWQgKnNlcVN0YXJ0LCBzaXplX3Qgc2VxU2l6ZSkNCitzdGF0
aWMgc2l6ZV90IElOSVQgWlNURF9kZWNvbXByZXNzU2VxdWVuY2VzKFpTVERfREN0eCAqZGN0eCwg
dm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qgdm9pZCAqc2VxU3RhcnQsIHNpemVf
dCBzZXFTaXplKQ0KIHsNCiAJY29uc3QgQllURSAqaXAgPSAoY29uc3QgQllURSAqKXNlcVN0YXJ0
Ow0KIAljb25zdCBCWVRFICpjb25zdCBpZW5kID0gaXAgKyBzZXFTaXplOw0KQEAgLTExNjAsNyAr
MTE1OCw3IEBAIHN0YXRpYyBzaXplX3QgWlNURF9kZWNvbXByZXNzU2VxdWVuY2VzKFpTVERfREN0
eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0DQogCXJldHVybiBvcCAtIG9zdGFydDsN
CiB9DQogDQotRk9SQ0VfSU5MSU5FIHNlcV90IFpTVERfZGVjb2RlU2VxdWVuY2VMb25nX2dlbmVy
aWMoc2VxU3RhdGVfdCAqc2VxU3RhdGUsIGludCBjb25zdCBsb25nT2Zmc2V0cykNCitGT1JDRV9J
TkxJTkUgc2VxX3QgSU5JVCBaU1REX2RlY29kZVNlcXVlbmNlTG9uZ19nZW5lcmljKHNlcVN0YXRl
X3QgKnNlcVN0YXRlLCBpbnQgY29uc3QgbG9uZ09mZnNldHMpDQogew0KIAlzZXFfdCBzZXE7DQog
DQpAQCAtMTI1MCw3ICsxMjQ4LDcgQEAgRk9SQ0VfSU5MSU5FIHNlcV90IFpTVERfZGVjb2RlU2Vx
dWVuY2VMb25nX2dlbmVyaWMoc2VxU3RhdGVfdCAqc2VxU3RhdGUsIGludCBjb24NCiAJcmV0dXJu
IHNlcTsNCiB9DQogDQotc3RhdGljIHNlcV90IFpTVERfZGVjb2RlU2VxdWVuY2VMb25nKHNlcVN0
YXRlX3QgKnNlcVN0YXRlLCB1bnNpZ25lZCBjb25zdCB3aW5kb3dTaXplKQ0KK3N0YXRpYyBzZXFf
dCBJTklUIFpTVERfZGVjb2RlU2VxdWVuY2VMb25nKHNlcVN0YXRlX3QgKnNlcVN0YXRlLCB1bnNp
Z25lZCBjb25zdCB3aW5kb3dTaXplKQ0KIHsNCiAJaWYgKFpTVERfaGlnaGJpdDMyKHdpbmRvd1Np
emUpID4gU1RSRUFNX0FDQ1VNVUxBVE9SX01JTikgew0KIAkJcmV0dXJuIFpTVERfZGVjb2RlU2Vx
dWVuY2VMb25nX2dlbmVyaWMoc2VxU3RhdGUsIDEpOw0KQEAgLTEzNDUsNyArMTM0Myw3IEBAIHNp
emVfdCBaU1REX2V4ZWNTZXF1ZW5jZUxvbmcoQllURSAqb3AsIEJZVEUgKmNvbnN0IG9lbmQsIHNl
cV90IHNlcXVlbmNlLCBjb25zdCBCDQogCXJldHVybiBzZXF1ZW5jZUxlbmd0aDsNCiB9DQogDQot
c3RhdGljIHNpemVfdCBaU1REX2RlY29tcHJlc3NTZXF1ZW5jZXNMb25nKFpTVERfREN0eCAqZGN0
eCwgdm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qgdm9pZCAqc2VxU3RhcnQsIHNp
emVfdCBzZXFTaXplKQ0KK3N0YXRpYyBzaXplX3QgSU5JVCBaU1REX2RlY29tcHJlc3NTZXF1ZW5j
ZXNMb25nKFpTVERfREN0eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29u
c3Qgdm9pZCAqc2VxU3RhcnQsIHNpemVfdCBzZXFTaXplKQ0KIHsNCiAJY29uc3QgQllURSAqaXAg
PSAoY29uc3QgQllURSAqKXNlcVN0YXJ0Ow0KIAljb25zdCBCWVRFICpjb25zdCBpZW5kID0gaXAg
KyBzZXFTaXplOw0KQEAgLTE0NDIsNyArMTQ0MCw3IEBAIHN0YXRpYyBzaXplX3QgWlNURF9kZWNv
bXByZXNzU2VxdWVuY2VzTG9uZyhaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IG1h
DQogCXJldHVybiBvcCAtIG9zdGFydDsNCiB9DQogDQotc3RhdGljIHNpemVfdCBaU1REX2RlY29t
cHJlc3NCbG9ja19pbnRlcm5hbChaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRz
dENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KK3N0YXRpYyBzaXpl
X3QgSU5JVCBaU1REX2RlY29tcHJlc3NCbG9ja19pbnRlcm5hbChaU1REX0RDdHggKmRjdHgsIHZv
aWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNT
aXplKQ0KIHsgLyogYmxvY2tUeXBlID09IGJsb2NrQ29tcHJlc3NlZCAqLw0KIAljb25zdCBCWVRF
ICppcCA9IChjb25zdCBCWVRFICopc3JjOw0KIA0KQEAgLTE0NjYsNyArMTQ2NCw3IEBAIHN0YXRp
YyBzaXplX3QgWlNURF9kZWNvbXByZXNzQmxvY2tfaW50ZXJuYWwoWlNURF9EQ3R4ICpkY3R4LCB2
b2lkICpkc3QsIHNpemVfdCBkDQogCXJldHVybiBaU1REX2RlY29tcHJlc3NTZXF1ZW5jZXMoZGN0
eCwgZHN0LCBkc3RDYXBhY2l0eSwgaXAsIHNyY1NpemUpOw0KIH0NCiANCi1zdGF0aWMgdm9pZCBa
U1REX2NoZWNrQ29udGludWl0eShaU1REX0RDdHggKmRjdHgsIGNvbnN0IHZvaWQgKmRzdCkNCitz
dGF0aWMgdm9pZCBJTklUIFpTVERfY2hlY2tDb250aW51aXR5KFpTVERfREN0eCAqZGN0eCwgY29u
c3Qgdm9pZCAqZHN0KQ0KIHsNCiAJaWYgKGRzdCAhPSBkY3R4LT5wcmV2aW91c0RzdEVuZCkgeyAv
KiBub3QgY29udGlndW91cyAqLw0KIAkJZGN0eC0+ZGljdEVuZCA9IGRjdHgtPnByZXZpb3VzRHN0
RW5kOw0KQEAgLTE0NzYsNyArMTQ3NCw3IEBAIHN0YXRpYyB2b2lkIFpTVERfY2hlY2tDb250aW51
aXR5KFpTVERfREN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqZHN0KQ0KIAl9DQogfQ0KIA0KLXNpemVf
dCBaU1REX2RlY29tcHJlc3NCbG9jayhaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90
IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KK3NpemVfdCBJ
TklUIFpTVERfZGVjb21wcmVzc0Jsb2NrKFpTVERfREN0eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXpl
X3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQogew0KIAlz
aXplX3QgZFNpemU7DQogCVpTVERfY2hlY2tDb250aW51aXR5KGRjdHgsIGRzdCk7DQpAQCAtMTQ4
NywxNCArMTQ4NSwxNCBAQCBzaXplX3QgWlNURF9kZWNvbXByZXNzQmxvY2soWlNURF9EQ3R4ICpk
Y3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgY29ucw0KIA0KIC8qKiBaU1REX2lu
c2VydEJsb2NrKCkgOg0KIAlpbnNlcnQgYHNyY2AgYmxvY2sgaW50byBgZGN0eGAgaGlzdG9yeS4g
VXNlZnVsIHRvIHRyYWNrIHVuY29tcHJlc3NlZCBibG9ja3MuICovDQotc2l6ZV90IFpTVERfaW5z
ZXJ0QmxvY2soWlNURF9EQ3R4ICpkY3R4LCBjb25zdCB2b2lkICpibG9ja1N0YXJ0LCBzaXplX3Qg
YmxvY2tTaXplKQ0KK3NpemVfdCBJTklUIFpTVERfaW5zZXJ0QmxvY2soWlNURF9EQ3R4ICpkY3R4
LCBjb25zdCB2b2lkICpibG9ja1N0YXJ0LCBzaXplX3QgYmxvY2tTaXplKQ0KIHsNCiAJWlNURF9j
aGVja0NvbnRpbnVpdHkoZGN0eCwgYmxvY2tTdGFydCk7DQogCWRjdHgtPnByZXZpb3VzRHN0RW5k
ID0gKGNvbnN0IGNoYXIgKilibG9ja1N0YXJ0ICsgYmxvY2tTaXplOw0KIAlyZXR1cm4gYmxvY2tT
aXplOw0KIH0NCiANCi1zaXplX3QgWlNURF9nZW5lcmF0ZU54Qnl0ZXModm9pZCAqZHN0LCBzaXpl
X3QgZHN0Q2FwYWNpdHksIEJZVEUgYnl0ZSwgc2l6ZV90IGxlbmd0aCkNCitzaXplX3QgSU5JVCBa
U1REX2dlbmVyYXRlTnhCeXRlcyh2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgQllURSBi
eXRlLCBzaXplX3QgbGVuZ3RoKQ0KIHsNCiAJaWYgKGxlbmd0aCA+IGRzdENhcGFjaXR5KQ0KIAkJ
cmV0dXJuIEVSUk9SKGRzdFNpemVfdG9vU21hbGwpOw0KQEAgLTE1MDcsNyArMTUwNSw3IEBAIHNp
emVfdCBaU1REX2dlbmVyYXRlTnhCeXRlcyh2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwg
QllURSBieXRlLCBzaXplX3QgbGVuDQogICogIGBzcmNgIG11c3QgcG9pbnQgdG8gdGhlIHN0YXJ0
IG9mIGEgWlNURCBmcmFtZSwgWlNURCBsZWdhY3kgZnJhbWUsIG9yIHNraXBwYWJsZSBmcmFtZQ0K
ICAqICBgc3JjU2l6ZWAgbXVzdCBiZSBhdCBsZWFzdCBhcyBsYXJnZSBhcyB0aGUgZnJhbWUgY29u
dGFpbmVkDQogICogIEByZXR1cm4gOiB0aGUgY29tcHJlc3NlZCBzaXplIG9mIHRoZSBmcmFtZSBz
dGFydGluZyBhdCBgc3JjYCAqLw0KLXNpemVfdCBaU1REX2ZpbmRGcmFtZUNvbXByZXNzZWRTaXpl
KGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQorc2l6ZV90IElOSVQgWlNURF9maW5k
RnJhbWVDb21wcmVzc2VkU2l6ZShjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KIHsN
CiAJaWYgKHNyY1NpemUgPj0gWlNURF9za2lwcGFibGVIZWFkZXJTaXplICYmIChaU1REX3JlYWRM
RTMyKHNyYykgJiAweEZGRkZGRkYwVSkgPT0gWlNURF9NQUdJQ19TS0lQUEFCTEVfU1RBUlQpIHsN
CiAJCXJldHVybiBaU1REX3NraXBwYWJsZUhlYWRlclNpemUgKyBaU1REX3JlYWRMRTMyKChjb25z
dCBCWVRFICopc3JjICsgNCk7DQpAQCAtMTU2Myw3ICsxNTYxLDcgQEAgc2l6ZV90IFpTVERfZmlu
ZEZyYW1lQ29tcHJlc3NlZFNpemUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCiAN
CiAvKiEgWlNURF9kZWNvbXByZXNzRnJhbWUoKSA6DQogKiAgIEBkY3R4IG11c3QgYmUgcHJvcGVy
bHkgaW5pdGlhbGl6ZWQgKi8NCi1zdGF0aWMgc2l6ZV90IFpTVERfZGVjb21wcmVzc0ZyYW1lKFpT
VERfREN0eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQg
KipzcmNQdHIsIHNpemVfdCAqc3JjU2l6ZVB0cikNCitzdGF0aWMgc2l6ZV90IElOSVQgWlNURF9k
ZWNvbXByZXNzRnJhbWUoWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBh
Y2l0eSwgY29uc3Qgdm9pZCAqKnNyY1B0ciwgc2l6ZV90ICpzcmNTaXplUHRyKQ0KIHsNCiAJY29u
c3QgQllURSAqaXAgPSAoY29uc3QgQllURSAqKSgqc3JjUHRyKTsNCiAJQllURSAqY29uc3Qgb3N0
YXJ0ID0gKEJZVEUgKiBjb25zdClkc3Q7DQpAQCAtMTYzNywxMCArMTYzNSwxMCBAQCBzdGF0aWMg
c2l6ZV90IFpTVERfZGVjb21wcmVzc0ZyYW1lKFpTVERfREN0eCAqZGN0eCwgdm9pZCAqZHN0LCBz
aXplX3QgZHN0Q2FwYWNpdA0KIAlyZXR1cm4gb3AgLSBvc3RhcnQ7DQogfQ0KIA0KLXN0YXRpYyBj
b25zdCB2b2lkICpaU1REX0REaWN0RGljdENvbnRlbnQoY29uc3QgWlNURF9ERGljdCAqZGRpY3Qp
Ow0KLXN0YXRpYyBzaXplX3QgWlNURF9ERGljdERpY3RTaXplKGNvbnN0IFpTVERfRERpY3QgKmRk
aWN0KTsNCitzdGF0aWMgY29uc3Qgdm9pZCBJTklUICpaU1REX0REaWN0RGljdENvbnRlbnQoY29u
c3QgWlNURF9ERGljdCAqZGRpY3QpOw0KK3N0YXRpYyBzaXplX3QgSU5JVCBaU1REX0REaWN0RGlj
dFNpemUoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpOw0KIA0KLXN0YXRpYyBzaXplX3QgWlNURF9k
ZWNvbXByZXNzTXVsdGlGcmFtZShaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRz
dENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCBjb25zdCB2b2lkICpk
aWN0LCBzaXplX3QgZGljdFNpemUsDQorc3RhdGljIHNpemVfdCBJTklUIFpTVERfZGVjb21wcmVz
c011bHRpRnJhbWUoWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0
eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgY29uc3Qgdm9pZCAqZGljdCwgc2l6
ZV90IGRpY3RTaXplLA0KIAkJCQkJY29uc3QgWlNURF9ERGljdCAqZGRpY3QpDQogew0KIAl2b2lk
ICpjb25zdCBkc3RzdGFydCA9IGRzdDsNCkBAIC0xNzA0LDEyICsxNzAyLDEyIEBAIHN0YXRpYyBz
aXplX3QgWlNURF9kZWNvbXByZXNzTXVsdGlGcmFtZShaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRz
dCwgc2l6ZV90IGRzdENhDQogCXJldHVybiAoQllURSAqKWRzdCAtIChCWVRFICopZHN0c3RhcnQ7
DQogfQ0KIA0KLXNpemVfdCBaU1REX2RlY29tcHJlc3NfdXNpbmdEaWN0KFpTVERfREN0eCAqZGN0
eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90
IHNyY1NpemUsIGNvbnN0IHZvaWQgKmRpY3QsIHNpemVfdCBkaWN0U2l6ZSkNCitzaXplX3QgSU5J
VCBaU1REX2RlY29tcHJlc3NfdXNpbmdEaWN0KFpTVERfREN0eCAqZGN0eCwgdm9pZCAqZHN0LCBz
aXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUsIGNvbnN0
IHZvaWQgKmRpY3QsIHNpemVfdCBkaWN0U2l6ZSkNCiB7DQogCXJldHVybiBaU1REX2RlY29tcHJl
c3NNdWx0aUZyYW1lKGRjdHgsIGRzdCwgZHN0Q2FwYWNpdHksIHNyYywgc3JjU2l6ZSwgZGljdCwg
ZGljdFNpemUsIE5VTEwpOw0KIH0NCiANCi1zaXplX3QgWlNURF9kZWNvbXByZXNzREN0eChaU1RE
X0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpz
cmMsIHNpemVfdCBzcmNTaXplKQ0KK3NpemVfdCBJTklUIFpTVERfZGVjb21wcmVzc0RDdHgoWlNU
RF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9pZCAq
c3JjLCBzaXplX3Qgc3JjU2l6ZSkNCiB7DQogCXJldHVybiBaU1REX2RlY29tcHJlc3NfdXNpbmdE
aWN0KGRjdHgsIGRzdCwgZHN0Q2FwYWNpdHksIHNyYywgc3JjU2l6ZSwgTlVMTCwgMCk7DQogfQ0K
QEAgLTE3MTgsOSArMTcxNiw5IEBAIHNpemVfdCBaU1REX2RlY29tcHJlc3NEQ3R4KFpTVERfREN0
eCAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNvbnN0DQogKiAgIEFkdmFu
Y2VkIFN0cmVhbWluZyBEZWNvbXByZXNzaW9uIEFQSQ0KICogICBCdWZmZXJsZXNzIGFuZCBzeW5j
aHJvbm91cw0KICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQotc2l6
ZV90IFpTVERfbmV4dFNyY1NpemVUb0RlY29tcHJlc3MoWlNURF9EQ3R4ICpkY3R4KSB7IHJldHVy
biBkY3R4LT5leHBlY3RlZDsgfQ0KK3NpemVfdCBJTklUIFpTVERfbmV4dFNyY1NpemVUb0RlY29t
cHJlc3MoWlNURF9EQ3R4ICpkY3R4KSB7IHJldHVybiBkY3R4LT5leHBlY3RlZDsgfQ0KIA0KLVpT
VERfbmV4dElucHV0VHlwZV9lIFpTVERfbmV4dElucHV0VHlwZShaU1REX0RDdHggKmRjdHgpDQor
WlNURF9uZXh0SW5wdXRUeXBlX2UgSU5JVCBaU1REX25leHRJbnB1dFR5cGUoWlNURF9EQ3R4ICpk
Y3R4KQ0KIHsNCiAJc3dpdGNoIChkY3R4LT5zdGFnZSkgew0KIAlkZWZhdWx0OiAvKiBzaG91bGQg
bm90IGhhcHBlbiAqLw0KQEAgLTE3MzUsMTIgKzE3MzMsMTIgQEAgWlNURF9uZXh0SW5wdXRUeXBl
X2UgWlNURF9uZXh0SW5wdXRUeXBlKFpTVERfREN0eCAqZGN0eCkNCiAJfQ0KIH0NCiANCi1pbnQg
WlNURF9pc1NraXBGcmFtZShaU1REX0RDdHggKmRjdHgpIHsgcmV0dXJuIGRjdHgtPnN0YWdlID09
IFpTVERkc19za2lwRnJhbWU7IH0gLyogZm9yIHpidWZmICovDQoraW50IElOSVQgWlNURF9pc1Nr
aXBGcmFtZShaU1REX0RDdHggKmRjdHgpIHsgcmV0dXJuIGRjdHgtPnN0YWdlID09IFpTVERkc19z
a2lwRnJhbWU7IH0gLyogZm9yIHpidWZmICovDQogDQogLyoqIFpTVERfZGVjb21wcmVzc0NvbnRp
bnVlKCkgOg0KICogICBAcmV0dXJuIDogbmIgb2YgYnl0ZXMgZ2VuZXJhdGVkIGludG8gYGRzdGAg
KG5lY2Vzc2FyaWx5IDw9IGBkc3RDYXBhY2l0eSkNCiAqICAgICAgICAgICAgIG9yIGFuIGVycm9y
IGNvZGUsIHdoaWNoIGNhbiBiZSB0ZXN0ZWQgdXNpbmcgWlNURF9pc0Vycm9yKCkgKi8NCi1zaXpl
X3QgWlNURF9kZWNvbXByZXNzQ29udGludWUoWlNURF9EQ3R4ICpkY3R4LCB2b2lkICpkc3QsIHNp
emVfdCBkc3RDYXBhY2l0eSwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkNCitzaXpl
X3QgSU5JVCBaU1REX2RlY29tcHJlc3NDb250aW51ZShaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRz
dCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0K
IHsNCiAJLyogU2FuaXR5IGNoZWNrICovDQogCWlmIChzcmNTaXplICE9IGRjdHgtPmV4cGVjdGVk
KQ0KQEAgLTE4NTksNyArMTg1Nyw3IEBAIHNpemVfdCBaU1REX2RlY29tcHJlc3NDb250aW51ZSha
U1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjDQogCX0NCiB9
DQogDQotc3RhdGljIHNpemVfdCBaU1REX3JlZkRpY3RDb250ZW50KFpTVERfREN0eCAqZGN0eCwg
Y29uc3Qgdm9pZCAqZGljdCwgc2l6ZV90IGRpY3RTaXplKQ0KK3N0YXRpYyBzaXplX3QgSU5JVCBa
U1REX3JlZkRpY3RDb250ZW50KFpTVERfREN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqZGljdCwgc2l6
ZV90IGRpY3RTaXplKQ0KIHsNCiAJZGN0eC0+ZGljdEVuZCA9IGRjdHgtPnByZXZpb3VzRHN0RW5k
Ow0KIAlkY3R4LT52QmFzZSA9IChjb25zdCBjaGFyICopZGljdCAtICgoY29uc3QgY2hhciAqKShk
Y3R4LT5wcmV2aW91c0RzdEVuZCkgLSAoY29uc3QgY2hhciAqKShkY3R4LT5iYXNlKSk7DQpAQCAt
MTg3MSw3ICsxODY5LDcgQEAgc3RhdGljIHNpemVfdCBaU1REX3JlZkRpY3RDb250ZW50KFpTVERf
REN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqZGljdCwgc2l6ZV90IGRpY3QNCiAvKiBaU1REX2xvYWRF
bnRyb3B5KCkgOg0KICAqIGRpY3QgOiBtdXN0IHBvaW50IGF0IGJlZ2lubmluZyBvZiBhIHZhbGlk
IHpzdGQgZGljdGlvbmFyeQ0KICAqIEByZXR1cm4gOiBzaXplIG9mIGVudHJvcHkgdGFibGVzIHJl
YWQgKi8NCi1zdGF0aWMgc2l6ZV90IFpTVERfbG9hZEVudHJvcHkoWlNURF9lbnRyb3B5VGFibGVz
X3QgKmVudHJvcHksIGNvbnN0IHZvaWQgKmNvbnN0IGRpY3QsIHNpemVfdCBjb25zdCBkaWN0U2l6
ZSkNCitzdGF0aWMgc2l6ZV90IElOSVQgWlNURF9sb2FkRW50cm9weShaU1REX2VudHJvcHlUYWJs
ZXNfdCAqZW50cm9weSwgY29uc3Qgdm9pZCAqY29uc3QgZGljdCwgc2l6ZV90IGNvbnN0IGRpY3RT
aXplKQ0KIHsNCiAJY29uc3QgQllURSAqZGljdFB0ciA9IChjb25zdCBCWVRFICopZGljdDsNCiAJ
Y29uc3QgQllURSAqY29uc3QgZGljdEVuZCA9IGRpY3RQdHIgKyBkaWN0U2l6ZTsNCkBAIC0xOTQw
LDcgKzE5MzgsNyBAQCBzdGF0aWMgc2l6ZV90IFpTVERfbG9hZEVudHJvcHkoWlNURF9lbnRyb3B5
VGFibGVzX3QgKmVudHJvcHksIGNvbnN0IHZvaWQgKmNvbnN0DQogCXJldHVybiBkaWN0UHRyIC0g
KGNvbnN0IEJZVEUgKilkaWN0Ow0KIH0NCiANCi1zdGF0aWMgc2l6ZV90IFpTVERfZGVjb21wcmVz
c19pbnNlcnREaWN0aW9uYXJ5KFpTVERfREN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqZGljdCwgc2l6
ZV90IGRpY3RTaXplKQ0KK3N0YXRpYyBzaXplX3QgSU5JVCBaU1REX2RlY29tcHJlc3NfaW5zZXJ0
RGljdGlvbmFyeShaU1REX0RDdHggKmRjdHgsIGNvbnN0IHZvaWQgKmRpY3QsIHNpemVfdCBkaWN0
U2l6ZSkNCiB7DQogCWlmIChkaWN0U2l6ZSA8IDgpDQogCQlyZXR1cm4gWlNURF9yZWZEaWN0Q29u
dGVudChkY3R4LCBkaWN0LCBkaWN0U2l6ZSk7DQpAQCAtMTk2Niw3ICsxOTY0LDcgQEAgc3RhdGlj
IHNpemVfdCBaU1REX2RlY29tcHJlc3NfaW5zZXJ0RGljdGlvbmFyeShaU1REX0RDdHggKmRjdHgs
IGNvbnN0IHZvaWQgKmRpY3QNCiAJcmV0dXJuIFpTVERfcmVmRGljdENvbnRlbnQoZGN0eCwgZGlj
dCwgZGljdFNpemUpOw0KIH0NCiANCi1zaXplX3QgWlNURF9kZWNvbXByZXNzQmVnaW5fdXNpbmdE
aWN0KFpTVERfREN0eCAqZGN0eCwgY29uc3Qgdm9pZCAqZGljdCwgc2l6ZV90IGRpY3RTaXplKQ0K
K3NpemVfdCBJTklUIFpTVERfZGVjb21wcmVzc0JlZ2luX3VzaW5nRGljdChaU1REX0RDdHggKmRj
dHgsIGNvbnN0IHZvaWQgKmRpY3QsIHNpemVfdCBkaWN0U2l6ZSkNCiB7DQogCUNIRUNLX0YoWlNU
RF9kZWNvbXByZXNzQmVnaW4oZGN0eCkpOw0KIAlpZiAoZGljdCAmJiBkaWN0U2l6ZSkNCkBAIC0x
OTg2LDEzICsxOTg0LDEzIEBAIHN0cnVjdCBaU1REX0REaWN0X3Mgew0KIAlaU1REX2N1c3RvbU1l
bSBjTWVtOw0KIH07IC8qIHR5cGVkZWYnZCB0byBaU1REX0REaWN0IHdpdGhpbiAienN0ZC5oIiAq
Lw0KIA0KLXNpemVfdCBaU1REX0REaWN0V29ya3NwYWNlQm91bmQodm9pZCkgeyByZXR1cm4gWlNU
RF9BTElHTihzaXplb2YoWlNURF9zdGFjaykpICsgWlNURF9BTElHTihzaXplb2YoWlNURF9ERGlj
dCkpOyB9DQorc2l6ZV90IElOSVQgWlNURF9ERGljdFdvcmtzcGFjZUJvdW5kKHZvaWQpIHsgcmV0
dXJuIFpTVERfQUxJR04oc2l6ZW9mKFpTVERfc3RhY2spKSArIFpTVERfQUxJR04oc2l6ZW9mKFpT
VERfRERpY3QpKTsgfQ0KIA0KLXN0YXRpYyBjb25zdCB2b2lkICpaU1REX0REaWN0RGljdENvbnRl
bnQoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpIHsgcmV0dXJuIGRkaWN0LT5kaWN0Q29udGVudDsg
fQ0KK3N0YXRpYyBjb25zdCB2b2lkIElOSVQgKlpTVERfRERpY3REaWN0Q29udGVudChjb25zdCBa
U1REX0REaWN0ICpkZGljdCkgeyByZXR1cm4gZGRpY3QtPmRpY3RDb250ZW50OyB9DQogDQotc3Rh
dGljIHNpemVfdCBaU1REX0REaWN0RGljdFNpemUoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpIHsg
cmV0dXJuIGRkaWN0LT5kaWN0U2l6ZTsgfQ0KK3N0YXRpYyBzaXplX3QgSU5JVCBaU1REX0REaWN0
RGljdFNpemUoY29uc3QgWlNURF9ERGljdCAqZGRpY3QpIHsgcmV0dXJuIGRkaWN0LT5kaWN0U2l6
ZTsgfQ0KIA0KLXN0YXRpYyB2b2lkIFpTVERfcmVmRERpY3QoWlNURF9EQ3R4ICpkc3REQ3R4LCBj
b25zdCBaU1REX0REaWN0ICpkZGljdCkNCitzdGF0aWMgdm9pZCBJTklUIFpTVERfcmVmRERpY3Qo
WlNURF9EQ3R4ICpkc3REQ3R4LCBjb25zdCBaU1REX0REaWN0ICpkZGljdCkNCiB7DQogCVpTVERf
ZGVjb21wcmVzc0JlZ2luKGRzdERDdHgpOyAvKiBpbml0ICovDQogCWlmIChkZGljdCkgewkJICAg
ICAgIC8qIHN1cHBvcnQgcmVmRERpY3Qgb24gTlVMTCAqLw0KQEAgLTIwMTgsNyArMjAxNiw3IEBA
IHN0YXRpYyB2b2lkIFpTVERfcmVmRERpY3QoWlNURF9EQ3R4ICpkc3REQ3R4LCBjb25zdCBaU1RE
X0REaWN0ICpkZGljdCkNCiAJfQ0KIH0NCiANCi1zdGF0aWMgc2l6ZV90IFpTVERfbG9hZEVudHJv
cHlfaW5ERGljdChaU1REX0REaWN0ICpkZGljdCkNCitzdGF0aWMgc2l6ZV90IElOSVQgWlNURF9s
b2FkRW50cm9weV9pbkREaWN0KFpTVERfRERpY3QgKmRkaWN0KQ0KIHsNCiAJZGRpY3QtPmRpY3RJ
RCA9IDA7DQogCWRkaWN0LT5lbnRyb3B5UHJlc2VudCA9IDA7DQpAQCAtMjAzNyw3ICsyMDM1LDcg
QEAgc3RhdGljIHNpemVfdCBaU1REX2xvYWRFbnRyb3B5X2luRERpY3QoWlNURF9ERGljdCAqZGRp
Y3QpDQogCXJldHVybiAwOw0KIH0NCiANCi1zdGF0aWMgWlNURF9ERGljdCAqWlNURF9jcmVhdGVE
RGljdF9hZHZhbmNlZChjb25zdCB2b2lkICpkaWN0LCBzaXplX3QgZGljdFNpemUsIHVuc2lnbmVk
IGJ5UmVmZXJlbmNlLCBaU1REX2N1c3RvbU1lbSBjdXN0b21NZW0pDQorc3RhdGljIFpTVERfRERp
Y3QgSU5JVCAqWlNURF9jcmVhdGVERGljdF9hZHZhbmNlZChjb25zdCB2b2lkICpkaWN0LCBzaXpl
X3QgZGljdFNpemUsIHVuc2lnbmVkIGJ5UmVmZXJlbmNlLCBaU1REX2N1c3RvbU1lbSBjdXN0b21N
ZW0pDQogew0KIAlpZiAoIWN1c3RvbU1lbS5jdXN0b21BbGxvYyB8fCAhY3VzdG9tTWVtLmN1c3Rv
bUZyZWUpDQogCQlyZXR1cm4gTlVMTDsNCkBAIC0yMDgwLDEzICsyMDc4LDEzIEBAIHN0YXRpYyBa
U1REX0REaWN0ICpaU1REX2NyZWF0ZUREaWN0X2FkdmFuY2VkKGNvbnN0IHZvaWQgKmRpY3QsIHNp
emVfdCBkaWN0U2l6ZSwNCiAqICAgQ3JlYXRlIGEgZGlnZXN0ZWQgZGljdGlvbmFyeSwgdG8gc3Rh
cnQgZGVjb21wcmVzc2lvbiB3aXRob3V0IHN0YXJ0dXAgZGVsYXkuDQogKiAgIGBkaWN0YCBjb250
ZW50IGlzIGNvcGllZCBpbnNpZGUgRERpY3QuDQogKiAgIENvbnNlcXVlbnRseSwgYGRpY3RgIGNh
biBiZSByZWxlYXNlZCBhZnRlciBgWlNURF9ERGljdGAgY3JlYXRpb24gKi8NCi1aU1REX0REaWN0
ICpaU1REX2luaXRERGljdChjb25zdCB2b2lkICpkaWN0LCBzaXplX3QgZGljdFNpemUsIHZvaWQg
KndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorWlNURF9ERGljdCBJTklUICpaU1RE
X2luaXRERGljdChjb25zdCB2b2lkICpkaWN0LCBzaXplX3QgZGljdFNpemUsIHZvaWQgKndvcmtz
cGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQogew0KIAlaU1REX2N1c3RvbU1lbSBjb25zdCBz
dGFja01lbSA9IFpTVERfaW5pdFN0YWNrKHdvcmtzcGFjZSwgd29ya3NwYWNlU2l6ZSk7DQogCXJl
dHVybiBaU1REX2NyZWF0ZUREaWN0X2FkdmFuY2VkKGRpY3QsIGRpY3RTaXplLCAxLCBzdGFja01l
bSk7DQogfQ0KIA0KLXNpemVfdCBaU1REX2ZyZWVERGljdChaU1REX0REaWN0ICpkZGljdCkNCitz
aXplX3QgSU5JVCBaU1REX2ZyZWVERGljdChaU1REX0REaWN0ICpkZGljdCkNCiB7DQogCWlmIChk
ZGljdCA9PSBOVUxMKQ0KIAkJcmV0dXJuIDA7IC8qIHN1cHBvcnQgZnJlZSBvbiBOVUxMICovDQpA
QCAtMjEwMiw3ICsyMTAwLDcgQEAgc2l6ZV90IFpTVERfZnJlZUREaWN0KFpTVERfRERpY3QgKmRk
aWN0KQ0KICAqICBQcm92aWRlcyB0aGUgZGljdElEIHN0b3JlZCB3aXRoaW4gZGljdGlvbmFyeS4N
CiAgKiAgaWYgQHJldHVybiA9PSAwLCB0aGUgZGljdGlvbmFyeSBpcyBub3QgY29uZm9ybWFudCB3
aXRoIFpzdGFuZGFyZCBzcGVjaWZpY2F0aW9uLg0KICAqICBJdCBjYW4gc3RpbGwgYmUgbG9hZGVk
LCBidXQgYXMgYSBjb250ZW50LW9ubHkgZGljdGlvbmFyeS4gKi8NCi11bnNpZ25lZCBaU1REX2dl
dERpY3RJRF9mcm9tRGljdChjb25zdCB2b2lkICpkaWN0LCBzaXplX3QgZGljdFNpemUpDQordW5z
aWduZWQgSU5JVCBaU1REX2dldERpY3RJRF9mcm9tRGljdChjb25zdCB2b2lkICpkaWN0LCBzaXpl
X3QgZGljdFNpemUpDQogew0KIAlpZiAoZGljdFNpemUgPCA4KQ0KIAkJcmV0dXJuIDA7DQpAQCAt
MjExNSw3ICsyMTEzLDcgQEAgdW5zaWduZWQgWlNURF9nZXREaWN0SURfZnJvbURpY3QoY29uc3Qg
dm9pZCAqZGljdCwgc2l6ZV90IGRpY3RTaXplKQ0KICAqICBQcm92aWRlcyB0aGUgZGljdElEIG9m
IHRoZSBkaWN0aW9uYXJ5IGxvYWRlZCBpbnRvIGBkZGljdGAuDQogICogIElmIEByZXR1cm4gPT0g
MCwgdGhlIGRpY3Rpb25hcnkgaXMgbm90IGNvbmZvcm1hbnQgdG8gWnN0YW5kYXJkIHNwZWNpZmlj
YXRpb24sIG9yIGVtcHR5Lg0KICAqICBOb24tY29uZm9ybWFudCBkaWN0aW9uYXJpZXMgY2FuIHN0
aWxsIGJlIGxvYWRlZCwgYnV0IGFzIGNvbnRlbnQtb25seSBkaWN0aW9uYXJpZXMuICovDQotdW5z
aWduZWQgWlNURF9nZXREaWN0SURfZnJvbUREaWN0KGNvbnN0IFpTVERfRERpY3QgKmRkaWN0KQ0K
K3Vuc2lnbmVkIElOSVQgWlNURF9nZXREaWN0SURfZnJvbUREaWN0KGNvbnN0IFpTVERfRERpY3Qg
KmRkaWN0KQ0KIHsNCiAJaWYgKGRkaWN0ID09IE5VTEwpDQogCQlyZXR1cm4gMDsNCkBAIC0yMTMy
LDcgKzIxMzAsNyBAQCB1bnNpZ25lZCBaU1REX2dldERpY3RJRF9mcm9tRERpY3QoY29uc3QgWlNU
RF9ERGljdCAqZGRpY3QpDQogICogIC0gYHNyY1NpemVgIGlzIHRvbyBzbWFsbCwgYW5kIGFzIGEg
cmVzdWx0LCB0aGUgZnJhbWUgaGVhZGVyIGNvdWxkIG5vdCBiZSBkZWNvZGVkIChvbmx5IHBvc3Np
YmxlIGlmIGBzcmNTaXplIDwgWlNURF9GUkFNRUhFQURFUlNJWkVfTUFYYCkuDQogICogIC0gVGhp
cyBpcyBub3QgYSBac3RhbmRhcmQgZnJhbWUuDQogICogIFdoZW4gaWRlbnRpZnlpbmcgdGhlIGV4
YWN0IGZhaWx1cmUgY2F1c2UsIGl0J3MgcG9zc2libGUgdG8gdXNlZCBaU1REX2dldEZyYW1lUGFy
YW1zKCksIHdoaWNoIHdpbGwgcHJvdmlkZSBhIG1vcmUgcHJlY2lzZSBlcnJvciBjb2RlLiAqLw0K
LXVuc2lnbmVkIFpTVERfZ2V0RGljdElEX2Zyb21GcmFtZShjb25zdCB2b2lkICpzcmMsIHNpemVf
dCBzcmNTaXplKQ0KK3Vuc2lnbmVkIElOSVQgWlNURF9nZXREaWN0SURfZnJvbUZyYW1lKGNvbnN0
IHZvaWQgKnNyYywgc2l6ZV90IHNyY1NpemUpDQogew0KIAlaU1REX2ZyYW1lUGFyYW1zIHpmcCA9
IHswLCAwLCAwLCAwfTsNCiAJc2l6ZV90IGNvbnN0IGhFcnJvciA9IFpTVERfZ2V0RnJhbWVQYXJh
bXMoJnpmcCwgc3JjLCBzcmNTaXplKTsNCkBAIC0yMTQ0LDcgKzIxNDIsNyBAQCB1bnNpZ25lZCBa
U1REX2dldERpY3RJRF9mcm9tRnJhbWUoY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSkN
CiAvKiEgWlNURF9kZWNvbXByZXNzX3VzaW5nRERpY3QoKSA6DQogKiAgIERlY29tcHJlc3Npb24g
dXNpbmcgYSBwcmUtZGlnZXN0ZWQgRGljdGlvbmFyeQ0KICogICBVc2UgZGljdGlvbmFyeSB3aXRo
b3V0IHNpZ25pZmljYW50IG92ZXJoZWFkLiAqLw0KLXNpemVfdCBaU1REX2RlY29tcHJlc3NfdXNp
bmdERGljdChaU1REX0RDdHggKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBj
b25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCBjb25zdCBaU1REX0REaWN0ICpkZGljdCkN
CitzaXplX3QgSU5JVCBaU1REX2RlY29tcHJlc3NfdXNpbmdERGljdChaU1REX0RDdHggKmRjdHgs
IHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBz
cmNTaXplLCBjb25zdCBaU1REX0REaWN0ICpkZGljdCkNCiB7DQogCS8qIHBhc3MgY29udGVudCBh
bmQgc2l6ZSBpbiBjYXNlIGxlZ2FjeSBmcmFtZXMgYXJlIGVuY291bnRlcmVkICovDQogCXJldHVy
biBaU1REX2RlY29tcHJlc3NNdWx0aUZyYW1lKGRjdHgsIGRzdCwgZHN0Q2FwYWNpdHksIHNyYywg
c3JjU2l6ZSwgTlVMTCwgMCwgZGRpY3QpOw0KQEAgLTIxODEsNyArMjE3OSw3IEBAIHN0cnVjdCBa
U1REX0RTdHJlYW1fcyB7DQogCVUzMiBob3N0YWdlQnl0ZTsNCiB9OyAvKiB0eXBlZGVmJ2QgdG8g
WlNURF9EU3RyZWFtIHdpdGhpbiAienN0ZC5oIiAqLw0KIA0KLXNpemVfdCBaU1REX0RTdHJlYW1X
b3Jrc3BhY2VCb3VuZChzaXplX3QgbWF4V2luZG93U2l6ZSkNCitzaXplX3QgSU5JVCBaU1REX0RT
dHJlYW1Xb3Jrc3BhY2VCb3VuZChzaXplX3QgbWF4V2luZG93U2l6ZSkNCiB7DQogCXNpemVfdCBj
b25zdCBibG9ja1NpemUgPSBNSU4obWF4V2luZG93U2l6ZSwgWlNURF9CTE9DS1NJWkVfQUJTT0xV
VEVNQVgpOw0KIAlzaXplX3QgY29uc3QgaW5CdWZmU2l6ZSA9IGJsb2NrU2l6ZTsNCkBAIC0yMTg5
LDcgKzIxODcsNyBAQCBzaXplX3QgWlNURF9EU3RyZWFtV29ya3NwYWNlQm91bmQoc2l6ZV90IG1h
eFdpbmRvd1NpemUpDQogCXJldHVybiBaU1REX0RDdHhXb3Jrc3BhY2VCb3VuZCgpICsgWlNURF9B
TElHTihzaXplb2YoWlNURF9EU3RyZWFtKSkgKyBaU1REX0FMSUdOKGluQnVmZlNpemUpICsgWlNU
RF9BTElHTihvdXRCdWZmU2l6ZSk7DQogfQ0KIA0KLXN0YXRpYyBaU1REX0RTdHJlYW0gKlpTVERf
Y3JlYXRlRFN0cmVhbV9hZHZhbmNlZChaU1REX2N1c3RvbU1lbSBjdXN0b21NZW0pDQorc3RhdGlj
IFpTVERfRFN0cmVhbSBJTklUICpaU1REX2NyZWF0ZURTdHJlYW1fYWR2YW5jZWQoWlNURF9jdXN0
b21NZW0gY3VzdG9tTWVtKQ0KIHsNCiAJWlNURF9EU3RyZWFtICp6ZHM7DQogDQpAQCAtMjIxMSw3
ICsyMjA5LDcgQEAgc3RhdGljIFpTVERfRFN0cmVhbSAqWlNURF9jcmVhdGVEU3RyZWFtX2FkdmFu
Y2VkKFpTVERfY3VzdG9tTWVtIGN1c3RvbU1lbSkNCiAJcmV0dXJuIHpkczsNCiB9DQogDQotWlNU
RF9EU3RyZWFtICpaU1REX2luaXREU3RyZWFtKHNpemVfdCBtYXhXaW5kb3dTaXplLCB2b2lkICp3
b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK1pTVERfRFN0cmVhbSBJTklUICpaU1RE
X2luaXREU3RyZWFtKHNpemVfdCBtYXhXaW5kb3dTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVf
dCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJWlNURF9jdXN0b21NZW0gY29uc3Qgc3RhY2tNZW0gPSBa
U1REX2luaXRTdGFjayh3b3Jrc3BhY2UsIHdvcmtzcGFjZVNpemUpOw0KIAlaU1REX0RTdHJlYW0g
KnpkcyA9IFpTVERfY3JlYXRlRFN0cmVhbV9hZHZhbmNlZChzdGFja01lbSk7DQpAQCAtMjI0NCw3
ICsyMjQyLDcgQEAgWlNURF9EU3RyZWFtICpaU1REX2luaXREU3RyZWFtKHNpemVfdCBtYXhXaW5k
b3dTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3INCiAJcmV0dXJuIHpkczsNCiB9DQog
DQotWlNURF9EU3RyZWFtICpaU1REX2luaXREU3RyZWFtX3VzaW5nRERpY3Qoc2l6ZV90IG1heFdp
bmRvd1NpemUsIGNvbnN0IFpTVERfRERpY3QgKmRkaWN0LCB2b2lkICp3b3Jrc3BhY2UsIHNpemVf
dCB3b3Jrc3BhY2VTaXplKQ0KK1pTVERfRFN0cmVhbSBJTklUICpaU1REX2luaXREU3RyZWFtX3Vz
aW5nRERpY3Qoc2l6ZV90IG1heFdpbmRvd1NpemUsIGNvbnN0IFpTVERfRERpY3QgKmRkaWN0LCB2
b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJWlNURF9EU3RyZWFt
ICp6ZHMgPSBaU1REX2luaXREU3RyZWFtKG1heFdpbmRvd1NpemUsIHdvcmtzcGFjZSwgd29ya3Nw
YWNlU2l6ZSk7DQogCWlmICh6ZHMpIHsNCkBAIC0yMjUzLDcgKzIyNTEsNyBAQCBaU1REX0RTdHJl
YW0gKlpTVERfaW5pdERTdHJlYW1fdXNpbmdERGljdChzaXplX3QgbWF4V2luZG93U2l6ZSwgY29u
c3QgWlNURF9ERGljdA0KIAlyZXR1cm4gemRzOw0KIH0NCiANCi1zaXplX3QgWlNURF9mcmVlRFN0
cmVhbShaU1REX0RTdHJlYW0gKnpkcykNCitzaXplX3QgSU5JVCBaU1REX2ZyZWVEU3RyZWFtKFpT
VERfRFN0cmVhbSAqemRzKQ0KIHsNCiAJaWYgKHpkcyA9PSBOVUxMKQ0KIAkJcmV0dXJuIDA7IC8q
IHN1cHBvcnQgZnJlZSBvbiBudWxsICovDQpAQCAtMjI3NCwxMCArMjI3MiwxMCBAQCBzaXplX3Qg
WlNURF9mcmVlRFN0cmVhbShaU1REX0RTdHJlYW0gKnpkcykNCiANCiAvKiAqKiogSW5pdGlhbGl6
YXRpb24gKioqICovDQogDQotc2l6ZV90IFpTVERfRFN0cmVhbUluU2l6ZSh2b2lkKSB7IHJldHVy
biBaU1REX0JMT0NLU0laRV9BQlNPTFVURU1BWCArIFpTVERfYmxvY2tIZWFkZXJTaXplOyB9DQot
c2l6ZV90IFpTVERfRFN0cmVhbU91dFNpemUodm9pZCkgeyByZXR1cm4gWlNURF9CTE9DS1NJWkVf
QUJTT0xVVEVNQVg7IH0NCitzaXplX3QgSU5JVCBaU1REX0RTdHJlYW1JblNpemUodm9pZCkgeyBy
ZXR1cm4gWlNURF9CTE9DS1NJWkVfQUJTT0xVVEVNQVggKyBaU1REX2Jsb2NrSGVhZGVyU2l6ZTsg
fQ0KK3NpemVfdCBJTklUIFpTVERfRFN0cmVhbU91dFNpemUodm9pZCkgeyByZXR1cm4gWlNURF9C
TE9DS1NJWkVfQUJTT0xVVEVNQVg7IH0NCiANCi1zaXplX3QgWlNURF9yZXNldERTdHJlYW0oWlNU
RF9EU3RyZWFtICp6ZHMpDQorc2l6ZV90IElOSVQgWlNURF9yZXNldERTdHJlYW0oWlNURF9EU3Ry
ZWFtICp6ZHMpDQogew0KIAl6ZHMtPnN0YWdlID0gemRzc19sb2FkSGVhZGVyOw0KIAl6ZHMtPmxo
U2l6ZSA9IHpkcy0+aW5Qb3MgPSB6ZHMtPm91dFN0YXJ0ID0gemRzLT5vdXRFbmQgPSAwOw0KQEAg
LTIyODgsMTQgKzIyODYsMTQgQEAgc2l6ZV90IFpTVERfcmVzZXREU3RyZWFtKFpTVERfRFN0cmVh
bSAqemRzKQ0KIA0KIC8qICoqKioqICAgRGVjb21wcmVzc2lvbiAgICoqKioqICovDQogDQotWlNU
RF9TVEFUSUMgc2l6ZV90IFpTVERfbGltaXRDb3B5KHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFj
aXR5LCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KK1pTVERfU1RBVElDIHNpemVf
dCBJTklUIFpTVERfbGltaXRDb3B5KHZvaWQgKmRzdCwgc2l6ZV90IGRzdENhcGFjaXR5LCBjb25z
dCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplKQ0KIHsNCiAJc2l6ZV90IGNvbnN0IGxlbmd0aCA9
IE1JTihkc3RDYXBhY2l0eSwgc3JjU2l6ZSk7DQogCW1lbWNweShkc3QsIHNyYywgbGVuZ3RoKTsN
CiAJcmV0dXJuIGxlbmd0aDsNCiB9DQogDQotc2l6ZV90IFpTVERfZGVjb21wcmVzc1N0cmVhbSha
U1REX0RTdHJlYW0gKnpkcywgWlNURF9vdXRCdWZmZXIgKm91dHB1dCwgWlNURF9pbkJ1ZmZlciAq
aW5wdXQpDQorc2l6ZV90IElOSVQgWlNURF9kZWNvbXByZXNzU3RyZWFtKFpTVERfRFN0cmVhbSAq
emRzLCBaU1REX291dEJ1ZmZlciAqb3V0cHV0LCBaU1REX2luQnVmZmVyICppbnB1dCkNCiB7DQog
CWNvbnN0IGNoYXIgKmNvbnN0IGlzdGFydCA9IChjb25zdCBjaGFyICopKGlucHV0LT5zcmMpICsg
aW5wdXQtPnBvczsNCiAJY29uc3QgY2hhciAqY29uc3QgaWVuZCA9IChjb25zdCBjaGFyICopKGlu
cHV0LT5zcmMpICsgaW5wdXQtPnNpemU7DQpAQCAtMjQ4OSw0MyArMjQ4NywzIEBAIHNpemVfdCBa
U1REX2RlY29tcHJlc3NTdHJlYW0oWlNURF9EU3RyZWFtICp6ZHMsIFpTVERfb3V0QnVmZmVyICpv
dXRwdXQsIFpTVERfaW5CDQogCQlyZXR1cm4gbmV4dFNyY1NpemVIaW50Ow0KIAl9DQogfQ0KLQ0K
LUVYUE9SVF9TWU1CT0woWlNURF9EQ3R4V29ya3NwYWNlQm91bmQpOw0KLUVYUE9SVF9TWU1CT0wo
WlNURF9pbml0REN0eCk7DQotRVhQT1JUX1NZTUJPTChaU1REX2RlY29tcHJlc3NEQ3R4KTsNCi1F
WFBPUlRfU1lNQk9MKFpTVERfZGVjb21wcmVzc191c2luZ0RpY3QpOw0KLQ0KLUVYUE9SVF9TWU1C
T0woWlNURF9ERGljdFdvcmtzcGFjZUJvdW5kKTsNCi1FWFBPUlRfU1lNQk9MKFpTVERfaW5pdERE
aWN0KTsNCi1FWFBPUlRfU1lNQk9MKFpTVERfZGVjb21wcmVzc191c2luZ0REaWN0KTsNCi0NCi1F
WFBPUlRfU1lNQk9MKFpTVERfRFN0cmVhbVdvcmtzcGFjZUJvdW5kKTsNCi1FWFBPUlRfU1lNQk9M
KFpTVERfaW5pdERTdHJlYW0pOw0KLUVYUE9SVF9TWU1CT0woWlNURF9pbml0RFN0cmVhbV91c2lu
Z0REaWN0KTsNCi1FWFBPUlRfU1lNQk9MKFpTVERfcmVzZXREU3RyZWFtKTsNCi1FWFBPUlRfU1lN
Qk9MKFpTVERfZGVjb21wcmVzc1N0cmVhbSk7DQotRVhQT1JUX1NZTUJPTChaU1REX0RTdHJlYW1J
blNpemUpOw0KLUVYUE9SVF9TWU1CT0woWlNURF9EU3RyZWFtT3V0U2l6ZSk7DQotDQotRVhQT1JU
X1NZTUJPTChaU1REX2ZpbmRGcmFtZUNvbXByZXNzZWRTaXplKTsNCi1FWFBPUlRfU1lNQk9MKFpT
VERfZ2V0RnJhbWVDb250ZW50U2l6ZSk7DQotRVhQT1JUX1NZTUJPTChaU1REX2ZpbmREZWNvbXBy
ZXNzZWRTaXplKTsNCi0NCi1FWFBPUlRfU1lNQk9MKFpTVERfaXNGcmFtZSk7DQotRVhQT1JUX1NZ
TUJPTChaU1REX2dldERpY3RJRF9mcm9tRGljdCk7DQotRVhQT1JUX1NZTUJPTChaU1REX2dldERp
Y3RJRF9mcm9tRERpY3QpOw0KLUVYUE9SVF9TWU1CT0woWlNURF9nZXREaWN0SURfZnJvbUZyYW1l
KTsNCi0NCi1FWFBPUlRfU1lNQk9MKFpTVERfZ2V0RnJhbWVQYXJhbXMpOw0KLUVYUE9SVF9TWU1C
T0woWlNURF9kZWNvbXByZXNzQmVnaW4pOw0KLUVYUE9SVF9TWU1CT0woWlNURF9kZWNvbXByZXNz
QmVnaW5fdXNpbmdEaWN0KTsNCi1FWFBPUlRfU1lNQk9MKFpTVERfY29weURDdHgpOw0KLUVYUE9S
VF9TWU1CT0woWlNURF9uZXh0U3JjU2l6ZVRvRGVjb21wcmVzcyk7DQotRVhQT1JUX1NZTUJPTCha
U1REX2RlY29tcHJlc3NDb250aW51ZSk7DQotRVhQT1JUX1NZTUJPTChaU1REX25leHRJbnB1dFR5
cGUpOw0KLQ0KLUVYUE9SVF9TWU1CT0woWlNURF9kZWNvbXByZXNzQmxvY2spOw0KLUVYUE9SVF9T
WU1CT0woWlNURF9pbnNlcnRCbG9jayk7DQotDQotTU9EVUxFX0xJQ0VOU0UoIkR1YWwgQlNEL0dQ
TCIpOw0KLU1PRFVMRV9ERVNDUklQVElPTigiWnN0ZCBEZWNvbXByZXNzb3IiKTsNCmRpZmYgLS1n
aXQgYS94ZW4vY29tbW9uL3pzdGQvZW50cm9weV9jb21tb24uYyBiL3hlbi9jb21tb24venN0ZC9l
bnRyb3B5X2NvbW1vbi5jDQppbmRleCAyYjBhNjQzYzMyLi5iY2RiNTc5ODJiIDEwMDY0NA0KLS0t
IGEveGVuL2NvbW1vbi96c3RkL2VudHJvcHlfY29tbW9uLmMNCisrKyBiL3hlbi9jb21tb24venN0
ZC9lbnRyb3B5X2NvbW1vbi5jDQpAQCAtNDYsMTcgKzQ2LDE3IEBADQogI2luY2x1ZGUgIm1lbS5o
Ig0KIA0KIC8qPT09ICAgVmVyc2lvbiAgID09PSovDQotdW5zaWduZWQgRlNFX3ZlcnNpb25OdW1i
ZXIodm9pZCkgeyByZXR1cm4gRlNFX1ZFUlNJT05fTlVNQkVSOyB9DQordW5zaWduZWQgSU5JVCBG
U0VfdmVyc2lvbk51bWJlcih2b2lkKSB7IHJldHVybiBGU0VfVkVSU0lPTl9OVU1CRVI7IH0NCiAN
CiAvKj09PSAgIEVycm9yIE1hbmFnZW1lbnQgICA9PT0qLw0KLXVuc2lnbmVkIEZTRV9pc0Vycm9y
KHNpemVfdCBjb2RlKSB7IHJldHVybiBFUlJfaXNFcnJvcihjb2RlKTsgfQ0KK3Vuc2lnbmVkIElO
SVQgRlNFX2lzRXJyb3Ioc2l6ZV90IGNvZGUpIHsgcmV0dXJuIEVSUl9pc0Vycm9yKGNvZGUpOyB9
DQogDQotdW5zaWduZWQgSFVGX2lzRXJyb3Ioc2l6ZV90IGNvZGUpIHsgcmV0dXJuIEVSUl9pc0Vy
cm9yKGNvZGUpOyB9DQordW5zaWduZWQgSU5JVCBIVUZfaXNFcnJvcihzaXplX3QgY29kZSkgeyBy
ZXR1cm4gRVJSX2lzRXJyb3IoY29kZSk7IH0NCiANCiAvKi0qKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KICogIEZTRSBOQ291bnQg
ZW5jb2RpbmctZGVjb2RpbmcNCiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLXNpemVfdCBGU0VfcmVhZE5Db3VudChzaG9y
dCAqbm9ybWFsaXplZENvdW50ZXIsIHVuc2lnbmVkICptYXhTVlB0ciwgdW5zaWduZWQgKnRhYmxl
TG9nUHRyLCBjb25zdCB2b2lkICpoZWFkZXJCdWZmZXIsIHNpemVfdCBoYlNpemUpDQorc2l6ZV90
IElOSVQgRlNFX3JlYWROQ291bnQoc2hvcnQgKm5vcm1hbGl6ZWRDb3VudGVyLCB1bnNpZ25lZCAq
bWF4U1ZQdHIsIHVuc2lnbmVkICp0YWJsZUxvZ1B0ciwgY29uc3Qgdm9pZCAqaGVhZGVyQnVmZmVy
LCBzaXplX3QgaGJTaXplKQ0KIHsNCiAJY29uc3QgQllURSAqY29uc3QgaXN0YXJ0ID0gKGNvbnN0
IEJZVEUgKiloZWFkZXJCdWZmZXI7DQogCWNvbnN0IEJZVEUgKmNvbnN0IGllbmQgPSBpc3RhcnQg
KyBoYlNpemU7DQpAQCAtMTY0LDcgKzE2NCw3IEBAIHNpemVfdCBGU0VfcmVhZE5Db3VudChzaG9y
dCAqbm9ybWFsaXplZENvdW50ZXIsIHVuc2lnbmVkICptYXhTVlB0ciwgdW5zaWduZWQgKnRhDQog
CUByZXR1cm4gOiBzaXplIHJlYWQgZnJvbSBgc3JjYCAsIG9yIGFuIGVycm9yIENvZGUgLg0KIAlO
b3RlIDogTmVlZGVkIGJ5IEhVRl9yZWFkQ1RhYmxlKCkgYW5kIEhVRl9yZWFkRFRhYmxlWD8oKSAu
DQogKi8NCi1zaXplX3QgSFVGX3JlYWRTdGF0c193a3NwKEJZVEUgKmh1ZmZXZWlnaHQsIHNpemVf
dCBod1NpemUsIFUzMiAqcmFua1N0YXRzLCBVMzIgKm5iU3ltYm9sc1B0ciwgVTMyICp0YWJsZUxv
Z1B0ciwgY29uc3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLCBz
aXplX3Qgd29ya3NwYWNlU2l6ZSkNCitzaXplX3QgSU5JVCBIVUZfcmVhZFN0YXRzX3drc3AoQllU
RSAqaHVmZldlaWdodCwgc2l6ZV90IGh3U2l6ZSwgVTMyICpyYW5rU3RhdHMsIFUzMiAqbmJTeW1i
b2xzUHRyLCBVMzIgKnRhYmxlTG9nUHRyLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXpl
LCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJVTMyIHdlaWdo
dFRvdGFsOw0KIAljb25zdCBCWVRFICppcCA9IChjb25zdCBCWVRFICopc3JjOw0KZGlmZiAtLWdp
dCBhL3hlbi9jb21tb24venN0ZC9lcnJvcl9wcml2YXRlLmggYi94ZW4vY29tbW9uL3pzdGQvZXJy
b3JfcHJpdmF0ZS5oDQppbmRleCAxYTYwYjMxZjcwLi5lY2JmZTUxZGZiIDEwMDY0NA0KLS0tIGEv
eGVuL2NvbW1vbi96c3RkL2Vycm9yX3ByaXZhdGUuaA0KKysrIGIveGVuL2NvbW1vbi96c3RkL2Vy
cm9yX3ByaXZhdGUuaA0KQEAgLTIyLDggKzIyLDggQEANCiAvKiAqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqDQogKiAgRGVwZW5kZW5jaWVzDQogKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLSNpbmNsdWRlIDxsaW51eC90eXBlcy5oPiAv
KiBzaXplX3QgKi8NCi0jaW5jbHVkZSA8bGludXgvenN0ZC5oPiAgLyogZW51bSBsaXN0ICovDQor
I2luY2x1ZGUgPHhlbi90eXBlcy5oPiAvKiBzaXplX3QgKi8NCisjaW5jbHVkZSA8eGVuL3pzdGQu
aD4gIC8qIGVudW0gbGlzdCAqLw0KIA0KIC8qICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioNCiAqICBDb21waWxlci1zcGVjaWZpYw0KZGlmZiAtLWdpdCBhL3hlbi9jb21t
b24venN0ZC9mc2UuaCBiL3hlbi9jb21tb24venN0ZC9mc2UuaA0KaW5kZXggNzQ2MGFiMDRiMS4u
Yjg2NzE3YzM0ZCAxMDA2NDQNCi0tLSBhL3hlbi9jb21tb24venN0ZC9mc2UuaA0KKysrIGIveGVu
L2NvbW1vbi96c3RkL2ZzZS5oDQpAQCAtNDMsNyArNDMsNyBAQA0KIC8qLSoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqDQogKiAgRGVwZW5kZW5jaWVzDQogKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLSNpbmNsdWRlIDxsaW51eC90eXBl
cy5oPiAvKiBzaXplX3QsIHB0cmRpZmZfdCAqLw0KKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4gLyog
c2l6ZV90LCBwdHJkaWZmX3QgKi8NCiANCiAvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKg0KICogIEZTRV9QVUJMSUNfQVBJIDogY29udHJvbCBsaWJyYXJ5IHN5bWJv
bHMgdmlzaWJpbGl0eQ0KZGlmZiAtLWdpdCBhL3hlbi9jb21tb24venN0ZC9mc2VfZGVjb21wcmVz
cy5jIGIveGVuL2NvbW1vbi96c3RkL2ZzZV9kZWNvbXByZXNzLmMNCmluZGV4IDBiMzUzNTMwZmIu
LjA0MWE1YTFmMGEgMTAwNjQ0DQotLS0gYS94ZW4vY29tbW9uL3pzdGQvZnNlX2RlY29tcHJlc3Mu
Yw0KKysrIGIveGVuL2NvbW1vbi96c3RkL2ZzZV9kZWNvbXByZXNzLmMNCkBAIC00MCw3ICs0MCw3
IEBADQogLyogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioNCiAqICBDb21waWxlciBzcGVjaWZpY3MNCiAqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLSNkZWZp
bmUgRk9SQ0VfSU5MSU5FIHN0YXRpYyBfX2Fsd2F5c19pbmxpbmUNCisjZGVmaW5lIEZPUkNFX0lO
TElORSBzdGF0aWMgYWx3YXlzX2lubGluZQ0KIA0KIC8qICoqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQogKiAgSW5jbHVkZXMNCkBA
IC00OCw5ICs0OCw3IEBADQogI2luY2x1ZGUgImJpdHN0cmVhbS5oIg0KICNpbmNsdWRlICJmc2Uu
aCINCiAjaW5jbHVkZSAienN0ZF9pbnRlcm5hbC5oIg0KLSNpbmNsdWRlIDxsaW51eC9jb21waWxl
ci5oPg0KLSNpbmNsdWRlIDxsaW51eC9rZXJuZWwuaD4NCi0jaW5jbHVkZSA8bGludXgvc3RyaW5n
Lmg+IC8qIG1lbWNweSwgbWVtc2V0ICovDQorI2luY2x1ZGUgPHhlbi9zdHJpbmcuaD4gLyogbWVt
Y3B5LCBtZW1zZXQgKi8NCiANCiAvKiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KICogIEVycm9yIE1hbmFnZW1lbnQNCkBAIC04
NSw3ICs4Myw3IEBADQogDQogLyogRnVuY3Rpb24gdGVtcGxhdGVzICovDQogDQotc2l6ZV90IEZT
RV9idWlsZERUYWJsZV93a3NwKEZTRV9EVGFibGUgKmR0LCBjb25zdCBzaG9ydCAqbm9ybWFsaXpl
ZENvdW50ZXIsIHVuc2lnbmVkIG1heFN5bWJvbFZhbHVlLCB1bnNpZ25lZCB0YWJsZUxvZywgdm9p
ZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCitzaXplX3QgSU5JVCBGU0VfYnVp
bGREVGFibGVfd2tzcChGU0VfRFRhYmxlICpkdCwgY29uc3Qgc2hvcnQgKm5vcm1hbGl6ZWRDb3Vu
dGVyLCB1bnNpZ25lZCBtYXhTeW1ib2xWYWx1ZSwgdW5zaWduZWQgdGFibGVMb2csIHZvaWQgKndv
cmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQogew0KIAl2b2lkICpjb25zdCB0ZFB0ciA9
IGR0ICsgMTsgLyogYmVjYXVzZSAqZHQgaXMgdW5zaWduZWQsIDMyLWJpdHMgYWxpZ25lZCBvbiAz
Mi1iaXRzICovDQogCUZTRV9ERUNPREVfVFlQRSAqY29uc3QgdGFibGVEZWNvZGUgPSAoRlNFX0RF
Q09ERV9UWVBFICopKHRkUHRyKTsNCkBAIC0xNjAsNyArMTU4LDcgQEAgc2l6ZV90IEZTRV9idWls
ZERUYWJsZV93a3NwKEZTRV9EVGFibGUgKmR0LCBjb25zdCBzaG9ydCAqbm9ybWFsaXplZENvdW50
ZXIsIHVuc2kNCiAvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqDQogKiAgRGVjb21wcmVzc2lvbiAoQnl0ZSBzeW1ib2xzKQ0KICoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCi1zaXpl
X3QgRlNFX2J1aWxkRFRhYmxlX3JsZShGU0VfRFRhYmxlICpkdCwgQllURSBzeW1ib2xWYWx1ZSkN
CitzaXplX3QgSU5JVCBGU0VfYnVpbGREVGFibGVfcmxlKEZTRV9EVGFibGUgKmR0LCBCWVRFIHN5
bWJvbFZhbHVlKQ0KIHsNCiAJdm9pZCAqcHRyID0gZHQ7DQogCUZTRV9EVGFibGVIZWFkZXIgKmNv
bnN0IERUYWJsZUggPSAoRlNFX0RUYWJsZUhlYWRlciAqKXB0cjsNCkBAIC0xNzcsNyArMTc1LDcg
QEAgc2l6ZV90IEZTRV9idWlsZERUYWJsZV9ybGUoRlNFX0RUYWJsZSAqZHQsIEJZVEUgc3ltYm9s
VmFsdWUpDQogCXJldHVybiAwOw0KIH0NCiANCi1zaXplX3QgRlNFX2J1aWxkRFRhYmxlX3JhdyhG
U0VfRFRhYmxlICpkdCwgdW5zaWduZWQgbmJCaXRzKQ0KK3NpemVfdCBJTklUIEZTRV9idWlsZERU
YWJsZV9yYXcoRlNFX0RUYWJsZSAqZHQsIHVuc2lnbmVkIG5iQml0cykNCiB7DQogCXZvaWQgKnB0
ciA9IGR0Ow0KIAlGU0VfRFRhYmxlSGVhZGVyICpjb25zdCBEVGFibGVIID0gKEZTRV9EVGFibGVI
ZWFkZXIgKilwdHI7DQpAQCAtMjcyLDcgKzI3MCw3IEBAIEZPUkNFX0lOTElORSBzaXplX3QgRlNF
X2RlY29tcHJlc3NfdXNpbmdEVGFibGVfZ2VuZXJpYyh2b2lkICpkc3QsIHNpemVfdCBtYXhEc3RT
DQogCXJldHVybiBvcCAtIG9zdGFydDsNCiB9DQogDQotc2l6ZV90IEZTRV9kZWNvbXByZXNzX3Vz
aW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90IG9yaWdpbmFsU2l6ZSwgY29uc3Qgdm9pZCAqY1Ny
Yywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBGU0VfRFRhYmxlICpkdCkNCitzaXplX3QgSU5JVCBG
U0VfZGVjb21wcmVzc191c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBvcmlnaW5hbFNpemUs
IGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3QgRlNFX0RUYWJsZSAqZHQp
DQogew0KIAljb25zdCB2b2lkICpwdHIgPSBkdDsNCiAJY29uc3QgRlNFX0RUYWJsZUhlYWRlciAq
RFRhYmxlSCA9IChjb25zdCBGU0VfRFRhYmxlSGVhZGVyICopcHRyOw0KQEAgLTI4NCw3ICsyODIs
NyBAQCBzaXplX3QgRlNFX2RlY29tcHJlc3NfdXNpbmdEVGFibGUodm9pZCAqZHN0LCBzaXplX3Qg
b3JpZ2luYWxTaXplLCBjb25zdCB2b2lkICpjUw0KIAlyZXR1cm4gRlNFX2RlY29tcHJlc3NfdXNp
bmdEVGFibGVfZ2VuZXJpYyhkc3QsIG9yaWdpbmFsU2l6ZSwgY1NyYywgY1NyY1NpemUsIGR0LCAw
KTsNCiB9DQogDQotc2l6ZV90IEZTRV9kZWNvbXByZXNzX3drc3Aodm9pZCAqZHN0LCBzaXplX3Qg
ZHN0Q2FwYWNpdHksIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgdW5zaWduZWQg
bWF4TG9nLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3NpemVfdCBJ
TklUIEZTRV9kZWNvbXByZXNzX3drc3Aodm9pZCAqZHN0LCBzaXplX3QgZHN0Q2FwYWNpdHksIGNv
bnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgdW5zaWduZWQgbWF4TG9nLCB2b2lkICp3
b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJY29uc3QgQllURSAqY29uc3Qg
aXN0YXJ0ID0gKGNvbnN0IEJZVEUgKiljU3JjOw0KIAljb25zdCBCWVRFICppcCA9IGlzdGFydDsN
CmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvaHVmLmggYi94ZW4vY29tbW9uL3pzdGQvaHVm
LmgNCmluZGV4IDIxNDNkYTI4ZDkuLmE5ZDUyMmM3YmIgMTAwNjQ0DQotLS0gYS94ZW4vY29tbW9u
L3pzdGQvaHVmLmgNCisrKyBiL3hlbi9jb21tb24venN0ZC9odWYuaA0KQEAgLTQxLDcgKzQxLDcg
QEANCiAjZGVmaW5lIEhVRl9IXzI5ODczNDIzNA0KIA0KIC8qICoqKiBEZXBlbmRlbmNpZXMgKioq
ICovDQotI2luY2x1ZGUgPGxpbnV4L3R5cGVzLmg+IC8qIHNpemVfdCAqLw0KKyNpbmNsdWRlIDx4
ZW4vdHlwZXMuaD4gLyogc2l6ZV90ICovDQogDQogLyogKioqICAgVG9vbCBmdW5jdGlvbnMgKioq
ICovDQogI2RlZmluZSBIVUZfQkxPQ0tTSVpFX01BWCAoMTI4ICogMTAyNCkgLyoqPCBtYXhpbXVt
IGlucHV0IHNpemUgZm9yIGEgc2luZ2xlIGJsb2NrIGNvbXByZXNzZWQgd2l0aCBIVUZfY29tcHJl
c3MgKi8NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvaHVmX2RlY29tcHJlc3MuYyBiL3hl
bi9jb21tb24venN0ZC9odWZfZGVjb21wcmVzcy5jDQppbmRleCA2NTI2NDgyMDQ3Li5mNzk2MDNh
MTJmIDEwMDY0NA0KLS0tIGEveGVuL2NvbW1vbi96c3RkL2h1Zl9kZWNvbXByZXNzLmMNCisrKyBi
L3hlbi9jb21tb24venN0ZC9odWZfZGVjb21wcmVzcy5jDQpAQCAtNDAsNyArNDAsNyBAQA0KIC8q
ICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqDQogKiAgQ29tcGlsZXIgc3BlY2lmaWNzDQogKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCi0jZGVmaW5lIEZPUkNF
X0lOTElORSBzdGF0aWMgX19hbHdheXNfaW5saW5lDQorI2RlZmluZSBGT1JDRV9JTkxJTkUgc3Rh
dGljIGFsd2F5c19pbmxpbmUNCiANCiAvKiAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KICogIERlcGVuZGVuY2llcw0KQEAgLTQ4
LDkgKzQ4LDcgQEANCiAjaW5jbHVkZSAiYml0c3RyZWFtLmgiIC8qIEJJVF8qICovDQogI2luY2x1
ZGUgImZzZS5oIiAgICAgICAvKiBoZWFkZXIgY29tcHJlc3Npb24gKi8NCiAjaW5jbHVkZSAiaHVm
LmgiDQotI2luY2x1ZGUgPGxpbnV4L2NvbXBpbGVyLmg+DQotI2luY2x1ZGUgPGxpbnV4L2tlcm5l
bC5oPg0KLSNpbmNsdWRlIDxsaW51eC9zdHJpbmcuaD4gLyogbWVtY3B5LCBtZW1zZXQgKi8NCisj
aW5jbHVkZSA8eGVuL3N0cmluZy5oPiAvKiBtZW1jcHksIG1lbXNldCAqLw0KIA0KIC8qICoqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
DQogKiAgRXJyb3IgTWFuYWdlbWVudA0KQEAgLTcxLDcgKzY5LDcgQEAgdHlwZWRlZiBzdHJ1Y3Qg
ew0KIAlCWVRFIHJlc2VydmVkOw0KIH0gRFRhYmxlRGVzYzsNCiANCi1zdGF0aWMgRFRhYmxlRGVz
YyBIVUZfZ2V0RFRhYmxlRGVzYyhjb25zdCBIVUZfRFRhYmxlICp0YWJsZSkNCitzdGF0aWMgRFRh
YmxlRGVzYyBJTklUIEhVRl9nZXREVGFibGVEZXNjKGNvbnN0IEhVRl9EVGFibGUgKnRhYmxlKQ0K
IHsNCiAJRFRhYmxlRGVzYyBkdGQ7DQogCW1lbWNweSgmZHRkLCB0YWJsZSwgc2l6ZW9mKGR0ZCkp
Ow0KQEAgLTg3LDcgKzg1LDcgQEAgdHlwZWRlZiBzdHJ1Y3Qgew0KIAlCWVRFIG5iQml0czsNCiB9
IEhVRl9ERWx0WDI7IC8qIHNpbmdsZS1zeW1ib2wgZGVjb2RpbmcgKi8NCiANCi1zaXplX3QgSFVG
X3JlYWREVGFibGVYMl93a3NwKEhVRl9EVGFibGUgKkRUYWJsZSwgY29uc3Qgdm9pZCAqc3JjLCBz
aXplX3Qgc3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCitz
aXplX3QgSU5JVCBIVUZfcmVhZERUYWJsZVgyX3drc3AoSFVGX0RUYWJsZSAqRFRhYmxlLCBjb25z
dCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jr
c3BhY2VTaXplKQ0KIHsNCiAJVTMyIHRhYmxlTG9nID0gMDsNCiAJVTMyIG5iU3ltYm9scyA9IDA7
DQpAQCAtMTU1LDcgKzE1Myw3IEBAIHNpemVfdCBIVUZfcmVhZERUYWJsZVgyX3drc3AoSFVGX0RU
YWJsZSAqRFRhYmxlLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNTaXplDQogCXJldHVybiBp
U2l6ZTsNCiB9DQogDQotc3RhdGljIEJZVEUgSFVGX2RlY29kZVN5bWJvbFgyKEJJVF9EU3RyZWFt
X3QgKkRzdHJlYW0sIGNvbnN0IEhVRl9ERWx0WDIgKmR0LCBjb25zdCBVMzIgZHRMb2cpDQorc3Rh
dGljIEJZVEUgSU5JVCBIVUZfZGVjb2RlU3ltYm9sWDIoQklUX0RTdHJlYW1fdCAqRHN0cmVhbSwg
Y29uc3QgSFVGX0RFbHRYMiAqZHQsIGNvbnN0IFUzMiBkdExvZykNCiB7DQogCXNpemVfdCBjb25z
dCB2YWwgPSBCSVRfbG9va0JpdHNGYXN0KERzdHJlYW0sIGR0TG9nKTsgLyogbm90ZSA6IGR0TG9n
ID49IDEgKi8NCiAJQllURSBjb25zdCBjID0gZHRbdmFsXS5ieXRlOw0KQEAgLTE5Niw3ICsxOTQs
NyBAQCBGT1JDRV9JTkxJTkUgc2l6ZV90IEhVRl9kZWNvZGVTdHJlYW1YMihCWVRFICpwLCBCSVRf
RFN0cmVhbV90ICpjb25zdCBiaXREUHRyLCBCWQ0KIAlyZXR1cm4gcEVuZCAtIHBTdGFydDsNCiB9
DQogDQotc3RhdGljIHNpemVfdCBIVUZfZGVjb21wcmVzczFYMl91c2luZ0RUYWJsZV9pbnRlcm5h
bCh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1Ny
Y1NpemUsIGNvbnN0IEhVRl9EVGFibGUgKkRUYWJsZSkNCitzdGF0aWMgc2l6ZV90IElOSVQgSFVG
X2RlY29tcHJlc3MxWDJfdXNpbmdEVGFibGVfaW50ZXJuYWwodm9pZCAqZHN0LCBzaXplX3QgZHN0
U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxl
ICpEVGFibGUpDQogew0KIAlCWVRFICpvcCA9IChCWVRFICopZHN0Ow0KIAlCWVRFICpjb25zdCBv
ZW5kID0gb3AgKyBkc3RTaXplOw0KQEAgLTIyMSw3ICsyMTksNyBAQCBzdGF0aWMgc2l6ZV90IEhV
Rl9kZWNvbXByZXNzMVgyX3VzaW5nRFRhYmxlX2ludGVybmFsKHZvaWQgKmRzdCwgc2l6ZV90IGRz
dFNpemUsDQogCXJldHVybiBkc3RTaXplOw0KIH0NCiANCi1zaXplX3QgSFVGX2RlY29tcHJlc3Mx
WDJfdXNpbmdEVGFibGUodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1Ny
Yywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorc2l6ZV90IElO
SVQgSFVGX2RlY29tcHJlc3MxWDJfdXNpbmdEVGFibGUodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6
ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpE
VGFibGUpDQogew0KIAlEVGFibGVEZXNjIGR0ZCA9IEhVRl9nZXREVGFibGVEZXNjKERUYWJsZSk7
DQogCWlmIChkdGQudGFibGVUeXBlICE9IDApDQpAQCAtMjI5LDcgKzIyNyw3IEBAIHNpemVfdCBI
VUZfZGVjb21wcmVzczFYMl91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBj
b25zdCB2b2lkICpjU3JjDQogCXJldHVybiBIVUZfZGVjb21wcmVzczFYMl91c2luZ0RUYWJsZV9p
bnRlcm5hbChkc3QsIGRzdFNpemUsIGNTcmMsIGNTcmNTaXplLCBEVGFibGUpOw0KIH0NCiANCi1z
aXplX3QgSFVGX2RlY29tcHJlc3MxWDJfREN0eF93a3NwKEhVRl9EVGFibGUgKkRDdHgsIHZvaWQg
KmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwg
dm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCitzaXplX3QgSU5JVCBIVUZf
ZGVjb21wcmVzczFYMl9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqREN0eCwgdm9pZCAqZHN0LCBzaXpl
X3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCB2b2lkICp3b3Jr
c3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJY29uc3QgQllURSAqaXAgPSAoY29u
c3QgQllURSAqKWNTcmM7DQogDQpAQCAtMjQ0LDcgKzI0Miw3IEBAIHNpemVfdCBIVUZfZGVjb21w
cmVzczFYMl9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqREN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0
U2l6ZSwNCiAJcmV0dXJuIEhVRl9kZWNvbXByZXNzMVgyX3VzaW5nRFRhYmxlX2ludGVybmFsKGRz
dCwgZHN0U2l6ZSwgaXAsIGNTcmNTaXplLCBEQ3R4KTsNCiB9DQogDQotc3RhdGljIHNpemVfdCBI
VUZfZGVjb21wcmVzczRYMl91c2luZ0RUYWJsZV9pbnRlcm5hbCh2b2lkICpkc3QsIHNpemVfdCBk
c3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0IEhVRl9EVGFi
bGUgKkRUYWJsZSkNCitzdGF0aWMgc2l6ZV90IElOSVQgSFVGX2RlY29tcHJlc3M0WDJfdXNpbmdE
VGFibGVfaW50ZXJuYWwodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1Ny
Yywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQogew0KIAkvKiBD
aGVjayAqLw0KIAlpZiAoY1NyY1NpemUgPCAxMCkNCkBAIC0zNTIsNyArMzUwLDcgQEAgc3RhdGlj
IHNpemVfdCBIVUZfZGVjb21wcmVzczRYMl91c2luZ0RUYWJsZV9pbnRlcm5hbCh2b2lkICpkc3Qs
IHNpemVfdCBkc3RTaXplLA0KIAl9DQogfQ0KIA0KLXNpemVfdCBIVUZfZGVjb21wcmVzczRYMl91
c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBz
aXplX3QgY1NyY1NpemUsIGNvbnN0IEhVRl9EVGFibGUgKkRUYWJsZSkNCitzaXplX3QgSU5JVCBI
VUZfZGVjb21wcmVzczRYMl91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBj
b25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0IEhVRl9EVGFibGUgKkRUYWJs
ZSkNCiB7DQogCURUYWJsZURlc2MgZHRkID0gSFVGX2dldERUYWJsZURlc2MoRFRhYmxlKTsNCiAJ
aWYgKGR0ZC50YWJsZVR5cGUgIT0gMCkNCkBAIC0zNjAsNyArMzU4LDcgQEAgc2l6ZV90IEhVRl9k
ZWNvbXByZXNzNFgyX3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0
IHZvaWQgKmNTcmMNCiAJcmV0dXJuIEhVRl9kZWNvbXByZXNzNFgyX3VzaW5nRFRhYmxlX2ludGVy
bmFsKGRzdCwgZHN0U2l6ZSwgY1NyYywgY1NyY1NpemUsIERUYWJsZSk7DQogfQ0KIA0KLXNpemVf
dCBIVUZfZGVjb21wcmVzczRYMl9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqZGN0eCwgdm9pZCAqZHN0
LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCB2b2lk
ICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3NpemVfdCBJTklUIEhVRl9kZWNv
bXByZXNzNFgyX0RDdHhfd2tzcChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBk
c3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFj
ZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQogew0KIAljb25zdCBCWVRFICppcCA9IChjb25zdCBC
WVRFICopY1NyYzsNCiANCkBAIC0zOTEsNyArMzg5LDcgQEAgdHlwZWRlZiBzdHJ1Y3Qgew0KIA0K
IC8qIEhVRl9maWxsRFRhYmxlWDRMZXZlbDIoKSA6DQogICogYHJhbmtWYWxPcmlnaW5gIG11c3Qg
YmUgYSB0YWJsZSBvZiBhdCBsZWFzdCAoSFVGX1RBQkxFTE9HX01BWCArIDEpIFUzMiAqLw0KLXN0
YXRpYyB2b2lkIEhVRl9maWxsRFRhYmxlWDRMZXZlbDIoSFVGX0RFbHRYNCAqRFRhYmxlLCBVMzIg
c2l6ZUxvZywgY29uc3QgVTMyIGNvbnN1bWVkLCBjb25zdCBVMzIgKnJhbmtWYWxPcmlnaW4sIGNv
bnN0IGludCBtaW5XZWlnaHQsDQorc3RhdGljIHZvaWQgSU5JVCBIVUZfZmlsbERUYWJsZVg0TGV2
ZWwyKEhVRl9ERWx0WDQgKkRUYWJsZSwgVTMyIHNpemVMb2csIGNvbnN0IFUzMiBjb25zdW1lZCwg
Y29uc3QgVTMyICpyYW5rVmFsT3JpZ2luLCBjb25zdCBpbnQgbWluV2VpZ2h0LA0KIAkJCQkgICBj
b25zdCBzb3J0ZWRTeW1ib2xfdCAqc29ydGVkU3ltYm9scywgY29uc3QgVTMyIHNvcnRlZExpc3RT
aXplLCBVMzIgbmJCaXRzQmFzZWxpbmUsIFUxNiBiYXNlU2VxKQ0KIHsNCiAJSFVGX0RFbHRYNCBE
RWx0Ow0KQEAgLTQzNyw3ICs0MzUsNyBAQCBzdGF0aWMgdm9pZCBIVUZfZmlsbERUYWJsZVg0TGV2
ZWwyKEhVRl9ERWx0WDQgKkRUYWJsZSwgVTMyIHNpemVMb2csIGNvbnN0IFUzMiBjbw0KIHR5cGVk
ZWYgVTMyIHJhbmtWYWxfdFtIVUZfVEFCTEVMT0dfTUFYXVtIVUZfVEFCTEVMT0dfTUFYICsgMV07
DQogdHlwZWRlZiBVMzIgcmFua1ZhbENvbF90W0hVRl9UQUJMRUxPR19NQVggKyAxXTsNCiANCi1z
dGF0aWMgdm9pZCBIVUZfZmlsbERUYWJsZVg0KEhVRl9ERWx0WDQgKkRUYWJsZSwgY29uc3QgVTMy
IHRhcmdldExvZywgY29uc3Qgc29ydGVkU3ltYm9sX3QgKnNvcnRlZExpc3QsIGNvbnN0IFUzMiBz
b3J0ZWRMaXN0U2l6ZSwgY29uc3QgVTMyICpyYW5rU3RhcnQsDQorc3RhdGljIHZvaWQgSU5JVCBI
VUZfZmlsbERUYWJsZVg0KEhVRl9ERWx0WDQgKkRUYWJsZSwgY29uc3QgVTMyIHRhcmdldExvZywg
Y29uc3Qgc29ydGVkU3ltYm9sX3QgKnNvcnRlZExpc3QsIGNvbnN0IFUzMiBzb3J0ZWRMaXN0U2l6
ZSwgY29uc3QgVTMyICpyYW5rU3RhcnQsDQogCQkJICAgICByYW5rVmFsX3QgcmFua1ZhbE9yaWdp
biwgY29uc3QgVTMyIG1heFdlaWdodCwgY29uc3QgVTMyIG5iQml0c0Jhc2VsaW5lKQ0KIHsNCiAJ
VTMyIHJhbmtWYWxbSFVGX1RBQkxFTE9HX01BWCArIDFdOw0KQEAgLTQ3OSw3ICs0NzcsNyBAQCBz
dGF0aWMgdm9pZCBIVUZfZmlsbERUYWJsZVg0KEhVRl9ERWx0WDQgKkRUYWJsZSwgY29uc3QgVTMy
IHRhcmdldExvZywgY29uc3Qgc29ydA0KIAl9DQogfQ0KIA0KLXNpemVfdCBIVUZfcmVhZERUYWJs
ZVg0X3drc3AoSFVGX0RUYWJsZSAqRFRhYmxlLCBjb25zdCB2b2lkICpzcmMsIHNpemVfdCBzcmNT
aXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3NpemVfdCBJTklU
IEhVRl9yZWFkRFRhYmxlWDRfd2tzcChIVUZfRFRhYmxlICpEVGFibGUsIGNvbnN0IHZvaWQgKnNy
Yywgc2l6ZV90IHNyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUp
DQogew0KIAlVMzIgdGFibGVMb2csIG1heFcsIHNpemVPZlNvcnQsIG5iU3ltYm9sczsNCiAJRFRh
YmxlRGVzYyBkdGQgPSBIVUZfZ2V0RFRhYmxlRGVzYyhEVGFibGUpOw0KQEAgLTU5Miw3ICs1OTAs
NyBAQCBzaXplX3QgSFVGX3JlYWREVGFibGVYNF93a3NwKEhVRl9EVGFibGUgKkRUYWJsZSwgY29u
c3Qgdm9pZCAqc3JjLCBzaXplX3Qgc3JjU2l6ZQ0KIAlyZXR1cm4gaVNpemU7DQogfQ0KIA0KLXN0
YXRpYyBVMzIgSFVGX2RlY29kZVN5bWJvbFg0KHZvaWQgKm9wLCBCSVRfRFN0cmVhbV90ICpEU3Ry
ZWFtLCBjb25zdCBIVUZfREVsdFg0ICpkdCwgY29uc3QgVTMyIGR0TG9nKQ0KK3N0YXRpYyBVMzIg
SU5JVCBIVUZfZGVjb2RlU3ltYm9sWDQodm9pZCAqb3AsIEJJVF9EU3RyZWFtX3QgKkRTdHJlYW0s
IGNvbnN0IEhVRl9ERWx0WDQgKmR0LCBjb25zdCBVMzIgZHRMb2cpDQogew0KIAlzaXplX3QgY29u
c3QgdmFsID0gQklUX2xvb2tCaXRzRmFzdChEU3RyZWFtLCBkdExvZyk7IC8qIG5vdGUgOiBkdExv
ZyA+PSAxICovDQogCW1lbWNweShvcCwgZHQgKyB2YWwsIDIpOw0KQEAgLTYwMCw3ICs1OTgsNyBA
QCBzdGF0aWMgVTMyIEhVRl9kZWNvZGVTeW1ib2xYNCh2b2lkICpvcCwgQklUX0RTdHJlYW1fdCAq
RFN0cmVhbSwgY29uc3QgSFVGX0RFbHRYNA0KIAlyZXR1cm4gZHRbdmFsXS5sZW5ndGg7DQogfQ0K
IA0KLXN0YXRpYyBVMzIgSFVGX2RlY29kZUxhc3RTeW1ib2xYNCh2b2lkICpvcCwgQklUX0RTdHJl
YW1fdCAqRFN0cmVhbSwgY29uc3QgSFVGX0RFbHRYNCAqZHQsIGNvbnN0IFUzMiBkdExvZykNCitz
dGF0aWMgVTMyIElOSVQgSFVGX2RlY29kZUxhc3RTeW1ib2xYNCh2b2lkICpvcCwgQklUX0RTdHJl
YW1fdCAqRFN0cmVhbSwgY29uc3QgSFVGX0RFbHRYNCAqZHQsIGNvbnN0IFUzMiBkdExvZykNCiB7
DQogCXNpemVfdCBjb25zdCB2YWwgPSBCSVRfbG9va0JpdHNGYXN0KERTdHJlYW0sIGR0TG9nKTsg
Lyogbm90ZSA6IGR0TG9nID49IDEgKi8NCiAJbWVtY3B5KG9wLCBkdCArIHZhbCwgMSk7DQpAQCAt
NjUyLDcgKzY1MCw3IEBAIEZPUkNFX0lOTElORSBzaXplX3QgSFVGX2RlY29kZVN0cmVhbVg0KEJZ
VEUgKnAsIEJJVF9EU3RyZWFtX3QgKmJpdERQdHIsIEJZVEUgKmNvDQogCXJldHVybiBwIC0gcFN0
YXJ0Ow0KIH0NCiANCi1zdGF0aWMgc2l6ZV90IEhVRl9kZWNvbXByZXNzMVg0X3VzaW5nRFRhYmxl
X2ludGVybmFsKHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNp
emVfdCBjU3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAqRFRhYmxlKQ0KK3N0YXRpYyBzaXplX3Qg
SU5JVCBIVUZfZGVjb21wcmVzczFYNF91c2luZ0RUYWJsZV9pbnRlcm5hbCh2b2lkICpkc3QsIHNp
emVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0IEhV
Rl9EVGFibGUgKkRUYWJsZSkNCiB7DQogCUJJVF9EU3RyZWFtX3QgYml0RDsNCiANCkBAIC02ODEs
NyArNjc5LDcgQEAgc3RhdGljIHNpemVfdCBIVUZfZGVjb21wcmVzczFYNF91c2luZ0RUYWJsZV9p
bnRlcm5hbCh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLA0KIAlyZXR1cm4gZHN0U2l6ZTsNCiB9
DQogDQotc2l6ZV90IEhVRl9kZWNvbXByZXNzMVg0X3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6
ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3QgSFVG
X0RUYWJsZSAqRFRhYmxlKQ0KK3NpemVfdCBJTklUIEhVRl9kZWNvbXByZXNzMVg0X3VzaW5nRFRh
YmxlKHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBj
U3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAqRFRhYmxlKQ0KIHsNCiAJRFRhYmxlRGVzYyBkdGQg
PSBIVUZfZ2V0RFRhYmxlRGVzYyhEVGFibGUpOw0KIAlpZiAoZHRkLnRhYmxlVHlwZSAhPSAxKQ0K
QEAgLTY4OSw3ICs2ODcsNyBAQCBzaXplX3QgSFVGX2RlY29tcHJlc3MxWDRfdXNpbmdEVGFibGUo
dm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYw0KIAlyZXR1cm4gSFVG
X2RlY29tcHJlc3MxWDRfdXNpbmdEVGFibGVfaW50ZXJuYWwoZHN0LCBkc3RTaXplLCBjU3JjLCBj
U3JjU2l6ZSwgRFRhYmxlKTsNCiB9DQogDQotc2l6ZV90IEhVRl9kZWNvbXByZXNzMVg0X0RDdHhf
d2tzcChIVUZfRFRhYmxlICpEQ3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2
b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtz
cGFjZVNpemUpDQorc2l6ZV90IElOSVQgSFVGX2RlY29tcHJlc3MxWDRfREN0eF93a3NwKEhVRl9E
VGFibGUgKkRDdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMs
IHNpemVfdCBjU3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkN
CiB7DQogCWNvbnN0IEJZVEUgKmlwID0gKGNvbnN0IEJZVEUgKiljU3JjOw0KIA0KQEAgLTcwNCw3
ICs3MDIsNyBAQCBzaXplX3QgSFVGX2RlY29tcHJlc3MxWDRfREN0eF93a3NwKEhVRl9EVGFibGUg
KkRDdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsDQogCXJldHVybiBIVUZfZGVjb21wcmVz
czFYNF91c2luZ0RUYWJsZV9pbnRlcm5hbChkc3QsIGRzdFNpemUsIGlwLCBjU3JjU2l6ZSwgREN0
eCk7DQogfQ0KIA0KLXN0YXRpYyBzaXplX3QgSFVGX2RlY29tcHJlc3M0WDRfdXNpbmdEVGFibGVf
aW50ZXJuYWwodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6
ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorc3RhdGljIHNpemVfdCBJ
TklUIEhVRl9kZWNvbXByZXNzNFg0X3VzaW5nRFRhYmxlX2ludGVybmFsKHZvaWQgKmRzdCwgc2l6
ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3QgSFVG
X0RUYWJsZSAqRFRhYmxlKQ0KIHsNCiAJaWYgKGNTcmNTaXplIDwgMTApDQogCQlyZXR1cm4gRVJS
T1IoY29ycnVwdGlvbl9kZXRlY3RlZCk7IC8qIHN0cmljdCBtaW5pbXVtIDoganVtcCB0YWJsZSAr
IDEgYnl0ZSBwZXIgc3RyZWFtICovDQpAQCAtODE0LDcgKzgxMiw3IEBAIHN0YXRpYyBzaXplX3Qg
SFVGX2RlY29tcHJlc3M0WDRfdXNpbmdEVGFibGVfaW50ZXJuYWwodm9pZCAqZHN0LCBzaXplX3Qg
ZHN0U2l6ZSwNCiAJfQ0KIH0NCiANCi1zaXplX3QgSFVGX2RlY29tcHJlc3M0WDRfdXNpbmdEVGFi
bGUodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNT
cmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQorc2l6ZV90IElOSVQgSFVGX2RlY29t
cHJlc3M0WDRfdXNpbmdEVGFibGUodm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9p
ZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQogew0K
IAlEVGFibGVEZXNjIGR0ZCA9IEhVRl9nZXREVGFibGVEZXNjKERUYWJsZSk7DQogCWlmIChkdGQu
dGFibGVUeXBlICE9IDEpDQpAQCAtODIyLDcgKzgyMCw3IEBAIHNpemVfdCBIVUZfZGVjb21wcmVz
czRYNF91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpj
U3JjDQogCXJldHVybiBIVUZfZGVjb21wcmVzczRYNF91c2luZ0RUYWJsZV9pbnRlcm5hbChkc3Qs
IGRzdFNpemUsIGNTcmMsIGNTcmNTaXplLCBEVGFibGUpOw0KIH0NCiANCi1zaXplX3QgSFVGX2Rl
Y29tcHJlc3M0WDRfREN0eF93a3NwKEhVRl9EVGFibGUgKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90
IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgdm9pZCAqd29ya3Nw
YWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCitzaXplX3QgSU5JVCBIVUZfZGVjb21wcmVzczRY
NF9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwg
Y29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVf
dCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJY29uc3QgQllURSAqaXAgPSAoY29uc3QgQllURSAqKWNT
cmM7DQogDQpAQCAtODQxLDE0ICs4MzksMTQgQEAgc2l6ZV90IEhVRl9kZWNvbXByZXNzNFg0X0RD
dHhfd2tzcChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLA0KIC8q
IEdlbmVyaWMgZGVjb21wcmVzc2lvbiBzZWxlY3RvciAqLw0KIC8qICoqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqLw0KIA0KLXNpemVfdCBIVUZfZGVjb21wcmVzczFYX3VzaW5nRFRhYmxl
KHZvaWQgKmRzdCwgc2l6ZV90IG1heERzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBj
U3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJsZSAqRFRhYmxlKQ0KK3NpemVfdCBJTklUIEhVRl9kZWNv
bXByZXNzMVhfdXNpbmdEVGFibGUodm9pZCAqZHN0LCBzaXplX3QgbWF4RHN0U2l6ZSwgY29uc3Qg
dm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCBjb25zdCBIVUZfRFRhYmxlICpEVGFibGUpDQog
ew0KIAlEVGFibGVEZXNjIGNvbnN0IGR0ZCA9IEhVRl9nZXREVGFibGVEZXNjKERUYWJsZSk7DQog
CXJldHVybiBkdGQudGFibGVUeXBlID8gSFVGX2RlY29tcHJlc3MxWDRfdXNpbmdEVGFibGVfaW50
ZXJuYWwoZHN0LCBtYXhEc3RTaXplLCBjU3JjLCBjU3JjU2l6ZSwgRFRhYmxlKQ0KIAkJCSAgICAg
OiBIVUZfZGVjb21wcmVzczFYMl91c2luZ0RUYWJsZV9pbnRlcm5hbChkc3QsIG1heERzdFNpemUs
IGNTcmMsIGNTcmNTaXplLCBEVGFibGUpOw0KIH0NCiANCi1zaXplX3QgSFVGX2RlY29tcHJlc3M0
WF91c2luZ0RUYWJsZSh2b2lkICpkc3QsIHNpemVfdCBtYXhEc3RTaXplLCBjb25zdCB2b2lkICpj
U3JjLCBzaXplX3QgY1NyY1NpemUsIGNvbnN0IEhVRl9EVGFibGUgKkRUYWJsZSkNCitzaXplX3Qg
SU5JVCBIVUZfZGVjb21wcmVzczRYX3VzaW5nRFRhYmxlKHZvaWQgKmRzdCwgc2l6ZV90IG1heERz
dFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6ZSwgY29uc3QgSFVGX0RUYWJs
ZSAqRFRhYmxlKQ0KIHsNCiAJRFRhYmxlRGVzYyBjb25zdCBkdGQgPSBIVUZfZ2V0RFRhYmxlRGVz
YyhEVGFibGUpOw0KIAlyZXR1cm4gZHRkLnRhYmxlVHlwZSA/IEhVRl9kZWNvbXByZXNzNFg0X3Vz
aW5nRFRhYmxlX2ludGVybmFsKGRzdCwgbWF4RHN0U2l6ZSwgY1NyYywgY1NyY1NpemUsIERUYWJs
ZSkNCkBAIC04ODQsNyArODgyLDcgQEAgc3RhdGljIGNvbnN0IGFsZ29fdGltZV90IGFsZ29UaW1l
WzE2IC8qIFF1YW50aXphdGlvbiAqL11bMyAvKiBzaW5nbGUsIGRvdWJsZSwgcXUNCiAqICAgYmFz
ZWQgb24gYSBzZXQgb2YgcHJlLWRldGVybWluZWQgbWV0cmljcy4NCiAqICAgQHJldHVybiA6IDA9
PUhVRl9kZWNvbXByZXNzNFgyLCAxPT1IVUZfZGVjb21wcmVzczRYNCAuDQogKiAgIEFzc3VtcHRp
b24gOiAwIDwgY1NyY1NpemUgPCBkc3RTaXplIDw9IDEyOCBLQiAqLw0KLVUzMiBIVUZfc2VsZWN0
RGVjb2RlcihzaXplX3QgZHN0U2l6ZSwgc2l6ZV90IGNTcmNTaXplKQ0KK1UzMiBJTklUIEhVRl9z
ZWxlY3REZWNvZGVyKHNpemVfdCBkc3RTaXplLCBzaXplX3QgY1NyY1NpemUpDQogew0KIAkvKiBk
ZWNvZGVyIHRpbWluZyBldmFsdWF0aW9uICovDQogCVUzMiBjb25zdCBRID0gKFUzMikoY1NyY1Np
emUgKiAxNiAvIGRzdFNpemUpOyAvKiBRIDwgMTYgc2luY2UgZHN0U2l6ZSA+IGNTcmNTaXplICov
DQpAQCAtODk4LDcgKzg5Niw3IEBAIFUzMiBIVUZfc2VsZWN0RGVjb2RlcihzaXplX3QgZHN0U2l6
ZSwgc2l6ZV90IGNTcmNTaXplKQ0KIA0KIHR5cGVkZWYgc2l6ZV90ICgqZGVjb21wcmVzc2lvbkFs
Z28pKHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBj
U3JjU2l6ZSk7DQogDQotc2l6ZV90IEhVRl9kZWNvbXByZXNzNFhfREN0eF93a3NwKEhVRl9EVGFi
bGUgKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNp
emVfdCBjU3JjU2l6ZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCitz
aXplX3QgSU5JVCBIVUZfZGVjb21wcmVzczRYX0RDdHhfd2tzcChIVUZfRFRhYmxlICpkY3R4LCB2
b2lkICpkc3QsIHNpemVfdCBkc3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1Np
emUsIHZvaWQgKndvcmtzcGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQogew0KIAkvKiB2YWxp
ZGF0aW9uIGNoZWNrcyAqLw0KIAlpZiAoZHN0U2l6ZSA9PSAwKQ0KQEAgLTkyMSw3ICs5MTksNyBA
QCBzaXplX3QgSFVGX2RlY29tcHJlc3M0WF9EQ3R4X3drc3AoSFVGX0RUYWJsZSAqZGN0eCwgdm9p
ZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgYw0KIAl9DQogfQ0KIA0KLXNpemVfdCBIVUZfZGVjb21w
cmVzczRYX2h1Zk9ubHlfd2tzcChIVUZfRFRhYmxlICpkY3R4LCB2b2lkICpkc3QsIHNpemVfdCBk
c3RTaXplLCBjb25zdCB2b2lkICpjU3JjLCBzaXplX3QgY1NyY1NpemUsIHZvaWQgKndvcmtzcGFj
ZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorc2l6ZV90IElOSVQgSFVGX2RlY29tcHJlc3M0WF9o
dWZPbmx5X3drc3AoSFVGX0RUYWJsZSAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwg
Y29uc3Qgdm9pZCAqY1NyYywgc2l6ZV90IGNTcmNTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVf
dCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJLyogdmFsaWRhdGlvbiBjaGVja3MgKi8NCiAJaWYgKGRz
dFNpemUgPT0gMCkNCkBAIC05MzYsNyArOTM0LDcgQEAgc2l6ZV90IEhVRl9kZWNvbXByZXNzNFhf
aHVmT25seV93a3NwKEhVRl9EVGFibGUgKmRjdHgsIHZvaWQgKmRzdCwgc2l6ZV90IGRzdFNpemUN
CiAJfQ0KIH0NCiANCi1zaXplX3QgSFVGX2RlY29tcHJlc3MxWF9EQ3R4X3drc3AoSFVGX0RUYWJs
ZSAqZGN0eCwgdm9pZCAqZHN0LCBzaXplX3QgZHN0U2l6ZSwgY29uc3Qgdm9pZCAqY1NyYywgc2l6
ZV90IGNTcmNTaXplLCB2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KK3Np
emVfdCBJTklUIEhVRl9kZWNvbXByZXNzMVhfREN0eF93a3NwKEhVRl9EVGFibGUgKmRjdHgsIHZv
aWQgKmRzdCwgc2l6ZV90IGRzdFNpemUsIGNvbnN0IHZvaWQgKmNTcmMsIHNpemVfdCBjU3JjU2l6
ZSwgdm9pZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCiB7DQogCS8qIHZhbGlk
YXRpb24gY2hlY2tzICovDQogCWlmIChkc3RTaXplID09IDApDQpkaWZmIC0tZ2l0IGEveGVuL2Nv
bW1vbi96c3RkL21lbS5oIGIveGVuL2NvbW1vbi96c3RkL21lbS5oDQppbmRleCA5M2Q3YTJjMzc3
Li5kMmZhNDQ0Njg3IDEwMDY0NA0KLS0tIGEveGVuL2NvbW1vbi96c3RkL21lbS5oDQorKysgYi94
ZW4vY29tbW9uL3pzdGQvbWVtLmgNCkBAIC0yMCw5ICsyMCw5IEBADQogLyotKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KICogIERlcGVuZGVuY2llcw0KICoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCi0jaW5jbHVkZSA8YXNtL3VuYWxp
Z25lZC5oPg0KLSNpbmNsdWRlIDxsaW51eC9zdHJpbmcuaD4gLyogbWVtY3B5ICovDQotI2luY2x1
ZGUgPGxpbnV4L3R5cGVzLmg+ICAvKiBzaXplX3QsIHB0cmRpZmZfdCAqLw0KKyNpbmNsdWRlIDx4
ZW4vc3RyaW5nLmg+IC8qIG1lbWNweSAqLw0KKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4gIC8qIHNp
emVfdCwgcHRyZGlmZl90ICovDQorI2luY2x1ZGUgInByaXZhdGUuaCINCiANCiAvKi0qKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQogKiAgQ29tcGlsZXIgc3BlY2lmaWNz
DQpkaWZmIC0tZ2l0IGEveGVuL2NvbW1vbi96c3RkL3ByaXZhdGUuaCBiL3hlbi9jb21tb24venN0
ZC9wcml2YXRlLmgNCm5ldyBmaWxlIG1vZGUgMTAwNjQ0DQppbmRleCAwMDAwMDAwMDAwLi5mYWM0
ZDNjMDk1DQotLS0gL2Rldi9udWxsDQorKysgYi94ZW4vY29tbW9uL3pzdGQvcHJpdmF0ZS5oDQpA
QCAtMCwwICsxLDEwNSBAQA0KKyNpZm5kZWYgWlNURF9QUklWQVRFX0gNCisjZGVmaW5lIFpTVERf
UFJJVkFURV9IDQorDQorI2luY2x1ZGUgPGFzbS9ieXRlb3JkZXIuaD4NCisjaW5jbHVkZSA8eGVu
L2tlcm5lbC5oPg0KKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4NCisNCit0eXBlZGVmIHNzaXplX3Qg
X19hdHRyaWJ1dGVfXygoX19tb2RlX18oX19wb2ludGVyX18pKSkgcHRyZGlmZl90Ow0KKw0KKy8q
IGZyb20ga2VybmVsIGluY2x1ZGUvbGludXgvdW5hbGlnbmVkL2FjY2Vzc19vay5oICovDQorDQor
c3RhdGljIGFsd2F5c19pbmxpbmUgdTE2IGdldF91bmFsaWduZWRfbGUxNihjb25zdCB2b2lkICpw
KQ0KK3sNCisJcmV0dXJuIGxlMTZfdG9fY3B1cCgoX19sZTE2ICopcCk7DQorfQ0KKw0KK3N0YXRp
YyBhbHdheXNfaW5saW5lIHUzMiBnZXRfdW5hbGlnbmVkX2xlMzIoY29uc3Qgdm9pZCAqcCkNCit7
DQorCXJldHVybiBsZTMyX3RvX2NwdXAoKF9fbGUzMiAqKXApOw0KK30NCisNCitzdGF0aWMgYWx3
YXlzX2lubGluZSB1NjQgZ2V0X3VuYWxpZ25lZF9sZTY0KGNvbnN0IHZvaWQgKnApDQorew0KKwly
ZXR1cm4gbGU2NF90b19jcHVwKChfX2xlNjQgKilwKTsNCit9DQorDQorc3RhdGljIGFsd2F5c19p
bmxpbmUgdTMyIGdldF91bmFsaWduZWRfYmUzMihjb25zdCB2b2lkICpwKQ0KK3sNCisgICAgICAg
IHJldHVybiBiZTMyX3RvX2NwdXAoKF9fYmUzMiAqKXApOw0KK30NCisNCitzdGF0aWMgYWx3YXlz
X2lubGluZSB1NjQgZ2V0X3VuYWxpZ25lZF9iZTY0KGNvbnN0IHZvaWQgKnApDQorew0KKyAgICAg
ICAgcmV0dXJuIGJlNjRfdG9fY3B1cCgoX19iZTY0ICopcCk7DQorfQ0KKw0KK3N0YXRpYyBhbHdh
eXNfaW5saW5lIHZvaWQgcHV0X3VuYWxpZ25lZF9sZTE2KHUxNiB2YWwsIHZvaWQgKnApDQorew0K
KwkqKChfX2xlMTYgKilwKSA9IGNwdV90b19sZTE2KHZhbCk7DQorfQ0KKw0KK3N0YXRpYyBhbHdh
eXNfaW5saW5lIHZvaWQgcHV0X3VuYWxpZ25lZF9sZTMyKHUzMiB2YWwsIHZvaWQgKnApDQorew0K
KwkqKChfX2xlMzIgKilwKSA9IGNwdV90b19sZTMyKHZhbCk7DQorfQ0KKw0KK3N0YXRpYyBhbHdh
eXNfaW5saW5lIHZvaWQgcHV0X3VuYWxpZ25lZF9sZTY0KHU2NCB2YWwsIHZvaWQgKnApDQorew0K
KwkqKChfX2xlNjQgKilwKSA9IGNwdV90b19sZTY0KHZhbCk7DQorfQ0KKw0KK3N0YXRpYyBhbHdh
eXNfaW5saW5lIHZvaWQgcHV0X3VuYWxpZ25lZF9iZTMyKHUzMiB2YWwsIHZvaWQgKnApDQorew0K
KyAgICAgICAgKigoX19iZTMyICopcCkgPSBjcHVfdG9fYmUzMih2YWwpOw0KK30NCisNCitzdGF0
aWMgYWx3YXlzX2lubGluZSB2b2lkIHB1dF91bmFsaWduZWRfYmU2NCh1NjQgdmFsLCB2b2lkICpw
KQ0KK3sNCisgICAgICAgICooKF9fYmU2NCAqKXApID0gY3B1X3RvX2JlNjQodmFsKTsNCit9DQor
DQorDQorLyogZnJvbSBrZXJuZWwgaW5jbHVkZS9hc20tZ2VuZXJpYy91bmFsaWduZWQuaCB3aXRo
IGxpbnV4L3VuYWxpZ25lZC9nZW5lcmljLmgNCisgICBhc3N1bWluZyBsaXR0bGUgZW5kaWFuICov
DQorDQorZXh0ZXJuIHZvaWQgX19iYWRfdW5hbGlnbmVkX2FjY2Vzc19zaXplKHZvaWQpOw0KKw0K
KyNkZWZpbmUgZ2V0X3VuYWxpZ25lZChwdHIpICgoX19mb3JjZSB0eXBlb2YoKihwdHIpKSkoeyAg
ICAgICAgICAgICAgICAgICAgIFwNCisgICAgICAgIF9fYnVpbHRpbl9jaG9vc2VfZXhwcihzaXpl
b2YoKihwdHIpKSA9PSAxLCAqKHB0ciksICAgICAgICAgICAgICAgICAgICAgIFwNCisgICAgICAg
IF9fYnVpbHRpbl9jaG9vc2VfZXhwcihzaXplb2YoKihwdHIpKSA9PSAyLCBnZXRfdW5hbGlnbmVk
X2xlMTYoKHB0cikpLCAgIFwNCisgICAgICAgIF9fYnVpbHRpbl9jaG9vc2VfZXhwcihzaXplb2Yo
KihwdHIpKSA9PSA0LCBnZXRfdW5hbGlnbmVkX2xlMzIoKHB0cikpLCAgIFwNCisgICAgICAgIF9f
YnVpbHRpbl9jaG9vc2VfZXhwcihzaXplb2YoKihwdHIpKSA9PSA4LCBnZXRfdW5hbGlnbmVkX2xl
NjQoKHB0cikpLCAgIFwNCisgICAgICAgIF9fYmFkX3VuYWxpZ25lZF9hY2Nlc3Nfc2l6ZSgpKSkp
KTsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisgICAgICAgIH0pKQ0K
Kw0KKyNkZWZpbmUgcHV0X3VuYWxpZ25lZCh2YWwsIHB0cikgKHsgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICB2b2lkICpfX2d1X3AgPSAocHRyKTsgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KKyAgICAgICAgc3dpdGNoIChzaXpl
b2YoKihwdHIpKSkgeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisg
ICAgICAgIGNhc2UgMTogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICAgICAgICAgICoodTggKilfX2d1X3AgPSAoX19mb3Jj
ZSB1OCkodmFsKTsgICAgICAgICAgICAgICAgICAgICAgXA0KKyAgICAgICAgICAgICAgICBicmVh
azsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisg
ICAgICAgIGNhc2UgMjogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICAgICAgICAgIHB1dF91bmFsaWduZWRfbGUxNigoX19m
b3JjZSB1MTYpKHZhbCksIF9fZ3VfcCk7ICAgICAgICAgXA0KKyAgICAgICAgICAgICAgICBicmVh
azsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisg
ICAgICAgIGNhc2UgNDogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICAgICAgICAgIHB1dF91bmFsaWduZWRfbGUzMigoX19m
b3JjZSB1MzIpKHZhbCksIF9fZ3VfcCk7ICAgICAgICAgXA0KKyAgICAgICAgICAgICAgICBicmVh
azsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisg
ICAgICAgIGNhc2UgODogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICAgICAgICAgIHB1dF91bmFsaWduZWRfbGU2NCgoX19m
b3JjZSB1NjQpKHZhbCksIF9fZ3VfcCk7ICAgICAgICAgXA0KKyAgICAgICAgICAgICAgICBicmVh
azsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisg
ICAgICAgIGRlZmF1bHQ6ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICAgICAgICAgIF9fYmFkX3VuYWxpZ25lZF9hY2Nlc3Nf
c2l6ZSgpOyAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KKyAgICAgICAgICAgICAgICBicmVh
azsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwNCisg
ICAgICAgIH0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcDQorICAgICAgICAodm9pZCkwOyB9KQ0KKw0KKw0KKy8qIGZyb20ga2Vy
bmVsIGxpbnV4L2tlcm5lbC5oIGFuZCB1YXBpL2xpbnV4L2tlcm5lbC5oICovDQorDQorI2RlZmlu
ZSBfX0FMSUdOX0tFUk5FTCh4LCBhKQkJX19BTElHTl9LRVJORUxfTUFTSyh4LCAodHlwZW9mKHgp
KShhKSAtIDEpDQorI2RlZmluZSBfX0FMSUdOX0tFUk5FTF9NQVNLKHgsIG1hc2spCSgoKHgpICsg
KG1hc2spKSAmIH4obWFzaykpDQorI2RlZmluZSBBTElHTih4LCBhKSAgICAgICAgICAgICBfX0FM
SUdOX0tFUk5FTCgoeCksIChhKSkNCisjZGVmaW5lIFBUUl9BTElHTihwLCBhKSAgICAgICAgICgo
dHlwZW9mKHApKUFMSUdOKCh1bnNpZ25lZCBsb25nKShwKSwgKGEpKSkNCisNCisjZW5kaWYgLyog
WlNURF9QUklWQVRFX0ggKi8NCmRpZmYgLS1naXQgYS94ZW4vY29tbW9uL3pzdGQvenN0ZF9jb21t
b24uYyBiL3hlbi9jb21tb24venN0ZC96c3RkX2NvbW1vbi5jDQppbmRleCBhMjgyNjI0ZWUxLi4x
YjEzOTAzNTM4IDEwMDY0NA0KLS0tIGEveGVuL2NvbW1vbi96c3RkL3pzdGRfY29tbW9uLmMNCisr
KyBiL3hlbi9jb21tb24venN0ZC96c3RkX2NvbW1vbi5jDQpAQCAtMTksNyArMTksNiBAQA0KICoq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKi8NCiAjaW5jbHVkZSAiZXJyb3Jf
cHJpdmF0ZS5oIg0KICNpbmNsdWRlICJ6c3RkX2ludGVybmFsLmgiIC8qIGRlY2xhcmF0aW9uIG9m
IFpTVERfaXNFcnJvciwgWlNURF9nZXRFcnJvck5hbWUsIFpTVERfZ2V0RXJyb3JDb2RlLCBaU1RE
X2dldEVycm9yU3RyaW5nLCBaU1REX3ZlcnNpb25OdW1iZXIgKi8NCi0jaW5jbHVkZSA8bGludXgv
a2VybmVsLmg+DQogDQogLyo9KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioNCiAqICBDdXN0b20gYWxsb2NhdG9yDQpAQCAtMzIsNyAr
MzEsNyBAQA0KIAkJKHN0YWNrKS0+cHRyIDw9IChzdGFjayktPmVuZCA/IHB0ciA6IE5VTEw7ICAg
ICAgXA0KIAl9KQ0KIA0KLVpTVERfY3VzdG9tTWVtIFpTVERfaW5pdFN0YWNrKHZvaWQgKndvcmtz
cGFjZSwgc2l6ZV90IHdvcmtzcGFjZVNpemUpDQorWlNURF9jdXN0b21NZW0gSU5JVCBaU1REX2lu
aXRTdGFjayh2b2lkICp3b3Jrc3BhY2UsIHNpemVfdCB3b3Jrc3BhY2VTaXplKQ0KIHsNCiAJWlNU
RF9jdXN0b21NZW0gc3RhY2tNZW0gPSB7WlNURF9zdGFja0FsbG9jLCBaU1REX3N0YWNrRnJlZSwg
d29ya3NwYWNlfTsNCiAJWlNURF9zdGFjayAqc3RhY2sgPSAoWlNURF9zdGFjayAqKXdvcmtzcGFj
ZTsNCkBAIC00OCwyNyArNDcsMjcgQEAgWlNURF9jdXN0b21NZW0gWlNURF9pbml0U3RhY2sodm9p
ZCAqd29ya3NwYWNlLCBzaXplX3Qgd29ya3NwYWNlU2l6ZSkNCiAJcmV0dXJuIHN0YWNrTWVtOw0K
IH0NCiANCi12b2lkICpaU1REX3N0YWNrQWxsb2NBbGwodm9pZCAqb3BhcXVlLCBzaXplX3QgKnNp
emUpDQordm9pZCBJTklUICpaU1REX3N0YWNrQWxsb2NBbGwodm9pZCAqb3BhcXVlLCBzaXplX3Qg
KnNpemUpDQogew0KIAlaU1REX3N0YWNrICpzdGFjayA9IChaU1REX3N0YWNrICopb3BhcXVlOw0K
IAkqc2l6ZSA9IChCWVRFIGNvbnN0ICopc3RhY2stPmVuZCAtIChCWVRFICopWlNURF9QVFJfQUxJ
R04oc3RhY2stPnB0cik7DQogCXJldHVybiBzdGFja19wdXNoKHN0YWNrLCAqc2l6ZSk7DQogfQ0K
IA0KLXZvaWQgKlpTVERfc3RhY2tBbGxvYyh2b2lkICpvcGFxdWUsIHNpemVfdCBzaXplKQ0KK3Zv
aWQgSU5JVCAqWlNURF9zdGFja0FsbG9jKHZvaWQgKm9wYXF1ZSwgc2l6ZV90IHNpemUpDQogew0K
IAlaU1REX3N0YWNrICpzdGFjayA9IChaU1REX3N0YWNrICopb3BhcXVlOw0KIAlyZXR1cm4gc3Rh
Y2tfcHVzaChzdGFjaywgc2l6ZSk7DQogfQ0KLXZvaWQgWlNURF9zdGFja0ZyZWUodm9pZCAqb3Bh
cXVlLCB2b2lkICphZGRyZXNzKQ0KK3ZvaWQgSU5JVCBaU1REX3N0YWNrRnJlZSh2b2lkICpvcGFx
dWUsIHZvaWQgKmFkZHJlc3MpDQogew0KIAkodm9pZClvcGFxdWU7DQogCSh2b2lkKWFkZHJlc3M7
DQogfQ0KIA0KLXZvaWQgKlpTVERfbWFsbG9jKHNpemVfdCBzaXplLCBaU1REX2N1c3RvbU1lbSBj
dXN0b21NZW0pIHsgcmV0dXJuIGN1c3RvbU1lbS5jdXN0b21BbGxvYyhjdXN0b21NZW0ub3BhcXVl
LCBzaXplKTsgfQ0KK3ZvaWQgSU5JVCAqWlNURF9tYWxsb2Moc2l6ZV90IHNpemUsIFpTVERfY3Vz
dG9tTWVtIGN1c3RvbU1lbSkgeyByZXR1cm4gY3VzdG9tTWVtLmN1c3RvbUFsbG9jKGN1c3RvbU1l
bS5vcGFxdWUsIHNpemUpOyB9DQogDQotdm9pZCBaU1REX2ZyZWUodm9pZCAqcHRyLCBaU1REX2N1
c3RvbU1lbSBjdXN0b21NZW0pDQordm9pZCBJTklUIFpTVERfZnJlZSh2b2lkICpwdHIsIFpTVERf
Y3VzdG9tTWVtIGN1c3RvbU1lbSkNCiB7DQogCWlmIChwdHIgIT0gTlVMTCkNCiAJCWN1c3RvbU1l
bS5jdXN0b21GcmVlKGN1c3RvbU1lbS5vcGFxdWUsIHB0cik7DQpkaWZmIC0tZ2l0IGEveGVuL2Nv
bW1vbi96c3RkL3pzdGRfaW50ZXJuYWwuaCBiL3hlbi9jb21tb24venN0ZC96c3RkX2ludGVybmFs
LmgNCmluZGV4IGRhYzc1MzM5N2YuLjFiMTM4NDBjNDQgMTAwNjQ0DQotLS0gYS94ZW4vY29tbW9u
L3pzdGQvenN0ZF9pbnRlcm5hbC5oDQorKysgYi94ZW4vY29tbW9uL3pzdGQvenN0ZF9pbnRlcm5h
bC5oDQpAQCAtMjAsNyArMjAsNyBAQA0KIC8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioNCiAqICBDb21waWxlciBzcGVjaWZpY3MNCiAqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiovDQot
I2RlZmluZSBGT1JDRV9JTkxJTkUgc3RhdGljIF9fYWx3YXlzX2lubGluZQ0KKyNkZWZpbmUgRk9S
Q0VfSU5MSU5FIHN0YXRpYyBhbHdheXNfaW5saW5lDQogI2RlZmluZSBGT1JDRV9OT0lOTElORSBz
dGF0aWMgbm9pbmxpbmUNCiANCiAvKi0qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqDQpAQCAtMjgsMTYgKzI4LDEyIEBADQogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqLw0KICNpbmNsdWRlICJlcnJvcl9wcml2YXRlLmgiDQogI2luY2x1ZGUgIm1lbS5o
Ig0KLSNpbmNsdWRlIDxsaW51eC9jb21waWxlci5oPg0KLSNpbmNsdWRlIDxsaW51eC9rZXJuZWwu
aD4NCi0jaW5jbHVkZSA8bGludXgveHhoYXNoLmg+DQotI2luY2x1ZGUgPGxpbnV4L3pzdGQuaD4N
CisjaW5jbHVkZSA8eGVuL3h4aGFzaC5oPg0KKyNpbmNsdWRlIDx4ZW4venN0ZC5oPg0KIA0KIC8q
LSoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCiAqICBzaGFyZWQgbWFjcm9z
DQogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqLw0KLSNkZWZpbmUgTUlO
KGEsIGIpICgoYSkgPCAoYikgPyAoYSkgOiAoYikpDQotI2RlZmluZSBNQVgoYSwgYikgKChhKSA+
IChiKSA/IChhKSA6IChiKSkNCiAjZGVmaW5lIENIRUNLX0YoZikgICAgICAgICAgICAgICAgICAg
ICAgIFwNCiAJeyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXA0KIAkJc2l6ZV90IGNv
bnN0IGVycmNvZCA9IGY7IFwNCkBAIC02NywxMCArNjMsNiBAQA0KICNkZWZpbmUgWlNURF9SRVBf
TU9WRV9PUFQgKFpTVERfUkVQX05VTSkNCiBzdGF0aWMgY29uc3QgVTMyIHJlcFN0YXJ0VmFsdWVb
WlNURF9SRVBfTlVNXSA9IHsxLCA0LCA4fTsNCiANCi0jZGVmaW5lIEtCICooMSA8PCAxMCkNCi0j
ZGVmaW5lIE1CICooMSA8PCAyMCkNCi0jZGVmaW5lIEdCICooMVUgPDwgMzApDQotDQogI2RlZmlu
ZSBCSVQ3IDEyOA0KICNkZWZpbmUgQklUNiA2NA0KICNkZWZpbmUgQklUNSAzMg0KZGlmZiAtLWdp
dCBhL3hlbi9pbmNsdWRlL3hlbi9kZWNvbXByZXNzLmggYi94ZW4vaW5jbHVkZS94ZW4vZGVjb21w
cmVzcy5oDQppbmRleCBiMjk1NWZhYTRiLi5mNWJjMTdmMmI2IDEwMDY0NA0KLS0tIGEveGVuL2lu
Y2x1ZGUveGVuL2RlY29tcHJlc3MuaA0KKysrIGIveGVuL2luY2x1ZGUveGVuL2RlY29tcHJlc3Mu
aA0KQEAgLTMxLDcgKzMxLDcgQEAgdHlwZWRlZiBpbnQgZGVjb21wcmVzc19mbih1bnNpZ25lZCBj
aGFyICppbmJ1ZiwgdW5zaWduZWQgaW50IGxlbiwNCiAgKiBkZXBlbmRlbnQpLg0KICAqLw0KIA0K
LWRlY29tcHJlc3NfZm4gYnVuemlwMiwgdW54eiwgdW5sem1hLCB1bmx6bywgdW5sejQ7DQorZGVj
b21wcmVzc19mbiBidW56aXAyLCB1bnh6LCB1bmx6bWEsIHVubHpvLCB1bmx6NCwgdW56c3RkOw0K
IA0KIGludCBkZWNvbXByZXNzKHZvaWQgKmluYnVmLCB1bnNpZ25lZCBpbnQgbGVuLCB2b2lkICpv
dXRidWYpOw0KIA0KZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3hlbi94eGhhc2guaCBiL3hlbi9p
bmNsdWRlL3hlbi94eGhhc2guaA0KaW5kZXggZGY0MjUxMTQzOC4uMTNkZGM2MTZkMSAxMDA2NDQN
Ci0tLSBhL3hlbi9pbmNsdWRlL3hlbi94eGhhc2guaA0KKysrIGIveGVuL2luY2x1ZGUveGVuL3h4
aGFzaC5oDQpAQCAtNzUsNyArNzUsNyBAQA0KICNpZm5kZWYgWFhIQVNIX0gNCiAjZGVmaW5lIFhY
SEFTSF9IDQogDQotI2luY2x1ZGUgPGxpbnV4L3R5cGVzLmg+DQorI2luY2x1ZGUgPHhlbi90eXBl
cy5oPg0KIA0KIC8qLSoqKioqKioqKioqKioqKioqKioqKioqKioqKioNCiAgKiBTaW1wbGUgSGFz
aCBGdW5jdGlvbnMNCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4venN0ZC5oIGIveGVuL2lu
Y2x1ZGUveGVuL3pzdGQuaA0KaW5kZXggMjQ5NTc1ZTI0OC4uZWIzMzU4MmExOCAxMDA2NDQNCi0t
LSBhL3hlbi9pbmNsdWRlL3hlbi96c3RkLmgNCisrKyBiL3hlbi9pbmNsdWRlL3hlbi96c3RkLmgN
CkBAIC0xOCw3ICsxOCw3IEBADQogI2RlZmluZSBaU1REX0gNCiANCiAvKiA9PT09PT0gICBEZXBl
bmRlbmN5ICAgPT09PT09Ki8NCi0jaW5jbHVkZSA8bGludXgvdHlwZXMuaD4gICAvKiBzaXplX3Qg
Ki8NCisjaW5jbHVkZSA8eGVuL3R5cGVzLmg+ICAgLyogc2l6ZV90ICovDQogDQogDQogLyotKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioNCi0tIA0KMi4yNi4yDQoNCg==

--8323328-1543918517-1606172471=:3753--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 07:01:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 07:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35377.66885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSKB-0001lz-3J; Tue, 24 Nov 2020 07:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35377.66885; Tue, 24 Nov 2020 07:01:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSKA-0001ls-Vw; Tue, 24 Nov 2020 07:01:14 +0000
Received: by outflank-mailman (input) for mailman id 35377;
 Tue, 24 Nov 2020 07:01:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khSK9-0001kv-BO
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 07:01:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f16b6f2-66de-4860-8df4-aa00258c40a5;
 Tue, 24 Nov 2020 07:01:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E99DCADD5;
 Tue, 24 Nov 2020 07:01:09 +0000 (UTC)
X-Inumbo-ID: 3f16b6f2-66de-4860-8df4-aa00258c40a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606201270; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Yz6J7wdCjuva4czegeT7IAWoCsKh1vZ7NlNUOs0HkPI=;
	b=Ri1obx6a7eZp4WW1J7KUmJbRyO6puXP6b/7u2DmSYjNGeiGGfx3+oXKKljh8zElqbgaWp1
	OVWRG7/iRhdGbZ9kytMy5IbkDc1fVMtpwgnSf43rYs+nmbRXvRefHBag1Ktys+AuPZFHNY
	P+pFkpiB0GwOeK/fYwk42cme8KNBbrs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 3/3] xen/events: rework fifo queue locking
Date: Tue, 24 Nov 2020 08:01:06 +0100
Message-Id: <20201124070106.26854-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201124070106.26854-1-jgross@suse.com>
References: <20201124070106.26854-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Two cpus entering evtchn_fifo_set_pending() for the same event channel
can race: if the first one gets interrupted after setting
EVTCHN_FIFO_PENDING, the other one can set EVTCHN_FIFO_LINKED before
the first one tests that bit. This can lead to evtchn_check_pollers()
being called before the event is properly put into the queue,
eventually resulting in the guest never seeing the event pending and
thus blocking forever afterwards.

Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
lock") merely made the race more obvious; the fifo event channel
implementation has had this race from the beginning, whenever an unmask
operation ran in parallel with an event channel send operation.

To avoid this race, the queue locking in evtchn_fifo_set_pending()
needs to be reworked to also cover the tests of EVTCHN_FIFO_PENDING,
EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED. Additionally, when an event
channel needs to change queues, both queues need to be locked
initially.

Fixes: 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock")
Fixes: 88910061ec615b2d ("evtchn: add FIFO-based event channel hypercalls and port ops")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 115 ++++++++++++++++++++--------------------
 1 file changed, 58 insertions(+), 57 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index 79090c04ca..a57d459cc2 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -87,38 +87,6 @@ static void evtchn_fifo_init(struct domain *d, struct evtchn *evtchn)
                  d->domain_id, evtchn->port);
 }
 
-static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
-                                                struct evtchn *evtchn,
-                                                unsigned long *flags)
-{
-    struct vcpu *v;
-    struct evtchn_fifo_queue *q, *old_q;
-    unsigned int try;
-    union evtchn_fifo_lastq lastq;
-
-    for ( try = 0; try < 3; try++ )
-    {
-        lastq.raw = read_atomic(&evtchn->fifo_lastq);
-        v = d->vcpu[lastq.last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
-
-        spin_lock_irqsave(&old_q->lock, *flags);
-
-        v = d->vcpu[lastq.last_vcpu_id];
-        q = &v->evtchn_fifo->queue[lastq.last_priority];
-
-        if ( old_q == q )
-            return old_q;
-
-        spin_unlock_irqrestore(&old_q->lock, *flags);
-    }
-
-    gprintk(XENLOG_WARNING,
-            "dom%d port %d lost event (too many queue changes)\n",
-            d->domain_id, evtchn->port);
-    return NULL;
-}          
-
 static int try_set_link(event_word_t *word, event_word_t *w, uint32_t link)
 {
     event_word_t new, old;
@@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
     event_word_t *word;
     unsigned long flags;
     bool_t was_pending;
+    struct evtchn_fifo_queue *q, *old_q;
+    unsigned int try;
+    bool linked = true;
 
     port = evtchn->port;
     word = evtchn_fifo_word_from_port(d, port);
@@ -204,6 +175,48 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         return;
     }
 
+    for ( try = 0; ; try++ )
+    {
+        union evtchn_fifo_lastq lastq;
+        struct vcpu *old_v;
+
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        old_v = d->vcpu[lastq.last_vcpu_id];
+
+        q = &v->evtchn_fifo->queue[evtchn->priority];
+        old_q = &old_v->evtchn_fifo->queue[lastq.last_priority];
+
+        if ( q <= old_q )
+        {
+            spin_lock_irqsave(&q->lock, flags);
+            if ( q != old_q )
+                spin_lock(&old_q->lock);
+        }
+        else
+        {
+            spin_lock_irqsave(&old_q->lock, flags);
+            spin_lock(&q->lock);
+        }
+
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        old_v = d->vcpu[lastq.last_vcpu_id];
+        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
+             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
+            break;
+
+        if ( q != old_q )
+            spin_unlock(&old_q->lock);
+        spin_unlock_irqrestore(&q->lock, flags);
+
+        if ( try == 3 )
+        {
+            gprintk(XENLOG_WARNING,
+                    "dom%d port %d lost event (too many queue changes)\n",
+                    d->domain_id, evtchn->port);
+            return;
+        }
+    }
+
     was_pending = guest_test_and_set_bit(d, EVTCHN_FIFO_PENDING, word);
 
     /*
@@ -212,9 +225,7 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
     if ( !guest_test_bit(d, EVTCHN_FIFO_MASKED, word) &&
          !guest_test_bit(d, EVTCHN_FIFO_LINKED, word) )
     {
-        struct evtchn_fifo_queue *q, *old_q;
         event_word_t *tail_word;
-        bool_t linked = 0;
 
         /*
          * Control block not mapped.  The guest must not unmask an
@@ -228,22 +239,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
             goto done;
         }
 
-        /*
-         * No locking around getting the queue. This may race with
-         * changing the priority but we are allowed to signal the
-         * event once on the old priority.
-         */
-        q = &v->evtchn_fifo->queue[evtchn->priority];
-
-        old_q = lock_old_queue(d, evtchn, &flags);
-        if ( !old_q )
-            goto done;
-
         if ( guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
-        {
-            spin_unlock_irqrestore(&old_q->lock, flags);
             goto done;
-        }
 
         /*
          * If this event was a tail, the old queue is now empty and
@@ -262,8 +259,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
             lastq.last_priority = q->priority;
             write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
-            spin_unlock_irqrestore(&old_q->lock, flags);
-            spin_lock_irqsave(&q->lock, flags);
+            spin_unlock(&old_q->lock);
+            old_q = q;
         }
 
         /*
@@ -276,6 +273,7 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
          * If the queue is empty (i.e., we haven't linked to the new
          * event), head must be updated.
          */
+        linked = false;
         if ( q->tail )
         {
             tail_word = evtchn_fifo_word_from_port(d, q->tail);
@@ -284,15 +282,18 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         if ( !linked )
             write_atomic(q->head, port);
         q->tail = port;
-
-        spin_unlock_irqrestore(&q->lock, flags);
-
-        if ( !linked
-             && !guest_test_and_set_bit(d, q->priority,
-                                        &v->evtchn_fifo->control_block->ready) )
-            vcpu_mark_events_pending(v);
     }
+
  done:
+    if ( q != old_q )
+        spin_unlock(&old_q->lock);
+    spin_unlock_irqrestore(&q->lock, flags);
+
+    if ( !linked &&
+         !guest_test_and_set_bit(d, q->priority,
+                                 &v->evtchn_fifo->control_block->ready) )
+        vcpu_mark_events_pending(v);
+
     if ( !was_pending )
         evtchn_check_pollers(d, port);
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 07:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 07:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35379.66908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSKI-0001qZ-LL; Tue, 24 Nov 2020 07:01:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35379.66908; Tue, 24 Nov 2020 07:01:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSKI-0001qS-Hj; Tue, 24 Nov 2020 07:01:22 +0000
Received: by outflank-mailman (input) for mailman id 35379;
 Tue, 24 Nov 2020 07:01:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khSKI-0001kq-4R
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 07:01:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d691081d-7325-4f16-9a49-8c4ac1eeac55;
 Tue, 24 Nov 2020 07:01:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 60C81ADAA;
 Tue, 24 Nov 2020 07:01:09 +0000 (UTC)
X-Inumbo-ID: d691081d-7325-4f16-9a49-8c4ac1eeac55
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606201269; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QEucFGeONQJdQQwxfd3osX/7AAW0Pbr2/fwfaONhMq0=;
	b=dUiEWDaF3N/1coDsYe6Jr8SuU+TPBtmwdsNxCgNq+fnPQaBki8NY0UTBQjD2XRacgs+JPD
	8xWQqvdH26vDPVUS0G1RaeAidD4K7C+eOOFTyyThhOuUvyTUnh94MXG5DbWPSRgL8MpFeG
	E7Mv5c+1/No5wxDNGTAjWizgL0G1wbs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v7 1/3] xen/events: access last_priority and last_vcpu_id together
Date: Tue, 24 Nov 2020 08:01:04 +0100
Message-Id: <20201124070106.26854-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201124070106.26854-1-jgross@suse.com>
References: <20201124070106.26854-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When sending an event, it can happen that the event needs
to change queues, and the old queue needs to be kept around in order
to keep the links between queue elements intact. For this purpose the
event channel contains last_priority and last_vcpu_id values to
identify the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistencies.
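The packing trick can be sketched outside of Xen like this (a minimal
illustration, not the Xen code: the `lastq_pack`/`lastq_unpack` helpers
are made up; in the patch the raw word is stored and loaded with
write_atomic()/read_atomic() on evtchn->fifo_lastq):

```c
#include <stdint.h>

/* Both "last queue" fields share one 32-bit word, so a single 32-bit
 * atomic read or write covers them together and a reader can never
 * observe one field updated without the other. */
union lastq {
    uint32_t raw;
    struct {
        uint8_t  last_priority;
        uint16_t last_vcpu_id;
    };
};

/* Build the raw word a writer would store with a single atomic write. */
static uint32_t lastq_pack(uint8_t priority, uint16_t vcpu_id)
{
    union lastq l = { 0 };

    l.last_priority = priority;
    l.last_vcpu_id = vcpu_id;
    return l.raw;
}

/* Decode a raw word a reader would fetch with a single atomic read. */
static void lastq_unpack(uint32_t raw, uint8_t *priority, uint16_t *vcpu_id)
{
    union lastq l = { .raw = raw };

    *priority = l.last_priority;
    *vcpu_id = l.last_vcpu_id;
}
```

Without the union, a reader could see a new last_vcpu_id paired with a
stale last_priority (or vice versa) and lock the wrong queue.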

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index c6e58d2a1a..79090c04ca 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq = { };
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 7251b3ae3e..a345cc01f8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -117,8 +117,7 @@ struct evtchn
 #ifndef NDEBUG
     u8 old_state;      /* State when taking lock in write mode. */
 #endif
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 07:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 07:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35376.66873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSK9-0001l7-Ql; Tue, 24 Nov 2020 07:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35376.66873; Tue, 24 Nov 2020 07:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSK9-0001l0-Na; Tue, 24 Nov 2020 07:01:13 +0000
Received: by outflank-mailman (input) for mailman id 35376;
 Tue, 24 Nov 2020 07:01:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khSK8-0001kq-8D
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 07:01:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 168c7a9a-9299-4675-a1e2-cb9118043922;
 Tue, 24 Nov 2020 07:01:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14A13ADA2;
 Tue, 24 Nov 2020 07:01:09 +0000 (UTC)
X-Inumbo-ID: 168c7a9a-9299-4675-a1e2-cb9118043922
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606201269; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=Y70uQzyLQKdWXjkBuRCWzDGAgoEANc5akwMpuiUm184=;
	b=A+D6TBR+7L/YG7k2RXOaS2GvCR+ucezzfrjtryIsIW0dWE1oblb876zHEt+yUDLUHchhdn
	m3WKDKSjmvp3qG5DzgdLUTDxl3vaXrNvwU6EhI/Btsi9k00pRpRPPi4NCl1psww50aZ2By
	O0/wDefv0zqZ5EBp8Ywt+9IANsGAVdo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 0/3] xen/events: further locking adjustments
Date: Tue, 24 Nov 2020 08:01:03 +0100
Message-Id: <20201124070106.26854-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is an add-on to my event channel locking series.

It is a resend of the single patch from my V6 series that has not been
applied yet (which is the reason this one is named V7), plus two
patches addressing issues Jan identified with the previous approach
(one issue being more of a latent one, while the other has actually
existed since the introduction of fifo events and has just been made
more probable by the new locking scheme).

Juergen Gross (3):
  xen/events: access last_priority and last_vcpu_id together
  xen/events: modify struct evtchn layout
  xen/events: rework fifo queue locking

 xen/common/event_fifo.c | 128 ++++++++++++++++++++++------------------
 xen/include/xen/sched.h |  23 ++++----
 2 files changed, 83 insertions(+), 68 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 07:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 07:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35378.66897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSKE-0001nR-CI; Tue, 24 Nov 2020 07:01:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35378.66897; Tue, 24 Nov 2020 07:01:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSKE-0001nH-7p; Tue, 24 Nov 2020 07:01:18 +0000
Received: by outflank-mailman (input) for mailman id 35378;
 Tue, 24 Nov 2020 07:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khSKD-0001kq-3z
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 07:01:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38370360-fb20-49fc-88ce-cf4fc5afc72c;
 Tue, 24 Nov 2020 07:01:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A013DADCD;
 Tue, 24 Nov 2020 07:01:09 +0000 (UTC)
X-Inumbo-ID: 38370360-fb20-49fc-88ce-cf4fc5afc72c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606201269; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uY+rObtaesBiYwhFR54/hfk+UTURWjLhSDOKmBjo9Hk=;
	b=DuiB6acuqrH4C2absqcxEFRbDrXB2fDjBgSxIwxTz1tlybaHV+QEpsJU02wI1BLe46mAw0
	QMHW2k4JCUWGGaNTw0+FqZKNpJPpydJ3mutCADpZN9FTGdUIzLDMcOLuaegMoB+IhE4m7h
	YMuNtsNn46pIrv/Xlh+AWDys/+6aZfQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 2/3] xen/events: modify struct evtchn layout
Date: Tue, 24 Nov 2020 08:01:05 +0100
Message-Id: <20201124070106.26854-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201124070106.26854-1-jgross@suse.com>
References: <20201124070106.26854-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to avoid latent races when updating an event channel, put
the xen_consumer and pending fields in different bytes.

At the same time move some other fields around in order to reduce
implicit padding and to keep related fields closer together.
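The "different bytes" requirement can be checked mechanically with
offsetof(). A standalone sketch follows (field names are taken from the
patch, but `struct evtchn_mock` is a simplified mock, not the real
struct evtchn, and the layout below is only one possible arrangement):

```c
#include <stddef.h>
#include <stdint.h>

/* Fields that are updated by different actors must live in different
 * bytes: a byte-wide read-modify-write of xen_consumer must not be
 * able to clobber a concurrent update of pending, and vice versa.
 * When both were bitfields in the same byte, that was not guaranteed. */
struct evtchn_mock {
    uint32_t port;
    uint8_t  state;
    uint8_t  xen_consumer;     /* its own byte */
    uint8_t  pending;          /* FIFO only; its own byte */
    uint8_t  priority;         /* FIFO only */
    uint16_t notify_vcpu_id;   /* FIFO only */
    uint32_t fifo_lastq;       /* FIFO only */
};
```

Grouping the FIFO-only u8/u16/u32 members together also keeps the
implicit padding localized instead of scattered between unrelated
fields.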

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/include/xen/sched.h | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a345cc01f8..e6d09aa055 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -80,8 +80,7 @@ extern domid_t hardware_domid;
 #define EVTCHNS_PER_GROUP  (BUCKETS_PER_GROUP * EVTCHNS_PER_BUCKET)
 #define NR_EVTCHN_GROUPS   DIV_ROUND_UP(MAX_NR_EVTCHNS, EVTCHNS_PER_GROUP)
 
-#define XEN_CONSUMER_BITS 3
-#define NR_XEN_CONSUMERS ((1 << XEN_CONSUMER_BITS) - 1)
+#define NR_XEN_CONSUMERS 8
 
 struct evtchn
 {
@@ -94,9 +93,10 @@ struct evtchn
 #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
 #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
     u8  state;             /* ECS_* */
-    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
-    u8  pending:1;
-    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
+#ifndef NDEBUG
+    u8  old_state;     /* State when taking lock in write mode. */
+#endif
+    u8  xen_consumer;  /* Consumer in Xen if nonzero */
     u32 port;
     union {
         struct {
@@ -113,11 +113,13 @@ struct evtchn
         } pirq;        /* state == ECS_PIRQ */
         u16 virq;      /* state == ECS_VIRQ */
     } u;
-    u8 priority;
-#ifndef NDEBUG
-    u8 old_state;      /* State when taking lock in write mode. */
-#endif
-    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
+
+    /* FIFO event channels only. */
+    u8  pending;
+    u8  priority;
+    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
+    u32 fifo_lastq;        /* Data for identifying last queue. */
+
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 07:19:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 07:19:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35407.66921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSbb-0003Dd-5x; Tue, 24 Nov 2020 07:19:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35407.66921; Tue, 24 Nov 2020 07:19:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khSbb-0003DW-24; Tue, 24 Nov 2020 07:19:15 +0000
Received: by outflank-mailman (input) for mailman id 35407;
 Tue, 24 Nov 2020 07:19:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khSbZ-0003DO-ON; Tue, 24 Nov 2020 07:19:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khSbZ-0003ph-3h; Tue, 24 Nov 2020 07:19:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khSbY-0008OZ-Mt; Tue, 24 Nov 2020 07:19:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khSbY-0006zC-MO; Tue, 24 Nov 2020 07:19:12 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NEC3EQpK2WlZmFfVo8as28c0l0W6iXUQyzirbzMhN4g=; b=eBlJCmzWwMTpM9jsKgyQlvA86m
	KS70tzpdQ+iPQwGQW5mMV9J6rgbkD55ObvZc2K/wKduZC1LWUJtr3gq1xta7O67KoLr6hFEXlf+VS
	WybQzo4JR8NXEN4cI2n3xr0lrti1ZNebG6Rta5+1iZK542ZV4s0paT23UdpsJyrGzZZk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156972-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156972: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=418baf2c28f3473039f2f7377760bd8f6897ae18
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 07:19:12 +0000

flight 156972 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156972/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install fail in 156955 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 156955 pass in 156972
 test-arm64-arm64-xl           8 xen-boot         fail in 156964 pass in 156972
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen        fail pass in 156955
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 156955
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156955
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 156955
 test-armhf-armhf-libvirt-raw  8 xen-boot                   fail pass in 156964

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156955 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 156955 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 156955 blocked in 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 156955 like 152332
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 156955 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                418baf2c28f3473039f2f7377760bd8f6897ae18
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  115 days
Failing since        152366  2020-08-01 20:49:34 Z  114 days  193 attempts
Testing same since   156955  2020-11-23 01:40:54 Z    1 days    3 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 683939 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 07:45:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 07:45:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35416.66936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khT12-0005qj-I0; Tue, 24 Nov 2020 07:45:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35416.66936; Tue, 24 Nov 2020 07:45:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khT12-0005qc-ER; Tue, 24 Nov 2020 07:45:32 +0000
Received: by outflank-mailman (input) for mailman id 35416;
 Tue, 24 Nov 2020 07:45:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Zhb=E6=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1khT10-0005qR-S5
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 07:45:30 +0000
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc78b056-3f7a-4371-913d-8bec1fbed5cc;
 Tue, 24 Nov 2020 07:45:29 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id a65so1842420wme.1
 for <xen-devel@lists.xenproject.org>; Mon, 23 Nov 2020 23:45:29 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id q25sm4278793wmq.37.2020.11.23.23.45.28
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 23 Nov 2020 23:45:28 -0800 (PST)
X-Inumbo-ID: fc78b056-3f7a-4371-913d-8bec1fbed5cc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=S9s4cpok78Ha4Q/+X64sGyNmgKnYE8ebzWe66uKslek=;
        b=lsHLrqTLpzQUm22lnr14pSoee2cbMWvu9ct1llEHdkBuvKg98RYSGxkEgjgR3IwBW1
         FUhw7S6gdbG/V10Zs1OVOXP8n/PfW7jSbNRpMA/HXoQNxu5wHG0gHOp+ym/DQkY09s//
         5iGBi2acVfygnQ5b3BZ9/BB2ttJBdStdvl4DXjfoSCRBUCnzGklyB1Xb9NieIg92PPIc
         4oryVPB4vlsuA0YzTu8rIvA/oGXi4YVKDsE9gV9MqaFeHcXbOESuf3ChMnCOEqug7lbY
         QRCztlBgjmjpbut7Rx2/ADululxI9uoPsH6yGpgfYiekPVEJc7h3y+xNc9/3W9qW10YI
         PS7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=S9s4cpok78Ha4Q/+X64sGyNmgKnYE8ebzWe66uKslek=;
        b=X4P9YNOewgZ/VE+0hVSmA2cbFNtrgVquaanA6Pfsuh5ptfVTY/8iVJhi4s/qrbr4wH
         kdLiFlDYBHJcdmM8qhr2zckgmoZTnFqjEJwdO3iIy9sv6MXDF5jUo5PjnEaQjKGmo7se
         UBRZ3b4gLJGUGe5qkfp+i1R7oWIoVOcJRY+8ydhrpzT6f+7dulfZRrvu11+mXTKB3jCU
         Ve2yt1N0VPDSpAOdfOxDqPvfViCV/kcASLA89ZII0i8yGhrCIS/SAgPCivZVNDKEwNi6
         LE7cLNQoYSQeccbZrA9MRsqbfJR5y6eCebnPUff470n6agv6hl15jhy6M63g6TIG/1+Q
         jJhg==
X-Gm-Message-State: AOAM533nvbNl/VUifvkPNPkAZ/K9mhKBNoEW+YP4bAFmUtySo9iExIAv
	ek9mzhZysUG0VENl3NtRBCY=
X-Google-Smtp-Source: ABdhPJx8zhzev/pyr2YTP9QJadVGoQ9ZMhpvMn+KcUqH7WWcc93DsWPrPNEHKqukNrDS9H6kAT3GjQ==
X-Received: by 2002:a7b:c843:: with SMTP id c3mr2918205wml.100.1606203929123;
        Mon, 23 Nov 2020 23:45:29 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Nick Rosbrook'" <rosbrookn@ainfosec.com>,
	"'Wei Liu'" <wl@xen.org>
References: <20201123174503.6800-1-paul@xen.org> <822734cf-a048-2f53-940a-9f5ccf9df40f@citrix.com>
In-Reply-To: <822734cf-a048-2f53-940a-9f5ccf9df40f@citrix.com>
Subject: RE: [PATCH v3 00/23] xl / libxl: named PCI pass-through devices
Date: Tue, 24 Nov 2020 07:45:27 -0000
Message-ID: <001601d6c235$c7396150$55ac23f0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQG+euZNgx6mUfoFllRdYIBemuWGRgEeRX7Zqf6COOA=
Content-Language: en-gb

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 23 November 2020 22:18
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Anthony PERARD <anthony.perard@citrix.com>; Christian Lindig
> <christian.lindig@citrix.com>; David Scott <dave@recoil.org>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Nick Rosbrook <rosbrookn@ainfosec.com>;
> Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v3 00/23] xl / libxl: named PCI pass-through devices
> 
> On 23/11/2020 17:44, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Paul Durrant (23):
> >   xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
> >   libxl: make libxl__device_list() work correctly for
> >     LIBXL__DEVICE_KIND_PCI...
> >   libxl: Make sure devices added by pci-attach are reflected in the
> >     config
> >   libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
> >   libxl: s/detatched/detached in libxl_pci.c
> >   libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
> >   libxl: stop using aodev->device_config in libxl__device_pci_add()...
> >   libxl: generalise 'driver_path' xenstore access functions in
> >     libxl_pci.c
> >   libxl: remove unnecessary check from libxl__device_pci_add()
> >   libxl: remove get_all_assigned_devices() from libxl_pci.c
> >   libxl: make sure callers of libxl_device_pci_list() free the list
> >     after use
> >   libxl: add libxl_device_pci_assignable_list_free()...
> >   libxl: use COMPARE_PCI() macro is_pci_in_array()...
> >   docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
> >     manpage...
> >   docs/man: improve documentation of PCI_SPEC_STRING...
> >   docs/man: fix xl(1) documentation for 'pci' operations
> >   libxl: introduce 'libxl_pci_bdf' in the idl...
> >   libxlu: introduce xlu_pci_parse_spec_string()
> >   libxl: modify
> >     libxl_device_pci_assignable_add/remove/list/list_free()...
> >   docs/man: modify xl(1) in preparation for naming of assignable devices
> >   xl / libxl: support naming of assignable devices
> >   docs/man: modify xl-pci-configuration(5) to add 'name' field to
> >     PCI_SPEC_STRING
> >   xl / libxl: support 'xl pci-attach/detach' by name
> 
> We're trying to get the CI loop up and running.  It's not emailing
> xen-devel yet, but it has found a real error somewhere in this series.
> 
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/220153571
> 

Found it, thanks...

libxl_pci.c: In function 'libxl_device_pci_assignable_name2bdf':
libxl_pci.c:970:5: error: 'pcibdf' may be used uninitialized in this function [-Werror=maybe-uninitialized]
     return pcibdf;
     ^

Odd that my local build (debian 9.13) didn't pick it up. Will send a v4 shortly.

  Paul

> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35431.67002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH5-0008CO-EH; Tue, 24 Nov 2020 08:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35431.67002; Tue, 24 Nov 2020 08:02:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH5-0008CA-0v; Tue, 24 Nov 2020 08:02:07 +0000
Received: by outflank-mailman (input) for mailman id 35431;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-00088A-F4
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-00060N-S0; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-0001hp-Oq; Tue, 24 Nov 2020 08:02:03 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=woMUqgZMx7At7jFsiX2KySgK2cVzSMaxkSnE1C583H4=; b=XdCotyRlljb+RCHrL8k/Wh4Ea
	HCoYI2SoK8XKRQWm16JBdGydzmG9EE/O1aCfA1GXo33WqpfZ/74STwfh8bKVpZVNXHh5aE0qVxLNq
	jKdVSTmlYITgic9W3Y1Rr2fwJ0stZl01CKMYiEo1hCpo2sZ684SmuCJZa5hIzEBk+bNIM=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 05/23] libxl: s/detatched/detached in libxl_pci.c
Date: Tue, 24 Nov 2020 08:01:41 +0000
Message-Id: <20201124080159.11912-6-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Simple spelling correction. Purely cosmetic fix.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 50c96cbfa6..de617e95eb 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1864,7 +1864,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
 static void pci_remove_timeout(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
-static void pci_remove_detatched(libxl__egc *egc,
+static void pci_remove_detached(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
@@ -1978,7 +1978,7 @@ skip1:
 skip_irq:
     rc = 0;
 out_fail:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -2002,7 +2002,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del(libxl__egc *egc,
@@ -2028,7 +2028,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
@@ -2051,7 +2051,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
@@ -2067,7 +2067,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
@@ -2127,7 +2127,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     }
 
 out:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
@@ -2146,12 +2146,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
      * error */
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
-static void pci_remove_detatched(libxl__egc *egc,
-                                 pci_remove_state *prs,
-                                 int rc)
+static void pci_remove_detached(libxl__egc *egc,
+                                pci_remove_state *prs,
+                                int rc)
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35435.67043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH8-0008K4-7j; Tue, 24 Nov 2020 08:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35435.67043; Tue, 24 Nov 2020 08:02:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH7-0008JU-Rh; Tue, 24 Nov 2020 08:02:09 +0000
Received: by outflank-mailman (input) for mailman id 35435;
 Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH3-000894-Dn
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-00060p-LA; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-0001hp-IP; Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH3-000894-Dn
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=ydZtYAWQIkmj6i2skveH66kfmosN7aYsaYp/dzWVx0Q=; b=0+RndaiKvKg5Sun4pE39q/rsG
	cA4cqfqeN03lJSDsfQSo6Yr+ClCo0EBzTJPhWauTbqPDfI8H2iiQbiZHBv2U78yymDumWwbkMeChl
	WCQ/QMQ/4k/UI7qYiPw5zKiwSBlb7D+CGZLz8CvEPsKYqqvukjE/9OhOxSt6WwtzRHsKY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00060p-LA; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-0001hp-IP; Tue, 24 Nov 2020 08:02:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 09/23] libxl: remove unnecessary check from libxl__device_pci_add()
Date: Tue, 24 Nov 2020 08:01:45 +0000
Message-Id: <20201124080159.11912-10-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The code currently checks explicitly whether the device is already assigned,
but this is actually unnecessary as assigned devices do not form part of
the list returned by libxl_device_pci_assignable_list() and hence the
libxl_pci_assignable() test would have already failed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index a5d5d2e78b..ec101f255f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1555,8 +1555,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 {
     STATE_AO_GC(aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
-    int num_assigned, rc;
+    int rc;
     int stubdomid = 0;
     pci_add_state *pas;
 
@@ -1595,19 +1594,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
-    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if ( rc ) {
-        LOGD(ERROR, domid,
-             "cannot determine if device is assigned, refusing to continue");
-        goto out;
-    }
-    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
-                         pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domid, "PCI device already attached to a domain");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35427.66955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH3-000895-KM; Tue, 24 Nov 2020 08:02:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35427.66955; Tue, 24 Nov 2020 08:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH3-00088v-BU; Tue, 24 Nov 2020 08:02:05 +0000
Received: by outflank-mailman (input) for mailman id 35427;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-00087l-6o
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-000609-8v; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-0001hp-4l; Tue, 24 Nov 2020 08:02:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00087l-6o
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=yMHep/dhCDaK4WgXxVfnOxAemfKui/VNIp6TlUXURAA=; b=pkWiyjIhDqUQz3tiqjT1ponhz
	4VIRX3Tz1qeOxloN03HIlwn+aMwjIk9kOhjlxAhZnr4Q7EudsnK8xNqyqZ8or5CzidT1KJDS5r/1R
	hIBKzHF4ekPaq/nuLj8qz5H2+Pf8/tpsPYsdbe44pF7mdmTEa7YY0WOgf4NFa2iArMPhw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-000609-8v; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-0001hp-4l; Tue, 24 Nov 2020 08:02:03 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 02/23] libxl: make libxl__device_list() work correctly for LIBXL__DEVICE_KIND_PCI...
Date: Tue, 24 Nov 2020 08:01:38 +0000
Message-Id: <20201124080159.11912-3-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... devices.

Currently there is an assumption built into libxl__device_list() that device
backends are fully enumerated under the '/libxl' path in xenstore. This is
not the case for PCI backend devices, which are only properly enumerated
under '/local/domain/0/backend'.

This patch adds a new get_path() method to libxl__device_type to allow a
backend implementation (such as PCI) to specify the xenstore path where
devices are enumerated and modifies libxl__device_list() to use this method
if it is available. Also, if the get_num() method is defined, the
from_xenstore() method expects to be passed the backend path without the
device number appended; this patch rectifies that as well.

Having made libxl__device_list() work correctly, this patch removes the
open-coded libxl_pci_device_pci_list() in favour of an evaluation of the
LIBXL_DEFINE_DEVICE_LIST() macro. This has the side-effect of also defining
libxl_pci_device_pci_list_free() which will be used in subsequent patches.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v3:
 - New in v3 (replacing "libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices")
---
 tools/include/libxl.h             |  7 +++++
 tools/libs/light/libxl_device.c   | 66 +++++++++++++++++++++------------------
 tools/libs/light/libxl_internal.h |  2 ++
 tools/libs/light/libxl_pci.c      | 29 +++++------------
 4 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index fbe4c81ba5..ee52d3cf7e 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -452,6 +452,12 @@
 #define LIBXL_HAVE_CONFIG_PCIS 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_LIST_FREE indicates that the
+ * libxl_device_pci_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2321,6 +2327,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
 /*
  * Turns the current process into a backend device service daemon
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index e081faf9a9..ac173a043d 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -2011,7 +2011,7 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
     void *r = NULL;
     void *list = NULL;
     void *item = NULL;
-    char *libxl_path;
+    char *path;
     char **dir = NULL;
     unsigned int ndirs = 0;
     unsigned int ndevs = 0;
@@ -2019,42 +2019,46 @@ void *libxl__device_list(libxl__gc *gc, const libxl__device_type *dt,
 
     *num = 0;
 
-    libxl_path = GCSPRINTF("%s/device/%s",
-                           libxl__xs_libxl_path(gc, domid),
-                           libxl__device_kind_to_string(dt->type));
-
-    dir = libxl__xs_directory(gc, XBT_NULL, libxl_path, &ndirs);
+    if (dt->get_path) {
+        rc = dt->get_path(gc, domid, &path);
+        if (rc) goto out;
+    } else {
+        path = GCSPRINTF("%s/device/%s",
+                         libxl__xs_libxl_path(gc, domid),
+                         libxl__device_kind_to_string(dt->type));
+    }
 
-    if (dir && ndirs) {
-        if (dt->get_num) {
-            if (ndirs != 1) {
-                LOGD(ERROR, domid, "multiple entries in %s\n", libxl_path);
-                rc = ERROR_FAIL;
-                goto out;
-            }
-            rc = dt->get_num(gc, GCSPRINTF("%s/%s", libxl_path, *dir), &ndevs);
-            if (rc) goto out;
-        } else {
+    if (dt->get_num) {
+        rc = dt->get_num(gc, path, &ndevs);
+        if (rc) goto out;
+    } else {
+        dir = libxl__xs_directory(gc, XBT_NULL, path, &ndirs);
+        if (dir && ndirs)
             ndevs = ndirs;
-        }
-        list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
-        item = list;
+    }
 
-        while (*num < ndevs) {
-            dt->init(item);
+    if (!ndevs)
+        return NULL;
 
-            if (dt->from_xenstore) {
-                int nr = dt->get_num ? *num : atoi(*dir);
-                char *device_libxl_path = GCSPRINTF("%s/%s", libxl_path, *dir);
-                rc = dt->from_xenstore(gc, device_libxl_path, nr, item);
-                if (rc) goto out;
-            }
+    list = libxl__malloc(NOGC, dt->dev_elem_size * ndevs);
+    item = list;
 
-            item = (uint8_t *)item + dt->dev_elem_size;
-            ++(*num);
-            if (!dt->get_num)
-                ++dir;
+    while (*num < ndevs) {
+        dt->init(item);
+
+        if (dt->from_xenstore) {
+            int nr = dt->get_num ? *num : atoi(*dir);
+            char *device_path = dt->get_num ? path :
+                GCSPRINTF("%s/%d", path, nr);
+
+            rc = dt->from_xenstore(gc, device_path, nr, item);
+            if (rc) goto out;
         }
+
+        item = (uint8_t *)item + dt->dev_elem_size;
+        ++(*num);
+        if (!dt->get_num)
+            ++dir;
     }
 
     r = list;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 3e70ff639b..ecee61b541 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -3917,6 +3917,7 @@ typedef int (*device_dm_needed_fn_t)(void *, unsigned);
 typedef void (*device_update_config_fn_t)(libxl__gc *, void *, void *);
 typedef int (*device_update_devid_fn_t)(libxl__gc *, uint32_t, void *);
 typedef int (*device_get_num_fn_t)(libxl__gc *, const char *, unsigned int *);
+typedef int (*device_get_path_fn_t)(libxl__gc *, uint32_t, char **);
 typedef int (*device_from_xenstore_fn_t)(libxl__gc *, const char *,
                                          libxl_devid, void *);
 typedef int (*device_set_xenstore_config_fn_t)(libxl__gc *, uint32_t, void *,
@@ -3941,6 +3942,7 @@ struct libxl__device_type {
     device_update_config_fn_t       update_config;
     device_update_devid_fn_t        update_devid;
     device_get_num_fn_t             get_num;
+    device_get_path_fn_t            get_path;
     device_from_xenstore_fn_t       from_xenstore;
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2ff1c64a31..9d44b28f0a 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -2393,29 +2393,13 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
     return rc;
 }
 
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
+static int libxl__device_pci_get_path(libxl__gc *gc, uint32_t domid,
+                                      char **path)
 {
-    GC_INIT(ctx);
-    char *be_path;
-    unsigned int n, i;
-    libxl_device_pci *pcis = NULL;
-
-    *num = 0;
-
-    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
-                                                LIBXL__DEVICE_KIND_PCI);
-    if (libxl__device_pci_get_num(gc, be_path, &n))
-        goto out;
+    *path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                              LIBXL__DEVICE_KIND_PCI);
 
-    pcis = calloc(n, sizeof(libxl_device_pci));
-
-    for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
-
-    *num = n;
-out:
-    GC_FREE;
-    return pcis;
+    return 0;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
@@ -2492,10 +2476,13 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
     return COMPARE_PCI(d1, d2);
 }
 
+LIBXL_DEFINE_DEVICE_LIST(pci)
+
 #define libxl__device_pci_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
+    .get_path = libxl__device_pci_get_path,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35432.67015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH6-0008DK-0g; Tue, 24 Nov 2020 08:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35432.67015; Tue, 24 Nov 2020 08:02:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH5-0008D2-Eh; Tue, 24 Nov 2020 08:02:07 +0000
Received: by outflank-mailman (input) for mailman id 35432;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-00088F-Nd
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-00060T-1o; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-0001hp-VP; Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00088F-Nd
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=QpeMsaNDODuqzK/3ooOCFKLwmrQNIi9uPeHe1VKXhiI=; b=kSYkDCf4URtCU7KeceG1mcm+y
	T5bmNwgU6wg0iKrklthJv+GDNbKhIxvG+n3tw2yniMRKjuMurKc4Rufgj6O/bAObEi523OraEXPk8
	crNRcn/XfpLOGPLLXRwg9MduYdu3FEWs2a2uKnJBPw1alfZBv3XjG8QkzvAelqPVNs9cU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00060T-1o; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-0001hp-VP; Tue, 24 Nov 2020 08:02:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 06/23] libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
Date: Tue, 24 Nov 2020 08:01:42 +0000
Message-Id: <20201124080159.11912-7-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Both 'domid' and 'pci' are available in 'pci_remove_state' so there is no
need to also pass them as separate arguments.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index de617e95eb..41e4b2b571 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1871,14 +1871,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
-static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pci, int force,
-                          pci_remove_state *prs)
+static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_device_pci *assigned;
+    uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
+    libxl_device_pci *pci = prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
@@ -2275,7 +2275,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_domid domid = prs->domid;
     libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
@@ -2293,7 +2292,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
             } else {
                 pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pci, prs->force, prs);
+            do_pci_remove(egc, prs);
             return;
         }
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35433.67029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH6-0008G8-TB; Tue, 24 Nov 2020 08:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35433.67029; Tue, 24 Nov 2020 08:02:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH6-0008FB-BM; Tue, 24 Nov 2020 08:02:08 +0000
Received: by outflank-mailman (input) for mailman id 35433;
 Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH3-00088K-1g
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-00060X-8K; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-0001hp-5Z; Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH3-00088K-1g
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=gOoYvYBdUT6Nx0tE3SEPd8rpiDKo5xSEp4PAlyL6C2k=; b=thqr998HWqKKl8SyXcXMSxsPZ
	225NEVKbNE03NgJbBteJeCZtlthZ0IGCDCnCJTXKKd5ahxVkZ/eI2HmOswAohnLamptgGH6JdhsoD
	yJBJAvyZYDLDsn9S77C4VaZNXna89MOUG8ahGd4Eo8i25ScE6hEB0O3eW1NWptpESh9Ug=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00060X-8K; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-0001hp-5Z; Tue, 24 Nov 2020 08:02:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 07/23] libxl: stop using aodev->device_config in libxl__device_pci_add()...
Date: Tue, 24 Nov 2020 08:01:43 +0000
Message-Id: <20201124080159.11912-8-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to hold a pointer to the device.

There is already a 'pci' field in 'pci_add_state' so simply use that from
the start. This also allows the 'pci' (#3) argument to be dropped from
do_pci_add().

NOTE: This patch also changes the type of the 'pci_domid' field in
      'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
      given what the field is used for.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 41e4b2b571..77edd27345 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1074,7 +1074,7 @@ typedef struct pci_add_state {
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
     libxl_device_pci *pci;
-    int pci_domid;
+    libxl_domid pci_domid;
 } pci_add_state;
 
 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1091,7 +1091,6 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1101,7 +1100,6 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1564,13 +1562,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pci to be used by callbacks */
-    aodev->device_config = pci;
-    aodev->device_type = &libxl__pci_devtype;
-
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
+    pas->pci = pci;
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1624,9 +1619,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         GCNEW(pci_s);
         libxl_device_pci_init(pci_s);
         libxl_device_pci_copy(CTX, pci_s, pci);
+        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pas); /* must be last */
         return;
     }
 
@@ -1681,9 +1677,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     int i;
 
     /* Convenience aliases */
-    libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1718,7 +1713,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
                 pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pci, pas); /* must be last */
+            do_pci_add(egc, domid, pas); /* must be last */
             return;
         }
     }
@@ -1734,7 +1729,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35430.66989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH4-0008BQ-TK; Tue, 24 Nov 2020 08:02:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35430.66989; Tue, 24 Nov 2020 08:02:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH4-0008BB-Gu; Tue, 24 Nov 2020 08:02:06 +0000
Received: by outflank-mailman (input) for mailman id 35430;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-000885-At
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-00060H-LJ; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-0001hp-IN; Tue, 24 Nov 2020 08:02:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-000885-At
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=dpUixXjI/I4QIesMLF+10yxwU7Nt8aYiH/KlAxrUXlM=; b=aRsg47uijfeMixUPqoO2H60Z4
	AjK4rJN/qrsBNyvERPu0gzx3mHW1KYFmNerCWaNz80HxT9Xfvyx60ZvXPTZmvKTlrJEEfsrSMHx0d
	l5JfJlbBx13OpbpbB7FBubuK8cb0i6LxSmq/iUzYEYSNxBWYeDvGpb7v+kN8wSAO77RMw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-00060H-LJ; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-0001hp-IN; Tue, 24 Nov 2020 08:02:03 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 04/23] libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
Date: Tue, 24 Nov 2020 08:01:40 +0000
Message-Id: <20201124080159.11912-5-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Other parameters, such as 'msitranslate' and 'permissive', are dealt with,
but 'rdm_policy' appears to have been completely missed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index da01c77ba2..50c96cbfa6 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
-              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pci->msitranslate, pci->power_mgmt,
-                             pci->permissive));
+              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
+                        pci->msitranslate, pci->power_mgmt,
+                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
@@ -2374,6 +2374,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
             } else if (!strcmp(p, "permissive")) {
                 p = strtok_r(NULL, ",=", &saveptr);
                 pci->permissive = atoi(p);
+            } else if (!strcmp(p, "rdm_policy")) {
+                p = strtok_r(NULL, ",=", &saveptr);
+                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
             }
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35429.66974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH4-0008Ac-D0; Tue, 24 Nov 2020 08:02:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35429.66974; Tue, 24 Nov 2020 08:02:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH4-0008AI-2i; Tue, 24 Nov 2020 08:02:06 +0000
Received: by outflank-mailman (input) for mailman id 35429;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-00087t-8T
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-00060B-FI; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-0001hp-Bz; Tue, 24 Nov 2020 08:02:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00087t-8T
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=zmhShPhhJnaAiA904102UbZsZbkJVaSxT/H2EVj5rXI=; b=3pwtvMITVGSpW4tV3gXEliFEH
	EwakXBXpEvc7ETiGo7BYbQhN9qt8ht8sxc2IfOm9rdgq/7uCoRHj+9ZoWLjGwTAy8rI3jJzCyp2z7
	4cqdCWcYO6FLTDofoGXKtG9Dt9YUo3fQYlqe9GbngGpjy5Dk+5GDT0rEdHFvKRgigO7b8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-00060B-FI; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH1-0001hp-Bz; Tue, 24 Nov 2020 08:02:03 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 03/23] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Tue, 24 Nov 2020 08:01:39 +0000
Message-Id: <20201124080159.11912-4-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently libxl__device_pci_add_xenstore() is broken in that it does not
update the domain's configuration for the first device added (which causes
creation of the overall backend area in xenstore). This can be easily observed
by running 'xl list -l' after adding a single device: the device will be
missing.

This patch fixes the problem and adds a DEBUG log line to allow easy
verification that the domain configuration is being modified. Also, the use
of libxl__device_generic_add() is dropped as it leads to a confusing situation
where only partial backend information is written under the xenstore
'/libxl' path. For LIBXL__DEVICE_KIND_PCI devices the only definitive
information in xenstore is under '/local/domain/0/backend' (the '0' being
hard-coded).

NOTE: This patch includes a whitespace fix in add_pcis_done().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v2:
 - Avoid having two completely different ways of adding devices into xenstore

v3:
 - Revert some changes from v2 as there is confusion over use of the libxl
   and backend xenstore paths which needs to be fixed
---
 tools/libs/light/libxl_pci.c | 87 +++++++++++++++++++++++---------------------
 1 file changed, 45 insertions(+), 42 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 9d44b28f0a..da01c77ba2 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -79,39 +79,55 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
     device->kind = LIBXL__DEVICE_KIND_PCI;
 }
 
-static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pci,
-                                     int num)
+static void libxl__create_pci_backend(libxl__gc *gc, xs_transaction_t t,
+                                      uint32_t domid, const libxl_device_pci *pci)
 {
-    flexarray_t *front = NULL;
-    flexarray_t *back = NULL;
-    libxl__device device;
-    int i;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    flexarray_t *front, *back;
+    char *fe_path, *be_path;
+    struct xs_permissions fe_perms[2], be_perms[2];
+
+    LOGD(DEBUG, domid, "Creating pci backend");
 
     front = flexarray_make(gc, 16, 1);
     back = flexarray_make(gc, 16, 1);
 
-    LOGD(DEBUG, domid, "Creating pci backend");
-
-    /* add pci device */
-    libxl__device_from_pci(gc, domid, pci, &device);
+    fe_path = libxl__domain_device_frontend_path(gc, domid, 0,
+                                                 LIBXL__DEVICE_KIND_PCI);
+    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
+                                                LIBXL__DEVICE_KIND_PCI);
 
+    flexarray_append_pair(back, "frontend", fe_path);
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
-    flexarray_append_pair(back, "online", "1");
+    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pci++)
-        libxl_create_pci_backend_device(gc, back, i, pci);
+    be_perms[0].id = 0;
+    be_perms[0].perms = XS_PERM_NONE;
+    be_perms[1].id = domid;
+    be_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, be_path);
+    xs_mkdir(ctx->xsh, t, be_path);
+    xs_set_permissions(ctx->xsh, t, be_path, be_perms,
+                       ARRAY_SIZE(be_perms));
+    libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
-    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
+    flexarray_append_pair(front, "backend", be_path);
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
     flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
 
-    return libxl__device_generic_add(gc, XBT_NULL, &device,
-                                     libxl__xs_kvs_of_flexarray(gc, back),
-                                     libxl__xs_kvs_of_flexarray(gc, front),
-                                     NULL);
+    fe_perms[0].id = domid;
+    fe_perms[0].perms = XS_PERM_NONE;
+    fe_perms[1].id = 0;
+    fe_perms[1].perms = XS_PERM_READ;
+
+    xs_rm(ctx->xsh, t, fe_path);
+    xs_mkdir(ctx->xsh, t, fe_path);
+    xs_set_permissions(ctx->xsh, t, fe_path,
+                       fe_perms, ARRAY_SIZE(fe_perms));
+    libxl__xs_writev(gc, t, fe_path, libxl__xs_kvs_of_flexarray(gc, front));
 }
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
@@ -135,8 +151,6 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
-    if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -150,17 +164,17 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     back = flexarray_make(gc, 16, 1);
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
-    num = atoi(num_devs);
+    num = num_devs ? atoi(num_devs) : 0;
     libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
-    if (!starting)
+    if (num && !starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
 
     /*
      * Stubdomin config is derived from its target domain, it doesn't have
      * its own file.
      */
-    if (!is_stubdomain) {
+    if (!is_stubdomain && !starting) {
         lock = libxl__lock_domain_userdata(gc, domid);
         if (!lock) {
             rc = ERROR_LOCK_FAIL;
@@ -170,6 +184,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
+        LOGD(DEBUG, domid, "Adding new pci device to config");
         device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
                                  pci);
 
@@ -186,6 +201,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             if (rc) goto out;
         }
 
+        /* This is the first device, so create the backend */
+        if (!num_devs)
+            libxl__create_pci_backend(gc, t, domid, pci);
+
         libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
 
         rc = libxl__xs_transaction_commit(gc, &t);
@@ -1437,7 +1456,7 @@ out_no_irq:
         }
     }
 
-    if (!starting && !libxl_get_stubdom_id(CTX, domid))
+    if (!libxl_get_stubdom_id(CTX, domid))
         rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
@@ -1765,28 +1784,12 @@ static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
 }
 
 static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
-                             int rc)
+                          int rc)
 {
     EGC_GC;
     add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
-
-    /* Convenience aliases */
-    libxl_domain_config *d_config = apds->d_config;
-    libxl_domid domid = apds->domid;
     libxl__ao_device *aodev = apds->outer_aodev;
 
-    if (rc) goto out;
-
-    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
-                                       d_config->num_pcis);
-        if (rc < 0) {
-            LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
-            goto out;
-        }
-    }
-
-out:
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35434.67038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH7-0008Hz-My; Tue, 24 Nov 2020 08:02:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35434.67038; Tue, 24 Nov 2020 08:02:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH7-0008HE-1E; Tue, 24 Nov 2020 08:02:09 +0000
Received: by outflank-mailman (input) for mailman id 35434;
 Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH3-00088d-7t
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-00060j-Ek; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-0001hp-Bz; Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH3-00088d-7t
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=EIPA9+ZEASvzq7mL3kgJN1+orOrgFwgTrTG56YTc2Qo=; b=ksbbyxp2eUVo6dIEVKb++3Fod
	pzpE9qQoTF6+VpIOx21zQRoYWTzEQ+BDB0BuUkG7wgLeiLJgJzPmh+3Zz6SqScH4Vma++NSESNPH0
	lMcFrpXJLmgigfDKDDkNc8pKEDDQQTE3J8yaRaNThH1dPEK7n9VOviQ1Li2fA7gOUDD20=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00060j-Ek; Tue, 24 Nov 2020 08:02:04 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-0001hp-Bz; Tue, 24 Nov 2020 08:02:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 08/23] libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
Date: Tue, 24 Nov 2020 08:01:44 +0000
Message-Id: <20201124080159.11912-9-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

For the purposes of re-binding a device to its previous driver
libxl__device_pci_assignable_add() writes the driver path into xenstore.
This path is then read back in libxl__device_pci_assignable_remove().

The functions that support writing to and reading from xenstore are
currently dedicated to this purpose and hence the node name 'driver_path'
is hard-coded. This patch generalizes these utility functions and passes
'driver_path' as an argument. Subsequent patches will invoke them to
access other nodes.

NOTE: Because the functions will have a broader use (other than storing a
      driver path in lieu of pciback) the base xenstore path is also
      changed from '/libxl/pciback' to '/libxl/pci'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 66 +++++++++++++++++++++-----------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 77edd27345..a5d5d2e78b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -737,48 +737,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCIBACK_INFO_PATH "/libxl/pciback"
+#define PCI_INFO_PATH "/libxl/pci"
 
-static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            char *driver_path)
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    char *path;
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
 
-    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pci->domain,
-                     pci->bus,
-                     pci->dev,
-                     pci->func);
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
+    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
+        LOGE(WARN, "Write of %s to node %s failed.", val, path);
     }
 }
 
-static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    return libxl__xs_read(gc, XBT_NULL,
-                          GCSPRINTF(
-                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func));
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
 {
+    char *path = pci_info_xs_path(gc, pci, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL,
-          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pci->domain,
-                    pci->bus,
-                    pci->dev,
-                    pci->func) );
+    xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
@@ -824,9 +822,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pci, driver_path);
+            pci_info_xs_write(gc, pci, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
+                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -834,7 +832,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pci);
+        pci_info_xs_remove(gc, pci, "driver_path");
     }
 
     if ( pciback_dev_assign(gc, pci) ) {
@@ -884,7 +882,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pci);
+    driver_path = pci_info_xs_read(gc, pci, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -897,7 +895,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pci);
+            pci_info_xs_remove(gc, pci, "driver_path");
         }
     } else {
         if ( rebind ) {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35426.66948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH3-00088W-8R; Tue, 24 Nov 2020 08:02:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35426.66948; Tue, 24 Nov 2020 08:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH3-00088P-2z; Tue, 24 Nov 2020 08:02:05 +0000
Received: by outflank-mailman (input) for mailman id 35426;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-00087m-6p
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH0-000605-PU; Tue, 24 Nov 2020 08:02:02 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH0-0001hp-KF; Tue, 24 Nov 2020 08:02:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-00087m-6p
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=W2WouK8f1AdKprWs2W2fVtTJuFUfHceJRHCSdCahaSQ=; b=MsD+wVyKQplcN23uxPrBM9f3n5
	KqYMhJENpml4skZH+kKFFKNLzYMOK+JfEP17ZlyREUDAgZaYms2akQCpauGt28Q3/XwW58rWHaXKD
	iAnHQ7av6UUw2lkHuER2RpRBgdWnDAa2WkaSTLwyZs0AViDRRQavvXMRT2WceeKtJwsI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH0-000605-PU; Tue, 24 Nov 2020 08:02:02 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH0-0001hp-KF; Tue, 24 Nov 2020 08:02:02 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 00/23] xl / libxl: named PCI pass-through devices
Date: Tue, 24 Nov 2020 08:01:36 +0000
Message-Id: <20201124080159.11912-1-paul@xen.org>
X-Mailer: git-send-email 2.11.0

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (23):
  xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  libxl: make libxl__device_list() work correctly for
    LIBXL__DEVICE_KIND_PCI...
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  libxl: modify
    libxl_device_pci_assignable_add/remove/list/list_free()...
  docs/man: modify xl(1) in preparation for naming of assignable devices
  xl / libxl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  xl / libxl: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 +++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +--
 tools/golang/xenlight/helpers.gen.go |   77 ++-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   67 ++-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_create.c      |    6 +-
 tools/libs/light/libxl_device.c      |   66 ++-
 tools/libs/light/libxl_dm.c          |   18 +-
 tools/libs/light/libxl_internal.h    |   55 +-
 tools/libs/light/libxl_pci.c         | 1048 ++++++++++++++++++----------------
 tools/libs/light/libxl_types.idl     |   19 +-
 tools/libs/util/libxlu_pci.c         |  359 ++++++------
 tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   30 +-
 tools/xl/xl_pci.c                    |  163 +++---
 tools/xl/xl_sxp.c                    |   12 +-
 19 files changed, 1367 insertions(+), 929 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod
---
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35428.66961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH3-00089j-U5; Tue, 24 Nov 2020 08:02:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35428.66961; Tue, 24 Nov 2020 08:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTH3-00089S-Ln; Tue, 24 Nov 2020 08:02:05 +0000
Received: by outflank-mailman (input) for mailman id 35428;
 Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTH2-00087s-8P
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:02:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH1-000607-3R; Tue, 24 Nov 2020 08:02:03 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH0-0001hp-RV; Tue, 24 Nov 2020 08:02:02 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=WvyItcKZ4OKbkkykEic1UVBZaut7cfBmGpJNtPhNblA=; b=Z66TRnK8kMBMGAziVamiq4HMQ
	NQJoaYx//oG2naW0AqSYL/qV9/893Fg/7LlvEsXrG+rkrp6pHn56IWXCZ3ssuCG09hWQIx3TiBHPG
	aif7F703u1r+4QzfWcjFtLJ+Pv+ts+AmpD6fkGm2rkoYvPDDo8FUlUxG+vTNtbW8SrXMY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 01/23] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Tue, 24 Nov 2020 08:01:37 +0000
Message-Id: <20201124080159.11912-2-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The seemingly arbitrary mix of 'pci' and 'pcidev' in the libxl_pci.c code
is confusing and also compromises the use of macros shared with other device
types. Indeed, it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely
because of this duality.

This patch purges 'pcidev' from the libxl code, allowing the evaluation of
DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT,
and hence allowing the former to be removed.

For consistency the xl and libs/util code is also modified, though in that
case the change is purely cosmetic.

NOTE: Some of the more gross formatting errors (such as lack of spaces after
      keywords) that came into context have been fixed in libxl_pci.c.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h             |  17 +-
 tools/libs/light/libxl_create.c   |   6 +-
 tools/libs/light/libxl_dm.c       |  18 +-
 tools/libs/light/libxl_internal.h |  45 ++-
 tools/libs/light/libxl_pci.c      | 582 +++++++++++++++++++-------------------
 tools/libs/light/libxl_types.idl  |   2 +-
 tools/libs/util/libxlu_pci.c      |  36 +--
 tools/xl/xl_parse.c               |  28 +-
 tools/xl/xl_pci.c                 |  68 ++---
 tools/xl/xl_sxp.c                 |  12 +-
 10 files changed, 409 insertions(+), 405 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446..fbe4c81ba5 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -445,6 +445,13 @@
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
 /*
+ * LIBXL_HAVE_CONFIG_PCIS indicates that the 'pcidevs' and 'num_pcidevs'
+ * fields in libxl_domain_config have been renamed to 'pcis' and 'num_pcis'
+ * respectively.
+ */
+#define LIBXL_HAVE_CONFIG_PCIS 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2300,15 +2307,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
 
 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
@@ -2352,8 +2359,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519..1f5052c520 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1100,7 +1100,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
         goto error_out;
     }
 
-    bool need_pt = d_config->num_pcidevs || d_config->num_dtdevs;
+    bool need_pt = d_config->num_pcis || d_config->num_dtdevs;
     if (c_info->passthrough == LIBXL_PASSTHROUGH_DEFAULT) {
         c_info->passthrough = need_pt
             ? LIBXL_PASSTHROUGH_ENABLED : LIBXL_PASSTHROUGH_DISABLED;
@@ -1141,7 +1141,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
      * assignment when PoD is enabled.
      */
     if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV &&
-        d_config->num_pcidevs && pod_enabled) {
+        d_config->num_pcis && pod_enabled) {
         ret = ERROR_INVAL;
         LOGD(ERROR, domid,
              "PCI device assignment for HVM guest failed due to PoD enabled");
@@ -1817,7 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__vtpm_devtype,
     &libxl__usbctrl_devtype,
     &libxl__usbdev_devtype,
-    &libxl__pcidev_devtype,
+    &libxl__pci_devtype,
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3da83259c0..8ebe1b60c9 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -442,7 +442,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
 
     /* Might not expose rdm. */
     if (strategy == LIBXL_RDM_RESERVE_STRATEGY_IGNORE &&
-        !d_config->num_pcidevs)
+        !d_config->num_pcis)
         return 0;
 
     /* Query all RDM entries in this platform */
@@ -469,13 +469,13 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     }
 
     /* Query RDM entries per-device */
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcidevs[i].domain;
-        bus = d_config->pcidevs[i].bus;
-        devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                          d_config->pcidevs[i].func);
+        seg = d_config->pcis[i].domain;
+        bus = d_config->pcis[i].bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].dev,
+                          d_config->pcis[i].func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
@@ -488,7 +488,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
         assert(xrdm);
 
         rc = libxl__device_pci_setdefault(gc, DOMID_INVALID,
-                                          &d_config->pcidevs[i], false);
+                                          &d_config->pcis[i], false);
         if (rc)
             goto out;
 
@@ -516,7 +516,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                      * global policy in this case.
                      */
                     d_config->rdms[j].policy
-                        = d_config->pcidevs[i].rdm_policy;
+                        = d_config->pcis[i].rdm_policy;
                     new = false;
                     break;
                 }
@@ -526,7 +526,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                 add_rdm_entry(gc, d_config,
                               pfn_to_paddr(xrdm[n].start_pfn),
                               pfn_to_paddr(xrdm[n].nr_pages),
-                              d_config->pcidevs[i].rdm_policy);
+                              d_config->pcis[i].rdm_policy);
         }
     }
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9b50..3e70ff639b 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1709,7 +1709,7 @@ _hidden int libxl__pci_topology_init(libxl__gc *gc,
 /* from libxl_pci */
 
 _hidden void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                                   libxl_device_pci *pcidev, bool starting,
+                                   libxl_device_pci *pci, bool starting,
                                    libxl__ao_device *aodev);
 _hidden void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                            libxl__multidev *);
@@ -3945,30 +3945,27 @@ struct libxl__device_type {
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
 
-#define DEFINE_DEVICE_TYPE_STRUCT_X(name, sname, kind, ...)                    \
-    const libxl__device_type libxl__ ## name ## _devtype = {                   \
-        .type          = LIBXL__DEVICE_KIND_ ## kind,                       \
-        .ptr_offset    = offsetof(libxl_domain_config, name ## s),             \
-        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),     \
-        .dev_elem_size = sizeof(libxl_device_ ## sname),                       \
-        .add           = libxl__add_ ## name ## s,                             \
-        .set_default   = (device_set_default_fn_t)                             \
-                         libxl__device_ ## sname ## _setdefault,               \
-        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name,   \
-        .init          = (device_init_fn_t)libxl_device_ ## sname ## _init,    \
-        .copy          = (device_copy_fn_t)libxl_device_ ## sname ## _copy,    \
-        .dispose       = (device_dispose_fn_t)                                 \
-                         libxl_device_ ## sname ## _dispose,                   \
-        .compare       = (device_compare_fn_t)                                 \
-                         libxl_device_ ## sname ## _compare,                   \
-        .update_devid  = (device_update_devid_fn_t)                            \
-                         libxl__device_ ## sname ## _update_devid,             \
-        __VA_ARGS__                                                            \
+#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                           \
+    const libxl__device_type libxl__ ## name ## _devtype = {                 \
+        .type          = LIBXL__DEVICE_KIND_ ## kind,                        \
+        .ptr_offset    = offsetof(libxl_domain_config, name ## s),           \
+        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),   \
+        .dev_elem_size = sizeof(libxl_device_ ## name),                      \
+        .add           = libxl__add_ ## name ## s,                           \
+        .set_default   = (device_set_default_fn_t)                           \
+                         libxl__device_ ## name ## _setdefault,              \
+        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name, \
+        .init          = (device_init_fn_t)libxl_device_ ## name ## _init,   \
+        .copy          = (device_copy_fn_t)libxl_device_ ## name ## _copy,   \
+        .dispose       = (device_dispose_fn_t)                               \
+                         libxl_device_ ## name ## _dispose,                  \
+        .compare       = (device_compare_fn_t)                               \
+                         libxl_device_ ## name ## _compare,                  \
+        .update_devid  = (device_update_devid_fn_t)                          \
+                         libxl__device_ ## name ## _update_devid,            \
+        __VA_ARGS__                                                          \
     }
 
-#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                             \
-    DEFINE_DEVICE_TYPE_STRUCT_X(name, name, kind, __VA_ARGS__)
-
 static inline void **libxl__device_type_get_ptr(
     const libxl__device_type *dt, const libxl_domain_config *d_config)
 {
@@ -3995,7 +3992,7 @@ extern const libxl__device_type libxl__nic_devtype;
 extern const libxl__device_type libxl__vtpm_devtype;
 extern const libxl__device_type libxl__usbctrl_devtype;
 extern const libxl__device_type libxl__usbdev_devtype;
-extern const libxl__device_type libxl__pcidev_devtype;
+extern const libxl__device_type libxl__pci_devtype;
 extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bc5843b137..2ff1c64a31 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,51 +25,51 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pcidev_encode_bdf(libxl_device_pci *pcidev)
+static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pcidev->domain << 16;
-    value |= (pcidev->bus & 0xff) << 8;
-    value |= (pcidev->dev & 0x1f) << 3;
-    value |= (pcidev->func & 0x7);
+    value = pci->domain << 16;
+    value |= (pci->bus & 0xff) << 8;
+    value |= (pci->dev & 0x1f) << 3;
+    value |= (pci->func & 0x7);
 
     return value;
 }
 
-static void pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                            unsigned int bus, unsigned int dev,
+                            unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
 }
 
 static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             flexarray_t *back,
                                             int num,
-                                            const libxl_device_pci *pcidev)
+                                            const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
-    if (pcidev->vdevfn)
-        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pcidev->vdevfn));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    if (pci->vdevfn)
+        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
               GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pcidev->msitranslate, pcidev->power_mgmt,
-                             pcidev->permissive));
+                             pci->msitranslate, pci->power_mgmt,
+                             pci->permissive));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
-static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
-                                      const libxl_device_pci *pcidev,
-                                      libxl__device *device)
+static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
+                                   const libxl_device_pci *pci,
+                                   libxl__device *device)
 {
     device->backend_devid = 0;
     device->backend_domid = 0;
@@ -80,7 +80,7 @@ static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
 }
 
 static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pcidev,
+                                     const libxl_device_pci *pci,
                                      int num)
 {
     flexarray_t *front = NULL;
@@ -94,15 +94,15 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
     LOGD(DEBUG, domid, "Creating pci backend");
 
     /* add pci device */
-    libxl__device_from_pcidev(gc, domid, pcidev, &device);
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
     flexarray_append_pair(back, "online", "1");
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pcidev++)
-        libxl_create_pci_backend_device(gc, back, i, pcidev);
+    for (i = 0; i < num; i++, pci++)
+        libxl_create_pci_backend_device(gc, back, i, pci);
 
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
@@ -116,7 +116,7 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           uint32_t domid,
-                                          const libxl_device_pci *pcidev,
+                                          const libxl_device_pci *pci,
                                           bool starting)
 {
     flexarray_t *back;
@@ -136,7 +136,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pcidev, 1);
+        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -151,7 +151,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
     num = atoi(num_devs);
-    libxl_create_pci_backend_device(gc, back, num, pcidev);
+    libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
     if (!starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
@@ -170,8 +170,8 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
-        device_add_domain_config(gc, &d_config, &libxl__pcidev_devtype,
-                                 pcidev);
+        device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
+                                 pci);
 
         rc = libxl__dm_check_start(gc, &d_config, domid);
         if (rc) goto out;
@@ -201,7 +201,7 @@ out:
     return rc;
 }
 
-static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev)
+static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *be_path, *num_devs_path, *num_devs, *xsdev, *tmp, *tmppath;
@@ -231,8 +231,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pcidev->domain && bus == pcidev->bus &&
-            pcidev->dev == dev && pcidev->func == func) {
+        if (domain == pci->domain && bus == pci->bus &&
+            pci->dev == dev && pci->func == func) {
             break;
         }
     }
@@ -350,7 +350,7 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
                     *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
                     if (*list == NULL)
                         return ERROR_NOMEM;
-                    pcidev_struct_fill(*list + *num, dom, bus, dev, func, 0);
+                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
                     (*num)++;
                 }
             }
@@ -361,8 +361,8 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
     return 0;
 }
 
-static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
-                       int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
+                           int dom, int bus, int dev, int func)
 {
     int i;
 
@@ -383,7 +383,7 @@ static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
 static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pcidev)
+                           libxl_device_pci *pci)
 {
     int rc, fd;
     char *buf;
@@ -394,8 +394,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus,
-                    pcidev->dev, pcidev->func);
+    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
+                    pci->dev, pci->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -411,7 +411,7 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new, *assigned;
     struct dirent *de;
     DIR *dir;
     int r, num_assigned;
@@ -436,40 +436,40 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pcidev_in_array(assigned, num_assigned, dom, bus, dev, func))
+        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
             continue;
 
-        new = realloc(pcidevs, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcidevs = new;
-        new = pcidevs + *num;
+        pcis = new;
+        new = pcis + *num;
 
         memset(new, 0, sizeof(*new));
-        pcidev_struct_fill(new, dom, bus, dev, func, 0);
+        pci_struct_fill(new, dom, bus, dev, func, 0);
         (*num)++;
     }
 
     closedir(dir);
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func);
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -483,7 +483,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pcidev) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -495,11 +495,11 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
     return 0;
 }
 
-static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -507,7 +507,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -515,18 +515,18 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_vendor;
 }
 
-static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -534,7 +534,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -542,25 +542,25 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_device;
 }
 
-static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                     pci->domain, pci->bus, pci->dev, pci->func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -569,7 +569,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
     }
 
@@ -588,16 +588,16 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     uint16_t pt_vendor, pt_device;
     unsigned long class;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
-        pt_vendor = sysfs_dev_get_vendor(gc, pcidev);
-        pt_device = sysfs_dev_get_device(gc, pcidev);
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
+        libxl_device_pci *pci = &d_config->pcis[i];
+        pt_vendor = sysfs_dev_get_vendor(gc, pci);
+        pt_device = sysfs_dev_get_device(gc, pci);
 
         if (pt_vendor == 0xffff || pt_device == 0xffff ||
             pt_vendor != 0x8086)
             continue;
 
-        if (sysfs_dev_get_class(gc, pcidev, &class))
+        if (sysfs_dev_get_class(gc, pci, &class))
             continue;
         if (class == 0x030000)
             return true;
@@ -621,8 +621,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
+/* Scan through /sys/.../pciback/slots looking for pci's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
 {
     FILE *f;
     int rc = 0;
@@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
 
-    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if(dom == pcidev->domain
-           && bus == pcidev->bus
-           && dev == pcidev->dev
-           && func == pcidev->func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
+        if (dom == pci->domain
+            && bus == pci->bus
+            && dev == pci->dev
+            && func == pci->func) {
             rc = 1;
             goto out;
         }
@@ -649,7 +649,7 @@ out:
     return rc;
 }
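As an aside for reviewers: the whole `slots` scan above hinges on one `fscanf`/`sscanf` pattern. A standalone sketch of that BDF matching (hypothetical helper, not part of libxl):

```c
#include <stdio.h>

/* Parse one "dddd:bb:dd.f" line the way the pciback slots scan does.
 * Returns 1 when the line matches the given BDF, 0 otherwise (including
 * on malformed input, where sscanf converts fewer than 4 fields). */
int bdf_line_matches(const char *line, unsigned dom, unsigned bus,
                     unsigned dev, unsigned func)
{
    unsigned d, b, v;
    int f;

    if (sscanf(line, "%x:%x:%x.%d", &d, &b, &v, &f) != 4)
        return 0;
    return d == dom && b == bus && v == dev && (unsigned)f == func;
}
```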
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
     int rc;
@@ -665,8 +665,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pcidev->domain, pcidev->bus,
-                      pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus,
+                      pci->dev, pci->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -677,40 +677,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     return -1;
 }
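Note that `pciback_dev_is_assigned` reduces to a single `lstat` on the sysfs node under the driver's directory; the same existence test in isolation (illustrative only):

```c
#include <sys/stat.h>

/* Mirror of the check in pciback_dev_is_assigned: a device is bound to
 * a driver iff its BDF-named node exists under that driver's sysfs dir. */
int sysfs_node_exists(const char *path)
{
    struct stat st;

    return lstat(path, &st) == 0;
}
```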
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
 {
     int rc;
 
-    if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcidev) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pcidev) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -721,49 +721,49 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 #define PCIBACK_INFO_PATH "/libxl/pciback"
 
 static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             char *driver_path)
 {
     char *path;
 
     path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pcidev->domain,
-                     pcidev->bus,
-                     pcidev->dev,
-                     pcidev->func);
+                     pci->domain,
+                     pci->bus,
+                     pci->dev,
+                     pci->func);
     if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
         LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
     }
 }
 
 static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     return libxl__xs_read(gc, XBT_NULL,
                           GCSPRINTF(
                            PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func));
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func));
 }
 
 static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL,
           GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pcidev->domain,
-                    pcidev->bus,
-                    pcidev->dev,
-                    pcidev->func) );
+                    pci->domain,
+                    pci->bus,
+                    pci->dev,
+                    pci->func) );
 }
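For readers following along: the three helpers above key the `/libxl/pciback` node by a dash-separated BDF (`PCI_BDF_XSPATH`). A sketch of building that key, assuming the usual `"%04x-%02x-%02x-%01x"` layout from libxl_internal.h:

```c
#include <stdio.h>

/* Build the xenstore key used under PCI_BDF_XSPATH, e.g. "0000-03-00-1".
 * Returns the formatted length, as snprintf does.  The exact format
 * string is an assumption taken from libxl's internal macro. */
int bdf_xspath(char *buf, size_t len, unsigned dom, unsigned bus,
               unsigned dev, unsigned func)
{
    return snprintf(buf, len, "%04x-%02x-%02x-%01x", dom, bus, dev, func);
}
```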
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -773,10 +773,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pcidev->domain;
-    bus = pcidev->bus;
-    dev = pcidev->dev;
-    func = pcidev->func;
+    dom = pci->domain;
+    bus = pci->bus;
+    dev = pci->dev;
+    func = pci->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -786,7 +786,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pcidev);
+    rc = pciback_dev_is_assigned(gc, pci);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -796,7 +796,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pcidev, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -805,9 +805,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pcidev, driver_path);
+            pci_assignable_driver_path_write(gc, pci, driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pcidev)) != NULL ) {
+                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,10 +815,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pcidev);
+        pci_assignable_driver_path_remove(gc, pci);
     }
 
-    if ( pciback_dev_assign(gc, pcidev) ) {
+    if ( pciback_dev_assign(gc, pci) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -829,7 +829,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -840,7 +840,7 @@ quarantine:
 }
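Worth spelling out, since this patch also renames it: `xc_assign_device` takes the whole BDF packed into one integer, and `pci_encode_bdf` is plain bit packing. A sketch reconstructed from the call sites (field widths assumed):

```c
#include <stdint.h>

/* Pack domain/bus/dev/func into the single SBDF value handed to
 * xc_assign_device(): domain in bits 31..16, bus in 15..8,
 * device in 7..3, function in 2..0. */
uint32_t encode_bdf(unsigned domain, unsigned bus, unsigned dev,
                    unsigned func)
{
    uint32_t value = (uint32_t)domain << 16;

    value |= (bus & 0xff) << 8;
    value |= (dev & 0x1f) << 3;
    value |= func & 0x7;

    return value;
}
```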
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pcidev,
+                                               libxl_device_pci *pci,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -848,24 +848,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcidev->domain, pcidev->bus,
-            pcidev->dev, pcidev->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc=pciback_dev_is_assigned(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pcidev);
+        pciback_dev_unassign(gc, pci);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pcidev);
+    driver_path = pci_assignable_driver_path_read(gc, pci);
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -873,12 +873,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pcidev) < 0 ) {
+                                 pci) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pcidev);
+            pci_assignable_driver_path_remove(gc, pci);
         }
     } else {
         if ( rebind ) {
@@ -890,26 +890,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
 
     GC_FREE;
     return rc;
@@ -920,7 +920,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
  * driver. It also initialises a bit-mask of which function numbers are present
  * on that device.
 */
-static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsigned int *func_mask)
+static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigned int *func_mask)
 {
     struct dirent *de;
     DIR *dir;
@@ -940,11 +940,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pcidev->domain != dom )
+        if ( pci->domain != dom )
             continue;
-        if ( pcidev->bus != bus )
+        if ( pci->bus != bus )
             continue;
-        if ( pcidev->dev != dev )
+        if ( pci->dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -979,7 +979,7 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 }
 
 static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
-                                 libxl_device_pci *pcidev)
+                                 libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc = 0;
@@ -991,15 +991,15 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    if (pcidev->vdevfn) {
+    if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pcidev->domain, pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->vdevfn, pcidev->msitranslate,
-                         pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pcidev->domain,  pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->msitranslate, pcidev->power_mgmt);
+                         pci->domain,  pci->bus, pci->dev,
+                         pci->func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1010,7 +1010,7 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     if ( rc < 0 )
         LOGD(ERROR, domid, "qemu refused to add device: %s", vdevfn);
-    else if ( sscanf(vdevfn, "0x%x", &pcidev->vdevfn) != 1 ) {
+    else if ( sscanf(vdevfn, "0x%x", &pci->vdevfn) != 1 ) {
         LOGD(ERROR, domid, "wrong format for the vdevfn: '%s'", vdevfn);
         rc = -1;
     }
@@ -1054,7 +1054,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     int pci_domid;
 } pci_add_state;
 
@@ -1072,7 +1072,7 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pcidev,
+                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1082,7 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pcidev = pcidev;
+    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1128,7 +1128,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1136,7 +1136,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_add_xenstore(gc, domid, pcidev);
+    rc = qemu_pci_add_xenstore(gc, domid, pci);
 out:
     pci_add_dm_done(egc, pas, rc); /* must be last */
 }
@@ -1149,7 +1149,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1160,14 +1160,14 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
-    if (pcidev->vdevfn) {
+                           "%04x:%02x:%02x.%01x", pci->domain,
+                           pci->bus, pci->dev, pci->func);
+    if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
-                               PCI_SLOT(pcidev->vdevfn),
-                               PCI_FUNC(pcidev->vdevfn));
+                               PCI_SLOT(pci->vdevfn),
+                               PCI_FUNC(pci->vdevfn));
     }
     /*
      * Version of QEMU prior to the XSA-131 fix did not support
@@ -1179,7 +1179,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
      * set the permissive flag if it is true. Users of older QEMU
      * have no reason to set the flag so this is ok.
      */
-    if (pcidev->permissive)
+    if (pci->permissive)
         libxl__qmp_param_add_bool(gc, &args, "permissive", true);
 
     qmp->ao = pas->aodev->ao;
@@ -1230,7 +1230,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1251,7 +1251,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1283,7 +1283,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
              }
              dev_func = libxl__json_object_get_integer(o);
 
-             pcidev->vdevfn = PCI_DEVFN(dev_slot, dev_func);
+             pci->vdevfn = PCI_DEVFN(dev_slot, dev_func);
 
              rc = 0;
              goto out;
@@ -1331,7 +1331,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1342,8 +1342,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                           pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1383,8 +1383,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                                pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                                pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1411,9 +1411,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
-    if (pcidev->permissive) {
+    if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1422,14 +1422,14 @@ static void pci_add_dm_done(libxl__egc *egc,
 
 out_no_irq:
     if (!isstubdom) {
-        if (pcidev->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
+        if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
-        } else if (pcidev->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
+        } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
             LOGED(ERROR, domainid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1438,7 +1438,7 @@ out_no_irq:
     }
 
     if (!starting && !libxl_get_stubdom_id(CTX, domid))
-        rc = libxl__device_pci_add_xenstore(gc, domid, pcidev, starting);
+        rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
 out:
@@ -1493,7 +1493,7 @@ int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -1504,24 +1504,24 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_ADD;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_add(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_add(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
-static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
+static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
     for (i = 0; i < num; i++) {
-        if (pcidevs[i].domain == pcidev->domain &&
-            pcidevs[i].bus == pcidev->bus &&
-            pcidevs[i].dev == pcidev->dev &&
-            pcidevs[i].func == pcidev->func)
+        if (pcis[i].domain == pci->domain &&
+            pcis[i].bus == pci->bus &&
+            pcis[i].dev == pci->dev &&
+            pcis[i].func == pci->func)
             break;
     }
-    free(pcidevs);
+    free(pcis);
     return i != num;
 }
 
@@ -1535,7 +1535,7 @@ static void device_pci_add_done(libxl__egc *egc,
     pci_add_state *, int rc);
 
 void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                           libxl_device_pci *pcidev, bool starting,
+                           libxl_device_pci *pci, bool starting,
                            libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
@@ -1545,9 +1545,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pcidev to be used by callbacks */
-    aodev->device_config = pcidev;
-    aodev->device_type = &libxl__pcidev_devtype;
+    /* Store *pci to be used by callbacks */
+    aodev->device_config = pci;
+    aodev->device_type = &libxl__pci_devtype;
 
     GCNEW(pas);
     pas->aodev = aodev;
@@ -1556,29 +1556,29 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+                 pci->domain, pci->bus, pci->dev, pci->func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
         }
     }
 
-    rc = libxl__device_pci_setdefault(gc, domid, pcidev, !starting);
+    rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pcidev->seize && !pciback_dev_is_assigned(gc, pcidev)) {
-        rc = libxl__device_pci_assignable_add(gc, pcidev, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
+        rc = libxl__device_pci_assignable_add(gc, pci, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pcidev_assignable(ctx, pcidev)) {
+    if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1589,25 +1589,25 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
              "cannot determine if device is assigned, refusing to continue");
         goto out;
     }
-    if ( is_pcidev_in_array(assigned, num_assigned, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
+                         pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domid, "PCI device already attached to a domain");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pcidev_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
         return;
     }
 
@@ -1664,42 +1664,42 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     /* Convenience aliases */
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) goto out;
 
-    orig_vdev = pcidev->vdevfn & ~7U;
+    orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( !(pcidev->vdevfn >> 3) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( !(pci->vdevfn >> 3) ) {
             LOGD(ERROR, domid, "Must specify a v-slot for multi-function devices");
             rc = ERROR_INVAL;
             goto out;
         }
-        if ( pci_multifunction_check(gc, pcidev, &pfunc_mask) ) {
+        if ( pci_multifunction_check(gc, pci, &pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= pfunc_mask;
+        pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pcidev->func);
+        pfunc_mask = (1 << pci->func);
     }
 
-    for(rc = 0, i = 7; i >= 0; --i) {
+    for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
                 /* if not passing through multiple devices in a block make
                  * sure that virtual function number 0 is always used otherwise
                  * guest won't see the device
                  */
-                pcidev->vdevfn = orig_vdev;
+                pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pcidev, pas); /* must be last */
+            do_pci_add(egc, domid, pci, pas); /* must be last */
             return;
         }
     }
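The vdevfn juggling in this hunk (`orig_vdev = pci->vdevfn & ~7U`, `PCI_DEVFN`/`PCI_SLOT`/`PCI_FUNC` earlier in the file) is the standard devfn encoding: low three bits are the function, the rest the slot. A minimal sketch of those macros:

```c
/* Standard devfn packing: slot in bits 7..3, function in bits 2..0.
 * Masking with ~7 therefore keeps the slot and clears the function,
 * which is exactly what orig_vdev captures above. */
#define PCI_DEVFN(slot, func) ((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)       (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)       ((devfn) & 0x07)
```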
@@ -1715,13 +1715,13 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) {
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+             pci->domain, pci->bus, pci->dev, pci->func,
              rc);
     }
     aodev->rc = rc;
@@ -1733,16 +1733,16 @@ typedef struct {
     libxl__ao_device *outer_aodev;
     libxl_domain_config *d_config;
     libxl_domid domid;
-} add_pcidevs_state;
+} add_pcis_state;
 
-static void add_pcidevs_done(libxl__egc *, libxl__multidev *, int rc);
+static void add_pcis_done(libxl__egc *, libxl__multidev *, int rc);
 
-static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                               libxl_domain_config *d_config,
-                               libxl__multidev *multidev)
+static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
+                            libxl_domain_config *d_config,
+                            libxl__multidev *multidev)
 {
     AO_GC;
-    add_pcidevs_state *apds;
+    add_pcis_state *apds;
     int i;
 
     /* We need to start a new multidev in order to be able to execute
@@ -1752,23 +1752,23 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     apds->outer_aodev = libxl__multidev_prepare(multidev);
     apds->d_config = d_config;
     apds->domid = domid;
-    apds->multidev.callback = add_pcidevs_done;
+    apds->multidev.callback = add_pcis_done;
     libxl__multidev_begin(ao, &apds->multidev);
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         libxl__ao_device *aodev = libxl__multidev_prepare(&apds->multidev);
-        libxl__device_pci_add(egc, domid, &d_config->pcidevs[i],
+        libxl__device_pci_add(egc, domid, &d_config->pcis[i],
                               true, aodev);
     }
 
     libxl__multidev_prepared(egc, &apds->multidev, 0);
 }
 
-static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
+static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
                              int rc)
 {
     EGC_GC;
-    add_pcidevs_state *apds = CONTAINER_OF(multidev, *apds, multidev);
+    add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
 
     /* Convenience aliases */
     libxl_domain_config *d_config = apds->d_config;
@@ -1777,9 +1777,9 @@ static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
 
     if (rc) goto out;
 
-    if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
+    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
+        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
+                                       d_config->num_pcis);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
@@ -1792,7 +1792,7 @@ out:
 }
 
 static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
-                                    libxl_device_pci *pcidev, int force)
+                                    libxl_device_pci *pci, int force)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *state;
@@ -1804,12 +1804,12 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
+                     pci->bus, pci->dev, pci->func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
-    if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
+    if ( !force && (pci->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
         if (libxl__wait_for_device_model_deprecated(gc, domid, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
@@ -1830,7 +1830,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1844,7 +1844,7 @@ typedef struct pci_remove_state {
 } pci_remove_state;
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
-    uint32_t domid, libxl_device_pci *pcidev, bool force,
+    uint32_t domid, libxl_device_pci *pci, bool force,
     libxl__ao_device *aodev);
 static void device_pci_remove_common_next(libxl__egc *egc,
     pci_remove_state *prs, int rc);
@@ -1869,7 +1869,7 @@ static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
 static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pcidev, int force,
+                          libxl_device_pci *pci, int force,
                           pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
@@ -1887,8 +1887,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl__ptr_add(gc, assigned);
 
     rc = ERROR_INVAL;
-    if ( !is_pcidev_in_array(assigned, num, pcidev->domain,
-                      pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( !is_pci_in_array(assigned, num, pci->domain,
+                          pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1917,8 +1917,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                                     pcidev->bus, pcidev->dev, pcidev->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                                     pci->bus, pci->dev, pci->func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1953,8 +1953,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                               pcidev->bus, pcidev->dev, pcidev->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                               pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1988,7 +1988,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1996,7 +1996,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_remove_xenstore(gc, domid, pcidev, prs->force);
+    rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
     pci_remove_detatched(egc, prs, rc);
@@ -2010,7 +2010,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2018,7 +2018,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2080,14 +2080,14 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     if (rc) goto out;
 
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2135,10 +2135,10 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2156,7 +2156,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2170,30 +2170,30 @@ static void pci_remove_detatched(libxl__egc *egc,
     isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
 
     /* don't do multiple resets while some functions are still passed through */
-    if ( (pcidev->vdevfn & 0x7) == 0 ) {
-        libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    if ((pci->vdevfn & 0x7) == 0) {
+        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
         libxl__ao_device *const stubdom_aodev = &prs->stubdom_aodev;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
 
         libxl__prepare_ao_device(ao, stubdom_aodev);
         stubdom_aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
         stubdom_aodev->callback = pci_remove_stubdom_done;
         stubdom_aodev->update_json = prs->aodev->update_json;
-        libxl__device_pci_remove_common(egc, stubdomid, pcidev_s,
+        libxl__device_pci_remove_common(egc, stubdomid, pci_s,
                                         prs->force, stubdom_aodev);
         return;
     }
@@ -2219,14 +2219,14 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
                                             uint32_t domid,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             bool force,
                                             libxl__ao_device *aodev)
 {
@@ -2237,7 +2237,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pcidev = pcidev;
+    prs->pci = pci;
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2247,16 +2247,16 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
-    prs->orig_vdev = pcidev->vdevfn & ~7U;
+    prs->orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( pci_multifunction_check(gc, pcidev, &prs->pfunc_mask) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( pci_multifunction_check(gc, pci, &prs->pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= prs->pfunc_mask;
-    }else{
-        prs->pfunc_mask = (1 << pcidev->func);
+        pci->vfunc_mask &= prs->pfunc_mask;
+    } else {
+        prs->pfunc_mask = (1 << pci->func);
     }
 
     rc = 0;
@@ -2273,7 +2273,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2284,13 +2284,13 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         const int i = prs->next_func;
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
-                pcidev->vdevfn = orig_vdev;
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
+                pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pcidev, prs->force, prs);
+            do_pci_remove(egc, domid, pci, prs->force, prs);
             return;
         }
     }
@@ -2306,7 +2306,7 @@ out:
 }
 
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
 
 {
@@ -2318,12 +2318,12 @@ int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -2334,7 +2334,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, true, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, true, aodev);
     return AO_INPROGRESS;
 }
 
@@ -2353,7 +2353,7 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
     if (s)
         vdevfn = strtol(s, (char **) NULL, 16);
 
-    pcidev_struct_fill(pci, domain, bus, dev, func, vdevfn);
+    pci_struct_fill(pci, domain, bus, dev, func, vdevfn);
 
     s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/opts-%d", be_path, nr));
     if (s) {
@@ -2398,7 +2398,7 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     GC_INIT(ctx);
     char *be_path;
     unsigned int n, i;
-    libxl_device_pci *pcidevs = NULL;
+    libxl_device_pci *pcis = NULL;
 
     *num = 0;
 
@@ -2407,28 +2407,28 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     if (libxl__device_pci_get_num(gc, be_path, &n))
         goto out;
 
-    pcidevs = calloc(n, sizeof(libxl_device_pci));
+    pcis = calloc(n, sizeof(libxl_device_pci));
 
     for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcidevs + i);
+        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
 
     *num = n;
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
     STATE_AO_GC(multidev->ao);
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(CTX, domid, &num);
-    if ( pcidevs == NULL )
+    pcis = libxl_device_pci_list(CTX, domid, &num);
+    if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcidevs);
+    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2436,7 +2436,7 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
          * devices by the time we even get here!
          */
         libxl__ao_device *aodev = libxl__multidev_prepare(multidev);
-        libxl__device_pci_remove_common(egc, domid, pcidevs + i, true,
+        libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
 }
@@ -2449,13 +2449,13 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
     if (!libxl_defbool_val(d_config->b_info.u.hvm.gfx_passthru))
         return 0;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
         uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
         uint32_t stubdom_domid;
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
+        libxl_device_pci *pci = &d_config->pcis[i];
         unsigned long pci_device_class;
 
-        if (sysfs_dev_get_class(gc, pcidev, &pci_device_class))
+        if (sysfs_dev_get_class(gc, pci, &pci_device_class))
             continue;
         if (pci_device_class != 0x030000) /* VGA class */
             continue;
@@ -2494,7 +2494,7 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
 
 #define libxl__device_pci_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT_X(pcidev, pci, PCI,
+DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f399..20f8dd7cfa 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -940,7 +940,7 @@ libxl_domain_config = Struct("domain_config", [
 
     ("disks", Array(libxl_device_disk, "num_disks")),
     ("nics", Array(libxl_device_nic, "num_nics")),
-    ("pcidevs", Array(libxl_device_pci, "num_pcidevs")),
+    ("pcis", Array(libxl_device_pci, "num_pcis")),
     ("rdms", Array(libxl_device_rdm, "num_rdms")),
     ("dtdevs", Array(libxl_device_dtdev, "num_dtdevs")),
     ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 12fc0b3a7f..1d38fffce3 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -23,15 +23,15 @@ static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
     return 0;
 }
 
-static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                           unsigned int bus, unsigned int dev,
+                           unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
     return 0;
 }
 
@@ -47,7 +47,7 @@ static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
 #define STATE_RDM_STRATEGY      10
 #define STATE_RESERVE_POLICY    11
 #define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str)
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
 {
     unsigned state = STATE_DOMAIN;
     unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
@@ -110,11 +110,11 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 }
                 *ptr = '\0';
                 if ( !strcmp(tok, "*") ) {
-                    pcidev->vfunc_mask = LIBXL_PCI_FUNC_ALL;
+                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
                 }else{
                     if ( hex_convert(tok, &func, 0x7) )
                         goto parse_error;
-                    pcidev->vfunc_mask = (1 << 0);
+                    pci->vfunc_mask = (1 << 0);
                 }
                 tok = ptr + 1;
             }
@@ -141,18 +141,18 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
                 *ptr = '\0';
                 if ( !strcmp(optkey, "msitranslate") ) {
-                    pcidev->msitranslate = atoi(tok);
+                    pci->msitranslate = atoi(tok);
                 }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pcidev->power_mgmt = atoi(tok);
+                    pci->power_mgmt = atoi(tok);
                 }else if ( !strcmp(optkey, "permissive") ) {
-                    pcidev->permissive = atoi(tok);
+                    pci->permissive = atoi(tok);
                 }else if ( !strcmp(optkey, "seize") ) {
-                    pcidev->seize = atoi(tok);
+                    pci->seize = atoi(tok);
                 } else if (!strcmp(optkey, "rdm_policy")) {
                     if (!strcmp(tok, "strict")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
                     } else if (!strcmp(tok, "relaxed")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
                     } else {
                         XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
                                           " policy: 'strict' or 'relaxed'.",
@@ -175,7 +175,7 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
     assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
 
     /* Just a pretty way to fill in the values */
-    pcidev_struct_fill(pcidev, dom, bus, dev, func, vslot << 3);
+    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
 
     free(buf2);
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c..0765780d9f 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1470,24 +1470,24 @@ void parse_config_data(const char *config_source,
     }
 
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
+        d_config->num_pcis = 0;
+        d_config->pcis = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
-            libxl_device_pci *pcidev;
-
-            pcidev = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
-                                               d_config->num_pcidevs,
-                                               libxl_device_pci_init);
-            pcidev->msitranslate = pci_msitranslate;
-            pcidev->power_mgmt = pci_power_mgmt;
-            pcidev->permissive = pci_permissive;
-            pcidev->seize = pci_seize;
+            libxl_device_pci *pci;
+
+            pci = ARRAY_EXTEND_INIT_NODEVID(d_config->pcis,
+                                            d_config->num_pcis,
+                                            libxl_device_pci_init);
+            pci->msitranslate = pci_msitranslate;
+            pci->power_mgmt = pci_power_mgmt;
+            pci->permissive = pci_permissive;
+            pci->seize = pci_seize;
             /*
              * Like other pci option, the per-device policy always follows
              * the global policy by default.
              */
-            pcidev->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pcidev, buf);
+            pci->rdm_policy = b_info->u.hvm.rdm.policy;
+            e = xlu_pci_parse_bdf(config, pci, buf);
             if (e) {
                 fprintf(stderr,
                         "unable to parse PCI BDF `%s' for passthrough\n",
@@ -1495,7 +1495,7 @@ void parse_config_data(const char *config_source,
                 exit(-e);
             }
         }
-        if (d_config->num_pcidevs && c_info->type == LIBXL_DOMAIN_TYPE_PV)
+        if (d_config->num_pcis && c_info->type == LIBXL_DOMAIN_TYPE_PV)
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 58345bdae2..34fcf5a4fa 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -24,20 +24,20 @@
 
 static void pcilist(uint32_t domid)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(ctx, domid, &num);
-    if (pcidevs == NULL)
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (pcis == NULL)
         return;
     printf("Vdev Device\n");
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
-               (pcidevs[i].vdevfn >> 3) & 0x1f, pcidevs[i].vdevfn & 0x7,
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pcilist(int argc, char **argv)
@@ -57,28 +57,28 @@ int main_pcilist(int argc, char **argv)
 
 static int pcidetach(uint32_t domid, const char *bdf, int force)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
     if (force) {
-        if (libxl_device_pci_destroy(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
     } else {
-        if (libxl_device_pci_remove(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_remove(ctx, domid, &pci, 0))
             r = 1;
     }
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -108,24 +108,24 @@ int main_pcidetach(int argc, char **argv)
 
 static int pciattach(uint32_t domid, const char *bdf, const char *vs)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_add(ctx, domid, &pcidev, 0))
+    if (libxl_device_pci_add(ctx, domid, &pci, 0))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -155,19 +155,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcidevs == NULL )
+    if ( pcis == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -184,24 +184,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -226,24 +226,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a001570..b03e348ffb 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -190,16 +190,16 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t)\n");
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcidevs[i].domain, d_config->pcidevs[i].bus,
-               d_config->pcidevs[i].dev, d_config->pcidevs[i].func,
-               d_config->pcidevs[i].vdevfn);
+               d_config->pcis[i].domain, d_config->pcis[i].bus,
+               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
-               d_config->pcidevs[i].msitranslate,
-               d_config->pcidevs[i].power_mgmt);
+               d_config->pcis[i].msitranslate,
+               d_config->pcis[i].power_mgmt);
         fprintf(fh, "\t\t)\n");
         fprintf(fh, "\t)\n");
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35504.67068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTio-0003QZ-6o; Tue, 24 Nov 2020 08:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35504.67068; Tue, 24 Nov 2020 08:30:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTio-0003QS-1q; Tue, 24 Nov 2020 08:30:46 +0000
Received: by outflank-mailman (input) for mailman id 35504;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTim-0003PK-U3
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006af-1y; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH5-0001hp-0k; Tue, 24 Nov 2020 08:02:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0003PK-U3
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=+FpU4YZrBWh/cVU+pi5ZnHT9t/65wsl1ql6zxUHKlbw=; b=SFjXCXDdmflFf2GDPjI6sDMVQ
	u/faL4LE7J5oPBotz3KClrSDxDWT8AbC2L0L2Y9haVbMT5Rmc4gxcYNPHIkJitrQL0V50tRLm7NLI
	SjnvJL6K4KD38PjpVVUOJbZPrRsjRQCeTDguJkRxIL/DvgnnnzEBdZm1+VxoXCsvQxkrc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006af-1y; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH5-0001hp-0k; Tue, 24 Nov 2020 08:02:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 20/23] docs/man: modify xl(1) in preparation for naming of assignable devices
Date: Tue, 24 Nov 2020 08:01:56 +0000
Message-Id: <20201124080159.11912-21-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to allow a name to be specified to
'xl pci-assignable-add' such that the assignable device may be referred to
by that name in subsequent operations.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index c5fbce3b5c..0822a58428 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1595,19 +1595,23 @@ List virtual network interfaces for a domain.
 
 =over 4
 
-=item B<pci-assignable-list>
+=item B<pci-assignable-list> [I<-n>]
 
 List all the B<BDF> of assignable PCI devices. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+specified then any name supplied when the device was made assignable
+will also be displayed.
 
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
-=item B<pci-assignable-add> I<BDF>
+=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
 
 Make the device at B<BDF> assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+supplied then the assignable device entry will be named with the
+given B<NAME>.
 
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
@@ -1622,10 +1626,11 @@ not to do this on a device critical to domain 0's operation, such as
 storage controllers, network interfaces, or GPUs that are currently
 being used.
 
-=item B<pci-assignable-remove> [I<-r>] I<BDF>
+=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
 
-Make the device at B<BDF> not assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+Make a device non-assignable to guests. The device may be identified
+either by its B<BDF> or the B<NAME> supplied when the device was made
+assignable. See L<xl-pci-configuration(5)> for more information.
 
 This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35507.67095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTip-0003SZ-95; Tue, 24 Nov 2020 08:30:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35507.67095; Tue, 24 Nov 2020 08:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTip-0003SG-0E; Tue, 24 Nov 2020 08:30:47 +0000
Received: by outflank-mailman (input) for mailman id 35507;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTim-0003PQ-Vk
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006ar-Bx; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH5-0001hp-Fu; Tue, 24 Nov 2020 08:02:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0003PQ-Vk
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=z+TBph7KfTXS8lKGFLMVlUTRo1VQXbbt6hL6Ha7NBXk=; b=rEqyyobaINla31ETAb8+7/Qih
	ROtHcoDpjF5t1g5GpQuy1yk9UNht8D2JDXw/WVPD8A/4s+ekltp/DclPNiUk0Xnn+7YAogYp+Oiig
	ifNNX0xdMkxRbDj3edeGl9GPRkr3YyL8ed6AiegbjLcTl8zZNewdP53sfqGNL8nVjK/yI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006ar-Bx; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH5-0001hp-Fu; Tue, 24 Nov 2020 08:02:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 22/23] docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
Date: Tue, 24 Nov 2020 08:01:58 +0000
Message-Id: <20201124080159.11912-23-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Since assignable devices can be named, a subsequent patch will support use
of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
this case the name will be used to look up the 'bdf' in the list of assignable
(or assigned) devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 4dd73bc498..db3360307c 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -51,7 +51,7 @@ is not specified, or if it is specified with an empty value (whether
 positionally or explicitly).
 
 B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
-B<bdf> will be ignored.
+B<bdf> or B<name> will be ignored.
 
 =head1 Positional Parameters
 
@@ -70,7 +70,11 @@ B<*> to indicate all functions of a multi-function device.
 
 =item Default Value
 
-None. This parameter is mandatory as it identifies the device.
+None. This parameter is mandatory in its positional form. As a non-positional
+parameter it is also mandatory unless a B<name> parameter is present, in
+which case B<bdf> must not be present since the B<name> will be used to find
+the B<bdf> in the list of assignable devices. See L<xl(1)> for more information
+on naming assignable devices.
 
 =back
 
@@ -194,4 +198,21 @@ B<NOTE>: This overrides the global B<rdm> option.
 
 =back
 
+=item B<name>=I<STRING>
+
+=over 4
+
+=item Description
+
+This is the name given when the B<BDF> was made assignable. See L<xl(1)> for
+more information on naming assignable devices.
+
+=item Default Value
+
+None. This parameter must not be present if a B<bdf> parameter is present.
+If a B<bdf> parameter is not present then B<name> is mandatory as it is
+required to look up the B<BDF> in the list of assignable devices.
+
+=back
+
 =back
-- 
2.11.0
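
The bdf/name mutual-exclusion rule documented in the patch above (exactly one of B<bdf> or B<name> must be present, with B<name> resolved against the assignable-device list) can be sketched as follows. This is an illustrative Python sketch only, not libxl's actual lookup code; the device names and lookup table are hypothetical:

```python
def resolve_bdf(spec, assignable):
    """Return the BDF for a parsed PCI spec dict.

    'spec' holds the parsed PCI_SPEC_STRING parameters; 'assignable'
    maps names (given at pci-assignable-add time) to BDFs.
    """
    has_bdf = "bdf" in spec
    has_name = "name" in spec
    if has_bdf == has_name:  # both present, or neither: rejected
        raise ValueError("exactly one of 'bdf' or 'name' is required")
    if has_bdf:
        return spec["bdf"]
    try:
        return assignable[spec["name"]]  # name -> BDF lookup
    except KeyError:
        raise ValueError("unknown assignable device name: %s" % spec["name"])
```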



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35508.67108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTip-0003TT-Pn; Tue, 24 Nov 2020 08:30:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35508.67108; Tue, 24 Nov 2020 08:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTip-0003T8-EL; Tue, 24 Nov 2020 08:30:47 +0000
Received: by outflank-mailman (input) for mailman id 35508;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Pb-0A
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006aj-78; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH4-0001hp-1v; Tue, 24 Nov 2020 08:02:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003Pb-0A
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=YcXfMQGt7JCycdtTvU5ue8RUHb6lcyzpJs/F2AZGOVM=; b=7NDHtfpgZ7fuS/b+NLuUedYji
	bwI3tVhp17dlzApBmJMmVeJZvXRfCDlbbf6Nb2CrstVUJFQcXaUA7jTXPtQHL4JQ6aF1tgC22n7zS
	qWlrRuyuZX1IC5/v9OuHs2o51nUfUowqPksQ1Rd6UViYjoq/4bmcHiXZ90JjqZMCENaYM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006aj-78; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH4-0001hp-1v; Tue, 24 Nov 2020 08:02:06 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 16/23] docs/man: fix xl(1) documentation for 'pci' operations
Date: Tue, 24 Nov 2020 08:01:52 +0000
Message-Id: <20201124080159.11912-17-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently the documentation completely fails to mention the existence of
PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
'pci-assignable-add/remove' take <BDF> arguments whereas 'pci-attach/detach'
take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
patch).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index f92bacfa72..c5fbce3b5c 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1597,14 +1597,18 @@ List virtual network interfaces for a domain.
 
 =item B<pci-assignable-list>
 
-List all the assignable PCI devices.
+List all the B<BDF> of assignable PCI devices. See
+L<xl-pci-configuration(5)> for more information.
+
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
 =item B<pci-assignable-add> I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF assignable to guests.
+Make the device at B<BDF> assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
 first be unbound, and the original driver stored so that it can be
@@ -1620,8 +1624,10 @@ being used.
 
 =item B<pci-assignable-remove> [I<-r>] I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF not assignable to
-guests.  This will at least unbind the device from pciback, and
+Make the device at B<BDF> not assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
+This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
 option is specified, it will also attempt to re-bind the device to its
 original driver, making it usable by Domain 0 again.  If the device is
@@ -1637,15 +1643,15 @@ As always, this should only be done if you trust the guest, or are
 confident that the particular device you're re-assigning to dom0 will
 cancel all in-flight DMA on FLR.
 
-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-plug a new pass-through pci device to the specified domain.
-B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+Hot-plug a new pass-through pci device to the specified domain. See
+L<xl-pci-configuration(5)> for more information.
 
-=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
+=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
-Bus/Device/Function of the physical device to be removed from the guest domain.
+Hot-unplug a pci device that was previously passed through to a domain. See
+L<xl-pci-configuration(5)> for more information.
 
 B<OPTIONS>
 
@@ -1660,7 +1666,7 @@ even without guest domain's collaboration.
 
 =item B<pci-list> I<domain-id>
 
-List pass-through pci devices for a domain.
+List the B<BDF> of pci devices passed through to a domain.
 
 =back
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35512.67155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTis-0003aD-Db; Tue, 24 Nov 2020 08:30:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35512.67155; Tue, 24 Nov 2020 08:30:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTis-0003ZT-14; Tue, 24 Nov 2020 08:30:50 +0000
Received: by outflank-mailman (input) for mailman id 35512;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Py-8A
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006b5-HC; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH3-0001hp-LL; Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003Py-8A
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=GMhuBQvWwKt0oiSVX55rbWxIi0temtvls6NwYa6kuYM=; b=oc4iWKmMQqVome82cT7LhUB9Q
	8uSciffzs5wauEy4tUGZSLwf93XJpQ6pBp+3McG+OuT+7T54gDbkbvbPRxN6Fqvw4SFflQcgOO+79
	cBv81Pa8Zgux0T0ZJgbdL5OJkh8m8JdSTKqJi8WK8QEOQoeagbSS2Qq1ipf/qMg3AEQ8E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006b5-HC; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH3-0001hp-LL; Tue, 24 Nov 2020 08:02:05 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 14/23] docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
Date: Tue, 24 Nov 2020 08:01:50 +0000
Message-Id: <20201124080159.11912-15-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and put it into a new xl-pci-configuration(5) manpage, akin to the
xl-network-configuration(5) and xl-disk-configuration(5) manpages.

This patch moves the content of the section verbatim. A subsequent patch
will improve the documentation, once it is in its new location.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 78 +++++++++++++++++++++++++++++++++++++
 docs/man/xl.cfg.5.pod.in            | 68 +-------------------------------
 2 files changed, 79 insertions(+), 67 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
new file mode 100644
index 0000000000..72a27bd95d
--- /dev/null
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -0,0 +1,78 @@
+=encoding utf8
+
+=head1 NAME
+
+xl-pci-configuration - XL PCI Configuration Syntax
+
+=head1 SYNTAX
+
+This document specifies the format for B<PCI_SPEC_STRING> which is used by
+the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+
+Each B<PCI_SPEC_STRING> has the form of
+B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<[DDDD:]BB:DD.F>
+
+Identifies the PCI device from the host perspective in the domain
+(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
+the same scheme as used in the output of B<lspci(1)> for the device in
+question.
+
+Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
+is zero and it is optional here also. You may specify the function
+(B<F>) as B<*> to indicate all functions.
+
+=item B<@VSLOT>
+
+Specifies the virtual slot where the guest will see this
+device. This is equivalent to the B<DD> which the guest sees. In a
+guest B<DDDD> and B<BB> are C<0000:00>.
+
+=item B<permissive=BOOLEAN>
+
+By default pciback only allows PV guests to write "known safe" values
+into PCI configuration space, likewise QEMU (both qemu-xen and
+qemu-xen-traditional) imposes the same constraint on HVM guests.
+However, many devices require writes to other areas of the configuration space
+in order to operate properly.  This option tells the backend (pciback or QEMU)
+to allow all writes to the PCI configuration space of this device by this
+domain.
+
+B<This option should be enabled with caution:> it gives the guest much
+more control over the device, which may have security or stability
+implications.  It is recommended to only enable this option for
+trusted VMs under administrator's control.
+
+=item B<msitranslate=BOOLEAN>
+
+Specifies that MSI-INTx translation should be turned on for the PCI
+device. When enabled, MSI-INTx translation will always enable MSI on
+the PCI device regardless of whether the guest uses INTx or MSI. Some
+device drivers, such as NVIDIA's, detect an inconsistency and do not
+function when this option is enabled. Therefore the default is false (0).
+
+=item B<seize=BOOLEAN>
+
+Tells B<xl> to automatically attempt to re-assign a device to
+pciback if it is not already assigned.
+
+B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+system device, such as a network or a disk controller being used by
+dom0 without confirmation.  Please use with care.
+
+=item B<power_mgmt=BOOLEAN>
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device. The default is false (0).
+
+=item B<rdm_policy=STRING>
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option but just specific to a given device. The default is "relaxed".
+
+Note: this would override global B<rdm> option.
+
+=back
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..b00644e852 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1101,73 +1101,7 @@ option is valid only when the B<controller> option is specified.
 =item B<pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]>
 
 Specifies the host PCI devices to passthrough to this guest.
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
-
-=over 4
-
-=item B<[DDDD:]BB:DD.F>
-
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
-
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
-
-=item B<@VSLOT>
-
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
-
-=item B<permissive=BOOLEAN>
-
-By default pciback only allows PV guests to write "known safe" values
-into PCI configuration space, likewise QEMU (both qemu-xen and
-qemu-xen-traditional) imposes the same constraint on HVM guests.
-However, many devices require writes to other areas of the configuration space
-in order to operate properly.  This option tells the backend (pciback or QEMU)
-to allow all writes to the PCI configuration space of this device by this
-domain.
-
-B<This option should be enabled with caution:> it gives the guest much
-more control over the device, which may have security or stability
-implications.  It is recommended to only enable this option for
-trusted VMs under administrator's control.
-
-=item B<msitranslate=BOOLEAN>
-
-Specifies that MSI-INTx translation should be turned on for the PCI
-device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
-function when this option is enabled. Therefore the default is false (0).
-
-=item B<seize=BOOLEAN>
-
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
-
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
-system device, such as a network or a disk controller being used by
-dom0 without confirmation.  Please use with care.
-
-=item B<power_mgmt=BOOLEAN>
-
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
-
-=back
+See L<xl-pci-configuration(5)> for more details.
 
 =item B<pci_permissive=BOOLEAN>
 
-- 
2.11.0
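
The B<PCI_SPEC_STRING> grammar the new manpage above documents, B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,...>, can be sketched as a small parser. This is an illustrative Python sketch of the documented format only, not the actual xlu_pci_parse_bdf() implementation:

```python
import re

# [DDDD:]BB:DD.F[@VSLOT] -- domain optional (defaults to 0), function
# may be '*' to select all functions of a multi-function device.
BDF_RE = re.compile(
    r"^(?:(?P<domain>[0-9a-fA-F]{1,4}):)?"   # optional PCI domain
    r"(?P<bus>[0-9a-fA-F]{1,2}):"
    r"(?P<dev>[0-9a-fA-F]{1,2})\."
    r"(?P<func>[0-7]|\*)"                    # '*' = all functions
    r"(?:@(?P<vslot>[0-9a-fA-F]{1,2}))?$"    # optional virtual slot
)

def parse_pci_spec(spec):
    """Split a PCI_SPEC_STRING into its BDF fields and KEY=VALUE options."""
    parts = spec.split(",")
    m = BDF_RE.match(parts[0])
    if m is None:
        raise ValueError("malformed BDF: %s" % parts[0])
    fields = {k: (v if k == "func" and v == "*" else int(v, 16))
              for k, v in m.groupdict().items() if v is not None}
    fields.setdefault("domain", 0)           # lspci-style omitted domain
    opts = dict(kv.split("=", 1) for kv in parts[1:] if kv)
    return fields, opts
```

For example, `parse_pci_spec("0000:03:00.0@6,permissive=1")` yields the numeric BDF fields plus the virtual slot and the `permissive` option.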



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35511.67143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTir-0003Y2-JK; Tue, 24 Nov 2020 08:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35511.67143; Tue, 24 Nov 2020 08:30:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTir-0003XL-6j; Tue, 24 Nov 2020 08:30:49 +0000
Received: by outflank-mailman (input) for mailman id 35511;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Pt-71
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006av-E8; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH4-0001hp-Qc; Tue, 24 Nov 2020 08:02:06 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=4LIIgPLaZ7owad3kxi5/a8tOUYDqOVCOVfSVh7XjWYQ=; b=ZzvIyfmC40wFFFZCfQAgJ7iOc
	GGrTP0ECDwP5DHUUvSKPVSiXZrtpbVsF8UmZouTC4sWrI6EIWffBi0QxRjw911+3MwPYa9Oeu59Na
	gRahdMcaj4wK3Spt1w9swHjo9Tp3LC4BAT5L2+kp0zWF+1eIP56LHyBUXUxGAF2k07Lbs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 19/23] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
Date: Tue, 24 Nov 2020 08:01:55 +0000
Message-Id: <20201124080159.11912-20-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.

This patch modifies the API and callers accordingly. It also modifies
several internal functions in libxl_pci.c that support the API to also use
'libxl_pci_bdf'.

NOTE: The OCaml bindings are adjusted so as to contain (rather than expose)
      the interface change. It should therefore not affect compatibility
      with OCaml-based utilities.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  15 ++-
 tools/libs/light/libxl_pci.c         | 215 +++++++++++++++++++----------------
 tools/ocaml/libs/xl/xenlight_stubs.c |  15 ++-
 tools/xl/xl_pci.c                    |  32 +++---
 4 files changed, 157 insertions(+), 120 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5edacccbd1..5703fdf367 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -470,6 +470,13 @@
 #define LIBXL_HAVE_PCI_BDF 1
 
 /*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
+ * libxl_device_pci_assignable_add/remove/list/list_free() functions all
+ * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 3cfba0e527..f9ace1faec 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,26 +25,33 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pci_encode_bdf(libxl_device_pci *pci)
+static unsigned int pci_encode_bdf(libxl_pci_bdf *pcibdf)
 {
     unsigned int value;
 
-    value = pci->bdf.domain << 16;
-    value |= (pci->bdf.bus & 0xff) << 8;
-    value |= (pci->bdf.dev & 0x1f) << 3;
-    value |= (pci->bdf.func & 0x7);
+    value = pcibdf->domain << 16;
+    value |= (pcibdf->bus & 0xff) << 8;
+    value |= (pcibdf->dev & 0x1f) << 3;
+    value |= (pcibdf->func & 0x7);
 
     return value;
 }
 
+static void pcibdf_struct_fill(libxl_pci_bdf *pcibdf, unsigned int domain,
+                               unsigned int bus, unsigned int dev,
+                               unsigned int func)
+{
+    pcibdf->domain = domain;
+    pcibdf->bus = bus;
+    pcibdf->dev = dev;
+    pcibdf->func = func;
+}
+
 static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
+    pcibdf_struct_fill(&pci->bdf, domain, bus, dev, func);
     pci->vdevfn = vdevfn;
 }
 
@@ -350,8 +357,8 @@ static bool is_pci_in_array(libxl_device_pci *pcis, int num,
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
+static int sysfs_write_bdf(libxl__gc *gc, const char *sysfs_path,
+                           libxl_pci_bdf *pcibdf)
 {
     int rc, fd;
     char *buf;
@@ -362,8 +369,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-                    pci->bdf.dev, pci->bdf.func);
+    buf = GCSPRINTF(PCI_BDF, pcibdf->domain, pcibdf->bus,
+                    pcibdf->dev, pcibdf->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -378,22 +385,22 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 
 #define PCI_INFO_PATH "/libxl/pci"
 
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_path(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
 }
 
 
-static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+static int pci_info_xs_write(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node, const char *val)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
 
     if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
@@ -401,28 +408,28 @@ static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
     return rc;
 }
 
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_read(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
 
     return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                                const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new;
+    libxl_pci_bdf *pcibdfs = NULL, *new;
     struct dirent *de;
     DIR *dir;
 
@@ -443,15 +450,15 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcibdfs, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcis = new;
-        new = pcis + *num;
+        pcibdfs = new;
+        new = pcibdfs + *num;
 
-        libxl_device_pci_init(new);
-        pci_struct_fill(new, dom, bus, dev, func, 0);
+        libxl_pci_bdf_init(new);
+        pcibdf_struct_fill(new, dom, bus, dev, func);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
             continue;
@@ -462,32 +469,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
     closedir(dir);
 out:
     GC_FREE;
-    return pcis;
+    return pcibdfs;
 }
 
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num)
 {
     int i;
 
     for (i = 0; i < num; i++)
-        libxl_device_pci_dispose(&list[i]);
+        libxl_pci_bdf_dispose(&list[i]);
 
     free(list);
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->bdf.domain,
-                           pci->bdf.bus,
-                           pci->bdf.dev,
-                           pci->bdf.func);
+                           pcibdf->domain,
+                           pcibdf->bus,
+                           pcibdf->dev,
+                           pcibdf->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -501,7 +508,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -639,8 +646,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
+/* Scan through /sys/.../pciback/slots looking for BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     FILE *f;
     int rc = 0;
@@ -653,11 +660,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
         return ERROR_FAIL;
     }
 
-    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->bdf.domain
-            && bus == pci->bdf.bus
-            && dev == pci->bdf.dev
-            && func == pci->bdf.func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
+        if (dom == pcibdf->domain
+           && bus == pcibdf->bus
+           && dev == pcibdf->dev
+           && func == pcibdf->func) {
             rc = 1;
             goto out;
         }
@@ -667,7 +674,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     char * spath;
     int rc;
@@ -683,8 +690,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->bdf.domain, pci->bdf.bus,
-                      pci->bdf.dev, pci->bdf.func);
+                      pcibdf->domain, pcibdf->bus,
+                      pcibdf->dev, pcibdf->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -695,40 +702,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_assign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     int rc;
 
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pcibdf)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcibdf) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pcibdf) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -737,7 +744,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pci,
+                                            libxl_pci_bdf *pcibdf,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -747,10 +754,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->bdf.domain;
-    bus = pci->bdf.bus;
-    dev = pci->bdf.dev;
-    func = pci->bdf.func;
+    dom = pcibdf->domain;
+    bus = pcibdf->bus;
+    dev = pcibdf->dev;
+    func = pcibdf->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -760,7 +767,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
+    rc = pciback_dev_is_assigned(gc, pcibdf);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -770,7 +777,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -779,9 +786,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
+            pci_info_xs_write(gc, pcibdf, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
+                     pci_info_xs_read(gc, pcibdf, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -789,10 +796,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
+        pci_info_xs_remove(gc, pcibdf, "driver_path");
     }
 
-    if ( pciback_dev_assign(gc, pci) ) {
+    if ( pciback_dev_assign(gc, pcibdf) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -803,7 +810,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -814,7 +821,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pci,
+                                               libxl_pci_bdf *pcibdf,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -822,24 +829,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-            pci->bdf.dev, pci->bdf.func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcibdf->domain,
+            pcibdf->bus, pcibdf->dev, pcibdf->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pcibdf)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
+        pciback_dev_unassign(gc, pcibdf);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    driver_path = pci_info_xs_read(gc, pcibdf, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -847,12 +854,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
+                                 pcibdf) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_info_xs_remove(gc, pci, "driver_path");
+            pci_info_xs_remove(gc, pcibdf, "driver_path");
         }
     } else {
         if ( rebind ) {
@@ -864,26 +871,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
@@ -1385,7 +1392,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+                             &pci->bdf) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1401,7 +1408,8 @@ out_no_irq:
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(&pci->bdf),
+                             flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1480,15 +1488,28 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static int is_bdf_in_array(libxl_pci_bdf *pcibdfs, int num,
+                           libxl_pci_bdf *pcibdf)
 {
-    libxl_device_pci *pcis;
+    int i;
+
+    for(i = 0; i < num; i++) {
+        if (COMPARE_BDF(pcibdf, &pcibdfs[i]))
+            break;
+    }
+
+    return i < num;
+}
+
+static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
+{
+    libxl_pci_bdf *pcibdfs;
     int num;
     bool assignable;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
-    assignable = is_pci_in_array(pcis, num, pci);
-    libxl_device_pci_assignable_list_free(pcis, num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
+    assignable = is_bdf_in_array(pcibdfs, num, pcibdf);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 
     return assignable;
 }
@@ -1523,7 +1544,8 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
+        rc = xc_test_assign_device(ctx->xch, domid,
+                                   pci_encode_bdf(&pci->bdf));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
@@ -1537,20 +1559,20 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
-        rc = libxl__device_pci_assignable_add(gc, pci, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pci_assignable(ctx, pci)) {
+    if (!is_bdf_assignable(ctx, &pci->bdf)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    rc = pci_info_xs_write(gc, &pci->bdf, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
     libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
@@ -1674,7 +1696,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
-        pci_info_xs_remove(gc, pci, "domid");
+        pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
@@ -2114,7 +2136,8 @@ static void pci_remove_detached(libxl__egc *egc,
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
+        rc = xc_deassign_device(CTX->xch, domid,
+                                pci_encode_bdf(&pci->bdf));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
@@ -2243,7 +2266,7 @@ out:
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
 
-    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+    if (!rc) pci_info_xs_remove(gc, &pci->bdf, "domid");
 
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d..2388f23869 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,7 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -861,7 +861,7 @@ value stub_xl_device_pci_assignable_remove(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_remove(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_remove(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -876,7 +876,7 @@ value stub_xl_device_pci_assignable_list(value ctx)
 {
 	CAMLparam1(ctx);
 	CAMLlocal2(list, temp);
-	libxl_device_pci *c_list;
+	libxl_pci_bdf *c_list;
 	int i, nb;
 	uint32_t c_domid;
 
@@ -889,11 +889,18 @@ value stub_xl_device_pci_assignable_list(value ctx)
 
 	list = temp = Val_emptylist;
 	for (i = 0; i < nb; i++) {
+		libxl_device_pci pci;
+
+		libxl_device_pci_init(&pci);
+		libxl_pci_bdf_copy(CTX, &pci.bdf, &c_list[i]);
+
 		list = caml_alloc_small(2, Tag_cons);
 		Field(list, 0) = Val_int(0);
 		Field(list, 1) = temp;
 		temp = list;
-		Store_field(list, 0, Val_device_pci(&c_list[i]));
+		Store_field(list, 0, Val_device_pci(&pci));
+
+		libxl_device_pci_dispose(&pci);
 	}
 	libxl_device_pci_assignable_list_free(c_list, nb);
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 9c24496cb2..37708b4eb1 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -154,19 +154,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcis == NULL )
+    if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
-               pcis[i].bdf.func);
+               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
+               pcibdfs[i].func);
     }
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -183,24 +183,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -225,24 +225,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35509.67120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiq-0003Ua-Aa; Tue, 24 Nov 2020 08:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35509.67120; Tue, 24 Nov 2020 08:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTip-0003U3-UY; Tue, 24 Nov 2020 08:30:47 +0000
Received: by outflank-mailman (input) for mailman id 35509;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Pg-1Z
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006an-AW; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-0001hp-Oq; Tue, 24 Nov 2020 08:02:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003Pg-1Z
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=cM4E5HSO7y4rsBvC0ulob6owEH5QJoWwuEB72xnydjQ=; b=qOas+fZ5XzQqOVTEoQEKEj1oH
	jGM3ZkDwfooe3PEaIGxIjW+KeXJp5AiQs08+U/GgOVMt89cG5DtWY3PveNm+BHdrnScSXahY7NU92
	YUNw5FAxcP06c6Fy6+7lYRc6NFmblBZ0+zG7BV/JUBcc2pe+i4BG0W4CgwBW8KqUB7sbo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006an-AW; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-0001hp-Oq; Tue, 24 Nov 2020 08:02:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 10/23] libxl: remove get_all_assigned_devices() from libxl_pci.c
Date: Tue, 24 Nov 2020 08:01:46 +0000
Message-Id: <20201124080159.11912-11-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Use of this function is a very inefficient way to check whether a device
has already been assigned.

This patch adds code that saves the domain id in xenstore at the point of
assignment, and removes it again when the device is de-assigned (or the
domain is destroyed). It is then straightforward to check whether a device
has been assigned by checking whether it has a saved domain id.

NOTE: To facilitate the xenstore check it is necessary to move the
      pci_info_xs_read() earlier in libxl_pci.c. To keep related functions
      together, the rest of the pci_info_xs_XXX() functions are moved too.
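
As an illustration only (not part of the patch), the tracking scheme
described above can be sketched in Python. A plain dict stands in for
xenstore; the path layout mirrors /libxl/pci/<bdf>/domid, but the helper
names here are invented for the sketch:

```python
# Sketch of xenstore-backed assignment tracking. A dict stands in for
# xenstore; helper names are illustrative, not the libxl API.
xenstore = {}

def pci_info_path(bdf, node=None):
    base = "/libxl/pci/" + bdf
    return base + "/" + node if node else base

def assign_device(bdf, domid):
    # Record the owning domain id at the point of assignment.
    xenstore[pci_info_path(bdf, "domid")] = str(domid)

def deassign_device(bdf):
    # Remove the record when the device is de-assigned.
    xenstore.pop(pci_info_path(bdf, "domid"), None)

def is_assigned(bdf):
    # A single lookup replaces walking every domain's pciback
    # backend entries (the old get_all_assigned_devices() scan).
    return pci_info_path(bdf, "domid") in xenstore

assign_device("0000:36:00.0", 1)
print(is_assigned("0000:36:00.0"))   # True
deassign_device("0000:36:00.0")
print(is_assigned("0000:36:00.0"))   # False
```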

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 149 ++++++++++++++++---------------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index ec101f255f..d3c7a547c3 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,50 +336,6 @@ retry_transaction2:
     return 0;
 }
 
-static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
-{
-    char **domlist;
-    unsigned int nd = 0, i;
-
-    *list = NULL;
-    *num = 0;
-
-    domlist = libxl__xs_directory(gc, XBT_NULL, "/local/domain", &nd);
-    for(i = 0; i < nd; i++) {
-        char *path, *num_devs;
-
-        path = GCSPRINTF("/local/domain/0/backend/%s/%s/0/num_devs",
-                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                         domlist[i]);
-        num_devs = libxl__xs_read(gc, XBT_NULL, path);
-        if ( num_devs ) {
-            int ndev = atoi(num_devs), j;
-            char *devpath, *bdf;
-
-            for(j = 0; j < ndev; j++) {
-                devpath = GCSPRINTF("/local/domain/0/backend/%s/%s/0/dev-%u",
-                                    libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                                    domlist[i], j);
-                bdf = libxl__xs_read(gc, XBT_NULL, devpath);
-                if ( bdf ) {
-                    unsigned dom, bus, dev, func;
-                    if ( sscanf(bdf, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
-                        continue;
-
-                    *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
-                    if (*list == NULL)
-                        return ERROR_NOMEM;
-                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
-                    (*num)++;
-                }
-            }
-        }
-    }
-    libxl__ptr_add(gc, *list);
-
-    return 0;
-}
-
 static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
                            int dom, int bus, int dev, int func)
 {
@@ -427,19 +383,58 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
     return 0;
 }
 
+#define PCI_INFO_PATH "/libxl/pci"
+
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
 
     *num = 0;
 
-    r = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if (r) goto out;
-
     dir = opendir(SYSFS_PCIBACK_DRIVER);
     if (NULL == dir) {
         if (errno == ENOENT) {
@@ -455,9 +450,6 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
-            continue;
-
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
@@ -467,6 +459,10 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
         memset(new, 0, sizeof(*new));
         pci_struct_fill(new, dom, bus, dev, func, 0);
+
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+            continue;
+
         (*num)++;
     }
 
@@ -737,48 +733,6 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCI_INFO_PATH "/libxl/pci"
-
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    return node ?
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
-                  node) :
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
-}
-
-
-static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node, const char *val)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", val, path);
-    }
-}
-
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    return libxl__xs_read(gc, XBT_NULL, path);
-}
-
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
-                               const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-
-    /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL, path);
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
@@ -1594,6 +1548,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
+    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    if (rc) goto out;
+
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
@@ -1721,6 +1678,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->domain, pci->bus, pci->dev, pci->func,
              rc);
+        pci_info_xs_remove(gc, pci, "domid");
     }
     aodev->rc = rc;
     aodev->callback(egc, aodev);
@@ -2282,6 +2240,9 @@ out:
     libxl__xswait_stop(gc, &prs->xswait);
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
+
+    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35506.67073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTio-0003R2-Er; Tue, 24 Nov 2020 08:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35506.67073; Tue, 24 Nov 2020 08:30:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTio-0003Qq-AV; Tue, 24 Nov 2020 08:30:46 +0000
Received: by outflank-mailman (input) for mailman id 35506;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTim-0003PL-UL
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006ah-3v; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH3-0001hp-Rm; Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0003PL-UL
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=z6HgaaJxE43aYUfbefthFXj4bV6TUshSIIqyN3QunIw=; b=KvNaN7Gc1rXGzOL5ZtUOdh6i4
	UQAFI11D88dYwfC+M6zZBs5xlHyu8ud+YycOoKhQRRkYYUy7216pMZ2//YOb1YtY4woooA+5jTQTj
	SgZcLH8Oi3bSlvFuuoEC0Z0f7/+FyoX1xsf85MLecSQ+9/ClBlgjZkLGrQifKXJqcF8Ww=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006ah-3v; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH3-0001hp-Rm; Tue, 24 Nov 2020 08:02:05 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 15/23] docs/man: improve documentation of PCI_SPEC_STRING...
Date: Tue, 24 Nov 2020 08:01:51 +0000
Message-Id: <20201124080159.11912-16-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and prepare for adding support for non-positional parsing of 'bdf' and
'vslot' in a subsequent patch.

Also document 'BDF' as a first-class parameter type and fix the documentation
to state that the default value of 'rdm_policy' is actually 'strict', not
'relaxed', as can be seen in libxl__device_pci_setdefault().
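
Purely as an illustration (not the libxlu parser itself), the
comma-separated keyword/value scheme the reworked documentation
describes can be sketched as follows; the C<@vslot> shorthand and bare
flags are omitted for brevity, and the function name is invented:

```python
# Illustrative sketch of the documented parameter scheme: parts without
# "=" fill positional slots in order; any parameter may instead be
# given by name; each parameter may appear at most once.
POSITIONAL = ["bdf", "vslot"]

def parse_pci_spec_string(spec):
    params = {}
    pos = 0
    for part in spec.split(","):
        if not part:
            continue
        if "=" in part:
            key, val = part.split("=", 1)
        else:
            if pos >= len(POSITIONAL):
                raise ValueError("too many positional parameters")
            key, val = POSITIONAL[pos], part
            pos += 1
        if key in params:
            raise ValueError("duplicate parameter: " + key)
        params[key] = val
    return params

# The named and positional forms from the text parse identically:
print(parse_pci_spec_string("36:00.0,vslot=20,seize=1"))
print(parse_pci_spec_string("bdf=36:00.0,vslot=20,seize=1"))
```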

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 177 ++++++++++++++++++++++++++++++------
 1 file changed, 148 insertions(+), 29 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 72a27bd95d..4dd73bc498 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -6,32 +6,105 @@ xl-pci-configuration - XL PCI Configuration Syntax
 
 =head1 SYNTAX
 
-This document specifies the format for B<PCI_SPEC_STRING> which is used by
-the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+This document specifies the format for B<BDF> and B<PCI_SPEC_STRING> which are
+used by the L<xl.cfg(5)> pci configuration option, and related L<xl(1)>
+commands.
 
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+A B<BDF> has the following form:
+
+    [DDDD:]BB:SS.F
+
+B<DDDD> is the domain number, B<BB> is the bus number, B<SS> is the device (or
+slot) number, and B<F> is the function number. This is the same scheme as
+used in the output of L<lspci(1)> for the device in question. By default
+L<lspci(1)> will omit the domain (B<DDDD>) if it is zero and hence a zero
+value for domain may also be omitted when specifying a B<BDF>.
+
+Each B<PCI_SPEC_STRING> has one of the following forms:
+
+=over 4
+
+    <bdf>[@<vslot>],[<key>=<value>,]*
+    [<key>=<value>,]*
+
+=back
+
+For example, these strings are equivalent:
 
 =over 4
 
-=item B<[DDDD:]BB:DD.F>
+    36:00.0@20,seize=1
+    36:00.0,vslot=20,seize=1
+    bdf=36:00.0,vslot=20,seize=1
 
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
+=back
+
+More formally, the string is a series of comma-separated keyword/value
+pairs, flags and positional parameters.  Parameters which are not bare
+keywords and which do not contain "=" symbols are assigned to the
+positional parameters, in the order specified below.  The positional
+parameters may also be specified by name.
+
+Each parameter may be specified at most once, either as a positional
+parameter or a named parameter.  Default values apply if the parameter
+is not specified, or if it is specified with an empty value (whether
+positionally or explicitly).
+
+B<NOTE>: In the context of B<xl pci-detach> (see L<xl(1)>), parameters other than
+B<bdf> will be ignored.
+
+=head1 Positional Parameters
+
+=over 4
+
+=item B<bdf>=I<BDF>
+
+=over 4
 
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
+=item Description
 
-=item B<@VSLOT>
+This identifies the PCI device from the host perspective.
 
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
+In the context of a B<PCI_SPEC_STRING> you may specify the function (B<F>) as
+B<*> to indicate all functions of a multi-function device.
 
-=item B<permissive=BOOLEAN>
+=item Default Value
+
+None. This parameter is mandatory as it identifies the device.
+
+=back
+
+=item B<vslot>=I<NUMBER>
+
+=over 4
+
+=item Description
+
+Specifies the virtual slot (device) number where the guest will see this
+device. For example, running L<lspci(1)> in a Linux guest where B<vslot>
+was specified as C<8> would identify the device as C<00:08.0>. Virtual domain
+and bus numbers are always 0.
+
+B<NOTE:> This parameter is always parsed as a hexadecimal value.
+
+=item Default Value
+
+None. This parameter is not mandatory. An available B<vslot> will be selected
+if this parameter is not specified.
+
+=back
+
+=back
+
+=head1 Other Parameters and Flags
+
+=over 4
+
+=item B<permissive>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 By default pciback only allows PV guests to write "known safe" values
 into PCI configuration space, likewise QEMU (both qemu-xen and
@@ -46,33 +119,79 @@ more control over the device, which may have security or stability
 implications.  It is recommended to only enable this option for
 trusted VMs under administrator's control.
 
-=item B<msitranslate=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<msitranslate>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 Specifies that MSI-INTx translation should be turned on for the PCI
 device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
+the PCI device regardless of whether the guest uses INTx or MSI.
+
+=item Default Value
+
+Some device drivers, such as NVIDIA's, detect an inconsistency and do not
 function when this option is enabled. Therefore the default is false (0).
 
-=item B<seize=BOOLEAN>
+=back
+
+=item B<seize>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
+Tells L<xl(1)> to automatically attempt to make the device assignable to
+guests if that has not already been done by the B<pci-assignable-add>
+command.
 
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+B<WARNING:> If you set this option, L<xl(1)> will gladly re-assign a critical
 system device, such as a network or a disk controller being used by
 dom0 without confirmation.  Please use with care.
 
-=item B<power_mgmt=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<power_mgmt>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
+D0-D3hot power management states for the PCI device.
+
+=item Default Value
+
+0
 
-=item B<rdm_policy=STRING>
+=back
+
+=item B<rdm_policy>=I<STRING>
+
+=over 4
+
+=item Description
 
 B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
+option in L<xl.cfg(5)> but just specific to a given device.
 
-Note: this would override global B<rdm> option.
+B<NOTE>: This overrides the global B<rdm> option.
+
+=item Default Value
+
+"strict"
+
+=back
 
 =back
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35505.67084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTio-0003Rj-Sb; Tue, 24 Nov 2020 08:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35505.67084; Tue, 24 Nov 2020 08:30:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTio-0003Ra-LQ; Tue, 24 Nov 2020 08:30:46 +0000
Received: by outflank-mailman (input) for mailman id 35505;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTim-0003PM-V8
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006al-8x; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH5-0001hp-N5; Tue, 24 Nov 2020 08:02:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0003PM-V8
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:44 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=/AMoYjWi5H9dseAs4K+GDd7cbQWU+SnJYanWAEcg98o=; b=16qYaUG1cZYFRWuRFo0fIOqI5
	fgSidJvEJPOnjNUnMB41AcXK3Ormm8340GCfsUGTxOYqDHVVtW0dfIfz1ydiOUgwriiR4p45diZM3
	dj35qyPfIn/ftKCHqFPqY6mXaTbe2OaiVd8Mr1RayqjrQtQwHJhbmKLOxMQKUrZaoIyYc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006al-8x; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH5-0001hp-N5; Tue, 24 Nov 2020 08:02:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 23/23] xl / libxl: support 'xl pci-attach/detach' by name
Date: Tue, 24 Nov 2020 08:01:59 +0000
Message-Id: <20201124080159.11912-24-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a 'name' field into the idl for 'libxl_device_pci' and
libxlu_pci_parse_spec_string() is modified to parse the new 'name'
parameter of PCI_SPEC_STRING detailed in the updated documentation in
xl-pci-configuration(5).

If the 'name' field is non-NULL then both libxl_device_pci_add() and
libxl_device_pci_remove() will use it to look up the device BDF in
the list of assignable devices.
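
As a sketch only (names invented, not the libxl API), the rule this
patch enforces — exactly one of 'bdf' or 'name' must be supplied, with
'name' resolved through the assignable-device list — looks like this:

```python
# Sketch of the name-or-BDF rule. The dict stands in for the
# assignable-device list recorded at pci-assignable-add time.
assignable = {"gpu0": "0000:36:00.0"}  # name -> BDF (illustrative)

def resolve_device(bdf=None, name=None):
    # Mirrors the parser's check: !(bdf_present ^ name_present)
    # is an error, i.e. exactly one of the two must be given.
    if (bdf is None) == (name is None):
        raise ValueError("exactly one of 'bdf' or 'name' required")
    if name is not None:
        if name not in assignable:
            raise LookupError("no assignable device named " + name)
        return assignable[name]
    return bdf

print(resolve_device(name="gpu0"))  # 0000:36:00.0
```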

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h            |  6 ++++
 tools/libs/light/libxl_pci.c     | 67 +++++++++++++++++++++++++++++++++++++---
 tools/libs/light/libxl_types.idl |  1 +
 tools/libs/util/libxlu_pci.c     |  7 ++++-
 4 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4025d3a3d4..5b55a20155 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -485,6 +485,12 @@
 #define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_NAME indicates that the 'name' field of
+ * libxl_device_pci is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_NAME 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 1da7fd5508..c45f4c6a40 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -60,6 +60,10 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             int num,
                                             const libxl_device_pci *pci)
 {
+    if (pci->name) {
+        flexarray_append(back, GCSPRINTF("name-%d", num));
+        flexarray_append(back, GCSPRINTF("%s", pci->name));
+    }
     flexarray_append(back, GCSPRINTF("key-%d", num));
     flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
@@ -284,6 +288,7 @@ retry_transaction:
 
 retry_transaction2:
     t = xs_transaction_start(ctx->xsh);
+    xs_rm(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/state-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/key-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/dev-%d", be_path, i));
@@ -322,6 +327,12 @@ retry_transaction2:
             xs_write(ctx->xsh, t, GCSPRINTF("%s/vdevfn-%d", be_path, j - 1), tmp, strlen(tmp));
             xs_rm(ctx->xsh, t, tmppath);
         }
+        tmppath = GCSPRINTF("%s/name-%d", be_path, j);
+        tmp = libxl__xs_read(gc, t, tmppath);
+        if (tmp) {
+            xs_write(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, j - 1), tmp, strlen(tmp));
+            xs_rm(ctx->xsh, t, tmppath);
+        }
     }
     if (!xs_transaction_end(ctx->xsh, t, 0))
         if (errno == EAGAIN)
@@ -1619,6 +1630,23 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &pci->bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
         rc = xc_test_assign_device(ctx->xch, domid,
                                    pci_encode_bdf(&pci->bdf));
@@ -1767,11 +1795,19 @@ static void device_pci_add_done(libxl__egc *egc,
     libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
-        LOGD(ERROR, domid,
-             "libxl__device_pci_add  failed for "
-             "PCI device %x:%x:%x.%x (rc %d)",
-             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
-             rc);
+        if (pci->name) {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device '%s' (rc %d)",
+                 pci->name,
+                 rc);
+        } else {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device %x:%x:%x.%x (rc %d)",
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                 rc);
+        }
         pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
@@ -2288,6 +2324,23 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &prs->pci.bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     prs->orig_vdev = pci->vdevfn & ~7U;
 
     if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
@@ -2422,6 +2475,10 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
 
+    s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/name-%d", be_path, nr));
+    if (s)
+        pci->name = strdup(s);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2c441142fb..44bad36f1c 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -778,6 +778,7 @@ libxl_pci_bdf = Struct("pci_bdf", [
 
 libxl_device_pci = Struct("device_pci", [
     ("bdf", libxl_pci_bdf),
+    ("name", string),
     ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index a8b6ce5427..543a1f80e9 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -151,6 +151,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
 {
     const char *ptr = str;
     bool bdf_present = false;
+    bool name_present = false;
     int ret;
 
     /* Attempt to parse 'bdf' as positional parameter */
@@ -193,6 +194,10 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             pcidev->power_mgmt = atoi(val);
         } else if (!strcmp(key, "rdm_policy")) {
             ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else if (!strcmp(key, "name")) {
+            name_present = true;
+            pcidev->name = strdup(val);
+            if (!pcidev->name) ret = ERROR_NOMEM;
         } else {
             XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
             ret = ERROR_INVAL;
@@ -205,7 +210,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             return ret;
     }
 
-    if (!bdf_present)
+    if (!(bdf_present ^ name_present))
         return ERROR_INVAL;
 
     return 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35510.67133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTir-0003WM-3B; Tue, 24 Nov 2020 08:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35510.67133; Tue, 24 Nov 2020 08:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiq-0003VQ-HJ; Tue, 24 Nov 2020 08:30:48 +0000
Received: by outflank-mailman (input) for mailman id 35510;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Po-6H
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006bB-KP; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH2-0001hp-W1; Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003Po-6H
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=gs+2F3wLageZmiAxkHoKR705R9GKQDZVMvtOetNICfQ=; b=Hjiq/0DGDcLu2vsgmQeappHZt
	AWqP/35H/xuzYjTUsRhQNnd0zFK8gMDWBL1LJKFBZwn8BZZGh991ou8M3p7of8xZyGR6qhoqsT5X9
	lZUqSbYPSNOqPZ+RD2kslsT6HKnSReM3iih8B0ZCJlFApZjCBbs/hDHXNl17qtQM8Ixo4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006bB-KP; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH2-0001hp-W1; Tue, 24 Nov 2020 08:02:05 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 11/23] libxl: make sure callers of libxl_device_pci_list() free the list after use
Date: Tue, 24 Nov 2020 08:01:47 +0000
Message-Id: <20201124080159.11912-12-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced libxl_device_pci_list_free() which should be used
by callers of libxl_device_pci_list() to properly dispose of the exported
'libxl_device_pci' types and free the memory holding them. Whilst all
current callers do ensure the memory is freed, only the code in xl's
pcilist() function actually calls libxl_device_pci_dispose(). As it stands
this laxity does not lead to any memory leaks, but the simple addition of
e.g. a 'string' into the idl definition of 'libxl_device_pci' would lead
to leaks.

This patch makes sure all callers of libxl_device_pci_list() can call
libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
sure these are properly disposed at the end of the operations) rather
than keeping pointers to the structures returned by libxl_device_pci_list().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_pci.c | 68 ++++++++++++++++++++++++--------------------
 tools/xl/xl_pci.c            |  3 +-
 2 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index d3c7a547c3..0f41939d1f 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1025,7 +1025,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     libxl_domid pci_domid;
 } pci_add_state;
 
@@ -1097,7 +1097,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1118,7 +1118,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1199,7 +1199,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1300,7 +1300,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1516,7 +1516,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
-    pas->pci = pci;
+
+    libxl_device_pci_copy(CTX, &pas->pci, pci);
+    pci = &pas->pci;
+
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1555,12 +1558,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pci_s;
-
-        GCNEW(pci_s);
-        libxl_device_pci_init(pci_s);
-        libxl_device_pci_copy(CTX, pci_s, pci);
-        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
         do_pci_add(egc, stubdomid, pas); /* must be last */
@@ -1619,7 +1616,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1670,7 +1667,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
@@ -1680,6 +1677,7 @@ static void device_pci_add_done(libxl__egc *egc,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -1770,7 +1768,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1812,23 +1810,26 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
+    libxl_device_pci *pcis;
+    bool attached;
     uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
-    libxl_device_pci *pci = prs->pci;
+    libxl_device_pci *pci = &prs->pci;
     int rc, num;
     uint32_t domainid = domid;
 
-    assigned = libxl_device_pci_list(ctx, domid, &num);
-    if (assigned == NULL) {
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (!pcis) {
         rc = ERROR_FAIL;
         goto out_fail;
     }
-    libxl__ptr_add(gc, assigned);
+
+    attached = is_pci_in_array(pcis, num, pci->domain,
+                               pci->bus, pci->dev, pci->func);
+    libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-    if ( !is_pci_in_array(assigned, num, pci->domain,
-                          pci->bus, pci->dev, pci->func) ) {
+    if (!attached) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1928,7 +1929,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1950,7 +1951,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2020,7 +2021,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     if (rc) goto out;
 
@@ -2075,7 +2076,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
          PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
@@ -2096,7 +2097,7 @@ static void pci_remove_detached(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2159,7 +2160,7 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, &prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
@@ -2177,7 +2178,10 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pci = pci;
+
+    libxl_device_pci_copy(CTX, &prs->pci, pci);
+    pci = &prs->pci;
+
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2212,7 +2216,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2243,6 +2247,7 @@ out:
 
     if (!rc) pci_info_xs_remove(gc, pci, "domid");
 
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -2357,7 +2362,6 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
     pcis = libxl_device_pci_list(CTX, domid, &num);
     if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2368,6 +2372,8 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
         libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
+
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 34fcf5a4fa..7c0f102ac7 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -35,9 +35,8 @@ static void pcilist(uint32_t domid)
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int main_pcilist(int argc, char **argv)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35513.67172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTit-0003dN-UA; Tue, 24 Nov 2020 08:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35513.67172; Tue, 24 Nov 2020 08:30:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTis-0003cG-No; Tue, 24 Nov 2020 08:30:50 +0000
Received: by outflank-mailman (input) for mailman id 35513;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Q3-BP
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006b7-Iv; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH4-0001hp-Hr; Tue, 24 Nov 2020 08:02:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003Q3-BP
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=FS2lsRq/N5rr0lREQKsZJxqwgY3o2rJbmA8WgHHgsnw=; b=JTGrwTSp9VPw0YAYvGLicdEOW
	m0wC2emBvO9vBiBu/Uq+DE8B1ulecN8F66OAQ0WJvMMolQmqfbQ30FWPsIELH6qGbL0qa+L18Io8n
	vunRdxK2FkiAaXc23Um8PCceajJgBoPPotH+0OU4LqYB2qJ3UVw25w6Lh8iDZiiIHyhQ4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006b7-Iv; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH4-0001hp-Hr; Tue, 24 Nov 2020 08:02:06 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 18/23] libxlu: introduce xlu_pci_parse_spec_string()
Date: Tue, 24 Nov 2020 08:01:54 +0000
Message-Id: <20201124080159.11912-19-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch largely re-writes the code that parses a PCI_SPEC_STRING, entering
it via the newly introduced function. The new parser also deals with 'bdf'
and 'vslot' as non-positional parameters, as per the documentation in
xl-pci-configuration(5).

The existing xlu_pci_parse_bdf() function remains, but now strictly parses
BDF values. Some existing callers of xlu_pci_parse_bdf() are
modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).

NOTE: Usage text in xl_cmdtable.c and error messages are also modified
      appropriately.
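A minimal sketch of the sscanf-based positional parse that the new parse_bdf()
performs (self-contained, with a hypothetical helper name; the real code also
handles the '*' function wildcard, the vslot suffix and key=value options):

```c
#include <stdio.h>

/* Parse a "[DDDD:]BB:DD.F" BDF string the way the new parser does:
 * count colons to decide whether the domain segment is present, then
 * let sscanf pull out the hex fields. Hypothetical helper, not libxlu;
 * it only handles a bare BDF with an explicit function number. */
static int demo_parse_bdf(const char *str, unsigned int out[4])
{
    unsigned int domain = 0, bus, dev, func, colons = 0;

    for (const char *p = str; *p; p++)
        if (*p == ':')
            colons++;

    switch (colons) {
    case 1: /* no domain segment, e.g. "03:00.0" */
        if (sscanf(str, "%x:%x.%x", &bus, &dev, &func) != 3)
            return -1;
        break;
    case 2: /* full form, e.g. "0000:03:00.0" */
        if (sscanf(str, "%x:%x:%x.%x", &domain, &bus, &dev, &func) != 4)
            return -1;
        break;
    default:
        return -1;
    }

    /* Same range checks as the patch: 16-bit domain, 8-bit bus,
     * 5-bit device, 3-bit function. */
    if (domain > 0xffff || bus > 0xff || dev > 0x1f || func > 7)
        return -1;

    out[0] = domain; out[1] = bus; out[2] = dev; out[3] = func;
    return 0;
}
```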

Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxlutil.h    |   8 +-
 tools/libs/util/libxlu_pci.c | 354 +++++++++++++++++++++++--------------------
 tools/xl/xl_cmdtable.c       |   4 +-
 tools/xl/xl_parse.c          |   4 +-
 tools/xl/xl_pci.c            |  37 +++--
 5 files changed, 220 insertions(+), 187 deletions(-)

diff --git a/tools/include/libxlutil.h b/tools/include/libxlutil.h
index 92e35c5462..cdd6aab4f8 100644
--- a/tools/include/libxlutil.h
+++ b/tools/include/libxlutil.h
@@ -109,9 +109,15 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
    */
 
 /*
+ * PCI BDF
+ */
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str);
+
+/*
  * PCI specification parsing
  */
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pci,
+                              const char *str);
 
 /*
  * RDM parsing
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 5c107f2642..a8b6ce5427 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -1,5 +1,7 @@
 #define _GNU_SOURCE
 
+#include <ctype.h>
+
 #include "libxlu_internal.h"
 #include "libxlu_disk_l.h"
 #include "libxlu_disk_i.h"
@@ -9,185 +11,213 @@
 #define XLU__PCI_ERR(_c, _x, _a...) \
     if((_c) && (_c)->report) fprintf((_c)->report, _x, ##_a)
 
-static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
+static int parse_bdf(libxl_pci_bdf *bdfp, uint32_t *vfunc_maskp,
+                     const char *str, const char **endp)
 {
-    unsigned long ret;
-    char *end;
-
-    ret = strtoul(str, &end, 16);
-    if ( end == str || *end != '\0' )
-        return -1;
-    if ( ret & ~mask )
-        return -1;
-    *val = (unsigned int)ret & mask;
+    const char *ptr = str;
+    unsigned int colons = 0;
+    unsigned int domain, bus, dev, func;
+    int n;
+
+    /* Count occurrences of ':' to determine presence/absence of the 'domain' */
+    while (isxdigit(*ptr) || *ptr == ':') {
+        if (*ptr == ':')
+            colons++;
+        ptr++;
+    }
+
+    ptr = str;
+    switch (colons) {
+    case 1:
+        domain = 0;
+        if (sscanf(ptr, "%x:%x.%n", &bus, &dev, &n) != 2)
+            return ERROR_INVAL;
+        break;
+    case 2:
+        if (sscanf(ptr, "%x:%x:%x.%n", &domain, &bus, &dev, &n) != 3)
+            return ERROR_INVAL;
+        break;
+    default:
+        return ERROR_INVAL;
+    }
+
+    if (domain > 0xffff || bus > 0xff || dev > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+    if (*ptr == '*') {
+        if (!vfunc_maskp)
+            return ERROR_INVAL;
+        *vfunc_maskp = LIBXL_PCI_FUNC_ALL;
+        func = 0;
+        ptr++;
+    } else {
+        if (sscanf(ptr, "%x%n", &func, &n) != 1)
+            return ERROR_INVAL;
+        if (func > 7)
+            return ERROR_INVAL;
+        if (vfunc_maskp)
+            *vfunc_maskp = 1;
+        ptr += n;
+    }
+
+    bdfp->domain = domain;
+    bdfp->bus = bus;
+    bdfp->dev = dev;
+    bdfp->func = func;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
-                           unsigned int bus, unsigned int dev,
-                           unsigned int func, unsigned int vdevfn)
+static int parse_vslot(uint32_t *vdevfnp, const char *str, const char **endp)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
-    pci->vdevfn = vdevfn;
+    const char *ptr = str;
+    unsigned int val;
+    int n;
+
+    if (sscanf(ptr, "%x%n", &val, &n) != 1)
+        return ERROR_INVAL;
+
+    if (val > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+
+    *vdevfnp = val << 3;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-#define STATE_DOMAIN    0
-#define STATE_BUS       1
-#define STATE_DEV       2
-#define STATE_FUNC      3
-#define STATE_VSLOT     4
-#define STATE_OPTIONS_K 6
-#define STATE_OPTIONS_V 7
-#define STATE_TERMINAL  8
-#define STATE_TYPE      9
-#define STATE_RDM_STRATEGY      10
-#define STATE_RESERVE_POLICY    11
-#define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
+static int parse_key_val(char **keyp, char**valp, const char *str,
+                         const char **endp)
 {
-    unsigned state = STATE_DOMAIN;
-    unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
-    char *buf2, *tok, *ptr, *end, *optkey = NULL;
+    const char *ptr = str;
+    char *key, *val;
+
+    while (*ptr != '=' && *ptr != '\0')
+        ptr++;
 
-    if ( NULL == (buf2 = ptr = strdup(str)) )
+    if (*ptr == '\0')
+        return ERROR_INVAL;
+
+    key = strndup(str, ptr - str);
+    if (!key)
         return ERROR_NOMEM;
 
-    for(tok = ptr, end = ptr + strlen(ptr) + 1; ptr < end; ptr++) {
-        switch(state) {
-        case STATE_DOMAIN:
-            if ( *ptr == ':' ) {
-                state = STATE_BUS;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dom, 0xffff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_BUS:
-            if ( *ptr == ':' ) {
-                state = STATE_DEV;
-                *ptr = '\0';
-                if ( hex_convert(tok, &bus, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }else if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( dom & ~0xff )
-                    goto parse_error;
-                bus = dom;
-                dom = 0;
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_DEV:
-            if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_FUNC:
-            if ( *ptr == '\0' || *ptr == '@' || *ptr == ',' ) {
-                switch( *ptr ) {
-                case '\0':
-                    state = STATE_TERMINAL;
-                    break;
-                case '@':
-                    state = STATE_VSLOT;
-                    break;
-                case ',':
-                    state = STATE_OPTIONS_K;
-                    break;
-                }
-                *ptr = '\0';
-                if ( !strcmp(tok, "*") ) {
-                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
-                }else{
-                    if ( hex_convert(tok, &func, 0x7) )
-                        goto parse_error;
-                    pci->vfunc_mask = (1 << 0);
-                }
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_VSLOT:
-            if ( *ptr == '\0' || *ptr == ',' ) {
-                state = ( *ptr == ',' ) ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( hex_convert(tok, &vslot, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_K:
-            if ( *ptr == '=' ) {
-                state = STATE_OPTIONS_V;
-                *ptr = '\0';
-                optkey = tok;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_V:
-            if ( *ptr == ',' || *ptr == '\0' ) {
-                state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( !strcmp(optkey, "msitranslate") ) {
-                    pci->msitranslate = atoi(tok);
-                }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pci->power_mgmt = atoi(tok);
-                }else if ( !strcmp(optkey, "permissive") ) {
-                    pci->permissive = atoi(tok);
-                }else if ( !strcmp(optkey, "seize") ) {
-                    pci->seize = atoi(tok);
-                } else if (!strcmp(optkey, "rdm_policy")) {
-                    if (!strcmp(tok, "strict")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                    } else if (!strcmp(tok, "relaxed")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                    } else {
-                        XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
-                                          " policy: 'strict' or 'relaxed'.",
-                                     tok);
-                        goto parse_error;
-                    }
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown PCI BDF option: %s", optkey);
-                }
-                tok = ptr + 1;
-            }
-        default:
-            break;
+    str = ++ptr; /* skip '=' */
+    while (*ptr != ',' && *ptr != '\0')
+        ptr++;
+
+    val = strndup(str, ptr - str);
+    if (!val) {
+        free(key);
+        return ERROR_NOMEM;
+    }
+
+    if (*ptr == ',')
+        ptr++;
+
+    *keyp = key;
+    *valp = val;
+    *endp = ptr;
+
+    return 0;
+}
+
+static int parse_rdm_policy(XLU_Config *cfg, libxl_rdm_reserve_policy *policy,
+                            const char *str)
+{
+    int ret = libxl_rdm_reserve_policy_from_string(str, policy);
+
+    if (ret)
+        XLU__PCI_ERR(cfg, "Unknown RDM policy: %s", str);
+
+    return ret;
+}
+
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str)
+{
+    return parse_bdf(bdf, NULL, str, NULL);
+}
+
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
+                              const char *str)
+{
+    const char *ptr = str;
+    bool bdf_present = false;
+    int ret;
+
+    /* Attempt to parse 'bdf' as positional parameter */
+    ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, ptr, &ptr);
+    if (!ret) {
+        bdf_present = true;
+
+        /* Check whether 'vslot' is present */
+        if (*ptr == '@') {
+            ret = parse_vslot(&pcidev->vdevfn, ++ptr, &ptr);
+            if (ret)
+                return ret;
         }
+        if (*ptr == ',')
+            ptr++;
+        else if (*ptr != '\0')
+            return ERROR_INVAL;
     }
 
-    if ( tok != ptr || state != STATE_TERMINAL )
-        goto parse_error;
+    /* Parse the rest as 'key=val' pairs */
+    while (*ptr != '\0') {
+        char *key, *val;
 
-    assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
+        ret = parse_key_val(&key, &val, ptr, &ptr);
+        if (ret)
+            return ret;
 
-    /* Just a pretty way to fill in the values */
-    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
+        if (!strcmp(key, "bdf")) {
+            ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, val, NULL);
+            bdf_present = !ret;
+        } else if (!strcmp(key, "vslot")) {
+            ret = parse_vslot(&pcidev->vdevfn, val, NULL);
+        } else if (!strcmp(key, "permissive")) {
+            pcidev->permissive = atoi(val);
+        } else if (!strcmp(key, "msitranslate")) {
+            pcidev->msitranslate = atoi(val);
+        } else if (!strcmp(key, "seize")) {
+            pcidev->seize = atoi(val);
+        } else if (!strcmp(key, "power_mgmt")) {
+            pcidev->power_mgmt = atoi(val);
+        } else if (!strcmp(key, "rdm_policy")) {
+            ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else {
+            XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
+            ret = ERROR_INVAL;
+        }
 
-    free(buf2);
+        free(key);
+        free(val);
 
-    return 0;
+        if (ret)
+            return ret;
+    }
 
-parse_error:
-    free(buf2);
-    return ERROR_INVAL;
+    if (!bdf_present)
+        return ERROR_INVAL;
+
+    return 0;
 }
 
 int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 {
+#define STATE_TYPE           0
+#define STATE_RDM_STRATEGY   1
+#define STATE_RESERVE_POLICY 2
+#define STATE_TERMINAL       3
+
     unsigned state = STATE_TYPE;
     char *buf2, *tok, *ptr, *end;
 
@@ -227,15 +257,8 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
             if (*ptr == ',' || *ptr == '\0') {
                 state = *ptr == ',' ? STATE_TYPE : STATE_TERMINAL;
                 *ptr = '\0';
-                if (!strcmp(tok, "strict")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                } else if (!strcmp(tok, "relaxed")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown RDM property policy value: %s",
-                                 tok);
+                if (!parse_rdm_policy(cfg, &rdm->policy, tok))
                     goto parse_error;
-                }
                 tok = ptr + 1;
             }
         default:
@@ -253,6 +276,11 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 parse_error:
     free(buf2);
     return ERROR_INVAL;
+
+#undef STATE_TYPE
+#undef STATE_RDM_STRATEGY
+#undef STATE_RESERVE_POLICY
+#undef STATE_TERMINAL
 }
 
 /*
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927..2ee0c49673 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -90,12 +90,12 @@ struct cmd_spec cmd_table[] = {
     { "pci-attach",
       &main_pciattach, 0, 1,
       "Insert a new pass-through pci device",
-      "<Domain> <BDF> [Virtual Slot]",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-detach",
       &main_pcidetach, 0, 1,
       "Remove a domain's pass-through pci device",
-      "<Domain> <BDF>",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-list",
       &main_pcilist, 0, 0,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 0765780d9f..6a4703e745 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1487,10 +1487,10 @@ void parse_config_data(const char *config_source,
              * the global policy by default.
              */
             pci->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pci, buf);
+            e = xlu_pci_parse_spec_string(config, pci, buf);
             if (e) {
                 fprintf(stderr,
-                        "unable to parse PCI BDF `%s' for passthrough\n",
+                        "unable to parse PCI_SPEC_STRING `%s' for passthrough\n",
                         buf);
                 exit(-e);
             }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index b6dc7c2840..9c24496cb2 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -55,7 +55,7 @@ int main_pcilist(int argc, char **argv)
     return 0;
 }
 
-static int pcidetach(uint32_t domid, const char *bdf, int force)
+static int pcidetach(uint32_t domid, const char *spec_string, int force)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -66,8 +66,9 @@ static int pcidetach(uint32_t domid, const char *bdf, int force)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-detach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
     if (force) {
@@ -89,7 +90,7 @@ int main_pcidetach(int argc, char **argv)
     uint32_t domid;
     int opt;
     int force = 0;
-    const char *bdf = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
     case 'f':
@@ -98,15 +99,15 @@ int main_pcidetach(int argc, char **argv)
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (pcidetach(domid, bdf, force))
+    if (pcidetach(domid, spec_string, force))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciattach(uint32_t domid, const char *bdf, const char *vs)
+static int pciattach(uint32_t domid, const char *spec_string)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -117,8 +118,9 @@ static int pciattach(uint32_t domid, const char *bdf, const char *vs)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-attach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
 
@@ -135,19 +137,16 @@ int main_pciattach(int argc, char **argv)
 {
     uint32_t domid;
     int opt;
-    const char *bdf = NULL, *vs = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
         /* No options */
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
-
-    if (optind + 1 < argc)
-        vs = argv[optind + 2];
+    spec_string = argv[optind + 1];
 
-    if (pciattach(domid, bdf, vs))
+    if (pciattach(domid, spec_string))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
@@ -193,8 +192,8 @@ static int pciassignable_add(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
@@ -235,8 +234,8 @@ static int pciassignable_remove(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35514.67181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiu-0003fy-H5; Tue, 24 Nov 2020 08:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35514.67181; Tue, 24 Nov 2020 08:30:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTit-0003ez-Ne; Tue, 24 Nov 2020 08:30:51 +0000
Received: by outflank-mailman (input) for mailman id 35514;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003Q8-CF
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006az-Fk; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH3-0001hp-Ex; Tue, 24 Nov 2020 08:02:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003Q8-CF
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=QgHDaVAQBuzu/wVXFL4ANbr3u6TzV6QgoeQN2pH/gEg=; b=Zx1VNNU929L1xSDx9gjZx0iUu
	FYHvlt/uy6PH7f6lx4mUQpdi1/w+OavQEJC9MP/3CJRDz6ecWFy3HQJiZPTEIdv9ULCNspnQ+55YD
	miZQapD87cICX1XoOmaOiQ9JNhXY+d2qC4M/CRybvl+LDHUw91jFJ3oE7fQjbGHLY4cYk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006az-Fk; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH3-0001hp-Ex; Tue, 24 Nov 2020 08:02:05 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 13/23] libxl: use COMPARE_PCI() macro in is_pci_in_array()...
Date: Tue, 24 Nov 2020 08:01:49 +0000
Message-Id: <20201124080159.11912-14-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... rather than an open-coded equivalent.

This patch tidies up the is_pci_in_array() function, making it take a single
'libxl_device_pci' argument rather than separate domain, bus, device and
function arguments. The already-available COMPARE_PCI() macro can then be
used, and the function is also modified to return 'bool' rather than 'int'.

The patch also modifies libxl_pci_assignable() to use is_pci_in_array()
rather than a separate open-coded equivalent, and to return 'bool' rather
than 'int'.

NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
      comparison, which should always have been the case.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_internal.h |  7 ++++---
 tools/libs/light/libxl_pci.c      | 38 +++++++++++++-------------------------
 2 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index ecee61b541..02f8a3179c 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,9 +4746,10 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->func == (b)->func &&    \
-                           (a)->bus == (b)->bus &&      \
-                           (a)->dev == (b)->dev)
+#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+                           (a)->bus == (b)->bus &&       \
+                           (a)->dev == (b)->dev &&       \
+                           (a)->func == (b)->func)
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 5a3352c2ec..e0b616fe18 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -336,24 +336,17 @@ retry_transaction2:
     return 0;
 }
 
-static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
-                           int dom, int bus, int dev, int func)
+static bool is_pci_in_array(libxl_device_pci *pcis, int num,
+                            libxl_device_pci *pci)
 {
     int i;
 
-    for(i = 0; i < num_assigned; i++) {
-        if ( assigned[i].domain != dom )
-            continue;
-        if ( assigned[i].bus != bus )
-            continue;
-        if ( assigned[i].dev != dev )
-            continue;
-        if ( assigned[i].func != func )
-            continue;
-        return 1;
+    for (i = 0; i < num; i++) {
+        if (COMPARE_PCI(pci, &pcis[i]))
+            break;
     }
 
-    return 0;
+    return i < num;
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
@@ -1487,21 +1480,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
     libxl_device_pci *pcis;
-    int num, i;
+    int num;
+    bool assignable;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    for (i = 0; i < num; i++) {
-        if (pcis[i].domain == pci->domain &&
-            pcis[i].bus == pci->bus &&
-            pcis[i].dev == pci->dev &&
-            pcis[i].func == pci->func)
-            break;
-    }
+    assignable = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_assignable_list_free(pcis, num);
-    return i != num;
+
+    return assignable;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1834,8 +1823,7 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         goto out_fail;
     }
 
-    attached = is_pci_in_array(pcis, num, pci->domain,
-                               pci->bus, pci->dev, pci->func);
+    attached = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35515.67192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiv-0003iy-Dk; Tue, 24 Nov 2020 08:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35515.67192; Tue, 24 Nov 2020 08:30:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiu-0003ho-No; Tue, 24 Nov 2020 08:30:52 +0000
Received: by outflank-mailman (input) for mailman id 35515;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003QD-IE
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006bN-PH; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH5-0001hp-9V; Tue, 24 Nov 2020 08:02:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTin-0003QD-IE
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=4uWNFHLUmZLexL69CUdt3mhLOWrq8aO99ml+uC2JZLc=; b=VbSOpiTVNdY2BNAwshKashhMc
	G/WA39HxAw2hZbB5TMirCRuo/4dHujZhd9YtzKPUn740wEreahZTqOlc9k1iDzP3MeUyiHU0xRvkr
	4YWXQvk+Kw0Vhu7PYfNCRlKQu8tiSdC1hkv6OkXLsdD8Z/R6mu5SUoJ5zqeiDZmNyJuac=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTim-0006bN-PH; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208] helo=ip-10-0-29-170.ec2.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khTH5-0001hp-9V; Tue, 24 Nov 2020 08:02:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 21/23] xl / libxl: support naming of assignable devices
Date: Tue, 24 Nov 2020 08:01:57 +0000
Message-Id: <20201124080159.11912-22-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch modifies libxl_device_pci_assignable_add() to take an optional
'name' argument, which (if supplied) is saved into xenstore and can hence be
used to refer to the now-assignable BDF in subsequent operations. To
facilitate this, a new libxl_device_pci_assignable_name2bdf() function is
added.

The xl code is modified to allow a name to be specified in the
'pci-assignable-add' operation, and to add an option to 'pci-assignable-list'
requesting that names be displayed. The latter is facilitated by a new
libxl_device_pci_assignable_bdf2name() function. Finally, xl
'pci-assignable-remove' is modified so that either a name or a BDF can be
supplied. The supplied identifier is first assumed to be a name; if
libxl_device_pci_assignable_name2bdf() fails to find a matching BDF, the
identifier itself is parsed as a BDF. Names may only include printable
characters and may not include whitespace.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v4:
 - Fix uninitialized return value in libxl_device_pci_assignable_name2bdf()
   that was discovered in CI
---
 tools/include/libxl.h                | 19 +++++++-
 tools/libs/light/libxl_pci.c         | 86 +++++++++++++++++++++++++++++++++---
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +-
 tools/xl/xl_cmdtable.c               | 12 +++--
 tools/xl/xl_pci.c                    | 84 ++++++++++++++++++++++++-----------
 5 files changed, 166 insertions(+), 38 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5703fdf367..4025d3a3d4 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -477,6 +477,14 @@
 #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
 
 /*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_NAME indicates that the
+ * libxl_device_pci_assignable_add() function takes a 'name' argument
+ * and that the libxl_device_pci_assignable_name2bdf() and
+ * libxl_device_pci_assignable_bdf2name() functions are defined.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2385,11 +2393,18 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    const char *name, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                       int rebind);
 libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name);
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf);
+
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
 int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index f9ace1faec..1da7fd5508 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -745,6 +745,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_pci_bdf *pcibdf,
+                                            const char *name,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -753,6 +754,23 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     int rc;
     struct stat st;
 
+    /* Sanitise any name that was passed */
+    if (name) {
+        unsigned int i, n = strlen(name);
+
+        if (n > 64) { /* Reasonable upper bound on name length */
+            LOG(ERROR, "Name too long");
+            return ERROR_FAIL;
+        }
+
+        for (i = 0; i < n; i++) {
+            if (!isgraph(name[i])) {
+                LOG(ERROR, "Names may only include printable characters");
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     /* Local copy for convenience */
     dom = pcibdf->domain;
     bus = pcibdf->bus;
@@ -773,7 +791,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
     if ( rc ) {
         LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto quarantine;
+        goto name;
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
@@ -804,7 +822,12 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-quarantine:
+name:
+    if (name)
+        pci_info_xs_write(gc, pcibdf, "name", name);
+    else
+        pci_info_xs_remove(gc, pcibdf, "name");
+
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -868,16 +891,18 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
         }
     }
 
+    pci_info_xs_remove(gc, pcibdf, "name");
+
     return 0;
 }
 
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
-                                    int rebind)
+                                    const char *name, int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, name, rebind);
 
     GC_FREE;
     return rc;
@@ -896,6 +921,57 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
     return rc;
 }
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name)
+{
+    GC_INIT(ctx);
+    char **bdfs;
+    libxl_pci_bdf *pcibdf = NULL;
+    unsigned int i, n;
+
+    bdfs = libxl__xs_directory(gc, XBT_NULL, PCI_INFO_PATH, &n);
+    if (!n)
+        goto out;
+
+    pcibdf = calloc(1, sizeof(*pcibdf));
+    if (!pcibdf)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        unsigned dom, bus, dev, func;
+        const char *tmp;
+
+        if (sscanf(bdfs[i], PCI_BDF_XSPATH, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        pcibdf_struct_fill(pcibdf, dom, bus, dev, func);
+
+        tmp = pci_info_xs_read(gc, pcibdf, "name");
+        if (tmp && !strcmp(tmp, name))
+            goto out;
+    }
+
+    free(pcibdf);
+    pcibdf = NULL;
+
+out:
+    GC_FREE;
+    return pcibdf;
+}
+
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf)
+{
+    GC_INIT(ctx);
+    char *name = NULL, *tmp = pci_info_xs_read(gc, pcibdf, "name");
+
+    if (tmp)
+        name = strdup(tmp);
+
+    GC_FREE;
+    return name;
+}
+
 /*
  * This function checks that all functions of a device are bound to pciback
  * driver. It also initialises a bit-mask of which function numbers are present
@@ -1560,7 +1636,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     if (rc) goto out;
 
     if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
-        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, NULL, 1);
         if ( rc )
             goto out;
     }
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2388f23869..96bb4655e0 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,8 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, NULL,
+					      c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 2ee0c49673..9e9aa448e2 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -105,21 +105,25 @@ struct cmd_spec cmd_table[] = {
     { "pci-assignable-add",
       &main_pciassignable_add, 0, 1,
       "Make a device assignable for pci-passthru",
-      "<BDF>",
+      "[options] <BDF>",
+      "-n NAME, --name=NAME    Name the assignable device.\n"
       "-h                      Print this help.\n"
     },
     { "pci-assignable-remove",
       &main_pciassignable_remove, 0, 1,
       "Remove a device from being assignable",
-      "[options] <BDF>",
+      "[options] <BDF>|NAME",
       "-h                      Print this help.\n"
       "-r                      Attempt to re-assign the device to the\n"
-      "                        original driver"
+      "                        original driver."
     },
     { "pci-assignable-list",
       &main_pciassignable_list, 0, 0,
       "List all the assignable pci devices",
-      "",
+      "[options]",
+      "-h                      Print this help.\n"
+      "-n, --show-names        Display assignable device names where\n"
+      "                        supplied.\n"
     },
     { "pause",
       &main_pause, 0, 1,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 37708b4eb1..f1b58b3976 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -152,7 +152,7 @@ int main_pciattach(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
-static void pciassignable_list(void)
+static void pciassignable_list(bool show_names)
 {
     libxl_pci_bdf *pcibdfs;
     int num, i;
@@ -162,9 +162,15 @@ static void pciassignable_list(void)
     if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
-        printf("%04x:%02x:%02x.%01x\n",
-               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
-               pcibdfs[i].func);
+        libxl_pci_bdf *pcibdf = &pcibdfs[i];
+        char *name = show_names ?
+            libxl_device_pci_assignable_bdf2name(ctx, pcibdf) : NULL;
+
+        printf("%04x:%02x:%02x.%01x %s\n",
+               pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
+               name ?: "");
+
+        free(name);
     }
     libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
@@ -172,16 +178,23 @@ static void pciassignable_list(void)
 int main_pciassignable_list(int argc, char **argv)
 {
     int opt;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-        /* No options */
+    static struct option opts[] = {
+        {"show-names", 0, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    bool show_names = false;
+
+    SWITCH_FOREACH_OPT(opt, "n", opts, "pci-assignable-list", 0) {
+    case 'n':
+        show_names = true;
+        break;
     }
 
-    pciassignable_list();
+    pciassignable_list(show_names);
     return 0;
 }
 
-static int pciassignable_add(const char *bdf, int rebind)
+static int pciassignable_add(const char *bdf, const char *name, int rebind)
 {
     libxl_pci_bdf pcibdf;
     XLU_Config *config;
@@ -197,7 +210,7 @@ static int pciassignable_add(const char *bdf, int rebind)
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, name, rebind))
         r = 1;
 
     libxl_pci_bdf_dispose(&pcibdf);
@@ -210,39 +223,58 @@ int main_pciassignable_add(int argc, char **argv)
 {
     int opt;
     const char *bdf = NULL;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-        /* No options */
+    static struct option opts[] = {
+        {"name", 1, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    const char *name = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "n:", opts, "pci-assignable-add", 0) {
+    case 'n':
+        name = optarg;
+        break;
     }
 
     bdf = argv[optind];
 
-    if (pciassignable_add(bdf, 1))
+    if (pciassignable_add(bdf, name, 1))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciassignable_remove(const char *bdf, int rebind)
+static int pciassignable_remove(const char *ident, int rebind)
 {
-    libxl_pci_bdf pcibdf;
+    libxl_pci_bdf *pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_pci_bdf_init(&pcibdf);
-
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
-        exit(2);
+    pcibdf = libxl_device_pci_assignable_name2bdf(ctx, ident);
+    if (!pcibdf) {
+        pcibdf = calloc(1, sizeof(*pcibdf));
+
+        if (!pcibdf) {
+            fprintf(stderr,
+                    "pci-assignable-remove: failed to allocate memory\n");
+            exit(2);
+        }
+
+        libxl_pci_bdf_init(pcibdf);
+        if (xlu_pci_parse_bdf(config, pcibdf, ident)) {
+            fprintf(stderr,
+                    "pci-assignable-remove: malformed BDF '%s'\n", ident);
+            exit(2);
+        }
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, pcibdf, rebind))
         r = 1;
 
-    libxl_pci_bdf_dispose(&pcibdf);
+    libxl_pci_bdf_dispose(pcibdf);
+    free(pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -251,7 +283,7 @@ static int pciassignable_remove(const char *bdf, int rebind)
 int main_pciassignable_remove(int argc, char **argv)
 {
     int opt;
-    const char *bdf = NULL;
+    const char *ident = NULL;
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
@@ -260,9 +292,9 @@ int main_pciassignable_remove(int argc, char **argv)
         break;
     }
 
-    bdf = argv[optind];
+    ident = argv[optind];
 
-    if (pciassignable_remove(bdf, rebind))
+    if (pciassignable_remove(ident, rebind))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35516.67205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiw-0003mZ-UT; Tue, 24 Nov 2020 08:30:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35516.67205; Tue, 24 Nov 2020 08:30:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiv-0003lG-T2; Tue, 24 Nov 2020 08:30:53 +0000
Received: by outflank-mailman (input) for mailman id 35516;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003QI-JO
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006bD-M6; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH4-0001hp-Ag; Tue, 24 Nov 2020 08:02:06 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 17/23] libxl: introduce 'libxl_pci_bdf' in the idl...
Date: Tue, 24 Nov 2020 08:01:53 +0000
Message-Id: <20201124080159.11912-18-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and use in 'libxl_device_pci'

This patch is preparatory work for narrowing the type passed to functions
that require only BDF information, rather than passing a 'libxl_device_pci'
structure which is only partially filled. This patch makes only the minimal
mechanical changes needed to cope with the structural change. Subsequent
patches will adjust the code to make better use of the new type.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/helpers.gen.go |  77 ++++++++++++------
 tools/golang/xenlight/types.gen.go   |   8 +-
 tools/include/libxl.h                |   6 ++
 tools/libs/light/libxl_dm.c          |   8 +-
 tools/libs/light/libxl_internal.h    |   3 +-
 tools/libs/light/libxl_pci.c         | 148 +++++++++++++++++------------------
 tools/libs/light/libxl_types.idl     |  16 ++--
 tools/libs/util/libxlu_pci.c         |   8 +-
 tools/xl/xl_pci.c                    |   6 +-
 tools/xl/xl_sxp.c                    |   4 +-
 10 files changed, 167 insertions(+), 117 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index c8605994e7..b7230f693c 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1999,6 +1999,41 @@ xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
  return nil
  }
 
+// NewPciBdf returns an instance of PciBdf initialized with defaults.
+func NewPciBdf() (*PciBdf, error) {
+var (
+x PciBdf
+xc C.libxl_pci_bdf)
+
+C.libxl_pci_bdf_init(&xc)
+defer C.libxl_pci_bdf_dispose(&xc)
+
+if err := x.fromC(&xc); err != nil {
+return nil, err }
+
+return &x, nil}
+
+func (x *PciBdf) fromC(xc *C.libxl_pci_bdf) error {
+ x.Func = byte(xc._func)
+x.Dev = byte(xc.dev)
+x.Bus = byte(xc.bus)
+x.Domain = int(xc.domain)
+
+ return nil}
+
+func (x *PciBdf) toC(xc *C.libxl_pci_bdf) (err error){defer func(){
+if err != nil{
+C.libxl_pci_bdf_dispose(xc)}
+}()
+
+xc._func = C.uint8_t(x.Func)
+xc.dev = C.uint8_t(x.Dev)
+xc.bus = C.uint8_t(x.Bus)
+xc.domain = C.int(x.Domain)
+
+ return nil
+ }
+
 // NewDevicePci returns an instance of DevicePci initialized with defaults.
 func NewDevicePci() (*DevicePci, error) {
 var (
@@ -2014,10 +2049,9 @@ return nil, err }
 return &x, nil}
 
 func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
- x.Func = byte(xc._func)
-x.Dev = byte(xc.dev)
-x.Bus = byte(xc.bus)
-x.Domain = int(xc.domain)
+ if err := x.Bdf.fromC(&xc.bdf);err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 x.Vdevfn = uint32(xc.vdevfn)
 x.VfuncMask = uint32(xc.vfunc_mask)
 x.Msitranslate = bool(xc.msitranslate)
@@ -2033,10 +2067,9 @@ if err != nil{
 C.libxl_device_pci_dispose(xc)}
 }()
 
-xc._func = C.uint8_t(x.Func)
-xc.dev = C.uint8_t(x.Dev)
-xc.bus = C.uint8_t(x.Bus)
-xc.domain = C.int(x.Domain)
+if err := x.Bdf.toC(&xc.bdf); err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 xc.vdevfn = C.uint32_t(x.Vdevfn)
 xc.vfunc_mask = C.uint32_t(x.VfuncMask)
 xc.msitranslate = C.bool(x.Msitranslate)
@@ -2766,13 +2799,13 @@ if err := x.Nics[i].fromC(&v); err != nil {
 return fmt.Errorf("converting field Nics: %v", err) }
 }
 }
-x.Pcidevs = nil
-if n := int(xc.num_pcidevs); n > 0 {
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
-x.Pcidevs = make([]DevicePci, n)
-for i, v := range cPcidevs {
-if err := x.Pcidevs[i].fromC(&v); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err) }
+x.Pcis = nil
+if n := int(xc.num_pcis); n > 0 {
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:n:n]
+x.Pcis = make([]DevicePci, n)
+for i, v := range cPcis {
+if err := x.Pcis[i].fromC(&v); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err) }
 }
 }
 x.Rdms = nil
@@ -2922,13 +2955,13 @@ return fmt.Errorf("converting field Nics: %v", err)
 }
 }
 }
-if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
-xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs)*C.sizeof_libxl_device_pci))
-xc.num_pcidevs = C.int(numPcidevs)
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
-for i,v := range x.Pcidevs {
-if err := v.toC(&cPcidevs[i]); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err)
+if numPcis := len(x.Pcis); numPcis > 0 {
+xc.pcis = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcis)*C.sizeof_libxl_device_pci))
+xc.num_pcis = C.int(numPcis)
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:numPcis:numPcis]
+for i,v := range x.Pcis {
+if err := v.toC(&cPcis[i]); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err)
 }
 }
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b4c5df0f2c..bc62ae8ce9 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -707,11 +707,15 @@ ColoCheckpointHost string
 ColoCheckpointPort string
 }
 
-type DevicePci struct {
+type PciBdf struct {
 Func byte
 Dev byte
 Bus byte
 Domain int
+}
+
+type DevicePci struct {
+Bdf PciBdf
 Vdevfn uint32
 VfuncMask uint32
 Msitranslate bool
@@ -896,7 +900,7 @@ CInfo DomainCreateInfo
 BInfo DomainBuildInfo
 Disks []DeviceDisk
 Nics []DeviceNic
-Pcidevs []DevicePci
+Pcis []DevicePci
 Rdms []DeviceRdm
 Dtdevs []DeviceDtdev
 Vfbs []DeviceVfb
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 8225809d94..5edacccbd1 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -464,6 +464,12 @@
 #define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
 
 /*
+ * LIBXL_HAVE_PCI_BDF indicates that the 'libxl_pci_bdf' type is defined
+ * and is embedded in the 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_BDF 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 8ebe1b60c9..a25bf23834 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -472,10 +472,10 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcis[i].domain;
-        bus = d_config->pcis[i].bus;
-        devfn = PCI_DEVFN(d_config->pcis[i].dev,
-                          d_config->pcis[i].func);
+        seg = d_config->pcis[i].bdf.domain;
+        bus = d_config->pcis[i].bdf.bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].bdf.dev,
+                          d_config->pcis[i].bdf.func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 02f8a3179c..da12d92209 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4746,10 +4746,11 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+#define COMPARE_BDF(a, b) ((a)->domain == (b)->domain && \
                            (a)->bus == (b)->bus &&       \
                            (a)->dev == (b)->dev &&       \
                            (a)->func == (b)->func)
+#define COMPARE_PCI(a, b) COMPARE_BDF(&((a)->bdf), &((b)->bdf))
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e0b616fe18..3cfba0e527 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,10 +29,10 @@ static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pci->domain << 16;
-    value |= (pci->bus & 0xff) << 8;
-    value |= (pci->dev & 0x1f) << 3;
-    value |= (pci->func & 0x7);
+    value = pci->bdf.domain << 16;
+    value |= (pci->bdf.bus & 0xff) << 8;
+    value |= (pci->bdf.dev & 0x1f) << 3;
+    value |= (pci->bdf.func & 0x7);
 
     return value;
 }
@@ -41,10 +41,10 @@ static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
 }
 
@@ -54,9 +54,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     if (pci->vdevfn)
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
@@ -250,8 +250,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pci->domain && bus == pci->bus &&
-            pci->dev == dev && pci->func == func) {
+        if (domain == pci->bdf.domain && bus == pci->bdf.bus &&
+            pci->bdf.dev == dev && pci->bdf.func == func) {
             break;
         }
     }
@@ -362,8 +362,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
+    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+                    pci->bdf.dev, pci->bdf.func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -383,10 +383,10 @@ static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 }
 
 
@@ -484,10 +484,10 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
+                           pci->bdf.domain,
+                           pci->bdf.bus,
+                           pci->bdf.dev,
+                           pci->bdf.func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -517,7 +517,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -525,7 +525,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -533,7 +533,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -544,7 +544,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -552,7 +552,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -560,7 +560,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -571,14 +571,14 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pci->domain, pci->bus, pci->dev, pci->func);
+                     pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -587,7 +587,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
     }
 
@@ -654,10 +654,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
+        if (dom == pci->bdf.domain
+            && bus == pci->bdf.bus
+            && dev == pci->bdf.dev
+            && func == pci->bdf.func) {
             rc = 1;
             goto out;
         }
@@ -683,8 +683,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus,
+                      pci->bdf.dev, pci->bdf.func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -747,10 +747,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
+    dom = pci->bdf.domain;
+    bus = pci->bdf.bus;
+    dev = pci->bdf.dev;
+    func = pci->bdf.func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -824,8 +824,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
-            pci->dev, pci->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+            pci->bdf.dev, pci->bdf.func);
         return ERROR_FAIL;
     }
 
@@ -914,11 +914,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigne
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pci->domain != dom )
+        if ( pci->bdf.domain != dom )
             continue;
-        if ( pci->bus != bus )
+        if ( pci->bdf.bus != bus )
             continue;
-        if ( pci->dev != dev )
+        if ( pci->bdf.dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -967,13 +967,13 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
     if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pci->domain, pci->bus, pci->dev,
-                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->bdf.domain, pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->vdevfn, pci->msitranslate,
                          pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pci->domain,  pci->bus, pci->dev,
-                         pci->func, pci->msitranslate, pci->power_mgmt);
+                         pci->bdf.domain,  pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1132,10 +1132,10 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+                           "%04x:%02x:%02x.%01x", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
                                PCI_SLOT(pci->vdevfn),
@@ -1223,7 +1223,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1314,8 +1314,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1355,8 +1355,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                                pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1527,7 +1527,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pci->domain, pci->bus, pci->dev, pci->func,
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
@@ -1545,7 +1545,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1553,7 +1553,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
-    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+    libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
@@ -1634,13 +1634,13 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
         pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pci->func);
+        pfunc_mask = (1 << pci->bdf.func);
     }
 
     for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 /* if not passing through multiple devices in a block make
@@ -1672,7 +1672,7 @@ static void device_pci_add_done(libxl__egc *egc,
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pci->domain, pci->bus, pci->dev, pci->func,
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
@@ -1741,8 +1741,8 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
-                     pci->bus, pci->dev, pci->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->bdf.domain,
+                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
@@ -1856,8 +1856,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                                     pci->bus, pci->dev, pci->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1892,8 +1892,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                               pci->bus, pci->dev, pci->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                               pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1957,7 +1957,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2026,7 +2026,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2077,7 +2077,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
+         PCI_PT_QDEV_ID, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2110,7 +2110,7 @@ static void pci_remove_detached(libxl__egc *egc,
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
-        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+        libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     }
 
     if (!isstubdom) {
@@ -2198,7 +2198,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
         }
         pci->vfunc_mask &= prs->pfunc_mask;
     } else {
-        prs->pfunc_mask = (1 << pci->func);
+        prs->pfunc_mask = (1 << pci->bdf.func);
     }
 
     rc = 0;
@@ -2226,7 +2226,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 pci->vdevfn = orig_vdev;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 20f8dd7cfa..2c441142fb 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -769,18 +769,22 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_pci_bdf = Struct("pci_bdf", [
+    ("func", uint8),
+    ("dev", uint8),
+    ("bus", uint8),
+    ("domain", integer),
+    ])
+
 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
     ("power_mgmt", bool),
     ("permissive", bool),
     ("seize", bool),
-    ("rdm_policy",      libxl_rdm_reserve_policy),
+    ("rdm_policy", libxl_rdm_reserve_policy),
     ])
 
 libxl_device_rdm = Struct("device_rdm", [
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 1d38fffce3..5c107f2642 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -27,10 +27,10 @@ static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                            unsigned int bus, unsigned int dev,
                            unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
     return 0;
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f71498cbb5..b6dc7c2840 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -34,7 +34,8 @@ static void pcilist(uint32_t domid)
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_list_free(pcis, num);
 }
@@ -163,7 +164,8 @@ static void pciassignable_list(void)
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_assignable_list_free(pcis, num);
 }
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index b03e348ffb..95180b60df 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -194,8 +194,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcis[i].domain, d_config->pcis[i].bus,
-               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].bdf.domain, d_config->pcis[i].bdf.bus,
+               d_config->pcis[i].bdf.dev, d_config->pcis[i].bdf.func,
                d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
                d_config->pcis[i].msitranslate,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 08:30:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 08:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35517.67215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTiy-0003q1-5X; Tue, 24 Nov 2020 08:30:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35517.67215; Tue, 24 Nov 2020 08:30:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khTix-0003p2-9R; Tue, 24 Nov 2020 08:30:55 +0000
Received: by outflank-mailman (input) for mailman id 35517;
 Tue, 24 Nov 2020 08:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khTin-0003QN-OY
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 08:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTim-0006bH-Nh; Tue, 24 Nov 2020 08:30:44 +0000
Received: from ec2-54-145-241-208.compute-1.amazonaws.com ([54.145.241.208]
 helo=ip-10-0-29-170.ec2.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khTH3-0001hp-8X; Tue, 24 Nov 2020 08:02:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Ni3v8rNOQnYX+FJEz4ZSjie9Y3E4/QZ6KFSYwE8qkX0=; b=Eehnorm9JIbSuVNtlLSjlNXiQ
	KCGTPg+d6prQt/3qPJFBAxqCt6LDWlfL/JdUhOrwiJLzx8gc/Y6QfmpRT+mE+kxNEnG2l03dZG+MG
	P5RJEA89CuplLUNNKAjBroHmMk9NZPgEtRniMokMgACRycs2uDLxa9+kYqJFF63P7x+II=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 12/23] libxl: add libxl_device_pci_assignable_list_free()...
Date: Tue, 24 Nov 2020 08:01:48 +0000
Message-Id: <20201124080159.11912-13-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201124080159.11912-1-paul@xen.org>
References: <20201124080159.11912-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to be used by callers of libxl_device_pci_assignable_list().

Currently there is no API for callers of libxl_device_pci_assignable_list()
to free the list. The xl function pciassignable_list() calls
libxl_device_pci_dispose() on each element of the returned list, but
libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
of libxl_device_pci_assignable_list() call libxl_device_pci_init().

This patch adds the new API function, makes sure it is used everywhere and
also modifies libxl_device_pci_assignable_list() to initialize list
entries rather than just zeroing them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl_pci.c         | 14 ++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +--
 tools/xl/xl_pci.c                    |  3 +--
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ee52d3cf7e..8225809d94 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -458,6 +458,12 @@
 #define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE indicates that the
+ * libxl_device_pci_assignable_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2369,6 +2375,7 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0f41939d1f..5a3352c2ec 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -457,7 +457,7 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         pcis = new;
         new = pcis + *num;
 
-        memset(new, 0, sizeof(*new));
+        libxl_device_pci_init(new);
         pci_struct_fill(new, dom, bus, dev, func, 0);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
@@ -472,6 +472,16 @@ out:
     return pcis;
 }
 
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    int i;
+
+    for (i = 0; i < num; i++)
+        libxl_device_pci_dispose(&list[i]);
+
+    free(list);
+}
+
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
 static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
@@ -1490,7 +1500,7 @@ static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
             pcis[i].func == pci->func)
             break;
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
     return i != num;
 }
 
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 1181971da4..352a00134d 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -894,9 +894,8 @@ value stub_xl_device_pci_assignable_list(value ctx)
 		Field(list, 1) = temp;
 		temp = list;
 		Store_field(list, 0, Val_device_pci(&c_list[i]));
-		libxl_device_pci_dispose(&c_list[i]);
 	}
-	free(c_list);
+	libxl_device_pci_assignable_list_free(c_list, nb);
 
 	CAMLreturn(list);
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 7c0f102ac7..f71498cbb5 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -164,9 +164,8 @@ static void pciassignable_list(void)
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 09:31:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 09:31:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35612.67236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUf0-0001kz-QE; Tue, 24 Nov 2020 09:30:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35612.67236; Tue, 24 Nov 2020 09:30:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUf0-0001ks-MJ; Tue, 24 Nov 2020 09:30:54 +0000
Received: by outflank-mailman (input) for mailman id 35612;
 Tue, 24 Nov 2020 09:30:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khUez-0001kn-RY
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 09:30:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25413af2-ac18-4d21-8b34-ba82f67dac10;
 Tue, 24 Nov 2020 09:30:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C181AC48;
 Tue, 24 Nov 2020 09:30:52 +0000 (UTC)
X-Inumbo-ID: 25413af2-ac18-4d21-8b34-ba82f67dac10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606210252; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+gDIb2S94xj9bvpj3ElBlsVJlCAUGjA4er3a6DmRAPU=;
	b=vS11Y6gVNpEjxyAOANUO0FYkpoEeTdTA/EXbs0hGhoBSXVemsnMBnVVBD5zzSfRzzEWiPK
	9X/VRPnE/X2JD133RGMTwotnqIfnnYcnpiFb4h0tEH3sXzn4FVI/dneoBrW8VU2eM8etpt
	FcqKR/WzNngmmqfJMjFjB4zwZ0TELzA=
Subject: Re: [PATCH v2 4/8] lib: move parse_size_and_unit()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
 <1bd906ff-0b37-07de-75ab-84a169151c2d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ca34b711-c6e1-2dac-30a0-47fd54e16715@suse.com>
Date: Tue, 24 Nov 2020 10:30:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <1bd906ff-0b37-07de-75ab-84a169151c2d@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 01:58, Andrew Cooper wrote:
> On 23/10/2020 11:17, Jan Beulich wrote:
>> ... into its own CU, to build it into an archive.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>>  xen/common/lib.c     | 39 ----------------------------------
>>  xen/lib/Makefile     |  1 +
>>  xen/lib/parse-size.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 51 insertions(+), 39 deletions(-)
>>  create mode 100644 xen/lib/parse-size.c
> 
> What is the point of turning this into a library?  It isn't a leaf
> function (calls simple_strtoull()) and doesn't have any any plausible
> way of losing all its callers in various configurations (given its
> direct use by the cmdline parsing logic).

It's still a library function. As said earlier, I think _all_
of what's now in lib.c should move to lib/. That's how it
should have been from the beginning, or stuff shouldn't have
been put in lib.c.

The one alternative I see is to move the code next to
parse_bool() / parse_boolean(), in kernel.c, or put all
parse_*() into a new common/parse.c.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 09:39:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 09:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35620.67248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUn9-00021y-Le; Tue, 24 Nov 2020 09:39:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35620.67248; Tue, 24 Nov 2020 09:39:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUn9-00021r-Ii; Tue, 24 Nov 2020 09:39:19 +0000
Received: by outflank-mailman (input) for mailman id 35620;
 Tue, 24 Nov 2020 09:39:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khUn8-00021m-Dz
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 09:39:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c92992b-3ab4-475e-8c6b-401f9849d76a;
 Tue, 24 Nov 2020 09:39:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 709B8AC77;
 Tue, 24 Nov 2020 09:39:16 +0000 (UTC)
X-Inumbo-ID: 9c92992b-3ab4-475e-8c6b-401f9849d76a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606210756; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=179VxCLqljTPNdqZEibn09l4AdKc1ZBxVDQNvr+r9LE=;
	b=mVLURiFxO9ibNImYaI30HWclEVXRQh+STEcAqMwtRc2sheYwK6eE7bjY0OHLdptYndhsAh
	fpODZUZfJ2bQD0Ztv9n0so5tn8EvNqZ4xolMzfYZpsBxSDBcq1M+c1T15IJgSK2jgE6V/k
	TMFYpsMTTTnwpokHhqYIoc4KlEwklYg=
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1d04fce7-beb4-9434-a528-b1cfdd07084b@suse.com>
Date: Tue, 24 Nov 2020 10:39:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 01:40, Andrew Cooper wrote:
> On 23/11/2020 22:49, Julien Grall wrote:
>> On 19/11/2020 10:27, Jan Beulich wrote:
>>> On 18.11.2020 19:09, Julien Grall wrote:
>>>> On 23/10/2020 11:19, Jan Beulich wrote:
>>>>> --- a/xen/include/xen/compiler.h
>>>>> +++ b/xen/include/xen/compiler.h
>>>>> @@ -12,6 +12,7 @@
>>>>>       #define inline        __inline__
>>>>>    #define always_inline __inline__ __attribute__
>>>>> ((__always_inline__))
>>>>> +#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
>>>>
>>>> bsearch() is only used by Arm and I haven't seen anyone so far
>>>> complaining about the perf of I/O emulation.
>>>>
>>>> Therefore, I am not convinced that there is enough justification to
>>>> introduce a GNU attribute just for this patch.
>>>
>>> Please settle this with Andrew: He had asked for the function to
>>> become inline. I don't view making it static inline in the header
>>> as an option here - if the compiler decides to not inline it, we
>>> should not end up with multiple instances in different CUs.
>>
>> That's the cons of static inline... but then why is it suddenly a
>> problem with this helper?
>>
>>> And
>>> without making it static inline the attribute needs adding; at
>>> least I'm unaware of an alternative which works with the various
>>> compiler versions.
>>
>> The question we have to answer is: What is the gain with this approach?
> 
> Substantial.
> 
>>
>> If it is not quantifiable, then introducing compiler specific
>> attribute is not an option.
>>
>> IIRC, there are only two callers (all in Arm code) of this function.
>> Even inlined, I don't believe you would drastically reduce the number
>> of instructions compare to a full blown version. To be generous, I
>> would say you may save ~20 instructions per copy.
>>
>> Therefore, so far, the compiler specific attribute doesn't look
>> justified to me. As usual, I am happy to be proven wrong.
> 
> There is a very good reason why this is the classic example used for
> extern inline's in various libc's.
> 
> The gains are from the compiler being able to optimise away the function
> pointer(s) entirely.  Instead of working on opaque objects, it can see
> the accesses directly, implement compares as straight array reads (for
> sorting, the swap() call turns into memcpy()), and because it can see all
> the memory accesses, doesn't have to assume that every call to cmp()
> modifies arbitrary data in the array (i.e. doesn't have to reload the
> objects from memory every iteration).
> 
> extern inline allows the compiler full flexibility to judge whether
> inlining is a net win, based on optimisation settings and observing what
> the practical memory access pattern would be from not inlining.
> 
> extern inline is the appropriate thing to use here, except for the big
> note in the GCC manual saying "always use gnu_inline in this case" which
> appears to be working around a change in the C99 standard which forces
> any non-static inline to emit a body even when it's not called, due to
> rules about global symbols.
> 
> Therefore, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks Andrew.

Julien - please clarify whether you're okay with Andrew's response,
or whether you continue to object to the conversion to inline.

> On a totally separate point,  I wonder if we'd be better off compiling
> with -fgnu89-inline because I can't see any case where we'd want the C99
> inline semantics anywhere in Xen.

I'm not sure about this, i.e. I wouldn't want to exclude such a
case appearing. I think using attributes is better in general, as
it allows fine grained control.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 09:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 09:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35628.67260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUv4-0002vt-Hh; Tue, 24 Nov 2020 09:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35628.67260; Tue, 24 Nov 2020 09:47:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUv4-0002vm-DD; Tue, 24 Nov 2020 09:47:30 +0000
Received: by outflank-mailman (input) for mailman id 35628;
 Tue, 24 Nov 2020 09:47:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khUv3-0002vh-2y
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 09:47:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44f65819-0046-4281-8cc3-522bbbf70572;
 Tue, 24 Nov 2020 09:47:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 56831AC82;
 Tue, 24 Nov 2020 09:47:27 +0000 (UTC)
X-Inumbo-ID: 44f65819-0046-4281-8cc3-522bbbf70572
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606211247; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LiSYqbycaoO6blIlW/yxLnvgHN1iPAknpNf1F9ygPtg=;
	b=BW4wk/Nyl+Hn7F8xk9n3XgDyRafCTrN0F6yluJ/8hmeXfS4I2pWdDSmaJoYftC9dlh0l8F
	j31U+l8Wjvx426X1dy/PeZWFGQKzmG+E/SPBGMINR8Xpi6rA5Lco5j0RMKSSVoxf5BqRHO
	Ruje8W2NgfVIV/3Pokl0BQxMV8RvqMk=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Julien Grall <julien@xen.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
 <6d2dae58-bfe5-596e-7850-a20ed54e1a81@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7964507a-afc8-a548-5ea1-0182b6001cb1@suse.com>
Date: Tue, 24 Nov 2020 10:47:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <6d2dae58-bfe5-596e-7850-a20ed54e1a81@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.11.2020 18:13, Julien Grall wrote:
> My view on 2) can change if Jan provides enough information into why one 
> would want NS16550 PCI enabled by default on Arm but MSI disabled.

Because, like it was on x86, initially there may be no support for
MSI? I have no idea what the plans are ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 09:50:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 09:50:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35634.67272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUxu-0003ln-01; Tue, 24 Nov 2020 09:50:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35634.67272; Tue, 24 Nov 2020 09:50:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUxt-0003lg-Sv; Tue, 24 Nov 2020 09:50:25 +0000
Received: by outflank-mailman (input) for mailman id 35634;
 Tue, 24 Nov 2020 09:50:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lBCh=E6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khUxs-0003la-02
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 09:50:24 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.55]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3f87a1b-3114-4d3f-a3c2-c707d606b104;
 Tue, 24 Nov 2020 09:50:20 +0000 (UTC)
Received: from AM4PR0302CA0019.eurprd03.prod.outlook.com (2603:10a6:205:2::32)
 by AM0PR08MB5057.eurprd08.prod.outlook.com (2603:10a6:208:165::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Tue, 24 Nov
 2020 09:50:17 +0000
Received: from AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:205:2:cafe::35) by AM4PR0302CA0019.outlook.office365.com
 (2603:10a6:205:2::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Tue, 24 Nov 2020 09:50:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT060.mail.protection.outlook.com (10.152.16.160) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 09:50:17 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Tue, 24 Nov 2020 09:50:16 +0000
Received: from 2b530c0ba4b7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 57BA6EBA-9362-4334-859D-3F4069318E72.1; 
 Tue, 24 Nov 2020 09:50:11 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2b530c0ba4b7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 09:50:11 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2375.eurprd08.prod.outlook.com (2603:10a6:4:87::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 24 Nov
 2020 09:50:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 09:50:08 +0000
X-Inumbo-ID: f3f87a1b-3114-4d3f-a3c2-c707d606b104
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Jl0Ru8BJej6YPTGhMZzE9SUlbtQpcsPBo2Wl/+Qb/IA=;
 b=6j4EpSUTtENGV4uPcSemDRYl1ujxX+5z/9ryHtgGDJQcoDpJGYRyvC11EFTISyn9R0QXHgEC7GmYdgrPfEyUq5ky2c5sgeN0f5ZsvpgDFzs1GmifQK/NbD8hkCCloIQ7OJCr0Am0B0SQ4X2dY06UsQCwSTmK2SkW99M+EZ9Wwzg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0fdd33d8015c53fb
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jg/24ergYWeAByg2LQB5pt8Hd7LbcSPv9zT+M6XtDMrgDPYqfJl6G1m1e3gfdK1YOKHFsIyE45Kt+4t5N6D7QbHO4jWiqv/VDNPLxd1gAiBrk/DAS/l7E9+Iv1nNULO5H3ZPV2lv3AheREaaRWfPNLbw+1GRP1AFJbcP7rNyzymIAwKbBj4NtG0TikflJxXdeAwVfUdI7ZhX/NHR5NrMsfSwtAS53aGFR/8ZJpdtzSiF3QmXjvnLlemCcZrnDv4cOcndRuG01fcTw5Y6Ag/3XwEWmGjpcJBvutlj5S+Scy6RlfZ2YRWW2uBwH+LsiB6+kI+T7EBTio8ijWUc180d1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Jl0Ru8BJej6YPTGhMZzE9SUlbtQpcsPBo2Wl/+Qb/IA=;
 b=ekLsG88iUfc7auaz9L6z7Wsrgszcw1jajkhX8H+5cl5GxkxqngBJLKsAtkdeRI/QKNjVbz3L0J9bZVzehmxeL4h9Nv261cUMw6YZ5vGiE3AAS09lpdsv2LI3Zg8KUYAYDDB00eZh8bAgDtskY37YuE5IpJxGW61P1TDaj4IV1JutGMX5ewet7CrKM3675xqIY48XxNiWG7T5XcaJqO0K9ycfsfuWiT+X6XzAJgiRbe5gqJQdMlW1OXrZuY94KQvophBBaAO0rB8+emfZdeGLJZLAQE8LgtL3paeO9bLbu+gDVqDizHSiRahZ5XPYeuOoph5r4wS9oP26UR5WyFf4OQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/3] ns16550: move PCI arrays next to the function
 using them
Thread-Topic: [PATCH v2 1/3] ns16550: move PCI arrays next to the function
 using them
Thread-Index: AQHWwZqgfCAQQ/ir8kuoacesfXkcZKnWZvIAgACkOoA=
Date: Tue, 24 Nov 2020 09:50:08 +0000
Message-ID: <C8F87C4F-C907-40C8-B774-BB1483944A34@arm.com>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
 <b47b5557-ad67-5bf4-45ce-c305ee5da977@suse.com>
 <alpine.DEB.2.21.2011231602071.7979@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011231602071.7979@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: bc356e9f-7850-4ff6-cc7f-08d8905e594b
x-ms-traffictypediagnostic: DB6PR0802MB2375:|AM0PR08MB5057:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB505747C3AF9ADBF3AC708BB8FCFB0@AM0PR08MB5057.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8422561C662AA748BD50BE481AD334C1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2375
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b059933c-b8ee-4a08-3531-08d8905e5436
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 09:50:17.1470
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bc356e9f-7850-4ff6-cc7f-08d8905e594b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5057

Hello,

> On 24 Nov 2020, at 12:02 am, Stefano Stabellini <sstabellini@kernel.org> =
wrote:
>=20
> On Mon, 23 Nov 2020, Jan Beulich wrote:
>> Pure code motion; no functional change intended.
>>=20
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>=20
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>=20
Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
>=20
>> ---
>> v2: New.
>>=20
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -153,312 +153,6 @@ struct ns16550_config_param {
>>     unsigned int uart_offset;
>>     unsigned int first_offset;
>> };
>> -
>> -/*
>> - * Create lookup tables for specific devices. It is assumed that if
>> - * the device found is MMIO, then you have indexed it here. Else, the
>> - * driver does nothing for MMIO based devices.
>> - */
>> -static const struct ns16550_config_param __initconst uart_param[] =3D {
>> -    [param_default] =3D {
>> -        .reg_width =3D 1,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .max_ports =3D 1,
>> -    },
>> -    [param_trumanage] =3D {
>> -        .reg_shift =3D 2,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D (UART_LSR_THRE | UART_LSR_TEMT),
>> -        .mmio =3D 1,
>> -        .max_ports =3D 1,
>> -    },
>> -    [param_oxford] =3D {
>> -        .base_baud =3D 4000000,
>> -        .uart_offset =3D 0x200,
>> -        .first_offset =3D 0x1000,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .mmio =3D 1,
>> -        .max_ports =3D 1, /* It can do more, but we would need more cus=
tom code.*/
>> -    },
>> -    [param_oxford_2port] =3D {
>> -        .base_baud =3D 4000000,
>> -        .uart_offset =3D 0x200,
>> -        .first_offset =3D 0x1000,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .mmio =3D 1,
>> -        .max_ports =3D 2,
>> -    },
>> -    [param_pericom_1port] =3D {
>> -        .base_baud =3D 921600,
>> -        .uart_offset =3D 8,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .bar0 =3D 1,
>> -        .max_ports =3D 1,
>> -    },
>> -    [param_pericom_2port] =3D {
>> -        .base_baud =3D 921600,
>> -        .uart_offset =3D 8,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .bar0 =3D 1,
>> -        .max_ports =3D 2,
>> -    },
>> -    /*
>> -     * Of the two following ones, we can't really use all of their port=
s,
>> -     * unless ns16550_com[] would get grown.
>> -     */
>> -    [param_pericom_4port] =3D {
>> -        .base_baud =3D 921600,
>> -        .uart_offset =3D 8,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .bar0 =3D 1,
>> -        .max_ports =3D 4,
>> -    },
>> -    [param_pericom_8port] =3D {
>> -        .base_baud =3D 921600,
>> -        .uart_offset =3D 8,
>> -        .reg_width =3D 1,
>> -        .fifo_size =3D 16,
>> -        .lsr_mask =3D UART_LSR_THRE,
>> -        .bar0 =3D 1,
>> -        .max_ports =3D 8,
>> -    }
>> -};
>> -static const struct ns16550_config __initconst uart_config[] =3D
>> -{
>> -    /* Broadcom TruManage device */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_BROADCOM,
>> -        .dev_id =3D 0x160a,
>> -        .param =3D param_trumanage,
>> -    },
>> -    /* OXPCIe952 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc11b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe952 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc11f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe952 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc138,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe952 2 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc158,
>> -        .param =3D param_oxford_2port,
>> -    },
>> -    /* OXPCIe952 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc13d,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe952 2 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc15d,
>> -        .param =3D param_oxford_2port,
>> -    },
>> -    /* OXPCIe952 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc40b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc40f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc41b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc41f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc42b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc42f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc43b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc43f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc44b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc44f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc45b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc45f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc46b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc46f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc47b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc47f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc48b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc48f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc49b,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc49f,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc4ab,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc4af,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc4bb,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc4bf,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc4cb,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* OXPCIe200 1 Native UART  */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> -        .dev_id =3D 0xc4cf,
>> -        .param =3D param_oxford,
>> -    },
>> -    /* Pericom PI7C9X7951 Uno UART */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_PERICOM,
>> -        .dev_id =3D 0x7951,
>> -        .param =3D param_pericom_1port
>> -    },
>> -    /* Pericom PI7C9X7952 Duo UART */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_PERICOM,
>> -        .dev_id =3D 0x7952,
>> -        .param =3D param_pericom_2port
>> -    },
>> -    /* Pericom PI7C9X7954 Quad UART */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_PERICOM,
>> -        .dev_id =3D 0x7954,
>> -        .param =3D param_pericom_4port
>> -    },
>> -    /* Pericom PI7C9X7958 Octal UART */
>> -    {
>> -        .vendor_id =3D PCI_VENDOR_ID_PERICOM,
>> -        .dev_id =3D 0x7958,
>> -        .param =3D param_pericom_8port
>> -    }
>> -};
>> #endif
>>=20
>> static void ns16550_delayed_resume(void *data);
>> @@ -1045,6 +739,314 @@ static int __init check_existence(struct
>> }
>>=20
>> #ifdef CONFIG_HAS_PCI
>> +
>> +/*
>> + * Create lookup tables for specific devices. It is assumed that if
>> + * the device found is MMIO, then you have indexed it here. Else, the
>> + * driver does nothing for MMIO based devices.
>> + */
>> +static const struct ns16550_config_param __initconst uart_param[] =3D {
>> +    [param_default] =3D {
>> +        .reg_width =3D 1,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .max_ports =3D 1,
>> +    },
>> +    [param_trumanage] =3D {
>> +        .reg_shift =3D 2,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D (UART_LSR_THRE | UART_LSR_TEMT),
>> +        .mmio =3D 1,
>> +        .max_ports =3D 1,
>> +    },
>> +    [param_oxford] =3D {
>> +        .base_baud =3D 4000000,
>> +        .uart_offset =3D 0x200,
>> +        .first_offset =3D 0x1000,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .mmio =3D 1,
>> +        .max_ports =3D 1, /* It can do more, but we would need more cus=
tom code.*/
>> +    },
>> +    [param_oxford_2port] =3D {
>> +        .base_baud =3D 4000000,
>> +        .uart_offset =3D 0x200,
>> +        .first_offset =3D 0x1000,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .mmio =3D 1,
>> +        .max_ports =3D 2,
>> +    },
>> +    [param_pericom_1port] =3D {
>> +        .base_baud =3D 921600,
>> +        .uart_offset =3D 8,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .bar0 =3D 1,
>> +        .max_ports =3D 1,
>> +    },
>> +    [param_pericom_2port] =3D {
>> +        .base_baud =3D 921600,
>> +        .uart_offset =3D 8,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .bar0 =3D 1,
>> +        .max_ports =3D 2,
>> +    },
>> +    /*
>> +     * Of the two following ones, we can't really use all of their port=
s,
>> +     * unless ns16550_com[] would get grown.
>> +     */
>> +    [param_pericom_4port] =3D {
>> +        .base_baud =3D 921600,
>> +        .uart_offset =3D 8,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .bar0 =3D 1,
>> +        .max_ports =3D 4,
>> +    },
>> +    [param_pericom_8port] =3D {
>> +        .base_baud =3D 921600,
>> +        .uart_offset =3D 8,
>> +        .reg_width =3D 1,
>> +        .fifo_size =3D 16,
>> +        .lsr_mask =3D UART_LSR_THRE,
>> +        .bar0 =3D 1,
>> +        .max_ports =3D 8,
>> +    }
>> +};
>> +
>> +static const struct ns16550_config __initconst uart_config[] =3D
>> +{
>> +    /* Broadcom TruManage device */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_BROADCOM,
>> +        .dev_id =3D 0x160a,
>> +        .param =3D param_trumanage,
>> +    },
>> +    /* OXPCIe952 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc11b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe952 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc11f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe952 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc138,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe952 2 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc158,
>> +        .param =3D param_oxford_2port,
>> +    },
>> +    /* OXPCIe952 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc13d,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe952 2 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc15d,
>> +        .param =3D param_oxford_2port,
>> +    },
>> +    /* OXPCIe952 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc40b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc40f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc41b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc41f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc42b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc42f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc43b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc43f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc44b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc44f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc45b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc45f,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id =3D PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id =3D 0xc46b,
>> +        .param =3D param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc46f,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc47b,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc47f,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc48b,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc48f,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc49b,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc49f,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc4ab,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc4af,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc4bb,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc4bf,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc4cb,
>> +        .param = param_oxford,
>> +    },
>> +    /* OXPCIe200 1 Native UART  */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_OXSEMI,
>> +        .dev_id = 0xc4cf,
>> +        .param = param_oxford,
>> +    },
>> +    /* Pericom PI7C9X7951 Uno UART */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
>> +        .dev_id = 0x7951,
>> +        .param = param_pericom_1port
>> +    },
>> +    /* Pericom PI7C9X7952 Duo UART */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
>> +        .dev_id = 0x7952,
>> +        .param = param_pericom_2port
>> +    },
>> +    /* Pericom PI7C9X7954 Quad UART */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
>> +        .dev_id = 0x7954,
>> +        .param = param_pericom_4port
>> +    },
>> +    /* Pericom PI7C9X7958 Octal UART */
>> +    {
>> +        .vendor_id = PCI_VENDOR_ID_PERICOM,
>> +        .dev_id = 0x7958,
>> +        .param = param_pericom_8port
>> +    }
>> +};
>> +
>> static int __init
>> pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>> {
>> @@ -1211,7 +1213,8 @@ pci_uart_config(struct ns16550 *uart, bo
>> 
>>     return 0;
>> }
>> -#endif
>> +
>> +#endif /* CONFIG_HAS_PCI */
>> 
>> /*
>>  * Used to parse name value pairs and return which value it is along with
>> 



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 09:51:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 09:51:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35645.67284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUyb-0003tp-Eq; Tue, 24 Nov 2020 09:51:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35645.67284; Tue, 24 Nov 2020 09:51:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUyb-0003ti-Bd; Tue, 24 Nov 2020 09:51:09 +0000
Received: by outflank-mailman (input) for mailman id 35645;
 Tue, 24 Nov 2020 09:51:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lBCh=E6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khUya-0003tY-2Y
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 09:51:08 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.47]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b4334fb-e58e-4aea-be68-ab876c462f93;
 Tue, 24 Nov 2020 09:51:06 +0000 (UTC)
Received: from AM5PR0601CA0081.eurprd06.prod.outlook.com (2603:10a6:206::46)
 by VI1PR08MB3967.eurprd08.prod.outlook.com (2603:10a6:803:df::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Tue, 24 Nov
 2020 09:51:04 +0000
Received: from AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::31) by AM5PR0601CA0081.outlook.office365.com
 (2603:10a6:206::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 24 Nov 2020 09:51:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT006.mail.protection.outlook.com (10.152.16.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 09:51:03 +0000
Received: ("Tessian outbound 39167997cde8:v71");
 Tue, 24 Nov 2020 09:51:02 +0000
Received: from 8599586636a8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 81FA1A10-8465-4940-B127-D87B4CE47541.1; 
 Tue, 24 Nov 2020 09:50:56 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8599586636a8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 09:50:56 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2375.eurprd08.prod.outlook.com (2603:10a6:4:87::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 24 Nov
 2020 09:50:55 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 09:50:55 +0000
X-Inumbo-ID: 0b4334fb-e58e-4aea-be68-ab876c462f93
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GZFaB2/bXfxcggfsHvRMUSk/hZxdtuFFLn0thbD2pBk=;
 b=vvLKxIK0iNUCHBTdD4fWBxRz6XISTLFeW5obdDIBySQSbETaS2/jvbP5RxujV5QdcDe33T+bEEsb9QpkzC7v4vyLP1daZiksdhqDr8iXWLezJRW6Ec45RuCa2UdM0K1BSLEl9qWW+rzH3oCufayyr7FjT12j4GpmoMuqTGWgTQc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 935a50edbdcfc374
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JYHx4qkIBnYHL76NY0JpGVqi0juEQ1kmsHyWcYZeuJG234fbe+dqZmaITRs9eKtOJIY8fBTxirBQgkX/+c3BvPL0GiDKMKwczJqPAXNn+DAOELarTR/pqBwEqzWC/6CHljLj/1N3sYrUXhgwvD9hRJ99TisGtM1JOVfi0aKhAWmcL4pMQ2vKgAeZz8DbcLQP1wfRjuQrnEhMP2o0+v8tr5k4uDpbs4oUE91nTyfTnPF9nNdujuIly2lJmWNeuRQT7eZ7vZIrJc5mHjDvFmLHGOSY+BaJRf3Q70LbVTFAzxRDtO0AB3c1VSfWC40PlwH2ZmFwEOKy/0kruSzV0lcxjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GZFaB2/bXfxcggfsHvRMUSk/hZxdtuFFLn0thbD2pBk=;
 b=nkL6l/5nnNzqo6nbGJu4lBlwXbadKWXCLDNpe2Bo2fGfDX0bQy4puQbFKMOraiqbMZ+r+OCT4xobfr1J6cDCkWywsQ5uVCyEk9yN4r9yRIAnM1XSc5DFxop9ZnwoyztT9wlTmFgT3uRaskC2oHYzInxUKNGwe+ALoZcwl1pzyQuIJ5l5YMAgnox9HLw92LAlzlpYrphPuQ6CCSyF/pi+2dECIpdhvWNPCjgiqlL6vObDDgEAomnxixrStwe67NDQMVTpfx0GkkJzat6oW7fDJ3qMfxsLsM4jEq13JgDmgYaqQTzRDcGTwYSdalv8ZDj7X7xIjVioSmRpt/NKVOhspA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/3] ns16550: "com<N>=" command line options are
 x86-specific
Thread-Topic: [PATCH v2 2/3] ns16550: "com<N>=" command line options are
 x86-specific
Thread-Index: AQHWwZqpPJPxxO+xfkyOkLGUgNSyN6nWaXmAgACh7IA=
Date: Tue, 24 Nov 2020 09:50:55 +0000
Message-ID: <1709861D-749B-4FAE-8948-28F6C6904DBB@arm.com>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
 <bfa07fc2-9151-402f-3b73-dedf8280cb66@suse.com>
 <alpine.DEB.2.21.2011231608120.7979@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011231608120.7979@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8aad7919-352c-43d7-b47a-08d8905e751a
x-ms-traffictypediagnostic: DB6PR0802MB2375:|VI1PR08MB3967:
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8D8478678CCDA14FBB65E9FC69B5ADE5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2375
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4e1486b2-2be3-4879-a4fc-08d8905e7042
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 09:51:03.8305
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8aad7919-352c-43d7-b47a-08d8905e751a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3967

Hello Jan,

> On 24 Nov 2020, at 12:11 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Mon, 23 Nov 2020, Jan Beulich wrote:
>> Pure code motion (plus the addition of "#ifdef CONFIG_X86"); no
>> functional change intended.
>> 
>> Reported-by: Julien Grall <julien@xen.org>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Great cleanup
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
> 
> 
>> ---
>> v2: Re-base over new earlier patch.
>> 
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -318,8 +318,8 @@ Interrupts.  Specifying zero disables CM
>> Flag to indicate whether to probe for a CMOS Real Time Clock irrespective of
>> ACPI indicating none to be there.
>> 
>> -### com1
>> -### com2
>> +### com1 (x86)
>> +### com2 (x86)
>>> `= <baud>[/<base-baud>][,[DPS][,[<io-base>|pci|amt][,[<irq>|msi][,[<port-bdf>][,[<bridge-bdf>]]]]]]`
>> 
>> Both option `com1` and `com2` follow the same format.
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -31,38 +31,6 @@
>> #include <asm/fixmap.h>
>> #endif
>> 
>> -/*
>> - * Configure serial port with a string:
>> - *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
>> - * The tail of the string can be omitted if platform defaults are sufficient.
>> - * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
>> - * can be specified in place of a numeric baud rate. Polled mode is specified
>> - * by requesting irq 0.
>> - */
>> -static char __initdata opt_com1[128] = "";
>> -static char __initdata opt_com2[128] = "";
>> -string_param("com1", opt_com1);
>> -string_param("com2", opt_com2);
>> -
>> -enum serial_param_type {
>> -    baud,
>> -    clock_hz,
>> -    data_bits,
>> -    io_base,
>> -    irq,
>> -    parity,
>> -    reg_shift,
>> -    reg_width,
>> -    stop_bits,
>> -#ifdef CONFIG_HAS_PCI
>> -    bridge_bdf,
>> -    device,
>> -    port_bdf,
>> -#endif
>> -    /* List all parameters before this line. */
>> -    num_serial_params
>> -};
>> -
>> static struct ns16550 {
>>     int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
>>     u64 io_base;   /* I/O port or memory-mapped I/O address. */
>> @@ -98,32 +66,6 @@ static struct ns16550 {
>> #endif
>> } ns16550_com[2] = { { 0 } };
>> 
>> -struct serial_param_var {
>> -    char name[12];
>> -    enum serial_param_type type;
>> -};
>> -
>> -/*
>> - * Enum struct keeping a table of all accepted parameter names for parsing
>> - * com_console_options for serial port com1 and com2.
>> - */
>> -static const struct serial_param_var __initconst sp_vars[] = {
>> -    {"baud", baud},
>> -    {"clock-hz", clock_hz},
>> -    {"data-bits", data_bits},
>> -    {"io-base", io_base},
>> -    {"irq", irq},
>> -    {"parity", parity},
>> -    {"reg-shift", reg_shift},
>> -    {"reg-width", reg_width},
>> -    {"stop-bits", stop_bits},
>> -#ifdef CONFIG_HAS_PCI
>> -    {"bridge", bridge_bdf},
>> -    {"dev", device},
>> -    {"port", port_bdf},
>> -#endif
>> -};
>> -
>> #ifdef CONFIG_HAS_PCI
>> struct ns16550_config {
>>     u16 vendor_id;
>> @@ -674,6 +616,19 @@ static struct uart_driver __read_mostly
>> #endif
>> };
>> 
>> +static void ns16550_init_common(struct ns16550 *uart)
>> +{
>> +    uart->clock_hz  = UART_CLOCK_HZ;
>> +
>> +    /* Default is no transmit FIFO. */
>> +    uart->fifo_size = 1;
>> +
>> +    /* Default lsr_mask = UART_LSR_THRE */
>> +    uart->lsr_mask  = UART_LSR_THRE;
>> +}
>> +
>> +#ifdef CONFIG_X86
>> +
>> static int __init parse_parity_char(int c)
>> {
>>     switch ( c )
>> @@ -1217,6 +1172,64 @@ pci_uart_config(struct ns16550 *uart, bo
>> #endif /* CONFIG_HAS_PCI */
>> 
>> /*
>> + * Configure serial port with a string:
>> + *   <baud>[/<base_baud>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]]].
>> + * The tail of the string can be omitted if platform defaults are sufficient.
>> + * If the baud rate is pre-configured, perhaps by a bootloader, then 'auto'
>> + * can be specified in place of a numeric baud rate. Polled mode is specified
>> + * by requesting irq 0.
>> + */
>> +static char __initdata opt_com1[128] = "";
>> +static char __initdata opt_com2[128] = "";
>> +string_param("com1", opt_com1);
>> +string_param("com2", opt_com2);
>> +
>> +enum serial_param_type {
>> +    baud,
>> +    clock_hz,
>> +    data_bits,
>> +    io_base,
>> +    irq,
>> +    parity,
>> +    reg_shift,
>> +    reg_width,
>> +    stop_bits,
>> +#ifdef CONFIG_HAS_PCI
>> +    bridge_bdf,
>> +    device,
>> +    port_bdf,
>> +#endif
>> +    /* List all parameters before this line. */
>> +    num_serial_params
>> +};
>> +
>> +struct serial_param_var {
>> +    char name[12];
>> +    enum serial_param_type type;
>> +};
>> +
>> +/*
>> + * Enum struct keeping a table of all accepted parameter names for parsing
>> + * com_console_options for serial port com1 and com2.
>> + */
>> +static const struct serial_param_var __initconst sp_vars[] = {
>> +    {"baud", baud},
>> +    {"clock-hz", clock_hz},
>> +    {"data-bits", data_bits},
>> +    {"io-base", io_base},
>> +    {"irq", irq},
>> +    {"parity", parity},
>> +    {"reg-shift", reg_shift},
>> +    {"reg-width", reg_width},
>> +    {"stop-bits", stop_bits},
>> +#ifdef CONFIG_HAS_PCI
>> +    {"bridge", bridge_bdf},
>> +    {"dev", device},
>> +    {"port", port_bdf},
>> +#endif
>> +};
>> +
>> +/*
>>  * Used to parse name value pairs and return which value it is along with
>>  * pointer for the extracted value.
>>  */
>> @@ -1504,17 +1517,6 @@ static void __init ns16550_parse_port_co
>>     serial_register_uart(uart - ns16550_com, &ns16550_driver, uart);
>> }
>> 
>> -static void ns16550_init_common(struct ns16550 *uart)
>> -{
>> -    uart->clock_hz  = UART_CLOCK_HZ;
>> -
>> -    /* Default is no transmit FIFO. */
>> -    uart->fifo_size = 1;
>> -
>> -    /* Default lsr_mask = UART_LSR_THRE */
>> -    uart->lsr_mask  = UART_LSR_THRE;
>> -}
>> -
>> void __init ns16550_init(int index, struct ns16550_defaults *defaults)
>> {
>>     struct ns16550 *uart;
>> @@ -1541,6 +1543,8 @@ void __init ns16550_init(int index, stru
>>     ns16550_parse_port_config(uart, (index == 0) ? opt_com1 : opt_com2);
>> }
>> 
>> +#endif /* CONFIG_X86 */
>> 
>> #ifdef CONFIG_HAS_DEVICE_TREE
>> static int __init ns16550_uart_dt_init(struct dt_device_node *dev,
>>                                        const void *data)
>> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 09:52:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 09:52:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35659.67295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUzl-00042Y-Qb; Tue, 24 Nov 2020 09:52:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35659.67295; Tue, 24 Nov 2020 09:52:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khUzl-00042R-Nb; Tue, 24 Nov 2020 09:52:21 +0000
Received: by outflank-mailman (input) for mailman id 35659;
 Tue, 24 Nov 2020 09:52:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lBCh=E6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khUzk-00042K-9H
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 09:52:20 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.55]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce8ca9d7-6457-47b1-8dcc-c3eecf4b5176;
 Tue, 24 Nov 2020 09:52:19 +0000 (UTC)
Received: from AM5PR0201CA0001.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::11) by AM0PR08MB3634.eurprd08.prod.outlook.com
 (2603:10a6:208:d6::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.24; Tue, 24 Nov
 2020 09:52:16 +0000
Received: from AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::d2) by AM5PR0201CA0001.outlook.office365.com
 (2603:10a6:203:3d::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 24 Nov 2020 09:52:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT004.mail.protection.outlook.com (10.152.16.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 09:52:16 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Tue, 24 Nov 2020 09:52:16 +0000
Received: from 2e9b45130cab.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1C56E1D7-2A1C-44AA-B574-C98B8747D074.1; 
 Tue, 24 Nov 2020 09:52:10 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2e9b45130cab.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 09:52:10 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DB6PR0802MB2375.eurprd08.prod.outlook.com (2603:10a6:4:87::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.28; Tue, 24 Nov
 2020 09:52:08 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 09:52:08 +0000
X-Inumbo-ID: ce8ca9d7-6457-47b1-8dcc-c3eecf4b5176
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OYMspa/OlGi/l4ukMEcDAxi0KoqVji6ybi+Ak4X8Tuo=;
 b=zzSyWNRgSBMWAbJFNwvb0Qj8o+1l/vx6LFtPE1ml44rFfBb5H5T7pvOP9Sa2Xc8gknlRhZnORW2osJxvx47tJHhXDm1tYrslDmcO9a4LIcgjDJObA2dy9LnCJozMQAGHulN/r1e3u65gjhj+tMO3d/mtQiUMQHGD8ItNMOsaHw0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f21d1071c5ff9716
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SJEzzh+CIlMUSUZRVM1pjRBp5aEvBjMBBHnsnnSoIGhyTfNdJJ/vi2caJn0nypzV9eA771ZWyi72jfjL3H19I1okiVfjqj9uhXviNn2xravYPDN2CGxS7S/k7atpp6CjrpbJL/MwHB8xyKmv1bYpsjbkXh+NJGstJBP5sHT/GZJoR8pg/PM28vwJkwMbt1K8U43RNXyGupjIgbC2ohd4afQg/6w6BLAc/6t2pZfuLtjKsIdjrymPD0YG2goqslEAIIohx0QGiWffRhIF0zrENQy1xNvyzFuea/fwPaoc4kun8RDgyolZbF3F/zemoPnxU7D7MPyxflEEqOF0hj299w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OYMspa/OlGi/l4ukMEcDAxi0KoqVji6ybi+Ak4X8Tuo=;
 b=MwXHtgIzhxsOjUZt1FQ4NQAUR+a9vFyNLA8fa2pS2vdurL0ZBSxqgJJmk6KiZt1pLCDGgs+98uuKBvjjk/euKOvfshaOdCV2pR+cd4hXJ6NbFQ+5JUiqfqdnOj9BRjxOP/mXwra3kyr3p0hVV+wOnPndz7+Fradt9NtpSQjwuGw2HpG/FVJOSvnTKvwR9xL/KU3E9bSOEtUqhvnX5KRFfdIaUvt4zRwg2Lz0Byw4DQ0xN1lb111na9U965h3+sgH7oLN0hquid9zMnhPBCKcq3ITgz4XKQ8ZC1XD+3NydgXCV/rfHPoco1SJgRI4B4VSMX9/2zSVKnlXknh51PCgpA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 3/3] ns16550: drop stray "#ifdef CONFIG_HAS_PCI"
Thread-Topic: [PATCH v2 3/3] ns16550: drop stray "#ifdef CONFIG_HAS_PCI"
Thread-Index: AQHWwZq/qJpu10nDFkuzrG8rOtqGVanWaYOAgACiOQA=
Date: Tue, 24 Nov 2020 09:52:08 +0000
Message-ID: <C88DD0E4-47DC-4D20-8FA9-6ED4502EE47F@arm.com>
References: <96115b2b-c104-e566-2368-6a2439d2c988@suse.com>
 <c5cf7b83-9948-dd87-dfe0-40d36df0db70@suse.com>
 <alpine.DEB.2.21.2011231610110.7979@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011231610110.7979@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9b6fc64e-54e6-43c2-2c1c-08d8905ea05b
x-ms-traffictypediagnostic: DB6PR0802MB2375:|AM0PR08MB3634:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3634035A874821DA25C49135FCFB0@AM0PR08MB3634.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2803;OLM:2803;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 vZGBll6CZn2t8rXzlXU0iwhOuqM0Z1cwW2WJIumoVEtbgAqmS5QtwKNAnYLVJNwfKvfKJ3fC2fjvkTLUVi16v7m84nPOXyVTX1jhsDfN2xCUYF9SdnEue/LnbTkC5aCsSupOQg8HmXnEIhQVE4Mg/AgkRjqykSoI7ROC9ayRg9x17uQO/UPCPabp84mwqOQHRjy2Nia9PDXsl874Au6VmnFueWukyCdm+ruelICIoFIGc/KtivrrypwBvQATkLry73r8/uV59H64/RSLZTJHxD6U0E0HAyoB1q8fTobKJ6AZOyn9LQBwVP+WVyJWcile2M0tpsc1Y+3mCxXqoIbcHQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(136003)(346002)(39860400002)(376002)(71200400001)(4326008)(26005)(33656002)(6506007)(53546011)(186003)(6486002)(36756003)(5660300002)(2906002)(8676002)(6916009)(6512007)(8936002)(76116006)(2616005)(66446008)(4744005)(66476007)(54906003)(316002)(86362001)(91956017)(66556008)(66946007)(478600001)(64756008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?iPGa9WmDQBupjgHWza5Fh+DIBuDEpaaMGerjV+MMR/yV5HltVYkp5Y0lA5Jy?=
 =?us-ascii?Q?AMNS924ZwUi6smnb8QQS5FSoJboyTwKTxOUXEIWPhARUH9FQ0s0iob76rCu3?=
 =?us-ascii?Q?XO4o2qsXkQi1uMfRz3g588YWecg3A477f1J1hpyLjQUHfVeXA8e3YDLopbCJ?=
 =?us-ascii?Q?Kn8iKtif84ztuRgXRUm/b91HQBWQC4aoElutES4evRbfZX0e9lU08rAUbTL5?=
 =?us-ascii?Q?KlljvZPW5Ds3ScUvIW/2xX35VicKToP9NXT6jOp2JJMPcOgbG/g85iQVlCsG?=
 =?us-ascii?Q?AH8tVb5m0ITlVh7SIRaclq+tREo+bd3KEliRYqGh21W5661SKSI4Vt+ZwvWb?=
 =?us-ascii?Q?d3GBo6bI2r/+bPt10Z5HjXHkOo17WnjtQc50eC1NmnskAQP+plLu5aX9JyqD?=
 =?us-ascii?Q?odQgSmjRyXUIzwoNeR+iVyhUAIlcRD7dT+v9RM0ar++NCQiHEjZfxPm6MGrJ?=
 =?us-ascii?Q?w0Y0u7I6uBgcuSjaIDDFw8BL0tTP8BoNqjsSZXBcsjf5E3m9977lH9ebPS/I?=
 =?us-ascii?Q?5yyvv2ecFHI/ues185qh1I3cgtT2op7FgwIJEo0bQ/PYGqS5i66HlxDp+RG5?=
 =?us-ascii?Q?Q/Q4l/g6tn3Y8FBCA9TQ6XAuR/HY8slKkil9DyvNIwnz7gI/c7yNiVwR0sMP?=
 =?us-ascii?Q?v2fLbKcrMdQPkhP9KaJuGjCZAJT1JD/GXgZa0IPghZBP6icVgCEw8XS3nd6I?=
 =?us-ascii?Q?GEjTdoIDmzQErBC5C0TaZtWkV2xQD/5htomepP2weOplLltdJl598w+UnG+4?=
 =?us-ascii?Q?8XASp45ipXyVgISkBeCZiR5vAZMqcUZtgw6kOk5DbTdsoxF1BCONf1tgwuGo?=
 =?us-ascii?Q?5Hho1Ovzc2/zCqVMC+1eMAiuJaP7nUr4jKhj0nNQvBcs6b1MXgz3LGLI1pjJ?=
 =?us-ascii?Q?GqAYf1T5x0D2RgPxgwVTKeF33M/dCQIHIXX1Gt1JpXvGlQcQz7mSSAYJMN8e?=
 =?us-ascii?Q?824BhENi1sDLscNSQPa5KAfSBg2KddiOj7l32PUcn1ON9/wmzIxcYxyvk3eW?=
 =?us-ascii?Q?Lojk?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <06EB38CAE8B60B4494E4BF309FBCC8BE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2375
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3a50a609-a3fd-48c8-42bf-08d8905e9bad
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UzclhXNX2O+GVA7/PTX0bknlr1qn4hUzHK0q+wuKXcxORkq2G7k/QfDXUXuaFsI4xAHBJdbJRjCVFoPhMJ1VVwy9RcuME90fU9es3wxLdIzEWQyyIkpkVlZKUHS+3kY1pqS9tWe5K6bu9WwpTyfzmK9q3U1YqIbDFK5YCYn8/xFzAv2HV1LyQQHsX+QYkS2bTJTWRjQbMuXOau374BduQiCv4yzP4d0C6XMHcwzGf7j53Vz5+8GNv30zrmwxHWNvU6zsTky31RuN5QZuHruUxnjwgVPtOiJHJEbu8plWgxq7BkWg0gSGry1/DTovv+QJTmouhK10ue2Os+7XNhNq30EQPt/PM9WeZpymQUcLsEOnsgnyXhahYCjFBffdN5f6zjeMu17yTNttJYPSqjUhnQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(396003)(346002)(46966005)(478600001)(2616005)(2906002)(6862004)(5660300002)(4744005)(86362001)(82310400003)(33656002)(70586007)(70206006)(82740400003)(36906005)(47076004)(6512007)(356005)(336012)(36756003)(6506007)(54906003)(26005)(4326008)(81166007)(186003)(8676002)(8936002)(53546011)(316002)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 09:52:16.3850
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b6fc64e-54e6-43c2-2c1c-08d8905ea05b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3634

Hello,

> On 24 Nov 2020, at 12:11 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Mon, 23 Nov 2020, Jan Beulich wrote:
>> There's no point wrapping the function invocation when
>> - the function body is already suitably wrapped,
>> - the function itself is unconditionally available.
>>
>> Reported-by: Julien Grall <julien@xen.org>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
>
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -662,9 +662,7 @@ static int __init check_existence(struct
>>     return 1; /* Everything is MMIO */
>> #endif
>>
>> -#ifdef CONFIG_HAS_PCI
>>     pci_serial_early_init(uart);
>> -#endif
>>
>>     /*
>>      * Do a simple existence test first; if we fail this,
>>
>

Regards,
Rahul
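[Editor's note] The pattern the commit message relies on can be sketched as follows (a simplified illustration with invented names, not the actual Xen sources): the function is declared and defined unconditionally, and only its *body* is guarded by the build option, so call sites such as check_existence() need no #ifdef of their own.

```c
#include <stdbool.h>

/* Simplified stand-in for the real Xen UART state. */
struct uart_sketch {
    bool pci_probed;
};

/* Uncomment to emulate a build with PCI support:
 * #define CONFIG_HAS_PCI
 */

/* Available unconditionally; only the body depends on CONFIG_HAS_PCI.
 * Callers therefore never need to wrap the invocation in #ifdef. */
static void pci_serial_early_init_sketch(struct uart_sketch *uart)
{
#ifdef CONFIG_HAS_PCI
    uart->pci_probed = true;  /* real PCI probing would go here */
#else
    (void)uart;               /* compiles to a no-op without PCI support */
#endif
}
```

With this shape, dropping the #ifdef around the invocation (as the patch does) changes nothing for !CONFIG_HAS_PCI builds: the call simply becomes an empty function the compiler can inline away.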


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 10:05:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 10:05:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35682.67307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVCG-0005Az-3y; Tue, 24 Nov 2020 10:05:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35682.67307; Tue, 24 Nov 2020 10:05:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVCG-0005As-0o; Tue, 24 Nov 2020 10:05:16 +0000
Received: by outflank-mailman (input) for mailman id 35682;
 Tue, 24 Nov 2020 10:05:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khVCE-0005An-Pr
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 10:05:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ddcddfa-1bc2-40b9-966e-0ecadce09127;
 Tue, 24 Nov 2020 10:05:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CCEFAC48;
 Tue, 24 Nov 2020 10:05:12 +0000 (UTC)
X-Inumbo-ID: 5ddcddfa-1bc2-40b9-966e-0ecadce09127
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606212312; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A1LEu8xjN8h2skGlFztJ38qpXkl5P+Ns3SftVKPcF1U=;
	b=R397AY4KqBGf9JOsISTTJqS9QCQbMco58pPEWGSBB0Mm1O65V0WGArhNhV9JS2Ol83ns+Y
	pBMZTqIok3Xdk53RYR8M2j1IsvByYP9l2LihZWNRq0UjVp3Hcw95mgOsSkSed0hpx1Hhzn
	2+pp5xPEBPuDjy5SbqPORKQAy7rKWqQ=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
Date: Tue, 24 Nov 2020 11:05:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201123173925.GG4662@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 18:39, Manuel Bouyer wrote:
> On Mon, Nov 23, 2020 at 06:06:10PM +0100, Roger Pau Monné wrote:
>> OK, I'm afraid this is likely too verbose and messes with the timings.
>>
>> I've been looking (again) into the code, and I found something weird
>> that I think could be related to the issue you are seeing, but haven't
>> managed to try to boot the NetBSD kernel provided in order to assert
>> whether it solves the issue or not (or even whether I'm able to
>> repro it). Would you mind giving the patch below a try?
> 
> With this, I get the same hang but XEN outputs don't wake up the interrupt
> any more. The NetBSD counter shows only one interrupt for ioapic2 pin 2,
> while I would have about 8 at the time of the hang.
> 
> So, now it looks like interrupts are blocked forever.

Which may be a good thing for debugging purposes, because now we have
a way to investigate what is actually blocking the interrupt's
delivery without having to worry about more output skewing the
overall picture.

> At
> http://www-soc.lip6.fr/~bouyer/xen-log5.txt
> you'll find the output of the 'i' key.

(XEN)    IRQ:  34 vec:59 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)

(XEN)     IRQ 34 Vec 89:
(XEN)       Apic 0x02, Pin  2: vec=59 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0 dest_id:00000001

(XEN) ioapic 2 pin 2 gsi 34 vector 0x67
(XEN)   delivery mode 0 dest mode 0 delivery status 0
(XEN)   polarity 1 IRR 0 trig mode 1 mask 0 dest id 0

IOW from the guest's PoV the interrupt is entirely idle (mask and IRR
clear), while Xen sees it as both in-flight and with IRR already
having become set again. I continue to suspect the EOI timer not doing
its job. Yet as said before, for it to have anything to do in the
first place, the "guest" (really Dom0 here) would need to fail to EOI
the IRQ within the timeout period. Which in turn, given your
description of how you handle interrupts, cannot be excluded (i.e. the
handling may simply take "slightly" too long).

What we're missing is the LAPIC information, since the masked status
logged is ambiguous: (-MM) doesn't fully match up with "mask=0". But
of course the former is just a software representation, while the
latter is what the RTE holds. IOW for the interrupt to not get
delivered, there needs to be this or a higher ISR bit set (considering
we don't use the TPR), or (I think we can pretty much exclude this)
we'd need to be running with IRQs off for extended periods of time.
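[Editor's note] The delivery rule referred to here can be illustrated as follows (a sketch with invented names, not Xen code): ignoring the TPR, a pending vector can only be delivered if its priority class (vector >> 4) is strictly higher than the class of the highest vector currently set in the ISR.

```c
#include <stdbool.h>
#include <stdint.h>

/* The LAPIC exposes the 256-bit in-service register (ISR) as eight
 * 32-bit words. Hypothetical helper: decide whether a pending vector
 * is deliverable, ignoring the TPR as in the analysis above. */
static bool vector_deliverable(const uint32_t isr[8], uint8_t pending_vec)
{
    for (int v = 255; v >= 0; v--)
        if (isr[v / 32] & (1u << (v % 32)))
            /* Found the highest in-service vector; compare
             * priority classes (vector >> 4). */
            return (pending_vec >> 4) > (v >> 4);

    return true; /* nothing in service at all */
}
```

E.g. with vector 0x59 in service, another vector in class 5 (0x50-0x5f) stays pending, while 0x67 (class 6) would still get through.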

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 10:22:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 10:22:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35715.67320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVSc-0006vx-Hr; Tue, 24 Nov 2020 10:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35715.67320; Tue, 24 Nov 2020 10:22:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVSc-0006vq-EG; Tue, 24 Nov 2020 10:22:10 +0000
Received: by outflank-mailman (input) for mailman id 35715;
 Tue, 24 Nov 2020 10:22:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khVSb-0006vl-6W
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 10:22:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khVSZ-0000bO-Cn; Tue, 24 Nov 2020 10:22:07 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khVSZ-0003EJ-52; Tue, 24 Nov 2020 10:22:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oUxMt1PuBh4NR7u1++TTZEDMiD5d+PhkMJAmaufMzXw=; b=XYvhIuQJuAUxQE7DX+nm10va88
	8GStVFqatsypPcJD8qw+6rvBl4Rp76Fsvdyyhc4mj9d+38Ws5db9ciOi/dJaq97qaHJi9cPZg1OIp
	aiFd9LZhy1ujjxeOR6TpvwKTsJqE3Ccvk6An2lhKN1w12MpXTJWqHux5k96OgKrfCxts=;
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Jan Beulich <jbeulich@suse.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
 <6d2dae58-bfe5-596e-7850-a20ed54e1a81@xen.org>
 <7964507a-afc8-a548-5ea1-0182b6001cb1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a634e6c9-7ada-3739-8d8f-00c53bcd2815@xen.org>
Date: Tue, 24 Nov 2020 10:22:04 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7964507a-afc8-a548-5ea1-0182b6001cb1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 24/11/2020 09:47, Jan Beulich wrote:
> On 23.11.2020 18:13, Julien Grall wrote:
>> My view on 2) can change if Jan provides enough information into why one
>> would want NS16550 PCI enabled by default on Arm but disable MSI.
> 
> Because, like it was on x86, initially there may be no support for
> MSI?

"no support for MSI" implies that there will be at least support for 
NS16550 PCI.

>  I have no idea what the plans are ...

There is no such plan on Arm for the foreseeable future (read as: we 
haven't seen any interest from the Arm community).

The NS16550 PCI code will stay unusable until someone effectively sends a 
patch to plumb it in correctly.

While I agree that disabling MSI may be a nice thing to do in the 
future, this doesn't address the need for Arm. I don't want the 
NS16550 PCI code to get in our way when implementing PCI (with or 
without MSI) on Arm.

Even if there were an interest, I would still expect some users (e.g. 
embedded folks) to want to compile out unused features (you may have a 
platform with an embedded NS16550).
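[Editor's note] The kind of compile-time knob being argued for could look roughly like this (a hypothetical Kconfig sketch; the option name and dependencies are invented, not existing Xen symbols):

```kconfig
config NS16550_PCI
	bool "NS16550 PCI support" if EXPERT
	depends on HAS_PCI
	default y
	help
	  Support for PCI-based NS16550 UARTs. Embedded platforms that
	  only have a memory-mapped NS16550 can compile this out.
```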

So the path forward will stay either 1) or 3) for me.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 10:46:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 10:46:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35738.67360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVq5-0000Vu-Rv; Tue, 24 Nov 2020 10:46:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35738.67360; Tue, 24 Nov 2020 10:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVq5-0000Vn-Oh; Tue, 24 Nov 2020 10:46:25 +0000
Received: by outflank-mailman (input) for mailman id 35738;
 Tue, 24 Nov 2020 10:46:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khVq4-0000Vf-It; Tue, 24 Nov 2020 10:46:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khVq4-00014y-AY; Tue, 24 Nov 2020 10:46:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khVq3-0002X0-Ty; Tue, 24 Nov 2020 10:46:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khVq3-00044G-T9; Tue, 24 Nov 2020 10:46:23 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nZlH3a5xIOXGkGb5WFld6wyesdzFWB29FaFaQe1FWoU=; b=WefPrRJDVu9G7gRDYE6TJWLlp4
	V5ttWSV1o30z0FJdeXcZb0SUa7FgJ99cuYhCpnKT1PppHkidrNj7SVXsxstbWbDhUwbZ6qTywWvV0
	S7FOmNIxeaZthdALzzrrZj2yD8u4JPumhNZiSiIYeMnEUMCRjU2cLAzIiEboz0KC9h0E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156977: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=9a063f5c261332756a32b2f7842ed2848eaaa52e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 10:46:23 +0000

flight 156977 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156977/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              9a063f5c261332756a32b2f7842ed2848eaaa52e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  137 days
Failing since        151818  2020-07-11 04:18:52 Z  136 days  131 attempts
Testing same since   156977  2020-11-24 04:19:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28583 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 10:49:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 10:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35747.67375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVtN-0000hO-Fu; Tue, 24 Nov 2020 10:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35747.67375; Tue, 24 Nov 2020 10:49:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khVtN-0000hH-Ar; Tue, 24 Nov 2020 10:49:49 +0000
Received: by outflank-mailman (input) for mailman id 35747;
 Tue, 24 Nov 2020 10:49:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khVtM-0000hC-6q
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 10:49:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83866c44-6cfc-4b1d-9389-28decc564439;
 Tue, 24 Nov 2020 10:49:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 90026AC2E;
 Tue, 24 Nov 2020 10:49:46 +0000 (UTC)
X-Inumbo-ID: 83866c44-6cfc-4b1d-9389-28decc564439
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606214986; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SZKSVVPjh9io3hX3hgPDkgrOGmXkV4LjTmpETkWhWZY=;
	b=QXVNwaZYwWYWmYk6q3ANbpcjuYogcFZ21uFBWS3oOFEXX7/ucgbJWHjqpslPb1oe9lwofR
	x/dlRw9DNqNc+mcsylgyZE7zI4y6xnse8e9eAeRr6Jl0kFnIyHighunt0A3FNz99zcc3U8
	dM285/xthwk1uucWjfsO3Gtstz/GacY=
Subject: Re: [PATCH v3 1/3] xen/ns16550: Make ns16550 driver usable on ARM
 with HAS_PCI enabled.
To: Julien Grall <julien@xen.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
References: <cover.1605527997.git.rahul.singh@arm.com>
 <955996aa8cd7f17f9f39c60bd3b9b74ffaa5c5f7.1605527997.git.rahul.singh@arm.com>
 <3740e147-719a-4e97-bb0e-fe9bd2ec2aa5@xen.org>
 <aa256a44-8f8f-d4f1-f5f4-12529f45d8c8@suse.com>
 <9007e08f-6d90-88ed-ba64-2f0b3c21cb50@xen.org>
 <8531a99d-3c54-36c7-0cd4-2e4838f96eb0@suse.com>
 <ba26fdfb-34f8-c4d3-e082-f1f49c768981@xen.org>
 <89F35B3F-FAAD-4C58-B3FD-F93CA3290A49@arm.com>
 <alpine.DEB.2.21.2011191534060.7979@sstabellini-ThinkPad-T480s>
 <CAJ=z9a0aS1G0F1jAtKNEe4r3tyBoxy1xJ9AV7pYgifsL62iqww@mail.gmail.com>
 <alpine.DEB.2.21.2011191551510.7979@sstabellini-ThinkPad-T480s>
 <6d2dae58-bfe5-596e-7850-a20ed54e1a81@xen.org>
 <7964507a-afc8-a548-5ea1-0182b6001cb1@suse.com>
 <a634e6c9-7ada-3739-8d8f-00c53bcd2815@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <32524f30-acb1-04d7-b69d-6e881f45f5ae@suse.com>
Date: Tue, 24 Nov 2020 11:49:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a634e6c9-7ada-3739-8d8f-00c53bcd2815@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 11:22, Julien Grall wrote:
> On 24/11/2020 09:47, Jan Beulich wrote:
>> On 23.11.2020 18:13, Julien Grall wrote:
>>> My view on 2) can change if Jan provides enough information on why one
>>> would want NS16550 PCI enabled by default on Arm but MSI disabled.
>>
>> Because, like it was on x86, initially there may be no support for
>> MSI?
> 
> "no support for MSI" implies that there will be at least support for 
> NS16550 PCI.
> 
>>  I have no idea what the plans are ...
> 
> There are no such plans on Arm for the foreseeable future (read as: we
> haven't seen any interest from the Arm community).

Okay, so your question wasn't so much about the "but" in there,
but about the "PCI" in the first place.

> The NS16550 PCI code will stay unusable until someone actually sends a
> patch to plumb it in correctly.
> 
> While I agree that being able to disable MSI may be nice to have in the
> future, this doesn't address the need for Arm. I don't want the NS16550
> PCI code to get in our way when implementing PCI (with or without MSI)
> on Arm.
> 
> Even if there were interest, I would still expect some users (e.g.
> embedded folks) to want to compile out unused features (you may have a
> platform with an embedded NS16550).
> 
> So the path forward will stay either 1) or 3) for me.

Well, as said elsewhere: 3) it is then, as far as I'm concerned, to make
it more obvious that this is really a hack.

Jan
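The "compile out unused features" point above is the usual pattern of gating an optional driver path behind a build-time option. Below is a minimal sketch, with a hypothetical CONFIG_NS16550_PCI switch standing in for whatever Kconfig option ends up being used; none of these names come from the actual Xen ns16550 code:

```c
#include <assert.h>

/* Hypothetical compile-time switch standing in for a Kconfig option;
 * build with -DCONFIG_NS16550_PCI to include the PCI probing path. */
#ifdef CONFIG_NS16550_PCI
static int ns16550_probe_pci(void)
{
    /* A real driver would enumerate PCI devices here. */
    return 1;
}
#else
static int ns16550_probe_pci(void)
{
    return 0; /* PCI support compiled out: nothing to probe. */
}
#endif

static int ns16550_init(void)
{
    /* The MMIO/port-IO path would always be available; the PCI path
     * exists only when configured in. */
    return ns16550_probe_pci();
}
```

With -DCONFIG_NS16550_PCI on the compiler command line the PCI path is built in; without it the driver still compiles, but the PCI probe is a stub, which is the "embedded folks" outcome argued for above.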


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 11:04:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 11:04:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35758.67390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khW7N-0002Tj-P6; Tue, 24 Nov 2020 11:04:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35758.67390; Tue, 24 Nov 2020 11:04:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khW7N-0002Tc-Lx; Tue, 24 Nov 2020 11:04:17 +0000
Received: by outflank-mailman (input) for mailman id 35758;
 Tue, 24 Nov 2020 11:04:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khW7N-0002TX-1l
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 11:04:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07242373-3a06-4af2-8e84-0467ab8ed6d2;
 Tue, 24 Nov 2020 11:04:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF3E5AC2D;
 Tue, 24 Nov 2020 11:04:14 +0000 (UTC)
X-Inumbo-ID: 07242373-3a06-4af2-8e84-0467ab8ed6d2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606215855; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T8eVZ4DqvpfafX30qWTsandQgKds3bRQUYvkav5QoEg=;
	b=Mas6AuYrSUojHSCJwHmwdXkwwxpLtqt9oN0G1C+pouxW+e9QWmHlWSEiiKmluNV8DH/htY
	W5ALAvYaItBKEAAW9Qsh/rU2ay8U93rt/rRFxvvlhyGPi6KWz/R5SLUAPfNL/9Lpmw5TTr
	asA7cKr79CgdI1b+6qHyVftUeEGGLGc=
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
 <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
 <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
 <fe2ec163-c6c7-12d6-0c89-57a238514e25@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <094e9e27-e01f-6020-c091-f9c546e92028@suse.com>
Date: Tue, 24 Nov 2020 12:04:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <fe2ec163-c6c7-12d6-0c89-57a238514e25@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.11.2020 17:14, Andrew Cooper wrote:
> On 23/11/2020 16:07, Roger Pau Monné wrote:
>> On Mon, Nov 23, 2020 at 04:30:05PM +0100, Jan Beulich wrote:
>>> On 23.11.2020 16:24, Roger Pau Monné wrote:
>>>> On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
>>>>> --- a/xen/arch/x86/acpi/power.c
>>>>> +++ b/xen/arch/x86/acpi/power.c
>>>>> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>>>>>      if ( state != ACPI_STATE_S3 )
>>>>>          return;
>>>>>  
>>>>> -    wakeup_vector_va = __acpi_map_table(
>>>>> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
>>>>> -
>>>>>      /* TBoot will set resume vector itself (when it is safe to do so). */
>>>>>      if ( tboot_in_measured_env() )
>>>>>          return;
>>>>>  
>>>>> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
>>>>> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
>>>>> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
>>>>> +
>>>>>      if ( acpi_sinfo.vector_width == 32 )
>>>>>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>>      else
>>>>>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>> +
>>>>> +    clear_fixmap(FIX_ACPI_END);
>>>> Why not use vmap here instead of the fixmap?
>>> Considering the S3 path is relatively fragile (as in: we end up
>>> breaking it more often than about anything else) I wanted to
>>> make as little of a change as possible. Hence I decided to stick
>>> to the fixmap use that was (indirectly) used before as well.
>> Unless there's a restriction requiring the ACPI fixmap entry, I would
>> just switch to using vmap, as it's used extensively in the code and is
>> less likely to trigger issues in the future (or else a bunch of other
>> stuff would also be broken).
>>
>> IMO doing the mapping differently here when it's not required will end
>> up making this code more fragile in the long run.
> 
> We can't enter S3 at all until dom0 has booted, as one detail has to
> come from AML.
> 
> Therefore, we're fully up and running by this point, and vmap() will be
> fine.

That's not the point of my reservation. The code here runs when the
system already isn't "fully up and running" anymore. Secondary CPUs
have already been offlined, and we're around the point where we
disable interrupts. Granted when we disable them, we also turn off
spin debugging, but I'd still prefer a path that's not susceptible
to IRQ state. What I admit I didn't pay attention to is that
set_fixmap(), by virtue of being a thin wrapper around
map_pages_to_xen(), similarly uses locks. IOW - okay, I'll switch
to vmap(). You're both aware that it, unlike set_fixmap(), can
fail, aren't you?
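For reference, the arithmetic the hunk performs (writing a 32- or 64-bit wakeup vector at the in-page offset of its physical address, through a temporary mapping) can be sketched standalone. Here a plain buffer stands in for the VA that set_fixmap()/vmap() would return, and the function names are illustrative, not the actual Xen code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096UL
#define PAGE_OFFSET(addr) ((addr) & (PAGE_SIZE - 1))

static char fake_page[PAGE_SIZE]; /* stands in for the mapped ACPI page */

/* Write the wakeup vector at the in-page offset of its physical address.
 * 'map_base' stands in for the VA from set_fixmap()/vmap(); 'width'
 * mirrors acpi_sinfo.vector_width (32 or 64). */
static int write_wakeup_vector(void *map_base, uint64_t wakeup_vector_pa,
                               unsigned int width, uint64_t value)
{
    char *va = (char *)map_base + PAGE_OFFSET(wakeup_vector_pa);

    if ( width == 32 )
    {
        uint32_t v32 = (uint32_t)value;
        memcpy(va, &v32, sizeof(v32));
    }
    else if ( width == 64 )
        memcpy(va, &value, sizeof(value));
    else
        return -1;

    return 0;
}

/* Read it back the same way, for checking. */
static uint64_t read_wakeup_vector(const void *map_base, uint64_t pa,
                                   unsigned int width)
{
    const char *va = (const char *)map_base + PAGE_OFFSET(pa);
    uint64_t v = 0;

    if ( width == 32 )
    {
        uint32_t v32;
        memcpy(&v32, va, sizeof(v32));
        v = v32;
    }
    else
        memcpy(&v, va, sizeof(v));

    return v;
}
```

The width check rejects anything other than 32 or 64 rather than silently mis-writing, which is cheap insurance on a path that is exercised rarely.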

> However, why are we re-writing the wakeup vector every time?  It's fixed
> by the position of the trampoline, so we'd actually simplify the S3 path
> by only setting it up once.

I think the spec allows for (as in: doesn't preclude) firmware writing
this structure from scratch again each time the system comes back up.
Therefore what we've written there once may not survive the first
suspend/resume cycle.

> (The fix for fragility is to actually test it, not shy away from making
> any change)

Fair point. I'll see if I can convince my old laptop to cooperate.
I know Windows doesn't resume correctly on it ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 11:12:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 11:12:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35767.67405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWFW-0003P0-U7; Tue, 24 Nov 2020 11:12:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35767.67405; Tue, 24 Nov 2020 11:12:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWFW-0003Ot-QF; Tue, 24 Nov 2020 11:12:42 +0000
Received: by outflank-mailman (input) for mailman id 35767;
 Tue, 24 Nov 2020 11:12:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuHM=E6=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khWFV-0003Oo-Sn
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 11:12:41 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8476b3c8-252c-416f-b1ad-4d35d64ba8c2;
 Tue, 24 Nov 2020 11:12:40 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 968561396;
 Tue, 24 Nov 2020 03:12:39 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E4F243F71F;
 Tue, 24 Nov 2020 03:12:38 -0800 (PST)
X-Inumbo-ID: 8476b3c8-252c-416f-b1ad-4d35d64ba8c2
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum #1530923
Date: Tue, 24 Nov 2020 11:12:15 +0000
Message-Id: <61a105672650e7470710183f37351b821b818d1e.1606215998.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

On the Cortex-A55, TLB entries can be allocated by a speculative AT
instruction. If this happens during a guest context switch while the
guest's page table state is inconsistent, TLB entries with wrong values
might be allocated.
The ARM64_WORKAROUND_AT_SPECULATE workaround is used, as for erratum
1165522 on the Cortex-A76 and Neoverse N1.

This change also introduces the MIDR identifier for the Cortex-A55.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 docs/misc/arm/silicon-errata.txt | 1 +
 xen/arch/arm/cpuerrata.c         | 6 ++++++
 xen/include/asm-arm/processor.h  | 2 ++
 3 files changed, 9 insertions(+)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index d183ba543f..27bf957ebf 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -45,6 +45,7 @@ stable hypervisors.
 | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
 | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
 | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
+| ARM            | Cortex-A55      | #1530923        | N/A                     |
 | ARM            | Cortex-A57      | #852523         | N/A                     |
 | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
 | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index cb4795beec..b398d480f1 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -514,6 +514,12 @@ static const struct arm_cpu_capabilities arm_errata[] = {
         .capability = ARM64_WORKAROUND_AT_SPECULATE,
         MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
     },
+    {
+        /* Cortex-A55 (All versions as erratum is open in SDEN v14) */
+        .desc = "ARM erratum 1530923",
+        .capability = ARM64_WORKAROUND_AT_SPECULATE,
+        MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+    },
     {},
 };
 
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index d3d12a9d19..87c8136022 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -53,6 +53,7 @@
 #define ARM_CPU_PART_CORTEX_A17     0xC0E
 #define ARM_CPU_PART_CORTEX_A15     0xC0F
 #define ARM_CPU_PART_CORTEX_A53     0xD03
+#define ARM_CPU_PART_CORTEX_A55     0xD05
 #define ARM_CPU_PART_CORTEX_A57     0xD07
 #define ARM_CPU_PART_CORTEX_A72     0xD08
 #define ARM_CPU_PART_CORTEX_A73     0xD09
@@ -64,6 +65,7 @@
 #define MIDR_CORTEX_A17 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A17)
 #define MIDR_CORTEX_A15 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A15)
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
+#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
 #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
-- 
2.17.1
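The MIDR constants added by the patch follow the ARMv8 MIDR_EL1 layout: Implementer[31:24], Variant[23:20], Architecture[19:16], PartNum[15:4], Revision[3:0]. The sketch below shows how such a model constant is composed and matched across all variants/revisions; the macro definitions here are illustrative, mirroring the Linux/Xen style rather than copied from the source:

```c
#include <assert.h>
#include <stdint.h>

#define ARM_CPU_IMP_ARM             0x41
#define ARM_CPU_PART_CORTEX_A55     0xD05

/* Compose a model value: implementer, fixed architecture field (0xf on
 * ARMv8), and part number; variant/revision are left zero. */
#define MIDR_CPU_MODEL(imp, partnum) \
    (((uint32_t)(imp) << 24) | (0xfU << 16) | ((uint32_t)(partnum) << 4))

/* Mask that keeps implementer, architecture and part number, so a match
 * covers all variants and revisions (the "MIDR_ALL_VERSIONS" idea). */
#define MIDR_CPU_MODEL_MASK 0xff0ffff0U

#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)

/* Does a concrete MIDR_EL1 value match a model, ignoring the
 * variant and revision fields? */
static inline int midr_is_cpu_model(uint32_t midr, uint32_t model)
{
    return (midr & MIDR_CPU_MODEL_MASK) == (model & MIDR_CPU_MODEL_MASK);
}
```

Matching with a mask rather than exact equality is what makes an "all versions as the erratum is open" entry possible: a later revision of the same part still matches.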



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 11:27:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 11:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35774.67417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWTc-0004QP-6p; Tue, 24 Nov 2020 11:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35774.67417; Tue, 24 Nov 2020 11:27:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWTc-0004QI-3m; Tue, 24 Nov 2020 11:27:16 +0000
Received: by outflank-mailman (input) for mailman id 35774;
 Tue, 24 Nov 2020 11:27:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khWTb-0004QD-1d
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 11:27:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbc1b030-4a85-4299-8b30-2ae4b2c244da;
 Tue, 24 Nov 2020 11:27:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B7A8AC2D;
 Tue, 24 Nov 2020 11:27:13 +0000 (UTC)
X-Inumbo-ID: fbc1b030-4a85-4299-8b30-2ae4b2c244da
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606217233; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+aWKri1K+oKGMkeVg9gsz4JY50xsphKOgWyMVtRlIQE=;
	b=MO9UCYPtbmBp8wgfjo2Umi3qS3n6FWUNKqk/bV99wRdSXfmpabgfN8rFjsN0a/v39pj/TT
	mRVKqyDhyPBySg6ivBlk4hRxFIomZaLtr9jZzf8AAAxOM0XF+IYRkCKmVKjDCOt+wa4g6U
	6Deyb6dyfBf7PSGbSeX8O9yQ+Fk3b1Y=
Subject: Re: [PATCH v2] xen: add support for automatic debug key actions in
 case of crash
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201120131306.24388-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e7cc6511-d741-c7dd-5c35-ab9cf031d4b5@suse.com>
Date: Tue, 24 Nov 2020 12:27:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120131306.24388-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 14:13, Juergen Gross wrote:
> @@ -507,6 +509,42 @@ void __init initialize_keytable(void)
>      }
>  }
>  
> +#define CRASHACTION_SIZE  32
> +static char crash_debug_panic[CRASHACTION_SIZE];
> +static char crash_debug_hwdom[CRASHACTION_SIZE];
> +static char crash_debug_watchdog[CRASHACTION_SIZE];
> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
> +static char crash_debug_debugkey[CRASHACTION_SIZE];
> +
> +static char *crash_action[CRASHREASON_N] = {

Considering the sole use below, I think two "const" can be added
here. Given this single use, I also wonder whether this array
wouldn't better be private to that function.

> +    [CRASHREASON_PANIC] = crash_debug_panic,
> +    [CRASHREASON_HWDOM] = crash_debug_hwdom,
> +    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
> +    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
> +    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
> +};
> +
> +string_runtime_param("crash-debug-panic", crash_debug_panic);
> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);

This one probably wants a CONFIG_KEXEC conditional around it,
such that requests to set it won't appear to be "okay" on !KEXEC
builds. At which point the doc probably also wants to mention the
conditional availability of this option.

> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
> +
> +void keyhandler_crash_action(enum crash_reason reason)
> +{
> +    const char *action = crash_action[reason];

In order to avoid cascade problems when the system's already in
trouble, maybe better to bounds-check "reason" before using it as an
array index and, also with the CONFIG_KEXEC related adjustment
requested above in mind, ...

> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
> +
> +    while ( *action )

... perhaps also better to check action against NULL before
dereferencing?

Jan
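Taken together, the two suggestions above (bounds-checking "reason" and tolerating a NULL slot) could look like the sketch below; the enum values and table are simplified stand-ins for the ones in the patch, not the actual Xen code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the patch's types; CRASHREASON_N counts
 * the entries, as in the patch. */
enum crash_reason {
    CRASHREASON_PANIC,
    CRASHREASON_HWDOM,
    CRASHREASON_N
};

static char crash_debug_panic[32] = "0+1d"; /* example action string */

static const char *const crash_action[CRASHREASON_N] = {
    [CRASHREASON_PANIC] = crash_debug_panic,
    /* CRASHREASON_HWDOM deliberately left NULL, modelling a slot that
     * is compiled out (the CONFIG_KEXEC point above). */
};

/* Defensive lookup: bounds-check the enum before indexing, and let a
 * NULL slot mean "no action" instead of dereferencing it. */
static const char *crash_action_lookup(enum crash_reason reason)
{
    if ( (unsigned int)reason >= CRASHREASON_N )
        return NULL;
    return crash_action[reason];
}
```

A caller then treats a NULL return the same as an empty action string, so a bogus reason value cannot cascade into a second crash while handling the first.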


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 11:42:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 11:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35781.67428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWie-00067c-Hr; Tue, 24 Nov 2020 11:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35781.67428; Tue, 24 Nov 2020 11:42:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWie-00067V-Ev; Tue, 24 Nov 2020 11:42:48 +0000
Received: by outflank-mailman (input) for mailman id 35781;
 Tue, 24 Nov 2020 11:42:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khWid-00067N-4b
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 11:42:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06790d6d-c3cb-4afa-8d9b-587322f914ce;
 Tue, 24 Nov 2020 11:42:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5C784AD21;
 Tue, 24 Nov 2020 11:42:45 +0000 (UTC)
X-Inumbo-ID: 06790d6d-c3cb-4afa-8d9b-587322f914ce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606218165; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KjFa8dYabyDR+EBZEGf5cxuhNmEhJOb9hv+kHZNMPYg=;
	b=GRRYmqds40yt+rDmIpdOTFVjyUom70zNjOvH9hS9XlnXJ9u06m9h2bC5IF5KKsQBl8x+4+
	Sj485EBHmWLgyh8V1qPpt+2XOYE+H3sDuTju/P1ekZTLnCdQW8fAx2iQr2DqNR4vlGDcdy
	++/I1a+IBljigMD4Ii3pTD4F/COQOcI=
Subject: Re: [PATCH v7 2/3] xen/events: modify struct evtchn layout
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
Date: Tue, 24 Nov 2020 12:42:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124070106.26854-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 08:01, Juergen Gross wrote:
> In order to avoid latent races when updating an event channel put
> xen_consumer and pending fields in different bytes.

I think there's a little more to be said here as to what the
actual risk is, as the two fields are - afaict - at present
fine the way they're declared.

> @@ -94,9 +93,10 @@ struct evtchn
>  #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
>  #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
>      u8  state;             /* ECS_* */
> -    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */

I see no reason to use a full byte for this one; in fact I was
considering whether it, state, and old_state couldn't share storage
(at the latest once we run into space issues with this struct).
(In this context I'm also observing that old_state could get away
with just 2 bits, i.e. all three fields would fit in a single byte.)
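
Purely as an illustration of that musing (not a concrete proposal),
the three fields could be packed into one byte with bit-fields; the
widths below are assumptions: 3 bits cover the ECS_* values 0..6,
2 bits are taken to suffice for old_state, leaving 3 bits:

```c
/*
 * Illustrative sketch only, not the actual struct evtchn layout:
 * state, old_state and xen_consumer packed into a single byte.
 */
#include <stdint.h>

struct evtchn_state_sketch {
    uint8_t state:3;        /* ECS_* (values 0..6) */
    uint8_t old_state:2;    /* state when taking lock in write mode */
    uint8_t xen_consumer:3; /* consumer in Xen if nonzero */
};
```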

> -    u8  pending:1;
> -    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
> +#ifndef NDEBUG
> +    u8  old_state;     /* State when taking lock in write mode. */
> +#endif
> +    u8  xen_consumer;  /* Consumer in Xen if nonzero */
>      u32 port;
>      union {
>          struct {
> @@ -113,11 +113,13 @@ struct evtchn
>          } pirq;        /* state == ECS_PIRQ */
>          u16 virq;      /* state == ECS_VIRQ */
>      } u;
> -    u8 priority;
> -#ifndef NDEBUG
> -    u8 old_state;      /* State when taking lock in write mode. */
> -#endif
> -    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
> +
> +    /* FIFO event channels only. */
> +    u8  pending;
> +    u8  priority;
> +    u16 notify_vcpu_id;    /* VCPU for local delivery notification */

This field definitely isn't FIFO-only.

Also for all fields you touch anyway, may I ask that you switch to
uint<N>_t or, in the case of "pending", bool?
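
Something along these lines, i.e. fixed-width <stdint.h> types plus
bool for "pending"; the struct fragment is illustrative only, not the
final layout:

```c
/*
 * Sketch of the requested type changes for the fields being moved;
 * the fragment and its name are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>

struct evtchn_tail_sketch {
    bool     pending;         /* was: u8 pending */
    uint8_t  priority;        /* was: u8 priority */
    uint16_t notify_vcpu_id;  /* VCPU for local delivery notification */
};
```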

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 11:47:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 11:47:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35789.67440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWmy-0006Kg-4U; Tue, 24 Nov 2020 11:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35789.67440; Tue, 24 Nov 2020 11:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khWmy-0006KZ-1Z; Tue, 24 Nov 2020 11:47:16 +0000
Received: by outflank-mailman (input) for mailman id 35789;
 Tue, 24 Nov 2020 11:47:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khWmw-0006KR-Ph; Tue, 24 Nov 2020 11:47:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khWmw-0002LY-KV; Tue, 24 Nov 2020 11:47:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khWmw-0004mu-Bi; Tue, 24 Nov 2020 11:47:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khWmw-0004EE-BH; Tue, 24 Nov 2020 11:47:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PTiQmq46a05jN+kI0YDTglbTwiaiLpTk04wcQPhgv1o=; b=UMMAIFmNaxQUqO/Y9+QSwGWZiI
	q75OBRNIsmjr04X+OjU+FfoyUT3yG4AAM0k78utQeolVU0AQk/mOE7eojSBcrMcDIl/pq0/hgeT+w
	gzr5RApsTObjw3+T0jbkvf9Oz2h+S7Qz1RnUZ+8THTimdowLyb/h1TrTo5wB6F/d63HI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156975-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156975: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
X-Osstest-Versions-That:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 11:47:14 +0000

flight 156975 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156975/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156956
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156956
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156956
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156956
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156956
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156956
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156956
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156956
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156956
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156956
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156956
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156956
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad
baseline version:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad

Last test of basis   156975  2020-11-24 01:51:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:04:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 12:04:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35799.67456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khX35-0008BV-Ru; Tue, 24 Nov 2020 12:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35799.67456; Tue, 24 Nov 2020 12:03:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khX35-0008BO-O0; Tue, 24 Nov 2020 12:03:55 +0000
Received: by outflank-mailman (input) for mailman id 35799;
 Tue, 24 Nov 2020 12:03:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PL67=E6=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1khX34-0008BE-MU
 for xen-devel@lists.xen.org; Tue, 24 Nov 2020 12:03:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfc37efd-b2dc-4675-a8a7-bee241deb68d;
 Tue, 24 Nov 2020 12:03:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1khX2v-0002gn-6a; Tue, 24 Nov 2020 12:03:45 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1khX2v-0002f4-3b; Tue, 24 Nov 2020 12:03:45 +0000
X-Inumbo-ID: dfc37efd-b2dc-4675-a8a7-bee241deb68d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=yfttipwcpjyGr0PfbdQvyjVzZvYzgmIiVMoZvzn78XI=; b=O41nLK447kTInVnyNAxpWJG5GG
	LJvFROMGe3S4Kx7fnC+zhbbffxMHbrLrkdL1xUMYpV5LgTsPU+q7jNUj3IjF2fU8TAD4wUq5+LVhW
	cd/ra3WjTUl0d/I+WDYH9ycuy/6xCa69+jkKjJOhaH0dPH0zJNdaENYriHb1tsQcu1cw=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 355 v2 - stack corruption from XSA-346 change
Message-Id: <E1khX2v-0002f4-3b@xenbits.xenproject.org>
Date: Tue, 24 Nov 2020 12:03:45 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-355
                              version 2

                 stack corruption from XSA-346 change

UPDATES IN VERSION 2
====================

Added metadata file.

Public release.

ISSUE DESCRIPTION
=================

One of the two changes for XSA-346 introduced an on-stack array.  The
check for guarding against overrunning this array was off by one,
allowing for corruption of the first stack slot immediately following
this array.
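
For illustration only (this is a generic sketch of the bug class, not
the actual XSA-346/355 code): a bounds check that is off by one admits
an index equal to the array size, so the write lands one slot past the
on-stack array.

```c
/*
 * Generic illustration of an off-by-one bounds check; ENTRIES,
 * store() and its arguments are hypothetical.
 */
#define ENTRIES 4

static int store(int buf[ENTRIES], unsigned int idx, int v)
{
    /*
     * The buggy variant reads "if ( idx > ENTRIES )", wrongly
     * accepting idx == ENTRIES and corrupting the stack slot
     * immediately following the array.
     */
    if ( idx >= ENTRIES )   /* correct: reject idx == ENTRIES too */
        return -1;

    buf[idx] = v;
    return 0;
}
```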

IMPACT
======

A malicious or buggy HVM or PVH guest can cause Xen to crash, resulting
in a Denial of Service (DoS) to the entire host.  Privilege escalation
as well as information leaks cannot be excluded.

VULNERABLE SYSTEMS
==================

All Xen versions which have the patches for XSA-346 applied are
vulnerable.

Only x86 HVM and PVH guests can leverage the vulnerability.  Arm guests
and x86 PV guests cannot leverage the vulnerability.

Only x86 HVM and PVH guests which have physical devices passed through
to them can leverage the vulnerability.

MITIGATION
==========

Not passing through physical devices to untrusted guests will avoid
the vulnerability.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa355.patch           xen-unstable - Xen 4.10.x

$ sha256sum xsa355*
a93bfc376897e7cffd095d395f1a66476adb9503d7d80a59b7861e64c2675323  xsa355.meta
dae633c11cf2eff3e304737265e18ab09213e8e4640458080a944ae7a40819a4  xsa355.patch
$

NOTE CONCERNING SHORT EMBARGO
=============================

This issue is likely to be re-discovered as the changes for XSA-346
are deployed more widely, since the issue is also triggerable without
any malice or bugginess.

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+89pEMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZRHQH/1D8CfjZWYgLcdYOg6sDO6BIK8IsnAiOoe2C8b9i
M8QPFzHlUx09FI5CHVb0Va/pFliR1OS2tmmIU30DL9nmiDLcaP2uvpgJAYo5GwL5
Rzccjo4qbXwfSRQvHmLzbr+XN8sHDxbekpFd8T5WvuarUgxOaPCLTfSG0nag/t52
OVNIdDcP5lSt/Z88lYW75j4gBAsXUZDEXgn81JpeHj9js8YLFC3WFcwh58Jjd+hw
5DH955jNAKD8TRSy6uffDpvN1m9wm2vDGeXSUcJyswlV8Nqi6YRW4XO4Q6Cfj+CG
LVBS/T977JZGJjRvTw4j0H+xAXiLFwQ1I/6v6fSZzxDMt9k=
=+4M1
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa355.meta"
Content-Disposition: attachment; filename="xsa355.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxNWIyOTgwOTcyODlmMWMxMWI5ODE0NTRhM2RjOTEyYjk1
ZTJmNjViIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1NS5wYXRjaCIKICAgICAg
ICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsK
ICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAg
ICJTdGFibGVSZWYiOiAiMTQ0N2Q0NDlmYWI3ZTQ4Yzg1ZmFmODM5NTE4NDJi
YjYwZDdkYWJlNSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAgICAg
ICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNTUucGF0Y2giCiAg
ICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTIi
OiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAg
ICAgICAiU3RhYmxlUmVmIjogIjE0YzljMGZjZWFlOTJhMThkZWRjM2YyODBl
YmY4YjlmNTJlMzlkZTUiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAg
ICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzU1LnBhdGNo
IgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0
LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICJkNGMwNDgzYzBiODc3NjhjZDliOTU1
NDJlOTgxMTFlNGMwOThkNTdmIiwKICAgICAgICAgICJQcmVyZXFzIjogW10s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1NS5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xNCI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiZDEwMWI0MTdiNzg0YTI2MzI2
ZmM3ODAwYTc5Y2M1MzliYTU3MGI3OSIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NTUucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiYjY1OWE1Y2ViZDYx
MWRiZTY5OGU2M2MwMzQ4NWI1ZmU4Y2Q5NjRhZCIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzNTUucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9
CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream; name="xsa355.patch"
Content-Disposition: attachment; filename="xsa355.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBtZW1vcnk6IGZpeCBvZmYtYnktb25lIGluIFhTQS0zNDYgY2hhbmdlCgpU
aGUgY29tcGFyaXNvbiBhZ2FpbnN0IEFSUkFZX1NJWkUoKSBuZWVkcyB0byBi
ZSA+PSBpbiBvcmRlciB0byBhdm9pZApvdmVycnVubmluZyB0aGUgcGFnZXNb
XSBhcnJheS4KClRoaXMgaXMgWFNBLTM1NS4KCkZpeGVzOiA1Nzc3YTM3NDJk
ODggKCJJT01NVTogaG9sZCBwYWdlIHJlZiB1bnRpbCBhZnRlciBkZWZlcnJl
ZCBUTEIgZmx1c2giKQpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpn
cmFsbEBhbWF6b24uY29tPgoKLS0tIGEveGVuL2NvbW1vbi9tZW1vcnkuYwor
KysgYi94ZW4vY29tbW9uL21lbW9yeS5jCkBAIC04NTQsNyArODU0LDcgQEAg
aW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1Y3QgZG9tYWluCiAgICAg
ICAgICAgICArK2V4dHJhLnBwYWdlOwogCiAgICAgICAgIC8qIENoZWNrIGZv
ciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaXRlcmF0aW9u
LiAqLwotICAgICAgICBpZiAoICgrK2RvbmUgPiBBUlJBWV9TSVpFKHBhZ2Vz
KSAmJiBleHRyYS5wcGFnZSkgfHwKKyAgICAgICAgaWYgKCAoKytkb25lID49
IEFSUkFZX1NJWkUocGFnZXMpICYmIGV4dHJhLnBwYWdlKSB8fAogICAgICAg
ICAgICAgICh4YXRwLT5zaXplID4gZG9uZSAmJiBoeXBlcmNhbGxfcHJlZW1w
dF9jaGVjaygpKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIHJjID0gc3Rh
cnQgKyBkb25lOwo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:19:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 12:19:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35864.67508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXHf-0001P9-22; Tue, 24 Nov 2020 12:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35864.67508; Tue, 24 Nov 2020 12:18:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXHe-0001P2-VA; Tue, 24 Nov 2020 12:18:58 +0000
Received: by outflank-mailman (input) for mailman id 35864;
 Tue, 24 Nov 2020 12:18:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khXHd-0001Ox-0t
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 12:18:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f5f1624-66c9-4873-bc92-77be059c37ae;
 Tue, 24 Nov 2020 12:18:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8114EAC2E;
 Tue, 24 Nov 2020 12:18:54 +0000 (UTC)
X-Inumbo-ID: 4f5f1624-66c9-4873-bc92-77be059c37ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606220334; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=RFWJDXQbZkjrb5CRsC/eLe96jaTP+gxrZWQQaIRru8I=;
	b=VugQZ21xNQ9Vk6lSigrRD7YjB2WhIJDu+ULIPJalqll4J3jwUHkI9ouEHOgOmGyaRQ+Z+J
	E5flwcHEdQm91vqiOwIbndv/Kkb2wz5Cm4WW2PSSGVMbMwrkvX9X2kwrxJX/l85s1J704d
	grJYdhyOfbBHV0dpbAltQ5SZH8SDzY8=
Subject: Re: [PATCH v7 2/3] xen/events: modify struct evtchn layout
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-3-jgross@suse.com>
 <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <696314b9-18e3-e18d-10f2-a510e19438da@suse.com>
Date: Tue, 24 Nov 2020 13:18:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TT1hRZiCRHqfqHKBklAnfyKM1R4I6rr16"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TT1hRZiCRHqfqHKBklAnfyKM1R4I6rr16
Content-Type: multipart/mixed; boundary="8fjOsoXHK3aUncWCh5rGMM9UVGnCsIO5a";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <696314b9-18e3-e18d-10f2-a510e19438da@suse.com>
Subject: Re: [PATCH v7 2/3] xen/events: modify struct evtchn layout
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-3-jgross@suse.com>
 <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
In-Reply-To: <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>

--8fjOsoXHK3aUncWCh5rGMM9UVGnCsIO5a
Content-Type: multipart/mixed;
 boundary="------------2BC00B69429FD899B1B88A01"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2BC00B69429FD899B1B88A01
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.11.20 12:42, Jan Beulich wrote:
> On 24.11.2020 08:01, Juergen Gross wrote:
>> In order to avoid latent races when updating an event channel put
>> xen_consumer and pending fields in different bytes.
>
> I think there's a little more to be said here as to what the
> actual risk is, as the two fields are - afaict - at present
> fine the way they're declared.

Okay.

>
>> @@ -94,9 +93,10 @@ struct evtchn
>>   #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
>>   #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
>>       u8  state;             /* ECS_* */
>> -    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
>
> I see no reason to use a full byte for this one; in fact I
> was considering whether it, state, and old_state couldn't
> share storage (the latest when we run into space issues with
> this struct). (In this context I'm also observing that
> old_state could get away with just 2 bits, i.e. all three
> fields would fit in a single byte.)

I think doing further compression now isn't really helping. It would
just add more padding bytes and result in larger code.

>
>> -    u8  pending:1;
>> -    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
>> +#ifndef NDEBUG
>> +    u8  old_state;     /* State when taking lock in write mode. */
>> +#endif
>> +    u8  xen_consumer;  /* Consumer in Xen if nonzero */
>>       u32 port;
>>       union {
>>           struct {
>> @@ -113,11 +113,13 @@ struct evtchn
>>           } pirq;        /* state == ECS_PIRQ */
>>           u16 virq;      /* state == ECS_VIRQ */
>>       } u;
>> -    u8 priority;
>> -#ifndef NDEBUG
>> -    u8 old_state;      /* State when taking lock in write mode. */
>> -#endif
>> +    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
>> +
>> +    /* FIFO event channels only. */
>> +    u8  pending;
>> +    u8  priority;
>> +    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
>
> This field definitely isn't FIFO-only.

Oh, you are right.

> Also for all fields you touch anyway, may I ask that you switch to
> uint<N>_t or, in the case of "pending", bool?

Fine with me.

Would you object to switching the whole structure in this regard?


Juergen
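
A cut-down sketch of the layout trade-off being discussed (the field
names echo the patch, but both structs are illustrative, not the real
struct evtchn):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pre-patch style: xen_consumer and pending share one byte.  A write to
 * either bitfield compiles to a read-modify-write of the shared byte, so
 * two CPUs updating the two fields concurrently can lose an update
 * unless both paths hold the same lock. */
struct shared_byte {
    uint8_t  state;
    uint8_t  xen_consumer:3;
    uint8_t  pending:1;
    uint16_t notify_vcpu_id;
    uint32_t port;
};

/* Post-patch style: each field owns a whole byte, so updates become
 * independent single-byte stores.  The cost is visible in sizeof():
 * alignment padding around the extra bytes grows the struct, which is
 * the "more padding bytes" downside of further splitting. */
struct split_bytes {
    uint8_t  state;
    uint8_t  xen_consumer;
    uint16_t notify_vcpu_id;
    uint32_t port;
    uint8_t  pending;
    uint8_t  priority;
};
```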

--------------2BC00B69429FD899B1B88A01
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2BC00B69429FD899B1B88A01--

--8fjOsoXHK3aUncWCh5rGMM9UVGnCsIO5a--

--TT1hRZiCRHqfqHKBklAnfyKM1R4I6rr16
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+8+i0FAwAAAAAACgkQsN6d1ii/Ey+g
Awf/RqmtqWU4u7986qrW0DVLqlQ7kDvnUsKdtytwLVrrVgp2XA9BSd3FtehnA0UtdbzOWxID8kHN
6uRCM7mKO/YGT3+A7cJ+B/oLemIY9AllbIwRhiuHiZ0R8XeuX6ITqpoiCz4/YvcHeiVJRlBwpcNT
+Bdbt+MSf7NAFg7Z6Q57keGVon+pY+CW/vzfdvf+JjjK/QUIBbHHSN6HaKa8D38K2ElMi++BP/fa
AuJqwj/1cAx9w8QUkM7n3Q8IOvcOjx9d0fFeKH9zwHs/HgWtdXCoz8bWgb5Z3cMSS+Tx8ElWe6Q2
grDPRk6Atx4I/LC8uER7fnbMh4pXpjMJefLbD2xQ7g==
=P9zi
-----END PGP SIGNATURE-----

--TT1hRZiCRHqfqHKBklAnfyKM1R4I6rr16--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:20:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 12:20:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35871.67521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXIf-0001ZJ-Ga; Tue, 24 Nov 2020 12:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35871.67521; Tue, 24 Nov 2020 12:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXIf-0001Z6-9q; Tue, 24 Nov 2020 12:20:01 +0000
Received: by outflank-mailman (input) for mailman id 35871;
 Tue, 24 Nov 2020 12:19:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuHM=E6=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khXId-0001W3-6q
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 12:19:59 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.80]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab6357f3-f3c9-4f74-a78e-b8fd3044c125;
 Tue, 24 Nov 2020 12:19:58 +0000 (UTC)
Received: from DB6PR0601CA0046.eurprd06.prod.outlook.com (2603:10a6:4:17::32)
 by DB6PR0801MB1894.eurprd08.prod.outlook.com (2603:10a6:4:72::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.30; Tue, 24 Nov
 2020 12:19:55 +0000
Received: from DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:17:cafe::7f) by DB6PR0601CA0046.outlook.office365.com
 (2603:10a6:4:17::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Tue, 24 Nov 2020 12:19:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT043.mail.protection.outlook.com (10.152.20.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 12:19:55 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71");
 Tue, 24 Nov 2020 12:19:55 +0000
Received: from 0951a5ada50c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 11E58ACC-864A-46CD-8122-9E99350E9045.1; 
 Tue, 24 Nov 2020 12:19:41 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0951a5ada50c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 12:19:41 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6220.eurprd08.prod.outlook.com (2603:10a6:10:205::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Tue, 24 Nov
 2020 12:19:40 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 12:19:40 +0000
X-Inumbo-ID: ab6357f3-f3c9-4f74-a78e-b8fd3044c125
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9lPurmAyxjBr8vvqzQC466c9CBWgu4SVW3fNteI5ru4=;
 b=I//jMKKCqWidDa6eikEFXS1LGz3YrIo5UW3egmeyNKu4pn6l4MEn/rs372TWmmDa7m76jB1IyY2h24EnJveYP7OyU6q3JrNCeY+GWqD1YDCDP0Kc5FtFRbJmgolkrneLWv47NvbREr2YJuT4roWs4yGp5O9NiR90Uin1m2cZ3H4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e504c5c1d9bc8e27
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MAgOW6gzhDn+wMwnQmLZkCBGRbp4MDuGIPy69Vkojjmzv30NRqNIsgEJcb5dI7Amhq4/1CyYUQR0pvFCPOnel8i/gi4heOMgrkL16W5Nmq0wGOSsJJP0bHiKdzMgWhuTAq5Miki36jprWknBU+0GujYM+YAMjJv/7zyDoYP/IAEeFn2/Dd6GjSqPKlmrShtFE4oHQwx0WYgm8RSlM+e/vKybAwV8v4NWWNNMy3eSa7FtYXYBqrKKq979ExOD/xNcNQY4o+hyIRhO0Ktsg1dpOwWKHv648x1maS4fiuWPxviQPiERkz2ZLhYW4eAqzENHMlqg7vBIEg9j2kEcDWdZJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9lPurmAyxjBr8vvqzQC466c9CBWgu4SVW3fNteI5ru4=;
 b=gpd0zQ3kb/DYKEKZS9LlKTQDlhDFLp1E5UPXD5JbclZsQueORuPB6wVpKC7zziQK4GMC+rN/PKPL8b2RitpR6mFJZnJbfPz8XcYT2W+N9l4ImkAb71cusPZkFi1qDJMXzU/HZ24jyEUwQewBDUkR8ackhLmZ5kkMYhCWjDrhaB85q+OOJT37rC1TLS9s8zXaDidrpjN+xvFw8OAODqBusvxqtbSjCNNubDiKQgq3nKLNg7UcDlbwv4u89pktrSpSZ4lMnOll8h4HhQk5aqml7uFdwCKJislGgvUaZpgRNsZvzE9X0WKvXEYTPUCSNfDAdArrvaBBFHk8mjSNs4nZQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 2/6] xen/arm: mm: Remove ; at the end of mm_printk()
Thread-Topic: [PATCH RFC 2/6] xen/arm: mm: Remove ; at the end of mm_printk()
Thread-Index: AQHWvqdY1rahTHlLkUqbsCGy2n+E1qnXOtqA
Date: Tue, 24 Nov 2020 12:19:40 +0000
Message-ID: <6D9F19C7-DAC2-4B29-93D8-B7F47DC90AB4@arm.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-3-julien@xen.org>
In-Reply-To: <20201119190751.22345-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 3befa256-49ff-4b83-fde3-08d890734094
x-ms-traffictypediagnostic: DBBPR08MB6220:|DB6PR0801MB1894:
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB18943DAEFFB9BBFD5D5AF9BA9DFB0@DB6PR0801MB1894.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4502;OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 PcNS2Q0oXvgO9OFRh476eq7x2RxKm2+gesLfrPA4yqVVHVQu6H9hhmvuOVwXr3a7XKGpJ2fLVObxjMR/u5fCuFulaDL24OAl6dE/YUZZeggKQFPvLQ0L9EPf7e2zmED2BRug4amvts5ZI4MfgdHTnK32fv+W5roVnZxwwoXflgXcbAQtJR9ReGufShGwJfFPoTAoiJS22TKRZ+zSpSyir/JoJhS1LjY+crin9QLalYt2OFh3zEZw3Rh1vFoxxzUW6gXxk4Q6ryoFKIsKW+qzQnu17d+aOM7GeRCULJnKU1g1kgQKqeG/loAvwfXmcscptetOMiF1j2pA+3LxlsgDKg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(376002)(346002)(39860400002)(136003)(66946007)(6506007)(2906002)(8936002)(36756003)(54906003)(86362001)(91956017)(8676002)(76116006)(53546011)(186003)(26005)(6916009)(478600001)(66556008)(6486002)(4326008)(4744005)(64756008)(316002)(33656002)(83380400001)(66476007)(66446008)(5660300002)(6512007)(71200400001)(2616005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:

Hi Julien,

> On 19 Nov 2020, at 19:07, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The ; at the end of mm_printk() means the following code will not build
> correctly:
> 
> if ( ... )
>    mm_printk(...);
> else
>    ...
> 
> As we treat the macro as a function, we want to remove the ; at the end
> of it.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> ---
> xen/arch/arm/mm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 4dd886f7c80d..59f8a3f15fd1 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -59,7 +59,7 @@ mm_printk(const char *fmt, ...) {}
>     {                                       \
>         dprintk(XENLOG_ERR, fmt, ## args);  \
>         WARN();                             \
> -    } while (0);
> +    } while (0)
> #endif
> 
> /*
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:21:22 2020
Date: Tue, 24 Nov 2020 13:21:02 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
References: <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>

On Tue, Nov 24, 2020 at 11:05:12AM +0100, Jan Beulich wrote:
> On 23.11.2020 18:39, Manuel Bouyer wrote:
> > On Mon, Nov 23, 2020 at 06:06:10PM +0100, Roger Pau Monné wrote:
> >> OK, I'm afraid this is likely too verbose and messes with the timings.
> >>
> >> I've been looking (again) into the code, and I found something weird
> >> that I think could be related to the issue you are seeing, but haven't
> >> managed to try to boot the NetBSD kernel provided in order to assert
> >> whether it solves the issue or not (or even whether I'm able to
> >> repro it). Would you mind giving the patch below a try?
> > 
> > With this, I get the same hang but XEN outputs don't wake up the interrupt
> > any more. The NetBSD counter shows only one interrupt for ioapic2 pin 2,
> > while I would have about 8 at the time of the hang.
> > 
> > So, now it looks like interrupts are blocked forever.
> 
> Which may be a good thing for debugging purposes, because now we have
> a way to investigate what is actually blocking the interrupt's
> delivery without having to worry about more output screwing the
> overall picture.
> 
> > At
> > http://www-soc.lip6.fr/~bouyer/xen-log5.txt
> > you'll find the output of the 'i' key.
> 
> (XEN)    IRQ:  34 vec:59 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
> 
> (XEN)     IRQ 34 Vec 89:
> (XEN)       Apic 0x02, Pin  2: vec=59 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0 dest_id:00000001
> 
> (XEN) ioapic 2 pin 2 gsi 34 vector 0x67
> (XEN)   delivery mode 0 dest mode 0 delivery status 0
> (XEN)   polarity 1 IRR 0 trig mode 1 mask 0 dest id 0
> 
> IOW from guest pov the interrupt is entirely idle (mask and irr clear),
> while Xen sees it as both in-flight and irr also already having become
> set again. I continue to suspect the EOI timer not doing its job. Yet
> as said before, for it to have to do anything in the first place the
> "guest" (really Dom0 here) would need to fail to EOI the IRQ within
> the timeout period. Which in turn, given your description of how you
> handle interrupts, cannot be excluded (i.e. the handling may simply
> take "slightly" too long).

I've tried to force some of those scenarios myself by modifying the
code, but couldn't trigger the same behaviour. I guess the NetBSD case
is difficult to recreate.

> What we're missing is LAPIC information, since the masked status logged
> is unclear: (-MM) isn't fully matching up with "mask=0". But of course
> the former is just a software representation, while the latter is what
> the RTE holds. IOW for the interrupt to not get delivered, there needs
> to be this or a higher ISR bit set (considering we don't use the TPR),
> or (I think we can pretty much exclude this) we'd need to be running
> with IRQs off for extended periods of time.

Let's dump the physical lapic(s) IRR and ISR together with the
IO-APIC state. Can you please apply the following patch and use the
'i' key again? (please keep the previous patch applied)

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 60627fd6e6..c33d682b69 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -1547,3 +1547,24 @@ void check_for_unexpected_msi(unsigned int vector)
 {
     BUG_ON(apic_isr_read(vector));
 }
+
+static DEFINE_SPINLOCK(dump_lock);
+void dump_lapic(void *unused)
+{
+    unsigned int i;
+    unsigned long flags;
+
+    spin_lock_irqsave(&dump_lock, flags);
+    printk("CPU %u APIC ID %u\n", smp_processor_id(), apic_read(APIC_ID));
+
+    printk("IRR ");
+    for ( i = APIC_ISR_NR; i-- > 0; )
+        printk("%08x", apic_read(APIC_IRR + i * 0x10));
+
+    printk("\nISR ");
+    for ( i = APIC_ISR_NR; i-- > 0; )
+        printk("%08x", apic_read(APIC_ISR + i * 0x10));
+    printk("\n");
+
+    spin_unlock_irqrestore(&dump_lock, flags);
+}
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e66fa99ec7..92edb3000a 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2470,6 +2470,7 @@ static const char * delivery_mode_2_str(
     }
 }
 
+void dump_lapic(void *unused);
 void dump_ioapic_irq_info(void)
 {
     struct irq_pin_list *entry;
@@ -2516,6 +2517,9 @@ void dump_ioapic_irq_info(void)
             entry = &irq_2_pin[entry->next];
         }
     }
+
+    dump_lapic(NULL);
+    smp_call_function(dump_lapic, NULL, true);
 }
 
 static unsigned int __initdata max_gsi_irqs;



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:23:36 2020
Subject: Re: [PATCH v2] xen: add support for automatic debug key actions in
 case of crash
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201120131306.24388-1-jgross@suse.com>
 <e7cc6511-d741-c7dd-5c35-ab9cf031d4b5@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <30821266-e800-c2f2-a903-8c361d0bb5dd@suse.com>
Date: Tue, 24 Nov 2020 13:23:32 +0100
In-Reply-To: <e7cc6511-d741-c7dd-5c35-ab9cf031d4b5@suse.com>


On 24.11.20 12:27, Jan Beulich wrote:
> On 20.11.2020 14:13, Juergen Gross wrote:
>> @@ -507,6 +509,42 @@ void __init initialize_keytable(void)
>>       }
>>   }
>>
>> +#define CRASHACTION_SIZE  32
>> +static char crash_debug_panic[CRASHACTION_SIZE];
>> +static char crash_debug_hwdom[CRASHACTION_SIZE];
>> +static char crash_debug_watchdog[CRASHACTION_SIZE];
>> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
>> +static char crash_debug_debugkey[CRASHACTION_SIZE];
>> +
>> +static char *crash_action[CRASHREASON_N] = {
>
> Considering the sole use below, I think there can be two "const"
> added here. With this single use I also wonder whether this
> array wouldn't better be private to that function.

Both fine with me.

>
>> +    [CRASHREASON_PANIC] = crash_debug_panic,
>> +    [CRASHREASON_HWDOM] = crash_debug_hwdom,
>> +    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
>> +    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
>> +    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
>> +};
>> +
>> +string_runtime_param("crash-debug-panic", crash_debug_panic);
>> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
>> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
>> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
>
> This one probably wants a CONFIG_KEXEC conditional around it,
> such that requests to set it won't appear to be "okay" on !KEXEC
> builds. At which point the doc probably also wants to mention the
> conditional availability of this option.

Yes.

>
>> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
>> +
>> +void keyhandler_crash_action(enum crash_reason reason)
>> +{
>> +    const char *action = crash_action[reason];
>
> In order to avoid cascade problems when the system's already in
> trouble, maybe better to bounds check "reason" before using as
> array index and, also with the CONFIG_KEXEC related adjustment
> requested above in mind, ...
>
>> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
>> +
>> +    while ( *action )
>
> ... perhaps also better to check action against NULL before
> de-referencing?

Okay (to both).

And I only now realized that get_irq_regs() is x86 only. I'll add a
dummy Arm function always returning NULL.


Juergen



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:26:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 12:26:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35903.67556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXOv-0002he-Rd; Tue, 24 Nov 2020 12:26:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35903.67556; Tue, 24 Nov 2020 12:26:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXOv-0002hX-OW; Tue, 24 Nov 2020 12:26:29 +0000
Received: by outflank-mailman (input) for mailman id 35903;
 Tue, 24 Nov 2020 12:26:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khXOt-0002hQ-UQ
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 12:26:28 +0000
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50273573-a0da-4c5c-82a2-195c84ec28b4;
 Tue, 24 Nov 2020 12:26:27 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id 7so15888720qtp.1
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 04:26:27 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id t2sm8643187qkb.2.2020.11.24.04.26.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 04:26:25 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khXOt-0002hQ-UQ
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 12:26:28 +0000
X-Inumbo-ID: 50273573-a0da-4c5c-82a2-195c84ec28b4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 50273573-a0da-4c5c-82a2-195c84ec28b4;
	Tue, 24 Nov 2020 12:26:27 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id 7so15888720qtp.1
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 04:26:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=3VmFnou2nD4lcYGUPrMqThXCue7L4UhclP4oij42Kbk=;
        b=Q2pmLjkQax53DfEJqAJ++rfXRuCUfTMdSrNvTnPX432cf0gQiAL3+rv3sWj3pG2My/
         oRdarF4chfZ/NGCWEsg2aUxLBAr7DM0+LX5j3dmwR37bMrIIw9a2t0+E+kgW+uzsRvYc
         g8kM86RUe+4+vv//4l1WUE0yyA289I+AFdLve44wNIShUP964g0YZBC6/O2hSjbZ07AU
         S+AiwdwqWWOpdsi3wwn8TVFeRWE/u7jWG4pRzv/CGg3KS5VD7Ier0L558UzN4+P3vh8c
         3eIrk+wHnJy7NT29VnOUcXmovGz+97HAGUQWfQatJC0i/4bkcHptLw7Lu0lLat1XLRR0
         0C4A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=3VmFnou2nD4lcYGUPrMqThXCue7L4UhclP4oij42Kbk=;
        b=eps1XF5mtztgNbAmcgGlsB5qnHLn5DQ2RK5XsM4AENWTDvDNW/7m2SxpZMZA4cDOLA
         rpcQ2nVohvzfBxGNciubPNxxT2n3gVD7faIr3T7wNnku8bcvruOtQkFz3Nuy5zRaWzj/
         VvpYzK/Si0Ps62qjszDGvh+WDgwj6vdnV6Dkdc286QPT50iqUjqkzX1PVR5NNtHfP3rh
         F13WKVLvH/h/xT2ui8kQcHVQ7x96b0+rPSCxEdIth1ZHywLoMEYInEi4fhw6SnijjcRq
         lu1LbRXEfvBK9GfRmiiQ1fkB8yRW4TpbAh0Ma3YoH/ZLhIR82JzfmHpzRXokVOT6U98X
         YfvQ==
X-Gm-Message-State: AOAM531PGdcHRkffALkMejC+pSMIgCB4raEYuQ9mVEa24g/Q9pqrBMg4
	hvbgL7S60hqRnppGqXvvA6Y=
X-Google-Smtp-Source: ABdhPJzEc+iLQWYgDMndDYVqthGkMbJAnqvSsODxlzYXE5vX4s2url7RgqZoX9eivJ6nNv6zlRow0w==
X-Received: by 2002:aed:3c42:: with SMTP id u2mr4081287qte.159.1606220786736;
        Tue, 24 Nov 2020 04:26:26 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 07:26:03 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 01/20] blk-cgroup: fix a hd_struct leak in
 blkcg_fill_root_iostats
Message-ID: <X7z7215hVXzg3FGA@mtj.duckdns.org>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-2-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-2-hch@lst.de>

On Wed, Nov 18, 2020 at 09:47:41AM +0100, Christoph Hellwig wrote:
> disk_get_part needs to be paired with a disk_put_part.
> 
> Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun
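The pairing rule the patch enforces can be sketched with a toy refcount model (illustrative only: `part_get`/`part_put` here stand in for the real `disk_get_part`/`disk_put_part`, and this is not the actual blk-cgroup code):

```c
#include <assert.h>

/* Toy model of the get/put discipline behind the fix: every successful
 * "get" takes a reference that must be dropped by exactly one "put",
 * on every path out of the loop body. */
static int refs;

static void *part_get(void)      { refs++; return &refs; }
static void part_put(void *part) { (void)part; refs--; }

/* Iterate nparts partitions, the way blkcg_fill_root_iostats walks
 * disks; returns the number of leaked references (0 when balanced). */
int fill_iostats(int nparts)
{
    for (int i = 0; i < nparts; i++) {
        void *part = part_get();
        /* ... accumulate per-partition stats ... */
        part_put(part);   /* the put the patch pairs with the get */
    }
    return refs;
}
```

Without the `part_put()` call the counter never returns to zero, which is exactly the hd_struct leak the patch title describes.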


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:37:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 12:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35912.67568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXZh-0003ga-TT; Tue, 24 Nov 2020 12:37:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35912.67568; Tue, 24 Nov 2020 12:37:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXZh-0003gT-QI; Tue, 24 Nov 2020 12:37:37 +0000
Received: by outflank-mailman (input) for mailman id 35912;
 Tue, 24 Nov 2020 12:37:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khXZh-0003gO-5w
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 12:37:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37d8de65-7865-4798-a96c-d72c1a860a8f;
 Tue, 24 Nov 2020 12:37:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79410AC2D;
 Tue, 24 Nov 2020 12:37:35 +0000 (UTC)
X-Inumbo-ID: 37d8de65-7865-4798-a96c-d72c1a860a8f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606221455; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QCDm5pogy+IZ0lOtJJutBjvsjA+rmanEIHT9rFjFzIA=;
	b=tMLVqdEkp3s5CI2EME+ZaMdeQJVF2J6uYuTJmzh7zATOZ1S7nszblAB1hP2H4QXFokgmCz
	efI8wy2Jk8H7lG5yLNlyxuRIt/fsRZdPhLOlAIAPXCFwAeWhJAeFyhHglPsdeY9FcELYBs
	NFuu/bnr3ygkno4O59wQi/PGYGc7gws=
Subject: Re: [PATCH v7 2/3] xen/events: modify struct evtchn layout
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-3-jgross@suse.com>
 <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
 <696314b9-18e3-e18d-10f2-a510e19438da@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9017e6a2-2fa0-4093-32a8-a256a58f4a33@suse.com>
Date: Tue, 24 Nov 2020 13:37:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <696314b9-18e3-e18d-10f2-a510e19438da@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 13:18, Jürgen Groß wrote:
> On 24.11.20 12:42, Jan Beulich wrote:
>> On 24.11.2020 08:01, Juergen Gross wrote:
>>> @@ -94,9 +93,10 @@ struct evtchn
>>>   #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
>>>   #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
>>>       u8  state;             /* ECS_* */
>>> -    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
>>
>> I see no reason to use a full byte for this one; in fact I
>> was considering whether it, state, and old_state couldn't
>> share storage (the latest when we run into space issues with
>> this struct). (In this context I'm also observing that
>> old_state could get away with just 2 bits, i.e. all three
>> fields would fit in a single byte.)
> 
> I think doing further compression now isn't really helping. It would
> just add more padding bytes and result in larger code.

I'm not meaning to ask to widen the use of bitfields right now
(unless this helps avoiding holes). But I'd like to not see the
one non-problematic use go away without this really being
necessary.

>> Also for all fields you touch anyway, may I ask that you switch to
>> uint<N>_t or, in the case of "pending", bool?
> 
> Fine with me.
> 
> Would you object to switching the whole structure in this regard?

I didn't dare to suggest you doing so. So no, I wouldn't mind.
However, there's more room then for what some would possibly
call bike shedding: The wider the scope of the conversion you
do the more relevant it'll become that strictly speaking there
ought to be (almost?) no use of fixed width types here, as per
./CODING_STYLE.

Jan
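Jan's packing observation can be checked with a standalone sketch. Field names follow the quoted struct evtchn excerpt, but the widths and the 3-bit XEN_CONSUMER_BITS value are assumptions for illustration, not Xen's actual layout: ECS_* values run 0..6, so 3 bits for state, 2 for old_state, and 3 for xen_consumer do fit in a single byte.

```c
#include <stdint.h>

#define XEN_CONSUMER_BITS 3   /* assumed width, for illustration only */

/* Shape of the quoted excerpt: a full byte for state plus a separate
 * bitfield byte for xen_consumer, plus old_state. */
struct evtchn_now {
    uint8_t state;                          /* ECS_* */
    uint8_t xen_consumer:XEN_CONSUMER_BITS; /* consumer in Xen if nonzero */
    uint8_t old_state;
};

/* Jan's point: ECS_* tops out at ECS_IPI == 6, so 3 bits suffice for
 * state, 2 for old_state, and all three fields can share one byte. */
struct evtchn_packed {
    uint8_t state:3;
    uint8_t old_state:2;
    uint8_t xen_consumer:XEN_CONSUMER_BITS;
};

unsigned now_size(void)    { return (unsigned)sizeof(struct evtchn_now); }
unsigned packed_size(void) { return (unsigned)sizeof(struct evtchn_packed); }
```

On common ABIs (GCC/Clang, where adjacent `uint8_t` bitfields share one allocation unit) the packed variant occupies a single byte, while the unpacked one needs at least two.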


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 12:45:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 12:45:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35931.67601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXgs-0004hl-6C; Tue, 24 Nov 2020 12:45:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35931.67601; Tue, 24 Nov 2020 12:45:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khXgs-0004he-36; Tue, 24 Nov 2020 12:45:02 +0000
Received: by outflank-mailman (input) for mailman id 35931;
 Tue, 24 Nov 2020 12:45:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IwAZ=E6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khXgq-0004fn-ER
 for xen-devel@lists.xen.org; Tue, 24 Nov 2020 12:45:00 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1498ab9e-817d-4733-b60a-98a139788c53;
 Tue, 24 Nov 2020 12:44:53 +0000 (UTC)
X-Inumbo-ID: 1498ab9e-817d-4733-b60a-98a139788c53
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606221893;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=v5+NeC+1f1FcIE26rPZNgQI1C2Y2AZyD7lZuHqqxYiI=;
  b=iIWkRyzHIiQI6qmttGVvmWtnlqizVwdvLeBlcvH8yvueSY4fNthAjkik
   X7Rg2dZgTFzNYQUfGMyoeLzjVwbH/mCUXCs30+Etks2hPogef6MK2TNzr
   Xkd/gjiRhugJBqya7QfUL0mIqm+cgUJTm0EOZBTwl1N9/ix25CIoFUIoS
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: eKs0eZxbg0+zgHMPFHr9cMPGlBuwKOFKxUkti8i7fsjECZsruRuki7+T5vsXTSbpVveOryn+nw
 70n5WDvmk4SYc4JtS9wQAG8IJpRldZQ1kgpG4qROZktsFUlVGyifu19b62s7Ows6a2sIeYBXQ5
 GWTB9pXAHz+fkO/FYuYzYC7EXxJfpxYCqGv2WHBcJ5bc4+lE80YMPhiWdZglHTjR31dFOlXPr4
 72LWOUGusSSKHq6sCW5vBatreC8Pv45jRHM1mtTvLgNoTgD9P8bNHiQuAKC6jueEFfX7q4j/lB
 IAM=
X-SBRS: None
X-MesageID: 32165581
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,366,1599537600"; 
   d="scan'208";a="32165581"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AcyiBqK8YdNNzsr4yjSWsiGDDzUY0wVk+uWRBRKSQu6vvvJzboiEYqS3WBdUOjrNM+566a1QP3Eta/6QH/fx9cJkfRuHQWxVyk8J7ayBlfqofwiH4vLWInAMNBg6c24aS0iDJ0F/6R9mqfai3tzNkvGPKdBYjXjWN2Fu0FhVCnetkLQZCgNQqpM3gEEVOGjA3Y0fQJHDuzlqs2spKX/EMkZAvBQ68oAYWeJdNbm1N0VTxn9c7vMq7NPrJVcXQye9aYaVQQlylwmko2EsH+lOu/yuXcaULrfYojBzzPO4cRm0nzphEADoUkNFmbZdxCfpHue+iVrbudVFP8onU8JW1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bNiiNcKeu69XwjDnhN0RCQkefm9PB3+H4JgnQ+jvX+A=;
 b=XIbARE+jo6nK76OhUOBC14Oh9thzAOuS66Rh3AawtqDUgHGKuGbfXUTTTf29rBizqizNWOzS6uqB/Q7r++r5voVbSJ9Q18n4l3uKjp7wqfPNtuR9TazOGX9LeKTBnx+6h7LyaAQvCQRMwlT36MyhH3hAOahb7E04y4KrYCmOjHK+HYkH1vqHIWbWJXG1kpTD64yNVNYEXkOYnlwu1GH16etL0RJRLEnHjTbxI1hXh9sVSogMYU2zXoGr16dDadLBVZurtSel/SczS4hXWyrWygQQh9R/81QdUxTL2KvnYm5NPoNAw3UEHBi+MwMmf6bkwEG7SNls9ebFW0CQntiSeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bNiiNcKeu69XwjDnhN0RCQkefm9PB3+H4JgnQ+jvX+A=;
 b=eEN2V8Hh1KONWUQPx7v4lC9iduZaiW+JOp1H6Cb+yU6YiRkfa/vbGCTkLmme5omndJVprR6lQEBxaukY73bfHe86dovFJR/5M4ghQDJl9PQ3DRG4XObMX/HdqWAHA+PbaHYI9dHdtz0ErdY1Su98Ld7X/B02XYVvCilLuiKKRhc=
Date: Tue, 24 Nov 2020 13:44:43 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Xen.org security team <security@xen.org>
CC: <xen-announce@lists.xen.org>, <xen-devel@lists.xen.org>,
	<xen-users@lists.xen.org>, <oss-security@lists.openwall.com>, Xen.org
 security team <security-team-members@xen.org>
Subject: Re: Xen Security Advisory 355 v2 - stack corruption from XSA-346
 change
Message-ID: <20201124124443.jhl25ldkhkawmzdb@Air-de-Roger>
References: <E1khX2v-0002f4-3b@xenbits.xenproject.org-0>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <E1khX2v-0002f4-3b@xenbits.xenproject.org-0>
X-ClientProxiedBy: MR2P264CA0017.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 3147ea02-0eb6-4151-5739-08d89076bab1
X-MS-TrafficTypeDiagnostic: DM5PR03MB3291:
X-Microsoft-Antispam-PRVS: <DM5PR03MB32916E80552EF3FC151D88A18FFB0@DM5PR03MB3291.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: N2f/L8i+iwiAFRJSzfb1OMS02pTfA9Ka1VvEqH7hp3zn/00+4emuWofp4IGZOP3TFuT/+q8p6Orz+7S4pc1q68S6KLS88XUGypOtG47umjSgdku9MjSK4IulM0k6OGVENE2annZeHFn2jTE2zQOQwL2t8WubzdKMDRL0GzLyE15SUFYSDskPa881GsRuMBRt7EyzKDclJPo166HEnrbPqaDe1GbnZT4DlSAQd50mjUqP9HNFAGflfmHKOzXSU8b6FchTSEa+f692CtHBuTTlx9wIOj9ZSj4CNrg8mJrzChhQflnkt3tz18soAhTbFqtg
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(366004)(376002)(39860400002)(396003)(186003)(16526019)(66556008)(9686003)(83380400001)(6486002)(478600001)(6666004)(86362001)(8676002)(85182001)(2906002)(1076003)(26005)(956004)(33716001)(15650500001)(4326008)(316002)(6916009)(5660300002)(6496006)(66946007)(66476007)(8936002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: yG39Q838Qld/lZnJpxGKg83PjAV0eqUt1RqT4k8eQaQgqkMgT4S7i74MywZ1tYJpAnSepPAV4sDBx+SjSnnAVjGHiKkH8ct3Jo3OFfBeJ5lGs1c+95wym1yqwNAowKF8qI8z61+EVA0PTRMTLTq6B433AVFT1+Hvu7ThesxG4aEpK8AGFTTJ1skreC2obC5bm8JAIlpcdmkNlWnmc5PYBRZaAIkgGo6N3aOU8QYfBev0DBuXS5r73jLAXSi2+9/VHQTGriC2uHVLRa+OGunzlqtM22q85X+0JP1HhCa/MiGWqgyNhPtChPcw4tS5i/nIIzjJxbSwxE9DD5XFfdsA8IrY4iJDuH/fdjKpLJT8FeEibhxQDU8sGcHAGFDIiCCg22UcnzoBZ8Dlvm8QcJ2SnuMSHjAKfJliDkIDE8NSdqnng3euKIfZgpkHv/58k0lTLdFc+KrSjk9j/+iJMjUYH745yZuzImRuP6FRwTN1g487hn7RNQ7YDn85BDErV4BfFvOpZcCOoYITXHArttk8/NiVUt1+xkIoE8Zd8alFbJbbn1tiMGBBEwvXhkjxbezn1liUB4yvHs8K1134eqw8wi3jT86JRSuK2OKkr7K4+MZqcoXSgtFtnlvbVkpj3JJH+Q7hYs37J5JEtFTdTbw0+70T4z4bIFqI4l68L5wfceDCdaC+bKZ6mC7iDFpTuBMNeKVr6MOelBuhALzpIWjuGSOsv53TAXWD37HXmUa88a4I7Ias0Q32JuknD4GJkJSuSai5HG+LKJcfwzU/gMlFMGVA5OLrHdIAXr250c2ihQsGjYB0M6tRq2dj6rQI/Zof7CtVB9H8MbYxexXtfn8XDoWkevhn1wNK5AHC+0o4Kc2TcMucXndKNPpiMJ4iotyuOmdDU+DJaUwpTwVW5BigNA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 3147ea02-0eb6-4151-5739-08d89076bab1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 12:44:48.9154
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7z1s1dAgcqlMw3grk9KStbeGbU6L9V1hv7xHY6mGHbZJ9VLCqMKaFKVKk9YQavywUWdQQYJBfM8bcxVqxTx14g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3291
X-OriginatorOrg: citrix.com

On Tue, Nov 24, 2020 at 12:03:45PM +0000, Xen.org security team wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
> 
>                     Xen Security Advisory XSA-355
>                               version 2
> 
>                  stack corruption from XSA-346 change
> 
> UPDATES IN VERSION 2
> ====================
> 
> Added metadata file.
> 
> Public release.
> 
> ISSUE DESCRIPTION
> =================
> 
> One of the two changes for XSA-346 introduced an on-stack array.  The
> check for guarding against overrunning this array was off by one,
> allowing for corruption of the first stack slot immediately following
> this array.
> 
> IMPACT
> ======
> 
> A malicious or buggy HVM or PVH guest can cause Xen to crash, resulting
> in a Denial of Service (DoS) to the entire host.  Privilege escalation
> as well as information leaks cannot be excluded.
> 
> VULNERABLE SYSTEMS
> ==================
> 
> All Xen versions which have the patches for XSA-346 applied are
> vulnerable.
> 
> Only x86 HVM and PVH guests can leverage the vulnerability.  Arm guests
> and x86 PV guests cannot leverage the vulnerability.
> 
> Only x86 HVM and PVH guests which have physical devices passed through
> to them can leverage the vulnerability.

There's no passthrough support for x86 PVH guests yet, so this
issue only affects x86 HVM guests with passthrough.

Roger.
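The class of bug the advisory describes (an off-by-one guard on an on-stack array) has a familiar shape. A minimal sketch, not Xen's actual XSA-346 code: a `<=` comparison admits index N and so allows a write to the first stack slot past the array.

```c
#include <stddef.h>

#define SLOTS 4   /* size of a hypothetical on-stack array */

/* Buggy guard: admits idx == SLOTS, i.e. one element past the end,
 * corrupting whatever the compiler placed after the array. */
int buggy_guard_admits(size_t idx)
{
    return idx <= SLOTS;   /* off by one: should be '<' */
}

/* Fixed guard: only indices 0 .. SLOTS-1 pass. */
int fixed_guard_admits(size_t idx)
{
    return idx < SLOTS;
}
```

With the corrected comparison, an index equal to the array size is rejected instead of clobbering the adjacent stack slot.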


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35959.67617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYEW-0007gd-1S; Tue, 24 Nov 2020 13:19:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35959.67617; Tue, 24 Nov 2020 13:19:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYEV-0007gW-Uq; Tue, 24 Nov 2020 13:19:47 +0000
Received: by outflank-mailman (input) for mailman id 35959;
 Tue, 24 Nov 2020 13:19:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khYEU-0007gR-Du
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:19:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1dbd80f-5505-4f0e-be61-8067dc6d77e1;
 Tue, 24 Nov 2020 13:19:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2464FAC2D;
 Tue, 24 Nov 2020 13:19:44 +0000 (UTC)
X-Inumbo-ID: c1dbd80f-5505-4f0e-be61-8067dc6d77e1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606223984; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8Apj1z0TnJBFEQzKcpxGgYAQeXqw9VJt7qawygadYbg=;
	b=ZlPM2Kqh8f0uAKixkiQ/6w+uteYNYpPaYwK2ojXE5WsEc3gvq0NgBsxhVF5ObRqdoGcWmX
	IB24Sa2IiLMH1YSTpdPIrYwZ1HJQ1DK3/61YfKbPDX5PHti29Yw++w/FU/eCgiA4Dfg1MJ
	TKeiik4Y5I9atR8AXxIXgqd5s16lKqg=
Subject: Re: [PATCH v7 2/3] xen/events: modify struct evtchn layout
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-3-jgross@suse.com>
 <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
 <696314b9-18e3-e18d-10f2-a510e19438da@suse.com>
 <9017e6a2-2fa0-4093-32a8-a256a58f4a33@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <78318d0b-b80e-c50e-c3a7-7a0d2e616dc4@suse.com>
Date: Tue, 24 Nov 2020 14:19:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9017e6a2-2fa0-4093-32a8-a256a58f4a33@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TkxJVRfb5WQYg5Sl8K4TnW14HXuhnC4Tt"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TkxJVRfb5WQYg5Sl8K4TnW14HXuhnC4Tt
Content-Type: multipart/mixed; boundary="n0EZJFrZMQQbhjTZEkQA5oQ1kEpObiQhM";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <78318d0b-b80e-c50e-c3a7-7a0d2e616dc4@suse.com>
Subject: Re: [PATCH v7 2/3] xen/events: modify struct evtchn layout
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-3-jgross@suse.com>
 <440bced0-97ec-33c4-f6fa-01850777e5c2@suse.com>
 <696314b9-18e3-e18d-10f2-a510e19438da@suse.com>
 <9017e6a2-2fa0-4093-32a8-a256a58f4a33@suse.com>
In-Reply-To: <9017e6a2-2fa0-4093-32a8-a256a58f4a33@suse.com>

--n0EZJFrZMQQbhjTZEkQA5oQ1kEpObiQhM
Content-Type: multipart/mixed;
 boundary="------------80CFB60A314653583D64E407"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------80CFB60A314653583D64E407
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.11.20 13:37, Jan Beulich wrote:
> On 24.11.2020 13:18, Jürgen Groß wrote:
>> On 24.11.20 12:42, Jan Beulich wrote:
>>> On 24.11.2020 08:01, Juergen Gross wrote:
>>>> @@ -94,9 +93,10 @@ struct evtchn
>>>>    #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
>>>>    #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
>>>>        u8  state;             /* ECS_* */
>>>> -    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
>>>
>>> I see no reason to use a full byte for this one; in fact I
>>> was considering whether it, state, and old_state couldn't
>>> share storage (the latest when we run into space issues with
>>> this struct). (In this context I'm also observing that
>>> old_state could get away with just 2 bits, i.e. all three
>>> fields would fit in a single byte.)
>>
>> I think doing further compression now isn't really helping. It would
>> just add more padding bytes and result in larger code.
> 
> I'm not meaning to ask to widen the use of bitfields right now
> (unless this helps avoiding holes). But I'd like to not see the
> one non-problematic use go away without this really being
> necessary.

Okay.

> 
>>> Also for all fields you touch anyway, may I ask that you switch to
>>> uint<N>_t or, in the case of "pending", bool?
>>
>> Fine with me.
>>
>> Would you object to switching the whole structure in this regard?
> 
> I didn't dare to suggest you doing so. So no, I wouldn't mind.
> However, there's more room then for what some would possibly
> call bike shedding: The wider the scope of the conversion you
> do the more relevant it'll become that strictly speaking there
> ought to be (almost?) no use of fixed width types here, as per
> ./CODING_STYLE.

No problem with that.


Juergen

--------------80CFB60A314653583D64E407
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------80CFB60A314653583D64E407--

--n0EZJFrZMQQbhjTZEkQA5oQ1kEpObiQhM--

--TkxJVRfb5WQYg5Sl8K4TnW14HXuhnC4Tt
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+9CG8FAwAAAAAACgkQsN6d1ii/Ey8U
2Af/ZAPIasqOKBRPa6YEk2iXCe+fXgeI90o8Rg6ff3+EmQJxMzkbV4PGQ8EqtQZvWcjwxx5MA/VQ
medukuI+6zHkw6VmTNdeVYcYThXWYgAMNYI1hcsWThB423rTASCwuuLRKIF80rYzrEqttQk1eDsF
WUHHT+MASQPPZOdzmkq0dutJs3fImZc8dfq7+1un/AdCjiP1YqFQhsiWu7e3RCpl9GGSZo4hxqmZ
K7za6mcvYEU/3sO8lT5Jv53OMkNVP8Nj0flimVcPiJ83N7FMerI0I+OFs8mJZLL3XsypJ7MdRG4n
wGNtq0Ymq7e9TsLbFCvVwT6m88I4oHZe1bj9qDyg6Q==
=Nl2X
-----END PGP SIGNATURE-----

--TkxJVRfb5WQYg5Sl8K4TnW14HXuhnC4Tt--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:25:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:25:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35972.67650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYKC-0000EG-Tv; Tue, 24 Nov 2020 13:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35972.67650; Tue, 24 Nov 2020 13:25:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYKC-0000E9-Pv; Tue, 24 Nov 2020 13:25:40 +0000
Received: by outflank-mailman (input) for mailman id 35972;
 Tue, 24 Nov 2020 13:25:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuHM=E6=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khYKB-0000E4-8q
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:25:39 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.55]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 639e6375-9945-40d2-a893-011a86837155;
 Tue, 24 Nov 2020 13:25:34 +0000 (UTC)
Received: from AM5PR0601CA0081.eurprd06.prod.outlook.com (2603:10a6:206::46)
 by HE1PR0802MB2140.eurprd08.prod.outlook.com (2603:10a6:3:c2::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Tue, 24 Nov
 2020 13:25:28 +0000
Received: from VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::ef) by AM5PR0601CA0081.outlook.office365.com
 (2603:10a6:206::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 24 Nov 2020 13:25:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT061.mail.protection.outlook.com (10.152.19.220) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 13:25:27 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71");
 Tue, 24 Nov 2020 13:25:27 +0000
Received: from ea14c71aca94.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A60FD409-6F1B-4F55-8E04-BC6E5D2A7C65.1; 
 Tue, 24 Nov 2020 13:25:21 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ea14c71aca94.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 13:25:21 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4775.eurprd08.prod.outlook.com (2603:10a6:10:da::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.29; Tue, 24 Nov
 2020 13:25:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 13:25:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=tuHM=E6=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1khYKB-0000E4-8q
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:25:39 +0000
X-Inumbo-ID: 639e6375-9945-40d2-a893-011a86837155
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown [40.107.5.55])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 639e6375-9945-40d2-a893-011a86837155;
	Tue, 24 Nov 2020 13:25:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UVbzaMWJSPUqrJbAHs5E5esD8fonsBvUNk2PUepUqrg=;
 b=HzYX0qo5saOwFrksSm4fj99M1e3vjK8TnSE77GR5GBq1p06NZsCe0nMfaHGUmz6uYiVjVJ06CSmoxFCP9uuknaPMadMQI/FmPbwYolYc3MuIuTvKOrqa4nZwqJlf59KjIW+CL8xWIZWUQ+jgxAl5y9TsdcIXbmvgnmgMZqMEASs=
Received: from AM5PR0601CA0081.eurprd06.prod.outlook.com (2603:10a6:206::46)
 by HE1PR0802MB2140.eurprd08.prod.outlook.com (2603:10a6:3:c2::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Tue, 24 Nov
 2020 13:25:28 +0000
Received: from VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::ef) by AM5PR0601CA0081.outlook.office365.com
 (2603:10a6:206::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 24 Nov 2020 13:25:28 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT061.mail.protection.outlook.com (10.152.19.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 13:25:27 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71"); Tue, 24 Nov 2020 13:25:27 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5015fdd956e2b1db
X-CR-MTA-TID: 64aa7808
Received: from ea14c71aca94.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id A60FD409-6F1B-4F55-8E04-BC6E5D2A7C65.1;
	Tue, 24 Nov 2020 13:25:21 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ea14c71aca94.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Tue, 24 Nov 2020 13:25:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ka0P/TM/BYr+JOI9gvGLjMTtQ+hK8l3vWU/w1ttQ3YIRIhbnKosgV7eMNcpMRu2VwvblCY78DsUttkou5OGc06stpR1zQaUb/IvAx0hhUxoctBjycYLVc2TTVLsFddteO4bFCIdpphYvgkJwWagiGYCdXdKopNslYCiqLy4R3T9GvhZzqZeN3pFliXZZutp01kIuAUyuIM2+0EO2XKPSdflh0MSInUKai3AqPV3nsJh/VlWBUGq7VphVt8G2zYoG66stAIavgjNnFTqV0sjcAMXVDBp5srBbXsCzRpi1pVOoLcDfuoWkGTkZUclyORLDg3twMxNEa6HiLUHvvSr9pA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UVbzaMWJSPUqrJbAHs5E5esD8fonsBvUNk2PUepUqrg=;
 b=RXIrkgTFk3+BcuTNeQ2l+dZK2nGmAQ8MOf8VP96oI8DG3WKo2oKbxTNlBxjh30konw1mWAqe3YoPS851oUNR7jpMQaSvYolVn2maUn1GnD2gpi5faChuUB5NtKkYIqiez/U0OPRIQUIOtzMk6F5R5WlbnbAOWVD57dgmBbb/qNA1fyjiQz6MtBXy5uy3BzM3IPgZyPRP8X0l0A+wN5w1HrDX3F3v1lP/CkCg8D22PQakkFUQJ520o8C20C15Gt268JnBOrh9lJyNwLD8Dp8B/DvwA6jkrEFdIsFA6BJtk9cB1+ob9JKKGOH7pkkJbQPjLJKeAm2YB38jW112ZRashw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UVbzaMWJSPUqrJbAHs5E5esD8fonsBvUNk2PUepUqrg=;
 b=HzYX0qo5saOwFrksSm4fj99M1e3vjK8TnSE77GR5GBq1p06NZsCe0nMfaHGUmz6uYiVjVJ06CSmoxFCP9uuknaPMadMQI/FmPbwYolYc3MuIuTvKOrqa4nZwqJlf59KjIW+CL8xWIZWUQ+jgxAl5y9TsdcIXbmvgnmgMZqMEASs=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4775.eurprd08.prod.outlook.com (2603:10a6:10:da::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.29; Tue, 24 Nov
 2020 13:25:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 13:25:18 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 3/6] xen/arm: setup: Call
 unregister_init_virtual_region() after the last init function
Thread-Topic: [PATCH RFC 3/6] xen/arm: setup: Call
 unregister_init_virtual_region() after the last init function
Thread-Index: AQHWvqdS2mJLaroJgEqXj0ki52YbT6nXTTEA
Date: Tue, 24 Nov 2020 13:25:18 +0000
Message-ID: <18463E31-62DE-4859-8453-CF0DCD755103@arm.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-4-julien@xen.org>
In-Reply-To: <20201119190751.22345-4-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 29e7798a-e81a-460b-8740-08d8907c687e
x-ms-traffictypediagnostic: DBBPR08MB4775:|HE1PR0802MB2140:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB2140F967DFCC6540C4DD18EF9DFB0@HE1PR0802MB2140.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4502;OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 2iONf+4mtKnrmzBwZDzpGmExyQD82UYrgJ4Byh9F/efsKBLHUrKzRnm0sHauQJRVhZvWBom7BgdV0l14sZOf26d/sCnojmn9viBqGTB8KD4LWLVRK3EZzf4QDyKD9q7z+gx1MXiPnSTXe2grojUHHsmcKAkbSMorjYWUP3S4qAEZOnP5D+hn43byC6/2fTcAqVXLFmVznfrnij9mn95Jk3w8nbelyhitppso7sUaZB5Ri0s8vIumMh5vRERVM/ZzGkxE+x7OSBiLpUijq+SgeuQZ9wRC2W3d5I5nzFVdanlLBmxEMF1MN19mly++wEUj9Ct4jnnfeeSUtDerK/J5Ng==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(396003)(39860400002)(136003)(376002)(6486002)(76116006)(2616005)(26005)(71200400001)(186003)(478600001)(4326008)(8936002)(91956017)(64756008)(66556008)(86362001)(66446008)(66946007)(66476007)(8676002)(5660300002)(316002)(4744005)(53546011)(6506007)(83380400001)(2906002)(36756003)(54906003)(33656002)(6916009)(6512007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 /yrVEnu5gOeDj3TvwKOTptCdlH3/nnQzWIrVqt4DkYAYpJ/eGynC4MuW0bf3fbyI+D0QYXHNlyd2T07c7GggsJGYPPIozIm5Z9AkDZvA0DxfddeE66Ffyp4rbPf9sTekrx3z5nTTttt9Nh1TamXhcLWhBhtf8T3zbiGE6B0HuIAe67KMmgaxx41XwM1sOuNhkB1Zxu//EAHO8cAYwB8cDvJXBMUjRmxjpb70+FixpmWrBmAgJj5wVJaxQt+mMbW0cUZ32sQ6eK/0PScogRZsA7xZDSQUCWUHGRrftHIhMTpFkfccQEgMPkAA2E7lOx3mJxt7gdP7eOpXYcSB4IxC4rKBL8JaQlLbV/rljNh0NvEYQKdhWuUpWlKhQmM2gE3easwyiWveE+7AcSXzEFW85asK3evNWGQhlHtmqvI2U9th1WDxnrFvMB1my700QecWIETKbNfXqqyalThVr/1MB8jO2zYHCIqrnrBe+pi4Zau25bq9GHTCEhBt3UU+/83IMB+UQq/M58mbx2HP0Bm2M9fb4b63nsVNJPOduuuDXPcZPQjgsICnzdCxqgLpwab9UWeLsB8hcIEEFaEjAYhUbx66Fcao3+SBh03IesI4YoATXKOAQioCWijSs7nmTTlxvCgNaNWPImGqI3WBVdkw8+53iri1JgOuMEFvs1Pa1Didlic8dFeLKb1N+GTQl+OsS+Yvc7IrDCjNtz5mJqm8M4NTvjxEtuJkA9N1XAfv34AuZt/ujIPmMmX8ghKV+PVoJVlHLqHDch/R5oHaScrrvFfcIxtjG3zAqAnSc1ZLFwwBKSL2Vl+CRJT9gRlwORYb9rdvKKySaVTOvHRUOP3X+wx70GuKd0faq3q9KvMd4nxiQ31Gma77XN7Y26VamaPAues0RxqBqQjLc6gGvajb2g==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9F1C948EB92A684DA2FB0189813EC9DD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4775
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	707d9288-e614-4e77-e34a-08d8907c633c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TUP3NFl94caR/VE6nW8KYA/+kwTKAu/ElVs/jLdC39gafiEKt83ckGa1Vz5nWkUTLUYW+V61JrNhVOVQZorB2+3JS0a7fJRKSL5BGW/2CBa05hzVuKJ3FkYzXDZcmo0IN189/FOZYE5pQp1oaS1BMuCiLamQW1D79PGHk/TWLRPlRg3Yc3PtQ5sIVO+sqXIpP1m3+FknO1mU3IX1tDfNM0RG9D8N2WPzRDcIw3qi8weP8+lFGnXODrJU+WF2U4incgMJXMSKsGrsZi2J0OcKaqIXG2aFQ0Qe6/qL2rf8T86DlnLHm1tJtBVS74rdTp36KptgWGiczh0S6iJj4zQIEuS/XujoUlB42f0UYippCfYH+GyAqBI3sgSw6GaEbj0gQsSP2d8wSOnrpg8yclbx5w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(396003)(136003)(346002)(376002)(46966005)(5660300002)(83380400001)(70206006)(47076004)(70586007)(356005)(81166007)(82740400003)(6512007)(4326008)(33656002)(53546011)(6506007)(8676002)(26005)(186003)(6862004)(36756003)(478600001)(86362001)(2906002)(54906003)(316002)(2616005)(107886003)(8936002)(336012)(6486002)(82310400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 13:25:27.5102
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 29e7798a-e81a-460b-8740-08d8907c687e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2140

Hi Julien,

> On 19 Nov 2020, at 19:07, Julien Grall <julien@xen.org> wrote:
>=20
> From: Julien Grall <jgrall@amazon.com>
>=20
> discard_initial_modules() is an init function: if the path contains a
> BUG() or WARN(), we still want to get the full stack trace.
>=20
> The init virtual region is now kept after the last init function has
> been called.
>=20
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/arch/arm/setup.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>=20
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 7fcff9af2a7e..2532ec973913 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -72,10 +72,11 @@ domid_t __read_mostly max_init_domid;
>=20
> static __used void init_done(void)
> {
> +    discard_initial_modules();
> +
>     /* Must be done past setting system_state. */
>     unregister_init_virtual_region();
>=20
> -    discard_initial_modules();
>     free_init_memory();
>     startup_cpu_idle_loop();
> }
> --=20
> 2.17.1
>=20



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35986.67662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYMs-0000Qs-Ey; Tue, 24 Nov 2020 13:28:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35986.67662; Tue, 24 Nov 2020 13:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYMs-0000Ql-BK; Tue, 24 Nov 2020 13:28:26 +0000
Received: by outflank-mailman (input) for mailman id 35986;
 Tue, 24 Nov 2020 13:28:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYMq-0000Qf-1v
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:25 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e2b4566-a3a9-49f0-b0ef-9759e8f25a47;
 Tue, 24 Nov 2020 13:28:20 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMV-0006Ut-HC; Tue, 24 Nov 2020 13:28:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYMq-0000Qf-1v
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:25 +0000
X-Inumbo-ID: 0e2b4566-a3a9-49f0-b0ef-9759e8f25a47
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0e2b4566-a3a9-49f0-b0ef-9759e8f25a47;
	Tue, 24 Nov 2020 13:28:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=kUYn0sikpHZNYDnsxFrAwXSb49C5P7DMdxps0LhhPGQ=; b=voamq2gIRabU8afAKL3jTQT86w
	3voeMXKzWRc968NV3Og6QI7vPIsFgyzT+g3y7G5VN1mzlkrcmMY4Qg7EwhHhprBp2pFWLMjK63ObP
	R7mWTCNSKQkV5l0bX4rfRh4gKPGZnVeHwjNUcB1zJgqL93wnbxgFlTDmJRPjnCaFhMh+HPucQfU+S
	uovUuXHghcRdZOgSkakX0fZ9XuPs/BxiQ7jD+ibvnUIkvCuGFLC0BxmxD5xKdEPhugucjYN8I4Xpv
	WSH12/wojKPamwwzSgge7VqR4tsSr5bVtW0y1gEbh0LGN1fc0aleWXDqaFKriKv0dnauuuFj6BOkD
	RzQOH7Hw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMV-0006Ut-HC; Tue, 24 Nov 2020 13:28:03 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 06/45] zram: remove the claim mechanism
Date: Tue, 24 Nov 2020 14:27:12 +0100
Message-Id: <20201124132751.3747337-7-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The zram claim mechanism was added to ensure no new opens come in
during teardown.  But the proper way to achieve that is to call
del_gendisk first, which takes care of all that.  Once del_gendisk
is called in the right place, the reset side can also be simplified
as no I/O can be outstanding on a block device that is not open.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 72 ++++++++---------------------------
 1 file changed, 15 insertions(+), 57 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 6d15d51cee2b7e..2e6d75ec1afddb 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1756,64 +1756,33 @@ static ssize_t disksize_store(struct device *dev,
 static ssize_t reset_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
-	int ret;
-	unsigned short do_reset;
-	struct zram *zram;
+	struct zram *zram = dev_to_zram(dev);
 	struct block_device *bdev;
+	unsigned short do_reset;
+	int ret = 0;
 
 	ret = kstrtou16(buf, 10, &do_reset);
 	if (ret)
 		return ret;
-
 	if (!do_reset)
 		return -EINVAL;
 
-	zram = dev_to_zram(dev);
 	bdev = bdget_disk(zram->disk, 0);
 	if (!bdev)
 		return -ENOMEM;
 
 	mutex_lock(&bdev->bd_mutex);
-	/* Do not reset an active device or claimed device */
-	if (bdev->bd_openers || zram->claim) {
-		mutex_unlock(&bdev->bd_mutex);
-		bdput(bdev);
-		return -EBUSY;
-	}
-
-	/* From now on, anyone can't open /dev/zram[0-9] */
-	zram->claim = true;
+	if (bdev->bd_openers)
+		ret = -EBUSY;
+	else
+		zram_reset_device(zram);
 	mutex_unlock(&bdev->bd_mutex);
-
-	/* Make sure all the pending I/O are finished */
-	fsync_bdev(bdev);
-	zram_reset_device(zram);
 	bdput(bdev);
 
-	mutex_lock(&bdev->bd_mutex);
-	zram->claim = false;
-	mutex_unlock(&bdev->bd_mutex);
-
-	return len;
-}
-
-static int zram_open(struct block_device *bdev, fmode_t mode)
-{
-	int ret = 0;
-	struct zram *zram;
-
-	WARN_ON(!mutex_is_locked(&bdev->bd_mutex));
-
-	zram = bdev->bd_disk->private_data;
-	/* zram was claimed to reset so open request fails */
-	if (zram->claim)
-		ret = -EBUSY;
-
-	return ret;
+	return ret ? ret : len;
 }
 
 static const struct block_device_operations zram_devops = {
-	.open = zram_open,
 	.submit_bio = zram_submit_bio,
 	.swap_slot_free_notify = zram_slot_free_notify,
 	.rw_page = zram_rw_page,
@@ -1821,7 +1790,6 @@ static const struct block_device_operations zram_devops = {
 };
 
 static const struct block_device_operations zram_wb_devops = {
-	.open = zram_open,
 	.submit_bio = zram_submit_bio,
 	.swap_slot_free_notify = zram_slot_free_notify,
 	.owner = THIS_MODULE
@@ -1974,32 +1942,22 @@ static int zram_add(void)
 
 static int zram_remove(struct zram *zram)
 {
-	struct block_device *bdev;
-
-	bdev = bdget_disk(zram->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
+	struct block_device *bdev = bdget_disk(zram->disk, 0);
 
-	mutex_lock(&bdev->bd_mutex);
-	if (bdev->bd_openers || zram->claim) {
-		mutex_unlock(&bdev->bd_mutex);
+	if (bdev) {
+		if (bdev->bd_openers) {
+			bdput(bdev);
+			return -EBUSY;
+		}
 		bdput(bdev);
-		return -EBUSY;
 	}
 
-	zram->claim = true;
-	mutex_unlock(&bdev->bd_mutex);
-
+	del_gendisk(zram->disk);
 	zram_debugfs_unregister(zram);
-
-	/* Make sure all the pending I/O are finished */
-	fsync_bdev(bdev);
 	zram_reset_device(zram);
-	bdput(bdev);
 
 	pr_info("Removed device: %s\n", zram->disk->disk_name);
 
-	del_gendisk(zram->disk);
 	blk_cleanup_queue(zram->disk->queue);
 	put_disk(zram->disk);
 	kfree(zram);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35987.67674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYMv-0000Sn-Od; Tue, 24 Nov 2020 13:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35987.67674; Tue, 24 Nov 2020 13:28:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYMv-0000Se-KB; Tue, 24 Nov 2020 13:28:29 +0000
Received: by outflank-mailman (input) for mailman id 35987;
 Tue, 24 Nov 2020 13:28:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYMv-0000Qf-0e
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:29 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c587630-2356-47af-b874-fad4b8154cd1;
 Tue, 24 Nov 2020 13:28:23 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMc-0006W0-RY; Tue, 24 Nov 2020 13:28:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYMv-0000Qf-0e
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:29 +0000
X-Inumbo-ID: 2c587630-2356-47af-b874-fad4b8154cd1
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2c587630-2356-47af-b874-fad4b8154cd1;
	Tue, 24 Nov 2020 13:28:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Ya7mNl614k8cPEx//1LbBwD1mBTI6q+TUS2d4XeT6DY=; b=iay1UgkMtKfQ94ufwnWzk2T1it
	c3R7eVYN9ml3Ywzgas/Gd9buHnKumjaigSbyg1fPjmWGpoRxU/Ek5+fXa/aVGF5IwolH6uhPeyqXg
	DL3BpGdtWlvBlZGgRl6YFd9TchAoOKNQ3CQdeKQsVAHkF4KVJlc5U5b1Es0gwwJlxLXKhao0DIH4f
	I6Z9XFgDf5i76KLycjVCRbd6nn4p6eFYr5DszA7JqBQnbOk9oqwKjl/u9Cj1QKjlzfSam/1Xl0WBh
	o3Zpx2Uoye0mnaezi08x9l1pycXROgv4ksjzY0neMwRpaXspy5GWJjavFXe45MHLxT+eJ+V5whCq4
	kvpN+7fQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMc-0006W0-RY; Tue, 24 Nov 2020 13:28:11 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 11/45] block: remove a duplicate __disk_get_part prototype
Date: Tue, 24 Nov 2020 14:27:17 +0100
Message-Id: <20201124132751.3747337-12-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 include/linux/genhd.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 46553d6d602563..22f5b9fd96f8bf 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -250,7 +250,6 @@ static inline dev_t part_devt(struct hd_struct *part)
 	return part_to_dev(part)->devt;
 }
 
-extern struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
 extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
 
 static inline void disk_put_part(struct hd_struct *part)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35988.67686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYN1-0000WC-10; Tue, 24 Nov 2020 13:28:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35988.67686; Tue, 24 Nov 2020 13:28:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYN0-0000W3-Tx; Tue, 24 Nov 2020 13:28:34 +0000
Received: by outflank-mailman (input) for mailman id 35988;
 Tue, 24 Nov 2020 13:28:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYN0-0000Qf-0f
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc39ff9b-672d-4638-a3a9-831fd558c749;
 Tue, 24 Nov 2020 13:28:23 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMS-0006UY-Of; Tue, 24 Nov 2020 13:28:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYN0-0000Qf-0f
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:34 +0000
X-Inumbo-ID: cc39ff9b-672d-4638-a3a9-831fd558c749
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cc39ff9b-672d-4638-a3a9-831fd558c749;
	Tue, 24 Nov 2020 13:28:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=/wu2CSHeB3Gl/t7r+Aw3SS2grYZ13bhBv8r8IW86TqI=; b=Y1N/2HD33Uav08FYSVkGnaVSHI
	WPlSFEuXLyQhBcuSLeIqr73yqS6AW4O7CCXgYlwLqk7PRnfEAxOgc5ebT2zGe/Q4jDKj1TZml5KW6
	zxuduAnEcRYNXRLJfNfoVEqB7dvLowMzijO8QeIjl6QFuGAlKHuyv25WN4tJ8QUrAbMO9TBEVtuQZ
	BjmoIXu0ikoriwKuMSaI9W6uv3yCs3/9OxRgQnCKwZGoy0qYl1OR9rpx0/icb21Cc4JuTljr3xVK/
	lAjKL1pMZ8mwxFgvHH2tqEwyW4bK8qHE278Pz0q8BuTy3VF1zNvkF7La7YPUW8Bcc35jmJRRWkLo3
	3nOUSf4A==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMS-0006UY-Of; Tue, 24 Nov 2020 13:28:01 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 04/45] fs: simplify freeze_bdev/thaw_bdev
Date: Tue, 24 Nov 2020 14:27:10 +0100
Message-Id: <20201124132751.3747337-5-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Store the frozen superblock in struct block_device to avoid the awkward
interface that can return a sb that is only used as a cookie, an ERR_PTR,
or NULL.
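
With the superblock cached in bd_fsfreeze_sb, callers no longer need to
carry the sb cookie between freeze and thaw.  A before/after sketch of the
calling convention (error handling condensed, caller context illustrative):

```c
/* old: caller must keep the returned sb purely as a cookie */
struct super_block *sb = freeze_bdev(bdev);
if (IS_ERR(sb))
	return PTR_ERR(sb);
/* ... device is frozen ... */
thaw_bdev(bdev, sb);

/* new: the sb is stashed in bdev->bd_fsfreeze_sb internally */
int err = freeze_bdev(bdev);
if (err)
	return err;
/* ... device is frozen ... */
thaw_bdev(bdev);
```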

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/dm-core.h      |  5 -----
 drivers/md/dm.c           | 20 ++++++--------------
 fs/block_dev.c            | 39 ++++++++++++++++-----------------------
 fs/buffer.c               |  2 +-
 fs/ext4/ioctl.c           |  2 +-
 fs/f2fs/file.c            | 14 +++++---------
 fs/xfs/xfs_fsops.c        |  7 ++-----
 include/linux/blk_types.h |  1 +
 include/linux/blkdev.h    |  4 ++--
 9 files changed, 34 insertions(+), 60 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index d522093cb39dda..aace147effcacb 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -96,11 +96,6 @@ struct mapped_device {
 	 */
 	struct workqueue_struct *wq;
 
-	/*
-	 * freeze/thaw support require holding onto a super block
-	 */
-	struct super_block *frozen_sb;
-
 	/* forced geometry settings */
 	struct hd_geometry geometry;
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 54739f1b579bc8..50541d336c719b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2392,27 +2392,19 @@ static int lock_fs(struct mapped_device *md)
 {
 	int r;
 
-	WARN_ON(md->frozen_sb);
+	WARN_ON(test_bit(DMF_FROZEN, &md->flags));
 
-	md->frozen_sb = freeze_bdev(md->bdev);
-	if (IS_ERR(md->frozen_sb)) {
-		r = PTR_ERR(md->frozen_sb);
-		md->frozen_sb = NULL;
-		return r;
-	}
-
-	set_bit(DMF_FROZEN, &md->flags);
-
-	return 0;
+	r = freeze_bdev(md->bdev);
+	if (!r)
+		set_bit(DMF_FROZEN, &md->flags);
+	return r;
 }
 
 static void unlock_fs(struct mapped_device *md)
 {
 	if (!test_bit(DMF_FROZEN, &md->flags))
 		return;
-
-	thaw_bdev(md->bdev, md->frozen_sb);
-	md->frozen_sb = NULL;
+	thaw_bdev(md->bdev);
 	clear_bit(DMF_FROZEN, &md->flags);
 }
 
diff --git a/fs/block_dev.c b/fs/block_dev.c
index d8664f5c1ff669..60492620d51866 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -548,55 +548,47 @@ EXPORT_SYMBOL(fsync_bdev);
  * count down in thaw_bdev(). When it becomes 0, thaw_bdev() will unfreeze
  * actually.
  */
-struct super_block *freeze_bdev(struct block_device *bdev)
+int freeze_bdev(struct block_device *bdev)
 {
 	struct super_block *sb;
 	int error = 0;
 
 	mutex_lock(&bdev->bd_fsfreeze_mutex);
-	if (++bdev->bd_fsfreeze_count > 1) {
-		/*
-		 * We don't even need to grab a reference - the first call
-		 * to freeze_bdev grab an active reference and only the last
-		 * thaw_bdev drops it.
-		 */
-		sb = get_super(bdev);
-		if (sb)
-			drop_super(sb);
-		mutex_unlock(&bdev->bd_fsfreeze_mutex);
-		return sb;
-	}
+	if (++bdev->bd_fsfreeze_count > 1)
+		goto done;
 
 	sb = get_active_super(bdev);
 	if (!sb)
-		goto out;
+		goto sync;
 	if (sb->s_op->freeze_super)
 		error = sb->s_op->freeze_super(sb);
 	else
 		error = freeze_super(sb);
+	deactivate_super(sb);
+
 	if (error) {
-		deactivate_super(sb);
 		bdev->bd_fsfreeze_count--;
-		mutex_unlock(&bdev->bd_fsfreeze_mutex);
-		return ERR_PTR(error);
+		goto done;
 	}
-	deactivate_super(sb);
- out:
+	bdev->bd_fsfreeze_sb = sb;
+
+sync:
 	sync_blockdev(bdev);
+done:
 	mutex_unlock(&bdev->bd_fsfreeze_mutex);
-	return sb;	/* thaw_bdev releases s->s_umount */
+	return error;	/* thaw_bdev releases s->s_umount */
 }
 EXPORT_SYMBOL(freeze_bdev);
 
 /**
  * thaw_bdev  -- unlock filesystem
  * @bdev:	blockdevice to unlock
- * @sb:		associated superblock
  *
  * Unlocks the filesystem and marks it writeable again after freeze_bdev().
  */
-int thaw_bdev(struct block_device *bdev, struct super_block *sb)
+int thaw_bdev(struct block_device *bdev)
 {
+	struct super_block *sb;
 	int error = -EINVAL;
 
 	mutex_lock(&bdev->bd_fsfreeze_mutex);
@@ -607,6 +599,7 @@ int thaw_bdev(struct block_device *bdev, struct super_block *sb)
 	if (--bdev->bd_fsfreeze_count > 0)
 		goto out;
 
+	sb = bdev->bd_fsfreeze_sb;
 	if (!sb)
 		goto out;
 
@@ -618,7 +611,7 @@ int thaw_bdev(struct block_device *bdev, struct super_block *sb)
 		bdev->bd_fsfreeze_count++;
 out:
 	mutex_unlock(&bdev->bd_fsfreeze_mutex);
-	return error;
+	return 0;
 }
 EXPORT_SYMBOL(thaw_bdev);
 
diff --git a/fs/buffer.c b/fs/buffer.c
index 23f645657488ba..a7595ada9400ff 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -523,7 +523,7 @@ static int osync_buffers_list(spinlock_t *lock, struct list_head *list)
 
 void emergency_thaw_bdev(struct super_block *sb)
 {
-	while (sb->s_bdev && !thaw_bdev(sb->s_bdev, sb))
+	while (sb->s_bdev && !thaw_bdev(sb->s_bdev))
 		printk(KERN_WARNING "Emergency Thaw on %pg\n", sb->s_bdev);
 }
 
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index f0381876a7e5b0..524e134324475e 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -624,7 +624,7 @@ static int ext4_shutdown(struct super_block *sb, unsigned long arg)
 	case EXT4_GOING_FLAGS_DEFAULT:
 		freeze_bdev(sb->s_bdev);
 		set_bit(EXT4_FLAGS_SHUTDOWN, &sbi->s_ext4_flags);
-		thaw_bdev(sb->s_bdev, sb);
+		thaw_bdev(sb->s_bdev);
 		break;
 	case EXT4_GOING_FLAGS_LOGFLUSH:
 		set_bit(EXT4_FLAGS_SHUTDOWN, &sbi->s_ext4_flags);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index ee861c6d9ff026..a9fc482a0e60a5 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -2230,16 +2230,12 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
 
 	switch (in) {
 	case F2FS_GOING_DOWN_FULLSYNC:
-		sb = freeze_bdev(sb->s_bdev);
-		if (IS_ERR(sb)) {
-			ret = PTR_ERR(sb);
+		ret = freeze_bdev(sb->s_bdev);
+		if (ret)
 			goto out;
-		}
-		if (sb) {
-			f2fs_stop_checkpoint(sbi, false);
-			set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
-			thaw_bdev(sb->s_bdev, sb);
-		}
+		f2fs_stop_checkpoint(sbi, false);
+		set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
+		thaw_bdev(sb->s_bdev);
 		break;
 	case F2FS_GOING_DOWN_METASYNC:
 		/* do checkpoint only */
diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c
index ef1d5bb88b93ab..b7c5783a031c69 100644
--- a/fs/xfs/xfs_fsops.c
+++ b/fs/xfs/xfs_fsops.c
@@ -433,13 +433,10 @@ xfs_fs_goingdown(
 {
 	switch (inflags) {
 	case XFS_FSOP_GOING_FLAGS_DEFAULT: {
-		struct super_block *sb = freeze_bdev(mp->m_super->s_bdev);
-
-		if (sb && !IS_ERR(sb)) {
+		if (!freeze_bdev(mp->m_super->s_bdev)) {
 			xfs_force_shutdown(mp, SHUTDOWN_FORCE_UMOUNT);
-			thaw_bdev(sb->s_bdev, sb);
+			thaw_bdev(mp->m_super->s_bdev);
 		}
-
 		break;
 	}
 	case XFS_FSOP_GOING_FLAGS_LOGFLUSH:
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d9b69bbde5cc54..ebfb4e7c1fd125 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -46,6 +46,7 @@ struct block_device {
 	int			bd_fsfreeze_count;
 	/* Mutex for freeze */
 	struct mutex		bd_fsfreeze_mutex;
+	struct super_block	*bd_fsfreeze_sb;
 } __randomize_layout;
 
 /*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05b346a68c2eee..12810a19edebc4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -2020,7 +2020,7 @@ static inline int sync_blockdev(struct block_device *bdev)
 #endif
 int fsync_bdev(struct block_device *bdev);
 
-struct super_block *freeze_bdev(struct block_device *bdev);
-int thaw_bdev(struct block_device *bdev, struct super_block *sb);
+int freeze_bdev(struct block_device *bdev);
+int thaw_bdev(struct block_device *bdev);
 
 #endif /* _LINUX_BLKDEV_H */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35989.67698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYN6-0000aI-CH; Tue, 24 Nov 2020 13:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35989.67698; Tue, 24 Nov 2020 13:28:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYN6-0000a8-89; Tue, 24 Nov 2020 13:28:40 +0000
Received: by outflank-mailman (input) for mailman id 35989;
 Tue, 24 Nov 2020 13:28:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYN5-0000Qf-0t
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:39 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9613395a-11bc-44b4-bd61-f1e2d26132bd;
 Tue, 24 Nov 2020 13:28:23 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMU-0006Uj-4z; Tue, 24 Nov 2020 13:28:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYN5-0000Qf-0t
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:39 +0000
X-Inumbo-ID: 9613395a-11bc-44b4-bd61-f1e2d26132bd
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9613395a-11bc-44b4-bd61-f1e2d26132bd;
	Tue, 24 Nov 2020 13:28:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=0e+oH+92GlExM9i4SIPWTJvhHTDsEZRU6nAY6yeduas=; b=REng/ovLixhm4d/5DVr1VL2uxE
	639N3KIOFrakBFne91QmfkqFXFb98IXLDOJAJjpMFMj6wLqG5p9ivwi/IdkTiYg4UpBWe6eJ1dbQ0
	HBBalWEc8V1HK853AyKP05+uuRQAG2MYtBlard5dwdv3pfAPnyCU606UPfuqSfkDGROj+x/Jgp80m
	xgK0Cn++2YT5C/MR23s2fjCN+ORZdOrOQHa/0v4hpeslgE++xWymjVqNfUo2VOJ114NGbFD3umqP5
	6luT+lKBWO6SNfhge4bMavfuGRB4GvvtxnMy6Fhr3dVO8r0ujYfdajMXqb4qBL494koWxbslNF9Up
	gR2DYZAw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMU-0006Uj-4z; Tue, 24 Nov 2020 13:28:02 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 05/45] mtip32xx: remove the call to fsync_bdev on removal
Date: Tue, 24 Nov 2020 14:27:11 +0100
Message-Id: <20201124132751.3747337-6-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

del_gendisk already calls fsync_bdev for every partition, so there is no
need to do this twice.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/mtip32xx/mtip32xx.c | 15 ---------------
 drivers/block/mtip32xx/mtip32xx.h |  2 --
 2 files changed, 17 deletions(-)

diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 153e2cdecb4d40..53ac59d19ae530 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3687,7 +3687,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 	/* Enable the block device and add it to /dev */
 	device_add_disk(&dd->pdev->dev, dd->disk, NULL);
 
-	dd->bdev = bdget_disk(dd->disk, 0);
 	/*
 	 * Now that the disk is active, initialize any sysfs attributes
 	 * managed by the protocol layer.
@@ -3721,9 +3720,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 	return rv;
 
 kthread_run_error:
-	bdput(dd->bdev);
-	dd->bdev = NULL;
-
 	/* Delete our gendisk. This also removes the device from /dev */
 	del_gendisk(dd->disk);
 
@@ -3804,14 +3800,6 @@ static int mtip_block_remove(struct driver_data *dd)
 	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
 	blk_mq_unquiesce_queue(dd->queue);
 
-	/*
-	 * Delete our gendisk structure. This also removes the device
-	 * from /dev
-	 */
-	if (dd->bdev) {
-		bdput(dd->bdev);
-		dd->bdev = NULL;
-	}
 	if (dd->disk) {
 		if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
 			del_gendisk(dd->disk);
@@ -4206,9 +4194,6 @@ static void mtip_pci_remove(struct pci_dev *pdev)
 	} while (atomic_read(&dd->irq_workers_active) != 0 &&
 		time_before(jiffies, to));
 
-	if (!dd->sr)
-		fsync_bdev(dd->bdev);
-
 	if (atomic_read(&dd->irq_workers_active) != 0) {
 		dev_warn(&dd->pdev->dev,
 			"Completion workers still active!\n");
diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h
index e22a7f0523bf30..88f4206310e4c8 100644
--- a/drivers/block/mtip32xx/mtip32xx.h
+++ b/drivers/block/mtip32xx/mtip32xx.h
@@ -463,8 +463,6 @@ struct driver_data {
 
 	int isr_binding;
 
-	struct block_device *bdev;
-
 	struct list_head online_list; /* linkage for online list */
 
 	struct list_head remove_list; /* linkage for removing list */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35990.67710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNB-0000fY-N0; Tue, 24 Nov 2020 13:28:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35990.67710; Tue, 24 Nov 2020 13:28:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNB-0000fM-J8; Tue, 24 Nov 2020 13:28:45 +0000
Received: by outflank-mailman (input) for mailman id 35990;
 Tue, 24 Nov 2020 13:28:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNA-0000Qf-0z
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:44 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a683895-2499-4995-accb-56c7fdf0d24e;
 Tue, 24 Nov 2020 13:28:24 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMW-0006V1-Oc; Tue, 24 Nov 2020 13:28:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNA-0000Qf-0z
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:44 +0000
X-Inumbo-ID: 2a683895-2499-4995-accb-56c7fdf0d24e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2a683895-2499-4995-accb-56c7fdf0d24e;
	Tue, 24 Nov 2020 13:28:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=q5VEN3b+/NuaC3nLa7Ob8T2zF/1/3On5f5TLNt/sEKk=; b=gSBkuGb01pZYejE/qt8U+f1Prp
	H6UFF5H/7Q8DKAajhbcAG/TWkWqcbJP6bF/2kpoG1MhuVrgkQKne/Is825NwtMjoMkmVuk6QyOhw4
	6d32g0XCyz8crp9IhQ8kuRwKQHuh2i/recIYUgVpmwkRUq3tDkXkobmCdXxgw2dGWnUTzWgrewmho
	SphjpGruHOWqUH4vavWTPhddkaX2Ct2GTq94foULjN6riyh6UEzcq/lFdKB/Bgo2FSx5SzSq74gko
	EnRhAkBQn/CQvobi3r+bM87y3KzNaOsLt6QALXtIHOxRJ1xW5O4DJquFcSCvJ8LYd4MwXPv9+3hgu
	zAxlftXQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMW-0006V1-Oc; Tue, 24 Nov 2020 13:28:05 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 07/45] zram: do not call set_blocksize
Date: Tue, 24 Nov 2020 14:27:13 +0100
Message-Id: <20201124132751.3747337-8-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

set_blocksize is used by file systems to set their preferred buffer cache
block size.  Block drivers should not set it.
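
For context, the block size is the mounting filesystem's choice: a
filesystem typically picks it in its fill_super callback via
sb_set_blocksize() (or sb_min_blocksize()).  A minimal sketch; the
function name and the hardcoded 4096 are illustrative only:

```c
/* in a filesystem's fill_super(): the fs, not the block driver,
 * selects the buffer cache block size for the device it mounts */
static int example_fill_super(struct super_block *sb, void *data, int silent)
{
	/* sb_set_blocksize() returns the new size, or 0 on failure */
	if (!sb_set_blocksize(sb, 4096))
		return -EINVAL;
	/* ... read the on-disk superblock at that block size ... */
	return 0;
}
```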

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/zram/zram_drv.c | 11 +----------
 drivers/block/zram/zram_drv.h |  1 -
 2 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 2e6d75ec1afddb..88baa6158eaee1 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -403,13 +403,10 @@ static void reset_bdev(struct zram *zram)
 		return;
 
 	bdev = zram->bdev;
-	if (zram->old_block_size)
-		set_blocksize(bdev, zram->old_block_size);
 	blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
 	/* hope filp_close flush all of IO */
 	filp_close(zram->backing_dev, NULL);
 	zram->backing_dev = NULL;
-	zram->old_block_size = 0;
 	zram->bdev = NULL;
 	zram->disk->fops = &zram_devops;
 	kvfree(zram->bitmap);
@@ -454,7 +451,7 @@ static ssize_t backing_dev_store(struct device *dev,
 	struct file *backing_dev = NULL;
 	struct inode *inode;
 	struct address_space *mapping;
-	unsigned int bitmap_sz, old_block_size = 0;
+	unsigned int bitmap_sz;
 	unsigned long nr_pages, *bitmap = NULL;
 	struct block_device *bdev = NULL;
 	int err;
@@ -509,14 +506,8 @@ static ssize_t backing_dev_store(struct device *dev,
 		goto out;
 	}
 
-	old_block_size = block_size(bdev);
-	err = set_blocksize(bdev, PAGE_SIZE);
-	if (err)
-		goto out;
-
 	reset_bdev(zram);
 
-	zram->old_block_size = old_block_size;
 	zram->bdev = bdev;
 	zram->backing_dev = backing_dev;
 	zram->bitmap = bitmap;
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index f2fd46daa76045..712354a4207c77 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -118,7 +118,6 @@ struct zram {
 	bool wb_limit_enable;
 	u64 bd_wb_limit;
 	struct block_device *bdev;
-	unsigned int old_block_size;
 	unsigned long *bitmap;
 	unsigned long nr_pages;
 #endif
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.35993.67721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNG-0000lZ-CV; Tue, 24 Nov 2020 13:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 35993.67721; Tue, 24 Nov 2020 13:28:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNG-0000lL-6z; Tue, 24 Nov 2020 13:28:50 +0000
Received: by outflank-mailman (input) for mailman id 35993;
 Tue, 24 Nov 2020 13:28:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNF-0000Qf-1Q
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:49 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4856bb45-c322-4086-a1cb-94c910b09985;
 Tue, 24 Nov 2020 13:28:26 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMb-0006Vh-99; Tue, 24 Nov 2020 13:28:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNF-0000Qf-1Q
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:49 +0000
X-Inumbo-ID: 4856bb45-c322-4086-a1cb-94c910b09985
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4856bb45-c322-4086-a1cb-94c910b09985;
	Tue, 24 Nov 2020 13:28:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=qLuO7hD0TG0p9aylN+kHqBJxf5z3v4FVxxZ2M17hRmY=; b=cByrnVSUwNsV+8vL6OejYZMzYW
	g8VICMIEorS+0pOBvVioVLf/hyD/zxobeo8PAet5DaDiGdRDYy0L5JNIcM3yD1NCnwpsR6gWUVB5i
	Lptsh/7GyC3dYE+HryC4wV9CakS+3V2vvPvT4vRhVftfmO4XobgclczjV5yeEVlKRsJvJ9LwjX+8o
	1vNw6Sl4pqdKC25En8sUougbgcvCNyLxzyGGHsEPu8enWgOfJoKU0RIKHjBkbhK7GxVxYfvwMvJI6
	Vf9vnnAEkii3NBNG4XOnzwMMzJMYdhpc7IAUijTjrAXJlfebUq3ghdAO/ZeMpsZrhIZUAVgWmW/K4
	kMYMHe4Q==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMb-0006Vh-99; Tue, 24 Nov 2020 13:28:09 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 10/45] dm: remove the block_device reference in struct mapped_device
Date: Tue, 24 Nov 2020 14:27:16 +0100
Message-Id: <20201124132751.3747337-11-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Get rid of the long-lasting struct block_device reference in
struct mapped_device.  The only remaining user is the freeze code,
where we can trivially look up the block device at freeze time
and release the reference at thaw time.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-core.h |  2 --
 drivers/md/dm.c      | 25 ++++++++++++++-----------
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index aace147effcacb..086d293c2b036c 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -102,8 +102,6 @@ struct mapped_device {
 	/* kobject and completion */
 	struct dm_kobject_holder kobj_holder;
 
-	struct block_device *bdev;
-
 	struct dm_stats stats;
 
 	/* for blk-mq request-based DM support */
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index ab0a8335f098d9..48051db006f30c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1744,11 +1744,6 @@ static void cleanup_mapped_device(struct mapped_device *md)
 
 	cleanup_srcu_struct(&md->io_barrier);
 
-	if (md->bdev) {
-		bdput(md->bdev);
-		md->bdev = NULL;
-	}
-
 	mutex_destroy(&md->suspend_lock);
 	mutex_destroy(&md->type_lock);
 	mutex_destroy(&md->table_devices_lock);
@@ -1840,10 +1835,6 @@ static struct mapped_device *alloc_dev(int minor)
 	if (!md->wq)
 		goto bad;
 
-	md->bdev = bdget_disk(md->disk, 0);
-	if (!md->bdev)
-		goto bad;
-
 	dm_stats_init(&md->stats);
 
 	/* Populate the mapping, nobody knows we exist yet */
@@ -2384,11 +2375,16 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
  */
 static int lock_fs(struct mapped_device *md)
 {
+	struct block_device *bdev;
 	int r;
 
 	WARN_ON(test_bit(DMF_FROZEN, &md->flags));
 
-	r = freeze_bdev(md->bdev);
+	bdev = bdget_disk(md->disk, 0);
+	if (!bdev)
+		return -ENOMEM;
+	r = freeze_bdev(bdev);
+	bdput(bdev);
 	if (!r)
 		set_bit(DMF_FROZEN, &md->flags);
 	return r;
@@ -2396,9 +2392,16 @@ static int lock_fs(struct mapped_device *md)
 
 static void unlock_fs(struct mapped_device *md)
 {
+	struct block_device *bdev;
+
 	if (!test_bit(DMF_FROZEN, &md->flags))
 		return;
-	thaw_bdev(md->bdev);
+
+	bdev = bdget_disk(md->disk, 0);
+	if (!bdev)
+		return;
+	thaw_bdev(bdev);
+	bdput(bdev);
 	clear_bit(DMF_FROZEN, &md->flags);
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:28:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36001.67734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNL-0000sB-PG; Tue, 24 Nov 2020 13:28:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36001.67734; Tue, 24 Nov 2020 13:28:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNL-0000s2-Lg; Tue, 24 Nov 2020 13:28:55 +0000
Received: by outflank-mailman (input) for mailman id 36001;
 Tue, 24 Nov 2020 13:28:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNK-0000Qf-1J
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:54 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ea29fcb-06f5-46f6-a89f-c1b138585611;
 Tue, 24 Nov 2020 13:28:25 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYML-0006U2-TO; Tue, 24 Nov 2020 13:27:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNK-0000Qf-1J
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:54 +0000
X-Inumbo-ID: 6ea29fcb-06f5-46f6-a89f-c1b138585611
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6ea29fcb-06f5-46f6-a89f-c1b138585611;
	Tue, 24 Nov 2020 13:28:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=hzV3z6SMaqyfIauoWZXz79MWtSUwimkTva+cu+wYxlA=; b=YcqovzdCTe1S4oEOtkiwBB1Lva
	tbtWR9C+DovWiWZoNJPHK9RQrYZjKUM7Drp1hnFw1GTX1Jvk04Hd3trxzQ+fyoxsP1futR8jAHh7G
	m++vI6ur7xxBWeJPKrJNK5QVX7l3+UeM6AjbiAjRATlrWkYizakQMYaATUwBbYrcERkSINRrED8sM
	txPsKX3pPj/UYI560klmFpFYMcMloF+t/eXLNLUVVvoEw3eCXzSegwCuUVDPpT8EfzdIKSLiqbIKa
	lxgMy3k+JWNbKkmnFS165gUjPgzeD07g2h5F4z1oEdlv0ay94Qn86TlwxjevvUAt6v+K6LwpveQ2I
	DtDCjJVg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYML-0006U2-TO; Tue, 24 Nov 2020 13:27:54 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: merge struct block_device and struct hd_struct v2
Date: Tue, 24 Nov 2020 14:27:06 +0100
Message-Id: <20201124132751.3747337-1-hch@lst.de>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Jens,

this series cleans up our main per-device node data structure by merging
the block_device and hd_struct data structures, which have the same scope
but different lifetimes.  The main effect (besides removing lots of
code) is that instead of having two device sizes that need complex
synchronization there is just one now.

Note that this now includes the previous "misc cleanups" series, as I had
to fix up a detail there due to the changed patch ordering.

The first patch is already in 5.10-rc, but not in for-5.11/block.

A git tree is available here:

    git://git.infradead.org/users/hch/block.git bdev-lookup

Gitweb:

    http://git.infradead.org/users/hch/block.git/shortlog/refs/heads/bdev-lookup

Changes since v1:
 - spelling fixes
 - fix error unwinding in __alloc_disk_node
 - use bdev_is_partition in a few more places
 - don't send the RESIZE=1 uevent for hidden gendisks
 - rename __bdget_disk to disk_find_part
 - drop a bcache patch
 - some patch reordering
 - add more refactoring
 - use rcu protection to prevent racing with a disk going away
   in blkdev_get
 - split up some of the big patches into many small ones
 - clean up the freeze_bdev interface

Diffstat:
 block/bio.c                                  |    6 
 block/blk-cgroup.c                           |   50 -
 block/blk-core.c                             |   68 +-
 block/blk-flush.c                            |    2 
 block/blk-iocost.c                           |   36 -
 block/blk-lib.c                              |    2 
 block/blk-merge.c                            |    2 
 block/blk-mq.c                               |    9 
 block/blk-mq.h                               |    7 
 block/blk.h                                  |   84 ---
 block/genhd.c                                |  467 ++++-------------
 block/ioctl.c                                |   14 
 block/partitions/core.c                      |  252 +++------
 drivers/block/drbd/drbd_receiver.c           |    2 
 drivers/block/drbd/drbd_worker.c             |    3 
 drivers/block/loop.c                         |   24 
 drivers/block/mtip32xx/mtip32xx.c            |   15 
 drivers/block/mtip32xx/mtip32xx.h            |    2 
 drivers/block/nbd.c                          |    6 
 drivers/block/xen-blkback/common.h           |    4 
 drivers/block/xen-blkfront.c                 |   20 
 drivers/block/zram/zram_drv.c                |   87 ---
 drivers/block/zram/zram_drv.h                |    1 
 drivers/md/bcache/request.c                  |    4 
 drivers/md/bcache/super.c                    |   29 -
 drivers/md/dm-core.h                         |    7 
 drivers/md/dm-table.c                        |    9 
 drivers/md/dm.c                              |   45 -
 drivers/md/md.c                              |    8 
 drivers/mtd/mtdsuper.c                       |   17 
 drivers/nvme/target/admin-cmd.c              |   20 
 drivers/s390/block/dasd.c                    |    8 
 drivers/s390/block/dasd_ioctl.c              |    9 
 drivers/scsi/scsicam.c                       |    2 
 drivers/target/target_core_file.c            |    6 
 drivers/target/target_core_pscsi.c           |    7 
 drivers/usb/gadget/function/storage_common.c |    8 
 fs/block_dev.c                               |  730 +++++++++------------------
 fs/btrfs/sysfs.c                             |   15 
 fs/btrfs/volumes.c                           |   13 
 fs/buffer.c                                  |    2 
 fs/ext4/ioctl.c                              |    2 
 fs/ext4/super.c                              |   18 
 fs/ext4/sysfs.c                              |   10 
 fs/f2fs/checkpoint.c                         |    5 
 fs/f2fs/f2fs.h                               |    2 
 fs/f2fs/file.c                               |   14 
 fs/f2fs/super.c                              |    8 
 fs/f2fs/sysfs.c                              |    9 
 fs/inode.c                                   |    3 
 fs/internal.h                                |    7 
 fs/io_uring.c                                |   10 
 fs/pipe.c                                    |    5 
 fs/pstore/blk.c                              |    2 
 fs/quota/quota.c                             |   40 +
 fs/statfs.c                                  |    2 
 fs/super.c                                   |   86 ---
 fs/xfs/xfs_fsops.c                           |    7 
 include/linux/blk-cgroup.h                   |    4 
 include/linux/blk_types.h                    |   24 
 include/linux/blkdev.h                       |   27 
 include/linux/fs.h                           |    5 
 include/linux/genhd.h                        |  110 ----
 include/linux/part_stat.h                    |   45 -
 init/do_mounts.c                             |  271 ++++------
 kernel/trace/blktrace.c                      |   54 -
 mm/filemap.c                                 |   13 
 67 files changed, 957 insertions(+), 1928 deletions(-)


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36008.67746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNQ-0000yB-5H; Tue, 24 Nov 2020 13:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36008.67746; Tue, 24 Nov 2020 13:29:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNQ-0000xy-0R; Tue, 24 Nov 2020 13:29:00 +0000
Received: by outflank-mailman (input) for mailman id 36008;
 Tue, 24 Nov 2020 13:28:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNP-0000Qf-1Q
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:59 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18f566b9-7e1b-41e7-9082-a35a81855c6f;
 Tue, 24 Nov 2020 13:28:26 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMZ-0006VO-IX; Tue, 24 Nov 2020 13:28:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNP-0000Qf-1Q
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:28:59 +0000
X-Inumbo-ID: 18f566b9-7e1b-41e7-9082-a35a81855c6f
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 18f566b9-7e1b-41e7-9082-a35a81855c6f;
	Tue, 24 Nov 2020 13:28:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=qOmzpRMb4vEzGwba4P3m5gENBKcm1ZKj8N5Pylgvbms=; b=QlVuTCTh45nrCbNvdsVoIgsOr3
	o95XamtfFlHQdWoYLWTiXelkn0D+odqN40uheOBv3z0BV6r096cNZn1W0wS1mz+vFGmKDoLfxwB6t
	9+zKNn1qt6dRi6yiiLqPQAL91CKDp2S6xsY83LfbwnAlo50KMLRzxrcqOmqiSsIMFl8Aw9XpdKS+R
	WZJmPOjQKJSU2brvqFKaHLDst3ASHFCm++DZPoqCslNlR8eV59MkIXKBDS0vMEWNRPVQfY4/6dJR2
	0yR6aoz6V/xEKpsBinRf0moR7ga9qiIpxc1qZCLa7UWzw8rxoOhDnTmEEP/Ho/mppNNW42sLXt019
	r6XhQ5Zw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMZ-0006VO-IX; Tue, 24 Nov 2020 13:28:08 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 09/45] dm: simplify flush_bio initialization in __send_empty_flush
Date: Tue, 24 Nov 2020 14:27:15 +0100
Message-Id: <20201124132751.3747337-10-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

We don't really need the struct block_device to initialize a bio.  So
switch from using bio_set_dev to manually setting up bi_disk (bi_partno
will always be zero and has been cleared by bio_init already).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 50541d336c719b..ab0a8335f098d9 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1422,18 +1422,12 @@ static int __send_empty_flush(struct clone_info *ci)
 	 */
 	bio_init(&flush_bio, NULL, 0);
 	flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
+	flush_bio.bi_disk = ci->io->md->disk;
+	bio_associate_blkg(&flush_bio);
+
 	ci->bio = &flush_bio;
 	ci->sector_count = 0;
 
-	/*
-	 * Empty flush uses a statically initialized bio, as the base for
-	 * cloning.  However, blkg association requires that a bdev is
-	 * associated with a gendisk, which doesn't happen until the bdev is
-	 * opened.  So, blkg association is done at issue time of the flush
-	 * rather than when the device is created in alloc_dev().
-	 */
-	bio_set_dev(ci->bio, ci->io->md->bdev);
-
 	BUG_ON(bio_has_data(ci->bio));
 	while ((ti = dm_table_get_target(ci->map, target_nr++)))
 		__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36012.67758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNV-00013m-Dq; Tue, 24 Nov 2020 13:29:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36012.67758; Tue, 24 Nov 2020 13:29:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNV-00013b-AS; Tue, 24 Nov 2020 13:29:05 +0000
Received: by outflank-mailman (input) for mailman id 36012;
 Tue, 24 Nov 2020 13:29:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNU-0000Qf-1X
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:04 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eef56dca-15d9-43e0-b480-0ff9c313d056;
 Tue, 24 Nov 2020 13:28:31 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMO-0006UA-S8; Tue, 24 Nov 2020 13:27:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNU-0000Qf-1X
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:04 +0000
X-Inumbo-ID: eef56dca-15d9-43e0-b480-0ff9c313d056
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id eef56dca-15d9-43e0-b480-0ff9c313d056;
	Tue, 24 Nov 2020 13:28:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Q/MCT6maV/iKaSxRRRnLFBnoEWfPfC40ypgl1Ci8nRk=; b=pWcZmhed1VtM52P3q1JMqqjfxR
	GWMfkfGGh21/NCT85snknz3WDUM0FCyAbepRt+ZELj82dad4m3br4Sawb66pKbQxSAEUaJxSeL+Ll
	71lYUsvhjLKgY/YbGp0X++3PkxXh+KrssNfKTT0h2mCWPuefl+mBpjJwTcCV+e1SRjBbrscsDgFCm
	lMW8Geg/e4WV3fB64E9SDr1XU5cgwgMIdLeq0nWxNCx03aqe1+hiU2wmbmGncWiO1/xxhiOEuXkHa
	z9KSmzy5zBC51q81wbv9yHNv1xP232XeeVA1TEmOLWeYTbWgkPDrO0cmsehbzSHT+I2DtVpVX0ELT
	qGUkh4WA==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMO-0006UA-S8; Tue, 24 Nov 2020 13:27:58 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 02/45] filemap: consistently use ->f_mapping over ->i_mapping
Date: Tue, 24 Nov 2020 14:27:08 +0100
Message-Id: <20201124132751.3747337-3-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use file->f_mapping in all remaining places that have a struct file
available to properly handle the case where inode->i_mapping !=
file_inode(file)->i_mapping.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/filemap.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index d5e7c2029d16b4..4f583489aa3c2a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2886,14 +2886,14 @@ EXPORT_SYMBOL(filemap_map_pages);
 
 vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 {
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	struct page *page = vmf->page;
-	struct inode *inode = file_inode(vmf->vma->vm_file);
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
-	sb_start_pagefault(inode->i_sb);
+	sb_start_pagefault(mapping->host->i_sb);
 	file_update_time(vmf->vma->vm_file);
 	lock_page(page);
-	if (page->mapping != inode->i_mapping) {
+	if (page->mapping != mapping) {
 		unlock_page(page);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
@@ -2906,7 +2906,7 @@ vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 	set_page_dirty(page);
 	wait_for_stable_page(page);
 out:
-	sb_end_pagefault(inode->i_sb);
+	sb_end_pagefault(mapping->host->i_sb);
 	return ret;
 }
 
@@ -3149,10 +3149,9 @@ void dio_warn_stale_pagecache(struct file *filp)
 {
 	static DEFINE_RATELIMIT_STATE(_rs, 86400 * HZ, DEFAULT_RATELIMIT_BURST);
 	char pathname[128];
-	struct inode *inode = file_inode(filp);
 	char *path;
 
-	errseq_set(&inode->i_mapping->wb_err, -EIO);
+	errseq_set(&filp->f_mapping->wb_err, -EIO);
 	if (__ratelimit(&_rs)) {
 		path = file_path(filp, pathname, sizeof(pathname));
 		if (IS_ERR(path))
@@ -3179,7 +3178,7 @@ generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
 
 	if (iocb->ki_flags & IOCB_NOWAIT) {
 		/* If there are pages to writeback, return */
-		if (filemap_range_has_page(inode->i_mapping, pos,
+		if (filemap_range_has_page(file->f_mapping, pos,
 					   pos + write_len - 1))
 			return -EAGAIN;
 	} else {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36015.67769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNa-0001A1-PK; Tue, 24 Nov 2020 13:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36015.67769; Tue, 24 Nov 2020 13:29:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYNa-00019q-Lh; Tue, 24 Nov 2020 13:29:10 +0000
Received: by outflank-mailman (input) for mailman id 36015;
 Tue, 24 Nov 2020 13:29:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNZ-0000Qf-1z
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:09 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e128a56a-905b-4351-8aa3-30d7fc23e5b2;
 Tue, 24 Nov 2020 13:28:32 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMR-0006UP-A2; Tue, 24 Nov 2020 13:27:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNZ-0000Qf-1z
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:09 +0000
X-Inumbo-ID: e128a56a-905b-4351-8aa3-30d7fc23e5b2
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e128a56a-905b-4351-8aa3-30d7fc23e5b2;
	Tue, 24 Nov 2020 13:28:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=hNtZTYwDbpxOWnjbdPfEN2LKFTEyc+Xwk1zoPtNIsws=; b=KW/HOfgirUKthyUKuxmIOZ3L14
	PxmbPRzCYhFzSzd/bHlNMHDBgtk8AWQbuv2Np7A5hD54AbqwLNdvMEGQ9rvRi53KRItg4ljPIXN8j
	laYZPmVMxu+I+Xr/eXeEt4Z7FsAA/4Gq/OuO4qxhVj8ln7jkMiD5R0gJ/14CVFp4UhgnoRadNe4ZC
	LOLAMCrHd+WS7Uh/WVbYf7QzV5bd5/j5dX32PzOJPj7fKwxz2QApY0cW15/U90iga5Rj21c9kNtwz
	L7GmN4Z/GX5QBrx7BRnqDM1JTLj8DfcYMuAUZfQgs4m6RFsv4cza8s4pw2rNS7l2ZS64FnStRTaCh
	wR4T2pvw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMR-0006UP-A2; Tue, 24 Nov 2020 13:27:59 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 03/45] fs: remove get_super_thawed and get_super_exclusive_thawed
Date: Tue, 24 Nov 2020 14:27:09 +0100
Message-Id: <20201124132751.3747337-4-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just open code the wait in the only caller of both functions.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/internal.h      |  2 ++
 fs/quota/quota.c   | 31 +++++++++++++++++++++-------
 fs/super.c         | 51 ++--------------------------------------------
 include/linux/fs.h |  4 +---
 4 files changed, 29 insertions(+), 59 deletions(-)

diff --git a/fs/internal.h b/fs/internal.h
index a7cd0f64faa4ab..47be21dfeebef5 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -114,7 +114,9 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
  */
 extern int reconfigure_super(struct fs_context *);
 extern bool trylock_super(struct super_block *sb);
+struct super_block *__get_super(struct block_device *bdev, bool excl);
 extern struct super_block *user_get_super(dev_t);
+void put_super(struct super_block *sb);
 extern bool mount_capable(struct fs_context *);
 
 /*
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index 9af95c7a0bbe3c..f3d32b0d9008f2 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -20,6 +20,7 @@
 #include <linux/writeback.h>
 #include <linux/nospec.h>
 #include "compat.h"
+#include "../internal.h"
 
 static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
 				     qid_t id)
@@ -868,6 +869,7 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	struct block_device *bdev;
 	struct super_block *sb;
 	struct filename *tmp = getname(special);
+	bool excl = false, thawed = false;
 
 	if (IS_ERR(tmp))
 		return ERR_CAST(tmp);
@@ -875,17 +877,32 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	putname(tmp);
 	if (IS_ERR(bdev))
 		return ERR_CAST(bdev);
-	if (quotactl_cmd_onoff(cmd))
-		sb = get_super_exclusive_thawed(bdev);
-	else if (quotactl_cmd_write(cmd))
-		sb = get_super_thawed(bdev);
-	else
-		sb = get_super(bdev);
+
+	if (quotactl_cmd_onoff(cmd)) {
+		excl = true;
+		thawed = true;
+	} else if (quotactl_cmd_write(cmd)) {
+		thawed = true;
+	}
+
+retry:
+	sb = __get_super(bdev, excl);
+	if (thawed && sb && sb->s_writers.frozen != SB_UNFROZEN) {
+		if (excl)
+			up_write(&sb->s_umount);
+		else
+			up_read(&sb->s_umount);
+		wait_event(sb->s_writers.wait_unfrozen,
+			   sb->s_writers.frozen == SB_UNFROZEN);
+		put_super(sb);
+		goto retry;
+	}
+
 	bdput(bdev);
 	if (!sb)
 		return ERR_PTR(-ENODEV);
-
 	return sb;
+
 #else
 	return ERR_PTR(-ENODEV);
 #endif
diff --git a/fs/super.c b/fs/super.c
index 98bb0629ee108e..343e5c1e538d2a 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -307,7 +307,7 @@ static void __put_super(struct super_block *s)
  *	Drops a temporary reference, frees superblock if there's no
  *	references left.
  */
-static void put_super(struct super_block *sb)
+void put_super(struct super_block *sb)
 {
 	spin_lock(&sb_lock);
 	__put_super(sb);
@@ -740,7 +740,7 @@ void iterate_supers_type(struct file_system_type *type,
 
 EXPORT_SYMBOL(iterate_supers_type);
 
-static struct super_block *__get_super(struct block_device *bdev, bool excl)
+struct super_block *__get_super(struct block_device *bdev, bool excl)
 {
 	struct super_block *sb;
 
@@ -789,53 +789,6 @@ struct super_block *get_super(struct block_device *bdev)
 }
 EXPORT_SYMBOL(get_super);
 
-static struct super_block *__get_super_thawed(struct block_device *bdev,
-					      bool excl)
-{
-	while (1) {
-		struct super_block *s = __get_super(bdev, excl);
-		if (!s || s->s_writers.frozen == SB_UNFROZEN)
-			return s;
-		if (!excl)
-			up_read(&s->s_umount);
-		else
-			up_write(&s->s_umount);
-		wait_event(s->s_writers.wait_unfrozen,
-			   s->s_writers.frozen == SB_UNFROZEN);
-		put_super(s);
-	}
-}
-
-/**
- *	get_super_thawed - get thawed superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device. The superblock is returned once it is thawed
- *	(or immediately if it was not frozen). %NULL is returned if no match
- *	is found.
- */
-struct super_block *get_super_thawed(struct block_device *bdev)
-{
-	return __get_super_thawed(bdev, false);
-}
-EXPORT_SYMBOL(get_super_thawed);
-
-/**
- *	get_super_exclusive_thawed - get thawed superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device. The superblock is returned once it is thawed
- *	(or immediately if it was not frozen) and s_umount semaphore is held
- *	in exclusive mode. %NULL is returned if no match is found.
- */
-struct super_block *get_super_exclusive_thawed(struct block_device *bdev)
-{
-	return __get_super_thawed(bdev, true);
-}
-EXPORT_SYMBOL(get_super_exclusive_thawed);
-
 /**
  * get_active_super - get an active reference to the superblock of a device
  * @bdev: device to get the superblock for
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 8667d0cdc71e76..a61df0dd4f1989 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1409,7 +1409,7 @@ enum {
 
 struct sb_writers {
 	int				frozen;		/* Is sb frozen? */
-	wait_queue_head_t		wait_unfrozen;	/* for get_super_thawed() */
+	wait_queue_head_t		wait_unfrozen;	/* wait for thaw */
 	struct percpu_rw_semaphore	rw_sem[SB_FREEZE_LEVELS];
 };
 
@@ -3132,8 +3132,6 @@ extern struct file_system_type *get_filesystem(struct file_system_type *fs);
 extern void put_filesystem(struct file_system_type *fs);
 extern struct file_system_type *get_fs_type(const char *name);
 extern struct super_block *get_super(struct block_device *);
-extern struct super_block *get_super_thawed(struct block_device *);
-extern struct super_block *get_super_exclusive_thawed(struct block_device *bdev);
 extern struct super_block *get_active_super(struct block_device *bdev);
 extern void drop_super(struct super_block *sb);
 extern void drop_super_exclusive(struct super_block *sb);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36043.67791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYO6-0001W5-Ot; Tue, 24 Nov 2020 13:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36043.67791; Tue, 24 Nov 2020 13:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYO6-0001Vm-Ft; Tue, 24 Nov 2020 13:29:42 +0000
Received: by outflank-mailman (input) for mailman id 36043;
 Tue, 24 Nov 2020 13:29:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYO3-0000Qf-2g
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:39 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c38a68e8-be03-497f-9f6d-02fcc4642a2d;
 Tue, 24 Nov 2020 13:28:41 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMe-0006WC-FD; Tue, 24 Nov 2020 13:28:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYO3-0000Qf-2g
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:39 +0000
X-Inumbo-ID: c38a68e8-be03-497f-9f6d-02fcc4642a2d
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c38a68e8-be03-497f-9f6d-02fcc4642a2d;
	Tue, 24 Nov 2020 13:28:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=/O13+dcLk1L7JyoO+1N94KNWJ8AA8wxdNDwugfkz49U=; b=T0mbtousMy3gTU4W+Hav1SXmPG
	ntoJpVsr1rVrypAPqknRPUnpesJSMBrUc00lVI6rw9tW7L0UTd5qDYChjsDIp5I+2p9rgRfczorvW
	myNaorvAqWHtFI8YzROhDDUSGaAL1oE03jbsWr7RUeag8w/Fra4vYKypkXrsbnoMydets0qq/jCNw
	kAWVxaa3y0zHwqhTRdVwYBa7yrqO3jWUl7tXKnrMDreCJh9HnCSyQhcMGVcHEbB5RfozmnnymLj8T
	T8hSeMtKcFNU6MG3WPtXCg75QSQ31Et+ZdTKD3UyLrPnveQ8KHIGAcZK2WiJCk91FG4lz+px4829j
	pUPMLhxA==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMe-0006WC-FD; Tue, 24 Nov 2020 13:28:12 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 12/45] block: remove a superfluous check in blkpg_do_ioctl
Date: Tue, 24 Nov 2020 14:27:18 +0100
Message-Id: <20201124132751.3747337-13-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

sector_t is now always a u64, so this check is not needed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 6b785181344fe1..0c09bb7a6ff35f 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -35,15 +35,6 @@ static int blkpg_do_ioctl(struct block_device *bdev,
 	start = p.start >> SECTOR_SHIFT;
 	length = p.length >> SECTOR_SHIFT;
 
-	/* check for fit in a hd_struct */
-	if (sizeof(sector_t) < sizeof(long long)) {
-		long pstart = start, plength = length;
-
-		if (pstart != start || plength != length || pstart < 0 ||
-		    plength < 0 || p.pno > 65535)
-			return -EINVAL;
-	}
-
 	switch (op) {
 	case BLKPG_ADD_PARTITION:
 		/* check if partition is aligned to blocksize */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36042.67781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYO6-0001VP-AU; Tue, 24 Nov 2020 13:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36042.67781; Tue, 24 Nov 2020 13:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYO6-0001VH-7L; Tue, 24 Nov 2020 13:29:42 +0000
Received: by outflank-mailman (input) for mailman id 36042;
 Tue, 24 Nov 2020 13:29:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNo-0000Qf-2L
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:24 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26ea2157-cf32-448a-907d-cc80947b34ea;
 Tue, 24 Nov 2020 13:28:39 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMN-0006U7-DO; Tue, 24 Nov 2020 13:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNo-0000Qf-2L
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:24 +0000
X-Inumbo-ID: 26ea2157-cf32-448a-907d-cc80947b34ea
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 26ea2157-cf32-448a-907d-cc80947b34ea;
	Tue, 24 Nov 2020 13:28:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=qbQZqUyuqbWI0k7YOCxXBM7JuauYhSOH9/mPud+JBEQ=; b=JPuFDbn3CeG05+vmoNAk65fF+e
	E+AWjN7EiPbhuy8d43uHrVAUMryHcbvWp/sIQsb2keLu3WxSA8fNQgXpu6WDI/oPIzGdu1t9jnCoh
	ALo16cLEzPbEOBDyyUQhXe2zb4Wg1WnXZR7b7Of0uxXZvBhKgYdSIm1GQtyXmTB80hVs2aO5jg3Wd
	JxN4GkeL/03jRIvwuCVHER5Yi938IlSwfy+kOfinhXXDGRCH8xrqmYFJRsvf9MdJi8dfiZSzBVDNv
	9Y5vxWaV8co6fpHvChVu6aEJa1JKBk5/o4XXJCwXXE79GGFIwkJ9LI5tm5QxCZm0GqpKScG6vBm6n
	A4zKiNLQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMN-0006U7-DO; Tue, 24 Nov 2020 13:27:55 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 01/45] blk-cgroup: fix a hd_struct leak in blkcg_fill_root_iostats
Date: Tue, 24 Nov 2020 14:27:07 +0100
Message-Id: <20201124132751.3747337-2-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Every disk_get_part call needs to be paired with a disk_put_part call,
otherwise a reference to the hd_struct is leaked.

Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Acked-by: Tejun Heo <tj@kernel.org>
---
 block/blk-cgroup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index c68bdf58c9a6e1..54fbe1e80cc41a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -849,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
 			u64_stats_update_end(&blkg->iostat.sync);
 		}
+		disk_put_part(part);
 	}
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36045.67806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYOF-0001eA-VA; Tue, 24 Nov 2020 13:29:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36045.67806; Tue, 24 Nov 2020 13:29:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYOF-0001dz-PI; Tue, 24 Nov 2020 13:29:51 +0000
Received: by outflank-mailman (input) for mailman id 36045;
 Tue, 24 Nov 2020 13:29:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNj-0000Qf-2G
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:19 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e81fbcc-44c0-45e0-a3e4-844dac80b398;
 Tue, 24 Nov 2020 13:28:37 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMY-0006VC-60; Tue, 24 Nov 2020 13:28:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNj-0000Qf-2G
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:19 +0000
X-Inumbo-ID: 4e81fbcc-44c0-45e0-a3e4-844dac80b398
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4e81fbcc-44c0-45e0-a3e4-844dac80b398;
	Tue, 24 Nov 2020 13:28:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=2AKcef4DY6iJcprvQynjGgbCIgh8d3+qztu8J6nwbBU=; b=ZoLAX4IygcjKDDGL+R/L/NGfMA
	YlwR4qVUSVXjC/FQMPi9yNLQklagBSMZXN97OugbTjH0foML7tKuGRsO8QpQgajPY1ljXymAqnVGj
	hBYSVVrFavzJRjAMnvihP4CU0J0lD++Ohhxbd0ATXtBWfXwjb/MhAMjN2LmALGIzwn1CsBVK8t7wz
	SLIy4zz38tAxtqZlDxVQ3r1Vyfx0sAlWysmBq2CqEt2VFhQHpxjxBLAjG2dhmtEJWNshJafCuVX9Q
	8FnIDbA6123YSNJdoCn/kvc5rA13f5YX11HXcmevu5u8KVx9s1pGUvc4r6ukRByzfoXWk785N632H
	EJ0mLWSQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMY-0006VC-60; Tue, 24 Nov 2020 13:28:06 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 08/45] loop: do not call set_blocksize
Date: Tue, 24 Nov 2020 14:27:14 +0100
Message-Id: <20201124132751.3747337-9-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

set_blocksize is used by file systems to set their preferred buffer cache
block size.  Block drivers should not call it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9a27d4f1c08aac..b42c728620c9e4 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1164,9 +1164,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	size = get_loop_size(lo, file);
 	loop_set_size(lo, size);
 
-	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
-		      block_size(inode->i_bdev) : PAGE_SIZE);
-
 	lo->lo_state = Lo_bound;
 	if (part_shift)
 		lo->lo_flags |= LO_FLAGS_PARTSCAN;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:29:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36046.67818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYOH-0001gA-7D; Tue, 24 Nov 2020 13:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36046.67818; Tue, 24 Nov 2020 13:29:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYOH-0001g3-38; Tue, 24 Nov 2020 13:29:53 +0000
Received: by outflank-mailman (input) for mailman id 36046;
 Tue, 24 Nov 2020 13:29:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNt-0000Qf-2K
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:29 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c44d38a-3103-4e34-b193-c5b93c192b58;
 Tue, 24 Nov 2020 13:28:39 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMn-0006Y9-FD; Tue, 24 Nov 2020 13:28:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNt-0000Qf-2K
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:29 +0000
X-Inumbo-ID: 8c44d38a-3103-4e34-b193-c5b93c192b58
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8c44d38a-3103-4e34-b193-c5b93c192b58;
	Tue, 24 Nov 2020 13:28:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=/JD7P59kLLFVg7PYLNCPMZCDXAoG4vpH+WsHJpZ7foo=; b=Bvsu7rjRscfk3fEBDWdwaXTw1a
	yBP66Yl4lQOXZQ1S0o5jVg/rBBbRLPUiXelUDNs0ZAosLOVPtEolGC+8nE1mAd6iOzHUIIuTpQhAM
	QGAU98CffrUAdOC0jTKXiJ0GblnVmH8SXMxqR0sNHBtPeJOVwXat5gkslEyaiJN1KHtmwltwbLBBr
	xcNJqY9CnNj5y2xB3ZTcIuTs7H5v3FpZu2pdJV/2UWpXznmvQYvTM9bx79wzAXH4NglFzV354UL/8
	8BDoU15Guo3iZd+qKmPA8nkrs/rxZ0GZST9iabXcDgdZbFFJjp6nZrmp50w1pJT1e3IrE4W8kg91U
	Ya1lng2Q==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMn-0006Y9-FD; Tue, 24 Nov 2020 13:28:21 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 18/45] init: refactor devt_from_partuuid
Date: Tue, 24 Nov 2020 14:27:24 +0100
Message-Id: <20201124132751.3747337-19-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The code in devt_from_partuuid is very convoluted.  Refactor it a bit by
cleaning up the goto labels and variable usage.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 init/do_mounts.c | 68 ++++++++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/init/do_mounts.c b/init/do_mounts.c
index aef2f24461c7f1..afa26a4028d25e 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -105,13 +105,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
  */
 static dev_t devt_from_partuuid(const char *uuid_str)
 {
-	dev_t res = 0;
 	struct uuidcmp cmp;
 	struct device *dev = NULL;
-	struct gendisk *disk;
-	struct hd_struct *part;
+	dev_t devt = 0;
 	int offset = 0;
-	bool clear_root_wait = false;
 	char *slash;
 
 	cmp.uuid = uuid_str;
@@ -120,52 +117,49 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 	/* Check for optional partition number offset attributes. */
 	if (slash) {
 		char c = 0;
+
 		/* Explicitly fail on poor PARTUUID syntax. */
-		if (sscanf(slash + 1,
-			   "PARTNROFF=%d%c", &offset, &c) != 1) {
-			clear_root_wait = true;
-			goto done;
-		}
+		if (sscanf(slash + 1, "PARTNROFF=%d%c", &offset, &c) != 1)
+			goto clear_root_wait;
 		cmp.len = slash - uuid_str;
 	} else {
 		cmp.len = strlen(uuid_str);
 	}
 
-	if (!cmp.len) {
-		clear_root_wait = true;
-		goto done;
-	}
+	if (!cmp.len)
+		goto clear_root_wait;
 
-	dev = class_find_device(&block_class, NULL, &cmp,
-				&match_dev_by_uuid);
+	dev = class_find_device(&block_class, NULL, &cmp, &match_dev_by_uuid);
 	if (!dev)
-		goto done;
-
-	res = dev->devt;
+		return 0;
 
-	/* Attempt to find the partition by offset. */
-	if (!offset)
-		goto no_offset;
+	if (offset) {
+		/*
+		 * Attempt to find the requested partition by adding an offset
+		 * to the partition number found by UUID.
+		 */
+		struct hd_struct *part;
 
-	res = 0;
-	disk = part_to_disk(dev_to_part(dev));
-	part = disk_get_part(disk, dev_to_part(dev)->partno + offset);
-	if (part) {
-		res = part_devt(part);
-		put_device(part_to_dev(part));
+		part = disk_get_part(dev_to_disk(dev),
+				     dev_to_part(dev)->partno + offset);
+		if (part) {
+			devt = part_devt(part);
+			put_device(part_to_dev(part));
+		}
+	} else {
+		devt = dev->devt;
 	}
 
-no_offset:
 	put_device(dev);
-done:
-	if (clear_root_wait) {
-		pr_err("VFS: PARTUUID= is invalid.\n"
-		       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
-		if (root_wait)
-			pr_err("Disabling rootwait; root= is invalid.\n");
-		root_wait = 0;
-	}
-	return res;
+	return devt;
+
+clear_root_wait:
+	pr_err("VFS: PARTUUID= is invalid.\n"
+	       "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
+	if (root_wait)
+		pr_err("Disabling rootwait; root= is invalid.\n");
+	root_wait = 0;
+	return 0;
 }
 
 /**
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:30:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:30:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36054.67830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYOT-0002Dz-KI; Tue, 24 Nov 2020 13:30:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36054.67830; Tue, 24 Nov 2020 13:30:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYOT-0002DM-GH; Tue, 24 Nov 2020 13:30:05 +0000
Received: by outflank-mailman (input) for mailman id 36054;
 Tue, 24 Nov 2020 13:30:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNy-0000Qf-2P
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8eef9b6d-0fcd-4372-b97b-0a155c9a35bc;
 Tue, 24 Nov 2020 13:28:39 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMj-0006XN-5i; Tue, 24 Nov 2020 13:28:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNy-0000Qf-2P
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:34 +0000
X-Inumbo-ID: 8eef9b6d-0fcd-4372-b97b-0a155c9a35bc
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8eef9b6d-0fcd-4372-b97b-0a155c9a35bc;
	Tue, 24 Nov 2020 13:28:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=5XlVZbSBkS7owyByi7+TU2krdZYtRQJNSLbElG7UMJM=; b=GrByciCmlypygW+ymXF+7X/jSp
	HsgeqlURzqGzlgS5RJ/o9YvA/LFELAtg9fJnwnXHIYCDHFU5pcD4d6q944dHkxA6N7cn0bDnE7VXh
	GGu35B6Gcg+bgzCRYC//r/GDM/4JYgkFo6e+maqC9WuOLWUBLvew8ZTAEOoLACaBDzdaiWX4kCzN1
	mrD6nnXkiMIkyK5dE06VrsauzZZkdbBiGnIewiH224Fo/WsEG8HJfz+sdUEhX6TsJp+ZFRhjiRw2S
	cY+TmRWWh+F6CCP9jGYgpqRwjHhvoZocC5aLnQE1p+91dxzwMcKB7t/OezvIVTW5MzvYKNFt5e/x4
	rVcbPBpg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMj-0006XN-5i; Tue, 24 Nov 2020 13:28:17 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 15/45] block: use put_device in put_disk
Date: Tue, 24 Nov 2020 14:27:21 +0100
Message-Id: <20201124132751.3747337-16-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use put_device to put the device instead of poking into the internals
and using kobject_put.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 block/genhd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/genhd.c b/block/genhd.c
index 0bd9c41dd4cb69..f46e89226fdf91 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1803,7 +1803,7 @@ EXPORT_SYMBOL(__alloc_disk_node);
 void put_disk(struct gendisk *disk)
 {
 	if (disk)
-		kobject_put(&disk_to_dev(disk)->kobj);
+		put_device(disk_to_dev(disk));
 }
 EXPORT_SYMBOL(put_disk);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:30:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36076.67842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYP3-0002nS-V3; Tue, 24 Nov 2020 13:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36076.67842; Tue, 24 Nov 2020 13:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYP3-0002nL-RA; Tue, 24 Nov 2020 13:30:41 +0000
Received: by outflank-mailman (input) for mailman id 36076;
 Tue, 24 Nov 2020 13:30:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYNe-0000Qf-23
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:14 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f360c11-3cce-4639-9598-31dba95ee084;
 Tue, 24 Nov 2020 13:28:34 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMk-0006Xd-Im; Tue, 24 Nov 2020 13:28:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYNe-0000Qf-23
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:14 +0000
X-Inumbo-ID: 9f360c11-3cce-4639-9598-31dba95ee084
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9f360c11-3cce-4639-9598-31dba95ee084;
	Tue, 24 Nov 2020 13:28:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=aYwkX4I6zMx34nzrdb7tYezzW5eC03Czg70BjD05yqI=; b=PBlG+SI1LP3b7gr/lJlKmKFFJd
	bocVKBuspYZb9+bd6oRwQGEd0L7j3hQYuLx9nBobabPLesfL8HLab1E4KCm/5+LC3aeewsBsrF0Ec
	hulcbyCIFfsfA8vcNobuzkqbXhMEwxxlSaiC+PbDLVpuj7Vfeb4n/Hc03FWeCTlPALu7BKMacg7VB
	VdoUIV8XfOoQvjg+7Lpil7jQgfh2pOibzDF+EEKPx14W4pVYNFzWxgoqa5dOjKsd+dX8NOcHTR0xd
	r+KlCQy9eeFmFLAe7pGrffUSOJRf0kI5rzxEXRSylWdG/EZ+5D3kd5agpzXpiDKm6I3+ieiU+g83C
	6CbAnNrA==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMk-0006Xd-Im; Tue, 24 Nov 2020 13:28:19 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 16/45] block: change the hash used for looking up block devices
Date: Tue, 24 Nov 2020 14:27:22 +0100
Message-Id: <20201124132751.3747337-17-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Adding the minor to the major creates tons of pointless conflicts. Just
use the dev_t itself, which is 32 bits and thus guaranteed to fit
into ino_t.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 fs/block_dev.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index f6a2a06ad262fa..437f67e12b2838 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -863,35 +863,12 @@ void __init bdev_cache_init(void)
 	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
 }
 
-/*
- * Most likely _very_ bad one - but then it's hardly critical for small
- * /dev and can be fixed when somebody will need really large one.
- * Keep in mind that it will be fed through icache hash function too.
- */
-static inline unsigned long hash(dev_t dev)
-{
-	return MAJOR(dev)+MINOR(dev);
-}
-
-static int bdev_test(struct inode *inode, void *data)
-{
-	return BDEV_I(inode)->bdev.bd_dev == *(dev_t *)data;
-}
-
-static int bdev_set(struct inode *inode, void *data)
-{
-	BDEV_I(inode)->bdev.bd_dev = *(dev_t *)data;
-	return 0;
-}
-
 static struct block_device *bdget(dev_t dev)
 {
 	struct block_device *bdev;
 	struct inode *inode;
 
-	inode = iget5_locked(blockdev_superblock, hash(dev),
-			bdev_test, bdev_set, &dev);
-
+	inode = iget_locked(blockdev_superblock, dev);
 	if (!inode)
 		return NULL;
 
@@ -903,6 +880,7 @@ static struct block_device *bdget(dev_t dev)
 		bdev->bd_super = NULL;
 		bdev->bd_inode = inode;
 		bdev->bd_part_count = 0;
+		bdev->bd_dev = dev;
 		inode->i_mode = S_IFBLK;
 		inode->i_rdev = dev;
 		inode->i_bdev = bdev;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:32:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:32:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36095.67854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYQs-00034C-BO; Tue, 24 Nov 2020 13:32:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36095.67854; Tue, 24 Nov 2020 13:32:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYQs-000345-8D; Tue, 24 Nov 2020 13:32:34 +0000
Received: by outflank-mailman (input) for mailman id 36095;
 Tue, 24 Nov 2020 13:32:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1khYQq-00033w-Kb
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb3e361f-6f3a-4e36-8c21-584378efb9e0;
 Tue, 24 Nov 2020 13:32:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A52D5AC66;
 Tue, 24 Nov 2020 13:32:30 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
	id 1khYQq-00033w-Kb
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:32 +0000
X-Inumbo-ID: cb3e361f-6f3a-4e36-8c21-584378efb9e0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cb3e361f-6f3a-4e36-8c21-584378efb9e0;
	Tue, 24 Nov 2020 13:32:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A52D5AC66;
	Tue, 24 Nov 2020 13:32:30 +0000 (UTC)
Subject: Re: [PATCH 38/45] block: switch partition lookup to use struct
 block_device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-39-hch@lst.de>
From: Coly Li <colyli@suse.de>
Message-ID: <4cdd6877-f4fd-8022-4a4f-3eabb86af2b9@suse.de>
Date: Tue, 24 Nov 2020 21:32:19 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-39-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/24/20 9:27 PM, Christoph Hellwig wrote:
> Use struct block_device to look up partitions on a disk.  This removes
> all usage of struct hd_struct from the I/O path, and this allows removing
> the percpu refcount in struct hd_struct.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>


For the bcache part, Acked-by: Coly Li <colyli@suse.de>

> ---
>  block/bio.c                        |  4 +-
>  block/blk-core.c                   | 66 ++++++++++++++----------------
>  block/blk-flush.c                  |  2 +-
>  block/blk-mq.c                     |  9 ++--
>  block/blk-mq.h                     |  7 ++--
>  block/blk.h                        |  4 +-
>  block/genhd.c                      | 56 +++++++++++++------------
>  block/partitions/core.c            |  7 +---
>  drivers/block/drbd/drbd_receiver.c |  2 +-
>  drivers/block/drbd/drbd_worker.c   |  2 +-
>  drivers/block/zram/zram_drv.c      |  2 +-
>  drivers/md/bcache/request.c        |  4 +-
>  drivers/md/dm.c                    |  4 +-
>  drivers/md/md.c                    |  4 +-
>  drivers/nvme/target/admin-cmd.c    | 20 ++++-----
>  fs/ext4/super.c                    | 18 +++-----
>  fs/ext4/sysfs.c                    | 10 +----
>  fs/f2fs/f2fs.h                     |  2 +-
>  fs/f2fs/super.c                    |  6 +--
>  include/linux/blkdev.h             |  8 ++--
>  include/linux/genhd.h              |  4 +-
>  include/linux/part_stat.h          | 17 ++++----
>  22 files changed, 120 insertions(+), 138 deletions(-)
[snipped]

> diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
> index afac8d07c1bd00..85b1f2a9b72d68 100644
> --- a/drivers/md/bcache/request.c
> +++ b/drivers/md/bcache/request.c
> @@ -475,7 +475,7 @@ struct search {
>  	unsigned int		read_dirty_data:1;
>  	unsigned int		cache_missed:1;
>  
> -	struct hd_struct	*part;
> +	struct block_device	*part;
>  	unsigned long		start_time;
>  
>  	struct btree_op		op;
> @@ -1073,7 +1073,7 @@ struct detached_dev_io_private {
>  	unsigned long		start_time;
>  	bio_end_io_t		*bi_end_io;
>  	void			*bi_private;
> -	struct hd_struct	*part;
> +	struct block_device	*part;
>  };
[snipped]



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:34:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36107.67865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYSe-0003G2-SB; Tue, 24 Nov 2020 13:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36107.67865; Tue, 24 Nov 2020 13:34:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYSe-0003Fv-PF; Tue, 24 Nov 2020 13:34:24 +0000
Received: by outflank-mailman (input) for mailman id 36107;
 Tue, 24 Nov 2020 13:34:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1khYSe-0003Fp-1D
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:34:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40281fb5-d1b4-4aff-8d04-fe3db5abd4e9;
 Tue, 24 Nov 2020 13:34:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61EB5AF33;
 Tue, 24 Nov 2020 13:34:22 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
	id 1khYSe-0003Fp-1D
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:34:24 +0000
X-Inumbo-ID: 40281fb5-d1b4-4aff-8d04-fe3db5abd4e9
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 40281fb5-d1b4-4aff-8d04-fe3db5abd4e9;
	Tue, 24 Nov 2020 13:34:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 61EB5AF33;
	Tue, 24 Nov 2020 13:34:22 +0000 (UTC)
Subject: Re: [PATCH 30/45] block: remove the nr_sects field in struct
 hd_struct
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-31-hch@lst.de>
From: Coly Li <colyli@suse.de>
Message-ID: <044dd4ec-c64d-3c5d-cf54-a4ca665b8912@suse.de>
Date: Tue, 24 Nov 2020 21:34:11 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-31-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/24/20 9:27 PM, Christoph Hellwig wrote:
> Now that the hd_struct always has a block device attached to it, there is
> no need for having two size fields that just get out of sync.
> 
> Additionally, the field in hd_struct did not use proper serialization,
> possibly allowing for torn writes.  By only using the block_device field
> this problem also gets fixed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

For the bcache part, Acked-by: Coly Li <colyli@suse.de>

Thanks.

Coly Li

> ---
>  block/bio.c                        |  4 +-
>  block/blk-core.c                   |  2 +-
>  block/blk.h                        | 53 ----------------------
>  block/genhd.c                      | 55 +++++++++++-----------
>  block/partitions/core.c            | 17 ++++---
>  drivers/block/loop.c               |  1 -
>  drivers/block/nbd.c                |  2 +-
>  drivers/block/xen-blkback/common.h |  4 +-
>  drivers/md/bcache/super.c          |  2 +-
>  drivers/s390/block/dasd_ioctl.c    |  4 +-
>  drivers/target/target_core_pscsi.c |  7 +--
>  fs/block_dev.c                     | 73 +-----------------------------
>  fs/f2fs/super.c                    |  2 +-
>  fs/pstore/blk.c                    |  2 +-
>  include/linux/genhd.h              | 29 +++---------
>  kernel/trace/blktrace.c            |  2 +-
>  16 files changed, 60 insertions(+), 199 deletions(-)
> 
[snipped]

> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index c55d3c58a7ef55..04fa40868fbe10 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1408,7 +1408,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
>  			q->limits.raid_partial_stripes_expensive;
>  
>  	ret = bcache_device_init(&dc->disk, block_size,
> -			 dc->bdev->bd_part->nr_sects - dc->sb.data_offset,
> +			 bdev_nr_sectors(dc->bdev) - dc->sb.data_offset,
>  			 dc->bdev, &bcache_cached_ops);
>  	if (ret)
>  		return ret;
[snipped]


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:38:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36116.67878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYWn-0003S6-Er; Tue, 24 Nov 2020 13:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36116.67878; Tue, 24 Nov 2020 13:38:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYWn-0003Rz-BH; Tue, 24 Nov 2020 13:38:41 +0000
Received: by outflank-mailman (input) for mailman id 36116;
 Tue, 24 Nov 2020 13:38:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1khYWl-0003Rt-Qt
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:38:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b7af852-0673-4c3c-a815-09a2a59dc2ac;
 Tue, 24 Nov 2020 13:38:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E8E43AC2D;
 Tue, 24 Nov 2020 13:38:37 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
	id 1khYWl-0003Rt-Qt
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:38:39 +0000
X-Inumbo-ID: 2b7af852-0673-4c3c-a815-09a2a59dc2ac
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2b7af852-0673-4c3c-a815-09a2a59dc2ac;
	Tue, 24 Nov 2020 13:38:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E8E43AC2D;
	Tue, 24 Nov 2020 13:38:37 +0000 (UTC)
Subject: Re: [PATCH 23/45] block: remove i_bdev
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-24-hch@lst.de>
From: Coly Li <colyli@suse.de>
Message-ID: <bbb4130b-6848-f2ed-b7e0-c86b68c2663a@suse.de>
Date: Tue, 24 Nov 2020 21:38:27 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-24-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/24/20 9:27 PM, Christoph Hellwig wrote:
> Switch the block device lookup interfaces to directly work with a dev_t
> so that struct block_device references are only acquired by the
> blkdev_get variants (and the blk-cgroup special case).  This means that
> we no longer need an extra reference in the inode and can generally
> simplify handling of struct block_device to keep the lookups contained
> in the core block layer code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

For the bcache part, Acked-by: Coly Li <colyli@suse.de>

Thanks.

Coly Li

> ---
>  block/ioctl.c                                |   3 +-
>  drivers/block/loop.c                         |   8 +-
>  drivers/md/bcache/super.c                    |  20 +-
>  drivers/md/dm-table.c                        |   9 +-
>  drivers/mtd/mtdsuper.c                       |  17 +-
>  drivers/target/target_core_file.c            |   6 +-
>  drivers/usb/gadget/function/storage_common.c |   8 +-
>  fs/block_dev.c                               | 195 +++++--------------
>  fs/btrfs/volumes.c                           |  13 +-
>  fs/inode.c                                   |   3 -
>  fs/internal.h                                |   7 +-
>  fs/io_uring.c                                |  10 +-
>  fs/pipe.c                                    |   5 +-
>  fs/quota/quota.c                             |  19 +-
>  fs/statfs.c                                  |   2 +-
>  fs/super.c                                   |  37 ++--
>  include/linux/blkdev.h                       |   2 +-
>  include/linux/fs.h                           |   1 -
>  18 files changed, 114 insertions(+), 251 deletions(-)
> 

[snipped]

> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index a6a5e21e4fd136..c55d3c58a7ef55 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -2380,38 +2380,38 @@ kobj_attribute_write(register,		register_bcache);
>  kobj_attribute_write(register_quiet,	register_bcache);
>  kobj_attribute_write(pendings_cleanup,	bch_pending_bdevs_cleanup);
>  
> -static bool bch_is_open_backing(struct block_device *bdev)
> +static bool bch_is_open_backing(dev_t dev)
>  {
>  	struct cache_set *c, *tc;
>  	struct cached_dev *dc, *t;
>  
>  	list_for_each_entry_safe(c, tc, &bch_cache_sets, list)
>  		list_for_each_entry_safe(dc, t, &c->cached_devs, list)
> -			if (dc->bdev == bdev)
> +			if (dc->bdev->bd_dev == dev)
>  				return true;
>  	list_for_each_entry_safe(dc, t, &uncached_devices, list)
> -		if (dc->bdev == bdev)
> +		if (dc->bdev->bd_dev == dev)
>  			return true;
>  	return false;
>  }
>  
> -static bool bch_is_open_cache(struct block_device *bdev)
> +static bool bch_is_open_cache(dev_t dev)
>  {
>  	struct cache_set *c, *tc;
>  
>  	list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
>  		struct cache *ca = c->cache;
>  
> -		if (ca->bdev == bdev)
> +		if (ca->bdev->bd_dev == dev)
>  			return true;
>  	}
>  
>  	return false;
>  }
>  
> -static bool bch_is_open(struct block_device *bdev)
> +static bool bch_is_open(dev_t dev)
>  {
> -	return bch_is_open_cache(bdev) || bch_is_open_backing(bdev);
> +	return bch_is_open_cache(dev) || bch_is_open_backing(dev);
>  }
>  
>  struct async_reg_args {
> @@ -2535,9 +2535,11 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
>  				  sb);
>  	if (IS_ERR(bdev)) {
>  		if (bdev == ERR_PTR(-EBUSY)) {
> -			bdev = lookup_bdev(strim(path));
> +			dev_t dev;
> +
>  			mutex_lock(&bch_register_lock);
> -			if (!IS_ERR(bdev) && bch_is_open(bdev))
> +			if (lookup_bdev(strim(path), &dev) == 0 &&
> +			    bch_is_open(dev))
>  				err = "device already registered";
>  			else
>  				err = "device busy";
[snipped]


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36125.67902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYJ-0004Hp-3D; Tue, 24 Nov 2020 13:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36125.67902; Tue, 24 Nov 2020 13:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYI-0004He-Vj; Tue, 24 Nov 2020 13:40:14 +0000
Received: by outflank-mailman (input) for mailman id 36125;
 Tue, 24 Nov 2020 13:40:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPp-0000Qf-5d
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:29 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0136be9-339d-49b6-8e25-045d885334de;
 Tue, 24 Nov 2020 13:29:15 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNS-0006hp-IQ; Tue, 24 Nov 2020 13:29:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPp-0000Qf-5d
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:29 +0000
X-Inumbo-ID: a0136be9-339d-49b6-8e25-045d885334de
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a0136be9-339d-49b6-8e25-045d885334de;
	Tue, 24 Nov 2020 13:29:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Byl7CBCOdylPQKGIFA7NBT0G7FiaS8ROkaw/bubCmjo=; b=hP+HoMXSsqR+FJlPVNTSPcniBz
	1aqdDxKykpBKAkRkRbBCNckP9ETrUGi7CUc6c0QdQEfaGPZF8esX16E5969KLXQn0csqLqX91PHbN
	2FT5mRafK8f26f12dZqTcphrahqhc5JyTIMBOODUZcgk3yEsPQql0HiJdKT4GIgdOp5RcKsptXLF7
	0a2dEii7Uz3/wk6fKxXxM0CuK95G/xwkiZOshyF6U7C3FVF+5neDdGw9ntgzVsAasqtcgiS4Qbtue
	zIqh0VftgHq8kEMenVn6mlPCa3+4kytrRXrVQU5oi8/2yu6lQoIT61N/JBZpQdCKQ4RpagJwj6Ero
	pFqYpyaQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNS-0006hp-IQ; Tue, 24 Nov 2020 13:29:02 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 40/45] block: pass a block_device to blk_alloc_devt
Date: Tue, 24 Nov 2020 14:27:46 +0100
Message-Id: <20201124132751.3747337-41-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Pass the block_device actually needed instead of the hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk.h             |  2 +-
 block/genhd.c           | 14 +++++++-------
 block/partitions/core.c |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index d5bf8f3a078186..9657c6da7c770c 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -350,7 +350,7 @@ static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
 
 struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
+int blk_alloc_devt(struct block_device *part, dev_t *devt);
 void blk_free_devt(dev_t devt);
 char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
diff --git a/block/genhd.c b/block/genhd.c
index 60004bc8ba5b56..498c816e90df64 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -557,8 +557,8 @@ static int blk_mangle_minor(int minor)
 }
 
 /**
- * blk_alloc_devt - allocate a dev_t for a partition
- * @part: partition to allocate dev_t for
+ * blk_alloc_devt - allocate a dev_t for a block device
+ * @bdev: block device to allocate dev_t for
  * @devt: out parameter for resulting dev_t
  *
  * Allocate a dev_t for block device.
@@ -570,14 +570,14 @@ static int blk_mangle_minor(int minor)
  * CONTEXT:
  * Might sleep.
  */
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
+int blk_alloc_devt(struct block_device *bdev, dev_t *devt)
 {
-	struct gendisk *disk = part_to_disk(part);
+	struct gendisk *disk = bdev->bd_disk;
 	int idx;
 
 	/* in consecutive minor range? */
-	if (part->bdev->bd_partno < disk->minors) {
-		*devt = MKDEV(disk->major, disk->first_minor + part->bdev->bd_partno);
+	if (bdev->bd_partno < disk->minors) {
+		*devt = MKDEV(disk->major, disk->first_minor + bdev->bd_partno);
 		return 0;
 	}
 
@@ -733,7 +733,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 
 	disk->flags |= GENHD_FL_UP;
 
-	retval = blk_alloc_devt(disk->part0->bd_part, &devt);
+	retval = blk_alloc_devt(disk->part0, &devt);
 	if (retval) {
 		WARN_ON(1);
 		return;
diff --git a/block/partitions/core.c b/block/partitions/core.c
index ee4f4e3237aa2d..45fed1108d4425 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -392,7 +392,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	pdev->type = &part_type;
 	pdev->parent = ddev;
 
-	err = blk_alloc_devt(p, &devt);
+	err = blk_alloc_devt(bdev, &devt);
 	if (err)
 		goto out_bdput;
 	pdev->devt = devt;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36123.67890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYH-0004Gc-Pu; Tue, 24 Nov 2020 13:40:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36123.67890; Tue, 24 Nov 2020 13:40:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYH-0004GU-Mv; Tue, 24 Nov 2020 13:40:13 +0000
Received: by outflank-mailman (input) for mailman id 36123;
 Tue, 24 Nov 2020 13:40:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPu-0000Qf-6s
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf340b6a-d8d5-4c32-ba13-d1a33dcf193e;
 Tue, 24 Nov 2020 13:29:15 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNF-0006fK-UM; Tue, 24 Nov 2020 13:28:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPu-0000Qf-6s
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:34 +0000
X-Inumbo-ID: bf340b6a-d8d5-4c32-ba13-d1a33dcf193e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bf340b6a-d8d5-4c32-ba13-d1a33dcf193e;
	Tue, 24 Nov 2020 13:29:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=poZBwhuOLPH4nVpuZeBAMid+BIvjZtpwd5xAAK8WqlA=; b=GShL211CdoWfaon8Asg+Yyt/uD
	O4KY/ru6xFzxXugAsi4gNVsfLdWSHBDbaQHEileb61HyjAxsCkG17xh05JMkhBNHpjZ7bdbqhUV6j
	ZJDO8oPwblwrDMI4PFP8XiSn2cGysXNlJXRl2ZwnsCcm8WrZrLGTBvkklhzFB20Tl5AgqzPshm0EO
	hrL4cwDDEc2HuFczFp/++Sod4QAfiJnnwIIlrtvLuBLWttgsr1YhsQ96QA7pgqWpGdQTY/qbb10u/
	HgkloUSyqKxJ18ixGgzZUIzKPRUgVuftoC7ZzTmsFglKlIbNU502KOzP0sFhzL4RsM0nISeUSHKnc
	NZz4Fvfw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNF-0006fK-UM; Tue, 24 Nov 2020 13:28:51 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 33/45] block: move the partition_meta_info to struct block_device
Date: Tue, 24 Nov 2020 14:27:39 +0100
Message-Id: <20201124132751.3747337-34-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the partition_meta_info to struct block_device in preparation for
killing struct hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk.h               |  1 -
 block/genhd.c             |  3 ++-
 block/partitions/core.c   | 18 +++++++-----------
 fs/block_dev.c            |  1 +
 include/linux/blk_types.h |  2 ++
 include/linux/genhd.h     |  1 -
 init/do_mounts.c          |  7 ++++---
 7 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index 3f801f6e86f8a1..0bd4b58bcbaf77 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -381,7 +381,6 @@ static inline void hd_struct_put(struct hd_struct *part)
 
 static inline void hd_free_part(struct hd_struct *part)
 {
-	kfree(part->info);
 	bdput(part->bdev);
 	percpu_ref_exit(&part->ref);
 }
diff --git a/block/genhd.c b/block/genhd.c
index 8212a2dd10ec4e..20c7bf6d091e94 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -994,7 +994,8 @@ void __init printk_all_partitions(void)
 			       bdevt_str(part_devt(part), devt_buf),
 			       bdev_nr_sectors(part->bdev) >> 1,
 			       disk_name(disk, part->partno, name_buf),
-			       part->info ? part->info->uuid : "");
+			       part->bdev->bd_meta_info ?
+					part->bdev->bd_meta_info->uuid : "");
 			if (is_part0) {
 				if (dev->parent && dev->parent->driver)
 					printk(" driver: %s\n",
diff --git a/block/partitions/core.c b/block/partitions/core.c
index aa4b836374b037..e24673b4cba61f 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -275,8 +275,9 @@ static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
 	struct hd_struct *part = dev_to_part(dev);
 
 	add_uevent_var(env, "PARTN=%u", part->partno);
-	if (part->info && part->info->volname[0])
-		add_uevent_var(env, "PARTNAME=%s", part->info->volname);
+	if (part->bdev->bd_meta_info && part->bdev->bd_meta_info->volname[0])
+		add_uevent_var(env, "PARTNAME=%s",
+			       part->bdev->bd_meta_info->volname);
 	return 0;
 }
 
@@ -422,13 +423,10 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	p->policy = get_disk_ro(disk);
 
 	if (info) {
-		struct partition_meta_info *pinfo;
-
-		pinfo = kzalloc_node(sizeof(*pinfo), GFP_KERNEL, disk->node_id);
-		if (!pinfo)
+		err = -ENOMEM;
+		bdev->bd_meta_info = kmemdup(info, sizeof(*info), GFP_KERNEL);
+		if (!bdev->bd_meta_info)
 			goto out_bdput;
-		memcpy(pinfo, info, sizeof(*info));
-		p->info = pinfo;
 	}
 
 	dname = dev_name(ddev);
@@ -444,7 +442,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 	err = blk_alloc_devt(p, &devt);
 	if (err)
-		goto out_free_info;
+		goto out_bdput;
 	pdev->devt = devt;
 
 	/* delay uevent until 'holders' subdir is created */
@@ -481,8 +479,6 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		kobject_uevent(&pdev->kobj, KOBJ_ADD);
 	return p;
 
-out_free_info:
-	kfree(p->info);
 out_bdput:
 	bdput(bdev);
 out_free:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 0427e6fa59556f..2393395201aa6c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -785,6 +785,7 @@ static void bdev_free_inode(struct inode *inode)
 	struct block_device *bdev = I_BDEV(inode);
 
 	free_percpu(bdev->bd_stats);
+	kfree(bdev->bd_meta_info);
 
 	kmem_cache_free(bdev_cachep, BDEV_I(inode));
 }
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index a690008f60cd92..2f8ede04e5a94c 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -49,6 +49,8 @@ struct block_device {
 	/* Mutex for freeze */
 	struct mutex		bd_fsfreeze_mutex;
 	struct super_block	*bd_fsfreeze_sb;
+
+	struct partition_meta_info *bd_meta_info;
 } __randomize_layout;
 
 #define bdev_whole(_bdev) \
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index a9d64da474233f..1e52f38b719db3 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -57,7 +57,6 @@ struct hd_struct {
 	struct device __dev;
 	struct kobject *holder_dir;
 	int policy, partno;
-	struct partition_meta_info *info;
 #ifdef CONFIG_FAIL_MAKE_REQUEST
 	int make_it_fail;
 #endif
diff --git a/init/do_mounts.c b/init/do_mounts.c
index 5879edf083b318..368ccb71850126 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -79,8 +79,8 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	const struct uuidcmp *cmp = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info ||
-	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
+	if (!part->bdev->bd_meta_info ||
+	    strncasecmp(cmp->uuid, part->bdev->bd_meta_info->uuid, cmp->len))
 		return 0;
 	return 1;
 }
@@ -169,7 +169,8 @@ static int match_dev_by_label(struct device *dev, const void *data)
 	const char *label = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info || strcmp(label, part->info->volname))
+	if (!part->bdev->bd_meta_info ||
+	    strcmp(label, part->bdev->bd_meta_info->volname))
 		return 0;
 	return 1;
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36126.67908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYJ-0004IP-G6; Tue, 24 Nov 2020 13:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36126.67908; Tue, 24 Nov 2020 13:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYJ-0004IF-7s; Tue, 24 Nov 2020 13:40:15 +0000
Received: by outflank-mailman (input) for mailman id 36126;
 Tue, 24 Nov 2020 13:40:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOI-0000Qf-3K
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:54 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 162c60c4-ba10-400e-9e62-6205e806d74c;
 Tue, 24 Nov 2020 13:28:43 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMf-0006WP-Qt; Tue, 24 Nov 2020 13:28:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOI-0000Qf-3K
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:54 +0000
X-Inumbo-ID: 162c60c4-ba10-400e-9e62-6205e806d74c
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 162c60c4-ba10-400e-9e62-6205e806d74c;
	Tue, 24 Nov 2020 13:28:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=1wdTghj4fAZYDMEoMIijyWaUffoUw73Ru2rPfTzIwgc=; b=iuyrZ6v9lOV+HmLY++2sL1Tphw
	RstNxNQbVHznNk65hqoXvXwyS2SxZeKCbg82rqAa4OoHom6hfCotYBQDqYAWw1z7h2qe5AtwL40YH
	pBWGQVEI8t2986Trhrw4ARuUf+uIC6KWaSH3uNwAKDcQFYZw/3SW7lDafQ8OOn0HUoOncreV3btKH
	ODAv7ImyQoKl/hMu3FGvOWhHWdbWS5MKSe/MRvcGAShE6UQB/FLHFc9L49FARzuarAUj6Qb7J9kUo
	goEygVbNrIrONzqSmE36MDIEQOQwyxFUA0fjky85vXz+mIOI+RNIu5gA2ev77SpfiLP3LSaTl5ArA
	qhnIJE8w==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMf-0006WP-Qt; Tue, 24 Nov 2020 13:28:14 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 13/45] block: add a bdev_kobj helper
Date: Tue, 24 Nov 2020 14:27:19 +0100
Message-Id: <20201124132751.3747337-14-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a little helper to find the kobject for a struct block_device.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 drivers/md/bcache/super.c |  7 ++-----
 drivers/md/md.c           |  4 +---
 fs/block_dev.c            |  6 +++---
 fs/btrfs/sysfs.c          | 15 +++------------
 include/linux/blk_types.h |  3 +++
 5 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 46a00134a36ae1..a6a5e21e4fd136 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1447,8 +1447,7 @@ static int register_bdev(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
 		goto err;
 
 	err = "error creating kobject";
-	if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj,
-			"bcache"))
+	if (kobject_add(&dc->disk.kobj, bdev_kobj(bdev), "bcache"))
 		goto err;
 	if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj))
 		goto err;
@@ -2342,9 +2341,7 @@ static int register_cache(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
 		goto err;
 	}
 
-	if (kobject_add(&ca->kobj,
-			&part_to_dev(bdev->bd_part)->kobj,
-			"bcache")) {
+	if (kobject_add(&ca->kobj, bdev_kobj(bdev), "bcache")) {
 		err = "error calling kobject_add";
 		ret = -ENOMEM;
 		goto out;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index b2edf5e0f965b5..7ce6047c856ea2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2414,7 +2414,6 @@ EXPORT_SYMBOL(md_integrity_add_rdev);
 static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
 {
 	char b[BDEVNAME_SIZE];
-	struct kobject *ko;
 	int err;
 
 	/* prevent duplicates */
@@ -2477,9 +2476,8 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev)
 	if ((err = kobject_add(&rdev->kobj, &mddev->kobj, "dev-%s", b)))
 		goto fail;
 
-	ko = &part_to_dev(rdev->bdev->bd_part)->kobj;
 	/* failure here is OK */
-	err = sysfs_create_link(&rdev->kobj, ko, "block");
+	err = sysfs_create_link(&rdev->kobj, bdev_kobj(rdev->bdev), "block");
 	rdev->sysfs_state = sysfs_get_dirent_safe(rdev->kobj.sd, "state");
 	rdev->sysfs_unack_badblocks =
 		sysfs_get_dirent_safe(rdev->kobj.sd, "unacknowledged_bad_blocks");
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 60492620d51866..f6a2a06ad262fa 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1242,7 +1242,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	holder->disk = disk;
 	holder->refcnt = 1;
 
-	ret = add_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+	ret = add_symlink(disk->slave_dir, bdev_kobj(bdev));
 	if (ret)
 		goto out_free;
 
@@ -1259,7 +1259,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	goto out_unlock;
 
 out_del:
-	del_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+	del_symlink(disk->slave_dir, bdev_kobj(bdev));
 out_free:
 	kfree(holder);
 out_unlock:
@@ -1287,7 +1287,7 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	holder = bd_find_holder_disk(bdev, disk);
 
 	if (!WARN_ON_ONCE(holder == NULL) && !--holder->refcnt) {
-		del_symlink(disk->slave_dir, &part_to_dev(bdev->bd_part)->kobj);
+		del_symlink(disk->slave_dir, bdev_kobj(bdev));
 		del_symlink(bdev->bd_part->holder_dir,
 			    &disk_to_dev(disk)->kobj);
 		kobject_put(bdev->bd_part->holder_dir);
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index 279d9262b676d4..24b6c6dc69000a 100644
--- a/fs/btrfs/sysfs.c
+++ b/fs/btrfs/sysfs.c
@@ -1232,8 +1232,6 @@ int btrfs_sysfs_add_space_info_type(struct btrfs_fs_info *fs_info,
 
 void btrfs_sysfs_remove_device(struct btrfs_device *device)
 {
-	struct hd_struct *disk;
-	struct kobject *disk_kobj;
 	struct kobject *devices_kobj;
 
 	/*
@@ -1243,11 +1241,8 @@ void btrfs_sysfs_remove_device(struct btrfs_device *device)
 	devices_kobj = device->fs_info->fs_devices->devices_kobj;
 	ASSERT(devices_kobj);
 
-	if (device->bdev) {
-		disk = device->bdev->bd_part;
-		disk_kobj = &part_to_dev(disk)->kobj;
-		sysfs_remove_link(devices_kobj, disk_kobj->name);
-	}
+	if (device->bdev)
+		sysfs_remove_link(devices_kobj, bdev_kobj(device->bdev)->name);
 
 	if (device->devid_kobj.state_initialized) {
 		kobject_del(&device->devid_kobj);
@@ -1353,11 +1348,7 @@ int btrfs_sysfs_add_device(struct btrfs_device *device)
 	nofs_flag = memalloc_nofs_save();
 
 	if (device->bdev) {
-		struct hd_struct *disk;
-		struct kobject *disk_kobj;
-
-		disk = device->bdev->bd_part;
-		disk_kobj = &part_to_dev(disk)->kobj;
+		struct kobject *disk_kobj = bdev_kobj(device->bdev);
 
 		ret = sysfs_create_link(devices_kobj, disk_kobj, disk_kobj->name);
 		if (ret) {
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index ebfb4e7c1fd125..9698f459cc65c9 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -49,6 +49,9 @@ struct block_device {
 	struct super_block	*bd_fsfreeze_sb;
 } __randomize_layout;
 
+#define bdev_kobj(_bdev) \
+	(&part_to_dev((_bdev)->bd_part)->kobj)
+
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
  * Alpha cannot write a byte atomically, so we need to use 32-bit value.
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36127.67918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYJ-0004JO-Uu; Tue, 24 Nov 2020 13:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36127.67918; Tue, 24 Nov 2020 13:40:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYJ-0004J7-M2; Tue, 24 Nov 2020 13:40:15 +0000
Received: by outflank-mailman (input) for mailman id 36127;
 Tue, 24 Nov 2020 13:40:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQT-0000Qf-78
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:09 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53cf6379-1e78-4f12-bf8c-3463abba7bd9;
 Tue, 24 Nov 2020 13:29:21 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNJ-0006ft-Oa; Tue, 24 Nov 2020 13:28:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQT-0000Qf-78
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:09 +0000
X-Inumbo-ID: 53cf6379-1e78-4f12-bf8c-3463abba7bd9
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 53cf6379-1e78-4f12-bf8c-3463abba7bd9;
	Tue, 24 Nov 2020 13:29:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=oCaG9QTti35Co15ZkTmy9WR1wN4N3tzsYJ4o9aP2968=; b=uTjq+lzNK3MRvJUZJe1MFek3QY
	mdTWQqnQ6bq3Bs9Eh1zdFXj1RqnhHFuSzSt0o314aTwb5YooreOfWR4Gmgn/r7VxTMNLuIUI2gBnN
	Ym7inGUKXhKLVn4XEfAdQgHIzdr3jOVC/NgIXZI6iMRHxIF1O+uILzeLIxHG+SnknBLygyZesVlYM
	Uk9wQKOssAMW3w8as2Z6Tn9IvRXZpZfVZZ/c6w4j4Aww1N6913//5WtpNlj6xqJ9Ek4ANTYisQPlM
	+6XnevhikD3blqjEd/gF5Tvp+llNnQOKWfeZVuBaB/9fPWQMHTFcFKvfAGo7ObRXC2Kdd1b9A1KcU
	267uk3bg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNJ-0006ft-Oa; Tue, 24 Nov 2020 13:28:54 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 35/45] block: move make_it_fail to struct block_device
Date: Tue, 24 Nov 2020 14:27:41 +0100
Message-Id: <20201124132751.3747337-36-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the make_it_fail flag to struct block_device and turn it into a bool
in preparation for killing struct hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c          | 3 ++-
 block/genhd.c             | 4 ++--
 include/linux/blk_types.h | 3 +++
 include/linux/genhd.h     | 3 ---
 4 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9a3793d5ce38d4..9121390be97a76 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -668,7 +668,8 @@ __setup("fail_make_request=", setup_fail_make_request);
 
 static bool should_fail_request(struct hd_struct *part, unsigned int bytes)
 {
-	return part->make_it_fail && should_fail(&fail_make_request, bytes);
+	return part->bdev->bd_make_it_fail &&
+		should_fail(&fail_make_request, bytes);
 }
 
 static int __init fail_make_request_debugfs(void)
diff --git a/block/genhd.c b/block/genhd.c
index a991f0122e53d8..8c734a4e8ff31c 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1276,7 +1276,7 @@ ssize_t part_fail_show(struct device *dev,
 {
 	struct hd_struct *p = dev_to_part(dev);
 
-	return sprintf(buf, "%d\n", p->make_it_fail);
+	return sprintf(buf, "%d\n", p->bdev->bd_make_it_fail);
 }
 
 ssize_t part_fail_store(struct device *dev,
@@ -1287,7 +1287,7 @@ ssize_t part_fail_store(struct device *dev,
 	int i;
 
 	if (count > 0 && sscanf(buf, "%d", &i) > 0)
-		p->make_it_fail = (i == 0) ? 0 : 1;
+		p->bdev->bd_make_it_fail = (i == 0) ? 0 : 1;
 
 	return count;
 }
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index c0591e52d7d7ce..b237f1e4081405 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -52,6 +52,9 @@ struct block_device {
 	struct super_block	*bd_fsfreeze_sb;
 
 	struct partition_meta_info *bd_meta_info;
+#ifdef CONFIG_FAIL_MAKE_REQUEST
+	bool			bd_make_it_fail;
+#endif
 } __randomize_layout;
 
 #define bdev_whole(_bdev) \
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index c2a8cf12c5cab5..5d46ea7be7e4f0 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -56,9 +56,6 @@ struct hd_struct {
 	struct block_device *bdev;
 	struct device __dev;
 	int policy, partno;
-#ifdef CONFIG_FAIL_MAKE_REQUEST
-	int make_it_fail;
-#endif
 	struct rcu_work rcu_work;
 };
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36129.67935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYK-0004MC-Va; Tue, 24 Nov 2020 13:40:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36129.67935; Tue, 24 Nov 2020 13:40:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYK-0004Lj-Fa; Tue, 24 Nov 2020 13:40:16 +0000
Received: by outflank-mailman (input) for mailman id 36129;
 Tue, 24 Nov 2020 13:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPz-0000Qf-69
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:39 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d53ca69c-246f-4718-9082-8a203082a0ac;
 Tue, 24 Nov 2020 13:29:14 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNO-0006gl-Gq; Tue, 24 Nov 2020 13:28:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPz-0000Qf-69
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:39 +0000
X-Inumbo-ID: d53ca69c-246f-4718-9082-8a203082a0ac
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d53ca69c-246f-4718-9082-8a203082a0ac;
	Tue, 24 Nov 2020 13:29:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=5Ieqq2Esw2uH5DQ55aOZnqozMhcSVzcFskydpUrvjJE=; b=KNQXXXOsTK2/JcBFY0rAA5ZlcX
	0rxJfId/S/351sZwDEqvhFA5c390U1uD4rylUhYaRKFg6CHczRQ6dUEDPhs0M7k/Ew/fVtt/Ov3Un
	niAnpMHWjhkbP2ser95j7h5EO/ke/sf/VNnHk/BA61UsjvvFwXXFK3SXoKswP+8eqJNkECNtH2s+3
	dW14guawsNUlVQtDHx4od51EcFJ04Di1WK123CesbbjdLVyDBRRoNvDjgxWoz2zoAtO4eQNpozYzv
	+QPQTB8IQ6tJu6EdUxjALmqoj+dwd+Q7jxcE+yPo/Eu3D7cyiB4M1I2Gc9ofwlW8yUpSWCagJHjxu
	LaUf7O+A==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNO-0006gl-Gq; Tue, 24 Nov 2020 13:28:59 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 38/45] block: switch partition lookup to use struct block_device
Date: Tue, 24 Nov 2020 14:27:44 +0100
Message-Id: <20201124132751.3747337-39-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Use struct block_device to look up partitions on a disk.  This removes
all usage of struct hd_struct from the I/O path, which allows removing
the percpu refcount in struct hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c                        |  4 +-
 block/blk-core.c                   | 66 ++++++++++++++----------------
 block/blk-flush.c                  |  2 +-
 block/blk-mq.c                     |  9 ++--
 block/blk-mq.h                     |  7 ++--
 block/blk.h                        |  4 +-
 block/genhd.c                      | 56 +++++++++++++------------
 block/partitions/core.c            |  7 +---
 drivers/block/drbd/drbd_receiver.c |  2 +-
 drivers/block/drbd/drbd_worker.c   |  2 +-
 drivers/block/zram/zram_drv.c      |  2 +-
 drivers/md/bcache/request.c        |  4 +-
 drivers/md/dm.c                    |  4 +-
 drivers/md/md.c                    |  4 +-
 drivers/nvme/target/admin-cmd.c    | 20 ++++-----
 fs/ext4/super.c                    | 18 +++-----
 fs/ext4/sysfs.c                    | 10 +----
 fs/f2fs/f2fs.h                     |  2 +-
 fs/f2fs/super.c                    |  6 +--
 include/linux/blkdev.h             |  8 ++--
 include/linux/genhd.h              |  4 +-
 include/linux/part_stat.h          | 17 ++++----
 22 files changed, 120 insertions(+), 138 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 669bb47a31988e..ebb18136b86f2f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -608,12 +608,12 @@ void bio_truncate(struct bio *bio, unsigned new_size)
 void guard_bio_eod(struct bio *bio)
 {
 	sector_t maxsector;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	rcu_read_lock();
 	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
 	if (part)
-		maxsector = bdev_nr_sectors(part->bdev);
+		maxsector = bdev_nr_sectors(part);
 	else	
 		maxsector = get_capacity(bio->bi_disk);
 	rcu_read_unlock();
diff --git a/block/blk-core.c b/block/blk-core.c
index 9ea70275fc1cfe..cee568389b7e11 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -666,10 +666,9 @@ static int __init setup_fail_make_request(char *str)
 }
 __setup("fail_make_request=", setup_fail_make_request);
 
-static bool should_fail_request(struct hd_struct *part, unsigned int bytes)
+static bool should_fail_request(struct block_device *part, unsigned int bytes)
 {
-	return part->bdev->bd_make_it_fail &&
-		should_fail(&fail_make_request, bytes);
+	return part->bd_make_it_fail && should_fail(&fail_make_request, bytes);
 }
 
 static int __init fail_make_request_debugfs(void)
@@ -684,7 +683,7 @@ late_initcall(fail_make_request_debugfs);
 
 #else /* CONFIG_FAIL_MAKE_REQUEST */
 
-static inline bool should_fail_request(struct hd_struct *part,
+static inline bool should_fail_request(struct block_device *part,
 					unsigned int bytes)
 {
 	return false;
@@ -692,11 +691,11 @@ static inline bool should_fail_request(struct hd_struct *part,
 
 #endif /* CONFIG_FAIL_MAKE_REQUEST */
 
-static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+static inline bool bio_check_ro(struct bio *bio, struct block_device *part)
 {
 	const int op = bio_op(bio);
 
-	if (part->bdev->bd_read_only && op_is_write(op)) {
+	if (part->bd_read_only && op_is_write(op)) {
 		char b[BDEVNAME_SIZE];
 
 		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
@@ -704,7 +703,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 		WARN_ONCE(1,
 		       "Trying to write to read-only block-device %s (partno %d)\n",
-			bio_devname(bio, b), part->partno);
+			bio_devname(bio, b), part->bd_partno);
 		/* Older lvm-tools actually trigger this */
 		return false;
 	}
@@ -714,8 +713,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 static noinline int should_fail_bio(struct bio *bio)
 {
-	if (should_fail_request(bio->bi_disk->part0->bd_part,
-			bio->bi_iter.bi_size))
+	if (should_fail_request(bio->bi_disk->part0, bio->bi_iter.bi_size))
 		return -EIO;
 	return 0;
 }
@@ -744,7 +742,7 @@ static inline int bio_check_eod(struct bio *bio, sector_t maxsector)
  */
 static inline int blk_partition_remap(struct bio *bio)
 {
-	struct hd_struct *p;
+	struct block_device *p;
 	int ret = -EIO;
 
 	rcu_read_lock();
@@ -757,12 +755,12 @@ static inline int blk_partition_remap(struct bio *bio)
 		goto out;
 
 	if (bio_sectors(bio)) {
-		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
+		if (bio_check_eod(bio, bdev_nr_sectors(p)))
 			goto out;
-		bio->bi_iter.bi_sector += p->bdev->bd_start_sect;
-		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
+		bio->bi_iter.bi_sector += p->bd_start_sect;
+		trace_block_bio_remap(bio->bi_disk->queue, bio, p->bd_dev,
 				      bio->bi_iter.bi_sector -
-				      p->bdev->bd_start_sect);
+				      p->bd_start_sect);
 	}
 	bio->bi_partno = 0;
 	ret = 0;
@@ -832,7 +830,7 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		if (unlikely(blk_partition_remap(bio)))
 			goto end_io;
 	} else {
-		if (unlikely(bio_check_ro(bio, bio->bi_disk->part0->bd_part)))
+		if (unlikely(bio_check_ro(bio, bio->bi_disk->part0)))
 			goto end_io;
 		if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk))))
 			goto end_io;
@@ -1204,7 +1202,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 		return ret;
 
 	if (rq->rq_disk &&
-	    should_fail_request(rq->rq_disk->part0->bd_part, blk_rq_bytes(rq)))
+	    should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq)))
 		return BLK_STS_IOERR;
 
 	if (blk_crypto_insert_cloned_request(rq))
@@ -1263,17 +1261,18 @@ unsigned int blk_rq_err_bytes(const struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
 
-static void update_io_ticks(struct hd_struct *part, unsigned long now, bool end)
+static void update_io_ticks(struct block_device *part, unsigned long now,
+		bool end)
 {
 	unsigned long stamp;
 again:
-	stamp = READ_ONCE(part->bdev->bd_stamp);
+	stamp = READ_ONCE(part->bd_stamp);
 	if (unlikely(stamp != now)) {
-		if (likely(cmpxchg(&part->bdev->bd_stamp, stamp, now) == stamp))
+		if (likely(cmpxchg(&part->bd_stamp, stamp, now) == stamp))
 			__part_stat_add(part, io_ticks, end ? now - stamp : 1);
 	}
-	if (part->partno) {
-		part = part_to_disk(part)->part0->bd_part;
+	if (part->bd_partno) {
+		part = bdev_whole(part);
 		goto again;
 	}
 }
@@ -1282,11 +1281,9 @@ static void blk_account_io_completion(struct request *req, unsigned int bytes)
 {
 	if (req->part && blk_do_io_stat(req)) {
 		const int sgrp = op_stat_group(req_op(req));
-		struct hd_struct *part;
 
 		part_stat_lock();
-		part = req->part;
-		part_stat_add(part, sectors[sgrp], bytes >> 9);
+		part_stat_add(req->part, sectors[sgrp], bytes >> 9);
 		part_stat_unlock();
 	}
 }
@@ -1301,14 +1298,11 @@ void blk_account_io_done(struct request *req, u64 now)
 	if (req->part && blk_do_io_stat(req) &&
 	    !(req->rq_flags & RQF_FLUSH_SEQ)) {
 		const int sgrp = op_stat_group(req_op(req));
-		struct hd_struct *part;
 
 		part_stat_lock();
-		part = req->part;
-
-		update_io_ticks(part, jiffies, true);
-		part_stat_inc(part, ios[sgrp]);
-		part_stat_add(part, nsecs[sgrp], now - req->start_time_ns);
+		update_io_ticks(req->part, jiffies, true);
+		part_stat_inc(req->part, ios[sgrp]);
+		part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
 		part_stat_unlock();
 	}
 }
@@ -1325,7 +1319,7 @@ void blk_account_io_start(struct request *rq)
 	part_stat_unlock();
 }
 
-static unsigned long __part_start_io_acct(struct hd_struct *part,
+static unsigned long __part_start_io_acct(struct block_device *part,
 					  unsigned int sectors, unsigned int op)
 {
 	const int sgrp = op_stat_group(op);
@@ -1341,7 +1335,7 @@ static unsigned long __part_start_io_acct(struct hd_struct *part,
 	return now;
 }
 
-unsigned long part_start_io_acct(struct gendisk *disk, struct hd_struct **part,
+unsigned long part_start_io_acct(struct gendisk *disk, struct block_device **part,
 				 struct bio *bio)
 {
 	*part = disk_map_sector_rcu(disk, bio->bi_iter.bi_sector);
@@ -1353,11 +1347,11 @@ EXPORT_SYMBOL_GPL(part_start_io_acct);
 unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 				 unsigned int op)
 {
-	return __part_start_io_acct(disk->part0->bd_part, sectors, op);
+	return __part_start_io_acct(disk->part0, sectors, op);
 }
 EXPORT_SYMBOL(disk_start_io_acct);
 
-static void __part_end_io_acct(struct hd_struct *part, unsigned int op,
+static void __part_end_io_acct(struct block_device *part, unsigned int op,
 			       unsigned long start_time)
 {
 	const int sgrp = op_stat_group(op);
@@ -1371,7 +1365,7 @@ static void __part_end_io_acct(struct hd_struct *part, unsigned int op,
 	part_stat_unlock();
 }
 
-void part_end_io_acct(struct hd_struct *part, struct bio *bio,
+void part_end_io_acct(struct block_device *part, struct bio *bio,
 		      unsigned long start_time)
 {
 	__part_end_io_acct(part, bio_op(bio), start_time);
@@ -1381,7 +1375,7 @@ EXPORT_SYMBOL_GPL(part_end_io_acct);
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		      unsigned long start_time)
 {
-	__part_end_io_acct(disk->part0->bd_part, op, start_time);
+	__part_end_io_acct(disk->part0, op, start_time);
 }
 EXPORT_SYMBOL(disk_end_io_acct);
 
diff --git a/block/blk-flush.c b/block/blk-flush.c
index fcd0a60574dff8..9507dcdd58814c 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -139,7 +139,7 @@ static void blk_flush_queue_rq(struct request *rq, bool add_front)
 
 static void blk_account_io_flush(struct request *rq)
 {
-	struct hd_struct *part = rq->rq_disk->part0->bd_part;
+	struct block_device *part = rq->rq_disk->part0;
 
 	part_stat_lock();
 	part_stat_inc(part, ios[STAT_FLUSH]);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 55bcee5dc0320c..a2593748fa5342 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -95,7 +95,7 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 }
 
 struct mq_inflight {
-	struct hd_struct *part;
+	struct block_device *part;
 	unsigned int inflight[2];
 };
 
@@ -111,7 +111,8 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
 	return true;
 }
 
-unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
+unsigned int blk_mq_in_flight(struct request_queue *q,
+		struct block_device *part)
 {
 	struct mq_inflight mi = { .part = part };
 
@@ -120,8 +121,8 @@ unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
 	return mi.inflight[0] + mi.inflight[1];
 }
 
-void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
-			 unsigned int inflight[2])
+void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
+		unsigned int inflight[2])
 {
 	struct mq_inflight mi = { .part = part };
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index a52703c98b7736..c696515766c780 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -182,9 +182,10 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
 	return hctx->nr_ctx && hctx->tags;
 }
 
-unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part);
-void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
-			 unsigned int inflight[2]);
+unsigned int blk_mq_in_flight(struct request_queue *q,
+		struct block_device *part);
+void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
+		unsigned int inflight[2]);
 
 static inline void blk_mq_put_dispatch_budget(struct request_queue *q)
 {
diff --git a/block/blk.h b/block/blk.h
index 32ac41f7557fcc..d5bf8f3a078186 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -215,7 +215,7 @@ static inline void elevator_exit(struct request_queue *q,
 	__elevator_exit(q, e);
 }
 
-struct hd_struct *__disk_get_part(struct gendisk *disk, int partno);
+struct block_device *__disk_get_part(struct gendisk *disk, int partno);
 
 ssize_t part_size_show(struct device *dev, struct device_attribute *attr,
 		char *buf);
@@ -348,7 +348,7 @@ void blk_queue_free_zone_bitmaps(struct request_queue *q);
 static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
 #endif
 
-struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
+struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
 int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
 void blk_free_devt(dev_t devt);
diff --git a/block/genhd.c b/block/genhd.c
index 3bbf6d3a69ec63..197120c0c60f23 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -116,7 +116,7 @@ static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
 	}
 }
 
-static unsigned int part_in_flight(struct hd_struct *part)
+static unsigned int part_in_flight(struct block_device *part)
 {
 	unsigned int inflight = 0;
 	int cpu;
@@ -131,7 +131,8 @@ static unsigned int part_in_flight(struct hd_struct *part)
 	return inflight;
 }
 
-static void part_in_flight_rw(struct hd_struct *part, unsigned int inflight[2])
+static void part_in_flight_rw(struct block_device *part,
+		unsigned int inflight[2])
 {
 	int cpu;
 
@@ -147,7 +148,7 @@ static void part_in_flight_rw(struct hd_struct *part, unsigned int inflight[2])
 		inflight[1] = 0;
 }
 
-struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
+struct block_device *__disk_get_part(struct gendisk *disk, int partno)
 {
 	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
 
@@ -172,15 +173,18 @@ struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
  */
 struct hd_struct *disk_get_part(struct gendisk *disk, int partno)
 {
-	struct hd_struct *part;
+	struct block_device *part;
 
 	rcu_read_lock();
 	part = __disk_get_part(disk, partno);
-	if (part)
-		get_device(part_to_dev(part));
-	rcu_read_unlock();
+	if (!part) {
+		rcu_read_unlock();
+		return NULL;
+	}
 
-	return part;
+	get_device(part_to_dev(part->bd_part));
+	rcu_read_unlock();
+	return part->bd_part;
 }
 
 /**
@@ -254,19 +258,19 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 
 	/* iterate to the next partition */
 	for (; piter->idx != end; piter->idx += inc) {
-		struct hd_struct *part;
+		struct block_device *part;
 
 		part = rcu_dereference(ptbl->part[piter->idx]);
 		if (!part)
 			continue;
-		if (!bdev_nr_sectors(part->bdev) &&
+		if (!bdev_nr_sectors(part) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
 		      piter->idx == 0))
 			continue;
 
-		get_device(part_to_dev(part));
-		piter->part = part;
+		get_device(part_to_dev(part->bd_part));
+		piter->part = part->bd_part;
 		piter->idx += inc;
 		break;
 	}
@@ -293,10 +297,10 @@ void disk_part_iter_exit(struct disk_part_iter *piter)
 }
 EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 
-static inline int sector_in_part(struct hd_struct *part, sector_t sector)
+static inline int sector_in_part(struct block_device *part, sector_t sector)
 {
-	return part->bdev->bd_start_sect <= sector &&
-		sector < part->bdev->bd_start_sect + bdev_nr_sectors(part->bdev);
+	return part->bd_start_sect <= sector &&
+		sector < part->bd_start_sect + bdev_nr_sectors(part);
 }
 
 /**
@@ -314,10 +318,10 @@ static inline int sector_in_part(struct hd_struct *part, sector_t sector)
  * Found partition on success, part0 is returned if no partition matches
  * or the matched partition is being deleted.
  */
-struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
+struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
 {
 	struct disk_part_tbl *ptbl;
-	struct hd_struct *part;
+	struct block_device *part;
 	int i;
 
 	rcu_read_lock();
@@ -336,7 +340,7 @@ struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
 		}
 	}
 
-	part = disk->part0->bd_part;
+	part = disk->part0;
 out_unlock:
 	rcu_read_unlock();
 	return part;
@@ -866,7 +870,7 @@ void del_gendisk(struct gendisk *disk)
 	kobject_put(disk->part0->bd_holder_dir);
 	kobject_put(disk->slave_dir);
 
-	part_stat_set_all(disk->part0->bd_part, 0);
+	part_stat_set_all(disk->part0, 0);
 	disk->part0->bd_stamp = 0;
 	if (!sysfs_deprecated)
 		sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
@@ -1173,9 +1177,9 @@ ssize_t part_stat_show(struct device *dev,
 
 	part_stat_read_all(p, &stat);
 	if (queue_is_mq(q))
-		inflight = blk_mq_in_flight(q, p);
+		inflight = blk_mq_in_flight(q, p->bdev);
 	else
-		inflight = part_in_flight(p);
+		inflight = part_in_flight(p->bdev);
 
 	return sprintf(buf,
 		"%8lu %8lu %8llu %8u "
@@ -1215,9 +1219,9 @@ ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
 	unsigned int inflight[2];
 
 	if (queue_is_mq(q))
-		blk_mq_in_flight_rw(q, p, inflight);
+		blk_mq_in_flight_rw(q, p->bdev, inflight);
 	else
-		part_in_flight_rw(p, inflight);
+		part_in_flight_rw(p->bdev, inflight);
 
 	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
 }
@@ -1490,9 +1494,9 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 	while ((hd = disk_part_iter_next(&piter))) {
 		part_stat_read_all(hd, &stat);
 		if (queue_is_mq(gp->queue))
-			inflight = blk_mq_in_flight(gp->queue, hd);
+			inflight = blk_mq_in_flight(gp->queue, hd->bdev);
 		else
-			inflight = part_in_flight(hd);
+			inflight = part_in_flight(hd->bdev);
 
 		seq_printf(seqf, "%4d %7d %s "
 			   "%lu %lu %lu %u "
@@ -1610,7 +1614,7 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 		goto out_bdput;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
-	rcu_assign_pointer(ptbl->part[0], disk->part0->bd_part);
+	rcu_assign_pointer(ptbl->part[0], disk->part0);
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 4b0352cb29e132..8beab9e7727e27 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -298,12 +298,9 @@ void delete_partition(struct hd_struct *part)
 	struct disk_part_tbl *ptbl =
 		rcu_dereference_protected(disk->part_tbl, 1);
 
-	/*
-	 * ->part_tbl is referenced in this part's release handler, so
-	 *  we have to hold the disk device
-	 */
 	rcu_assign_pointer(ptbl->part[part->partno], NULL);
 	rcu_assign_pointer(ptbl->last_lookup, NULL);
+
 	kobject_put(part->bdev->bd_holder_dir);
 	device_del(part_to_dev(part));
 
@@ -421,7 +418,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 	/* everything is up and running, commence */
 	bdev_add(bdev, devt);
-	rcu_assign_pointer(ptbl->part[partno], p);
+	rcu_assign_pointer(ptbl->part[partno], bdev);
 
 	/* suppress uevent if the disk suppresses it */
 	if (!dev_get_uevent_suppress(ddev))
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 9e5c2fdfda3629..09c86ef3f0fd93 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -2802,7 +2802,7 @@ bool drbd_rs_c_min_rate_throttle(struct drbd_device *device)
 	if (c_min_rate == 0)
 		return false;
 
-	curr_events = (int)part_stat_read_accum(disk->part0->bd_part, sectors) -
+	curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
 			atomic_read(&device->rs_sect_ev);
 
 	if (atomic_read(&device->ap_actlog_cnt)
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index 343f56b86bb766..02044ab7f767d5 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -1679,7 +1679,7 @@ void drbd_rs_controller_reset(struct drbd_device *device)
 	atomic_set(&device->rs_sect_ev, 0);
 	device->rs_in_flight = 0;
 	device->rs_last_events =
-		(int)part_stat_read_accum(disk->part0->bd_part, sectors);
+		(int)part_stat_read_accum(disk->part0, sectors);
 
 	/* Updating the RCU protected object in place is necessary since
 	   this function gets called from atomic context.
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 153858734cd47d..01757f9578dcb8 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1687,7 +1687,7 @@ static void zram_reset_device(struct zram *zram)
 	zram->disksize = 0;
 
 	set_capacity_and_notify(zram->disk, 0);
-	part_stat_set_all(zram->disk->part0->bd_part, 0);
+	part_stat_set_all(zram->disk->part0, 0);
 
 	up_write(&zram->init_lock);
 	/* I/O operation under all of CPU are done so let's free */
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index afac8d07c1bd00..85b1f2a9b72d68 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -475,7 +475,7 @@ struct search {
 	unsigned int		read_dirty_data:1;
 	unsigned int		cache_missed:1;
 
-	struct hd_struct	*part;
+	struct block_device	*part;
 	unsigned long		start_time;
 
 	struct btree_op		op;
@@ -1073,7 +1073,7 @@ struct detached_dev_io_private {
 	unsigned long		start_time;
 	bio_end_io_t		*bi_end_io;
 	void			*bi_private;
-	struct hd_struct	*part;
+	struct block_device	*part;
 };
 
 static void detached_dev_end_io(struct bio *bio)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 1b2db4d530ea71..176adcff56b380 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1607,7 +1607,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
 				 * (by eliminating DM's splitting and just using bio_split)
 				 */
 				part_stat_lock();
-				__dm_part_stat_sub(dm_disk(md)->part0->bd_part,
+				__dm_part_stat_sub(dm_disk(md)->part0,
 						   sectors[op_stat_group(bio_op(bio))], ci.sector_count);
 				part_stat_unlock();
 
@@ -2242,7 +2242,7 @@ EXPORT_SYMBOL_GPL(dm_put);
 static bool md_in_flight_bios(struct mapped_device *md)
 {
 	int cpu;
-	struct hd_struct *part = dm_disk(md)->part0->bd_part;
+	struct block_device *part = dm_disk(md)->part0;
 	long sum = 0;
 
 	for_each_possible_cpu(cpu) {
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 3696c2d77a4dd7..0065736f05b428 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -464,7 +464,7 @@ struct md_io {
 	bio_end_io_t *orig_bi_end_io;
 	void *orig_bi_private;
 	unsigned long start_time;
-	struct hd_struct *part;
+	struct block_device *part;
 };
 
 static void md_end_io(struct bio *bio)
@@ -8441,7 +8441,7 @@ static int is_mddev_idle(struct mddev *mddev, int init)
 	rcu_read_lock();
 	rdev_for_each_rcu(rdev, mddev) {
 		struct gendisk *disk = rdev->bdev->bd_disk;
-		curr_events = (int)part_stat_read_accum(disk->part0->bd_part, sectors) -
+		curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
 			      atomic_read(&disk->sync_io);
 		/* sync IO will cause sync_io to increase before the disk_stats
 		 * as sync_io is counted when a request starts, and
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index dca34489a1dc9e..8d90235e4fcc5a 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -89,12 +89,12 @@ static u16 nvmet_get_smart_log_nsid(struct nvmet_req *req,
 	if (!ns->bdev)
 		goto out;
 
-	host_reads = part_stat_read(ns->bdev->bd_part, ios[READ]);
-	data_units_read = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
-		sectors[READ]), 1000);
-	host_writes = part_stat_read(ns->bdev->bd_part, ios[WRITE]);
-	data_units_written = DIV_ROUND_UP(part_stat_read(ns->bdev->bd_part,
-		sectors[WRITE]), 1000);
+	host_reads = part_stat_read(ns->bdev, ios[READ]);
+	data_units_read =
+		DIV_ROUND_UP(part_stat_read(ns->bdev, sectors[READ]), 1000);
+	host_writes = part_stat_read(ns->bdev, ios[WRITE]);
+	data_units_written =
+		DIV_ROUND_UP(part_stat_read(ns->bdev, sectors[WRITE]), 1000);
 
 	put_unaligned_le64(host_reads, &slog->host_reads[0]);
 	put_unaligned_le64(data_units_read, &slog->data_units_read[0]);
@@ -120,12 +120,12 @@ static u16 nvmet_get_smart_log_all(struct nvmet_req *req,
 		/* we don't have the right data for file backed ns */
 		if (!ns->bdev)
 			continue;
-		host_reads += part_stat_read(ns->bdev->bd_part, ios[READ]);
+		host_reads += part_stat_read(ns->bdev, ios[READ]);
 		data_units_read += DIV_ROUND_UP(
-			part_stat_read(ns->bdev->bd_part, sectors[READ]), 1000);
-		host_writes += part_stat_read(ns->bdev->bd_part, ios[WRITE]);
+			part_stat_read(ns->bdev, sectors[READ]), 1000);
+		host_writes += part_stat_read(ns->bdev, ios[WRITE]);
 		data_units_written += DIV_ROUND_UP(
-			part_stat_read(ns->bdev->bd_part, sectors[WRITE]), 1000);
+			part_stat_read(ns->bdev, sectors[WRITE]), 1000);
 	}
 
 	put_unaligned_le64(host_reads, &slog->host_reads[0]);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 6633b20224d509..c303a0ff0b1701 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4048,9 +4048,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	sbi->s_sb = sb;
 	sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
 	sbi->s_sb_block = sb_block;
-	if (sb->s_bdev->bd_part)
-		sbi->s_sectors_written_start =
-			part_stat_read(sb->s_bdev->bd_part, sectors[STAT_WRITE]);
+	sbi->s_sectors_written_start =
+		part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
 
 	/* Cleanup superblock name */
 	strreplace(sb->s_id, '/', '!');
@@ -5509,15 +5508,10 @@ static int ext4_commit_super(struct super_block *sb, int sync)
 	 */
 	if (!(sb->s_flags & SB_RDONLY))
 		ext4_update_tstamp(es, s_wtime);
-	if (sb->s_bdev->bd_part)
-		es->s_kbytes_written =
-			cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
-			    ((part_stat_read(sb->s_bdev->bd_part,
-					     sectors[STAT_WRITE]) -
-			      EXT4_SB(sb)->s_sectors_written_start) >> 1));
-	else
-		es->s_kbytes_written =
-			cpu_to_le64(EXT4_SB(sb)->s_kbytes_written);
+	es->s_kbytes_written =
+		cpu_to_le64(EXT4_SB(sb)->s_kbytes_written +
+		    ((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
+		      EXT4_SB(sb)->s_sectors_written_start) >> 1));
 	if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeclusters_counter))
 		ext4_free_blocks_count_set(es,
 			EXT4_C2B(EXT4_SB(sb), percpu_counter_sum_positive(
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index 4e27fe6ed3ae6a..075aa3a19ff5f1 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -62,11 +62,8 @@ static ssize_t session_write_kbytes_show(struct ext4_sb_info *sbi, char *buf)
 {
 	struct super_block *sb = sbi->s_buddy_cache->i_sb;
 
-	if (!sb->s_bdev->bd_part)
-		return snprintf(buf, PAGE_SIZE, "0\n");
 	return snprintf(buf, PAGE_SIZE, "%lu\n",
-			(part_stat_read(sb->s_bdev->bd_part,
-					sectors[STAT_WRITE]) -
+			(part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
 			 sbi->s_sectors_written_start) >> 1);
 }
 
@@ -74,12 +71,9 @@ static ssize_t lifetime_write_kbytes_show(struct ext4_sb_info *sbi, char *buf)
 {
 	struct super_block *sb = sbi->s_buddy_cache->i_sb;
 
-	if (!sb->s_bdev->bd_part)
-		return snprintf(buf, PAGE_SIZE, "0\n");
 	return snprintf(buf, PAGE_SIZE, "%llu\n",
 			(unsigned long long)(sbi->s_kbytes_written +
-			((part_stat_read(sb->s_bdev->bd_part,
-					 sectors[STAT_WRITE]) -
+			((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) -
 			  EXT4_SB(sb)->s_sectors_written_start) >> 1)));
 }
 
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index cb700d79729680..49681a8d2b14a5 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1675,7 +1675,7 @@ static inline bool f2fs_is_multi_device(struct f2fs_sb_info *sbi)
  * and the return value is in kbytes. s is of struct f2fs_sb_info.
  */
 #define BD_PART_WRITTEN(s)						 \
-(((u64)part_stat_read((s)->sb->s_bdev->bd_part, sectors[STAT_WRITE]) -   \
+	(((u64)part_stat_read((s)->sb->s_bdev, sectors[STAT_WRITE]) -   \
 		(s)->sectors_written_start) >> 1)
 
 static inline void f2fs_update_time(struct f2fs_sb_info *sbi, int type)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index d4e7fab352bacb..af9f449da64bac 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3700,10 +3700,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	}
 
 	/* For write statistics */
-	if (sb->s_bdev->bd_part)
-		sbi->sectors_written_start =
-			(u64)part_stat_read(sb->s_bdev->bd_part,
-					    sectors[STAT_WRITE]);
+	sbi->sectors_written_start =
+		(u64)part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
 
 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 8fc0b266610f7f..0e83989b9678c3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -191,7 +191,7 @@ struct request {
 	};
 
 	struct gendisk *rq_disk;
-	struct hd_struct *part;
+	struct block_device *part;
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 	/* Time that the first bio started allocating this request. */
 	u64 alloc_time_ns;
@@ -1943,9 +1943,9 @@ unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		unsigned long start_time);
 
-unsigned long part_start_io_acct(struct gendisk *disk, struct hd_struct **part,
-				 struct bio *bio);
-void part_end_io_acct(struct hd_struct *part, struct bio *bio,
+unsigned long part_start_io_acct(struct gendisk *disk,
+		struct block_device **part, struct bio *bio);
+void part_end_io_acct(struct block_device *part, struct bio *bio,
 		      unsigned long start_time);
 
 /**
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 6e16c264439bdb..77443a1031e373 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -131,8 +131,8 @@ enum {
 struct disk_part_tbl {
 	struct rcu_head rcu_head;
 	int len;
-	struct hd_struct __rcu *last_lookup;
-	struct hd_struct __rcu *part[];
+	struct block_device __rcu *last_lookup;
+	struct block_device __rcu *part[];
 };
 
 struct disk_events;
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index 680de036691ef9..d2558121d48c00 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -25,26 +25,26 @@ struct disk_stats {
 #define part_stat_unlock()	preempt_enable()
 
 #define part_stat_get_cpu(part, field, cpu)				\
-	(per_cpu_ptr((part)->bdev->bd_stats, (cpu))->field)
+	(per_cpu_ptr((part)->bd_stats, (cpu))->field)
 
 #define part_stat_get(part, field)					\
 	part_stat_get_cpu(part, field, smp_processor_id())
 
 #define part_stat_read(part, field)					\
 ({									\
-	typeof((part)->bdev->bd_stats->field) res = 0;			\
+	typeof((part)->bd_stats->field) res = 0;			\
 	unsigned int _cpu;						\
 	for_each_possible_cpu(_cpu)					\
-		res += per_cpu_ptr((part)->bdev->bd_stats, _cpu)->field; \
+		res += per_cpu_ptr((part)->bd_stats, _cpu)->field; \
 	res;								\
 })
 
-static inline void part_stat_set_all(struct hd_struct *part, int value)
+static inline void part_stat_set_all(struct block_device *part, int value)
 {
 	int i;
 
 	for_each_possible_cpu(i)
-		memset(per_cpu_ptr(part->bdev->bd_stats, i), value,
+		memset(per_cpu_ptr(part->bd_stats, i), value,
 				sizeof(struct disk_stats));
 }
 
@@ -54,13 +54,12 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 	 part_stat_read(part, field[STAT_DISCARD]))
 
 #define __part_stat_add(part, field, addnd)				\
-	__this_cpu_add((part)->bdev->bd_stats->field, addnd)
+	__this_cpu_add((part)->bd_stats->field, addnd)
 
 #define part_stat_add(part, field, addnd)	do {			\
 	__part_stat_add((part), field, addnd);				\
-	if ((part)->partno)						\
-		__part_stat_add(part_to_disk((part))->part0->bd_part,	\
-			field, addnd); \
+	if ((part)->bd_partno)						\
+		__part_stat_add(bdev_whole(part), field, addnd);	\
 } while (0)
 
 #define part_stat_dec(part, field)					\
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36130.67944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYL-0004Np-JI; Tue, 24 Nov 2020 13:40:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36130.67944; Tue, 24 Nov 2020 13:40:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYL-0004NI-4f; Tue, 24 Nov 2020 13:40:17 +0000
Received: by outflank-mailman (input) for mailman id 36130;
 Tue, 24 Nov 2020 13:40:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYP6-0000Qf-4b
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:44 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac69b391-4604-41e3-b476-dba3f133e5c8;
 Tue, 24 Nov 2020 13:28:56 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYN8-0006dt-TQ; Tue, 24 Nov 2020 13:28:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYP6-0000Qf-4b
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:44 +0000
X-Inumbo-ID: ac69b391-4604-41e3-b476-dba3f133e5c8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ac69b391-4604-41e3-b476-dba3f133e5c8;
	Tue, 24 Nov 2020 13:28:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=s2bj0ka9tzVXa7iO1xuCgon+nWLgrfiF05EXz6Wq7iQ=; b=jo7jitVZLk7dE0nFi35W7LDPJN
	tFqdEul2zhhIuA5tOCbi5aycGTNvy8TrCwjvdqjWRV+CbsHXj1uHR9YGBIHpgX5k1OrWEEjG1OBGS
	05uqltCoKW08gYj1Exy9DRh4S20GKqktTwAhDBgXdpa6tr3gbKlx7crJ/yMgRgYb5bj1t1DPrZcGR
	/hwdibm8NOcUYLKg0vLiJpGjPeRHco2g+1GJeSTgoyu6wnqw78bcOPXI+iGIr/LCTqUZw/z9tv92S
	lld+DSHqPLq7Hfkngg8JscAjZbW0u/H0x3XHbCsyaVQZAi5fsRHdRkbzm4rSxa6gS65lVwxUXtiGV
	AvXj9emg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYN8-0006dt-TQ; Tue, 24 Nov 2020 13:28:43 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 29/45] block: initialize struct block_device in bdev_alloc
Date: Tue, 24 Nov 2020 14:27:35 +0100
Message-Id: <20201124132751.3747337-30-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Don't play tricks with slab constructors, as bdev structures tend not to
get reused very much, and this makes the code a lot less error-prone.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 43a0fda982c879..1e5c6d0eb92677 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -784,20 +784,11 @@ static void bdev_free_inode(struct inode *inode)
 	kmem_cache_free(bdev_cachep, BDEV_I(inode));
 }
 
-static void init_once(void *foo)
+static void init_once(void *data)
 {
-	struct bdev_inode *ei = (struct bdev_inode *) foo;
-	struct block_device *bdev = &ei->bdev;
+	struct bdev_inode *ei = data;
 
-	memset(bdev, 0, sizeof(*bdev));
-	mutex_init(&bdev->bd_mutex);
-#ifdef CONFIG_SYSFS
-	INIT_LIST_HEAD(&bdev->bd_holder_disks);
-#endif
-	bdev->bd_bdi = &noop_backing_dev_info;
 	inode_init_once(&ei->vfs_inode);
-	/* Initialize mutex for freeze. */
-	mutex_init(&bdev->bd_fsfreeze_mutex);
 }
 
 static void bdev_evict_inode(struct inode *inode)
@@ -872,12 +863,17 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 	inode->i_data.a_ops = &def_blk_aops;
 
 	bdev = I_BDEV(inode);
+	memset(bdev, 0, sizeof(*bdev));
+	mutex_init(&bdev->bd_mutex);
+	mutex_init(&bdev->bd_fsfreeze_mutex);
 	spin_lock_init(&bdev->bd_size_lock);
 	bdev->bd_disk = disk;
 	bdev->bd_partno = partno;
-	bdev->bd_super = NULL;
 	bdev->bd_inode = inode;
-	bdev->bd_part_count = 0;
+	bdev->bd_bdi = &noop_backing_dev_info;
+#ifdef CONFIG_SYSFS
+	INIT_LIST_HEAD(&bdev->bd_holder_disks);
+#endif
 	return bdev;
 }
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36131.67957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYM-0004Pf-9S; Tue, 24 Nov 2020 13:40:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36131.67957; Tue, 24 Nov 2020 13:40:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYL-0004P4-Ph; Tue, 24 Nov 2020 13:40:17 +0000
Received: by outflank-mailman (input) for mailman id 36131;
 Tue, 24 Nov 2020 13:40:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOh-0000Qf-3x
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:19 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb682eb6-e81b-4fab-9c78-d20579156464;
 Tue, 24 Nov 2020 13:28:48 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMx-0006bA-OT; Tue, 24 Nov 2020 13:28:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOh-0000Qf-3x
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:19 +0000
X-Inumbo-ID: bb682eb6-e81b-4fab-9c78-d20579156464
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bb682eb6-e81b-4fab-9c78-d20579156464;
	Tue, 24 Nov 2020 13:28:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Mhpy0Cb59wkbzThtZwDnrbLXD1rWczYtJDTvhh8SMU0=; b=Ea+9q0/5U1C3lxpYG+cp8W8+yq
	HDJXNs1xkR6d5Q3Tqe8DzJMoqEQ7/6jzh7BdzDFOCgM6eJtZ/NdJBDr4wa1fg4oXr74Y5kroSq79K
	VzT26zUATIM8C+jMZIcZTGU/6xcbiiu6NMExdkpxZExa8l29GhQ0Qte1UYrKyFBP9keFbOOHO8Do8
	pg6wptGVpopV7o1YV4zwjSA2vi6DqZrN+W1PPJvGBKFgFYLTf6Hhc/xGBLAwEP1JaP8w6JU559eQE
	bqw7WPqgNie7PhEXK192YC8HvtakiLkYEPYmuhZflhhpqi33qULhu7LUJVG5R4zXadf40txQOM3eh
	EzrjvHUQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMx-0006bA-OT; Tue, 24 Nov 2020 13:28:32 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 24/45] blk-cgroup: stop abusing get_gendisk
Date: Tue, 24 Nov 2020 14:27:30 +0100
Message-Id: <20201124132751.3747337-25-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Properly open the device instead of reaching into block layer internals
with get_gendisk.  Note that this uses FMODE_NDELAY without either
FMODE_READ or FMODE_WRITE, which is a special open mode that allows
opening without media access, thus avoiding unexpected interactions
especially on removable media.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c         | 42 +++++++++++++++++++-------------------
 block/blk-iocost.c         | 36 ++++++++++++++++----------------
 include/linux/blk-cgroup.h |  4 ++--
 3 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 54fbe1e80cc41a..23437b96ea41e6 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -556,22 +556,22 @@ static struct blkcg_gq *blkg_lookup_check(struct blkcg *blkcg,
 }
 
 /**
- * blkg_conf_prep - parse and prepare for per-blkg config update
+ * blkcg_conf_open_bdev - parse and open bdev for per-blkg config update
  * @inputp: input string pointer
  *
  * Parse the device node prefix part, MAJ:MIN, of per-blkg config update
- * from @input and get and return the matching gendisk.  *@inputp is
+ * from @input and get and return the matching bdev.  *@inputp is
  * updated to point past the device node prefix.  Returns an ERR_PTR()
  * value on error.
  *
  * Use this function iff blkg_conf_prep() can't be used for some reason.
  */
-struct gendisk *blkcg_conf_get_disk(char **inputp)
+struct block_device *blkcg_conf_open_bdev(char **inputp)
 {
 	char *input = *inputp;
 	unsigned int major, minor;
-	struct gendisk *disk;
-	int key_len, part;
+	struct block_device *bdev;
+	int key_len;
 
 	if (sscanf(input, "%u:%u%n", &major, &minor, &key_len) != 2)
 		return ERR_PTR(-EINVAL);
@@ -581,16 +581,16 @@ struct gendisk *blkcg_conf_get_disk(char **inputp)
 		return ERR_PTR(-EINVAL);
 	input = skip_spaces(input);
 
-	disk = get_gendisk(MKDEV(major, minor), &part);
-	if (!disk)
+	bdev = blkdev_get_by_dev(MKDEV(major, minor), FMODE_NDELAY, NULL);
+	if (!bdev)
 		return ERR_PTR(-ENODEV);
-	if (part) {
-		put_disk_and_module(disk);
+	if (bdev_is_partition(bdev)) {
+		blkdev_put(bdev, FMODE_NDELAY);
 		return ERR_PTR(-ENODEV);
 	}
 
 	*inputp = input;
-	return disk;
+	return bdev;
 }
 
 /**
@@ -607,18 +607,18 @@ struct gendisk *blkcg_conf_get_disk(char **inputp)
  */
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   char *input, struct blkg_conf_ctx *ctx)
-	__acquires(rcu) __acquires(&disk->queue->queue_lock)
+	__acquires(rcu) __acquires(&bdev->bd_disk->queue->queue_lock)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct request_queue *q;
 	struct blkcg_gq *blkg;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_open_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	q = disk->queue;
+	q = bdev->bd_disk->queue;
 
 	rcu_read_lock();
 	spin_lock_irq(&q->queue_lock);
@@ -689,7 +689,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 			goto success;
 	}
 success:
-	ctx->disk = disk;
+	ctx->bdev = bdev;
 	ctx->blkg = blkg;
 	ctx->body = input;
 	return 0;
@@ -700,7 +700,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 	spin_unlock_irq(&q->queue_lock);
 	rcu_read_unlock();
 fail:
-	put_disk_and_module(disk);
+	blkdev_put(bdev, FMODE_NDELAY);
 	/*
 	 * If queue was bypassing, we should retry.  Do so after a
 	 * short msleep().  It isn't strictly necessary but queue
@@ -723,11 +723,11 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep);
  * with blkg_conf_prep().
  */
 void blkg_conf_finish(struct blkg_conf_ctx *ctx)
-	__releases(&ctx->disk->queue->queue_lock) __releases(rcu)
+	__releases(&ctx->bdev->bd_disk->queue->queue_lock) __releases(rcu)
 {
-	spin_unlock_irq(&ctx->disk->queue->queue_lock);
+	spin_unlock_irq(&ctx->bdev->bd_disk->queue->queue_lock);
 	rcu_read_unlock();
-	put_disk_and_module(ctx->disk);
+	blkdev_put(ctx->bdev, FMODE_NDELAY);
 }
 EXPORT_SYMBOL_GPL(blkg_conf_finish);
 
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index bbe86d1199dc5b..9f219718e9813c 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -3120,23 +3120,23 @@ static const match_table_t qos_tokens = {
 static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 			     size_t nbytes, loff_t off)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct ioc *ioc;
 	u32 qos[NR_QOS_PARAMS];
 	bool enable, user;
 	char *p;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_open_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(disk->queue);
+	ioc = q_to_ioc(bdev->bd_disk->queue);
 	if (!ioc) {
-		ret = blk_iocost_init(disk->queue);
+		ret = blk_iocost_init(bdev->bd_disk->queue);
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(disk->queue);
+		ioc = q_to_ioc(bdev->bd_disk->queue);
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3231,12 +3231,12 @@ static ssize_t ioc_qos_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
-	put_disk_and_module(disk);
+	blkdev_put(bdev, FMODE_NDELAY);
 	return nbytes;
 einval:
 	ret = -EINVAL;
 err:
-	put_disk_and_module(disk);
+	blkdev_put(bdev, FMODE_NDELAY);
 	return ret;
 }
 
@@ -3287,23 +3287,23 @@ static const match_table_t i_lcoef_tokens = {
 static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 				    size_t nbytes, loff_t off)
 {
-	struct gendisk *disk;
+	struct block_device *bdev;
 	struct ioc *ioc;
 	u64 u[NR_I_LCOEFS];
 	bool user;
 	char *p;
 	int ret;
 
-	disk = blkcg_conf_get_disk(&input);
-	if (IS_ERR(disk))
-		return PTR_ERR(disk);
+	bdev = blkcg_conf_open_bdev(&input);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 
-	ioc = q_to_ioc(disk->queue);
+	ioc = q_to_ioc(bdev->bd_disk->queue);
 	if (!ioc) {
-		ret = blk_iocost_init(disk->queue);
+		ret = blk_iocost_init(bdev->bd_disk->queue);
 		if (ret)
 			goto err;
-		ioc = q_to_ioc(disk->queue);
+		ioc = q_to_ioc(bdev->bd_disk->queue);
 	}
 
 	spin_lock_irq(&ioc->lock);
@@ -3356,13 +3356,13 @@ static ssize_t ioc_cost_model_write(struct kernfs_open_file *of, char *input,
 	ioc_refresh_params(ioc, true);
 	spin_unlock_irq(&ioc->lock);
 
-	put_disk_and_module(disk);
+	blkdev_put(bdev, FMODE_NDELAY);
 	return nbytes;
 
 einval:
 	ret = -EINVAL;
 err:
-	put_disk_and_module(disk);
+	blkdev_put(bdev, FMODE_NDELAY);
 	return ret;
 }
 
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index c8fc9792ac776d..b9f3c246c3c908 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -197,12 +197,12 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);
 
 struct blkg_conf_ctx {
-	struct gendisk			*disk;
+	struct block_device		*bdev;
 	struct blkcg_gq			*blkg;
 	char				*body;
 };
 
-struct gendisk *blkcg_conf_get_disk(char **inputp);
+struct block_device *blkcg_conf_open_bdev(char **inputp);
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   char *input, struct blkg_conf_ctx *ctx);
 void blkg_conf_finish(struct blkg_conf_ctx *ctx);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36132.67974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYP-0004Xz-33; Tue, 24 Nov 2020 13:40:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36132.67974; Tue, 24 Nov 2020 13:40:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYO-0004Xl-SV; Tue, 24 Nov 2020 13:40:20 +0000
Received: by outflank-mailman (input) for mailman id 36132;
 Tue, 24 Nov 2020 13:40:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQn-0000Qf-7f
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:29 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0090ced-4cc8-4b0e-bb46-9565920c6421;
 Tue, 24 Nov 2020 13:29:31 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNV-0006iR-O5; Tue, 24 Nov 2020 13:29:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQn-0000Qf-7f
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:29 +0000
X-Inumbo-ID: d0090ced-4cc8-4b0e-bb46-9565920c6421
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d0090ced-4cc8-4b0e-bb46-9565920c6421;
	Tue, 24 Nov 2020 13:29:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=q//akcXYDaN3GPU06icBwYE5X5asU9hjziQkxopOWdc=; b=eQn/egDroO3NcQjEPaZ0Bsez0W
	g65pWhb83qBpDStOGyaDi0qhUUfUxxawlIER9j9qVSoX6/4MnNdOfGBj1P+BWltbHEPAi5IemngJF
	9o0MVerz1J+NxqR6kdvskLei+XNvmJDF3X7wIhlssF1rUjkK3635NDgCarRPgw4IKsEU7tjM7+Azl
	hdA0AgJ8XHNfMMi5RX+fNtbFsJsBWTrYdbs47JB/T5JsGC+HA3MEyGyi/9uSQnp8PliomGe1LV4Rw
	mNYH4cKfdhtoxoKiZQnVFW16/Y4ehZU/dx/bUsnH9GR+SzC/YIwg/TJfMNnd5rIhhG+jaiGOHSDYo
	RGzzS8FQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNV-0006iR-O5; Tue, 24 Nov 2020 13:29:06 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 42/45] block: switch disk_part_iter_* to use a struct block_device
Date: Tue, 24 Nov 2020 14:27:48 +0100
Message-Id: <20201124132751.3747337-43-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Switch the partition iterator infrastructure to iterate over block_device
references instead of hd_struct ones, which were mostly used to get at
the block_device anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c             | 57 +++++++++++++++++++--------------------
 block/partitions/core.c   | 13 +++++----
 drivers/s390/block/dasd.c |  8 +++---
 include/linux/genhd.h     |  4 +--
 4 files changed, 40 insertions(+), 42 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 2985740eab084b..f0bf10192066ac 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -231,7 +231,7 @@ EXPORT_SYMBOL_GPL(disk_part_iter_init);
  * CONTEXT:
  * Don't care.
  */
-struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
+struct block_device *disk_part_iter_next(struct disk_part_iter *piter)
 {
 	struct disk_part_tbl *ptbl;
 	int inc, end;
@@ -269,8 +269,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 		      piter->idx == 0))
 			continue;
 
-		get_device(part_to_dev(part->bd_part));
-		piter->part = part->bd_part;
+		piter->part = bdgrab(part);
 		piter->idx += inc;
 		break;
 	}
@@ -292,7 +291,8 @@ EXPORT_SYMBOL_GPL(disk_part_iter_next);
  */
 void disk_part_iter_exit(struct disk_part_iter *piter)
 {
-	disk_put_part(piter->part);
+	if (piter->part)
+		bdput(piter->part);
 	piter->part = NULL;
 }
 EXPORT_SYMBOL_GPL(disk_part_iter_exit);
@@ -333,7 +333,6 @@ struct block_device *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
 
 	for (i = 1; i < ptbl->len; i++) {
 		part = rcu_dereference(ptbl->part[i]);
-
 		if (part && sector_in_part(part, sector)) {
 			rcu_assign_pointer(ptbl->last_lookup, part);
 			goto out_unlock;
@@ -634,7 +633,7 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 {
 	struct device *ddev = disk_to_dev(disk);
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	int err;
 
 	ddev->parent = parent;
@@ -684,7 +683,7 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	/* announce possible partitions */
 	disk_part_iter_init(&piter, disk, 0);
 	while ((part = disk_part_iter_next(&piter)))
-		kobject_uevent(&part_to_dev(part)->kobj, KOBJ_ADD);
+		kobject_uevent(bdev_kobj(part), KOBJ_ADD);
 	disk_part_iter_exit(&piter);
 
 	if (disk->queue->backing_dev_info->dev) {
@@ -824,7 +823,7 @@ static void invalidate_partition(struct block_device *bdev)
 void del_gendisk(struct gendisk *disk)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	might_sleep();
 
@@ -840,8 +839,8 @@ void del_gendisk(struct gendisk *disk)
 	disk_part_iter_init(&piter, disk,
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
 	while ((part = disk_part_iter_next(&piter))) {
-		invalidate_partition(part->bdev);
-		delete_partition(part);
+		invalidate_partition(part);
+		delete_partition(part->bd_part);
 	}
 	disk_part_iter_exit(&piter);
 
@@ -958,7 +957,7 @@ void __init printk_all_partitions(void)
 	while ((dev = class_dev_iter_next(&iter))) {
 		struct gendisk *disk = dev_to_disk(dev);
 		struct disk_part_iter piter;
-		struct hd_struct *part;
+		struct block_device *part;
 		char name_buf[BDEVNAME_SIZE];
 		char devt_buf[BDEVT_SIZE];
 
@@ -977,14 +976,14 @@ void __init printk_all_partitions(void)
 		 */
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter))) {
-			bool is_part0 = part == disk->part0->bd_part;
+			bool is_part0 = part == disk->part0;
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
-			       bdevt_str(part_devt(part), devt_buf),
-			       bdev_nr_sectors(part->bdev) >> 1,
-			       disk_name(disk, part->bdev->bd_partno, name_buf),
-			       part->bdev->bd_meta_info ?
-					part->bdev->bd_meta_info->uuid : "");
+			       bdevt_str(part->bd_dev, devt_buf),
+			       bdev_nr_sectors(part) >> 1,
+			       disk_name(disk, part->bd_partno, name_buf),
+			       part->bd_meta_info ?
+					part->bd_meta_info->uuid : "");
 			if (is_part0) {
 				if (dev->parent && dev->parent->driver)
 					printk(" driver: %s\n",
@@ -1060,7 +1059,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 {
 	struct gendisk *sgp = v;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	char buf[BDEVNAME_SIZE];
 
 	/* Don't show non-partitionable removeable devices or empty devices */
@@ -1074,9 +1073,9 @@ static int show_partition(struct seq_file *seqf, void *v)
 	disk_part_iter_init(&piter, sgp, DISK_PITER_INCL_PART0);
 	while ((part = disk_part_iter_next(&piter)))
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
-			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
-			   bdev_nr_sectors(part->bdev) >> 1,
-			   disk_name(sgp, part->bdev->bd_partno, buf));
+			   MAJOR(part->bd_dev), MINOR(part->bd_dev),
+			   bdev_nr_sectors(part) >> 1,
+			   disk_name(sgp, part->bd_partno, buf));
 	disk_part_iter_exit(&piter);
 
 	return 0;
@@ -1470,7 +1469,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 {
 	struct gendisk *gp = v;
 	struct disk_part_iter piter;
-	struct hd_struct *hd;
+	struct block_device *hd;
 	char buf[BDEVNAME_SIZE];
 	unsigned int inflight;
 	struct disk_stats stat;
@@ -1485,11 +1484,11 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 
 	disk_part_iter_init(&piter, gp, DISK_PITER_INCL_EMPTY_PART0);
 	while ((hd = disk_part_iter_next(&piter))) {
-		part_stat_read_all(hd, &stat);
+		part_stat_read_all(hd->bd_part, &stat);
 		if (queue_is_mq(gp->queue))
-			inflight = blk_mq_in_flight(gp->queue, hd->bdev);
+			inflight = blk_mq_in_flight(gp->queue, hd);
 		else
-			inflight = part_in_flight(hd->bdev);
+			inflight = part_in_flight(hd);
 
 		seq_printf(seqf, "%4d %7d %s "
 			   "%lu %lu %lu %u "
@@ -1498,8 +1497,8 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 			   "%lu %lu %lu %u "
 			   "%lu %u"
 			   "\n",
-			   MAJOR(part_devt(hd)), MINOR(part_devt(hd)),
-			   disk_name(gp, hd->bdev->bd_partno, buf),
+			   MAJOR(hd->bd_dev), MINOR(hd->bd_dev),
+			   disk_name(gp, hd->bd_partno, buf),
 			   stat.ios[STAT_READ],
 			   stat.merges[STAT_READ],
 			   stat.sectors[STAT_READ],
@@ -1654,7 +1653,7 @@ static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 void set_disk_ro(struct gendisk *disk, int flag)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (disk->part0->bd_read_only != flag) {
 		set_disk_ro_uevent(disk, flag);
@@ -1663,7 +1662,7 @@ void set_disk_ro(struct gendisk *disk, int flag)
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter)))
-		part->bdev->bd_read_only = flag;
+		part->bd_read_only = flag;
 	disk_part_iter_exit(&piter);
 }
 
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 45fed1108d4425..c189ebd4569812 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -439,15 +439,14 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 		sector_t length, int skip_partno)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 	bool overlap = false;
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
-		if (part->bdev->bd_partno == skip_partno ||
-		    start >= part->bdev->bd_start_sect +
-			bdev_nr_sectors(part->bdev) ||
-		    start + length <= part->bdev->bd_start_sect)
+		if (part->bd_partno == skip_partno ||
+		    start >= part->bd_start_sect + bdev_nr_sectors(part) ||
+		    start + length <= part->bd_start_sect)
 			continue;
 		overlap = true;
 		break;
@@ -568,7 +567,7 @@ static bool disk_unlock_native_capacity(struct gendisk *disk)
 int blk_drop_partitions(struct block_device *bdev)
 {
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (bdev->bd_part_count)
 		return -EBUSY;
@@ -578,7 +577,7 @@ int blk_drop_partitions(struct block_device *bdev)
 
 	disk_part_iter_init(&piter, bdev->bd_disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter)))
-		delete_partition(part);
+		delete_partition(part->bd_part);
 	disk_part_iter_exit(&piter);
 
 	return 0;
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index db24e04ee9781e..1825fa8d05a780 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -432,7 +432,7 @@ dasd_state_ready_to_online(struct dasd_device * device)
 {
 	struct gendisk *disk;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	device->state = DASD_STATE_ONLINE;
 	if (device->block) {
@@ -445,7 +445,7 @@ dasd_state_ready_to_online(struct dasd_device * device)
 		disk = device->block->bdev->bd_disk;
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter)))
-			kobject_uevent(&part_to_dev(part)->kobj, KOBJ_CHANGE);
+			kobject_uevent(bdev_kobj(part), KOBJ_CHANGE);
 		disk_part_iter_exit(&piter);
 	}
 	return 0;
@@ -459,7 +459,7 @@ static int dasd_state_online_to_ready(struct dasd_device *device)
 	int rc;
 	struct gendisk *disk;
 	struct disk_part_iter piter;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (device->discipline->online_to_ready) {
 		rc = device->discipline->online_to_ready(device);
@@ -472,7 +472,7 @@ static int dasd_state_online_to_ready(struct dasd_device *device)
 		disk = device->block->bdev->bd_disk;
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter)))
-			kobject_uevent(&part_to_dev(part)->kobj, KOBJ_CHANGE);
+			kobject_uevent(bdev_kobj(part), KOBJ_CHANGE);
 		disk_part_iter_exit(&piter);
 	}
 	return 0;
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index afe27a0b8f5dd8..de2ee5ffeefe45 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -245,14 +245,14 @@ static inline void disk_put_part(struct hd_struct *part)
 
 struct disk_part_iter {
 	struct gendisk		*disk;
-	struct hd_struct	*part;
+	struct block_device	*part;
 	int			idx;
 	unsigned int		flags;
 };
 
 extern void disk_part_iter_init(struct disk_part_iter *piter,
 				 struct gendisk *disk, unsigned int flags);
-extern struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter);
+struct block_device *disk_part_iter_next(struct disk_part_iter *piter);
 extern void disk_part_iter_exit(struct disk_part_iter *piter);
 extern bool disk_has_partitions(struct gendisk *disk);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36133.67983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYQ-0004a1-22; Tue, 24 Nov 2020 13:40:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36133.67983; Tue, 24 Nov 2020 13:40:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYP-0004ZF-L3; Tue, 24 Nov 2020 13:40:21 +0000
Received: by outflank-mailman (input) for mailman id 36133;
 Tue, 24 Nov 2020 13:40:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQJ-0000Qf-70
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:59 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df515c30-c464-4c53-b98c-0ee0f4188782;
 Tue, 24 Nov 2020 13:29:20 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNR-0006hS-0P; Tue, 24 Nov 2020 13:29:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQJ-0000Qf-70
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:59 +0000
X-Inumbo-ID: df515c30-c464-4c53-b98c-0ee0f4188782
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id df515c30-c464-4c53-b98c-0ee0f4188782;
	Tue, 24 Nov 2020 13:29:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=fU4q5g8DrG8H4tmNCyRi+kxvuLyvbj3BSRsKCHW7Lvs=; b=lCj/qA5g5ZkwyWOAQF8ozej3vH
	keMjMwfYtWrbfzi/noHPO0Jb5da9D0DGw7G8IWjYVy/O03K5og4aipRWsXkLxQK8EApPzcDWjwilc
	7fOJgtIOp1JRzInyLgIUAxC0fi7eGIOiUCRfXEhU3Tc/HoDFvyV8LhCmctZyuMEvauG5YTP8OsLJd
	IYuC4WJUcdjC5RIGPIV3kk26bwLOc4XU7AkfAB1J2hi0J7THmDwtvAdtfGR8uO7tvNnNVgXDX8hQk
	MiweyWMDmpBAA5XTivLlhZpF582slGC3wGEw8gercd3+MT5GCadCLqsEY9cuVgz3g2gwaSBRy9UC3
	4wKMb0bg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNR-0006hS-0P; Tue, 24 Nov 2020 13:29:01 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 39/45] block: remove the partno field from struct hd_struct
Date: Tue, 24 Nov 2020 14:27:45 +0100
Message-Id: <20201124132751.3747337-40-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just use the bd_partno field in struct block_device everywhere.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c           | 12 ++++++------
 block/partitions/core.c |  9 ++++-----
 include/linux/genhd.h   |  1 -
 init/do_mounts.c        |  2 +-
 4 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 197120c0c60f23..60004bc8ba5b56 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -576,8 +576,8 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
 	int idx;
 
 	/* in consecutive minor range? */
-	if (part->partno < disk->minors) {
-		*devt = MKDEV(disk->major, disk->first_minor + part->partno);
+	if (part->bdev->bd_partno < disk->minors) {
+		*devt = MKDEV(disk->major, disk->first_minor + part->bdev->bd_partno);
 		return 0;
 	}
 
@@ -847,7 +847,7 @@ void del_gendisk(struct gendisk *disk)
 	disk_part_iter_init(&piter, disk,
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
 	while ((part = disk_part_iter_next(&piter))) {
-		invalidate_partition(disk, part->partno);
+		invalidate_partition(disk, part->bdev->bd_partno);
 		delete_partition(part);
 	}
 	disk_part_iter_exit(&piter);
@@ -989,7 +989,7 @@ void __init printk_all_partitions(void)
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
 			       bdevt_str(part_devt(part), devt_buf),
 			       bdev_nr_sectors(part->bdev) >> 1,
-			       disk_name(disk, part->partno, name_buf),
+			       disk_name(disk, part->bdev->bd_partno, name_buf),
 			       part->bdev->bd_meta_info ?
 					part->bdev->bd_meta_info->uuid : "");
 			if (is_part0) {
@@ -1083,7 +1083,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
 			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
 			   bdev_nr_sectors(part->bdev) >> 1,
-			   disk_name(sgp, part->partno, buf));
+			   disk_name(sgp, part->bdev->bd_partno, buf));
 	disk_part_iter_exit(&piter);
 
 	return 0;
@@ -1506,7 +1506,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 			   "%lu %u"
 			   "\n",
 			   MAJOR(part_devt(hd)), MINOR(part_devt(hd)),
-			   disk_name(gp, hd->partno, buf),
+			   disk_name(gp, hd->bdev->bd_partno, buf),
 			   stat.ios[STAT_READ],
 			   stat.merges[STAT_READ],
 			   stat.sectors[STAT_READ],
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 8beab9e7727e27..ee4f4e3237aa2d 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -184,7 +184,7 @@ static ssize_t part_partition_show(struct device *dev,
 {
 	struct hd_struct *p = dev_to_part(dev);
 
-	return sprintf(buf, "%d\n", p->partno);
+	return sprintf(buf, "%d\n", p->bdev->bd_partno);
 }
 
 static ssize_t part_start_show(struct device *dev,
@@ -274,7 +274,7 @@ static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
 {
 	struct hd_struct *part = dev_to_part(dev);
 
-	add_uevent_var(env, "PARTN=%u", part->partno);
+	add_uevent_var(env, "PARTN=%u", part->bdev->bd_partno);
 	if (part->bdev->bd_meta_info && part->bdev->bd_meta_info->volname[0])
 		add_uevent_var(env, "PARTNAME=%s",
 			       part->bdev->bd_meta_info->volname);
@@ -298,7 +298,7 @@ void delete_partition(struct hd_struct *part)
 	struct disk_part_tbl *ptbl =
 		rcu_dereference_protected(disk->part_tbl, 1);
 
-	rcu_assign_pointer(ptbl->part[part->partno], NULL);
+	rcu_assign_pointer(ptbl->part[part->bdev->bd_partno], NULL);
 	rcu_assign_pointer(ptbl->last_lookup, NULL);
 
 	kobject_put(part->bdev->bd_holder_dir);
@@ -372,7 +372,6 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 	bdev->bd_start_sect = start;
 	bdev_set_nr_sectors(bdev, len);
-	p->partno = partno;
 	bdev->bd_read_only = get_disk_ro(disk);
 
 	if (info) {
@@ -445,7 +444,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
-		if (part->partno == skip_partno ||
+		if (part->bdev->bd_partno == skip_partno ||
 		    start >= part->bdev->bd_start_sect +
 			bdev_nr_sectors(part->bdev) ||
 		    start + length <= part->bdev->bd_start_sect)
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 77443a1031e373..afe27a0b8f5dd8 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -54,7 +54,6 @@ struct partition_meta_info {
 struct hd_struct {
 	struct block_device *bdev;
 	struct device __dev;
-	int partno;
 };
 
 /**
diff --git a/init/do_mounts.c b/init/do_mounts.c
index 368ccb71850126..86bef93e72ebd6 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -136,7 +136,7 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 		struct hd_struct *part;
 
 		part = disk_get_part(dev_to_disk(dev),
-				     dev_to_part(dev)->partno + offset);
+				     dev_to_part(dev)->bdev->bd_partno + offset);
 		if (part) {
 			devt = part_devt(part);
 			put_device(part_to_dev(part));
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36135.67994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYR-0004dp-7R; Tue, 24 Nov 2020 13:40:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36135.67994; Tue, 24 Nov 2020 13:40:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYQ-0004dM-VR; Tue, 24 Nov 2020 13:40:22 +0000
Received: by outflank-mailman (input) for mailman id 36135;
 Tue, 24 Nov 2020 13:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOw-0000Qf-4S
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2b616b1-91a8-4099-81ae-08513ab3ea85;
 Tue, 24 Nov 2020 13:28:52 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYN3-0006cE-B6; Tue, 24 Nov 2020 13:28:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOw-0000Qf-4S
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:34 +0000
X-Inumbo-ID: c2b616b1-91a8-4099-81ae-08513ab3ea85
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c2b616b1-91a8-4099-81ae-08513ab3ea85;
	Tue, 24 Nov 2020 13:28:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=nzkUfrT01Wyy3ftM2t5tJYLu+k0O8FwenTfIP4W4Alg=; b=QJqpmwmcRFqcG29M/kctfXCzQB
	BGC4iJ+kXweWuVnoPisgRNDv06K0vk8fvyp15/7tvBIq4LD48rZzewgoVFPe/2uGW8RQq+6b5Jo5k
	WJbpdWZYIqUnR0J90EL3Xlw5a1OXx02V58H5fRlGs8qPi4+Fj5hrX5NBXm1FNtfqTDy2OkeV0TVx9
	FB45owhz/9M9gbrNGPgJBRCGNH+DgxkmtwdGfT/rdtgIwVbTnQSsyNZzCWC4VwvGnJ0E9CevwS/lN
	LVoKe3fEpXX2CVVvJouwGSJdWtCRLhGR47Obd4dXmOJL2zf9oFs12PgYTuAqCvqs1XpWJqIZew3WK
	lNqUCg2A==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYN3-0006cE-B6; Tue, 24 Nov 2020 13:28:38 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 26/45] block: remove ->bd_contains
Date: Tue, 24 Nov 2020 14:27:32 +0100
Message-Id: <20201124132751.3747337-27-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that each hd_struct has a reference to the corresponding
block_device, there is no need for the bd_contains pointer.  Add
a bdev_whole() helper to look up the whole device block_device
structure instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 drivers/block/loop.c      |  2 +-
 drivers/scsi/scsicam.c    |  2 +-
 fs/block_dev.c            | 22 ++++++++--------------
 include/linux/blk_types.h |  4 +++-
 4 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 26c7aafba7c5f8..c0df88b3300c41 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1088,7 +1088,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	 * here to avoid changing device under exclusive owner.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
+		claimed_bdev = bdev_whole(bdev);
 		error = bd_prepare_to_claim(bdev, claimed_bdev, loop_configure);
 		if (error)
 			goto out_putf;
diff --git a/drivers/scsi/scsicam.c b/drivers/scsi/scsicam.c
index 682cf08ab04153..f1553a453616fd 100644
--- a/drivers/scsi/scsicam.c
+++ b/drivers/scsi/scsicam.c
@@ -32,7 +32,7 @@
  */
 unsigned char *scsi_bios_ptable(struct block_device *dev)
 {
-	struct address_space *mapping = dev->bd_contains->bd_inode->i_mapping;
+	struct address_space *mapping = bdev_whole(dev)->bd_inode->i_mapping;
 	unsigned char *res = NULL;
 	struct page *page;
 
diff --git a/fs/block_dev.c b/fs/block_dev.c
index b9ee8fe5acd570..e8d7de5fae00a9 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -119,7 +119,7 @@ int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
 	 * under live filesystem.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev->bd_contains;
+		claimed_bdev = bdev_whole(bdev);
 		err = bd_prepare_to_claim(bdev, claimed_bdev,
 					  truncate_bdev_range);
 		if (err)
@@ -879,7 +879,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 	spin_lock_init(&bdev->bd_size_lock);
 	bdev->bd_disk = disk;
 	bdev->bd_partno = partno;
-	bdev->bd_contains = NULL;
 	bdev->bd_super = NULL;
 	bdev->bd_inode = inode;
 	bdev->bd_part_count = 0;
@@ -1342,9 +1341,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 	int ret;
 
 	if (!bdev->bd_openers) {
-		bdev->bd_contains = bdev;
-
-		if (!bdev->bd_partno) {
+		if (!bdev_is_partition(bdev)) {
 			ret = -ENXIO;
 			bdev->bd_part = disk_get_part(disk, 0);
 			if (!bdev->bd_part)
@@ -1384,7 +1381,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 			whole->bd_part_count++;
 			mutex_unlock(&whole->bd_mutex);
 
-			bdev->bd_contains = whole;
 			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!bdev->bd_part || !bdev->bd_part->nr_sects) {
 				__blkdev_put(whole, mode, 1);
@@ -1398,7 +1394,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 		if (bdev->bd_bdi == &noop_backing_dev_info)
 			bdev->bd_bdi = bdi_get(disk->queue->backing_dev_info);
 	} else {
-		if (bdev->bd_contains == bdev) {
+		if (!bdev_is_partition(bdev)) {
 			ret = 0;
 			if (bdev->bd_disk->fops->open)
 				ret = bdev->bd_disk->fops->open(bdev, mode);
@@ -1416,7 +1412,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
  out_clear:
 	disk_put_part(bdev->bd_part);
 	bdev->bd_part = NULL;
-	bdev->bd_contains = NULL;
 	return ret;
 }
 
@@ -1659,8 +1654,7 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
 		if (bdev_is_partition(bdev))
-			victim = bdev->bd_contains;
-		bdev->bd_contains = NULL;
+			victim = bdev_whole(bdev);
 	} else {
 		if (!bdev_is_partition(bdev) && disk->fops->release)
 			disk->fops->release(disk, mode);
@@ -1678,6 +1672,7 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 	mutex_lock(&bdev->bd_mutex);
 
 	if (mode & FMODE_EXCL) {
+		struct block_device *whole = bdev_whole(bdev);
 		bool bdev_free;
 
 		/*
@@ -1688,13 +1683,12 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 		spin_lock(&bdev_lock);
 
 		WARN_ON_ONCE(--bdev->bd_holders < 0);
-		WARN_ON_ONCE(--bdev->bd_contains->bd_holders < 0);
+		WARN_ON_ONCE(--whole->bd_holders < 0);
 
-		/* bd_contains might point to self, check in a separate step */
 		if ((bdev_free = !bdev->bd_holders))
 			bdev->bd_holder = NULL;
-		if (!bdev->bd_contains->bd_holders)
-			bdev->bd_contains->bd_holder = NULL;
+		if (!whole->bd_holders)
+			whole->bd_holder = NULL;
 
 		spin_unlock(&bdev_lock);
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 9698f459cc65c9..2e0a9bd9688d28 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -32,7 +32,6 @@ struct block_device {
 #ifdef CONFIG_SYSFS
 	struct list_head	bd_holder_disks;
 #endif
-	struct block_device *	bd_contains;
 	u8			bd_partno;
 	struct hd_struct *	bd_part;
 	/* number of times partitions within this device have been opened. */
@@ -49,6 +48,9 @@ struct block_device {
 	struct super_block	*bd_fsfreeze_sb;
 } __randomize_layout;
 
+#define bdev_whole(_bdev) \
+	((_bdev)->bd_disk->part0.bdev)
+
 #define bdev_kobj(_bdev) \
 	(&part_to_dev((_bdev)->bd_part)->kobj)
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36136.68006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYS-0004h8-P9; Tue, 24 Nov 2020 13:40:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36136.68006; Tue, 24 Nov 2020 13:40:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYR-0004fg-TJ; Tue, 24 Nov 2020 13:40:23 +0000
Received: by outflank-mailman (input) for mailman id 36136;
 Tue, 24 Nov 2020 13:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOX-0000Qf-3f
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:09 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df57f4db-d836-4740-a4bf-0fbc31536d56;
 Tue, 24 Nov 2020 13:28:46 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMh-0006Ww-C9; Tue, 24 Nov 2020 13:28:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOX-0000Qf-3f
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:09 +0000
X-Inumbo-ID: df57f4db-d836-4740-a4bf-0fbc31536d56
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id df57f4db-d836-4740-a4bf-0fbc31536d56;
	Tue, 24 Nov 2020 13:28:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=NU6baGQPMXpwn5pZ9gkDp3IVsd46BLx4N0SPrDXIchU=; b=EOAAPNtRZFDVKupvihhdIus3Uz
	DNcSizR54IlzQ7Yfr89Ztz2DVmH9B56VSYARQNlWEzEsbYaSuBOk0LN/us9lcD3b6z4L57avlUNBE
	X4UKxelEsGPLgoQ09dZONBvDgAZb9jXyd26PP9jeByPVcEQyALdmqCNoTI2Cal6EFAtVUX5nT7Vop
	Nqz1yf0Thq+fRg7FEzTN8nRwmL+HGPwAbN5hTH5oRcYFIFi33RiiVucACgiK9rfIk89I/c8SPq3fT
	xRcT6wCXamprxrUeW4unnNl7FZEsCMl6MwdVBhSFlNLC6OdMvXQXphLXXTwQIzV1e8E6JzrATYINy
	HhUQ1aeQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMh-0006Ww-C9; Tue, 24 Nov 2020 13:28:16 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 14/45] block: use disk_part_iter_exit in disk_part_iter_next
Date: Tue, 24 Nov 2020 14:27:20 +0100
Message-Id: <20201124132751.3747337-15-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Call disk_part_iter_exit in disk_part_iter_next instead of duplicating
the functionality.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 block/genhd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 4e039524f92b8f..0bd9c41dd4cb69 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -227,8 +227,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 	int inc, end;
 
 	/* put the last partition */
-	disk_put_part(piter->part);
-	piter->part = NULL;
+	disk_part_iter_exit(piter);
 
 	/* get part_tbl */
 	rcu_read_lock();
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36137.68017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYT-0004kT-Uf; Tue, 24 Nov 2020 13:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36137.68017; Tue, 24 Nov 2020 13:40:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYT-0004j7-3I; Tue, 24 Nov 2020 13:40:25 +0000
Received: by outflank-mailman (input) for mailman id 36137;
 Tue, 24 Nov 2020 13:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOr-0000Qf-4A
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:29 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0cadaa0-b970-4e35-ac36-787d74d9f299;
 Tue, 24 Nov 2020 13:28:49 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMm-0006Xu-0n; Tue, 24 Nov 2020 13:28:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOr-0000Qf-4A
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:29 +0000
X-Inumbo-ID: f0cadaa0-b970-4e35-ac36-787d74d9f299
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f0cadaa0-b970-4e35-ac36-787d74d9f299;
	Tue, 24 Nov 2020 13:28:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=sq5VpPGxfF2kkzPYx3vsaAUL0wGdgrOW5ZX0PlEgXAM=; b=r7g7yDYegpOMYt6pXHT1n2T/zL
	b+34jNgNj1tqJxcH7t0JJgm5YSYrAtr0jsN0ntOiFWGVGQBvQLKDCgR5mzzN9SAvsnGUCUSMCZa8p
	/jNdYw4WlvwYdf5CpUizYYHu/xAYk355tkiXksQmCQGNLHP8ZC2DL3uUNmRLg5pK5X1gsbof/8y6v
	AlMTcb3SwkfSJQBWsX/dbEB0oVCqGZPCMXGazDcXv2FtJ+wf0ze4J1u5xmJtxpSz/wiePAgP+4AIj
	7ZsCXN0JNSmkxlPdRln6ziBOqtp786hyG/LrQzjhTGVFCS9+d2vPB0+p0o+KT6w4hV+fkcpCAVEG2
	itOl4TdQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMm-0006Xu-0n; Tue, 24 Nov 2020 13:28:20 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 17/45] init: refactor name_to_dev_t
Date: Tue, 24 Nov 2020 14:27:23 +0100
Message-Id: <20201124132751.3747337-18-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Split each case into a self-contained helper, and move the
block-dependent code entirely under the pre-existing #ifdef
CONFIG_BLOCK.  This allows removing the blk_lookup_devt stub in
genhd.h.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 include/linux/genhd.h |   7 +-
 init/do_mounts.c      | 183 +++++++++++++++++++++---------------------
 2 files changed, 91 insertions(+), 99 deletions(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 22f5b9fd96f8bf..ca5e356084c353 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -388,18 +388,13 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
 }
 #endif /* CONFIG_SYSFS */
 
+dev_t blk_lookup_devt(const char *name, int partno);
 #ifdef CONFIG_BLOCK
 void printk_all_partitions(void);
-dev_t blk_lookup_devt(const char *name, int partno);
 #else /* CONFIG_BLOCK */
 static inline void printk_all_partitions(void)
 {
 }
-static inline dev_t blk_lookup_devt(const char *name, int partno)
-{
-	dev_t devt = MKDEV(0, 0);
-	return devt;
-}
 #endif /* CONFIG_BLOCK */
 
 #endif /* _LINUX_GENHD_H */
diff --git a/init/do_mounts.c b/init/do_mounts.c
index b5f9604d0c98a2..aef2f24461c7f1 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -90,7 +90,6 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	return 0;
 }
 
-
 /**
  * devt_from_partuuid - looks up the dev_t of a partition by its UUID
  * @uuid_str:	char array containing ascii UUID
@@ -186,7 +185,83 @@ static int match_dev_by_label(struct device *dev, const void *data)
 
 	return 0;
 }
-#endif
+
+static dev_t devt_from_partlabel(const char *label)
+{
+	struct device *dev;
+	dev_t devt = 0;
+
+	dev = class_find_device(&block_class, NULL, label, &match_dev_by_label);
+	if (dev) {
+		devt = dev->devt;
+		put_device(dev);
+	}
+
+	return devt;
+}
+
+static dev_t devt_from_devname(const char *name)
+{
+	dev_t devt = 0;
+	int part;
+	char s[32];
+	char *p;
+
+	if (strlen(name) > 31)
+		return 0;
+	strcpy(s, name);
+	for (p = s; *p; p++) {
+		if (*p == '/')
+			*p = '!';
+	}
+
+	devt = blk_lookup_devt(s, 0);
+	if (devt)
+		return devt;
+
+	/*
+	 * Try non-existent, but valid partition, which may only exist after
+	 * opening the device, like partitioned md devices.
+	 */
+	while (p > s && isdigit(p[-1]))
+		p--;
+	if (p == s || !*p || *p == '0')
+		return 0;
+
+	/* try disk name without <part number> */
+	part = simple_strtoul(p, NULL, 10);
+	*p = '\0';
+	devt = blk_lookup_devt(s, part);
+	if (devt)
+		return devt;
+
+	/* try disk name without p<part number> */
+	if (p < s + 2 || !isdigit(p[-2]) || p[-1] != 'p')
+		return 0;
+	p[-1] = '\0';
+	return blk_lookup_devt(s, part);
+}
+#endif /* CONFIG_BLOCK */
+
+static dev_t devt_from_devnum(const char *name)
+{
+	unsigned maj, min, offset;
+	dev_t devt = 0;
+	char *p, dummy;
+
+	if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2 ||
+	    sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3) {
+		devt = MKDEV(maj, min);
+		if (maj != MAJOR(devt) || min != MINOR(devt))
+			return 0;
+	} else {
+		devt = new_decode_dev(simple_strtoul(name, &p, 16));
+		if (*p)
+			return 0;
+	}
+
+	return devt;
+}
 
 /*
  *	Convert a name into device number.  We accept the following variants:
@@ -218,101 +293,23 @@ static int match_dev_by_label(struct device *dev, const void *data)
  *	name contains slashes, the device name has them replaced with
  *	bangs.
  */
-
 dev_t name_to_dev_t(const char *name)
 {
-	char s[32];
-	char *p;
-	dev_t res = 0;
-	int part;
-
+	if (strcmp(name, "/dev/nfs") == 0)
+		return Root_NFS;
+	if (strcmp(name, "/dev/cifs") == 0)
+		return Root_CIFS;
+	if (strcmp(name, "/dev/ram") == 0)
+		return Root_RAM0;
 #ifdef CONFIG_BLOCK
-	if (strncmp(name, "PARTUUID=", 9) == 0) {
-		name += 9;
-		res = devt_from_partuuid(name);
-		if (!res)
-			goto fail;
-		goto done;
-	} else if (strncmp(name, "PARTLABEL=", 10) == 0) {
-		struct device *dev;
-
-		dev = class_find_device(&block_class, NULL, name + 10,
-					&match_dev_by_label);
-		if (!dev)
-			goto fail;
-
-		res = dev->devt;
-		put_device(dev);
-		goto done;
-	}
+	if (strncmp(name, "PARTUUID=", 9) == 0)
+		return devt_from_partuuid(name + 9);
+	if (strncmp(name, "PARTLABEL=", 10) == 0)
+		return devt_from_partlabel(name + 10);
+	if (strncmp(name, "/dev/", 5) == 0)
+		return devt_from_devname(name + 5);
 #endif
-
-	if (strncmp(name, "/dev/", 5) != 0) {
-		unsigned maj, min, offset;
-		char dummy;
-
-		if ((sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) ||
-		    (sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3)) {
-			res = MKDEV(maj, min);
-			if (maj != MAJOR(res) || min != MINOR(res))
-				goto fail;
-		} else {
-			res = new_decode_dev(simple_strtoul(name, &p, 16));
-			if (*p)
-				goto fail;
-		}
-		goto done;
-	}
-
-	name += 5;
-	res = Root_NFS;
-	if (strcmp(name, "nfs") == 0)
-		goto done;
-	res = Root_CIFS;
-	if (strcmp(name, "cifs") == 0)
-		goto done;
-	res = Root_RAM0;
-	if (strcmp(name, "ram") == 0)
-		goto done;
-
-	if (strlen(name) > 31)
-		goto fail;
-	strcpy(s, name);
-	for (p = s; *p; p++)
-		if (*p == '/')
-			*p = '!';
-	res = blk_lookup_devt(s, 0);
-	if (res)
-		goto done;
-
-	/*
-	 * try non-existent, but valid partition, which may only exist
-	 * after revalidating the disk, like partitioned md devices
-	 */
-	while (p > s && isdigit(p[-1]))
-		p--;
-	if (p == s || !*p || *p == '0')
-		goto fail;
-
-	/* try disk name without <part number> */
-	part = simple_strtoul(p, NULL, 10);
-	*p = '\0';
-	res = blk_lookup_devt(s, part);
-	if (res)
-		goto done;
-
-	/* try disk name without p<part number> */
-	if (p < s + 2 || !isdigit(p[-2]) || p[-1] != 'p')
-		goto fail;
-	p[-1] = '\0';
-	res = blk_lookup_devt(s, part);
-	if (res)
-		goto done;
-
-fail:
-	return 0;
-done:
-	return res;
+	return devt_from_devnum(name);
 }
 EXPORT_SYMBOL_GPL(name_to_dev_t);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36139.68025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYV-0004nz-P0; Tue, 24 Nov 2020 13:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36139.68025; Tue, 24 Nov 2020 13:40:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYU-0004mO-9E; Tue, 24 Nov 2020 13:40:26 +0000
Received: by outflank-mailman (input) for mailman id 36139;
 Tue, 24 Nov 2020 13:40:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPB-0000Qf-4c
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:49 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07771d51-8c35-4879-90b0-4d565c40568a;
 Tue, 24 Nov 2020 13:28:57 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNA-0006eF-Jo; Tue, 24 Nov 2020 13:28:45 +0000
X-Inumbo-ID: 07771d51-8c35-4879-90b0-4d565c40568a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=VSLNoQYrcMsDvA0bIyZ6Cp7YvcFLkKctZqyVVMQzWQQ=; b=Xj5f5+5votDpJMSn8jQCWmHw9/
	XXRQ59s5rv/9zlsKYdfsZBWD1P7YKJYsYlYIC7zCjRk8UlazwByqEbxTzz9ghIwV/RVYFaLz+qmsz
	rjW5dlfsEfLdWPMXVrkJP/LKQ+33LmrzdwOtA4aTpSIJMj8z0Ph+ufU4vLwfoLzFbM1J8mFrrVhUi
	Hbpb1ukVNbEJylCu6liovwieggVlh1pq8+rv6Tt0xtV9nnseQ4/e+tgqCCReJQlC9YaLQhQ6++QvO
	27SlLu3NX5YpgagCH0eM/PxZn+v1bTyB7UCCJDLuBLHXUrGXyo/4Ba5cEzZoaY+J8FxE4Ehch1F4M
	EerU96dQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 30/45] block: remove the nr_sects field in struct hd_struct
Date: Tue, 24 Nov 2020 14:27:36 +0100
Message-Id: <20201124132751.3747337-31-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that the hd_struct always has a block device attached to it, there is
no need for having two size fields that just get out of sync.

Additionally, the field in hd_struct did not use proper serialization,
possibly allowing for torn writes.  Using only the block_device field
fixes this problem as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 block/bio.c                        |  4 +-
 block/blk-core.c                   |  2 +-
 block/blk.h                        | 53 ----------------------
 block/genhd.c                      | 55 +++++++++++-----------
 block/partitions/core.c            | 17 ++++---
 drivers/block/loop.c               |  1 -
 drivers/block/nbd.c                |  2 +-
 drivers/block/xen-blkback/common.h |  4 +-
 drivers/md/bcache/super.c          |  2 +-
 drivers/s390/block/dasd_ioctl.c    |  4 +-
 drivers/target/target_core_pscsi.c |  7 +--
 fs/block_dev.c                     | 73 +-----------------------------
 fs/f2fs/super.c                    |  2 +-
 fs/pstore/blk.c                    |  2 +-
 include/linux/genhd.h              | 29 +++---------
 kernel/trace/blktrace.c            |  2 +-
 16 files changed, 60 insertions(+), 199 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fa01bef35bb1fe..669bb47a31988e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -613,8 +613,8 @@ void guard_bio_eod(struct bio *bio)
 	rcu_read_lock();
 	part = __disk_get_part(bio->bi_disk, bio->bi_partno);
 	if (part)
-		maxsector = part_nr_sects_read(part);
-	else
+		maxsector = bdev_nr_sectors(part->bdev);
+	else
 		maxsector = get_capacity(bio->bi_disk);
 	rcu_read_unlock();
 
diff --git a/block/blk-core.c b/block/blk-core.c
index 2db8bda43b6e6d..988f45094a387b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -755,7 +755,7 @@ static inline int blk_partition_remap(struct bio *bio)
 		goto out;
 
 	if (bio_sectors(bio)) {
-		if (bio_check_eod(bio, part_nr_sects_read(p)))
+		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
 			goto out;
 		bio->bi_iter.bi_sector += p->start_sect;
 		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
diff --git a/block/blk.h b/block/blk.h
index c4839abcfa27eb..09cee7024fb43e 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -387,59 +387,6 @@ static inline void hd_free_part(struct hd_struct *part)
 	percpu_ref_exit(&part->ref);
 }
 
-/*
- * Any access of part->nr_sects which is not protected by partition
- * bd_mutex or gendisk bdev bd_mutex, should be done using this
- * accessor function.
- *
- * Code written along the lines of i_size_read() and i_size_write().
- * CONFIG_PREEMPTION case optimizes the case of UP kernel with preemption
- * on.
- */
-static inline sector_t part_nr_sects_read(struct hd_struct *part)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	sector_t nr_sects;
-	unsigned seq;
-	do {
-		seq = read_seqcount_begin(&part->nr_sects_seq);
-		nr_sects = part->nr_sects;
-	} while (read_seqcount_retry(&part->nr_sects_seq, seq));
-	return nr_sects;
-#elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
-	sector_t nr_sects;
-
-	preempt_disable();
-	nr_sects = part->nr_sects;
-	preempt_enable();
-	return nr_sects;
-#else
-	return part->nr_sects;
-#endif
-}
-
-/*
- * Should be called with mutex lock held (typically bd_mutex) of partition
- * to provide mutual exlusion among writers otherwise seqcount might be
- * left in wrong state leaving the readers spinning infinitely.
- */
-static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	preempt_disable();
-	write_seqcount_begin(&part->nr_sects_seq);
-	part->nr_sects = size;
-	write_seqcount_end(&part->nr_sects_seq);
-	preempt_enable();
-#elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
-	preempt_disable();
-	part->nr_sects = size;
-	preempt_enable();
-#else
-	part->nr_sects = size;
-#endif
-}
-
 int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page);
diff --git a/block/genhd.c b/block/genhd.c
index 16c6b13242105b..8ace0628ac20b7 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -38,6 +38,16 @@ static void disk_add_events(struct gendisk *disk);
 static void disk_del_events(struct gendisk *disk);
 static void disk_release_events(struct gendisk *disk);
 
+void set_capacity(struct gendisk *disk, sector_t sectors)
+{
+	struct block_device *bdev = disk->part0.bdev;
+
+	spin_lock(&bdev->bd_size_lock);
+	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
+	spin_unlock(&bdev->bd_size_lock);
+}
+EXPORT_SYMBOL(set_capacity);
+
 /*
  * Set disk capacity and notify if the size is not currently zero and will not
  * be set to zero.  Returns true if a uevent was sent, otherwise false.
@@ -45,18 +55,22 @@ static void disk_release_events(struct gendisk *disk);
 bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
 {
 	sector_t capacity = get_capacity(disk);
+	char *envp[] = { "RESIZE=1", NULL };
 
 	set_capacity(disk, size);
-	revalidate_disk_size(disk, true);
-
-	if (capacity != size && capacity != 0 && size != 0) {
-		char *envp[] = { "RESIZE=1", NULL };
-
-		kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
-		return true;
-	}
 
-	return false;
+	/*
+	 * Only print a message and send a uevent if the gendisk is user visible
+	 * and alive.  This avoids spamming the log and udev when setting the
+	 * initial capacity during probing.
+	 */
+	if (size == capacity ||
+	    (disk->flags & (GENHD_FL_UP | GENHD_FL_HIDDEN)) != GENHD_FL_UP)
+		return false;
+	pr_info("%s: detected capacity change from %lld to %lld\n",
+		disk->disk_name, capacity, size);
+	kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
+	return true;
 }
 EXPORT_SYMBOL_GPL(set_capacity_and_notify);
 
@@ -245,7 +259,7 @@ struct hd_struct *disk_part_iter_next(struct disk_part_iter *piter)
 		part = rcu_dereference(ptbl->part[piter->idx]);
 		if (!part)
 			continue;
-		if (!part_nr_sects_read(part) &&
+		if (!bdev_nr_sectors(part->bdev) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY) &&
 		    !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 &&
 		      piter->idx == 0))
@@ -282,7 +296,7 @@ EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 static inline int sector_in_part(struct hd_struct *part, sector_t sector)
 {
 	return part->start_sect <= sector &&
-		sector < part->start_sect + part_nr_sects_read(part);
+		sector < part->start_sect + bdev_nr_sectors(part->bdev);
 }
 
 /**
@@ -978,8 +992,8 @@ void __init printk_all_partitions(void)
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
 			       bdevt_str(part_devt(part), devt_buf),
-			       (unsigned long long)part_nr_sects_read(part) >> 1
-			       , disk_name(disk, part->partno, name_buf),
+			       bdev_nr_sectors(part->bdev) >> 1,
+			       disk_name(disk, part->partno, name_buf),
 			       part->info ? part->info->uuid : "");
 			if (is_part0) {
 				if (dev->parent && dev->parent->driver)
@@ -1071,7 +1085,7 @@ static int show_partition(struct seq_file *seqf, void *v)
 	while ((part = disk_part_iter_next(&piter)))
 		seq_printf(seqf, "%4d  %7d %10llu %s\n",
 			   MAJOR(part_devt(part)), MINOR(part_devt(part)),
-			   (unsigned long long)part_nr_sects_read(part) >> 1,
+			   bdev_nr_sectors(part->bdev) >> 1,
 			   disk_name(sgp, part->partno, buf));
 	disk_part_iter_exit(&piter);
 
@@ -1153,8 +1167,7 @@ ssize_t part_size_show(struct device *dev,
 {
 	struct hd_struct *p = dev_to_part(dev);
 
-	return sprintf(buf, "%llu\n",
-		(unsigned long long)part_nr_sects_read(p));
+	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
 }
 
 ssize_t part_stat_show(struct device *dev,
@@ -1610,16 +1623,6 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
 	rcu_assign_pointer(ptbl->part[0], &disk->part0);
 
-	/*
-	 * set_capacity() and get_capacity() currently don't use
-	 * seqcounter to read/update the part0->nr_sects. Still init
-	 * the counter as we can read the sectors in IO submission
-	 * patch using seqence counters.
-	 *
-	 * TODO: Ideally set_capacity() and get_capacity() should be
-	 * converted to make use of bd_mutex and sequence counters.
-	 */
-	hd_sects_seq_init(&disk->part0);
 	if (hd_ref_init(&disk->part0))
 		goto out_free_bdstats;
 
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 508b46da53eee5..92ffa55bdfddfd 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -85,6 +85,13 @@ static int (*check_part[])(struct parsed_partitions *) = {
 	NULL
 };
 
+static void bdev_set_nr_sectors(struct block_device *bdev, sector_t sectors)
+{
+	spin_lock(&bdev->bd_size_lock);
+	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
+	spin_unlock(&bdev->bd_size_lock);
+}
+
 static struct parsed_partitions *allocate_partitions(struct gendisk *hd)
 {
 	struct parsed_partitions *state;
@@ -295,7 +302,7 @@ static void hd_struct_free_work(struct work_struct *work)
 	put_device(disk_to_dev(disk));
 
 	part->start_sect = 0;
-	part->nr_sects = 0;
+	bdev_set_nr_sectors(part->bdev, 0);
 	part_stat_set_all(part, 0);
 	put_device(part_to_dev(part));
 }
@@ -412,11 +419,10 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		goto out_free_stats;
 	p->bdev = bdev;
 
-	hd_sects_seq_init(p);
 	pdev = part_to_dev(p);
 
 	p->start_sect = start;
-	p->nr_sects = len;
+	bdev_set_nr_sectors(bdev, len);
 	p->partno = partno;
 	p->policy = get_disk_ro(disk);
 
@@ -509,7 +515,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
 		if (part->partno == skip_partno ||
-		    start >= part->start_sect + part->nr_sects ||
+		    start >= part->start_sect + bdev_nr_sectors(part->bdev) ||
 		    start + length <= part->start_sect)
 			continue;
 		overlap = true;
@@ -600,8 +606,7 @@ int bdev_resize_partition(struct block_device *bdev, int partno,
 	if (partition_overlaps(bdev->bd_disk, start, length, partno))
 		goto out_unlock;
 
-	part_nr_sects_write(part, length);
-	bd_set_nr_sectors(bdevp, length);
+	bdev_set_nr_sectors(bdevp, length);
 
 	ret = 0;
 out_unlock:
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index d643c67be6acea..d2ce1ddc192d78 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1241,7 +1241,6 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 	set_capacity(lo->lo_disk, 0);
 	loop_sysfs_exit(lo);
 	if (bdev) {
-		bd_set_nr_sectors(bdev, 0);
 		/* let user-space know about this change */
 		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
 	}
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 45b0423ef2c53d..014683968ce174 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1132,7 +1132,7 @@ static void nbd_bdev_reset(struct block_device *bdev)
 {
 	if (bdev->bd_openers > 1)
 		return;
-	bd_set_nr_sectors(bdev, 0);
+	set_capacity(bdev->bd_disk, 0);
 }
 
 static void nbd_parse_flags(struct nbd_device *nbd)
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index c6ea5d38c509a6..0762db247b41b3 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -358,9 +358,7 @@ struct pending_req {
 };
 
 
-#define vbd_sz(_v)	((_v)->bdev->bd_part ? \
-			 (_v)->bdev->bd_part->nr_sects : \
-			  get_capacity((_v)->bdev->bd_disk))
+#define vbd_sz(_v)	bdev_nr_sectors((_v)->bdev)
 
 #define xen_blkif_get(_b) (atomic_inc(&(_b)->refcnt))
 #define xen_blkif_put(_b)				\
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index c55d3c58a7ef55..04fa40868fbe10 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1408,7 +1408,7 @@ static int cached_dev_init(struct cached_dev *dc, unsigned int block_size)
 			q->limits.raid_partial_stripes_expensive;
 
 	ret = bcache_device_init(&dc->disk, block_size,
-			 dc->bdev->bd_part->nr_sects - dc->sb.data_offset,
+			 bdev_nr_sectors(dc->bdev) - dc->sb.data_offset,
 			 dc->bdev, &bcache_cached_ops);
 	if (ret)
 		return ret;
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index 3359559517bfcf..304eba1acf163c 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -54,8 +54,6 @@ dasd_ioctl_enable(struct block_device *bdev)
 		return -ENODEV;
 
 	dasd_enable_device(base);
-	/* Formatting the dasd device can change the capacity. */
-	bd_set_nr_sectors(bdev, get_capacity(base->block->gdp));
 	dasd_put_device(base);
 	return 0;
 }
@@ -88,7 +86,7 @@ dasd_ioctl_disable(struct block_device *bdev)
 	 * Set i_size to zero, since read, write, etc. check against this
 	 * value.
 	 */
-	bd_set_nr_sectors(bdev, 0);
+	set_capacity(bdev->bd_disk, 0);
 	dasd_put_device(base);
 	return 0;
 }
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 4e37fa9b409d52..a70c33c49f0960 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -1027,12 +1027,7 @@ static u32 pscsi_get_device_type(struct se_device *dev)
 
 static sector_t pscsi_get_blocks(struct se_device *dev)
 {
-	struct pscsi_dev_virt *pdv = PSCSI_DEV(dev);
-
-	if (pdv->pdv_bd && pdv->pdv_bd->bd_part)
-		return pdv->pdv_bd->bd_part->nr_sects;
-
-	return 0;
+	return bdev_nr_sectors(PSCSI_DEV(dev)->pdv_bd);
 }
 
 static void pscsi_req_done(struct request *req, blk_status_t status)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 1e5c6d0eb92677..02536d9fa29945 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1203,70 +1203,6 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 EXPORT_SYMBOL_GPL(bd_unlink_disk_holder);
 #endif
 
-/**
- * check_disk_size_change - checks for disk size change and adjusts bdev size.
- * @disk: struct gendisk to check
- * @bdev: struct bdev to adjust.
- * @verbose: if %true log a message about a size change if there is any
- *
- * This routine checks to see if the bdev size does not match the disk size
- * and adjusts it if it differs. When shrinking the bdev size, its all caches
- * are freed.
- */
-static void check_disk_size_change(struct gendisk *disk,
-		struct block_device *bdev, bool verbose)
-{
-	loff_t disk_size, bdev_size;
-
-	spin_lock(&bdev->bd_size_lock);
-	disk_size = (loff_t)get_capacity(disk) << 9;
-	bdev_size = i_size_read(bdev->bd_inode);
-	if (disk_size != bdev_size) {
-		if (verbose) {
-			printk(KERN_INFO
-			       "%s: detected capacity change from %lld to %lld\n",
-			       disk->disk_name, bdev_size, disk_size);
-		}
-		i_size_write(bdev->bd_inode, disk_size);
-	}
-	spin_unlock(&bdev->bd_size_lock);
-}
-
-/**
- * revalidate_disk_size - checks for disk size change and adjusts bdev size.
- * @disk: struct gendisk to check
- * @verbose: if %true log a message about a size change if there is any
- *
- * This routine checks to see if the bdev size does not match the disk size
- * and adjusts it if it differs. When shrinking the bdev size, its all caches
- * are freed.
- */
-void revalidate_disk_size(struct gendisk *disk, bool verbose)
-{
-	struct block_device *bdev;
-
-	/*
-	 * Hidden disks don't have associated bdev so there's no point in
-	 * revalidating them.
-	 */
-	if (disk->flags & GENHD_FL_HIDDEN)
-		return;
-
-	bdev = bdget_disk(disk, 0);
-	if (bdev) {
-		check_disk_size_change(disk, bdev, verbose);
-		bdput(bdev);
-	}
-}
-
-void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors)
-{
-	spin_lock(&bdev->bd_size_lock);
-	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
-	spin_unlock(&bdev->bd_size_lock);
-}
-EXPORT_SYMBOL(bd_set_nr_sectors);
-
 static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part);
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate)
@@ -1300,8 +1236,6 @@ int bdev_disk_changed(struct block_device *bdev, bool invalidate)
 			disk->fops->revalidate_disk(disk);
 	}
 
-	check_disk_size_change(disk, bdev, !invalidate);
-
 	if (get_capacity(disk)) {
 		ret = blk_add_partitions(disk, bdev);
 		if (ret == -EAGAIN)
@@ -1344,10 +1278,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 			if (disk->fops->open)
 				ret = disk->fops->open(bdev, mode);
 
-			if (!ret) {
-				bd_set_nr_sectors(bdev, get_capacity(disk));
+			if (!ret)
 				set_init_blocksize(bdev);
-			}
 
 			/*
 			 * If the device is invalidated, rescan partition
@@ -1375,12 +1307,11 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 			mutex_unlock(&whole->bd_mutex);
 
 			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
-			if (!bdev->bd_part || !bdev->bd_part->nr_sects) {
+			if (!bdev_nr_sectors(bdev)) {
 				__blkdev_put(whole, mode, 1);
 				ret = -ENXIO;
 				goto out_clear;
 			}
-			bd_set_nr_sectors(bdev, bdev->bd_part->nr_sects);
 			set_init_blocksize(bdev);
 		}
 
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 00eff2f5180790..d4e7fab352bacb 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -3151,7 +3151,7 @@ static int f2fs_report_zone_cb(struct blk_zone *zone, unsigned int idx,
 static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
 {
 	struct block_device *bdev = FDEV(devi).bdev;
-	sector_t nr_sectors = bdev->bd_part->nr_sects;
+	sector_t nr_sectors = bdev_nr_sectors(bdev);
 	struct f2fs_report_zones_args rep_zone_arg;
 	int ret;
 
diff --git a/fs/pstore/blk.c b/fs/pstore/blk.c
index fcd5563dde063c..777a26f7bbe2aa 100644
--- a/fs/pstore/blk.c
+++ b/fs/pstore/blk.c
@@ -245,7 +245,7 @@ static struct block_device *psblk_get_bdev(void *holder,
 			return bdev;
 	}
 
-	nr_sects = part_nr_sects_read(bdev->bd_part);
+	nr_sects = bdev_nr_sectors(bdev);
 	if (!nr_sects) {
 		pr_err("not enough space for '%s'\n", blkdev);
 		blkdev_put(bdev, mode);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index dcf86a3d4dedc4..0dbd254bca51aa 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -52,15 +52,6 @@ struct partition_meta_info {
 
 struct hd_struct {
 	sector_t start_sect;
-	/*
-	 * nr_sects is protected by sequence counter. One might extend a
-	 * partition while IO is happening to it and update of nr_sects
-	 * can be non-atomic on 32bit machines with 64bit sector_t.
-	 */
-	sector_t nr_sects;
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	seqcount_t nr_sects_seq;
-#endif
 	unsigned long stamp;
 	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
@@ -255,13 +246,6 @@ static inline void disk_put_part(struct hd_struct *part)
 		put_device(part_to_dev(part));
 }
 
-static inline void hd_sects_seq_init(struct hd_struct *p)
-{
-#if BITS_PER_LONG==32 && defined(CONFIG_SMP)
-	seqcount_init(&p->nr_sects_seq);
-#endif
-}
-
 /*
  * Smarter partition iterator without context limits.
  */
@@ -319,13 +303,15 @@ static inline sector_t get_start_sect(struct block_device *bdev)
 {
 	return bdev->bd_part->start_sect;
 }
-static inline sector_t get_capacity(struct gendisk *disk)
+
+static inline sector_t bdev_nr_sectors(struct block_device *bdev)
 {
-	return disk->part0.nr_sects;
+	return i_size_read(bdev->bd_inode) >> 9;
 }
-static inline void set_capacity(struct gendisk *disk, sector_t size)
+
+static inline sector_t get_capacity(struct gendisk *disk)
 {
-	disk->part0.nr_sects = size;
+	return bdev_nr_sectors(disk->part0.bdev);
 }
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate);
@@ -359,10 +345,9 @@ int __register_blkdev(unsigned int major, const char *name,
 	__register_blkdev(major, name, NULL)
 void unregister_blkdev(unsigned int major, const char *name);
 
-void revalidate_disk_size(struct gendisk *disk, bool verbose);
 bool bdev_check_media_change(struct block_device *bdev);
 int __invalidate_device(struct block_device *bdev, bool kill_dirty);
-void bd_set_nr_sectors(struct block_device *bdev, sector_t sectors);
+void set_capacity(struct gendisk *disk, sector_t size);
 
 /* for drivers/char/raw.c: */
 int blkdev_ioctl(struct block_device *, fmode_t, unsigned, unsigned long);
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index f1022945e3460b..7076d588a50d69 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -465,7 +465,7 @@ static void blk_trace_setup_lba(struct blk_trace *bt,
 
 	if (part) {
 		bt->start_lba = part->start_sect;
-		bt->end_lba = part->start_sect + part->nr_sects;
+		bt->end_lba = part->start_sect + bdev_nr_sectors(bdev);
 	} else {
 		bt->start_lba = 0;
 		bt->end_lba = -1ULL;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36140.68038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYX-0004sJ-Ez; Tue, 24 Nov 2020 13:40:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36140.68038; Tue, 24 Nov 2020 13:40:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYV-0004rG-WD; Tue, 24 Nov 2020 13:40:28 +0000
Received: by outflank-mailman (input) for mailman id 36140;
 Tue, 24 Nov 2020 13:40:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQY-0000Qf-7I
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:14 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c9b9d27-27a1-4cb0-85c7-1227bcf234ee;
 Tue, 24 Nov 2020 13:29:24 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNb-0006jz-99; Tue, 24 Nov 2020 13:29:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQY-0000Qf-7I
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:14 +0000
X-Inumbo-ID: 9c9b9d27-27a1-4cb0-85c7-1227bcf234ee
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9c9b9d27-27a1-4cb0-85c7-1227bcf234ee;
	Tue, 24 Nov 2020 13:29:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=oY/Kzr07zSpX+zHQFFeLth/QBlUbzbRy+7rMoha2aW8=; b=KADdJG2RhR4/EqqmDR2o46+ook
	CJr/KoBRvzXuaGAzArmj0uFbtLWWacUi3GUHPNtW5gX+xXv2xhbfbJGb7ALMkupFI53CzOTk86848
	7nWfv4SWiWKmQYpt7ubrEpnqNpH/Gv4MXURNq6jxRjnMI+yadTr91z4UnJ3e/Ux9cofnNl7pzL3aV
	uEFPy1OOl7jAKXLcYeRLyijWGjLTq1guL80iDVCbay23g7rQhuwy5LOm2l6sdF2RN9/jvwR+0hra6
	HdSBQuMgnQPSPASpJLUUiANBaVoaraWMU/rc01jMlFng4Fsq8t435FuUVFfM1FddcNMS4tYgD39ll
	jo+Vc6wg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNb-0006jz-99; Tue, 24 Nov 2020 13:29:11 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 45/45] block: stop using bdget_disk for partition 0
Date: Tue, 24 Nov 2020 14:27:51 +0100
Message-Id: <20201124132751.3747337-46-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

We can just dereference the pointer in struct gendisk instead.  Also
remove the now unused export.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 block/genhd.c                   |  1 -
 drivers/block/nbd.c             |  4 +---
 drivers/block/xen-blkfront.c    | 20 +++++---------------
 drivers/block/zram/zram_drv.c   | 18 +++---------------
 drivers/md/dm.c                 | 16 ++--------------
 drivers/s390/block/dasd_ioctl.c |  5 ++---
 6 files changed, 13 insertions(+), 51 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index e1f67b9951f0ec..9a3ad1895888c6 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -912,7 +912,6 @@ struct block_device *bdget_disk(struct gendisk *disk, int partno)
 
 	return bdev;
 }
-EXPORT_SYMBOL(bdget_disk);
 
 /*
  * print a full list of all partitions - intended for places where the root
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 014683968ce174..92f84ed0ba9eb6 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1488,12 +1488,10 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
 static void nbd_release(struct gendisk *disk, fmode_t mode)
 {
 	struct nbd_device *nbd = disk->private_data;
-	struct block_device *bdev = bdget_disk(disk, 0);
 
 	if (test_bit(NBD_RT_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) &&
-			bdev->bd_openers == 0)
+			disk->part0->bd_openers == 0)
 		nbd_disconnect_and_put(nbd);
-	bdput(bdev);
 
 	nbd_config_put(nbd);
 	nbd_put(nbd);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 79521e33d30ed5..188e0b47534bcf 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2153,7 +2153,7 @@ static void blkfront_closing(struct blkfront_info *info)
 	}
 
 	if (info->gd)
-		bdev = bdget_disk(info->gd, 0);
+		bdev = bdgrab(info->gd->part0);
 
 	mutex_unlock(&info->mutex);
 
@@ -2518,7 +2518,7 @@ static int blkfront_remove(struct xenbus_device *xbdev)
 
 	disk = info->gd;
 	if (disk)
-		bdev = bdget_disk(disk, 0);
+		bdev = bdgrab(disk->part0);
 
 	info->xbdev = NULL;
 	mutex_unlock(&info->mutex);
@@ -2595,19 +2595,11 @@ static int blkif_open(struct block_device *bdev, fmode_t mode)
 static void blkif_release(struct gendisk *disk, fmode_t mode)
 {
 	struct blkfront_info *info = disk->private_data;
-	struct block_device *bdev;
 	struct xenbus_device *xbdev;
 
 	mutex_lock(&blkfront_mutex);
-
-	bdev = bdget_disk(disk, 0);
-
-	if (!bdev) {
-		WARN(1, "Block device %s yanked out from us!\n", disk->disk_name);
+	if (disk->part0->bd_openers)
 		goto out_mutex;
-	}
-	if (bdev->bd_openers)
-		goto out;
 
 	/*
 	 * Check if we have been instructed to close. We will have
@@ -2619,7 +2611,7 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 
 	if (xbdev && xbdev->state == XenbusStateClosing) {
 		/* pending switch to state closed */
-		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
+		dev_info(disk_to_dev(disk), "releasing disk\n");
 		xlvbd_release_gendisk(info);
 		xenbus_frontend_closed(info->xbdev);
  	}
@@ -2628,14 +2620,12 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 
 	if (!xbdev) {
 		/* sudden device removal */
-		dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
+		dev_info(disk_to_dev(disk), "releasing disk\n");
 		xlvbd_release_gendisk(info);
 		disk->private_data = NULL;
 		free_info(info);
 	}
 
-out:
-	bdput(bdev);
 out_mutex:
 	mutex_unlock(&blkfront_mutex);
 }
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 01757f9578dcb8..56024905bd242c 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1748,7 +1748,7 @@ static ssize_t reset_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
 	struct zram *zram = dev_to_zram(dev);
-	struct block_device *bdev;
+	struct block_device *bdev = zram->disk->part0;
 	unsigned short do_reset;
 	int ret = 0;
 
@@ -1758,17 +1758,12 @@ static ssize_t reset_store(struct device *dev,
 	if (!do_reset)
 		return -EINVAL;
 
-	bdev = bdget_disk(zram->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-
 	mutex_lock(&bdev->bd_mutex);
 	if (bdev->bd_openers)
 		ret = -EBUSY;
 	else
 		zram_reset_device(zram);
 	mutex_unlock(&bdev->bd_mutex);
-	bdput(bdev);
 
 	return ret ? ret : len;
 }
@@ -1933,15 +1928,8 @@ static int zram_add(void)
 
 static int zram_remove(struct zram *zram)
 {
-	struct block_device *bdev = bdget_disk(zram->disk, 0);
-
-	if (bdev) {
-		if (bdev->bd_openers) {
-			bdput(bdev);
-			return -EBUSY;
-		}
-		bdput(bdev);
-	}
+	if (zram->disk->part0->bd_openers)
+		return -EBUSY;
 
 	del_gendisk(zram->disk);
 	zram_debugfs_unregister(zram);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 176adcff56b380..ed7e836efbcdbc 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2375,16 +2375,11 @@ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table)
  */
 static int lock_fs(struct mapped_device *md)
 {
-	struct block_device *bdev;
 	int r;
 
 	WARN_ON(test_bit(DMF_FROZEN, &md->flags));
 
-	bdev = bdget_disk(md->disk, 0);
-	if (!bdev)
-		return -ENOMEM;
-	r = freeze_bdev(bdev);
-	bdput(bdev);
+	r = freeze_bdev(md->disk->part0);
 	if (!r)
 		set_bit(DMF_FROZEN, &md->flags);
 	return r;
@@ -2392,16 +2387,9 @@ static int lock_fs(struct mapped_device *md)
 
 static void unlock_fs(struct mapped_device *md)
 {
-	struct block_device *bdev;
-
 	if (!test_bit(DMF_FROZEN, &md->flags))
 		return;
-
-	bdev = bdget_disk(md->disk, 0);
-	if (!bdev)
-		return;
-	thaw_bdev(bdev);
-	bdput(bdev);
+	thaw_bdev(md->disk->part0);
 	clear_bit(DMF_FROZEN, &md->flags);
 }
 
diff --git a/drivers/s390/block/dasd_ioctl.c b/drivers/s390/block/dasd_ioctl.c
index 304eba1acf163c..9f642440894655 100644
--- a/drivers/s390/block/dasd_ioctl.c
+++ b/drivers/s390/block/dasd_ioctl.c
@@ -220,9 +220,8 @@ dasd_format(struct dasd_block *block, struct format_data_t *fdata)
 	 * enabling the device later.
 	 */
 	if (fdata->start_unit == 0) {
-		struct block_device *bdev = bdget_disk(block->gdp, 0);
-		bdev->bd_inode->i_blkbits = blksize_bits(fdata->blksize);
-		bdput(bdev);
+		block->gdp->part0->bd_inode->i_blkbits =
+			blksize_bits(fdata->blksize);
 	}
 
 	rc = base->discipline->format_device(base, fdata, 1);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36141.68054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYa-00050w-Or; Tue, 24 Nov 2020 13:40:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36141.68054; Tue, 24 Nov 2020 13:40:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYY-0004z5-P5; Tue, 24 Nov 2020 13:40:30 +0000
Received: by outflank-mailman (input) for mailman id 36141;
 Tue, 24 Nov 2020 13:40:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQd-0000Qf-7X
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:19 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f880fac1-f1cc-4be6-9256-93d08272d6b8;
 Tue, 24 Nov 2020 13:29:23 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNM-0006gV-JV; Tue, 24 Nov 2020 13:28:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQd-0000Qf-7X
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:19 +0000
X-Inumbo-ID: f880fac1-f1cc-4be6-9256-93d08272d6b8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f880fac1-f1cc-4be6-9256-93d08272d6b8;
	Tue, 24 Nov 2020 13:29:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=hd58vTxwYflslTooUzvIkiEUZh8oqfMehzvRdiTDmRg=; b=YCNB4naCKZWhAaCsg1kYokxu80
	rDFjtrYT7lEuIA8sVbt3CobuCxtFPj2/8pE/tRoK9WUNzb1EdlB5LEPyHhE8AJLIRh3/PtVC8ov/2
	mjdWUOy4bTyT5WfIjImo/Xyw0j7f7zyydciqCBDbZljVfC1fZErmHDGOhwC9Dzd0ePKxnsPI1bPv+
	6cChoKXYn89Zvu9rflA5Px/7sSzxUQ2digOs8X0WWZZ+SFnJJoixr4SYXxUPnXKXDp3K+ri7V80Kr
	UHQMztu7BO6hhSivmU18rXD88ntr13ZGE1Ci0+MS7lPzbgMA19Bp2MZIaWuGvdI9AR6vpqR8mTh6B
	IrrZC5xw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNM-0006gV-JV; Tue, 24 Nov 2020 13:28:57 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 37/45] block: allocate struct hd_struct as part of struct bdev_inode
Date: Tue, 24 Nov 2020 14:27:43 +0100
Message-Id: <20201124132751.3747337-38-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Allocate the hd_struct together with struct block_device to pre-load
the lifetime rule changes in preparation for merging the two structures.

Note that part0 was previously embedded in struct gendisk, but is now a
separate allocation, and already points to the block_device instead of
the hd_struct.  The lifetime of struct gendisk is still controlled by
the struct device embedded in the part0 hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c                   | 16 ++++---
 block/blk-flush.c                  |  2 +-
 block/blk-merge.c                  |  2 -
 block/blk.h                        | 21 ----------
 block/genhd.c                      | 50 +++++++++-------------
 block/partitions/core.c            | 67 +++---------------------------
 drivers/block/drbd/drbd_receiver.c |  2 +-
 drivers/block/drbd/drbd_worker.c   |  3 +-
 drivers/block/zram/zram_drv.c      |  2 +-
 drivers/md/dm.c                    |  4 +-
 drivers/md/md.c                    |  2 +-
 fs/block_dev.c                     | 37 +++++------------
 include/linux/blk_types.h          |  2 +-
 include/linux/genhd.h              | 14 +++----
 include/linux/part_stat.h          |  4 +-
 15 files changed, 60 insertions(+), 168 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d64ffcb6f9ae5d..9ea70275fc1cfe 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -714,7 +714,8 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 
 static noinline int should_fail_bio(struct bio *bio)
 {
-	if (should_fail_request(&bio->bi_disk->part0, bio->bi_iter.bi_size))
+	if (should_fail_request(bio->bi_disk->part0->bd_part,
+			bio->bi_iter.bi_size))
 		return -EIO;
 	return 0;
 }
@@ -831,7 +832,7 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		if (unlikely(blk_partition_remap(bio)))
 			goto end_io;
 	} else {
-		if (unlikely(bio_check_ro(bio, &bio->bi_disk->part0)))
+		if (unlikely(bio_check_ro(bio, bio->bi_disk->part0->bd_part)))
 			goto end_io;
 		if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk))))
 			goto end_io;
@@ -1203,7 +1204,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 		return ret;
 
 	if (rq->rq_disk &&
-	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
+	    should_fail_request(rq->rq_disk->part0->bd_part, blk_rq_bytes(rq)))
 		return BLK_STS_IOERR;
 
 	if (blk_crypto_insert_cloned_request(rq))
@@ -1272,7 +1273,7 @@ static void update_io_ticks(struct hd_struct *part, unsigned long now, bool end)
 			__part_stat_add(part, io_ticks, end ? now - stamp : 1);
 	}
 	if (part->partno) {
-		part = &part_to_disk(part)->part0;
+		part = part_to_disk(part)->part0->bd_part;
 		goto again;
 	}
 }
@@ -1309,8 +1310,6 @@ void blk_account_io_done(struct request *req, u64 now)
 		part_stat_inc(part, ios[sgrp]);
 		part_stat_add(part, nsecs[sgrp], now - req->start_time_ns);
 		part_stat_unlock();
-
-		hd_struct_put(part);
 	}
 }
 
@@ -1354,7 +1353,7 @@ EXPORT_SYMBOL_GPL(part_start_io_acct);
 unsigned long disk_start_io_acct(struct gendisk *disk, unsigned int sectors,
 				 unsigned int op)
 {
-	return __part_start_io_acct(&disk->part0, sectors, op);
+	return __part_start_io_acct(disk->part0->bd_part, sectors, op);
 }
 EXPORT_SYMBOL(disk_start_io_acct);
 
@@ -1376,14 +1375,13 @@ void part_end_io_acct(struct hd_struct *part, struct bio *bio,
 		      unsigned long start_time)
 {
 	__part_end_io_acct(part, bio_op(bio), start_time);
-	hd_struct_put(part);
 }
 EXPORT_SYMBOL_GPL(part_end_io_acct);
 
 void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 		      unsigned long start_time)
 {
-	__part_end_io_acct(&disk->part0, op, start_time);
+	__part_end_io_acct(disk->part0->bd_part, op, start_time);
 }
 EXPORT_SYMBOL(disk_end_io_acct);
 
diff --git a/block/blk-flush.c b/block/blk-flush.c
index e32958f0b68750..fcd0a60574dff8 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -139,7 +139,7 @@ static void blk_flush_queue_rq(struct request *rq, bool add_front)
 
 static void blk_account_io_flush(struct request *rq)
 {
-	struct hd_struct *part = &rq->rq_disk->part0;
+	struct hd_struct *part = rq->rq_disk->part0->bd_part;
 
 	part_stat_lock();
 	part_stat_inc(part, ios[STAT_FLUSH]);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e458060337..cb351ab9b77dbd 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -683,8 +683,6 @@ static void blk_account_io_merge_request(struct request *req)
 		part_stat_lock();
 		part_stat_inc(req->part, merges[op_stat_group(req_op(req))]);
 		part_stat_unlock();
-
-		hd_struct_put(req->part);
 	}
 }
 
diff --git a/block/blk.h b/block/blk.h
index 0bd4b58bcbaf77..32ac41f7557fcc 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -363,27 +363,6 @@ int bdev_del_partition(struct block_device *bdev, int partno);
 int bdev_resize_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length);
 int disk_expand_part_tbl(struct gendisk *disk, int target);
-int hd_ref_init(struct hd_struct *part);
-
-/* no need to get/put refcount of part0 */
-static inline int hd_struct_try_get(struct hd_struct *part)
-{
-	if (part->partno)
-		return percpu_ref_tryget_live(&part->ref);
-	return 1;
-}
-
-static inline void hd_struct_put(struct hd_struct *part)
-{
-	if (part->partno)
-		percpu_ref_put(&part->ref);
-}
-
-static inline void hd_free_part(struct hd_struct *part)
-{
-	bdput(part->bdev);
-	percpu_ref_exit(&part->ref);
-}
 
 int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
diff --git a/block/genhd.c b/block/genhd.c
index 8aed77cc8ad169..3bbf6d3a69ec63 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -40,7 +40,7 @@ static void disk_release_events(struct gendisk *disk);
 
 void set_capacity(struct gendisk *disk, sector_t sectors)
 {
-	struct block_device *bdev = disk->part0.bdev;
+	struct block_device *bdev = disk->part0;
 
 	spin_lock(&bdev->bd_size_lock);
 	i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
@@ -308,9 +308,7 @@ static inline int sector_in_part(struct hd_struct *part, sector_t sector)
  * primarily used for stats accounting.
  *
  * CONTEXT:
- * RCU read locked.  The returned partition pointer is always valid
- * because its refcount is grabbed except for part0, which lifetime
- * is same with the disk.
+ * RCU read locked.
  *
  * RETURNS:
  * Found partition on success, part0 is returned if no partition matches
@@ -326,26 +324,19 @@ struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector)
 	ptbl = rcu_dereference(disk->part_tbl);
 
 	part = rcu_dereference(ptbl->last_lookup);
-	if (part && sector_in_part(part, sector) && hd_struct_try_get(part))
+	if (part && sector_in_part(part, sector))
 		goto out_unlock;
 
 	for (i = 1; i < ptbl->len; i++) {
 		part = rcu_dereference(ptbl->part[i]);
 
 		if (part && sector_in_part(part, sector)) {
-			/*
-			 * only live partition can be cached for lookup,
-			 * so use-after-free on cached & deleting partition
-			 * can be avoided
-			 */
-			if (!hd_struct_try_get(part))
-				break;
 			rcu_assign_pointer(ptbl->last_lookup, part);
 			goto out_unlock;
 		}
 	}
 
-	part = &disk->part0;
+	part = disk->part0->bd_part;
 out_unlock:
 	rcu_read_unlock();
 	return part;
@@ -671,8 +662,8 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	 */
 	pm_runtime_set_memalloc_noio(ddev, true);
 
-	disk->part0.bdev->bd_holder_dir =
-			kobject_create_and_add("holders", &ddev->kobj);
+	disk->part0->bd_holder_dir =
+		kobject_create_and_add("holders", &ddev->kobj);
 	disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
 
 	if (disk->flags & GENHD_FL_HIDDEN) {
@@ -738,7 +729,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 
 	disk->flags |= GENHD_FL_UP;
 
-	retval = blk_alloc_devt(&disk->part0, &devt);
+	retval = blk_alloc_devt(disk->part0->bd_part, &devt);
 	if (retval) {
 		WARN_ON(1);
 		return;
@@ -765,7 +756,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		bdev_add(disk->part0.bdev, devt);
+		bdev_add(disk->part0, devt);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -872,11 +863,11 @@ void del_gendisk(struct gendisk *disk)
 
 	blk_unregister_queue(disk);
 
-	kobject_put(disk->part0.bdev->bd_holder_dir);
+	kobject_put(disk->part0->bd_holder_dir);
 	kobject_put(disk->slave_dir);
 
-	part_stat_set_all(&disk->part0, 0);
-	disk->part0.bdev->bd_stamp = 0;
+	part_stat_set_all(disk->part0->bd_part, 0);
+	disk->part0->bd_stamp = 0;
 	if (!sysfs_deprecated)
 		sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
 	pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
@@ -989,7 +980,7 @@ void __init printk_all_partitions(void)
 		 */
 		disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0);
 		while ((part = disk_part_iter_next(&piter))) {
-			bool is_part0 = part == &disk->part0;
+			bool is_part0 = part == disk->part0->bd_part;
 
 			printk("%s%s %10llu %s %s", is_part0 ? "" : "  ",
 			       bdevt_str(part_devt(part), devt_buf),
@@ -1444,7 +1435,7 @@ static void disk_release(struct device *dev)
 	disk_release_events(disk);
 	kfree(disk->random);
 	disk_replace_part_tbl(disk, NULL);
-	hd_free_part(&disk->part0);
+	bdput(disk->part0);
 	if (disk->queue)
 		blk_put_queue(disk->queue);
 	kfree_rcu(disk, rcu);
@@ -1610,8 +1601,8 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk)
 		return NULL;
 
-	disk->part0.bdev = bdev_alloc(disk, 0);
-	if (!disk->part0.bdev)
+	disk->part0 = bdev_alloc(disk, 0);
+	if (!disk->part0)
 		goto out_free_disk;
 
 	disk->node_id = node_id;
@@ -1619,10 +1610,7 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 		goto out_bdput;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
-	rcu_assign_pointer(ptbl->part[0], &disk->part0);
-
-	if (hd_ref_init(&disk->part0))
-		goto out_bdput;
+	rcu_assign_pointer(ptbl->part[0], disk->part0->bd_part);
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
@@ -1632,7 +1620,7 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	return disk;
 
 out_bdput:
-	bdput(disk->part0.bdev);
+	bdput(disk->part0);
 out_free_disk:
 	kfree(disk);
 	return NULL;
@@ -1671,9 +1659,9 @@ void set_disk_ro(struct gendisk *disk, int flag)
 	struct disk_part_iter piter;
 	struct hd_struct *part;
 
-	if (disk->part0.bdev->bd_read_only != flag) {
+	if (disk->part0->bd_read_only != flag) {
 		set_disk_ro_uevent(disk, flag);
-		disk->part0.bdev->bd_read_only = flag;
+		disk->part0->bd_read_only = flag;
 	}
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
diff --git a/block/partitions/core.c b/block/partitions/core.c
index fd00428e437a63..4b0352cb29e132 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -265,9 +265,9 @@ static const struct attribute_group *part_attr_groups[] = {
 static void part_release(struct device *dev)
 {
 	struct hd_struct *p = dev_to_part(dev);
+
 	blk_free_devt(dev->devt);
-	hd_free_part(p);
-	kfree(p);
+	bdput(p->bdev);
 }
 
 static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
@@ -288,46 +288,6 @@ struct device_type part_type = {
 	.uevent		= part_uevent,
 };
 
-static void hd_struct_free_work(struct work_struct *work)
-{
-	struct hd_struct *part =
-		container_of(to_rcu_work(work), struct hd_struct, rcu_work);
-	struct gendisk *disk = part_to_disk(part);
-
-	/*
-	 * Release the disk reference acquired in delete_partition here.
-	 * We can't release it in hd_struct_free because the final put_device
-	 * needs process context and thus can't be run directly from a
-	 * percpu_ref ->release handler.
-	 */
-	put_device(disk_to_dev(disk));
-
-	part->bdev->bd_start_sect = 0;
-	bdev_set_nr_sectors(part->bdev, 0);
-	part_stat_set_all(part, 0);
-	put_device(part_to_dev(part));
-}
-
-static void hd_struct_free(struct percpu_ref *ref)
-{
-	struct hd_struct *part = container_of(ref, struct hd_struct, ref);
-	struct gendisk *disk = part_to_disk(part);
-	struct disk_part_tbl *ptbl =
-		rcu_dereference_protected(disk->part_tbl, 1);
-
-	rcu_assign_pointer(ptbl->last_lookup, NULL);
-
-	INIT_RCU_WORK(&part->rcu_work, hd_struct_free_work);
-	queue_rcu_work(system_wq, &part->rcu_work);
-}
-
-int hd_ref_init(struct hd_struct *part)
-{
-	if (percpu_ref_init(&part->ref, hd_struct_free, 0, GFP_KERNEL))
-		return -ENOMEM;
-	return 0;
-}
-
 /*
  * Must be called either with bd_mutex held, before a disk can be opened or
  * after all disk users are gone.
@@ -342,8 +302,8 @@ void delete_partition(struct hd_struct *part)
 	 * ->part_tbl is referenced in this part's release handler, so
 	 *  we have to hold the disk device
 	 */
-	get_device(disk_to_dev(disk));
 	rcu_assign_pointer(ptbl->part[part->partno], NULL);
+	rcu_assign_pointer(ptbl->last_lookup, NULL);
 	kobject_put(part->bdev->bd_holder_dir);
 	device_del(part_to_dev(part));
 
@@ -353,7 +313,7 @@ void delete_partition(struct hd_struct *part)
 	 */
 	remove_inode_hash(part->bdev->bd_inode);
 
-	percpu_ref_kill(&part->ref);
+	put_device(part_to_dev(part));
 }
 
 static ssize_t whole_disk_show(struct device *dev,
@@ -406,15 +366,11 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (ptbl->part[partno])
 		return ERR_PTR(-EBUSY);
 
-	p = kzalloc(sizeof(*p), GFP_KERNEL);
-	if (!p)
-		return ERR_PTR(-EBUSY);
-
 	bdev = bdev_alloc(disk, partno);
 	if (!bdev)
-		goto out_free;
-	p->bdev = bdev;
+		return ERR_PTR(-ENOMEM);
 
+	p = bdev->bd_part;
 	pdev = part_to_dev(p);
 
 	bdev->bd_start_sect = start;
@@ -463,13 +419,6 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 			goto out_del;
 	}
 
-	err = hd_ref_init(p);
-	if (err) {
-		if (flags & ADDPART_FLAG_WHOLEDISK)
-			goto out_remove_file;
-		goto out_del;
-	}
-
 	/* everything is up and running, commence */
 	bdev_add(bdev, devt);
 	rcu_assign_pointer(ptbl->part[partno], p);
@@ -481,11 +430,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 out_bdput:
 	bdput(bdev);
-out_free:
-	kfree(p);
 	return ERR_PTR(err);
-out_remove_file:
-	device_remove_file(pdev, &dev_attr_whole_disk);
 out_del:
 	kobject_put(bdev->bd_holder_dir);
 	device_del(pdev);
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index dc333dbe523281..9e5c2fdfda3629 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -2802,7 +2802,7 @@ bool drbd_rs_c_min_rate_throttle(struct drbd_device *device)
 	if (c_min_rate == 0)
 		return false;
 
-	curr_events = (int)part_stat_read_accum(&disk->part0, sectors) -
+	curr_events = (int)part_stat_read_accum(disk->part0->bd_part, sectors) -
 			atomic_read(&device->rs_sect_ev);
 
 	if (atomic_read(&device->ap_actlog_cnt)
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index ba56f3f05312f0..343f56b86bb766 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -1678,7 +1678,8 @@ void drbd_rs_controller_reset(struct drbd_device *device)
 	atomic_set(&device->rs_sect_in, 0);
 	atomic_set(&device->rs_sect_ev, 0);
 	device->rs_in_flight = 0;
-	device->rs_last_events = (int)part_stat_read_accum(&disk->part0, sectors);
+	device->rs_last_events =
+		(int)part_stat_read_accum(disk->part0->bd_part, sectors);
 
 	/* Updating the RCU protected object in place is necessary since
 	   this function gets called from atomic context.
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 88baa6158eaee1..153858734cd47d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1687,7 +1687,7 @@ static void zram_reset_device(struct zram *zram)
 	zram->disksize = 0;
 
 	set_capacity_and_notify(zram->disk, 0);
-	part_stat_set_all(&zram->disk->part0, 0);
+	part_stat_set_all(zram->disk->part0->bd_part, 0);
 
 	up_write(&zram->init_lock);
 	/* I/O operation under all of CPU are done so let's free */
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 48051db006f30c..1b2db4d530ea71 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1607,7 +1607,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
 				 * (by eliminating DM's splitting and just using bio_split)
 				 */
 				part_stat_lock();
-				__dm_part_stat_sub(&dm_disk(md)->part0,
+				__dm_part_stat_sub(dm_disk(md)->part0->bd_part,
 						   sectors[op_stat_group(bio_op(bio))], ci.sector_count);
 				part_stat_unlock();
 
@@ -2242,7 +2242,7 @@ EXPORT_SYMBOL_GPL(dm_put);
 static bool md_in_flight_bios(struct mapped_device *md)
 {
 	int cpu;
-	struct hd_struct *part = &dm_disk(md)->part0;
+	struct hd_struct *part = dm_disk(md)->part0->bd_part;
 	long sum = 0;
 
 	for_each_possible_cpu(cpu) {
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7ce6047c856ea2..3696c2d77a4dd7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8441,7 +8441,7 @@ static int is_mddev_idle(struct mddev *mddev, int init)
 	rcu_read_lock();
 	rdev_for_each_rcu(rdev, mddev) {
 		struct gendisk *disk = rdev->bdev->bd_disk;
-		curr_events = (int)part_stat_read_accum(&disk->part0, sectors) -
+		curr_events = (int)part_stat_read_accum(disk->part0->bd_part, sectors) -
 			      atomic_read(&disk->sync_io);
 		/* sync IO will cause sync_io to increase before the disk_stats
 		 * as sync_io is counted when a request starts, and
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 0b8d6009486643..1538b20ca4bd43 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -39,6 +39,7 @@
 
 struct bdev_inode {
 	struct block_device bdev;
+	struct hd_struct hd;
 	struct inode vfs_inode;
 };
 
@@ -885,6 +886,9 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 		iput(inode);
 		return NULL;
 	}
+	bdev->bd_part = &BDEV_I(inode)->hd;
+	memset(bdev->bd_part, 0, sizeof(*bdev->bd_part));
+	bdev->bd_part->bdev = bdev;
 	return bdev;
 }
 
@@ -1275,15 +1279,10 @@ EXPORT_SYMBOL_GPL(bdev_disk_changed);
 static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 {
 	struct gendisk *disk = bdev->bd_disk;
-	int ret;
+	int ret = 0;
 
 	if (!bdev->bd_openers) {
 		if (!bdev_is_partition(bdev)) {
-			ret = -ENXIO;
-			bdev->bd_part = disk_get_part(disk, 0);
-			if (!bdev->bd_part)
-				goto out_clear;
-
 			ret = 0;
 			if (disk->fops->open)
 				ret = disk->fops->open(bdev, mode);
@@ -1302,7 +1301,7 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 				bdev_disk_changed(bdev, ret == -ENOMEDIUM);
 
 			if (ret)
-				goto out_clear;
+				return ret;
 		} else {
 			struct block_device *whole = bdget_disk(disk, 0);
 
@@ -1311,16 +1310,14 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 			if (ret) {
 				mutex_unlock(&whole->bd_mutex);
 				bdput(whole);
-				goto out_clear;
+				return ret;
 			}
 			whole->bd_part_count++;
 			mutex_unlock(&whole->bd_mutex);
 
-			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
 			if (!bdev_nr_sectors(bdev)) {
 				__blkdev_put(whole, mode, 1);
-				ret = -ENXIO;
-				goto out_clear;
+				return -ENXIO;
 			}
 			set_init_blocksize(bdev);
 		}
@@ -1329,7 +1326,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 			bdev->bd_bdi = bdi_get(disk->queue->backing_dev_info);
 	} else {
 		if (!bdev_is_partition(bdev)) {
-			ret = 0;
 			if (bdev->bd_disk->fops->open)
 				ret = bdev->bd_disk->fops->open(bdev, mode);
 			/* the same as first opener case, read comment there */
@@ -1342,11 +1338,6 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 	}
 	bdev->bd_openers++;
 	return 0;
-
- out_clear:
-	disk_put_part(bdev->bd_part);
-	bdev->bd_part = NULL;
-	return ret;
 }
 
 static struct block_device *get_bdev_disk_and_module(dev_t dev)
@@ -1569,18 +1560,12 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		sync_blockdev(bdev);
 		kill_bdev(bdev);
 		bdev_write_inode(bdev);
-
-		if (!bdev_is_partition(bdev) && disk->fops->release)
-			disk->fops->release(disk, mode);
-
-		disk_put_part(bdev->bd_part);
-		bdev->bd_part = NULL;
 		if (bdev_is_partition(bdev))
 			victim = bdev_whole(bdev);
-	} else {
-		if (!bdev_is_partition(bdev) && disk->fops->release)
-			disk->fops->release(disk, mode);
 	}
+
+	if (!bdev_is_partition(bdev) && disk->fops->release)
+		disk->fops->release(disk, mode);
 	mutex_unlock(&bdev->bd_mutex);
 	bdput(bdev);
 	if (victim)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 758cf71c9aa2a6..6edea5c1625909 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -59,7 +59,7 @@ struct block_device {
 } __randomize_layout;
 
 #define bdev_whole(_bdev) \
-	((_bdev)->bd_disk->part0.bdev)
+	((_bdev)->bd_disk->part0)
 
 #define bdev_kobj(_bdev) \
 	(&part_to_dev((_bdev)->bd_part)->kobj)
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 08d00b526b0a3b..6e16c264439bdb 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -19,11 +19,12 @@
 #include <linux/blk_types.h>
 #include <asm/local.h>
 
-#define dev_to_disk(device)	container_of((device), struct gendisk, part0.__dev)
 #define dev_to_part(device)	container_of((device), struct hd_struct, __dev)
-#define disk_to_dev(disk)	(&(disk)->part0.__dev)
 #define part_to_dev(part)	(&((part)->__dev))
 
+#define dev_to_disk(device)	(dev_to_part(device)->bdev->bd_disk)
+#define disk_to_dev(disk)	(part_to_dev((disk)->part0->bd_part))
+
 extern const struct device_type disk_type;
 extern struct device_type part_type;
 extern struct class block_class;
@@ -51,12 +52,9 @@ struct partition_meta_info {
 };
 
 struct hd_struct {
-	struct percpu_ref ref;
-
 	struct block_device *bdev;
 	struct device __dev;
 	int partno;
-	struct rcu_work rcu_work;
 };
 
 /**
@@ -168,7 +166,7 @@ struct gendisk {
 	 * helpers.
 	 */
 	struct disk_part_tbl __rcu *part_tbl;
-	struct hd_struct part0;
+	struct block_device *part0;
 
 	const struct block_device_operations *fops;
 	struct request_queue *queue;
@@ -279,7 +277,7 @@ extern void set_disk_ro(struct gendisk *disk, int flag);
 
 static inline int get_disk_ro(struct gendisk *disk)
 {
-	return disk->part0.bdev->bd_read_only;
+	return disk->part0->bd_read_only;
 }
 
 extern void disk_block_events(struct gendisk *disk);
@@ -303,7 +301,7 @@ static inline sector_t bdev_nr_sectors(struct block_device *bdev)
 
 static inline sector_t get_capacity(struct gendisk *disk)
 {
-	return bdev_nr_sectors(disk->part0.bdev);
+	return bdev_nr_sectors(disk->part0);
 }
 
 int bdev_disk_changed(struct block_device *bdev, bool invalidate);
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index 87ad60106e1db0..680de036691ef9 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -59,8 +59,8 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 #define part_stat_add(part, field, addnd)	do {			\
 	__part_stat_add((part), field, addnd);				\
 	if ((part)->partno)						\
-		__part_stat_add(&part_to_disk((part))->part0,		\
-				field, addnd);				\
+		__part_stat_add(part_to_disk((part))->part0->bd_part,	\
+			field, addnd); \
 } while (0)
 
 #define part_stat_dec(part, field)					\
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36142.68063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYd-000581-3O; Tue, 24 Nov 2020 13:40:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36142.68063; Tue, 24 Nov 2020 13:40:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYb-00056i-AK; Tue, 24 Nov 2020 13:40:33 +0000
Received: by outflank-mailman (input) for mailman id 36142;
 Tue, 24 Nov 2020 13:40:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOc-0000Qf-3y
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:14 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 357d6660-927d-4846-ad24-bd638e6c3be1;
 Tue, 24 Nov 2020 13:28:46 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMv-0006aF-Ta; Tue, 24 Nov 2020 13:28:30 +0000
X-Inumbo-ID: 357d6660-927d-4846-ad24-bd638e6c3be1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=zR55v5+OPdHMJV10YpgHDjRLnmPcRL239K9WcigGn28=; b=GoMq4i3DWTppD5+H991Mkc8eOI
	zXC2wNKRdrsXtbL3xkHLmQBdPkGyNUj+ZaJZOCBn+g6J1pITt8IoVkiAgsYVxTpzhhmlquNScHtt7
	/4uIFoYnWr57MB8JVA155Kb+LTXW/jlHArLqnteTFwWa0Q4GFj3A+N6seyWMw8YTNiVQUuZ/OAvJf
	TyIgdcg8js28joqni8x6cp8aP6JQg4hNmQ7nlJus5Ndomu80A+dZQopyj+rowN5rQd+9R3OpFGp7X
	Xr57uEfF/ebbfSsC2kZR0v9CJsRFA+MkDloI2+UlOueh2HrisbZRhS7Ey0njvx4JspTpTjpTJm6jo
	GLwvK3FQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 23/45] block: remove i_bdev
Date: Tue, 24 Nov 2020 14:27:29 +0100
Message-Id: <20201124132751.3747337-24-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Switch the block device lookup interfaces to directly work with a dev_t
so that struct block_device references are only acquired by the
blkdev_get variants (and the blk-cgroup special case).  This means that
we now don't need an extra reference in the inode and can generally
simplify handling of struct block_device to keep the lookups contained
in the core block layer code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/ioctl.c                                |   3 +-
 drivers/block/loop.c                         |   8 +-
 drivers/md/bcache/super.c                    |  20 +-
 drivers/md/dm-table.c                        |   9 +-
 drivers/mtd/mtdsuper.c                       |  17 +-
 drivers/target/target_core_file.c            |   6 +-
 drivers/usb/gadget/function/storage_common.c |   8 +-
 fs/block_dev.c                               | 195 +++++--------------
 fs/btrfs/volumes.c                           |  13 +-
 fs/inode.c                                   |   3 -
 fs/internal.h                                |   7 +-
 fs/io_uring.c                                |  10 +-
 fs/pipe.c                                    |   5 +-
 fs/quota/quota.c                             |  19 +-
 fs/statfs.c                                  |   2 +-
 fs/super.c                                   |  37 ++--
 include/linux/blkdev.h                       |   2 +-
 include/linux/fs.h                           |   1 -
 18 files changed, 114 insertions(+), 251 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 0c09bb7a6ff35f..a6d8171221c7dc 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -590,8 +590,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
 {
 	int ret;
 	void __user *argp = compat_ptr(arg);
-	struct inode *inode = file->f_mapping->host;
-	struct block_device *bdev = inode->i_bdev;
+	struct block_device *bdev = I_BDEV(file->f_mapping->host);
 	struct gendisk *disk = bdev->bd_disk;
 	fmode_t mode = file->f_mode;
 	loff_t size;
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index b42c728620c9e4..26c7aafba7c5f8 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -675,10 +675,10 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
 	while (is_loop_device(f)) {
 		struct loop_device *l;
 
-		if (f->f_mapping->host->i_bdev == bdev)
+		if (f->f_mapping->host->i_rdev == bdev->bd_dev)
 			return -EBADF;
 
-		l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+		l = I_BDEV(f->f_mapping->host)->bd_disk->private_data;
 		if (l->lo_state != Lo_bound) {
 			return -EINVAL;
 		}
@@ -885,9 +885,7 @@ static void loop_config_discard(struct loop_device *lo)
 	 * file-backed loop devices: discarded regions read back as zero.
 	 */
 	if (S_ISBLK(inode->i_mode) && !lo->lo_encrypt_key_size) {
-		struct request_queue *backingq;
-
-		backingq = bdev_get_queue(inode->i_bdev);
+		struct request_queue *backingq = bdev_get_queue(I_BDEV(inode));
 
 		max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
 		granularity = backingq->limits.discard_granularity ?:
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a6a5e21e4fd136..c55d3c58a7ef55 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -2380,38 +2380,38 @@ kobj_attribute_write(register,		register_bcache);
 kobj_attribute_write(register_quiet,	register_bcache);
 kobj_attribute_write(pendings_cleanup,	bch_pending_bdevs_cleanup);
 
-static bool bch_is_open_backing(struct block_device *bdev)
+static bool bch_is_open_backing(dev_t dev)
 {
 	struct cache_set *c, *tc;
 	struct cached_dev *dc, *t;
 
 	list_for_each_entry_safe(c, tc, &bch_cache_sets, list)
 		list_for_each_entry_safe(dc, t, &c->cached_devs, list)
-			if (dc->bdev == bdev)
+			if (dc->bdev->bd_dev == dev)
 				return true;
 	list_for_each_entry_safe(dc, t, &uncached_devices, list)
-		if (dc->bdev == bdev)
+		if (dc->bdev->bd_dev == dev)
 			return true;
 	return false;
 }
 
-static bool bch_is_open_cache(struct block_device *bdev)
+static bool bch_is_open_cache(dev_t dev)
 {
 	struct cache_set *c, *tc;
 
 	list_for_each_entry_safe(c, tc, &bch_cache_sets, list) {
 		struct cache *ca = c->cache;
 
-		if (ca->bdev == bdev)
+		if (ca->bdev->bd_dev == dev)
 			return true;
 	}
 
 	return false;
 }
 
-static bool bch_is_open(struct block_device *bdev)
+static bool bch_is_open(dev_t dev)
 {
-	return bch_is_open_cache(bdev) || bch_is_open_backing(bdev);
+	return bch_is_open_cache(dev) || bch_is_open_backing(dev);
 }
 
 struct async_reg_args {
@@ -2535,9 +2535,11 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
 				  sb);
 	if (IS_ERR(bdev)) {
 		if (bdev == ERR_PTR(-EBUSY)) {
-			bdev = lookup_bdev(strim(path));
+			dev_t dev;
+
 			mutex_lock(&bch_register_lock);
-			if (!IS_ERR(bdev) && bch_is_open(bdev))
+			if (lookup_bdev(strim(path), &dev) == 0 &&
+			    bch_is_open(dev))
 				err = "device already registered";
 			else
 				err = "device busy";
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index ce543b761be7b2..dea67772171053 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -348,16 +348,9 @@ static int upgrade_mode(struct dm_dev_internal *dd, fmode_t new_mode,
 dev_t dm_get_dev_t(const char *path)
 {
 	dev_t dev;
-	struct block_device *bdev;
 
-	bdev = lookup_bdev(path);
-	if (IS_ERR(bdev))
+	if (lookup_bdev(path, &dev))
 		dev = name_to_dev_t(path);
-	else {
-		dev = bdev->bd_dev;
-		bdput(bdev);
-	}
-
 	return dev;
 }
 EXPORT_SYMBOL_GPL(dm_get_dev_t);
diff --git a/drivers/mtd/mtdsuper.c b/drivers/mtd/mtdsuper.c
index c3e2098372f2e5..38b6aa849c6383 100644
--- a/drivers/mtd/mtdsuper.c
+++ b/drivers/mtd/mtdsuper.c
@@ -120,8 +120,8 @@ int get_tree_mtd(struct fs_context *fc,
 				struct fs_context *fc))
 {
 #ifdef CONFIG_BLOCK
-	struct block_device *bdev;
-	int ret, major;
+	dev_t dev;
+	int ret;
 #endif
 	int mtdnr;
 
@@ -169,20 +169,15 @@ int get_tree_mtd(struct fs_context *fc,
 	/* try the old way - the hack where we allowed users to mount
 	 * /dev/mtdblock$(n) but didn't actually _use_ the blockdev
 	 */
-	bdev = lookup_bdev(fc->source);
-	if (IS_ERR(bdev)) {
-		ret = PTR_ERR(bdev);
+	ret = lookup_bdev(fc->source, &dev);
+	if (ret) {
 		errorf(fc, "MTD: Couldn't look up '%s': %d", fc->source, ret);
 		return ret;
 	}
 	pr_debug("MTDSB: lookup_bdev() returned 0\n");
 
-	major = MAJOR(bdev->bd_dev);
-	mtdnr = MINOR(bdev->bd_dev);
-	bdput(bdev);
-
-	if (major == MTD_BLOCK_MAJOR)
-		return mtd_get_sb_by_nr(fc, mtdnr, fill_super);
+	if (MAJOR(dev) == MTD_BLOCK_MAJOR)
+		return mtd_get_sb_by_nr(fc, MINOR(dev), fill_super);
 
 #endif /* CONFIG_BLOCK */
 
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 7143d03f0e027e..b0cb5b95e892d3 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -133,10 +133,10 @@ static int fd_configure_device(struct se_device *dev)
 	 */
 	inode = file->f_mapping->host;
 	if (S_ISBLK(inode->i_mode)) {
-		struct request_queue *q = bdev_get_queue(inode->i_bdev);
+		struct request_queue *q = bdev_get_queue(I_BDEV(inode));
 		unsigned long long dev_size;
 
-		fd_dev->fd_block_size = bdev_logical_block_size(inode->i_bdev);
+		fd_dev->fd_block_size = bdev_logical_block_size(I_BDEV(inode));
 		/*
 		 * Determine the number of bytes from i_size_read() minus
 		 * one (1) logical sector from underlying struct block_device
@@ -559,7 +559,7 @@ fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 
 	if (S_ISBLK(inode->i_mode)) {
 		/* The backend is block device, use discard */
-		struct block_device *bdev = inode->i_bdev;
+		struct block_device *bdev = I_BDEV(inode);
 		struct se_device *dev = cmd->se_dev;
 
 		ret = blkdev_issue_discard(bdev,
diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c
index f7e6c42558eb76..b859a158a4140e 100644
--- a/drivers/usb/gadget/function/storage_common.c
+++ b/drivers/usb/gadget/function/storage_common.c
@@ -204,7 +204,7 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (!(filp->f_mode & FMODE_WRITE))
 		ro = 1;
 
-	inode = file_inode(filp);
+	inode = filp->f_mapping->host;
 	if ((!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))) {
 		LINFO(curlun, "invalid file type: %s\n", filename);
 		goto out;
@@ -221,7 +221,7 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (!(filp->f_mode & FMODE_CAN_WRITE))
 		ro = 1;
 
-	size = i_size_read(inode->i_mapping->host);
+	size = i_size_read(inode);
 	if (size < 0) {
 		LINFO(curlun, "unable to find file size: %s\n", filename);
 		rc = (int) size;
@@ -231,8 +231,8 @@ int fsg_lun_open(struct fsg_lun *curlun, const char *filename)
 	if (curlun->cdrom) {
 		blksize = 2048;
 		blkbits = 11;
-	} else if (inode->i_bdev) {
-		blksize = bdev_logical_block_size(inode->i_bdev);
+	} else if (S_ISBLK(inode->i_mode)) {
+		blksize = bdev_logical_block_size(I_BDEV(inode));
 		blkbits = blksize_bits(blksize);
 	} else {
 		blksize = 512;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 1e35faf6dad42c..c0d1e8248ffe23 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -883,7 +883,6 @@ static struct block_device *bdget(dev_t dev)
 		bdev->bd_dev = dev;
 		inode->i_mode = S_IFBLK;
 		inode->i_rdev = dev;
-		inode->i_bdev = bdev;
 		inode->i_data.a_ops = &def_blk_aops;
 		mapping_set_gfp_mask(&inode->i_data, GFP_USER);
 		unlock_new_inode(inode);
@@ -924,67 +923,8 @@ void bdput(struct block_device *bdev)
 {
 	iput(bdev->bd_inode);
 }
-
 EXPORT_SYMBOL(bdput);
  
-static struct block_device *bd_acquire(struct inode *inode)
-{
-	struct block_device *bdev;
-
-	spin_lock(&bdev_lock);
-	bdev = inode->i_bdev;
-	if (bdev && !inode_unhashed(bdev->bd_inode)) {
-		bdgrab(bdev);
-		spin_unlock(&bdev_lock);
-		return bdev;
-	}
-	spin_unlock(&bdev_lock);
-
-	/*
-	 * i_bdev references block device inode that was already shut down
-	 * (corresponding device got removed).  Remove the reference and look
-	 * up block device inode again just in case new device got
-	 * reestablished under the same device number.
-	 */
-	if (bdev)
-		bd_forget(inode);
-
-	bdev = bdget(inode->i_rdev);
-	if (bdev) {
-		spin_lock(&bdev_lock);
-		if (!inode->i_bdev) {
-			/*
-			 * We take an additional reference to bd_inode,
-			 * and it's released in clear_inode() of inode.
-			 * So, we can access it via ->i_mapping always
-			 * without igrab().
-			 */
-			bdgrab(bdev);
-			inode->i_bdev = bdev;
-			inode->i_mapping = bdev->bd_inode->i_mapping;
-		}
-		spin_unlock(&bdev_lock);
-	}
-	return bdev;
-}
-
-/* Call when you free inode */
-
-void bd_forget(struct inode *inode)
-{
-	struct block_device *bdev = NULL;
-
-	spin_lock(&bdev_lock);
-	if (!sb_is_blkdev_sb(inode->i_sb))
-		bdev = inode->i_bdev;
-	inode->i_bdev = NULL;
-	inode->i_mapping = &inode->i_data;
-	spin_unlock(&bdev_lock);
-
-	if (bdev)
-		bdput(bdev);
-}
-
 /**
  * bd_may_claim - test whether a block device can be claimed
  * @bdev: block device of interest
@@ -1492,38 +1432,45 @@ static int __blkdev_get(struct block_device *bdev, struct gendisk *disk,
 }
 
 /**
- * blkdev_get - open a block device
- * @bdev: block_device to open
+ * blkdev_get_by_dev - open a block device by device number
+ * @dev: device number of block device to open
  * @mode: FMODE_* mask
  * @holder: exclusive holder identifier
  *
- * Open @bdev with @mode.  If @mode includes %FMODE_EXCL, @bdev is
- * open with exclusive access.  Specifying %FMODE_EXCL with %NULL
- * @holder is invalid.  Exclusive opens may nest for the same @holder.
+ * Open the block device described by device number @dev.
+ * If @mode includes %FMODE_EXCL, the block device is opened with exclusive
+ * access.  Specifying %FMODE_EXCL with a %NULL @holder is invalid.  Exclusive
+ * opens may nest for the same @holder.
  *
- * On success, the reference count of @bdev is unchanged.  On failure,
- * @bdev is put.
+ * Use this interface ONLY if you really do not have anything better - i.e. when
+ * you are behind a truly sucky interface and all you are given is a device
+ * number.  Everything else should use blkdev_get_by_path().
  *
  * CONTEXT:
  * Might sleep.
  *
  * RETURNS:
- * 0 on success, -errno on failure.
+ * Reference to the block_device on success, ERR_PTR(-errno) on failure.
  */
-static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
+struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 {
 	struct block_device *claiming;
 	bool unblock_events = true;
+	struct block_device *bdev;
 	struct gendisk *disk;
 	int partno;
 	int ret;
 
 	ret = devcgroup_check_permission(DEVCG_DEV_BLOCK,
-			imajor(bdev->bd_inode), iminor(bdev->bd_inode),
+			MAJOR(dev), MINOR(dev),
 			((mode & FMODE_READ) ? DEVCG_ACC_READ : 0) |
 			((mode & FMODE_WRITE) ? DEVCG_ACC_WRITE : 0));
 	if (ret)
-		goto bdput;
+		return ERR_PTR(ret);
+
+	bdev = bdget(dev);
+	if (!bdev)
+		return ERR_PTR(-ENOMEM);
 
 	/*
 	 * If we lost a race with 'disk' being deleted, try again.  See md.c.
@@ -1584,10 +1531,13 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
 	if (ret == -ERESTARTSYS)
 		goto retry;
 bdput:
-	if (ret)
+	if (ret) {
 		bdput(bdev);
-	return ret;
+		return ERR_PTR(ret);
+	}
+	return bdev;
 }
+EXPORT_SYMBOL(blkdev_get_by_dev);
 
 /**
  * blkdev_get_by_path - open a block device by name
@@ -1595,32 +1545,31 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
  * @mode: FMODE_* mask
  * @holder: exclusive holder identifier
  *
- * Open the blockdevice described by the device file at @path.  @mode
- * and @holder are identical to blkdev_get().
+ * Open the block device described by the device file at @path.
  *
- * On success, the returned block_device has reference count of one.
+ * If @mode includes %FMODE_EXCL, the block device is opened with exclusive
+ * access.  Specifying %FMODE_EXCL with a %NULL @holder is invalid.  Exclusive
+ * opens may nest for the same @holder.
  *
  * CONTEXT:
  * Might sleep.
  *
  * RETURNS:
- * Pointer to block_device on success, ERR_PTR(-errno) on failure.
+ * Reference to the block_device on success, ERR_PTR(-errno) on failure.
  */
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 					void *holder)
 {
 	struct block_device *bdev;
-	int err;
-
-	bdev = lookup_bdev(path);
-	if (IS_ERR(bdev))
-		return bdev;
+	dev_t dev;
+	int error;
 
-	err = blkdev_get(bdev, mode, holder);
-	if (err)
-		return ERR_PTR(err);
+	error = lookup_bdev(path, &dev);
+	if (error)
+		return ERR_PTR(error);
 
-	if ((mode & FMODE_WRITE) && bdev_read_only(bdev)) {
+	bdev = blkdev_get_by_dev(dev, mode, holder);
+	if (!IS_ERR(bdev) && (mode & FMODE_WRITE) && bdev_read_only(bdev)) {
 		blkdev_put(bdev, mode);
 		return ERR_PTR(-EACCES);
 	}
@@ -1629,45 +1578,6 @@ struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 }
 EXPORT_SYMBOL(blkdev_get_by_path);
 
-/**
- * blkdev_get_by_dev - open a block device by device number
- * @dev: device number of block device to open
- * @mode: FMODE_* mask
- * @holder: exclusive holder identifier
- *
- * Open the blockdevice described by device number @dev.  @mode and
- * @holder are identical to blkdev_get().
- *
- * Use it ONLY if you really do not have anything better - i.e. when
- * you are behind a truly sucky interface and all you are given is a
- * device number.  _Never_ to be used for internal purposes.  If you
- * ever need it - reconsider your API.
- *
- * On success, the returned block_device has reference count of one.
- *
- * CONTEXT:
- * Might sleep.
- *
- * RETURNS:
- * Pointer to block_device on success, ERR_PTR(-errno) on failure.
- */
-struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
-{
-	struct block_device *bdev;
-	int err;
-
-	bdev = bdget(dev);
-	if (!bdev)
-		return ERR_PTR(-ENOMEM);
-
-	err = blkdev_get(bdev, mode, holder);
-	if (err)
-		return ERR_PTR(err);
-
-	return bdev;
-}
-EXPORT_SYMBOL(blkdev_get_by_dev);
-
 static int blkdev_open(struct inode * inode, struct file * filp)
 {
 	struct block_device *bdev;
@@ -1689,14 +1599,12 @@ static int blkdev_open(struct inode * inode, struct file * filp)
 	if ((filp->f_flags & O_ACCMODE) == 3)
 		filp->f_mode |= FMODE_WRITE_IOCTL;
 
-	bdev = bd_acquire(inode);
-	if (bdev == NULL)
-		return -ENOMEM;
-
+	bdev = blkdev_get_by_dev(inode->i_rdev, filp->f_mode, filp);
+	if (IS_ERR(bdev))
+		return PTR_ERR(bdev);
 	filp->f_mapping = bdev->bd_inode->i_mapping;
 	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
-
-	return blkdev_get(bdev, filp->f_mode, filp);
+	return 0;
 }
 
 static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
@@ -2003,37 +1911,32 @@ const struct file_operations def_blk_fops = {
  * namespace if possible and return it.  Return ERR_PTR(error)
  * otherwise.
  */
-struct block_device *lookup_bdev(const char *pathname)
+int lookup_bdev(const char *pathname, dev_t *dev)
 {
-	struct block_device *bdev;
 	struct inode *inode;
 	struct path path;
 	int error;
 
 	if (!pathname || !*pathname)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
 	error = kern_path(pathname, LOOKUP_FOLLOW, &path);
 	if (error)
-		return ERR_PTR(error);
+		return error;
 
 	inode = d_backing_inode(path.dentry);
 	error = -ENOTBLK;
 	if (!S_ISBLK(inode->i_mode))
-		goto fail;
+		goto out_path_put;
 	error = -EACCES;
 	if (!may_open_dev(&path))
-		goto fail;
-	error = -ENOMEM;
-	bdev = bd_acquire(inode);
-	if (!bdev)
-		goto fail;
-out:
+		goto out_path_put;
+
+	*dev = inode->i_rdev;
+	error = 0;
+out_path_put:
 	path_put(&path);
-	return bdev;
-fail:
-	bdev = ERR_PTR(error);
-	goto out;
+	return error;
 }
 EXPORT_SYMBOL(lookup_bdev);
 
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a6406b3b8c2b4f..fbc4b58228f784 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -929,16 +929,16 @@ static noinline struct btrfs_device *device_list_add(const char *path,
 		 * make sure it's the same device if the device is mounted
 		 */
 		if (device->bdev) {
-			struct block_device *path_bdev;
+			int error;
+			dev_t path_dev;
 
-			path_bdev = lookup_bdev(path);
-			if (IS_ERR(path_bdev)) {
+			error = lookup_bdev(path, &path_dev);
+			if (error) {
 				mutex_unlock(&fs_devices->device_list_mutex);
-				return ERR_CAST(path_bdev);
+				return ERR_PTR(error);
 			}
 
-			if (device->bdev != path_bdev) {
-				bdput(path_bdev);
+			if (device->bdev->bd_dev != path_dev) {
 				mutex_unlock(&fs_devices->device_list_mutex);
 				btrfs_warn_in_rcu(device->fs_info,
 	"duplicate device %s devid %llu generation %llu scanned by %s (%d)",
@@ -947,7 +947,6 @@ static noinline struct btrfs_device *device_list_add(const char *path,
 						  task_pid_nr(current));
 				return ERR_PTR(-EEXIST);
 			}
-			bdput(path_bdev);
 			btrfs_info_in_rcu(device->fs_info,
 	"devid %llu device path %s changed to %s scanned by %s (%d)",
 					  devid, rcu_str_deref(device->name),
diff --git a/fs/inode.c b/fs/inode.c
index 9d78c37b00b817..cb008acf0efdb8 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -155,7 +155,6 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
 	inode->i_bytes = 0;
 	inode->i_generation = 0;
 	inode->i_pipe = NULL;
-	inode->i_bdev = NULL;
 	inode->i_cdev = NULL;
 	inode->i_link = NULL;
 	inode->i_dir_seq = 0;
@@ -580,8 +579,6 @@ static void evict(struct inode *inode)
 		truncate_inode_pages_final(&inode->i_data);
 		clear_inode(inode);
 	}
-	if (S_ISBLK(inode->i_mode) && inode->i_bdev)
-		bd_forget(inode);
 	if (S_ISCHR(inode->i_mode) && inode->i_cdev)
 		cd_forget(inode);
 
diff --git a/fs/internal.h b/fs/internal.h
index 47be21dfeebef5..53f890446e7508 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -25,7 +25,6 @@ extern void __init bdev_cache_init(void);
 extern int __sync_blockdev(struct block_device *bdev, int wait);
 void iterate_bdevs(void (*)(struct block_device *, void *), void *);
 void emergency_thaw_bdev(struct super_block *sb);
-void bd_forget(struct inode *inode);
 #else
 static inline void bdev_cache_init(void)
 {
@@ -43,9 +42,6 @@ static inline int emergency_thaw_bdev(struct super_block *sb)
 {
 	return 0;
 }
-static inline void bd_forget(struct inode *inode)
-{
-}
 #endif /* CONFIG_BLOCK */
 
 /*
@@ -114,8 +110,7 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
  */
 extern int reconfigure_super(struct fs_context *);
 extern bool trylock_super(struct super_block *sb);
-struct super_block *__get_super(struct block_device *bdev, bool excl);
-extern struct super_block *user_get_super(dev_t);
+struct super_block *user_get_super(dev_t, bool excl);
 void put_super(struct super_block *sb);
 extern bool mount_capable(struct fs_context *);
 
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4ead291b2976f3..8f13c0417f940c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2716,11 +2716,7 @@ static struct file *__io_file_get(struct io_submit_state *state, int fd)
 
 static bool io_bdev_nowait(struct block_device *bdev)
 {
-#ifdef CONFIG_BLOCK
 	return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
-#else
-	return true;
-#endif
 }
 
 /*
@@ -2733,14 +2729,16 @@ static bool io_file_supports_async(struct file *file, int rw)
 	umode_t mode = file_inode(file)->i_mode;
 
 	if (S_ISBLK(mode)) {
-		if (io_bdev_nowait(file->f_inode->i_bdev))
+		if (IS_ENABLED(CONFIG_BLOCK) &&
+		    io_bdev_nowait(I_BDEV(file->f_mapping->host)))
 			return true;
 		return false;
 	}
 	if (S_ISCHR(mode) || S_ISSOCK(mode))
 		return true;
 	if (S_ISREG(mode)) {
-		if (io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
+		if (IS_ENABLED(CONFIG_BLOCK) &&
+		    io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
 		    file->f_op != &io_uring_fops)
 			return true;
 		return false;
diff --git a/fs/pipe.c b/fs/pipe.c
index 0ac197658a2d6e..c5989cfd564d45 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -1342,9 +1342,8 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
 }
 
 /*
- * After the inode slimming patch, i_pipe/i_bdev/i_cdev share the same
- * location, so checking ->i_pipe is not enough to verify that this is a
- * pipe.
+ * Note that i_pipe and i_cdev share the same location, so checking ->i_pipe is
+ * not enough to verify that this is a pipe.
  */
 struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice)
 {
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index f3d32b0d9008f2..6d16b2be5ac4a3 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -866,17 +866,18 @@ static bool quotactl_cmd_onoff(int cmd)
 static struct super_block *quotactl_block(const char __user *special, int cmd)
 {
 #ifdef CONFIG_BLOCK
-	struct block_device *bdev;
 	struct super_block *sb;
 	struct filename *tmp = getname(special);
 	bool excl = false, thawed = false;
+	int error;
+	dev_t dev;
 
 	if (IS_ERR(tmp))
 		return ERR_CAST(tmp);
-	bdev = lookup_bdev(tmp->name);
+	error = lookup_bdev(tmp->name, &dev);
 	putname(tmp);
-	if (IS_ERR(bdev))
-		return ERR_CAST(bdev);
+	if (error)
+		return ERR_PTR(error);
 
 	if (quotactl_cmd_onoff(cmd)) {
 		excl = true;
@@ -886,8 +887,10 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 	}
 
 retry:
-	sb = __get_super(bdev, excl);
-	if (thawed && sb && sb->s_writers.frozen != SB_UNFROZEN) {
+	sb = user_get_super(dev, excl);
+	if (!sb)
+		return ERR_PTR(-ENODEV);
+	if (thawed && sb->s_writers.frozen != SB_UNFROZEN) {
 		if (excl)
 			up_write(&sb->s_umount);
 		else
@@ -897,10 +900,6 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
 		put_super(sb);
 		goto retry;
 	}
-
-	bdput(bdev);
-	if (!sb)
-		return ERR_PTR(-ENODEV);
 	return sb;
 
 #else
diff --git a/fs/statfs.c b/fs/statfs.c
index 59f33752c1311f..68cb077887504f 100644
--- a/fs/statfs.c
+++ b/fs/statfs.c
@@ -235,7 +235,7 @@ SYSCALL_DEFINE3(fstatfs64, unsigned int, fd, size_t, sz, struct statfs64 __user
 
 static int vfs_ustat(dev_t dev, struct kstatfs *sbuf)
 {
-	struct super_block *s = user_get_super(dev);
+	struct super_block *s = user_get_super(dev, false);
 	int err;
 	if (!s)
 		return -EINVAL;
diff --git a/fs/super.c b/fs/super.c
index 343e5c1e538d2a..7a1611e5d0f45d 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -740,7 +740,7 @@ void iterate_supers_type(struct file_system_type *type,
 
 EXPORT_SYMBOL(iterate_supers_type);
 
-struct super_block *__get_super(struct block_device *bdev, bool excl)
+struct super_block *get_super(struct block_device *bdev)
 {
 	struct super_block *sb;
 
@@ -755,17 +755,11 @@ struct super_block *__get_super(struct block_device *bdev, bool excl)
 		if (sb->s_bdev == bdev) {
 			sb->s_count++;
 			spin_unlock(&sb_lock);
-			if (!excl)
-				down_read(&sb->s_umount);
-			else
-				down_write(&sb->s_umount);
+			down_read(&sb->s_umount);
 			/* still alive? */
 			if (sb->s_root && (sb->s_flags & SB_BORN))
 				return sb;
-			if (!excl)
-				up_read(&sb->s_umount);
-			else
-				up_write(&sb->s_umount);
+			up_read(&sb->s_umount);
 			/* nope, got unmounted */
 			spin_lock(&sb_lock);
 			__put_super(sb);
@@ -776,19 +770,6 @@ struct super_block *__get_super(struct block_device *bdev, bool excl)
 	return NULL;
 }
 
-/**
- *	get_super - get the superblock of a device
- *	@bdev: device to get the superblock for
- *
- *	Scans the superblock list and finds the superblock of the file system
- *	mounted on the device given. %NULL is returned if no match is found.
- */
-struct super_block *get_super(struct block_device *bdev)
-{
-	return __get_super(bdev, false);
-}
-EXPORT_SYMBOL(get_super);
-
 /**
  * get_active_super - get an active reference to the superblock of a device
  * @bdev: device to get the superblock for
@@ -820,7 +801,7 @@ struct super_block *get_active_super(struct block_device *bdev)
 	return NULL;
 }
 
-struct super_block *user_get_super(dev_t dev)
+struct super_block *user_get_super(dev_t dev, bool excl)
 {
 	struct super_block *sb;
 
@@ -832,11 +813,17 @@ struct super_block *user_get_super(dev_t dev)
 		if (sb->s_dev ==  dev) {
 			sb->s_count++;
 			spin_unlock(&sb_lock);
-			down_read(&sb->s_umount);
+			if (excl)
+				down_write(&sb->s_umount);
+			else
+				down_read(&sb->s_umount);
 			/* still alive? */
 			if (sb->s_root && (sb->s_flags & SB_BORN))
 				return sb;
-			up_read(&sb->s_umount);
+			if (excl)
+				up_write(&sb->s_umount);
+			else
+				up_read(&sb->s_umount);
 			/* nope, got unmounted */
 			spin_lock(&sb_lock);
 			__put_super(sb);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 12810a19edebc4..bdd7339bcda462 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1973,7 +1973,7 @@ int bdev_read_only(struct block_device *bdev);
 int set_blocksize(struct block_device *bdev, int size);
 
 const char *bdevname(struct block_device *bdev, char *buffer);
-struct block_device *lookup_bdev(const char *);
+int lookup_bdev(const char *pathname, dev_t *dev);
 
 void blkdev_show(struct seq_file *seqf, off_t offset);
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index a61df0dd4f1989..b0b358309657ba 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -696,7 +696,6 @@ struct inode {
 	struct list_head	i_devices;
 	union {
 		struct pipe_inode_info	*i_pipe;
-		struct block_device	*i_bdev;
 		struct cdev		*i_cdev;
 		char			*i_link;
 		unsigned		i_dir_seq;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36152.68080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYi-0005P6-UM; Tue, 24 Nov 2020 13:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36152.68080; Tue, 24 Nov 2020 13:40:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYh-0005Nb-Gh; Tue, 24 Nov 2020 13:40:39 +0000
Received: by outflank-mailman (input) for mailman id 36152;
 Tue, 24 Nov 2020 13:40:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOS-0000Qf-3g
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:04 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cc9c57e-ba5a-4d85-9d25-28ec0dd73b3b;
 Tue, 24 Nov 2020 13:28:45 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMs-0006Z8-1F; Tue, 24 Nov 2020 13:28:26 +0000
X-Inumbo-ID: 6cc9c57e-ba5a-4d85-9d25-28ec0dd73b3b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=72pzSMUtG54OL/Eca8JCS6Cut6vtXvByBKF+r5JkUZU=; b=FlEj2XVd4aO5E+awAOEaxrjHFP
	nbTwD38evQvBZtbbRpaPRv4w7SQgR/X5RUlt/qA1fFWMPnWI86V5fG/R+1H2W/UZ29b7ldRWLa1dd
	tU1fTXQkFVMRV5FKtiDc5PNo+nlp3px3fcZyUXemalSLi8QkH68sCLNjrgg/YWejEIQ4GICloHKpX
	hcIJqz4QLGO7aEZpxnGZJ/p3S2Y6mm2yl1JeTx600EG8EhKGRAmNzZwBbGkSr/guQr9ug6dVSnUs1
	PcWorBNAAH3r27c7yQDm7bIcF+dmd7t7AZ2DGtVVY9S3IddEspErcixUNQioTsPQbjYtA1t5lfJJL
	UNaX/fwg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 21/45] block: refactor blkdev_get
Date: Tue, 24 Nov 2020 14:27:27 +0100
Message-Id: <20201124132751.3747337-22-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move more code that is only run on the outer open, but not on the open of
the underlying whole device when opening a partition, into blkdev_get,
which leads to a much easier to follow structure.

This allows us to simplify the disk and module refcounting so that one
reference is held for each open, similar to what we do with normal
file operations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 185 +++++++++++++++++++++++--------------------------
 1 file changed, 86 insertions(+), 99 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 88847839ef0102..2ffa11a95f10db 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1403,46 +1403,12 @@ EXPORT_SYMBOL_GPL(bdev_disk_changed);
  *  mutex_lock(part->bd_mutex)
  *    mutex_lock_nested(whole->bd_mutex, 1)
  */
-
-static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
-		int for_part)
+static int __blkdev_get(struct block_device *bdev, struct gendisk *disk,
+		int partno, fmode_t mode)
 {
-	struct block_device *whole = NULL, *claiming = NULL;
-	struct gendisk *disk;
 	int ret;
-	int partno;
-	bool first_open = false, unblock_events = true, need_restart;
-
- restart:
-	need_restart = false;
-	ret = -ENXIO;
-	disk = bdev_get_gendisk(bdev, &partno);
-	if (!disk)
-		goto out;
-
-	if (partno) {
-		whole = bdget_disk(disk, 0);
-		if (!whole) {
-			ret = -ENOMEM;
-			goto out_put_disk;
-		}
-	}
 
-	if (!for_part && (mode & FMODE_EXCL)) {
-		WARN_ON_ONCE(!holder);
-		if (whole)
-			claiming = whole;
-		else
-			claiming = bdev;
-		ret = bd_prepare_to_claim(bdev, claiming, holder);
-		if (ret)
-			goto out_put_whole;
-	}
-
-	disk_block_events(disk);
-	mutex_lock_nested(&bdev->bd_mutex, for_part);
 	if (!bdev->bd_openers) {
-		first_open = true;
 		bdev->bd_disk = disk;
 		bdev->bd_contains = bdev;
 		bdev->bd_partno = partno;
@@ -1454,15 +1420,8 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 				goto out_clear;
 
 			ret = 0;
-			if (disk->fops->open) {
+			if (disk->fops->open)
 				ret = disk->fops->open(bdev, mode);
-				/*
-				 * If we lost a race with 'disk' being deleted,
-				 * try again.  See md.c
-				 */
-				if (ret == -ERESTARTSYS)
-					need_restart = true;
-			}
 
 			if (!ret) {
 				bd_set_nr_sectors(bdev, get_capacity(disk));
@@ -1482,14 +1441,23 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			if (ret)
 				goto out_clear;
 		} else {
-			BUG_ON(for_part);
-			ret = __blkdev_get(whole, mode, NULL, 1);
-			if (ret)
+			struct block_device *whole = bdget_disk(disk, 0);
+
+			mutex_lock_nested(&whole->bd_mutex, 1);
+			ret = __blkdev_get(whole, disk, 0, mode);
+			if (ret) {
+				mutex_unlock(&whole->bd_mutex);
+				bdput(whole);
 				goto out_clear;
-			bdev->bd_contains = bdgrab(whole);
+			}
+			whole->bd_part_count++;
+			mutex_unlock(&whole->bd_mutex);
+
+			bdev->bd_contains = whole;
 			bdev->bd_part = disk_get_part(disk, partno);
 			if (!(disk->flags & GENHD_FL_UP) ||
 			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
+				__blkdev_put(whole, mode, 1);
 				ret = -ENXIO;
 				goto out_clear;
 			}
@@ -1509,58 +1477,17 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
 			    (!ret || ret == -ENOMEDIUM))
 				bdev_disk_changed(bdev, ret == -ENOMEDIUM);
 			if (ret)
-				goto out_unlock_bdev;
+				return ret;
 		}
 	}
 	bdev->bd_openers++;
-	if (for_part)
-		bdev->bd_part_count++;
-	if (claiming)
-		bd_finish_claiming(bdev, claiming, holder);
-
-	/*
-	 * Block event polling for write claims if requested.  Any write holder
-	 * makes the write_holder state stick until all are released.  This is
-	 * good enough and tracking individual writeable reference is too
-	 * fragile given the way @mode is used in blkdev_get/put().
-	 */
-	if (claiming && (mode & FMODE_WRITE) && !bdev->bd_write_holder &&
-	    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
-		bdev->bd_write_holder = true;
-		unblock_events = false;
-	}
-	mutex_unlock(&bdev->bd_mutex);
-
-	if (unblock_events)
-		disk_unblock_events(disk);
-
-	/* only one opener holds refs to the module and disk */
-	if (!first_open)
-		put_disk_and_module(disk);
-	if (whole)
-		bdput(whole);
 	return 0;
 
  out_clear:
 	disk_put_part(bdev->bd_part);
 	bdev->bd_disk = NULL;
 	bdev->bd_part = NULL;
-	if (bdev != bdev->bd_contains)
-		__blkdev_put(bdev->bd_contains, mode, 1);
 	bdev->bd_contains = NULL;
- out_unlock_bdev:
-	if (claiming)
-		bd_abort_claiming(bdev, claiming, holder);
-	mutex_unlock(&bdev->bd_mutex);
-	disk_unblock_events(disk);
- out_put_whole:
- 	if (whole)
-		bdput(whole);
- out_put_disk:
-	put_disk_and_module(disk);
-	if (need_restart)
-		goto restart;
- out:
 	return ret;
 }
 
@@ -1585,7 +1512,12 @@ static int __blkdev_get(struct block_device *bdev, fmode_t mode, void *holder,
  */
 static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
 {
-	int ret, perm = 0;
+	struct block_device *claiming;
+	bool unblock_events = true;
+	struct gendisk *disk;
+	int perm = 0;
+	int partno;
+	int ret;
 
 	if (mode & FMODE_READ)
 		perm |= MAY_READ;
@@ -1595,13 +1527,67 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
 	if (ret)
 		goto bdput;
 
-	ret =__blkdev_get(bdev, mode, holder, 0);
-	if (ret)
+	/*
+	 * If we lost a race with 'disk' being deleted, try again.  See md.c.
+	 */
+retry:
+	ret = -ENXIO;
+	disk = bdev_get_gendisk(bdev, &partno);
+	if (!disk)
 		goto bdput;
-	return 0;
 
+	if (mode & FMODE_EXCL) {
+		WARN_ON_ONCE(!holder);
+
+		ret = -ENOMEM;
+		claiming = bdget_disk(disk, 0);
+		if (!claiming)
+			goto put_disk;
+		ret = bd_prepare_to_claim(bdev, claiming, holder);
+		if (ret)
+			goto put_claiming;
+	}
+
+	disk_block_events(disk);
+
+	mutex_lock(&bdev->bd_mutex);
+	ret = __blkdev_get(bdev, disk, partno, mode);
+	if (!(mode & FMODE_EXCL)) {
+		; /* nothing to do here */
+	} else if (ret) {
+		bd_abort_claiming(bdev, claiming, holder);
+	} else {
+		bd_finish_claiming(bdev, claiming, holder);
+
+		/*
+		 * Block event polling for write claims if requested.  Any write
+		 * holder makes the write_holder state stick until all are
+		 * released.  This is good enough and tracking individual
+		 * writeable reference is too fragile given the way @mode is
+		 * used in blkdev_get/put().
+		 */
+		if ((mode & FMODE_WRITE) && !bdev->bd_write_holder &&
+		    (disk->flags & GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE)) {
+			bdev->bd_write_holder = true;
+			unblock_events = false;
+		}
+	}
+	mutex_unlock(&bdev->bd_mutex);
+
+	if (unblock_events)
+		disk_unblock_events(disk);
+
+put_claiming:
+	if (mode & FMODE_EXCL)
+		bdput(claiming);
+put_disk:
+	if (ret)
+		put_disk_and_module(disk);
+	if (ret == -ERESTARTSYS)
+		goto retry;
 bdput:
-	bdput(bdev);
+	if (ret)
+		bdput(bdev);
 	return ret;
 }
 
@@ -1749,8 +1735,6 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		if (bdev_is_partition(bdev))
 			victim = bdev->bd_contains;
 		bdev->bd_contains = NULL;
-
-		put_disk_and_module(disk);
 	} else {
 		if (!bdev_is_partition(bdev) && disk->fops->release)
 			disk->fops->release(disk, mode);
@@ -1763,6 +1747,8 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 
 void blkdev_put(struct block_device *bdev, fmode_t mode)
 {
+	struct gendisk *disk = bdev->bd_disk;
+
 	mutex_lock(&bdev->bd_mutex);
 
 	if (mode & FMODE_EXCL) {
@@ -1791,7 +1777,7 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 		 * unblock evpoll if it was a write holder.
 		 */
 		if (bdev_free && bdev->bd_write_holder) {
-			disk_unblock_events(bdev->bd_disk);
+			disk_unblock_events(disk);
 			bdev->bd_write_holder = false;
 		}
 	}
@@ -1801,11 +1787,12 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 	 * event.  This is to ensure detection of media removal commanded
 	 * from userland - e.g. eject(1).
 	 */
-	disk_flush_events(bdev->bd_disk, DISK_EVENT_MEDIA_CHANGE);
+	disk_flush_events(disk, DISK_EVENT_MEDIA_CHANGE);
 
 	mutex_unlock(&bdev->bd_mutex);
 
 	__blkdev_put(bdev, mode, 0);
+	put_disk_and_module(disk);
 }
 EXPORT_SYMBOL(blkdev_put);
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36157.68087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYl-0005U9-2y; Tue, 24 Nov 2020 13:40:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36157.68087; Tue, 24 Nov 2020 13:40:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYj-0005Rz-AX; Tue, 24 Nov 2020 13:40:41 +0000
Received: by outflank-mailman (input) for mailman id 36157;
 Tue, 24 Nov 2020 13:40:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPk-0000Qf-8x
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:24 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d259f70b-1005-49a9-8670-9638403c3144;
 Tue, 24 Nov 2020 13:29:14 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNC-0006eW-7z; Tue, 24 Nov 2020 13:28:47 +0000
X-Inumbo-ID: d259f70b-1005-49a9-8670-9638403c3144
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Qmxc9F7d78fxvqrcJGALGomPNXCJxQ4BUeKonQI+9iM=; b=XyqGQ69cDW3VmV7MtWM6fAyAuQ
	xMwqzl+m+RXBmsO7uXvW6LGkl8emxWJPDkGZD+RtsFYoUP8Vp7CRyACrz6WjJgjwOJEvKGCpqxA6j
	KnQrGVQTaRSu7gtK+Tea7urSawRPeyaeKy8RJfD4JEoXzn0r0hL8WB9nhe63bsjFwdopiXqfPHLiG
	p78yzr7nysZmwhciBbmOCHXpvL6ZVbcVs2e56BBFUTGKUATmxQaPUcZKkVnPojHv6fNP7LGLC2A5d
	jD5qafRcbkaausdIEi8N5FEnSzrWI/bP9n17voU8JdKocp75w1Ty8NjjBdPRVPP2Ncz2LvkdNqUfW
	Ta6LBeJQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 31/45] block: move disk stat accounting to struct block_device
Date: Tue, 24 Nov 2020 14:27:37 +0100
Message-Id: <20201124132751.3747337-32-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the dkstats and stamp fields to struct block_device in preparation
for killing struct hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c        |  2 +-
 block/blk-core.c          |  4 ++--
 block/blk.h               |  1 -
 block/genhd.c             | 14 ++++----------
 block/partitions/core.c   |  9 +--------
 fs/block_dev.c            | 10 ++++++++++
 include/linux/blk_types.h |  2 ++
 include/linux/genhd.h     |  2 --
 include/linux/part_stat.h | 38 +++++++++++++++++++-------------------
 9 files changed, 39 insertions(+), 43 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 23437b96ea41e6..a598f86e014137 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -830,7 +830,7 @@ static void blkcg_fill_root_iostats(void)
 		for_each_possible_cpu(cpu) {
 			struct disk_stats *cpu_dkstats;
 
-			cpu_dkstats = per_cpu_ptr(part->dkstats, cpu);
+			cpu_dkstats = per_cpu_ptr(part->bdev->bd_stats, cpu);
 			tmp.ios[BLKG_IOSTAT_READ] +=
 				cpu_dkstats->ios[STAT_READ];
 			tmp.ios[BLKG_IOSTAT_WRITE] +=
diff --git a/block/blk-core.c b/block/blk-core.c
index 988f45094a387b..d2c9cb24e087f3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1264,9 +1264,9 @@ static void update_io_ticks(struct hd_struct *part, unsigned long now, bool end)
 {
 	unsigned long stamp;
 again:
-	stamp = READ_ONCE(part->stamp);
+	stamp = READ_ONCE(part->bdev->bd_stamp);
 	if (unlikely(stamp != now)) {
-		if (likely(cmpxchg(&part->stamp, stamp, now) == stamp))
+		if (likely(cmpxchg(&part->bdev->bd_stamp, stamp, now) == stamp))
 			__part_stat_add(part, io_ticks, end ? now - stamp : 1);
 	}
 	if (part->partno) {
diff --git a/block/blk.h b/block/blk.h
index 09cee7024fb43e..3f801f6e86f8a1 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -381,7 +381,6 @@ static inline void hd_struct_put(struct hd_struct *part)
 
 static inline void hd_free_part(struct hd_struct *part)
 {
-	free_percpu(part->dkstats);
 	kfree(part->info);
 	bdput(part->bdev);
 	percpu_ref_exit(&part->ref);
diff --git a/block/genhd.c b/block/genhd.c
index 8ace0628ac20b7..0c0458367da7e4 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -102,7 +102,7 @@ static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
 
 	memset(stat, 0, sizeof(struct disk_stats));
 	for_each_possible_cpu(cpu) {
-		struct disk_stats *ptr = per_cpu_ptr(part->dkstats, cpu);
+		struct disk_stats *ptr = per_cpu_ptr(part->bdev->bd_stats, cpu);
 		int group;
 
 		for (group = 0; group < NR_STAT_GROUPS; group++) {
@@ -875,7 +875,7 @@ void del_gendisk(struct gendisk *disk)
 	kobject_put(disk->slave_dir);
 
 	part_stat_set_all(&disk->part0, 0);
-	disk->part0.stamp = 0;
+	disk->part0.bdev->bd_stamp = 0;
 	if (!sysfs_deprecated)
 		sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
 	pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
@@ -1612,19 +1612,15 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk->part0.bdev)
 		goto out_free_disk;
 
-	disk->part0.dkstats = alloc_percpu(struct disk_stats);
-	if (!disk->part0.dkstats)
-		goto out_bdput;
-
 	disk->node_id = node_id;
 	if (disk_expand_part_tbl(disk, 0))
-		goto out_free_bdstats;
+		goto out_bdput;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
 	rcu_assign_pointer(ptbl->part[0], &disk->part0);
 
 	if (hd_ref_init(&disk->part0))
-		goto out_free_bdstats;
+		goto out_bdput;
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
@@ -1633,8 +1629,6 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	device_initialize(disk_to_dev(disk));
 	return disk;
 
-out_free_bdstats:
-	free_percpu(disk->part0.dkstats);
 out_bdput:
 	bdput(disk->part0.bdev);
 out_free_disk:
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 92ffa55bdfddfd..c3a4870bfb123d 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -409,14 +409,9 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!p)
 		return ERR_PTR(-EBUSY);
 
-	err = -ENOMEM;
-	p->dkstats = alloc_percpu(struct disk_stats);
-	if (!p->dkstats)
-		goto out_free;
-
 	bdev = bdev_alloc(disk, partno);
 	if (!bdev)
-		goto out_free_stats;
+		goto out_free;
 	p->bdev = bdev;
 
 	pdev = part_to_dev(p);
@@ -490,8 +485,6 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	kfree(p->info);
 out_bdput:
 	bdput(bdev);
-out_free_stats:
-	free_percpu(p->dkstats);
 out_free:
 	kfree(p);
 	return ERR_PTR(err);
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 02536d9fa29945..0427e6fa59556f 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -32,6 +32,7 @@
 #include <linux/cleancache.h>
 #include <linux/task_io_accounting_ops.h>
 #include <linux/falloc.h>
+#include <linux/part_stat.h>
 #include <linux/uaccess.h>
 #include <linux/suspend.h>
 #include "internal.h"
@@ -781,6 +782,10 @@ static struct inode *bdev_alloc_inode(struct super_block *sb)
 
 static void bdev_free_inode(struct inode *inode)
 {
+	struct block_device *bdev = I_BDEV(inode);
+
+	free_percpu(bdev->bd_stats);
+
 	kmem_cache_free(bdev_cachep, BDEV_I(inode));
 }
 
@@ -874,6 +879,11 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 #ifdef CONFIG_SYSFS
 	INIT_LIST_HEAD(&bdev->bd_holder_disks);
 #endif
+	bdev->bd_stats = alloc_percpu(struct disk_stats);
+	if (!bdev->bd_stats) {
+		iput(inode);
+		return NULL;
+	}
 	return bdev;
 }
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 2e0a9bd9688d28..520011b95276fb 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -20,6 +20,8 @@ typedef void (bio_end_io_t) (struct bio *);
 struct bio_crypt_ctx;
 
 struct block_device {
+	struct disk_stats __percpu *bd_stats;
+	unsigned long		bd_stamp;
 	dev_t			bd_dev;
 	int			bd_openers;
 	struct inode *		bd_inode;	/* will die */
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 0dbd254bca51aa..dcc40c8217d095 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -52,8 +52,6 @@ struct partition_meta_info {
 
 struct hd_struct {
 	sector_t start_sect;
-	unsigned long stamp;
-	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
 
 	struct block_device *bdev;
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index 24125778ef3ec7..87ad60106e1db0 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -25,17 +25,17 @@ struct disk_stats {
 #define part_stat_unlock()	preempt_enable()
 
 #define part_stat_get_cpu(part, field, cpu)				\
-	(per_cpu_ptr((part)->dkstats, (cpu))->field)
+	(per_cpu_ptr((part)->bdev->bd_stats, (cpu))->field)
 
 #define part_stat_get(part, field)					\
 	part_stat_get_cpu(part, field, smp_processor_id())
 
 #define part_stat_read(part, field)					\
 ({									\
-	typeof((part)->dkstats->field) res = 0;				\
+	typeof((part)->bdev->bd_stats->field) res = 0;			\
 	unsigned int _cpu;						\
 	for_each_possible_cpu(_cpu)					\
-		res += per_cpu_ptr((part)->dkstats, _cpu)->field;	\
+		res += per_cpu_ptr((part)->bdev->bd_stats, _cpu)->field; \
 	res;								\
 })
 
@@ -44,7 +44,7 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 	int i;
 
 	for_each_possible_cpu(i)
-		memset(per_cpu_ptr(part->dkstats, i), value,
+		memset(per_cpu_ptr(part->bdev->bd_stats, i), value,
 				sizeof(struct disk_stats));
 }
 
@@ -54,7 +54,7 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 	 part_stat_read(part, field[STAT_DISCARD]))
 
 #define __part_stat_add(part, field, addnd)				\
-	__this_cpu_add((part)->dkstats->field, addnd)
+	__this_cpu_add((part)->bdev->bd_stats->field, addnd)
 
 #define part_stat_add(part, field, addnd)	do {			\
 	__part_stat_add((part), field, addnd);				\
@@ -63,20 +63,20 @@ static inline void part_stat_set_all(struct hd_struct *part, int value)
 				field, addnd);				\
 } while (0)
 
-#define part_stat_dec(gendiskp, field)					\
-	part_stat_add(gendiskp, field, -1)
-#define part_stat_inc(gendiskp, field)					\
-	part_stat_add(gendiskp, field, 1)
-#define part_stat_sub(gendiskp, field, subnd)				\
-	part_stat_add(gendiskp, field, -subnd)
+#define part_stat_dec(part, field)					\
+	part_stat_add(part, field, -1)
+#define part_stat_inc(part, field)					\
+	part_stat_add(part, field, 1)
+#define part_stat_sub(part, field, subnd)				\
+	part_stat_add(part, field, -subnd)
 
-#define part_stat_local_dec(gendiskp, field)				\
-	local_dec(&(part_stat_get(gendiskp, field)))
-#define part_stat_local_inc(gendiskp, field)				\
-	local_inc(&(part_stat_get(gendiskp, field)))
-#define part_stat_local_read(gendiskp, field)				\
-	local_read(&(part_stat_get(gendiskp, field)))
-#define part_stat_local_read_cpu(gendiskp, field, cpu)			\
-	local_read(&(part_stat_get_cpu(gendiskp, field, cpu)))
+#define part_stat_local_dec(part, field)				\
+	local_dec(&(part_stat_get(part, field)))
+#define part_stat_local_inc(part, field)				\
+	local_inc(&(part_stat_get(part, field)))
+#define part_stat_local_read(part, field)				\
+	local_read(&(part_stat_get(part, field)))
+#define part_stat_local_read_cpu(part, field, cpu)			\
+	local_read(&(part_stat_get_cpu(part, field, cpu)))
 
 #endif /* _LINUX_PART_STAT_H */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:40:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:40:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36162.68101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYn-0005bs-I3; Tue, 24 Nov 2020 13:40:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36162.68101; Tue, 24 Nov 2020 13:40:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYYm-0005ae-Dl; Tue, 24 Nov 2020 13:40:44 +0000
Received: by outflank-mailman (input) for mailman id 36162;
 Tue, 24 Nov 2020 13:40:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOD-0000Qf-34
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:49 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8182bed2-3796-44c6-ab08-0196e6a8c9b2;
 Tue, 24 Nov 2020 13:28:43 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMt-0006ZX-Sm; Tue, 24 Nov 2020 13:28:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOD-0000Qf-34
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:49 +0000
X-Inumbo-ID: 8182bed2-3796-44c6-ab08-0196e6a8c9b2
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8182bed2-3796-44c6-ab08-0196e6a8c9b2;
	Tue, 24 Nov 2020 13:28:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=Xb3N2WAG9okvbjUfnpwdPBpAQhXn4NZ0O9b+iLQ2eLQ=; b=CD/Zu5S2v7JQL86T+0gRFtsLfM
	kUwH6dfh/CUdlgUxbwmElUz1fNicSaXpuoV30QiF0OgxlWK+j2EQGIY98jL+xCm41ZUs+ViX+4lbj
	llVEfGhPh4OpwiirW/phioWj5T3r2yQ0gr6vFtAr06LZvYhNT0W/ToEJaw6bUZy8/QSzbAoLJ2UC2
	QBgLCJI2097N6fhHep3+PhoV8nL1LM5nsWvpDoUPBnB//Qcb6h/ul/dvjIH2tAZVhJ1IXEJkOYBJS
	i/4l9EYviSTH0C4sbrZzfFcEyr70QHVgNDAlvJ0U7h2bfcMEoxnQVw6Zb5q1tzhzSoYmcYIu7Wg3G
	DywyGKyA==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMt-0006ZX-Sm; Tue, 24 Nov 2020 13:28:28 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 22/45] block: opencode devcgroup_inode_permission
Date: Tue, 24 Nov 2020 14:27:28 +0100
Message-Id: <20201124132751.3747337-23-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just call devcgroup_check_permission to avoid various superfluous checks
and a double conversion of the access flags.
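The flag conversion being removed can be sketched in userspace as follows. This is a minimal model only: the constant values below are illustrative stand-ins, not the real kernel definitions (which live in include/linux/fs.h and include/linux/device_cgroup.h), and devcg_access() is a hypothetical helper name used here to show the direct FMODE_* to DEVCG_ACC_* mapping that replaces the old FMODE_* to MAY_* to DEVCG_ACC_* double conversion.

```c
#include <assert.h>

/* Illustrative values only -- stand-ins for the kernel's definitions. */
#define FMODE_READ      0x1u
#define FMODE_WRITE     0x2u
#define DEVCG_ACC_READ  0x2u
#define DEVCG_ACC_WRITE 0x4u

/*
 * The new call site computes the devcgroup access mask directly from
 * the open mode, with no intermediate MAY_READ/MAY_WRITE step.
 */
static unsigned int devcg_access(unsigned int mode)
{
	return ((mode & FMODE_READ) ? DEVCG_ACC_READ : 0) |
	       ((mode & FMODE_WRITE) ? DEVCG_ACC_WRITE : 0);
}
```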

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/block_dev.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 2ffa11a95f10db..1e35faf6dad42c 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1515,15 +1515,13 @@ static int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder)
 	struct block_device *claiming;
 	bool unblock_events = true;
 	struct gendisk *disk;
-	int perm = 0;
 	int partno;
 	int ret;
 
-	if (mode & FMODE_READ)
-		perm |= MAY_READ;
-	if (mode & FMODE_WRITE)
-		perm |= MAY_WRITE;
-	ret = devcgroup_inode_permission(bdev->bd_inode, perm);
+	ret = devcgroup_check_permission(DEVCG_DEV_BLOCK,
+			imajor(bdev->bd_inode), iminor(bdev->bd_inode),
+			((mode & FMODE_READ) ? DEVCG_ACC_READ : 0) |
+			((mode & FMODE_WRITE) ? DEVCG_ACC_WRITE : 0));
 	if (ret)
 		goto bdput;
 
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36212.68118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ2-000641-48; Tue, 24 Nov 2020 13:41:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36212.68118; Tue, 24 Nov 2020 13:41:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ1-00063r-Q3; Tue, 24 Nov 2020 13:40:59 +0000
Received: by outflank-mailman (input) for mailman id 36212;
 Tue, 24 Nov 2020 13:40:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYOm-0000Qf-40
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:24 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2297f7a2-f7a7-4e4d-a7b9-f9365b3163f3;
 Tue, 24 Nov 2020 13:28:48 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMo-0006YR-Rc; Tue, 24 Nov 2020 13:28:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYOm-0000Qf-40
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:24 +0000
X-Inumbo-ID: 2297f7a2-f7a7-4e4d-a7b9-f9365b3163f3
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2297f7a2-f7a7-4e4d-a7b9-f9365b3163f3;
	Tue, 24 Nov 2020 13:28:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=X5DUE98moi230PHv1tVgCiHzV+UhCd8lztDnZO9kG5c=; b=X+rA73r6QDOVyee+DSa3rW8b35
	OeKaTwg4NHeZa8iiOjrhEjDtW7DX4fOMB9c6aebaBDKuxk+IsDdUW0f7EuZlr2Pov3PURxlMx5rSK
	PBcTmBTO0WyZ7LcgWQ6Xl/0zPJFv0c+ZKh43ZOCIkVopkoFOK48fZGWf8ai+Cj/OoZO3jhvVw5m5Q
	yaSWPl46Vb8ovXNenLX8KK6YPvBJf1LP9VoLMLET0fF69TiH7hVM4rzUe6WIONjC+Mkq9y1ueE05Y
	CfQTDXafYdPFj4peFCtKw/0RrMBMz1uLe6nNpgZh858syqReHUhfMsbGLdWOS/inGodpPctzqcSuj
	FQUx+Tzw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMo-0006YR-Rc; Tue, 24 Nov 2020 13:28:23 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 19/45] init: cleanup match_dev_by_uuid and match_dev_by_label
Date: Tue, 24 Nov 2020 14:27:25 +0100
Message-Id: <20201124132751.3747337-20-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Avoid a totally pointless goto label, and use the same style of
comparison for both helpers.
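The unified style can be illustrated with a small userspace model. The struct and function names below are toy stand-ins (not the kernel's partition_meta_info or the real match_dev_by_* signatures); the point is only the shared shape: one early "return 0" on any mismatch, then "return 1", instead of one helper using a goto label and the other an inverted condition.

```c
#include <string.h>
#include <strings.h>  /* strncasecmp (POSIX) */

/* Toy stand-in for the kernel's partition_meta_info. */
struct meta { const char *uuid; const char *volname; };

/* Mirrors the rewritten match_dev_by_uuid(): reject early, else match. */
static int match_by_uuid(const struct meta *info, const char *uuid, size_t len)
{
	if (!info || strncasecmp(uuid, info->uuid, len))
		return 0;
	return 1;
}

/* Mirrors the rewritten match_dev_by_label(): same shape as above. */
static int match_by_label(const struct meta *info, const char *label)
{
	if (!info || strcmp(label, info->volname))
		return 0;
	return 1;
}
```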

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 init/do_mounts.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/init/do_mounts.c b/init/do_mounts.c
index afa26a4028d25e..5879edf083b318 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -79,15 +79,10 @@ static int match_dev_by_uuid(struct device *dev, const void *data)
 	const struct uuidcmp *cmp = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->info)
-		goto no_match;
-
-	if (strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
-		goto no_match;
-
+	if (!part->info ||
+	    strncasecmp(cmp->uuid, part->info->uuid, cmp->len))
+		return 0;
 	return 1;
-no_match:
-	return 0;
 }
 
 /**
@@ -174,10 +169,9 @@ static int match_dev_by_label(struct device *dev, const void *data)
 	const char *label = data;
 	struct hd_struct *part = dev_to_part(dev);
 
-	if (part->info && !strcmp(label, part->info->volname))
-		return 1;
-
-	return 0;
+	if (!part->info || strcmp(label, part->info->volname))
+		return 0;
+	return 1;
 }
 
 static dev_t devt_from_partlabel(const char *label)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36213.68126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ3-00065v-2w; Tue, 24 Nov 2020 13:41:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36213.68126; Tue, 24 Nov 2020 13:41:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ2-00065B-D2; Tue, 24 Nov 2020 13:41:00 +0000
Received: by outflank-mailman (input) for mailman id 36213;
 Tue, 24 Nov 2020 13:40:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPf-0000Qf-5Z
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:19 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab5cc72a-b591-4785-bc22-78d444f61869;
 Tue, 24 Nov 2020 13:29:13 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNE-0006f1-F4; Tue, 24 Nov 2020 13:28:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPf-0000Qf-5Z
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:19 +0000
X-Inumbo-ID: ab5cc72a-b591-4785-bc22-78d444f61869
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ab5cc72a-b591-4785-bc22-78d444f61869;
	Tue, 24 Nov 2020 13:29:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=UJKssyQl+GUFvi2rp9yz11KxAWWGhgO0tQYxEu3mZTI=; b=GF9czIxIzF9RlTv3dOf7Fu6foS
	BmIJ6T3jrUGNNBkYYp6LFnLd13ZKb4UUw9B1pZZjIK3BGTiaFHcuuld7LvGyBGj1GMlIb/+AjWRnn
	Jwttr1AcnqFoTQfViz8iaEu900blKT5R+zD1QgVz0Z6M/yof8f5MlsrTyWLzSfcVB7TEK+peMPmo8
	hgrZj0xWwvPrrhaalemKq19pe9+z1rsI6Yr0lREU0CuYzpGfNMe31TDmQhDHkXxGuQP5VkkZ70OXD
	gWGgVGYHvkh8rxmauH5uoO9S9/YstSSRSrSHAbnn1j61+E00sX7zAA37oK7B12hfwWXqiRvxdb2ly
	OrkYzQ3w==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNE-0006f1-F4; Tue, 24 Nov 2020 13:28:48 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 32/45] block: move the start_sect field to struct block_device
Date: Tue, 24 Nov 2020 14:27:38 +0100
Message-Id: <20201124132751.3747337-33-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the start_sect field to struct block_device in preparation
for killing struct hd_struct.
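The two arithmetic patterns this patch touches can be sketched in userspace. The types below are simplified stand-ins for the kernel structures (a real block_device carries much more state), and the helper names are only loosely modeled on blk_partition_remap() and sector_in_part(); the sketch just shows that the partition offset now reads from bd_start_sect on the block_device rather than from hd_struct.

```c
#include <assert.h>

typedef unsigned long long sector_t;

/* Stand-in for struct block_device: start offset plus size. */
struct bdev { sector_t bd_start_sect; sector_t nr_sects; };

/* Shift a partition-relative sector to an absolute disk sector,
 * as blk_partition_remap() does for a bio. */
static sector_t remap_sector(const struct bdev *part, sector_t sector)
{
	return sector + part->bd_start_sect;
}

/* Test whether an absolute sector falls inside the partition,
 * mirroring the rewritten sector_in_part() in block/genhd.c. */
static int sector_in_part(const struct bdev *part, sector_t sector)
{
	return part->bd_start_sect <= sector &&
	       sector < part->bd_start_sect + part->nr_sects;
}
```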

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c          |  5 +++--
 block/blk-lib.c           |  2 +-
 block/genhd.c             |  4 ++--
 block/partitions/core.c   | 17 +++++++++--------
 include/linux/blk_types.h |  1 +
 include/linux/blkdev.h    |  4 ++--
 include/linux/genhd.h     |  3 +--
 kernel/trace/blktrace.c   | 11 +++--------
 8 files changed, 22 insertions(+), 25 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d2c9cb24e087f3..9a3793d5ce38d4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -757,9 +757,10 @@ static inline int blk_partition_remap(struct bio *bio)
 	if (bio_sectors(bio)) {
 		if (bio_check_eod(bio, bdev_nr_sectors(p->bdev)))
 			goto out;
-		bio->bi_iter.bi_sector += p->start_sect;
+		bio->bi_iter.bi_sector += p->bdev->bd_start_sect;
 		trace_block_bio_remap(bio->bi_disk->queue, bio, part_devt(p),
-				      bio->bi_iter.bi_sector - p->start_sect);
+				      bio->bi_iter.bi_sector -
+				      p->bdev->bd_start_sect);
 	}
 	bio->bi_partno = 0;
 	ret = 0;
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e90614fd8d6a42..752f9c7220622a 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -65,7 +65,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 
 	/* In case the discard request is in a partition */
 	if (bdev_is_partition(bdev))
-		part_offset = bdev->bd_part->start_sect;
+		part_offset = bdev->bd_start_sect;
 
 	while (nr_sects) {
 		sector_t granularity_aligned_lba, req_sects;
diff --git a/block/genhd.c b/block/genhd.c
index 0c0458367da7e4..8212a2dd10ec4e 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -295,8 +295,8 @@ EXPORT_SYMBOL_GPL(disk_part_iter_exit);
 
 static inline int sector_in_part(struct hd_struct *part, sector_t sector)
 {
-	return part->start_sect <= sector &&
-		sector < part->start_sect + bdev_nr_sectors(part->bdev);
+	return part->bdev->bd_start_sect <= sector &&
+		sector < part->bdev->bd_start_sect + bdev_nr_sectors(part->bdev);
 }
 
 /**
diff --git a/block/partitions/core.c b/block/partitions/core.c
index c3a4870bfb123d..aa4b836374b037 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -192,7 +192,7 @@ static ssize_t part_start_show(struct device *dev,
 {
 	struct hd_struct *p = dev_to_part(dev);
 
-	return sprintf(buf, "%llu\n",(unsigned long long)p->start_sect);
+	return sprintf(buf, "%llu\n",(unsigned long long)p->bdev->bd_start_sect);
 }
 
 static ssize_t part_ro_show(struct device *dev,
@@ -209,7 +209,7 @@ static ssize_t part_alignment_offset_show(struct device *dev,
 
 	return sprintf(buf, "%u\n",
 		queue_limit_alignment_offset(&part_to_disk(p)->queue->limits,
-				p->start_sect));
+				p->bdev->bd_start_sect));
 }
 
 static ssize_t part_discard_alignment_show(struct device *dev,
@@ -219,7 +219,7 @@ static ssize_t part_discard_alignment_show(struct device *dev,
 
 	return sprintf(buf, "%u\n",
 		queue_limit_discard_alignment(&part_to_disk(p)->queue->limits,
-				p->start_sect));
+				p->bdev->bd_start_sect));
 }
 
 static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
@@ -301,7 +301,7 @@ static void hd_struct_free_work(struct work_struct *work)
 	 */
 	put_device(disk_to_dev(disk));
 
-	part->start_sect = 0;
+	part->bdev->bd_start_sect = 0;
 	bdev_set_nr_sectors(part->bdev, 0);
 	part_stat_set_all(part, 0);
 	put_device(part_to_dev(part));
@@ -416,7 +416,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 	pdev = part_to_dev(p);
 
-	p->start_sect = start;
+	bdev->bd_start_sect = start;
 	bdev_set_nr_sectors(bdev, len);
 	p->partno = partno;
 	p->policy = get_disk_ro(disk);
@@ -508,8 +508,9 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter))) {
 		if (part->partno == skip_partno ||
-		    start >= part->start_sect + bdev_nr_sectors(part->bdev) ||
-		    start + length <= part->start_sect)
+		    start >= part->bdev->bd_start_sect +
+			bdev_nr_sectors(part->bdev) ||
+		    start + length <= part->bdev->bd_start_sect)
 			continue;
 		overlap = true;
 		break;
@@ -592,7 +593,7 @@ int bdev_resize_partition(struct block_device *bdev, int partno,
 	mutex_lock_nested(&bdev->bd_mutex, 1);
 
 	ret = -EINVAL;
-	if (start != part->start_sect)
+	if (start != part->bdev->bd_start_sect)
 		goto out_unlock;
 
 	ret = -EBUSY;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 520011b95276fb..a690008f60cd92 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -20,6 +20,7 @@ typedef void (bio_end_io_t) (struct bio *);
 struct bio_crypt_ctx;
 
 struct block_device {
+	sector_t		bd_start_sect;
 	struct disk_stats __percpu *bd_stats;
 	unsigned long		bd_stamp;
 	dev_t			bd_dev;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index db1b11d6d07568..8fc0b266610f7f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1488,7 +1488,7 @@ static inline int bdev_alignment_offset(struct block_device *bdev)
 		return -1;
 	if (bdev_is_partition(bdev))
 		return queue_limit_alignment_offset(&q->limits,
-				bdev->bd_part->start_sect);
+				bdev->bd_start_sect);
 	return q->limits.alignment_offset;
 }
 
@@ -1529,7 +1529,7 @@ static inline int bdev_discard_alignment(struct block_device *bdev)
 
 	if (bdev_is_partition(bdev))
 		return queue_limit_discard_alignment(&q->limits,
-				bdev->bd_part->start_sect);
+				bdev->bd_start_sect);
 	return q->limits.discard_alignment;
 }
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index dcc40c8217d095..a9d64da474233f 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -51,7 +51,6 @@ struct partition_meta_info {
 };
 
 struct hd_struct {
-	sector_t start_sect;
 	struct percpu_ref ref;
 
 	struct block_device *bdev;
@@ -299,7 +298,7 @@ extern void rand_initialize_disk(struct gendisk *disk);
 
 static inline sector_t get_start_sect(struct block_device *bdev)
 {
-	return bdev->bd_part->start_sect;
+	return bdev->bd_start_sect;
 }
 
 static inline sector_t bdev_nr_sectors(struct block_device *bdev)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 7076d588a50d69..8a723a91ec5a06 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -458,14 +458,9 @@ static struct rchan_callbacks blk_relay_callbacks = {
 static void blk_trace_setup_lba(struct blk_trace *bt,
 				struct block_device *bdev)
 {
-	struct hd_struct *part = NULL;
-
-	if (bdev)
-		part = bdev->bd_part;
-
-	if (part) {
-		bt->start_lba = part->start_sect;
-		bt->end_lba = part->start_sect + bdev_nr_sectors(bdev);
+	if (bdev) {
+		bt->start_lba = bdev->bd_start_sect;
+		bt->end_lba = bdev->bd_start_sect + bdev_nr_sectors(bdev);
 	} else {
 		bt->start_lba = 0;
 		bt->end_lba = -1ULL;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36214.68136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ3-00067q-Vz; Tue, 24 Nov 2020 13:41:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36214.68136; Tue, 24 Nov 2020 13:41:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ3-00067F-8O; Tue, 24 Nov 2020 13:41:01 +0000
Received: by outflank-mailman (input) for mailman id 36214;
 Tue, 24 Nov 2020 13:40:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPG-0000Qf-4h
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:54 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41cb8820-a934-4b1d-8e26-edf0a4933341;
 Tue, 24 Nov 2020 13:28:58 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMz-0006bW-Ik; Tue, 24 Nov 2020 13:28:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPG-0000Qf-4h
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:54 +0000
X-Inumbo-ID: 41cb8820-a934-4b1d-8e26-edf0a4933341
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 41cb8820-a934-4b1d-8e26-edf0a4933341;
	Tue, 24 Nov 2020 13:28:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=GvPG4PqG1HoxFrUZUmZkM34iIp/y9TiIMmmO3FfXfkQ=; b=UXcIHkrYnPGd2/QIRLGaBVYf+e
	0/rinjHWk+4qyNnPlWoC+vFKIiW+/w6eB6MPgLuRGHNzV/BlmKeWxR+fCkabLtgqmpqYLODQK2F4Y
	TNV7mRCk3eJkdhmhZPCMzdUwCutX6B9EFglaPZKsDWQFl7fSXBGZ9I3UcrwZKlFw7dpgRk0QLwtv5
	MLnK//Qp55CbYNt+8EoXLU2Wm7uEH00ZYcOt9IXRofTnyRwQqly5MNRhYk7ruixE7JnGPXGi9thJO
	iHg6QY9r0szSId9BsqxORTDdqPs2FyUZ2xp2EAQjOmXy7towjAfqy+tmntGnikW8C482GmuaeKWWU
	5ZTmEVnw==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMz-0006bW-Ik; Tue, 24 Nov 2020 13:28:35 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 25/45] block: reference struct block_device from struct hd_struct
Date: Tue, 24 Nov 2020 14:27:31 +0100
Message-Id: <20201124132751.3747337-26-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

To simplify block device lookup and a few other upcoming areas, make sure
that we always have a struct block_device available for each disk and
each partition.  The only downside of this is that each device and
partition uses a little more memory.  The upside will be that a lot of
code can be simplified.

With that, all we need to look up the block device is to look up the
inode and do a few sanity checks on the gendisk, instead of a separate
lookup for the gendisk.  These checks are in a new RCU critical section
and the disk is now freed using kfree_rcu().

As part of the change switch bdget() to only find existing block devices,
given that we know that the block_device structure must be allocated at
probe / partition scan time.
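One consequence visible in the diff below is the switch of the extended devt allocator from an IDR (which maps each id to a hd_struct pointer) to a plain IDA: once lookup goes through the block device inode, the table no longer needs to carry a payload, so a bare id allocator suffices. The following is a userspace bitmap model of that IDA-style behavior, not the kernel's actual IDA implementation; names and the 64-id limit are assumptions for the sketch.

```c
/*
 * Toy model of ida_alloc_range()/ida_free(): hand out the smallest
 * free id and recycle freed ids.  No id -> pointer mapping is kept,
 * which is exactly why the IDR could be dropped.
 */
#define NR_IDS 64
static unsigned long long id_bitmap;

static int ida_alloc_sketch(void)
{
	int i;

	for (i = 0; i < NR_IDS; i++) {
		if (!(id_bitmap & (1ULL << i))) {
			id_bitmap |= 1ULL << i;
			return i;
		}
	}
	return -1;  /* analogue of -ENOSPC, reported as -EBUSY by blk_alloc_devt() */
}

static void ida_free_sketch(int id)
{
	id_bitmap &= ~(1ULL << id);
}
```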

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk.h             |   2 +-
 block/genhd.c           | 212 ++++------------------------------------
 block/partitions/core.c |  29 +++---
 fs/block_dev.c          | 171 ++++++++++++++++++--------------
 include/linux/blkdev.h  |   2 +
 include/linux/genhd.h   |   6 +-
 6 files changed, 140 insertions(+), 282 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index dfab98465db9a5..c4839abcfa27eb 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -352,7 +352,6 @@ struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
 
 int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
 void blk_free_devt(dev_t devt);
-void blk_invalidate_devt(dev_t devt);
 char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
 #define ADDPART_FLAG_RAID	1
@@ -384,6 +383,7 @@ static inline void hd_free_part(struct hd_struct *part)
 {
 	free_percpu(part->dkstats);
 	kfree(part->info);
+	bdput(part->bdev);
 	percpu_ref_exit(&part->ref);
 }
 
diff --git a/block/genhd.c b/block/genhd.c
index f46e89226fdf91..16c6b13242105b 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -27,17 +27,9 @@
 
 static struct kobject *block_depr;
 
-static DEFINE_XARRAY(bdev_map);
-static DEFINE_MUTEX(bdev_map_lock);
-
 /* for extended dynamic devt allocation, currently only one major is used */
 #define NR_EXT_DEVT		(1 << MINORBITS)
-
-/* For extended devt allocation.  ext_devt_lock prevents look up
- * results from going away underneath its user.
- */
-static DEFINE_SPINLOCK(ext_devt_lock);
-static DEFINE_IDR(ext_devt_idr);
+static DEFINE_IDA(ext_devt_ida);
 
 static void disk_check_events(struct disk_events *ev,
 			      unsigned int *clearing_ptr);
@@ -580,14 +572,7 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
 		return 0;
 	}
 
-	/* allocate ext devt */
-	idr_preload(GFP_KERNEL);
-
-	spin_lock_bh(&ext_devt_lock);
-	idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT);
-	spin_unlock_bh(&ext_devt_lock);
-
-	idr_preload_end();
+	idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT, GFP_KERNEL);
 	if (idx < 0)
 		return idx == -ENOSPC ? -EBUSY : idx;
 
@@ -606,26 +591,8 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
  */
 void blk_free_devt(dev_t devt)
 {
-	if (devt == MKDEV(0, 0))
-		return;
-
-	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
-		spin_lock_bh(&ext_devt_lock);
-		idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
-		spin_unlock_bh(&ext_devt_lock);
-	}
-}
-
-/*
- * We invalidate devt by assigning NULL pointer for devt in idr.
- */
-void blk_invalidate_devt(dev_t devt)
-{
-	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
-		spin_lock_bh(&ext_devt_lock);
-		idr_replace(&ext_devt_idr, NULL, blk_mangle_minor(MINOR(devt)));
-		spin_unlock_bh(&ext_devt_lock);
-	}
+	if (MAJOR(devt) == BLOCK_EXT_MAJOR)
+		ida_free(&ext_devt_ida, blk_mangle_minor(MINOR(devt)));
 }
 
 static char *bdevt_str(dev_t devt, char *buf)
@@ -640,28 +607,6 @@ static char *bdevt_str(dev_t devt, char *buf)
 	return buf;
 }
 
-static void blk_register_region(struct gendisk *disk)
-{
-	int i;
-
-	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < disk->minors; i++) {
-		if (xa_insert(&bdev_map, disk_devt(disk) + i, disk, GFP_KERNEL))
-			WARN_ON_ONCE(1);
-	}
-	mutex_unlock(&bdev_map_lock);
-}
-
-static void blk_unregister_region(struct gendisk *disk)
-{
-	int i;
-
-	mutex_lock(&bdev_map_lock);
-	for (i = 0; i < disk->minors; i++)
-		xa_erase(&bdev_map, disk_devt(disk) + i);
-	mutex_unlock(&bdev_map_lock);
-}
-
 static void disk_scan_partitions(struct gendisk *disk)
 {
 	struct block_device *bdev;
@@ -805,7 +750,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
 		ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
 		WARN_ON(ret);
 		bdi_set_owner(bdi, dev);
-		blk_register_region(disk);
+		bdev_add(disk->part0.bdev, devt);
 	}
 	register_disk(parent, disk, groups);
 	if (register_queue)
@@ -886,11 +831,8 @@ void del_gendisk(struct gendisk *disk)
 	blk_integrity_del(disk);
 	disk_del_events(disk);
 
-	/*
-	 * Block lookups of the disk until all bdevs are unhashed and the
-	 * disk is marked as dead (GENHD_FL_UP cleared).
-	 */
-	down_write(&disk->lookup_sem);
+	disk->flags &= ~GENHD_FL_UP;
+
 	/* invalidate stuff */
 	disk_part_iter_init(&piter, disk,
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
@@ -902,8 +844,6 @@ void del_gendisk(struct gendisk *disk)
 
 	invalidate_partition(disk, 0);
 	set_capacity(disk, 0);
-	disk->flags &= ~GENHD_FL_UP;
-	up_write(&disk->lookup_sem);
 
 	if (!(disk->flags & GENHD_FL_HIDDEN)) {
 		sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
@@ -916,16 +856,6 @@ void del_gendisk(struct gendisk *disk)
 	}
 
 	blk_unregister_queue(disk);
-	
-	if (!(disk->flags & GENHD_FL_HIDDEN))
-		blk_unregister_region(disk);
-	/*
-	 * Remove gendisk pointer from idr so that it cannot be looked up
-	 * while RCU period before freeing gendisk is running to prevent
-	 * use-after-free issues. Note that the device number stays
-	 * "in-use" until we really free the gendisk.
-	 */
-	blk_invalidate_devt(disk_devt(disk));
 
 	kobject_put(disk->part0.holder_dir);
 	kobject_put(disk->slave_dir);
@@ -964,7 +894,7 @@ static ssize_t disk_badblocks_store(struct device *dev,
 	return badblocks_store(disk->bb, page, len, 0);
 }
 
-static void request_gendisk_module(dev_t devt)
+void blk_request_module(dev_t devt)
 {
 	unsigned int major = MAJOR(devt);
 	struct blk_major_name **n;
@@ -984,84 +914,6 @@ static void request_gendisk_module(dev_t devt)
 		request_module("block-major-%d", MAJOR(devt));
 }
 
-static bool get_disk_and_module(struct gendisk *disk)
-{
-	struct module *owner;
-
-	if (!disk->fops)
-		return false;
-	owner = disk->fops->owner;
-	if (owner && !try_module_get(owner))
-		return false;
-	if (!kobject_get_unless_zero(&disk_to_dev(disk)->kobj)) {
-		module_put(owner);
-		return false;
-	}
-	return true;
-
-}
-
-/**
- * get_gendisk - get partitioning information for a given device
- * @devt: device to get partitioning information for
- * @partno: returned partition index
- *
- * This function gets the structure containing partitioning
- * information for the given device @devt.
- *
- * Context: can sleep
- */
-struct gendisk *get_gendisk(dev_t devt, int *partno)
-{
-	struct gendisk *disk = NULL;
-
-	might_sleep();
-
-	if (MAJOR(devt) != BLOCK_EXT_MAJOR) {
-		mutex_lock(&bdev_map_lock);
-		disk = xa_load(&bdev_map, devt);
-		if (!disk) {
-			mutex_unlock(&bdev_map_lock);
-			request_gendisk_module(devt);
-			mutex_lock(&bdev_map_lock);
-			disk = xa_load(&bdev_map, devt);
-		}
-		if (disk && !get_disk_and_module(disk))
-			disk = NULL;
-		if (disk)
-			*partno = devt - disk_devt(disk);
-		mutex_unlock(&bdev_map_lock);
-	} else {
-		struct hd_struct *part;
-
-		spin_lock_bh(&ext_devt_lock);
-		part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
-		if (part && get_disk_and_module(part_to_disk(part))) {
-			*partno = part->partno;
-			disk = part_to_disk(part);
-		}
-		spin_unlock_bh(&ext_devt_lock);
-	}
-
-	if (!disk)
-		return NULL;
-
-	/*
-	 * Synchronize with del_gendisk() to not return disk that is being
-	 * destroyed.
-	 */
-	down_read(&disk->lookup_sem);
-	if (unlikely((disk->flags & GENHD_FL_HIDDEN) ||
-		     !(disk->flags & GENHD_FL_UP))) {
-		up_read(&disk->lookup_sem);
-		put_disk_and_module(disk);
-		disk = NULL;
-	} else {
-		up_read(&disk->lookup_sem);
-	}
-	return disk;
-}
-
 /**
  * bdget_disk - do bdget() by gendisk and partition number
  * @disk: gendisk of interest
@@ -1559,11 +1411,6 @@ int disk_expand_part_tbl(struct gendisk *disk, int partno)
  *
  * This function releases all allocated resources of the gendisk.
  *
- * The struct gendisk refcount is incremented with get_gendisk() or
- * get_disk_and_module(), and its refcount is decremented with
- * put_disk_and_module() or put_disk(). Once the refcount reaches 0 this
- * function is called.
- *
  * Drivers which used __device_add_disk() have a gendisk with a request_queue
  * assigned. Since the request_queue sits on top of the gendisk for these
  * drivers we also call blk_put_queue() for them, and we expect the
@@ -1585,7 +1432,7 @@ static void disk_release(struct device *dev)
 	hd_free_part(&disk->part0);
 	if (disk->queue)
 		blk_put_queue(disk->queue);
-	kfree(disk);
+	kfree_rcu(disk, rcu);
 }
 struct class block_class = {
 	.name		= "block",
@@ -1748,16 +1595,17 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	if (!disk)
 		return NULL;
 
+	disk->part0.bdev = bdev_alloc(disk, 0);
+	if (!disk->part0.bdev)
+		goto out_free_disk;
+
 	disk->part0.dkstats = alloc_percpu(struct disk_stats);
 	if (!disk->part0.dkstats)
-		goto out_free_disk;
+		goto out_bdput;
 
-	init_rwsem(&disk->lookup_sem);
 	disk->node_id = node_id;
-	if (disk_expand_part_tbl(disk, 0)) {
-		free_percpu(disk->part0.dkstats);
-		goto out_free_disk;
-	}
+	if (disk_expand_part_tbl(disk, 0))
+		goto out_free_bdstats;
 
 	ptbl = rcu_dereference_protected(disk->part_tbl, 1);
 	rcu_assign_pointer(ptbl->part[0], &disk->part0);
@@ -1773,7 +1621,7 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	 */
 	hd_sects_seq_init(&disk->part0);
 	if (hd_ref_init(&disk->part0))
-		goto out_free_part0;
+		goto out_free_bdstats;
 
 	disk->minors = minors;
 	rand_initialize_disk(disk);
@@ -1782,8 +1630,10 @@ struct gendisk *__alloc_disk_node(int minors, int node_id)
 	device_initialize(disk_to_dev(disk));
 	return disk;
 
-out_free_part0:
-	hd_free_part(&disk->part0);
+out_free_bdstats:
+	free_percpu(disk->part0.dkstats);
+out_bdput:
+	bdput(disk->part0.bdev);
 out_free_disk:
 	kfree(disk);
 	return NULL;
@@ -1807,26 +1657,6 @@ void put_disk(struct gendisk *disk)
 }
 EXPORT_SYMBOL(put_disk);
 
-/**
- * put_disk_and_module - decrements the module and gendisk refcount
- * @disk: the struct gendisk to decrement the refcount for
- *
- * This is a counterpart of get_disk_and_module() and thus also of
- * get_gendisk().
- *
- * Context: Any context, but the last reference must not be dropped from
- *          atomic context.
- */
-void put_disk_and_module(struct gendisk *disk)
-{
-	if (disk) {
-		struct module *owner = disk->fops->owner;
-
-		put_disk(disk);
-		module_put(owner);
-	}
-}
-
 static void set_disk_ro_uevent(struct gendisk *gd, int ro)
 {
 	char event[] = "DISK_RO=1";
diff --git a/block/partitions/core.c b/block/partitions/core.c
index a02e224115943d..508b46da53eee5 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -340,12 +340,11 @@ void delete_partition(struct hd_struct *part)
 	device_del(part_to_dev(part));
 
 	/*
-	 * Remove gendisk pointer from idr so that it cannot be looked up
-	 * while RCU period before freeing gendisk is running to prevent
-	 * use-after-free issues. Note that the device number stays
-	 * "in-use" until we really free the gendisk.
+	 * Remove the block device from the inode hash, so that it cannot be
+	 * looked up while waiting for the RCU grace period.
 	 */
-	blk_invalidate_devt(part_devt(part));
+	remove_inode_hash(part->bdev->bd_inode);
+
 	percpu_ref_kill(&part->ref);
 }
 
@@ -368,6 +367,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	dev_t devt = MKDEV(0, 0);
 	struct device *ddev = disk_to_dev(disk);
 	struct device *pdev;
+	struct block_device *bdev;
 	struct disk_part_tbl *ptbl;
 	const char *dname;
 	int err;
@@ -402,11 +402,15 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!p)
 		return ERR_PTR(-EBUSY);
 
+	err = -ENOMEM;
 	p->dkstats = alloc_percpu(struct disk_stats);
-	if (!p->dkstats) {
-		err = -ENOMEM;
+	if (!p->dkstats)
 		goto out_free;
-	}
+
+	bdev = bdev_alloc(disk, partno);
+	if (!bdev)
+		goto out_free_stats;
+	p->bdev = bdev;
 
 	hd_sects_seq_init(p);
 	pdev = part_to_dev(p);
@@ -420,10 +424,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		struct partition_meta_info *pinfo;
 
 		pinfo = kzalloc_node(sizeof(*pinfo), GFP_KERNEL, disk->node_id);
-		if (!pinfo) {
-			err = -ENOMEM;
-			goto out_free_stats;
-		}
+		if (!pinfo)
+			goto out_bdput;
 		memcpy(pinfo, info, sizeof(*info));
 		p->info = pinfo;
 	}
@@ -470,6 +472,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	}
 
 	/* everything is up and running, commence */
+	bdev_add(bdev, devt);
 	rcu_assign_pointer(ptbl->part[partno], p);
 
 	/* suppress uevent if the disk suppresses it */
@@ -479,6 +482,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 
 out_free_info:
 	kfree(p->info);
+out_bdput:
+	bdput(bdev);
 out_free_stats:
 	free_percpu(p->dkstats);
 out_free:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index c0d1e8248ffe23..b9ee8fe5acd570 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -863,31 +863,45 @@ void __init bdev_cache_init(void)
 	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
 }
 
-static struct block_device *bdget(dev_t dev)
+struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 {
 	struct block_device *bdev;
 	struct inode *inode;
 
-	inode = iget_locked(blockdev_superblock, dev);
+	inode = new_inode(blockdev_superblock);
 	if (!inode)
 		return NULL;
+	inode->i_mode = S_IFBLK;
+	inode->i_rdev = 0;
+	inode->i_data.a_ops = &def_blk_aops;
+
+	bdev = I_BDEV(inode);
+	spin_lock_init(&bdev->bd_size_lock);
+	bdev->bd_disk = disk;
+	bdev->bd_partno = partno;
+	bdev->bd_contains = NULL;
+	bdev->bd_super = NULL;
+	bdev->bd_inode = inode;
+	bdev->bd_part_count = 0;
+	return bdev;
+}
 
-	bdev = &BDEV_I(inode)->bdev;
+void bdev_add(struct block_device *bdev, dev_t dev)
+{
+	bdev->bd_dev = dev;
+	bdev->bd_inode->i_rdev = dev;
+	bdev->bd_inode->i_ino = dev;
+	insert_inode_hash(bdev->bd_inode);
+}
 
-	if (inode->i_state & I_NEW) {
-		spin_lock_init(&bdev->bd_size_lock);
-		bdev->bd_contains = NULL;
-		bdev->bd_super = NULL;
-		bdev->bd_inode = inode;
-		bdev->bd_part_count = 0;
-		bdev->bd_dev = dev;
-		inode->i_mode = S_IFBLK;
-		inode->i_rdev = dev;
-		inode->i_data.a_ops = &def_blk_aops;
-		mapping_set_gfp_mask(&inode->i_data, GFP_USER);
-		unlock_new_inode(inode);
-	}
-	return bdev;
+static struct block_device *bdget(dev_t dev)
+{
+	struct inode *inode;
+
+	inode = ilookup(blockdev_superblock, dev);
+	if (!inode)
+		return NULL;
+	return &BDEV_I(inode)->bdev;
 }
 
 /**
@@ -1000,27 +1014,6 @@ int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
 }
 EXPORT_SYMBOL_GPL(bd_prepare_to_claim); /* only for the loop driver */
 
-static struct gendisk *bdev_get_gendisk(struct block_device *bdev, int *partno)
-{
-	struct gendisk *disk = get_gendisk(bdev->bd_dev, partno);
-
-	if (!disk)
-		return NULL;
-	/*
-	 * Now that we hold gendisk reference we make sure bdev we looked up is
-	 * not stale. If it is, it means device got removed and created before
-	 * we looked up gendisk and we fail open in such case. Associating
-	 * unhashed bdev with newly created gendisk could lead to two bdevs
-	 * (and thus two independent caches) being associated with one device
-	 * which is bad.
-	 */
-	if (inode_unhashed(bdev->bd_inode)) {
-		put_disk_and_module(disk);
-		return NULL;
-	}
-	return disk;
-}
-
 static void bd_clear_claiming(struct block_device *whole, void *holder)
 {
 	lockdep_assert_held(&bdev_lock);
@@ -1343,19 +1336,17 @@ EXPORT_SYMBOL_GPL(bdev_disk_changed);
  *  mutex_lock(part->bd_mutex)
  *    mutex_lock_nested(whole->bd_mutex, 1)
  */
-static int __blkdev_get(struct block_device *bdev, struct gendisk *disk,
-		int partno, fmode_t mode)
+static int __blkdev_get(struct block_device *bdev, fmode_t mode)
 {
+	struct gendisk *disk = bdev->bd_disk;
 	int ret;
 
 	if (!bdev->bd_openers) {
-		bdev->bd_disk = disk;
 		bdev->bd_contains = bdev;
-		bdev->bd_partno = partno;
 
-		if (!partno) {
+		if (!bdev->bd_partno) {
 			ret = -ENXIO;
-			bdev->bd_part = disk_get_part(disk, partno);
+			bdev->bd_part = disk_get_part(disk, 0);
 			if (!bdev->bd_part)
 				goto out_clear;
 
@@ -1384,7 +1375,7 @@ static int __blkdev_get(struct block_device *bdev, struct gendisk *disk,
 			struct block_device *whole = bdget_disk(disk, 0);
 
 			mutex_lock_nested(&whole->bd_mutex, 1);
-			ret = __blkdev_get(whole, disk, 0, mode);
+			ret = __blkdev_get(whole, mode);
 			if (ret) {
 				mutex_unlock(&whole->bd_mutex);
 				bdput(whole);
@@ -1394,9 +1385,8 @@ static int __blkdev_get(struct block_device *bdev, struct gendisk *disk,
 			mutex_unlock(&whole->bd_mutex);
 
 			bdev->bd_contains = whole;
-			bdev->bd_part = disk_get_part(disk, partno);
-			if (!(disk->flags & GENHD_FL_UP) ||
-			    !bdev->bd_part || !bdev->bd_part->nr_sects) {
+			bdev->bd_part = disk_get_part(disk, bdev->bd_partno);
+			if (!bdev->bd_part || !bdev->bd_part->nr_sects) {
 				__blkdev_put(whole, mode, 1);
 				ret = -ENXIO;
 				goto out_clear;
@@ -1425,12 +1415,46 @@ static int __blkdev_get(struct block_device *bdev, struct gendisk *disk,
 
  out_clear:
 	disk_put_part(bdev->bd_part);
-	bdev->bd_disk = NULL;
 	bdev->bd_part = NULL;
 	bdev->bd_contains = NULL;
 	return ret;
 }
 
+static struct block_device *get_bdev_disk_and_module(dev_t dev)
+{
+	struct block_device *bdev = NULL;
+	struct gendisk *disk = NULL;
+
+	rcu_read_lock();
+	bdev = bdget(dev);
+	if (!bdev) {
+		rcu_read_unlock();
+		blk_request_module(dev);
+		rcu_read_lock();
+
+		bdev = bdget(dev);
+	}
+	if (!bdev)
+		goto fail;
+
+	if (!kobject_get_unless_zero(&disk_to_dev(bdev->bd_disk)->kobj))
+		goto fail;
+	disk = bdev->bd_disk;
+	if ((disk->flags & (GENHD_FL_UP | GENHD_FL_HIDDEN)) != GENHD_FL_UP)
+		goto fail;
+	if (!try_module_get(bdev->bd_disk->fops->owner))
+		goto fail;
+	rcu_read_unlock();
+	return bdev;
+fail:
+	rcu_read_unlock();
+	if (disk)
+		put_disk(disk);
+	if (bdev)
+		bdput(bdev);
+	return NULL;
+}
+
 /**
  * blkdev_get_by_dev - open a block device by device number
  * @dev: device number of block device to open
@@ -1458,7 +1482,6 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 	bool unblock_events = true;
 	struct block_device *bdev;
 	struct gendisk *disk;
-	int partno;
 	int ret;
 
 	ret = devcgroup_check_permission(DEVCG_DEV_BLOCK,
@@ -1468,19 +1491,15 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 	if (ret)
 		return ERR_PTR(ret);
 
-	bdev = bdget(dev);
-	if (!bdev)
-		return ERR_PTR(-ENOMEM);
-
 	/*
 	 * If we lost a race with 'disk' being deleted, try again.  See md.c.
 	 */
 retry:
-	ret = -ENXIO;
-	disk = bdev_get_gendisk(bdev, &partno);
-	if (!disk)
-		goto bdput;
-
+	bdev = get_bdev_disk_and_module(dev);
+	if (!bdev)
+		return ERR_PTR(-ENXIO);
+	disk = bdev->bd_disk;
+ 
 	if (mode & FMODE_EXCL) {
 		WARN_ON_ONCE(!holder);
 	
@@ -1496,12 +1515,10 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 	disk_block_events(disk);
 
 	mutex_lock(&bdev->bd_mutex);
-	ret =__blkdev_get(bdev, disk, partno, mode);
-	if (!(mode & FMODE_EXCL)) {
-		; /* nothing to do here */
-	} else if (ret) {
-		bd_abort_claiming(bdev, claiming, holder);
-	} else {
+	ret =__blkdev_get(bdev, mode);
+	if (ret)
+		goto abort_claiming;
+	if (mode & FMODE_EXCL) {
 		bd_finish_claiming(bdev, claiming, holder);
 
 		/*
@@ -1521,21 +1538,25 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 
 	if (unblock_events)
 		disk_unblock_events(disk);
+	if (mode & FMODE_EXCL)
+		bdput(claiming);
+	return bdev;
 
+abort_claiming:
+	if (mode & FMODE_EXCL)
+		bd_abort_claiming(bdev, claiming, holder);
+	mutex_unlock(&bdev->bd_mutex);
+	disk_unblock_events(disk);
 put_claiming:
 	if (mode & FMODE_EXCL)
 		bdput(claiming);
 put_disk:
-	if (ret)
-		put_disk_and_module(disk);
+	module_put(disk->fops->owner);
+	put_disk(disk);
+	bdput(bdev);
 	if (ret == -ERESTARTSYS)
 		goto retry;
-bdput:
-	if (ret) {
-		bdput(bdev);
-		return ERR_PTR(ret);
-	}
-	return bdev;
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(blkdev_get_by_dev);
 
@@ -1637,7 +1658,6 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
-		bdev->bd_disk = NULL;
 		if (bdev_is_partition(bdev))
 			victim = bdev->bd_contains;
 		bdev->bd_contains = NULL;
@@ -1698,7 +1718,8 @@ void blkdev_put(struct block_device *bdev, fmode_t mode)
 	mutex_unlock(&bdev->bd_mutex);
 
 	__blkdev_put(bdev, mode, 0);
-	put_disk_and_module(disk);
+	module_put(disk->fops->owner);
+	put_disk(disk);
 }
 EXPORT_SYMBOL(blkdev_put);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bdd7339bcda462..48e5a8393cd793 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1994,6 +1994,8 @@ void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
 		void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 
+struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
+void bdev_add(struct block_device *bdev, dev_t dev);
 struct block_device *I_BDEV(struct inode *inode);
 struct block_device *bdget_part(struct hd_struct *part);
 struct block_device *bdgrab(struct block_device *bdev);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index ca5e356084c353..d068e46f9086ae 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -65,6 +65,7 @@ struct hd_struct {
 	struct disk_stats __percpu *dkstats;
 	struct percpu_ref ref;
 
+	struct block_device *bdev;
 	struct device __dev;
 	struct kobject *holder_dir;
 	int policy, partno;
@@ -193,7 +194,6 @@ struct gendisk {
 	int flags;
 	unsigned long state;
 #define GD_NEED_PART_SCAN		0
-	struct rw_semaphore lookup_sem;
 	struct kobject *slave_dir;
 
 	struct timer_rand_state *random;
@@ -207,6 +207,7 @@ struct gendisk {
 #endif
 	int node_id;
 	struct badblocks *bb;
+	struct rcu_head rcu;
 	struct lockdep_map lockdep_map;
 };
 
@@ -300,7 +301,6 @@ static inline void add_disk_no_queue_reg(struct gendisk *disk)
 }
 
 extern void del_gendisk(struct gendisk *gp);
-extern struct gendisk *get_gendisk(dev_t dev, int *partno);
 extern struct block_device *bdget_disk(struct gendisk *disk, int partno);
 
 extern void set_disk_ro(struct gendisk *disk, int flag);
@@ -338,7 +338,6 @@ int blk_drop_partitions(struct block_device *bdev);
 
 extern struct gendisk *__alloc_disk_node(int minors, int node_id);
 extern void put_disk(struct gendisk *disk);
-extern void put_disk_and_module(struct gendisk *disk);
 
 #define alloc_disk_node(minors, node_id)				\
 ({									\
@@ -389,6 +388,7 @@ static inline void bd_unlink_disk_holder(struct block_device *bdev,
 #endif /* CONFIG_SYSFS */
 
 dev_t blk_lookup_devt(const char *name, int partno);
+void blk_request_module(dev_t devt);
 #ifdef CONFIG_BLOCK
 void printk_all_partitions(void);
 #else /* CONFIG_BLOCK */
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36215.68145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ4-0006A2-N9; Tue, 24 Nov 2020 13:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36215.68145; Tue, 24 Nov 2020 13:41:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ4-00069G-36; Tue, 24 Nov 2020 13:41:02 +0000
Received: by outflank-mailman (input) for mailman id 36215;
 Tue, 24 Nov 2020 13:40:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQs-0000Qf-7g
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:34 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e65922a-250e-491f-9bcf-0657c63d0fc1;
 Tue, 24 Nov 2020 13:29:33 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNX-0006j2-Fz; Tue, 24 Nov 2020 13:29:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQs-0000Qf-7g
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:34 +0000
X-Inumbo-ID: 9e65922a-250e-491f-9bcf-0657c63d0fc1
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9e65922a-250e-491f-9bcf-0657c63d0fc1;
	Tue, 24 Nov 2020 13:29:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=0FpgDSjKIA8EJjMg/TQX03TXjJ1cPgViOyNw/xtQguo=; b=k7za8bhRgbJdBtuBRZkCatmJVt
	3javttSHXWCw3XYzxw1o5tiPDrIFOs8SOQHpPYQPM2MAnrcw7t136QgN7NdQQauuL3OWcSsN4we67
	Xkv82VHwyV+CBFF6AscMEzGPQB6JFJp1KZXSEFG2wKJkQTTE6vD5T2dDADC4jSevL6+ANGl460lQd
	zZHc5qMxeXpYKJ09TzlSIx2c1dD7VxWlqCNAxqUCX7UI9CS+1JMTIOE67cnMC0Vt2j4MJy/OIqz0n
	xvf9q5qYOIF6oiTk6b2ZnbNdBsTuUHnzdKE/wa1IuK3gIVfKhY/sG8+i7LRqRDPSs+fqgPSACvntx
	QGXr7dow==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNX-0006j2-Fz; Tue, 24 Nov 2020 13:29:07 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 43/45] f2fs: remove a few bd_part checks
Date: Tue, 24 Nov 2020 14:27:49 +0100
Message-Id: <20201124132751.3747337-44-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

bd_part is never NULL for a block device in use by a file system, so
remove the checks.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/f2fs/checkpoint.c | 5 +----
 fs/f2fs/sysfs.c      | 9 ---------
 2 files changed, 1 insertion(+), 13 deletions(-)

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 023462e80e58d5..54a1905af052cc 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1395,7 +1395,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	__u32 crc32 = 0;
 	int i;
 	int cp_payload_blks = __cp_payload(sbi);
-	struct super_block *sb = sbi->sb;
 	struct curseg_info *seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
 	u64 kbytes_written;
 	int err;
@@ -1489,9 +1488,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	start_blk += data_sum_blocks;
 
 	/* Record write statistics in the hot node summary */
-	kbytes_written = sbi->kbytes_written;
-	if (sb->s_bdev->bd_part)
-		kbytes_written += BD_PART_WRITTEN(sbi);
+	kbytes_written = sbi->kbytes_written + BD_PART_WRITTEN(sbi);
 
 	seg_i->journal->info.kbytes_written = cpu_to_le64(kbytes_written);
 
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index ec77ccfea923dc..24e876e849c512 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -90,11 +90,6 @@ static ssize_t free_segments_show(struct f2fs_attr *a,
 static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
-	struct super_block *sb = sbi->sb;
-
-	if (!sb->s_bdev->bd_part)
-		return sprintf(buf, "0\n");
-
 	return sprintf(buf, "%llu\n",
 			(unsigned long long)(sbi->kbytes_written +
 			BD_PART_WRITTEN(sbi)));
@@ -103,12 +98,8 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 static ssize_t features_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
-	struct super_block *sb = sbi->sb;
 	int len = 0;
 
-	if (!sb->s_bdev->bd_part)
-		return sprintf(buf, "0\n");
-
 	if (f2fs_sb_has_encrypt(sbi))
 		len += scnprintf(buf, PAGE_SIZE - len, "%s",
 						"encryption");
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36216.68162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ6-0006G2-O5; Tue, 24 Nov 2020 13:41:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36216.68162; Tue, 24 Nov 2020 13:41:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZ6-0006F9-7h; Tue, 24 Nov 2020 13:41:04 +0000
Received: by outflank-mailman (input) for mailman id 36216;
 Tue, 24 Nov 2020 13:40:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYO8-0000Qf-2s
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:44 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5dced3d8-43cd-4c46-9b38-9e502cd89644;
 Tue, 24 Nov 2020 13:28:41 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYMq-0006Yi-HT; Tue, 24 Nov 2020 13:28:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYO8-0000Qf-2s
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:29:44 +0000
X-Inumbo-ID: 5dced3d8-43cd-4c46-9b38-9e502cd89644
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5dced3d8-43cd-4c46-9b38-9e502cd89644;
	Tue, 24 Nov 2020 13:28:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=haCO3X861N4gYeaBWGi7WpA4oycN+6CCoLtCh0jGV0A=; b=UbR0rqfEviC2A8JyLwBgBA5M5p
	+ivg+qHtDcLcBHMdD64n1DMXXEp3t8ybLC4npUOUn/qYfKCmt7Kr5p/jhadGMMWeJowrUm92nKPQz
	0DO6dAQGwlEARF+Tq0YvNLL9RIIo2Tuk36m6mPWI9mpe2ZrBZzeuv6v2bnWEH/X0m1YkmNAGXk04j
	mo6XzNihgFi5lQUd3e3kT5gVdg8KsVapZp96LnJ5YCrBSloPSlJds/FevxYe3saLdUdlnf8vi4Ktr
	Ea1Au77+mqNyRWCMV4E1ZfyFZIEedVVP76a4K3XW3ZfINq8T12WQoSwsrZLF2tfsp147TOxDzEcif
	V/8iGe5Q==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYMq-0006Yi-HT; Tue, 24 Nov 2020 13:28:25 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 20/45] block: refactor __blkdev_put
Date: Tue, 24 Nov 2020 14:27:26 +0100
Message-Id: <20201124132751.3747337-21-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Reorder the code so that there is one big section for the last close, and
use bdev_is_partition instead of comparing against bd_contains.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 fs/block_dev.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 437f67e12b2838..88847839ef0102 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1738,22 +1738,22 @@ static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
 		WARN_ON_ONCE(bdev->bd_holders);
 		sync_blockdev(bdev);
 		kill_bdev(bdev);
-
 		bdev_write_inode(bdev);
-	}
-	if (bdev->bd_contains == bdev) {
-		if (disk->fops->release)
+
+		if (!bdev_is_partition(bdev) && disk->fops->release)
 			disk->fops->release(disk, mode);
-	}
-	if (!bdev->bd_openers) {
+
 		disk_put_part(bdev->bd_part);
 		bdev->bd_part = NULL;
 		bdev->bd_disk = NULL;
-		if (bdev != bdev->bd_contains)
+		if (bdev_is_partition(bdev))
 			victim = bdev->bd_contains;
 		bdev->bd_contains = NULL;
 
 		put_disk_and_module(disk);
+	} else {
+		if (!bdev_is_partition(bdev) && disk->fops->release)
+			disk->fops->release(disk, mode);
 	}
 	mutex_unlock(&bdev->bd_mutex);
 	bdput(bdev);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36236.68178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZA-0006Q3-Gz; Tue, 24 Nov 2020 13:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36236.68178; Tue, 24 Nov 2020 13:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZA-0006Pe-6n; Tue, 24 Nov 2020 13:41:08 +0000
Received: by outflank-mailman (input) for mailman id 36236;
 Tue, 24 Nov 2020 13:41:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lBCh=E6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khYZ7-000505-W8
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:41:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.46]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b516c6c-9981-402a-99e8-cd038d8b2fe0;
 Tue, 24 Nov 2020 13:40:52 +0000 (UTC)
Received: from DB8PR04CA0011.eurprd04.prod.outlook.com (2603:10a6:10:110::21)
 by AM9PR08MB5890.eurprd08.prod.outlook.com (2603:10a6:20b:281::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Tue, 24 Nov
 2020 13:40:51 +0000
Received: from DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:110:cafe::14) by DB8PR04CA0011.outlook.office365.com
 (2603:10a6:10:110::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Tue, 24 Nov 2020 13:40:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT003.mail.protection.outlook.com (10.152.20.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 13:40:50 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Tue, 24 Nov 2020 13:40:49 +0000
Received: from de7edbc90859.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 54834EE7-6B04-4127-AF53-C815F1B7F397.1; 
 Tue, 24 Nov 2020 13:40:44 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id de7edbc90859.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 13:40:44 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB4869.eurprd08.prod.outlook.com (2603:10a6:10:de::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Tue, 24 Nov
 2020 13:40:40 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 13:40:40 +0000
X-Inumbo-ID: 9b516c6c-9981-402a-99e8-cd038d8b2fe0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQYXXJD7A+pbrSCOAZuoJZB3aa9/kD52SkdCWdP8M7w=;
 b=iMEdH7gvbaT/8j3T2g/9+dWRnCEvOtPYsGHo09DCh0aPJ67KVeQlcUXqSJz4uXIF63wNMq89fq/Elzbc0JBNmJ78i484fq1DJ8kilSDgBtLByg4Y3qiR9KxfI+R1OS00jZuFtAyoujDx0hlpaTU1XVBdGsvUrOJj4ffhRTCP3Ao=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2f94e8ae1c33e08f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aJcgC5al6QzZL23EdhUITd8QO7fTG37sn9c8+BEPGDNya6cJvTfgeqBLHOvYsEMs7ek8hVRjlaO3vWG2oyKIVWkxmMSxwuiv0SpcNtcxUJhP24fz0bZhaFJcAYQnQsz+bYkzB7FG75r3TI6ExTb+Eaqd3q+FWeNobzBaP1aMh7dvPVciP8TyLftuL2YOJNcRaUMXxs+jN5BI1o+7+ldgW2Tcy27YqKiaPHpevawJcfOdg5GVFCauuVD/7qvj9imjVMVKLN7wlfeFxnB/ty21tUy+sm5+LBwkObwxUDFeZyQEZ74T1iJdu+0tN1revdLQO++/4HuvABDs+5C+xzqBMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQYXXJD7A+pbrSCOAZuoJZB3aa9/kD52SkdCWdP8M7w=;
 b=ejOJByDK1a1gKopZcbBF4dPt+VySi6p424+ysL/5dp4VnB7fXMRweSLQB13kmL6u+bckEaNfJO4O4nZG1AtPspTaqvvjB3h5gdXRbuaJdu8QQsO5nWJKCZOXCLabuyFXGa6i0BqiWQS5FLTqJEFqhN7YTR17zVdF+Q85S5vWd9IOCp9Si82pSZzBxTm7pN0p0olxagIP5zMLL/0ps+d1CE2acjGa1w27fiTrtn2wANJECEn1iWfeZGo+4oxX+TCytc8bfm2o2BYN3agf9kPyN/hM1K0O5PsySDQmQN61CtsKZP+OCLFVJ3Y9s0rqanVHWrAU2Pg0uja1Bbh3VLtAog==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum #1530923
Thread-Topic: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum #1530923
Thread-Index: AQHWwlLMxtsLDEfokUmiDVx2NYurAanXSiSA
Date: Tue, 24 Nov 2020 13:40:40 +0000
Message-ID: <E5A460E5-7D10-4314-98B4-0D90CD173940@arm.com>
References:
 <61a105672650e7470710183f37351b821b818d1e.1606215998.git.bertrand.marquis@arm.com>
In-Reply-To:
 <61a105672650e7470710183f37351b821b818d1e.1606215998.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fd903b0d-087a-4c4c-a9de-08d8907e8ebe
x-ms-traffictypediagnostic: DBBPR08MB4869:|AM9PR08MB5890:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB5890888B55B51448026D2A31FCFB0@AM9PR08MB5890.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 1k7htJBalYPJcR3r3i0JgVP0hvKT2lVfVHHp/tvbMTK9Fgaqh2DBDGXibcVtbI1nAPxdRPaluTFu9jUJYAy2C77Kpa8kIIHwvg81JIHJep7M62g634uWT67vCyhBEkWzX6JHwhkdTqGHjU77TBJZx4NUXF1DD3b/9LagyohVN+Zqy37XSjnDSHr2NscTqKQBMFxsEPpyj4Y59Tw7qFfggVs+vT19n7YrtgXDmTqjUr5mAClzdfTTJQXlEVEUDdCzAHWhn3IoJ1FUhz2cGQCN48aHjwJqxjLBhCOm79zkvXZ8+/Ix4De1G3d7Nc3ejAxpZE0f0919fEnVXE/Kf4xeGw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(39860400002)(366004)(76116006)(5660300002)(8936002)(64756008)(66556008)(66946007)(91956017)(66446008)(66476007)(2616005)(86362001)(54906003)(71200400001)(37006003)(8676002)(6636002)(6486002)(316002)(33656002)(6512007)(26005)(4326008)(6506007)(186003)(6862004)(478600001)(53546011)(36756003)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?3bhdd1TS08abZ7krckLZ8Nf79lo6Bx2BpaFo8Kgy5D2iJXn05LMLcr8m7llu?=
 =?us-ascii?Q?2dNxMnROTDlDIFvLTYX7TYp4Y0DAjWckZdkUXEEjpSIPb6U2KNKWaU1AtG+o?=
 =?us-ascii?Q?RrPPwLFF60sUoqg32GPHFsy1qIgVLCheMSKVYke0oiX+aBopL1IhH7ITI0Qj?=
 =?us-ascii?Q?lHMDn3hgFyzEmK4Ph+5PTkcSMQ62Mzvxb+QFXG3JmpXTBUmg+TQ1/sNXYDJW?=
 =?us-ascii?Q?DJFzpvjmy7C0VA6CWRH8lajKY7pSk4pzkBaMLDrjKev3rXHETwIHqrUiC6b2?=
 =?us-ascii?Q?cP+ZMv7mwqQmBYN8dOYwtAJbRYmXHnh/GVnMH4xByFWDsG1mYPprXA6qda+0?=
 =?us-ascii?Q?mlygQT8P6kPI8P27kpbMWMVrHOL9eGG69ZHypDbuLxeNKoZwmkLXytJQ+tR+?=
 =?us-ascii?Q?CI2kZ2RQpbRimKl4LVjNhrsykUEOlKtuUIiupiJ6fBJsT51CGAn0GnqQE618?=
 =?us-ascii?Q?Eur838qqsRNyQsFlR9JKX2bPFPiJLzmJPn10439E0zXQ7cNQ0VTPEeTsuXMG?=
 =?us-ascii?Q?+mjyIppXqUpqwtyRb4fxpYhxWPYSBfBeqZ+UJ4xsQfNw7DqbTGvSkLhfL+3n?=
 =?us-ascii?Q?QEVT+8c59HkYYoM/b4TuR21yjlw0IGm1sqQJa8rls0NUIYmagm9VFMeddxht?=
 =?us-ascii?Q?bk4ZT5S2DNOZ+MLqnXL/LnaUekavvDdFxxivcKAOFHSKN9dXnC0MDhu7zxLZ?=
 =?us-ascii?Q?ojXaAUxSxJD3H4CHWEkbHg1nfHKR1yl1ZzTDdZrWyqV+IVEsgcfa64GRzlzk?=
 =?us-ascii?Q?dvwWyR8x0Og+p9bu1nz1DKbQAMTvRU3YN/Q9NK7vTzwpj6chJ/+OkXgHt1Vt?=
 =?us-ascii?Q?3l9rTpkocbz/0jdcTplzrCptVG5uSFRMbJqloC/3qwejbQBtOx20nWV5CBs4?=
 =?us-ascii?Q?K6vIr0S/UL7lBqw9RaamklrvIrHMttFt88tkNoBbk6kU3GdYds0Ux8dBEZ1r?=
 =?us-ascii?Q?w3hfz1vsDShwq7oFPC8lrDpp3lKpnKyjwT61ECJ6ufyMYeqPOV8pqGN9h1uH?=
 =?us-ascii?Q?i7VX?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D1904BFA5139EC45B7651B9D04A5CA40@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4869
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2f9ca3c5-284e-4893-518c-08d8907e8883
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wJ1/kcnak7xtOfiEpKZmGJeQhR1lKi4u7Fvv5p4uFXIbAzxpo/fDEK3VYZcZFocMaLsBdBXv1sUFgkh4g2/fHc4bSBMaMXJWGqH7O9uTvnhe4AdOh2l31M/tNqspfexhfqCtSEyEE3k0L8Vhswa6RhDCqDBvx+jMsWFZS2Q02Ov/CWOPJzsskcSrP8CAjO90ivbykU0zslS6Monn7tLvJr3pMlbo2iHrWvquCfZuYptU6K2mu2oIiPv+nP3ct+MEFm4JyN9oY/LWJVgj2ByyjT+iEOWnNIQD1n0IOWyt3PyLxRC+rF45QvFMHrci4oX0uSnqUcP98pSaHCdwtmXDlnyRXcdan3hoIqZ2GaE42157r/aScUPG8jf2tyH8RJSbkEZcNiJFw1Eu0Jm4P4Mhbg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(346002)(376002)(46966005)(107886003)(5660300002)(82740400003)(81166007)(47076004)(70206006)(70586007)(356005)(6636002)(316002)(33656002)(6486002)(336012)(4326008)(54906003)(37006003)(86362001)(36756003)(2616005)(53546011)(6862004)(6506007)(6512007)(478600001)(186003)(26005)(8676002)(2906002)(82310400003)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 13:40:50.7974
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fd903b0d-087a-4c4c-a9de-08d8907e8ebe
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5890

Hello,

> On 24 Nov 2020, at 11:12 am, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> On the Cortex A55, TLB entries can be allocated by a speculative AT
> instruction. If this is happening during a guest context switch with an
> inconsistent page table state in the guest, TLBs with wrong values might
> be allocated.
> The ARM64_WORKAROUND_AT_SPECULATE workaround is used as for erratum
> 1165522 on Cortex A76 or Neoverse N1.
> 
> This change is also introducing the MIDR identifier for the Cortex-A55.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
> ---
> docs/misc/arm/silicon-errata.txt | 1 +
> xen/arch/arm/cpuerrata.c         | 6 ++++++
> xen/include/asm-arm/processor.h  | 2 ++
> 3 files changed, 9 insertions(+)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index d183ba543f..27bf957ebf 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -45,6 +45,7 @@ stable hypervisors.
> | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
> | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
> | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
> +| ARM            | Cortex-A55      | #1530923        | N/A                     |
> | ARM            | Cortex-A57      | #852523         | N/A                     |
> | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
> | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index cb4795beec..b398d480f1 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -514,6 +514,12 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>         .capability = ARM64_WORKAROUND_AT_SPECULATE,
>         MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
>     },
> +    {
> +        /* Cortex-A55 (All versions as erratum is open in SDEN v14) */
> +        .desc = "ARM erratum 1530923",
> +        .capability = ARM64_WORKAROUND_AT_SPECULATE,
> +        MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
> +    },
>     {},
> };
> 
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index d3d12a9d19..87c8136022 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -53,6 +53,7 @@
> #define ARM_CPU_PART_CORTEX_A17     0xC0E
> #define ARM_CPU_PART_CORTEX_A15     0xC0F
> #define ARM_CPU_PART_CORTEX_A53     0xD03
> +#define ARM_CPU_PART_CORTEX_A55     0xD05
> #define ARM_CPU_PART_CORTEX_A57     0xD07
> #define ARM_CPU_PART_CORTEX_A72     0xD08
> #define ARM_CPU_PART_CORTEX_A73     0xD09
> @@ -64,6 +65,7 @@
> #define MIDR_CORTEX_A17 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A17)
> #define MIDR_CORTEX_A15 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A15)
> #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
> +#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
> #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
> #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
> #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36237.68184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZB-0006Rz-C1; Tue, 24 Nov 2020 13:41:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36237.68184; Tue, 24 Nov 2020 13:41:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZA-0006RT-UV; Tue, 24 Nov 2020 13:41:08 +0000
Received: by outflank-mailman (input) for mailman id 36237;
 Tue, 24 Nov 2020 13:41:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQ9-0000Qf-6M
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:49 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a34b5d5-5129-41fb-9cc5-cf60046a3f08;
 Tue, 24 Nov 2020 13:29:15 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNU-0006i4-0Y; Tue, 24 Nov 2020 13:29:04 +0000
X-Inumbo-ID: 5a34b5d5-5129-41fb-9cc5-cf60046a3f08
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=rkhCK9axDIdfwwNjyKoG+774dMOXuWJwR7IEMeelgXY=; b=PQ5bqenaVy/ulzz4fkq2HpLbJw
	jMFBpPUwLoaNoRweakNlk9vK6w2GGBD9V7NouqIFNFDrt+OFuoHGpwmoV+C2Dz9IgRy+zT4RekOgp
	yRVbSP1HZxkx0csM7/WeXyX3uhTcfBM6ujVZ95BYXFBNO2sl3g6g85SRDSlBaFSWrg58tnEvoOh+K
	DWiflISSMeOX5/4zOkK9GQGE9hU0GpueKGcJ+Rk0zYlSdyDNpm7azVHs27XfP5PWmBH75zq117BUo
	r3V1xIFl+5sBZkqGIXM+BInjWDE/7AFwJUoPfBw5Qf3TuPAlitujVvHtEPQHxUMHyJkpLz3BzWrap
	cex3173Q==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 41/45] block: pass a block_device to invalidate_partition
Date: Tue, 24 Nov 2020 14:27:47 +0100
Message-Id: <20201124132751.3747337-42-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Pass the block_device that is actually needed instead of looking it up
with bdget_disk.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 498c816e90df64..2985740eab084b 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -790,14 +790,8 @@ void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk)
 }
 EXPORT_SYMBOL(device_add_disk_no_queue_reg);
 
-static void invalidate_partition(struct gendisk *disk, int partno)
+static void invalidate_partition(struct block_device *bdev)
 {
-	struct block_device *bdev;
-
-	bdev = bdget_disk(disk, partno);
-	if (!bdev)
-		return;
-
 	fsync_bdev(bdev);
 	__invalidate_device(bdev, true);
 
@@ -806,7 +800,6 @@ static void invalidate_partition(struct gendisk *disk, int partno)
 	 * as last inode reference is dropped.
 	 */
 	remove_inode_hash(bdev->bd_inode);
-	bdput(bdev);
 }
 
 /**
@@ -847,12 +840,12 @@ void del_gendisk(struct gendisk *disk)
 	disk_part_iter_init(&piter, disk,
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
 	while ((part = disk_part_iter_next(&piter))) {
-		invalidate_partition(disk, part->bdev->bd_partno);
+		invalidate_partition(part->bdev);
 		delete_partition(part);
 	}
 	disk_part_iter_exit(&piter);
 
-	invalidate_partition(disk, 0);
+	invalidate_partition(disk->part0);
 	set_capacity(disk, 0);
 
 	if (!(disk->flags & GENHD_FL_HIDDEN)) {
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36238.68194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZC-0006V9-LT; Tue, 24 Nov 2020 13:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36238.68194; Tue, 24 Nov 2020 13:41:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZB-0006US-V6; Tue, 24 Nov 2020 13:41:09 +0000
Received: by outflank-mailman (input) for mailman id 36238;
 Tue, 24 Nov 2020 13:41:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYR2-0000Qf-8E
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:44 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82f6531f-2282-48d8-b18a-094676ab92e4;
 Tue, 24 Nov 2020 13:29:41 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNZ-0006jP-2n; Tue, 24 Nov 2020 13:29:09 +0000
X-Inumbo-ID: 82f6531f-2282-48d8-b18a-094676ab92e4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=IwwRPE9DRBc2LNo3Ny0FSBkKTpvGRuogpJShlQ+6zZ0=; b=lNpPy/ILgbZ6KaLqf9ls2+yyAK
	wDzUtjXyd0Dfbimo/3v57TN9AGQELMoz5CxCT2N0QQTU61y0uvBncRsq3hv+WR2m2385uKmDqb4mq
	5l4cWQYbKLUKvchnWWl06rLQNWn8V54/sLkOtQly6W8GqgMgXOsXBXRAVfEHEVP8joeDpxPSt+tcR
	ZgNjc/3/tBdDZvrybbvZykibFMqjHMwjzHJ73pZHRaGRoC+hm1mz2Dh0dfqj7xsQ0oiNuIcwbj7Zf
	cjdEBm1A/TPil2XHSYOKjNaXvkgEOxktA/T5z9JCpjPGFmJbaF/g6fTpgk0/ziFXn0TCJrzR+7dCo
	2seYSZYg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 44/45] block: merge struct block_device and struct hd_struct
Date: Tue, 24 Nov 2020 14:27:50 +0100
Message-Id: <20201124132751.3747337-45-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Instead of having two structures that represent each block device with
different lifetime rules, merge them into a single one.  This also
greatly simplifies the reference counting rules, as we can use the inode
reference count as the main reference count for the new struct
block_device, with the device model reference front-ending it for device
model interaction.  The percpu refcount in struct hd_struct is entirely
gone, given that a struct block_device must be opened, and thus valid,
for the duration of the I/O.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c        |   9 ++-
 block/blk.h               |   2 +-
 block/genhd.c             |  86 +++++++++-------------------
 block/partitions/core.c   | 116 +++++++++++++++-----------------------
 fs/block_dev.c            |   9 ---
 include/linux/blk_types.h |   8 ++-
 include/linux/blkdev.h    |   1 -
 include/linux/genhd.h     |  40 +++----------
 init/do_mounts.c          |  21 ++++---
 kernel/trace/blktrace.c   |  43 +++-----------
 10 files changed, 109 insertions(+), 226 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index a598f86e014137..99e0e0b3e347de 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -820,9 +820,9 @@ static void blkcg_fill_root_iostats(void)
 
 	class_dev_iter_init(&iter, &block_class, NULL, &disk_type);
 	while ((dev = class_dev_iter_next(&iter))) {
-		struct gendisk *disk = dev_to_disk(dev);
-		struct hd_struct *part = disk_get_part(disk, 0);
-		struct blkcg_gq *blkg = blk_queue_root_blkg(disk->queue);
+		struct block_device *bdev = dev_to_bdev(dev);
+		struct blkcg_gq *blkg =
+			blk_queue_root_blkg(bdev->bd_disk->queue);
 		struct blkg_iostat tmp;
 		int cpu;
 
@@ -830,7 +830,7 @@ static void blkcg_fill_root_iostats(void)
 		for_each_possible_cpu(cpu) {
 			struct disk_stats *cpu_dkstats;
 
-			cpu_dkstats = per_cpu_ptr(part->bdev->bd_stats, cpu);
+			cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
 			tmp.ios[BLKG_IOSTAT_READ] +=
 				cpu_dkstats->ios[STAT_READ];
 			tmp.ios[BLKG_IOSTAT_WRITE] +=
@@ -849,7 +849,6 @@ static void blkcg_fill_root_iostats(void)
 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
 			u64_stats_update_end(&blkg->iostat.sync);
 		}
-		disk_put_part(part);
 	}
 }
 
diff --git a/block/blk.h b/block/blk.h
index 9657c6da7c770c..98f0b1ae264120 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -356,7 +356,7 @@ char *disk_name(struct gendisk *hd, int partno, char *buf);
 #define ADDPART_FLAG_NONE	0
 #define ADDPART_FLAG_RAID	1
 #define ADDPART_FLAG_WHOLEDISK	2
-void delete_partition(struct hd_struct *part);
+void delete_partition(struct block_device *part);
 int bdev_add_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length);
 int bdev_del_partition(struct block_device *bdev, int partno);
diff --git a/block/genhd.c b/block/genhd.c
index f0bf10192066ac..e1f67b9951f0ec 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -96,13 +96,14 @@ const char *bdevname(struct block_device *bdev, char *buf)
 }
 EXPORT_SYMBOL(bdevname);
 
-static void part_stat_read_all(struct hd_struct *part, struct disk_stats *stat)
+static void part_stat_read_all(struct block_device *part,
+		struct disk_stats *stat)
 {
 	int cpu;
 
 	memset(stat, 0, sizeof(struct disk_stats));
 	for_each_possible_cpu(cpu) {
-		struct disk_stats *ptr = per_cpu_ptr(part->bdev->bd_stats, cpu);
+		struct disk_stats *ptr = per_cpu_ptr(part->bd_stats, cpu);
 		int group;
 
 		for (group = 0; group < NR_STAT_GROUPS; group++) {
@@ -157,36 +158,6 @@ struct block_device *__disk_get_part(struct gendisk *disk, int partno)
 	return rcu_dereference(ptbl->part[partno]);
 }
 
-/**
- * disk_get_part - get partition
- * @disk: disk to look partition from
- * @partno: partition number
- *
- * Look for partition @partno from @disk.  If found, increment
- * reference count and return it.
- *
- * CONTEXT:
- * Don't care.
- *
- * RETURNS:
- * Pointer to the found partition on success, NULL if not found.
- */
-struct hd_struct *disk_get_part(struct gendisk *disk, int partno)
-{
-	struct block_device *part;
-
-	rcu_read_lock();
-	part = __disk_get_part(disk, partno);
-	if (!part) {
-		rcu_read_unlock();
-		return NULL;
-	}
-
-	get_device(part_to_dev(part->bd_part));
-	rcu_read_unlock();
-	return part->bd_part;
-}
-
 /**
  * disk_part_iter_init - initialize partition iterator
  * @piter: iterator to initialize
@@ -840,7 +811,7 @@ void del_gendisk(struct gendisk *disk)
 			     DISK_PITER_INCL_EMPTY | DISK_PITER_REVERSE);
 	while ((part = disk_part_iter_next(&piter))) {
 		invalidate_partition(part);
-		delete_partition(part->bd_part);
+		delete_partition(part);
 	}
 	disk_part_iter_exit(&piter);
 
@@ -931,13 +902,13 @@ void blk_request_module(dev_t devt)
  */
 struct block_device *bdget_disk(struct gendisk *disk, int partno)
 {
-	struct hd_struct *part;
 	struct block_device *bdev = NULL;
 
-	part = disk_get_part(disk, partno);
-	if (part)
-		bdev = bdget_part(part);
-	disk_put_part(part);
+	rcu_read_lock();
+	bdev = __disk_get_part(disk, partno);
+	if (bdev)
+		bdgrab(bdev);
+	rcu_read_unlock();
 
 	return bdev;
 }
@@ -1154,24 +1125,22 @@ static ssize_t disk_ro_show(struct device *dev,
 ssize_t part_size_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%llu\n", bdev_nr_sectors(p->bdev));
+	return sprintf(buf, "%llu\n", bdev_nr_sectors(dev_to_bdev(dev)));
 }
 
 ssize_t part_stat_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	struct request_queue *q = part_to_disk(p)->queue;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev->bd_disk->queue;
 	struct disk_stats stat;
 	unsigned int inflight;
 
-	part_stat_read_all(p, &stat);
+	part_stat_read_all(bdev, &stat);
 	if (queue_is_mq(q))
-		inflight = blk_mq_in_flight(q, p->bdev);
+		inflight = blk_mq_in_flight(q, bdev);
 	else
-		inflight = part_in_flight(p->bdev);
+		inflight = part_in_flight(bdev);
 
 	return sprintf(buf,
 		"%8lu %8lu %8llu %8u "
@@ -1206,14 +1175,14 @@ ssize_t part_stat_show(struct device *dev,
 ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
 			   char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	struct request_queue *q = part_to_disk(p)->queue;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev->bd_disk->queue;
 	unsigned int inflight[2];
 
 	if (queue_is_mq(q))
-		blk_mq_in_flight_rw(q, p->bdev, inflight);
+		blk_mq_in_flight_rw(q, bdev, inflight);
 	else
-		part_in_flight_rw(p->bdev, inflight);
+		part_in_flight_rw(bdev, inflight);
 
 	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
 }
@@ -1261,16 +1230,14 @@ static DEVICE_ATTR(badblocks, 0644, disk_badblocks_show, disk_badblocks_store);
 ssize_t part_fail_show(struct device *dev,
 		       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%d\n", p->bdev->bd_make_it_fail);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->make_it_fail);
 }
 
 ssize_t part_fail_store(struct device *dev,
 			struct device_attribute *attr,
 			const char *buf, size_t count)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *p = dev_to_bdev(dev);
 	int i;
 
 	if (count > 0 && sscanf(buf, "%d", &i) > 0)
@@ -1484,7 +1451,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
 
 	disk_part_iter_init(&piter, gp, DISK_PITER_INCL_EMPTY_PART0);
 	while ((hd = disk_part_iter_next(&piter))) {
-		part_stat_read_all(hd->bd_part, &stat);
+		part_stat_read_all(hd, &stat);
 		if (queue_is_mq(gp->queue))
 			inflight = blk_mq_in_flight(gp->queue, hd);
 		else
@@ -1556,7 +1523,7 @@ dev_t blk_lookup_devt(const char *name, int partno)
 	class_dev_iter_init(&iter, &block_class, NULL, &disk_type);
 	while ((dev = class_dev_iter_next(&iter))) {
 		struct gendisk *disk = dev_to_disk(dev);
-		struct hd_struct *part;
+		struct block_device *part;
 
 		if (strcmp(dev_name(dev), name))
 			continue;
@@ -1569,13 +1536,12 @@ dev_t blk_lookup_devt(const char *name, int partno)
 				     MINOR(dev->devt) + partno);
 			break;
 		}
-		part = disk_get_part(disk, partno);
+		part = bdget_disk(disk, partno);
 		if (part) {
-			devt = part_devt(part);
-			disk_put_part(part);
+			devt = part->bd_dev;
+			bdput(part);
 			break;
 		}
-		disk_put_part(part);
 	}
 	class_dev_iter_exit(&iter);
 	return devt;
diff --git a/block/partitions/core.c b/block/partitions/core.c
index c189ebd4569812..8d774840b14047 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -182,44 +182,39 @@ static struct parsed_partitions *check_partition(struct gendisk *hd,
 static ssize_t part_partition_show(struct device *dev,
 				   struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%d\n", p->bdev->bd_partno);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->bd_partno);
 }
 
 static ssize_t part_start_show(struct device *dev,
 			       struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-
-	return sprintf(buf, "%llu\n",(unsigned long long)p->bdev->bd_start_sect);
+	return sprintf(buf, "%llu\n", dev_to_bdev(dev)->bd_start_sect);
 }
 
 static ssize_t part_ro_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
-	return sprintf(buf, "%d\n", p->bdev->bd_read_only);
+	return sprintf(buf, "%d\n", dev_to_bdev(dev)->bd_read_only);
 }
 
 static ssize_t part_alignment_offset_show(struct device *dev,
 					  struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *bdev = dev_to_bdev(dev);
 
 	return sprintf(buf, "%u\n",
-		queue_limit_alignment_offset(&part_to_disk(p)->queue->limits,
-				p->bdev->bd_start_sect));
+		queue_limit_alignment_offset(&bdev->bd_disk->queue->limits,
+				bdev->bd_start_sect));
 }
 
 static ssize_t part_discard_alignment_show(struct device *dev,
 					   struct device_attribute *attr, char *buf)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *bdev = dev_to_bdev(dev);
 
 	return sprintf(buf, "%u\n",
-		queue_limit_discard_alignment(&part_to_disk(p)->queue->limits,
-				p->bdev->bd_start_sect));
+		queue_limit_discard_alignment(&bdev->bd_disk->queue->limits,
+				bdev->bd_start_sect));
 }
 
 static DEVICE_ATTR(partition, 0444, part_partition_show, NULL);
@@ -264,20 +259,19 @@ static const struct attribute_group *part_attr_groups[] = {
 
 static void part_release(struct device *dev)
 {
-	struct hd_struct *p = dev_to_part(dev);
+	struct block_device *p = dev_to_bdev(dev);
 
 	blk_free_devt(dev->devt);
-	bdput(p->bdev);
+	bdput(p);
 }
 
 static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
 {
-	struct hd_struct *part = dev_to_part(dev);
+	struct block_device *part = dev_to_bdev(dev);
 
-	add_uevent_var(env, "PARTN=%u", part->bdev->bd_partno);
-	if (part->bdev->bd_meta_info && part->bdev->bd_meta_info->volname[0])
-		add_uevent_var(env, "PARTNAME=%s",
-			       part->bdev->bd_meta_info->volname);
+	add_uevent_var(env, "PARTN=%u", part->bd_partno);
+	if (part->bd_meta_info && part->bd_meta_info->volname[0])
+		add_uevent_var(env, "PARTNAME=%s", part->bd_meta_info->volname);
 	return 0;
 }
 
@@ -292,25 +286,25 @@ struct device_type part_type = {
  * Must be called either with bd_mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
-void delete_partition(struct hd_struct *part)
+void delete_partition(struct block_device *part)
 {
-	struct gendisk *disk = part_to_disk(part);
+	struct gendisk *disk = part->bd_disk;
 	struct disk_part_tbl *ptbl =
 		rcu_dereference_protected(disk->part_tbl, 1);
 
-	rcu_assign_pointer(ptbl->part[part->bdev->bd_partno], NULL);
+	rcu_assign_pointer(ptbl->part[part->bd_partno], NULL);
 	rcu_assign_pointer(ptbl->last_lookup, NULL);
 
-	kobject_put(part->bdev->bd_holder_dir);
-	device_del(part_to_dev(part));
+	kobject_put(part->bd_holder_dir);
+	device_del(&part->bd_device);
 
 	/*
 	 * Remove the block device from the inode hash, so that it cannot be
 	 * looked up while waiting for the RCU grace period.
 	 */
-	remove_inode_hash(part->bdev->bd_inode);
+	remove_inode_hash(part->bd_inode);
 
-	put_device(part_to_dev(part));
+	put_device(&part->bd_device);
 }
 
 static ssize_t whole_disk_show(struct device *dev,
@@ -324,11 +318,10 @@ static DEVICE_ATTR(whole_disk, 0444, whole_disk_show, NULL);
  * Must be called either with bd_mutex held, before a disk can be opened or
  * after all disk users are gone.
  */
-static struct hd_struct *add_partition(struct gendisk *disk, int partno,
+static struct block_device *add_partition(struct gendisk *disk, int partno,
 				sector_t start, sector_t len, int flags,
 				struct partition_meta_info *info)
 {
-	struct hd_struct *p;
 	dev_t devt = MKDEV(0, 0);
 	struct device *ddev = disk_to_dev(disk);
 	struct device *pdev;
@@ -367,9 +360,6 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	if (!bdev)
 		return ERR_PTR(-ENOMEM);
 
-	p = bdev->bd_part;
-	pdev = part_to_dev(p);
-
 	bdev->bd_start_sect = start;
 	bdev_set_nr_sectors(bdev, len);
 	bdev->bd_read_only = get_disk_ro(disk);
@@ -381,6 +371,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 			goto out_bdput;
 	}
 
+	pdev = &bdev->bd_device;
 	dname = dev_name(ddev);
 	if (isdigit(dname[strlen(dname) - 1]))
 		dev_set_name(pdev, "%sp%d", dname, partno);
@@ -422,7 +413,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	/* suppress uevent if the disk suppresses it */
 	if (!dev_get_uevent_suppress(ddev))
 		kobject_uevent(&pdev->kobj, KOBJ_ADD);
-	return p;
+	return bdev;
 
 out_bdput:
 	bdput(bdev);
@@ -459,7 +450,7 @@ static bool partition_overlaps(struct gendisk *disk, sector_t start,
 int bdev_add_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length)
 {
-	struct hd_struct *part;
+	struct block_device *part;
 
 	mutex_lock(&bdev->bd_mutex);
 	if (partition_overlaps(bdev->bd_disk, start, length, -1)) {
@@ -475,76 +466,59 @@ int bdev_add_partition(struct block_device *bdev, int partno,
 
 int bdev_del_partition(struct block_device *bdev, int partno)
 {
-	struct block_device *bdevp;
-	struct hd_struct *part = NULL;
+	struct block_device *part;
 	int ret;
 
-	bdevp = bdget_disk(bdev->bd_disk, partno);
-	if (!bdevp)
+	part = bdget_disk(bdev->bd_disk, partno);
+	if (!part)
 		return -ENXIO;
 
-	mutex_lock(&bdevp->bd_mutex);
+	mutex_lock(&part->bd_mutex);
 	mutex_lock_nested(&bdev->bd_mutex, 1);
 
-	ret = -ENXIO;
-	part = disk_get_part(bdev->bd_disk, partno);
-	if (!part)
-		goto out_unlock;
-
 	ret = -EBUSY;
-	if (bdevp->bd_openers)
+	if (part->bd_openers)
 		goto out_unlock;
 
-	sync_blockdev(bdevp);
-	invalidate_bdev(bdevp);
+	sync_blockdev(part);
+	invalidate_bdev(part);
 
 	delete_partition(part);
 	ret = 0;
 out_unlock:
 	mutex_unlock(&bdev->bd_mutex);
-	mutex_unlock(&bdevp->bd_mutex);
-	bdput(bdevp);
-	if (part)
-		disk_put_part(part);
+	mutex_unlock(&part->bd_mutex);
+	bdput(part);
 	return ret;
 }
 
 int bdev_resize_partition(struct block_device *bdev, int partno,
 		sector_t start, sector_t length)
 {
-	struct block_device *bdevp;
-	struct hd_struct *part;
+	struct block_device *part;
 	int ret = 0;
 
-	part = disk_get_part(bdev->bd_disk, partno);
+	part = bdget_disk(bdev->bd_disk, partno);
 	if (!part)
 		return -ENXIO;
 
-	ret = -ENOMEM;
-	bdevp = bdget_part(part);
-	if (!bdevp)
-		goto out_put_part;
-
-	mutex_lock(&bdevp->bd_mutex);
+	mutex_lock(&part->bd_mutex);
 	mutex_lock_nested(&bdev->bd_mutex, 1);
-
 	ret = -EINVAL;
-	if (start != part->bdev->bd_start_sect)
+	if (start != part->bd_start_sect)
 		goto out_unlock;
 
 	ret = -EBUSY;
 	if (partition_overlaps(bdev->bd_disk, start, length, partno))
 		goto out_unlock;
 
-	bdev_set_nr_sectors(bdevp, length);
+	bdev_set_nr_sectors(part, length);
 
 	ret = 0;
 out_unlock:
-	mutex_unlock(&bdevp->bd_mutex);
+	mutex_unlock(&part->bd_mutex);
 	mutex_unlock(&bdev->bd_mutex);
-	bdput(bdevp);
-out_put_part:
-	disk_put_part(part);
+	bdput(part);
 	return ret;
 }
 
@@ -577,7 +551,7 @@ int blk_drop_partitions(struct block_device *bdev)
 
 	disk_part_iter_init(&piter, bdev->bd_disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter)))
-		delete_partition(part->bd_part);
+		delete_partition(part);
 	disk_part_iter_exit(&piter);
 
 	return 0;
@@ -592,7 +566,7 @@ static bool blk_add_partition(struct gendisk *disk, struct block_device *bdev,
 {
 	sector_t size = state->parts[p].size;
 	sector_t from = state->parts[p].from;
-	struct hd_struct *part;
+	struct block_device *part;
 
 	if (!size)
 		return true;
@@ -632,7 +606,7 @@ static bool blk_add_partition(struct gendisk *disk, struct block_device *bdev,
 
 	if (IS_BUILTIN(CONFIG_BLK_DEV_MD) &&
 	    (state->parts[p].flags & ADDPART_FLAG_RAID))
-		md_autodetect_dev(part_to_dev(part)->devt);
+		md_autodetect_dev(part->bd_dev);
 
 	return true;
 }
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 1538b20ca4bd43..08171478473ead 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -39,7 +39,6 @@
 
 struct bdev_inode {
 	struct block_device bdev;
-	struct hd_struct hd;
 	struct inode vfs_inode;
 };
 
@@ -886,9 +885,6 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
 		iput(inode);
 		return NULL;
 	}
-	bdev->bd_part = &BDEV_I(inode)->hd;
-	memset(bdev->bd_part, 0, sizeof(*bdev->bd_part));
-	bdev->bd_part->bdev = bdev;
 	return bdev;
 }
 
@@ -921,11 +917,6 @@ struct block_device *bdgrab(struct block_device *bdev)
 }
 EXPORT_SYMBOL(bdgrab);
 
-struct block_device *bdget_part(struct hd_struct *part)
-{
-	return bdget(part_devt(part));
-}
-
 long nr_blockdev_pages(void)
 {
 	struct inode *inode;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 6edea5c1625909..866f74261b3ba8 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -8,6 +8,7 @@
 
 #include <linux/types.h>
 #include <linux/bvec.h>
+#include <linux/device.h>
 #include <linux/ktime.h>
 
 struct bio_set;
@@ -30,6 +31,7 @@ struct block_device {
 	struct super_block *	bd_super;
 	struct mutex		bd_mutex;	/* open/close mutex */
 	void *			bd_claiming;
+	struct device		bd_device;
 	void *			bd_holder;
 	int			bd_holders;
 	bool			bd_write_holder;
@@ -38,7 +40,6 @@ struct block_device {
 #endif
 	struct kobject		*bd_holder_dir;
 	u8			bd_partno;
-	struct hd_struct *	bd_part;
 	/* number of times partitions within this device have been opened. */
 	unsigned		bd_part_count;
 
@@ -61,8 +62,11 @@ struct block_device {
 #define bdev_whole(_bdev) \
 	((_bdev)->bd_disk->part0)
 
+#define dev_to_bdev(device) \
+	container_of((device), struct block_device, bd_device)
+
 #define bdev_kobj(_bdev) \
-	(&part_to_dev((_bdev)->bd_part)->kobj)
+	(&((_bdev)->bd_device.kobj))
 
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0e83989b9678c3..f390446919b745 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1995,7 +1995,6 @@ void blkdev_put(struct block_device *bdev, fmode_t mode);
 struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
 void bdev_add(struct block_device *bdev, dev_t dev);
 struct block_device *I_BDEV(struct inode *inode);
-struct block_device *bdget_part(struct hd_struct *part);
 struct block_device *bdgrab(struct block_device *bdev);
 void bdput(struct block_device *);
 
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index de2ee5ffeefe45..769dc22f574797 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -19,12 +19,6 @@
 #include <linux/blk_types.h>
 #include <asm/local.h>
 
-#define dev_to_part(device)	container_of((device), struct hd_struct, __dev)
-#define part_to_dev(part)	(&((part)->__dev))
-
-#define dev_to_disk(device)	(dev_to_part(device)->bdev->bd_disk)
-#define disk_to_dev(disk)	(part_to_dev((disk)->part0->bd_part))
-
 extern const struct device_type disk_type;
 extern struct device_type part_type;
 extern struct class block_class;
@@ -51,11 +45,6 @@ struct partition_meta_info {
 	u8 volname[PARTITION_META_INFO_VOLNAMELTH];
 };
 
-struct hd_struct {
-	struct block_device *bdev;
-	struct device __dev;
-};
-
 /**
  * DOC: genhd capability flags
  *
@@ -191,19 +180,21 @@ struct gendisk {
 	struct lockdep_map lockdep_map;
 };
 
+/*
+ * The gendisk is refcounted by the part0 block_device, and the bd_device
+ * therein is also used for device model presentation in sysfs.
+ */
+#define dev_to_disk(device) \
+	(dev_to_bdev(device)->bd_disk)
+#define disk_to_dev(disk) \
+	(&((disk)->part0->bd_device))
+
 #if IS_REACHABLE(CONFIG_CDROM)
 #define disk_to_cdi(disk)	((disk)->cdi)
 #else
 #define disk_to_cdi(disk)	NULL
 #endif
 
-static inline struct gendisk *part_to_disk(struct hd_struct *part)
-{
-	if (unlikely(!part))
-		return NULL;
-	return part->bdev->bd_disk;
-}
-
 static inline int disk_max_parts(struct gendisk *disk)
 {
 	if (disk->flags & GENHD_FL_EXT_DEVT)
@@ -222,19 +213,6 @@ static inline dev_t disk_devt(struct gendisk *disk)
 	return MKDEV(disk->major, disk->first_minor);
 }
 
-static inline dev_t part_devt(struct hd_struct *part)
-{
-	return part_to_dev(part)->devt;
-}
-
-extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
-
-static inline void disk_put_part(struct hd_struct *part)
-{
-	if (likely(part))
-		put_device(part_to_dev(part));
-}
-
 /*
  * Smarter partition iterator without context limits.
  */
diff --git a/init/do_mounts.c b/init/do_mounts.c
index 86bef93e72ebd6..a78e44ee6adb8d 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -76,11 +76,11 @@ struct uuidcmp {
  */
 static int match_dev_by_uuid(struct device *dev, const void *data)
 {
+	struct block_device *bdev = dev_to_bdev(dev);
 	const struct uuidcmp *cmp = data;
-	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->bdev->bd_meta_info ||
-	    strncasecmp(cmp->uuid, part->bdev->bd_meta_info->uuid, cmp->len))
+	if (!bdev->bd_meta_info ||
+	    strncasecmp(cmp->uuid, bdev->bd_meta_info->uuid, cmp->len))
 		return 0;
 	return 1;
 }
@@ -133,13 +133,13 @@ static dev_t devt_from_partuuid(const char *uuid_str)
 		 * Attempt to find the requested partition by adding an offset
 		 * to the partition number found by UUID.
 		 */
-		struct hd_struct *part;
+		struct block_device *part;
 
-		part = disk_get_part(dev_to_disk(dev),
-				     dev_to_part(dev)->bdev->bd_partno + offset);
+		part = bdget_disk(dev_to_disk(dev),
+				  dev_to_bdev(dev)->bd_partno + offset);
 		if (part) {
-			devt = part_devt(part);
-			put_device(part_to_dev(part));
+			devt = part->bd_dev;
+			bdput(part);
 		}
 	} else {
 		devt = dev->devt;
@@ -166,11 +166,10 @@ static dev_t devt_from_partuuid(const char *uuid_str)
  */
 static int match_dev_by_label(struct device *dev, const void *data)
 {
+	struct block_device *bdev = dev_to_bdev(dev);
 	const char *label = data;
-	struct hd_struct *part = dev_to_part(dev);
 
-	if (!part->bdev->bd_meta_info ||
-	    strcmp(label, part->bdev->bd_meta_info->volname))
+	if (!bdev->bd_meta_info || strcmp(label, bdev->bd_meta_info->volname))
 		return 0;
 	return 1;
 }
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 8a723a91ec5a06..a482a37848bff7 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -1810,30 +1810,15 @@ static ssize_t blk_trace_mask2str(char *buf, int mask)
 	return p - buf;
 }
 
-static struct request_queue *blk_trace_get_queue(struct block_device *bdev)
-{
-	if (bdev->bd_disk == NULL)
-		return NULL;
-
-	return bdev_get_queue(bdev);
-}
-
 static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 					 struct device_attribute *attr,
 					 char *buf)
 {
-	struct block_device *bdev = bdget_part(dev_to_part(dev));
-	struct request_queue *q;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev_get_queue(bdev);
 	struct blk_trace *bt;
 	ssize_t ret = -ENXIO;
 
-	if (bdev == NULL)
-		goto out;
-
-	q = blk_trace_get_queue(bdev);
-	if (q == NULL)
-		goto out_bdput;
-
 	mutex_lock(&q->debugfs_mutex);
 
 	bt = rcu_dereference_protected(q->blk_trace,
@@ -1856,9 +1841,6 @@ static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
 
 out_unlock_bdev:
 	mutex_unlock(&q->debugfs_mutex);
-out_bdput:
-	bdput(bdev);
-out:
 	return ret;
 }
 
@@ -1866,8 +1848,8 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 					  struct device_attribute *attr,
 					  const char *buf, size_t count)
 {
-	struct block_device *bdev;
-	struct request_queue *q;
+	struct block_device *bdev = dev_to_bdev(dev);
+	struct request_queue *q = bdev_get_queue(bdev);
 	struct blk_trace *bt;
 	u64 value;
 	ssize_t ret = -EINVAL;
@@ -1883,17 +1865,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 				goto out;
 			value = ret;
 		}
-	} else if (kstrtoull(buf, 0, &value))
-		goto out;
-
-	ret = -ENXIO;
-	bdev = bdget_part(dev_to_part(dev));
-	if (bdev == NULL)
-		goto out;
-
-	q = blk_trace_get_queue(bdev);
-	if (q == NULL)
-		goto out_bdput;
+	} else {
+		if (kstrtoull(buf, 0, &value))
+			goto out;
+	}
 
 	mutex_lock(&q->debugfs_mutex);
 
@@ -1931,8 +1906,6 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
 
 out_unlock_bdev:
 	mutex_unlock(&q->debugfs_mutex);
-out_bdput:
-	bdput(bdev);
 out:
 	return ret ? ret : count;
 }
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36239.68205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZE-0006ZI-8i; Tue, 24 Nov 2020 13:41:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36239.68205; Tue, 24 Nov 2020 13:41:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZD-0006Y8-C8; Tue, 24 Nov 2020 13:41:11 +0000
Received: by outflank-mailman (input) for mailman id 36239;
 Tue, 24 Nov 2020 13:41:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYP1-0000Qf-4a
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:39 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b249227-c3e0-400c-a07e-b6bc91474c75;
 Tue, 24 Nov 2020 13:28:53 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYN5-0006d2-N4; Tue, 24 Nov 2020 13:28:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYP1-0000Qf-4a
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:39 +0000
X-Inumbo-ID: 7b249227-c3e0-400c-a07e-b6bc91474c75
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7b249227-c3e0-400c-a07e-b6bc91474c75;
	Tue, 24 Nov 2020 13:28:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=dD7vTx4/xk9jtZI+HJvoO9Sft1DpGXtKTJgwao3L0Ug=; b=NA6mVSQjl/XbMhhE63L+5FK3EO
	XqJlkW1j6OinCBKUFr1IoSYH4717NtYwnS1tRjL9im3rXM5fUusOVhXh09uOgXBbRD3zOjP20pslw
	cnDzz+hN6iDvKTMbpUmY91qDbb5BIjDIRnwwoXlTQS3S+QFSWtf5Iq8BoLeRNK7SadrkegH3Gp721
	PHhF3hLrhKekdA9iGxEp+/LczQ46y1dEyxriI4j2p6TmOPRhHaUEb8WMRApq1jyePEekIGQ2Rg8oC
	sp0tD0+qmxTJX6EPlq+O5JKk6RY2pAHeOwWdWdli2LbyYaxur4IkswdoS5RPrXkorvoiIrVm12TrC
	sLEj4t1w==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYN5-0006d2-N4; Tue, 24 Nov 2020 13:28:40 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 27/45] block: simplify the block device claiming interface
Date: Tue, 24 Nov 2020 14:27:33 +0100
Message-Id: <20201124132751.3747337-28-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Stop passing the whole device as a separate argument, given that it
can be trivially deduced, and clean up the !holder debug check.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
---
 drivers/block/loop.c   | 12 +++++-----
 fs/block_dev.c         | 51 +++++++++++++++---------------------------
 include/linux/blkdev.h |  6 ++---
 3 files changed, 25 insertions(+), 44 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index c0df88b3300c41..d643c67be6acea 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1069,7 +1069,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	struct file	*file;
 	struct inode	*inode;
 	struct address_space *mapping;
-	struct block_device *claimed_bdev = NULL;
 	int		error;
 	loff_t		size;
 	bool		partscan;
@@ -1088,8 +1087,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	 * here to avoid changing device under exclusive owner.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev_whole(bdev);
-		error = bd_prepare_to_claim(bdev, claimed_bdev, loop_configure);
+		error = bd_prepare_to_claim(bdev, loop_configure);
 		if (error)
 			goto out_putf;
 	}
@@ -1176,15 +1174,15 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	mutex_unlock(&loop_ctl_mutex);
 	if (partscan)
 		loop_reread_partitions(lo, bdev);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 	return 0;
 
 out_unlock:
 	mutex_unlock(&loop_ctl_mutex);
 out_bdev:
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, loop_configure);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, loop_configure);
 out_putf:
 	fput(file);
 out:
diff --git a/fs/block_dev.c b/fs/block_dev.c
index e8d7de5fae00a9..43a0fda982c879 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -110,24 +110,20 @@ EXPORT_SYMBOL(invalidate_bdev);
 int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
 			loff_t lstart, loff_t lend)
 {
-	struct block_device *claimed_bdev = NULL;
-	int err;
-
 	/*
 	 * If we don't hold exclusive handle for the device, upgrade to it
 	 * while we discard the buffer cache to avoid discarding buffers
 	 * under live filesystem.
 	 */
 	if (!(mode & FMODE_EXCL)) {
-		claimed_bdev = bdev_whole(bdev);
-		err = bd_prepare_to_claim(bdev, claimed_bdev,
-					  truncate_bdev_range);
+		int err = bd_prepare_to_claim(bdev, truncate_bdev_range);
 		if (err)
 			return err;
 	}
+
 	truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
-	if (claimed_bdev)
-		bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
+	if (!(mode & FMODE_EXCL))
+		bd_abort_claiming(bdev, truncate_bdev_range);
 	return 0;
 }
 EXPORT_SYMBOL(truncate_bdev_range);
@@ -973,7 +969,6 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
 /**
  * bd_prepare_to_claim - claim a block device
  * @bdev: block device of interest
- * @whole: the whole device containing @bdev, may equal @bdev
  * @holder: holder trying to claim @bdev
  *
  * Claim @bdev.  This function fails if @bdev is already claimed by another
@@ -983,9 +978,12 @@ static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
  * RETURNS:
  * 0 if @bdev can be claimed, -EBUSY otherwise.
  */
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder)
+int bd_prepare_to_claim(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev_whole(bdev);
+
+	if (WARN_ON_ONCE(!holder))
+		return -EINVAL;
 retry:
 	spin_lock(&bdev_lock);
 	/* if someone else claimed, fail */
@@ -1025,15 +1023,15 @@ static void bd_clear_claiming(struct block_device *whole, void *holder)
 /**
  * bd_finish_claiming - finish claiming of a block device
  * @bdev: block device of interest
- * @whole: whole block device
  * @holder: holder that has claimed @bdev
  *
  * Finish exclusive open of a block device. Mark the device as exlusively
  * open by the holder and wake up all waiters for exclusive open to finish.
  */
-static void bd_finish_claiming(struct block_device *bdev,
-		struct block_device *whole, void *holder)
+static void bd_finish_claiming(struct block_device *bdev, void *holder)
 {
+	struct block_device *whole = bdev_whole(bdev);
+
 	spin_lock(&bdev_lock);
 	BUG_ON(!bd_may_claim(bdev, whole, holder));
 	/*
@@ -1058,11 +1056,10 @@ static void bd_finish_claiming(struct block_device *bdev,
  * also used when exclusive open is not actually desired and we just needed
  * to block other exclusive openers for a while.
  */
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		       void *holder)
+void bd_abort_claiming(struct block_device *bdev, void *holder)
 {
 	spin_lock(&bdev_lock);
-	bd_clear_claiming(whole, holder);
+	bd_clear_claiming(bdev_whole(bdev), holder);
 	spin_unlock(&bdev_lock);
 }
 EXPORT_SYMBOL(bd_abort_claiming);
@@ -1473,7 +1470,6 @@ static struct block_device *get_bdev_disk_and_module(dev_t dev)
  */
 struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 {
-	struct block_device *claiming;
 	bool unblock_events = true;
 	struct block_device *bdev;
 	struct gendisk *disk;
@@ -1496,15 +1492,9 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 	disk = bdev->bd_disk;
  
 	if (mode & FMODE_EXCL) {
-		WARN_ON_ONCE(!holder);
-	
-		ret = -ENOMEM;
-		claiming = bdget_disk(disk, 0);
-		if (!claiming)
-			goto put_disk;
-		ret = bd_prepare_to_claim(bdev, claiming, holder);
+		ret = bd_prepare_to_claim(bdev, holder);
 		if (ret)
-			goto put_claiming;
+			goto put_disk;
 	}
 
 	disk_block_events(disk);
@@ -1514,7 +1504,7 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 	if (ret)
 		goto abort_claiming;
 	if (mode & FMODE_EXCL) {
-		bd_finish_claiming(bdev, claiming, holder);
+		bd_finish_claiming(bdev, holder);
 
 		/*
 		 * Block event polling for write claims if requested.  Any write
@@ -1533,18 +1523,13 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
 
 	if (unblock_events)
 		disk_unblock_events(disk);
-	if (mode & FMODE_EXCL)
-		bdput(claiming);
 	return bdev;
 
 abort_claiming:
 	if (mode & FMODE_EXCL)
-		bd_abort_claiming(bdev, claiming, holder);
+		bd_abort_claiming(bdev, holder);
 	mutex_unlock(&bdev->bd_mutex);
 	disk_unblock_events(disk);
-put_claiming:
-	if (mode & FMODE_EXCL)
-		bdput(claiming);
 put_disk:
 	module_put(disk->fops->owner);
 	put_disk(disk);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 48e5a8393cd793..db1b11d6d07568 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1988,10 +1988,8 @@ void blkdev_show(struct seq_file *seqf, off_t offset);
 struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
 		void *holder);
 struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder);
-int bd_prepare_to_claim(struct block_device *bdev, struct block_device *whole,
-		void *holder);
-void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
-		void *holder);
+int bd_prepare_to_claim(struct block_device *bdev, void *holder);
+void bd_abort_claiming(struct block_device *bdev, void *holder);
 void blkdev_put(struct block_device *bdev, fmode_t mode);
 
 struct block_device *bdev_alloc(struct gendisk *disk, u8 partno);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36240.68215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZF-0006d6-HN; Tue, 24 Nov 2020 13:41:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36240.68215; Tue, 24 Nov 2020 13:41:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZE-0006bl-ND; Tue, 24 Nov 2020 13:41:12 +0000
Received: by outflank-mailman (input) for mailman id 36240;
 Tue, 24 Nov 2020 13:41:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPL-0000Qf-4n
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:59 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70933921-7f94-4146-983c-78fd261550a0;
 Tue, 24 Nov 2020 13:29:00 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYN7-0006dU-9l; Tue, 24 Nov 2020 13:28:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPL-0000Qf-4n
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:30:59 +0000
X-Inumbo-ID: 70933921-7f94-4146-983c-78fd261550a0
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 70933921-7f94-4146-983c-78fd261550a0;
	Tue, 24 Nov 2020 13:29:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=IQo7+3FqwGn+nveEvbMImUHejQ6B6iul3lBAGXRT6S0=; b=Dsx0SI5F7fK6yGFNHizYK2Luqs
	OwGWEA5YdR00VnJlh8sw0Mn6T+j7LB7jv8MhUDWibdIObFF6A/ZF43bZIWLBCgxRKYGFNUkJJ7lCX
	9uBkrERnn2jWXDLuUpDe59yHT0uXNScCDoyaocQmmAzQjXpKlE4KAKPbpyUe8oYhjDaCLacF1DCg9
	9yAbwO/HOM5BkHkFS4np8Nd9Fd6Or5Eb6xBaCRgf0KNMcbJFJUCb3s3f0OWWQOleSUCquZS1yGdOP
	AQYd9fB3rQkdijuOCSdOSZ4uOqYuxhOl5dj8wm8XxGDymP9DbEN9XvguCVt7lDx0KagQxCdhTuk1a
	i/kX79lg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYN7-0006dU-9l; Tue, 24 Nov 2020 13:28:41 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 28/45] block: simplify part_to_disk
Date: Tue, 24 Nov 2020 14:27:34 +0100
Message-Id: <20201124132751.3747337-29-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Now that struct hd_struct has a block_device pointer, use that to
find the disk.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/genhd.h | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index d068e46f9086ae..dcf86a3d4dedc4 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -219,13 +219,9 @@ struct gendisk {
 
 static inline struct gendisk *part_to_disk(struct hd_struct *part)
 {
-	if (likely(part)) {
-		if (part->partno)
-			return dev_to_disk(part_to_dev(part)->parent);
-		else
-			return dev_to_disk(part_to_dev(part));
-	}
-	return NULL;
+	if (unlikely(!part))
+		return NULL;
+	return part->bdev->bd_disk;
 }
 
 static inline int disk_max_parts(struct gendisk *disk)
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36241.68232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZH-0006lL-Rp; Tue, 24 Nov 2020 13:41:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36241.68232; Tue, 24 Nov 2020 13:41:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZH-0006jT-8k; Tue, 24 Nov 2020 13:41:15 +0000
Received: by outflank-mailman (input) for mailman id 36241;
 Tue, 24 Nov 2020 13:41:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYPQ-0000Qf-4u
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:04 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fec4524-e973-4418-a24f-5ab909f2a098;
 Tue, 24 Nov 2020 13:29:04 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNI-0006fd-9V; Tue, 24 Nov 2020 13:28:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYPQ-0000Qf-4u
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:31:04 +0000
X-Inumbo-ID: 1fec4524-e973-4418-a24f-5ab909f2a098
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1fec4524-e973-4418-a24f-5ab909f2a098;
	Tue, 24 Nov 2020 13:29:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=NQmGHAp8ccIKeWKzHSDEWMgq45K7twbBhC04AX3u2zo=; b=LD3aUh6zWx06zp1Z5Hd2CQ/dQG
	DfvGBwpoMcvO7Yz8mgjMzoezBV5tVQdN5HpA41kDsI15KL3v6qNkNWXDuxQX3QIXARr7UOFFet4ML
	7s0d/Mewr7Aa6NcLrs5z4wtNOmHCSa4T8JZE1yYgO0sHZb7exqUu8c/bWLVQM+oiT8/2uMRSn07yp
	GBsPonmOh8TQgUX3qfkRgVQdWa/7j6CQ1OofbIBkl68FUN7k2RGQoRhOsk2OYb2KW7C8bXP9GwX/I
	Y5UgK6gcqHputfdO8FvgBOq1pLnrJAa+D8ZbFcDaPxXceBBrzkMPjCFAIwNFVOftQy/pkuYdClKgM
	JpNXfQvg==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNI-0006fd-9V; Tue, 24 Nov 2020 13:28:52 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 34/45] block: move holder_dir to struct block_device
Date: Tue, 24 Nov 2020 14:27:40 +0100
Message-Id: <20201124132751.3747337-35-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the holder_dir field to struct block_device in preparation for
killing struct hd_struct.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/genhd.c             |  5 +++--
 block/partitions/core.c   |  8 ++++----
 fs/block_dev.c            | 11 +++++------
 include/linux/blk_types.h |  1 +
 include/linux/genhd.h     |  1 -
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index 20c7bf6d091e94..a991f0122e53d8 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -671,7 +671,8 @@ static void register_disk(struct device *parent, struct gendisk *disk,
 	 */
 	pm_runtime_set_memalloc_noio(ddev, true);
 
-	disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
+	disk->part0.bdev->bd_holder_dir =
+			kobject_create_and_add("holders", &ddev->kobj);
 	disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
 
 	if (disk->flags & GENHD_FL_HIDDEN) {
@@ -871,7 +872,7 @@ void del_gendisk(struct gendisk *disk)
 
 	blk_unregister_queue(disk);
 
-	kobject_put(disk->part0.holder_dir);
+	kobject_put(disk->part0.bdev->bd_holder_dir);
 	kobject_put(disk->slave_dir);
 
 	part_stat_set_all(&disk->part0, 0);
diff --git a/block/partitions/core.c b/block/partitions/core.c
index e24673b4cba61f..ff60a14ed4dcdd 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -344,7 +344,7 @@ void delete_partition(struct hd_struct *part)
 	 */
 	get_device(disk_to_dev(disk));
 	rcu_assign_pointer(ptbl->part[part->partno], NULL);
-	kobject_put(part->holder_dir);
+	kobject_put(part->bdev->bd_holder_dir);
 	device_del(part_to_dev(part));
 
 	/*
@@ -452,8 +452,8 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 		goto out_put;
 
 	err = -ENOMEM;
-	p->holder_dir = kobject_create_and_add("holders", &pdev->kobj);
-	if (!p->holder_dir)
+	bdev->bd_holder_dir = kobject_create_and_add("holders", &pdev->kobj);
+	if (!bdev->bd_holder_dir)
 		goto out_del;
 
 	dev_set_uevent_suppress(pdev, 0);
@@ -487,7 +487,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 out_remove_file:
 	device_remove_file(pdev, &dev_attr_whole_disk);
 out_del:
-	kobject_put(p->holder_dir);
+	kobject_put(bdev->bd_holder_dir);
 	device_del(pdev);
 out_put:
 	put_device(pdev);
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 2393395201aa6c..0b8d6009486643 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1137,7 +1137,7 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	WARN_ON_ONCE(!bdev->bd_holder);
 
 	/* FIXME: remove the following once add_disk() handles errors */
-	if (WARN_ON(!disk->slave_dir || !bdev->bd_part->holder_dir))
+	if (WARN_ON(!disk->slave_dir || !bdev->bd_holder_dir))
 		goto out_unlock;
 
 	holder = bd_find_holder_disk(bdev, disk);
@@ -1160,14 +1160,14 @@ int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
 	if (ret)
 		goto out_free;
 
-	ret = add_symlink(bdev->bd_part->holder_dir, &disk_to_dev(disk)->kobj);
+	ret = add_symlink(bdev->bd_holder_dir, &disk_to_dev(disk)->kobj);
 	if (ret)
 		goto out_del;
 	/*
 	 * bdev could be deleted beneath us which would implicitly destroy
 	 * the holder directory.  Hold on to it.
 	 */
-	kobject_get(bdev->bd_part->holder_dir);
+	kobject_get(bdev->bd_holder_dir);
 
 	list_add(&holder->list, &bdev->bd_holder_disks);
 	goto out_unlock;
@@ -1202,9 +1202,8 @@ void bd_unlink_disk_holder(struct block_device *bdev, struct gendisk *disk)
 
 	if (!WARN_ON_ONCE(holder == NULL) && !--holder->refcnt) {
 		del_symlink(disk->slave_dir, bdev_kobj(bdev));
-		del_symlink(bdev->bd_part->holder_dir,
-			    &disk_to_dev(disk)->kobj);
-		kobject_put(bdev->bd_part->holder_dir);
+		del_symlink(bdev->bd_holder_dir, &disk_to_dev(disk)->kobj);
+		kobject_put(bdev->bd_holder_dir);
 		list_del_init(&holder->list);
 		kfree(holder);
 	}
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 2f8ede04e5a94c..c0591e52d7d7ce 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -35,6 +35,7 @@ struct block_device {
 #ifdef CONFIG_SYSFS
 	struct list_head	bd_holder_disks;
 #endif
+	struct kobject		*bd_holder_dir;
 	u8			bd_partno;
 	struct hd_struct *	bd_part;
 	/* number of times partitions within this device have been opened. */
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 1e52f38b719db3..c2a8cf12c5cab5 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -55,7 +55,6 @@ struct hd_struct {
 
 	struct block_device *bdev;
 	struct device __dev;
-	struct kobject *holder_dir;
 	int policy, partno;
 #ifdef CONFIG_FAIL_MAKE_REQUEST
 	int make_it_fail;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:41:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36244.68240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZJ-0006q2-Go; Tue, 24 Nov 2020 13:41:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36244.68240; Tue, 24 Nov 2020 13:41:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYZI-0006ov-UZ; Tue, 24 Nov 2020 13:41:16 +0000
Received: by outflank-mailman (input) for mailman id 36244;
 Tue, 24 Nov 2020 13:41:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1khYQi-0000Qf-7e
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:24 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08250f10-ed3b-4acd-8142-aaba6d8fc385;
 Tue, 24 Nov 2020 13:29:25 +0000 (UTC)
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1khYNL-0006gB-5u; Tue, 24 Nov 2020 13:28:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=daQ6=E6=casper.srs.infradead.org=batv+cbe268a5dfa7b983a02e+6302+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1khYQi-0000Qf-7e
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:32:24 +0000
X-Inumbo-ID: 08250f10-ed3b-4acd-8142-aaba6d8fc385
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 08250f10-ed3b-4acd-8142-aaba6d8fc385;
	Tue, 24 Nov 2020 13:29:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=NGuMWbj7Hg5v+XoGXrw8eAVP/9gQlmCuUHhC+WDsnSI=; b=PtqbLTmGvI72SjOaseMqSoUl08
	gFij7fODIcUn1AJw6NyQEnXxMGhEqSJFQy0SK4i3BUfbpHfHaY3GadtPW1NFaHMUjuNEmqhI0LbEX
	MDKB13JlP2H1i8AWUXFcdqYryFwg224s1J+kcQt8ldbQri7UgunhmVOGWvbhExv8yF9MRqSOMF6h3
	FlrR3d7hwat2bwzeIiKvec23VdKPbe5SzrxAR+SzOup8yf23yYwFxKQToCC/mG2C7FXd/jomuQUnv
	o5egXoePz+VBB5OEp8ToM4bxj28Lgm60oHM8nynmakiBtLGQVXJLY0Gt3Y1sYS6SWivIcYOgdK6PG
	xUfcTHZQ==;
Received: from [2001:4bb8:180:5443:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYNL-0006gB-5u; Tue, 24 Nov 2020 13:28:55 +0000
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>,
	Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com,
	Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 36/45] block: move the policy field to struct block_device
Date: Tue, 24 Nov 2020 14:27:42 +0100
Message-Id: <20201124132751.3747337-37-hch@lst.de>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201124132751.3747337-1-hch@lst.de>
References: <20201124132751.3747337-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Move the policy field to struct block_device and rename it to the
more descriptive bd_read_only.  Also turn the field into a bool, as it
is used as such.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c          | 2 +-
 block/genhd.c             | 8 ++++----
 block/ioctl.c             | 2 +-
 block/partitions/core.c   | 4 ++--
 include/linux/blk_types.h | 1 +
 include/linux/genhd.h     | 4 ++--
 6 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9121390be97a76..d64ffcb6f9ae5d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -696,7 +696,7 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
 {
 	const int op = bio_op(bio);
 
-	if (part->policy && op_is_write(op)) {
+	if (part->bdev->bd_read_only && op_is_write(op)) {
 		char b[BDEVNAME_SIZE];
 
 		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
diff --git a/block/genhd.c b/block/genhd.c
index 8c734a4e8ff31c..8aed77cc8ad169 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1671,14 +1671,14 @@ void set_disk_ro(struct gendisk *disk, int flag)
 	struct disk_part_iter piter;
 	struct hd_struct *part;
 
-	if (disk->part0.policy != flag) {
+	if (disk->part0.bdev->bd_read_only != flag) {
 		set_disk_ro_uevent(disk, flag);
-		disk->part0.policy = flag;
+		disk->part0.bdev->bd_read_only = flag;
 	}
 
 	disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
 	while ((part = disk_part_iter_next(&piter)))
-		part->policy = flag;
+		part->bdev->bd_read_only = flag;
 	disk_part_iter_exit(&piter);
 }
 
@@ -1688,7 +1688,7 @@ int bdev_read_only(struct block_device *bdev)
 {
 	if (!bdev)
 		return 0;
-	return bdev->bd_part->policy;
+	return bdev->bd_read_only;
 }
 
 EXPORT_SYMBOL(bdev_read_only);
diff --git a/block/ioctl.c b/block/ioctl.c
index a6d8171221c7dc..d61d652078f41c 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -345,7 +345,7 @@ static int blkdev_roset(struct block_device *bdev, fmode_t mode,
 		if (ret)
 			return ret;
 	}
-	bdev->bd_part->policy = n;
+	bdev->bd_read_only = n;
 	return 0;
 }
 
diff --git a/block/partitions/core.c b/block/partitions/core.c
index ff60a14ed4dcdd..fd00428e437a63 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -199,7 +199,7 @@ static ssize_t part_ro_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
 {
 	struct hd_struct *p = dev_to_part(dev);
-	return sprintf(buf, "%d\n", p->policy ? 1 : 0);
+	return sprintf(buf, "%d\n", p->bdev->bd_read_only);
 }
 
 static ssize_t part_alignment_offset_show(struct device *dev,
@@ -420,7 +420,7 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
 	bdev->bd_start_sect = start;
 	bdev_set_nr_sectors(bdev, len);
 	p->partno = partno;
-	p->policy = get_disk_ro(disk);
+	bdev->bd_read_only = get_disk_ro(disk);
 
 	if (info) {
 		err = -ENOMEM;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index b237f1e4081405..758cf71c9aa2a6 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -23,6 +23,7 @@ struct block_device {
 	sector_t		bd_start_sect;
 	struct disk_stats __percpu *bd_stats;
 	unsigned long		bd_stamp;
+	bool			bd_read_only;	/* read-only policy */
 	dev_t			bd_dev;
 	int			bd_openers;
 	struct inode *		bd_inode;	/* will die */
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 5d46ea7be7e4f0..08d00b526b0a3b 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -55,7 +55,7 @@ struct hd_struct {
 
 	struct block_device *bdev;
 	struct device __dev;
-	int policy, partno;
+	int partno;
 	struct rcu_work rcu_work;
 };
 
@@ -279,7 +279,7 @@ extern void set_disk_ro(struct gendisk *disk, int flag);
 
 static inline int get_disk_ro(struct gendisk *disk)
 {
-	return disk->part0.policy;
+	return disk->part0.bdev->bd_read_only;
 }
 
 extern void disk_block_events(struct gendisk *disk);
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:51:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:51:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36362.68268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYia-0000Wu-BQ; Tue, 24 Nov 2020 13:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36362.68268; Tue, 24 Nov 2020 13:50:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYia-0000Wn-7s; Tue, 24 Nov 2020 13:50:52 +0000
Received: by outflank-mailman (input) for mailman id 36362;
 Tue, 24 Nov 2020 13:50:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khYiZ-0000Wd-6K; Tue, 24 Nov 2020 13:50:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khYiZ-0004v4-2P; Tue, 24 Nov 2020 13:50:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khYiY-00034u-MD; Tue, 24 Nov 2020 13:50:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khYiY-0002IE-Li; Tue, 24 Nov 2020 13:50:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khYiZ-0000Wd-6K; Tue, 24 Nov 2020 13:50:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=64ddWax/uBLtI+AtrLYwC8zR+Ue2EHukf4021uKZfbw=; b=MUj1G5XlCt0ebcFarafdjYYoKC
	GjNpD83J1Sz8h3I0FJrRIR8gxn19ZKQlitA5PVM9C+Udvk/js426SOgtwmHmv/gswrtGKKhFRAXM6
	AGtQxQyG0gJl3QuX8Tqga/dfTQowkxm48S2D6Q1pqEVMi18coui4uK0zPOQ6dyTCMxbQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khYiZ-0004v4-2P; Tue, 24 Nov 2020 13:50:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khYiY-00034u-MD; Tue, 24 Nov 2020 13:50:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khYiY-0002IE-Li; Tue, 24 Nov 2020 13:50:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156982-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156982: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8147e00e4fbfcc43b665dc6bf279b204c501ba04
X-Osstest-Versions-That:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 13:50:50 +0000

flight 156982 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156982/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8147e00e4fbfcc43b665dc6bf279b204c501ba04
baseline version:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad

Last test of basis   156907  2020-11-20 20:01:25 Z    3 days
Testing same since   156982  2020-11-24 11:01:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <JBeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b659a5cebd..8147e00e4f  8147e00e4fbfcc43b665dc6bf279b204c501ba04 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:53:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:53:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36389.68282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYl4-0000q5-TR; Tue, 24 Nov 2020 13:53:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36389.68282; Tue, 24 Nov 2020 13:53:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYl4-0000py-QQ; Tue, 24 Nov 2020 13:53:26 +0000
Received: by outflank-mailman (input) for mailman id 36389;
 Tue, 24 Nov 2020 13:53:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1khYak-000505-2s
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:42:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70400235-bde5-4dc3-8c2b-19a99aafe489;
 Tue, 24 Nov 2020 13:41:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EE837AC2D;
 Tue, 24 Nov 2020 13:41:53 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IyBN=E6=suse.de=colyli@srs-us1.protection.inumbo.net>)
	id 1khYak-000505-2s
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:42:46 +0000
X-Inumbo-ID: 70400235-bde5-4dc3-8c2b-19a99aafe489
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 70400235-bde5-4dc3-8c2b-19a99aafe489;
	Tue, 24 Nov 2020 13:41:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EE837AC2D;
	Tue, 24 Nov 2020 13:41:53 +0000 (UTC)
Subject: Re: [PATCH 13/45] block: add a bdev_kobj helper
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-14-hch@lst.de>
From: Coly Li <colyli@suse.de>
Message-ID: <cb689e01-60dc-9df8-3a94-006bc3c39367@suse.de>
Date: Tue, 24 Nov 2020 21:41:47 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.3
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-14-hch@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/24/20 9:27 PM, Christoph Hellwig wrote:
> Add a little helper to find the kobject for a struct block_device.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

For the bcache part, Acked-by: Coly Li <colyli@suse.de>

Thanks.

Coly Li

> ---
>  drivers/md/bcache/super.c |  7 ++-----
>  drivers/md/md.c           |  4 +---
>  fs/block_dev.c            |  6 +++---
>  fs/btrfs/sysfs.c          | 15 +++------------
>  include/linux/blk_types.h |  3 +++
>  5 files changed, 12 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 46a00134a36ae1..a6a5e21e4fd136 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -1447,8 +1447,7 @@ static int register_bdev(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
>  		goto err;
>  
>  	err = "error creating kobject";
> -	if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj,
> -			"bcache"))
> +	if (kobject_add(&dc->disk.kobj, bdev_kobj(bdev), "bcache"))
>  		goto err;
>  	if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj))
>  		goto err;
> @@ -2342,9 +2341,7 @@ static int register_cache(struct cache_sb *sb, struct cache_sb_disk *sb_disk,
>  		goto err;
>  	}
>  
> -	if (kobject_add(&ca->kobj,
> -			&part_to_dev(bdev->bd_part)->kobj,
> -			"bcache")) {
> +	if (kobject_add(&ca->kobj, bdev_kobj(bdev), "bcache")) {
>  		err = "error calling kobject_add";
>  		ret = -ENOMEM;
>  		goto out;

[snipped]


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 13:53:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 13:53:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36391.68295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYlB-0000tI-6A; Tue, 24 Nov 2020 13:53:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36391.68295; Tue, 24 Nov 2020 13:53:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYlB-0000tB-2i; Tue, 24 Nov 2020 13:53:33 +0000
Received: by outflank-mailman (input) for mailman id 36391;
 Tue, 24 Nov 2020 13:53:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ki3g=E6=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1khYl9-0000sM-5n
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:53:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16d3b942-b704-4c20-b5ed-c04bf73eb38e;
 Tue, 24 Nov 2020 13:53:28 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1khYkr-00014M-IZ; Tue, 24 Nov 2020 13:53:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ki3g=E6=infradead.org=willy@srs-us1.protection.inumbo.net>)
	id 1khYl9-0000sM-5n
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 13:53:31 +0000
X-Inumbo-ID: 16d3b942-b704-4c20-b5ed-c04bf73eb38e
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 16d3b942-b704-4c20-b5ed-c04bf73eb38e;
	Tue, 24 Nov 2020 13:53:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ZXWWj8sGpheRB/oPAny/oeYYPMI5xkrPoBeF/PO6t/I=; b=uFy7Ttpe28lYbENky3PW90QaTq
	xG4wEg+wloh0hNvJWawTJQg7XQAS91A4oAPYqDqFFgSWTUjyMdGkcvWeSWNAAcnJJnpdJMC7Jm0Jq
	M95Jiv0GtCKP7Jwi7y4hV5eX/5ntAWKqVIIOuyD36/oMrxKN+22zQUxm/erDkZPH5EFs5kxiNk/qX
	fbYUgHkg+8QptScbUSZ6yFCMWEuoNILFEypZ+PtlRUpUS1f/iW3EWfM0Vm7XWaklqtf+Jppv9QouI
	MA/OL8tDwMa4wuUd7DIRsV4kdVDn9/qL5FTWuQlH76deZdi3dut2mc+76r0aL8ewuSlgqLBT+dwl6
	clkzRyQg==;
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1khYkr-00014M-IZ; Tue, 24 Nov 2020 13:53:13 +0000
Date: Tue, 24 Nov 2020 13:53:13 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 02/45] filemap: consistently use ->f_mapping over
 ->i_mapping
Message-ID: <20201124135313.GA4327@casper.infradead.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-3-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-3-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:08PM +0100, Christoph Hellwig wrote:
> Use file->f_mapping in all remaining places that have a struct file
> available to properly handle the case where inode->i_mapping !=
> file_inode(file)->i_mapping.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:00:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36413.68307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYrW-0001sw-Va; Tue, 24 Nov 2020 14:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36413.68307; Tue, 24 Nov 2020 14:00:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYrW-0001sp-RM; Tue, 24 Nov 2020 14:00:06 +0000
Received: by outflank-mailman (input) for mailman id 36413;
 Tue, 24 Nov 2020 14:00:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UPq8=E6=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khYrV-0001gK-G1
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:00:05 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f1753fc-10a9-4c03-87a4-f45e32227480;
 Tue, 24 Nov 2020 14:00:02 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AODxroP023842
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 24 Nov 2020 14:59:54 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 760BA2E9CAC; Tue, 24 Nov 2020 14:59:48 +0100 (MET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=UPq8=E6=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
	id 1khYrV-0001gK-G1
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:00:05 +0000
X-Inumbo-ID: 8f1753fc-10a9-4c03-87a4-f45e32227480
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8f1753fc-10a9-4c03-87a4-f45e32227480;
	Tue, 24 Nov 2020 14:00:02 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
	by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id 0AODxroP023842
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
	Tue, 24 Nov 2020 14:59:54 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
	id 760BA2E9CAC; Tue, 24 Nov 2020 14:59:48 +0100 (MET)
Date: Tue, 24 Nov 2020 14:59:48 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124135948.GL2020@antioche.eu.org>
References: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 24 Nov 2020 14:59:55 +0100 (MET)

On Tue, Nov 24, 2020 at 01:21:02PM +0100, Roger Pau Monné wrote:
> [...]
> > What we're missing is LAPIC information, since the masked status logged
> > is unclear: (-MM) isn't fully matching up with "mask=0". But of course
> > the former is just a software representation, while the latter is what
> > the RTE holds. IOW for the interrupt to not get delivered, there needs
> > to be this or a higher ISR bit set (considering we don't use the TPR),
> > or (I think we can pretty much exclude this) we'd need to be running
> > with IRQs off for extended periods of time.
> 
> Let's dump the physical lapic(s) IRR and ISR together with the
> IO-APIC state. Can you please apply the following patch and use the
> 'i' key again? (please keep the previous patch applied)

Done, you'll find the log at
http://www-soc.lip6.fr/~bouyer/xen-log6.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36420.68319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYu4-0002CH-D8; Tue, 24 Nov 2020 14:02:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36420.68319; Tue, 24 Nov 2020 14:02:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khYu4-0002CA-9w; Tue, 24 Nov 2020 14:02:44 +0000
Received: by outflank-mailman (input) for mailman id 36420;
 Tue, 24 Nov 2020 14:02:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khYu3-0002C5-0n
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:02:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c518264a-cfcf-4971-9965-fddcfc553829;
 Tue, 24 Nov 2020 14:02:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CE6F7AD1E;
 Tue, 24 Nov 2020 14:02:40 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khYu3-0002C5-0n
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:02:43 +0000
X-Inumbo-ID: c518264a-cfcf-4971-9965-fddcfc553829
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c518264a-cfcf-4971-9965-fddcfc553829;
	Tue, 24 Nov 2020 14:02:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606226560; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=indIEahDfkT1LZzfP/mZPyUegRln11c4qOlELh8is6Y=;
	b=TqNCnQdBClekdG42HfuFy1uWlet99PX4hV3F3kWlME8aziceMfAdOidEUz5xYlp+xmRW9R
	4WdOj62txLL1k5UrsWzuACO+MbKqGQkJnDbmztTvNEpL5grgUhsbJguddWVg0/H0ZWWnOs
	sjD1tR3Xd5M7ZEHncSXNKRJX0WvFS/8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id CE6F7AD1E;
	Tue, 24 Nov 2020 14:02:40 +0000 (UTC)
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
Date: Tue, 24 Nov 2020 15:02:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124070106.26854-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 08:01, Juergen Gross wrote:
> Two cpus entering evtchn_fifo_set_pending() for the same event channel
> can race in case the first one gets interrupted after setting
> EVTCHN_FIFO_PENDING and when the other one manages to set
> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
> lead to evtchn_check_pollers() being called before the event is put
> properly into the queue, resulting eventually in the guest not seeing
> the event pending and thus blocking forever afterwards.
> 
> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
> lock") made the race just more obvious, while the fifo event channel
> implementation had this race from the beginning when an unmask operation
> was running in parallel with an event channel send operation.

Ah yes, but then also only for inter-domain channels, as it was
only in that case that the "wrong" domain's event lock was held.
IOW there was a much earlier change already where this issue
got widened (when the per-channel locking got introduced). This
then got reduced to the original scope by XSA-343's adding of
locking to evtchn_unmask(). (Not sure how much of this history
wants actually adding here. I'm writing it down not the least to
make sure I have a complete enough picture.)

> For avoiding this race the queue locking in evtchn_fifo_set_pending()
> needs to be reworked to cover the test of EVTCHN_FIFO_PENDING,
> EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED, too.

Perhaps mention that the prior (and imo more natural) approach
of taking consistent per-channel locks would have been an
alternative, until they got converted to rw ones?

> Additionally when an
> event channel needs to change queues both queues need to be locked
> initially.

Since this was (afaict) intentionally not the case before, I
think I would want to see a word spent on the "why", perhaps
better in a code comment than here. Even more so since you
delete the respective comment justifying the possible race as
permissible. And I have to admit right now I'm still uncertain
both ways, i.e. I neither have a clear understanding of why it
would have been considered fine the other way around before,
nor why the double locking is strictly needed.

> Fixes: 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock")
> Fixes: 88910061ec615b2d ("evtchn: add FIFO-based event channel hypercalls and port ops")
> Signed-off-by: Juergen Gross <jgross@suse.com>

I guess at least this one wants a Reported-by.

> @@ -204,6 +175,48 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>          return;
>      }
>  
> +    for ( try = 0; ; try++ )
> +    {
> +        union evtchn_fifo_lastq lastq;
> +        struct vcpu *old_v;

I think this one can have const added.

> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> +        old_v = d->vcpu[lastq.last_vcpu_id];
> +
> +        q = &v->evtchn_fifo->queue[evtchn->priority];
> +        old_q = &old_v->evtchn_fifo->queue[lastq.last_priority];
> +
> +        if ( q <= old_q )
> +        {
> +            spin_lock_irqsave(&q->lock, flags);
> +            if ( q != old_q )
> +                spin_lock(&old_q->lock);
> +        }
> +        else
> +        {
> +            spin_lock_irqsave(&old_q->lock, flags);
> +            spin_lock(&q->lock);
> +        }

Since the vast majority of cases is going to be q == old_q, would
it be worth structuring this like

        if ( q == old_q )
            spin_lock_irqsave(&q->lock, flags);
        else if ( q < old_q )
        {
            spin_lock_irqsave(&q->lock, flags);
            spin_lock(&old_q->lock);
        }
        else
        {
            spin_lock_irqsave(&old_q->lock, flags);
            spin_lock(&q->lock);
        }

saving (on average) half a conditional in this most common
case? (This is specifically different from the double locking in
event_channel.c, where the common case is to have different
entities. In fact double_evtchn_{,un}lock() look to pointlessly
check for chn1 == chn2 - I guess I'll make a patch.)

> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> +        old_v = d->vcpu[lastq.last_vcpu_id];
> +        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
> +             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
> +            break;
> +
> +        if ( q != old_q )
> +            spin_unlock(&old_q->lock);
> +        spin_unlock_irqrestore(&q->lock, flags);
> +
> +        if ( try == 3 )
> +        {
> +            gprintk(XENLOG_WARNING,
> +                    "dom%d port %d lost event (too many queue changes)\n",
> +                    d->domain_id, evtchn->port);
> +            return;

Originally evtchn_check_pollers() would still have been called
in this case. Wouldn't you better retain this, or else justify
the possibly observable change in behavior?

> @@ -228,22 +239,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>              goto done;
>          }

This if() right above here can, aiui, in principle be moved out
of the surrounding if(), at which point ...

> -        /*
> -         * No locking around getting the queue. This may race with
> -         * changing the priority but we are allowed to signal the
> -         * event once on the old priority.
> -         */
> -        q = &v->evtchn_fifo->queue[evtchn->priority];
> -
> -        old_q = lock_old_queue(d, evtchn, &flags);
> -        if ( !old_q )
> -            goto done;

... with all of this gone ...

>          if ( guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
> -        {
> -            spin_unlock_irqrestore(&old_q->lock, flags);
>              goto done;
> -        }

... this could become part of the outer if(), replacing the 2nd
guest_test_bit() there. (Possibly, if deemed worthwhile at all,
to be carried out in a separate follow-on patch, to keep focus
here on the actual goal.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:10:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:10:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36433.68331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZ15-0002Vq-5Y; Tue, 24 Nov 2020 14:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36433.68331; Tue, 24 Nov 2020 14:09:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZ15-0002Vj-2T; Tue, 24 Nov 2020 14:09:59 +0000
Received: by outflank-mailman (input) for mailman id 36433;
 Tue, 24 Nov 2020 14:09:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khZ14-0002Va-Bh
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:09:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a4531a9-0552-4d26-b49c-67a208347313;
 Tue, 24 Nov 2020 14:09:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1BA96AC2D;
 Tue, 24 Nov 2020 14:09:56 +0000 (UTC)
X-Inumbo-ID: 6a4531a9-0552-4d26-b49c-67a208347313
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606226996; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Y5DXCDsO/WoTaSU6W+lK4swzLhsvT65+FBFkej8CZm0=;
	b=qS5x6JMx4Zc13WSUIAfIIgc7Bxmvz6otUSlfJcD5pp8MSduIvnggU9zOCbR0Xkmi0cglr3
	Toi7gMJC+2cp2XPnkTB2RMo4FaYTUJR7mb4A+KwopFzA00V1fdw+dgzwsB7i1ikSqSb154
	U1wSVg1rbd8k3pM4fwUxxGdjahZf/K8=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
Date: Tue, 24 Nov 2020 15:09:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124135948.GL2020@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 14:59, Manuel Bouyer wrote:
> On Tue, Nov 24, 2020 at 01:21:02PM +0100, Roger Pau Monné wrote:
>> [...]
>>> What we're missing is LAPIC information, since the masked status logged
>>> is unclear: (-MM) isn't fully matching up with "mask=0". But of course
>>> the former is just a software representation, while the latter is what
>>> the RTE holds. IOW for the interrupt to not get delivered, there needs
>>> to be this or a higher ISR bit set (considering we don't use the TPR),
>>> or (I think we can pretty much exclude this) we'd need to be running
>>> with IRQs off for extended periods of time.
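(For context: the delivery rule referred to above is the x86 fixed-priority scheme from the Intel SDM. A pending vector is dispensed only when its priority class, vector >> 4, is strictly higher than the class of every in-service vector and of the TPR. A hypothetical sketch of just that rule, not Xen code:)

```c
#include <stdbool.h>
#include <stdint.h>

/* Priority class of an x86 interrupt vector (vector bits 7:4). */
static unsigned int prio_class(uint8_t vec)
{
    return vec >> 4;
}

/* A pending (IRR) vector is delivered only if its class is strictly
 * higher than both the highest in-service (ISR) vector's class and the
 * TPR's class.  highest_isr_vec == 0 means nothing is in service. */
static bool deliverable(uint8_t vec, uint8_t highest_isr_vec, uint8_t tpr)
{
    unsigned int blocking = prio_class(highest_isr_vec);

    if ( prio_class(tpr) > blocking )
        blocking = prio_class(tpr);

    return prio_class(vec) > blocking;
}
```

So for the vector 0x59 in question, any in-service vector of class 5 or above would hold off delivery indefinitely.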
>>
>> Let's dump the physical lapic(s) IRR and ISR together with the
>> IO-APIC state. Can you please apply the following patch and use the
>> 'i' key again? (please keep the previous patch applied)
> 
> Done, you'll find the log at
> http://www-soc.lip6.fr/~bouyer/xen-log6.txt

Hmm, I can't spot respective output. Are you sure you did this with
a hypervisor with Roger's latest patch in place?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:28:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:28:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36450.68343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZI3-0004Sk-Op; Tue, 24 Nov 2020 14:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36450.68343; Tue, 24 Nov 2020 14:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZI3-0004Sd-KJ; Tue, 24 Nov 2020 14:27:31 +0000
Received: by outflank-mailman (input) for mailman id 36450;
 Tue, 24 Nov 2020 14:27:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UPq8=E6=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khZI2-0004SY-Gf
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:27:30 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0ca2d64-7200-483a-a6e0-942f337b96a7;
 Tue, 24 Nov 2020 14:27:28 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AOERIsQ000171
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 24 Nov 2020 15:27:19 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id CFD642E9CAC; Tue, 24 Nov 2020 15:27:13 +0100 (MET)
X-Inumbo-ID: c0ca2d64-7200-483a-a6e0-942f337b96a7
Date: Tue, 24 Nov 2020 15:27:13 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
        Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124142713.GM2020@antioche.eu.org>
References: <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 24 Nov 2020 15:27:20 +0100 (MET)

On Tue, Nov 24, 2020 at 03:09:55PM +0100, Jan Beulich wrote:
> >> [...]
> >>> What we're missing is LAPIC information, since the masked status logged
> >>> is unclear: (-MM) isn't fully matching up with "mask=0". But of course
> >>> the former is just a software representation, while the latter is what
> >>> the RTE holds. IOW for the interrupt to not get delivered, there needs
> >>> to be this or a higher ISR bit set (considering we don't use the TPR),
> >>> or (I think we can pretty much exclude this) we'd need to be running
> >>> with IRQs off for extended periods of time.
> >>
> >> Let's dump the physical lapic(s) IRR and ISR together with the
> >> IO-APIC state. Can you please apply the following patch and use the
> >> 'i' key again? (please keep the previous patch applied)
> > 
> > Done, you'll find the log at
> > http://www-soc.lip6.fr/~bouyer/xen-log6.txt
> 
> Hmm, I can't spot respective output. Are you sure you did this with
> a hypervisor with Roger's latest patch in place?

Oops, sorry, I copied xen.gz to the wrong place.
New log at
http://www-soc.lip6.fr/~bouyer/xen-log7.txt

This one ends up in a panic; I hope you'll find what you expect in it.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:33:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36459.68355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZNr-0005jE-GN; Tue, 24 Nov 2020 14:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36459.68355; Tue, 24 Nov 2020 14:33:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZNr-0005j7-Bt; Tue, 24 Nov 2020 14:33:31 +0000
Received: by outflank-mailman (input) for mailman id 36459;
 Tue, 24 Nov 2020 14:33:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khZNp-0005j2-NO
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:33:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40e05122-fc9e-4b7e-a3c1-059dbbe08f0f;
 Tue, 24 Nov 2020 14:33:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E9100AC66;
 Tue, 24 Nov 2020 14:33:27 +0000 (UTC)
X-Inumbo-ID: 40e05122-fc9e-4b7e-a3c1-059dbbe08f0f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606228408; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=72US9npDgaskd4cP/ILcLEymqpRkyFp/DHcNxbrW19Y=;
	b=t7S8Hs3D9tderOZA65L2wQEJhxvoL8kYaIu66JGnnTSlWyLAo6u0PT8klQcg0mb0Fnehrm
	gmZ9Fvg3abO8fdepxnL+qW1JGQqmSycSDbf4txAyu8DR5P0t2w7WsH5UrybCsw4DpnJ1M3
	kJ+MbcAE/tZcPSZadTFe/E+3bVf0xtE=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <068d11b3-f2d6-d9f4-9565-e2dbf4292df1@suse.com>
Date: Tue, 24 Nov 2020 15:33:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124142713.GM2020@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 15:27, Manuel Bouyer wrote:
> On Tue, Nov 24, 2020 at 03:09:55PM +0100, Jan Beulich wrote:
>>>> [...]
>>>>> What we're missing is LAPIC information, since the masked status logged
>>>>> is unclear: (-MM) isn't fully matching up with "mask=0". But of course
>>>>> the former is just a software representation, while the latter is what
>>>>> the RTE holds. IOW for the interrupt to not get delivered, there needs
>>>>> to be this or a higher ISR bit set (considering we don't use the TPR),
>>>>> or (I think we can pretty much exclude this) we'd need to be running
>>>>> with IRQs off for extended periods of time.
>>>>
>>>> Let's dump the physical lapic(s) IRR and ISR together with the
>>>> IO-APIC state. Can you please apply the following patch and use the
>>>> 'i' key again? (please keep the previous patch applied)
>>>
>>> Done, you'll find the log at
>>> http://www-soc.lip6.fr/~bouyer/xen-log6.txt
>>
>> Hmm, I can't spot respective output. Are you sure you did this with
>> a hypervisor with Roger's latest patch in place?
> 
> Oops, sorry, I copied xen.gz to the wrong place.
> New log at
> http://www-soc.lip6.fr/~bouyer/xen-log7.txt
> 
> This one ends up in a panic,

Argh - too much output triggered the watchdog. I guess we need to
cut down on the vIO-APIC dumping, perhaps limiting it to just the
one RTE we care about. But let me (and Roger) see if there's
anything to be derived from the LAPIC state...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:36:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36466.68367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZQf-0005t3-4Z; Tue, 24 Nov 2020 14:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36466.68367; Tue, 24 Nov 2020 14:36:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZQf-0005sw-11; Tue, 24 Nov 2020 14:36:25 +0000
Received: by outflank-mailman (input) for mailman id 36466;
 Tue, 24 Nov 2020 14:36:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khZQd-0005sq-O8
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:36:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79ee8c41-88f3-4974-bf49-8178a65a14e6;
 Tue, 24 Nov 2020 14:36:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 39D3AAD63;
 Tue, 24 Nov 2020 14:36:22 +0000 (UTC)
X-Inumbo-ID: 79ee8c41-88f3-4974-bf49-8178a65a14e6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606228582; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jddJcJOBahWscYIeBSLixsP2HFFL5FW7IAtYuZSn+a8=;
	b=kVeWmAGd5wl0fqeiWH2rzszMPTrW61UhI8w5MqX2FHabKWEtxdR5TEYADP+EQ3Z/Ct+EQ2
	YwjM0Kh5v7ks2NT1ZJs1Aw9toBp6Y/JwvskZtYcb+MpBm45Lms8aC6tUHpISwgUnq2PssA
	zMYyzPLfh6OKOd4012VeR8T7C7LAGtA=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
From: Jan Beulich <jbeulich@suse.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <068d11b3-f2d6-d9f4-9565-e2dbf4292df1@suse.com>
Message-ID: <1adc92cf-649f-9c75-98a1-05a34b3af42f@suse.com>
Date: Tue, 24 Nov 2020 15:36:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <068d11b3-f2d6-d9f4-9565-e2dbf4292df1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 15:33, Jan Beulich wrote:
> On 24.11.2020 15:27, Manuel Bouyer wrote:
>> Oops, sorry, I copied xen.gz to the wrong place.
>> New log at
>> http://www-soc.lip6.fr/~bouyer/xen-log7.txt
>>
>> This one ends up in a panic,
> 
> Argh - too much output triggered the watchdog. I guess we need to
> cut down on the vIO-APIC dumping, perhaps limiting it to just the
> one RTE we care about. But let me (and Roger) see if there's
> anything to be derived from the LAPIC state...

All IRRs and ISRs are completely clear of set bits. Highly mysterious
how the IRQ then doesn't get delivered.
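(For readers: the LAPIC exposes IRR and ISR as eight 32-bit registers each, covering vectors 0..255, so "completely clear" means no bit set in any of those sixteen registers. A hypothetical helper for that kind of scan, not the actual Xen dump code:)

```c
#include <stdint.h>

/* One LAPIC bitmap (IRR or ISR): eight 32-bit registers, vector v lives
 * in register v / 32, bit v % 32.  Returns the highest vector whose bit
 * is set, or -1 if the whole bitmap is clear. */
static int highest_set_vector(const uint32_t regs[8])
{
    for ( int reg = 7; reg >= 0; reg-- )
        for ( int bit = 31; bit >= 0; bit-- )
            if ( regs[reg] & (UINT32_C(1) << bit) )
                return reg * 32 + bit;

    return -1;
}
```

With both bitmaps reporting -1 here, neither a pending nor an in-service interrupt can explain the stall, which is what makes the non-delivery mysterious.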

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:42:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:42:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36474.68379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZWZ-0006no-Q0; Tue, 24 Nov 2020 14:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36474.68379; Tue, 24 Nov 2020 14:42:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZWZ-0006nh-My; Tue, 24 Nov 2020 14:42:31 +0000
Received: by outflank-mailman (input) for mailman id 36474;
 Tue, 24 Nov 2020 14:42:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khZWY-0006nc-3q
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:42:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1c86dd2-b1ef-4ad8-9153-5fa3045bf660;
 Tue, 24 Nov 2020 14:42:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82033AC66;
 Tue, 24 Nov 2020 14:42:28 +0000 (UTC)
X-Inumbo-ID: b1c86dd2-b1ef-4ad8-9153-5fa3045bf660
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606228948; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HHoOdfLHT+/hVeeCayRObbNFDm4WCLh/XJDus/9snho=;
	b=ncJ45mmsFQEAt/F5vuCR3gCpzWthcwfdhh4AWnrA8SfPTFx81b/TIasR6RdhRUp5CPv0S+
	knvr+e8/A4ukWkEAkewMb+pefrV12tUXp0zD85dRbejHy1QyJPhaRXqq7VuMi7bi994D7c
	Hk7jdVnqu6VxqvpSQ7Kn810sHcC2C2c=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
From: Jan Beulich <jbeulich@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Manuel Bouyer <bouyer@antioche.eu.org>
References: <20201120085249.GA1508@antioche.eu.org>
 <97f371a9-00fe-33fe-8923-c247f44f9af6@suse.com>
 <20201120092754.GH1508@antioche.eu.org>
 <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
Message-ID: <ee63d6c2-4d0f-a3b7-37d0-8ce45c9e6fb1@suse.com>
Date: Tue, 24 Nov 2020 15:42:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 11:05, Jan Beulich wrote:
> On 23.11.2020 18:39, Manuel Bouyer wrote:
>> On Mon, Nov 23, 2020 at 06:06:10PM +0100, Roger Pau Monné wrote:
>>> OK, I'm afraid this is likely too verbose and messes with the timings.
>>>
>>> I've been looking (again) into the code, and I found something weird
>>> that I think could be related to the issue you are seeing, but haven't
>>> managed to try to boot the NetBSD kernel provided in order to assert
>>> whether it solves the issue or not (or even whether I'm able to
>>> repro it). Would you mind giving the patch below a try?
>>
>> With this, I get the same hang but XEN outputs don't wake up the interrupt
>> any more. The NetBSD counter shows only one interrupt for ioapic2 pin 2,
>> while I would have about 8 at the time of the hang.
>>
>> So, now it looks like interrupts are blocked forever.
> 
> Which may be a good thing for debugging purposes, because now we have
> a way to investigate what is actually blocking the interrupt's
> delivery without having to worry about more output screwing the
> overall picture.
> 
>> At
>> http://www-soc.lip6.fr/~bouyer/xen-log5.txt
>> you'll find the output of the 'i' key.
> 
> (XEN)    IRQ:  34 vec:59 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
> 
> (XEN)     IRQ 34 Vec 89:
> (XEN)       Apic 0x02, Pin  2: vec=59 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0 dest_id:00000001

Since it repeats in Manuel's latest dump, perhaps the odd combination
of status=1 and irr=1 is to tell us something? It is my understanding
that irr ought to become set only when delivery-status clears. Yet I
don't know what to take from this...
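(For reference, the fields in the dump line quoted above map directly onto the standard 64-bit IO-APIC redirection table entry layout from the 82093AA datasheet. A hypothetical decoding helper, not Xen's dump code:)

```c
#include <stdint.h>

/* Decoded fields of a 64-bit IO-APIC redirection table entry (RTE),
 * per the standard 82093AA layout. */
struct rte {
    uint8_t vector;          /* bits 0-7 */
    uint8_t delivery_mode;   /* bits 8-10: 0=Fixed, 1=LoPri, ... */
    uint8_t dest_logical;    /* bit 11: 0=physical, 1=logical */
    uint8_t delivery_status; /* bit 12: 1=send pending */
    uint8_t polarity;        /* bit 13: 1=active low */
    uint8_t remote_irr;      /* bit 14: level IRQ accepted, EOI outstanding */
    uint8_t trigger_level;   /* bit 15: 1=level triggered */
    uint8_t mask;            /* bit 16: 1=masked */
    uint8_t dest_id;         /* bits 56-63 */
};

static struct rte decode_rte(uint64_t raw)
{
    struct rte e = {
        .vector          = raw & 0xff,
        .delivery_mode   = (raw >> 8) & 0x7,
        .dest_logical    = (raw >> 11) & 1,
        .delivery_status = (raw >> 12) & 1,
        .polarity        = (raw >> 13) & 1,
        .remote_irr      = (raw >> 14) & 1,
        .trigger_level   = (raw >> 15) & 1,
        .mask            = (raw >> 16) & 1,
        .dest_id         = raw >> 56,
    };

    return e;
}
```

In this layout, remote IRR (bit 14) is set when a level-triggered interrupt has been accepted by a LAPIC and stays set until that LAPIC EOIs it, while delivery status (bit 12) is set only while the message is still waiting to be sent; seeing both at once is the odd combination noted above.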

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:47:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36482.68390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZat-0007Pm-Bc; Tue, 24 Nov 2020 14:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36482.68390; Tue, 24 Nov 2020 14:46:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZat-0007Pf-8b; Tue, 24 Nov 2020 14:46:59 +0000
Received: by outflank-mailman (input) for mailman id 36482;
 Tue, 24 Nov 2020 14:46:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LHIe=E6=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1khZar-0007Pa-MN
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:46:57 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82b5fe2f-7825-457a-a989-d52150337348;
 Tue, 24 Nov 2020 14:46:57 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 296CF206F9;
 Tue, 24 Nov 2020 14:46:49 +0000 (UTC)
X-Inumbo-ID: 82b5fe2f-7825-457a-a989-d52150337348
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606229216;
	bh=3zQtrTTCw8twqJtP/1a4NB1MKrz6/NphiIiwgRncCiQ=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Nf940VeIUgpffOGETTY1L3G0QcIjfozBFvnFoqG8O328ZSxeeaAXcr0hOe3zQgRlG
	 fbix380mdFR2g9eZBg8DhbUZBvf1w7UYgoM6pwiYDzTHildmv27dPl4/uYYBPb4fgA
	 1gCiZGhi+wbnwk7bOx0yc8Qq/SRuVl9DTTdMTQVI=
Date: Tue, 24 Nov 2020 08:47:05 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
	amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
	ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
	coreteam@netfilter.org, devel@driverdev.osuosl.org,
	dm-devel@redhat.com, drbd-dev@lists.linbit.com,
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com,
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org,
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org,
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org, linux-wireless@vger.kernel.org,
	netdev@vger.kernel.org, netfilter-devel@vger.kernel.org,
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org,
	oss-drivers@netronome.com, patches@opensource.cirrus.com,
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org,
	samba-technical@lists.samba.org, selinux@vger.kernel.org,
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>,
	Kees Cook <keescook@chromium.org>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201124144705.GK16084@embeddedor>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201123200345.GA38546@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201123200345.GA38546@nvidia.com>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Mon, Nov 23, 2020 at 04:03:45PM -0400, Jason Gunthorpe wrote:
> On Fri, Nov 20, 2020 at 12:21:39PM -0600, Gustavo A. R. Silva wrote:
> 
> >   IB/hfi1: Fix fall-through warnings for Clang
> >   IB/mlx4: Fix fall-through warnings for Clang
> >   IB/qedr: Fix fall-through warnings for Clang
> >   RDMA/mlx5: Fix fall-through warnings for Clang
> 
> I picked these four to the rdma tree, thanks

Awesome. :)

Thank you, Jason.
--
Gustavo


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:47:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36488.68403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZbh-0007jS-LK; Tue, 24 Nov 2020 14:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36488.68403; Tue, 24 Nov 2020 14:47:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZbh-0007jL-IG; Tue, 24 Nov 2020 14:47:49 +0000
Received: by outflank-mailman (input) for mailman id 36488;
 Tue, 24 Nov 2020 14:47:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LHIe=E6=kernel.org=gustavoars@srs-us1.protection.inumbo.net>)
 id 1khZbg-0007jF-6n
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:47:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8bccb16a-82ea-4b23-bf36-ab9ac33553d3;
 Tue, 24 Nov 2020 14:47:47 +0000 (UTC)
Received: from embeddedor (187-162-31-110.static.axtel.net [187.162.31.110])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A5073206F9;
 Tue, 24 Nov 2020 14:47:38 +0000 (UTC)
X-Inumbo-ID: 8bccb16a-82ea-4b23-bf36-ab9ac33553d3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606229266;
	bh=KHWIOvsVxIgxzJ3ANvrXr+IcifXvCH3d9eC+5p4MJwo=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=tWoPDQgvsKVPCfxibPNl9c0zBQ9NfBrnsDyv8UIAtPJxHLfF4o1CuP3vwUt19y/P8
	 Y66kkjj2Cim4kMyRQiiYzb1whNr+v/N+nMKfcHqv3RaX8mBnUDrFmNddDbw9/dIVnH
	 xirANyC4hb8bYXXRvxw8qG9eYp8JWf8UX9D38PQM=
Date: Tue, 24 Nov 2020 08:47:54 -0600
From: "Gustavo A. R. Silva" <gustavoars@kernel.org>
To: Mark Brown <broonie@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-sctp@vger.kernel.org,
	Nathan Chancellor <natechancellor@gmail.com>,
	linux-hardening@vger.kernel.org,
	usb-storage@lists.one-eyed-alien.net, linux-block@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	bridge@lists.linux-foundation.org, GR-Linux-NIC-Dev@marvell.com,
	rds-devel@oss.oracle.com, dri-devel@lists.freedesktop.org,
	linux-media@vger.kernel.org, wcn36xx@lists.infradead.org,
	linux-wireless@vger.kernel.org, linux-mediatek@lists.infradead.org,
	reiserfs-devel@vger.kernel.org, oss-drivers@netronome.com,
	linux-arm-kernel@lists.infradead.org, alsa-devel@alsa-project.org,
	virtualization@lists.linux-foundation.org,
	Joe Perches <joe@perches.com>, patches@opensource.cirrus.com,
	linux-gpio@vger.kernel.org, linux-hwmon@vger.kernel.org,
	linux-cifs@vger.kernel.org, coreteam@netfilter.org,
	Kees Cook <keescook@chromium.org>,
	Nick Desaulniers <ndesaulniers@google.com>,
	linux-scsi@vger.kernel.org, linux-afs@lists.infradead.org,
	netfilter-devel@vger.kernel.org, linux-geode@lists.infradead.org,
	drbd-dev@lists.linbit.com, linux-ext4@vger.kernel.org,
	linux-hams@vger.kernel.org, target-devel@vger.kernel.org,
	samba-technical@lists.samba.org,
	tipc-discussion@lists.sourceforge.net,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-renesas-soc@vger.kernel.org, linux-input@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, linux-nfs@vger.kernel.org,
	devel@driverdev.osuosl.org, selinux@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net, linux-iio@vger.kernel.org,
	linux-i3c@lists.infradead.org, Miguel Ojeda <ojeda@kernel.org>,
	linux-can@vger.kernel.org, linux-integrity@vger.kernel.org,
	GR-everest-linux-l2@marvell.com, keyrings@vger.kernel.org,
	intel-wired-lan@lists.osuosl.org, linux-usb@vger.kernel.org,
	nouveau@lists.freedesktop.org, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-mm@kvack.org,
	cluster-devel@redhat.com, linux1394-devel@lists.sourceforge.net,
	linux-decnet-user@lists.sourceforge.net,
	op-tee@lists.trustedfirmware.org, linux-ide@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-acpi@vger.kernel.org,
	dm-devel@redhat.com, linux-watchdog@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-mtd@lists.infradead.org,
	ceph-devel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-fbdev@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201124144754.GL16084@embeddedor>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <160616392671.21180.16517492185091399884.b4-ty@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <160616392671.21180.16517492185091399884.b4-ty@kernel.org>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Mon, Nov 23, 2020 at 08:38:46PM +0000, Mark Brown wrote:
> On Fri, 20 Nov 2020 12:21:39 -0600, Gustavo A. R. Silva wrote:
> > This series aims to fix almost all remaining fall-through warnings in
> > order to enable -Wimplicit-fallthrough for Clang.
> > 
> > In preparation to enable -Wimplicit-fallthrough for Clang, explicitly
> > add multiple break/goto/return/fallthrough statements instead of just
> > letting the code fall through to the next case.
> > 
> > [...]
> 
> Applied to
> 
>    https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git for-next
> 
> Thanks!
> 
> [1/1] regulator: as3722: Fix fall-through warnings for Clang
>       commit: b52b417ccac4fae5b1f2ec4f1d46eb91e4493dc5

Thank you, Mark.
--
Gustavo


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36495.68414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZdA-00086j-0f; Tue, 24 Nov 2020 14:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36495.68414; Tue, 24 Nov 2020 14:49:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZd9-00086c-Ty; Tue, 24 Nov 2020 14:49:19 +0000
Received: by outflank-mailman (input) for mailman id 36495;
 Tue, 24 Nov 2020 14:49:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khZd9-00086X-46
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:49:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db6b45ca-5f51-4651-8939-e40ad7d25bc4;
 Tue, 24 Nov 2020 14:49:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1613EAC2D;
 Tue, 24 Nov 2020 14:49:17 +0000 (UTC)
X-Inumbo-ID: db6b45ca-5f51-4651-8939-e40ad7d25bc4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606229357; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YYMk9Rdl/wr+gMPPVKevwp+N4vIbEHzR3njBJiV+0EA=;
	b=LsL7J7L1EeIHCA/D2vL+i05p7rpItPjIAIGO4UMidz0LTitpP2RKdlQrI7K+LvkRCt/L18
	02iM4AfF71fv+2Ij7KDfvgW9KunWg40Hvbv60yIpeBsohp9VRSGWE2IubiohBRe6FOZvvm
	MRPUkfivv47hLS6jieK2mqTSx/Q33mw=
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
Message-ID: <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
Date: Tue, 24 Nov 2020 15:49:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="zF9bznTrzkn7YH1WubmVMo7GOk6grn6c6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--zF9bznTrzkn7YH1WubmVMo7GOk6grn6c6
Content-Type: multipart/mixed; boundary="AV9jh2wWkKyjo8tMpC7oODGClK0EpbYqo";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
In-Reply-To: <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>

--AV9jh2wWkKyjo8tMpC7oODGClK0EpbYqo
Content-Type: multipart/mixed;
 boundary="------------15168547E2B60BB61346328D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------15168547E2B60BB61346328D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.11.20 15:02, Jan Beulich wrote:
> On 24.11.2020 08:01, Juergen Gross wrote:
>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>> can race in case the first one gets interrupted after setting
>> EVTCHN_FIFO_PENDING and when the other one manages to set
>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>> lead to evtchn_check_pollers() being called before the event is put
>> properly into the queue, resulting eventually in the guest not seeing
>> the event pending and thus blocking forever afterwards.
>>
>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>> lock") made the race just more obvious, while the fifo event channel
>> implementation had this race from the beginning when an unmask operation
>> was running in parallel with an event channel send operation.
>
> Ah yes, but then also only for inter-domain channels, as it was
> only in that case that the "wrong" domain's event lock was held.
> IOW there was a much earlier change already where this issue
> got widened (when the per-channel locking got introduced). This
> then got reduced to the original scope by XSA-343's adding of
> locking to evtchn_unmask(). (Not sure how much of this history
> wants actually adding here. I'm writing it down not the least to
> make sure I have a complete enough picture.)

I think we both agree that this race was possible for quite some time.
And I even think one customer bug I've been looking into recently
might be exactly this problem (a dom0 was occasionally hanging in
cross-cpu function calls, but switching to 2-level events made the
problem disappear).

>
>> For avoiding this race the queue locking in evtchn_fifo_set_pending()
>> needs to be reworked to cover the test of EVTCHN_FIFO_PENDING,
>> EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED, too.
>
> Perhaps mention that the prior possible (and imo more natural)
> alternative of taking consistent per-channel locks would have
> worked, until they got converted to rw ones?

Okay (with reasoning why this is no simple option due to the lock
needed to be taken with interrupts on and off).

>
>> Additionally when an
>> event channel needs to change queues both queues need to be locked
>> initially.
>
> Since this was (afaict) intentionally not the case before, I
> think I would want to see a word spent on the "why", perhaps
> better in a code comment than here. Even more so that you
> delete a respective comment justifying the possible race as
> permissible. And I have to admit right now I'm still uncertain
> both ways, i.e. I neither have a clear understanding of why it
> would have been considered fine the other way around before,
> nor why the double locking is strictly needed.

I need the double locking to avoid someone entering the locked region
when dropping the lock for the old queue and taking the one for the
new queue, as this would open the same race window again.

>
>> Fixes: 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock")
>> Fixes: 88910061ec615b2d ("evtchn: add FIFO-based event channel hypercalls and port ops")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> I guess at least this one wants a Reported-by.

Oh, right. Sorry for missing that.

>
>> @@ -204,6 +175,48 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>           return;
>>       }
>>
>> +    for ( try = 0; ; try++ )
>> +    {
>> +        union evtchn_fifo_lastq lastq;
>> +        struct vcpu *old_v;
>
> I think this one can have const added.

Yes.

>
>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>> +        old_v = d->vcpu[lastq.last_vcpu_id];
>> +
>> +        q = &v->evtchn_fifo->queue[evtchn->priority];
>> +        old_q = &old_v->evtchn_fifo->queue[lastq.last_priority];
>> +
>> +        if ( q <= old_q )
>> +        {
>> +            spin_lock_irqsave(&q->lock, flags);
>> +            if ( q != old_q )
>> +                spin_lock(&old_q->lock);
>> +        }
>> +        else
>> +        {
>> +            spin_lock_irqsave(&old_q->lock, flags);
>> +            spin_lock(&q->lock);
>> +        }
>
> Since the vast majority of cases is going to be q == old_q, would
> it be worth structuring this like
>
>          if ( q == old_q )
>              spin_lock_irqsave(&q->lock, flags);
>          else if ( q < old_q )
>          {
>              spin_lock_irqsave(&q->lock, flags);
>              spin_lock(&old_q->lock);
>          }
>          else
>          {
>              spin_lock_irqsave(&old_q->lock, flags);
>              spin_lock(&q->lock);
>          }
>
> saving (on average) half a conditional in this most common
> case? (This is specifically different from the double locking in

Fine with me.

> event_channel.c, where the common case is to have different
> entities. In fact double_evtchn_{,un}lock() look to pointlessly
> check for chn1 == chn2 - I guess I'll make a patch.)
>
>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>> +        old_v = d->vcpu[lastq.last_vcpu_id];
>> +        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
>> +             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
>> +            break;
>> +
>> +        if ( q != old_q )
>> +            spin_unlock(&old_q->lock);
>> +        spin_unlock_irqrestore(&q->lock, flags);
>> +
>> +        if ( try == 3 )
>> +        {
>> +            gprintk(XENLOG_WARNING,
>> +                    "dom%d port %d lost event (too many queue changes)\n",
>> +                    d->domain_id, evtchn->port);
>> +            return;
>
> Originally evtchn_check_pollers() would still have been called
> in this case. Wouldn't you better retain this, or else justify
> the possibly observable change in behavior?

I could retain it, but without having set the event to be pending
I don't see the value in doing so.

>
>> @@ -228,22 +239,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>               goto done;
>>           }
>
> This if() right above here can, aiui, in principle be moved out
> of the surrounding if(), at which point ...

It can even be moved out of the locked region.

>
>> -        /*
>> -         * No locking around getting the queue. This may race with
>> -         * changing the priority but we are allowed to signal the
>> -         * event once on the old priority.
>> -         */
>> -        q = &v->evtchn_fifo->queue[evtchn->priority];
>> -
>> -        old_q =3D lock_old_queue(d, evtchn, &flags);
>> -        if ( !old_q )
>> -            goto done;
>
> ... with all of this gone ...
>
>>           if ( guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
>> -        {
>> -            spin_unlock_irqrestore(&old_q->lock, flags);
>>               goto done;
>> -        }
>=20
> ... this could become part of the outer if(), replacing the 2nd
> guest_test_bit() there. (Possibly, if deemed worthwhile at all,
> to be carried out in a separate follow-on patch, to keep focus
> here on the actual goal.)

Will add a patch doing that.


Juergen

--------------15168547E2B60BB61346328D--

--AV9jh2wWkKyjo8tMpC7oODGClK0EpbYqo--

--zF9bznTrzkn7YH1WubmVMo7GOk6grn6c6--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 14:52:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 14:52:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36506.68426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZgC-0000Wr-Jt; Tue, 24 Nov 2020 14:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36506.68426; Tue, 24 Nov 2020 14:52:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZgC-0000Wk-Gt; Tue, 24 Nov 2020 14:52:28 +0000
Received: by outflank-mailman (input) for mailman id 36506;
 Tue, 24 Nov 2020 14:52:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khZgB-0000Wf-9j
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 14:52:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2cd59b1b-4096-4a9c-97bb-8a44bca28882;
 Tue, 24 Nov 2020 14:52:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4E19AC2D;
 Tue, 24 Nov 2020 14:52:25 +0000 (UTC)
X-Inumbo-ID: 2cd59b1b-4096-4a9c-97bb-8a44bca28882
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606229545; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sr204FPM5CDV1auX+wkjq9RbyU5ZqXaUY/fc+ZiJfik=;
	b=eAtqPG3zCFwoUkydOibifk8F7b9/B/a1jVo1RLh4gwmSNYQop1zsDj3tfNhqV3LbsreZFi
	zv9o8Pf0Fal4ntIVSt/w4DdQwm+j+QgXC9Qst84STGh4L8a1TCXDv/+5IYgVF718GBOPHz
	fOK4mCCkbgrnyqRmq9RZ+NvReClXnas=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
Date: Tue, 24 Nov 2020 15:52:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124142713.GM2020@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 15:27, Manuel Bouyer wrote:
> new log at
> http://www-soc.lip6.fr/~bouyer/xen-log7.txt
> 
> this one ends up in a panic, I hope you'll find what you expect here.

Did you actually, just to have the data point, ever try to disable
interrupt remapping ("iommu=no-intremap")? For PVH we can't ask you
to turn off the IOMMU as a whole, but aiui interrupt remapping is
not a strict prereq. (I'm sure Roger will correct me if I'm wrong.)
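
[Editor's note] For anyone wanting to reproduce that data point: iommu=no-intremap goes on the Xen command line (not dom0's). The GRUB2 stanza below is purely illustrative -- the paths, menu entry, and dom0 kernel arguments are hypothetical and not from this thread:

```shell
# Illustrative GRUB2 entry booting Xen with interrupt remapping disabled;
# adjust kernel/initrd paths for the actual installation.
menuentry 'Xen (no-intremap test)' {
    multiboot2 /boot/xen.gz iommu=no-intremap console=vga
    module2    /boot/vmlinuz root=/dev/sda1 console=hvc0
    module2    /boot/initrd.img
}
```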

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 15:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 15:00:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36515.68438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZo4-0001X5-Ew; Tue, 24 Nov 2020 15:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36515.68438; Tue, 24 Nov 2020 15:00:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZo4-0001Wy-Bw; Tue, 24 Nov 2020 15:00:36 +0000
Received: by outflank-mailman (input) for mailman id 36515;
 Tue, 24 Nov 2020 15:00:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IwAZ=E6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khZo2-0001Wt-8G
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 15:00:34 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7724c44a-f1e1-4088-9595-39b47b70d6a5;
 Tue, 24 Nov 2020 15:00:32 +0000 (UTC)
X-Inumbo-ID: 7724c44a-f1e1-4088-9595-39b47b70d6a5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606230032;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=TuE4pQ1OvIONmT9HGFqcHQcCo6RSVky9ibwexebWbiM=;
  b=GsrKlH0NwQUgr2LPpn0IdKarZkIieJfqK3zIcs5JrvnqBfoEVh8/a5EF
   5xeCYCl2J7m1AOKoeHRAqBcKtqoo/+29VdB81hugAWa06YBKj+1Gty0VJ
   h4jr7gXvjznoyMqgfly8koGxtlj8n+EJL3mm9ZsuHTLAlrUXv/ivzPaC7
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: cCSMrwaLdhQoGGZGSCuk5SUIOrHfD5A+WiXsMhIsM1joKOkk9qbptKxWCIAvjLpwaY86zEVbjY
 zX9IQGi/SXmvoMki8CiZpq34F/meKvohM4EHisRyPmLuaDBSzGmm4oWIlPJucIfk0aGYAUW7TL
 AkCyktDCu3DoH85w72SZGXx9t2CnPRibrDXaZzo5i3ur7eIFU81/jOJOol1sQix9ZwJAG7nS1N
 gPkZM5BRjBwFEuEzWG2gBnBouTVKA7KwIO3HIPCTcmbrJVo3kzeXPtfGv/w3ZNtsqmNta4K5h7
 8QI=
X-SBRS: None
X-MesageID: 32069031
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,366,1599537600"; 
   d="scan'208";a="32069031"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CU+dhdhCXo7MN8gcLzWUU80tMq48KRvIFjqjlUl1MscdYlxE+0Uq6O84MG5lhJyiCsDpmJG939i6Ok4hM8EkFtqfqbDRQou05G/imZoydtjW+0E7UozX3fwlkFgjVfkYZzyHI/Im/QjwF2EtG2IsPaWHSxhNZViLyu5uvl09HzafJnnxe753Ka9kTY2g4h4H6EuLqFpT1NJlJ0k0DSO+7uPR65GaDJb0cwLo0EcvynaQZyJ2N89w7/OKsbSHTpDJRDHSZ79TuM0IcJIOqi0ppCd874m9NkXfdeZI5gq08qnwsD492UgmzM5cgrT3qrzTLfvqI7fTIg0IhCmGMusFTg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hVUHjFUsZj5/PaBK5kj5vImLgcnNQcVtKsPK5FrO3QI=;
 b=AFQZCVP/QfrYu91v1TD0ypl+qLxIhqL5Zwy4n/rvvquiqPEkDigAFAq6uODIHLeNs2EhWd2j4ygvyqg7NAa6gpfNgBZa8zmobJZM7yl6uWeYCQJqH0shj9nmCWXC4Kn10a6gGqtQpdd6txu7hT+sMWMXvCOuz3xCfjzd/efMZOLmn79lU9bYrH/rMg96KcMDFreMFNNzyGdC8k/vtd8OdnscHwbJTwST8zW6Xy67+CMN9bJZ51pebe5QYNPd280ilgfAO2f8mgRkMwHFn6wwIKTkV76bb7tj52NZ9Ru8Je0vNWTXuqwGZtMy1Gdmc0mlzFi4XKp7grOh28m5ntlGUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hVUHjFUsZj5/PaBK5kj5vImLgcnNQcVtKsPK5FrO3QI=;
 b=Gs0eSf10aTjvJdyBHSIEN6PbTgsn4IR2c6YWGE6UFxybGcOLW5RHH1kYBtX3Fcmf80z6v0HWvRauRVmdBBhYKV4tLCBlLNkpE3liU3KQzCRlc3G9cVkqIZM1heQ5sxYe8SJTVocXnRdxGH10SOUiXVmAFQKwu3jSs46+67gTj6s=
Date: Tue, 24 Nov 2020 15:59:27 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Manuel Bouyer <bouyer@antioche.eu.org>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>
References: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <ee63d6c2-4d0f-a3b7-37d0-8ce45c9e6fb1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ee63d6c2-4d0f-a3b7-37d0-8ce45c9e6fb1@suse.com>
X-ClientProxiedBy: LO2P265CA0111.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ec089594-6088-448b-48fe-08d890898d65
X-MS-TrafficTypeDiagnostic: DM5PR03MB2972:
X-Microsoft-Antispam-PRVS: <DM5PR03MB2972DEA40A8771419C1725728FFB0@DM5PR03MB2972.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: tbN22/MrkODmumb51tJzQ1dQjPXnNAq7ETsFihtwd+MQ5+Bl2jhyyunr7dF+5yeuA8rgrtEoAy/grlkJPGSzCMsNHcXIDQQBH039aBGg4cSNON6oBRFS8cZhbetAaismECVBXO3AoXVmSQuTosNeFS9TuQ+8bSD6y66VoKw0Ao8OPacuPknLs4IiBIRp9NfGAg/3pQ8Qd+lrk/SAU6hKILMTTeQJF3MSJeNwTHsNly2gbnP7Xt4uZMzY2NNgKXA+UJSZ+q68Nq08YoJq3k/+W9ZwBtlLkZonJVKdED8moIasepM4kFKN+AWvasGK+W0wn8ksoGu+oRYn8EzWb6vC00O99S1klhQ9wooNn5txwkLpnfLl1Cm789V52UhPbfbqr6AuURQ0IbjKiiLXVCHrkg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(136003)(39860400002)(396003)(366004)(376002)(66476007)(66946007)(4326008)(83380400001)(8676002)(33716001)(9686003)(8936002)(2906002)(85182001)(186003)(110136005)(966005)(5660300002)(1076003)(6496006)(66556008)(16526019)(316002)(26005)(956004)(478600001)(6666004)(53546011)(6486002)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: E2xkfhTzVpDLNwMtWajg+NGWHdSaF4ol4YBWXDO3Sw+3IMmdWvCHi9PjPrymi7rdhObd2M0YXbbH111b4tPKi424TigxEPt6pYGzBdu9nDiiQ+f8ospEJFYpPMknnIS77/D3HR/UbqF1NanK38MNB2zRO2WP2PeNbCvDUBnYkeDwj/c75gSkg+oHI0XBUPs+mV+XbGHP/ZYOTTdyCvDIpyqhs3Dn+3w0PNyi8r2HnfTkwO1cq/f3RDthWijBEBpJ08bwh8tahDobDAH9lAofOd7APJG9a3K8I+vwNKYlJNhqZaBTo9oK4mHdDUJlAKQcZucPu2LuiYvqHGLgYddvmLqGH21dhRAAqxYR34QeRJc8Afbh5YxT8bH9Hpezx4+lwuct24njRriIdAzdgI9GEH98M6b1O7np03netnNZA1vDU0WpCF/zVcyqjX/xf2K5qKh+ZXMsdcjHTdipkqVIm6Xf3TH3olBSn4An5cYbfJqMs3z3inCZKmQlhpr5UdH4FcbdmAbjGeptfH+aWmA5KL1lEFo/uoI0gQlpmRXp3NqOcQp7bXLH3s8SI7c3d2N6QiEd3IkxrC5Xc7cKnrC8LdLw2u9PDXNyPJk4AxYaUe3qrUI0Qtmz6jDthCTg3IFjwh+BMFwn7LirTh/CZNOHUDOHkt6jOHr+i4Gbley8qoS+IDsx6WbSookljW+Yx2P0me4HGHPYPsGrTkXAp3EcRH7Hz1dpeDMMdAK4/uCAfNmM8QKkORHHN8aNTSIoRTRdiD08TOCfMECJIig4hM5Mpl/zOK96eAVq380DhhlM+5fH3H1B/G13XkTgGLdW8HJ3sIWkd6bA7+/A8Ws7ts/K6iT8w27qHJGom3AJeAiztLjeMKra+D0OwBYy5JPO+NYUwnVQN4fH7Fqia/Tro8HziQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: ec089594-6088-448b-48fe-08d890898d65
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 14:59:33.2021
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: t3dSRFBbFhgiVgtZW0zdIiKkTZM9iYuliIe2QgiZBmJD1PNrUbGts3hRtzbP6jXiqlBrJFLse1ArpaQSo46uQQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2972
X-OriginatorOrg: citrix.com

On Tue, Nov 24, 2020 at 03:42:28PM +0100, Jan Beulich wrote:
> On 24.11.2020 11:05, Jan Beulich wrote:
> > On 23.11.2020 18:39, Manuel Bouyer wrote:
> >> On Mon, Nov 23, 2020 at 06:06:10PM +0100, Roger Pau Monné wrote:
> >>> OK, I'm afraid this is likely too verbose and messes with the timings.
> >>>
> >>> I've been looking (again) into the code, and I found something weird
> >>> that I think could be related to the issue you're seeing, but I haven't
> >>> managed to boot the provided NetBSD kernel in order to confirm
> >>> whether it solves the issue (or even whether I can repro it at all).
> >>> Would you mind giving the patch below a try?
> >>
> >> With this, I get the same hang, but Xen's console output no longer wakes
> >> up the interrupt. The NetBSD counter shows only one interrupt for ioapic2
> >> pin 2, while I would expect about 8 by the time of the hang.
> >>
> >> So, now it looks like interrupts are blocked forever.
> > 
> > Which may be a good thing for debugging purposes, because now we have
> > a way to investigate what is actually blocking the interrupt's
> > delivery without having to worry about extra output distorting the
> > overall picture.
> > 
> >> At
> >> http://www-soc.lip6.fr/~bouyer/xen-log5.txt
> >> you'll find the output of the 'i' key.
> > 
> > (XEN)    IRQ:  34 vec:59 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
> > 
> > (XEN)     IRQ 34 Vec 89:
> > (XEN)       Apic 0x02, Pin  2: vec=59 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0 dest_id:00000001
> 
> Since it repeats in Manuel's latest dump, perhaps the odd combination
> of status=1 and irr=1 is telling us something? It is my understanding
> that irr ought to become set only when delivery-status clears. Yet I
> don't know what to take from this...

My reading of this is that one interrupt was accepted by the lapic
(irr=1) and that a further interrupt is pending that hasn't yet been
accepted by the lapic (status=1) because it's still serving the
previous one. But that's all weird, because there's no matching vector
in the ISR, so the remote IRR bit on the IO-APIC has somehow become
stale or out of sync with the lapic state?

I'm also unsure how Xen has managed to reach this state; it shouldn't
be possible in the first place.

I don't think I can instrument the paths further with printfs, because
that's likely to change the behavior itself and spam the console. I
could however create a static buffer to trace the relevant actions and
then dump them all together with the 'i' debug key output.

Sorry Manuel, you seem to have hit some kind of weird bug in interrupt
management. If you want to make further progress with NetBSD PVH dom0,
it's likely to work better on a different box, but I would ask that you
keep the current box available so we can continue debugging.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 15:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 15:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36521.68450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZoj-0001dR-OT; Tue, 24 Nov 2020 15:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36521.68450; Tue, 24 Nov 2020 15:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZoj-0001dK-LS; Tue, 24 Nov 2020 15:01:17 +0000
Received: by outflank-mailman (input) for mailman id 36521;
 Tue, 24 Nov 2020 15:01:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IwAZ=E6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khZoj-0001dC-4s
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 15:01:17 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba056cdb-20b9-4923-bbb6-7ce87b54adcc;
 Tue, 24 Nov 2020 15:01:16 +0000 (UTC)
X-Inumbo-ID: ba056cdb-20b9-4923-bbb6-7ce87b54adcc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606230076;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=2a4yB+JfMHRId1QIG1O6Wp1JAcjZWl3my9baKUr6mD0=;
  b=ZY2dTTB5YQVH89asG7KE2x0fpuSVkt02pQBylRrvsIsnibjRzkS34+T7
   lEa0nlzywOxaFmLyK3/y0hM1QC6sKVUu/y6ygBiADg5fyi7FipAu48mBf
   f38aH9cpWgjnhzxAR3eKLb7KfY+vroZqZAS7Q8ppqbt7T1i3dpWJFVDS9
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 0+zTNiXaxA6UmK/n0vLjTgDDhQHg0qMFyVcqLrxH+mg7et2c3VagyDLSmCDZhBuhxdcmz70aAn
 aRlLruF+BKfo3OzVLPLhpg/ft7Vpgs+o+VuNvq1ZLPqOJwEhHUKfhZcDRCsfoov1JPzna384FJ
 gpNkwOA8iDz85nlhJ98fqYnEE2J6xV/ibPVd5FwGRCVdCAuVINdOxiycuJcj/p7YSUWyj6ZH36
 m/G5UiO2J7lLCJ3szubnMi9d1tErUAyRUMVM+hwo2Gw5EnKAgf+KYyzef0OtReR95ABm2wxeZL
 YQM=
X-SBRS: None
X-MesageID: 31836550
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,366,1599537600"; 
   d="scan'208";a="31836550"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FgcCnr2e9oTWw/wOq6NbieldQbs0lhPQqMR/sOg1lqPeFv43hHOyq1sXBkY5bQ47KtcUeTdFRe3KO7xcP2ab8HaJPjU6prVwJZgFZsTM6F66cc5vLXw25Apx7R8/QoQIP27u1egq8pi5BAvXU0+POBWf3vRIT9Qa2ZsB8F6ncL2xlMYVrfDi2U/U/ld8oktPPRF9NtRkk/SmZXdUGhgZwez6b2/cGSI1X2YF8/s44QQ1xIskl+3VD9/ZnBFj5ty0jYGDJYdVQra+7/XR0zKPnYOAke4YzNwnht/Ve32ftbyxVqseUvuM1zvjBrpqoRj4EIi1rpbnd7jn34h5Z0CEEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CzxjoM+xOpXp0iwVrOM2L/2HGB1E0f48N6Hvv43dKv0=;
 b=OP9F4tLj7TGzUpEvkvDR23oODkG4j4rpasE8cD4M5h4skTfSyAxvflCYWe2jfl5tT09YKUCerWsvLsLEYumuk4wkiTW2RQhTyqqWmwrX5CaA+Due7RKedMNGPZC1u8Rs1n/LgP0SP9EbO1QcCm8m/DmF3k1l3exZxDHFvDUpQH7/9Hwl4RX0PfajT5SgkpQcJcMD61+FhPCPXtY1BuW/cTV8QHdkSC22/X8+9D0H4Vxl/QGvqb5ywkBYzpXqrKePAIVnEprZEsjVlcBMG2NMwHAnltDGkkWR9xMd++keUEtfaVIl1FdNfuLKEDaC/qS1hTD7Y9dRBE8Rl9IAWQcAzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CzxjoM+xOpXp0iwVrOM2L/2HGB1E0f48N6Hvv43dKv0=;
 b=DfKgcPmyDtps4AP6gPC+IRMaktqOtGpDwAYYDpZhW6OE0mJ4uecfEHvktHjnC7wguE0fGyKFJ/jkCIQe3mOeV6Tr0ottgWyhZ0Ef/dofOr9Mt/igbUcvn+Ax1pzQIh2HSYdR3pqwFnIGD5uIuDO2Z+c09kJb8bOA434SmXJv0Ac=
Date: Tue, 24 Nov 2020 16:00:38 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124150038.q6tcolqcbvws55eh@Air-de-Roger>
References: <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
X-ClientProxiedBy: LNXP265CA0041.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4eb573b7-2865-41d7-ee41-08d89089b6fb
X-MS-TrafficTypeDiagnostic: DM6PR03MB4139:
X-Microsoft-Antispam-PRVS: <DM6PR03MB41399ECD6E64DF135A73F1BD8FFB0@DM6PR03MB4139.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: EjcFYKJyDKy0XMxRDV4ph9U83OKoNAl8r45xIlQeJlaaxQahipr/COwlDEJ36D59rn6VZPwB7GY/8zODNkLPoYXRqid/sizjQMUOJyb3Mu0bBw+4jlLRabqvhgkn8auktwXBTsOknm1ns3y7Ht5cJzOGhqwa6e7Xr8IWffUE13rkrPPQBNVhzA/DI0reWRwEf6h4LSp3YLDfm807yusRfFPIAYsUKY8WIgJxMqfm+UsuymFdy158Ane3yhYUmoO13AXptCEsQWEqgMDqEDYstaR1TqBI6ICJJvUQ0y1SyAQMO+J1nTv6UNnHbW4V643XDE6nSBVxaTXn7V5FqAIiUfWOCAyYeXIJvS9qychhTWmuILpdjNEo0wr4wmNvC3C/58HdHWIQNrSdCMRAaDBE6g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(346002)(366004)(136003)(39860400002)(396003)(316002)(186003)(16526019)(8936002)(4326008)(8676002)(9686003)(6916009)(6486002)(33716001)(6496006)(26005)(478600001)(966005)(53546011)(6666004)(5660300002)(85182001)(66476007)(2906002)(956004)(86362001)(1076003)(66946007)(4744005)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: SFoXVeGX+c2CJuJVpzv7mxBD/1xe/Yxlgzgqp8htU1072lcPnledHHjjD3vXfRhaOtZuL8xftLH4PbrKJpsPM3bBYin9odm9cdqle5KnxhW1g+v8zKM8AQtgbyoFpCzRevTyzXVBRo2LUOMmvL/YzA4VNUi5v5t4sNRuepqey/bR5HMMsjLb8CBqF/KlIWuSuGbtCf/vmXCLQJgnruqPFNawyzHt75O9M4uyDb6/QsV5WRGMP+Z40rUP+BU1E789oki1iihlpiHBwq9u3njBJWPsz2JHVPzJzLau6tLPpKG9HQDevN6Q2AQxv2/R2mhKJjbQtb1utDuODhj5fZozF+r+ONu3G7YLdeFT3w2lm6kV6B46tRmPvqXTJ/w5PguvJFlKVgWeyEDYUNaqXA4M28feE+iqGicBfHkDpr+RN2h6A6SXnMKTYB5iUhYUEk980qg2FQBRbvj4hEM8ZZdi/H43lRIrQOT1tzPOESrxniVpCijrtPqOn/DD71/JgOaK1MA/qSCtwjEoeqLdgPS+43McIRVxtC8zxkl4cfrlQJnjrBTc+RcrOaR/Tll11p/rnPwTHf8caAYBfbx/xedeebCPDc2RmKr0N18BGq2rY4/o22H4whd/Xg2RXiXzAIvinMJuT3kEA8t5/ULpZnldhsxoxFa7VD3LZqUf63NbELbwKabmk4U8yIeWb9QdEakDDwJCIx9T/sVs3L4C7mqaF1LPhpCwJVMb30MToHw3yICjYCXLsMOHM4m/uSfMNuQkGBz2FjFvx8eQqPqMfeGv3r/uTurRRH2Zm1YPgy9VzrpQHyxctZvbraZBpA9TGN5j7KR9r0PMF0k34X/ox/wlHRDdVK9eE2074wjWDw02+VVqfqxncl5OzkX+XEm1X76Vylle8SHAb7ccvUsbJe+ZeQ==
X-MS-Exchange-CrossTenant-Network-Message-Id: 4eb573b7-2865-41d7-ee41-08d89089b6fb
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 15:00:42.9052
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RIK7Y6NJvPRT0Dnyf9w/J2AL2/xyALw6xde6qWb/sQqgMyXd7vK10y1vbERUam0hvvDG8w260xgEff6IKDxjVQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4139
X-OriginatorOrg: citrix.com

On Tue, Nov 24, 2020 at 03:52:25PM +0100, Jan Beulich wrote:
> On 24.11.2020 15:27, Manuel Bouyer wrote:
> > new log at
> > http://www-soc.lip6.fr/~bouyer/xen-log7.txt
> > 
> > this one ends up in a panic; I hope you'll find what you expect here.
> 
> Did you actually, just to have the data point, ever try to disable
> interrupt remapping ("iommu=no-intremap")? For PVH we can't ask you
> to turn off the IOMMU as a whole, but aiui interrupt remapping is
> not a strict prereq. (I'm sure Roger will correct me if I'm wrong.)

No, interrupt remapping is not required for PVH dom0. I was actually
going to ask the same thing, although I'm not sure it will make a
difference.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 15:09:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 15:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36531.68463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZw7-0001wP-Jv; Tue, 24 Nov 2020 15:08:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36531.68463; Tue, 24 Nov 2020 15:08:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khZw7-0001wH-Ge; Tue, 24 Nov 2020 15:08:55 +0000
Received: by outflank-mailman (input) for mailman id 36531;
 Tue, 24 Nov 2020 15:08:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UPq8=E6=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khZw6-0001wC-4B
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 15:08:54 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60cb4efa-09c8-4ed8-bc2b-387232b49522;
 Tue, 24 Nov 2020 15:08:53 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AOF8lK2003171
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 24 Nov 2020 16:08:48 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 1D78F2E9CAC; Tue, 24 Nov 2020 16:08:42 +0100 (MET)
X-Inumbo-ID: 60cb4efa-09c8-4ed8-bc2b-387232b49522
Date: Tue, 24 Nov 2020 16:08:42 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
        Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124150842.GN2020@antioche.eu.org>
References: <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 24 Nov 2020 16:08:48 +0100 (MET)

On Tue, Nov 24, 2020 at 03:52:25PM +0100, Jan Beulich wrote:
> On 24.11.2020 15:27, Manuel Bouyer wrote:
> > new log at
> > http://www-soc.lip6.fr/~bouyer/xen-log7.txt
> > 
> > this one ends up in a panic; I hope you'll find what you expect here.
> 
> Did you actually, just to have the data point, ever try to disable
> interrupt remapping ("iommu=no-intremap")? For PVH we can't ask you
> to turn off the IOMMU as a whole, but aiui interrupt remapping is
> not a strict prereq. (I'm sure Roger will correct me if I'm wrong.)

I just tried; it doesn't seem to change anything.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 15:18:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 15:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36538.68474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kha59-0002tp-G1; Tue, 24 Nov 2020 15:18:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36538.68474; Tue, 24 Nov 2020 15:18:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kha59-0002ti-Cv; Tue, 24 Nov 2020 15:18:15 +0000
Received: by outflank-mailman (input) for mailman id 36538;
 Tue, 24 Nov 2020 15:18:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UPq8=E6=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kha57-0002td-Iw
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 15:18:13 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d710ba67-ca12-47b6-b0ee-10dfec5c0959;
 Tue, 24 Nov 2020 15:18:12 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AOFI6RA001750
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 24 Nov 2020 16:18:07 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id C56B52E9CAC; Tue, 24 Nov 2020 16:18:01 +0100 (MET)
X-Inumbo-ID: d710ba67-ca12-47b6-b0ee-10dfec5c0959
Date: Tue, 24 Nov 2020 16:18:01 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124151801.GO2020@antioche.eu.org>
References: <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <ee63d6c2-4d0f-a3b7-37d0-8ce45c9e6fb1@suse.com>
 <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 24 Nov 2020 16:18:07 +0100 (MET)

On Tue, Nov 24, 2020 at 03:59:27PM +0100, Roger Pau Monné wrote:
> [...]
> 
> Sorry Manuel, you seem to have hit some kind of weird bug regarding
> interrupt management. If you want to progress further with NetBSD PVH
> dom0 it's likely to work on a different box,

The problem is that I don't have another box with an IOMMU to test on yet.

> but I would ask if you
> can keep the current box in order for us to continue debugging.

This system isn't in production yet, and I can probably delay migrating the
domUs some more. I think we have two more weeks to work on this.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 15:23:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 15:23:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36546.68486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khaA0-0003mE-3g; Tue, 24 Nov 2020 15:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36546.68486; Tue, 24 Nov 2020 15:23:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khaA0-0003m0-0g; Tue, 24 Nov 2020 15:23:16 +0000
Received: by outflank-mailman (input) for mailman id 36546;
 Tue, 24 Nov 2020 15:23:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KyA6=E6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kha9x-0003lO-W0
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 15:23:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6f2bae5-09b7-4a9c-915d-2c10e60d4ba4;
 Tue, 24 Nov 2020 15:23:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2AB7CAC2D
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 15:23:12 +0000 (UTC)
X-Inumbo-ID: d6f2bae5-09b7-4a9c-915d-2c10e60d4ba4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606231392; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xOn7bhmsHAh3nbeX1jA5Pz+A8azHmtctaSU5thLi85w=;
	b=H5pMvDonqaAn5/7SWPxGd3hNew0zvm5I55fsDf3DMYH9POhueNGt0+S2mR8KhnkQqGkutc
	2x/gCRNk5Xsd/R3zk5JOq8qspcMVhJyZWU+clST7neNyIr/t9Ck5nSDPXs7nc828EYzZ3L
	f8x3I4Xjxbn+/4Z8kvl74/sh9Tv0jVg=
To: xen-devel@lists.xenproject.org
References: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <ee63d6c2-4d0f-a3b7-37d0-8ce45c9e6fb1@suse.com>
 <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <ca994a9d-92c2-9cc6-1315-cb71cd3ffeed@suse.com>
Date: Tue, 24 Nov 2020 16:23:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CHFSPsKpbHOpBA7AY793o9AaF3msjbQyG"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--CHFSPsKpbHOpBA7AY793o9AaF3msjbQyG
Content-Type: multipart/mixed; boundary="13prHdkrRQ56cFfNKGw6OMaTt6fySie4V";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Message-ID: <ca994a9d-92c2-9cc6-1315-cb71cd3ffeed@suse.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
References: <20904a6a-ac64-755d-d228-4c49faf66fb5@suse.com>
 <20201120103824.GJ1508@antioche.eu.org>
 <20201123095713.orfpg72r73m7f46n@Air-de-Roger>
 <20201123113241.GE2520@antioche.eu.org>
 <20201123125112.q3zqb4e5nk6jg4hw@Air-de-Roger>
 <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <ee63d6c2-4d0f-a3b7-37d0-8ce45c9e6fb1@suse.com>
 <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>
In-Reply-To: <20201124145927.zrbsmvs6qvaxh4hf@Air-de-Roger>

--13prHdkrRQ56cFfNKGw6OMaTt6fySie4V
Content-Type: multipart/mixed;
 boundary="------------7E7EF1C34DD9F071E8552867"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------7E7EF1C34DD9F071E8552867
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.11.20 15:59, Roger Pau Monné wrote:
> On Tue, Nov 24, 2020 at 03:42:28PM +0100, Jan Beulich wrote:
>> On 24.11.2020 11:05, Jan Beulich wrote:
>>> On 23.11.2020 18:39, Manuel Bouyer wrote:
>>>> On Mon, Nov 23, 2020 at 06:06:10PM +0100, Roger Pau Monné wrote:
>>>>> OK, I'm afraid this is likely too verbose and messes with the timings.
>>>>>
>>>>> I've been looking (again) into the code, and I found something weird
>>>>> that I think could be related to the issue you are seeing, but haven't
>>>>> managed to try to boot the NetBSD kernel provided in order to assert
>>>>> whether it solves the issue or not (or even whether I'm able to
>>>>> repro it). Would you mind giving the patch below a try?
>>>>
>>>> With this, I get the same hang but XEN outputs don't wake up the interrupt
>>>> any more. The NetBSD counter shows only one interrupt for ioapic2 pin 2,
>>>> while I would have about 8 at the time of the hang.
>>>>
>>>> So, now it looks like interrupts are blocked forever.
>>>
>>> Which may be a good thing for debugging purposes, because now we have
>>> a way to investigate what is actually blocking the interrupt's
>>> delivery without having to worry about more output screwing the
>>> overall picture.
>>>
>>>> At
>>>> http://www-soc.lip6.fr/~bouyer/xen-log5.txt
>>>> you'll find the output of the 'i' key.
>>>
>>> (XEN)    IRQ:  34 vec:59 IO-APIC-level   status=010 aff:{0}/{0-7} in-flight=1 d0: 34(-MM)
>>>
>>> (XEN)     IRQ 34 Vec 89:
>>> (XEN)       Apic 0x02, Pin  2: vec=59 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0 dest_id:00000001
>>
>> Since it repeats in Manuel's latest dump, perhaps the odd combination
>> of status=1 and irr=1 is to tell us something? It is my understanding
>> that irr ought to become set only when delivery-status clears. Yet I
>> don't know what to take from this...
>
> My reading of this is that one interrupt was accepted by the lapic
> (irr=1) and that there's a further interrupt pending that hasn't yet
> been accepted by the lapic (status=1) because it's still serving the
> previous one. But that's all weird because there's no matching
> vector in ISR, and hence the IRR bit on the IO-APIC has somehow become
> stale or out of sync with the lapic state?
>
> I'm also unsure about how Xen has managed to reach this state, it
> shouldn't be possible in the first place.
>
> I don't think I can instrument the paths further with printfs because
> it's likely to result in the behavior itself changing and console
> spamming. I could however create a static buffer to trace relevant
> actions and then dump them all together with the 'i' debug key output.

debugtrace is your friend here. It already has a debug key for printing
the buffer contents to console ('T').

As the buffer is wrap-around you can even add debug prints in the
related interrupt paths for finding out which paths have been called in
which order and on which cpu. Depending on the findings you might want
to use percpu buffers.
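As a rough model of that mechanism (a self-contained sketch with made-up names, not Xen's actual debugtrace implementation): a wrap-around buffer keeps only the most recent N records, and a dump routine replays them oldest-first, which is what makes it safe to trace hot interrupt paths without console spam:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy wrap-around trace buffer: the most recent TRACE_ENTRIES records
 * survive; older ones are silently overwritten. */
#define TRACE_ENTRIES 4

struct trace_buf {
    char entries[TRACE_ENTRIES][64];
    unsigned long next;   /* total number of records ever written */
};

static void trace_record(struct trace_buf *tb, const char *msg)
{
    /* Write into the slot for this record, wrapping modulo the size. */
    snprintf(tb->entries[tb->next % TRACE_ENTRIES],
             sizeof(tb->entries[0]), "%s", msg);
    tb->next++;
}

/* Dump surviving entries oldest-first (what a 'T'-style key would print).
 * Returns the number of entries copied into out[]. */
static unsigned trace_dump(const struct trace_buf *tb,
                           char out[][64], unsigned max)
{
    unsigned long start =
        tb->next > TRACE_ENTRIES ? tb->next - TRACE_ENTRIES : 0;
    unsigned n = 0;

    for (unsigned long i = start; i < tb->next && n < max; i++, n++)
        strcpy(out[n], tb->entries[i % TRACE_ENTRIES]);
    return n;
}
```

With per-cpu instances of such a buffer, each interrupt path records into its own CPU's buffer and the dump interleaves them by record order.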


Juergen

--------------7E7EF1C34DD9F071E8552867
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7E7EF1C34DD9F071E8552867--

--13prHdkrRQ56cFfNKGw6OMaTt6fySie4V--

--CHFSPsKpbHOpBA7AY793o9AaF3msjbQyG
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+9JV8FAwAAAAAACgkQsN6d1ii/Ey/z
vgf9EqBNBC5Bd0PHn2dGMAsXLPfIlLRuzuJAOggqLZ8ff91iIPjFLsAlmZ1iFUCEm/b7+6sr3EFg
3gbJBvnCBfSMWDOmlMAA1CPpf1JRqKRcSlaCLd8jkTYSQCPLjYWgjj/fywyi7fDaPzj2x6AMPlCj
5dvGxW7Y88Ns9AVlE12g5acX0PE4hZ/4tndCNi1JktbDcsQYd8rXcSa/I7VyAu1hfDHl2CxvOJiz
17im+YEZP9Wbk4ixtG5BpCb9hC4/DGicflMqfPMoN20GrwEDbukdfYuc1CjAALkFnnYsYTgyi5OQ
TowFNbtGZIGvr9WqN3xuPiv1xJprIbseQy13lGpbQA==
=XcfH
-----END PGP SIGNATURE-----

--CHFSPsKpbHOpBA7AY793o9AaF3msjbQyG--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 15:49:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 15:49:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36556.68499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khaZN-0005ha-BC; Tue, 24 Nov 2020 15:49:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36556.68499; Tue, 24 Nov 2020 15:49:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khaZN-0005hT-7o; Tue, 24 Nov 2020 15:49:29 +0000
Received: by outflank-mailman (input) for mailman id 36556;
 Tue, 24 Nov 2020 15:49:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IwAZ=E6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1khaZL-0005hO-MT
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 15:49:27 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8f1a77d-49e1-4cfa-84cf-e2686110dc82;
 Tue, 24 Nov 2020 15:49:26 +0000 (UTC)
X-Inumbo-ID: e8f1a77d-49e1-4cfa-84cf-e2686110dc82
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606232966;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=B0AHOWU5/O2a5HTnH/yfjCvoUWmPbB14HEU14RtM+fo=;
  b=SsHVwm+tNdS/M5WohQ28aI4r6uVMAAX4Jy5FFU8UvWbnwws4+jbpcH5J
   2OOFBLY9iyVb/iuKPqeVuhhG9Hm4Ni+DHOFKfCI+2Sqcaq1+BVOUeon9w
   SeUgcEqowotDKIOUN3dUnT1rB7wVeyg1Z7u5h6Qaen+J5LL/FZuoy/UFb
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: vWlt506xwQLDgQhVp4uKkIJpPNOe7Pg6BSMEEonP2tOXy/JtCRYIGz15G8MezFd757eBQciJED
 SkVGUl+EvbZ0Z82UKOimhKc1dQbHizvAzuwpuMUY2Jew+7CxSo1DILBGnuvxENwIoiLThCs9+X
 oo5rRkAVny7ELsXn/fXQ/gcgQUsuTKt3UDE6HG6FSl3DMaKSSNi7kOkBCU/1zx4ptCkgcyy5TC
 nz4Y4oGFYn1fjXTE7peLYwzk57Cf7Vwyc5ipq/QjUoN9ouo0Jg1DEUxYCPd6+UpApmDFxFNCdh
 BGU=
X-SBRS: None
X-MesageID: 32184396
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,366,1599537600"; 
   d="scan'208";a="32184396"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WdjCEMN+bhzikaEv5GU6hoVRuM2JmvcUweRUTcjRbe/yb7BplZ4XA5esJI8EvRXcKwT3I1eyNFfhyYTO+KU581/s9RtB43hrJdAF0Wr7PA3hFIVeyrIpwf90sfU/kAWKrW4AeHs/rkyCRjKq2E5MiUpM2Jyk++pjxEaWxLOJplrwjl4UNNebc1nEOt5BbRFOs/aqahjSloWERF4QGdVfERm2+7F6KX/GkmccM6CkRqOV7bODkbSf/cIoUuvFcDrufqvrVTrsIlRXEBpf/Tbk8l+Y32rTY3qZYgKUELJE7sd51pLJQ31Jnnl+eclCaTyH2jBIfa+fwiHPk0uu2dMCTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xGQbamVpFC7tNhFrt/gNw+e5L2jcDt0svi9b393BfSI=;
 b=oX9fnBr8mEs6aM8Q9Yy9U/S5m2S1pQy+eHeqFb7Ub2Asev9G3lsbYg8ILoWbU1DsNrhuSo9BPTNkFR6x7EQGrw7tfn0QMwp3Htb6yXH/v75StFqQEmhyTDhuOffBJiFk6LoCkTT0fqBBnQp7f/NiBf12OSZd20xBXm5fVx6BPj2FDZCGPLUrtuzDxRRBJv0etAtERuzpRN9Cth6lmZex8olQGhYpoAD0jwm/pF48hRw3ALvbaaAbKXnRMwJaXYxUggsWCTMdFlIBJjYb62QDAVV3fBQWP4xth/lzFY4ye4jA9R8Gl0KNN7aO/6A3dFgvipBwNDU+e7b+bFRikvTjyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xGQbamVpFC7tNhFrt/gNw+e5L2jcDt0svi9b393BfSI=;
 b=SzsXfFYRB7Vwfk6Xjg0SspbqlbdJcxFvX69TV495qDfOlajFYx9TDidVZigFzDNyb+BgzFF+646ulC2UQnituEYHvVLgGtr5WJs2ZHrlBNPFCi4/XcQ0qhKMO4xwKDYolxm/fciUpIoYqogOidyV5tfvSA39mKbRuqozlyovZTo=
Date: Tue, 24 Nov 2020 16:49:17 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
References: <20201123143150.GG2520@antioche.eu.org>
 <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201124150842.GN2020@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0172.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::11)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 70df89e5-87d6-4426-2bb5-08d890908349
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4475B5E14D499F25D9D3773D8FFB0@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: qlBdprVHtEYOPdP3l2yysMksKl1QoQT371AaMOtJMTvkGw7sRvaG0uV5spC2KwVfdOSMnipcYHivDDIT0b+1dH8kPzttYueZXOZ6Q8i8XzQFU6Yp3PzZGxfS1X2EAz34tFF7n0Pq05J7yHCATlM4dc3qc+4Yz/xO5gtfOergMiOtMQpBKNasdHLdQE4N2E5Y2Z5tZQp9t4/0ymc4VzZ/tk9q7LFdYuoGQwahnqbWCecy8QLnml4q3kkWMtr5ntM+IMQkrquDHra1jHC1JM7rafXydV2jqKp/4U6VUT7/F9uzmO9M5dLizSmIuak0HgWh5/EwCCedDVglwwTpUm6YF6klk2TDQlAEQXRm156cbGlHkiEj6goq2Anm/CGFoYZ/dtG7STqm8fjwDlDtOkZpxg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(376002)(396003)(39860400002)(366004)(966005)(6666004)(2906002)(8676002)(66946007)(66556008)(26005)(6486002)(16526019)(4326008)(53546011)(8936002)(6496006)(9686003)(478600001)(956004)(66476007)(4744005)(85182001)(1076003)(186003)(5660300002)(6916009)(33716001)(86362001)(316002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: O53Nft3Y/hMwR/oL6eOLfTKNRWrcf0Hl34LB3WaHwLJT/evZXyqui5263SmdAhDDI5wGOstNRa1ZFiSoWtxwu9h66Ec0osJrZ6Powczeq9SgS4i927aSEItk8roeGq6vDFkHSEhAkjOx44qs3HRBFweM2WL7ifhGYiOwOog/n7o65DfWwuRjKHh0aJpKqrbnQDb1Ue/hmNCQ4mdkCsnt1ylfSW06dsCWR0o7MCh80E81E8YSyCMQlW3E/azhd3zxaSpThqw5Gh9sjjQ3EsY7pCAPstb5oBrLBcyfEU9vW6PMp4yH96ktplepWEeuHSvjAgZA56oQgQ0MV8fHwg/5lGEPoWc0ka0UiZflYQkDWfj7el2tqLDwrr7CuRSog3HeTrxbjvV9Y6HvIEbxcDBsLD9QnM55J3VjANU0n9ntyAxtDtqGoYkXJTwHr/vNtRehzQ2dVcJwq1qjUSjZvZGBl3x2QVzmXAgfSdxcedVgobbosAcwSmIxBejZnQAUidQVzli9DGsqJrtSl3DYDweIoWyVuliTuZ5XUNq9GQ/666qUjanVWK9feng/sYOpuXXcK1m6w3Yh3gZhaLG7rcn3xn7V1fNINkH7nAF/7OJI/QkL5t0bgfrELUVkfEdb/3KQh2K5Qlx96rv9pUfcoDyZSg+ap6nKaueNRCacAupwYI6Ed7G+CPMBH6zUJpB32IjAqTuLSIZiPHcQtt/Kkcgt076skoHtvSx9b1NpdR0ywGJzFWsvF52bhoI4NI8A50LNWk1ow4zbCkgAuXlSJTyqckVc/3YRs5SaIGX8hV0R2dazApj7+ZPft0FsweDOPdcN0MEgIA//KvXUhdgZu/fch1Y+Rad6r5FHGnnVpEHmHw0adbMxEqCbl1tygdyY0zmLIqoPCvzuqEwaKRz3U5ix9A==
X-MS-Exchange-CrossTenant-Network-Message-Id: 70df89e5-87d6-4426-2bb5-08d890908349
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 15:49:22.6391
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pwJmFNBWrShY3w8G8LjjHY9diyv2RXogBz4O3jNFfCSCSp4+fBzXS3fxewLiuSRYxqBPJOWh5YVZ5kQ5lv2mnQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

On Tue, Nov 24, 2020 at 04:08:42PM +0100, Manuel Bouyer wrote:
> On Tue, Nov 24, 2020 at 03:52:25PM +0100, Jan Beulich wrote:
> > On 24.11.2020 15:27, Manuel Bouyer wrote:
> > > new log at
> > > http://www-soc.lip6.fr/~bouyer/xen-log7.txt
> > > 
> > > this one ends up in a panic, I hope you'll find what you expect here.
> > 
> > Did you actually, just to have the data point, ever try to disable
> > interrupt remapping ("iommu=no-intremap")? For PVH we can't ask you
> > to turn off the IOMMU as a whole, but aiui interrupt remapping is
> > not a strict prereq. (I'm sure Roger will correct me if I'm wrong.)
> 
> I just tried, it doesn't seem to change anything.

Could you also give a try with ioapic_ack=new on the Xen command line?

Thanks, Roger.
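For readers decoding RTE lines like the one quoted earlier in the thread (`vec=59 delivery=LoPri dest=L status=1 polarity=1 irr=1 trig=L mask=0`), the fields follow the standard 82093AA IO-APIC redirection-table-entry layout. A hypothetical decoder (illustrative only, not Xen's dump code):

```c
#include <assert.h>
#include <stdint.h>

/* 82093AA IO-APIC redirection table entry layout:
 * bits 0-7 vector, 8-10 delivery mode (001 = lowest priority),
 * bit 11 destination mode, bit 12 delivery status, bit 13 polarity,
 * bit 14 remote IRR, bit 15 trigger mode (1 = level), bit 16 mask,
 * bits 56-63 destination. */
struct rte {
    uint8_t vector, delivery_mode, dest_mode, delivery_status;
    uint8_t polarity, remote_irr, trigger, mask, dest;
};

static struct rte rte_decode(uint64_t e)
{
    struct rte r = {
        .vector          = e & 0xff,
        .delivery_mode   = (e >> 8) & 0x7,
        .dest_mode       = (e >> 11) & 1,
        .delivery_status = (e >> 12) & 1,
        .polarity        = (e >> 13) & 1,
        .remote_irr      = (e >> 14) & 1,
        .trigger         = (e >> 15) & 1,
        .mask            = (e >> 16) & 1,
        .dest            = (e >> 56) & 0xff,
    };
    return r;
}
```

The suspicious combination in the dump is delivery_status=1 together with remote_irr=1 on a level-triggered, unmasked pin.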


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 16:09:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 16:09:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36566.68510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khasg-00082T-0L; Tue, 24 Nov 2020 16:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36566.68510; Tue, 24 Nov 2020 16:09:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khasf-00082M-T8; Tue, 24 Nov 2020 16:09:25 +0000
Received: by outflank-mailman (input) for mailman id 36566;
 Tue, 24 Nov 2020 16:09:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UPq8=E6=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1khasf-00082H-Gn
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:09:25 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5523cea-a48a-4f66-8fb9-ee8612eb3df1;
 Tue, 24 Nov 2020 16:09:24 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AOG9JdH005476
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Tue, 24 Nov 2020 17:09:20 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 673602E9CAC; Tue, 24 Nov 2020 17:09:14 +0100 (MET)
X-Inumbo-ID: b5523cea-a48a-4f66-8fb9-ee8612eb3df1
Date: Tue, 24 Nov 2020 17:09:14 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201124160914.GQ2020@antioche.eu.org>
References: <20201123170610.kzfxvcgkdkvh3ex4@Air-de-Roger>
 <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Tue, 24 Nov 2020 17:09:20 +0100 (MET)

On Tue, Nov 24, 2020 at 04:49:17PM +0100, Roger Pau Monné wrote:
> Could you also give a try with ioapic_ack=new on the Xen command line?

With this I still have the interrupt issue, but Xen doesn't panic on 'i'.
http://www-soc.lip6.fr/~bouyer/xen-log8.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 16:32:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 16:32:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36574.68523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbFE-00027U-Ug; Tue, 24 Nov 2020 16:32:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36574.68523; Tue, 24 Nov 2020 16:32:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbFE-00027N-RI; Tue, 24 Nov 2020 16:32:44 +0000
Received: by outflank-mailman (input) for mailman id 36574;
 Tue, 24 Nov 2020 16:32:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khbFD-00027I-AV
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:32:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 187a57bf-2f6c-4773-b108-20a70d99ac53;
 Tue, 24 Nov 2020 16:32:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5EC96ACA8;
 Tue, 24 Nov 2020 16:32:41 +0000 (UTC)
X-Inumbo-ID: 187a57bf-2f6c-4773-b108-20a70d99ac53
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606235561; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UhH01bDe6Xjioh8ImKumhNUknUT+Ni63eIuad5SFLbk=;
	b=KItxWpQKS0QD2Hn9eD75oz7vusYrgRgd4r11c+XlynV+r+rak9rapcnyE+on4Z64XO7+O0
	AURrv55wdprdsT/xrgzoUFHaxxUiqGBbHIoixibt/zoVw8Lpuhsiuye/rd8CXvHi3Zi3EF
	XotI0AaZyGaXvF0mvluHbh0aBrpdgZM=
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
Date: Tue, 24 Nov 2020 17:32:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 15:49, Jürgen Groß wrote:
> On 24.11.20 15:02, Jan Beulich wrote:
>> On 24.11.2020 08:01, Juergen Gross wrote:
>>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>>> can race in case the first one gets interrupted after setting
>>> EVTCHN_FIFO_PENDING and when the other one manages to set
>>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>>> lead to evtchn_check_pollers() being called before the event is put
>>> properly into the queue, resulting eventually in the guest not seeing
>>> the event pending and thus blocking forever afterwards.
>>>
>>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>>> lock") made the race just more obvious, while the fifo event channel
>>> implementation had this race from the beginning when an unmask operation
>>> was running in parallel with an event channel send operation.
>>
>> Ah yes, but then also only for inter-domain channels, as it was
>> only in that case that the "wrong" domain's event lock was held.
>> IOW there was a much earlier change already where this issue
>> got widened (when the per-channel locking got introduced). This
>> then got reduced to the original scope by XSA-343's adding of
>> locking to evtchn_unmask(). (Not sure how much of this history
>> actually wants adding here. I'm writing it down not least to
>> make sure I have a complete enough picture.)
> 
> I think we both agree that this race was possible for quite some time.
> And I even think one customer bug I've been looking into recently
> might be exactly this problem (a dom0 was occasionally hanging in
> cross-cpu function calls, but switching to 2-level events made the
> problem disappear).

IPIs weren't affected earlier on (i.e. in any released version),
if my analysis above is correct.

>>> Additionally when an
>>> event channel needs to change queues both queues need to be locked
>>> initially.
>>
>> Since this was (afaict) intentionally not the case before, I
>> think I would want to see a word spent on the "why", perhaps
>> better in a code comment than here. Even more so that you
>> delete a respective comment justifying the possible race as
>> permissible. And I have to admit right now I'm still uncertain
>> both ways, i.e. I neither have a clear understanding of why it
>> would have been considered fine the other way around before,
>> nor why the double locking is strictly needed.
> 
> I need the double locking to avoid someone entering the locked region
> when dropping the lock for the old queue and taking the one for the
> new queue, as this would open the same race window again.

Well, that's what you have already said. The thing is that the code
prior to your change gives the impression that this race was
benign.
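For illustration only, the double-locking retry pattern under discussion can be sketched roughly like this. This is a simplified, single-threaded model with invented names (queue, channel, lock_queues); the real code takes the per-queue spin locks with spin_lock_irqsave(), compares the recorded last vCPU/priority rather than a raw queue pointer, and has to care about lock ordering:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the real evtchn_fifo_queue locking; names are
 * hypothetical and only model the retry logic being discussed. */
struct queue { int locked; };
struct channel { struct queue *last_q; };

static void q_lock(struct queue *q)   { assert(!q->locked); q->locked = 1; }
static void q_unlock(struct queue *q) { assert(q->locked);  q->locked = 0; }

/*
 * Lock both the channel's current queue and the queue it was last on,
 * retrying if the channel migrated to another queue while the locks
 * were being taken.  Give up after a few tries, as the patch does.
 */
static struct queue *lock_queues(struct channel *ch, struct queue *cur)
{
    for ( int try = 0; try < 3; try++ )
    {
        struct queue *old_q = ch->last_q;

        q_lock(cur);
        if ( old_q != cur )
            q_lock(old_q);

        /* Re-check under the locks: did the channel move meanwhile? */
        if ( ch->last_q == old_q )
            return old_q;            /* stable: both queues now held */

        if ( old_q != cur )
            q_unlock(old_q);
        q_unlock(cur);
    }

    return NULL;                     /* too many queue changes */
}
```

The key point is the re-check under both locks: dropping one lock before taking the other would reopen the very window the patch is trying to close.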

>>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>>> +        old_v = d->vcpu[lastq.last_vcpu_id];
>>> +        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
>>> +             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
>>> +            break;
>>> +
>>> +        if ( q != old_q )
>>> +            spin_unlock(&old_q->lock);
>>> +        spin_unlock_irqrestore(&q->lock, flags);
>>> +
>>> +        if ( try == 3 )
>>> +        {
>>> +            gprintk(XENLOG_WARNING,
>>> +                    "dom%d port %d lost event (too many queue changes)\n",
>>> +                    d->domain_id, evtchn->port);
>>> +            return;
>>
>> Originally evtchn_check_pollers() would still have been called
>> in this case. Wouldn't you better retain this, or else justify
>> the possibly observable change in behavior?
> 
> I could retain it, but without having set the event to be pending
> I don't see the value in doing so.

But that's part of my concern - you now don't set PENDING when
bailing here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 16:44:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 16:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36582.68535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbQn-00037v-27; Tue, 24 Nov 2020 16:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36582.68535; Tue, 24 Nov 2020 16:44:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbQm-00037o-VI; Tue, 24 Nov 2020 16:44:40 +0000
Received: by outflank-mailman (input) for mailman id 36582;
 Tue, 24 Nov 2020 16:44:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khbQl-00037g-Nu; Tue, 24 Nov 2020 16:44:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khbQl-0000nj-9g; Tue, 24 Nov 2020 16:44:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khbQl-00030i-02; Tue, 24 Nov 2020 16:44:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khbQk-0006lO-UC; Tue, 24 Nov 2020 16:44:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khbQl-00037g-Nu; Tue, 24 Nov 2020 16:44:39 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iw4r303Lpr65VWYDflb76272p2gDbZsXObeuD0U837U=; b=7GJ5mzeWnVszI04gmEeG6qBFD0
	e6x/PH81MA4ZuRCqzS5YpwHmQC9wVDBEurbj/AMCjgbWhWdr2lrsQrJMz6irGf8X4G8uFTxW4fLtB
	jhumgFAVs56/MPQAWA4L0ci+9ZGmm2R/T8DJqce5ES4rwdvP/9+kl2GqnM1SHKmvmOdg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khbQl-0000nj-9g; Tue, 24 Nov 2020 16:44:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khbQl-00030i-02; Tue, 24 Nov 2020 16:44:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khbQk-0006lO-UC; Tue, 24 Nov 2020 16:44:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156991-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156991: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9b156bcc3ffcc7949edd4460b718a241e87ae302
X-Osstest-Versions-That:
    xen=8147e00e4fbfcc43b665dc6bf279b204c501ba04
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 16:44:38 +0000

flight 156991 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156991/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9b156bcc3ffcc7949edd4460b718a241e87ae302
baseline version:
 xen                  8147e00e4fbfcc43b665dc6bf279b204c501ba04

Last test of basis   156982  2020-11-24 11:01:24 Z    0 days
Testing same since   156991  2020-11-24 14:01:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8147e00e4f..9b156bcc3f  9b156bcc3ffcc7949edd4460b718a241e87ae302 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 16:56:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 16:56:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36592.68549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbbe-000489-4H; Tue, 24 Nov 2020 16:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36592.68549; Tue, 24 Nov 2020 16:55:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbbe-000482-1J; Tue, 24 Nov 2020 16:55:54 +0000
Received: by outflank-mailman (input) for mailman id 36592;
 Tue, 24 Nov 2020 16:55:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khbbc-00047x-Dg
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:55:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7fe34b5-944a-4837-b013-ae874e18f264;
 Tue, 24 Nov 2020 16:55:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4A94EAC77;
 Tue, 24 Nov 2020 16:55:50 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khbbc-00047x-Dg
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:55:52 +0000
X-Inumbo-ID: c7fe34b5-944a-4837-b013-ae874e18f264
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c7fe34b5-944a-4837-b013-ae874e18f264;
	Tue, 24 Nov 2020 16:55:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606236950; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ufwAhUrq/oOi8AfhpClhMYTGxOOsopGr05vHJJZQ93o=;
	b=lnjdMv0GXFoaIdP5g911w7kldh0cp6BlW+1aWwErtxGT35dJwpyDptFHoLiF4nH7HiqpEI
	flDkw9akTG6LoVIY57WC9tT0Mr4zRcSk1fftSI25PdQH4Lq0ckSHURA80jFrU4Tw4hfXk+
	fs+PAlLyqFo3HNKosUEbJyipwaS8Nrg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 4A94EAC77;
	Tue, 24 Nov 2020 16:55:50 +0000 (UTC)
Subject: Re: [PATCH v2 08/12] viridian: add ExProcessorMasks variants of the
 flush hypercalls
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-9-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1b8d71bc-5f6d-b458-e0fc-2a2f0d29ddd8@suse.com>
Date: Tue, 24 Nov 2020 17:55:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120094900.1489-9-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:48, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> The Microsoft Hypervisor TLFS specifies variants of the already implemented
> HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
> Processor Set' as an argument rather than a simple 64-bit mask.
> 
> This patch adds a new hvcall_flush_ex() function to implement these
> (HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
> new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
> determine the size of the Virtual Processor Set (so it can be copied from
> guest memory) and parse it into hypercall_vpmask (respectively).
> 
> NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
>       support needs to be advertised via CPUID. This will be done in a
>       subsequent patch.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Just a couple of cosmetic remarks; apart from them this looks
good to me, albeit some of the size calculations are quite,
well, involved. I guess, like with most parts of this series,
in the end Wei will need to give his ack.

> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
>      return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
>  }
>  
> +#define HV_VPSET_BANK_SIZE \
> +    sizeof_field(struct hv_vpset, bank_contents[0])
> +
> +#define HV_VPSET_SIZE(banks)   \
> +    (sizeof(struct hv_vpset) + (banks * HV_VPSET_BANK_SIZE))

Personally I think this would be better done using
offsetof(struct hv_vpset, bank_contents), but you're the maintainer.
However, "banks" wants parenthesizing, just in case.
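To show why "banks" wants parenthesising: a macro argument may itself be an expression, and without parentheses the multiplication binds to only part of it. The numbers below are hypothetical, standing in for the real sizeof-based values:

```c
#include <assert.h>

/* Illustrative only: BANK_SIZE stands in for
 * sizeof_field(struct hv_vpset, bank_contents[0]). */
#define BANK_SIZE 8

/* Unparenthesised argument, as in the patch under review. */
#define SIZE_BAD(banks)  (16 + (banks * BANK_SIZE))

/* Parenthesised argument, as requested. */
#define SIZE_GOOD(banks) (16 + ((banks) * BANK_SIZE))
```

With a compound argument such as `1 + 2`, SIZE_BAD expands to `16 + (1 + 2 * 8)` = 33 instead of the intended `16 + (3 * 8)` = 40.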

> +#define HV_VPSET_MAX_BANKS \
> +    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
> +
> +struct hypercall_vpset {
> +    union {
> +        struct hv_vpset set;
> +        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
> +    };
> +};

A struct with just a union as member could be expressed as a simple
union - any reason you prefer the slightly more involved variant?

> +static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
> +
> +static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
> +{
> +    return hweight64(vpset->valid_bank_mask);
> +}
> +
> +static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set,

const?

> @@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
>      return 0;
>  }
>  
> +static int hvcall_flush_ex(union hypercall_input *input,

const again?

> +                           union hypercall_output *output,
> +                           unsigned long input_params_gpa,
> +                           unsigned long output_params_gpa)

Mainly for cosmetic reasons and to be in sync with
hvm_copy_from_guest_phys()'s respective parameter, perhaps both
would better be paddr_t?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 16:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 16:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36599.68562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbcu-0004Ez-FD; Tue, 24 Nov 2020 16:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36599.68562; Tue, 24 Nov 2020 16:57:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbcu-0004Er-Bw; Tue, 24 Nov 2020 16:57:12 +0000
Received: by outflank-mailman (input) for mailman id 36599;
 Tue, 24 Nov 2020 16:57:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khbct-0004Ek-W3
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:57:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khbcs-00014C-RW; Tue, 24 Nov 2020 16:57:10 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khbcs-0007kY-G8; Tue, 24 Nov 2020 16:57:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khbct-0004Ek-W3
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:57:12 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vaVujtI/I43ocpPCF1dct3D70cWbZhxiq3sAoJ2Hees=; b=DISi9+vzLnrb2T5GJYCIYGYls1
	dQmbeMeRRUvfAdaOacHzirQ8CH/vN12Jpi5UKDuDYgYurXPSBMucvgW4vbrgSjB7CMhRUnPWSoGPt
	2413o4Y/EAP2mpu1pYJ5s5kE+wWM+AdCYvqc/+glU1DglR2FGYoFF83R7bxTXcc0HoV8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khbcs-00014C-RW; Tue, 24 Nov 2020 16:57:10 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khbcs-0007kY-G8; Tue, 24 Nov 2020 16:57:10 +0000
Subject: Re: [PATCH v2 7/8] lib: move bsearch code
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
 <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
 <44ffc041-cacd-468e-a835-f5b2048bb201@xen.org>
 <2cf3a90d-f463-41f8-f861-6ef00279b204@suse.com>
 <2419eccf-c696-6aa1-ada4-0f7bd6bc5657@xen.org>
 <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <59a4e1c1-ea39-1846-92ae-92560db4b1fb@xen.org>
Date: Tue, 24 Nov 2020 16:57:08 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <77534dc3-bdd6-f884-99e3-90dc9b02a81f@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Andrew,

Thank you for the detailed explanation :).

On 24/11/2020 00:40, Andrew Cooper wrote:
> On 23/11/2020 22:49, Julien Grall wrote:
>> Hi Jan,
>>
>> On 19/11/2020 10:27, Jan Beulich wrote:
>>> On 18.11.2020 19:09, Julien Grall wrote:
>>>> On 23/10/2020 11:19, Jan Beulich wrote:
>>>>> --- a/xen/include/xen/compiler.h
>>>>> +++ b/xen/include/xen/compiler.h
>>>>> @@ -12,6 +12,7 @@
>>>>>        #define inline        __inline__
>>>>>     #define always_inline __inline__ __attribute__
>>>>> ((__always_inline__))
>>>>> +#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
>>>>
>>>> bsearch() is only used by Arm and I haven't seen anyone so far
>>>> complaining about the perf of I/O emulation.
>>>>
>>>> Therefore, I am not convinced that there is enough justification to
>>>> introduce a GNU attribute just for this patch.
>>>
>>> Please settle this with Andrew: He had asked for the function to
>>> become inline. I don't view making it static inline in the header
>>> as an option here - if the compiler decides to not inline it, we
>>> should not end up with multiple instances in different CUs.
>>
>> That's the cons of static inline... but then why is it suddenly a
>> problem with this helper?
>>
>>> And
>>> without making it static inline the attribute needs adding; at
>>> least I'm unaware of an alternative which works with the various
>>> compiler versions.
>>
>> The question we have to answer is: What is the gain with this approach?
> 
> Substantial.
> 
>>
>> If it is not quantifiable, then introducing compiler specific
>> attribute is not an option.
>>
>> IIRC, there are only two callers (all in Arm code) of this function.
>> Even inlined, I don't believe you would drastically reduce the number
>> of instructions compared to a full-blown version. To be generous, I
>> would say you may save ~20 instructions per copy.
>>
>> Therefore, so far, the compiler specific attribute doesn't look
>> justified to me. As usual, I am happy to be proven wrong.
> 
> There is a very good reason why this is the classic example used for
> extern inline's in various libc's.
> 
> The gains are from the compiler being able to optimise away the function
> pointer(s) entirely.  Instead of working on opaque objects, it can see
> the accesses directly, implement compares as straight array reads, (for
> sorting, the swap() call turns into memcpy()) and because it can see all
> the memory accesses, doesn't have to assume that every call to cmp()
> modifies arbitrary data in the array (i.e. doesn't have to reload the
> objects from memory every iteration).
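To make the optimisation argument concrete, here is a minimal bsearch()-style helper of the kind being talked about (a sketch with invented names; not the actual Xen prototype). When a definition like this is visible at the call site, the compiler can fold the indirect cmp() call into direct array compares:

```c
#include <assert.h>
#include <stddef.h>

/* Generic binary search over a sorted array; sketch only. */
static const void *my_bsearch(const void *key, const void *base,
                              size_t num, size_t size,
                              int (*cmp)(const void *, const void *))
{
    size_t lo = 0, hi = num;

    while ( lo < hi )
    {
        size_t mid = lo + (hi - lo) / 2;
        const char *elem = (const char *)base + mid * size;
        int r = cmp(key, elem);

        if ( r == 0 )
            return elem;            /* match found */
        if ( r < 0 )
            hi = mid;               /* search lower half */
        else
            lo = mid + 1;           /* search upper half */
    }

    return NULL;                    /* not present */
}

/* Example comparator; with inlining visible, calls like this are what
 * the compiler can turn into straight array reads and compares. */
static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}
```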
> 
> extern inline allows the compiler full flexibility to judge whether
> inlining is a net win, based on optimisation settings and observing what
> the practical memory access pattern would be from not inlining.
> 
> extern inline is the appropriate thing to use here, except for the big
> note in the GCC manual saying "always use gnu_inline in this case", which
> appears to be working around a change in the C99 standard that forces
> any non-static inline to emit a body even when it's not called, due to
> rules about global symbols.
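As a minimal illustration of the gnu_inline pattern referred to above (function name invented): with __gnu_inline__, "extern inline" keeps the GNU89 meaning, i.e. the definition is used for inlining only and no out-of-line symbol is emitted from this translation unit, even under -std=c99 and later. An out-of-line copy must then be provided exactly once elsewhere (or, as here for the sketch, always_inline sidesteps the need):

```c
#include <assert.h>

/* Hypothetical example function; inlined at every call site, no
 * out-of-line definition is emitted from this translation unit. */
extern inline __attribute__((__gnu_inline__, __always_inline__))
int add_one(int x)
{
    return x + 1;
}
```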
> 
> Therefore, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Some further observations:
> 
> For arch/arm/io.c, the handlers are sorted, so find_mmio_handler() will
> be O(lg n), but it will surely be faster with the inlined version, and
> this is the fastpath.
> 
> register_mmio_handler() OTOH is massively expensive, because sort()
> turns the array into a heap and back into an array on every insertion,
> just to insert an entry into an already sorted array.  It would be more
> efficient to library-fy the work I did for VT-x MSR load/save lists
> (again, extern inline) and reuse
> "insert_$FOO_into_sorted_list_of_FOOs()" which is a search, single
> memmove() to make a gap, and a memcpy() into place.
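The "search, memmove() to make a gap, copy into place" insertion described above can be sketched as follows (simplified to an int array with a linear search and invented names; the real work operates on handler/MSR entries and would use a binary search):

```c
#include <assert.h>
#include <string.h>

/* Insert val into the already-sorted arr[0..*nr), keeping it sorted.
 * The caller guarantees the array has room for one more element. */
static void insert_sorted(int *arr, unsigned int *nr, int val)
{
    unsigned int i = 0;

    while ( i < *nr && arr[i] < val )   /* binary search in real code */
        i++;

    /* Single memmove() to open a gap, then copy into place. */
    memmove(&arr[i + 1], &arr[i], (*nr - i) * sizeof(*arr));
    arr[i] = val;
    (*nr)++;
}
```

This is O(n) per insertion from the memmove(), versus the heap-build-and-tear-down cost of calling sort() after every append.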
> 
> When you compile io.c with this patch in place, the delta is:
> 
> add/remove: 0/1 grow/shrink: 1/0 up/down: 92/-164 (-72)
> Function                                     old     new   delta
> try_handle_mmio                              720     812     +92
> bsearch                                      164       -    -164
> Total: Before=992489, After=992417, chg -0.01%
> 
> The reason cmp_mmio_handler (140 bytes) doesn't drop out is that it
> is referenced by register_mmio_handler()'s call to sort().  All in all,
> the inlined version is less than 1/3 the size of the out-of-lined
> version, but I haven't characterised it further than that.
> 
> 
> On a totally separate point, I wonder if we'd be better off compiling
> with -fgnu89-inline, because I can't see any case where we'd want the C99
> inline semantics anywhere in Xen.

This was one of my points above. It feels that if we want to use this
behavior in Xen, then it should apply everywhere rather than just to
this helper.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 16:58:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 16:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36607.68573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbdy-0004Qz-UH; Tue, 24 Nov 2020 16:58:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36607.68573; Tue, 24 Nov 2020 16:58:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbdy-0004Qs-RC; Tue, 24 Nov 2020 16:58:18 +0000
Received: by outflank-mailman (input) for mailman id 36607;
 Tue, 24 Nov 2020 16:58:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khbdx-0004QI-Dg
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:58:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa61e201-fa76-4625-82da-7264d5f3d9be;
 Tue, 24 Nov 2020 16:58:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C429AAC6A;
 Tue, 24 Nov 2020 16:58:15 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khbdx-0004QI-Dg
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 16:58:17 +0000
X-Inumbo-ID: aa61e201-fa76-4625-82da-7264d5f3d9be
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id aa61e201-fa76-4625-82da-7264d5f3d9be;
	Tue, 24 Nov 2020 16:58:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606237095; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VUellZBDDRYg6aYj/PtKvu3LQ9tnp34aTi25Ik3zNhw=;
	b=KgYlNLBwJbzvtFNTV4KIdmREVidI34IRVU8OtMPOI8qMb28fV2xLsI4/PT2KfHnu8QSF9D
	StsKNxYp70uZ2jIf8nRqlUhD1GKaqbsrJvaW7+/CF/AcckPD5UHBjr+P6Awx8upkQ75DY1
	Zgc+vWpfaMETdQiYXNv0bzJFbQIL4QM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C429AAC6A;
	Tue, 24 Nov 2020 16:58:15 +0000 (UTC)
Subject: Re: [PATCH v2 09/12] viridian: add ExProcessorMasks variant of the
 IPI hypercall
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201120094900.1489-1-paul@xen.org>
 <20201120094900.1489-10-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a9cfdedf-7bf1-56a2-d716-f13ea9a37179@suse.com>
Date: Tue, 24 Nov 2020 17:58:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120094900.1489-10-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 10:48, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> A previous patch introduced variants of the flush hypercalls that take a
> 'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
> This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
> (HVCALL_SEND_IPI_EX).
> 
> NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
>       not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
>       'ExProcessorMasks' is not yet advertised via CPUID.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

I guess most remarks given for the previous patch apply here
as well.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:00:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36614.68586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbfq-0005Gu-AC; Tue, 24 Nov 2020 17:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36614.68586; Tue, 24 Nov 2020 17:00:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbfq-0005Gn-6p; Tue, 24 Nov 2020 17:00:14 +0000
Received: by outflank-mailman (input) for mailman id 36614;
 Tue, 24 Nov 2020 17:00:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khbfp-0005Gi-80
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:00:13 +0000
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f498b134-b06a-4ece-a250-e32d7943231f;
 Tue, 24 Nov 2020 17:00:12 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id g17so16584423qts.5
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:00:12 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id q20sm13543500qke.0.2020.11.24.09.00.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:00:11 -0800 (PST)
X-Inumbo-ID: f498b134-b06a-4ece-a250-e32d7943231f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=fWmahOo+BxCVV9NEre8zWFBYKORZfhSvfmLWizAk7Sk=;
        b=pkS1IsRKMFFQKIiQOFY8WLSirKTKy5vlf5vBnIHJQYY8QvEC/15BTkG00rjnGztDPS
         MPnwXECAwJubj26/AKIoLsOqJXTzH20co1dwm6n/ihZPA8R/GncgR5Nn7BAW2WaJHH5L
         tpbCDHrNvMRBQDAisHHKNo7eGDqcuHeaFIgkH8ZC9CSiQc7xH6mkLcfA+jtEEIACMHfa
         u3OVfgqj6hqatE6LBJWSWupUvafe+983boP6OtbOOXJy3CJYXaRV2xsgTTuY8gHl3o7O
         iFEoPfxU2lYaOEolDJw8+Asj3AIZ8QN2eniPlD554d4692r1CYqrwtcGwNWS1nLFzZNO
         sY1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=fWmahOo+BxCVV9NEre8zWFBYKORZfhSvfmLWizAk7Sk=;
        b=Fs5Nzob3SRS0tvJTWgpQuGaLvFLBlW3Twkx+VBI4XiJtZJDlDCK0JLvFe2RVL7Jt6z
         /zZyYe5PeHcY4EWVRyxjrSszfoccn3tD4XnsCoTOWfMpQRE5a489RnvZEHQVDN7pqokO
         4nyc6RSBH9BkxVC+fPhFT6zaIHeSqDiYVR4n8ZoLmqobsTIM21rceBk+vCvu/kNo0iko
         NEfUWVK2GJQnKb96WTLuQ8CpcUZ5N+TzedGMhe4UUbgYGBSW4QTEqiZTXrVbo9KyyoA4
         9+d0chbj0ujEua+6XeQlkPmh6/4hVNS9YDjWG+2oU7cwCL33xBMrjiLtCbEvk5DwD81a
         weYg==
X-Gm-Message-State: AOAM532cbUnYDK7OLO1MKoicf5QmRTEoaNjYWJTvS0F/K3hYBwVrFaiE
	AlOjiI+Hiw8baQ4tbrsrhpg=
X-Google-Smtp-Source: ABdhPJw9lXoUdb+9nSlmHwoziLxai+Rt3I7zgIJ1oTBqcvjWZC4/0fG7FN8qIJjp6NzaYQdsfu0bWw==
X-Received: by 2002:a05:622a:18d:: with SMTP id s13mr5279508qtw.306.1606237211875;
        Tue, 24 Nov 2020 09:00:11 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 11:59:49 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 11/20] block: reference struct block_device from struct
 hd_struct
Message-ID: <X708BTJ5njtbC2z1@mtj.duckdns.org>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-12-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201118084800.2339180-12-hch@lst.de>

Hello,

This is great. So much simpler & better. Some nits below.

> diff --git a/block/partitions/core.c b/block/partitions/core.c
> index a02e224115943d..0ba0bf44b88af3 100644
> --- a/block/partitions/core.c
> +++ b/block/partitions/core.c
> @@ -340,12 +340,11 @@ void delete_partition(struct hd_struct *part)
>  	device_del(part_to_dev(part));
>  
>  	/*
> -	 * Remove gendisk pointer from idr so that it cannot be looked up
> -	 * while RCU period before freeing gendisk is running to prevent
> -	 * use-after-free issues. Note that the device number stays
> -	 * "in-use" until we really free the gendisk.
> +	 * Remove the block device from the inode hash, so that it cannot be
> +	 * looked up while waiting for the RCU grace period.
>  	 */
> -	blk_invalidate_devt(part_devt(part));
> +	remove_inode_hash(part->bdev->bd_inode);

I don't think this is necessary now that the bdev and inode lifetimes are
one. Before, punching out the association early was necessary because we
could successfully look up a part from the idr and then try to pin the
associated disk, which may already have been freed. With the new code, the
lookup goes through the inode, whose lifetime is one and the same as the
gendisk's, so a use-after-free isn't possible and __blkdev_get() will
reliably reject such open attempts.
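To make the lifetime argument concrete, here is a toy model (illustrative
only, not kernel code; all names are made up): once the "bdev" is embedded
in its "inode", a single reference count covers both objects, so anything
returned by a lookup that pins the inode automatically pins the bdev too.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the unified lifetime described above: the bdev is
 * embedded in the inode, so one refcount protects both allocations. */
struct toy_bdev { int partno; };

struct toy_inode {
    int refcount;
    struct toy_bdev bdev;   /* embedded: same allocation, same lifetime */
};

struct toy_inode *toy_inode_alloc(int partno)
{
    struct toy_inode *inode = malloc(sizeof(*inode));
    if (!inode)
        return NULL;
    inode->refcount = 1;
    inode->bdev.partno = partno;
    return inode;
}

void toy_iget(struct toy_inode *inode)
{
    inode->refcount++;
}

/* Drop a reference; both objects are freed together on the last put. */
int toy_iput(struct toy_inode *inode)
{
    if (--inode->refcount == 0) {
        free(inode);
        return 1;   /* freed */
    }
    return 0;       /* still referenced; the embedded bdev is still safe */
}
```

Since a lookup hands back the inode with its refcount already elevated,
the embedded bdev can't disappear underneath the caller, which is the
analogue of why the early unhashing is no longer needed for correctness.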

...
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 4c4d6c30382c06..e94633dc6ad93b 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -870,34 +870,50 @@ void __init bdev_cache_init(void)
>  	blockdev_superblock = bd_mnt->mnt_sb;   /* For writeback */
>  }
>  
> -static struct block_device *bdget(dev_t dev)
> +struct block_device *bdev_alloc(struct gendisk *disk, u8 partno)
>  {
>  	struct block_device *bdev;
>  	struct inode *inode;
>  
> -	inode = iget_locked(blockdev_superblock, dev);
> +	inode = new_inode(blockdev_superblock);
>  	if (!inode)
>  		return NULL;
>  
> -	bdev = &BDEV_I(inode)->bdev;
> +	bdev = I_BDEV(inode);
> +	spin_lock_init(&bdev->bd_size_lock);
> +	bdev->bd_disk = disk;
> +	bdev->bd_partno = partno;
> +	bdev->bd_contains = NULL;
> +	bdev->bd_super = NULL;
> +	bdev->bd_inode = inode;
> +	bdev->bd_part_count = 0;
> +
> +	inode->i_mode = S_IFBLK;
> +	inode->i_rdev = 0;
> +	inode->i_bdev = bdev;
> +	inode->i_data.a_ops = &def_blk_aops;

Missing the call to mapping_set_gfp_mask().

>  
> -	if (inode->i_state & I_NEW) {
> -		spin_lock_init(&bdev->bd_size_lock);
> -		bdev->bd_contains = NULL;
> -		bdev->bd_super = NULL;
> -		bdev->bd_inode = inode;
> -		bdev->bd_part_count = 0;
> -		bdev->bd_dev = dev;
> -		inode->i_mode = S_IFBLK;
> -		inode->i_rdev = dev;
> -		inode->i_bdev = bdev;
> -		inode->i_data.a_ops = &def_blk_aops;
> -		mapping_set_gfp_mask(&inode->i_data, GFP_USER);
> -		unlock_new_inode(inode);
> -	}
>  	return bdev;
>  }
...
>  /**
>   * bdgrab -- Grab a reference to an already referenced block device
>   * @bdev:	Block device to grab a reference to.
> @@ -957,6 +973,10 @@ static struct block_device *bd_acquire(struct inode *inode)
>  		bd_forget(inode);
>  
>  	bdev = bdget(inode->i_rdev);
> +	if (!bdev) {
> +		blk_request_module(inode->i_rdev);
> +		bdev = bdget(inode->i_rdev);
> +	}

One side effect here is that, before, a device using the traditional
consecutive devt range would reserve all minors for its partitions whether
they exist or not, and would fail open requests without requesting the
matching module. After the change, trying to open a non-existent partition
triggers a module probe. I don't think the behavior change is consequential
in any sane, not-crazily-arcane setup, but it might be worth mentioning in
the commit log.
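The lookup/probe/retry flow being discussed can be sketched abstractly as
follows (a toy model with made-up names; the real code uses bdget() and
blk_request_module()):

```c
#include <stddef.h>

/* Sketch of the lookup/probe/retry pattern quoted above. The names
 * device_lookup/request_driver_module are stand-ins, not real APIs. */
typedef unsigned int dev_id;

static int driver_loaded;       /* toy state: is the "module" present? */
static dev_id registered_dev;   /* device the "module" provides */

static void *device_lookup(dev_id dev)
{
    /* A lookup only succeeds once a driver has registered the device. */
    if (driver_loaded && dev == registered_dev)
        return &registered_dev;   /* any non-NULL token will do */
    return NULL;
}

static void request_driver_module(dev_id dev)
{
    /* In this toy model, loading the module registers the device. */
    driver_loaded = 1;
    registered_dev = dev;
}

/* Try the lookup; on failure, probe the module and retry exactly once.
 * This is the side effect discussed above: the probe now fires even for
 * device numbers that turn out not to exist. */
void *device_open(dev_id dev)
{
    void *handle = device_lookup(dev);

    if (!handle) {
        request_driver_module(dev);
        handle = device_lookup(dev);
    }
    return handle;
}
```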

Thank you.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:03:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36623.68597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbjI-0005T6-RI; Tue, 24 Nov 2020 17:03:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36623.68597; Tue, 24 Nov 2020 17:03:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbjI-0005Sz-OL; Tue, 24 Nov 2020 17:03:48 +0000
Received: by outflank-mailman (input) for mailman id 36623;
 Tue, 24 Nov 2020 17:03:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nkWz=E6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khbjH-0005Su-F5
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:03:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e6a02dc-2c6f-4796-ab4b-3ec88003fed7;
 Tue, 24 Nov 2020 17:03:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C957FAC2D;
 Tue, 24 Nov 2020 17:03:45 +0000 (UTC)
X-Inumbo-ID: 7e6a02dc-2c6f-4796-ab4b-3ec88003fed7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606237425; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=BC94yTNxLr0ndaAF4GPuK/aP1erxAAwVav17KS4KiW0=;
	b=MG/r3YB6gkWnw/aMhJq/NNaHta/B46cqqXQXZ7XZxG2Nhkfp9bGBQet7mYUspKX6urCcrb
	H4k09lmCgeenggxoS/e9PRFzOV1Na13SFnnxJdQTcNgHKcLT6T4WRGcPjjyEzTz4KNH2Dq
	iwuCz39mVaBsf3MtwEHJKmTMlCWQAow=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] evtchn: double per-channel locking can't hit identical
 channels
Message-ID: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>
Date: Tue, 24 Nov 2020 18:03:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Inter-domain channels can't possibly be bound to themselves; there's
always a 2nd channel involved, even when this is a loopback into the
same domain. As a result we can drop one conditional from each of the
two involved functions.

With this, the number of evtchn_write_lock() call sites can also be
halved by swapping the two incoming function arguments instead.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -326,23 +326,18 @@ static long evtchn_alloc_unbound(evtchn_
 
 static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    if ( lchn <= rchn )
-    {
-        evtchn_write_lock(lchn);
-        if ( lchn != rchn )
-            evtchn_write_lock(rchn);
-    }
-    else
-    {
-        evtchn_write_lock(rchn);
-        evtchn_write_lock(lchn);
-    }
+    ASSERT(lchn != rchn);
+
+    if ( lchn > rchn )
+        SWAP(lchn, rchn);
+
+    evtchn_write_lock(lchn);
+    evtchn_write_lock(rchn);
 }
 
 static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    if ( lchn != rchn )
-        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(lchn);
     evtchn_write_unlock(rchn);
 }
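The locking discipline in this patch — canonicalise the pair by address,
then always take the lower-addressed lock first — is a standard
deadlock-avoidance pattern. A generic sketch with pthreads (illustrative
only, not Xen code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Generic double-lock in address order, mirroring double_evtchn_lock():
 * sorting the pointers first means any two threads locking the same pair
 * acquire the locks in the same order, so they cannot deadlock against
 * each other, regardless of which argument order the caller used. */
void double_lock(pthread_mutex_t *a, pthread_mutex_t *b)
{
    assert(a != b);   /* caller guarantees two distinct objects */

    if ((uintptr_t)a > (uintptr_t)b) {
        pthread_mutex_t *tmp = a;   /* canonicalise: lower address first */
        a = b;
        b = tmp;
    }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

/* Unlock order doesn't matter for correctness; mirror the Xen helper. */
void double_unlock(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

Note that because both branches of the old code locked in address order
anyway, the SWAP() version is behaviourally identical, just shorter.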
 


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:10:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:10:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36631.68609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbpd-0006Mh-J5; Tue, 24 Nov 2020 17:10:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36631.68609; Tue, 24 Nov 2020 17:10:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khbpd-0006Ma-Fs; Tue, 24 Nov 2020 17:10:21 +0000
Received: by outflank-mailman (input) for mailman id 36631;
 Tue, 24 Nov 2020 17:10:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuHM=E6=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khbpc-0006MV-LL
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:10:20 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.70]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09fd3226-0a6f-488c-a0b4-afe7658f3919;
 Tue, 24 Nov 2020 17:10:18 +0000 (UTC)
Received: from DU2PR04CA0150.eurprd04.prod.outlook.com (2603:10a6:10:231::35)
 by AM0PR08MB4003.eurprd08.prod.outlook.com (2603:10a6:208:12d::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.28; Tue, 24 Nov
 2020 17:10:16 +0000
Received: from DB5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:231:cafe::a5) by DU2PR04CA0150.outlook.office365.com
 (2603:10a6:10:231::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20 via Frontend
 Transport; Tue, 24 Nov 2020 17:10:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT012.mail.protection.outlook.com (10.152.20.161) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 17:10:15 +0000
Received: ("Tessian outbound d6c201accd3c:v71");
 Tue, 24 Nov 2020 17:10:15 +0000
Received: from 9b72420e912d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6505AB7C-E5F9-4D8C-96E0-EC17046061DB.1; 
 Tue, 24 Nov 2020 17:10:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9b72420e912d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 17:10:09 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4821.eurprd08.prod.outlook.com (2603:10a6:10:d5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Tue, 24 Nov
 2020 17:10:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 17:10:08 +0000
X-Inumbo-ID: 09fd3226-0a6f-488c-a0b4-afe7658f3919
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CsPdk1yx+SbUX7HP1AfNtmsQZPPkC07SXBqWKIiwM/o=;
 b=hP5ih48hwpzx+OLP77BTPxMEqwtcIKoK1/ftnpXUOvO345hQijQDdif2CYNyUFWIAcwvK/5/dT3jCwVJeZR98owAxYNREKpRF80L7K7XjWrMfdFkk5A6WN0bcfk4SamF62COn0L3ww2BPCAr64SClSAG9Y64RtfPK3iCSXkqBGE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5a138191909e597b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZNNMNGsikotKj0+HgIAK9j94ssStSFGmmcAq4JIAys1ARalbxRghADdmhqIiiUR2GF6Z14yEZeRlnBkYEumdCAPTZPApOYGEhiBrluJbbADu4IylbTfn9OppnVsDk9ftZRHFYyJ48joR789XrrGCZe/fVMR4BEsBz1lENLFUGgw1GSvoKKfuwjtpUGwPpPm2oxwQVX6hAI9EYkG4r6V/5GwK9bCaWAiXGNJEILDx8NrhvJjgLIq+JzS+ySyBDSe6IKjzLhmdp62RQde3fLqNE/SA/CxNf9v4idSAmlv5r0Q/xtxERUVrzEaXg+IidWqcPoZT/IX9oV3URfmjgO+zxQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CsPdk1yx+SbUX7HP1AfNtmsQZPPkC07SXBqWKIiwM/o=;
 b=G+0b09ZRBNBISL2DZqXB0zL9UkfqzqXqfHVW/ITVmfQDVYTmcwSYFEft+Ssib1wAHWXLIs9BK23eSCIbmb08pQdm8sLWm3sSBIxvGeNIE62FyDSUe6hurnn5oVDjn5ryeuKSDRgdxG67WB2xN0Zuq3ZNrCvGkIxbQqFWBiy2Ur1hC2oXwQ7i/3KQ+w/2nsgQpuuA86Vvtw68TkEIW7pPGqpk+6kwsYoSGvvyAtecwUDUPGx58/eS2m4QUS1wSs49igjbUhJy6/Kn16XlCH/IY75XeC3PJ9tXGLAeJLmS0A3/s40B/6kuFXHf2/9ZFFRigLEUGO6QHvEuv0uHdEQTpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>, Julien Grall
	<jgrall@amazon.com>
Subject: Re: [PATCH RFC 1/6] xen/arm: mm: Remove special case for CPU0 in
 dump_hyp_walk()
Thread-Topic: [PATCH RFC 1/6] xen/arm: mm: Remove special case for CPU0 in
 dump_hyp_walk()
Thread-Index: AQHWvqdPAjvY1ondm0yRVrKn/nv1bqnXjAKA
Date: Tue, 24 Nov 2020 17:10:07 +0000
Message-ID: <CE16D06B-B592-42C0-B524-525F0FA3B99C@arm.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-2-julien@xen.org>
In-Reply-To: <20201119190751.22345-2-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ed2e8682-51a0-426f-0e5d-08d8909bd018
x-ms-traffictypediagnostic: DBBPR08MB4821:|AM0PR08MB4003:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB400360442C2F4C7B2F84203D9DFB0@AM0PR08MB4003.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2958;OLM:2958;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <FC74A9D1C3615B4E891F5479A91E7752@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4821
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e1416fbf-2f2d-475c-9113-08d8909bcb73
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hrgOfK0zSftuXHQFnsKmYNisns5PHBkIsiL5IU+cGSzlVwqDOywtRKtnWpsMsdDV1dxmJhe0G3eohMCGc9Lqt/B9Rf95fWxqxznMm0tDrrkTmw1PVf1uvH7KsMU+Fj63//vVbY+R9W5tZURvHTuZ4Q97xXrq1anK5NFdyWw+eIF3YqSA/Sy7/ZC1F/rnAxOJ1ndhskFEKMIAQg7352/bwaq1wN/I9uchm5b7EIOpjKdxat4JjD/3p36kWKEJIVhIvfgScMXTqnqavIOCNJ5OqdWhqynH5odpwHUyC5IcHBaweIjBZRHjsMpHYfhNTtDQWLbSrwy63q+7cmI7+cKX1UFR0Yzl6e0Ve1Bl6ZEQFh+CsEyqInRFvrSgPwFWYF73fgIbOGMXlh6wjVflZu9xiId9UFJZfDwjxxVIGD59AX3UvMgKW/rb/qCq6BrJgkemfypM5R63e+vT809K0LwSbNAornGu/rkIWHpCCgNWH7j8Mt9XFwLxX8HQYGUKMMnn
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39850400004)(46966005)(8676002)(6862004)(107886003)(53546011)(316002)(5660300002)(82310400003)(2616005)(6506007)(966005)(54906003)(70586007)(47076004)(70206006)(82740400003)(6512007)(186003)(4326008)(86362001)(6486002)(26005)(83380400001)(33656002)(336012)(81166007)(8936002)(2906002)(356005)(36756003)(478600001)(6606295002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 17:10:15.8373
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ed2e8682-51a0-426f-0e5d-08d8909bd018
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4003

Hi Julien,


> On 19 Nov 2020, at 19:07, Julien Grall <julien@xen.org> wrote:
>
> From: Stefano Stabellini <sstabellini@kernel.org>
>
> There is no need to have a special case for CPU0 when converting the
> page-table virtual address into a physical address. The helper
> virt_to_maddr() is able to translate any address as long as the root
> page-tables are mapped in the virtual address space. This is the case
> for all the CPUs at the moment.
>
> So use the same BUG_ON() regardless of the CPU.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> [julien: Rework the commit message]
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

>
> ---
>
> I went back through the conversation in [1] regarding the issue when
> loading Xen below 2MB on Arm32. The example provided is wrong because to
> find the physical address, we need to add 'phys_offset', not subtract it.
>
> So I removed the comment saying the code was buggy.
>
> [1] https://marc.info/?l=xen-devel&m=157081398022401
> ---
> xen/arch/arm/mm.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 9c4b26bf079b..4dd886f7c80d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -284,10 +284,7 @@ void dump_hyp_walk(vaddr_t addr)
>            "on CPU%d via TTBR 0x%016"PRIx64"\n",
>            addr, smp_processor_id(), ttbr);
>
> -    if ( smp_processor_id() == 0 )
> -        BUG_ON( (lpae_t *)(unsigned long)(ttbr - phys_offset) != pgtable );
> -    else
> -        BUG_ON( virt_to_maddr(pgtable) != ttbr );
> +    BUG_ON( virt_to_maddr(pgtable) != ttbr );
>     dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
> }
>
> --
> 2.17.1
>



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:37:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36650.68622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcFb-0008HF-PQ; Tue, 24 Nov 2020 17:37:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36650.68622; Tue, 24 Nov 2020 17:37:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcFb-0008H8-M7; Tue, 24 Nov 2020 17:37:11 +0000
Received: by outflank-mailman (input) for mailman id 36650;
 Tue, 24 Nov 2020 17:37:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcFa-0008H3-QW
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:37:10 +0000
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bea408ea-7ccf-4717-889a-69e4a342fa7c;
 Tue, 24 Nov 2020 17:37:10 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id l7so10370225qtp.8
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:37:10 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id 137sm13319731qkj.109.2020.11.24.09.37.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:37:08 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khcFa-0008H3-QW
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:37:10 +0000
X-Inumbo-ID: bea408ea-7ccf-4717-889a-69e4a342fa7c
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bea408ea-7ccf-4717-889a-69e4a342fa7c;
	Tue, 24 Nov 2020 17:37:10 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id l7so10370225qtp.8
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:37:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=RUiSFwuqBx5Z5RkIMV/LAo3IZ2wMWa1pe8Bi01kcv9s=;
        b=I/ritBavAlEmXZYTdxsCZ3TKkeS+XbD2T5fSeuu+WAKjEs5fvGA2bMQkLo5MtRefva
         qUl8yUxkVj+bBWUkgYhVfKpDhiO82hph16AQ0vGX7y+wlrcH9EWYcgfJQ+ri5yzcT/jj
         ur3RBED4twIKVJJFDo/q0nmklKouT/Ot6KY5VSp33Mrn+/sIul4sQvPJjUYsLJ8PVoUi
         uTUrhLlViP7uzPXo9Qlf/lrWs/mwSY/bN6T+YIT4kadfNOZMSiRbHdqwLGtEwY2FNG+a
         TyAacskIalN0dBCzTDwuIzKGsDgNK5Z2Dd7niqvvN6G5G9OXmjSjKHmLzT22iYm6LIXb
         z2wA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=RUiSFwuqBx5Z5RkIMV/LAo3IZ2wMWa1pe8Bi01kcv9s=;
        b=daUtIgpBr7ZDZVQhyvUf30QydLeTr9imJ5cZYKLrv/8D04BI0cWLLCk1/b0bajNqW1
         5/wp/QsEmWG0dmCy4RxgXGcqLoRgt1y/wEw0VcaBfITZlGPG0G06ODAjliDkttt6ZiXx
         u8bftj7XhSBIPpSq2aNRua6rjRJKJSbO/sw9p5Iwd6AJ4HX33KvLDW4RIZOWgaNk4V/k
         W2s1G8L9bCwLHDTDYNDMri99xYfbcWPAfdNovMMC8kQmprHfyN3jcbidSXhdZziz+XhL
         nUbMCL8XhtuwSO+75dXbDeC+I+mNGWSULKEdTQbYaRk63MfXJBN41bvCTIv8TKCT4POO
         sfSw==
X-Gm-Message-State: AOAM533xJ3bZ1B6xMeBZ7Va0Na1jegrzn4asJzpJpMnf65aS7N4z0M18
	eOCVzDW4pRZBPzs3NJRc6GU=
X-Google-Smtp-Source: ABdhPJzns9py3D52IrNBrxEn0rk+MGg8bhauWe2r/6sVRsp+lUoI9EtR7DOBSR7jcCHuxmdt3gs4iQ==
X-Received: by 2002:ac8:4802:: with SMTP id g2mr5435506qtq.235.1606239429633;
        Tue, 24 Nov 2020 09:37:09 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id 137sm13319731qkj.109.2020.11.24.09.37.08
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:37:08 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:36:46 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 11/45] block: remove a duplicate __disk_get_part prototype
Message-ID: <X71ErqPWQu+CvXRI@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-12-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-12-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:17PM +0100, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:37:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:37:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36653.68634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcFz-0008NI-5X; Tue, 24 Nov 2020 17:37:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36653.68634; Tue, 24 Nov 2020 17:37:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcFz-0008NB-2G; Tue, 24 Nov 2020 17:37:35 +0000
Received: by outflank-mailman (input) for mailman id 36653;
 Tue, 24 Nov 2020 17:37:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcFx-0008Mt-Gw
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:37:33 +0000
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 747fdc6e-a7a4-4059-9226-8b6e5d5c4f04;
 Tue, 24 Nov 2020 17:37:32 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id n9so4712222qvp.5
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:37:32 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id k70sm13834520qke.46.2020.11.24.09.37.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:37:31 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khcFx-0008Mt-Gw
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:37:33 +0000
X-Inumbo-ID: 747fdc6e-a7a4-4059-9226-8b6e5d5c4f04
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 747fdc6e-a7a4-4059-9226-8b6e5d5c4f04;
	Tue, 24 Nov 2020 17:37:32 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id n9so4712222qvp.5
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:37:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=NMx3wGtNfY3eeaXC4utlwl41yJ4qPilA6BGetqYUorY=;
        b=ttWPoi7SbGNczr/1HSKQP13JwOnnq6d51NvgAXrXOVbFUHDENikis8wgWQ/ICO4vWD
         QXvXHNRrjPvcjtbRUNuU5kihbWk0IwxLYcoiLDd6H0sLjKI3e2qbtU5af/N7pFoOqAgC
         h3xBKCRJvq/QDtB6ihgsX92WTCe21la5HoXVw91eRC04tEZopSMLMnfWKE1wWDcnWaYf
         S2KXfW3yg0N/7LpsJ40fmTrf9EG2F6mG56V6rpZ8GyEnO0Ym+iYS/n4CBM7mxtRyxGrg
         uQLw/J+qnYSYcVKcO0TGKVFY752sD2m6oYXQvW4fux2BOd14a3jTxvr52rzBLO0HCBoZ
         teng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=NMx3wGtNfY3eeaXC4utlwl41yJ4qPilA6BGetqYUorY=;
        b=X/T5cX186uzlWtYQGGz4C6xycMaMn171WBzn68045o/balRttyztESbiKbSyf3M/sx
         T3Qm8ixwkEyxpBlW5TfvSvOl3guLV9tb8x5JJ8zuAi5C/uFsksIIMpV3IrkQ4WqF6o3U
         MDCZUdD+/8jJ+UQh1SxXz0QHp4CrWib3M3LkJXGBgVRz75cy8dwWuJInJbaahh2nvUuD
         VlQuBWDDz3605Ec/30R3udXwLwIhaN9tK+IwNyxuJfYnZX77tGBdCLHdZmsy7bHP1Osa
         +I16j0xh9tZCOXSSixJQmCBC8l2soKzsEAArfPMyiEDhIpOsv1dWTsuAIY3dqUPn8ydV
         KcOw==
X-Gm-Message-State: AOAM533HGxf+hQFL8E9D4ZYsPgxtSeDcmXwwdvwizJ8rUG02Ug/mrplY
	BvmBrp9xHXhnSjqUYjPbLL4=
X-Google-Smtp-Source: ABdhPJzX+Vh2Egxjq814YfOpUGajTbbALAhbpqH5gqFyQRlGy9nqyUOsvItkj0Gt+6yRJDRCbrF4QA==
X-Received: by 2002:a05:6214:4e5:: with SMTP id cl5mr5723077qvb.42.1606239452550;
        Tue, 24 Nov 2020 09:37:32 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id k70sm13834520qke.46.2020.11.24.09.37.31
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:37:31 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:37:09 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 12/45] block: remove a superflous check in blkpg_do_ioctl
Message-ID: <X71ExfXNm5IC7xMq@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-13-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-13-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:18PM +0100, Christoph Hellwig wrote:
> sector_t is now always a u64, so this check is not needed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:37:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:37:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36657.68646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcGE-0008Sw-FN; Tue, 24 Nov 2020 17:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36657.68646; Tue, 24 Nov 2020 17:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcGE-0008So-C9; Tue, 24 Nov 2020 17:37:50 +0000
Received: by outflank-mailman (input) for mailman id 36657;
 Tue, 24 Nov 2020 17:37:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcGC-0008Rv-BK
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:37:48 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69dc5fb7-6488-428d-82f7-0451d1f7c2eb;
 Tue, 24 Nov 2020 17:37:47 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id d9so21470549qke.8
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:37:47 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id s3sm13993894qtd.49.2020.11.24.09.37.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:37:46 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khcGC-0008Rv-BK
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:37:48 +0000
X-Inumbo-ID: 69dc5fb7-6488-428d-82f7-0451d1f7c2eb
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 69dc5fb7-6488-428d-82f7-0451d1f7c2eb;
	Tue, 24 Nov 2020 17:37:47 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id d9so21470549qke.8
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:37:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=wOTAYzxGcPu7PN/+w1SzvPIVnfCkyOLysxq7PCKNFLc=;
        b=S5nEHa589iEpXYy2XcA6lRL7T7kP7ehDm9Bouqy1TnodFVSNw+U8eEGfZTJYqfio3r
         aIIXt6yzwSLgwAXuhqGZj4J0YKEc8mQF148TodMgN6CoPRzSm84R92521SCDHI4hoxuk
         PrhKXiEWiiWgA90gCoz9PIlh8ZPWrsLSXPz4r2Y++1aYQUvVlK+d6BRd/FHjM9xPm2Hs
         6bb0egK3xiad8BkBdBGUWpDfzh4paEIzt947869TGDizd9+eyC5CHgWZWOK8SY4CO5vu
         4gCZeAvqJwOZ5TwMcQFDDPh62cYWKSsPb62iSf4+2HH9Z1OX9VZ9xWj9lq9Oe9yG0Mbo
         CB4Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=wOTAYzxGcPu7PN/+w1SzvPIVnfCkyOLysxq7PCKNFLc=;
        b=dSlBUw642o+Y4fH2KDGhHzLS8ywPoqSEOai2Pnmd3UeIV09TuSIrpE/8FbmlUlRtaW
         BFGCsCFvBOf8YQvY2uKG19vu8Cso7d9zKCCmzq92U93IFi/Urhq6SOiDLEnTmAnjtxPG
         vZTiemLvYL9BoPF6EWvYvwJodTgvXHUA5mZHYIrhGPMByyzfUokdu4A77iN4c6s3e2mk
         mgbxzFJba1lGNqiMoBJXdn+TIXtAVrVzzXCasiC/G9pugIUcREFnBYldVgbTHA1YgdqR
         gXVt6mMrFnsbNubIpuwHKT/NKErtqrVazJIFI0aae73iR12Hq5TutvILyvOTiTqtG/HK
         3Fsw==
X-Gm-Message-State: AOAM533VPmfZ5+yxHOXFs8X+uGbi+Ttl9llZFLqB3y5REiQ+m6vBmYvV
	Ih4p27HR1zgpMeUdSc9GexQ=
X-Google-Smtp-Source: ABdhPJyBD8adSd77vYvAq958zJMB+ltr44tGztKokoaCAzhoNqA4YXMUBiY2USQm4Mo3NyM012Iq6w==
X-Received: by 2002:a05:620a:11a4:: with SMTP id c4mr6030355qkk.8.1606239467443;
        Tue, 24 Nov 2020 09:37:47 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id s3sm13993894qtd.49.2020.11.24.09.37.46
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:37:46 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:37:23 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 13/45] block: add a bdev_kobj helper
Message-ID: <X71E05jyb3JdxRti@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-14-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-14-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:19PM +0100, Christoph Hellwig wrote:
> Add a little helper to find the kobject for a struct block_device.

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:38:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:38:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36667.68658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcGX-0000AV-Qb; Tue, 24 Nov 2020 17:38:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36667.68658; Tue, 24 Nov 2020 17:38:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcGX-0000AK-MW; Tue, 24 Nov 2020 17:38:09 +0000
Received: by outflank-mailman (input) for mailman id 36667;
 Tue, 24 Nov 2020 17:38:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcGW-00008A-Uq
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:38:08 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c563e17d-b237-4aa3-a9e2-0f68e9b899b4;
 Tue, 24 Nov 2020 17:38:05 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id d9so21472379qke.8
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:38:05 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id i4sm12549778qti.78.2020.11.24.09.38.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:38:04 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khcGW-00008A-Uq
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:38:08 +0000
X-Inumbo-ID: c563e17d-b237-4aa3-a9e2-0f68e9b899b4
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c563e17d-b237-4aa3-a9e2-0f68e9b899b4;
	Tue, 24 Nov 2020 17:38:05 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id d9so21472379qke.8
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:38:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=E5yu6Z9CIGvjjdSWaM1JDiBNwd7R7l70HZvgJTMOMAo=;
        b=bP/UrEJdRXtdgl6ivVFuNIsepdmdiobKpEaoQqAkqVcxh4oLhWjED34HU2CgSOcS0r
         u8nX3lWOcITugT41Gz5ogHCeG5HvCzBWSnlVt6HFT/XLqvXGgBQUlkC2CFe4rQZ6Jdgz
         Xlh808EEDi3JUpO3tYkyT+AsuwhV2f7GO2Y1rVQbOYvLQXrbYCZEOEXyaM/QNly3oWKd
         nMDmSodcBiZ4nkTVRK5MN+HlnZkSvenRe7tY//FCQ2HZJpqhNw22PuiG5Df4AdGG0TX6
         9RHtJd4eQvkljDEZVlwbW5qc2pplOC6tGot90vCLquU/Y2qjjEP022211N4bqrY0nZQk
         Cpng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=E5yu6Z9CIGvjjdSWaM1JDiBNwd7R7l70HZvgJTMOMAo=;
        b=jhroD8SSbR8NRIjoFVtpJ0qWuSUyq4sEwoMBAVlAXiVzEKuOYBOlEA4TQQ8MzUwCrE
         QmeOBq93P0WxMBUCu4JsYnj1vPr6qBv4nXd6l0xAUFQXcxB3rb0mLgFDn8rsRz7dn1Xd
         gGoClazOaRr+bxFt2Fj04KvYoY770kdrWK2Rue31QTVlXQlhbgbs6cFYYm3cYUHgHLRv
         dTUV3hUSA2qb01bdRsMjsZRMmxj3Nkm+lt+3royaiwt65lM6KopfAEOvuEQVQ1z2xyIp
         98ixnrxpTCweMZcFr9359woc3fd3ev8z2Zfbi8DaTi+wViZYznWbYiiExtgFQwfij3wJ
         k3Nw==
X-Gm-Message-State: AOAM530ANqlBcXXP5ZV9YhyxlGvODGAhSEpa4r5lAMzvDyxE5tYoVhdL
	RiWu9uzIWLeXY3Hz2XSYp3g=
X-Google-Smtp-Source: ABdhPJyi1T8dRX2wY3q65x2J+bQePuJjN+c8Pa1HrFeBbFFSDxXqbhzlU09MTDkUoIjmr+Nu4Id3fQ==
X-Received: by 2002:a37:7bc7:: with SMTP id w190mr5863735qkc.476.1606239484751;
        Tue, 24 Nov 2020 09:38:04 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id i4sm12549778qti.78.2020.11.24.09.38.03
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:38:04 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:37:42 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 14/45] block: use disk_part_iter_exit in
 disk_part_iter_next
Message-ID: <X71E5ohZ7I/tDKWO@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-15-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-15-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:20PM +0100, Christoph Hellwig wrote:
> Call disk_part_iter_exit in disk_part_iter_next instead of duplicating
> the functionality.

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:38:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:38:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36670.68670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcGm-0000IQ-36; Tue, 24 Nov 2020 17:38:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36670.68670; Tue, 24 Nov 2020 17:38:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcGl-0000IJ-W3; Tue, 24 Nov 2020 17:38:23 +0000
Received: by outflank-mailman (input) for mailman id 36670;
 Tue, 24 Nov 2020 17:38:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcGl-0000I0-5t
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:38:23 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3de8506-d005-4f14-9e38-daddd4b34fbe;
 Tue, 24 Nov 2020 17:38:22 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id l2so21559209qkf.0
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:38:22 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id x21sm13062842qkx.31.2020.11.24.09.38.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:38:21 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khcGl-0000I0-5t
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:38:23 +0000
X-Inumbo-ID: c3de8506-d005-4f14-9e38-daddd4b34fbe
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c3de8506-d005-4f14-9e38-daddd4b34fbe;
	Tue, 24 Nov 2020 17:38:22 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id l2so21559209qkf.0
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:38:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=m2ZvAloMQE/4x6qZGlu4klEd6I1OFmrdg9L02bbThZY=;
        b=mQ/2SdU+idAG7rNq1qMaVxb9CR+RcWt1BHKTZWsDVy8fWCHqwJhHCvkgRkFmG5gG6l
         63dI7pIc19199ylq2PWs338wpUAaIlVHv4bO05m/1Tk1mu08D8CqxB2E3PYFAkryMgfE
         H/1POZOqIABmkG6petf6pD/3aOMnkDRssdWhk0UsaRx2ckUhxiT3bRltu9ZmDeO6w6xs
         euBOjZvBw84KJ31lUkNXglzMy/nzFsEPPBt7GRVCqAEkecyLnn3pxEkW/JpPGKSWAdFb
         ar0JpCcxfg2HCjARPmkDqdzRoKlh3PeHrwbTzFIOvsccSM84XNZBR0TOpFWESYxO6GeG
         MgoA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=m2ZvAloMQE/4x6qZGlu4klEd6I1OFmrdg9L02bbThZY=;
        b=BobeQLSwQsDMS8ad/SowOU/SUyLKfRhFoCHg6vk0b0F+vEIobzF5PzNsftnn6MYLkx
         eyzNi5k9DI6/VQWKCfLd6jyUfzLYHLt2Jyr37yg4heouzhj7Hw25t7MV+5BrdfaHkct7
         12W1G55WVpPqGD0qOjaGCqs+XFoPcrOi9mydh5bP6kV6aGMjDTnRruga42f1P1BMa6Ja
         8FNCaBtLNsRrgTOtgNY1EmE0wcq8onilwwxV9uzK5/Qs1XfCATXlUdOCUwS+brPUk0RF
         ahoIqW2m6ZIZcpy/0yQc3pcc6HgE+BIVYS9KIvJ/uwC0avRLJ26BFkfVkdsGRoVmjj45
         Bqrw==
X-Gm-Message-State: AOAM532S/QhgYub9r+d0sJd8lWMQSGDFqXKsDq0Man19qvZNa138/OF2
	OhhGgurfzvfPrBVWWK7i0ho=
X-Google-Smtp-Source: ABdhPJydGVRMFQ1+r3+Z46PpDtXJ8+bSnv4nPK+7aFRRl1d4ZdhaX+JqGpjWBGRDvXsi5fkaK49s7A==
X-Received: by 2002:ae9:e80d:: with SMTP id a13mr6004436qkg.140.1606239502291;
        Tue, 24 Nov 2020 09:38:22 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id x21sm13062842qkx.31.2020.11.24.09.38.20
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:38:21 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:37:58 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 15/45] block: use put_device in put_disk
Message-ID: <X71E9mG4sw2WiIEj@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-16-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-16-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:21PM +0100, Christoph Hellwig wrote:
> Use put_device to put the device instead of poking into the internals
> and using kobject_put.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun
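[Editorial sketch] The patch above replaces a direct kobject_put() on the disk's embedded kobject with the device-level put_device() wrapper. The Python model below only illustrates that layering (a documented entry point instead of poking at internals); the class shapes are an assumption for illustration, not kernel code:

```python
# Toy model of the kobject/device layering: put_device() is the
# documented device-level API; calling kobject_put() on dev.kobj
# directly bypasses that layer. Names mirror the kernel's, but the
# refcounting model here is deliberately simplified.
class Kobject:
    def __init__(self, release):
        self.refcount = 1
        self.release = release

    def put(self):
        # kobject_put(): drop a reference, release on the last one.
        self.refcount -= 1
        if self.refcount == 0:
            self.release()

class Device:
    def __init__(self):
        self.released = False
        self.kobj = Kobject(release=self._release)

    def _release(self):
        self.released = True

def put_device(dev):
    # The wrapper the patch switches to, instead of dev.kobj.put().
    dev.kobj.put()

dev = Device()
put_device(dev)
assert dev.released
```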


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:38:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36681.68682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcH7-0000Qx-EF; Tue, 24 Nov 2020 17:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36681.68682; Tue, 24 Nov 2020 17:38:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcH7-0000Qp-9c; Tue, 24 Nov 2020 17:38:45 +0000
Received: by outflank-mailman (input) for mailman id 36681;
 Tue, 24 Nov 2020 17:38:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcH6-0000Ng-07
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:38:44 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id accd4014-829b-4220-bea6-1c47a85ed49f;
 Tue, 24 Nov 2020 17:38:39 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id i199so8038257qke.5
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:38:39 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id s7sm13183950qkm.124.2020.11.24.09.38.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:38:38 -0800 (PST)
X-Inumbo-ID: accd4014-829b-4220-bea6-1c47a85ed49f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=g4LY6zleEye3uEtWn8BHkQWiPOUg2vGvLOmcGW0MhEo=;
        b=b7LkmYjxPrHR9TH3nlGv8RR4yt2n5LXnfymO+2DyzzU25nyOq3KZcwcvGjHXs52rxS
         +2ZXS0GcpcERn/nD7GKubAB3+B2PcUxizzfKAr8S2K2+IhWzgsJmU4g5HL09fBRhhrfd
         O9DxlLMUUmPA8UQ1XMZvrQyWKKdOK7UwZVw5EW2mNWCI7GBjN/XmPhbk3TYX7y+MWxjO
         swjqQPApTPanWu4qovGXQq//Y6ENdGy9PdgI17eJIoJsi/+bV6r0YDHFS6xz5gpMJ5IK
         uIvwIkKQlKgUDmgdCif0kTpOaJ/NZcVJOzZhxhpZTTstgSJmyH2+hiJsrfoAgMEwi33G
         0McA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=g4LY6zleEye3uEtWn8BHkQWiPOUg2vGvLOmcGW0MhEo=;
        b=GvZG3Bw1dprskT1vpwBrbB+i/9m9k//AJhVZvvwK7Z6HG/nikJy/XbXxwJxttVmFIP
         sKrWoHc0x3HfC0IaHWmSznktO+GprxI/xuUk8w/xeLpG6wJ4QIdyHFvPJ6eU1iNb/PHQ
         k1anXlOb4L5VdpEyGIjPH4Ktp2b0posxKiNDDxhK+yeRCTfZBty+paPIZfoYFAIn49q1
         YPZ0ZPHLnizl225m8ewOrlZMvrueZ9AwYZOsNaj6+JJhKnTyH59+eaMtkfsgPlp8xvFo
         +F7//31t1hEFjxNf+BgkLn+AY6ZU4pU42P4Qz8vOu63CviYR1rXuPlJyWjR5MWWgWItG
         7liA==
X-Gm-Message-State: AOAM533EEUPQ+KDoQ3J0IOMTbyPsaTcV+S+vwyEDqIYYB1ADP+dPrFhm
	VSbBNywM/Nz6IkwjmY4cTvE=
X-Google-Smtp-Source: ABdhPJw43+aOSAPykQeYdeX4DmgWRqK+CIO4uPy4u98lyqJL8LkKiYWD+c6leseDOSmCa64pRPr2xg==
X-Received: by 2002:a37:e40b:: with SMTP id y11mr5925723qkf.29.1606239519064;
        Tue, 24 Nov 2020 09:38:39 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id s7sm13183950qkm.124.2020.11.24.09.38.38
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:38:38 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:38:16 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 16/45] block: change the hash used for looking up block
 devices
Message-ID: <X71FCKpLZLywTTT8@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-17-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-17-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:22PM +0100, Christoph Hellwig wrote:
> Adding the minor to the major creates tons of pointless conflicts. Just
> use the dev_t itself, which is 32-bits and thus is guaranteed to fit
> into ino_t.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun
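[Editorial sketch] The commit message above says adding the minor to the major "creates tons of pointless conflicts", while the packed dev_t is unique and fits in 32 bits. A minimal Python illustration of that collision argument (the 20-bit minor shift mirrors the kernel's MKDEV encoding; the specific major/minor pairs are just examples):

```python
# Linux packs dev_t as (major << 20) | minor. Summing major and minor
# (the old hash) makes distinct devices collide; the packed dev_t
# does not, and stays within 32 bits.
MINORBITS = 20

def mkdev(major, minor):
    return (major << MINORBITS) | minor

def old_hash(major, minor):   # old scheme: major + minor
    return major + minor

def new_hash(major, minor):   # new scheme: the dev_t itself
    return mkdev(major, minor)

# Two distinct devices collide under the old hash...
assert old_hash(8, 1) == old_hash(7, 2)
# ...but stay distinct under the packed dev_t, which fits in 32 bits.
assert new_hash(8, 1) != new_hash(7, 2)
assert new_hash(8, 1) < 2**32
```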


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:38:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:38:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36682.68694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcHL-0000Wp-O5; Tue, 24 Nov 2020 17:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36682.68694; Tue, 24 Nov 2020 17:38:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcHL-0000Wh-Jn; Tue, 24 Nov 2020 17:38:59 +0000
Received: by outflank-mailman (input) for mailman id 36682;
 Tue, 24 Nov 2020 17:38:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcHK-0000WD-5H
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:38:58 +0000
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7a55cf4-14fc-4845-a289-042abd0fa024;
 Tue, 24 Nov 2020 17:38:57 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id 9so6801825qvk.9
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:38:57 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id b3sm13138042qte.85.2020.11.24.09.38.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:38:56 -0800 (PST)
X-Inumbo-ID: a7a55cf4-14fc-4845-a289-042abd0fa024
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=Km12pVmBeWPgkRcOQAU5l84XavjRPJ4jqz28aKaNPFE=;
        b=qMRMh73o0YdpLAudiX8NPmCDY1bWM9OqQpg0RgHiyT+mlhnf0MZE5Czy8Qf5TqqFXj
         rj0mrD0ci08BBaP1QvOdQSlNYqt4OJVZLsc2rhNHZGmlnQEUH2P7sLZGRMA7ZcG7MS17
         yG8VIyxJ1lw+SyIO+IffHqOfCRSVn1pBmEJnkOVku1COq8LlNZb39TrX/RGHB52cYbib
         i+ykMchrdGixNcqG6wF1woJvO40gLVLsETP5ex6q4w5pbpO9xSAgvvJXfB6Yw0WWCqLF
         nDEPDCAPTAHfrqXDPn19OFx798DWMB961lVkvHrRd6Il2RPmsuogUvJSPGip8MEuWX9g
         QyWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=Km12pVmBeWPgkRcOQAU5l84XavjRPJ4jqz28aKaNPFE=;
        b=o+1XRVwduLMyv0VuJ7rqyxdoYjW4y9qcyjE59XvEDJn+MWe/kUUWZtUTZ6miPItMyx
         hpoW+AKgIOrcB2uPjTMOBzrfwZjt4MomdZrw3prLvoe9Qs+pbQQWcy8jqGPrws6hYZyb
         KWXazhXWwoO9Dgq1aO4df8ZRc7uSv93CtazElntNgUiSM3GX/uPbrqoMZsfYnIW/UjLn
         Y0MELWtoTbZrmmHV4vzoZkAY9xFP5ao0lnDRBdlIbYE8czADQ5v17ZDuRoTU8CfyLZM7
         U8eAhH28o3CWZL9gtMHnzCVx7dD0seq+5tWEACRfyhSlzgFBvCcRbw3fZOBLTT43td45
         uaEA==
X-Gm-Message-State: AOAM533LLRlJ6MgwzDxsd+DVFTyl3JmViFEF+Me9VeVkfywozeImH/JA
	uxmgKNX5pEySZytr23Oo9p4=
X-Google-Smtp-Source: ABdhPJx2viG2YUIcT9pxnUmh9SMKJ+iaGPuLa9sp1d1smXeKSZjMmg4Jo+Hs7YyRPHzhOtS6yXj/uA==
X-Received: by 2002:a0c:f7cc:: with SMTP id f12mr5949753qvo.0.1606239537123;
        Tue, 24 Nov 2020 09:38:57 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id b3sm13138042qte.85.2020.11.24.09.38.56
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:38:56 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:38:34 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 17/45] init: refactor name_to_dev_t
Message-ID: <X71FGgQtvXHTHU0V@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-18-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-18-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:23PM +0100, Christoph Hellwig wrote:
> Split each case into a self-contained helper, and move the block
> dependent code entirely under the pre-existing #ifdef CONFIG_BLOCK.
> This allows removing the blk_lookup_devt stub in genhd.h.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun
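[Editorial sketch] The refactor described above splits each name format into a self-contained helper dispatched from one entry point. The structural sketch below mirrors that shape in Python; devt_from_partuuid is a real helper from this series (see patch 18), while the other helper names and return values here are illustrative assumptions, and the "PARTUUID="/"PARTLABEL=" prefixes follow the kernel's root= naming syntax:

```python
# Structural sketch of the refactor: one helper per name format,
# with name_to_dev_t() reduced to a dispatcher. Return values are
# placeholders standing in for a resolved dev_t.
def devt_from_partuuid(uuid):
    return ("partuuid", uuid)       # resolve by partition UUID

def devt_from_partlabel(label):
    return ("partlabel", label)     # resolve by partition label

def devt_from_devname(name):
    return ("devname", name)        # resolve by device name/path

def name_to_dev_t(name):
    if name.startswith("PARTUUID="):
        return devt_from_partuuid(name[len("PARTUUID="):])
    if name.startswith("PARTLABEL="):
        return devt_from_partlabel(name[len("PARTLABEL="):])
    return devt_from_devname(name)

assert name_to_dev_t("PARTUUID=1234") == ("partuuid", "1234")
```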


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:39:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36694.68706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcHj-0000gW-Vu; Tue, 24 Nov 2020 17:39:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36694.68706; Tue, 24 Nov 2020 17:39:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcHj-0000gO-So; Tue, 24 Nov 2020 17:39:23 +0000
Received: by outflank-mailman (input) for mailman id 36694;
 Tue, 24 Nov 2020 17:39:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcHi-0000bK-HM
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:39:22 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 230b7e5b-5347-43d0-a0c4-c605e6225d22;
 Tue, 24 Nov 2020 17:39:15 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id l2so21564673qkf.0
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:39:15 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id o125sm13142963qke.56.2020.11.24.09.39.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:39:14 -0800 (PST)
X-Inumbo-ID: 230b7e5b-5347-43d0-a0c4-c605e6225d22
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=1gf+uZOc59XzmKEaDiZtFHdLCUgfXw7pOt3g9dTtDNI=;
        b=M4BrkAkexyVnkGUjwXdFjVYmVA9LVLj6ookNf5Ar0M5rW60FOFaco8NOC5V53XdKf3
         QJEfLqRZiaD+6aOYpsb7R05SzbB3BGYlmBRdin+nnS7Rl2/BZJIjCGem3ORrY6o5M9QB
         v9ss2oYhE9tLgFZfLE3LUECIa1myAFcsarnIGBk9Ar+eH0Bp+cdl7ZRZkZZx50CbTsHE
         K2VguikHFXg5qBSb0ELsCBeZfhaX/wrxj8/wztYcWOB/MjJmdrILJmWyfS//vV8R3Onf
         0IRpZT97EaMt3VXXj+9C8TX6D4SZ8wZliFCO/qY/oBmhfCgjCD7BmPqTXEJzXRSaVyYm
         sdFw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=1gf+uZOc59XzmKEaDiZtFHdLCUgfXw7pOt3g9dTtDNI=;
        b=E0mKGgEYeVJyrcMM9paX25EcismMipppn6x4Xma8suIW3ITolOs8K/wGVdyv/+Ep9s
         djtNuGZy7pWPa1izkLP1dwjwhaTvJvBkvG8IYFT9a8hp+3wyBGVdK3WXQqiHk307bFW/
         /slYD87yg8pFOatJERR3skZKruoDMQ9NCeCRjdCe2jbIugjBGwiWQqUb66fttTb/E80j
         +g7l5uCxnbfYPR/XtTKkrRh0aHbNODyX4rjkvyzsrXF43ipsj8mQZf8HzPH5v/1Oq+hB
         s1rpPypQ6B5qPOxMEOPFubK+DXOpoVA5Prwmd+jXPHNOgGcXpe99FOVRiBbQBPI/PVIZ
         UaVg==
X-Gm-Message-State: AOAM5313hurCb669JPPCOkoZYB9WeulKigR1kJoGn70t9HBzEuvF0EQa
	C4Qd/TYpbdg472HwwqaYshw=
X-Google-Smtp-Source: ABdhPJw/9uU36tM4rV+p5Q1FKg2O4RrQNewNe3FrRSf3D7AKvxn62XJtfvC4G0F74E53WmoixHdIOg==
X-Received: by 2002:a05:620a:569:: with SMTP id p9mr5395902qkp.119.1606239554812;
        Tue, 24 Nov 2020 09:39:14 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id o125sm13142963qke.56.2020.11.24.09.39.13
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:39:14 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:38:50 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 18/45] init: refactor devt_from_partuuid
Message-ID: <X71FKqT9KtO4zTvw@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-19-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-19-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:24PM +0100, Christoph Hellwig wrote:
> The code in devt_from_partuuid is very convoluted.  Refactor a bit by
> sanitizing the goto and variable name usage.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:39:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:39:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36696.68718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcHv-0000mA-D1; Tue, 24 Nov 2020 17:39:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36696.68718; Tue, 24 Nov 2020 17:39:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcHv-0000m1-8r; Tue, 24 Nov 2020 17:39:35 +0000
Received: by outflank-mailman (input) for mailman id 36696;
 Tue, 24 Nov 2020 17:39:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcHt-0000lN-Nx
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:39:33 +0000
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a8f5b26-f15a-4a27-bf36-8f8ba215fd15;
 Tue, 24 Nov 2020 17:39:33 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id u184so6910462qkf.3
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:39:33 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id e10sm13416860qkn.126.2020.11.24.09.39.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:39:32 -0800 (PST)
X-Inumbo-ID: 9a8f5b26-f15a-4a27-bf36-8f8ba215fd15
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=gccbgbDD7s/mdcXht2fL6iBUT8Prx6ozPAIi1oy24XY=;
        b=iAaQvjyW2JnxhLhsfg8rCUwKx68+n4VYv+x/cr19/l2gHjZ5rqQ2eHUQa56Yyrf+V0
         0EBO+8uJl3e7TpL8GOVdi3mPtP96tXMXp7if6adHqmGOPUC9KUkBCbpdOTkT7hALT+Ah
         7EHhHu8GvT+6D/0VCKlT2valDN6Ru1DvIbYtbLzR9+yA8jAAqgTW92ZCQEl3Pxl49Unt
         n/9PGjVy4f9zVoTzBvz1OlBcvmK214hyEQI6i4iF3fx4WQFt+rk4+OOUR5M42Vw6rBoM
         uLOJwtqDc9FyseTQMx+P4zZsyxqAY70M/tC6pBQFw6EbxPD3Q0UbNVnIzJCU6WGu3JSB
         P/Ww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=gccbgbDD7s/mdcXht2fL6iBUT8Prx6ozPAIi1oy24XY=;
        b=elzcoEXRuTdoayDeYBd+iQ2ZMOMaQih6O16bnx2Ty7Vrx5Nxhc27Neb261ZZxcqw47
         zvzigmZ0HybrRnOVFj6YFOlCPkOWnGcYNnZ4zWajRNcQjbWIPX+c2T0cdaB/n+XmuOKN
         YYxYfvGDbJQFSm/DovJEEM4M2ODB1/DbvHNRzJlvz9Qw8MTzUNtmu39De9UEWNwo0Xpm
         GdLxW4TGRzACX7ZIqBqdX31avBClUlDTLV1+QDKJIBz53tCUXzmK2M3HkFN0K4xX7sXD
         L1bjSS3DmuR5dkPV9vlIFunq/eykehFilCT5UFxukdD6C4/ZH/hHC+Rkk58RI10Yw779
         yMCg==
X-Gm-Message-State: AOAM533DUruITNcQc+6ZwFFyX+uu1hRNqIkrOHfLETqem91Vemb8cLSw
	794TipxFZc5QtB2I0NOHGzg=
X-Google-Smtp-Source: ABdhPJxIyQph6LZGahQWIRSjw01El8frTqGQNdMpj1StAekmqcFeaj8JjyjEpK9CN9Iflo08fhI4fA==
X-Received: by 2002:a05:620a:a1d:: with SMTP id i29mr5820661qka.466.1606239572817;
        Tue, 24 Nov 2020 09:39:32 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id e10sm13416860qkn.126.2020.11.24.09.39.31
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:39:32 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:39:10 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 19/45] init: cleanup match_dev_by_uuid and
 match_dev_by_label
Message-ID: <X71FPpCgw9t3JfIr@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-20-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-20-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:25PM +0100, Christoph Hellwig wrote:
> Avoid a totally pointless goto label, and use the same style of
> comparison for both helpers.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:41:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:41:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36714.68729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcJm-0001i5-P1; Tue, 24 Nov 2020 17:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36714.68729; Tue, 24 Nov 2020 17:41:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcJm-0001hy-M6; Tue, 24 Nov 2020 17:41:30 +0000
Received: by outflank-mailman (input) for mailman id 36714;
 Tue, 24 Nov 2020 17:41:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcJl-0001hs-58
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:41:29 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dad49d7-7816-41fd-8c7c-a87c31be0a37;
 Tue, 24 Nov 2020 17:41:28 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id u184so6922343qkf.3
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 09:41:28 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id y3sm7003885qkl.110.2020.11.24.09.41.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 09:41:27 -0800 (PST)
X-Inumbo-ID: 0dad49d7-7816-41fd-8c7c-a87c31be0a37
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=4ibQynBLG5sCe4CZqYbUkQUaNNDbglWF2wLslYnfqek=;
        b=jYivczMQqsxh7yyiSCMC+J2uq1jjrgFqCn4zGdT7y8l1tR+lBZj0zWSEvbd0W8jl8w
         Gi4Cj5Zx4gKHsuI0fIL4rp4709gHxtA9wRWySVLNwDnlcHxfBP3hG6ecjPkRXqH0UxZp
         xk0pqka3Sqe63I6EZOHEKoOfA/4WJiEaugJWCMjne3X9nHRMXe7fYlZPuOXRYu6gUE1t
         VJ+e81XD/b5aNX2a+wyW4GmbUwrk6CKBJ/6iNXNyc9y/IvYXsxz32um6sHRBVWQzYyqm
         IRQIL0vettN12q9xzjv+kftrmu3uKde1tlzjzfXundOzi243rFxdMC5NVy5kvvEXsk4Q
         MK1Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=4ibQynBLG5sCe4CZqYbUkQUaNNDbglWF2wLslYnfqek=;
        b=Fpo9vTLP9TYoJyn2u6xpMZOXj/vE2BF3eCBbpG67NywIcYbN5ERXLaFoD+wGtz3T3Y
         TyvuIabATMOGMC2g5/u5DLAUQAwl+vRvH0PmsjOtTIPJqTepqXOl7WMds9kUCjGnr6Uy
         O2HQnH98iHcpP/KemJD6IkAL2vwnk292GbIZCu3lCWjU0c2UHp3ETGPRX/wHvOuwyFSl
         YM4oavfRHIXEt7mbouu1pp0NkHkgGtR0YQwvwPZnv3w1ZUuUcGMEBCCPhK6JwLzxCpTE
         AbLWbkqR+mcydpDKhwvpFxLcqYR9V/kBdM+t1t2KeDq2YX9kqbWW5CmcCxCsJWaGNQrZ
         efeA==
X-Gm-Message-State: AOAM530BE2unpmDAuyK1lWHaIcuerqzaqOq0iehbv1l3jetjcbb9USQt
	4l+zdLxzX3ix896ks+4JQ6c=
X-Google-Smtp-Source: ABdhPJwbAXPcygkLKUFdmjuytzuRV0DzFXM5PwNCTshXeMizxUWtjVUL4UzMNPmTvdVp88+fUo1SSg==
X-Received: by 2002:a37:8c41:: with SMTP id o62mr5500995qkd.240.1606239688196;
        Tue, 24 Nov 2020 09:41:28 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id y3sm7003885qkl.110.2020.11.24.09.41.27
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 09:41:27 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 12:41:05 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 20/45] block: refactor __blkdev_put
Message-ID: <X71FsRDFtHWxVJOg@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-21-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-21-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:26PM +0100, Christoph Hellwig wrote:
> Reorder the code to have one big section for the last close, and to use
> bdev_is_partition.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:41:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:41:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36715.68742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcJs-0001kn-1w; Tue, 24 Nov 2020 17:41:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36715.68742; Tue, 24 Nov 2020 17:41:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcJr-0001ke-Ui; Tue, 24 Nov 2020 17:41:35 +0000
Received: by outflank-mailman (input) for mailman id 36715;
 Tue, 24 Nov 2020 17:41:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khcJq-0001k4-7X; Tue, 24 Nov 2020 17:41:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khcJp-0001zW-Sj; Tue, 24 Nov 2020 17:41:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khcJp-0006WT-Kl; Tue, 24 Nov 2020 17:41:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khcJp-0006rd-K5; Tue, 24 Nov 2020 17:41:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hxLPlB95N25IDinIDGAb03fvh+IEBi+JqbkVQe5sefI=; b=w6XvQucFvh71GuNU1kXX4apEH4
	d9Jdksq5JWtVJoq6XMcaCQ7WQqX+MkQF4ZHN6jnj5t/PrEyBpxHM77ZMQ54SVFgkk77Jsv0I/bVCz
	lnIaSI/arkjhwvKzCljcWhFBva6Rpa5EQE9U9VR424zEoOLc4pvT5bn7wBEKmiw+M9eY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156978-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156978: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=23895cbd82be95428e90168b12e925d0d3ca2f06
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 17:41:33 +0000

flight 156978 qemu-mainline real [real]
flight 156993 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156978/
http://logs.test-lab.xenproject.org/osstest/logs/156993/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 156993-retest

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                23895cbd82be95428e90168b12e925d0d3ca2f06
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   96 days
Failing since        152659  2020-08-21 14:07:39 Z   95 days  202 attempts
Testing same since   156978  2020-11-24 04:52:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 68859 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 17:45:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 17:45:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36743.68757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcN6-000235-Sj; Tue, 24 Nov 2020 17:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36743.68757; Tue, 24 Nov 2020 17:44:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcN6-00022y-PZ; Tue, 24 Nov 2020 17:44:56 +0000
Received: by outflank-mailman (input) for mailman id 36743;
 Tue, 24 Nov 2020 17:44:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PpH5=E6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khcN5-00022t-8e
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 17:44:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 190566a1-69a0-42fb-8912-0e1af1d56d81;
 Tue, 24 Nov 2020 17:44:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5A123206C0;
 Tue, 24 Nov 2020 17:44:52 +0000 (UTC)
X-Inumbo-ID: 190566a1-69a0-42fb-8912-0e1af1d56d81
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606239892;
	bh=1zh8h9tR1JDURCC6GdkhF4d3iDUmvRcYXH/D2Xf0Cao=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bmYnzzV6DeYm7XBOGQpQhSv5FSdX2pbBUrmZEKPyzUZgNE51kdiixFGcVrb33luPd
	 2Wd0iIsRgCKHELjWZRx3UOlxr3fJU6STF5zcYYejNVY5OS5cs8n0TXVZVQbzT7bOHz
	 vqJFT/L1TYYPAa5uOmigJlg0J54RLrG+159h0JDs=
Date: Tue, 24 Nov 2020 09:44:51 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum
 #1530923
In-Reply-To: <E5A460E5-7D10-4314-98B4-0D90CD173940@arm.com>
Message-ID: <alpine.DEB.2.21.2011240944400.7979@sstabellini-ThinkPad-T480s>
References: <61a105672650e7470710183f37351b821b818d1e.1606215998.git.bertrand.marquis@arm.com> <E5A460E5-7D10-4314-98B4-0D90CD173940@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 24 Nov 2020, Rahul Singh wrote:
> > On 24 Nov 2020, at 11:12 am, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> > 
> > On the Cortex-A55, TLB entries can be allocated by a speculative AT
> > instruction. If this happens during a guest context switch while the
> > guest's page table state is inconsistent, TLB entries with wrong values
> > might be allocated.
> > The ARM64_WORKAROUND_AT_SPECULATE workaround is used, as for erratum
> > 1165522 on Cortex-A76 or Neoverse N1.
> > 
> > This change also introduces the MIDR identifier for the Cortex-A55.
> > 
> > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> > docs/misc/arm/silicon-errata.txt | 1 +
> > xen/arch/arm/cpuerrata.c         | 6 ++++++
> > xen/include/asm-arm/processor.h  | 2 ++
> > 3 files changed, 9 insertions(+)
> > 
> > diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> > index d183ba543f..27bf957ebf 100644
> > --- a/docs/misc/arm/silicon-errata.txt
> > +++ b/docs/misc/arm/silicon-errata.txt
> > @@ -45,6 +45,7 @@ stable hypervisors.
> > | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319    |
> > | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069    |
> > | ARM            | Cortex-A53      | #819472         | ARM64_ERRATUM_819472    |
> > +| ARM            | Cortex-A55      | #1530923        | N/A                     |
> > | ARM            | Cortex-A57      | #852523         | N/A                     |
> > | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
> > | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
> > diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> > index cb4795beec..b398d480f1 100644
> > --- a/xen/arch/arm/cpuerrata.c
> > +++ b/xen/arch/arm/cpuerrata.c
> > @@ -514,6 +514,12 @@ static const struct arm_cpu_capabilities arm_errata[] = {
> >         .capability = ARM64_WORKAROUND_AT_SPECULATE,
> >         MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
> >     },
> > +    {
> > +        /* Cortex-A55 (All versions as erratum is open in SDEN v14) */
> > +        .desc = "ARM erratum 1530923",
> > +        .capability = ARM64_WORKAROUND_AT_SPECULATE,
> > +        MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
> > +    },
> >     {},
> > };
> > 
> > diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> > index d3d12a9d19..87c8136022 100644
> > --- a/xen/include/asm-arm/processor.h
> > +++ b/xen/include/asm-arm/processor.h
> > @@ -53,6 +53,7 @@
> > #define ARM_CPU_PART_CORTEX_A17     0xC0E
> > #define ARM_CPU_PART_CORTEX_A15     0xC0F
> > #define ARM_CPU_PART_CORTEX_A53     0xD03
> > +#define ARM_CPU_PART_CORTEX_A55     0xD05
> > #define ARM_CPU_PART_CORTEX_A57     0xD07
> > #define ARM_CPU_PART_CORTEX_A72     0xD08
> > #define ARM_CPU_PART_CORTEX_A73     0xD09
> > @@ -64,6 +65,7 @@
> > #define MIDR_CORTEX_A17 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A17)
> > #define MIDR_CORTEX_A15 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A15)
> > #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
> > +#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
> > #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
> > #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
> > #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
> > -- 
> > 2.17.1
> > 
> > 
> 


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 18:04:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 18:04:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36750.68769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcfb-0003xT-Ik; Tue, 24 Nov 2020 18:04:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36750.68769; Tue, 24 Nov 2020 18:04:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcfb-0003xM-Ee; Tue, 24 Nov 2020 18:04:03 +0000
Received: by outflank-mailman (input) for mailman id 36750;
 Tue, 24 Nov 2020 18:04:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khcfZ-0003xH-CN
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 18:04:01 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43030d14-aac1-4507-80bd-825c0fab2e05;
 Tue, 24 Nov 2020 18:04:00 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id h20so3011964qkk.4
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 10:04:00 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id c14sm13621716qko.29.2020.11.24.10.03.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 10:03:59 -0800 (PST)
X-Inumbo-ID: 43030d14-aac1-4507-80bd-825c0fab2e05
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=8qZ2mbF42a4EoCPjpueBEzfe9XOwF2IptTReGgUwObs=;
        b=is4ScJMtRgB8A67gco8/ShkniDzY10n7rMewX714/9E8uwhLChpIiLuhS9NYdAwKZO
         wwPFFE1umeJEQtu7KDxbrmdClPTpQcYjr1RpDpjnuZBK1iq0PQfGRfm19X+XI49NQayK
         8lAFgH0g1BclA/vf2p2mEb1VckBHMRHQuZY2MAZGNuGYbRuKC1k7RL8V45KondL8dAgu
         pkggwpY/K/3AGEFSp3YuAHk/ID2wYfhEWzIb5TQJq39NiNddKNAmZTO9FhKaayKgBddH
         Z4wiCaTnVN8dlV2LZgcg+T7QWmffCA0edb1cuImXXMdzH67nJfiqgEY4ncID7PjQAYKh
         /ZGg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=8qZ2mbF42a4EoCPjpueBEzfe9XOwF2IptTReGgUwObs=;
        b=RS/K55POrmjxsS0uHRZuDr1UHl2JMZxC4D2tjV3o+D0wjmD5EujPTbeG1fRIom9Tzc
         O/owSkBn+THWf8paMtYPhNt++swOPhO2bkv5DAATUVhxcq6E1YDTaxVk8OR/c5xPPDaI
         v+5+Kmj4ChKhBQOaAN3xL8eRl+tB5szDmmy6frN/l8he3sqQjZV8zBncLtdq5U1ttzvY
         mPw1iEn5qeHxkcybnOXcuo1j1RAIjVAwu6XleHobcY/2tandS/oIC0z82dhiChGfL2ta
         eAuM4JnXQQAKpMgqrj5oe0niPtNNE0RFhUjRxD+nrUvf3YnbNQfKblfolRTjYgQQYDRO
         jCsA==
X-Gm-Message-State: AOAM532YCt52520yEzQLV4eABGRUQroDcNmq9WK37GrsoK2cPkJHfqpd
	9bw00r5GLr9rQpgqFZhK9Mw=
X-Google-Smtp-Source: ABdhPJypHMVf10j8dqLbG3/kMc2wzxScN+9Rw3Fm44OZCed30V+ovAuzBFJQGc76NiRl6CUPNsyaww==
X-Received: by 2002:ae9:e007:: with SMTP id m7mr5836885qkk.416.1606241040110;
        Tue, 24 Nov 2020 10:04:00 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 13:03:36 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 21/45] block: refactor blkdev_get
Message-ID: <X71K+JS+xnGs+EPF@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-22-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-22-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:27PM +0100, Christoph Hellwig wrote:
> Move more of the code that runs only on the outer open, but not on the
> open of the underlying whole device when opening a partition, into
> blkdev_get, which leads to a much easier-to-follow structure.
> 
> This allows simplifying the disk and module refcounting so that one
> reference is held for each open, similar to what we do with normal
> file operations.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 18:06:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 18:06:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36758.68780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khchi-00045h-Ub; Tue, 24 Nov 2020 18:06:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36758.68780; Tue, 24 Nov 2020 18:06:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khchi-00045a-RP; Tue, 24 Nov 2020 18:06:14 +0000
Received: by outflank-mailman (input) for mailman id 36758;
 Tue, 24 Nov 2020 18:06:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khchi-00045V-De
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 18:06:14 +0000
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bde8787d-613b-4136-afb9-58706ad8e155;
 Tue, 24 Nov 2020 18:06:13 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id p12so16728144qtp.7
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 10:06:13 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id s7sm13264706qkm.124.2020.11.24.10.06.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 10:06:12 -0800 (PST)
X-Inumbo-ID: bde8787d-613b-4136-afb9-58706ad8e155
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=XGgWvl0Napb7ckr1b5Je/XF/vo+wq/s703ULFNCgBaI=;
        b=mFcTt0MgjM9IqQp5CjNn460f80WY5TgOpPJ6Ay4ntldAJKb6Rjl0HTJIZ+yFn5diNg
         Y9XgufAijyGSVMo3oa5mbzUjsTfdGF4klUksliDQRN3qDG0d8QMDnfA4E5LuG2LRxix9
         hartKdsjUyOnRZk1OwfqKdcIsVXRFfJDauC2DAxEf1EDFmBabgsDa0m7UqLxfYY5bZRj
         zYqj7h62Xy2JH5Y5YERtz3Jqa3G8xCCNygCgwbvWWscHLpe2a2ipXU92XYpjwaNY0VJ2
         iVk3YMDEtvOraBt/erbOI6z6NYw/s24zXpCk/3X14GTWVBM8KYbSPmvWGmoSgwz5tNpw
         HXSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=XGgWvl0Napb7ckr1b5Je/XF/vo+wq/s703ULFNCgBaI=;
        b=JgNbGZUy8GCrwGkV8nBo91HNNzKMtNDXbfa+JnOg3ZwX4R9BrylWBtLT9O+0/VXdvr
         bCfE4EKhPg9nQ85bf/AowcaFJA0ymTx8V0VLhwEgBKSuaY+CBndu1Vg2ryg/FV6Ysj62
         SuGszdT+HGuy96sKsViD/PpTVwCAHw7ZBf2Ly2on2ZN+9vQpHqWCJHbbv6CH4kSEdQ6m
         xnnCA/pFtJXpsM6loxgMJ5w3ZR5ZhD4OKcqT1fM/tozDczP1lFyrCD63d4xzhm3ChJ8/
         XV7wETA51MG5QrScedXB29/SCImE2WWyAXeQlgd1YPmNHyVhsEf6G6vP25stXu8b6Yxv
         BxhA==
X-Gm-Message-State: AOAM532FDhkokbwYi4ffcwDdff2rPXrooNt/gJqKxOmKhODCDv36Z7VU
	SDUBBUM2J9W082qPo+ATR9E=
X-Google-Smtp-Source: ABdhPJy4KzGqVN9OifimW7Bbj0BtBOYaib7uEK7t2ko3uV9boQAgproJwZycxdIrJmhhR00FFdaCLQ==
X-Received: by 2002:ac8:58c7:: with SMTP id u7mr5558192qta.54.1606241173470;
        Tue, 24 Nov 2020 10:06:13 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 13:05:50 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 22/45] block: opencode devcgroup_inode_permission
Message-ID: <X71LfgUr2lIKqDx+@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-23-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-23-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:28PM +0100, Christoph Hellwig wrote:
> Just call devcgroup_check_permission to avoid various superfluous checks
> and a double conversion of the access flags.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 18:14:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 18:14:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36768.68796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcpB-000553-QM; Tue, 24 Nov 2020 18:13:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36768.68796; Tue, 24 Nov 2020 18:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khcpB-00054w-ND; Tue, 24 Nov 2020 18:13:57 +0000
Received: by outflank-mailman (input) for mailman id 36768;
 Tue, 24 Nov 2020 18:13:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuHM=E6=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khcpA-00054r-K0
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 18:13:56 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e16f5e5c-fd8e-49f3-b65d-87c05e538f6a;
 Tue, 24 Nov 2020 18:13:55 +0000 (UTC)
Received: from AM6P191CA0004.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::17)
 by HE1PR0802MB2153.eurprd08.prod.outlook.com (2603:10a6:3:c2::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.28; Tue, 24 Nov
 2020 18:13:52 +0000
Received: from VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::8b) by AM6P191CA0004.outlook.office365.com
 (2603:10a6:209:8b::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Tue, 24 Nov 2020 18:13:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT056.mail.protection.outlook.com (10.152.19.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Tue, 24 Nov 2020 18:13:50 +0000
Received: ("Tessian outbound 39167997cde8:v71");
 Tue, 24 Nov 2020 18:13:50 +0000
Received: from e99faebd3ed0.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F06836F8-0498-4F8A-AA86-CF71AF11E4B6.1; 
 Tue, 24 Nov 2020 18:13:44 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e99faebd3ed0.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 24 Nov 2020 18:13:44 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2565.eurprd08.prod.outlook.com (2603:10a6:4:a1::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Tue, 24 Nov
 2020 18:13:41 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Tue, 24 Nov 2020
 18:13:41 +0000
X-Inumbo-ID: e16f5e5c-fd8e-49f3-b65d-87c05e538f6a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Mo49gPnktvMoGOWA9mm8DBmQwJW+1slbJowKXFbEobw=;
 b=38yk/7WulUCtrcBoR7OcQXM72Y/pvrzcnpiN7UUg8v/dfHyzgiCWsBCJ1yOQIPvwL/lftQrKExO7KwBSYW6UyaUp31Ckcn8ohWEQTbsn6ejSTrhviyyHO4keF9IAcwtdUUAStHRGZtAVrjqQloWWJYrjFtnWx7OHFjcgarwVv8Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2eb8af3956ac4211
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VXoio1DngWVoWFZ6138Fc59yUQ1+DSfq7dI5Q1/BEAIr1t1UUvE81n0sXMVvd6ww3gqUj1jpNHopGWvw6H1RtYVhrd7gPUiw4vdPshP3WFjDGjnC4810o0may2CYADjZ7QI1C1j/OX4YRxAsoLrk2YF2EX3qYmqwgtL+wuTzAdxkI4ohHaqQsUOGhaC/AGMuY4nYW3cPoOkV9Fc1sme/CGE7+OcL5C6Nv6QdwG2CS1QQNiDRZN72irCvGarL5i8j1EyuXe8GiMxr0X2ywCB4kSTOxAS0+/XdJMZsjvmdR5JDoBrrUsKRyfA9BP0H5F0YHmPMqZJg0gsV3aPmYTbSvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Mo49gPnktvMoGOWA9mm8DBmQwJW+1slbJowKXFbEobw=;
 b=MM+vGCy9lox1eypCxaKc7IDG7h4T+Ljw4MPI9IJwskMLBbytYq/dsabB/B2WKWRCWR6rZRmLHQDOpNYIcTxxgnUGrg09taShyMHyjYrIw/t4BWGEalf64iSq+s0jlEgasGuWAhEKIgpIRHe0pdla9cVr/gWmseLXST8LwFKm9FS3d6AV0gQWRmfLiEx6dnjld7FEO6PFb2nlQzFw3wiKaOrYpi64xMhaAAR6vi8sGVATyh5LGXDuqnhlp4tenURZWvzSIaMwQz8hXBJeM7jd9HPToSSFvO4pny8DB4xfsYQt4E8PJ7w/iMR8ljqjsa80vJttZAo8KXuErlKwa0iIsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Julien Grall
	<Julien.Grall@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
Thread-Topic: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
Thread-Index: AQHWvqdUIaSB2fElrE+qcehs9C8Fz6nXncMA
Date: Tue, 24 Nov 2020 18:13:41 +0000
Message-ID: <9F95F565-8D59-400B-9F15-9ABA0B1FB7FC@arm.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-5-julien@xen.org>
In-Reply-To: <20201119190751.22345-5-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 0b0f87b1-963b-4d21-b0fc-08d890a4b1d0
x-ms-traffictypediagnostic: DB6PR0802MB2565:|HE1PR0802MB2153:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB2153FC921D0882C60B31B7D99DFB0@HE1PR0802MB2153.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <9F8C37525B6D6345954DCD26514125DD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2565
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	10fbd648-94eb-4d79-b088-08d890a4ac79
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Nov 2020 18:13:50.2704
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b0f87b1-963b-4d21-b0fc-08d890a4b1d0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2153

Hi Julien,

> On 19 Nov 2020, at 19:07, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment, xen_pt_update_entry() only supports mapping at level 3
> (i.e 4KB mapping). While this is fine for most of the runtime helper,
> the boot code will require to use superpage mapping.
> 
> We don't want to allow superpage mapping by default as some of the
> callers may expect small mappings (i.e populate_pt_range()) or even
> expect to unmap only a part of a superpage.
> 
> To keep the code simple, a new flag _PAGE_BLOCK is introduced to
> allow the caller to enable superpage mapping.
> 
> As the code doesn't support all the combinations, xen_pt_check_entry()
> is extended to take into account the cases we don't support when
> using block mapping:
>    - Replacing a table with a mapping. This may happen if region was
>    first mapped with 4KB mapping and then later on replaced with a 2MB
>    (or 1GB mapping)
>    - Removing/modify a table. This may happen if a caller try to remove a
>    region with _PAGE_BLOCK set when it was created without it
> 
> Note that the current restriction mean that the caller must ensure that
> _PAGE_BLOCK is consistently set/cleared across all the updates on a
> given virtual region. This ought to be fine with the expected use-cases.
> 
> More rework will be necessary if we wanted to remove the restrictions.
> 
> Note that nr_mfns is now marked const as it is used for flushing the
> TLBs and we don't want it to be modified.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> 

First, I did test the series on Arm and so far it was working properly.

I only have some remarks because, even if the code is right, I think
some parts of the code are not easy to read...

> ---
> 
> This patch is necessary for upcoming changes in the MM code. I would
> like to remove most of the open-coding update of the page-tables as they
> are not easy to properly fix/extend. For instance, always mapping
> xenheap mapping with 1GB superpage is plain wrong because:
>    - RAM regions are not always 1GB aligned (such as on RPI 4) and we
>    may end up to map MMIO with cacheable attributes
>    - RAM may contain reserved regions should either not be mapped
> ---
> xen/arch/arm/mm.c          | 87 +++++++++++++++++++++++++++++++--------
> xen/include/asm-arm/page.h |  4 ++
> 2 files changed, 73 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 59f8a3f15fd1..af0f12b6e6d3 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1060,9 +1060,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
> }
> 
> /* Sanity check of the entry */
> -static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
> +static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
> +                               unsigned int flags)
> {
> -    /* Sanity check when modifying a page. */
> +    /* Sanity check when modifying an entry. */
>     if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
>     {
>         /* We don't allow modifying an invalid entry. */
> @@ -1072,6 +1073,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>             return false;
>         }
> 
> +        /* We don't allow modifying a table entry */
> +        if ( !lpae_is_mapping(entry, level) )
> +        {
> +            mm_printk("Modifying a table entry is not allowed.\n");
> +            return false;
> +        }
> +
>         /* We don't allow changing memory attributes. */
>         if ( entry.pt.ai != PAGE_AI_MASK(flags) )
>         {
> @@ -1087,7 +1095,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>             return false;
>         }
>     }
> -    /* Sanity check when inserting a page */
> +    /* Sanity check when inserting a mapping */
>     else if ( flags & _PAGE_PRESENT )
>     {
>         /* We should be here with a valid MFN. */
> @@ -1096,18 +1104,28 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>         /* We don't allow replacing any valid entry. */
>         if ( lpae_is_valid(entry) )
>         {
> -            mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> -                      mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            if ( lpae_is_mapping(entry, level) )
> +                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> +                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            else
> +                mm_printk("Trying to replace a table with a mapping.\n");
>             return false;
>         }
>     }
> -    /* Sanity check when removing a page. */
> +    /* Sanity check when removing a mapping. */
>     else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
>     {
>         /* We should be here with an invalid MFN. */
>         ASSERT(mfn_eq(mfn, INVALID_MFN));
> 
> -        /* We don't allow removing page with contiguous bit set. */
> +        /* We don't allow removing a table */
> +        if ( lpae_is_table(entry, level) )
> +        {
> +            mm_printk("Removing a table is not allowed.\n");
> +            return false;
> +        }
> +
> +        /* We don't allow removing a mapping with contiguous bit set. */
>         if ( entry.pt.contig )
>         {
>             mm_printk("Removing entry with contiguous bit set is not allowed.\n");
> @@ -1126,12 +1144,12 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
> }
> 
> static int xen_pt_update_entry(mfn_t root, unsigned long virt,
> -                               mfn_t mfn, unsigned int flags)
> +                               mfn_t mfn, unsigned int page_order,
> +                               unsigned int flags)
> {
>     int rc;
>     unsigned int level;
> -    /* We only support 4KB mapping (i.e level 3) for now */
> -    unsigned int target = 3;
> +    unsigned int target = 3 - (page_order / LPAE_SHIFT);

This is not really straightforward and it would be good to actually explain the computation here or ...

>     lpae_t *table;
>     /*
>      * The intermediate page tables are read-only when the MFN is not valid
> @@ -1186,7 +1204,7 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>     entry = table + offsets[level];
> 
>     rc = -EINVAL;
> -    if ( !xen_pt_check_entry(*entry, mfn, flags) )
> +    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
>         goto out;
> 
>     /* If we are only populating page-table, then we are done. */
> @@ -1204,8 +1222,11 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>         {
>             pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
> 
> -            /* Third level entries set pte.pt.table = 1 */
> -            pte.pt.table = 1;
> +            /*
> +             * First and second level pages set pte.pt.table = 0, but
> +             * third level entries set pte.pt.table = 1.
> +             */
> +            pte.pt.table = (level == 3);
>         }
>         else /* We are updating the permission => Copy the current pte. */
>             pte = *entry;
> @@ -1229,11 +1250,12 @@ static DEFINE_SPINLOCK(xen_pt_lock);
> 
> static int xen_pt_update(unsigned long virt,
>                          mfn_t mfn,
> -                         unsigned long nr_mfns,
> +                         const unsigned long nr_mfns,
>                          unsigned int flags)
> {
>     int rc = 0;
> -    unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
> +    unsigned long vfn = paddr_to_pfn(virt);
> +    unsigned long left = nr_mfns;
> 
>     /*
>      * For arm32, page-tables are different on each CPUs. Yet, they share
> @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
> 
>     spin_lock(&xen_pt_lock);
> 
> -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> +    while ( left )
>     {
> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> +        unsigned int order;
> +        unsigned long mask;
> +
> +        /*
> +         * Don't take into account the MFN when removing mapping (i.e
> +         * MFN_INVALID) to calculate the correct target order.
> +         *
> +         * XXX: Support superpage mappings if nr is not aligned to a
> +         * superpage size.
> +         */
> +        mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
> +        mask |= vfn | left;
> +
> +        /*
> +         * Always use level 3 mapping unless the caller request block
> +         * mapping.
> +         */
> +        if ( likely(!(flags & _PAGE_BLOCK)) )
> +            order = THIRD_ORDER;
> +        else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) )
> +            order = FIRST_ORDER;
> +        else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) )
> +            order = SECOND_ORDER;
> +        else
> +            order = THIRD_ORDER;
> +
> +        rc = xen_pt_update_entry(root, pfn_to_paddr(vfn), mfn, order, flags);

Maybe it would be easier here to pass directly the target instead of the page order.

>         if ( rc )
>             break;
> 
> +        vfn += 1U << order;
>         if ( !mfn_eq(mfn, INVALID_MFN) )
> -            mfn = mfn_add(mfn, 1);
> +            mfn = mfn_add(mfn, 1U << order);
> +
> +        left -= (1U << order);
>     }
> 
>     /*
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 4ea8e97247c8..de096b0968e3 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -79,6 +79,7 @@
>  * [3:4] Permission flags
>  * [5]   Page present
>  * [6]   Only populate page tables
> + * [7]   Use any level mapping only (i.e. superpages is allowed)

The comment for the bit is not really logical: "any level mapping only".
Wouldn't it be clearer to name the bit _PAGE_SUPERPAGE_BIT and
comment it by saying that superpages are allowed?

Regards
Bertrand

>  */
> #define PAGE_AI_MASK(x) ((x) & 0x7U)
> 
> @@ -92,6 +93,9 @@
> #define _PAGE_PRESENT    (1U << 5)
> #define _PAGE_POPULATE   (1U << 6)
> 
> +#define _PAGE_BLOCK_BIT     7
> +#define _PAGE_BLOCK         (1U << _PAGE_BLOCK_BIT)
> +
> /*
>  * _PAGE_DEVICE and _PAGE_NORMAL are convenience defines. They are not
>  * meant to be used outside of this header.
> -- 
> 2.17.1
> 
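[Editorial sketch] The patch computes the target page-table level as `target = 3 - (page_order / LPAE_SHIFT)`, which the review flags as non-obvious. With a 4KB granule each LPAE level resolves 9 address bits, so order 0 (4KB) targets level 3, order 9 (2MB) targets level 2 and order 18 (1GB) targets level 1. A minimal standalone model of that arithmetic (constants mirror Xen's arm page-table code; the 4KB-granule size comments are assumptions):

```c
#include <assert.h>

#define LPAE_SHIFT    9                  /* address bits resolved per level (4KB granule) */
#define THIRD_ORDER   (0 * LPAE_SHIFT)   /* 4KB page   -> level 3 */
#define SECOND_ORDER  (1 * LPAE_SHIFT)   /* 2MB block  -> level 2 */
#define FIRST_ORDER   (2 * LPAE_SHIFT)   /* 1GB block  -> level 1 */

/*
 * Same arithmetic as the patch: every LPAE_SHIFT step up in page
 * order moves the mapping one level closer to the root table.
 */
static unsigned int order_to_level(unsigned int page_order)
{
    return 3 - (page_order / LPAE_SHIFT);
}
```

This is only a model of the computation under discussion, not the patch itself.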

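[Editorial sketch] The order-selection loop in the patched xen_pt_update() can also be modelled in isolation: OR-ing the virtual frame number, the MFN and the remaining frame count yields a mask whose low bits reveal the largest block size for which all three are aligned. In this sketch `pick_order` and the `allow_block` parameter are illustrative stand-ins for the real loop and the _PAGE_BLOCK flag:

```c
#include <assert.h>

#define BIT(n)       (1UL << (n))
#define FIRST_ORDER  18   /* 1GB block with a 4KB granule */
#define SECOND_ORDER 9    /* 2MB block */
#define THIRD_ORDER  0    /* 4KB page */

/*
 * Pick the largest mapping order such that the virtual frame number,
 * the MFN and the number of remaining frames are all multiples of the
 * block size; fall back to 4KB when block mappings are not allowed.
 */
static unsigned int pick_order(unsigned long vfn, unsigned long mfn,
                               unsigned long left, int allow_block)
{
    unsigned long mask = vfn | mfn | left;

    if ( !allow_block )
        return THIRD_ORDER;
    if ( !(mask & (BIT(FIRST_ORDER) - 1)) )
        return FIRST_ORDER;
    if ( !(mask & (BIT(SECOND_ORDER) - 1)) )
        return SECOND_ORDER;
    return THIRD_ORDER;
}
```

For example, a fully 1GB-aligned request picks FIRST_ORDER, while a single misaligned frame anywhere in the triple forces it back down to 4KB pages.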

From xen-devel-bounces@lists.xenproject.org Tue Nov 24 18:40:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 18:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36786.68810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdF2-0007jC-6x; Tue, 24 Nov 2020 18:40:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36786.68810; Tue, 24 Nov 2020 18:40:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdF2-0007j5-3v; Tue, 24 Nov 2020 18:40:40 +0000
Received: by outflank-mailman (input) for mailman id 36786;
 Tue, 24 Nov 2020 18:40:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Zhb=E6=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1khdF1-0007j0-Kl
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 18:40:39 +0000
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45289449-690e-4326-b0ac-a1da34df2809;
 Tue, 24 Nov 2020 18:40:38 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id w24so3950248wmi.0
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 10:40:38 -0800 (PST)
Received: from CBGR90WXYV0 (54-240-197-232.amazon.com. [54.240.197.232])
 by smtp.gmail.com with ESMTPSA id q66sm7533447wme.6.2020.11.24.10.40.36
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 24 Nov 2020 10:40:37 -0800 (PST)
X-Inumbo-ID: 45289449-690e-4326-b0ac-a1da34df2809
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=xVMrEUi82jOaWz0MJruX61qpO0RoLueRqr3pVIhR4Cg=;
        b=drD5GkCS69gCpKabzmP2fuT1G/hrFF0qpR3ZPu9kw/geTe9rfR+BgudS9nIFeiBtTF
         vlPJIY+GagP2dHMAhAze6reyi+Xcgsxn2F82pT8vvDIsw5ZazERD/eFKj14SmUieNTlx
         KZo7x+l+kyUlJngbNkiJa8ETwKIja364otFQtc4G9NoTG1vW8f8vRXN0P8KsJhqsuwh6
         G409CgujLGK/ugRQAjG275YS2hnXM1a+Ei5W8Ew8sDRhKNchalSL21jeYjRYo/r9ukOJ
         VSeGSG68dFnGObZnmBOqtUOxArP4ELiXMEqoXbV3wvdu79Y2WXHVlH2bDOgWloVD+evA
         iKqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=xVMrEUi82jOaWz0MJruX61qpO0RoLueRqr3pVIhR4Cg=;
        b=A1qODfOnBtQU/99EO3tuH3gPIJUpRuuQiYJvYYsP12vf8S8G2fd6prGrm4YlqepTCA
         8qPhfNZqygiplpGiFRDUegPdl5yCsz5KRuOVKCWAbbO2xjkBsFysDZzbpSRA0RxC1oBF
         hYUUyiDKlOd/NB+ijvH8vKC4N5682JtObtrLMFGsgvJmB+gmEImIHlg/H2zvX0ZFrQZt
         +n5Sp6lSfUWWSyHnw6VY9WXBfpolKOkyhj1XWjMaHQgEKWGZaPJa6C9lfumESJd/qzyU
         YI3huiODIyuQGjYTdXIDHbuBqz7DmTHFyaJkC45lfiZWQf37bL7ziogsemhUUulsrhJL
         xoqg==
X-Gm-Message-State: AOAM533i/cYjqxuZ9sE8IGV+FFQku6d7fwODy5Mt+GxJiN20Ad3EJoec
	QI/rtfc6H1mmsUtfhNiSWmU=
X-Google-Smtp-Source: ABdhPJz2wq7G9BCrob+7dJcpXgP9tx15CHmCavNyi7Z8p9yg+hLdvXssdTEDk17i+o1WFWfo5CJ2Xw==
X-Received: by 2002:a1c:e907:: with SMTP id q7mr6039192wmc.161.1606243237862;
        Tue, 24 Nov 2020 10:40:37 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20201120094900.1489-1-paul@xen.org> <20201120094900.1489-9-paul@xen.org> <1b8d71bc-5f6d-b458-e0fc-2a2f0d29ddd8@suse.com>
In-Reply-To: <1b8d71bc-5f6d-b458-e0fc-2a2f0d29ddd8@suse.com>
Subject: RE: [PATCH v2 08/12] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Tue, 24 Nov 2020 18:40:36 -0000
Message-ID: <006101d6c291$4d12ea20$e738be60$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQH2MKAKjLdIZTWPg9fOgT90SKBicQEWVvALAlYgE+upfVyzYA==
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 24 November 2020 16:56
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v2 08/12] viridian: add ExProcessorMasks variants of the flush hypercalls
> 
> On 20.11.2020 10:48, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > The Microsoft Hypervisor TLFS specifies variants of the already implemented
> > HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
> > Processor Set' as an argument rather than a simple 64-bit mask.
> >
> > This patch adds a new hvcall_flush_ex() function to implement these
> > (HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
> > new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
> > determine the size of the Virtual Processor Set (so it can be copied from
> > guest memory) and parse it into hypercall_vpmask (respectively).
> >
> > NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
> >       support needs to be advertised via CPUID. This will be done in a
> >       subsequent patch.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Just a couple of cosmetic remarks, apart from them this looks
> good to me, albeit some of the size calculations are quite,
> well, involved. I guess like with most parts of this series,
> in the end Wei will need to give his ack.
> 
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
> >      return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
> >  }
> >
> > +#define HV_VPSET_BANK_SIZE \
> > +    sizeof_field(struct hv_vpset, bank_contents[0])
> > +
> > +#define HV_VPSET_SIZE(banks)   \
> > +    (sizeof(struct hv_vpset) + (banks * HV_VPSET_BANK_SIZE))
> 
> Personally I think this would be better done using
> offsetof(struct hv_vpset, bank_contents), but you're the maintainer.
> However, "banks" wants parenthesizing, just in case.
> 

No, I actually prefer using offsetof() and yes I do indeed need to parenthesize 'banks'.

> > +#define HV_VPSET_MAX_BANKS \
> > +    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
> > +
> > +struct hypercall_vpset {
> > +    union {
> > +        struct hv_vpset set;
> > +        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
> > +    };
> > +};
> 
> A struct with just a union as member could be expressed as a simple
> union - any reason you prefer the slightly more involved variant?
> 

Not really... it's only that it was a struct in the original patch. I'll change to using a union.

> > +static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
> > +
> > +static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
> > +{
> > +    return hweight64(vpset->valid_bank_mask);
> > +}
> > +
> > +static uint16_t hv_vpset_to_vpmask(struct hv_vpset *set,
> 
> const?
> 

Ok.

> > @@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
> >      return 0;
> >  }
> >
> > +static int hvcall_flush_ex(union hypercall_input *input,
> 
> const again?
> 

True, but I'll need to go back and do that for the others too.

> > +                           union hypercall_output *output,
> > +                           unsigned long input_params_gpa,
> > +                           unsigned long output_params_gpa)
> 
> Mainly for cosmetic reasons and to be in sync with
> hvm_copy_from_guest_phys()'s respective parameter, perhaps both
> would better be paddr_t?
> 

Ok. Again I'll fix the prior patches to match.

  Paul

> Jan
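[Editorial sketch] Jan's two remarks about HV_VPSET_SIZE can be demonstrated with a toy struct: offsetof() measures up to the flexible array member rather than relying on sizeof of the whole struct, and an unparenthesized `banks` breaks as soon as the argument is an expression. Here `hv_vpset_demo` is a stand-in layout for illustration, not the real TLFS definition:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct hv_vpset_demo {
    uint64_t format;
    uint64_t valid_bank_mask;
    uint64_t bank_contents[];   /* flexible array of 64-bit banks */
};

#define BANK_SIZE sizeof(((struct hv_vpset_demo *)0)->bank_contents[0])

/* As posted: 'banks' is unparenthesized, so 'banks * BANK_SIZE'
 * mis-expands when the argument is an expression like '1 + 1'. */
#define VPSET_SIZE_BAD(banks) \
    (sizeof(struct hv_vpset_demo) + (banks * BANK_SIZE))

/* With Jan's suggestions applied: offsetof() plus parenthesized arg. */
#define VPSET_SIZE_GOOD(banks) \
    (offsetof(struct hv_vpset_demo, bank_contents) + ((banks) * BANK_SIZE))
```

With the bad macro, `VPSET_SIZE_BAD(1 + 1)` expands to `sizeof + 1 + (1 * BANK_SIZE)` instead of the size of a two-bank set, which is exactly the "just in case" breakage Jan warns about.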



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36805.68847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfJ-0001JP-W1; Tue, 24 Nov 2020 19:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36805.68847; Tue, 24 Nov 2020 19:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfJ-0001JI-S6; Tue, 24 Nov 2020 19:07:49 +0000
Received: by outflank-mailman (input) for mailman id 36805;
 Tue, 24 Nov 2020 19:07:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfI-0001IE-HP
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfI-0003sh-9A; Tue, 24 Nov 2020 19:07:48 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfH-0000r4-TQ; Tue, 24 Nov 2020 19:07:48 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=l+HwUUf2wLYBW7rdHkqfybrg3vkpvm3Asp9ED/PY4ng=; b=Pke8eRJlxfyrXxyYfpFgmZB7kO
	1EEUV8sTdIgiw7Dw7E+R0hKJWMndwMq1xKxqh+G/2BK/nCte21PnvnVXeGlQ3BpmDAIfTn98Msvqx
	kPH1gHl0e/OEYgYBnknq5hzDQoMljHnAkVNG2UOF6Xe8WJEBmuE+Rm6oxLj/mjw3zA2s=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 02/13] viridian: move flush hypercall implementation into separate function
Date: Tue, 24 Nov 2020 19:07:33 +0000
Message-Id: <20201124190744.11343-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.

The new function returns Xen error values, which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function
---
 xen/arch/x86/hvm/viridian/viridian.c | 130 ++++++++++++++++-----------
 1 file changed, 78 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 54035f75cb1a..7871e425cbfe 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -518,6 +518,69 @@ static bool need_flush(void *ctxt, struct vcpu *v)
     return vcpu_mask & (1ul << v->vcpu_id);
 }
 
+union hypercall_input {
+    uint64_t raw;
+    struct {
+        uint16_t call_code;
+        uint16_t fast:1;
+        uint16_t rsvd1:15;
+        uint16_t rep_count:12;
+        uint16_t rsvd2:4;
+        uint16_t rep_start:12;
+        uint16_t rsvd3:4;
+    };
+};
+
+union hypercall_output {
+    uint64_t raw;
+    struct {
+        uint16_t result;
+        uint16_t rsvd1;
+        uint32_t rep_complete:12;
+        uint32_t rsvd2:20;
+    };
+};
+
+static int hvcall_flush(const union hypercall_input *input,
+                        union hypercall_output *output,
+                        paddr_t input_params_gpa,
+                        paddr_t output_params_gpa)
+{
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        uint64_t vcpu_mask;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    /*
+     * It is not clear from the spec. if we are supposed to
+     * include current virtual CPU in the set or not in this case,
+     * so err on the safe side.
+     */
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        input_params.vcpu_mask = ~0ul;
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -525,29 +588,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     uint16_t status = HV_STATUS_SUCCESS;
-
-    union hypercall_input {
-        uint64_t raw;
-        struct {
-            uint16_t call_code;
-            uint16_t fast:1;
-            uint16_t rsvd1:15;
-            uint16_t rep_count:12;
-            uint16_t rsvd2:4;
-            uint16_t rep_start:12;
-            uint16_t rsvd3:4;
-        };
-    } input;
-
-    union hypercall_output {
-        uint64_t raw;
-        struct {
-            uint16_t result;
-            uint16_t rsvd1;
-            uint32_t rep_complete:12;
-            uint32_t rsvd2:20;
-        };
-    } output = { 0 };
+    union hypercall_input input;
+    union hypercall_output output = {};
 
     ASSERT(is_viridian_domain(currd));
 
@@ -580,41 +622,25 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
     {
-        struct {
-            uint64_t address_space;
-            uint64_t flags;
-            uint64_t vcpu_mask;
-        } input_params;
+        int rc = hvcall_flush(&input, &output, input_params_gpa,
+                              output_params_gpa);
 
-        /* These hypercalls should never use the fast-call convention. */
-        status = HV_STATUS_INVALID_PARAMETER;
-        if ( input.fast )
+        switch ( rc )
+        {
+        case 0:
             break;
 
-        /* Get input parameters. */
-        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) !=
-             HVMTRANS_okay )
-            break;
-
-        /*
-         * It is not clear from the spec. if we are supposed to
-         * include current virtual CPU in the set or not in this case,
-         * so err on the safe side.
-         */
-        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-            input_params.vcpu_mask = ~0ul;
-
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+        case -ERESTART:
             return HVM_HCALL_preempted;
 
-        output.rep_complete = input.rep_count;
+        default:
+            ASSERT_UNREACHABLE();
+            /* Fallthrough */
+        case -EINVAL:
+            status = HV_STATUS_INVALID_PARAMETER;
+            break;
+        }
 
-        status = HV_STATUS_SUCCESS;
         break;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36808.68883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfN-0001Pj-8U; Tue, 24 Nov 2020 19:07:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36808.68883; Tue, 24 Nov 2020 19:07:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfN-0001PX-43; Tue, 24 Nov 2020 19:07:53 +0000
Received: by outflank-mailman (input) for mailman id 36808;
 Tue, 24 Nov 2020 19:07:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfL-0001Mq-M4
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfL-0003t2-HW; Tue, 24 Nov 2020 19:07:51 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfL-0000r4-9o; Tue, 24 Nov 2020 19:07:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfL-0001Mq-M4
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=5/IydMtjnflu2HQri4IveMg9040AYYfLQ5af+EPEKuQ=; b=NXrU+r0HHtnB6zGcRqcya6pUDD
	81seybzOoeFkLKUT0X4yDwj1OaTdVkYbCbDZ4JZkP0OtUCcf9wZG6i+xtEy92TbLN9IXn4b4Zq224
	LDZ5F9iIw5G/X80oP4GISMxpQlB48M/B9k/dU3zRf7gesLu6vLRRpyQKoZdMgfHgtogg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfL-0003t2-HW; Tue, 24 Nov 2020 19:07:51 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfL-0000r4-9o; Tue, 24 Nov 2020 19:07:51 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 05/13] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Tue, 24 Nov 2020 19:07:36 +0000
Message-Id: <20201124190744.11343-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Couple of extra 'const' qualifiers

v2:
 - Drop the 'vp' loop now that vpmask_set() will do it internally
---
 xen/arch/x86/hvm/viridian/viridian.c | 44 +++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index d8f47b8528a2..cdfeaec8b96b 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -551,6 +551,26 @@ static bool vpmask_test(const struct hypercall_vpmask *vpmask,
     return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(const struct hypercall_vpmask *vpmask)
+{
+    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(const struct hypercall_vpmask *vpmask,
+                                unsigned int vp)
+{
+    /*
+     * If vp + 1 > HVM_MAX_VCPUS then find_next_bit() will return
+     * HVM_MAX_VCPUS, ensuring the for_each_vp ( ... ) loop terminates.
+     */
+    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+	for ( (vp) = vpmask_first(vpmask); \
+	      (vp) < HVM_MAX_VCPUS; \
+	      (vp) = vpmask_next(vpmask, vp) )
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -631,13 +651,21 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+    struct domain *currd = current->domain;
+    unsigned int vp;
+
+    for_each_vp ( vpmask, vp )
+        vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(const union hypercall_input *input,
                       union hypercall_output *output,
                       paddr_t input_params_gpa,
                       paddr_t output_params_gpa)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     uint32_t vector;
     uint64_t vcpu_mask;
 
@@ -676,16 +704,10 @@ static int hvcall_ipi(const union hypercall_input *input,
     if ( vector < 0x10 || vector > 0xff )
         return -EINVAL;
 
-    for_each_vcpu ( currd, v )
-    {
-        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-            return -EINVAL;
+    vpmask_empty(vpmask);
+    vpmask_set(vpmask, 0, vcpu_mask);
 
-        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-            continue;
-
-        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-    }
+    send_ipi(vpmask, vector);
 
     return 0;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36806.68854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfK-0001K3-Ce; Tue, 24 Nov 2020 19:07:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36806.68854; Tue, 24 Nov 2020 19:07:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfK-0001Jr-5R; Tue, 24 Nov 2020 19:07:50 +0000
Received: by outflank-mailman (input) for mailman id 36806;
 Tue, 24 Nov 2020 19:07:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfJ-0001JB-G5
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfJ-0003sp-9L; Tue, 24 Nov 2020 19:07:49 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfJ-0000r4-1K; Tue, 24 Nov 2020 19:07:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfJ-0001JB-G5
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=qCj+b7kUEK5qNRXOsdjbFaxQEVZq3bySvvwQ+49gYL8=; b=02yqj8nobdlLzlDyZKi2MzK9PP
	dr1swJrJjZh2KVx5d+CdyqQoUfLVUmAlRSU16JvxRGDECvT63AEd//s/5VLjxnggcqa5roYrdomqF
	PUdMZBN//YQ7Zdd9hJCIBvGeefXv+L5Zh1maMizBgppKjN+7lPHqbzUwwyqyn4qfUnLk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfJ-0003sp-9L; Tue, 24 Nov 2020 19:07:49 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfJ-0000r4-1K; Tue, 24 Nov 2020 19:07:49 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 03/13] viridian: move IPI hypercall implementation into separate function
Date: Tue, 24 Nov 2020 19:07:34 +0000
Message-Id: <20201124190744.11343-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.

The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.

NOTE: The formatting of the switch statement at the top of
      viridian_hypercall() is also adjusted as per CODING_STYLE.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function

v2:
 - Different formatting adjustment
---
 xen/arch/x86/hvm/viridian/viridian.c | 168 ++++++++++++++-------------
 1 file changed, 87 insertions(+), 81 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 7871e425cbfe..8ac1b32565a8 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -581,13 +581,72 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi(const union hypercall_input *input,
+                      union hypercall_output *output,
+                      paddr_t input_params_gpa,
+                      paddr_t output_params_gpa)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    uint32_t vector;
+    uint64_t vcpu_mask;
+
+    /* Get input parameters. */
+    if ( input->fast )
+    {
+        if ( input_params_gpa >> 32 )
+            return -EINVAL;
+
+        vector = input_params_gpa;
+        vcpu_mask = output_params_gpa;
+    }
+    else
+    {
+        struct {
+            uint32_t vector;
+            uint8_t target_vtl;
+            uint8_t reserved_zero[3];
+            uint64_t vcpu_mask;
+        } input_params;
+
+        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                      sizeof(input_params)) != HVMTRANS_okay )
+            return -EINVAL;
+
+        if ( input_params.target_vtl ||
+             input_params.reserved_zero[0] ||
+             input_params.reserved_zero[1] ||
+             input_params.reserved_zero[2] )
+            return -EINVAL;
+
+        vector = input_params.vector;
+        vcpu_mask = input_params.vcpu_mask;
+    }
+
+    if ( vector < 0x10 || vector > 0xff )
+        return -EINVAL;
+
+    for_each_vcpu ( currd, v )
+    {
+        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
+            return -EINVAL;
+
+        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
+            continue;
+
+        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+    }
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
-    uint16_t status = HV_STATUS_SUCCESS;
+    int rc = 0;
     union hypercall_input input;
     union hypercall_output output = {};
 
@@ -600,11 +659,13 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         input_params_gpa = regs->rdx;
         output_params_gpa = regs->r8;
         break;
+
     case 4:
         input.raw = (regs->rdx << 32) | regs->eax;
         input_params_gpa = (regs->rbx << 32) | regs->ecx;
         output_params_gpa = (regs->rdi << 32) | regs->esi;
         break;
+
     default:
         goto out;
     }
@@ -616,92 +677,18 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * See section 14.5.1 of the specification.
          */
         do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
-        status = HV_STATUS_SUCCESS;
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
-    {
-        int rc = hvcall_flush(&input, &output, input_params_gpa,
-                              output_params_gpa);
-
-        switch ( rc )
-        {
-        case 0:
-            break;
-
-        case -ERESTART:
-            return HVM_HCALL_preempted;
-
-        default:
-            ASSERT_UNREACHABLE();
-            /* Fallthrough */
-        case -EINVAL:
-            status = HV_STATUS_INVALID_PARAMETER;
-            break;
-        }
-
+        rc = hvcall_flush(&input, &output, input_params_gpa,
+                          output_params_gpa);
         break;
-    }
 
     case HVCALL_SEND_IPI:
-    {
-        struct vcpu *v;
-        uint32_t vector;
-        uint64_t vcpu_mask;
-
-        status = HV_STATUS_INVALID_PARAMETER;
-
-        /* Get input parameters. */
-        if ( input.fast )
-        {
-            if ( input_params_gpa >> 32 )
-                break;
-
-            vector = input_params_gpa;
-            vcpu_mask = output_params_gpa;
-        }
-        else
-        {
-            struct {
-                uint32_t vector;
-                uint8_t target_vtl;
-                uint8_t reserved_zero[3];
-                uint64_t vcpu_mask;
-            } input_params;
-
-            if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                          sizeof(input_params)) !=
-                 HVMTRANS_okay )
-                break;
-
-            if ( input_params.target_vtl ||
-                 input_params.reserved_zero[0] ||
-                 input_params.reserved_zero[1] ||
-                 input_params.reserved_zero[2] )
-                break;
-
-            vector = input_params.vector;
-            vcpu_mask = input_params.vcpu_mask;
-        }
-
-        if ( vector < 0x10 || vector > 0xff )
-            break;
-
-        for_each_vcpu ( currd, v )
-        {
-            if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-                break;
-
-            if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-                continue;
-
-            vlapic_set_irq(vcpu_vlapic(v), vector, 0);
-        }
-
-        status = HV_STATUS_SUCCESS;
+        rc = hvcall_ipi(&input, &output, input_params_gpa,
+                        output_params_gpa);
         break;
-    }
 
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
@@ -714,12 +701,31 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * Given that return a status of 'invalid code' has not so far
          * caused any problems it's not worth logging.
          */
-        status = HV_STATUS_INVALID_HYPERCALL_CODE;
+        rc = -EOPNOTSUPP;
         break;
     }
 
  out:
-    output.result = status;
+    switch ( rc )
+    {
+    case 0:
+        break;
+
+    case -ERESTART:
+        return HVM_HCALL_preempted;
+
+    case -EOPNOTSUPP:
+        output.result = HV_STATUS_INVALID_HYPERCALL_CODE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        /* Fallthrough */
+    case -EINVAL:
+        output.result = HV_STATUS_INVALID_PARAMETER;
+        break;
+    }
+
     switch (mode) {
     case 8:
         regs->rax = output.raw;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36807.68871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfL-0001N9-Ta; Tue, 24 Nov 2020 19:07:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36807.68871; Tue, 24 Nov 2020 19:07:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfL-0001N1-Ox; Tue, 24 Nov 2020 19:07:51 +0000
Received: by outflank-mailman (input) for mailman id 36807;
 Tue, 24 Nov 2020 19:07:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfK-0001Kt-ME
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfK-0003sw-Ek; Tue, 24 Nov 2020 19:07:50 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfK-0000r4-5c; Tue, 24 Nov 2020 19:07:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfK-0001Kt-ME
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=P1UWM8Y+y7SpMZ1c9aQQbLpHv68ZojqNnpvFbtmz8y4=; b=5RIRBHRqUGGDoqY2zzRhwo8mf7
	K8YpAjt0IwkIocNWyMN1VfeNOn1Gz9Yt5nJpw5t/onHrRMvLYojhRdPDer4D27L9ifP227RwVDwOk
	91VB2zi8TaOinO/UKvJkGYbLT73JW1KDl4UwNOWJM1VJf8lByjTOz/YqdP4nuXA7HLWQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfK-0003sw-Ek; Tue, 24 Nov 2020 19:07:50 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfK-0000r4-5c; Tue, 24 Nov 2020 19:07:50 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 04/13] viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
Date: Tue, 24 Nov 2020 19:07:35 +0000
Message-Id: <20201124190744.11343-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and make use of them in hvcall_flush()/need_flush().

Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Modified vpmask_set() to take a base 'vp' and a 64-bit 'mask', still
   looping over the mask as bitmap.h does not provide a primitive for copying
   one mask into another at an offset
 - Added ASSERTions to verify that we don't attempt to set or test bits
   beyond the limit of the map
---
 xen/arch/x86/hvm/viridian/viridian.c | 58 ++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 8ac1b32565a8..d8f47b8528a2 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -507,15 +507,59 @@ void viridian_domain_deinit(struct domain *d)
     XFREE(d->arch.hvm.viridian);
 }
 
+struct hypercall_vpmask {
+    DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);
+
+static void vpmask_empty(struct hypercall_vpmask *vpmask)
+{
+    bitmap_zero(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp,
+                       uint64_t mask)
+{
+    unsigned int count = sizeof(mask) * 8;
+
+    while ( count-- )
+    {
+        if ( !mask )
+            break;
+
+        if ( mask & 1 )
+        {
+            ASSERT(vp < HVM_MAX_VCPUS);
+            __set_bit(vp, vpmask->mask);
+        }
+
+        mask >>= 1;
+        vp++;
+    }
+}
+
+static void vpmask_fill(struct hypercall_vpmask *vpmask)
+{
+    bitmap_fill(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static bool vpmask_test(const struct hypercall_vpmask *vpmask,
+                        unsigned int vp)
+{
+    ASSERT(vp < HVM_MAX_VCPUS);
+    return test_bit(vp, vpmask->mask);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
  */
 static bool need_flush(void *ctxt, struct vcpu *v)
 {
-    uint64_t vcpu_mask = *(uint64_t *)ctxt;
+    struct hypercall_vpmask *vpmask = ctxt;
 
-    return vcpu_mask & (1ul << v->vcpu_id);
+    return vpmask_test(vpmask, v->vcpu_id);
 }
 
 union hypercall_input {
@@ -546,6 +590,7 @@ static int hvcall_flush(const union hypercall_input *input,
                         paddr_t input_params_gpa,
                         paddr_t output_params_gpa)
 {
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     struct {
         uint64_t address_space;
         uint64_t flags;
@@ -567,13 +612,18 @@ static int hvcall_flush(const union hypercall_input *input,
      * so err on the safe side.
      */
     if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
-        input_params.vcpu_mask = ~0ul;
+        vpmask_fill(vpmask);
+    else
+    {
+        vpmask_empty(vpmask);
+        vpmask_set(vpmask, 0, input_params.vcpu_mask);
+    }
 
     /*
      * A false return means that another vcpu is currently trying
      * a similar operation, so back off.
      */
-    if ( !paging_flush_tlb(need_flush, &input_params.vcpu_mask) )
+    if ( !paging_flush_tlb(need_flush, vpmask) )
         return -ERESTART;
 
     output->rep_complete = input->rep_count;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36804.68835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfI-0001IQ-OQ; Tue, 24 Nov 2020 19:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36804.68835; Tue, 24 Nov 2020 19:07:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfI-0001IJ-It; Tue, 24 Nov 2020 19:07:48 +0000
Received: by outflank-mailman (input) for mailman id 36804;
 Tue, 24 Nov 2020 19:07:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfH-0001HM-BD
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfH-0003sb-1P; Tue, 24 Nov 2020 19:07:47 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfG-0000r4-PC; Tue, 24 Nov 2020 19:07:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfH-0001HM-BD
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:47 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=oFMSo+GJNEgaaZ5yNsIGpJUzjgKARczCMDIFteDVII8=; b=mAc7Eq71ryR1pzCsfSRYwmCYU/
	JCGcN2QMNxni/S6dKxkM4Wb19VsYomMJsNIikrDh39l1R+rlb8cywON8+lXHhR3cuw0oIy1MIOyX7
	tfgxG9iaVQbEifizl33H8kX2QwR/2hhVV2eiG9REegNhtIMtFAVVKuMzB1Quq5UfzsYA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfH-0003sb-1P; Tue, 24 Nov 2020 19:07:47 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfG-0000r4-PC; Tue, 24 Nov 2020 19:07:46 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 01/13] viridian: don't blindly write to 32-bit registers if 'mode' is invalid
Date: Tue, 24 Nov 2020 19:07:32 +0000
Message-Id: <20201124190744.11343-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

If hvm_guest_x86_mode() returns something other than 8 or 4 then
viridian_hypercall() will return immediately but, on the way out, will still
write back status as if 'mode' were 4. This patch makes it leave the registers
alone in that case.
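The write-back logic described above can be sketched in isolation (this is an
illustrative stand-in, not the hypervisor code: 'struct regs' and 'write_back'
are hypothetical names, with mode 8 meaning a 64-bit guest and mode 4 a 32-bit
guest, as in hvm_guest_x86_mode()):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical register file for illustration only. */
struct regs {
    uint64_t rax, rdx;
};

/*
 * Mirror of the corrected write-back: a 64-bit guest takes the whole
 * result in rax, a 32-bit guest takes it split across edx:eax, and any
 * other (invalid) mode leaves the registers untouched.
 */
static void write_back(struct regs *regs, unsigned int mode, uint64_t result)
{
    switch ( mode )
    {
    case 8:
        regs->rax = result;
        break;

    case 4:
        regs->rdx = result >> 32;
        regs->rax = (uint32_t)result;
        break;

    default:
        /* Invalid mode: leave the registers alone. */
        break;
    }
}
```

Before the fix, the invalid-mode path fell through to the 32-bit case and
clobbered edx:eax; the 'default: break;' arm is the behavioural change.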

NOTE: The formatting of the 'out' label and the switch statement are also
      adjusted as per CODING_STYLE.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - New in v2
---
 xen/arch/x86/hvm/viridian/viridian.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index dc7183a54627..54035f75cb1a 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -692,13 +692,14 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         break;
     }
 
-out:
+ out:
     output.result = status;
     switch (mode) {
     case 8:
         regs->rax = output.raw;
         break;
-    default:
+
+    case 4:
         regs->rdx = output.raw >> 32;
         regs->rax = (uint32_t)output.raw;
         break;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36803.68822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfH-0001HQ-E4; Tue, 24 Nov 2020 19:07:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36803.68822; Tue, 24 Nov 2020 19:07:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfH-0001HH-B3; Tue, 24 Nov 2020 19:07:47 +0000
Received: by outflank-mailman (input) for mailman id 36803;
 Tue, 24 Nov 2020 19:07:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfG-0001HC-7j
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfF-0003sW-UJ; Tue, 24 Nov 2020 19:07:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfF-0000r4-Ku; Tue, 24 Nov 2020 19:07:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfG-0001HC-7j
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:46 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=TPULqpYdJEVGJYx6fqKK70HwJX9V6FzYPwgxwHl7pgM=; b=hQMtyBT3viik1TdJA+DV3ktr66
	znrwj+lFNBmj9SZM6KZwamUIao5pX5/Q2k3wOLVlrLpuebz2Zq/5aXac/TiMjPdXnEGsZLpn43gC2
	d8H3xcz6ouPHK6+visfq/HXOKI0t1bJk10z/lZbgjhS4MZZh6+1GPSAgTs2rLpN9eFCw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfF-0003sW-UJ; Tue, 24 Nov 2020 19:07:45 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfF-0000r4-Ku; Tue, 24 Nov 2020 19:07:45 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH v3 00/13] viridian: add support for ExProcessorMasks...
Date: Tue, 24 Nov 2020 19:07:31 +0000
Message-Id: <20201124190744.11343-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... plus one miscellaneous cleanup patch after introducing sizeof_field().

Paul Durrant (13):
  viridian: don't blindly write to 32-bit registers if 'mode' is invalid
  viridian: move flush hypercall implementation into separate function
  viridian: move IPI hypercall implementation into separate function
  viridian: introduce a per-cpu hypercall_vpmask and accessor
    functions...
  viridian: use hypercall_vpmask in hvcall_ipi()
  viridian: use softirq batching in hvcall_ipi()
  xen/include: import sizeof_field() macro from Linux stddef.h
  viridian: add ExProcessorMasks variants of the flush hypercalls
  viridian: add ExProcessorMasks variant of the IPI hypercall
  viridian: log initial invocation of each type of hypercall
  viridian: add a new '_HVMPV_ex_processor_masks' bit into
    HVM_PARAM_VIRIDIAN...
  xl / libxl: add 'ex_processor_mask' into
    'libxl_viridian_enlightenment'
  x86: replace open-coded occurrences of sizeof_field()...

 docs/man/xl.cfg.5.pod.in             |   8 +
 tools/include/libxl.h                |   7 +
 tools/libs/light/libxl_types.idl     |   1 +
 tools/libs/light/libxl_x86.c         |   3 +
 xen/arch/x86/cpu/vpmu_intel.c        |   4 +-
 xen/arch/x86/hvm/viridian/viridian.c | 601 +++++++++++++++++++++------
 xen/arch/x86/setup.c                 |  16 +-
 xen/include/asm-x86/hvm/viridian.h   |  10 +
 xen/include/public/hvm/params.h      |   7 +-
 xen/include/xen/compiler.h           |   8 +
 10 files changed, 532 insertions(+), 133 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36809.68889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfN-0001Qc-Os; Tue, 24 Nov 2020 19:07:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36809.68889; Tue, 24 Nov 2020 19:07:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfN-0001QG-Gd; Tue, 24 Nov 2020 19:07:53 +0000
Received: by outflank-mailman (input) for mailman id 36809;
 Tue, 24 Nov 2020 19:07:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfM-0001PE-RA
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfM-0003t9-Mo; Tue, 24 Nov 2020 19:07:52 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfM-0000r4-E6; Tue, 24 Nov 2020 19:07:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfM-0001PE-RA
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jXUtIM3q7XVre1T2PL1q5bRCrTalFKSE91vco2iGk0A=; b=nhQ9cUXszfVEANDtv+ELcv6B7G
	DKItv6UOyXSsThrIuA+P+yLqvsy//ueHU3iqJwNaKABIawYdyjSXIoUF7T8n+d6n8jHiOa/ZzDZbq
	EzUZ/OwMOnrDsQK4ryeNvdKPhYbua/34yGxRS9pa7gbzkpME9J9c2vG7m9GeifI6q54c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfM-0003t9-Mo; Tue, 24 Nov 2020 19:07:52 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfM-0000r4-E6; Tue, 24 Nov 2020 19:07:52 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 06/13] viridian: use softirq batching in hvcall_ipi()
Date: Tue, 24 Nov 2020 19:07:37 +0000
Message-Id: <20201124190744.11343-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of the
mechanism when there are multiple bits set in the hypercall_vpmask.
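The shape of the change can be sketched with stub functions standing in for
Xen's cpu_raise_softirq_batch_begin()/cpu_raise_softirq_batch_finish() and for
raising a single IPI (all names below are illustrative stand-ins, not the real
hypervisor interfaces):

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for the softirq batching primitives. */
static bool batching;
static int ipis_sent, batches_used;

static void batch_begin(void)  { batching = true; batches_used++; }
static void batch_finish(void) { batching = false; }
static void send_one_ipi(unsigned int vp) { (void)vp; ipis_sent++; }

/*
 * Mirror of the send_ipi() logic above: only pay the batching overhead
 * when more than one IPI is going to be raised.
 */
static void send_ipi_sketch(const unsigned int *vps, unsigned int nr)
{
    unsigned int i;

    if ( nr > 1 )
        batch_begin();

    for ( i = 0; i < nr; i++ )
        send_one_ipi(vps[i]);

    if ( nr > 1 )
        batch_finish();
}
```

With a single destination the fast path is unchanged; with several, the
per-IPI notifications are coalesced between the begin/finish pair.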

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Don't add the 'nr' field to struct hypercall_vpmask and use
   bitmap_weight() instead
---
 xen/arch/x86/hvm/viridian/viridian.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index cdfeaec8b96b..4867a1bd140b 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -11,6 +11,7 @@
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
+#include <xen/softirq.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
@@ -571,6 +572,11 @@ static unsigned int vpmask_next(const struct hypercall_vpmask *vpmask,
 	      (vp) < HVM_MAX_VCPUS; \
 	      (vp) = vpmask_next(vpmask, vp) )
 
+static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
+{
+    return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -654,10 +660,17 @@ static int hvcall_flush(const union hypercall_input *input,
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
+    unsigned int nr = vpmask_nr(vpmask);
     unsigned int vp;
 
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_begin();
+
     for_each_vp ( vpmask, vp )
         vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+
+    if ( nr > 1 )
+        cpu_raise_softirq_batch_finish();
 }
 
 static int hvcall_ipi(const union hypercall_input *input,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36810.68907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfR-0001YM-3d; Tue, 24 Nov 2020 19:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36810.68907; Tue, 24 Nov 2020 19:07:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfQ-0001Xx-Vv; Tue, 24 Nov 2020 19:07:56 +0000
Received: by outflank-mailman (input) for mailman id 36810;
 Tue, 24 Nov 2020 19:07:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfP-0001UU-6h
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfO-0003tH-4N; Tue, 24 Nov 2020 19:07:54 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfN-0000r4-S6; Tue, 24 Nov 2020 19:07:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfP-0001UU-6h
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:55 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=UWkIBqTtAF5BCGvfGIil7ROCWJIpScItP3D5RKMBLJo=; b=kNHSiQ85ejPIPK+pAkadfc9DMc
	iF3Eq0OYhffj0XtWTMweG1TPq/KKKX9hj2j7mJmcje3nh8EW46ARoLtOlvowRPXhhTJhrMD8GKTlt
	WVGu6HoEtE1WgzEUCTe/kdDPxImm6BqBOqxo6CVKxABWc3sNIN+HYc02tPDLAHVjGtEQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfO-0003tH-4N; Tue, 24 Nov 2020 19:07:54 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfN-0000r4-S6; Tue, 24 Nov 2020 19:07:54 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 07/13] xen/include: import sizeof_field() macro from Linux stddef.h
Date: Tue, 24 Nov 2020 19:07:38 +0000
Message-Id: <20201124190744.11343-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Co-locate it with the definition of offsetof() (since this is also in stddef.h
in the Linux kernel source). This macro will be needed in a subsequent patch.
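A minimal example of what the macro computes (the 'struct example' type below
is made up for illustration; the macro itself is the one the patch imports):

```c
#include <assert.h>
#include <stdint.h>

/*
 * As imported from Linux's stddef.h: the size of a struct member,
 * computed without needing an instance of the struct.  The null
 * pointer is never dereferenced because sizeof() is unevaluated.
 */
#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))

/* Hypothetical struct, purely to demonstrate the macro. */
struct example {
    uint32_t id;
    uint64_t mask;
    uint8_t bank_contents[4];
};
```

Note that applying it to an array member yields the whole array's size, while
indexing (e.g. `bank_contents[0]`) yields the element size; a later patch in
this series relies on the element form.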

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
---
 xen/include/xen/compiler.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c0e0ee9f27be..676c6ea1b0a0 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -76,6 +76,14 @@
 
 #define offsetof(a,b) __builtin_offsetof(a,b)
 
+/**
+ * sizeof_field(TYPE, MEMBER)
+ *
+ * @TYPE: The structure containing the field of interest
+ * @MEMBER: The field to return the size of
+ */
+#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
+
 #if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
 #define alignof __alignof__
 #endif
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36811.68914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfR-0001au-TI; Tue, 24 Nov 2020 19:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36811.68914; Tue, 24 Nov 2020 19:07:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfR-0001aG-Et; Tue, 24 Nov 2020 19:07:57 +0000
Received: by outflank-mailman (input) for mailman id 36811;
 Tue, 24 Nov 2020 19:07:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfP-0001Uo-Ea
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfP-0003tO-8Z; Tue, 24 Nov 2020 19:07:55 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfP-0000r4-0V; Tue, 24 Nov 2020 19:07:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfP-0001Uo-Ea
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:55 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=I7JJtB7WNzzZPW1N9VUrCzgJzropJ8kr71EcJ8UnyZk=; b=c1/mzUJLUt42kjlCwqhRaQndpG
	8YUTmLnIfPglZ0/P8RxbmP5mwn1hLi7aUASCuviJUzRQ4aD8vn9sWassb9sud3Sfn1IRWGrncCRcN
	HzeDd4IYHR5IjSr1KX51aVSKyC9lRqem0cDQbRMmaNr/rgbQ1PbFEABOCSNpsxGPwvhw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfP-0003tO-8Z; Tue, 24 Nov 2020 19:07:55 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfP-0000r4-0V; Tue, 24 Nov 2020 19:07:55 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 08/13] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Tue, 24 Nov 2020 19:07:39 +0000
Message-Id: <20201124190744.11343-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. It makes use of two
new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(): the former
determines the size of the Virtual Processor Set (so that it can be copied
from guest memory) and the latter parses it into a hypercall_vpmask.
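The sparse-4K set format can be illustrated in isolation: the number of banks
is the population count of valid_bank_mask, and each set bit selects a 64-VP
bank whose (packed) contents fill the corresponding slice of the flat mask.
A simplified sketch, using an array of uint64_t words in place of the full
hypercall_vpmask bitmap ('nr_banks' and 'vpset_to_flat' are illustrative
names; __builtin_popcountll assumes a GCC/Clang-style compiler):

```c
#include <assert.h>
#include <stdint.h>

/* Number of banks present: one per set bit in valid_bank_mask. */
static unsigned int nr_banks(uint64_t valid_bank_mask)
{
    return (unsigned int)__builtin_popcountll(valid_bank_mask);
}

/*
 * Expand a sparse set into a flat mask.  bank_contents[] is packed:
 * only valid banks are present, in ascending bank order, which is why
 * 'bank' advances only when a valid bit is consumed (mirroring the
 * loop shape in hv_vpset_to_vpmask()).  Word i of the output covers
 * VPs [i * 64, i * 64 + 63].
 */
static void vpset_to_flat(uint64_t valid_bank_mask,
                          const uint64_t *bank_contents,
                          uint64_t *flat, unsigned int nr_words)
{
    unsigned int word, bank = 0;

    for ( word = 0; word < nr_words; word++ )
        flat[word] = 0;

    for ( word = 0; valid_bank_mask && word < nr_words;
          word++, valid_bank_mask >>= 1 )
        if ( valid_bank_mask & 1 )
            flat[word] = bank_contents[bank++];
}
```

The packing is what makes nr_banks() matter: the guest only supplies as many
bank_contents words as there are set bits, so that count bounds how much must
be copied from guest memory.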

NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust one of the helper macros
 - A few more consts and type tweaks
 - Adjust prototype of new function

v2:
 - Add helper macros to define mask and struct sizes
 - Use a union to determine the size of 'hypercall_vpset'
 - Use hweight64() in hv_vpset_nr_banks()
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 141 +++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 4867a1bd140b..5d0b49012360 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -577,6 +577,69 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
     return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
 }
 
+#define HV_VPSET_BANK_SIZE \
+    sizeof_field(struct hv_vpset, bank_contents[0])
+
+#define HV_VPSET_SIZE(banks)   \
+    (offsetof(struct hv_vpset, bank_contents) + \
+     ((banks) * HV_VPSET_BANK_SIZE))
+
+#define HV_VPSET_MAX_BANKS \
+    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
+
+union hypercall_vpset {
+    struct hv_vpset set;
+    uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
+};
+
+static DEFINE_PER_CPU(union hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    return hweight64(vpset->valid_bank_mask);
+}
+
+static int hv_vpset_to_vpmask(const struct hv_vpset *set,
+                              struct hypercall_vpmask *vpmask)
+{
+#define NR_VPS_PER_BANK (HV_VPSET_BANK_SIZE * 8)
+
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int vp, bank = 0;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0, bank_mask = set->valid_bank_mask;
+              bank_mask;
+              vp += NR_VPS_PER_BANK, bank_mask >>= 1 )
+        {
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+
+                vpmask_set(vpmask, vp, mask);
+                bank++;
+            }
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+
+#undef NR_VPS_PER_BANK
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -657,6 +720,78 @@ static int hvcall_flush(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(const union hypercall_input *input,
+                           union hypercall_output *output,
+                           paddr_t input_params_gpa,
+                           paddr_t output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        union hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+            if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+                 sizeof(*vpset) )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( hvm_copy_from_guest_phys(&set->bank_contents[0],
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay )
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -770,6 +905,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:07:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:07:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36812.68926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfT-0001dg-3j; Tue, 24 Nov 2020 19:07:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36812.68926; Tue, 24 Nov 2020 19:07:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdfS-0001cd-6T; Tue, 24 Nov 2020 19:07:58 +0000
Received: by outflank-mailman (input) for mailman id 36812;
 Tue, 24 Nov 2020 19:07:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdfQ-0001XH-Iy
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfQ-0003tY-CH; Tue, 24 Nov 2020 19:07:56 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfQ-0000r4-4a; Tue, 24 Nov 2020 19:07:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfQ-0001XH-Iy
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:07:56 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=kWx4PMcch2RGl9krlLbC6HcOzeq81AJtXFE2kaQbHAQ=; b=X3bWEy2pV03VwPPUWm5LioJEeM
	d3sxxFzln5i4A1yXajppsi6gOMV8ZrmuWM7QPVaHGabBXBMYRZAaOaQWV3muJmuIMDi3/nOHj+x5H
	Iup+8cVpxAdEHspCmQzeQx2izVbFdMAlWznTki1VECUWZmqrBuNq9OFDqEl1rA2nai2w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfQ-0003tY-CH; Tue, 24 Nov 2020 19:07:56 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfQ-0000r4-4a; Tue, 24 Nov 2020 19:07:56 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 09/13] viridian: add ExProcessorMasks variant of the IPI hypercall
Date: Tue, 24 Nov 2020 19:07:40 +0000
Message-Id: <20201124190744.11343-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).

NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
      not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
      'ExProcessorMasks' is not yet advertised via CPUID.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Adjust prototype of new function

v2:
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 74 ++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 5d0b49012360..93865be5797a 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -860,6 +860,75 @@ static int hvcall_ipi(const union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_ipi_ex(const union hypercall_input *input,
+                         union hypercall_output *output,
+                         paddr_t input_params_gpa,
+                         paddr_t output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint32_t vector;
+        uint8_t target_vtl;
+        uint8_t reserved_zero[3];
+        struct hv_vpset set;
+    } input_params;
+    union hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+    struct hv_vpset *set = &vpset->set;
+    size_t size;
+    int rc;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.target_vtl ||
+         input_params.reserved_zero[0] ||
+         input_params.reserved_zero[1] ||
+         input_params.reserved_zero[2] )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    if ( input_params.vector < 0x10 || input_params.vector > 0xff )
+        return HV_STATUS_INVALID_PARAMETER;
+
+    *set = input_params.set;
+    if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+    {
+        unsigned long offset = offsetof(typeof(input_params),
+                                        set.bank_contents);
+
+        size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+        if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+             sizeof(*vpset) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+
+        if ( hvm_copy_from_guest_phys(&set->bank_contents,
+                                      input_params_gpa + offset,
+                                      size) != HVMTRANS_okay )
+            return -EINVAL;
+
+        size += sizeof(*set);
+    }
+    else
+        size = sizeof(*set);
+
+    rc = hv_vpset_to_vpmask(set, vpmask);
+    if ( rc )
+        return rc;
+
+    send_ipi(vpmask, input_params.vector);
+
+    return 0;
+}
+
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
@@ -916,6 +985,11 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                         output_params_gpa);
         break;
 
+    case HVCALL_SEND_IPI_EX:
+        rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
+                           output_params_gpa);
+        break;
+
     default:
         gprintk(XENLOG_WARNING, "unimplemented hypercall %04x\n",
                 input.call_code);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:18:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:18:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36880.68979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp9-0003R3-4V; Tue, 24 Nov 2020 19:17:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36880.68979; Tue, 24 Nov 2020 19:17:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp9-0003Qw-1E; Tue, 24 Nov 2020 19:17:59 +0000
Received: by outflank-mailman (input) for mailman id 36880;
 Tue, 24 Nov 2020 19:17:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdp7-0003OL-Kw
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp6-00047C-QM; Tue, 24 Nov 2020 19:17:56 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp6-0001af-Hj; Tue, 24 Nov 2020 19:17:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp7-0003OL-Kw
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:57 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=tkkEPJDCdmPbCsiRQLMOIcAhVsNk4TLVoClBajQOxuI=; b=EYmZp8G3qHSpuuEhzJceRlDl0A
	D+d80s0ACzdoBX4PXRzJ0vVeCAWR1wDNwk2iVcJJvuYl5M+zPHZvkki8+Z7g7rNE1kRPpe6cIz2qt
	6F1r++tfBv4uwyVBGUdSOyYBEyvaIvJ1tdxK+AgIL1ojrtbkXmwnwbs2OMH8Sno/xQcc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp6-00047C-QM; Tue, 24 Nov 2020 19:17:56 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp6-0001af-Hj; Tue, 24 Nov 2020 19:17:56 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Eslam Elnikety <elnikety@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 3/3] xl: add 'disable_evtchn_fifo' boolean option into xl.cfg(5) ...
Date: Tue, 24 Nov 2020 19:17:51 +0000
Message-Id: <20201124191751.11472-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124191751.11472-1-paul@xen.org>
References: <20201124191751.11472-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

...to set the value of the 'disable_evtchn_fifo' flag in
libxl_domain_build_info.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v4:
 - New in v4
---
 docs/man/xl.cfg.5.pod.in | 8 ++++++++
 tools/xl/xl_parse.c      | 3 +++
 2 files changed, 11 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..80d5e7aaf38f 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1338,6 +1338,14 @@ FIFO-based event channel ABI support up to 131,071 event channels.
 Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
 x86).
 
+=item B<disable_evtchn_fifo=BOOLEAN>
+
+Indicates whether support for FIFO event channel operations
+(EVTCHNOP_init_control, EVTCHNOP_expand_array and EVTCHNOP_set_priority) is
+disabled. This can be used to work around issues with guests hibernated on a
+version of Xen prior to 4.4 and resumed on a version of Xen from 4.4 onwards.
+The default value is false.
+
 =item B<vdispl=[ "VDISPL_SPEC_STRING", "VDISPL_SPEC_STRING", ...]>
 
 Specifies the virtual display devices to be provided to the guest.
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c5a..f79f644c4c2e 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1569,6 +1569,9 @@ void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long(config, "max_event_channels", &l, 0))
         b_info->event_channels = l;
 
+    xlu_cfg_get_defbool(config, "disable_evtchn_fifo",
+                        &b_info->disable_evtchn_fifo, 0);
+
     xlu_cfg_replace_string (config, "kernel", &b_info->kernel, 0);
     xlu_cfg_replace_string (config, "ramdisk", &b_info->ramdisk, 0);
     xlu_cfg_replace_string (config, "device_tree", &b_info->device_tree, 0);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:18:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:18:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36877.68943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp5-0003MN-4t; Tue, 24 Nov 2020 19:17:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36877.68943; Tue, 24 Nov 2020 19:17:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp5-0003MG-1q; Tue, 24 Nov 2020 19:17:55 +0000
Received: by outflank-mailman (input) for mailman id 36877;
 Tue, 24 Nov 2020 19:17:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdp3-0003MB-5T
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp3-00046u-09; Tue, 24 Nov 2020 19:17:53 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp2-0001af-KZ; Tue, 24 Nov 2020 19:17:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp3-0003MB-5T
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:53 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=U1nqXdkQgZ7asiK/N8GRlF/923RFJ4xg2wwzpCkLx9s=; b=S4Wlol
	LY5jVQhN0L6M8Lyr5VwCh7nFwqo9Z2PL11fMFzCz95tVnxrB/PSQW7+58g0lBfv8CpoPanOjzD5Tv
	dYftnHZX/Xx5MK7V0kWDkhSDqcXxlBf/V5TR0W21wFnSTzG2KOqpNAP/TuupsAiH29YUzXBivck3U
	2QkNnOsY/BY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp3-00046u-09; Tue, 24 Nov 2020 19:17:53 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp2-0001af-KZ; Tue, 24 Nov 2020 19:17:52 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>
Subject: [PATCH v4 0/3] evtchn: Introduce a per-guest knob to control FIFO ABI
Date: Tue, 24 Nov 2020 19:17:48 +0000
Message-Id: <20201124191751.11472-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This series is the next version of what was originally a single patch sent
by Eslam Elnikety <elnikety@amazon.com>. I have rebased and slightly expanded
the modifications.

Paul Durrant (3):
  domctl: introduce a new domain create flag,
    XEN_DOMCTL_CDF_disable_fifo, ...
  libxl: add a 'disable_fifo_evtchn' flag to libxl_domain_build_info ...
  xl: add 'disable_evtchn_fifo' boolean option into xl.cfg(5) ...

 docs/man/xl.cfg.5.pod.in         |  8 ++++++++
 tools/include/libxl.h            |  8 ++++++++
 tools/libs/light/libxl_create.c  |  5 +++++
 tools/libs/light/libxl_types.idl |  1 +
 tools/ocaml/libs/xc/xenctrl.ml   |  1 +
 tools/ocaml/libs/xc/xenctrl.mli  |  1 +
 tools/xl/xl_parse.c              |  3 +++
 xen/common/domain.c              |  2 +-
 xen/common/event_channel.c       | 17 +++++++++++++++++
 xen/include/public/domctl.h      |  4 +++-
 10 files changed, 48 insertions(+), 2 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:18:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36879.68960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp7-0003OX-QK; Tue, 24 Nov 2020 19:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36879.68960; Tue, 24 Nov 2020 19:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp7-0003OE-K5; Tue, 24 Nov 2020 19:17:57 +0000
Received: by outflank-mailman (input) for mailman id 36879;
 Tue, 24 Nov 2020 19:17:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdp6-0003NJ-IB
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp5-000473-MH; Tue, 24 Nov 2020 19:17:55 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp5-0001af-EV; Tue, 24 Nov 2020 19:17:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp6-0003NJ-IB
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:56 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ab1xFXjavoHWImv5EdYjiLsTfFXo0yohVYFMNSfSBu8=; b=XhYSWVF0+rPXX327YXZdWgIakp
	hwNMBq11dVZDKNTIM1BiXi5SO/7j2c9rqUefBht0MR1ONR9FEJ6+ELuaXw2ZNaDKvdRND+BA4ZURu
	9WF7A9N0zZHn8PBV0g7BcM53MGzaFZVF+N8cnaT6twdxkh0VpmoNefvIcFnw7mnxCAlI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp5-000473-MH; Tue, 24 Nov 2020 19:17:55 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp5-0001af-EV; Tue, 24 Nov 2020 19:17:55 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Eslam Elnikety <elnikety@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 2/3] libxl: add a 'disable_fifo_evtchn' flag to libxl_domain_build_info ...
Date: Tue, 24 Nov 2020 19:17:50 +0000
Message-Id: <20201124191751.11472-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124191751.11472-1-paul@xen.org>
References: <20201124191751.11472-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

...to control setting the domain create flag XEN_DOMCTL_CDF_disable_fifo.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

v4:
 - New in v4
---
 tools/include/libxl.h            | 8 ++++++++
 tools/libs/light/libxl_create.c  | 5 +++++
 tools/libs/light/libxl_types.idl | 1 +
 3 files changed, 14 insertions(+)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..fe0aad632c08 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,14 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_BUILDINFO_DISABLE_EVTCHN_FIFO indicates that
+ * libxl_domain_build_info has a disable_evtchn_fifo (boolean) field
+ * to determine whether the EVTCHNOPs to initialize and manipulate FIFO
+ * event channels are exposed to the guest.
+ */
+#define LIBXL_HAVE_BUILDINFO_DISABLE_EVTCHN_FIFO 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519b5..abbbd91442c0 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -263,6 +263,8 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     if (!b_info->event_channels)
         b_info->event_channels = 1023;
 
+    libxl_defbool_setdefault(&b_info->disable_evtchn_fifo, false);
+
     libxl__arch_domain_build_info_setdefault(gc, b_info);
     libxl_defbool_setdefault(&b_info->dm_restrict, false);
 
@@ -609,6 +611,9 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_maptrack_frames = b_info->max_maptrack_frames,
         };
 
+        if (libxl_defbool_val(b_info->disable_evtchn_fifo))
+            create.flags |= XEN_DOMCTL_CDF_disable_fifo;
+
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
             create.flags |= XEN_DOMCTL_CDF_hvm;
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..fa3f6ec3425e 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -541,6 +541,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("iomem",            Array(libxl_iomem_range, "num_iomem")),
     ("claim_mode",	     libxl_defbool),
     ("event_channels",   uint32),
+    ("disable_evtchn_fifo", libxl_defbool),
     ("kernel",           string),
     ("cmdline",          string),
     ("ramdisk",          string),
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:18:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36878.68955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp7-0003Ni-FL; Tue, 24 Nov 2020 19:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36878.68955; Tue, 24 Nov 2020 19:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdp7-0003Nb-BW; Tue, 24 Nov 2020 19:17:57 +0000
Received: by outflank-mailman (input) for mailman id 36878;
 Tue, 24 Nov 2020 19:17:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khdp6-0003NA-F5
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp4-00046z-Kd; Tue, 24 Nov 2020 19:17:54 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdp4-0001af-B1; Tue, 24 Nov 2020 19:17:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp6-0003NA-F5
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:17:56 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=RqaPp5RRZKIQZO8Gxl89YiDcr2otuNX/c8xlMGA/ZCE=; b=yJ8hF0LtYrkvPcfXjOOAI4Hbd2
	bEA2gcbXQKuVpMAEX7xvA4Eqba4XpXWP0mc22LSETae0jmhAvK1aeD+cVARRyppOF522fB1hF4mbF
	Rc/tGou53K/CT1R7+RFUGbg3RioqttSUTaJ2wYehxygC4Df8nsWYiENdUfmzjVw+AJCI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp4-00046z-Kd; Tue, 24 Nov 2020 19:17:54 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdp4-0001af-B1; Tue, 24 Nov 2020 19:17:54 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Eslam Elnikety <elnikety@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v4 1/3] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_disable_fifo, ...
Date: Tue, 24 Nov 2020 19:17:49 +0000
Message-Id: <20201124191751.11472-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124191751.11472-1-paul@xen.org>
References: <20201124191751.11472-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

...to control the visibility of the FIFO event channel operations
(EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
the guest.

These operations were added to the public header in commit d2d50c2f308f
("evtchn: add FIFO-based event channel ABI") and the first implementation
appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
that, a guest issuing those operations would receive a return value of
-ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
running on an older (pre-4.4) Xen would fall back to using the 2-level event
channel interface upon seeing this return value.

Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
onwards has implications for hibernation of some Linux guests. During resume
from hibernation, there are two kernels involved: the "boot" kernel and the
"resume" kernel. The guest boot kernel may default to use FIFO operations and
instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
other hand, the resume kernel keeps assuming 2-level, because it was hibernated
on a version of Xen that did not support the FIFO operations.

To maintain compatibility it is necessary to make Xen behave as it did
before the new operations were added. Hence the code in this patch ensures
that, if XEN_DOMCTL_CDF_disable_fifo is set, the FIFO event channel
operations will again result in -ENOSYS being returned to the guest.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
---
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v4:
 - New in v4
---
 tools/ocaml/libs/xc/xenctrl.ml  |  1 +
 tools/ocaml/libs/xc/xenctrl.mli |  1 +
 xen/common/domain.c             |  2 +-
 xen/common/event_channel.c      | 17 +++++++++++++++++
 xen/include/public/domctl.h     |  4 +++-
 5 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e878699b0a1a..9ccad9aece8c 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -65,6 +65,7 @@ type domain_create_flag =
 	| CDF_XS_DOMAIN
 	| CDF_IOMMU
 	| CDF_NESTED_VIRT
+	| CDF_DISABLE_FIFO
 
 type domain_create_iommu_opts =
 	| IOMMU_NO_SHAREPT
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index e64907df8e7e..8bb0f9e14b8e 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -58,6 +58,7 @@ type domain_create_flag =
   | CDF_XS_DOMAIN
   | CDF_IOMMU
   | CDF_NESTED_VIRT
+  | CDF_DISABLE_FIFO
 
 type domain_create_iommu_opts =
   | IOMMU_NO_SHAREPT
diff --git a/xen/common/domain.c b/xen/common/domain.c
index f748806a450b..75aed7fd5b01 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -307,7 +307,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
          ~(XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap |
            XEN_DOMCTL_CDF_s3_integrity | XEN_DOMCTL_CDF_oos_off |
            XEN_DOMCTL_CDF_xs_domain | XEN_DOMCTL_CDF_iommu |
-           XEN_DOMCTL_CDF_nested_virt) )
+           XEN_DOMCTL_CDF_nested_virt | XEN_DOMCTL_CDF_disable_fifo) )
     {
         dprintk(XENLOG_INFO, "Unknown CDF flags %#x\n", config->flags);
         return -EINVAL;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 54b2e2550e93..6a96ccf56c3a 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1193,10 +1193,27 @@ static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
     return ret;
 }
 
+static bool is_fifo_op(int cmd)
+{
+    switch ( cmd )
+    {
+    case EVTCHNOP_init_control:
+    case EVTCHNOP_expand_array:
+    case EVTCHNOP_set_priority:
+        return true;
+    default:
+        return false;
+    }
+}
+
 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc;
 
+    if ( (current->domain->options & XEN_DOMCTL_CDF_disable_fifo) &&
+         is_fifo_op(cmd) )
+        return -ENOSYS;
+
     switch ( cmd )
     {
     case EVTCHNOP_alloc_unbound: {
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 666aeb71bf1b..70701c59d053 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
 #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
 #define _XEN_DOMCTL_CDF_nested_virt   6
 #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
+#define _XEN_DOMCTL_CDF_disable_fifo  7
+#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)
 
 /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
-#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
+#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_disable_fifo
 
     uint32_t flags;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:26:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:26:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36906.68990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdxU-0004gd-25; Tue, 24 Nov 2020 19:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36906.68990; Tue, 24 Nov 2020 19:26:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khdxT-0004gW-VF; Tue, 24 Nov 2020 19:26:35 +0000
Received: by outflank-mailman (input) for mailman id 36906;
 Tue, 24 Nov 2020 19:26:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khdxS-0004gO-HH; Tue, 24 Nov 2020 19:26:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khdxS-0004HN-7F; Tue, 24 Nov 2020 19:26:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khdxR-0004bZ-Tf; Tue, 24 Nov 2020 19:26:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khdxR-0007ME-QW; Tue, 24 Nov 2020 19:26:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mVh7tV/6ywCTeUt9h726aC3zAjl8GZkvSnxUIvRhGyc=; b=uzMgdbRJthTbNn+1PtqU5TuljF
	2mWZcT0Orejphm+ppniMFxCOnHNBqZ5Lzw3+3dm5LbnIXrlC0vA+gVsx6nokUG7Dd7C+J6Cw6R+Mz
	FbnRVKWZtRSlUyiBJ3zMmfSsPD18e/lGS9PW/djQLccAb8yP3JHu7DXEQZkuTzgnuvfA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156981-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156981: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d5beb3140f91b1c8a3d41b14d729aefa4dcc58bc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 19:26:33 +0000

flight 156981 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156981/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d5beb3140f91b1c8a3d41b14d729aefa4dcc58bc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  115 days
Failing since        152366  2020-08-01 20:49:34 Z  114 days  194 attempts
Testing same since   156981  2020-11-24 07:21:48 Z    0 days    1 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 683988 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36921.69005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1V-0005bj-QW; Tue, 24 Nov 2020 19:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36921.69005; Tue, 24 Nov 2020 19:30:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1V-0005bc-NX; Tue, 24 Nov 2020 19:30:45 +0000
Received: by outflank-mailman (input) for mailman id 36921;
 Tue, 24 Nov 2020 19:30:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khe1U-0005bN-9w
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khe1U-0004Oc-3Y; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfR-0000r4-8n; Tue, 24 Nov 2020 19:07:57 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=RtfTTUWPnmWV4G9WLn/ouwRay4gNP0ppXh1HWl5O3Ig=; b=nDbiJmy76A4OpVVMcm2mKFC5uB
	Dm6qQUR8PN0JLG/05KzzqwFGTGWOZ/2EDFlJXreAQJsvq0SduUXHy9m00LbrIt7uUUypZsIhe5HIl
	RvRG6LVfLoNQpoC3knLSZsL5XXdNLFxx/xRRTB0QIZzwy94GM9tPPf3YSRfc2V5KhIzQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 10/13] viridian: log initial invocation of each type of hypercall
Date: Tue, 24 Nov 2020 19:07:41 +0000
Message-Id: <20201124190744.11343-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.
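
The "log only on the first occurrence" pattern described above can be sketched stand-alone as follows. This is an illustrative user-space sketch, not the Xen code itself: the names (hcall_seen, test_and_set, handle_hypercall) are hypothetical, and the real patch uses the hypervisor's atomic test_and_set_bit() on a per-domain bitmap.

```c
#include <stdio.h>
#include <stdbool.h>

/* One bit per hypercall type; per-domain in the real patch. */
enum { HCALL_SPIN_WAIT, HCALL_FLUSH, HCALL_NR };

/* Simplified, non-atomic stand-in for test_and_set_bit(): returns the
 * previous state of the bit and sets it. */
static bool test_and_set(unsigned long *mask, unsigned int bit)
{
    bool was_set = (*mask >> bit) & 1;

    *mask |= 1UL << bit;
    return was_set;
}

static unsigned long hcall_seen;

static void handle_hypercall(unsigned int code, const char *name)
{
    /* Emit the log line only on the clear -> set transition. */
    if ( !test_and_set(&hcall_seen, code) )
        printf("first invocation of %s\n", name);

    /* ...dispatch to the usual handler... */
}
```

Calling handle_hypercall() twice with the same code logs once; a different code logs again, so the log shows at most one line per hypercall type per domain.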

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - Use DECLARE_BITMAP() for 'hypercall_flags'
 - Use an enum for _HCALL_* values
---
 xen/arch/x86/hvm/viridian/viridian.c | 21 +++++++++++++++++++++
 xen/include/asm-x86/hvm/viridian.h   | 10 ++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 93865be5797a..3c18908b7b35 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -933,6 +933,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
+    struct viridian_domain *vd = currd->arch.hvm.viridian;
     int mode = hvm_guest_x86_mode(curr);
     unsigned long input_params_gpa, output_params_gpa;
     int rc = 0;
@@ -962,6 +963,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
     switch ( input.call_code )
     {
     case HVCALL_NOTIFY_LONG_SPIN_WAIT:
+        if ( !test_and_set_bit(_HCALL_spin_wait, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_NOTIFY_LONG_SPIN_WAIT\n",
+                   currd);
+
         /*
          * See section 14.5.1 of the specification.
          */
@@ -970,22 +975,38 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+        if ( !test_and_set_bit(_HCALL_flush, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST\n",
+                   currd);
+
         rc = hvcall_flush(&input, &output, input_params_gpa,
                           output_params_gpa);
         break;
 
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
     case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        if ( !test_and_set_bit(_HCALL_flush_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX\n",
+                   currd);
+
         rc = hvcall_flush_ex(&input, &output, input_params_gpa,
                              output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI:
+        if ( !test_and_set_bit(_HCALL_ipi, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI\n",
+                   currd);
+
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
         break;
 
     case HVCALL_SEND_IPI_EX:
+        if ( !test_and_set_bit(_HCALL_ipi_ex, vd->hypercall_flags) )
+            printk(XENLOG_G_INFO "%pd: VIRIDIAN HVCALL_SEND_IPI_EX\n",
+                   currd);
+
         rc = hvcall_ipi_ex(&input, &output, input_params_gpa,
                            output_params_gpa);
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index cbf77d9c760b..4c8ff6e80b6f 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -55,10 +55,20 @@ struct viridian_time_ref_count
     int64_t off;
 };
 
+enum {
+    _HCALL_spin_wait,
+    _HCALL_flush,
+    _HCALL_flush_ex,
+    _HCALL_ipi,
+    _HCALL_ipi_ex,
+    _HCALL_nr /* must be last */
+};
+
 struct viridian_domain
 {
     union hv_guest_os_id guest_os_id;
     union hv_vp_assist_page_msr hypercall_gpa;
+    DECLARE_BITMAP(hypercall_flags, _HCALL_nr);
     struct viridian_time_ref_count time_ref_count;
     struct viridian_page reference_tsc;
 };
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36923.69030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1X-0005dr-Ca; Tue, 24 Nov 2020 19:30:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36923.69030; Tue, 24 Nov 2020 19:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1X-0005dk-8n; Tue, 24 Nov 2020 19:30:47 +0000
Received: by outflank-mailman (input) for mailman id 36923;
 Tue, 24 Nov 2020 19:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khe1V-0005bX-7u
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khe1U-0004Oe-6T; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfS-0000r4-Q4; Tue, 24 Nov 2020 19:07:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khe1V-0005bX-7u
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=aEBO33IoPtgfp4ROpOAovObbOPgE7wiSsLQNJbsivrE=; b=g+b9RfjYbai05izai8aY8mU7QG
	2ypBZdv6GtWwikgt0qJgTIxANwyidRIyRzauSJFicVbTOGDmuBrbX7+FMWQp8/0xajgItlT14hMIO
	nfkuAFubCjyFLzPKp6GYOW9iDcjyBvLid5/H/PMYc0XY8KKyaTYQRhXlbSAhwF/6XPko=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khe1U-0004Oe-6T; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfS-0000r4-Q4; Tue, 24 Nov 2020 19:07:59 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 11/13] viridian: add a new '_HVMPV_ex_processor_masks' bit into HVM_PARAM_VIRIDIAN...
Date: Tue, 24 Nov 2020 19:07:42 +0000
Message-Id: <20201124190744.11343-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and advertise ExProcessorMasks support if it is set.

Support is advertised by setting bit 11 in CPUID:40000004:EAX.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/viridian/viridian.c | 3 +++
 xen/include/public/hvm/params.h      | 7 ++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 3c18908b7b35..32bf58db3a73 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -84,6 +84,7 @@ typedef union _HV_CRASH_CTL_REG_CONTENTS
 #define CPUID4A_MSR_BASED_APIC         (1 << 3)
 #define CPUID4A_RELAX_TIMER_INT        (1 << 5)
 #define CPUID4A_SYNTHETIC_CLUSTER_IPI  (1 << 10)
+#define CPUID4A_EX_PROCESSOR_MASKS     (1 << 11)
 
 /* Viridian CPUID leaf 6: Implementation HW features detected and in use */
 #define CPUID6A_APIC_OVERLAY    (1 << 0)
@@ -197,6 +198,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
             res->a |= CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
+        if ( viridian_feature_mask(d) & HVMPV_ex_processor_masks )
+            res->a |= CPUID4A_EX_PROCESSOR_MASKS;
 
         /*
          * This value is the recommended number of attempts to try to
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 0e3fdca09646..3b0a0f45da53 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -164,6 +164,10 @@
 #define _HVMPV_hcall_ipi 9
 #define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
 
+/* Enable ExProcessorMasks */
+#define _HVMPV_ex_processor_masks 10
+#define HVMPV_ex_processor_masks (1 << _HVMPV_ex_processor_masks)
+
 #define HVMPV_feature_mask \
         (HVMPV_base_freq | \
          HVMPV_no_freq | \
@@ -174,7 +178,8 @@
          HVMPV_crash_ctl | \
          HVMPV_synic | \
          HVMPV_stimer | \
-         HVMPV_hcall_ipi)
+         HVMPV_hcall_ipi | \
+         HVMPV_ex_processor_masks)
 
 #endif
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36922.69011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1W-0005cA-5D; Tue, 24 Nov 2020 19:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36922.69011; Tue, 24 Nov 2020 19:30:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1V-0005c3-VL; Tue, 24 Nov 2020 19:30:45 +0000
Received: by outflank-mailman (input) for mailman id 36922;
 Tue, 24 Nov 2020 19:30:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khe1V-0005bS-31
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khe1U-0004Og-8r; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfT-0000r4-Qp; Tue, 24 Nov 2020 19:08:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khe1V-0005bS-31
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/tzXLkeYgeohzzpZOobPIXPIDOQTKzNBQ+V78872kJQ=; b=5MpQnWEXpMsbH1kAHjX6u1R33y
	latsjEadbNLKX1T8yUH3KDjA/dO7QklcKgmv2lzbV0qMOXZT0VwLHlLE3EN4tnJUnCYQqNYcRYc9y
	oFkbxRAz8xuBg2g5RivKV7r5k5bCnSMfFxIg/1qdaLXgu9V73EJumdfr2+dMTddZBQo0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khe1U-0004Og-8r; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfT-0000r4-Qp; Tue, 24 Nov 2020 19:08:00 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 12/13] xl / libxl: add 'ex_processor_mask' into 'libxl_viridian_enlightenment'
Date: Tue, 24 Nov 2020 19:07:43 +0000
Message-Id: <20201124190744.11343-13-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Adding the new value into the enumeration makes it immediately available
to xl, so this patch adjusts the xl.cfg(5) documentation accordingly.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 docs/man/xl.cfg.5.pod.in         | 8 ++++++++
 tools/include/libxl.h            | 7 +++++++
 tools/libs/light/libxl_types.idl | 1 +
 tools/libs/light/libxl_x86.c     | 3 +++
 4 files changed, 19 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1fff..3f0f8de1e988 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2318,6 +2318,14 @@ This set incorporates use of a hypercall for interprocessor interrupts.
 This enlightenment may improve performance of Windows guests with multiple
 virtual CPUs.
 
+=item B<ex_processor_masks>
+
+This set enables new hypercall variants taking a variably-sized sparse
+B<Virtual Processor Set> as an argument, rather than a simple 64-bit
+mask. Hence this enlightenment must be specified for guests with more
+than 64 vCPUs if B<hcall_remote_tlb_flush> and/or B<hcall_ipi> are also
+specified.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446e8..eaffccb30f37 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -444,6 +444,13 @@
  */
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
+/*
+ * LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS indicates that the
+ * 'ex_processor_masks' value is present in the viridian enlightenment
+ * enumeration.
+ */
+#define LIBXL_HAVE_VIRIDIAN_EX_PROCESSOR_MASKS 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f39978..05324736b744 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -238,6 +238,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (7, "synic"),
     (8, "stimer"),
     (9, "hcall_ipi"),
+    (10, "ex_processor_masks"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index e18274cc10e2..86d272999d67 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -366,6 +366,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_IPI))
         mask |= HVMPV_hcall_ipi;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_EX_PROCESSOR_MASKS))
+        mask |= HVMPV_ex_processor_masks;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:30:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36924.69040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1Y-0005en-0y; Tue, 24 Nov 2020 19:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36924.69040; Tue, 24 Nov 2020 19:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe1X-0005eW-IS; Tue, 24 Nov 2020 19:30:47 +0000
Received: by outflank-mailman (input) for mailman id 36924;
 Tue, 24 Nov 2020 19:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>) id 1khe1W-0005cT-6c
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khe1U-0004Ok-B6; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com
 ([86.183.162.145] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1khdfV-0000r4-5F; Tue, 24 Nov 2020 19:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khe1W-0005cT-6c
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:30:46 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=5cd/xhDogl5UDMyDsWm0T9i011ok+fe44oXnYZ42O/I=; b=Qmp21wK5h1oc0ABJmUzsUyrNLh
	BKxtVu8O1r2cZKJUdkJ33s7n+l+d+S7+GeB/Np/G6JDTVP+3O8dwmE+VYhl/7dwz9RzyWL1j0r2D6
	42mY+Y1R4hhg3uW0tgsZqOR8CJkVfG6a+eVhJOYWGqGQh7rkVZIgHMUjBq5PCCHlDA14=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khe1U-0004Ok-B6; Tue, 24 Nov 2020 19:30:44 +0000
Received: from host86-183-162-145.range86-183.btcentralplus.com ([86.183.162.145] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1khdfV-0000r4-5F; Tue, 24 Nov 2020 19:08:01 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 13/13] x86: replace open-coded occurrences of sizeof_field()...
Date: Tue, 24 Nov 2020 19:07:44 +0000
Message-Id: <20201124190744.11343-14-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201124190744.11343-1-paul@xen.org>
References: <20201124190744.11343-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... with macro evaluations, now that it is available.

A recent patch imported the sizeof_field() macro from Linux. This patch makes
use of it in places where the construct is currently open-coded.
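
For reference, the construct being replaced and the macro replacing it are equivalent; a minimal stand-alone sketch (the struct and values here are invented for illustration, but the macro body matches the Linux-style definition):

```c
#include <stddef.h>

/* Size of a struct member without needing an instance of the struct:
 * dereference a NULL-typed pointer inside sizeof(), which is never
 * evaluated, so this is well-defined. */
#define sizeof_field(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))

struct example {
    char tag[16];
    unsigned long long payload;
};

/* The open-coded spelling this series replaces: */
static const size_t open_coded = sizeof(((struct example *)0)->tag);

/* The macro spelling: */
static const size_t via_macro = sizeof_field(struct example, tag);
```

Both expressions are compile-time constants, so they remain usable in BUILD_BUG_ON()-style static assertions, as the setup.c hunks below rely on.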

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu/vpmu_intel.c |  4 ++--
 xen/arch/x86/setup.c          | 16 ++++++++--------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 75aa11c6adec..6e97ce790037 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -90,8 +90,8 @@ static uint64_t __read_mostly global_ovf_ctrl_mask, global_ctrl_mask;
 static unsigned int __read_mostly regs_sz;
 /* Offset into context of the beginning of PMU register block */
 static const unsigned int regs_off =
-        sizeof(((struct xen_pmu_intel_ctxt *)0)->fixed_counters) +
-        sizeof(((struct xen_pmu_intel_ctxt *)0)->arch_counters);
+    sizeof_field(struct xen_pmu_intel_ctxt, fixed_counters) +
+    sizeof_field(struct xen_pmu_intel_ctxt, arch_counters);
 
 /*
  * QUIRK to workaround an issue on various family 6 cpus.
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 44c04e273537..30d6f375a3af 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1617,19 +1617,19 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     total_pages = nr_pages;
 
     /* Sanity check for unwanted bloat of certain hypercall structures. */
-    BUILD_BUG_ON(sizeof(((struct xen_platform_op *)0)->u) !=
-                 sizeof(((struct xen_platform_op *)0)->u.pad));
-    BUILD_BUG_ON(sizeof(((struct xen_domctl *)0)->u) !=
-                 sizeof(((struct xen_domctl *)0)->u.pad));
-    BUILD_BUG_ON(sizeof(((struct xen_sysctl *)0)->u) !=
-                 sizeof(((struct xen_sysctl *)0)->u.pad));
+    BUILD_BUG_ON(sizeof_field(struct xen_platform_op, u) !=
+                 sizeof_field(struct xen_platform_op, u.pad));
+    BUILD_BUG_ON(sizeof_field(struct xen_domctl, u) !=
+                 sizeof_field(struct xen_domctl, u.pad));
+    BUILD_BUG_ON(sizeof_field(struct xen_sysctl, u) !=
+                 sizeof_field(struct xen_sysctl, u.pad));
 
     BUILD_BUG_ON(sizeof(start_info_t) > PAGE_SIZE);
     BUILD_BUG_ON(sizeof(shared_info_t) > PAGE_SIZE);
     BUILD_BUG_ON(sizeof(struct vcpu_info) != 64);
 
-    BUILD_BUG_ON(sizeof(((struct compat_platform_op *)0)->u) !=
-                 sizeof(((struct compat_platform_op *)0)->u.pad));
+    BUILD_BUG_ON(sizeof_field(struct compat_platform_op, u) !=
+                 sizeof_field(struct compat_platform_op, u.pad));
     BUILD_BUG_ON(sizeof(start_info_compat_t) > PAGE_SIZE);
     BUILD_BUG_ON(sizeof(struct compat_vcpu_info) != 64);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:37:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:37:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36953.69053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe82-0006BP-Kw; Tue, 24 Nov 2020 19:37:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36953.69053; Tue, 24 Nov 2020 19:37:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khe82-0006BI-Hy; Tue, 24 Nov 2020 19:37:30 +0000
Received: by outflank-mailman (input) for mailman id 36953;
 Tue, 24 Nov 2020 19:37:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khe81-0006BD-5T
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:37:29 +0000
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c205fd3-08c4-4ce3-b808-880f517c869b;
 Tue, 24 Nov 2020 19:37:28 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id 7so17013114qtp.1
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 11:37:28 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id t126sm68819qkh.133.2020.11.24.11.37.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 11:37:27 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khe81-0006BD-5T
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:37:29 +0000
X-Inumbo-ID: 8c205fd3-08c4-4ce3-b808-880f517c869b
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8c205fd3-08c4-4ce3-b808-880f517c869b;
	Tue, 24 Nov 2020 19:37:28 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id 7so17013114qtp.1
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 11:37:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=MOALJb5ZWPUh3aGAOfM0OVshborSGP7hJGfOxgGOGv8=;
        b=F5LkT6+P6W0m+3hlLX1OTdLAaWDWNPd5qU0zVXgoS6irqhQG1KPClCgLDacxdrz201
         tuSkOD3Pi7npw/jwH1Nvh11d7rq8LRNxgjQNm41OqnMYV7uCJqaXEwyai41Uj47dAbX3
         JK7du1QUlRFM6taLlQQ447SfHtDJsH8zezvU1GOUPlebgL174F6/7EsV7OtuCUoF3f9x
         Bkfc9gd/0KWs5FVV7usVufNl9+blVTvGwiyUy7osMlVAVbdFXgSTZohKeNl02jc4WQz4
         A0OAp0Qtsho7RAxf4m3roj9E2ZLkZ4zwUy7kn8kRWPQltha8Qf/YzRNO/2ELqU44tB/d
         HyAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=MOALJb5ZWPUh3aGAOfM0OVshborSGP7hJGfOxgGOGv8=;
        b=JV2CP7aXQ3CVx3f0qUls4kOlKnuOiCRlowcPfKW+WWQqbl//XTURuZRr2D2rOkF/1T
         gf1V1wvIfbTGztYXliIXi8WR+4uxkGhkDvkO0WJMOzZRTt0/9ESCTI/qB+/8AZ1Sndgl
         Xo/+b1wFB5RMYmavO1Arkcyl1c5KZcLpmJ1cMfhuoBggp7T7cyNlv4RLvqEwucF7RcAR
         G+gz8GBPfjJhbSvtdU/nTnmuYAv0W54b2KXvDE2In5Ty590dq3Rv2Uj5NFlGQL6JUkav
         HOYAJsT1HQIlKC7TVYdZvXtIQEnKA/XVExjDeJhgt6aGAswGvad/z7oo2ov60toso9nj
         VOsg==
X-Gm-Message-State: AOAM533pdOWWBg2zXa2fpTx+nR5Im/bWVEKhn6Hz6gPgczLkXcnNGh0T
	KDm9ZY8YyO5u1Z7k10/Ac/M=
X-Google-Smtp-Source: ABdhPJybxY5/E0+KDq94c9P/N9IKjFDU+Ct8NW6WiHsGOrtJ7iW5aHJE2Msrh+VJOHLo1ru6UJnxSg==
X-Received: by 2002:ac8:7192:: with SMTP id w18mr6129290qto.149.1606246647881;
        Tue, 24 Nov 2020 11:37:27 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id t126sm68819qkh.133.2020.11.24.11.37.27
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 11:37:27 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 14:37:05 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 23/45] block: remove i_bdev
Message-ID: <X71g4Tm+3RiRg4Gf@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-24-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-24-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:29PM +0100, Christoph Hellwig wrote:
> Switch the block device lookup interfaces to directly work with a dev_t
> so that struct block_device references are only acquired by the
> blkdev_get variants (and the blk-cgroup special case).  This means that
> we not don't need an extra reference in the inode and can generally
     ^
     now
> simplify handling of struct block_device to keep the lookups contained
> in the core block layer code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
...
> @@ -1689,14 +1599,12 @@ static int blkdev_open(struct inode * inode, struct file * filp)
>  	if ((filp->f_flags & O_ACCMODE) == 3)
>  		filp->f_mode |= FMODE_WRITE_IOCTL;
>  
> -	bdev = bd_acquire(inode);
> -	if (bdev == NULL)
> -		return -ENOMEM;
> -
> +	bdev = blkdev_get_by_dev(inode->i_rdev, filp->f_mode, filp);
> +	if (IS_ERR(bdev))
> +		return PTR_ERR(bdev);
>  	filp->f_mapping = bdev->bd_inode->i_mapping;
>  	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
> -
> -	return blkdev_get(bdev, filp->f_mode, filp);
> +	return 0;
>  }

I was wondering whether losing the stale bdev flushing in bd_acquire() would
cause user-visible behavior changes, but I can't see how it would, given that
userland has no way of holding onto a specific instance of a block inode.
Maybe it's worth mentioning in the commit message?

Other than that, for the block part:

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 19:47:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 19:47:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36962.69066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kheHR-00079l-Jy; Tue, 24 Nov 2020 19:47:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36962.69066; Tue, 24 Nov 2020 19:47:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kheHR-00079e-Gx; Tue, 24 Nov 2020 19:47:13 +0000
Received: by outflank-mailman (input) for mailman id 36962;
 Tue, 24 Nov 2020 19:47:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1kheHQ-00079Z-AX
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 19:47:12 +0000
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94d6033c-0b56-42f6-aaa0-f45425fd868e;
 Tue, 24 Nov 2020 19:47:11 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id y197so22276243qkb.7
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 11:47:11 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id 137sm123792qkj.109.2020.11.24.11.47.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 11:47:10 -0800 (PST)
X-Inumbo-ID: 94d6033c-0b56-42f6-aaa0-f45425fd868e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=V2TOjLdoUaYrySYqHg0oeA6ickZUCz64YonJi1MX7zw=;
        b=jOV4zvc5ykqwIR5EZxEdhoTt8gbdvwT6+6X/TSUVT/z8HYM0C2tjvbbgLMctpzeJXh
         RGgnK/f6uWChM4BxAbOX4p18Fwq1/chEXov7aPAYOwuKTjdA655sPluEae6103/B37UZ
         mtHW9DvASGebX6RreWH7qhpVzv5BxklJgKQSxfx5joha57iIHUUTiDQJHNfLS9atfGSt
         ExNgjoQWPu/xgn5sRgfmq3jVVaBwbHob8XGT+JoMXjhTpfpEBrU2/+S555WpdkNVIh4j
         Jpy0MqgRWGJT79DbBEN/JVT9ea84He6RSeOcPWGIlVmK+xMy8dzyAD0z3NQbnkk9QV/T
         mQ+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=V2TOjLdoUaYrySYqHg0oeA6ickZUCz64YonJi1MX7zw=;
        b=dLxSsSEy2+QbBpZAI96+RpEfNVFV8BfdAh+9bBX4gqwJd8h3yiYITH1NMaXo0q9ti2
         7FajpOitIDFU8kjLoNb37pkwOZLTvikPzoV+NqMPEU/bOqIlKpvkdYWN1OCTTcP+qe+W
         m66tyX2wzUtwTqpePbSglTP2K6qhDjeoKSMDnYPCV6G6Zc72jKZiA8T4YMauQZQAK6bW
         +XrCMlB0Wcy9yc5XMNiUqd7Two9ZbDfBe5L828XiTK/9MIfAf0UnztfFZ0IdyAB88eAl
         VI0/c7HEEaXDWyB1gb0pxqUFIqj7z2zL0dk4FbtKKb+NxlHO+r50OJTD3T9q6cGextXm
         tNsQ==
X-Gm-Message-State: AOAM531ztIgjrHXbFQx0KAZG/PyOalerqqvebrYJDl0Gr/pqLu4aNyJ/
	eHBrO2VJZ+xJDZYCtN0wEXo=
X-Google-Smtp-Source: ABdhPJzCbEzsnsSHZbAP8aBMoagyKuXkYbNxYHgU470rrYkN9D5aTPe0QJyzDBb8iwv2DZpxdq68Ww==
X-Received: by 2002:a37:6cd:: with SMTP id 196mr3775536qkg.96.1606247231211;
        Tue, 24 Nov 2020 11:47:11 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id 137sm123792qkj.109.2020.11.24.11.47.10
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 11:47:10 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 14:46:47 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 24/45] blk-cgroup: stop abusing get_gendisk
Message-ID: <X71jJywIZTSxLoqQ@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-25-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-25-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:30PM +0100, Christoph Hellwig wrote:
> Properly open the device instead of relying on deep internals by
> using get_gendisk.  Note that this uses FMODE_NDELAY without either
> FMODE_READ or FMODE_WRITE, which is a special open mode to allow
> for opening without media access, thus avoiding unexpected interactions
> especially on removable media.

I'm not sure FMODE_NDELAY does that. For example, sd_open() does a media
change check and full revalidation, including disk spin-up, regardless of
NDELAY, and it would be odd (and could lead to nasty surprises) to require
cgroup configuration updates to wait for SCSI EH.

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:19:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:19:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36970.69078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfiW-0006tN-Kv; Tue, 24 Nov 2020 21:19:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36970.69078; Tue, 24 Nov 2020 21:19:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfiW-0006tG-Hj; Tue, 24 Nov 2020 21:19:16 +0000
Received: by outflank-mailman (input) for mailman id 36970;
 Tue, 24 Nov 2020 21:19:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khfiV-0006t9-AO
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:19:15 +0000
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b058068b-e21d-43cd-aca2-3677d21dc479;
 Tue, 24 Nov 2020 21:19:14 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id k3so4513359qvz.4
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:19:14 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id c27sm387614qkk.57.2020.11.24.13.19.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:19:12 -0800 (PST)
X-Inumbo-ID: b058068b-e21d-43cd-aca2-3677d21dc479
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=sg9SO3UStMfqHkCvNI1YBGGkjTmKJ7UaheqYby9uxnM=;
        b=OzhyL5qOUmNMCmaqmA0AJ7YTZnOa5s8lOvdGSd5gsRFwIjVjcK9RMkEdmOZVvPKHkC
         HEfwzTPFGhRpcwv4Zdi1eKZAQCSJDqL9242nm6t+004qxnjr2Wo1RG+dvaQzjYpDTlFJ
         y4MEeMckfEdVhdL9TeWxoL3uJseLhfNpPNLJKI+4lWddjQrQSYtmPS6DbLtZKSCyuI0M
         VPOnBALIPZDz9HaljCXz/QtzQMoFkORQ/oSOcQG0yJ5cuTGS83XWqbo7FaIck74bBkz3
         zSrrMusanZp5T2dH6BfcuYD799YU+BVDp+VbnKu/pAzjL4fvbG44Apq0OxYfntbUtAy3
         IgsQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=sg9SO3UStMfqHkCvNI1YBGGkjTmKJ7UaheqYby9uxnM=;
        b=nu/2QHGJS/cDrwGJQ/Asj8nz7NayM81EDS7p1P6xtuM+6bjbkBOGSywiIPGibOjqJ9
         ECw34RnSPPHWaSx637L8jyR2yO/Z3sgrszv572wxjpb85mtnLcWNOBKtQIHIniRToevd
         gt5OhdvHh9qtUTCDwEGsKz5Vur7QjWJObVEuyTO/dxZS9T2/X2xMZXNF79aGcLsf3q1s
         cs9/IWTUdQzGRgJpbjkDQAXTyMKkvCsm1uYl/fO1w8+wtxHPOM6n/rPfb0VnJ9h3Ma0E
         9BjT/sG5Htybq3qWHTptnp+Ho3Jf/+G/UYPA3XNsPaec9Qo3U2DTLmaaHPI3WqHWGPN+
         tnKQ==
X-Gm-Message-State: AOAM533nXi2oRgi5LcuSNaeap2NPikifyACAPpgcNo89h7KsAE6eu2ea
	xvPJZGnckrMyJ3/iHWxigz0=
X-Google-Smtp-Source: ABdhPJymsM7t3QC8HGlAx0p+5Wt4X/uLuUZsX3zmVjKyK1/8swtwKOE7/+uzSFJWv6qwdRjGjoCR5w==
X-Received: by 2002:a0c:fa08:: with SMTP id q8mr548456qvn.25.1606252753574;
        Tue, 24 Nov 2020 13:19:13 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id c27sm387614qkk.57.2020.11.24.13.19.12
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:19:12 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 16:18:49 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 25/45] block: reference struct block_device from struct
 hd_struct
Message-ID: <X714udEyPuGarVYp@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-26-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-26-hch@lst.de>

Hello,

Please see lkml.kernel.org/r/X708BTJ5njtbC2z1@mtj.duckdns.org for a few nits
on the previous version.

On Tue, Nov 24, 2020 at 02:27:31PM +0100, Christoph Hellwig wrote:
> To simplify block device lookup and a few other upcoming areas, make sure
> that we always have a struct block_device available for each disk and
> each partition.  The only downside of this is that each device and
> partition uses a little more memory.  The upside will be that a lot of
> code can be simplified.
> 
> With that, all we need to look up the block device is to look up the inode
> and do a few sanity checks on the gendisk, instead of the separate lookup
> for the gendisk.  These checks are in a new RCU critical section and
> the disk is now freed using kfree_rcu().

I might be confused, but I'm wondering whether RCU is needed. It's currently
used to ensure that the gendisk is accessible in the blkdev_get path, but
wouldn't it be possible to simply pin the gendisk from the block_devices? The
gendisk and hd_structs hold the base refs of the block_devices, and in turn
the block_devices pin the gendisk. When the gendisk gets deleted, it puts the
base refs of the block_devices but stays around while the block_devices are
being accessed.

Also, would it make sense to separate out the lookup_sem removal? I *think*
it's there to ensure that the same bdev doesn't get associated with old and
new gendisks at the same time, but I can't wrap my head around exactly how it
works. I can see that it may not be needed once the lifetimes of the gendisk
and block_devices are tied together, but that may warrant a bit more
explanation.

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36975.69090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfjA-0006zB-Ue; Tue, 24 Nov 2020 21:19:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36975.69090; Tue, 24 Nov 2020 21:19:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfjA-0006z4-Qv; Tue, 24 Nov 2020 21:19:56 +0000
Received: by outflank-mailman (input) for mailman id 36975;
 Tue, 24 Nov 2020 21:19:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khfj9-0006yv-9O
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:19:55 +0000
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80a224b5-d220-433c-b83b-4e8993cfc69c;
 Tue, 24 Nov 2020 21:19:54 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id ec16so11400644qvb.0
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:19:54 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id l46sm308143qta.44.2020.11.24.13.19.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:19:53 -0800 (PST)
X-Inumbo-ID: 80a224b5-d220-433c-b83b-4e8993cfc69c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=A2Ux7RQ7mBCZEojzGvR0WDuZtfXBN33NUKSWm/nAx2s=;
        b=uiOjPjtUtQlGuTii+qORJIGUPOgnFY1nUtEhKLlffzZur6sGJm+gKH90kLKTftzjMY
         gUznZxSbf+gcBmxTds5RUXjrVxrE+Fq2EmBwcWZB+pJf3BS5L+QJ7kv+CxvIAOrqPzEX
         dMqmSibOB0dtykbXe/2E2zoCxbKuzPbH5RmI+Xfb2tQjieKxRGRgQgps4Mrmdf0e2l3a
         eNnhJnzPz0bblYg/4cAkndkSQkON8xjuTRqoNexKjm8D+3aqj4P9Sj0gjlauNmeONhmf
         5kvr4Ddaona2AGKfmkN2405koPZEL1O1mk4yy01pHWJJDJlpvcRl+PE1xhRDe1lIBY+4
         jUGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=A2Ux7RQ7mBCZEojzGvR0WDuZtfXBN33NUKSWm/nAx2s=;
        b=WlipkRliA9Kg4Pw9P3xKbEVdxGmFxVsO3rhL66eGksgV9ix8W00tMhqSW2ArrJ1Kw2
         EhnbXhiQYb+l/EegK+KGWW+VgZ6w2ELfkWE42+AVojb3IyruGhfMFLmXe/l20GwW9uRZ
         GOxAj72jJ2MvnedoRqJwUFlU5p2JTh/yQKfMa/krq9HsRHeH+ddqML6nfCEyDyDynnUq
         ZfD1AYw+jgmTnmXT1+sTtmVsGiU3dBCeequlrT1jDdA734JvMrnSgZb8XC/gDOfH0Xkh
         cNw/kj0uVgMlUGci4p6MmOaVPDAGGgdhhpdWGoabkF4dQW4lcW4OJzq62SsUON++nlvI
         Ozyw==
X-Gm-Message-State: AOAM5333fsg+fdgghtHjjSIWIzBYKcOU3QLhj81DvNuueD7M+V6P9IuO
	gAVLNNjZMjCDZRDku0z1AVQ=
X-Google-Smtp-Source: ABdhPJxvWSMIcF1QvtVmsROEnFEfbgneWE4I12McWSmR7ItrCaZw+O6oSDwh472sj7fwnSIDMbgqhQ==
X-Received: by 2002:a05:6214:443:: with SMTP id cc3mr455144qvb.53.1606252794270;
        Tue, 24 Nov 2020 13:19:54 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id l46sm308143qta.44.2020.11.24.13.19.53
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:19:53 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 16:19:31 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 26/45] block: remove ->bd_contains
Message-ID: <X7144xvjxqhopOck@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-27-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-27-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:32PM +0100, Christoph Hellwig wrote:
> Now that each hd_struct has a reference to the corresponding
> block_device, there is no need for the bd_contains pointer.  Add
> a bdev_whole() helper to look up the whole device block_device
> structure instead.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:20:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36981.69102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfjX-0007k4-7W; Tue, 24 Nov 2020 21:20:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36981.69102; Tue, 24 Nov 2020 21:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfjX-0007jx-3y; Tue, 24 Nov 2020 21:20:19 +0000
Received: by outflank-mailman (input) for mailman id 36981;
 Tue, 24 Nov 2020 21:20:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khfjV-0007ji-KC
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:20:17 +0000
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc2ab0f1-843f-468a-b676-88d6cfdc8f0b;
 Tue, 24 Nov 2020 21:20:17 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id dm12so461034qvb.3
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:20:17 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id o16sm451985qkg.27.2020.11.24.13.20.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:20:16 -0800 (PST)
X-Inumbo-ID: bc2ab0f1-843f-468a-b676-88d6cfdc8f0b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=3HC7ZqQanyIo8VeBYdMEle4L/4ENFYEM5MkKt8pSh/w=;
        b=SBpFqtt8PXbzCveB3YbrqNUj//b1rdDV1EqJm8ZT0/PCMKGkmFs6PzBliBJcdVTLfk
         tv/ZuM+3J7XM+n5fEUuohYu6uKi1fCr78XSUa/5RUJdfK+5UfN3C0U3yX9T9Q54Q5SaR
         vVOVCypSCFl85W7pD0ksSCQvKz2n77IAVCPEhA9UoBfthB9BvyYs6pYTaIL0wmQf9aCu
         xcqLyNke/DBRRYIHLihURVxMheAgoXZ/e0RlaCkV/7/TeYdWzBYlD3KA6yr3YBRfdQIT
         T2cBKt53JFvRl9mTVJp+iF+XYLyh3G75CH9aRMRIV/K8O2t4jUxGxv8CwdtQFQ8WaWa/
         HyDQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=3HC7ZqQanyIo8VeBYdMEle4L/4ENFYEM5MkKt8pSh/w=;
        b=Ejgbm/JT/OykUEzjtW/KJBRgBiSU+uvTtj1vouOfOFOdfKMHT975QK8/18rwAVIQ+C
         Jdbht3lN2SYUAw5lLkkNgByyRvkKLGbI8D/qx4M+nJNrEkrFr1FY7Ml1huCJJVkJt+b2
         qu0VIEWuYPEC2R0hoOfJHvuC2LngaIBDMggZ4tty35QSfDkNvBSYyP9/pud19/rWRHLN
         OgUaU11UETgQau/445BzgQ66ZH7fc0NZHrMLEQ2fYTZgC6F2xNbMvl+N/gL9Py31n9aa
         ESCr98I7uWAd0pASwMovdiPU6xGBqB4y+0tO+NAsDM2LXn3GHgUe1RSrEkPnYf3dyJ7E
         7fJQ==
X-Gm-Message-State: AOAM530HWE4kxAYyQVHBAoBETxaqpDH26dc0wY6X3jPVYUVojmpwxdST
	Cr81jb7Resz/RPd38cagdB0=
X-Google-Smtp-Source: ABdhPJxH3CMpes4k7VUBJmRwBA7xuLJPWwlXmpOLLvAcoVgK+lPVzlEL/k+veWQQUizcViCkQhf2+Q==
X-Received: by 2002:a05:6214:2a1:: with SMTP id m1mr370118qvv.35.1606252816750;
        Tue, 24 Nov 2020 13:20:16 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id o16sm451985qkg.27.2020.11.24.13.20.15
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:20:16 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 16:19:53 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 27/45] block: simplify the block device claiming interface
Message-ID: <X714+RhOhKVlNrAo@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-28-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-28-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:33PM +0100, Christoph Hellwig wrote:
> Stop passing the whole device as a separate argument given that it
> can be trivially deduced, and clean up the !holder debug check.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:20:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:20:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36987.69114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfk0-0007rr-I4; Tue, 24 Nov 2020 21:20:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36987.69114; Tue, 24 Nov 2020 21:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfk0-0007rk-Dm; Tue, 24 Nov 2020 21:20:48 +0000
Received: by outflank-mailman (input) for mailman id 36987;
 Tue, 24 Nov 2020 21:20:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khfjz-0007r1-6K
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:20:47 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 595e21ca-42d4-493f-8598-8893429f8259;
 Tue, 24 Nov 2020 21:20:41 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id q22so524711qkq.6
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:20:41 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id b33sm298244qta.62.2020.11.24.13.20.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:20:40 -0800 (PST)
X-Inumbo-ID: 595e21ca-42d4-493f-8598-8893429f8259
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=ev/F4l5kU0JVd8J9T+Hn7/Emi68DyGqwAl9VsbylUp0=;
        b=ZNSLo8JoXJqjP5o+nIXl1wDfRlZM34G4UajJItZyFsuYjmXMkUKdp9TKdu4qQVAcH5
         1UtlzvL9QAOsVwNuRmLXs7qTazgrsdlD0c9Fg5jpMS46JIBuEyExozrq6Rpy6KkznpEk
         LK8bE886olCSw1TOZhi4r5Dvu6oUxUSyqj8CgYwDqSKsVWJ+RHfjDuv3hkDpzS8EceZD
         HTuSBVL4noo6GQEfe1lNegiK98NcOs7S2p1t2MSFBRzQhrkn/44t1OAHb6UpUFPD3EHa
         FTBQsVjSYGJSYiOLAEycBszTag6Jbur7U/Yzxl9519cveNmQk/zw99cBH024rHQM0vOy
         HpHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=ev/F4l5kU0JVd8J9T+Hn7/Emi68DyGqwAl9VsbylUp0=;
        b=NjfquaP+L7jMv+xEjOhCHHZd/oSxOSsg4buPePGlKgBenzDEjHLnP9bEJFr6iHCwEC
         v6k1sQLRL+dXfufx8UuhPI3IdEpIqaSc0uXxHbnQDuu7SKgXrkTCNU5NanoNPlIBvB8L
         XshxRaeGM0jQbJTZo+UrqXORW2CeOZFaacElHuv9N1Usbee27io0nnTqRMcp0A101qTK
         mjSwPLsrcJmH+0D08Rn94o6D9E+vVseAiXuLha7n4ZPnYzGabOspy/sMhpmJ3XhrjDgW
         DHigqiI/+tlAEm3Y7oSN6uGrv8GEyNjKmADw/0xffkhrRTATGm0mASFenczf33ylQFb0
         xtyw==
X-Gm-Message-State: AOAM5330iNN9lY6rSe0uxlKb0omJPfLFi3VuUL4btZkWKg0DPAsxqUDs
	nN6JtzUMt6/cieOO9yhCIWg=
X-Google-Smtp-Source: ABdhPJxVPExCrpl6SBnHr8hh5lACFOCmcWgKuDYnGDx1xBxinXihB0MFiLxOfh+ARQAqrCZ49K/lyA==
X-Received: by 2002:a37:7481:: with SMTP id p123mr176688qkc.424.1606252841332;
        Tue, 24 Nov 2020 13:20:41 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id b33sm298244qta.62.2020.11.24.13.20.40
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:20:40 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 16:20:18 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 28/45] block: simplify part_to_disk
Message-ID: <X715EkY5mafgeJWZ@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-29-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-29-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:34PM +0100, Christoph Hellwig wrote:
> Now that struct hd_struct has a block_device pointer use that to
> find the disk.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:21:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:21:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.36994.69126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfkW-0007zS-Ql; Tue, 24 Nov 2020 21:21:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 36994.69126; Tue, 24 Nov 2020 21:21:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfkW-0007zL-Na; Tue, 24 Nov 2020 21:21:20 +0000
Received: by outflank-mailman (input) for mailman id 36994;
 Tue, 24 Nov 2020 21:21:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khfkW-0007zC-1D
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:21:20 +0000
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73cd1ab5-2ab9-4c81-b0f1-cc732fa222ea;
 Tue, 24 Nov 2020 21:21:19 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id l2so591706qkf.0
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:21:19 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id y44sm292925qtb.50.2020.11.24.13.21.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:21:18 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YSly=E6=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1khfkW-0007zC-1D
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:21:20 +0000
X-Inumbo-ID: 73cd1ab5-2ab9-4c81-b0f1-cc732fa222ea
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 73cd1ab5-2ab9-4c81-b0f1-cc732fa222ea;
	Tue, 24 Nov 2020 21:21:19 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id l2so591706qkf.0
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:21:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=7Y3MLcmrdxWQffIhMnyKnSmG7B3ov7cDZAqFDsa+q7k=;
        b=JkzooPxJ8GeNrayRsu98YvwsNpW2uc9rVRoj4imwYpA6JzY+Piu1EJS2IziPBdeDkF
         Ow5qmzMcHaeZs2+oG1fSVsEAssWG0DO4F1vW/5P9q+2+DEbGyYUH1jozKg00Go0h3+ja
         YVkgi/EGLciFHd/Iul8YpgRz7FCu1kfvXaFXbSFsbyZTqz8JAJ48o8dWwKmdI5uElByj
         9UjvAnbnEBhxu3Tw7tEroOvSCxNENuu6nvEC1MTUGX0lcXIIF4ARzithi6KpCvnT/fAl
         Mp0ipaM81mRq+veWggOWfzfqSyIb3Y60deO3bam/RyYuBfYbgPa2E9CAR1VR8pj1MpV8
         kFEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=7Y3MLcmrdxWQffIhMnyKnSmG7B3ov7cDZAqFDsa+q7k=;
        b=sapBRPxW25vntiprHFfjliKzvrOl5d23wO85/EVC1KSlEiM5zR7xFz5ndnvHpGi7OQ
         r0vugT3JS8i4HnG+bAWvwDubKF7tqmL6iUesvNi8KYboz27ZWiQF2ddDcCrDbDKVDxHo
         cKLiWMkS+57NZVn2HIHetqFspIDjdajF//BhPbXZAs00DnxtD8tiubVGqMomGnuX1eJ9
         FXPN/240YjN2VcXPr5OwwlEKHCuuiP3YPSt1lkFkMEfYh8gmYRSm68Naswuoq6JmdD+C
         y9gvYyheG8f9rAQHYwX+bxcSvQc9kvPMWqV3RsbujxIl5wmYX3cPUNNYrn9n1WjEqtx+
         XRmA==
X-Gm-Message-State: AOAM532QpjKbwJYN5jR7leaCJF5YLzJufV2HgNgY/BwOQhZutdip+z/i
	XHZ1B5+DRVkiwNPBz6SC0VM=
X-Google-Smtp-Source: ABdhPJx26oBT8NwtvheSnoKT8jLyoQC45bZaDNVCxy9rquKXHNVFfdT0K9nlXQGVBdDZtOMR/OxmVA==
X-Received: by 2002:a37:9ed3:: with SMTP id h202mr199931qke.126.1606252879118;
        Tue, 24 Nov 2020 13:21:19 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id y44sm292925qtb.50.2020.11.24.13.21.18
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:21:18 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Tue, 24 Nov 2020 16:20:56 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 29/45] block: initialize struct block_device in bdev_alloc
Message-ID: <X715OP+dR8KzH1wA@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-30-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-30-hch@lst.de>

On Tue, Nov 24, 2020 at 02:27:35PM +0100, Christoph Hellwig wrote:
> Don't play tricks with slab constructors as bdev structures tend to not
> get reused very much, and this makes the code a lot less error prone.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Tejun Heo <tj@kernel.org>

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:25:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:25:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37008.69137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfoo-0008G0-Bm; Tue, 24 Nov 2020 21:25:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37008.69137; Tue, 24 Nov 2020 21:25:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfoo-0008Ft-8m; Tue, 24 Nov 2020 21:25:46 +0000
Received: by outflank-mailman (input) for mailman id 37008;
 Tue, 24 Nov 2020 21:25:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPaS=E6=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1khfon-0008Fo-Ay
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:25:45 +0000
Received: from mail-pf1-x443.google.com (unknown [2607:f8b0:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 858a5b60-34e4-436e-8314-247b42b0274e;
 Tue, 24 Nov 2020 21:25:44 +0000 (UTC)
Received: by mail-pf1-x443.google.com with SMTP id 131so270115pfb.9
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:25:44 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id z68sm129381pgb.37.2020.11.24.13.25.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:25:41 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PPaS=E6=chromium.org=keescook@srs-us1.protection.inumbo.net>)
	id 1khfon-0008Fo-Ay
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:25:45 +0000
X-Inumbo-ID: 858a5b60-34e4-436e-8314-247b42b0274e
Received: from mail-pf1-x443.google.com (unknown [2607:f8b0:4864:20::443])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 858a5b60-34e4-436e-8314-247b42b0274e;
	Tue, 24 Nov 2020 21:25:44 +0000 (UTC)
Received: by mail-pf1-x443.google.com with SMTP id 131so270115pfb.9
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:25:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=Hc7xHQdcWqcI1RL6yWHK3qM7+D3PcB+9wJ1f+Y4kOZ8=;
        b=oKmDT/E0BiqfYtxGd9S8VgvaixpTRLnYMRrsUV9kaRuJo3R/5oNlOHAboaJA72rvz5
         cPy+dYNpKpp/tW1abpWiBH/rmtZxXE/MLGj7m5uMt/n+RU1YTE1Rw6QIxNLzAajuHLbW
         OV/WWZm28UqOigi5ggHh0BMVZishCwQGNb1Ns=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=Hc7xHQdcWqcI1RL6yWHK3qM7+D3PcB+9wJ1f+Y4kOZ8=;
        b=iA0zVU08afAX62nOGW5pBcJIlS3YZsN6EsipEbNCtGK72pUE1ICSRp7sOBFX9SnU6B
         tZ0zR1g1RSEzvHIpNUNmzvle7dd1JyagruLCw1gxpGpOhV6EdmL+hosSp1SVyoq1oZKO
         zE1lXkvMzAHYHJdkoijfhk1Wj3amtX+XtfTQwqDgxOP7F/O0VCL0TWYczAE0fKClhpaE
         j0P7lJlC+FY493lc4SWTqdlxehjP7H8N3LVt1mIQpCwt3b9tnAxw4ZUf9zc8V1MfsD/M
         7321eecip59njQFA61gwEHG0UN/xFJCYYpDVchef64eEa2HT5fP6mepwpK4sLzdkJekW
         XBXg==
X-Gm-Message-State: AOAM530/VcmmNk/cJ8/6PE7zqyoudd6iLrhYHuCIh9eVQ+tPq9Actuj8
	+/BzoGMzcpqq/L0rvuQVUrxCnA==
X-Google-Smtp-Source: ABdhPJwtm7ki1o3tlMghBzBMLVUzEcZQN+PDIoPgB8WI7uCPSXlOMvf7LiH3SXaewZBRcFltugq6wg==
X-Received: by 2002:a17:90b:3505:: with SMTP id ls5mr176054pjb.55.1606253143764;
        Tue, 24 Nov 2020 13:25:43 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
        by smtp.gmail.com with ESMTPSA id z68sm129381pgb.37.2020.11.24.13.25.41
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:25:41 -0800 (PST)
Date: Tue, 24 Nov 2020 13:25:40 -0800
From: Kees Cook <keescook@chromium.org>
To: Nick Desaulniers <ndesaulniers@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org,
	cluster-devel@redhat.com, coreteam@netfilter.org,
	devel@driverdev.osuosl.org, dm-devel@redhat.com,
	drbd-dev@lists.linbit.com,
	dri-devel <dri-devel@lists.freedesktop.org>,
	GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
	intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
	keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
	linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>,
	linux-decnet-user@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-mediatek@lists.infradead.org, linux-media@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org,
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org,
	linux-wireless <linux-wireless@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org,
	op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com,
	patches@opensource.cirrus.com, rds-devel@oss.oracle.com,
	reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org,
	selinux@vger.kernel.org, target-devel@vger.kernel.org,
	tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <202011241324.B3439A2@keescook>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <CAKwvOdntVfXj2WRR5n6Kw7BfG7FdKpTeHeh5nPu5AzwVMhOHTg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKwvOdntVfXj2WRR5n6Kw7BfG7FdKpTeHeh5nPu5AzwVMhOHTg@mail.gmail.com>

On Mon, Nov 23, 2020 at 05:32:51PM -0800, Nick Desaulniers wrote:
> On Sun, Nov 22, 2020 at 8:17 AM Kees Cook <keescook@chromium.org> wrote:
> >
> > On Fri, Nov 20, 2020 at 11:51:42AM -0800, Jakub Kicinski wrote:
> > > If none of the 140 patches here fix a real bug, and there is no change
> > > to machine code then it sounds to me like a W=2 kind of a warning.
> >
> > FWIW, this series has found at least one bug so far:
> > https://lore.kernel.org/lkml/CAFCwf11izHF=g1mGry1fE5kvFFFrxzhPSM6qKAO8gxSp=Kr_CQ@mail.gmail.com/
> 
> So looks like the bulk of these are:
> switch (x) {
>   case 0:
>     ++x;
>   default:
>     break;
> }
> 
> I have a patch that fixes those up for clang:
> https://reviews.llvm.org/D91895

I still think this isn't right -- it's a case statement that runs off
the end without an explicit flow control determination. I think Clang is
right to warn for these, and GCC should also warn.

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 21:32:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 21:32:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37016.69149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfve-0000lZ-82; Tue, 24 Nov 2020 21:32:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37016.69149; Tue, 24 Nov 2020 21:32:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khfve-0000lS-4o; Tue, 24 Nov 2020 21:32:50 +0000
Received: by outflank-mailman (input) for mailman id 37016;
 Tue, 24 Nov 2020 21:32:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPaS=E6=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1khfvc-0000lN-DO
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:32:48 +0000
Received: from mail-pg1-x541.google.com (unknown [2607:f8b0:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5ef4134-0374-4573-83e8-93355b2c39af;
 Tue, 24 Nov 2020 21:32:47 +0000 (UTC)
Received: by mail-pg1-x541.google.com with SMTP id w4so333810pgg.13
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:32:47 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id j74sm15845pfd.43.2020.11.24.13.32.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 24 Nov 2020 13:32:45 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PPaS=E6=chromium.org=keescook@srs-us1.protection.inumbo.net>)
	id 1khfvc-0000lN-DO
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 21:32:48 +0000
X-Inumbo-ID: f5ef4134-0374-4573-83e8-93355b2c39af
Received: from mail-pg1-x541.google.com (unknown [2607:f8b0:4864:20::541])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f5ef4134-0374-4573-83e8-93355b2c39af;
	Tue, 24 Nov 2020 21:32:47 +0000 (UTC)
Received: by mail-pg1-x541.google.com with SMTP id w4so333810pgg.13
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 13:32:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=nUebvx46WK355IC8BSYKhA86maU/C5TyOra9y/oS07E=;
        b=lzAwh4po+9zegkg/2K9x7CHiUUthvj2PFJyxHwt1QdcZQIdrGCSiE3JgzWuDiv+VMN
         KwBJ/6n/jsAIvSMb6eOJqvl2BVv6D2OMXP8giSKXaaH9JwLNdR2oULKAXe3g8bDVfAub
         65Ll+320/JpDsvMBOWCFVSOrRLSDK1WOgK6ns=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=nUebvx46WK355IC8BSYKhA86maU/C5TyOra9y/oS07E=;
        b=H2Ee8j6y2gDSj/kLnBUyjz9oxGGYaR/mQjX6ItJdb0PUUEnvSeN0pBATgwoxt1Ej+O
         gn3229icLBCPJHeGbgWiaV3B28e1K8WWGTnNSpGUZ7I0vRLvtkHfTYNfQWsgkWNlYQ9j
         0jmgPt9OOhIUYHNGBCUvzDhoeSrgvFZLgN6ldawJ4wMcGWoA4g0J5hVIVDmG9VIkSZkp
         6xWO6tFvrGSgG6aQm6XHSXjbK637yFmszXONCYYbAprzttmHmU7/XFVtFfGQLvz5Skro
         uDLBUrT2G9/8wx5HGb5DuByvUNkoF+k9IxhTqfdOW1DjmkZDWPKqqCUNB0h5jhOtpuRu
         xDsg==
X-Gm-Message-State: AOAM533gRRzGLwm0knw01jGkXdP/RZ9hvGdSAjlU4UJOSgKhmALHXS8/
	FUp4DJskByC7aeTEVUHHdHpGFA==
X-Google-Smtp-Source: ABdhPJxq53hoH5g966jwbYjUDwfs6lHQ9KOfN5qCJ8fiAFBBjxy/+X2rEoTmF5zSw2rlwmYa8QmuBA==
X-Received: by 2002:aa7:9af2:0:b029:198:273c:6be8 with SMTP id y18-20020aa79af20000b0290198273c6be8mr329847pfp.4.1606253566848;
        Tue, 24 Nov 2020 13:32:46 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
        by smtp.gmail.com with ESMTPSA id j74sm15845pfd.43.2020.11.24.13.32.45
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 24 Nov 2020 13:32:45 -0800 (PST)
Date: Tue, 24 Nov 2020 13:32:44 -0800
From: Kees Cook <keescook@chromium.org>
To: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
	Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>,
	alsa-devel@alsa-project.org,
	linux-atm-general@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-wireless@vger.kernel.org, linux-fbdev@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Nathan Chancellor <natechancellor@gmail.com>,
	linux-ide@vger.kernel.org, dm-devel@redhat.com,
	keyrings@vger.kernel.org, linux-mtd@lists.infradead.org,
	GR-everest-linux-l2@marvell.com, wcn36xx@lists.infradead.org,
	samba-technical@lists.samba.org, linux-i3c@lists.infradead.org,
	linux1394-devel@lists.sourceforge.net,
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net,
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org,
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com,
	Nick Desaulniers <ndesaulniers@google.com>,
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org,
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org,
	linux-security-module@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com,
	linux-acpi@vger.kernel.org, coreteam@netfilter.org,
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	tipc-discussion@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-media@vger.kernel.org, linux-watchdog@vger.kernel.org,
	selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org,
	linux-can@vger.kernel.org, linux-block@vger.kernel.org,
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org,
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org,
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org,
	x86@kernel.org, linux-nfs@vger.kernel.org,
	GR-Linux-NIC-Dev@marvell.com, linux-mm@kvack.org,
	netdev@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
	linux-mmc@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org,
	netfilter-devel@vger.kernel.org, linux-crypto@vger.kernel.org,
	patches@opensource.cirrus.com, linux-integrity@vger.kernel.org,
	target-devel@vger.kernel.org, linux-hardening@vger.kernel.org,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
Message-ID: <202011241327.BB28F12F6@keescook>
References: <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>

On Mon, Nov 23, 2020 at 08:31:30AM -0800, James Bottomley wrote:
> Really, no ... something which produces no improvement has no value at
> all ... we really shouldn't be wasting maintainer time with it because
> it has a cost to merge.  I'm not sure we understand where the balance
> lies in value vs cost to merge but I am confident in the zero value
> case.

What? We can't measure how many future bugs aren't introduced because the
kernel requires explicit case flow-control statements for all new code.

We already enable -Wimplicit-fallthrough globally, so that's not the
discussion. The issue is that Clang is (correctly) even more strict
than GCC for this, so these are the remaining ones to fix for full Clang
coverage too.

People have spent more time debating this already than it would have
taken to apply the patches. :)

This is about robustness and language wrangling. It's a big code-base,
and this is the price of our managing technical debt for permanent
robustness improvements. (The numbers I ran from Gustavo's earlier
patches were that about 10% of the places adjusted were identified as
legitimate bugs being fixed. This final series may be lower, but there
are still bugs being found from it -- we need to finish this and shut
the door on it for good.)

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 22:24:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 22:24:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37027.69168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khgjP-0005G5-C6; Tue, 24 Nov 2020 22:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37027.69168; Tue, 24 Nov 2020 22:24:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khgjP-0005Fy-93; Tue, 24 Nov 2020 22:24:15 +0000
Received: by outflank-mailman (input) for mailman id 37027;
 Tue, 24 Nov 2020 22:24:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1khgjO-0005Ft-Lw
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 22:24:14 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 9b456be2-7f04-4efc-a6b2-430180eb9459;
 Tue, 24 Nov 2020 22:24:13 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id 8A96E22AD6;
 Tue, 24 Nov 2020 17:24:09 -0500 (EST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
	id 1khgjO-0005Ft-Lw
	for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 22:24:14 +0000
X-Inumbo-ID: 9b456be2-7f04-4efc-a6b2-430180eb9459
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 9b456be2-7f04-4efc-a6b2-430180eb9459;
	Tue, 24 Nov 2020 22:24:13 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
	by kvm5.telegraphics.com.au (Postfix) with ESMTP id 8A96E22AD6;
	Tue, 24 Nov 2020 17:24:09 -0500 (EST)
Date: Wed, 25 Nov 2020 09:24:08 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Kees Cook <keescook@chromium.org>
cc: James Bottomley <James.Bottomley@HansenPartnership.com>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, 
    alsa-devel@alsa-project.org, linux-atm-general@lists.sourceforge.net, 
    reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-wireless@vger.kernel.org, linux-fbdev@vger.kernel.org, 
    dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, 
    Nathan Chancellor <natechancellor@gmail.com>, linux-ide@vger.kernel.org, 
    dm-devel@redhat.com, keyrings@vger.kernel.org, 
    linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
    wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
    linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
    linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
    drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
    linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
    Nick Desaulniers <ndesaulniers@google.com>, linux-scsi@vger.kernel.org, 
    linux-rdma@vger.kernel.org, oss-drivers@netronome.com, 
    bridge@lists.linux-foundation.org, linux-security-module@vger.kernel.org, 
    amd-gfx@lists.freedesktop.org, linux-stm32@st-md-mailman.stormreply.com, 
    cluster-devel@redhat.com, linux-acpi@vger.kernel.org, 
    coreteam@netfilter.org, intel-wired-lan@lists.osuosl.org, 
    linux-input@vger.kernel.org, Miguel Ojeda <ojeda@kernel.org>, 
    tipc-discussion@lists.sourceforge.net, linux-ext4@vger.kernel.org, 
    linux-media@vger.kernel.org, linux-watchdog@vger.kernel.org, 
    selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
    intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
    linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
    linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
    linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
    nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
    ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
    linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, 
    x86@kernel.org, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
    linux-mm@kvack.org, netdev@vger.kernel.org, 
    linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
    linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org, 
    linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
    linux-crypto@vger.kernel.org, patches@opensource.cirrus.com, 
    linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
    linux-hardening@vger.kernel.org, 
    Jonathan Cameron <Jonathan.Cameron@huawei.com>, 
    Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
In-Reply-To: <202011241327.BB28F12F6@keescook>
Message-ID: <alpine.LNX.2.23.453.2011250859290.15@nippy.intranet>
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com> <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com> <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com> <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com> <202011241327.BB28F12F6@keescook>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 24 Nov 2020, Kees Cook wrote:

> On Mon, Nov 23, 2020 at 08:31:30AM -0800, James Bottomley wrote:
> > Really, no ... something which produces no improvement has no value at 
> > all ... we really shouldn't be wasting maintainer time with it because 
> > it has a cost to merge.  I'm not sure we understand where the balance 
> > lies in value vs cost to merge but I am confident in the zero value 
> > case.
> 
> What? We can't measure how many future bugs aren't introduced because 
> the kernel requires explicit case flow-control statements for all new 
> code.
> 

These statements are not "missing" unless you presume that code written 
before the latest de facto language spec was written should somehow be 
held to that spec.

If the 'fallthrough' statement is not part of the latest draft spec then 
we should ask why not before we embrace it. Given that the kernel still 
prefers -std=gnu89, you might want to consider what has prevented 
-std=gnu99 or -std=gnu2x etc.

> We already enable -Wimplicit-fallthrough globally, so that's not the 
> discussion. The issue is that Clang is (correctly) even more strict than 
> GCC for this, so these are the remaining ones to fix for full Clang 
> coverage too.
> 

Seems to me you should be patching the compiler.

When you have consensus among the language lawyers you'll have more 
credibility with those being subjected to enforcement.


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 23:11:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 23:11:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37035.69180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khhTS-00019d-2y; Tue, 24 Nov 2020 23:11:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37035.69180; Tue, 24 Nov 2020 23:11:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khhTR-00019W-VL; Tue, 24 Nov 2020 23:11:49 +0000
Received: by outflank-mailman (input) for mailman id 37035;
 Tue, 24 Nov 2020 23:11:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mdj2=E6=zal.aero=leo.krueger@srs-us1.protection.inumbo.net>)
 id 1khhTQ-00019R-5l
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 23:11:48 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.131]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40668a29-9494-4bc9-aa2f-32bb02172795;
 Tue, 24 Nov 2020 23:11:46 +0000 (UTC)
Received: from HE1PR05MB4794.eurprd05.prod.outlook.com (2603:10a6:7:9b::11) by
 HE1PR05MB4779.eurprd05.prod.outlook.com (2603:10a6:7:97::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20; Tue, 24 Nov 2020 23:11:26 +0000
Received: from HE1PR05MB4794.eurprd05.prod.outlook.com
 ([fe80::7d6a:df13:ca6f:b173]) by HE1PR05MB4794.eurprd05.prod.outlook.com
 ([fe80::7d6a:df13:ca6f:b173%5]) with mapi id 15.20.3564.033; Tue, 24 Nov 2020
 23:11:25 +0000
X-Inumbo-ID: 40668a29-9494-4bc9-aa2f-32bb02172795
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ipjQMG1ElQ+k2eduAFpzUaP8GR2tK8zl0mnPUtrrjKa1Ox0PnImAepe8e3lqxUqSIfpK1o4daF5eyxRNI/SZ3fzIcHN0gw87WeV99j0H0mHRuI1g46lxiU8ISV0CxuN3KzENA9/e+lIRBJOh3ercC38DWpwf40Tytqn8lQfgfP4inc/ooyru7FhPDMN9ph6PQaIlKS8bYXI8xOj7GkOpwG/PnLee3vIxfVaYNim913RHLD1SMi8okqbz52dJlTFPMIT7H4WAU4KXpTG1t/wzNRkDBCREe5JaxltD0BKfwq2swRN/EIcrtzGUDe77dHMuAPPp210qPGEyJqEzX6z2uA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=53yoF7REm0yX67XQYQQYlljxEs1Kcsrjjfn40UG8xxw=;
 b=drrHy4cYnGW/yyeFCtaR7iIxLzzPW3pCWLRa0gvGxikSssNac0BocV+FKzbTrzVDQAJiFgLygUBAdNNEN1enq8fM2RHBLz0XHoYBzIuNoEPYhTWip1TgDmrN1BUAMAPgXQwZYeAmNdImI5N4T2F5i9UrKLqzFTUkwPcuuiTsqy5uGDlZ4sFrviB50vitL9LZh833TL0h/WG3XVAfA4I6qOmAOGEKSW80D65SgHFXoG3k0Dx9cItr83PH+WkGPz8ja9bHz4R9j5ZZNoI1zuJps2J942v16uRoP1aC8hM69GzL0l0MgvDsbJKeQkvpjDPxagsvJl6B/vq6WLZSariWqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=zal.aero; dmarc=pass action=none header.from=zal.aero;
 dkim=pass header.d=zal.aero; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=zalgmbh.onmicrosoft.com; s=selector2-zalgmbh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=53yoF7REm0yX67XQYQQYlljxEs1Kcsrjjfn40UG8xxw=;
 b=IdqP7grtIjVR9/jfHPCRKzjQ7rWy+JWEiqlfwWcNha07FsNDHIOw10uUDRoB4AoFaTd8mKwrcf8o+SPkQ33k0mlniWUOXtZ+vutT9f6yCuC3Lj8eTtE7ZfrttVYcL+kZmRMMq7LXJmaPhMbN5TS+JqWpN8cziRYDRRbnSIYeH1o=
From: Leo Krueger <leo.krueger@zal.aero>
To: Julien Grall <julien@xen.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
CC: Peng Fan <peng.fan@nxp.com>, "brucea@xilinx.com" <brucea@xilinx.com>,
	Cornelia Bruelhart <cornelia.bruelhart@zal.aero>,
	"oleksandr_andrushchenko@epam.com" <oleksandr_andrushchenko@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Subject: AW: AW: AW: AW: AW: AW: Xen data from meta-virtualization layer
Thread-Topic: AW: AW: AW: AW: AW: Xen data from meta-virtualization layer
Thread-Index:
 AdaxGoYJbW1LEL/jTUmK3sZDIRyPGAA9xNCAASdllGAAABZN8AAYnVUAACWhGAAAC0bcAAAAU02AAAYKpqgABBfLAADlnDFgADm6ywAAK1PyQAAEq7SAABo3VQAA3unxYAApO/qAADv7W5A=
Date: Tue, 24 Nov 2020 23:11:25 +0000
Message-ID:
 <HE1PR05MB4794FE31A2BDE8BC458D81848BFB0@HE1PR05MB4794.eurprd05.prod.outlook.com>
References:
 <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011091858010.21307@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794B5C57A54A29A48EE8EAE8BE90@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s>
 <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s>
 <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s>
 <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <b67581c6-6682-5059-55d1-a9c695a8cdc3@xen.org>
In-Reply-To: <b67581c6-6682-5059-55d1-a9c695a8cdc3@xen.org>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=zal.aero;
x-originating-ip: [2003:e4:3f2c:9500:2d89:c573:b2ac:fc47]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3e5f29cd-2c2e-4d02-0b8a-08d890ce446f
x-ms-traffictypediagnostic: HE1PR05MB4779:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <HE1PR05MB4779DAEB0C7268A34A519D878BFB0@HE1PR05MB4779.eurprd05.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:1923;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 R14Ryookhh8C0MLF6eJ5+M1xEBJrQ6yt58Xu/nCz46aEMoMPyzl5n8VCkFMiqaw7HgNWqteFR8CqDCxfid8rDMsRiQJGoobDr7y83fCvf4z13HaQKNvr0cHmWdyaDdMYVaTLpz5x2rh0KPVE7gH04dtNZ6jW0Gwvh65DzYxKhXezP3Z+QVKEuAkQFdiGKGKFpl0Z5mTmRA2rVE/qsFLrUVqow+zv3H/qHCtPi9I8Vvcer0n/9VWSrfZVzUbvjflIzfc2rKb6gbwlc4DvC8H8x3bVafeXC+FEhcOLfAgoEzLXldTNDEP+lL+L5C6F/80xVEaf3giD1FmPbVtYn6JvkiVuhPLuvuIFRdxy75vrAxQJbwzFTqH0vqJu1LbCFPXdu9VKdi9s242c2qfd93TkIA==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:HE1PR05MB4794.eurprd05.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(376002)(39830400003)(136003)(346002)(366004)(6506007)(9686003)(8676002)(316002)(66556008)(64756008)(66476007)(966005)(66946007)(55016002)(76116006)(478600001)(4326008)(53546011)(7696005)(71200400001)(66446008)(8936002)(44832011)(2906002)(33656002)(52536014)(110136005)(66574015)(5660300002)(186003)(54906003)(132210200001)(83380400001)(86362001);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
	=?utf-8?B?akRqZUVpVmVGOEtWVVFnOE05L21WNkRyaFVEbWZPbUR3MUlsTUtJTEhFcy9w?=
 =?utf-8?B?Mlgxdks5KzZpL0dPQlZmWWhoRENzUDF3THhwb2JsZkYvNzBoRnpQcktQeUdW?=
 =?utf-8?B?RWc0L2VPZnZiZGZ4cnhjamhwOHUrUGJCWStTKzZOYkdyeEdUN3VUemNSQ0FX?=
 =?utf-8?B?YXhCaEg5a1lmYzRsSGV1R28xSjFJS3VQS0xhTEF6M0diOFpRQ05xemJEeEhs?=
 =?utf-8?B?a1FsZmJ4czkzYnpWaXAxaTdyZ0tXNVNuTnpRMEdHWjZYLzg3Y3p2bXUwQldY?=
 =?utf-8?B?c1lCR0ZEcFhVQjgrYTB4UitxZVVtL0djanNFTnJtOGVUMWQyMUtiSnhHeHRW?=
 =?utf-8?B?SzJwUUhaM0xheTlMVzdHYW1XUm1ZSmlhbE9nZE5lVXFvSCtpWW9YWFhoL2gx?=
 =?utf-8?B?VkVHOTN2WDQrbDhiVnBXbSt2bnVkN1FLdi9PMFBYT0Z0enZFWlBkU04zV1J1?=
 =?utf-8?B?QzY2cVJScGFyUzZqV0VJNXFWVTl4RVF0OGRnaHphanFpdHVaeFlCRnNDL1JI?=
 =?utf-8?B?T3ptblNvS0dsNGkyK0RwbGhCeit2enFuaXFRTE5YSDdWNW1LSjFNVGcyOVJT?=
 =?utf-8?B?T2gzTW00N0R6WlNMOGtFdFljT3pOWjRaNXN4U0hJRlRLeW9JZmRvdmZjcG1r?=
 =?utf-8?B?bkJWemdjelNwUGI1ckxxV2lQNzZiTTA4NUxjZDVqaGhaR08zZ2hsL1FUbndE?=
 =?utf-8?B?QU5LOHorWUFvNW1kWlN0RjcwZ1VVR1FkMkFsWHcrV1lJL2RPaWxnVU55aDBz?=
 =?utf-8?B?Mi9MYk0wSGZLVXJielRVeU41RkxtTUZzUFpvRVZva3ZEZGNMYUErMDhrR01j?=
 =?utf-8?B?cG9LMEZ3TnlWSVV3bEtoMDR2ZVVidmtuWnRjKzg3NnNLdHhDcW1Qc1JRYjBw?=
 =?utf-8?B?WlV1TjNVVjNuOFZKVTJCUTZ1Y1Q0VXU2d3llY0o1QmdQdXJseVA2bWg3bXFl?=
 =?utf-8?B?QjV0aTB4di82MTVDVGt5T29QZ1lkdWZFSjdWS2p5U0tPZ1NyK0VPYkN3SGpz?=
 =?utf-8?B?SUtQSW9TWG44WEZNalhad3k5QnZzRk9kUUQ5aXIrd1BkWEMxYk1FTGc5dUtj?=
 =?utf-8?B?blROV0diNFByanBucjBoeTZGNWZZbTlieUU0cHJyT2hxemhlYjhwSE43bWV5?=
 =?utf-8?B?S1BnZm1yZ1hOSkFHUS9Bdk5PNjNnVEd2WW9BcVBYYWF2MTc2WVRBOGVSZEh5?=
 =?utf-8?B?STZuQ3crTXdjSUI1QVA1SnNkcStHWGFuU1ZlQTVBQU9GR1BqcXN5a3pRL2Vh?=
 =?utf-8?B?MTJITHhucWIreng0d0czcWVvTTZUS3NwZ1A1b21TVXRiQVFvUT09?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: zal.aero
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: HE1PR05MB4794.eurprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3e5f29cd-2c2e-4d02-0b8a-08d890ce446f
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Nov 2020 23:11:25.6627
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: dd36ff89-3bc0-4d3d-b543-76f454a3c8e5
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: dd+YirD1+MgeqTE7+FXnz+oDZACsOVlJQkBrH7UZP0UkSO87OA7zdBh4WKbTwHILp3F+pCRfqmkggPZd5avnUA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR05MB4779

SGksDQoNCj4gLS0tLS1VcnNwcsO8bmdsaWNoZSBOYWNocmljaHQtLS0tLQ0KPiBWb246IEp1bGll
biBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+IEdlc2VuZGV0OiBNb250YWcsIDIzLiBOb3ZlbWJl
ciAyMDIwIDE5OjI3DQo+IEFuOiBMZW8gS3J1ZWdlciA8bGVvLmtydWVnZXJAemFsLmFlcm8+OyBT
dGVmYW5vIFN0YWJlbGxpbmkNCj4gPHN0ZWZhbm8uc3RhYmVsbGluaUB4aWxpbnguY29tPg0KPiBD
YzogUGVuZyBGYW4gPHBlbmcuZmFuQG54cC5jb20+OyBicnVjZWFAeGlsaW54LmNvbTsgQ29ybmVs
aWEgQnJ1ZWxoYXJ0DQo+IDxjb3JuZWxpYS5icnVlbGhhcnRAemFsLmFlcm8+OyBvbGVrc2FuZHJf
YW5kcnVzaGNoZW5rb0BlcGFtLmNvbTsgeGVuLQ0KPiBkZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9y
ZzsgQmVydHJhbmQuTWFycXVpc0Bhcm0uY29tDQo+IEJldHJlZmY6IFJlOiBBVzogQVc6IEFXOiBB
VzogQVc6IFhlbiBkYXRhIGZyb20gbWV0YS12aXJ0dWFsaXphdGlvbiBsYXllcg0KPiANCj4gDQo+
IA0KPiBPbiAyMi8xMS8yMDIwIDIyOjU1LCBMZW8gS3J1ZWdlciB3cm90ZToNCj4gPiBIaSBKdWxp
ZW4sDQo+IA0KPiBIaSBMZW8sDQo+IA0KPiA+DQo+ID4gZmluYWxseSBJIGNvdWxkIHRyeSBvdXQg
d2hhdCB5b3Ugc3VnZ2VzdGVkLCBwbGVhc2UgZmluZCBteSBhbnN3ZXJzIGlubGluZS4NCj4gDQo+
IFRoYW5rIHlvdSBmb3Igc2VuZGluZyB0aGUgbG9ncyENCj4gDQo+ID4NCj4gPj4gLS0tLS1VcnNw
csO8bmdsaWNoZSBOYWNocmljaHQtLS0tLQ0KPiA+PiBWb246IEp1bGllbiBHcmFsbCA8anVsaWVu
QHhlbi5vcmc+DQo+ID4+IEdlc2VuZGV0OiBNaXR0d29jaCwgMTguIE5vdmVtYmVyIDIwMjAgMTM6
MjQNCj4gPj4gQW46IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3RlZmFuby5zdGFiZWxsaW5pQHhpbGlu
eC5jb20+OyBMZW8gS3J1ZWdlcg0KPiA+PiA8bGVvLmtydWVnZXJAemFsLmFlcm8+DQo+ID4+IENj
OiBQZW5nIEZhbiA8cGVuZy5mYW5AbnhwLmNvbT47IGJydWNlYUB4aWxpbnguY29tOyBDb3JuZWxp
YQ0KPiA+PiBCcnVlbGhhcnQgPGNvcm5lbGlhLmJydWVsaGFydEB6YWwuYWVybz47DQo+ID4+IG9s
ZWtzYW5kcl9hbmRydXNoY2hlbmtvQGVwYW0uY29tOyB4ZW4tIGRldmVsQGxpc3RzLnhlbnByb2pl
Y3Qub3JnOw0KPiA+PiBCZXJ0cmFuZC5NYXJxdWlzQGFybS5jb20NCj4gPj4gQmV0cmVmZjogUmU6
IEFXOiBBVzogQVc6IEFXOiBYZW4gZGF0YSBmcm9tIG1ldGEtdmlydHVhbGl6YXRpb24gbGF5ZXIN
Cj4gPj4NCj4gPj4gSGksDQo+ID4+DQo+ID4+IE9uIDE3LzExLzIwMjAgMjM6NTMsIFN0ZWZhbm8g
U3RhYmVsbGluaSB3cm90ZToNCj4gPj4+IEFkZGluZyBCZXJ0cmFuZCwgT2xla3NhbmRyLCBKdWxp
ZW4sIGFuZCBvdGhlcnMgLS0gdGhleSBoYXZlIGEgbW9yZQ0KPiA+Pj4gcmVjZW50IGV4cGVyaWVu
Y2Ugd2l0aCBHSUN2MyBJVFMgdGhhbiBtZSBhbmQgbWlnaHQgYmUgYWJsZSB0byBoZWxwLg0KPiA+
Pj4gSSBhbSBhdHRhY2hpbmcgdGhlIGRldmljZSB0cmVlIExlbyBzZW50IGEgZmV3IGRheXMgYWdv
IGZvciByZWZlcmVuY2UuDQo+ID4+Pg0KPiA+Pj4NCj4gPj4+IFR5cGljYWxseSB3aGVuIHlvdSBj
YW4gc2V0IHRoZSBldGhlcm5ldCBsaW5rIHVwIGFuZCBubyBwYWNrZXRzIGFyZQ0KPiA+Pj4gZXhj
aGFuZ2VkIGl0IGlzIGJlY2F1c2Ugb2YgYSBtaXNzaW5nIGludGVycnVwdC4gSW4gdGhpcyBjYXNl
IGENCj4gPj4+IG1pc3NpbmcgTVNJLg0KPiA+Pj4NCj4gPj4+IEJlcnRyYW5kLCBJIGJlbGlldmUg
eW91IHRyaWVkIHRoZSBHSUMgSVRTIGRyaXZlciB3aXRoIFBDSSBkZXZpY2VzDQo+ID4+PiByZWNl
bnRseS4gSXQgaXMgZXhwZWN0ZWQgdG8gd29yayBjb3JyZWN0bHkgd2l0aCBNU0lzIGluIERvbTAs
IHJpZ2h0Pw0KPiA+Pg0KPiA+PiBPU1NUZXN0IGhhcyBzb21lIGhhcmR3YXJlIChlLmcuIFRodW5k
ZXItWCkgd2hlcmUgSVRTIGlzIHJlcXVpcmVkIHRvDQo+ID4+IGJvb3QgRG9tMC4gSSBoYXZlbid0
IHNlZW4gYW55IGZhaWx1cmUgb24gcmVjZW50IFhlbi4gV2UgYXJlIHRlc3RpbmcNCj4gPj4gNC4x
MSBhbmQgb253YXJkcyBvbiBUaHVuZGVyLVguDQo+ID4+DQo+ID4+IEhvd2V2ZXIsIGl0IG1heSBi
ZSBwb3NzaWJsZSB0aGF0IHNvbWUgbW9yZSB3b3JrIGlzIG5lY2Vzc2FyeSBmb3INCj4gPj4gb3Ro
ZXIgaGFyZHdhcmUgKGUuZy4gd29ya2Fyb3VuZCwgbWlzc2luZyBjb2RlLi4uKS4gU2VlIG1vcmUg
YmVsb3cuDQo+ID4+DQo+ID4+Pg0KPiA+Pj4NCj4gPj4+DQo+ID4+PiBPbiBUdWUsIDE3IE5vdiAy
MDIwLCBMZW8gS3J1ZWdlciB3cm90ZToNCj4gPj4+PiBIaSwNCj4gPj4+Pg0KPiA+Pj4+IEkgZW5h
YmxlZCBDT05GSUdfSEFTX0lUUyAod2hhdCBhIHN0dXBpZCBtaXN0YWtlIGJ5IG1lIHRvIG5vdCBz
ZXQgaXQNCj4gPj4+PiBiZWZvcmUuLi4pIGJ1dCB0aGVuIGhhZCB0byBhZGQgdGhlIGZvbGxvd2lu
ZyBub2RlIHRvIG15IGRldmljZSB0cmVlDQo+ID4+Pj4NCj4gPj4+PiAJZ2ljX2xwaV9iYXNlOiBz
eXNjb25AMHg4MDAwMDAwMCB7DQo+ID4+Pj4gCQljb21wYXRpYmxlID0gImdpYy1scGktYmFzZSI7
DQo+ID4+DQo+ID4+IEkgY291bGRuJ3QgZmluZCB0aGlzIGNvbXBhdGlibGUgZGVmaW5lZC91c2Vk
IGluIExpbnV4IDUuMTAtcmM0LiBATGVvLA0KPiA+PiBjb3VsZCB5b3UgY2xhcmlmeSB3aGljaCBm
bGF2b3IvdmVyc2lvbiBvZiBMaW51eCB5b3UgYXJlIHVzaW5nPw0KPiA+DQo+ID4gSXQgaXMgTGlu
dXggNC4xOSBmcm9tIFlvY3RvIChXYXJyb3IgcmVsZWFzZSkuIFhFTiA0LjEzLjIuDQo+IA0KPiBE
byB5b3UgaGF2ZSBhIGxpbmsgdG8gdGhlIExpbnV4IHRyZWU/IElzIHRoZXJlIGFueSBhZGRpdGlv
bmFsIHBhdGNoZXMgb24gdG9wIG9mDQo+IHZhbmlsbGE/DQoNCkxpbnV4IHRyZWUgaXMgZm91bmQg
aGVyZTogaHR0cHM6Ly9naXRodWIuY29tL2tvbnRyb24vbGludXgtc21hcmMtc2FsMjgvY29tbWl0
cy9tYXN0ZXItTFNESy0xOS4wOQ0KKHVwIHRvIHRoZSBsYXRlc3QgY29tbWl0IGluIHRoYXQgYnJh
bmNoKQ0KDQo+IA0KPiA+IFdoaWxlIHNlYXJjaGluZyBhcm91bmQgdGhlIEludGVybmV0IGZvciBh
bnkgc29sdXRpb24sIEkgY2FtZSBhY3Jvc3MgWzBdDQo+IHdoaWNoIGNvbnRhaW5lZCB0aGUgZ2lj
LWxwaS1iYXNlIG5vZGUuDQo+ID4gU28gSSBqdXN0IHRyaWVkIGFkZGluZyBpdCAocXVpdGUgZGVz
cGVyYXRlIEkga25vdykgYW5kIHZvaWxhLCBpdCBhdCBsZWFzdA0KPiBicm91Z2h0IG1lIG9uZSBz
dGVwIGZ1cnRoZXIgKFhFTiBleHBvc2luZyB0aGUgSVRTKS4uLg0KPiANCj4gSSBhbSBzbGlnaHRs
eSBjb25mdXNlZCB0byBob3cgdGhpcyB3b3VsZCBoZWxwLiBYZW4gYW5kLCBBRkFJQ1QsIExpbnV4
IGRvbid0DQo+IHVuZGVyc3RhbmQgZ2ljLWxwaS1iYXNlLiBEbyB5b3UgaGF2ZSBtb2RpZmljYXRp
b24gaW4geW91ciBMaW51eCB0byB1c2UgaXQ/DQoNCkkgaGF2ZSBubyBpZGVhLCB0byBiZSBob25l
c3QuIE1heWJlIGl0IGlzIGFib3V0IHRoZSBtZW1vcnkgYmVpbmcgcmVzZXJ2ZWQgZHVlIHRvIHRo
YXQgbm9kZSBvciBzb21ldGhpbmcgbGlrZSB0aGF0Pw0KDQo+IA0KPiBMb29raW5nIGF0IHRoZSBE
VCBjaGFuZ2VzIGluIFswXSwgaXQgbG9va3MgbGlrZSB0aGUgbm9kZSBpcyBub3QgYSBjaGlsZCBv
ZiBnaWNALg0KPiBTbyBJIHRoaW5rIFhlbiB3aWxsIG1hcCB0aGUgcmVnaW9uIHRvIERvbTAuDQo+
IA0KPiBUaGVyZSBhcmUgdHdvIHRoaW5ncyB0aGF0IEkgY2FuIG5vdGljZToNCj4gICAgMSkgVGhp
cyByZWdpb24gaXMgUkFNLCBidXQgSSBjYW4ndCBmaW5kIGFueSByZXNlcnZlIG5vZGUuIElzIHRo
ZXJlIGFueSBzcGVjaWZpYw0KPiBjb2RlIGluIExpbnV4IHRvIHJlc2VydmUgaXQ/DQo+ICAgIDIp
IFRoZSBpbXBsZW1lbnRhdGlvbiBpbiBVLWJvb3Qgc2VlbXMgdG8gc3VnZ2VzdCB0aGF0IHRoZSBm
aXJtd2FyZSB3aWxsDQo+IGNvbmZpZ3VyZSB0aGUgTFBJcyBhbmQgdGhlbiBlbmFibGUgaXQuIElm
IHRoYXQncyB0aGUgY2FzZSwgdGhlbiBYZW4gbmVlZHMgdG8NCj4gcmUtdXNlIHRoZSB0YWJsZSBp
biB0aGUgRFQgcmF0aGVyIHRoYW4gYWxsb2NhdGluZyBhIG5ldyBvbmUuDQo+IEhvd2V2ZXIsIEkg
d291bGQgaGF2ZSBleHBlY3RlZCBhbiBlcnJvciBtZXNzYWdlIGluIHRoZSBsb2c6DQo+IA0KPiAg
ICAgIkdJQ3YzOiBDUFV4OiBDYW5ub3QgaW5pdGlhbGl6ZSBMUElzIg0KPiANCj4gQXQgbGVhc3Qg
WGVuIHNob3VsZCBub3QgZXhwb3NlIGdpYy1scGktYmFzZSB0byB0aGUga2VybmVsLCBidXQgSSB3
aWxsIHdhaXQgb24NCj4gbW9yZSBkZXRhaWxzIGFib3V0IHRoZSBMaW51eCBrZXJuZWwgdXNlZCBi
ZWZvcmUgY29tbWVudGluZyBtb3JlLg0KPiANCj4gSSB3b3VsZCBhbHNvIGJlIGludGVyZXN0ZWQg
dG8ga25vdyBtb3JlIGRldGFpbHMgYWJvdXQgdGhlIGZhaWx1cmUgd2hlbiBnaWMtDQo+IGxwaS1i
YXNlIGlzIG5vdCBhZGRlZCBpbiB5b3VyIERULiBJbiBwYXJ0aWN1bGFyLCBJIGFtIGludGVyZXN0
ZWQgdG8gdW5kZXJzdGFuZA0KPiB3aHkgWGVuIHdvdWxkIG5vdCBleHBvc2UgdGhlIElUUyBhcyB3
ZSBkb24ndCBwYXJzZSB0aGF0IG5vZGUuDQoNCkhvdyBjYW4gSSBzdXBwbHkgeW91IHdpdGggbW9y
ZSBpbmZvcm1hdGlvbiBpbiByZWdhcmQgdG8gdGhhdD8gV2l0aG91dCB0aGF0IG5vZGUsIElUUyB3
YXMgbm90IGV4cG9zZWQgYXQgYWxsLg0KDQo+IA0KPiBbLi4uXQ0KPiANCj4gPiBGb3IgWEVOIDQu
MTMuMiBJIGhhZCB0byBhZGFwdCB5b3VyIHBhdGNoIHNsaWdodGx5IFsxXSwgc2VlIGJlbG93ICh5
ZXMgSSBrbm93LA0KPiBxdWl0ZSB1Z2x5IGluIHBhcnRzKS4NCj4gDQo+IE5vIHdvcnJpZXMsIGRl
YnVnIHBhdGNoZXMgYXJlIG5vdCBtZWFudCB0byBiZSBuaWNlIHRvIHJlYWQgOykuDQo+IA0KPiA+
IEZpbmQgYXR0YWNoZWQgdGhlIGJvb3QgbG9nIGFuZCBhbiBvdXRwdXQgb2YgInhsIGRtZXNnIiB3
aGljaCBpcyB0cnVuY2F0ZWQNCj4gZHVlIHRvIHRoZSBsYXJnZSBhbW91bnQgb2YgbWVzc2FnZXMu
DQo+ID4NCj4gPiBXaGVuIGVuYWJsaW5nIHRoZSBuZXR3b3JrIGludGVyZmFjZSAoZ2JlMCksIHRo
ZSBmb2xsb3dpbmcgb3V0cHV0IGlzDQo+IHZpc2libGU6DQo+ID4NCj4gPiByb290QGtvbnRyb24t
c2FsMjg6fiMgaXAgbGluayBzZXQgdXAgZGV2IGdiZTANCj4gPiAoWEVOKSB2Z2ljLXYzLWl0cy5j
OjkwMjpkMHYwIHZJVFMgIGNtZCAweDBjOiAwMDAwMDAxNzAwMDAwMDBjDQo+ID4gMDAwMDAwMDAw
MDAwMDAwMSAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCj4gPiAoWEVOKSB2Z2lj
LXYzLWl0cy5jOjkwMjpkMHYwIHZJVFMgIGNtZCAweDA1OiAwMDAwMDAwMDAwMDAwMDA1DQo+ID4g
MDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDANCj4gDQo+
IDB4YyBpcyBJTlYgYW5kIDB4NSBpcyBTWU5DLiBNb3N0IGxpa2VseSB0aGUgZHJpdmVyIHVubWFz
ayB0aGUgaW50ZXJydXB0IGJ5DQo+IHdyaXRpbmcgaW4gdGhlIHByb3BlcnR5IHRhYmxlIChhY2Nl
c3MgYXJlIG5vdCB0cmFwcGVkIHRvIFhlbikgYW5kIHRoZW4NCj4gcmVxdWVzdGVkIHRvIGludmFs
aWRhdGUgdGhlIGNhY2hlIHN0YXRlLg0KPiANCj4gPiBbICAgMzQuMDM0NTk4XSBBdGhlcm9zIDgw
MzEgZXRoZXJuZXQgMDAwMDowMDowMC4zOjA1OiBhdHRhY2hlZCBQSFkgZHJpdmVyDQo+IFtBdGhl
cm9zIDgwMzEgZXRoZXJuZXRdIChtaWlfYnVzOnBoeV9hZGRyPTAwMDA6MDA6MDAuMzowNSwgaXJx
PVBPTEwpDQo+ID4gWyAgIDM0LjA0MTExMV0gODAyMXE6IGFkZGluZyBWTEFOIDAgdG8gSFcgZmls
dGVyIG9uIGRldmljZSBnYmUwDQo+ID4gWyAgIDM0LjA0MTIwOV0gSVB2NjogQUREUkNPTkYoTkVU
REVWX1VQKTogZ2JlMDogbGluayBpcyBub3QgcmVhZHkNCj4gPiByb290QGtvbnRyb24tc2FsMjg6
fiMgWyAgIDM1LjA0MTk1MV0gZnNsX2VuZXRjIDAwMDA6MDA6MDAuMCBnYmUwOiBMaW5rIGlzDQo+
IERvd24NCj4gPiBbICAgMzguMTE0NDI2XSBmc2xfZW5ldGMgMDAwMDowMDowMC4wIGdiZTA6IExp
bmsgaXMgVXAgLSAxR2Jwcy9GdWxsIC0gZmxvdw0KPiBjb250cm9sIG9mZg0KPiA+IFsgICAzOC4x
MTQ1MDhdIElQdjY6IEFERFJDT05GKE5FVERFVl9DSEFOR0UpOiBnYmUwOiBsaW5rIGJlY29tZXMN
Cj4gcmVhZHkNCj4gPg0KPiA+IERvZXMgdGhhdCB0ZWxsIHlvdSBhbnl0aGluZz8NCj4gDQo+IEl0
IGlzIGF0IGxlYXN0IGEgZ29vZCBzaWduIGJlY2F1c2UgaXQgbWVhbnMgTGludXggaXMgYWJsZSB0
byBpbml0aWFsaXplL3RhbGsgdG8gdGhlDQo+IHZJVFMuDQo+IA0KPiBJIHdvdWxkIGxlYW4gdG93
YXJkcyBvbmUgKG9yIG11bHRpcGxlKSBpc3N1ZSB3aXRoIHBJVFMgYW5kL29yIHRoZSBkZXZpY2Ut
dHJlZQ0KPiBleHBvc2VkIHRvIExpbnV4LiBJIGFtIG5vdCBlbnRpcmVseSB3aGF0IGV4YWN0bHku
Li4gSSB0aGluayBoYXZpbmcgbW9yZSBkZXRhaWxzDQo+IGFib3V0IHRoZSBMaW51eCBzZXR1cCB3
b3VsZCBiZSBoZWxwZnVsLg0KDQpPayBsZXQgbWUga25vdyB3aGF0IHlvdSBuZWVkIGZyb20gbXkg
c2lkZS4gSSB3YXMgY29uc2lkZXJpbmcgdGhlIGZvbGxvd2luZyB0aGluZ3MgdG8gdHJ5IG91dCBu
ZXh0Og0KDQotIGEgbW9yZSByZWNlbnQgdS1ib290IHZlcnNpb24gYXMgdGhpcyBtaWdodCBmaXgg
cHJvYmxlbXMgd2l0aCB0aGUgbXNpLW1hcCAoYXQgbGVhc3QgdGhhdCBpcyB3aGF0IEkgdGhpbmss
IEkgYW0gbm90IGFuIGV4cGVydCBoZXJlKQ0KLSBhIGRpZmZlcmVudCBkZXZpY2UgdHJlZSAoYSBt
b3JlIHJlY2VudCBvbmUsIC4uLikNCi0gLi4uDQoNCj4gDQo+IEkgd2lsbCByZXBseSBvbiBSYWh1
bCdzIGUtbWFpbCBzZXBhcmF0ZWx5Lg0KPiANCj4gQ2hlZXJzLA0KDQpCZXN0IHdpc2hlcyENCg0K
PiANCj4gLS0NCj4gSnVsaWVuIEdyYWxsDQo=


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 23:16:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 23:16:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37046.69192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khhXa-0001Mt-QA; Tue, 24 Nov 2020 23:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37046.69192; Tue, 24 Nov 2020 23:16:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khhXa-0001Mm-ND; Tue, 24 Nov 2020 23:16:06 +0000
Received: by outflank-mailman (input) for mailman id 37046;
 Tue, 24 Nov 2020 23:16:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wl+B=E6=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khhXa-0001Mh-2u
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 23:16:06 +0000
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ab607c3-f73c-4bc1-996a-6c6eb527adde;
 Tue, 24 Nov 2020 23:16:05 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id y197so1055634qkb.7
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 15:16:05 -0800 (PST)
X-Inumbo-ID: 4ab607c3-f73c-4bc1-996a-6c6eb527adde
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Jbdr1BEjJUZERXYDOU4Gv/mlbgAbUhdHLiICpsfXV1E=;
        b=l+vUmSI7i0l7fGuVwLPE5fOuxSSHusFrOEG0e9vJ5rcmJlRUOrCYTNMgOLszTPHVi8
         kkm8fDq5tpugZyKejz3uk0cqdmJQS1mRaMRBkOrvbAyI5fs/P/fRtiJLDz+N02sHzTn9
         5/inSz1Fbn+KfojwxtYTlcXPJfmsrqKqTBJHlwr9t4EjDd547DXA4VcN7evdsnVsuQ6p
         BBfcG3FVJZcSml9If1GSG7EsdjE7zI8hFJDY7AHsK/qRYVzYo+XeozyqqdAfx+vCyNIu
         chiyZGm6NoHt+eS3/xbHVpbDRehhnbGG6wOdrUeu9OVynRP4/UUYeZuXMpT7icG9n0vb
         4oYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Jbdr1BEjJUZERXYDOU4Gv/mlbgAbUhdHLiICpsfXV1E=;
        b=Yk83S4mvbWtVw9oXYojWvYIBE/0w7wQKvciYwZLn8hp/DDZGIxb1tkGOKkCkZTbF41
         OCWxNXUQASjdiCO1wVnvkMuo4Ft0ki7GCkl3z5lG3uG1a9ICDRschx+jBoX76pzc5t+N
         nXhPOjyA6KfkFWAOLsZCWNKflKvxIp/rUuISHAako6bAEjfexCtopG/BxVdxK+ecwEi0
         KW4M5x5JnLT7/CSnqzVVVAWolw0VYFls557Z9HLJQUGH+Gx8uVIEgs+Ce1G7DQL1IJTX
         w0FZFDUbV9QBYCELq10eHl/NXUe5x1Fg55rMf74J/xnopV17tAcxS0CjfKDcMhX8foMI
         4A2A==
X-Gm-Message-State: AOAM531BsnI7yKlnwtwi7BcypCPbhri0cqoSEpvzqziZxKp2Z2mViMEX
	S3a7rLGR0oVRHoN6PONBlXxIVDrHSnPmZuaRfic=
X-Google-Smtp-Source: ABdhPJzk8nFL+kqPtl4RF6lmqD43KwMnT4A8Oapd7ArTtvgjfEhtR30+WEgNSraaEzhK8tG/u1tJxvxiaOrV9LODN+o=
X-Received: by 2002:a25:61c5:: with SMTP id v188mr748702ybb.422.1606259765056;
 Tue, 24 Nov 2020 15:16:05 -0800 (PST)
MIME-Version: 1.0
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor> <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook> <alpine.LNX.2.23.453.2011250859290.15@nippy.intranet>
In-Reply-To: <alpine.LNX.2.23.453.2011250859290.15@nippy.intranet>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Wed, 25 Nov 2020 00:15:54 +0100
Message-ID: <CANiq72nUt57u5DG9rH=DB0DzQH7U6-QbG-2Ou+PyCY=p=_Ggag@mail.gmail.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang
To: Finn Thain <fthain@telegraphics.com.au>
Cc: Kees Cook <keescook@chromium.org>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, Joe Perches <joe@perches.com>, 
	Jakub Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org, 
	linux-atm-general@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org, 
	linux-kernel <linux-kernel@vger.kernel.org>, Nathan Chancellor <natechancellor@gmail.com>, 
	linux-ide@vger.kernel.org, dm-devel@redhat.com, keyrings@vger.kernel.org, 
	linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	Nick Desaulniers <ndesaulniers@google.com>, linux-scsi@vger.kernel.org, 
	linux-rdma@vger.kernel.org, oss-drivers@netronome.com, 
	bridge@lists.linux-foundation.org, linux-security-module@vger.kernel.org, 
	amd-gfx@lists.freedesktop.org, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input <linux-input@vger.kernel.org>, 
	Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-watchdog@vger.kernel.org, 
	selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
	linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
	ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
	Linux-MM <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org, 
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
	linux-hardening@vger.kernel.org, 
	Jonathan Cameron <Jonathan.Cameron@huawei.com>, Greg KH <gregkh@linuxfoundation.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 24, 2020 at 11:24 PM Finn Thain <fthain@telegraphics.com.au> wrote:
>
> These statements are not "missing" unless you presume that code written
> before the latest de facto language spec was written should somehow be
> held to that spec.

There is no "language spec" the kernel adheres to. Even if there were,
kernel code is not frozen. If an improvement is found, it should be
applied.

> If the 'fallthrough' statement is not part of the latest draft spec then
> we should ask why not before we embrace it. Being that the kernel still
> prefers -std=gnu89 you might want to consider what has prevented
> -std=gnu99 or -std=gnu2x etc.

The C standard has nothing to do with this. We have used compiler
extensions of several kinds for many years. Even discounting those
extensions, the kernel does not conform to standard C anyway, due to
e.g. strict aliasing. I am not sure what you are trying to argue here.

But, since you insist: yes, the `fallthrough` attribute is in the
current C2x draft.
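For readers following along: a minimal sketch of how such a `fallthrough` pseudo-keyword can be mapped onto the GNU statement attribute (this is not the kernel's exact definition; the guard, macro fallback, and the `classify` function are illustrative assumptions):

```c
#include <assert.h>

/*
 * Sketch: map `fallthrough` onto the GNU statement attribute, which
 * GCC >= 7 understands; degrade to a no-op elsewhere. The C2x draft
 * spells the same thing [[fallthrough]].
 */
#if defined(__GNUC__) && __GNUC__ >= 7
#define fallthrough __attribute__((__fallthrough__))
#else
#define fallthrough do {} while (0)
#endif

/* Illustrative only: 'a' deliberately falls through into 'e'. */
static int classify(int c)
{
	int weight = 0;

	switch (c) {
	case 'a':
		weight += 1;
		fallthrough;	/* suppresses -Wimplicit-fallthrough here */
	case 'e':
		weight += 10;
		break;
	default:
		break;
	}
	return weight;
}
```

With the annotation in place, a compiler that warns on implicit fall-through stays silent on the intentional case while still flagging accidental ones.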

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 23:46:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 23:46:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37054.69204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khi17-00040C-B0; Tue, 24 Nov 2020 23:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37054.69204; Tue, 24 Nov 2020 23:46:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khi17-000405-75; Tue, 24 Nov 2020 23:46:37 +0000
Received: by outflank-mailman (input) for mailman id 37054;
 Tue, 24 Nov 2020 23:46:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wl+B=E6=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khi15-000400-Q8
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 23:46:35 +0000
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 976f89c0-f7cb-48ac-b69b-db25c4fb0b5b;
 Tue, 24 Nov 2020 23:46:34 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id z188so1152979qke.9
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 15:46:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=H+rGPaoJ9R3rr7zK4cuxMwlBlLpdlGUAwlQnvIZjZd8=;
        b=rGjC6KnGzC9NIfLo6IfoTM7C5TiZmv4q08vYNRt8cewMWpFhbX886WW47/gIiopas/
         e2jKhT73QJs/dRAIbILcJdQSFnVW9dal1rOBoIKd0ECBe+nqcukIgJrBRfX2wbZlPbrR
         27L+0GE9CXpbAta7TZEIdoDweQHp+sEvmypYMH9hyutdrnRkkklGbspFfFiZY+HYfDHj
         jsH/8tjPy+0JiTdJWitSDoQxXZntWXoEWL0cWLR+fmdpM332GBoSCjfuWUHY88IWDiPh
         04ztpEPYlCjpB7IXy1yQhW/otwe7DgKcwtOpyIrk5XdAw3ZKiOAkll08YWpT8S+NhzMx
         e/Gw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=H+rGPaoJ9R3rr7zK4cuxMwlBlLpdlGUAwlQnvIZjZd8=;
        b=jKCChMaaqp2d8GzLpmjekjR+kGHikGCYTN+yaf0giyU1d1YrwwmkSviccMBxw8F+IH
         4OPVc3mE5DYhpM0COwUGxS4+jvWzIcCtLUvw6tE4aPMSmiDUbxDwm6yz9r8ZwUwnWBcS
         21FmrKorudsqCRc+ybvJj/YZnDEAroigAeHnShK5ykHrob9hVDtAj7y5ruq992lZVIaZ
         +r57PyfRjXoCr9j61iXdjjUERD7d5pBomTcK3qDqn2dmqs/yx0QM2fADvtFkOSR/qpkH
         MNa9Dx0mOmOM6J95T+iRGTJXSTEUp0aMJKGqi5RunMf8AuDUNvs/Q+cZ/KglTgGjP17H
         VZaw==
X-Gm-Message-State: AOAM531Jt4bSEYiqpYHmNDS36s+fk1uM+cddaye0wwR+Yot8FHSVDVUO
	8Ni+QRP08Jk6P5dXsNno7hYvOwpw04HolnEM9Eg=
X-Google-Smtp-Source: ABdhPJzOwu2wFsHHn8A1YNPfZeoFLdmnMgNDFogUYxSXdSyfkKR4RJ+IXZpEdbdh4p7RHhstmUROclNZOq0Hltp8jFg=
X-Received: by 2002:a5b:40e:: with SMTP id m14mr627984ybp.33.1606261594309;
 Tue, 24 Nov 2020 15:46:34 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <alpine.LNX.2.23.453.2011230938390.7@nippy.intranet> <CANiq72=z+tmuey9wj3Kk7wX5s0hTHpsQdLhAqcOVNrHon6xn5Q@mail.gmail.com>
 <alpine.LNX.2.23.453.2011241036520.7@nippy.intranet>
In-Reply-To: <alpine.LNX.2.23.453.2011241036520.7@nippy.intranet>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Wed, 25 Nov 2020 00:46:23 +0100
Message-ID: <CANiq72=Ybm2MHmOizo1xQ_QYGuvbthtnLbwCkr8AFb8PcfmuQw@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Finn Thain <fthain@telegraphics.com.au>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
	Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 24, 2020 at 1:58 AM Finn Thain <fthain@telegraphics.com.au> wrote:
>
> What I meant was that you've used pessimism as if it was fact.

"future mistakes that it might prevent" is neither pessimism nor a statement of fact.

> For example, "There is no way to guess what the effect would be if the
> compiler trained programmers to add a knee-jerk 'break' statement to avoid
> a warning".

It is only knee-jerk if you think you are infallible.

> Moreover, what I meant was that preventing programmer mistakes is a
> problem to be solved by development tools

This warning comes from a development tool -- the compiler.

> The idea that retro-fitting new
> language constructs onto mature code is somehow necessary to "prevent
> future mistakes" is entirely questionable.

The kernel is not a frozen codebase.

Further, "mature code vs. risk of change" arguments don't apply here
because neither the program's semantics nor the binary output changes.

> Sure. And if you put -Wimplicit-fallthrough into the Makefile and if that
> leads to well-intentioned patches that cause regressions, it is partly on
> you.

Again: adding a `fallthrough` does not change the program semantics.
If you are a maintainer and want to cross-check, compare the codegen.
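One way to convince yourself of this claim: write the same switch with and without the annotation and compare the two, e.g. by diffing `objdump -d` output of the separately compiled objects. A hedged sketch (the guard macro and function names are hypothetical, not kernel code):

```c
#include <assert.h>

#if defined(__GNUC__) && __GNUC__ >= 7
#define FALLTHROUGH __attribute__((__fallthrough__))
#else
#define FALLTHROUGH do {} while (0)
#endif

/*
 * The same switch twice: unannotated and annotated. The attribute is
 * purely diagnostic, so both functions compute identical results, and
 * compiled separately they produce identical machine code.
 */
static int scale_plain(int n)
{
	switch (n) {
	case 0:
		n += 1;
		/* implicit fall through */
	case 1:
		n *= 2;
		break;
	}
	return n;
}

static int scale_annotated(int n)
{
	switch (n) {
	case 0:
		n += 1;
		FALLTHROUGH;
	case 1:
		n *= 2;
		break;
	}
	return n;
}
```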

> Have you ever considered the overall cost of the countless
> -Wpresume-incompetence flags?

Yeah: negative. On the other hand, the overall cost of the countless
-fI-am-infallible flags is very noticeable.

> Perhaps you pay the power bill for a build farm that produces logs that
> no-one reads? Perhaps you've run git bisect, knowing that the compiler
> messages are not interesting? Or compiled software in using a language
> that generates impenetrable messages? If so, here's a tip:
>
> # grep CFLAGS /etc/portage/make.conf
> CFLAGS="... -Wno-all -Wno-extra ..."
> CXXFLAGS="${CFLAGS}"
>
> Now allow me some pessimism: the hardware upgrades, gigawatt hours and
> wait time attributable to obligatory static analyses are a net loss.

If you really believe compiler warnings and static analysis are
useless and costly, I think there is not much point in continuing the
discussion.

> No, it's not for me to prove that such patches don't affect code
> generation. That's for the patch author and (unfortunately) for reviewers.

I was not asking you to prove it. I am stating that proving it is very easy.

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 23:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 23:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37061.69216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khi7c-0004uT-0l; Tue, 24 Nov 2020 23:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37061.69216; Tue, 24 Nov 2020 23:53:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khi7b-0004uM-SZ; Tue, 24 Nov 2020 23:53:19 +0000
Received: by outflank-mailman (input) for mailman id 37061;
 Tue, 24 Nov 2020 23:53:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53sX=E6=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1khi7a-0004uH-4G
 for xen-devel@lists.xenproject.org; Tue, 24 Nov 2020 23:53:18 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e9e2c15f-b7b9-4fc6-8ba8-46c09caa7b9a;
 Tue, 24 Nov 2020 23:53:17 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id DCBBC2AA63;
 Tue, 24 Nov 2020 18:53:13 -0500 (EST)
Date: Wed, 25 Nov 2020 10:53:13 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
cc: Kees Cook <keescook@chromium.org>, 
    James Bottomley <James.Bottomley@hansenpartnership.com>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, 
    alsa-devel@alsa-project.org, linux-atm-general@lists.sourceforge.net, 
    reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org, 
    linux-kernel <linux-kernel@vger.kernel.org>, 
    Nathan Chancellor <natechancellor@gmail.com>, linux-ide@vger.kernel.org, 
    dm-devel@redhat.com, keyrings@vger.kernel.org, 
    linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
    wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
    linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
    linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
    drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
    linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
    Nick Desaulniers <ndesaulniers@google.com>, linux-scsi@vger.kernel.org, 
    linux-rdma@vger.kernel.org, oss-drivers@netronome.com, 
    bridge@lists.linux-foundation.org, linux-security-module@vger.kernel.org, 
    amd-gfx@lists.freedesktop.org, linux-stm32@st-md-mailman.stormreply.com, 
    cluster-devel@redhat.com, linux-acpi@vger.kernel.org, 
    coreteam@netfilter.org, intel-wired-lan@lists.osuosl.org, 
    linux-input <linux-input@vger.kernel.org>, Miguel Ojeda <ojeda@kernel.org>, 
    tipc-discussion@lists.sourceforge.net, 
    Ext4 Developers List <linux-ext4@vger.kernel.org>, 
    Linux Media Mailing List <linux-media@vger.kernel.org>, 
    linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
    linux-arm-msm@vger.kernel.org, intel-gfx@lists.freedesktop.org, 
    linux-geode@lists.infradead.org, linux-can@vger.kernel.org, 
    linux-block@vger.kernel.org, linux-gpio@vger.kernel.org, 
    op-tee@lists.trustedfirmware.org, linux-mediatek@lists.infradead.org, 
    xen-devel@lists.xenproject.org, nouveau@lists.freedesktop.org, 
    linux-hams@vger.kernel.org, ceph-devel@vger.kernel.org, 
    virtualization@lists.linux-foundation.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-hwmon@vger.kernel.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
    Linux-MM <linux-mm@kvack.org>, 
    Network Development <netdev@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
    linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org, 
    linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
    Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, 
    patches@opensource.cirrus.com, linux-integrity@vger.kernel.org, 
    target-devel@vger.kernel.org, linux-hardening@vger.kernel.org, 
    Jonathan Cameron <Jonathan.Cameron@huawei.com>, 
    Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
In-Reply-To: <CANiq72nUt57u5DG9rH=DB0DzQH7U6-QbG-2Ou+PyCY=p=_Ggag@mail.gmail.com>
Message-ID: <alpine.LNX.2.23.453.2011251022550.14@nippy.intranet>
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com> <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com> <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com> <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com> <202011241327.BB28F12F6@keescook> <alpine.LNX.2.23.453.2011250859290.15@nippy.intranet> <CANiq72nUt57u5DG9rH=DB0DzQH7U6-QbG-2Ou+PyCY=p=_Ggag@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII


On Wed, 25 Nov 2020, Miguel Ojeda wrote:

> 
> The C standard has nothing to do with this. We have used compiler
> extensions of several kinds for many years. Even discounting those
> extensions, the kernel does not conform to standard C anyway, due to
> e.g. strict aliasing. I am not sure what you are trying to argue here.
> 

I'm saying that supporting the official language spec makes more sense 
than attempting to support a multitude of divergent interpretations of 
the spec (e.g. gcc, clang, coverity, etc.).

I'm also saying that the reason we use -std=gnu89 is that existing 
code was written in that language, not in ad hoc languages made up of 
collections of extensions that change with every release.

> But, since you insist: yes, the `fallthrough` attribute is in the 
> current C2x draft.
> 

Thank you for checking. I found a free version that's only 6 weeks old:

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2583.pdf

It will be interesting to see whether 6.7.11.5 changes once the various 
implementations reach agreement.
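For reference, the draft's spelling uses the standard attribute syntax. A portable wrapper can be sketched roughly as follows (the macro name, version guards, and the `days_left_in_q1` example are assumptions for illustration, not taken from any implementation):

```c
#include <assert.h>

/*
 * n2583 6.7.11.5 spells the attribute [[fallthrough]]; pre-C2x GNU
 * toolchains express the same intent as __attribute__((__fallthrough__)).
 */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ > 201710L
#define FALLTHROUGH [[fallthrough]]
#elif defined(__GNUC__) && __GNUC__ >= 7
#define FALLTHROUGH __attribute__((__fallthrough__))
#else
#define FALLTHROUGH do {} while (0)
#endif

/* Hypothetical example: months 1..3 accumulate the days left in Q1. */
static int days_left_in_q1(int month)
{
	int days = 0;

	switch (month) {
	case 1:
		days += 31;
		FALLTHROUGH;
	case 2:
		days += 28;
		FALLTHROUGH;
	case 3:
		days += 31;
		break;
	}
	return days;
}
```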


From xen-devel-bounces@lists.xenproject.org Tue Nov 24 23:55:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 24 Nov 2020 23:55:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37069.69232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khiA6-00054k-FW; Tue, 24 Nov 2020 23:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37069.69232; Tue, 24 Nov 2020 23:55:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khiA6-00054d-CS; Tue, 24 Nov 2020 23:55:54 +0000
Received: by outflank-mailman (input) for mailman id 37069;
 Tue, 24 Nov 2020 23:55:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khiA5-00054V-2C; Tue, 24 Nov 2020 23:55:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khiA4-0001XD-Sw; Tue, 24 Nov 2020 23:55:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khiA4-0004Zf-JP; Tue, 24 Nov 2020 23:55:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khiA4-00055l-Ic; Tue, 24 Nov 2020 23:55:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pLIEClihhy+A4VEjsavfw76C3YvaU2TLkSDXTaI/5nU=; b=ZmzyRYlI2vat/iVrHRY5e83c9T
	HR41p9RK+YalJJOiUKlEean3hQU/daG726f5pOFRPmGIHXHVEvHw9lx0FEzisn9iemQl40lrmx44u
	R8gRZmpoeC/eUvonDlUF0YYMuH/Goe6KzW1k42aA55djd83pCssARoYqjMpRdYgSCS28=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156985-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 156985: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=17ec9b43af072051edb1380a5eb459a382dcafa3
X-Osstest-Versions-That:
    xen=15b298097289f1c11b981454a3dc912b95e2f65b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 24 Nov 2020 23:55:52 +0000

flight 156985 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156985/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 156633
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156633
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156633
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156633
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156633
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156633
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156633
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156633
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156633
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156633
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156633
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156633
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156633
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156633
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  17ec9b43af072051edb1380a5eb459a382dcafa3
baseline version:
 xen                  15b298097289f1c11b981454a3dc912b95e2f65b

Last test of basis   156633  2020-11-10 18:05:53 Z   14 days
Testing same since   156985  2020-11-24 13:35:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   15b2980972..17ec9b43af  17ec9b43af072051edb1380a5eb459a382dcafa3 -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 00:32:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 00:32:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37080.69247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khijY-0000kX-0G; Wed, 25 Nov 2020 00:32:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37080.69247; Wed, 25 Nov 2020 00:32:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khijX-0000kQ-Sz; Wed, 25 Nov 2020 00:32:31 +0000
Received: by outflank-mailman (input) for mailman id 37080;
 Wed, 25 Nov 2020 00:32:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FqDi=E7=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khijW-0000ju-3f
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 00:32:30 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c828f0f-a3a0-4aed-9f96-e7e768311e96;
 Wed, 25 Nov 2020 00:32:29 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id h20so1395308qkk.4
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 16:32:29 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FqDi=E7=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
	id 1khijW-0000ju-3f
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 00:32:30 +0000
X-Inumbo-ID: 1c828f0f-a3a0-4aed-9f96-e7e768311e96
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1c828f0f-a3a0-4aed-9f96-e7e768311e96;
	Wed, 25 Nov 2020 00:32:29 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id h20so1395308qkk.4
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 16:32:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QqqubA90NyDjnD5SH+OxnZbso0TzlLiuZ5gzRUm0zsY=;
        b=DYdo15DH5n4eMCA51W2vXZfybVLPbTpwO6CR+j1CET94cx9FmUQEkAzz4OadVdmrht
         /5QscdfYh1sAKLvu6dkrheNjvEoR4Mdvvl3diWkBzFiGJNP9BCqxLhr4zTkKEaaVMxi5
         qUA6kNkUTJzI9KEPxIujLpbBVVOhJKSQSAf+FeTM6jY84RGcXl9jGks4AfD2ojo1GxQQ
         uEwHm9wuAxdOf70IRL+AXs8sujZOQd+kSEI1eU2QAsFia5W6QaIddZOITngYX9DSKM5R
         mNvBJUTqyzODtQONxErP8O8dRkCcVm25oDOfEuyrse782pSy0gFJjFFgzHOgMIi8mZem
         jSGA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QqqubA90NyDjnD5SH+OxnZbso0TzlLiuZ5gzRUm0zsY=;
        b=P+CHvvjh7bDvNXkgqffFDDsg+WcUJXGgfjvb2lnHe1W7EprtBGHjLCR+/cwOd3aC5J
         jvKAZQdP4VMghShgPOfgpbqi3cUXjIb+bJn/mcqks+mh5JVh1HkbNzyy3Lqpfcyub3jf
         lwif3yYIox6KiGSevbZ9cdbTrOcFWC7CiQtWU/IpA7NCAkvoEBd8ISsCCnObs/54Ndep
         FnYuxgLbGDxbU8hliT896TieYpX/uruhtdCdYUryEqTzqG+3zWlitZqF3i3MnbPqM3AC
         24jLCj8WQkCo+Q1XGw9g0hMvcpf3PWbJ/xf9rtMDKDlf5BPtTCYR7ufc4It/6AfI4ReM
         y1HQ==
X-Gm-Message-State: AOAM531v4li0Lwm0zLX2ykCHU8P55HCYNO3vn0eKslpL/MHTiz0BmuoH
	VDi8hbbTi5l3U82AuREqOowswAgoN/F6qJJ/GRQ=
X-Google-Smtp-Source: ABdhPJwwVKQFQuxzNO/2Mi/lJ3RUII3vPq9FIwhq+sZ6y3vyBbFPDKm/uCaEGwo9LU3j1fjBVCpYTO563lD14rGXSok=
X-Received: by 2002:a25:5f0f:: with SMTP id t15mr779915ybb.26.1606264348932;
 Tue, 24 Nov 2020 16:32:28 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com> <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
In-Reply-To: <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Wed, 25 Nov 2020 01:32:17 +0100
Message-ID: <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel@lists.freedesktop.org, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 23, 2020 at 9:38 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> So you think a one line patch should take one minute to produce ... I
> really don't think that's grounded in reality.

No, I have not said that. Please don't put words in my mouth (again).

I have said *authoring* lines of *this* kind takes a minute per line.
Specifically: lines fixing the fallthrough warning mechanically and
repeatedly where the compiler tells you to, and doing so full-time for
a month.

For instance, take the following one from Gustavo. Are you really
saying it takes 12 minutes (your number) to write that `break;`?

diff --git a/drivers/gpu/drm/via/via_irq.c b/drivers/gpu/drm/via/via_irq.c
index 24cc445169e2..a3e0fb5b8671 100644
--- a/drivers/gpu/drm/via/via_irq.c
+++ b/drivers/gpu/drm/via/via_irq.c
@@ -364,6 +364,7 @@ int via_wait_irq(struct drm_device *dev, void
*data, struct drm_file *file_priv)
                irqwait->request.sequence +=
                        atomic_read(&cur_irq->irq_received);
                irqwait->request.type &= ~_DRM_VBLANK_RELATIVE;
+               break;
        case VIA_IRQ_ABSOLUTE:
                break;
        default:

>  I suppose a one line
> patch only takes a minute to merge with b4 if no-one reviews or tests
> it, but that's not really desirable.

I have not said that either. I said reviewing and merging those are
noise compared to any complex patch. Testing should be done by the
author comparing codegen.

> Part of what I'm trying to measure is the "and useful" bit because
> that's not a given.

It is useful since it makes intent clear. It also catches actual bugs,
which is even more valuable.

> Well, you know, subsystems are very different in terms of the amount of
> patches a maintainer has to process per release cycle of the kernel.
> If a maintainer is close to capacity, additional patches, however
> trivial, become a problem.  If a maintainer has spare cycles, trivial
> patches may look easy.

First of all, voluntary maintainers choose their own workload.
Furthermore, we already measure capacity in the `MAINTAINERS` file:
maintainers can state they can only handle a few patches. Finally, if
someone does not have time for a trivial patch, they are very unlikely
to have any time to review big ones.

> You seem to be saying that because you find it easy to merge trivial
> patches, everyone should.

Again, I have not said anything of the sort.

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 01:06:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 01:06:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37089.69265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khjFz-00082G-Lj; Wed, 25 Nov 2020 01:06:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37089.69265; Wed, 25 Nov 2020 01:06:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khjFz-000829-If; Wed, 25 Nov 2020 01:06:03 +0000
Received: by outflank-mailman (input) for mailman id 37089;
 Wed, 25 Nov 2020 01:06:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FqDi=E7=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khjFy-000824-0K
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 01:06:02 +0000
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8671d5a3-3190-443e-b734-c1f835a799c3;
 Wed, 25 Nov 2020 01:06:00 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id u4so1453692qkk.10
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 17:06:00 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FqDi=E7=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
	id 1khjFy-000824-0K
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 01:06:02 +0000
X-Inumbo-ID: 8671d5a3-3190-443e-b734-c1f835a799c3
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8671d5a3-3190-443e-b734-c1f835a799c3;
	Wed, 25 Nov 2020 01:06:00 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id u4so1453692qkk.10
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 17:06:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=44pFzPZOXy9WMyxuVZ4PJwmHwfPvC91GWhNSdpkVBXk=;
        b=PLHqMkuVQ6uE1aGa3jPkk7oL0rd2+0aQGjbXwXCM1BqpOEf5R6gUmrb1tTrj75nDjD
         aG6VHTDP2JdODJ/e29C1vfEMjWhTPlycM3mNfL2JW5BhFtstP0di/Kwb3XYGnsRoKRp5
         7FVJzAE2cPEDrpLIYX3jwoIfPno8H9ArdOwpkc9HelZA+M7Eq52S4RmPBHauFrK9tfW3
         8wrBLWN/Mq7Fl9GTodmgwwudDuqrVUSiSILvkeQCl7qFs3/IjbI7/AG3m5iLRdEEw+2H
         RzsA5+EdKfz0DIt5iJKomdRqiV67cH8ZcnWPF2PnARf/ANxCrT7QZJCkY0VU2DrQ9d/L
         oW2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=44pFzPZOXy9WMyxuVZ4PJwmHwfPvC91GWhNSdpkVBXk=;
        b=fk2tYdsZeRDSOaISfzXJGVoWUXQIZA8uvM7bHj9MZev46/oFNMv8iSM7gMQaHVRu4+
         igwu54oTpaBIbKdz/rjZiDKUybZfyuRY+xHDp9LdQ9W3wMgBsvLQgL8MUKETlHYk9elk
         QUystM8hezVjrq9KJjn8pcWFx9MqZKpqBZ+ndDQ/nJcYnC9dkKH5ta0jnkh61JEw0Qk2
         srD5M+462AXQE5vgKttE49yT8np74CtN3XLCg/mYd22HyprFj6MbIHLTc58wEWFPQJxQ
         EphbrbQsejYfefv6nb+VaCPdbMKPNEhSoy1cr8qK/fXpsucbrHDsTEHqtv9bQRhuQhcK
         wmqQ==
X-Gm-Message-State: AOAM531csgQdgPQ95ovVpz/Ti9zTLMf0TZZPy0KwA7pup4LV24rhSJJe
	/VIHPF1eIW7CwHVWOPY9qTJfUVMqBUE+n4yQluU=
X-Google-Smtp-Source: ABdhPJxG2tW9FM1fAqThCcvqKAPI/OeE1R4V2//A8ePZuSketjr0yKUxoxj6t2hcKlYt9sKRSewQcSyhG/w+5cGdzqU=
X-Received: by 2002:a25:aac5:: with SMTP id t63mr1046305ybi.22.1606266360499;
 Tue, 24 Nov 2020 17:06:00 -0800 (PST)
MIME-Version: 1.0
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor> <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook> <alpine.LNX.2.23.453.2011250859290.15@nippy.intranet>
 <CANiq72nUt57u5DG9rH=DB0DzQH7U6-QbG-2Ou+PyCY=p=_Ggag@mail.gmail.com> <alpine.LNX.2.23.453.2011251022550.14@nippy.intranet>
In-Reply-To: <alpine.LNX.2.23.453.2011251022550.14@nippy.intranet>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Wed, 25 Nov 2020 02:05:49 +0100
Message-ID: <CANiq72m2kGxSy2E9jgYE4_xRV6h9rFqiJP25KXs_5ObYnH_nmA@mail.gmail.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang
To: Finn Thain <fthain@telegraphics.com.au>
Cc: Kees Cook <keescook@chromium.org>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, Joe Perches <joe@perches.com>, 
	Jakub Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org, 
	linux-atm-general@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org, 
	linux-kernel <linux-kernel@vger.kernel.org>, Nathan Chancellor <natechancellor@gmail.com>, 
	linux-ide@vger.kernel.org, dm-devel@redhat.com, keyrings@vger.kernel.org, 
	linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	Nick Desaulniers <ndesaulniers@google.com>, linux-scsi@vger.kernel.org, 
	linux-rdma@vger.kernel.org, oss-drivers@netronome.com, 
	bridge@lists.linux-foundation.org, linux-security-module@vger.kernel.org, 
	amd-gfx@lists.freedesktop.org, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input <linux-input@vger.kernel.org>, 
	Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-watchdog@vger.kernel.org, 
	selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org, 
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
	linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
	ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
	Linux-MM <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
	linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org, 
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
	linux-hardening@vger.kernel.org, 
	Jonathan Cameron <Jonathan.Cameron@huawei.com>, Greg KH <gregkh@linuxfoundation.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 25, 2020 at 12:53 AM Finn Thain <fthain@telegraphics.com.au> wrote:
>
> I'm saying that supporting the official language spec makes more sense
> than attempting to support a multitude of divergent interpretations of the
> spec (i.e. gcc, clang, coverity etc.)

Making the kernel strictly conforming is a ship that sailed long ago,
for several reasons. Anyway, supporting several compilers and other
tools, regardless of extensions, is valuable.

> I'm also saying that the reason why we use -std=gnu89 is that existing
> code was written in that language, not in ad hoc languages comprised of
> collections of extensions that change with every release.

No, we aren't particularly tied to `gnu89` or anything like that. We
could actually go for `gnu11` already, since the minimum supported GCC
and Clang versions both handle it. Even if a bit of code needs fixing,
that shouldn't be a problem if someone puts in the work.

In other words, the kernel code is not frozen, nor are the features it
uses from compilers. They do, in fact, change from time to time.

> Thank you for checking. I found a free version that's only 6 weeks old:

You're welcome! There are quite a few new attributes coming, mostly
following C++ ones.

> It will be interesting to see whether 6.7.11.5 changes once the various
> implementations reach agreement.

Not sure what you mean. The standard does not evolve through
implementations' agreement (although standardizing existing practice
is one of the best arguments to back a change).

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 01:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 01:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37098.69277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khjws-0003oj-7X; Wed, 25 Nov 2020 01:50:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37098.69277; Wed, 25 Nov 2020 01:50:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khjws-0003ob-2P; Wed, 25 Nov 2020 01:50:22 +0000
Received: by outflank-mailman (input) for mailman id 37098;
 Wed, 25 Nov 2020 01:50:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khjwq-0003oS-MA; Wed, 25 Nov 2020 01:50:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khjwq-0000BF-Er; Wed, 25 Nov 2020 01:50:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khjwq-00048I-8A; Wed, 25 Nov 2020 01:50:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khjwq-0005Bu-72; Wed, 25 Nov 2020 01:50:20 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nzitLm/BtpSdU4EreKfFQdnpFuysXh3BG/retvMbD+M=; b=Bd76ZahCLc/WK9sQnsMFZqd99h
	8hBSVsmmpaUUfMl1dI0ZtVO/RIgeLntwzt33CVlJfYRwtQthUBRiCzOYZQjaoPCsY+l+OCGIBrWTp
	bc0nvQpMcWFLp9YBGC+a35pL+46DwlbANLo5DqAa3B7GTG3Z1KVkqXmYZ5grJwrX9QYs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156984-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156984: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=9f4b26f3ea18cb2066c9e58a84ff202c71739a41
X-Osstest-Versions-That:
    linux=fc8334619167ce90b6d3f76e3dce9284dbe14fa2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 01:50:20 +0000

flight 156984 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156984/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156942
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156942
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156942
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail like 156942
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156942
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156942
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156942
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156942
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156942
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156942
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156942
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156942
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds      3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                9f4b26f3ea18cb2066c9e58a84ff202c71739a41
baseline version:
 linux                fc8334619167ce90b6d3f76e3dce9284dbe14fa2

Last test of basis   156942  2020-11-22 09:40:34 Z    2 days
Testing same since   156984  2020-11-24 13:11:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lewis <aaronlewis@google.com>
  Adam Ford <aford173@gmail.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Ahmad Fatoum <a.fatoum@pengutronix.de> # stpmic1
  Alejandro Concepcion Rodriguez <alejandro@acoro.eu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexei Starovoitov <ast@kernel.org>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arvind Sankar <nivedita@alum.mit.edu>
  Aya Levin <ayal@nvidia.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bjørn Mork <bjorn@mork.no>
  Borislav Petkov <bp@suse.de>
  Brian O'Keefe <bokeefe@alum.wpi.edu>
  Can Guo <cang@codeaurora.org>
  Charan Teja Reddy <charante@codeaurora.org>
  Chen Yu <yu.c.chen@intel.com>
  Chen-Yu Tsai <wens@csie.org>
  Chris Co <chrco@microsoft.com>
  Christoph Hellwig <hch@lst.de>
  Claire Chang <tientzu@chromium.org>
  Clément Péron <peron.clem@gmail.com>
  Colin Ian King <colin.king@canonical.com>
  Corentin Labbe <clabbe.montjoie@gmail.com>
  Corentin Labbe <clabbe@baylibre.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Murphy <dmurphy@ti.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Rientjes <rientjes@google.com>
  Denis Yulevich <denisyu@nvidia.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Edwin Peer <edwin.peer@broadcom.com>
  Eli Cohen <elic@nvidia.com>
  Emilio López <emilio@elopez.com.ar>
  Enric Balletbo i Serra <enric.balletbo@collabora.com>
  Fabien Parent <fparent@baylibre.com>
  Fabio Estevam <festevam@gmail.com>
  Faiz Abbas <faiz_abbas@ti.com>
  Felix Fietkau <nbd@nbd.name>
  Filip Moc <dev@moc6.cz>
  Florian Fainelli <f.fainelli@gmail.com>
  Fugang Duan <fugang.duan@nxp.com>
  Gerald Schaefer <gerald.schaefer@linux.ibm.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Harry Cutts <hcutts@chromium.org>
  Hauke Mehrtens <hauke@hauke-m.de>
  Heiko Carstens <hca@linux.ibm.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Ido Schimmel <idosch@nvidia.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Dike <jdike@akamai.com>
  Jens Axboe <axboe@kernel.dk>
  Jernej Skrabec <jernej.skrabec@siol.net>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jimmy Assarsson <extja@kvaser.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Joakim Tjernlund <joakim.tjernlund@infinera.com>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Weiner <hannes@cmpxchg.org>
  John Fastabend <john.fastabend@gmail.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kailang Yang <kailang@realtek.com>
  Karsten Graul <kgraul@linux.ibm.com>
  Kees Cook <keescook@chromium.org>
  Leo Yan <leo.yan@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Loris Fauster <loris.fauster@ttcontrol.com>
  Lu Baolu <baolu.lu@linux.intel.com>
  Lucas Stach <l.stach@pengutronix.de>
  Lukas Wunner <lukas@wunner.de>
  Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
  Luo Meng <luomeng12@huawei.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Brown <broonie@kernel.org>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Maxime Ripard <maxime@cerno.tech>
  Michael Chan <michael.chan@broadcom.com>
  Michael Guralnik <michaelgur@nvidia.com>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Kalderon <michal.kalderon@marvell.com>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mickaël Salaün <mic@linux.microsoft.com>
  Muchun Song <songmuchun@bytedance.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nenad Peric <nperic@gmail.com>
  Nishanth Menon <nm@ti.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Barker <pbarker@konsulko.com>
  Paul Moore <paul@paul-moore.com>
  Pavel Machek <pavel@ucw.cz>
  PeiSen Hou <pshou@realtek.com>
  Peter Hutterer <peter.hutterer@who-t.net>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qinglang Miao <miaoqinglang@huawei.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ralph Siemsen <ralph.siemsen@linaro.org>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Roman Gushchin <guro@fb.com>
  Ryan Sharpelletti <sharpelletti@google.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sam Nobs <samuel.nobs@taitradio.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sasha Levin <sashal@kernel.org>
  Sean Nyekjaer <sean@geanix.com>
  Sean Tranchetti <stranche@codeaurora.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sergey Matyukevich <geomatsi@gmail.com>
  Shawn Guo <shawnguo@kernel.org>
  Shisong Qin <qinshisong1205@gmail.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Soheil Hassas Yeganeh <soheil@google.com>
  Srinivasa Rao Mandadapu <srivasam@codeaurora.org>
  Stefan Haberland <sth@linux.ibm.com>
  Stephen Rothwell <sfr@canb.auug.org.au>
  Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Sven Van Asbroeck <thesven73@gmail.com>
  Sven Van Asbroeck <thesven73@gmail.com> # arm imx6 lan7430
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tariq Toukan <tariqt@nvidia.com>
  Theodore Ts'o <tytso@mit.edu>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tobias Waldekranz <tobias@waldekranz.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  V Sujith Kumar Reddy <vsujithk@codeaurora.org>
  Vadim Fedorenko <vfedorenko@novek.ru>
  Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vladimir Oltean <olteanv@gmail.com>
  Vladyslav Tarasiuk <vladyslavt@nvidia.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  Wu Bo <wubo.oduw@gmail.com>
  Xie He <xie.he.0141@gmail.com>
  Xin Long <lucien.xin@gmail.com>
  Xiongfeng Wang <wangxiongfeng2@huawei.com>
  Yi-Hung Wei <yihung.wei@gmail.com>
  Yicong Yang <yangyicong@hisilicon.com>
  Yu Kuai <yukuai3@huawei.com>
  Yuchung Cheng <ycheng@google.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>
  Zhang Qilong <zhangqilong3@huawei.com>
  Zhenzhong Duan <zhenzhong.duan@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   fc8334619167..9f4b26f3ea18  9f4b26f3ea18cb2066c9e58a84ff202c71739a41 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 02:15:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 02:15:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37108.69292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khkKj-00066E-6e; Wed, 25 Nov 2020 02:15:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37108.69292; Wed, 25 Nov 2020 02:15:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khkKj-000667-3e; Wed, 25 Nov 2020 02:15:01 +0000
Received: by outflank-mailman (input) for mailman id 37108;
 Wed, 25 Nov 2020 02:14:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5nq=E7=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1khkKh-000662-1p
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 02:14:59 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (unknown
 [40.107.220.78]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dd3037c-c5c6-49e2-a0c0-979381b91be0;
 Wed, 25 Nov 2020 02:14:57 +0000 (UTC)
Received: from CY4PR13CA0023.namprd13.prod.outlook.com (2603:10b6:903:32::33)
 by BL0PR02MB3667.namprd02.prod.outlook.com (2603:10b6:207:47::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.28; Wed, 25 Nov
 2020 02:14:54 +0000
Received: from CY1NAM02FT033.eop-nam02.prod.protection.outlook.com
 (2603:10b6:903:32:cafe::cf) by CY4PR13CA0023.outlook.office365.com
 (2603:10b6:903:32::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.12 via Frontend
 Transport; Wed, 25 Nov 2020 02:14:54 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by
 CY1NAM02FT033.mail.protection.outlook.com (10.152.75.179) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3589.20 via Frontend Transport; Wed, 25 Nov 2020 02:14:53 +0000
Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Tue, 24 Nov 2020 18:14:52 -0800
Received: from smtp.xilinx.com (172.19.127.96) by
 xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id
 15.1.1913.5 via Frontend Transport; Tue, 24 Nov 2020 18:14:52 -0800
Received: from [10.23.120.204] (port=57077 helo=localhost)
 by smtp.xilinx.com with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1khkKa-0000bx-6N; Tue, 24 Nov 2020 18:14:52 -0800
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fkFWj09wNen4ukgQ1GCEBn2e29YOEkZKehzQd582b+qVDRc9GIwCzoNguIL0yESG8ZKa1bKkGgs29uolf1mxAN1t/kER5q1RDH2XZExbe+9pdsSvsWAHR3e7NvN7vuc3QHHWF7pyCRTPJ4Z3jHJBp+FssbX8WAI1Op2J4SjpyVbZ77LJBx4tchexVcQ9lwpEUV25K78s9fFt+xaaJjtH8NdDDfcFIaz0QUuSAAbL1M62gqolKHKzBWMeV90+zJKpAO59heln1l7mXZIKGOrpOwq6YRwm7csXXurwpnfPhz/CtI23/3+QJPV+KRnwkBJ8a1wkhxpXhtPzV570Ssndbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aA/fEOkvpH27qOdfBIo5qS26KGKmo1TlvffLgTTtNdw=;
 b=W6ZfpPISgHXfn8Fdouye33BS3EDxHAN5NjsgTJ0kIqnpBqATSqSZn24WMEdrwYNj96sWWgvOpnN+56GTcmc9irLqtcLo/+dw/gwe2n1y67KDnFmAypJmdchI+xQkwKEG/ynnScE1kAMXeaDfOblBGT05zU/AQdZJfD4Y+m9xb9gNNETZUKxit4E0ziQgClejq2T2tyein1Oms8jKzCbzQQwsKYDcGdYgiSNfYRHtNXItEm1twGCH2eZpoj+HQ3eg56KVQ2edn/fiCX3r3hkDdztO6o7bXkao36n258XKxq5lSsM57S0XEv49IpjQTi4e4CbmbIouQ/S2p3JIWDOB4g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.62.198) smtp.rcpttodomain=zal.aero smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aA/fEOkvpH27qOdfBIo5qS26KGKmo1TlvffLgTTtNdw=;
 b=NN7U/X5g7LZlMmVXVLElG+QDzxkwZ5Ixf6BkBWVBC/Hj8oLcCbGYQC20mFawvxj/2kbJ/3B0p+Qbuvqn0CaiRKj/gMB5qtOdGUg3hyy0RzaAz6fjZGWQSJ5u0GjT6sZJsuGc2bb/UnwxROHS/tfAk0r1wK9PnUlvxDFdKD4Y0+s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198)
 smtp.mailfrom=xilinx.com; zal.aero; dkim=none (message not signed)
 header.d=none;zal.aero; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.62.198 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com;
Date: Tue, 24 Nov 2020 18:14:51 -0800
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Leo Krueger <leo.krueger@zal.aero>, <Zhiqiang.Hou@nxp.com>
CC: Julien Grall <julien@xen.org>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>, Peng Fan <peng.fan@nxp.com>,
	"brucea@xilinx.com" <brucea@xilinx.com>, Cornelia Bruelhart
	<cornelia.bruelhart@zal.aero>, "oleksandr_andrushchenko@epam.com"
	<oleksandr_andrushchenko@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "Bertrand.Marquis@arm.com"
	<Bertrand.Marquis@arm.com>
Subject: Re: AW: AW: AW: AW: AW: AW: Xen data from meta-virtualization
 layer
In-Reply-To: <HE1PR05MB4794FE31A2BDE8BC458D81848BFB0@HE1PR05MB4794.eurprd05.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2011241743490.7979@sstabellini-ThinkPad-T480s>
References: <AM4PR0501MB2227089FDDF0209EF6E215D9E6100@AM4PR0501MB2227.eurprd05.prod.outlook.com> <alpine.DEB.2.21.2011101842500.21307@sstabellini-ThinkPad-T480s> <DB6PR0402MB27608A03EC717053E392A92988E80@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <HE1PR05MB47940ED4E5FDC0BADC54C8E78BE80@HE1PR05MB4794.eurprd05.prod.outlook.com> <DB6PR0402MB2760CEEABA9F52CDEB27C1DB88E80@DB6PR0402MB2760.eurprd04.prod.outlook.com> <HE1PR05MB47944761ED6A26D3E2CE15868BE40@HE1PR05MB4794.eurprd05.prod.outlook.com>
 <alpine.DEB.2.21.2011161656080.20906@sstabellini-ThinkPad-T480s> <HE1PR05MB4794569AC67109AF8B6517268BE20@HE1PR05MB4794.eurprd05.prod.outlook.com> <alpine.DEB.2.21.2011171544380.438@sstabellini-ThinkPad-T480s> <5dc63ee2-f1ce-31fc-cb6a-fe4dae929fb3@xen.org>
 <HE1PR05MB4794EBDD1FE29BC69D0BCC898BFD0@HE1PR05MB4794.eurprd05.prod.outlook.com> <b67581c6-6682-5059-55d1-a9c695a8cdc3@xen.org> <HE1PR05MB4794FE31A2BDE8BC458D81848BFB0@HE1PR05MB4794.eurprd05.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 273b1988-81c1-4cc5-e4a8-08d890e7e564
X-MS-TrafficTypeDiagnostic: BL0PR02MB3667:
X-Microsoft-Antispam-PRVS:
	<BL0PR02MB36676EFEF1B9EA9FEE19980DA0FA0@BL0PR02MB3667.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	g+SDvn+YPn0lDNeHV9XnvgqswmS03uYbhAyle1BPsp9WDwcurNbVzWpolC8COsF1e+QlGVutxN6anKidOYKWU7QBJi8c8N3bod9+gN5vS3cCuDke7vzlrh1g+/5wK9jrO/qiDuHdFz9RM+dIRajdR66YxzB55xWnUcjKjr47FQ9D/OBqut+P5pAq9/Tl0P7shwFKfEkxqkSUqzH2IBeVJQ1CbVk+EtJBIs3WzJl30XL4qjCLD92uIujnjxMX7oDs/yg61UQ0vL7LO2evnYoUK4DEXmKg4JVfWrYf3G7LIC+Xf1iGD/kEEwqNpiymJiQQagRKGaREfP1NhveesitAzg8lCtO7BTUqEH+oFiBq5dr2GMJ2jv7HAP+Tm1T6jc0En08UAeB+4Xa/KVuJYsxgzBoVRqgW2338cGZ/2ha/MtqOUdc8udZ7UfiKxAZ7+IwaJe/ZyACVSWkbPQ3yrtCYFWpkIhYV3CsEOY6zz8To20s=
X-Forefront-Antispam-Report:
	CIP:149.199.62.198;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapexch01.xlnx.xilinx.com;PTR:unknown-62-198.xilinx.com;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(376002)(39840400004)(396003)(46966005)(33716001)(426003)(9686003)(82310400003)(2906002)(966005)(44832011)(4326008)(70586007)(70206006)(83380400001)(8676002)(26005)(54906003)(356005)(9786002)(7636003)(316002)(47076004)(36906005)(5660300002)(186003)(336012)(478600001)(110136005)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Nov 2020 02:14:53.2625
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 273b1988-81c1-4cc5-e4a8-08d890e7e564
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.62.198];Helo=[xsj-pvapexch01.xlnx.xilinx.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY1NAM02FT033.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR02MB3667

+ Zhiqiang Hou

On Tue, 24 Nov 2020, Leo Krueger wrote:
> > >>> On Tue, 17 Nov 2020, Leo Krueger wrote:
> > >>>> Hi,
> > >>>>
> > >>>> I enabled CONFIG_HAS_ITS (what a stupid mistake by me to not set it
> > >>>> before...) but then had to add the following node to my device tree
> > >>>>
> > >>>> 	gic_lpi_base: syscon@0x80000000 {
> > >>>> 		compatible = "gic-lpi-base";
> > >>
> > >> I couldn't find this compatible defined/used in Linux 5.10-rc4. @Leo,
> > >> could you clarify which flavor/version of Linux you are using?
> > >
> > > It is Linux 4.19 from Yocto (Warror release). XEN 4.13.2.
> > 
> > Do you have a link to the Linux tree? Is there any additional patches on top of
> > vanilla?
> 
> Linux tree is found here: https://github.com/kontron/linux-smarc-sal28/commits/master-LSDK-19.09
> (up to the latest commit in that branch)

[...]

> > Looking at the DT changes in [0], it looks like the node is not a child of gic@.
> > So I think Xen will map the region to Dom0.
> > 
> > There are two things that I can notice:
> >    1) This region is RAM, but I can't find any reserve node. Is there any specific
> > code in Linux to reserve it?
> >    2) The implementation in U-boot seems to suggest that the firmware will
> > configure the LPIs and then enable it. If that's the case, then Xen needs to
> > re-use the table in the DT rather than allocating a new one.

That Linux tree has no mention of gic-lpi-base, which means that
gic-lpi-base is only used in U-Boot, not in Linux. In particular the
most relevant commit is af288cb291da3abef6be0875527729296f7de7a0.

Regarding the reserved-memory regions, maybe we are not seeing them
because Leo posted the host device tree rather than the one passed at
runtime from U-Boot to Linux?

If so, Leo, could you please boot Linux natively (no Xen) and extract
the device tree at runtime using "dtc -I fs -O dts /proc/device-tree"?


However, the name of the reserved-memory region created by U-Boot seems
to be "lpi_rd_table". I cannot find any mention of lpi_rd_table in the
Linux kernel tree either.
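
For reference, the kind of reserved-memory node one would expect to see
in the runtime device tree looks roughly like this. The node name,
address and size below are hypothetical, for illustration only; they
are not taken from the actual board:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Hypothetical carve-out for ITS/LPI tables that firmware
		 * has pre-configured; address and size are illustrative. */
		lpi_rd_table: lpi-rd-table@80000000 {
			reg = <0x0 0x80000000 0x0 0x100000>;
			no-map;
		};
	};
};
```

If U-Boot inserts a node like this, it should show up in the runtime
device tree dump; if nothing like it is there, the region is plain RAM
from Linux's (and Xen's) point of view.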

Zhiqiang, Leo is trying to boot Xen on sAL28. Linux booting on Xen
throws errors during GIC/ITS initialization. On other hardware Xen can
use and virtualize GICv3 and the ITS just fine. Could you please
explain what is different about sAL28 and how Xen/Linux is expected to
use the lpi_rd_table reserved-memory region?


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 03:38:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 03:38:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37121.69306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khlcq-0004ZF-DM; Wed, 25 Nov 2020 03:37:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37121.69306; Wed, 25 Nov 2020 03:37:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khlcq-0004Z8-AU; Wed, 25 Nov 2020 03:37:48 +0000
Received: by outflank-mailman (input) for mailman id 37121;
 Wed, 25 Nov 2020 03:37:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOsx=E7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1khlco-0004Z3-Or
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 03:37:46 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7dc2baf2-2a8a-441f-abf9-0552551c23a6;
 Wed, 25 Nov 2020 03:37:45 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AP3bXaX027586
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 24 Nov 2020 22:37:39 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0AP3bWYN027585;
 Tue, 24 Nov 2020 19:37:32 -0800 (PST) (envelope-from ehem)
Date: Tue, 24 Nov 2020 19:37:32 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

I finally have U-Boot -> GRUB -> Xen sort-of operational as an
alternative to Tianocore -> GRUB -> Xen on a Raspberry Pi 4B.

Stefano Stabellini, how much of the Raspberry Pi 4B hardware have you
observed being operational under Linux on Xen?  In particular, have you
ever observed operational graphical output from a Raspberry Pi 4B running
Xen?

Presently I'm using a 5.8 kernel with your patches and haven't seen
graphical output under Xen with either boot stack.  I've confirmed fully
operational graphics on Tianocore without Xen, and operational
virtual terminals with U-Boot without Xen.

I had been planning to wait a bit before moving to 5.9, but if that is
the crucial ingredient I'll move early.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:01:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:01:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37129.69319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khm01-0007DU-6r; Wed, 25 Nov 2020 04:01:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37129.69319; Wed, 25 Nov 2020 04:01:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khm01-0007DN-3j; Wed, 25 Nov 2020 04:01:45 +0000
Received: by outflank-mailman (input) for mailman id 37129;
 Wed, 25 Nov 2020 04:01:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=al8A=E7=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1khm00-0007DI-EB
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:01:44 +0000
Received: from mail-qk1-x72a.google.com (unknown [2607:f8b0:4864:20::72a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33bb8c92-daab-452b-8a39-7520ae97d78b;
 Wed, 25 Nov 2020 04:01:43 +0000 (UTC)
Received: by mail-qk1-x72a.google.com with SMTP id v143so2142462qkb.2
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 20:01:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=cwBJSzcLltCkr02NOG52BIl8E1NR/1H/uKp+rG9k5s0=;
        b=EWHp3YHqeX1qCRwKJKRh95HKQNYKswbd9wRrXAR9kXwLihmHoe+JPaYgHLJ170IcRP
         UXjg+q+A6Xy1rQavbVjcGr0aMjrpvdLCZefGLiiL3TL/8vDWKgoutu+wa32D6vFCwK4L
         JPOTYcZcXIlV9xwO2mXdVzOra6lqbAeqIAwVG3p6I6Kv+7feKGRsTXniwL0U7PnftpaP
         rQk84uq34mSSjl6+RxDgbCFwByh1HonCmue84SE0aPA2dDidJXtvmmg6h7zrcvd5dJ5g
         rXo9M53GVlvMa2LMPaaPcfL6BugzChhFpFe7TvhXo8ldMtN9DZ0Jfntijg05QAXlN3oD
         8nsw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=cwBJSzcLltCkr02NOG52BIl8E1NR/1H/uKp+rG9k5s0=;
        b=CO9gHlF65OTT3aKMLTggrqWFMraDgeQWMkK6Zlq2jauyzqOAFT8gIxrlvHXaSLJHkm
         FWuKtQfPYu2pR4pBNhDK5BYnUKmOb+x80Xr61fD52WSZh50zTXTDOSAWuno7CNdbTXQr
         GXBMRtykvAeaiMkvK4WZ2Kh9oOU7AZdBWAJzBbXtdUlRBCXOiaUep64j6UMLSAo6MDH4
         PJDDhgB+ApQowCeAUPeKLq5nJDctWjjEHQBKm+SLadBUDt65fzYpN5Kh59q8GDv5rW2o
         pCAvv5ReXpaupn46O5U+e++5Ubjdd8KZ4Z5dcRV1dO3H18OVRH4eQXi2f9DF3t/Ab35q
         MnQQ==
X-Gm-Message-State: AOAM530/i0/smRRZ1D4h9quhVo5MuNUOp7/eDU/3BSEAG7XfaOEe5dML
	Gwbraq8i2fOTIcDyLPvDumNUZ1xdKg8yT+UoZoSzpw==
X-Google-Smtp-Source: ABdhPJz10E/GEK4JzRUMT1KGKTT8tKoUJgRjpcfJG0w+ztjdMWqmb+vE7dyJbHxKj665OdNAQSxYa+J2HPEMwBpSvHk=
X-Received: by 2002:a37:a907:: with SMTP id s7mr1525786qke.157.1606276903217;
 Tue, 24 Nov 2020 20:01:43 -0800 (PST)
MIME-Version: 1.0
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
In-Reply-To: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Tue, 24 Nov 2020 20:01:32 -0800
Message-ID: <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
Subject: Re: Xen on RP4
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 24, 2020 at 7:37 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> I finally have U-Boot -> GRUB -> Xen sort-of operational as an

Yup -- same as what we're using in EVE -- we're now on the same page ;-)

> alternative to Tianocore -> GRUB -> Xen on a Raspberry Pi 4B.
>
> Stefano Stabellini, how much of the Raspberry Pi 4B hardware have you
> observed being operational under Linux on Xen?  In particular, have you
> ever observed operational graphical output from a Raspberry Pi 4B running
> Xen?

Pretty much everything is fully operational, including graphics (though
not the native one -- the default LK one).

> Presently I'm using a 5.8 kernel with your patches and haven't seen
> graphical output under Xen with either boot stack.  I've confirmed fully
> operational graphics on Tianocore without Xen, and operational
> virtual terminals with U-Boot without Xen.
>
> I had been planning to wait a bit before moving to 5.9, but if that is
> the crucial ingredient I'll move early.

We're still using 5.4 -- but it seems that the next LTS 5.10 is also functional.

I can bet $10 whatever it is -- it is DT related ;-)

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:27:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37136.69331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmP2-0000p7-5Z; Wed, 25 Nov 2020 04:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37136.69331; Wed, 25 Nov 2020 04:27:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmP2-0000p0-2S; Wed, 25 Nov 2020 04:27:36 +0000
Received: by outflank-mailman (input) for mailman id 37136;
 Wed, 25 Nov 2020 04:27:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmP1-0000ov-1y
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0be4d2de-51ca-4a82-947b-60e109efc04f;
 Wed, 25 Nov 2020 04:27:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 10C1920789;
 Wed, 25 Nov 2020 04:27:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278453;
	bh=PCH4N6dBQbNnVLc5jc5mn5eoHUGnxd6iVPbT0sBVRFY=;
	h=Date:From:To:cc:Subject:From;
	b=FRYj1Cre31TaU6QNRKRYoLBpyfRrdjeXdIDRGjdO+5E22R7UVRKqakhm9h5TFQnIg
	 79NSGv4zmureoLOwSzxLigwN0PeXGgH7h4pqzCG8kTeIMyTqOU1AD+1BLJDp4RXahn
	 /vh7hdq1axyXCOUuT+/x44aawAK5iYvO6ubSXOzU=
Date: Tue, 24 Nov 2020 20:27:21 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH v3 00/12] automation: improvements (mostly) for arm64
Message-ID: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This series does a few things:

1) it introduces a simple Xen arm64 dom0less smoke test based on QEMU
2) it introduces Alpine Linux builds for x86 and arm64
3) it introduces two tests-artifacts containers
4) it uses said artifacts to create a dom0/domU arm64 test based on QEMU

The series is v3, but in reality only 1) above was sent out before (the
first two patches). Everything else is new. All tests currently succeed.
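
For context, the dom0less smoke test in 1) amounts to booting Xen under
QEMU's aarch64 "virt" machine. A minimal sketch of such an invocation
follows; the flags are illustrative assumptions, not the actual contents
of qemu-smoke-arm64.sh, and the sketch only prints the command rather
than executing it, in case qemu-system-aarch64 is not installed:

```shell
# Sketch of a QEMU aarch64 smoke-test invocation. All flags here are
# illustrative assumptions, not the real script's contents. Build the
# command as a string and print it instead of running it, so the sketch
# works even where qemu-system-aarch64 is absent.
QEMU_SMOKE_CMD="qemu-system-aarch64 \
 -machine virt,gic-version=3 \
 -cpu cortex-a57 -smp 2 -m 1024 \
 -nographic \
 -kernel xen"
echo "$QEMU_SMOKE_CMD"
```

With no dom0 kernel or ramdisk supplied, Xen boots, prints its console
output on the emulated serial line, and then stops, which is exactly the
condition a smoke test can grep for.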



Stefano Stabellini (12):
      automation: add a QEMU aarch64 smoke test
      automation: add dom0less to the QEMU aarch64 smoke test
      automation: pass --disable-werror for QEMUU builds if libc is musl
      automation: add alpine linux 3.12 arm64 build container
      automation: add alpine linux arm64 build test
      automation: add alpine linux 3.12 x86 build container
      automation: add alpine linux x86 build jobs
      automation: add tests artifacts
      automation: make available the tests artifacts to the pipeline
      automation: create an alpine linux arm64 test job
      automation: use the tests-artifacts kernel for qemu-smoke-arm64-gcc
      automation: add domU creation to dom0 alpine linux test

 automation/build/alpine/3.12-arm64v8.dockerfile    |  52 +++++++++
 automation/build/alpine/3.12.dockerfile            |  52 +++++++++
 automation/gitlab-ci/build.yaml                    |  60 ++++++++++
 automation/gitlab-ci/test.yaml                     |  47 ++++++++
 automation/scripts/build                           |  12 +-
 automation/scripts/qemu-alpine-arm64.sh            | 121 +++++++++++++++++++++
 automation/scripts/qemu-smoke-arm64.sh             |  93 ++++++++++++++++
 automation/tests-artifacts/Makefile                |  19 ++++
 .../tests-artifacts/alpine/3.12-arm64v8.dockerfile |  67 ++++++++++++
 .../kernel/5.9.9-arm64v8.dockerfile                |  34 ++++++
 10 files changed, 554 insertions(+), 3 deletions(-)
 create mode 100644 automation/build/alpine/3.12-arm64v8.dockerfile
 create mode 100644 automation/build/alpine/3.12.dockerfile
 create mode 100755 automation/scripts/qemu-alpine-arm64.sh
 create mode 100755 automation/scripts/qemu-smoke-arm64.sh
 create mode 100644 automation/tests-artifacts/Makefile
 create mode 100644 automation/tests-artifacts/alpine/3.12-arm64v8.dockerfile
 create mode 100644 automation/tests-artifacts/kernel/5.9.9-arm64v8.dockerfile


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:27:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37137.69343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPG-0000sp-Er; Wed, 25 Nov 2020 04:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37137.69343; Wed, 25 Nov 2020 04:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPG-0000sh-BL; Wed, 25 Nov 2020 04:27:50 +0000
Received: by outflank-mailman (input) for mailman id 37137;
 Wed, 25 Nov 2020 04:27:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPE-0000sA-JE
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8af09075-fccc-4c2b-af61-14dbbae54f12;
 Wed, 25 Nov 2020 04:27:47 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C7C0720789;
 Wed, 25 Nov 2020 04:27:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278467;
	bh=/lcIGplqn6odufkVeXSQ/3A4o3V3IGNc2rmyv6gS60M=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=thMeD7fw10ItmbTZlWIPLwsKzJ86JzKQYQMwg+xxRnRmiHvKRndVNGYG5ipEZJiWe
	 l8Ek0n74OaUVlJChrmZugykQHs017rCcU48YkyoMJi/CJbdJTR/sPjE9IOpVDE2MFu
	 GHjcyf+1+H2xy4G7Dl8oVyM9JMB9EqZVvq1Bmme8=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 01/12] automation: add a QEMU aarch64 smoke test
Date: Tue, 24 Nov 2020 20:27:34 -0800
Message-Id: <20201125042745.31986-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Use QEMU to start Xen (just the hypervisor) and run it until it stops
because there is no dom0 kernel to boot.

The new test job is based on the existing build job unstable-arm64v8.

Also use make -j$(nproc) to build Xen.
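
A minimal sketch of what the -j change does ($(nproc) expands to the
number of available CPUs; the make invocation is shown as a comment,
not executed, and xen/ stands for the hypervisor source tree):

```shell
#!/bin/sh
# nproc reports the number of processing units available to this process,
# so -j$(nproc) runs one build job per CPU instead of a serial build.
jobs=$(nproc)
echo "parallel build jobs: ${jobs}"
# The CI build script would then run, for example:
#   make -j"${jobs}" -C xen defconfig
```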

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- fix x86_32 build
---
 automation/gitlab-ci/test.yaml         | 22 ++++++++++++++++++
 automation/scripts/build               |  6 ++---
 automation/scripts/qemu-smoke-arm64.sh | 32 ++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 3 deletions(-)
 create mode 100755 automation/scripts/qemu-smoke-arm64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 793feafe8b..35346e3f6e 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -22,6 +22,28 @@ build-each-commit-gcc:
     - /^coverity-tested\/.*/
     - /^stable-.*/
 
+qemu-smoke-arm64-gcc:
+  stage: test
+  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
+  variables:
+    CONTAINER: debian:unstable-arm64v8
+  script:
+    - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
+  dependencies:
+    - debian-unstable-gcc-arm64
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - arm64
+  except:
+    - master
+    - smoke
+    - /^coverity-tested\/.*/
+    - /^stable-.*/
+
 qemu-smoke-x86-64-gcc:
   stage: test
   image: registry.gitlab.com/xen-project/xen/${CONTAINER}
diff --git a/automation/scripts/build b/automation/scripts/build
index 0cd0f3971d..7038e5eb50 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -10,9 +10,9 @@ cc-ver()
 
 # random config or default config
 if [[ "${RANDCONFIG}" == "y" ]]; then
-    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
+    make -j$(nproc) -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
 else
-    make -C xen defconfig
+    make -j$(nproc) -C xen defconfig
 fi
 
 # build up our configure options
@@ -45,7 +45,7 @@ make -j$(nproc) dist
 # Extract artifacts to avoid getting rewritten by customised builds
 cp xen/.config xen-config
 mkdir binaries
-if [[ "${XEN_TARGET_ARCH}" == "x86_64" ]]; then
+if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
     cp xen/xen binaries/xen
 fi
 
diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
new file mode 100755
index 0000000000..a7efbf8b6f
--- /dev/null
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+set -ex
+
+# Install QEMU
+export DEBIAN_FRONTEND=noninteractive
+apt-get -qy update
+apt-get -qy install --no-install-recommends qemu-system-aarch64 \
+                                            u-boot-qemu
+
+# XXX Silly workaround to get the following QEMU command to work
+cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
+qemu-system-aarch64 \
+   -machine virtualization=true \
+   -cpu cortex-a57 -machine type=virt \
+   -m 512 -display none \
+   -machine dumpdtb=binaries/virt-gicv3.dtb
+
+rm -f smoke.serial
+set +e
+echo "  booti 0x49000000 - 0x44000000" | timeout -k 1 30 qemu-system-aarch64 \
+    -machine virtualization=true \
+    -cpu cortex-a57 -machine type=virt \
+    -m 512 -monitor none -serial stdio \
+    -no-reboot \
+    -device loader,file=binaries/virt-gicv3.dtb,force-raw=on,addr=0x44000000 \
+    -device loader,file=binaries/xen,force-raw=on,addr=0x49000000 \
+    -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
+
+set -e
+grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
+exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:27:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:27:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37138.69352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPG-0000tP-U1; Wed, 25 Nov 2020 04:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37138.69352; Wed, 25 Nov 2020 04:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPG-0000t5-Kc; Wed, 25 Nov 2020 04:27:50 +0000
Received: by outflank-mailman (input) for mailman id 37138;
 Wed, 25 Nov 2020 04:27:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPF-0000sM-2I
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 184141ad-5e70-411a-aa1a-4ce283c152d9;
 Wed, 25 Nov 2020 04:27:48 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 54E32208C3;
 Wed, 25 Nov 2020 04:27:47 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPF-0000sM-2I
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:49 +0000
X-Inumbo-ID: 184141ad-5e70-411a-aa1a-4ce283c152d9
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 184141ad-5e70-411a-aa1a-4ce283c152d9;
	Wed, 25 Nov 2020 04:27:48 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 54E32208C3;
	Wed, 25 Nov 2020 04:27:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278467;
	bh=5GTondIZ9x95JwAWG7VH9BFnfiLb+jzjLUoryd7uRUE=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=qBoTDnXszD/BJwwJaEMf+0i0ckDe7+5X3utXPvtY+j8l9e0XdtGtZ3/HEdwE1MT3O
	 Sqns+slnNOJkeYoC65GPA46xlgqS4svuaMF4/tYj+UoJb+7vK1v97hFGwZ+18yg17i
	 Ike0Ts/s5FLREal2Lhccl5VSMSgDFCzCQzMX8p7E=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 02/12] automation: add dom0less to the QEMU aarch64 smoke test
Date: Tue, 24 Nov 2020 20:27:35 -0800
Message-Id: <20201125042745.31986-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Add a trivial dom0less test:
- fetch the Debian arm64 kernel and use it as the dom0/domU kernel
- use busybox-static to create a trivial dom0/domU ramdisk
- use ImageBuilder to generate the u-boot boot script automatically
- install and use u-boot from the Debian package to start the test
- binaries are loaded by u-boot via TFTP
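
The pass criterion used by this test can be sketched as follows: the
serial log must contain a BusyBox shell banner from both dom0 and the
domU (the log lines below are illustrative stand-ins, not real boot
output):

```shell
#!/bin/sh
# Write a stand-in serial log containing the two banners the test greps for.
cat > /tmp/smoke.serial <<'EOF'
BusyBox v1.30.1 (Debian) built-in shell (ash)
DOM1: BusyBox v1.30.1 (Debian) built-in shell (ash)
EOF
# Same check as the test script: both the dom0 BusyBox banner and the
# "DOM1:"-prefixed domU banner must be present for the test to pass.
if grep -q "^BusyBox" /tmp/smoke.serial && grep -q "DOM1: BusyBox" /tmp/smoke.serial; then
    echo PASS
else
    echo FAIL
fi
```

With both banners present this prints PASS.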

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v3:
- don't hardcode the Linux kernel version in the testing script

Changes in v2:
- use the Debian kernel for testing
---
 automation/scripts/qemu-smoke-arm64.sh | 81 +++++++++++++++++++++++---
 1 file changed, 74 insertions(+), 7 deletions(-)

diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
index a7efbf8b6f..9bf4488115 100755
--- a/automation/scripts/qemu-smoke-arm64.sh
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -6,27 +6,94 @@ set -ex
 export DEBIAN_FRONTEND=noninteractive
 apt-get -qy update
 apt-get -qy install --no-install-recommends qemu-system-aarch64 \
-                                            u-boot-qemu
+                                            u-boot-qemu \
+                                            u-boot-tools \
+                                            device-tree-compiler \
+                                            busybox-static \
+                                            cpio
+
+cd binaries
+apt-get download linux-image-*[0-9]-arm64
+dpkg -i --ignore-depends=initramfs-tools ./linux-image-*arm64.deb || true
+cp /boot/vmlinuz-*arm64 ./Image
+cd ..
 
 # XXX Silly workaround to get the following QEMU command to work
 cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
 qemu-system-aarch64 \
    -machine virtualization=true \
    -cpu cortex-a57 -machine type=virt \
-   -m 512 -display none \
+   -m 1024 -display none \
    -machine dumpdtb=binaries/virt-gicv3.dtb
+# XXX disable pl061 to avoid Linux crash
+dtc -I dtb -O dts binaries/virt-gicv3.dtb > binaries/virt-gicv3.dts
+sed 's/compatible = "arm,pl061.*/status = "disabled";/g' binaries/virt-gicv3.dts > binaries/virt-gicv3-edited.dts
+dtc -I dts -O dtb binaries/virt-gicv3-edited.dts > binaries/virt-gicv3.dtb
+
+
+# Busybox Dom0
+mkdir -p initrd
+mkdir -p initrd/bin
+mkdir -p initrd/sbin
+mkdir -p initrd/etc
+mkdir -p initrd/dev
+mkdir -p initrd/proc
+mkdir -p initrd/sys
+mkdir -p initrd/lib
+mkdir -p initrd/var
+mkdir -p initrd/mnt
+cp /bin/busybox initrd/bin/busybox
+initrd/bin/busybox --install initrd/bin
+echo "#!/bin/sh
+
+mount -t proc proc /proc
+mount -t sysfs sysfs /sys
+mount -t devtmpfs devtmpfs /dev
+/bin/sh" > initrd/init
+chmod +x initrd/init
+cd initrd
+find . | cpio --create --format='newc' | gzip > ../binaries/initrd
+cd ..
+
+
+# ImageBuilder
+echo 'MEMORY_START="0x40000000"
+MEMORY_END="0x80000000"
 
+DEVICE_TREE="virt-gicv3.dtb"
+XEN="xen"
+DOM0_KERNEL="Image"
+DOM0_RAMDISK="initrd"
+XEN_CMD="console=dtuart dom0_mem=512M"
+
+NUM_DOMUS=1
+DOMU_KERNEL[0]="Image"
+DOMU_RAMDISK[0]="initrd"
+DOMU_MEM[0]="256"
+
+LOAD_CMD="tftpb"
+UBOOT_SOURCE="boot.source"
+UBOOT_SCRIPT="boot.scr"' > binaries/config
+rm -rf imagebuilder
+git clone https://gitlab.com/ViryaOS/imagebuilder
+bash imagebuilder/scripts/uboot-script-gen -t tftp -d binaries/ -c binaries/config
+
+
+# Run the test
 rm -f smoke.serial
 set +e
-echo "  booti 0x49000000 - 0x44000000" | timeout -k 1 30 qemu-system-aarch64 \
+echo "  virtio scan; dhcp; tftpb 0x40000000 boot.scr; source 0x40000000"| \
+timeout -k 1 240 \
+qemu-system-aarch64 \
     -machine virtualization=true \
     -cpu cortex-a57 -machine type=virt \
-    -m 512 -monitor none -serial stdio \
+    -m 1024 -monitor none -serial stdio \
+    -smp 2 \
     -no-reboot \
-    -device loader,file=binaries/virt-gicv3.dtb,force-raw=on,addr=0x44000000 \
-    -device loader,file=binaries/xen,force-raw=on,addr=0x49000000 \
+    -device virtio-net-pci,netdev=n0 \
+    -netdev user,id=n0,tftp=binaries \
     -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
 
 set -e
-grep -q 'LOADING DOMAIN 0' smoke.serial || exit 1
+(grep -q "^BusyBox" smoke.serial && grep -q "DOM1: BusyBox" smoke.serial) || exit 1
 exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:27:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:27:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37139.69364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPH-0000vi-Ki; Wed, 25 Nov 2020 04:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37139.69364; Wed, 25 Nov 2020 04:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPH-0000uy-BE; Wed, 25 Nov 2020 04:27:51 +0000
Received: by outflank-mailman (input) for mailman id 37139;
 Wed, 25 Nov 2020 04:27:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPF-0000sA-Gi
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d84af08-07af-408b-9549-f1d41c7a8398;
 Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DAA0120B80;
 Wed, 25 Nov 2020 04:27:47 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPF-0000sA-Gi
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:49 +0000
X-Inumbo-ID: 5d84af08-07af-408b-9549-f1d41c7a8398
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5d84af08-07af-408b-9549-f1d41c7a8398;
	Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id DAA0120B80;
	Wed, 25 Nov 2020 04:27:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278468;
	bh=CSa1gG/RT/f6nzrrY+UMikNynTa9ntaJ3vNiNTwvCr8=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=1wZDPkPH+OUlmjAMm+FNgrz4aWEbP6zSYC8XHfBIRR2qqUb7tBS7kyTCCYRAHlWfL
	 xE0iP4zVKTyOKHErM/NCtGCTRgFeSLPgKcOcKFI0X/3pamr/+wQQ+I1CDWrXx9we11
	 wQMkDCDnwOOGe8xxvmLqRqUpWNI7hsM3MS+7hqy0=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 03/12] automation: pass --disable-werror for QEMUU builds if libc is musl
Date: Tue, 24 Nov 2020 20:27:36 -0800
Message-Id: <20201125042745.31986-3-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

QEMU upstream builds with warnings when libc is musl:

  #warning redirecting incorrect #include <sys/signal.h> to <signal.h>

Disable -Werror by passing --disable-werror to the QEMUU config script
if libc is musl.
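
The libc probe can be sketched like this (it works the same way as the
check in the patch: ldd output for a dynamically linked binary names
the loader, which contains "musl" on a musl system and not on glibc):

```shell
#!/bin/sh
# Probe the system libc: if ldd's output mentions musl, add the extra
# QEMUU configure argument; otherwise leave the argument list empty.
if ldd /bin/ls 2>/dev/null | grep -q musl; then
    cfgargs="--with-extra-qemuu-configure-args=\"--disable-werror\""
else
    cfgargs=""
fi
echo "extra configure args: ${cfgargs}"
```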

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/scripts/build | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/automation/scripts/build b/automation/scripts/build
index 7038e5eb50..3fb2fe134c 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -28,6 +28,11 @@ if [[ "${CC}" == "clang"* ]]; then
     cfgargs+=("--disable-stubdom")
 fi
 
+# pass --disable-werror to the QEMUU configure when building with musl
+if ! test -z "$(ldd /bin/ls|grep musl|head -1)"; then
+	cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
+fi
+
 # Qemu requires Python 3.5 or later
 if ! type python3 || python3 -c "import sys; res = sys.version_info < (3, 5); exit(not(res))"; then
     cfgargs+=("--with-system-qemu=/bin/false")
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:27:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37140.69379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPK-00010B-UZ; Wed, 25 Nov 2020 04:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37140.69379; Wed, 25 Nov 2020 04:27:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPK-0000zz-PI; Wed, 25 Nov 2020 04:27:54 +0000
Received: by outflank-mailman (input) for mailman id 37140;
 Wed, 25 Nov 2020 04:27:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPJ-0000sA-Hu
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bcb38737-3533-4975-bab7-18c9db4864da;
 Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1802521D40;
 Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPJ-0000sA-Hu
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:53 +0000
X-Inumbo-ID: bcb38737-3533-4975-bab7-18c9db4864da
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id bcb38737-3533-4975-bab7-18c9db4864da;
	Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 1802521D40;
	Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278471;
	bh=RQh+Iw+5p9me1VgNkCXV0YqsNm47AcKNEZ7dpALNXQM=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=V7Ue731QX343pJJLJNDaVMEVtBDt1QdCJ20D7/6AV5cPlZntGBIvl2Nu6bspfWnxV
	 HAuYghCJG/QaphnnwDDnCYJcQ3bsoIDZa01HPY6V1J+r9GNT2N+eFD9MRAkB2wiQ9C
	 Vm6QfO8awtWnDiH/gUdUrKdcOabN/0D0Q1YJZcGA=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 09/12] automation: make available the tests artifacts to the pipeline
Date: Tue, 24 Nov 2020 20:27:42 -0800
Message-Id: <20201125042745.31986-9-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

To make the pre-built binaries from the automation/tests-artifacts
containers available to the gitlab-ci pipeline, we need to export them
as gitlab artifacts.

To do that, we create two "fake" jobs that simply export the required
binaries as artifacts and do nothing else.
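
Each job's script boils down to copying a pre-built file out of the
container image into the binaries/ directory that GitLab collects via
artifacts:paths. A sketch under a temporary directory (the paths are
stand-ins for the container's real /Image):

```shell
#!/bin/sh
set -e
# Stand-in for a file baked into the tests-artifacts container image.
workdir=$(mktemp -d)
echo "prebuilt kernel" > "${workdir}/Image"
# What the "fake" export job runs: create binaries/ and copy the
# pre-built file into it so GitLab can pick it up as an artifact.
mkdir "${workdir}/binaries"
cp "${workdir}/Image" "${workdir}/binaries/Image"
ls "${workdir}/binaries"
```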

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/gitlab-ci/build.yaml | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index c48c0f3d66..e5246828f8 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -468,3 +468,29 @@ alpine-3.12-gcc-debug-arm64:
   extends: .gcc-arm64-build-debug
   variables:
     CONTAINER: alpine:3.12-arm64v8
+
+
+# Arm test artifacts
+
+alpine-3.12-arm64-rootfs-export:
+  stage: build
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12-arm64v8
+  script:
+    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
+  artifacts:
+    paths:
+      - binaries/initrd.tar.gz
+  tags:
+    - arm64
+
+kernel-5.9.9-arm64-export:
+  stage: build
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:5.9.9-arm64v8
+  script:
+    - mkdir binaries && cp /Image binaries/Image
+  artifacts:
+    paths:
+      - binaries/Image
+  tags:
+    - arm64
+
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:27:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:27:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37141.69385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPL-00011H-DX; Wed, 25 Nov 2020 04:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37141.69385; Wed, 25 Nov 2020 04:27:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPL-00010s-59; Wed, 25 Nov 2020 04:27:55 +0000
Received: by outflank-mailman (input) for mailman id 37141;
 Wed, 25 Nov 2020 04:27:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPJ-0000sM-W3
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27dba9ea-9baf-49e2-9528-4bfb9ac189fd;
 Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6893320DD4;
 Wed, 25 Nov 2020 04:27:48 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPJ-0000sM-W3
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:54 +0000
X-Inumbo-ID: 27dba9ea-9baf-49e2-9528-4bfb9ac189fd
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 27dba9ea-9baf-49e2-9528-4bfb9ac189fd;
	Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 6893320DD4;
	Wed, 25 Nov 2020 04:27:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278468;
	bh=bKnR+2zoa7sLbY03YwLsVroT6DJBBetQKM/lBzR8Y7g=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=jbJ50Q9BcJMOAqVD8UM9dUWPmr8HUr8PANaxbNseQmsJp2lwbjhsHeUCfpZWwQlYs
	 NfzoMQDRKJqfBf+aIzUXJSCedsmbxjpZb6MN8UwZYF3p97nDjOv2qWK4X615iOA7LQ
	 x7ZBQXftuWnwcztJre2aO3ZTCJAwMrTEaogbp+gI=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 04/12] automation: add alpine linux 3.12 arm64 build container
Date: Tue, 24 Nov 2020 20:27:37 -0800
Message-Id: <20201125042745.31986-4-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

The build container will be used for a new Alpine Linux 3.12 arm64 build
test.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 .../build/alpine/3.12-arm64v8.dockerfile      | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100644 automation/build/alpine/3.12-arm64v8.dockerfile

diff --git a/automation/build/alpine/3.12-arm64v8.dockerfile b/automation/build/alpine/3.12-arm64v8.dockerfile
new file mode 100644
index 0000000000..d6cdf5b200
--- /dev/null
+++ b/automation/build/alpine/3.12-arm64v8.dockerfile
@@ -0,0 +1,52 @@
+FROM arm64v8/alpine:3.12
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+ENV USER root
+
+RUN mkdir /build
+WORKDIR /build
+
+# build depends
+RUN \
+  # apk
+  apk update && \
+  \
+  # xen build deps
+  apk add argp-standalone && \
+  apk add autoconf && \
+  apk add automake && \
+  apk add bash && \
+  apk add curl && \
+  apk add curl-dev && \
+  apk add dev86 && \
+  apk add dtc-dev && \
+  apk add gcc  && \
+  apk add gettext && \
+  apk add git && \
+  apk add iasl && \
+  apk add libaio-dev && \
+  apk add libfdt && \
+  apk add linux-headers && \
+  apk add make && \
+  apk add musl-dev  && \
+  apk add ncurses-dev && \
+  apk add patch  && \
+  apk add python3-dev && \
+  apk add texinfo && \
+  apk add util-linux-dev && \
+  apk add xz-dev && \
+  apk add yajl-dev && \
+  apk add zlib-dev && \
+  \
+  # qemu build deps
+  apk add bison && \
+  apk add flex && \
+  apk add glib-dev && \
+  apk add libattr && \
+  apk add libcap-ng-dev && \
+  apk add pixman-dev && \
+  \
+  # cleanup
+  rm -rf /tmp/* && \
+  rm -f /var/cache/apk/*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37143.69403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPQ-0001BR-NM; Wed, 25 Nov 2020 04:28:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37143.69403; Wed, 25 Nov 2020 04:28:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPQ-0001BF-Hu; Wed, 25 Nov 2020 04:28:00 +0000
Received: by outflank-mailman (input) for mailman id 37143;
 Wed, 25 Nov 2020 04:27:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPO-0000sM-WE
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2274826e-1858-4630-8bfe-2a38e3994241;
 Wed, 25 Nov 2020 04:27:50 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F25882173E;
 Wed, 25 Nov 2020 04:27:48 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPO-0000sM-WE
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:27:59 +0000
X-Inumbo-ID: 2274826e-1858-4630-8bfe-2a38e3994241
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2274826e-1858-4630-8bfe-2a38e3994241;
	Wed, 25 Nov 2020 04:27:50 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id F25882173E;
	Wed, 25 Nov 2020 04:27:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278469;
	bh=Q5fWCGGnyKXj094ENtB0LrY+WONobsflgJQPten0+30=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=ee+F27T5pD2L/x9ZrPJmxgxgXUcF+s+eYZ7j00H7vLHEnVNkRAHlTnCbVFbk11vCt
	 KWmyRgPl1OgVUlu4hO3I26dzZBe6mROx/DvC9q7z+UvwglH+gFHDXAds0K/+B/PFaH
	 qjtm+Z43Yrg8CGo2iI/ceLG5P17dW6GCu+qmer9E=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 05/12] automation: add alpine linux arm64 build test
Date: Tue, 24 Nov 2020 20:27:38 -0800
Message-Id: <20201125042745.31986-5-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Add arm64 build jobs based on the Alpine Linux 3.12 arm64 build container.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/gitlab-ci/build.yaml | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 4bd1cfc1c0..fa38c39d6a 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -434,3 +434,12 @@ debian-unstable-gcc-debug-arm64-randconfig:
     CONTAINER: debian:unstable-arm64v8
     RANDCONFIG: y
 
+alpine-3.12-gcc-arm64:
+  extends: .gcc-arm64-build
+  variables:
+    CONTAINER: alpine:3.12-arm64v8
+
+alpine-3.12-gcc-debug-arm64:
+  extends: .gcc-arm64-build-debug
+  variables:
+    CONTAINER: alpine:3.12-arm64v8
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37145.69415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPW-0001KH-2P; Wed, 25 Nov 2020 04:28:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37145.69415; Wed, 25 Nov 2020 04:28:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPV-0001K5-UA; Wed, 25 Nov 2020 04:28:05 +0000
Received: by outflank-mailman (input) for mailman id 37145;
 Wed, 25 Nov 2020 04:28:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPU-0000sM-08
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 976f4d4b-5100-479f-9da3-48865daf123c;
 Wed, 25 Nov 2020 04:27:50 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 755D0208C3;
 Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPU-0000sM-08
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:04 +0000
X-Inumbo-ID: 976f4d4b-5100-479f-9da3-48865daf123c
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 976f4d4b-5100-479f-9da3-48865daf123c;
	Wed, 25 Nov 2020 04:27:50 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 755D0208C3;
	Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278469;
	bh=KQsbwCY1k0XuZYdXiSfsiaplbcxxxjQeQ4+gIm03Bo4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=ZK8FpxKj+Nqy2ZUpuKcyjEWsJhA7/6x2cIFxmPOFR3pvJdjkxWfwe3vl91BkSVCKa
	 eKM90XH1uBWGtjW0o8bIu6cWh4qpAVd2V+RJdwRVVtNiGvb5vbz7fyUpubYXt6Fdoe
	 l7Ie98tR0bVFYWE5JXyMlk9h5TlzScf0r4kr2LUI=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 06/12] automation: add alpine linux 3.12 x86 build container
Date: Tue, 24 Nov 2020 20:27:39 -0800
Message-Id: <20201125042745.31986-6-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/build/alpine/3.12.dockerfile | 52 +++++++++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100644 automation/build/alpine/3.12.dockerfile

diff --git a/automation/build/alpine/3.12.dockerfile b/automation/build/alpine/3.12.dockerfile
new file mode 100644
index 0000000000..2c02417ee6
--- /dev/null
+++ b/automation/build/alpine/3.12.dockerfile
@@ -0,0 +1,52 @@
+FROM alpine:3.12
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+ENV USER root
+
+RUN mkdir /build
+WORKDIR /build
+
+# build depends
+RUN \
+  # apk
+  apk update && \
+  \
+  # xen build deps
+  apk add argp-standalone && \
+  apk add autoconf && \
+  apk add automake && \
+  apk add bash && \
+  apk add curl && \
+  apk add curl-dev && \
+  apk add dev86 && \
+  apk add gcc  && \
+  apk add clang  && \
+  apk add gettext && \
+  apk add git && \
+  apk add iasl && \
+  apk add libaio-dev && \
+  apk add linux-headers && \
+  apk add make && \
+  apk add musl-dev  && \
+  apk add libc6-compat && \
+  apk add ncurses-dev && \
+  apk add patch  && \
+  apk add python3-dev && \
+  apk add texinfo && \
+  apk add util-linux-dev && \
+  apk add xz-dev && \
+  apk add yajl-dev && \
+  apk add zlib-dev && \
+  \
+  # qemu build deps
+  apk add bison && \
+  apk add flex && \
+  apk add glib-dev && \
+  apk add libattr && \
+  apk add libcap-ng-dev && \
+  apk add pixman-dev && \
+  \
+  # cleanup
+  rm -rf /tmp/* && \
+  rm -f /var/cache/apk/*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37156.69427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPa-0001Sh-JC; Wed, 25 Nov 2020 04:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37156.69427; Wed, 25 Nov 2020 04:28:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPa-0001SQ-Ai; Wed, 25 Nov 2020 04:28:10 +0000
Received: by outflank-mailman (input) for mailman id 37156;
 Wed, 25 Nov 2020 04:28:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPZ-0000sM-0O
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e39ba1c-edf1-48c4-ab82-3e2147a8fe83;
 Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 04CEA217A0;
 Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPZ-0000sM-0O
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:09 +0000
X-Inumbo-ID: 1e39ba1c-edf1-48c4-ab82-3e2147a8fe83
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1e39ba1c-edf1-48c4-ab82-3e2147a8fe83;
	Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 04CEA217A0;
	Wed, 25 Nov 2020 04:27:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278470;
	bh=4BFMlUkVsDnSoLgmGArV/mCzcDdkcQd+lAhVA8u7LRs=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=O0KJ2FeIWigon+n9L5r4/5UqYJVS9ALNLDtJtAYnxQddJQ7COG/nZHQKkow/ReJHJ
	 RjfEP3GZ/aXwveCH5Nk+/fokZE28D5Ya8C4yUFDR671G5gutvRoDyNrg9pVUARoLEF
	 vUN1cvvqYJBQRMWj5LAzzP2CsglE8Ka9SgdT7r84=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 07/12] automation: add alpine linux x86 build jobs
Date: Tue, 24 Nov 2020 20:27:40 -0800
Message-Id: <20201125042745.31986-7-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Allow failure for these jobs: they currently fail because hvmloader
doesn't build with musl, and with allow_failure set the failures don't
block the pipeline.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
This patch is probably required: https://github.com/alpinelinux/aports/blob/master/main/xen/musl-hvmloader-fix-stdint.patch
---
 automation/gitlab-ci/build.yaml | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index fa38c39d6a..c48c0f3d66 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -410,6 +410,31 @@ opensuse-leap-gcc-debug:
   variables:
     CONTAINER: suse:opensuse-leap
 
+alpine-3.12-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: alpine:3.12
+  allow_failure: true
+
+alpine-3.12-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: alpine:3.12
+  allow_failure: true
+
+alpine-3.12-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: alpine:3.12
+  allow_failure: true
+
+alpine-3.12-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: alpine:3.12
+  allow_failure: true
+
+
 # Arm builds
 
 debian-unstable-gcc-arm64:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37166.69439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPf-0001b1-T5; Wed, 25 Nov 2020 04:28:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37166.69439; Wed, 25 Nov 2020 04:28:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPf-0001ar-PA; Wed, 25 Nov 2020 04:28:15 +0000
Received: by outflank-mailman (input) for mailman id 37166;
 Wed, 25 Nov 2020 04:28:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPe-0000sM-0b
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cbb3747-beb1-43fc-882f-745bfb556278;
 Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 89EB621D1A;
 Wed, 25 Nov 2020 04:27:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPe-0000sM-0b
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:14 +0000
X-Inumbo-ID: 9cbb3747-beb1-43fc-882f-745bfb556278
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9cbb3747-beb1-43fc-882f-745bfb556278;
	Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 89EB621D1A;
	Wed, 25 Nov 2020 04:27:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278470;
	bh=QJUK/kTkebPG1QQfKh+8CPie3i+NnzX9DzdIZukokQQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Zdoi+xosVFK49d9uPE/T101CrMyOcvfwEy1p1imwULd4fLpROFxSNjG7/5UEsNz5S
	 qtqLk2FrNBsp/WP/nRTV/HWYvEBw/La1Q13Pb9BOHuHmZsekCfiAwCUfacQwG8dnGG
	 njqbBPbrBhb58qgKWR0qeSMAr8QEHeGE+V0wTdko=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 08/12] automation: add tests artifacts
Date: Tue, 24 Nov 2020 20:27:41 -0800
Message-Id: <20201125042745.31986-8-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Some tests (soon to come) will require pre-built binaries to run, such
as the Linux kernel binary. We don't want to rebuild the Linux kernel
for each gitlab-ci run: these builds should not be added to the current
list of build jobs.

Instead, create additional containers that today are built and uploaded
manually, but could be re-built automatically. The containers build the
required binaries during the "docker build" step and store them inside
the container itself.

gitlab-ci will be able to fetch these pre-built binaries during the
regular test runs, saving cycles.

Add two tests-artifacts containers:
- one to build an arm64 Linux kernel
- one to create an Alpine Linux arm64 rootfs for Dom0
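As a side note for reviewers, the Makefile in this patch derives each
image tag from its dockerfile path: ARTIFACT/VERSION.dockerfile is built
and tagged REGISTRY/ARTIFACT:VERSION via the pattern rule's $(@D) and
$(@F). A minimal shell sketch of that mapping (illustration only, not
part of the patch), using one of the dockerfiles added here:

```shell
# Sketch of the Makefile's tag derivation: ARTIFACT/VERSION.dockerfile
# becomes REGISTRY/ARTIFACT:VERSION, as the pattern rule does with
# $(@D) (directory part) and $(@F) (file part) of the target.
REGISTRY=registry.gitlab.com/xen-project/xen/tests-artifacts
path="alpine/3.12-arm64v8.dockerfile"
target="${path%.dockerfile}"   # strip suffix, like $(subst .dockerfile,,...)
artifact="${target%/*}"        # directory part, as $(@D)
version="${target##*/}"        # file part, as $(@F)
# Prints the docker build invocation the pattern rule would run:
echo "docker build -t $REGISTRY/$artifact:$version -f $path ${path%/*}"
```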

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/tests-artifacts/Makefile           | 19 ++++++
 .../alpine/3.12-arm64v8.dockerfile            | 67 +++++++++++++++++++
 .../kernel/5.9.9-arm64v8.dockerfile           | 34 ++++++++++
 3 files changed, 120 insertions(+)
 create mode 100644 automation/tests-artifacts/Makefile
 create mode 100644 automation/tests-artifacts/alpine/3.12-arm64v8.dockerfile
 create mode 100644 automation/tests-artifacts/kernel/5.9.9-arm64v8.dockerfile

diff --git a/automation/tests-artifacts/Makefile b/automation/tests-artifacts/Makefile
new file mode 100644
index 0000000000..8ca71b78ad
--- /dev/null
+++ b/automation/tests-artifacts/Makefile
@@ -0,0 +1,19 @@
+
+# the base of where these containers will appear
+REGISTRY := registry.gitlab.com/xen-project/xen/tests-artifacts
+CONTAINERS = $(subst .dockerfile,,$(wildcard */*.dockerfile))
+
+help:
+	@echo "Containers to build and export tests artifacts."
+	@echo "To build one run 'make ARTIFACT/VERSION'. Available containers:"
+	@$(foreach file,$(sort $(CONTAINERS)),echo ${file};)
+	@echo "To push container builds, set the env var PUSH"
+
+%: %.dockerfile ## Builds containers
+	docker build -t $(REGISTRY)/$(@D):$(@F) -f $< $(<D)
+	@if [ ! -z $${PUSH+x} ]; then \
+		docker push $(REGISTRY)/$(@D):$(@F); \
+	fi
+
+.PHONY: all
+all: $(CONTAINERS)
diff --git a/automation/tests-artifacts/alpine/3.12-arm64v8.dockerfile b/automation/tests-artifacts/alpine/3.12-arm64v8.dockerfile
new file mode 100644
index 0000000000..9457009452
--- /dev/null
+++ b/automation/tests-artifacts/alpine/3.12-arm64v8.dockerfile
@@ -0,0 +1,67 @@
+FROM arm64v8/alpine:3.12
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+ENV USER root
+
+RUN mkdir /build
+WORKDIR /build
+
+RUN \
+  # apk
+  apk update && \
+  \
+  # xen runtime deps
+  apk add musl && \
+  apk add openrc && \
+  apk add busybox && \
+  apk add sudo && \
+  apk add dbus && \
+  apk add bash && \
+  apk add python2 && \
+  apk add gettext && \
+  apk add zlib && \
+  apk add ncurses && \
+  apk add texinfo && \
+  apk add yajl && \
+  apk add libaio && \
+  apk add xz-dev && \
+  apk add util-linux && \
+  apk add argp-standalone && \
+  apk add libfdt && \
+  apk add glib && \
+  apk add pixman && \
+  apk add curl && \
+  apk add udev && \
+  \
+  # Xen
+  cd / && \
+  # Minimal ramdisk environment in case of cpio output
+  rc-update add udev && \
+  rc-update add udev-trigger && \
+  rc-update add udev-settle && \
+  rc-update add networking sysinit && \
+  rc-update add loopback sysinit && \
+  rc-update add bootmisc boot && \
+  rc-update add devfs sysinit && \
+  rc-update add dmesg sysinit && \
+  rc-update add hostname boot && \
+  rc-update add hwclock boot && \
+  rc-update add hwdrivers sysinit && \
+  rc-update add killprocs shutdown && \
+  rc-update add modloop sysinit && \
+  rc-update add modules boot && \
+  rc-update add mount-ro shutdown && \
+  rc-update add savecache shutdown && \
+  rc-update add sysctl boot && \
+  rc-update add local default && \
+  cp -a /sbin/init /init && \
+  echo "ttyS0" >> /etc/securetty && \
+  echo "hvc0" >> /etc/securetty && \
+  echo "ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100" >> /etc/inittab && \
+  echo "hvc0::respawn:/sbin/getty -L hvc0 115200 vt100" >> /etc/inittab && \
+  passwd -d "root" root && \
+  \
+  # Create rootfs
+  cd / && \
+  tar cvzf /initrd.tar.gz bin dev etc home init lib mnt opt root sbin usr var
diff --git a/automation/tests-artifacts/kernel/5.9.9-arm64v8.dockerfile b/automation/tests-artifacts/kernel/5.9.9-arm64v8.dockerfile
new file mode 100644
index 0000000000..053d65a345
--- /dev/null
+++ b/automation/tests-artifacts/kernel/5.9.9-arm64v8.dockerfile
@@ -0,0 +1,34 @@
+FROM arm64v8/debian:unstable
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+ENV DEBIAN_FRONTEND=noninteractive
+ENV LINUX_VERSION=5.9.9
+ENV USER root
+
+RUN mkdir /build
+WORKDIR /build
+
+# build depends
+RUN apt-get update && \
+    apt-get --quiet --yes install \
+        build-essential \
+        libssl-dev \
+        bc \
+        curl \
+        flex \
+        bison \
+        && \
+    \
+    # Build the kernel
+    curl -fsSLO https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-"$LINUX_VERSION".tar.xz && \
+    tar xvJf linux-"$LINUX_VERSION".tar.xz && \
+    cd linux-"$LINUX_VERSION" && \
+    make defconfig && \
+    make -j$(nproc) Image.gz && \
+    cp arch/arm64/boot/Image / && \
+    cd /build && \
+    rm -rf linux-"$LINUX_VERSION"* && \
+    apt-get autoremove -y && \
+    apt-get clean && \
+    rm -rf /var/lib/apt/lists* /tmp/* /var/tmp/*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37172.69451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPk-0001hT-HF; Wed, 25 Nov 2020 04:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37172.69451; Wed, 25 Nov 2020 04:28:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPk-0001hG-Bc; Wed, 25 Nov 2020 04:28:20 +0000
Received: by outflank-mailman (input) for mailman id 37172;
 Wed, 25 Nov 2020 04:28:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPj-0000sM-0r
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:19 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abf9b2c7-8f2c-4c38-9230-9998c7f079ec;
 Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 92E7E21D46;
 Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPj-0000sM-0r
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:19 +0000
X-Inumbo-ID: abf9b2c7-8f2c-4c38-9230-9998c7f079ec
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id abf9b2c7-8f2c-4c38-9230-9998c7f079ec;
	Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 92E7E21D46;
	Wed, 25 Nov 2020 04:27:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278471;
	bh=78C8JPAxWzx+2hjJoT6vrfTzB1NiGDxd/5tqEyKfQ3A=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=vvqEJlv+cqSCpmW9YC0oKgxLCKH13eKDoly9WpMFUxZZXSpNlUyu3Iu2yqd50IcWn
	 XvFbhFpLFwS50nggbNEi8L9uraCwmfqWC5jEG3EJOFin2U7gqRGnhL25lXaosH9u3M
	 VBrbhGnzLSXh79THhEYXPu/VSerG985scjsruWEY=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 10/12] automation: create an alpine linux arm64 test job
Date: Tue, 24 Nov 2020 20:27:43 -0800
Message-Id: <20201125042745.31986-10-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Create a test job that starts Xen and Dom0 on QEMU using the Alpine
Linux rootfs. Use the Linux kernel and rootfs from the tests-artifacts
containers, and add the Xen tools binaries from the Alpine Linux build
job.
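The test script below works around a Linux crash by disabling the pl061
GPIO node in the device tree QEMU generates: it decompiles the dtb with
dtc, rewrites the node with sed, and recompiles. For clarity, the sed
step can be reproduced on its own against a small hand-written dts
fragment (the node contents here are illustrative, not a real QEMU
dump):

```shell
# Reproduce only the sed edit from qemu-alpine-arm64.sh on a tiny
# illustrative fragment; the real script runs it on the full
# decompiled QEMU device tree.
cat > fragment.dts <<'EOF'
pl061@9030000 {
        compatible = "arm,pl061", "arm,primecell";
};
EOF
# Replace the whole compatible line with a disabled status, so Linux
# skips binding a driver to the node:
sed 's/compatible = "arm,pl061.*/status = "disabled";/g' fragment.dts
```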

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/gitlab-ci/test.yaml          | 24 +++++++
 automation/scripts/build                |  1 +
 automation/scripts/qemu-alpine-arm64.sh | 84 +++++++++++++++++++++++++
 3 files changed, 109 insertions(+)
 create mode 100755 automation/scripts/qemu-alpine-arm64.sh

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 35346e3f6e..a291538d68 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -22,6 +22,30 @@ build-each-commit-gcc:
     - /^coverity-tested\/.*/
     - /^stable-.*/
 
+qemu-alpine-arm64-gcc:
+  stage: test
+  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
+  variables:
+    CONTAINER: debian:unstable-arm64v8
+  script:
+    - ./automation/scripts/qemu-alpine-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
+  dependencies:
+    - alpine-3.12-gcc-arm64
+    - alpine-3.12-arm64-rootfs-export
+    - kernel-5.9.9-arm64-export
+  artifacts:
+    paths:
+      - smoke.serial
+      - '*.log'
+    when: always
+  tags:
+    - arm64
+  except:
+    - master
+    - smoke
+    - /^coverity-tested\/.*/
+    - /^stable-.*/
+
 qemu-smoke-arm64-gcc:
   stage: test
   image: registry.gitlab.com/xen-project/xen/${CONTAINER}
diff --git a/automation/scripts/build b/automation/scripts/build
index 3fb2fe134c..0e1ebd60f2 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -52,6 +52,7 @@ cp xen/.config xen-config
 mkdir binaries
 if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
     cp xen/xen binaries/xen
+    cp -r dist binaries/
 fi
 
 # Build all the configs we care about
diff --git a/automation/scripts/qemu-alpine-arm64.sh b/automation/scripts/qemu-alpine-arm64.sh
new file mode 100755
index 0000000000..7d1eb7d1b6
--- /dev/null
+++ b/automation/scripts/qemu-alpine-arm64.sh
@@ -0,0 +1,84 @@
+#!/bin/bash
+
+set -ex
+
+apt-get -qy update
+apt-get -qy install --no-install-recommends qemu-system-aarch64 \
+                                            u-boot-qemu \
+                                            u-boot-tools \
+                                            device-tree-compiler \
+                                            cpio \
+                                            curl
+
+mkdir -p binaries/rootfs
+cd binaries/rootfs
+tar xvzf ../initrd.tar.gz
+mkdir proc
+mkdir run
+mkdir srv
+mkdir sys
+rm var/run
+cp -ar ../dist/install/* .
+echo "#!/bin/bash
+
+export LD_LIBRARY_PATH=/usr/local/lib
+bash /etc/init.d/xencommons start
+
+xl list
+
+" > etc/local.d/xen.start
+chmod +x etc/local.d/xen.start
+echo "rc_verbose=yes" >> etc/rc.conf
+find . |cpio -H newc -o|gzip > ../xen-rootfs.cpio.gz
+cd ../..
+
+# XXX Silly workaround to get the following QEMU command to work
+cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
+qemu-system-aarch64 \
+   -machine virtualization=true \
+   -cpu cortex-a57 -machine type=virt \
+   -m 1024 -display none \
+   -machine dumpdtb=binaries/virt-gicv3.dtb
+# XXX disable pl061 to avoid Linux crash
+dtc -I dtb -O dts binaries/virt-gicv3.dtb > binaries/virt-gicv3.dts
+sed 's/compatible = "arm,pl061.*/status = "disabled";/g' binaries/virt-gicv3.dts > binaries/virt-gicv3-edited.dts
+dtc -I dts -O dtb binaries/virt-gicv3-edited.dts > binaries/virt-gicv3.dtb
+
+# ImageBuilder
+echo 'MEMORY_START="0x40000000"
+MEMORY_END="0x80000000"
+
+DEVICE_TREE="virt-gicv3.dtb"
+XEN="xen"
+DOM0_KERNEL="Image"
+DOM0_RAMDISK="xen-rootfs.cpio.gz"
+XEN_CMD="console=dtuart dom0_mem=1024M"
+
+NUM_DOMUS=0
+
+LOAD_CMD="tftpb"
+UBOOT_SOURCE="boot.source"
+UBOOT_SCRIPT="boot.scr"' > binaries/config
+rm -rf imagebuilder
+git clone https://gitlab.com/ViryaOS/imagebuilder
+bash imagebuilder/scripts/uboot-script-gen -t tftp -d binaries/ -c binaries/config
+
+
+# Run the test
+rm -f smoke.serial
+set +e
+echo "  virtio scan; dhcp; tftpb 0x40000000 boot.scr; source 0x40000000"| \
+timeout -k 1 480 \
+qemu-system-aarch64 \
+    -machine virtualization=true \
+    -cpu cortex-a57 -machine type=virt \
+    -m 2048 -monitor none -serial stdio \
+    -smp 2 \
+    -no-reboot \
+    -device virtio-net-pci,netdev=n0 \
+    -netdev user,id=n0,tftp=binaries \
+    -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
+
+set -e
+grep -q "Domain-0" smoke.serial || exit 1
+exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37179.69463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPp-0001oM-UT; Wed, 25 Nov 2020 04:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37179.69463; Wed, 25 Nov 2020 04:28:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPp-0001o9-Pb; Wed, 25 Nov 2020 04:28:25 +0000
Received: by outflank-mailman (input) for mailman id 37179;
 Wed, 25 Nov 2020 04:28:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPo-0000sM-0t
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f610be8-a076-4b62-aa4f-4b94e3612db5;
 Wed, 25 Nov 2020 04:27:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1BD122173E;
 Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPo-0000sM-0t
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:24 +0000
X-Inumbo-ID: 3f610be8-a076-4b62-aa4f-4b94e3612db5
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3f610be8-a076-4b62-aa4f-4b94e3612db5;
	Wed, 25 Nov 2020 04:27:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 1BD122173E;
	Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278472;
	bh=Ue92qEDF50KpJfvuNM6M4wzRTD3agPXEU66URBcepBc=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=j+9CxijplI5rPgATSgIdxhc7CsWdTP3/RcYTwsQnz6O9K4WwaxX4XJywdKxbPqPF+
	 Zv7A1PzIRAiVvgUP3i25rhO2fQyiarakXi91gV6Il4gmlC6NCNWG/xz5sW//VzlOeg
	 OjH45tXb5hjdfFbIyiVUluBpt131nKIAN+65g1e4=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 11/12] automation: use the tests-artifacts kernel for qemu-smoke-arm64-gcc
Date: Tue, 24 Nov 2020 20:27:44 -0800
Message-Id: <20201125042745.31986-11-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Use the tests-artifacts kernel, instead of the Debian kernel, for the
qemu-smoke-arm64-gcc job.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/gitlab-ci/test.yaml         | 1 +
 automation/scripts/qemu-smoke-arm64.sh | 6 ------
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index a291538d68..9448651187 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -55,6 +55,7 @@ qemu-smoke-arm64-gcc:
     - ./automation/scripts/qemu-smoke-arm64.sh 2>&1 | tee qemu-smoke-arm64.log
   dependencies:
     - debian-unstable-gcc-arm64
+    - kernel-5.9.9-arm64-export
   artifacts:
     paths:
       - smoke.serial
diff --git a/automation/scripts/qemu-smoke-arm64.sh b/automation/scripts/qemu-smoke-arm64.sh
index 9bf4488115..d614227f0a 100755
--- a/automation/scripts/qemu-smoke-arm64.sh
+++ b/automation/scripts/qemu-smoke-arm64.sh
@@ -12,12 +12,6 @@ apt-get -qy install --no-install-recommends qemu-system-aarch64 \
                                             busybox-static \
                                             cpio
 
-cd binaries
-apt-get download linux-image-*[0-9]-arm64
-dpkg -i --ignore-depends=initramfs-tools ./linux-image-*arm64.deb || true
-cp /boot/vmlinuz-*arm64 ./Image
-cd ..
-
 # XXX Silly workaround to get the following QEMU command to work
 cp /usr/share/qemu/pvh.bin /usr/share/qemu/efi-virtio.rom
 qemu-system-aarch64 \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:28:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:28:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37184.69475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPu-0001ty-9D; Wed, 25 Nov 2020 04:28:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37184.69475; Wed, 25 Nov 2020 04:28:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmPu-0001to-66; Wed, 25 Nov 2020 04:28:30 +0000
Received: by outflank-mailman (input) for mailman id 37184;
 Wed, 25 Nov 2020 04:28:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khmPt-0000sM-1D
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2735ef72-28c5-4302-9bb0-19c8cb949484;
 Wed, 25 Nov 2020 04:27:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9E25621D7A;
 Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1khmPt-0000sM-1D
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:28:29 +0000
X-Inumbo-ID: 2735ef72-28c5-4302-9bb0-19c8cb949484
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2735ef72-28c5-4302-9bb0-19c8cb949484;
	Wed, 25 Nov 2020 04:27:53 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 9E25621D7A;
	Wed, 25 Nov 2020 04:27:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606278472;
	bh=BsN+MdGSKW1XgXXksWV5TK4XWZsbUTOQRZGpT/TJoLk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=QvEmcM+cAnFDgFRTLrutIBjyVHt8PnS3FU8WH2usO8zBQ+Zf7Rmd5I8kN7TMma/cr
	 /bWgWrb5p0MZHsOYegmtDqu3zvAVSksQyD9URIMzMummurZfyQbi+kD1N9X1SCQ0gd
	 x0Zdx8UMiFsxo0ELEUVptNiLB15v0DoVLRawjamw=
From: Stefano Stabellini <sstabellini@kernel.org>
To: andrew.cooper3@citrix.com,
	cardoe@cardoe.com,
	wl@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v3 12/12] automation: add domU creation to dom0 alpine linux test
Date: Tue, 24 Nov 2020 20:27:45 -0800
Message-Id: <20201125042745.31986-12-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s>

Add a trivial Busybox-based domU.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 automation/scripts/qemu-alpine-arm64.sh | 47 ++++++++++++++++++++++---
 1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/automation/scripts/qemu-alpine-arm64.sh b/automation/scripts/qemu-alpine-arm64.sh
index 7d1eb7d1b6..bd8965e6f1 100755
--- a/automation/scripts/qemu-alpine-arm64.sh
+++ b/automation/scripts/qemu-alpine-arm64.sh
@@ -8,10 +8,36 @@ apt-get -qy install --no-install-recommends qemu-system-aarch64 \
                                             u-boot-tools \
                                             device-tree-compiler \
                                             cpio \
-                                            curl
+                                            curl \
+                                            busybox-static
 
-mkdir -p binaries/rootfs
-cd binaries/rootfs
+# DomU Busybox
+cd binaries
+mkdir -p initrd
+mkdir -p initrd/bin
+mkdir -p initrd/sbin
+mkdir -p initrd/etc
+mkdir -p initrd/dev
+mkdir -p initrd/proc
+mkdir -p initrd/sys
+mkdir -p initrd/lib
+mkdir -p initrd/var
+mkdir -p initrd/mnt
+cp /bin/busybox initrd/bin/busybox
+initrd/bin/busybox --install initrd/bin
+echo "#!/bin/sh
+
+mount -t proc proc /proc
+mount -t sysfs sysfs /sys
+mount -t devtmpfs devtmpfs /dev
+/bin/sh" > initrd/init
+chmod +x initrd/init
+cd initrd
+find . | cpio --create --format='newc' | gzip > ../initrd.cpio.gz
+cd ..
+
+mkdir -p rootfs
+cd rootfs
 tar xvzf ../initrd.tar.gz
 mkdir proc
 mkdir run
@@ -19,6 +45,15 @@ mkdir srv
 mkdir sys
 rm var/run
 cp -ar ../dist/install/* .
+mv ../initrd.cpio.gz ./root
+cp ../Image ./root
+echo "name=\"test\"
+memory=512
+vcpus=1
+kernel=\"/root/Image\"
+ramdisk=\"/root/initrd.cpio.gz\"
+extra=\"console=hvc0 root=/dev/ram0 rdinit=/bin/sh\"
+" > root/test.cfg
 echo "#!/bin/bash
 
 export LD_LIBRARY_PATH=/usr/local/lib
@@ -26,6 +61,8 @@ bash /etc/init.d/xencommons start
 
 xl list
 
+xl create -c /root/test.cfg
+
 " > etc/local.d/xen.start
 chmod +x etc/local.d/xen.start
 echo "rc_verbose=yes" >> etc/rc.conf
@@ -68,7 +105,7 @@ bash imagebuilder/scripts/uboot-script-gen -t tftp -d binaries/ -c binaries/conf
 rm -f smoke.serial
 set +e
 echo "  virtio scan; dhcp; tftpb 0x40000000 boot.scr; source 0x40000000"| \
-timeout -k 1 480 \
+timeout -k 1 720 \
 qemu-system-aarch64 \
     -machine virtualization=true \
     -cpu cortex-a57 -machine type=virt \
@@ -80,5 +117,5 @@ qemu-system-aarch64 \
     -bios /usr/lib/u-boot/qemu_arm64/u-boot.bin |& tee smoke.serial
 
 set -e
-grep -q "Domain-0" smoke.serial || exit 1
+(grep -q "Domain-0" smoke.serial && grep -q "BusyBox" smoke.serial) || exit 1
 exit 0
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:42:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:42:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37239.69487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmct-00040J-Ix; Wed, 25 Nov 2020 04:41:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37239.69487; Wed, 25 Nov 2020 04:41:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmct-00040B-FJ; Wed, 25 Nov 2020 04:41:55 +0000
Received: by outflank-mailman (input) for mailman id 37239;
 Wed, 25 Nov 2020 04:41:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOsx=E7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1khmcs-000406-42
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:41:54 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27c57388-8b16-46ff-96e8-2cfa3ffb75e9;
 Wed, 25 Nov 2020 04:41:53 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AP4ffo8027973
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 24 Nov 2020 23:41:47 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0AP4feW1027972;
 Tue, 24 Nov 2020 20:41:40 -0800 (PST) (envelope-from ehem)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TOsx=E7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1khmcs-000406-42
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:41:54 +0000
X-Inumbo-ID: 27c57388-8b16-46ff-96e8-2cfa3ffb75e9
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 27c57388-8b16-46ff-96e8-2cfa3ffb75e9;
	Wed, 25 Nov 2020 04:41:53 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AP4ffo8027973
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Tue, 24 Nov 2020 23:41:47 -0500 (EST)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 0AP4feW1027972;
	Tue, 24 Nov 2020 20:41:40 -0800 (PST)
	(envelope-from ehem)
Date: Tue, 24 Nov 2020 20:41:40 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen on RP4
Message-ID: <X73ghKgQEXLv2z2p@mattapan.m5p.com>
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
 <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Tue, Nov 24, 2020 at 08:01:32PM -0800, Roman Shaposhnik wrote:
> On Tue, Nov 24, 2020 at 7:37 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > Presently I'm using a 5.8 kernel with your patches and haven't seen
> > graphical output under Xen with either boot stack.  I've confirmed fully
> > operational graphics without Xen on Tianocore, I've confirmed operational
> > virtual terminals with U-Boot and not Xen.
> >
> > I had been planning to wait a bit before moving to 5.9, but if that is
> > the crucial ingredient I'll move early.
> 
> We're still using 5.4 -- but it seems that the next LTS 5.10 is also functional.
> 
> I can bet $10 whatever it is -- it is DT related ;-)

Given how many of the pieces I'm assembling are alpha or beta level, I
estimate a 50:50 chance on that.  Good odds it is device-tree, but good
odds I grabbed a bad version of $something.

I mostly wanted to know whether I was in completely uncharted territory
and needed to wait for others to catch up, versus merely working in a
situation where support is funky and I'm at an unknown location in
charted territory.

I'll be keeping the Tianocore setup available since Xen on ARM really
/should/ allow ACPI...


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Nov 25 04:45:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 04:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37246.69499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmgc-0004Br-4E; Wed, 25 Nov 2020 04:45:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37246.69499; Wed, 25 Nov 2020 04:45:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khmgc-0004Bk-0X; Wed, 25 Nov 2020 04:45:46 +0000
Received: by outflank-mailman (input) for mailman id 37246;
 Wed, 25 Nov 2020 04:45:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=al8A=E7=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1khmga-0004BZ-Ea
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:45:44 +0000
Received: from mail-qt1-x82d.google.com (unknown [2607:f8b0:4864:20::82d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03097ec4-8f92-4f73-bd0c-626decb9b687;
 Wed, 25 Nov 2020 04:45:43 +0000 (UTC)
Received: by mail-qt1-x82d.google.com with SMTP id t5so835477qtp.2
 for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 20:45:43 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=al8A=E7=zededa.com=roman@srs-us1.protection.inumbo.net>)
	id 1khmga-0004BZ-Ea
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 04:45:44 +0000
X-Inumbo-ID: 03097ec4-8f92-4f73-bd0c-626decb9b687
Received: from mail-qt1-x82d.google.com (unknown [2607:f8b0:4864:20::82d])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 03097ec4-8f92-4f73-bd0c-626decb9b687;
	Wed, 25 Nov 2020 04:45:43 +0000 (UTC)
Received: by mail-qt1-x82d.google.com with SMTP id t5so835477qtp.2
        for <xen-devel@lists.xenproject.org>; Tue, 24 Nov 2020 20:45:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=/+i1NuicU7Zmir8kXt8ybxJFulLsIkl2OPxPwAmEbCo=;
        b=NvnMVlQ3Br4hY8l2E+zSzIRD1sRtzZnn2ayc8KSNs+UYWSBODheOkOFhfHsqB+jH9m
         uXSHwaOQUtzeA/J6o/WjefoANTp8I1LWFxDMI6H1BkWs7zKYFCSRp3OaOSRIgNcsthky
         ZtEpvtIpWSykV9LlKm19yzsk6/0k1AyAeLkdhMH3mA2a0mlqqA2/Dy3u51jSsw4vyVtW
         fZwkeVltpCphT9Y5KiMFzJ/LPQKNlih9mrCDn2GeMWBKp3uUhGYTCW3OPKjkAlSM/Hao
         3j4yneIBd8gP+O06IortVVwnq2b3uhrUhvYXxC+8UtvQQYhH1D+RD3gRsxqR6aiWU2Pg
         BxIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=/+i1NuicU7Zmir8kXt8ybxJFulLsIkl2OPxPwAmEbCo=;
        b=VLLgwI71lvvDOyjmzb3gerQJ8ibahP1BfhBuGDIJzVH0pXVin7r7sn1Xptn2d94sNF
         3E6QNXgd9sYH4tSiXP9+UWUqwjsxFU8wS5qDb/RNZLirm8pSAdRQjik18OPyP4MCH3D8
         kqISVOmzdW5I+Wrvo/dbLrJN7/KYK8aofnN9xoUrqmT5I+VRDmUVayyTAJV+jU94c17x
         FxlCAGEBcpY4Q9q3u8WaPK/WHclI0twRBpSd+XAthRTLvwgYAprSB1NR6HzSe6CHj3uV
         j9sk3J4YvNKXRi9XrQICq/MfMTBb0lrBhe8HLRHUEuomyKNs57zcmpUtiy5kkT8b2OV0
         WiOw==
X-Gm-Message-State: AOAM533Lhpr4aoa5JJ702zFqBdhiSUY66AWSEk6JCZN8h0zwuftqKq5q
	SpFmE0ae6DUZOKjZE59bpU5ferFj8mjFet16CuctUg==
X-Google-Smtp-Source: ABdhPJzTDxqVaHXAPv1xmBHDSFnPqV/aUe7gtto4xc8NqPt4KYLMSk7uVzIs/LyqoK13uHVi1R3hDnFe2FXvQXjxloI=
X-Received: by 2002:aed:32c7:: with SMTP id z65mr1476685qtd.266.1606279543420;
 Tue, 24 Nov 2020 20:45:43 -0800 (PST)
MIME-Version: 1.0
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com> <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com>
In-Reply-To: <X73ghKgQEXLv2z2p@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Tue, 24 Nov 2020 20:45:32 -0800
Message-ID: <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
Subject: Re: Xen on RP4
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 24, 2020 at 8:41 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Tue, Nov 24, 2020 at 08:01:32PM -0800, Roman Shaposhnik wrote:
> > On Tue, Nov 24, 2020 at 7:37 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > Presently I'm using a 5.8 kernel with your patches and haven't seen
> > > graphical output under Xen with either boot stack.  I've confirmed fully
> > > operational graphics without Xen on Tianocore, I've confirmed operational
> > > virtual terminals with U-Boot and not Xen.
> > >
> > > I had been planning to wait a bit before moving to 5.9, but if that is
> > > the crucial ingredient I'll move early.
> >
> > We're still using 5.4 -- but it seems that the next LTS 5.10 is also functional.
> >
> > I can bet $10 whatever it is -- it is DT related ;-)
>
> Given how many of the pieces I'm assembling are alpha or beta level, I
> estimate a 50:50 chance on that.  Good odds it is device-tree, but good
> odds I grabbed a bad version of $something.
>
> I mostly wanted to know whether I was in completely uncharted territory
> and needed to wait for others to catch up, versus merely working in a
> situation where support is funky and I'm at an unknown location in
> charted territory.
>
> I'll be keeping the Tianocore setup available since Xen on ARM really
> /should/ allow ACPI...

I don't think you're in uncharted territory -- so perhaps just a bit of debugging left.

And, of course, always feel free to compare what we do -- the image is a
docker pull away.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 05:17:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 05:17:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37256.69517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnAs-0007CY-Sa; Wed, 25 Nov 2020 05:17:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37256.69517; Wed, 25 Nov 2020 05:17:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnAs-0007CR-Ov; Wed, 25 Nov 2020 05:17:02 +0000
Received: by outflank-mailman (input) for mailman id 37256;
 Wed, 25 Nov 2020 05:17:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TOsx=E7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1khnAr-0007CM-Fk
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 05:17:01 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e314a483-4a80-4544-a210-3a56f38fb724;
 Wed, 25 Nov 2020 05:17:00 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AP5GnNx028151
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 25 Nov 2020 00:16:55 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0AP5Gmo6028150;
 Tue, 24 Nov 2020 21:16:48 -0800 (PST) (envelope-from ehem)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TOsx=E7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1khnAr-0007CM-Fk
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 05:17:01 +0000
X-Inumbo-ID: e314a483-4a80-4544-a210-3a56f38fb724
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e314a483-4a80-4544-a210-3a56f38fb724;
	Wed, 25 Nov 2020 05:17:00 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AP5GnNx028151
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Wed, 25 Nov 2020 00:16:55 -0500 (EST)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 0AP5Gmo6028150;
	Tue, 24 Nov 2020 21:16:48 -0800 (PST)
	(envelope-from ehem)
Date: Tue, 24 Nov 2020 21:16:48 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen on RP4
Message-ID: <X73owDP0UXx+lvJd@mattapan.m5p.com>
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
 <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com>
 <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Tue, Nov 24, 2020 at 08:45:32PM -0800, Roman Shaposhnik wrote:
> On Tue, Nov 24, 2020 at 8:41 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> >
> > On Tue, Nov 24, 2020 at 08:01:32PM -0800, Roman Shaposhnik wrote:
> > > On Tue, Nov 24, 2020 at 7:37 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > Presently I'm using a 5.8 kernel with your patches and haven't seen
> > > > graphical output under Xen with either boot stack.  I've confirmed fully
> > > > operational graphics without Xen on Tianocore, I've confirmed operational
> > > > virtual terminals with U-Boot and not Xen.
> > > >
> > > > I had been planning to wait a bit before moving to 5.9, but if that is
> > > > the crucial ingredient I'll move early.
> > >
> > > We're still using 5.4 -- but it seems that the next LTS 5.10 is also functional.
> > >
> > > I can bet $10 whatever it is -- it is DT related ;-)
> >
> > Given how many of the pieces I'm assembling are alpha or beta level, I
> > estimate a 50:50 chance on that.  Good odds it is device-tree, but good
> > odds I grabbed a bad version of $something.
> >
> > I mostly wanted to know whether I was in completely uncharted territory
> > and needed to wait for others to catch up, versus merely working in a
> > situation where support is funky and I'm at an unknown location in
> > charted territory.
> >
> > I'll be keeping the Tianocore setup available since Xen on ARM really
> > /should/ allow ACPI...
> 
> I don't think you're in uncharted territory -- so perhaps just a bit of debugging left.
> 
> And, of course, always feel free to compare what we do -- the image is a
> docker pull away.

Actually, since device-tree is very much on my list of concerns, what is
your Xen boot process setup like?

Presently, as previously mentioned, I'm trying for
U-Boot -> GRUB/EFI -> Xen.  According to the information I currently have,
the device-trees are often tied pretty closely to the kernel.  I'm also
using GRUB 2.04, since that has proper support for loading Xen on ARM.

The section of grub.cfg for Linux is roughly:
    linux /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
    devicetree /boot/dtb-5.8.10-2rp4-6.1.7
    initrd /boot/initrd.img-5.8.10-2rp4-6.1.7

My testing section for Xen is:
    xen_hypervisor /boot/xen-4.14-arm64.efi
    xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
    devicetree /boot/dtb-5.8.10-2rp4-6.1.7
    xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7

I've frankly got no idea how to ensure the correct device-tree is passed
to Xen.  Is GRUB's `devicetree` command correct when loading Xen?  Is a
device-tree matched to the Linux kernel appropriate for Xen?

(I'm guessing the answer to the second is "yes", but I don't have a clue about the first.)


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Nov 25 05:24:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 05:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37264.69529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnHY-000899-QB; Wed, 25 Nov 2020 05:23:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37264.69529; Wed, 25 Nov 2020 05:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnHY-000892-Ld; Wed, 25 Nov 2020 05:23:56 +0000
Received: by outflank-mailman (input) for mailman id 37264;
 Wed, 25 Nov 2020 05:23:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khnHX-00088w-Am
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 05:23:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de01df1e-9119-415d-a1e8-897040311881;
 Wed, 25 Nov 2020 05:23:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 27C72AC2F;
 Wed, 25 Nov 2020 05:23:53 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1khnHX-00088w-Am
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 05:23:55 +0000
X-Inumbo-ID: de01df1e-9119-415d-a1e8-897040311881
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id de01df1e-9119-415d-a1e8-897040311881;
	Wed, 25 Nov 2020 05:23:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606281833; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=U8qXYd9PB9mHD56RdIm1vJxk80bxdwKT3Ikw2dfLbp4=;
	b=UgqVcqL1R+rXsSOUzPtCE+LhZD4/cd7ZDp1ofejH0KhoHE1CzwnHAlaQE34Q8dLOe5MvW0
	tZAR2zAX8N3xFbktVnAiEqAM72hp/8A3oGxruMS8ufY9RjcSwvNNxx7lwO9PhEtNx/bccj
	an0ghHHNUhd1PyJcubV1HFvvn9IPjAU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 27C72AC2F;
	Wed, 25 Nov 2020 05:23:53 +0000 (UTC)
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
Message-ID: <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
Date: Wed, 25 Nov 2020 06:23:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="zpw0yUJnh4PXtxncz4e90AvFrVDexK14S"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--zpw0yUJnh4PXtxncz4e90AvFrVDexK14S
Content-Type: multipart/mixed; boundary="EaevYunMTJ8F9bWn6k33TwxLSYzPWyxls";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
In-Reply-To: <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>

--EaevYunMTJ8F9bWn6k33TwxLSYzPWyxls
Content-Type: multipart/mixed;
 boundary="------------54EF68A2A444426C4D6CC124"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------54EF68A2A444426C4D6CC124
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.11.20 17:32, Jan Beulich wrote:
> On 24.11.2020 15:49, Jürgen Groß wrote:
>> On 24.11.20 15:02, Jan Beulich wrote:
>>> On 24.11.2020 08:01, Juergen Gross wrote:
>>>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>>>> can race in case the first one gets interrupted after setting
>>>> EVTCHN_FIFO_PENDING and when the other one manages to set
>>>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>>>> lead to evtchn_check_pollers() being called before the event is put
>>>> properly into the queue, resulting eventually in the guest not seeing
>>>> the event pending and thus blocking forever afterwards.
>>>>
>>>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>>>> lock") made the race just more obvious, while the fifo event channel
>>>> implementation had this race from the beginning when an unmask operation
>>>> was running in parallel with an event channel send operation.
>>>
>>> Ah yes, but then also only for inter-domain channels, as it was
>>> only in that case that the "wrong" domain's event lock was held.
>>> IOW there was a much earlier change already where this issue
>>> got widened (when the per-channel locking got introduced). This
>>> then got reduced to the original scope by XSA-343's adding of
>>> locking to evtchn_unmask(). (Not sure how much of this history
>>> wants actually adding here. I'm writing it down not the least to
>>> make sure I have a complete enough picture.)
>>
>> I think we both agree that this race was possible for quite some time.
>> And I even think one customer bug I've been looking into recently
>> might be exactly this problem (a dom0 was occasionally hanging in
>> cross-cpu function calls, but switching to 2-level events made the
>> problem disappear).
>
> IPIs weren't affected earlier on (i.e. in any released version),
> if my analysis above is correct.

I don't think it is correct.

An unmask operation in parallel with set_pending will have had the
same race for IPIs.

>
>>>> Additionally when an
>>>> event channel needs to change queues both queues need to be locked
>>>> initially.
>>>
>>> Since this was (afaict) intentionally not the case before, I
>>> think I would want to see a word spent on the "why", perhaps
>>> better in a code comment than here. Even more so that you
>>> delete a respective comment justifying the possible race as
>>> permissible. And I have to admit right now I'm still uncertain
>>> both ways, i.e. I neither have a clear understanding of why it
>>> would have been considered fine the other way around before,
>>> nor why the double locking is strictly needed.
>>
>> I need the double locking to avoid someone entering the locked region
>> when dropping the lock for the old queue and taking the one for the
>> new queue, as this would open the same race window again.
>
> Well, that's what I have already said. The thing is that the code
> prior to your change gives the impression that this race was
> benign.

The race regarding a queue change, yes. But not the race I'm fixing with
this patch. I need to make sure that only one caller is inside the big
if clause for a specific event. And dropping the lock inside this clause
would violate that assumption.

>
>>>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>>>> +        old_v = d->vcpu[lastq.last_vcpu_id];
>>>> +        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
>>>> +             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
>>>> +            break;
>>>> +
>>>> +        if ( q != old_q )
>>>> +            spin_unlock(&old_q->lock);
>>>> +        spin_unlock_irqrestore(&q->lock, flags);
>>>> +
>>>> +        if ( try == 3 )
>>>> +        {
>>>> +            gprintk(XENLOG_WARNING,
>>>> +                    "dom%d port %d lost event (too many queue changes)\n",
>>>> +                    d->domain_id, evtchn->port);
>>>> +            return;
>>>
>>> Originally evtchn_check_pollers() would still have been called
>>> in this case. Wouldn't you better retain this, or else justify
>>> the possibly observable change in behavior?
>>
>> I could retain it, but without having set the event to be pending
>> I don't see the value in doing so.
>
> But that's part of my concern - you now don't set PENDING when
> bailing here.

Hmm, I'm not sure this will really help, as the event still won't be
LINKED, which is necessary for the guest to recognize the event as
pending.

OTOH, if the event is masked, setting the PENDING bit would ensure
proper handling later, so it is better to do that, even at the risk of
having the same old race again in this case (which might have the same
effect as not setting PENDING, but with a much lower probability).

I'll change the locking to handle that properly.


Juergen

--------------54EF68A2A444426C4D6CC124
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------54EF68A2A444426C4D6CC124--

--EaevYunMTJ8F9bWn6k33TwxLSYzPWyxls--

--zpw0yUJnh4PXtxncz4e90AvFrVDexK14S
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+96mgFAwAAAAAACgkQsN6d1ii/Ey+q
9wf/TsMxcGF9sM3ubJaQtnqozl3eDEr4qYrj2dAygZZ5udrLfISTvd93a9o2yphAlRvr2Nbmy95o
BfYFe3FIIfAaK4S75cA+RpCMlzAehQSFJtClIrja4CIbp1n8HG2vfUjuGCDiQHCnWXftO9zRSeqB
yfec8H0g6dt73ujdmWtVO+COGZ1WqEyXWY6IpOmX1dSJcHPwiONCgVTkEA9YqC3rB+PXO5EFuJIk
HHNtvRG0NwPJR4qN4qP5T88dhpKKePIIOs/HfLltMGPsWGbmRgsQInbX/1FIUvxNaLn0g0kIMoF9
9fCXAJNjxWtmvMdRhs6VDchI/StL244xLSnsm2s8Yg==
=C/mj
-----END PGP SIGNATURE-----

--zpw0yUJnh4PXtxncz4e90AvFrVDexK14S--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 05:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 05:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37272.69540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnXF-0001SE-7o; Wed, 25 Nov 2020 05:40:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37272.69540; Wed, 25 Nov 2020 05:40:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnXF-0001S7-4q; Wed, 25 Nov 2020 05:40:09 +0000
Received: by outflank-mailman (input) for mailman id 37272;
 Wed, 25 Nov 2020 05:40:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khnXD-0001OI-Hr
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 05:40:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c603590e-2ca7-4fef-bb3d-b6723a9e3a47;
 Wed, 25 Nov 2020 05:40:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C5F9DAC22;
 Wed, 25 Nov 2020 05:40:05 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1khnXD-0001OI-Hr
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 05:40:07 +0000
X-Inumbo-ID: c603590e-2ca7-4fef-bb3d-b6723a9e3a47
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c603590e-2ca7-4fef-bb3d-b6723a9e3a47;
	Wed, 25 Nov 2020 05:40:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606282805; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=K1GA5YeIfIHtvZCm41SYwmOs63OFblarmmLVZhh0mdk=;
	b=m2C+zpG+6VQoRA/La1b2yphZ4OpAFdpUFEcsepcMAqV2GRvwDFKPEHZ15X0dU/1wEKriBQ
	DMmPZK0Z4Yben1L8IasEghfGk57O4QgO4SG56rjfncAininVfaHQzCJ7ZN2N15C5MCHBNm
	ORL6nLcZ7eHBXbHU1HoBEMztATHFvVs=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C5F9DAC22;
	Wed, 25 Nov 2020 05:40:05 +0000 (UTC)
Subject: Re: [PATCH] evtchn: double per-channel locking can't hit identical
 channels
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <fbf3118c-6787-86ee-388a-99e9e9d0f9cf@suse.com>
Date: Wed, 25 Nov 2020 06:40:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xAYWHcVWC6v8PecIqYedbpi0zZGEuZEpw"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xAYWHcVWC6v8PecIqYedbpi0zZGEuZEpw
Content-Type: multipart/mixed; boundary="fy5eXqOw9hrfLqxV2Umb2hvrhBMi1ozzo";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Message-ID: <fbf3118c-6787-86ee-388a-99e9e9d0f9cf@suse.com>
Subject: Re: [PATCH] evtchn: double per-channel locking can't hit identical
 channels
References: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>
In-Reply-To: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>

--fy5eXqOw9hrfLqxV2Umb2hvrhBMi1ozzo
Content-Type: multipart/mixed;
 boundary="------------951519477318B88CB28325B0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------951519477318B88CB28325B0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 24.11.20 18:03, Jan Beulich wrote:
> Inter-domain channels can't possibly be bound to themselves, there's
> always a 2nd channel involved, even when this is a loopback into the
> same domain. As a result we can drop one conditional each from the two
> involved functions.
>
> With this, the number of evtchn_write_lock() invocations can also be
> shrunk by half, swapping the two incoming function arguments instead.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------951519477318B88CB28325B0--

--fy5eXqOw9hrfLqxV2Umb2hvrhBMi1ozzo--

--xAYWHcVWC6v8PecIqYedbpi0zZGEuZEpw
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl+97jQFAwAAAAAACgkQsN6d1ii/Ey8h
qgf+I0eK/2rGHGuX/AilqW7snecaagMOgBFUA/fbzBveEIN/YtvTBpd1NtSnyXnR92efUDPGI46k
1K59HzXmfKN7SVuca9iW1C+oyQEXgDJ4kv9OdGoQLk9pbi5j0xlurE99LQcEyuH1Jt55EhmAKOrT
F+4aBGLxbI2zuFXf0STvSFVCX8CZo31uKufIBc41PXD7EkAo4tLfh6RJVpINdSCms3cE5ZMjHPWV
3SxHExo0CoUsKzy7+PIZDC1r+rpXor6lIQKEbteN398pnfLAhunwP10eQVtwCM9C0ZeBZPMW9RxC
y6EleQY44OwO37PhwYDSg+S1jEz+cw5KvKLXSv5llg==
=eMrb
-----END PGP SIGNATURE-----

--xAYWHcVWC6v8PecIqYedbpi0zZGEuZEpw--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:09:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:09:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37281.69553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnyz-0003Vq-Hm; Wed, 25 Nov 2020 06:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37281.69553; Wed, 25 Nov 2020 06:08:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khnyz-0003Vj-Dw; Wed, 25 Nov 2020 06:08:49 +0000
Received: by outflank-mailman (input) for mailman id 37281;
 Wed, 25 Nov 2020 06:08:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BRGr=E7=huawei.com=yuchao0@srs-us1.protection.inumbo.net>)
 id 1khnyJ-0003T3-DC
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:08:07 +0000
Received: from szxga06-in.huawei.com (unknown [45.249.212.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f30ffd3-5820-4535-afad-a436ca0be133;
 Wed, 25 Nov 2020 06:07:59 +0000 (UTC)
Received: from DGGEMS412-HUB.china.huawei.com (unknown [172.30.72.60])
 by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4Cgr5m3R7Vzhh8Q;
 Wed, 25 Nov 2020 14:07:40 +0800 (CST)
Received: from [10.136.114.67] (10.136.114.67) by smtp.huawei.com
 (10.3.19.212) with Microsoft SMTP Server (TLS) id 14.3.487.0; Wed, 25 Nov
 2020 14:07:42 +0800
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BRGr=E7=huawei.com=yuchao0@srs-us1.protection.inumbo.net>)
	id 1khnyJ-0003T3-DC
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:08:07 +0000
X-Inumbo-ID: 4f30ffd3-5820-4535-afad-a436ca0be133
Received: from szxga06-in.huawei.com (unknown [45.249.212.32])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4f30ffd3-5820-4535-afad-a436ca0be133;
	Wed, 25 Nov 2020 06:07:59 +0000 (UTC)
Received: from DGGEMS412-HUB.china.huawei.com (unknown [172.30.72.60])
	by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4Cgr5m3R7Vzhh8Q;
	Wed, 25 Nov 2020 14:07:40 +0800 (CST)
Received: from [10.136.114.67] (10.136.114.67) by smtp.huawei.com
 (10.3.19.212) with Microsoft SMTP Server (TLS) id 14.3.487.0; Wed, 25 Nov
 2020 14:07:42 +0800
Subject: Re: [PATCH 04/45] fs: simplify freeze_bdev/thaw_bdev
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, "Mike
 Snitzer" <snitzer@redhat.com>, Greg Kroah-Hartman
	<gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>, Johannes Thumshirn
	<johannes.thumshirn@wdc.com>, <dm-devel@redhat.com>, Richard Weinberger
	<richard@nod.at>, Jan Kara <jack@suse.com>, <linux-block@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <linux-bcache@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <linux-fsdevel@vger.kernel.org>,
	<linux-mm@kvack.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-5-hch@lst.de>
From: Chao Yu <yuchao0@huawei.com>
Message-ID: <4ae2d8ca-e858-e5df-67ef-d14852978976@huawei.com>
Date: Wed, 25 Nov 2020 14:07:41 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-5-hch@lst.de>
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.136.114.67]
X-CFilter-Loop: Reflected

On 2020/11/24 21:27, Christoph Hellwig wrote:
> Store the frozen superblock in struct block_device to avoid the awkward
> interface that can return a sb only used as a cookie, an ERR_PTR or NULL.
> 
> Signed-off-by: Christoph Hellwig<hch@lst.de>
> ---
>   drivers/md/dm-core.h      |  5 -----
>   drivers/md/dm.c           | 20 ++++++--------------
>   fs/block_dev.c            | 39 ++++++++++++++++-----------------------
>   fs/buffer.c               |  2 +-
>   fs/ext4/ioctl.c           |  2 +-
>   fs/f2fs/file.c            | 14 +++++---------

For f2fs part,

Acked-by: Chao Yu <yuchao0@huawei.com>

Thanks,

>   fs/xfs/xfs_fsops.c        |  7 ++-----
>   include/linux/blk_types.h |  1 +
>   include/linux/blkdev.h    |  4 ++--
>   9 files changed, 34 insertions(+), 60 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:10:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37288.69564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kho0c-0004I0-Sv; Wed, 25 Nov 2020 06:10:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37288.69564; Wed, 25 Nov 2020 06:10:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kho0c-0004Ht-Px; Wed, 25 Nov 2020 06:10:30 +0000
Received: by outflank-mailman (input) for mailman id 37288;
 Wed, 25 Nov 2020 06:10:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BRGr=E7=huawei.com=yuchao0@srs-us1.protection.inumbo.net>)
 id 1kho0a-0004Hk-Nm
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:10:28 +0000
Received: from szxga06-in.huawei.com (unknown [45.249.212.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6a15f69-22b0-469a-8dd1-466efade4452;
 Wed, 25 Nov 2020 06:10:26 +0000 (UTC)
Received: from DGGEMS409-HUB.china.huawei.com (unknown [172.30.72.59])
 by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4Cgr8c4ZrLzhYVP;
 Wed, 25 Nov 2020 14:10:08 +0800 (CST)
Received: from [10.136.114.67] (10.136.114.67) by smtp.huawei.com
 (10.3.19.209) with Microsoft SMTP Server (TLS) id 14.3.487.0; Wed, 25 Nov
 2020 14:10:21 +0800
X-Inumbo-ID: a6a15f69-22b0-469a-8dd1-466efade4452
Subject: Re: [PATCH 30/45] block: remove the nr_sects field in struct
 hd_struct
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, "Mike
 Snitzer" <snitzer@redhat.com>, Greg Kroah-Hartman
	<gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>, Johannes Thumshirn
	<johannes.thumshirn@wdc.com>, <dm-devel@redhat.com>, Richard Weinberger
	<richard@nod.at>, Jan Kara <jack@suse.com>, <linux-block@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <linux-bcache@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <linux-fsdevel@vger.kernel.org>,
	<linux-mm@kvack.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-31-hch@lst.de>
From: Chao Yu <yuchao0@huawei.com>
Message-ID: <fe5b2763-a7c7-98dd-d87e-d3fa6767eebb@huawei.com>
Date: Wed, 25 Nov 2020 14:10:20 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-31-hch@lst.de>
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.136.114.67]
X-CFilter-Loop: Reflected

On 2020/11/24 21:27, Christoph Hellwig wrote:
> Now that the hd_struct always has a block device attached to it, there is
> no need for having two size fields that just get out of sync.
> 
> Additionally, the field in hd_struct did not use proper serialization,
> possibly allowing for torn writes.  By only using the block_device field
> this problem also gets fixed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> ---
>   block/bio.c                        |  4 +-
>   block/blk-core.c                   |  2 +-
>   block/blk.h                        | 53 ----------------------
>   block/genhd.c                      | 55 +++++++++++-----------
>   block/partitions/core.c            | 17 ++++---
>   drivers/block/loop.c               |  1 -
>   drivers/block/nbd.c                |  2 +-
>   drivers/block/xen-blkback/common.h |  4 +-
>   drivers/md/bcache/super.c          |  2 +-
>   drivers/s390/block/dasd_ioctl.c    |  4 +-
>   drivers/target/target_core_pscsi.c |  7 +--
>   fs/block_dev.c                     | 73 +-----------------------------
>   fs/f2fs/super.c                    |  2 +-

For the f2fs part,

Acked-by: Chao Yu <yuchao0@huawei.com>

Thanks,

>   fs/pstore/blk.c                    |  2 +-
>   include/linux/genhd.h              | 29 +++---------
>   kernel/trace/blktrace.c            |  2 +-
>   16 files changed, 60 insertions(+), 199 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37295.69576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kho1r-0004R7-86; Wed, 25 Nov 2020 06:11:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37295.69576; Wed, 25 Nov 2020 06:11:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kho1r-0004R0-4x; Wed, 25 Nov 2020 06:11:47 +0000
Received: by outflank-mailman (input) for mailman id 37295;
 Wed, 25 Nov 2020 06:11:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BRGr=E7=huawei.com=yuchao0@srs-us1.protection.inumbo.net>)
 id 1kho1q-0004Qu-Mt
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:11:46 +0000
Received: from szxga06-in.huawei.com (unknown [45.249.212.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd029a3f-15e7-4bbf-a3c7-ed3900890a66;
 Wed, 25 Nov 2020 06:11:43 +0000 (UTC)
Received: from DGGEMS410-HUB.china.huawei.com (unknown [172.30.72.59])
 by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4CgrB346lSzhfQ7;
 Wed, 25 Nov 2020 14:11:23 +0800 (CST)
Received: from [10.136.114.67] (10.136.114.67) by smtp.huawei.com
 (10.3.19.210) with Microsoft SMTP Server (TLS) id 14.3.487.0; Wed, 25 Nov
 2020 14:11:36 +0800
X-Inumbo-ID: dd029a3f-15e7-4bbf-a3c7-ed3900890a66
Subject: Re: [PATCH 38/45] block: switch partition lookup to use struct
 block_device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, "Mike
 Snitzer" <snitzer@redhat.com>, Greg Kroah-Hartman
	<gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>, Johannes Thumshirn
	<johannes.thumshirn@wdc.com>, <dm-devel@redhat.com>, Richard Weinberger
	<richard@nod.at>, Jan Kara <jack@suse.com>, <linux-block@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <linux-bcache@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <linux-fsdevel@vger.kernel.org>,
	<linux-mm@kvack.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-39-hch@lst.de>
From: Chao Yu <yuchao0@huawei.com>
Message-ID: <fec5c478-c7cb-ce29-a35d-3474fae1c96d@huawei.com>
Date: Wed, 25 Nov 2020 14:11:35 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-39-hch@lst.de>
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.136.114.67]
X-CFilter-Loop: Reflected

On 2020/11/24 21:27, Christoph Hellwig wrote:
> Use struct block_device to look up partitions on a disk.  This removes
> all usage of struct hd_struct from the I/O path, and this allows removing
> the percpu refcount in struct hd_struct.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/bio.c                        |  4 +-
>   block/blk-core.c                   | 66 ++++++++++++++----------------
>   block/blk-flush.c                  |  2 +-
>   block/blk-mq.c                     |  9 ++--
>   block/blk-mq.h                     |  7 ++--
>   block/blk.h                        |  4 +-
>   block/genhd.c                      | 56 +++++++++++++------------
>   block/partitions/core.c            |  7 +---
>   drivers/block/drbd/drbd_receiver.c |  2 +-
>   drivers/block/drbd/drbd_worker.c   |  2 +-
>   drivers/block/zram/zram_drv.c      |  2 +-
>   drivers/md/bcache/request.c        |  4 +-
>   drivers/md/dm.c                    |  4 +-
>   drivers/md/md.c                    |  4 +-
>   drivers/nvme/target/admin-cmd.c    | 20 ++++-----
>   fs/ext4/super.c                    | 18 +++-----
>   fs/ext4/sysfs.c                    | 10 +----
>   fs/f2fs/f2fs.h                     |  2 +-
>   fs/f2fs/super.c                    |  6 +--

For the f2fs part,

Acked-by: Chao Yu <yuchao0@huawei.com>

Thanks,

>   include/linux/blkdev.h             |  8 ++--
>   include/linux/genhd.h              |  4 +-
>   include/linux/part_stat.h          | 17 ++++----
>   22 files changed, 120 insertions(+), 138 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:13:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:13:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37304.69589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kho35-0004a6-N2; Wed, 25 Nov 2020 06:13:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37304.69589; Wed, 25 Nov 2020 06:13:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kho35-0004Zz-JX; Wed, 25 Nov 2020 06:13:03 +0000
Received: by outflank-mailman (input) for mailman id 37304;
 Wed, 25 Nov 2020 06:13:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BRGr=E7=huawei.com=yuchao0@srs-us1.protection.inumbo.net>)
 id 1kho34-0004Zu-Lu
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:13:02 +0000
Received: from szxga05-in.huawei.com (unknown [45.249.212.191])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f5b77e3-1700-4700-99a3-6e83c2ff9c5c;
 Wed, 25 Nov 2020 06:12:59 +0000 (UTC)
Received: from DGGEMS414-HUB.china.huawei.com (unknown [172.30.72.58])
 by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4CgrCH57LvzLtxm;
 Wed, 25 Nov 2020 14:12:27 +0800 (CST)
Received: from [10.136.114.67] (10.136.114.67) by smtp.huawei.com
 (10.3.19.214) with Microsoft SMTP Server (TLS) id 14.3.487.0; Wed, 25 Nov
 2020 14:12:51 +0800
X-Inumbo-ID: 1f5b77e3-1700-4700-99a3-6e83c2ff9c5c
Subject: Re: [PATCH 43/45] f2fs: remove a few bd_part checks
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>, "Mike
 Snitzer" <snitzer@redhat.com>, Greg Kroah-Hartman
	<gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>, Johannes Thumshirn
	<johannes.thumshirn@wdc.com>, <dm-devel@redhat.com>, Richard Weinberger
	<richard@nod.at>, Jan Kara <jack@suse.com>, <linux-block@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <linux-bcache@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <linux-fsdevel@vger.kernel.org>,
	<linux-mm@kvack.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-44-hch@lst.de>
From: Chao Yu <yuchao0@huawei.com>
Message-ID: <2c7a8f2a-37a3-38c2-9256-6aae2c7a45c1@huawei.com>
Date: Wed, 25 Nov 2020 14:12:50 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-44-hch@lst.de>
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.136.114.67]
X-CFilter-Loop: Reflected

On 2020/11/24 21:27, Christoph Hellwig wrote:
> bd_part is never NULL for a block device in use by a file system, so
> remove the checks.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Chao Yu <yuchao0@huawei.com>

Thanks,


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:36:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:36:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37315.69605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoPJ-0006Ua-LB; Wed, 25 Nov 2020 06:36:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37315.69605; Wed, 25 Nov 2020 06:36:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoPJ-0006UT-HA; Wed, 25 Nov 2020 06:36:01 +0000
Received: by outflank-mailman (input) for mailman id 37315;
 Wed, 25 Nov 2020 06:36:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khoPI-0006UJ-Ee; Wed, 25 Nov 2020 06:36:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khoPI-00074J-7A; Wed, 25 Nov 2020 06:36:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khoPH-0004lR-Oe; Wed, 25 Nov 2020 06:35:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khoPH-0008Ij-Nh; Wed, 25 Nov 2020 06:35:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WWXaIIz+d1zy/5YNVbIYTMCLgr+UDSTK7/deGR4d9UU=; b=jj2fMTa1cEbOnQgYxLj4A9Spmy
	FoKyNat1j4gdR+bw+ZMmfeeC+wgS49m+E4unQNabYqDiOwuOusCXeAqIpnnCgkhyzzStNUtqSedcb
	j5x9oGi+s+UdW+iKz1DjEDd7UL5AbY8VoBNtooAqcJfilg2PUzfXdzZeg3Udrh2y2jpY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156986-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156986: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=62aed78b8e0cc6dcd99b80a528650ad0619b3909
X-Osstest-Versions-That:
    xen=1447d449fab7e48c85faf83951842bb60d7dabe5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 06:35:59 +0000

flight 156986 xen-4.11-testing real [real]
flight 156999 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156986/
http://logs.test-lab.xenproject.org/osstest/logs/156999/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 18 guest-localmigrate/x10 fail pass in 156999-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156687
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156687
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156687
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156687
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156687
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156687
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156687
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156687
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156687
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156687
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156687
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156687
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156687
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  62aed78b8e0cc6dcd99b80a528650ad0619b3909
baseline version:
 xen                  1447d449fab7e48c85faf83951842bb60d7dabe5

Last test of basis   156687  2020-11-11 17:48:19 Z   13 days
Testing same since   156986  2020-11-24 13:36:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1447d449fa..62aed78b8e  62aed78b8e0cc6dcd99b80a528650ad0619b3909 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:41:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37325.69619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoUD-0007Nr-Aq; Wed, 25 Nov 2020 06:41:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37325.69619; Wed, 25 Nov 2020 06:41:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoUD-0007Nk-7s; Wed, 25 Nov 2020 06:41:05 +0000
Received: by outflank-mailman (input) for mailman id 37325;
 Wed, 25 Nov 2020 06:41:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yrgl=E7=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1khoUC-0007Nf-Tj
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:41:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df2d54ac-22df-4479-83a7-7dcd7777a991;
 Wed, 25 Nov 2020 06:41:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B1FDEABDE;
 Wed, 25 Nov 2020 06:41:01 +0000 (UTC)
X-Inumbo-ID: df2d54ac-22df-4479-83a7-7dcd7777a991
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 01/45] blk-cgroup: fix a hd_struct leak in
 blkcg_fill_root_iostats
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-2-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <43bdcdce-0307-2082-109c-a60bf8b533cd@suse.de>
Date: Wed, 25 Nov 2020 07:41:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-2-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/24/20 2:27 PM, Christoph Hellwig wrote:
> disk_get_part needs to be paired with a disk_put_part.
> 
> Fixes: ef45fe470e1 ("blk-cgroup: show global disk stats in root cgroup io.stat")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> Acked-by: Tejun Heo <tj@kernel.org>
> ---
>   block/blk-cgroup.c | 1 +
>   1 file changed, 1 insertion(+)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:41:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:41:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37332.69631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoUn-0007UL-KB; Wed, 25 Nov 2020 06:41:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37332.69631; Wed, 25 Nov 2020 06:41:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoUn-0007UE-HB; Wed, 25 Nov 2020 06:41:41 +0000
Received: by outflank-mailman (input) for mailman id 37332;
 Wed, 25 Nov 2020 06:41:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yrgl=E7=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1khoUm-0007TP-Ri
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:41:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20977a22-5dd5-4d7f-8f84-5cec86f86a87;
 Wed, 25 Nov 2020 06:41:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63062ABCE;
 Wed, 25 Nov 2020 06:41:39 +0000 (UTC)
X-Inumbo-ID: 20977a22-5dd5-4d7f-8f84-5cec86f86a87
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 02/45] filemap: consistently use ->f_mapping over
 ->i_mapping
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-3-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <aca8a60e-d6da-7a17-0a20-3187eed83cc1@suse.de>
Date: Wed, 25 Nov 2020 07:41:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-3-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/24/20 2:27 PM, Christoph Hellwig wrote:
> Use file->f_mapping in all remaining places that have a struct file
> available to properly handle the case where inode->i_mapping !=
> file_inode(file)->i_mapping.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   mm/filemap.c | 13 ++++++-------
>   1 file changed, 6 insertions(+), 7 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 06:42:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 06:42:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37336.69644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoVN-0007bk-29; Wed, 25 Nov 2020 06:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37336.69644; Wed, 25 Nov 2020 06:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khoVM-0007bc-VS; Wed, 25 Nov 2020 06:42:16 +0000
Received: by outflank-mailman (input) for mailman id 37336;
 Wed, 25 Nov 2020 06:42:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yrgl=E7=suse.de=hare@srs-us1.protection.inumbo.net>)
 id 1khoVL-0007bT-Dk
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 06:42:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54233f6b-7df2-4b73-b9f1-060d2bcd6e17;
 Wed, 25 Nov 2020 06:42:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6CE03ABDE;
 Wed, 25 Nov 2020 06:42:13 +0000 (UTC)
X-Inumbo-ID: 54233f6b-7df2-4b73-b9f1-060d2bcd6e17
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 03/45] fs: remove get_super_thawed and
 get_super_exclusive_thawed
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Tejun Heo <tj@kernel.org>, Josef Bacik <josef@toxicpanda.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Coly Li <colyli@suse.de>,
 Mike Snitzer <snitzer@redhat.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Jan Kara <jack@suse.cz>,
 Johannes Thumshirn <johannes.thumshirn@wdc.com>, dm-devel@redhat.com,
 Richard Weinberger <richard@nod.at>, Jan Kara <jack@suse.com>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-4-hch@lst.de>
From: Hannes Reinecke <hare@suse.de>
Message-ID: <bc32a2ff-8021-a47f-0543-955c9178d0f5@suse.de>
Date: Wed, 25 Nov 2020 07:42:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201124132751.3747337-4-hch@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11/24/20 2:27 PM, Christoph Hellwig wrote:
> Just open code the wait in the only caller of both functions.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   fs/internal.h      |  2 ++
>   fs/quota/quota.c   | 31 +++++++++++++++++++++-------
>   fs/super.c         | 51 ++--------------------------------------------
>   include/linux/fs.h |  4 +---
>   4 files changed, 29 insertions(+), 59 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 07:05:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 07:05:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37351.69659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khos8-0001Bj-10; Wed, 25 Nov 2020 07:05:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37351.69659; Wed, 25 Nov 2020 07:05:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khos7-0001Bc-U6; Wed, 25 Nov 2020 07:05:47 +0000
Received: by outflank-mailman (input) for mailman id 37351;
 Wed, 25 Nov 2020 07:05:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JIE7=E7=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1khos5-0001BX-2l
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 07:05:46 +0000
Received: from bedivere.hansenpartnership.com (unknown [96.44.175.130])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b2967cb-352d-49c1-93ff-de8fc2249b98;
 Wed, 25 Nov 2020 07:05:41 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 313DE1280408;
 Tue, 24 Nov 2020 23:05:40 -0800 (PST)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id 8AuMCu2vLv9Z; Tue, 24 Nov 2020 23:05:40 -0800 (PST)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::527])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id A873112803EC;
 Tue, 24 Nov 2020 23:05:36 -0800 (PST)
X-Inumbo-ID: 2b2967cb-352d-49c1-93ff-de8fc2249b98
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=hansenpartnership.com; s=20151216; t=1606287940;
	bh=PpyvloC8ztllb7q8ndtGKJRs78ChiB3jg6tteM0zYL0=;
	h=Message-ID:Subject:From:To:Date:In-Reply-To:References:From;
	b=DUjk2u5mMxkvusJZ7TUknDmT+9jEkjAK5Du54VYrLnX3ZVAsqbXKInJF3+bjbWxe1
	 sPTOm9Jo8O4FiM37EcbSbGJ09Z6i3toRLj70BanOqmx/doOouqQw1ofRfirJ315HKN
	 ACp6UaCD/rMf1rqLOvr/v7W+FqOYQZREI5LkhaoU=
Message-ID: <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Kees Cook <keescook@chromium.org>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>, Joe Perches
 <joe@perches.com>,  Jakub Kicinski <kuba@kernel.org>,
 alsa-devel@alsa-project.org,  linux-atm-general@lists.sourceforge.net,
 reiserfs-devel@vger.kernel.org,  linux-iio@vger.kernel.org,
 linux-wireless@vger.kernel.org,  linux-fbdev@vger.kernel.org,
 dri-devel@lists.freedesktop.org,  linux-kernel@vger.kernel.org, Nathan
 Chancellor <natechancellor@gmail.com>,  linux-ide@vger.kernel.org,
 dm-devel@redhat.com, keyrings@vger.kernel.org, 
 linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
 wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
 linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
 linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
 drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
 linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, Nick Desaulniers
 <ndesaulniers@google.com>, linux-scsi@vger.kernel.org, 
 linux-rdma@vger.kernel.org, oss-drivers@netronome.com, 
 bridge@lists.linux-foundation.org, linux-security-module@vger.kernel.org, 
 amd-gfx@lists.freedesktop.org, linux-stm32@st-md-mailman.stormreply.com, 
 cluster-devel@redhat.com, linux-acpi@vger.kernel.org,
 coreteam@netfilter.org,  intel-wired-lan@lists.osuosl.org,
 linux-input@vger.kernel.org, Miguel Ojeda <ojeda@kernel.org>,
 tipc-discussion@lists.sourceforge.net,  linux-ext4@vger.kernel.org,
 linux-media@vger.kernel.org,  linux-watchdog@vger.kernel.org,
 selinux@vger.kernel.org,  linux-arm-msm@vger.kernel.org,
 intel-gfx@lists.freedesktop.org,  linux-geode@lists.infradead.org,
 linux-can@vger.kernel.org,  linux-block@vger.kernel.org,
 linux-gpio@vger.kernel.org,  op-tee@lists.trustedfirmware.org,
 linux-mediatek@lists.infradead.org,  xen-devel@lists.xenproject.org,
 nouveau@lists.freedesktop.org,  linux-hams@vger.kernel.org,
 ceph-devel@vger.kernel.org,  virtualization@lists.linux-foundation.org, 
 linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org, 
 x86@kernel.org, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
 linux-mm@kvack.org, netdev@vger.kernel.org, 
 linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
 linux-renesas-soc@vger.kernel.org, linux-sctp@vger.kernel.org, 
 linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
 linux-crypto@vger.kernel.org, patches@opensource.cirrus.com, 
 linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
 linux-hardening@vger.kernel.org, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>,  Greg KH <gregkh@linuxfoundation.org>
Date: Tue, 24 Nov 2020 23:05:35 -0800
In-Reply-To: <202011241327.BB28F12F6@keescook>
References: <202011201129.B13FDB3C@keescook>
	 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
	 <202011220816.8B6591A@keescook>
	 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
	 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
	 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
	 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
	 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
	 <20201123130348.GA3119@embeddedor>
	 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
	 <202011241327.BB28F12F6@keescook>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Tue, 2020-11-24 at 13:32 -0800, Kees Cook wrote:
> On Mon, Nov 23, 2020 at 08:31:30AM -0800, James Bottomley wrote:
> > Really, no ... something which produces no improvement has no value
> > at all ... we really shouldn't be wasting maintainer time with it
> > because it has a cost to merge.  I'm not sure we understand where
> > the balance lies in value vs cost to merge but I am confident in
> > the zero value case.
> 
> What? We can't measure how many future bugs aren't introduced because
> the kernel requires explicit case flow-control statements for all new
> code.

No, but we can measure how vulnerable our current coding habits are to
the mistake this warning would potentially prevent.  I don't think it's
wrong to extrapolate that if we had no instances at all of prior coding
problems, we likely wouldn't have any in future either, making the
changes needed to enable the warning valueless ... that's the zero
value case I was referring to above.

Now, what we have seems to be about 6 cases (at least what's been shown
in this thread) where a missing break would cause potentially user
visible issues.  That means the value of this isn't zero, but it's not
a no-brainer massive win either.  That's why I think asking what we've
invested vs the return isn't a useless exercise.

> We already enable -Wimplicit-fallthrough globally, so that's not the
> discussion. The issue is that Clang is (correctly) even more strict
> than GCC for this, so these are the remaining ones to fix for full
> Clang coverage too.
> 
> People have spent more time debating this already than it would have
> taken to apply the patches. :)

You mean we've already spent 90% of the effort to come this far so we
might as well go the remaining 10% because then at least we get some
return? It's certainly a clinching argument in defence procurement ...

> This is about robustness and language wrangling. It's a big code-
> base, and this is the price of our managing technical debt for
> permanent robustness improvements. (The numbers I ran from Gustavo's
> earlier patches were that about 10% of the places adjusted were
> identified as legitimate bugs being fixed. This final series may be
> lower, but there are still bugs being found from it -- we need to
> finish this and shut the door on it for good.)

I got my six patches by analyzing the lwn.net report of the fixes that
was cited, which listed 21; 50% of those didn't actually change the
emitted code, and 25% didn't have a user-visible effect.

But the broader point I'm making is that just because the compiler
people come up with a shiny new warning doesn't necessarily mean the
problem it's detecting is one that causes us actual problems in the
code base.  I'd really be happier if we had a theory about what classes
of CVE or bug we could eliminate before we embrace the next new
warning.

James





From xen-devel-bounces@lists.xenproject.org Wed Nov 25 07:15:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 07:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37360.69676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khp19-0002A9-Vt; Wed, 25 Nov 2020 07:15:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37360.69676; Wed, 25 Nov 2020 07:15:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khp19-0002A2-SX; Wed, 25 Nov 2020 07:15:07 +0000
Received: by outflank-mailman (input) for mailman id 37360;
 Wed, 25 Nov 2020 07:15:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khp19-00029u-0L; Wed, 25 Nov 2020 07:15:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khp18-0007t1-N9; Wed, 25 Nov 2020 07:15:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khp18-0007r3-Dc; Wed, 25 Nov 2020 07:15:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khp18-0004Qt-D9; Wed, 25 Nov 2020 07:15:06 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M8iF07+3fmPCHYAXQBhwET7Qlpao6tKzMF8cezg7w/c=; b=dgMP93Gc5Pewn6fy1cZgcGo8U9
	Z77vovo6JT83avYNPFkkerkJ5SQfH62IJkdWgGFO5E3iRuxqn0F9e7j32M7VLQYODvxk8G2kCU/iy
	DjsKnnNkNDHnTpSVGZRzweUjiqGRWEpMuUJ3JmxOLKH3Cc30vUC1lysKwch4vlkGudjs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156987-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156987: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=660254422422f103e797f97545018a1fbc7548e7
X-Osstest-Versions-That:
    xen=14c9c0fceae92a18dedc3f280ebf8b9f52e39de5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 07:15:06 +0000

flight 156987 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156987/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156635
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 156635
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156635
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156635
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156635
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156635
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156635
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156635
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156635
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156635
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156635
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156635
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  660254422422f103e797f97545018a1fbc7548e7
baseline version:
 xen                  14c9c0fceae92a18dedc3f280ebf8b9f52e39de5

Last test of basis   156635  2020-11-10 18:06:22 Z   14 days
Testing same since   156987  2020-11-24 13:36:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   14c9c0fcea..6602544224  660254422422f103e797f97545018a1fbc7548e7 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 07:42:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 07:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37369.69691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpRh-0004mp-CL; Wed, 25 Nov 2020 07:42:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37369.69691; Wed, 25 Nov 2020 07:42:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpRh-0004mi-8i; Wed, 25 Nov 2020 07:42:33 +0000
Received: by outflank-mailman (input) for mailman id 37369;
 Wed, 25 Nov 2020 07:42:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khpRf-0004md-DM
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 07:42:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f19419f4-4388-4a41-817f-007d89e54115;
 Wed, 25 Nov 2020 07:42:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 491C3ABCE;
 Wed, 25 Nov 2020 07:42:29 +0000 (UTC)
X-Inumbo-ID: f19419f4-4388-4a41-817f-007d89e54115
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606290149; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PzNZvDJCrng9xpCqfvz3Hpxclv9cnb+ur1jrb1jh5Tk=;
	b=UkKf78Dawy1wI1w4cmcJ1r+MGNGlLnAvem3cQfw0GZsMCeAlXRmsOLVJe+Tq7/AZbuxnM7
	aqOxDEcJQndScJjqyiyf8Ufos+wZdtGvZGhYkRdTJcU4sM0GZuNUUMi/VNU9RQC9soyKxu
	bzdUV4gr3j79ydgK0NL1WamyBAJfdBE=
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
 <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
Date: Wed, 25 Nov 2020 08:42:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 25.11.2020 06:23, Jürgen Groß wrote:
> On 24.11.20 17:32, Jan Beulich wrote:
>> On 24.11.2020 15:49, Jürgen Groß wrote:
>>> On 24.11.20 15:02, Jan Beulich wrote:
>>>> On 24.11.2020 08:01, Juergen Gross wrote:
>>>>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>>>>> can race in case the first one gets interrupted after setting
>>>>> EVTCHN_FIFO_PENDING and when the other one manages to set
>>>>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>>>>> lead to evtchn_check_pollers() being called before the event is put
>>>>> properly into the queue, resulting eventually in the guest not seeing
>>>>> the event pending and thus blocking forever afterwards.
>>>>>
>>>>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>>>>> lock") made the race just more obvious, while the fifo event channel
>>>>> implementation had this race from the beginning when an unmask operation
>>>>> was running in parallel with an event channel send operation.
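The interleaving described in the quoted commit message can be replayed
deterministically in a toy model (the bit names are borrowed from the
description; the helpers below are simplified, non-atomic stand-ins,
not the Xen implementation):

```c
#define PENDING_BIT 0
#define LINKED_BIT  1

/* Simplified stand-ins for the real bit operations. */
static int test_and_set_bit(int bit, unsigned int *word)
{
    int old = (*word >> bit) & 1;

    *word |= 1u << bit;
    return old;
}

static int test_bit(int bit, const unsigned int *word)
{
    return (*word >> bit) & 1;
}

/* Replay the racy interleaving step by step on one thread; returns 1
 * if the pollers would have been woken before the event was queued. */
static int replay_race(void)
{
    unsigned int word = 0;
    int queued = 0;
    int woke_before_queue = 0;

    /* CPU1: marks the event pending ... */
    test_and_set_bit(PENDING_BIT, &word);
    /* ... and is interrupted here, before it can test LINKED. */

    /* CPU2: a second set_pending for the same channel sets LINKED ... */
    test_and_set_bit(LINKED_BIT, &word);
    /* ... but has not yet placed the event on the queue. */

    /* CPU1 resumes: LINKED is set, so it assumes the event is queued
     * and wakes the pollers (evtchn_check_pollers() in the real code). */
    if ( test_bit(LINKED_BIT, &word) )
        woke_before_queue = !queued;

    /* Only now does CPU2 finish the enqueue -- too late. */
    queued = 1;

    return woke_before_queue;
}
```

A poller woken at that point sees no linked event and can block
forever, matching the symptom described above.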
>>>>
>>>> Ah yes, but then also only for inter-domain channels, as it was
>>>> only in that case that the "wrong" domain's event lock was held.
>>>> IOW there was a much earlier change already where this issue
>>>> got widened (when the per-channel locking got introduced). This
>>>> then got reduced to the original scope by XSA-343's adding of
>>>> locking to evtchn_unmask(). (Not sure how much of this history
>>>> wants actually adding here. I'm writing it down not the least to
>>>> make sure I have a complete enough picture.)
>>>
>>> I think we both agree that this race was possible for quite some time.
>>> And I even think one customer bug I've been looking into recently
>>> might be exactly this problem (a dom0 was occasionally hanging in
>>> cross-cpu function calls, but switching to 2-level events made the
>>> problem disappear).
>>
>> IPIs weren't affected earlier on (i.e. in any released version),
>> if my analysis above is correct.
> 
> I don't think it is correct.
> 
> An unmask operation in parallel with set_pending will have had the
> same race for IPIs.

Why? When FIFO locks were introduced, the event lock got acquired
around the call to evtchn_unmask(), and IPIs got sent with that
lock similarly held. Likewise after XSA-343 evtchn_unmask() as
well as the sending of IPIs acquire the per-channel lock (which at
that point was still an ordinary spin lock).

>>>>> Additionally when an
>>>>> event channel needs to change queues both queues need to be locked
>>>>> initially.
>>>>
>>>> Since this was (afaict) intentionally not the case before, I
>>>> think I would want to see a word spent on the "why", perhaps
>>>> better in a code comment than here. Even more so that you
>>>> delete a respective comment justifying the possible race as
>>>> permissible. And I have to admit right now I'm still uncertain
>>>> both ways, i.e. I neither have a clear understanding of why it
>>>> would have been considered fine the other way around before,
>>>> nor why the double locking is strictly needed.
>>>
>>> I need the double locking to avoid someone entering the locked region
>>> when dropping the lock for the old queue and taking the one for the
>>> new queue, as this would open the same race window again.
>>
>> Well, that's what you have already said.  The thing is that the code
>> prior to your change gives the impression that this race was
>> benign.
> 
> The race regarding a queue change, yes. But not the race I'm fixing with
> this patch. I need to make sure that only one caller is inside the big
> if clause for a specific event. And dropping the lock inside this clause
> would violate that assumption.
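As a generic sketch of the double-locking being argued for (illustrative
pthread code, not the Xen implementation): holding both queue locks for
the whole move closes the window that dropping the old lock before
taking the new one would reopen, and acquiring them in a fixed (here
address-based) order keeps two concurrent movers from deadlocking.

```c
#include <pthread.h>

/* Illustrative only: two queues, and an event moving between them. */
struct queue {
    pthread_mutex_t lock;
    int nr_events;
};

static void lock_both(struct queue *a, struct queue *b)
{
    /* Acquire in a global order (by address) so that two movers going
     * in opposite directions can't deadlock against each other. */
    if ( a == b )
        pthread_mutex_lock(&a->lock);
    else if ( a < b )
    {
        pthread_mutex_lock(&a->lock);
        pthread_mutex_lock(&b->lock);
    }
    else
    {
        pthread_mutex_lock(&b->lock);
        pthread_mutex_lock(&a->lock);
    }
}

static void unlock_both(struct queue *a, struct queue *b)
{
    pthread_mutex_unlock(&a->lock);
    if ( a != b )
        pthread_mutex_unlock(&b->lock);
}

static void move_event(struct queue *from, struct queue *to)
{
    /* Both locks are held across the whole move: nobody can observe
     * the event as belonging to neither queue. */
    lock_both(from, to);
    from->nr_events--;
    to->nr_events++;
    unlock_both(from, to);
}
```

The price is holding two locks at once on the queue-change path, which
is what the question about "why is the double locking strictly needed"
is weighing against the race being reopened.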

IOW the presumed wrong assumption back then was that the function
would always be called with a lock already held which excludes
the region to be entered twice for the same channel. But - was
this a wrong assumption at the time? Thinking about this again I
now actually come to the conclusion that my analysis above was
wrong in the other direction: Even inter-domain channels did have
consistent locking (of the other side's event lock), preventing
any such race there. Which implies that imo one of the Fixes: tags
wants dropping, as the race became possible only when "downgrading"
some of the involved locks to rw ones. Obviously my "evtchn:
convert vIRQ lock to an r/w one" then extends this race to vIRQ-s.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 07:45:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 07:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37375.69703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpUT-0004x9-Pj; Wed, 25 Nov 2020 07:45:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37375.69703; Wed, 25 Nov 2020 07:45:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpUT-0004x2-Mq; Wed, 25 Nov 2020 07:45:25 +0000
Received: by outflank-mailman (input) for mailman id 37375;
 Wed, 25 Nov 2020 07:45:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khpUS-0004wx-6C
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 07:45:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1569f378-2059-4e41-ba46-dc7c740523d5;
 Wed, 25 Nov 2020 07:45:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AAD57ABD7;
 Wed, 25 Nov 2020 07:45:22 +0000 (UTC)
X-Inumbo-ID: 1569f378-2059-4e41-ba46-dc7c740523d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606290322; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oP6blpcuK+VPVi3IG6ixxt3sFGZ4bfXvaJa2qLjbkAk=;
	b=B9WJ2juBejK8pywfUlhDzCD9SUBgQ4qEoc3Z/Fz1LPhpjGxgX28+2O/XXPzQn5SXKUedqc
	Me9QYiQtnWeBkczrSq1JEXoKflEqj8VaR0vqnriTep5cqWPvNRoL5fcG/RiUAqJwQA+/Vw
	EOyn/X46wm3LH5w2MgNyMw9qGGrvQzc=
Subject: Re: [PATCH v3 13/13] x86: replace open-coded occurrences of
 sizeof_field()...
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Jun Nakajima
 <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124190744.11343-1-paul@xen.org>
 <20201124190744.11343-14-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <262181c2-3a08-8b6d-5644-5a57fd4f46a1@suse.com>
Date: Wed, 25 Nov 2020 08:45:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124190744.11343-14-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 20:07, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... with macro evaluations, now that it is available.
> 
> A recent patch imported the sizeof_field() macro from Linux. This patch makes
> use of it in places where the construct is currently open-coded.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 07:51:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 07:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37384.69715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpaU-0005rv-Gd; Wed, 25 Nov 2020 07:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37384.69715; Wed, 25 Nov 2020 07:51:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpaU-0005ro-Ci; Wed, 25 Nov 2020 07:51:38 +0000
Received: by outflank-mailman (input) for mailman id 37384;
 Wed, 25 Nov 2020 07:51:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khpaT-0005rj-Ft
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 07:51:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec37a8ff-9e05-487e-a713-bf37a6061b89;
 Wed, 25 Nov 2020 07:51:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 40C22AF45;
 Wed, 25 Nov 2020 07:51:35 +0000 (UTC)
X-Inumbo-ID: ec37a8ff-9e05-487e-a713-bf37a6061b89
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606290695; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ywa1RSJ2YPNS20uEm3T4o6nbcbo/euOC4wBThw/kH7A=;
	b=SANHV08dXrAj5Ouc/HMdosUKcbx/AKHH9cL8HiDU84egquj1OPQcZC0/ENn4e1EaB++gqg
	gts2xdapH6uGmHNUYeBCfk9gZK7Q34I3bV+w3HpbSMKOCjRjnzdSLCbojupoJzoIgJ1bZz
	bKubCEjllsWlne3mKmf2FI/mnTSQuuU=
Subject: Re: [PATCH v3 01/13] viridian: don't blindly write to 32-bit
 registers if 'mode' is invalid
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201124190744.11343-1-paul@xen.org>
 <20201124190744.11343-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ed2dbafa-b1fc-7ce3-9814-9034b0393921@suse.com>
Date: Wed, 25 Nov 2020 08:51:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124190744.11343-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 20:07, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> If hvm_guest_x86_mode() returns something other than 8 or 4 then
> viridian_hypercall() will return immediately but, on the way out, will write
> back status as if 'mode' was 4. This patch simply makes it leave the registers
> alone.
> 
> NOTE: The formatting of the 'out' label and the switch statement are also
>       adjusted as per CODING_STYLE.

Partly only as far as the latter goes:

> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -692,13 +692,14 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>          break;
>      }
>  
> -out:
> + out:
>      output.result = status;
>      switch (mode) {

This would want to be

    switch ( mode )
    {

I guess this could easily be taken care of while committing.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 07:58:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 07:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37392.69727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpgt-0006A2-6n; Wed, 25 Nov 2020 07:58:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37392.69727; Wed, 25 Nov 2020 07:58:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpgt-00069v-3K; Wed, 25 Nov 2020 07:58:15 +0000
Received: by outflank-mailman (input) for mailman id 37392;
 Wed, 25 Nov 2020 07:58:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xP3f=E7=amazon.co.uk=prvs=591f578ad=pdurrant@srs-us1.protection.inumbo.net>)
 id 1khpgr-00069q-Ol
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 07:58:13 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4b79f69-380a-443b-8306-a1f7c55417a1;
 Wed, 25 Nov 2020 07:58:10 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2b-baacba05.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 25 Nov 2020 07:58:04 +0000
Received: from EX13D32EUC002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2b-baacba05.us-west-2.amazon.com (Postfix) with ESMTPS
 id 1E2C6A1F62; Wed, 25 Nov 2020 07:58:03 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 25 Nov 2020 07:58:01 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 25 Nov 2020 07:58:01 +0000
X-Inumbo-ID: a4b79f69-380a-443b-8306-a1f7c55417a1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1606291091; x=1637827091;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=pSVy8q3CeysJY4r8/vbWRC+tCrOCTRUOZ6JeinKUUKk=;
  b=EhOFRl3dbbHoU5Y6HDEhljydNo6vxOPLpE14+A5kcfpQqMG/M2/AR7lS
   f8XXPlPgTxTpsgdbhU8fEWgO79fqeY62k0dtLuk1ZeCn5KZaV0kpHLGNV
   pd8Zx0GP2+veCBwtN86+KRtRvgeRWJzfdb9C5l9SpOwhLe6fIqFtIRae+
   E=;
X-IronPort-AV: E=Sophos;i="5.78,368,1599523200"; 
   d="scan'208";a="68638787"
Subject: RE: [PATCH v3 01/13] viridian: don't blindly write to 32-bit registers if
 'mode' is invalid
Thread-Topic: [PATCH v3 01/13] viridian: don't blindly write to 32-bit registers if 'mode'
 is invalid
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQHWwv/mBCagLEwxTECmvm/ahimeeKnYe0Gw
Date: Wed, 25 Nov 2020 07:58:01 +0000
Message-ID: <0ccd665ede7f4118bf719d29dea40b02@EX13D32EUC003.ant.amazon.com>
References: <20201124190744.11343-1-paul@xen.org>
 <20201124190744.11343-2-paul@xen.org>
 <ed2dbafa-b1fc-7ce3-9814-9034b0393921@suse.com>
In-Reply-To: <ed2dbafa-b1fc-7ce3-9814-9034b0393921@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.102]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 25 November 2020 07:52
> To: Paul Durrant <paul@xen.org>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH v3 01/13] viridian: don't blindly write to 32-bit registers if 'mode'
> is invalid
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
>
> On 24.11.2020 20:07, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > If hvm_guest_x86_mode() returns something other than 8 or 4 then
> > viridian_hypercall() will return immediately but, on the way out, will write
> > back status as if 'mode' was 4. This patch simply makes it leave the registers
> > alone.
> >
> > NOTE: The formatting of the 'out' label and the switch statement are also
> >       adjusted as per CODING_STYLE.
>
> Partly only as far as the latter goes:
>
> > --- a/xen/arch/x86/hvm/viridian/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian/viridian.c
> > @@ -692,13 +692,14 @@ int viridian_hypercall(struct cpu_user_regs *regs)
> >          break;
> >      }
> >
> > -out:
> > + out:
> >      output.result = status;
> >      switch (mode) {
>
> This would want to be
>
>     switch ( mode )
>     {
>

Oh, yes.

> I guess this could easily be taken care of while committing.

Thanks,

  Paul

>
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:08:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:08:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37405.69745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpqQ-0007fB-HS; Wed, 25 Nov 2020 08:08:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37405.69745; Wed, 25 Nov 2020 08:08:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khpqQ-0007f4-EK; Wed, 25 Nov 2020 08:08:06 +0000
Received: by outflank-mailman (input) for mailman id 37405;
 Wed, 25 Nov 2020 08:08:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khpqP-0007ez-4T
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:08:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 361c3867-9381-4848-8bc7-540bf03beaec;
 Wed, 25 Nov 2020 08:08:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EA2FBAF13;
 Wed, 25 Nov 2020 08:08:02 +0000 (UTC)
X-Inumbo-ID: 361c3867-9381-4848-8bc7-540bf03beaec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606291683; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UExZorE119fqiJFnUUqCcIDfodxvAsbWH78/1a4Q1NY=;
	b=biX9k7QvRtGLisXF+GlaOjcnEKv1vFExMHkv/kNuJgpwDG3DCYM0FX3oNGW5tNj1Us3TRt
	gicDXweaTI62x1GsV2KNMVsv7XNebFoSr3vIAW6Acf/8DBJ6ygI8YRP4egRwoFXvQLWSOc
	n7CU1yg17GDjjFgSmUlIZOuC0ClijJI=
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
 <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
 <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a1b7307a-f825-091f-8499-10e47046ff07@suse.com>
Date: Wed, 25 Nov 2020 09:08:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="lElX8ctztPUSDLzTiLz35NztcKVPW5Xw4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--lElX8ctztPUSDLzTiLz35NztcKVPW5Xw4
Content-Type: multipart/mixed; boundary="Vp5jdXvulMPTeSjiJPdeNxRHAupihaDIk";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <a1b7307a-f825-091f-8499-10e47046ff07@suse.com>
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
 <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
 <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
In-Reply-To: <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>

--Vp5jdXvulMPTeSjiJPdeNxRHAupihaDIk
Content-Type: multipart/mixed;
 boundary="------------C0A45ADC55C2E4603E533366"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C0A45ADC55C2E4603E533366
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 25.11.20 08:42, Jan Beulich wrote:
> On 25.11.2020 06:23, Jürgen Groß wrote:
>> On 24.11.20 17:32, Jan Beulich wrote:
>>> On 24.11.2020 15:49, Jürgen Groß wrote:
>>>> On 24.11.20 15:02, Jan Beulich wrote:
>>>>> On 24.11.2020 08:01, Juergen Gross wrote:
>>>>>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>>>>>> can race in case the first one gets interrupted after setting
>>>>>> EVTCHN_FIFO_PENDING and when the other one manages to set
>>>>>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>>>>>> lead to evtchn_check_pollers() being called before the event is put
>>>>>> properly into the queue, resulting eventually in the guest not seeing
>>>>>> the event pending and thus blocking forever afterwards.
>>>>>>
>>>>>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>>>>>> lock") made the race just more obvious, while the fifo event channel
>>>>>> implementation had this race from the beginning when an unmask operation
>>>>>> was running in parallel with an event channel send operation.
>>>>>
>>>>> Ah yes, but then also only for inter-domain channels, as it was
>>>>> only in that case that the "wrong" domain's event lock was held.
>>>>> IOW there was a much earlier change already where this issue
>>>>> got widened (when the per-channel locking got introduced). This
>>>>> then got reduced to the original scope by XSA-343's adding of
>>>>> locking to evtchn_unmask(). (Not sure how much of this history
>>>>> wants actually adding here. I'm writing it down not the least to
>>>>> make sure I have a complete enough picture.)
>>>>
>>>> I think we both agree that this race was possible for quite some time.
>>>> And I even think one customer bug I've been looking into recently
>>>> might be exactly this problem (a dom0 was occasionally hanging in
>>>> cross-cpu function calls, but switching to 2-level events made the
>>>> problem disappear).
>>>
>>> IPIs weren't affected earlier on (i.e. in any released version),
>>> if my analysis above is correct.
>>
>> I don't think it is correct.
>>
>> An unmask operation in parallel with set_pending will have had the
>> same race for IPIs.
>
> Why? When FIFO locks were introduced, the event lock got acquired
> around the call to evtchn_unmask(), and IPIs got sent with that
> lock similarly held. Likewise after XSA-343 evtchn_unmask() as
> well as the sending of IPIs acquire the per-channel lock (which at
> that point was still an ordinary spin lock).

Oh, I think we are talking about different paths.

I'm talking about EVTCHNOP_unmask. There is no lock involved when
calling evtchn_unmask().

>
>>>>>> Additionally when an
>>>>>> event channel needs to change queues both queues need to be locked
>>>>>> initially.
>>>>>
>>>>> Since this was (afaict) intentionally not the case before, I
>>>>> think I would want to see a word spent on the "why", perhaps
>>>>> better in a code comment than here. Even more so that you
>>>>> delete a respective comment justifying the possible race as
>>>>> permissible. And I have to admit right now I'm still uncertain
>>>>> both ways, i.e. I neither have a clear understanding of why it
>>>>> would have been considered fine the other way around before,
>>>>> nor why the double locking is strictly needed.
>>>>
>>>> I need the double locking to avoid someone entering the locked region
>>>> when dropping the lock for the old queue and taking the one for the
>>>> new queue, as this would open the same race window again.
>>>
>>> Well, that's what I have already said. Thing is that the code
>>> prior to your change gives the impression as if this race was
>>> benign.
>>
>> The race regarding a queue change, yes. But not the race I'm fixing with
>> this patch. I need to make sure that only one caller is inside the big
>> if clause for a specific event. And dropping the lock inside this clause
>> would violate that assumption.
>
> IOW the presumed wrong assumption back then was that the function
> would always be called with a lock already held which excludes
> the region to be entered twice for the same channel. But - was
> this a wrong assumption at the time? Thinking about this again I
> now actually come to the conclusion that my analysis above was
> wrong in the other direction: Even inter-domain channels did have
> consistent locking (of the other side's event lock), preventing
> any such race there. Which implies that imo one of the Fixes: tags
> wants dropping, as the race became possible only when "downgrading"
> some of the involved locks to rw ones. Obviously my "evtchn:
> convert vIRQ lock to an r/w one" then extends this race to vIRQ-s.

No. See my remark regarding unmask.


Juergen

--------------C0A45ADC55C2E4603E533366--

--Vp5jdXvulMPTeSjiJPdeNxRHAupihaDIk--

--lElX8ctztPUSDLzTiLz35NztcKVPW5Xw4--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:25:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:25:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37413.69756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khq6q-00010Y-1x; Wed, 25 Nov 2020 08:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37413.69756; Wed, 25 Nov 2020 08:25:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khq6p-00010R-Uz; Wed, 25 Nov 2020 08:25:03 +0000
Received: by outflank-mailman (input) for mailman id 37413;
 Wed, 25 Nov 2020 08:25:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khq6o-00010M-Lt
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:25:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 684dd49f-fbb8-4395-8681-68770bb2c1cd;
 Wed, 25 Nov 2020 08:25:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 02523AF33;
 Wed, 25 Nov 2020 08:25:00 +0000 (UTC)
X-Inumbo-ID: 684dd49f-fbb8-4395-8681-68770bb2c1cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606292700; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=e802/OwdX5CYRWaRzUWRYN2PrzGYFf/d6gxBpboHu8A=;
	b=C/OHRJpoIJOOScLSztOp3EsPsaWk8rNu2UeMfHUYUsfqurOOhVoLr+37CSkbNLSEj1LOvz
	OrDQf4fvEZSeoa3KfFybvfChtyMbmVpxKurMYnljIiyBG9lOcQN34UvwVw+8oSlhsvf3AY
	F/wsB2UmL77e9bO7sEqoPcwCwGvkG70=
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
 <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
 <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
 <a1b7307a-f825-091f-8499-10e47046ff07@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <10293fd3-2893-2dee-c022-a36bec3fc87f@suse.com>
Date: Wed, 25 Nov 2020 09:25:00 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a1b7307a-f825-091f-8499-10e47046ff07@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 25.11.2020 09:08, Jürgen Groß wrote:
> On 25.11.20 08:42, Jan Beulich wrote:
>> On 25.11.2020 06:23, Jürgen Groß wrote:
>>> On 24.11.20 17:32, Jan Beulich wrote:
>>>> On 24.11.2020 15:49, Jürgen Groß wrote:
>>>>> On 24.11.20 15:02, Jan Beulich wrote:
>>>>>> On 24.11.2020 08:01, Juergen Gross wrote:
>>>>>>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>>>>>>> can race in case the first one gets interrupted after setting
>>>>>>> EVTCHN_FIFO_PENDING and when the other one manages to set
>>>>>>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>>>>>>> lead to evtchn_check_pollers() being called before the event is put
>>>>>>> properly into the queue, resulting eventually in the guest not seeing
>>>>>>> the event pending and thus blocking forever afterwards.
>>>>>>>
>>>>>>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>>>>>>> lock") made the race just more obvious, while the fifo event channel
>>>>>>> implementation had this race from the beginning when an unmask operation
>>>>>>> was running in parallel with an event channel send operation.
>>>>>>
>>>>>> Ah yes, but then also only for inter-domain channels, as it was
>>>>>> only in that case that the "wrong" domain's event lock was held.
>>>>>> IOW there was a much earlier change already where this issue
>>>>>> got widened (when the per-channel locking got introduced). This
>>>>>> then got reduced to the original scope by XSA-343's adding of
>>>>>> locking to evtchn_unmask(). (Not sure how much of this history
>>>>>> wants actually adding here. I'm writing it down not the least to
>>>>>> make sure I have a complete enough picture.)
>>>>>
>>>>> I think we both agree that this race was possible for quite some time.
>>>>> And I even think one customer bug I've been looking into recently
>>>>> might be exactly this problem (a dom0 was occasionally hanging in
>>>>> cross-cpu function calls, but switching to 2-level events made the
>>>>> problem disappear).
>>>>
>>>> IPIs weren't affected earlier on (i.e. in any released version),
>>>> if my analysis above is correct.
>>>
>>> I don't think it is correct.
>>>
>>> An unmask operation in parallel with set_pending will have had the
>>> same race for IPIs.
>>
>> Why? When FIFO locks were introduced, the event lock got acquired
>> around the call to evtchn_unmask(), and IPIs got sent with that
>> lock similarly held. Likewise after XSA-343 evtchn_unmask() as
>> well as the sending of IPIs acquire the per-channel lock (which at
>> that point was still an ordinary spin lock).
> 
> Oh, I think we are talking about different paths.
> 
> I'm talking about EVTCHNOP_unmask. There is no lock involved when
> calling evtchn_unmask().

Above I said "When FIFO locks were introduced, the event lock got
acquired around the call to evtchn_unmask()" and further "Likewise
after XSA-343 evtchn_unmask() ..." I can't see how we're talking
of different paths here. The situation has changed from back then
(lock in the callers of evtchn_unmask()) to after XSA-343 (lock in
evtchn_unmask()), but there was suitable locking. There was a
(large) window in time prior to XSA-343 where there was indeed no
locking, but that was introduced by the conversion to per-channel
locks and addressed by one of the XSA-343 changes. The issue then
got re-introduced by converting spin_lock() to read_lock().

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:29:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37420.69769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqAr-0001De-Jm; Wed, 25 Nov 2020 08:29:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37420.69769; Wed, 25 Nov 2020 08:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqAr-0001DX-Gm; Wed, 25 Nov 2020 08:29:13 +0000
Received: by outflank-mailman (input) for mailman id 37420;
 Wed, 25 Nov 2020 08:29:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khqAq-0001DS-FM
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:29:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3db7e83-8a91-44c4-bff5-eeef9972e782;
 Wed, 25 Nov 2020 08:29:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B8592AC75;
 Wed, 25 Nov 2020 08:29:10 +0000 (UTC)
X-Inumbo-ID: c3db7e83-8a91-44c4-bff5-eeef9972e782
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606292950; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZKd5ZCt572UnVzRaw3rZZd4bRDi06Pu84QgiIKlcDrE=;
	b=dYfl5veH5H7GujeAD/9br1q5udpUiVVcwP/i3jOBp6o2wS7hzaaOeaoM4chf3xlj4JeBTT
	rmwJE24qvhVUHSKK33Qe/lhLiOfpvwwz691QVw/gunFkCj+DZVTysg9auydJ10ekr1AK5w
	y04EQuUkfchsQvB4jhqxA6ehZ1Mdbm8=
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
 <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
 <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
 <a1b7307a-f825-091f-8499-10e47046ff07@suse.com>
 <10293fd3-2893-2dee-c022-a36bec3fc87f@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <49be5ba9-edba-7486-dde2-2a557a51017f@suse.com>
Date: Wed, 25 Nov 2020 09:29:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <10293fd3-2893-2dee-c022-a36bec3fc87f@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="TPVb0wBsWxS3PtiQYRuZ0umk97OmgpAx1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--TPVb0wBsWxS3PtiQYRuZ0umk97OmgpAx1
Content-Type: multipart/mixed; boundary="Rzj1t93VqeDJMAlQS92MFWtsAidJv3o1L";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <49be5ba9-edba-7486-dde2-2a557a51017f@suse.com>
Subject: Re: [PATCH v7 3/3] xen/events: rework fifo queue locking
References: <20201124070106.26854-1-jgross@suse.com>
 <20201124070106.26854-4-jgross@suse.com>
 <c627b42b-1e1f-b83a-2db8-b9e5fa5dce10@suse.com>
 <8e2853c3-9f84-2fd6-0e41-1f1d9172f236@suse.com>
 <9eada207-9880-b2fe-054c-f3218d2034b2@suse.com>
 <cce1b71c-aa37-a3b7-990e-bd2f0437d074@suse.com>
 <c3091b91-b594-7a5e-f008-6df10db227ec@suse.com>
 <a1b7307a-f825-091f-8499-10e47046ff07@suse.com>
 <10293fd3-2893-2dee-c022-a36bec3fc87f@suse.com>
In-Reply-To: <10293fd3-2893-2dee-c022-a36bec3fc87f@suse.com>

--Rzj1t93VqeDJMAlQS92MFWtsAidJv3o1L
Content-Type: multipart/mixed;
 boundary="------------E5E54DD56807CFCFDAE07E50"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E5E54DD56807CFCFDAE07E50
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 25.11.20 09:25, Jan Beulich wrote:
> On 25.11.2020 09:08, Jürgen Groß wrote:
>> On 25.11.20 08:42, Jan Beulich wrote:
>>> On 25.11.2020 06:23, Jürgen Groß wrote:
>>>> On 24.11.20 17:32, Jan Beulich wrote:
>>>>> On 24.11.2020 15:49, Jürgen Groß wrote:
>>>>>> On 24.11.20 15:02, Jan Beulich wrote:
>>>>>>> On 24.11.2020 08:01, Juergen Gross wrote:
>>>>>>>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>>>>>>>> can race in case the first one gets interrupted after setting
>>>>>>>> EVTCHN_FIFO_PENDING and when the other one manages to set
>>>>>>>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>>>>>>>> lead to evtchn_check_pollers() being called before the event is put
>>>>>>>> properly into the queue, resulting eventually in the guest not seeing
>>>>>>>> the event pending and thus blocking forever afterwards.
>>>>>>>>
>>>>>>>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>>>>>>>> lock") made the race just more obvious, while the fifo event channel
>>>>>>>> implementation had this race from the beginning when an unmask operation
>>>>>>>> was running in parallel with an event channel send operation.
>>>>>>>
>>>>>>> Ah yes, but then also only for inter-domain channels, as it was
>>>>>>> only in that case that the "wrong" domain's event lock was held.
>>>>>>> IOW there was a much earlier change already where this issue
>>>>>>> got widened (when the per-channel locking got introduced). This
>>>>>>> then got reduced to the original scope by XSA-343's adding of
>>>>>>> locking to evtchn_unmask(). (Not sure how much of this history
>>>>>>> wants actually adding here. I'm writing it down not the least to
>>>>>>> make sure I have a complete enough picture.)
>>>>>>
>>>>>> I think we both agree that this race was possible for quite some time.
>>>>>> And I even think one customer bug I've been looking into recently
>>>>>> might be exactly this problem (a dom0 was occasionally hanging in
>>>>>> cross-cpu function calls, but switching to 2-level events made the
>>>>>> problem disappear).
>>>>>
>>>>> IPIs weren't affected earlier on (i.e. in any released version),
>>>>> if my analysis above is correct.
>>>>
>>>> I don't think it is correct.
>>>>
>>>> An unmask operation in parallel with set_pending will have had the
>>>> same race for IPIs.
>>>
>>> Why? When FIFO locks were introduced, the event lock got acquired
>>> around the call to evtchn_unmask(), and IPIs got sent with that
>>> lock similarly held. Likewise after XSA-343 evtchn_unmask() as
>>> well as the sending of IPIs acquire the per-channel lock (which at
>>> that point was still an ordinary spin lock).
>>
>> Oh, I think we are talking about different paths.
>>
>> I'm talking about EVTCHNOP_unmask. There is no lock involved when
>> calling evtchn_unmask().
>
> Above I said "When FIFO locks were introduced, the event lock got
> acquired around the call to evtchn_unmask()" and further "Likewise
> after XSA-343 evtchn_unmask() ..." I can't see how we're talking
> of different paths here. The situation has changed from back then
> (lock in the callers of evtchn_unmask()) to after XSA-343 (lock in
> evtchn_unmask()), but there was suitable locking. There was a
> (large) window in time prior to XSA-343 where there was indeed no
> locking, but that was introduced by the conversion to per-channel
> locks and addressed by one of the XSA-343 changes. The issue then
> got re-introduced by converting spin_lock() to read_lock().

Okay, then I misunderstood your claim. I thought you meant there was
always suitable locking before the rwlock change. So I need to modify
the Fixes: tag.


Juergen

--------------E5E54DD56807CFCFDAE07E50
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E5E54DD56807CFCFDAE07E50--

--Rzj1t93VqeDJMAlQS92MFWtsAidJv3o1L--

--TPVb0wBsWxS3PtiQYRuZ0umk97OmgpAx1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl++FdYFAwAAAAAACgkQsN6d1ii/Ey+l
ygf/RHCQk3EXiHGSicg6J4nW8ZVlIQuDjg5rVsJTlb+1LITPiF9a83eBVfDQyRvKKNw76xBJs+b1
7wL+5Si+1EqQ4nwLIXW0aik/lZGggaKflz7dWM42fWUz92Rn1BlQn+/dGj0PDX2J01o8bCqLstPO
4nb1B1AZAtKd2A/oxFUJDNHfOP/W3S9UKDA2IHZ05izjeCsJuqkDZ5cVeha/891hrn76sq3TqdrQ
/YOWy7UYr4edYf27PiELL8H8Z76iWql00gjSWBRfciLjpXrVWwYNH0t7u8boZE4jvBG6roCDbQtR
UsEkPCZF01nEC4SXE3xs1mEYUH0s9GtvwlaxfLJRFw==
=2Ypl
-----END PGP SIGNATURE-----

--TPVb0wBsWxS3PtiQYRuZ0umk97OmgpAx1--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:43:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37430.69784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqOD-0002uQ-Sx; Wed, 25 Nov 2020 08:43:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37430.69784; Wed, 25 Nov 2020 08:43:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqOD-0002uJ-Py; Wed, 25 Nov 2020 08:43:01 +0000
Received: by outflank-mailman (input) for mailman id 37430;
 Wed, 25 Nov 2020 08:43:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqOC-0002uE-PL
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:43:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee7ec38e-885f-4406-99b6-cc469fac6b06;
 Wed, 25 Nov 2020 08:42:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 89E6AABD7;
 Wed, 25 Nov 2020 08:42:58 +0000 (UTC)
X-Inumbo-ID: ee7ec38e-885f-4406-99b6-cc469fac6b06
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606293778; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=DPHuqGl4qpSo8PE8TebKnbH3KKmtBNPNXBMO5S57Ukg=;
	b=Rv9MxrKo5/oWxU5PyYUwjOikRjxCOmFa35odDUDj/KkS4Mum1tZ6CT0WUzKuw5VtsErbre
	KbeHQFaIhrh/N373jrGMOebszs5NDqd+J9hvlEE6ESnTMxGwY4lj+YU985qy4/BDGBRmzl
	Lwl0RLCueA67Hk1zJYMdE20X38frk+U=
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] x86: asm-offsets.h and !PV32 adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Date: Wed, 25 Nov 2020 09:42:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The 2nd and 3rd patches here effectively called for the latter
two to also be created, which is why they all live together.

1: build: limit rebuilding of asm-offsets.s
2: build: limit #include-ing by asm-offsets.h
3: build: restrict contents of asm-offsets.h when !HVM / !PV
4: hypercall vector is unused when !PV32
5: don't build unused entry code when !PV32

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:45:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:45:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37437.69796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqR4-000351-BJ; Wed, 25 Nov 2020 08:45:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37437.69796; Wed, 25 Nov 2020 08:45:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqR4-00034u-7u; Wed, 25 Nov 2020 08:45:58 +0000
Received: by outflank-mailman (input) for mailman id 37437;
 Wed, 25 Nov 2020 08:45:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqR3-00034o-1L
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:45:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c074e09a-f2bc-462e-8dfb-c66fae4928e8;
 Wed, 25 Nov 2020 08:45:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7DEB7ABD7;
 Wed, 25 Nov 2020 08:45:55 +0000 (UTC)
X-Inumbo-ID: c074e09a-f2bc-462e-8dfb-c66fae4928e8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606293955; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B+q6b8HTVycttCqpnu1a9+GhKtnMC72vN0yNLtGmp6M=;
	b=IXD9JNw6jO7p6Kzmz1opC7pNa+yQrmPIE1cc9xpDxRe3V8UsVu9lSH4QH0KftAufQHUm5y
	Rt5cYTtfNRJaXojItBBvgUsneqbeZf0R/vQU2KPOvig5MaZTqlpfohoYHy/cqXHUwzZrQH
	rOeXVYgoiIzGZLzAFrs6HtaUVZjOoAI=
Subject: [PATCH 1/5] x86/build: limit rebuilding of asm-offsets.h
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Message-ID: <d437bdbf-3047-06ad-2fe8-f445cf8b3240@suse.com>
Date: Wed, 25 Nov 2020 09:45:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This file has a long dependency list (through asm-offsets.s) and a
long list of dependents. IOW if any of the former changes, all of the
latter will be rebuilt, even if there's no actual change to the
generated file. This is the primary scenario we have the move-if-changed
macro for.

Since debug information may easily cause the file contents to change in
benign ways, also avoid emitting this into the output file.

Finally, even before this change, *.new files needed to be included in
what gets removed by the "clean" target.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Perhaps Arm would want to do the same. In fact, perhaps the rules should
be unified by moving to common code?

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -235,7 +235,8 @@ efi/buildid.o efi/relocs-dummy.o: $(BASE
 efi/buildid.o efi/relocs-dummy.o: ;
 
 asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c $(BASEDIR)/include/asm-x86/asm-macros.h
-	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -o $@ $<
+	$(CC) $(filter-out -Wa$(comma)% -flto,$(c_flags)) -S -g0 -o $@.new $<
+	$(call move-if-changed,$@.new,$@)
 
 asm-macros.i: CFLAGS-y += -D__ASSEMBLY__ -P
 
@@ -262,7 +263,7 @@ efi/mkreloc: efi/mkreloc.c
 
 .PHONY: clean
 clean::
-	rm -f asm-offsets.s *.lds boot/*.o boot/*~ boot/core boot/mkelf32
+	rm -f asm-offsets.s *.lds *.new boot/*.o boot/*~ boot/core boot/mkelf32
 	rm -f asm-macros.i $(BASEDIR)/include/asm-x86/asm-macros.*
 	rm -f $(BASEDIR)/.xen-syms.[0-9]* boot/.*.d $(BASEDIR)/.xen.elf32
 	rm -f $(BASEDIR)/.xen.efi.[0-9]* efi/*.efi efi/mkreloc
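[Editorial note: the effect of the move-if-changed macro that the patch above relies on can be sketched in plain shell. This is an illustration only — the real macro lives in Xen's build system; the function name and file names here are chosen for the example.]

```shell
# Minimal stand-in for move-if-changed: only replace the target when the
# regenerated file actually differs, so make sees an unchanged timestamp
# and skips rebuilding the many dependents of asm-offsets.h.
move_if_changed() {
    if cmp -s "$1" "$2"; then
        rm -f "$1"            # no change: keep old file and its timestamp
    else
        mv -f "$1" "$2"       # real change: update the target
    fi
}

printf 'A\n' > asm-offsets.h
touch -d '2020-01-01' asm-offsets.h
old_stamp=$(stat -c %Y asm-offsets.h)

printf 'A\n' > asm-offsets.h.new       # benign rebuild, same contents
move_if_changed asm-offsets.h.new asm-offsets.h
new_stamp=$(stat -c %Y asm-offsets.h)
[ "$old_stamp" = "$new_stamp" ] && echo "timestamp preserved"
```

Dropping debug info with -g0, as the patch does, makes benign content changes (and hence the "no change" branch) far more likely.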


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:49:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:49:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37446.69808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqUN-0003HW-RJ; Wed, 25 Nov 2020 08:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37446.69808; Wed, 25 Nov 2020 08:49:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqUN-0003HP-O4; Wed, 25 Nov 2020 08:49:23 +0000
Received: by outflank-mailman (input) for mailman id 37446;
 Wed, 25 Nov 2020 08:49:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqUM-0003HK-GP
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:49:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da7568d2-6e50-4441-a603-52bebf66854c;
 Wed, 25 Nov 2020 08:49:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E9945AC90;
 Wed, 25 Nov 2020 08:49:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606294161; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U1LKbOfB2ZMjw4na7OMaZ9Bq1kO31mvqo/7Y3UNtHJw=;
	b=XyoXnOj9Wknf1zmg1tFyZwodveuixxGPUub6kVYP+VBBtOqYJcMEqsmqabjaeV23bL3CAh
	hs7oLozAVrfdkRnmX/gXYdDwM4O03N5BDk6fH3BlKOKvMjfAtsMf+qBKQCAD/Iyn+djxVA
	ND2ZL8n9+NE9E0kGYu2mQDOy3jBJJZc=
Subject: [PATCH 2/5] x86/build: limit #include-ing by asm-offsets.c
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Message-ID: <d7ac370e-2e1f-5b7a-b832-63577689053c@suse.com>
Date: Wed, 25 Nov 2020 09:49:21 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This file has a long list of dependencies, and asm-offsets.h, generated
from it, has a long list of dependents. IOW if any of the former
changes, all of the latter will be rebuilt, even if there's no actual
change to the generated file. Therefore avoid including headers we
don't actually need (either generally or depending on configuration).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -5,11 +5,13 @@
  */
 #define COMPILE_OFFSETS
 
+#ifdef CONFIG_PERF_COUNTERS
 #include <xen/perfc.h>
+#endif
 #include <xen/sched.h>
-#include <xen/bitops.h>
+#ifdef CONFIG_PV
 #include <compat/xen.h>
-#include <asm/fixmap.h>
+#endif
 #include <asm/hardirq.h>
 #include <xen/multiboot.h>
 #include <xen/multiboot2.h>
@@ -101,7 +103,6 @@ void __dummy__(void)
 #ifdef CONFIG_PV
     OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
     BLANK();
-#endif
 
     OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
     OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
@@ -110,6 +111,7 @@ void __dummy__(void)
     OFFSET(COMPAT_VCPUINFO_upcall_pending, struct compat_vcpu_info, evtchn_upcall_pending);
     OFFSET(COMPAT_VCPUINFO_upcall_mask, struct compat_vcpu_info, evtchn_upcall_mask);
     BLANK();
+#endif
 
     OFFSET(CPUINFO_guest_cpu_user_regs, struct cpu_info, guest_cpu_user_regs);
     OFFSET(CPUINFO_verw_sel, struct cpu_info, verw_sel);


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:49:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:49:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37453.69820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqUu-0003Ok-8L; Wed, 25 Nov 2020 08:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37453.69820; Wed, 25 Nov 2020 08:49:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqUu-0003Od-5D; Wed, 25 Nov 2020 08:49:56 +0000
Received: by outflank-mailman (input) for mailman id 37453;
 Wed, 25 Nov 2020 08:49:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqUt-0003OW-CT
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:49:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db0f7686-f875-4e5d-82d4-45c46b5d9328;
 Wed, 25 Nov 2020 08:49:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5FC2AC90;
 Wed, 25 Nov 2020 08:49:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606294193; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0cMjBzLauJw5SKrfwXQnQb0ce9Bg9IPFpz3h0NPwyEQ=;
	b=F/riuD7HD9w3jnq4EbN2kMXuoWkqs4dYSNX/5ppXibldO5K0OMp8xhKV7rSnANLrQEDdUG
	rHX1yylqJN9NpWnFYX9pHLOGZeBjg9NPwFez058PnlqwCKUJK02tfn6QrKLfOUeacaeiFY
	htlJ5kDm3Suaupo8h8ywLaLFrt0ityI=
Subject: [PATCH 3/5] x86/build: restrict contents of asm-offsets.h when !HVM /
 !PV
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Message-ID: <d41ce371-262f-747a-9f6d-e5ab85a93aa5@suse.com>
Date: Wed, 25 Nov 2020 09:49:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This file has a long list of dependencies (through asm-offsets.[cs])
and a long list of dependents. IOW if any of the former changes, all of
the latter will be rebuilt, even if there's no actual change to the
generated file. Therefore avoid producing symbols we don't actually
need, depending on configuration.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -84,6 +84,7 @@ void __dummy__(void)
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
     BLANK();
 
+#ifdef CONFIG_HVM
     OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm.svm.vmcb_pa);
     OFFSET(VCPU_svm_vmcb, struct vcpu, arch.hvm.svm.vmcb);
     BLANK();
@@ -99,6 +100,7 @@ void __dummy__(void)
     OFFSET(VCPU_nhvm_p2m, struct vcpu, arch.hvm.nvcpu.nv_p2m);
     OFFSET(VCPU_nsvm_hap_enabled, struct vcpu, arch.hvm.nvcpu.u.nsvm.ns_hap_enabled);
     BLANK();
+#endif
 
 #ifdef CONFIG_PV
     OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
@@ -128,6 +130,7 @@ void __dummy__(void)
     DEFINE(CPUINFO_sizeof, sizeof(struct cpu_info));
     BLANK();
 
+#ifdef CONFIG_PV
     OFFSET(TRAPINFO_eip, struct trap_info, address);
     OFFSET(TRAPINFO_cs, struct trap_info, cs);
     OFFSET(TRAPINFO_flags, struct trap_info, flags);
@@ -139,6 +142,7 @@ void __dummy__(void)
     OFFSET(TRAPBOUNCE_cs, struct trap_bounce, cs);
     OFFSET(TRAPBOUNCE_eip, struct trap_bounce, eip);
     BLANK();
+#endif
 
     OFFSET(VCPUMSR_spec_ctrl_raw, struct vcpu_msrs, spec_ctrl.raw);
     BLANK();



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37459.69832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqVq-0004CG-Ix; Wed, 25 Nov 2020 08:50:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37459.69832; Wed, 25 Nov 2020 08:50:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqVq-0004C9-Fn; Wed, 25 Nov 2020 08:50:54 +0000
Received: by outflank-mailman (input) for mailman id 37459;
 Wed, 25 Nov 2020 08:50:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqVp-0004C0-0c
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:50:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa96e47c-ac95-4a20-80ef-a7338010c0fd;
 Wed, 25 Nov 2020 08:50:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4B11EAE49;
 Wed, 25 Nov 2020 08:50:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606294251; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LsItoclm9UZflz8LqerEuyT3QzAOSbWBnrz1h3p2SlE=;
	b=S4y3k5bSM5ZUK6lymocUpyjo8fcTdX3tlaTaueT5xC6+8O5uTW8pVa8M/97/4CsK2VbEpu
	Jf2wBSMyUUCmifV5EN0p/UsFofwAD9h/p67R8mEvevmDhRNfUtOnvPZsn2Bq8MvHn8/91U
	s4tc0+7P0IFHInHK3FAz3JshwQVXB0M=
Subject: [PATCH 4/5] x86: hypercall vector is unused when !PV32
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Message-ID: <6505bcc4-0cb3-42de-9fd5-50da133d6d99@suse.com>
Date: Wed, 25 Nov 2020 09:50:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In this case the vector can be used for ordinary interrupt handling
instead. To make sure no references are left, make the #define itself
conditional.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -436,8 +436,12 @@ int __init init_irq_data(void)
         irq_to_desc(irq)->irq = irq;
 
 #ifdef CONFIG_PV
-    /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
+    /* Never allocate the Linux/BSD fast-trap vector. */
     set_bit(LEGACY_SYSCALL_VECTOR, used_vectors);
+#endif
+
+#ifdef CONFIG_PV32
+    /* Never allocate the hypercall vector. */
     set_bit(HYPERCALL_VECTOR, used_vectors);
 #endif
     
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -30,6 +30,7 @@
 #include <asm/traps.h>
 #include <irq_vectors.h>
 
+#ifdef CONFIG_PV32
 void do_entry_int82(struct cpu_user_regs *regs)
 {
     if ( unlikely(untrusted_msi) )
@@ -37,6 +38,7 @@ void do_entry_int82(struct cpu_user_regs
 
     pv_hypercall(regs);
 }
+#endif
 
 void pv_inject_event(const struct x86_event *event)
 {
@@ -155,9 +157,11 @@ static void nmi_softirq(void)
 
 void __init pv_trap_init(void)
 {
+#ifdef CONFIG_PV32
     /* The 32-on-64 hypercall vector is only accessible from ring 1. */
     _set_gate(idt_table + HYPERCALL_VECTOR,
               SYS_DESC_trap_gate, 1, entry_int82);
+#endif
 
     /* Fast trap for int80 (faster than taking the #GP-fixup path). */
     _set_gate(idt_table + LEGACY_SYSCALL_VECTOR, SYS_DESC_trap_gate, 3,
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -11,6 +11,8 @@
 #include <public/xen.h>
 #include <irq_vectors.h>
 
+#ifdef CONFIG_PV32
+
 ENTRY(entry_int82)
         ASM_CLAC
         pushq $0
@@ -27,6 +29,8 @@ ENTRY(entry_int82)
         mov   %rsp, %rdi
         call  do_entry_int82
 
+#endif /* CONFIG_PV32 */
+
 /* %rbx: struct vcpu */
 ENTRY(compat_test_all_events)
         ASSERT_NOT_IN_ATOMIC
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -982,8 +982,10 @@ autogen_stubs: /* Automatically generate
         .rept X86_NR_VECTORS
 
         /* Common interrupts, heading towards do_IRQ(). */
-#ifdef CONFIG_PV
+#if defined(CONFIG_PV32)
         .if vec >= FIRST_IRQ_VECTOR && vec != HYPERCALL_VECTOR && vec != LEGACY_SYSCALL_VECTOR
+#elif defined(CONFIG_PV)
+        .if vec >= FIRST_IRQ_VECTOR && vec != LEGACY_SYSCALL_VECTOR
 #else
         .if vec >= FIRST_IRQ_VECTOR
 #endif
--- a/xen/include/asm-x86/mach-default/irq_vectors.h
+++ b/xen/include/asm-x86/mach-default/irq_vectors.h
@@ -22,7 +22,10 @@
 #define FIRST_LEGACY_VECTOR     FIRST_DYNAMIC_VECTOR
 #define LAST_LEGACY_VECTOR      (FIRST_LEGACY_VECTOR + 0xf)
 
-#define HYPERCALL_VECTOR	0x82
+#ifdef CONFIG_PV32
+#define HYPERCALL_VECTOR        0x82
+#endif
+
 #define LEGACY_SYSCALL_VECTOR   0x80
 
 /*



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 08:51:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 08:51:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37466.69843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqWV-0004JR-Sg; Wed, 25 Nov 2020 08:51:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37466.69843; Wed, 25 Nov 2020 08:51:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqWV-0004JJ-Pp; Wed, 25 Nov 2020 08:51:35 +0000
Received: by outflank-mailman (input) for mailman id 37466;
 Wed, 25 Nov 2020 08:51:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqWU-0004JA-5y
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 08:51:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbe23800-881e-4818-ac3a-24c1c7c2a8be;
 Wed, 25 Nov 2020 08:51:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5B8DAAC6A;
 Wed, 25 Nov 2020 08:51:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606294292; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sJH6yk2rnrEcxrdt9KhOPaqgGQgdY6OdkaB7rOBiSSs=;
	b=DHlGTQ4e+KCeeg+0p6SKHFgjOd9flxUR8IVMDAl4vHVVyrOK2F4m0V3RSqt3t+usAAaP7U
	oAcn6vTJUgABQbiVJz/MlxBvysMlS05H+UZUKvAmKbdQZ7nVsceDyrz568qr2USVBaEqbh
	Ys/WqgMSYzHQXSty/dN51tfhw3FIUHg=
Subject: [PATCH 5/5] x86: don't build unused entry code when !PV32
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Message-ID: <d417d3f9-3278-ed08-1ff6-45a13b5e3757@suse.com>
Date: Wed, 25 Nov 2020 09:51:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <46d83c92-0b06-fc09-4832-7a7d7935d5c2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Except for the initial part of cstar_enter, compat/entry.S is all dead
code in this case. Further, along the lines of the PV conditionals we
already have in entry.S, make code PV32-conditional there too (in fair
part because this code actually references compat/entry.S).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: I'm on the fence about whether (in a separate patch) to also make
     struct pv_domain's is_32bit field conditional.

--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -9,7 +9,7 @@
 #include <xen/perfc.h>
 #endif
 #include <xen/sched.h>
-#ifdef CONFIG_PV
+#ifdef CONFIG_PV32
 #include <compat/xen.h>
 #endif
 #include <asm/hardirq.h>
@@ -102,19 +102,21 @@ void __dummy__(void)
     BLANK();
 #endif
 
-#ifdef CONFIG_PV
+#ifdef CONFIG_PV32
     OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
     BLANK();
 
-    OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
-    OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
-    BLANK();
-
     OFFSET(COMPAT_VCPUINFO_upcall_pending, struct compat_vcpu_info, evtchn_upcall_pending);
     OFFSET(COMPAT_VCPUINFO_upcall_mask, struct compat_vcpu_info, evtchn_upcall_mask);
     BLANK();
 #endif
 
+#ifdef CONFIG_PV
+    OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
+    OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
+    BLANK();
+#endif
+
     OFFSET(CPUINFO_guest_cpu_user_regs, struct cpu_info, guest_cpu_user_regs);
     OFFSET(CPUINFO_verw_sel, struct cpu_info, verw_sel);
     OFFSET(CPUINFO_current_vcpu, struct cpu_info, current_vcpu);
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -29,8 +29,6 @@ ENTRY(entry_int82)
         mov   %rsp, %rdi
         call  do_entry_int82
 
-#endif /* CONFIG_PV32 */
-
 /* %rbx: struct vcpu */
 ENTRY(compat_test_all_events)
         ASSERT_NOT_IN_ATOMIC
@@ -197,6 +195,8 @@ ENTRY(cr4_pv32_restore)
         xor   %eax, %eax
         ret
 
+#endif /* CONFIG_PV32 */
+
         .section .text.entry, "ax", @progbits
 
 /* See lstar_enter for entry register state. */
@@ -230,6 +230,13 @@ ENTRY(cstar_enter)
         sti
 
         movq  STACK_CPUINFO_FIELD(current_vcpu)(%rbx), %rbx
+
+#ifndef CONFIG_PV32
+
+        jmp   switch_to_kernel
+
+#else
+
         movq  VCPU_domain(%rbx),%rcx
         cmpb  $0,DOMAIN_is_32bit_pv(%rcx)
         je    switch_to_kernel
@@ -393,3 +400,5 @@ compat_crash_page_fault:
         jmp   .Lft14
 .previous
         _ASM_EXTABLE(.Lft14, .Lfx14)
+
+#endif /* CONFIG_PV32 */
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -328,8 +328,10 @@ UNLIKELY_END(sysenter_gpf)
         movq  VCPU_domain(%rbx),%rdi
         movq  %rax,TRAPBOUNCE_eip(%rdx)
         movb  %cl,TRAPBOUNCE_flags(%rdx)
+#ifdef CONFIG_PV32
         cmpb  $0, DOMAIN_is_32bit_pv(%rdi)
         jne   compat_sysenter
+#endif
         jmp   .Lbounce_exception
 
 ENTRY(int80_direct_trap)
@@ -370,6 +372,7 @@ UNLIKELY_END(msi_check)
         mov    0x80 * TRAPINFO_sizeof + TRAPINFO_eip(%rsi), %rdi
         movzwl 0x80 * TRAPINFO_sizeof + TRAPINFO_cs (%rsi), %ecx
 
+#ifdef CONFIG_PV32
         mov   %ecx, %edx
         and   $~3, %edx
 
@@ -378,6 +381,10 @@ UNLIKELY_END(msi_check)
 
         test  %rdx, %rdx
         jz    int80_slow_path
+#else
+        test  %rdi, %rdi
+        jz    int80_slow_path
+#endif
 
         /* Construct trap_bounce from trap_ctxt[0x80]. */
         lea   VCPU_trap_bounce(%rbx), %rdx
@@ -390,8 +397,10 @@ UNLIKELY_END(msi_check)
         lea   (, %rcx, TBF_INTERRUPT), %ecx
         mov   %cl, TRAPBOUNCE_flags(%rdx)
 
+#ifdef CONFIG_PV32
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
         jne   compat_int80_direct_trap
+#endif
 
         call  create_bounce_frame
         jmp   test_all_events
@@ -541,12 +550,16 @@ ENTRY(dom_crash_sync_extable)
         GET_STACK_END(ax)
         leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
         # create_bounce_frame() temporarily clobbers CS.RPL. Fix up.
+#ifdef CONFIG_PV32
         movq  STACK_CPUINFO_FIELD(current_vcpu)(%rax), %rax
         movq  VCPU_domain(%rax),%rax
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
         sete  %al
         leal  (%rax,%rax,2),%eax
         orb   %al,UREGS_cs(%rsp)
+#else
+        orb   $3, UREGS_cs(%rsp)
+#endif
         xorl  %edi,%edi
         jmp   asm_domain_crash_synchronous /* Does not return */
         .popsection
@@ -562,11 +575,15 @@ ENTRY(ret_from_intr)
         GET_CURRENT(bx)
         testb $3, UREGS_cs(%rsp)
         jz    restore_all_xen
+#ifdef CONFIG_PV32
         movq  VCPU_domain(%rbx), %rax
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
         je    test_all_events
         jmp   compat_test_all_events
 #else
+        jmp   test_all_events
+#endif
+#else
         ASSERT_CONTEXT_IS_XEN
         jmp   restore_all_xen
 #endif
@@ -652,7 +669,7 @@ handle_exception_saved:
         testb $X86_EFLAGS_IF>>8,UREGS_eflags+1(%rsp)
         jz    exception_with_ints_disabled
 
-#ifdef CONFIG_PV
+#if defined(CONFIG_PV32)
         ALTERNATIVE_2 "jmp .Lcr4_pv32_done", \
             __stringify(mov VCPU_domain(%rbx), %rax), X86_FEATURE_XEN_SMEP, \
             __stringify(mov VCPU_domain(%rbx), %rax), X86_FEATURE_XEN_SMAP
@@ -692,7 +709,7 @@ handle_exception_saved:
         test  $~(PFEC_write_access|PFEC_insn_fetch),%eax
         jz    compat_test_all_events
 .Lcr4_pv32_done:
-#else
+#elif !defined(CONFIG_PV)
         ASSERT_CONTEXT_IS_XEN
 #endif /* CONFIG_PV */
         sti
@@ -711,9 +728,11 @@ handle_exception_saved:
 #ifdef CONFIG_PV
         testb $3,UREGS_cs(%rsp)
         jz    restore_all_xen
+#ifdef CONFIG_PV32
         movq  VCPU_domain(%rbx),%rax
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
         jne   compat_test_all_events
+#endif
         jmp   test_all_events
 #else
         ASSERT_CONTEXT_IS_XEN
@@ -947,11 +966,16 @@ handle_ist_exception:
         je    1f
         movl  $EVENT_CHECK_VECTOR,%edi
         call  send_IPI_self
-1:      movq  VCPU_domain(%rbx),%rax
+1:
+#ifdef CONFIG_PV32
+        movq  VCPU_domain(%rbx),%rax
         cmpb  $0,DOMAIN_is_32bit_pv(%rax)
         je    restore_all_guest
         jmp   compat_restore_all_guest
 #else
+        jmp   restore_all_guest
+#endif
+#else
         ASSERT_CONTEXT_IS_XEN
         jmp   restore_all_xen
 #endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -333,7 +333,7 @@ static always_inline void stac(void)
         subq  $-(UREGS_error_code-UREGS_r15+\adj), %rsp
 .endm
 
-#ifdef CONFIG_PV
+#ifdef CONFIG_PV32
 #define CR4_PV32_RESTORE                               \
     ALTERNATIVE_2 "",                                  \
         "call cr4_pv32_restore", X86_FEATURE_XEN_SMEP, \



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 09:08:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 09:08:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37478.69856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqmQ-0005St-Bc; Wed, 25 Nov 2020 09:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37478.69856; Wed, 25 Nov 2020 09:08:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqmQ-0005Sm-7y; Wed, 25 Nov 2020 09:08:02 +0000
Received: by outflank-mailman (input) for mailman id 37478;
 Wed, 25 Nov 2020 09:01:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vrf3=E7=mess.org=sean@srs-us1.protection.inumbo.net>)
 id 1khqfv-0005KW-Qj
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 09:01:20 +0000
Received: from gofer.mess.org (unknown [2a02:8011:d000:212::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e337ed2d-8dc3-4a18-a8d7-c531ee69001c;
 Wed, 25 Nov 2020 09:01:17 +0000 (UTC)
Received: by gofer.mess.org (Postfix, from userid 1000)
 id C2D44C63FB; Wed, 25 Nov 2020 09:01:14 +0000 (GMT)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=mess.org; s=2020;
	t=1606294874; bh=KH9dzgywMXfGPGDwmO8Qm6o//zr8KQDL4nO6EawRuDY=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=SJYLrwiKZrmMRjkeBYo7cqsbs6xljPuQypU3vE6W0mhxhiBNmXN2bqL5RV07d2awA
	 Wb5Oa7L0TiIxfU/vxoLt2vFycI3gN8Kh1qdOF59uK2dEnqITAnDV+wsiQw/exDF78D
	 hoTz22IC76edW7bl4Xm8hYrqoRAlLOCNTSbizDTKI7x8BBnutJW03OyPsTxurVqfdC
	 T9t8y4uSMzXA9L5TYoAbkkzEdR07qHfBTYdhaiYGGYuE1E1bdzLhtRTU2iYu251NBa
	 zqIr2827TZMk9I1fNh/951tgkmCQWewUCt5nrmXnkgqHhLp9nxDE6gWAGCUXG6XCXv
	 3mw8Hv064saVQ==
Date: Wed, 25 Nov 2020 09:01:14 +0000
From: Sean Young <sean@mess.org>
To: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>,
	Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	alsa-devel@alsa-project.org, amd-gfx@lists.freedesktop.org,
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org,
	cluster-devel@redhat.com, coreteam@netfilter.org,
	devel@driverdev.osuosl.org, dm-devel@redhat.com,
	drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org,
	GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
	intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
	keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
	linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-arm-msm@vger.kernel.org,
	linux-atm-general@lists.sourceforge.net,
	linux-block@vger.kernel.org, linux-can@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>,
	linux-decnet-user@lists.sourceforge.net,
	Ext4 Developers List <linux-ext4@vger.kernel.org>,
	linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
	linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
	linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-input <linux-input@vger.kernel.org>,
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org,
	Linux Media Mailing List <linux-media@vger.kernel.org>,
	linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
	linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
	linux-watchdog@vger.kernel.org,
	linux-wireless <linux-wireless@vger.kernel.org>,
	Network Development <netdev@vger.kernel.org>,
	netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org,
	op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com,
	patches@opensource.cirrus.com, rds-devel@oss.oracle.com,
	reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org,
	selinux@vger.kernel.org, target-devel@vger.kernel.org,
	tipc-discussion@lists.sourceforge.net,
	usb-storage@lists.one-eyed-alien.net,
	virtualization@lists.linux-foundation.org,
	wcn36xx@lists.infradead.org,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
	xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
	Nick Desaulniers <ndesaulniers@google.com>,
	Nathan Chancellor <natechancellor@gmail.com>,
	Miguel Ojeda <ojeda@kernel.org>, Joe Perches <joe@perches.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
Message-ID: <20201125090114.GA24274@gofer.mess.org>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon, Nov 23, 2020 at 07:58:06AM -0800, James Bottomley wrote:
> On Mon, 2020-11-23 at 15:19 +0100, Miguel Ojeda wrote:
> > On Sun, Nov 22, 2020 at 11:36 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> > > It's not about the risk of the changes, it's about the cost of
> > > implementing them.  Even if you discount the producer time (which
> > > someone gets to pay for, and if I were the engineering manager, I'd
> > > be unhappy about), the review/merge/rework time is pretty
> > > significant in exchange for six minor bug fixes.  Fine, when a new
> > > compiler warning comes along it's certainly reasonable to see if we
> > > can benefit from it and the fact that the compiler people think
> > > it's worthwhile is enough evidence to assume this initially.  But
> > > at some point you have to ask whether that assumption is supported
> > > by the evidence we've accumulated over the time we've been using
> > > it.  And if the evidence doesn't support it perhaps it is time to
> > > stop the experiment.
> > 
> > Maintainers routinely review 1-line trivial patches, not to mention
> > internal API changes, etc.
> 
> We're also complaining about the inability to recruit maintainers:
> 
> https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
> 
> And burn out:
> 
> http://antirez.com/news/129
> 
> The whole crux of your argument seems to be maintainers' time isn't
> important so we should accept all trivial patches ... I'm pushing back
> on that assumption in two places, firstly the valuelessness of the time
> and secondly that all trivial patches are valuable.

You're assuming burnout or recruitment problems are due to patch workload
or too many "trivial" patches.

In my experience, "other maintainers" are by far the biggest cause of
burnout in my kernel maintenance work.

Certainly arguing with a maintainer about some obviously-correct patch
series must be a good example of this.


Sean


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 09:20:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 09:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37488.69872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqyE-0007B2-HA; Wed, 25 Nov 2020 09:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37488.69872; Wed, 25 Nov 2020 09:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khqyE-0007Av-DQ; Wed, 25 Nov 2020 09:20:14 +0000
Received: by outflank-mailman (input) for mailman id 37488;
 Wed, 25 Nov 2020 09:20:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khqyC-0007Aq-TI
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 09:20:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68ccaf5a-02b7-4f78-98a4-8f75bac2bbb3;
 Wed, 25 Nov 2020 09:20:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF8D8AC65;
 Wed, 25 Nov 2020 09:20:10 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khqyC-0007Aq-TI
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 09:20:12 +0000
X-Inumbo-ID: 68ccaf5a-02b7-4f78-98a4-8f75bac2bbb3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 68ccaf5a-02b7-4f78-98a4-8f75bac2bbb3;
	Wed, 25 Nov 2020 09:20:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606296011; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WKtefuZNEFSpvl4zth8rv/GE2eTSZH3Uh3HETRcwyKo=;
	b=u0QweprMcC8ZIbp4NCB6GGEOQwZZEPsogUGRVgdGvZWZprFhJ4cX3jdjVj5ZlGZuBNjyrX
	nfEnpge+QTKK1vHbTmPkRsuqiDKOpF8uWPpUY/xUAOFoxw9JhI7DoO/ye92xp/N2c59c5m
	Jn4yOJRMX1/0wMwFhNzTAvH85OGbPo8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EF8D8AC65;
	Wed, 25 Nov 2020 09:20:10 +0000 (UTC)
Subject: Re: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Eslam Elnikety <elnikety@amazon.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
Date: Wed, 25 Nov 2020 10:20:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201124191751.11472-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.11.2020 20:17, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ...to control the visibility of the FIFO event channel operations
> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> the guest.
> 
> These operations were added to the public header in commit d2d50c2f308f
> ("evtchn: add FIFO-based event channel ABI") and the first implementation
> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> that, a guest issuing those operations would receive a return value of
> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> running on an older (pre-4.4) Xen would fall back to using the 2-level event
> channel interface upon seeing this return value.
> 
> Unfortunately, the uncontrollable appearance of these new operations in Xen 4.4
> onwards has implications for hibernation of some Linux guests. During resume
> from hibernation, there are two kernels involved: the "boot" kernel and the
> "resume" kernel. The guest boot kernel may default to use FIFO operations and
> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> on a version of Xen that did not support the FIFO operations.

And the alternative of the boot kernel issuing EVTCHNOP_reset has
other unwanted consequences. Maybe worth mentioning here, as
otherwise this would look like the obvious way to return to 2-level
mode?

Also, why can't the boot kernel be instructed to avoid engaging
FIFO mode?
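[Editorial note: as an illustration of the guest-side fallback described in the patch text above (not code from the thread), a hypothetical sketch of the probe — function and enum names are invented:]

```c
#include <errno.h>

/* Hypothetical stand-in for the EVTCHNOP_init_control hypercall: a Xen
 * without FIFO support (pre-4.4, or with the proposed flag set) returns
 * -ENOSYS ("not implemented"). */
static int evtchn_init_control(int xen_has_fifo)
{
    return xen_has_fifo ? 0 : -ENOSYS;
}

enum evtchn_abi { EVTCHN_ABI_2LEVEL, EVTCHN_ABI_FIFO };

/* Guest-side probe: attempt FIFO first, and fall back to the 2-level
 * ABI when the hypercall reports -ENOSYS, as pre-4.4-aware guests did. */
static enum evtchn_abi select_evtchn_abi(int xen_has_fifo)
{
    if (evtchn_init_control(xen_has_fifo) == -ENOSYS)
        return EVTCHN_ABI_2LEVEL;
    return EVTCHN_ABI_FIFO;
}
```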

> To maintain compatibility it is necessary to make Xen behave as it did
> before the new operations were added and hence the code in this patch ensures
> that, if XEN_DOMCTL_CDF_disable_fifo is set, the FIFO event channel operations
> will again result in -ENOSYS being returned to the guest.

Are there indeed dependencies on the precise return value anywhere?
If so, the generally inappropriate use (do_event_channel_op()'s
default case really would also need switching) would want a brief
comment, so it'll be understood by readers that this isn't code to
derive other code from. If not, -EPERM or -EACCES perhaps?
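[Editorial note: a minimal sketch of the gating behaviour under discussion — the flag value is copied from the quoted diff, but the function is a hypothetical stand-in, not the patch's actual handler:]

```c
#include <errno.h>

/* Flag value copied from the quoted domctl.h diff. */
#define XEN_DOMCTL_CDF_disable_fifo (1U << 7)

/* Hypothetical gate at the top of a FIFO evtchn op handler: if the
 * domain was created with the flag set, report "not implemented", as a
 * pre-4.4 Xen would have. Real handling is elided. */
static int evtchn_fifo_op(unsigned int domain_create_flags)
{
    if (domain_create_flags & XEN_DOMCTL_CDF_disable_fifo)
        return -ENOSYS;
    return 0;
}
```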

Also, now that we gain a runtime control, do we perhaps also want a
build-time one?

> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Signed-off-by: Eslam Elnikety <elnikety@amazon.com>

Is this order, as well as the From: tag above, correct? Or
alternatively, are there actually any pieces left at all from
Eslam's earlier patch?

> v4:
>  - New in v4

(Just as an aside: That's quite interesting for a previously
standalone patch. I guess that patch was really split, considering
you've retained Eslam's S-o-b? But perhaps there are different ways
to look at things ...)

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
>  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
>  #define _XEN_DOMCTL_CDF_nested_virt   6
>  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
> +#define _XEN_DOMCTL_CDF_disable_fifo  7
> +#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)

Despite getting longish, I think this needs "evtchn" somewhere in
the name. To keep size bounded, maybe XEN_DOMCTL_CDF_no_fifo_evtchn?

>  /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
> -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
> +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_disable_fifo

While not directly related to this patch, I'm puzzled by the
presence of this constant: I've not been able to find any use of
it. In particular, you still needed to modify
sanitise_domain_config().
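[Editorial note: one conceivable use of such a max constant — this is an illustration only, not code from Xen; the flag values are copied from the quoted diff, the validation function is invented:]

```c
/* Values copied from the quoted header diff. */
#define XEN_DOMCTL_CDF_disable_fifo (1U << 7)
#define XEN_DOMCTL_CDF_MAX          XEN_DOMCTL_CDF_disable_fifo

/* A bounds check rejecting any domain-create flag bit above the highest
 * one this build knows about: build the mask of all known bits from the
 * max constant, then require that no other bit is set. */
static int cdf_flags_valid(unsigned int flags)
{
    return !(flags & ~(XEN_DOMCTL_CDF_MAX | (XEN_DOMCTL_CDF_MAX - 1)));
}
```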

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 09:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 09:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37497.69884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khrDi-0008FX-Vr; Wed, 25 Nov 2020 09:36:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37497.69884; Wed, 25 Nov 2020 09:36:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khrDi-0008FQ-Sv; Wed, 25 Nov 2020 09:36:14 +0000
Received: by outflank-mailman (input) for mailman id 37497;
 Wed, 25 Nov 2020 09:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khrDh-0008FL-4c
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 09:36:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0d80d88-2568-4d52-8d74-6cb5bec0bd7d;
 Wed, 25 Nov 2020 09:36:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F054AC6A;
 Wed, 25 Nov 2020 09:36:11 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1khrDh-0008FL-4c
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 09:36:13 +0000
X-Inumbo-ID: c0d80d88-2568-4d52-8d74-6cb5bec0bd7d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c0d80d88-2568-4d52-8d74-6cb5bec0bd7d;
	Wed, 25 Nov 2020 09:36:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606296971; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0AxH2Dn0DtiSCSIRsyowpVLW9AeNSPjSK9xgXtYk0yM=;
	b=VkxU9j4BcdI4i7O6Fqe8Ztq6cCTzfG+02hKgVIRle5wUJXth61gZZY6gTWaHyqKuFamI87
	Vl/DifLxPPWKzx4SwLAwhXwwyP02u9CbwZJ4CayzJSTgCk396p+e8C5+P5SVl5yI6/Z0ay
	MNovyDIilYG9VYxLXp05hSFUH5k9aZg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2F054AC6A;
	Wed, 25 Nov 2020 09:36:11 +0000 (UTC)
Subject: Re: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <pdurrant@amazon.com>, Eslam Elnikety <elnikety@amazon.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Paul Durrant <paul@xen.org>
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
 <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
Message-ID: <19792219-22f0-e7ce-23ca-5c1b20d6f581@suse.com>
Date: Wed, 25 Nov 2020 10:36:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 10:20, Jan Beulich wrote:
> On 24.11.2020 20:17, Paul Durrant wrote:
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
>>  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
>>  #define _XEN_DOMCTL_CDF_nested_virt   6
>>  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
>> +#define _XEN_DOMCTL_CDF_disable_fifo  7
>> +#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)
> 
> Despite getting longish, I think this needs "evtchn" somewhere in
> the name. To keep size bounded, maybe XEN_DOMCTL_CDF_no_fifo_evtchn?
> 
>>  /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
>> -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
>> +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_disable_fifo
> 
> While not directly related to this patch, I'm puzzled by the
> presence of this constant: I've not been able to find any use of
> it. In particular you did have a need to modify
> sanitise_domain_config().

So it was you who introduced this, right away without any user, in
7fb0e134f8c6 ("tools/ocaml: abi: Use formal conversion and check
in more places"). The only reference is from what I regard as a
comment (I don't speak any OCaml, so I may be wrong). Could you
clarify why we need to maintain this constant?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 09:46:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 09:46:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37505.69896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khrNy-0000nd-W7; Wed, 25 Nov 2020 09:46:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37505.69896; Wed, 25 Nov 2020 09:46:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khrNy-0000nW-SI; Wed, 25 Nov 2020 09:46:50 +0000
Received: by outflank-mailman (input) for mailman id 37505;
 Wed, 25 Nov 2020 09:46:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khrNw-0000nO-RZ; Wed, 25 Nov 2020 09:46:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khrNw-00038t-Jo; Wed, 25 Nov 2020 09:46:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khrNw-0007j4-Ah; Wed, 25 Nov 2020 09:46:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khrNw-0004LN-AB; Wed, 25 Nov 2020 09:46:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khrNw-0000nO-RZ; Wed, 25 Nov 2020 09:46:48 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AS+FsnKTQjnED6gZNQm4VBL7XsD0CE2YaV0RlnT1PZs=; b=Aa+wlYhRGH/o7LfjTAj40D+5qs
	RW+oqxU1PgnEJVlTvvTIAmypp3H7VGG3ZrKGFZyUsOUcy7T9futv55WVPqQVVP7/GLIz8iWE8Tqql
	YJaqd+Ok0qG2wdrDAFakbS0tVqCFOFVvm+xyeNnjt1fs8S6GIAJ+Q5OfKhXk+2Ld5zss=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khrNw-00038t-Jo; Wed, 25 Nov 2020 09:46:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khrNw-0007j4-Ah; Wed, 25 Nov 2020 09:46:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1khrNw-0004LN-AB; Wed, 25 Nov 2020 09:46:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157003-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157003: all pass - PUSHED
X-Osstest-Versions-This:
    xen=9b156bcc3ffcc7949edd4460b718a241e87ae302
X-Osstest-Versions-That:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 09:46:48 +0000

flight 157003 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157003/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  9b156bcc3ffcc7949edd4460b718a241e87ae302
baseline version:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad

Last test of basis   156941  2020-11-22 09:18:30 Z    3 days
Testing same since   157003  2020-11-25 09:18:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <JBeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b659a5cebd..9b156bcc3f  9b156bcc3ffcc7949edd4460b718a241e87ae302 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 10:06:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 10:06:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37515.69910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khrh7-0002hw-Gj; Wed, 25 Nov 2020 10:06:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37515.69910; Wed, 25 Nov 2020 10:06:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khrh7-0002hp-Do; Wed, 25 Nov 2020 10:06:37 +0000
Received: by outflank-mailman (input) for mailman id 37515;
 Wed, 25 Nov 2020 10:02:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m3fp=E7=mail.fr=psarpol@srs-us1.protection.inumbo.net>)
 id 1khrd1-0002co-72
 for Xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:02:23 +0000
Received: from shout12.mail.de (unknown [2001:868:100:600::f154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c859bdd-2892-4abb-9665-c7e89658dd50;
 Wed, 25 Nov 2020 10:02:21 +0000 (UTC)
Received: from shout01.mail.de (unknown [10.0.120.221])
 by shout12.mail.de (Postfix) with ESMTPS id DBC041A003E
 for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
Received: from postfix01.mail.de (postfix02.bt.mail.de [10.0.121.126])
 by shout01.mail.de (Postfix) with ESMTP id CDB27100530
 for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
Received: from smtp03.mail.de (smtp03.bt.mail.de [10.0.121.213])
 by postfix01.mail.de (Postfix) with ESMTP id B69ECA008B
 for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
Received: from [127.0.0.1] (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (No client certificate requested)
 by smtp03.mail.de (Postfix) with ESMTPSA id 84AEAA1DFD
 for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=m3fp=E7=mail.fr=psarpol@srs-us1.protection.inumbo.net>)
	id 1khrd1-0002co-72
	for Xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:02:23 +0000
X-Inumbo-ID: 2c859bdd-2892-4abb-9665-c7e89658dd50
Received: from shout12.mail.de (unknown [2001:868:100:600::f154])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2c859bdd-2892-4abb-9665-c7e89658dd50;
	Wed, 25 Nov 2020 10:02:21 +0000 (UTC)
Received: from shout01.mail.de (unknown [10.0.120.221])
	by shout12.mail.de (Postfix) with ESMTPS id DBC041A003E
	for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
Received: from postfix01.mail.de (postfix02.bt.mail.de [10.0.121.126])
	by shout01.mail.de (Postfix) with ESMTP id CDB27100530
	for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
Received: from smtp03.mail.de (smtp03.bt.mail.de [10.0.121.213])
	by postfix01.mail.de (Postfix) with ESMTP id B69ECA008B
	for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=mail.fr;
	s=mailfr201610; t=1606298539;
	bh=XgxQGWScoI4a9fvBd3mlleQdIBJNzO4VjiBRlOBWePo=;
	h=From:To:Subject:Date:From;
	b=DOf137Ih3JsBp1ffmTk+wtqkvLhF6EnB85wq6QOfRWq1VjvBYn4bjDShzaZP1HAoo
	 lbT6VdQiQcp6iTFktVT46or0JZj2R4oZPZ77FtKFnT54I0JEbfp41IzFMp1ZxyTMC/
	 YslG/A+90x+jioFReXlrNUniJkqwY3RNAyrVIuH8=
Received: from [127.0.0.1] (localhost [127.0.0.1])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp03.mail.de (Postfix) with ESMTPSA id 84AEAA1DFD
	for <Xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:02:19 +0100 (CET)
From: psarpol@mail.fr
To: Xen-devel@lists.xenproject.org
Subject: Vtpm in Windows 10
X-Priority: 3
Date: Wed, 25 Nov 2020 11:02:19 +0100
Content-Type: multipart/alternative;
 boundary="=_1294934ae3dee858928fc76202b91ce9"
MIME-Version: 1.0
Message-Id: <20201125100219.84AEAA1DFD@smtp03.mail.de>
X-purgate: clean
X-purgate: This mail is considered clean (visit http://www.eleven.de for further information)
X-purgate-type: clean
X-purgate-Ad: Categorized by eleven eXpurgate (R) http://www.eleven.de
X-purgate: This mail is considered clean (visit http://www.eleven.de for further information)
X-purgate: clean
X-purgate-size: 856
X-purgate-ID: 154282::1606298539-00000E46-177746DD/0/0

--=_1294934ae3dee858928fc76202b91ce9
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hi,

Is there a way to mount a vTPM 2.0 in a Windows 10 VM?

Thanks,
Paul

--=_1294934ae3dee858928fc76202b91ce9--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 10:37:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 10:37:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37527.69928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsB2-0005Ov-2n; Wed, 25 Nov 2020 10:37:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37527.69928; Wed, 25 Nov 2020 10:37:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsB1-0005Oo-WA; Wed, 25 Nov 2020 10:37:31 +0000
Received: by outflank-mailman (input) for mailman id 37527;
 Wed, 25 Nov 2020 10:37:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fqhz=E7=gmail.com=andy.shevchenko@srs-us1.protection.inumbo.net>)
 id 1khsB0-0005Oj-LC
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:37:30 +0000
Received: from mail-pl1-x642.google.com (unknown [2607:f8b0:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c458a78b-3e5a-4c47-81b9-20085582c7f5;
 Wed, 25 Nov 2020 10:37:29 +0000 (UTC)
Received: by mail-pl1-x642.google.com with SMTP id bj5so914846plb.4
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 02:37:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=lG3+Uwz8lZti6FSb8xjGyaQfdt5OwV03xvf6+L3ZqWc=;
        b=YCtEdyPA4vCzWZsMjIt3djSgR9gg7vMRhn3I5LT7lEmWHdT3b/jEkR0QdduouORJ7k
         kiuUl5RijkS3EJmc3PIdIbhuWrYEtLAccN+wdoprBDsU56ruoUGszfH1Sxvuy4WXIhMK
         0dDt7R//JtRYOU1+gQ96Rpa2FinP3O1pFccTMutbPTGjvqTac0chojMQO8cZdySzLIim
         HHsTKo91pUaUTwyxPnWizwDASocTC+n+eyDdN/HKPn9pe4V2vLDA/DOFrCcWbsrVXkvE
         5CsjPeXzCB0gF8EvrEVdW+qQjVnUcViOyjSAD57xy0gbwLs+FCHLejwxIxOdH/xJHiTG
         +NRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=lG3+Uwz8lZti6FSb8xjGyaQfdt5OwV03xvf6+L3ZqWc=;
        b=b0r7y1/0+LDs9OGnXxmtXu8MGgKwS9sZESqtyut8RIlAN4qB/0bwSruDKgviHYtDpS
         WIBKMFZGGctYBfPJv0sIQA4Drvt23oLHutSTo78/l+1vKyPFaFOP+4O74hphwYksyGwB
         HPY1hFlIuNwEGgY4rLWnwRYLJHf0O3K5V3mONy2v5VnRq2T5l8gC92zjXRVD6TZfBI8W
         Td0FFsmGpG3HEHQfZguUcXtDKKa9/CMYaAlwDrAO+kzkcs4bxYJCGHq0he0egW5E1ub9
         C2+Aypma95RCebYLBdRhYrGq+ZFjB/CV6ZWq2EwIoAqgFjfRbXdulQSxFy5Ov1ckTNyS
         Inaw==
X-Gm-Message-State: AOAM532fK9zp40dPTZRt79gSEFOMi8PU8lIyYEvP5jCFAOqjBynz8xxs
	6hUBpKGxQwzpBBVuAr/SSat6hpoCj3d8D4a0wWM=
X-Google-Smtp-Source: ABdhPJzEY8ebPN4xZ4jf0ZFVw9i65L6qlCom+E751HZA34/qY3SkadUuuLf2HukIG4qONDPWI5feIsM1VQGLuSbFYSc=
X-Received: by 2002:a17:902:ead2:b029:da:2596:198e with SMTP id
 p18-20020a170902ead2b02900da2596198emr1937529pld.21.1606300648824; Wed, 25
 Nov 2020 02:37:28 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com> <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
In-Reply-To: <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
From: Andy Shevchenko <andy.shevchenko@gmail.com>
Date: Wed, 25 Nov 2020 12:38:17 +0200
Message-ID: <CAHp75VfaewwkLsrht95Q7DaxFk7JpQjwx0KQ7Jvh5f7DUbZkRA@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>, Kees Cook <keescook@chromium.org>, 
	Jakub Kicinski <kuba@kernel.org>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	linux-kernel <linux-kernel@vger.kernel.org>, 
	ALSA Development Mailing List <alsa-devel@alsa-project.org>, amd-gfx@lists.freedesktop.org, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, 
	"open list:STAGING SUBSYSTEM" <devel@driverdev.osuosl.org>, 
	device-mapper development <dm-devel@redhat.com>, drbd-dev@lists.linbit.com, 
	dri-devel <dri-devel@lists.freedesktop.org>, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx <intel-gfx@lists.freedesktop.org>, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, 
	ACPI Devel Mailing List <linux-acpi@vger.kernel.org>, linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-arm-msm@vger.kernel.org, 
	linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org, 
	linux-can@vger.kernel.org, linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	"open list:FRAMEBUFFER LAYER" <linux-fbdev@vger.kernel.org>, linux-geode@lists.infradead.org, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, linux-hams@vger.kernel.org, 
	linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org, 
	linux-ide@vger.kernel.org, linux-iio <linux-iio@vger.kernel.org>, 
	linux-input <linux-input@vger.kernel.org>, 
	linux-integrity <linux-integrity@vger.kernel.org>, 
	"moderated list:ARM/Mediatek SoC support" <linux-mediatek@lists.infradead.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc <linux-mmc@vger.kernel.org>, 
	Linux-MM <linux-mm@kvack.org>, 
	"open list:MEMORY TECHNOLOGY..." <linux-mtd@lists.infradead.org>, linux-nfs@vger.kernel.org, 
	"open list:HFI1 DRIVER" <linux-rdma@vger.kernel.org>, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-scsi <linux-scsi@vger.kernel.org>, 
	linux-sctp@vger.kernel.org, 
	linux-security-module <linux-security-module@vger.kernel.org>, 
	linux-stm32@st-md-mailman.stormreply.com, USB <linux-usb@vger.kernel.org>, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel <target-devel@vger.kernel.org>, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"

On Mon, Nov 23, 2020 at 10:39 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Mon, 2020-11-23 at 19:56 +0100, Miguel Ojeda wrote:
> > On Mon, Nov 23, 2020 at 4:58 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:

...

> > But if we do the math, for an author, at even 1 minute per line
> > change and assuming nothing can be automated at all, it would take 1
> > month of work. For maintainers, a couple of trivial lines is noise
> > compared to many other patches.
>
> So you think a one line patch should take one minute to produce ... I
> really don't think that's grounded in reality.  I suppose a one line
> patch only takes a minute to merge with b4 if no-one reviews or tests
> it, but that's not really desirable.

In my experience, most one-line patches either fixed or introduced
quite interesting issues. One minute is 2-3 orders of magnitude less
than what is usually needed for such patches. That's why I don't like
the churn produced by people who often haven't even compiled their
useful contributions.

-- 
With Best Regards,
Andy Shevchenko


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 10:51:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 10:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37535.69941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOV-00078h-Bo; Wed, 25 Nov 2020 10:51:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37535.69941; Wed, 25 Nov 2020 10:51:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOV-00078a-8h; Wed, 25 Nov 2020 10:51:27 +0000
Received: by outflank-mailman (input) for mailman id 37535;
 Wed, 25 Nov 2020 10:51:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khsOT-00078P-RB
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:51:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0601c2df-d943-428a-8bfb-055d745599b0;
 Wed, 25 Nov 2020 10:51:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41431AC65;
 Wed, 25 Nov 2020 10:51:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606301484; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=Tzn7L8iu9XOgwrfAALgNnm2P5rnHXcOHz63aLo4MSgM=;
	b=bEqOmJFOPO7C6jy8lxPlZJeRHkYBLvjzMPvSQx8gVDtzkNxLdXWwkzRP+jWVR5DxKb8P3z
	KKDOblV2Tr+DIGf6DLukHT7IS39MGVyTTzYY8hsyIi1nyuzmOi3Nzxrk60+YSQ01kofciZ
	oVejyIAQ52M8sXn3N3Fn/0Gm7z7ZCuY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v8 0/3] xen/events: further locking adjustments
Date: Wed, 25 Nov 2020 11:51:19 +0100
Message-Id: <20201125105122.3650-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is an add-on to my event channel locking series.

Juergen Gross (3):
  xen/events: modify struct evtchn layout
  xen/events: rework fifo queue locking
  xen/events: do some cleanups in evtchn_fifo_set_pending()

 xen/common/event_fifo.c | 152 +++++++++++++++++++++-------------------
 xen/include/xen/sched.h |  34 ++++-----
 2 files changed, 98 insertions(+), 88 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 10:51:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 10:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37536.69953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOW-00079g-Mz; Wed, 25 Nov 2020 10:51:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37536.69953; Wed, 25 Nov 2020 10:51:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOW-00079Z-Hu; Wed, 25 Nov 2020 10:51:28 +0000
Received: by outflank-mailman (input) for mailman id 37536;
 Wed, 25 Nov 2020 10:51:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khsOV-00078k-Gm
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:51:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2b96b92-b327-40c4-b27a-902de765b199;
 Wed, 25 Nov 2020 10:51:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B39CDADCD;
 Wed, 25 Nov 2020 10:51:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606301484; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Tg5vf3t7YsP9mKYoN8C2ngq6yNlhQuq9n9e2uzxQXtU=;
	b=mratzSmJHcroCNNvBvMePISXhf44NkOCthoU477BA+ypVbIhdWtxakdvnPAK9gfx7de619
	B+mv4WoerIQ/W134zLIIqPqxrc/B5C5ZSEGvu/f8OWpalwSd9/sU4NLRssvxcZ/pW8ViHZ
	LRRK6f66CRCYrLvTnxaQrJSNA3KCLOw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v8 2/3] xen/events: rework fifo queue locking
Date: Wed, 25 Nov 2020 11:51:21 +0100
Message-Id: <20201125105122.3650-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201125105122.3650-1-jgross@suse.com>
References: <20201125105122.3650-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Two cpus entering evtchn_fifo_set_pending() for the same event channel
can race: if the first one is interrupted after setting
EVTCHN_FIFO_PENDING, the other one can manage to set EVTCHN_FIFO_LINKED
before the first one tests that bit. This can lead to
evtchn_check_pollers() being called before the event has been put
properly into the queue, eventually resulting in the guest not seeing
the event pending and thus blocking forever afterwards.

Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
lock") made the race just more obvious, while the fifo event channel
implementation had this race from the beginning when an unmask operation
was running in parallel with an event channel send operation.

Using a spinlock for the per event channel lock is problematic, because
some of the paths needing to take the lock are called with interrupts
off, so the lock would need to disable interrupts, which in turn breaks
some use cases related to vm events.

To avoid this race, the queue locking in evtchn_fifo_set_pending()
needs to be reworked to cover the testing of EVTCHN_FIFO_PENDING,
EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED, too. Additionally, when an
event channel needs to change queues, both queues need to be locked
initially.

Reported-by: Jan Beulich <jbeulich@suse.com>
Fixes: 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock")
Fixes: de6acb78bf0e137c ("evtchn: use a per-event channel lock for sending events")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V7:
- new patch

V8:
- update commit message (Jan Beulich)
- update double locking (Jan Beulich)
- add comment (Jan Beulich)
- fix handling when not getting lock (Jan Beulich)
- add const (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 128 ++++++++++++++++++++++------------------
 1 file changed, 70 insertions(+), 58 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index f39e61105f..443593c3b3 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -87,38 +87,6 @@ static void evtchn_fifo_init(struct domain *d, struct evtchn *evtchn)
                  d->domain_id, evtchn->port);
 }
 
-static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
-                                                struct evtchn *evtchn,
-                                                unsigned long *flags)
-{
-    struct vcpu *v;
-    struct evtchn_fifo_queue *q, *old_q;
-    unsigned int try;
-    union evtchn_fifo_lastq lastq;
-
-    for ( try = 0; try < 3; try++ )
-    {
-        lastq.raw = read_atomic(&evtchn->fifo_lastq);
-        v = d->vcpu[lastq.last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
-
-        spin_lock_irqsave(&old_q->lock, *flags);
-
-        v = d->vcpu[lastq.last_vcpu_id];
-        q = &v->evtchn_fifo->queue[lastq.last_priority];
-
-        if ( old_q == q )
-            return old_q;
-
-        spin_unlock_irqrestore(&old_q->lock, *flags);
-    }
-
-    gprintk(XENLOG_WARNING,
-            "dom%d port %d lost event (too many queue changes)\n",
-            d->domain_id, evtchn->port);
-    return NULL;
-}          
-
 static int try_set_link(event_word_t *word, event_word_t *w, uint32_t link)
 {
     event_word_t new, old;
@@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
     event_word_t *word;
     unsigned long flags;
     bool_t was_pending;
+    struct evtchn_fifo_queue *q, *old_q;
+    unsigned int try;
+    bool linked = true;
 
     port = evtchn->port;
     word = evtchn_fifo_word_from_port(d, port);
@@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         return;
     }
 
+    /*
+     * Lock all queues related to the event channel (in case of a queue change
+     * this might be two).
+     * It is mandatory to do that before setting and testing the PENDING bit
+     * and to hold the current queue lock until the event has been put into the
+     * list of pending events in order to avoid waking up a guest without the
+     * event being visibly pending in the guest.
+     */
+    for ( try = 0; try < 4; try++ )
+    {
+        union evtchn_fifo_lastq lastq;
+        const struct vcpu *old_v;
+
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        old_v = d->vcpu[lastq.last_vcpu_id];
+
+        q = &v->evtchn_fifo->queue[evtchn->priority];
+        old_q = &old_v->evtchn_fifo->queue[lastq.last_priority];
+
+        if ( q == old_q )
+            spin_lock_irqsave(&q->lock, flags);
+        else if ( q < old_q )
+        {
+            spin_lock_irqsave(&q->lock, flags);
+            spin_lock(&old_q->lock);
+        }
+        else
+        {
+            spin_lock_irqsave(&old_q->lock, flags);
+            spin_lock(&q->lock);
+        }
+
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        old_v = d->vcpu[lastq.last_vcpu_id];
+        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
+             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
+            break;
+
+        if ( q != old_q )
+            spin_unlock(&old_q->lock);
+        spin_unlock_irqrestore(&q->lock, flags);
+    }
+
     was_pending = guest_test_and_set_bit(d, EVTCHN_FIFO_PENDING, word);
 
+    /* If we didn't get the lock, bail out. */
+    if ( try == 4 )
+    {
+        gprintk(XENLOG_WARNING,
+                "dom%d port %d lost event (too many queue changes)\n",
+                d->domain_id, evtchn->port);
+        goto done;
+    }
+
     /*
      * Link the event if it unmasked and not already linked.
      */
     if ( !guest_test_bit(d, EVTCHN_FIFO_MASKED, word) &&
          !guest_test_bit(d, EVTCHN_FIFO_LINKED, word) )
     {
-        struct evtchn_fifo_queue *q, *old_q;
         event_word_t *tail_word;
-        bool_t linked = 0;
 
         /*
          * Control block not mapped.  The guest must not unmask an
@@ -225,25 +246,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         {
             printk(XENLOG_G_WARNING
                    "%pv has no FIFO event channel control block\n", v);
-            goto done;
+            goto unlock;
         }
 
-        /*
-         * No locking around getting the queue. This may race with
-         * changing the priority but we are allowed to signal the
-         * event once on the old priority.
-         */
-        q = &v->evtchn_fifo->queue[evtchn->priority];
-
-        old_q = lock_old_queue(d, evtchn, &flags);
-        if ( !old_q )
-            goto done;
-
         if ( guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
-        {
-            spin_unlock_irqrestore(&old_q->lock, flags);
-            goto done;
-        }
+            goto unlock;
 
         /*
          * If this event was a tail, the old queue is now empty and
@@ -262,8 +269,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
             lastq.last_priority = q->priority;
             write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
-            spin_unlock_irqrestore(&old_q->lock, flags);
-            spin_lock_irqsave(&q->lock, flags);
+            spin_unlock(&old_q->lock);
+            old_q = q;
         }
 
         /*
@@ -276,6 +283,7 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
          * If the queue is empty (i.e., we haven't linked to the new
          * event), head must be updated.
          */
+        linked = false;
         if ( q->tail )
         {
             tail_word = evtchn_fifo_word_from_port(d, q->tail);
@@ -284,15 +292,19 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         if ( !linked )
             write_atomic(q->head, port);
         q->tail = port;
+    }
 
-        spin_unlock_irqrestore(&q->lock, flags);
+ unlock:
+    if ( q != old_q )
+        spin_unlock(&old_q->lock);
+    spin_unlock_irqrestore(&q->lock, flags);
 
-        if ( !linked
-             && !guest_test_and_set_bit(d, q->priority,
-                                        &v->evtchn_fifo->control_block->ready) )
-            vcpu_mark_events_pending(v);
-    }
  done:
+    if ( !linked &&
+         !guest_test_and_set_bit(d, q->priority,
+                                 &v->evtchn_fifo->control_block->ready) )
+        vcpu_mark_events_pending(v);
+
     if ( !was_pending )
         evtchn_check_pollers(d, port);
 }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 10:51:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 10:51:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37537.69965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOa-0007CY-2X; Wed, 25 Nov 2020 10:51:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37537.69965; Wed, 25 Nov 2020 10:51:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOZ-0007CQ-UJ; Wed, 25 Nov 2020 10:51:31 +0000
Received: by outflank-mailman (input) for mailman id 37537;
 Wed, 25 Nov 2020 10:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khsOY-00078P-Nr
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:51:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15b0d67e-5c71-4322-8fde-7569aaecc862;
 Wed, 25 Nov 2020 10:51:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76C72AC75;
 Wed, 25 Nov 2020 10:51:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606301484; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9Eg0Mg6cy2GDmCEZCPsyzzLpzDv8NGg/SYGtCOknElU=;
	b=cgFuGboLJhsfeevFj/HKdsBzy4EggHI0/RUa/dgCg+KFK0vTqHrgDpzoutlWcCrpc52wp8
	aDaFoZI1wZG6f/t9tuXiLtoikpkuLPJcHJgpgSeMuEYNkdT3tageJPoaRJO41RLawcQKag
	5ddCqdgXX4vlvvOgkDefvEZeCNoc7CU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v8 1/3] xen/events: modify struct evtchn layout
Date: Wed, 25 Nov 2020 11:51:20 +0100
Message-Id: <20201125105122.3650-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201125105122.3650-1-jgross@suse.com>
References: <20201125105122.3650-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to avoid latent races when updating an event channel, put the
xen_consumer and pending fields in different bytes. This is no problem
right now, but the pending indicator in particular isn't used only when
initializing an event channel (unlike xen_consumer), so any future
addition to this byte would need to be made with a potential race in
mind.

At the same time move some other fields around to have less implicit
padding and to keep related fields closer together.

Finally, switch struct evtchn to no longer use fixed-size types where
they aren't needed.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V7:
- new patch

V8:
- retain xen_consumer to be a bitfield (Jan Beulich)
- switch to non-fixed-size types

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c |  4 ++--
 xen/include/xen/sched.h | 34 ++++++++++++++++++----------------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index 79090c04ca..f39e61105f 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -200,7 +200,7 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
      */
     if ( unlikely(!word) )
     {
-        evtchn->pending = 1;
+        evtchn->pending = true;
         return;
     }
 
@@ -535,7 +535,7 @@ static void setup_ports(struct domain *d, unsigned int prev_evtchns)
         evtchn = evtchn_from_port(d, port);
 
         if ( guest_test_bit(d, port, &shared_info(d, evtchn_pending)) )
-            evtchn->pending = 1;
+            evtchn->pending = true;
 
         evtchn_fifo_set_priority(d, evtchn, EVTCHN_FIFO_PRIORITY_DEFAULT);
     }
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a345cc01f8..7afbae7dd1 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -93,31 +93,33 @@ struct evtchn
 #define ECS_PIRQ         4 /* Channel is bound to a physical IRQ line.       */
 #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
 #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
-    u8  state;             /* ECS_* */
-    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
-    u8  pending:1;
-    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
-    u32 port;
+    unsigned char state;   /* ECS_* */
+#ifndef NDEBUG
+    unsigned char old_state; /* State when taking lock in write mode. */
+#endif
+    unsigned char xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if != 0 */
+    unsigned int port;
     union {
         struct {
             domid_t remote_domid;
-        } unbound;     /* state == ECS_UNBOUND */
+        } unbound;          /* state == ECS_UNBOUND */
         struct {
             evtchn_port_t  remote_port;
             struct domain *remote_dom;
-        } interdomain; /* state == ECS_INTERDOMAIN */
+        } interdomain;      /* state == ECS_INTERDOMAIN */
         struct {
-            u32            irq;
+            unsigned int   irq;
             evtchn_port_t  next_port;
             evtchn_port_t  prev_port;
-        } pirq;        /* state == ECS_PIRQ */
-        u16 virq;      /* state == ECS_VIRQ */
+        } pirq;             /* state == ECS_PIRQ */
+        unsigned int virq;  /* state == ECS_VIRQ */
     } u;
-    u8 priority;
-#ifndef NDEBUG
-    u8 old_state;      /* State when taking lock in write mode. */
-#endif
-    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
+
+    bool pending;                  /* FIFO event channels only. */
+    unsigned char priority;        /* FIFO event channels only. */
+    unsigned short notify_vcpu_id; /* VCPU for local delivery notification */
+    uint32_t fifo_lastq;           /* Data for identifying last queue. */
+
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
@@ -133,7 +135,7 @@ struct evtchn
          * allocations, and on 64-bit platforms with only FLASK enabled,
          * reduces the size of struct evtchn.
          */
-        u32 flask_sid;
+        uint32_t flask_sid;
 #endif
     } ssid;
 #endif
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 10:51:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 10:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37538.69977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOf-0007Hf-BP; Wed, 25 Nov 2020 10:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37538.69977; Wed, 25 Nov 2020 10:51:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsOf-0007HU-86; Wed, 25 Nov 2020 10:51:37 +0000
Received: by outflank-mailman (input) for mailman id 37538;
 Wed, 25 Nov 2020 10:51:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khsOd-00078P-O7
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 10:51:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ca0cb6e-32be-44dd-8153-fbc6d1ce5e33;
 Wed, 25 Nov 2020 10:51:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EE96EADAA;
 Wed, 25 Nov 2020 10:51:24 +0000 (UTC)
X-Inumbo-ID: 5ca0cb6e-32be-44dd-8153-fbc6d1ce5e33
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606301485; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PpgH835OJv8kLqZm+CjGqtlbK0CvTiSkCJEinT/4/4A=;
	b=nSnC7kr60X/sBbqFE000ZLKQyFQTvBtUKu2x/AGj7aChXmOIUTrpitVGUMYf+LK5+wx4LX
	PmYEtXQxYijVS4vIGSlBoqa/nQGy0EI+2OCUUxpMCU8eRv16ribc9X/UynAbwbPvyHblkR
	ypynLSaQOuv7STlzVZTQP9pDEY5w7r8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v8 3/3] xen/events: do some cleanups in evtchn_fifo_set_pending()
Date: Wed, 25 Nov 2020 11:51:22 +0100
Message-Id: <20201125105122.3650-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201125105122.3650-1-jgross@suse.com>
References: <20201125105122.3650-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

evtchn_fifo_set_pending() can be simplified a little bit.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V8:
- new patch
---
 xen/common/event_fifo.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index 443593c3b3..77609539b1 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         return;
     }
 
+    /*
+     * Control block not mapped.  The guest must not unmask an
+     * event until the control block is initialized, so we can
+     * just drop the event.
+     */
+    if ( unlikely(!v->evtchn_fifo->control_block) )
+    {
+        printk(XENLOG_G_WARNING
+               "%pv has no FIFO event channel control block\n", v);
+        return;
+    }
+
     /*
      * Lock all queues related to the event channel (in case of a queue change
      * this might be two).
@@ -233,25 +245,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
      * Link the event if it unmasked and not already linked.
      */
     if ( !guest_test_bit(d, EVTCHN_FIFO_MASKED, word) &&
-         !guest_test_bit(d, EVTCHN_FIFO_LINKED, word) )
+         !guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
     {
-        event_word_t *tail_word;
-
-        /*
-         * Control block not mapped.  The guest must not unmask an
-         * event until the control block is initialized, so we can
-         * just drop the event.
-         */
-        if ( unlikely(!v->evtchn_fifo->control_block) )
-        {
-            printk(XENLOG_G_WARNING
-                   "%pv has no FIFO event channel control block\n", v);
-            goto unlock;
-        }
-
-        if ( guest_test_and_set_bit(d, EVTCHN_FIFO_LINKED, word) )
-            goto unlock;
-
         /*
          * If this event was a tail, the old queue is now empty and
          * its tail must be invalidated to prevent adding an event to
@@ -286,6 +281,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         linked = false;
         if ( q->tail )
         {
+            event_word_t *tail_word;
+
             tail_word = evtchn_fifo_word_from_port(d, q->tail);
             linked = evtchn_fifo_set_link(d, tail_word, port);
         }
@@ -294,7 +291,6 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         q->tail = port;
     }
 
- unlock:
     if ( q != old_q )
         spin_unlock(&old_q->lock);
     spin_unlock_irqrestore(&q->lock, flags);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:02:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:02:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37567.69989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsYv-00004v-Eq; Wed, 25 Nov 2020 11:02:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37567.69989; Wed, 25 Nov 2020 11:02:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khsYv-0008WT-Bo; Wed, 25 Nov 2020 11:02:13 +0000
Received: by outflank-mailman (input) for mailman id 37567;
 Wed, 25 Nov 2020 11:02:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xP3f=E7=amazon.co.uk=prvs=591f578ad=pdurrant@srs-us1.protection.inumbo.net>)
 id 1khsYt-0008WO-FE
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:02:11 +0000
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 088478a3-1fda-4c1e-9d51-c495a195ed85;
 Wed, 25 Nov 2020 11:02:07 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 25 Nov 2020 11:02:00 +0000
Received: from EX13D03EUC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com (Postfix) with ESMTPS
 id 18D87A1EB1; Wed, 25 Nov 2020 11:01:56 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D03EUC002.ant.amazon.com (10.43.164.60) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 25 Nov 2020 11:01:55 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 25 Nov 2020 11:01:55 +0000
X-Inumbo-ID: 088478a3-1fda-4c1e-9d51-c495a195ed85
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1606302128; x=1637838128;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=OIiiM2HU51UNYqzSXsup0GIZImJAvowKCm9OyLNZDuA=;
  b=AMkeZaHxP55aLJpWOkQ58knoU0P9mprcHRAI9wMVOohZIii3W/sJ9Gwr
   GxvSnfbH9VMNJK34ZQth5ghMdy1IjqX0XhDGxmmcg6S4AlDymH5Nofgfs
   fmLHhoN4lQpygA/9JKxa2W9VUPpRANwacsvYJSYU+MPTrSnCk8FCpaKgG
   k=;
X-IronPort-AV: E=Sophos;i="5.78,368,1599523200"; 
   d="scan'208";a="97775005"
Subject: RE: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
Thread-Topic: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
CC: "Elnikety, Eslam" <elnikety@amazon.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>
Thread-Index: AQHWwpaPMlIHtWtagUGiOXvRZvLCO6nYkyyAgAAEeICAABSW4A==
Date: Wed, 25 Nov 2020 11:01:55 +0000
Message-ID: <2fb04a18fa184e699aaf8b8c6604f396@EX13D32EUC003.ant.amazon.com>
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
 <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
 <19792219-22f0-e7ce-23ca-5c1b20d6f581@suse.com>
In-Reply-To: <19792219-22f0-e7ce-23ca-5c1b20d6f581@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 25 November 2020 09:36
> To: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Elnikety, Eslam <elnikety@amazon.com>; Christian Lindig
> <christian.lindig@citrix.com>; David Scott <dave@recoil.org>; Ian Jackson <iwj@xenproject.org>; Wei
> Liu <wl@xen.org>; George Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; xen-devel@lists.xenproject.org; Paul Durrant <paul@xen.org>
> Subject: RE: [EXTERNAL] [PATCH v4 1/3] domctl: introduce a new domain create flag,
> XEN_DOMCTL_CDF_disable_fifo, ...
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> On 25.11.2020 10:20, Jan Beulich wrote:
> > On 24.11.2020 20:17, Paul Durrant wrote:
> >> --- a/xen/include/public/domctl.h
> >> +++ b/xen/include/public/domctl.h
> >> @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
> >>  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
> >>  #define _XEN_DOMCTL_CDF_nested_virt   6
> >>  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
> >> +#define _XEN_DOMCTL_CDF_disable_fifo  7
> >> +#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)
> >
> > Despite getting longish, I think this needs "evtchn" somewhere in
> > the name. To keep size bounded, maybe XEN_DOMCTL_CDF_no_fifo_evtchn?
> >

I'm ok with that name; I'll send a v5.

> >>  /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
> >> -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
> >> +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_disable_fifo
> >
> > While not directly related to this patch, I'm puzzled by the
> > presence of this constant: I've not been able to find any use of
> > it. In particular you did have a need to modify
> > sanitise_domain_config().
> 
> So it was you to introduce this, right away without any user, in
> 7fb0e134f8c6 ("tools/ocaml: abi: Use formal conversion and check
> in more places"). The only reference is from what I regard as a
> comment (I don't speak any ocaml, so I may be wrong). Could you
> clarify why we need to maintain this constant?
> 

I can't remember the exact sequence of events but it became apparent at
some point that the ocaml bindings were out of sync and they rely on a
list of domain create flags where the number has to match the bit-shift
value in domctl.h (among other things). Thus there is an auto-generated
header called "xenctrl_abi_check.h" which is included by xenctrl_stubs.c.
This header is generated from xenctrl.ml by the perl script "abi-check"
and it relies on the XEN_DOMCTL_CDF_MAX constant to form part of the
checks it generates.

As an example, here is the generated header with this patch applied:

// found ocaml type x86_arch_emulation_flags at xenctrl.ml:38
BUILD_BUG_ON( XEN_X86_EMU_LAPIC              != (1u << 0)  );
BUILD_BUG_ON( XEN_X86_EMU_HPET               != (1u << 1)  );
BUILD_BUG_ON( XEN_X86_EMU_PM                 != (1u << 2)  );
BUILD_BUG_ON( XEN_X86_EMU_RTC                != (1u << 3)  );
BUILD_BUG_ON( XEN_X86_EMU_IOAPIC             != (1u << 4)  );
BUILD_BUG_ON( XEN_X86_EMU_PIC                != (1u << 5)  );
BUILD_BUG_ON( XEN_X86_EMU_VGA                != (1u << 6)  );
BUILD_BUG_ON( XEN_X86_EMU_IOMMU              != (1u << 7)  );
BUILD_BUG_ON( XEN_X86_EMU_PIT                != (1u << 8)  );
BUILD_BUG_ON( XEN_X86_EMU_USE_PIRQ           != (1u << 9)  );
BUILD_BUG_ON( XEN_X86_EMU_VPCI               != (1u << 10) );
BUILD_BUG_ON( XEN_X86_EMU_ALL                != (1u << 11)-1u );
// found ocaml type domain_create_flag at xenctrl.ml:60
BUILD_BUG_ON( XEN_DOMCTL_CDF_hvm             != (1u << 0)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_hap             != (1u << 1)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_s3_integrity    != (1u << 2)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_oos_off         != (1u << 3)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_xs_domain       != (1u << 4)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_iommu           != (1u << 5)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_nested_virt     != (1u << 6)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_disable_fifo    != (1u << 7)  );
BUILD_BUG_ON( XEN_DOMCTL_CDF_MAX             != (1u << 7)  );
// found ocaml type domain_create_iommu_opts at xenctrl.ml:70
BUILD_BUG_ON( XEN_DOMCTL_IOMMU_no_sharept    != (1u << 0)  );
BUILD_BUG_ON( XEN_DOMCTL_IOMMU_MAX           != (1u << 0)  );
// found ocaml type physinfo_cap_flag at xenctrl.ml:113
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_hvm         != (1u << 0)  );
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_pv          != (1u << 1)  );
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_directio    != (1u << 2)  );
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_hap         != (1u << 3)  );
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_shadow      != (1u << 4)  );
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_iommu_hap_pt_share != (1u << 5)  );
BUILD_BUG_ON( XEN_SYSCTL_PHYSCAP_MAX         != (1u << 5)  );

  Paul


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:10:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37575.70001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khshK-000149-CR; Wed, 25 Nov 2020 11:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37575.70001; Wed, 25 Nov 2020 11:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khshK-000142-8r; Wed, 25 Nov 2020 11:10:54 +0000
Received: by outflank-mailman (input) for mailman id 37575;
 Wed, 25 Nov 2020 11:10:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avq/=E7=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1khshI-00013x-RS
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:10:52 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39ff70a9-428c-4d15-800d-96041cc453f7;
 Wed, 25 Nov 2020 11:10:51 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id 64so1482267wra.11
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 03:10:51 -0800 (PST)
Received: from CBGR90WXYV0 (host86-183-162-145.range86-183.btcentralplus.com.
 [86.183.162.145])
 by smtp.gmail.com with ESMTPSA id o21sm4222767wra.40.2020.11.25.03.10.49
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 25 Nov 2020 03:10:50 -0800 (PST)
X-Inumbo-ID: 39ff70a9-428c-4d15-800d-96041cc453f7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=dmKA2RThu7LtriVr5PAHJm5iP85nmZOL0pm5muTUb/o=;
        b=SRI70cMGjkYFN7ZDdrlIkeQVwzsSCpC6530HkFenkPTNqDvIPWOPGtvA40nOmjGVpY
         uThNmrmMqL4GPasHZAe2fElGXAoGXQODNiDp+epEPn4j5m6SmdNeuwJ9y5Ze61LM+wRO
         VjsexIzhKmtY2QFJB6wzWlDeZ6/xHVaP3wJFZcrKz75Q/6ejWI8J3uoRpN98tW/RQwgA
         qYzQkbG/tPUxRYK5/TZyS0UDItufUlx08ieGK7rgE7tnlsCXZ64FWKVFwTxTwmDzOvdb
         EwGDrfmg6GArTjSScypR+D9iauycjioeb4B4qGDw5sSMxZ9hCAROMOVm7C3452jZG5S2
         qx5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=dmKA2RThu7LtriVr5PAHJm5iP85nmZOL0pm5muTUb/o=;
        b=fNwbtaiLJfp9t+JHO0bs5Hp8koDzcTBaNCiM9ti/k475764I/MbuqVMAufM3e/5UWs
         fPkddTl8Lwj8tGvNuV/JALe9WGDHomEquSE4wBGmya59UrHqBELtfZDcXUOIgk0Siy7n
         COgeJXk9XsLMkGhIW9XQT2fmm0o6qeYqAQwO+XhZGEEqe0AgHlSv0bkOQOJNk1EiSizY
         VVLJl994xX6Pq8J3IQeLgHk0Z7MJuVqbKt3dTWOLpv0y/wAsNGVUkyS4y0YGhVPy2rNc
         wPxd7kuKeW+55jrQ+JAoEFGhBy2IbTa0KHndlY3so1/f6Jh0sZL61YiEzqXHxqm5lESc
         BpMg==
X-Gm-Message-State: AOAM530T6G0D2o4ZJEm5GFOepJYNC5d8IIvq4w/viw4mq5w+846V1jeG
	u4Uu+8FECEdU1PRCnD1M9yQ=
X-Google-Smtp-Source: ABdhPJyuW+D2Qrq/NrFARgKrP5vv9wjX+86e/yUKNZyqwiucT7ptOHa8jz3Cujhb5mux7hA23PaxJQ==
X-Received: by 2002:a5d:44c1:: with SMTP id z1mr3381539wrr.375.1606302650763;
        Wed, 25 Nov 2020 03:10:50 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Eslam Elnikety'" <elnikety@amazon.com>,
	"'Christian Lindig'" <christian.lindig@citrix.com>,
	"'David Scott'" <dave@recoil.org>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>
References: <20201124191751.11472-1-paul@xen.org> <20201124191751.11472-2-paul@xen.org> <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
In-Reply-To: <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
Subject: RE: [PATCH v4 1/3] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_disable_fifo, ...
Date: Wed, 25 Nov 2020 11:10:49 -0000
Message-ID: <009001d6c31b$a1eaeef0$e5c0ccd0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJXq2EkpgGe9udTl5TsVbT974RgHAIZlAorAcS/UCWot+u0UA==
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 25 November 2020 09:20
> To: Paul Durrant <paul@xen.org>
> Cc: Paul Durrant <pdurrant@amazon.com>; Eslam Elnikety <elnikety@amazon.com>; Christian Lindig
> <christian.lindig@citrix.com>; David Scott <dave@recoil.org>; Ian Jackson <iwj@xenproject.org>; Wei
> Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v4 1/3] domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_disable_fifo,
> ...
>
> On 24.11.2020 20:17, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > ...to control the visibility of the FIFO event channel operations
> > (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> > the guest.
> >
> > These operations were added to the public header in commit d2d50c2f308f
> > ("evtchn: add FIFO-based event channel ABI") and the first implementation
> > appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> > EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> > ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> > that, a guest issuing those operations would receive a return value of
> > -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> > running on an older (pre-4.4) Xen would fall back to using the 2-level event
> > channel interface upon seeing this return value.
> >
> > Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
> > onwards has implications for hibernation of some Linux guests. During resume
> > from hibernation, there are two kernels involved: the "boot" kernel and the
> > "resume" kernel. The guest boot kernel may default to using FIFO operations and
> > instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> > other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> > on a version of Xen that did not support the FIFO operations.
>
> And the alternative of the boot kernel issuing EVTCHNOP_reset has
> other unwanted consequences. Maybe worth mentioning here, as
> otherwise this would look like the obvious way to return to 2-level
> mode?
>
> Also, why can't the boot kernel be instructed to avoid engaging
> FIFO mode?
>

Both of those are, of course, viable alternatives if the guest can be
modified. The problem we need to work around is guests that are already
out there and cannot be updated.
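
The fallback behaviour described in the commit message can be sketched as follows. This is an illustrative model only: `evtchn_fifo_init_control()` and `select_evtchn_abi()` are hypothetical stand-ins for the guest's probe of EVTCHNOP_init_control, not the actual Linux or Xen code.

```c
#include <errno.h>
#include <string.h>

/* Hypothetical stand-in for a guest issuing EVTCHNOP_init_control.
 * A FIFO-capable Xen returns 0; a pre-4.4 Xen (or one honouring the
 * proposed disable flag) returns -ENOSYS. */
int evtchn_fifo_init_control(int xen_has_fifo)
{
    return xen_has_fifo ? 0 : -ENOSYS;
}

/* Model of the guest's ABI selection: try FIFO first, and fall back
 * to the 2-level interface on seeing -ENOSYS. */
const char *select_evtchn_abi(int xen_has_fifo)
{
    if (evtchn_fifo_init_control(xen_has_fifo) == -ENOSYS)
        return "2-level";
    return "fifo";
}
```

The hibernation problem arises because this selection runs again in the boot kernel with a different answer than the resume kernel saw when it originally booted.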

> > To maintain compatibility it is necessary to make Xen behave as it did
> > before the new operations were added and hence the code in this patch ensures
> > that, if XEN_DOMCTL_CDF_disable_fifo is set, the FIFO event channel operations
> > will again result in -ENOSYS being returned to the guest.
>
> Are there indeed dependencies on the precise return value anywhere?
> If so, the generally inappropriate use (do_event_channel_op()'s
> default case really would also need switching) would want a brief
> comment, so it'll be understood by readers that this isn't code to
> derive other code from. If not, -EPERM or -EACCES perhaps?
>

The patch, as stated, is reverting behaviour and so the -ENOSYS really
needs to stay since it is essentially ABI now. I am not aware of guest
code that will, in fact, die if it sees -EPERM or -EACCES instead, but
there may be such code. The only safe thing to do is to make things look
like they used to.
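
A minimal sketch of the gating under discussion, for illustration only (this is not the actual patch; `struct domain` here is a toy, and the behaviour shown is simply "FIFO ops return -ENOSYS when the flag is set, exactly as on pre-4.4 Xen"):

```c
#include <errno.h>

/* FIFO-only event channel operation numbers, as in Xen's public
 * event_channel.h; EVTCHNOP_send is included as a non-FIFO example. */
enum {
    EVTCHNOP_send         = 4,
    EVTCHNOP_init_control = 11,
    EVTCHNOP_expand_array = 12,
    EVTCHNOP_set_priority = 13,
};

/* Hypothetical per-domain state; a real implementation would derive
 * this from XEN_DOMCTL_CDF_disable_fifo at domain creation time. */
struct domain { int fifo_disabled; };

/* Sketch of the gate: the three FIFO ops yield -ENOSYS when the
 * domain was created with the disable flag, preserving the pre-4.4
 * ABI; all other ops are untouched. */
long do_event_channel_op(const struct domain *d, int cmd)
{
    switch (cmd) {
    case EVTCHNOP_init_control:
    case EVTCHNOP_expand_array:
    case EVTCHNOP_set_priority:
        if (d->fifo_disabled)
            return -ENOSYS;
        return 0;   /* would dispatch to the FIFO implementation */
    default:
        return 0;   /* non-FIFO ops are unaffected */
    }
}
```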

> Also, now that we gain a runtime control, do we perhaps also want a
> build time one?

Yes, a Kconfig flag to compile out the code seems like a worthy
addition. I'll spin up a follow-up patch as soon as I get time.

  Paul

>
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
>
> Are this order as well as the From: tag above correct? Or
> alternatively, are there actually any pieces left at all from
> Eslam's earlier patch?
>
> > v4:
> >  - New in v4
>
> (Just as an aside: That's quite interesting for a previously
> standalone patch. I guess that patch was really split, considering
> you've retained Eslam's S-o-b? But perhaps there are different ways
> to look at things ...)
>
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
> >  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
> >  #define _XEN_DOMCTL_CDF_nested_virt   6
> >  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
> > +#define _XEN_DOMCTL_CDF_disable_fifo  7
> > +#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)
>
> Despite getting longish, I think this needs "evtchn" somewhere in
> the name. To keep size bounded, maybe XEN_DOMCTL_CDF_no_fifo_evtchn?
>
> >  /* Max XEN_DOMCTL_CDF_* constant.  Used for ABI checking. */
> > -#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_nested_virt
> > +#define XEN_DOMCTL_CDF_MAX XEN_DOMCTL_CDF_disable_fifo
>
> While not directly related to this patch, I'm puzzled by the
> presence of this constant: I've not been able to find any use of
> it. In particular you did have a need to modify
> sanitise_domain_config().
>=20
> Jan



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:11:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37577.70013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khshd-00019K-SL; Wed, 25 Nov 2020 11:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37577.70013; Wed, 25 Nov 2020 11:11:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khshd-00019D-Nx; Wed, 25 Nov 2020 11:11:13 +0000
Received: by outflank-mailman (input) for mailman id 37577;
 Wed, 25 Nov 2020 11:11:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khshc-00018u-4j
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:11:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khshZ-0004zN-Si; Wed, 25 Nov 2020 11:11:09 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khshZ-0006Pt-KU; Wed, 25 Nov 2020 11:11:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oQA/M5Uuea35a1wIxgW5iLSiJLPmMuYHaChWUL4flWY=; b=VdeoLpOGqIC5/2SzR15Mp5QzUX
	9kwMg/zK0HM9qfcp+2mqW+LI9J0k5paVZfF99JDLV6Hb940fj4mFfWnuZMJVdT/iJ05MozGnYFAAc
	3vJkmyedaOXA/rsmr5YQekt7xV6DqW72JmTL1W3c04FZn+4DlMWoGDsW6ywSnYaS2q5c=;
Subject: Re: [PATCH] evtchn: double per-channel locking can't hit identical
 channels
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7855e268-7857-442d-3daa-8fd837e95287@xen.org>
Date: Wed, 25 Nov 2020 11:11:07 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <db0b16f8-2053-5ec3-f73a-c1c8fcdb9444@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 24/11/2020 17:03, Jan Beulich wrote:
> Inter-domain channels can't possibly be bound to themselves, there's
> always a 2nd channel involved, even when this is a loopback into the
> same domain. As a result we can drop one conditional each from the two
> involved functions.
> 
> With this, the number of evtchn_write_lock() invocations can also be
> shrunk by half, swapping the two incoming function arguments instead.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
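
The argument-swapping idea above can be sketched as follows. This is a toy model, not Xen's actual evtchn code: the lock is a plain counter and the ordering by port number is an assumption for illustration, chosen to show that a fixed global order makes a second "identical channels" check unnecessary.

```c
/* Toy model: inter-domain channels are always two distinct objects,
 * so instead of guarding against lchn == rchn the pair can simply be
 * locked in a fixed order, swapping the incoming arguments when
 * needed.  Real Xen uses evtchn_write_lock(); here a counter stands
 * in for the lock so the locking discipline can be exercised. */
struct evtchn {
    unsigned int port;
    int lock_depth;   /* incremented each time the "lock" is taken */
};

void evtchn_write_lock(struct evtchn *chn)
{
    chn->lock_depth++;   /* stand-in for taking the per-channel lock */
}

void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
{
    /* Enforce one global order so the same pair locked from either
     * side cannot deadlock. */
    if (lchn->port > rchn->port) {
        struct evtchn *tmp = lchn;
        lchn = rchn;
        rchn = tmp;
    }
    evtchn_write_lock(lchn);
    evtchn_write_lock(rchn);
}
```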

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:27:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:27:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37589.70024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khswx-0002Gk-9a; Wed, 25 Nov 2020 11:27:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37589.70024; Wed, 25 Nov 2020 11:27:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khswx-0002Gd-6J; Wed, 25 Nov 2020 11:27:03 +0000
Received: by outflank-mailman (input) for mailman id 37589;
 Wed, 25 Nov 2020 11:27:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khswv-0002GY-79
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:27:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khswu-0005Ko-UW; Wed, 25 Nov 2020 11:27:00 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khswu-0007if-Ma; Wed, 25 Nov 2020 11:27:00 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/KkqULJCMurogweMWLGS4Q/2SZryfCb+U3g2wAcQBLU=; b=SXXplQM07Y1/Zm7BHsbpLdLVwG
	6w0n1SLXOtyFbRos1HrMsy/hXB0SIaMXpE5LvUrE+qWe80i3l5P1lHHj9FeGCHgN4vcCI4rX/CM3/
	cM3YP6LmiBfzXMmlKAP8/0M7P3clMGjmKAAi7xOQDy3/c/czKwy8Vh+3eK36UURUS4AU=;
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum #1530923
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <61a105672650e7470710183f37351b821b818d1e.1606215998.git.bertrand.marquis@arm.com>
 <E5A460E5-7D10-4314-98B4-0D90CD173940@arm.com>
 <alpine.DEB.2.21.2011240944400.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <7b05cb84-a9c3-10b2-5713-42259695e9b1@xen.org>
Date: Wed, 25 Nov 2020 11:26:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011240944400.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 24/11/2020 17:44, Stefano Stabellini wrote:
> On Tue, 24 Nov 2020, Rahul Singh wrote:
>>> On 24 Nov 2020, at 11:12 am, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>>>
>>> On the Cortex A55, TLB entries can be allocated by a speculative AT
>>> instruction. If this is happening during a guest context switch with an
>>> inconsistent page table state in the guest, TLBs with wrong values might
>>> be allocated.
>>> The ARM64_WORKAROUND_AT_SPECULATE workaround is used as for erratum
>>> 1165522 on Cortex A76 or Neoverse N1.
>>>
>>> This change is also introducing the MIDR identifier for the Cortex-A55.
>>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Acked-by: Julien Grall <jgrall@amazon.com>

And committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:35:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:35:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37598.70040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kht4f-0003Ew-2y; Wed, 25 Nov 2020 11:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37598.70040; Wed, 25 Nov 2020 11:35:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kht4e-0003Ep-WD; Wed, 25 Nov 2020 11:35:01 +0000
Received: by outflank-mailman (input) for mailman id 37598;
 Wed, 25 Nov 2020 11:35:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QLmq=E7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kht4e-0003Ek-Bk
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:35:00 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98a1c86e-ce4f-446c-b8b4-b8d25921092f;
 Wed, 25 Nov 2020 11:34:59 +0000 (UTC)
X-Inumbo-ID: 98a1c86e-ce4f-446c-b8b4-b8d25921092f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606304099;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=01AObPEzqIxLTNz3dIKwWe0KbB5hu7dlBpFLNIJ4zXQ=;
  b=Zyk2yEl6zhVQZdxD533Razwp0RPnPYZ5WJYr9iTcV+LaFl4GEBFJneKK
   yXDRdi9yC3+YdB62NS6P1Od0GXAQR34a/v1JNU1bebkLmafBVWB3ujJpn
   FRz0VqYAFl+hXKEKhOG3K1aGg6UdRI1PrQ+CwtY0yga3TE7K2eEtS1MU5
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 8laZL+S721WS+YayX/vM0WXv1fRymI3Fd8PNOKeBwuiHq67IYrntiFO/Hmt/2Dtog/IVV4AVMy
 Odj6TxyBhODs2+1dA7BRGOn5Dk7UkXo5AzUFqrtcd8BcTBwsNuu3RL3qBnu601ax3jsjBo08sT
 xqfxVo7CGFKz6qseRqaTKpmu1IqOdabt4QG4zVH85VauKJShOW9sATKKA24GkvOKNDW9zA/6DK
 QA64IMQ3gJgq2+H+4q2sITaEt4BxE/3S47x77yi++jKOo4st/RwTZC0swmfg4/Q+6xNjw5NuWx
 nqw=
X-SBRS: None
X-MesageID: 32249198
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,368,1599537600"; 
   d="scan'208";a="32249198"
Subject: Re: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Eslam Elnikety <elnikety@amazon.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2b2e3737-ecd7-907f-3c72-f31835dc5cb8@citrix.com>
Date: Wed, 25 Nov 2020 11:30:41 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201124191751.11472-2-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 24/11/2020 19:17, Paul Durrant wrote:
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 666aeb71bf1b..70701c59d053 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
>  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
>  #define _XEN_DOMCTL_CDF_nested_virt   6
>  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
> +#define _XEN_DOMCTL_CDF_disable_fifo  7
> +#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)

The sense is backwards.  It should be a "permit the use of FIFO"
control.  If the code had been written this way to begin with, the bug
you found wouldn't have existed.

Given that there is not currently a way to disable FIFO, you can
probably do without an enumeration of whether the hypervisor supports it
or not.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:41:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:41:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37606.70053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtAG-00048z-Nh; Wed, 25 Nov 2020 11:40:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37606.70053; Wed, 25 Nov 2020 11:40:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtAG-00048s-JN; Wed, 25 Nov 2020 11:40:48 +0000
Received: by outflank-mailman (input) for mailman id 37606;
 Wed, 25 Nov 2020 11:40:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khtAF-00048M-6o
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:40:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28b40baf-f245-4d24-9bdb-8cf362351697;
 Wed, 25 Nov 2020 11:40:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3F6BDAC41;
 Wed, 25 Nov 2020 11:40:45 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id D8B791E130F; Wed, 25 Nov 2020 12:40:44 +0100 (CET)
X-Inumbo-ID: 28b40baf-f245-4d24-9bdb-8cf362351697
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 12:40:44 +0100
From: Jan Kara <jack@suse.cz>
To: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 11/20] block: reference struct block_device from struct
 hd_struct
Message-ID: <20201125114044.GC16944@quack2.suse.cz>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-12-hch@lst.de>
 <X708BTJ5njtbC2z1@mtj.duckdns.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <X708BTJ5njtbC2z1@mtj.duckdns.org>
User-Agent: Mutt/1.10.1 (2018-07-13)


Hello!

On Tue 24-11-20 11:59:49, Tejun Heo wrote:
> > diff --git a/block/partitions/core.c b/block/partitions/core.c
> > index a02e224115943d..0ba0bf44b88af3 100644
> > --- a/block/partitions/core.c
> > +++ b/block/partitions/core.c
> > @@ -340,12 +340,11 @@ void delete_partition(struct hd_struct *part)
> >  	device_del(part_to_dev(part));
> >  
> >  	/*
> > -	 * Remove gendisk pointer from idr so that it cannot be looked up
> > -	 * while RCU period before freeing gendisk is running to prevent
> > -	 * use-after-free issues. Note that the device number stays
> > -	 * "in-use" until we really free the gendisk.
> > +	 * Remove the block device from the inode hash, so that it cannot be
> > +	 * looked up while waiting for the RCU grace period.
> >  	 */
> > -	blk_invalidate_devt(part_devt(part));
> > +	remove_inode_hash(part->bdev->bd_inode);
> 
> I don't think this is necessary now that the bdev and inode lifetimes are
> one. Before, punching out the association early was necessary because we
> could be in a situation where we can successfully look up a part from idr
> and then try to pin the associated disk which may already be freed. With the
> new code, the lookup is through the inode whose lifetime is one and the same
> with gendisk, so use-after-free isn't possible and __blkdev_get() will
> reliably reject such open attempts.

I think the remove_inode_hash() call is actually still needed. Consider a
situation in which the disk is unplugged and the gendisk gets destroyed,
but the bdev still lives on (e.g. because it is still open). The device
then gets re-plugged and a gendisk for the same device number gets
created. But we really need a new bdev for this, because from a
higher-level POV it is a completely new device, and the old bdev needs to
live on for as long as it is open. So IMO we still need to unhash the
inode and leave it lingering in the background.
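
The lifetime rule above can be sketched generically; the names `bdev_lookup_or_create()` and `bdev_unhash()` are hypothetical, and a one-entry table stands in for the kernel's inode hash. The point is only that "unhashing" removes an object from lookup without freeing it, so an open reference keeps the old object alive while a re-plugged device gets a fresh one under the same number.

```c
#include <stdlib.h>

/* Toy bdev with a reference count; not the kernel's struct. */
struct bdev { int devt; int refs; };

static struct bdev *hash_slot;   /* one-entry stand-in for the inode hash */

/* Look up a bdev by device number, creating a new one if none is
 * currently hashed under that number. */
struct bdev *bdev_lookup_or_create(int devt)
{
    if (hash_slot && hash_slot->devt == devt) {
        hash_slot->refs++;
        return hash_slot;                /* found via the hash */
    }
    struct bdev *b = calloc(1, sizeof(*b));
    b->devt = devt;
    b->refs = 1;
    hash_slot = b;                       /* make it findable */
    return b;
}

/* Remove from the hash without freeing: the object stays alive for
 * existing holders, but no new lookup can reach it. */
void bdev_unhash(struct bdev *b)
{
    if (hash_slot == b)
        hash_slot = NULL;
}
```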

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:50:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:50:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37613.70065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtJH-00052h-Mj; Wed, 25 Nov 2020 11:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37613.70065; Wed, 25 Nov 2020 11:50:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtJH-00052a-HK; Wed, 25 Nov 2020 11:50:07 +0000
Received: by outflank-mailman (input) for mailman id 37613;
 Wed, 25 Nov 2020 11:50:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xP3f=E7=amazon.co.uk=prvs=591f578ad=pdurrant@srs-us1.protection.inumbo.net>)
 id 1khtJG-0004yt-0z
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:50:06 +0000
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bdb7f1d-62a6-4c4b-aa89-ead1607bc409;
 Wed, 25 Nov 2020 11:50:04 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2b-4ff6265a.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 25 Nov 2020 11:49:58 +0000
Received: from EX13D03EUC004.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2b-4ff6265a.us-west-2.amazon.com (Postfix) with ESMTPS
 id D7EE7A206C; Wed, 25 Nov 2020 11:49:56 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D03EUC004.ant.amazon.com (10.43.164.33) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 25 Nov 2020 11:49:55 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 25 Nov 2020 11:49:55 +0000
X-Inumbo-ID: 2bdb7f1d-62a6-4c4b-aa89-ead1607bc409
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1606305005; x=1637841005;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=0Xp+Ld/00tFOv4jMqEHbRqWMFmZJkCjUUSRudyIJZ6U=;
  b=MCTuO7KvnOGfmop0S4STVAhL018Os0jj/gWhBh/XcUtGdShdsA4LifV+
   gFNzSFBNCnWhgamUwE/QCa7geyEL/PhZ7rQZI2rXncApQ94s/SupGPOwB
   daIF7oNfQlMm+mgDjEHSXVSBFK6w9SVPkmILhtyb4bfRm6QRNVcVms3wT
   A=;
X-IronPort-AV: E=Sophos;i="5.78,368,1599523200"; 
   d="scan'208";a="97787018"
Subject: RE: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
Thread-Topic: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "Elnikety, Eslam" <elnikety@amazon.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Thread-Index: AQHWwpaPMlIHtWtagUGiOXvRZvLCO6nYt6KAgAAEHQA=
Date: Wed, 25 Nov 2020 11:49:54 +0000
Message-ID: <fc5c13e8e8f349ad9c005313bbe60d07@EX13D32EUC003.ant.amazon.com>
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
 <2b2e3737-ecd7-907f-3c72-f31835dc5cb8@citrix.com>
In-Reply-To: <2b2e3737-ecd7-907f-3c72-f31835dc5cb8@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 25 November 2020 11:31
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Elnikety, Eslam <elnikety@amazon.com>; Christian Lindig
> <christian.lindig@citrix.com>; David Scott <dave@recoil.org>; Ian Jackson <iwj@xenproject.org>; Wei
> Liu <wl@xen.org>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Julien
> Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>
> Subject: RE: [EXTERNAL] [PATCH v4 1/3] domctl: introduce a new domain create flag,
> XEN_DOMCTL_CDF_disable_fifo, ...
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 24/11/2020 19:17, Paul Durrant wrote:
> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 666aeb71bf1b..70701c59d053 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -70,9 +70,11 @@ struct xen_domctl_createdomain {
> >  #define XEN_DOMCTL_CDF_iommu          (1U<<_XEN_DOMCTL_CDF_iommu)
> >  #define _XEN_DOMCTL_CDF_nested_virt   6
> >  #define XEN_DOMCTL_CDF_nested_virt    (1U << _XEN_DOMCTL_CDF_nested_virt)
> > +#define _XEN_DOMCTL_CDF_disable_fifo  7
> > +#define XEN_DOMCTL_CDF_disable_fifo   (1U << _XEN_DOMCTL_CDF_disable_fifo)
> 
> The sense is backwards.  It should be a "permit the use of FIFO"
> control.  If the code had been written this way to begin with, the bug
> you found wouldn't have existed.
> 
> Given that there is not currently a way to disable FIFO, you can
> probably do without an enumeration of whether the hypervisor supports it
> or not.
> 

Ok, I can reverse the sense.

I found another one that we ought to control in a similar way... the per-cpu evtchn upcalls. AFAIK only the Windows PV drivers make use of it (and I can arrange to squash that with a registry flag) but it really falls into the same category as FIFO... so maybe we need a separate bit-field for these sorts of thing?

  Paul

> ~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 11:51:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 11:51:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37620.70077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtKd-0005Dr-Vb; Wed, 25 Nov 2020 11:51:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37620.70077; Wed, 25 Nov 2020 11:51:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtKd-0005Dk-Sa; Wed, 25 Nov 2020 11:51:31 +0000
Received: by outflank-mailman (input) for mailman id 37620;
 Wed, 25 Nov 2020 11:51:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khtKc-0005Da-J0
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 11:51:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bc7fecdb-87e3-4233-a2d9-a1612d0af6c0;
 Wed, 25 Nov 2020 11:51:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 883A9AC48;
 Wed, 25 Nov 2020 11:51:28 +0000 (UTC)
X-Inumbo-ID: bc7fecdb-87e3-4233-a2d9-a1612d0af6c0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606305088; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1ufAIsxgz3dNrTVYHsOrmKrdgkJ0tVPi4GXrtu7oa2I=;
	b=Vp74fjys1mzPp7Tn6lgRA/sbhE2WpbtnwrahiFMFWrx0gtPBtV63oa0+Ypg2XZ1b0Qkq1A
	+h9QNMoHUWREk+TgLhlF1YhD9xRMOuPc1MrAJTPGTud7wJd/KQhCBHX/ORLEiaJUeObqzw
	YoSrtrUhVhne9meKVPEkM4fLHOmmU8w=
Subject: Re: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Eslam Elnikety' <elnikety@amazon.com>,
 'Christian Lindig' <christian.lindig@citrix.com>,
 'David Scott' <dave@recoil.org>, 'Ian Jackson' <iwj@xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
 <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
 <009001d6c31b$a1eaeef0$e5c0ccd0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3ab33c4e-56af-5690-32b8-a89c5e27761b@suse.com>
Date: Wed, 25 Nov 2020 12:51:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <009001d6c31b$a1eaeef0$e5c0ccd0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 12:10, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 25 November 2020 09:20
>>
>> On 24.11.2020 20:17, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> ...to control the visibility of the FIFO event channel operations
>>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
>>> the guest.
>>>
>>> These operations were added to the public header in commit d2d50c2f308f
>>> ("evtchn: add FIFO-based event channel ABI") and the first implementation
>>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
>>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
>>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
>>> that, a guest issuing those operations would receive a return value of
>>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
>>> running on an older (pre-4.4) Xen would fall back to using the 2-level event
>>> channel interface upon seeing this return value.
>>>
>>> Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
>>> onwards has implications for hibernation of some Linux guests. During resume
>>> from hibernation, there are two kernels involved: the "boot" kernel and the
>>> "resume" kernel. The guest boot kernel may default to use FIFO operations and
>>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
>>> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
>>> on a version of Xen that did not support the FIFO operations.
>>
>> And the alternative of the boot kernel issuing EVTCHNOP_reset has
>> other unwanted consequences. Maybe worth mentioning here, as
>> otherwise this would look like the obvious way to return to 2-level
>> mode?
>>
>> Also, why can't the boot kernel be instructed to avoid engaging
>> FIFO mode?
>>
> 
> Both of those are, of course, viable alternatives if the guest can be
> modified. The problem we need to work around is guests that are already
> out there and cannot be updated.

Making use of EVTCHNOP_reset indeed would require a change to the
kernel. But Linux has a command line option to suppress use of
FIFO event channels, so I can't see why the boot kernel couldn't
be passed this flag without any modification to the binary.
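
[For reference, the option alluded to here is, as far as I know, Linux's `xen.fifo_events` boolean parameter; a hibernation setup could pass it on the boot kernel's command line. The paths and root device below are illustrative only.]

```shell
# Illustrative GRUB "linux" line for the boot kernel: xen.fifo_events=0
# keeps the kernel on the 2-level event channel ABI, matching the
# assumptions of the hibernated image it is about to resume.
linux /boot/vmlinuz root=/dev/xvda1 ro resume=/dev/xvda2 xen.fifo_events=0
```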

>>> To maintain compatibility it is necessary to make Xen behave as it did
>>> before the new operations were added and hence the code in this patch ensures
>>> that, if XEN_DOMCTL_CDF_disable_fifo is set, the FIFO event channel operations
>>> will again result in -ENOSYS being returned to the guest.
>>
>> Are there indeed dependencies on the precise return value anywhere?
>> If so, the generally inappropriate use (do_event_channel_op()'s
>> default case really would also need switching) would want a brief
>> comment, so it'll be understood by readers that this isn't code to
>> derive other code from. If not, -EPERM or -EACCES perhaps?
>>
> 
> The patch, as stated, is reverting behaviour and so the -ENOSYS really
> needs to stay since it is essentially ABI now. I am not aware of guest
> code that will, in fact, die if it sees -EPERM or -EACCES instead but
> there may be such code. The only safe thing to do is to make things
> look like they used to.

I don't think specific error codes can be considered "ABI". Not
the least because, if there are multiple causes for an error, it
ought to be undefined which error gets returned. A guest not
falling back to 2-level on _any_ error here is basically setting
itself up for eventual failure because of e.g. getting back
-ENOMEM. Or someone deciding to add an XSM check to the function.

As said, I'm of the opinion that the other -ENOSYS ought to be
replaced as well, which of course would be precluded if this was
considered "ABI".

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:00:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37628.70089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtT2-0006Gg-Do; Wed, 25 Nov 2020 12:00:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37628.70089; Wed, 25 Nov 2020 12:00:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtT2-0006GZ-AX; Wed, 25 Nov 2020 12:00:12 +0000
Received: by outflank-mailman (input) for mailman id 37628;
 Wed, 25 Nov 2020 12:00:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khtT1-0006GO-2s
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:00:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khtSy-00062C-QA; Wed, 25 Nov 2020 12:00:08 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khtSy-0002Ck-Ia; Wed, 25 Nov 2020 12:00:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DY/vjFFbJ1KS+p1DZR/X4EeyTXxWNEKcF+8YdkY1NE8=; b=JTCeJx0L2EiS//O1Nwts3yTz5i
	4s2KwW50O4YZq1HbeukKMu8qZSJVToIoQYPwG+lk8awf+my3S+Hqf7OGf+qEye+c9dvtBEMF/sPSm
	XgmZnuQva28DZxegYsWOCnqLDUfyDXZV1H1sC2vByV6O7pLiumEhGCb4E6GQmvc4iY0k=;
Subject: Re: [PATCH v2 01/17] mm: check for truncation in vmalloc_type()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <5adb0089-ef9b-cfe2-db0d-7142eccc914f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8669e32e-fd23-1425-a9c5-7837a6d63ad6@xen.org>
Date: Wed, 25 Nov 2020 12:00:06 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <5adb0089-ef9b-cfe2-db0d-7142eccc914f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 23/11/2020 14:23, Jan Beulich wrote:
> While it's currently implied from the checking xmalloc_array() does,
> let's make this more explicit in the function itself. As a result both
> involved local variables don't need to have size_t type anymore. This
> brings them in line with the rest of the code in this file.
> 
> Requested-by: Julien Grall <julien@xen.org>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37643.70101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtUz-0006Tk-Q8; Wed, 25 Nov 2020 12:02:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37643.70101; Wed, 25 Nov 2020 12:02:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtUz-0006Td-N8; Wed, 25 Nov 2020 12:02:13 +0000
Received: by outflank-mailman (input) for mailman id 37643;
 Wed, 25 Nov 2020 12:02:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NtXg=E7=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khtUx-0006TW-PA
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:02:11 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.52]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d64cbb68-fdc3-4358-a0d4-4ec5dd25b7a9;
 Wed, 25 Nov 2020 12:02:09 +0000 (UTC)
Received: from AM6P193CA0125.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::30)
 by DBBPR08MB4663.eurprd08.prod.outlook.com (2603:10a6:10:d8::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Wed, 25 Nov
 2020 12:02:07 +0000
Received: from AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::93) by AM6P193CA0125.outlook.office365.com
 (2603:10a6:209:85::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Wed, 25 Nov 2020 12:02:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT008.mail.protection.outlook.com (10.152.16.123) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 25 Nov 2020 12:02:07 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71");
 Wed, 25 Nov 2020 12:02:07 +0000
Received: from 1503ee4a42a5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8FA5C22A-3FBC-421B-BCA1-3C8B2547DF21.1; 
 Wed, 25 Nov 2020 12:02:01 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1503ee4a42a5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 25 Nov 2020 12:02:01 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5084.eurprd08.prod.outlook.com (2603:10a6:10:38::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Wed, 25 Nov
 2020 12:02:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Wed, 25 Nov 2020
 12:02:00 +0000
X-Inumbo-ID: d64cbb68-fdc3-4358-a0d4-4ec5dd25b7a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=66bNHCUSp1na50mlvQ0gb3LddztgjRpB4XfrDIKnsr4=;
 b=4/VMHplmL1tAZVrR9Yg0SxnqQJ7O7h2Uk8we+bExUJQ6jNfiZQ0ds6zlD+TU65TBsTQR2ugmKLgMMCRCwzKFR/sikFjt/BzbOfp2NMz/2vKjCAPJfw8zVuOKEAIN8TpIY94ZjPM7JIonS1op6B1gjTVu8Qxr6TfSddsZjig0BaY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 31816d201b33c789
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Sqr1ppsQANHInChesDhTTcOELIWE3wuqhVG0dhycOCmEelT85w/LGEOLL1Hw3VKUQiyV+3x0PnEIabVuhoVlu+O/EslgZP6ymicYflXCPvq6+ZlTPk95hxuG7EbeZ61HjM5Zy64t2GF7z54ic2zjfskxMlHloafT3mzf2frYTWPuGdlKMDCwzJhVyk/+FP+XYFQdh/h9PT/k9xqXirqpIiZe0axc3uOv/aIMxmTWHMy0pXkJczkeCQs3Jjlg11oveg1+ZsRvPTItJSaZqa0m8Maxi1aR9uDXulgaTOJjaYtEbkF+2rnBJX8HgLZH7GMrWOK/bMzq6n9r/IZuoPelNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=66bNHCUSp1na50mlvQ0gb3LddztgjRpB4XfrDIKnsr4=;
 b=GTk8yf/tCJZcbdvuD+0ehViBrInx8cL7GRoYUDxV02rxGy0Fb/iEGbdscTK4ULYsfHq5RK/D35G8WDwJJwXBCoIKQkjbEowDSIYu6JP8ixcu7aqmLY7BOlglhUXI49gFNt6czG6YlJy/+9nHrwCf8i45aL6w+zv/MqnJY2EYwLbImVywEcfU/GevDtj2iorJBAeX4JLFCVHWVaZpZFzaxCdUWE9Hq2eTnhGlOEJ494psHbwg8Iuc67IPmYLtp0gXjQ8JCCVLn5Tx8c1laV3p9C1ErVyghaCGlyUamx/17N5m2VW+ULcEo4Yqj1cWfACqvOn9BxGjuiC5qQEVH0N52g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Rahul Singh
	<Rahul.Singh@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum #1530923
Thread-Topic: [PATCH] xen/arm: Add workaround for Cortex-A55 erratum #1530923
Thread-Index: AQHWwlLLNjr+48XzOEqJDwYwnQ/h+6nXSiYAgABEOYCAASjCgIAACceA
Date: Wed, 25 Nov 2020 12:02:00 +0000
Message-ID: <4D139BCF-873B-4AFA-B187-894520EF6718@arm.com>
References:
 <61a105672650e7470710183f37351b821b818d1e.1606215998.git.bertrand.marquis@arm.com>
 <E5A460E5-7D10-4314-98B4-0D90CD173940@arm.com>
 <alpine.DEB.2.21.2011240944400.7979@sstabellini-ThinkPad-T480s>
 <7b05cb84-a9c3-10b2-5713-42259695e9b1@xen.org>
In-Reply-To: <7b05cb84-a9c3-10b2-5713-42259695e9b1@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 2a6d3034-da2c-4cec-db23-08d89139ee8d
x-ms-traffictypediagnostic: DB8PR08MB5084:|DBBPR08MB4663:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB4663ABC00051D481629323469DFA0@DBBPR08MB4663.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 YAah3qDfxgCFRn5O6rSptTU4aTCGZ5+APB1oNIMa/oTSIo41amBTFlgQY1PCqRqAVRtBAK4C2Ab9EAeb/iJAoVw27lF4pdVM7UIgetJAVs6uUn0plYfeDWBuEGjFWAguhTs+OAAuDhgHEBxfMiabxfxJhsAJFA/Z5KHfof5WcigkhCJHggdDw6KCM8z6nFgnCOGs06zcEUJZUhgeqlEnngAvv/0rZDbL9kHI3z0KicSBnuZjCTf9rinPR2QhrAsVuKanl8CuGOQylu2viDTPHFt0KGYwIlTrM4BpTnET3OYwG0d7qpNWrHVEXY342XMtg/Oo9bFpqBZak4CPXdn4iA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(346002)(396003)(39860400002)(136003)(66446008)(64756008)(66556008)(2616005)(4326008)(6486002)(66946007)(36756003)(186003)(54906003)(6512007)(316002)(2906002)(53546011)(6506007)(478600001)(26005)(6916009)(33656002)(76116006)(66476007)(91956017)(8936002)(86362001)(5660300002)(8676002)(4744005)(71200400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 EeU0ULv85bbD222Z4O/b6DUryldmIqD4g1TM/o0RquU+xUIM3BcvJ06cBNhTRyyZu0cxa+MZ8KiACwp+cmjhHTK2zqQvb87BCaslPXlBql5MWOi18hKWxQ9nsePW0qT+WlLLDxy7Gzr7ee9MlwQSiVStBU4F9RUWBJt6m7DY5Ha2Pd3IdfgLR2C2sr8aQ18cFjf3qI8tZipSOqi7mU3TRyDSB12cS1J1xtOITufXs9fHyPEosF2hVw2WyRqHTCqiQUeYjthCNt6MVnF0uOQXvJppTDgNnr31R0JlK1QpbV9orwNTSy45D5JFXjIgNlnUzNAnARsxVlXOEtuAanYyarOJrl55MjDF5F80DblOH4910iERQl0i+q9Opd1n+ltV49f1ow/u5i6F6Ijed+CAWVik7eYDu6vcH0Lt/dQUE72KwWXH/9sHtCBipFQzOFjJOQU5OWVlZOsAIzuP8DfZL9nLtz7ymvNBv0RoDZOKzYOE03u4Sv4ZtGbvFJvudgy65E+v8KIKavc9Lfkga25h64LjNwrqLywAsYgVCaeQfYFEeKZTOXPrQRrmwDBtnxygjZ9yWwvixXJRPEIVQcD4I8JuUIllga8io5Qn7KF2w+lv5g6ZPe6hsztuFuTXhr3Nq43/E55dNRI9KZbrDCXUAmmQI//wZHCNNKInDnJDY5iaLesqmZRHIIIMZYK87Oyr6VK34QzDb02CWZJhHuVy2/3jiC/NpQ3HpH5pxYZ0XiDSBbc0ogpxbjWACZSdjWMGpIFROrqySTpm0RlA98JqsR7Ti4DQqN+R/AhKZ9zzy5M6xX/FoQrAABk1rPhT6kSHpgLeoHQqHiKkscmflWfWS1N0nwpuNSimXG8WmcTXc5OO0kQ+sQEqSyNkK/9BRF0TqK1xjUjRbVL3bAEEfAGHug==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <14EDCD9E965D1F4B87D280AE347DC07C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5084
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	82b3a4f0-9944-4e9b-2d03-08d89139ea72
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WpPc8c/kk6jaZF1z9YhvzECDI1cxfhxIunaZQue24UfMMo7jZldl9lKIcwQXBygOdozdXqlTpFoX+J5Hx1CqrWT1itMA2ACzDvEVsKZsIi+JgeUipdAGTq0rALC/tMxssFBOf+2C4krIb8WSRVbgeuNHW5wtAKlWV64Fc52gK7bpscx7alhoksUuWDKrKWX1m/gauVJ08r+qgXjqs/MB/GMu9egw8mhMSss+KC6tDnjZWpQyOVJ2XACtBJNLpqmPdIsIgrbqJjjpC0xeuWSPJrxgB9Pet2XTGCmmsW47op5o3QEMUDMejuAD/jmAywEjMey4JedM2DpHLMcYScZuYPMQ992KrwAk+KrGVaxrYpFc/CQ4tQBcB8GuMJTYbddpxulU2qbP9Bi/eJJ1RgR5Lw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(376002)(396003)(39860400002)(346002)(46966005)(70586007)(2906002)(356005)(186003)(70206006)(33656002)(36756003)(47076004)(5660300002)(86362001)(82310400003)(82740400003)(2616005)(316002)(6506007)(478600001)(4326008)(81166007)(36906005)(54906003)(6486002)(6512007)(8676002)(8936002)(26005)(53546011)(336012)(6862004)(107886003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Nov 2020 12:02:07.3872
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2a6d3034-da2c-4cec-db23-08d89139ee8d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4663



> On 25 Nov 2020, at 11:26, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 24/11/2020 17:44, Stefano Stabellini wrote:
>> On Tue, 24 Nov 2020, Rahul Singh wrote:
>>>> On 24 Nov 2020, at 11:12 am, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>>>> 
>>>> On the Cortex A55, TLB entries can be allocated by a speculative AT
>>>> instruction. If this is happening during a guest context switch with an
>>>> inconsistent page table state in the guest, TLBs with wrong values might
>>>> be allocated.
>>>> The ARM64_WORKAROUND_AT_SPECULATE workaround is used as for erratum
>>>> 1165522 on Cortex A76 or Neoverse N1.
>>>> 
>>>> This change is also introducing the MIDR identifier for the Cortex-A55.
>>>> 
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> 
>>> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> And committed.

Thanks :-)

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:10:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:10:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37668.70119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtcg-0007V0-Sk; Wed, 25 Nov 2020 12:10:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37668.70119; Wed, 25 Nov 2020 12:10:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtcg-0007Ut-PP; Wed, 25 Nov 2020 12:10:10 +0000
Received: by outflank-mailman (input) for mailman id 37668;
 Wed, 25 Nov 2020 12:10:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xEKP=E7=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1khtcg-0007Uo-CH
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:10:10 +0000
Received: from mail-io1-xd34.google.com (unknown [2607:f8b0:4864:20::d34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1a6657f-6250-4cf0-9c57-083d71ccba7e;
 Wed, 25 Nov 2020 12:10:09 +0000 (UTC)
Received: by mail-io1-xd34.google.com with SMTP id m9so1874232iox.10
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 04:10:09 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id v85sm1343250ilk.50.2020.11.25.04.10.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Nov 2020 04:10:08 -0800 (PST)
X-Inumbo-ID: e1a6657f-6250-4cf0-9c57-083d71ccba7e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=7Gu8dfUtFbW1JkuoI0JsJxuRwdXnhq5kHzNpgpg59Vg=;
        b=qBMGf8ZlpGBDrQf4OZ1U5hpBWVyNLxT8t7mb89EQfIXn+CYHj49+4NspUMpFZN+IBb
         ow2ADzA501Y4hGPFKjrnnAGIPGi9hyvAwwXtpDtFAkYj7ksmhQny5Q5+xlY6jtJjsSvN
         iA9i5fMRanrtfyZfe3+dWZgmeaV0x4lRD+7LgXZczkfB+RbnQDZrkTxOZ7Es8MQAef6B
         e5RRaQ3t+46Lg0Zd/h/8g8StzDseAQGAZgvUuKfsUJH8/o4saKvewPtw+57G9TyZa/BY
         /ViJEbywwBG8XKFF7bgqckI6jrquPx5+GfHPK5ghMTcJ5KumQuJnVjVzT50eAtnQOxE9
         Uwfg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=7Gu8dfUtFbW1JkuoI0JsJxuRwdXnhq5kHzNpgpg59Vg=;
        b=K0tMNVJhiLUX3qpanghMJXxk9RmpLVfWF82ac19liG/CVlGqGz9ujMz8vLqM38TdTX
         7i5ZL0uvAPnF3afBbkHyxX9CxwDBn2C30a1+GGYHMGj/DamU04Xs41q9NTrqrVdzboAA
         9rAYR0En2iRAAjnY8+Rd1eD9NY/lvGoj71UEDzXoQOOj7oKI5l3NSJ6RM6hsIuv6wCng
         674VLOq0AgC9RRsLWxfhXjDrC7TAK3BFZP6YNJLAaxt0doex3xd7sAUVP+OqfA61MkQs
         1NC0swpcOTIwsEmRhXXWjvZDPWkTNE9NYnzBLDtmuwodWEznbxWup1vPSDCJR6J0xXv4
         SJLw==
X-Gm-Message-State: AOAM531Kzf7KIHhjivKRtY1IiWOq2Uf+QhBNR59mKd2CCm/ak84hoqkV
	ejlZ8m/TuHoz9rYccO9ACZ0=
X-Google-Smtp-Source: ABdhPJzrP+6qlAsvVf6NJAz9IW3W/tDwuPyEzkZeYfjLrVcKdFZvzQhG4Xm+JItZe22HwNKBFUD46g==
X-Received: by 2002:a5e:8206:: with SMTP id l6mr2327550iom.126.1606306209079;
        Wed, 25 Nov 2020 04:10:09 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Wed, 25 Nov 2020 07:09:46 -0500
From: Tejun Heo <tj@kernel.org>
To: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 11/20] block: reference struct block_device from struct
 hd_struct
Message-ID: <X75JitlWvZieqIR3@mtj.duckdns.org>
References: <20201118084800.2339180-1-hch@lst.de>
 <20201118084800.2339180-12-hch@lst.de>
 <X708BTJ5njtbC2z1@mtj.duckdns.org>
 <20201125114044.GC16944@quack2.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201125114044.GC16944@quack2.suse.cz>

Hey, Jan,

On Wed, Nov 25, 2020 at 12:40:44PM +0100, Jan Kara wrote:
> > I don't think this is necessary now that the bdev and inode lifetimes are
> > one. Before, punching out the association early was necessary because we
> > could be in a situation where we can successfully look up a part from idr
> > and then try to pin the associated disk which may already be freed. With the
> > new code, the lookup is through the inode whose lifetime is one and the same
> > with gendisk, so use-after-free isn't possible and __blkdev_get() will
> > reliably reject such open attempts.
> 
> I think the remove_inode_hash() call is actually still needed. Consider a
> situation when the disk is unplugged, gendisk gets destroyed, bdev still
> lives on (e.g. because it is still open). Device gets re-plugged, gendisk
> for the same device number gets created. But we really need new bdev for
> this because from higher level POV this is completely new device. And the
> old bdev needs to live on as long as it is open. So IMO we still need to
> just unhash the inode and leave it lingering in the background.

You're absolutely right. I was only thinking about the lifetime problem
described in the comment. So, it just needs an updated comment there, I
think.

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:10:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:10:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37672.70130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtdO-0007b0-6H; Wed, 25 Nov 2020 12:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37672.70130; Wed, 25 Nov 2020 12:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtdO-0007at-31; Wed, 25 Nov 2020 12:10:54 +0000
Received: by outflank-mailman (input) for mailman id 37672;
 Wed, 25 Nov 2020 12:10:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xP3f=E7=amazon.co.uk=prvs=591f578ad=pdurrant@srs-us1.protection.inumbo.net>)
 id 1khtdN-0007an-Cy
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:10:53 +0000
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8631f98b-f719-4fcd-8240-a13d22d6d25d;
 Wed, 25 Nov 2020 12:10:52 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1a-e34f1ddc.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 25 Nov 2020 12:10:45 +0000
Received: from EX13D03EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1a-e34f1ddc.us-east-1.amazon.com (Postfix) with ESMTPS
 id ED823A18B5; Wed, 25 Nov 2020 12:10:43 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D03EUC004.ant.amazon.com (10.43.164.33) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 25 Nov 2020 12:10:43 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 25 Nov 2020 12:10:43 +0000
X-Inumbo-ID: 8631f98b-f719-4fcd-8240-a13d22d6d25d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1606306253; x=1637842253;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=bt8fTJDW8RhOtdLh62gbZQ9TL1jIRpogBwkQWBdfsT0=;
  b=pQjSktm9U/xpLoqUcdP7n9fy45eMjycC7yDKJKs1mq0acPGC7LbYEDOA
   owwPJW3Em7+9eTqhpu1eoE73TYHjij7Hun4IWG9OLsI3XWJWprNKZM7mZ
   T8RzwUFbobRWbkP3tXiAdnTbZRtV5WcyrXHm1xB8FLon0gzy/KQIFpVTh
   4=;
X-IronPort-AV: E=Sophos;i="5.78,368,1599523200"; 
   d="scan'208";a="90794913"
Subject: RE: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
Thread-Topic: [PATCH v4 1/3] domctl: introduce a new domain create flag,
 XEN_DOMCTL_CDF_disable_fifo, ...
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, "paul@xen.org" <paul@xen.org>
CC: "Elnikety, Eslam" <elnikety@amazon.com>, 'Christian Lindig'
	<christian.lindig@citrix.com>, 'David Scott' <dave@recoil.org>, 'Ian Jackson'
	<iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>, 'Andrew Cooper'
	<andrew.cooper3@citrix.com>, 'George Dunlap' <george.dunlap@citrix.com>,
	'Julien Grall' <julien@xen.org>, 'Stefano Stabellini'
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Thread-Index: AQHWwpaPMlIHtWtagUGiOXvRZvLCO6nYkyyAgAAe6YCAAAtbAIAAA6Jg
Date: Wed, 25 Nov 2020 12:10:42 +0000
Message-ID: <5bcac7a166d2476187ed5bab9e84e506@EX13D32EUC003.ant.amazon.com>
References: <20201124191751.11472-1-paul@xen.org>
 <20201124191751.11472-2-paul@xen.org>
 <444917ac-f2aa-5544-8f6c-097e7f57c98c@suse.com>
 <009001d6c31b$a1eaeef0$e5c0ccd0$@xen.org>
 <3ab33c4e-56af-5690-32b8-a89c5e27761b@suse.com>
In-Reply-To: <3ab33c4e-56af-5690-32b8-a89c5e27761b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.145]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 25 November 2020 11:51
> To: paul@xen.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Elnikety, Eslam <elnikety@amazon.com>; 'Christian Lindig'
> <christian.lindig@citrix.com>; 'David Scott' <dave@recoil.org>; 'Ian Jackson' <iwj@xenproject.org>;
> 'Wei Liu' <wl@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
> <george.dunlap@citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano Stabellini'
> <sstabellini@kernel.org>; xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH v4 1/3] domctl: introduce a new domain create flag,
> XEN_DOMCTL_CDF_disable_fifo, ...
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 25.11.2020 12:10, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 25 November 2020 09:20
> >>
> >> On 24.11.2020 20:17, Paul Durrant wrote:
> >>> From: Paul Durrant <pdurrant@amazon.com>
> >>>
> >>> ...to control the visibility of the FIFO event channel operations
> >>> (EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
> >>> the guest.
> >>>
> >>> These operations were added to the public header in commit d2d50c2f308f
> >>> ("evtchn: add FIFO-based event channel ABI") and the first implementation
> >>> appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
> >>> EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
> >>> ("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
> >>> that, a guest issuing those operations would receive a return value of
> >>> -ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
> >>> running on an older (pre-4.4) Xen would fall back to using the 2-level event
> >>> channel interface upon seeing this return value.
> >>>
> >>> Unfortunately the uncontrolable appearance of these new operations in Xen 4.4
> >>> onwards has implications for hibernation of some Linux guests. During resume
> >>> from hibernation, there are two kernels involved: the "boot" kernel and the
> >>> "resume" kernel. The guest boot kernel may default to use FIFO operations and
> >>> instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
> >>> other hand, the resume kernel keeps assuming 2-level, because it was hibernated
> >>> on a version of Xen that did not support the FIFO operations.
> >>
> >> And the alternative of the boot kernel issuing EVTCHNOP_reset has
> >> other unwanted consequences. Maybe worth mentioning here, as
> >> otherwise this would look like the obvious way to return to 2-level
> >> mode?
> >>
> >> Also, why can't the boot kernel be instructed to avoid engaging
> >> FIFO mode?
> >>
> >
> > Both of those are, of course, viable alternatives if the guest can be
> > modified. The problem we need to work around is guest that are already
> > out there and cannot be updated.
> 
> Making use of EVTCHNOP_reset indeed would require a change to the
> kernel. But Linux has a command line option to suppress use of
> FIFO event channels, so I can't see why the boot kernel couldn't
> be passed this flag without any modification to the binary.
> 

I'm sure that was considered and found not to be feasible in our use-case. (Likely the command line is as much baked into the guest image as the kernel itself).

> >>> To maintain compatibility it is necessary to make Xen behave as it did
> >>> before the new operations were added and hence the code in this patch ensures
> >>> that, if XEN_DOMCTL_CDF_disable_fifo is set, the FIFO event channel operations
> >>> will again result in -ENOSYS being returned to the guest.
> >>
> >> Are there indeed dependencies on the precise return value anywhere?
> >> If so, the generally inappropriate use (do_event_channel_op()'s
> >> default case really would also need switching) would want a brief
> >> comment, so it'll be understood by readers that this isn't code to
> >> derive other code from. If not, -EPERM or -EACCES perhaps?
> >>
> >
> > The patch, as stated, is reverting behaviour and so the -ENOSYS really
> > needs to stay since it is essentially ABI now. I am not aware of guest
> > code that will, in fact, die if it sees -EPERM or -EACCES instead but
> > there may be such code. The only safe thing to do is to make things
> > look like the used to.
> 
> I don't think specific error codes can be considered "ABI". Not
> the least because, if there are multiple causes for an error, it
> ought to be undefined which error gets returned. A guest not
> falling back to 2-level on _any_ error here is basically setting
> itself up for eventual failure because of e.g. getting back
> -ENOMEM. Or someone deciding to add an XSM check to the function.
> 

I'm not disagreeing that depending on -ENOSYS is a bad idea but, before FIFO came along, that's what the guest would see and that is what it ought to see again if we want to truly obscure the interface (which is the stated aim here).

> As said, I'm of the opinion that the other -ENOSYS ought to be
> replaced as well, which of course would be precluded if this was
> considered "ABI".
> 

Indeed.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:15:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:15:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37691.70143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khti3-0007q0-Si; Wed, 25 Nov 2020 12:15:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37691.70143; Wed, 25 Nov 2020 12:15:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khti3-0007pt-PE; Wed, 25 Nov 2020 12:15:43 +0000
Received: by outflank-mailman (input) for mailman id 37691;
 Wed, 25 Nov 2020 12:15:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khti2-0007po-Qr
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:15:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khti0-0006MZ-KD; Wed, 25 Nov 2020 12:15:40 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khti0-0003Lg-8h; Wed, 25 Nov 2020 12:15:40 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=CT754L8UlLzqqf/JvmUA01cwFf2t8+0xTpzu6BdkLFg=; b=HCh8XK5ur16qRvLx/M2qC+ZQvJ
	EtCwJwXt3QOoFgz6etAqNs6hmL5U97KibQMu/4Z8xFj9Z7hBTB9VTjjJS9rutbiIs/mclLTF7oDDF
	R6kOwqMMo5vhcsHdDTIdXGjKORzRWCIHnkCuMP8LlMQgzm3l8IR62edTiHqXZYnpnT1U=;
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
Date: Wed, 25 Nov 2020 12:15:38 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/11/2020 14:23, Jan Beulich wrote:
> All of the array allocations in grant_table_init() can exceed a page's
> worth of memory, which xmalloc()-based interfaces aren't really suitable
> for after boot. We also don't need any of these allocations to be
> physically contiguous. Introduce interfaces dynamically switching
> between xmalloc() et al and vmalloc() et al, based on requested size,
> and use them instead.
> 
> All the wrappers in the new header get cloned mostly verbatim from
> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
> for sizes and to unsigned int for alignments.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Actually edit a copy-and-pasted comment in xvmalloc.h which was
>      meant to be edited from the beginning.
> ---
> I'm unconvinced of the mentioning of "physically contiguous" in the
> comment at the top of the new header: I don't think xmalloc() provides
> such a guarantee. Any use assuming so would look (latently) broken to
> me.

I haven't had the chance to reply to the first version about this. So I 
will reply here to avoid confusion.

I can at least spot one user in the Arm code that would use xmalloc() 
that way (see the allocation of itt_addr in arch/arm/gic-v3-its.c).

AFAIK, the memory is for the sole purpose of the ITS and should not be 
accessed by Xen. So I think we can replace it with a new version of 
alloc_domheap_pages().

However, I still question the usefulness of introducing yet another way 
to allocate memory (we already have alloc_xenheap_pages(), xmalloc(), 
alloc_domheap_pages(), vmap()) if you think users cannot rely on 
xmalloc() to allocate memory physically contiguous.

It definitely makes it more difficult to figure out when to use 
xmalloc() vs xvmalloc().

I would like to hear an opinion from the other maintainers.

> 
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -37,7 +37,7 @@
>   #include <xen/iommu.h>
>   #include <xen/paging.h>
>   #include <xen/keyhandler.h>
> -#include <xen/vmap.h>
> +#include <xen/xvmalloc.h>
>   #include <xen/nospec.h>
>   #include <xsm/xsm.h>
>   #include <asm/flushtlb.h>
> @@ -1925,27 +1925,28 @@ int grant_table_init(struct domain *d, i
>       d->grant_table = gt;
>   
>       /* Active grant table. */
> -    gt->active = xzalloc_array(struct active_grant_entry *,
> -                               max_nr_active_grant_frames(gt));
> +    gt->active = xvzalloc_array(struct active_grant_entry *,
> +                                max_nr_active_grant_frames(gt));
>       if ( gt->active == NULL )
>           goto out;
>   
>       /* Tracking of mapped foreign frames table */
>       if ( gt->max_maptrack_frames )
>       {
> -        gt->maptrack = vzalloc(gt->max_maptrack_frames * sizeof(*gt->maptrack));
> +        gt->maptrack = xvzalloc_array(struct grant_mapping *,
> +                                      gt->max_maptrack_frames);
>           if ( gt->maptrack == NULL )
>               goto out;
>       }
>   
>       /* Shared grant table. */
> -    gt->shared_raw = xzalloc_array(void *, gt->max_grant_frames);
> +    gt->shared_raw = xvzalloc_array(void *, gt->max_grant_frames);
>       if ( gt->shared_raw == NULL )
>           goto out;
>   
>       /* Status pages for grant table - for version 2 */
> -    gt->status = xzalloc_array(grant_status_t *,
> -                               grant_to_status_frames(gt->max_grant_frames));
> +    gt->status = xvzalloc_array(grant_status_t *,
> +                                grant_to_status_frames(gt->max_grant_frames));
>       if ( gt->status == NULL )
>           goto out;
>   
> @@ -3870,19 +3871,19 @@ grant_table_destroy(
>   
>       for ( i = 0; i < nr_grant_frames(t); i++ )
>           free_xenheap_page(t->shared_raw[i]);
> -    xfree(t->shared_raw);
> +    xvfree(t->shared_raw);
>   
>       for ( i = 0; i < nr_maptrack_frames(t); i++ )
>           free_xenheap_page(t->maptrack[i]);
> -    vfree(t->maptrack);
> +    xvfree(t->maptrack);
>   
>       for ( i = 0; i < nr_active_grant_frames(t); i++ )
>           free_xenheap_page(t->active[i]);
> -    xfree(t->active);
> +    xvfree(t->active);
>   
>       for ( i = 0; i < nr_status_frames(t); i++ )
>           free_xenheap_page(t->status[i]);
> -    xfree(t->status);
> +    xvfree(t->status);
>   
>       xfree(t);
>       d->grant_table = NULL;
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -7,6 +7,7 @@
>   #include <xen/spinlock.h>
>   #include <xen/types.h>
>   #include <xen/vmap.h>
> +#include <xen/xvmalloc.h>
>   #include <asm/page.h>
>   
>   static DEFINE_SPINLOCK(vm_lock);
> @@ -301,11 +302,29 @@ void *vzalloc(size_t size)
>       return p;
>   }
>   
> -void vfree(void *va)
> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
>   {
> -    unsigned int i, pages;
> +    unsigned int i;
>       struct page_info *pg;
>       PAGE_LIST_HEAD(pg_list);
> +
> +    ASSERT(pages);
> +
> +    for ( i = 0; i < pages; i++ )
> +    {
> +        pg = vmap_to_page(va + i * PAGE_SIZE);
> +        ASSERT(pg);
> +        page_list_add(pg, &pg_list);
> +    }
> +    vunmap(va);
> +
> +    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
> +        free_domheap_page(pg);
> +}
> +
> +void vfree(void *va)
> +{
> +    unsigned int pages;
>       enum vmap_region type = VMAP_DEFAULT;
>   
>       if ( !va )
> @@ -317,18 +336,71 @@ void vfree(void *va)
>           type = VMAP_XEN;
>           pages = vm_size(va, type);
>       }
> -    ASSERT(pages);
>   
> -    for ( i = 0; i < pages; i++ )
> +    _vfree(va, pages, type);
> +}
> +
> +void xvfree(void *va)
> +{
> +    unsigned int pages = vm_size(va, VMAP_DEFAULT);
> +
> +    if ( pages )
> +        _vfree(va, pages, VMAP_DEFAULT);
> +    else
> +        xfree(va);
> +}
> +
> +void *_xvmalloc(size_t size, unsigned int align)
> +{
> +    ASSERT(align <= PAGE_SIZE);
> +    return size <= PAGE_SIZE ? _xmalloc(size, align) : vmalloc(size);
> +}
> +
> +void *_xvzalloc(size_t size, unsigned int align)
> +{
> +    ASSERT(align <= PAGE_SIZE);
> +    return size <= PAGE_SIZE ? _xzalloc(size, align) : vzalloc(size);
> +}
> +
> +void *_xvrealloc(void *va, size_t size, unsigned int align)
> +{
> +    size_t pages = vm_size(va, VMAP_DEFAULT);
> +    void *ptr;
> +
> +    ASSERT(align <= PAGE_SIZE);
> +
> +    if ( !pages )
>       {
> -        struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);
> +        if ( size <= PAGE_SIZE )
> +            return _xrealloc(va, size, align);
>   
> -        ASSERT(page);
> -        page_list_add(page, &pg_list);
> +        ptr = vmalloc(size);
> +        if ( ptr && va && va != ZERO_BLOCK_PTR )
> +        {
> +            /*
> +             * xmalloc-based allocations up to PAGE_SIZE don't cross page
> +             * boundaries. Therefore, without needing to know the exact
> +             * prior allocation size, simply copy the entire tail of the
> +             * page containing the earlier allocation.
> +             */
> +            memcpy(ptr, va, PAGE_SIZE - PAGE_OFFSET(va));
> +            xfree(va);
> +        }
> +    }
> +    else if ( pages == PFN_UP(size) )
> +        ptr = va;
> +    else
> +    {
> +        ptr = _xvmalloc(size, align);
> +        if ( ptr )
> +        {
> +            memcpy(ptr, va, min(size, pages << PAGE_SHIFT));
> +            vfree(va);
> +        }
> +        else if ( pages > PFN_UP(size) )
> +            ptr = va;
>       }
> -    vunmap(va);
>   
> -    while ( (pg = page_list_remove_head(&pg_list)) != NULL )
> -        free_domheap_page(pg);
> +    return ptr;
>   }
>   #endif
> --- /dev/null
> +++ b/xen/include/xen/xvmalloc.h
> @@ -0,0 +1,73 @@
> +
> +#ifndef __XVMALLOC_H__
> +#define __XVMALLOC_H__
> +
> +#include <xen/cache.h>
> +#include <xen/types.h>
> +
> +/*
> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
> + * worth of memory, as long as there's no need to have physically contiguous
> + * memory allocated.  These should be used in preference to xmalloc() et al
> + * whenever the size is not known to be constrained to at most a single page.
> + */
> +
> +/* Allocate space for typed object. */
> +#define xvmalloc(_type) ((_type *)_xvmalloc(sizeof(_type), __alignof__(_type)))
> +#define xvzalloc(_type) ((_type *)_xvzalloc(sizeof(_type), __alignof__(_type)))
> +
> +/* Allocate space for array of typed objects. */
> +#define xvmalloc_array(_type, _num) \
> +    ((_type *)_xvmalloc_array(sizeof(_type), __alignof__(_type), _num))
> +#define xvzalloc_array(_type, _num) \
> +    ((_type *)_xvzalloc_array(sizeof(_type), __alignof__(_type), _num))
> +
> +/* Allocate space for a structure with a flexible array of typed objects. */
> +#define xvzalloc_flex_struct(type, field, nr) \
> +    ((type *)_xvzalloc(offsetof(type, field[nr]), __alignof__(type)))
> +
> +#define xvmalloc_flex_struct(type, field, nr) \
> +    ((type *)_xvmalloc(offsetof(type, field[nr]), __alignof__(type)))
> +
> +/* Re-allocate space for a structure with a flexible array of typed objects. */
> +#define xvrealloc_flex_struct(ptr, field, nr)                          \
> +    ((typeof(ptr))_xvrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \
> +                             __alignof__(typeof(*(ptr)))))
> +
> +/* Allocate untyped storage. */
> +#define xvmalloc_bytes(_bytes) _xvmalloc(_bytes, SMP_CACHE_BYTES)
> +#define xvzalloc_bytes(_bytes) _xvzalloc(_bytes, SMP_CACHE_BYTES)
> +
> +/* Free any of the above. */
> +extern void xvfree(void *);
> +
> +/* Free an allocation, and zero the pointer to it. */
> +#define XVFREE(p) do { \
> +    xvfree(p);         \
> +    (p) = NULL;        \
> +} while ( false )
> +
> +/* Underlying functions */
> +extern void *_xvmalloc(size_t size, unsigned int align);
> +extern void *_xvzalloc(size_t size, unsigned int align);
> +extern void *_xvrealloc(void *ptr, size_t size, unsigned int align);
> +
> +static inline void *_xvmalloc_array(
> +    size_t size, unsigned int align, unsigned long num)
> +{
> +    /* Check for overflow. */
> +    if ( size && num > UINT_MAX / size )
> +        return NULL;
> +    return _xvmalloc(size * num, align);
> +}
> +
> +static inline void *_xvzalloc_array(
> +    size_t size, unsigned int align, unsigned long num)
> +{
> +    /* Check for overflow. */
> +    if ( size && num > UINT_MAX / size )
> +        return NULL;
> +    return _xvzalloc(size * num, align);
> +}
> +
> +#endif /* __XVMALLOC_H__ */
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:16:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:16:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37698.70155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtj6-0007x1-7G; Wed, 25 Nov 2020 12:16:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37698.70155; Wed, 25 Nov 2020 12:16:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtj6-0007wu-3C; Wed, 25 Nov 2020 12:16:48 +0000
Received: by outflank-mailman (input) for mailman id 37698;
 Wed, 25 Nov 2020 12:16:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khtj4-0007wn-Kk
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:16:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c848016-7905-41ec-8c95-2d46b5050c67;
 Wed, 25 Nov 2020 12:16:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E6B3AF0B;
 Wed, 25 Nov 2020 12:16:44 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 2138A1E130F; Wed, 25 Nov 2020 13:16:44 +0100 (CET)
X-Inumbo-ID: 2c848016-7905-41ec-8c95-2d46b5050c67
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 13:16:44 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 02/45] filemap: consistently use ->f_mapping over
 ->i_mapping
Message-ID: <20201125121644.GF16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-3-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-3-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:08, Christoph Hellwig wrote:
> Use file->f_mapping in all remaining places that have a struct file
> available to properly handle the case where inode->i_mapping !=
> file_inode(file)->i_mapping.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/filemap.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index d5e7c2029d16b4..4f583489aa3c2a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2886,14 +2886,14 @@ EXPORT_SYMBOL(filemap_map_pages);
>  
>  vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
>  {
> +	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
>  	struct page *page = vmf->page;
> -	struct inode *inode = file_inode(vmf->vma->vm_file);
>  	vm_fault_t ret = VM_FAULT_LOCKED;
>  
> -	sb_start_pagefault(inode->i_sb);
> +	sb_start_pagefault(mapping->host->i_sb);
>  	file_update_time(vmf->vma->vm_file);
>  	lock_page(page);
> -	if (page->mapping != inode->i_mapping) {
> +	if (page->mapping != mapping) {
>  		unlock_page(page);
>  		ret = VM_FAULT_NOPAGE;
>  		goto out;
> @@ -2906,7 +2906,7 @@ vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
>  	set_page_dirty(page);
>  	wait_for_stable_page(page);
>  out:
> -	sb_end_pagefault(inode->i_sb);
> +	sb_end_pagefault(mapping->host->i_sb);
>  	return ret;
>  }
>  
> @@ -3149,10 +3149,9 @@ void dio_warn_stale_pagecache(struct file *filp)
>  {
>  	static DEFINE_RATELIMIT_STATE(_rs, 86400 * HZ, DEFAULT_RATELIMIT_BURST);
>  	char pathname[128];
> -	struct inode *inode = file_inode(filp);
>  	char *path;
>  
> -	errseq_set(&inode->i_mapping->wb_err, -EIO);
> +	errseq_set(&filp->f_mapping->wb_err, -EIO);
>  	if (__ratelimit(&_rs)) {
>  		path = file_path(filp, pathname, sizeof(pathname));
>  		if (IS_ERR(path))
> @@ -3179,7 +3178,7 @@ generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
>  
>  	if (iocb->ki_flags & IOCB_NOWAIT) {
>  		/* If there are pages to writeback, return */
> -		if (filemap_range_has_page(inode->i_mapping, pos,
> +		if (filemap_range_has_page(file->f_mapping, pos,
>  					   pos + write_len - 1))
>  			return -EAGAIN;
>  	} else {
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:19:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37706.70166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtlR-0008AO-KG; Wed, 25 Nov 2020 12:19:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37706.70166; Wed, 25 Nov 2020 12:19:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtlR-0008AH-HG; Wed, 25 Nov 2020 12:19:13 +0000
Received: by outflank-mailman (input) for mailman id 37706;
 Wed, 25 Nov 2020 12:19:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khtlR-0008AC-1O
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:19:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 963a6559-43f6-443c-b730-6485db45fa6e;
 Wed, 25 Nov 2020 12:19:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1AB0FACBD;
 Wed, 25 Nov 2020 12:19:11 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id D76D91E130F; Wed, 25 Nov 2020 13:19:10 +0100 (CET)
X-Inumbo-ID: 963a6559-43f6-443c-b730-6485db45fa6e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 13:19:10 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 03/45] fs: remove get_super_thawed and
 get_super_exclusive_thawed
Message-ID: <20201125121910.GG16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-4-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-4-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:09, Christoph Hellwig wrote:
> Just open code the wait in the only caller of both functions.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Fine by me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/internal.h      |  2 ++
>  fs/quota/quota.c   | 31 +++++++++++++++++++++-------
>  fs/super.c         | 51 ++--------------------------------------------
>  include/linux/fs.h |  4 +---
>  4 files changed, 29 insertions(+), 59 deletions(-)
> 
> diff --git a/fs/internal.h b/fs/internal.h
> index a7cd0f64faa4ab..47be21dfeebef5 100644
> --- a/fs/internal.h
> +++ b/fs/internal.h
> @@ -114,7 +114,9 @@ extern struct file *alloc_empty_file_noaccount(int, const struct cred *);
>   */
>  extern int reconfigure_super(struct fs_context *);
>  extern bool trylock_super(struct super_block *sb);
> +struct super_block *__get_super(struct block_device *bdev, bool excl);
>  extern struct super_block *user_get_super(dev_t);
> +void put_super(struct super_block *sb);
>  extern bool mount_capable(struct fs_context *);
>  
>  /*
> diff --git a/fs/quota/quota.c b/fs/quota/quota.c
> index 9af95c7a0bbe3c..f3d32b0d9008f2 100644
> --- a/fs/quota/quota.c
> +++ b/fs/quota/quota.c
> @@ -20,6 +20,7 @@
>  #include <linux/writeback.h>
>  #include <linux/nospec.h>
>  #include "compat.h"
> +#include "../internal.h"
>  
>  static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
>  				     qid_t id)
> @@ -868,6 +869,7 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
>  	struct block_device *bdev;
>  	struct super_block *sb;
>  	struct filename *tmp = getname(special);
> +	bool excl = false, thawed = false;
>  
>  	if (IS_ERR(tmp))
>  		return ERR_CAST(tmp);
> @@ -875,17 +877,32 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
>  	putname(tmp);
>  	if (IS_ERR(bdev))
>  		return ERR_CAST(bdev);
> -	if (quotactl_cmd_onoff(cmd))
> -		sb = get_super_exclusive_thawed(bdev);
> -	else if (quotactl_cmd_write(cmd))
> -		sb = get_super_thawed(bdev);
> -	else
> -		sb = get_super(bdev);
> +
> +	if (quotactl_cmd_onoff(cmd)) {
> +		excl = true;
> +		thawed = true;
> +	} else if (quotactl_cmd_write(cmd)) {
> +		thawed = true;
> +	}
> +
> +retry:
> +	sb = __get_super(bdev, excl);
> +	if (thawed && sb && sb->s_writers.frozen != SB_UNFROZEN) {
> +		if (excl)
> +			up_write(&sb->s_umount);
> +		else
> +			up_read(&sb->s_umount);
> +		wait_event(sb->s_writers.wait_unfrozen,
> +			   sb->s_writers.frozen == SB_UNFROZEN);
> +		put_super(sb);
> +		goto retry;
> +	}
> +
>  	bdput(bdev);
>  	if (!sb)
>  		return ERR_PTR(-ENODEV);
> -
>  	return sb;
> +
>  #else
>  	return ERR_PTR(-ENODEV);
>  #endif
> diff --git a/fs/super.c b/fs/super.c
> index 98bb0629ee108e..343e5c1e538d2a 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -307,7 +307,7 @@ static void __put_super(struct super_block *s)
>   *	Drops a temporary reference, frees superblock if there's no
>   *	references left.
>   */
> -static void put_super(struct super_block *sb)
> +void put_super(struct super_block *sb)
>  {
>  	spin_lock(&sb_lock);
>  	__put_super(sb);
> @@ -740,7 +740,7 @@ void iterate_supers_type(struct file_system_type *type,
>  
>  EXPORT_SYMBOL(iterate_supers_type);
>  
> -static struct super_block *__get_super(struct block_device *bdev, bool excl)
> +struct super_block *__get_super(struct block_device *bdev, bool excl)
>  {
>  	struct super_block *sb;
>  
> @@ -789,53 +789,6 @@ struct super_block *get_super(struct block_device *bdev)
>  }
>  EXPORT_SYMBOL(get_super);
>  
> -static struct super_block *__get_super_thawed(struct block_device *bdev,
> -					      bool excl)
> -{
> -	while (1) {
> -		struct super_block *s = __get_super(bdev, excl);
> -		if (!s || s->s_writers.frozen == SB_UNFROZEN)
> -			return s;
> -		if (!excl)
> -			up_read(&s->s_umount);
> -		else
> -			up_write(&s->s_umount);
> -		wait_event(s->s_writers.wait_unfrozen,
> -			   s->s_writers.frozen == SB_UNFROZEN);
> -		put_super(s);
> -	}
> -}
> -
> -/**
> - *	get_super_thawed - get thawed superblock of a device
> - *	@bdev: device to get the superblock for
> - *
> - *	Scans the superblock list and finds the superblock of the file system
> - *	mounted on the device. The superblock is returned once it is thawed
> - *	(or immediately if it was not frozen). %NULL is returned if no match
> - *	is found.
> - */
> -struct super_block *get_super_thawed(struct block_device *bdev)
> -{
> -	return __get_super_thawed(bdev, false);
> -}
> -EXPORT_SYMBOL(get_super_thawed);
> -
> -/**
> - *	get_super_exclusive_thawed - get thawed superblock of a device
> - *	@bdev: device to get the superblock for
> - *
> - *	Scans the superblock list and finds the superblock of the file system
> - *	mounted on the device. The superblock is returned once it is thawed
> - *	(or immediately if it was not frozen) and s_umount semaphore is held
> - *	in exclusive mode. %NULL is returned if no match is found.
> - */
> -struct super_block *get_super_exclusive_thawed(struct block_device *bdev)
> -{
> -	return __get_super_thawed(bdev, true);
> -}
> -EXPORT_SYMBOL(get_super_exclusive_thawed);
> -
>  /**
>   * get_active_super - get an active reference to the superblock of a device
>   * @bdev: device to get the superblock for
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 8667d0cdc71e76..a61df0dd4f1989 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1409,7 +1409,7 @@ enum {
>  
>  struct sb_writers {
>  	int				frozen;		/* Is sb frozen? */
> -	wait_queue_head_t		wait_unfrozen;	/* for get_super_thawed() */
> +	wait_queue_head_t		wait_unfrozen;	/* wait for thaw */
>  	struct percpu_rw_semaphore	rw_sem[SB_FREEZE_LEVELS];
>  };
>  
> @@ -3132,8 +3132,6 @@ extern struct file_system_type *get_filesystem(struct file_system_type *fs);
>  extern void put_filesystem(struct file_system_type *fs);
>  extern struct file_system_type *get_fs_type(const char *name);
>  extern struct super_block *get_super(struct block_device *);
> -extern struct super_block *get_super_thawed(struct block_device *);
> -extern struct super_block *get_super_exclusive_thawed(struct block_device *bdev);
>  extern struct super_block *get_active_super(struct block_device *bdev);
>  extern void drop_super(struct super_block *sb);
>  extern void drop_super_exclusive(struct super_block *sb);
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:24:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:24:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37719.70178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtqk-0000eN-Hj; Wed, 25 Nov 2020 12:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37719.70178; Wed, 25 Nov 2020 12:24:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtqk-0000eG-Ef; Wed, 25 Nov 2020 12:24:42 +0000
Received: by outflank-mailman (input) for mailman id 37719;
 Wed, 25 Nov 2020 12:24:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ecWo=E7=google.com=ndesaulniers@srs-us1.protection.inumbo.net>)
 id 1khtqi-0000eB-T3
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:24:40 +0000
Received: from mail-pf1-x443.google.com (unknown [2607:f8b0:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db519f90-de84-446e-a0dd-9bee39716d14;
 Wed, 25 Nov 2020 12:24:39 +0000 (UTC)
Received: by mail-pf1-x443.google.com with SMTP id w187so2171521pfd.5
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 04:24:39 -0800 (PST)
X-Inumbo-ID: db519f90-de84-446e-a0dd-9bee39716d14
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=9IH0l2L/ELs04A0W/6GC4nhC0e+RvRGWJ1bAzD1+dFc=;
        b=GyrkIL7rJc/Wrkz9wtYqXZYvGBry6qXFkQono0nmrBFDlUCiGmbX9ByD1wUhih87ZW
         XCd/8etF0h65aGuVNHVvGVnSoIRV2cIFxWeuMsMEKDZ+SIKsK6eM3KIHPaY2Au+pxfCB
         jFmSmO0a8jtSnIjbAi/709gkMW9hnqxggrhUNIGI/2GrlejcLn7tyz9MFlSEpE31y19Z
         9ARaZhNBbaKZzII6ioDoEFmbbi01XI+4/fF65wWR3SGfZuCMoV2cgGUJ8Osa8sFeqdbz
         KLllcyBsC3gtRPDbq4Yc+z3inKaZT7D05cYTb7CBHNDdR/afLH0A7E5JHrBWilMAefn4
         uS6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=9IH0l2L/ELs04A0W/6GC4nhC0e+RvRGWJ1bAzD1+dFc=;
        b=U/mVN8E9g7ZWl5ZyNLYkG7UxUb0sQwlCvw9g0SOgLXaRshRGwBhZnZYFX6UO2wF31e
         J74hzs/lZ4NKbHA7MxFBr89CM8NXeH+Tl2E/jUiFvU1KEqzZfgA+YcX7g6cQoM2JBkTD
         /bYspjIcvk+yh5cINjle39/UOVKW9Fhqflh1nZ681nLtkK7+LRXac+QNoZP1HhkrC/Bv
         Rx5vPA4N8QS7bJ45up/IzIIPIoFsiQw4YmyQl5DxRPVCEABoxJVLoyw7lWlmfyjfuQty
         tCNysaCfTCb4qRQ1ywbVS8gKi5UEqddhbUeDKHDy091cYyYYMZcUK7X6a+xA5wnoMYk2
         lkQQ==
X-Gm-Message-State: AOAM531DxvBt5q9sT2UGGrMvFKZdMkiUiryNOu04grAhcC8Bj650gTVg
	++Z6kXQCUMMMTw63xHye7T7MYTAs0wwQiCNfh0uRww==
X-Google-Smtp-Source: ABdhPJySo35UzNwHodlreVMfJuWPwHO1z+zkcbFfSYU3Avf+sN4n16LJPBb97SBockWyJEKx3Xs8q1wCvzejZmwrmAM=
X-Received: by 2002:a62:7905:0:b029:197:f300:5a2a with SMTP id
 u5-20020a6279050000b0290197f3005a2amr2898775pfc.30.1606307078380; Wed, 25 Nov
 2020 04:24:38 -0800 (PST)
MIME-Version: 1.0
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor> <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
In-Reply-To: <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Wed, 25 Nov 2020 04:24:27 -0800
Message-ID: <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org, 
	linux-atm-general@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
	LKML <linux-kernel@vger.kernel.org>, Nathan Chancellor <natechancellor@gmail.com>, 
	linux-ide@vger.kernel.org, dm-devel@redhat.com, keyrings@vger.kernel.org, 
	linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
	linux-security-module@vger.kernel.org, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org, 
	Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
	linux-ext4@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
	linux-arm-msm <linux-arm-msm@vger.kernel.org>, intel-gfx@lists.freedesktop.org, 
	linux-geode@lists.infradead.org, linux-can@vger.kernel.org, 
	linux-block@vger.kernel.org, linux-gpio@vger.kernel.org, 
	op-tee@lists.trustedfirmware.org, linux-mediatek@lists.infradead.org, 
	xen-devel@lists.xenproject.org, nouveau@lists.freedesktop.org, 
	linux-hams@vger.kernel.org, ceph-devel@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
	Linux Memory Management List <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-sctp@vger.kernel.org, 
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
	linux-hardening@vger.kernel.org, 
	Jonathan Cameron <Jonathan.Cameron@huawei.com>, Greg KH <gregkh@linuxfoundation.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 24, 2020 at 11:05 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Tue, 2020-11-24 at 13:32 -0800, Kees Cook wrote:
> > We already enable -Wimplicit-fallthrough globally, so that's not the
> > discussion. The issue is that Clang is (correctly) even more strict
> > than GCC for this, so these are the remaining ones to fix for full
> > Clang coverage too.
> >
> > People have spent more time debating this already than it would have
> > taken to apply the patches. :)
>
> You mean we've already spent 90% of the effort to come this far so we
> might as well go the remaining 10% because then at least we get some
> return? It's certainly a clinching argument in defence procurement ...

So developers and distributions using Clang can't have
-Wimplicit-fallthrough enabled because GCC is less strict (which has
been shown in this thread to lead to bugs)?  We'd like to have nice
things too, you know.

I even agree that most of the churn comes from

case 0:
  ++x;
default:
  break;

which I have a patch for: https://reviews.llvm.org/D91895.  I agree
that can never lead to bugs.  But that's not the sole case of this
series, just most of them.

Though, note how the reviewer (C++ spec editor and clang front end
owner) in https://reviews.llvm.org/D91895 even asks in that review how
maybe a new flag would be more appropriate for a watered
down/stylistic variant of the existing behavior.  And if the current
wording of Documentation/process/deprecated.rst around "fallthrough"
is a straightforward rule of thumb, I kind of agree with him.

>
> > This is about robustness and language wrangling. It's a big code-
> > base, and this is the price of our managing technical debt for
> > permanent robustness improvements. (The numbers I ran from Gustavo's
> > earlier patches were that about 10% of the places adjusted were
> > identified as legitimate bugs being fixed. This final series may be
> > lower, but there are still bugs being found from it -- we need to
> > finish this and shut the door on it for good.)
>
> I got my six patches by analyzing the lwn.net report of the fixes that
> was cited which had 21 of which 50% didn't actually change the emitted
> code, and 25% didn't have a user visible effect.
>
> But the broader point I'm making is just because the compiler people
> come up with a shiny new warning doesn't necessarily mean the problem

That's not what this is, though; you're attacking a strawman.  I'd
encourage you to raise that concern when it actually occurs, which
isn't the case here, where the pushback is actively hindering getting
-Wimplicit-fallthrough enabled for Clang.  This is not a shiny new
warning; it's already on for GCC and has existed in both compilers for
multiple releases.

And I'll also note that warnings are warnings and not errors because
they cannot be proven to be bugs in 100% of cases, but they have led
to bugs in the past.  They require a human to review their intent and
remove ambiguities.  If 97% of cases would end in a break ("Expert C
Programming: Deep C Secrets" - Peter van der Linden), then it starts
to look to me like a language defect; certainly an incorrectly chosen
default.  But the compiler can't know those 3% were intentional,
unless you're explicit for those exceptional cases.

> it's detecting is one that causes us actual problems in the code base.
> I'd really be happier if we had a theory about what classes of CVE or
> bug we could eliminate before we embrace the next new warning.

We don't generally file CVEs and waiting for them to occur might be
too reactive, but I agree that pointing to some additional
documentation in commit messages about how a warning could lead to a
bug would make it clearer to reviewers why being able to enable it
treewide, even if there's no bug in their particular subsystem, is in
the general interest of the commons.

On Mon, Nov 23, 2020 at 7:58 AM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> We're also complaining about the inability to recruit maintainers:
>
> https://www.theregister.com/2020/06/30/hard_to_find_linux_maintainers_says_torvalds/
>
> And burn out:
>
> http://antirez.com/news/129
>
> The whole crux of your argument seems to be maintainers' time isn't
> important so we should accept all trivial patches ... I'm pushing back
> on that assumption in two places, firstly the valuelessness of the time
> and secondly that all trivial patches are valuable.

It's critical to the longevity of any open source project that there
are not single points of failure.  If someone is not expendable or
replaceable (or claims not to be), then that's a risk to the project
and a bottleneck.  Not having a replacement in training or some form
of redundancy is short-sighted.

If trivial patches are adding too much to your workload, consider
training a co-maintainer or asking for help from one of your reviewers
whom you trust.  I don't doubt it's hard to find maintainers, but
existing maintainers should go out of their way to entrust
co-maintainers especially when they find their workload becomes too
high.  And reviewing/picking up trivial patches is probably a great
way to get started.  If we allow too much knowledge of any one
subsystem to collect with one maintainer, what happens when that
maintainer leaves the community (which, given a finite lifespan, is an
inevitability)?
-- 
Thanks,
~Nick Desaulniers


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37727.70190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtvp-0000tp-BI; Wed, 25 Nov 2020 12:29:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37727.70190; Wed, 25 Nov 2020 12:29:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtvp-0000ti-8J; Wed, 25 Nov 2020 12:29:57 +0000
Received: by outflank-mailman (input) for mailman id 37727;
 Wed, 25 Nov 2020 12:29:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khtvn-0000td-Px
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:29:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b297b2f-0c74-4ea9-a344-a5f17df76e05;
 Wed, 25 Nov 2020 12:29:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BF41AACBD;
 Wed, 25 Nov 2020 12:29:53 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 416E21E130F; Wed, 25 Nov 2020 13:29:53 +0100 (CET)
X-Inumbo-ID: 1b297b2f-0c74-4ea9-a344-a5f17df76e05
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 13:29:53 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 04/45] fs: simplify freeze_bdev/thaw_bdev
Message-ID: <20201125122953.GH16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-5-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-5-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:10, Christoph Hellwig wrote:
> Store the frozen superblock in struct block_device to avoid the awkward
> interface that can return a sb only used a cookie, an ERR_PTR or NULL.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Some comments below...

> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index d8664f5c1ff669..60492620d51866 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -548,55 +548,47 @@ EXPORT_SYMBOL(fsync_bdev);
>   * count down in thaw_bdev(). When it becomes 0, thaw_bdev() will unfreeze
>   * actually.
>   */
> -struct super_block *freeze_bdev(struct block_device *bdev)
> +int freeze_bdev(struct block_device *bdev)
>  {
>  	struct super_block *sb;
>  	int error = 0;
>  
>  	mutex_lock(&bdev->bd_fsfreeze_mutex);
> -	if (++bdev->bd_fsfreeze_count > 1) {
> -		/*
> -		 * We don't even need to grab a reference - the first call
> -		 * to freeze_bdev grab an active reference and only the last
> -		 * thaw_bdev drops it.
> -		 */
> -		sb = get_super(bdev);
> -		if (sb)
> -			drop_super(sb);
> -		mutex_unlock(&bdev->bd_fsfreeze_mutex);
> -		return sb;
> -	}
> +	if (++bdev->bd_fsfreeze_count > 1)
> +		goto done;
>  
>  	sb = get_active_super(bdev);
>  	if (!sb)
> -		goto out;
> +		goto sync;
>  	if (sb->s_op->freeze_super)
>  		error = sb->s_op->freeze_super(sb);
>  	else
>  		error = freeze_super(sb);
> +	deactivate_super(sb);
> +
>  	if (error) {
> -		deactivate_super(sb);
>  		bdev->bd_fsfreeze_count--;
> -		mutex_unlock(&bdev->bd_fsfreeze_mutex);
> -		return ERR_PTR(error);
> +		goto done;
>  	}
> -	deactivate_super(sb);
> - out:
> +	bdev->bd_fsfreeze_sb = sb;
> +
> +sync:
>  	sync_blockdev(bdev);
> +done:
>  	mutex_unlock(&bdev->bd_fsfreeze_mutex);
> -	return sb;	/* thaw_bdev releases s->s_umount */
> +	return error;	/* thaw_bdev releases s->s_umount */

The comment about thaw_bdev() seems to be stale?  At least I don't see
what it's referring to anymore...

>  }
>  EXPORT_SYMBOL(freeze_bdev);
>  
>  /**
>   * thaw_bdev  -- unlock filesystem
>   * @bdev:	blockdevice to unlock
> - * @sb:		associated superblock
>   *
>   * Unlocks the filesystem and marks it writeable again after freeze_bdev().
>   */
> -int thaw_bdev(struct block_device *bdev, struct super_block *sb)
> +int thaw_bdev(struct block_device *bdev)
>  {
> +	struct super_block *sb;
>  	int error = -EINVAL;
>  
>  	mutex_lock(&bdev->bd_fsfreeze_mutex);
> @@ -607,6 +599,7 @@ int thaw_bdev(struct block_device *bdev, struct super_block *sb)
>  	if (--bdev->bd_fsfreeze_count > 0)
>  		goto out;
>  
> +	sb = bdev->bd_fsfreeze_sb;
>  	if (!sb)
>  		goto out;
>  
> @@ -618,7 +611,7 @@ int thaw_bdev(struct block_device *bdev, struct super_block *sb)
>  		bdev->bd_fsfreeze_count++;
>  out:
>  	mutex_unlock(&bdev->bd_fsfreeze_mutex);
> -	return error;
> +	return 0;

But we now won't return -EINVAL if this gets called e.g. with
bd_fsfreeze_count == 0, right?
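To make the concern concrete, a standalone sketch (hypothetical names,
not the kernel function itself) of the return path in question: with
"return 0" at the exit label, the -EINVAL set up front for an
unbalanced thaw never reaches the caller, whereas "return error"
preserves it:

```c
/* Hypothetical model of the thaw return path.  SKETCH_EINVAL stands
 * in for the kernel's EINVAL; fsfreeze_count models
 * bdev->bd_fsfreeze_count. */
#define SKETCH_EINVAL 22

static int thaw_sketch(int *fsfreeze_count)
{
	int error = -SKETCH_EINVAL;

	if (*fsfreeze_count <= 0)
		goto out;	/* unbalanced thaw: caller should see -EINVAL */
	if (--*fsfreeze_count > 0) {
		error = 0;	/* still frozen by another holder */
		goto out;
	}
	error = 0;		/* ...actual unfreeze work would go here... */
out:
	return error;		/* "return 0" here would swallow -EINVAL */
}
```

In the sketch, calling with a count of 0 returns -SKETCH_EINVAL only
because the exit label returns the error variable rather than a
constant 0.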

									Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:31:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:31:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37734.70203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtxP-0001gi-N3; Wed, 25 Nov 2020 12:31:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37734.70203; Wed, 25 Nov 2020 12:31:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khtxP-0001gb-K7; Wed, 25 Nov 2020 12:31:35 +0000
Received: by outflank-mailman (input) for mailman id 37734;
 Wed, 25 Nov 2020 12:31:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khtxP-0001gW-38
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:31:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c63aedca-6747-43bb-b526-183eb1a25ba6;
 Wed, 25 Nov 2020 12:31:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6A34EAF0E;
 Wed, 25 Nov 2020 12:31:33 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 3DD711E130F; Wed, 25 Nov 2020 13:31:33 +0100 (CET)
X-Inumbo-ID: c63aedca-6747-43bb-b526-183eb1a25ba6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 13:31:33 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 05/45] mtip32xx: remove the call to fsync_bdev on removal
Message-ID: <20201125123133.GI16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-6-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-6-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:11, Christoph Hellwig wrote:
> del_gendisk already calls fsync_bdev for every partition, no need
> to do this twice.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Makes sense to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  drivers/block/mtip32xx/mtip32xx.c | 15 ---------------
>  drivers/block/mtip32xx/mtip32xx.h |  2 --
>  2 files changed, 17 deletions(-)
> 
> diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
> index 153e2cdecb4d40..53ac59d19ae530 100644
> --- a/drivers/block/mtip32xx/mtip32xx.c
> +++ b/drivers/block/mtip32xx/mtip32xx.c
> @@ -3687,7 +3687,6 @@ static int mtip_block_initialize(struct driver_data *dd)
>  	/* Enable the block device and add it to /dev */
>  	device_add_disk(&dd->pdev->dev, dd->disk, NULL);
>  
> -	dd->bdev = bdget_disk(dd->disk, 0);
>  	/*
>  	 * Now that the disk is active, initialize any sysfs attributes
>  	 * managed by the protocol layer.
> @@ -3721,9 +3720,6 @@ static int mtip_block_initialize(struct driver_data *dd)
>  	return rv;
>  
>  kthread_run_error:
> -	bdput(dd->bdev);
> -	dd->bdev = NULL;
> -
>  	/* Delete our gendisk. This also removes the device from /dev */
>  	del_gendisk(dd->disk);
>  
> @@ -3804,14 +3800,6 @@ static int mtip_block_remove(struct driver_data *dd)
>  	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
>  	blk_mq_unquiesce_queue(dd->queue);
>  
> -	/*
> -	 * Delete our gendisk structure. This also removes the device
> -	 * from /dev
> -	 */
> -	if (dd->bdev) {
> -		bdput(dd->bdev);
> -		dd->bdev = NULL;
> -	}
>  	if (dd->disk) {
>  		if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
>  			del_gendisk(dd->disk);
> @@ -4206,9 +4194,6 @@ static void mtip_pci_remove(struct pci_dev *pdev)
>  	} while (atomic_read(&dd->irq_workers_active) != 0 &&
>  		time_before(jiffies, to));
>  
> -	if (!dd->sr)
> -		fsync_bdev(dd->bdev);
> -
>  	if (atomic_read(&dd->irq_workers_active) != 0) {
>  		dev_warn(&dd->pdev->dev,
>  			"Completion workers still active!\n");
> diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h
> index e22a7f0523bf30..88f4206310e4c8 100644
> --- a/drivers/block/mtip32xx/mtip32xx.h
> +++ b/drivers/block/mtip32xx/mtip32xx.h
> @@ -463,8 +463,6 @@ struct driver_data {
>  
>  	int isr_binding;
>  
> -	struct block_device *bdev;
> -
>  	struct list_head online_list; /* linkage for online list */
>  
>  	struct list_head remove_list; /* linkage for removing list */
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:37:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37743.70215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khu3W-0001uo-EU; Wed, 25 Nov 2020 12:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37743.70215; Wed, 25 Nov 2020 12:37:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khu3W-0001uh-Ab; Wed, 25 Nov 2020 12:37:54 +0000
Received: by outflank-mailman (input) for mailman id 37743;
 Wed, 25 Nov 2020 12:37:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khu3U-0001uc-Vo
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:37:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35d75ede-f3fe-46c6-ba2c-b3f0db517b7d;
 Wed, 25 Nov 2020 12:37:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C45A3AC22;
 Wed, 25 Nov 2020 12:37:50 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 557991E130F; Wed, 25 Nov 2020 13:37:50 +0100 (CET)
X-Inumbo-ID: 35d75ede-f3fe-46c6-ba2c-b3f0db517b7d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 13:37:50 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 06/45] zram: remove the claim mechanism
Message-ID: <20201125123750.GJ16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-7-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-7-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:12, Christoph Hellwig wrote:
> The zram claim mechanism was added to ensure no new opens come in
> during teardown.  But the proper way to archive that is to call
					  ^^^ achieve

> del_gendisk first, which takes care of all that.  Once del_gendisk
> is called in the right place, the reset side can also be simplified
> as no I/O can be outstanding on a block device that is not open.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Otherwise I didn't find anything obviously wrong with the patch but I don't
feel confident enough with zram to really give you my reviewed-by on this
one.

								Honza

> ---
>  drivers/block/zram/zram_drv.c | 72 ++++++++---------------------------
>  1 file changed, 15 insertions(+), 57 deletions(-)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 6d15d51cee2b7e..2e6d75ec1afddb 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1756,64 +1756,33 @@ static ssize_t disksize_store(struct device *dev,
>  static ssize_t reset_store(struct device *dev,
>  		struct device_attribute *attr, const char *buf, size_t len)
>  {
> -	int ret;
> -	unsigned short do_reset;
> -	struct zram *zram;
> +	struct zram *zram = dev_to_zram(dev);
>  	struct block_device *bdev;
> +	unsigned short do_reset;
> +	int ret = 0;
>  
>  	ret = kstrtou16(buf, 10, &do_reset);
>  	if (ret)
>  		return ret;
> -
>  	if (!do_reset)
>  		return -EINVAL;
>  
> -	zram = dev_to_zram(dev);
>  	bdev = bdget_disk(zram->disk, 0);
>  	if (!bdev)
>  		return -ENOMEM;
>  
>  	mutex_lock(&bdev->bd_mutex);
> -	/* Do not reset an active device or claimed device */
> -	if (bdev->bd_openers || zram->claim) {
> -		mutex_unlock(&bdev->bd_mutex);
> -		bdput(bdev);
> -		return -EBUSY;
> -	}
> -
> -	/* From now on, anyone can't open /dev/zram[0-9] */
> -	zram->claim = true;
> +	if (bdev->bd_openers)
> +		ret = -EBUSY;
> +	else
> +		zram_reset_device(zram);
>  	mutex_unlock(&bdev->bd_mutex);
> -
> -	/* Make sure all the pending I/O are finished */
> -	fsync_bdev(bdev);
> -	zram_reset_device(zram);
>  	bdput(bdev);
>  
> -	mutex_lock(&bdev->bd_mutex);
> -	zram->claim = false;
> -	mutex_unlock(&bdev->bd_mutex);
> -
> -	return len;
> -}
> -
> -static int zram_open(struct block_device *bdev, fmode_t mode)
> -{
> -	int ret = 0;
> -	struct zram *zram;
> -
> -	WARN_ON(!mutex_is_locked(&bdev->bd_mutex));
> -
> -	zram = bdev->bd_disk->private_data;
> -	/* zram was claimed to reset so open request fails */
> -	if (zram->claim)
> -		ret = -EBUSY;
> -
> -	return ret;
> +	return ret ? ret : len;
>  }
>  
>  static const struct block_device_operations zram_devops = {
> -	.open = zram_open,
>  	.submit_bio = zram_submit_bio,
>  	.swap_slot_free_notify = zram_slot_free_notify,
>  	.rw_page = zram_rw_page,
> @@ -1821,7 +1790,6 @@ static const struct block_device_operations zram_devops = {
>  };
>  
>  static const struct block_device_operations zram_wb_devops = {
> -	.open = zram_open,
>  	.submit_bio = zram_submit_bio,
>  	.swap_slot_free_notify = zram_slot_free_notify,
>  	.owner = THIS_MODULE
> @@ -1974,32 +1942,22 @@ static int zram_add(void)
>  
>  static int zram_remove(struct zram *zram)
>  {
> -	struct block_device *bdev;
> -
> -	bdev = bdget_disk(zram->disk, 0);
> -	if (!bdev)
> -		return -ENOMEM;
> +	struct block_device *bdev = bdget_disk(zram->disk, 0);
>  
> -	mutex_lock(&bdev->bd_mutex);
> -	if (bdev->bd_openers || zram->claim) {
> -		mutex_unlock(&bdev->bd_mutex);
> +	if (bdev) {
> +		if (bdev->bd_openers) {
> +			bdput(bdev);
> +			return -EBUSY;
> +		}
>  		bdput(bdev);
> -		return -EBUSY;
>  	}
>  
> -	zram->claim = true;
> -	mutex_unlock(&bdev->bd_mutex);
> -
> +	del_gendisk(zram->disk);
>  	zram_debugfs_unregister(zram);
> -
> -	/* Make sure all the pending I/O are finished */
> -	fsync_bdev(bdev);
>  	zram_reset_device(zram);
> -	bdput(bdev);
>  
>  	pr_info("Removed device: %s\n", zram->disk->disk_name);
>  
> -	del_gendisk(zram->disk);
>  	blk_cleanup_queue(zram->disk->queue);
>  	put_disk(zram->disk);
>  	kfree(zram);
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:40:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:40:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37749.70227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khu5z-0002m1-S6; Wed, 25 Nov 2020 12:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37749.70227; Wed, 25 Nov 2020 12:40:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khu5z-0002lu-OU; Wed, 25 Nov 2020 12:40:27 +0000
Received: by outflank-mailman (input) for mailman id 37749;
 Wed, 25 Nov 2020 12:40:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khu5y-0002lm-H5; Wed, 25 Nov 2020 12:40:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khu5y-0006rs-A9; Wed, 25 Nov 2020 12:40:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khu5y-0000n8-1D; Wed, 25 Nov 2020 12:40:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khu5y-0006Zp-0k; Wed, 25 Nov 2020 12:40:26 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T2u2/93sliiwaNXoCACb5ETUydl5WHCF23zXGmp/BjY=; b=GK79gfkpKbaDTn3WpdFc1lmxcq
	iGNnTmehar0rUGxGQJ4u3hGxXEbQ3ehxpVDPwOxTxCLUTcWrPEwAbfeOXGSo/2m9pdzf2LRC2MW6u
	szg/Ag5QoM4aYpgGLZpLuXV2fv6iiJGwNPAcqvrVCT11edPbsjyzeAD4/RcPlJ6I3+io=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156988-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156988: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5e4914e60da9a8dfdc00e839278f40c87525b8ae
X-Osstest-Versions-That:
    xen=d4c0483c0b87768cd9b95542e98111e4c098d57f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 12:40:26 +0000

flight 156988 xen-4.13-testing real [real]
flight 157005 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156988/
http://logs.test-lab.xenproject.org/osstest/logs/157005/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157005-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156636
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156636
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156636
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156636
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156636
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156636
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156636
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156636
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156636
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156636
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156636
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5e4914e60da9a8dfdc00e839278f40c87525b8ae
baseline version:
 xen                  d4c0483c0b87768cd9b95542e98111e4c098d57f

Last test of basis   156636  2020-11-10 18:06:32 Z   14 days
Testing same since   156988  2020-11-24 13:36:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d4c0483c0b..5e4914e60d  5e4914e60da9a8dfdc00e839278f40c87525b8ae -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 12:57:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 12:57:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37760.70246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khuMJ-0003tZ-Ii; Wed, 25 Nov 2020 12:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37760.70246; Wed, 25 Nov 2020 12:57:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khuMJ-0003tS-FD; Wed, 25 Nov 2020 12:57:19 +0000
Received: by outflank-mailman (input) for mailman id 37760;
 Wed, 25 Nov 2020 12:57:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dO0Y=E7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1khuMH-0003tN-Gf
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 12:57:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8167b2b9-92e1-4563-8697-7f51a70fad6b;
 Wed, 25 Nov 2020 12:57:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75156AC22;
 Wed, 25 Nov 2020 12:57:15 +0000 (UTC)
X-Inumbo-ID: 8167b2b9-92e1-4563-8697-7f51a70fad6b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606309035; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZCm9hKYHAvVicsIzEYXUoyHakHZMRSC+kkIexzP2VEU=;
	b=S5wb+uKEMEOQsfCYyBx4XbSmHki5Gb1h5HS73J2J3VLyBPsRJPB8xPsVG1sVgpsxlcjowr
	761gYfEKX5w9Ek81GzjCD/WO4i+6BN70aW9Dd0Dupsm9qGf4GQAwb8Ch5tDoE8Q7ycBUWF
	rS2hwSXit1EIykHe8If7HFbW72E8zho=
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
 <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
Date: Wed, 25 Nov 2020 13:57:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 13:15, Julien Grall wrote:
> On 23/11/2020 14:23, Jan Beulich wrote:
>> All of the array allocations in grant_table_init() can exceed a page's
>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>> for after boot. We also don't need any of these allocations to be
>> physically contiguous. Introduce interfaces dynamically switching
>> between xmalloc() et al and vmalloc() et al, based on requested size,
>> and use them instead.
>>
>> All the wrappers in the new header get cloned mostly verbatim from
>> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
>> for sizes and to unsigned int for alignments.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Actually edit a copy-and-pasted comment in xvmalloc.h which was
>>      meant to be edited from the beginning.
>> ---
>> I'm unconvinced of the mentioning of "physically contiguous" in the
>> comment at the top of the new header: I don't think xmalloc() provides
>> such a guarantee. Any use assuming so would look (latently) broken to
>> me.
> 
> I haven't had the chance to reply to the first version about this. So I 
> will reply here to avoid confusion.
> 
> I can at least spot one user in Arm that would use xmalloc() that way 
> (see the allocation of itt_addr in arch/arm/gic-v3-its.c).

And I surely wouldn't have spotted this, even if I had tried
to find "offenders", i.e. as said before not wanting to alter
the behavior of existing code (beyond the explicit changes
done here) was ...

> AFAIK, the memory is for the sole purpose of the ITS and should not be 
> accessed by Xen. So I think we can replace it with a new version of 
> alloc_domheap_pages().
> 
> However, I still question the usefulness of introducing yet another way 
> to allocate memory (we already have alloc_xenheap_pages(), xmalloc(), 
> alloc_domheap_pages(), vmap()) if you think users cannot rely on 
> xmalloc() to allocate memory physically contiguous.

... the reason to introduce a separate new interface. Plus of
course this parallels what Linux has.

> It definitely makes it more difficult to figure out when to use 
> xmalloc() vs xvmalloc().

I don't see the difficulty:
- if you need physically contiguous memory, use alloc_xen*_pages(),
- if you know the allocation size is always no more than a page,
  use xmalloc(),
- if you know the allocation size is always more than a page, use
  vmalloc(),
- otherwise, use xvmalloc().
Exceptions may of course apply, i.e. this is just a rule of thumb.

> I would like to hear an opinion from the other maintainers.

Let's hope at least one will voice theirs.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 13:37:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 13:37:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37774.70278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khuzP-0007af-U8; Wed, 25 Nov 2020 13:37:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37774.70278; Wed, 25 Nov 2020 13:37:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khuzP-0007aY-QA; Wed, 25 Nov 2020 13:37:43 +0000
Received: by outflank-mailman (input) for mailman id 37774;
 Wed, 25 Nov 2020 13:37:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khuzP-0007aT-0W
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:37:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd60aa23-ce69-40de-9f3b-6480a6c76526;
 Wed, 25 Nov 2020 13:37:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A70EAC22;
 Wed, 25 Nov 2020 13:37:40 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 658EF1E130F; Wed, 25 Nov 2020 14:37:39 +0100 (CET)
X-Inumbo-ID: cd60aa23-ce69-40de-9f3b-6480a6c76526
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 14:37:39 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 08/45] loop: do not call set_blocksize
Message-ID: <20201125133739.GK16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-9-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-9-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:14, Christoph Hellwig wrote:
> set_blocksize is used by file systems to use their preferred buffer cache
> block size.  Block drivers should not set it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  drivers/block/loop.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 9a27d4f1c08aac..b42c728620c9e4 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1164,9 +1164,6 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
>  	size = get_loop_size(lo, file);
>  	loop_set_size(lo, size);
>  
> -	set_blocksize(bdev, S_ISBLK(inode->i_mode) ?
> -		      block_size(inode->i_bdev) : PAGE_SIZE);
> -
>  	lo->lo_state = Lo_bound;
>  	if (part_shift)
>  		lo->lo_flags |= LO_FLAGS_PARTSCAN;
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 13:39:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 13:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37781.70290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khv12-0007m9-9i; Wed, 25 Nov 2020 13:39:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37781.70290; Wed, 25 Nov 2020 13:39:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khv12-0007m2-6U; Wed, 25 Nov 2020 13:39:24 +0000
Received: by outflank-mailman (input) for mailman id 37781;
 Wed, 25 Nov 2020 13:39:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFqU=E7=suse.cz=jack@srs-us1.protection.inumbo.net>)
 id 1khv10-0007lw-HF
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:39:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af0670e9-7ae3-48f1-971f-7223551ae775;
 Wed, 25 Nov 2020 13:39:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BCF50AC23;
 Wed, 25 Nov 2020 13:39:20 +0000 (UTC)
Received: by quack2.suse.cz (Postfix, from userid 1000)
 id 8ACF21E130F; Wed, 25 Nov 2020 14:39:20 +0100 (CET)
X-Inumbo-ID: af0670e9-7ae3-48f1-971f-7223551ae775
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Wed, 25 Nov 2020 14:39:20 +0100
From: Jan Kara <jack@suse.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Tejun Heo <tj@kernel.org>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 12/45] block: remove a superflous check in blkpg_do_ioctl
Message-ID: <20201125133920.GL16944@quack2.suse.cz>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-13-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201124132751.3747337-13-hch@lst.de>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 24-11-20 14:27:18, Christoph Hellwig wrote:
> sector_t is now always a u64, so this check is not needed.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/ioctl.c | 9 ---------
>  1 file changed, 9 deletions(-)
> 
> diff --git a/block/ioctl.c b/block/ioctl.c
> index 6b785181344fe1..0c09bb7a6ff35f 100644
> --- a/block/ioctl.c
> +++ b/block/ioctl.c
> @@ -35,15 +35,6 @@ static int blkpg_do_ioctl(struct block_device *bdev,
>  	start = p.start >> SECTOR_SHIFT;
>  	length = p.length >> SECTOR_SHIFT;
>  
> -	/* check for fit in a hd_struct */
> -	if (sizeof(sector_t) < sizeof(long long)) {
> -		long pstart = start, plength = length;
> -
> -		if (pstart != start || plength != length || pstart < 0 ||
> -		    plength < 0 || p.pno > 65535)
> -			return -EINVAL;
> -	}
> -
>  	switch (op) {
>  	case BLKPG_ADD_PARTITION:
>  		/* check if partition is aligned to blocksize */
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 13:49:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 13:49:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37790.70301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvAM-0000LK-6Q; Wed, 25 Nov 2020 13:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37790.70301; Wed, 25 Nov 2020 13:49:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvAM-0000LD-39; Wed, 25 Nov 2020 13:49:02 +0000
Received: by outflank-mailman (input) for mailman id 37790;
 Wed, 25 Nov 2020 13:49:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khvAK-0000L8-L1
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:49:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khvAK-0008I8-Ga
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:49:00 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khvAK-0001uP-FX
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:49:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1khvAI-0001Hf-M1; Wed, 25 Nov 2020 13:48:58 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=CC:Subject:To:Date:Message-ID:
	Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=EXVMI6QL/ACKGwBY4ODNpCDfRoerL0fOa8oPaRNAPcs=; b=lKxf6hN0NrOyZRHRVFzNR+MPBL
	h0N/18GlOw8xKx6aX3OyUPKC2gjNwETYP198AgCOkd1UlK4FAjSc9gqZOL0Rr0bF0QQ5LmixYvZNd
	F6ttUX2wQk51iDQh//4X5h8JhoozOX52dFPpBkf2LW7JytyosUXtzMd/j3qgOwdXtW3w=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24510.24778.433048.477008@mariner.uk.xensource.com>
Date: Wed, 25 Nov 2020 13:48:58 +0000
To: xen-devel@lists.xenproject.org
Subject: Xen 4.15: Proposed release schedule
CC: committers@xenproject.org,
    George Dunlap <George.Dunlap@citrix.com>,

    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
    Paul Durrant <xadimgnik@gmail.com>,
    Wei Liu <wl@xen.org>
FCC: ~/mail/Outbound
--text follows this line--
Hi.  I've done a little bit of consultation with previous release
managers, and reviewed various list archives and calendars.  These
consultations seemed to suggest some folklore that wasn't captured in
our process doc - hence the proposed patch, below.

I would like to tentatively propose the following schedule and
policies for Xen 4.15.

If you have opinions, please comment as soon as you can so that we can
have an open dialogue.  Comments must be submitted at the very latest
by 1700 UTC on Wednesday the 2nd of December.

Having never done this before, I am particularly interested in
comments from previous release managers.

** DRAFT **

  Friday 8th January    Last posting date

    Patches adding new features should be posted to the mailing list
    by this date, although perhaps not in their final version.

  Friday 22nd January   Feature freeze
 
    Patches adding new features should be committed by this date.
    Straightforward bugfixes may continue to be accepted by
    maintainers.

  Friday 12th February **tentative**   Code freeze

    Bugfixes only, all changes to be approved by the Release Manager.

  Week of 12th March **tentative**   Release
    (probably Tuesday or Wednesday)

Any patches containing substantial refactoring are to be treated as
new features, even if their intent is to fix bugs.

Freeze exceptions will not be routine, but may be granted in
exceptional cases for small changes on the basis of risk assessment.
Large series will not get exceptions.  Contributors *must not* rely on
getting, or expect, a freeze exception.

Chinese New Year falls around the 11th-19th of February this year.  In
my plan above, that falls within the hard code freeze period.  If we
don't manage to get the tree to an acceptable quality level by the
tentative code freeze and release dates above, these dates will slip.

I have not yet started tracking "big ticket" items, and bugs.  I
expect to start doing that starting after Christmas.  NB the primary
responsibility for driving a feature's progress to meet the release
schedule lies with the feature's proponents.

If as a feature proponent you feel your feature is at risk and there
is something the Xen Project could do to help, please consult me or
the Community Manager.  In such situations please reach out earlier
rather than later.

** END OF DRAFT **

Thanks,
Ian.

>From b34f4ddace0b8d76d8c340a46288a2db79c99460 Mon Sep 17 00:00:00 2001
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Date: Wed, 25 Nov 2020 13:22:08 +0000
Subject: [PATCH] xen-release-management doc: More info on schedule
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This documents our practice, established in 2018
  https://lists.xen.org/archives/html/xen-devel/2018-07/msg02240.html
et seq

CC: Jürgen Groß <jgross@suse.com>
CC: Paul Durrant <xadimgnik@gmail.com>
CC: Wei Liu <wl@xen.org>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 docs/process/xen-release-management.pandoc | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/docs/process/xen-release-management.pandoc b/docs/process/xen-release-management.pandoc
index e1aa1eda8f..a5d70fed67 100644
--- a/docs/process/xen-release-management.pandoc
+++ b/docs/process/xen-release-management.pandoc
@@ -15,8 +15,10 @@ that they can have an idea what to expect from the Release Manager.
 
 # Xen release cycle
 
-The Xen hypervisor project now releases every 8 months. The actual release date
-depends on a lot of factors.
+The Xen hypervisor project now releases every 8 months.  We aim to
+release in the first half of March/July/November.  These dates have
+been chosen to avoid major holidays and cultural events; if one
+release slips, ideally the previous release cycle would be shortened.
 
 We can roughly divide one release into two periods. The development period
 and the freeze period. The former is 6 months long and the latter is about 2
@@ -33,6 +35,12 @@ During freeze period, the tree is closed for new features. Only bug fixes are
 accepted. This period can be shorter or longer than 2 months. If it ends up
 longer than 2 months, it eats into the next development period.
 
+The precise release schedule depends on a lot of factors and needs to
+be set afresh by the Release Manager in each release cycle.  When the
+release is in March, particular consideration should be given to the
+Chinese New Year holiday which will then typically occur during the
+freeze, so the freeze should probably be extended to compensate.
+
 # The different roles in a Xen release
 
 ## Release Manager
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 13:57:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 13:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37797.70314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvI0-0001FO-TJ; Wed, 25 Nov 2020 13:56:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37797.70314; Wed, 25 Nov 2020 13:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvI0-0001FH-PQ; Wed, 25 Nov 2020 13:56:56 +0000
Received: by outflank-mailman (input) for mailman id 37797;
 Wed, 25 Nov 2020 13:56:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khvHz-0001FC-Bn
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:56:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khvHy-0008TS-Gh
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:56:54 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1khvHy-0002UR-Fk
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 13:56:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1khvHw-0001JZ-MT; Wed, 25 Nov 2020 13:56:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Subject:References:In-Reply-To:CC:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=BIqso6EQUrm2U0mBVjcvavC/rZFxt/cV7qMCIT3KhzQ=; b=xL4EmwVS/0+UAXVSpPRNqrknkf
	zwwjf6vUwzp6+3g+9A+IRwvmGBAEYQ/Phl+PebmprjVFKIiIyI9BBIEEbGlOxQftjVEFG7huTsFyk
	IlMsiTgiP6rKiVcZQwRznns+dmOL8RgMC6VFqIWog4TKeuIUbf0wNGC37ty8Mzuo9WyU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Message-ID: <24510.25252.447028.364012@mariner.uk.xensource.com>
Date: Wed, 25 Nov 2020 13:56:52 +0000
To: xen-devel@lists.xenproject.org
CC: committers@xenproject.org,
    George Dunlap <George.Dunlap@citrix.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
    Paul Durrant <xadimgnik@gmail.com>,
    Wei Liu <wl@xen.org>
In-Reply-To: <24510.24778.433048.477008@mariner.uk.xensource.com>
References: <24510.24778.433048.477008@mariner.uk.xensource.com>
Subject: Xen 4.15: Proposed release schedule

(resending because the first one had corrupted email headers;
 please reply to this one and not the previous one)

Hi.  I've done a little bit of consultation with previous release
managers, and reviewed various list archives and calendars.  These
consultations seemed to suggest some folklore that wasn't captured in
our process doc - hence the proposed patch, below.

I would like to tentatively propose the following schedule and
policies for Xen 4.15.

If you have opinions, please comment as soon as you can so that we can
have an open dialogue.  Comments must be submitted at the very latest
by 1700 UTC on Wednesday the 2nd of December.

Having never done this before, I am particularly interested in
comments from previous release managers.

** DRAFT **

  Friday 8th January    Last posting date

    Patches adding new features should be posted to the mailing list
    by this date, although perhaps not in their final version.

  Friday 22nd January   Feature freeze
 
    Patches adding new features should be committed by this date.
    Straightforward bugfixes may continue to be accepted by
    maintainers.

  Friday 12th February **tentative**   Code freeze

    Bugfixes only, all changes to be approved by the Release Manager.

  Week of 12th March **tentative**   Release
    (probably Tuesday or Wednesday)

Any patches containing substantial refactoring are to be treated as
new features, even if their intent is to fix bugs.

Freeze exceptions will not be routine, but may be granted in
exceptional cases for small changes on the basis of risk assessment.
Large series will not get exceptions.  Contributors *must not* rely on
getting, or expect, a freeze exception.

Chinese New Year falls around the 11th-19th of February this year.  In
my plan above, that falls within the hard code freeze period.  If we
don't manage to get the tree to an acceptable quality level by the
tentative code freeze and release dates above, these dates will slip.

I have not yet started tracking "big ticket" items, and bugs.  I
expect to start doing that starting after Christmas.  NB the primary
responsibility for driving a feature's progress to meet the release
schedule lies with the feature's proponents.

If as a feature proponent you feel your feature is at risk and there
is something the Xen Project could do to help, please consult me or
the Community Manager.  In such situations please reach out earlier
rather than later.

** END OF DRAFT **

Thanks,
Ian.

>From b34f4ddace0b8d76d8c340a46288a2db79c99460 Mon Sep 17 00:00:00 2001
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Date: Wed, 25 Nov 2020 13:22:08 +0000
Subject: [PATCH] xen-release-management doc: More info on schedule
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This documents our practice, established in 2018
  https://lists.xen.org/archives/html/xen-devel/2018-07/msg02240.html
et seq

CC: Jürgen Groß <jgross@suse.com>
CC: Paul Durrant <xadimgnik@gmail.com>
CC: Wei Liu <wl@xen.org>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 docs/process/xen-release-management.pandoc | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/docs/process/xen-release-management.pandoc b/docs/process/xen-release-management.pandoc
index e1aa1eda8f..a5d70fed67 100644
--- a/docs/process/xen-release-management.pandoc
+++ b/docs/process/xen-release-management.pandoc
@@ -15,8 +15,10 @@ that they can have an idea what to expect from the Release Manager.
 
 # Xen release cycle
 
-The Xen hypervisor project now releases every 8 months. The actual release date
-depends on a lot of factors.
+The Xen hypervisor project now releases every 8 months.  We aim to
+release in the first half of March/July/November.  These dates have
+been chosen to avoid major holidays and cultural events; if one
+release slips, ideally the previous release cycle would be shortened.
 
 We can roughly divide one release into two periods. The development period
 and the freeze period. The former is 6 months long and the latter is about 2
@@ -33,6 +35,12 @@ During freeze period, the tree is closed for new features. Only bug fixes are
 accepted. This period can be shorter or longer than 2 months. If it ends up
 longer than 2 months, it eats into the next development period.
 
+The precise release schedule depends on a lot of factors and needs to
+be set afresh by the Release Manager in each release cycle.  When the
+release is in March, particular consideration should be given to the
+Chinese New Year holiday which will then typically occur during the
+freeze, so the freeze should probably be extended to compensate.
+
 # The different roles in a Xen release
 
 ## Release Manager
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 14:02:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 14:02:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37803.70326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvMn-0002EE-Fe; Wed, 25 Nov 2020 14:01:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37803.70326; Wed, 25 Nov 2020 14:01:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvMn-0002E7-CZ; Wed, 25 Nov 2020 14:01:53 +0000
Received: by outflank-mailman (input) for mailman id 37803;
 Wed, 25 Nov 2020 14:01:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X3kr=E7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1khvMl-0002DO-U2
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 14:01:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f2e1c82-3c26-4abe-9eca-f6c55a7ad8bd;
 Wed, 25 Nov 2020 14:01:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9473AAC22;
 Wed, 25 Nov 2020 14:01:49 +0000 (UTC)
X-Inumbo-ID: 2f2e1c82-3c26-4abe-9eca-f6c55a7ad8bd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606312909; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hFzhXzFFghZZk4ZpDbGxeUJ0LHm6treS6Oq94x67l3Q=;
	b=VVjfRpJJ9ndcRbbUXwpPhVk562st3Csoq/6/iF7FYodZApf9j/veNuc9r0+VwF+Dxabbuu
	QHydgOyOdjxW6WzTAHdlCHMOs1Z8ItFyGWJE65R6pl5Vn8akX6S/OS5PzdAhENo7e0fc7T
	gLh5TxUdgULZ7TWkcyiR3yeXZhqwAkM=
Subject: Re: Xen 4.15: Proposed release schedule
To: Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
Cc: committers@xenproject.org, George Dunlap <George.Dunlap@citrix.com>
References: <24510.24778.433048.477008@mariner.uk.xensource.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f46207ec-5022-2415-c823-e27ee8b74f9c@suse.com>
Date: Wed, 25 Nov 2020 15:01:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <24510.24778.433048.477008@mariner.uk.xensource.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="iq6o53iNID9ZPMy97LKYPfBJzRBTyu9Fc"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--iq6o53iNID9ZPMy97LKYPfBJzRBTyu9Fc
Content-Type: multipart/mixed; boundary="JsFG8kzwkduzFc83dt57SkXSIiZAungJ1";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
Cc: committers@xenproject.org, George Dunlap <George.Dunlap@citrix.com>
Message-ID: <f46207ec-5022-2415-c823-e27ee8b74f9c@suse.com>
Subject: Re: Xen 4.15: Proposed release schedule
References: <24510.24778.433048.477008@mariner.uk.xensource.com>
In-Reply-To: <24510.24778.433048.477008@mariner.uk.xensource.com>

--JsFG8kzwkduzFc83dt57SkXSIiZAungJ1
Content-Type: multipart/mixed;
 boundary="------------AD3675CBC7F429925466476B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AD3675CBC7F429925466476B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 25.11.20 14:48, Ian Jackson wrote:
>      Andrew Cooper <andrew.cooper3@citrix.com>,
>      George Dunlap <george.dunlap@citrix.com>,
>      Jan Beulich <jbeulich@suse.com>,
>      Julien Grall <julien@xen.org>,
>      Stefano Stabellini <sstabellini@kernel.org>,
>      Jürgen Groß <jgross@suse.com>,
>      Paul Durrant <xadimgnik@gmail.com>,
>      Wei Liu <wl@xen.org>
> FCC: ~/mail/Outbound
> --text follows this line--
> Hi.  I've done a little bit of consultation with previous release
> managers, and reviewed various list archives and calendars.  These
> consultations seemed to suggest some folklore that wasn't captured in
> our process doc - hence the proposed patch, below.
> 
> I would like to tentatively propose the following schedule and
> policies for Xen 4.15.
> 
> If you have opinions, please comment as soon as you can so that we can
> have an open dialogue.  Comments must be submitted at the very latest
> by 1700 UTC on Wednesday the 2nd of December.
> 
> Having never done this before, I am particularly interested in
> comments from previous release managers.
> 
> ** DRAFT **
> 
>    Friday 8th January    Last posting date
> 
>      Patches adding new features should be posted to the mailing list
>      by this cate, although perhaps not in their final version.
> 
>    Friday 22nd January   Feature freeze
>  
>      Patches adding new features should be committed by this date.
>      Straightforward bugfixes may continue to be accepted by
>      maintainers.
> 
>    Friday 12th February **tentatve**   Code freeze
> 
>      Bugfixes only, all changes to be approved by the Release Manager.
> 
>    Week of 12th March **tentative**   Release
>      (probably Tuesday or Wednesday)
> 
> Any patches containing substantial refactoring are to treated as
> new features, even if they intent is to fix bugs.
> 
> Freeze exceptions will not be routine, but may be granted in
> exceptional cases for small changes on the basis of risk assessment.
> Large series will not get exceptions.  Contributors *must not* rely on
> getting, or expect, a freeze exception.
> 
> Chinese New Year falls around the 11th-19th of February this year.  In
> my plan above, that falls within the hard code freeze period.  If we
> don't manage to get the tree to an acceptable quality level by the
> tentative codefreeze and release dates above, these dates will slip.
> 
> I have not yet started tracking "big ticket" items, and bugs.  I
> expect to start doing that starting after Christmas.  NB the primary
> responsibility for driving a feature's progress to meet the release
> schedule, lies with the feature's proponents.
> 
> If as a feature proponent you feel your feature is at risk and there
> is something the Xen Project could do to help, please consult me or
> the Community Manager.  In such situations please reach out earlier
> rather than later.
> 
> ** END OF DRAFT **
> 
> Thanks,
> Ian.
> 
>>From b34f4ddace0b8d76d8c340a46288a2db79c99460 Mon Sep 17 00:00:00 2001
> From: Ian Jackson <iwj@xenproject.org>
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <iwj@xenproject.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Date: Wed, 25 Nov 2020 13:22:08 +0000
> Subject: [PATCH] xen-release-management doc: More info on schedule
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> This documents our practice, established in 2018
>    https://lists.xen.org/archives/html/xen-devel/2018-07/msg02240.html
> et seq
> 
> CC: Jürgen Groß <jgross@suse.com>
> CC: Paul Durrant <xadimgnik@gmail.com>
> CC: Wei Liu <wl@xen.org>
> Signed-off-by: Ian Jackson <iwj@xenproject.org>
> ---
>   docs/process/xen-release-management.pandoc | 12 ++++++++++--
>   1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/docs/process/xen-release-management.pandoc b/docs/process/xen-release-management.pandoc
> index e1aa1eda8f..a5d70fed67 100644
> --- a/docs/process/xen-release-management.pandoc
> +++ b/docs/process/xen-release-management.pandoc
> @@ -15,8 +15,10 @@ that they can have an idea what to expect from the Release Manager.
>   
>   # Xen release cycle
>   
> -The Xen hypervisor project now releases every 8 months. The actual release date
> -depends on a lot of factors.
> +The Xen hypervisor project now releases every 8 months.  We aim to
> +release in the first half of March/July/November.  These dates have
> +been chosen to avoid major holidays and cultural events; if one
> +release slips, ideally the previous release cycle would be shortened.

s/previous/following/

Maybe add a reference to the mail thread in the xen-devel archives?

>   
>   We can roughly divide one release into two periods. The development period
>   and the freeze period. The former is 6 months long and the latter is about 2
> @@ -33,6 +35,12 @@ During freeze period, the tree is closed for new features. Only bug fixes are
>   accepted. This period can be shorter or longer than 2 months. If it ends up
>   longer than 2 months, it eats into the next development period.
>   
> +The precise release schedule depends on a lot of factors and needs to
> +be set afresh by the Release Manager in each release cycle.  When the
> +release is in March, particular consideration should be given to the
> +Chinese New Year holidaty which will then typically occur curing the

s/holidaty/holiday/

> +freeze, so the freeze should probably be extended to compensate.
> +
>   # The different roles in a Xen release
>   
>   ## Release Manager
> 


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 14:19:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 14:19:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37812.70344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvdW-0003PJ-CN; Wed, 25 Nov 2020 14:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37812.70344; Wed, 25 Nov 2020 14:19:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khvdW-0003PC-8x; Wed, 25 Nov 2020 14:19:10 +0000
Received: by outflank-mailman (input) for mailman id 37812;
 Wed, 25 Nov 2020 14:19:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khvdV-0003P4-3x; Wed, 25 Nov 2020 14:19:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khvdU-0000aV-T8; Wed, 25 Nov 2020 14:19:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khvdU-0006LB-NC; Wed, 25 Nov 2020 14:19:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khvdU-0007wl-Mb; Wed, 25 Nov 2020 14:19:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fKASA1WDv6EL0udgZooEfcpW4MoLHZhyPp9PvE1l/jc=; b=kS7OUcEzyQvldfuJa41UuYW0Vn
	/7Ampv1RF5QNBn+CFKdoIeM5YVJTmQQMlbIds68gQ3aE3fdNJoeUTlJOBbDsQthI7214u8IVpj3Y7
	/8rDZrH0VYeIPTk2ScIp8hXnpJZxdwn1KIt+9X+8Ys7bH6wLm6LyZZLzanSVKMGvG8cs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157006-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157006: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fd7479b9aec25885cc17d33b326b9babae59faee
X-Osstest-Versions-That:
    xen=9b156bcc3ffcc7949edd4460b718a241e87ae302
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 14:19:08 +0000

flight 157006 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157006/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fd7479b9aec25885cc17d33b326b9babae59faee
baseline version:
 xen                  9b156bcc3ffcc7949edd4460b718a241e87ae302

Last test of basis   156991  2020-11-24 14:01:23 Z    1 days
Testing same since   157006  2020-11-25 12:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9b156bcc3f..fd7479b9ae  fd7479b9aec25885cc17d33b326b9babae59faee -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 14:50:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 14:50:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37823.70359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khw7I-00069k-Sw; Wed, 25 Nov 2020 14:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37823.70359; Wed, 25 Nov 2020 14:49:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khw7I-00069d-O4; Wed, 25 Nov 2020 14:49:56 +0000
Received: by outflank-mailman (input) for mailman id 37823;
 Wed, 25 Nov 2020 14:49:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QLmq=E7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khw7H-00069Y-5V
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 14:49:55 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef133d09-9859-4085-876e-169d35d219f0;
 Wed, 25 Nov 2020 14:49:53 +0000 (UTC)
X-Inumbo-ID: ef133d09-9859-4085-876e-169d35d219f0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606315793;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=6RjATdOGeRy0lx5OA3+/JVm0+7J35u3rU8nLoBynQik=;
  b=D3U4zCqFIxMnm/xWty+SwsgTW2xnY9ofSA/jr3wbZKL8cwY28EGTsktY
   pjDes68uRsEuDeqGNmfAII8jArfzYSouFz1Ha7welXrs0eyBMPYpKxj8b
   e+MjP9h8lyDnH320tYu59ZOZSElxFxUw4w749b6b9bDu62T0T4ahnPN8h
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: EPNlRTuwUZ5o2xYMl0cIlv0NjE+lFPO+mwXCXLZC2PAjzLOGBwezgmRFV/V0H+4ZlSB43jn28R
 7w+qS+ObcGYlkTq/pvD19IuLmYenEEmeeCObuTOssWojjvhrGfoFmrQAUFI06oGBbxAipCbl/k
 mN0gIDGFUx3UEIvRKbcXruVSoWRhcZtulBDlEl5dj6KXuASNDMUzHKKgmkKH8ffWps7Kt+tfdk
 IUTeEywLtG2fPRFdfKxnB1W9Asc3r2rvyGo7qt+YF7RVTuZUrQwWAZ6GqwuGVv4Th7vn7hY6MM
 obo=
X-SBRS: None
X-MesageID: 31895527
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,369,1599537600"; 
   d="scan'208";a="31895527"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: [PATCH] tools/libs: Simplify internal *.pc files
Date: Wed, 25 Nov 2020 14:49:28 +0000
Message-ID: <20201125144928.22778-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

The internal package config file for libxenlight reads (reformatted to avoid
exceeding the SMTP 998-character line length):

  Libs: -L${libdir}
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/gnttab
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/foreignmemory
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/devicemodel
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/ctrl
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/store
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/hypfs
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/gnttab
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/foreignmemory
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/devicemodel
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/ctrl
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/guest
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/light
  -lxenlight

Drop duplicate -rpath-link='s to turn it into the slightly-more-manageable:

  Libs: -L${libdir}
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/ctrl
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/devicemodel
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/foreignmemory
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/gnttab
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/guest
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/hypfs
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/light
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/store
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
  -lxenlight

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
---
 tools/Rules.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index f61da81f4a..5d92ff0699 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -184,7 +184,7 @@ $(PKG_CONFIG_DIR)/%.pc: Makefile $(XEN_ROOT)/tools/Rules.mk $(PKG_CONFIG_DIR)
 	echo "Description: $(PKG_CONFIG_DESC)"; \
 	echo "Version: $(PKG_CONFIG_VERSION)"; \
 	echo "Cflags: -I\$${includedir} $(CFLAGS_xeninclude)"; \
-	echo "Libs: -L\$${libdir} $(PKG_CONFIG_USELIBS) -l$(PKG_CONFIG_LIB)"; \
+	echo "Libs: -L\$${libdir} $(sort $(PKG_CONFIG_USELIBS)) -l$(PKG_CONFIG_LIB)"; \
 	echo "Libs.private: $(PKG_CONFIG_LIBSPRIV)"; \
 	echo "Requires.private: $(PKG_CONFIG_REQPRIV)"; \
 	} > $@
-- 
2.11.0
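An editor's aside rather than part of the patch: the one-line fix works because GNU make's `$(sort)` function both orders a word list and removes duplicate words. A minimal sketch, assuming GNU make is available (the `sort-demo.mk` file name and `PATHS` variable are invented for illustration):

```shell
# GNU make's $(sort) sorts lexically AND de-duplicates words, which is
# what collapses the repeated -Wl,-rpath-link= entries into one each.
printf 'PATHS := libs/toollog libs/toolcore libs/toollog libs/evtchn libs/toolcore\nall:\n\t@echo "$(sort $(PATHS))"\n' > sort-demo.mk
make -f sort-demo.mk
# prints: libs/evtchn libs/toolcore libs/toollog
```

Note that `$(sort)` reorders the entries as a side effect, which is presumably harmless for `-rpath-link` since each library lives in its own directory and any search order finds it.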



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:36:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37832.70370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwpz-00025f-KZ; Wed, 25 Nov 2020 15:36:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37832.70370; Wed, 25 Nov 2020 15:36:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwpz-00025Y-Hd; Wed, 25 Nov 2020 15:36:07 +0000
Received: by outflank-mailman (input) for mailman id 37832;
 Wed, 25 Nov 2020 15:36:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1khwpy-00025T-5c
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:36:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19acc88e-9036-444e-842e-2d883727306b;
 Wed, 25 Nov 2020 15:36:05 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 56F8C20872;
 Wed, 25 Nov 2020 15:36:03 +0000 (UTC)
X-Inumbo-ID: 19acc88e-9036-444e-842e-2d883727306b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606318564;
	bh=mG6o+nKVwn1pUXo6FFrQYjWoCcXjJrRR/UibXVtgk8I=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=LO5cCEWweuQ6F6hT7djsWLg9U0bvdkssAhggHH11wlv+nuptNJlhYFJbW+3K0YR3M
	 bFyckja4vjp7UfledE8etDz1ScYR5GEDvFFr2xzETp/xnWTCtBMJ2B5HkX+v5m/l2T
	 K+kR8vPsp7k33wgyOM81Ysr5n5AK+mG/VwZebDEM=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Brian Masney <bmasney@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 5.9 10/33] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Wed, 25 Nov 2020 10:35:27 -0500
Message-Id: <20201125153550.810101-10-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201125153550.810101-1-sashal@kernel.org>
References: <20201125153550.810101-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Brian Masney <bmasney@redhat.com>

[ Upstream commit 65cae18882f943215d0505ddc7e70495877308e6 ]

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), is
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 799f4eba0a621..043c73dfd2c98 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -93,10 +93,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:36:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:36:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37839.70383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwql-0002Bm-V8; Wed, 25 Nov 2020 15:36:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37839.70383; Wed, 25 Nov 2020 15:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwql-0002Bf-Ry; Wed, 25 Nov 2020 15:36:55 +0000
Received: by outflank-mailman (input) for mailman id 37839;
 Wed, 25 Nov 2020 15:36:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1khwqk-0002Ba-DF
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:36:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9c3857a-623f-4371-8e6c-78b97d5ab704;
 Wed, 25 Nov 2020 15:36:53 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3305621D40;
 Wed, 25 Nov 2020 15:36:52 +0000 (UTC)
X-Inumbo-ID: d9c3857a-623f-4371-8e6c-78b97d5ab704
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606318613;
	bh=p6UGhE7Kc4+yR/tYHmb8lv+gJLMFBCybGCvMZ4pJ5Rk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=c4xRBXTeEIREiQPetfVI3J5Rw3WNws5loFNmXBO/HVVUJHLlQTxBUfK4RtMOy0E4h
	 meBar3EbG00p14TycPrFot5CVNkpShMJB30n6QNztyFMexEpCaLl3atNZJSByK+tNV
	 10yuZrTnX6Fl2207v7w5osH3uMKNpIszFvgqoRr4=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Brian Masney <bmasney@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 5.4 10/23] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Wed, 25 Nov 2020 10:36:25 -0500
Message-Id: <20201125153638.810419-10-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201125153638.810419-1-sashal@kernel.org>
References: <20201125153638.810419-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Brian Masney <bmasney@redhat.com>

[ Upstream commit 65cae18882f943215d0505ddc7e70495877308e6 ]

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), are
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 6deb49094c605..d817b7c862a62 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -93,10 +93,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:37:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:37:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37841.70395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwrF-0002Hs-7w; Wed, 25 Nov 2020 15:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37841.70395; Wed, 25 Nov 2020 15:37:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwrF-0002Hl-4r; Wed, 25 Nov 2020 15:37:25 +0000
Received: by outflank-mailman (input) for mailman id 37841;
 Wed, 25 Nov 2020 15:37:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1khwrE-0002HW-3T
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:37:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b130983c-9756-4cd7-a20f-dc6e76756103;
 Wed, 25 Nov 2020 15:37:23 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D5EF7221EB;
 Wed, 25 Nov 2020 15:37:21 +0000 (UTC)
X-Inumbo-ID: b130983c-9756-4cd7-a20f-dc6e76756103
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606318642;
	bh=krr+BXO3ubjqEgMxgDqhjCLFm4hDcYwCqMWmI9NCuNk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=1bz/2vYdK9E9XV0dqAblor/ppuvTtoScYLy9qAGGTANGIokE5UVpe1Z7pD7izcuyP
	 5N0Xsl1TdP54sMtl+Q+NtEo8wjd3nXZtJIHNWPSmuGKc/MvxSUAw/CMfN11lsNy9u9
	 eM4bH54OUNV7wAIPjXweLkzNLwx8AhvwNDIbN/lA=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Brian Masney <bmasney@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 4.19 07/15] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Wed, 25 Nov 2020 10:37:04 -0500
Message-Id: <20201125153712.810655-7-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201125153712.810655-1-sashal@kernel.org>
References: <20201125153712.810655-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Brian Masney <bmasney@redhat.com>

[ Upstream commit 65cae18882f943215d0505ddc7e70495877308e6 ]

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), are
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 717b4847b473f..6fffb86a32add 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -101,10 +101,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:37:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:37:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37849.70407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwre-0002OK-Gw; Wed, 25 Nov 2020 15:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37849.70407; Wed, 25 Nov 2020 15:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwre-0002OD-Du; Wed, 25 Nov 2020 15:37:50 +0000
Received: by outflank-mailman (input) for mailman id 37849;
 Wed, 25 Nov 2020 15:37:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1khwrc-0002MU-U9
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:37:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 44578b0c-e1b7-4159-9c44-8131f0ffaed3;
 Wed, 25 Nov 2020 15:37:43 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7DB98221F7;
 Wed, 25 Nov 2020 15:37:41 +0000 (UTC)
X-Inumbo-ID: 44578b0c-e1b7-4159-9c44-8131f0ffaed3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606318662;
	bh=CQGcO1E7GEgh3th3pKWM93pk8DAZcXYV+J7BSLkezjQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=zC1O/FIlwf/iHFYM6uyZVg/k2SCKO24tun2V6b7aExuS03W01ldKYdVWdfoL59vc8
	 fMwKYvjXlkWuN6zLNNHaG5B0euKlqZARo+/q7RNkkmGA2In/pcFwvS9MvHyIrR7674
	 rNhsiN6ncfOTKO3USIZalfleBEmBRLpfq3oHDQ88=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Brian Masney <bmasney@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 4.14 05/12] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Wed, 25 Nov 2020 10:37:27 -0500
Message-Id: <20201125153734.810826-5-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201125153734.810826-1-sashal@kernel.org>
References: <20201125153734.810826-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Brian Masney <bmasney@redhat.com>

[ Upstream commit 65cae18882f943215d0505ddc7e70495877308e6 ]

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), are
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2527540051ff0..e22ee24396158 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -99,10 +99,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:38:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37852.70419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwrq-0002Wf-VD; Wed, 25 Nov 2020 15:38:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37852.70419; Wed, 25 Nov 2020 15:38:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwrq-0002WY-Re; Wed, 25 Nov 2020 15:38:02 +0000
Received: by outflank-mailman (input) for mailman id 37852;
 Wed, 25 Nov 2020 15:38:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1khwrq-0002VJ-Ak
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:38:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f08f63be-fa4b-4fa0-a790-064278d173f6;
 Wed, 25 Nov 2020 15:38:01 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 10A63221FE;
 Wed, 25 Nov 2020 15:37:59 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
	id 1khwrq-0002VJ-Ak
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:38:02 +0000
X-Inumbo-ID: f08f63be-fa4b-4fa0-a790-064278d173f6
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f08f63be-fa4b-4fa0-a790-064278d173f6;
	Wed, 25 Nov 2020 15:38:01 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net [73.47.72.35])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 10A63221FE;
	Wed, 25 Nov 2020 15:37:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606318680;
	bh=wTQOE4SUn+DPuZZHeu0shEVL9fFiq/gn30Nkhrmkl8A=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=LreWwdIfxRmJgpF59iyqnvM8ucEYythtKh9zb6WFZQVDNQAwA6GPWW+SQEs3GaCib
	 RgmCYSy5cdLQEOn+42L8Eh1ywM1CIecSaXRTBj0DGBhpq1CEgduqTgI9EvZmkmAohd
	 zSfNyV7TNWHoAofShNt5JUFIQD6r7FXlAUcDD1aE=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Brian Masney <bmasney@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 4.9 05/10] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Wed, 25 Nov 2020 10:37:48 -0500
Message-Id: <20201125153753.810973-5-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201125153753.810973-1-sashal@kernel.org>
References: <20201125153753.810973-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Brian Masney <bmasney@redhat.com>

[ Upstream commit 65cae18882f943215d0505ddc7e70495877308e6 ]

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), are
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 8d2c6f071dccf..44bf8a22c97b8 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -98,10 +98,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:38:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37857.70431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khws5-0002eh-7u; Wed, 25 Nov 2020 15:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37857.70431; Wed, 25 Nov 2020 15:38:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khws5-0002ea-4q; Wed, 25 Nov 2020 15:38:17 +0000
Received: by outflank-mailman (input) for mailman id 37857;
 Wed, 25 Nov 2020 15:38:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kPj2=E7=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1khws4-0002eG-8c
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:38:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff96824a-4190-4410-8429-a0a9a861c5ba;
 Wed, 25 Nov 2020 15:38:15 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 01517221F7;
 Wed, 25 Nov 2020 15:38:13 +0000 (UTC)
X-Inumbo-ID: ff96824a-4190-4410-8429-a0a9a861c5ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606318694;
	bh=N/9mYrylQeD41uQIKVHiCHGYsB0T/taFOwU3YpP8X10=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Lo4JgGow5MYRduf7OeWzK3goSVQahSBK1lbpvaZKQmbYKktiB97AEF1nsDDQC0o/1
	 AW+bMxKiQ5mpffwv0OaKFcOh0J6x3xDBjwD8lknz69ziKATiyaT+j1LqXFDkPF9SIL
	 K/UnS+2ldeDyfzXD4RH92ATIXm1WYD06xgwPCtnE=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Brian Masney <bmasney@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 4.4 4/8] x86/xen: don't unbind uninitialized lock_kicker_irq
Date: Wed, 25 Nov 2020 10:38:04 -0500
Message-Id: <20201125153808.811104-4-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201125153808.811104-1-sashal@kernel.org>
References: <20201125153808.811104-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Brian Masney <bmasney@redhat.com>

[ Upstream commit 65cae18882f943215d0505ddc7e70495877308e6 ]

When booting a hyperthreaded system with the kernel parameter
'mitigations=auto,nosmt', the following warning occurs:

    WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
    ...
    Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
    ...
    Call Trace:
     xen_uninit_lock_cpu+0x28/0x62
     xen_hvm_cpu_die+0x21/0x30
     takedown_cpu+0x9c/0xe0
     ? trace_suspend_resume+0x60/0x60
     cpuhp_invoke_callback+0x9a/0x530
     _cpu_up+0x11a/0x130
     cpu_up+0x7e/0xc0
     bringup_nonboot_cpus+0x48/0x50
     smp_init+0x26/0x79
     kernel_init_freeable+0xea/0x229
     ? rest_init+0xaa/0xaa
     kernel_init+0xa/0x106
     ret_from_fork+0x35/0x40

The secondary CPUs are not activated with the nosmt mitigations and only
the primary thread on each CPU core is used. In this situation,
xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(), are
not called, so the lock_kicker_irq is not initialized for the secondary
CPUs. Let's fix this by exiting early in xen_uninit_lock_cpu() if the
irq is not set to avoid the warning from above for each secondary CPU.

Signed-off-by: Brian Masney <bmasney@redhat.com>
Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/xen/spinlock.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 85872a08994a1..e9fc0f7df0da8 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -301,10 +301,20 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
+	int irq;
+
 	if (!xen_pvspin)
 		return;
 
-	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
+	/*
+	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+	 */
+	irq = per_cpu(lock_kicker_irq, cpu);
+	if (irq == -1)
+		return;
+
+	unbind_from_irqhandler(irq, NULL);
 	per_cpu(lock_kicker_irq, cpu) = -1;
 	kfree(per_cpu(irq_name, cpu));
 	per_cpu(irq_name, cpu) = NULL;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:46:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37878.70443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwzq-0003gQ-43; Wed, 25 Nov 2020 15:46:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37878.70443; Wed, 25 Nov 2020 15:46:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khwzq-0003gJ-0p; Wed, 25 Nov 2020 15:46:18 +0000
Received: by outflank-mailman (input) for mailman id 37878;
 Wed, 25 Nov 2020 15:46:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NtXg=E7=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1khwzo-0003gD-0t
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:46:16 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7d00::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 629c2ebc-6da0-45c1-a8c0-d228cb2f5e48;
 Wed, 25 Nov 2020 15:46:13 +0000 (UTC)
Received: from AM5PR0301CA0014.eurprd03.prod.outlook.com
 (2603:10a6:206:14::27) by AM9PR08MB5938.eurprd08.prod.outlook.com
 (2603:10a6:20b:2da::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.21; Wed, 25 Nov
 2020 15:46:11 +0000
Received: from AM5EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:14:cafe::98) by AM5PR0301CA0014.outlook.office365.com
 (2603:10a6:206:14::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Wed, 25 Nov 2020 15:46:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT016.mail.protection.outlook.com (10.152.16.142) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 25 Nov 2020 15:46:10 +0000
Received: ("Tessian outbound d6c201accd3c:v71");
 Wed, 25 Nov 2020 15:46:10 +0000
Received: from aa55fc1f714e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3931BC6F-3E7F-43E3-9509-20383B639618.1; 
 Wed, 25 Nov 2020 15:46:04 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id aa55fc1f714e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 25 Nov 2020 15:46:04 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5798.eurprd08.prod.outlook.com (2603:10a6:10:1a6::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Wed, 25 Nov
 2020 15:46:02 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Wed, 25 Nov 2020
 15:46:02 +0000
X-Inumbo-ID: 629c2ebc-6da0-45c1-a8c0-d228cb2f5e48
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vZCq383xp/5qy+M97nZfOlE0VcTD2KOnamzXXXUAHsI=;
 b=VE/9xzK1uuAsiqHOBcP+hEOGVwb2wAjRqPERNP+TEwEAGm2KRiDLS8PZoCnJITad/TLh9tbsIdvgxDqxy1Yof2GLWHMVahEd4V4AZEhtTAZiVxWDSveWnyCxyUfOpaefzFsvna8Y4QiRSvVG+JeXfcx3pQhz8Q69tzIPqJ1r+0s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e147afdba466d563
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iMd/Wm90SaMsJdmKe/WRyd/h3CGvfy6swOAX8M4stUEOGy0+hx9Y8QAahExR6pkKEl2iiH0iW5M/IvLWzkTdtxyeOEC1SBjbPHFzlY0OVnFawCbCOrojl+9CCRtJw2B159DcSlb1Wp2CA0h8Fw4+qQmK486f1kcTJ+OJBuJOtlvnrek6oRVxuQnZ90sGNQ4jpSja47J6NA6DD1IKRm5yTAwZEZpnhXJOsuY3cgEMk4IzGStULGExjRwuqacLEUxCefaw+3DKdDv9hKGJY1uv5iwUB6sMHhHoEujZbjBB25ms5Z6Tn+/Y7utiWHVOhA4QHKABuATa1sbjBmhUsFX5eg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vZCq383xp/5qy+M97nZfOlE0VcTD2KOnamzXXXUAHsI=;
 b=SRF8pYVW8tFPA90MJhTkLH9PV2MI9tc9LDl1pXtSCwatTQkDY9yLxXnMrXJQ89AABmlGw7e/hsjB7ODfLVPJZwnv6s2qS+/Ul27C9rC90zqxkZbzFFvDA2H7thqt/Ffz8Qkyo8CG5UvZt93yI78U9Sls87vMGPA/W9cQM63yj114O0mwglYLNcGwHyh2j13mFVdGWT5PjQKwOBHkLEvnK8m57AjvwWHFzasO/mntneYhR6PqQDG8FEg1WIRFqB0w4ByzbkyQsRRGH6M1Vclk5eXCxfP9spW8W6BKlPSfZ8WSBt9PKsFz6bkkrZE6Qmq+lVmAODr4s3nHbo8m5BbGIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/libs: Simplify internal *.pc files
Thread-Topic: [PATCH] tools/libs: Simplify internal *.pc files
Thread-Index: AQHWwzpU13+Sxy8T80ynkBYhTAz0lKnY/bGA
Date: Wed, 25 Nov 2020 15:46:02 +0000
Message-ID: <617F2A46-68AA-4B80-8903-656027892933@arm.com>
References: <20201125144928.22778-1-andrew.cooper3@citrix.com>
In-Reply-To: <20201125144928.22778-1-andrew.cooper3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 83d4d2b1-66da-4294-8359-08d891593b95
x-ms-traffictypediagnostic: DBAPR08MB5798:|AM9PR08MB5938:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB593802286CC964B09BC6C2D29DFA0@AM9PR08MB5938.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4941;OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 r6FTJYl6iGbIoPYTZ46pgcnd20VHQ2AhrA5t/RSH8+WyCcDyUcay3F6KE8stilkVmQeVOX54Ee+KuoDZmZGaWCu1kzZPFGyteqzrj3b9YTJuE3tC6Npt1GxUrCGuQ4dgE5NXdo9kLGwWw3B37wTsoRtb/5iU5A3efmsTouvtFNjIdu0f60bDfnk1i12QtmcfgEdqcZDKP4BcWsD9akbMELmsB/Ek+6OCS0oUxEeNRfLb55RBVl3sdpVpeQlvlq6LUITDWT4pLuLO7NPO0fnccgi36ia0W+6pjmerk1vwv+ygReuR0+VAKqwnSUiq9GuKuFCBfotw4+KTSc6Iw+ZkFw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(136003)(376002)(346002)(396003)(33656002)(316002)(6512007)(66476007)(26005)(186003)(64756008)(83380400001)(8936002)(6506007)(4326008)(6916009)(2906002)(66556008)(66446008)(66946007)(53546011)(91956017)(76116006)(2616005)(5660300002)(71200400001)(86362001)(36756003)(54906003)(6486002)(8676002)(478600001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 ToRhXidCjLoPMnm+5VTs+HcRbCe6Eke0RBqFzwickIZDur7vclqI2GJjObcWyCc6ro33e0forZNkSqs7B0mAQsgBaZ2Jgeiuzs4TQKtDEZpMsSLu+FOZjebpZJ/DFvss7fuiMuG26VMtf2F5sAa6W1iMhO3YIp8AH4ue+QZL4E7qyTGCrL1x5I19GWLVQ2+Eu4jb9dnrSVU2xaoz0NI5L2ExvZrcUELFA60ZbDqryHdWOKk60jXEDE1iD+ZxFZCv50dMyqxo7gMkPk4cKkzgcIX5fWpJSyZTSePGSi55a7HC8TqO2JIEmIpw6dqcfjFBjRCj6sG48/0AIRD7nMPoweKZlrUHfaSdLiTtqBvAmfS4BEC49eCu1jFvzPh/uR1Ld+S4ISMvoGSUHQcnWOGs7mh1FzmwbxLC+mmVDqvesVuVnZX5ffr3e5WI6U+lFiA/2VwALWPC7YQ5JfG6S+520dwJnK1yADHlmJxQe1VsPKdj4MYAyIun2Ra+yBaww/ohq1F2858ubB+KZxiPZg3RHGdgjnaCs0VRt5R1iRdDHcUQyUWBa4dobiY0mRhAkwDe/uZLskpySCblN17b8cogRsB+GRl7+UCMDRnkEYnWl9QBHVxbdS7mRbK6zKxuQHox2O6VDNkZGXgm2wIWkzfat+jKDZgQQmjbD2ETmaP7jFI367sam4P1F3VwhWezhv5ZRSwoNKtF+wAhuUO6tlZDD14O1daZgq5X4FOwD7nx18MIz2a9BS+tVMHRYIaetMHtqvMLqLPm93M6QtN091oH3H+GDMHwNNRJDdSkRz0YBhuQXsPlwS8i92uteVuJC74cBCeucPSuOOQvSrgnbeJcMjyG8qWakjgP+keekwfgSPBw5FTGyGy+NQZ571Dd3eKxM53inLLZjzZhlnBy7wNpVQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <091995734B069540995914F43F2B9738@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5798
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	326bd53f-11f4-4167-7eee-08d89159367a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XHdCWg0VV4PC243pqUShAlqwVpeiq5WwcTRp7StSo1pUjEncx+1Cf8d1Lwf7ORd/OF4iZC3UTuRGKWcNns4NqhkuSaWeSw0ukfyMfDr2lVh3LRxMmzkECzMJgOiYlpOHmtMYxMKErKkgDkY89I92jbetf0oPhhMgW2KJev4agZcUf0OXHd3uRoYXcidjtjdeHavCxM46EW5u/3Ie+f43qvGsBNPo9CPokroJV8a8JGp+643ihF3mYYFj+Eyb1TOK7aFw5jWCe2HPQQ8ax0mb4MP8+aKNsosI4bwfQg2exRcf64d2+C1MMxhiVW3Z3bAH8h343/rfFm/v8MEHHz9nF0KLK0xWfwwmTrP/5HKCX9d32XVCZnzbvBJqbSQmvqeBn7g/hS8/ynOa+9KEXvyz2A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(396003)(39860400002)(136003)(346002)(46966005)(6486002)(83380400001)(107886003)(6512007)(36756003)(8676002)(5660300002)(356005)(478600001)(4326008)(81166007)(6862004)(70206006)(86362001)(70586007)(2906002)(8936002)(82310400003)(26005)(33656002)(82740400003)(47076004)(6506007)(186003)(316002)(2616005)(53546011)(36906005)(54906003)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Nov 2020 15:46:10.9836
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 83d4d2b1-66da-4294-8359-08d891593b95
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5938

Hi Andrew,

> On 25 Nov 2020, at 14:49, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> The internal package config file for libxenlight reads (reformatted to avoid
> exceeding the SMTP 998-character line length):
> 
>  Libs: -L${libdir}
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/gnttab
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/foreignmemory
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/devicemodel
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/ctrl
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/store
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/hypfs
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/gnttab
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/foreignmemory
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/devicemodel
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/ctrl
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/guest
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/light
>  -lxenlight
> 
> Drop duplicate -rpath-link='s to turn it into the slightly-more-manageable:
> 
>  Libs: -L${libdir}
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/call
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/ctrl
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/devicemodel
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/evtchn
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/foreignmemory
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/gnttab
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/guest
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/hypfs
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/light
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/store
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toolcore
>  -Wl,-rpath-link=/local/security/xen.git/tools/libs/light/../../../tools/libs/toollog
>  -lxenlight
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Nice reduction of the output :-)

Cheers
Bertrand

> ---
> CC: Ian Jackson <iwj@xenproject.org>
> CC: Wei Liu <wl@xen.org>
> CC: Juergen Gross <jgross@suse.com>
> ---
> tools/Rules.mk | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/Rules.mk b/tools/Rules.mk
> index f61da81f4a..5d92ff0699 100644
> --- a/tools/Rules.mk
> +++ b/tools/Rules.mk
> @@ -184,7 +184,7 @@ $(PKG_CONFIG_DIR)/%.pc: Makefile $(XEN_ROOT)/tools/Rules.mk $(PKG_CONFIG_DIR)
> 	echo "Description: $(PKG_CONFIG_DESC)"; \
> 	echo "Version: $(PKG_CONFIG_VERSION)"; \
> 	echo "Cflags: -I\$${includedir} $(CFLAGS_xeninclude)"; \
> -	echo "Libs: -L\$${libdir} $(PKG_CONFIG_USELIBS) -l$(PKG_CONFIG_LIB)"; \
> +	echo "Libs: -L\$${libdir} $(sort $(PKG_CONFIG_USELIBS)) -l$(PKG_CONFIG_LIB)"; \
> 	echo "Libs.private: $(PKG_CONFIG_LIBSPRIV)"; \
> 	echo "Requires.private: $(PKG_CONFIG_REQPRIV)"; \
> 	} > $@
> -- 
> 2.11.0
> 
> 
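
[Editor's note] The fix hinges on a GNU make text function: $(sort) not only orders its words, it also removes duplicates, which is what collapses the repeated -Wl,-rpath-link= entries into one each. A minimal, self-contained demo (the throwaway makefile and the USELIBS names below are illustrative only, not part of the Xen tree):

```shell
# Write a scratch makefile whose USELIBS list contains duplicates,
# mimicking the shape of PKG_CONFIG_USELIBS in the patch above.
cat > /tmp/sort-demo.mk <<'EOF'
USELIBS := toollog toolcore toollog evtchn toollog toolcore call
# $(info) prints at parse time; $(sort) both sorts and de-duplicates.
$(info $(sort $(USELIBS)))
all: ;
EOF
make -s -f /tmp/sort-demo.mk
# prints: call evtchn toolcore toollog
```

Note that $(sort) also reorders the flags as a side effect; for these -rpath-link search paths the patch accepts that, since each library lives in exactly one of the listed directories.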



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 15:55:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 15:55:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37830.70454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khx8i-0004fp-4W; Wed, 25 Nov 2020 15:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37830.70454; Wed, 25 Nov 2020 15:55:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khx8i-0004fi-0o; Wed, 25 Nov 2020 15:55:28 +0000
Received: by outflank-mailman (input) for mailman id 37830;
 Wed, 25 Nov 2020 15:32:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c0qk=E7=intel.com=oliver.sang@srs-us1.protection.inumbo.net>)
 id 1khwmH-00020Y-OR
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 15:32:17 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2546694e-c0e2-48a9-b874-33596e79ff35;
 Wed, 25 Nov 2020 15:32:04 +0000 (UTC)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 Nov 2020 07:32:02 -0800
Received: from xsang-optiplex-9020.sh.intel.com (HELO xsang-OptiPlex-9020)
 ([10.239.159.140])
 by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 Nov 2020 07:31:56 -0800
X-Inumbo-ID: 2546694e-c0e2-48a9-b874-33596e79ff35
IronPort-SDR: jJJI0sd6DNOHMrYq/Mo/3bsGUXwSTqS/HqHDCVIxi4FLUMuLy7nM3jamAoDh8Yle6ZcG6hELVW
 o/IGgn3C0hBA==
X-IronPort-AV: E=McAfee;i="6000,8403,9816"; a="151987672"
X-IronPort-AV: E=Sophos;i="5.78,369,1599548400"; 
   d="xz'?yaml'?scan'208";a="151987672"
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
IronPort-SDR: q8Fmkyzo/l+b4cBi9wKtIsxpgnFDNHF4KSDAnKTOQVZrUlm990XOHy/iL3Ja/iA6NdY1N2fmRj
 9YDf+99CoyQA==
X-IronPort-AV: E=Sophos;i="5.78,369,1599548400"; 
   d="xz'?yaml'?scan'208";a="333027014"
Date: Wed, 25 Nov 2020 23:46:12 +0800
From: kernel test robot <oliver.sang@intel.com>
To: Juergen Gross <jgross@suse.com>
Cc: 0day robot <lkp@intel.com>, LKML <linux-kernel@vger.kernel.org>,
	lkp@lists.01.org, xen-devel@lists.xenproject.org, x86@kernel.org,
	virtualization@lists.linux-foundation.org, peterz@infradead.org,
	luto@kernel.org, Juergen Gross <jgross@suse.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [x86/paravirt]  fd8d46a7a2:
 kernel-selftests.livepatch.test-callbacks.sh.fail
Message-ID: <20201125154612.GB31714@xsang-OptiPlex-9020>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="O5XBE6gyVG5Rl6Rj"
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-12-jgross@suse.com>
User-Agent: NeoMutt/20170113 (1.7.2)


--O5XBE6gyVG5Rl6Rj
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline


Greetings,

FYI, we noticed the following commit (built with gcc-9):

commit: fd8d46a7a2c51313ee14c43af60ff337279384ef ("x86/paravirt: switch functions with custom code to ALTERNATIVE")
url: https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934


in testcase: kernel-selftests
version: kernel-selftests-x86_64-b5a583fb-1_20201015
with the following parameters:

	group: livepatch
	ucode: 0xe2

test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt


on test machine: 8 threads Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz with 32G memory

it caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):




If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>

KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.6-kselftests-fd8d46a7a2c51313ee14c43af60ff337279384ef
2020-11-25 12:01:24 ln -sf /usr/bin/clang
2020-11-25 12:01:24 ln -sf /usr/bin/llc
2020-11-25 12:01:25 sed -i s/default_timeout=45/default_timeout=300/ kselftest/runner.sh
2020-11-25 12:01:25 make run_tests -C livepatch
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-fd8d46a7a2c51313ee14c43af60ff337279384ef/tools/testing/selftests/livepatch'
TAP version 13
1..5
# selftests: livepatch: test-livepatch.sh
# TEST: basic function patching ... ERROR: failed to complete transition
not ok 1 selftests: livepatch: test-livepatch.sh # exit=1
# selftests: livepatch: test-callbacks.sh
# TEST: target module before livepatch ... ERROR: modprobe: ERROR: could not insert 'test_klp_callbacks_demo': Device or resource busy
not ok 2 selftests: livepatch: test-callbacks.sh # exit=1
# selftests: livepatch: test-shadow-vars.sh
# TEST: basic shadow variable API ... ok
ok 3 selftests: livepatch: test-shadow-vars.sh
# selftests: livepatch: test-state.sh
# TEST: system state modification ... ERROR: modprobe: ERROR: could not insert 'test_klp_state': Device or resource busy
not ok 4 selftests: livepatch: test-state.sh # exit=1
# selftests: livepatch: test-ftrace.sh
# TEST: livepatch interaction with ftrace_enabled sysctl ... ERROR: test_klp_livepatch unexpectedly loaded
not ok 5 selftests: livepatch: test-ftrace.sh # exit=1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-fd8d46a7a2c51313ee14c43af60ff337279384ef/tools/testing/selftests/livepatch'
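
[Editor's note] The selftest log above is TAP ("Test Anything Protocol") output; a pass/fail summary can be pulled out of such a log with a few lines of shell. The snippet below embeds sample result lines copied from the log above:

```shell
# Count passing vs failing TAP test points from the livepatch run above.
tap='not ok 1 selftests: livepatch: test-livepatch.sh # exit=1
not ok 2 selftests: livepatch: test-callbacks.sh # exit=1
ok 3 selftests: livepatch: test-shadow-vars.sh
not ok 4 selftests: livepatch: test-state.sh # exit=1
not ok 5 selftests: livepatch: test-ftrace.sh # exit=1'
# "ok N" lines are passes; "not ok N" lines are failures.
pass=$(printf '%s\n' "$tap" | grep -c '^ok ')
fail=$(printf '%s\n' "$tap" | grep -c '^not ok ')
echo "passed=$pass failed=$fail"
# prints: passed=1 failed=4
```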



To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml



Thanks,
Oliver Sang


--O5XBE6gyVG5Rl6Rj
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="config-5.10.0-rc4-00011-gfd8d46a7a2c5"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 5.10.0-rc4 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-9 (Debian 9.3.0-15) 9.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=90300
CONFIG_LD_VERSION=235000000
CONFIG_CLANG_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_ZSTD=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_KERNEL_ZSTD is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_WATCH_QUEUE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_USELIB=y
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_GENERIC_IRQ_INJECTION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_SIM=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y
CONFIG_CONTEXT_TRACKING=y
# CONFIG_CONTEXT_TRACKING_FORCE is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
# end of Timers subsystem

# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y

#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_HAVE_SCHED_AVG_IRQ=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
CONFIG_PREEMPT_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_SRCU=y
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RCU=y
CONFIG_TASKS_RUDE_RCU=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
CONFIG_RCU_NOCB_CPU=y
# end of RCU Subsystem

CONFIG_BUILD_BIN2C=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y

#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
# CONFIG_CGROUP_RDMA is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_RD_ZSTD=y
# CONFIG_BOOT_CONFIG is not set
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BPF=y
CONFIG_EXPERT=y
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_PRINTK_NMI=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_HAVE_ARCH_USERFAULTFD_WP=y
CONFIG_MEMBARRIER=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
# CONFIG_BPF_LSM is not set
CONFIG_BPF_SYSCALL=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
# CONFIG_BPF_PRELOAD is not set
CONFIG_USERFAULTFD=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_EMBEDDED=y
CONFIG_HAVE_PERF_EVENTS=y
# CONFIG_PC104 is not set

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_SLUB_MEMCG_SYSFS_ON is not set
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_SLAB_MERGE_DEFAULT=y
# CONFIG_SLAB_FREELIST_RANDOM is not set
# CONFIG_SLAB_FREELIST_HARDENED is not set
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
CONFIG_SLUB_CPU_PARTIAL=y
CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# end of General setup

CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_FILTER_PGPROT=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_DYNAMIC_PHYSICAL_MASK=y
CONFIG_PGTABLE_LEVELS=5
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_FEATURE_NAMES=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
CONFIG_RETPOLINE=y
CONFIG_X86_CPU_RESCTRL=y
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_NUMACHIP is not set
# CONFIG_X86_VSMP is not set
CONFIG_X86_UV=y
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
CONFIG_X86_AMD_PLATFORM_DEVICE=y
CONFIG_IOSF_MBI=y
# CONFIG_IOSF_MBI_DEBUG is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_XXL=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
CONFIG_XEN=y
CONFIG_XEN_PV=y
CONFIG_XEN_PV_SMP=y
# CONFIG_XEN_DOM0 is not set
CONFIG_XEN_PVHVM=y
CONFIG_XEN_PVHVM_SMP=y
CONFIG_XEN_512GB=y
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_XEN_PVH is not set
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_PVH is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
# CONFIG_PROCESSOR_SELECT is not set
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
CONFIG_MAXSMP=y
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
# CONFIG_X86_MCELOG_LEGACY is not set
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m
CONFIG_X86_THERMAL_VECTOR=y

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=y
CONFIG_PERF_EVENTS_INTEL_CSTATE=y
# CONFIG_PERF_EVENTS_AMD_POWER is not set
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_I8K=m
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_X86_5LEVEL=y
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
CONFIG_AMD_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=10
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=m
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
# CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_X86_UMIP=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
# CONFIG_KEXEC_SIG is not set
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_X86_NEED_RELOCS=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
CONFIG_HOTPLUG_CPU=y
CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_LEGACY_VSYSCALL_XONLY is not set
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_ARCH_HAS_ADD_PAGES=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
# CONFIG_SUSPEND_SKIP_SYNC is not set
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_PM_CLK=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
CONFIG_ACPI_EC_DEBUGFS=m
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_TAD is not set
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_SBS=m
CONFIG_ACPI_HED=y
CONFIG_ACPI_CUSTOM_METHOD=m
CONFIG_ACPI_BGRT=y
# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=m
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_DPTF is not set
CONFIG_ACPI_WATCHDOG=y
CONFIG_ACPI_EXTLOG=m
CONFIG_ACPI_ADXL=y
# CONFIG_ACPI_CONFIGFS is not set
# CONFIG_PMIC_OPREGION is not set
CONFIG_X86_PM_TIMER=y
CONFIG_SFI=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
CONFIG_X86_PCC_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
CONFIG_X86_AMD_FREQ_SENSITIVITY=m
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_MMCONF_FAM10H=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
# CONFIG_ISA_BUS is not set
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# CONFIG_X86_SYSFB is not set
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
# end of Binary Emulations

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
CONFIG_FW_CFG_SYSFS=y
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=y
CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
# CONFIG_EFI_RCI2_TABLE is not set
# CONFIG_EFI_DISABLE_PCI_DMA is not set
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
CONFIG_EFI_EARLYCON=y
CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
# CONFIG_KVM_WERROR is not set
CONFIG_KVM_INTEL=y
# CONFIG_KVM_AMD is not set
CONFIG_KVM_MMU_AUDIT=y
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y

#
# General architecture-dependent options
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HOTPLUG_SMT=y
CONFIG_GENERIC_ENTRY=y
CONFIG_OPROFILE=m
CONFIG_OPROFILE_EVENT_MULTIPLEX=y
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_HAVE_STATIC_CALL=y
CONFIG_HAVE_STATIC_CALL_INLINE=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
CONFIG_MODULE_SIG_SHA256=y
# CONFIG_MODULE_SIG_SHA384 is not set
# CONFIG_MODULE_SIG_SHA512 is not set
CONFIG_MODULE_SIG_HASH="sha256"
# CONFIG_MODULE_COMPRESS is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
# CONFIG_UNUSED_SYMBOLS is not set
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLK_SCSI_REQUEST=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m
# CONFIG_BLK_DEV_ZONED is not set
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
# CONFIG_BLK_CMDLINE_PARSER is not set
# CONFIG_BLK_WBT is not set
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
CONFIG_BLK_DEBUG_FS=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
# end of Partition Types

CONFIG_BLOCK_COMPAT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=y
# CONFIG_IOSCHED_BFQ is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_THP_SWAP=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
# CONFIG_ZSWAP_DEFAULT_ON is not set
CONFIG_ZPOOL=y
CONFIG_ZBUD=y
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_PGTABLE_MAPPING is not set
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_GENERIC_EARLY_IOREMAP=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ZONE_DEVICE=y
CONFIG_DEV_PAGEMAP_OPS=y
CONFIG_HMM_MIRROR=y
CONFIG_DEVICE_PRIVATE=y
CONFIG_VMAP_PFN=y
CONFIG_FRAME_VECTOR=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
# CONFIG_PERCPU_STATS is not set
CONFIG_GUP_BENCHMARK=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_MAPPING_DIRTY_HELPERS=y
# end of Memory Management options

CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_NET_REDIRECT=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
# CONFIG_TLS_DEVICE is not set
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_USER_COMPAT is not set
# CONFIG_XFRM_INTERFACE is not set
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
CONFIG_XDP_SOCKETS=y
# CONFIG_XDP_SOCKETS_DIAG is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
CONFIG_NET_IPIP=y
CONFIG_NET_IPGRE_DEMUX=y
CONFIG_NET_IP_TUNNEL=y
CONFIG_NET_IPGRE=y
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_NET_UDP_TUNNEL=y
CONFIG_NET_FOU=y
CONFIG_NET_FOU_IP_TUNNELS=y
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
# CONFIG_INET_ESP_OFFLOAD is not set
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=y
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
# CONFIG_INET_RAW_DIAG is not set
# CONFIG_INET_DIAG_DESTROY is not set
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
# CONFIG_TCP_CONG_NV is not set
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
# CONFIG_TCP_CONG_CDG is not set
# CONFIG_TCP_CONG_BBR is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
# CONFIG_INET6_ESP_OFFLOAD is not set
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
# CONFIG_IPV6_ILA is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=y
CONFIG_IPV6_VTI=m
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_GRE=y
CONFIG_IPV6_FOU=y
CONFIG_IPV6_FOU_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
# CONFIG_IPV6_SUBTREES is not set
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_IPV6_SEG6_LWTUNNEL=y
# CONFIG_IPV6_SEG6_HMAC is not set
CONFIG_IPV6_SEG6_BPF=y
# CONFIG_IPV6_RPL_LWTUNNEL is not set
CONFIG_NETLABEL=y
CONFIG_MPTCP=y
CONFIG_INET_MPTCP_DIAG=m
CONFIG_MPTCP_IPV6=y
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
CONFIG_NETWORK_PHY_TIMESTAMPING=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
CONFIG_NETFILTER_NETLINK_ACCT=m
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_COMMON=m
# CONFIG_NF_LOG_NETDEV is not set
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
# CONFIG_NFT_NUMGEN is not set
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
# CONFIG_NFT_CONNLIMIT is not set
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_MASQ=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_NAT=m
# CONFIG_NFT_TUNNEL is not set
CONFIG_NFT_OBJREF=m
CONFIG_NFT_QUEUE=m
# CONFIG_NFT_QUOTA is not set
CONFIG_NFT_REJECT=m
CONFIG_NFT_REJECT_INET=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_HASH=m
# CONFIG_NFT_XFRM is not set
# CONFIG_NFT_SOCKET is not set
# CONFIG_NFT_OSF is not set
# CONFIG_NFT_TPROXY is not set
# CONFIG_NFT_SYNPROXY is not set
# CONFIG_NF_DUP_NETDEV is not set
# CONFIG_NFT_DUP_NETDEV is not set
# CONFIG_NFT_FWD_NETDEV is not set
CONFIG_NF_FLOW_TABLE_INET=m
CONFIG_NF_FLOW_TABLE=m
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LED=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_NETFILTER_XT_MATCH_L2TP=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_NFACCT=m
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_TIME=m
CONFIG_NETFILTER_XT_MATCH_U32=m
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
# CONFIG_IP_VS_FO is not set
# CONFIG_IP_VS_OVF is not set
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
# CONFIG_IP_VS_MH is not set
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
# CONFIG_NFT_DUP_IPV4 is not set
# CONFIG_NFT_FIB_IPV4 is not set
# CONFIG_NF_TABLES_ARP is not set
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_DUP_IPV4=m
# CONFIG_NF_LOG_ARP is not set
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
# CONFIG_NFT_DUP_IPV6 is not set
# CONFIG_NFT_FIB_IPV6 is not set
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
# CONFIG_IP6_NF_MATCH_SRH is not set
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
# CONFIG_NF_TABLES_BRIDGE is not set
# CONFIG_NF_CONNTRACK_BRIDGE is not set
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m

#
# DCCP CCIDs Configuration
#
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
CONFIG_IP_DCCP_CCID3=y
# CONFIG_IP_DCCP_CCID3_DEBUG is not set
CONFIG_IP_DCCP_TFRC_LIB=y
# end of DCCP CCIDs Configuration

#
# DCCP Kernel Hacking
#
# CONFIG_IP_DCCP_DEBUG is not set
# end of DCCP Kernel Hacking

CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
# CONFIG_ATM_MPOA is not set
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=y
CONFIG_GARP=y
CONFIG_MRP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
# CONFIG_BRIDGE_MRP is not set
CONFIG_HAVE_NET_DSA=y
# CONFIG_NET_DSA is not set
CONFIG_VLAN_8021Q=y
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
CONFIG_6LOWPAN_NHC=m
CONFIG_6LOWPAN_NHC_DEST=m
CONFIG_6LOWPAN_NHC_FRAGMENT=m
CONFIG_6LOWPAN_NHC_HOP=m
CONFIG_6LOWPAN_NHC_IPV6=m
CONFIG_6LOWPAN_NHC_MOBILITY=m
CONFIG_6LOWPAN_NHC_ROUTING=m
CONFIG_6LOWPAN_NHC_UDP=m
# CONFIG_6LOWPAN_GHC_EXT_HDR_HOP is not set
# CONFIG_6LOWPAN_GHC_UDP is not set
# CONFIG_6LOWPAN_GHC_ICMPV6 is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_DEST is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE is not set
CONFIG_IEEE802154=m
# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
CONFIG_IEEE802154_SOCKET=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
# CONFIG_NET_SCH_CBS is not set
CONFIG_NET_SCH_ETF=m
# CONFIG_NET_SCH_TAPRIO is not set
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
# CONFIG_NET_SCH_SKBPRIO is not set
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=m
# CONFIG_NET_SCH_CAKE is not set
CONFIG_NET_SCH_FQ=m
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_SCH_PLUG=m
CONFIG_NET_SCH_ETS=m
# CONFIG_NET_SCH_DEFAULT is not set

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
CONFIG_NET_EMATCH_CANID=m
CONFIG_NET_EMATCH_IPSET=m
CONFIG_NET_EMATCH_IPT=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
CONFIG_NET_ACT_IPT=m
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
CONFIG_NET_ACT_MPLS=m
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
CONFIG_NET_ACT_CONNMARK=m
CONFIG_NET_ACT_CTINFO=m
CONFIG_NET_ACT_SKBMOD=m
CONFIG_NET_ACT_IFE=m
CONFIG_NET_ACT_TUNNEL_KEY=m
CONFIG_NET_ACT_CT=m
# CONFIG_NET_ACT_GATE is not set
CONFIG_NET_IFE_SKBMARK=m
CONFIG_NET_IFE_SKBPRIO=m
CONFIG_NET_IFE_SKBTCINDEX=m
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_OPENVSWITCH_VXLAN=m
CONFIG_OPENVSWITCH_GENEVE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_HYPERV_VSOCKETS=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=m
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
# CONFIG_HSR is not set
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
# CONFIG_CAN_J1939 is not set
# CONFIG_CAN_ISOTP is not set

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_VXCAN is not set
CONFIG_CAN_SLCAN=m
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
# CONFIG_CAN_CC770_ISA is not set
CONFIG_CAN_CC770_PLATFORM=m
# CONFIG_CAN_IFI_CANFD is not set
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
# CONFIG_CAN_F81601 is not set
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PLX_PCI=m
# CONFIG_CAN_SJA1000_ISA is not set
CONFIG_CAN_SJA1000_PLATFORM=m
CONFIG_CAN_SOFTING=m

#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# CONFIG_CAN_MCP251XFD is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
CONFIG_CAN_8DEV_USB=m
CONFIG_CAN_EMS_USB=m
CONFIG_CAN_ESD_USB2=m
# CONFIG_CAN_GS_USB is not set
CONFIG_CAN_KVASER_USB=m
# CONFIG_CAN_MCBA_USB is not set
CONFIG_CAN_PEAK_USB=m
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set
# end of CAN Device Drivers

CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_CMTP=m
CONFIG_BT_HIDP=m
CONFIG_BT_HS=y
CONFIG_BT_LE=y
# CONFIG_BT_6LOWPAN is not set
# CONFIG_BT_LEDS is not set
# CONFIG_BT_MSFTEXT is not set
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set

#
# Bluetooth device drivers
#
CONFIG_BT_INTEL=m
CONFIG_BT_BCM=m
CONFIG_BT_RTL=m
CONFIG_BT_HCIBTUSB=m
# CONFIG_BT_HCIBTUSB_AUTOSUSPEND is not set
CONFIG_BT_HCIBTUSB_BCM=y
# CONFIG_BT_HCIBTUSB_MTK is not set
CONFIG_BT_HCIBTUSB_RTL=y
CONFIG_BT_HCIBTSDIO=m
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
# CONFIG_BT_HCIUART_INTEL is not set
# CONFIG_BT_HCIUART_AG6XX is not set
CONFIG_BT_HCIBCM203X=m
CONFIG_BT_HCIBPA10X=m
CONFIG_BT_HCIBFUSB=m
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
CONFIG_BT_MRVL_SDIO=m
CONFIG_BT_ATH3K=m
# CONFIG_BT_MTKSDIO is not set
# end of Bluetooth device drivers

# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_LIB80211=m
# CONFIG_LIB80211_DEBUG is not set
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
# CONFIG_MAC80211_MESH is not set
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
# CONFIG_WIMAX is not set
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=m
# CONFIG_NET_9P_XEN is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
# CONFIG_NFC is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_NET_SOCK_MSG=y
CONFIG_NET_DEVLINK=y
CONFIG_PAGE_POOL=y
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y
CONFIG_HAVE_EBPF_JIT=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=m
CONFIG_PCIE_ECRC=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
# CONFIG_PCIE_DPC is not set
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_BW is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=y
# CONFIG_PCI_PF_STUB is not set
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
CONFIG_PCI_HYPERV=m
# CONFIG_PCIE_BUS_TUNE_OFF is not set
CONFIG_PCIE_BUS_DEFAULT=y
# CONFIG_PCIE_BUS_SAFE is not set
# CONFIG_PCIE_BUS_PERFORMANCE is not set
# CONFIG_PCIE_BUS_PEER2PEER is not set
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
# CONFIG_HOTPLUG_PCI_CPCI is not set
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=y
CONFIG_PCI_HYPERV_INTERFACE=m

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

CONFIG_PCCARD=y
# CONFIG_PCMCIA is not set
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=m
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_RAPIDIO is not set

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER=y
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# end of Firmware loader

CONFIG_WANT_DEV_COREDUMP=y
CONFIG_ALLOW_DEV_COREDUMP=y
CONFIG_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_REGMAP_SPI=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_GNSS is not set
CONFIG_MTD=m
# CONFIG_MTD_TESTS is not set

#
# Partition parsers
#
# CONFIG_MTD_AR7_PARTS is not set
# CONFIG_MTD_CMDLINE_PARTS is not set
# CONFIG_MTD_REDBOOT_PARTS is not set
# end of Partition parsers

#
# User Modules And Translation Layers
#
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_BLOCK=m
# CONFIG_MTD_BLOCK_RO is not set
# CONFIG_FTL is not set
# CONFIG_NFTL is not set
# CONFIG_INFTL is not set
# CONFIG_RFD_FTL is not set
# CONFIG_SSFDC is not set
# CONFIG_SM_FTL is not set
# CONFIG_MTD_OOPS is not set
# CONFIG_MTD_SWAP is not set
# CONFIG_MTD_PARTITIONED_MASTER is not set

#
# RAM/ROM/Flash chip drivers
#
# CONFIG_MTD_CFI is not set
# CONFIG_MTD_JEDECPROBE is not set
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
# CONFIG_MTD_RAM is not set
# CONFIG_MTD_ROM is not set
# CONFIG_MTD_ABSENT is not set
# end of RAM/ROM/Flash chip drivers

#
# Mapping drivers for chip access
#
# CONFIG_MTD_COMPLEX_MAPPINGS is not set
# CONFIG_MTD_INTEL_VR_NOR is not set
# CONFIG_MTD_PLATRAM is not set
# end of Mapping drivers for chip access

#
# Self-contained MTD device drivers
#
# CONFIG_MTD_PMC551 is not set
# CONFIG_MTD_DATAFLASH is not set
# CONFIG_MTD_MCHP23K256 is not set
# CONFIG_MTD_SST25L is not set
# CONFIG_MTD_SLRAM is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLOCK2MTD is not set

#
# Disk-On-Chip Device Drivers
#
# CONFIG_MTD_DOCG3 is not set
# end of Self-contained MTD device drivers

#
# NAND
#
# CONFIG_MTD_ONENAND is not set
# CONFIG_MTD_RAW_NAND is not set
# CONFIG_MTD_SPI_NAND is not set

#
# ECC engine support
#
# end of ECC engine support
# end of NAND

#
# LPDDR & LPDDR2 PCM memory drivers
#
# CONFIG_MTD_LPDDR is not set
# end of LPDDR & LPDDR2 PCM memory drivers

# CONFIG_MTD_SPI_NOR is not set
CONFIG_MTD_UBI=m
CONFIG_MTD_UBI_WL_THRESHOLD=4096
CONFIG_MTD_UBI_BEB_LIMIT=20
# CONFIG_MTD_UBI_FASTMAP is not set
# CONFIG_MTD_UBI_GLUEBI is not set
# CONFIG_MTD_UBI_BLOCK is not set
# CONFIG_MTD_HYPERBUS is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
# CONFIG_PARPORT_AX88796 is not set
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_FD=m
CONFIG_CDROM=m
# CONFIG_PARIDE is not set
CONFIG_BLK_DEV_PCIESSD_MTIP32XX=m
CONFIG_ZRAM=m
# CONFIG_ZRAM_WRITEBACK is not set
# CONFIG_ZRAM_MEMORY_TRACKING is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_SKD is not set
CONFIG_BLK_DEV_SX8=m
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_RSXX is not set

#
# NVME Support
#
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
# CONFIG_NVME_MULTIPATH is not set
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
CONFIG_NVME_FC=m
# CONFIG_NVME_TCP is not set
CONFIG_NVME_TARGET=m
# CONFIG_NVME_TARGET_PASSTHRU is not set
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_FC=m
CONFIG_NVME_TARGET_FCLOOP=m
# CONFIG_NVME_TARGET_TCP is not set
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
# CONFIG_ICS932S401 is not set
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
CONFIG_VMWARE_BALLOON=m
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_PVPANIC=y
# CONFIG_C2PORT is not set

#
# EEPROM support
#
CONFIG_EEPROM_AT24=m
# CONFIG_EEPROM_AT25 is not set
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_INTEL_MEI_VIRTIO is not set
# CONFIG_INTEL_MEI_HDCP is not set
CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_MISC_ALCOR_PCI is not set
# CONFIG_MISC_RTSX_PCI is not set
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
# end of Misc devices

CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
CONFIG_SCSI_CXGB3_ISCSI=m
CONFIG_SCSI_CXGB4_ISCSI=m
CONFIG_SCSI_BNX2_ISCSI=m
CONFIG_SCSI_BNX2X_FCOE=m
CONFIG_BE2ISCSI=m
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
CONFIG_SCSI_HPSA=m
CONFIG_SCSI_3W_9XXX=m
CONFIG_SCSI_3W_SAS=m
# CONFIG_SCSI_ACARD is not set
CONFIG_SCSI_AACRAID=m
# CONFIG_SCSI_AIC7XXX is not set
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=4
CONFIG_AIC79XX_RESET_DELAY_MS=15000
# CONFIG_AIC79XX_DEBUG_ENABLE is not set
CONFIG_AIC79XX_DEBUG_MASK=0
# CONFIG_AIC79XX_REG_PRETTY_PRINT is not set
# CONFIG_SCSI_AIC94XX is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
CONFIG_SCSI_MVSAS_TASKLET=y
CONFIG_SCSI_MVUMI=m
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
CONFIG_SCSI_ARCMSR=m
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
# CONFIG_SCSI_SMARTPQI is not set
CONFIG_SCSI_UFSHCD=m
CONFIG_SCSI_UFSHCD_PCI=m
# CONFIG_SCSI_UFS_DWC_TC_PCI is not set
# CONFIG_SCSI_UFSHCD_PLATFORM is not set
# CONFIG_SCSI_UFS_BSG is not set
CONFIG_SCSI_HPTIOP=m
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
CONFIG_VMWARE_PVSCSI=m
# CONFIG_XEN_SCSI_FRONTEND is not set
CONFIG_HYPERV_STORAGE=m
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
CONFIG_FCOE_FNIC=m
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_GDTH is not set
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_PPA is not set
# CONFIG_SCSI_IMM is not set
CONFIG_SCSI_STEX=m
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
CONFIG_SCSI_QLA_FC=m
CONFIG_TCM_QLA2XXX=m
# CONFIG_TCM_QLA2XXX_DEBUG is not set
CONFIG_SCSI_QLA_ISCSI=m
# CONFIG_QEDI is not set
# CONFIG_QEDF is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
CONFIG_SCSI_DEBUG=m
CONFIG_SCSI_PMCRAID=m
CONFIG_SCSI_PM8001=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
# end of SCSI device support

CONFIG_ATA=m
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_ACARD_AHCI=m
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_DWC is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
CONFIG_PATA_ALI=m
CONFIG_PATA_AMD=m
CONFIG_PATA_ARTOP=m
CONFIG_PATA_ATIIXP=m
CONFIG_PATA_ATP867X=m
CONFIG_PATA_CMD64X=m
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
CONFIG_PATA_HPT366=m
CONFIG_PATA_HPT37X=m
CONFIG_PATA_HPT3X2N=m
CONFIG_PATA_HPT3X3=m
# CONFIG_PATA_HPT3X3_DMA is not set
CONFIG_PATA_IT8213=m
CONFIG_PATA_IT821X=m
CONFIG_PATA_JMICRON=m
CONFIG_PATA_MARVELL=m
CONFIG_PATA_NETCELL=m
CONFIG_PATA_NINJA32=m
# CONFIG_PATA_NS87415 is not set
CONFIG_PATA_OLDPIIX=m
# CONFIG_PATA_OPTIDMA is not set
CONFIG_PATA_PDC2027X=m
CONFIG_PATA_PDC_OLD=m
# CONFIG_PATA_RADISYS is not set
CONFIG_PATA_RDC=m
CONFIG_PATA_SCH=m
CONFIG_PATA_SERVERWORKS=m
CONFIG_PATA_SIL680=m
CONFIG_PATA_SIS=m
CONFIG_PATA_TOSHIBA=m
# CONFIG_PATA_TRIFLEX is not set
CONFIG_PATA_VIA=m
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
CONFIG_PATA_ACPI=m
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
# CONFIG_MD_MULTIPATH is not set
CONFIG_MD_FAULTY=m
# CONFIG_MD_CLUSTER is not set
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
# CONFIG_DM_WRITECACHE is not set
# CONFIG_DM_EBS is not set
CONFIG_DM_ERA=m
# CONFIG_DM_CLONE is not set
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
# CONFIG_DM_MULTIPATH_HST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
# CONFIG_DM_INTEGRITY is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
CONFIG_ISCSI_TARGET_CXGB4=m
# CONFIG_SBP_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
# CONFIG_FUSION_FC is not set
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=128
CONFIG_FUSION_CTL=m
CONFIG_FUSION_LOGGING=y

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
CONFIG_BONDING=m
CONFIG_DUMMY=y
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
CONFIG_NET_FC=y
CONFIG_IFB=y
CONFIG_NET_TEAM=m
CONFIG_NET_TEAM_MODE_BROADCAST=m
CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
CONFIG_NET_TEAM_MODE_RANDOM=m
CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
CONFIG_NET_TEAM_MODE_LOADBALANCE=m
CONFIG_MACVLAN=m
CONFIG_MACVTAP=m
# CONFIG_IPVLAN is not set
CONFIG_VXLAN=y
CONFIG_GENEVE=y
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
CONFIG_MACSEC=y
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_NTB_NETDEV=m
CONFIG_TUN=m
CONFIG_TAP=m
# CONFIG_TUN_VNET_CROSS_LE is not set
CONFIG_VETH=y
CONFIG_VIRTIO_NET=m
CONFIG_NLMON=m
CONFIG_NET_VRF=y
CONFIG_VSOCKMON=m
# CONFIG_ARCNET is not set
# CONFIG_ATM_DRIVERS is not set

#
# Distributed Switch Architecture drivers
#
# end of Distributed Switch Architecture drivers

CONFIG_ETHERNET=y
CONFIG_MDIO=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
# CONFIG_NET_VENDOR_ALTEON is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
CONFIG_ENA_ETHERNET=m
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
CONFIG_AMD_XGBE=m
# CONFIG_AMD_XGBE_DCB is not set
CONFIG_AMD_XGBE_HAVE_ECC=y
CONFIG_NET_VENDOR_AQUANTIA=y
CONFIG_AQTION=m
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
CONFIG_ALX=m
CONFIG_NET_VENDOR_AURORA=y
# CONFIG_AURORA_NB8800 is not set
CONFIG_NET_VENDOR_BROADCOM=y
CONFIG_B44=m
CONFIG_B44_PCI_AUTOSELECT=y
CONFIG_B44_PCICORE_AUTOSELECT=y
CONFIG_B44_PCI=y
# CONFIG_BCMGENET is not set
CONFIG_BNX2=m
CONFIG_CNIC=m
# CONFIG_TIGON3 is not set
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
# CONFIG_SYSTEMPORT is not set
CONFIG_BNXT=m
CONFIG_BNXT_SRIOV=y
CONFIG_BNXT_FLOWER_OFFLOAD=y
CONFIG_BNXT_DCB=y
CONFIG_BNXT_HWMON=y
CONFIG_NET_VENDOR_BROCADE=y
CONFIG_BNA=m
CONFIG_NET_VENDOR_CADENCE=y
CONFIG_MACB=m
CONFIG_MACB_USE_HWSTAMP=y
# CONFIG_MACB_PCI is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
CONFIG_CAVIUM_PTP=y
CONFIG_LIQUIDIO=m
CONFIG_LIQUIDIO_VF=m
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
CONFIG_CHELSIO_T3=m
CONFIG_CHELSIO_T4=m
# CONFIG_CHELSIO_T4_DCB is not set
CONFIG_CHELSIO_T4VF=m
CONFIG_CHELSIO_LIB=m
CONFIG_CHELSIO_INLINE_CRYPTO=y
CONFIG_NET_VENDOR_CISCO=y
CONFIG_ENIC=m
CONFIG_NET_VENDOR_CORTINA=y
# CONFIG_CX_ECAT is not set
CONFIG_DNET=m
CONFIG_NET_VENDOR_DEC=y
# CONFIG_NET_TULIP is not set
# CONFIG_NET_VENDOR_DLINK is not set
CONFIG_NET_VENDOR_EMULEX=y
CONFIG_BE2NET=m
CONFIG_BE2NET_HWMON=y
CONFIG_BE2NET_BE2=y
CONFIG_BE2NET_BE3=y
CONFIG_BE2NET_LANCER=y
CONFIG_BE2NET_SKYHAWK=y
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGBVF=m
# CONFIG_IXGB is not set
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=m
CONFIG_I40E=y
CONFIG_I40E_DCB=y
CONFIG_IAVF=m
CONFIG_I40EVF=m
# CONFIG_ICE is not set
CONFIG_FM10K=m
# CONFIG_IGC is not set
CONFIG_JME=m
CONFIG_NET_VENDOR_MARVELL=y
CONFIG_MVMDIO=m
# CONFIG_SKGE is not set
# CONFIG_SKY2 is not set
# CONFIG_PRESTERA is not set
CONFIG_NET_VENDOR_MELLANOX=y
CONFIG_MLX4_EN=m
CONFIG_MLX4_EN_DCB=y
CONFIG_MLX4_CORE=m
CONFIG_MLX4_DEBUG=y
CONFIG_MLX4_CORE_GEN2=y
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_NET_VENDOR_MYRI=y
CONFIG_MYRI10GE=m
CONFIG_MYRI10GE_DCA=y
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_NETRONOME=y
CONFIG_NFP=m
CONFIG_NFP_APP_FLOWER=y
CONFIG_NFP_APP_ABM_NIC=y
# CONFIG_NFP_DEBUG is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
CONFIG_NET_VENDOR_OKI=y
CONFIG_ETHOC=m
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
CONFIG_QLA3XXX=m
CONFIG_QLCNIC=m
CONFIG_QLCNIC_SRIOV=y
CONFIG_QLCNIC_DCB=y
CONFIG_QLCNIC_HWMON=y
CONFIG_NETXEN_NIC=m
CONFIG_QED=m
CONFIG_QED_SRIOV=y
CONFIG_QEDE=m
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
# CONFIG_NET_VENDOR_RDC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_ATP is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
CONFIG_R8169=y
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
CONFIG_ROCKER=m
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
# CONFIG_NET_VENDOR_SEEQ is not set
CONFIG_NET_VENDOR_SOLARFLARE=y
CONFIG_SFC=m
CONFIG_SFC_MTD=y
CONFIG_SFC_MCDI_MON=y
CONFIG_SFC_SRIOV=y
CONFIG_SFC_MCDI_LOGGING=y
CONFIG_SFC_FALCON=m
CONFIG_SFC_FALCON_MTD=y
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
CONFIG_NET_VENDOR_SMSC=y
CONFIG_EPIC100=m
# CONFIG_SMSC911X is not set
CONFIG_SMSC9420=m
CONFIG_NET_VENDOR_SOCIONEXT=y
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_SUN is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
CONFIG_TLAN=m
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_NET_VENDOR_WIZNET is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLINK=m
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
# CONFIG_LED_TRIGGER_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_SFP is not set

#
# MII PHY device drivers
#
CONFIG_AMD_PHY=m
# CONFIG_ADIN_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
# CONFIG_AX88796B_PHY is not set
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM84881_PHY is not set
CONFIG_BCM87XX_PHY=m
CONFIG_BCM_NET_PHYLIB=m
CONFIG_CICADA_PHY=m
# CONFIG_CORTINA_PHY is not set
CONFIG_DAVICOM_PHY=m
CONFIG_ICPLUS_PHY=m
CONFIG_LXT_PHY=m
# CONFIG_INTEL_XWAY_PHY is not set
CONFIG_LSI_ET1011C_PHY=m
CONFIG_MARVELL_PHY=m
# CONFIG_MARVELL_10G_PHY is not set
CONFIG_MICREL_PHY=m
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
CONFIG_NATIONAL_PHY=m
# CONFIG_NXP_TJA11XX_PHY is not set
CONFIG_QSEMI_PHY=m
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
CONFIG_SMSC_PHY=m
CONFIG_STE10XP=m
# CONFIG_TERANETICS_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
CONFIG_VITESSE_PHY=m
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_MDIO_DEVRES=y
CONFIG_MDIO_BITBANG=m
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_GPIO is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_THUNDER is not set

#
# MDIO Multiplexers
#

#
# PCS device drivers
#
# CONFIG_PCS_XPCS is not set
# end of PCS device drivers

# CONFIG_PLIP is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
CONFIG_PPP_FILTER=y
CONFIG_PPP_MPPE=m
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOATM=m
CONFIG_PPPOE=m
CONFIG_PPTP=m
CONFIG_PPPOL2TP=m
CONFIG_PPP_ASYNC=m
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
# CONFIG_SLIP_MODE_SLIP6 is not set
CONFIG_USB_NET_DRIVERS=y
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
CONFIG_USB_RTL8152=m
# CONFIG_USB_LAN78XX is not set
# CONFIG_USB_USBNET is not set
CONFIG_USB_HSO=m
# CONFIG_USB_IPHETH is not set
CONFIG_WLAN=y
# CONFIG_WIRELESS_WDS is not set
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_ATH_COMMON=m
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
CONFIG_ATH9K_BTCOEX_SUPPORT=y
# CONFIG_ATH9K is not set
CONFIG_ATH9K_HTC=m
# CONFIG_ATH9K_HTC_DEBUGFS is not set
# CONFIG_CARL9170 is not set
# CONFIG_ATH6KL is not set
# CONFIG_AR5523 is not set
# CONFIG_WIL6210 is not set
# CONFIG_ATH10K is not set
# CONFIG_WCN36XX is not set
# CONFIG_ATH11K is not set
CONFIG_WLAN_VENDOR_ATMEL=y
# CONFIG_ATMEL is not set
# CONFIG_AT76C50X_USB is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
# CONFIG_B43LEGACY is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
CONFIG_WLAN_VENDOR_CISCO=y
# CONFIG_AIRO is not set
CONFIG_WLAN_VENDOR_INTEL=y
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
CONFIG_IWLEGACY=m
CONFIG_IWL4965=m
CONFIG_IWL3945=m

#
# iwl3945 / iwl4965 Debugging Options
#
CONFIG_IWLEGACY_DEBUG=y
CONFIG_IWLEGACY_DEBUGFS=y
# end of iwl3945 / iwl4965 Debugging Options

CONFIG_IWLWIFI=m
CONFIG_IWLWIFI_LEDS=y
CONFIG_IWLDVM=m
CONFIG_IWLMVM=m
CONFIG_IWLWIFI_OPMODE_MODULAR=y
# CONFIG_IWLWIFI_BCAST_FILTERING is not set

#
# Debugging Options
#
# CONFIG_IWLWIFI_DEBUG is not set
CONFIG_IWLWIFI_DEBUGFS=y
# CONFIG_IWLWIFI_DEVICE_TRACING is not set
# end of Debugging Options

CONFIG_WLAN_VENDOR_INTERSIL=y
# CONFIG_HOSTAP is not set
# CONFIG_HERMES is not set
# CONFIG_P54_COMMON is not set
# CONFIG_PRISM54 is not set
CONFIG_WLAN_VENDOR_MARVELL=y
# CONFIG_LIBERTAS is not set
# CONFIG_LIBERTAS_THINFIRM is not set
# CONFIG_MWIFIEX is not set
# CONFIG_MWL8K is not set
CONFIG_WLAN_VENDOR_MEDIATEK=y
# CONFIG_MT7601U is not set
# CONFIG_MT76x0U is not set
# CONFIG_MT76x0E is not set
# CONFIG_MT76x2E is not set
# CONFIG_MT76x2U is not set
# CONFIG_MT7603E is not set
# CONFIG_MT7615E is not set
# CONFIG_MT7663U is not set
# CONFIG_MT7663S is not set
# CONFIG_MT7915E is not set
CONFIG_WLAN_VENDOR_MICROCHIP=y
# CONFIG_WILC1000_SDIO is not set
# CONFIG_WILC1000_SPI is not set
CONFIG_WLAN_VENDOR_RALINK=y
# CONFIG_RT2X00 is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
# CONFIG_RTL_CARDS is not set
# CONFIG_RTL8XXXU is not set
# CONFIG_RTW88 is not set
CONFIG_WLAN_VENDOR_RSI=y
# CONFIG_RSI_91X is not set
CONFIG_WLAN_VENDOR_ST=y
# CONFIG_CW1200 is not set
CONFIG_WLAN_VENDOR_TI=y
# CONFIG_WL1251 is not set
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
CONFIG_WLAN_VENDOR_ZYDAS=y
# CONFIG_USB_ZD1201 is not set
# CONFIG_ZD1211RW is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
# CONFIG_QTNFMAC_PCIE is not set
CONFIG_MAC80211_HWSIM=m
# CONFIG_USB_NET_RNDIS_WLAN is not set
# CONFIG_VIRT_WIFI is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
CONFIG_WAN=y
# CONFIG_LANMEDIA is not set
CONFIG_HDLC=m
CONFIG_HDLC_RAW=m
# CONFIG_HDLC_RAW_ETH is not set
CONFIG_HDLC_CISCO=m
CONFIG_HDLC_FR=m
CONFIG_HDLC_PPP=m

#
# X.25/LAPB support is disabled
#
# CONFIG_PCI200SYN is not set
# CONFIG_WANXL is not set
# CONFIG_PC300TOO is not set
# CONFIG_FARSYNC is not set
CONFIG_DLCI=m
CONFIG_DLCI_MAX=8
# CONFIG_SBNI is not set
CONFIG_IEEE802154_DRIVERS=m
CONFIG_IEEE802154_FAKELB=m
# CONFIG_IEEE802154_AT86RF230 is not set
# CONFIG_IEEE802154_MRF24J40 is not set
# CONFIG_IEEE802154_CC2520 is not set
# CONFIG_IEEE802154_ATUSB is not set
# CONFIG_IEEE802154_ADF7242 is not set
# CONFIG_IEEE802154_CA8210 is not set
# CONFIG_IEEE802154_MCR20A is not set
# CONFIG_IEEE802154_HWSIM is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_VMXNET3=m
CONFIG_FUJITSU_ES=m
CONFIG_HYPERV_NET=m
CONFIG_NETDEVSIM=m
CONFIG_NET_FAILOVER=m
CONFIG_ISDN=y
CONFIG_ISDN_CAPI=y
CONFIG_CAPI_TRACE=y
CONFIG_ISDN_CAPI_MIDDLEWARE=y
CONFIG_MISDN=m
CONFIG_MISDN_DSP=m
CONFIG_MISDN_L1OIP=m

#
# mISDN hardware drivers
#
CONFIG_MISDN_HFCPCI=m
CONFIG_MISDN_HFCMULTI=m
CONFIG_MISDN_HFCUSB=m
CONFIG_MISDN_AVMFRITZ=m
CONFIG_MISDN_SPEEDFAX=m
CONFIG_MISDN_INFINEON=m
CONFIG_MISDN_W6692=m
CONFIG_MISDN_NETJET=m
CONFIG_MISDN_HDLC=m
CONFIG_MISDN_IPAC=m
CONFIG_MISDN_ISAR=m
# CONFIG_NVM is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=y
CONFIG_INPUT_POLLDEV=m
CONFIG_INPUT_SPARSEKMAP=m
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADC is not set
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
CONFIG_MOUSE_APPLETOUCH=m
CONFIG_MOUSE_BCM5974=m
CONFIG_MOUSE_CYAPA=m
# CONFIG_MOUSE_ELAN_I2C is not set
CONFIG_MOUSE_VSXXXAA=m
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
CONFIG_MOUSE_SYNAPTICS_USB=m
# CONFIG_INPUT_JOYSTICK is not set
CONFIG_INPUT_TABLET=y
CONFIG_TABLET_USB_ACECAD=m
CONFIG_TABLET_USB_AIPTEK=m
CONFIG_TABLET_USB_GTCO=m
# CONFIG_TABLET_USB_HANWANG is not set
CONFIG_TABLET_USB_KBTAB=m
# CONFIG_TABLET_USB_PEGASUS is not set
# CONFIG_TABLET_SERIAL_WACOM4 is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_PROPERTIES=y
# CONFIG_TOUCHSCREEN_ADS7846 is not set
# CONFIG_TOUCHSCREEN_AD7877 is not set
# CONFIG_TOUCHSCREEN_AD7879 is not set
# CONFIG_TOUCHSCREEN_ADC is not set
# CONFIG_TOUCHSCREEN_ATMEL_MXT is not set
# CONFIG_TOUCHSCREEN_AUO_PIXCIR is not set
# CONFIG_TOUCHSCREEN_BU21013 is not set
# CONFIG_TOUCHSCREEN_BU21029 is not set
# CONFIG_TOUCHSCREEN_CHIPONE_ICN8505 is not set
# CONFIG_TOUCHSCREEN_CY8CTMA140 is not set
# CONFIG_TOUCHSCREEN_CY8CTMG110 is not set
# CONFIG_TOUCHSCREEN_CYTTSP_CORE is not set
# CONFIG_TOUCHSCREEN_CYTTSP4_CORE is not set
# CONFIG_TOUCHSCREEN_DYNAPRO is not set
# CONFIG_TOUCHSCREEN_HAMPSHIRE is not set
# CONFIG_TOUCHSCREEN_EETI is not set
# CONFIG_TOUCHSCREEN_EGALAX_SERIAL is not set
# CONFIG_TOUCHSCREEN_EXC3000 is not set
# CONFIG_TOUCHSCREEN_FUJITSU is not set
# CONFIG_TOUCHSCREEN_GOODIX is not set
# CONFIG_TOUCHSCREEN_HIDEEP is not set
# CONFIG_TOUCHSCREEN_ILI210X is not set
# CONFIG_TOUCHSCREEN_S6SY761 is not set
# CONFIG_TOUCHSCREEN_GUNZE is not set
# CONFIG_TOUCHSCREEN_EKTF2127 is not set
# CONFIG_TOUCHSCREEN_ELAN is not set
CONFIG_TOUCHSCREEN_ELO=m
CONFIG_TOUCHSCREEN_WACOM_W8001=m
CONFIG_TOUCHSCREEN_WACOM_I2C=m
# CONFIG_TOUCHSCREEN_MAX11801 is not set
# CONFIG_TOUCHSCREEN_MCS5000 is not set
# CONFIG_TOUCHSCREEN_MMS114 is not set
# CONFIG_TOUCHSCREEN_MELFAS_MIP4 is not set
# CONFIG_TOUCHSCREEN_MTOUCH is not set
# CONFIG_TOUCHSCREEN_INEXIO is not set
# CONFIG_TOUCHSCREEN_MK712 is not set
# CONFIG_TOUCHSCREEN_PENMOUNT is not set
# CONFIG_TOUCHSCREEN_EDT_FT5X06 is not set
# CONFIG_TOUCHSCREEN_TOUCHRIGHT is not set
# CONFIG_TOUCHSCREEN_TOUCHWIN is not set
# CONFIG_TOUCHSCREEN_PIXCIR is not set
# CONFIG_TOUCHSCREEN_WDT87XX_I2C is not set
# CONFIG_TOUCHSCREEN_WM97XX is not set
# CONFIG_TOUCHSCREEN_USB_COMPOSITE is not set
# CONFIG_TOUCHSCREEN_TOUCHIT213 is not set
# CONFIG_TOUCHSCREEN_TSC_SERIO is not set
# CONFIG_TOUCHSCREEN_TSC2004 is not set
# CONFIG_TOUCHSCREEN_TSC2005 is not set
# CONFIG_TOUCHSCREEN_TSC2007 is not set
# CONFIG_TOUCHSCREEN_RM_TS is not set
# CONFIG_TOUCHSCREEN_SILEAD is not set
# CONFIG_TOUCHSCREEN_SIS_I2C is not set
# CONFIG_TOUCHSCREEN_ST1232 is not set
# CONFIG_TOUCHSCREEN_STMFTS is not set
# CONFIG_TOUCHSCREEN_SUR40 is not set
# CONFIG_TOUCHSCREEN_SURFACE3_SPI is not set
# CONFIG_TOUCHSCREEN_SX8654 is not set
# CONFIG_TOUCHSCREEN_TPS6507X is not set
# CONFIG_TOUCHSCREEN_ZET6223 is not set
# CONFIG_TOUCHSCREEN_ZFORCE is not set
# CONFIG_TOUCHSCREEN_ROHM_BU21023 is not set
# CONFIG_TOUCHSCREEN_IQS5XX is not set
# CONFIG_TOUCHSCREEN_ZINITIX is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
CONFIG_INPUT_PCSPKR=m
# CONFIG_INPUT_MMA8450 is not set
CONFIG_INPUT_APANEL=m
# CONFIG_INPUT_GPIO_BEEPER is not set
# CONFIG_INPUT_GPIO_DECODER is not set
# CONFIG_INPUT_GPIO_VIBRA is not set
CONFIG_INPUT_ATLAS_BTNS=m
CONFIG_INPUT_ATI_REMOTE2=m
CONFIG_INPUT_KEYSPAN_REMOTE=m
# CONFIG_INPUT_KXTJ9 is not set
CONFIG_INPUT_POWERMATE=m
CONFIG_INPUT_YEALINK=m
CONFIG_INPUT_CM109=m
CONFIG_INPUT_UINPUT=m
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_PWM_BEEPER is not set
# CONFIG_INPUT_PWM_VIBRA is not set
CONFIG_INPUT_GPIO_ROTARY_ENCODER=m
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set
# CONFIG_INPUT_DRV260X_HAPTICS is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
CONFIG_RMI4_CORE=m
# CONFIG_RMI4_I2C is not set
# CONFIG_RMI4_SPI is not set
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
# CONFIG_RMI4_F34 is not set
# CONFIG_RMI4_F3A is not set
# CONFIG_RMI4_F54 is not set
# CONFIG_RMI4_F55 is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_SERIO_ARC_PS2=m
CONFIG_HYPERV_KEYBOARD=m
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
CONFIG_SERIAL_8250_DW=y
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_IFX6X60 is not set
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
CONFIG_CYCLADES=m
# CONFIG_CYZ_INTR is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_SYNCLINK=m
CONFIG_SYNCLINKMP=m
CONFIG_SYNCLINK_GT=m
# CONFIG_ISI is not set
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set
# CONFIG_TRACE_SINK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_SERIAL_DEV_BUS is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
# CONFIG_IPMI_PANIC_EVENT is not set
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=m
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
CONFIG_NVRAM=y
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=8192
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# CONFIG_HPET_MMAP_DEFAULT is not set
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
CONFIG_TELCLOCK=m
# CONFIG_XILLYBUS is not set
# end of Character devices

# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
# CONFIG_I2C_MUX_REG is not set
# CONFIG_I2C_MUX_MLXCPLD is not set
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=m
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=m
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=m
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=m
# CONFIG_I2C_DESIGNWARE_BAYTRAIL is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
CONFIG_I2C_DIOLAN_U2C=m
CONFIG_I2C_PARPORT=m
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
CONFIG_I2C_TINY_USB=m
CONFIG_I2C_VIPERBOARD=m

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_MLXCPLD is not set
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_DP83640_PHY=m
# CONFIG_PTP_1588_CLOCK_INES is not set
CONFIG_PTP_1588_CLOCK_KVM=m
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=m
# CONFIG_PINCTRL_MCP23S08 is not set
# CONFIG_PINCTRL_SX150X is not set
CONFIG_PINCTRL_BAYTRAIL=y
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
CONFIG_PINCTRL_INTEL=y
# CONFIG_PINCTRL_BROXTON is not set
CONFIG_PINCTRL_CANNONLAKE=m
# CONFIG_PINCTRL_CEDARFORK is not set
CONFIG_PINCTRL_DENVERTON=m
# CONFIG_PINCTRL_EMMITSBURG is not set
CONFIG_PINCTRL_GEMINILAKE=m
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
CONFIG_PINCTRL_LEWISBURG=m
CONFIG_PINCTRL_SUNRISEPOINT=m
# CONFIG_PINCTRL_TIGERLAKE is not set

#
# Renesas pinctrl drivers
#
# end of Renesas pinctrl drivers

CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_CDEV=y
CONFIG_GPIO_CDEV_V1=y
CONFIG_GPIO_GENERIC=m

#
# Memory mapped GPIO drivers
#
CONFIG_GPIO_AMDPT=m
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_ICH=m
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_XILINX is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADP5588 is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
CONFIG_GPIO_VIPERBOARD=m
# end of USB GPIO expanders

# CONFIG_GPIO_AGGREGATOR is not set
CONFIG_GPIO_MOCKUP=m
# CONFIG_W1 is not set
CONFIG_POWER_RESET=y
# CONFIG_POWER_RESET_RESTART is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_GENERIC_ADC_BATTERY is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_BQ25980 is not set
CONFIG_CHARGER_SMB347=m
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
# CONFIG_SENSORS_AD7314 is not set
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1021=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
# CONFIG_SENSORS_ADM1177 is not set
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
# CONFIG_SENSORS_ADT7310 is not set
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
# CONFIG_SENSORS_AS370 is not set
CONFIG_SENSORS_ASC7621=m
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
# CONFIG_SENSORS_AMD_ENERGY is not set
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
# CONFIG_SENSORS_ASPEED is not set
CONFIG_SENSORS_ATXP1=m
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_DRIVETEMP is not set
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_DELL_SMM=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHMD=m
# CONFIG_SENSORS_FTSTEUTATES is not set
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
# CONFIG_SENSORS_IIO_HWMON is not set
# CONFIG_SENSORS_I5500 is not set
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_POWR1220 is not set
CONFIG_SENSORS_LINEAGE=m
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
# CONFIG_SENSORS_LTC4222 is not set
CONFIG_SENSORS_LTC4245=m
# CONFIG_SENSORS_LTC4260 is not set
CONFIG_SENSORS_LTC4261=m
# CONFIG_SENSORS_MAX1111 is not set
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX6621 is not set
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6642=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
# CONFIG_SENSORS_MAX31790 is not set
CONFIG_SENSORS_MCP3021=m
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_ADCXX is not set
CONFIG_SENSORS_LM63=m
# CONFIG_SENSORS_LM70 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_NTC_THERMISTOR=m
# CONFIG_SENSORS_NCT6683 is not set
CONFIG_SENSORS_NCT6775=m
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
CONFIG_SENSORS_PCF8591=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
# CONFIG_SENSORS_ADM1266 is not set
CONFIG_SENSORS_ADM1275=m
# CONFIG_SENSORS_BEL_PFE is not set
# CONFIG_SENSORS_IBM_CFFPS is not set
# CONFIG_SENSORS_INSPUR_IPSPS is not set
# CONFIG_SENSORS_IR35221 is not set
# CONFIG_SENSORS_IR38064 is not set
# CONFIG_SENSORS_IRPS5401 is not set
# CONFIG_SENSORS_ISL68137 is not set
CONFIG_SENSORS_LM25066=m
CONFIG_SENSORS_LTC2978=m
# CONFIG_SENSORS_LTC3815 is not set
CONFIG_SENSORS_MAX16064=m
# CONFIG_SENSORS_MAX16601 is not set
# CONFIG_SENSORS_MAX20730 is not set
# CONFIG_SENSORS_MAX20751 is not set
# CONFIG_SENSORS_MAX31785 is not set
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
# CONFIG_SENSORS_MP2975 is not set
# CONFIG_SENSORS_PXE1610 is not set
# CONFIG_SENSORS_TPS40422 is not set
# CONFIG_SENSORS_TPS53679 is not set
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
# CONFIG_SENSORS_XDPE122 is not set
CONFIG_SENSORS_ZL6100=m
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHTC1 is not set
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_ADS7871 is not set
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
# CONFIG_SENSORS_TMP513 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
# CONFIG_SENSORS_W83773G is not set
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_DEVFREQ_THERMAL is not set
# CONFIG_THERMAL_EMULATION is not set

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_PKG_TEMP_THERMAL=m
CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
CONFIG_INT340X_THERMAL=m
CONFIG_ACPI_THERMAL_REL=m
# CONFIG_INT3406_THERMAL is not set
CONFIG_PROC_THERMAL_MMIO_RAPL=y
# end of ACPI INT340X thermal drivers

# CONFIG_INTEL_PCH_THERMAL is not set
# end of Intel thermal drivers

# CONFIG_GENERIC_ADC_THERMAL is not set
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_WDAT_WDT=m
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
# CONFIG_EBC_C384_WDT is not set
CONFIG_F71808E_WDT=m
CONFIG_SP5100_TCO=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
CONFIG_NV_TCO=m
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
CONFIG_SMSC_SCH311X_WDT=m
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_INTEL_MEI_WDT=m
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
CONFIG_USBPCWATCHDOG=m
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_SPROM=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_SDIOHOST_POSSIBLE=y
CONFIG_SSB_SDIOHOST=y
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y
CONFIG_SSB_DRIVER_GPIO=y
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
# CONFIG_BCMA_HOST_SOC is not set
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_HTC_I2CPLD is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=m
CONFIG_LPC_SCH=m
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
CONFIG_MFD_INTEL_LPSS_PCI=y
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_EZX_PCAP is not set
CONFIG_MFD_VIPERBOARD=m
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_UCB1400_CORE is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SL28CPLD is not set
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
# CONFIG_MFD_SKY81452 is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
CONFIG_MFD_VX855=m
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_INTEL_M10_BMC is not set
# end of Multifunction device drivers

# CONFIG_REGULATOR is not set
CONFIG_RC_CORE=m
CONFIG_RC_MAP=m
CONFIG_LIRC=y
CONFIG_RC_DECODERS=y
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_SONY_DECODER=m
CONFIG_IR_SANYO_DECODER=m
CONFIG_IR_SHARP_DECODER=m
CONFIG_IR_MCE_KBD_DECODER=m
# CONFIG_IR_XMP_DECODER is not set
CONFIG_IR_IMON_DECODER=m
# CONFIG_IR_RCMM_DECODER is not set
CONFIG_RC_DEVICES=y
CONFIG_RC_ATI_REMOTE=m
CONFIG_IR_ENE=m
CONFIG_IR_IMON=m
# CONFIG_IR_IMON_RAW is not set
CONFIG_IR_MCEUSB=m
CONFIG_IR_ITE_CIR=m
CONFIG_IR_FINTEK=m
CONFIG_IR_NUVOTON=m
CONFIG_IR_REDRAT3=m
CONFIG_IR_STREAMZAP=m
CONFIG_IR_WINBOND_CIR=m
# CONFIG_IR_IGORPLUGUSB is not set
CONFIG_IR_IGUANA=m
CONFIG_IR_TTUSBIR=m
CONFIG_RC_LOOPBACK=m
# CONFIG_IR_SERIAL is not set
# CONFIG_IR_SIR is not set
# CONFIG_RC_XBOX_DVD is not set
# CONFIG_IR_TOY is not set
# CONFIG_MEDIA_CEC_SUPPORT is not set
CONFIG_MEDIA_SUPPORT=m
# CONFIG_MEDIA_SUPPORT_FILTER is not set
CONFIG_MEDIA_SUBDRV_AUTOSELECT=y

#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types

#
# Media core support
#
CONFIG_VIDEO_DEV=m
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=m
# end of Media core support

#
# Video4Linux options
#
CONFIG_VIDEO_V4L2=m
CONFIG_VIDEO_V4L2_I2C=y
# CONFIG_VIDEO_V4L2_SUBDEV_API is not set
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
CONFIG_VIDEO_TUNER=m
CONFIG_VIDEOBUF_GEN=m
CONFIG_VIDEOBUF_DMA_SG=m
CONFIG_VIDEOBUF_VMALLOC=m
# end of Video4Linux options

#
# Media controller options
#
CONFIG_MEDIA_CONTROLLER_DVB=y
# end of Media controller options

#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
CONFIG_DVB_MAX_ADAPTERS=8
CONFIG_DVB_DYNAMIC_MINORS=y
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options

#
# Media drivers
#
CONFIG_TTPCI_EEPROM=m
CONFIG_MEDIA_USB_SUPPORT=y

#
# Webcam devices
#
CONFIG_USB_VIDEO_CLASS=m
CONFIG_USB_VIDEO_CLASS_INPUT_EVDEV=y
CONFIG_USB_GSPCA=m
CONFIG_USB_M5602=m
CONFIG_USB_STV06XX=m
CONFIG_USB_GL860=m
CONFIG_USB_GSPCA_BENQ=m
CONFIG_USB_GSPCA_CONEX=m
CONFIG_USB_GSPCA_CPIA1=m
# CONFIG_USB_GSPCA_DTCS033 is not set
CONFIG_USB_GSPCA_ETOMS=m
CONFIG_USB_GSPCA_FINEPIX=m
CONFIG_USB_GSPCA_JEILINJ=m
CONFIG_USB_GSPCA_JL2005BCD=m
# CONFIG_USB_GSPCA_KINECT is not set
CONFIG_USB_GSPCA_KONICA=m
CONFIG_USB_GSPCA_MARS=m
CONFIG_USB_GSPCA_MR97310A=m
CONFIG_USB_GSPCA_NW80X=m
CONFIG_USB_GSPCA_OV519=m
CONFIG_USB_GSPCA_OV534=m
CONFIG_USB_GSPCA_OV534_9=m
CONFIG_USB_GSPCA_PAC207=m
CONFIG_USB_GSPCA_PAC7302=m
CONFIG_USB_GSPCA_PAC7311=m
CONFIG_USB_GSPCA_SE401=m
CONFIG_USB_GSPCA_SN9C2028=m
CONFIG_USB_GSPCA_SN9C20X=m
CONFIG_USB_GSPCA_SONIXB=m
CONFIG_USB_GSPCA_SONIXJ=m
CONFIG_USB_GSPCA_SPCA500=m
CONFIG_USB_GSPCA_SPCA501=m
CONFIG_USB_GSPCA_SPCA505=m
CONFIG_USB_GSPCA_SPCA506=m
CONFIG_USB_GSPCA_SPCA508=m
CONFIG_USB_GSPCA_SPCA561=m
CONFIG_USB_GSPCA_SPCA1528=m
CONFIG_USB_GSPCA_SQ905=m
CONFIG_USB_GSPCA_SQ905C=m
CONFIG_USB_GSPCA_SQ930X=m
CONFIG_USB_GSPCA_STK014=m
# CONFIG_USB_GSPCA_STK1135 is not set
CONFIG_USB_GSPCA_STV0680=m
CONFIG_USB_GSPCA_SUNPLUS=m
CONFIG_USB_GSPCA_T613=m
CONFIG_USB_GSPCA_TOPRO=m
# CONFIG_USB_GSPCA_TOUPTEK is not set
CONFIG_USB_GSPCA_TV8532=m
CONFIG_USB_GSPCA_VC032X=m
CONFIG_USB_GSPCA_VICAM=m
CONFIG_USB_GSPCA_XIRLINK_CIT=m
CONFIG_USB_GSPCA_ZC3XX=m
CONFIG_USB_PWC=m
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_PWC_INPUT_EVDEV=y
# CONFIG_VIDEO_CPIA2 is not set
CONFIG_USB_ZR364XX=m
CONFIG_USB_STKWEBCAM=m
CONFIG_USB_S2255=m
# CONFIG_VIDEO_USBTV is not set

#
# Analog TV USB devices
#
CONFIG_VIDEO_PVRUSB2=m
CONFIG_VIDEO_PVRUSB2_SYSFS=y
CONFIG_VIDEO_PVRUSB2_DVB=y
# CONFIG_VIDEO_PVRUSB2_DEBUGIFC is not set
CONFIG_VIDEO_HDPVR=m
# CONFIG_VIDEO_STK1160_COMMON is not set
# CONFIG_VIDEO_GO7007 is not set

#
# Analog/digital TV USB devices
#
CONFIG_VIDEO_AU0828=m
CONFIG_VIDEO_AU0828_V4L2=y
# CONFIG_VIDEO_AU0828_RC is not set
CONFIG_VIDEO_CX231XX=m
CONFIG_VIDEO_CX231XX_RC=y
CONFIG_VIDEO_CX231XX_ALSA=m
CONFIG_VIDEO_CX231XX_DVB=m
CONFIG_VIDEO_TM6000=m
CONFIG_VIDEO_TM6000_ALSA=m
CONFIG_VIDEO_TM6000_DVB=m

#
# Digital TV USB devices
#
CONFIG_DVB_USB=m
# CONFIG_DVB_USB_DEBUG is not set
CONFIG_DVB_USB_DIB3000MC=m
CONFIG_DVB_USB_A800=m
CONFIG_DVB_USB_DIBUSB_MB=m
# CONFIG_DVB_USB_DIBUSB_MB_FAULTY is not set
CONFIG_DVB_USB_DIBUSB_MC=m
CONFIG_DVB_USB_DIB0700=m
CONFIG_DVB_USB_UMT_010=m
CONFIG_DVB_USB_CXUSB=m
# CONFIG_DVB_USB_CXUSB_ANALOG is not set
CONFIG_DVB_USB_M920X=m
CONFIG_DVB_USB_DIGITV=m
CONFIG_DVB_USB_VP7045=m
CONFIG_DVB_USB_VP702X=m
CONFIG_DVB_USB_GP8PSK=m
CONFIG_DVB_USB_NOVA_T_USB2=m
CONFIG_DVB_USB_TTUSB2=m
CONFIG_DVB_USB_DTT200U=m
CONFIG_DVB_USB_OPERA1=m
CONFIG_DVB_USB_AF9005=m
CONFIG_DVB_USB_AF9005_REMOTE=m
CONFIG_DVB_USB_PCTV452E=m
CONFIG_DVB_USB_DW2102=m
CONFIG_DVB_USB_CINERGY_T2=m
CONFIG_DVB_USB_DTV5100=m
CONFIG_DVB_USB_AZ6027=m
CONFIG_DVB_USB_TECHNISAT_USB2=m
CONFIG_DVB_USB_V2=m
CONFIG_DVB_USB_AF9015=m
CONFIG_DVB_USB_AF9035=m
CONFIG_DVB_USB_ANYSEE=m
CONFIG_DVB_USB_AU6610=m
CONFIG_DVB_USB_AZ6007=m
CONFIG_DVB_USB_CE6230=m
CONFIG_DVB_USB_EC168=m
CONFIG_DVB_USB_GL861=m
CONFIG_DVB_USB_LME2510=m
CONFIG_DVB_USB_MXL111SF=m
CONFIG_DVB_USB_RTL28XXU=m
# CONFIG_DVB_USB_DVBSKY is not set
# CONFIG_DVB_USB_ZD1301 is not set
CONFIG_DVB_TTUSB_BUDGET=m
CONFIG_DVB_TTUSB_DEC=m
CONFIG_SMS_USB_DRV=m
CONFIG_DVB_B2C2_FLEXCOP_USB=m
# CONFIG_DVB_B2C2_FLEXCOP_USB_DEBUG is not set
# CONFIG_DVB_AS102 is not set

#
# Webcam, TV (analog/digital) USB devices
#
CONFIG_VIDEO_EM28XX=m
# CONFIG_VIDEO_EM28XX_V4L2 is not set
CONFIG_VIDEO_EM28XX_ALSA=m
CONFIG_VIDEO_EM28XX_DVB=m
CONFIG_VIDEO_EM28XX_RC=m

#
# Software defined radio USB devices
#
# CONFIG_USB_AIRSPY is not set
# CONFIG_USB_HACKRF is not set
# CONFIG_USB_MSI2500 is not set
CONFIG_MEDIA_PCI_SUPPORT=y

#
# Media capture support
#
# CONFIG_VIDEO_MEYE is not set
# CONFIG_VIDEO_SOLO6X10 is not set
# CONFIG_VIDEO_TW5864 is not set
# CONFIG_VIDEO_TW68 is not set
# CONFIG_VIDEO_TW686X is not set

#
# Media capture/analog TV support
#
CONFIG_VIDEO_IVTV=m
# CONFIG_VIDEO_IVTV_DEPRECATED_IOCTLS is not set
# CONFIG_VIDEO_IVTV_ALSA is not set
CONFIG_VIDEO_FB_IVTV=m
# CONFIG_VIDEO_FB_IVTV_FORCE_PAT is not set
# CONFIG_VIDEO_HEXIUM_GEMINI is not set
# CONFIG_VIDEO_HEXIUM_ORION is not set
# CONFIG_VIDEO_MXB is not set
# CONFIG_VIDEO_DT3155 is not set

#
# Media capture/analog/hybrid TV support
#
CONFIG_VIDEO_CX18=m
CONFIG_VIDEO_CX18_ALSA=m
CONFIG_VIDEO_CX23885=m
CONFIG_MEDIA_ALTERA_CI=m
# CONFIG_VIDEO_CX25821 is not set
CONFIG_VIDEO_CX88=m
CONFIG_VIDEO_CX88_ALSA=m
CONFIG_VIDEO_CX88_BLACKBIRD=m
CONFIG_VIDEO_CX88_DVB=m
CONFIG_VIDEO_CX88_ENABLE_VP3054=y
CONFIG_VIDEO_CX88_VP3054=m
CONFIG_VIDEO_CX88_MPEG=m
CONFIG_VIDEO_BT848=m
CONFIG_DVB_BT8XX=m
CONFIG_VIDEO_SAA7134=m
CONFIG_VIDEO_SAA7134_ALSA=m
CONFIG_VIDEO_SAA7134_RC=y
CONFIG_VIDEO_SAA7134_DVB=m
CONFIG_VIDEO_SAA7164=m

#
# Media digital TV PCI Adapters
#
CONFIG_DVB_AV7110_IR=y
CONFIG_DVB_AV7110=m
CONFIG_DVB_AV7110_OSD=y
CONFIG_DVB_BUDGET_CORE=m
CONFIG_DVB_BUDGET=m
CONFIG_DVB_BUDGET_CI=m
CONFIG_DVB_BUDGET_AV=m
CONFIG_DVB_BUDGET_PATCH=m
CONFIG_DVB_B2C2_FLEXCOP_PCI=m
# CONFIG_DVB_B2C2_FLEXCOP_PCI_DEBUG is not set
CONFIG_DVB_PLUTO2=m
CONFIG_DVB_DM1105=m
CONFIG_DVB_PT1=m
# CONFIG_DVB_PT3 is not set
CONFIG_MANTIS_CORE=m
CONFIG_DVB_MANTIS=m
CONFIG_DVB_HOPPER=m
CONFIG_DVB_NGENE=m
CONFIG_DVB_DDBRIDGE=m
# CONFIG_DVB_DDBRIDGE_MSIENABLE is not set
# CONFIG_DVB_SMIPCIE is not set
# CONFIG_DVB_NETUP_UNIDVB is not set
# CONFIG_VIDEO_IPU3_CIO2 is not set
# CONFIG_VIDEO_PCI_SKELETON is not set
CONFIG_RADIO_ADAPTERS=y
CONFIG_RADIO_TEA575X=m
# CONFIG_RADIO_SI470X is not set
# CONFIG_RADIO_SI4713 is not set
# CONFIG_USB_MR800 is not set
# CONFIG_USB_DSBR is not set
# CONFIG_RADIO_MAXIRADIO is not set
# CONFIG_RADIO_SHARK is not set
# CONFIG_RADIO_SHARK2 is not set
# CONFIG_USB_KEENE is not set
# CONFIG_USB_RAREMONO is not set
# CONFIG_USB_MA901 is not set
# CONFIG_RADIO_TEA5764 is not set
# CONFIG_RADIO_SAA7706H is not set
# CONFIG_RADIO_TEF6862 is not set
# CONFIG_RADIO_WL1273 is not set
CONFIG_MEDIA_COMMON_OPTIONS=y

#
# common driver options
#
CONFIG_VIDEO_CX2341X=m
CONFIG_VIDEO_TVEEPROM=m
CONFIG_CYPRESS_FIRMWARE=m
CONFIG_VIDEOBUF2_CORE=m
CONFIG_VIDEOBUF2_V4L2=m
CONFIG_VIDEOBUF2_MEMOPS=m
CONFIG_VIDEOBUF2_VMALLOC=m
CONFIG_VIDEOBUF2_DMA_SG=m
CONFIG_VIDEOBUF2_DVB=m
CONFIG_DVB_B2C2_FLEXCOP=m
CONFIG_VIDEO_SAA7146=m
CONFIG_VIDEO_SAA7146_VV=m
CONFIG_SMS_SIANO_MDTV=m
CONFIG_SMS_SIANO_RC=y
# CONFIG_SMS_SIANO_DEBUGFS is not set
# CONFIG_V4L_PLATFORM_DRIVERS is not set
# CONFIG_V4L_MEM2MEM_DRIVERS is not set
# CONFIG_DVB_PLATFORM_DRIVERS is not set
# CONFIG_SDR_PLATFORM_DRIVERS is not set

#
# MMC/SDIO DVB adapters
#
CONFIG_SMS_SDIO_DRV=m
# CONFIG_V4L_TEST_DRIVERS is not set
# CONFIG_DVB_TEST_DRIVERS is not set

#
# FireWire (IEEE 1394) Adapters
#
CONFIG_DVB_FIREDTV=m
CONFIG_DVB_FIREDTV_INPUT=y
# end of Media drivers

#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y

#
# IR I2C driver auto-selected by 'Autoselect ancillary drivers'
#
CONFIG_VIDEO_IR_I2C=m

#
# Audio decoders, processors and mixers
#
CONFIG_VIDEO_TVAUDIO=m
CONFIG_VIDEO_TDA7432=m
# CONFIG_VIDEO_TDA9840 is not set
# CONFIG_VIDEO_TDA1997X is not set
# CONFIG_VIDEO_TEA6415C is not set
# CONFIG_VIDEO_TEA6420 is not set
CONFIG_VIDEO_MSP3400=m
CONFIG_VIDEO_CS3308=m
CONFIG_VIDEO_CS5345=m
CONFIG_VIDEO_CS53L32A=m
# CONFIG_VIDEO_TLV320AIC23B is not set
# CONFIG_VIDEO_UDA1342 is not set
CONFIG_VIDEO_WM8775=m
CONFIG_VIDEO_WM8739=m
CONFIG_VIDEO_VP27SMPX=m
# CONFIG_VIDEO_SONY_BTF_MPX is not set
# end of Audio decoders, processors and mixers

#
# RDS decoders
#
CONFIG_VIDEO_SAA6588=m
# end of RDS decoders

#
# Video decoders
#
# CONFIG_VIDEO_ADV7180 is not set
# CONFIG_VIDEO_ADV7183 is not set
# CONFIG_VIDEO_ADV7604 is not set
# CONFIG_VIDEO_ADV7842 is not set
# CONFIG_VIDEO_BT819 is not set
# CONFIG_VIDEO_BT856 is not set
# CONFIG_VIDEO_BT866 is not set
# CONFIG_VIDEO_KS0127 is not set
# CONFIG_VIDEO_ML86V7667 is not set
# CONFIG_VIDEO_SAA7110 is not set
CONFIG_VIDEO_SAA711X=m
# CONFIG_VIDEO_TC358743 is not set
# CONFIG_VIDEO_TVP514X is not set
# CONFIG_VIDEO_TVP5150 is not set
# CONFIG_VIDEO_TVP7002 is not set
# CONFIG_VIDEO_TW2804 is not set
# CONFIG_VIDEO_TW9903 is not set
# CONFIG_VIDEO_TW9906 is not set
# CONFIG_VIDEO_TW9910 is not set
# CONFIG_VIDEO_VPX3220 is not set

#
# Video and audio decoders
#
CONFIG_VIDEO_SAA717X=m
CONFIG_VIDEO_CX25840=m
# end of Video decoders

#
# Video encoders
#
CONFIG_VIDEO_SAA7127=m
# CONFIG_VIDEO_SAA7185 is not set
# CONFIG_VIDEO_ADV7170 is not set
# CONFIG_VIDEO_ADV7175 is not set
# CONFIG_VIDEO_ADV7343 is not set
# CONFIG_VIDEO_ADV7393 is not set
# CONFIG_VIDEO_ADV7511 is not set
# CONFIG_VIDEO_AD9389B is not set
# CONFIG_VIDEO_AK881X is not set
# CONFIG_VIDEO_THS8200 is not set
# end of Video encoders

#
# Video improvement chips
#
CONFIG_VIDEO_UPD64031A=m
CONFIG_VIDEO_UPD64083=m
# end of Video improvement chips

#
# Audio/Video compression chips
#
CONFIG_VIDEO_SAA6752HS=m
# end of Audio/Video compression chips

#
# SDR tuner chips
#
# CONFIG_SDR_MAX2175 is not set
# end of SDR tuner chips

#
# Miscellaneous helper chips
#
# CONFIG_VIDEO_THS7303 is not set
CONFIG_VIDEO_M52790=m
# CONFIG_VIDEO_I2C is not set
# CONFIG_VIDEO_ST_MIPID02 is not set
# end of Miscellaneous helper chips

#
# Camera sensor devices
#
# CONFIG_VIDEO_HI556 is not set
# CONFIG_VIDEO_IMX214 is not set
# CONFIG_VIDEO_IMX219 is not set
# CONFIG_VIDEO_IMX258 is not set
# CONFIG_VIDEO_IMX274 is not set
# CONFIG_VIDEO_IMX290 is not set
# CONFIG_VIDEO_IMX319 is not set
# CONFIG_VIDEO_IMX355 is not set
# CONFIG_VIDEO_OV2640 is not set
# CONFIG_VIDEO_OV2659 is not set
# CONFIG_VIDEO_OV2680 is not set
# CONFIG_VIDEO_OV2685 is not set
# CONFIG_VIDEO_OV2740 is not set
# CONFIG_VIDEO_OV5647 is not set
# CONFIG_VIDEO_OV6650 is not set
# CONFIG_VIDEO_OV5670 is not set
# CONFIG_VIDEO_OV5675 is not set
# CONFIG_VIDEO_OV5695 is not set
# CONFIG_VIDEO_OV7251 is not set
# CONFIG_VIDEO_OV772X is not set
# CONFIG_VIDEO_OV7640 is not set
# CONFIG_VIDEO_OV7670 is not set
# CONFIG_VIDEO_OV7740 is not set
# CONFIG_VIDEO_OV8856 is not set
# CONFIG_VIDEO_OV9640 is not set
# CONFIG_VIDEO_OV9650 is not set
# CONFIG_VIDEO_OV13858 is not set
# CONFIG_VIDEO_VS6624 is not set
# CONFIG_VIDEO_MT9M001 is not set
# CONFIG_VIDEO_MT9M032 is not set
# CONFIG_VIDEO_MT9M111 is not set
# CONFIG_VIDEO_MT9P031 is not set
# CONFIG_VIDEO_MT9T001 is not set
# CONFIG_VIDEO_MT9T112 is not set
# CONFIG_VIDEO_MT9V011 is not set
# CONFIG_VIDEO_MT9V032 is not set
# CONFIG_VIDEO_MT9V111 is not set
# CONFIG_VIDEO_SR030PC30 is not set
# CONFIG_VIDEO_NOON010PC30 is not set
# CONFIG_VIDEO_M5MOLS is not set
# CONFIG_VIDEO_RDACM20 is not set
# CONFIG_VIDEO_RJ54N1 is not set
# CONFIG_VIDEO_S5K6AA is not set
# CONFIG_VIDEO_S5K6A3 is not set
# CONFIG_VIDEO_S5K4ECGX is not set
# CONFIG_VIDEO_S5K5BAF is not set
# CONFIG_VIDEO_SMIAPP is not set
# CONFIG_VIDEO_ET8EK8 is not set
# CONFIG_VIDEO_S5C73M3 is not set
# end of Camera sensor devices

#
# Lens drivers
#
# CONFIG_VIDEO_AD5820 is not set
# CONFIG_VIDEO_AK7375 is not set
# CONFIG_VIDEO_DW9714 is not set
# CONFIG_VIDEO_DW9768 is not set
# CONFIG_VIDEO_DW9807_VCM is not set
# end of Lens drivers

#
# Flash devices
#
# CONFIG_VIDEO_ADP1653 is not set
# CONFIG_VIDEO_LM3560 is not set
# CONFIG_VIDEO_LM3646 is not set
# end of Flash devices

#
# SPI helper chips
#
# CONFIG_VIDEO_GS1662 is not set
# end of SPI helper chips

#
# Media SPI Adapters
#
# CONFIG_CXD2880_SPI_DRV is not set
# end of Media SPI Adapters

CONFIG_MEDIA_TUNER=m

#
# Customize TV tuners
#
CONFIG_MEDIA_TUNER_SIMPLE=m
CONFIG_MEDIA_TUNER_TDA18250=m
CONFIG_MEDIA_TUNER_TDA8290=m
CONFIG_MEDIA_TUNER_TDA827X=m
CONFIG_MEDIA_TUNER_TDA18271=m
CONFIG_MEDIA_TUNER_TDA9887=m
CONFIG_MEDIA_TUNER_TEA5761=m
CONFIG_MEDIA_TUNER_TEA5767=m
# CONFIG_MEDIA_TUNER_MSI001 is not set
CONFIG_MEDIA_TUNER_MT20XX=m
CONFIG_MEDIA_TUNER_MT2060=m
CONFIG_MEDIA_TUNER_MT2063=m
CONFIG_MEDIA_TUNER_MT2266=m
CONFIG_MEDIA_TUNER_MT2131=m
CONFIG_MEDIA_TUNER_QT1010=m
CONFIG_MEDIA_TUNER_XC2028=m
CONFIG_MEDIA_TUNER_XC5000=m
CONFIG_MEDIA_TUNER_XC4000=m
CONFIG_MEDIA_TUNER_MXL5005S=m
CONFIG_MEDIA_TUNER_MXL5007T=m
CONFIG_MEDIA_TUNER_MC44S803=m
CONFIG_MEDIA_TUNER_MAX2165=m
CONFIG_MEDIA_TUNER_TDA18218=m
CONFIG_MEDIA_TUNER_FC0011=m
CONFIG_MEDIA_TUNER_FC0012=m
CONFIG_MEDIA_TUNER_FC0013=m
CONFIG_MEDIA_TUNER_TDA18212=m
CONFIG_MEDIA_TUNER_E4000=m
CONFIG_MEDIA_TUNER_FC2580=m
CONFIG_MEDIA_TUNER_M88RS6000T=m
CONFIG_MEDIA_TUNER_TUA9001=m
CONFIG_MEDIA_TUNER_SI2157=m
CONFIG_MEDIA_TUNER_IT913X=m
CONFIG_MEDIA_TUNER_R820T=m
# CONFIG_MEDIA_TUNER_MXL301RF is not set
CONFIG_MEDIA_TUNER_QM1D1C0042=m
CONFIG_MEDIA_TUNER_QM1D1B0004=m
# end of Customize TV tuners

#
# Customise DVB Frontends
#

#
# Multistandard (satellite) frontends
#
CONFIG_DVB_STB0899=m
CONFIG_DVB_STB6100=m
CONFIG_DVB_STV090x=m
CONFIG_DVB_STV0910=m
CONFIG_DVB_STV6110x=m
CONFIG_DVB_STV6111=m
CONFIG_DVB_MXL5XX=m
CONFIG_DVB_M88DS3103=m

#
# Multistandard (cable + terrestrial) frontends
#
CONFIG_DVB_DRXK=m
CONFIG_DVB_TDA18271C2DD=m
CONFIG_DVB_SI2165=m
CONFIG_DVB_MN88472=m
CONFIG_DVB_MN88473=m

#
# DVB-S (satellite) frontends
#
CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_MT312=m
CONFIG_DVB_ZL10036=m
CONFIG_DVB_ZL10039=m
CONFIG_DVB_S5H1420=m
CONFIG_DVB_STV0288=m
CONFIG_DVB_STB6000=m
CONFIG_DVB_STV0299=m
CONFIG_DVB_STV6110=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA10086=m
CONFIG_DVB_TDA8261=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TDA826X=m
CONFIG_DVB_TUA6100=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
CONFIG_DVB_SI21XX=m
CONFIG_DVB_TS2020=m
CONFIG_DVB_DS3000=m
CONFIG_DVB_MB86A16=m
CONFIG_DVB_TDA10071=m

#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_SP8870=m
CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
# CONFIG_DVB_S5H1432 is not set
CONFIG_DVB_DRXD=m
CONFIG_DVB_L64781=m
CONFIG_DVB_TDA1004X=m
CONFIG_DVB_NXT6000=m
CONFIG_DVB_MT352=m
CONFIG_DVB_ZL10353=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_DIB3000MC=m
CONFIG_DVB_DIB7000M=m
CONFIG_DVB_DIB7000P=m
# CONFIG_DVB_DIB9000 is not set
CONFIG_DVB_TDA10048=m
CONFIG_DVB_AF9013=m
CONFIG_DVB_EC100=m
CONFIG_DVB_STV0367=m
CONFIG_DVB_CXD2820R=m
CONFIG_DVB_CXD2841ER=m
CONFIG_DVB_RTL2830=m
CONFIG_DVB_RTL2832=m
CONFIG_DVB_RTL2832_SDR=m
CONFIG_DVB_SI2168=m
# CONFIG_DVB_ZD1301_DEMOD is not set
CONFIG_DVB_GP8PSK_FE=m
# CONFIG_DVB_CXD2880 is not set

#
# DVB-C (cable) frontends
#
CONFIG_DVB_VES1820=m
CONFIG_DVB_TDA10021=m
CONFIG_DVB_TDA10023=m
CONFIG_DVB_STV0297=m

#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
CONFIG_DVB_LGDT3305=m
CONFIG_DVB_LGDT3306A=m
CONFIG_DVB_LG2160=m
CONFIG_DVB_S5H1409=m
CONFIG_DVB_AU8522=m
CONFIG_DVB_AU8522_DTV=m
CONFIG_DVB_AU8522_V4L=m
CONFIG_DVB_S5H1411=m

#
# ISDB-T (terrestrial) frontends
#
CONFIG_DVB_S921=m
CONFIG_DVB_DIB8000=m
CONFIG_DVB_MB86A20S=m

#
# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
#
CONFIG_DVB_TC90522=m
# CONFIG_DVB_MN88443X is not set

#
# Digital terrestrial only tuners/PLL
#
CONFIG_DVB_PLL=m
CONFIG_DVB_TUNER_DIB0070=m
CONFIG_DVB_TUNER_DIB0090=m

#
# SEC control devices for DVB-S
#
CONFIG_DVB_DRX39XYJ=m
CONFIG_DVB_LNBH25=m
# CONFIG_DVB_LNBH29 is not set
CONFIG_DVB_LNBP21=m
CONFIG_DVB_LNBP22=m
CONFIG_DVB_ISL6405=m
CONFIG_DVB_ISL6421=m
CONFIG_DVB_ISL6423=m
CONFIG_DVB_A8293=m
# CONFIG_DVB_LGS8GL5 is not set
CONFIG_DVB_LGS8GXX=m
CONFIG_DVB_ATBM8830=m
CONFIG_DVB_TDA665x=m
CONFIG_DVB_IX2505V=m
CONFIG_DVB_M88RS2000=m
CONFIG_DVB_AF9033=m
# CONFIG_DVB_HORUS3A is not set
# CONFIG_DVB_ASCOT2E is not set
# CONFIG_DVB_HELENE is not set

#
# Common Interface (EN50221) controller drivers
#
CONFIG_DVB_CXD2099=m
# CONFIG_DVB_SP2 is not set
# end of Customise DVB Frontends

#
# Tools to develop new frontends
#
CONFIG_DVB_DUMMY_FE=m
# end of Media ancillary drivers

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=64
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_DP_AUX_CHARDEV=y
# CONFIG_DRM_DEBUG_MM is not set
CONFIG_DRM_DEBUG_SELFTEST=m
CONFIG_DRM_KMS_HELPER=y
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_TTM_DMA_PAGE_POOL=y
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=y

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
CONFIG_DRM_I915_GVT=y
CONFIG_DRM_I915_GVT_KVMGT=m

#
# drm/i915 Debugging
#
# CONFIG_DRM_I915_WERROR is not set
# CONFIG_DRM_I915_DEBUG is not set
# CONFIG_DRM_I915_DEBUG_MMIO is not set
# CONFIG_DRM_I915_SW_FENCE_DEBUG_OBJECTS is not set
# CONFIG_DRM_I915_SW_FENCE_CHECK_DAG is not set
# CONFIG_DRM_I915_DEBUG_GUC is not set
# CONFIG_DRM_I915_SELFTEST is not set
# CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS is not set
# CONFIG_DRM_I915_DEBUG_VBLANK_EVADE is not set
# CONFIG_DRM_I915_DEBUG_RUNTIME_PM is not set
# end of drm/i915 Debugging

#
# drm/i915 Profile Guided Optimisation
#
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# end of drm/i915 Profile Guided Optimisation

CONFIG_DRM_VGEM=y
# CONFIG_DRM_VKMS is not set
CONFIG_DRM_VMWGFX=m
CONFIG_DRM_VMWGFX_FBCON=y
CONFIG_DRM_GMA500=m
CONFIG_DRM_GMA600=y
CONFIG_DRM_GMA3600=y
CONFIG_DRM_UDL=m
CONFIG_DRM_AST=m
CONFIG_DRM_MGAG200=m
CONFIG_DRM_QXL=m
CONFIG_DRM_BOCHS=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges

# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_CIRRUS_QEMU=m
# CONFIG_DRM_GM12U320 is not set
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_XEN is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_EXPORT_FOR_TESTS=y
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
CONFIG_DRM_LIB_RANDOM=y

#
# Frame buffer Devices
#
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=y
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_MODE_HELPERS is not set
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_INTEL is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SM501 is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_XEN_FBDEV_FRONTEND is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
CONFIG_FB_HYPERV=m
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SM712 is not set
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
# CONFIG_LCD_L4F00242T03 is not set
# CONFIG_LCD_LMS283GF05 is not set
# CONFIG_LCD_LTV350QV is not set
# CONFIG_LCD_ILI922X is not set
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_TDO24M is not set
# CONFIG_LCD_VGG2432A4 is not set
CONFIG_LCD_PLATFORM=m
# CONFIG_LCD_AMS369FG06 is not set
# CONFIG_LCD_LMS501KF03 is not set
# CONFIG_LCD_HX8357 is not set
# CONFIG_LCD_OTM3225A is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_KTD253 is not set
# CONFIG_BACKLIGHT_PWM is not set
CONFIG_BACKLIGHT_APPLE=m
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
CONFIG_BACKLIGHT_LP855X=m
# CONFIG_BACKLIGHT_GPIO is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
# end of Backlight & LCD device support

CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support

CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_LOGO_LINUX_CLUT224=y
# end of Graphics support

CONFIG_SOUND=m
CONFIG_SOUND_OSS_CORE=y
CONFIG_SOUND_OSS_CORE_PRECLAIM=y
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_PCM_ELD=y
CONFIG_SND_HWDEP=m
CONFIG_SND_SEQ_DEVICE=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_COMPRESS_OFFLOAD=m
CONFIG_SND_JACK=y
CONFIG_SND_JACK_INPUT_DEV=y
CONFIG_SND_OSSEMUL=y
# CONFIG_SND_MIXER_OSS is not set
# CONFIG_SND_PCM_OSS is not set
CONFIG_SND_PCM_TIMER=y
CONFIG_SND_HRTIMER=m
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_MAX_CARDS=32
# CONFIG_SND_SUPPORT_OLD_API is not set
CONFIG_SND_PROC_FS=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_DEBUG is not set
CONFIG_SND_VMASTER=y
CONFIG_SND_DMA_SGBUF=y
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=m
CONFIG_SND_SEQ_MIDI=m
CONFIG_SND_SEQ_MIDI_EMUL=m
CONFIG_SND_SEQ_VIRMIDI=m
CONFIG_SND_MPU401_UART=m
CONFIG_SND_OPL3_LIB=m
CONFIG_SND_OPL3_LIB_SEQ=m
CONFIG_SND_VX_LIB=m
CONFIG_SND_AC97_CODEC=m
CONFIG_SND_DRIVERS=y
CONFIG_SND_PCSP=m
CONFIG_SND_DUMMY=m
CONFIG_SND_ALOOP=m
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
# CONFIG_SND_MTS64 is not set
# CONFIG_SND_SERIAL_U16550 is not set
CONFIG_SND_MPU401=m
# CONFIG_SND_PORTMAN2X4 is not set
CONFIG_SND_AC97_POWER_SAVE=y
CONFIG_SND_AC97_POWER_SAVE_DEFAULT=5
CONFIG_SND_PCI=y
CONFIG_SND_AD1889=m
# CONFIG_SND_ALS300 is not set
# CONFIG_SND_ALS4000 is not set
CONFIG_SND_ALI5451=m
CONFIG_SND_ASIHPI=m
CONFIG_SND_ATIIXP=m
CONFIG_SND_ATIIXP_MODEM=m
CONFIG_SND_AU8810=m
CONFIG_SND_AU8820=m
CONFIG_SND_AU8830=m
# CONFIG_SND_AW2 is not set
# CONFIG_SND_AZT3328 is not set
CONFIG_SND_BT87X=m
# CONFIG_SND_BT87X_OVERCLOCK is not set
CONFIG_SND_CA0106=m
CONFIG_SND_CMIPCI=m
CONFIG_SND_OXYGEN_LIB=m
CONFIG_SND_OXYGEN=m
# CONFIG_SND_CS4281 is not set
CONFIG_SND_CS46XX=m
CONFIG_SND_CS46XX_NEW_DSP=y
CONFIG_SND_CTXFI=m
CONFIG_SND_DARLA20=m
CONFIG_SND_GINA20=m
CONFIG_SND_LAYLA20=m
CONFIG_SND_DARLA24=m
CONFIG_SND_GINA24=m
CONFIG_SND_LAYLA24=m
CONFIG_SND_MONA=m
CONFIG_SND_MIA=m
CONFIG_SND_ECHO3G=m
CONFIG_SND_INDIGO=m
CONFIG_SND_INDIGOIO=m
CONFIG_SND_INDIGODJ=m
CONFIG_SND_INDIGOIOX=m
CONFIG_SND_INDIGODJX=m
CONFIG_SND_EMU10K1=m
CONFIG_SND_EMU10K1_SEQ=m
CONFIG_SND_EMU10K1X=m
CONFIG_SND_ENS1370=m
CONFIG_SND_ENS1371=m
# CONFIG_SND_ES1938 is not set
CONFIG_SND_ES1968=m
CONFIG_SND_ES1968_INPUT=y
CONFIG_SND_ES1968_RADIO=y
# CONFIG_SND_FM801 is not set
CONFIG_SND_HDSP=m
CONFIG_SND_HDSPM=m
CONFIG_SND_ICE1712=m
CONFIG_SND_ICE1724=m
CONFIG_SND_INTEL8X0=m
CONFIG_SND_INTEL8X0M=m
CONFIG_SND_KORG1212=m
CONFIG_SND_LOLA=m
CONFIG_SND_LX6464ES=m
CONFIG_SND_MAESTRO3=m
CONFIG_SND_MAESTRO3_INPUT=y
CONFIG_SND_MIXART=m
# CONFIG_SND_NM256 is not set
CONFIG_SND_PCXHR=m
# CONFIG_SND_RIPTIDE is not set
CONFIG_SND_RME32=m
CONFIG_SND_RME96=m
CONFIG_SND_RME9652=m
# CONFIG_SND_SONICVIBES is not set
CONFIG_SND_TRIDENT=m
CONFIG_SND_VIA82XX=m
CONFIG_SND_VIA82XX_MODEM=m
CONFIG_SND_VIRTUOSO=m
CONFIG_SND_VX222=m
# CONFIG_SND_YMFPCI is not set

#
# HD-Audio
#
CONFIG_SND_HDA=m
CONFIG_SND_HDA_GENERIC_LEDS=y
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
CONFIG_SND_HDA_INPUT_BEEP=y
CONFIG_SND_HDA_INPUT_BEEP_MODE=0
CONFIG_SND_HDA_PATCH_LOADER=y
CONFIG_SND_HDA_CODEC_REALTEK=m
CONFIG_SND_HDA_CODEC_ANALOG=m
CONFIG_SND_HDA_CODEC_SIGMATEL=m
CONFIG_SND_HDA_CODEC_VIA=m
CONFIG_SND_HDA_CODEC_HDMI=m
CONFIG_SND_HDA_CODEC_CIRRUS=m
CONFIG_SND_HDA_CODEC_CONEXANT=m
CONFIG_SND_HDA_CODEC_CA0110=m
CONFIG_SND_HDA_CODEC_CA0132=m
CONFIG_SND_HDA_CODEC_CA0132_DSP=y
CONFIG_SND_HDA_CODEC_CMEDIA=m
CONFIG_SND_HDA_CODEC_SI3054=m
CONFIG_SND_HDA_GENERIC=m
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
# CONFIG_SND_HDA_INTEL_HDMI_SILENT_STREAM is not set
# end of HD-Audio

CONFIG_SND_HDA_CORE=m
CONFIG_SND_HDA_DSP_LOADER=y
CONFIG_SND_HDA_COMPONENT=y
CONFIG_SND_HDA_I915=y
CONFIG_SND_HDA_EXT_CORE=m
CONFIG_SND_HDA_PREALLOC_SIZE=512
CONFIG_SND_INTEL_NHLT=y
CONFIG_SND_INTEL_DSP_CONFIG=m
# CONFIG_SND_SPI is not set
CONFIG_SND_USB=y
CONFIG_SND_USB_AUDIO=m
CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER=y
CONFIG_SND_USB_UA101=m
CONFIG_SND_USB_USX2Y=m
CONFIG_SND_USB_CAIAQ=m
CONFIG_SND_USB_CAIAQ_INPUT=y
CONFIG_SND_USB_US122L=m
CONFIG_SND_USB_6FIRE=m
CONFIG_SND_USB_HIFACE=m
CONFIG_SND_BCD2000=m
CONFIG_SND_USB_LINE6=m
CONFIG_SND_USB_POD=m
CONFIG_SND_USB_PODHD=m
CONFIG_SND_USB_TONEPORT=m
CONFIG_SND_USB_VARIAX=m
CONFIG_SND_FIREWIRE=y
CONFIG_SND_FIREWIRE_LIB=m
# CONFIG_SND_DICE is not set
# CONFIG_SND_OXFW is not set
CONFIG_SND_ISIGHT=m
# CONFIG_SND_FIREWORKS is not set
# CONFIG_SND_BEBOB is not set
# CONFIG_SND_FIREWIRE_DIGI00X is not set
# CONFIG_SND_FIREWIRE_TASCAM is not set
# CONFIG_SND_FIREWIRE_MOTU is not set
# CONFIG_SND_FIREFACE is not set
CONFIG_SND_SOC=m
CONFIG_SND_SOC_COMPRESS=y
CONFIG_SND_SOC_TOPOLOGY=y
CONFIG_SND_SOC_ACPI=m
# CONFIG_SND_SOC_AMD_ACP is not set
# CONFIG_SND_SOC_AMD_ACP3x is not set
# CONFIG_SND_SOC_AMD_RENOIR is not set
# CONFIG_SND_ATMEL_SOC is not set
# CONFIG_SND_BCM63XX_I2S_WHISTLER is not set
# CONFIG_SND_DESIGNWARE_I2S is not set

#
# SoC Audio for Freescale CPUs
#

#
# Common SoC Audio options for Freescale CPUs:
#
# CONFIG_SND_SOC_FSL_ASRC is not set
# CONFIG_SND_SOC_FSL_SAI is not set
# CONFIG_SND_SOC_FSL_AUDMIX is not set
# CONFIG_SND_SOC_FSL_SSI is not set
# CONFIG_SND_SOC_FSL_SPDIF is not set
# CONFIG_SND_SOC_FSL_ESAI is not set
# CONFIG_SND_SOC_FSL_MICFIL is not set
# CONFIG_SND_SOC_IMX_AUDMUX is not set
# end of SoC Audio for Freescale CPUs

# CONFIG_SND_I2S_HI6210_I2S is not set
# CONFIG_SND_SOC_IMG is not set
CONFIG_SND_SOC_INTEL_SST_TOPLEVEL=y
CONFIG_SND_SOC_INTEL_SST=m
# CONFIG_SND_SOC_INTEL_CATPT is not set
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM=m
# CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_PCI is not set
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_ACPI=m
CONFIG_SND_SOC_INTEL_SKYLAKE=m
CONFIG_SND_SOC_INTEL_SKL=m
CONFIG_SND_SOC_INTEL_APL=m
CONFIG_SND_SOC_INTEL_KBL=m
CONFIG_SND_SOC_INTEL_GLK=m
CONFIG_SND_SOC_INTEL_CNL=m
CONFIG_SND_SOC_INTEL_CFL=m
# CONFIG_SND_SOC_INTEL_CML_H is not set
# CONFIG_SND_SOC_INTEL_CML_LP is not set
CONFIG_SND_SOC_INTEL_SKYLAKE_FAMILY=m
CONFIG_SND_SOC_INTEL_SKYLAKE_SSP_CLK=m
# CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC is not set
CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON=m
CONFIG_SND_SOC_ACPI_INTEL_MATCH=m
CONFIG_SND_SOC_INTEL_MACH=y
# CONFIG_SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES is not set
CONFIG_SND_SOC_INTEL_BYTCR_RT5640_MACH=m
CONFIG_SND_SOC_INTEL_BYTCR_RT5651_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_RT5672_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_RT5645_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_MAX98090_TI_MACH=m
# CONFIG_SND_SOC_INTEL_CHT_BSW_NAU8824_MACH is not set
# CONFIG_SND_SOC_INTEL_BYT_CHT_CX2072X_MACH is not set
CONFIG_SND_SOC_INTEL_BYT_CHT_DA7213_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_ES8316_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_NOCODEC_MACH=m
CONFIG_SND_SOC_INTEL_SKL_RT286_MACH=m
CONFIG_SND_SOC_INTEL_SKL_NAU88L25_SSM4567_MACH=m
CONFIG_SND_SOC_INTEL_SKL_NAU88L25_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_DA7219_MAX98357A_GENERIC=m
CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_COMMON=m
CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_BXT_RT298_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5663_MAX98927_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5663_RT5514_MAX98927_MACH=m
# CONFIG_SND_SOC_INTEL_KBL_DA7219_MAX98357A_MACH is not set
# CONFIG_SND_SOC_INTEL_KBL_DA7219_MAX98927_MACH is not set
# CONFIG_SND_SOC_INTEL_KBL_RT5660_MACH is not set
# CONFIG_SND_SOC_MTK_BTCVSD is not set
# CONFIG_SND_SOC_SOF_TOPLEVEL is not set

#
# STMicroelectronics STM32 SOC audio support
#
# end of STMicroelectronics STM32 SOC audio support

# CONFIG_SND_SOC_XILINX_I2S is not set
# CONFIG_SND_SOC_XILINX_AUDIO_FORMATTER is not set
# CONFIG_SND_SOC_XILINX_SPDIF is not set
# CONFIG_SND_SOC_XTFPGA_I2S is not set
# CONFIG_ZX_TDM is not set
CONFIG_SND_SOC_I2C_AND_SPI=m

#
# CODEC drivers
#
# CONFIG_SND_SOC_AC97_CODEC is not set
# CONFIG_SND_SOC_ADAU1701 is not set
# CONFIG_SND_SOC_ADAU1761_I2C is not set
# CONFIG_SND_SOC_ADAU1761_SPI is not set
# CONFIG_SND_SOC_ADAU7002 is not set
# CONFIG_SND_SOC_ADAU7118_HW is not set
# CONFIG_SND_SOC_ADAU7118_I2C is not set
# CONFIG_SND_SOC_AK4104 is not set
# CONFIG_SND_SOC_AK4118 is not set
# CONFIG_SND_SOC_AK4458 is not set
# CONFIG_SND_SOC_AK4554 is not set
# CONFIG_SND_SOC_AK4613 is not set
# CONFIG_SND_SOC_AK4642 is not set
# CONFIG_SND_SOC_AK5386 is not set
# CONFIG_SND_SOC_AK5558 is not set
# CONFIG_SND_SOC_ALC5623 is not set
# CONFIG_SND_SOC_BD28623 is not set
# CONFIG_SND_SOC_BT_SCO is not set
# CONFIG_SND_SOC_CS35L32 is not set
# CONFIG_SND_SOC_CS35L33 is not set
# CONFIG_SND_SOC_CS35L34 is not set
# CONFIG_SND_SOC_CS35L35 is not set
# CONFIG_SND_SOC_CS35L36 is not set
# CONFIG_SND_SOC_CS42L42 is not set
# CONFIG_SND_SOC_CS42L51_I2C is not set
# CONFIG_SND_SOC_CS42L52 is not set
# CONFIG_SND_SOC_CS42L56 is not set
# CONFIG_SND_SOC_CS42L73 is not set
# CONFIG_SND_SOC_CS4234 is not set
# CONFIG_SND_SOC_CS4265 is not set
# CONFIG_SND_SOC_CS4270 is not set
# CONFIG_SND_SOC_CS4271_I2C is not set
# CONFIG_SND_SOC_CS4271_SPI is not set
# CONFIG_SND_SOC_CS42XX8_I2C is not set
# CONFIG_SND_SOC_CS43130 is not set
# CONFIG_SND_SOC_CS4341 is not set
# CONFIG_SND_SOC_CS4349 is not set
# CONFIG_SND_SOC_CS53L30 is not set
# CONFIG_SND_SOC_CX2072X is not set
CONFIG_SND_SOC_DA7213=m
CONFIG_SND_SOC_DA7219=m
CONFIG_SND_SOC_DMIC=m
# CONFIG_SND_SOC_ES7134 is not set
# CONFIG_SND_SOC_ES7241 is not set
CONFIG_SND_SOC_ES8316=m
# CONFIG_SND_SOC_ES8328_I2C is not set
# CONFIG_SND_SOC_ES8328_SPI is not set
# CONFIG_SND_SOC_GTM601 is not set
CONFIG_SND_SOC_HDAC_HDMI=m
# CONFIG_SND_SOC_INNO_RK3036 is not set
# CONFIG_SND_SOC_MAX98088 is not set
CONFIG_SND_SOC_MAX98090=m
CONFIG_SND_SOC_MAX98357A=m
# CONFIG_SND_SOC_MAX98504 is not set
# CONFIG_SND_SOC_MAX9867 is not set
CONFIG_SND_SOC_MAX98927=m
# CONFIG_SND_SOC_MAX98373_I2C is not set
CONFIG_SND_SOC_MAX98390=m
# CONFIG_SND_SOC_MAX9860 is not set
# CONFIG_SND_SOC_MSM8916_WCD_DIGITAL is not set
# CONFIG_SND_SOC_PCM1681 is not set
# CONFIG_SND_SOC_PCM1789_I2C is not set
# CONFIG_SND_SOC_PCM179X_I2C is not set
# CONFIG_SND_SOC_PCM179X_SPI is not set
# CONFIG_SND_SOC_PCM186X_I2C is not set
# CONFIG_SND_SOC_PCM186X_SPI is not set
# CONFIG_SND_SOC_PCM3060_I2C is not set
# CONFIG_SND_SOC_PCM3060_SPI is not set
# CONFIG_SND_SOC_PCM3168A_I2C is not set
# CONFIG_SND_SOC_PCM3168A_SPI is not set
# CONFIG_SND_SOC_PCM512x_I2C is not set
# CONFIG_SND_SOC_PCM512x_SPI is not set
# CONFIG_SND_SOC_RK3328 is not set
CONFIG_SND_SOC_RL6231=m
CONFIG_SND_SOC_RL6347A=m
CONFIG_SND_SOC_RT286=m
CONFIG_SND_SOC_RT298=m
CONFIG_SND_SOC_RT5514=m
CONFIG_SND_SOC_RT5514_SPI=m
# CONFIG_SND_SOC_RT5616 is not set
# CONFIG_SND_SOC_RT5631 is not set
CONFIG_SND_SOC_RT5640=m
CONFIG_SND_SOC_RT5645=m
CONFIG_SND_SOC_RT5651=m
CONFIG_SND_SOC_RT5663=m
CONFIG_SND_SOC_RT5670=m
# CONFIG_SND_SOC_SGTL5000 is not set
# CONFIG_SND_SOC_SIMPLE_AMPLIFIER is not set
# CONFIG_SND_SOC_SIRF_AUDIO_CODEC is not set
# CONFIG_SND_SOC_SPDIF is not set
# CONFIG_SND_SOC_SSM2305 is not set
# CONFIG_SND_SOC_SSM2602_SPI is not set
# CONFIG_SND_SOC_SSM2602_I2C is not set
CONFIG_SND_SOC_SSM4567=m
# CONFIG_SND_SOC_STA32X is not set
# CONFIG_SND_SOC_STA350 is not set
# CONFIG_SND_SOC_STI_SAS is not set
# CONFIG_SND_SOC_TAS2552 is not set
# CONFIG_SND_SOC_TAS2562 is not set
# CONFIG_SND_SOC_TAS2764 is not set
# CONFIG_SND_SOC_TAS2770 is not set
# CONFIG_SND_SOC_TAS5086 is not set
# CONFIG_SND_SOC_TAS571X is not set
# CONFIG_SND_SOC_TAS5720 is not set
# CONFIG_SND_SOC_TAS6424 is not set
# CONFIG_SND_SOC_TDA7419 is not set
# CONFIG_SND_SOC_TFA9879 is not set
# CONFIG_SND_SOC_TLV320AIC23_I2C is not set
# CONFIG_SND_SOC_TLV320AIC23_SPI is not set
# CONFIG_SND_SOC_TLV320AIC31XX is not set
# CONFIG_SND_SOC_TLV320AIC32X4_I2C is not set
# CONFIG_SND_SOC_TLV320AIC32X4_SPI is not set
# CONFIG_SND_SOC_TLV320AIC3X is not set
# CONFIG_SND_SOC_TLV320ADCX140 is not set
CONFIG_SND_SOC_TS3A227E=m
# CONFIG_SND_SOC_TSCS42XX is not set
# CONFIG_SND_SOC_TSCS454 is not set
# CONFIG_SND_SOC_UDA1334 is not set
# CONFIG_SND_SOC_WM8510 is not set
# CONFIG_SND_SOC_WM8523 is not set
# CONFIG_SND_SOC_WM8524 is not set
# CONFIG_SND_SOC_WM8580 is not set
# CONFIG_SND_SOC_WM8711 is not set
# CONFIG_SND_SOC_WM8728 is not set
# CONFIG_SND_SOC_WM8731 is not set
# CONFIG_SND_SOC_WM8737 is not set
# CONFIG_SND_SOC_WM8741 is not set
# CONFIG_SND_SOC_WM8750 is not set
# CONFIG_SND_SOC_WM8753 is not set
# CONFIG_SND_SOC_WM8770 is not set
# CONFIG_SND_SOC_WM8776 is not set
# CONFIG_SND_SOC_WM8782 is not set
# CONFIG_SND_SOC_WM8804_I2C is not set
# CONFIG_SND_SOC_WM8804_SPI is not set
# CONFIG_SND_SOC_WM8903 is not set
# CONFIG_SND_SOC_WM8904 is not set
# CONFIG_SND_SOC_WM8960 is not set
# CONFIG_SND_SOC_WM8962 is not set
# CONFIG_SND_SOC_WM8974 is not set
# CONFIG_SND_SOC_WM8978 is not set
# CONFIG_SND_SOC_WM8985 is not set
# CONFIG_SND_SOC_ZL38060 is not set
# CONFIG_SND_SOC_ZX_AUD96P22 is not set
# CONFIG_SND_SOC_MAX9759 is not set
# CONFIG_SND_SOC_MT6351 is not set
# CONFIG_SND_SOC_MT6358 is not set
# CONFIG_SND_SOC_MT6660 is not set
# CONFIG_SND_SOC_NAU8540 is not set
# CONFIG_SND_SOC_NAU8810 is not set
# CONFIG_SND_SOC_NAU8822 is not set
CONFIG_SND_SOC_NAU8824=m
CONFIG_SND_SOC_NAU8825=m
# CONFIG_SND_SOC_TPA6130A2 is not set
# end of CODEC drivers

# CONFIG_SND_SIMPLE_CARD is not set
CONFIG_SND_X86=y
CONFIG_HDMI_LPE_AUDIO=m
CONFIG_SND_SYNTH_EMUX=m
# CONFIG_SND_XEN_FRONTEND is not set
CONFIG_AC97_BUS=m

#
# HID support
#
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y

#
# Special HID drivers
#
CONFIG_HID_A4TECH=y
# CONFIG_HID_ACCUTOUCH is not set
CONFIG_HID_ACRUX=m
# CONFIG_HID_ACRUX_FF is not set
CONFIG_HID_APPLE=y
CONFIG_HID_APPLEIR=m
# CONFIG_HID_ASUS is not set
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=y
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
CONFIG_HID_PRODIKEYS=m
# CONFIG_HID_CMEDIA is not set
# CONFIG_HID_CP2112 is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
CONFIG_HID_CYPRESS=y
CONFIG_HID_DRAGONRISE=m
# CONFIG_DRAGONRISE_FF is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
CONFIG_HID_ELECOM=m
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=y
# CONFIG_HID_GEMBIRD is not set
# CONFIG_HID_GFRM is not set
# CONFIG_HID_GLORIOUS is not set
CONFIG_HID_HOLTEK=m
# CONFIG_HOLTEK_FF is not set
# CONFIG_HID_VIVALDI is not set
# CONFIG_HID_GT683R is not set
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
CONFIG_HID_UCLOGIC=m
CONFIG_HID_WALTOP=m
# CONFIG_HID_VIEWSONIC is not set
CONFIG_HID_GYRATION=m
CONFIG_HID_ICADE=m
CONFIG_HID_ITE=y
# CONFIG_HID_JABRA is not set
CONFIG_HID_TWINHAN=m
CONFIG_HID_KENSINGTON=y
CONFIG_HID_LCPOWER=m
CONFIG_HID_LED=m
# CONFIG_HID_LENOVO is not set
CONFIG_HID_LOGITECH=y
CONFIG_HID_LOGITECH_DJ=m
CONFIG_HID_LOGITECH_HIDPP=m
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
CONFIG_HID_REDRAGON=y
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
CONFIG_HID_MULTITOUCH=m
# CONFIG_HID_NTI is not set
CONFIG_HID_NTRIG=y
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
# CONFIG_PANTHERLORD_FF is not set
# CONFIG_HID_PENMOUNT is not set
CONFIG_HID_PETALYNX=m
CONFIG_HID_PICOLCD=m
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=y
CONFIG_HID_PRIMAX=m
# CONFIG_HID_RETRODE is not set
CONFIG_HID_ROCCAT=m
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
CONFIG_HID_SPEEDLINK=m
# CONFIG_HID_STEAM is not set
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_RMI=m
CONFIG_HID_GREENASIA=m
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_HYPERV_MOUSE=m
CONFIG_HID_SMARTJOYPLUS=m
# CONFIG_SMARTJOYPLUS_FF is not set
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
# CONFIG_THRUSTMASTER_FF is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
CONFIG_HID_WACOM=m
CONFIG_HID_WIIMOTE=m
# CONFIG_HID_XINMO is not set
CONFIG_HID_ZEROPLUS=m
# CONFIG_ZEROPLUS_FF is not set
CONFIG_HID_ZYDACRON=m
CONFIG_HID_SENSOR_HUB=m
CONFIG_HID_SENSOR_CUSTOM_SENSOR=m
CONFIG_HID_ALPS=m
# CONFIG_HID_MCP2221 is not set
# end of Special HID drivers

#
# USB HID support
#
CONFIG_USB_HID=y
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
# end of USB HID support

#
# I2C HID support
#
CONFIG_I2C_HID=m
# end of I2C HID support

#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=y
# CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set
# end of Intel ISH HID support
# end of HID support

CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
CONFIG_USB_LEDS_TRIGGER_USBPORT=m
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_MON=y

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
CONFIG_USB_XHCI_HCD=y
# CONFIG_USB_XHCI_DBGCAP is not set
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
# CONFIG_USB_XHCI_PLATFORM is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
# CONFIG_USB_MAX3421_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_U132_HCD is not set
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_BCMA is not set
# CONFIG_USB_HCD_SSB is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m
CONFIG_USB_WDM=m
CONFIG_USB_TMC=m

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
CONFIG_USB_STORAGE_REALTEK=m
CONFIG_REALTEK_AUTOPM=y
CONFIG_USB_STORAGE_DATAFAB=m
CONFIG_USB_STORAGE_FREECOM=m
CONFIG_USB_STORAGE_ISD200=m
CONFIG_USB_STORAGE_USBAT=m
CONFIG_USB_STORAGE_SDDR09=m
CONFIG_USB_STORAGE_SDDR55=m
CONFIG_USB_STORAGE_JUMPSHOT=m
CONFIG_USB_STORAGE_ALAUDA=m
CONFIG_USB_STORAGE_ONETOUCH=m
CONFIG_USB_STORAGE_KARMA=m
CONFIG_USB_STORAGE_CYPRESS_ATACB=m
CONFIG_USB_STORAGE_ENE_UB6250=m
CONFIG_USB_UAS=m

#
# USB Imaging devices
#
CONFIG_USB_MDC800=m
CONFIG_USB_MICROTEK=m
CONFIG_USBIP_CORE=m
# CONFIG_USBIP_VHCI_HCD is not set
# CONFIG_USBIP_HOST is not set
# CONFIG_USBIP_DEBUG is not set
# CONFIG_USB_CDNS3 is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set

#
# USB port drivers
#
CONFIG_USB_USS720=m
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_CONSOLE=y
CONFIG_USB_SERIAL_GENERIC=y
# CONFIG_USB_SERIAL_SIMPLE is not set
CONFIG_USB_SERIAL_AIRCABLE=m
CONFIG_USB_SERIAL_ARK3116=m
CONFIG_USB_SERIAL_BELKIN=m
CONFIG_USB_SERIAL_CH341=m
CONFIG_USB_SERIAL_WHITEHEAT=m
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
CONFIG_USB_SERIAL_CP210X=m
CONFIG_USB_SERIAL_CYPRESS_M8=m
CONFIG_USB_SERIAL_EMPEG=m
CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_USB_SERIAL_VISOR=m
CONFIG_USB_SERIAL_IPAQ=m
CONFIG_USB_SERIAL_IR=m
CONFIG_USB_SERIAL_EDGEPORT=m
CONFIG_USB_SERIAL_EDGEPORT_TI=m
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
CONFIG_USB_SERIAL_GARMIN=m
CONFIG_USB_SERIAL_IPW=m
CONFIG_USB_SERIAL_IUU=m
CONFIG_USB_SERIAL_KEYSPAN_PDA=m
CONFIG_USB_SERIAL_KEYSPAN=m
CONFIG_USB_SERIAL_KLSI=m
CONFIG_USB_SERIAL_KOBIL_SCT=m
CONFIG_USB_SERIAL_MCT_U232=m
# CONFIG_USB_SERIAL_METRO is not set
CONFIG_USB_SERIAL_MOS7720=m
CONFIG_USB_SERIAL_MOS7715_PARPORT=y
CONFIG_USB_SERIAL_MOS7840=m
# CONFIG_USB_SERIAL_MXUPORT is not set
CONFIG_USB_SERIAL_NAVMAN=m
CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_OTI6858=m
CONFIG_USB_SERIAL_QCAUX=m
CONFIG_USB_SERIAL_QUALCOMM=m
CONFIG_USB_SERIAL_SPCP8X5=m
CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_SAFE_PADDED=y
CONFIG_USB_SERIAL_SIERRAWIRELESS=m
CONFIG_USB_SERIAL_SYMBOL=m
# CONFIG_USB_SERIAL_TI is not set
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_WWAN=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_SERIAL_OPTICON=m
CONFIG_USB_SERIAL_XSENS_MT=m
# CONFIG_USB_SERIAL_WISHBONE is not set
CONFIG_USB_SERIAL_SSU100=m
CONFIG_USB_SERIAL_QT2=m
# CONFIG_USB_SERIAL_UPD78F0730 is not set
CONFIG_USB_SERIAL_DEBUG=m

#
# USB Miscellaneous drivers
#
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_ADUTUX=m
CONFIG_USB_SEVSEG=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
CONFIG_USB_IDMOUSE=m
CONFIG_USB_FTDI_ELAN=m
CONFIG_USB_APPLEDISPLAY=m
# CONFIG_APPLE_MFI_FASTCHARGE is not set
CONFIG_USB_SISUSBVGA=m
CONFIG_USB_SISUSBVGA_CON=y
CONFIG_USB_LD=m
# CONFIG_USB_TRANCEVIBRATOR is not set
CONFIG_USB_IOWARRIOR=m
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
CONFIG_USB_ISIGHTFW=m
# CONFIG_USB_YUREX is not set
CONFIG_USB_EZUSB_FX2=m
# CONFIG_USB_HUB_USB251XB is not set
CONFIG_USB_HSIC_USB3503=m
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
CONFIG_USB_ATM=m
CONFIG_USB_SPEEDTOUCH=m
CONFIG_USB_CXACRU=m
CONFIG_USB_UEAGLEATM=m
CONFIG_USB_XUSBATM=m

#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers

# CONFIG_USB_GADGET is not set
CONFIG_TYPEC=y
# CONFIG_TYPEC_TCPM is not set
CONFIG_TYPEC_UCSI=y
# CONFIG_UCSI_CCG is not set
CONFIG_UCSI_ACPI=y
# CONFIG_TYPEC_TPS6598X is not set
# CONFIG_TYPEC_STUSB160X is not set

#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
# CONFIG_TYPEC_MUX_PI3USB30532 is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support

#
# USB Type-C Alternate Mode drivers
#
# CONFIG_TYPEC_DP_ALTMODE is not set
# end of USB Type-C Alternate Mode drivers

# CONFIG_USB_ROLE_SWITCH is not set
CONFIG_MMC=m
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_IO_ACCESSORS=y
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_SDHCI_ACPI=m
CONFIG_MMC_SDHCI_PLTFM=m
# CONFIG_MMC_SDHCI_F_SDH30 is not set
# CONFIG_MMC_WBSD is not set
CONFIG_MMC_TIFM_SD=m
# CONFIG_MMC_SPI is not set
CONFIG_MMC_CB710=m
CONFIG_MMC_VIA_SDMMC=m
CONFIG_MMC_VUB300=m
CONFIG_MMC_USHC=m
# CONFIG_MMC_USDHI6ROL0 is not set
CONFIG_MMC_CQHCI=m
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_MMC_SDHCI_XENON is not set
CONFIG_MEMSTICK=m
# CONFIG_MEMSTICK_DEBUG is not set

#
# MemoryStick drivers
#
# CONFIG_MEMSTICK_UNSAFE_RESUME is not set
CONFIG_MSPRO_BLOCK=m
# CONFIG_MS_BLOCK is not set

#
# MemoryStick Host Controller Drivers
#
CONFIG_MEMSTICK_TIFM_MS=m
CONFIG_MEMSTICK_JMICRON_38X=m
CONFIG_MEMSTICK_R592=m
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set

#
# LED drivers
#
# CONFIG_LEDS_APU is not set
CONFIG_LEDS_LM3530=m
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
CONFIG_LEDS_LP3944=m
# CONFIG_LEDS_LP3952 is not set
# CONFIG_LEDS_LP50XX is not set
CONFIG_LEDS_CLEVO_MAIL=m
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_PWM is not set
# CONFIG_LEDS_BD2802 is not set
CONFIG_LEDS_INTEL_SS4200=m
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set

#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
CONFIG_LEDS_BLINKM=m
# CONFIG_LEDS_MLXCPLD is not set
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_TI_LMU_COMMON is not set

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_ONESHOT=m
# CONFIG_LEDS_TRIGGER_DISK is not set
# CONFIG_LEDS_TRIGGER_MTD is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_GPIO=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
CONFIG_LEDS_TRIGGER_CAMERA=m
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
CONFIG_LEDS_TRIGGER_AUDIO=m
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=m
CONFIG_EDAC_GHES=y
CONFIG_EDAC_AMD64=m
# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
CONFIG_EDAC_E752X=m
CONFIG_EDAC_I82975X=m
CONFIG_EDAC_I3000=m
CONFIG_EDAC_I3200=m
CONFIG_EDAC_IE31200=m
CONFIG_EDAC_X38=m
CONFIG_EDAC_I5400=m
CONFIG_EDAC_I7CORE=m
CONFIG_EDAC_I5000=m
CONFIG_EDAC_I5100=m
CONFIG_EDAC_I7300=m
CONFIG_EDAC_SBRIDGE=m
CONFIG_EDAC_SKX=m
# CONFIG_EDAC_I10NM is not set
CONFIG_EDAC_PND2=m
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_SYSTOHC is not set
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_NVMEM=y

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
CONFIG_RTC_DRV_DS1307=m
# CONFIG_RTC_DRV_DS1307_CENTURY is not set
CONFIG_RTC_DRV_DS1374=m
# CONFIG_RTC_DRV_DS1374_WDT is not set
CONFIG_RTC_DRV_DS1672=m
CONFIG_RTC_DRV_MAX6900=m
CONFIG_RTC_DRV_RS5C372=m
CONFIG_RTC_DRV_ISL1208=m
CONFIG_RTC_DRV_ISL12022=m
CONFIG_RTC_DRV_X1205=m
CONFIG_RTC_DRV_PCF8523=m
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
CONFIG_RTC_DRV_PCF8563=m
CONFIG_RTC_DRV_PCF8583=m
CONFIG_RTC_DRV_M41T80=m
CONFIG_RTC_DRV_M41T80_WDT=y
CONFIG_RTC_DRV_BQ32K=m
# CONFIG_RTC_DRV_S35390A is not set
CONFIG_RTC_DRV_FM3130=m
# CONFIG_RTC_DRV_RX8010 is not set
CONFIG_RTC_DRV_RX8581=m
CONFIG_RTC_DRV_RX8025=m
CONFIG_RTC_DRV_EM3027=m
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set

#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
CONFIG_RTC_DRV_RX4581=m
# CONFIG_RTC_DRV_RX6110 is not set
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
CONFIG_RTC_DRV_DS3232=m
CONFIG_RTC_DRV_DS3232_HWMON=y
# CONFIG_RTC_DRV_PCF2127 is not set
CONFIG_RTC_DRV_RV3029C2=m
CONFIG_RTC_DRV_RV3029_HWMON=y

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=m
CONFIG_RTC_DRV_DS1511=m
CONFIG_RTC_DRV_DS1553=m
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_DS2404=m
CONFIG_RTC_DRV_STK17TA8=m
# CONFIG_RTC_DRV_M48T86 is not set
CONFIG_RTC_DRV_M48T35=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_MSM6242=m
CONFIG_RTC_DRV_BQ4802=m
CONFIG_RTC_DRV_RP5C01=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
# CONFIG_INTEL_IDMA64 is not set
# CONFIG_INTEL_IDXD is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_XILINX_ZYNQMP_DPDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
CONFIG_DW_DMAC=m
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
CONFIG_SW_SYNC=y
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# end of DMABUF options

CONFIG_DCA=m
CONFIG_AUXDISPLAY=y
# CONFIG_HD44780 is not set
CONFIG_KS0108=m
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_CFAG12864B=m
CONFIG_CFAG12864B_RATE=20
# CONFIG_IMG_ASCII_LCD is not set
# CONFIG_PARPORT_PANEL is not set
# CONFIG_CHARLCD_BL_OFF is not set
# CONFIG_CHARLCD_BL_ON is not set
CONFIG_CHARLCD_BL_FLASH=y
# CONFIG_PANEL is not set
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_UIO_PDRV_GENIRQ=m
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_PRUSS is not set
# CONFIG_UIO_MF624 is not set
CONFIG_UIO_HV_GENERIC=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_VIRQFD=m
CONFIG_VFIO=m
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI=m
# CONFIG_VFIO_PCI_VGA is not set
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
CONFIG_IRQ_BYPASS_MANAGER=y
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=m
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_MEM=m
CONFIG_VIRTIO_INPUT=m
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=m
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
CONFIG_HYPERV=m
CONFIG_HYPERV_TIMER=y
CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m
# end of Microsoft Hyper-V guest support

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES_DEFAULT=y
CONFIG_XEN_DEV_EVTCHN=m
# CONFIG_XEN_BACKEND is not set
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
# CONFIG_XEN_GNTDEV is not set
# CONFIG_XEN_GRANT_DEV_ALLOC is not set
# CONFIG_XEN_GRANT_DMA_ALLOC is not set
CONFIG_SWIOTLB_XEN=y
# CONFIG_XEN_PVCALLS_FRONTEND is not set
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_XEN_EFI=y
CONFIG_XEN_AUTO_XLATE=y
CONFIG_XEN_ACPI=y
CONFIG_XEN_HAVE_VPMU=y
# CONFIG_XEN_UNPOPULATED_ALLOC is not set
# end of Xen driver support

# CONFIG_GREYBUS is not set
CONFIG_STAGING=y
# CONFIG_PRISM2_USB is not set
# CONFIG_COMEDI is not set
# CONFIG_RTL8192U is not set
CONFIG_RTLLIB=m
CONFIG_RTLLIB_CRYPTO_CCMP=m
CONFIG_RTLLIB_CRYPTO_TKIP=m
CONFIG_RTLLIB_CRYPTO_WEP=m
CONFIG_RTL8192E=m
# CONFIG_RTL8723BS is not set
CONFIG_R8712U=m
# CONFIG_R8188EU is not set
# CONFIG_RTS5208 is not set
# CONFIG_VT6655 is not set
# CONFIG_VT6656 is not set

#
# IIO staging drivers
#

#
# Accelerometers
#
# CONFIG_ADIS16203 is not set
# CONFIG_ADIS16240 is not set
# end of Accelerometers

#
# Analog to digital converters
#
# CONFIG_AD7816 is not set
# CONFIG_AD7280 is not set
# end of Analog to digital converters

#
# Analog digital bi-direction converters
#
# CONFIG_ADT7316 is not set
# end of Analog digital bi-direction converters

#
# Capacitance to digital converters
#
# CONFIG_AD7150 is not set
# CONFIG_AD7746 is not set
# end of Capacitance to digital converters

#
# Direct Digital Synthesis
#
# CONFIG_AD9832 is not set
# CONFIG_AD9834 is not set
# end of Direct Digital Synthesis

#
# Network Analyzer, Impedance Converters
#
# CONFIG_AD5933 is not set
# end of Network Analyzer, Impedance Converters

#
# Active energy metering IC
#
# CONFIG_ADE7854 is not set
# end of Active energy metering IC

#
# Resolver to digital converters
#
# CONFIG_AD2S1210 is not set
# end of Resolver to digital converters
# end of IIO staging drivers

# CONFIG_FB_SM750 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ASHMEM is not set
CONFIG_ION=y
CONFIG_ION_SYSTEM_HEAP=y
# CONFIG_ION_CMA_HEAP is not set
# end of Android

# CONFIG_LTE_GDM724X is not set
CONFIG_FIREWIRE_SERIAL=m
CONFIG_FWTTY_MAX_TOTAL_PORTS=64
CONFIG_FWTTY_MAX_CARD_PORTS=32
# CONFIG_GS_FPGABOOT is not set
# CONFIG_UNISYSSPAR is not set
# CONFIG_FB_TFT is not set
# CONFIG_KS7010 is not set
# CONFIG_PI433 is not set

#
# Gasket devices
#
# CONFIG_STAGING_GASKET_FRAMEWORK is not set
# end of Gasket devices

# CONFIG_FIELDBUS_DEV is not set
# CONFIG_KPC2000 is not set
CONFIG_QLGE=m
# CONFIG_WFX is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
CONFIG_WMI_BMOF=m
# CONFIG_ALIENWARE_WMI is not set
# CONFIG_HUAWEI_WMI is not set
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
CONFIG_INTEL_WMI_THUNDERBOLT=m
CONFIG_MXM_WMI=m
# CONFIG_PEAQ_WMI is not set
# CONFIG_XIAOMI_WMI is not set
CONFIG_ACERHDF=m
# CONFIG_ACER_WIRELESS is not set
CONFIG_ACER_WMI=m
CONFIG_APPLE_GMUX=m
CONFIG_ASUS_LAPTOP=m
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=m
CONFIG_ASUS_NB_WMI=m
CONFIG_EEEPC_LAPTOP=m
CONFIG_EEEPC_WMI=m
CONFIG_DCDBAS=m
CONFIG_DELL_SMBIOS=m
CONFIG_DELL_SMBIOS_WMI=y
CONFIG_DELL_SMBIOS_SMM=y
CONFIG_DELL_LAPTOP=m
CONFIG_DELL_RBTN=m
CONFIG_DELL_RBU=m
CONFIG_DELL_SMO8800=m
CONFIG_DELL_WMI=m
CONFIG_DELL_WMI_DESCRIPTOR=m
CONFIG_DELL_WMI_AIO=m
# CONFIG_DELL_WMI_LED is not set
CONFIG_AMILO_RFKILL=m
CONFIG_FUJITSU_LAPTOP=m
CONFIG_FUJITSU_TABLET=m
# CONFIG_GPD_POCKET_FAN is not set
CONFIG_HP_ACCEL=m
CONFIG_HP_WIRELESS=m
CONFIG_HP_WMI=m
# CONFIG_IBM_RTL is not set
CONFIG_IDEAPAD_LAPTOP=m
CONFIG_SENSORS_HDAPS=m
CONFIG_THINKPAD_ACPI=m
CONFIG_THINKPAD_ACPI_ALSA_SUPPORT=y
# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
# CONFIG_THINKPAD_ACPI_DEBUG is not set
# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
# CONFIG_INTEL_ATOMISP2_PM is not set
CONFIG_INTEL_HID_EVENT=m
# CONFIG_INTEL_INT0002_VGPIO is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_INTEL_OAKTRAIL=m
CONFIG_INTEL_VBTN=m
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
CONFIG_MSI_LAPTOP=m
CONFIG_MSI_WMI=m
# CONFIG_PCENGINES_APU2 is not set
CONFIG_SAMSUNG_LAPTOP=m
CONFIG_SAMSUNG_Q10=m
CONFIG_ACPI_TOSHIBA=m
CONFIG_TOSHIBA_BT_RFKILL=m
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
CONFIG_ACPI_CMPC=m
CONFIG_COMPAL_LAPTOP=m
# CONFIG_LG_LAPTOP is not set
CONFIG_PANASONIC_LAPTOP=m
CONFIG_SONY_LAPTOP=m
CONFIG_SONYPI_COMPAT=y
# CONFIG_SYSTEM76_ACPI is not set
CONFIG_TOPSTAR_LAPTOP=m
# CONFIG_I2C_MULTI_INSTANTIATE is not set
# CONFIG_MLX_PLATFORM is not set
CONFIG_INTEL_IPS=m
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

# CONFIG_INTEL_TURBO_MAX_3 is not set
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
CONFIG_INTEL_PMC_CORE=m
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
CONFIG_PMC_ATOM=y
# CONFIG_CHROME_PLATFORMS is not set
# CONFIG_MELLANOX_PLATFORM is not set
CONFIG_HAVE_CLK=y
CONFIG_CLKDEV_LOOKUP=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_PWM is not set
# CONFIG_HWSPINLOCK is not set

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_SVM is not set
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
CONFIG_IRQ_REMAP=y
CONFIG_HYPERV_IOMMU=y

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Aspeed SoC drivers
#
# end of Aspeed SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

# CONFIG_SOC_TI is not set

#
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

CONFIG_PM_DEVFREQ=y

#
# DEVFREQ Governors
#
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=m
# CONFIG_DEVFREQ_GOV_PERFORMANCE is not set
# CONFIG_DEVFREQ_GOV_POWERSAVE is not set
# CONFIG_DEVFREQ_GOV_USERSPACE is not set
# CONFIG_DEVFREQ_GOV_PASSIVE is not set

#
# DEVFREQ Drivers
#
# CONFIG_PM_DEVFREQ_EVENT is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
CONFIG_IIO=y
CONFIG_IIO_BUFFER=y
CONFIG_IIO_BUFFER_CB=y
# CONFIG_IIO_BUFFER_DMA is not set
# CONFIG_IIO_BUFFER_DMAENGINE is not set
# CONFIG_IIO_BUFFER_HW_CONSUMER is not set
CONFIG_IIO_KFIFO_BUF=y
CONFIG_IIO_TRIGGERED_BUFFER=m
# CONFIG_IIO_CONFIGFS is not set
CONFIG_IIO_TRIGGER=y
CONFIG_IIO_CONSUMERS_PER_TRIGGER=2
# CONFIG_IIO_SW_DEVICE is not set
# CONFIG_IIO_SW_TRIGGER is not set
# CONFIG_IIO_TRIGGERED_EVENT is not set

#
# Accelerometers
#
# CONFIG_ADIS16201 is not set
# CONFIG_ADIS16209 is not set
# CONFIG_ADXL345_I2C is not set
# CONFIG_ADXL345_SPI is not set
# CONFIG_ADXL372_SPI is not set
# CONFIG_ADXL372_I2C is not set
# CONFIG_BMA180 is not set
# CONFIG_BMA220 is not set
# CONFIG_BMA400 is not set
# CONFIG_BMC150_ACCEL is not set
# CONFIG_DA280 is not set
# CONFIG_DA311 is not set
# CONFIG_DMARD09 is not set
# CONFIG_DMARD10 is not set
CONFIG_HID_SENSOR_ACCEL_3D=m
# CONFIG_IIO_ST_ACCEL_3AXIS is not set
# CONFIG_KXSD9 is not set
# CONFIG_KXCJK1013 is not set
# CONFIG_MC3230 is not set
# CONFIG_MMA7455_I2C is not set
# CONFIG_MMA7455_SPI is not set
# CONFIG_MMA7660 is not set
# CONFIG_MMA8452 is not set
# CONFIG_MMA9551 is not set
# CONFIG_MMA9553 is not set
# CONFIG_MXC4005 is not set
# CONFIG_MXC6255 is not set
# CONFIG_SCA3000 is not set
# CONFIG_STK8312 is not set
# CONFIG_STK8BA50 is not set
# end of Accelerometers

#
# Analog to digital converters
#
# CONFIG_AD7091R5 is not set
# CONFIG_AD7124 is not set
# CONFIG_AD7192 is not set
# CONFIG_AD7266 is not set
# CONFIG_AD7291 is not set
# CONFIG_AD7292 is not set
# CONFIG_AD7298 is not set
# CONFIG_AD7476 is not set
# CONFIG_AD7606_IFACE_PARALLEL is not set
# CONFIG_AD7606_IFACE_SPI is not set
# CONFIG_AD7766 is not set
# CONFIG_AD7768_1 is not set
# CONFIG_AD7780 is not set
# CONFIG_AD7791 is not set
# CONFIG_AD7793 is not set
# CONFIG_AD7887 is not set
# CONFIG_AD7923 is not set
# CONFIG_AD7949 is not set
# CONFIG_AD799X is not set
# CONFIG_AD9467 is not set
# CONFIG_ADI_AXI_ADC is not set
# CONFIG_HI8435 is not set
# CONFIG_HX711 is not set
# CONFIG_INA2XX_ADC is not set
# CONFIG_LTC2471 is not set
# CONFIG_LTC2485 is not set
# CONFIG_LTC2496 is not set
# CONFIG_LTC2497 is not set
# CONFIG_MAX1027 is not set
# CONFIG_MAX11100 is not set
# CONFIG_MAX1118 is not set
# CONFIG_MAX1241 is not set
# CONFIG_MAX1363 is not set
# CONFIG_MAX9611 is not set
# CONFIG_MCP320X is not set
# CONFIG_MCP3422 is not set
# CONFIG_MCP3911 is not set
# CONFIG_NAU7802 is not set
# CONFIG_TI_ADC081C is not set
# CONFIG_TI_ADC0832 is not set
# CONFIG_TI_ADC084S021 is not set
# CONFIG_TI_ADC12138 is not set
# CONFIG_TI_ADC108S102 is not set
# CONFIG_TI_ADC128S052 is not set
# CONFIG_TI_ADC161S626 is not set
# CONFIG_TI_ADS1015 is not set
# CONFIG_TI_ADS7950 is not set
# CONFIG_TI_TLC4541 is not set
# CONFIG_VIPERBOARD_ADC is not set
# CONFIG_XILINX_XADC is not set
# end of Analog to digital converters

#
# Analog Front Ends
#
# end of Analog Front Ends

#
# Amplifiers
#
# CONFIG_AD8366 is not set
# CONFIG_HMC425 is not set
# end of Amplifiers

#
# Chemical Sensors
#
# CONFIG_ATLAS_PH_SENSOR is not set
# CONFIG_ATLAS_EZO_SENSOR is not set
# CONFIG_BME680 is not set
# CONFIG_CCS811 is not set
# CONFIG_IAQCORE is not set
# CONFIG_SCD30_CORE is not set
# CONFIG_SENSIRION_SGP30 is not set
# CONFIG_SPS30 is not set
# CONFIG_VZ89X is not set
# end of Chemical Sensors

#
# Hid Sensor IIO Common
#
CONFIG_HID_SENSOR_IIO_COMMON=m
CONFIG_HID_SENSOR_IIO_TRIGGER=m
# end of Hid Sensor IIO Common

#
# SSP Sensor Common
#
# CONFIG_IIO_SSP_SENSORHUB is not set
# end of SSP Sensor Common

#
# Digital to analog converters
#
# CONFIG_AD5064 is not set
# CONFIG_AD5360 is not set
# CONFIG_AD5380 is not set
# CONFIG_AD5421 is not set
# CONFIG_AD5446 is not set
# CONFIG_AD5449 is not set
# CONFIG_AD5592R is not set
# CONFIG_AD5593R is not set
# CONFIG_AD5504 is not set
# CONFIG_AD5624R_SPI is not set
# CONFIG_AD5686_SPI is not set
# CONFIG_AD5696_I2C is not set
# CONFIG_AD5755 is not set
# CONFIG_AD5758 is not set
# CONFIG_AD5761 is not set
# CONFIG_AD5764 is not set
# CONFIG_AD5770R is not set
# CONFIG_AD5791 is not set
# CONFIG_AD7303 is not set
# CONFIG_AD8801 is not set
# CONFIG_DS4424 is not set
# CONFIG_LTC1660 is not set
# CONFIG_LTC2632 is not set
# CONFIG_M62332 is not set
# CONFIG_MAX517 is not set
# CONFIG_MCP4725 is not set
# CONFIG_MCP4922 is not set
# CONFIG_TI_DAC082S085 is not set
# CONFIG_TI_DAC5571 is not set
# CONFIG_TI_DAC7311 is not set
# CONFIG_TI_DAC7612 is not set
# end of Digital to analog converters

#
# IIO dummy driver
#
# end of IIO dummy driver

#
# Frequency Synthesizers DDS/PLL
#

#
# Clock Generator/Distribution
#
# CONFIG_AD9523 is not set
# end of Clock Generator/Distribution

#
# Phase-Locked Loop (PLL) frequency synthesizers
#
# CONFIG_ADF4350 is not set
# CONFIG_ADF4371 is not set
# end of Phase-Locked Loop (PLL) frequency synthesizers
# end of Frequency Synthesizers DDS/PLL

#
# Digital gyroscope sensors
#
# CONFIG_ADIS16080 is not set
# CONFIG_ADIS16130 is not set
# CONFIG_ADIS16136 is not set
# CONFIG_ADIS16260 is not set
# CONFIG_ADXRS290 is not set
# CONFIG_ADXRS450 is not set
# CONFIG_BMG160 is not set
# CONFIG_FXAS21002C is not set
CONFIG_HID_SENSOR_GYRO_3D=m
# CONFIG_MPU3050_I2C is not set
# CONFIG_IIO_ST_GYRO_3AXIS is not set
# CONFIG_ITG3200 is not set
# end of Digital gyroscope sensors

#
# Health Sensors
#

#
# Heart Rate Monitors
#
# CONFIG_AFE4403 is not set
# CONFIG_AFE4404 is not set
# CONFIG_MAX30100 is not set
# CONFIG_MAX30102 is not set
# end of Heart Rate Monitors
# end of Health Sensors

#
# Humidity sensors
#
# CONFIG_AM2315 is not set
# CONFIG_DHT11 is not set
# CONFIG_HDC100X is not set
# CONFIG_HDC2010 is not set
# CONFIG_HID_SENSOR_HUMIDITY is not set
# CONFIG_HTS221 is not set
# CONFIG_HTU21 is not set
# CONFIG_SI7005 is not set
# CONFIG_SI7020 is not set
# end of Humidity sensors

#
# Inertial measurement units
#
# CONFIG_ADIS16400 is not set
# CONFIG_ADIS16460 is not set
# CONFIG_ADIS16475 is not set
# CONFIG_ADIS16480 is not set
# CONFIG_BMI160_I2C is not set
# CONFIG_BMI160_SPI is not set
# CONFIG_FXOS8700_I2C is not set
# CONFIG_FXOS8700_SPI is not set
# CONFIG_KMX61 is not set
# CONFIG_INV_ICM42600_I2C is not set
# CONFIG_INV_ICM42600_SPI is not set
# CONFIG_INV_MPU6050_I2C is not set
# CONFIG_INV_MPU6050_SPI is not set
# CONFIG_IIO_ST_LSM6DSX is not set
# end of Inertial measurement units

#
# Light sensors
#
# CONFIG_ACPI_ALS is not set
# CONFIG_ADJD_S311 is not set
# CONFIG_ADUX1020 is not set
# CONFIG_AL3010 is not set
# CONFIG_AL3320A is not set
# CONFIG_APDS9300 is not set
# CONFIG_APDS9960 is not set
# CONFIG_AS73211 is not set
# CONFIG_BH1750 is not set
# CONFIG_BH1780 is not set
# CONFIG_CM32181 is not set
# CONFIG_CM3232 is not set
# CONFIG_CM3323 is not set
# CONFIG_CM36651 is not set
# CONFIG_GP2AP002 is not set
# CONFIG_GP2AP020A00F is not set
# CONFIG_SENSORS_ISL29018 is not set
# CONFIG_SENSORS_ISL29028 is not set
# CONFIG_ISL29125 is not set
CONFIG_HID_SENSOR_ALS=m
CONFIG_HID_SENSOR_PROX=m
# CONFIG_JSA1212 is not set
# CONFIG_RPR0521 is not set
# CONFIG_LTR501 is not set
# CONFIG_LV0104CS is not set
# CONFIG_MAX44000 is not set
# CONFIG_MAX44009 is not set
# CONFIG_NOA1305 is not set
# CONFIG_OPT3001 is not set
# CONFIG_PA12203001 is not set
# CONFIG_SI1133 is not set
# CONFIG_SI1145 is not set
# CONFIG_STK3310 is not set
# CONFIG_ST_UVIS25 is not set
# CONFIG_TCS3414 is not set
# CONFIG_TCS3472 is not set
# CONFIG_SENSORS_TSL2563 is not set
# CONFIG_TSL2583 is not set
# CONFIG_TSL2772 is not set
# CONFIG_TSL4531 is not set
# CONFIG_US5182D is not set
# CONFIG_VCNL4000 is not set
# CONFIG_VCNL4035 is not set
# CONFIG_VEML6030 is not set
# CONFIG_VEML6070 is not set
# CONFIG_VL6180 is not set
# CONFIG_ZOPT2201 is not set
# end of Light sensors

#
# Magnetometer sensors
#
# CONFIG_AK8975 is not set
# CONFIG_AK09911 is not set
# CONFIG_BMC150_MAGN_I2C is not set
# CONFIG_BMC150_MAGN_SPI is not set
# CONFIG_MAG3110 is not set
CONFIG_HID_SENSOR_MAGNETOMETER_3D=m
# CONFIG_MMC35240 is not set
# CONFIG_IIO_ST_MAGN_3AXIS is not set
# CONFIG_SENSORS_HMC5843_I2C is not set
# CONFIG_SENSORS_HMC5843_SPI is not set
# CONFIG_SENSORS_RM3100_I2C is not set
# CONFIG_SENSORS_RM3100_SPI is not set
# end of Magnetometer sensors

#
# Multiplexers
#
# end of Multiplexers

#
# Inclinometer sensors
#
CONFIG_HID_SENSOR_INCLINOMETER_3D=m
CONFIG_HID_SENSOR_DEVICE_ROTATION=m
# end of Inclinometer sensors

#
# Triggers - standalone
#
# CONFIG_IIO_INTERRUPT_TRIGGER is not set
# CONFIG_IIO_SYSFS_TRIGGER is not set
# end of Triggers - standalone

#
# Linear and angular position sensors
#
# end of Linear and angular position sensors

#
# Digital potentiometers
#
# CONFIG_AD5272 is not set
# CONFIG_DS1803 is not set
# CONFIG_MAX5432 is not set
# CONFIG_MAX5481 is not set
# CONFIG_MAX5487 is not set
# CONFIG_MCP4018 is not set
# CONFIG_MCP4131 is not set
# CONFIG_MCP4531 is not set
# CONFIG_MCP41010 is not set
# CONFIG_TPL0102 is not set
# end of Digital potentiometers

#
# Digital potentiostats
#
# CONFIG_LMP91000 is not set
# end of Digital potentiostats

#
# Pressure sensors
#
# CONFIG_ABP060MG is not set
# CONFIG_BMP280 is not set
# CONFIG_DLHL60D is not set
# CONFIG_DPS310 is not set
CONFIG_HID_SENSOR_PRESS=m
# CONFIG_HP03 is not set
# CONFIG_ICP10100 is not set
# CONFIG_MPL115_I2C is not set
# CONFIG_MPL115_SPI is not set
# CONFIG_MPL3115 is not set
# CONFIG_MS5611 is not set
# CONFIG_MS5637 is not set
# CONFIG_IIO_ST_PRESS is not set
# CONFIG_T5403 is not set
# CONFIG_HP206C is not set
# CONFIG_ZPA2326 is not set
# end of Pressure sensors

#
# Lightning sensors
#
# CONFIG_AS3935 is not set
# end of Lightning sensors

#
# Proximity and distance sensors
#
# CONFIG_ISL29501 is not set
# CONFIG_LIDAR_LITE_V2 is not set
# CONFIG_MB1232 is not set
# CONFIG_PING is not set
# CONFIG_RFD77402 is not set
# CONFIG_SRF04 is not set
# CONFIG_SX9310 is not set
# CONFIG_SX9500 is not set
# CONFIG_SRF08 is not set
# CONFIG_VCNL3020 is not set
# CONFIG_VL53L0X_I2C is not set
# end of Proximity and distance sensors

#
# Resolver to digital converters
#
# CONFIG_AD2S90 is not set
# CONFIG_AD2S1200 is not set
# end of Resolver to digital converters

#
# Temperature sensors
#
# CONFIG_LTC2983 is not set
# CONFIG_MAXIM_THERMOCOUPLE is not set
# CONFIG_HID_SENSOR_TEMP is not set
# CONFIG_MLX90614 is not set
# CONFIG_MLX90632 is not set
# CONFIG_TMP006 is not set
# CONFIG_TMP007 is not set
# CONFIG_TSYS01 is not set
# CONFIG_TSYS02D is not set
# CONFIG_MAX31856 is not set
# end of Temperature sensors

CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
CONFIG_NTB_AMD=m
# CONFIG_NTB_IDT is not set
# CONFIG_NTB_INTEL is not set
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
CONFIG_NTB_PERF=m
CONFIG_NTB_TRANSPORT=m
# CONFIG_VME_BUS is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
# CONFIG_PWM_LPSS_PCI is not set
# CONFIG_PWM_LPSS_PLATFORM is not set
# CONFIG_PWM_PCA9685 is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# PHY Subsystem
#
CONFIG_GENERIC_PHY=y
# CONFIG_USB_LGM_PHY is not set
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_CPCAP_USB is not set
# CONFIG_PHY_INTEL_LGM_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
CONFIG_INTEL_RAPL_CORE=m
CONFIG_INTEL_RAPL=m
# CONFIG_IDLE_INJECT is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
CONFIG_ANDROID=y
# CONFIG_ANDROID_BINDER_IPC is not set
# end of Android

CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
CONFIG_DEV_DAX_PMEM_COMPAT=m
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y

#
# HW tracing support
#
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
CONFIG_PM_OPP=y
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
CONFIG_XFS_SUPPORT_V4=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_ONLINE_SCRUB is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
# CONFIG_OCFS2_FS is not set
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
# CONFIG_F2FS_FS is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_MANDATORY_FILE_LOCKING=y
# CONFIG_FS_ENCRYPTION is not set
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS4_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
CONFIG_CUSE=m
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=m
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
# CONFIG_FSCACHE_OBJECT_LIST is not set
CONFIG_CACHEFILES=m
# CONFIG_CACHEFILES_DEBUG is not set
# CONFIG_CACHEFILES_HISTOGRAM is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_PROC_CPU_RESCTRL=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_JFFS2_FS is not set
# CONFIG_UBIFS_FS is not set
CONFIG_CRAMFS=m
CONFIG_CRAMFS_BLOCKDEV=y
# CONFIG_CRAMFS_MTD is not set
CONFIG_SQUASHFS=m
CONFIG_SQUASHFS_FILE_CACHE=y
# CONFIG_SQUASHFS_FILE_DIRECT is not set
CONFIG_SQUASHFS_DECOMP_SINGLE=y
# CONFIG_SQUASHFS_DECOMP_MULTI is not set
# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZ4 is not set
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_ZSTD is not set
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
CONFIG_PSTORE_CONSOLE=y
CONFIG_PSTORE_PMSG=y
# CONFIG_PSTORE_FTRACE is not set
CONFIG_PSTORE_RAM=m
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
# CONFIG_NFS_V2 is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
# CONFIG_NFSD_BLOCKLAYOUT is not set
CONFIG_NFSD_SCSILAYOUT=y
# CONFIG_NFSD_FLEXFILELAYOUT is not set
# CONFIG_NFSD_V4_2_INTER_SSC is not set
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=m
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
CONFIG_SUNRPC_DEBUG=y
CONFIG_CEPH_FS=m
# CONFIG_CEPH_FSCACHE is not set
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=m
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_MAC_ROMAN=m
CONFIG_NLS_MAC_CELTIC=m
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
CONFIG_NLS_MAC_CYRILLIC=m
CONFIG_NLS_MAC_GAELIC=m
CONFIG_NLS_MAC_GREEK=m
CONFIG_NLS_MAC_ICELAND=m
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
CONFIG_NLS_MAC_TURKISH=m
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
CONFIG_DLM_DEBUG=y
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65535
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_HARDENED_USERCOPY_FALLBACK=y
# CONFIG_HARDENED_USERCOPY_PAGESPAN is not set
# CONFIG_FORTIFY_SOURCE is not set
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
# CONFIG_SECURITY_SELINUX_DISABLE is not set
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
# CONFIG_IMA_TEMPLATE is not set
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
CONFIG_IMA_DEFAULT_HASH_SHA1=y
# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
CONFIG_IMA_DEFAULT_HASH="sha1"
# CONFIG_IMA_WRITE_POLICY is not set
# CONFIG_IMA_READ_POLICY is not set
CONFIG_IMA_APPRAISE=y
CONFIG_IMA_ARCH_POLICY=y
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
CONFIG_IMA_APPRAISE_BOOTPARAM=y
# CONFIG_IMA_APPRAISE_MODSIG is not set
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT=y
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
# CONFIG_EVM_LOAD_X509 is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
# end of Memory initialization
# end of Kernel hardening options
# end of Security options

CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=m
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_SIMD=m
CONFIG_CRYPTO_GLUE_HELPER_X86=m
CONFIG_CRYPTO_ENGINE=m

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# CONFIG_CRYPTO_CURVE25519_X86 is not set

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=y
# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CFB is not set
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=m
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=m
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_ADIANTUM is not set
CONFIG_CRYPTO_ESSIV=m

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_XXHASH=m
CONFIG_CRYPTO_BLAKE2B=m
# CONFIG_CRYPTO_BLAKE2S is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
CONFIG_CRYPTO_GHASH=y
# CONFIG_CRYPTO_POLY1305 is not set
# CONFIG_CRYPTO_POLY1305_X86_64 is not set
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD128=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=m
# CONFIG_CRYPTO_SHA3 is not set
# CONFIG_CRYPTO_SM3 is not set
# CONFIG_CRYPTO_STREEBOG is not set
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_AES_NI_INTEL=m
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SALSA20=m
# CONFIG_CRYPTO_CHACHA20 is not set
# CONFIG_CRYPTO_CHACHA20_X86_64 is not set
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
# CONFIG_CRYPTO_SM4 is not set
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=m
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
# CONFIG_CRYPTO_USER_API_AEAD is not set
CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
# CONFIG_CRYPTO_STATS is not set
CONFIG_CRYPTO_HASH_INFO=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=m
# CONFIG_CRYPTO_LIB_BLAKE2S is not set
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA256=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=m
CONFIG_CRYPTO_DEV_SP_CCP=y
CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
CONFIG_CRYPTO_DEV_SP_PSP=y
# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
CONFIG_CRYPTO_DEV_QAT=m
CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
CONFIG_CRYPTO_DEV_QAT_C3XXX=m
CONFIG_CRYPTO_DEV_QAT_C62X=m
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=m
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
CONFIG_CRYPTO_DEV_CHELSIO=m
CONFIG_CRYPTO_DEV_VIRTIO=m
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_RAID6_PQ_BENCHMARK=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_CORDIC=m
CONFIG_PRIME_NUMBERS=m
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC64 is not set
# CONFIG_CRC4 is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMPRESS=m
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC8=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_BTREE=y
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y
CONFIG_SWIOTLB=y
CONFIG_DMA_COHERENT_POOL=y
CONFIG_DMA_CMA=y
# CONFIG_DMA_PERNUMA_CMA is not set

#
# Default contiguous memory area size:
#
CONFIG_CMA_SIZE_MBYTES=0
CONFIG_CMA_SIZE_SEL_MBYTES=y
# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
# CONFIG_CMA_SIZE_SEL_MIN is not set
# CONFIG_CMA_SIZE_SEL_MAX is not set
CONFIG_CMA_ALIGNMENT=8
# CONFIG_DMA_API_DEBUG is not set
CONFIG_SGL_ALLOC=y
CONFIG_IOMMU_HELPER=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPUMASK_OFFSTACK=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_DIMLIB=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_COPY_MC=y
CONFIG_ARCH_STACKWALK=y
CONFIG_SBITMAP=y
# CONFIG_STRING_SELFTEST is not set
# end of Library routines

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
# CONFIG_DEBUG_INFO_DWARF4 is not set
CONFIG_DEBUG_INFO_BTF=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_32B is not set
CONFIG_STACK_VALIDATION=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
# CONFIG_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
# end of Generic Kernel Debugging Instruments

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Memory Debugging
#
# CONFIG_PAGE_EXTENSION is not set
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_PAGE_OWNER is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_SCHED_STACK_END_CHECK is not set
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VM_PGTABLE is not set
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
# CONFIG_KASAN is not set
# end of Memory Debugging

CONFIG_DEBUG_SHIRQ=y

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
# CONFIG_DETECT_HUNG_TASK is not set
# CONFIG_WQ_WATCHDOG is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set
CONFIG_DEBUG_PREEMPT=y

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
CONFIG_PROVE_LOCKING=y
# CONFIG_PROVE_RAW_LOCK_NESTING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
CONFIG_DEBUG_RWSEMS=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_LOCKDEP=y
# CONFIG_DEBUG_LOCKDEP is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
CONFIG_WW_MUTEX_SELFTEST=m
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

CONFIG_TRACE_IRQFLAGS=y
CONFIG_TRACE_IRQFLAGS_NMI=y
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
CONFIG_DEBUG_PLIST=y
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_BUG_ON_DATA_CORRUPTION is not set
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
CONFIG_PROVE_RCU=y
# CONFIG_RCU_SCALE_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
CONFIG_LATENCYTOP=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_PREEMPTIRQ_TRACEPOINTS=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
CONFIG_TRACE_PREEMPT_TOGGLE=y
CONFIG_IRQSOFF_TRACER=y
CONFIG_PREEMPT_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
# CONFIG_MMIOTRACE is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
CONFIG_HIST_TRIGGERS=y
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
CONFIG_RING_BUFFER_BENCHMARK=m
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
CONFIG_PREEMPTIRQ_DELAY_TEST=m
# CONFIG_SYNTH_EVENT_GEN_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_HIST_TRIGGERS_DEBUG is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
CONFIG_SAMPLES=y
# CONFIG_SAMPLE_AUXDISPLAY is not set
# CONFIG_SAMPLE_TRACE_EVENTS is not set
CONFIG_SAMPLE_TRACE_PRINTK=m
CONFIG_SAMPLE_FTRACE_DIRECT=m
# CONFIG_SAMPLE_TRACE_ARRAY is not set
# CONFIG_SAMPLE_KOBJECT is not set
# CONFIG_SAMPLE_KPROBES is not set
# CONFIG_SAMPLE_HW_BREAKPOINT is not set
# CONFIG_SAMPLE_KFIFO is not set
# CONFIG_SAMPLE_LIVEPATCH is not set
# CONFIG_SAMPLE_CONFIGFS is not set
# CONFIG_SAMPLE_VFIO_MDEV_MTTY is not set
# CONFIG_SAMPLE_VFIO_MDEV_MDPY is not set
# CONFIG_SAMPLE_VFIO_MDEV_MDPY_FB is not set
# CONFIG_SAMPLE_VFIO_MDEV_MBOCHS is not set
# CONFIG_SAMPLE_WATCHDOG is not set
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_EARLY_PRINTK_USB_XDBC=y
# CONFIG_EFI_PGT_DUMP is not set
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_DEBUG is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_X86_DECODER_SELFTEST=y
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
CONFIG_X86_DEBUG_FPU=y
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# CONFIG_UNWINDER_GUESS is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
# CONFIG_KUNIT is not set
CONFIG_NOTIFIER_ERROR_INJECTION=y
CONFIG_PM_NOTIFIER_ERROR_INJECT=m
# CONFIG_NETDEV_NOTIFIER_ERROR_INJECT is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
# CONFIG_FAULT_INJECTION is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
CONFIG_LKDTM=y
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_TEST_STRING_HELPERS is not set
CONFIG_TEST_STRSCPY=m
# CONFIG_TEST_KSTRTOX is not set
CONFIG_TEST_PRINTF=m
CONFIG_TEST_BITMAP=m
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_OVERFLOW is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_HASH is not set
# CONFIG_TEST_IDA is not set
CONFIG_TEST_LKM=m
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m
CONFIG_TEST_USER_COPY=m
CONFIG_TEST_BPF=m
CONFIG_TEST_BLACKHOLE_DEV=m
# CONFIG_FIND_BIT_BENCHMARK is not set
CONFIG_TEST_FIRMWARE=m
CONFIG_TEST_SYSCTL=y
# CONFIG_TEST_UDELAY is not set
CONFIG_TEST_STATIC_KEYS=m
CONFIG_TEST_KMOD=m
# CONFIG_TEST_MEMCAT_P is not set
CONFIG_TEST_LIVEPATCH=m
# CONFIG_TEST_STACKINIT is not set
# CONFIG_TEST_MEMINIT is not set
CONFIG_TEST_HMM=m
# CONFIG_TEST_FREE_PAGES is not set
# CONFIG_TEST_FPU is not set
# CONFIG_MEMTEST is not set
# CONFIG_HYPERV_TESTING is not set
# end of Kernel Testing and Coverage
# end of Kernel hacking

--O5XBE6gyVG5Rl6Rj
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=job-script

#!/bin/sh

export_top_env()
{
	export suite='kernel-selftests'
	export testcase='kernel-selftests'
	export category='functional'
	export kconfig='x86_64-rhel-7.6-kselftests'
	export need_memory='2G'
	export need_cpu=2
	export kernel_cmdline='erst_disable'
	export job_origin='/lkp-src/allot/cyclic:p1:linux-devel:devel-hourly/lkp-skl-nuc2/kernel-selftests.yaml'
	export queue_cmdline_keys='branch
commit
queue_at_least_once'
	export queue='validate'
	export testbox='lkp-skl-nuc2'
	export tbox_group='lkp-skl-nuc2'
	export submit_id='5fbe468e2274c60fb694e304'
	export job_file='/lkp/jobs/scheduled/lkp-skl-nuc2/kernel-selftests-livepatch-ucode=0xe2-debian-10.4-x86_64-20200603.cgz-fd8d46a7a2c51313ee14c43af60ff337279384ef-20201125-4022-x8lk6d-3.yaml'
	export id='f491b9b4487cadb9bb572dbe9e41c7bfee9f83b9'
	export queuer_version='/lkp-src'
	export model='Skylake'
	export nr_cpu=8
	export memory='32G'
	export nr_ssd_partitions=1
	export ssd_partitions='/dev/disk/by-id/ata-INTEL_SSDSCKKF480H6_CVLY6296001Z480F-part1'
	export swap_partitions=
	export rootfs_partition='/dev/disk/by-id/ata-INTEL_SSDSCKKF480H6_CVLY629600JP480F-part1'
	export brand='Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz'
	export commit='fd8d46a7a2c51313ee14c43af60ff337279384ef'
	export netconsole_port=6675
	export ucode='0xe2'
	export need_kconfig_hw='CONFIG_E1000E=y
CONFIG_SATA_AHCI'
	export need_kernel_headers=true
	export need_kernel_selftests=true
	export need_kconfig='CONFIG_DYNAMIC_DEBUG=y
CONFIG_LIVEPATCH=y ~ ">= v4.0-rc1"
CONFIG_TEST_LIVEPATCH=m ~ ">= v5.1-rc1"'
	export enqueue_time='2020-11-25 19:57:02 +0800'
	export _id='5fbe46902274c60fb694e305'
	export _rt='/result/kernel-selftests/livepatch-ucode=0xe2/lkp-skl-nuc2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef'
	export user='lkp'
	export compiler='gcc-9'
	export head_commit='49b0a0e914926c1116def49daf207c926dde5715'
	export base_commit='418baf2c28f3473039f2f7377760bd8f6897ae18'
	export branch='linux-review/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934'
	export rootfs='debian-10.4-x86_64-20200603.cgz'
	export result_root='/result/kernel-selftests/livepatch-ucode=0xe2/lkp-skl-nuc2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/3'
	export scheduler_version='/lkp/lkp/.src-20201125-160031'
	export LKP_SERVER='internal-lkp-server'
	export arch='x86_64'
	export max_uptime=2400
	export initrd='/osimage/debian/debian-10.4-x86_64-20200603.cgz'
	export bootloader_append='root=/dev/ram0
user=lkp
job=/lkp/jobs/scheduled/lkp-skl-nuc2/kernel-selftests-livepatch-ucode=0xe2-debian-10.4-x86_64-20200603.cgz-fd8d46a7a2c51313ee14c43af60ff337279384ef-20201125-4022-x8lk6d-3.yaml
ARCH=x86_64
kconfig=x86_64-rhel-7.6-kselftests
branch=linux-review/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934
commit=fd8d46a7a2c51313ee14c43af60ff337279384ef
BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/vmlinuz-5.10.0-rc4-00011-gfd8d46a7a2c5
erst_disable
max_uptime=2400
RESULT_ROOT=/result/kernel-selftests/livepatch-ucode=0xe2/lkp-skl-nuc2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/3
LKP_SERVER=internal-lkp-server
nokaslr
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw'
	export modules_initrd='/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/modules.cgz'
	export bm_initrd='/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20200709.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/kernel-selftests_20201124.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/kernel-selftests-x86_64-b5a583fb-1_20201015.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz'
	export linux_headers_initrd='/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/linux-headers.cgz'
	export linux_selftests_initrd='/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/linux-selftests.cgz'
	export ucode_initrd='/osimage/ucode/intel-ucode-20201117.cgz'
	export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz'
	export site='inn'
	export LKP_CGI_PORT=80
	export LKP_CIFS_PORT=139
	export last_kernel='5.10.0-rc5-08384-g49b0a0e91492'
	export repeat_to=4
	export schedule_notify_address=
	export queue_at_least_once=1
	export kernel='/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/vmlinuz-5.10.0-rc4-00011-gfd8d46a7a2c5'
	export dequeue_time='2020-11-25 20:00:23 +0800'
	export job_initrd='/lkp/jobs/scheduled/lkp-skl-nuc2/kernel-selftests-livepatch-ucode=0xe2-debian-10.4-x86_64-20200603.cgz-fd8d46a7a2c51313ee14c43af60ff337279384ef-20201125-4022-x8lk6d-3.cgz'

	[ -n "$LKP_SRC" ] ||
	export LKP_SRC=/lkp/${user:-lkp}/src
}

run_job()
{
	echo $$ > $TMP/run-job.pid

	. $LKP_SRC/lib/http.sh
	. $LKP_SRC/lib/job.sh
	. $LKP_SRC/lib/env.sh

	export_top_env

	run_monitor $LKP_SRC/monitors/wrapper kmsg
	run_monitor $LKP_SRC/monitors/wrapper heartbeat
	run_monitor $LKP_SRC/monitors/wrapper meminfo
	run_monitor $LKP_SRC/monitors/wrapper oom-killer
	run_monitor $LKP_SRC/monitors/plain/watchdog

	run_test group='livepatch' $LKP_SRC/tests/wrapper kernel-selftests
}

extract_stats()
{
	export stats_part_begin=
	export stats_part_end=

	$LKP_SRC/stats/wrapper kernel-selftests
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper meminfo

	$LKP_SRC/stats/wrapper time kernel-selftests.time
	$LKP_SRC/stats/wrapper dmesg
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper last_state
	$LKP_SRC/stats/wrapper stderr
	$LKP_SRC/stats/wrapper time
}

"$@"

--O5XBE6gyVG5Rl6Rj
Content-Type: application/x-xz
Content-Disposition: attachment; filename="kmsg.xz"
Content-Transfer-Encoding: base64

uKnUZlrCDjClxpr+VFn49XicawiTY8nDd+Rdu3692gykUXwNuMF66uHCzqol2qZlPpsnHw1x
sbZRWw7K4st9YUWkfdhQsWxOSOwWjYcXv7PrwyU+cVCQ0IdjypezrniKp0t8lyORLG+OzgLW
GZm5QkupuRqK1iUw0tmCr9t3oF8IyCe5pjZZNsqZkyh6HpLLY4QPkVMaR1SHeIYBKgLuzOrx
HEpJSGqOLHpKRoJYRUykWO/oyICD5it0oKpPvOxGiGf2lCCPoXjZGFfc/G7zwdhjiVHsJqvn
xJLuEdgOp5MwFmVD43Sag0MrJ+Okt/TGMfboc96NP4/R5l3bwCbaUT0zRt9U0eNzw/1c2kxC
7UMunlRr2wRwVZigCq/FYTiCM5cR2Xy3bc+Kx9gDYl3iV+md7sWVMXxkbGWs4mlQb6SiO8xb
A7NLsAe5acbBtk0uG4QcQOM8q2gXiqf91ed7AEm+Om4wwgQkx9zd9HUTdtePXADyTsHBwVjS
4ztaFOa+ms7AvC/4vSdkDL2ily45CQ/MXYDXyRveGvHpoOCsnqREquevVMsdiDCweCTn7fD8
OZ9JTbQU7hMUQ92nV/zj7sKwLqQCzVsgxlbMCNO7v0uKFe0DBJbBvDyy570We9Nt3zm4GBBt
CyWFsymKw0TFSEZhgyq6tixsEygQ9fBnQvGkOoF+hmvY2WPqi4gVwTytmYhgGUufAYqO9fNV
yuIN6oli1JkhTg+ct2E97Dv7j2WWkFFC+FelQTVfjgXDypPjJrKGA2+57FICGq0lxWvuuu4O
sOhjq4lVSv7ecbEzCFY5YDxrO0QaKnkFmrN7UQODaXQlmLc/mUx2n/QyUu6PBzTA+eyROvbz
nomEoDUjSAF0G7IzMHMxxCVJUl/+CRFoJ6MBcAgK2P9QsasXkvhsfc21yd33l54Nkt5eywao
MmKV7GFbJ5QZu5sqUDgcOBx8AGPMhz5hBF65jEwfZUN8yNfHCyXGRwC6k58SLH6whDXqaAdi
pK+YG7myiB5Xl+I5Uch/dMA524cU3bPqgMM+2n8LtnL3BI8KZEAgblhfU31XD2RGdKOagUge
ny18z7bn/EMfbT/9D/cz90JZkJ606dKQb7s63rwLapCXtiUrUc849gFuhldhe7tZ90I6g1hN
oj+WXSLu7K+3/o6vh3rTwBupAQBzcO7Yt62Q6DmO53xkG+qZ+G8iKY17SzG4XVmPu8UB/An3
Q10iebMkNaujyb8VzLGVI4TS2Uy8icLUMiuP5duKvdWycDA5l4rMaxPb5dve4bl3Ayx758VU
F9IflKY82mmoiWV56rVPwc5204u8ZmMmxMerZeMnc52kIb+ToJT4H9T3LWivzCX5v2JXvmTq
Y75ln/kH3P6zzQ2p4o7w1+thlRibaNT3DuPlV+NROHkY1sDMBroidgLipWvj4KMjIO8wVwr/
+o6328tPFgIPxABSj69KUOcdB3WejR/HBd2FQuKVTgqv5Iw64nNWhJowMvl1Pwf72Ehby9Bn
HDrJdarc1xbnhi7LYgsUsM74bpJaBpe7e7RylUIHrNkGr5VTeFEXsfyfrqyna6Xv19P2P8Py
amzyecVOYk2rFiw4seSqNlg3xugLs1vR8r20r5Z5+YBRlYpa98c+FFsF4SVGlgENWYEX8f5d
WNq9aTuPus/siPHTKha+LxFl2gU6FzpgnJ46MzDzOTxYUDyJb8h1dLYdbRf/GpDpdIqiu4Fm
ka7kHl57rU1L8WZMGAvbedqX8PJKh27mG1WzRTlc8esDSBA8m5yTliSCEsyum945ANzbwBB2
O7eX9bU5ww7+bgC61dZNgZIQ0ZQDzc8sIBrvCuN7srlVAI3V00KMfH/2vHEi/R9MScdYPZr0
a2wEraNNyIG6ZADpT9jpkLn60elCCTHHkNPUDQXf5hLi9XPmZ+eF1OanrpkKOCkqt7TcRsxZ
fc2C52xazorrH0Pwyc5Pt8hF8zuNhROe7RuWhxXqX3DOhvFKniYY/UuqSXVQZGdGiauUCTfz
+fLo80i9gIKRFl1bcDfhh2perGKSxbO2BznCxJnM305Ab9VrF309/j73ee3YF/LoszsfrHZh
cm1E1eE6euwO7V18238f282EY1u8NMUFiPlrLzRhaa3W/K8xTGKqL2XWytS1w7g7jt5wC+OP
VJsfTfvloR2glx7V1N8CuP/mgAv1OgHHPzxmHzddCEPBS+/w6dYPsABPLXX5QfrzdSzy9Ssg
GIhyk4F1iKasDUQMepy2q8WLpmDlU/bZVMMFmDaZphkJb4mhbWa9IZlWFjWdEa+z1rG2wVEp
42SjgV8fv/JAmsFNNdeMD1Kdq36IssUKjnxClqSucK+Tldolksm36IMIwyGkMwi64RqQYOfv
TPD14FkbsehJ3fmEyY3qBpEJp/TQHxJof4Bik5RXJEWmmm8tqbO+fJx+750zbz1IujjWFuKc
34jfP7UNgG7OKjdGlHIFxOUxucLVTxvrEKHtsCqSAXOw1oMJYCOaVFuX4ZDDP/wqEnIJmh3/
ilpdZhIrPGTtk9qrQ8lEvBTtyRVJ+4EGHqvJNX3AlI/lGve7C5gqTvFwymXA079HTwatnfQs
C+pT7wFmpgGqvwJbwH1n0DksJg8Bi86IEDVC67rPgMS85CikauOmMey+hdYFWaLI8k9aMiUe
ku4yRfpNiQCZzZd4VA0fImrgpNTx0s9tb8zMH+d2mFvwSOzdKabxqyi06wyZcZWl4D1+1Yvw
6Nc4bnsh5vb2vIJbdq7k4eY7zTeCkVK+XSQsmpWnCzcJe444g05XDuU9yhCacRq8HPLK7HTi
ciP49kpGDN/psglOMmIqoTQ7n7INABKBhCXOSHWeHDTuQ4Do/bubkC6bA8yYyWzkFuwIYm50
MqIPlmt6oDxXa7n9hkohk1fLxhxYmcs/vTaYWcGTa/HKPqFTIRY9xZjpNvmGCTUGsvrL0waV
P5zna0AxQc2O8C3vRvRMRQCBNW/d0a/Ct1RjSV1V8R4NUpEjNgjecnqNRrRS55M3N6O+Ag0v
JciH9k7lJEFyoSBoITSPpExG9Hx/KqVaoo5RtUYgushIlucfG3o+HsuaROhtWYcTPj55PVHU
wd6VJM7psVAUgd+uP6ejklQ7B/m+W45uB6l+3aG4ZvFwfkglx4ZZl2vm2tAD8N06CwEOO62g
hQS0pGjtSRzePlIC5S2nfd6Uwt+s4gDWCRgKFZTovXOg/AkicTblpf/XDv4VbjL7h49KnDvG
G/Zn8IXRMNHlGmis3UnYwVUDy83x7b7lzvKFR3tU6FNlpkKsp3lwtv5jIXZmNjybvx9jT8CV
AUava6pVNgzKr7tktc2UUxJdhFHjHFTUpdqhuHttCTrjZ8EHLDfcqA+zNffJ6Jp+Ib92hC4Z
zjOynfVk/wr/X/r02kLoV0CRFOpvh+E4bRTSF3itoqXkQMbCtS7kQIvZikmItxQ3JiCetZs+
MPHC8yjb6SaeetO8idGTeq1hq+bXm5mBo6nKyXfrOxeTSrxXiTIbMOGY1v+HTDonuOt78+ZC
/lJDwvq0U/bDuG76LEkpMJ3vdFs1RZa8ndQiYPdq9L9MEfcFQbCg2Qx9cOpvDBi5VeAtLgAh
RrV6ptQLAgyy3gZqc56CsgYQVgg98ZpPkPXSSh/l3VAfHC7biWq7kmrJDXjcgOyyBSEcIdWR
X3NMP3a2hCQ3Yrk9QVVMVHCENWoXxcwjr4+YO6cZwK8ZaB3mvxcsdQzk02HM3R5fr/FOKx8A
qn7POKd34xV8feUABeHSSX9D+WoB3Krl1vre9jsNT2TC7aDvnGnMvkjZ1dOiuaOssU9Ze77R
HgJUGgvuiosmo3lLyFM4TxdX1wz8YpU15JOME39BEpazENor3OC48GT2ovLtRkTSJ6D1xUdQ
2WQkM3D4/QXBDYG7k5pxkqb3S6qF20OH1S4HS0VKsfuwIevM7Hrtl0TY5blmcWhWXOxPtxYu
p7DbZdqCQ+2kR0AQoZx1r1NAotxAUkBxna7n5ejSX1wwH/ZH6HgtDny7m/jSBn+FIwBNlJOx
I3GtfhKtlcAGZ1Cz/CZyBZAk3tg4RvICaHsGp73kja+gYPyVM+ArOHxvI85LARvN0WWaDh1+
xwfs6WY8QOJYVvvxWAgnRQ3oIURXN0eIrF49xam31cHGoKtJCe42/uROk3wK+fWcwR5inxml
xJ8JXAko25KK5DgyfM+BzLUJH0PdKjyHJVV0S9RYEwNLsQ5K2Z/gQJ2igBJVa8HlT0F+RvPc
UCbICQd6lSZZ5ooZcCQVrPRyeLxzcQXHoe4ASn+Fk48ZYgMOxuqkECelWtoubuv1O8YF7+oL
fBmYA43VNQsQcKhmwmXZnBTo09pi64ZFLCHoAn7Pom8yWYYZbDbyw6f/cscFpeJLNjmQrl8X
pugIsnk37F8oULh8XgjUE7TL7vCLhZuD/eiAS8ZpAD+qHSQhcyVrRC/U9FS9nA88WPNYBl7X
SDJinFLbIhms4qxSlJ+AtybaDjaVoHNqgrGyarbeZzCCj/Ot1/xvczp1OpIOE4bhl6k156VB
PBZoq0NA+B+EJKrOzTkE3qk7NULr1btSI75hrD2ceNKOaQAL6tKewSbtDdSptpM9M+WQ5Rx1
ZxTVknemZDLTA9eIxu3EDeOfuEH4oPNBnwE6XenUp8MkK1+nM0ZQUIqNPxcFDlJK16JvHSjV
Ls2gp3SjqLmCM4L/piz46aE1j2vOMcde/AUnN7hSKg90L6Fe0L5YKmeQx7sm/9of7JbVVM3M
pFeOS0xThI11GJ2j+j6uQJYq41bfQ4lgMqeqeSWU44IJMJ97u+W1rsT9UFymUsMGCDHG2IcV
v6ux5J0pDPKOL4W422WRkRbDSRdjjNNLtOIAFbQlj62WaeK3Ypa8MAMX0Xq3H8EFsdKARg6E
oQbwG761to7o2gyyqHbO2bG4qWSn6oUCtA472y7S7Opmhedb1XzYFrQkjmjph2l3IkWl1bKY
Z0wwoHzv7AK+Tc1ggW5YTcj62JXkDjQmV950/ZsTok5AMb2H39bHIXGGvm03IjIv0dTcFg2z
IhxtaNhLh6kx2DT5wB2MrlqFXcCLtZCQ8oJhATNYJYNs8LKbwKMyzXIZV2sTwTuvHinZflJn
MKfHnFdfND+GyTJQdIh9EstizBjV13KbpJGAkxdegnRkI56VH2emQo/GGawfkgP01c6pHioy
rMyfxPxNCTcRi3WiGXs6O0t6QJEGMPiMYDO4MTN24cPDk9g+06IPZIaKWXYiOMfy+dMYbihY
fyNU7ebwTuBwBNZrQIXGxo7riYSxfiGrRbBIhVcygSemt9imnRcTEn/NWak12EfR9Mik/Ob7
I6ZjTdHd67dp2ncHi+ENe6j9NCb/DkVO9+rxZ1wtLBVNw4kjVcP9S4wu9EsgymvmubI7sYpX
Pv0/fFzay1Y5j+iylh6cMTPRc/o342LH4XwMuQRmAOG5SpyvALZVevMEBpNoIBkZBgDr5NZs
ewlGQajvctfhinZiN1nshZ1U5+n9LPDbP7azZ44TittQOfbhw7Bthw8sY+LdRC4vMA6hHu52
04KIp2stcTcRktqTkyZVlFH0XpuhfXhNC46bxxz9DJJ0RjGEs0zylV7tdGKPA77D8JsJNf9E
BNoDHVxDkV/zLrXB3Fp/iGK+fpFTGMlpK4jPkFsLAnSWQ4Lu7F5Lkhdkp5yxpAc6eC9b0cwg
aLYTsGIeKR6RHKgAVa1PxLuZNBm6nZVkheVcG7GqkKvpWPrPrtiukVh3hk92t3oPOnP41C3O
sykdEqWFQa0a5iyMR7l08IwjhkL7kbgUqT0HWt6tFe1K32SMlt3IBFDQzkFdYEe3oxsXyD48
gNz9Mn+WaZarQ3IXk/yTMxbYRD7Qm34umjiQuPENxHhi0raCPBkYnKHNjHa1soFpYHQOvj8x
3sAl6Q/zILTyuo4qXYZtIxInO9MHqqPquOCX3/HTyUTpqiZQUjilh38VcDs3HTQ7BUpyjlGJ
EDhheRHe3Kx9CEeZQeatfFKpPav0zv2nt0id8UicUGCDHH55WW6CtFygAdn18n08DSUw5vW4
2J+CRVqJipIs5r43Sdo4jbhLLFsaCY6DIO4fhnnJqi7MkXjTKKQouTYDn/jBIQ0JBLtNx0Bk
6tHv4yCOwpI71oYA2zuewiHyhgeQNw5wCl2YcSumvOGpOE5KGfEYva18Ucvi9/fmIPUfMiU5
auI4sTYpW3vNhvr9jYHVerLjrF/7SKDYx/CtIUZarycF7SeeCtDk8n384NgJdFGgJcATYfhw
Wwb45HgT23FZmxR1mUiFcJQY9UGj8Uv719OKLc2QRttErVfAv3hOrkIii0PnZXxoUitI8r/n
zRAduGiJIKr+1eDrZhiheb5GVzwBZtf96oiBvifb8mA/qFmlPc6C8bmZwycAVaEceLYU+mxg
4Wwm8Q1kX96mrTekWzQIwsPodtjxBeWnUABA+7CwlnG8C6m08VNb6oDbLoQxxZHB+MRSa9W4
yUTSWuc/PBgZj2zuyMr6XnGL+00nQsYB4juZecxpCsoDlkJlE4xNeFAJLc5LGhWZteiuFZ5k
Cc/nUXAqcWvKUNhR64vp5p7fE5Ey4oxVAUkhEkK2N9GSRtTuhR0a1JpVsc18qmbH0Zmd5J6V
ueJVpyT2MmyjcB9QYxVNcNVwCVMGHNTKi+UnYxtGDEdgTFQFP6YyPoqVlo37alw5C3bQIOAu
BhPPZw+7cm2WhjZ1CuVIGiE9xZl8MiWEX1mDDIwRU7eEdo1v5rC1FnJkCSOQ58C4pJ95XMy8
gW/dp1V420LMlUGkC6i67Ec05GspjMGc0x3owt4phyA1jvHmVjRWGKzEMO8GGwyMCmFSJ+mx
nABRrVUHO1WbK1guV3TMEBUceViYSdrO3aDFYIMSY9s2BoFCkqxkducaJm5bim+BTR31Byny
cljezklIBZivAKbFHQQ588OqqktRxUxYs9l9QaPoWO4m8tD6DYVQ7LXaSYLkHIsJrkRAeE1e
dJEDf7SuOjhkKE/vNB5uFfSEqUoCa97sIM1pL4Ab6lvv1qH1lLv3LGKTv088LyHnQQFyjb5T
I22BP1nOX9W/pNvlkeemms1EAW/TlRa3DIn0rLVsuade8HKH/GbFLpFYjVEmMK3ec0ycX5r2
m68p2yySbi8EYgl0lhImfoih6zCEbn10qbgyNf+L65p4N+2+DO25NEwYEuIHSdX3xDk+R61l
Z06POYCesikKk+Od91mYC30Wr/Xe3o3SpaQqUMDoAdClTFRSCtP8B+0MmOq5NHAOWsFt4hZN
Rdb8ab/W9kek3VgVs1/THdgONowiHoQ11OuG4niA22bdRbQS0NhkK2Qnn40Wc0BM7V3by0zq
cwG2O5zUQce0ysn1VeZOEfJekeoAHJwkOo5/636f6YuHEJguFH1ro99wxmlalQ5JSDVsQrK6
bRFGp4joM3PNGdUMFt6vMrAq7pK7EdvIr90ID23Gv0MWO0yaAgslx2zeUnAk1AYTXdCukOIF
2nvhTcZevOX/Dzw9aXGopfvXpRpt0O3JmvnAlDpRG4eCe8ouRVdXn9cnoSUHiP86iDrIltLm
kGHtW7rsOqBzbVX/eLum8RPUiRv2BKFMqlcaNaefW3nVAf1TYhNANew0AucjhFhxWjbwV6fN
sseRuqOwjX4G9QsVk+ksZw3Dy0rSz6uyJfCsuzOl37p8fs7cpvSuTdbquVab9eWKo4q47Nt1
d/ZhrqPS82SMonhet7+LhWD/hoYr66efnlRLxYaryDLs/jwaqLHZ+dfuUF0RpHqZcBweA8n6
gVgyAWP16STXDarPqzPVLOFo6Gh8B3VsYMqGW/RJsC0ORcelAe1UjuorpHxP+A7av8KABTq2
icUa/3/ODulKTF1nJMpBjKbL4VnWOmg2exiqDwO48XP3ZqCoM4O2h0G3IUkfj5z/uk4t/5hk
D4eB5tjP3fV77OZSC7c1pADiPo8cJZRzOAuLoi/sW/iL30PYQ07udpk4+jJaCQqOwd3uMP8i
T9QBsRhNJ76mjUh/+AmWd2EdgGZupTg4DEJBLucxA4lZ8vlwnsXp80ApA3jndFb1Qa/2J1Ht
zQ7TFbitEcCJ0CDz1VHgj2GapU0B1i7Iaa9ZIKqTKUsIrpA2iL7AKOBaJKgDMB/yLhVH8e4b
pO+N8+7pura8rkCddvfX7lkthVgSA7q3Kvm72Urd5toYtkT+5Vr51WY+BMp8cZFXpLXGSXWO
1zLyLQS47hO0q70e1+CN31JElXCIaZmG9Xlzeky0LsXLHtQOJKb8EIDK+Vm3fmru1P+EAFhl
u8OyB77nTOGACLyftYd4tne9CUPCMJlrVVoc2NR0c/pDrgKA1Rksd/7FS4sUBc4qQhvB9D2M
oBKlgCw0cwNsYeX1VOHy8j8xrAtfo2R54yF+APTQILrOTCEbDM91n5WObTOyA1dVXv7rO4Wq
+1DXJJ4WI3L8c1/StAs9mtwO+DR+JfFj1+NDa6PUNxZBuIfVksnDKOFTQKjGBU2L8/dm1mmk
W0yWmsNpjUtW0Fr5NKyhbgC2yIZnYmSUmuSlkDWOQCBnSPFNBkHxNSMAkgetdpZtx7hnIYVN
E7EyPsF/bBg+g0C/rcwyJiSAwoRQZLANZeYoZbtTGSBTrR7lhp4YieCteZqeIJQIBP2jlx7P
BtZznxeU5UUz7UHYSfWsU/AOPvfE30L79GkJZEi6JmlEgaix4Rmu5lWkWaATyXb2yVS/bChg
aYdgEn1BYE2i9/ctT5Wo8wSs0GXMuN0Qlz1sFaIMY8rnlTFvqMpaXH80f8mHZBW45ZA4GNNy
m+0zUvJEC0rPR20MKN8MSmzCrdB1Osp2LLbmMb0XfEOrPc/jme4NaZ0Z+qXJOgcpF7ShxWGe
gHMhCL0acjJRTzvsWsiWWxxErQYNBNXU+wmQIZX33HMXGHllreUz1W7mIGNAREKdz7uq7vuk
AIO11+8AYX3VSesN4pOYUazmK/KKIESOTBBhATGN49A8ZIGqjpIvFUePaxQXleMXfT9/vBb8
kozM8OCrFNqtLuIuCrw6hFZP9SZvfPYr3Uh7uIgwVyB7a+SCHHWhV5VmJI1rZ7eDppM7RKwC
WkGhbsHDeyUhZfI5pmMyv5voKgCDaKELpD3esJnkRWwmfPgG8/dz3RIDQuskS+fUcM5F+lkZ
rggM8tHI4FqhQJ36tLveHqFbOImnaKhL5oT5g9sQ8S76hPMjLeOuGpthLs1cFZPJJ9EkV1+8
otLY/mhuvEpUxkgfpgqMbq6MZBQ6Muumvw/D973mdRHb094H6PWGAMWKbTpiqWTjgg0jzt4b
/Sksqhgw3f05MZ0N4JXfeKKc/CPu24GQ2U/gpQr3LNqKjNP7Z7x6HT12UFeh8UzTTuDQLS0A
uekglSMQCrAD5FP2kx9XwZTrHkXnhZTjfBrydHwJMkQ6ki5bTPWdDdwX8vt0RkMI2nEsCe9P
cSM84pa2GNrXG4r9GOzFfJPJekOMDwynkiLfT66Qxo4OhRG+5tOEacb+SpiPae2bDaO6Y9ZV
XSSGfPpTkKXU3K3Vd8iBgMUh5wy6+qIwF71YK2TOeaP1axhf4Qk1rDZB+wkSUxJXic5vDvfP
J8bz1knqcFkxdmwmD8G2fJD30Qxju3XfGWrMlQDxoc0OF02Ha5p+V9dQLJEV8deuQIFkH0hJ
2vGKR77jep+hrhbjxVoV2I1lhR93Ad23JH7ubTmJFLUE7ZdkvI5JeHd4hpgD4C1B9pEro431
Ic9/vdro8GKbDo+27kl9zthSAdwLd2M7/RWcb6mxQ9S+YV+ddV2dqKeALY/HceSbqvw1Sxz8
viybHeH0AppDkHOQYE0/jbmRCji9V1MBPkWZyyUAKqyc8W8wVUdyEJq0+kZDkuQDa3lL4suT
vFj+R6oyKSaUE5mO2eCoO6p1F7mF2wgEjYxOXf6TqaLT4vwsnjC+BU470YCXPvdMUq0fBR9y
G5rvuoFsz419mlA4BD3GstqOelwixOKrdswFjhDJDQkmbPjtLridbD7HNXa6Fatk2RIEtIYs
oMklspfPgcJQ5QjkYFBrlABsAhT4XvRj2SLCKYa8BLNArWlgcjFUI8eSEJOHfumSVVIl0+Sp
FwS71aoM95JjpLBxHrsr5lSPHDj5KLUKSAz7iDdhnMjJfeOibZrBHWSLDkZa4l/JhmSyDtxm
ykwNFOE1D0BI6VknK7mQGVpB9qLSGr4Y8n3WZKPo5CVAyZkNj8PpZvf0j6HivUjtgZBQTAS9
UwLWgsXyNwSD/Mpf4RnnEKTybXgOn69h19mOojmDae4DTFf/ir2yEui+6GYCNo70V4KW3g0e
z+dFMzazu5OjifLXt9jEW846zCvQVUTlAKruQtZ3UTDCR1qCq/XdLwMBovcpRrqU2wOcU7M2
o+PWWvMjOtVGncqizQuVqUG3lJFbPJEQakFQl1D5hrNAUEzkD+hldelpCz4WOlGjGoJEamt+
MlevQp7z5maM1LYjZyWz/5QPxRM1TF2Cg9XI6wNdTnxPnKjxFpA7yH9O5gPM2x/+K4xgUAIc
cAbazGTXIBPLfRO/znITPJBeeHfJvfJjJQoxi+ZO8btRpSdXY9T8fjjKsFd9EV1DwFjykQiX
ww76w9wJJwRYZtA4OZncDqi2q+oiuJcUs4JyDo2Gyd8Qei+oqeZe9aKxLswLERxoN3O4I0xk
hW5RxKKIirElnn7xM4yYolrrDmvNLaNSqi4tNPXVIiMicJki9d/0e0XXOlfZh4rjTLPyFqdV
e7AaQ/9VumM6Bvat1IWEXUH82ccbYN+AXNL/R3RvYjiPIbgjf6x3pM+aVrEHZbShGXaa/XaQ
HqRLyuuRRAhd9N6xUYBj7l2oi5lXQ0HvSXo/chr+74WT2K4EGdZ0wwYy1RleRnmYYkDbNW+B
tdEnxnmOFjHMS8GaGqUVan7lI9pVYpgQCLpYFLgK6YnWpLFqTLI0FaGRYfHH5AJfjMmZNVtt
7O7IknHjpnUIpZTMtMURi4O9Z3uPc5YjixCg+9XJGS3Wf3i449jZ9opu4RM9VgGOWy/grBsL
cRz0ZxFBmB3eEXzIhnCW6l6Ze6IT0k6pxOrzGHoT2s/EJtUavyLZ/J2mzir1XiSpB3dqTF8N
hdRfT7iBJGkwVTlYemhx8UvJf+Qvc2QG9n9Q3bnSrf4ntA3IP73nkf+gjuSclHi/tqGVB0gG
8HVNq0IFhvfPmVxpN1wokvkKGA4KwjDsoNyWIHQc7VasQ6eT82zoeNOiFQEIB2jZ4sPBOhMk
FA0lz1x1QBHdgTGH1ppNbfr4RHjwG2kEDqcqg35BxC7HHs2z0so5tep+6V9vqNFVDzV3pjKt
hvYs50M2vnhZKWLHLadRsFbi/2wvhOyn5UYcDA9ICHFqi1zhdZy1PVi/pyFcM5exmwnJ9v9E
XvBPPqoj6e51/XRjwff4bzesk5+BnY1+7Nvw4MbvGNCFvhdH4byE/JXOjrWp5mnHQCVb2VAK
7mb2VRm+5Jdst5flpXp56RugTfYVE7KvgulJxZyCCMTXSAjcNM+VpMEuvsdgbePd1LO8XMil
Vte/uKBW4zc/K+f/ukyJb8P7ZCKck6mOgSPbTZL6XNUesoZSgrz1Ow3evf995hMlzj2hn9k1
n8xQT5vRwvXy74Peb9mNGnZJr5bnffnbViF+kC1WBrj3uKOP5mwupij0kv4z5yqIEkW8iylS
YuS0iz99IERQm5fgedoeXnj9YOaS7ymnyTIHa0MusHvBZyuvGuuGbw/JUc98UXlG7vAA/a7j
dTg+EmFJjLS6MdWeDONdTHD1M3yMSmJGQiomYjDlINXtPwm1V4F7bF4GmCt6RkS0+Q7A780/
Q6iU+TmzxL2n86ILLfwDUBv46W3NU8cF1BrZ+7U/tO4YFW8Z3DLgIYSkpbl5dIOyTz6QC02e
7QfBNSn+Yr5y5bk3fRt/id8lD3WOUqA1kC4Da2ORkxCDW+Ikd4/DYaCcmV/2A/dKkZX1lDqu
TOWPiqninhBqotjsHaTOkluSzL0VNSfG7U6KtCNiFURgd8bBmBqtY2ZhgW8iDm/LbDX5LHQU
S02WzrIdbUZMiueM0V4H4j46mKoWRRXousrpoaVj2LM/Kh065TUj0lugFfvxS2zpbtNlIz2N
2/C0AcKNLYJOip0bJWqzuZRWPrAnVTAPnc7SWUMvn02Vx1eHbmyPbpbdvPtPEO3y7i9Ka8Sg
Dh+TIKulUR/S2vA+eV5AXosVX0YZOK8V/Mw6i6ZnX1mvrk67hrNvhUDKZyw3GN4caJUmIPpy
0daFEY2wi4t/SoqdWfSo52WZOZowrvTxY+T54iyh2RZa2slINln9fydnpfKUsP5sJwX6Px19
GRULRRNTmBwlgs2lzU13NJJPQhuFftMzP3HtPG9JZlcKaenoPOcU7JjMw25aulZH1K2wXXf8
UvFunThI57yCNGA5Y8KU+kCRBLoULQrM0wvNFMNqKmtREiSMa5Ou5vjVYX5K4lUMbdyrU2xJ
Oi+m4usF9EQaG973lp/s2Ch1/58KQwY0potxpBnJfBYjjry0wgEH77owfbjNXFlUZEYqxWpM
hoi6I0JRptbloewXdpueJ3HGTnYZrRxxVE712Q37NFmNrOTn5yAegDzLbHQtGQIJog2owz+z
7edspnkxPwLj5V8fv1gajcR5ob15vRJCwgs8G1CZYv3xTSXNHQfHdpoocrIpiwV+eZ4w9duP
7wnpNUGmKZPWauamHUc/w1RLhFgPGgmO2yECKLaVXb96S4hat7jnxaaBI7sQ1XPV+pz1K4Hu
jjT6JwzNbwqQr08FC4YV84lv/DVdpjuWOqy7FnD/+sqr0lWIDQDJJC+gIY7Mn86CFgnpM1x5
QyfSPFl/1R8dpFUKhHtkkHLdTXz+eLo01qb5e1gzTiQMvwUdJlXyatKZYIsYWNCySc/DqzoL
A8O7mhElxjZkxus0hX5czGcePhBE3uufBs2v4kP5IWX5qgui+Fnjl/ewgDOPag7yf+ZnGSNi
WPIwT4eajvDJ2DnovdmejP29wZYfnjjcfgPNjbjOKVC12/CZ9pw32vYx7LjXa+1kqUc5J7Gs
g5mt3KpSG+KvtmnP9h0bacDi9bLCxMoK92VtBwyxPeH/leLwKIOOmI9r+Lh9vN5GgO4BDo3v
JlSd8TQxKlyR6tnPqFc82oJzjY0OET82NthFb5Q4Bpyl95YjYyB3H8oPVQMbozZNrxoTN6Pk
wLymyM5sxPmF4aYxNg/fJVui/EFe2wfWqZ8VMLwfcCy0XP2UuWcqbFpxqdAjHb1jeflumtV2
TjP4vmkdB5Xz3U9QvmYN9ZJTi6LyiPN46WZctGxK5nXApLKKtxGlXiIg7D/KoKAziDbZV/Y9
AjuWtpw7bvyS2fjfaB5V3QZkZMhpQlva14vK3ZcSV2a+XHEacif1NbLiUrL3pdyXYrZsCuxy
TrgUUrrOGfXoVkJsFqBz/r5YI00v2dzRpgA0pRgA8tqPime6Hnm/F9z+hpvW1MRS/uR3EWsr
mHL1LY5wlDZbzHKku4wj7f5kP8AJ8/gksBhIKLO3wqChqfnXcUEBQmCPMPtBsBuLYoVeNSlx
DIED8RkFMzWlO+7ZjNVGaYh+yisGAs6CrKwDchYY5qSHm+Kg7g80rBF6rkeM0V/ElYS/i7tG
VbcwmCT9hZ16RQloWwtQY0cc1VKu0BHAZoRvc79b5InP1nMwGD+9AZLe1SIKki8vky5xtjMi
zjFydIVdwnNzDt9HzBsdZWd7JNvvSE4dbxnd3lHt3wHRqTBfD4y14BRNEKCsU4+JqdrsL/fM
KyN8fWAXTVXOKhvTBVOpNSJCl2UYBWteFwlKZ0E/Ircyi2gALNgfhSbIF6guvi3uwidBfbqS
Tmls2GRhaDE5V+iBh3SOubDGXeNUq9D7wJn7o/ayrbGhfm8A7OiVfit5TpRoMSD9IGEi+bWX
EdL0L5LNLd38Bns2AHYiMbSXs0WJUkTwM6Tb59/y3o7FR7qCJ7/YLEHduP4K3qCQehTTTgmv
ljFCtVhk+tJgRx8P3bUyyzijmM3DDpUbPUysp6WNg60WVaAaL+znul5Cw4APwWJCCEKpVWcb
VstIHnDGonxUwQbY3IK0BPGoFesjJRw98ZI04RZa/sI7gDJBv34ebvJiSE3JCLbvXSQ/T4nh
+UegMpNpetk4MiOEN2Al5lmRcSwANbgwVi/1aET2ktpONvgVEfAfY0mnlQgeuPyJOF3uDwG8
Ihzoc0dIvT9QKlXVSfwdrIvkpoiKyEOI/+QTPne5edcgOceM6SvYziuijCNy6br4+sA01AJ/
Ew5Y7DSXlObMqD18MLMdwc1ojZmOyz121Fg0p3XdUSnMmGrdsCz56FSf83oxY+FBBGjUHbsw
LoD8dmRbFlzwpvTvHUiiunF4DPXr2vZAohbQ+5jEpXQmCf5PfYUR93+r5f6abSJCX23FKmJK
owf27L4cNeixTML7hx3iogTv0HQXw19KI2GtmmaHOAppgh+RDPndgqoEBJhOcusz56LsWQS2
wmfArrOYEG5R+kKw55VNQQJrlS8XN9IjWS8mzDpOraaAluo79WEvHLgYNaFcq5jqKKtxv/LC
dmcqdRRW5WoOIcoLwlF3RlyF+eDetVCDKBuWp7Q/hkAhQL0t0M6HOf3kPHJERusSTciyZkVC
aHxw/1x2SraChdh/d7IUl1w5udIYLQkRwcgDhnoJDSZy6L7rLfw0siqud2z297ghF3bKkAK7
U2JkVyQ7gMjhgzXFqx1DUptu8caxi5wctX2sn9ug3jLzDvmLYXFSnFRvaONXJ1UW5fiJawwB
oBtfJKzvI+Ps9VkjwSWWH/zxmXBYPtk4zYyuFFIzL99WoK2IFdw5ValUq29gyF3vk93SljIU
XiWcA8PaPw4vKkRGd2bU6uWotwZy1GQfdvdJSNLRJ+z5CF8Y8r3QEA9WhwlVeuYKKPmII1xm
KntT6dKDPJqrj/F9nGp38U5pJaJQqQWFpK8kpqBAxs45lU43UAnbZAZy3ZYAwrQaQaNnUQnw
YdA9tUl7uoWE86sYeA4SCcJAvkkbdpzvBYSQfdKEgRAAAAAAUSHCxBT0HcoAAYWuAYXlB0Vc
EKCxxGf7AgAAAAAEWVo=

--O5XBE6gyVG5Rl6Rj
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=kernel-selftests

KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.6-kselftests-fd8d46a7a2c51313ee14c43af60ff337279384ef
2020-11-25 12:01:24 ln -sf /usr/bin/clang
2020-11-25 12:01:24 ln -sf /usr/bin/llc
2020-11-25 12:01:25 sed -i s/default_timeout=45/default_timeout=300/ kselftest/runner.sh
2020-11-25 12:01:25 make run_tests -C livepatch
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-fd8d46a7a2c51313ee14c43af60ff337279384ef/tools/testing/selftests/livepatch'
TAP version 13
1..5
# selftests: livepatch: test-livepatch.sh
# TEST: basic function patching ... ERROR: failed to complete transition
not ok 1 selftests: livepatch: test-livepatch.sh # exit=1
# selftests: livepatch: test-callbacks.sh
# TEST: target module before livepatch ... ERROR: modprobe: ERROR: could not insert 'test_klp_callbacks_demo': Device or resource busy
not ok 2 selftests: livepatch: test-callbacks.sh # exit=1
# selftests: livepatch: test-shadow-vars.sh
# TEST: basic shadow variable API ... ok
ok 3 selftests: livepatch: test-shadow-vars.sh
# selftests: livepatch: test-state.sh
# TEST: system state modification ... ERROR: modprobe: ERROR: could not insert 'test_klp_state': Device or resource busy
not ok 4 selftests: livepatch: test-state.sh # exit=1
# selftests: livepatch: test-ftrace.sh
# TEST: livepatch interaction with ftrace_enabled sysctl ... ERROR: test_klp_livepatch unexpectedly loaded
not ok 5 selftests: livepatch: test-ftrace.sh # exit=1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-kselftests-fd8d46a7a2c51313ee14c43af60ff337279384ef/tools/testing/selftests/livepatch'

--O5XBE6gyVG5Rl6Rj
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="job.yaml"

---

#! jobs/kernel-selftests.yaml
suite: kernel-selftests
testcase: kernel-selftests
category: functional
kconfig: x86_64-rhel-7.6-kselftests
need_memory: 2G
need_cpu: 2
kernel-selftests:
  group: livepatch
kernel_cmdline: erst_disable
job_origin: "/lkp-src/allot/cyclic:p1:linux-devel:devel-hourly/lkp-skl-nuc2/kernel-selftests.yaml"

#! queue options
queue_cmdline_keys:
- branch
- commit
queue: bisect
testbox: lkp-skl-nuc2
tbox_group: lkp-skl-nuc2
submit_id: 5fbe311e8539be1394aa032e
job_file: "/lkp/jobs/scheduled/lkp-skl-nuc2/kernel-selftests-livepatch-ucode=0xe2-debian-10.4-x86_64-20200603.cgz-fd8d46a7a2c51313ee14c43af60ff337279384ef-20201125-5012-1xc4olu-0.yaml"
id: 006d29ea0322fe41051c4aad10ae03217310edd6
queuer_version: "/lkp-src"

#! hosts/lkp-skl-nuc2
model: Skylake
nr_cpu: 8
memory: 32G
nr_sdd_partitions: 1
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSCKKF480H6_CVLY6296001Z480F-part1"
swap_partitions: 
rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSCKKF480H6_CVLY629600JP480F-part1"
brand: Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz

#! include/category/functional
kmsg: 
heartbeat: 
meminfo: 

#! include/queue/cyclic
commit: fd8d46a7a2c51313ee14c43af60ff337279384ef

#! include/testbox/lkp-skl-nuc2
netconsole_port: 6675
ucode: '0xe2'
need_kconfig_hw:
- CONFIG_E1000E=y
- CONFIG_SATA_AHCI

#! include/kernel-selftests
need_kernel_headers: true
need_kernel_selftests: true
need_kconfig:
- CONFIG_DYNAMIC_DEBUG=y
- CONFIG_LIVEPATCH=y ~ ">= v4.0-rc1"
- CONFIG_TEST_LIVEPATCH=m ~ ">= v5.1-rc1"
enqueue_time: 2020-11-25 18:25:34.737762009 +08:00
_id: 5fbe311e8539be1394aa032e
_rt: "/result/kernel-selftests/livepatch-ucode=0xe2/lkp-skl-nuc2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef"

#! schedule options
user: lkp
compiler: gcc-9
head_commit: 49b0a0e914926c1116def49daf207c926dde5715
base_commit: 418baf2c28f3473039f2f7377760bd8f6897ae18
branch: linux-devel/devel-hourly-2020112414
rootfs: debian-10.4-x86_64-20200603.cgz
result_root: "/result/kernel-selftests/livepatch-ucode=0xe2/lkp-skl-nuc2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/1"
scheduler_version: "/lkp/lkp/.src-20201125-160031"
LKP_SERVER: internal-lkp-server
arch: x86_64
max_uptime: 2400
initrd: "/osimage/debian/debian-10.4-x86_64-20200603.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/jobs/scheduled/lkp-skl-nuc2/kernel-selftests-livepatch-ucode=0xe2-debian-10.4-x86_64-20200603.cgz-fd8d46a7a2c51313ee14c43af60ff337279384ef-20201125-5012-1xc4olu-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel-7.6-kselftests
- branch=linux-devel/devel-hourly-2020112414
- commit=fd8d46a7a2c51313ee14c43af60ff337279384ef
- BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/vmlinuz-5.10.0-rc4-00011-gfd8d46a7a2c5
- erst_disable
- max_uptime=2400
- RESULT_ROOT=/result/kernel-selftests/livepatch-ucode=0xe2/lkp-skl-nuc2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/1
- LKP_SERVER=internal-lkp-server
- nokaslr
- selinux=0
- debug
- apic=debug
- sysrq_always_enabled
- rcupdate.rcu_cpu_stall_timeout=100
- net.ifnames=0
- printk.devkmsg=on
- panic=-1
- softlockup_panic=1
- nmi_watchdog=panic
- oops=panic
- load_ramdisk=2
- prompt_ramdisk=0
- drbd.minor_count=8
- systemd.log_level=err
- ignore_loglevel
- console=tty0
- earlyprintk=ttyS0,115200
- console=ttyS0,115200
- vga=normal
- rw
modules_initrd: "/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/modules.cgz"
bm_initrd: "/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20200709.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/kernel-selftests_20201124.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/kernel-selftests-x86_64-b5a583fb-1_20201015.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/linux-headers.cgz"
linux_selftests_initrd: "/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/linux-selftests.cgz"
ucode_initrd: "/osimage/ucode/intel-ucode-20201117.cgz"
lkp_initrd: "/osimage/user/lkp/lkp-x86_64.cgz"
site: inn

#! /lkp/lkp/.src-20201124-233336/include/site/inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
oom-killer: 
watchdog: 

#! runtime status
last_kernel: 5.10.0-rc4-00011-gfd8d46a7a2c5
schedule_notify_address: 

#! user overrides
kernel: "/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/fd8d46a7a2c51313ee14c43af60ff337279384ef/vmlinuz-5.10.0-rc4-00011-gfd8d46a7a2c5"
dequeue_time: 2020-11-25 19:37:43.699722460 +08:00

#! /lkp/lkp/.src-20201125-160031/include/site/inn
job_state: finished
loadavg: 0.18 0.12 0.05 1/181 2923
start_time: '1606304327'
end_time: '1606304393'
version: "/lkp/lkp/.src-20201125-160128:f12d3f7a:dc1ef617f"

--O5XBE6gyVG5Rl6Rj
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=reproduce

 "ln" "-sf" "/usr/bin/clang"
 "ln" "-sf" "/usr/bin/llc"
 "sed" "-i" "s/default_timeout=45/default_timeout=300/" "kselftest/runner.sh"
 "make" "run_tests" "-C" "livepatch"

--O5XBE6gyVG5Rl6Rj--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 16:29:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 16:29:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37906.70467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khxfg-00083R-1v; Wed, 25 Nov 2020 16:29:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37906.70467; Wed, 25 Nov 2020 16:29:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khxff-00083K-Uc; Wed, 25 Nov 2020 16:29:31 +0000
Received: by outflank-mailman (input) for mailman id 37906;
 Wed, 25 Nov 2020 16:29:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xlvR=E7=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1khxfe-00083F-G5
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 16:29:30 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14ac2dc1-610d-4403-b727-1cd671707651;
 Wed, 25 Nov 2020 16:29:29 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 948AB68B02; Wed, 25 Nov 2020 17:29:26 +0100 (CET)
X-Inumbo-ID: 14ac2dc1-610d-4403-b727-1cd671707651
Date: Wed, 25 Nov 2020 17:29:26 +0100
From: Christoph Hellwig <hch@lst.de>
To: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 23/45] block: remove i_bdev
Message-ID: <20201125162926.GA1024@lst.de>
References: <20201124132751.3747337-1-hch@lst.de> <20201124132751.3747337-24-hch@lst.de> <X71g4Tm+3RiRg4Gf@mtj.duckdns.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <X71g4Tm+3RiRg4Gf@mtj.duckdns.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Nov 24, 2020 at 02:37:05PM -0500, Tejun Heo wrote:
> On Tue, Nov 24, 2020 at 02:27:29PM +0100, Christoph Hellwig wrote:
> > Switch the block device lookup interfaces to directly work with a dev_t
> > so that struct block_device references are only acquired by the
> > blkdev_get variants (and the blk-cgroup special case).  This means that
> > we not don't need an extra reference in the inode and can generally
>      ^
>      now
> > simplify handling of struct block_device to keep the lookups contained
> > in the core block layer code.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> ...
> > @@ -1689,14 +1599,12 @@ static int blkdev_open(struct inode * inode, struct file * filp)
> >  	if ((filp->f_flags & O_ACCMODE) == 3)
> >  		filp->f_mode |= FMODE_WRITE_IOCTL;
> >  
> > -	bdev = bd_acquire(inode);
> > -	if (bdev == NULL)
> > -		return -ENOMEM;
> > -
> > +	bdev = blkdev_get_by_dev(inode->i_rdev, filp->f_mode, filp);
> > +	if (IS_ERR(bdev))
> > +		return PTR_ERR(bdev);
> >  	filp->f_mapping = bdev->bd_inode->i_mapping;
> >  	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
> > -
> > -	return blkdev_get(bdev, filp->f_mode, filp);
> > +	return 0;
> >  }
> 
> I was wondering whether losing the stale bdev flushing in bd_acquire() would
> cause user-visible behavior changes but can't see how it would given that
> userland has no way of holding onto a specific instance of block inode.
> Maybe it's something worth mentioning in the commit message?

With "stale bdev flushing", do you mean the call to bd_forget() when
i_bdev exists but is unhashed?  It doesn't actually flush anything; it
just detaches the old bdev from the inode so that the new one can be
attached.  That problem goes away by definition once we no longer
attach the bdev to the inode at all.


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 16:35:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 16:35:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37915.70483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khxlD-0000Wp-OG; Wed, 25 Nov 2020 16:35:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37915.70483; Wed, 25 Nov 2020 16:35:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khxlD-0000Wi-Ky; Wed, 25 Nov 2020 16:35:15 +0000
Received: by outflank-mailman (input) for mailman id 37915;
 Wed, 25 Nov 2020 16:35:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khxlC-0000Wa-HG; Wed, 25 Nov 2020 16:35:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khxlC-0003uD-8C; Wed, 25 Nov 2020 16:35:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1khxlB-0006wQ-VR; Wed, 25 Nov 2020 16:35:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1khxlB-0003p2-Uw; Wed, 25 Nov 2020 16:35:13 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R776ITmDduJNiRSgzXWYDICeDdogR91bUmHBHt3K5lM=; b=YH4d/PQrqIkC5b4nLocBQ78ELg
	MKNfbwnnOTrka5eAk8hzzDx1D+JtlvfwAyv9KL5AIEpZmDdMbxFtU7mnK4VPwqxoY/4pK6g8mkaZA
	WwaBQgBiEphWsper3FoWvMEc5ySPlZIMsaVg8+sK7Idsr3ME7h+k6yLHNtDGqm46yN5Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156989-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156989: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0057b1f8fa79abe8272690341db54b064c8f2b7f
X-Osstest-Versions-That:
    xen=d101b417b784a26326fc7800a79cc539ba570b79
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 16:35:13 +0000

flight 156989 xen-4.14-testing real [real]
flight 157010 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156989/
http://logs.test-lab.xenproject.org/osstest/logs/157010/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-xsm       8 xen-boot            fail pass in 157010-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 157010 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 157010 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156716
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156716
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156716
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156716
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156716
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156716
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156716
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156716
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156716
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156716
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156716
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0057b1f8fa79abe8272690341db54b064c8f2b7f
baseline version:
 xen                  d101b417b784a26326fc7800a79cc539ba570b79

Last test of basis   156716  2020-11-12 12:17:49 Z   13 days
Testing same since   156989  2020-11-24 13:36:54 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d101b417b7..0057b1f8fa  0057b1f8fa79abe8272690341db54b064c8f2b7f -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 16:51:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 16:51:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37925.70497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khy0S-0002Ju-5M; Wed, 25 Nov 2020 16:51:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37925.70497; Wed, 25 Nov 2020 16:51:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khy0S-0002Jn-2D; Wed, 25 Nov 2020 16:51:00 +0000
Received: by outflank-mailman (input) for mailman id 37925;
 Wed, 25 Nov 2020 16:50:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QLmq=E7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1khy0Q-0002Ji-R8
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 16:50:58 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2db5e4ad-b9fe-4acb-b42b-21982dee91f1;
 Wed, 25 Nov 2020 16:50:57 +0000 (UTC)
X-Inumbo-ID: 2db5e4ad-b9fe-4acb-b42b-21982dee91f1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606323057;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=jVct1SK2i7CvNlY0fbNrhDdBreKhUZcUbCrCCUWkHws=;
  b=B8YHzhVjMfZlIrTSYZDj3pUVx0p2z4kQt22r2FiCR05YVZrT2gM7xYmg
   H287t336bDUl6VzLTJ7wlUYhBNCzya+UGhLcotmREwHhfX1dujaRZWqLK
   wcXeUI+CD2/1n8gnSzr0fh8hgHzzfu2nJSNe/3IqaZcDl+FYef5mHDyS+
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: wRp/0NCnnebIOglK+wxbn6WEBUYWf6v+wkY5jMr5Q15Ar869etcq5f9iBHzzEGYukIWxNpaxsN
 5iFuXvzRT5duw7rF1borim8BmMOBIV6koHlgxhbNGCrinqmDp7ZIpUrIdyBGbObXswOPvVjskQ
 cQ7CpprWaALA2zifcPGbYprhY9jJQNnq28zfvdx+R/w8Evdqc0vHaDdQwU+33wgMp+M0DoD5GF
 jX4zuy6KWWzMnvDFUw55MPlv2EBlzZ+ruo94as82eKMExI4eMDGovogibOvdq/AxRNvc992cJh
 QIE=
X-SBRS: None
X-MesageID: 31933357
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,369,1599537600"; 
   d="scan'208";a="31933357"
Subject: Re: [PATCH] tools/libs: Simplify internal *.pc files
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
References: <20201125144928.22778-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <66d5ef1f-6120-38f6-2be0-c77720272e5d@citrix.com>
Date: Wed, 25 Nov 2020 16:50:51 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201125144928.22778-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 25/11/2020 14:49, Andrew Cooper wrote:
> diff --git a/tools/Rules.mk b/tools/Rules.mk
> index f61da81f4a..5d92ff0699 100644
> --- a/tools/Rules.mk
> +++ b/tools/Rules.mk
> @@ -184,7 +184,7 @@ $(PKG_CONFIG_DIR)/%.pc: Makefile $(XEN_ROOT)/tools/Rules.mk $(PKG_CONFIG_DIR)
>  	echo "Description: $(PKG_CONFIG_DESC)"; \
>  	echo "Version: $(PKG_CONFIG_VERSION)"; \
>  	echo "Cflags: -I\$${includedir} $(CFLAGS_xeninclude)"; \
> -	echo "Libs: -L\$${libdir} $(PKG_CONFIG_USELIBS) -l$(PKG_CONFIG_LIB)"; \
> +	echo "Libs: -L\$${libdir} $(sort $(PKG_CONFIG_USELIBS)) -l$(PKG_CONFIG_LIB)"; \
>  	echo "Libs.private: $(PKG_CONFIG_LIBSPRIV)"; \
>  	echo "Requires.private: $(PKG_CONFIG_REQPRIV)"; \
>  	} > $@

Actually, it occurs to me that this would be better in libs.mk as

PKG_CONFIG_USELIBS := $(sort $(SHLIB_libxen$(LIBNAME)))

in case we gain any further uses of PKG_CONFIG_USELIBS.
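[Editor's note: for readers unfamiliar with the behaviour the hunk relies on, GNU make's $(sort ...) both orders the word list and removes duplicates, so repeated -l flags collapse to one. A minimal illustration with hypothetical library names:]

```shell
# Write a throwaway makefile exercising $(sort ...) on a duplicated list.
printf '%s\n' \
  'SHLIB_libxenfoo := -lxentoollog -lxencall -lxentoollog' \
  'PKG_CONFIG_USELIBS := $(sort $(SHLIB_libxenfoo))' \
  'all:' \
  '	@echo $(PKG_CONFIG_USELIBS)' \
  > /tmp/sort-demo.mk
make -s -f /tmp/sort-demo.mk   # prints: -lxencall -lxentoollog
```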

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 17:04:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 17:04:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37933.70510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khyDV-0003OF-E7; Wed, 25 Nov 2020 17:04:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37933.70510; Wed, 25 Nov 2020 17:04:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khyDV-0003O8-9H; Wed, 25 Nov 2020 17:04:29 +0000
Received: by outflank-mailman (input) for mailman id 37933;
 Wed, 25 Nov 2020 17:04:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FqDi=E7=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1khyDT-0003O3-R8
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 17:04:27 +0000
Received: from mail-yb1-xb42.google.com (unknown [2607:f8b0:4864:20::b42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7527f5a-be48-49be-8110-936b8ddd67bc;
 Wed, 25 Nov 2020 17:04:26 +0000 (UTC)
Received: by mail-yb1-xb42.google.com with SMTP id 2so374711ybc.12
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 09:04:26 -0800 (PST)
X-Inumbo-ID: f7527f5a-be48-49be-8110-936b8ddd67bc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=U7yONu+GErpj3wVA3mUEvd1gZrZu1iMtuB4J5cc4iYs=;
        b=bn+pL/HmrYW2tlvsO08UsmlB+e0sDsIo/gBe6lZBPy5Ml0r7IepVRmwL3Z1msCDTmB
         4Fj8yYJnSSwKpycrMD6jc9mJYcLEOxyjBt+mj/swgeJwfcTqBWFSYbINT99XJh8MBLLG
         BhdJX4URpdAlU1PS41QCV8cX0uycEbKi5uankHMmLYXfRheyb1dBSnJ2lYbkM9jPzYRg
         +YL1Fiv4xli6A/G5oR00+c/fqffNKJdLOgNLmafCTxGe8sUqpvTjraMjrzXLQkd2Vyg7
         6NJIAQ3gm8Ro9XvzXTxxo6aHXEqSB5bdv5UB5bHkEX37ZUG4NR8CwSl4aaovOFcf7q/J
         MTKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=U7yONu+GErpj3wVA3mUEvd1gZrZu1iMtuB4J5cc4iYs=;
        b=YGmuF9b+ZQJbgrsiwfcLyjqu68z6jNRxmsaOtUjJ8jRvd7wNz6Bdq0keLc3i7ZtlAy
         +XbS35Z/L1sk4djFlMPofwJ1apngqpZyJI1RSWot2Hm3EedKgCdA9naHqH00WCCm0R6h
         xYZkO9PRD2BpOjHOOhDWNb096jjqA5BeoQdQA5VdCpe58wShELcwXPCGNVoaaSI8DPw9
         U5VpYIAzAyMpDjuo2FtVqiRpOUOI8Py7temHElod6z3uO7U5g90+O29A3Z+oPamFJxRp
         XG9v8cGZKEtxdIizosk1GSHF1tPaVA9IurtJ48yAZz+7M2fFzd5VmloPNIuXAZzU0U5I
         Dj9Q==
X-Gm-Message-State: AOAM533RUWF7fVHq4CbvHcpYOf7uTrTuYGj/HeC98OmmVKOXvc0HtbWO
	VZwGVneZ9BMOzEmyyeSyTEPv0qFHLmSkZlYjuPY=
X-Google-Smtp-Source: ABdhPJwRDTWwRnnt/vVfXeVU3lUNCXdaAf9CCrzUJdkBRbFdtXrCpJBbeymEiGhAam+E5oqqQjDTbAdkVQMGwErIDPw=
X-Received: by 2002:a25:aac5:: with SMTP id t63mr6307293ybi.22.1606323866493;
 Wed, 25 Nov 2020 09:04:26 -0800 (PST)
MIME-Version: 1.0
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor> <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
 <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com> <20201125082405.1d8c23dc@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <20201125082405.1d8c23dc@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Wed, 25 Nov 2020 18:04:15 +0100
Message-ID: <CANiq72=RuekXf1O6Fxrz2Eend0GtS6=E72P4T2=48SDqVcTChA@mail.gmail.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang
To: Jakub Kicinski <kuba@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, Kees Cook <keescook@chromium.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, Joe Perches <joe@perches.com>, alsa-devel@alsa-project.org, 
	linux-atm-general@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
	LKML <linux-kernel@vger.kernel.org>, Nathan Chancellor <natechancellor@gmail.com>, 
	linux-ide@vger.kernel.org, dm-devel@redhat.com, keyrings@vger.kernel.org, 
	linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
	linux-security-module@vger.kernel.org, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input <linux-input@vger.kernel.org>, 
	Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-watchdog@vger.kernel.org, 
	selinux@vger.kernel.org, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
	linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
	ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
	Linux Memory Management List <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-sctp@vger.kernel.org, 
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
	=?UTF-8?Q?open_list=3AHARDWARE_RANDOM_NUMBER_GENERATOR_CORE_=3Clinux=2Dcrypt?=
	=?UTF-8?Q?o=40vger=2Ekernel=2Eorg=3E=2C_patches=40opensource=2Ecirrus=2Ecom=2C_linux=2Dint?=
	=?UTF-8?Q?egrity=40vger=2Ekernel=2Eorg=2C_target=2Ddevel=40vger=2Ekernel=2Eorg=2C_linux=2D?=
	=?UTF-8?Q?hardening=40vger=2Ekernel=2Eorg=2C_Jonathan_Cameron_=3CJonathan=2ECamero?=
	=?UTF-8?Q?n=40huawei=2Ecom=3E=2C_Greg_KH?= <gregkh@linuxfoundation.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 25, 2020 at 5:24 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> And just to spell it out,
>
> case ENUM_VALUE1:
>         bla();
>         break;
> case ENUM_VALUE2:
>         bla();
> default:
>         break;
>
> is a fairly idiomatic way of indicating that not all values of the enum
> are expected to be handled by the switch statement.

It looks like a benign typo to me -- `ENUM_VALUE2` does not follow the
same pattern as `ENUM_VALUE1`. To me, the presence of the `default`
is what indicates (explicitly) that not everything is handled.
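For illustration, the shape under discussion can be sketched standalone as
follows (the enum and helper names are made up; the kernel itself uses its
`fallthrough` pseudo-keyword rather than the raw attribute used here):

```c
#include <assert.h>

enum example { VALUE1, VALUE2, VALUE3 };

/* Each non-empty case either breaks or annotates the fall-through
 * explicitly; the trailing `default: break;` documents that the
 * remaining enum values are intentionally unhandled. */
static int handle(enum example e)
{
    int n = 0;

    switch (e) {
    case VALUE1:
        n = 1;
        break;
    case VALUE2:
        n = 2;
        __attribute__((fallthrough)); /* silences -Wimplicit-fallthrough */
    default:
        break;
    }

    return n;
}
```

With the annotation in place, Clang no longer has to guess whether the
fall-through into `default` was intended or a missing `break`.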

> Applying a real patch set and then getting a few follow ups the next day
> for trivial coding things like fallthrough missing or static missing,
> just because I didn't have the full range of compilers to check with
> before applying makes me feel pretty shitty, like I'm not doing a good
> job. YMMV.

The number of compilers, checkers, static analyzers, tests, etc. we
use keeps going up. That, indeed, means maintainers will miss more
things (unless maintainers do more work than before). But catching
bugs before they happen is *not* a bad thing.

Perhaps we could encourage more rebasing in -next (while still giving
credit to bots and testers) to avoid having many fix-up commits
afterwards, but that is orthogonal.

I really don't think we should encourage the feeling that a maintainer
is doing a bad job if they don't catch everything on their reviews.
Any review is worth it. Maintainers, in the end, are just the
"guaranteed" reviewers that decide when the code looks reasonable
enough. They should definitely not feel pressured to be perfect.

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:03:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:03:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37942.70528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khz8d-0000N7-Bv; Wed, 25 Nov 2020 18:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37942.70528; Wed, 25 Nov 2020 18:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khz8d-0000N0-82; Wed, 25 Nov 2020 18:03:31 +0000
Received: by outflank-mailman (input) for mailman id 37942;
 Wed, 25 Nov 2020 18:03:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1khz8b-0000Mv-Es
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:03:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khz8Z-0005nH-Px; Wed, 25 Nov 2020 18:03:27 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1khz8Z-0001Ga-EX; Wed, 25 Nov 2020 18:03:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khz8b-0000Mv-Es
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:03:29 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=KuPd5JEX9cu2UuOX3LVhygD/IpKEybGAsFruRkY/ju4=; b=vDIAUBY1VDD4A1gCwF4dEw0oUQ
	9Mh8QokwbxawytLUORdQGVJ9sTKw7NKr3EfGn4ztUnVJc/a/6HqPsV/kClvzURyu3pXdh/0TBC/vI
	xi93fl5QFurVJkvscGxkX+6D5XHIFwgRB4FErFxXJV6bMhLbd590Oo/fmarzfXvQawjE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khz8Z-0005nH-Px; Wed, 25 Nov 2020 18:03:27 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1khz8Z-0001Ga-EX; Wed, 25 Nov 2020 18:03:27 +0000
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Julien Grall <Julien.Grall@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-5-julien@xen.org>
 <9F95F565-8D59-400B-9F15-9ABA0B1FB7FC@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3f6df00c-f287-8abc-ed15-9a6e6180b13c@xen.org>
Date: Wed, 25 Nov 2020 18:03:25 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <9F95F565-8D59-400B-9F15-9ABA0B1FB7FC@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 24/11/2020 18:13, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

>> On 19 Nov 2020, at 19:07, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <julien.grall@arm.com>
>>
>> At the moment, xen_pt_update_entry() only supports mapping at level 3
>> (i.e 4KB mapping). While this is fine for most of the runtime helper,
>> the boot code will require to use superpage mapping.
>>
>> We don't want to allow superpage mapping by default as some of the
>> callers may expect small mappings (i.e populate_pt_range()) or even
>> expect to unmap only a part of a superpage.
>>
>> To keep the code simple, a new flag _PAGE_BLOCK is introduced to
>> allow the caller to enable superpage mapping.
>>
>> As the code doesn't support all the combinations, xen_pt_check_entry()
>> is extended to take into account the cases we don't support when
>> using block mapping:
>>     - Replacing a table with a mapping. This may happen if region was
>>     first mapped with 4KB mapping and then later on replaced with a 2MB
>>     (or 1GB mapping)
>>     - Removing/modify a table. This may happen if a caller try to remove a
>>     region with _PAGE_BLOCK set when it was created without it
>>
>> Note that the current restriction mean that the caller must ensure that
>> _PAGE_BLOCK is consistently set/cleared across all the updates on a
>> given virtual region. This ought to be fine with the expected use-cases.
>>
>> More rework will be necessary if we wanted to remove the restrictions.
>>
>> Note that nr_mfns is now marked const as it is used for flushing the
>> TLBs and we don't want it to be modified.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>
> 
> First I did test the series on Arm and so far it was working properly.

Thanks for the testing and...

> 
> I only have some remarks because even if the code is right, I think
> some parts of the code are not easy to read...

... I am always open to suggestions :).

>> ---
>>
>> This patch is necessary for upcoming changes in the MM code. I would
>> like to remove most of the open-coding update of the page-tables as they
>> are not easy to properly fix/extend. For instance, always mapping
>> xenheap mapping with 1GB superpage is plain wrong because:
>>     - RAM regions are not always 1GB aligned (such as on RPI 4) and we
>>     may end up to map MMIO with cacheable attributes
>>     - RAM may contain reserved regions should either not be mapped
>> ---
>> xen/arch/arm/mm.c          | 87 ++++++++++++++++++++++++++++++--------
>> xen/include/asm-arm/page.h |  4 ++
>> 2 files changed, 73 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 59f8a3f15fd1..af0f12b6e6d3 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -1060,9 +1060,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>> }
>>
>> /* Sanity check of the entry */
>> -static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>> +static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
>> +                               unsigned int flags)
>> {
>> -    /* Sanity check when modifying a page. */
>> +    /* Sanity check when modifying an entry. */
>>      if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
>>      {
>>          /* We don't allow modifying an invalid entry. */
>> @@ -1072,6 +1073,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>>              return false;
>>          }
>>
>> +        /* We don't allow modifying a table entry */
>> +        if ( !lpae_is_mapping(entry, level) )
>> +        {
>> +            mm_printk("Modifying a table entry is not allowed.\n");
>> +            return false;
>> +        }
>> +
>>          /* We don't allow changing memory attributes. */
>>          if ( entry.pt.ai != PAGE_AI_MASK(flags) )
>>          {
>> @@ -1087,7 +1095,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>>              return false;
>>          }
>>      }
>> -    /* Sanity check when inserting a page */
>> +    /* Sanity check when inserting a mapping */
>>      else if ( flags & _PAGE_PRESENT )
>>      {
>>          /* We should be here with a valid MFN. */
>> @@ -1096,18 +1104,28 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>>          /* We don't allow replacing any valid entry. */
>>          if ( lpae_is_valid(entry) )
>>          {
>> -            mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
>> -                      mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
>> +            if ( lpae_is_mapping(entry, level) )
>> +                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
>> +                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
>> +            else
>> +                mm_printk("Trying to replace a table with a mapping.\n");
>>              return false;
>>          }
>>      }
>> -    /* Sanity check when removing a page. */
>> +    /* Sanity check when removing a mapping. */
>>      else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
>>      {
>>          /* We should be here with an invalid MFN. */
>>          ASSERT(mfn_eq(mfn, INVALID_MFN));
>>
>> -        /* We don't allow removing page with contiguous bit set. */
>> +        /* We don't allow removing a table */
>> +        if ( lpae_is_table(entry, level) )
>> +        {
>> +            mm_printk("Removing a table is not allowed.\n");
>> +            return false;
>> +        }
>> +
>> +        /* We don't allow removing a mapping with contiguous bit set. */
>>          if ( entry.pt.contig )
>>          {
>>              mm_printk("Removing entry with contiguous bit set is not allowed.\n");
>> @@ -1126,12 +1144,12 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>> }
>>
>> static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>> -                               mfn_t mfn, unsigned int flags)
>> +                               mfn_t mfn, unsigned int page_order,
>> +                               unsigned int flags)
>> {
>>      int rc;
>>      unsigned int level;
>> -    /* We only support 4KB mapping (i.e level 3) for now */
>> -    unsigned int target = 3;
>> +    unsigned int target = 3 - (page_order / LPAE_SHIFT);
> 
> This is not really straight forward and it would be good to actually explain the computation here or ...

[...]

>> @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
>>
>>      spin_lock(&xen_pt_lock);
>>
>> -    for ( ; addr < addr_end; addr += PAGE_SIZE )
>> +    while ( left )
>>      {
>> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
>> +        unsigned int order;
>> +        unsigned long mask;
>> +
>> +        /*
>> +         * Don't take into account the MFN when removing mapping (i.e
>> +         * MFN_INVALID) to calculate the correct target order.
>> +         *
>> +         * XXX: Support superpage mappings if nr is not aligned to a
>> +         * superpage size.
>> +         */
>> +        mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
>> +        mask |= vfn | left;
>> +
>> +        /*
>> +         * Always use level 3 mapping unless the caller request block
>> +         * mapping.
>> +         */
>> +        if ( likely(!(flags & _PAGE_BLOCK)) )
>> +            order = THIRD_ORDER;
>> +        else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) )
>> +            order = FIRST_ORDER;
>> +        else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) )
>> +            order = SECOND_ORDER;
>> +        else
>> +            order = THIRD_ORDER;
>> +
>> +        rc = xen_pt_update_entry(root, pfn_to_paddr(vfn), mfn, order, flags);
> 
> maybe it would be easier here to pass directly the target instead of the page order.

Stefano suggested the same. For the next version I am planning to 
hardcode the level in the if/else above and then find the order from 
an array similar to level_orders in p2m.c.
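As a rough standalone sketch of the two approaches (LPAE_SHIFT and the
*_ORDER values mirror Xen's definitions; the level_orders table below is
an assumption modelled on the one in p2m.c, not the actual code):

```c
#include <assert.h>

#define LPAE_SHIFT   9                 /* 512 entries per table level */
#define THIRD_ORDER  0                 /* level 3: 4KB mapping  */
#define SECOND_ORDER LPAE_SHIFT        /* level 2: 2MB mapping  */
#define FIRST_ORDER  (2 * LPAE_SHIFT)  /* level 1: 1GB mapping  */

/* Current patch: derive the target level from the page order. */
static unsigned int target_from_order(unsigned int page_order)
{
    return 3 - (page_order / LPAE_SHIFT);
}

/* Suggested rework: pick the level directly in the if/else chain,
 * then look the order up, similar to level_orders in p2m.c. */
static const unsigned int level_orders[4] = {
    3 * LPAE_SHIFT, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER
};
```

Either way, order 0 maps to level 3 (4KB), order 9 to level 2 (2MB) and
order 18 to level 1 (1GB); the lookup table just makes that explicit.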

> 
>>          if ( rc )
>>              break;
>>
>> +        vfn += 1U << order;
>>          if ( !mfn_eq(mfn, INVALID_MFN) )
>> -            mfn = mfn_add(mfn, 1);
>> +            mfn = mfn_add(mfn, 1U << order);
>> +
>> +        left -= (1U << order);
>>      }
>>
>>      /*
>> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
>> index 4ea8e97247c8..de096b0968e3 100644
>> --- a/xen/include/asm-arm/page.h
>> +++ b/xen/include/asm-arm/page.h
>> @@ -79,6 +79,7 @@
>>   * [3:4] Permission flags
>>   * [5]   Page present
>>   * [6]   Only populate page tables
>> + * [7]   Use any level mapping only (i.e. superpages is allowed)
> 
> the comment for the bit is not really logic: any level mapping only

My original implementation was using the bit the other way around: the 
flag set meant we should only use level 3.

But it turned out to be more complicated to implement because runtime 
users (e.g. vmap()) should only be mapped using small pages to avoid 
trouble.

> Wouldn’t it be more clear to name the bit _PAGE_SUPERPAGE_BIT and
> comment it by saying that superpages are allowed ?

I would prefer to keep the name short as the flag will be used in 
combination with others. _PAGE_BLOCK is short and also matches the spec :).

In any case, I will update the description of bit 7 with:

"Superpage mappings are allowed".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:16:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:16:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37952.70564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLb-0001U9-3k; Wed, 25 Nov 2020 18:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37952.70564; Wed, 25 Nov 2020 18:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLa-0001U0-Vx; Wed, 25 Nov 2020 18:16:54 +0000
Received: by outflank-mailman (input) for mailman id 37952;
 Wed, 25 Nov 2020 18:16:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khzLZ-0001Oz-9c
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:16:53 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 617e2fad-6fc2-48fd-8ed6-856b402076a1;
 Wed, 25 Nov 2020 18:16:47 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 91E7E11D4;
 Wed, 25 Nov 2020 10:16:47 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C93D93F23F;
 Wed, 25 Nov 2020 10:16:46 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1khzLZ-0001Oz-9c
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:16:53 +0000
X-Inumbo-ID: 617e2fad-6fc2-48fd-8ed6-856b402076a1
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 617e2fad-6fc2-48fd-8ed6-856b402076a1;
	Wed, 25 Nov 2020 18:16:47 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 91E7E11D4;
	Wed, 25 Nov 2020 10:16:47 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C93D93F23F;
	Wed, 25 Nov 2020 10:16:46 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v4 2/3] xen/pci: solve compilation error on ARM with HAS_PCI enabled.
Date: Wed, 25 Nov 2020 18:16:03 +0000
Message-Id: <2ce402cfae6d90433626bcdc6314e5ee5dda103f.1606326929.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606326929.git.rahul.singh@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>

If mem-sharing, mem-paging, or log-dirty functionality is not enabled
for an architecture when HAS_PCI is enabled, the compiler will throw an
error.

Move the code to an x86-specific file to fix the compilation error.

Also, modify the code to use likely() in place of unlikely() for each
condition so the common case is better optimized.
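As a standalone illustration of the inverted check (likely()/unlikely()
are defined locally here for self-containment; in Xen they come from the
compiler headers, and the predicate parameters below are made up):

```c
#include <assert.h>
#include <stdbool.h>

/* Thin wrappers over __builtin_expect, as in the hypervisor headers. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Old shape: unlikely(a || b || c) on the refusal path; new shape:
 * each restriction is individually expected to be absent, so the
 * permitted (common) case is the predicted one. */
static bool use_permitted(bool sharing, bool paging, bool logdirty)
{
    return likely(!sharing) && likely(!paging) && likely(!logdirty);
}
```

The logic is unchanged; only the branch-prediction hints now match the
expected common case.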

No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v4:
- fixed minor comments

---
 xen/drivers/passthrough/pci.c       |  8 +-------
 xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++++
 xen/include/xen/iommu.h             |  2 ++
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 3c6ab1bcb6..4c21655b7d 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -20,7 +20,6 @@
 #include <xen/iommu.h>
 #include <xen/irq.h>
 #include <xen/param.h>
-#include <xen/vm_event.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -1418,12 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     if ( !is_iommu_enabled(d) )
         return 0;
 
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( d != dom_io &&
-         unlikely(mem_sharing_enabled(d) ||
-                  vm_event_check_ring(d->vm_event_paging) ||
-                  p2m_get_hostp2m(d)->global_logdirty) )
+    if( !arch_iommu_use_permitted(d) )
         return -EXDEV;
 
     /* device_assigned() should already have cleared the device for assignment */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f17b1820f4..cea1032b3d 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -18,6 +18,7 @@
 #include <xen/guest_access.h>
 #include <xen/event.h>
 #include <xen/softirq.h>
+#include <xen/vm_event.h>
 #include <xsm/xsm.h>
 
 #include <asm/hvm/io.h>
@@ -308,6 +309,18 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     return pg;
 }
 
+bool arch_iommu_use_permitted(const struct domain *d)
+{
+    /*
+     * Prevent device assign if mem paging, mem sharing or log-dirty
+     * have been enabled for this domain.
+     */
+    return d == dom_io ||
+           (likely(!mem_sharing_enabled(d)) &&
+            likely(!vm_event_check_ring(d->vm_event_paging)) &&
+            likely(!p2m_get_hostp2m(d)->global_logdirty));
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 191021870f..056eaa09fc 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
 
+bool arch_iommu_use_permitted(const struct domain *d);
+
 #endif /* _IOMMU_H_ */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:16:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:16:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37951.70552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLV-0001R0-PY; Wed, 25 Nov 2020 18:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37951.70552; Wed, 25 Nov 2020 18:16:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLV-0001Qt-Ma; Wed, 25 Nov 2020 18:16:49 +0000
Received: by outflank-mailman (input) for mailman id 37951;
 Wed, 25 Nov 2020 18:16:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khzLU-0001Oz-9Q
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:16:48 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 28b115bf-7771-4d9e-820d-affdbf0e73bb;
 Wed, 25 Nov 2020 18:16:45 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 205EE11D4;
 Wed, 25 Nov 2020 10:16:45 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AD29E3F23F;
 Wed, 25 Nov 2020 10:16:43 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1khzLU-0001Oz-9Q
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:16:48 +0000
X-Inumbo-ID: 28b115bf-7771-4d9e-820d-affdbf0e73bb
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 28b115bf-7771-4d9e-820d-affdbf0e73bb;
	Wed, 25 Nov 2020 18:16:45 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 205EE11D4;
	Wed, 25 Nov 2020 10:16:45 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AD29E3F23F;
	Wed, 25 Nov 2020 10:16:43 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/3] xen/pci: Move x86 specific code to x86 directory.
Date: Wed, 25 Nov 2020 18:16:02 +0000
Message-Id: <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606326929.git.rahul.singh@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>

The passthrough/pci.c file is common to all architectures, but there is
x86-specific code in this file.

Move the x86-specific code to the drivers/passthrough/io.c file to avoid
a compilation error on other architectures.

As drivers/passthrough/io.c is compiled only for x86, move it to the x86
directory and rename it to hvm.c.

No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v4:
- fixed compilation error when CONFIG_HVM is disabled
- removed iommu_update_ire_from_msi from the patch; will send another
  patch to fix it.

---
 xen/drivers/passthrough/Makefile            |  3 -
 xen/drivers/passthrough/pci.c               | 71 +--------------------
 xen/drivers/passthrough/x86/Makefile        |  1 +
 xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++++
 xen/include/xen/pci.h                       |  9 +++
 5 files changed, 77 insertions(+), 73 deletions(-)
 rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)

diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index e973e16c74..cc646612c7 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -6,6 +6,3 @@ obj-$(CONFIG_ARM) += arm/
 obj-y += iommu.o
 obj-$(CONFIG_HAS_PCI) += pci.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
-
-x86-$(CONFIG_HVM) := io.o
-obj-$(CONFIG_X86) += $(x86-y)
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 51e584127e..3c6ab1bcb6 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -14,9 +14,6 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/sched.h>
-#include <xen/pci.h>
-#include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
 #include <xen/list.h>
 #include <xen/prefetch.h>
@@ -24,7 +21,6 @@
 #include <xen/irq.h>
 #include <xen/param.h>
 #include <xen/vm_event.h>
-#include <asm/hvm/irq.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -842,71 +838,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     return ret;
 }
 
-static int pci_clean_dpci_irq(struct domain *d,
-                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct dev_intx_gsi_link *digl, *tmp;
-
-    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
-
-    if ( pt_irq_need_timer(pirq_dpci->flags) )
-        kill_timer(&pirq_dpci->timer);
-
-    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
-    {
-        list_del(&digl->list);
-        xfree(digl);
-    }
-
-    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
-
-    if ( !pt_pirq_softirq_active(pirq_dpci) )
-        return 0;
-
-    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
-
-    return -ERESTART;
-}
-
-static int pci_clean_dpci_irqs(struct domain *d)
-{
-    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
-
-    spin_lock(&d->event_lock);
-    hvm_irq_dpci = domain_get_irq_dpci(d);
-    if ( hvm_irq_dpci != NULL )
-    {
-        int ret = 0;
-
-        if ( hvm_irq_dpci->pending_pirq_dpci )
-        {
-            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
-                 ret = -ERESTART;
-            else
-                 hvm_irq_dpci->pending_pirq_dpci = NULL;
-        }
-
-        if ( !ret )
-            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
-        if ( ret )
-        {
-            spin_unlock(&d->event_lock);
-            return ret;
-        }
-
-        hvm_domain_irq(d)->dpci = NULL;
-        free_hvm_irq_dpci(hvm_irq_dpci);
-    }
-    spin_unlock(&d->event_lock);
-    return 0;
-}
-
 /* Caller should hold the pcidevs_lock */
 static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
                            uint8_t devfn)
@@ -966,7 +897,7 @@ int pci_release_devices(struct domain *d)
     int ret;
 
     pcidevs_lock();
-    ret = pci_clean_dpci_irqs(d);
+    ret = arch_pci_clean_pirqs(d);
     if ( ret )
     {
         pcidevs_unlock();
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index a70cf9460d..69284a5d19 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,3 @@
 obj-y += ats.o
 obj-y += iommu.o
+obj-$(CONFIG_HVM) += hvm.o
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/x86/hvm.c
similarity index 95%
rename from xen/drivers/passthrough/io.c
rename to xen/drivers/passthrough/x86/hvm.c
index 6b1305a3e5..41cfa2e200 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -1036,6 +1036,72 @@ unlock:
     spin_unlock(&d->event_lock);
 }
 
+static int pci_clean_dpci_irq(struct domain *d,
+                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    struct dev_intx_gsi_link *digl, *tmp;
+
+    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
+
+    if ( pt_irq_need_timer(pirq_dpci->flags) )
+        kill_timer(&pirq_dpci->timer);
+
+    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
+    {
+        list_del(&digl->list);
+        xfree(digl);
+    }
+
+    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
+
+    if ( !pt_pirq_softirq_active(pirq_dpci) )
+        return 0;
+
+    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
+
+    return -ERESTART;
+}
+
+int arch_pci_clean_pirqs(struct domain *d)
+{
+    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !is_hvm_domain(d) )
+        return 0;
+
+    spin_lock(&d->event_lock);
+    hvm_irq_dpci = domain_get_irq_dpci(d);
+    if ( hvm_irq_dpci != NULL )
+    {
+        int ret = 0;
+
+        if ( hvm_irq_dpci->pending_pirq_dpci )
+        {
+            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
+                 ret = -ERESTART;
+            else
+                 hvm_irq_dpci->pending_pirq_dpci = NULL;
+        }
+
+        if ( !ret )
+            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+        if ( ret )
+        {
+            spin_unlock(&d->event_lock);
+            return ret;
+        }
+
+        hvm_domain_irq(d)->dpci = NULL;
+        free_hvm_irq_dpci(hvm_irq_dpci);
+    }
+    spin_unlock(&d->event_lock);
+
+    return 0;
+}
+
 /*
  * Note: 'pt_pirq_softirq_reset' can clear the STATE_SCHED before we get to
  * doing it. If that is the case we let 'pt_pirq_softirq_reset' do ref-counting.
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 20a54a5bb4..8e3d4d9454 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -208,4 +208,13 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
 void msixtbl_pt_unregister(struct domain *, struct pirq *);
 void msixtbl_pt_cleanup(struct domain *d);
 
+#ifdef CONFIG_HVM
+int arch_pci_clean_pirqs(struct domain *d);
+#else
+static inline int arch_pci_clean_pirqs(struct domain *d)
+{
+    return 0;
+}
+#endif /* CONFIG_HVM */
+
 #endif /* __XEN_PCI_H__ */
-- 
2.17.1
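The arch_pci_clean_pirqs() hook moved above follows a common Xen pattern: declare the real hook when the feature is configured, and provide a static inline no-op stub otherwise, so common code can call it unconditionally. A minimal, self-contained sketch of that pattern follows; the CONFIG_HVM toggle, struct domain, and release_devices() here are illustrative stand-ins, not the real Xen definitions:

```c
#include <stddef.h>

/* Stand-in for the Kconfig-generated option; not the real Xen header. */
#define CONFIG_HVM 1

/* Stand-in for Xen's struct domain. */
struct domain {
    int id;
};

#ifdef CONFIG_HVM
/* With HVM enabled the hook does real work (elided here). */
static int arch_pci_clean_pirqs(struct domain *d)
{
    (void)d;
    /* ... unbind pIRQs, kill timers, free dpci state ... */
    return 0;
}
#else
/* Without HVM, a static inline no-op keeps call sites #ifdef-free. */
static inline int arch_pci_clean_pirqs(struct domain *d)
{
    (void)d;
    return 0;
}
#endif

/* Common code calls the hook unconditionally, as pci_release_devices()
 * does in the patch: no #ifdef is needed at the call site. */
static int release_devices(struct domain *d)
{
    return arch_pci_clean_pirqs(d);
}
```

The benefit is that the common caller compiles identically on every configuration; when the feature is disabled, the compiler discards the stub entirely.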



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:16:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:16:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37950.70539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLQ-0001PB-HR; Wed, 25 Nov 2020 18:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37950.70539; Wed, 25 Nov 2020 18:16:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLQ-0001P4-EP; Wed, 25 Nov 2020 18:16:44 +0000
Received: by outflank-mailman (input) for mailman id 37950;
 Wed, 25 Nov 2020 18:16:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khzLP-0001Oz-Dz
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:16:43 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f9eaebbc-03ee-46ce-9302-0670bb0b5b97;
 Wed, 25 Nov 2020 18:16:41 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 06C4811D4;
 Wed, 25 Nov 2020 10:16:41 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9138B3F23F;
 Wed, 25 Nov 2020 10:16:39 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 0/3] xen/arm: Make PCI passthrough code non-x86 specific
Date: Wed, 25 Nov 2020 18:16:01 +0000
Message-Id: <cover.1606326929.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch series is v4 of preparatory work to make PCI passthrough code
non-x86 specific.

Rahul Singh (3):
  xen/pci: Move x86 specific code to x86 directory.
  xen/pci: solve compilation error on ARM with HAS_PCI enabled.
  ns16550: Gate all PCI code with CONFIG_X86

 xen/drivers/char/ns16550.c                  | 16 ++---
 xen/drivers/passthrough/Makefile            |  3 -
 xen/drivers/passthrough/pci.c               | 79 +--------------------
 xen/drivers/passthrough/x86/Makefile        |  1 +
 xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++
 xen/drivers/passthrough/x86/iommu.c         | 13 ++++
 xen/include/xen/iommu.h                     |  2 +
 xen/include/xen/pci.h                       |  9 +++
 8 files changed, 101 insertions(+), 88 deletions(-)
 rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:17:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:17:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37955.70576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLg-0001a6-Fc; Wed, 25 Nov 2020 18:17:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37955.70576; Wed, 25 Nov 2020 18:17:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzLg-0001Zv-BZ; Wed, 25 Nov 2020 18:17:00 +0000
Received: by outflank-mailman (input) for mailman id 37955;
 Wed, 25 Nov 2020 18:16:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khzLe-0001Oz-9f
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:16:58 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8db06e9a-3041-4f8a-a772-db9e0c6eee0a;
 Wed, 25 Nov 2020 18:16:50 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BDFEB11D4;
 Wed, 25 Nov 2020 10:16:50 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 715AE3F23F;
 Wed, 25 Nov 2020 10:16:49 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Date: Wed, 25 Nov 2020 18:16:04 +0000
Message-Id: <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606326929.git.rahul.singh@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The NS16550 driver assumes that NS16550 PCI cards are usable if the
architecture supports PCI (i.e. CONFIG_HAS_PCI=y). However, the code is
very x86-focused and will fail to build on Arm (the errors below are
not exhaustive):

ns16550.c: In function ‘ns16550_init_irq’:
ns16550.c:726:21: error: implicit declaration of function ‘create_irq’;
did you mean ‘release_irq’? [-Werror=implicit-function-declaration]
          uart->irq = create_irq(0, false);
                      ^~~~~~~~~~
                      release_irq
ns16550.c:726:21: error: nested extern declaration of ‘create_irq’
[-Werror=nested-externs]
ns16550.c: In function ‘ns16550_init_postirq’:
ns16550.c:768:33: error: ‘mmio_ro_ranges’ undeclared (first use in this
function); did you mean ‘mmio_handler’?
               rangeset_add_range(mmio_ro_ranges, uart->io_base,
                                  ^~~~~~~~~~~~~~
                                  mmio_handler
ns16550.c:768:33: note: each undeclared identifier is reported only once
for each function it appears in
ns16550.c:780:20: error: variable ‘msi’ has initializer but incomplete
type
              struct msi_info msi = {
                     ^~~~~~~~

Enabling support for NS16550 PCI cards on Arm would require more
plumbing in addition to fixing the compilation errors.

Arm systems tend to have a platform UART available, such as an NS16550
or a PL011, so there is little reason to add NS16550 PCI support on Arm
for now.

Guard all remaining PCI code that is not already under an x86-specific
guard with CONFIG_X86.

No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---

Changes in v4:
- As per the discussion, guard all remaining PCI code with CONFIG_X86

---
 xen/drivers/char/ns16550.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 9235d854fe..26e601857a 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -16,7 +16,7 @@
 #include <xen/timer.h>
 #include <xen/serial.h>
 #include <xen/iocap.h>
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
@@ -51,7 +51,7 @@ static struct ns16550 {
     unsigned int timeout_ms;
     bool_t intr_works;
     bool_t dw_usr_bsy;
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
     /* PCI card parameters. */
     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
@@ -66,7 +66,7 @@ static struct ns16550 {
 #endif
 } ns16550_com[2] = { { 0 } };
 
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
 struct ns16550_config {
     u16 vendor_id;
     u16 dev_id;
@@ -256,7 +256,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
 
 static void pci_serial_early_init(struct ns16550 *uart)
 {
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
         return;
 
@@ -355,7 +355,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
 
 static void __init ns16550_init_irq(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
     struct ns16550 *uart = port->uart;
 
     if ( uart->msi )
@@ -397,7 +397,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
     uart->timeout_ms = max_t(
         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
 
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
     if ( uart->bar || uart->ps_bdf_enable )
     {
         if ( uart->param && uart->param->mmio &&
@@ -477,7 +477,7 @@ static void ns16550_suspend(struct serial_port *port)
 
     stop_timer(&uart->timer);
 
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
     if ( uart->bar )
        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
                                   uart->ps_bdf[2]), PCI_COMMAND);
@@ -486,7 +486,7 @@ static void ns16550_suspend(struct serial_port *port)
 
 static void _ns16550_resume(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
     struct ns16550 *uart = port->uart;
 
     if ( uart->bar )
-- 
2.17.1
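The double guard used throughout the patch reduces to a small sketch: the PCI-specific path is compiled only when both Kconfig options are defined, while call sites stay identical on every architecture. The CONFIG_* defines, uart_init_pci(), and uart_init() below are stand-ins for illustration, not the generated autoconf header or the real ns16550 code:

```c
/* Stand-ins for the Kconfig-generated defines; not real Xen build output. */
#define CONFIG_X86 1
#define CONFIG_HAS_PCI 1

/* Tracks whether the PCI-specific path was compiled in and ran. */
static int pci_path_taken;

#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
/* Compiled only when both options are set, mirroring the patch's guards. */
static void uart_init_pci(void)
{
    pci_path_taken = 1;
}
#else
/* On !x86 (e.g. Arm) the PCI path collapses to a no-op. */
static void uart_init_pci(void)
{
    pci_path_taken = 0;
}
#endif

static int uart_init(void)
{
    /* The call site needs no per-architecture #ifdef. */
    uart_init_pci();
    return pci_path_taken;
}
```

With CONFIG_X86 unset, the `#else` branch is compiled instead and the Arm build no longer pulls in the x86-only PCI declarations that caused the errors quoted in the commit message.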



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:22:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:22:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.37988.70588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzQV-0002hV-Ay; Wed, 25 Nov 2020 18:21:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 37988.70588; Wed, 25 Nov 2020 18:21:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzQV-0002hO-7A; Wed, 25 Nov 2020 18:21:59 +0000
Received: by outflank-mailman (input) for mailman id 37988;
 Wed, 25 Nov 2020 18:21:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqAN=E7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1khzQU-0002gf-7T
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:21:58 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.64]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9f07653-0e89-47d5-8a77-e7bfd432125b;
 Wed, 25 Nov 2020 18:21:55 +0000 (UTC)
Received: from MRXP264CA0002.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:15::14)
 by PR3PR08MB5819.eurprd08.prod.outlook.com (2603:10a6:102:92::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Wed, 25 Nov
 2020 18:21:53 +0000
Received: from VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:15:cafe::d) by MRXP264CA0002.outlook.office365.com
 (2603:10a6:500:15::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Wed, 25 Nov 2020 18:21:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT025.mail.protection.outlook.com (10.152.18.74) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Wed, 25 Nov 2020 18:21:52 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Wed, 25 Nov 2020 18:21:52 +0000
Received: from 605a731f6680.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A8B4863E-D059-4552-95C1-0BAC54E59237.1; 
 Wed, 25 Nov 2020 18:21:46 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 605a731f6680.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 25 Nov 2020 18:21:46 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6073.eurprd08.prod.outlook.com (2603:10a6:10:1f7::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Wed, 25 Nov
 2020 18:21:43 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Wed, 25 Nov 2020
 18:21:43 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DLfhbrK9LYwE2RBPiYt0z6OHpay32aY/DJpawgIffhg=;
 b=dwbnKzNpQxHFwFPds7E/9iWmXcgJZ8Y6jYLjoU/sLxqxAcqF85LBFf2814ZCVykt67gNbE8InYhChXcXzAirp5ybD+/Ke1ieRWmzLJ95l5D1RzN+3SM21blBsTCbUOxf7hrl7QgRqAbguuw3MBta8zM0ki/DzSX2ZW2+sIbGCPE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f6d0924bc956bd91
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I5xd5mm9DVv8+AGK1kH6cofZjW9bWXQGXgB+/ipUimnGse0/EcBDAmAb4eB7gOqoQ7ddGLQpiSO/pvuSSNGgH83TTKSDbbqkW7yq4IXFN7D0yafC1OL8dyMJdC3vSJ9jX5c5FUYUiSMehCbo/5UStlckXqlrHpr3hO8nurCkJ18YsHZHQUJv1HXHxH3DDejMi90vSovJDfDzwcPLUEaCt1wb5IygAxgMk8aEKCiEXEJ6JS4wN7Ro/wAxZ6CdQ8of7uCCUxIEhao4jy92kQVfxZH5z+GtQ45JB0u/HoKW8cc17Ncz/QuzZYSVOLZ+Ii8Saq1XP/NUzncoLzWvyaYW+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DLfhbrK9LYwE2RBPiYt0z6OHpay32aY/DJpawgIffhg=;
 b=exqbBk+iAUeW2uSyMfYbRJdQfvwbgKbc6G4ccRlXPmZ7RtwmvkawB9yhVgunD4ys1Tcwrrad43dRZWTF9tzydGDlJyL+//Bij8yw3RP/8sTLm3D1Drq28qXvjrH3R9dFnaC2cZa33baKk5LG3ebhpQOdot2ZCloCEU0CDjvLPKN5Q/jaFA6cHiL29aKvaqRFNx9t7EBdw26kjRQ4Grkf5YT1VA41/ju1jk3IEqRKSVr8cMvGPnNqxhK7cCAv2sl8Ad85/CtyW8xoDTb1tI35Vi8KcJ0r41HGlzylYno2ACGm3nXnuQSLy4jxrMcIqkDf6jAUFDBD9lqlAx+JIDk4lw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Topic: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Index: AQHWw1cvjUPnGDPA302zQ9SdCrcARanZKPcA
Date: Wed, 25 Nov 2020 18:21:43 +0000
Message-ID: <A24BBAFF-2D6C-448A-955E-4471902C6413@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
In-Reply-To:
 <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d7e42557-e2ae-41ca-f08e-08d8916efbbe
x-ms-traffictypediagnostic: DBBPR08MB6073:|PR3PR08MB5819:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB581974A7BA206E710250EEE9FCFA0@PR3PR08MB5819.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <78A05B04AAAB4F4380AD1EB22ED5E94B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6073
Original-Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	11da004b-5af1-4d12-3f26-08d8916ef620
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MnnBF16t7ouaKUpECrBFQRUdQmC8L56JssJIQW3nlkusOfxQuvh5ItEMEM0hvQC9kR3KMR99CyRCAnCkZlBI51CD+MM2hVKNhPDpzwtr2bW7mIIzPROSTncmgukI1ieKgy1uU1JEi8tfDbgkQa8UpxaDUe2rhi7fkiwFTdz3UKxW300+jDgo/XU0OWF0z5Fqkb9w8q6g7NNRcyQmMl56mjP55k/26PZr64mIP3Pc0SMugD/SsBHQJw9doprXoputCIyO/rJfRvCoGt7jMqmNK8tt5Uaqz+HikLknqBByNIYQzSCwKRpDy7VJE+DApqnshO0/N3cKBqpQIXrFy7fmqVo6cuajJEXHGVafc+LrwnFCPJl//IaZmWcvbvb2YbTscXHVeRW0588kMIJFujYrYg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(39850400004)(396003)(46966005)(47076004)(26005)(82740400003)(8936002)(316002)(33656002)(70586007)(6486002)(81166007)(356005)(36756003)(70206006)(6512007)(2616005)(6916009)(336012)(8676002)(186003)(2906002)(54906003)(82310400003)(4326008)(86362001)(53546011)(478600001)(5660300002)(6506007)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Nov 2020 18:21:52.7230
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d7e42557-e2ae-41ca-f08e-08d8916efbbe
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5819

Hello Julien,

> On 25 Nov 2020, at 6:16 pm, Rahul Singh <rahul.singh@arm.com> wrote:
> 
> The NS16550 driver is assuming that NS16550 PCI card are usable if the
> architecture supports PCI (i.e. CONFIG_HAS_PCI=y). However, the code is
> very x86 focus and will fail to build on Arm (/!\ it is not all the
> errors):
> 
> ns16550.c: In function ‘ns16550_init_irq’:
> ns16550.c:726:21: error: implicit declaration of function ‘create_irq’;
> did you mean ‘release_irq’? [-Werror=implicit-function-declaration]
>          uart->irq = create_irq(0, false);
>                      ^~~~~~~~~~
>                      release_irq
> ns16550.c:726:21: error: nested extern declaration of ‘create_irq’
> [-Werror=nested-externs]
> ns16550.c: In function ‘ns16550_init_postirq’:
> ns16550.c:768:33: error: ‘mmio_ro_ranges’ undeclared (first use in this
> function); did you mean ‘mmio_handler’?
>               rangeset_add_range(mmio_ro_ranges, uart->io_base,
>                                  ^~~~~~~~~~~~~~
>                                  mmio_handler
> ns16550.c:768:33: note: each undeclared identifier is reported only once
> for each function it appears in
> ns16550.c:780:20: error: variable ‘msi’ has initializer but incomplete
> type
>              struct msi_info msi = {
>                     ^~~~~~~~
> 
> Enabling support for NS16550 PCI card on Arm would require more plumbing
> in addition to fixing the compilation error.
> 
> Arm systems tend to have platform UART available such as NS16550, PL011.
> So there are limited reasons to get NS16550 PCI support for now on Arm.
> 
> Guard all remaining PCI code that is not under x86 flag with CONFIG_X86.
> 
> No functional change intended.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Sorry, I missed adding the Signed-off-by to the commit msg. I will send the next version once the review is done.
Signed-off-by: Julien Grall <jgrall@amazon.com>

Regards,
Rahul
> ---
> 
> Changes in v4:
> - As per the discussion guard all remaining PCI code with CONFIG_X86
> 
> ---
> xen/drivers/char/ns16550.c | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 9235d854fe..26e601857a 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
> #include <xen/timer.h>
> #include <xen/serial.h>
> #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
> #include <xen/pci.h>
> #include <xen/pci_regs.h>
> #include <xen/pci_ids.h>
> @@ -51,7 +51,7 @@ static struct ns16550 {
>     unsigned int timeout_ms;
>     bool_t intr_works;
>     bool_t dw_usr_bsy;
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>     /* PCI card parameters. */
>     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
>     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
> @@ -66,7 +66,7 @@ static struct ns16550 {
> #endif
> } ns16550_com[2] = { { 0 } };
> 
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
> struct ns16550_config {
>     u16 vendor_id;
>     u16 dev_id;
> @@ -256,7 +256,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
> 
> static void pci_serial_early_init(struct ns16550 *uart)
> {
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>         return;
> 
> @@ -355,7 +355,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
> 
> static void __init ns16550_init_irq(struct serial_port *port)
> {
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>     struct ns16550 *uart = port->uart;
> 
>     if ( uart->msi )
> @@ -397,7 +397,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
>     uart->timeout_ms = max_t(
>         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
> 
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>     if ( uart->bar || uart->ps_bdf_enable )
>     {
>         if ( uart->param && uart->param->mmio &&
> @@ -477,7 +477,7 @@ static void ns16550_suspend(struct serial_port *port)
> 
>     stop_timer(&uart->timer);
> 
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>     if ( uart->bar )
>        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>                                    uart->ps_bdf[2]), PCI_COMMAND);
> @@ -486,7 +486,7 @@ static void ns16550_suspend(struct serial_port *port)
> 
> static void _ns16550_resume(struct serial_port *port)
> {
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>     struct ns16550 *uart = port->uart;
> 
>     if ( uart->bar )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 18:57:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 18:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38012.70603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzyx-0005YF-83; Wed, 25 Nov 2020 18:57:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38012.70603; Wed, 25 Nov 2020 18:57:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1khzyx-0005Y8-4L; Wed, 25 Nov 2020 18:57:35 +0000
Received: by outflank-mailman (input) for mailman id 38012;
 Wed, 25 Nov 2020 18:57:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1khzyv-0005Y3-Rl
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 18:57:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32a09210-cc62-428d-9597-cab54c39a125;
 Wed, 25 Nov 2020 18:57:33 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E52C9206B7;
 Wed, 25 Nov 2020 18:57:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606330652;
	bh=YO+kG2LVM3sApEzWV/IU7WRu36EtZFAtR4zZS2rn9JI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bm3+oASnnYy2wFcTSIRTtaTIyVLH/5XiWa6hNme9n74djmfmaVl0N5/LeyvKcSygU
	 yneMbxnlOeEgn90pboMzs7CUFagKFm09RZbhbCvfOSkyXBT9OJqW/psR9DM0iLn9X1
	 YqeMDbtTQJQHoVOYzpL1mUjd6WUpqO+HW8oD7ERg=
Date: Wed, 25 Nov 2020 10:57:31 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Roman Shaposhnik <roman@zededa.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen on RP4
In-Reply-To: <X73owDP0UXx+lvJd@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com> <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com> <X73ghKgQEXLv2z2p@mattapan.m5p.com> <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
 <X73owDP0UXx+lvJd@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 24 Nov 2020, Elliott Mitchell wrote:
> On Tue, Nov 24, 2020 at 08:45:32PM -0800, Roman Shaposhnik wrote:
> > On Tue, Nov 24, 2020 at 8:41 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > >
> > > On Tue, Nov 24, 2020 at 08:01:32PM -0800, Roman Shaposhnik wrote:
> > > > On Tue, Nov 24, 2020 at 7:37 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > > Presently I'm using a 5.8 kernel with your patches and haven't seen
> > > > > graphical output under Xen with either boot stack.  I've confirmed fully
> > > > > operational graphics without Xen on Tianocore, I've confirmed operational
> > > > > virtual terminals with U-Boot and not Xen.
> > > > >
> > > > > I had been planning to wait a bit before moving to 5.9, but if that is
> > > > > the crucial ingredient I'll move early.
> > > >
> > > > We're still using 5.4 -- but it seems that the next LTS 5.10 is also functional.
> > > >
> > > > I can bet $10 whatever it is -- it is DT related ;-)
> > >
> > > Given how many of the pieces I'm assembling are alpha or beta level, I
> > > estimate a 50:50 chance on that.  Good odds it is device-tree, but good
> > > odds I grabbed a bad version of $something.
> > >
> > > I mostly wanted to know whether I was in completely uncharted territory
> > > and needed to wait for others to catch up, versus merely working in a
> > > situation where support is funky and I'm at an unknown location in
> > > charted territory.
> > >
> > > I'll be keeping the Tianocore setup available since Xen on ARM really
> > > /should/ allow ACPI...
> > 
> > I don't think you're in uncharted territory -- so perhaps a bit of debugging left.
> > 
> > And, of course, always feel free to compare what we do -- the image is
> > a docker pull away.
> 
> Actually, since device-tree is very much on my list of concerns, what is
> your Xen boot process setup like?
> 
> Presently as previously mentioned I'm trying for
> U-Boot -> GRUB/EFI -> Xen.  According to the information I currently have
> the device-trees are often tied pretty closely to the kernel.  I'm also
> using GRUB 2.04 since that has proper support for loading Xen on ARM.
> 
> The section of grub.cfg for Linux is roughly:
>     linux /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
>     devicetree /boot/dtb-5.8.10-2rp4-6.1.7
>     initrd /boot/initrd.img-5.8.10-2rp4-6.1.7
> 
> My testing section for Xen is:
>     xen_hypervisor /boot/xen-4.14-arm64.efi
>     xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
>     devicetree /boot/dtb-5.8.10-2rp4-6.1.7
>     xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7
> 
> I've frankly got no idea how to ensure the correct device-tree is passed
> to Xen.  Is GRUB's `devicetree` command correct when loading Xen?  Is a
> device-tree matched to the Linux kernel appropriate for Xen?
> 
> (I'm guessing the second is "yes", but the first I don't have a clue)

Yes, devicetree is correct. I have not used the graphical output, so I
cannot help you there, but yes, the best bet is to use the devicetree
that comes with the kernel.

One thing I noticed is that you are missing some of the command line
arguments for Xen and Linux in your grub config. For instance on the Xen
line you want to have something like:

    dom0_mem=1024M console=dtuart sync_console

And on the Linux line you might want to have:

    console=tty0 console=hvc0
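Putting those arguments together with the testing section quoted above, the full Xen entry would look roughly like this. The paths and UUID are Elliott's examples, and the dom0_mem value is only illustrative; tune it to your board and workload:

```
menuentry "Xen 4.14 / Linux 5.8" {
    # Xen's own arguments go on the xen_hypervisor line
    xen_hypervisor /boot/xen-4.14-arm64.efi dom0_mem=1024M console=dtuart sync_console
    # dom0 kernel arguments go on its xen_module line
    xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 console=tty0 console=hvc0 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
    devicetree /boot/dtb-5.8.10-2rp4-6.1.7
    xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7
}
```

Note `console=hvc0` is what lets dom0's console reach you through Xen.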


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 19:16:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 19:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38022.70615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki0HO-0007RF-SE; Wed, 25 Nov 2020 19:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38022.70615; Wed, 25 Nov 2020 19:16:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki0HO-0007R8-O8; Wed, 25 Nov 2020 19:16:38 +0000
Received: by outflank-mailman (input) for mailman id 38022;
 Wed, 25 Nov 2020 19:16:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki0HM-0007R0-On; Wed, 25 Nov 2020 19:16:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki0HM-0007JC-IO; Wed, 25 Nov 2020 19:16:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki0HM-0006id-Bb; Wed, 25 Nov 2020 19:16:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ki0HM-0001DK-B6; Wed, 25 Nov 2020 19:16:36 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C+m/eOFfNmFh1atl/kgBr4SvMAPypoIQmvzDGD3RSb8=; b=RLiwPKw6x2ylpRm0Aalw0M9QNj
	jZUY0ueGPHDVzaFBDsEitb98wwG3SPY0ay7H43QIvzms92vMMc0BlwJ94c0xUqqiDpm13OUZb3Mog
	wkhH3d83dVyTQvqsHYzZDZvqekSgwZQa204YArwrCs3v74/MiivkD4NBAPZpsKa1Uewo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157009-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157009: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
X-Osstest-Versions-That:
    xen=fd7479b9aec25885cc17d33b326b9babae59faee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 19:16:36 +0000

flight 157009 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157009/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593
baseline version:
 xen                  fd7479b9aec25885cc17d33b326b9babae59faee

Last test of basis   157006  2020-11-25 12:00:29 Z    0 days
Testing same since   157009  2020-11-25 15:00:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fd7479b9ae..181f2c224c  181f2c224ccd0a2900d6ae94ec390a546731f593 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 19:45:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 19:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38031.70630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki0jb-0001kR-D1; Wed, 25 Nov 2020 19:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38031.70630; Wed, 25 Nov 2020 19:45:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki0jb-0001kK-9N; Wed, 25 Nov 2020 19:45:47 +0000
Received: by outflank-mailman (input) for mailman id 38031;
 Wed, 25 Nov 2020 19:45:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=al8A=E7=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1ki0jZ-0001kF-Ax
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 19:45:45 +0000
Received: from mail-qt1-x835.google.com (unknown [2607:f8b0:4864:20::835])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 606e9579-c401-44e0-8626-482e6a1946a7;
 Wed, 25 Nov 2020 19:45:44 +0000 (UTC)
Received: by mail-qt1-x835.google.com with SMTP id e60so2477676qtd.3
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 11:45:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=gPw0ACzyIzb+IpR0WmBECb1QTi2xMI8wSnaLcjiD4tE=;
        b=ViRZdOxP5o5CqYvuoj2SyWAl4LFqhODv/ovuIALUUnpS1NoIusfd0bTodE35sS7ptT
         hVSMYZaJuUIfQmH/FQbUJr53SMuOuIxZi/QMbkb0XwJHqDJYdPYyh9jUPYxkAciQ/BEH
         wvP1DPa7LSjEyDjmkFz9D9q7IsoBIlZgXmvkjpKRX+q8+bezn9Rg6YLS09wcRXm8UZ/3
         Ju6pe+TEir8qJrg122ha3skQeHItytAa5+3jcNy6GB/Vwhij1W69pVp0HfcjiEyuAW+b
         r6TANJldEgqfFlm8QWqXS9WvpdyED/S7AMGd9zB4oRjOLEQYuYPE5XySDAzOCFYhyvuX
         XU3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=gPw0ACzyIzb+IpR0WmBECb1QTi2xMI8wSnaLcjiD4tE=;
        b=YHjb5ueFb12CP8sU5sRPCFEHIXr83JJPhwROyOySL+Hx97XNPunyUzFkgOc2YnuS3s
         bP2Vg9cykJur6glLwiSDREGZtC0mkD17dAk62Gt1fTfC9+8Bl2ByHM3TEjOfHKnroOdT
         FXZMgw/a/77owf1gYIwKCiPKtkQ3wu6Tc/Xdtun9m53NWQFarpruIXggfQ/tSmUxKPhx
         DyrHLHb2Pzui0B3BSSKX+eJlE7Mdyiq/RsQOf03Lytkc27oYhldzSq/XMeAucs6Z6d5M
         hIeLPUjRFrfi38yYB8jedgijDTF3qGsPW55SimXriXsGuJC5u5P+W2dF/mpLgMAf73o7
         bnpw==
X-Gm-Message-State: AOAM530yZJv15lh1KptT4YxB2jbSRHXYYSCOA/syDbmHGl7q5oVWTG2l
	t0gIAuh5LHpOUUCQ0+0+/WGcbOBf9SQDlRskLVHLkg==
X-Google-Smtp-Source: ABdhPJySNUFhgcgvyA+OvIfe1tjVjuNGPil4H0fB1F8AfloXcwGTbG1TPAqwdh9EKb6zy3Ixj9Jnfgu7KectkipJkOg=
X-Received: by 2002:aed:2e67:: with SMTP id j94mr455786qtd.113.1606333543991;
 Wed, 25 Nov 2020 11:45:43 -0800 (PST)
MIME-Version: 1.0
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com> <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com> <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
 <X73owDP0UXx+lvJd@mattapan.m5p.com>
In-Reply-To: <X73owDP0UXx+lvJd@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 25 Nov 2020 11:45:32 -0800
Message-ID: <CAMmSBy90-bwKa+Cm007k793CEM5RSZRHpZimkO=5eyWAm+bKUQ@mail.gmail.com>
Subject: Re: Xen on RP4
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Nov 24, 2020 at 9:17 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Tue, Nov 24, 2020 at 08:45:32PM -0800, Roman Shaposhnik wrote:
> > On Tue, Nov 24, 2020 at 8:41 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > >
> > > On Tue, Nov 24, 2020 at 08:01:32PM -0800, Roman Shaposhnik wrote:
> > > > On Tue, Nov 24, 2020 at 7:37 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > > > > Presently I'm using a 5.8 kernel with your patches and haven't seen
> > > > > graphical output under Xen with either boot stack.  I've confirmed fully
> > > > > operational graphics without Xen on Tianocore, I've confirmed operational
> > > > > virtual terminals with U-Boot and not Xen.
> > > > >
> > > > > I had been planning to wait a bit before moving to 5.9, but if that is
> > > > > the crucial ingredient I'll move early.
> > > >
> > > > We're still using 5.4 -- but it seems that the next LTS 5.10 is also functional.
> > > >
> > > > I can bet $10 whatever it is -- it is DT related ;-)
> > >
> > > Given how many of the pieces I'm assembling are alpha or beta level, I
> > > estimate a 50:50 chance on that.  Good odds it is device-tree, but good
> > > odds I grabbed a bad version of $something.
> > >
> > > I mostly wanted to know whether I was in completely uncharted territory
> > > and needed to wait for others to catch up, versus merely working in a
> > > situation where support is funky and I'm at an unknown location in
> > > charted territory.
> > >
> > > I'll be keeping the Tianocore setup available since Xen on ARM really
> > > /should/ allow ACPI...
> >
> > I don't think you're in uncharted territory -- so perhaps a bit of debugging left.
> >
> > And, of course, always feel free to compare what we do -- the image is
> > a docker pull away.
>
> Actually, since device-tree is very much on my list of concerns, what is
> your Xen boot process setup like?
>
> Presently as previously mentioned I'm trying for
> U-Boot -> GRUB/EFI -> Xen.

Exactly the same. Here's what we put on a vfat partition:
     https://github.com/lf-edge/eve/tree/master/pkg/u-boot/rpi
and here's how u-boot is built:
     https://github.com/lf-edge/eve/blob/master/pkg/u-boot/Dockerfile

> According to the information I currently have
> the device-trees are often tied pretty closely to the kernel.  I'm also
> using GRUB 2.04 since that has proper support for loading Xen on ARM.

Yes. Our DT here:
    https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/bcm2711-rpi-4-b.dtb
came from an honest build of our kernel (our build is still in flux -- hence
a quick hack of keeping a blob):
    https://github.com/lf-edge/eve/blob/master/pkg/kernel/Dockerfile#L154

> The section of grub.cfg for Linux is roughly:
>     linux /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
>     devicetree /boot/dtb-5.8.10-2rp4-6.1.7
>     initrd /boot/initrd.img-5.8.10-2rp4-6.1.7
>
> My testing section for Xen is:
>     xen_hypervisor /boot/xen-4.14-arm64.efi
>     xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
>     devicetree /boot/dtb-5.8.10-2rp4-6.1.7
>     xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7

Roughly the same -- but see Stefano's comment. More here:
    https://github.com/lf-edge/eve/blob/master/pkg/grub/rootfs.cfg

> I've frankly got no idea how to ensure the correct device-tree is passed
> to Xen.  Is GRUB's `devicetree` command correct when loading Xen?  Is a
> device-tree matched to the Linux kernel appropriate for Xen?
>
> (I'm guessing the second is "yes", but the first I don't have a clue)

While you can use `devicetree` in GRUB, e.g.:
    https://github.com/lf-edge/eve/blob/master/pkg/grub/rootfs.cfg#L207
on the EVE side we've only ever used it as an emergency override.

The happy-path boot sequence preserves the DT that the RPi bootloader
makes available to u-boot, and it gets passed down the chain without
any component having to do anything.
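For reference, that happy path corresponds to a grub.cfg menuentry with no
`devicetree` line at all. This is an illustrative sketch reusing the paths
from the quoted config above, not EVE's actual entry:

```
menuentry "Xen on RPi4" {
    # No `devicetree` command: the DT the firmware handed to u-boot
    # (and u-boot handed to GRUB) is passed through to Xen unchanged.
    xen_hypervisor /boot/xen-4.14-arm64.efi
    xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
    xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7
}
```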

Hope this helps.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 19:48:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 19:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38037.70642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki0mK-0001xX-Rm; Wed, 25 Nov 2020 19:48:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38037.70642; Wed, 25 Nov 2020 19:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki0mK-0001xQ-OE; Wed, 25 Nov 2020 19:48:36 +0000
Received: by outflank-mailman (input) for mailman id 38037;
 Wed, 25 Nov 2020 19:48:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ki0mJ-0001xL-AU
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 19:48:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 684b9eb1-8476-449e-b0e2-49126d638c05;
 Wed, 25 Nov 2020 19:48:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C2F3C2083E;
 Wed, 25 Nov 2020 19:48:32 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1ki0mJ-0001xL-AU
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 19:48:35 +0000
X-Inumbo-ID: 684b9eb1-8476-449e-b0e2-49126d638c05
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 684b9eb1-8476-449e-b0e2-49126d638c05;
	Wed, 25 Nov 2020 19:48:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id C2F3C2083E;
	Wed, 25 Nov 2020 19:48:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606333713;
	bh=9nqklDmA0mEpAAPaL+Az3Xe+UfoaEq3uC3vSHgdwAwI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pD1nuOWse+aHf5Aec5Aq/8ANi2xfhoiN9kQdYOqVPRz7NJVamS9bbr7l9GMefdd71
	 Q1X+5IOhbEK1BEO/1iFPCrNn+XVdOv1BNgbYg9zTq1ODtw+L1BFC+tNAJkcr0bTTMi
	 RdFKHpxHZXyQ+Aydt5IjrMpr2ecvh9imqCM2aHHg=
Date: Wed, 25 Nov 2020 11:48:31 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for
 grant table allocations
In-Reply-To: <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
Message-ID: <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com> <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com> <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org> <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Nov 2020, Jan Beulich wrote:
> On 25.11.2020 13:15, Julien Grall wrote:
> > On 23/11/2020 14:23, Jan Beulich wrote:
> >> All of the array allocations in grant_table_init() can exceed a page's
> >> worth of memory, which xmalloc()-based interfaces aren't really suitable
> >> for after boot. We also don't need any of these allocations to be
> >> physically contiguous. Introduce interfaces dynamically switching
> >> between xmalloc() et al and vmalloc() et al, based on requested size,
> >> and use them instead.
> >>
> >> All the wrappers in the new header get cloned mostly verbatim from
> >> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
> >> for sizes and to unsigned int for alignments.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> v2: Actually edit a copy-and-pasted comment in xvmalloc.h which was
> >>      meant to be edited from the beginning.
> >> ---
> >> I'm unconvinced of the mentioning of "physically contiguous" in the
> >> comment at the top of the new header: I don't think xmalloc() provides
> >> such a guarantee. Any use assuming so would look (latently) broken to
> >> me.
> > 
> > I haven't had the chance to reply to the first version about this. So I 
> > will reply here to avoid confusion.
> > 
> > I can at least spot one user in Arm that would use xmalloc() that way 
> > (see the allocation of itt_addr in arch/arm/gic-v3-its.c).
> 
> And I surely wouldn't have spotted this, even if I had tried
> to find "offenders", i.e. as said before not wanting to alter
> the behavior of existing code (beyond the explicit changes
> done here) was ...
> 
> > AFAIK, the memory is for the sole purpose of the ITS and should not be 
> > accessed by Xen. So I think we can replace it with a new version of 
> > alloc_domheap_pages().
> > 
> > However, I still question the usefulness of introducing yet another way 
> > to allocate memory (we already have alloc_xenheap_pages(), xmalloc(), 
> > alloc_domheap_pages(), vmap()) if you think users cannot rely on 
> > xmalloc() to allocate memory physically contiguous.
> 
> ... the reason to introduce a separate new interface. Plus of
> course this parallels what Linux has.
> 
> > It definitely makes it more difficult to figure out when to use 
> > xmalloc() vs xvmalloc().
> 
> I don't see the difficulty:
> - if you need physically contiguous memory, use alloc_xen*_pages(),
> - if you know the allocation size is always no more than a page,
>   use xmalloc(),

What if you need memory that is physically contiguous but doesn't come in
a power-of-two number of pages, for instance 5200 bytes?

If xmalloc can't guarantee physically contiguous allocations, we need
something else that provides physical contiguity at finer than page
granularity, right?

The other issue is semantics. If xmalloc is unable to allocate more than
a page of contiguous memory, then it is identical to vmalloc from the
caller's point of view: both xmalloc and vmalloc return a virtual
address for an allocation that might not be physically contiguous.

Maybe we should get rid of xmalloc entirely and improve the
implementation of vmalloc so that it falls back to xmalloc for
sub-page allocations. Which in fact is almost the same thing that you
did.


> - if you know the allocation size is always more than a page, use
>   vmalloc(),
> - otherwise use xvmalloc(). Exceptions may of course apply, i.e.
> this is just a rule of thumb.
> 
> > I would like to hear an opinion from the other maintainers.
> 
> Let's hope at least one will voice theirs.

If we take a step back, I think we only really need two memory
allocators:

1) one that allocates physically contiguous memory
2) one that allocates non-physically contiguous memory

That's it, right?

In addition to that, I understand it could be convenient to have a little
wrapper that automatically chooses between 1) and 2) depending on
circumstances.

But if the circumstance is just size < PAGE_SIZE then I don't think we
need any convenience wrappers: we should just be able to call 2), which
is vmalloc, once we improve the vmalloc implementation.

Or do you see any reasons to keep the current vmalloc implementation as
is for sub-page allocations?
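The size-based dispatch being debated in this thread can be sketched in C.
Here fake_xmalloc()/fake_vmalloc() are hypothetical stand-ins (both plain
malloc()), not Xen's real allocators, and the cutoff is simplified to a
fixed PAGE_SIZE:

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Illustration only: records which path the last call took. */
enum alloc_path { PATH_XMALLOC, PATH_VMALLOC };
static enum alloc_path last_path;

/* Hypothetical stand-ins for xmalloc()/vmalloc(); both are malloc() here. */
static void *fake_xmalloc(size_t size)
{
    last_path = PATH_XMALLOC;
    return malloc(size);
}

static void *fake_vmalloc(size_t size)
{
    last_path = PATH_VMALLOC;
    return malloc(size);
}

/*
 * The proposed rule of thumb: allocations of at most one page go to
 * xmalloc(); anything larger goes to vmalloc(), so callers never
 * depend on physical contiguity beyond a single page.
 */
static void *xvmalloc_sketch(size_t size)
{
    return size <= PAGE_SIZE ? fake_xmalloc(size) : fake_vmalloc(size);
}
```

Note that the 5200-byte example above would take the vmalloc() path in this
sketch, which is exactly the case where a caller needing physical contiguity
is left without a suitable interface.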


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 20:19:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 20:19:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38048.70657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki1GL-0004mc-DP; Wed, 25 Nov 2020 20:19:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38048.70657; Wed, 25 Nov 2020 20:19:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki1GL-0004mV-9z; Wed, 25 Nov 2020 20:19:37 +0000
Received: by outflank-mailman (input) for mailman id 38048;
 Wed, 25 Nov 2020 20:19:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xEKP=E7=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1ki1GJ-0004mQ-JD
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 20:19:35 +0000
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92e952cb-6981-4854-aa53-d58b0626613c;
 Wed, 25 Nov 2020 20:19:34 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id u4so5902611qkk.10
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 12:19:34 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id w54sm446145qtb.0.2020.11.25.12.19.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Nov 2020 12:19:33 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xEKP=E7=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1ki1GJ-0004mQ-JD
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 20:19:35 +0000
X-Inumbo-ID: 92e952cb-6981-4854-aa53-d58b0626613c
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 92e952cb-6981-4854-aa53-d58b0626613c;
	Wed, 25 Nov 2020 20:19:34 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id u4so5902611qkk.10
        for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 12:19:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=wTpG1f7I5Mn7omq14ZEE1b1D1RGafIOA8Nh3enwy9Pw=;
        b=dhUspYM1wMwx4g0p349xFh+cvmkFpUHO1fMAZy5dL/oqd7kSU1ym0YtWGCwe2K5BNw
         aVwhhCckrBdx67CbATI/FFwUOrbyBQxEavleHAqEAzxG3D9N0conr/HIspxr9fzB6SfX
         i43+lRwnKCGUgJA9cRAA9Ia9ro51DDhqCV8/s5EXlHTEJAjPYAn71BuXgNVEaOWGHhM5
         mSZeoeXij9s0EAWe8DhLGkPD+d8qJ0w9uk4zhr+1KpPzvTLi317obMTNtjWvqw22AAjn
         KI0qVXs/lejTVRHvWB2yfTW87bz2pPoM76U/jKdBe6vlpVASMEZRlI5rvHT10TKYbrfB
         7MEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=wTpG1f7I5Mn7omq14ZEE1b1D1RGafIOA8Nh3enwy9Pw=;
        b=SF+j8vK5ty6igRsgEtPFQFYE6X7zIdcqdjG/s/CrnsFpqcJqp4kg3c0V3azAyQyGgH
         ox55Fa9dqPHrAB0OEvwnMzXjFlnLuu1xQAA4Hb/IGD+8FY/K07gj7JJBsT7xXeOgytY2
         QnbnWw//SqWCnGokst39nIJ6PCjZrQ2Au4cJeAjJous6sjfv8h5H6KLRT6MM2Q3Cit8U
         i7M5lt5WLXyDIhoxhtZ6g64Nag0OVb4+NTMsTJVxYzNqazH6L0TtX08XSrXUSPUZ/Qho
         RuIsHb11iM2S2l0/WzAc5o1iCfmVQI0d+U8Fyt/POIggt6z+9KRZ6agX60nW56EQERMV
         X+LA==
X-Gm-Message-State: AOAM5336VEPg4jXZG0rtmrd0HBjCh7habREQLWRXRIQ7izSU+KIGmfqL
	8sImAnS3UuEdg87CJ+eAJwM=
X-Google-Smtp-Source: ABdhPJzv1jmYfISttH52Qv7X3ML12jiaN/y3FPvYZps3l/Q0JJ9ycbQ/3m35eYW3aCrzEXDwhiCBYw==
X-Received: by 2002:a37:4796:: with SMTP id u144mr656941qka.235.1606335573858;
        Wed, 25 Nov 2020 12:19:33 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id w54sm446145qtb.0.2020.11.25.12.19.32
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 25 Nov 2020 12:19:33 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Wed, 25 Nov 2020 15:19:10 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 23/45] block: remove i_bdev
Message-ID: <X768PnFhrPtJk4U5@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-24-hch@lst.de>
 <X71g4Tm+3RiRg4Gf@mtj.duckdns.org>
 <20201125162926.GA1024@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201125162926.GA1024@lst.de>

Hello,

On Wed, Nov 25, 2020 at 05:29:26PM +0100, Christoph Hellwig wrote:
> > I was wondering whether losing the stale bdev flushing in bd_acquire() would
> > cause user-visible behavior changes but can't see how it would given that
> > userland has no way of holding onto a specific instance of block inode.
> > Maybe it's something worth mentioning in the commit message?
> 
> With stale bdev flushing do you mean the call to bd_forget if
> i_bdev exists but is unhashed?  It doesn't actually flush anything but

Yeah.

> just detaches the old bdev from the inode so that the new one can be
> attached.  That problem goes away by definition if we don't attach
> the bdev to the inode.

Yeah, I think so. Was just wondering whether the problem actually goes away.

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 20:20:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 20:20:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38056.70672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki1HV-0005ZG-SE; Wed, 25 Nov 2020 20:20:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38056.70672; Wed, 25 Nov 2020 20:20:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki1HV-0005Z9-Of; Wed, 25 Nov 2020 20:20:49 +0000
Received: by outflank-mailman (input) for mailman id 38056;
 Wed, 25 Nov 2020 20:20:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xEKP=E7=gmail.com=htejun@srs-us1.protection.inumbo.net>)
 id 1ki1HU-0005Yz-7M
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 20:20:48 +0000
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e260a64-25ce-4e96-aa1e-779f61da8b9b;
 Wed, 25 Nov 2020 20:20:47 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id n9so1555651qvp.5
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 12:20:47 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
 by smtp.gmail.com with ESMTPSA id y44sm486168qtb.50.2020.11.25.12.20.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Nov 2020 12:20:46 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xEKP=E7=gmail.com=htejun@srs-us1.protection.inumbo.net>)
	id 1ki1HU-0005Yz-7M
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 20:20:48 +0000
X-Inumbo-ID: 8e260a64-25ce-4e96-aa1e-779f61da8b9b
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8e260a64-25ce-4e96-aa1e-779f61da8b9b;
	Wed, 25 Nov 2020 20:20:47 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id n9so1555651qvp.5
        for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 12:20:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=WSMpmf065AQQA16C6I8/f4Nb2MRTg2VCPV1Y+4Omok0=;
        b=aJS9gum5t6FoUnqNb5tw+YGyXQwCcI0whcaK5FVUTCKP/Za8Nic1QhStF8Oo0uQqh3
         L0EDWq8ANbf//uV8OcD8fNbj84N8b+UQW4VBsfkFxUd2wm2oiv8y2r+1szMOd2mKe/Fb
         jB/Rs1PwcZGahhDQ4EgDOj2cTvwscYJwDpscmIsBSgDUECsxq7ybIy99pl7t6EZACtmJ
         OQNz/R2bfBUvVHHvYpCiFswvCFs5ojv+6B+BefCSzh9yvMi9oo4QZXO9E+nIcmmPD3xg
         TfQqkbB+/KLVpkZTEMiMjSIt1e2srUeyrPlr4iowCq3gB3eKpa4DgFQSorpcJb38qFDX
         0fww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=WSMpmf065AQQA16C6I8/f4Nb2MRTg2VCPV1Y+4Omok0=;
        b=dZ0D/X+2pdQFvXcTrZJombVHA9WRn6V1xG/Cs69J9Int12Zay5I2MzP8awqRccM4Aw
         LBV17PhTNJMjXAsisoW7LoN009vLIBjfkDBPrfxcEQCLzaLrzYy7wrKSArNSvJ69J3NC
         25inv1/ogchdg8QxuNeqPw9d4R0L096nDVjI0YJK4R+9QdZvcErrhqTlsaxCRVkLwOtz
         kriI7RkWZMZ2X+tchFKbd+rkO3ZOtcscFSZcWdav6t9WiFzgQrnH5raYNwVjIy/Yxl9F
         XAQoO9liGyngVwRm/44Y+zFEgWJFzHmhuEhWK/2x1dWPxJLDwvS1L2H1FIluoeje+IKQ
         KK3g==
X-Gm-Message-State: AOAM533TaGj+idaIbUUxFtVarD0hidJVdl5U+2v3KXMv5HEyZpD/wk93
	mT7zy4ePZj5ypLzvrTBnnnA=
X-Google-Smtp-Source: ABdhPJxGb7WJXeWnQukVsz1bsvc4/cmOCaTprUECQMhH8XohgWi8umM00Y/Xc9aCwh7r/SWLyDpYFQ==
X-Received: by 2002:a0c:b505:: with SMTP id d5mr4999671qve.59.1606335647135;
        Wed, 25 Nov 2020 12:20:47 -0800 (PST)
Received: from localhost (dhcp-6c-ae-f6-dc-d8-61.cpe.echoes.net. [72.28.8.195])
        by smtp.gmail.com with ESMTPSA id y44sm486168qtb.50.2020.11.25.12.20.45
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 25 Nov 2020 12:20:46 -0800 (PST)
Sender: Tejun Heo <htejun@gmail.com>
Date: Wed, 25 Nov 2020 15:20:23 -0500
From: Tejun Heo <tj@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Josef Bacik <josef@toxicpanda.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Coly Li <colyli@suse.de>, Mike Snitzer <snitzer@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jan Kara <jack@suse.cz>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	dm-devel@redhat.com, Richard Weinberger <richard@nod.at>,
	Jan Kara <jack@suse.com>, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 25/45] block: reference struct block_device from struct
 hd_struct
Message-ID: <X768hzEnD/ySog5b@mtj.duckdns.org>
References: <20201124132751.3747337-1-hch@lst.de>
 <20201124132751.3747337-26-hch@lst.de>
 <X714udEyPuGarVYp@mtj.duckdns.org>
 <20201125164515.GB1975@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201125164515.GB1975@lst.de>

Hello,

On Wed, Nov 25, 2020 at 05:45:15PM +0100, Christoph Hellwig wrote:
> On Tue, Nov 24, 2020 at 04:18:49PM -0500, Tejun Heo wrote:
> > Hello,
> > 
> > Please see lkml.kernel.org/r/X708BTJ5njtbC2z1@mtj.duckdns.org for a few nits
> > on the previous version.
> 
> Thanks, I've addressed the mapping_set_gfp mask nit and updated the
> commit log.  As Jan also pointed out, I think we do need the
> remove_inode_hash.

Agreed. It'd be nice to replace the stale comment.

> > Also, would it make sense to separate out lookup_sem removal? I *think* it's
> > there to ensure that the same bdev doesn't get associated with old and new
> > gendisks at the same time but can't wrap my head around how it works
> > exactly. I can see that this may not be needed once the lifetimes of gendisk
> > and block_devices are tied together but that may warrant a bit more
> > explanation.
> 
> Jan added lookup_sem in commit 56c0908c855afbb to prevent a three way
> race between del_gendisk and blkdev_open due to the weird bdev vs
> gendisk lifetime rules.  None of that can happen with the new lookup
> scheme.

Understood. I think it'd be worthwhile to note that in the commit log.

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 20:57:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 20:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38065.70684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki1qb-0008Tx-KU; Wed, 25 Nov 2020 20:57:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38065.70684; Wed, 25 Nov 2020 20:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki1qb-0008Tq-HQ; Wed, 25 Nov 2020 20:57:05 +0000
Received: by outflank-mailman (input) for mailman id 38065;
 Wed, 25 Nov 2020 20:57:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yzfx=E7=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1ki1qa-0008Tl-9o
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 20:57:04 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 38c23258-97bd-4ad7-908c-1465e61840fc;
 Wed, 25 Nov 2020 20:57:03 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-361-AaIHiFkGOLqrKW3-qBNNqg-1; Wed, 25 Nov 2020 15:56:58 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 938AF1005D62;
 Wed, 25 Nov 2020 20:56:56 +0000 (UTC)
Received: from localhost (unknown [10.10.67.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4230F60854;
 Wed, 25 Nov 2020 20:56:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Yzfx=E7=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
	id 1ki1qa-0008Tl-9o
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 20:57:04 +0000
X-Inumbo-ID: 38c23258-97bd-4ad7-908c-1465e61840fc
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 38c23258-97bd-4ad7-908c-1465e61840fc;
	Wed, 25 Nov 2020 20:57:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606337822;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kcyTO1EndQuMSTqqIVsGvEARRbZE17kEQ/mI/5TZMqc=;
	b=dIPPZrOk2/l4Uh/CP/fTOkVRa7VJIIQixF7ocsHcarqTk2TvDYFtVPe6Nfcc9juwSxi3Ne
	ONjAL3u+NctLrS9I7e86U493VeV2XXQgchRbWlXwwACnn8TM+7YfSue5IXgQYUFS1X+yQr
	ACRNQ+kJZmqAkmpUWZAOrId/F07ftPw=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-361-AaIHiFkGOLqrKW3-qBNNqg-1; Wed, 25 Nov 2020 15:56:58 -0500
X-MC-Unique: AaIHiFkGOLqrKW3-qBNNqg-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com [10.5.11.13])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 938AF1005D62;
	Wed, 25 Nov 2020 20:56:56 +0000 (UTC)
Received: from localhost (unknown [10.10.67.22])
	by smtp.corp.redhat.com (Postfix) with ESMTP id 4230F60854;
	Wed, 25 Nov 2020 20:56:56 +0000 (UTC)
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: Gerd Hoffmann <kraxel@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Claudio Fontana <cfontana@suse.de>,
	Roman Bolshakov <r.bolshakov@yadro.com>
Subject: [PATCH v2 4/6] xen: Delete xen_available() function
Date: Wed, 25 Nov 2020 15:56:34 -0500
Message-Id: <20201125205636.3305257-5-ehabkost@redhat.com>
In-Reply-To: <20201125205636.3305257-1-ehabkost@redhat.com>
References: <20201125205636.3305257-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"

The function can be replaced with accel_available("xen").

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Claudio Fontana <cfontana@suse.de>
Cc: Roman Bolshakov <r.bolshakov@yadro.com>
---
 include/sysemu/arch_init.h | 2 --
 softmmu/arch_init.c        | 9 ---------
 softmmu/vl.c               | 6 +++---
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/include/sysemu/arch_init.h b/include/sysemu/arch_init.h
index b32ce1afa9..40ac8052b7 100644
--- a/include/sysemu/arch_init.h
+++ b/include/sysemu/arch_init.h
@@ -32,6 +32,4 @@ enum {
 
 extern const uint32_t arch_type;
 
-int xen_available(void);
-
 #endif
diff --git a/softmmu/arch_init.c b/softmmu/arch_init.c
index 79383c8db8..f4770931f5 100644
--- a/softmmu/arch_init.c
+++ b/softmmu/arch_init.c
@@ -49,12 +49,3 @@ int graphic_depth = 32;
 
 
 const uint32_t arch_type = QEMU_ARCH;
-
-int xen_available(void)
-{
-#ifdef CONFIG_XEN
-    return 1;
-#else
-    return 0;
-#endif
-}
diff --git a/softmmu/vl.c b/softmmu/vl.c
index e6e0ad5a92..74b6ebf1e4 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -3667,21 +3667,21 @@ void qemu_init(int argc, char **argv, char **envp)
                 has_defaults = 0;
                 break;
             case QEMU_OPTION_xen_domid:
-                if (!(xen_available())) {
+                if (!(accel_available("xen"))) {
                     error_report("Option not supported for this target");
                     exit(1);
                 }
                 xen_domid = atoi(optarg);
                 break;
             case QEMU_OPTION_xen_attach:
-                if (!(xen_available())) {
+                if (!(accel_available("xen"))) {
                     error_report("Option not supported for this target");
                     exit(1);
                 }
                 xen_mode = XEN_ATTACH;
                 break;
             case QEMU_OPTION_xen_domid_restrict:
-                if (!(xen_available())) {
+                if (!(accel_available("xen"))) {
                     error_report("Option not supported for this target");
                     exit(1);
                 }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Nov 25 21:10:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 21:10:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38073.70695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki23B-0001ei-Ua; Wed, 25 Nov 2020 21:10:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38073.70695; Wed, 25 Nov 2020 21:10:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki23B-0001eI-Qt; Wed, 25 Nov 2020 21:10:05 +0000
Received: by outflank-mailman (input) for mailman id 38073;
 Wed, 25 Nov 2020 21:10:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8CaZ=E7=chromium.org=keescook@srs-us1.protection.inumbo.net>)
 id 1ki23A-0001TR-Mb
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 21:10:04 +0000
Received: from mail-pg1-x544.google.com (unknown [2607:f8b0:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce698c87-305d-4d99-8446-63f72c855e34;
 Wed, 25 Nov 2020 21:10:03 +0000 (UTC)
Received: by mail-pg1-x544.google.com with SMTP id 34so3481860pgp.10
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 13:10:03 -0800 (PST)
Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163])
 by smtp.gmail.com with ESMTPSA id z68sm2599034pgb.37.2020.11.25.13.10.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Nov 2020 13:10:01 -0800 (PST)
X-Inumbo-ID: ce698c87-305d-4d99-8446-63f72c855e34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=5kDOOuMOjwqEdI4fOY8lF/zJwxoesF7EbkKd66qXZt4=;
        b=AWR/7M4mhE4fi43nRvdimsQoF0WR0/yRvzcmpbbdNnukQZ4s3FUnDOA4HlPrcfy1KL
         TnDV3Sp7ib0l66wO1u9Qkihqc0ymqk7uAD73hLx1Hbj+4zgGkd46Av0r6N17g4aWHL9f
         T52peLS6H3gWn2HtVm1n8SJNqsSbmk3lehF9k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=5kDOOuMOjwqEdI4fOY8lF/zJwxoesF7EbkKd66qXZt4=;
        b=QI2cR1L0b1r/dzIT+KH2DJMMOwh7fUCSl6t/jZkcy74Jqb+jL8AaCNNSqVabBI6ehN
         8Qkn44kLQeGs0ABHv1NjfdIj48mNOBbup+BMnQOQ3wHwS7HffqS0erThB0+7C0zRarWe
         OjI/3AijHV4vkCnfo1ht1WFdEMvi3svlih9WsdMl7OUmuPZzFa4pMvetLCRB5n6NRseb
         aVXs0CFnseBxL220R2ImLa0wlzXyN5M13y39btgTn08L6sVV3xFMFN7IXCqPcu5b/Ud6
         ZInlNC0flo+B+cK0L3cFKeyWc/+xKpsWhMq6NLBgS8Jy/jCLbgTAwEVfgBWP5fisUX7R
         PKEg==
X-Gm-Message-State: AOAM531JlJOzmKnn15ylyNcSNCMUzAWyc2XL+e4X/IW2sK0/Ved0xjD8
	JcZ9l71nWEriLkocCwKObCkX2A==
X-Google-Smtp-Source: ABdhPJxtOU3VPA7Y7ucN799oMICSzuh5uhFKuAq1LiQpUAr5IO1/6yxlbfPxSFBqrEGEx7Z+PADUmw==
X-Received: by 2002:a17:90b:3505:: with SMTP id ls5mr6437623pjb.55.1606338602947;
        Wed, 25 Nov 2020 13:10:02 -0800 (PST)
Date: Wed, 25 Nov 2020 13:10:00 -0800
From: Kees Cook <keescook@chromium.org>
To: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>,
	Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>,
	alsa-devel@alsa-project.org,
	linux-atm-general@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org,
	linux-wireless@vger.kernel.org, linux-fbdev@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Nathan Chancellor <natechancellor@gmail.com>,
	linux-ide@vger.kernel.org, dm-devel@redhat.com,
	keyrings@vger.kernel.org, linux-mtd@lists.infradead.org,
	GR-everest-linux-l2@marvell.com, wcn36xx@lists.infradead.org,
	samba-technical@lists.samba.org, linux-i3c@lists.infradead.org,
	linux1394-devel@lists.sourceforge.net,
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net,
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org,
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com,
	Nick Desaulniers <ndesaulniers@google.com>,
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org,
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org,
	linux-security-module@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com,
	linux-acpi@vger.kernel.org, coreteam@netfilter.org,
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	tipc-discussion@lists.sourceforge.net, linux-ext4@vger.kernel.org,
	linux-media@vger.kernel.org, linux-watchdog@vger.kernel.org,
	selinux@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org,
	linux-can@vger.kernel.org, linux-block@vger.kernel.org,
	linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org,
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org,
	nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-hwmon@vger.kernel.org,
	x86@kernel.org, linux-nfs@vger.kernel.org,
	GR-Linux-NIC-Dev@marvell.com, linux-mm@kvack.org,
	netdev@vger.kernel.org, linux-decnet-user@lists.sourceforge.net,
	linux-mmc@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org,
	netfilter-devel@vger.kernel.org, linux-crypto@vger.kernel.org,
	patches@opensource.cirrus.com, linux-integrity@vger.kernel.org,
	target-devel@vger.kernel.org, linux-hardening@vger.kernel.org,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
Message-ID: <202011251240.1E67BE900@keescook>
References: <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook>
 <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>

On Tue, Nov 24, 2020 at 11:05:35PM -0800, James Bottomley wrote:
> Now, what we have seems to be about 6 cases (at least what's been shown
> in this thread) where a missing break would cause potentially user
> visible issues.  That means the value of this isn't zero, but it's not
> a no-brainer massive win either.  That's why I think asking what we've
> invested vs the return isn't a useless exercise.

The number is much higher[1]. If it were 6 in the entire history of the
kernel, I would agree with you. :) Some were fixed _before_ Gustavo's
effort too, which I also count towards the idea of "this is a dangerous
weakness in C, and now we have stopped it forever."

> But the broader point I'm making is just because the compiler people
> come up with a shiny new warning doesn't necessarily mean the problem
> it's detecting is one that causes us actual problems in the code base. 
> I'd really be happier if we had a theory about what classes of CVE or
> bug we could eliminate before we embrace the next new warning.

But we did! It was long ago justified and documented[2], and it even
links to the CWE[3] for it. This wasn't random joy over discovering a new
warning we could turn on; this was turning on a warning that the compiler
folks finally gave us to handle an entire class of flaws. Whether we need
to update the code-base to address it is not a useful debate -- that was
settled already, even if you're only discovering it now. :P This last
patch set is about finishing that work for Clang, which is, correctly,
even stricter than GCC.

-Kees

[1] https://outflux.net/slides/2019/lss/kspp.pdf calls out specific
    numbers (about 6.5% of the patches fixed missing breaks):
	v4.19:  3 of 129
	v4.20:  2 of  59
	v5.0:   3 of  56
	v5.1:  10 of 100
	v5.2:   6 of  71
	v5.3:   7 of  69

    And in the history of the kernel, it's been an ongoing source of
    flaws:

    $ l --no-merges | grep -i 'missing break' | wc -l
    185

    The frequency of such errors being "naturally" found was pretty
    steady until the static checkers started warning, and then it was
    on the rise, but the full effort flushed the rest out, and now it's
    dropped to almost zero:

      1 v2.6.12
      3 v2.6.16.28
      1 v2.6.17
      1 v2.6.19
      2 v2.6.21
      1 v2.6.22
      3 v2.6.24
      3 v2.6.29
      1 v2.6.32
      1 v2.6.33
      1 v2.6.35
      4 v2.6.36
      3 v2.6.38
      2 v2.6.39
      7 v3.0
      2 v3.1
      2 v3.2
      2 v3.3
      3 v3.4
      1 v3.5
      8 v3.6
      7 v3.7
      3 v3.8
      6 v3.9
      3 v3.10
      2 v3.11
      5 v3.12
      5 v3.13
      2 v3.14
      4 v3.15
      2 v3.16
      3 v3.17
      2 v3.18
      2 v3.19
      1 v4.0
      2 v4.1
      5 v4.2
      4 v4.5
      5 v4.7
      6 v4.8
      1 v4.9
      3 v4.10
      2 v4.11
      6 v4.12
      3 v4.13
      2 v4.14
      5 v4.15
      2 v4.16
      7 v4.18
      2 v4.19
      6 v4.20
      3 v5.0
     12 v5.1
      3 v5.2
      4 v5.3
      2 v5.4
      1 v5.8


    And the reason it's not yet fully zero is that we still have the
    cases we're cleaning up right now. Even this last one from v5.8 is
    specifically of the same type this series addresses:

        case 4:
                color_index = TrueCModeIndex;
+               break;
        default:
                return;
        }


[2] https://www.kernel.org/doc/html/latest/process/deprecated.html#implicit-switch-case-fall-through

	All switch/case blocks must end in one of:

	break;
	fallthrough;
	continue;
	goto <label>;
	return [expression];

[3] https://cwe.mitre.org/data/definitions/484.html
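
    The rule in [2] can be sketched as a standalone C fragment. Outside
    the kernel there is no fallthrough keyword, so the macro below is an
    assumption standing in for the kernel's definition (it expands to the
    statement attribute that gcc >= 7 and clang understand):

```c
#include <assert.h>

#define fallthrough __attribute__((__fallthrough__))

/* Every case ends in exactly one of: break, fallthrough, continue,
 * goto, or return -- so -Wimplicit-fallthrough stays silent and every
 * deliberate fall-through is visible in the source. */
static int classify(int c)
{
    switch (c) {
    case 0:
        return -1;
    case 1:
        fallthrough;    /* deliberate: 1 and 2 share a result */
    case 2:
        return 10;
    default:
        break;
    }
    return 0;
}
```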

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 21:16:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 21:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38081.70708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki29j-00023l-Mv; Wed, 25 Nov 2020 21:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38081.70708; Wed, 25 Nov 2020 21:16:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki29j-00023e-IS; Wed, 25 Nov 2020 21:16:51 +0000
Received: by outflank-mailman (input) for mailman id 38081;
 Wed, 25 Nov 2020 21:16:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ki29h-00023Z-D2
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 21:16:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 914fb1cd-81ed-469a-a058-1448f65ee0ab;
 Wed, 25 Nov 2020 21:16:48 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5779E2067D;
 Wed, 25 Nov 2020 21:16:47 +0000 (UTC)
X-Inumbo-ID: 914fb1cd-81ed-469a-a058-1448f65ee0ab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606339007;
	bh=3ORE4OA2zRG6/kdfS8PhhwjL4qTMFTJtKzDXg5yC5do=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mW9PhzekD7khhlxwVCnzpEAyHH9XumAmcH2tvZ4eaiTuFU5Aixpe1KFqtb328Dk11
	 4Pgz3fLtaPTRLub0AQG3ct7eRvp2d9lEj/2jhXL/ULUWxXjRH+CBXL7xr1bv997UqI
	 Ca5o79QeQWNP2i+roj+7LSalUoWWV/UkQADG50v8=
Date: Wed, 25 Nov 2020 13:16:46 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 1/3] xen/pci: Move x86 specific code to x86
 directory.
In-Reply-To: <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2011251316370.7979@sstabellini-ThinkPad-T480s>
References: <cover.1606326929.git.rahul.singh@arm.com> <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Nov 2020, Rahul Singh wrote:
> The passthrough/pci.c file is common to all architectures, but it
> contains x86-specific code.
> 
> Move the x86-specific code to the drivers/passthrough/io.c file to
> avoid compilation errors on other architectures.
> 
> As drivers/passthrough/io.c is compiled only for x86, move it to the
> x86 directory and rename it to hvm.c.
> 
> No functional change intended.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes in v4:
> - fixed compilation error when CONFIG_HVM is disabled 
> - remove iommu_update_ire_from_msi from the patch will send another patch
>   to fix.
> 
> ---
>  xen/drivers/passthrough/Makefile            |  3 -
>  xen/drivers/passthrough/pci.c               | 71 +--------------------
>  xen/drivers/passthrough/x86/Makefile        |  1 +
>  xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++++
>  xen/include/xen/pci.h                       |  9 +++
>  5 files changed, 77 insertions(+), 73 deletions(-)
>  rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)
> 
> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
> index e973e16c74..cc646612c7 100644
> --- a/xen/drivers/passthrough/Makefile
> +++ b/xen/drivers/passthrough/Makefile
> @@ -6,6 +6,3 @@ obj-$(CONFIG_ARM) += arm/
>  obj-y += iommu.o
>  obj-$(CONFIG_HAS_PCI) += pci.o
>  obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
> -
> -x86-$(CONFIG_HVM) := io.o
> -obj-$(CONFIG_X86) += $(x86-y)
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 51e584127e..3c6ab1bcb6 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,9 +14,6 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <xen/sched.h>
> -#include <xen/pci.h>
> -#include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
>  #include <xen/list.h>
>  #include <xen/prefetch.h>
> @@ -24,7 +21,6 @@
>  #include <xen/irq.h>
>  #include <xen/param.h>
>  #include <xen/vm_event.h>
> -#include <asm/hvm/irq.h>
>  #include <xen/delay.h>
>  #include <xen/keyhandler.h>
>  #include <xen/event.h>
> @@ -842,71 +838,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}
> -
>  /* Caller should hold the pcidevs_lock */
>  static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                             uint8_t devfn)
> @@ -966,7 +897,7 @@ int pci_release_devices(struct domain *d)
>      int ret;
>  
>      pcidevs_lock();
> -    ret = pci_clean_dpci_irqs(d);
> +    ret = arch_pci_clean_pirqs(d);
>      if ( ret )
>      {
>          pcidevs_unlock();
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index a70cf9460d..69284a5d19 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,3 @@
>  obj-y += ats.o
>  obj-y += iommu.o
> +obj-$(CONFIG_HVM) += hvm.o
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/x86/hvm.c
> similarity index 95%
> rename from xen/drivers/passthrough/io.c
> rename to xen/drivers/passthrough/x86/hvm.c
> index 6b1305a3e5..41cfa2e200 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/x86/hvm.c
> @@ -1036,6 +1036,72 @@ unlock:
>      spin_unlock(&d->event_lock);
>  }
>  
> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +
> +    return 0;
> +}
> +
>  /*
>   * Note: 'pt_pirq_softirq_reset' can clear the STATE_SCHED before we get to
>   * doing it. If that is the case we let 'pt_pirq_softirq_reset' do ref-counting.
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 20a54a5bb4..8e3d4d9454 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -208,4 +208,13 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
>  void msixtbl_pt_unregister(struct domain *, struct pirq *);
>  void msixtbl_pt_cleanup(struct domain *d);
>  
> +#ifdef CONFIG_HVM
> +int arch_pci_clean_pirqs(struct domain *d);
> +#else
> +static inline int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    return 0;
> +}
> +#endif /* CONFIG_HVM */
> +
>  #endif /* __XEN_PCI_H__ */
> -- 
> 2.17.1
> 
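
The xen/pci.h hunk above uses a common pattern: declare the real arch
hook when the option is enabled, and provide a static inline no-op stub
otherwise, so common code can call it unconditionally with no #ifdef at
the call site. A minimal sketch (the struct is reduced for illustration,
and CONFIG_HVM here stands in for the real Kconfig flag):

```c
#include <assert.h>

struct domain { int id; };

#ifdef CONFIG_HVM
int arch_pci_clean_pirqs(struct domain *d);  /* real body in x86/hvm.c */
#else
/* Compiled out: callers still link, and the call folds to "return 0". */
static inline int arch_pci_clean_pirqs(struct domain *d)
{
    (void)d;
    return 0;
}
#endif
```

With CONFIG_HVM undefined, as in this sketch, every caller transparently
gets the stub.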


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 21:23:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 21:23:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38088.70720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2FR-0002yT-Br; Wed, 25 Nov 2020 21:22:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38088.70720; Wed, 25 Nov 2020 21:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2FR-0002yM-7q; Wed, 25 Nov 2020 21:22:45 +0000
Received: by outflank-mailman (input) for mailman id 38088;
 Wed, 25 Nov 2020 21:22:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ki2FQ-0002yH-01
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 21:22:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6b2cdd3-60ca-488d-8696-759cdb4b2f4a;
 Wed, 25 Nov 2020 21:22:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3567F206B5;
 Wed, 25 Nov 2020 21:22:42 +0000 (UTC)
X-Inumbo-ID: e6b2cdd3-60ca-488d-8696-759cdb4b2f4a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606339362;
	bh=/dp5Lg3Nm/ZwwlaZbc9dAyCuGMPyOMed/wnoftDLQZk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DtWpPOH6o1TCGeJbRTM9jAfdZ9CimHPH/Gb4o/68yEUp0c+GWrwQFgN5sX3SDZhlN
	 d1LHafmNi91t0ypcUAoaQOA79ETM79TtKGcvo1Bp+8dm1ejjo0M1yOSACWDlUCc0fZ
	 yptQ2kRZmJHIA1xlpTqBuXw8GRgKseJpuvO+thLw=
Date: Wed, 25 Nov 2020 13:22:41 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v4 2/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
In-Reply-To: <2ce402cfae6d90433626bcdc6314e5ee5dda103f.1606326929.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2011251320300.7979@sstabellini-ThinkPad-T480s>
References: <cover.1606326929.git.rahul.singh@arm.com> <2ce402cfae6d90433626bcdc6314e5ee5dda103f.1606326929.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Nov 2020, Rahul Singh wrote:
> If mem-sharing, mem-paging, or log-dirty functionality is not enabled
> for an architecture when HAS_PCI is enabled, the compiler will throw an
> error.
> 
> Move the code to an x86-specific file to fix the compilation error.
> 
> Also, modify the code to use likely() in place of unlikely() for each
> condition to make the code better optimized.
> No functional change intended.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
> 
> Changes in v4:
> - fixed minor comments
> 
> ---
>  xen/drivers/passthrough/pci.c       |  8 +-------
>  xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++++
>  xen/include/xen/iommu.h             |  2 ++
>  3 files changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 3c6ab1bcb6..4c21655b7d 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -20,7 +20,6 @@
>  #include <xen/iommu.h>
>  #include <xen/irq.h>
>  #include <xen/param.h>
> -#include <xen/vm_event.h>
>  #include <xen/delay.h>
>  #include <xen/keyhandler.h>
>  #include <xen/event.h>
> @@ -1418,12 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>      if ( !is_iommu_enabled(d) )
>          return 0;
>  
> -    /* Prevent device assign if mem paging or mem sharing have been 
> -     * enabled for this domain */
> -    if ( d != dom_io &&
> -         unlikely(mem_sharing_enabled(d) ||
> -                  vm_event_check_ring(d->vm_event_paging) ||
> -                  p2m_get_hostp2m(d)->global_logdirty) )
> +    if( !arch_iommu_use_permitted(d) )

code style:

  if ( !arch_iommu_use_permitted(d) )


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 21:29:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 21:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38095.70732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2LQ-0003FL-15; Wed, 25 Nov 2020 21:28:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38095.70732; Wed, 25 Nov 2020 21:28:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2LP-0003FE-TP; Wed, 25 Nov 2020 21:28:55 +0000
Received: by outflank-mailman (input) for mailman id 38095;
 Wed, 25 Nov 2020 21:28:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gEFk=E7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ki2LP-0003F9-1D
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 21:28:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3685e01e-1c47-4abd-af2b-fb4066cce531;
 Wed, 25 Nov 2020 21:28:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B92BB206E0;
 Wed, 25 Nov 2020 21:28:52 +0000 (UTC)
X-Inumbo-ID: 3685e01e-1c47-4abd-af2b-fb4066cce531
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606339733;
	bh=/Ysq6Vgj8EHb6KbamRFC3s2zGugtlI2n/1OSkfSGZ4s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=assL+3Yi9bO8B9zPGFItUVzzA3j7agW0Ou5XWfof4AaTtUtIvxWYn1uufWfdVF5d9
	 gKdS2rwKLJpNQPRKwhREY4n4/yNHEVDy4zykMGkmP70Fa1e9GnzmqYwnaTsbMu51o8
	 +sjJCCxKBQvUvTFiZfpXj9y8WN9FbAeXUNnwCzwU=
Date: Wed, 25 Nov 2020 13:28:51 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
In-Reply-To: <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2011251328440.7979@sstabellini-ThinkPad-T480s>
References: <cover.1606326929.git.rahul.singh@arm.com> <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-250076586-1606339733=:7979"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-250076586-1606339733=:7979
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 25 Nov 2020, Rahul Singh wrote:
> The NS16550 driver assumes that NS16550 PCI cards are usable if the
> architecture supports PCI (i.e. CONFIG_HAS_PCI=y). However, the code is
> very x86-focused and will fail to build on Arm (/!\ this is not the full
> list of errors):
> 
> ns16550.c: In function ‘ns16550_init_irq’:
> ns16550.c:726:21: error: implicit declaration of function ‘create_irq’;
> did you mean ‘release_irq’? [-Werror=implicit-function-declaration]
>           uart->irq = create_irq(0, false);
>                       ^~~~~~~~~~
>                       release_irq
> ns16550.c:726:21: error: nested extern declaration of ‘create_irq’
> [-Werror=nested-externs]
> ns16550.c: In function ‘ns16550_init_postirq’:
> ns16550.c:768:33: error: ‘mmio_ro_ranges’ undeclared (first use in this
> function); did you mean ‘mmio_handler’?
>                rangeset_add_range(mmio_ro_ranges, uart->io_base,
>                                   ^~~~~~~~~~~~~~
>                                   mmio_handler
> ns16550.c:768:33: note: each undeclared identifier is reported only once
> for each function it appears in
> ns16550.c:780:20: error: variable ‘msi’ has initializer but incomplete
> type
>               struct msi_info msi = {
>                      ^~~~~~~~
> 
> Enabling support for NS16550 PCI cards on Arm would require more plumbing
> in addition to fixing the compilation errors.
> 
> Arm systems tend to have platform UARTs available, such as NS16550 or
> PL011, so there is little reason to add NS16550 PCI support on Arm for now.
> 
> Guard all remaining PCI code that is not already under an x86 flag with
> CONFIG_X86.
> 
> No functional change intended.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes in v4:
> - As per the discussion guard all remaining PCI code with CONFIG_X86
> 
> ---
>  xen/drivers/char/ns16550.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 9235d854fe..26e601857a 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
>  #include <xen/timer.h>
>  #include <xen/serial.h>
>  #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
> @@ -51,7 +51,7 @@ static struct ns16550 {
>      unsigned int timeout_ms;
>      bool_t intr_works;
>      bool_t dw_usr_bsy;
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>      /* PCI card parameters. */
>      bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
>      bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
> @@ -66,7 +66,7 @@ static struct ns16550 {
>  #endif
>  } ns16550_com[2] = { { 0 } };
>  
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>  struct ns16550_config {
>      u16 vendor_id;
>      u16 dev_id;
> @@ -256,7 +256,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
>  
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>      if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>          return;
>  
> @@ -355,7 +355,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
>  
>  static void __init ns16550_init_irq(struct serial_port *port)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>      struct ns16550 *uart = port->uart;
>  
>      if ( uart->msi )
> @@ -397,7 +397,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
>      uart->timeout_ms = max_t(
>          unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
>  
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>      if ( uart->bar || uart->ps_bdf_enable )
>      {
>          if ( uart->param && uart->param->mmio &&
> @@ -477,7 +477,7 @@ static void ns16550_suspend(struct serial_port *port)
>  
>      stop_timer(&uart->timer);
>  
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>      if ( uart->bar )
>         uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>                                    uart->ps_bdf[2]), PCI_COMMAND);
> @@ -486,7 +486,7 @@ static void ns16550_suspend(struct serial_port *port)
>  
>  static void _ns16550_resume(struct serial_port *port)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>      struct ns16550 *uart = port->uart;
>  
>      if ( uart->bar )
> -- 
> 2.17.1
> 
--8323329-250076586-1606339733=:7979--


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 21:33:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 21:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38103.70744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2Ps-00049J-Np; Wed, 25 Nov 2020 21:33:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38103.70744; Wed, 25 Nov 2020 21:33:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2Ps-00049C-KO; Wed, 25 Nov 2020 21:33:32 +0000
Received: by outflank-mailman (input) for mailman id 38103;
 Wed, 25 Nov 2020 21:33:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NWac=E7=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1ki2Pr-000497-8d
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 21:33:31 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 674c4678-a85e-40b6-a7f8-8022c38a6ab3;
 Wed, 25 Nov 2020 21:33:29 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id 9789F29FB0;
 Wed, 25 Nov 2020 16:33:24 -0500 (EST)
X-Inumbo-ID: 674c4678-a85e-40b6-a7f8-8022c38a6ab3
Date: Thu, 26 Nov 2020 08:33:24 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Nick Desaulniers <ndesaulniers@google.com>
cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
    Kees Cook <keescook@chromium.org>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, 
    alsa-devel@alsa-project.org, linux-atm-general@lists.sourceforge.net, 
    reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
    LKML <linux-kernel@vger.kernel.org>, 
    Nathan Chancellor <natechancellor@gmail.com>, linux-ide@vger.kernel.org, 
    dm-devel@redhat.com, keyrings@vger.kernel.org, 
    linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
    wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
    linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
    linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
    drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
    linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
    linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
    oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
    linux-security-module@vger.kernel.org, 
    amd-gfx list <amd-gfx@lists.freedesktop.org>, 
    linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com, 
    linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
    intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org, 
    Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
    linux-ext4@vger.kernel.org, linux-media@vger.kernel.org, 
    linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
    linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
    intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
    linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
    linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
    linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
    nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
    ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-hwmon@vger.kernel.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
    Linux Memory Management List <linux-mm@kvack.org>, 
    Network Development <netdev@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
    Linux-Renesas <linux-renesas-soc@vger.kernel.org>, 
    linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, 
    netfilter-devel@vger.kernel.org, 
    "open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, 
    patches@opensource.cirrus.com, linux-integrity@vger.kernel.org, 
    target-devel@vger.kernel.org, linux-hardening@vger.kernel.org, 
    Jonathan Cameron <Jonathan.Cameron@huawei.com>, 
    Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
In-Reply-To: <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com>
Message-ID: <alpine.LNX.2.23.453.2011260750300.6@nippy.intranet>
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com> <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com> <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com> <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com> <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com> <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Nov 2020, Nick Desaulniers wrote:

> So developers and distributions using Clang can't have 
> -Wimplicit-fallthrough enabled because GCC is less strict (which has 
> been shown in this thread to lead to bugs)?  We'd like to have nice 
> things too, you know.
> 

Apparently the GCC developers don't want you to have "nice things" either. 
Do you think that the kernel should drop gcc in favour of clang?
Or do you think that a codebase can somehow satisfy multiple checkers and 
their divergent interpretations of the language spec?

> This is not a shiny new warning; it's already on for GCC and has existed 
> in both compilers for multiple releases.
> 

Perhaps you're referring to the compiler feature that led to the 
ill-fated, tree-wide /* fallthrough */ patch series.

When the ink dries on the C23 language spec and the implementations figure 
out how to interpret it, then sure, enforce the warning for new code -- the 
cost/benefit analysis is straightforward. However, the case for patching 
existing mature code is another story.


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 22:09:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 22:09:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38111.70756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2yY-00072u-H3; Wed, 25 Nov 2020 22:09:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38111.70756; Wed, 25 Nov 2020 22:09:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2yY-00072n-DU; Wed, 25 Nov 2020 22:09:22 +0000
Received: by outflank-mailman (input) for mailman id 38111;
 Wed, 25 Nov 2020 22:09:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ecWo=E7=google.com=ndesaulniers@srs-us1.protection.inumbo.net>)
 id 1ki2yX-00072i-Bd
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 22:09:21 +0000
Received: from mail-pg1-x543.google.com (unknown [2607:f8b0:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 286c4bc4-a9fc-4e59-ae65-14f4ca5eb712;
 Wed, 25 Nov 2020 22:09:20 +0000 (UTC)
Received: by mail-pg1-x543.google.com with SMTP id t3so3603184pgi.11
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 14:09:20 -0800 (PST)
X-Inumbo-ID: 286c4bc4-a9fc-4e59-ae65-14f4ca5eb712
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=u18D3LcbSYuoMMQXGcZuOSSHQ97aqFAj6PSwZPKzyfI=;
        b=WifNdkiftaOaD4ZZckvIqaLo1oDpErEfQOeEaPPqC9ugBqwKDiWX7cyNX9NNokqABc
         3rs8Xkh/xt0eSkb8xVKJLxbZ9ctD9N09POwkigNrKsLe837qPjgVNlPjii1YQ7Ys5fEN
         J1ztqI2SdAxczb+fxW5+t1BuuInt0J3EnKhYhGXUx/Ycs4OeMqDUGayCvZ/rK/2hwzHv
         fvNTfM4yxqfrW/W8EGBwEJD3ef8E6CIFveHPM/2Vwj3USBV6V1sXpGOaikzNc+dcQTTs
         sQr+N2RUFmFahGpkDJgGJRdpjSZFKGXDT4Evw9946EQKZvU8LN7QRJCs+4k3wXo2GKBl
         nncg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=u18D3LcbSYuoMMQXGcZuOSSHQ97aqFAj6PSwZPKzyfI=;
        b=ByV+dJ6H7QmMqWfm3SKmQksWJavW81dmr/kn9cxaQJ3Q4GVSEG2QdIPHivLob5XlDq
         BCR1qZCGaHeXTKED+l0AQ6N+qT43Ib3WeTc5jDXwIkZTtKTdjJ/Z+qidM/mki2wCpSfg
         0IdSq1RIJ/yWWoOHlavbVMoBPP5N8Yl/9O27s2PdpmGn8SddC+9bwS0848buxFJLxGsZ
         9ftqht0FfEtwnMixw2E1kQYFkLzdhGLxpSTaGnJ98kKlcc2sa0fq7n9mL01vBfufzI8d
         bb5PqfrHHSHmR+7cIFIqf5++VoxjtQXE539ITnZD4NOLzxeKGK2ssjmcxbYoxlTgprM4
         Sv2A==
X-Gm-Message-State: AOAM5339oya8H5HsA+yXd2Jh1AhjI4HzpqJlnD+/YcMkYdRC5j7C8MvV
	j+yReUEbM5Lq/Z84MbQtvDfhqT6fcbqvY0G4okc2cA==
X-Google-Smtp-Source: ABdhPJxX4YdX/dgPSH4qNW13nZ+NPFnsWrpBgteZRS2UfRF2x25a9zmwdzqZA/h0sTtapVP+ZBS7Enbowp+5Zxn/Mxo=
X-Received: by 2002:a17:90a:d250:: with SMTP id o16mr6463569pjw.25.1606342159332;
 Wed, 25 Nov 2020 14:09:19 -0800 (PST)
MIME-Version: 1.0
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor> <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
 <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com> <20201125082405.1d8c23dc@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
In-Reply-To: <20201125082405.1d8c23dc@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Wed, 25 Nov 2020 14:09:08 -0800
Message-ID: <CAKwvOdkWGE5qdFZUuMzcL63LDOu_iZQJOGbeBNjcPi8sJPMkag@mail.gmail.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang
To: Jakub Kicinski <kuba@kernel.org>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
	Kees Cook <keescook@chromium.org>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	Joe Perches <joe@perches.com>, alsa-devel@alsa-project.org, 
	linux-atm-general@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
	LKML <linux-kernel@vger.kernel.org>, Nathan Chancellor <natechancellor@gmail.com>, 
	linux-ide@vger.kernel.org, dm-devel@redhat.com, keyrings@vger.kernel.org, 
	linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
	linux-security-module@vger.kernel.org, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org, 
	Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
	linux-ext4@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
	linux-arm-msm <linux-arm-msm@vger.kernel.org>, intel-gfx@lists.freedesktop.org, 
	linux-geode@lists.infradead.org, linux-can@vger.kernel.org, 
	linux-block@vger.kernel.org, linux-gpio@vger.kernel.org, 
	op-tee@lists.trustedfirmware.org, linux-mediatek@lists.infradead.org, 
	xen-devel@lists.xenproject.org, nouveau@lists.freedesktop.org, 
	linux-hams@vger.kernel.org, ceph-devel@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
	Linux Memory Management List <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-sctp@vger.kernel.org, 
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
	=?UTF-8?Q?open_list=3AHARDWARE_RANDOM_NUMBER_GENERATOR_CORE_=3Clinux=2Dcrypt?=
	=?UTF-8?Q?o=40vger=2Ekernel=2Eorg=3E=2C_patches=40opensource=2Ecirrus=2Ecom=2C_linux=2Dint?=
	=?UTF-8?Q?egrity=40vger=2Ekernel=2Eorg=2C_target=2Ddevel=40vger=2Ekernel=2Eorg=2C_linux=2D?=
	=?UTF-8?Q?hardening=40vger=2Ekernel=2Eorg=2C_Jonathan_Cameron_=3CJonathan=2ECamero?=
	=?UTF-8?Q?n=40huawei=2Ecom=3E=2C_Greg_KH?= <gregkh@linuxfoundation.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 25, 2020 at 8:24 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> Applying a real patch set and then getting a few follow ups the next day
> for trivial coding things like fallthrough missing or static missing,
> just because I didn't have the full range of compilers to check with
> before applying makes me feel pretty shitty, like I'm not doing a good
> job. YMMV.

I understand. Everyone feels that way, except maybe Bond villains and
robots.  My advice in that case is don't take it personally.  We're
working with a language that's more error-prone than most others.
While one would like to believe they are flawless, over time they
can't beat the aggregate statistics.  A balance between Imposter
Syndrome and Dunning-Kruger is walked by all software developers, and
the fear of making mistakes in public is one of the main reasons
folks don't take the plunge and contribute to open source software or
even the kernel.  My advice to them is "don't sweat the small stuff."
-- 
Thanks,
~Nick Desaulniers


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 22:09:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 22:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38113.70768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2yt-00077E-Po; Wed, 25 Nov 2020 22:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38113.70768; Wed, 25 Nov 2020 22:09:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki2yt-000777-M9; Wed, 25 Nov 2020 22:09:43 +0000
Received: by outflank-mailman (input) for mailman id 38113;
 Wed, 25 Nov 2020 22:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ecWo=E7=google.com=ndesaulniers@srs-us1.protection.inumbo.net>)
 id 1ki2ys-00076s-K6
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 22:09:42 +0000
Received: from mail-pl1-x643.google.com (unknown [2607:f8b0:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa2d5c0f-ef6a-4794-90b8-6698eddd0c42;
 Wed, 25 Nov 2020 22:09:41 +0000 (UTC)
Received: by mail-pl1-x643.google.com with SMTP id r2so100453pls.3
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 14:09:41 -0800 (PST)
X-Inumbo-ID: fa2d5c0f-ef6a-4794-90b8-6698eddd0c42
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=/ob3FEJdYP5qhdHPeLajEb6PUDm9frxLF1mdXzRdH3M=;
        b=L3KZpP3LXxZlW/DP5AmTnD4MuZ72/rcy+coOeUVeCNTiwrWtzoVQx/hBMoKUsqDMm0
         Foet9vcyJsJZ2PSgueU+Q+MvDj0KUWyeWX5guwBWRMdbnXZ08cX56htqXhsZrxHUuRIn
         vYFnIpmbgrasg1vSD5AH5hKwOy9NejMhVovtjBrzY+2P0ug3w3+5RlYsOk03C83VD9da
         OdGtf9c6uotPHdYcmuyOqbnF5k3fP8Yo6l5eFDVR7n6m5okfFX1oHlXNOB3u6UsnrVGE
         wa6MluYkHn1XOYZTDyrE2KQmcun7Qe6vVmhaYurLDnZVMQG/lFJsGakXcuEY3+Obie2U
         RVfA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=/ob3FEJdYP5qhdHPeLajEb6PUDm9frxLF1mdXzRdH3M=;
        b=UKipMwaIe/pjioSe4V59qK5xjkQU0HXX2JWVNV2htspMvNhDbi/9yTa2aRK1hWmk+6
         NAdC6dQF5H5Rb3iCUb5FCUgYU5BsX4+Hupi+f4N6y8O4jxEXBih0zyl5PtzZ+iCVpgjR
         n/hljEYH8i8jHkzx37oLeWb46pmdP8Lt7U4lrhw8GDQZY762kh45vy0X4sKy+YbZDzpR
         s8/tApAZWGXnZZQMe5nwTMJXK2Y4+Kv55VyN0g2Au+uibIWOMSP8+T5T/fwppWa37V+O
         JhOy48E/VphHthkRXc1cyV70Jceruvwr78yulqucTmNjcdZzc/hZUVY0TJ+2Q8ZIRAo5
         DAcQ==
X-Gm-Message-State: AOAM532WjwJI2GA5EC/D7HJjzSfwMoj+AFaf4w2PuThIpqux/a59WIsk
	rdJXmItwFvC5zTbKi/gQVAsU2j9/O1ixs7v7BuHlZw==
X-Google-Smtp-Source: ABdhPJz8lLt/1COGEMpWkFDfyIK6h7HwbxIAa+xtHaAqLUfnjsVh22wexUgkVgdtXLczfRnuDI51yr8PXCWqKuZKul4=
X-Received: by 2002:a17:902:c14a:b029:d8:dc05:d7ef with SMTP id
 10-20020a170902c14ab02900d8dc05d7efmr4885886plj.83.1606342180784; Wed, 25 Nov
 2020 14:09:40 -0800 (PST)
MIME-Version: 1.0
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com>
 <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com>
 <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com>
 <20201123130348.GA3119@embeddedor> <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com>
 <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com>
 <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com> <alpine.LNX.2.23.453.2011260750300.6@nippy.intranet>
In-Reply-To: <alpine.LNX.2.23.453.2011260750300.6@nippy.intranet>
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Wed, 25 Nov 2020 14:09:29 -0800
Message-ID: <CAKwvOdna5Zj_O=sB7Q0jHZX0BJSaakX=ZyftwQ_3=L3-ZB54XQ@mail.gmail.com>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for Clang
To: Finn Thain <fthain@telegraphics.com.au>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
	Kees Cook <keescook@chromium.org>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, alsa-devel@alsa-project.org, 
	linux-atm-general@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
	LKML <linux-kernel@vger.kernel.org>, Nathan Chancellor <natechancellor@gmail.com>, 
	linux-ide@vger.kernel.org, dm-devel@redhat.com, keyrings@vger.kernel.org, 
	linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
	linux-security-module@vger.kernel.org, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org, 
	Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
	linux-ext4@vger.kernel.org, linux-media@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
	linux-arm-msm <linux-arm-msm@vger.kernel.org>, intel-gfx@lists.freedesktop.org, 
	linux-geode@lists.infradead.org, linux-can@vger.kernel.org, 
	linux-block@vger.kernel.org, linux-gpio@vger.kernel.org, 
	op-tee@lists.trustedfirmware.org, linux-mediatek@lists.infradead.org, 
	xen-devel@lists.xenproject.org, nouveau@lists.freedesktop.org, 
	linux-hams@vger.kernel.org, ceph-devel@vger.kernel.org, 
	virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
	Linux Memory Management List <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-sctp@vger.kernel.org, 
	linux-usb@vger.kernel.org, netfilter-devel@vger.kernel.org, 
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	linux-integrity@vger.kernel.org, target-devel@vger.kernel.org, 
	linux-hardening@vger.kernel.org, 
	Jonathan Cameron <Jonathan.Cameron@huawei.com>, Greg KH <gregkh@linuxfoundation.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 25, 2020 at 1:33 PM Finn Thain <fthain@telegraphics.com.au> wrote:
>
> Or do you think that a codebase can somehow satisfy multiple checkers and
> their divergent interpretations of the language spec?

Have we found any cases yet that are divergent? I don't think so.  It
sounds to me like the cases GCC warns for are a subset of Clang's.
Additional coverage from Clang should therefore ensure coverage for
both.

> > This is not a shiny new warning; it's already on for GCC and has existed
> > in both compilers for multiple releases.
> >
>
> Perhaps you're referring to the compiler feature that led to the
> ill-fated, tree-wide /* fallthrough */ patch series.
>
> When the ink dries on the C23 language spec and the implementations figure
> out how to interpret it, then sure, enforce the warning for new code -- the
> cost/benefit analysis is straightforward. However, the case for patching
> existing mature code is another story.

I don't think we need to wait for the ink to dry on the C23 language
spec to understand that implicit fallthrough is an obvious defect of
the C language.  While the kernel is a mature codebase, it's not
immune to bugs.  And its maturity has yet to slow its rapid pace of
development.
-- 
Thanks,
~Nick Desaulniers


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 22:19:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 22:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38125.70780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki37o-00088p-Mm; Wed, 25 Nov 2020 22:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38125.70780; Wed, 25 Nov 2020 22:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki37o-00088i-JZ; Wed, 25 Nov 2020 22:18:56 +0000
Received: by outflank-mailman (input) for mailman id 38125;
 Wed, 25 Nov 2020 22:18:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki37n-00088a-CG; Wed, 25 Nov 2020 22:18:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki37n-0002iU-2M; Wed, 25 Nov 2020 22:18:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki37m-0006vr-Pp; Wed, 25 Nov 2020 22:18:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ki37m-0004Wu-PI; Wed, 25 Nov 2020 22:18:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki37n-00088a-CG; Wed, 25 Nov 2020 22:18:55 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fB9+FZ8AVdDiNoBLN8pNIGRTLBtX2tjoT68x2cHcq4Q=; b=vPJnr0epNVooXrbel8SX0u+jBX
	9FnZXYVSn3V3i0/DyOSufeIQzCDYs9h1tjVuGzM7gL9AcSSOPeh0TG21imBU8Hsd9gufM1btNZrKA
	ea1JjDgUW8Zz7mKOeQYc6t5yYXJSh8MFa29rhtjnI3fXVFhPX53NFJCrOrTBRBHfcs84=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki37n-0002iU-2M; Wed, 25 Nov 2020 22:18:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki37m-0006vr-Pp; Wed, 25 Nov 2020 22:18:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki37m-0004Wu-PI; Wed, 25 Nov 2020 22:18:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157000-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157000: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=511013b57b50da7c800967cd990f8ae1ad5fa948
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 22:18:54 +0000

flight 157000 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157000/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              511013b57b50da7c800967cd990f8ae1ad5fa948
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  138 days
Failing since        151818  2020-07-11 04:18:52 Z  137 days  132 attempts
Testing same since   157000  2020-11-25 04:19:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29486 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 23:12:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 23:12:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38141.70801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki3xk-0004zF-8w; Wed, 25 Nov 2020 23:12:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38141.70801; Wed, 25 Nov 2020 23:12:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki3xk-0004z8-5n; Wed, 25 Nov 2020 23:12:36 +0000
Received: by outflank-mailman (input) for mailman id 38141;
 Wed, 25 Nov 2020 23:12:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki3xj-0004z0-Cr; Wed, 25 Nov 2020 23:12:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki3xj-0003ny-8C; Wed, 25 Nov 2020 23:12:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki3xj-0008WQ-05; Wed, 25 Nov 2020 23:12:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ki3xi-0004ID-Vs; Wed, 25 Nov 2020 23:12:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki3xj-0004z0-Cr; Wed, 25 Nov 2020 23:12:35 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yjovzm1IGK6NBPJvSk4jPqGWIR6cjZC4+X4FdUO75cw=; b=g61i3hM0o/+VZbgtqPxXpKnJYv
	AbLwhwIFQDvf14sQUF376jkXi8tXJw4Wxl0okP8L1vRPsSaDGL5Vdfm5cp/VGJRw9IxM8NgOgAgk+
	uYCPoFynrWELVSX0iP9aiEtrA/vUz7WjJ1tX62ylvtIEZnnFc5srCuQKXOWxKjsBxXxA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki3xj-0003ny-8C; Wed, 25 Nov 2020 23:12:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki3xj-0008WQ-05; Wed, 25 Nov 2020 23:12:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1ki3xi-0004ID-Vs; Wed, 25 Nov 2020 23:12:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157012-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157012: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=388f9a9355ae7ab95a34ac2e9a3c608caf11b74a
X-Osstest-Versions-That:
    ovmf=e7bd0dd26db7e56aa8ca70132d6ea916ee6f3db0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 23:12:34 +0000

flight 157012 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157012/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 388f9a9355ae7ab95a34ac2e9a3c608caf11b74a
baseline version:
 ovmf                 e7bd0dd26db7e56aa8ca70132d6ea916ee6f3db0

Last test of basis   156920  2020-11-21 09:10:55 Z    4 days
Testing same since   157012  2020-11-25 18:09:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Sean Brogan <sean.brogan@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e7bd0dd26d..388f9a9355  388f9a9355ae7ab95a34ac2e9a3c608caf11b74a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 23:22:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 23:22:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38151.70816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki46q-0005zE-7T; Wed, 25 Nov 2020 23:22:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38151.70816; Wed, 25 Nov 2020 23:22:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki46q-0005z7-3r; Wed, 25 Nov 2020 23:22:00 +0000
Received: by outflank-mailman (input) for mailman id 38151;
 Wed, 25 Nov 2020 23:21:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NWac=E7=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1ki46p-0005z2-Ht
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 23:21:59 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3b4109bb-4bcd-4f48-b1da-519d2fd91930;
 Wed, 25 Nov 2020 23:21:58 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id 414EA2A490;
 Wed, 25 Nov 2020 18:21:54 -0500 (EST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=NWac=E7=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
	id 1ki46p-0005z2-Ht
	for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 23:21:59 +0000
X-Inumbo-ID: 3b4109bb-4bcd-4f48-b1da-519d2fd91930
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 3b4109bb-4bcd-4f48-b1da-519d2fd91930;
	Wed, 25 Nov 2020 23:21:58 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
	by kvm5.telegraphics.com.au (Postfix) with ESMTP id 414EA2A490;
	Wed, 25 Nov 2020 18:21:54 -0500 (EST)
Date: Thu, 26 Nov 2020 10:21:54 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Nick Desaulniers <ndesaulniers@google.com>
cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
    Kees Cook <keescook@chromium.org>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, 
    alsa-devel@alsa-project.org, linux-atm-general@lists.sourceforge.net, 
    reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
    LKML <linux-kernel@vger.kernel.org>, 
    Nathan Chancellor <natechancellor@gmail.com>, linux-ide@vger.kernel.org, 
    dm-devel@redhat.com, keyrings@vger.kernel.org, 
    linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
    wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
    linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
    linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
    drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
    linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
    linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
    oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
    linux-security-module@vger.kernel.org, 
    amd-gfx list <amd-gfx@lists.freedesktop.org>, 
    linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com, 
    linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
    intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org, 
    Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
    linux-ext4@vger.kernel.org, linux-media@vger.kernel.org, 
    linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
    linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
    intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
    linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
    linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
    linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
    nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
    ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-hwmon@vger.kernel.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
    Linux Memory Management List <linux-mm@kvack.org>, 
    Network Development <netdev@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
    Linux-Renesas <linux-renesas-soc@vger.kernel.org>, 
    linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, 
    netfilter-devel@vger.kernel.org, 
    "open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, 
    patches@opensource.cirrus.com, linux-integrity@vger.kernel.org, 
    target-devel@vger.kernel.org, linux-hardening@vger.kernel.org, 
    Jonathan Cameron <Jonathan.Cameron@huawei.com>, 
    Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
In-Reply-To: <CAKwvOdna5Zj_O=sB7Q0jHZX0BJSaakX=ZyftwQ_3=L3-ZB54XQ@mail.gmail.com>
Message-ID: <alpine.LNX.2.23.453.2011260918510.6@nippy.intranet>
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com> <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com> <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com> <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com> <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com> <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com>
 <alpine.LNX.2.23.453.2011260750300.6@nippy.intranet> <CAKwvOdna5Zj_O=sB7Q0jHZX0BJSaakX=ZyftwQ_3=L3-ZB54XQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 25 Nov 2020, Nick Desaulniers wrote:

> On Wed, Nov 25, 2020 at 1:33 PM Finn Thain <fthain@telegraphics.com.au> 
> wrote:
> >
> > Or do you think that a codebase can somehow satisfy multiple checkers 
> > and their divergent interpretations of the language spec?
> 
> Have we found any cases yet that are divergent? I don't think so.

There are many implementations, so I think you are guaranteed to find more 
divergence if you look. That's because the spec is full of language like 
this: "implementations are encouraged not to emit a diagnostic" and 
"implementations are encouraged to issue a diagnostic".

Some implementations will decide not to emit (under the premise that vast 
amounts of existing code would have to get patched until the compiler goes 
quiet), whereas other implementations will decide to emit (under the 
premise that the author is doing the checking and not the janitor or the 
packager).

> It sounds to me like GCC's cases it warns for is a subset of Clang's. 
> Having additional coverage with Clang then should ensure coverage for 
> both.
> 

If that claim were true, the solution would be simple. (It's not.)

For the benefit of projects that enable -Werror and projects that 
nominated gcc as their preferred compiler, clang would simply need a flag 
to enable conformance with gcc by suppressing those additional warnings 
that clang would normally produce.

This simple solution is, of course, completely unworkable, since it would 
force clang to copy some portion of gcc's logic (rewritten under LLVM's 
unique license) and then to track future changes to that portion of gcc 
indefinitely.


From xen-devel-bounces@lists.xenproject.org Wed Nov 25 23:27:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 25 Nov 2020 23:27:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38159.70828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki4Bn-0006C8-RJ; Wed, 25 Nov 2020 23:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38159.70828; Wed, 25 Nov 2020 23:27:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki4Bn-0006C1-OE; Wed, 25 Nov 2020 23:27:07 +0000
Received: by outflank-mailman (input) for mailman id 38159;
 Wed, 25 Nov 2020 23:27:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki4Bm-0006BS-AO; Wed, 25 Nov 2020 23:27:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki4Bm-00046M-2y; Wed, 25 Nov 2020 23:27:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki4Bl-0000dp-MS; Wed, 25 Nov 2020 23:27:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ki4Bl-0000x0-Lv; Wed, 25 Nov 2020 23:27:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3gbE5hI5EA6Fl/dGLqjFFMGOYvjYFDeg0BqhSQ/Sfwc=; b=jsHtGTkAVS98yMUTptbStYBwzm
	WvrtKU4poPh4RiQ+yk2Cmj1eR5GSQ/T1vBr33CzlgzeBGs3+L6anknhH8cmECdZ5AlSlY5ltoMMx0
	uyV0ZGXp/XLRDgj8xZ3WETcu5Kuq7mS3XuPa0awDokoI+wGab20lMh5rvlQPEfXoLjtk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156992-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156992: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8147e00e4fbfcc43b665dc6bf279b204c501ba04
X-Osstest-Versions-That:
    xen=b659a5cebd611dbe698e63c03485b5fe8cd964ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 25 Nov 2020 23:27:05 +0000

flight 156992 xen-unstable real [real]
flight 157014 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156992/
http://logs.test-lab.xenproject.org/osstest/logs/157014/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append  fail pass in 157014-retest
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail pass in 157014-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156975
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156975
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156975
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156975
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156975
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156975
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156975
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 156975
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156975
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156975
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156975
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156975
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8147e00e4fbfcc43b665dc6bf279b204c501ba04
baseline version:
 xen                  b659a5cebd611dbe698e63c03485b5fe8cd964ad

Last test of basis   156975  2020-11-24 01:51:25 Z    1 days
Testing same since   156992  2020-11-24 14:07:39 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <JBeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b659a5cebd..8147e00e4f  8147e00e4fbfcc43b665dc6bf279b204c501ba04 -> master


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 00:00:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 00:00:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38177.70871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki4hq-0001rU-8L; Thu, 26 Nov 2020 00:00:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38177.70871; Thu, 26 Nov 2020 00:00:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki4hq-0001rN-50; Thu, 26 Nov 2020 00:00:14 +0000
Received: by outflank-mailman (input) for mailman id 38177;
 Thu, 26 Nov 2020 00:00:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aPbE=FA=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ki4ho-0001rI-0s
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 00:00:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c45a2d0-4e66-4b8b-badb-8016e347a3e2;
 Thu, 26 Nov 2020 00:00:11 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2C55520B1F;
 Thu, 26 Nov 2020 00:00:10 +0000 (UTC)
X-Inumbo-ID: 0c45a2d0-4e66-4b8b-badb-8016e347a3e2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606348810;
	bh=s6UNnrvgotCQH11T/faRa6XXhOPUDjwt92Z2nqFk6IA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Snd9j7kpREGUTJT/AWkI1ARQ8hx28eFhePdpO0aPEsjTOnIed5Q8wL64jtyjmynmH
	 0QShDqKUiWoa7vzSFz24EZ+Iuztzfg/7/wHti84QML9+BifIqEPQwymRUp40S/Sr8w
	 2/mkZ+EHzc0Q1erRXE2YWVnhWtfceDWNlXswaPlw=
Date: Wed, 25 Nov 2020 16:00:09 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Wei Chen <Wei.Chen@arm.com>
cc: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "sstabellini@kernel.org" <sstabellini@kernel.org>, 
    Andre Przywara <Andre.Przywara@arm.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, 
    nd <nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
In-Reply-To: <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
References: <20201109082110.1133996-1-penny.zheng@arm.com> <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org> <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Resuming this old thread.

On Fri, 13 Nov 2020, Wei Chen wrote:
> > Hi,
> > 
> > On 09/11/2020 08:21, Penny Zheng wrote:
> > > A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
> > > might return a wrong value when the counter crosses a 32-bit boundary.
> > >
> > > Until now there has been no case of Xen itself accessing CNTVCT_EL0,
> > > and handling that side should in any case be the guest OS's
> > > responsibility.
> > >
> > > CNTPCT, however, is read in several places in Xen, so a possible
> > > workaround is to perform the read twice and return one value or the
> > > other depending on whether a transition has taken place.
> > >
> > > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > 
> > Acked-by: Julien Grall <jgrall@amazon.com>
> > 
> > On a related topic, do we need a fix similar to Linux commit
> > 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
> > with seqlock held"?
> > 
> 
> I took a look at this Linux commit; it seems to prevent the counter read
> from being speculated past the seqlock. Using an enforced-ordering
> sequence instead of an ISB after the counter read seems to be for
> performance reasons.
> 
> I see that you placed an ISB before the counter read in get_cycles to
> prevent the counter value from being read speculatively, but there is no
> second ISB after the read. Is that because get_cycles is not used in any
> seqlock critical section (or maybe I didn't find the right place)? So do
> we need to fix this now, or would you prefer to fix it proactively for
> future usage?

Looking at the call sites, it doesn't look like we need an ISB after
get_cycles, as it is not used in any critical context. There is also a
data dependency on the value it returns.

So I am thinking we don't need any fix. At most an in-code comment?
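
[For reference on the data-dependency point: my understanding is that the
Linux commit mentioned above, 75a19a0202db, keeps the leading ISB but
replaces a trailing ISB with an "enforce ordering" sequence that builds a
fake address dependency on the counter value. A sketch follows; it is
aarch64-only, adapted from that understanding of Linux's
arch_counter_enforce_ordering, and is not Xen code.]

```c
/* aarch64 only -- will not build or run on other architectures. */
static inline uint64_t read_cntvct_ordered(void)
{
    uint64_t cval, tmp;

    /* ISB before the read: the counter cannot be sampled speculatively
     * before earlier instructions (e.g. a seqlock sequence read). */
    asm volatile("isb" : : : "memory");
    asm volatile("mrs %0, cntvct_el0" : "=r" (cval));

    /* Cheaper than a second ISB: the eor yields zero but depends on
     * cval, and the dummy load from [sp, #0] cannot issue until cval
     * is available, so later loads (such as the seqlock re-check)
     * cannot be reordered before the counter read. */
    asm volatile("eor %0, %1, %1\n\t"
                 "ldr xzr, [sp, %0]"
                 : "=r" (tmp) : "r" (cval));
    return cval;
}
```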


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 00:27:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 00:27:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38185.70883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki57m-0003zn-Aw; Thu, 26 Nov 2020 00:27:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38185.70883; Thu, 26 Nov 2020 00:27:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki57m-0003zg-7t; Thu, 26 Nov 2020 00:27:02 +0000
Received: by outflank-mailman (input) for mailman id 38185;
 Thu, 26 Nov 2020 00:27:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aPbE=FA=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ki57k-0003zb-Sw
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 00:27:00 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7de5fef2-b3f8-4454-835e-663d5b0f776b;
 Thu, 26 Nov 2020 00:27:00 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 45ECF20872;
 Thu, 26 Nov 2020 00:26:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606350419;
	bh=8MznRGGIWMF1T443v1SevCrhPebJV8D4im1mfsrrw/c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qNC4oe7N7JDh+YUMplFm0CIeQ7GCFk4iNuwOPk2CxHZj8N1f9AewZQ941MU27zc30
	 UMDMZ+yh31evXGlQRktuf2UhpmvDFkqDC4VoEuPDJJJwL7p+GEJg2p73sKGYKKfU7h
	 QJ9vmn37ZP0pXUCD2+qGgD5xPle25MzvKjWQGCyw=
Date: Wed, 25 Nov 2020 16:26:58 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: andrew.cooper3@citrix.com, cardoe@cardoe.com, wl@xen.org, 
    xen-devel@lists.xenproject.org, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v3 07/12] automation: add alpine linux x86 build jobs
In-Reply-To: <20201125042745.31986-7-sstabellini@kernel.org>
Message-ID: <alpine.DEB.2.21.2011251624410.7979@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2011241722540.7979@sstabellini-ThinkPad-T480s> <20201125042745.31986-7-sstabellini@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 24 Nov 2020, Stefano Stabellini wrote:
> Allow failure for these jobs. Currently they fail because hvmloader
> doesn't build with musl. The failures don't block the pipeline.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
> This patch is probably required: https://github.com/alpinelinux/aports/blob/master/main/xen/musl-hvmloader-fix-stdint.patch


We could simply disable the hvmloader build when it is not necessary:
when OVMF, SeaBIOS, and ROMBIOS are all disabled, it doesn't make sense
to build hvmloader. If this assumption is correct, then the patch below
fixes the Alpine Linux build (as long as we pass --disable-seabios and
--disable-rombios appropriately).


diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 48bd9ab731..a6aada576f 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -55,6 +55,15 @@ CONFIG_QEMU_XEN     := @qemu_xen@
 CONFIG_QEMUU_EXTRA_ARGS:= @EXTRA_QEMUU_CONFIGURE_ARGS@
 CONFIG_LIBNL        := @libnl@
 CONFIG_GOLANG       := @golang@
+ifeq ($(CONFIG_ROMBIOS),y)
+CONFIG_FIRMWARE=y
+endif
+ifeq ($(CONFIG_SEABIOS),y)
+CONFIG_FIRMWARE=y
+endif
+ifeq ($(CONFIG_OVMF),y)
+CONFIG_FIRMWARE=y
+endif
 
 CONFIG_SYSTEMD      := @systemd@
 SYSTEMD_CFLAGS      := @SYSTEMD_CFLAGS@
diff --git a/tools/Makefile b/tools/Makefile
index ed71474421..9821a7f5d5 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -14,7 +14,7 @@ SUBDIRS-y += examples
 SUBDIRS-y += hotplug
 SUBDIRS-y += xentrace
 SUBDIRS-$(CONFIG_XCUTILS) += xcutils
-SUBDIRS-$(CONFIG_X86) += firmware
+SUBDIRS-$(CONFIG_FIRMWARE) += firmware
 SUBDIRS-y += console
 SUBDIRS-y += xenmon
 SUBDIRS-y += xentop
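
As an aside, the three ifeq blocks above could equally be collapsed into a
single assignment with $(filter); a sketch, untested against the actual Xen
configure/build plumbing:

```make
# y if any of the three firmware options is enabled, empty otherwise.
CONFIG_FIRMWARE := $(if $(filter y,$(CONFIG_ROMBIOS) $(CONFIG_SEABIOS) $(CONFIG_OVMF)),y)
```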


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 00:30:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 00:30:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38192.70895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki5BL-0004t6-Ro; Thu, 26 Nov 2020 00:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38192.70895; Thu, 26 Nov 2020 00:30:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki5BL-0004sz-Oi; Thu, 26 Nov 2020 00:30:43 +0000
Received: by outflank-mailman (input) for mailman id 38192;
 Thu, 26 Nov 2020 00:30:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fGtI=FA=telegraphics.com.au=fthain@srs-us1.protection.inumbo.net>)
 id 1ki5BK-0004sp-4D
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 00:30:42 +0000
Received: from kvm5.telegraphics.com.au (unknown [98.124.60.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 618b5edf-54f4-4122-8aeb-484d0aac71b8;
 Thu, 26 Nov 2020 00:30:41 +0000 (UTC)
Received: from localhost (localhost.localdomain [127.0.0.1])
 by kvm5.telegraphics.com.au (Postfix) with ESMTP id 742A42A495;
 Wed, 25 Nov 2020 19:30:37 -0500 (EST)
Date: Thu, 26 Nov 2020 11:30:36 +1100 (AEDT)
From: Finn Thain <fthain@telegraphics.com.au>
To: Nick Desaulniers <ndesaulniers@google.com>
cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
    Kees Cook <keescook@chromium.org>, 
    "Gustavo A. R. Silva" <gustavoars@kernel.org>, 
    Joe Perches <joe@perches.com>, Jakub Kicinski <kuba@kernel.org>, 
    alsa-devel@alsa-project.org, linux-atm-general@lists.sourceforge.net, 
    reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
    linux-wireless <linux-wireless@vger.kernel.org>, 
    linux-fbdev@vger.kernel.org, dri-devel <dri-devel@lists.freedesktop.org>, 
    LKML <linux-kernel@vger.kernel.org>, 
    Nathan Chancellor <natechancellor@gmail.com>, linux-ide@vger.kernel.org, 
    dm-devel@redhat.com, keyrings@vger.kernel.org, 
    linux-mtd@lists.infradead.org, GR-everest-linux-l2@marvell.com, 
    wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
    linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
    linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
    drbd-dev@lists.linbit.com, devel@driverdev.osuosl.org, 
    linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
    linux-scsi@vger.kernel.org, linux-rdma@vger.kernel.org, 
    oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
    linux-security-module@vger.kernel.org, 
    amd-gfx list <amd-gfx@lists.freedesktop.org>, 
    linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com, 
    linux-acpi@vger.kernel.org, coreteam@netfilter.org, 
    intel-wired-lan@lists.osuosl.org, linux-input@vger.kernel.org, 
    Miguel Ojeda <ojeda@kernel.org>, tipc-discussion@lists.sourceforge.net, 
    linux-ext4@vger.kernel.org, linux-media@vger.kernel.org, 
    linux-watchdog@vger.kernel.org, selinux@vger.kernel.org, 
    linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
    intel-gfx@lists.freedesktop.org, linux-geode@lists.infradead.org, 
    linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
    linux-gpio@vger.kernel.org, op-tee@lists.trustedfirmware.org, 
    linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
    nouveau@lists.freedesktop.org, linux-hams@vger.kernel.org, 
    ceph-devel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
    Linux ARM <linux-arm-kernel@lists.infradead.org>, 
    linux-hwmon@vger.kernel.org, 
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, 
    linux-nfs@vger.kernel.org, GR-Linux-NIC-Dev@marvell.com, 
    Linux Memory Management List <linux-mm@kvack.org>, 
    Network Development <netdev@vger.kernel.org>, 
    linux-decnet-user@lists.sourceforge.net, linux-mmc@vger.kernel.org, 
    Linux-Renesas <linux-renesas-soc@vger.kernel.org>, 
    linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, 
    netfilter-devel@vger.kernel.org, 
    "open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, 
    patches@opensource.cirrus.com, linux-integrity@vger.kernel.org, 
    target-devel@vger.kernel.org, linux-hardening@vger.kernel.org, 
    Jonathan Cameron <Jonathan.Cameron@huawei.com>, 
    Greg KH <gregkh@linuxfoundation.org>
Subject: Re: [Intel-wired-lan] [PATCH 000/141] Fix fall-through warnings for
 Clang
In-Reply-To: <CAKwvOdna5Zj_O=sB7Q0jHZX0BJSaakX=ZyftwQ_3=L3-ZB54XQ@mail.gmail.com>
Message-ID: <alpine.LNX.2.23.453.2011261031290.6@nippy.intranet>
References: <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com> <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com> <ca071decb87cc7e905411423c05a48f9fd2f58d7.camel@perches.com>
 <0147972a72bc13f3629de8a32dee6f1f308994b5.camel@HansenPartnership.com> <d8d1e9add08cdd4158405e77762d4946037208f8.camel@perches.com> <dbd2cb703ed9eefa7dde9281ea26ab0f7acc8afe.camel@HansenPartnership.com> <20201123130348.GA3119@embeddedor>
 <8f5611bb015e044fa1c0a48147293923c2d904e4.camel@HansenPartnership.com> <202011241327.BB28F12F6@keescook> <a841536fe65bb33f1c72ce2455a6eb47a0107565.camel@HansenPartnership.com> <CAKwvOdkGBn7nuWTAqrORMeN1G+w3YwBfCqqaRD2nwvoAXKi=Aw@mail.gmail.com>
 <alpine.LNX.2.23.453.2011260750300.6@nippy.intranet> <CAKwvOdna5Zj_O=sB7Q0jHZX0BJSaakX=ZyftwQ_3=L3-ZB54XQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII



On Wed, 25 Nov 2020, Nick Desaulniers wrote:

> On Wed, Nov 25, 2020 at 1:33 PM Finn Thain <fthain@telegraphics.com.au> wrote:
> >
> > Or do you think that a codebase can somehow satisfy multiple checkers 
> > and their divergent interpretations of the language spec?
> 
> Have we found any cases yet that are divergent? I don't think so. 

You mean, aside from -Wimplicit-fallthrough? I'm glad you asked. How about 
-Wincompatible-pointer-types and -Wframe-larger-than?

All of the following files have been affected by divergent diagnostics 
produced by clang and gcc.

arch/arm64/include/asm/neon-intrinsics.h
arch/powerpc/xmon/Makefile
drivers/gpu/drm/i915/Makefile
drivers/gpu/drm/i915/i915_utils.h
drivers/staging/media/atomisp/pci/atomisp_subdev.c
fs/ext4/super.c
include/trace/events/qla.h
net/mac80211/rate.c
tools/lib/string.c
tools/perf/util/setup.py
tools/scripts/Makefile.include

And if I searched for 'smatch' or 'coverity' instead of 'clang' I'd 
probably find more divergence.

Here are some of the relevant commits.

0738c8b5915c7eaf1e6007b441008e8f3b460443
9c87156cce5a63735d1218f0096a65c50a7a32aa
babaab2f473817f173a2d08e410c25abf5ed0f6b
065e5e559555e2f100bc95792a8ef1b609bbe130
93f56de259376d7e4fff2b2d104082e1fa66e237
6c4798d3f08b81c2c52936b10e0fa872590c96ae
b7a313d84e853049062011d78cb04b6decd12f5c
093b75ef5995ea35d7f6bdb6c7b32a42a1999813

And before you object, "but -Wconstant-logical-operand is a clang-only 
warning! it can't be divergent with gcc!", consider that the special cases 
added to deal with clang-only warnings have to be removed when gcc catches 
up, which is more churn. Now multiply that by the number of checkers you 
care about.


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 01:11:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 01:11:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38203.70912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki5oZ-0004fI-5z; Thu, 26 Nov 2020 01:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38203.70912; Thu, 26 Nov 2020 01:11:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki5oZ-0004fB-2d; Thu, 26 Nov 2020 01:11:15 +0000
Received: by outflank-mailman (input) for mailman id 38203;
 Thu, 26 Nov 2020 01:11:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87l1=FA=gmail.com=minchan.kim@srs-us1.protection.inumbo.net>)
 id 1ki5oX-0004f4-Gg
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 01:11:13 +0000
Received: from mail-pg1-x544.google.com (unknown [2607:f8b0:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad042a60-b2d8-4af1-b931-29d9f30633ea;
 Thu, 26 Nov 2020 01:11:12 +0000 (UTC)
Received: by mail-pg1-x544.google.com with SMTP id 62so254669pgg.12
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 17:11:12 -0800 (PST)
Received: from google.com (c-67-188-94-199.hsd1.ca.comcast.net.
 [67.188.94.199])
 by smtp.gmail.com with ESMTPSA id s10sm3915048pjn.35.2020.11.25.17.11.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Nov 2020 17:11:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=f8bMvE80grJsQDxQbN+SXa4Ok4avPUhhCBLg+N5g7uE=;
        b=CJdcPtV0FUzyxFptDYgR6iAeEpy/fm/U+qwm7MjSAUITShNs3IGjSdma/ci1d+UdB1
         /4QBGWKw7GesxoQFO7vGWULXUqZeTgSL72KBwgOOTOpdW6FEjFkGl8KvyOmWfu5zn3C8
         2D8lQB9/sXcZGuetWYq/y9fJ/4tVoCfZkFM+w/twGxXZf6jIBGOeoFgX3V3eoJ4SxFtL
         DMlA1WjSzrfwym46BYlZOU4NNlAHgEZUyXcv6V6vuCHVbSrKNoE7MZTnCkoIzELetkkK
         3iGfzFOaAaJYntoqnGECTiJfY05IPhrPMzUEVWqaaAGWlk8VKPKZsqmpzgCIg3k/819j
         bQTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=f8bMvE80grJsQDxQbN+SXa4Ok4avPUhhCBLg+N5g7uE=;
        b=tnfFnW8l/I9ZXlMOsV+Y+5VAlLsiR4XxkU/g/NKWk0SAdFSWASVu+weMyCml6SnYn+
         QCOjQOrwLzITGR0g/0h7hTnEvTsT1Ut3N98hygteH5qtv7ce1ah4oPg2Rh7MqRet+Yoj
         4yAoX2/hUjnudA1WYSJia860iV7ZdOeifyp1kREM/0R6P4k9LB9PzkKb6eLeIXEDc28l
         sA4/3x0hZvDoYnLIwVteSrAKWIHhVc4l4o1yqzoSsadCk7zx2DDbjEyPR4PdMAcMMbCO
         u3kbXH7OXXjjqjgRkjLvxWsMSuPaigfnOt2N+M3+QoT9J/aXYIXQUI2Pd/aNiDIVVeCX
         5lLQ==
X-Gm-Message-State: AOAM532AK7keB/q1xZs7OdOSKKrieB44udAiU4YEL/NTpZgl6535jjuv
	NbTr6TpFqnJVVRkIKAM09YI=
X-Google-Smtp-Source: ABdhPJy2rvS54nam6WzB3Gos6rktAfztTZ/apost62cNhHmAqWfXtcE8/JRGjAG0GRNN0NDnAydY0Q==
X-Received: by 2002:a17:90a:8909:: with SMTP id u9mr643556pjn.100.1606353071628;
        Wed, 25 Nov 2020 17:11:11 -0800 (PST)
Received: from google.com (c-67-188-94-199.hsd1.ca.comcast.net. [67.188.94.199])
        by smtp.gmail.com with ESMTPSA id s10sm3915048pjn.35.2020.11.25.17.11.08
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 25 Nov 2020 17:11:10 -0800 (PST)
Sender: Minchan Kim <minchan.kim@gmail.com>
Date: Wed, 25 Nov 2020 17:11:07 -0800
From: Minchan Kim <minchan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: [PATCH 60/78] zram: remove the claim mechanism
Message-ID: <20201126011107.GA57352@google.com>
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-61-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201116145809.410558-61-hch@lst.de>

On Mon, Nov 16, 2020 at 03:57:51PM +0100, Christoph Hellwig wrote:
> The zram claim mechanism was added to ensure no new opens come in
> during teardown.  But the proper way to achieve that is to call
> del_gendisk first, which takes care of all that.  Once del_gendisk
> is called in the right place, the reset side can also be simplified
> as no I/O can be outstanding on a block device that is not open.

It would be great if this simplifies the mess. I do have a question,
though; please see below.

> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/zram/zram_drv.c | 76 ++++++++++-------------------------
>  1 file changed, 21 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 6d15d51cee2b7e..3641434a9b154d 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1756,64 +1756,33 @@ static ssize_t disksize_store(struct device *dev,
>  static ssize_t reset_store(struct device *dev,
>  		struct device_attribute *attr, const char *buf, size_t len)
>  {
> -	int ret;
> -	unsigned short do_reset;
> -	struct zram *zram;
> +	struct zram *zram = dev_to_zram(dev);
>  	struct block_device *bdev;
> +	unsigned short do_reset;
> +	int ret = 0;
>  
>  	ret = kstrtou16(buf, 10, &do_reset);
>  	if (ret)
>  		return ret;
> -
>  	if (!do_reset)
>  		return -EINVAL;
>  
> -	zram = dev_to_zram(dev);
>  	bdev = bdget_disk(zram->disk, 0);
>  	if (!bdev)
>  		return -ENOMEM;
>  
>  	mutex_lock(&bdev->bd_mutex);
> -	/* Do not reset an active device or claimed device */
> -	if (bdev->bd_openers || zram->claim) {
> -		mutex_unlock(&bdev->bd_mutex);
> -		bdput(bdev);
> -		return -EBUSY;
> -	}
> -
> -	/* From now on, anyone can't open /dev/zram[0-9] */
> -	zram->claim = true;
> +	if (bdev->bd_openers)
> +		ret = -EBUSY;
> +	else
> +		zram_reset_device(zram);
>  	mutex_unlock(&bdev->bd_mutex);
> -
> -	/* Make sure all the pending I/O are finished */
> -	fsync_bdev(bdev);
> -	zram_reset_device(zram);
>  	bdput(bdev);
>  
> -	mutex_lock(&bdev->bd_mutex);
> -	zram->claim = false;
> -	mutex_unlock(&bdev->bd_mutex);
> -
> -	return len;
> -}
> -
> -static int zram_open(struct block_device *bdev, fmode_t mode)
> -{
> -	int ret = 0;
> -	struct zram *zram;
> -
> -	WARN_ON(!mutex_is_locked(&bdev->bd_mutex));
> -
> -	zram = bdev->bd_disk->private_data;
> -	/* zram was claimed to reset so open request fails */
> -	if (zram->claim)
> -		ret = -EBUSY;
> -
> -	return ret;
> +	return ret ? ret : len;
>  }
>  
>  static const struct block_device_operations zram_devops = {
> -	.open = zram_open,
>  	.submit_bio = zram_submit_bio,
>  	.swap_slot_free_notify = zram_slot_free_notify,
>  	.rw_page = zram_rw_page,
> @@ -1821,7 +1790,6 @@ static const struct block_device_operations zram_devops = {
>  };
>  
>  static const struct block_device_operations zram_wb_devops = {
> -	.open = zram_open,
>  	.submit_bio = zram_submit_bio,
>  	.swap_slot_free_notify = zram_slot_free_notify,
>  	.owner = THIS_MODULE
> @@ -1972,34 +1940,32 @@ static int zram_add(void)
>  	return ret;
>  }
>  
> -static int zram_remove(struct zram *zram)
> +static bool zram_busy(struct zram *zram)
>  {
>  	struct block_device *bdev;
> +	bool busy = false;
>  
>  	bdev = bdget_disk(zram->disk, 0);
> -	if (!bdev)
> -		return -ENOMEM;
> -
> -	mutex_lock(&bdev->bd_mutex);
> -	if (bdev->bd_openers || zram->claim) {
> -		mutex_unlock(&bdev->bd_mutex);
> +	if (bdev) {
> +		if (bdev->bd_openers)
> +			busy = true;
>  		bdput(bdev);
> -		return -EBUSY;
>  	}
>  
> -	zram->claim = true;
> -	mutex_unlock(&bdev->bd_mutex);
> +	return busy;
> +}
>  
> -	zram_debugfs_unregister(zram);
> +static int zram_remove(struct zram *zram)
> +{
> +	if (zram_busy(zram))
> +		return -EBUSY;
>  
> -	/* Make sure all the pending I/O are finished */
> -	fsync_bdev(bdev);
> +	del_gendisk(zram->disk);
> +	zram_debugfs_unregister(zram);
>  	zram_reset_device(zram);
> -	bdput(bdev);
>  
>  	pr_info("Removed device: %s\n", zram->disk->disk_name);
>  
> -	del_gendisk(zram->disk);
>  	blk_cleanup_queue(zram->disk->queue);
>  	put_disk(zram->disk);
>  	kfree(zram);
> -- 
> 2.29.2
> 

With this patch, how do we deal with the following race?

CPU 1                                     CPU 2

hot_remove_store
  zram_remove
    zram_busy
      return -EBUSY
                                         open /dev/zram0
    del_gendisk
    zram_reset and destroy



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 01:16:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 01:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38211.70925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki5tX-0004rh-Pj; Thu, 26 Nov 2020 01:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38211.70925; Thu, 26 Nov 2020 01:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki5tX-0004ra-Mc; Thu, 26 Nov 2020 01:16:23 +0000
Received: by outflank-mailman (input) for mailman id 38211;
 Thu, 26 Nov 2020 01:16:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87l1=FA=gmail.com=minchan.kim@srs-us1.protection.inumbo.net>)
 id 1ki5tX-0004rV-7E
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 01:16:23 +0000
Received: from mail-pg1-x544.google.com (unknown [2607:f8b0:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6088136-ad1b-4bdc-ba8f-c27f5dff07d2;
 Thu, 26 Nov 2020 01:16:22 +0000 (UTC)
Received: by mail-pg1-x544.google.com with SMTP id j19so292099pgg.5
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 17:16:22 -0800 (PST)
Received: from google.com (c-67-188-94-199.hsd1.ca.comcast.net.
 [67.188.94.199])
 by smtp.gmail.com with ESMTPSA id e128sm2978987pfe.154.2020.11.25.17.16.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 25 Nov 2020 17:16:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=ffbgfORDd1J1xm3xxXp8xWzEs5sXbivtuhWMZ6CRpX8=;
        b=QvBDhQabcPJJmll9n1XSftf1/oCdc4AbjJKr3e1lsYr3oC6BYqnsjBtTeyYy83i7U3
         CiBd9FBSulwPjxAjN7h2emgRFMFS1h+Lw6DqhxGTuZzg7+z7LjtCeMG+R8Iu7GvPbymk
         ToF3QU0Kl5XY8lJh/Qkx9aL7d+kCec25ZtdGP0mSVt69+sY7dMUe9kHgs59VUC7F5d4d
         nBojS3btHNUmyJAhv9TgzqtLgQeQkeeQKwwl8KyW2EiBpga6aEsYA+VodfFczjdFHus+
         VDSakrHxOOPEEYbfR18sM6jijaOJSIrXZ6isVycVVCHpctMR7iHPW0kXlgmPrbk46btj
         mYxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=ffbgfORDd1J1xm3xxXp8xWzEs5sXbivtuhWMZ6CRpX8=;
        b=k8DdS2QTomAhPlNeOFhsGYMveHi3j9imynNJV06Sk2RKiL+6d24yHsKRvrTYigwQXe
         YZ9sNYbDTO9uA5e/i3XoVHQs2voLCJ2Ao+LHC7DcF+Bi8KVb7gU/cNXAGmVvOa9IZMvk
         8OumpCY2VjkzhKWYAlfWFFNN60P0g1EJdg+zen+845jf/rFd4m0H1FfmKNkXegdC1Wsl
         AKvOdmpnkZ5IXg9BedkMmGSloYqAh2wGhOZy8lqfZR2OdSfhP4/octnHFeF/bctQ5lcS
         agEx0dXnpFoalf0aqEB1gLTlxcZ6qVnYV+gEwHSYrPVaxQZA3SqwQ8iQZaCvjfaJQPoC
         QIlQ==
X-Gm-Message-State: AOAM530yPooMb7we9CSV1qDsCaxm9ef4qft2kOqrWDcL1g1ScslLkJqj
	YHEYcZPgtGQLMrJKpyIvgf8=
X-Google-Smtp-Source: ABdhPJxuwzDVcKoe4o/wMgwyIQrHY+YPGkp4YxJA+sp/vPwpgqgEvxXOvnlOmZ0BtRJFwRde4WH8Zg==
X-Received: by 2002:a62:8cc6:0:b029:19a:87b1:99bb with SMTP id m189-20020a628cc60000b029019a87b199bbmr637857pfd.6.1606353381534;
        Wed, 25 Nov 2020 17:16:21 -0800 (PST)
Received: from google.com (c-67-188-94-199.hsd1.ca.comcast.net. [67.188.94.199])
        by smtp.gmail.com with ESMTPSA id e128sm2978987pfe.154.2020.11.25.17.16.18
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 25 Nov 2020 17:16:20 -0800 (PST)
Sender: Minchan Kim <minchan.kim@gmail.com>
Date: Wed, 25 Nov 2020 17:16:16 -0800
From: Minchan Kim <minchan@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	sergey.senozhatsky.work@gmail.com
Subject: Re: [PATCH 61/78] zram:  do not call set_blocksize
Message-ID: <20201126011616.GB57352@google.com>
References: <20201116145809.410558-1-hch@lst.de>
 <20201116145809.410558-62-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201116145809.410558-62-hch@lst.de>

On Mon, Nov 16, 2020 at 03:57:52PM +0100, Christoph Hellwig wrote:
> set_blocksize is used by file systems to use their preferred buffer cache
> block size.  Block drivers should not set it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Minchan Kim <minchan@kernel.org>

Thanks.

> ---
>  drivers/block/zram/zram_drv.c | 11 +----------
>  drivers/block/zram/zram_drv.h |  1 -
>  2 files changed, 1 insertion(+), 11 deletions(-)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 3641434a9b154d..d00b5761ec0b21 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -403,13 +403,10 @@ static void reset_bdev(struct zram *zram)
>  		return;
>  
>  	bdev = zram->bdev;
> -	if (zram->old_block_size)
> -		set_blocksize(bdev, zram->old_block_size);
>  	blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
>  	/* hope filp_close flush all of IO */
>  	filp_close(zram->backing_dev, NULL);
>  	zram->backing_dev = NULL;
> -	zram->old_block_size = 0;
>  	zram->bdev = NULL;
>  	zram->disk->fops = &zram_devops;
>  	kvfree(zram->bitmap);
> @@ -454,7 +451,7 @@ static ssize_t backing_dev_store(struct device *dev,
>  	struct file *backing_dev = NULL;
>  	struct inode *inode;
>  	struct address_space *mapping;
> -	unsigned int bitmap_sz, old_block_size = 0;
> +	unsigned int bitmap_sz;
>  	unsigned long nr_pages, *bitmap = NULL;
>  	struct block_device *bdev = NULL;
>  	int err;
> @@ -509,14 +506,8 @@ static ssize_t backing_dev_store(struct device *dev,
>  		goto out;
>  	}
>  
> -	old_block_size = block_size(bdev);
> -	err = set_blocksize(bdev, PAGE_SIZE);
> -	if (err)
> -		goto out;
> -
>  	reset_bdev(zram);
>  
> -	zram->old_block_size = old_block_size;
>  	zram->bdev = bdev;
>  	zram->backing_dev = backing_dev;
>  	zram->bitmap = bitmap;
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index f2fd46daa76045..712354a4207c77 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -118,7 +118,6 @@ struct zram {
>  	bool wb_limit_enable;
>  	u64 bd_wb_limit;
>  	struct block_device *bdev;
> -	unsigned int old_block_size;
>  	unsigned long *bitmap;
>  	unsigned long nr_pages;
>  #endif
> -- 
> 2.29.2
> 
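For illustration, the state change removed by the diff above can be modelled in plain userspace C. This is a toy sketch, not kernel code: `toy_bdev`, `attach_old`/`attach_new`, and `TOY_PAGE_SIZE` are invented names standing in for the real backing device, `backing_dev_store()`/`reset_bdev()`, and `PAGE_SIZE`. Before the patch, the driver overrode the block size and had to remember the old value to restore it; after, it leaves the block size alone entirely, since choosing a buffer cache block size is the file system's job.

```c
/* Toy userspace model of the behaviour change in this patch.
 * Not kernel code; all names are illustrative. */
#include <stddef.h>

#define TOY_PAGE_SIZE 4096u

struct toy_bdev { unsigned int block_size; };

struct toy_zram {
    struct toy_bdev *bdev;
    unsigned int old_block_size;   /* field the patch removes */
};

/* Old behaviour: override the block size and remember the previous one. */
static void attach_old(struct toy_zram *z, struct toy_bdev *b)
{
    z->old_block_size = b->block_size;
    b->block_size = TOY_PAGE_SIZE;
    z->bdev = b;
}

static void reset_old(struct toy_zram *z)
{
    if (z->old_block_size)
        z->bdev->block_size = z->old_block_size;
    z->old_block_size = 0;
    z->bdev = NULL;
}

/* New behaviour: the backing device's block size is never touched. */
static void attach_new(struct toy_zram *z, struct toy_bdev *b)
{
    z->bdev = b;
}

static void reset_new(struct toy_zram *z)
{
    z->bdev = NULL;
}
```

The save/restore bookkeeping in `attach_old()`/`reset_old()` is exactly the `old_block_size` dance the patch deletes; the new pair shows how much simpler the attach/reset paths become once the driver stops setting the block size.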


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 02:07:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 02:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38219.70937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki6gq-0001IJ-Tg; Thu, 26 Nov 2020 02:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38219.70937; Thu, 26 Nov 2020 02:07:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki6gq-0001IC-OF; Thu, 26 Nov 2020 02:07:20 +0000
Received: by outflank-mailman (input) for mailman id 38219;
 Thu, 26 Nov 2020 02:07:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/AcN=FA=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1ki6gp-0001I4-B4
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 02:07:19 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.70]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8e291bc-3737-49f8-9d3f-ee4b1bb2150f;
 Thu, 26 Nov 2020 02:07:17 +0000 (UTC)
Received: from AM6P191CA0043.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:7f::20)
 by DB8PR08MB5129.eurprd08.prod.outlook.com (2603:10a6:10:ec::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.20; Thu, 26 Nov
 2020 02:07:15 +0000
Received: from VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:7f:cafe::1b) by AM6P191CA0043.outlook.office365.com
 (2603:10a6:209:7f::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Thu, 26 Nov 2020 02:07:15 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT058.mail.protection.outlook.com (10.152.19.86) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 26 Nov 2020 02:07:14 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Thu, 26 Nov 2020 02:07:14 +0000
Received: from 2cac11986ffd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A7CFD925-A3DF-419B-AC86-D0FF1C0E13CB.1; 
 Thu, 26 Nov 2020 02:07:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2cac11986ffd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 26 Nov 2020 02:07:09 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM0PR08MB3137.eurprd08.prod.outlook.com (2603:10a6:208:64::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Thu, 26 Nov
 2020 02:07:02 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3564.038; Thu, 26 Nov 2020
 02:07:02 +0000
X-Inumbo-ID: a8e291bc-3737-49f8-9d3f-ee4b1bb2150f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WPAbLGBAZE5fuf6VcfGW9nHH568YYdBtCzwh5B6dB9I=;
 b=6TN/hNwJkg8zqjdk/lzexsixjfticmITHhstWc05ZQMX9mAJyE+sjO5KvN3+mdxldqB5BTQ4lQ0WewaxYkHyrXEs3cILh76v7fe/b+n4YQt0JVkblMa0DD/vVlhs42fFPNuVAORX1sc2pzIZm9CDgxDFlwiBBrXUBhvo7+NJaBg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kfmj6rsgKSi3nnF8cr2DLVM2fxMk9RO3qgf54poVioCelxpDBfC+u5bRYrH4J241jRlUFH5V2BXADToUD0cpJOtD57KHyeWqIWXPZLLemQoAAXbq89vumiHoZoLMK/I8mi1YnlZH5aOuCtQJ6kk8/awMdVU4of1WcZ4vYtM+gQ/5yDRItFrX8PEgwAksQbLhRaE3UquGGuxMjw0BTgW2oxcBEN4DoSyqCiIybEWNg+nrFpVhGlqQeoSkmqpz73tWsAZUxTYDLErmLABAXGuar3UzhfWCeN3idgM7hHan4Otp+355mnmbCO28IM0wKCJKC11eAGaGsTiMZkmTY2gLBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WPAbLGBAZE5fuf6VcfGW9nHH568YYdBtCzwh5B6dB9I=;
 b=XpOyIiCy2zkLlMu1cGFAddFyaMXOG0QCTJbnjilrpa9M3u60VtWPn7eYac1Ixk7laQ/WThlm0de/oeDTTnCcaScG5ptCjmoUg59UB4AbCO5pLdqyM4t7hz7NRIFFe4Ol3+6NB7jFkzzjvsZ067UkAkw02zkPKNfd5GJw2YWLs4DeIXtRabkSjvuYbDEt68ajFFvVhYgBiZHqmUf6nxamQn9dhqzoJQm03ROLhPfS4zS8nLYrrs2rzNj7ee+Ch0G9aKcpMobOsi9DjD4IuQ6lfpwjWPiNTkLG7gv1dY9t+TCMEHWXdxay0syZT24dUXJTV6snCQjwbxaQ6QLwfOZSgw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andre
 Przywara <Andre.Przywara@arm.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, nd <nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Topic: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Index: AQHWtnF2jWNIb4RgFU+PRE0mwpdPDam/tA6AgAW2ODCAFDcMgIAAINyA
Date: Thu, 26 Nov 2020 02:07:02 +0000
Message-ID:
 <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
 <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 1EB8119D72CAED43B9F871EBF53850AB.0
x-checkrecipientchecked: true
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [218.82.32.45]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 36235be0-d1ce-40da-3283-08d891affe93
x-ms-traffictypediagnostic: AM0PR08MB3137:|DB8PR08MB5129:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB5129D6A5C02E96C7DE395D779EF90@DB8PR08MB5129.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:3276;OLM:3276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3137
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f567267b-40ba-4b47-a369-08d891aff707
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 02:07:14.7999
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 36235be0-d1ce-40da-3283-08d891affe93
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5129

Hi Stefano,

> -----Original Message-----
> From: Stefano Stabellini <sstabellini@kernel.org>
> Sent: 26 November 2020 8:00
> To: Wei Chen <Wei.Chen@arm.com>
> Cc: Julien Grall <julien@xen.org>; Penny Zheng <Penny.Zheng@arm.com>; xen-
> devel@lists.xenproject.org; sstabellini@kernel.org; Andre Przywara
> <Andre.Przywara@arm.com>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
> Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
>
> Resuming this old thread.
>
> On Fri, 13 Nov 2020, Wei Chen wrote:
> > > Hi,
> > >
> > > On 09/11/2020 08:21, Penny Zheng wrote:
> > > > CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
> > > > might return a wrong value when the counter crosses a 32bit boundary.
> > > >
> > > > Until now, there is no case for Xen itself to access CNTVCT_EL0,
> > > > and it also should be the Guest OS's responsibility to deal with
> > > > this part.
> > > >
> > > > But for CNTPCT, there exists several cases in Xen involving reading
> > > > CNTPCT, so a possible workaround is that performing the read twice,
> > > > and to return one or the other depending on whether a transition has
> > > > taken place.
> > > >
> > > > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > >
> > > Acked-by: Julien Grall <jgrall@amazon.com>
> > >
> > > On a related topic, do we need a fix similar to Linux commit
> > > 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
> > > with seqlock held"?
> > >
> >
> > I take a look at this Linux commit, it seems to prevent the seq-lock to be
> > speculated.  Using an enforce ordering instead of ISB after the read counter
> > operation seems to be for performance reasons.
> >
> > I have found that you had placed an ISB before read counter in get_cycles
> > to prevent counter value to be speculated. But you haven't placed the second
> > ISB after reading. Is it because we haven't used the get_cycles in seq lock
> > critical context (Maybe I didn't find the right place)? So should we need to fix it
> > now, or you prefer to fix it now for future usage?
>
> Looking at the call sites, it doesn't look like we need any ISB after
> get_cycles as it is not used in any critical context. There is also a
> data dependency with the value returned by it.
>
> So I am thinking we don't need any fix. At most we need an in-code comment?

I agree with you to add an in-code comment. It will remind us in future when we
use the get_cycles in critical context. Adding it now will probably only lead to
meaningless performance degradation. Because Xen may never use it in a similar
scenario.

BTW:
Can we send a patch that just contains a pure comment : )

Regards,
Wei Chen
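The double-read workaround discussed in this thread (read the counter twice and pick one value depending on whether a 32-bit transition occurred in between, as Linux also does for erratum 858921) can be sketched in plain C. This is an illustrative userspace model, not Xen or Linux code: `fake_cntpct` stands in for the CNTPCT_EL0 system register and the function names are invented for the sketch.

```c
/* Userspace sketch of the Cortex-A73 erratum 858921 workaround:
 * the counter may return a wrong value while crossing a 32-bit
 * boundary, so read it twice and check whether bit 32 flipped
 * between the reads. Names are illustrative, not Xen code. */
#include <stdint.h>

static uint64_t fake_cntpct;            /* stands in for CNTPCT_EL0 */

static uint64_t read_cntpct_raw(void)
{
    return fake_cntpct++;               /* each read advances the counter */
}

/* Two back-to-back reads; if bit 32 differs, a 32-bit rollover
 * happened in between, so return the first read, otherwise the
 * second. */
static uint64_t read_cntpct_a73(void)
{
    uint64_t old = read_cntpct_raw();
    uint64_t new = read_cntpct_raw();

    return (((old ^ new) >> 32) & 1) ? old : new;
}
```

Because the two reads are only a few cycles apart, at most one of them can land inside the erroneous rollover window, so whichever value the bit-32 check selects is trustworthy.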


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 03:25:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 03:25:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38243.70948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki7uZ-0008PK-P7; Thu, 26 Nov 2020 03:25:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38243.70948; Thu, 26 Nov 2020 03:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki7uZ-0008PD-M6; Thu, 26 Nov 2020 03:25:35 +0000
Received: by outflank-mailman (input) for mailman id 38243;
 Thu, 26 Nov 2020 03:25:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki7uY-0008P5-8A; Thu, 26 Nov 2020 03:25:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki7uX-0005gu-VT; Thu, 26 Nov 2020 03:25:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki7uX-00047u-LG; Thu, 26 Nov 2020 03:25:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ki7uX-0001vi-Kj; Thu, 26 Nov 2020 03:25:33 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FDWRVUxn4wn6UEmPya8JHTTorijHFZnL1qm+1Y83MJs=; b=cHSVXzeofKoqqr6eHTw4KP0Msp
	325w7IpI5lBbzCE8CwUss38h5aEfvN6Lv7IrJZpLy0IsTPJacwAoVWwP3U9oJn4B3W/0bvwzC1B3V
	FPEhhm8nU4gmM9gG9yupxgObsFJjQC/dTwjmLgWi8vc856mwSeIr71suQ4P5i1Cf8mkc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156994-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156994: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=cef64a0b347b8fda1a6f8b65ce435d4569c3969e
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 03:25:33 +0000

flight 156994 qemu-mainline real [real]
flight 157019 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156994/
http://logs.test-lab.xenproject.org/osstest/logs/157019/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                cef64a0b347b8fda1a6f8b65ce435d4569c3969e
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   97 days
Failing since        152659  2020-08-21 14:07:39 Z   96 days  203 attempts
Testing same since   156994  2020-11-24 18:09:41 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69049 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 03:36:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 03:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38254.70970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki855-00011P-5o; Thu, 26 Nov 2020 03:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38254.70970; Thu, 26 Nov 2020 03:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki855-00011G-2T; Thu, 26 Nov 2020 03:36:27 +0000
Received: by outflank-mailman (input) for mailman id 38254;
 Thu, 26 Nov 2020 03:36:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMh3=FA=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1ki853-00011B-Oh
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 03:36:25 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5c0c3d5-40fe-47f6-8738-ffb14e09aed4;
 Thu, 26 Nov 2020 03:36:24 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AQ3aEKx035215
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 25 Nov 2020 22:36:19 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0AQ3aD4E035214;
 Wed, 25 Nov 2020 19:36:13 -0800 (PST) (envelope-from ehem)
X-Inumbo-ID: e5c0c3d5-40fe-47f6-8738-ffb14e09aed4
Date: Wed, 25 Nov 2020 19:36:13 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen on RP4
Message-ID: <X78irfLB6DQhkPvd@mattapan.m5p.com>
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
 <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com>
 <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
 <X73owDP0UXx+lvJd@mattapan.m5p.com>
 <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Nov 25, 2020 at 10:57:31AM -0800, Stefano Stabellini wrote:
> On Tue, 24 Nov 2020, Elliott Mitchell wrote:
> > My testing section for Xen is:
> >     xen_hypervisor /boot/xen-4.14-arm64.efi
> >     xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
> >     devicetree /boot/dtb-5.8.10-2rp4-6.1.7
> >     xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7
> > 
> > I've frankly got no idea how to ensure the correct device-tree is passed
> > to Xen.  Is GRUB's `devicetree` command correct when loading Xen?  Is a
> > device-tree matched to the Linux kernel appropriate for Xen?
> > 
> > (I'm guessing the second is "yes", but the first I don't have a clue)
> 
> Yes, devicetree is correct. I have not used the graphical output, so I
> cannot help you there but yes the best bet is to use the devicetree that
> comes with the kernel.

Well, I've now got everything together for a "proper" Debian Raspberry Pi
installation.  Apparently since 5.2 (perhaps earlier, but 5.2 is
confirmed), Debian's kernel source packages have had their Raspberry Pi
device-trees garbled.  I do have the full untouched Linux kernel source
handy, but I tend to stick with the distribution until that proves
untenable.


> One thing I noticed is that you are missing some of the command line
> arguments for Xen and Linux in your grub config. For instance on the Xen
> line you want to have something like:
> 
>     dom0_mem=1024M console=dtuart sync_console
> 
> And on the Linux line you might want to have:
> 
>     console=tty0 console=hvc0

This plus adding the devicetree setting now turns into a report
elsewhere...
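For anyone following along, folding Stefano's suggested arguments into the
entry quoted earlier in the thread gives roughly the following (a sketch
only; the menuentry title is invented here, and the kernel version, UUID,
and dom0 memory size are simply the examples already used in this thread):

```shell
# /boot/grub/grub.cfg fragment (illustrative, values from this thread)
menuentry "Debian GNU/Linux, with Xen 4.14" {
    # Xen itself gets the hypervisor arguments on its own line
    xen_hypervisor /boot/xen-4.14-arm64.efi dom0_mem=1024M console=dtuart sync_console
    # First xen_module is the dom0 kernel; its arguments follow it
    xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro console=tty0 console=hvc0
    # Device-tree matched to the kernel, passed through to Xen
    devicetree /boot/dtb-5.8.10-2rp4-6.1.7
    # Initrd must not be unzipped by GRUB
    xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7
}
```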


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Nov 26 05:16:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 05:16:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38137.70982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki9dE-0001xd-Ez; Thu, 26 Nov 2020 05:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38137.70982; Thu, 26 Nov 2020 05:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki9dE-0001xW-Br; Thu, 26 Nov 2020 05:15:48 +0000
Received: by outflank-mailman (input) for mailman id 38137;
 Wed, 25 Nov 2020 22:44:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cX6R=E7=gmail.com=ecree.xilinx@srs-us1.protection.inumbo.net>)
 id 1ki3Wj-0002Mw-CT
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 22:44:41 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 700e3e7d-60c4-4325-8299-15c2f89434ee;
 Wed, 25 Nov 2020 22:44:40 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id c198so229142wmd.0
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 14:44:40 -0800 (PST)
Received: from [192.168.1.122]
 (cpc92720-cmbg20-2-0-cust364.5-4.cable.virginm.net. [82.21.83.109])
 by smtp.gmail.com with ESMTPSA id h15sm6411655wrw.15.2020.11.25.14.44.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 25 Nov 2020 14:44:38 -0800 (PST)
X-Inumbo-ID: 700e3e7d-60c4-4325-8299-15c2f89434ee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=mMrMdknFbsC77SFRvwwD3pm1cwl3V2MP65kzVJE6tY4=;
        b=mE3/9/w1S1BsFafykdzrNl6nUKLQtv3irXzEz/AwtJdLppuZQnzVWeoYFC7Qb5OQef
         2DAcZ7D6Froq/rlKPIBwqGtt0C/0XmhDJf/Ua0btTGl1wBl77rh04suLM7Drqc/fd1vb
         erjCUGXrwTRfKAEK60/5plkWP59Lm5M7QZ9ENMraFyUEU5TYdKtuuKGBjQ65xtLyj0UE
         nayIBzzwS8QdOAkDbWxTvDHZJGPoguOAiztDitejPSo1Rp931ceTN4KgHMls79AEVEp8
         0UjkiGW0It0CwNfB/koaXcS1sEgjLFm2K1jcT9QDdZUmcgiCMPoll+6KvJ2hAfFlaA1V
         8PSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=mMrMdknFbsC77SFRvwwD3pm1cwl3V2MP65kzVJE6tY4=;
        b=jUn+I+0nOd9095XsRVTTjJvmV9Iuwa8sJZcGI8TqceEObL2xRcWvXUDqOv78vFNQlb
         96tnZ7WP3Awulq+GmKQJQvkDz2kwrPB4GnUvtmT5UPYFfj3MKnNH+sI45M5Mp5O/aY1s
         XgSg2XjJpQXx41iWc4yiKQyoQrjiUne/gmmxLhlM2sOGP8T7hNfggIMfN1o+VoYpbwRy
         7QLD2W1rF7vs89qkk0jw8WCGcjaMMiRQasAtrjqRlRtudEtwRS02kxvnKk0PDcxtNQZ8
         8t9GwhIsPqI3CDkyV2U4HGX8YOX9c6Ez6k4sGz3cgqYlfYvZH6ILdYHr9QILcIdqBK1W
         tH7g==
X-Gm-Message-State: AOAM532xQDNoIwpM8p3aALNlwCyflAT6HB9GXP1k4iULCUm9Igm7wGax
	+6xVvDoIlkJn4nrnK7aFp7I=
X-Google-Smtp-Source: ABdhPJzVXPr1uOlB1vaw2cfl6zSoOiLq1F6oo5r6Yg/UanqxqpZONWe00UKvdrBaRFZ4o0pgYT/ByA==
X-Received: by 2002:a7b:cf0a:: with SMTP id l10mr6364382wmg.103.1606344279394;
        Wed, 25 Nov 2020 14:44:39 -0800 (PST)
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>,
 James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>,
 "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 linux-kernel <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org,
 amd-gfx@lists.freedesktop.org, bridge@lists.linux-foundation.org,
 ceph-devel@vger.kernel.org, cluster-devel@redhat.com,
 coreteam@netfilter.org, devel@driverdev.osuosl.org, dm-devel@redhat.com,
 drbd-dev@lists.linbit.com, dri-devel@lists.freedesktop.org,
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-arm-msm@vger.kernel.org, linux-atm-general@lists.sourceforge.net,
 linux-block@vger.kernel.org, linux-can@vger.kernel.org,
 linux-cifs@vger.kernel.org,
 Linux Crypto Mailing List <linux-crypto@vger.kernel.org>,
 linux-decnet-user@lists.sourceforge.net,
 Ext4 Developers List <linux-ext4@vger.kernel.org>,
 linux-fbdev@vger.kernel.org, linux-geode@lists.infradead.org,
 linux-gpio@vger.kernel.org, linux-hams@vger.kernel.org,
 linux-hwmon@vger.kernel.org, linux-i3c@lists.infradead.org,
 linux-ide@vger.kernel.org, linux-iio@vger.kernel.org,
 linux-input <linux-input@vger.kernel.org>, linux-integrity@vger.kernel.org,
 linux-mediatek@lists.infradead.org,
 Linux Media Mailing List <linux-media@vger.kernel.org>,
 linux-mmc@vger.kernel.org, Linux-MM <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
 linux-rdma@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-sctp@vger.kernel.org,
 linux-security-module@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linux-watchdog@vger.kernel.org,
 linux-wireless <linux-wireless@vger.kernel.org>,
 Network Development <netdev@vger.kernel.org>,
 netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org,
 op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com,
 patches@opensource.cirrus.com, rds-devel@oss.oracle.com,
 reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org,
 selinux@vger.kernel.org, target-devel@vger.kernel.org,
 tipc-discussion@lists.sourceforge.net, usb-storage@lists.one-eyed-alien.net,
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
 Nick Desaulniers <ndesaulniers@google.com>,
 Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda
 <ojeda@kernel.org>, Joe Perches <joe@perches.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
 <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
 <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com>
From: Edward Cree <ecree.xilinx@gmail.com>
Message-ID: <44005bde-f6d4-5eaa-39b8-1a5efeedb2d3@gmail.com>
Date: Wed, 25 Nov 2020 22:44:35 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.7.2
MIME-Version: 1.0
In-Reply-To: <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

On 25/11/2020 00:32, Miguel Ojeda wrote:
> I have said *authoring* lines of *this* kind takes a minute per line.
> Specifically: lines fixing the fallthrough warning mechanically and
> repeatedly where the compiler tells you to, and doing so full-time for
> a month.
<snip>
> It is useful since it makes intent clear.
To make the intent clear, you have to first be certain that you
 understand the intent; otherwise by adding either a break or a
 fallthrough to suppress the warning you are just destroying the
 information that "the intent of this code is unknown".
Figuring out the intent of a piece of unfamiliar code takes more
 than 1 minute; just because
    case foo:
        thing;
    case bar:
        break;
 produces identical code to
    case foo:
        thing;
        break;
    case bar:
        break;
 doesn't mean that *either* is correct — maybe the author meant
 to write
    case foo:
        return thing;
    case bar:
        break;
 and by inserting that break you've destroyed the marker that
 would direct someone who knew what the code was about to look
 at that point in the code and spot the problem.
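To make that concrete, here is a minimal compilable sketch (the `classify` and `thing` names are hypothetical, not taken from any patch in this thread) of the scenario above, where the mechanical `break;` "fix" produces working-looking code while the possibly-intended `return thing();` is lost:

```c
#include <assert.h>

enum op { foo, bar, baz };

static int thing(void) { return 1; }

/* Hypothetical illustration: the original fall-through from `case foo`
 * drew an implicit-fallthrough warning.  Mechanically inserting `break;`
 * silences it and gives the behaviour tested below; but if the author
 * actually meant `return thing();`, the bug survives and the
 * warning-shaped marker that pointed at it is gone. */
static int classify(enum op op)
{
    int ret = 0;

    switch (op) {
    case foo:
        ret = thing();
        break;          /* the mechanical "fix" under discussion */
    case bar:
        break;
    case baz:
        ret = -1;
        break;
    }
    return ret;
}
```

Whether this sketch is correct cannot be decided from the switch alone; it depends on what callers of `classify` expect, which is exactly the wider-context judgement being argued for.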
Thus, you *always* have to look at more than just the immediate
 mechanical context of the code, to make a proper judgement that
 yes, this was the intent.  If you think that that sort of thing
 can be done in an *average* time of one minute, then I hope you
 stay away from code I'm responsible for!
One minute would be an optimistic target for code that, as the
 maintainer, one is already somewhat familiar with.  For code
 that you're seeing for the first time, as is usually the case
 with the people doing these mechanical fix-a-warning patches,
 it's completely unrealistic.

A warning is only useful because it makes you *think* about the
 code.  If you suppress the warning without doing that thinking,
 then you've made the warning useless; and if the warning made you
 think about code that didn't *need* it, then the warning was
 useless from the start.

So make your mind up: does Clang's stricter -Wimplicit-fallthrough
 flag up code that needs thought (in which case the fixes take
 effort both to author and to review) or does it flag up code
 that can be mindlessly "fixed" (in which case the warning is
 worthless)?  Proponents in this thread seem to be trying to
 have it both ways.

-ed


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 05:16:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 05:16:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38139.70988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki9dE-0001y5-QS; Thu, 26 Nov 2020 05:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38139.70988; Thu, 26 Nov 2020 05:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki9dE-0001xx-KW; Thu, 26 Nov 2020 05:15:48 +0000
Received: by outflank-mailman (input) for mailman id 38139;
 Wed, 25 Nov 2020 23:02:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cX6R=E7=gmail.com=ecree.xilinx@srs-us1.protection.inumbo.net>)
 id 1ki3oD-000469-T0
 for xen-devel@lists.xenproject.org; Wed, 25 Nov 2020 23:02:45 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2802225f-6caa-4242-a188-4bdddacce7df;
 Wed, 25 Nov 2020 23:02:45 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id 23so83248wrc.8
 for <xen-devel@lists.xenproject.org>; Wed, 25 Nov 2020 15:02:45 -0800 (PST)
Received: from [192.168.1.122]
 (cpc92720-cmbg20-2-0-cust364.5-4.cable.virginm.net. [82.21.83.109])
 by smtp.gmail.com with ESMTPSA id u129sm5552667wme.9.2020.11.25.15.02.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 25 Nov 2020 15:02:43 -0800 (PST)
X-Inumbo-ID: 2802225f-6caa-4242-a188-4bdddacce7df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=ha42UWMtlXOL8kfvC1XpFKttY73p7G2gxviD5bIo2Ac=;
        b=cK16XuXEr1UlZZi/GgHF7dItPstb+7yZ2QUTp5Dxvdc6OIX52ygirnUAKuRVBOH+dK
         A20pmgUIw2HuGZFFAXCmBSAdjpUAnHlbQW6vzVJQphFFEbO8NHcI7pY1E1IYv6o5k0Qf
         jXWsM0ddqTXTn/1nZTXycpkYVMfbhxZxJ9mV7kOEU+Tec/es0pgNM20lzmzFkZBYIURn
         HK2FiFuW7CXMZxeXIL3yJ5bCH/J8M2hkcorgk3PecLezXYs+oTw3zVRlNq6FEGJfSXMe
         e5NpXkmlA2dB6AmhViUGWqJQAiQop1G/7Q0C0bZUhlLDOGEFt9XX9+Dv47nZ0Rj6T1x/
         IsxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=ha42UWMtlXOL8kfvC1XpFKttY73p7G2gxviD5bIo2Ac=;
        b=qIETVp5aZP1N6uw1r9UFwZX7gfdV0znpb/WHMg4sNlIOCiFsEjp+Avw1ffTtPFLKKa
         6QIucF+j+e5Z1iRfVdtVH7LUEJvW12WyIr2sSaVBixh9ace+Q0RKYKNbaAgVoxNCaLpz
         fF+M7kn6nEQFr7GyswbyDXLUPhXjsoCkMQRh14xX+XgEzPDd9sUs9/lB1QhMrjC5KD4E
         MZwqFbhNvPGBElER20zHmpQTiXjMMhnXOgcH/z79SEwphZnMLj4+l2e4LBvb7Zm1ieOW
         qS2YMPpjsFP5ukBLUk9M3wyAonKiUJzZqk0G/KTEn1EnSPtwNrz0vw3hkdNomac4jU0K
         IYNw==
X-Gm-Message-State: AOAM533Vu5xCmmd9wwPEThI2chtQliOhsuCUQwW4/yNBpvT7IMMHwlYy
	rpNb1fNFvso2oIQ3x4R8qYI=
X-Google-Smtp-Source: ABdhPJzir63ZhkohioP0Ktx42tkGcEWW5m4eKxg+CUjAscHddZDwqHZSdjbm/E0JNHFBggGwkKFODQ==
X-Received: by 2002:a5d:474b:: with SMTP id o11mr180470wrs.235.1606345364309;
        Wed, 25 Nov 2020 15:02:44 -0800 (PST)
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Kees Cook <keescook@chromium.org>,
 Nick Desaulniers <ndesaulniers@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>,
 "Gustavo A. R. Silva" <gustavoars@kernel.org>,
 LKML <linux-kernel@vger.kernel.org>, alsa-devel@alsa-project.org,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org,
 cluster-devel@redhat.com, coreteam@netfilter.org,
 devel@driverdev.osuosl.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com,
 dri-devel <dri-devel@lists.freedesktop.org>,
 GR-everest-linux-l2@marvell.com, GR-Linux-NIC-Dev@marvell.com,
 intel-gfx@lists.freedesktop.org, intel-wired-lan@lists.osuosl.org,
 keyrings@vger.kernel.org, linux1394-devel@lists.sourceforge.net,
 linux-acpi@vger.kernel.org, linux-afs@lists.infradead.org,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-arm-msm <linux-arm-msm@vger.kernel.org>,
 linux-atm-general@lists.sourceforge.net, linux-block@vger.kernel.org,
 linux-can@vger.kernel.org, linux-cifs@vger.kernel.org,
 "open list:HARDWARE RANDOM NUMBER GENERATOR CORE"
 <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net,
 linux-ext4@vger.kernel.org, linux-fbdev@vger.kernel.org,
 linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org,
 linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org,
 linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org,
 linux-iio@vger.kernel.org, linux-input@vger.kernel.org,
 linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org,
 linux-media@vger.kernel.org, linux-mmc@vger.kernel.org,
 Linux Memory Management List <linux-mm@kvack.org>,
 linux-mtd@lists.infradead.org, linux-nfs@vger.kernel.org,
 linux-rdma@vger.kernel.org, Linux-Renesas
 <linux-renesas-soc@vger.kernel.org>, linux-scsi@vger.kernel.org,
 linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linux-watchdog@vger.kernel.org,
 linux-wireless <linux-wireless@vger.kernel.org>,
 Network Development <netdev@vger.kernel.org>,
 netfilter-devel@vger.kernel.org, nouveau@lists.freedesktop.org,
 op-tee@lists.trustedfirmware.org, oss-drivers@netronome.com,
 patches@opensource.cirrus.com, rds-devel@oss.oracle.com,
 reiserfs-devel@vger.kernel.org, samba-technical@lists.samba.org,
 selinux@vger.kernel.org, target-devel@vger.kernel.org,
 tipc-discussion@lists.sourceforge.net, usb-storage@lists.one-eyed-alien.net,
 virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org,
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>,
 xen-devel@lists.xenproject.org, linux-hardening@vger.kernel.org,
 Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda
 <ojeda@kernel.org>, Joe Perches <joe@perches.com>
References: <cover.1605896059.git.gustavoars@kernel.org>
 <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook>
 <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook>
 <CAKwvOdntVfXj2WRR5n6Kw7BfG7FdKpTeHeh5nPu5AzwVMhOHTg@mail.gmail.com>
 <202011241324.B3439A2@keescook>
From: Edward Cree <ecree.xilinx@gmail.com>
Message-ID: <99a9ffd7-6356-b81d-6e08-7ed74b6fb82c@gmail.com>
Date: Wed, 25 Nov 2020 23:02:40 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.7.2
MIME-Version: 1.0
In-Reply-To: <202011241324.B3439A2@keescook>
Content-Type: text/plain; charset=utf-8
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

On 24/11/2020 21:25, Kees Cook wrote:
> I still think this isn't right -- it's a case statement that runs off
> the end without an explicit flow control determination.

Proves too much — for instance
    case foo:
    case bar:
        thing;
        break;
 doesn't require a fallthrough; after case foo:, and afaik
 no-one is suggesting it should.  Yet it, too, is "a case
 statement that runs off the end without an explicit flow
 control determination".
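For reference, a compilable sketch of that shape (names hypothetical): Clang's -Wimplicit-fallthrough only fires when control falls past executable statements into the next label, so an empty label group like this builds cleanly with no annotation:

```c
#include <assert.h>

enum tag { foo, bar, other };

/* Hypothetical example: `case foo:` runs off the end into `case bar:`
 * with no statements in between, so -Wimplicit-fallthrough stays quiet;
 * only a fall-through *past* statements is flagged. */
static int dispatch(enum tag t)
{
    switch (t) {
    case foo:       /* no annotation needed on this label group */
    case bar:
        return 1;   /* "thing" */
    default:
        return 0;
    }
}
```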

-ed


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 05:35:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 05:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38276.71006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki9vz-0003vS-D0; Thu, 26 Nov 2020 05:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38276.71006; Thu, 26 Nov 2020 05:35:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ki9vz-0003vL-9b; Thu, 26 Nov 2020 05:35:11 +0000
Received: by outflank-mailman (input) for mailman id 38276;
 Thu, 26 Nov 2020 05:35:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki9vy-0003vD-2B; Thu, 26 Nov 2020 05:35:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki9vx-0000Nr-HY; Thu, 26 Nov 2020 05:35:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ki9vx-0002HV-8Z; Thu, 26 Nov 2020 05:35:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ki9vx-00071i-82; Thu, 26 Nov 2020 05:35:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vLRY0HfEGLUEGy5BdY3EjxEre4hzb+ZylPNXgenva2E=; b=d3oWv2dk8N0pedStqYFBjoYios
	Wm4C0+BoUrxRqvZvOFOrG461u4CYTIf2fudm5B1quOg4ekBT9XY0JAJitXo8kBlANKCfdZa6wC0bw
	LSA8ftxU8bmVSeEkedM93iwqQqEGKsoH/CkLegW+SRN/0bcxWoSRAW9IADZzHvgnGyvY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156996-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156996: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d5beb3140f91b1c8a3d41b14d729aefa4dcc58bc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 05:35:09 +0000

flight 156996 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156996/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 156981 pass in 156996
 test-amd64-amd64-examine    4 memdisk-try-append fail in 156981 pass in 156996
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 156981 pass in 156996
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen        fail pass in 156981
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 156981

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 156981 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 156981 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d5beb3140f91b1c8a3d41b14d729aefa4dcc58bc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  117 days
Failing since        152366  2020-08-01 20:49:34 Z  116 days  195 attempts
Testing same since   156981  2020-11-24 07:21:48 Z    1 days    2 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 683988 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 07:08:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 07:08:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38289.71026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiBOM-0003l0-Ir; Thu, 26 Nov 2020 07:08:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38289.71026; Thu, 26 Nov 2020 07:08:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiBOM-0003kt-Fb; Thu, 26 Nov 2020 07:08:34 +0000
Received: by outflank-mailman (input) for mailman id 38289;
 Thu, 26 Nov 2020 07:08:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiBOL-0003kl-1s; Thu, 26 Nov 2020 07:08:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiBOK-0002LS-Rx; Thu, 26 Nov 2020 07:08:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiBOK-0006Fb-JJ; Thu, 26 Nov 2020 07:08:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiBOK-0005ig-Im; Thu, 26 Nov 2020 07:08:32 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=idQettaSfyuU4Nt6A1EKE8qyS1KNqHy7MMvb2lxbDJg=; b=1zBBTMtkP7BUH3NwXEDnDORt3v
	KUFO0hI0AJwHc9DAwnmUSrN0r62N/CdA0E75jTVQM6tHQXelTpIvSaMMICFNLnMm1OG9MwZayC8MD
	pch1z/yL3CgEi/gITL8zfy6s01dfq1I3rufaY3UqxbZmlga8NFInYS/0trgl4+cX5IJc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157018-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157018: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e9d62effa37ea13fe04fc89b21d2de7776f183a2
X-Osstest-Versions-That:
    ovmf=388f9a9355ae7ab95a34ac2e9a3c608caf11b74a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 07:08:32 +0000

flight 157018 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157018/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e9d62effa37ea13fe04fc89b21d2de7776f183a2
baseline version:
 ovmf                 388f9a9355ae7ab95a34ac2e9a3c608caf11b74a

Last test of basis   157012  2020-11-25 18:09:50 Z    0 days
Testing same since   157018  2020-11-26 01:39:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  gaoliming <gaoliming@byosoft.com.cn>
  Jiewen Yao <Jiewen.yao@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   388f9a9355..e9d62effa3  e9d62effa37ea13fe04fc89b21d2de7776f183a2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 08:03:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 08:03:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38304.71041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiCFo-00015h-VO; Thu, 26 Nov 2020 08:03:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38304.71041; Thu, 26 Nov 2020 08:03:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiCFo-00015a-SQ; Thu, 26 Nov 2020 08:03:48 +0000
Received: by outflank-mailman (input) for mailman id 38304;
 Thu, 26 Nov 2020 08:03:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yWto=FA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kiCFm-00015V-QE
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 08:03:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54c554f6-472b-48ee-a02f-e6d373c690f2;
 Thu, 26 Nov 2020 08:03:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A289BAC0C;
 Thu, 26 Nov 2020 08:03:42 +0000 (UTC)
X-Inumbo-ID: 54c554f6-472b-48ee-a02f-e6d373c690f2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606377822; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=CjteYFCzzzAJ0LtD9dfouVrr6hZ49DMWJlX7vqKsT9k=;
	b=lAAMy9aRxH5lK8pfQsXHog/IPnkhfo058Sgz6u8W62iUC6T0sXpwv6Q7hcqfIoQ5hzpdN6
	mtWi6ChlvbnJUvnbyWTE92y6rxYhcnsMNzNo9LWfOCTzU9M2Q01WJdXttfS7PJ72aGqbAP
	dziQush9WztfgQ/HnPGFI+00H8qjwhY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3] xen: add support for automatic debug key actions in case of crash
Date: Thu, 26 Nov 2020 09:03:40 +0100
Message-Id: <20201126080340.6154-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it would sometimes be nice to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible, as a
reboot or kexec may be needed sooner rather than later.

Add support for automatic debug key actions in case of a crash, which
can be activated via boot or runtime parameters.

Depending on the type of crash the desired data might differ, so
support a separate setting for each possible crash type.

The parameters are of the form "crash-debug-<type>" with the
following syntax:

  crash-debug-<type>=<string>

with <type> being one of:

  panic, hwdom, watchdog, kexeccmd, debugkey

and <string> a sequence of debug key characters with '+' having the
special semantics of a 10 millisecond pause.

So "crash-debug-watchdog=0+0qr" would result in special output in case
of a watchdog-triggered crash (dom0 state, 10 ms pause, dom0 state,
domain info, run queues).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- switched special character '.' to '+' (Jan Beulich)
- 10 ms instead of 1 s pause (Jan Beulich)
- added more text to the boot parameter description (Jan Beulich)

V3:
- added const (Jan Beulich)
- thorough test of crash reason parameter (Jan Beulich)
- kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
- added dummy get_irq_regs() helper on Arm

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/xen-command-line.pandoc | 39 ++++++++++++++++++++++++++
 xen/common/kexec.c                |  8 ++++--
 xen/common/keyhandler.c           | 46 +++++++++++++++++++++++++++++++
 xen/common/shutdown.c             |  4 +--
 xen/drivers/char/console.c        |  2 +-
 xen/include/asm-arm/irq.h         |  5 ++++
 xen/include/xen/kexec.h           | 10 +++++--
 xen/include/xen/keyhandler.h      | 11 ++++++++
 8 files changed, 117 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b4a0d60c11..294be31abb 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -574,6 +574,45 @@ reduction of features at Xen's disposal to manage guests.
 ### cpuinfo (x86)
 > `= <boolean>`
 
+### crash-debug-debugkey
+### crash-debug-hwdom
+### crash-debug-kexeccmd
+### crash-debug-panic
+### crash-debug-watchdog
+> `= <string>`
+
+> Can be modified at runtime
+
+Specify debug-key actions to perform in case of a crash. Each of these
+parameters applies to a different crash reason. The `<string>` is a sequence
+of debug key characters, with `+` having the special meaning of a 10 millisecond pause.
+
+`crash-debug-debugkey` will be used for crashes induced by the `C` debug
+key (i.e. manually induced crash).
+
+`crash-debug-hwdom` denotes a crash of dom0.
+
+`crash-debug-kexeccmd` is an explicit request of dom0 to continue with the
+kdump kernel via kexec. Only available on hypervisors built with CONFIG_KEXEC.
+
+`crash-debug-panic` is a crash of the hypervisor.
+
+`crash-debug-watchdog` is a crash due to the watchdog timer expiring.
+
+It should be noted that dumping diagnostic data to the console can fail in
+multiple ways (missing data, a hanging system, ...) depending on the cause
+of the crash, which might have left the hypervisor in a bad state.
+
+So e.g. `crash-debug-watchdog=0+0r` would dump dom0 state twice with 10
+milliseconds between the two state dumps, followed by the run queues of the
+hypervisor, if the system crashes due to a watchdog timeout.
+
+These parameters should be used carefully, as e.g. specifying
+`crash-debug-debugkey=C` would result in an endless loop. Depending on the
+cause of the system crash, triggering some debug key
+action may result in a hang instead of dumping data and then doing a
+reboot or crash dump.
+
 ### crashinfo_maxaddr
 > `= <size>`
 
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 52cdc4ebc3..ebeee6405a 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -373,10 +373,12 @@ static int kexec_common_shutdown(void)
     return 0;
 }
 
-void kexec_crash(void)
+void kexec_crash(enum crash_reason reason)
 {
     int pos;
 
+    keyhandler_crash_action(reason);
+
     pos = (test_bit(KEXEC_FLAG_CRASH_POS, &kexec_flags) != 0);
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
@@ -409,7 +411,7 @@ static long kexec_reboot(void *_image)
 static void do_crashdump_trigger(unsigned char key)
 {
     printk("'%c' pressed -> triggering crashdump\n", key);
-    kexec_crash();
+    kexec_crash(CRASHREASON_DEBUGKEY);
     printk(" * no crash kernel loaded!\n");
 }
 
@@ -840,7 +842,7 @@ static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
         ret = continue_hypercall_on_cpu(0, kexec_reboot, image);
         break;
     case KEXEC_TYPE_CRASH:
-        kexec_crash(); /* Does not return */
+        kexec_crash(CRASHREASON_KEXECCMD); /* Does not return */
         break;
     }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 68364e987d..11ab6ecd59 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -3,7 +3,9 @@
  */
 
 #include <asm/regs.h>
+#include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/param.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
 #include <xen/console.h>
@@ -507,6 +509,50 @@ void __init initialize_keytable(void)
     }
 }
 
+#define CRASHACTION_SIZE  32
+static char crash_debug_panic[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-panic", crash_debug_panic);
+static char crash_debug_hwdom[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
+static char crash_debug_watchdog[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
+#ifdef CONFIG_KEXEC
+static char crash_debug_kexeccmd[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
+#endif
+static char crash_debug_debugkey[CRASHACTION_SIZE];
+string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
+
+void keyhandler_crash_action(enum crash_reason reason)
+{
+    static const char *const crash_action[CRASHREASON_N] = {
+        [CRASHREASON_PANIC] = crash_debug_panic,
+        [CRASHREASON_HWDOM] = crash_debug_hwdom,
+        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
+#ifdef CONFIG_KEXEC
+        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
+#endif
+        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
+    };
+    const char *action;
+    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
+
+    if ( (unsigned int)reason >= CRASHREASON_N )
+        return;
+    action = crash_action[reason];
+    if ( !action )
+        return;
+
+    while ( *action )
+    {
+        if ( *action == '+' )
+            mdelay(10);
+        else
+            handle_keypress(*action, regs);
+        action++;
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 912593915b..abde48aa4c 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -43,7 +43,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_crash:
         debugger_trap_immediate();
         printk("Hardware Dom%u crashed: ", hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_HWDOM);
         maybe_reboot();
         break; /* not reached */
 
@@ -56,7 +56,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_watchdog:
         printk("Hardware Dom%u shutdown: watchdog rebooting machine\n",
                hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_WATCHDOG);
         machine_restart(0);
         break; /* not reached */
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 861ad53a8f..acec277f5e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1271,7 +1271,7 @@ void panic(const char *fmt, ...)
 
     debugger_trap_immediate();
 
-    kexec_crash();
+    kexec_crash(CRASHREASON_PANIC);
 
     if ( opt_noreboot )
         machine_halt();
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index e45d574598..01d5a7cb02 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -98,6 +98,11 @@ void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask);
  */
 bool irq_type_set_by_domain(const struct domain *d);
 
+static inline struct cpu_user_regs *get_irq_regs(void)
+{
+    return NULL;
+}
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:
diff --git a/xen/include/xen/kexec.h b/xen/include/xen/kexec.h
index e85ba16405..9f7a912e97 100644
--- a/xen/include/xen/kexec.h
+++ b/xen/include/xen/kexec.h
@@ -1,6 +1,8 @@
 #ifndef __XEN_KEXEC_H__
 #define __XEN_KEXEC_H__
 
+#include <xen/keyhandler.h>
+
 #ifdef CONFIG_KEXEC
 
 #include <public/kexec.h>
@@ -48,7 +50,7 @@ void machine_kexec_unload(struct kexec_image *image);
 void machine_kexec_reserved(xen_kexec_reserve_t *reservation);
 void machine_reboot_kexec(struct kexec_image *image);
 void machine_kexec(struct kexec_image *image);
-void kexec_crash(void);
+void kexec_crash(enum crash_reason reason);
 void kexec_crash_save_cpu(void);
 struct crash_xen_info *kexec_crash_save_info(void);
 void machine_crash_shutdown(void);
@@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
 #define kexecing 0
 
 static inline void kexec_early_calculations(void) {}
-static inline void kexec_crash(void) {}
+static inline void kexec_crash(enum crash_reason reason)
+{
+    keyhandler_crash_action(reason);
+}
+
 static inline void kexec_crash_save_cpu(void) {}
 static inline void set_kexec_crash_area_size(u64 system_ram) {}
 
diff --git a/xen/include/xen/keyhandler.h b/xen/include/xen/keyhandler.h
index 5131e86cbc..dbf797a8b4 100644
--- a/xen/include/xen/keyhandler.h
+++ b/xen/include/xen/keyhandler.h
@@ -48,4 +48,15 @@ void register_irq_keyhandler(unsigned char key,
 /* Inject a keypress into the key-handling subsystem. */
 extern void handle_keypress(unsigned char key, struct cpu_user_regs *regs);
 
+enum crash_reason {
+    CRASHREASON_PANIC,
+    CRASHREASON_HWDOM,
+    CRASHREASON_WATCHDOG,
+    CRASHREASON_KEXECCMD,
+    CRASHREASON_DEBUGKEY,
+    CRASHREASON_N
+};
+
+void keyhandler_crash_action(enum crash_reason reason);
+
 #endif /* __XEN_KEYHANDLER_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 09:04:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 09:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38314.71054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDCn-0006Um-GR; Thu, 26 Nov 2020 09:04:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38314.71054; Thu, 26 Nov 2020 09:04:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDCn-0006Uf-DE; Thu, 26 Nov 2020 09:04:45 +0000
Received: by outflank-mailman (input) for mailman id 38314;
 Thu, 26 Nov 2020 09:04:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiDCl-0006Ua-Ck
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 09:04:43 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.58]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd90020a-1ff3-4d7c-9084-455ba474bcbc;
 Thu, 26 Nov 2020 09:04:42 +0000 (UTC)
Received: from DB7PR03CA0075.eurprd03.prod.outlook.com (2603:10a6:10:72::16)
 by AM6PR08MB3208.eurprd08.prod.outlook.com (2603:10a6:209:4b::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 26 Nov
 2020 09:04:35 +0000
Received: from DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:72:cafe::10) by DB7PR03CA0075.outlook.office365.com
 (2603:10a6:10:72::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Thu, 26 Nov 2020 09:04:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT004.mail.protection.outlook.com (10.152.20.128) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 26 Nov 2020 09:04:35 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Thu, 26 Nov 2020 09:04:34 +0000
Received: from f9916890ed0d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E4496F37-1511-4CDD-9F5F-75BE5D70224C.1; 
 Thu, 26 Nov 2020 09:04:28 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f9916890ed0d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 26 Nov 2020 09:04:28 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5752.eurprd08.prod.outlook.com (2603:10a6:10:1ac::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 26 Nov
 2020 09:04:27 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Thu, 26 Nov 2020
 09:04:27 +0000
X-Inumbo-ID: bd90020a-1ff3-4d7c-9084-455ba474bcbc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l7FYCXLKpq3GugLNHqEUXAZJx5k0nXUoNo1QFsUcmow=;
 b=q6NeNfgd6MGuATU36I1DA26VaFKKQlFBFnmuyWSlot/xCHXkjr+Biq5ITab/qebVyKrbWoFx7nwYaQWc+jvcTOVK7ZgIw/lknDY7sY1hecimlhFMXVrqivW6x1bi4A8f7KA7jqSuuZeH7bbuHxWrSoamWzcrfcKN8ymbcsQNiqU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e4d613cb1f479c8d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UhSUTa3+SXsKXjkFBpRCCZWhLhAOIe/I97oAK1JviYopMcx5bH9qMs8ZFpG7/SyrHVcxl8pD9GgYrMRXPzxISOasiFm99ck2E7axyk0dRb1Qfn1Bfo3/ipBWkYUpW45jiWjR7bysO4E6MSrITGHZNwclQ9HaQhQwGtBMyGbTXBx6xadXFmEUaV6739Axqryy02RUA4cNGPjvXo612exN69ggTDaCyZ54z1961YFVgaa6l8Tid1hBDKNNa7yvNfRL2VC8cEXMgXrbNLq5/sALukW+gDG8Prxc7SjnBqkqpXczM2U4QlT3lX98rNFusgqR/FsraCoCVkZXzA65E9S/IA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l7FYCXLKpq3GugLNHqEUXAZJx5k0nXUoNo1QFsUcmow=;
 b=UP16/5HLfS8GgErgG+A2erLbsE97nRZkOmjSBaT1Hwe7JQK+Ts28rkKDk2M3ztU3sj4FDcqdnBXU01VvjrXALrxxZ+sfZRuxbItuDa1+AQZkiOvKap4OSRBf2JrbWza381buGy/5lJ+tpN2dxXwREL4hc56+Iv9gA0GElC3LxoD09AA7q48Hq7D+x8cuwgtEjGH8nbH17kR0SPd5FbEX5ivYa3FZWT04QvWr78tvzmEAvToMuU48mFmYXLoYEVUD0BresDA1LXsSryJF9nFzyhyygjPCuByI99Ib+rX5MtEkSDK9AYaiwyrR28dseH3hOwqfnZTtRHcF7Zes5Txblg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v4 1/3] xen/pci: Move x86 specific code to x86 directory.
Thread-Topic: [PATCH v4 1/3] xen/pci: Move x86 specific code to x86 directory.
Thread-Index: AQHWw1crc3wbvvllAUG9bYTTFcj66qnaH5kA
Date: Thu, 26 Nov 2020 09:04:27 +0000
Message-ID: <57E71345-E9D4-439D-B959-E520BC473F7D@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
In-Reply-To:
 <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 20379137-24c6-482c-62b5-08d891ea4ba7
x-ms-traffictypediagnostic: DBAPR08MB5752:|AM6PR08MB3208:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB32084A1596DF3006EE30191E9DF90@AM6PR08MB3208.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7219;OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 f/Jr8kAZV1C9dOzS2I4MzLj/mngVpZrGKJh9vZxk6lr+dT0iZN0316b2YkPBwLk20Xc9+dwrBsXkhNT531lrEG3q4E4FNEfEIt45jMI7fWzgI0tvRlnCpWngLZ/n34IGhV0cVz4d4viYofi6uv3Dto0O3uOoplATrfVyr7OVyNL+xkjxGdWllP8qsK0bZz+ELZawbNtm0dn5TVN6YCbNQYLs90YKx3W4le1Fqj8LjknjZG5ya7ClKAeqjpK8Cz99JigjewT9Q3S6E2wfklBbCRs0PryGVm/2Xq3X5p33GTYeV3aSVWrcwi9CUgsSHaEB1kHXzlC6phFFrBwhDM8u6SE0mq2Jtyx9YQAJe+PlvHghN4Msnu0uyzuaeAd8oNkP43lto9wN6VkbxIz18Wdufg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(136003)(396003)(376002)(39860400002)(6506007)(86362001)(6512007)(186003)(8936002)(6636002)(316002)(33656002)(53546011)(6862004)(26005)(37006003)(6486002)(2616005)(2906002)(5660300002)(8676002)(54906003)(76116006)(91956017)(4326008)(36756003)(66476007)(66556008)(66946007)(66446008)(83380400001)(71200400001)(478600001)(64756008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?ZPzQ22ycQrz3u0HxcrY2sT/H0YJN88W4AwjJogqWSA/ogcXEDIutSA2j0cLh?=
 =?us-ascii?Q?7NlSj0+5QlEkMGprksta3YPsH7bHzLqGym40OYQrgc3deARUNuJfp/oSYjr4?=
 =?us-ascii?Q?iGNhQotAGFgk0usCSE2v9NAX1IFsQX5PpI8bToNE6A8r0odOdm4zUnxytRzS?=
 =?us-ascii?Q?JtGQTd2caKZ6K3JHYGdhoh6mzUvhjbfRJDX0tRqri9CACKYz2NqY+V1+8s9C?=
 =?us-ascii?Q?H4SMKICPkLbf+yXiwyPy3NBNzD/eR46b3hbYPbSUJa3Grv8P7/4UyV7/bHWO?=
 =?us-ascii?Q?Z1ZABDiK6wg4lr/5Vegg0B1SmjrN3OkTUQYOBNIO+m0QrgWY8sqACXk2/fxl?=
 =?us-ascii?Q?T6Mkhsdza+Uu/wILqmGmZRX2iOzJLo/9Ko72F7po6PNU7TVOuZoj9yMMHS+1?=
 =?us-ascii?Q?r9/DtN7wiLJXvIp+TsigZZa3b5wMAqR4dhGoCFqWFnsdpMuhwZWpUMCCWQfW?=
 =?us-ascii?Q?WoOOZNKHAqtYygiNV8H71thH3t7OO4CQ3n/sBzfSfPtZckMk3tvFuSyanwPX?=
 =?us-ascii?Q?hZJWF0yDxtKHwik1McPw14WggI+JdejWOntimSstaKkaLjZKM2Ushmbb2VNA?=
 =?us-ascii?Q?WBCjyZOR4/q7dLkvAG/u40j0LjGKL8Cin24APyxJrTvvMagB4V8hidqaEWC5?=
 =?us-ascii?Q?wrBsi148b/RHiemv/Lu2Q6HYA+z6Hl7QXjLbc6v4scOA0aEmJ0yrFwt/92SC?=
 =?us-ascii?Q?iBV1V+cGPb5AjG6fdNRvGnYi8rc+V9fNtjuz9+z9soxFZM/b8nMYcKsb/b6K?=
 =?us-ascii?Q?SxYFqs2l1Xdvv4A2QSB8GQ5Md76vU3SpL9s0pN+ozxXQ4GlQNPfUhmSMvMrB?=
 =?us-ascii?Q?8C9Q+nGSmIkfNBKZdCMXJO6fXPqR0DMDBjnPKrU8ukA8KqceJXLt5ImSjhgU?=
 =?us-ascii?Q?GNqUdwr2LCBsMeT5Q8gSx7+HSE/cAP3AbpjL+mVZnqw8AypTOAJRXtx8GP/F?=
 =?us-ascii?Q?6jOm/rSzZAqnYPL9GpX/rBn53vi6ajxFTPWY5CwSM3zq/Sa7uiHWd6dx7Kdj?=
 =?us-ascii?Q?mvmp?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <6CFF37332EDAFD45BF176D075271F592@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5752
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	50a2ef6a-8e85-4ff6-d78a-08d891ea4706
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KVFWVzL0sLnoffW2gW3W0tiUljNITdauBRKtsDDFXVfu0gD+olV1AhgFdPlXYmGZ0m5wi3Lfol6Q1taPP3LVBI5HZayUqExX3ZQ6kzHIyCQ4EA232R9nLreWwhVp4HPYqc9Ire8lh1HVmb8kQqjU2MeRzdeuZShLdOzWSoto+tQE+eYr4OAQfLaaKLK8nR2CJ1PpD0TGKO9VarPYKqWvZ+iRQnxyG0l/br+nz7705iYQAxPmsAJCxjhUt8VWfIT8c01reriuo2cG2/I4hbizTuooR4vFAL0WnXWDF7cTKMAQr9UeFGaXCq0EHHNOhrG4Gyy+lwCV1R1mCcG5kvMewuT3c6q813ZelBygDV1rOQkQz3rI+UD4iSJEz6ig+90Cbiz7MeQTmRNN2YuemEPyK/uiPrHH7uKRJCxSNC8pp36tcbXJNfejS0faYp4fxJGsdgCemFislCc63bMJ9rqZaas5clWy0TAtHq+UyEIPLfA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(376002)(39860400002)(46966005)(70206006)(4326008)(54906003)(6512007)(81166007)(36756003)(8676002)(70586007)(356005)(6862004)(83380400001)(82740400003)(316002)(8936002)(47076004)(6636002)(33656002)(336012)(86362001)(186003)(26005)(37006003)(478600001)(6506007)(53546011)(2616005)(6486002)(2906002)(82310400003)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 09:04:35.0481
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 20379137-24c6-482c-62b5-08d891ea4ba7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3208



> On 25 Nov 2020, at 18:16, Rahul Singh <Rahul.Singh@arm.com> wrote:
>
> passthrough/pci.c file is common for all architecture, but there is x86
> specific code in this file.
>
> Move x86 specific code to the drivers/passthrough/io.c file to avoid
> compilation error for other architecture.
>
> As drivers/passthrough/io.c is compiled only for x86 move it to
> x86 directory and rename it to hvm.c.
>
> No functional change intended.
>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
>
> Changes in v4:
> - fixed compilation error when CONFIG_HVM is disabled
> - remove iommu_update_ire_from_msi from the patch will send another patch
>  to fix.
>
> ---
> xen/drivers/passthrough/Makefile            |  3 -
> xen/drivers/passthrough/pci.c               | 71 +--------------------
> xen/drivers/passthrough/x86/Makefile        |  1 +
> xen/drivers/passthrough/{io.c => x86/hvm.c} | 66 +++++++++++++++++++
> xen/include/xen/pci.h                       |  9 +++
> 5 files changed, 77 insertions(+), 73 deletions(-)
> rename xen/drivers/passthrough/{io.c => x86/hvm.c} (95%)
>
> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
> index e973e16c74..cc646612c7 100644
> --- a/xen/drivers/passthrough/Makefile
> +++ b/xen/drivers/passthrough/Makefile
> @@ -6,6 +6,3 @@ obj-$(CONFIG_ARM) += arm/
> obj-y += iommu.o
> obj-$(CONFIG_HAS_PCI) += pci.o
> obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
> -
> -x86-$(CONFIG_HVM) := io.o
> -obj-$(CONFIG_X86) += $(x86-y)
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 51e584127e..3c6ab1bcb6 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,9 +14,6 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
>
> -#include <xen/sched.h>
> -#include <xen/pci.h>
> -#include <xen/pci_regs.h>
> #include <xen/pci_ids.h>
> #include <xen/list.h>
> #include <xen/prefetch.h>
> @@ -24,7 +21,6 @@
> #include <xen/irq.h>
> #include <xen/param.h>
> #include <xen/vm_event.h>
> -#include <asm/hvm/irq.h>
> #include <xen/delay.h>
> #include <xen/keyhandler.h>
> #include <xen/event.h>
> @@ -842,71 +838,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>     return ret;
> }
>
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}
> -
> /* Caller should hold the pcidevs_lock */
> static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                            uint8_t devfn)
> @@ -966,7 +897,7 @@ int pci_release_devices(struct domain *d)
>     int ret;
>
>     pcidevs_lock();
> -    ret = pci_clean_dpci_irqs(d);
> +    ret = arch_pci_clean_pirqs(d);
>     if ( ret )
>     {
>         pcidevs_unlock();
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index a70cf9460d..69284a5d19 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,3 @@
> obj-y += ats.o
> obj-y += iommu.o
> +obj-$(CONFIG_HVM) += hvm.o
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/x86/hvm.c
> similarity index 95%
> rename from xen/drivers/passthrough/io.c
> rename to xen/drivers/passthrough/x86/hvm.c
> index 6b1305a3e5..41cfa2e200 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/x86/hvm.c
> @@ -1036,6 +1036,72 @@ unlock:
>     spin_unlock(&d->event_lock);
> }
>
> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +
> +    return 0;
> +}
> +
> /*
>  * Note: 'pt_pirq_softirq_reset' can clear the STATE_SCHED before we get to
>  * doing it. If that is the case we let 'pt_pirq_softirq_reset' do ref-counting.
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 20a54a5bb4..8e3d4d9454 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -208,4 +208,13 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
> void msixtbl_pt_unregister(struct domain *, struct pirq *);
> void msixtbl_pt_cleanup(struct domain *d);
>
> +#ifdef CONFIG_HVM
> +int arch_pci_clean_pirqs(struct domain *d);
> +#else
> +static inline int arch_pci_clean_pirqs(struct domain *d)
> +{
> +    return 0;
> +}
> +#endif /* CONFIG_HVM */
> +
> #endif /* __XEN_PCI_H__ */
> --
> 2.17.1
>



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 09:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 09:06:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38326.71066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDE8-0006cT-Vs; Thu, 26 Nov 2020 09:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38326.71066; Thu, 26 Nov 2020 09:06:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDE8-0006cM-Ss; Thu, 26 Nov 2020 09:06:08 +0000
Received: by outflank-mailman (input) for mailman id 38326;
 Thu, 26 Nov 2020 09:06:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiDE8-0006cH-3I
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 09:06:08 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6394dc6b-b505-4b1a-9eb9-d4554834b73d;
 Thu, 26 Nov 2020 09:06:05 +0000 (UTC)
Received: from DB7PR05CA0013.eurprd05.prod.outlook.com (2603:10a6:10:36::26)
 by DB8PR08MB5420.eurprd08.prod.outlook.com (2603:10a6:10:11a::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.29; Thu, 26 Nov
 2020 09:06:02 +0000
Received: from DB5EUR03FT021.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:36:cafe::ae) by DB7PR05CA0013.outlook.office365.com
 (2603:10a6:10:36::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Thu, 26 Nov 2020 09:06:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT021.mail.protection.outlook.com (10.152.20.238) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 26 Nov 2020 09:06:02 +0000
Received: ("Tessian outbound e0cdfd2b0406:v71");
 Thu, 26 Nov 2020 09:06:02 +0000
Received: from 263d92b9c4c1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DF22D542-F459-4E8D-B199-AD6813BF1D92.1; 
 Thu, 26 Nov 2020 09:05:25 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 263d92b9c4c1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 26 Nov 2020 09:05:25 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5752.eurprd08.prod.outlook.com (2603:10a6:10:1ac::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 26 Nov
 2020 09:05:23 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Thu, 26 Nov 2020
 09:05:23 +0000
X-Inumbo-ID: 6394dc6b-b505-4b1a-9eb9-d4554834b73d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YSRjyHnXyIT9S45rnI/JgSQvqBdzuhnq3nM6vMo9T6w=;
 b=F9WHnFkTRgXgstDWzLVBAJ0iuVW30wgHOcvLjamAfVxbVkn2wVDU1k4TxRTblnHoKiH0JK1RzCUBVhNfkKOi4GKtgTMRHE7WtcS/XdrkD5U3ias1DDOK9wvctgLCqoBJumrQSoMdbctroQbrbDrKHJblsMFPNpoh+VUheoZPnaM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7d493af7f5a0fa21
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LjDhftZzRStdlpwgVNv+ZYKkV6p1yjlA1IAaWDvHxYeuuH2JCYiL+1fAOqjVjkf26r2265tVnUccA3oeDWsdDxCSE5rAmgRE4b0UNkPHk8Ys0oelc5e61q3hVuybWFM1+cosR6Xa8vMMf+llz7LijUwvfV37yVLdqBPO6+wXedJ+w2YiRX+m2/fJ9S9nHEFyVg5p5PWCj1nBDz8LEVRa2X1RYxmb0jwqhC6fkECZF9rIgsAhMFoxFZ+CVWygApRHT2vxc0Mmvd1V/54CE7ZoAcaLL0v9kmyw1DgyX498OBDZVc1SiYqkWQplNeuKYrfDskY9AHxbhT+qI2ueenopZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YSRjyHnXyIT9S45rnI/JgSQvqBdzuhnq3nM6vMo9T6w=;
 b=Q9rvL3SDEO512dWo5VsV3Nklps5hUPosYmUJUrH3EiT90+Dwk8otV6ZcaXal3ZWBv4gaBly6Fb2k8mOoS2J3Eov1aeSzPSKGcpUGz0Xd9dPUPJpxhJsQ+qc9vKrmhBNlTZPBBhtW9mrW4B/daz50zzzLVotqM3DZagdTU5RAAXhY6xdqqR9kZhBFiaAgZBZ0WhFrK/kcYMfwgtjRIIqMxY531v8eVrNrP4LJ4bZ7MEKARyxH7C4p7Wff93vmNCRD00WgpOpNHp+J+nW+boUcQvNFAtH4zb77/2MkcwS8siPAtCZCYZ+bM1UbljLVkYoAlNjHoi7Jd0Js0mmajhWgdg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v4 2/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Topic: [PATCH v4 2/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
Thread-Index: AQHWw1cqKGlzLIMerka7j78FZA7vhanaH92A
Date: Thu, 26 Nov 2020 09:05:23 +0000
Message-ID: <AF56CBF8-CAE2-4296-BEE9-0DED2CD6A648@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <2ce402cfae6d90433626bcdc6314e5ee5dda103f.1606326929.git.rahul.singh@arm.com>
In-Reply-To:
 <2ce402cfae6d90433626bcdc6314e5ee5dda103f.1606326929.git.rahul.singh@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 56c3c183-d7b0-44e5-426d-08d891ea7ffc
x-ms-traffictypediagnostic: DBAPR08MB5752:|DB8PR08MB5420:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB54209DCDBC8BC31B1631DB389DF90@DB8PR08MB5420.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4303;OLM:4303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D1E794986DEACE4FBC1330722C8864B4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5752
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	649ee203-378e-43e9-8ba9-08d891ea689e
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 09:06:02.8478
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 56c3c183-d7b0-44e5-426d-08d891ea7ffc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5420



> On 25 Nov 2020, at 18:16, Rahul Singh <Rahul.Singh@arm.com> wrote:
> 
> If mem-sharing, mem-paging, or log-dirty functionality is not enabled
> for an architecture when HAS_PCI is enabled, the compiler will throw an
> error.
> 
> Move the code to an x86-specific file to fix the compilation error.
> 
> Also, modify the code to use likely() in place of unlikley() for each
> condition to make the code more optimized.
> 
> No functional change intended.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I guess the small typo could be fixed directly by the committer :-)

Cheers
Bertrand


> ---
> 
> Changes in v4:
> - fixed minor comments
> 
> ---
> xen/drivers/passthrough/pci.c       |  8 +-------
> xen/drivers/passthrough/x86/iommu.c | 13 +++++++++++++
> xen/include/xen/iommu.h             |  2 ++
> 3 files changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 3c6ab1bcb6..4c21655b7d 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -20,7 +20,6 @@
> #include <xen/iommu.h>
> #include <xen/irq.h>
> #include <xen/param.h>
> -#include <xen/vm_event.h>
> #include <xen/delay.h>
> #include <xen/keyhandler.h>
> #include <xen/event.h>
> @@ -1418,12 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>     if ( !is_iommu_enabled(d) )
>         return 0;
> 
> -    /* Prevent device assign if mem paging or mem sharing have been
> -     * enabled for this domain */
> -    if ( d != dom_io &&
> -         unlikely(mem_sharing_enabled(d) ||
> -                  vm_event_check_ring(d->vm_event_paging) ||
> -                  p2m_get_hostp2m(d)->global_logdirty) )
> +    if( !arch_iommu_use_permitted(d) )
>         return -EXDEV;
> 
>     /* device_assigned() should already have cleared the device for assignment */
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f17b1820f4..cea1032b3d 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -18,6 +18,7 @@
> #include <xen/guest_access.h>
> #include <xen/event.h>
> #include <xen/softirq.h>
> +#include <xen/vm_event.h>
> #include <xsm/xsm.h>
> 
> #include <asm/hvm/io.h>
> @@ -308,6 +309,18 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>     return pg;
> }
> 
> +bool arch_iommu_use_permitted(const struct domain *d)
> +{
> +    /*
> +     * Prevent device assign if mem paging, mem sharing or log-dirty
> +     * have been enabled for this domain.
> +     */
> +    return d == dom_io ||
> +           (likely(!mem_sharing_enabled(d)) &&
> +            likely(!vm_event_check_ring(d->vm_event_paging)) &&
> +            likely(!p2m_get_hostp2m(d)->global_logdirty));
> +}
> +
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 191021870f..056eaa09fc 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -381,6 +381,8 @@ DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
> extern struct spinlock iommu_pt_cleanup_lock;
> extern struct page_list_head iommu_pt_cleanup_list;
> 
> +bool arch_iommu_use_permitted(const struct domain *d);
> +
> #endif /* _IOMMU_H_ */
> 
> /*
> --
> 2.17.1
> 



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 09:09:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 09:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38340.71078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDGt-0006s1-FL; Thu, 26 Nov 2020 09:08:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38340.71078; Thu, 26 Nov 2020 09:08:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDGt-0006ru-Bg; Thu, 26 Nov 2020 09:08:59 +0000
Received: by outflank-mailman (input) for mailman id 38340;
 Thu, 26 Nov 2020 09:08:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiDGs-0006ro-Cf
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 09:08:58 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.45]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b23c29c5-6e38-41ed-a83d-cfb76bcf8eb4;
 Thu, 26 Nov 2020 09:08:57 +0000 (UTC)
Received: from DB8PR03CA0028.eurprd03.prod.outlook.com (2603:10a6:10:be::41)
 by DB8PR08MB4122.eurprd08.prod.outlook.com (2603:10a6:10:ac::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Thu, 26 Nov
 2020 09:08:54 +0000
Received: from DB5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::dd) by DB8PR03CA0028.outlook.office365.com
 (2603:10a6:10:be::41) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Thu, 26 Nov 2020 09:08:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT006.mail.protection.outlook.com (10.152.20.106) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 26 Nov 2020 09:08:53 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Thu, 26 Nov 2020 09:08:53 +0000
Received: from 9e185541e911.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 028BB808-99E6-40C2-879F-9B86C614F5DF.1; 
 Thu, 26 Nov 2020 09:08:48 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9e185541e911.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 26 Nov 2020 09:08:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB4137.eurprd08.prod.outlook.com (2603:10a6:10:a5::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 26 Nov
 2020 09:08:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.030; Thu, 26 Nov 2020
 09:08:45 +0000
X-Inumbo-ID: b23c29c5-6e38-41ed-a83d-cfb76bcf8eb4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZZc0iU4XJgomLAA7yeWFerjA4qG1+thulaxjrCx6OLU=;
 b=ufXTs9BX1HmZnaftQJNLdW7swQ5oGtz1r2pXvOZKZdwQlNR4fxBsvqe5Km1+OHpRF+J/+KLX4GVBFZQvK7rdCtmBn4TyA0DrXb/ojEUrTYqXD/uRPMsP5gowddUeoFsdvGqbVAEHmmLEBqmo47I4txK1TdXflWr8mRWf8BUdfhY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: ff531dbd146ce837
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BItqkU2mZv7R6POemE2PvrIpnJqgZO52yjb+CkdWqEewglBf0bPzLVQlKBye3QUt0j0w22cuyXAs4V8e77TnCeXLzYLv17+EOh15T0TEmwitNInPvsFbjQEKKhnC0jc/8UtTLZTrDiRRu3k18rh9x/9l/CHLKG76a1wqrwAQ8wUhzRGvfFOMkA3vDYCtxXZTV9S7OCf31R7WqEen6s5lA1tQpSoV1bnIoquTgedjxC0CJjU5wncNxOf/nC4JS9/huUrLZ/mEpGzGkgRWFMkXGJ3a+ndFdByFdBKJboVjrJz5JWy1E+Ydo9Uni1XvT7EAsoRcgsxxj4hJClZb0Z21HQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZZc0iU4XJgomLAA7yeWFerjA4qG1+thulaxjrCx6OLU=;
 b=efeo3ZWuc9XKtNxznUzHSAzU4XfymXrt4uPiAGXKQk6vwxbTuSsRo6ID8Zh9AX/J849UwLQ+DAraD6GW3WLCxej2qDCjs04dFb4imXgmx8PZE9Ld/C0HXHfoSY1mkSOa6Ycp48OsGhHSy1Pa8//y+gyQfFoXEjjcKyUsjNMzSSWAXGKuYYWdzFLGwav7X/PFGVy/nvUTXani7LSJxZ7EOf6DEN+HrH3LWEmAyKcAx7kqwpLwKyeLQGbCZ7kXJTwOninvUoOeH86z1R6lxcKyjT2ENmA82juLlVr2gx7LIXFOPbHUar3uUT3fqrmMC3k2p1jbhuQ2PkvOxrsnfaXN/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Topic: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Index: AQHWw1cvptOzj+DOhEKKe2XksTlGiKnZKPiAgAD31YA=
Date: Thu, 26 Nov 2020 09:08:45 +0000
Message-ID: <909A3998-A7C1-4A4F-A842-BBC53EC662D6@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
 <A24BBAFF-2D6C-448A-955E-4471902C6413@arm.com>
In-Reply-To: <A24BBAFF-2D6C-448A-955E-4471902C6413@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1e84ce56-353a-4418-8bfa-08d891eae5dc
x-ms-traffictypediagnostic: DB8PR08MB4137:|DB8PR08MB4122:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB4122A3EABD0336A609540EA79DF90@DB8PR08MB4122.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <E13AE35FDA730D4A9330D4DFF86C0933@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4137
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d7e19a34-3a08-4fd7-cb9e-08d891eae124
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 09:08:53.7666
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1e84ce56-353a-4418-8bfa-08d891eae5dc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4122

DQoNCj4gT24gMjUgTm92IDIwMjAsIGF0IDE4OjIxLCBSYWh1bCBTaW5naCA8UmFodWwuU2luZ2hA
YXJtLmNvbT4gd3JvdGU6DQo+IA0KPiBIZWxsbyBKdWxpZW4sDQo+IA0KPj4gT24gMjUgTm92IDIw
MjAsIGF0IDY6MTYgcG0sIFJhaHVsIFNpbmdoIDxyYWh1bC5zaW5naEBhcm0uY29tPiB3cm90ZToN
Cj4+IA0KPj4gVGhlIE5TMTY1NTAgZHJpdmVyIGlzIGFzc3VtaW5nIHRoYXQgTlMxNjU1MCBQQ0kg
Y2FyZCBhcmUgdXNhYmxlIGlmIHRoZQ0KPj4gYXJjaGl0ZWN0dXJlIHN1cHBvcnRzIFBDSSAoaS5l
LiBDT05GSUdfSEFTX1BDST15KS4gSG93ZXZlciwgdGhlIGNvZGUgaXMNCj4+IHZlcnkgeDg2IGZv
Y3VzIGFuZCB3aWxsIGZhaWwgdG8gYnVpbGQgb24gQXJtICgvIVwgaXQgaXMgbm90IGFsbCB0aGUN
Cj4+IGVycm9ycyk6DQo+PiANCj4+IG5zMTY1NTAuYzogSW4gZnVuY3Rpb24g4oCYbnMxNjU1MF9p
bml0X2lyceKAmToNCj4+IG5zMTY1NTAuYzo3MjY6MjE6IGVycm9yOiBpbXBsaWNpdCBkZWNsYXJh
dGlvbiBvZiBmdW5jdGlvbiDigJhjcmVhdGVfaXJx4oCZOw0KPj4gZGlkIHlvdSBtZWFuIOKAmHJl
bGVhc2VfaXJx4oCZPyBbLVdlcnJvcj1pbXBsaWNpdC1mdW5jdGlvbi1kZWNsYXJhdGlvbl0NCj4+
ICAgICAgICAgdWFydC0+aXJxID0gY3JlYXRlX2lycSgwLCBmYWxzZSk7DQo+PiAgICAgICAgICAg
ICAgICAgICAgIF5+fn5+fn5+fn4NCj4+ICAgICAgICAgICAgICAgICAgICAgcmVsZWFzZV9pcnEN
Cj4+IG5zMTY1NTAuYzo3MjY6MjE6IGVycm9yOiBuZXN0ZWQgZXh0ZXJuIGRlY2xhcmF0aW9uIG9m
IOKAmGNyZWF0ZV9pcnHigJkNCj4+IFstV2Vycm9yPW5lc3RlZC1leHRlcm5zXQ0KPj4gbnMxNjU1
MC5jOiBJbiBmdW5jdGlvbiDigJhuczE2NTUwX2luaXRfcG9zdGlyceKAmToNCj4+IG5zMTY1NTAu
Yzo3Njg6MzM6IGVycm9yOiDigJhtbWlvX3JvX3Jhbmdlc+KAmSB1bmRlY2xhcmVkIChmaXJzdCB1
c2UgaW4gdGhpcw0KPj4gZnVuY3Rpb24pOyBkaWQgeW91IG1lYW4g4oCYbW1pb19oYW5kbGVy4oCZ
Pw0KPj4gICAgICAgICAgICAgIHJhbmdlc2V0X2FkZF9yYW5nZShtbWlvX3JvX3JhbmdlcywgdWFy
dC0+aW9fYmFzZSwNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXn5+fn5+fn5+
fn5+fn4NCj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbW1pb19oYW5kbGVyDQo+
PiBuczE2NTUwLmM6NzY4OjMzOiBub3RlOiBlYWNoIHVuZGVjbGFyZWQgaWRlbnRpZmllciBpcyBy
ZXBvcnRlZCBvbmx5IG9uY2UNCj4+IGZvciBlYWNoIGZ1bmN0aW9uIGl0IGFwcGVhcnMgaW4NCj4+
IG5zMTY1NTAuYzo3ODA6MjA6IGVycm9yOiB2YXJpYWJsZSDigJhtc2nigJkgaGFzIGluaXRpYWxp
emVyIGJ1dCBpbmNvbXBsZXRlDQo+PiB0eXBlDQo+PiAgICAgICAgICAgICBzdHJ1Y3QgbXNpX2lu
Zm8gbXNpID0gew0KPj4gICAgICAgICAgICAgICAgICAgIF5+fn5+fn5+DQo+PiANCj4+IEVuYWJs
aW5nIHN1cHBvcnQgZm9yIE5TMTY1NTAgUENJIGNhcmQgb24gQXJtIHdvdWxkIHJlcXVpcmUgbW9y
ZSBwbHVtYmluZw0KPj4gaW4gYWRkaXRpb24gdG8gZml4aW5nIHRoZSBjb21waWxhdGlvbiBlcnJv
ci4NCj4+IA0KPj4gQXJtIHN5c3RlbXMgdGVuZCB0byBoYXZlIHBsYXRmb3JtIFVBUlQgYXZhaWxh
YmxlIHN1Y2ggYXMgTlMxNjU1MCwgUEwwMTEuDQo+PiBTbyB0aGVyZSBhcmUgbGltaXRlZCByZWFz
b25zIHRvIGdldCBOUzE2NTUwIFBDSSBzdXBwb3J0IGZvciBub3cgb24gQXJtLg0KPj4gDQo+PiBH
dWFyZCBhbGwgcmVtYWluaW5nIFBDSSBjb2RlIHRoYXQgaXMgbm90IHVuZGVyIHg4NiBmbGFnIHdp
dGggQ09ORklHX1g4Ni4NCj4+IA0KPj4gTm8gZnVuY3Rpb25hbCBjaGFuZ2UgaW50ZW5kZWQuDQo+
PiANCj4+IFNpZ25lZC1vZmYtYnk6IFJhaHVsIFNpbmdoIDxyYWh1bC5zaW5naEBhcm0uY29tPg0K
UmV2aWV3ZWQtYnk6IEJlcnRyYW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1aXNAYXJtLmNvbT4N
Cg0KPiANCj4gU29ycnkgSSBtaXNzZWQgdG8gYWRkIHRoZSBzaWduZWQgb2ZmIGZvciB0aGUgY29t
bWl0IG1zZy4gSSB3aWxsIHNlbmQgdGhlIG5leHQgdmVyc2lvbiBvbmNlIHJldmlld2VkIGRvbmUu
DQo+IFNpZ25lZC1vZmYtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+DQo+IA0K
DQpJIGd1ZXNzIHRoZSBjb21taXRlciBjYW4gYWRkIHRoaXMgd2hpbGUgY29tbWl0aW5nIHRoaXMg
cGF0Y2ggPw0KDQpSZWdhcmRzDQpCZXJ0cmFuZA0KDQo+IFJlZ2FyZHMsDQo+IFJhaHVsDQo+PiAt
LS0NCj4+IA0KPj4gQ2hhbmdlcyBpbiB2NDoNCj4+IC0gQXMgcGVyIHRoZSBkaXNjdXNzaW9uIGd1
YXJkIGFsbCByZW1haW5pbmcgUENJIGNvZGUgd2l0aCBDT05GSUdfWDg2DQo+PiANCj4+IC0tLQ0K
Pj4geGVuL2RyaXZlcnMvY2hhci9uczE2NTUwLmMgfCAxNiArKysrKysrKy0tLS0tLS0tDQo+PiAx
IGZpbGUgY2hhbmdlZCwgOCBpbnNlcnRpb25zKCspLCA4IGRlbGV0aW9ucygtKQ0KPj4gDQo+PiBk
aWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvY2hhci9uczE2NTUwLmMgYi94ZW4vZHJpdmVycy9jaGFy
L25zMTY1NTAuYw0KPj4gaW5kZXggOTIzNWQ4NTRmZS4uMjZlNjAxODU3YSAxMDA2NDQNCj4+IC0t
LSBhL3hlbi9kcml2ZXJzL2NoYXIvbnMxNjU1MC5jDQo+PiArKysgYi94ZW4vZHJpdmVycy9jaGFy
L25zMTY1NTAuYw0KPj4gQEAgLTE2LDcgKzE2LDcgQEANCj4+ICNpbmNsdWRlIDx4ZW4vdGltZXIu
aD4NCj4+ICNpbmNsdWRlIDx4ZW4vc2VyaWFsLmg+DQo+PiAjaW5jbHVkZSA8eGVuL2lvY2FwLmg+
DQo+PiAtI2lmZGVmIENPTkZJR19IQVNfUENJDQo+PiArI2lmIGRlZmluZWQoQ09ORklHX1g4Nikg
JiYgZGVmaW5lZChDT05GSUdfSEFTX1BDSSkNCj4+ICNpbmNsdWRlIDx4ZW4vcGNpLmg+DQo+PiAj
aW5jbHVkZSA8eGVuL3BjaV9yZWdzLmg+DQo+PiAjaW5jbHVkZSA8eGVuL3BjaV9pZHMuaD4NCj4+
IEBAIC01MSw3ICs1MSw3IEBAIHN0YXRpYyBzdHJ1Y3QgbnMxNjU1MCB7DQo+PiAgICB1bnNpZ25l
ZCBpbnQgdGltZW91dF9tczsNCj4+ICAgIGJvb2xfdCBpbnRyX3dvcmtzOw0KPj4gICAgYm9vbF90
IGR3X3Vzcl9ic3k7DQo+PiAtI2lmZGVmIENPTkZJR19IQVNfUENJDQo+PiArI2lmIGRlZmluZWQo
Q09ORklHX1g4NikgJiYgZGVmaW5lZChDT05GSUdfSEFTX1BDSSkNCj4+ICAgIC8qIFBDSSBjYXJk
IHBhcmFtZXRlcnMuICovDQo+PiAgICBib29sX3QgcGJfYmRmX2VuYWJsZTsgICAvKiBpZiA9MSwg
cGItYmRmIGVmZmVjdGl2ZSwgcG9ydCBiZWhpbmQgYnJpZGdlICovDQo+PiAgICBib29sX3QgcHNf
YmRmX2VuYWJsZTsgICAvKiBpZiA9MSwgcHNfYmRmIGVmZmVjdGl2ZSwgcG9ydCBvbiBwY2kgY2Fy
ZCAqLw0KPj4gQEAgLTY2LDcgKzY2LDcgQEAgc3RhdGljIHN0cnVjdCBuczE2NTUwIHsNCj4+ICNl
bmRpZg0KPj4gfSBuczE2NTUwX2NvbVsyXSA9IHsgeyAwIH0gfTsNCj4+IA0KPj4gLSNpZmRlZiBD
T05GSUdfSEFTX1BDSQ0KPj4gKyNpZiBkZWZpbmVkKENPTkZJR19YODYpICYmIGRlZmluZWQoQ09O
RklHX0hBU19QQ0kpDQo+PiBzdHJ1Y3QgbnMxNjU1MF9jb25maWcgew0KPj4gICAgdTE2IHZlbmRv
cl9pZDsNCj4+ICAgIHUxNiBkZXZfaWQ7DQo+PiBAQCAtMjU2LDcgKzI1Niw3IEBAIHN0YXRpYyBp
bnQgbnMxNjU1MF9nZXRjKHN0cnVjdCBzZXJpYWxfcG9ydCAqcG9ydCwgY2hhciAqcGMpDQo+PiAN
Cj4+IHN0YXRpYyB2b2lkIHBjaV9zZXJpYWxfZWFybHlfaW5pdChzdHJ1Y3QgbnMxNjU1MCAqdWFy
dCkNCj4+IHsNCj4+IC0jaWZkZWYgQ09ORklHX0hBU19QQ0kNCj4+ICsjaWYgZGVmaW5lZChDT05G
SUdfWDg2KSAmJiBkZWZpbmVkKENPTkZJR19IQVNfUENJKQ0KPj4gICAgaWYgKCAhdWFydC0+cHNf
YmRmX2VuYWJsZSB8fCB1YXJ0LT5pb19iYXNlID49IDB4MTAwMDAgKQ0KPj4gICAgICAgIHJldHVy
bjsNCj4+IA0KPj4gQEAgLTM1NSw3ICszNTUsNyBAQCBzdGF0aWMgdm9pZCBfX2luaXQgbnMxNjU1
MF9pbml0X3ByZWlycShzdHJ1Y3Qgc2VyaWFsX3BvcnQgKnBvcnQpDQo+PiANCj4+IHN0YXRpYyB2
b2lkIF9faW5pdCBuczE2NTUwX2luaXRfaXJxKHN0cnVjdCBzZXJpYWxfcG9ydCAqcG9ydCkNCj4+
IHsNCj4+IC0jaWZkZWYgQ09ORklHX0hBU19QQ0kNCj4+ICsjaWYgZGVmaW5lZChDT05GSUdfWDg2
KSAmJiBkZWZpbmVkKENPTkZJR19IQVNfUENJKQ0KPj4gICAgc3RydWN0IG5zMTY1NTAgKnVhcnQg
PSBwb3J0LT51YXJ0Ow0KPj4gDQo+PiAgICBpZiAoIHVhcnQtPm1zaSApDQo+PiBAQCAtMzk3LDcg
KzM5Nyw3IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBuczE2NTUwX2luaXRfcG9zdGlycShzdHJ1Y3Qg
c2VyaWFsX3BvcnQgKnBvcnQpDQo+PiAgICB1YXJ0LT50aW1lb3V0X21zID0gbWF4X3QoDQo+PiAg
ICAgICAgdW5zaWduZWQgaW50LCAxLCAoYml0cyAqIHVhcnQtPmZpZm9fc2l6ZSAqIDEwMDApIC8g
dWFydC0+YmF1ZCk7DQo+PiANCj4+IC0jaWZkZWYgQ09ORklHX0hBU19QQ0kNCj4+ICsjaWYgZGVm
aW5lZChDT05GSUdfWDg2KSAmJiBkZWZpbmVkKENPTkZJR19IQVNfUENJKQ0KPj4gICAgaWYgKCB1
YXJ0LT5iYXIgfHwgdWFydC0+cHNfYmRmX2VuYWJsZSApDQo+PiAgICB7DQo+PiAgICAgICAgaWYg
KCB1YXJ0LT5wYXJhbSAmJiB1YXJ0LT5wYXJhbS0+bW1pbyAmJg0KPj4gQEAgLTQ3Nyw3ICs0Nzcs
NyBAQCBzdGF0aWMgdm9pZCBuczE2NTUwX3N1c3BlbmQoc3RydWN0IHNlcmlhbF9wb3J0ICpwb3J0
KQ0KPj4gDQo+PiAgICBzdG9wX3RpbWVyKCZ1YXJ0LT50aW1lcik7DQo+PiANCj4+IC0jaWZkZWYg
Q09ORklHX0hBU19QQ0kNCj4+ICsjaWYgZGVmaW5lZChDT05GSUdfWDg2KSAmJiBkZWZpbmVkKENP
TkZJR19IQVNfUENJKQ0KPj4gICAgaWYgKCB1YXJ0LT5iYXIgKQ0KPj4gICAgICAgdWFydC0+Y3Ig
PSBwY2lfY29uZl9yZWFkMTYoUENJX1NCREYoMCwgdWFydC0+cHNfYmRmWzBdLCB1YXJ0LT5wc19i
ZGZbMV0sDQo+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1YXJ0LT5wc19iZGZb
Ml0pLCBQQ0lfQ09NTUFORCk7DQo+PiBAQCAtNDg2LDcgKzQ4Niw3IEBAIHN0YXRpYyB2b2lkIG5z
MTY1NTBfc3VzcGVuZChzdHJ1Y3Qgc2VyaWFsX3BvcnQgKnBvcnQpDQo+PiANCj4+IHN0YXRpYyB2
b2lkIF9uczE2NTUwX3Jlc3VtZShzdHJ1Y3Qgc2VyaWFsX3BvcnQgKnBvcnQpDQo+PiB7DQo+PiAt
I2lmZGVmIENPTkZJR19IQVNfUENJDQo+PiArI2lmIGRlZmluZWQoQ09ORklHX1g4NikgJiYgZGVm
aW5lZChDT05GSUdfSEFTX1BDSSkNCj4+ICAgIHN0cnVjdCBuczE2NTUwICp1YXJ0ID0gcG9ydC0+
dWFydDsNCj4+IA0KPj4gICAgaWYgKCB1YXJ0LT5iYXIgKQ0KPj4gLS0NCj4+IDIuMTcuMQ0KDQo=


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 09:40:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 09:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38365.71090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDl2-0001w8-1s; Thu, 26 Nov 2020 09:40:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38365.71090; Thu, 26 Nov 2020 09:40:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiDl1-0001w1-Ud; Thu, 26 Nov 2020 09:40:07 +0000
Received: by outflank-mailman (input) for mailman id 38365;
 Thu, 26 Nov 2020 09:40:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiDl0-0001pL-4t; Thu, 26 Nov 2020 09:40:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiDkz-0005wu-Rr; Thu, 26 Nov 2020 09:40:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiDkz-0006Sg-Jr; Thu, 26 Nov 2020 09:40:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiDkz-0004Qs-JN; Thu, 26 Nov 2020 09:40:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BHauw0g7KhUpD+uqm5djmPJcHPAHrNeTnUEL0DvdwLU=; b=rZGnrHqfYmvQPoTGuPHUtZOMQo
	fwuAvCVZhy/40TGVLHq5RXsLqa0Y1zKuy4gDjS47SQQPvCToCfEShLqgPHHGsLIvhlIP7FgwJu0Zt
	XngkQ7BLhikivMNmORE0RmvmPn3E4HWbAHCTNrBH0svXQeOWhxPCfKjw/e8Vqyrn4kWk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157022-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157022: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=259b43673fd360c7efe40631a70105e71ff10fc8
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 09:40:05 +0000

flight 157022 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157022/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              259b43673fd360c7efe40631a70105e71ff10fc8
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  139 days
Failing since        151818  2020-07-11 04:18:52 Z  138 days  133 attempts
Testing same since   157022  2020-11-26 04:20:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29494 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 09:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 09:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38392.71104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiE49-00039W-Ot; Thu, 26 Nov 2020 09:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38392.71104; Thu, 26 Nov 2020 09:59:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiE49-00039P-Lj; Thu, 26 Nov 2020 09:59:53 +0000
Received: by outflank-mailman (input) for mailman id 38392;
 Thu, 26 Nov 2020 09:59:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uoLz=FA=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kiE48-00039K-TW
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 09:59:52 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d16e5cb3-e941-4945-9622-1a1d50805e99;
 Thu, 26 Nov 2020 09:59:51 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 9883168B02; Thu, 26 Nov 2020 10:59:48 +0100 (CET)
X-Inumbo-ID: d16e5cb3-e941-4945-9622-1a1d50805e99
Date: Thu, 26 Nov 2020 10:59:48 +0100
From: Christoph Hellwig <hch@lst.de>
To: Minchan Kim <minchan@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Justin Sanders <justin@coraid.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ilya Dryomov <idryomov@gmail.com>,
	Jack Wang <jinpu.wang@cloud.ionos.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Mike Snitzer <snitzer@redhat.com>, Song Liu <song@kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	ceph-devel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: [PATCH 60/78] zram: remove the claim mechanism
Message-ID: <20201126095948.GA24035@lst.de>
References: <20201116145809.410558-1-hch@lst.de> <20201116145809.410558-61-hch@lst.de> <20201126011107.GA57352@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201126011107.GA57352@google.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Nov 25, 2020 at 05:11:07PM -0800, Minchan Kim wrote:
> With this patch, how deal with the race?
> 
> CPU 1                                     CPU 2
> 
> hot_remove_store
>   zram_remove
>     zram_busy
>       return -EBUSY
>                                          open /dev/zram0
>     del_gendisk
>     zram_reset and destroy

Yeah, it looks like zram does not really handle hot unplugging, unlike
other drivers.  So I've dropped this one for now.


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 10:53:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 10:53:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38406.71123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiEtg-0008TG-4q; Thu, 26 Nov 2020 10:53:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38406.71123; Thu, 26 Nov 2020 10:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiEtg-0008T9-1p; Thu, 26 Nov 2020 10:53:08 +0000
Received: by outflank-mailman (input) for mailman id 38406;
 Thu, 26 Nov 2020 10:53:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiEte-0008T1-FU; Thu, 26 Nov 2020 10:53:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiEte-0007UM-4D; Thu, 26 Nov 2020 10:53:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiEtd-0001UL-RN; Thu, 26 Nov 2020 10:53:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiEtd-0004eS-Qr; Thu, 26 Nov 2020 10:53:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ka9nK4SDAaiaGgLmYNRMRkqsO5jdajw7sNIeOvjVpeI=; b=7IEm7RP7rTUxy5isVdMa2boM6P
	yYF+GxZxBMRN7rbvCAgPq3CxZIDCaO3oYa8ZKS5U0SM/xs/ISDXHBhuAZXScVdDc8DhS5ICEyIaie
	ta+9Vg6pA6KXAOmSyQ6OywNF30jZBhbWZ5U5evbOzFWDidVZRohX9ygEwAaOr6Vmrv1c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157016-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157016: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
X-Osstest-Versions-That:
    xen=8147e00e4fbfcc43b665dc6bf279b204c501ba04
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 10:53:05 +0000

flight 157016 xen-unstable real [real]
flight 157026 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157016/
http://logs.test-lab.xenproject.org/osstest/logs/157026/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail pass in 157026-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156992
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156992
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156992
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156992
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156992
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156992
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156992
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156992
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156992
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156992
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156992
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593
baseline version:
 xen                  8147e00e4fbfcc43b665dc6bf279b204c501ba04

Last test of basis   156992  2020-11-24 14:07:39 Z    1 days
Testing same since   157016  2020-11-25 23:29:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8147e00e4f..181f2c224c  181f2c224ccd0a2900d6ae94ec390a546731f593 -> master


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 10:55:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 10:55:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38414.71142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiEvZ-0000D5-Nt; Thu, 26 Nov 2020 10:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38414.71142; Thu, 26 Nov 2020 10:55:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiEvZ-0000Cy-Ki; Thu, 26 Nov 2020 10:55:05 +0000
Received: by outflank-mailman (input) for mailman id 38414;
 Thu, 26 Nov 2020 10:55:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiEvY-0000Cs-TF
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 10:55:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiEvY-0007WG-PE; Thu, 26 Nov 2020 10:55:04 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiEvY-00017r-Iq; Thu, 26 Nov 2020 10:55:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DFhUYVFFlXb3RIsaC7qTxudE9a8KQjuC14ZVwj2F9LU=; b=xBCYTt68W1vkRr1FGLL2xMxSAi
	Eq2XKNoQow9+pWhCGxQlCy+9UrdtrXiRT2CldzV7AG+dGxgbA7rjLB9oH5ZIGdS5GJcTmVptoKWUy
	ELYzocwXD+TUE1WAV343AILR6I+iaCOchBtoUu9donDGgrAvkDAroInUGyuGY/Xr1aTA=;
Subject: Re: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
To: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>
Cc: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andre Przywara <Andre.Przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>,
 nd <nd@arm.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
 <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
 <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8f47313a-f47a-520d-3845-3f2198fce5b4@xen.org>
Date: Thu, 26 Nov 2020 10:55:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=gbk; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Wei,

Your e-mail font seems to be different to the usual plain text one. Are 
you sending the e-mail using HTML by any chance?

On 26/11/2020 02:07, Wei Chen wrote:
> Hi Stefano,
> 
>> -----Original Message-----
>> From: Stefano Stabellini <sstabellini@kernel.org>
>> Sent: 20201126 8:00
>> To: Wei Chen <Wei.Chen@arm.com>
>> Cc: Julien Grall <julien@xen.org>; Penny Zheng <Penny.Zheng@arm.com>; xen-
>> devel@lists.xenproject.org; sstabellini@kernel.org; Andre Przywara
>> <Andre.Przywara@arm.com>; Bertrand Marquis <Bertrand.Marquis@arm.com>;
>> Kaly Xin <Kaly.Xin@arm.com>; nd <nd@arm.com>
>> Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
>>
>> Resuming this old thread.
>>
>> On Fri, 13 Nov 2020, Wei Chen wrote:
>>>> Hi,
>>>>
>>>> On 09/11/2020 08:21, Penny Zheng wrote:
>>>>> A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
>>>>> might return a wrong value when the counter crosses a 32-bit boundary.
>>>>>
>>>>> So far there is no case where Xen itself accesses CNTVCT_EL0, and
>>>>> dealing with that register should in any case be the guest OS's
>>>>> responsibility.
>>>>>
>>>>> But there are several cases in Xen involving reading CNTPCT, so a
>>>>> possible workaround is to perform the read twice and return one
>>>>> value or the other depending on whether a transition has taken
>>>>> place.
>>>>>
>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>>>
>>>> Acked-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> On a related topic, do we need a fix similar to Linux commit
>>>> 75a19a0202db "arm64: arch_timer: Ensure counter register reads occur
>>>> with seqlock held"?
>>>>
>>>
>>> I took a look at this Linux commit; it seems to prevent the counter
>>> read from being speculated around the seqlock. Using enforced ordering
>>> instead of an ISB after the counter read seems to be for performance
>>> reasons.
>>>
>>> I found that you placed an ISB before the counter read in get_cycles
>>> to prevent the counter value from being speculated, but you haven't
>>> placed a second ISB after the read. Is that because we don't use
>>> get_cycles in a seqlock critical section (maybe I didn't find the
>>> right place)? So do we need to fix it, or do you prefer to fix it now
>>> for future usage?
>>
>> Looking at the call sites, it doesn't look like we need any ISB after
>> get_cycles as it is not used in any critical context. There is also a
>> data dependency on the value returned by it.

I am assuming you looked at all the users of NOW(). Is that right?

>>
>> So I am thinking we don't need any fix. At most we need an in-code comment?
> 
> I agree with you on adding an in-code comment. It will remind us in the
> future when we use get_cycles in a critical context. Adding the barrier
> now would probably only lead to pointless performance degradation.

I read this as saying there would be no performance impact if we added 
the ordering now. Is that what you intended to say?

> Because Xen may never use it in a similar
> scenario.

Right, there are two potential approaches here:
   - Wait until there is a user
       * Pros: Doesn't impact performance
       * Cons: We rely on users/reviewers to catch any misuse
   - Harden the code now
       * Pros: Less risk of inadvertently introducing a bug
       * Cons: May impact performance

In general, I prefer the code to be hardened by default if the 
performance impact is limited. This may save us hours of 
debugging/reproducing bugs.

In addition, AFAICT, the x86 version of get_cycles() is already able to 
provide that ordering. So there is a chance that code may rely on it.

While I don't necessarily agree with adding barriers everywhere by 
default (that may have a big impact on some platforms), I do think it is 
better to have an accurate number of cycles.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 11:20:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 11:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38437.71178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFKB-00037o-BC; Thu, 26 Nov 2020 11:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38437.71178; Thu, 26 Nov 2020 11:20:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFKB-00037h-7T; Thu, 26 Nov 2020 11:20:31 +0000
Received: by outflank-mailman (input) for mailman id 38437;
 Thu, 26 Nov 2020 11:20:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EevG=FA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kiFKA-00037Z-IT
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 11:20:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79368db0-8bd5-4416-aac3-fbf7b067b746;
 Thu, 26 Nov 2020 11:20:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A2846AE38;
 Thu, 26 Nov 2020 11:20:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606389628; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2HO8W78++dtjOUhVV1AqKnxQOikqYpyyy/sH+7IWvtA=;
	b=WsA9+GcRhpaGcy9SE5oQ8AJ3D5hkmWFJ3mSmHyw6WOe6GvGKtgbF7xcMxXR8VcbnqRmptj
	Zca971FxbKOjopOHH1psruq4pqq+EU0UiF++Mf/sdi55qFcOCwiIRJCfsvQhS2G4e6jPfa
	PmQ3bo3Omwz3+WpEZjb7YI2+wX1dQgk=
Subject: Re: [PATCH v3] xen: add support for automatic debug key actions in
 case of crash
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20201126080340.6154-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <22190c77-eb35-5b72-7d72-34800c3f052f@suse.com>
Date: Thu, 26 Nov 2020 12:20:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201126080340.6154-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.11.2020 09:03, Juergen Gross wrote:
> When the host crashes it would sometimes be nice to have additional
> debug data available which could be produced via debug keys, but
> halting the server for manual intervention might be impossible due to
> the need to reboot/kexec sooner rather than later.
> 
> Add support for automatic debug key actions in case of crashes which
> can be activated via boot- or runtime-parameter.
> 
> Depending on the type of crash the desired data might be different, so
> support different settings for the possible types of crashes.
> 
> The parameter is "crash-debug" with the following syntax:
> 
>   crash-debug-<type>=<string>
> 
> with <type> being one of:
> 
>   panic, hwdom, watchdog, kexeccmd, debugkey
> 
> and <string> a sequence of debug key characters with '+' having the
> special semantics of a 10 millisecond pause.
> 
> So "crash-debug-watchdog=0+0qr" would result in special output in case
> of a watchdog-triggered crash (dom0 state, 10 ms pause, dom0 state,
> domain info, run queues).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - switched special character '.' to '+' (Jan Beulich)
> - 10 ms instead of 1 s pause (Jan Beulich)
> - added more text to the boot parameter description (Jan Beulich)
> 
> V3:
> - added const (Jan Beulich)
> - thorough test of crash reason parameter (Jan Beulich)
> - kexeccmd case should depend on CONFIG_KEXEC (Jan Beulich)
> - added dummy get_irq_regs() helper on Arm
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Except for the Arm aspect, where I'm not sure using
guest_cpu_user_regs() is correct in all cases,
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Nevertheless ...

> @@ -507,6 +509,50 @@ void __init initialize_keytable(void)
>      }
>  }
>  
> +#define CRASHACTION_SIZE  32
> +static char crash_debug_panic[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-panic", crash_debug_panic);
> +static char crash_debug_hwdom[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
> +static char crash_debug_watchdog[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
> +#ifdef CONFIG_KEXEC
> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
> +#endif

... to limit #ifdef-ary I'd have suggested to put

#else
# define crash_debug_kexeccmd NULL

right above here and ...

> +void keyhandler_crash_action(enum crash_reason reason)
> +{
> +    static const char *const crash_action[CRASHREASON_N] = {
> +        [CRASHREASON_PANIC] = crash_debug_panic,
> +        [CRASHREASON_HWDOM] = crash_debug_hwdom,
> +        [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
> +#ifdef CONFIG_KEXEC
> +        [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
> +#endif
> +        [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
> +    };
> +    const char *action;
> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
> +
> +    if ( (unsigned int)reason >= CRASHREASON_N )

... I'd have preferred ARRAY_SIZE() here, at which point the
array dimension also wouldn't need explicitly specifying.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 11:27:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 11:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38445.71193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFRC-0003M1-60; Thu, 26 Nov 2020 11:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38445.71193; Thu, 26 Nov 2020 11:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFRC-0003Lu-1i; Thu, 26 Nov 2020 11:27:46 +0000
Received: by outflank-mailman (input) for mailman id 38445;
 Thu, 26 Nov 2020 11:27:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/AcN=FA=arm.com=wei.chen@srs-us1.protection.inumbo.net>)
 id 1kiFRB-0003Lp-Ag
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 11:27:45 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.53]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c8de043-975e-475b-942b-d6f8e7f5e25d;
 Thu, 26 Nov 2020 11:27:41 +0000 (UTC)
Received: from AM5PR0101CA0002.eurprd01.prod.exchangelabs.com
 (2603:10a6:206:16::15) by AM6PR08MB3942.eurprd08.prod.outlook.com
 (2603:10a6:20b:ae::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3589.22; Thu, 26 Nov
 2020 11:27:39 +0000
Received: from VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:16:cafe::ff) by AM5PR0101CA0002.outlook.office365.com
 (2603:10a6:206:16::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Thu, 26 Nov 2020 11:27:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT043.mail.protection.outlook.com (10.152.19.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.20 via Frontend Transport; Thu, 26 Nov 2020 11:27:38 +0000
Received: ("Tessian outbound 814be617737e:v71");
 Thu, 26 Nov 2020 11:27:38 +0000
Received: from a8219809ac76.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0A75FA5D-B2A4-463E-84E7-55875CC6A2F0.1; 
 Thu, 26 Nov 2020 11:27:33 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a8219809ac76.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 26 Nov 2020 11:27:33 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24)
 by AM9PR08MB6113.eurprd08.prod.outlook.com (2603:10a6:20b:2d6::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Thu, 26 Nov
 2020 11:27:31 +0000
Received: from AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993]) by AM0PR08MB3747.eurprd08.prod.outlook.com
 ([fe80::257f:eb47:fe85:5993%3]) with mapi id 15.20.3564.038; Thu, 26 Nov 2020
 11:27:31 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vlRhY5zI3gi19diGyTXD/jtWW9MUoHcXlvRARRmEDus=;
 b=TlwtFpqhkJDzsaeqC5xRSf4ADLPS8M3WD9IC9qJEOiThWfT5lfYl7o86NY8GWA2KtSoSqi0i8Eux4xhhE5Rx+EprHnkrAZad6aWhklAeQLZgLiHf+h74TpbAIeFT2LwiEzEdihjTGQCirpXAM/K8ldlDPpchmFDuzx0EkJJGDxI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OoVXJk5nRRb5fbUxMXK4DvXB7jabd+zqeHeiRu9Q29RQq4j8aZ0eBUlHTwaIM9xnbhaQZrU16EnZn/AntdtleDCOlohvc3hvqeT+MKHb3fFaVRCef6QzmRnqLpx0yKpqw0ntb5WnRrXXRgiSYbK8XK5+hwZH2avQpiFnYwLf//sP43BQ2NnakogwK7YhrhWx++jKwkuEDUz/CvZWnGhvgCPnKSTOYwMrhxannwzkvavwVxpbgtO/4fMMliXftgKn4+FT/38ZSWC2pOiN8PsXkiEqZYjXy3Hhoxnz/crQnd7iWg31Q3x7TOQOJ3JAQRZg3T+OG7Ja7zCauCz6qlfgow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vlRhY5zI3gi19diGyTXD/jtWW9MUoHcXlvRARRmEDus=;
 b=eYWFo5PuBARuRS5S9L44T3cavM5JU+AeqGznAInFUIgh5sSBmdZiqzwxn9UWf2xeU5zcDFYg9Y+/XQtBkawDkUjiPphr70DMMLB4FOP5qgWzcZYBm5RoKt5CT/srBgZWIWodGtekXYhDa3GBp+5S4V4DavVl2q8wo6SZcJ6DN8N61KdN/lMjwyjTIL7FU27QLUo30/U/uQQCFB77qCz7xFqDXB2Qk+U6I/C9mrU55vE5ZmEvf4kb44HVcKCafbT2Js2EWGHXLpS2qgB+DQFIhcYWS8P8sLS4F8bFdLfrQA8kLSIzQzoEZ0CukH0zsUmxcxw3ASp7MVJShNAAD7htkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Wei Chen <Wei.Chen@arm.com>
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
CC: Penny Zheng <Penny.Zheng@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andre Przywara <Andre.Przywara@arm.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Kaly Xin <Kaly.Xin@arm.com>, nd
	<nd@arm.com>
Subject: RE: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Topic: [PATCH] xen/arm: Add Cortex-A73 erratum 858921 workaround
Thread-Index:
 AQHWtnF2jWNIb4RgFU+PRE0mwpdPDam/tA6AgAW2ODCAFDcMgIAAINyAgACWHQCAAAX2cA==
Date: Thu, 26 Nov 2020 11:27:30 +0000
Message-ID:
 <AM0PR08MB37478D884057C8720ED1023D9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
References: <20201109082110.1133996-1-penny.zheng@arm.com>
 <cfa63398-8182-b79f-1602-ed068e2319ad@xen.org>
 <AM0PR08MB3747B42FC856B9BDF24646629EE60@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <alpine.DEB.2.21.2011251554070.7979@sstabellini-ThinkPad-T480s>
 <AM0PR08MB3747912905438DA6D7FF969C9EF90@AM0PR08MB3747.eurprd08.prod.outlook.com>
 <8f47313a-f47a-520d-3845-3f2198fce5b4@xen.org>
In-Reply-To: <8f47313a-f47a-520d-3845-3f2198fce5b4@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 2B8AC37D8A6C7C4AA8DFDA5DE1C4B0F3.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [218.82.32.45]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: a6b17993-8e34-449c-c72c-08d891fe47dd
x-ms-traffictypediagnostic: AM9PR08MB6113:|AM6PR08MB3942:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB39423CB06A327728D0AE91D39EF90@AM6PR08MB3942.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 sJTo6Nr86+SxOlexg9H2TY3SMOZa9kpcP9lwQd3qyIIgVCuMVhilGTUKDHJuD+0sM5W3QZ+QlTF7zay1uhVsFc3yQKIPaoSp4nQEb8cq7Q8rqJfL7iwaBcH/w1bcqGC/FFkFuyHE6VzTKSqL5F1dfj3gHCHVAAmdVT1XOIZaIrFEWvTcm1jN2xTs+GLBwjImk84J8+F4+/hVtrnn+YiryMc7D3sj9UnenxyUUX6ZSCUEyGBN2Yk1Rl/CXaRav6T0bLECH2KdLBP642YjEP/eKvgjmyNBIJ4Oxeys7gLPiaTkdabB86n4o5NeT9xwMrulK716aDIA39ITJeYrl3yYJQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3747.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(346002)(396003)(39860400002)(136003)(66446008)(4326008)(186003)(66476007)(66946007)(2906002)(316002)(110136005)(54906003)(9686003)(7696005)(478600001)(55016002)(53546011)(26005)(6506007)(33656002)(66556008)(76116006)(5660300002)(8936002)(8676002)(64756008)(71200400001)(83380400001)(52536014)(86362001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?dFZpVXEvbnJxUTlFemcycVRDZXpYWlluTEVHT0pUdXRtUjVzQTBZdnZScDIw?=
 =?utf-8?B?TGZ5dDFOSWsxMEFDZkt4SHNGREhRU1NoT09ieGlBYjVjR3orSDQxVndkb0lw?=
 =?utf-8?B?OTR4blNOUDUweUo1bjB5Q1lEWEQ1Mk1jM1ViNmxaWExyY05USXJjWHR4YnFY?=
 =?utf-8?B?eEVMZVlYdkRURXlvQXp4MWlQMkR4UWtYUm83VUVwRFp1d29vaTdNbUZEeEdt?=
 =?utf-8?B?dGpqSFFoTlA5Q2M1WnRmVFIxNEtGem40Y1NvZGp3cVR3SDEvWkpOclg3TjB6?=
 =?utf-8?B?WnNvclVOdUxMUTdEN2IwVmVyakl2VXVoMUd4QkU2MjZpcC9oMElVNUVPQjZE?=
 =?utf-8?B?WG9ldTgvaSt5R2hoeko4WFNwNjdZNDJ1RFc1cHA1T3VWQjlFUmhGVHRLSmty?=
 =?utf-8?B?Y0NxZlZmR29GQ1R1QzdEZnQ5Y0FGazh1eEZ0cmxlbUVuZjRvZUxXcUNWK1ZT?=
 =?utf-8?B?eUFtemE5amVmeU1RdTRTMUt3cmdJRzFFZS9SVGZXR0MvZU9NODdpNnR4SHQ2?=
 =?utf-8?B?MktGMCtBNWdBcHM4NTBYMmNBd1Y0a2llOVk1YU50UlU1UTRaTW1DVG10YjAr?=
 =?utf-8?B?UFU2Y1JTa0RiU1ZBamZvaENuRUl1YkdTTXBCS1FmNGtYQ1dUcFU5aVlBNlZt?=
 =?utf-8?B?cFdWbHZna3ovcmRuSWdGQlVkcTNOenF1a2JhRzdiTml4c3VFc0VNRzhuWXZ1?=
 =?utf-8?B?K1Z5S2RBSTFtYkV0UnVybnZselB6OUwvUWlkMFJZNjFwMDZ1bkxNQzVneC9u?=
 =?utf-8?B?SmhmTWExNytiMUtLQmdvR2s0UFYveDUzUjIrazNpK3ZpS1FTUDdZQitKWHJH?=
 =?utf-8?B?ZlRZYlRSZVZMNXBEblZHTlRjekJlTVFibEVEcDQyYldrNVlRSGVBNzVlWVRi?=
 =?utf-8?B?dW9YbDZtY0NQaUNHVUp4Q0pvWkdlM0xkaDFwcHJrRUpDUXlVTzhVbTNNWHQ2?=
 =?utf-8?B?SzA0cWgwRWE5N2JCdlFoUEZFVWU1cmYwSi9VRzNpMXNqTFpFSk13dGlHTERN?=
 =?utf-8?B?RFVtWFNZTWY3UE13dFdKSHVPZE5SdnREZ3NTSHBMNGNsR2FnRWtEYlRkZCtt?=
 =?utf-8?B?WkV1aFQ2VkpYMmsxSU1MRlF2eS9ZU2xRaFR2YTRvc1ZCdXBoZ2ZHS2dMSjNG?=
 =?utf-8?B?cTRJTWFWdHZRbWwvUTd6dXRsdGQ2L1V2RHlGN0owOUY1MkxOL3RZM3o5S2k3?=
 =?utf-8?B?ZEt6akZqTk45NURFSmc2T2dvZ0N2MXlxR0NJTzRUSHNJTkI1Rm5oekZXUlJr?=
 =?utf-8?B?SmtqbVFPeDd5ekFOb0wydG1WYUtuTnFwWDJ2eWVMOEZGWVR1blJDdzdLSk5I?=
 =?utf-8?Q?dyj6s/SV3l37U=3D?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6113
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0d945a67-cbc7-4464-29da-08d891fe4353
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5Iqs6gsCsJZOS3KymXfG/5wh0PWzaKk6QXHlGn0kf2EACnmJzMoRq+82+RoHyHmpdWkmFW2faJ6A1BqOtI2bujw3tasL2ZF9bfNuEhzV4hiwUOTYCtZ9G1Jabn911xjrOs6I4rL3x20IM3hvK/hhD8SaYfz7wq5ZiVDGZuEcNcuVDKq9iiYazZNr0Osg6zAVBCOi/43N8S17UgYLn6cuBWLx+drBcuUnR9kK7CJQMdPhSV9R5nyjR+fdMha4I3yXOZrbSRnn7+PGTYTB1gbbT1j8plvaOwFP860ZYOWW3htM0k3wzb//RRtTmljdsE5u/2egexlZ4++y6hqOQmptC95rARdr116EYaxdmluKOwvk8ej2vCJB7Ts/ObmKLI5tQgWgFvgxa8tWa9oacfHwLA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(39860400002)(396003)(46966005)(478600001)(82310400003)(86362001)(83380400001)(7696005)(26005)(4326008)(82740400003)(81166007)(336012)(356005)(186003)(47076004)(54906003)(9686003)(5660300002)(2906002)(8676002)(6506007)(53546011)(316002)(110136005)(70586007)(8936002)(70206006)(33656002)(52536014)(55016002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 11:27:38.4693
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a6b17993-8e34-449c-c72c-08d891fe47dd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3942

SGkgSnVsaWVuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEp1bGll
biBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQo+IFNlbnQ6IDIwMjDlubQxMeaciDI25pelIDE4OjU1
DQo+IFRvOiBXZWkgQ2hlbiA8V2VpLkNoZW5AYXJtLmNvbT47IFN0ZWZhbm8gU3RhYmVsbGluaSA8
c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4NCj4gQ2M6IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bh
cm0uY29tPjsgeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qub3JnOw0KPiBBbmRyZSBQcnp5d2Fy
YSA8QW5kcmUuUHJ6eXdhcmFAYXJtLmNvbT47IEJlcnRyYW5kIE1hcnF1aXMNCj4gPEJlcnRyYW5k
Lk1hcnF1aXNAYXJtLmNvbT47IEthbHkgWGluIDxLYWx5LlhpbkBhcm0uY29tPjsgbmQNCj4gPG5k
QGFybS5jb20+DQo+IFN1YmplY3Q6IFJlOiBbUEFUQ0hdIHhlbi9hcm06IEFkZCBDb3J0ZXgtQTcz
IGVycmF0dW0gODU4OTIxIHdvcmthcm91bmQNCj4gDQo+IEhpIFdlaSwNCj4gDQo+IFlvdXIgZS1t
YWlsIGZvbnQgc2VlbXMgdG8gYmUgZGlmZmVyZW50IHRvIHRoZSB1c3VhbCBwbGFpbiB0ZXh0IG9u
ZS4gQXJlDQo+IHlvdSBzZW5kaW5nIHRoZSBlLW1haWwgdXNpbmcgSFRNTCBieSBhbnkgY2hhbmNl
Pw0KPiANCg0KSXQncyBzdHJhbmdlLCBJIGFsd2F5cyB1c2UgdGhlIHBsYWluIHRleHQuDQoNCj4g
T24gMjYvMTEvMjAyMCAwMjowNywgV2VpIENoZW4gd3JvdGU6DQo+ID4gSGkgU3RlZmFubywNCj4g
Pg0KPiA+PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiA+PiBGcm9tOiBTdGVmYW5vIFN0
YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+DQo+ID4+IFNlbnQ6IDIwMjDvv73vv70x
Me+/ve+/vTI277+977+9IDg6MDANCj4gPj4gVG86IFdlaSBDaGVuIDxXZWkuQ2hlbkBhcm0uY29t
Pg0KPiA+PiBDYzogSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz47IFBlbm55IFpoZW5nIDxQ
ZW5ueS5aaGVuZ0Bhcm0uY29tPjsNCj4geGVuLQ0KPiA+PiBkZXZlbEBsaXN0cy54ZW5wcm9qZWN0
Lm9yZzsgc3N0YWJlbGxpbmlAa2VybmVsLm9yZzsgQW5kcmUgUHJ6eXdhcmENCj4gPj4gPEFuZHJl
LlByenl3YXJhQGFybS5jb20+OyBCZXJ0cmFuZCBNYXJxdWlzDQo+IDxCZXJ0cmFuZC5NYXJxdWlz
QGFybS5jb20+Ow0KPiA+PiBLYWx5IFhpbiA8S2FseS5YaW5AYXJtLmNvbT47IG5kIDxuZEBhcm0u
Y29tPg0KPiA+PiBTdWJqZWN0OiBSRTogW1BBVENIXSB4ZW4vYXJtOiBBZGQgQ29ydGV4LUE3MyBl
cnJhdHVtIDg1ODkyMSB3b3JrYXJvdW5kDQo+ID4+DQo+ID4+IFJlc3VtaW5nIHRoaXMgb2xkIHRo
cmVhZC4NCj4gPj4NCj4gPj4gT24gRnJpLCAxMyBOb3YgMjAyMCwgV2VpIENoZW4gd3JvdGU6DQo+
ID4+Pj4gSGksDQo+ID4+Pj4NCj4gPj4+PiBPbiAwOS8xMS8yMDIwIDA4OjIxLCBQZW5ueSBaaGVu
ZyB3cm90ZToNCj4gPj4+Pj4gQ05UVkNUX0VMMCBvciBDTlRQQ1RfRUwwIGNvdW50ZXIgcmVhZCBp
biBDb3J0ZXgtQTczIChhbGwgdmVyc2lvbnMpDQo+ID4+Pj4+IG1pZ2h0IHJldHVybiBhIHdyb25n
IHZhbHVlIHdoZW4gdGhlIGNvdW50ZXIgY3Jvc3NlcyBhIDMyYml0IGJvdW5kYXJ5Lg0KPiA+Pj4+
Pg0KPiA+Pj4+PiBVbnRpbCBub3csIHRoZXJlIGlzIG5vIGNhc2UgZm9yIFhlbiBpdHNlbGYgdG8g
YWNjZXNzIENOVFZDVF9FTDAsDQo+ID4+Pj4+IGFuZCBpdCBhbHNvIHNob3VsZCBiZSB0aGUgR3Vl
c3QgT1MncyByZXNwb25zaWJpbGl0eSB0byBkZWFsIHdpdGgNCj4gPj4+Pj4gdGhpcyBwYXJ0Lg0K
PiA+Pj4+Pg0KPiA+Pj4+PiBCdXQgZm9yIENOVFBDVCwgdGhlcmUgZXhpc3RzIHNldmVyYWwgY2Fz
ZXMgaW4gWGVuIGludm9sdmluZyByZWFkaW5nDQo+ID4+Pj4+IENOVFBDVCwgc28gYSBwb3NzaWJs
ZSB3b3JrYXJvdW5kIGlzIHRoYXQgcGVyZm9ybWluZyB0aGUgcmVhZCB0d2ljZSwNCj4gPj4+Pj4g
YW5kIHRvIHJldHVybiBvbmUgb3IgdGhlIG90aGVyIGRlcGVuZGluZyBvbiB3aGV0aGVyIGEgdHJh
bnNpdGlvbiBoYXMNCj4gPj4+Pj4gdGFrZW4gcGxhY2UuDQo+ID4+Pj4+DQo+ID4+Pj4+IFNpZ25l
ZC1vZmYtYnk6IFBlbm55IFpoZW5nIDxwZW5ueS56aGVuZ0Bhcm0uY29tPg0KPiA+Pj4+DQo+ID4+
Pj4gQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpvbi5jb20+DQo+ID4+Pj4NCj4g
Pj4+PiBPbiBhIHJlbGF0ZWQgdG9waWMsIGRvIHdlIG5lZWQgYSBmaXggc2ltaWxhciB0byBMaW51
eCBjb21taXQNCj4gPj4+PiA3NWExOWEwMjAyZGIgImFybTY0OiBhcmNoX3RpbWVyOiBFbnN1cmUg
Y291bnRlciByZWdpc3RlciByZWFkcyBvY2N1cg0KPiA+Pj4+IHdpdGggc2VxbG9jayBoZWxkIj8N
Cj4gPj4+Pg0KPiA+Pj4NCj4gPj4+IEkgdGFrZSBhIGxvb2sgYXQgdGhpcyBMaW51eCBjb21taXQs
IGl0IHNlZW1zIHRvIHByZXZlbnQgdGhlIHNlcS1sb2NrIHRvIGJlDQo+ID4+PiBzcGVjdWxhdGVk
LiAgVXNpbmcgYW4gZW5mb3JjZSBvcmRlcmluZyBpbnN0ZWFkIG9mIElTQiBhZnRlciB0aGUgcmVh
ZCBjb3VudGVyDQo+ID4+PiBvcGVyYXRpb24gc2VlbXMgdG8gYmUgZm9yIHBlcmZvcm1hbmNlIHJl
YXNvbnMuDQo+ID4+Pg0KPiA+Pj4gSSBoYXZlIGZvdW5kIHRoYXQgeW91IGhhZCBwbGFjZWQgYW4g
SVNCIGJlZm9yZSByZWFkIGNvdW50ZXIgaW4gZ2V0X2N5Y2xlcw0KPiA+Pj4gdG8gcHJldmVudCBj
b3VudGVyIHZhbHVlIHRvIGJlIHNwZWN1bGF0ZWQuIEJ1dCB5b3UgaGF2ZW4ndCBwbGFjZWQgdGhl
DQo+IHNlY29uZA0KPiA+Pj4gSVNCIGFmdGVyIHJlYWRpbmcuIElzIGl0IGJlY2F1c2Ugd2UgaGF2
ZW4ndCB1c2VkIHRoZSBnZXRfY3ljbGVzIGluIHNlcSBsb2NrDQo+ID4+PiBjcml0aWNhbCBjb250
ZXh0IChNYXliZSBJIGRpZG4ndCBmaW5kIHRoZSByaWdodCBwbGFjZSk/IFNvIHNob3VsZCB3ZSBu
ZWVkIHRvDQo+IGZpeCBpdA0KPiA+Pj4gbm93LCBvciB5b3UgcHJlZmVyIHRvIGZpeCBpdCBub3cg
Zm9yIGZ1dHVyZSB1c2FnZT8NCj4gPj4NCj4gPj4gTG9va2luZyBhdCB0aGUgY2FsbCBzaXRlcywg
aXQgZG9lc24ndCBsb29rIGxpa2Ugd2UgbmVlZCBhbnkgSVNCIGFmdGVyDQo+ID4+IGdldF9jeWNs
ZXMgYXMgaXQgaXMgbm90IHVzZWQgaW4gYW55IGNyaXRpY2FsIGNvbnRleHQuIFRoZXJlIGlzIGFs
c28gYQ0KPiA+PiBkYXRhIGRlcGVuZGVuY3kgd2l0aCB0aGUgdmFsdWUgcmV0dXJuZWQgYnkgaXQu
DQo+IA0KPiBJIGFtIGFzc3VtaW5nIHlvdSBsb29rZWQgYXQgYWxsIHRoZSB1c2VycyBvZiBOT1co
KS4gSXMgdGhhdCByaWdodD8NCj4gDQo+ID4+DQo+ID4+IFNvIEkgYW0gdGhpbmtpbmcgd2UgZG9u
J3QgbmVlZCBhbnkgZml4LiBBdCBtb3N0IHdlIG5lZWQgYW4gaW4tY29kZSBjb21tZW50Pw0KPiA+
DQo+ID4gSSBhZ3JlZSB3aXRoIHlvdSB0byBhZGQgYW4gaW4tY29kZSBjb21tZW50LiBJdCB3aWxs
IHJlbWluZCB1cyBpbiBmdXR1cmUgd2hlbg0KPiB3ZQ0KPiA+IHVzZSB0aGUgZ2V0X2N5Y2xlcyBp
biBjcml0aWNhbCBjb250ZXh0LiBBZGRpbmcgaXQgbm93IHdpbGwgcHJvYmFibHkgb25seSBsZWFk
IHRvDQo+ID4gbWVhbmluZ2xlc3MgcGVyZm9ybWFuY2UgZGVncmFkYXRpb24uDQo+IA0KPiBJIHJl
YWQgdGhpcyBhcyB0aGVyZSB3b3VsZCBiZSBubyBwZXJmb21hbmNlIGltcGFjdCBpZiB3ZSBhZGQg
dGhlDQo+IG9yZGVyaW5nIGl0IG5vdy4gRGlkIHlvdSBpbnRlbmQgdG8gc2F5Pw0KDQpTb3JyeSBh
Ym91dCBteSBFbmdsaXNoLiBJIGludGVuZGVkIHRvIHNheSAiQWRkaW5nIGl0IG5vdyBtYXkgaW50
cm9kdWNlIHNvbWUNCnBlcmZvcm1hbmNlIGNvc3QuIEFuZCB0aGlzIHBlcmZvcm1hbmNlIGNvc3Qg
bWF5IGJlIG5vdCB3b3J0aC4gQmVjYXVzZSBYZW4NCm1heSBuZXZlciB1c2UgaXQgaW4gYSBzaW1p
bGFyIHNjZW5hcmlvICINCg0KPiANCj4gPiBCZWNhdXNlIFhlbiBtYXkgbmV2ZXIgdXNlIGl0IGlu
IGEgc2ltaWxhcg0KPiA+IHNjZW5hcmlvLg0KPiANCj4gUmlnaHQsIHRoZXJlIGFyZSB0d28gcG90
ZW50aWFscyBhcHByb2FjaCBoZXJlOg0KPiAgICAtIFdhaXQgdW50aWwgdGhlcmUgYXJlIGEgdXNl
cg0KPiAgICAgICAgKiBQcm9zOiBEb2Vzbid0IGltcGFjdCBwZXJmb3JtYW5jZQ0KPiAgICAgICAg
KiBDb25zOiBXZSByZWx5IG9uIHVzZXJzL3Jldmlld2VycyB0byBjYXRjaCBhbnkgbWlzdXNlDQo+
ICAgIC0gSGFyZGVuIHRoZSBjb2RlDQo+ICAgICAgICAqIFByb3M6IExlc3MgcmlzayB0byBpbnRy
b2R1Y2UgYSBidWcgaW5hZHZlcnRlbnRseQ0KPiAgICAgICAgKiBDb25zOiBNYXkgaW1wYWN0IHRo
ZSBwZXJmb3JtYW5jZQ0KPiANCj4gSW4gZ2VuZXJhbCwgSSBwcmVmZXIgdGhhdCB0aGUgY29kZSBp
cyBoYXJkZW5lZCBieSBkZWZhdWx0IGlmIHRoZQ0KPiBwZXJmb3JtYW5jZSBpbXBhY3QgaXMgbGlt
aXRlZC4gVGhpcyBtYXkgc2F2ZSB1cyBob3VycyBvZg0KPiBkZWJ1Z2dpbmcvcmVwcm9kdWNpbmcg
YnVnLg0KPiANCg0KRnJvbSBhIHByZXZlbnRpdmUgcG9pbnQgb2YgdmlldywgeW91J3JlIHJpZ2h0
Lg0KDQo+IEluIGFkZGl0aW9uLCBBRkFJQ1QsIHRoZSB4ODYgdmVyc2lvbiBvZiBnZXRfY3ljbGVz
KCkgaXMgYWxyZWFkeSBhYmxlIHRvDQo+IHByb3ZpZGUgdGhhdCBvcmRlcmluZy4gU28gdGhlcmUg
YXJlIGNoYW5jZXMgdGhhdCBjb2RlIG1heSByZWx5IG9uIGl0Lg0KPiANCj4gV2hpbGUgSSBkb24n
dCBuZWNlc3NhcmlseSBhZ3JlZSB0byBhZGQgYmFycmllcnMgZXZlcnl3aGVyZSBieSBkZWZhdWx0
DQo+ICh0aGlzIG1heSBoYXZlIGJpZyBpbXBhY3Qgb24gdGhlIHBsYXRmb3JtKS4gSSB0aGluayBp
dCBpcyBiZXR0ZXIgdG8gaGF2ZQ0KPiBhbiBhY2N1cmF0ZSBudW1iZXIgb2YgY3ljbGVzLg0KPiAN
Cg0KQXMgeDg2IGhhZCBkb25lIGl0LCBJIHRoaW5rIGl04oCZcyBvayB0byBkbyBpdCBmb3IgQXJt
LiBUaGlzIHdpbGwga2VlcCBhIGZ1bmN0aW9uIA0KYmVoYXZlcyB0aGUgc2FtZSBvbiBkaWZmZXJl
bnQgYXJjaGl0ZWN0dXJlcy4NCg0KVGhhbmtzLA0KV2VpIENoZW4NCg0KPiBDaGVlcnMsDQo+IA0K
PiAtLQ0KPiBKdWxpZW4gR3JhbGwNCg==


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 11:34:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 11:34:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38461.71208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFXW-0004Kz-0X; Thu, 26 Nov 2020 11:34:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38461.71208; Thu, 26 Nov 2020 11:34:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFXV-0004Ks-T1; Thu, 26 Nov 2020 11:34:17 +0000
Received: by outflank-mailman (input) for mailman id 38461;
 Thu, 26 Nov 2020 11:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EevG=FA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kiFXU-0004Kn-By
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 11:34:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2291cce8-aba5-40cb-baa4-e0dca5b0d110;
 Thu, 26 Nov 2020 11:34:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 38A96AF1B;
 Thu, 26 Nov 2020 11:34:14 +0000 (UTC)
X-Inumbo-ID: 2291cce8-aba5-40cb-baa4-e0dca5b0d110
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606390454; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PUSOL9sf+dTcCi5X9dRWCKaWFLlOpKNHykkJ7Bh+coM=;
	b=ga4OKE34gKj0M5ignILGfT0JzPMG8NZUoBjzryl4leAtFnRCbuqz0R8ZCgDGSoZpenOQWY
	90vu9A7/WATAO7sUcJYFwqsmobGipj3As96buVSsW+3bd8WHC4gaQoaANgrZzkq7tl/rZa
	GG/kFFAx8jjOgrHu/MxcpbWyi5Hb/Jk=
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
 <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
 <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
 <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2aeba247-8b36-7b75-dc17-b901bf746f87@suse.com>
Date: Thu, 26 Nov 2020 12:34:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 20:48, Stefano Stabellini wrote:
> On Wed, 25 Nov 2020, Jan Beulich wrote:
>> On 25.11.2020 13:15, Julien Grall wrote:
>>> On 23/11/2020 14:23, Jan Beulich wrote:
>>>> All of the array allocations in grant_table_init() can exceed a page's
>>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>>>> for after boot. We also don't need any of these allocations to be
>>>> physically contiguous. Introduce interfaces dynamically switching
>>>> between xmalloc() et al and vmalloc() et al, based on requested size,
>>>> and use them instead.
>>>>
>>>> All the wrappers in the new header get cloned mostly verbatim from
>>>> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
>>>> for sizes and to unsigned int for alignments.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> v2: Actually edit a copy-and-pasted comment in xvmalloc.h which was
>>>>      meant to be edited from the beginning.
>>>> ---
>>>> I'm unconvinced of the mentioning of "physically contiguous" in the
>>>> comment at the top of the new header: I don't think xmalloc() provides
>>>> such a guarantee. Any use assuming so would look (latently) broken to
>>>> me.
>>>
>>> I haven't had the chance to reply to the first version about this. So I 
>>> will reply here to avoid confusion.
>>>
>>> I can at least spot one user in Arm that would use xmalloc() that way 
>>> (see the allocation of itt_addr in arch/arm/gic-v3-its.c).
>>
>> And I surely wouldn't have spotted this, even if I had tried
>> to find "offenders", i.e. as said before not wanting to alter
>> the behavior of existing code (beyond the explicit changes
>> done here) was ...
>>
>>> AFAIK, the memory is for the sole purpose of the ITS and should not be 
>>> accessed by Xen. So I think we can replace it with a new version of 
>>> alloc_domheap_pages().
>>>
>>> However, I still question the usefulness of introducing yet another way 
>>> to allocate memory (we already have alloc_xenheap_pages(), xmalloc(), 
>>> alloc_domheap_pages(), vmap()) if you think users cannot rely on 
>>> xmalloc() to allocate memory physically contiguous.
>>
>> ... the reason to introduce a separate new interface. Plus of
>> course this parallels what Linux has.
>>
>>> It definitely makes it more difficult to figure out when to use
>>> xmalloc() vs xvmalloc().
>>
>> I don't see the difficulty:
>> - if you need physically contiguous memory, use alloc_xen*_pages(),
>> - if you know the allocation size is always no more than a page,
>>   use xmalloc(),
> 
> What if you need memory physically contiguous but not necessarily an
> order of pages, for instance 5200 bytes?

This case is, I think, rare enough (in particular in Xen) that the
waste of space can be tolerated imo.

> If xmalloc can't do physically contiguous allocations, we need something
> else that does physically contiguous allocations not only at page
> granularity, right?

Well, we first need to settle on what guarantees xmalloc() is meant
to provide. It may be just me assuming it doesn't provide the same
ones that Linux's kmalloc() makes. I'm first and foremost
judging by the comment near the top of xmalloc.h, which compares
with malloc() / free(), not kmalloc() / kfree().

> The other issue is semantics. If xmalloc is unable to allocate more than
> a page of contiguous memory, then it is identical to vmalloc from the
> caller's point of view: both xmalloc and vmalloc return a virtual
> address for an allocation that might not be physically contiguous.

Almost. vmalloc() puts guard pages around the allocation and
guarantees page alignment.

> Maybe we should get rid of xmalloc entirely and improve the
> implementation of vmalloc so that it falls back to xmalloc for
> sub-page allocations. Which in fact is almost the same thing that you
> did.

This would break callers assuming page alignment (and - shouldn't
be an issue in practice - granularity). If anything, as Julien
did suggest, we could modify xmalloc() accordingly, but then of
course making sure we also honor alignment requests beyond page
size.

Neither of these is the goal here, hence this "intermediate"
implementation, which is only almost "redundant".

>> - if you know the allocation size is always more than a page, use
>>   vmalloc(),
>> - otherwise use xvmalloc(). Exceptions may of course apply, i.e.
>>   this is just a rule of thumb.
>>
>>> I would like to hear an opinion from the other maintainers.
>>
>> Let's hope at least one will voice theirs.
> 
> If we take a step back, I think we only really need two memory
> allocators:
> 
> 1) one that allocates physically contiguous memory
> 2) one that allocates non-physically contiguous memory
> 
> That's it, right?
> 
> In addition to that, I understand it could be convenient to have a little
> wrapper that automatically chooses between 1) and 2) depending on
> circumstances.
> 
> But if the circumstance is just size < PAGE_SIZE then I don't think we
> need any convenience wrappers: we should just be able to call 2), which
> is vmalloc, once we improve the vmalloc implementation.
> 
> Or do you see any reasons to keep the current vmalloc implementation as
> is for sub-page allocations?

See my "Almost. ..." above.

As an aside, I also find it quite puzzling that in one of the rare
cases where I propose to clone an interface from Linux without much
deviation from their model, I get objections. It typically was the
other way around in the past ...

Jan
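
[The rule of thumb quoted above is what xvmalloc() centralises: one size check deciding between the slab-style and the page-based allocator. A minimal sketch of that dispatch, with stub_xmalloc()/stub_vmalloc() as hypothetical stand-ins for Xen's real allocators (both plain malloc() here, so only the size-based routing is real):]

```c
#include <stdlib.h>

#define PAGE_SIZE 4096u  /* illustrative; Xen's value is per-arch */

/* Stand-ins for Xen's allocators so the routing can be exercised. */
static void *stub_xmalloc(size_t size) { return malloc(size); }
static void *stub_vmalloc(size_t size) { return malloc(size); }

/* xvmalloc() idea: requests that fit in a page take the xmalloc() path,
 * larger ones the vmalloc() path; used_vmalloc reports the choice. */
static void *xvmalloc_bytes(size_t size, int *used_vmalloc)
{
    *used_vmalloc = size > PAGE_SIZE;
    return *used_vmalloc ? stub_vmalloc(size) : stub_xmalloc(size);
}
```

[Stefano's 5200-byte example would take the vmalloc() path here; under the real interfaces that choice also changes the alignment and guard-page properties discussed earlier in the thread.]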


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 12:03:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 12:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38478.71219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFza-00078c-Gr; Thu, 26 Nov 2020 12:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38478.71219; Thu, 26 Nov 2020 12:03:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiFza-00078V-Ds; Thu, 26 Nov 2020 12:03:18 +0000
Received: by outflank-mailman (input) for mailman id 38478;
 Thu, 26 Nov 2020 12:03:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiFzZ-00078N-64; Thu, 26 Nov 2020 12:03:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiFzY-0000X9-VS; Thu, 26 Nov 2020 12:03:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiFzY-0004Am-NN; Thu, 26 Nov 2020 12:03:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiFzY-0006TU-Mt; Thu, 26 Nov 2020 12:03:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MRgobxmtF3c5mPpA0fr2uIspHi1kc4UkYYLnLfbBwlY=; b=jII1500ULCxfxeDI6zYvL+vyC7
	/bwBEV40T+794Gcp7xqOx+fhf4BPdt9RJkEO8ARME/8SXau/sfHKKQ0t6RG7MS78lK4qAKwNk6vET
	sp1JLobPx22IuogsCObUNVVWcOXtH5gmKMcThK6BwaJF9JomkWWjmPKZw9D/wqAfLq+o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157025-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157025: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=21f984cedec1c613218480bc3eb5e92349a7a812
X-Osstest-Versions-That:
    ovmf=e9d62effa37ea13fe04fc89b21d2de7776f183a2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 12:03:16 +0000

flight 157025 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157025/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 21f984cedec1c613218480bc3eb5e92349a7a812
baseline version:
 ovmf                 e9d62effa37ea13fe04fc89b21d2de7776f183a2

Last test of basis   157018  2020-11-26 01:39:48 Z    0 days
Testing same since   157025  2020-11-26 07:11:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kun Qin <kun.q@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e9d62effa3..21f984cede  21f984cedec1c613218480bc3eb5e92349a7a812 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 13:08:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 13:08:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38526.71255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiH0Y-0004PO-M1; Thu, 26 Nov 2020 13:08:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38526.71255; Thu, 26 Nov 2020 13:08:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiH0Y-0004PH-J0; Thu, 26 Nov 2020 13:08:22 +0000
Received: by outflank-mailman (input) for mailman id 38526;
 Thu, 26 Nov 2020 13:08:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZgDp=FA=gmail.com=ganime1961tire@srs-us1.protection.inumbo.net>)
 id 1kiH0W-0004Ok-Ky
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 13:08:20 +0000
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eeda3358-4e67-4e7f-b1ee-326ab6b43ac0;
 Thu, 26 Nov 2020 13:08:19 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id 10so2098910wml.2
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 05:08:19 -0800 (PST)
X-Inumbo-ID: eeda3358-4e67-4e7f-b1ee-326ab6b43ac0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=FrmZM9zwgR9nchK0ys0/eSXi1mHT4RBrjU6HUYIHqZo=;
        b=GRmlIL22hQjM7sneI1C3hRpROU7qnHrJgSa8k2FDmbzvzZMNhwZuUfJ6v8uDpEpxr5
         4Ku0eVlcm42QFkogUPPXpOapI06ESAA3b3gPkwkEbwfZvM6bAY9zPCt5cuapYyLW5KSp
         A5Dk6aVxW28hB3tC3skYZqFR4czlBa9bEwAP+DM6Li80oOU4H1YzUBrPZhQnJ/1lg0Qs
         pUPZPHQfvEtq0V0ZbCxpqMTg06zpo0JYLWNrWoFH9XtqHdP4EO0iC/WmWY8PJNUXS4sp
         17l/35B1WtKvortaqzEh0+lQ3PxDDnVkJvwy7x0ykanOVI3yXu3kCzqRY0zUejr3JCQn
         ayNA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=FrmZM9zwgR9nchK0ys0/eSXi1mHT4RBrjU6HUYIHqZo=;
        b=DmvfeksYHDLChzn6SL8r7QOuL0JXPWQcUO+GqJkX0B+aZrEijbeBXLlUBoKLrvz0zc
         0+AkwTRtDJ2JYZxXrhuE73kv66bNlfW9S6QdGJkQYP9R/q1uNJtCtsWjHeZJxBonmA4d
         wQBf+k/LFguq+E6mF+n9T4KyUvH6QbPHsq3RGUu3SonOyM6lNzMhefxvAhEP+YE/Dqt3
         aWRXQ0GUitTBt8gM8K8nU4EVVM7609Kw6OJApOwLJ4fmihsyp2d9rXFNIkgEctP6SabJ
         9GJb/ckNluuHbFq5Rv9ocvK+M8V7lABHVkymxneXIXhtTLiSqi9gzQLDntkUwIpXhQ/8
         5ZZw==
X-Gm-Message-State: AOAM5319dHzfm1kcNy/ibFU8YbJws8zcL8Ro9Pi0IMr+XTbLnsVu3vcD
	U0JzAAxEZeJQ1gTwi+ZAu+Uyy9X1ROiGK1PWnutMPk4Xr+E=
X-Google-Smtp-Source: ABdhPJxCw3GSSvGOhHdp3NZO7cbAp5huaXvbF59WpdGSfxe0gEZbT89f/AVgZDrQpcDAIznHIa34wrvvIn5hc9T7Gig=
X-Received: by 2002:a1c:a786:: with SMTP id q128mr3323091wme.115.1606396097564;
 Thu, 26 Nov 2020 05:08:17 -0800 (PST)
MIME-Version: 1.0
References: <CAF6SwtntQvKrmrhQaDSQwcysCOdNjsUxZO90qtQepvZ-0YLgCA@mail.gmail.com>
In-Reply-To: <CAF6SwtntQvKrmrhQaDSQwcysCOdNjsUxZO90qtQepvZ-0YLgCA@mail.gmail.com>
From: Ganime Yalur <ganime1961tire@gmail.com>
Date: Thu, 26 Nov 2020 14:08:05 +0100
Message-ID: <CAF6SwtnnTezMGJCt9PhKP+7HJogGKQVj1_4dqJ9Yduy22-JRkQ@mail.gmail.com>
Subject: Fwd: subscribe
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000c3baec05b5023ccc"

--000000000000c3baec05b5023ccc
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

---------- Forwarded message ---------
From: Ganime Yalur <ganime1961tire@gmail.com>
Date: Thu, 26 Nov 2020 12:57
Subject: subscribe
To: <xen-devel-request@lists.xenproject.org>


I need additional help to verify benchmark results with nginx and new
libraries like -lwip, -lpthread-embedded and -lnewlibc.

Is this enough to set in the configure script of the latest nginx release,
or do I need to patch something inside the makefile or the nginx sources?

I need a method that works outside of the Unikraft kernel.

I hope someone can help me.

Thanks and best regards


--000000000000c3baec05b5023ccc--


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 13:22:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 13:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38535.71267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHEA-00067p-VU; Thu, 26 Nov 2020 13:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38535.71267; Thu, 26 Nov 2020 13:22:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHEA-00067i-SE; Thu, 26 Nov 2020 13:22:26 +0000
Received: by outflank-mailman (input) for mailman id 38535;
 Thu, 26 Nov 2020 13:22:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiHE9-00067d-Qc
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 13:22:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiHE8-00027f-Dm; Thu, 26 Nov 2020 13:22:24 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiHE8-0003W5-1d; Thu, 26 Nov 2020 13:22:24 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=la40kzql3fS0yvQUE0Hq44V+3H0ISNU/TGuhIMnUIXA=; b=fp/Wq9AJLXPJJ93S8e+8YRSWp9
	GUZhJS0XnuBo7WDRtlf4dkM0B0il0SQse5jKOsrpUhBG51J8Hc+CHQmRw+m05+PBsge+CRIyNqhcV
	1SLuqGqsQkpnTIZj/5wtXddYcmuJEpsZem8KkTLgpRSgbp317wXyrKE/21WzxL74HmTY=;
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
 <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
 <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
 <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
 <2aeba247-8b36-7b75-dc17-b901bf746f87@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8e86bed4-b6fa-ed81-8ca8-41e727c56cb1@xen.org>
Date: Thu, 26 Nov 2020 13:22:21 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2aeba247-8b36-7b75-dc17-b901bf746f87@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 26/11/2020 11:34, Jan Beulich wrote:
> On 25.11.2020 20:48, Stefano Stabellini wrote:
>> On Wed, 25 Nov 2020, Jan Beulich wrote:
>>> On 25.11.2020 13:15, Julien Grall wrote:
>>>> On 23/11/2020 14:23, Jan Beulich wrote:
>>>>> All of the array allocations in grant_table_init() can exceed a page's
>>>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>>>>> for after boot. We also don't need any of these allocations to be
>>>>> physically contiguous. Introduce interfaces dynamically switching
>>>>> between xmalloc() et al and vmalloc() et al, based on requested size,
>>>>> and use them instead.
>>>>>
>>>>> All the wrappers in the new header get cloned mostly verbatim from
>>>>> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
>>>>> for sizes and to unsigned int for alignments.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>> ---
>>>>> v2: Actually edit a copy-and-pasted comment in xvmalloc.h which was
>>>>>       meant to be edited from the beginning.
>>>>> ---
>>>>> I'm unconvinced of the mentioning of "physically contiguous" in the
>>>>> comment at the top of the new header: I don't think xmalloc() provides
>>>>> such a guarantee. Any use assuming so would look (latently) broken to
>>>>> me.
>>>>
>>>> I haven't had the chance to reply to the first version about this. So I
>>>> will reply here to avoid confusion.
>>>>
>>>> I can at least spot one user in Arm that would use xmalloc() that way
>>>> (see the allocation of itt_addr in arch/arm/gic-v3-its.c).
>>>
>>> And I surely wouldn't have spotted this, even if I had tried
>>> to find "offenders", i.e. as said before not wanting to alter
>>> the behavior of existing code (beyond the explicit changes
>>> done here) was ...
>>>
>>>> AFAIK, the memory is for the sole purpose of the ITS and should not be
>>>> accessed by Xen. So I think we can replace by a new version of
>>>> alloc_domheap_pages().
>>>>
>>>> However, I still question the usefulness of introducing yet another way
>>>> to allocate memory (we already have alloc_xenheap_pages(), xmalloc(),
>>>> alloc_domheap_pages(), vmap()) if you think users cannot rely on
>>>> xmalloc() to allocate memory physically contiguous.
>>>
>>> ... the reason to introduce a separate new interface. Plus of
>>> course this parallels what Linux has.
>>>
>>>> It definitely makes it more difficult to figure out when to use
>>>> xmalloc() vs xvmalloc().
>>>
>>> I don't see the difficulty:
>>> - if you need physically contiguous memory, use alloc_xen*_pages(),
>>> - if you know the allocation size is always no more than a page,
>>>    use xmalloc(),

If that's the intention, then may I ask why xmalloc() is able to
support multi-page allocations?

Your assumption is that Xen will always be built with the same page size
across all architectures. While Xen only works with 4KB pages today, Arm
can support 16KB and 64KB, and I have a long-term plan to add support
for them.

So I don't think you can use the page size as a way to distinguish which
one to use.

>>
>> What if you need memory physically contiguous but not necessarily an
>> order of pages, such as for instance 5200 bytes?
> 
> This case is, I think, rare enough (in particular in Xen) that the
> waste of space can be tolerated imo.

This is quite a departure from:

commit b829a0ff5794ee5b0f96a0c872f6a4ed7b1007c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 13 10:03:43 2011 +0200

     xmalloc: return unused full pages on multi-page allocations

     Certain (boot time) allocations are relatively large (particularly when
     building with high NR_CPUS), but can also happen to be pretty far away
     from a power-of-two size. Utilize the page allocator's (other than
     Linux'es) capability of allowing to return space from higher-order
     allocations in smaller pieces to return the unused parts immediately.

     Signed-off-by: Jan Beulich <jbeulich@suse.com>
     Acked-by: Keir Fraser <keir@xen.org>

I am curious to know what changed...

Anyway, what you wrote is very server focused. On Arm, we have plans to
run Xen on smaller hardware, where wasted memory means less usable RAM
for guests.

The problem with using an order is that the bigger the order, the more
chance you will waste space...

Allocating more than a page is fairly common on Arm, so we really want
to reduce the amount of memory wasted.

> 
>> If xmalloc can't do physically contiguous allocations, we need something
>> else that does physically contiguous allocations not only at page
>> granularity, right?
> 
> Well, we first need to settle on what guarantees xmalloc() is meant
> to provide. It may be just me assuming it doesn't provide the same
> ones which Linux'es kmalloc() makes. I'm first and foremost
> judging by the comment near the top of xmalloc.h, which compares
> with malloc() / free(), not kmalloc() / kfree().
> 
>> The other issue is semantics. If xmalloc is unable to allocate more than
>> a page of contiguous memory, then it is identical to vmalloc from the
>> caller's point of view: both xmalloc and vmalloc return a virtual
>> address for an allocation that might not be physically contiguous.
> 
> Almost. vmalloc() puts guard pages around the allocation and
> guarantees page alignment.
> 
>> Maybe we should get rid of xmalloc entirely and improve the
>> implementation of vmalloc so that it falls back to xmalloc for
>> sub-page allocations. Which in fact is almost the same thing that you
>> did.
> 
> This would break callers assuming page alignment (and - shouldn't
> be an issue in practice - granularity). If anything, as Julien
> did suggest, we could modify xmalloc() accordingly, but then of
> course making sure we also honor alignment requests beyond page
> size.
> 
> Neither of these is the goal here, hence this "intermediate"
> implementation, which is only almost "redundant".
> 
>>> - if you know the allocation size is always more than a page, use
>>>    vmalloc(),
>>> - otherwise use xvmalloc(). Exceptions may of course apply, i.e.
>>> this is just a rule of thumb.
>>>
>>>> I would like to hear an opinion from the other maintainers.
>>>
>>> Let's hope at least one will voice theirs.
>>
>> If we take a step back, I think we only really need two memory
>> allocators:
>>
>> 1) one that allocates physically contiguous memory
>> 2) one that allocates non-physically contiguous memory
>>
>> That's it, right?
>>
>> In addition to that, I understand it could be convient to have a little
>> wrapper that automatically chooses between 1) and 2) depending on
>> circumstances.
>>
>> But if the circumstance is just size < PAGE_SIZE then I don't think we
>> need any convenience wrappers: we should just be able to call 2), which
>> is vmalloc, once we improve the vmalloc implementation.
>>
>> Or do you see any reasons to keep the current vmalloc implementation as
>> is for sub-page allocations?
> 
> See my "Almost. ..." above.
> 
> As an aside, I also find it quite puzzling that in one of the rare
> cases where I propose to clone an interface from Linux without much
> deviation from their model, I get objections. It typically was the
> other way around in the past ...

If we were really following Linux, then we would have two interfaces:
    - xmalloc(), which is the same as kmalloc()
    - xvmalloc(), which is the same as kvmalloc()

However, you seem to be the one objecting to the behavior of xmalloc().

I can't speak for Stefano, but I don't object to following Linux.
Instead, I am objecting to the growing number of ways to allocate memory
in Xen, which differ depending on the system_state.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 13:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 13:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38546.71284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHRe-0007D3-Cl; Thu, 26 Nov 2020 13:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38546.71284; Thu, 26 Nov 2020 13:36:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHRe-0007Cw-9r; Thu, 26 Nov 2020 13:36:22 +0000
Received: by outflank-mailman (input) for mailman id 38546;
 Thu, 26 Nov 2020 13:36:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TQ9x=FA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kiHRc-0007Cr-Uv
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 13:36:21 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04e26e6b-0b47-4d5d-889f-ed30077c5dd3;
 Thu, 26 Nov 2020 13:36:19 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TQ9x=FA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kiHRc-0007Cr-Uv
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 13:36:21 +0000
X-Inumbo-ID: 04e26e6b-0b47-4d5d-889f-ed30077c5dd3
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 04e26e6b-0b47-4d5d-889f-ed30077c5dd3;
	Thu, 26 Nov 2020 13:36:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606397779;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=UNDit+cPLGHUviicj+dccXAQJ4dBH4nn15/fyR+3AQQ=;
  b=UOJW/+88HB/1zYzo4/dpcAaDCdPvT82Bz9k8gvyIOkL2pGN4fhH3bPJu
   nix+yQsPwzY3WUKEJg01LtRDdPb4rH8uakAcCHlrmdbq5mt6m9e+qDSkv
   9iqbmy/8G5xK0B8s3RWNg2bNGUc1MasypGwpKlITdGJYVGeX454dZRY3V
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: NUQTTQPUa3K5XQdvBUWK05g/qFcyeKnnAMU6FnviuGHby2blIk0cCUTmw3RXHllYA/bTlkVnDp
 XOpNsHTIV/77lWJzygpzWaFjZlGqc6Ai1EGjravZEbaQQ4LBrWmcKybbB9T12p1sjlpHTc8eXL
 4jm3GBUgK7ZlnJ/HNhu5XIZjiAaiFsW2DzEyHsjzLa7pDHR7L1h0j4cBEQf2Td4o6j0OIcAjnp
 /qkTdXI3kJ1EMIN8M7zDTCnrVPDi5J31tWKUFEh4juII9D9c7r8JJyatBPmDQcsfvq1t5V+tud
 4vc=
X-SBRS: None
X-MesageID: 32334166
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,372,1599537600"; 
   d="scan'208";a="32334166"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DYekgSrGhRqcwREA0feEwIxc5OENttseouJepIQYxkDumdzvHJc+Lr/pdYMfhUr01xQDpjtV6Uw7w3jKuponRZZVJ2StZxbeFiKJyc48qj0K/Eb5D+C5BX132l2mghU0NZYcJ0duRA9b3KYeuMKQi5GGm5ZJ1VfUeKEDbEQEJB47HJFzdS4qAv+aD8pxgiKPdDWLTWsG2zQ2cnc6DN9+9+/AvnfMVeTm6VOlEkmstx1Wp+Cns1h5ObkiPrsW6AeZprQKYVVHpFQyRvWqNJdj7bA8b9Szdru8/qVfrmp2Fv1VyRc1WpE029CrmzkkXmazG7yqVCav4uq7UYz2cuniqg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nAwaaVD6Fj0sDFpd+4YXDhOxDd9+amcuKNAhabbuvj4=;
 b=jR7BMrsnuS+yCkO52I00tQOHvRwRfmoIZgxcOry2v/eEyxe04rXCDIXqEmsqUMggJ1Y+vSM07J9BoIyzSCUcyCPKfeEFqUBpHsy2GXnEzaGoxg/CJyffFemN6B/burW3SyEtYpLhPKh8xQQ1xjur6UBVBiUnqAbIUIvLOom1wLWw8benlGluMvBky63MH18Xs/OCWzVsJ6uIt/syS+KXkeIYkNgrM+7COC60CMI/kHnkVDMb6cEP5JfYxojbVfRvZMdAreHZIp+OLCfRK1sXLZd/Nx0saAlDqe3FYkECDF5wFzDle6OarJMCrT/q/ydCVFvPO7X6YElI9mwwkxKoCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nAwaaVD6Fj0sDFpd+4YXDhOxDd9+amcuKNAhabbuvj4=;
 b=jJG1+VZR9YmDOTE6bm0nvSIdA/AbjYZJe1b1UoWQF4UUctziJiC1133zcdKlK/nzhVxLUX+oSS3XUMx4MBBAJF9wAAxexLgR8OAGDccXGKds2E6rODoDrksT1SEOo5CKXd/bWkpEGtbq9YqYPpM984N9YWuQu3fjK8t7DR1Tm3A=
Date: Thu, 26 Nov 2020 14:34:44 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
References: <20201123173925.GG4662@antioche.eu.org>
 <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201124160914.GQ2020@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0023.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::35) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e2f7c608-1946-4a88-dfc8-08d892100c8f
X-MS-TrafficTypeDiagnostic: DM6PR03MB5179:
X-Microsoft-Antispam-PRVS: <DM6PR03MB51798676705554F5AB1C1FD38FF90@DM6PR03MB5179.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 5YE8ZcFxmGSlB9S0S2kY9jG1/MKcda90l2KA+eZ3kT4N0usABps8heuJjD/XyyQqtE9RAok1HriA6bzuzMctNNcH7BSi38QfH+fVJNyj44UN6Swxs+zrGlD1++x8Pu5YHgVc//OAi3Uope2wz8P8Hyn22np44QSIeORY3XtYTYSz3w70SlRLJjrCXTamZdEZ5Wox9Y345Sx52djLOYejxEVZO76McYsMzJ7QCren75Se4Xr1FLZHMIhJ4j0pMpFB/+XmabwYskbcL3KkbG3X+x3kgX86o2iiltBIZYdFOcIng1yFe4aydEx9kjKWXJ4mMWlV6WP7DBMXwpRNlCnvD95Swdf2hd+PSnHBVKQRZ0R9Pf1qK0kkpmgyJxPGvNLUKkypBmECZE6o2BFlZQt+OQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(376002)(396003)(366004)(346002)(39860400002)(136003)(8676002)(16526019)(6486002)(6496006)(8936002)(85182001)(956004)(478600001)(5660300002)(66556008)(66946007)(66476007)(6666004)(1076003)(6916009)(86362001)(83380400001)(2906002)(966005)(186003)(316002)(33716001)(26005)(4326008)(9686003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?UFEyZVlHVlJZWHVSNEdGdUp3Kys0MjZRYmlBYkh2WlZlWmllT1A4MDVVVVBv?=
 =?utf-8?B?cDBvSG42U0pFbnVjL2hTa05RQlVBZlNmNXZNdGxsT2hGMFNEd1lQNEVaM1RP?=
 =?utf-8?B?b3VnN1VMYWtMbUZYaVd3T2lQOFk4TnBPaGQ3bVpKOXRjcXhLVSthc0lpTjUz?=
 =?utf-8?B?Y1pJZ296VjlMVG8wdXYrMnJDZURybkZzeGtMQXJ3bk1lYlBoQTgwOVl3OXpo?=
 =?utf-8?B?dmEwQkgzR2N2SW9razBTM0JkL1E1dnBqS0pmOXAzZTV4SVpuTTF2bXNIOXpt?=
 =?utf-8?B?VWpsMXB1cnJGTTRldTEvdC95T0xTUzREQ2Nyc0VaMjRMa1ZERlJJQVhHMnA4?=
 =?utf-8?B?L0cvN2xINit1OHB3SVM5ZGZJZStXV1M3MmMwSU9rRHBCY2h4c2o3cWtJUkhm?=
 =?utf-8?B?RkNjcUVBVGcxUnFPYjQ5U04wRW1lU1M2VjVKUlFoTEZGRVZZdE8yUlhneGhl?=
 =?utf-8?B?TVlLOE5HdjJTK3l6RzNtbE5OaW1YSEpFUUpDVHNmaSs2ekFFMmVnTlIxTjJh?=
 =?utf-8?B?R2Vkb0hWR2Nwb21oL2s3VWwwRmNSMmVXQUFpNmFJWGVYZUpNckN1dG8wWndW?=
 =?utf-8?B?QmYwRG1EdnhtV25RdTZ4RTFZNGFQbE5DZklucXpGNTRiOHdBVnFlREtzNDY4?=
 =?utf-8?B?TUFGcDJBWi9OQ0dDQ2dZckVXN3B4YVNoVkg2cHRtNTFPWDNCYThWRUttejFC?=
 =?utf-8?B?NEFIQ2drL2d4dEhpclBSVUVseEI2bFBLMmhmWmxtMmtwem9DQnNKcU1iN3hh?=
 =?utf-8?B?UytJRlhNQ0VIVXBKeTZydFJza29RQ3VZc0NTWHBhUWg2U2hyOXd1eWMrNzNm?=
 =?utf-8?B?dk9lOVgwUWY4NVVyUjBjRlZmZ2F2OEtOY1dkcE1GckRDbWt4Ly9DQ2tnUlph?=
 =?utf-8?B?dkFZNVR1d1BqcEExWGNza2lXRTFuWThKVitSZUhCNXlDYTg5RGFmUVJEZlZn?=
 =?utf-8?B?SzRLQ1lMNDBEM1hEY3NTUFNGY1dKTVF6MnV2c3VMMms4cGhFSXdOWGxzSTlG?=
 =?utf-8?B?b0ZGakFPalpzU2xLM1VzNkdCQkpXQ0c1Zk1UM1lJeExoMElUdjhhbzVSS2or?=
 =?utf-8?B?ZURrNVJ2VlpuQ3cvRGllQlVjeDZCRjdOb01heHlScHVkRHhZTUZnTXRPeXlE?=
 =?utf-8?B?RGtBbWE5M1pKZldyd0NId1hMT2xyQVQrd2tFeEdPdTk5L2pRRDJwUVV4d3Ar?=
 =?utf-8?B?b2JMay9GZ2xsMkc3Zk5VSllpKzdLaHRnc1ZXL1Z3V2NiaXkwWkwvU3paQXNU?=
 =?utf-8?B?M2FsVmY3ZFUwU0tmcUNtVk01cWxnVkVOWXIvOGNqZWlOdlVXS2phbWZRNHg1?=
 =?utf-8?Q?A020MeIX7zqm5ThJ5qVFTe9Sun5I5fft3Q?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e2f7c608-1946-4a88-dfc8-08d892100c8f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 13:34:50.4028
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XBd8fZyt0Ax/AiEZ/nGkXNvYPLbvmhfOJp27+BD2ug/21YcW2mjJINidmLIC5DnCcW9kCU7tAQ20DwH7YbxZ5w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5179
X-OriginatorOrg: citrix.com

On Tue, Nov 24, 2020 at 05:09:14PM +0100, Manuel Bouyer wrote:
> On Tue, Nov 24, 2020 at 04:49:17PM +0100, Roger Pau Monné wrote:
> > Could you also give a try with ioapic_ack=new on the Xen command line?
> 
> With this I still have the interrupt issue, but Xen doesn't panic on 'i'.
> http://www-soc.lip6.fr/~bouyer/xen-log8.txt

Sorry for the delay, I have yet another debug patch for you to try.
Can you remove the ioapic_ack=new from the command line and rebuild
the hypervisor with the provided patch applied and debug trace
enabled? (`gmake -C xen menuconfig` and go into Debugging Options to
find it).

Then once the system stalls use the 'T' debug key to dump the buffer.

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..adbfccdd0f 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -278,6 +278,10 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u\n", gsi);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +289,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +412,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                              irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index 49bd778484..db7167eb4b 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..baef41cd37 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,10 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u\n", desc->irq);
+
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..25ee1791f8 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -1010,6 +1010,9 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u\n", guest_gsi);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..91579c33b9 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 34
+
 #endif /* __XEN_IRQ_H__ */



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 13:39:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 13:39:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38552.71297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHUC-0007RT-Up; Thu, 26 Nov 2020 13:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38552.71297; Thu, 26 Nov 2020 13:39:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHUC-0007RM-R8; Thu, 26 Nov 2020 13:39:00 +0000
Received: by outflank-mailman (input) for mailman id 38552;
 Thu, 26 Nov 2020 13:38:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tnMB=FA=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1kiHUB-0007RG-5k
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 13:38:59 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd1bfaee-425a-4fa3-8d84-9bbadca92b99;
 Thu, 26 Nov 2020 13:38:58 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id s13so2435487wmh.4
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 05:38:58 -0800 (PST)
Received: from dell.default ([91.110.221.235])
 by smtp.gmail.com with ESMTPSA id s133sm7035825wmf.38.2020.11.26.05.38.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 26 Nov 2020 05:38:56 -0800 (PST)
X-Inumbo-ID: fd1bfaee-425a-4fa3-8d84-9bbadca92b99
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=eLYy9GrW221tE7bgw0J/86UTqSktvqRhGJxBlLeC3CQ=;
        b=l8uEDUWZzvTPhQEmHkZEDzGn+Iktals9WkH8ixtmNOmOetN+vKndogKyVo59zGXYLz
         XRXvLUk5/+I04WWy7FCL0sLq4G94vw7QdXKfevLaySsnixEVKYZjC38zOVHNwqTkRYLN
         La8EC6Kxt6FBWP9D25iK6dlCY0QSnmFPjI0JmdW+T/tdAWhzrLYaOiJHbt1NLSwEkhEx
         Y5DRG1yTcLpH7SVmPZVFWQDYEG3aMUf+bQ4cF2ARtkMOy6ywbUIdkC/0GkwZWXKV+ZrI
         GJxdb71yq3kNI4ucwGOAP83wwCKHEcGCH1dgmfnG+q6ylSw4bmYBixyYObvyjE2aYJuI
         JqmQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=eLYy9GrW221tE7bgw0J/86UTqSktvqRhGJxBlLeC3CQ=;
        b=C/C5NPsVPWb4XYiXEl3XaOLe+76uZFOGbqdB5xuLxr3255BPP756dUbEer6uSduQoo
         hFF+Ytq60KRjiH77ialvpGQCwjRh+nDMQb+5ZJOsu5IdhAlnqg5nmZOnGFSD0eOzO0V+
         1VktPn32AVtBSlckR2KqoOVsXkAQJHx/IgVFYxJBQUHIE/stK923SHyCo5xSW8Ymxw/C
         JKPUAubDXKXr5kKwzFj60O+7hNPbG4qyHLGjWgNocaNSsBjgBSi1h+A4ts2ndnpqoqun
         WG/mjVNoPXlto5mTNn8/4SfLs41fPfDm/4/cXm84Mw6WwGN4EvFv3/TLOIOkL9PTK1es
         SSkw==
X-Gm-Message-State: AOAM533AdnOVUVU3SxhJDPCvsmP6aZBAaBPIxIH+0xpar5xRZr4pHqhe
	R3Z2U5IlisFe/MwJEVoGSFAF6g==
X-Google-Smtp-Source: ABdhPJyX27kWdIdFacrNhU+B2wpKeoJmDR9OAO7qJiUZ6oA3MPy3XKvt03Krjrfl2kApfFtGjVazXw==
X-Received: by 2002:a1c:dc82:: with SMTP id t124mr3481528wmg.94.1606397937237;
        Thu, 26 Nov 2020 05:38:57 -0800 (PST)
Received: from dell.default ([91.110.221.235])
        by smtp.gmail.com with ESMTPSA id s133sm7035825wmf.38.2020.11.26.05.38.55
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 26 Nov 2020 05:38:56 -0800 (PST)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org
Cc: linux-kernel@vger.kernel.org,
	Alexei Starovoitov <ast@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	bpf@vger.kernel.org,
	Daniel Borkmann <daniel@iogearbox.net>,
	Dany Madden <drt@linux.ibm.com>,
	Daris A Nevil <dnevil@snmc.com>,
	"David S. Miller" <davem@davemloft.net>,
	Erik Stahlman <erik@vt.edu>,
	Geoff Levand <geoff@infradead.org>,
	Grygorii Strashko <grygorii.strashko@ti.com>,
	"Gustavo A. R. Silva" <gustavoars@kernel.org>,
	Ishizaki Kou <kou.ishizaki@toshiba.co.jp>,
	Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>,
	Jakub Kicinski <kuba@kernel.org>,
	Jens Osterkamp <Jens.Osterkamp@de.ibm.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Allen <jallen@linux.vnet.ibm.com>,
	John Fastabend <john.fastabend@gmail.com>,
	Kurt Kanzenbach <kurt@linutronix.de>,
	Lijun Pan <ljp@linux.ibm.com>,
	linuxppc-dev@lists.ozlabs.org,
	Michael Ellerman <mpe@ellerman.id.au>,
	netdev@vger.kernel.org,
	Nicolas Pitre <nico@fluxnic.net>,
	Paul Durrant <paul@xen.org>,
	Paul Mackerras <paulus@samba.org>,
	Peter Cammaert <pc@denkart.be>,
	Russell King <rmk@arm.linux.org.uk>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Santiago Leon <santi_leon@yahoo.com>,
	Sukadev Bhattiprolu <sukadev@linux.ibm.com>,
	Thomas Falcon <tlfalcon@linux.vnet.ibm.com>,
	Utz Bacher <utz.bacher@de.ibm.com>,
	Wei Liu <wei.liu@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 0/8] Rid W=1 warnings in Net
Date: Thu, 26 Nov 2020 13:38:45 +0000
Message-Id: <20201126133853.3213268-1-lee.jones@linaro.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Resending the stragglers.

This set is part of a larger effort attempting to clean up W=1
kernel builds, which are currently overwhelmingly riddled with
niggly little warnings.

Lee Jones (8):
  net: ethernet: smsc: smc91x: Demote non-conformant kernel function
    header
  net: xen-netback: xenbus: Demote nonconformant kernel-doc headers
  net: ethernet: ti: am65-cpsw-qos: Demote non-conformant function
    header
  net: ethernet: ti: am65-cpts: Document am65_cpts_rx_enable()'s 'en'
    parameter
  net: ethernet: ibm: ibmvnic: Fix some kernel-doc misdemeanours
  net: ethernet: toshiba: ps3_gelic_net: Fix some kernel-doc
    misdemeanours
  net: ethernet: toshiba: spider_net: Document a whole bunch of function
    parameters
  net: ethernet: ibm: ibmvnic: Fix some kernel-doc issues

 drivers/net/ethernet/ibm/ibmvnic.c           | 27 ++++++++++----------
 drivers/net/ethernet/smsc/smc91x.c           |  2 +-
 drivers/net/ethernet/ti/am65-cpsw-qos.c      |  2 +-
 drivers/net/ethernet/ti/am65-cpts.c          |  2 +-
 drivers/net/ethernet/toshiba/ps3_gelic_net.c |  9 ++++---
 drivers/net/ethernet/toshiba/spider_net.c    | 18 ++++++++-----
 drivers/net/xen-netback/xenbus.c             |  4 +--
 drivers/net/xen-netfront.c                   |  6 ++---
 8 files changed, 37 insertions(+), 33 deletions(-)

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: bpf@vger.kernel.org
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dany Madden <drt@linux.ibm.com>
Cc: Daris A Nevil <dnevil@snmc.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Erik Stahlman <erik@vt.edu>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
Cc: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Kurt Kanzenbach <kurt@linutronix.de>
Cc: Lijun Pan <ljp@linux.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: netdev@vger.kernel.org
Cc: Nicolas Pitre <nico@fluxnic.net>
Cc: Paul Durrant <paul@xen.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Cammaert <pc@denkart.be>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Santiago Leon <santi_leon@yahoo.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Cc: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Cc: Utz Bacher <utz.bacher@de.ibm.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: xen-devel@lists.xenproject.org
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 13:39:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 13:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38553.71309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHUH-0007TT-6o; Thu, 26 Nov 2020 13:39:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38553.71309; Thu, 26 Nov 2020 13:39:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiHUH-0007TL-3H; Thu, 26 Nov 2020 13:39:05 +0000
Received: by outflank-mailman (input) for mailman id 38553;
 Thu, 26 Nov 2020 13:39:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tnMB=FA=linaro.org=lee.jones@srs-us1.protection.inumbo.net>)
 id 1kiHUG-0007RG-1I
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 13:39:04 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 050c9ea9-0c14-4648-b2c1-25c7fdaf621a;
 Thu, 26 Nov 2020 13:39:00 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id g14so2148226wrm.13
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 05:39:00 -0800 (PST)
Received: from dell.default ([91.110.221.235])
 by smtp.gmail.com with ESMTPSA id s133sm7035825wmf.38.2020.11.26.05.38.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 26 Nov 2020 05:38:59 -0800 (PST)
X-Inumbo-ID: 050c9ea9-0c14-4648-b2c1-25c7fdaf621a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=PvDQ8jiSdggfocg7QevhSStHyCThX2E8vELTACyHnVM=;
        b=hfIUxVh9cOOEk1+Xnm9Js0JYx8f+4mDVffWLE+RpCmxL0UzygV22nLn5ImHdR94WtJ
         V84dJowMWFbBc/vOV5VcNtNwABoiSipolRFrT0sotRub7aAlqP7agQXikgVrvCdA2QhE
         1XMvyzZ2DaV11usuc9FgvVwCcGCAeKOuUfuG2YhlvYZaw/Cosi3dFUAOofTHESywFkhR
         lXBStbyPFd9fMWdFbVbWSRWZUoZaIwrvHuPNfaOuddOq/VwH+UM2FYbBPeQvNclUYX3v
         bWqIk2ZR8IZOZymlbf8n4zvsR+avPkwgP+ylWMDUOng9/e7F1EqjwxWl5IwWm0RAijtc
         4L4A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=PvDQ8jiSdggfocg7QevhSStHyCThX2E8vELTACyHnVM=;
        b=Tj90SPS8lhBioxVfRh/k1B/KWNUyFyMtx7KmO+2BseQliEFJjf9iUfX7Lci1aZwqWK
         hu13F/T0Nlx47KIZ9MPWS86JXSx6lvd4edkQAVafBg0d3zXw5Jyejn7I0BBNLHI9hY+i
         B5DzNCMyr2zEXYWGovjFs0CZ7rN5hLaChqTa91VqUDyjnHl39d1qKexGlX0GYBZ3A1cu
         S1Q5LpybLqsZf+XA5ClCQs0CDu4HOCDl84E/BBOJ4u5biJ71RcKvn+UqGLvPsWczylqG
         6aPmV0XN7mtShoTWigp0Tj2ZX1ssSRASgvQwvLLisUZi3mIuIRdHfXGSkFPl9pdxO3US
         liaA==
X-Gm-Message-State: AOAM533EgWbwykP0X++omyU2iQMdT8a+iJDqFL/P6MVAQQgfzMGKq8q4
	YAbYJEfIfv8foiRm88P4yIvh8Q==
X-Google-Smtp-Source: ABdhPJxxuDr0CFihwvoh9qXdvCtCohE03uRRltH7bO3tWEys9Iub2oFH9hIvPdZDltlla0WbomeVaw==
X-Received: by 2002:adf:ed12:: with SMTP id a18mr3939773wro.5.1606397940009;
        Thu, 26 Nov 2020 05:39:00 -0800 (PST)
Received: from dell.default ([91.110.221.235])
        by smtp.gmail.com with ESMTPSA id s133sm7035825wmf.38.2020.11.26.05.38.58
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 26 Nov 2020 05:38:59 -0800 (PST)
From: Lee Jones <lee.jones@linaro.org>
To: lee.jones@linaro.org
Cc: linux-kernel@vger.kernel.org,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org
Subject: [PATCH 2/8] net: xen-netback: xenbus: Demote nonconformant kernel-doc headers
Date: Thu, 26 Nov 2020 13:38:47 +0000
Message-Id: <20201126133853.3213268-3-lee.jones@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201126133853.3213268-1-lee.jones@linaro.org>
References: <20201126133853.3213268-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fixes the following W=1 kernel build warning(s):

 drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'dev' not described in 'frontend_changed'
 drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'frontend_state' not described in 'frontend_changed'
 drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'dev' not described in 'netback_probe'
 drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'id' not described in 'netback_probe'

Cc: Wei Liu <wei.liu@kernel.org>
Cc: Paul Durrant <paul@xen.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/net/xen-netback/xenbus.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f1c1624cec8f5..de1b5471d929b 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -411,7 +411,7 @@ static void read_xenbus_frontend_xdp(struct backend_info *be,
 	vif->xdp_headroom = headroom;
 }
 
-/**
+/*
  * Callback received when the frontend's state changes.
  */
 static void frontend_changed(struct xenbus_device *dev,
@@ -992,7 +992,7 @@ static int netback_remove(struct xenbus_device *dev)
 	return 0;
 }
 
-/**
+/*
  * Entry point to this code when a new device is created.  Allocate the basic
  * structures and switch to InitWait.
  */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 14:16:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 14:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38569.71321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiI4R-0002jc-4Z; Thu, 26 Nov 2020 14:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38569.71321; Thu, 26 Nov 2020 14:16:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiI4R-0002jV-1Q; Thu, 26 Nov 2020 14:16:27 +0000
Received: by outflank-mailman (input) for mailman id 38569;
 Thu, 26 Nov 2020 14:16:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=smDs=FA=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kiI4P-0002jQ-F3
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 14:16:25 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cb1f05b-ffe3-4645-ad21-a2b638b37f03;
 Thu, 26 Nov 2020 14:16:22 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AQEGDC3018777
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 26 Nov 2020 15:16:14 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 9DF9F2E9CAC; Thu, 26 Nov 2020 15:16:08 +0100 (MET)
X-Inumbo-ID: 8cb1f05b-ffe3-4645-ad21-a2b638b37f03
Date: Thu, 26 Nov 2020 15:16:08 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201126141608.GA4123@antioche.eu.org>
References: <b3912e97-9684-fe97-1053-ad7168a19721@suse.com>
 <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 26 Nov 2020 15:16:15 +0100 (MET)

On Thu, Nov 26, 2020 at 02:34:44PM +0100, Roger Pau Monné wrote:
> On Tue, Nov 24, 2020 at 05:09:14PM +0100, Manuel Bouyer wrote:
> > On Tue, Nov 24, 2020 at 04:49:17PM +0100, Roger Pau Monn wrote:
> > > Could you also give a try with ioapic_ack=new on the Xen command line?
> > 
> > With this I still have the interrupt issue, but Xen doesn't panic on 'i'.
> > http://www-soc.lip6.fr/~bouyer/xen-log8.txt
> 
> Sorry for the delay, I have yet another debug patch for you to try.
> Can you remove the ioapic_ack=new from the command line and rebuild
> the hypervisor with the provided patch applied and debug trace
> enabled? (`gmake -C xen menuconfig` and go into Debugging Options to
> find it).

menuconfig doesn't build on NetBSD, so I set CONFIG_DEBUG_TRACE=y in
.config. I guess that is enough?
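(For illustration, the manual route just described amounts to the following; a scratch file stands in for xen/.config here, and on a real tree a rebuild of the hypervisor would follow:)

```shell
# Stand-in for xen/.config; on a real tree the file already exists
# after a first configuration pass.
config=./demo.config
: > "$config"

# What "set CONFIG_DEBUG_TRACE=y in .config" amounts to:
echo 'CONFIG_DEBUG_TRACE=y' >> "$config"

# Confirm the option is present in the form Kconfig reads it.
grep -c '^CONFIG_DEBUG_TRACE=y' "$config"
```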

For the record, my boot command line is now:
menu=Boot Xen PVH:load /test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1,,0 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug dom0_vcpus_pin sync_console dom0_max_vcpus=1 watchdog=force iommu=no-intremap


> 
> Then once the system stalls use the 'T' debug key to dump the buffer.

Here it is. It seems to be stuck in an infinite loop; I hit the 'R' key
after several minutes:
http://www-soc.lip6.fr/~bouyer/xen-log9.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 14:26:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 14:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38578.71333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiIEU-0003k0-4n; Thu, 26 Nov 2020 14:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38578.71333; Thu, 26 Nov 2020 14:26:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiIEU-0003jt-1P; Thu, 26 Nov 2020 14:26:50 +0000
Received: by outflank-mailman (input) for mailman id 38578;
 Thu, 26 Nov 2020 14:26:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TQ9x=FA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kiIES-0003jo-Al
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 14:26:48 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id faa86cfa-d83b-40ee-8247-5a320e167ee4;
 Thu, 26 Nov 2020 14:26:47 +0000 (UTC)
X-Inumbo-ID: faa86cfa-d83b-40ee-8247-5a320e167ee4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606400807;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=rv6g410FGYHyb6P4r4g1APVzFc4V0hnLWjMl2Mbtbsc=;
  b=J9wOXoCTGFKh/r7TodcP5gL6WA5E9LINtdwTqFs4BI+sw4KV/HVLiymP
   lB5Spj9uBA0lhQL4MOV5wAESmFwsRFIZ42Udbdin5+mwwRLz4UpX40Xpp
   YEzoixzpauX2deY2lq93VekVToTY1wo+PNY5GKrZEoVr9BdYUu0emdf8T
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: eNGlQ0tFch7UeHwQggCyekhQWHvxj3kBwDCbhi7WRV6ozNSPC+UZm/w+/+/9rdV2X9k21jIRMq
 ZZ/FumoQ2u5cpOwgBXEU54ZfLNUE4/S3hEp0uzG9CnsUCvtFZQnG0dB+CJC9s2feuOzdhh2oM4
 nbyZkVUD9fkwyZi089ztDQl0BYn4VGv6jGdwyYDPqWkaFjTA5/jwF1rRXSBWMjm7b4TWKtxGGy
 7OGHPo/97y69hFTWYxrdJTBzaAINRB4gHdSwMPcmhQd3+xcZ0dCqlrBKt/gw6q15G0TDfbFWvB
 +Ts=
X-SBRS: None
X-MesageID: 32224912
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,372,1599537600"; 
   d="scan'208";a="32224912"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nBhKWPw6p+3v/BZ6aWkfzT9qdSEooF9785xgVIzGvQmZdTLGqfqfbU6mDEvIQDH48lygysiQMSByQCSZLfYIJu87oN6/4kQehWOi3sNf9c63ZgjUZEw0TAayy1KSbCZ7zEmokwSnZbHY8IBgKDHw1K2bdCWtqJpGJlJk5i1E0YZ9D+9SlcQ04WHpW6zo7+gl7b7+V8IK7EfRmE/7RDQ0BuRVbdqx8et8naHG0J1v+WJivYNo65CJI03Fc80+S3zqiQDAgFA3tKtRRZEgLL/N8e1V3U0nercTiRMP6INfs+pRirDK++EEIiihc6eg3ceQqmYZG514zZWx4MASMx8sjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GCpU2/zs8j2zYgYkGpaGpAoUqf4mUEXkuihQ+udr8X4=;
 b=HRJqkjrLdvfIRPNWNExvgvLTsfHjbqJxvM/HnrxEHvgnQ12r5ko6Pvagqueyf+rn95p9QwgqakCZH044NWePHWRQVcpmX3eMTCU6w40If3WxWQuet+B4O07Vyf3zqc4YUZAlQG4Cbo58eUl4BuR6NuX6GEi8GiTg4+U2eCk5NsY8TEVHwlpn8srp2tCeWo8HaNe9zX0sNWK19ifyRHMkfEdP4TB2XlyRAqQ4tFcQOOMKdRB4YZrj8hdKMsHJYo5GkxfxUkLfuiqmlqXQ4O8ZfPgKvemsKW1lJmLaE2UmAVFrDsvDbxeXOaOz09AEYaMR+UfX6Kat5vk8rk8bgLs3bQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GCpU2/zs8j2zYgYkGpaGpAoUqf4mUEXkuihQ+udr8X4=;
 b=IjL0rpNY3wY/+CzDOuyDr2+Y+tTzqkj6d8xvL5+zn8XdeHfYL9IReza/Q0NHpn7nRKI7oo/xDYNtLRYb3xaPvNmHgarh53L0nJLzq3fyhBhv/1xHXOY/FZkBkVJVVDDsKlkkT+u7tIVKis+u0IIUViioEVUGbgjjDuO5SYXeJc8=
Date: Thu, 26 Nov 2020 15:26:35 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
References: <20201124122102.3igsriesou3vl6mu@Air-de-Roger>
 <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201126141608.GA4123@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0006.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 673c298e-a933-4a0f-050b-08d892174a7b
X-MS-TrafficTypeDiagnostic: DM6PR03MB4971:
X-Microsoft-Antispam-PRVS: <DM6PR03MB49716ED7B0F2FE873147128C8FF90@DM6PR03MB4971.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: jBIg8U45gQzA1xmvEOhf3k0omvo8ebrnbmERTQz+zypQHH1X4pN1Xuen9krpCsCfy6n1ifX+usO30wM7q/AiTYTX0MB7pk73B3Z/PjQyK9tjx+gUlYVjdSMDvJfLSt1FFLzor/J6J0BDF/aGXrZen5BFrWRa56a+6G24wdU+51MgACIJjKjc/+7R3eYT9PLNV2/7sqzrp+NpQxEek1bLWkbFYFzzHSrZBS9iPcG40jRn9jmoUCWHs8BuEZ4nNPxdq36ukwd82JnWxoCVcfzAxgVqbYWHH7Uw06mf8Gfl4l0iQm1lYZWoYrN+dRQYDxmbJXuCQ0bVWUzmMGKjX6a4us/l2WSMhbm8B/D+760levJtePT0hc+n2Ouj9GCdgRAX2suo7D1ZBK+PQ2aX/tfLoA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(396003)(39860400002)(376002)(346002)(136003)(956004)(83380400001)(2906002)(66556008)(33716001)(66946007)(316002)(6666004)(66476007)(8676002)(4326008)(478600001)(86362001)(5660300002)(186003)(16526019)(26005)(966005)(6486002)(8936002)(6496006)(85182001)(1076003)(6916009)(9686003);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 673c298e-a933-4a0f-050b-08d892174a7b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 14:26:40.6654
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: d74EU53SSv0p18tAZ6vaj8gVSYOdJfHV5mh/AdoCgNgmRlzD3dOxOOb0BpSaDL+2JENw8EMaDjQp8SRYQ3uu2g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4971
X-OriginatorOrg: citrix.com

On Thu, Nov 26, 2020 at 03:16:08PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 26, 2020 at 02:34:44PM +0100, Roger Pau Monné wrote:
> > On Tue, Nov 24, 2020 at 05:09:14PM +0100, Manuel Bouyer wrote:
> > > On Tue, Nov 24, 2020 at 04:49:17PM +0100, Roger Pau Monné wrote:
> > > > Could you also give a try with ioapic_ack=new on the Xen command line?
> > > 
> > > With this I still have the interrupt issue, but Xen doesn't panic on 'i'.
> > > http://www-soc.lip6.fr/~bouyer/xen-log8.txt
> > 
> > Sorry for the delay, I have yet another debug patch for you to try.
> > Can you remove the ioapic_ack=new from the command line and rebuild
> > the hypervisor with the provided patch applied and debug trace
> > enabled? (`gmake -C xen menuconfig` and go into Debugging Options to
> > find it).
> 
> menuconfig doesn't build on NetBSD, so I set CONFIG_DEBUG_TRACE=y in
> .config. I guess that is enough?
>
> For the record, my boot command line is now
> menu=Boot Xen PVH:load /test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1,,0 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug dom0_vcpus_pin sync_console dom0_max_vcpus=1 watchdog=force iommu=no-intremap
> 
> 
> > 
> > Then once the system stalls use the 'T' debug key to dump the buffer.
> 
> Here it is. It seems to be stuck in an infinite loop; I hit the 'R' key
> after several minutes:
> http://www-soc.lip6.fr/~bouyer/xen-log9.txt

Oh, that's actually very useful. The interrupt is being constantly
injected from the hardware and received by Xen; it's just not then
injected into dom0 - that's the bit we are missing. Let me look into
adding some more debug to that path; hopefully it will tell us where
things are getting blocked.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 14:53:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 14:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38586.71345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiIeU-0006Ru-9F; Thu, 26 Nov 2020 14:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38586.71345; Thu, 26 Nov 2020 14:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiIeU-0006Rn-5J; Thu, 26 Nov 2020 14:53:42 +0000
Received: by outflank-mailman (input) for mailman id 38586;
 Thu, 26 Nov 2020 14:53:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rLOc=FA=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1kiIeS-0006Ri-Kj
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 14:53:40 +0000
Received: from mail-yb1-xb44.google.com (unknown [2607:f8b0:4864:20::b44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8afff821-b449-4009-a062-debfe01bd601;
 Thu, 26 Nov 2020 14:53:39 +0000 (UTC)
Received: by mail-yb1-xb44.google.com with SMTP id o71so1881761ybc.2
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 06:53:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=DTotsfb8u+PB0eT0g4gTrA/vgT4HfiDZTYQ/geteWC4=;
        b=QWuS1TBfImi/m9J+5lB37414TjwoB+kfRyX/7ohuLNDuEazbBA4fEE506RPmuhzrQT
         KCYao+uRLv4Su9SW/eKLjGfymcNDAcVP9irkT5MfO/ys+N1YXqwwODp/at6Hjf5mbn7y
         sEjBq+fitA5RwLT9szzutyPemHxF/3vzz+qMkDnIA1HF+rIjtAU7PcoRO+GQ0ANzP2/3
         4RgPAdoGGim7svP8Zrp9IWC0s7kVtrSQn/PXobh5aGfJoVk3uyVZYliED/q0Nk6tCoA8
         W9fyUHPup50QVVdVd9NqYx0m2pkDUcyuxLwwV68mOS3NFLJOgvtFKpngAgg7rBEfKLyO
         Kn5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=DTotsfb8u+PB0eT0g4gTrA/vgT4HfiDZTYQ/geteWC4=;
        b=ilFJt8VtW0Fm+UVO4nQX8RyqZN1/4TIYokVvBIMy7/20b9t+docFRJ7hNxiK5ATVwz
         hmNadxy5sDlrTwLnR9xgg04hK0Edb6x8Q2yoVJ/9bwXS5593BzF9QbR/nFzajYfBg3ox
         Nl5TCkzOAuw3pzIlxZAACDgGA11bn2IzbLUmnTzzhPOOHQ9gsy+ZUUkJCsQM5Klt0irS
         JWgqjQGZGgp3u8nia8e1Cl/WDjZWWG+MGd4T7wy62uYUglJRLKfZnCAZAoFmbI3b5L1D
         r7omAzlNL53wyctryB+/erxKyoaIVN3eOhtp5vGuOyH80AixhFLgyGzWl2bY0kEPcsEB
         i5Fg==
X-Gm-Message-State: AOAM531GfYFbiXBnQV2mch3oY2ILfd8JV2hGq3Q+WZUQ/SPXvdO59Qbb
	Qq3o//xB/4fRcdsvS1L7ED8dSNN+uHb+XPepGs0=
X-Google-Smtp-Source: ABdhPJxnOXTYq+iR4KCiqWmvI5brXjGTtWFP5n3J8AJAa70YoLiXvitjZtrbfPlByRts+Q0mJZBucXgMnTrbOCoAA44=
X-Received: by 2002:a25:aac5:: with SMTP id t63mr5128050ybi.22.1606402419264;
 Thu, 26 Nov 2020 06:53:39 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
 <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
 <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com> <44005bde-f6d4-5eaa-39b8-1a5efeedb2d3@gmail.com>
In-Reply-To: <44005bde-f6d4-5eaa-39b8-1a5efeedb2d3@gmail.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Thu, 26 Nov 2020 15:53:27 +0100
Message-ID: <CANiq72nobq=ptWK-qWxU91JHqkKhMcRtJNnw2XJd5-vSJWZd8Q@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Edward Cree <ecree.xilinx@gmail.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>, 
	Kees Cook <keescook@chromium.org>, Jakub Kicinski <kuba@kernel.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	alsa-devel@alsa-project.org, amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	bridge@lists.linux-foundation.org, ceph-devel@vger.kernel.org, 
	cluster-devel@redhat.com, coreteam@netfilter.org, devel@driverdev.osuosl.org, 
	dm-devel@redhat.com, drbd-dev@lists.linbit.com, 
	dri-devel <dri-devel@lists.freedesktop.org>, GR-everest-linux-l2@marvell.com, 
	GR-Linux-NIC-Dev@marvell.com, intel-gfx@lists.freedesktop.org, 
	intel-wired-lan@lists.osuosl.org, keyrings@vger.kernel.org, 
	linux1394-devel@lists.sourceforge.net, linux-acpi@vger.kernel.org, 
	linux-afs@lists.infradead.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-arm-msm <linux-arm-msm@vger.kernel.org>, linux-atm-general@lists.sourceforge.net, 
	linux-block@vger.kernel.org, linux-can@vger.kernel.org, 
	linux-cifs@vger.kernel.org, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, linux-fbdev@vger.kernel.org, 
	linux-geode@lists.infradead.org, linux-gpio@vger.kernel.org, 
	linux-hams@vger.kernel.org, linux-hwmon@vger.kernel.org, 
	linux-i3c@lists.infradead.org, linux-ide@vger.kernel.org, 
	linux-iio@vger.kernel.org, linux-input <linux-input@vger.kernel.org>, 
	linux-integrity@vger.kernel.org, linux-mediatek@lists.infradead.org, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	Linux-MM <linux-mm@kvack.org>, linux-mtd@lists.infradead.org, 
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, linux-scsi@vger.kernel.org, 
	linux-sctp@vger.kernel.org, linux-security-module@vger.kernel.org, 
	linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org, 
	linux-watchdog@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Network Development <netdev@vger.kernel.org>, netfilter-devel@vger.kernel.org, 
	nouveau@lists.freedesktop.org, op-tee@lists.trustedfirmware.org, 
	oss-drivers@netronome.com, patches@opensource.cirrus.com, 
	rds-devel@oss.oracle.com, reiserfs-devel@vger.kernel.org, 
	samba-technical@lists.samba.org, selinux@vger.kernel.org, 
	target-devel@vger.kernel.org, tipc-discussion@lists.sourceforge.net, 
	usb-storage@lists.one-eyed-alien.net, 
	virtualization@lists.linux-foundation.org, wcn36xx@lists.infradead.org, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, xen-devel@lists.xenproject.org, 
	linux-hardening@vger.kernel.org, Nick Desaulniers <ndesaulniers@google.com>, 
	Nathan Chancellor <natechancellor@gmail.com>, Miguel Ojeda <ojeda@kernel.org>, 
	Joe Perches <joe@perches.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Nov 25, 2020 at 11:44 PM Edward Cree <ecree.xilinx@gmail.com> wrote:
>
> To make the intent clear, you have to first be certain that you
>  understand the intent; otherwise by adding either a break or a
>  fallthrough to suppress the warning you are just destroying the
>  information that "the intent of this code is unknown".

If you don't know what the intent of your own code is, then you
*already* have a problem on your hands.

> Figuring out the intent of a piece of unfamiliar code takes more
>  than 1 minute; just because
>     case foo:
>         thing;
>     case bar:
>         break;
>  produces identical code to
>     case foo:
>         thing;
>         break;
>     case bar:
>         break;
>  doesn't mean that *either* is correct — maybe the author meant

What takes 1 minute is adding it *mechanically* by the author, i.e. so
that you later compare whether codegen is the same.

>  to write
>     case foo:
>         return thing;
>     case bar:
>         break;
>  and by inserting that break you've destroyed the marker that
>  would direct someone who knew what the code was about to look
>  at that point in the code and spot the problem.

Then it means you already have a bug. This patchset gives the
maintainer a chance to notice it, which is a good thing. The "you've
destroyed the marker" claim is bogus, because:
  1. you were not looking into it
  2. you didn't notice the bug so far
  3. the marker is implicit -- harder to spot
  4. the marker is only useful if you explicitly look for this kind of bug.
So why don't you do it now?

> Thus, you *always* have to look at more than just the immediate
>  mechanical context of the code, to make a proper judgement that
>  yes, this was the intent.

I find that to be the responsibility of the maintainers and reviewers
for tree-wide patches like this, assuming they want to take it on. They
can also keep the behavior (and the bugs) without spending the time.
Their choice.

> If you think that that sort of thing
>  can be done in an *average* time of one minute, then I hope you
>  stay away from code I'm responsible for!

Please don't accuse others of recklessness or incompetence, especially
if you didn't understand what they said.

> A warning is only useful because it makes you *think* about the
>  code.  If you suppress the warning without doing that thinking,
>  then you made the warning useless; and if the warning made you
>  think about code that didn't *need* it, then the warning was
>  useless from the start.

We are not suppressing the warning. Quite the opposite, in fact.

> So make your mind up: does Clang's stricter -Wimplicit-fallthrough
>  flag up code that needs thought (in which case the fixes take
>  effort both to author and to review)

As I said several times already, it does take time to review if the
maintainer wants to take the chance to see if they had a bug to begin
with, but it does not require thought for the author if they just go
for equivalent codegen.

> or does it flag up code
>  that can be mindlessly "fixed" (in which case the warning is
>  worthless)?  Proponents in this thread seem to be trying to
>  have it both ways.

A warning is not worthless just because you can mindlessly fix it.
There are many counterexamples, e.g. many
checkpatch/lint/clang-format/indentation warnings, or functional ones
like the `if (a = b)` warning...

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:09:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38619.71380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiIu4-0007lS-3X; Thu, 26 Nov 2020 15:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38619.71380; Thu, 26 Nov 2020 15:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiIu4-0007lL-0U; Thu, 26 Nov 2020 15:09:48 +0000
Received: by outflank-mailman (input) for mailman id 38619;
 Thu, 26 Nov 2020 15:09:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TQ9x=FA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kiIu2-0007lC-NN
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:09:46 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e34560ea-5e4d-4c62-887f-0791fcc26732;
 Thu, 26 Nov 2020 15:09:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606403385;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=S3xEXblAkA9UUAnOtoZ8R8dU7OwjQBGIDmjQucm7mYk=;
  b=QrwLpSdAZFsEVCrTsiEYfTDNu2apDPeSojmRnY5dLqiOmCfRlNtuFNfT
   N0Gz44TB02mdeqb5oRrhaZDwZjkoMRlJOFXqtr15F2etORrlIoJVNEZoB
   fJrZ/vqBK87xm6ud+gl76HF4tNOmIavYalPZRDW65l82pOTz0egzt0wIy
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: GEClyimU1g6/KT5QycSY5EDEOjXRQRHUQ3ke+9+iSHW2REUnebOQL8pu/zknLYw7/ysg9Ifsd9
 19RLeYK6AdONEq5Slk5ocpwrnM4FS909ShzG6Npu8zH3O2UsuXB0zkQvvXJPw8kq+e0sKbNX+K
 JNxXBx53DMZtvSJC1lnn3+kXUnq25KMs/mH7C+bnq1n7dmjdDjNOVtZDTF5ON+dGEM3HbdObPn
 01tYjAcZmrGxbiZOaXPANPQY6tUb+Cweo/KGwaZKABUGDY7o20N3pZqUVT65Rltwzp/X3FkO+l
 vrg=
X-SBRS: None
X-MesageID: 31969960
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,372,1599537600"; 
   d="scan'208";a="31969960"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=emFxJG9hYcgVeUFIx4Nh9horP5kZlQDXp4qLuF2PhAFbwqh2L5F3H43CyQZsrqGxRjR3Ihb8qS+Gu7/1j+GWcs6GiYin7Cnn1MxudMHdysDpyZEMC2vRhBrDN75UIguUoj0N5wP9lfVUbNJeUfF9n6yZxQopEHMm3XbqcYZctZ/Xo2c+nyuqevPW5reF0vondFvS3iFk7kpZsr5klXTxo94GXAkg+hxF/MMgeEujC6j6xhNwWAn2Byml0Zu+C85tsuzMBeEl3x1yVzfdXmmqhc9lusdYWcfQAmqGnVSUDup+IMPlXS5J9goLsPfqlSS2Dfbm9g8fmWoMvJFNMdw0uA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wu0uj+4INHqlDzBIKmTM1MaLYOMF9jLy6dsO+VElkyo=;
 b=mt0EhFiiEji+qD1WrZlW3aPQja5kD59a9KazNrtCE9fcqBWefT65NTfcygDMXMWDeh0wUWno1S1J+Mu+hA4H1pstrLdbAHgmSYwCZ6kbsRwj34xD5tn2ZniDPtik0u1vVWN+1Zdjiuzk2yBkK5nO6pfF716u7gmqWadpT74gt+3jteFDozjntspgSxI6+39UFOLfcaQbsOXDeI9zOuP0S/P4UUB4/RUkWSuSHyj7HgL8/ixe05wfTmBq6biWg3G6YJiKHxj9chutrDZ5uwz/cCYs5IKNNff9Ers+wLFMmoZfh5CRsJlBI08yZMSiWDLX+Kpce32JjUYUhEHI5VK/SA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Wu0uj+4INHqlDzBIKmTM1MaLYOMF9jLy6dsO+VElkyo=;
 b=DmrY6HJ0P+RlpMNnNOMyUnqjv78yo920B/5FHseW6KlWUbntZy2/55o7LHAsWPD2m+4PG+q3gfYtT3RmnGC+G9cM+WiHC7ED+V+EvIxTwDNc9uemH03G5qOkrQAkWsz+vLrujv9m+DB038C5EkfzZHScHQ4mIoUoxYet5bO5WEw=
Date: Thu, 26 Nov 2020 16:09:37 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
References: <20201124135948.GL2020@antioche.eu.org>
 <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
X-ClientProxiedBy: MRXP264CA0014.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d0d17e61-8f73-4192-b39b-08d8921d4cb1
X-MS-TrafficTypeDiagnostic: DM5PR03MB2780:
X-Microsoft-Antispam-PRVS: <DM5PR03MB27804F7C166DC1B9B67B90C68FF90@DM5PR03MB2780.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: WydnORSkeqrc+bFFMcUlY6yoUAkWdtuexWyvoTAdKx46UK/4ZygXti+/AUPFLnNjiRr6DE/n/vNPKc0V7M82WG8RBKl99f1Vx0ZGwWJJ3ZoMYD15ZKEaWDDYlqzYkjTqoAUaBVEnB1wwCNrAm8OO6mpBtecGfbMSRdVUDUA3sqJzgum8sXrZIlU6wYYazy8L2lHTeM1pRtWMO4g4tydQ3WWGcYehjBKWxmWAocjRNlwMd1yXq7T6EQkwxP4QmcHYd6A4gd51M0H84/6YBHm3sgCtWn7ws/oywc9OcfZleorDleV4p8XAGhUN90WCJRJxIJcLCzlc1WXO0tkp9+ScVdjDQdzYL8FTqtiATphWpf9Me5v71wJ6M1jWqISdvzY1gQRTlkBEw/ujC04ifLSFUA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(39860400002)(136003)(376002)(366004)(396003)(478600001)(66556008)(66476007)(6486002)(8676002)(86362001)(8936002)(1076003)(966005)(66946007)(16526019)(5660300002)(26005)(186003)(2906002)(956004)(4326008)(33716001)(6916009)(9686003)(316002)(6496006)(83380400001)(85182001)(6666004);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: d0d17e61-8f73-4192-b39b-08d8921d4cb1
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Nov 2020 15:09:41.2471
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5kxPHxjvM4r4rxKFdEouUefwC0aYEnKLfQMSBbIagoOl+nDjKtpp9Co7fbftmq0nJYh+30aXZzn6XjINvhJikw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2780
X-OriginatorOrg: citrix.com

On Thu, Nov 26, 2020 at 03:26:35PM +0100, Roger Pau Monné wrote:
> On Thu, Nov 26, 2020 at 03:16:08PM +0100, Manuel Bouyer wrote:
> > On Thu, Nov 26, 2020 at 02:34:44PM +0100, Roger Pau Monné wrote:
> > > On Tue, Nov 24, 2020 at 05:09:14PM +0100, Manuel Bouyer wrote:
> > > > On Tue, Nov 24, 2020 at 04:49:17PM +0100, Roger Pau Monné wrote:
> > > > > Could you also give a try with ioapic_ack=new on the Xen command line?
> > > > 
> > > > With this I still have the interrupt issue, but Xen doesn't panic on 'i'.
> > > > http://www-soc.lip6.fr/~bouyer/xen-log8.txt
> > > 
> > > Sorry for the delay, I have yet another debug patch for you to try.
> > > Can you remove the ioapic_ack=new from the command line and rebuild
> > > the hypervisor with the provided patch applied and debug trace
> > > enabled? (`gmake -C xen menuconfig` and go into Debugging Options to
> > > find it).
> > 
> > menuconfig doesn't build on NetBSD, so I set CONFIG_DEBUG_TRACE=y in
> > .config. I guess that is enough?
> >
> > For the record, my boot command line is now
> > menu=Boot Xen PVH:load /test console=com0 root=dk0 -vx; multiboot /xen-test.gz dom0_mem=1024M console=com2 com2=57600,8n1,,0 loglvl=all guest_loglvl=all gnttab_max_nr_frames=64 dom0=pvh iommu=debug dom0_vcpus_pin sync_console dom0_max_vcpus=1 watchdog=force iommu=no-intremap
> > 
> > 
> > > 
> > > Then once the system stalls use the 'T' debug key to dump the buffer.
> > 
> > Here it is. It seems to be stuck in an infinite loop; I hit the 'R' key
> > after several minutes:
> > http://www-soc.lip6.fr/~bouyer/xen-log9.txt
> 
> Oh, that's actually very useful. The interrupt is being constantly
> injected from the hardware and received by Xen; it's just not then
> injected into dom0 - that's the bit we are missing. Let me look into
> adding some more debug to that path; hopefully it will tell us where
> things are getting blocked.

So I have yet one more patch for you to try; this one has more
debugging and a slight change in the emulated IO-APIC behavior.
Depending on the result I might have to find a way to mask the
interrupt so it doesn't spam the whole buffer, so that we can see
exactly what triggered the scenario you are in.

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 38ac5fb6c7..9db3dcc957 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * to know if the GSI is pending or not.
      */
     spin_lock(&d->arch.hvm.irq_lock);
+    if ( gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
+                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
+
     if ( trig == VIOAPIC_EDGE_TRIG || !hvm_irq->gsi_assert_count[gsi] )
     {
         if ( trig == VIOAPIC_LEVEL_TRIG )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..aeff9c7687 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -257,7 +257,17 @@ static void vioapic_write_redirent(
         vlapic_adjust_i8259_target(d);
     }
     else if ( ent.fields.trig_mode == VIOAPIC_EDGE_TRIG )
+    {
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC set edge trigger irq %u\n", gsi);
         pent->fields.remote_irr = 0;
+        if ( is_iommu_enabled(d) )
+        {
+            spin_unlock(&d->arch.hvm.irq_lock);
+            hvm_dpci_eoi(d, gsi, pent);
+            spin_lock(&d->arch.hvm.irq_lock);
+        }
+    }
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
               hvm_irq->gsi_assert_count[idx] )
@@ -278,6 +288,10 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u\n", gsi);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +299,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +422,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                              irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index 49bd778484..db7167eb4b 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..baef41cd37 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,9 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u\n", desc->irq);
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..92f3670508 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -828,6 +828,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
+    if ( pirq->pirq == TRACK_IRQ )
+        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);
+
     pirq_dpci->masked = 1;
     raise_softirq_for(pirq_dpci);
     return 1;
@@ -1010,6 +1013,9 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u\n", guest_gsi);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..91579c33b9 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 34
+
 #endif /* __XEN_IRQ_H__ */



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:18:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38636.71391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJ2L-0000KO-W1; Thu, 26 Nov 2020 15:18:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38636.71391; Thu, 26 Nov 2020 15:18:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJ2L-0000KD-Sw; Thu, 26 Nov 2020 15:18:21 +0000
Received: by outflank-mailman (input) for mailman id 38636;
 Thu, 26 Nov 2020 15:18:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EevG=FA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kiJ2K-0000K8-KQ
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:18:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2acfbf8e-7e4f-4002-8e37-b0dc4961aaa6;
 Thu, 26 Nov 2020 15:18:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EC0C7ACE0;
 Thu, 26 Nov 2020 15:18:17 +0000 (UTC)
X-Inumbo-ID: 2acfbf8e-7e4f-4002-8e37-b0dc4961aaa6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606403898; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Zy7ccoNOl0qFZO7b2BZ4VQGuj+49u2hiBURF8zzqyo0=;
	b=jxN4HrOBbgFtAShjzpbN1iWS4WX53PrS+VWdMoyrwKAv+I1Kh4vgNp5J2TiFGqc9N+iVTL
	IOVRBRRS7mlaBsr0BRG4A39KWOHWw0kzDn0t+Y3RZQokthgQgqH0YxD9XCB3r/x2OyXIDt
	lDkVaZdsxd7q8gkZWQMP8iGbux1q84w=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EC0C7ACE0;
	Thu, 26 Nov 2020 15:18:17 +0000 (UTC)
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
 <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
 <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
 <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
 <2aeba247-8b36-7b75-dc17-b901bf746f87@suse.com>
 <8e86bed4-b6fa-ed81-8ca8-41e727c56cb1@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <150fdd8b-f4c5-5ee8-f1e5-b3fafb4eb3ca@suse.com>
Date: Thu, 26 Nov 2020 16:18:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <8e86bed4-b6fa-ed81-8ca8-41e727c56cb1@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.11.2020 14:22, Julien Grall wrote:
> On 26/11/2020 11:34, Jan Beulich wrote:
>> On 25.11.2020 20:48, Stefano Stabellini wrote:
>>> On Wed, 25 Nov 2020, Jan Beulich wrote:
>>>> On 25.11.2020 13:15, Julien Grall wrote:
>>>>> On 23/11/2020 14:23, Jan Beulich wrote:
>>>>>> I'm unconvinced of the mentioning of "physically contiguous" in the
>>>>>> comment at the top of the new header: I don't think xmalloc() provides
>>>>>> such a guarantee. Any use assuming so would look (latently) broken to
>>>>>> me.
>>>>>
>>>>> I haven't had the chance to reply to the first version about this. So I
>>>>> will reply here to avoid confusion.
>>>>>
>>>>> I can at least spot one user in Arm that would use xmalloc() that way
>>>>> (see the allocation of itt_addr in arch/arm/gic-v3-its.c).
>>>>
>>>> And I surely wouldn't have spotted this, even if I had tried
>>>> to find "offenders", i.e. as said before not wanting to alter
>>>> the behavior of existing code (beyond the explicit changes
>>>> done here) was ...
>>>>
>>>>> AFAIK, the memory is for the sole purpose of the ITS and should not be
>>>>> accessed by Xen. So I think we can replace by a new version of
>>>>> alloc_domheap_pages().
>>>>>
>>>>> However, I still question the usefulness of introducing yet another way
>>>>> to allocate memory (we already have alloc_xenheap_pages(), xmalloc(),
>>>>> alloc_domheap_pages(), vmap()) if you think users cannot rely on
>>>>> xmalloc() to allocate memory physically contiguous.
>>>>
>>>> ... the reason to introduce a separate new interface. Plus of
>>>> course this parallels what Linux has.
>>>>
>>>>> It definitely makes more difficult to figure out when to use xmalloc()
>>>>> vs xvalloc().
>>>>
>>>> I don't see the difficulty:
>>>> - if you need physically contiguous memory, use alloc_xen*_pages(),
>>>> - if you know the allocation size is always no more than a page,
>>>>    use xmalloc(),
> 
> If that's the intention, then may I ask why xmalloc() is able to 
> support multi-page allocations?

Because support for this pre-dates even the introduction of vmalloc()?

> Your assumption is that Xen will always be built with the same page size 
> across all architectures. While Xen only works with 4KB pages today, 
> Arm can support 16KB and 64KB. I have long-term plans to add support for it.
> 
> So I don't think you can use the page size as a way to distinguish which 
> one to use.

Then let's abstract this one level further:

- if you know the allocation size is always no more than the smallest
  possible page size, use xmalloc()

>>> What if you need memory physically contiguous but not necessarily an
>>> order of pages, such as for instance 5200 bytes?
>>
>> This case is, I think, rare enough (in particular in Xen) that the
>> waste of space can be tolerated imo.
> 
> This is quite a departure from:
> 
> commit b829a0ff5794ee5b0f96a0c872f6a4ed7b1007c7
> Author: Jan Beulich <jbeulich@suse.com>
> Date:   Thu Oct 13 10:03:43 2011 +0200
> 
>      xmalloc: return unused full pages on multi-page allocations
> 
>      Certain (boot time) allocations are relatively large (particularly when
>      building with high NR_CPUS), but can also happen to be pretty far away
>      from a power-of-two size. Utilize the page allocator's (other than
>      Linux'es) capability of allowing to return space from higher-order
>      allocations in smaller pieces to return the unused parts immediately.
> 
>      Signed-off-by: Jan Beulich <jbeulich@suse.com>
>      Acked-by: Keir Fraser <keir@xen.org>
> 
> I am curious to know what changed...

Nothing. But even if something had, citing a 9 year old commit is
not likely to point out any actual contradiction.

> Anyway, what you wrote is very server focused. On Arm, we have plans to 
> run Xen on smaller hardware, where wasting memory means less usable RAM 
> for guests.
> 
> The problem with using an order is that the bigger the order, the more 
> chance there is of wasting space...
> 
> Allocating more than a page is fairly common on Arm, so we really want 
> to reduce the amount of memory wasted.

The amount of space wasted is the same - the tail of the trailing
page. I'm afraid I don't see what your point is.

>>> If xmalloc can't do physically contiguous allocations, we need something
>>> else that does physically contiguous allocations not only at page
>>> granularity, right?
>>
>> Well, we first need to settle on what guarantees xmalloc() is meant
>> to provide. It may be just me assuming it doesn't provide the same
>> ones which Linux'es kmalloc() makes. I'm first and foremost
>> judging by the comment near the top of xmalloc.h, which compares
>> with malloc() / free(), not kmalloc() / kfree().
>>
>>> The other issue is semantics. If xmalloc is unable to allocate more than
>>> a page of contiguous memory, then it is identical to vmalloc from the
>>> caller's point of view: both xmalloc and vmalloc return a virtual
>>> address for an allocation that might not be physically contiguous.
>>
>> Almost. vmalloc() puts guard pages around the allocation and
>> guarantees page alignment.
>>
>>> Maybe we should get rid of xmalloc entirely and improve the
>>> implementation of vmalloc so that it falls back to xmalloc for
>>> sub-page allocations. Which in fact is almost the same thing that you
>>> did.
>>
>> This would break callers assuming page alignment (and - shouldn't
>> be an issue in practice - granularity). If anything, as Julien
>> did suggest, we could modify xmalloc() accordingly, but then of
>> course making sure we also honor alignment requests beyond page
>> size.
>>
>> Neither of these is the goal here, hence this "intermediate"
>> implementation, which is only almost "redundant".
>>
>>>> - if you know the allocation size is always more than a page, use
>>>>    vmalloc(),
>>>> - otherwise use xvmalloc(). Exceptions may of course apply, i.e.
>>>> this is just a rule of thumb.
>>>>
>>>>> I would like to hear an opinion from the other maintainers.
>>>>
>>>> Let's hope at least one will voice theirs.
>>>
>>> If we take a step back, I think we only really need two memory
>>> allocators:
>>>
>>> 1) one that allocates physically contiguous memory
>>> 2) one that allocates non-physically contiguous memory
>>>
>>> That's it, right?
>>>
>>> In addition to that, I understand it could be convenient to have a little
>>> wrapper that automatically chooses between 1) and 2) depending on
>>> circumstances.
>>>
>>> But if the circumstance is just size < PAGE_SIZE then I don't think we
>>> need any convenience wrappers: we should just be able to call 2), which
>>> is vmalloc, once we improve the vmalloc implementation.
>>>
>>> Or do you see any reasons to keep the current vmalloc implementation as
>>> is for sub-page allocations?
>>
>> See my "Almost. ..." above.
>>
>> As an aside, I also find it quite puzzling that in one of the rare
>> cases where I propose to clone an interface from Linux without much
>> deviation from their model, I get objections. It typically was the
>> other way around in the past ...
> 
> If we were really following Linux, then we would have two interfaces:
>     - xmalloc() which is the same as kmalloc()
>     - xvalloc() which is the same a kvalloc()

(correction: xvmalloc() and kvmalloc())

- vmalloc() (named identically in Linux and Xen)

IOW the same set of _three_ interface groups.

> However, you seem to be the one objecting to the behavior of xmalloc().

I don't think I'm objecting to any existing behavior. What I did
is state my view on (non-)guarantees by xmalloc(). And I've
already said - maybe I'm wrong and, like Linux'es kmalloc(),
there is a guarantee of it producing contiguous memory, and I
merely didn't find where that's said.

> I can't speak for Stefano, but I don't object to following Linux. 
> Instead I am objecting to the growing number of ways to allocate memory 
> in Xen, which differ depending on the system_state.

But as per above the addition only brings us on par with Linux.
There, kvmalloc_node() is simply a wrapper (with different logic
when to try what) around kmalloc_node() and __vmalloc_node(). No
different (in the basic idea) from what I'm doing here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:28:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:28:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38644.71404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJC7-0001KM-0V; Thu, 26 Nov 2020 15:28:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38644.71404; Thu, 26 Nov 2020 15:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJC6-0001KF-Sv; Thu, 26 Nov 2020 15:28:26 +0000
Received: by outflank-mailman (input) for mailman id 38644;
 Thu, 26 Nov 2020 15:28:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LKT8=FA=gmail.com=geert.uytterhoeven@srs-us1.protection.inumbo.net>)
 id 1kiJC6-0001KA-1K
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:28:26 +0000
Received: from mail-ot1-f65.google.com (unknown [209.85.210.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab7e3f94-68da-488e-bb6c-4353ec84fe44;
 Thu, 26 Nov 2020 15:28:24 +0000 (UTC)
Received: by mail-ot1-f65.google.com with SMTP id 79so2156912otc.7
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 07:28:24 -0800 (PST)
X-Inumbo-ID: ab7e3f94-68da-488e-bb6c-4353ec84fe44
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=kP9MspVOPl/NnVl8oGn1EIC/+F8CcK5+OXo+jY56Nno=;
        b=P8jJ8LPDzZu00Cs7zmm6NkeXye2k+pTbCKake39qiyx5HBWziWWAQCFThcE2XZcbwp
         IDr4qif/UHnpNZ4dTGWWsM4Gw2wDGnxUat1hmJKidY5UE0bqnuX83hVHAThcHLnK99B/
         746noz0PTfJINeUGeodUWBZ+L7B1T/wasn8PNANwBe02V9zCmCUFsPHZ9QEAq7SITelw
         NPV/xINz0ImsZsAfh2HgXregN+DAmi8Ai0nTtKqh/zaqVN3Pwy2KiluPPpwnQW4n4IzY
         snBF1+1bBUiTCB62CxaWfQc+ngrxSzggRdbxie6SFTeN0S0JMhDfNOpRS7S76yQ8UBFG
         +WUA==
X-Gm-Message-State: AOAM530HbTgkWN1TA9Ob5AqKhGHZxvREDGMQ9hiCcD4fkHqyDYQMpsO3
	nwek6XYijubVL807tX8znt6yHgekl4gdpqF82yY=
X-Google-Smtp-Source: ABdhPJyYtol6dSfaI6WhgTcuunq7fhBuSULViECyA4Z+K27bCejCuaia55DZ/aziu9dD69JTQZlYwW/4z6Mu7Di+rU4=
X-Received: by 2002:a05:6830:210a:: with SMTP id i10mr2551843otc.145.1606404504116;
 Thu, 26 Nov 2020 07:28:24 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
 <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
 <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com>
 <44005bde-f6d4-5eaa-39b8-1a5efeedb2d3@gmail.com> <CANiq72nobq=ptWK-qWxU91JHqkKhMcRtJNnw2XJd5-vSJWZd8Q@mail.gmail.com>
In-Reply-To: <CANiq72nobq=ptWK-qWxU91JHqkKhMcRtJNnw2XJd5-vSJWZd8Q@mail.gmail.com>
From: Geert Uytterhoeven <geert@linux-m68k.org>
Date: Thu, 26 Nov 2020 16:28:12 +0100
Message-ID: <CAMuHMdV5kOakvZJMWLxbpigFPS+Xuw6DVYsWCWZy7wGsv3idcw@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Cc: Edward Cree <ecree.xilinx@gmail.com>, 
	ALSA Development Mailing List <alsa-devel@alsa-project.org>, linux-atm-general@lists.sourceforge.net, 
	reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Linux Fbdev development list <linux-fbdev@vger.kernel.org>, dri-devel <dri-devel@lists.freedesktop.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, linux-ide@vger.kernel.org, 
	dm-devel@redhat.com, keyrings@vger.kernel.org, 
	MTD Maling List <linux-mtd@lists.infradead.org>, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	Lars Ellenberg <drbd-dev@lists.linbit.com>, driverdevel <devel@driverdev.osuosl.org>, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	Nick Desaulniers <ndesaulniers@google.com>, scsi <linux-scsi@vger.kernel.org>, 
	Nathan Chancellor <natechancellor@gmail.com>, linux-rdma <linux-rdma@vger.kernel.org>, 
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
	linux-security-module <linux-security-module@vger.kernel.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, ACPI Devel Maling List <linux-acpi@vger.kernel.org>, 
	coreteam@netfilter.org, intel-wired-lan@lists.osuosl.org, 
	linux-input <linux-input@vger.kernel.org>, Miguel Ojeda <ojeda@kernel.org>, 
	Jakub Kicinski <kuba@kernel.org>, Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, Kees Cook <keescook@chromium.org>, 
	selinux@vger.kernel.org, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>, linux-geode@lists.infradead.org, 
	linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, op-tee@lists.trustedfirmware.org, 
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, linux-hams@vger.kernel.org, 
	ceph-devel <ceph-devel@vger.kernel.org>, virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	Linux Watchdog Mailing List <linux-watchdog@vger.kernel.org>, 
	"open list:NFS, SUNRPC, AND..." <linux-nfs@vger.kernel.org>, GR-Linux-NIC-Dev@marvell.com, 
	tipc-discussion@lists.sourceforge.net, Linux-MM <linux-mm@kvack.org>, 
	Network Development <netdev@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Linux MMC List <linux-mmc@vger.kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, Linux-Renesas <linux-renesas-soc@vger.kernel.org>, 
	linux-sctp@vger.kernel.org, USB list <linux-usb@vger.kernel.org>, 
	NetFilter <netfilter-devel@vger.kernel.org>, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	Joe Perches <joe@perches.com>, linux-integrity <linux-integrity@vger.kernel.org>, 
	target-devel <target-devel@vger.kernel.org>, linux-hardening@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Hi Miguel,

On Thu, Nov 26, 2020 at 3:54 PM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
> On Wed, Nov 25, 2020 at 11:44 PM Edward Cree <ecree.xilinx@gmail.com> wrote:
> > To make the intent clear, you have to first be certain that you
> >  understand the intent; otherwise by adding either a break or a
> >  fallthrough to suppress the warning you are just destroying the
> >  information that "the intent of this code is unknown".
>
> If you don't know what the intent of your own code is, then you
> *already* have a problem in your hands.

The maintainer is not necessarily the owner/author of the code, and
thus may not know the intent of the code.

> > or does it flag up code
> >  that can be mindlessly "fixed" (in which case the warning is
> >  worthless)?  Proponents in this thread seem to be trying to
> >  have it both ways.
>
> A warning is not worthless just because you can mindlessly fix it.
> There are many counterexamples, e.g. many
> checkpatch/lint/lang-format/indentation warnings, functional ones like
> the `if (a = b)` warning...

BTW, you cannot mindlessly fix the latter, as you cannot know if
"(a == b)" or "((a = b))" was intended, without understanding the code
(and the (possibly unavailable) data sheet, and the hardware, ...).

P.S. So far I've stayed out of this thread, as I like it if the compiler
     flags possible mistakes.  After all I was the one fixing new
     "may be used uninitialized" warnings thrown up by gcc-4.1, until
     (a bit later than) support for that compiler was removed...

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:51:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38682.71416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJXw-0003ws-VZ; Thu, 26 Nov 2020 15:51:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38682.71416; Thu, 26 Nov 2020 15:51:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJXw-0003wl-SM; Thu, 26 Nov 2020 15:51:00 +0000
Received: by outflank-mailman (input) for mailman id 38682;
 Thu, 26 Nov 2020 15:50:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiJXv-0003wd-7z; Thu, 26 Nov 2020 15:50:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiJXv-0005Cq-1K; Thu, 26 Nov 2020 15:50:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiJXu-0000TR-Nc; Thu, 26 Nov 2020 15:50:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiJXu-0008J2-N9; Thu, 26 Nov 2020 15:50:58 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aQ3TWvoEeOosM8YY19TrsJU82kL7j1zjxx2vOpc/V2w=; b=2NPz71Y5wsEQ+A4xb43njpSuu2
	+a05Db76TAngY1cY3zNRu5TV4EQAtfPYBHUZ/o13g1OhI17AgyOU+233B3BDA1E/lFRqfO/KzFuBC
	8hKN3pN9+KcT1w/L7hkdxmWGTwDZoJFaf9w5gLgvruLpHyrtx/ib+swxtSPGBcGXBTOA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157020-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157020: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=dd3d2340c4076d1735cd0f7cb61f4d8622b9562d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 15:50:58 +0000

flight 157020 qemu-mainline real [real]
flight 157034 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157020/
http://logs.test-lab.xenproject.org/osstest/logs/157034/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                dd3d2340c4076d1735cd0f7cb61f4d8622b9562d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   98 days
Failing since        152659  2020-08-21 14:07:39 Z   97 days  204 attempts
Testing same since   157020  2020-11-26 03:28:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69155 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:51:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:51:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38688.71430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJYa-000439-9o; Thu, 26 Nov 2020 15:51:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38688.71430; Thu, 26 Nov 2020 15:51:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJYa-000432-6E; Thu, 26 Nov 2020 15:51:40 +0000
Received: by outflank-mailman (input) for mailman id 38688;
 Thu, 26 Nov 2020 15:51:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJYY-00042t-UE
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:51:38 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0b911d9d-5d4c-42a8-ae0d-e8d4e7cf0ee4;
 Thu, 26 Nov 2020 15:51:37 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 51F6531B;
 Thu, 26 Nov 2020 07:51:37 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A13EC3F23F;
 Thu, 26 Nov 2020 07:51:36 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kiJYY-00042t-UE
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:51:38 +0000
X-Inumbo-ID: 0b911d9d-5d4c-42a8-ae0d-e8d4e7cf0ee4
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 0b911d9d-5d4c-42a8-ae0d-e8d4e7cf0ee4;
	Thu, 26 Nov 2020 15:51:37 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 51F6531B;
	Thu, 26 Nov 2020 07:51:37 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.199.1])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A13EC3F23F;
	Thu, 26 Nov 2020 07:51:36 -0800 (PST)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/7] xen/arm: Emulate ID registers
Date: Thu, 26 Nov 2020 15:51:02 +0000
Message-Id: <cover.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

The goal of this series is to emulate coprocessor ID registers so that
Xen only publishes to guests the features that are supported by Xen and
can actually be used by guests.
One practical example where this is required is SVE support, which is
forbidden by Xen as it is not supported: if Linux is compiled with it,
it will crash on boot. Another one is AMU, which is also forbidden by
Xen, but a Linux kernel compiled with it would crash if the platform
supports it.

To be able to emulate the coprocessor registers defining what features
are supported by the hardware, the TID3 bit of HCR must be set and
Xen must emulate the values of those registers when an exception is
caught while a guest is accessing them.

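The trap-and-emulate flow described above can be sketched roughly as
follows. This is an illustrative, self-contained sketch only: the names
(guest_cpuinfo, handle_id_reg_read, the register enum and the published
values) are hypothetical stand-ins, not the actual Xen identifiers,
which live in vsysreg.c/vcpreg.c in the patches below.

```c
#include <assert.h>
#include <stdint.h>

/* ID registers a guest read might trap on (hypothetical subset). */
enum sysreg {
    ID_AA64PFR0_EL1,
    ID_AA64ISAR0_EL1,
};

/*
 * Values Xen chooses to publish to guests. Fields for features Xen
 * cannot support (e.g. SVE, AMU) are masked to zero here, so a guest
 * never sees them even if the hardware has them.
 */
struct guest_cpuinfo {
    uint64_t id_aa64pfr0;
    uint64_t id_aa64isar0;
};

static const struct guest_cpuinfo guest_cpuinfo = {
    .id_aa64pfr0  = 0x0000000000000011ULL, /* SVE/AMU fields forced to 0 */
    .id_aa64isar0 = 0x0000000000000020ULL,
};

/*
 * With HCR_EL2.TID3 set, a guest read of a trapped ID register raises
 * an exception into the hypervisor, which returns the sanitised value
 * from the guest cpuinfo instead of the raw hardware value.
 */
uint64_t handle_id_reg_read(enum sysreg reg)
{
    switch (reg) {
    case ID_AA64PFR0_EL1:
        return guest_cpuinfo.id_aa64pfr0;
    case ID_AA64ISAR0_EL1:
        return guest_cpuinfo.id_aa64isar0;
    }
    return 0; /* unrecognised ID registers read as zero */
}
```

The key design point, mirrored in the series, is that the published
values are kept in one structure so a future patchset can vary them
per guest.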
This series first creates a guest cpuinfo structure, which contains the
values that we want to publish to the guests, and then provides the
proper emulation for those registers when Xen gets an exception due to
an access to any of those registers.

This is a first, simple implementation to solve the problem. The way to
define the values that we provide to guests, and which features are
disabled, will be enhanced in a future patchset, so that we can decide
per guest what can be used or not and, depending on this, deduce the
bits to activate in HCR and the values that we must publish in the ID
registers.

Bertrand Marquis (7):
  xen/arm: Add ID registers and complete cpuinfo
  xen/arm: Add arm64 ID registers definitions
  xen/arm: create a cpuinfo structure for guest
  xen/arm: Add handler for ID registers on arm64
  xen/arm: Add handler for cp15 ID registers
  xen/arm: Add CP10 exception support to handle VMFR
  xen/arm: Activate TID3 in HCR_EL2

 xen/arch/arm/arm64/vsysreg.c        | 49 +++++++++++++++++++
 xen/arch/arm/cpufeature.c           | 67 ++++++++++++++++++++++++++
 xen/arch/arm/traps.c                |  7 ++-
 xen/arch/arm/vcpreg.c               | 73 +++++++++++++++++++++++++++++
 xen/include/asm-arm/arm64/hsr.h     | 37 +++++++++++++++
 xen/include/asm-arm/arm64/sysregs.h | 22 +++++++++
 xen/include/asm-arm/cpregs.h        | 10 ++++
 xen/include/asm-arm/cpufeature.h    | 63 +++++++++++++++++++++----
 xen/include/asm-arm/perfc_defn.h    |  1 +
 xen/include/asm-arm/traps.h         |  1 +
 10 files changed, 321 insertions(+), 9 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38695.71443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJZw-0004Cw-PY; Thu, 26 Nov 2020 15:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38695.71443; Thu, 26 Nov 2020 15:53:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJZw-0004Cp-Lg; Thu, 26 Nov 2020 15:53:04 +0000
Received: by outflank-mailman (input) for mailman id 38695;
 Thu, 26 Nov 2020 15:53:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJZv-0004Ch-LO
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:03 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id d1e24fff-4259-4dc0-a40e-aafde6250990;
 Thu, 26 Nov 2020 15:53:02 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 923E131B;
 Thu, 26 Nov 2020 07:53:02 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C7A693F23F;
 Thu, 26 Nov 2020 07:53:01 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kiJZv-0004Ch-LO
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:03 +0000
X-Inumbo-ID: d1e24fff-4259-4dc0-a40e-aafde6250990
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id d1e24fff-4259-4dc0-a40e-aafde6250990;
	Thu, 26 Nov 2020 15:53:02 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 923E131B;
	Thu, 26 Nov 2020 07:53:02 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.199.1])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C7A693F23F;
	Thu, 26 Nov 2020 07:53:01 -0800 (PST)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/7] xen/arm: Add ID registers and complete cpuinfo
Date: Thu, 26 Nov 2020 15:51:03 +0000
Message-Id: <181f3157e2895498b09790d02555e5530adec6ad.1606317985.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Add definitions and cpuinfo entries for the ID registers introduced in
newer versions of the Arm Architecture Reference Manual:
- ID_PFR2: Processor Feature Register 2
- ID_DFR1: Debug Feature Register 1
- ID_MMFR4 and ID_MMFR5: Memory Model Feature Registers 4 and 5
- ID_ISAR6: ISA Feature Register 6
Add more bitfield definitions to the PFR fields of cpuinfo.
Add the MVFR2 register definition for aarch32.
Add mvfr values to cpuinfo.
Add some arm64 register definitions to sysregs.h, as some of these
registers are not always known by compilers.
Initialize the new cpuinfo values in identify_cpu during init.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/cpufeature.c           | 16 ++++++++
 xen/include/asm-arm/arm64/sysregs.h | 22 +++++++++++
 xen/include/asm-arm/cpregs.h        | 10 +++++
 xen/include/asm-arm/cpufeature.h    | 61 +++++++++++++++++++++++++----
 4 files changed, 101 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 44126dbf07..c0adf848c5 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -114,13 +114,17 @@ void identify_cpu(struct cpuinfo_arm *c)
 
         c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
         c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
+        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
 
         c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
         c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
+
+        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
 #endif
 
         c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
         c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
+        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
 
         c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
 
@@ -130,6 +134,8 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
         c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
         c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
+        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
+        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
 
         c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
         c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
@@ -137,6 +143,16 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
         c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
         c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
+        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
+
+#ifdef CONFIG_ARM_64
+        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
+        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
+        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
+#else
+        c->mvfr.bits[0] = READ_CP32(MVFR0);
+        c->mvfr.bits[1] = READ_CP32(MVFR1);
+#endif
 }
 
 /*
diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
index c60029d38f..fd0b25b9c7 100644
--- a/xen/include/asm-arm/arm64/sysregs.h
+++ b/xen/include/asm-arm/arm64/sysregs.h
@@ -57,6 +57,28 @@
 #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
 #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
 
+/*
+ * Define ID system registers if they are not
+ * already defined by the compiler.
+ *
+ * Values picked from the Linux kernel.
+ */
+#ifndef ID_AA64MMFR2_EL1
+#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
+#endif
+#ifndef ID_PFR2_EL1
+#define ID_PFR2_EL1                 S3_0_C0_C3_4
+#endif
+#ifndef ID_MMFR5_EL1
+#define ID_MMFR5_EL1                S3_0_C0_C3_6
+#endif
+#ifndef ID_ISAR6_EL1
+#define ID_ISAR6_EL1                S3_0_C0_C2_7
+#endif
+#ifndef ID_AA64ZFR0_EL1
+#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
+#endif
+
 /* Access to system registers */
 
 #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 8fd344146e..aa8fc1d4a3 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -63,6 +63,7 @@
 #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
 #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
 #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
+#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
 #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
 #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
 #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
@@ -108,18 +109,23 @@
 #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
 #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
 #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
+#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
 #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
+#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
 #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
 #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
 #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
 #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
 #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
+#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
+#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
 #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
 #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
 #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
 #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
 #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
 #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
+#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
 #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
 #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
 #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
@@ -318,12 +324,16 @@
 #define ID_ISAR3_EL1            ID_ISAR3
 #define ID_ISAR4_EL1            ID_ISAR4
 #define ID_ISAR5_EL1            ID_ISAR5
+#define ID_ISAR6_EL1            ID_ISAR6
 #define ID_MMFR0_EL1            ID_MMFR0
 #define ID_MMFR1_EL1            ID_MMFR1
 #define ID_MMFR2_EL1            ID_MMFR2
 #define ID_MMFR3_EL1            ID_MMFR3
+#define ID_MMFR4_EL1            ID_MMFR4
+#define ID_MMFR5_EL1            ID_MMFR5
 #define ID_PFR0_EL1             ID_PFR0
 #define ID_PFR1_EL1             ID_PFR1
+#define ID_PFR2_EL1             ID_PFR2
 #define IFSR32_EL2              IFSR
 #define MDCR_EL2                HDCR
 #define MIDR_EL1                MIDR
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index c7b5052992..1862c14e37 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -148,6 +148,7 @@ struct cpuinfo_arm {
     union {
         uint64_t bits[2];
         struct {
+            /* PFR0 */
             unsigned long el0:4;
             unsigned long el1:4;
             unsigned long el2:4;
@@ -155,9 +156,23 @@ struct cpuinfo_arm {
             unsigned long fp:4;   /* Floating Point */
             unsigned long simd:4; /* Advanced SIMD */
             unsigned long gic:4;  /* GIC support */
-            unsigned long __res0:28;
+            unsigned long ras:4;
+            unsigned long sve:4;
+            unsigned long sel2:4;
+            unsigned long mpam:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long __res0:4;
             unsigned long csv2:4;
-            unsigned long __res1:4;
+            unsigned long csv3:4;
+
+            /* PFR1 */
+            unsigned long bt:4;
+            unsigned long ssbs:4;
+            unsigned long mte:4;
+            unsigned long ras_frac:4;
+            unsigned long mpam_frac:4;
+            unsigned long __res1:44;
         };
     } pfr64;
 
@@ -170,7 +185,7 @@ struct cpuinfo_arm {
     } aux64;
 
     union {
-        uint64_t bits[2];
+        uint64_t bits[3];
         struct {
             unsigned long pa_range:4;
             unsigned long asid_bits:4;
@@ -190,6 +205,8 @@ struct cpuinfo_arm {
             unsigned long pan:4;
             unsigned long __res1:8;
             unsigned long __res2:32;
+
+            unsigned long __res3:64;
         };
     } mm64;
 
@@ -197,6 +214,10 @@ struct cpuinfo_arm {
         uint64_t bits[2];
     } isa64;
 
+    struct {
+        uint64_t bits[1];
+    } zfr64;
+
 #endif
 
     /*
@@ -204,20 +225,33 @@ struct cpuinfo_arm {
      * when running in 32-bit mode.
      */
     union {
-        uint32_t bits[2];
+        uint32_t bits[3];
         struct {
+            /* PFR0 */
             unsigned long arm:4;
             unsigned long thumb:4;
             unsigned long jazelle:4;
             unsigned long thumbee:4;
-            unsigned long __res0:16;
+            unsigned long csv2:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long ras:4;
 
+            /* PFR1 */
             unsigned long progmodel:4;
             unsigned long security:4;
             unsigned long mprofile:4;
             unsigned long virt:4;
             unsigned long gentimer:4;
-            unsigned long __res1:12;
+            unsigned long sec_frac:4;
+            unsigned long virt_frac:4;
+            unsigned long gic:4;
+
+            /* PFR2 */
+            unsigned long csv3:4;
+            unsigned long ssbs:4;
+            unsigned long ras_frac:4;
+            unsigned long __res2:20;
         };
     } pfr32;
 
@@ -230,12 +264,23 @@ struct cpuinfo_arm {
     } aux32;
 
     struct {
-        uint32_t bits[4];
+        uint32_t bits[6];
     } mm32;
 
     struct {
-        uint32_t bits[6];
+        uint32_t bits[7];
     } isa32;
+
+#ifdef CONFIG_ARM_64
+    struct {
+        uint64_t bits[3];
+    } mvfr;
+#else
+    /* Only MVFR0 and MVFR1 exist on armv7 */
+    struct {
+        uint32_t bits[2];
+    } mvfr;
+#endif
 };
 
 extern struct cpuinfo_arm boot_cpu_data;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38697.71455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJZy-0004EL-20; Thu, 26 Nov 2020 15:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38697.71455; Thu, 26 Nov 2020 15:53:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJZx-0004EC-Te; Thu, 26 Nov 2020 15:53:05 +0000
Received: by outflank-mailman (input) for mailman id 38697;
 Thu, 26 Nov 2020 15:53:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJZx-0004Dd-9k
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:05 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 902b8efa-1484-4e62-a926-245695ab5dfb;
 Thu, 26 Nov 2020 15:53:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5777631B;
 Thu, 26 Nov 2020 07:53:04 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A85963F23F;
 Thu, 26 Nov 2020 07:53:03 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kiJZx-0004Dd-9k
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:05 +0000
X-Inumbo-ID: 902b8efa-1484-4e62-a926-245695ab5dfb
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 902b8efa-1484-4e62-a926-245695ab5dfb;
	Thu, 26 Nov 2020 15:53:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5777631B;
	Thu, 26 Nov 2020 07:53:04 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.199.1])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A85963F23F;
	Thu, 26 Nov 2020 07:53:03 -0800 (PST)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 3/7] xen/arm: create a cpuinfo structure for guest
Date: Thu, 26 Nov 2020 15:51:05 +0000
Message-Id: <538c6319adadd9944187fcb9df52f66af079cf13.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Create a cpuinfo structure for guests and mask out of it the features
that Xen does not support or that we do not want to publish to guests.

Modify some values in the guest cpuinfo structure to hide features we do
not want to expose to guests (like AMU) or do not support (like SVE).

The code groups register modifications for the same feature together, so
that in the long term a feature can easily be enabled or disabled
depending on user parameters, and so that other related register
modifications (like setting or clearing HCR bits) can be added in the
same place.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpufeature.h |  2 ++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 835c48ca28..4cc4ebc78a 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -24,6 +24,8 @@
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
 
+struct cpuinfo_arm __read_mostly guest_cpuinfo;
+
 void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
                              const char *info)
 {
@@ -155,6 +157,55 @@ void identify_cpu(struct cpuinfo_arm *c)
 #endif
 }
 
+/*
+ * This function creates a cpuinfo structure with values modified to mask
+ * all CPU features that should not be published to guests.
+ * The resulting structure is then used to provide ID register values to guests.
+ */
+static int __init create_guest_cpuinfo(void)
+{
+    /*
+     * TODO: The code is currently using only the features detected on the boot
+     * core. In the long term we should try to compute values containing only
+     * features supported by all cores.
+     */
+    identify_cpu(&guest_cpuinfo);
+
+#ifdef CONFIG_ARM_64
+    /* Disable MPAM as Xen does not support it */
+    guest_cpuinfo.pfr64.mpam = 0;
+    guest_cpuinfo.pfr64.mpam_frac = 0;
+
+    /* Disable SVE as Xen does not support it */
+    guest_cpuinfo.pfr64.sve = 0;
+    guest_cpuinfo.zfr64.bits[0] = 0;
+
+    /* Disable MTE as Xen does not support it */
+    guest_cpuinfo.pfr64.mte = 0;
+#endif
+
+    /* Disable AMU */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.amu = 0;
+#endif
+    guest_cpuinfo.pfr32.amu = 0;
+
+    /* Disable RAS as Xen does not support it */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.ras = 0;
+    guest_cpuinfo.pfr64.ras_frac = 0;
+#endif
+    guest_cpuinfo.pfr32.ras = 0;
+    guest_cpuinfo.pfr32.ras_frac = 0;
+
+    return 0;
+}
+/*
+ * This function needs to be run after all secondary CPUs are brought up,
+ * so that cpuinfo structures exist for all cores.
+ */
+__initcall(create_guest_cpuinfo);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 1862c14e37..f03aca3d90 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -290,6 +290,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
 extern struct cpuinfo_arm cpu_data[];
 #define current_cpu_data cpu_data[smp_processor_id()]
 
+extern struct cpuinfo_arm guest_cpuinfo;
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38698.71467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJa2-0004Ia-CC; Thu, 26 Nov 2020 15:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38698.71467; Thu, 26 Nov 2020 15:53:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJa2-0004IS-7A; Thu, 26 Nov 2020 15:53:10 +0000
Received: by outflank-mailman (input) for mailman id 38698;
 Thu, 26 Nov 2020 15:53:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJa0-0004Ch-Ha
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:08 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 82f708c1-3d7d-41ff-933d-f670bfb3023d;
 Thu, 26 Nov 2020 15:53:03 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 74EAD1516;
 Thu, 26 Nov 2020 07:53:03 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C5C943F23F;
 Thu, 26 Nov 2020 07:53:02 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kiJa0-0004Ch-Ha
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:08 +0000
X-Inumbo-ID: 82f708c1-3d7d-41ff-933d-f670bfb3023d
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 82f708c1-3d7d-41ff-933d-f670bfb3023d;
	Thu, 26 Nov 2020 15:53:03 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 74EAD1516;
	Thu, 26 Nov 2020 07:53:03 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.199.1])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C5C943F23F;
	Thu, 26 Nov 2020 07:53:02 -0800 (PST)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/7] xen/arm: Add arm64 ID registers definitions
Date: Thu, 26 Nov 2020 15:51:04 +0000
Message-Id: <5609f33f446fa08f83109a2f6f96b53cff672d4b.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Add system register definitions for all ID registers trapped when the
TID3 bit of HCR_EL2 is set.
These are the ones that will be emulated in Xen so that guests are only
shown the features that are supported by Xen and that are accessible to
guests.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/include/asm-arm/arm64/hsr.h | 37 +++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
index ca931dd2fe..e691d41c17 100644
--- a/xen/include/asm-arm/arm64/hsr.h
+++ b/xen/include/asm-arm/arm64/hsr.h
@@ -110,6 +110,43 @@
 #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
 #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
 
+/* These registers are trapped when HCR_EL2.TID3 is set */
+#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
+#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
+#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
+#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
+#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
+#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
+#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
+#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
+#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
+#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
+#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
+#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
+#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
+#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
+#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
+#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
+#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
+#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
+#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
+#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
+#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
+#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
+
+#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
+#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
+#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
+#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
+#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
+#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
+#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
+#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
+#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
+#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
+#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
+#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
+
 #endif /* __ASM_ARM_ARM64_HSR_H */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38699.71479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJa3-0004Lb-KE; Thu, 26 Nov 2020 15:53:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38699.71479; Thu, 26 Nov 2020 15:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJa3-0004LN-GT; Thu, 26 Nov 2020 15:53:11 +0000
Received: by outflank-mailman (input) for mailman id 38699;
 Thu, 26 Nov 2020 15:53:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJa2-0004Dd-8F
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:10 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c67c59f9-4127-43c7-83f7-6e995a80dadc;
 Thu, 26 Nov 2020 15:53:07 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F3B9531B;
 Thu, 26 Nov 2020 07:53:06 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4F7FA3F23F;
 Thu, 26 Nov 2020 07:53:06 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kiJa2-0004Dd-8F
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:10 +0000
X-Inumbo-ID: c67c59f9-4127-43c7-83f7-6e995a80dadc
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id c67c59f9-4127-43c7-83f7-6e995a80dadc;
	Thu, 26 Nov 2020 15:53:07 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F3B9531B;
	Thu, 26 Nov 2020 07:53:06 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.199.1])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4F7FA3F23F;
	Thu, 26 Nov 2020 07:53:06 -0800 (PST)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 6/7] xen/arm: Add CP10 exception support to handle MVFR
Date: Thu, 26 Nov 2020 15:51:08 +0000
Message-Id: <18f51a4276f50e270f80d1d7d13b156509233cac.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Add support for decoding cp10 exceptions in order to emulate the values
of MVFR0 and MVFR1 when the TID3 bit of HCR_EL2 is set.
This is required for aarch32 guests accessing MVFR0 and MVFR1 using the
vmrs and vmsr instructions.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/traps.c             |  5 +++++
 xen/arch/arm/vcpreg.c            | 38 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/perfc_defn.h |  1 +
 xen/include/asm-arm/traps.h      |  1 +
 4 files changed, 45 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd4c6..28d9d64558 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_cp14_dbg);
         do_cp14_dbg(regs, hsr);
         break;
+    case HSR_EC_CP10:
+        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
+        perfc_incr(trap_cp10);
+        do_cp10(regs, hsr);
+        break;
     case HSR_EC_CP:
         GUEST_BUG_ON(!psr_mode_is_32bit(regs));
         perfc_incr(trap_cp);
diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index d0c6406f34..9d6a36ca5d 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -634,6 +634,44 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
     inject_undef_exception(regs, hsr);
 }
 
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
+{
+    const struct hsr_cp32 cp32 = hsr.cp32;
+    int regidx = cp32.reg;
+
+    if ( !check_conditional_instr(regs, hsr) )
+    {
+        advance_pc(regs, hsr);
+        return;
+    }
+
+    switch ( hsr.bits & HSR_CP32_REGS_MASK )
+    {
+    /*
+     * HCR.TID3 traps accesses to the MVFR registers, which are used to
+     * identify the VFP/SIMD support, through the VMRS/VMSR instructions.
+     * MVFR2 is not supported here as the instruction does not provide
+     * access to it.
+     * The exception encoding follows the MRC/MCR standard, with the reg
+     * field in Crn, matching the MVFR0 and MVFR1 declarations in cpregs.h.
+     */
+    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
+
+    default:
+        gdprintk(XENLOG_ERR,
+                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
+                 cp32.read ? "mrc" : "mcr",
+                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
+        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
+                 hsr.bits & HSR_CP32_REGS_MASK);
+        inject_undef_exception(regs, hsr);
+        return;
+    }
+
+    advance_pc(regs, hsr);
+}
+
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp cp = hsr.cp;
diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
index 6a83185163..31f071222b 100644
--- a/xen/include/asm-arm/perfc_defn.h
+++ b/xen/include/asm-arm/perfc_defn.h
@@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
 PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
 PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
 PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
+PERFCOUNTER(trap_cp10,     "trap: cp10 access")
 PERFCOUNTER(trap_cp,       "trap: cp access")
 PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
 PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c37884e..c4a3d0fb1b 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
 
 /* SMCCC handling */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38700.71491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJa7-0004QC-0L; Thu, 26 Nov 2020 15:53:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38700.71491; Thu, 26 Nov 2020 15:53:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJa6-0004Q0-RF; Thu, 26 Nov 2020 15:53:14 +0000
Received: by outflank-mailman (input) for mailman id 38700;
 Thu, 26 Nov 2020 15:53:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJa5-0004Ch-Hu
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:13 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f0c60b78-2f02-4709-a42d-d384f03bdf94;
 Thu, 26 Nov 2020 15:53:05 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 39FEB1516;
 Thu, 26 Nov 2020 07:53:05 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8AF283F23F;
 Thu, 26 Nov 2020 07:53:04 -0800 (PST)
X-Inumbo-ID: f0c60b78-2f02-4709-a42d-d384f03bdf94
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 4/7] xen/arm: Add handler for ID registers on arm64
Date: Thu, 26 Nov 2020 15:51:06 +0000
Message-Id: <2db0532f50eafdce50b62b22b58ae04072a4a56a.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Add vsysreg emulation for the registers trapped when the TID3 bit is set
in HCR_EL2.
The emulation returns the value stored in the guest_cpuinfo structure
for most registers, and the hardware value for the registers not stored
in the structure (those are mostly registers existing only as a
provision for future features, with no definition right now).

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/arm64/vsysreg.c | 49 ++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 8a85507d9d..970ef51603 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
         break;                                                          \
     }
 
+/* Macro to easily generate cases for ID register emulation */
+#define GENERATE_TID3_INFO(reg,field,offset)                            \
+    case HSR_SYSREG_##reg:                                              \
+    {                                                                   \
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
+                          1, guest_cpuinfo.field.bits[offset]);         \
+    }
+
 void do_sysreg(struct cpu_user_regs *regs,
                const union hsr hsr)
 {
@@ -259,6 +267,47 @@ void do_sysreg(struct cpu_user_regs *regs,
          */
         return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most of the identification registers used by a guest
+     * to discover the processor features.
+     */
+    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
+    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
+    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
+    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
+    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
+    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
+    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
+    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
+    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
+    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
+    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
+    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38701.71503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJaC-0004Xl-Fo; Thu, 26 Nov 2020 15:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38701.71503; Thu, 26 Nov 2020 15:53:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJaC-0004Xc-BN; Thu, 26 Nov 2020 15:53:20 +0000
Received: by outflank-mailman (input) for mailman id 38701;
 Thu, 26 Nov 2020 15:53:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJaA-0004Ch-IS
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:18 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f5a14498-556f-4d9b-bcff-08e943744578;
 Thu, 26 Nov 2020 15:53:06 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1C7B331B;
 Thu, 26 Nov 2020 07:53:06 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6D6863F23F;
 Thu, 26 Nov 2020 07:53:05 -0800 (PST)
X-Inumbo-ID: f5a14498-556f-4d9b-bcff-08e943744578
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 5/7] xen/arm: Add handler for cp15 ID registers
Date: Thu, 26 Nov 2020 15:51:07 +0000
Message-Id: <d1b01a60e2e0042fe98719c2cd13b7d44e565dfb.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Add support for emulating the cp15-based ID registers (on arm32, or when
running a 32-bit guest on arm64).
The handlers return the values stored in the guest_cpuinfo structure.
At this stage the MVFR registers are not supported.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index cdc91cdf5b..d0c6406f34 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
         break;                                                      \
     }
 
+/* Macro to easily generate cases for ID co-processor register emulation */
+#define GENERATE_TID3_INFO(reg,field,offset)                        \
+    case HSR_CPREG32(reg):                                          \
+    {                                                               \
+        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
+                          1, guest_cpuinfo.field.bits[offset]);     \
+    }
+
 void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp32 cp32 = hsr.cp32;
@@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
          */
         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most of the identification registers used by a guest
+     * to discover the processor features.
+     */
+    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
+    /* MVFR registers are in cp10, not cp15 */
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38703.71514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJaD-0004ad-Pi; Thu, 26 Nov 2020 15:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38703.71514; Thu, 26 Nov 2020 15:53:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJaD-0004aT-Lx; Thu, 26 Nov 2020 15:53:21 +0000
Received: by outflank-mailman (input) for mailman id 38703;
 Thu, 26 Nov 2020 15:53:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiJaC-0004X9-0b
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiJaA-0005Gw-Fv; Thu, 26 Nov 2020 15:53:18 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiJaA-00052g-33; Thu, 26 Nov 2020 15:53:18 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=mTQJ9X7d/a3fsMSBvF8uWg17XNTL7706OSoeXAc8bJU=; b=WJg13pTZx2JjbhG4eJWZU7N9Ea
	x6LWhfOIKqbAzCQlXOsS6KV4wy4eASdsR+dcUHzG33G5CAwwb3fMLTgRsKCOO5RiRdNeP5Xcxjjej
	/p30R56yrNU5eL72B9edyATxQJjv1uVDhWKSxAjX798bg0K43FogI7oDL5kyl8E6YtEA=;
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
 <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
 <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
 <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
 <2aeba247-8b36-7b75-dc17-b901bf746f87@suse.com>
 <8e86bed4-b6fa-ed81-8ca8-41e727c56cb1@xen.org>
 <150fdd8b-f4c5-5ee8-f1e5-b3fafb4eb3ca@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <19142200-cddb-0d54-4c15-5eb7668481bc@xen.org>
Date: Thu, 26 Nov 2020 15:53:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <150fdd8b-f4c5-5ee8-f1e5-b3fafb4eb3ca@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 26/11/2020 15:18, Jan Beulich wrote:
> On 26.11.2020 14:22, Julien Grall wrote:
>> On 26/11/2020 11:34, Jan Beulich wrote:
>>> On 25.11.2020 20:48, Stefano Stabellini wrote:
>>>> On Wed, 25 Nov 2020, Jan Beulich wrote:
>>>>> On 25.11.2020 13:15, Julien Grall wrote:
>>>>>> On 23/11/2020 14:23, Jan Beulich wrote:
>>>>>>> I'm unconvinced of the mentioning of "physically contiguous" in the
>>>>>>> comment at the top of the new header: I don't think xmalloc() provides
>>>>>>> such a guarantee. Any use assuming so would look (latently) broken to
>>>>>>> me.
>>>>>>
>>>>>> I haven't had the chance to reply to the first version about this. So I
>>>>>> will reply here to avoid confusion.
>>>>>>
>>>>>> I can at least spot one user in Arm that would use xmalloc() that way
>>>>>> (see the allocation of itt_addr in arch/arm/gic-v3-its.c).
>>>>>
>>>>> And I surely wouldn't have spotted this, even if I had tried
>>>>> to find "offenders", i.e. as said before not wanting to alter
>>>>> the behavior of existing code (beyond the explicit changes
>>>>> done here) was ...
>>>>>
>>>>>> AFAIK, the memory is for the sole purpose of the ITS and should not be
>>>>>> accessed by Xen. So I think we can replace by a new version of
>>>>>> alloc_domheap_pages().
>>>>>>
>>>>>> However, I still question the usefulness of introducing yet another way
>>>>>> to allocate memory (we already have alloc_xenheap_pages(), xmalloc(),
>>>>>> alloc_domheap_pages(), vmap()) if you think users cannot rely on
>>>>>> xmalloc() to allocate memory physically contiguous.
>>>>>
>>>>> ... the reason to introduce a separate new interface. Plus of
>>>>> course this parallels what Linux has.
>>>>>
>>>>>> It definitely makes more difficult to figure out when to use xmalloc()
>>>>>> vs xvalloc().
>>>>>
>>>>> I don't see the difficulty:
>>>>> - if you need physically contiguous memory, use alloc_xen*_pages(),
>>>>> - if you know the allocation size is always no more than a page,
>>>>>     use xmalloc(),
>>
>> If that's then intention, then may I ask why xmalloc() is able to
>> support multiple pages allocation?
> 
> Because support for this pre-dates even the introduction of vmalloc()?

Right, so the code should disappear if we want people to avoid making 
that assumption with xmalloc().

> 
>> Your assumption is Xen will always be built with the same page size
>> across all the architecture. While Xen only works with 4KB pages today,
>> Arm can support 16KB and 64KB. I have long term plan to add support for it.
>>
>> So I don't think you can use the page size as a way to distinguish which
>> one to use.
> 
> Then let's abstract this one level further
> 
> - if you know the allocation size is always no more than the smallest
>    possible page size, use xmalloc()

So basically, xmalloc() is becoming pointless when xvmalloc() can do the 
same for you (as it would call xmalloc()).

> 
>>>> What if you need memory physically contiguous but not necessarily an
>>>> order of pages, such as for instance 5200 bytes?
>>>
>>> This case is, I think, rare enough (in particular in Xen) that the
>>> waste of space can be tolerated imo.
>>
>> This is quite departure from:
>>
>> commit b829a0ff5794ee5b0f96a0c872f6a4ed7b1007c7
>> Author: Jan Beulich <jbeulich@suse.com>
>> Date:   Thu Oct 13 10:03:43 2011 +0200
>>
>>       xmalloc: return unused full pages on multi-page allocations
>>
>>       Certain (boot time) allocations are relatively large (particularly when
>>       building with high NR_CPUS), but can also happen to be pretty far away
>>       from a power-of-two size. Utilize the page allocator's (other than
>>       Linux'es) capability of allowing to return space from higher-order
>>       allocations in smaller pieces to return the unused parts immediately.
>>
>>       Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>       Acked-by: Keir Fraser <keir@xen.org>
>>
>> I am curious to know what changed...
> 
> Nothing. But even if something had, citing a 9 year old commit is
> not likely to point out any actual contradiction.

You can't say it is tolerable now when in the past you suggested it was 
not (otherwise why would you have handed back memory?).

Therefore I would like to understand why in the past this was not 
tolerable but now it is... What changed?

> 
>> Anyway, what you wrote is very server focused. On Arm, we have plan to
>> run Xen on smaller hardware where wasting memory mean less usable RAM
>> for guests.
>>
>> The problem with using an order is that the bigger the order is, the more
>> chance you will waste space...
>>
>> Allocating more than a page is fairly common on Arm, so we really want
>> to reduce the amount of memory wasted.
> 
> The amount of space wasted is the same - the tail of the trailing
> page. I'm afraid I don't see what your point is.

There would obviously be no difference if one wants to allocate more 
than one page but less than 2 pages....

But that was not my point. My point is that when you allocate with an 
order greater than or equal to 2, you will start wasting memory when not 
using xmalloc().

For instance, if you want to allocate 20kB then you will need to 
allocate 32kB and lose 12kB. To make it sound bigger, you could replace 
kB with MB.

>>>> If xmalloc can't do physically contiguous allocations, we need something
>>>> else that does physically contiguous allocations not only at page
>>>> granularity, right?
>>>
>>> Well, we first need to settle on what guarantees xmalloc() is meant
>>> to provide. It may be just me assuming it doesn't provide the same
>>> ones which Linux'es kmalloc() makes. I'm first and foremost
>>> judging by the comment near the top of xmalloc.h, which compares
>>> with malloc() / free(), not kmalloc() / kfree().
>>>
>>>> The other issue is semantics. If xmalloc is unable to allocate more than
>>>> a page of contiguous memory, then it is identical to vmalloc from the
>>>> caller's point of view: both xmalloc and vmalloc return a virtual
>>>> address for an allocation that might not be physically contiguous.
>>>
>>> Almost. vmalloc() puts guard pages around the allocation and
>>> guarantees page alignment.
>>>
>>>> Maybe we should get rid of xmalloc entirely and improve the
>>>> implementation of vmalloc so that it falls back to xmalloc for
>>>> sub-page allocations. Which in fact is almost the same thing that you
>>>> did.
>>>
>>> This would break callers assuming page alignment (and - shouldn't
>>> be an issue in practice - granularity). If anything, as Julien
>>> did suggest, we could modify xmalloc() accordingly, but then of
>>> course making sure we also honor alignment requests beyond page
>>> size.
>>>
>>> Neither of these is the goal here, hence this "intermediate"
>>> implementation, which is only almost "redundant".
>>>
>>>>> - if you know the allocation size is always more than a page, use
>>>>>     vmalloc(),
>>>>> - otherwise use xvmalloc(). Exceptions may of course apply, i.e.
>>>>> this is just a rule of thumb.
>>>>>
>>>>>> I would like to hear an opinion from the other maintainers.
>>>>>
>>>>> Let's hope at least one will voice theirs.
>>>>
>>>> If we take a step back, I think we only really need two memory
>>>> allocators:
>>>>
>>>> 1) one that allocates physically contiguous memory
>>>> 2) one that allocates non-physically contiguous memory
>>>>
>>>> That's it, right?
>>>>
>>>> In addition to that, I understand it could be convient to have a little
>>>> wrapper that automatically chooses between 1) and 2) depending on
>>>> circumstances.
>>>>
>>>> But if the circumstance is just size < PAGE_SIZE then I don't think we
>>>> need any convenience wrappers: we should just be able to call 2), which
>>>> is vmalloc, once we improve the vmalloc implementation.
>>>>
>>>> Or do you see any reasons to keep the current vmalloc implementation as
>>>> is for sub-page allocations?
>>>
>>> See my "Almost. ..." above.
>>>
>>> As an aside, I also find it quite puzzling that in one of the rare
>>> cases where I propose to clone an interface from Linux without much
>>> deviation from their model, I get objections. It typically was the
>>> other way around in the past ...
>>
>> If we were really following Linux, then we would have two interfaces:
>>      - xmalloc() which is the same as kmalloc()
>>      - xvalloc() which is the same a kvalloc()
> 
> (correction: xvmalloc() and kvmalloc())
> 
> - vmalloc() (named identically in Linux and Xen)
> 
> IOW the same set of _three_ interface groups.
> 
>> However, you seem to be the one objecting on the behavior of xmalloc().
> 
> I don't think I'm objecting to any existing behavior. What I did
> is state my view on (non-)guarantees by xmalloc(). And I've
> already said - maybe I'm wrong and, like Linux'es kmalloc(),
> there is a guarantee of it producing contiguous memory, and I
> merely didn't find where that's said.

I can find quite a few places in Linux that use kmalloc() with sizes 
that are bigger than a page.

That's enough for me to think that while this may not have been the 
original intention, people are using it like that (same in Xen). So we 
can't dismiss it.

> 
>> I can't speak for Stefano, but I don't object on following Linux.
>> Instead I am objecting on the growing number of way to allocate memory
>> in Xen and that differ depending on the system_state.
> 
> But as per above the addition only brings us on par with Linux.
> There, kvmalloc_node() is simply a wrapper (with different logic
> when to try what) around kmalloc_node() and __vmalloc_node(). No
> different (in the basic idea) from what I'm doing here.

There are at least two more in Xen so far:

  - alloc_domheap_pages()
  - alloc_xenheap_pages()

I still maintain that the way you suggest using each of them is not 
clear. In particular, that xmalloc() doesn't guarantee physically 
contiguous allocations for allocations larger than a page.

In summary, I would be happy with the introduction of xvmalloc() as long 
as we guarantee that xmalloc() is allocating physically contiguous memory.

Users that don't care about this guarantee would have to be switched 
to xvmalloc() (this doesn't need to be done here).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 15:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 15:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38708.71527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJaG-0004fP-83; Thu, 26 Nov 2020 15:53:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38708.71527; Thu, 26 Nov 2020 15:53:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiJaG-0004f6-0u; Thu, 26 Nov 2020 15:53:24 +0000
Received: by outflank-mailman (input) for mailman id 38708;
 Thu, 26 Nov 2020 15:53:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YrL6=FA=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kiJaF-0004Ch-IV
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 15:53:23 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a4884ce1-4f93-440f-a742-a98e93937557;
 Thu, 26 Nov 2020 15:53:08 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D67FA31B;
 Thu, 26 Nov 2020 07:53:07 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 330C63F23F;
 Thu, 26 Nov 2020 07:53:07 -0800 (PST)
X-Inumbo-ID: a4884ce1-4f93-440f-a742-a98e93937557
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 7/7] xen/arm: Activate TID3 in HCR_EL2
Date: Thu, 26 Nov 2020 15:51:09 +0000
Message-Id: <4d48107fb864222aa6f32ebf691e169a14c3e6b9.1606151462.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>

Activate the TID3 bit in the HCR register when starting a guest.
This will trap all coprocessor ID registers so that we can give guests
values corresponding to what they can actually use, and mask some
features from guests even though they would be supported by the
underlying hardware (like SVE or MPAM).

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 28d9d64558..c1a9ad6056 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -98,7 +98,7 @@ register_t get_default_hcr_flags(void)
 {
     return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
              (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
-             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
+             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
 static enum {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 16:45:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 16:45:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38799.71575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKOI-0001wr-6u; Thu, 26 Nov 2020 16:45:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38799.71575; Thu, 26 Nov 2020 16:45:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKOI-0001wk-2x; Thu, 26 Nov 2020 16:45:06 +0000
Received: by outflank-mailman (input) for mailman id 38799;
 Thu, 26 Nov 2020 16:45:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g0it=FA=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kiKOG-0001t3-QM
 for xen-devel@lists.xen.org; Thu, 26 Nov 2020 16:45:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b54d675b-b6a6-41b3-8b4d-430fbd651a8c;
 Thu, 26 Nov 2020 16:44:51 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kiKNx-0006r3-If; Thu, 26 Nov 2020 16:44:45 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kiKNx-0000Us-Ep; Thu, 26 Nov 2020 16:44:45 +0000
X-Inumbo-ID: b54d675b-b6a6-41b3-8b4d-430fbd651a8c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=QcKhnNAUTD9we9C0ydlHR9FppI9RStRTtluVYUEHMcg=; b=fyxv68yfCKQ4GlV5s+ANaO70ZN
	WBs/f+kYDcSJKiQqxrOqAXGchxm3l3codQ+zQqPGMru7i3Q8Te60r6WcxGkuxUSIkAyuyrRrb0/aR
	UaNAD5KpZSqJP69vP9ZDxksAt2g0vZgJL1xoM79moiq0DAGegXu/MKFe6W7NnWH5m7C0=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 351 v2 (CVE-2020-28368) - Information leak
 via power sidechannel
Message-Id: <E1kiKNx-0000Us-Ep@xenbits.xenproject.org>
Date: Thu, 26 Nov 2020 16:44:45 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-28368 / XSA-351
                              version 2

                 Information leak via power sidechannel

UPDATES IN VERSION 2
====================

CVE assigned.

ISSUE DESCRIPTION
=================

Researchers have demonstrated using software power/energy monitoring
interfaces to create covert channels, and infer the operations/data used
by other contexts within the system.

Access to these interfaces should be restricted to privileged software,
but it was found that Xen doesn't restrict access suitably, and the
interfaces are accessible to all guests.

For more information, see:
  https://platypusattack.com
  https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00389.html

IMPACT
======

An unprivileged guest administrator can sample platform power/energy
data.  This may be used to infer the operations/data used by other
contexts within the system.

The research demonstrates using this sidechannel to leak the AES keys
used elsewhere in the system.

VULNERABLE SYSTEMS
==================

Power/energy monitoring interfaces are platform and architecture
specific.  Consult your hardware vendor to ascertain what power feedback
interfaces are available.

For ARM systems, all versions of Xen are vulnerable.  The fix restricts
access to the AMU (Activity Monitors Unit) interface, introduced in
Armv8.4.

For x86 systems, Xen 4.14 and earlier are vulnerable - master is not
vulnerable, as these issues have been addressed in a more general
fashion.

The x86 fixes restrict access to:
 * Intel RAPL interface, introduced in SandyBridge CPUs.
 * Intel platform energy interface.
 * Intel perf_ctl interface, introduced in Pentium 4 CPUs and also
   implemented by other vendors.
 * AMD RAPL interface, introduced in Ryzen/EPYC CPUs.
 * AMD compute unit energy interface, present in Fam15/16 CPUs.

MITIGATION
==========

There are no mitigations available.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa351-arm.patch             Xen unstable - 4.10.x [ARM]
xsa351-x86-4.14-?.patch      Xen 4.14.x            [x86]
xsa351-x86-4.13-?.patch      Xen 4.13.x            [x86]
xsa351-x86-4.12-?.patch      Xen 4.12.x            [x86]
xsa351-x86-4.11-?.patch      Xen 4.11.x - 4.10.x   [x86]

$ sha256sum xsa351*
cad287981a870f13484834fa2364ffee68178517e906f55d2889304a4a9eae06  xsa351.meta
70ebd0e93af240af2680374dcfd8ff4a5dd3eefccf670f1cb9b546d763d6a554  xsa351-arm.patch
49b52a1366912a29e184e3014a9f1f579e8a0dd8a36f01d38d995d2c8ed81928  xsa351-arm-4.11.patch
2e7b7c2b98625d70c8b10047a9f668372f3ccede167344dedb712312606acbca  xsa351-x86-4.11-1.patch
ab9e2cb7d5e3e0c3a916f006c697495f4f01146e09df60ece59ce0a8f7aa5ed0  xsa351-x86-4.11-2.patch
bb68f6e6905bc1566156cafab058cbaf02a17c197385c33a83b7f73885913c1c  xsa351-x86-4.12-1.patch
53f464269f59498f8a9a614f10a47cfb1d81c666f0d684346e28005015de962c  xsa351-x86-4.12-2.patch
67a29d66230faafd9a8047ac80ec18130b5659e80a38c3a412cb2be6d3288a8f  xsa351-x86-4.13-1.patch
f7d8717dec33ee7484b36490402d113f1e7e168e7541bcf193fef620df299f08  xsa351-x86-4.13-2.patch
7d4fbe11a766226d7f1b93c5bf34664d8855deee09d1feebc76f11e49f2aa9c9  xsa351-x86-4.14-1.patch
41df825deafe3ef28e8594ec956033689af69f84a4a6dd92f97d1071e925203d  xsa351-x86-4.14-2.patch
$
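The checksum listing above can be verified locally along these lines. The demo file below is fabricated so the snippet is self-contained; with the real attachments you would check them against the hashes published in this advisory before applying a patch:

```shell
# Fabricate a stand-in patch file so this example runs anywhere;
# substitute the real xsa351-*.patch attachments in practice.
printf 'patch body\n' > xsa351-demo.patch

# Record its sha256 and verify it the same way you would verify the
# attachment hashes published above.
sha256sum xsa351-demo.patch > SHA256SUMS
sha256sum --check SHA256SUMS    # prints: xsa351-demo.patch: OK

# Then, from the tip of the relevant stable branch checkout:
#   patch -p1 < xsa351-x86-4.14-1.patch
```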

NOTE REGARDING LACK OF EMBARGO
==============================

Despite an attempt to organise predisclosure, the discoverers ultimately
did not authorise a predisclosure.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+/22UMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZt0IH/1P4OlmyExkX0u1mVXcG3o85esBYDAD6RKDhRCr5
IjbitMUItESGYyz/Z6BEmuUIiJ1gfNx7xs4I3b4i8UUBYdBvvsdjeL3WK75Ym3nW
Jh63AQzbDeNjkEnK4UnF6+V/9BJJUYB4avH6m82LU8+9Gp8S9CGH4y73gpiTGhcK
VHPhdPOSc+ZDJ/OEQUR/3uMci9nuQ+qw9PClGybj3j4iru3PWfPRCSsdy2sAZO0T
KwS+PvDgFKWqKiIAL6Ahfb0VnREP8zqpIxSq2cN8+SuVQym+H6zKh2exvYtEjq/j
pWNzHublyLZoSBLvfguIeNKj4x2va9dF7/jJAnLfNVrvJCI=
=QgHs
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa351.meta"
Content-Disposition: attachment; filename="xsa351.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNTEsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICI3OGQ5MDNlOTVlZmM1YjAxNjZiMzkzZDI4OWE2ODdjNjQw
MTZlOGVmIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM1MS14ODYtNC4xMS0/LnBh
dGNoIiwKICAgICAgICAgICAgInhzYTM1MS1hcm0tNC4xMS5wYXRjaCIKICAg
ICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6
IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAg
ICAgICJTdGFibGVSZWYiOiAiZTI3NGM4YmRjMTJlYjU5NmU1NTIzMzA0MGU4
YjQ5ZGEyNzE1MGYzMSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAogICAg
ICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNTEteDg2LTQu
MTEtPy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzNTEtYXJtLTQuMTEucGF0
Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAg
IjQuMTIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7
CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjk3YjdiNTU2N2ZiYTY5MThhNjU2
YWQzNDkwNTFiNTM0M2I1ZGVhMmUiLAogICAgICAgICAgIlByZXJlcXMiOiBb
XSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzUx
LXg4Ni00LjEyLT8ucGF0Y2giLAogICAgICAgICAgICAieHNhMzUxLWFybS5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMyI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDA2MGFjMjliY2JkYjc2ZDQ5
ZDJlMjQ4ZGRmY2I3YWZhMjM0NTQ0MCIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NTEteDg2LTQuMTMtPy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzNTEtYXJt
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwK
ICAgICI0LjE0IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVu
IjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIxMGJiNjNjMjAzZjQyZDkz
MWZhMWZhN2RiYmFlN2NlMTc2NWNlY2YyIiwKICAgICAgICAgICJQcmVyZXFz
IjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhz
YTM1MS14ODYtNC4xNC0/LnBhdGNoIiwKICAgICAgICAgICAgInhzYTM1MS1h
cm0ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNzA1NmYyZjg5ZjAz
ZjJmODA0YWM3ZTc3NmM3YjJiMDAwY2Q3MTZjZCIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCgkgICAgICAgICAg
ICAgICJ4c2EzNTEtYXJtLnBhdGNoIgoJCSAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa351-arm.patch"
Content-Disposition: attachment; filename="xsa351-arm.patch"
Content-Transfer-Encoding: base64

RnJvbTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KU3ViamVj
dDogeGVuL2FybTogQWx3YXlzIHRyYXAgQU1VIHN5c3RlbSByZWdpc3RlcnMK
ClRoZSBBY3Rpdml0eSBNb25pdG9ycyBVbml0IChBTVUpIGhhcyBiZWVuIGlu
dHJvZHVjZWQgYnkgQVJNdjguNC4gSXQgaXMKY29uc2lkZXJlZCB0byBiZSB1
bnNhZmUgdG8gYmUgZXhwb3NlIHRvIGd1ZXN0cyBhcyB0aGV5IG1pZ2h0IGV4
cG9zZQppbmZvcm1hdGlvbiBhYm91dCBjb2RlIGV4ZWN1dGVkIGJ5IG90aGVy
IGd1ZXN0cyBvciB0aGUgaG9zdC4KCkFybSBwcm92aWRlZCBhIHdheSB0byB0
cmFwIGFsbCB0aGUgQU1VIHN5c3RlbSByZWdpc3RlcnMgYnkgc2V0dGluZwpD
UFRSX0VMMi5UQU0gdG8gMS4KClVuZm9ydHVuYXRlbHksIG9uIG9sZGVyIHJl
dmlzaW9uIG9mIHRoZSBzcGVjaWZpY2F0aW9uLCB0aGUgYml0IDMwIChub3cK
Q1BUUl9FTDEuVEFNKSB3YXMgUkVTMC4gQmVjYXVzZSBvZiB0aGF0LCBYZW4g
aXMgc2V0dGluZyBpdCB0byAwIGFuZAp0aGVyZWZvcmUgdGhlIHN5c3RlbSBy
ZWdpc3RlcnMgd291bGQgYmUgZXhwb3NlZCB0byB0aGUgZ3Vlc3Qgd2hlbiBp
dCBpcwpydW4gb24gcHJvY2Vzc29ycyB3aXRoIEFNVS4KCkFzIHRoZSBiaXQg
aXMgbWFyayBhcyBVTktOT1dOIGF0IGJvb3QgaW4gQXJtdjguNCwgdGhlIG9u
bHkgc2FmZSBzb2x1dGlvbgpmb3IgdXMgaXMgdG8gYWx3YXlzIHNldCBDUFRS
X0VMMS5UQU0gdG8gMS4KCkd1ZXN0IHRyeWluZyB0byBhY2Nlc3MgdGhlIEFN
VSBzeXN0ZW0gcmVnaXN0ZXJzIHdpbGwgbm93IHJlY2VpdmUgYW4KdW5kZWZp
bmVkIGluc3RydWN0aW9uLiBVbmZvcnR1bmF0ZWx5LCB0aGlzIG1lYW5zIHRo
YXQgZXZlbiB3ZWxsLWJlaGF2ZWQKZ3Vlc3QgbWF5IGZhaWwgdG8gYm9vdCBi
ZWNhdXNlIHdlIGRvbid0IHNhbml0aXplIHRoZSBJRCByZWdpc3RlcnMuCgpU
aGlzIGlzIGEga25vd24gaXNzdWVzIHdpdGggb3RoZXIgQXJtdjguMCsgZmVh
dHVyZXMgKGUuZy4gU1ZFLCBQb2ludGVyCkF1dGgpLiBUaGlzIHdpbGwgdGFr
ZW4gY2FyZSBzZXBhcmF0ZWx5LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNTEg
KG9yIFhTQS05MyByZS1ib3JuKS4KClNpZ25lZC1vZmYtYnk6IEp1bGllbiBH
cmFsbCA8amdyYWxsQGFtYXpvbi5jb20+ClJldmlld2VkLWJ5OiBBbmRyZSBQ
cnp5d2FyYSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT4KUmV2aWV3ZWQtYnk6
IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4K
UmV2aWV3ZWQtYnk6IEJlcnRyYW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1
aXNAYXJtLmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vdHJhcHMu
YyBiL3hlbi9hcmNoL2FybS90cmFwcy5jCmluZGV4IGEzNmYxNDVlNjcuLjIy
YmQxYmQ0YzYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL2FybS90cmFwcy5jCisr
KyBiL3hlbi9hcmNoL2FybS90cmFwcy5jCkBAIC0xNTEsNyArMTUxLDggQEAg
dm9pZCBpbml0X3RyYXBzKHZvaWQpCiAgICAgICogT24gQVJNNjQgdGhlIFRD
UHggYml0cyB3aGljaCB3ZSBzZXQgaGVyZSAoMC4uOSwxMiwxMykgYXJlIGFs
bAogICAgICAqIFJFUzEsIGkuZS4gdGhleSB3b3VsZCB0cmFwIHdoZXRoZXIg
d2UgZGlkIHRoaXMgd3JpdGUgb3Igbm90LgogICAgICAqLwotICAgIFdSSVRF
X1NZU1JFRygoSENQVFJfQ1BfTUFTSyAmIH4oSENQVFJfQ1AoMTApIHwgSENQ
VFJfQ1AoMTEpKSkgfCBIQ1BUUl9UVEEsCisgICAgV1JJVEVfU1lTUkVHKChI
Q1BUUl9DUF9NQVNLICYgfihIQ1BUUl9DUCgxMCkgfCBIQ1BUUl9DUCgxMSkp
KSB8CisgICAgICAgICAgICAgICAgIEhDUFRSX1RUQSB8IEhDUFRSX1RBTSwK
ICAgICAgICAgICAgICAgICAgQ1BUUl9FTDIpOwogCiAgICAgLyoKZGlmZiAt
LWdpdCBhL3hlbi9pbmNsdWRlL2FzbS1hcm0vcHJvY2Vzc29yLmggYi94ZW4v
aW5jbHVkZS9hc20tYXJtL3Byb2Nlc3Nvci5oCmluZGV4IDNjYTY3ZjgxNTcu
LmQzZDEyYTlkMTkgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS1hcm0v
cHJvY2Vzc29yLmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLWFybS9wcm9jZXNz
b3IuaApAQCAtMzUxLDYgKzM1MSw3IEBACiAjZGVmaW5lIFZUQ1JfUkVTMSAg
ICAgICAoX0FDKDEsVUwpPDwzMSkKIAogLyogSENQVFIgSHlwLiBDb3Byb2Nl
c3NvciBUcmFwIFJlZ2lzdGVyICovCisjZGVmaW5lIEhDUFRSX1RBTSAgICAg
ICAoKF9BQygxLFUpPDwzMCkpCiAjZGVmaW5lIEhDUFRSX1RUQSAgICAgICAo
KF9BQygxLFUpPDwyMCkpICAgICAgICAvKiBUcmFwIHRyYWNlIHJlZ2lzdGVy
cyAqLwogI2RlZmluZSBIQ1BUUl9DUCh4KSAgICAgKChfQUMoMSxVKTw8KHgp
KSkgICAgICAgLyogVHJhcCBDb3Byb2Nlc3NvciB4ICovCiAjZGVmaW5lIEhD
UFRSX0NQX01BU0sgICAoKF9BQygxLFUpPDwxNCktMSkK

--=separator
Content-Type: application/octet-stream; name="xsa351-arm-4.11.patch"
Content-Disposition: attachment; filename="xsa351-arm-4.11.patch"
Content-Transfer-Encoding: base64

RnJvbSBiZGJkNjZjYjliYTE3ZGQxYTcyMjFmMmE1NjFmNDVhODM2ZjEyZjY0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBUdWUsIDEwIE5vdiAyMDIwIDE3
OjA4OjMyICswMDAwClN1YmplY3Q6IFtQQVRDSF0geGVuL2FybTogQWx3YXlz
IHRyYXAgQU1VIHN5c3RlbSByZWdpc3RlcnMKClRoZSBBY3Rpdml0eSBNb25p
dG9ycyBVbml0IChBTVUpIGhhcyBiZWVuIGludHJvZHVjZWQgYnkgQVJNdjgu
NC4gSXQgaXMKY29uc2lkZXJlZCB0byBiZSB1bnNhZmUgdG8gYmUgZXhwb3Nl
IHRvIGd1ZXN0cyBhcyB0aGV5IG1pZ2h0IGV4cG9zZQppbmZvcm1hdGlvbiBh
Ym91dCBjb2RlIGV4ZWN1dGVkIGJ5IG90aGVyIGd1ZXN0cyBvciB0aGUgaG9z
dC4KCkFybSBwcm92aWRlZCBhIHdheSB0byB0cmFwIGFsbCB0aGUgQU1VIHN5
c3RlbSByZWdpc3RlcnMgYnkgc2V0dGluZwpDUFRSX0VMMi5UQU0gdG8gMS4K
ClVuZm9ydHVuYXRlbHksIG9uIG9sZGVyIHJldmlzaW9uIG9mIHRoZSBzcGVj
aWZpY2F0aW9uLCB0aGUgYml0IDMwIChub3cKQ1BUUl9FTDEuVEFNKSB3YXMg
UkVTMC4gQmVjYXVzZSBvZiB0aGF0LCBYZW4gaXMgc2V0dGluZyBpdCB0byAw
IGFuZAp0aGVyZWZvcmUgdGhlIHN5c3RlbSByZWdpc3RlcnMgd291bGQgYmUg
ZXhwb3NlZCB0byB0aGUgZ3Vlc3Qgd2hlbiBpdCBpcwpydW4gb24gcHJvY2Vz
c29ycyB3aXRoIEFNVS4KCkFzIHRoZSBiaXQgaXMgbWFyayBhcyBVTktOT1dO
IGF0IGJvb3QgaW4gQXJtdjguNCwgdGhlIG9ubHkgc2FmZSBzb2x1dGlvbgpm
b3IgdXMgaXMgdG8gYWx3YXlzIHNldCBDUFRSX0VMMS5UQU0gdG8gMS4KCkd1
ZXN0IHRyeWluZyB0byBhY2Nlc3MgdGhlIEFNVSBzeXN0ZW0gcmVnaXN0ZXJz
IHdpbGwgbm93IHJlY2VpdmUgYW4KdW5kZWZpbmVkIGluc3RydWN0aW9uLiBV
bmZvcnR1bmF0ZWx5LCB0aGlzIG1lYW5zIHRoYXQgZXZlbiB3ZWxsLWJlaGF2
ZWQKZ3Vlc3QgbWF5IGZhaWwgdG8gYm9vdCBiZWNhdXNlIHdlIGRvbid0IHNh
bml0aXplIHRoZSBJRCByZWdpc3RlcnMuCgpUaGlzIGlzIGEga25vd24gaXNz
dWVzIHdpdGggb3RoZXIgQXJtdjguMCsgZmVhdHVyZXMgKGUuZy4gU1ZFLCBQ
b2ludGVyCkF1dGgpLiBUaGlzIHdpbGwgdGFrZW4gY2FyZSBzZXBhcmF0ZWx5
LgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNTEgKG9yIFhTQS05MyByZS1ib3Ju
KS4KClNpZ25lZC1vZmYtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClJldmlld2VkLWJ5OiBBbmRyZSBQcnp5d2FyYSA8YW5kcmUucHJ6
eXdhcmFAYXJtLmNvbT4KUmV2aWV3ZWQtYnk6IFN0ZWZhbm8gU3RhYmVsbGlu
aSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4KUmV2aWV3ZWQtYnk6IEJlcnRy
YW5kIE1hcnF1aXMgPGJlcnRyYW5kLm1hcnF1aXNAYXJtLmNvbT4KLS0tCiB4
ZW4vYXJjaC9hcm0vdHJhcHMuYyAgICAgICAgICAgIHwgMyArKy0KIHhlbi9p
bmNsdWRlL2FzbS1hcm0vcHJvY2Vzc29yLmggfCAxICsKIDIgZmlsZXMgY2hh
bmdlZCwgMyBpbnNlcnRpb25zKCspLCAxIGRlbGV0aW9uKC0pCgpkaWZmIC0t
Z2l0IGEveGVuL2FyY2gvYXJtL3RyYXBzLmMgYi94ZW4vYXJjaC9hcm0vdHJh
cHMuYwppbmRleCBlOTMwNTg1YWQ2ZDQuLmMxMjAxMGE3MjJiNSAxMDA2NDQK
LS0tIGEveGVuL2FyY2gvYXJtL3RyYXBzLmMKKysrIGIveGVuL2FyY2gvYXJt
L3RyYXBzLmMKQEAgLTE3OSw3ICsxNzksOCBAQCB2b2lkIGluaXRfdHJhcHMo
dm9pZCkKICAgICAgKiBPbiBBUk02NCB0aGUgVENQeCBiaXRzIHdoaWNoIHdl
IHNldCBoZXJlICgwLi45LDEyLDEzKSBhcmUgYWxsCiAgICAgICogUkVTMSwg
aS5lLiB0aGV5IHdvdWxkIHRyYXAgd2hldGhlciB3ZSBkaWQgdGhpcyB3cml0
ZSBvciBub3QuCiAgICAgICovCi0gICAgV1JJVEVfU1lTUkVHKChIQ1BUUl9D
UF9NQVNLICYgfihIQ1BUUl9DUCgxMCkgfCBIQ1BUUl9DUCgxMSkpKSB8IEhD
UFRSX1RUQSwKKyAgICBXUklURV9TWVNSRUcoKEhDUFRSX0NQX01BU0sgJiB+
KEhDUFRSX0NQKDEwKSB8IEhDUFRSX0NQKDExKSkpIHwKKyAgICAgICAgICAg
ICAgICAgSENQVFJfVFRBIHwgSENQVFJfVEFNLAogICAgICAgICAgICAgICAg
ICBDUFRSX0VMMik7CiAKICAgICAvKiBTZXR1cCBoeXBlcnZpc29yIHRyYXBz
ICovCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20tYXJtL3Byb2Nlc3Nv
ci5oIGIveGVuL2luY2x1ZGUvYXNtLWFybS9wcm9jZXNzb3IuaAppbmRleCAy
MjJhMDJkZDk5MzUuLjU3NTVjYzY0MzQ0YSAxMDA2NDQKLS0tIGEveGVuL2lu
Y2x1ZGUvYXNtLWFybS9wcm9jZXNzb3IuaAorKysgYi94ZW4vaW5jbHVkZS9h
c20tYXJtL3Byb2Nlc3Nvci5oCkBAIC0yOTEsNiArMjkxLDcgQEAKICNkZWZp
bmUgVlRDUl9SRVMxICAgICAgIChfQUMoMSxVTCk8PDMxKQogCiAvKiBIQ1BU
UiBIeXAuIENvcHJvY2Vzc29yIFRyYXAgUmVnaXN0ZXIgKi8KKyNkZWZpbmUg
SENQVFJfVEFNICAgICAgICgoX0FDKDEsVSk8PDMwKSkKICNkZWZpbmUgSENQ
VFJfVFRBICAgICAgICgoX0FDKDEsVSk8PDIwKSkgICAgICAgIC8qIFRyYXAg
dHJhY2UgcmVnaXN0ZXJzICovCiAjZGVmaW5lIEhDUFRSX0NQKHgpICAgICAo
KF9BQygxLFUpPDwoeCkpKSAgICAgICAvKiBUcmFwIENvcHJvY2Vzc29yIHgg
Ki8KICNkZWZpbmUgSENQVFJfQ1BfTUFTSyAgICgoX0FDKDEsVSk8PDE0KS0x
KQotLSAKMi4xNy4xCgo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.11-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCAyNTZlNThkODJiLi4zNDk1YWM5ZjRhIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0xNDEsNiArMTQxLDcgQEAgaW50IGluaXRfdmNwdV9t
c3JfcG9saWN5KHN0cnVjdCB2Y3B1ICp2KQogCiBpbnQgZ3Vlc3RfcmRtc3Io
Y29uc3Qgc3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3Qg
KnZhbCkKIHsKKyAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpkID0gdi0+ZG9t
YWluOwogICAgIGNvbnN0IHN0cnVjdCBjcHVpZF9wb2xpY3kgKmNwID0gdi0+
ZG9tYWluLT5hcmNoLmNwdWlkOwogICAgIGNvbnN0IHN0cnVjdCBtc3JfZG9t
YWluX3BvbGljeSAqZHAgPSB2LT5kb21haW4tPmFyY2gubXNyOwogICAgIGNv
bnN0IHN0cnVjdCBtc3JfdmNwdV9wb2xpY3kgKnZwID0gdi0+YXJjaC5tc3I7
CkBAIC0yMTIsNiArMjEzLDI1IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBz
dHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQog
ICAgICAgICBicmVhazsKIAogICAgICAgICAvKgorICAgICAgICAgKiBUaGVz
ZSBNU1JzIGFyZSBub3QgZW51bWVyYXRlZCBpbiBDUFVJRC4gIFRoZXkgaGF2
ZSBiZWVuIGFyb3VuZAorICAgICAgICAgKiBzaW5jZSB0aGUgUGVudGl1bSA0
LCBhbmQgaW1wbGVtZW50ZWQgYnkgb3RoZXIgdmVuZG9ycy4KKyAgICAgICAg
ICoKKyAgICAgICAgICogU29tZSB2ZXJzaW9ucyBvZiBXaW5kb3dzIHRyeSBy
ZWFkaW5nIHRoZXNlIGJlZm9yZSBzZXR0aW5nIHVwIGEgI0dQCisgICAgICAg
ICAqIGhhbmRsZXIsIGFuZCBMaW51eCBoYXMgc2V2ZXJhbCB1bmd1YXJkZWQg
cmVhZHMgYXMgd2VsbC4gIFByb3ZpZGUKKyAgICAgICAgICogUkFaIHNlbWFu
dGljcywgaW4gZ2VuZXJhbCwgYnV0IHBlcm1pdCBhIGNwdWZyZXEgY29udHJv
bGxlciBkb20wIHRvCisgICAgICAgICAqIGhhdmUgZnVsbCBhY2Nlc3MuCisg
ICAgICAgICAqLworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9TVEFUVVM6Cisg
ICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAgaWYgKCAhKGNw
LT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBYODZfVkVORE9S
X0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2ZhdWx0OworCisg
ICAgICAgICp2YWwgPSAwOworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1
ZnJlcV9jb250cm9sbGVyKGQpKSB8fCByZG1zcl9zYWZlKG1zciwgKnZhbCkg
PT0gMCApCisgICAgICAgICAgICBicmVhazsKKyAgICAgICAgZ290byBncF9m
YXVsdDsKKworICAgICAgICAvKgogICAgICAgICAgKiBUT0RPOiBJbXBsZW1l
bnQgd2hlbiB3ZSBoYXZlIGJldHRlciB0b3BvbG9neSByZXByZXNlbnRhdGlv
bi4KICAgICBjYXNlIE1TUl9JTlRFTF9DT1JFX1RIUkVBRF9DT1VOVDoKICAg
ICAgICAgICovCkBAIC0yNDEsNiArMjYxLDcgQEAgaW50IGd1ZXN0X3dybXNy
KHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkK
ICAgICBjYXNlIE1TUl9JTlRFTF9DT1JFX1RIUkVBRF9DT1VOVDoKICAgICBj
YXNlIE1TUl9JTlRFTF9QTEFURk9STV9JTkZPOgogICAgIGNhc2UgTVNSX0FS
Q0hfQ0FQQUJJTElUSUVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9TVEFU
VVM6CiAgICAgICAgIC8qIFJlYWQtb25seSAqLwogICAgIGNhc2UgTVNSX1RT
WF9GT1JDRV9BQk9SVDoKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKQEAgLTM0
NSw2ICszNjYsMjEgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2
LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICAgICAgYnJlYWs7
CiAgICAgfQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgTVNSIGlz
IG5vdCBlbnVtZXJhdGVkIGluIENQVUlELiAgSXQgaGFzIGJlZW4gYXJvdW5k
IHNpbmNlIHRoZQorICAgICAgICAgKiBQZW50aXVtIDQsIGFuZCBpbXBsZW1l
bnRlZCBieSBvdGhlciB2ZW5kb3JzLgorICAgICAgICAgKgorICAgICAgICAg
KiBUbyBtYXRjaCB0aGUgUkFaIHNlbWFudGljcywgaW1wbGVtZW50IGFzIHdy
aXRlLWRpc2NhcmQsIGV4Y2VwdCBmb3IKKyAgICAgICAgICogYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB3aGljaCBoYXMgZnVsbCBhY2Nlc3MuCisgICAg
ICAgICAqLworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAg
IGlmICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwg
WDg2X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9m
YXVsdDsKKworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250
cm9sbGVyKGQpKSB8fCB3cm1zcl9zYWZlKG1zciwgdmFsKSA9PSAwICkKKyAg
ICAgICAgICAgIGJyZWFrOworICAgICAgICBnb3RvIGdwX2ZhdWx0OworCiAg
ICAgZGVmYXVsdDoKICAgICAgICAgcmV0dXJuIFg4NkVNVUxfVU5IQU5ETEVB
QkxFOwogICAgIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9lbXVs
LXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYwpp
bmRleCA4MTIwZGVkMzMwLi43NTVmMDBkYjMzIDEwMDY0NAotLS0gYS94ZW4v
YXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKKysrIGIveGVuL2FyY2gveDg2
L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC04MTYsMTIgKzgxNiw2IEBAIHN0YXRp
YyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlzY19lbmFibGUodWludDY0X3Qg
dmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAKLXN0YXRpYyBpbmxpbmUgYm9v
bCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29uc3Qgc3RydWN0IGRvbWFpbiAq
ZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVxX2NvbnRyb2xsZXIgPT0gRlJF
UUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAgICAgICAgIGlzX2hhcmR3YXJl
X2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBpbnQgcmVhZF9tc3IodW5zaWdu
ZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwKICAgICAgICAgICAgICAgICAg
ICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQgKmN0eHQpCiB7CkBAIC0xMDk2
LDE0ICsxMDkwLDYgQEAgc3RhdGljIGludCB3cml0ZV9tc3IodW5zaWduZWQg
aW50IHJlZywgdWludDY0X3QgdmFsLAogICAgICAgICAgICAgcmV0dXJuIFg4
NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7CiAKLSAgICBjYXNlIE1TUl9J
QTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAoIGJvb3RfY3B1X2RhdGEueDg2
X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVMICkKLSAgICAgICAgICAgIGJy
ZWFrOwotICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250cm9s
bGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAgICB3cm1zcl9zYWZlKHJlZywg
dmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJldHVybiBYODZFTVVMX09LQVk7
Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2FzZSBNU1JfSUEzMl9USEVSTV9D
T05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJfRU5FUkdZX1BFUkZfQklBUzoK
ICAgICAgICAgaWYgKCBib290X2NwdV9kYXRhLng4Nl92ZW5kb3IgIT0gWDg2
X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCmluZGV4IGMwY2M1
ZDkzMzYuLjdlNGFkNWQ1MWIgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL3hl
bi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCkBAIC05
MjAsNiArOTIwLDIyIEBAIGV4dGVybiBlbnVtIGNwdWZyZXFfY29udHJvbGxl
ciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVRQ1RMX2RvbTBfa2VybmVsLCBG
UkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRyb2xsZXI7CiAKK3N0YXRpYyBh
bHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9jb250cm9sbGVyKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAgLyoKKyAgICAgKiBBIFBWIGRv
bTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUgY3B1ZnJlcSBjb250cm9sbGVy
LCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICogWGVuJ3MgY3B1ZnJlcSBkcml2
ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0cyBkaXJlY3QgYWNjZXNzIHRv
IGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAgICAqCisgICAgICogVGhpcyBp
bnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRvbTAgaXMgaWRlbnRpdHkgcGlu
bmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAgKiBudW1iZXIgb2YgdkNQVXMg
YXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAgICAgKgorICAgICAqIEl0IHdv
dWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZpcnR1YWxpc2UgdGhlIGludGVy
ZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4gKGlzX3B2X2RvbWFpbihkKSAm
JiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYKKyAgICAgICAgICAgIGNwdWZy
ZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2RvbTBfa2VybmVsKTsKK30KKwog
I2RlZmluZSBDUFVQT09MSURfTk9ORSAgICAtMQogCiBzdHJ1Y3QgY3B1cG9v
bCAqY3B1cG9vbF9nZXRfYnlfaWQoaW50IHBvb2xpZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.11-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCAz
NDk1YWM5ZjRhLi45OWM4NDhmZjQxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTYsNiAr
MTU2LDE1IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBzdHJ1Y3QgdmNwdSAq
diwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNS
X1RTWF9GT1JDRV9BQk9SVDoKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAg
ICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CisgICAgY2FzZSBNU1JfUkFQTF9Q
T1dFUl9VTklUOgorICAgIGNhc2UgTVNSX1BLR19QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QS0dfUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9EUkFNX1BPV0VS
X0xJTUlUIC4uLiBNU1JfRFJBTV9QT1dFUl9JTkZPOgorICAgIGNhc2UgTVNS
X1BQMF9QT1dFUl9MSU1JVCAgLi4uIE1TUl9QUDBfUE9MSUNZOgorICAgIGNh
c2UgTVNSX1BQMV9QT1dFUl9MSU1JVCAgLi4uIE1TUl9QUDFfUE9MSUNZOgor
ICAgIGNhc2UgTVNSX1BMQVRGT1JNX0VORVJHWV9DT1VOVEVSOgorICAgIGNh
c2UgTVNSX1BMQVRGT1JNX1BPV0VSX0xJTUlUOgorICAgIGNhc2UgTVNSX0Yx
NUhfQ1VfUE9XRVIgLi4uIE1TUl9GMTVIX0NVX01BWF9QT1dFUjoKKyAgICBj
YXNlIE1TUl9BTURfUkFQTF9QT1dFUl9VTklUIC4uLiBNU1JfQU1EX1BLR19F
TkVSR1lfU1RBVFVTOgogICAgICAgICAvKiBOb3Qgb2ZmZXJlZCB0byBndWVz
dHMuICovCiAgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CiAKQEAgLTI2Niw2ICsy
NzUsMTUgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50
MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfRk9S
Q0VfQUJPUlQ6CiAgICAgY2FzZSBNU1JfVFNYX0NUUkw6CiAgICAgY2FzZSBN
U1JfTUNVX09QVF9DVFJMOgorICAgIGNhc2UgTVNSX1JBUExfUE9XRVJfVU5J
VDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBNU1JfUEtH
X1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1JVCAu
Li4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9QUDBfUE9X
RVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNlIE1TUl9Q
UDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAgICBjYXNl
IE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNlIE1TUl9Q
TEFURk9STV9QT1dFUl9MSU1JVDoKKyAgICBjYXNlIE1TUl9GMTVIX0NVX1BP
V0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAgY2FzZSBNU1Jf
QU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0dfRU5FUkdZX1NU
QVRVUzoKICAgICAgICAgLyogTm90IG9mZmVyZWQgdG8gZ3Vlc3RzLiAqLwog
ICAgICAgICBnb3RvIGdwX2ZhdWx0OwogCmRpZmYgLS1naXQgYS94ZW4vaW5j
bHVkZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1ZGUvYXNtLXg4
Ni9tc3ItaW5kZXguaAppbmRleCA0ODBkMWQ4MTAyLi5hNjg1ZGNkY2NhIDEw
MDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCisr
KyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgKQEAgLTk2LDYg
Kzk2LDM4IEBACiAvKiBMb3dlciA2IGJpdHMgZGVmaW5lIHRoZSBmb3JtYXQg
b2YgdGhlIGFkZHJlc3MgaW4gdGhlIExCUiBzdGFjayAqLwogI2RlZmluZSBN
U1JfSUEzMl9QRVJGX0NBUF9MQlJfRk9STUFUCTB4M2YKIAorLyoKKyAqIElu
dGVsIFJ1bnRpbWUgQXZlcmFnZSBQb3dlciBMaW1pdGluZyAoUkFQTCkgaW50
ZXJmYWNlLiAgUG93ZXIgcGxhbmUgYmFzZQorICogYWRkcmVzc2VzIChNU1Jf
Kl9QT1dFUl9MSU1JVCkgYXJlIG1vZGVsIHNwZWNpZmljLCBidXQgaGF2ZSBz
by1mYXIgYmVlbgorICogY29uc2lzdGVudCBzaW5jZSB0aGVpciBpbnRyb2R1
Y3Rpb24gaW4gU2FuZHlCcmlkZ2UuCisgKgorICogT2Zmc2V0cyBvZiBmdW5j
dGlvbmFsaXR5IGZyb20gdGhlIHBvd2VyIHBsYW5lIGJhc2UgaXMgYXJjaGl0
ZWN0dXJhbCwgYnV0CisgKiBub3QgYWxsIHBvd2VyIHBsYW5lcyBzdXBwb3J0
IGFsbCBmdW5jdGlvbmFsaXR5LgorICovCisjZGVmaW5lIE1TUl9SQVBMX1BP
V0VSX1VOSVQJCTB4MDAwMDA2MDYKKworI2RlZmluZSBNU1JfUEtHX1BPV0VS
X0xJTUlUCQkweDAwMDAwNjEwCisjZGVmaW5lIE1TUl9QS0dfRU5FUkdZX1NU
QVRVUwkJMHgwMDAwMDYxMQorI2RlZmluZSBNU1JfUEtHX1BFUkZfU1RBVFVT
CQkweDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJfSU5GTwkJMHgw
MDAwMDYxNAorCisjZGVmaW5lIE1TUl9EUkFNX1BPV0VSX0xJTUlUCQkweDAw
MDAwNjE4CisjZGVmaW5lIE1TUl9EUkFNX0VORVJHWV9TVEFUVVMJCTB4MDAw
MDA2MTkKKyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMJCTB4MDAwMDA2
MWIKKyNkZWZpbmUgTVNSX0RSQU1fUE9XRVJfSU5GTwkJMHgwMDAwMDYxYwor
CisjZGVmaW5lIE1TUl9QUDBfUE9XRVJfTElNSVQJCTB4MDAwMDA2MzgKKyNk
ZWZpbmUgTVNSX1BQMF9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjM5CisjZGVm
aW5lIE1TUl9QUDBfUE9MSUNZCQkJMHgwMDAwMDYzYQorCisjZGVmaW5lIE1T
Ul9QUDFfUE9XRVJfTElNSVQJCTB4MDAwMDA2NDAKKyNkZWZpbmUgTVNSX1BQ
MV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjQxCisjZGVmaW5lIE1TUl9QUDFf
UE9MSUNZCQkJMHgwMDAwMDY0MgorCisvKiBJbnRlbCBQbGF0Zm9ybS13aWRl
IHBvd2VyIGludGVyZmFjZS4gKi8KKyNkZWZpbmUgTVNSX1BMQVRGT1JNX0VO
RVJHWV9DT1VOVEVSCTB4MDAwMDA2NGQKKyNkZWZpbmUgTVNSX1BMQVRGT1JN
X1BPV0VSX0xJTUlUCTB4MDAwMDA2NWMKKwogI2RlZmluZSBNU1JfSUEzMl9C
TkRDRkdTCQkweDAwMDAwZDkwCiAjZGVmaW5lIElBMzJfQk5EQ0ZHU19FTkFC
TEUJCTB4MDAwMDAwMDEKICNkZWZpbmUgSUEzMl9CTkRDRkdTX1BSRVNFUlZF
CQkweDAwMDAwMDAyCkBAIC0yMTgsNiArMjUwLDggQEAKICNkZWZpbmUgTVNS
X0s4X1ZNX0NSCQkJMHhjMDAxMDExNAogI2RlZmluZSBNU1JfSzhfVk1fSFNB
VkVfUEEJCTB4YzAwMTAxMTcKIAorI2RlZmluZSBNU1JfRjE1SF9DVV9QT1dF
UgkJMHhjMDAxMDA3YQorI2RlZmluZSBNU1JfRjE1SF9DVV9NQVhfUE9XRVIJ
CTB4YzAwMTAwN2IKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDAJ
CTB4YzAwMTAyMDAKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfUEVSRkNUUjAJ
CTB4YzAwMTAyMDEKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDEJ
CTB4YzAwMTAyMDIKQEAgLTIzMSw2ICsyNjUsMTAgQEAKICNkZWZpbmUgTVNS
X0FNRF9GQU0xNUhfRVZOVFNFTDUJCTB4YzAwMTAyMGEKICNkZWZpbmUgTVNS
X0FNRF9GQU0xNUhfUEVSRkNUUjUJCTB4YzAwMTAyMGIKIAorI2RlZmluZSBN
U1JfQU1EX1JBUExfUE9XRVJfVU5JVAkJMHhjMDAxMDI5OQorI2RlZmluZSBN
U1JfQU1EX0NPUkVfRU5FUkdZX1NUQVRVUwkweGMwMDEwMjlhCisjZGVmaW5l
IE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5YgorCiAjZGVm
aW5lIE1TUl9BTURfTDdTMF9GRUFUVVJFX01BU0sJMHhjMDAxMTAwMgogI2Rl
ZmluZSBNU1JfQU1EX1RIUk1fRkVBVFVSRV9NQVNLCTB4YzAwMTEwMDMKICNk
ZWZpbmUgTVNSX0s4X0ZFQVRVUkVfTUFTSwkJMHhjMDAxMTAwNAo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.12-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCA0Njc3MjIyYzQwLi5hNDI3ODI2YmEwIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yMDYsNiArMjA2LDI1IEBAIGludCBndWVzdF9yZG1z
cihjb25zdCBzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRf
dCAqdmFsKQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19l
bmFibGVzLnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAg
ICAgICAgICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BV
SUQuICBUaGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2Ug
dGhlIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRv
cnMuCisgICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2Yg
V2luZG93cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBh
ICNHUAorICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVy
YWwgdW5ndWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAg
ICAqIFJBWiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBj
cHVmcmVxIGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1
bGwgYWNjZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BF
UkZfU1RBVFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAg
ICAgIGlmICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVM
IHwgWDg2X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBn
cF9mYXVsdDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBs
aWtlbHkoIWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2Fm
ZShtc3IsICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAg
ICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklS
U1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZt
X2RvbWFpbihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBn
cF9mYXVsdDsKQEAgLTI5MCw2ICszMDksNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Io
c3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQog
ICAgIGNhc2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNh
c2UgTVNSX0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJD
SF9DQVBBQklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRV
UzoKICAgICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVFNY
X0ZPUkNFX0FCT1JUOgogICAgIGNhc2UgTVNSX1RTWF9DVFJMOgpAQCAtMzk0
LDYgKzQxNCwyMSBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYs
IHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgICAgICBicmVhazsK
ICAgICB9CiAKKyAgICAgICAgLyoKKyAgICAgICAgICogVGhpcyBNU1IgaXMg
bm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBJdCBoYXMgYmVlbiBhcm91bmQg
c2luY2UgdGhlCisgICAgICAgICAqIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVu
dGVkIGJ5IG90aGVyIHZlbmRvcnMuCisgICAgICAgICAqCisgICAgICAgICAq
IFRvIG1hdGNoIHRoZSBSQVogc2VtYW50aWNzLCBpbXBsZW1lbnQgYXMgd3Jp
dGUtZGlzY2FyZCwgZXhjZXB0IGZvcgorICAgICAgICAgKiBhIGNwdWZyZXEg
Y29udHJvbGxlciBkb20wIHdoaWNoIGhhcyBmdWxsIGFjY2Vzcy4KKyAgICAg
ICAgICovCisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAg
aWYgKCAhKGNwLT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBY
ODZfVkVORE9SX0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2Zh
dWx0OworCisgICAgICAgIGlmICggbGlrZWx5KCFpc19jcHVmcmVxX2NvbnRy
b2xsZXIoZCkpIHx8IHdybXNyX3NhZmUobXNyLCB2YWwpID09IDAgKQorICAg
ICAgICAgICAgYnJlYWs7CisgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAg
ICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoK
ICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCB2ICE9IGN1cnIg
KQogICAgICAgICAgICAgZ290byBncF9mYXVsdDsKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9w
di9lbXVsLXByaXYtb3AuYwppbmRleCAzMjRhMjMzNGEyLi45MzMwMzZlYTM0
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMK
KysrIGIveGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC03OTks
MTIgKzc5OSw2IEBAIHN0YXRpYyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlz
Y19lbmFibGUodWludDY0X3QgdmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAK
LXN0YXRpYyBpbmxpbmUgYm9vbCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29u
c3Qgc3RydWN0IGRvbWFpbiAqZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVx
X2NvbnRyb2xsZXIgPT0gRlJFUUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAg
ICAgICAgIGlzX2hhcmR3YXJlX2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBp
bnQgcmVhZF9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwK
ICAgICAgICAgICAgICAgICAgICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQg
KmN0eHQpCiB7CkBAIC0xMDQ3LDE0ICsxMDQxLDYgQEAgc3RhdGljIGludCB3
cml0ZV9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgdmFsLAogICAg
ICAgICAgICAgcmV0dXJuIFg4NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7
CiAKLSAgICBjYXNlIE1TUl9JQTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAo
IGJvb3RfY3B1X2RhdGEueDg2X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVM
ICkKLSAgICAgICAgICAgIGJyZWFrOwotICAgICAgICBpZiAoIGxpa2VseSgh
aXNfY3B1ZnJlcV9jb250cm9sbGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAg
ICB3cm1zcl9zYWZlKHJlZywgdmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJl
dHVybiBYODZFTVVMX09LQVk7Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2Fz
ZSBNU1JfSUEzMl9USEVSTV9DT05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJf
RU5FUkdZX1BFUkZfQklBUzoKICAgICAgICAgaWYgKCBib290X2NwdV9kYXRh
Lng4Nl92ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9z
Y2hlZC5oCmluZGV4IDgxOWY2ZWRlMmIuLmI5MTg2MjQzMjcgMTAwNjQ0Ci0t
LSBhL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRl
L3hlbi9zY2hlZC5oCkBAIC05OTMsNiArOTkzLDIyIEBAIGV4dGVybiBlbnVt
IGNwdWZyZXFfY29udHJvbGxlciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVR
Q1RMX2RvbTBfa2VybmVsLCBGUkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRy
b2xsZXI7CiAKK3N0YXRpYyBhbHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJl
cV9jb250cm9sbGVyKGNvbnN0IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAg
LyoKKyAgICAgKiBBIFBWIGRvbTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUg
Y3B1ZnJlcSBjb250cm9sbGVyLCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICog
WGVuJ3MgY3B1ZnJlcSBkcml2ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0
cyBkaXJlY3QgYWNjZXNzIHRvIGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAg
ICAqCisgICAgICogVGhpcyBpbnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRv
bTAgaXMgaWRlbnRpdHkgcGlubmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAg
KiBudW1iZXIgb2YgdkNQVXMgYXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAg
ICAgKgorICAgICAqIEl0IHdvdWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZp
cnR1YWxpc2UgdGhlIGludGVyZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4g
KGlzX3B2X2RvbWFpbihkKSAmJiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYK
KyAgICAgICAgICAgIGNwdWZyZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2Rv
bTBfa2VybmVsKTsKK30KKwogI2RlZmluZSBDUFVQT09MSURfTk9ORSAgICAt
MQogCiBzdHJ1Y3QgY3B1cG9vbCAqY3B1cG9vbF9nZXRfYnlfaWQoaW50IHBv
b2xpZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.12-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCBh
NDI3ODI2YmEwLi45MjdlZDYyNWRmIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTEsOSAr
MTUxLDE4IEBAIGludCBndWVzdF9yZG1zcihjb25zdCBzdHJ1Y3QgdmNwdSAq
diwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNS
X1RTWF9DVFJMOgogICAgIGNhc2UgTVNSX01DVV9PUFRfQ1RSTDoKICAgICBj
YXNlIE1TUl9SVElUX09VVFBVVF9CQVNFIC4uLiBNU1JfUlRJVF9BRERSX0Io
Nyk6CisgICAgY2FzZSBNU1JfUkFQTF9QT1dFUl9VTklUOgorICAgIGNhc2Ug
TVNSX1BLR19QT1dFUl9MSU1JVCAgLi4uIE1TUl9QS0dfUE9XRVJfSU5GTzoK
KyAgICBjYXNlIE1TUl9EUkFNX1BPV0VSX0xJTUlUIC4uLiBNU1JfRFJBTV9Q
T1dFUl9JTkZPOgorICAgIGNhc2UgTVNSX1BQMF9QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QUDBfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BQMV9QT1dFUl9MSU1J
VCAgLi4uIE1TUl9QUDFfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BMQVRGT1JN
X0VORVJHWV9DT1VOVEVSOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX1BPV0VS
X0xJTUlUOgogICAgIGNhc2UgTVNSX1VfQ0VUOgogICAgIGNhc2UgTVNSX1Nf
Q0VUOgogICAgIGNhc2UgTVNSX1BMMF9TU1AgLi4uIE1TUl9JTlRFUlJVUFRf
U1NQX1RBQkxFOgorICAgIGNhc2UgTVNSX0YxNUhfQ1VfUE9XRVIgLi4uIE1T
Ul9GMTVIX0NVX01BWF9QT1dFUjoKKyAgICBjYXNlIE1TUl9BTURfUkFQTF9Q
T1dFUl9VTklUIC4uLiBNU1JfQU1EX1BLR19FTkVSR1lfU1RBVFVTOgogICAg
ICAgICAvKiBOb3Qgb2ZmZXJlZCB0byBndWVzdHMuICovCiAgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CiAKQEAgLTMxNSw5ICszMjQsMTggQEAgaW50IGd1ZXN0
X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90
IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAgICBjYXNlIE1TUl9N
Q1VfT1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9PVVRQVVRfQkFTRSAu
Li4gTVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2UgTVNSX1JBUExfUE9X
RVJfVU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBN
U1JfUEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9M
SU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9Q
UDBfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNl
IE1TUl9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAg
ICBjYXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNl
IE1TUl9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBjYXNlIE1TUl9VX0NF
VDoKICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNlIE1TUl9QTDBfU1NQ
IC4uLiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKKyAgICBjYXNlIE1TUl9G
MTVIX0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAg
Y2FzZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0df
RU5FUkdZX1NUQVRVUzoKICAgICAgICAgLyogTm90IG9mZmVyZWQgdG8gZ3Vl
c3RzLiAqLwogICAgICAgICBnb3RvIGdwX2ZhdWx0OwogCmRpZmYgLS1naXQg
YS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oIGIveGVuL2luY2x1
ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAppbmRleCAwZWI2ODU1NjE0Li5iYTll
OTBhZjIxIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21zci1p
bmRleC5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgK
QEAgLTk2LDYgKzk2LDM4IEBACiAvKiBMb3dlciA2IGJpdHMgZGVmaW5lIHRo
ZSBmb3JtYXQgb2YgdGhlIGFkZHJlc3MgaW4gdGhlIExCUiBzdGFjayAqLwog
I2RlZmluZSBNU1JfSUEzMl9QRVJGX0NBUF9MQlJfRk9STUFUCTB4M2YKIAor
LyoKKyAqIEludGVsIFJ1bnRpbWUgQXZlcmFnZSBQb3dlciBMaW1pdGluZyAo
UkFQTCkgaW50ZXJmYWNlLiAgUG93ZXIgcGxhbmUgYmFzZQorICogYWRkcmVz
c2VzIChNU1JfKl9QT1dFUl9MSU1JVCkgYXJlIG1vZGVsIHNwZWNpZmljLCBi
dXQgaGF2ZSBzby1mYXIgYmVlbgorICogY29uc2lzdGVudCBzaW5jZSB0aGVp
ciBpbnRyb2R1Y3Rpb24gaW4gU2FuZHlCcmlkZ2UuCisgKgorICogT2Zmc2V0
cyBvZiBmdW5jdGlvbmFsaXR5IGZyb20gdGhlIHBvd2VyIHBsYW5lIGJhc2Ug
aXMgYXJjaGl0ZWN0dXJhbCwgYnV0CisgKiBub3QgYWxsIHBvd2VyIHBsYW5l
cyBzdXBwb3J0IGFsbCBmdW5jdGlvbmFsaXR5LgorICovCisjZGVmaW5lIE1T
Ul9SQVBMX1BPV0VSX1VOSVQJCTB4MDAwMDA2MDYKKworI2RlZmluZSBNU1Jf
UEtHX1BPV0VSX0xJTUlUCQkweDAwMDAwNjEwCisjZGVmaW5lIE1TUl9QS0df
RU5FUkdZX1NUQVRVUwkJMHgwMDAwMDYxMQorI2RlZmluZSBNU1JfUEtHX1BF
UkZfU1RBVFVTCQkweDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJf
SU5GTwkJMHgwMDAwMDYxNAorCisjZGVmaW5lIE1TUl9EUkFNX1BPV0VSX0xJ
TUlUCQkweDAwMDAwNjE4CisjZGVmaW5lIE1TUl9EUkFNX0VORVJHWV9TVEFU
VVMJCTB4MDAwMDA2MTkKKyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMJ
CTB4MDAwMDA2MWIKKyNkZWZpbmUgTVNSX0RSQU1fUE9XRVJfSU5GTwkJMHgw
MDAwMDYxYworCisjZGVmaW5lIE1TUl9QUDBfUE9XRVJfTElNSVQJCTB4MDAw
MDA2MzgKKyNkZWZpbmUgTVNSX1BQMF9FTkVSR1lfU1RBVFVTCQkweDAwMDAw
NjM5CisjZGVmaW5lIE1TUl9QUDBfUE9MSUNZCQkJMHgwMDAwMDYzYQorCisj
ZGVmaW5lIE1TUl9QUDFfUE9XRVJfTElNSVQJCTB4MDAwMDA2NDAKKyNkZWZp
bmUgTVNSX1BQMV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjQxCisjZGVmaW5l
IE1TUl9QUDFfUE9MSUNZCQkJMHgwMDAwMDY0MgorCisvKiBJbnRlbCBQbGF0
Zm9ybS13aWRlIHBvd2VyIGludGVyZmFjZS4gKi8KKyNkZWZpbmUgTVNSX1BM
QVRGT1JNX0VORVJHWV9DT1VOVEVSCTB4MDAwMDA2NGQKKyNkZWZpbmUgTVNS
X1BMQVRGT1JNX1BPV0VSX0xJTUlUCTB4MDAwMDA2NWMKKwogI2RlZmluZSBN
U1JfSUEzMl9CTkRDRkdTCQkweDAwMDAwZDkwCiAjZGVmaW5lIElBMzJfQk5E
Q0ZHU19FTkFCTEUJCTB4MDAwMDAwMDEKICNkZWZpbmUgSUEzMl9CTkRDRkdT
X1BSRVNFUlZFCQkweDAwMDAwMDAyCkBAIC0yMzYsNiArMjY4LDggQEAKICNk
ZWZpbmUgTVNSX0s4X1ZNX0NSCQkJMHhjMDAxMDExNAogI2RlZmluZSBNU1Jf
SzhfVk1fSFNBVkVfUEEJCTB4YzAwMTAxMTcKIAorI2RlZmluZSBNU1JfRjE1
SF9DVV9QT1dFUgkJMHhjMDAxMDA3YQorI2RlZmluZSBNU1JfRjE1SF9DVV9N
QVhfUE9XRVIJCTB4YzAwMTAwN2IKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
RVZOVFNFTDAJCTB4YzAwMTAyMDAKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
UEVSRkNUUjAJCTB4YzAwMTAyMDEKICNkZWZpbmUgTVNSX0FNRF9GQU0xNUhf
RVZOVFNFTDEJCTB4YzAwMTAyMDIKQEAgLTI0OSw2ICsyODMsMTAgQEAKICNk
ZWZpbmUgTVNSX0FNRF9GQU0xNUhfRVZOVFNFTDUJCTB4YzAwMTAyMGEKICNk
ZWZpbmUgTVNSX0FNRF9GQU0xNUhfUEVSRkNUUjUJCTB4YzAwMTAyMGIKIAor
I2RlZmluZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVAkJMHhjMDAxMDI5OQor
I2RlZmluZSBNU1JfQU1EX0NPUkVfRU5FUkdZX1NUQVRVUwkweGMwMDEwMjlh
CisjZGVmaW5lIE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5
YgorCiAjZGVmaW5lIE1TUl9BTURfTDdTMF9GRUFUVVJFX01BU0sJMHhjMDAx
MTAwMgogI2RlZmluZSBNU1JfQU1EX1RIUk1fRkVBVFVSRV9NQVNLCTB4YzAw
MTEwMDMKICNkZWZpbmUgTVNSX0s4X0ZFQVRVUkVfTUFTSwkJMHhjMDAxMTAw
NAo=

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.13-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCA4NzVhYzM5ZDMwLi44Yzk2OTE5N2FhIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yMDgsNiArMjA4LDI1IEBAIGludCBndWVzdF9yZG1z
cihzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFs
KQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19lbmFibGVz
LnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAgICAgICAg
ICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBU
aGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2UgdGhlIFBl
bnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRvcnMuCisg
ICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2YgV2luZG93
cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBhICNHUAor
ICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVyYWwgdW5n
dWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAgICAqIFJB
WiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1bGwgYWNj
ZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BFUkZfU1RB
VFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlm
ICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2
X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBsaWtlbHko
IWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2FmZShtc3Is
ICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4u
IE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFp
bihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKQEAgLTMwNSw2ICszMjQsNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0
IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgIGNh
c2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNhc2UgTVNS
X0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJDSF9DQVBB
QklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRVUzoKICAg
ICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVFNYX0ZPUkNF
X0FCT1JUOgogICAgIGNhc2UgTVNSX1RTWF9DVFJMOgpAQCAtNDExLDYgKzQz
MSwyMSBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYsIHVpbnQz
Ml90IG1zciwgdWludDY0X3QgdmFsKQogICAgICAgICBicmVhazsKICAgICB9
CiAKKyAgICAgICAgLyoKKyAgICAgICAgICogVGhpcyBNU1IgaXMgbm90IGVu
dW1lcmF0ZWQgaW4gQ1BVSUQuICBJdCBoYXMgYmVlbiBhcm91bmQgc2luY2Ug
dGhlCisgICAgICAgICAqIFBlbnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5
IG90aGVyIHZlbmRvcnMuCisgICAgICAgICAqCisgICAgICAgICAqIFRvIG1h
dGNoIHRoZSBSQVogc2VtYW50aWNzLCBpbXBsZW1lbnQgYXMgd3JpdGUtZGlz
Y2FyZCwgZXhjZXB0IGZvcgorICAgICAgICAgKiBhIGNwdWZyZXEgY29udHJv
bGxlciBkb20wIHdoaWNoIGhhcyBmdWxsIGFjY2Vzcy4KKyAgICAgICAgICov
CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKKyAgICAgICAgaWYgKCAh
KGNwLT54ODZfdmVuZG9yICYgKFg4Nl9WRU5ET1JfSU5URUwgfCBYODZfVkVO
RE9SX0NFTlRBVVIpKSApCisgICAgICAgICAgICBnb3RvIGdwX2ZhdWx0Owor
CisgICAgICAgIGlmICggbGlrZWx5KCFpc19jcHVmcmVxX2NvbnRyb2xsZXIo
ZCkpIHx8IHdybXNyX3NhZmUobXNyLCB2YWwpID09IDAgKQorICAgICAgICAg
ICAgYnJlYWs7CisgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CisKICAgICBjYXNl
IE1TUl9YMkFQSUNfRklSU1QgLi4uIE1TUl9YMkFQSUNfTEFTVDoKICAgICAg
ICAgaWYgKCAhaXNfaHZtX2RvbWFpbihkKSB8fCB2ICE9IGN1cnIgKQogICAg
ICAgICAgICAgZ290byBncF9mYXVsdDsKZGlmZiAtLWdpdCBhL3hlbi9hcmNo
L3g4Ni9wdi9lbXVsLXByaXYtb3AuYyBiL3hlbi9hcmNoL3g4Ni9wdi9lbXVs
LXByaXYtb3AuYwppbmRleCA0MjI1OGM2YmYxLi42ZGM0ZjkyYTg0IDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvZW11bC1wcml2LW9wLmMKKysrIGIv
eGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCkBAIC03NzYsMTIgKzc3
Niw2IEBAIHN0YXRpYyBpbmxpbmUgdWludDY0X3QgZ3Vlc3RfbWlzY19lbmFi
bGUodWludDY0X3QgdmFsKQogICAgIHJldHVybiB2YWw7CiB9CiAKLXN0YXRp
YyBpbmxpbmUgYm9vbCBpc19jcHVmcmVxX2NvbnRyb2xsZXIoY29uc3Qgc3Ry
dWN0IGRvbWFpbiAqZCkKLXsKLSAgICByZXR1cm4gKChjcHVmcmVxX2NvbnRy
b2xsZXIgPT0gRlJFUUNUTF9kb20wX2tlcm5lbCkgJiYKLSAgICAgICAgICAg
IGlzX2hhcmR3YXJlX2RvbWFpbihkKSk7Ci19Ci0KIHN0YXRpYyBpbnQgcmVh
ZF9tc3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgKnZhbCwKICAgICAg
ICAgICAgICAgICAgICAgc3RydWN0IHg4Nl9lbXVsYXRlX2N0eHQgKmN0eHQp
CiB7CkBAIC0xMDI2LDE0ICsxMDIwLDYgQEAgc3RhdGljIGludCB3cml0ZV9t
c3IodW5zaWduZWQgaW50IHJlZywgdWludDY0X3QgdmFsLAogICAgICAgICAg
ICAgcmV0dXJuIFg4NkVNVUxfT0tBWTsKICAgICAgICAgYnJlYWs7CiAKLSAg
ICBjYXNlIE1TUl9JQTMyX1BFUkZfQ1RMOgotICAgICAgICBpZiAoIGJvb3Rf
Y3B1X2RhdGEueDg2X3ZlbmRvciAhPSBYODZfVkVORE9SX0lOVEVMICkKLSAg
ICAgICAgICAgIGJyZWFrOwotICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1
ZnJlcV9jb250cm9sbGVyKGN1cnJkKSkgfHwKLSAgICAgICAgICAgICB3cm1z
cl9zYWZlKHJlZywgdmFsKSA9PSAwICkKLSAgICAgICAgICAgIHJldHVybiBY
ODZFTVVMX09LQVk7Ci0gICAgICAgIGJyZWFrOwotCiAgICAgY2FzZSBNU1Jf
SUEzMl9USEVSTV9DT05UUk9MOgogICAgIGNhc2UgTVNSX0lBMzJfRU5FUkdZ
X1BFUkZfQklBUzoKICAgICAgICAgaWYgKCBib290X2NwdV9kYXRhLng4Nl92
ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCmRpZmYgLS1naXQgYS94ZW4v
aW5jbHVkZS94ZW4vc2NoZWQuaCBiL3hlbi9pbmNsdWRlL3hlbi9zY2hlZC5o
CmluZGV4IGQ2ZTI3ZmM0YjguLjhiYjViZDdiMzggMTAwNjQ0Ci0tLSBhL3hl
bi9pbmNsdWRlL3hlbi9zY2hlZC5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9z
Y2hlZC5oCkBAIC0xMDU3LDYgKzEwNTcsMjIgQEAgZXh0ZXJuIGVudW0gY3B1
ZnJlcV9jb250cm9sbGVyIHsKICAgICBGUkVRQ1RMX25vbmUsIEZSRVFDVExf
ZG9tMF9rZXJuZWwsIEZSRVFDVExfeGVuCiB9IGNwdWZyZXFfY29udHJvbGxl
cjsKIAorc3RhdGljIGFsd2F5c19pbmxpbmUgYm9vbCBpc19jcHVmcmVxX2Nv
bnRyb2xsZXIoY29uc3Qgc3RydWN0IGRvbWFpbiAqZCkKK3sKKyAgICAvKgor
ICAgICAqIEEgUFYgZG9tMCBjYW4gYmUgbm9taW5hdGVkIGFzIHRoZSBjcHVm
cmVxIGNvbnRyb2xsZXIsIGluc3RlYWQgb2YgdXNpbmcKKyAgICAgKiBYZW4n
cyBjcHVmcmVxIGRyaXZlciwgYXQgd2hpY2ggcG9pbnQgZG9tMCBnZXRzIGRp
cmVjdCBhY2Nlc3MgdG8gY2VydGFpbgorICAgICAqIE1TUnMuCisgICAgICoK
KyAgICAgKiBUaGlzIGludGVyZmFjZSBvbmx5IHdvcmtzIHdoZW4gZG9tMCBp
cyBpZGVudGl0eSBwaW5uZWQgYW5kIGhhcyB0aGUgc2FtZQorICAgICAqIG51
bWJlciBvZiB2Q1BVcyBhcyBwQ1BVcyBvbiB0aGUgc3lzdGVtLgorICAgICAq
CisgICAgICogSXQgd291bGQgYmUgZmFyIGJldHRlciB0byBwYXJhdmlydHVh
bGlzZSB0aGUgaW50ZXJmYWNlLgorICAgICAqLworICAgIHJldHVybiAoaXNf
cHZfZG9tYWluKGQpICYmIGlzX2hhcmR3YXJlX2RvbWFpbihkKSAmJgorICAg
ICAgICAgICAgY3B1ZnJlcV9jb250cm9sbGVyID09IEZSRVFDVExfZG9tMF9r
ZXJuZWwpOworfQorCiAjZGVmaW5lIENQVVBPT0xJRF9OT05FICAgIC0xCiAK
IHN0cnVjdCBjcHVwb29sICpjcHVwb29sX2dldF9ieV9pZChpbnQgcG9vbGlk
KTsK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.13-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCA4
Yzk2OTE5N2FhLi44YWI2OTQ5YThlIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xNTIsMTEg
KzE1MiwyMCBAQCBpbnQgZ3Vlc3RfcmRtc3Ioc3RydWN0IHZjcHUgKnYsIHVp
bnQzMl90IG1zciwgdWludDY0X3QgKnZhbCkKICAgICBjYXNlIE1TUl9UU1hf
Q1RSTDoKICAgICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CiAgICAgY2FzZSBN
U1JfUlRJVF9PVVRQVVRfQkFTRSAuLi4gTVNSX1JUSVRfQUREUl9CKDcpOgor
ICAgIGNhc2UgTVNSX1JBUExfUE9XRVJfVU5JVDoKKyAgICBjYXNlIE1TUl9Q
S0dfUE9XRVJfTElNSVQgIC4uLiBNU1JfUEtHX1BPV0VSX0lORk86CisgICAg
Y2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJf
SU5GTzoKKyAgICBjYXNlIE1TUl9QUDBfUE9XRVJfTElNSVQgIC4uLiBNU1Jf
UFAwX1BPTElDWToKKyAgICBjYXNlIE1TUl9QUDFfUE9XRVJfTElNSVQgIC4u
LiBNU1JfUFAxX1BPTElDWToKKyAgICBjYXNlIE1TUl9QTEFURk9STV9FTkVS
R1lfQ09VTlRFUjoKKyAgICBjYXNlIE1TUl9QTEFURk9STV9QT1dFUl9MSU1J
VDoKICAgICBjYXNlIE1TUl9VX0NFVDoKICAgICBjYXNlIE1TUl9TX0NFVDoK
ICAgICBjYXNlIE1TUl9QTDBfU1NQIC4uLiBNU1JfSU5URVJSVVBUX1NTUF9U
QUJMRToKICAgICBjYXNlIE1TUl9BTUQ2NF9MV1BfQ0ZHOgogICAgIGNhc2Ug
TVNSX0FNRDY0X0xXUF9DQkFERFI6CisgICAgY2FzZSBNU1JfRjE1SF9DVV9Q
T1dFUiAuLi4gTVNSX0YxNUhfQ1VfTUFYX1BPV0VSOgorICAgIGNhc2UgTVNS
X0FNRF9SQVBMX1BPV0VSX1VOSVQgLi4uIE1TUl9BTURfUEtHX0VORVJHWV9T
VEFUVVM6CiAgICAgICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8K
ICAgICAgICAgZ290byBncF9mYXVsdDsKIApAQCAtMzMwLDExICszMzksMjAg
QEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBt
c3IsIHVpbnQ2NF90IHZhbCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAg
ICBjYXNlIE1TUl9NQ1VfT1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9P
VVRQVVRfQkFTRSAuLi4gTVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2Ug
TVNSX1JBUExfUE9XRVJfVU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJf
TElNSVQgIC4uLiBNU1JfUEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1Jf
RFJBTV9QT1dFUl9MSU1JVCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAg
ICBjYXNlIE1TUl9QUDBfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElD
WToKKyAgICBjYXNlIE1TUl9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAx
X1BPTElDWToKKyAgICBjYXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRF
UjoKKyAgICBjYXNlIE1TUl9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBj
YXNlIE1TUl9VX0NFVDoKICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNl
IE1TUl9QTDBfU1NQIC4uLiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKICAg
ICBjYXNlIE1TUl9BTUQ2NF9MV1BfQ0ZHOgogICAgIGNhc2UgTVNSX0FNRDY0
X0xXUF9DQkFERFI6CisgICAgY2FzZSBNU1JfRjE1SF9DVV9QT1dFUiAuLi4g
TVNSX0YxNUhfQ1VfTUFYX1BPV0VSOgorICAgIGNhc2UgTVNSX0FNRF9SQVBM
X1BPV0VSX1VOSVQgLi4uIE1TUl9BTURfUEtHX0VORVJHWV9TVEFUVVM6CiAg
ICAgICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8KICAgICAgICAg
Z290byBncF9mYXVsdDsKIApkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNt
LXg4Ni9tc3ItaW5kZXguaCBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWlu
ZGV4LmgKaW5kZXggMGViNjg1NTYxNC4uYmE5ZTkwYWYyMSAxMDA2NDQKLS0t
IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaAorKysgYi94ZW4v
aW5jbHVkZS9hc20teDg2L21zci1pbmRleC5oCkBAIC05Niw2ICs5NiwzOCBA
QAogLyogTG93ZXIgNiBiaXRzIGRlZmluZSB0aGUgZm9ybWF0IG9mIHRoZSBh
ZGRyZXNzIGluIHRoZSBMQlIgc3RhY2sgKi8KICNkZWZpbmUgTVNSX0lBMzJf
UEVSRl9DQVBfTEJSX0ZPUk1BVAkweDNmCiAKKy8qCisgKiBJbnRlbCBSdW50
aW1lIEF2ZXJhZ2UgUG93ZXIgTGltaXRpbmcgKFJBUEwpIGludGVyZmFjZS4g
IFBvd2VyIHBsYW5lIGJhc2UKKyAqIGFkZHJlc3NlcyAoTVNSXypfUE9XRVJf
TElNSVQpIGFyZSBtb2RlbCBzcGVjaWZpYywgYnV0IGhhdmUgc28tZmFyIGJl
ZW4KKyAqIGNvbnNpc3RlbnQgc2luY2UgdGhlaXIgaW50cm9kdWN0aW9uIGlu
IFNhbmR5QnJpZGdlLgorICoKKyAqIE9mZnNldHMgb2YgZnVuY3Rpb25hbGl0
eSBmcm9tIHRoZSBwb3dlciBwbGFuZSBiYXNlIGlzIGFyY2hpdGVjdHVyYWws
IGJ1dAorICogbm90IGFsbCBwb3dlciBwbGFuZXMgc3VwcG9ydCBhbGwgZnVu
Y3Rpb25hbGl0eS4KKyAqLworI2RlZmluZSBNU1JfUkFQTF9QT1dFUl9VTklU
CQkweDAwMDAwNjA2CisKKyNkZWZpbmUgTVNSX1BLR19QT1dFUl9MSU1JVAkJ
MHgwMDAwMDYxMAorI2RlZmluZSBNU1JfUEtHX0VORVJHWV9TVEFUVVMJCTB4
MDAwMDA2MTEKKyNkZWZpbmUgTVNSX1BLR19QRVJGX1NUQVRVUwkJMHgwMDAw
MDYxMworI2RlZmluZSBNU1JfUEtHX1BPV0VSX0lORk8JCTB4MDAwMDA2MTQK
KworI2RlZmluZSBNU1JfRFJBTV9QT1dFUl9MSU1JVAkJMHgwMDAwMDYxOAor
I2RlZmluZSBNU1JfRFJBTV9FTkVSR1lfU1RBVFVTCQkweDAwMDAwNjE5Cisj
ZGVmaW5lIE1TUl9EUkFNX1BFUkZfU1RBVFVTCQkweDAwMDAwNjFiCisjZGVm
aW5lIE1TUl9EUkFNX1BPV0VSX0lORk8JCTB4MDAwMDA2MWMKKworI2RlZmlu
ZSBNU1JfUFAwX1BPV0VSX0xJTUlUCQkweDAwMDAwNjM4CisjZGVmaW5lIE1T
Ul9QUDBfRU5FUkdZX1NUQVRVUwkJMHgwMDAwMDYzOQorI2RlZmluZSBNU1Jf
UFAwX1BPTElDWQkJCTB4MDAwMDA2M2EKKworI2RlZmluZSBNU1JfUFAxX1BP
V0VSX0xJTUlUCQkweDAwMDAwNjQwCisjZGVmaW5lIE1TUl9QUDFfRU5FUkdZ
X1NUQVRVUwkJMHgwMDAwMDY0MQorI2RlZmluZSBNU1JfUFAxX1BPTElDWQkJ
CTB4MDAwMDA2NDIKKworLyogSW50ZWwgUGxhdGZvcm0td2lkZSBwb3dlciBp
bnRlcmZhY2UuICovCisjZGVmaW5lIE1TUl9QTEFURk9STV9FTkVSR1lfQ09V
TlRFUgkweDAwMDAwNjRkCisjZGVmaW5lIE1TUl9QTEFURk9STV9QT1dFUl9M
SU1JVAkweDAwMDAwNjVjCisKICNkZWZpbmUgTVNSX0lBMzJfQk5EQ0ZHUwkJ
MHgwMDAwMGQ5MAogI2RlZmluZSBJQTMyX0JORENGR1NfRU5BQkxFCQkweDAw
MDAwMDAxCiAjZGVmaW5lIElBMzJfQk5EQ0ZHU19QUkVTRVJWRQkJMHgwMDAw
MDAwMgpAQCAtMjM2LDYgKzI2OCw4IEBACiAjZGVmaW5lIE1TUl9LOF9WTV9D
UgkJCTB4YzAwMTAxMTQKICNkZWZpbmUgTVNSX0s4X1ZNX0hTQVZFX1BBCQkw
eGMwMDEwMTE3CiAKKyNkZWZpbmUgTVNSX0YxNUhfQ1VfUE9XRVIJCTB4YzAw
MTAwN2EKKyNkZWZpbmUgTVNSX0YxNUhfQ1VfTUFYX1BPV0VSCQkweGMwMDEw
MDdiCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX0VWTlRTRUwwCQkweGMwMDEw
MjAwCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX1BFUkZDVFIwCQkweGMwMDEw
MjAxCiAjZGVmaW5lIE1TUl9BTURfRkFNMTVIX0VWTlRTRUwxCQkweGMwMDEw
MjAyCkBAIC0yNDksNiArMjgzLDEwIEBACiAjZGVmaW5lIE1TUl9BTURfRkFN
MTVIX0VWTlRTRUw1CQkweGMwMDEwMjBhCiAjZGVmaW5lIE1TUl9BTURfRkFN
MTVIX1BFUkZDVFI1CQkweGMwMDEwMjBiCiAKKyNkZWZpbmUgTVNSX0FNRF9S
QVBMX1BPV0VSX1VOSVQJCTB4YzAwMTAyOTkKKyNkZWZpbmUgTVNSX0FNRF9D
T1JFX0VORVJHWV9TVEFUVVMJMHhjMDAxMDI5YQorI2RlZmluZSBNU1JfQU1E
X1BLR19FTkVSR1lfU1RBVFVTCTB4YzAwMTAyOWIKKwogI2RlZmluZSBNU1Jf
QU1EX0w3UzBfRkVBVFVSRV9NQVNLCTB4YzAwMTEwMDIKICNkZWZpbmUgTVNS
X0FNRF9USFJNX0ZFQVRVUkVfTUFTSwkweGMwMDExMDAzCiAjZGVmaW5lIE1T
Ul9LOF9GRUFUVVJFX01BU0sJCTB4YzAwMTEwMDQK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.14-1.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogPT9VVEYtOD9xP1JvZ2VyPTIwUGF1PTIwTW9ubj1DMz1BOT89IDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KU3ViamVjdDogeDg2L21zcjogZml4IGhh
bmRsaW5nIG9mIE1TUl9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9Ck1JTUUtVmVy
c2lvbjogMS4wCkNvbnRlbnQtVHlwZTogdGV4dC9wbGFpbjsgY2hhcnNldD1V
VEYtOApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0CgpDdXJyZW50
bHkgYSBQViBoYXJkd2FyZSBkb21haW4gY2FuIGFsc28gYmUgZ2l2ZW4gY29u
dHJvbCBvdmVyIHRoZSBDUFUKZnJlcXVlbmN5LCBhbmQgc3VjaCBndWVzdCBp
cyBhbGxvd2VkIHRvIHdyaXRlIHRvIE1TUl9JQTMyX1BFUkZfQ1RMLgpIb3dl
dmVyIHNpbmNlIGNvbW1pdCAzMjJlYzdjODlmNiB0aGUgZGVmYXVsdCBiZWhh
dmlvciBoYXMgYmVlbiBjaGFuZ2VkCnRvIHJlamVjdCBhY2Nlc3NlcyB0byBu
b3QgZXhwbGljaXRseSBoYW5kbGVkIE1TUnMsIHByZXZlbnRpbmcgUFYKZ3Vl
c3RzIHRoYXQgbWFuYWdlIENQVSBmcmVxdWVuY3kgZnJvbSByZWFkaW5nCk1T
Ul9JQTMyX1BFUkZfe1NUQVRVUy9DVEx9LgoKQWRkaXRpb25hbGx5IHNvbWUg
SFZNIGd1ZXN0cyAoV2luZG93cyBhdCBsZWFzdCkgd2lsbCBhdHRlbXB0IHRv
IHJlYWQKTVNSX0lBMzJfUEVSRl9DVEwgYW5kIHdpbGwgcGFuaWMgaWYgZ2l2
ZW4gYmFjayBhICNHUCBmYXVsdDoKCiAgdm14LmM6MzAzNTpkOHYwIFJETVNS
IDB4MDAwMDAxOTkgdW5pbXBsZW1lbnRlZAogIGQ4djAgVklSSURJQU4gQ1JB
U0g6IDNiIGMwMDAwMDk2IGZmZmZmODA2ODcxYzE2NTEgZmZmZmRhMDI1MzY4
MzcyMCAwCgpNb3ZlIHRoZSBoYW5kbGluZyBvZiBNU1JfSUEzMl9QRVJGX3tT
VEFUVVMvQ1RMfSB0byB0aGUgY29tbW9uIE1TUgpoYW5kbGluZyBzaGFyZWQg
YmV0d2VlbiBIVk0gYW5kIFBWIGd1ZXN0cywgYW5kIGFkZCBhbiBleHBsaWNp
dCBjYXNlCmZvciByZWFkcyB0byBNU1JfSUEzMl9QRVJGX3tTVEFUVVMvQ1RM
fS4KClJlc3RvcmUgcHJldmlvdXMgYmVoYXZpb3IgYW5kIGFsbG93IFBWIGd1
ZXN0cyB3aXRoIHRoZSByZXF1aXJlZApwZXJtaXNzaW9ucyB0byByZWFkIHRo
ZSBjb250ZW50cyBvZiB0aGUgbWVudGlvbmVkIE1TUnMuIE5vbiBwcml2aWxl
Z2VkCmd1ZXN0cyB3aWxsIGdldCAwIHdoZW4gdHJ5aW5nIHRvIHJlYWQgdGhv
c2UgcmVnaXN0ZXJzLCBhcyB3cml0ZXMgdG8KTVNSX0lBMzJfUEVSRl9DVEwg
Ynkgc3VjaCBndWVzdCB3aWxsIGFscmVhZHkgYmUgc2lsZW50bHkgZHJvcHBl
ZC4KCkZpeGVzOiAzMjJlYzdjODlmNiAoJ3g4Ni9wdjogZGlzYWxsb3cgYWNj
ZXNzIHRvIHVua25vd24gTVNScycpCkZpeGVzOiA4NGU4NDhmZDdhMSAoJ3g4
Ni9odm06IGRpc2FsbG93IGFjY2VzcyB0byB1bmtub3duIE1TUnMnKQpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBNb25u
w6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgooY2hlcnJ5IHBpY2tlZCBmcm9t
IGNvbW1pdCAzMDU5MTc4Nzk4YTIzYmE4NzBmZjg2ZmY1NGQ0NDJhMDdlNjY1
MWZjKQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9h
cmNoL3g4Ni9tc3IuYwppbmRleCBkNzJhYjBmYTFmLi4zZGIyNmZhZjA4IDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbXNyLmMKKysrIGIveGVuL2FyY2gv
eDg2L21zci5jCkBAIC0yNDUsNiArMjQ1LDI1IEBAIGludCBndWVzdF9yZG1z
cihzdHJ1Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFs
KQogICAgICAgICAqdmFsID0gbXNycy0+bWlzY19mZWF0dXJlc19lbmFibGVz
LnJhdzsKICAgICAgICAgYnJlYWs7CiAKKyAgICAgICAgLyoKKyAgICAgICAg
ICogVGhlc2UgTVNScyBhcmUgbm90IGVudW1lcmF0ZWQgaW4gQ1BVSUQuICBU
aGV5IGhhdmUgYmVlbiBhcm91bmQKKyAgICAgICAgICogc2luY2UgdGhlIFBl
bnRpdW0gNCwgYW5kIGltcGxlbWVudGVkIGJ5IG90aGVyIHZlbmRvcnMuCisg
ICAgICAgICAqCisgICAgICAgICAqIFNvbWUgdmVyc2lvbnMgb2YgV2luZG93
cyB0cnkgcmVhZGluZyB0aGVzZSBiZWZvcmUgc2V0dGluZyB1cCBhICNHUAor
ICAgICAgICAgKiBoYW5kbGVyLCBhbmQgTGludXggaGFzIHNldmVyYWwgdW5n
dWFyZGVkIHJlYWRzIGFzIHdlbGwuICBQcm92aWRlCisgICAgICAgICAqIFJB
WiBzZW1hbnRpY3MsIGluIGdlbmVyYWwsIGJ1dCBwZXJtaXQgYSBjcHVmcmVx
IGNvbnRyb2xsZXIgZG9tMCB0bworICAgICAgICAgKiBoYXZlIGZ1bGwgYWNj
ZXNzLgorICAgICAgICAgKi8KKyAgICBjYXNlIE1TUl9JQTMyX1BFUkZfU1RB
VFVTOgorICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlm
ICggIShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2
X1ZFTkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKKworICAgICAgICAqdmFsID0gMDsKKyAgICAgICAgaWYgKCBsaWtlbHko
IWlzX2NwdWZyZXFfY29udHJvbGxlcihkKSkgfHwgcmRtc3Jfc2FmZShtc3Is
ICp2YWwpID09IDAgKQorICAgICAgICAgICAgYnJlYWs7CisgICAgICAgIGdv
dG8gZ3BfZmF1bHQ7CisKICAgICBjYXNlIE1TUl9YMkFQSUNfRklSU1QgLi4u
IE1TUl9YMkFQSUNfTEFTVDoKICAgICAgICAgaWYgKCAhaXNfaHZtX2RvbWFp
bihkKSB8fCB2ICE9IGN1cnIgKQogICAgICAgICAgICAgZ290byBncF9mYXVs
dDsKQEAgLTM0Myw2ICszNjIsNyBAQCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0
IHZjcHUgKnYsIHVpbnQzMl90IG1zciwgdWludDY0X3QgdmFsKQogICAgIGNh
c2UgTVNSX0lOVEVMX0NPUkVfVEhSRUFEX0NPVU5UOgogICAgIGNhc2UgTVNS
X0lOVEVMX1BMQVRGT1JNX0lORk86CiAgICAgY2FzZSBNU1JfQVJDSF9DQVBB
QklMSVRJRVM6CisgICAgY2FzZSBNU1JfSUEzMl9QRVJGX1NUQVRVUzoKICAg
ICAgICAgLyogUmVhZC1vbmx5ICovCiAgICAgY2FzZSBNU1JfVEVTVF9DVFJM
OgogICAgIGNhc2UgTVNSX1RTWF9GT1JDRV9BQk9SVDoKQEAgLTQ1NCw2ICs0
NzQsMjEgQEAgaW50IGd1ZXN0X3dybXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50
MzJfdCBtc3IsIHVpbnQ2NF90IHZhbCkKICAgICAgICAgYnJlYWs7CiAgICAg
fQogCisgICAgICAgIC8qCisgICAgICAgICAqIFRoaXMgTVNSIGlzIG5vdCBl
bnVtZXJhdGVkIGluIENQVUlELiAgSXQgaGFzIGJlZW4gYXJvdW5kIHNpbmNl
IHRoZQorICAgICAgICAgKiBQZW50aXVtIDQsIGFuZCBpbXBsZW1lbnRlZCBi
eSBvdGhlciB2ZW5kb3JzLgorICAgICAgICAgKgorICAgICAgICAgKiBUbyBt
YXRjaCB0aGUgUkFaIHNlbWFudGljcywgaW1wbGVtZW50IGFzIHdyaXRlLWRp
c2NhcmQsIGV4Y2VwdCBmb3IKKyAgICAgICAgICogYSBjcHVmcmVxIGNvbnRy
b2xsZXIgZG9tMCB3aGljaCBoYXMgZnVsbCBhY2Nlc3MuCisgICAgICAgICAq
LworICAgIGNhc2UgTVNSX0lBMzJfUEVSRl9DVEw6CisgICAgICAgIGlmICgg
IShjcC0+eDg2X3ZlbmRvciAmIChYODZfVkVORE9SX0lOVEVMIHwgWDg2X1ZF
TkRPUl9DRU5UQVVSKSkgKQorICAgICAgICAgICAgZ290byBncF9mYXVsdDsK
KworICAgICAgICBpZiAoIGxpa2VseSghaXNfY3B1ZnJlcV9jb250cm9sbGVy
KGQpKSB8fCB3cm1zcl9zYWZlKG1zciwgdmFsKSA9PSAwICkKKyAgICAgICAg
ICAgIGJyZWFrOworICAgICAgICBnb3RvIGdwX2ZhdWx0OworCiAgICAgY2Fz
ZSBNU1JfWDJBUElDX0ZJUlNUIC4uLiBNU1JfWDJBUElDX0xBU1Q6CiAgICAg
ICAgIGlmICggIWlzX2h2bV9kb21haW4oZCkgfHwgdiAhPSBjdXJyICkKICAg
ICAgICAgICAgIGdvdG8gZ3BfZmF1bHQ7CmRpZmYgLS1naXQgYS94ZW4vYXJj
aC94ODYvcHYvZW11bC1wcml2LW9wLmMgYi94ZW4vYXJjaC94ODYvcHYvZW11
bC1wcml2LW9wLmMKaW5kZXggODVhOWZkNDc2Ny4uNWM3YjkxMTdhZSAxMDA2
NDQKLS0tIGEveGVuL2FyY2gveDg2L3B2L2VtdWwtcHJpdi1vcC5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9wdi9lbXVsLXByaXYtb3AuYwpAQCAtODIwLDEyICs4
MjAsNiBAQCBzdGF0aWMgaW5saW5lIHVpbnQ2NF90IGd1ZXN0X21pc2NfZW5h
YmxlKHVpbnQ2NF90IHZhbCkKICAgICByZXR1cm4gdmFsOwogfQogCi1zdGF0
aWMgaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9jb250cm9sbGVyKGNvbnN0IHN0
cnVjdCBkb21haW4gKmQpCi17Ci0gICAgcmV0dXJuICgoY3B1ZnJlcV9jb250
cm9sbGVyID09IEZSRVFDVExfZG9tMF9rZXJuZWwpICYmCi0gICAgICAgICAg
ICBpc19oYXJkd2FyZV9kb21haW4oZCkpOwotfQotCiBzdGF0aWMgaW50IHJl
YWRfbXNyKHVuc2lnbmVkIGludCByZWcsIHVpbnQ2NF90ICp2YWwsCiAgICAg
ICAgICAgICAgICAgICAgIHN0cnVjdCB4ODZfZW11bGF0ZV9jdHh0ICpjdHh0
KQogewpAQCAtMTA3MCwxNCArMTA2NCw2IEBAIHN0YXRpYyBpbnQgd3JpdGVf
bXNyKHVuc2lnbmVkIGludCByZWcsIHVpbnQ2NF90IHZhbCwKICAgICAgICAg
ICAgIHJldHVybiBYODZFTVVMX09LQVk7CiAgICAgICAgIGJyZWFrOwogCi0g
ICAgY2FzZSBNU1JfSUEzMl9QRVJGX0NUTDoKLSAgICAgICAgaWYgKCBib290
X2NwdV9kYXRhLng4Nl92ZW5kb3IgIT0gWDg2X1ZFTkRPUl9JTlRFTCApCi0g
ICAgICAgICAgICBicmVhazsKLSAgICAgICAgaWYgKCBsaWtlbHkoIWlzX2Nw
dWZyZXFfY29udHJvbGxlcihjdXJyZCkpIHx8Ci0gICAgICAgICAgICAgd3Jt
c3Jfc2FmZShyZWcsIHZhbCkgPT0gMCApCi0gICAgICAgICAgICByZXR1cm4g
WDg2RU1VTF9PS0FZOwotICAgICAgICBicmVhazsKLQogICAgIGNhc2UgTVNS
X0lBMzJfVEhFUk1fQ09OVFJPTDoKICAgICBjYXNlIE1TUl9JQTMyX0VORVJH
WV9QRVJGX0JJQVM6CiAgICAgICAgIGlmICggYm9vdF9jcHVfZGF0YS54ODZf
dmVuZG9yICE9IFg4Nl9WRU5ET1JfSU5URUwgKQpkaWZmIC0tZ2l0IGEveGVu
L2luY2x1ZGUveGVuL3NjaGVkLmggYi94ZW4vaW5jbHVkZS94ZW4vc2NoZWQu
aAppbmRleCBhMGQ4N2VmOWQwLi45N2JhOGUwNzk1IDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4v
c2NoZWQuaApAQCAtMTA3MSw2ICsxMDcxLDIyIEBAIGV4dGVybiBlbnVtIGNw
dWZyZXFfY29udHJvbGxlciB7CiAgICAgRlJFUUNUTF9ub25lLCBGUkVRQ1RM
X2RvbTBfa2VybmVsLCBGUkVRQ1RMX3hlbgogfSBjcHVmcmVxX2NvbnRyb2xs
ZXI7CiAKK3N0YXRpYyBhbHdheXNfaW5saW5lIGJvb2wgaXNfY3B1ZnJlcV9j
b250cm9sbGVyKGNvbnN0IHN0cnVjdCBkb21haW4gKmQpCit7CisgICAgLyoK
KyAgICAgKiBBIFBWIGRvbTAgY2FuIGJlIG5vbWluYXRlZCBhcyB0aGUgY3B1
ZnJlcSBjb250cm9sbGVyLCBpbnN0ZWFkIG9mIHVzaW5nCisgICAgICogWGVu
J3MgY3B1ZnJlcSBkcml2ZXIsIGF0IHdoaWNoIHBvaW50IGRvbTAgZ2V0cyBk
aXJlY3QgYWNjZXNzIHRvIGNlcnRhaW4KKyAgICAgKiBNU1JzLgorICAgICAq
CisgICAgICogVGhpcyBpbnRlcmZhY2Ugb25seSB3b3JrcyB3aGVuIGRvbTAg
aXMgaWRlbnRpdHkgcGlubmVkIGFuZCBoYXMgdGhlIHNhbWUKKyAgICAgKiBu
dW1iZXIgb2YgdkNQVXMgYXMgcENQVXMgb24gdGhlIHN5c3RlbS4KKyAgICAg
KgorICAgICAqIEl0IHdvdWxkIGJlIGZhciBiZXR0ZXIgdG8gcGFyYXZpcnR1
YWxpc2UgdGhlIGludGVyZmFjZS4KKyAgICAgKi8KKyAgICByZXR1cm4gKGlz
X3B2X2RvbWFpbihkKSAmJiBpc19oYXJkd2FyZV9kb21haW4oZCkgJiYKKyAg
ICAgICAgICAgIGNwdWZyZXFfY29udHJvbGxlciA9PSBGUkVRQ1RMX2RvbTBf
a2VybmVsKTsKK30KKwogaW50IGNwdXBvb2xfbW92ZV9kb21haW4oc3RydWN0
IGRvbWFpbiAqZCwgc3RydWN0IGNwdXBvb2wgKmMpOwogaW50IGNwdXBvb2xf
ZG9fc3lzY3RsKHN0cnVjdCB4ZW5fc3lzY3RsX2NwdXBvb2xfb3AgKm9wKTsK
IGludCBjcHVwb29sX2dldF9pZChjb25zdCBzdHJ1Y3QgZG9tYWluICpkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa351-x86-4.14-2.patch"
Content-Disposition: attachment; filename="xsa351-x86-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KU3ViamVjdDogeDg2L21zcjogRGlzYWxsb3cgZ3Vlc3QgYWNjZXNzIHRv
IHRoZSBSQVBMIE1TUnMKClJlc2VhcmNoZXJzIGhhdmUgZGVtb25zdHJhdGVk
IHVzaW5nIHRoZSBSQVBMIGludGVyZmFjZSB0byBwZXJmb3JtIGEKZGlmZmVy
ZW50aWFsIHBvd2VyIGFuYWx5c2lzIGF0dGFjayB0byByZWNvdmVyIEFFUyBr
ZXlzIHVzZWQgYnkgb3RoZXIgY29yZXMgaW4KdGhlIHN5c3RlbS4KCkZ1cnRo
ZXJtb3JlLCBldmVuIHByaXZpbGVnZWQgZ3Vlc3RzIGNhbm5vdCB1c2UgdGhp
cyBpbnRlcmZhY2UgY29ycmVjdGx5LCBkdWUKdG8gTVNSIHNjb3BlIGFuZCB2
Y3B1IHNjaGVkdWxpbmcgaXNzdWVzLiAgVGhlIGludGVyZmFjZSB3b3VsZCB3
YW50IHRvIGJlCnBhcmF2aXJ0dWFsaXNlZCB0byBiZSB1c2VkIHNlbnNpYmx5
LgoKRGlzYWxsb3cgYWNjZXNzIHRvIHRoZSBSQVBMIE1TUnMgY29tcGxldGVs
eSwgYXMgd2VsbCBhcyBvdGhlciBNU1JzIHdoaWNoCnBvdGVudGlhbGx5IGFj
Y2VzcyBmaW5lIGdyYWluIHBvd2VyIGluZm9ybWF0aW9uLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zNTEuCgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVy
IDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFu
IEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKZGlmZiAtLWdpdCBhL3hl
bi9hcmNoL3g4Ni9tc3IuYyBiL3hlbi9hcmNoL3g4Ni9tc3IuYwppbmRleCAz
ZGIyNmZhZjA4Li5hYTEwNzgyM2FjIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYvbXNyLmMKKysrIGIveGVuL2FyY2gveDg2L21zci5jCkBAIC0xODUsNiAr
MTg1LDEzIEBAIGludCBndWVzdF9yZG1zcihzdHJ1Y3QgdmNwdSAqdiwgdWlu
dDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAgIGNhc2UgTVNSX1RTWF9D
VFJMOgogICAgIGNhc2UgTVNSX01DVV9PUFRfQ1RSTDoKICAgICBjYXNlIE1T
Ul9SVElUX09VVFBVVF9CQVNFIC4uLiBNU1JfUlRJVF9BRERSX0IoNyk6Cisg
ICAgY2FzZSBNU1JfUkFQTF9QT1dFUl9VTklUOgorICAgIGNhc2UgTVNSX1BL
R19QT1dFUl9MSU1JVCAgLi4uIE1TUl9QS0dfUE9XRVJfSU5GTzoKKyAgICBj
YXNlIE1TUl9EUkFNX1BPV0VSX0xJTUlUIC4uLiBNU1JfRFJBTV9QT1dFUl9J
TkZPOgorICAgIGNhc2UgTVNSX1BQMF9QT1dFUl9MSU1JVCAgLi4uIE1TUl9Q
UDBfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BQMV9QT1dFUl9MSU1JVCAgLi4u
IE1TUl9QUDFfUE9MSUNZOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX0VORVJH
WV9DT1VOVEVSOgorICAgIGNhc2UgTVNSX1BMQVRGT1JNX1BPV0VSX0xJTUlU
OgogICAgIGNhc2UgTVNSX1VfQ0VUOgogICAgIGNhc2UgTVNSX1NfQ0VUOgog
ICAgIGNhc2UgTVNSX1BMMF9TU1AgLi4uIE1TUl9JTlRFUlJVUFRfU1NQX1RB
QkxFOgpAQCAtMTkyLDYgKzE5OSw4IEBAIGludCBndWVzdF9yZG1zcihzdHJ1
Y3QgdmNwdSAqdiwgdWludDMyX3QgbXNyLCB1aW50NjRfdCAqdmFsKQogICAg
IGNhc2UgTVNSX0FNRDY0X0xXUF9DQkFERFI6CiAgICAgY2FzZSBNU1JfUFBJ
Tl9DVEw6CiAgICAgY2FzZSBNU1JfUFBJTjoKKyAgICBjYXNlIE1TUl9GMTVI
X0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9DVV9NQVhfUE9XRVI6CisgICAgY2Fz
ZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAuLi4gTVNSX0FNRF9QS0dfRU5F
UkdZX1NUQVRVUzoKICAgICBjYXNlIE1TUl9BTURfUFBJTl9DVEw6CiAgICAg
Y2FzZSBNU1JfQU1EX1BQSU46CiAgICAgICAgIC8qIE5vdCBvZmZlcmVkIHRv
IGd1ZXN0cy4gKi8KQEAgLTM2OSw2ICszNzgsMTMgQEAgaW50IGd1ZXN0X3dy
bXNyKHN0cnVjdCB2Y3B1ICp2LCB1aW50MzJfdCBtc3IsIHVpbnQ2NF90IHZh
bCkKICAgICBjYXNlIE1TUl9UU1hfQ1RSTDoKICAgICBjYXNlIE1TUl9NQ1Vf
T1BUX0NUUkw6CiAgICAgY2FzZSBNU1JfUlRJVF9PVVRQVVRfQkFTRSAuLi4g
TVNSX1JUSVRfQUREUl9CKDcpOgorICAgIGNhc2UgTVNSX1JBUExfUE9XRVJf
VU5JVDoKKyAgICBjYXNlIE1TUl9QS0dfUE9XRVJfTElNSVQgIC4uLiBNU1Jf
UEtHX1BPV0VSX0lORk86CisgICAgY2FzZSBNU1JfRFJBTV9QT1dFUl9MSU1J
VCAuLi4gTVNSX0RSQU1fUE9XRVJfSU5GTzoKKyAgICBjYXNlIE1TUl9QUDBf
UE9XRVJfTElNSVQgIC4uLiBNU1JfUFAwX1BPTElDWToKKyAgICBjYXNlIE1T
Ul9QUDFfUE9XRVJfTElNSVQgIC4uLiBNU1JfUFAxX1BPTElDWToKKyAgICBj
YXNlIE1TUl9QTEFURk9STV9FTkVSR1lfQ09VTlRFUjoKKyAgICBjYXNlIE1T
Ul9QTEFURk9STV9QT1dFUl9MSU1JVDoKICAgICBjYXNlIE1TUl9VX0NFVDoK
ICAgICBjYXNlIE1TUl9TX0NFVDoKICAgICBjYXNlIE1TUl9QTDBfU1NQIC4u
LiBNU1JfSU5URVJSVVBUX1NTUF9UQUJMRToKQEAgLTM3Niw2ICszOTIsOCBA
QCBpbnQgZ3Vlc3Rfd3Jtc3Ioc3RydWN0IHZjcHUgKnYsIHVpbnQzMl90IG1z
ciwgdWludDY0X3QgdmFsKQogICAgIGNhc2UgTVNSX0FNRDY0X0xXUF9DQkFE
RFI6CiAgICAgY2FzZSBNU1JfUFBJTl9DVEw6CiAgICAgY2FzZSBNU1JfUFBJ
TjoKKyAgICBjYXNlIE1TUl9GMTVIX0NVX1BPV0VSIC4uLiBNU1JfRjE1SF9D
VV9NQVhfUE9XRVI6CisgICAgY2FzZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5J
VCAuLi4gTVNSX0FNRF9QS0dfRU5FUkdZX1NUQVRVUzoKICAgICBjYXNlIE1T
Ul9BTURfUFBJTl9DVEw6CiAgICAgY2FzZSBNU1JfQU1EX1BQSU46CiAgICAg
ICAgIC8qIE5vdCBvZmZlcmVkIHRvIGd1ZXN0cy4gKi8KZGlmZiAtLWdpdCBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmggYi94ZW4vaW5jbHVk
ZS9hc20teDg2L21zci1pbmRleC5oCmluZGV4IDBmZTk4YWY5MjMuLjVlNjRl
Y2ZmOTEgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWlu
ZGV4LmgKKysrIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaApA
QCAtNzcsNiArNzcsMzggQEAKICNkZWZpbmUgTVNSX1JUSVRfQUREUl9BKG4p
ICAgICAgICAgICAgICAgICAoMHgwMDAwMDU4MCArIChuKSAqIDIpCiAjZGVm
aW5lIE1TUl9SVElUX0FERFJfQihuKSAgICAgICAgICAgICAgICAgKDB4MDAw
MDA1ODEgKyAobikgKiAyKQogCisvKgorICogSW50ZWwgUnVudGltZSBBdmVy
YWdlIFBvd2VyIExpbWl0aW5nIChSQVBMKSBpbnRlcmZhY2UuICBQb3dlciBw
bGFuZSBiYXNlCisgKiBhZGRyZXNzZXMgKE1TUl8qX1BPV0VSX0xJTUlUKSBh
cmUgbW9kZWwgc3BlY2lmaWMsIGJ1dCBoYXZlIHNvLWZhciBiZWVuCisgKiBj
b25zaXN0ZW50IHNpbmNlIHRoZWlyIGludHJvZHVjdGlvbiBpbiBTYW5keUJy
aWRnZS4KKyAqCisgKiBPZmZzZXRzIG9mIGZ1bmN0aW9uYWxpdHkgZnJvbSB0
aGUgcG93ZXIgcGxhbmUgYmFzZSBpcyBhcmNoaXRlY3R1cmFsLCBidXQKKyAq
IG5vdCBhbGwgcG93ZXIgcGxhbmVzIHN1cHBvcnQgYWxsIGZ1bmN0aW9uYWxp
dHkuCisgKi8KKyNkZWZpbmUgTVNSX1JBUExfUE9XRVJfVU5JVCAgICAgICAg
ICAgICAgICAgMHgwMDAwMDYwNgorCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJf
TElNSVQgICAgICAgICAgICAgICAgIDB4MDAwMDA2MTAKKyNkZWZpbmUgTVNS
X1BLR19FTkVSR1lfU1RBVFVTICAgICAgICAgICAgICAgMHgwMDAwMDYxMQor
I2RlZmluZSBNU1JfUEtHX1BFUkZfU1RBVFVTICAgICAgICAgICAgICAgICAw
eDAwMDAwNjEzCisjZGVmaW5lIE1TUl9QS0dfUE9XRVJfSU5GTyAgICAgICAg
ICAgICAgICAgIDB4MDAwMDA2MTQKKworI2RlZmluZSBNU1JfRFJBTV9QT1dF
Ul9MSU1JVCAgICAgICAgICAgICAgICAweDAwMDAwNjE4CisjZGVmaW5lIE1T
Ul9EUkFNX0VORVJHWV9TVEFUVVMgICAgICAgICAgICAgIDB4MDAwMDA2MTkK
KyNkZWZpbmUgTVNSX0RSQU1fUEVSRl9TVEFUVVMgICAgICAgICAgICAgICAg
MHgwMDAwMDYxYgorI2RlZmluZSBNU1JfRFJBTV9QT1dFUl9JTkZPICAgICAg
ICAgICAgICAgICAweDAwMDAwNjFjCisKKyNkZWZpbmUgTVNSX1BQMF9QT1dF
Ul9MSU1JVCAgICAgICAgICAgICAgICAgMHgwMDAwMDYzOAorI2RlZmluZSBN
U1JfUFAwX0VORVJHWV9TVEFUVVMgICAgICAgICAgICAgICAweDAwMDAwNjM5
CisjZGVmaW5lIE1TUl9QUDBfUE9MSUNZICAgICAgICAgICAgICAgICAgICAg
IDB4MDAwMDA2M2EKKworI2RlZmluZSBNU1JfUFAxX1BPV0VSX0xJTUlUICAg
ICAgICAgICAgICAgICAweDAwMDAwNjQwCisjZGVmaW5lIE1TUl9QUDFfRU5F
UkdZX1NUQVRVUyAgICAgICAgICAgICAgIDB4MDAwMDA2NDEKKyNkZWZpbmUg
TVNSX1BQMV9QT0xJQ1kgICAgICAgICAgICAgICAgICAgICAgMHgwMDAwMDY0
MgorCisvKiBJbnRlbCBQbGF0Zm9ybS13aWRlIHBvd2VyIGludGVyZmFjZS4g
Ki8KKyNkZWZpbmUgTVNSX1BMQVRGT1JNX0VORVJHWV9DT1VOVEVSICAgICAg
ICAgMHgwMDAwMDY0ZAorI2RlZmluZSBNU1JfUExBVEZPUk1fUE9XRVJfTElN
SVQgICAgICAgICAgICAweDAwMDAwNjVjCisKICNkZWZpbmUgTVNSX1VfQ0VU
ICAgICAgICAgICAgICAgICAgICAgICAgICAgMHgwMDAwMDZhMAogI2RlZmlu
ZSBNU1JfU19DRVQgICAgICAgICAgICAgICAgICAgICAgICAgICAweDAwMDAw
NmEyCiAjZGVmaW5lICBDRVRfU0hTVEtfRU4gICAgICAgICAgICAgICAgICAg
ICAgIChfQUMoMSwgVUxMKSA8PCAgMCkKQEAgLTkyLDYgKzEyNCwxMyBAQAog
I2RlZmluZSAgUEFTSURfUEFTSURfTUFTSyAgICAgICAgICAgICAgICAgICAw
eDAwMGZmZmZmCiAjZGVmaW5lICBQQVNJRF9WQUxJRCAgICAgICAgICAgICAg
ICAgICAgICAgIChfQUMoMSwgVUxMKSA8PCAzMSkKIAorI2RlZmluZSBNU1Jf
RjE1SF9DVV9QT1dFUiAgICAgICAgICAgICAgICAgICAweGMwMDEwMDdhCisj
ZGVmaW5lIE1TUl9GMTVIX0NVX01BWF9QT1dFUiAgICAgICAgICAgICAgIDB4
YzAwMTAwN2IKKworI2RlZmluZSBNU1JfQU1EX1JBUExfUE9XRVJfVU5JVCAg
ICAgICAgICAgICAweGMwMDEwMjk5CisjZGVmaW5lIE1TUl9BTURfQ09SRV9F
TkVSR1lfU1RBVFVTICAgICAgICAgIDB4YzAwMTAyOWEKKyNkZWZpbmUgTVNS
X0FNRF9QS0dfRU5FUkdZX1NUQVRVUyAgICAgICAgICAgMHhjMDAxMDI5Ygor
CiAvKgogICogTGVnYWN5IE1TUiBjb25zdGFudHMgaW4gbmVlZCBvZiBjbGVh
bnVwLiAgTm8gbmV3IE1TUnMgYmVsb3cgdGhpcyBjb21tZW50LgogICovCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38861.71611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfK-0004Pu-L2; Thu, 26 Nov 2020 17:02:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38861.71611; Thu, 26 Nov 2020 17:02:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfK-0004Pn-Fa; Thu, 26 Nov 2020 17:02:42 +0000
Received: by outflank-mailman (input) for mailman id 38861;
 Thu, 26 Nov 2020 17:02:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfJ-0004PR-Nx
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:41 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e14a2d94-076a-437a-bac9-b95b0380f47c;
 Thu, 26 Nov 2020 17:02:33 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AA71831B;
 Thu, 26 Nov 2020 09:02:33 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5C7F13F23F;
 Thu, 26 Nov 2020 09:02:32 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 1/8] xen/arm: Import the SMMUv3 driver from Linux
Date: Thu, 26 Nov 2020 17:02:00 +0000
Message-Id: <0967bb590eb1ea4bb040e064e7c5c1bb90ef2a21.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

Based on tag Linux 5.9.8 commit 951cbbc386ff01b50da4f46387e994e81d9ab431

It is a verbatim copy of the Linux SMMUv3 driver. Xen-specific code has not
been added yet, and the code has not been compiled.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 4164 +++++++++++++++++++++++++
 1 file changed, 4164 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
new file mode 100644
index 0000000000..c192544e87
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -0,0 +1,4164 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IOMMU API for ARM architected SMMUv3 implementations.
+ *
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This driver is powered by bad coffee and bombay mix.
+ */
+
+#include <linux/acpi.h>
+#include <linux/acpi_iort.h>
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/crash_dump.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io-pgtable.h>
+#include <linux/iommu.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/pci-ats.h>
+#include <linux/platform_device.h>
+
+#include <linux/amba/bus.h>
+
+/* MMIO registers */
+#define ARM_SMMU_IDR0			0x0
+#define IDR0_ST_LVL			GENMASK(28, 27)
+#define IDR0_ST_LVL_2LVL		1
+#define IDR0_STALL_MODEL		GENMASK(25, 24)
+#define IDR0_STALL_MODEL_STALL		0
+#define IDR0_STALL_MODEL_FORCE		2
+#define IDR0_TTENDIAN			GENMASK(22, 21)
+#define IDR0_TTENDIAN_MIXED		0
+#define IDR0_TTENDIAN_LE		2
+#define IDR0_TTENDIAN_BE		3
+#define IDR0_CD2L			(1 << 19)
+#define IDR0_VMID16			(1 << 18)
+#define IDR0_PRI			(1 << 16)
+#define IDR0_SEV			(1 << 14)
+#define IDR0_MSI			(1 << 13)
+#define IDR0_ASID16			(1 << 12)
+#define IDR0_ATS			(1 << 10)
+#define IDR0_HYP			(1 << 9)
+#define IDR0_COHACC			(1 << 4)
+#define IDR0_TTF			GENMASK(3, 2)
+#define IDR0_TTF_AARCH64		2
+#define IDR0_TTF_AARCH32_64		3
+#define IDR0_S1P			(1 << 1)
+#define IDR0_S2P			(1 << 0)
+
+#define ARM_SMMU_IDR1			0x4
+#define IDR1_TABLES_PRESET		(1 << 30)
+#define IDR1_QUEUES_PRESET		(1 << 29)
+#define IDR1_REL			(1 << 28)
+#define IDR1_CMDQS			GENMASK(25, 21)
+#define IDR1_EVTQS			GENMASK(20, 16)
+#define IDR1_PRIQS			GENMASK(15, 11)
+#define IDR1_SSIDSIZE			GENMASK(10, 6)
+#define IDR1_SIDSIZE			GENMASK(5, 0)
+
+#define ARM_SMMU_IDR3			0xc
+#define IDR3_RIL			(1 << 10)
+
+#define ARM_SMMU_IDR5			0x14
+#define IDR5_STALL_MAX			GENMASK(31, 16)
+#define IDR5_GRAN64K			(1 << 6)
+#define IDR5_GRAN16K			(1 << 5)
+#define IDR5_GRAN4K			(1 << 4)
+#define IDR5_OAS			GENMASK(2, 0)
+#define IDR5_OAS_32_BIT			0
+#define IDR5_OAS_36_BIT			1
+#define IDR5_OAS_40_BIT			2
+#define IDR5_OAS_42_BIT			3
+#define IDR5_OAS_44_BIT			4
+#define IDR5_OAS_48_BIT			5
+#define IDR5_OAS_52_BIT			6
+#define IDR5_VAX			GENMASK(11, 10)
+#define IDR5_VAX_52_BIT			1
+
+#define ARM_SMMU_CR0			0x20
+#define CR0_ATSCHK			(1 << 4)
+#define CR0_CMDQEN			(1 << 3)
+#define CR0_EVTQEN			(1 << 2)
+#define CR0_PRIQEN			(1 << 1)
+#define CR0_SMMUEN			(1 << 0)
+
+#define ARM_SMMU_CR0ACK			0x24
+
+#define ARM_SMMU_CR1			0x28
+#define CR1_TABLE_SH			GENMASK(11, 10)
+#define CR1_TABLE_OC			GENMASK(9, 8)
+#define CR1_TABLE_IC			GENMASK(7, 6)
+#define CR1_QUEUE_SH			GENMASK(5, 4)
+#define CR1_QUEUE_OC			GENMASK(3, 2)
+#define CR1_QUEUE_IC			GENMASK(1, 0)
+/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */
+#define CR1_CACHE_NC			0
+#define CR1_CACHE_WB			1
+#define CR1_CACHE_WT			2
+
+#define ARM_SMMU_CR2			0x2c
+#define CR2_PTM				(1 << 2)
+#define CR2_RECINVSID			(1 << 1)
+#define CR2_E2H				(1 << 0)
+
+#define ARM_SMMU_GBPA			0x44
+#define GBPA_UPDATE			(1 << 31)
+#define GBPA_ABORT			(1 << 20)
+
+#define ARM_SMMU_IRQ_CTRL		0x50
+#define IRQ_CTRL_EVTQ_IRQEN		(1 << 2)
+#define IRQ_CTRL_PRIQ_IRQEN		(1 << 1)
+#define IRQ_CTRL_GERROR_IRQEN		(1 << 0)
+
+#define ARM_SMMU_IRQ_CTRLACK		0x54
+
+#define ARM_SMMU_GERROR			0x60
+#define GERROR_SFM_ERR			(1 << 8)
+#define GERROR_MSI_GERROR_ABT_ERR	(1 << 7)
+#define GERROR_MSI_PRIQ_ABT_ERR		(1 << 6)
+#define GERROR_MSI_EVTQ_ABT_ERR		(1 << 5)
+#define GERROR_MSI_CMDQ_ABT_ERR		(1 << 4)
+#define GERROR_PRIQ_ABT_ERR		(1 << 3)
+#define GERROR_EVTQ_ABT_ERR		(1 << 2)
+#define GERROR_CMDQ_ERR			(1 << 0)
+#define GERROR_ERR_MASK			0xfd
+
+#define ARM_SMMU_GERRORN		0x64
+
+#define ARM_SMMU_GERROR_IRQ_CFG0	0x68
+#define ARM_SMMU_GERROR_IRQ_CFG1	0x70
+#define ARM_SMMU_GERROR_IRQ_CFG2	0x74
+
+#define ARM_SMMU_STRTAB_BASE		0x80
+#define STRTAB_BASE_RA			(1UL << 62)
+#define STRTAB_BASE_ADDR_MASK		GENMASK_ULL(51, 6)
+
+#define ARM_SMMU_STRTAB_BASE_CFG	0x88
+#define STRTAB_BASE_CFG_FMT		GENMASK(17, 16)
+#define STRTAB_BASE_CFG_FMT_LINEAR	0
+#define STRTAB_BASE_CFG_FMT_2LVL	1
+#define STRTAB_BASE_CFG_SPLIT		GENMASK(10, 6)
+#define STRTAB_BASE_CFG_LOG2SIZE	GENMASK(5, 0)
+
+#define ARM_SMMU_CMDQ_BASE		0x90
+#define ARM_SMMU_CMDQ_PROD		0x98
+#define ARM_SMMU_CMDQ_CONS		0x9c
+
+#define ARM_SMMU_EVTQ_BASE		0xa0
+#define ARM_SMMU_EVTQ_PROD		0x100a8
+#define ARM_SMMU_EVTQ_CONS		0x100ac
+#define ARM_SMMU_EVTQ_IRQ_CFG0		0xb0
+#define ARM_SMMU_EVTQ_IRQ_CFG1		0xb8
+#define ARM_SMMU_EVTQ_IRQ_CFG2		0xbc
+
+#define ARM_SMMU_PRIQ_BASE		0xc0
+#define ARM_SMMU_PRIQ_PROD		0x100c8
+#define ARM_SMMU_PRIQ_CONS		0x100cc
+#define ARM_SMMU_PRIQ_IRQ_CFG0		0xd0
+#define ARM_SMMU_PRIQ_IRQ_CFG1		0xd8
+#define ARM_SMMU_PRIQ_IRQ_CFG2		0xdc
+
+#define ARM_SMMU_REG_SZ			0xe00
+
+/* Common MSI config fields */
+#define MSI_CFG0_ADDR_MASK		GENMASK_ULL(51, 2)
+#define MSI_CFG2_SH			GENMASK(5, 4)
+#define MSI_CFG2_MEMATTR		GENMASK(3, 0)
+
+/* Common memory attribute values */
+#define ARM_SMMU_SH_NSH			0
+#define ARM_SMMU_SH_OSH			2
+#define ARM_SMMU_SH_ISH			3
+#define ARM_SMMU_MEMATTR_DEVICE_nGnRE	0x1
+#define ARM_SMMU_MEMATTR_OIWB		0xf
+
+#define Q_IDX(llq, p)			((p) & ((1 << (llq)->max_n_shift) - 1))
+#define Q_WRP(llq, p)			((p) & (1 << (llq)->max_n_shift))
+#define Q_OVERFLOW_FLAG			(1U << 31)
+#define Q_OVF(p)			((p) & Q_OVERFLOW_FLAG)
+#define Q_ENT(q, p)			((q)->base +			\
+					 Q_IDX(&((q)->llq), p) *	\
+					 (q)->ent_dwords)
+
+#define Q_BASE_RWA			(1UL << 62)
+#define Q_BASE_ADDR_MASK		GENMASK_ULL(51, 5)
+#define Q_BASE_LOG2SIZE			GENMASK(4, 0)
+
+/* Ensure DMA allocations are naturally aligned */
+#ifdef CONFIG_CMA_ALIGNMENT
+#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
+#else
+#define Q_MAX_SZ_SHIFT			(PAGE_SHIFT + MAX_ORDER - 1)
+#endif
+
+/*
+ * Stream table.
+ *
+ * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
+ * 2lvl: 128k L1 entries,
+ *       256 lazy entries per table (each table covers a PCI bus)
+ */
+#define STRTAB_L1_SZ_SHIFT		20
+#define STRTAB_SPLIT			8
+
+#define STRTAB_L1_DESC_DWORDS		1
+#define STRTAB_L1_DESC_SPAN		GENMASK_ULL(4, 0)
+#define STRTAB_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 6)
+
+#define STRTAB_STE_DWORDS		8
+#define STRTAB_STE_0_V			(1UL << 0)
+#define STRTAB_STE_0_CFG		GENMASK_ULL(3, 1)
+#define STRTAB_STE_0_CFG_ABORT		0
+#define STRTAB_STE_0_CFG_BYPASS		4
+#define STRTAB_STE_0_CFG_S1_TRANS	5
+#define STRTAB_STE_0_CFG_S2_TRANS	6
+
+#define STRTAB_STE_0_S1FMT		GENMASK_ULL(5, 4)
+#define STRTAB_STE_0_S1FMT_LINEAR	0
+#define STRTAB_STE_0_S1FMT_64K_L2	2
+#define STRTAB_STE_0_S1CTXPTR_MASK	GENMASK_ULL(51, 6)
+#define STRTAB_STE_0_S1CDMAX		GENMASK_ULL(63, 59)
+
+#define STRTAB_STE_1_S1DSS		GENMASK_ULL(1, 0)
+#define STRTAB_STE_1_S1DSS_TERMINATE	0x0
+#define STRTAB_STE_1_S1DSS_BYPASS	0x1
+#define STRTAB_STE_1_S1DSS_SSID0	0x2
+
+#define STRTAB_STE_1_S1C_CACHE_NC	0UL
+#define STRTAB_STE_1_S1C_CACHE_WBRA	1UL
+#define STRTAB_STE_1_S1C_CACHE_WT	2UL
+#define STRTAB_STE_1_S1C_CACHE_WB	3UL
+#define STRTAB_STE_1_S1CIR		GENMASK_ULL(3, 2)
+#define STRTAB_STE_1_S1COR		GENMASK_ULL(5, 4)
+#define STRTAB_STE_1_S1CSH		GENMASK_ULL(7, 6)
+
+#define STRTAB_STE_1_S1STALLD		(1UL << 27)
+
+#define STRTAB_STE_1_EATS		GENMASK_ULL(29, 28)
+#define STRTAB_STE_1_EATS_ABT		0UL
+#define STRTAB_STE_1_EATS_TRANS		1UL
+#define STRTAB_STE_1_EATS_S1CHK		2UL
+
+#define STRTAB_STE_1_STRW		GENMASK_ULL(31, 30)
+#define STRTAB_STE_1_STRW_NSEL1		0UL
+#define STRTAB_STE_1_STRW_EL2		2UL
+
+#define STRTAB_STE_1_SHCFG		GENMASK_ULL(45, 44)
+#define STRTAB_STE_1_SHCFG_INCOMING	1UL
+
+#define STRTAB_STE_2_S2VMID		GENMASK_ULL(15, 0)
+#define STRTAB_STE_2_VTCR		GENMASK_ULL(50, 32)
+#define STRTAB_STE_2_VTCR_S2T0SZ	GENMASK_ULL(5, 0)
+#define STRTAB_STE_2_VTCR_S2SL0		GENMASK_ULL(7, 6)
+#define STRTAB_STE_2_VTCR_S2IR0		GENMASK_ULL(9, 8)
+#define STRTAB_STE_2_VTCR_S2OR0		GENMASK_ULL(11, 10)
+#define STRTAB_STE_2_VTCR_S2SH0		GENMASK_ULL(13, 12)
+#define STRTAB_STE_2_VTCR_S2TG		GENMASK_ULL(15, 14)
+#define STRTAB_STE_2_VTCR_S2PS		GENMASK_ULL(18, 16)
+#define STRTAB_STE_2_S2AA64		(1UL << 51)
+#define STRTAB_STE_2_S2ENDI		(1UL << 52)
+#define STRTAB_STE_2_S2PTW		(1UL << 54)
+#define STRTAB_STE_2_S2R		(1UL << 58)
+
+#define STRTAB_STE_3_S2TTB_MASK		GENMASK_ULL(51, 4)
+
+/*
+ * Context descriptors.
+ *
+ * Linear: when less than 1024 SSIDs are supported
+ * 2lvl: at most 1024 L1 entries,
+ *       1024 lazy entries per table.
+ */
+#define CTXDESC_SPLIT			10
+#define CTXDESC_L2_ENTRIES		(1 << CTXDESC_SPLIT)
+
+#define CTXDESC_L1_DESC_DWORDS		1
+#define CTXDESC_L1_DESC_V		(1UL << 0)
+#define CTXDESC_L1_DESC_L2PTR_MASK	GENMASK_ULL(51, 12)
+
+#define CTXDESC_CD_DWORDS		8
+#define CTXDESC_CD_0_TCR_T0SZ		GENMASK_ULL(5, 0)
+#define CTXDESC_CD_0_TCR_TG0		GENMASK_ULL(7, 6)
+#define CTXDESC_CD_0_TCR_IRGN0		GENMASK_ULL(9, 8)
+#define CTXDESC_CD_0_TCR_ORGN0		GENMASK_ULL(11, 10)
+#define CTXDESC_CD_0_TCR_SH0		GENMASK_ULL(13, 12)
+#define CTXDESC_CD_0_TCR_EPD0		(1ULL << 14)
+#define CTXDESC_CD_0_TCR_EPD1		(1ULL << 30)
+
+#define CTXDESC_CD_0_ENDI		(1UL << 15)
+#define CTXDESC_CD_0_V			(1UL << 31)
+
+#define CTXDESC_CD_0_TCR_IPS		GENMASK_ULL(34, 32)
+#define CTXDESC_CD_0_TCR_TBI0		(1ULL << 38)
+
+#define CTXDESC_CD_0_AA64		(1UL << 41)
+#define CTXDESC_CD_0_S			(1UL << 44)
+#define CTXDESC_CD_0_R			(1UL << 45)
+#define CTXDESC_CD_0_A			(1UL << 46)
+#define CTXDESC_CD_0_ASET		(1UL << 47)
+#define CTXDESC_CD_0_ASID		GENMASK_ULL(63, 48)
+
+#define CTXDESC_CD_1_TTB0_MASK		GENMASK_ULL(51, 4)
+
+/*
+ * When the SMMU only supports linear context descriptor tables, pick a
+ * reasonable size limit (64kB).
+ */
+#define CTXDESC_LINEAR_CDMAX		ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3))
+
+/* Command queue */
+#define CMDQ_ENT_SZ_SHIFT		4
+#define CMDQ_ENT_DWORDS			((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
+#define CMDQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT)
+
+#define CMDQ_CONS_ERR			GENMASK(30, 24)
+#define CMDQ_ERR_CERROR_NONE_IDX	0
+#define CMDQ_ERR_CERROR_ILL_IDX		1
+#define CMDQ_ERR_CERROR_ABT_IDX		2
+#define CMDQ_ERR_CERROR_ATC_INV_IDX	3
+
+#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
+
+/*
+ * This is used to size the command queue and therefore must be at least
+ * BITS_PER_LONG so that the valid_map works correctly (it relies on the
+ * total number of queue entries being a multiple of BITS_PER_LONG).
+ */
+#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
+
+#define CMDQ_0_OP			GENMASK_ULL(7, 0)
+#define CMDQ_0_SSV			(1UL << 11)
+
+#define CMDQ_PREFETCH_0_SID		GENMASK_ULL(63, 32)
+#define CMDQ_PREFETCH_1_SIZE		GENMASK_ULL(4, 0)
+#define CMDQ_PREFETCH_1_ADDR_MASK	GENMASK_ULL(63, 12)
+
+#define CMDQ_CFGI_0_SSID		GENMASK_ULL(31, 12)
+#define CMDQ_CFGI_0_SID			GENMASK_ULL(63, 32)
+#define CMDQ_CFGI_1_LEAF		(1UL << 0)
+#define CMDQ_CFGI_1_RANGE		GENMASK_ULL(4, 0)
+
+#define CMDQ_TLBI_0_NUM			GENMASK_ULL(16, 12)
+#define CMDQ_TLBI_RANGE_NUM_MAX		31
+#define CMDQ_TLBI_0_SCALE		GENMASK_ULL(24, 20)
+#define CMDQ_TLBI_0_VMID		GENMASK_ULL(47, 32)
+#define CMDQ_TLBI_0_ASID		GENMASK_ULL(63, 48)
+#define CMDQ_TLBI_1_LEAF		(1UL << 0)
+#define CMDQ_TLBI_1_TTL			GENMASK_ULL(9, 8)
+#define CMDQ_TLBI_1_TG			GENMASK_ULL(11, 10)
+#define CMDQ_TLBI_1_VA_MASK		GENMASK_ULL(63, 12)
+#define CMDQ_TLBI_1_IPA_MASK		GENMASK_ULL(51, 12)
+
+#define CMDQ_ATC_0_SSID			GENMASK_ULL(31, 12)
+#define CMDQ_ATC_0_SID			GENMASK_ULL(63, 32)
+#define CMDQ_ATC_0_GLOBAL		(1UL << 9)
+#define CMDQ_ATC_1_SIZE			GENMASK_ULL(5, 0)
+#define CMDQ_ATC_1_ADDR_MASK		GENMASK_ULL(63, 12)
+
+#define CMDQ_PRI_0_SSID			GENMASK_ULL(31, 12)
+#define CMDQ_PRI_0_SID			GENMASK_ULL(63, 32)
+#define CMDQ_PRI_1_GRPID		GENMASK_ULL(8, 0)
+#define CMDQ_PRI_1_RESP			GENMASK_ULL(13, 12)
+
+#define CMDQ_SYNC_0_CS			GENMASK_ULL(13, 12)
+#define CMDQ_SYNC_0_CS_NONE		0
+#define CMDQ_SYNC_0_CS_IRQ		1
+#define CMDQ_SYNC_0_CS_SEV		2
+#define CMDQ_SYNC_0_MSH			GENMASK_ULL(23, 22)
+#define CMDQ_SYNC_0_MSIATTR		GENMASK_ULL(27, 24)
+#define CMDQ_SYNC_0_MSIDATA		GENMASK_ULL(63, 32)
+#define CMDQ_SYNC_1_MSIADDR_MASK	GENMASK_ULL(51, 2)
+
+/* Event queue */
+#define EVTQ_ENT_SZ_SHIFT		5
+#define EVTQ_ENT_DWORDS			((1 << EVTQ_ENT_SZ_SHIFT) >> 3)
+#define EVTQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT)
+
+#define EVTQ_0_ID			GENMASK_ULL(7, 0)
+
+/* PRI queue */
+#define PRIQ_ENT_SZ_SHIFT		4
+#define PRIQ_ENT_DWORDS			((1 << PRIQ_ENT_SZ_SHIFT) >> 3)
+#define PRIQ_MAX_SZ_SHIFT		(Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT)
+
+#define PRIQ_0_SID			GENMASK_ULL(31, 0)
+#define PRIQ_0_SSID			GENMASK_ULL(51, 32)
+#define PRIQ_0_PERM_PRIV		(1UL << 58)
+#define PRIQ_0_PERM_EXEC		(1UL << 59)
+#define PRIQ_0_PERM_READ		(1UL << 60)
+#define PRIQ_0_PERM_WRITE		(1UL << 61)
+#define PRIQ_0_PRG_LAST			(1UL << 62)
+#define PRIQ_0_SSID_V			(1UL << 63)
+
+#define PRIQ_1_PRG_IDX			GENMASK_ULL(8, 0)
+#define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
+
+/* High-level queue structures */
+#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
+#define ARM_SMMU_POLL_SPIN_COUNT	10
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+static bool disable_bypass = true;
+module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_bypass,
+	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
+
+enum pri_resp {
+	PRI_RESP_DENY = 0,
+	PRI_RESP_FAIL = 1,
+	PRI_RESP_SUCC = 2,
+};
+
+enum arm_smmu_msi_index {
+	EVTQ_MSI_INDEX,
+	GERROR_MSI_INDEX,
+	PRIQ_MSI_INDEX,
+	ARM_SMMU_MAX_MSIS,
+};
+
+static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
+	[EVTQ_MSI_INDEX] = {
+		ARM_SMMU_EVTQ_IRQ_CFG0,
+		ARM_SMMU_EVTQ_IRQ_CFG1,
+		ARM_SMMU_EVTQ_IRQ_CFG2,
+	},
+	[GERROR_MSI_INDEX] = {
+		ARM_SMMU_GERROR_IRQ_CFG0,
+		ARM_SMMU_GERROR_IRQ_CFG1,
+		ARM_SMMU_GERROR_IRQ_CFG2,
+	},
+	[PRIQ_MSI_INDEX] = {
+		ARM_SMMU_PRIQ_IRQ_CFG0,
+		ARM_SMMU_PRIQ_IRQ_CFG1,
+		ARM_SMMU_PRIQ_IRQ_CFG2,
+	},
+};
+
+struct arm_smmu_cmdq_ent {
+	/* Common fields */
+	u8				opcode;
+	bool				substream_valid;
+
+	/* Command-specific fields */
+	union {
+		#define CMDQ_OP_PREFETCH_CFG	0x1
+		struct {
+			u32			sid;
+			u8			size;
+			u64			addr;
+		} prefetch;
+
+		#define CMDQ_OP_CFGI_STE	0x3
+		#define CMDQ_OP_CFGI_ALL	0x4
+		#define CMDQ_OP_CFGI_CD		0x5
+		#define CMDQ_OP_CFGI_CD_ALL	0x6
+		struct {
+			u32			sid;
+			u32			ssid;
+			union {
+				bool		leaf;
+				u8		span;
+			};
+		} cfgi;
+
+		#define CMDQ_OP_TLBI_NH_ASID	0x11
+		#define CMDQ_OP_TLBI_NH_VA	0x12
+		#define CMDQ_OP_TLBI_EL2_ALL	0x20
+		#define CMDQ_OP_TLBI_S12_VMALL	0x28
+		#define CMDQ_OP_TLBI_S2_IPA	0x2a
+		#define CMDQ_OP_TLBI_NSNH_ALL	0x30
+		struct {
+			u8			num;
+			u8			scale;
+			u16			asid;
+			u16			vmid;
+			bool			leaf;
+			u8			ttl;
+			u8			tg;
+			u64			addr;
+		} tlbi;
+
+		#define CMDQ_OP_ATC_INV		0x40
+		#define ATC_INV_SIZE_ALL	52
+		struct {
+			u32			sid;
+			u32			ssid;
+			u64			addr;
+			u8			size;
+			bool			global;
+		} atc;
+
+		#define CMDQ_OP_PRI_RESP	0x41
+		struct {
+			u32			sid;
+			u32			ssid;
+			u16			grpid;
+			enum pri_resp		resp;
+		} pri;
+
+		#define CMDQ_OP_CMD_SYNC	0x46
+		struct {
+			u64			msiaddr;
+		} sync;
+	};
+};
+
+struct arm_smmu_ll_queue {
+	union {
+		u64			val;
+		struct {
+			u32		prod;
+			u32		cons;
+		};
+		struct {
+			atomic_t	prod;
+			atomic_t	cons;
+		} atomic;
+		u8			__pad[SMP_CACHE_BYTES];
+	} ____cacheline_aligned_in_smp;
+	u32				max_n_shift;
+};
+
+struct arm_smmu_queue {
+	struct arm_smmu_ll_queue	llq;
+	int				irq; /* Wired interrupt */
+
+	__le64				*base;
+	dma_addr_t			base_dma;
+	u64				q_base;
+
+	size_t				ent_dwords;
+
+	u32 __iomem			*prod_reg;
+	u32 __iomem			*cons_reg;
+};
+
+struct arm_smmu_queue_poll {
+	ktime_t				timeout;
+	unsigned int			delay;
+	unsigned int			spin_cnt;
+	bool				wfe;
+};
+
+struct arm_smmu_cmdq {
+	struct arm_smmu_queue		q;
+	atomic_long_t			*valid_map;
+	atomic_t			owner_prod;
+	atomic_t			lock;
+};
+
+struct arm_smmu_cmdq_batch {
+	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
+	int				num;
+};
+
+struct arm_smmu_evtq {
+	struct arm_smmu_queue		q;
+	u32				max_stalls;
+};
+
+struct arm_smmu_priq {
+	struct arm_smmu_queue		q;
+};
+
+/* High-level stream table and context descriptor structures */
+struct arm_smmu_strtab_l1_desc {
+	u8				span;
+
+	__le64				*l2ptr;
+	dma_addr_t			l2ptr_dma;
+};
+
+struct arm_smmu_ctx_desc {
+	u16				asid;
+	u64				ttbr;
+	u64				tcr;
+	u64				mair;
+};
+
+struct arm_smmu_l1_ctx_desc {
+	__le64				*l2ptr;
+	dma_addr_t			l2ptr_dma;
+};
+
+struct arm_smmu_ctx_desc_cfg {
+	__le64				*cdtab;
+	dma_addr_t			cdtab_dma;
+	struct arm_smmu_l1_ctx_desc	*l1_desc;
+	unsigned int			num_l1_ents;
+};
+
+struct arm_smmu_s1_cfg {
+	struct arm_smmu_ctx_desc_cfg	cdcfg;
+	struct arm_smmu_ctx_desc	cd;
+	u8				s1fmt;
+	u8				s1cdmax;
+};
+
+struct arm_smmu_s2_cfg {
+	u16				vmid;
+	u64				vttbr;
+	u64				vtcr;
+};
+
+struct arm_smmu_strtab_cfg {
+	__le64				*strtab;
+	dma_addr_t			strtab_dma;
+	struct arm_smmu_strtab_l1_desc	*l1_desc;
+	unsigned int			num_l1_ents;
+
+	u64				strtab_base;
+	u32				strtab_base_cfg;
+};
+
+/* An SMMUv3 instance */
+struct arm_smmu_device {
+	struct device			*dev;
+	void __iomem			*base;
+	void __iomem			*page1;
+
+#define ARM_SMMU_FEAT_2_LVL_STRTAB	(1 << 0)
+#define ARM_SMMU_FEAT_2_LVL_CDTAB	(1 << 1)
+#define ARM_SMMU_FEAT_TT_LE		(1 << 2)
+#define ARM_SMMU_FEAT_TT_BE		(1 << 3)
+#define ARM_SMMU_FEAT_PRI		(1 << 4)
+#define ARM_SMMU_FEAT_ATS		(1 << 5)
+#define ARM_SMMU_FEAT_SEV		(1 << 6)
+#define ARM_SMMU_FEAT_MSI		(1 << 7)
+#define ARM_SMMU_FEAT_COHERENCY		(1 << 8)
+#define ARM_SMMU_FEAT_TRANS_S1		(1 << 9)
+#define ARM_SMMU_FEAT_TRANS_S2		(1 << 10)
+#define ARM_SMMU_FEAT_STALLS		(1 << 11)
+#define ARM_SMMU_FEAT_HYP		(1 << 12)
+#define ARM_SMMU_FEAT_STALL_FORCE	(1 << 13)
+#define ARM_SMMU_FEAT_VAX		(1 << 14)
+#define ARM_SMMU_FEAT_RANGE_INV		(1 << 15)
+	u32				features;
+
+#define ARM_SMMU_OPT_SKIP_PREFETCH	(1 << 0)
+#define ARM_SMMU_OPT_PAGE0_REGS_ONLY	(1 << 1)
+	u32				options;
+
+	struct arm_smmu_cmdq		cmdq;
+	struct arm_smmu_evtq		evtq;
+	struct arm_smmu_priq		priq;
+
+	int				gerr_irq;
+	int				combined_irq;
+
+	unsigned long			ias; /* IPA */
+	unsigned long			oas; /* PA */
+	unsigned long			pgsize_bitmap;
+
+#define ARM_SMMU_MAX_ASIDS		(1 << 16)
+	unsigned int			asid_bits;
+
+#define ARM_SMMU_MAX_VMIDS		(1 << 16)
+	unsigned int			vmid_bits;
+	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
+
+	unsigned int			ssid_bits;
+	unsigned int			sid_bits;
+
+	struct arm_smmu_strtab_cfg	strtab_cfg;
+
+	/* IOMMU core code handle */
+	struct iommu_device		iommu;
+};
+
+/* SMMU private data for each master */
+struct arm_smmu_master {
+	struct arm_smmu_device		*smmu;
+	struct device			*dev;
+	struct arm_smmu_domain		*domain;
+	struct list_head		domain_head;
+	u32				*sids;
+	unsigned int			num_sids;
+	bool				ats_enabled;
+	unsigned int			ssid_bits;
+};
+
+/* SMMU private data for an IOMMU domain */
+enum arm_smmu_domain_stage {
+	ARM_SMMU_DOMAIN_S1 = 0,
+	ARM_SMMU_DOMAIN_S2,
+	ARM_SMMU_DOMAIN_NESTED,
+	ARM_SMMU_DOMAIN_BYPASS,
+};
+
+struct arm_smmu_domain {
+	struct arm_smmu_device		*smmu;
+	struct mutex			init_mutex; /* Protects smmu pointer */
+
+	struct io_pgtable_ops		*pgtbl_ops;
+	bool				non_strict;
+	atomic_t			nr_ats_masters;
+
+	enum arm_smmu_domain_stage	stage;
+	union {
+		struct arm_smmu_s1_cfg	s1_cfg;
+		struct arm_smmu_s2_cfg	s2_cfg;
+	};
+
+	struct iommu_domain		domain;
+
+	struct list_head		devices;
+	spinlock_t			devices_lock;
+};
+
+struct arm_smmu_option_prop {
+	u32 opt;
+	const char *prop;
+};
+
+static DEFINE_XARRAY_ALLOC1(asid_xa);
+
+static struct arm_smmu_option_prop arm_smmu_options[] = {
+	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
+	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace" },
+	{ 0, NULL},
+};
+
+static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
+						 struct arm_smmu_device *smmu)
+{
+	if (offset > SZ_64K)
+		return smmu->page1 + offset - SZ_64K;
+
+	return smmu->base + offset;
+}
+
+static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
+{
+	return container_of(dom, struct arm_smmu_domain, domain);
+}
+
+static void parse_driver_options(struct arm_smmu_device *smmu)
+{
+	int i = 0;
+
+	do {
+		if (of_property_read_bool(smmu->dev->of_node,
+						arm_smmu_options[i].prop)) {
+			smmu->options |= arm_smmu_options[i].opt;
+			dev_notice(smmu->dev, "option %s\n",
+				arm_smmu_options[i].prop);
+		}
+	} while (arm_smmu_options[++i].opt);
+}
+
+/* Low-level queue manipulation functions */
+static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
+{
+	u32 space, prod, cons;
+
+	prod = Q_IDX(q, q->prod);
+	cons = Q_IDX(q, q->cons);
+
+	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
+		space = (1 << q->max_n_shift) - (prod - cons);
+	else
+		space = cons - prod;
+
+	return space >= n;
+}
+
+static bool queue_full(struct arm_smmu_ll_queue *q)
+{
+	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
+}
+
+static bool queue_empty(struct arm_smmu_ll_queue *q)
+{
+	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
+}
+
+static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
+{
+	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
+		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
+	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
+		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
+}
+
+static void queue_sync_cons_out(struct arm_smmu_queue *q)
+{
+	/*
+	 * Ensure that all CPU accesses (reads and writes) to the queue
+	 * are complete before we update the cons pointer.
+	 */
+	mb();
+	writel_relaxed(q->llq.cons, q->cons_reg);
+}
+
+static void queue_inc_cons(struct arm_smmu_ll_queue *q)
+{
+	u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+	q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+}
+
+static int queue_sync_prod_in(struct arm_smmu_queue *q)
+{
+	int ret = 0;
+	u32 prod = readl_relaxed(q->prod_reg);
+
+	if (Q_OVF(prod) != Q_OVF(q->llq.prod))
+		ret = -EOVERFLOW;
+
+	q->llq.prod = prod;
+	return ret;
+}
+
+static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
+{
+	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
+	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+}
+
+static void queue_poll_init(struct arm_smmu_device *smmu,
+			    struct arm_smmu_queue_poll *qp)
+{
+	qp->delay = 1;
+	qp->spin_cnt = 0;
+	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+static int queue_poll(struct arm_smmu_queue_poll *qp)
+{
+	if (ktime_compare(ktime_get(), qp->timeout) > 0)
+		return -ETIMEDOUT;
+
+	if (qp->wfe) {
+		wfe();
+	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
+		cpu_relax();
+	} else {
+		udelay(qp->delay);
+		qp->delay *= 2;
+		qp->spin_cnt = 0;
+	}
+
+	return 0;
+}
+
+static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
+{
+	int i;
+
+	for (i = 0; i < n_dwords; ++i)
+		*dst++ = cpu_to_le64(*src++);
+}
+
+static void queue_read(u64 *dst, __le64 *src, size_t n_dwords)
+{
+	int i;
+
+	for (i = 0; i < n_dwords; ++i)
+		*dst++ = le64_to_cpu(*src++);
+}
+
+static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+	if (queue_empty(&q->llq))
+		return -EAGAIN;
+
+	queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords);
+	queue_inc_cons(&q->llq);
+	queue_sync_cons_out(q);
+	return 0;
+}
+
+/* High-level queue accessors */
+static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+{
+	memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
+	cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+
+	switch (ent->opcode) {
+	case CMDQ_OP_TLBI_EL2_ALL:
+	case CMDQ_OP_TLBI_NSNH_ALL:
+		break;
+	case CMDQ_OP_PREFETCH_CFG:
+		cmd[0] |= FIELD_PREP(CMDQ_PREFETCH_0_SID, ent->prefetch.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
+		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
+		break;
+	case CMDQ_OP_CFGI_CD:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
+		fallthrough;
+	case CMDQ_OP_CFGI_STE:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
+		break;
+	case CMDQ_OP_CFGI_CD_ALL:
+		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+		break;
+	case CMDQ_OP_CFGI_ALL:
+		/* Cover the entire SID range */
+		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
+		break;
+	case CMDQ_OP_TLBI_NH_VA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
+		break;
+	case CMDQ_OP_TLBI_S2_IPA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
+		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
+		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
+		break;
+	case CMDQ_OP_TLBI_NH_ASID:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+		fallthrough;
+	case CMDQ_OP_TLBI_S12_VMALL:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+		break;
+	case CMDQ_OP_ATC_INV:
+		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
+		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
+		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
+		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
+		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
+		break;
+	case CMDQ_OP_PRI_RESP:
+		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
+		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
+		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
+		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
+		switch (ent->pri.resp) {
+		case PRI_RESP_DENY:
+		case PRI_RESP_FAIL:
+		case PRI_RESP_SUCC:
+			break;
+		default:
+			return -EINVAL;
+		}
+		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
+		break;
+	case CMDQ_OP_CMD_SYNC:
+		if (ent->sync.msiaddr) {
+			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
+			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
+		} else {
+			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
+		}
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	return 0;
+}
+
+static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
+					 u32 prod)
+{
+	struct arm_smmu_queue *q = &smmu->cmdq.q;
+	struct arm_smmu_cmdq_ent ent = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	/*
+	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
+	 * payload, so the write will zero the entire command on that platform.
+	 */
+	if (smmu->features & ARM_SMMU_FEAT_MSI &&
+	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
+		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
+				   q->ent_dwords * 8;
+	}
+
+	arm_smmu_cmdq_build_cmd(cmd, &ent);
+}
+
+static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
+{
+	static const char *cerror_str[] = {
+		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
+		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
+		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
+		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
+	};
+
+	int i;
+	u64 cmd[CMDQ_ENT_DWORDS];
+	struct arm_smmu_queue *q = &smmu->cmdq.q;
+	u32 cons = readl_relaxed(q->cons_reg);
+	u32 idx = FIELD_GET(CMDQ_CONS_ERR, cons);
+	struct arm_smmu_cmdq_ent cmd_sync = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+	};
+
+	dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
+		idx < ARRAY_SIZE(cerror_str) ?  cerror_str[idx] : "Unknown");
+
+	switch (idx) {
+	case CMDQ_ERR_CERROR_ABT_IDX:
+		dev_err(smmu->dev, "retrying command fetch\n");
+		fallthrough;
+	case CMDQ_ERR_CERROR_NONE_IDX:
+		return;
+	case CMDQ_ERR_CERROR_ATC_INV_IDX:
+		/*
+		 * ATC Invalidation Completion timeout. CONS is still pointing
+		 * at the CMD_SYNC. Attempt to complete other pending commands
+		 * by repeating the CMD_SYNC, though we might well end up back
+		 * here since the ATC invalidation may still be pending.
+		 */
+		return;
+	case CMDQ_ERR_CERROR_ILL_IDX:
+	default:
+		break;
+	}
+
+	/*
+	 * We may have concurrent producers, so we need to be careful
+	 * not to touch any of the shadow cmdq state.
+	 */
+	queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
+	dev_err(smmu->dev, "skipping command in error state:\n");
+	for (i = 0; i < ARRAY_SIZE(cmd); ++i)
+		dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
+
+	/* Convert the erroneous command into a CMD_SYNC */
+	if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
+		dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
+		return;
+	}
+
+	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
+}
+
+/*
+ * Command queue locking.
+ * This is a form of bastardised rwlock with the following major changes:
+ *
+ * - The only LOCK routines are exclusive_trylock() and shared_lock().
+ *   Neither have barrier semantics, and instead provide only a control
+ *   dependency.
+ *
+ * - The UNLOCK routines are supplemented with shared_tryunlock(), which
+ *   fails if the caller appears to be the last lock holder (yes, this is
+ *   racy). All successful UNLOCK routines have RELEASE semantics.
+ */
+static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
+{
+	int val;
+
+	/*
+	 * We can try to avoid the cmpxchg() loop by simply incrementing the
+	 * lock counter. When held in exclusive state, the lock counter is set
+	 * to INT_MIN so these increments won't hurt as the value will remain
+	 * negative.
+	 */
+	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
+		return;
+
+	do {
+		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
+	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
+}
+
+static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
+{
+	(void)atomic_dec_return_release(&cmdq->lock);
+}
+
+static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
+{
+	if (atomic_read(&cmdq->lock) == 1)
+		return false;
+
+	arm_smmu_cmdq_shared_unlock(cmdq);
+	return true;
+}
+
+#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
+({									\
+	bool __ret;							\
+	local_irq_save(flags);						\
+	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
+	if (!__ret)							\
+		local_irq_restore(flags);				\
+	__ret;								\
+})
+
+#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
+({									\
+	atomic_set_release(&cmdq->lock, 0);				\
+	local_irq_restore(flags);					\
+})
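For readers unfamiliar with this lock, the shared/exclusive counter described above can be sketched as a portable, single-threaded C11 model (hypothetical model_* names; no interrupt masking, no barriers): a non-negative counter counts shared holders, INT_MIN marks exclusive ownership, and exclusive unlock *sets* the counter back to zero so that stray increments made by waiting shared lockers simply vanish.

```c
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

atomic_int model_lock;	/* 0: free, >0: shared holders, <0: exclusive */

void model_shared_lock(void)
{
	int val;

	/* Fast path, as in arm_smmu_cmdq_shared_lock(): if the counter is
	 * non-negative, the increment alone takes the lock. Increments made
	 * while the lock is held exclusively keep the value negative and are
	 * discarded when the exclusive holder stores 0 on unlock. */
	if (atomic_fetch_add(&model_lock, 1) >= 0)
		return;

	do {
		while ((val = atomic_load(&model_lock)) < 0)
			;	/* spin until the exclusive holder is gone */
	} while (!atomic_compare_exchange_weak(&model_lock, &val, val + 1));
}

void model_shared_unlock(void)
{
	atomic_fetch_sub(&model_lock, 1);
}

bool model_exclusive_trylock(void)
{
	int expected = 0;

	return atomic_compare_exchange_strong(&model_lock, &expected, INT_MIN);
}

void model_exclusive_unlock(void)
{
	/* Set, don't decrement: wipes out increments from spinning sharers. */
	atomic_store(&model_lock, 0);
}
```

The driver's versions additionally pick relaxed/release orderings per the comment above and disable interrupts around the exclusive section; this sketch only shows the counter protocol.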
+
+
+/*
+ * Command queue insertion.
+ * This is made fiddly by our attempts to achieve some sort of scalability
+ * since there is one queue shared amongst all of the CPUs in the system.  If
+ * you like mixed-size concurrency, dependency ordering and relaxed atomics,
+ * then you'll *love* this monstrosity.
+ *
+ * The basic idea is to split the queue up into ranges of commands that are
+ * owned by a given CPU; the owner may not have written all of the commands
+ * itself, but is responsible for advancing the hardware prod pointer when
+ * the time comes. The algorithm is roughly:
+ *
+ * 	1. Allocate some space in the queue. At this point we also discover
+ *	   whether the head of the queue is currently owned by another CPU,
+ *	   or whether we are the owner.
+ *
+ *	2. Write our commands into our allocated slots in the queue.
+ *
+ *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
+ *
+ *	4. If we are an owner:
+ *		a. Wait for the previous owner to finish.
+ *		b. Mark the queue head as unowned, which tells us the range
+ *		   that we are responsible for publishing.
+ *		c. Wait for all commands in our owned range to become valid.
+ *		d. Advance the hardware prod pointer.
+ *		e. Tell the next owner we've finished.
+ *
+ *	5. If we are inserting a CMD_SYNC (we may or may not have been an
+ *	   owner), then we need to stick around until it has completed:
+ *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
+ *		   to clear the first 4 bytes.
+ *		b. Otherwise, we spin waiting for the hardware cons pointer to
+ *		   advance past our command.
+ *
+ * The devil is in the details, particularly the use of locking for handling
+ * SYNC completion and freeing up space in the queue before we think that it is
+ * full.
+ */
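Step 1 of the algorithm above (space allocation plus owner discovery) can be illustrated with a single-threaded model (hypothetical MODEL_*/model_* names, no cmpxchg loop): every caller stamps the OWNED flag into the shadow prod it publishes, so a caller becomes owner exactly when the value it displaced did *not* have the flag set.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MODEL_SHIFT		3	/* 8-entry queue for illustration */
#define MODEL_OWNED_FLAG	(1u << 31)
#define MODEL_IDX(p)		((p) & ((1u << MODEL_SHIFT) - 1))
#define MODEL_WRP(p)		((p) & (1u << MODEL_SHIFT))

uint32_t model_prod;	/* shadow prod, including the OWNED flag */

/*
 * Allocate n slots starting at *sprod; returns true if this caller
 * became the queue owner (i.e. the previous prod was unowned).
 */
bool model_alloc(unsigned int n, uint32_t *sprod)
{
	uint32_t old = model_prod;
	uint32_t next = (MODEL_WRP(old) | MODEL_IDX(old)) + n;

	/* Keep index + wrap bit, and always publish with OWNED set. */
	model_prod = (next & ((1u << (MODEL_SHIFT + 1)) - 1)) |
		     MODEL_OWNED_FLAG;
	*sprod = old & ~MODEL_OWNED_FLAG;
	return !(old & MODEL_OWNED_FLAG);
}
```

In the real driver this is a cmpxchg_relaxed() loop on cmdq->q.llq.val, and the owner later strips the flag again with atomic_fetch_andnot_relaxed() to stop gathering work (step 4b).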
+static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
+					       u32 sprod, u32 eprod, bool set)
+{
+	u32 swidx, sbidx, ewidx, ebidx;
+	struct arm_smmu_ll_queue llq = {
+		.max_n_shift	= cmdq->q.llq.max_n_shift,
+		.prod		= sprod,
+	};
+
+	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
+	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
+
+	while (llq.prod != eprod) {
+		unsigned long mask;
+		atomic_long_t *ptr;
+		u32 limit = BITS_PER_LONG;
+
+		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
+		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
+
+		ptr = &cmdq->valid_map[swidx];
+
+		if ((swidx == ewidx) && (sbidx < ebidx))
+			limit = ebidx;
+
+		mask = GENMASK(limit - 1, sbidx);
+
+		/*
+		 * The valid bit is the inverse of the wrap bit. This means
+		 * that a zero-initialised queue is invalid and, after marking
+		 * all entries as valid, they become invalid again when we
+		 * wrap.
+		 */
+		if (set) {
+			atomic_long_xor(mask, ptr);
+		} else { /* Poll */
+			unsigned long valid;
+
+			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
+			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
+		}
+
+		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
+	}
+}
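The `(ULONG_MAX + !!Q_WRP(...)) & mask` expression in the poll branch above is worth checking in isolation: because the valid bit is the inverse of the wrap bit, a pass with the wrap flag set expects all-clear bits under the mask, and the unsigned addition overflows ULONG_MAX + 1 to 0 to produce exactly that. A hypothetical helper reproducing it:

```c
#include <limits.h>

/*
 * Expected valid-bit pattern for entries produced in a pass with the
 * given wrap state: mask itself when wrap is clear, zero when set
 * (ULONG_MAX + 1 wraps to 0 in unsigned long arithmetic).
 */
unsigned long expected_valid_bits(unsigned long wrp, unsigned long mask)
{
	return (ULONG_MAX + !!wrp) & mask;
}
```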
+
+/* Mark all entries in the range [sprod, eprod) as valid */
+static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
+					u32 sprod, u32 eprod)
+{
+	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
+}
+
+/* Wait for all entries in the range [sprod, eprod) to become valid */
+static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
+					 u32 sprod, u32 eprod)
+{
+	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
+}
+
+/* Wait for the command queue to become non-full */
+static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
+					     struct arm_smmu_ll_queue *llq)
+{
+	unsigned long flags;
+	struct arm_smmu_queue_poll qp;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	int ret = 0;
+
+	/*
+	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
+	 * that fails, spin until somebody else updates it for us.
+	 */
+	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
+		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
+		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
+		llq->val = READ_ONCE(cmdq->q.llq.val);
+		return 0;
+	}
+
+	queue_poll_init(smmu, &qp);
+	do {
+		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
+		if (!queue_full(llq))
+			break;
+
+		ret = queue_poll(&qp);
+	} while (!ret);
+
+	return ret;
+}
+
+/*
+ * Wait until the SMMU signals a CMD_SYNC completion MSI.
+ * Must be called with the cmdq lock held in some capacity.
+ */
+static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
+					  struct arm_smmu_ll_queue *llq)
+{
+	int ret = 0;
+	struct arm_smmu_queue_poll qp;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
+
+	queue_poll_init(smmu, &qp);
+
+	/*
+	 * The MSI won't generate an event, since it's being written back
+	 * into the command queue.
+	 */
+	qp.wfe = false;
+	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
+	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
+	return ret;
+}
+
+/*
+ * Wait until the SMMU cons index passes llq->prod.
+ * Must be called with the cmdq lock held in some capacity.
+ */
+static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
+					       struct arm_smmu_ll_queue *llq)
+{
+	struct arm_smmu_queue_poll qp;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	u32 prod = llq->prod;
+	int ret = 0;
+
+	queue_poll_init(smmu, &qp);
+	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
+	do {
+		if (queue_consumed(llq, prod))
+			break;
+
+		ret = queue_poll(&qp);
+
+		/*
+		 * This needs to be a readl() so that our subsequent call
+		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
+		 *
+		 * Specifically, we need to ensure that we observe all
+		 * shared_lock()s by other CMD_SYNCs that share our owner,
+		 * so that a failing call to tryunlock() means that we're
+		 * the last one out and therefore we can safely advance
+		 * cmdq->q.llq.cons. Roughly speaking:
+		 *
+		 * CPU 0		CPU 1			CPU 2 (us)
+		 *
+		 * if (sync)
+		 * 	shared_lock();
+		 *
+		 * dma_wmb();
+		 * set_valid_map();
+		 *
+		 * 			if (owner) {
+		 *				poll_valid_map();
+		 *				<control dependency>
+		 *				writel(prod_reg);
+		 *
+		 *						readl(cons_reg);
+		 *						tryunlock();
+		 *
+		 * Requires us to see CPU 0's shared_lock() acquisition.
+		 */
+		llq->cons = readl(cmdq->q.cons_reg);
+	} while (!ret);
+
+	return ret;
+}
+
+static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
+					 struct arm_smmu_ll_queue *llq)
+{
+	if (smmu->features & ARM_SMMU_FEAT_MSI &&
+	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
+		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
+
+	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
+}
+
+static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
+					u32 prod, int n)
+{
+	int i;
+	struct arm_smmu_ll_queue llq = {
+		.max_n_shift	= cmdq->q.llq.max_n_shift,
+		.prod		= prod,
+	};
+
+	for (i = 0; i < n; ++i) {
+		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
+
+		prod = queue_inc_prod_n(&llq, i);
+		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
+	}
+}
+
+/*
+ * This is the actual insertion function, and provides the following
+ * ordering guarantees to callers:
+ *
+ * - There is a dma_wmb() before publishing any commands to the queue.
+ *   This can be relied upon to order prior writes to data structures
+ *   in memory (such as a CD or an STE) before the command.
+ *
+ * - On completion of a CMD_SYNC, there is a control dependency.
+ *   This can be relied upon to order subsequent writes to memory (e.g.
+ *   freeing an IOVA) after completion of the CMD_SYNC.
+ *
+ * - Command insertion is totally ordered, so if two CPUs each race to
+ *   insert their own list of commands then all of the commands from one
+ *   CPU will appear before any of the commands from the other CPU.
+ */
+static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
+				       u64 *cmds, int n, bool sync)
+{
+	u64 cmd_sync[CMDQ_ENT_DWORDS];
+	u32 prod;
+	unsigned long flags;
+	bool owner;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	struct arm_smmu_ll_queue llq = {
+		.max_n_shift = cmdq->q.llq.max_n_shift,
+	}, head = llq;
+	int ret = 0;
+
+	/* 1. Allocate some space in the queue */
+	local_irq_save(flags);
+	llq.val = READ_ONCE(cmdq->q.llq.val);
+	do {
+		u64 old;
+
+		while (!queue_has_space(&llq, n + sync)) {
+			local_irq_restore(flags);
+			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
+				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
+			local_irq_save(flags);
+		}
+
+		head.cons = llq.cons;
+		head.prod = queue_inc_prod_n(&llq, n + sync) |
+					     CMDQ_PROD_OWNED_FLAG;
+
+		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
+		if (old == llq.val)
+			break;
+
+		llq.val = old;
+	} while (1);
+	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
+	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
+	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
+
+	/*
+	 * 2. Write our commands into the queue
+	 * Dependency ordering from the cmpxchg() loop above.
+	 */
+	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
+	if (sync) {
+		prod = queue_inc_prod_n(&llq, n);
+		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
+		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
+
+		/*
+		 * In order to determine completion of our CMD_SYNC, we must
+		 * ensure that the queue can't wrap twice without us noticing.
+		 * We achieve that by taking the cmdq lock as shared before
+		 * marking our slot as valid.
+		 */
+		arm_smmu_cmdq_shared_lock(cmdq);
+	}
+
+	/* 3. Mark our slots as valid, ensuring commands are visible first */
+	dma_wmb();
+	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
+
+	/* 4. If we are the owner, take control of the SMMU hardware */
+	if (owner) {
+		/* a. Wait for previous owner to finish */
+		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
+
+		/* b. Stop gathering work by clearing the owned flag */
+		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
+						   &cmdq->q.llq.atomic.prod);
+		prod &= ~CMDQ_PROD_OWNED_FLAG;
+
+		/*
+		 * c. Wait for any gathered work to be written to the queue.
+		 * Note that we read our own entries so that we have the control
+		 * dependency required by (d).
+		 */
+		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
+
+		/*
+		 * d. Advance the hardware prod pointer
+		 * Control dependency ordering from the entries becoming valid.
+		 */
+		writel_relaxed(prod, cmdq->q.prod_reg);
+
+		/*
+		 * e. Tell the next owner we're done
+		 * Make sure we've updated the hardware first, so that we don't
+		 * race to update prod and potentially move it backwards.
+		 */
+		atomic_set_release(&cmdq->owner_prod, prod);
+	}
+
+	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
+	if (sync) {
+		llq.prod = queue_inc_prod_n(&llq, n);
+		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
+		if (ret) {
+			dev_err_ratelimited(smmu->dev,
+					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
+					    llq.prod,
+					    readl_relaxed(cmdq->q.prod_reg),
+					    readl_relaxed(cmdq->q.cons_reg));
+		}
+
+		/*
+		 * Try to unlock the cmdq lock. This will fail if we're the last
+		 * reader, in which case we can safely update cmdq->q.llq.cons
+		 */
+		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
+			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
+			arm_smmu_cmdq_shared_unlock(cmdq);
+		}
+	}
+
+	local_irq_restore(flags);
+	return ret;
+}
+
+static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+				   struct arm_smmu_cmdq_ent *ent)
+{
+	u64 cmd[CMDQ_ENT_DWORDS];
+
+	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+			 ent->opcode);
+		return -EINVAL;
+	}
+
+	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
+}
+
+static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
+{
+	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
+}
+
+static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
+				    struct arm_smmu_cmdq_batch *cmds,
+				    struct arm_smmu_cmdq_ent *cmd)
+{
+	if (cmds->num == CMDQ_BATCH_ENTRIES) {
+		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
+		cmds->num = 0;
+	}
+	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
+	cmds->num++;
+}
+
+static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
+				      struct arm_smmu_cmdq_batch *cmds)
+{
+	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
+}
+
+/* Context descriptor manipulation functions */
+static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
+			     int ssid, bool leaf)
+{
+	size_t i;
+	unsigned long flags;
+	struct arm_smmu_master *master;
+	struct arm_smmu_cmdq_batch cmds = {};
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode	= CMDQ_OP_CFGI_CD,
+		.cfgi	= {
+			.ssid	= ssid,
+			.leaf	= leaf,
+		},
+	};
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+		for (i = 0; i < master->num_sids; i++) {
+			cmd.cfgi.sid = master->sids[i];
+			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+		}
+	}
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	arm_smmu_cmdq_batch_submit(smmu, &cmds);
+}
+
+static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
+					struct arm_smmu_l1_ctx_desc *l1_desc)
+{
+	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
+
+	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
+					     &l1_desc->l2ptr_dma, GFP_KERNEL);
+	if (!l1_desc->l2ptr) {
+		dev_warn(smmu->dev,
+			 "failed to allocate context descriptor table\n");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static void arm_smmu_write_cd_l1_desc(__le64 *dst,
+				      struct arm_smmu_l1_ctx_desc *l1_desc)
+{
+	u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) |
+		  CTXDESC_L1_DESC_V;
+
+	/* See comment in arm_smmu_write_ctx_desc() */
+	WRITE_ONCE(*dst, cpu_to_le64(val));
+}
+
+static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
+				   u32 ssid)
+{
+	__le64 *l1ptr;
+	unsigned int idx;
+	struct arm_smmu_l1_ctx_desc *l1_desc;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+
+	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
+		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
+
+	idx = ssid >> CTXDESC_SPLIT;
+	l1_desc = &cdcfg->l1_desc[idx];
+	if (!l1_desc->l2ptr) {
+		if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc))
+			return NULL;
+
+		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
+		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
+		/* An invalid L1CD can be cached */
+		arm_smmu_sync_cd(smmu_domain, ssid, false);
+	}
+	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
+	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
+}
+
+static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
+				   int ssid, struct arm_smmu_ctx_desc *cd)
+{
+	/*
+	 * This function handles the following cases:
+	 *
+	 * (1) Install primary CD, for normal DMA traffic (SSID = 0).
+	 * (2) Install a secondary CD, for SID+SSID traffic.
+	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
+	 *     CD, then invalidate the old entry and mappings.
+	 * (4) Remove a secondary CD.
+	 */
+	u64 val;
+	bool cd_live;
+	__le64 *cdptr;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
+		return -E2BIG;
+
+	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
+	if (!cdptr)
+		return -ENOMEM;
+
+	val = le64_to_cpu(cdptr[0]);
+	cd_live = !!(val & CTXDESC_CD_0_V);
+
+	if (!cd) { /* (4) */
+		val = 0;
+	} else if (cd_live) { /* (3) */
+		val &= ~CTXDESC_CD_0_ASID;
+		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
+		/*
+		 * Until CD+TLB invalidation, both ASIDs may be used for tagging
+		 * this substream's traffic
+		 */
+	} else { /* (1) and (2) */
+		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
+		cdptr[2] = 0;
+		cdptr[3] = cpu_to_le64(cd->mair);
+
+		/*
+		 * STE is live, and the SMMU might read dwords of this CD in any
+		 * order. Ensure that it observes valid values before reading
+		 * V=1.
+		 */
+		arm_smmu_sync_cd(smmu_domain, ssid, true);
+
+		val = cd->tcr |
+#ifdef __BIG_ENDIAN
+			CTXDESC_CD_0_ENDI |
+#endif
+			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
+			CTXDESC_CD_0_AA64 |
+			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
+			CTXDESC_CD_0_V;
+
+		/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
+		if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
+			val |= CTXDESC_CD_0_S;
+	}
+
+	/*
+	 * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
+	 * "Configuration structures and configuration invalidation completion"
+	 *
+	 *   The size of single-copy atomic reads made by the SMMU is
+	 *   IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
+	 *   field within an aligned 64-bit span of a structure can be altered
+	 *   without first making the structure invalid.
+	 */
+	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
+	arm_smmu_sync_cd(smmu_domain, ssid, true);
+	return 0;
+}
+
+static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
+{
+	int ret;
+	size_t l1size;
+	size_t max_contexts;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
+
+	max_contexts = 1 << cfg->s1cdmax;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
+	    max_contexts <= CTXDESC_L2_ENTRIES) {
+		cfg->s1fmt = STRTAB_STE_0_S1FMT_LINEAR;
+		cdcfg->num_l1_ents = max_contexts;
+
+		l1size = max_contexts * (CTXDESC_CD_DWORDS << 3);
+	} else {
+		cfg->s1fmt = STRTAB_STE_0_S1FMT_64K_L2;
+		cdcfg->num_l1_ents = DIV_ROUND_UP(max_contexts,
+						  CTXDESC_L2_ENTRIES);
+
+		cdcfg->l1_desc = devm_kcalloc(smmu->dev, cdcfg->num_l1_ents,
+					      sizeof(*cdcfg->l1_desc),
+					      GFP_KERNEL);
+		if (!cdcfg->l1_desc)
+			return -ENOMEM;
+
+		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+	}
+
+	cdcfg->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cdcfg->cdtab_dma,
+					   GFP_KERNEL);
+	if (!cdcfg->cdtab) {
+		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
+		ret = -ENOMEM;
+		goto err_free_l1;
+	}
+
+	return 0;
+
+err_free_l1:
+	if (cdcfg->l1_desc) {
+		devm_kfree(smmu->dev, cdcfg->l1_desc);
+		cdcfg->l1_desc = NULL;
+	}
+	return ret;
+}
+
+static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
+{
+	int i;
+	size_t size, l1size;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+
+	if (cdcfg->l1_desc) {
+		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
+
+		for (i = 0; i < cdcfg->num_l1_ents; i++) {
+			if (!cdcfg->l1_desc[i].l2ptr)
+				continue;
+
+			dmam_free_coherent(smmu->dev, size,
+					   cdcfg->l1_desc[i].l2ptr,
+					   cdcfg->l1_desc[i].l2ptr_dma);
+		}
+		devm_kfree(smmu->dev, cdcfg->l1_desc);
+		cdcfg->l1_desc = NULL;
+
+		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
+	} else {
+		l1size = cdcfg->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
+	}
+
+	dmam_free_coherent(smmu->dev, l1size, cdcfg->cdtab, cdcfg->cdtab_dma);
+	cdcfg->cdtab_dma = 0;
+	cdcfg->cdtab = NULL;
+}
+
+static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+{
+	if (!cd->asid)
+		return;
+
+	xa_erase(&asid_xa, cd->asid);
+}
+
+/* Stream table manipulation functions */
+static void
+arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
+{
+	u64 val = 0;
+
+	val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span);
+	val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
+
+	/* See comment in arm_smmu_write_ctx_desc() */
+	WRITE_ONCE(*dst, cpu_to_le64(val));
+}
+
+static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode	= CMDQ_OP_CFGI_STE,
+		.cfgi	= {
+			.sid	= sid,
+			.leaf	= true,
+		},
+	};
+
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+}
+
+static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
+				      __le64 *dst)
+{
+	/*
+	 * This is hideously complicated, but we only really care about
+	 * three cases at the moment:
+	 *
+	 * 1. Invalid (all zero) -> bypass/fault (init)
+	 * 2. Bypass/fault -> translation/bypass (attach)
+	 * 3. Translation/bypass -> bypass/fault (detach)
+	 *
+	 * Given that we can't update the STE atomically and the SMMU
+	 * doesn't read the thing in a defined order, that leaves us
+	 * with the following maintenance requirements:
+	 *
+	 * 1. Update Config, return (init time STEs aren't live)
+	 * 2. Write everything apart from dword 0, sync, write dword 0, sync
+	 * 3. Update Config, sync
+	 */
+	u64 val = le64_to_cpu(dst[0]);
+	bool ste_live = false;
+	struct arm_smmu_device *smmu = NULL;
+	struct arm_smmu_s1_cfg *s1_cfg = NULL;
+	struct arm_smmu_s2_cfg *s2_cfg = NULL;
+	struct arm_smmu_domain *smmu_domain = NULL;
+	struct arm_smmu_cmdq_ent prefetch_cmd = {
+		.opcode		= CMDQ_OP_PREFETCH_CFG,
+		.prefetch	= {
+			.sid	= sid,
+		},
+	};
+
+	if (master) {
+		smmu_domain = master->domain;
+		smmu = master->smmu;
+	}
+
+	if (smmu_domain) {
+		switch (smmu_domain->stage) {
+		case ARM_SMMU_DOMAIN_S1:
+			s1_cfg = &smmu_domain->s1_cfg;
+			break;
+		case ARM_SMMU_DOMAIN_S2:
+		case ARM_SMMU_DOMAIN_NESTED:
+			s2_cfg = &smmu_domain->s2_cfg;
+			break;
+		default:
+			break;
+		}
+	}
+
+	if (val & STRTAB_STE_0_V) {
+		switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
+		case STRTAB_STE_0_CFG_BYPASS:
+			break;
+		case STRTAB_STE_0_CFG_S1_TRANS:
+		case STRTAB_STE_0_CFG_S2_TRANS:
+			ste_live = true;
+			break;
+		case STRTAB_STE_0_CFG_ABORT:
+			BUG_ON(!disable_bypass);
+			break;
+		default:
+			BUG(); /* STE corruption */
+		}
+	}
+
+	/* Nuke the existing STE_0 value, as we're going to rewrite it */
+	val = STRTAB_STE_0_V;
+
+	/* Bypass/fault */
+	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
+		if (!smmu_domain && disable_bypass)
+			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
+		else
+			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
+
+		dst[0] = cpu_to_le64(val);
+		dst[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
+						STRTAB_STE_1_SHCFG_INCOMING));
+		dst[2] = 0; /* Nuke the VMID */
+		/*
+		 * The SMMU can perform negative caching, so we must sync
+		 * the STE regardless of whether the old value was live.
+		 */
+		if (smmu)
+			arm_smmu_sync_ste_for_sid(smmu, sid);
+		return;
+	}
+
+	if (s1_cfg) {
+		BUG_ON(ste_live);
+		dst[1] = cpu_to_le64(
+			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
+			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
+			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
+			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
+			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
+
+		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
+		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
+			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
+
+		val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
+			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
+			FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) |
+			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
+	}
+
+	if (s2_cfg) {
+		BUG_ON(ste_live);
+		dst[2] = cpu_to_le64(
+			 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
+			 FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) |
+#ifdef __BIG_ENDIAN
+			 STRTAB_STE_2_S2ENDI |
+#endif
+			 STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
+			 STRTAB_STE_2_S2R);
+
+		dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK);
+
+		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
+	}
+
+	if (master->ats_enabled)
+		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
+						 STRTAB_STE_1_EATS_TRANS));
+
+	arm_smmu_sync_ste_for_sid(smmu, sid);
+	/* See comment in arm_smmu_write_ctx_desc() */
+	WRITE_ONCE(dst[0], cpu_to_le64(val));
+	arm_smmu_sync_ste_for_sid(smmu, sid);
+
+	/* It's likely that we'll want to use the new STE soon */
+	if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
+		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+}
+
+static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
+{
+	unsigned int i;
+
+	for (i = 0; i < nent; ++i) {
+		arm_smmu_write_strtab_ent(NULL, -1, strtab);
+		strtab += STRTAB_STE_DWORDS;
+	}
+}
+
+static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+	size_t size;
+	void *strtab;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+
+	if (desc->l2ptr)
+		return 0;
+
+	size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+	strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
+
+	desc->span = STRTAB_SPLIT + 1;
+	desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
+					  GFP_KERNEL);
+	if (!desc->l2ptr) {
+		dev_err(smmu->dev,
+			"failed to allocate l2 stream table for SID %u\n",
+			sid);
+		return -ENOMEM;
+	}
+
+	arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
+	arm_smmu_write_strtab_l1_desc(strtab, desc);
+	return 0;
+}
+
+/* IRQ and event handlers */
+static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+{
+	int i;
+	struct arm_smmu_device *smmu = dev;
+	struct arm_smmu_queue *q = &smmu->evtq.q;
+	struct arm_smmu_ll_queue *llq = &q->llq;
+	u64 evt[EVTQ_ENT_DWORDS];
+
+	do {
+		while (!queue_remove_raw(q, evt)) {
+			u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
+
+			dev_info(smmu->dev, "event 0x%02x received:\n", id);
+			for (i = 0; i < ARRAY_SIZE(evt); ++i)
+				dev_info(smmu->dev, "\t0x%016llx\n",
+					 (unsigned long long)evt[i]);
+
+		}
+
+		/*
+		 * Not much we can do on overflow, so scream and pretend we're
+		 * trying harder.
+		 */
+		if (queue_sync_prod_in(q) == -EOVERFLOW)
+			dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
+	} while (!queue_empty(llq));
+
+	/* Sync our overflow flag, as we believe we're up to speed */
+	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+		    Q_IDX(llq, llq->cons);
+	return IRQ_HANDLED;
+}
+
+static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
+{
+	u32 sid, ssid;
+	u16 grpid;
+	bool ssv, last;
+
+	sid = FIELD_GET(PRIQ_0_SID, evt[0]);
+	ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
+	ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
+	last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
+	grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
+
+	dev_info(smmu->dev, "unexpected PRI request received:\n");
+	dev_info(smmu->dev,
+		 "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
+		 sid, ssid, grpid, last ? "L" : "",
+		 evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
+		 evt[0] & PRIQ_0_PERM_READ ? "R" : "",
+		 evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
+		 evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
+		 evt[1] & PRIQ_1_ADDR_MASK);
+
+	if (last) {
+		struct arm_smmu_cmdq_ent cmd = {
+			.opcode			= CMDQ_OP_PRI_RESP,
+			.substream_valid	= ssv,
+			.pri			= {
+				.sid	= sid,
+				.ssid	= ssid,
+				.grpid	= grpid,
+				.resp	= PRI_RESP_DENY,
+			},
+		};
+
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	}
+}
+
+static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+{
+	struct arm_smmu_device *smmu = dev;
+	struct arm_smmu_queue *q = &smmu->priq.q;
+	struct arm_smmu_ll_queue *llq = &q->llq;
+	u64 evt[PRIQ_ENT_DWORDS];
+
+	do {
+		while (!queue_remove_raw(q, evt))
+			arm_smmu_handle_ppr(smmu, evt);
+
+		if (queue_sync_prod_in(q) == -EOVERFLOW)
+			dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+	} while (!queue_empty(llq));
+
+	/* Sync our overflow flag, as we believe we're up to speed */
+	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+		      Q_IDX(llq, llq->cons);
+	queue_sync_cons_out(q);
+	return IRQ_HANDLED;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+
+static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
+{
+	u32 gerror, gerrorn, active;
+	struct arm_smmu_device *smmu = dev;
+
+	gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+	gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+
+	active = gerror ^ gerrorn;
+	if (!(active & GERROR_ERR_MASK))
+		return IRQ_NONE; /* No errors pending */
+
+	dev_warn(smmu->dev,
+		 "unexpected global error reported (0x%08x), this could be serious\n",
+		 active);
+
+	if (active & GERROR_SFM_ERR) {
+		dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
+		arm_smmu_device_disable(smmu);
+	}
+
+	if (active & GERROR_MSI_GERROR_ABT_ERR)
+		dev_warn(smmu->dev, "GERROR MSI write aborted\n");
+
+	if (active & GERROR_MSI_PRIQ_ABT_ERR)
+		dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
+
+	if (active & GERROR_MSI_EVTQ_ABT_ERR)
+		dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
+
+	if (active & GERROR_MSI_CMDQ_ABT_ERR)
+		dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
+
+	if (active & GERROR_PRIQ_ABT_ERR)
+		dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
+
+	if (active & GERROR_EVTQ_ABT_ERR)
+		dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
+
+	if (active & GERROR_CMDQ_ERR)
+		arm_smmu_cmdq_skip_err(smmu);
+
+	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
+{
+	struct arm_smmu_device *smmu = dev;
+
+	arm_smmu_evtq_thread(irq, dev);
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		arm_smmu_priq_thread(irq, dev);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
+{
+	arm_smmu_gerror_handler(irq, dev);
+	return IRQ_WAKE_THREAD;
+}
+
+static void
+arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
+			struct arm_smmu_cmdq_ent *cmd)
+{
+	size_t log2_span;
+	size_t span_mask;
+	/* ATC invalidates are always on 4096-byte pages */
+	size_t inval_grain_shift = 12;
+	unsigned long page_start, page_end;
+
+	*cmd = (struct arm_smmu_cmdq_ent) {
+		.opcode			= CMDQ_OP_ATC_INV,
+		.substream_valid	= !!ssid,
+		.atc.ssid		= ssid,
+	};
+
+	if (!size) {
+		cmd->atc.size = ATC_INV_SIZE_ALL;
+		return;
+	}
+
+	page_start	= iova >> inval_grain_shift;
+	page_end	= (iova + size - 1) >> inval_grain_shift;
+
+	/*
+	 * In an ATS Invalidate Request, the address must be aligned on the
+	 * range size, which must be a power of two number of page sizes. We
+	 * thus have to choose between grossly over-invalidating the region, or
+	 * splitting the invalidation into multiple commands. For simplicity
+	 * we'll go with the first solution, but should refine it in the future
+	 * if multiple commands are shown to be more efficient.
+	 *
+	 * Find the smallest power of two that covers the range. The most
+	 * significant differing bit between the start and end addresses,
+	 * fls(start ^ end), indicates the required span. For example:
+	 *
+	 * We want to invalidate pages [8; 11]. This is already the ideal range:
+	 *		x = 0b1000 ^ 0b1011 = 0b11
+	 *		span = 1 << fls(x) = 4
+	 *
+	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
+	 *		x = 0b0111 ^ 0b1010 = 0b1101
+	 *		span = 1 << fls(x) = 16
+	 */
+	log2_span	= fls_long(page_start ^ page_end);
+	span_mask	= (1ULL << log2_span) - 1;
+
+	page_start	&= ~span_mask;
+
+	cmd->atc.addr	= page_start << inval_grain_shift;
+	cmd->atc.size	= log2_span;
+}
+
+static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
+{
+	int i;
+	struct arm_smmu_cmdq_ent cmd;
+
+	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+
+	for (i = 0; i < master->num_sids; i++) {
+		cmd.atc.sid = master->sids[i];
+		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
+	}
+
+	return arm_smmu_cmdq_issue_sync(master->smmu);
+}
+
+static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
+				   int ssid, unsigned long iova, size_t size)
+{
+	int i;
+	unsigned long flags;
+	struct arm_smmu_cmdq_ent cmd;
+	struct arm_smmu_master *master;
+	struct arm_smmu_cmdq_batch cmds = {};
+
+	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
+		return 0;
+
+	/*
+	 * Ensure that we've completed prior invalidation of the main TLBs
+	 * before we read 'nr_ats_masters' in case of a concurrent call to
+	 * arm_smmu_enable_ats():
+	 *
+	 *	// unmap()			// arm_smmu_enable_ats()
+	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
+	 *	smp_mb();			[...]
+	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
+	 *
+	 * Ensures that we always see the incremented 'nr_ats_masters' count if
+	 * ATS was enabled at the PCI device before completion of the TLBI.
+	 */
+	smp_mb();
+	if (!atomic_read(&smmu_domain->nr_ats_masters))
+		return 0;
+
+	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+		if (!master->ats_enabled)
+			continue;
+
+		for (i = 0; i < master->num_sids; i++) {
+			cmd.atc.sid = master->sids[i];
+			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
+		}
+	}
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
+}
+
+/* IO_PGTABLE API */
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_cmdq_ent cmd;
+
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
+		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+		cmd.tlbi.vmid	= 0;
+	} else {
+		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+	}
+
+	/*
+	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
+	 * PTEs previously cleared by unmaps on the current CPU not yet visible
+	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
+	 * insertion to guarantee those are observed before the TLBI. Do be
+	 * careful, 007.
+	 */
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
+}
+
+static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
+				   size_t granule, bool leaf,
+				   struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
+	size_t inv_range = granule;
+	struct arm_smmu_cmdq_batch cmds = {};
+	struct arm_smmu_cmdq_ent cmd = {
+		.tlbi = {
+			.leaf	= leaf,
+		},
+	};
+
+	if (!size)
+		return;
+
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
+		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+	} else {
+		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
+		/* Get the leaf page size */
+		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
+
+		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
+		cmd.tlbi.tg = (tg - 10) / 2;
+
+		/* Determine what level the granule is at */
+		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
+
+		num_pages = size >> tg;
+	}
+
+	while (iova < end) {
+		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
+			/*
+			 * On each iteration of the loop, the range is 5 bits
+			 * worth of the aligned size remaining.
+			 * The range in pages is:
+			 *
+			 * range = (num_pages & (0x1f << __ffs(num_pages)))
+			 */
+			unsigned long scale, num;
+
+			/* Determine the power of 2 multiple number of pages */
+			scale = __ffs(num_pages);
+			cmd.tlbi.scale = scale;
+
+			/* Determine how many chunks of 2^scale size we have */
+			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
+			cmd.tlbi.num = num - 1;
+
+			/* range is num * 2^scale * pgsize */
+			inv_range = num << (scale + tg);
+
+			/* Clear out the lower order bits for the next iteration */
+			num_pages -= num << scale;
+		}
+
+		cmd.tlbi.addr = iova;
+		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+		iova += inv_range;
+	}
+	arm_smmu_cmdq_batch_submit(smmu, &cmds);
+
+	/*
+	 * Unfortunately, this can't be leaf-only since we may have
+	 * zapped an entire table.
+	 */
+	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
+}
+
+static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
+					 unsigned long iova, size_t granule,
+					 void *cookie)
+{
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct iommu_domain *domain = &smmu_domain->domain;
+
+	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
+}
+
+static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
+				  size_t granule, void *cookie)
+{
+	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
+}
+
+static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
+				  size_t granule, void *cookie)
+{
+	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
+}
+
+static const struct iommu_flush_ops arm_smmu_flush_ops = {
+	.tlb_flush_all	= arm_smmu_tlb_inv_context,
+	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
+	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
+	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
+};
+
+/* IOMMU API */
+static bool arm_smmu_capable(enum iommu_cap cap)
+{
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		return true;
+	case IOMMU_CAP_NOEXEC:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
+{
+	struct arm_smmu_domain *smmu_domain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED &&
+	    type != IOMMU_DOMAIN_DMA &&
+	    type != IOMMU_DOMAIN_IDENTITY)
+		return NULL;
+
+	/*
+	 * Allocate the domain and initialise some of its data structures.
+	 * We can't really do anything meaningful until we've added a
+	 * master.
+	 */
+	smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
+	if (!smmu_domain)
+		return NULL;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&smmu_domain->domain)) {
+		kfree(smmu_domain);
+		return NULL;
+	}
+
+	mutex_init(&smmu_domain->init_mutex);
+	INIT_LIST_HEAD(&smmu_domain->devices);
+	spin_lock_init(&smmu_domain->devices_lock);
+
+	return &smmu_domain->domain;
+}
+
+static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
+{
+	int idx, size = 1 << span;
+
+	do {
+		idx = find_first_zero_bit(map, size);
+		if (idx == size)
+			return -ENOSPC;
+	} while (test_and_set_bit(idx, map));
+
+	return idx;
+}
+
+static void arm_smmu_bitmap_free(unsigned long *map, int idx)
+{
+	clear_bit(idx, map);
+}
+
+static void arm_smmu_domain_free(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	iommu_put_dma_cookie(domain);
+	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
+
+	/* Free the CD and ASID, if we allocated them */
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+
+		if (cfg->cdcfg.cdtab)
+			arm_smmu_free_cd_tables(smmu_domain);
+		arm_smmu_free_asid(&cfg->cd);
+	} else {
+		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+		if (cfg->vmid)
+			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+	}
+
+	kfree(smmu_domain);
+}
+
+static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+				       struct arm_smmu_master *master,
+				       struct io_pgtable_cfg *pgtbl_cfg)
+{
+	int ret;
+	u32 asid;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
+
+	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
+		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	cfg->s1cdmax = master->ssid_bits;
+
+	ret = arm_smmu_alloc_cd_tables(smmu_domain);
+	if (ret)
+		goto out_free_asid;
+
+	cfg->cd.asid	= (u16)asid;
+	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
+	cfg->cd.tcr	= FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) |
+			  FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) |
+			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
+	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
+
+	/*
+	 * Note that this will end up calling arm_smmu_sync_cd() before
+	 * the master has been added to the devices list for this domain.
+	 * This isn't an issue because the STE hasn't been installed yet.
+	 */
+	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, &cfg->cd);
+	if (ret)
+		goto out_free_cd_tables;
+
+	return 0;
+
+out_free_cd_tables:
+	arm_smmu_free_cd_tables(smmu_domain);
+out_free_asid:
+	arm_smmu_free_asid(&cfg->cd);
+	return ret;
+}
+
+static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+				       struct arm_smmu_master *master,
+				       struct io_pgtable_cfg *pgtbl_cfg)
+{
+	int vmid;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
+
+	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+	if (vmid < 0)
+		return vmid;
+
+	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
+	cfg->vmid	= (u16)vmid;
+	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
+	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
+			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
+	return 0;
+}
+
+static int arm_smmu_domain_finalise(struct iommu_domain *domain,
+				    struct arm_smmu_master *master)
+{
+	int ret;
+	unsigned long ias, oas;
+	enum io_pgtable_fmt fmt;
+	struct io_pgtable_cfg pgtbl_cfg;
+	struct io_pgtable_ops *pgtbl_ops;
+	int (*finalise_stage_fn)(struct arm_smmu_domain *,
+				 struct arm_smmu_master *,
+				 struct io_pgtable_cfg *);
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
+		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
+		return 0;
+	}
+
+	/* Restrict the stage to what we can actually support */
+	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
+		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
+	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
+		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+
+	switch (smmu_domain->stage) {
+	case ARM_SMMU_DOMAIN_S1:
+		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
+		ias = min_t(unsigned long, ias, VA_BITS);
+		oas = smmu->ias;
+		fmt = ARM_64_LPAE_S1;
+		finalise_stage_fn = arm_smmu_domain_finalise_s1;
+		break;
+	case ARM_SMMU_DOMAIN_NESTED:
+	case ARM_SMMU_DOMAIN_S2:
+		ias = smmu->ias;
+		oas = smmu->oas;
+		fmt = ARM_64_LPAE_S2;
+		finalise_stage_fn = arm_smmu_domain_finalise_s2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	pgtbl_cfg = (struct io_pgtable_cfg) {
+		.pgsize_bitmap	= smmu->pgsize_bitmap,
+		.ias		= ias,
+		.oas		= oas,
+		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
+		.tlb		= &arm_smmu_flush_ops,
+		.iommu_dev	= smmu->dev,
+	};
+
+	if (smmu_domain->non_strict)
+		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+
+	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
+	if (!pgtbl_ops)
+		return -ENOMEM;
+
+	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
+	domain->geometry.force_aperture = true;
+
+	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
+	if (ret < 0) {
+		free_io_pgtable_ops(pgtbl_ops);
+		return ret;
+	}
+
+	smmu_domain->pgtbl_ops = pgtbl_ops;
+	return 0;
+}
+
+static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+	__le64 *step;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+		struct arm_smmu_strtab_l1_desc *l1_desc;
+		int idx;
+
+		/* Two-level walk */
+		idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
+		l1_desc = &cfg->l1_desc[idx];
+		idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
+		step = &l1_desc->l2ptr[idx];
+	} else {
+		/* Simple linear lookup */
+		step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
+	}
+
+	return step;
+}
+
+static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
+{
+	int i, j;
+	struct arm_smmu_device *smmu = master->smmu;
+
+	for (i = 0; i < master->num_sids; ++i) {
+		u32 sid = master->sids[i];
+		__le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
+
+		/* Bridged PCI devices may end up with duplicated IDs */
+		for (j = 0; j < i; j++)
+			if (master->sids[j] == sid)
+				break;
+		if (j < i)
+			continue;
+
+		arm_smmu_write_strtab_ent(master, sid, step);
+	}
+}
+
+static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
+{
+	struct device *dev = master->dev;
+	struct arm_smmu_device *smmu = master->smmu;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
+		return false;
+
+	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
+		return false;
+
+	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
+}
+
+static void arm_smmu_enable_ats(struct arm_smmu_master *master)
+{
+	size_t stu;
+	struct pci_dev *pdev;
+	struct arm_smmu_device *smmu = master->smmu;
+	struct arm_smmu_domain *smmu_domain = master->domain;
+
+	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
+	if (!master->ats_enabled)
+		return;
+
+	/* Smallest Translation Unit: log2 of the smallest supported granule */
+	stu = __ffs(smmu->pgsize_bitmap);
+	pdev = to_pci_dev(master->dev);
+
+	atomic_inc(&smmu_domain->nr_ats_masters);
+	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
+	if (pci_enable_ats(pdev, stu))
+		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
+}
+
+static void arm_smmu_disable_ats(struct arm_smmu_master *master)
+{
+	struct arm_smmu_domain *smmu_domain = master->domain;
+
+	if (!master->ats_enabled)
+		return;
+
+	pci_disable_ats(to_pci_dev(master->dev));
+	/*
+	 * Ensure ATS is disabled at the endpoint before we issue the
+	 * ATC invalidation via the SMMU.
+	 */
+	wmb();
+	arm_smmu_atc_inv_master(master);
+	atomic_dec(&smmu_domain->nr_ats_masters);
+}
+
+static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
+{
+	int ret;
+	int features;
+	int num_pasids;
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return -ENODEV;
+
+	pdev = to_pci_dev(master->dev);
+
+	features = pci_pasid_features(pdev);
+	if (features < 0)
+		return features;
+
+	num_pasids = pci_max_pasids(pdev);
+	if (num_pasids <= 0)
+		return num_pasids;
+
+	ret = pci_enable_pasid(pdev, features);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to enable PASID\n");
+		return ret;
+	}
+
+	master->ssid_bits = min_t(u8, ilog2(num_pasids),
+				  master->smmu->ssid_bits);
+	return 0;
+}
+
+static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
+{
+	struct pci_dev *pdev;
+
+	if (!dev_is_pci(master->dev))
+		return;
+
+	pdev = to_pci_dev(master->dev);
+
+	if (!pdev->pasid_enabled)
+		return;
+
+	master->ssid_bits = 0;
+	pci_disable_pasid(pdev);
+}
+
+static void arm_smmu_detach_dev(struct arm_smmu_master *master)
+{
+	unsigned long flags;
+	struct arm_smmu_domain *smmu_domain = master->domain;
+
+	if (!smmu_domain)
+		return;
+
+	arm_smmu_disable_ats(master);
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_del(&master->domain_head);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	master->domain = NULL;
+	master->ats_enabled = false;
+	arm_smmu_install_ste_for_dev(master);
+}
+
+static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int ret = 0;
+	unsigned long flags;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct arm_smmu_device *smmu;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_master *master;
+
+	if (!fwspec)
+		return -ENOENT;
+
+	master = dev_iommu_priv_get(dev);
+	smmu = master->smmu;
+
+	arm_smmu_detach_dev(master);
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	if (!smmu_domain->smmu) {
+		smmu_domain->smmu = smmu;
+		ret = arm_smmu_domain_finalise(domain, master);
+		if (ret) {
+			smmu_domain->smmu = NULL;
+			goto out_unlock;
+		}
+	} else if (smmu_domain->smmu != smmu) {
+		dev_err(dev,
+			"cannot attach to SMMU %s (upstream of %s)\n",
+			dev_name(smmu_domain->smmu->dev),
+			dev_name(smmu->dev));
+		ret = -ENXIO;
+		goto out_unlock;
+	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
+		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
+		dev_err(dev,
+			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
+			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	master->domain = smmu_domain;
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
+		master->ats_enabled = arm_smmu_ats_supported(master);
+
+	arm_smmu_install_ste_for_dev(master);
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_add(&master->domain_head, &smmu_domain->devices);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+	arm_smmu_enable_ats(master);
+
+out_unlock:
+	mutex_unlock(&smmu_domain->init_mutex);
+	return ret;
+}
+
+static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
+			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+	if (!ops)
+		return -ENODEV;
+
+	return ops->map(ops, iova, paddr, size, prot, gfp);
+}
+
+static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
+			     size_t size, struct iommu_iotlb_gather *gather)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (!ops)
+		return 0;
+
+	return ops->unmap(ops, iova, size, gather);
+}
+
+static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	if (smmu_domain->smmu)
+		arm_smmu_tlb_inv_context(smmu_domain);
+}
+
+static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
+				struct iommu_iotlb_gather *gather)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
+			       gather->pgsize, true, smmu_domain);
+}
+
+static phys_addr_t
+arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
+{
+	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
+
+	if (domain->type == IOMMU_DOMAIN_IDENTITY)
+		return iova;
+
+	if (!ops)
+		return 0;
+
+	return ops->iova_to_phys(ops, iova);
+}
+
+static struct platform_driver arm_smmu_driver;
+
+static
+struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
+							  fwnode);
+	put_device(dev);
+	return dev ? dev_get_drvdata(dev) : NULL;
+}
+
+static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
+{
+	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+		limit *= 1UL << STRTAB_SPLIT;
+
+	return sid < limit;
+}
+
+static struct iommu_ops arm_smmu_ops;
+
+static struct iommu_device *arm_smmu_probe_device(struct device *dev)
+{
+	int i, ret;
+	struct arm_smmu_device *smmu;
+	struct arm_smmu_master *master;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	if (!fwspec || fwspec->ops != &arm_smmu_ops)
+		return ERR_PTR(-ENODEV);
+
+	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
+		return ERR_PTR(-EBUSY);
+
+	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!smmu)
+		return ERR_PTR(-ENODEV);
+
+	master = kzalloc(sizeof(*master), GFP_KERNEL);
+	if (!master)
+		return ERR_PTR(-ENOMEM);
+
+	master->dev = dev;
+	master->smmu = smmu;
+	master->sids = fwspec->ids;
+	master->num_sids = fwspec->num_ids;
+	dev_iommu_priv_set(dev, master);
+
+	/* Check the SIDs are in range of the SMMU and our stream table */
+	for (i = 0; i < master->num_sids; i++) {
+		u32 sid = master->sids[i];
+
+		if (!arm_smmu_sid_in_range(smmu, sid)) {
+			ret = -ERANGE;
+			goto err_free_master;
+		}
+
+		/* Ensure l2 strtab is initialised */
+		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
+			ret = arm_smmu_init_l2_strtab(smmu, sid);
+			if (ret)
+				goto err_free_master;
+		}
+	}
+
+	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
+
+	/*
+	 * Note that PASID must be enabled before, and disabled after ATS:
+	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
+	 *
+	 *   Behavior is undefined if this bit is Set and the value of the PASID
+	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
+	 *   are changed.
+	 */
+	arm_smmu_enable_pasid(master);
+
+	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
+		master->ssid_bits = min_t(u8, master->ssid_bits,
+					  CTXDESC_LINEAR_CDMAX);
+
+	return &smmu->iommu;
+
+err_free_master:
+	kfree(master);
+	dev_iommu_priv_set(dev, NULL);
+	return ERR_PTR(ret);
+}
+
+static void arm_smmu_release_device(struct device *dev)
+{
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct arm_smmu_master *master;
+
+	if (!fwspec || fwspec->ops != &arm_smmu_ops)
+		return;
+
+	master = dev_iommu_priv_get(dev);
+	arm_smmu_detach_dev(master);
+	arm_smmu_disable_pasid(master);
+	kfree(master);
+	iommu_fwspec_free(dev);
+}
+
+static struct iommu_group *arm_smmu_device_group(struct device *dev)
+{
+	struct iommu_group *group;
+
+	/*
+	 * We don't support devices sharing stream IDs other than PCI RID
+	 * aliases, since the necessary ID-to-device lookup becomes rather
+	 * impractical given a potential sparse 32-bit stream ID space.
+	 */
+	if (dev_is_pci(dev))
+		group = pci_device_group(dev);
+	else
+		group = generic_device_group(dev);
+
+	return group;
+}
+
+static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
+				    enum iommu_attr attr, void *data)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		switch (attr) {
+		case DOMAIN_ATTR_NESTING:
+			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = smmu_domain->non_strict;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
+				    enum iommu_attr attr, void *data)
+{
+	int ret = 0;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		switch (attr) {
+		case DOMAIN_ATTR_NESTING:
+			if (smmu_domain->smmu) {
+				ret = -EPERM;
+				goto out_unlock;
+			}
+
+			if (*(int *)data)
+				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
+			else
+				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+			break;
+		default:
+			ret = -ENODEV;
+		}
+		break;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			smmu_domain->non_strict = *(int *)data;
+			break;
+		default:
+			ret = -ENODEV;
+		}
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+out_unlock:
+	mutex_unlock(&smmu_domain->init_mutex);
+	return ret;
+}
+
+static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void arm_smmu_get_resv_regions(struct device *dev,
+				      struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					 prot, IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+
+	iommu_dma_get_resv_regions(dev, head);
+}
+
+static struct iommu_ops arm_smmu_ops = {
+	.capable		= arm_smmu_capable,
+	.domain_alloc		= arm_smmu_domain_alloc,
+	.domain_free		= arm_smmu_domain_free,
+	.attach_dev		= arm_smmu_attach_dev,
+	.map			= arm_smmu_map,
+	.unmap			= arm_smmu_unmap,
+	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
+	.iotlb_sync		= arm_smmu_iotlb_sync,
+	.iova_to_phys		= arm_smmu_iova_to_phys,
+	.probe_device		= arm_smmu_probe_device,
+	.release_device		= arm_smmu_release_device,
+	.device_group		= arm_smmu_device_group,
+	.domain_get_attr	= arm_smmu_domain_get_attr,
+	.domain_set_attr	= arm_smmu_domain_set_attr,
+	.of_xlate		= arm_smmu_of_xlate,
+	.get_resv_regions	= arm_smmu_get_resv_regions,
+	.put_resv_regions	= generic_iommu_put_resv_regions,
+	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
+};
+
+/* Probing and initialisation functions */
+static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+				   struct arm_smmu_queue *q,
+				   unsigned long prod_off,
+				   unsigned long cons_off,
+				   size_t dwords, const char *name)
+{
+	size_t qsz;
+
+	do {
+		qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
+		q->base = dmam_alloc_coherent(smmu->dev, qsz, &q->base_dma,
+					      GFP_KERNEL);
+		if (q->base || qsz < PAGE_SIZE)
+			break;
+
+		q->llq.max_n_shift--;
+	} while (1);
+
+	if (!q->base) {
+		dev_err(smmu->dev,
+			"failed to allocate queue (0x%zx bytes) for %s\n",
+			qsz, name);
+		return -ENOMEM;
+	}
+
+	if (!WARN_ON(q->base_dma & (qsz - 1))) {
+		dev_info(smmu->dev, "allocated %u entries for %s\n",
+			 1 << q->llq.max_n_shift, name);
+	}
+
+	q->prod_reg	= arm_smmu_page1_fixup(prod_off, smmu);
+	q->cons_reg	= arm_smmu_page1_fixup(cons_off, smmu);
+	q->ent_dwords	= dwords;
+
+	q->q_base  = Q_BASE_RWA;
+	q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
+	q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
+
+	q->llq.prod = q->llq.cons = 0;
+	return 0;
+}
+
+static void arm_smmu_cmdq_free_bitmap(void *data)
+{
+	unsigned long *bitmap = data;
+	bitmap_free(bitmap);
+}
+
+static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
+{
+	int ret = 0;
+	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
+	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
+	atomic_long_t *bitmap;
+
+	atomic_set(&cmdq->owner_prod, 0);
+	atomic_set(&cmdq->lock, 0);
+
+	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
+	if (!bitmap) {
+		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
+		ret = -ENOMEM;
+	} else {
+		cmdq->valid_map = bitmap;
+		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
+	}
+
+	return ret;
+}
+
+static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	/* cmdq */
+	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
+				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
+				      "cmdq");
+	if (ret)
+		return ret;
+
+	ret = arm_smmu_cmdq_init(smmu);
+	if (ret)
+		return ret;
+
+	/* evtq */
+	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
+				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
+				      "evtq");
+	if (ret)
+		return ret;
+
+	/* priq */
+	if (!(smmu->features & ARM_SMMU_FEAT_PRI))
+		return 0;
+
+	return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
+				       ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS,
+				       "priq");
+}
+
+static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
+{
+	unsigned int i;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
+	void *strtab = smmu->strtab_cfg.strtab;
+
+	cfg->l1_desc = devm_kzalloc(smmu->dev, size, GFP_KERNEL);
+	if (!cfg->l1_desc) {
+		dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < cfg->num_l1_ents; ++i) {
+		arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
+		strtab += STRTAB_L1_DESC_DWORDS << 3;
+	}
+
+	return 0;
+}
+
+static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
+{
+	void *strtab;
+	u64 reg;
+	u32 size, l1size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	/* Calculate the L1 size, capped to the SIDSIZE. */
+	size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
+	size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+	cfg->num_l1_ents = 1 << size;
+
+	size += STRTAB_SPLIT;
+	if (size < smmu->sid_bits)
+		dev_warn(smmu->dev,
+			 "2-level strtab only covers %u/%u bits of SID\n",
+			 size, smmu->sid_bits);
+
+	l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
+	strtab = dmam_alloc_coherent(smmu->dev, l1size, &cfg->strtab_dma,
+				     GFP_KERNEL);
+	if (!strtab) {
+		dev_err(smmu->dev,
+			"failed to allocate l1 stream table (%u bytes)\n",
+			l1size);
+		return -ENOMEM;
+	}
+	cfg->strtab = strtab;
+
+	/* Configure strtab_base_cfg for 2 levels */
+	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
+	cfg->strtab_base_cfg = reg;
+
+	return arm_smmu_init_l1_strtab(smmu);
+}
+
+static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
+{
+	void *strtab;
+	u64 reg;
+	u32 size;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+	size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
+	strtab = dmam_alloc_coherent(smmu->dev, size, &cfg->strtab_dma,
+				     GFP_KERNEL);
+	if (!strtab) {
+		dev_err(smmu->dev,
+			"failed to allocate linear stream table (%u bytes)\n",
+			size);
+		return -ENOMEM;
+	}
+	cfg->strtab = strtab;
+	cfg->num_l1_ents = 1 << smmu->sid_bits;
+
+	/* Configure strtab_base_cfg for a linear table covering all SIDs */
+	reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_LINEAR);
+	reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
+	cfg->strtab_base_cfg = reg;
+
+	arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
+	return 0;
+}
+
+static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
+{
+	u64 reg;
+	int ret;
+
+	if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB)
+		ret = arm_smmu_init_strtab_2lvl(smmu);
+	else
+		ret = arm_smmu_init_strtab_linear(smmu);
+
+	if (ret)
+		return ret;
+
+	/* Set the strtab base address */
+	reg  = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK;
+	reg |= STRTAB_BASE_RA;
+	smmu->strtab_cfg.strtab_base = reg;
+
+	/* Allocate the first VMID for stage-2 bypass STEs */
+	set_bit(0, smmu->vmid_map);
+	return 0;
+}
+
+static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	ret = arm_smmu_init_queues(smmu);
+	if (ret)
+		return ret;
+
+	return arm_smmu_init_strtab(smmu);
+}
+
+static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+				   unsigned int reg_off, unsigned int ack_off)
+{
+	u32 reg;
+
+	writel_relaxed(val, smmu->base + reg_off);
+	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+					  1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+/* GBPA is "special" */
+static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+{
+	int ret;
+	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
+
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	reg &= ~clr;
+	reg |= set;
+	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+
+	if (ret)
+		dev_err(smmu->dev, "GBPA not responding to update\n");
+	return ret;
+}
+
+static void arm_smmu_free_msis(void *data)
+{
+	struct device *dev = data;
+	platform_msi_domain_free_irqs(dev);
+}
+
+static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
+{
+	phys_addr_t doorbell;
+	struct device *dev = msi_desc_to_dev(desc);
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
+	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
+
+	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
+	doorbell &= MSI_CFG0_ADDR_MASK;
+
+	writeq_relaxed(doorbell, smmu->base + cfg[0]);
+	writel_relaxed(msg->data, smmu->base + cfg[1]);
+	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
+}
+
+static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
+{
+	struct msi_desc *desc;
+	int ret, nvec = ARM_SMMU_MAX_MSIS;
+	struct device *dev = smmu->dev;
+
+	/* Clear the MSI address regs */
+	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
+	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
+	else
+		nvec--;
+
+	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
+		return;
+
+	if (!dev->msi_domain) {
+		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
+		return;
+	}
+
+	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
+	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
+	if (ret) {
+		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
+		return;
+	}
+
+	for_each_msi_entry(desc, dev) {
+		switch (desc->platform.msi_index) {
+		case EVTQ_MSI_INDEX:
+			smmu->evtq.q.irq = desc->irq;
+			break;
+		case GERROR_MSI_INDEX:
+			smmu->gerr_irq = desc->irq;
+			break;
+		case PRIQ_MSI_INDEX:
+			smmu->priq.q.irq = desc->irq;
+			break;
+		default:	/* Unknown */
+			continue;
+		}
+	}
+
+	/* Add callback to free MSIs on teardown */
+	devm_add_action(dev, arm_smmu_free_msis, dev);
+}
+
+static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
+{
+	int irq, ret;
+
+	arm_smmu_setup_msis(smmu);
+
+	/* Request interrupt lines */
+	irq = smmu->evtq.q.irq;
+	if (irq) {
+		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+						arm_smmu_evtq_thread,
+						IRQF_ONESHOT,
+						"arm-smmu-v3-evtq", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable evtq irq\n");
+	} else {
+		dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n");
+	}
+
+	irq = smmu->gerr_irq;
+	if (irq) {
+		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
+				       0, "arm-smmu-v3-gerror", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable gerror irq\n");
+	} else {
+		dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n");
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI) {
+		irq = smmu->priq.q.irq;
+		if (irq) {
+			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
+							arm_smmu_priq_thread,
+							IRQF_ONESHOT,
+							"arm-smmu-v3-priq",
+							smmu);
+			if (ret < 0)
+				dev_warn(smmu->dev,
+					 "failed to enable priq irq\n");
+		} else {
+			dev_warn(smmu->dev, "no priq irq - PRI will be broken\n");
+		}
+	}
+}
+
+static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
+{
+	int ret, irq;
+	u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
+
+	/* Disable IRQs first */
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
+				      ARM_SMMU_IRQ_CTRLACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to disable irqs\n");
+		return ret;
+	}
+
+	irq = smmu->combined_irq;
+	if (irq) {
+		/*
+		 * Cavium ThunderX2 implementation doesn't support unique irq
+		 * lines. Use a single irq line for all the SMMUv3 interrupts.
+		 */
+		ret = devm_request_threaded_irq(smmu->dev, irq,
+					arm_smmu_combined_irq_handler,
+					arm_smmu_combined_irq_thread,
+					IRQF_ONESHOT,
+					"arm-smmu-v3-combined-irq", smmu);
+		if (ret < 0)
+			dev_warn(smmu->dev, "failed to enable combined irq\n");
+	} else {
+		arm_smmu_setup_unique_irqs(smmu);
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_PRI)
+		irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
+
+	/* Enable interrupt generation on the SMMU */
+	ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
+				      ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
+	if (ret)
+		dev_warn(smmu->dev, "failed to enable irqs\n");
+
+	return 0;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+	if (ret)
+		dev_err(smmu->dev, "failed to clear cr0\n");
+
+	return ret;
+}
+
+static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+{
+	int ret;
+	u32 reg, enables;
+	struct arm_smmu_cmdq_ent cmd;
+
+	/* Clear CR0 and sync (disables SMMU and queue processing) */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+	if (reg & CR0_SMMUEN) {
+		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+		WARN_ON(is_kdump_kernel() && !disable_bypass);
+		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+	}
+
+	ret = arm_smmu_device_disable(smmu);
+	if (ret)
+		return ret;
+
+	/* CR1 (table and queue memory attributes) */
+	reg = FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) |
+	      FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) |
+	      FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) |
+	      FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
+	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
+
+	/* CR2 (private TLB maintenance, record invalid SIDs, E2H) */
+	reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
+	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
+
+	/* Stream table */
+	writeq_relaxed(smmu->strtab_cfg.strtab_base,
+		       smmu->base + ARM_SMMU_STRTAB_BASE);
+	writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
+		       smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+
+	/* Command queue */
+	writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+	writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
+	writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+	enables = CR0_CMDQEN;
+	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+				      ARM_SMMU_CR0ACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to enable command queue\n");
+		return ret;
+	}
+
+	/* Invalidate any cached configuration */
+	cmd.opcode = CMDQ_OP_CFGI_ALL;
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+
+	/* Invalidate any stale TLB entries */
+	if (smmu->features & ARM_SMMU_FEAT_HYP) {
+		cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	}
+
+	cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+
+	/* Event queue */
+	writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
+	writel_relaxed(smmu->evtq.q.llq.prod,
+		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_PROD, smmu));
+	writel_relaxed(smmu->evtq.q.llq.cons,
+		       arm_smmu_page1_fixup(ARM_SMMU_EVTQ_CONS, smmu));
+
+	enables |= CR0_EVTQEN;
+	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+				      ARM_SMMU_CR0ACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to enable event queue\n");
+		return ret;
+	}
+
+	/* PRI queue */
+	if (smmu->features & ARM_SMMU_FEAT_PRI) {
+		writeq_relaxed(smmu->priq.q.q_base,
+			       smmu->base + ARM_SMMU_PRIQ_BASE);
+		writel_relaxed(smmu->priq.q.llq.prod,
+			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_PROD, smmu));
+		writel_relaxed(smmu->priq.q.llq.cons,
+			       arm_smmu_page1_fixup(ARM_SMMU_PRIQ_CONS, smmu));
+
+		enables |= CR0_PRIQEN;
+		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+					      ARM_SMMU_CR0ACK);
+		if (ret) {
+			dev_err(smmu->dev, "failed to enable PRI queue\n");
+			return ret;
+		}
+	}
+
+	if (smmu->features & ARM_SMMU_FEAT_ATS) {
+		enables |= CR0_ATSCHK;
+		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+					      ARM_SMMU_CR0ACK);
+		if (ret) {
+			dev_err(smmu->dev, "failed to enable ATS check\n");
+			return ret;
+		}
+	}
+
+	ret = arm_smmu_setup_irqs(smmu);
+	if (ret) {
+		dev_err(smmu->dev, "failed to setup irqs\n");
+		return ret;
+	}
+
+	if (is_kdump_kernel())
+		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
+
+	/* Enable the SMMU interface, or ensure bypass */
+	if (!bypass || disable_bypass) {
+		enables |= CR0_SMMUEN;
+	} else {
+		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+		if (ret)
+			return ret;
+	}
+	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+				      ARM_SMMU_CR0ACK);
+	if (ret) {
+		dev_err(smmu->dev, "failed to enable SMMU interface\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+{
+	u32 reg;
+	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+
+	/* IDR0 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+	/* 2-level structures */
+	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	if (reg & IDR0_CD2L)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
+
+	/*
+	 * Translation table endianness.
+	 * We currently require the same endianness as the CPU, but this
+	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
+	 */
+	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
+	case IDR0_TTENDIAN_MIXED:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
+		break;
+#ifdef __BIG_ENDIAN
+	case IDR0_TTENDIAN_BE:
+		smmu->features |= ARM_SMMU_FEAT_TT_BE;
+		break;
+#else
+	case IDR0_TTENDIAN_LE:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE;
+		break;
+#endif
+	default:
+		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
+		return -ENXIO;
+	}
+
+	/* Boolean feature flags */
+	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+		smmu->features |= ARM_SMMU_FEAT_PRI;
+
+	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+		smmu->features |= ARM_SMMU_FEAT_ATS;
+
+	if (reg & IDR0_SEV)
+		smmu->features |= ARM_SMMU_FEAT_SEV;
+
+	if (reg & IDR0_MSI)
+		smmu->features |= ARM_SMMU_FEAT_MSI;
+
+	if (reg & IDR0_HYP)
+		smmu->features |= ARM_SMMU_FEAT_HYP;
+
+	/*
+	 * The coherency feature as set by FW is used in preference to the ID
+	 * register, but warn on mismatch.
+	 */
+	if (!!(reg & IDR0_COHACC) != coherent)
+		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
+			 coherent ? "true" : "false");
+
+	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
+	case IDR0_STALL_MODEL_FORCE:
+		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
+		fallthrough;
+	case IDR0_STALL_MODEL_STALL:
+		smmu->features |= ARM_SMMU_FEAT_STALLS;
+	}
+
+	if (reg & IDR0_S1P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+
+	if (reg & IDR0_S2P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+		dev_err(smmu->dev, "no translation support!\n");
+		return -ENXIO;
+	}
+
+	/* We only support the AArch64 table format at present */
+	switch (FIELD_GET(IDR0_TTF, reg)) {
+	case IDR0_TTF_AARCH32_64:
+		smmu->ias = 40;
+		fallthrough;
+	case IDR0_TTF_AARCH64:
+		break;
+	default:
+		dev_err(smmu->dev, "AArch64 table format not supported!\n");
+		return -ENXIO;
+	}
+
+	/* ASID/VMID sizes */
+	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
+	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+	/* IDR1 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
+		dev_err(smmu->dev, "embedded implementation not supported\n");
+		return -ENXIO;
+	}
+
+	/* Queue sizes, capped to ensure natural alignment */
+	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_CMDQS, reg));
+	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
+		/*
+		 * We don't support splitting up batches, so one batch of
+		 * commands plus an extra sync needs to fit inside the command
+		 * queue. There's also no way we can handle the weird alignment
+		 * restrictions on the base pointer for a unit-length queue.
+		 */
+		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
+			CMDQ_BATCH_ENTRIES);
+		return -ENXIO;
+	}
+
+	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_EVTQS, reg));
+	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_PRIQS, reg));
+
+	/* SID/SSID sizes */
+	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
+	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
+
+	/*
+	 * If the SMMU supports fewer bits than would fill a single L2 stream
+	 * table, use a linear table instead.
+	 */
+	if (smmu->sid_bits <= STRTAB_SPLIT)
+		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	/* IDR3 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+	if (FIELD_GET(IDR3_RIL, reg))
+		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
+
+	/* IDR5 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+	/* Maximum number of outstanding stalls */
+	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
+
+	/* Page sizes */
+	if (reg & IDR5_GRAN64K)
+		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+	if (reg & IDR5_GRAN16K)
+		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+	if (reg & IDR5_GRAN4K)
+		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+	/* Input address size */
+	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
+		smmu->features |= ARM_SMMU_FEAT_VAX;
+
+	/* Output address size */
+	switch (FIELD_GET(IDR5_OAS, reg)) {
+	case IDR5_OAS_32_BIT:
+		smmu->oas = 32;
+		break;
+	case IDR5_OAS_36_BIT:
+		smmu->oas = 36;
+		break;
+	case IDR5_OAS_40_BIT:
+		smmu->oas = 40;
+		break;
+	case IDR5_OAS_42_BIT:
+		smmu->oas = 42;
+		break;
+	case IDR5_OAS_44_BIT:
+		smmu->oas = 44;
+		break;
+	case IDR5_OAS_52_BIT:
+		smmu->oas = 52;
+		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
+		break;
+	default:
+		dev_info(smmu->dev,
+			"unknown output address size. Truncating to 48-bit\n");
+		fallthrough;
+	case IDR5_OAS_48_BIT:
+		smmu->oas = 48;
+	}
+
+	if (arm_smmu_ops.pgsize_bitmap == -1UL)
+		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+	else
+		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+
+	/* Set the DMA mask for our table walker */
+	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
+		dev_warn(smmu->dev,
+			 "failed to set DMA mask for table walker\n");
+
+	smmu->ias = max(smmu->ias, smmu->oas);
+
+	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+		 smmu->ias, smmu->oas, smmu->features);
+	return 0;
+}
+
+#ifdef CONFIG_ACPI
+static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu)
+{
+	switch (model) {
+	case ACPI_IORT_SMMU_V3_CAVIUM_CN99XX:
+		smmu->options |= ARM_SMMU_OPT_PAGE0_REGS_ONLY;
+		break;
+	case ACPI_IORT_SMMU_V3_HISILICON_HI161X:
+		smmu->options |= ARM_SMMU_OPT_SKIP_PREFETCH;
+		break;
+	}
+
+	dev_notice(smmu->dev, "option mask 0x%x\n", smmu->options);
+}
+
+static int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+				      struct arm_smmu_device *smmu)
+{
+	struct acpi_iort_smmu_v3 *iort_smmu;
+	struct device *dev = smmu->dev;
+	struct acpi_iort_node *node;
+
+	node = *(struct acpi_iort_node **)dev_get_platdata(dev);
+
+	/* Retrieve SMMUv3 specific data */
+	iort_smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
+
+	acpi_smmu_get_options(iort_smmu->model, smmu);
+
+	if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE)
+		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+	return 0;
+}
+#else
+static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
+					     struct arm_smmu_device *smmu)
+{
+	return -ENODEV;
+}
+#endif
+
+static int arm_smmu_device_dt_probe(struct platform_device *pdev,
+				    struct arm_smmu_device *smmu)
+{
+	struct device *dev = &pdev->dev;
+	u32 cells;
+	int ret = -EINVAL;
+
+	if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
+		dev_err(dev, "missing #iommu-cells property\n");
+	else if (cells != 1)
+		dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
+	else
+		ret = 0;
+
+	parse_driver_options(smmu);
+
+	if (of_dma_is_coherent(dev->of_node))
+		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
+
+	return ret;
+}
+
+static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
+{
+	if (smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY)
+		return SZ_64K;
+	else
+		return SZ_128K;
+}
+
+static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
+{
+	int err;
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != ops) {
+		err = bus_set_iommu(&pci_bus_type, ops);
+		if (err)
+			return err;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != ops) {
+		err = bus_set_iommu(&amba_bustype, ops);
+		if (err)
+			goto err_reset_pci_ops;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != ops) {
+		err = bus_set_iommu(&platform_bus_type, ops);
+		if (err)
+			goto err_reset_amba_ops;
+	}
+
+	return 0;
+
+err_reset_amba_ops:
+#ifdef CONFIG_ARM_AMBA
+	bus_set_iommu(&amba_bustype, NULL);
+#endif
+err_reset_pci_ops: __maybe_unused;
+#ifdef CONFIG_PCI
+	bus_set_iommu(&pci_bus_type, NULL);
+#endif
+	return err;
+}
+
+static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
+				      resource_size_t size)
+{
+	struct resource res = {
+		.flags = IORESOURCE_MEM,
+		.start = start,
+		.end = start + size - 1,
+	};
+
+	return devm_ioremap_resource(dev, &res);
+}
+
+static int arm_smmu_device_probe(struct platform_device *pdev)
+{
+	int irq, ret;
+	struct resource *res;
+	resource_size_t ioaddr;
+	struct arm_smmu_device *smmu;
+	struct device *dev = &pdev->dev;
+	bool bypass;
+
+	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
+	if (!smmu) {
+		dev_err(dev, "failed to allocate arm_smmu_device\n");
+		return -ENOMEM;
+	}
+	smmu->dev = dev;
+
+	if (dev->of_node) {
+		ret = arm_smmu_device_dt_probe(pdev, smmu);
+	} else {
+		ret = arm_smmu_device_acpi_probe(pdev, smmu);
+		if (ret == -ENODEV)
+			return ret;
+	}
+
+	/* Set bypass mode according to firmware probing result */
+	bypass = !!ret;
+
+	/* Base address */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
+		dev_err(dev, "MMIO region too small (%pr)\n", res);
+		return -EINVAL;
+	}
+	ioaddr = res->start;
+
+	/*
+	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
+	 * the PMCG registers which are reserved by the PMU driver.
+	 */
+	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
+	if (IS_ERR(smmu->base))
+		return PTR_ERR(smmu->base);
+
+	if (arm_smmu_resource_size(smmu) > SZ_64K) {
+		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
+					       ARM_SMMU_REG_SZ);
+		if (IS_ERR(smmu->page1))
+			return PTR_ERR(smmu->page1);
+	} else {
+		smmu->page1 = smmu->base;
+	}
+
+	/* Interrupt lines */
+
+	irq = platform_get_irq_byname_optional(pdev, "combined");
+	if (irq > 0)
+		smmu->combined_irq = irq;
+	else {
+		irq = platform_get_irq_byname_optional(pdev, "eventq");
+		if (irq > 0)
+			smmu->evtq.q.irq = irq;
+
+		irq = platform_get_irq_byname_optional(pdev, "priq");
+		if (irq > 0)
+			smmu->priq.q.irq = irq;
+
+		irq = platform_get_irq_byname_optional(pdev, "gerror");
+		if (irq > 0)
+			smmu->gerr_irq = irq;
+	}
+	/* Probe the h/w */
+	ret = arm_smmu_device_hw_probe(smmu);
+	if (ret)
+		return ret;
+
+	/* Initialise in-memory data structures */
+	ret = arm_smmu_init_structures(smmu);
+	if (ret)
+		return ret;
+
+	/* Record our private device structure */
+	platform_set_drvdata(pdev, smmu);
+
+	/* Reset the device */
+	ret = arm_smmu_device_reset(smmu, bypass);
+	if (ret)
+		return ret;
+
+	/* And we're up. Go go go! */
+	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+				     "smmu3.%pa", &ioaddr);
+	if (ret)
+		return ret;
+
+	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
+	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
+
+	ret = iommu_device_register(&smmu->iommu);
+	if (ret) {
+		dev_err(dev, "Failed to register iommu\n");
+		return ret;
+	}
+
+	return arm_smmu_set_bus_ops(&arm_smmu_ops);
+}
+
+static int arm_smmu_device_remove(struct platform_device *pdev)
+{
+	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
+
+	arm_smmu_set_bus_ops(NULL);
+	iommu_device_unregister(&smmu->iommu);
+	iommu_device_sysfs_remove(&smmu->iommu);
+	arm_smmu_device_disable(smmu);
+
+	return 0;
+}
+
+static void arm_smmu_device_shutdown(struct platform_device *pdev)
+{
+	arm_smmu_device_remove(pdev);
+}
+
+static const struct of_device_id arm_smmu_of_match[] = {
+	{ .compatible = "arm,smmu-v3", },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
+
+static struct platform_driver arm_smmu_driver = {
+	.driver	= {
+		.name			= "arm-smmu-v3",
+		.of_match_table		= arm_smmu_of_match,
+		.suppress_bind_attrs	= true,
+	},
+	.probe	= arm_smmu_device_probe,
+	.remove	= arm_smmu_device_remove,
+	.shutdown = arm_smmu_device_shutdown,
+};
+module_platform_driver(arm_smmu_driver);
+
+MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
+MODULE_AUTHOR("Will Deacon <will@kernel.org>");
+MODULE_ALIAS("platform:arm-smmu-v3");
+MODULE_LICENSE("GPL v2");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38860.71599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfH-0004Mt-52; Thu, 26 Nov 2020 17:02:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38860.71599; Thu, 26 Nov 2020 17:02:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfH-0004Mm-1Q; Thu, 26 Nov 2020 17:02:39 +0000
Received: by outflank-mailman (input) for mailman id 38860;
 Thu, 26 Nov 2020 17:02:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfF-0004MJ-Ai
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:37 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0d830fc5-35d3-4885-9ce2-9024feed763d;
 Thu, 26 Nov 2020 17:02:36 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1AEEC1516;
 Thu, 26 Nov 2020 09:02:36 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1CF473F23F;
 Thu, 26 Nov 2020 09:02:35 -0800 (PST)
X-Inumbo-ID: 0d830fc5-35d3-4885-9ce2-9024feed763d
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 2/8] xen/arm: revert atomic operation related command-queue insertion patch
Date: Thu, 26 Nov 2020 17:02:01 +0000
Message-Id: <4a0ca6d03b5f1f5b30c4cdbdff0688cea84d9e91.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

The Linux SMMUv3 driver implements command-queue insertion on top of
atomic operations provided by Linux. The atomic helpers used by that
insertion code are not implemented in Xen, so revert the patch that
based command-queue insertion on them.

Once the proper atomic operations are available in Xen, the driver can
be updated accordingly.

This reverts commit 587e6c10a7ce89a5924fdbeff2ec524fbd6a124b
("iommu/arm-smmu-v3: Reduce contention during command-queue insertion").

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 847 ++++++--------------------
 1 file changed, 180 insertions(+), 667 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index c192544e87..97eac61ea4 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -330,15 +330,6 @@
 #define CMDQ_ERR_CERROR_ABT_IDX		2
 #define CMDQ_ERR_CERROR_ATC_INV_IDX	3
 
-#define CMDQ_PROD_OWNED_FLAG		Q_OVERFLOW_FLAG
-
-/*
- * This is used to size the command queue and therefore must be at least
- * BITS_PER_LONG so that the valid_map works correctly (it relies on the
- * total number of queue entries being a multiple of BITS_PER_LONG).
- */
-#define CMDQ_BATCH_ENTRIES		BITS_PER_LONG
-
 #define CMDQ_0_OP			GENMASK_ULL(7, 0)
 #define CMDQ_0_SSV			(1UL << 11)
 
@@ -407,8 +398,9 @@
 #define PRIQ_1_ADDR_MASK		GENMASK_ULL(63, 12)
 
 /* High-level queue structures */
-#define ARM_SMMU_POLL_TIMEOUT_US	1000000 /* 1s! */
-#define ARM_SMMU_POLL_SPIN_COUNT	10
+#define ARM_SMMU_POLL_TIMEOUT_US	100
+#define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
+#define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
 
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
@@ -513,24 +505,15 @@ struct arm_smmu_cmdq_ent {
 
 		#define CMDQ_OP_CMD_SYNC	0x46
 		struct {
+			u32			msidata;
 			u64			msiaddr;
 		} sync;
 	};
 };
 
 struct arm_smmu_ll_queue {
-	union {
-		u64			val;
-		struct {
-			u32		prod;
-			u32		cons;
-		};
-		struct {
-			atomic_t	prod;
-			atomic_t	cons;
-		} atomic;
-		u8			__pad[SMP_CACHE_BYTES];
-	} ____cacheline_aligned_in_smp;
+	u32				prod;
+	u32				cons;
 	u32				max_n_shift;
 };
 
@@ -548,23 +531,9 @@ struct arm_smmu_queue {
 	u32 __iomem			*cons_reg;
 };
 
-struct arm_smmu_queue_poll {
-	ktime_t				timeout;
-	unsigned int			delay;
-	unsigned int			spin_cnt;
-	bool				wfe;
-};
-
 struct arm_smmu_cmdq {
 	struct arm_smmu_queue		q;
-	atomic_long_t			*valid_map;
-	atomic_t			owner_prod;
-	atomic_t			lock;
-};
-
-struct arm_smmu_cmdq_batch {
-	u64				cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
-	int				num;
+	spinlock_t			lock;
 };
 
 struct arm_smmu_evtq {
@@ -660,6 +629,8 @@ struct arm_smmu_device {
 
 	int				gerr_irq;
 	int				combined_irq;
+	u32				sync_nr;
+	u8				prev_cmd_opcode;
 
 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -677,6 +648,12 @@ struct arm_smmu_device {
 
 	struct arm_smmu_strtab_cfg	strtab_cfg;
 
+	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
+	union {
+		u32			sync_count;
+		u64			padding;
+	};
+
 	/* IOMMU core code handle */
 	struct iommu_device		iommu;
 };
@@ -763,21 +740,6 @@ static void parse_driver_options(struct arm_smmu_device *smmu)
 }
 
 /* Low-level queue manipulation functions */
-static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
-{
-	u32 space, prod, cons;
-
-	prod = Q_IDX(q, q->prod);
-	cons = Q_IDX(q, q->cons);
-
-	if (Q_WRP(q, q->prod) == Q_WRP(q, q->cons))
-		space = (1 << q->max_n_shift) - (prod - cons);
-	else
-		space = cons - prod;
-
-	return space >= n;
-}
-
 static bool queue_full(struct arm_smmu_ll_queue *q)
 {
 	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
@@ -790,12 +752,9 @@ static bool queue_empty(struct arm_smmu_ll_queue *q)
 	       Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
 }
 
-static bool queue_consumed(struct arm_smmu_ll_queue *q, u32 prod)
+static void queue_sync_cons_in(struct arm_smmu_queue *q)
 {
-	return ((Q_WRP(q, q->cons) == Q_WRP(q, prod)) &&
-		(Q_IDX(q, q->cons) > Q_IDX(q, prod))) ||
-	       ((Q_WRP(q, q->cons) != Q_WRP(q, prod)) &&
-		(Q_IDX(q, q->cons) <= Q_IDX(q, prod)));
+	q->llq.cons = readl_relaxed(q->cons_reg);
 }
 
 static void queue_sync_cons_out(struct arm_smmu_queue *q)
@@ -826,34 +785,46 @@ static int queue_sync_prod_in(struct arm_smmu_queue *q)
 	return ret;
 }
 
-static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
+static void queue_sync_prod_out(struct arm_smmu_queue *q)
 {
-	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + n;
-	return Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+	writel(q->llq.prod, q->prod_reg);
 }
 
-static void queue_poll_init(struct arm_smmu_device *smmu,
-			    struct arm_smmu_queue_poll *qp)
+static void queue_inc_prod(struct arm_smmu_ll_queue *q)
 {
-	qp->delay = 1;
-	qp->spin_cnt = 0;
-	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
-	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
+	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+	q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
 }
 
-static int queue_poll(struct arm_smmu_queue_poll *qp)
+/*
+ * Wait for the SMMU to consume items. If sync is true, wait until the queue
+ * is empty. Otherwise, wait until there is at least one free slot.
+ */
+static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
 {
-	if (ktime_compare(ktime_get(), qp->timeout) > 0)
-		return -ETIMEDOUT;
+	ktime_t timeout;
+	unsigned int delay = 1, spin_cnt = 0;
 
-	if (qp->wfe) {
-		wfe();
-	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
-		cpu_relax();
-	} else {
-		udelay(qp->delay);
-		qp->delay *= 2;
-		qp->spin_cnt = 0;
+	/* Wait longer if it's a CMD_SYNC */
+	timeout = ktime_add_us(ktime_get(), sync ?
+					    ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
+					    ARM_SMMU_POLL_TIMEOUT_US);
+
+	while (queue_sync_cons_in(q),
+	      (sync ? !queue_empty(&q->llq) : queue_full(&q->llq))) {
+		if (ktime_compare(ktime_get(), timeout) > 0)
+			return -ETIMEDOUT;
+
+		if (wfe) {
+			wfe();
+		} else if (++spin_cnt < ARM_SMMU_CMDQ_SYNC_SPIN_COUNT) {
+			cpu_relax();
+			continue;
+		} else {
+			udelay(delay);
+			delay *= 2;
+			spin_cnt = 0;
+		}
 	}
 
 	return 0;
@@ -867,6 +838,17 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
 		*dst++ = cpu_to_le64(*src++);
 }
 
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+	if (queue_full(&q->llq))
+		return -ENOSPC;
+
+	queue_write(Q_ENT(q, q->llq.prod), ent, q->ent_dwords);
+	queue_inc_prod(&q->llq);
+	queue_sync_prod_out(q);
+	return 0;
+}
+
 static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
 {
 	int i;
@@ -964,14 +946,20 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
 		break;
 	case CMDQ_OP_CMD_SYNC:
-		if (ent->sync.msiaddr) {
+		if (ent->sync.msiaddr)
 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
-			cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
-		} else {
+		else
 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
-		}
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
+		/*
+		 * Commands are written little-endian, but we want the SMMU to
+		 * receive MSIData, and thus write it back to memory, in CPU
+		 * byte order, so big-endian needs an extra byteswap here.
+		 */
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
+				     cpu_to_le32(ent->sync.msidata));
+		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
 		break;
 	default:
 		return -ENOENT;
@@ -980,27 +968,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 	return 0;
 }
 
-static void arm_smmu_cmdq_build_sync_cmd(u64 *cmd, struct arm_smmu_device *smmu,
-					 u32 prod)
-{
-	struct arm_smmu_queue *q = &smmu->cmdq.q;
-	struct arm_smmu_cmdq_ent ent = {
-		.opcode = CMDQ_OP_CMD_SYNC,
-	};
-
-	/*
-	 * Beware that Hi16xx adds an extra 32 bits of goodness to its MSI
-	 * payload, so the write will zero the entire command on that platform.
-	 */
-	if (smmu->features & ARM_SMMU_FEAT_MSI &&
-	    smmu->features & ARM_SMMU_FEAT_COHERENCY) {
-		ent.sync.msiaddr = q->base_dma + Q_IDX(&q->llq, prod) *
-				   q->ent_dwords * 8;
-	}
-
-	arm_smmu_cmdq_build_cmd(cmd, &ent);
-}
-
 static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 {
 	static const char *cerror_str[] = {
@@ -1058,474 +1025,109 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 	queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
 }
 
-/*
- * Command queue locking.
- * This is a form of bastardised rwlock with the following major changes:
- *
- * - The only LOCK routines are exclusive_trylock() and shared_lock().
- *   Neither have barrier semantics, and instead provide only a control
- *   dependency.
- *
- * - The UNLOCK routines are supplemented with shared_tryunlock(), which
- *   fails if the caller appears to be the last lock holder (yes, this is
- *   racy). All successful UNLOCK routines have RELEASE semantics.
- */
-static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
+static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
 {
-	int val;
-
-	/*
-	 * We can try to avoid the cmpxchg() loop by simply incrementing the
-	 * lock counter. When held in exclusive state, the lock counter is set
-	 * to INT_MIN so these increments won't hurt as the value will remain
-	 * negative.
-	 */
-	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
-		return;
-
-	do {
-		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
-	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
-}
-
-static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
-{
-	(void)atomic_dec_return_release(&cmdq->lock);
-}
-
-static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
-{
-	if (atomic_read(&cmdq->lock) == 1)
-		return false;
-
-	arm_smmu_cmdq_shared_unlock(cmdq);
-	return true;
-}
-
-#define arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)		\
-({									\
-	bool __ret;							\
-	local_irq_save(flags);						\
-	__ret = !atomic_cmpxchg_relaxed(&cmdq->lock, 0, INT_MIN);	\
-	if (!__ret)							\
-		local_irq_restore(flags);				\
-	__ret;								\
-})
-
-#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
-({									\
-	atomic_set_release(&cmdq->lock, 0);				\
-	local_irq_restore(flags);					\
-})
-
-
-/*
- * Command queue insertion.
- * This is made fiddly by our attempts to achieve some sort of scalability
- * since there is one queue shared amongst all of the CPUs in the system.  If
- * you like mixed-size concurrency, dependency ordering and relaxed atomics,
- * then you'll *love* this monstrosity.
- *
- * The basic idea is to split the queue up into ranges of commands that are
- * owned by a given CPU; the owner may not have written all of the commands
- * itself, but is responsible for advancing the hardware prod pointer when
- * the time comes. The algorithm is roughly:
- *
- * 	1. Allocate some space in the queue. At this point we also discover
- *	   whether the head of the queue is currently owned by another CPU,
- *	   or whether we are the owner.
- *
- *	2. Write our commands into our allocated slots in the queue.
- *
- *	3. Mark our slots as valid in arm_smmu_cmdq.valid_map.
- *
- *	4. If we are an owner:
- *		a. Wait for the previous owner to finish.
- *		b. Mark the queue head as unowned, which tells us the range
- *		   that we are responsible for publishing.
- *		c. Wait for all commands in our owned range to become valid.
- *		d. Advance the hardware prod pointer.
- *		e. Tell the next owner we've finished.
- *
- *	5. If we are inserting a CMD_SYNC (we may or may not have been an
- *	   owner), then we need to stick around until it has completed:
- *		a. If we have MSIs, the SMMU can write back into the CMD_SYNC
- *		   to clear the first 4 bytes.
- *		b. Otherwise, we spin waiting for the hardware cons pointer to
- *		   advance past our command.
- *
- * The devil is in the details, particularly the use of locking for handling
- * SYNC completion and freeing up space in the queue before we think that it is
- * full.
- */
-static void __arm_smmu_cmdq_poll_set_valid_map(struct arm_smmu_cmdq *cmdq,
-					       u32 sprod, u32 eprod, bool set)
-{
-	u32 swidx, sbidx, ewidx, ebidx;
-	struct arm_smmu_ll_queue llq = {
-		.max_n_shift	= cmdq->q.llq.max_n_shift,
-		.prod		= sprod,
-	};
-
-	ewidx = BIT_WORD(Q_IDX(&llq, eprod));
-	ebidx = Q_IDX(&llq, eprod) % BITS_PER_LONG;
-
-	while (llq.prod != eprod) {
-		unsigned long mask;
-		atomic_long_t *ptr;
-		u32 limit = BITS_PER_LONG;
-
-		swidx = BIT_WORD(Q_IDX(&llq, llq.prod));
-		sbidx = Q_IDX(&llq, llq.prod) % BITS_PER_LONG;
-
-		ptr = &cmdq->valid_map[swidx];
-
-		if ((swidx == ewidx) && (sbidx < ebidx))
-			limit = ebidx;
-
-		mask = GENMASK(limit - 1, sbidx);
-
-		/*
-		 * The valid bit is the inverse of the wrap bit. This means
-		 * that a zero-initialised queue is invalid and, after marking
-		 * all entries as valid, they become invalid again when we
-		 * wrap.
-		 */
-		if (set) {
-			atomic_long_xor(mask, ptr);
-		} else { /* Poll */
-			unsigned long valid;
+	struct arm_smmu_queue *q = &smmu->cmdq.q;
+	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
 
-			valid = (ULONG_MAX + !!Q_WRP(&llq, llq.prod)) & mask;
-			atomic_long_cond_read_relaxed(ptr, (VAL & mask) == valid);
-		}
+	smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]);
 
-		llq.prod = queue_inc_prod_n(&llq, limit - sbidx);
+	while (queue_insert_raw(q, cmd) == -ENOSPC) {
+		if (queue_poll_cons(q, false, wfe))
+			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
 	}
 }
 
-/* Mark all entries in the range [sprod, eprod) as valid */
-static void arm_smmu_cmdq_set_valid_map(struct arm_smmu_cmdq *cmdq,
-					u32 sprod, u32 eprod)
-{
-	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, true);
-}
-
-/* Wait for all entries in the range [sprod, eprod) to become valid */
-static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
-					 u32 sprod, u32 eprod)
-{
-	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
-}
-
-/* Wait for the command queue to become non-full */
-static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
-					     struct arm_smmu_ll_queue *llq)
+static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+				    struct arm_smmu_cmdq_ent *ent)
 {
+	u64 cmd[CMDQ_ENT_DWORDS];
 	unsigned long flags;
-	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	int ret = 0;
 
-	/*
-	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
-	 * that fails, spin until somebody else updates it for us.
-	 */
-	if (arm_smmu_cmdq_exclusive_trylock_irqsave(cmdq, flags)) {
-		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
-		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
-		llq->val = READ_ONCE(cmdq->q.llq.val);
-		return 0;
+	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+			 ent->opcode);
+		return;
 	}
 
-	queue_poll_init(smmu, &qp);
-	do {
-		llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
-		if (!queue_full(llq))
-			break;
-
-		ret = queue_poll(&qp);
-	} while (!ret);
-
-	return ret;
-}
-
-/*
- * Wait until the SMMU signals a CMD_SYNC completion MSI.
- * Must be called with the cmdq lock held in some capacity.
- */
-static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
-					  struct arm_smmu_ll_queue *llq)
-{
-	int ret = 0;
-	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
-
-	queue_poll_init(smmu, &qp);
-
-	/*
-	 * The MSI won't generate an event, since it's being written back
-	 * into the command queue.
-	 */
-	qp.wfe = false;
-	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
-	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
-	return ret;
+	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 }
 
 /*
- * Wait until the SMMU cons index passes llq->prod.
- * Must be called with the cmdq lock held in some capacity.
+ * The difference between val and sync_idx is bounded by the maximum size of
+ * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
  */
-static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
-					       struct arm_smmu_ll_queue *llq)
+static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
 {
-	struct arm_smmu_queue_poll qp;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	u32 prod = llq->prod;
-	int ret = 0;
+	ktime_t timeout;
+	u32 val;
 
-	queue_poll_init(smmu, &qp);
-	llq->val = READ_ONCE(smmu->cmdq.q.llq.val);
-	do {
-		if (queue_consumed(llq, prod))
-			break;
-
-		ret = queue_poll(&qp);
-
-		/*
-		 * This needs to be a readl() so that our subsequent call
-		 * to arm_smmu_cmdq_shared_tryunlock() can fail accurately.
-		 *
-		 * Specifically, we need to ensure that we observe all
-		 * shared_lock()s by other CMD_SYNCs that share our owner,
-		 * so that a failing call to tryunlock() means that we're
-		 * the last one out and therefore we can safely advance
-		 * cmdq->q.llq.cons. Roughly speaking:
-		 *
-		 * CPU 0		CPU1			CPU2 (us)
-		 *
-		 * if (sync)
-		 * 	shared_lock();
-		 *
-		 * dma_wmb();
-		 * set_valid_map();
-		 *
-		 * 			if (owner) {
-		 *				poll_valid_map();
-		 *				<control dependency>
-		 *				writel(prod_reg);
-		 *
-		 *						readl(cons_reg);
-		 *						tryunlock();
-		 *
-		 * Requires us to see CPU 0's shared_lock() acquisition.
-		 */
-		llq->cons = readl(cmdq->q.cons_reg);
-	} while (!ret);
+	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
+	val = smp_cond_load_acquire(&smmu->sync_count,
+				    (int)(VAL - sync_idx) >= 0 ||
+				    !ktime_before(ktime_get(), timeout));
 
-	return ret;
+	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
 }
 
-static int arm_smmu_cmdq_poll_until_sync(struct arm_smmu_device *smmu,
-					 struct arm_smmu_ll_queue *llq)
+static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 {
-	if (smmu->features & ARM_SMMU_FEAT_MSI &&
-	    smmu->features & ARM_SMMU_FEAT_COHERENCY)
-		return __arm_smmu_cmdq_poll_until_msi(smmu, llq);
-
-	return __arm_smmu_cmdq_poll_until_consumed(smmu, llq);
-}
-
-static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
-					u32 prod, int n)
-{
-	int i;
-	struct arm_smmu_ll_queue llq = {
-		.max_n_shift	= cmdq->q.llq.max_n_shift,
-		.prod		= prod,
-	};
-
-	for (i = 0; i < n; ++i) {
-		u64 *cmd = &cmds[i * CMDQ_ENT_DWORDS];
-
-		prod = queue_inc_prod_n(&llq, i);
-		queue_write(Q_ENT(&cmdq->q, prod), cmd, CMDQ_ENT_DWORDS);
-	}
-}
-
-/*
- * This is the actual insertion function, and provides the following
- * ordering guarantees to callers:
- *
- * - There is a dma_wmb() before publishing any commands to the queue.
- *   This can be relied upon to order prior writes to data structures
- *   in memory (such as a CD or an STE) before the command.
- *
- * - On completion of a CMD_SYNC, there is a control dependency.
- *   This can be relied upon to order subsequent writes to memory (e.g.
- *   freeing an IOVA) after completion of the CMD_SYNC.
- *
- * - Command insertion is totally ordered, so if two CPUs each race to
- *   insert their own list of commands then all of the commands from one
- *   CPU will appear before any of the commands from the other CPU.
- */
-static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
-				       u64 *cmds, int n, bool sync)
-{
-	u64 cmd_sync[CMDQ_ENT_DWORDS];
-	u32 prod;
+	u64 cmd[CMDQ_ENT_DWORDS];
 	unsigned long flags;
-	bool owner;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	struct arm_smmu_ll_queue llq = {
-		.max_n_shift = cmdq->q.llq.max_n_shift,
-	}, head = llq;
-	int ret = 0;
-
-	/* 1. Allocate some space in the queue */
-	local_irq_save(flags);
-	llq.val = READ_ONCE(cmdq->q.llq.val);
-	do {
-		u64 old;
-
-		while (!queue_has_space(&llq, n + sync)) {
-			local_irq_restore(flags);
-			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
-				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
-			local_irq_save(flags);
-		}
-
-		head.cons = llq.cons;
-		head.prod = queue_inc_prod_n(&llq, n + sync) |
-					     CMDQ_PROD_OWNED_FLAG;
-
-		old = cmpxchg_relaxed(&cmdq->q.llq.val, llq.val, head.val);
-		if (old == llq.val)
-			break;
-
-		llq.val = old;
-	} while (1);
-	owner = !(llq.prod & CMDQ_PROD_OWNED_FLAG);
-	head.prod &= ~CMDQ_PROD_OWNED_FLAG;
-	llq.prod &= ~CMDQ_PROD_OWNED_FLAG;
-
-	/*
-	 * 2. Write our commands into the queue
-	 * Dependency ordering from the cmpxchg() loop above.
-	 */
-	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
-	if (sync) {
-		prod = queue_inc_prod_n(&llq, n);
-		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
-		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
-
-		/*
-		 * In order to determine completion of our CMD_SYNC, we must
-		 * ensure that the queue can't wrap twice without us noticing.
-		 * We achieve that by taking the cmdq lock as shared before
-		 * marking our slot as valid.
-		 */
-		arm_smmu_cmdq_shared_lock(cmdq);
-	}
-
-	/* 3. Mark our slots as valid, ensuring commands are visible first */
-	dma_wmb();
-	arm_smmu_cmdq_set_valid_map(cmdq, llq.prod, head.prod);
-
-	/* 4. If we are the owner, take control of the SMMU hardware */
-	if (owner) {
-		/* a. Wait for previous owner to finish */
-		atomic_cond_read_relaxed(&cmdq->owner_prod, VAL == llq.prod);
-
-		/* b. Stop gathering work by clearing the owned flag */
-		prod = atomic_fetch_andnot_relaxed(CMDQ_PROD_OWNED_FLAG,
-						   &cmdq->q.llq.atomic.prod);
-		prod &= ~CMDQ_PROD_OWNED_FLAG;
+	struct arm_smmu_cmdq_ent  ent = {
+		.opcode = CMDQ_OP_CMD_SYNC,
+		.sync	= {
+			.msiaddr = virt_to_phys(&smmu->sync_count),
+		},
+	};
 
-		/*
-		 * c. Wait for any gathered work to be written to the queue.
-		 * Note that we read our own entries so that we have the control
-		 * dependency required by (d).
-		 */
-		arm_smmu_cmdq_poll_valid_map(cmdq, llq.prod, prod);
+	spin_lock_irqsave(&smmu->cmdq.lock, flags);
 
-		/*
-		 * d. Advance the hardware prod pointer
-		 * Control dependency ordering from the entries becoming valid.
-		 */
-		writel_relaxed(prod, cmdq->q.prod_reg);
-
-		/*
-		 * e. Tell the next owner we're done
-		 * Make sure we've updated the hardware first, so that we don't
-		 * race to update prod and potentially move it backwards.
-		 */
-		atomic_set_release(&cmdq->owner_prod, prod);
+	/* Piggy-back on the previous command if it's a SYNC */
+	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
+		ent.sync.msidata = smmu->sync_nr;
+	} else {
+		ent.sync.msidata = ++smmu->sync_nr;
+		arm_smmu_cmdq_build_cmd(cmd, &ent);
+		arm_smmu_cmdq_insert_cmd(smmu, cmd);
 	}
 
-	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
-	if (sync) {
-		llq.prod = queue_inc_prod_n(&llq, n);
-		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
-		if (ret) {
-			dev_err_ratelimited(smmu->dev,
-					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
-					    llq.prod,
-					    readl_relaxed(cmdq->q.prod_reg),
-					    readl_relaxed(cmdq->q.cons_reg));
-		}
-
-		/*
-		 * Try to unlock the cmdq lock. This will fail if we're the last
-		 * reader, in which case we can safely update cmdq->q.llq.cons
-		 */
-		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
-			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
-			arm_smmu_cmdq_shared_unlock(cmdq);
-		}
-	}
+	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
-	local_irq_restore(flags);
-	return ret;
+	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
 }
 
-static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
-				   struct arm_smmu_cmdq_ent *ent)
+static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
 	u64 cmd[CMDQ_ENT_DWORDS];
+	unsigned long flags;
+	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	struct arm_smmu_cmdq_ent ent = { .opcode = CMDQ_OP_CMD_SYNC };
+	int ret;
 
-	if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
-		dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
-			 ent->opcode);
-		return -EINVAL;
-	}
+	arm_smmu_cmdq_build_cmd(cmd, &ent);
 
-	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
-}
+	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	ret = queue_poll_cons(&smmu->cmdq.q, true, wfe);
+	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 
-static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
-{
-	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
+	return ret;
 }
 
-static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
-				    struct arm_smmu_cmdq_batch *cmds,
-				    struct arm_smmu_cmdq_ent *cmd)
+static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
-	if (cmds->num == CMDQ_BATCH_ENTRIES) {
-		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
-		cmds->num = 0;
-	}
-	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
-	cmds->num++;
-}
+	int ret;
+	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
+		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
 
-static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
-				      struct arm_smmu_cmdq_batch *cmds)
-{
-	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
+	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
+		  : __arm_smmu_cmdq_issue_sync(smmu);
+	if (ret)
+		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
+	return ret;
 }
 
 /* Context descriptor manipulation functions */
@@ -1535,7 +1137,6 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 	size_t i;
 	unsigned long flags;
 	struct arm_smmu_master *master;
-	struct arm_smmu_cmdq_batch cmds = {};
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd = {
 		.opcode	= CMDQ_OP_CFGI_CD,
@@ -1549,12 +1150,12 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
 		for (i = 0; i < master->num_sids; i++) {
 			cmd.cfgi.sid = master->sids[i];
-			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
+			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 		}
 	}
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
-	arm_smmu_cmdq_batch_submit(smmu, &cmds);
+	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
@@ -2189,16 +1790,17 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
 	cmd->atc.size	= log2_span;
 }
 
-static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
+static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
+				   struct arm_smmu_cmdq_ent *cmd)
 {
 	int i;
-	struct arm_smmu_cmdq_ent cmd;
 
-	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+	if (!master->ats_enabled)
+		return 0;
 
 	for (i = 0; i < master->num_sids; i++) {
-		cmd.atc.sid = master->sids[i];
-		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
+		cmd->atc.sid = master->sids[i];
+		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
 	}
 
 	return arm_smmu_cmdq_issue_sync(master->smmu);
@@ -2207,11 +1809,10 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
 static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 				   int ssid, unsigned long iova, size_t size)
 {
-	int i;
+	int ret = 0;
 	unsigned long flags;
 	struct arm_smmu_cmdq_ent cmd;
 	struct arm_smmu_master *master;
-	struct arm_smmu_cmdq_batch cmds = {};
 
 	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
 		return 0;
@@ -2236,18 +1837,11 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
 
 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-		if (!master->ats_enabled)
-			continue;
-
-		for (i = 0; i < master->num_sids; i++) {
-			cmd.atc.sid = master->sids[i];
-			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
-		}
-	}
+	list_for_each_entry(master, &smmu_domain->devices, domain_head)
+		ret |= arm_smmu_atc_inv_master(master, &cmd);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
-	return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
+	return ret ? -ETIMEDOUT : 0;
 }
 
 /* IO_PGTABLE API */
@@ -2269,32 +1863,27 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	/*
 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
 	 * PTEs previously cleared by unmaps on the current CPU not yet visible
-	 * to the SMMU. We are relying on the dma_wmb() implicit during cmd
-	 * insertion to guarantee those are observed before the TLBI. Do be
-	 * careful, 007.
+	 * to the SMMU. We are relying on the DSB implicit in
+	 * queue_sync_prod_out() to guarantee those are observed before the
+	 * TLBI. Do be careful, 007.
 	 */
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 	arm_smmu_cmdq_issue_sync(smmu);
 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
-static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
-				   size_t granule, bool leaf,
-				   struct arm_smmu_domain *smmu_domain)
+static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+					  size_t granule, bool leaf, void *cookie)
 {
+	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
-	size_t inv_range = granule;
-	struct arm_smmu_cmdq_batch cmds = {};
 	struct arm_smmu_cmdq_ent cmd = {
 		.tlbi = {
 			.leaf	= leaf,
+			.addr	= iova,
 		},
 	};
 
-	if (!size)
-		return;
-
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
 		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
@@ -2303,78 +1892,37 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 	}
 
-	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
-		/* Get the leaf page size */
-		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
-
-		/* Convert page size of 12,14,16 (log2) to 1,2,3 */
-		cmd.tlbi.tg = (tg - 10) / 2;
-
-		/* Determine what level the granule is at */
-		cmd.tlbi.ttl = 4 - ((ilog2(granule) - 3) / (tg - 3));
-
-		num_pages = size >> tg;
-	}
-
-	while (iova < end) {
-		if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
-			/*
-			 * On each iteration of the loop, the range is 5 bits
-			 * worth of the aligned size remaining.
-			 * The range in pages is:
-			 *
-			 * range = (num_pages & (0x1f << __ffs(num_pages)))
-			 */
-			unsigned long scale, num;
-
-			/* Determine the power of 2 multiple number of pages */
-			scale = __ffs(num_pages);
-			cmd.tlbi.scale = scale;
-
-			/* Determine how many chunks of 2^scale size we have */
-			num = (num_pages >> scale) & CMDQ_TLBI_RANGE_NUM_MAX;
-			cmd.tlbi.num = num - 1;
-
-			/* range is num * 2^scale * pgsize */
-			inv_range = num << (scale + tg);
-
-			/* Clear out the lower order bits for the next iteration */
-			num_pages -= num << scale;
-		}
-
-		cmd.tlbi.addr = iova;
-		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
-		iova += inv_range;
-	}
-	arm_smmu_cmdq_batch_submit(smmu, &cmds);
-
-	/*
-	 * Unfortunately, this can't be leaf-only since we may have
-	 * zapped an entire table.
-	 */
-	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
+	do {
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.tlbi.addr += granule;
+	} while (size -= granule);
 }
 
 static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 					 unsigned long iova, size_t granule,
 					 void *cookie)
 {
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct iommu_domain *domain = &smmu_domain->domain;
-
-	iommu_iotlb_gather_add_page(domain, gather, iova, granule);
+	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
 }
 
 static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
 				  size_t granule, void *cookie)
 {
-	arm_smmu_tlb_inv_range(iova, size, granule, false, cookie);
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
+	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
 				  size_t granule, void *cookie)
 {
-	arm_smmu_tlb_inv_range(iova, size, granule, true, cookie);
+	struct arm_smmu_domain *smmu_domain = cookie;
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
+	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static const struct iommu_flush_ops arm_smmu_flush_ops = {
@@ -2700,6 +2248,7 @@ static void arm_smmu_enable_ats(struct arm_smmu_master *master)
 
 static void arm_smmu_disable_ats(struct arm_smmu_master *master)
 {
+	struct arm_smmu_cmdq_ent cmd;
 	struct arm_smmu_domain *smmu_domain = master->domain;
 
 	if (!master->ats_enabled)
@@ -2711,8 +2260,9 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
 	 * ATC invalidation via the SMMU.
 	 */
 	wmb();
-	arm_smmu_atc_inv_master(master);
-	atomic_dec(&smmu_domain->nr_ats_masters);
+	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+	arm_smmu_atc_inv_master(master, &cmd);
+	atomic_dec(&smmu_domain->nr_ats_masters);
 }
 
 static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
@@ -2875,10 +2425,10 @@ static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
 static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
 				struct iommu_iotlb_gather *gather)
 {
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
 
-	arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start,
-			       gather->pgsize, true, smmu_domain);
+	if (smmu)
+		arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static phys_addr_t
@@ -3176,49 +2726,18 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 	return 0;
 }
 
-static void arm_smmu_cmdq_free_bitmap(void *data)
-{
-	unsigned long *bitmap = data;
-	bitmap_free(bitmap);
-}
-
-static int arm_smmu_cmdq_init(struct arm_smmu_device *smmu)
-{
-	int ret = 0;
-	struct arm_smmu_cmdq *cmdq = &smmu->cmdq;
-	unsigned int nents = 1 << cmdq->q.llq.max_n_shift;
-	atomic_long_t *bitmap;
-
-	atomic_set(&cmdq->owner_prod, 0);
-	atomic_set(&cmdq->lock, 0);
-
-	bitmap = (atomic_long_t *)bitmap_zalloc(nents, GFP_KERNEL);
-	if (!bitmap) {
-		dev_err(smmu->dev, "failed to allocate cmdq bitmap\n");
-		ret = -ENOMEM;
-	} else {
-		cmdq->valid_map = bitmap;
-		devm_add_action(smmu->dev, arm_smmu_cmdq_free_bitmap, bitmap);
-	}
-
-	return ret;
-}
-
 static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
 {
 	int ret;
 
 	/* cmdq */
+	spin_lock_init(&smmu->cmdq.lock);
 	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
 				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
 				      "cmdq");
 	if (ret)
 		return ret;
 
-	ret = arm_smmu_cmdq_init(smmu);
-	if (ret)
-		return ret;
-
 	/* evtq */
 	ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
 				      ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
@@ -3799,15 +3318,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	/* Queue sizes, capped to ensure natural alignment */
 	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
 					     FIELD_GET(IDR1_CMDQS, reg));
-	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
-		/*
-		 * We don't support splitting up batches, so one batch of
-		 * commands plus an extra sync needs to fit inside the command
-		 * queue. There's also no way we can handle the weird alignment
-		 * restrictions on the base pointer for a unit-length queue.
-		 */
-		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
-			CMDQ_BATCH_ENTRIES);
+	if (!smmu->cmdq.q.llq.max_n_shift) {
+		/* Odd alignment restrictions on the base, so ignore for now */
+		dev_err(smmu->dev, "unit-length command queue not supported\n");
 		return -ENXIO;
 	}
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38863.71634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfN-0004U2-G1; Thu, 26 Nov 2020 17:02:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38863.71634; Thu, 26 Nov 2020 17:02:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfN-0004Ts-C6; Thu, 26 Nov 2020 17:02:45 +0000
Received: by outflank-mailman (input) for mailman id 38863;
 Thu, 26 Nov 2020 17:02:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfM-0004PR-NH
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:44 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1f79dbbc-b2c6-426c-a820-b1905ac42e53;
 Thu, 26 Nov 2020 17:02:40 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3778B1597;
 Thu, 26 Nov 2020 09:02:40 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5483F3F23F;
 Thu, 26 Nov 2020 09:02:39 -0800 (PST)
X-Inumbo-ID: 1f79dbbc-b2c6-426c-a820-b1905ac42e53
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 4/8] xen/arm: Remove support for MSI on SMMUv3
Date: Thu, 26 Nov 2020 17:02:03 +0000
Message-Id: <cfc6cbe23f05162d5c62df9db09fef3f8e0b8e14.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

Xen does not support MSIs on Arm platforms, so remove the MSI support
from the SMMUv3 driver.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 176 +-------------------------
 1 file changed, 3 insertions(+), 173 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index cec304e51a..401f7ae006 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -416,31 +416,6 @@ enum pri_resp {
 	PRI_RESP_SUCC = 2,
 };
 
-enum arm_smmu_msi_index {
-	EVTQ_MSI_INDEX,
-	GERROR_MSI_INDEX,
-	PRIQ_MSI_INDEX,
-	ARM_SMMU_MAX_MSIS,
-};
-
-static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
-	[EVTQ_MSI_INDEX] = {
-		ARM_SMMU_EVTQ_IRQ_CFG0,
-		ARM_SMMU_EVTQ_IRQ_CFG1,
-		ARM_SMMU_EVTQ_IRQ_CFG2,
-	},
-	[GERROR_MSI_INDEX] = {
-		ARM_SMMU_GERROR_IRQ_CFG0,
-		ARM_SMMU_GERROR_IRQ_CFG1,
-		ARM_SMMU_GERROR_IRQ_CFG2,
-	},
-	[PRIQ_MSI_INDEX] = {
-		ARM_SMMU_PRIQ_IRQ_CFG0,
-		ARM_SMMU_PRIQ_IRQ_CFG1,
-		ARM_SMMU_PRIQ_IRQ_CFG2,
-	},
-};
-
 struct arm_smmu_cmdq_ent {
 	/* Common fields */
 	u8				opcode;
@@ -504,10 +479,6 @@ struct arm_smmu_cmdq_ent {
 		} pri;
 
 		#define CMDQ_OP_CMD_SYNC	0x46
-		struct {
-			u32			msidata;
-			u64			msiaddr;
-		} sync;
 	};
 };
 
@@ -649,12 +620,6 @@ struct arm_smmu_device {
 
 	struct arm_smmu_strtab_cfg	strtab_cfg;
 
-	/* Hi16xx adds an extra 32 bits of goodness to its MSI payload */
-	union {
-		u32			sync_count;
-		u64			padding;
-	};
-
 	/* IOMMU core code handle */
 	struct iommu_device		iommu;
 };
@@ -945,20 +910,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
 		break;
 	case CMDQ_OP_CMD_SYNC:
-		if (ent->sync.msiaddr)
-			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
-		else
-			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
-		/*
-		 * Commands are written little-endian, but we want the SMMU to
-		 * receive MSIData, and thus write it back to memory, in CPU
-		 * byte order, so big-endian needs an extra byteswap here.
-		 */
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
-				     cpu_to_le32(ent->sync.msidata));
-		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
 		break;
 	default:
 		return -ENOENT;
@@ -1054,50 +1006,6 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
 }
 
-/*
- * The difference between val and sync_idx is bounded by the maximum size of
- * a queue at 2^20 entries, so 32 bits is plenty for wrap-safe arithmetic.
- */
-static int __arm_smmu_sync_poll_msi(struct arm_smmu_device *smmu, u32 sync_idx)
-{
-	ktime_t timeout;
-	u32 val;
-
-	timeout = ktime_add_us(ktime_get(), ARM_SMMU_CMDQ_SYNC_TIMEOUT_US);
-	val = smp_cond_load_acquire(&smmu->sync_count,
-				    (int)(VAL - sync_idx) >= 0 ||
-				    !ktime_before(ktime_get(), timeout));
-
-	return (int)(val - sync_idx) < 0 ? -ETIMEDOUT : 0;
-}
-
-static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
-{
-	u64 cmd[CMDQ_ENT_DWORDS];
-	unsigned long flags;
-	struct arm_smmu_cmdq_ent  ent = {
-		.opcode = CMDQ_OP_CMD_SYNC,
-		.sync	= {
-			.msiaddr = virt_to_phys(&smmu->sync_count),
-		},
-	};
-
-	spin_lock_irqsave(&smmu->cmdq.lock, flags);
-
-	/* Piggy-back on the previous command if it's a SYNC */
-	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
-		ent.sync.msidata = smmu->sync_nr;
-	} else {
-		ent.sync.msidata = ++smmu->sync_nr;
-		arm_smmu_cmdq_build_cmd(cmd, &ent);
-		arm_smmu_cmdq_insert_cmd(smmu, cmd);
-	}
-
-	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
-
-	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
-}
-
 static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
 	u64 cmd[CMDQ_ENT_DWORDS];
@@ -1119,12 +1027,9 @@ static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 {
 	int ret;
-	bool msi = (smmu->features & ARM_SMMU_FEAT_MSI) &&
-		   (smmu->features & ARM_SMMU_FEAT_COHERENCY);
 
-	ret = msi ? __arm_smmu_cmdq_issue_sync_msi(smmu)
-		  : __arm_smmu_cmdq_issue_sync(smmu);
-	if (ret)
+	ret = __arm_smmu_cmdq_issue_sync(smmu);
+	if (ret)
 		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
 	return ret;
 }
@@ -2898,83 +2803,10 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
 	return ret;
 }
 
-static void arm_smmu_free_msis(void *data)
-{
-	struct device *dev = data;
-	platform_msi_domain_free_irqs(dev);
-}
-
-static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
-{
-	phys_addr_t doorbell;
-	struct device *dev = msi_desc_to_dev(desc);
-	struct arm_smmu_device *smmu = dev_get_drvdata(dev);
-	phys_addr_t *cfg = arm_smmu_msi_cfg[desc->platform.msi_index];
-
-	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
-	doorbell &= MSI_CFG0_ADDR_MASK;
-
-	writeq_relaxed(doorbell, smmu->base + cfg[0]);
-	writel_relaxed(msg->data, smmu->base + cfg[1]);
-	writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
-}
-
-static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
-{
-	struct msi_desc *desc;
-	int ret, nvec = ARM_SMMU_MAX_MSIS;
-	struct device *dev = smmu->dev;
-
-	/* Clear the MSI address regs */
-	writeq_relaxed(0, smmu->base + ARM_SMMU_GERROR_IRQ_CFG0);
-	writeq_relaxed(0, smmu->base + ARM_SMMU_EVTQ_IRQ_CFG0);
-
-	if (smmu->features & ARM_SMMU_FEAT_PRI)
-		writeq_relaxed(0, smmu->base + ARM_SMMU_PRIQ_IRQ_CFG0);
-	else
-		nvec--;
-
-	if (!(smmu->features & ARM_SMMU_FEAT_MSI))
-		return;
-
-	if (!dev->msi_domain) {
-		dev_info(smmu->dev, "msi_domain absent - falling back to wired irqs\n");
-		return;
-	}
-
-	/* Allocate MSIs for evtq, gerror and priq. Ignore cmdq */
-	ret = platform_msi_domain_alloc_irqs(dev, nvec, arm_smmu_write_msi_msg);
-	if (ret) {
-		dev_warn(dev, "failed to allocate MSIs - falling back to wired irqs\n");
-		return;
-	}
-
-	for_each_msi_entry(desc, dev) {
-		switch (desc->platform.msi_index) {
-		case EVTQ_MSI_INDEX:
-			smmu->evtq.q.irq = desc->irq;
-			break;
-		case GERROR_MSI_INDEX:
-			smmu->gerr_irq = desc->irq;
-			break;
-		case PRIQ_MSI_INDEX:
-			smmu->priq.q.irq = desc->irq;
-			break;
-		default:	/* Unknown */
-			continue;
-		}
-	}
-
-	/* Add callback to free MSIs on teardown */
-	devm_add_action(dev, arm_smmu_free_msis, dev);
-}
-
 static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 {
 	int irq, ret;
 
-	arm_smmu_setup_msis(smmu);
-
 	/* Request interrupt lines */
 	irq = smmu->evtq.q.irq;
 	if (irq) {
@@ -3250,8 +3082,5 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (reg & IDR0_SEV)
 		smmu->features |= ARM_SMMU_FEAT_SEV;
 
-	if (reg & IDR0_MSI)
-		smmu->features |= ARM_SMMU_FEAT_MSI;
-
 	if (reg & IDR0_HYP)
 		smmu->features |= ARM_SMMU_FEAT_HYP;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38859.71587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfB-0004Ks-Sg; Thu, 26 Nov 2020 17:02:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38859.71587; Thu, 26 Nov 2020 17:02:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfB-0004Kl-PJ; Thu, 26 Nov 2020 17:02:33 +0000
Received: by outflank-mailman (input) for mailman id 38859;
 Thu, 26 Nov 2020 17:02:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKf9-0004Kg-TM
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:31 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 98e166d4-ff61-475d-bf55-3d1799060e48;
 Thu, 26 Nov 2020 17:02:30 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EDB1C31B;
 Thu, 26 Nov 2020 09:02:29 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6B6973F23F;
 Thu, 26 Nov 2020 09:02:28 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kiKf9-0004Kg-TM
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:31 +0000
X-Inumbo-ID: 98e166d4-ff61-475d-bf55-3d1799060e48
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 98e166d4-ff61-475d-bf55-3d1799060e48;
	Thu, 26 Nov 2020 17:02:30 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EDB1C31B;
	Thu, 26 Nov 2020 09:02:29 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6B6973F23F;
	Thu, 26 Nov 2020 09:02:28 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2 0/8] xen/arm: Add support for SMMUv3 driver
Date: Thu, 26 Nov 2020 17:01:59 +0000
Message-Id: <cover.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

This patch series is v2 of the work to add support for the SMMUv3 driver.

The approach taken is to first import the Linux copy of the SMMUv3 driver
(tag v5.9.8) and then modify the driver to build on Xen.

The Linux SMMUv3 code implements command-queue insertion on top of atomic
operations provided by Linux. The atomic functions used by the command-queue
insertion are not implemented in Xen, so we decided to revert the code that
based command-queue insertion on atomic operations. Once suitable atomic
operations are available in Xen, the driver can be updated to re-apply the
patch.
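
The reverted insertion path can be illustrated with a minimal sketch of
lock-free slot reservation via an atomic fetch-and-add on the producer index
(names and the queue size are purely illustrative, not the real driver API;
the actual Linux implementation combines this with further atomics that Xen
did not provide at the time):

```c
#include <stdatomic.h>

/* Hypothetical sketch of lock-free command-queue slot reservation of the
 * kind the Linux driver relies on; names and sizes are illustrative only. */
#define Q_SIZE 256u

static _Atomic unsigned int prod; /* free-running producer index */

/* Reserve n contiguous slots; returns the first reserved slot index.
 * Many CPUs may race here without taking a lock: the fetch-and-add makes
 * each caller's reservation unique. */
static unsigned int cmdq_reserve(unsigned int n)
{
    return atomic_fetch_add_explicit(&prod, n, memory_order_acq_rel) % Q_SIZE;
}
```

The reverted Xen code instead serialises insertions with a spinlock, which
is simpler but does not scale as well across CPUs.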

Remove the support for the MSI and PCI ATS functionality, as there is no
support available in Xen on Arm with which to test it.

As of now only Stage-2 translation support has been tested. Stage-1
translation support is removed; once it has been tested, the code can be
added back.

Linux-specific code is removed from the driver to avoid dead code.

Rahul Singh (8):
  xen/arm: Import the SMMUv3 driver from Linux
  xen/arm: revert atomic operation related command-queue insertion patch
  xen/arm: revert patch related to XArray
  xen/arm: Remove support for MSI on SMMUv3
  xen/arm: Remove support for PCI ATS on SMMUv3
  xen/arm: Remove support for Stage-1 translation on SMMUv3.
  xen/arm: Remove Linux specific code that is not usable in XEN
  xen/arm: Add support for SMMUv3 driver

 MAINTAINERS                           |    6 +
 SUPPORT.md                            |    1 +
 xen/drivers/passthrough/Kconfig       |   10 +
 xen/drivers/passthrough/arm/Makefile  |    1 +
 xen/drivers/passthrough/arm/smmu-v3.c | 2954 +++++++++++++++++++++++++
 5 files changed, 2972 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38862.71623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfM-0004Rz-5l; Thu, 26 Nov 2020 17:02:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38862.71623; Thu, 26 Nov 2020 17:02:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfM-0004Rr-10; Thu, 26 Nov 2020 17:02:44 +0000
Received: by outflank-mailman (input) for mailman id 38862;
 Thu, 26 Nov 2020 17:02:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfK-0004MJ-5B
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:42 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e1df84c5-f26b-43d9-90eb-f7db39644c69;
 Thu, 26 Nov 2020 17:02:38 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 279E9153B;
 Thu, 26 Nov 2020 09:02:38 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 43A663F23F;
 Thu, 26 Nov 2020 09:02:37 -0800 (PST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kiKfK-0004MJ-5B
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:42 +0000
X-Inumbo-ID: e1df84c5-f26b-43d9-90eb-f7db39644c69
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id e1df84c5-f26b-43d9-90eb-f7db39644c69;
	Thu, 26 Nov 2020 17:02:38 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 279E9153B;
	Thu, 26 Nov 2020 09:02:38 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 43A663F23F;
	Thu, 26 Nov 2020 09:02:37 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 3/8] xen/arm: revert patch related to XArray
Date: Thu, 26 Nov 2020 17:02:02 +0000
Message-Id: <612c1adabc1c26a539abf0dc05ea20b51e66e85f.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

XArray is not implemented in Xen; revert the patch that introduced the
XArray code into the SMMUv3 driver.

Once XArray is implemented in Xen, this patch can be re-applied.

This reverts commit 0299a1a81ca056e79c1a7fb751f936ec0d5c7afe.
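
For readers unfamiliar with the fallback, the bitmap allocation that
replaces xa_alloc() below can be sketched as follows (a simplified model:
the driver's real arm_smmu_bitmap_alloc/arm_smmu_bitmap_free helpers operate
on a DECLARE_BITMAP() field of up to ARM_SMMU_MAX_ASIDS bits using Xen's
bitmap operations):

```c
/* Simplified model of bitmap-based ID allocation; MAX_IDS is illustrative,
 * the driver permits 1 << asid_bits IDs. */
#define MAX_IDS 64

static unsigned long id_map; /* bit n set => ID n is in use */

static int bitmap_id_alloc(void)
{
    unsigned int id;

    for (id = 0; id < MAX_IDS; id++) {
        if (!(id_map & (1UL << id))) {
            id_map |= 1UL << id;
            return (int)id;
        }
    }
    return -1; /* exhausted; mirrors the "asid < 0" check in the patch */
}

static void bitmap_id_free(int id)
{
    id_map &= ~(1UL << id);
}
```

Freed IDs become reusable immediately, which is why the revert also switches
the asid variable from u32 to int so the negative error value can propagate.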

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 97eac61ea4..cec304e51a 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -638,6 +638,7 @@ struct arm_smmu_device {
 
 #define ARM_SMMU_MAX_ASIDS		(1 << 16)
 	unsigned int			asid_bits;
+	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
 
 #define ARM_SMMU_MAX_VMIDS		(1 << 16)
 	unsigned int			vmid_bits;
@@ -703,8 +704,6 @@ struct arm_smmu_option_prop {
 	const char *prop;
 };
 
-static DEFINE_XARRAY_ALLOC1(asid_xa);
-
 static struct arm_smmu_option_prop arm_smmu_options[] = {
 	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
 	{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
@@ -1366,14 +1365,6 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
 	cdcfg->cdtab = NULL;
 }
 
-static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
-{
-	if (!cd->asid)
-		return;
-
-	xa_erase(&asid_xa, cd->asid);
-}
-
 /* Stream table manipulation functions */
 static void
 arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
@@ -2006,9 +1997,10 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 
-		if (cfg->cdcfg.cdtab)
+		if (cfg->cdcfg.cdtab) {
 			arm_smmu_free_cd_tables(smmu_domain);
-		arm_smmu_free_asid(&cfg->cd);
+			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
+		}
 	} else {
 		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 		if (cfg->vmid)
@@ -2023,15 +2015,14 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 				       struct io_pgtable_cfg *pgtbl_cfg)
 {
 	int ret;
-	u32 asid;
+	int asid;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
 
-	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
-	if (ret)
-		return ret;
+	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
+	if (asid < 0)
+		return asid;
 
 	cfg->s1cdmax = master->ssid_bits;
 
@@ -2064,7 +2055,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 out_free_cd_tables:
 	arm_smmu_free_cd_tables(smmu_domain);
 out_free_asid:
-	arm_smmu_free_asid(&cfg->cd);
+	arm_smmu_bitmap_free(smmu->asid_map, asid);
 	return ret;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38864.71647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfP-0004Y4-T8; Thu, 26 Nov 2020 17:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38864.71647; Thu, 26 Nov 2020 17:02:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfP-0004Xp-OL; Thu, 26 Nov 2020 17:02:47 +0000
Received: by outflank-mailman (input) for mailman id 38864;
 Thu, 26 Nov 2020 17:02:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfO-0004PR-Md
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:46 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 982b97c6-f0a6-4e4a-9df7-34012dd7e656;
 Thu, 26 Nov 2020 17:02:42 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5DEF715A1;
 Thu, 26 Nov 2020 09:02:42 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7AD383F23F;
 Thu, 26 Nov 2020 09:02:41 -0800 (PST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kiKfO-0004PR-Md
	for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:46 +0000
X-Inumbo-ID: 982b97c6-f0a6-4e4a-9df7-34012dd7e656
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 982b97c6-f0a6-4e4a-9df7-34012dd7e656;
	Thu, 26 Nov 2020 17:02:42 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5DEF715A1;
	Thu, 26 Nov 2020 09:02:42 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown [10.58.246.76])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7AD383F23F;
	Thu, 26 Nov 2020 09:02:41 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 5/8] xen/arm: Remove support for PCI ATS on SMMUv3
Date: Thu, 26 Nov 2020 17:02:04 +0000
Message-Id: <78079d1d6e9d2e7e87125da131e9bdb5809b838a.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

PCI ATS functionality is not implemented or tested on Arm. Remove the
PCI ATS support; once PCI ATS is available and tested, this patch can be
re-applied.
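
One piece of removed logic worth noting is the range rounding in
arm_smmu_atc_inv_to_cmd(): an ATS invalidate must cover a power-of-two,
naturally aligned span of pages, so the driver widens the requested range to
the smallest such span. A standalone model (fls_model() mimics the kernel's
fls_long(), returning the 1-based position of the highest set bit):

```c
/* Standalone model of the removed ATC-invalidation range rounding: choose
 * the smallest power-of-two, naturally aligned span of pages covering
 * [page_start, page_end], at the cost of over-invalidating. */
static unsigned int fls_model(unsigned long x)
{
    unsigned int r = 0;

    while (x) {
        r++;
        x >>= 1;
    }
    return r;
}

/* Returns the aligned start page; *log2_span receives log2 of the span. */
static unsigned long atc_span_start(unsigned long page_start,
                                    unsigned long page_end,
                                    unsigned int *log2_span)
{
    *log2_span = fls_model(page_start ^ page_end);
    return page_start & ~((1UL << *log2_span) - 1);
}
```

For example, pages [8; 11] need a span of 4 starting at 8 (already ideal),
while pages [7; 10] must be widened to [0; 15], matching the worked examples
in the removed code's comment.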

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 273 --------------------------
 1 file changed, 273 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 401f7ae006..6a33628087 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -460,16 +460,6 @@ struct arm_smmu_cmdq_ent {
 			u64			addr;
 		} tlbi;
 
-		#define CMDQ_OP_ATC_INV		0x40
-		#define ATC_INV_SIZE_ALL	52
-		struct {
-			u32			sid;
-			u32			ssid;
-			u64			addr;
-			u8			size;
-			bool			global;
-		} atc;
-
 		#define CMDQ_OP_PRI_RESP	0x41
 		struct {
 			u32			sid;
@@ -632,7 +622,6 @@ struct arm_smmu_master {
 	struct list_head		domain_head;
 	u32				*sids;
 	unsigned int			num_sids;
-	bool				ats_enabled;
 	unsigned int			ssid_bits;
 };
 
@@ -650,7 +639,6 @@ struct arm_smmu_domain {
 
 	struct io_pgtable_ops		*pgtbl_ops;
 	bool				non_strict;
-	atomic_t			nr_ats_masters;
 
 	enum arm_smmu_domain_stage	stage;
 	union {
@@ -886,14 +874,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 	case CMDQ_OP_TLBI_S12_VMALL:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		break;
-	case CMDQ_OP_ATC_INV:
-		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
-		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
-		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SSID, ent->atc.ssid);
-		cmd[0] |= FIELD_PREP(CMDQ_ATC_0_SID, ent->atc.sid);
-		cmd[1] |= FIELD_PREP(CMDQ_ATC_1_SIZE, ent->atc.size);
-		cmd[1] |= ent->atc.addr & CMDQ_ATC_1_ADDR_MASK;
-		break;
 	case CMDQ_OP_PRI_RESP:
 		cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
 		cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
@@ -925,7 +905,6 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 		[CMDQ_ERR_CERROR_NONE_IDX]	= "No error",
 		[CMDQ_ERR_CERROR_ILL_IDX]	= "Illegal command",
 		[CMDQ_ERR_CERROR_ABT_IDX]	= "Abort on command fetch",
-		[CMDQ_ERR_CERROR_ATC_INV_IDX]	= "ATC invalidate timeout",
 	};
 
 	int i;
@@ -945,14 +924,6 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 		dev_err(smmu->dev, "retrying command fetch\n");
 	case CMDQ_ERR_CERROR_NONE_IDX:
 		return;
-	case CMDQ_ERR_CERROR_ATC_INV_IDX:
-		/*
-		 * ATC Invalidation Completion timeout. CONS is still pointing
-		 * at the CMD_SYNC. Attempt to complete other pending commands
-		 * by repeating the CMD_SYNC, though we might well end up back
-		 * here since the ATC invalidation may still be pending.
-		 */
-		return;
 	case CMDQ_ERR_CERROR_ILL_IDX:
 	default:
 		break;
@@ -1422,9 +1393,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
 	}
 
-	if (master->ats_enabled)
-		dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
-						 STRTAB_STE_1_EATS_TRANS));
 
 	arm_smmu_sync_ste_for_sid(smmu, sid);
 	/* See comment in arm_smmu_write_ctx_desc() */
@@ -1633,112 +1601,6 @@ static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
 	return IRQ_WAKE_THREAD;
 }
 
-static void
-arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
-			struct arm_smmu_cmdq_ent *cmd)
-{
-	size_t log2_span;
-	size_t span_mask;
-	/* ATC invalidates are always on 4096-bytes pages */
-	size_t inval_grain_shift = 12;
-	unsigned long page_start, page_end;
-
-	*cmd = (struct arm_smmu_cmdq_ent) {
-		.opcode			= CMDQ_OP_ATC_INV,
-		.substream_valid	= !!ssid,
-		.atc.ssid		= ssid,
-	};
-
-	if (!size) {
-		cmd->atc.size = ATC_INV_SIZE_ALL;
-		return;
-	}
-
-	page_start	= iova >> inval_grain_shift;
-	page_end	= (iova + size - 1) >> inval_grain_shift;
-
-	/*
-	 * In an ATS Invalidate Request, the address must be aligned on the
-	 * range size, which must be a power of two number of page sizes. We
-	 * thus have to choose between grossly over-invalidating the region, or
-	 * splitting the invalidation into multiple commands. For simplicity
-	 * we'll go with the first solution, but should refine it in the future
-	 * if multiple commands are shown to be more efficient.
-	 *
-	 * Find the smallest power of two that covers the range. The most
-	 * significant differing bit between the start and end addresses,
-	 * fls(start ^ end), indicates the required span. For example:
-	 *
-	 * We want to invalidate pages [8; 11]. This is already the ideal range:
-	 *		x = 0b1000 ^ 0b1011 = 0b11
-	 *		span = 1 << fls(x) = 4
-	 *
-	 * To invalidate pages [7; 10], we need to invalidate [0; 15]:
-	 *		x = 0b0111 ^ 0b1010 = 0b1101
-	 *		span = 1 << fls(x) = 16
-	 */
-	log2_span	= fls_long(page_start ^ page_end);
-	span_mask	= (1ULL << log2_span) - 1;
-
-	page_start	&= ~span_mask;
-
-	cmd->atc.addr	= page_start << inval_grain_shift;
-	cmd->atc.size	= log2_span;
-}
-
-static int arm_smmu_atc_inv_master(struct arm_smmu_master *master,
-				   struct arm_smmu_cmdq_ent *cmd)
-{
-	int i;
-
-	if (!master->ats_enabled)
-		return 0;
-
-	for (i = 0; i < master->num_sids; i++) {
-		cmd->atc.sid = master->sids[i];
-		arm_smmu_cmdq_issue_cmd(master->smmu, cmd);
-	}
-
-	return arm_smmu_cmdq_issue_sync(master->smmu);
-}
-
-static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
-				   int ssid, unsigned long iova, size_t size)
-{
-	int ret = 0;
-	unsigned long flags;
-	struct arm_smmu_cmdq_ent cmd;
-	struct arm_smmu_master *master;
-
-	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
-		return 0;
-
-	/*
-	 * Ensure that we've completed prior invalidation of the main TLBs
-	 * before we read 'nr_ats_masters' in case of a concurrent call to
-	 * arm_smmu_enable_ats():
-	 *
-	 *	// unmap()			// arm_smmu_enable_ats()
-	 *	TLBI+SYNC			atomic_inc(&nr_ats_masters);
-	 *	smp_mb();			[...]
-	 *	atomic_read(&nr_ats_masters);	pci_enable_ats() // writel()
-	 *
-	 * Ensures that we always see the incremented 'nr_ats_masters' count if
-	 * ATS was enabled at the PCI device before completion of the TLBI.
-	 */
-	smp_mb();
-	if (!atomic_read(&smmu_domain->nr_ats_masters))
-		return 0;
-
-	arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
-
-	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master, &smmu_domain->devices, domain_head)
-		ret |= arm_smmu_atc_inv_master(master, &cmd);
-	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
-
-	return ret ? -ETIMEDOUT : 0;
-}
 
 /* IO_PGTABLE API */
 static void arm_smmu_tlb_inv_context(void *cookie)
@@ -1765,7 +1627,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 */
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 	arm_smmu_cmdq_issue_sync(smmu);
-	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
 static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
@@ -2106,108 +1967,6 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
 	}
 }
 
-static bool arm_smmu_ats_supported(struct arm_smmu_master *master)
-{
-	struct device *dev = master->dev;
-	struct arm_smmu_device *smmu = master->smmu;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-
-	if (!(smmu->features & ARM_SMMU_FEAT_ATS))
-		return false;
-
-	if (!(fwspec->flags & IOMMU_FWSPEC_PCI_RC_ATS))
-		return false;
-
-	return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
-}
-
-static void arm_smmu_enable_ats(struct arm_smmu_master *master)
-{
-	size_t stu;
-	struct pci_dev *pdev;
-	struct arm_smmu_device *smmu = master->smmu;
-	struct arm_smmu_domain *smmu_domain = master->domain;
-
-	/* Don't enable ATS at the endpoint if it's not enabled in the STE */
-	if (!master->ats_enabled)
-		return;
-
-	/* Smallest Translation Unit: log2 of the smallest supported granule */
-	stu = __ffs(smmu->pgsize_bitmap);
-	pdev = to_pci_dev(master->dev);
-
-	atomic_inc(&smmu_domain->nr_ats_masters);
-	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
-	if (pci_enable_ats(pdev, stu))
-		dev_err(master->dev, "Failed to enable ATS (STU %zu)\n", stu);
-}
-
-static void arm_smmu_disable_ats(struct arm_smmu_master *master)
-{
-	struct arm_smmu_cmdq_ent cmd;
-	struct arm_smmu_domain *smmu_domain = master->domain;
-
-	if (!master->ats_enabled)
-		return;
-
-	pci_disable_ats(to_pci_dev(master->dev));
-	/*
-	 * Ensure ATS is disabled at the endpoint before we issue the
-	 * ATC invalidation via the SMMU.
-	 */
-	wmb();
-	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
-	arm_smmu_atc_inv_master(master, &cmd);
-    atomic_dec(&smmu_domain->nr_ats_masters);
-}
-
-static int arm_smmu_enable_pasid(struct arm_smmu_master *master)
-{
-	int ret;
-	int features;
-	int num_pasids;
-	struct pci_dev *pdev;
-
-	if (!dev_is_pci(master->dev))
-		return -ENODEV;
-
-	pdev = to_pci_dev(master->dev);
-
-	features = pci_pasid_features(pdev);
-	if (features < 0)
-		return features;
-
-	num_pasids = pci_max_pasids(pdev);
-	if (num_pasids <= 0)
-		return num_pasids;
-
-	ret = pci_enable_pasid(pdev, features);
-	if (ret) {
-		dev_err(&pdev->dev, "Failed to enable PASID\n");
-		return ret;
-	}
-
-	master->ssid_bits = min_t(u8, ilog2(num_pasids),
-				  master->smmu->ssid_bits);
-	return 0;
-}
-
-static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
-{
-	struct pci_dev *pdev;
-
-	if (!dev_is_pci(master->dev))
-		return;
-
-	pdev = to_pci_dev(master->dev);
-
-	if (!pdev->pasid_enabled)
-		return;
-
-	master->ssid_bits = 0;
-	pci_disable_pasid(pdev);
-}
-
 static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 {
 	unsigned long flags;
@@ -2216,14 +1975,11 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 	if (!smmu_domain)
 		return;
 
-	arm_smmu_disable_ats(master);
-
 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
 	list_del(&master->domain_head);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
 	master->domain = NULL;
-	master->ats_enabled = false;
 	arm_smmu_install_ste_for_dev(master);
 }
 
@@ -2271,17 +2027,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 
 	master->domain = smmu_domain;
 
-	if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
-		master->ats_enabled = arm_smmu_ats_supported(master);
-
 	arm_smmu_install_ste_for_dev(master);
 
 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
 	list_add(&master->domain_head, &smmu_domain->devices);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
-	arm_smmu_enable_ats(master);
-
 out_unlock:
 	mutex_unlock(&smmu_domain->init_mutex);
 	return ret;
@@ -2410,16 +2161,6 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 
 	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
 
-	/*
-	 * Note that PASID must be enabled before, and disabled after ATS:
-	 * PCI Express Base 4.0r1.0 - 10.5.1.3 ATS Control Register
-	 *
-	 *   Behavior is undefined if this bit is Set and the value of the PASID
-	 *   Enable, Execute Requested Enable, or Privileged Mode Requested bits
-	 *   are changed.
-	 */
-	arm_smmu_enable_pasid(master);
-
 	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
 		master->ssid_bits = min_t(u8, master->ssid_bits,
 					  CTXDESC_LINEAR_CDMAX);
@@ -2442,7 +2183,6 @@ static void arm_smmu_release_device(struct device *dev)
 
 	master = dev_iommu_priv_get(dev);
 	arm_smmu_detach_dev(master);
-	arm_smmu_disable_pasid(master);
 	kfree(master);
 	iommu_fwspec_free(dev);
 }
@@ -2997,15 +2737,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 		}
 	}
 
-	if (smmu->features & ARM_SMMU_FEAT_ATS) {
-		enables |= CR0_ATSCHK;
-		ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
-					      ARM_SMMU_CR0ACK);
-		if (ret) {
-			dev_err(smmu->dev, "failed to enable ATS check\n");
-			return ret;
-		}
-	}
 
 	ret = arm_smmu_setup_irqs(smmu);
 	if (ret) {
@@ -3076,13 +2807,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
 		smmu->features |= ARM_SMMU_FEAT_PRI;
 
-	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
-		smmu->features |= ARM_SMMU_FEAT_ATS;
-
 	if (reg & IDR0_SEV)
 		smmu->features |= ARM_SMMU_FEAT_SEV;
 
-
 	if (reg & IDR0_HYP)
 		smmu->features |= ARM_SMMU_FEAT_HYP;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38865.71659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfV-0004fN-8N; Thu, 26 Nov 2020 17:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38865.71659; Thu, 26 Nov 2020 17:02:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfV-0004fD-4W; Thu, 26 Nov 2020 17:02:53 +0000
Received: by outflank-mailman (input) for mailman id 38865;
 Thu, 26 Nov 2020 17:02:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfT-0004PR-Mu
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:51 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4ae7c634-5e4c-40d4-b56f-ac96a9c990ae;
 Thu, 26 Nov 2020 17:02:44 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 84AAE31B;
 Thu, 26 Nov 2020 09:02:44 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A13193F23F;
 Thu, 26 Nov 2020 09:02:43 -0800 (PST)
X-Inumbo-ID: 4ae7c634-5e4c-40d4-b56f-ac96a9c990ae
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 6/8] xen/arm: Remove support for Stage-1 translation on SMMUv3.
Date: Thu, 26 Nov 2020 17:02:05 +0000
Message-Id: <29d40e76341983b175250b71e7b7a290895effd0.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

The Linux SMMUv3 driver supports both Stage-1 and Stage-2 translations.
As of now, only Stage-2 translation support has been tested, so remove
the Stage-1 code for the time being.

Once Stage-1 translation support has been tested, this code can be
added back.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 440 +-------------------------
 1 file changed, 10 insertions(+), 430 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 6a33628087..40e3890a58 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -432,19 +432,14 @@ struct arm_smmu_cmdq_ent {
 
 		#define CMDQ_OP_CFGI_STE	0x3
 		#define CMDQ_OP_CFGI_ALL	0x4
-		#define CMDQ_OP_CFGI_CD		0x5
-		#define CMDQ_OP_CFGI_CD_ALL	0x6
 		struct {
 			u32			sid;
-			u32			ssid;
 			union {
 				bool		leaf;
 				u8		span;
 			};
 		} cfgi;
 
-		#define CMDQ_OP_TLBI_NH_ASID	0x11
-		#define CMDQ_OP_TLBI_NH_VA	0x12
 		#define CMDQ_OP_TLBI_EL2_ALL	0x20
 		#define CMDQ_OP_TLBI_S12_VMALL	0x28
 		#define CMDQ_OP_TLBI_S2_IPA	0x2a
@@ -514,32 +509,6 @@ struct arm_smmu_strtab_l1_desc {
 	dma_addr_t			l2ptr_dma;
 };
 
-struct arm_smmu_ctx_desc {
-	u16				asid;
-	u64				ttbr;
-	u64				tcr;
-	u64				mair;
-};
-
-struct arm_smmu_l1_ctx_desc {
-	__le64				*l2ptr;
-	dma_addr_t			l2ptr_dma;
-};
-
-struct arm_smmu_ctx_desc_cfg {
-	__le64				*cdtab;
-	dma_addr_t			cdtab_dma;
-	struct arm_smmu_l1_ctx_desc	*l1_desc;
-	unsigned int			num_l1_ents;
-};
-
-struct arm_smmu_s1_cfg {
-	struct arm_smmu_ctx_desc_cfg	cdcfg;
-	struct arm_smmu_ctx_desc	cd;
-	u8				s1fmt;
-	u8				s1cdmax;
-};
-
 struct arm_smmu_s2_cfg {
 	u16				vmid;
 	u64				vttbr;
@@ -590,22 +559,16 @@ struct arm_smmu_device {
 
 	int				gerr_irq;
 	int				combined_irq;
-	u32				sync_nr;
 	u8				prev_cmd_opcode;
 
 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
 	unsigned long			pgsize_bitmap;
 
-#define ARM_SMMU_MAX_ASIDS		(1 << 16)
-	unsigned int			asid_bits;
-	DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
-
 #define ARM_SMMU_MAX_VMIDS		(1 << 16)
 	unsigned int			vmid_bits;
 	DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
 
-	unsigned int			ssid_bits;
 	unsigned int			sid_bits;
 
 	struct arm_smmu_strtab_cfg	strtab_cfg;
@@ -622,7 +585,6 @@ struct arm_smmu_master {
 	struct list_head		domain_head;
 	u32				*sids;
 	unsigned int			num_sids;
-	unsigned int			ssid_bits;
 };
 
 /* SMMU private data for an IOMMU domain */
@@ -642,7 +604,6 @@ struct arm_smmu_domain {
 
 	enum arm_smmu_domain_stage	stage;
 	union {
-		struct arm_smmu_s1_cfg	s1_cfg;
 		struct arm_smmu_s2_cfg	s2_cfg;
 	};
 
@@ -835,30 +796,14 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
 		cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
 		break;
-	case CMDQ_OP_CFGI_CD:
-		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SSID, ent->cfgi.ssid);
-		fallthrough;
 	case CMDQ_OP_CFGI_STE:
 		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
 		break;
-	case CMDQ_OP_CFGI_CD_ALL:
-		cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
-		break;
 	case CMDQ_OP_CFGI_ALL:
 		/* Cover the entire SID range */
 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
 		break;
-	case CMDQ_OP_TLBI_NH_VA:
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
-		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
-		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
-		break;
 	case CMDQ_OP_TLBI_S2_IPA:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
@@ -868,9 +813,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TG, ent->tlbi.tg);
 		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_IPA_MASK;
 		break;
-	case CMDQ_OP_TLBI_NH_ASID:
-		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
-		fallthrough;
 	case CMDQ_OP_TLBI_S12_VMALL:
 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
 		break;
@@ -1005,242 +947,6 @@ static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
 	return ret;
 }
 
-/* Context descriptor manipulation functions */
-static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
-			     int ssid, bool leaf)
-{
-	size_t i;
-	unsigned long flags;
-	struct arm_smmu_master *master;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_cmdq_ent cmd = {
-		.opcode	= CMDQ_OP_CFGI_CD,
-		.cfgi	= {
-			.ssid	= ssid,
-			.leaf	= leaf,
-		},
-	};
-
-	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-		for (i = 0; i < master->num_sids; i++) {
-			cmd.cfgi.sid = master->sids[i];
-			arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-		}
-	}
-	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
-
-	arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
-					struct arm_smmu_l1_ctx_desc *l1_desc)
-{
-	size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
-
-	l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
-					     &l1_desc->l2ptr_dma, GFP_KERNEL);
-	if (!l1_desc->l2ptr) {
-		dev_warn(smmu->dev,
-			 "failed to allocate context descriptor table\n");
-		return -ENOMEM;
-	}
-	return 0;
-}
-
-static void arm_smmu_write_cd_l1_desc(__le64 *dst,
-				      struct arm_smmu_l1_ctx_desc *l1_desc)
-{
-	u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) |
-		  CTXDESC_L1_DESC_V;
-
-	/* See comment in arm_smmu_write_ctx_desc() */
-	WRITE_ONCE(*dst, cpu_to_le64(val));
-}
-
-static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
-				   u32 ssid)
-{
-	__le64 *l1ptr;
-	unsigned int idx;
-	struct arm_smmu_l1_ctx_desc *l1_desc;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
-
-	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
-		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
-
-	idx = ssid >> CTXDESC_SPLIT;
-	l1_desc = &cdcfg->l1_desc[idx];
-	if (!l1_desc->l2ptr) {
-		if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc))
-			return NULL;
-
-		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
-		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
-		/* An invalid L1CD can be cached */
-		arm_smmu_sync_cd(smmu_domain, ssid, false);
-	}
-	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
-	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
-}
-
-static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
-				   int ssid, struct arm_smmu_ctx_desc *cd)
-{
-	/*
-	 * This function handles the following cases:
-	 *
-	 * (1) Install primary CD, for normal DMA traffic (SSID = 0).
-	 * (2) Install a secondary CD, for SID+SSID traffic.
-	 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
-	 *     CD, then invalidate the old entry and mappings.
-	 * (4) Remove a secondary CD.
-	 */
-	u64 val;
-	bool cd_live;
-	__le64 *cdptr;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
-		return -E2BIG;
-
-	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
-	if (!cdptr)
-		return -ENOMEM;
-
-	val = le64_to_cpu(cdptr[0]);
-	cd_live = !!(val & CTXDESC_CD_0_V);
-
-	if (!cd) { /* (4) */
-		val = 0;
-	} else if (cd_live) { /* (3) */
-		val &= ~CTXDESC_CD_0_ASID;
-		val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
-		/*
-		 * Until CD+TLB invalidation, both ASIDs may be used for tagging
-		 * this substream's traffic
-		 */
-	} else { /* (1) and (2) */
-		cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
-		cdptr[2] = 0;
-		cdptr[3] = cpu_to_le64(cd->mair);
-
-		/*
-		 * STE is live, and the SMMU might read dwords of this CD in any
-		 * order. Ensure that it observes valid values before reading
-		 * V=1.
-		 */
-		arm_smmu_sync_cd(smmu_domain, ssid, true);
-
-		val = cd->tcr |
-#ifdef __BIG_ENDIAN
-			CTXDESC_CD_0_ENDI |
-#endif
-			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
-			CTXDESC_CD_0_AA64 |
-			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
-			CTXDESC_CD_0_V;
-
-		/* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */
-		if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
-			val |= CTXDESC_CD_0_S;
-	}
-
-	/*
-	 * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
-	 * "Configuration structures and configuration invalidation completion"
-	 *
-	 *   The size of single-copy atomic reads made by the SMMU is
-	 *   IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
-	 *   field within an aligned 64-bit span of a structure can be altered
-	 *   without first making the structure invalid.
-	 */
-	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
-	arm_smmu_sync_cd(smmu_domain, ssid, true);
-	return 0;
-}
-
-static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
-{
-	int ret;
-	size_t l1size;
-	size_t max_contexts;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
-
-	max_contexts = 1 << cfg->s1cdmax;
-
-	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
-	    max_contexts <= CTXDESC_L2_ENTRIES) {
-		cfg->s1fmt = STRTAB_STE_0_S1FMT_LINEAR;
-		cdcfg->num_l1_ents = max_contexts;
-
-		l1size = max_contexts * (CTXDESC_CD_DWORDS << 3);
-	} else {
-		cfg->s1fmt = STRTAB_STE_0_S1FMT_64K_L2;
-		cdcfg->num_l1_ents = DIV_ROUND_UP(max_contexts,
-						  CTXDESC_L2_ENTRIES);
-
-		cdcfg->l1_desc = devm_kcalloc(smmu->dev, cdcfg->num_l1_ents,
-					      sizeof(*cdcfg->l1_desc),
-					      GFP_KERNEL);
-		if (!cdcfg->l1_desc)
-			return -ENOMEM;
-
-		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
-	}
-
-	cdcfg->cdtab = dmam_alloc_coherent(smmu->dev, l1size, &cdcfg->cdtab_dma,
-					   GFP_KERNEL);
-	if (!cdcfg->cdtab) {
-		dev_warn(smmu->dev, "failed to allocate context descriptor\n");
-		ret = -ENOMEM;
-		goto err_free_l1;
-	}
-
-	return 0;
-
-err_free_l1:
-	if (cdcfg->l1_desc) {
-		devm_kfree(smmu->dev, cdcfg->l1_desc);
-		cdcfg->l1_desc = NULL;
-	}
-	return ret;
-}
-
-static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
-{
-	int i;
-	size_t size, l1size;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
-
-	if (cdcfg->l1_desc) {
-		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
-
-		for (i = 0; i < cdcfg->num_l1_ents; i++) {
-			if (!cdcfg->l1_desc[i].l2ptr)
-				continue;
-
-			dmam_free_coherent(smmu->dev, size,
-					   cdcfg->l1_desc[i].l2ptr,
-					   cdcfg->l1_desc[i].l2ptr_dma);
-		}
-		devm_kfree(smmu->dev, cdcfg->l1_desc);
-		cdcfg->l1_desc = NULL;
-
-		l1size = cdcfg->num_l1_ents * (CTXDESC_L1_DESC_DWORDS << 3);
-	} else {
-		l1size = cdcfg->num_l1_ents * (CTXDESC_CD_DWORDS << 3);
-	}
-
-	dmam_free_coherent(smmu->dev, l1size, cdcfg->cdtab, cdcfg->cdtab_dma);
-	cdcfg->cdtab_dma = 0;
-	cdcfg->cdtab = NULL;
-}
-
 /* Stream table manipulation functions */
 static void
 arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
@@ -1290,7 +996,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	u64 val = le64_to_cpu(dst[0]);
 	bool ste_live = false;
 	struct arm_smmu_device *smmu = NULL;
-	struct arm_smmu_s1_cfg *s1_cfg = NULL;
 	struct arm_smmu_s2_cfg *s2_cfg = NULL;
 	struct arm_smmu_domain *smmu_domain = NULL;
 	struct arm_smmu_cmdq_ent prefetch_cmd = {
@@ -1306,24 +1011,13 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	}
 
 	if (smmu_domain) {
-		switch (smmu_domain->stage) {
-		case ARM_SMMU_DOMAIN_S1:
-			s1_cfg = &smmu_domain->s1_cfg;
-			break;
-		case ARM_SMMU_DOMAIN_S2:
-		case ARM_SMMU_DOMAIN_NESTED:
-			s2_cfg = &smmu_domain->s2_cfg;
-			break;
-		default:
-			break;
-		}
+		s2_cfg = &smmu_domain->s2_cfg;
 	}
 
 	if (val & STRTAB_STE_0_V) {
 		switch (FIELD_GET(STRTAB_STE_0_CFG, val)) {
 		case STRTAB_STE_0_CFG_BYPASS:
 			break;
-		case STRTAB_STE_0_CFG_S1_TRANS:
 		case STRTAB_STE_0_CFG_S2_TRANS:
 			ste_live = true;
 			break;
@@ -1339,7 +1033,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	val = STRTAB_STE_0_V;
 
 	/* Bypass/fault */
-	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
+	if (!smmu_domain || !(s2_cfg)) {
 		if (!smmu_domain && disable_bypass)
 			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
 		else
@@ -1358,25 +1052,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		return;
 	}
 
-	if (s1_cfg) {
-		BUG_ON(ste_live);
-		dst[1] = cpu_to_le64(
-			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
-			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
-			 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
-			 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
-			 FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
-
-		if (smmu->features & ARM_SMMU_FEAT_STALLS &&
-		   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
-			dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
-
-		val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) |
-			FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) |
-			FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) |
-			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
-	}
-
 	if (s2_cfg) {
 		BUG_ON(ste_live);
 		dst[2] = cpu_to_le64(
@@ -1395,7 +1070,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 
 
 	arm_smmu_sync_ste_for_sid(smmu, sid);
-	/* See comment in arm_smmu_write_ctx_desc() */
 	WRITE_ONCE(dst[0], cpu_to_le64(val));
 	arm_smmu_sync_ste_for_sid(smmu, sid);
 
@@ -1609,14 +1283,8 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd;
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
-		cmd.tlbi.vmid	= 0;
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-	}
+	cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
+	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 
 	/*
 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
@@ -1641,13 +1309,8 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		},
 	};
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-	}
+	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
+	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 
 	do {
 		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
@@ -1755,75 +1418,17 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 
 	iommu_put_dma_cookie(domain);
 	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
-	/* Free the CD and ASID, if we allocated them */
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-
-		if (cfg->cdcfg.cdtab) {
-			arm_smmu_free_cd_tables(smmu_domain);
-			arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
-		}
-	} else {
-		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-		if (cfg->vmid)
-			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
-	}
+	if (cfg->vmid)
+		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
 
 	kfree(smmu_domain);
 }
 
-static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
-				       struct arm_smmu_master *master,
-				       struct io_pgtable_cfg *pgtbl_cfg)
-{
-	int ret;
-	int asid;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
-
-	asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
-	if (asid < 0)
-		return asid;
-
-	cfg->s1cdmax = master->ssid_bits;
-
-	ret = arm_smmu_alloc_cd_tables(smmu_domain);
-	if (ret)
-		goto out_free_asid;
-
-	cfg->cd.asid	= (u16)asid;
-	cfg->cd.ttbr	= pgtbl_cfg->arm_lpae_s1_cfg.ttbr;
-	cfg->cd.tcr	= FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) |
-			  FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) |
-			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
-	cfg->cd.mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
-
-	/*
-	 * Note that this will end up calling arm_smmu_sync_cd() before
-	 * the master has been added to the devices list for this domain.
-	 * This isn't an issue because the STE hasn't been installed yet.
-	 */
-	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, &cfg->cd);
-	if (ret)
-		goto out_free_cd_tables;
-
-	return 0;
-
-out_free_cd_tables:
-	arm_smmu_free_cd_tables(smmu_domain);
-out_free_asid:
-	arm_smmu_bitmap_free(smmu->asid_map, asid);
-	return ret;
-}
 
 static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 				       struct arm_smmu_master *master,
@@ -1871,19 +1476,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	}
 
 	/* Restrict the stage to what we can actually support */
-	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
-		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
-	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
-		smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
 
 	switch (smmu_domain->stage) {
-	case ARM_SMMU_DOMAIN_S1:
-		ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
-		ias = min_t(unsigned long, ias, VA_BITS);
-		oas = smmu->ias;
-		fmt = ARM_64_LPAE_S1;
-		finalise_stage_fn = arm_smmu_domain_finalise_s1;
-		break;
 	case ARM_SMMU_DOMAIN_NESTED:
 	case ARM_SMMU_DOMAIN_S2:
 		ias = smmu->ias;
@@ -2016,13 +1611,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 			dev_name(smmu->dev));
 		ret = -ENXIO;
 		goto out_unlock;
-	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
-		   master->ssid_bits != smmu_domain->s1_cfg.s1cdmax) {
-		dev_err(dev,
-			"cannot attach to incompatible domain (%u SSID bits != %u)\n",
-			smmu_domain->s1_cfg.s1cdmax, master->ssid_bits);
-		ret = -EINVAL;
-		goto out_unlock;
 	}
 
 	master->domain = smmu_domain;
@@ -2159,12 +1747,6 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 		}
 	}
 
-	master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits);
-
-	if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
-		master->ssid_bits = min_t(u8, master->ssid_bits,
-					  CTXDESC_LINEAR_CDMAX);
-
 	return &smmu->iommu;
 
 err_free_master:
@@ -2853,7 +2435,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	}
 
 	/* ASID/VMID sizes */
-	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
 	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
 
 	/* IDR1 */
@@ -2878,7 +2459,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 					     FIELD_GET(IDR1_PRIQS, reg));
 
 	/* SID/SSID sizes */
-	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
 	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
 
 	/*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:02:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:02:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38867.71671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfZ-0004lW-PX; Thu, 26 Nov 2020 17:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38867.71671; Thu, 26 Nov 2020 17:02:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKfZ-0004lO-L6; Thu, 26 Nov 2020 17:02:57 +0000
Received: by outflank-mailman (input) for mailman id 38867;
 Thu, 26 Nov 2020 17:02:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfY-0004PR-N1
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:02:56 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e4aed5f0-dd65-427c-bc66-61e41117ad43;
 Thu, 26 Nov 2020 17:02:46 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6146A1516;
 Thu, 26 Nov 2020 09:02:46 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7D9E43F23F;
 Thu, 26 Nov 2020 09:02:45 -0800 (PST)
X-Inumbo-ID: e4aed5f0-dd65-427c-bc66-61e41117ad43
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 7/8] xen/arm: Remove Linux-specific code that is not usable in Xen
Date: Thu, 26 Nov 2020 17:02:06 +0000
Message-Id: <1d9da8ed4845aeb9e86a5ce6750b811bd7e2020e.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

The code related to struct io_pgtable_ops, struct io_pgtable_cfg,
struct iommu_flush_ops, and struct iommu_ops is Linux-specific.

Remove the code related to the structs above, as it is dead code in Xen.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 457 --------------------------
 1 file changed, 457 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 40e3890a58..55d1cba194 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -402,13 +402,7 @@
 #define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US	1000000 /* 1s! */
 #define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT	10
 
-#define MSI_IOVA_BASE			0x8000000
-#define MSI_IOVA_LENGTH			0x100000
-
 static bool disable_bypass = 1;
-module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
-MODULE_PARM_DESC(disable_bypass,
-	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
 
 enum pri_resp {
 	PRI_RESP_DENY = 0,
@@ -599,7 +593,6 @@ struct arm_smmu_domain {
 	struct arm_smmu_device		*smmu;
 	struct mutex			init_mutex; /* Protects smmu pointer */
 
-	struct io_pgtable_ops		*pgtbl_ops;
 	bool				non_strict;
 
 	enum arm_smmu_domain_stage	stage;
@@ -1297,74 +1290,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	arm_smmu_cmdq_issue_sync(smmu);
 }
 
-static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
-					  size_t granule, bool leaf, void *cookie)
-{
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_cmdq_ent cmd = {
-		.tlbi = {
-			.leaf	= leaf,
-			.addr	= iova,
-		},
-	};
-
-	cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
-	cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-
-	do {
-		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-		cmd.tlbi.addr += granule;
-	} while (size -= granule);
-}
-
-static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
-					 unsigned long iova, size_t granule,
-					 void *cookie)
-{
-	arm_smmu_tlb_inv_range_nosync(iova, granule, granule, true, cookie);
-}
-
-static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
-				  size_t granule, void *cookie)
-{
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	arm_smmu_tlb_inv_range_nosync(iova, size, granule, false, cookie);
-	arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static void arm_smmu_tlb_inv_leaf(unsigned long iova, size_t size,
-				  size_t granule, void *cookie)
-{
-	struct arm_smmu_domain *smmu_domain = cookie;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-
-	arm_smmu_tlb_inv_range_nosync(iova, size, granule, true, cookie);
-	arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static const struct iommu_flush_ops arm_smmu_flush_ops = {
-	.tlb_flush_all	= arm_smmu_tlb_inv_context,
-	.tlb_flush_walk = arm_smmu_tlb_inv_walk,
-	.tlb_flush_leaf = arm_smmu_tlb_inv_leaf,
-	.tlb_add_page	= arm_smmu_tlb_inv_page_nosync,
-};
-
-/* IOMMU API */
-static bool arm_smmu_capable(enum iommu_cap cap)
-{
-	switch (cap) {
-	case IOMMU_CAP_CACHE_COHERENCY:
-		return true;
-	case IOMMU_CAP_NOEXEC:
-		return true;
-	default:
-		return false;
-	}
-}
-
 static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 {
 	struct arm_smmu_domain *smmu_domain;
@@ -1421,7 +1346,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 
 	iommu_put_dma_cookie(domain);
-	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
 	if (cfg->vmid)
 		arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
@@ -1429,7 +1353,6 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	kfree(smmu_domain);
 }
 
-
 static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 				       struct arm_smmu_master *master,
 				       struct io_pgtable_cfg *pgtbl_cfg)
@@ -1437,7 +1360,6 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
 	int vmid;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-	typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
 
 	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
 	if (vmid < 0)
@@ -1461,20 +1383,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 {
 	int ret;
 	unsigned long ias, oas;
-	enum io_pgtable_fmt fmt;
-	struct io_pgtable_cfg pgtbl_cfg;
-	struct io_pgtable_ops *pgtbl_ops;
 	int (*finalise_stage_fn)(struct arm_smmu_domain *,
 				 struct arm_smmu_master *,
 				 struct io_pgtable_cfg *);
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 
-	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
-		smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
-		return 0;
-	}
-
 	/* Restrict the stage to what we can actually support */
 	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
 
@@ -1483,40 +1397,17 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 	case ARM_SMMU_DOMAIN_S2:
 		ias = smmu->ias;
 		oas = smmu->oas;
-		fmt = ARM_64_LPAE_S2;
 		finalise_stage_fn = arm_smmu_domain_finalise_s2;
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	pgtbl_cfg = (struct io_pgtable_cfg) {
-		.pgsize_bitmap	= smmu->pgsize_bitmap,
-		.ias		= ias,
-		.oas		= oas,
-		.coherent_walk	= smmu->features & ARM_SMMU_FEAT_COHERENCY,
-		.tlb		= &arm_smmu_flush_ops,
-		.iommu_dev	= smmu->dev,
-	};
-
-	if (smmu_domain->non_strict)
-		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
-
-	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
-	if (!pgtbl_ops)
-		return -ENOMEM;
-
-	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
-	domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
-	domain->geometry.force_aperture = true;
-
 	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
 	if (ret < 0) {
-		free_io_pgtable_ops(pgtbl_ops);
 		return ret;
 	}
 
-	smmu_domain->pgtbl_ops = pgtbl_ops;
 	return 0;
 }
 
@@ -1626,71 +1517,6 @@ out_unlock:
 	return ret;
 }
 
-static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
-			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
-{
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-
-	if (!ops)
-		return -ENODEV;
-
-	return ops->map(ops, iova, paddr, size, prot, gfp);
-}
-
-static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-			     size_t size, struct iommu_iotlb_gather *gather)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
-
-	if (!ops)
-		return 0;
-
-	return ops->unmap(ops, iova, size, gather);
-}
-
-static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-	if (smmu_domain->smmu)
-		arm_smmu_tlb_inv_context(smmu_domain);
-}
-
-static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
-				struct iommu_iotlb_gather *gather)
-{
-	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
-
-	if (smmu)
-		arm_smmu_cmdq_issue_sync(smmu);
-}
-
-static phys_addr_t
-arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
-{
-	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
-
-	if (domain->type == IOMMU_DOMAIN_IDENTITY)
-		return iova;
-
-	if (!ops)
-		return 0;
-
-	return ops->iova_to_phys(ops, iova);
-}
-
-static struct platform_driver arm_smmu_driver;
-
-static
-struct arm_smmu_device *arm_smmu_get_by_fwnode(struct fwnode_handle *fwnode)
-{
-	struct device *dev = driver_find_device_by_fwnode(&arm_smmu_driver.driver,
-							  fwnode);
-	put_device(dev);
-	return dev ? dev_get_drvdata(dev) : NULL;
-}
-
 static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 {
 	unsigned long limit = smmu->strtab_cfg.num_l1_ents;
@@ -1701,206 +1527,6 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	return sid < limit;
 }
 
-static struct iommu_ops arm_smmu_ops;
-
-static struct iommu_device *arm_smmu_probe_device(struct device *dev)
-{
-	int i, ret;
-	struct arm_smmu_device *smmu;
-	struct arm_smmu_master *master;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-
-	if (!fwspec || fwspec->ops != &arm_smmu_ops)
-		return ERR_PTR(-ENODEV);
-
-	if (WARN_ON_ONCE(dev_iommu_priv_get(dev)))
-		return ERR_PTR(-EBUSY);
-
-	smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
-	if (!smmu)
-		return ERR_PTR(-ENODEV);
-
-	master = kzalloc(sizeof(*master), GFP_KERNEL);
-	if (!master)
-		return ERR_PTR(-ENOMEM);
-
-	master->dev = dev;
-	master->smmu = smmu;
-	master->sids = fwspec->ids;
-	master->num_sids = fwspec->num_ids;
-	dev_iommu_priv_set(dev, master);
-
-	/* Check the SIDs are in range of the SMMU and our stream table */
-	for (i = 0; i < master->num_sids; i++) {
-		u32 sid = master->sids[i];
-
-		if (!arm_smmu_sid_in_range(smmu, sid)) {
-			ret = -ERANGE;
-			goto err_free_master;
-		}
-
-		/* Ensure l2 strtab is initialised */
-		if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
-			ret = arm_smmu_init_l2_strtab(smmu, sid);
-			if (ret)
-				goto err_free_master;
-		}
-	}
-
-	return &smmu->iommu;
-
-err_free_master:
-	kfree(master);
-	dev_iommu_priv_set(dev, NULL);
-	return ERR_PTR(ret);
-}
-
-static void arm_smmu_release_device(struct device *dev)
-{
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
-	struct arm_smmu_master *master;
-
-	if (!fwspec || fwspec->ops != &arm_smmu_ops)
-		return;
-
-	master = dev_iommu_priv_get(dev);
-	arm_smmu_detach_dev(master);
-	kfree(master);
-	iommu_fwspec_free(dev);
-}
-
-static struct iommu_group *arm_smmu_device_group(struct device *dev)
-{
-	struct iommu_group *group;
-
-	/*
-	 * We don't support devices sharing stream IDs other than PCI RID
-	 * aliases, since the necessary ID-to-device lookup becomes rather
-	 * impractical given a potential sparse 32-bit stream ID space.
-	 */
-	if (dev_is_pci(dev))
-		group = pci_device_group(dev);
-	else
-		group = generic_device_group(dev);
-
-	return group;
-}
-
-static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
-				    enum iommu_attr attr, void *data)
-{
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-	switch (domain->type) {
-	case IOMMU_DOMAIN_UNMANAGED:
-		switch (attr) {
-		case DOMAIN_ATTR_NESTING:
-			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
-			return 0;
-		default:
-			return -ENODEV;
-		}
-		break;
-	case IOMMU_DOMAIN_DMA:
-		switch (attr) {
-		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
-			*(int *)data = smmu_domain->non_strict;
-			return 0;
-		default:
-			return -ENODEV;
-		}
-		break;
-	default:
-		return -EINVAL;
-	}
-}
-
-static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
-				    enum iommu_attr attr, void *data)
-{
-	int ret = 0;
-	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-
-	mutex_lock(&smmu_domain->init_mutex);
-
-	switch (domain->type) {
-	case IOMMU_DOMAIN_UNMANAGED:
-		switch (attr) {
-		case DOMAIN_ATTR_NESTING:
-			if (smmu_domain->smmu) {
-				ret = -EPERM;
-				goto out_unlock;
-			}
-
-			if (*(int *)data)
-				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
-			else
-				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
-			break;
-		default:
-			ret = -ENODEV;
-		}
-		break;
-	case IOMMU_DOMAIN_DMA:
-		switch(attr) {
-		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
-			smmu_domain->non_strict = *(int *)data;
-			break;
-		default:
-			ret = -ENODEV;
-		}
-		break;
-	default:
-		ret = -EINVAL;
-	}
-
-out_unlock:
-	mutex_unlock(&smmu_domain->init_mutex);
-	return ret;
-}
-
-static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
-{
-	return iommu_fwspec_add_ids(dev, args->args, 1);
-}
-
-static void arm_smmu_get_resv_regions(struct device *dev,
-				      struct list_head *head)
-{
-	struct iommu_resv_region *region;
-	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
-
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
-					 prot, IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
-
-	list_add_tail(&region->list, head);
-
-	iommu_dma_get_resv_regions(dev, head);
-}
-
-static struct iommu_ops arm_smmu_ops = {
-	.capable		= arm_smmu_capable,
-	.domain_alloc		= arm_smmu_domain_alloc,
-	.domain_free		= arm_smmu_domain_free,
-	.attach_dev		= arm_smmu_attach_dev,
-	.map			= arm_smmu_map,
-	.unmap			= arm_smmu_unmap,
-	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
-	.iotlb_sync		= arm_smmu_iotlb_sync,
-	.iova_to_phys		= arm_smmu_iova_to_phys,
-	.probe_device		= arm_smmu_probe_device,
-	.release_device		= arm_smmu_release_device,
-	.device_group		= arm_smmu_device_group,
-	.domain_get_attr	= arm_smmu_domain_get_attr,
-	.domain_set_attr	= arm_smmu_domain_set_attr,
-	.of_xlate		= arm_smmu_of_xlate,
-	.get_resv_regions	= arm_smmu_get_resv_regions,
-	.put_resv_regions	= generic_iommu_put_resv_regions,
-	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
-};
-
 /* Probing and initialisation functions */
 static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 				   struct arm_smmu_queue *q,
@@ -2406,7 +2032,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
 	case IDR0_STALL_MODEL_FORCE:
 		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
-		fallthrough;
 	case IDR0_STALL_MODEL_STALL:
 		smmu->features |= ARM_SMMU_FEAT_STALLS;
 	}
@@ -2426,7 +2051,6 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	switch (FIELD_GET(IDR0_TTF, reg)) {
 	case IDR0_TTF_AARCH32_64:
 		smmu->ias = 40;
-		fallthrough;
 	case IDR0_TTF_AARCH64:
 		break;
 	default:
@@ -2515,21 +2139,10 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	default:
 		dev_info(smmu->dev,
 			"unknown output address size. Truncating to 48-bit\n");
-		fallthrough;
 	case IDR5_OAS_48_BIT:
 		smmu->oas = 48;
 	}
 
-	if (arm_smmu_ops.pgsize_bitmap == -1UL)
-		arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
-	else
-		arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
-
-	/* Set the DMA mask for our table walker */
-	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
-		dev_warn(smmu->dev,
-			 "failed to set DMA mask for table walker\n");
-
 	smmu->ias = max(smmu->ias, smmu->oas);
 
 	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
@@ -2595,9 +2208,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev,
 
 	parse_driver_options(smmu);
 
-	if (of_dma_is_coherent(dev->of_node))
-		smmu->features |= ARM_SMMU_FEAT_COHERENCY;
-
 	return ret;
 }
 
@@ -2609,55 +2219,6 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
 		return SZ_128K;
 }
 
-static int arm_smmu_set_bus_ops(struct iommu_ops *ops)
-{
-	int err;
-
-#ifdef CONFIG_PCI
-	if (pci_bus_type.iommu_ops != ops) {
-		err = bus_set_iommu(&pci_bus_type, ops);
-		if (err)
-			return err;
-	}
-#endif
-#ifdef CONFIG_ARM_AMBA
-	if (amba_bustype.iommu_ops != ops) {
-		err = bus_set_iommu(&amba_bustype, ops);
-		if (err)
-			goto err_reset_pci_ops;
-	}
-#endif
-	if (platform_bus_type.iommu_ops != ops) {
-		err = bus_set_iommu(&platform_bus_type, ops);
-		if (err)
-			goto err_reset_amba_ops;
-	}
-
-	return 0;
-
-err_reset_amba_ops:
-#ifdef CONFIG_ARM_AMBA
-	bus_set_iommu(&amba_bustype, NULL);
-#endif
-err_reset_pci_ops: __maybe_unused;
-#ifdef CONFIG_PCI
-	bus_set_iommu(&pci_bus_type, NULL);
-#endif
-	return err;
-}
-
-static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start,
-				      resource_size_t size)
-{
-	struct resource res = {
-		.flags = IORESOURCE_MEM,
-		.start = start,
-		.end = start + size - 1,
-	};
-
-	return devm_ioremap_resource(dev, &res);
-}
-
 static int arm_smmu_device_probe(struct platform_device *pdev)
 {
 	int irq, ret;
@@ -2785,21 +2346,3 @@ static const struct of_device_id arm_smmu_of_match[] = {
 	{ .compatible = "arm,smmu-v3", },
 	{ },
 };
-MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
-
-static struct platform_driver arm_smmu_driver = {
-	.driver	= {
-		.name			= "arm-smmu-v3",
-		.of_match_table		= arm_smmu_of_match,
-		.suppress_bind_attrs	= true,
-	},
-	.probe	= arm_smmu_device_probe,
-	.remove	= arm_smmu_device_remove,
-	.shutdown = arm_smmu_device_shutdown,
-};
-module_platform_driver(arm_smmu_driver);
-
-MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
-MODULE_AUTHOR("Will Deacon <will@kernel.org>");
-MODULE_ALIAS("platform:arm-smmu-v3");
-MODULE_LICENSE("GPL v2");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:03:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38869.71683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKff-0004sc-4h; Thu, 26 Nov 2020 17:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38869.71683; Thu, 26 Nov 2020 17:03:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKff-0004sU-1L; Thu, 26 Nov 2020 17:03:03 +0000
Received: by outflank-mailman (input) for mailman id 38869;
 Thu, 26 Nov 2020 17:03:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C6x3=FA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiKfd-0004PR-NF
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:03:01 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d33b7a50-8782-4458-9531-254b783b556a;
 Thu, 26 Nov 2020 17:02:49 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2E67E31B;
 Thu, 26 Nov 2020 09:02:49 -0800 (PST)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 837253F23F;
 Thu, 26 Nov 2020 09:02:47 -0800 (PST)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 8/8] xen/arm: Add support for SMMUv3 driver
Date: Thu, 26 Nov 2020 17:02:07 +0000
Message-Id: <de2101687020d18172a2b153f8977a5116d0cd66.1606406359.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606406359.git.rahul.singh@arm.com>
References: <cover.1606406359.git.rahul.singh@arm.com>

Add support for the ARM architected SMMUv3 implementation. The driver is
based on the Linux SMMUv3 driver.

Major differences from the Linux driver are as follows:
1. Only Stage-2 translation is supported, whereas the Linux driver
   supports both Stage-1 and Stage-2 translations.
2. The P2M page table is used directly instead of allocating a separate
   one, as SMMUv3 is capable of sharing page tables with the CPU.
3. Tasklets are used in place of Linux's threaded IRQs for event queue
   and priority queue IRQ handling.
4. The latest Linux SMMUv3 code implements the command queue access
   functions based on atomic operations implemented in Linux. The atomic
   functions used by the command queue access functions are not
   implemented in Xen, therefore we decided to port the earlier version
   of the code. Once proper atomic operations are available in Xen, the
   driver can be updated.
5. The driver is currently supported as a Tech Preview.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 MAINTAINERS                           |   6 +
 SUPPORT.md                            |   1 +
 xen/drivers/passthrough/Kconfig       |  10 +
 xen/drivers/passthrough/arm/Makefile  |   1 +
 xen/drivers/passthrough/arm/smmu-v3.c | 986 +++++++++++++++++++++-----
 5 files changed, 814 insertions(+), 190 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index dab38a6a14..1d63489eec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -249,6 +249,12 @@ F:	xen/include/asm-arm/
 F:	xen/include/public/arch-arm/
 F:	xen/include/public/arch-arm.h
 
+ARM SMMUv3
+M:	Bertrand Marquis <bertrand.marquis@arm.com>
+M:	Rahul Singh <rahul.singh@arm.com>
+S:	Supported
+F:	xen/drivers/passthrough/arm/smmu-v3.c
+
 Change Log
 M:	Paul Durrant <paul@xen.org>
 R:	Community Manager <community.manager@xenproject.org>
diff --git a/SUPPORT.md b/SUPPORT.md
index ab02aca5f4..e402c7202d 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -68,6 +68,7 @@ For the Cortex A57 r0p0 - r1p1, see Errata 832075.
     Status, ARM SMMUv1: Supported, not security supported
     Status, ARM SMMUv2: Supported, not security supported
     Status, Renesas IPMMU-VMSA: Supported, not security supported
+    Status, ARM SMMUv3: Tech Preview
 
 ### ARM/GICv3 ITS
 
diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 0036007ec4..5b71c59f47 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -13,6 +13,16 @@ config ARM_SMMU
 	  Say Y here if your SoC includes an IOMMU device implementing the
 	  ARM SMMU architecture.
 
+config ARM_SMMU_V3
+	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
+	depends on ARM_64
+	---help---
+	  Support for implementations of the ARM System MMU architecture
+	  version 3.
+
+	  Say Y here if your system includes an IOMMU device implementing
+	  the ARM SMMUv3 architecture.
+
 config IPMMU_VMSA
 	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
 	depends on ARM_64
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index fcd918ea3e..c5fb3b58a5 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1,3 +1,4 @@
 obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
 obj-$(CONFIG_ARM_SMMU) += smmu.o
 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
+obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 55d1cba194..8f2337e7f2 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2,36 +2,280 @@
 /*
  * IOMMU API for ARM architected SMMUv3 implementations.
  *
- * Copyright (C) 2015 ARM Limited
+ * Based on Linux's SMMUv3 driver:
+ *    drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+ *    commit: 951cbbc386ff01b50da4f46387e994e81d9ab431
+ * and Xen's SMMU driver:
+ *    xen/drivers/passthrough/arm/smmu.c
  *
- * Author: Will Deacon <will.deacon@arm.com>
+ * Copyright (C) 2015 ARM Limited, Will Deacon <will.deacon@arm.com>
  *
- * This driver is powered by bad coffee and bombay mix.
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+#include <xen/acpi.h>
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/err.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+#include <xen/vmap.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+#include <asm/iommu_fwspec.h>
+
+/* Linux compatibility functions. */
+typedef paddr_t dma_addr_t;
+typedef unsigned int gfp_t;
+
+#define platform_device device
+
+#define GFP_KERNEL 0
+
+/* Alias to Xen device tree helpers */
+#define device_node dt_device_node
+#define of_phandle_args dt_phandle_args
+#define of_device_id dt_device_match
+#define of_match_node dt_match_node
+#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
+#define of_property_read_bool dt_property_read_bool
+#define of_parse_phandle_with_args dt_parse_phandle_with_args
+
+/* Alias to Xen lock functions */
+#define mutex spinlock
+#define mutex_init spin_lock_init
+#define mutex_lock spin_lock
+#define mutex_unlock spin_unlock
+
+/* Alias to Xen time functions */
+#define ktime_t s_time_t
+#define ktime_get()             (NOW())
+#define ktime_add_us(t,i)       (t + MICROSECS(i))
+#define ktime_compare(t,i)      (t > (i))
+
+/* Alias to Xen allocation helpers */
+#define kzalloc(size, flags)    _xzalloc(size, sizeof(void *))
+#define kfree xfree
+#define devm_kzalloc(dev, size, flags)  _xzalloc(size, sizeof(void *))
+
+/* Device logger functions */
+#define dev_name(dev) dt_node_full_name(dev->of_node)
+#define dev_dbg(dev, fmt, ...)      \
+    printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_notice(dev, fmt, ...)   \
+    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_warn(dev, fmt, ...)     \
+    printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_err(dev, fmt, ...)      \
+    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_info(dev, fmt, ...)     \
+    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_err_ratelimited(dev, fmt, ...)      \
+    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+
+/*
+ * Periodically poll an address, sleeping sleep_us microseconds between
+ * reads, until the condition is met or the timeout expires.
+ */
+#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
+({ \
+     s_time_t deadline = NOW() + MICROSECS(timeout_us); \
+     for (;;) { \
+        (val) = op(addr); \
+        if (cond) \
+            break; \
+        if (NOW() > deadline) { \
+            (val) = op(addr); \
+            break; \
+        } \
+        udelay(sleep_us); \
+     } \
+     (cond) ? 0 : -ETIMEDOUT; \
+})
+
+#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
+    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
+
+#define FIELD_PREP(_mask, _val)         \
+    (((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
+
+#define FIELD_GET(_mask, _reg)          \
+    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
+
+#define WRITE_ONCE(x, val)                  \
+do {                                        \
+    *(volatile typeof(x) *)&(x) = (val);    \
+} while (0)
+
+/* Xen: Stub out DMA domain related functions */
+#define iommu_get_dma_cookie(dom) 0
+#define iommu_put_dma_cookie(dom)
+
+/*
+ * Helpers for DMA allocation. Only the function name is reused to ease
+ * porting; these are not managed allocations.
  */
+static void *dmam_alloc_coherent(struct device *dev, size_t size,
+                                 paddr_t *dma_handle, gfp_t gfp)
+{
+    void *vaddr;
+    unsigned long alignment = size;
+
+    /*
+     * _xzalloc() requires a power-of-two alignment: (align & (align - 1))
+     * must be 0. Most allocations in the SMMU code pass a suitable
+     * power-of-two size; if not, print a warning and fall back to
+     * aligning to the size of a (void *).
+     */
+    if ( size & (size - 1) )
+    {
+        printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
+        alignment = sizeof(void *);
+    }
+
+    vaddr = _xzalloc(size, alignment);
+    if ( !vaddr )
+    {
+        printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
+        return NULL;
+    }
+
+    *dma_handle = virt_to_maddr(vaddr);
+
+    return vaddr;
+}
+
+/* Xen: Type definitions for iommu_domain */
+#define IOMMU_DOMAIN_UNMANAGED 0
+#define IOMMU_DOMAIN_DMA 1
+#define IOMMU_DOMAIN_IDENTITY 2
+
+/* Xen specific code. */
+struct iommu_domain {
+    /* Runtime SMMU configuration for this iommu_domain */
+    atomic_t ref;
+    /*
+     * Used to link iommu_domain contexts of the same domain.
+     * There is at least one per SMMU used by the domain.
+     */
+    struct list_head    list;
+};
+
+/* Describes information required for a Xen domain */
+struct arm_smmu_xen_domain {
+    spinlock_t      lock;
+
+    /* List of iommu domains associated with this domain */
+    struct list_head    contexts;
+};
+
+/*
+ * Information about each device stored in dev->archdata.iommu.
+ * dev->archdata.iommu holds the iommu_domain, i.e. the runtime
+ * configuration of the SMMU for this device.
+ */
+struct arm_smmu_xen_device {
+    struct iommu_domain *domain;
+};
+
+/* Keep a list of devices associated with this driver */
+static DEFINE_SPINLOCK(arm_smmu_devices_lock);
+static LIST_HEAD(arm_smmu_devices);
+
+
+static inline void *dev_iommu_priv_get(struct device *dev)
+{
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+    return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
+}
+
+static inline void dev_iommu_priv_set(struct device *dev, void *priv)
+{
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+    fwspec->iommu_priv = priv;
+}
+
+int dt_property_match_string(const struct dt_device_node *np,
+                             const char *propname, const char *string)
+{
+    const struct dt_property *dtprop = dt_find_property(np, propname, NULL);
+    size_t l;
+    int i;
+    const char *p, *end;
+
+    if ( !dtprop )
+        return -EINVAL;
+
+    if ( !dtprop->value )
+        return -ENODATA;
+
+    p = dtprop->value;
+    end = p + dtprop->length;
+
+    for ( i = 0; p < end; i++, p += l )
+    {
+        l = strnlen(p, end - p) + 1;
+
+        if ( p + l > end )
+            return -EILSEQ;
+
+        if ( strcmp(string, p) == 0 )
+            return i; /* Found it; return index */
+    }
+
+    return -ENODATA;
+}
+
+static int platform_get_irq_byname_optional(struct device *dev,
+                                            const char *name)
+{
+    int index, ret;
+    struct dt_device_node *np  = dev_to_dt(dev);
+
+    if ( unlikely(!name) )
+        return -EINVAL;
+
+    index = dt_property_match_string(np, "interrupt-names", name);
+    if ( index < 0 )
+    {
+        dev_info(dev, "IRQ %s not found\n", name);
+        return index;
+    }
 
-#include <linux/acpi.h>
-#include <linux/acpi_iort.h>
-#include <linux/bitfield.h>
-#include <linux/bitops.h>
-#include <linux/crash_dump.h>
-#include <linux/delay.h>
-#include <linux/dma-iommu.h>
-#include <linux/err.h>
-#include <linux/interrupt.h>
-#include <linux/io-pgtable.h>
-#include <linux/iommu.h>
-#include <linux/iopoll.h>
-#include <linux/module.h>
-#include <linux/msi.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_iommu.h>
-#include <linux/of_platform.h>
-#include <linux/pci.h>
-#include <linux/pci-ats.h>
-#include <linux/platform_device.h>
-
-#include <linux/amba/bus.h>
+    ret = platform_get_irq(np, index);
+    if ( ret < 0 )
+    {
+        dev_err(dev, "failed to get irq index %d\n", index);
+        return -ENODEV;
+    }
+
+    return ret;
+}
+
+/* Start of Linux SMMUv3 code */
 
 /* MMIO registers */
 #define ARM_SMMU_IDR0			0x0
@@ -507,6 +751,7 @@ struct arm_smmu_s2_cfg {
 	u16				vmid;
 	u64				vttbr;
 	u64				vtcr;
+	struct domain		*domain;
 };
 
 struct arm_smmu_strtab_cfg {
@@ -567,8 +812,13 @@ struct arm_smmu_device {
 
 	struct arm_smmu_strtab_cfg	strtab_cfg;
 
-	/* IOMMU core code handle */
-	struct iommu_device		iommu;
+	/* Need to keep a list of SMMU devices */
+	struct list_head		devices;
+
+	/* Tasklets for handling evts/faults and PCI page request IRQs */
+	struct tasklet		evtq_irq_tasklet;
+	struct tasklet		priq_irq_tasklet;
+	struct tasklet		combined_irq_tasklet;
 };
 
 /* SMMU private data for each master */
@@ -1110,7 +1360,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
 }
 
 /* IRQ and event handlers */
-static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+static void arm_smmu_evtq_thread(void *dev)
 {
 	int i;
 	struct arm_smmu_device *smmu = dev;
@@ -1140,7 +1390,6 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 	/* Sync our overflow flag, as we believe we're up to speed */
 	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
 		    Q_IDX(llq, llq->cons);
-	return IRQ_HANDLED;
 }
 
 static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
@@ -1181,7 +1430,7 @@ static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
 	}
 }
 
-static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+static void arm_smmu_priq_thread(void *dev)
 {
 	struct arm_smmu_device *smmu = dev;
 	struct arm_smmu_queue *q = &smmu->priq.q;
@@ -1200,12 +1449,12 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
 	llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
 		      Q_IDX(llq, llq->cons);
 	queue_sync_cons_out(q);
-	return IRQ_HANDLED;
 }
 
 static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
 
-static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
+static void arm_smmu_gerror_handler(int irq, void *dev,
+				struct cpu_user_regs *regs)
 {
 	u32 gerror, gerrorn, active;
 	struct arm_smmu_device *smmu = dev;
@@ -1215,7 +1464,7 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
 
 	active = gerror ^ gerrorn;
 	if (!(active & GERROR_ERR_MASK))
-		return IRQ_NONE; /* No errors pending */
+		return; /* No errors pending */
 
 	dev_warn(smmu->dev,
 		 "unexpected global error reported (0x%08x), this could be serious\n",
@@ -1248,26 +1497,42 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
 		arm_smmu_cmdq_skip_err(smmu);
 
 	writel(gerror, smmu->base + ARM_SMMU_GERRORN);
-	return IRQ_HANDLED;
 }
 
-static irqreturn_t arm_smmu_combined_irq_thread(int irq, void *dev)
+static void arm_smmu_combined_irq_handler(int irq, void *dev,
+				struct cpu_user_regs *regs)
+{
+	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
+
+	arm_smmu_gerror_handler(irq, dev, regs);
+
+	tasklet_schedule(&(smmu->combined_irq_tasklet));
+}
+
+static void arm_smmu_combined_irq_thread(void *dev)
 {
 	struct arm_smmu_device *smmu = dev;
 
-	arm_smmu_evtq_thread(irq, dev);
+	arm_smmu_evtq_thread(dev);
 	if (smmu->features & ARM_SMMU_FEAT_PRI)
-		arm_smmu_priq_thread(irq, dev);
-
-	return IRQ_HANDLED;
+		arm_smmu_priq_thread(dev);
 }
 
-static irqreturn_t arm_smmu_combined_irq_handler(int irq, void *dev)
+static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
+				struct cpu_user_regs *regs)
 {
-	arm_smmu_gerror_handler(irq, dev);
-	return IRQ_WAKE_THREAD;
+	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
+
+	tasklet_schedule(&(smmu->evtq_irq_tasklet));
 }
 
+static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
+				struct cpu_user_regs *regs)
+{
+	struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
+
+	tasklet_schedule(&(smmu->priq_irq_tasklet));
+}
 
 /* IO_PGTABLE API */
 static void arm_smmu_tlb_inv_context(void *cookie)
@@ -1354,27 +1619,69 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 }
 
 static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
-				       struct arm_smmu_master *master,
-				       struct io_pgtable_cfg *pgtbl_cfg)
+				       struct arm_smmu_master *master)
 {
 	int vmid;
+	u64 reg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 
+	/* VTCR */
+	reg = VTCR_RES1 | VTCR_SH0_IS | VTCR_IRGN0_WBWA | VTCR_ORGN0_WBWA;
+
+	switch (PAGE_SIZE) {
+	case SZ_4K:
+		reg |= VTCR_TG0_4K;
+		break;
+	case SZ_16K:
+		reg |= VTCR_TG0_16K;
+		break;
+	case SZ_64K:
+		reg |= VTCR_TG0_64K;
+		break;
+	}
+
+	switch (smmu->oas) {
+	case 32:
+		reg |= VTCR_PS(_AC(0x0,ULL));
+		break;
+	case 36:
+		reg |= VTCR_PS(_AC(0x1,ULL));
+		break;
+	case 40:
+		reg |= VTCR_PS(_AC(0x2,ULL));
+		break;
+	case 42:
+		reg |= VTCR_PS(_AC(0x3,ULL));
+		break;
+	case 44:
+		reg |= VTCR_PS(_AC(0x4,ULL));
+		break;
+	case 48:
+		reg |= VTCR_PS(_AC(0x5,ULL));
+		break;
+	case 52:
+		reg |= VTCR_PS(_AC(0x6,ULL));
+		break;
+	}
+
+	reg |= VTCR_T0SZ(64ULL - smmu->ias);
+	reg |= VTCR_SL0(0x2);
+	reg |= VTCR_VS;
+
+	cfg->vtcr   = reg;
+
 	vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
 	if (vmid < 0)
 		return vmid;
+	cfg->vmid  = (u16)vmid;
+
+	cfg->vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
+
+	printk(XENLOG_DEBUG
+		   "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
+		   cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
 
-	vtcr = &pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
-	cfg->vmid	= (u16)vmid;
-	cfg->vttbr	= pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
-	cfg->vtcr	= FIELD_PREP(STRTAB_STE_2_VTCR_S2T0SZ, vtcr->tsz) |
-			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SL0, vtcr->sl) |
-			  FIELD_PREP(STRTAB_STE_2_VTCR_S2IR0, vtcr->irgn) |
-			  FIELD_PREP(STRTAB_STE_2_VTCR_S2OR0, vtcr->orgn) |
-			  FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
-			  FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
-			  FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
 	return 0;
 }
 
@@ -1382,28 +1689,12 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 				    struct arm_smmu_master *master)
 {
 	int ret;
-	unsigned long ias, oas;
-	int (*finalise_stage_fn)(struct arm_smmu_domain *,
-				 struct arm_smmu_master *,
-				 struct io_pgtable_cfg *);
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
 
 	/* Restrict the stage to what we can actually support */
 	smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
 
-	switch (smmu_domain->stage) {
-	case ARM_SMMU_DOMAIN_NESTED:
-	case ARM_SMMU_DOMAIN_S2:
-		ias = smmu->ias;
-		oas = smmu->oas;
-		finalise_stage_fn = arm_smmu_domain_finalise_s2;
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	ret = finalise_stage_fn(smmu_domain, master, &pgtbl_cfg);
+	ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
 	if (ret < 0) {
 		return ret;
 	}
@@ -1553,7 +1844,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 		return -ENOMEM;
 	}
 
-	if (!WARN_ON(q->base_dma & (qsz - 1))) {
+	WARN_ON(q->base_dma & (qsz - 1));
+	if (likely(!(q->base_dma & (qsz - 1)))) {
 		dev_info(smmu->dev, "allocated %u entries for %s\n",
 			 1 << q->llq.max_n_shift, name);
 	}
@@ -1758,9 +2050,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 	/* Request interrupt lines */
 	irq = smmu->evtq.q.irq;
 	if (irq) {
-		ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
-						arm_smmu_evtq_thread,
-						IRQF_ONESHOT,
+		ret = request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
 						"arm-smmu-v3-evtq", smmu);
 		if (ret < 0)
 			dev_warn(smmu->dev, "failed to enable evtq irq\n");
@@ -1770,8 +2060,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 
 	irq = smmu->gerr_irq;
 	if (irq) {
-		ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
-				       0, "arm-smmu-v3-gerror", smmu);
+		ret = request_irq(irq, 0, arm_smmu_gerror_handler,
+						"arm-smmu-v3-gerror", smmu);
 		if (ret < 0)
 			dev_warn(smmu->dev, "failed to enable gerror irq\n");
 	} else {
@@ -1781,11 +2071,8 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
 	if (smmu->features & ARM_SMMU_FEAT_PRI) {
 		irq = smmu->priq.q.irq;
 		if (irq) {
-			ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
-							arm_smmu_priq_thread,
-							IRQF_ONESHOT,
-							"arm-smmu-v3-priq",
-							smmu);
+			ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
+							"arm-smmu-v3-priq", smmu);
 			if (ret < 0)
 				dev_warn(smmu->dev,
 					 "failed to enable priq irq\n");
@@ -1814,11 +2101,8 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
 		 * Cavium ThunderX2 implementation doesn't support unique irq
 		 * lines. Use a single irq line for all the SMMUv3 interrupts.
 		 */
-		ret = devm_request_threaded_irq(smmu->dev, irq,
-					arm_smmu_combined_irq_handler,
-					arm_smmu_combined_irq_thread,
-					IRQF_ONESHOT,
-					"arm-smmu-v3-combined-irq", smmu);
+		ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
+						"arm-smmu-v3-combined-irq", smmu);
 		if (ret < 0)
 			dev_warn(smmu->dev, "failed to enable combined irq\n");
 	} else
@@ -1857,7 +2141,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
 	if (reg & CR0_SMMUEN) {
 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
-		WARN_ON(is_kdump_kernel() && !disable_bypass);
+		WARN_ON(!disable_bypass);
 		arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
 	}
 
@@ -1952,8 +2236,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 		return ret;
 	}
 
-	if (is_kdump_kernel())
-		enables &= ~(CR0_EVTQEN | CR0_PRIQEN);
+	/* Initialize tasklets for threaded IRQs */
+	tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_thread, smmu);
+	tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_thread, smmu);
+	tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_thread,
+				 smmu);
 
 	/* Enable the SMMU interface, or ensure bypass */
 	if (!bypass || disable_bypass) {
@@ -2195,7 +2482,7 @@ static inline int arm_smmu_device_acpi_probe(struct platform_device *pdev,
 static int arm_smmu_device_dt_probe(struct platform_device *pdev,
 				    struct arm_smmu_device *smmu)
 {
-	struct device *dev = &pdev->dev;
+	struct device *dev = pdev;
 	u32 cells;
 	int ret = -EINVAL;
 
@@ -2219,130 +2506,449 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
 		return SZ_128K;
 }
 
+/* Start of Xen specific code. */
 static int arm_smmu_device_probe(struct platform_device *pdev)
 {
-	int irq, ret;
-	struct resource *res;
-	resource_size_t ioaddr;
-	struct arm_smmu_device *smmu;
-	struct device *dev = &pdev->dev;
-	bool bypass;
+    int irq, ret;
+    paddr_t ioaddr, iosize;
+    struct arm_smmu_device *smmu;
+    bool bypass;
+
+    smmu = devm_kzalloc(pdev, sizeof(*smmu), GFP_KERNEL);
+    if ( !smmu )
+    {
+        dev_err(pdev, "failed to allocate arm_smmu_device\n");
+        return -ENOMEM;
+    }
+    smmu->dev = pdev;
+
+    if ( pdev->of_node )
+    {
+        ret = arm_smmu_device_dt_probe(pdev, smmu);
+    }
+    else
+    {
+        ret = arm_smmu_device_acpi_probe(pdev, smmu);
+        if ( ret == -ENODEV )
+            return ret;
+    }
+
+    /* Set bypass mode according to firmware probing result */
+    bypass = !!ret;
+
+    /* Base address */
+    ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
+    if ( ret )
+        return -ENODEV;
+
+    if ( iosize < arm_smmu_resource_size(smmu) )
+    {
+        dev_err(pdev, "MMIO region too small (%"PRIpaddr")\n", iosize);
+        return -EINVAL;
+    }
+
+    /*
+     * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
+     * the PMCG registers which are reserved by the PMU driver.
+     */
+    smmu->base = ioremap_nocache(ioaddr, iosize);
+    if ( IS_ERR(smmu->base) )
+        return PTR_ERR(smmu->base);
+
+    if ( iosize > SZ_64K )
+    {
+        smmu->page1 = ioremap_nocache(ioaddr + SZ_64K, ARM_SMMU_REG_SZ);
+        if ( IS_ERR(smmu->page1) )
+            return PTR_ERR(smmu->page1);
+    }
+    else
+    {
+        smmu->page1 = smmu->base;
+    }
+
+    /* Interrupt lines */
+
+    irq = platform_get_irq_byname_optional(pdev, "combined");
+    if ( irq > 0 )
+        smmu->combined_irq = irq;
+    else
+    {
+        irq = platform_get_irq_byname_optional(pdev, "eventq");
+        if ( irq > 0 )
+            smmu->evtq.q.irq = irq;
+
+        irq = platform_get_irq_byname_optional(pdev, "priq");
+        if ( irq > 0 )
+            smmu->priq.q.irq = irq;
+
+        irq = platform_get_irq_byname_optional(pdev, "gerror");
+        if ( irq > 0 )
+            smmu->gerr_irq = irq;
+    }
+    /* Probe the h/w */
+    ret = arm_smmu_device_hw_probe(smmu);
+    if ( ret )
+        return ret;
+
+    /* Initialise in-memory data structures */
+    ret = arm_smmu_init_structures(smmu);
+    if ( ret )
+        return ret;
+
+    /* Reset the device */
+    ret = arm_smmu_device_reset(smmu, bypass);
+    if ( ret )
+        return ret;
+
+    /*
+     * Keep a list of all probed devices. This will be used to query
+     * the smmu devices based on the fwnode.
+     */
+    INIT_LIST_HEAD(&smmu->devices);
+
+    spin_lock(&arm_smmu_devices_lock);
+    list_add(&smmu->devices, &arm_smmu_devices);
+    spin_unlock(&arm_smmu_devices_lock);
+
+    return 0;
+}
+
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
+{
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+    struct iommu_domain *io_domain;
+
+    spin_lock(&xen_domain->lock);
+
+    list_for_each_entry( io_domain, &xen_domain->contexts, list )
+    {
+        /*
+         * Only invalidate the context when SMMU is present.
+         * This is because the context initialization is delayed
+         * until a master has been added.
+         */
+        if ( unlikely(!ACCESS_ONCE(to_smmu_domain(io_domain)->smmu)) )
+            continue;
+
+        arm_smmu_tlb_inv_context(to_smmu_domain(io_domain));
+    }
 
-	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
-	if (!smmu) {
-		dev_err(dev, "failed to allocate arm_smmu_device\n");
-		return -ENOMEM;
-	}
-	smmu->dev = dev;
+    spin_unlock(&xen_domain->lock);
+
+    return 0;
+}
+
+static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                             unsigned long page_count,
+                                             unsigned int flush_flags)
+{
+    return arm_smmu_iotlb_flush_all(d);
+}
 
-	if (dev->of_node) {
-		ret = arm_smmu_device_dt_probe(pdev, smmu);
-	} else {
-		ret = arm_smmu_device_acpi_probe(pdev, smmu);
-		if (ret == -ENODEV)
-			return ret;
-	}
+static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
+{
+    struct arm_smmu_device *smmu = NULL;
 
-	/* Set bypass mode according to firmware probing result */
-	bypass = !!ret;
+    spin_lock(&arm_smmu_devices_lock);
+    list_for_each_entry( smmu, &arm_smmu_devices, devices )
+    {
+        if ( smmu->dev == dev )
+        {
+            spin_unlock(&arm_smmu_devices_lock);
+            return smmu;
+        }
+    }
+    spin_unlock(&arm_smmu_devices_lock);
 
-	/* Base address */
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (resource_size(res) < arm_smmu_resource_size(smmu)) {
-		dev_err(dev, "MMIO region too small (%pr)\n", res);
-		return -EINVAL;
-	}
-	ioaddr = res->start;
+    return NULL;
+}
 
-	/*
-	 * Don't map the IMPLEMENTATION DEFINED regions, since they may contain
-	 * the PMCG registers which are reserved by the PMU driver.
-	 */
-	smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
-	if (IS_ERR(smmu->base))
-		return PTR_ERR(smmu->base);
-
-	if (arm_smmu_resource_size(smmu) > SZ_64K) {
-		smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
-					       ARM_SMMU_REG_SZ);
-		if (IS_ERR(smmu->page1))
-			return PTR_ERR(smmu->page1);
-	} else {
-		smmu->page1 = smmu->base;
-	}
+/* Probing and initialisation functions */
+static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
+                                                struct device *dev)
+{
+    struct iommu_domain *io_domain;
+    struct arm_smmu_domain *smmu_domain;
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+    struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
+
+    if ( !smmu )
+        return NULL;
 
-	/* Interrupt lines */
+    /*
+     * Loop through the &xen_domain->contexts to locate a context
+     * assigned to this SMMU
+     */
+    list_for_each_entry( io_domain, &xen_domain->contexts, list )
+    {
+        smmu_domain = to_smmu_domain(io_domain);
+        if ( smmu_domain->smmu == smmu )
+            return io_domain;
+    }
 
-	irq = platform_get_irq_byname_optional(pdev, "combined");
-	if (irq > 0)
-		smmu->combined_irq = irq;
-	else {
-		irq = platform_get_irq_byname_optional(pdev, "eventq");
-		if (irq > 0)
-			smmu->evtq.q.irq = irq;
+    return NULL;
+}
 
-		irq = platform_get_irq_byname_optional(pdev, "priq");
-		if (irq > 0)
-			smmu->priq.q.irq = irq;
+static void arm_smmu_destroy_iommu_domain(struct iommu_domain *io_domain)
+{
+    list_del(&io_domain->list);
+    arm_smmu_domain_free(io_domain);
+}
 
-		irq = platform_get_irq_byname_optional(pdev, "gerror");
-		if (irq > 0)
-			smmu->gerr_irq = irq;
-	}
-	/* Probe the h/w */
-	ret = arm_smmu_device_hw_probe(smmu);
-	if (ret)
-		return ret;
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+                               struct device *dev, u32 flag)
+{
+    int ret = 0;
+    struct iommu_domain *io_domain;
+    struct arm_smmu_domain *smmu_domain;
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
-	/* Initialise in-memory data structures */
-	ret = arm_smmu_init_structures(smmu);
-	if (ret)
-		return ret;
+    if ( !dev->archdata.iommu )
+    {
+        dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
+        if ( !dev->archdata.iommu )
+            return -ENOMEM;
+    }
 
-	/* Record our private device structure */
-	platform_set_drvdata(pdev, smmu);
+    spin_lock(&xen_domain->lock);
 
-	/* Reset the device */
-	ret = arm_smmu_device_reset(smmu, bypass);
-	if (ret)
-		return ret;
+    /*
+     * Check to see if an iommu_domain already exists for this xen domain
+     * under the same SMMU
+     */
+    io_domain = arm_smmu_get_domain(d, dev);
+    if ( !io_domain )
+    {
+        io_domain = arm_smmu_domain_alloc(IOMMU_DOMAIN_DMA);
+        if ( !io_domain )
+        {
+            ret = -ENOMEM;
+            goto out;
+        }
 
-	/* And we're up. Go go go! */
-	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
-				     "smmu3.%pa", &ioaddr);
-	if (ret)
-		return ret;
+        smmu_domain = to_smmu_domain(io_domain);
+        smmu_domain->s2_cfg.domain = d;
 
-	iommu_device_set_ops(&smmu->iommu, &arm_smmu_ops);
-	iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
+        /* Chain the new context to the domain */
+        list_add(&io_domain->list, &xen_domain->contexts);
 
-	ret = iommu_device_register(&smmu->iommu);
-	if (ret) {
-		dev_err(dev, "Failed to register iommu\n");
-		return ret;
-	}
+    }
+
+    ret = arm_smmu_attach_dev(io_domain, dev);
+    if ( ret )
+    {
+        if ( io_domain->ref.counter == 0 )
+            arm_smmu_destroy_iommu_domain(io_domain);
+    }
+    else
+    {
+        atomic_inc(&io_domain->ref);
+    }
 
-	return arm_smmu_set_bus_ops(&arm_smmu_ops);
+out:
+    spin_unlock(&xen_domain->lock);
+    return ret;
 }
 
-static int arm_smmu_device_remove(struct platform_device *pdev)
+static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
 {
-	struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
+    struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+    struct arm_smmu_domain *arm_smmu = to_smmu_domain(io_domain);
+    struct arm_smmu_master *master = dev_iommu_priv_get(dev);
 
-	arm_smmu_set_bus_ops(NULL);
-	iommu_device_unregister(&smmu->iommu);
-	iommu_device_sysfs_remove(&smmu->iommu);
-	arm_smmu_device_disable(smmu);
+    if ( !arm_smmu || arm_smmu->s2_cfg.domain != d )
+    {
+        dev_err(dev, "Not attached to domain %d\n", d->domain_id);
+        return -ESRCH;
+    }
 
-	return 0;
+    spin_lock(&xen_domain->lock);
+
+    arm_smmu_detach_dev(master);
+    atomic_dec(&io_domain->ref);
+
+    if ( io_domain->ref.counter == 0 )
+        arm_smmu_destroy_iommu_domain(io_domain);
+
+    spin_unlock(&xen_domain->lock);
+
+    return 0;
+}
+
+static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
+                                 u8 devfn,  struct device *dev)
+{
+    int ret = 0;
+
+    /* Don't allow remapping on other domain than hwdom */
+    if ( t && t != hardware_domain )
+        return -EPERM;
+
+    if ( t == s )
+        return 0;
+
+    ret = arm_smmu_deassign_dev(s, dev);
+    if ( ret )
+        return ret;
+
+    if ( t )
+    {
+        /* No flags are defined for ARM. */
+        ret = arm_smmu_assign_dev(t, devfn, dev, 0);
+        if ( ret )
+            return ret;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_iommu_xen_domain_init(struct domain *d)
+{
+    struct arm_smmu_xen_domain *xen_domain;
+
+    xen_domain = xzalloc(struct arm_smmu_xen_domain);
+    if ( !xen_domain )
+        return -ENOMEM;
+
+    spin_lock_init(&xen_domain->lock);
+    INIT_LIST_HEAD(&xen_domain->contexts);
+
+    dom_iommu(d)->arch.priv = xen_domain;
+
+    return 0;
+}
+
+static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
+{
+}
+
+static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
+{
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+
+    ASSERT(list_empty(&xen_domain->contexts));
+    xfree(xen_domain);
+}
+
+static int arm_smmu_dt_xlate(struct device *dev,
+                             const struct dt_phandle_args *args)
+{
+    int ret;
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+    ret = iommu_fwspec_add_ids(dev, args->args, 1);
+    if ( ret )
+        return ret;
+
+    if ( dt_device_is_protected(dev_to_dt(dev)) )
+    {
+        dev_err(dev, "Already added to SMMUv3\n");
+        return -EEXIST;
+    }
+
+    /* Let Xen know that the master device is protected by an IOMMU. */
+    dt_device_set_protected(dev_to_dt(dev));
+
+    dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
+            dev_name(fwspec->iommu_dev), fwspec->num_ids);
+
+    return 0;
 }
 
-static void arm_smmu_device_shutdown(struct platform_device *pdev)
+static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
-	arm_smmu_device_remove(pdev);
+    int i, ret;
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_master *master;
+    struct iommu_fwspec *fwspec;
+
+    fwspec = dev_iommu_fwspec_get(dev);
+    if ( !fwspec )
+        return -ENODEV;
+
+    smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
+    if ( !smmu )
+        return -ENODEV;
+
+    master = xzalloc(struct arm_smmu_master);
+    if ( !master )
+        return -ENOMEM;
+
+    master->dev = dev;
+    master->smmu = smmu;
+    master->sids = fwspec->ids;
+    master->num_sids = fwspec->num_ids;
+
+    dev_iommu_priv_set(dev, master);
+
+    /* Check the SIDs are in range of the SMMU and our stream table */
+    for ( i = 0; i < master->num_sids; i++ )
+    {
+        u32 sid = master->sids[i];
+
+        if ( !arm_smmu_sid_in_range(smmu, sid) )
+        {
+            ret = -ERANGE;
+            goto err_free_master;
+        }
+
+        /* Ensure l2 strtab is initialised */
+        if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
+        {
+            ret = arm_smmu_init_l2_strtab(smmu, sid);
+            if ( ret )
+                goto err_free_master;
+        }
+    }
+
+    return 0;
+
+err_free_master:
+    xfree(master);
+    dev_iommu_priv_set(dev, NULL);
+    return ret;
 }
 
-static const struct of_device_id arm_smmu_of_match[] = {
-	{ .compatible = "arm,smmu-v3", },
-	{ },
+static const struct iommu_ops arm_smmu_iommu_ops = {
+    .init = arm_smmu_iommu_xen_domain_init,
+    .hwdom_init = arm_smmu_iommu_hwdom_init,
+    .teardown = arm_smmu_iommu_xen_domain_teardown,
+    .iotlb_flush = arm_smmu_iotlb_flush,
+    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+    .assign_device = arm_smmu_assign_dev,
+    .reassign_device = arm_smmu_reassign_dev,
+    .map_page = arm_iommu_map_page,
+    .unmap_page = arm_iommu_unmap_page,
+    .dt_xlate = arm_smmu_dt_xlate,
+    .add_device = arm_smmu_add_device,
+};
+
+static const struct dt_device_match arm_smmu_of_match[] = {
+    { .compatible = "arm,smmu-v3", },
+    { },
 };
+
+static __init int arm_smmu_dt_init(struct dt_device_node *dev,
+                                   const void *data)
+{
+    int rc;
+
+    /*
+     * Even if the device can't be initialized, we don't want to
+     * give the SMMU device to dom0.
+     */
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    rc = arm_smmu_device_probe(dt_to_dev(dev));
+    if ( rc )
+        return rc;
+
+    iommu_set_ops(&arm_smmu_iommu_ops);
+    return 0;
+}
+
+DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
+    .dt_match = arm_smmu_of_match,
+    .init = arm_smmu_dt_init,
+DT_DEVICE_END
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:03:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:03:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38912.71695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKgO-0005LG-Ol; Thu, 26 Nov 2020 17:03:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38912.71695; Thu, 26 Nov 2020 17:03:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKgO-0005L9-Kd; Thu, 26 Nov 2020 17:03:48 +0000
Received: by outflank-mailman (input) for mailman id 38912;
 Thu, 26 Nov 2020 17:03:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EevG=FA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kiKgN-0005Ko-TA
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:03:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93a60847-34b3-4299-a7bb-62aa68d6b0e4;
 Thu, 26 Nov 2020 17:03:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3EC3EAC65;
 Thu, 26 Nov 2020 17:03:45 +0000 (UTC)
X-Inumbo-ID: 93a60847-34b3-4299-a7bb-62aa68d6b0e4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606410225; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m1ysc7Ibeo17wE8VMnjGvvKlHGmYQEQlgVtwhWD/yTA=;
	b=j6H1kfZ8ETGSTP2KFkhwMyMe9Kplj3X7moMODC3OnHDFN1+gfLFrdbCph1t6UP1eZx5PjX
	d71aZDHGgZ6s89LDt04gEyingUMD/IrdaOVl0WBOQqtS8EjSAuynRPLUGuFLMJevDhrvA3
	ujKKtRuefeEFisqrGAbA/Di/Y5c80t0=
Subject: Re: [PATCH v2 02/17] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <255f466c-3c95-88c5-3e55-0f04c9ae1b12@suse.com>
 <23acd443-348c-5ef9-0fb5-880e06cc9a2d@suse.com>
 <0c40a6f6-af8c-1040-f249-36752df3a1f1@xen.org>
 <a752cdb9-4609-2a61-b657-c17cbe4febb8@suse.com>
 <alpine.DEB.2.21.2011251122200.7979@sstabellini-ThinkPad-T480s>
 <2aeba247-8b36-7b75-dc17-b901bf746f87@suse.com>
 <8e86bed4-b6fa-ed81-8ca8-41e727c56cb1@xen.org>
 <150fdd8b-f4c5-5ee8-f1e5-b3fafb4eb3ca@suse.com>
 <19142200-cddb-0d54-4c15-5eb7668481bc@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <82dfedbb-a5ab-45b6-8b38-7b00ba1335f3@suse.com>
Date: Thu, 26 Nov 2020 18:03:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <19142200-cddb-0d54-4c15-5eb7668481bc@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.11.2020 16:53, Julien Grall wrote:
> On 26/11/2020 15:18, Jan Beulich wrote:
>> On 26.11.2020 14:22, Julien Grall wrote:
>>> On 26/11/2020 11:34, Jan Beulich wrote:
>>>> On 25.11.2020 20:48, Stefano Stabellini wrote:
>>>>> On Wed, 25 Nov 2020, Jan Beulich wrote:
>>>>>> On 25.11.2020 13:15, Julien Grall wrote:
>>>>>>> On 23/11/2020 14:23, Jan Beulich wrote:
>>>>>>>> I'm unconvinced of the mentioning of "physically contiguous" in the
>>>>>>>> comment at the top of the new header: I don't think xmalloc() provides
>>>>>>>> such a guarantee. Any use assuming so would look (latently) broken to
>>>>>>>> me.
>>>>>>>
>>>>>>> I haven't had the chance to reply to the first version about this. So I
>>>>>>> will reply here to avoid confusion.
>>>>>>>
>>>>>>> I can at least spot one user in Arm that would use xmalloc() that way
>>>>>>> (see the allocation of itt_addr in arch/arm/gic-v3-its.c).
>>>>>>
>>>>>> And I surely wouldn't have spotted this, even if I had tried
>>>>>> to find "offenders", i.e. as said before not wanting to alter
>>>>>> the behavior of existing code (beyond the explicit changes
>>>>>> done here) was ...
>>>>>>
>>>>>>> AFAIK, the memory is for the sole purpose of the ITS and should not be
>>>>>>> accessed by Xen. So I think we can replace by a new version of
>>>>>>> alloc_domheap_pages().
>>>>>>>
>>>>>>> However, I still question the usefulness of introducing yet another way
>>>>>>> to allocate memory (we already have alloc_xenheap_pages(), xmalloc(),
>>>>>>> alloc_domheap_pages(), vmap()) if you think users cannot rely on
>>>>>>> xmalloc() to allocate memory physically contiguous.
>>>>>>
>>>>>> ... the reason to introduce a separate new interface. Plus of
>>>>>> course this parallels what Linux has.
>>>>>>
>>>>>>> It definitely makes more difficult to figure out when to use xmalloc()
>>>>>>> vs xvalloc().
>>>>>>
>>>>>> I don't see the difficulty:
>>>>>> - if you need physically contiguous memory, use alloc_xen*_pages(),
>>>>>> - if you know the allocation size is always no more than a page,
>>>>>>     use xmalloc(),
>>>
>>> If that's the intention, then may I ask why xmalloc() is able to
>>> support multiple pages allocation?
>>
>> Because support for this pre-dates even the introduction of vmalloc()?
> 
> Right, so the code should disappear if we want people to avoid making 
> that assumption with xmalloc().

You mean once all users of xmalloc() needing more than a page have
disappeared? We can't drop that code any earlier.

>>> Your assumption is Xen will always be built with the same page size
>>> across all the architecture. While Xen only works with 4KB pages today,
>>> Arm can support 16KB and 64KB. I have long-term plans to add support for them.
>>>
>>> So I don't think you can use the page size as a way to distinguish which
>>> one to use.
>>
>> Then let's abstract this one level further:
>>
>> - if you know the allocation size is always no more than the smallest
>>    possible page size, use xmalloc()
> 
> So basically, xmalloc() is becoming pointless when xvmalloc() can do the 
> same for you (as it would call xmalloc()).

At the expense of one extra call layer. I still think directly
calling xmalloc() for small blocks of memory is a sensible
thing to do. So no, I don't view it as becoming pointless.

>>>>> What if you need memory physically contiguous but not necessarily an
>>>>> order of pages, such as for instance 5200 bytes?
>>>>
>>>> This case is, I think, rare enough (in particular in Xen) that the
>>>> waste of space can be tolerated imo.
>>>
>>> This is quite a departure from:
>>>
>>> commit b829a0ff5794ee5b0f96a0c872f6a4ed7b1007c7
>>> Author: Jan Beulich <jbeulich@suse.com>
>>> Date:   Thu Oct 13 10:03:43 2011 +0200
>>>
>>>       xmalloc: return unused full pages on multi-page allocations
>>>
>>>       Certain (boot time) allocations are relatively large (particularly when
>>>       building with high NR_CPUS), but can also happen to be pretty far away
>>>       from a power-of-two size. Utilize the page allocator's (other than
>>>       Linux'es) capability of allowing to return space from higher-order
>>>       allocations in smaller pieces to return the unused parts immediately.
>>>
>>>       Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>       Acked-by: Keir Fraser <keir@xen.org>
>>>
>>> I am curious to know what changed...
>>
>> Nothing. But even if something had, citing a 9 year old commit is
>> not likely to point out any actual contradiction.
> 
> You can't say it is tolerable when in the past you suggested that it 
> was not (otherwise why would you hand back memory?).
> 
> Therefore I would like to understand why in the past this was not 
> tolerable but now it is... What changed?

Oh, so it looks like I misunderstood, taking together this and ...

>>> Anyway, what you wrote is very server focused. On Arm, we have plan to
>>> run Xen on smaller hardware where wasting memory mean less usable RAM
>>> for guests.
>>>
>>> The problem with using an order is the bigger the order is, the more
>>> chance you will waste space...
>>>
>>> Allocating more than a page is fairly common on Arm, so we really want
>>> to reduce the amount of memory wasted.
>>
>> The amount of space wasted is the same - the tail of the trailing
>> page. I'm afraid I don't see what your point is.
> 
> There would obviously be no difference if one wants to allocate more 
> than one page but less than 2 pages....
> 
> But that was not my point. My point is when you allocate with an order 
> greater or equal to 2, then you will start wasting memory when not using 
> xmalloc().
> 
> For instance, if you want to allocate 20kB then you will need to 
> allocate 32kB and lose 12kB. To make it sound bigger, you could replace 
> kB by MB.

... this. You're asking why the order based alloc_*heap_pages()
induced waste is acceptable? It's not. But that's not the point
here. vmalloc() doesn't incur such a waste, and as was said
earlier perhaps we should gain a count-based page allocator
interface (which could simply be the code from xmalloc() broken
out). But all of this is mostly unrelated to the change here,
not the least because call sites caring to avoid the waste can
easily return what they don't need. It is again a property of
the current xmalloc() implementation, but not a guarantee, that
such returning happens.

>>> However, you seem to be the one objecting on the behavior of xmalloc().
>>
>> I don't think I'm objecting to any existing behavior. What I did
>> is state my view on (non-)guarantees by xmalloc(). And I've
>> already said - maybe I'm wrong and, like Linux'es kmalloc(),
>> there is a guarantee of it producing contiguous memory, and I
>> merely didn't find where that's said.
> 
> I can find quite a few places in Linux that use kmalloc() with size that 
> are bigger than a page size.
> 
> That's enough for me to think that while this may not have been the 
> original intention, people are using it like that (same in Xen). So we 
> can't dismiss it.

Once there are such uses, replacing them takes (often a lot of)
time. So unless it's clear they're intentionally using kmalloc()
despite the big size, I'd generally suspect people simply didn't
know or care to use vmalloc(). It is also quite common for
allocation sizes to cross the boundary of page size without the
person doing the "offending" change actually noticing.

>>> I can't speak for Stefano, but I don't object on following Linux.
>>> Instead I am objecting on the growing number of way to allocate memory
>>> in Xen and that differ depending on the system_state.
>>
>> But as per above the addition only brings us on par with Linux.
>> There, kvmalloc_node() is simply a wrapper (with different logic
>> when to try what) around kmalloc_node() and __vmalloc_node(). No
>> different (in the basic idea) from what I'm doing here.
> 
> There are at least two more in Xen so far:
> 
>   - alloc_domheap_pages()
>   - alloc_xenheap_pages()

Which have an equivalent in Linux, too.

> I still maintain that the way you suggest to use each of them is not 
> clear. In particular, that xmalloc() doesn't guarantee physically 
> contiguous memory for allocations larger than a page.
> 
> In summary, I would be happy with the introduction of xvmalloc() as long 
> as we guarantee that xmalloc() is allocating physically contiguous memory.

So the main reason why I think xmalloc() and kmalloc() can be
considered different in this regard is that we have basically no
device drivers in Xen, and hence there's pretty limited need for
physically contiguous memory (outside of cases going straight to
the page allocator anyway).

Yet once again - that's my view on it, and the original intentions
may have been different.

All I want is code that allows me to avoid having to decide
explicitly, at call sites, whether I want xmalloc() or vmalloc().
The grant table code changed here is one good example. The uses
of the new functions in subsequent patches are further ones. I
do specifically _not_ care to replace any existing functionality.
I only want these library-like helpers. Optimization and code
folding could come later.

> Users that don't care about this guarantee would have to be switched 
> to use xvmalloc() (this doesn't need to be done here).

And that's what I dislike: I don't see any "have to" here. I view
"have to" as applicable only to runtime allocations which may end
up being non-order-0 once they hit the underlying page allocator.
Don't forget that vmalloc(), besides the memory to be allocated,
also consumes VA space. xmalloc() doesn't, as long as we have a
directmap. Hence at least during boot there may be good reasons
why someone prefers to use xmalloc() even for larger bodies of
memory.
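The dispatch being discussed can be sketched, purely illustratively, as a
kvmalloc()-style wrapper. All names and the threshold below are hypothetical,
and malloc()/free() merely stand in for the Xen/Linux allocators; this is NOT
the proposed xvmalloc() implementation:

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096 /* assumed page size for the sketch */

/* Hypothetical xvmalloc()-like dispatcher: small requests take the
 * physically contiguous (xmalloc-like) path; larger ones would fall
 * back to a vmalloc-like path, which only guarantees virtual
 * contiguity and additionally consumes VA space. */
static void *xvmalloc_sketch(size_t size)
{
    if (size <= PAGE_SIZE)
        /* Small: xmalloc-like, physically contiguous path. */
        return malloc(size);
    /* Large: a real implementation would call the vmalloc-like
     * allocator here; malloc() is only a stand-in. */
    return malloc(size);
}

/* A real xvfree() would need to know (or detect) which allocator
 * produced the pointer; with the stand-in, free() handles both. */
static void xvfree_sketch(void *p, size_t size)
{
    (void)size;
    free(p);
}
```

The point of such a wrapper is exactly the one made above: the call site no
longer encodes the xmalloc()-vs-vmalloc() decision.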

Jan


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:06:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38947.71710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKiV-0005eA-8x; Thu, 26 Nov 2020 17:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38947.71710; Thu, 26 Nov 2020 17:05:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKiV-0005e3-5Y; Thu, 26 Nov 2020 17:05:59 +0000
Received: by outflank-mailman (input) for mailman id 38947;
 Thu, 26 Nov 2020 17:05:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rLOc=FA=gmail.com=miguel.ojeda.sandonis@srs-us1.protection.inumbo.net>)
 id 1kiKiU-0005dw-6f
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:05:58 +0000
Received: from mail-yb1-xb33.google.com (unknown [2607:f8b0:4864:20::b33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 393807ce-76d3-4c11-9174-3fd14fefab6b;
 Thu, 26 Nov 2020 17:05:57 +0000 (UTC)
Received: by mail-yb1-xb33.google.com with SMTP id r127so2177788yba.10
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 09:05:57 -0800 (PST)
X-Inumbo-ID: 393807ce-76d3-4c11-9174-3fd14fefab6b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=O/jaIJrbif54isUECHds/J8Ujq0NdoNxUTCCwJWwQ80=;
        b=YymqsvqKytU8xghgrWvO9q2MBm/qo/QfMLfSKpWw4/r/4uRHhZxuPq9ek+nW3ctzYi
         vs3zi1yLEuFoGUDgrnF15wpJAK9RqJ1aO9sTXa0PjOzjLe+d4O/pTFiiO013Mx2qKoL4
         SkUMUUIjag5bjqwHGLYc9f20TwOaOWbGTale/6i8ahvnaZymX26ArF3ReuQKFVdof495
         iKDDkr6mQvIIbdynQAd2wxd2GUAuBku/vDm4jIdtOw4Ph7wkJ5rD2TxigEVm3yQ5C9tu
         pJCfZKqtAfLebSyeKrm6VDCN2G4oKvj4TFU9uFOnavPNoLjuB5eIt+ODZ6yyvDDDthAD
         GeSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=O/jaIJrbif54isUECHds/J8Ujq0NdoNxUTCCwJWwQ80=;
        b=sEyNeJzILhcBLZxr08GtnUPi+i/9ZUsK7QfdoJunYXm+MmZh8iTc9RbA7n2P/FCD5+
         V4FRtR/n9Bz6h5GtCqtVkXXjpAfx5kLEuwFqeQHd18eb4dmwMCv8kx+qz3NyNb7DfEtV
         H4dooayvHMDTYfsv/pdcxgD3ZNnt6lPyd9RQZnvKPs8NdI31rrj7GAVb2sjXzX06aIHf
         gLkLuvN1LtuTLg7t0PB1x2Tlg4MJ2XqDdKmB4fET/mIt5QdGaWiZk6VE837y1PJ5rO+j
         O5i5amncRdvx5VkPhRWuidfvTsg0kQ4w+CLIhadtMEdfP02jOVQwBYQjI8q7rMY9pZDu
         AQNw==
X-Gm-Message-State: AOAM531DolWKmIMy5YoYvuXkCRgkbe2R0YelJ6b+JbR6MVrAWWA1vJP3
	xEi8GtP3QBAU2YTm9ACpNnXuKyLpQyZfvwFSVHc=
X-Google-Smtp-Source: ABdhPJxMy5ncXEZ6TSWkZ0cAXTnkQ7iw+jjdg+cNNUWTZPlZcmFMrQQcJd2JAoeBontKFrtwmzLazAacI3fdsjx9xvQ=
X-Received: by 2002:a5b:40e:: with SMTP id m14mr4835621ybp.33.1606410357153;
 Thu, 26 Nov 2020 09:05:57 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
 <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
 <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com>
 <44005bde-f6d4-5eaa-39b8-1a5efeedb2d3@gmail.com> <CANiq72nobq=ptWK-qWxU91JHqkKhMcRtJNnw2XJd5-vSJWZd8Q@mail.gmail.com>
 <CAMuHMdV5kOakvZJMWLxbpigFPS+Xuw6DVYsWCWZy7wGsv3idcw@mail.gmail.com>
In-Reply-To: <CAMuHMdV5kOakvZJMWLxbpigFPS+Xuw6DVYsWCWZy7wGsv3idcw@mail.gmail.com>
From: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Date: Thu, 26 Nov 2020 18:05:45 +0100
Message-ID: <CANiq72=n4rVvmKt0RCb5aOfQydA8bgDxfntRLDieV8Q2efP8Zg@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Edward Cree <ecree.xilinx@gmail.com>, 
	ALSA Development Mailing List <alsa-devel@alsa-project.org>, linux-atm-general@lists.sourceforge.net, 
	reiserfs-devel@vger.kernel.org, linux-iio@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, 
	Linux Fbdev development list <linux-fbdev@vger.kernel.org>, dri-devel <dri-devel@lists.freedesktop.org>, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, linux-ide@vger.kernel.org, 
	dm-devel@redhat.com, keyrings@vger.kernel.org, 
	MTD Maling List <linux-mtd@lists.infradead.org>, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, samba-technical@lists.samba.org, 
	linux-i3c@lists.infradead.org, linux1394-devel@lists.sourceforge.net, 
	linux-afs@lists.infradead.org, usb-storage@lists.one-eyed-alien.net, 
	Lars Ellenberg <drbd-dev@lists.linbit.com>, driverdevel <devel@driverdev.osuosl.org>, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	Nick Desaulniers <ndesaulniers@google.com>, scsi <linux-scsi@vger.kernel.org>, 
	Nathan Chancellor <natechancellor@gmail.com>, linux-rdma <linux-rdma@vger.kernel.org>, 
	oss-drivers@netronome.com, bridge@lists.linux-foundation.org, 
	linux-security-module <linux-security-module@vger.kernel.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, linux-stm32@st-md-mailman.stormreply.com, 
	cluster-devel@redhat.com, ACPI Devel Maling List <linux-acpi@vger.kernel.org>, 
	coreteam@netfilter.org, intel-wired-lan@lists.osuosl.org, 
	linux-input <linux-input@vger.kernel.org>, Miguel Ojeda <ojeda@kernel.org>, 
	Jakub Kicinski <kuba@kernel.org>, Ext4 Developers List <linux-ext4@vger.kernel.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, Kees Cook <keescook@chromium.org>, 
	selinux@vger.kernel.org, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>, linux-geode@lists.infradead.org, 
	linux-can@vger.kernel.org, linux-block@vger.kernel.org, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, op-tee@lists.trustedfirmware.org, 
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, linux-hams@vger.kernel.org, 
	ceph-devel <ceph-devel@vger.kernel.org>, virtualization@lists.linux-foundation.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	Linux Watchdog Mailing List <linux-watchdog@vger.kernel.org>, 
	"open list:NFS, SUNRPC, AND..." <linux-nfs@vger.kernel.org>, GR-Linux-NIC-Dev@marvell.com, 
	tipc-discussion@lists.sourceforge.net, Linux-MM <linux-mm@kvack.org>, 
	Network Development <netdev@vger.kernel.org>, linux-decnet-user@lists.sourceforge.net, 
	Linux MMC List <linux-mmc@vger.kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, Linux-Renesas <linux-renesas-soc@vger.kernel.org>, 
	linux-sctp@vger.kernel.org, USB list <linux-usb@vger.kernel.org>, 
	NetFilter <netfilter-devel@vger.kernel.org>, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	Joe Perches <joe@perches.com>, linux-integrity <linux-integrity@vger.kernel.org>, 
	target-devel <target-devel@vger.kernel.org>, linux-hardening@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Nov 26, 2020 at 4:28 PM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> The maintainer is not necessarily the owner/author of the code, and
> thus may not know the intent of the code.

Agreed, I was not blaming maintainers -- just trying to point out that
the problem is there :-)

In those cases, it is still very useful: we add the `fallthrough` and
a comment saying `FIXME: fallthrough intended? Figure this out...`.
Thus a previously unknown unknown is now a known unknown. And no new
unknown unknowns will be introduced since we enabled the warning
globally.
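The annotation pattern described above might look like this; the helper macro
mirrors the kernel's `fallthrough` definition, while the function and values
are made up for illustration:

```c
#include <assert.h>

/* Portable-ish fallthrough annotation: use the statement attribute
 * where the compiler supports it, else a harmless null statement.
 * (This mirrors, but is not, the kernel's own definition.) */
#if defined(__has_attribute)
# if __has_attribute(__fallthrough__)
#  define fallthrough __attribute__((__fallthrough__))
# endif
#endif
#ifndef fallthrough
# define fallthrough do {} while (0)
#endif

static int classify(int c)
{
    int score = 0;

    switch (c) {
    case 0:
        score += 1;
        /* FIXME: fallthrough intended? Figure this out... */
        fallthrough;
    case 1:
        score += 10;
        break;
    default:
        score = -1;
        break;
    }

    return score;
}
```

With -Wimplicit-fallthrough enabled globally, any case that lacks either a
`break` or this annotation is flagged, so new unannotated fall-throughs can't
slip in.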

> BTW, you cannot mindlessly fix the latter, as you cannot know if
> "(a == b)" or "((a = b))" was intended, without understanding the code
> (and the (possibly unavailable) data sheet, and the hardware, ...).

That's right, I was referring to the cases where the compiler saves
someone time by catching a typo they just made.
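To make the ambiguity concrete (function and values invented for the
example): the compiler can flag the suspicious form, but only the author
knows which behavior was meant.

```c
#include <assert.h>

/* With -Wparentheses, `if (a = b)` draws a "suggest parentheses around
 * assignment used as truth value" warning, because `a == b` may have
 * been intended. The doubled parentheses below assert that the
 * assignment is deliberate, silencing the warning. */
static int latch_if_set(int a, int b)
{
    if ((a = b))
        return a;   /* b was non-zero: return the latched value */
    return -1;      /* b was zero */
}
```

Mechanically "fixing" such a site in either direction without understanding
the code is exactly the risk Geert describes.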

Cheers,
Miguel


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:20:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:20:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38967.71724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKwt-0007Vj-Ll; Thu, 26 Nov 2020 17:20:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38967.71724; Thu, 26 Nov 2020 17:20:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiKwt-0007Vc-IT; Thu, 26 Nov 2020 17:20:51 +0000
Received: by outflank-mailman (input) for mailman id 38967;
 Thu, 26 Nov 2020 17:20:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=smDs=FA=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kiKws-0007VW-EC
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:20:50 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e960c967-7462-4c05-a27d-1635be85e312;
 Thu, 26 Nov 2020 17:20:47 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AQHKeOf012144
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Thu, 26 Nov 2020 18:20:41 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id F12EC2E9CAC; Thu, 26 Nov 2020 18:20:34 +0100 (MET)
X-Inumbo-ID: e960c967-7462-4c05-a27d-1635be85e312
Date: Thu, 26 Nov 2020 18:20:34 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201126172034.GA7642@antioche.eu.org>
References: <6d6a77cf-58de-4e4d-ed75-e9365be060b7@suse.com>
 <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="uAKRQypu60I7Lcqm"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Thu, 26 Nov 2020 18:20:41 +0100 (MET)


--uAKRQypu60I7Lcqm
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Thu, Nov 26, 2020 at 04:09:37PM +0100, Roger Pau Monné wrote:
> > 
> > Oh, that's actually very useful. The interrupt is being constantly
> > injected from the hardware and received by Xen, it's just not then
> > injected into dom0 - that's the bit we are missing. Let me look into
> > adding some more debug to that path, hopefully it will tell us where
> > things are getting blocked.
> 
> So I have yet one more patch for you to try, this one has more
> debugging and a slight change in the emulated IO-APIC behavior.
> Depending on the result I might have to find a way to mask the
> interrupt so it doesn't spam the whole buffer in order for us to see
> exactly what triggered this scenario you are in.

OK, here it is:
http://www-soc.lip6.fr/~bouyer/xen-log9.txt

I had to restart from a clean source tree to apply this patch, so to make
sure we're in sync I attached the diff from my sources.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

--uAKRQypu60I7Lcqm
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="xen.diff"

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 38ac5fb6c7..9db3dcc957 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * to know if the GSI is pending or not.
      */
     spin_lock(&d->arch.hvm.irq_lock);
+    if ( gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
+                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
+
     if ( trig == VIOAPIC_EDGE_TRIG || !hvm_irq->gsi_assert_count[gsi] )
     {
         if ( trig == VIOAPIC_LEVEL_TRIG )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..aeff9c7687 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -257,7 +257,17 @@ static void vioapic_write_redirent(
         vlapic_adjust_i8259_target(d);
     }
     else if ( ent.fields.trig_mode == VIOAPIC_EDGE_TRIG )
+    {
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC set edge trigger irq %u\n", gsi);
         pent->fields.remote_irr = 0;
+        if ( is_iommu_enabled(d) )
+        {
+            spin_unlock(&d->arch.hvm.irq_lock);
+            hvm_dpci_eoi(d, gsi, pent);
+            spin_lock(&d->arch.hvm.irq_lock);
+        }
+    }
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
               hvm_irq->gsi_assert_count[idx] )
@@ -278,6 +288,10 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u\n", gsi);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +299,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +422,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                              irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e66fa99ec7..c28025657d 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..cc5a0e2a21 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,10 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u\n", desc->irq);
+
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..86c2db9da0 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -174,7 +174,6 @@ static void pt_irq_time_out(void *data)
          * In the identity mapped case the EOI can also be done now, this way
          * the iteration over the list of domain pirqs is avoided.
          */
-        hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
         spin_unlock(&irq_map->dom->event_lock);
@@ -828,6 +827,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
+    if ( pirq->pirq == TRACK_IRQ )
+        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);
+
     pirq_dpci->masked = 1;
     raise_softirq_for(pirq_dpci);
     return 1;
@@ -1010,6 +1012,9 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u\n", guest_gsi);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..3eb6102a61 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -370,7 +370,7 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
 
             entry->updated = false;
         }
-        else
+        else if ( msix->enabled )
             vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
 
         break;
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..871810134f 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 34
+
 #endif /* __XEN_IRQ_H__ */

--uAKRQypu60I7Lcqm--


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:38:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38976.71737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiLE1-0000Hv-Bq; Thu, 26 Nov 2020 17:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38976.71737; Thu, 26 Nov 2020 17:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiLE1-0000Ho-8m; Thu, 26 Nov 2020 17:38:33 +0000
Received: by outflank-mailman (input) for mailman id 38976;
 Thu, 26 Nov 2020 17:38:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OMov=FA=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kiLE0-0000Hj-CP
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:38:32 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a3e81e6-d144-4013-8191-44535450bd06;
 Thu, 26 Nov 2020 17:38:31 +0000 (UTC)
X-Inumbo-ID: 2a3e81e6-d144-4013-8191-44535450bd06
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606412310;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=bXQzNtHqJ2Yw8039qwF6PExFC7hXWX7QRQoV51ZIeHI=;
  b=JB1qwYe3nnUAGHgc+A4yDDQW8ctiiQzPIFFjUMs8is6rBAMOH4Ch6HnH
   fn38gCJR3cd3/CWAUb/VSH4/9670Dtk/5Lsfx0vKMqw6lzQl6vNs2epDO
   T//hjty5dUZVKnxJV+yWUKRVuRn3J/aNn9tAk4lO33UCv01F+iQmMetKH
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: KFVGLLdvf8DEeihaDbUXddHR0GUT3gEY/wdoGM652HXR/2rCOBM0eF25OhgsWDNyfp9MYeek/X
 0WqeC80WMCfN5zCQyhxAe/ClDmUBJc/JGrvvkpaaf8+FQl/NGDNBjFJ7foXBTOmAsMLwcty1Gm
 HyVsqyBsVA4G3HPVyHI1XZR324jHdTkA8nGTMYXZrCt7x0UIlPuH696eY5fnGbQaPvhMOZ9Aoo
 M8iqvI23iZcXGqxutTGvgaJUYGdwZdT71+kxty6g9x2cBmotdnH2k6/JUn+s7MnnpHKhmbihk2
 YHU=
X-SBRS: None
X-MesageID: 33168172
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,372,1599537600"; 
   d="scan'208";a="33168172"
Date: Thu, 26 Nov 2020 17:38:24 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>
CC: <qemu-devel@nongnu.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>, "Igor
 Mammedov" <imammedo@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>, Wainer dos Santos Moschetta
	<wainersm@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>, Thomas Huth
	<thuth@redhat.com>, Eduardo Habkost <ehabkost@redhat.com>, Alex
 =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>, Aleksandar Rikalo
	<aleksandar.rikalo@syrmia.com>, Richard Henderson <rth@twiddle.net>, Fam
 Zheng <fam@euphon.net>, "Daniel P . Berrange" <berrange@redhat.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH-for-6.0 v4 15/17] gitlab-ci: Add test for Xen (on CentOS
 7)
Message-ID: <20201126173824.GB2098@perard.uk.xensource.com>
References: <20201108204535.2319870-1-philmd@redhat.com>
 <20201108204535.2319870-16-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201108204535.2319870-16-philmd@redhat.com>

On Sun, Nov 08, 2020 at 09:45:33PM +0100, Philippe Mathieu-Daudé wrote:
> Xen packages are available in CentOS 7, but have been
> removed from CentOS 8. Use the CentOS 7 container.

Technically, Xen has never been in CentOS 8; I'm working on it, slowly.

> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> ---
>  .gitlab-ci.yml | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> index 2f0da7b3dc1..8e15266c277 100644
> --- a/.gitlab-ci.yml
> +++ b/.gitlab-ci.yml
> @@ -557,6 +557,27 @@ check-crypto-only-gnutls:
>      IMAGE: centos7
>      MAKE_CHECK_ARGS: check
>  
> +build-xen-centos:
> +  <<: *native_build_job_definition
> +  variables:
> +    IMAGE: centos7
> +    TARGETS: i386-softmmu x86_64-softmmu
> +    CONFIGURE_ARGS: --enable-xen
> +    MAKE_CHECK_ARGS: check-build
> +  artifacts:
> +    paths:
> +      - build
> +
> +check-xen-centos:
> +  <<: *native_test_job_definition
> +  needs:
> +    - job: build-xen-centos
> +      artifacts: true
> +  variables:
> +    IMAGE: centos7
> +    MAKE_CHECK_ARGS: check

Is `make check` going to do something useful with the Xen support? Or is
it going to need more work in order to test the Xen support of QEMU?
(Like starting an actual Xen guest.)

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 17:46:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 17:46:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38984.71749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiLLT-0001C6-6B; Thu, 26 Nov 2020 17:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38984.71749; Thu, 26 Nov 2020 17:46:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiLLT-0001Bz-22; Thu, 26 Nov 2020 17:46:15 +0000
Received: by outflank-mailman (input) for mailman id 38984;
 Thu, 26 Nov 2020 17:46:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MEn1=FA=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kiLLS-0001Bu-6s
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 17:46:14 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ed195e09-9a1c-42f2-9056-887e1661d8ec;
 Thu, 26 Nov 2020 17:46:13 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-246-BB1HWuiHPgCSHcsN6cw6zQ-1; Thu, 26 Nov 2020 12:46:09 -0500
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DA43D805BFA;
 Thu, 26 Nov 2020 17:46:07 +0000 (UTC)
Received: from localhost (unknown [10.10.67.2])
 by smtp.corp.redhat.com (Postfix) with ESMTP id EF2E85C1C2;
 Thu, 26 Nov 2020 17:46:00 +0000 (UTC)
X-Inumbo-ID: ed195e09-9a1c-42f2-9056-887e1661d8ec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606412773;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PfLv9sn75hFi4vrNX0/cdhCzTxZHr7btcEyxi0LGyNA=;
	b=hLUy6klhbkdFcfa+XpR6P1luU+zspbgqHBxTTVpiGHcINgQGyFCv6tIuvesCDZZ3gfT5rT
	ugJw0jx4oBHyzn493pqOkz2re36V8gK4glkQJr2Gd7VvCP4g6SsRG7fDB0qILgog5xoA7e
	OUs+SWaiGAGse6ArU7xxjFsQnmci/gg=
X-MC-Unique: BB1HWuiHPgCSHcsN6cw6zQ-1
Date: Thu, 26 Nov 2020 12:45:59 -0500
From: Eduardo Habkost <ehabkost@redhat.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	qemu-devel@nongnu.org, Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Wainer dos Santos Moschetta <wainersm@redhat.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Thomas Huth <thuth@redhat.com>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Richard Henderson <rth@twiddle.net>, Fam Zheng <fam@euphon.net>,
	"Daniel P . Berrange" <berrange@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH-for-6.0 v4 15/17] gitlab-ci: Add test for Xen (on CentOS
 7)
Message-ID: <20201126174559.GP2271382@habkost.net>
References: <20201108204535.2319870-1-philmd@redhat.com>
 <20201108204535.2319870-16-philmd@redhat.com>
 <20201126173824.GB2098@perard.uk.xensource.com>
MIME-Version: 1.0
In-Reply-To: <20201126173824.GB2098@perard.uk.xensource.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Thu, Nov 26, 2020 at 05:38:24PM +0000, Anthony PERARD wrote:
> On Sun, Nov 08, 2020 at 09:45:33PM +0100, Philippe Mathieu-Daudé wrote:
> > Xen packages are available in CentOS 7, but have been
> > removed from CentOS 8. Use the CentOS 7 container.
> 
> Technically, Xen has never been in CentOS 8; I'm working on it, slowly.
> 
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > ---
> > Cc: Eduardo Habkost <ehabkost@redhat.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Anthony Perard <anthony.perard@citrix.com>
> > Cc: Paul Durrant <paul@xen.org>
> > Cc: xen-devel@lists.xenproject.org
> > ---
> >  .gitlab-ci.yml | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> > 
> > diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> > index 2f0da7b3dc1..8e15266c277 100644
> > --- a/.gitlab-ci.yml
> > +++ b/.gitlab-ci.yml
> > @@ -557,6 +557,27 @@ check-crypto-only-gnutls:
> >      IMAGE: centos7
> >      MAKE_CHECK_ARGS: check
> >  
> > +build-xen-centos:
> > +  <<: *native_build_job_definition
> > +  variables:
> > +    IMAGE: centos7
> > +    TARGETS: i386-softmmu x86_64-softmmu
> > +    CONFIGURE_ARGS: --enable-xen
> > +    MAKE_CHECK_ARGS: check-build
> > +  artifacts:
> > +    paths:
> > +      - build
> > +
> > +check-xen-centos:
> > +  <<: *native_test_job_definition
> > +  needs:
> > +    - job: build-xen-centos
> > +      artifacts: true
> > +  variables:
> > +    IMAGE: centos7
> > +    MAKE_CHECK_ARGS: check
> 
> Is `make check` going to do something useful with the Xen support? Or is
> it going to need more work in order to test the Xen support of QEMU?
> (Like starting an actual Xen guest.)

I don't think it will test Xen support, but we still want to at
least check that --enable-xen doesn't break anything else.
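
For reference, the two jobs roughly boil down to the following local
steps, assuming a QEMU source checkout; the mapping of the TARGETS
variable to --target-list is an assumption about the CI wrapper script.
Shown as a dry run that only prints each step:

```shell
# Rough local equivalent of the build-xen-centos / check-xen-centos
# jobs above. 'run' only echoes each step so the sequence is visible
# without a QEMU tree; replace 'echo' with the real commands to execute.
run() { echo "+ $*"; }

run mkdir build
run cd build
run ../configure --target-list='i386-softmmu x86_64-softmmu' --enable-xen
run make check-build   # what build-xen-centos runs
run make check         # what check-xen-centos runs
```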

Is there any public CI system where Xen support is tested today?

-- 
Eduardo



From xen-devel-bounces@lists.xenproject.org Thu Nov 26 20:17:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 20:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39003.71767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiNhs-0006St-0A; Thu, 26 Nov 2020 20:17:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39003.71767; Thu, 26 Nov 2020 20:17:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiNhr-0006Sm-St; Thu, 26 Nov 2020 20:17:31 +0000
Received: by outflank-mailman (input) for mailman id 39003;
 Thu, 26 Nov 2020 20:17:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiNhq-0006Se-Ft; Thu, 26 Nov 2020 20:17:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiNhq-0002uu-8O; Thu, 26 Nov 2020 20:17:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiNhq-0001Rc-03; Thu, 26 Nov 2020 20:17:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiNhp-0002gg-Vo; Thu, 26 Nov 2020 20:17:29 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wFUoiIUZUaewVYBCVIkRkxws1u2xNF5jEvGvRg4XMXw=; b=y73uTEsL1lFUM9K0J4Jrnxpt9X
	Js9Rd2RZi8/k/31KUHhDLghOK2LT3MMyk4cOW1gj8zmdvqOd+9ri5I3u34tC1XG6djVwi464knRwW
	gZ7M8c12nZ1cOhjfGkJgp/VQv+Bs5nBlqpLijI2ORyq7+WH9lHC1RMVxYr038vulDxN4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157023-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157023: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=fa02fcd94b0c8dff6cc65714510cf25ad194b90d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 20:17:29 +0000

flight 157023 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157023/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                fa02fcd94b0c8dff6cc65714510cf25ad194b90d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  118 days
Failing since        152366  2020-08-01 20:49:34 Z  116 days  196 attempts
Testing same since   157023  2020-11-26 05:39:28 Z    0 days    1 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 684525 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Nov 26 23:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 26 Nov 2020 23:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39024.71791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiQhd-0006uH-I0; Thu, 26 Nov 2020 23:29:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39024.71791; Thu, 26 Nov 2020 23:29:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiQhd-0006uA-F8; Thu, 26 Nov 2020 23:29:29 +0000
Received: by outflank-mailman (input) for mailman id 39024;
 Thu, 26 Nov 2020 23:29:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiQhc-0006u2-K1; Thu, 26 Nov 2020 23:29:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiQhc-0006lL-Bb; Thu, 26 Nov 2020 23:29:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiQhb-0003SO-Ud; Thu, 26 Nov 2020 23:29:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiQhb-0006tZ-U8; Thu, 26 Nov 2020 23:29:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6SIgue+eA/OHPeOK6RPTyibEt2kwDNeR+9fG1s0Fhvc=; b=vRgcau/27e7cc7OTGgjSJ6sOtZ
	rVr+yah/5nBMXGl79jpqYUAejed4cOVg3csFgDxuduayRkuriFNLFqMOJwEuXJOWL8SzuBeNbZ3u0
	BhEvOqY9CBCPc7hkd6KLe4upL1REJ5HAImshba6gDGjIP/+1d2Hsp1wNMA02giaUW56w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157028-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157028: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
X-Osstest-Versions-That:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 26 Nov 2020 23:29:27 +0000

flight 157028 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157028/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157016
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157016
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157016
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157016
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157016
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157016
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157016
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157016
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157016
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157016
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157016
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593
baseline version:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593

Last test of basis   157028  2020-11-26 10:54:58 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 02:06:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 02:06:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39042.71813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiT9b-0001SI-KS; Fri, 27 Nov 2020 02:06:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39042.71813; Fri, 27 Nov 2020 02:06:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiT9b-0001SB-Fh; Fri, 27 Nov 2020 02:06:31 +0000
Received: by outflank-mailman (input) for mailman id 39042;
 Fri, 27 Nov 2020 02:06:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2VC9=FB=intel.com=oliver.sang@srs-us1.protection.inumbo.net>)
 id 1kiT9Z-0001S6-Uh
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 02:06:30 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9a86dc0-afd1-4ef6-872e-a42726781b76;
 Fri, 27 Nov 2020 02:06:16 +0000 (UTC)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 26 Nov 2020 18:06:14 -0800
Received: from xsang-optiplex-9020.sh.intel.com (HELO xsang-OptiPlex-9020)
 ([10.239.159.140])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 26 Nov 2020 18:06:08 -0800
X-Inumbo-ID: c9a86dc0-afd1-4ef6-872e-a42726781b76
IronPort-SDR: x2wfin5nmQr5JSarGDUIA5OKi72baDsUIIuFRTrlw2B8N4sLAAUr3osuWAHaOmUNI7sGJVo/ic
 ih4znDTArpcA==
X-IronPort-AV: E=McAfee;i="6000,8403,9817"; a="236470026"
X-IronPort-AV: E=Sophos;i="5.78,373,1599548400"; 
   d="yaml'?scan'208";a="236470026"
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
IronPort-SDR: yhA34yx7HTJsOiRSvOFBnTmZpUEu5NiXomiFDMZcSXBIJm6YrMoa1kPANo+iceGlJFvR1v5wbC
 EdAln/nrC8xw==
X-IronPort-AV: E=Sophos;i="5.78,373,1599548400"; 
   d="yaml'?scan'208";a="479531158"
Date: Fri, 27 Nov 2020 10:20:18 +0800
From: kernel test robot <oliver.sang@intel.com>
To: Juergen Gross <jgross@suse.com>
Cc: 0day robot <lkp@intel.com>, Andy Lutomirski <luto@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, lkp@lists.01.org,
	ying.huang@intel.com, feng.tang@intel.com, zhengjun.xing@intel.com,
	xen-devel@lists.xenproject.org, x86@kernel.org,
	virtualization@lists.linux-foundation.org, peterz@infradead.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [x86]  97e8f0134a:  fio.write_iops 8.6% improvement
Message-ID: <20201127022018.GA29584@xsang-OptiPlex-9020>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOKacYhQ+x31HxR3"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201120114630.13552-6-jgross@suse.com>
User-Agent: NeoMutt/20170113 (1.7.2)


--BOKacYhQ+x31HxR3
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit


Greetings,

FYI, we noticed an 8.6% improvement of fio.write_iops due to commit:


commit: 97e8f0134a2bb794e4885f642724a50979b84f89 ("x86: rework arch_local_irq_restore() to not use popf")
url: https://github.com/0day-ci/linux/commits/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934

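For context, the idea behind the commit can be sketched as follows. This is a hypothetical user-space simplification for illustration only, not the actual kernel patch: x86's popf instruction restores the whole flags register and is comparatively expensive, while the saved flags really only tell us whether interrupts were enabled (the X86_EFLAGS_IF bit). So interrupt restore can be reduced to a conditional re-enable.

```c
#include <assert.h>
#include <stdbool.h>

/* X86_EFLAGS_IF is bit 9 of the x86 flags register: set when
 * interrupts are enabled. */
#define X86_EFLAGS_IF (1UL << 9)

/* Stand-in for the CPU's interrupt-enable state; in real code this is
 * hardware state, not a variable. */
static bool irqs_enabled;

static bool arch_irqs_disabled_flags(unsigned long flags)
{
    return !(flags & X86_EFLAGS_IF);
}

/* Models the cheap "sti" instruction. */
static void arch_local_irq_enable(void)
{
    irqs_enabled = true;
}

/* Post-rework shape: instead of "push %flags; popf", only re-enable
 * interrupts when the saved flags say they were enabled, and do
 * nothing otherwise. */
static void arch_local_irq_restore(unsigned long flags)
{
    if (!arch_irqs_disabled_flags(flags))
        arch_local_irq_enable();
}
```

Avoiding popf on this hot path (and the paravirt indirection around it) is what plausibly accounts for the measured IOPS gain.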

in testcase: fio-basic
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with following parameters:

	disk: 2pmem
	fs: xfs
	mount_option: dax
	runtime: 200s
	nr_task: 50%
	time_based: tb
	rw: randwrite
	bs: 4k
	ioengine: sync
	test_size: 200G
	cpufreq_governor: performance
	ucode: 0x5003003

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
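The parameter list above corresponds roughly to a fio job file of the following shape. This is a hypothetical reconstruction for readability; the authoritative job definition is the job.yaml attached to this mail, and the directory path is an assumption:

```ini
; Hypothetical fio job matching the parameters above (sketch only).
[global]
ioengine=sync
rw=randwrite
bs=4k
size=200G
runtime=200s
time_based
directory=/mnt/pmem    ; xfs on a pmem device, mounted with -o dax (assumed path)

[randwrite]
numjobs=48             ; nr_task=50% of the machine's 96 hardware threads
```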

In addition, the commit also has a significant impact on the following tests:

+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops 5.2% improvement             |
| test machine     | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance                                              |
|                  | mode=process                                                              |
|                  | nr_task=50%                                                               |
|                  | test=futex1                                                               |
|                  | ucode=0x5003003                                                           |
+------------------+---------------------------------------------------------------------------+




Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
  4k/gcc-9/performance/2pmem/xfs/sync/x86_64-rhel-8.3/dax/50%/debian-10.4-x86_64-20200603.cgz/200s/randwrite/lkp-csl-2sp6/200G/fio-basic/tb/0x5003003

commit: 
  d625d30a28 ("x86/xen: drop USERGS_SYSRET64 paravirt call")
  97e8f0134a ("x86: rework arch_local_irq_restore() to not use popf")

d625d30a28a4c7a3 97e8f0134a2bb794e4885f64272 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      0.12   5%      -0.0        0.09  13%  fio.latency_1000us%
      0.32            -0.0        0.31        fio.latency_100ms%
     18.23            -1.8       16.42   2%  fio.latency_100us%
     19.93            +2.4       22.31   4%  fio.latency_20us%
     20.34            -3.5       16.85        fio.latency_250us%
      0.04   4%      -0.0        0.03   5%  fio.latency_2ms%
      2.78   2%      -0.3        2.45   2%  fio.latency_500us%
      0.26            -0.0        0.21        fio.latency_50ms%
     37.13            +3.5       40.61   2%  fio.latency_50us%
      0.61   4%      -0.1        0.52   7%  fio.latency_750us%
    600.25            -4.7%     572.00        fio.time.percent_of_cpu_this_job_got
      1169            -4.8%       1112        fio.time.system_time
   8540510            +2.4%    8746174        fio.time.voluntary_context_switches
  26105864            +8.6%   28354900        fio.workload
    509.80            +8.6%     553.74        fio.write_bw_MBps
    153600           -12.7%     134144        fio.write_clat_90%_us
    220672            -9.5%     199680        fio.write_clat_95%_us
    612352            -9.4%     555008   2%  fio.write_clat_99%_us
    366788            -7.9%     337679        fio.write_clat_mean_us
   3896686            -2.5%    3799973        fio.write_clat_stddev
    130510            +8.6%     141756        fio.write_iops
     15.54            -1.3%      15.35        boot-time.dhcp
      3739  76%     +84.2%       6886  30%  numa-meminfo.node1.PageTables
    934.75  76%     +84.2%       1721  31%  numa-vmstat.node1.nr_page_table_pages
   1128546            +7.6%    1214673        vmstat.io.bo
    254919            +5.4%     268773        vmstat.system.cs
    158457            +1.5%     160806        proc-vmstat.nr_slab_unreclaimable
   1694299            +5.6%    1789326        proc-vmstat.numa_hit
   1662873            +5.7%    1757835        proc-vmstat.numa_local
   6648063            +9.2%    7260774        proc-vmstat.pgalloc_normal
   6403869            +9.8%    7028415        proc-vmstat.pgfree
 2.313e+08            +7.5%  2.487e+08        proc-vmstat.pgpgout
     36071   5%     -10.1%      32430   3%  softirqs.CPU36.RCU
     36623   5%      -9.0%      33322   4%  softirqs.CPU41.RCU
     36414   5%      -8.8%      33202   3%  softirqs.CPU47.RCU
     34248            -9.6%      30945        softirqs.CPU73.RCU
     33882            -9.4%      30687        softirqs.CPU74.RCU
     36889   4%     -12.9%      32132   2%  softirqs.CPU82.RCU
     35183   4%      -8.5%      32189   4%  softirqs.CPU85.RCU
     36146   2%      -9.3%      32785   3%  softirqs.CPU87.RCU
     36673   3%      -9.9%      33030   3%  softirqs.CPU95.RCU
      5942           -30.4%       4138  21%  sched_debug.cfs_rq:/.exec_clock.avg
      4314   2%     -33.6%       2863  25%  sched_debug.cfs_rq:/.exec_clock.min
    773.71   8%     -16.7%     644.43   6%  sched_debug.cfs_rq:/.exec_clock.stddev
    241.46  11%     -27.0%     176.38  26%  sched_debug.cfs_rq:/.load_avg.avg
     27925   5%     -21.0%      22049  17%  sched_debug.cfs_rq:/.min_vruntime.min
    116399           -20.2%      92862  13%  sched_debug.cpu.clock.avg
    116404           -20.2%      92867  13%  sched_debug.cpu.clock.max
    116393           -20.2%      92856  13%  sched_debug.cpu.clock.min
    114940           -20.2%      91709  13%  sched_debug.cpu.clock_task.avg
    115124           -20.2%      91877  13%  sched_debug.cpu.clock_task.max
    110309           -20.8%      87321  14%  sched_debug.cpu.clock_task.min
      5029           -11.6%       4444   7%  sched_debug.cpu.curr->pid.max
    150621   6%     -31.1%     103723  26%  sched_debug.cpu.nr_switches.min
    148865   6%     -31.1%     102577  26%  sched_debug.cpu.sched_count.min
     36388           -27.7%      26315  22%  sched_debug.cpu.sched_goidle.avg
     23703   3%     -35.2%      15364  25%  sched_debug.cpu.sched_goidle.min
     74196   5%     -30.8%      51361  26%  sched_debug.cpu.ttwu_count.min
    116395           -20.2%      92858  13%  sched_debug.cpu_clk
    115878           -20.3%      92361  13%  sched_debug.ktime
    116767           -20.2%      93217  13%  sched_debug.sched_clk
 1.668e+09            +3.3%  1.724e+09        perf-stat.i.branch-instructions
  28584168   9%      +8.4%   30994854   8%  perf-stat.i.branch-misses
  33324787            +7.5%   35813189        perf-stat.i.cache-misses
  83826973  12%     +12.7%   94503492  13%  perf-stat.i.cache-references
    258508            +5.4%     272477        perf-stat.i.context-switches
      2.60   2%      -5.6%       2.45   2%  perf-stat.i.cpi
    674.12   2%      -7.8%     621.30   3%  perf-stat.i.cycles-between-cache-misses
 2.429e+09            +3.9%  2.524e+09        perf-stat.i.dTLB-loads
 1.259e+09            +5.3%  1.326e+09        perf-stat.i.dTLB-stores
     53.99            +2.1       56.12        perf-stat.i.iTLB-load-miss-rate%
   9728741            +5.2%   10234658   3%  perf-stat.i.iTLB-load-misses
 8.601e+09            +3.8%  8.927e+09        perf-stat.i.instructions
      0.39   2%      +5.7%       0.41   2%  perf-stat.i.ipc
     56.92            +4.2%      59.31        perf-stat.i.metric.M/sec
   9006030            +7.1%    9648264        perf-stat.i.node-load-misses
   3237763   2%     +10.6%    3581907   2%  perf-stat.i.node-loads
   2918052   2%      +6.4%    3106057        perf-stat.i.node-store-misses
    582042   3%     +11.5%     649247   4%  perf-stat.i.node-stores
      2.53   2%      -5.2%       2.40   2%  perf-stat.overall.cpi
    656.32   2%      -8.5%     600.78   2%  perf-stat.overall.cycles-between-cache-misses
     53.81            +2.0       55.85        perf-stat.overall.iTLB-load-miss-rate%
      0.39   2%      +5.5%       0.42   2%  perf-stat.overall.ipc
     66759            -4.6%      63713        perf-stat.overall.path-length
 1.666e+09            +3.3%  1.721e+09        perf-stat.ps.branch-instructions
  28522187   9%      +8.4%   30911250   8%  perf-stat.ps.branch-misses
  33163222            +7.5%   35640249        perf-stat.ps.cache-misses
  83518487  12%     +12.7%   94131115  12%  perf-stat.ps.cache-references
    257076            +5.4%     270958        perf-stat.ps.context-switches
 2.425e+09            +3.9%   2.52e+09        perf-stat.ps.dTLB-loads
 1.256e+09            +5.3%  1.323e+09        perf-stat.ps.dTLB-stores
   9695548            +5.2%   10200116   3%  perf-stat.ps.iTLB-load-misses
 8.591e+09            +3.8%  8.916e+09        perf-stat.ps.instructions
   8959319            +7.1%    9599418        perf-stat.ps.node-load-misses
   3223144   2%     +10.6%    3564824   2%  perf-stat.ps.node-loads
   2903781   2%      +6.5%    3091553        perf-stat.ps.node-store-misses
    580260   3%     +11.5%     646965   4%  perf-stat.ps.node-stores
 1.743e+12            +3.7%  1.807e+12        perf-stat.total.instructions
      1131 164%     -99.8%       2.25 173%  interrupts.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
    714498   2%     +14.7%     819498   2%  interrupts.CAL:Function_call_interrupts
      6256   8%     +31.9%       8251   6%  interrupts.CPU0.CAL:Function_call_interrupts
    306.50  12%     +31.2%     402.00   7%  interrupts.CPU0.RES:Rescheduling_interrupts
      6860  11%     +39.6%       9576   7%  interrupts.CPU1.CAL:Function_call_interrupts
    291.25  16%     +23.7%     360.25   7%  interrupts.CPU1.RES:Rescheduling_interrupts
      6531   4%     +23.6%       8073  12%  interrupts.CPU11.CAL:Function_call_interrupts
      6598   9%     +22.4%       8078   7%  interrupts.CPU12.CAL:Function_call_interrupts
      6250   8%     +34.0%       8372   9%  interrupts.CPU13.CAL:Function_call_interrupts
      6748  11%     +25.4%       8460  10%  interrupts.CPU14.CAL:Function_call_interrupts
      6387  10%     +33.1%       8498  10%  interrupts.CPU16.CAL:Function_call_interrupts
      6562  12%     +21.9%       7996   9%  interrupts.CPU19.CAL:Function_call_interrupts
     78.75  36%     -64.8%      27.75  39%  interrupts.CPU19.TLB:TLB_shootdowns
      7255   4%     +23.5%       8959   7%  interrupts.CPU2.CAL:Function_call_interrupts
      6542   9%     +34.1%       8770   4%  interrupts.CPU20.CAL:Function_call_interrupts
      6278   9%     +37.5%       8635   5%  interrupts.CPU21.CAL:Function_call_interrupts
    243.00  13%     +24.4%     302.25   6%  interrupts.CPU21.RES:Rescheduling_interrupts
    397.25  12%     -21.1%     313.50  18%  interrupts.CPU24.RES:Rescheduling_interrupts
    540.25  26%     +68.0%     907.50  23%  interrupts.CPU26.NMI:Non-maskable_interrupts
    540.25  26%     +68.0%     907.50  23%  interrupts.CPU26.PMI:Performance_monitoring_interrupts
    336.50  11%     -26.4%     247.75   8%  interrupts.CPU29.RES:Rescheduling_interrupts
      7506   3%     +25.4%       9410  11%  interrupts.CPU3.CAL:Function_call_interrupts
      6747  11%     +17.7%       7944   5%  interrupts.CPU39.CAL:Function_call_interrupts
      6517  10%     +31.9%       8593   8%  interrupts.CPU4.CAL:Function_call_interrupts
      1131 164%     -99.8%       2.00 173%  interrupts.CPU43.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
      1009  10%     -37.5%     631.25  27%  interrupts.CPU43.NMI:Non-maskable_interrupts
      1009  10%     -37.5%     631.25  27%  interrupts.CPU43.PMI:Performance_monitoring_interrupts
      6214  17%     +26.6%       7866   5%  interrupts.CPU48.CAL:Function_call_interrupts
     95.00  34%     -64.5%      33.75  46%  interrupts.CPU49.TLB:TLB_shootdowns
      7046  12%     +31.1%       9235  10%  interrupts.CPU5.CAL:Function_call_interrupts
      7146   9%     +22.1%       8727   2%  interrupts.CPU50.CAL:Function_call_interrupts
      7306   8%     +18.5%       8654  14%  interrupts.CPU51.CAL:Function_call_interrupts
      7609  11%     +26.2%       9601   4%  interrupts.CPU53.CAL:Function_call_interrupts
      7626   2%     +15.1%       8774   9%  interrupts.CPU55.CAL:Function_call_interrupts
      7280   4%     +27.3%       9264   4%  interrupts.CPU57.CAL:Function_call_interrupts
    491.75  22%     +26.5%     622.00  29%  interrupts.CPU57.NMI:Non-maskable_interrupts
    491.75  22%     +26.5%     622.00  29%  interrupts.CPU57.PMI:Performance_monitoring_interrupts
      8002  10%     +13.9%       9117   6%  interrupts.CPU60.CAL:Function_call_interrupts
      7373   8%     +32.7%       9786   4%  interrupts.CPU62.CAL:Function_call_interrupts
      7193   5%     +33.4%       9593   4%  interrupts.CPU63.CAL:Function_call_interrupts
    272.50  12%     +29.0%     351.50  14%  interrupts.CPU63.RES:Rescheduling_interrupts
      7818   6%     +21.6%       9507   5%  interrupts.CPU64.CAL:Function_call_interrupts
      7044  11%     +34.5%       9474  10%  interrupts.CPU65.CAL:Function_call_interrupts
      7602   9%     +21.2%       9216   6%  interrupts.CPU66.CAL:Function_call_interrupts
      7413   4%     +29.0%       9561   6%  interrupts.CPU67.CAL:Function_call_interrupts
    282.00  17%     +23.7%     348.75   5%  interrupts.CPU67.RES:Rescheduling_interrupts
      7334   6%     +24.8%       9155   8%  interrupts.CPU68.CAL:Function_call_interrupts
    748.50  24%     +32.8%     994.25   8%  interrupts.CPU69.NMI:Non-maskable_interrupts
    748.50  24%     +32.8%     994.25   8%  interrupts.CPU69.PMI:Performance_monitoring_interrupts
      6745   8%     +24.7%       8412   9%  interrupts.CPU7.CAL:Function_call_interrupts
      7765   6%     +25.8%       9769   6%  interrupts.CPU70.CAL:Function_call_interrupts
    281.75   8%     +18.4%     333.50  12%  interrupts.CPU70.RES:Rescheduling_interrupts
      7299   6%     +33.6%       9749  12%  interrupts.CPU71.CAL:Function_call_interrupts
      1046   8%     -37.0%     659.50  17%  interrupts.CPU73.NMI:Non-maskable_interrupts
      1046   8%     -37.0%     659.50  17%  interrupts.CPU73.PMI:Performance_monitoring_interrupts
    970.50  10%     -26.4%     714.00  22%  interrupts.CPU75.NMI:Non-maskable_interrupts
    970.50  10%     -26.4%     714.00  22%  interrupts.CPU75.PMI:Performance_monitoring_interrupts
    854.25  19%     -39.6%     515.75  27%  interrupts.CPU76.NMI:Non-maskable_interrupts
    854.25  19%     -39.6%     515.75  27%  interrupts.CPU76.PMI:Performance_monitoring_interrupts
    978.00  24%     -46.2%     525.75  40%  interrupts.CPU77.NMI:Non-maskable_interrupts
    978.00  24%     -46.2%     525.75  40%  interrupts.CPU77.PMI:Performance_monitoring_interrupts
      7666   2%     +16.3%       8914   7%  interrupts.CPU78.CAL:Function_call_interrupts
    888.25  20%     -28.4%     636.00  25%  interrupts.CPU78.NMI:Non-maskable_interrupts
    888.25  20%     -28.4%     636.00  25%  interrupts.CPU78.PMI:Performance_monitoring_interrupts
    829.00  13%     -17.2%     686.75  20%  interrupts.CPU79.NMI:Non-maskable_interrupts
    829.00  13%     -17.2%     686.75  20%  interrupts.CPU79.PMI:Performance_monitoring_interrupts
    314.75  10%     -12.4%     275.75   5%  interrupts.CPU85.RES:Rescheduling_interrupts
    341.50  13%     -17.9%     280.25  13%  interrupts.CPU92.RES:Rescheduling_interrupts
    350.25  11%     -22.6%     271.25   9%  interrupts.CPU94.RES:Rescheduling_interrupts
     15.97   4%      -1.3       14.63   3%  perf-profile.calltrace.cycles-pp.xlog_grant_head_check.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_iomap_write_direct
     16.73   4%      -1.2       15.49   2%  perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_iomap_write_direct.xfs_direct_write_iomap_begin
     16.78   4%      -1.2       15.55   3%  perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_iomap_write_direct.xfs_direct_write_iomap_begin.iomap_apply
     16.87   4%      -1.2       15.65   2%  perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_iomap_write_direct.xfs_direct_write_iomap_begin.iomap_apply.dax_iomap_rw
      7.83   3%      -1.1        6.68   4%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_grant_head_wait.xlog_grant_head_check.xfs_log_reserve
      7.89   3%      -1.1        6.76   4%  perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_grant_head_wait.xlog_grant_head_check.xfs_log_reserve.xfs_trans_reserve
      8.81   3%      -1.1        7.74   4%  perf-profile.calltrace.cycles-pp.xlog_grant_head_wait.xlog_grant_head_check.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc
      9.79   4%      -1.1        8.71   3%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xfs_log_space_wake.xfs_log_ticket_ungrant.xfs_log_commit_cil
      9.86   4%      -1.1        8.80   3%  perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_log_space_wake.xfs_log_ticket_ungrant.xfs_log_commit_cil.__xfs_trans_commit
     10.16   3%      -1.0        9.15   3%  perf-profile.calltrace.cycles-pp.xfs_log_space_wake.xfs_log_ticket_ungrant.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct
     10.83   3%      -0.8        9.99   3%  perf-profile.calltrace.cycles-pp.xfs_log_ticket_ungrant.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct.xfs_direct_write_iomap_begin
      3.89   7%      -0.4        3.45   4%  perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write.new_sync_write.vfs_write
      3.97   7%      -0.4        3.54   4%  perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_dax_write.new_sync_write.vfs_write.ksys_write
      3.84   6%      -0.4        3.42   4%  perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write.new_sync_write
      2.31   7%      -0.4        1.91   6%  perf-profile.calltrace.cycles-pp.xlog_grant_head_check.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time
      2.40   7%      -0.4        1.99   5%  perf-profile.calltrace.cycles-pp.xfs_log_reserve.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.file_update_time
      2.40   6%      -0.4        2.00   5%  perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
      2.40   7%      -0.4        2.01   5%  perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write
      0.96   9%      -0.1        0.83   5%  perf-profile.calltrace.cycles-pp.xfs_log_space_wake.xfs_log_ticket_ungrant.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
      0.89   2%      -0.1        0.83   3%  perf-profile.calltrace.cycles-pp.xfs_trans_committed_bulk.xlog_cil_committed.xlog_cil_process_committed.xlog_state_do_callback.xlog_ioend_work
      0.56   4%      +0.1        0.63   3%  perf-profile.calltrace.cycles-pp.xfs_bmbt_init_key_from_rec.xfs_lookup_get_search_key.xfs_btree_lookup.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten
      0.67   4%      +0.1        0.75   2%  perf-profile.calltrace.cycles-pp.xfs_lookup_get_search_key.xfs_btree_lookup.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write
      0.80   7%      +0.1        0.89   6%  perf-profile.calltrace.cycles-pp.xfs_buf_item_format.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct
      0.69   6%      +0.1        0.78   5%  perf-profile.calltrace.cycles-pp.xlog_state_release_iclog.xlog_write.xlog_cil_push_work.process_one_work.worker_thread
      0.67   4%      +0.1        0.77   8%  perf-profile.calltrace.cycles-pp.pmem_submit_bio.submit_bio_noacct.submit_bio._xfs_buf_ioapply.__xfs_buf_submit
      0.73   4%      +0.1        0.85   7%  perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio._xfs_buf_ioapply.__xfs_buf_submit.xfs_buf_delwri_submit_buffers
      0.73   4%      +0.1        0.86   7%  perf-profile.calltrace.cycles-pp.submit_bio._xfs_buf_ioapply.__xfs_buf_submit.xfs_buf_delwri_submit_buffers.xfsaild_push
      0.69   7%      +0.1        0.82   8%  perf-profile.calltrace.cycles-pp.xfs_iext_lookup_extent.xfs_bmapi_read.xfs_direct_write_iomap_begin.iomap_apply.dax_iomap_rw
      0.74   7%      +0.1        0.88   8%  perf-profile.calltrace.cycles-pp.xfs_bmapi_read.xfs_direct_write_iomap_begin.iomap_apply.dax_iomap_rw.xfs_file_dax_write
      1.21   4%      +0.2        1.36   6%  perf-profile.calltrace.cycles-pp.pmem_submit_bio.submit_bio_noacct.submit_bio.submit_bio_wait.blkdev_issue_zeroout
      1.74   6%      +0.2        1.92   4%  perf-profile.calltrace.cycles-pp.blkdev_issue_zeroout.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_direct.xfs_direct_write_iomap_begin
      1.26   4%      +0.2        1.46   4%  perf-profile.calltrace.cycles-pp.xlog_write.xlog_cil_push_work.process_one_work.worker_thread.kthread
      1.76   4%      +0.2        1.96   4%  perf-profile.calltrace.cycles-pp.xfs_btree_insrec.xfs_btree_insert.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write
      0.42  57%      +0.2        0.62   6%  perf-profile.calltrace.cycles-pp.xfs_cil_prepare_item.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct
      1.80   4%      +0.2        2.02   4%  perf-profile.calltrace.cycles-pp.xfs_btree_insert.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_direct
      2.75   3%      +0.2        2.97   5%  perf-profile.calltrace.cycles-pp.xfsaild.kthread.ret_from_fork
      2.75   3%      +0.2        2.97   5%  perf-profile.calltrace.cycles-pp.xfsaild_push.xfsaild.kthread.ret_from_fork
      1.25   5%      +0.2        1.48   2%  perf-profile.calltrace.cycles-pp.dax_iomap_actor.iomap_apply.dax_iomap_rw.xfs_file_dax_write.new_sync_write
      1.39   3%      +0.2        1.63   3%  perf-profile.calltrace.cycles-pp.xlog_cil_push_work.process_one_work.worker_thread.kthread.ret_from_fork
      1.32   4%      +0.3        1.60   5%  perf-profile.calltrace.cycles-pp._xfs_buf_ioapply.__xfs_buf_submit.xfs_buf_delwri_submit_buffers.xfsaild_push.xfsaild
      2.99            +0.3        3.30   2%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
      1.96   4%      +0.3        2.29   5%  perf-profile.calltrace.cycles-pp.__xfs_buf_submit.xfs_buf_delwri_submit_buffers.xfsaild_push.xfsaild.kthread
      2.30   4%      +0.3        2.63   5%  perf-profile.calltrace.cycles-pp.xfs_buf_delwri_submit_buffers.xfsaild_push.xfsaild.kthread.ret_from_fork
      3.31            +0.4        3.69   3%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
      6.09   2%      +0.6        6.70   4%  perf-profile.calltrace.cycles-pp.ret_from_fork
      6.09   2%      +0.6        6.70   4%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      6.92   4%      +0.8        7.72   3%  perf-profile.calltrace.cycles-pp.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_direct.xfs_direct_write_iomap_begin
      8.79   4%      +1.0        9.78   3%  perf-profile.calltrace.cycles-pp.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_direct.xfs_direct_write_iomap_begin.iomap_apply
      9.29   4%      +1.0       10.33   3%  perf-profile.calltrace.cycles-pp.xfs_bmapi_write.xfs_iomap_write_direct.xfs_direct_write_iomap_begin.iomap_apply.dax_iomap_rw
      2.87   5%      +1.4        4.24   7%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit
      3.03   5%      +1.4        4.45   7%  perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct
     18.13   4%      +1.7       19.80   5%  perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_iomap_write_direct.xfs_direct_write_iomap_begin.iomap_apply.dax_iomap_rw
     18.04   4%      +1.7       19.71   5%  perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct.xfs_direct_write_iomap_begin.iomap_apply
      4.89   5%      +1.7        6.59   4%  perf-profile.calltrace.cycles-pp.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_direct.xfs_direct_write_iomap_begin
     18.29   4%      -1.8       16.54   3%  perf-profile.children.cycles-pp.xlog_grant_head_check
     19.12   4%      -1.6       17.49   2%  perf-profile.children.cycles-pp.xfs_log_reserve
     19.18   4%      -1.6       17.55   3%  perf-profile.children.cycles-pp.xfs_trans_reserve
     19.28   4%      -1.6       17.66   2%  perf-profile.children.cycles-pp.xfs_trans_alloc
     11.15   4%      -1.1       10.02   3%  perf-profile.children.cycles-pp.xfs_log_space_wake
      8.81   3%      -1.1        7.74   4%  perf-profile.children.cycles-pp.xlog_grant_head_wait
     11.88   3%      -1.0       10.92   3%  perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
      3.89   7%      -0.4        3.45   4%  perf-profile.children.cycles-pp.file_update_time
      3.97   7%      -0.4        3.54   4%  perf-profile.children.cycles-pp.xfs_file_aio_write_checks
      3.84   6%      -0.4        3.42   4%  perf-profile.children.cycles-pp.xfs_vn_update_time
      0.36   6%      -0.3        0.10  11%  perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
      1.70   3%      -0.2        1.52   5%  perf-profile.children.cycles-pp.xlog_grant_head_wake
      0.30   5%      -0.1        0.23  10%  perf-profile.children.cycles-pp.xfs_buf_item_push
      0.89   2%      -0.1        0.83   3%  perf-profile.children.cycles-pp.xfs_trans_committed_bulk
      0.11  15%      -0.0        0.08  19%  perf-profile.children.cycles-pp.get_next_timer_interrupt
      0.17   2%      -0.0        0.15   5%  perf-profile.children.cycles-pp.lapic_next_deadline
      0.09            +0.0        0.11   6%  perf-profile.children.cycles-pp.xfs_btree_ptr_addr
      0.11   6%      +0.0        0.13  12%  perf-profile.children.cycles-pp.crc_128
      0.16  10%      +0.0        0.19   6%  perf-profile.children.cycles-pp.xfs_trans_ail_update_bulk
      0.10  12%      +0.0        0.13   6%  perf-profile.children.cycles-pp.xfs_trans_dirty_buf
      0.13   6%      +0.0        0.16        perf-profile.children.cycles-pp.irqtime_account_irq
      0.09            +0.0        0.12   3%  perf-profile.children.cycles-pp.xfs_btree_log_block
      0.04  58%      +0.0        0.07   6%  perf-profile.children.cycles-pp.percpu_counter_add_batch
      0.04  58%      +0.0        0.07  12%  perf-profile.children.cycles-pp.xfs_verify_fsbno
      0.04  58%      +0.0        0.07  10%  perf-profile.children.cycles-pp.cpumask_next_and
      0.11   9%      +0.0        0.14  15%  perf-profile.children.cycles-pp.__bio_add_page
      0.14  11%      +0.0        0.17  10%  perf-profile.children.cycles-pp.bio_add_page
      0.10   7%      +0.0        0.13  14%  perf-profile.children.cycles-pp.sysvec_call_function_single
      0.07   6%      +0.0        0.10   7%  perf-profile.children.cycles-pp.xfs_ail_delete_one
      0.14  10%      +0.0        0.18   4%  perf-profile.children.cycles-pp.finish_task_switch
      0.16   5%      +0.0        0.19   9%  perf-profile.children.cycles-pp.xfs_perag_put
      0.09  13%      +0.0        0.12  14%  perf-profile.children.cycles-pp.xfs_ail_check
      0.17   7%      +0.0        0.21   6%  perf-profile.children.cycles-pp.xfs_buf_unlock
      0.15   5%      +0.0        0.20   5%  perf-profile.children.cycles-pp.__slab_free
      0.12   8%      +0.0        0.16   4%  perf-profile.children.cycles-pp.xfs_trans_log_buf
      0.11  10%      +0.0        0.15   8%  perf-profile.children.cycles-pp.asm_sysvec_call_function_single
      0.24   9%      +0.0        0.28   5%  perf-profile.children.cycles-pp.xfs_btree_update
      0.28   7%      +0.0        0.32   5%  perf-profile.children.cycles-pp.orc_find
      0.26   6%      +0.0        0.30   6%  perf-profile.children.cycles-pp.xfs_buf_bio_end_io
      0.33   6%      +0.0        0.38   3%  perf-profile.children.cycles-pp.bio_alloc_bioset
      0.06   7%      +0.1        0.11  10%  perf-profile.children.cycles-pp.xfs_btree_lblock_calc_crc
      0.01 173%      +0.1        0.07  25%  perf-profile.children.cycles-pp.timerqueue_del
      0.19   7%      +0.1        0.24   3%  perf-profile.children.cycles-pp.preempt_schedule_common
      0.00            +0.1        0.05   8%  perf-profile.children.cycles-pp._find_next_bit
      0.26   7%      +0.1        0.32   3%  perf-profile.children.cycles-pp._cond_resched
      0.51   4%      +0.1        0.56   5%  perf-profile.children.cycles-pp._xfs_trans_bjoin
      0.14  11%      +0.1        0.20   4%  perf-profile.children.cycles-pp.xfs_buf_item_done
      0.14  10%      +0.1        0.20   5%  perf-profile.children.cycles-pp.xfs_trans_ail_delete
      0.23   9%      +0.1        0.29   5%  perf-profile.children.cycles-pp.__orc_find
      0.39   9%      +0.1        0.45   4%  perf-profile.children.cycles-pp.xfs_buf_item_pin
      0.45   8%      +0.1        0.51   2%  perf-profile.children.cycles-pp.update_sd_lb_stats
      0.46   8%      +0.1        0.52   2%  perf-profile.children.cycles-pp.find_busiest_group
      0.24   8%      +0.1        0.31   7%  perf-profile.children.cycles-pp.__kmalloc
      0.30  11%      +0.1        0.37   7%  perf-profile.children.cycles-pp.schedule_idle
      0.38   6%      +0.1        0.45   6%  perf-profile.children.cycles-pp.down_read
      0.60   6%      +0.1        0.67   4%  perf-profile.children.cycles-pp.xfs_perag_get
      0.54            +0.1        0.61   3%  perf-profile.children.cycles-pp.memmove
      0.46   7%      +0.1        0.53   4%  perf-profile.children.cycles-pp.newidle_balance
      0.57   3%      +0.1        0.65   2%  perf-profile.children.cycles-pp.xfs_bmbt_init_key_from_rec
      0.54  10%      +0.1        0.61   2%  perf-profile.children.cycles-pp.load_balance
      0.83   7%      +0.1        0.91   6%  perf-profile.children.cycles-pp.xfs_buf_item_format
      0.11   7%      +0.1        0.19   2%  perf-profile.children.cycles-pp.up
      0.68   3%      +0.1        0.76   2%  perf-profile.children.cycles-pp.xfs_lookup_get_search_key
      0.20   5%      +0.1        0.28   2%  perf-profile.children.cycles-pp.__wake_up_common_lock
      0.28   3%      +0.1        0.37        perf-profile.children.cycles-pp.xfs_buf_ioend
      0.54   8%      +0.1        0.63   5%  perf-profile.children.cycles-pp.xfs_cil_prepare_item
      0.69   6%      +0.1        0.78   5%  perf-profile.children.cycles-pp.xlog_state_release_iclog
      0.57   8%      +0.1        0.67   5%  perf-profile.children.cycles-pp.unwind_next_frame
      0.89   5%      +0.1        1.01   7%  perf-profile.children.cycles-pp.memcpy_erms
      0.43   6%      +0.1        0.56   7%  perf-profile.children.cycles-pp.__srcu_read_unlock
      0.68   5%      +0.1        0.82   4%  perf-profile.children.cycles-pp.pick_next_task_fair
      0.74   7%      +0.1        0.89   8%  perf-profile.children.cycles-pp.xfs_bmapi_read
      1.09   6%      +0.1        1.23   3%  perf-profile.children.cycles-pp.schedule
      0.81   8%      +0.2        0.97   4%  perf-profile.children.cycles-pp.arch_stack_walk
      0.88   7%      +0.2        1.03   5%  perf-profile.children.cycles-pp.stack_trace_save_tsk
      0.82   7%      +0.2        0.98   7%  perf-profile.children.cycles-pp.xfs_iext_lookup_extent
      1.13   6%      +0.2        1.29   4%  perf-profile.children.cycles-pp.__account_scheduler_latency
      1.46   5%      +0.2        1.63   4%  perf-profile.children.cycles-pp.ttwu_do_activate
      1.45   5%      +0.2        1.62   4%  perf-profile.children.cycles-pp.enqueue_task_fair
      1.74   6%      +0.2        1.92   4%  perf-profile.children.cycles-pp.blkdev_issue_zeroout
      1.38   5%      +0.2        1.56   4%  perf-profile.children.cycles-pp.enqueue_entity
      1.26   4%      +0.2        1.46   3%  perf-profile.children.cycles-pp.xlog_write
      1.76   4%      +0.2        1.97   4%  perf-profile.children.cycles-pp.xfs_btree_insrec
      1.81   4%      +0.2        2.02   4%  perf-profile.children.cycles-pp.xfs_btree_insert
      2.75   3%      +0.2        2.97   5%  perf-profile.children.cycles-pp.xfsaild
      2.75   3%      +0.2        2.97   5%  perf-profile.children.cycles-pp.xfsaild_push
      1.25   5%      +0.2        1.48   2%  perf-profile.children.cycles-pp.dax_iomap_actor
      1.39   3%      +0.2        1.63   3%  perf-profile.children.cycles-pp.xlog_cil_push_work
      1.54   5%      +0.3        1.79   3%  perf-profile.children.cycles-pp.__schedule
      1.32   4%      +0.3        1.60   5%  perf-profile.children.cycles-pp._xfs_buf_ioapply
      2.99            +0.3        3.30   2%  perf-profile.children.cycles-pp.process_one_work
      1.96   3%      +0.3        2.29   5%  perf-profile.children.cycles-pp.__xfs_buf_submit
      2.43   4%      +0.3        2.75   6%  perf-profile.children.cycles-pp.pmem_submit_bio
      2.30   4%      +0.3        2.63   5%  perf-profile.children.cycles-pp.xfs_buf_delwri_submit_buffers
      2.73   4%      +0.4        3.08   5%  perf-profile.children.cycles-pp.submit_bio_noacct
      2.75   4%      +0.4        3.10   5%  perf-profile.children.cycles-pp.submit_bio
      1.52   6%      +0.4        1.89   4%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      3.31            +0.4        3.69   3%  perf-profile.children.cycles-pp.worker_thread
      6.10   2%      +0.6        6.70   4%  perf-profile.children.cycles-pp.ret_from_fork
      6.09   2%      +0.6        6.70   4%  perf-profile.children.cycles-pp.kthread
      6.92   4%      +0.8        7.73   3%  perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
      8.79   4%      +1.0        9.78   3%  perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
      9.29   4%      +1.0       10.33   3%  perf-profile.children.cycles-pp.xfs_bmapi_write
     19.55   4%      +1.6       21.20   4%  perf-profile.children.cycles-pp.__xfs_trans_commit
     19.46   4%      +1.6       21.11   4%  perf-profile.children.cycles-pp.xfs_log_commit_cil
      5.16   5%      +1.8        6.94   4%  perf-profile.children.cycles-pp.xlog_cil_insert_items
      0.31   7%      -0.2        0.08   8%  perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
      0.35   5%      -0.2        0.14  10%  perf-profile.self.cycles-pp.xlog_grant_head_wake
      0.20   5%      -0.1        0.08  13%  perf-profile.self.cycles-pp.xfs_buf_item_unpin
      0.11   6%      -0.0        0.09  10%  perf-profile.self.cycles-pp.try_to_wake_up
      0.17            -0.0        0.15   5%  perf-profile.self.cycles-pp.lapic_next_deadline
      0.11   3%      +0.0        0.13   5%  perf-profile.self.cycles-pp.xfs_bmbt_key_diff
      0.10   4%      +0.0        0.12   6%  perf-profile.self.cycles-pp._xfs_trans_bjoin
      0.07  10%      +0.0        0.09   8%  perf-profile.self.cycles-pp.irqtime_account_irq
      0.06  11%      +0.0        0.08  15%  perf-profile.self.cycles-pp.pmem_submit_bio
      0.11   6%      +0.0        0.13  12%  perf-profile.self.cycles-pp.crc_128
      0.08   8%      +0.0        0.10  10%  perf-profile.self.cycles-pp.list_sort
      0.10  12%      +0.0        0.12   8%  perf-profile.self.cycles-pp.xfs_trans_dirty_buf
      0.08  15%      +0.0        0.10  10%  perf-profile.self.cycles-pp.xfs_trans_committed_bulk
      0.08   8%      +0.0        0.11   4%  perf-profile.self.cycles-pp.tick_nohz_next_event
      0.14   5%      +0.0        0.17   9%  perf-profile.self.cycles-pp.xfs_cil_prepare_item
      0.11   9%      +0.0        0.14  15%  perf-profile.self.cycles-pp.__bio_add_page
      0.16   7%      +0.0        0.19   8%  perf-profile.self.cycles-pp.xfs_perag_put
      0.08  10%      +0.0        0.12  13%  perf-profile.self.cycles-pp.xfs_ail_check
      0.17   2%      +0.0        0.21   8%  perf-profile.self.cycles-pp.xlog_write
      0.15   5%      +0.0        0.19   3%  perf-profile.self.cycles-pp.__slab_free
      0.25  10%      +0.0        0.29   5%  perf-profile.self.cycles-pp.xfs_buf_bio_end_io
      0.06   9%      +0.0        0.10   8%  perf-profile.self.cycles-pp.xfs_btree_lblock_calc_crc
      0.03 100%      +0.1        0.08  10%  perf-profile.self.cycles-pp._xfs_buf_ioapply
      0.23   9%      +0.1        0.29   5%  perf-profile.self.cycles-pp.__orc_find
      0.56   4%      +0.1        0.62   3%  perf-profile.self.cycles-pp.xfs_bmbt_init_key_from_rec
      0.35   8%      +0.1        0.42   6%  perf-profile.self.cycles-pp.down_read
      0.55   7%      +0.1        0.62   3%  perf-profile.self.cycles-pp.xfs_perag_get
      0.54            +0.1        0.61   3%  perf-profile.self.cycles-pp.memmove
      0.49   2%      +0.1        0.57   5%  perf-profile.self.cycles-pp.xfs_btree_lookup
      0.53   9%      +0.1        0.63   4%  perf-profile.self.cycles-pp.xfs_log_commit_cil
      0.43   6%      +0.1        0.55   7%  perf-profile.self.cycles-pp.__srcu_read_unlock
      0.88   5%      +0.1        1.00   8%  perf-profile.self.cycles-pp.memcpy_erms
      0.81   7%      +0.1        0.96   7%  perf-profile.self.cycles-pp.xfs_iext_lookup_extent
      0.75  10%      +0.2        0.92   5%  perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
      1.47   6%      +0.4        1.84   3%  perf-profile.self.cycles-pp._raw_spin_lock_irqsave


                                                                                
                                 fio.write_bw_MBps                              
                                                                                
  600 +---------------------------------------------------------------------+   
      |   O O O O O O O  O O O O O O O O O O O O O O O O                    |   
  500 |.+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+.|   
      |                                                                     |   
      |                                                                     |   
  400 |-+                                                                   |   
      |                                                                     |   
  300 |-+                                                                   |   
      |                                                                     |   
  200 |-+                                                                   |   
      |                                                                     |   
      |                                                                     |   
  100 |-+                                                                   |   
      |                                                                     |   
    0 +---------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                   fio.write_iops                               
                                                                                
  160000 +------------------------------------------------------------------+   
         |           O                              O                       |   
  140000 |-+.+.O.+.O.+.O.+.+.O O O O O O O.+O O O.+. .+.O  .+. .+.+.+.      |   
  120000 |.+   +   +   +     +.+.+.+.+.+.+  +.+.+   +   +.+   +       +.+.+.|   
         |                                                                  |   
  100000 |-+                                                                |   
         |                                                                  |   
   80000 |-+                                                                |   
         |                                                                  |   
   60000 |-+                                                                |   
   40000 |-+                                                                |   
         |                                                                  |   
   20000 |-+                                                                |   
         |                                                                  |   
       0 +------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                               fio.write_clat_mean_us                           
                                                                                
  400000 +------------------------------------------------------------------+   
         |.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|   
  350000 |-+ O O O O O O O O O O O O O O O OO O O O O O O                   |   
         |                                                                  |   
  300000 |-+                                                                |   
  250000 |-+                                                                |   
         |                                                                  |   
  200000 |-+                                                                |   
         |                                                                  |   
  150000 |-+                                                                |   
  100000 |-+                                                                |   
         |                                                                  |   
   50000 |-+                                                                |   
         |                                                                  |   
       0 +------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                 fio.write_clat_90%_us                           
                                                                                
  160000 +------------------------------------------------------------------+   
         |                   +             ++                         +     |   
  140000 |-+ O O O O O O O O O O O O O O O OO O O O O O O                   |   
  120000 |-+                                                                |   
         |                                                                  |   
  100000 |-+                                                                |   
         |                                                                  |   
   80000 |-+                                                                |   
         |                                                                  |   
   60000 |-+                                                                |   
   40000 |-+                                                                |   
         |                                                                  |   
   20000 |-+                                                                |   
         |                                                                  |   
       0 +------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                fio.latency_250us%                               
                                                                                
  25 +----------------------------------------------------------------------+   
     |                                                                      |   
     |.+.   .+.  .+. .+.       .+.          .+.+.   .+.+.+.   .+..       .+.|   
  20 |-+ +.+   +.   +   +.+.+.+   +.+.+..+.+     +.+       +.+    +.+.+.+   |   
     |   O O O      O O   O O O O   O O  O           O O                    |   
     |         O  O     O         O        O O O O O                        |   
  15 |-+                                                                    |   
     |                                                                      |   
  10 |-+                                                                    |   
     |                                                                      |   
     |                                                                      |   
   5 |-+                                                                    |   
     |                                                                      |   
     |                                                                      |   
   0 +----------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                     fio.workload                               
                                                                                
    3e+07 +-----------------------------------------------------------------+   
          |   O O O O O O OO O O O O O O O O O O O O O O O                  |   
  2.5e+07 |.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|   
          |                                                                 |   
          |                                                                 |   
    2e+07 |-+                                                               |   
          |                                                                 |   
  1.5e+07 |-+                                                               |   
          |                                                                 |   
    1e+07 |-+                                                               |   
          |                                                                 |   
          |                                                                 |   
    5e+06 |-+                                                               |   
          |                                                                 |   
        0 +-----------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample
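
For readers post-processing reports like this one, the perf-profile rows above follow a fixed layout: baseline value, optional stddev%, signed absolute delta, bisect-bad value, optional stddev%, then the metric name. The following is an illustrative sketch (not part of the LKP tooling; the regex and field names are my own) of parsing one such row:

```python
import re

# Each comparison row looks like:
#   "0.10   4%      +0.0        0.12   6%  perf-profile.self.cycles-pp._xfs_trans_bjoin"
# The stddev% columns are optional (see e.g. the memmove row above).
LINE_RE = re.compile(
    r"^\s*(?P<base>[\d.]+)\s*(?:(?P<base_sd>\d+)%)?\s+"   # baseline + optional stddev%
    r"(?P<delta>[+-][\d.]+)\s+"                           # signed absolute delta
    r"(?P<new>[\d.]+)\s*(?:(?P<new_sd>\d+)%)?\s+"         # bad-commit value + optional stddev%
    r"(?P<metric>\S+)\s*$"                                # metric name
)

def parse_profile_line(line):
    """Split one comparison row into numeric fields plus the metric name.

    Returns None for lines that are not comparison rows (blank lines,
    chart art, headers)."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return {
        "base": float(m.group("base")),
        "delta": float(m.group("delta")),
        "new": float(m.group("new")),
        "metric": m.group("metric"),
    }
```

Filtering the parsed rows by `abs(delta)` is then a quick way to surface the largest movers, such as the `_raw_spin_lock_irqsave` row.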

***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


--BOKacYhQ+x31HxR3
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="config-5.10.0-rc4-00005-g97e8f0134a2b"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 5.10.0-rc4 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-9 (Debian 9.3.0-15) 9.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=90300
CONFIG_LD_VERSION=235000000
CONFIG_CLANG_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_ZSTD=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_KERNEL_ZSTD is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_WATCH_QUEUE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_USELIB is not set
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_GENERIC_IRQ_INJECTION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y
CONFIG_CONTEXT_TRACKING=y
# CONFIG_CONTEXT_TRACKING_FORCE is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
# end of Timers subsystem

# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y

#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_SCHED_AVG_IRQ=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_SRCU=y
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RCU=y
CONFIG_TASKS_RUDE_RCU=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
CONFIG_RCU_NOCB_CPU=y
# end of RCU Subsystem

CONFIG_BUILD_BIN2C=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y

#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_RD_ZSTD=y
# CONFIG_BOOT_CONFIG is not set
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BPF=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_PRINTK_NMI=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_HAVE_ARCH_USERFAULTFD_WP=y
CONFIG_MEMBARRIER=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
# CONFIG_BPF_LSM is not set
CONFIG_BPF_SYSCALL=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
# CONFIG_BPF_PRELOAD is not set
CONFIG_USERFAULTFD=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_RSEQ=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
CONFIG_SLAB_MERGE_DEFAULT=y
CONFIG_SLAB_FREELIST_RANDOM=y
# CONFIG_SLAB_FREELIST_HARDENED is not set
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
CONFIG_SLUB_CPU_PARTIAL=y
CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# end of General setup

CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_FILTER_PGPROT=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_DYNAMIC_PHYSICAL_MASK=y
CONFIG_PGTABLE_LEVELS=5
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_FEATURE_NAMES=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
CONFIG_RETPOLINE=y
CONFIG_X86_CPU_RESCTRL=y
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_NUMACHIP is not set
# CONFIG_X86_VSMP is not set
CONFIG_X86_UV=y
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
CONFIG_X86_AMD_PLATFORM_DEVICE=y
CONFIG_IOSF_MBI=y
# CONFIG_IOSF_MBI_DEBUG is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
CONFIG_XEN=y
# CONFIG_XEN_PV is not set
CONFIG_XEN_PVHVM=y
CONFIG_XEN_PVHVM_SMP=y
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_XEN_PVH is not set
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_PVH is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_MAXSMP=y
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCELOG_LEGACY=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m
CONFIG_X86_THERMAL_VECTOR=y

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=m
CONFIG_PERF_EVENTS_INTEL_RAPL=m
CONFIG_PERF_EVENTS_INTEL_CSTATE=m
CONFIG_PERF_EVENTS_AMD_POWER=m
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_I8K=m
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_X86_5LEVEL=y
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
CONFIG_AMD_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=10
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=m
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
# CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_X86_UMIP=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
# CONFIG_KEXEC_SIG is not set
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_X86_NEED_RELOCS=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
CONFIG_HOTPLUG_CPU=y
CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_LEGACY_VSYSCALL_XONLY is not set
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_ARCH_HAS_ADD_PAGES=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_PM_TRACE_RTC is not set
CONFIG_PM_CLK=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
CONFIG_ACPI_EC_DEBUGFS=m
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_TAD=m
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_SBS=m
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
CONFIG_ACPI_BGRT=y
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=m
CONFIG_ACPI_APEI_ERST_DEBUG=y
# CONFIG_ACPI_DPTF is not set
CONFIG_ACPI_WATCHDOG=y
CONFIG_ACPI_EXTLOG=m
CONFIG_ACPI_ADXL=y
# CONFIG_ACPI_CONFIGFS is not set
CONFIG_PMIC_OPREGION=y
CONFIG_X86_PM_TIMER=y
CONFIG_SFI=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
CONFIG_X86_AMD_FREQ_SENSITIVITY=m
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_MMCONF_FAM10H=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# CONFIG_X86_SYSFB is not set
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
# end of Binary Emulations

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
# CONFIG_ISCSI_IBFT is not set
CONFIG_FW_CFG_SYSFS=y
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=y
CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
# CONFIG_EFI_RCI2_TABLE is not set
# CONFIG_EFI_DISABLE_PCI_DMA is not set
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
CONFIG_EFI_EARLYCON=y
CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
# CONFIG_KVM_AMD is not set
CONFIG_KVM_MMU_AUDIT=y
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y

#
# General architecture-dependent options
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HOTPLUG_SMT=y
CONFIG_GENERIC_ENTRY=y
CONFIG_OPROFILE=m
CONFIG_OPROFILE_EVENT_MULTIPLEX=y
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_HAVE_STATIC_CALL=y
CONFIG_HAVE_STATIC_CALL_INLINE=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
CONFIG_MODULE_SIG_SHA256=y
# CONFIG_MODULE_SIG_SHA384 is not set
# CONFIG_MODULE_SIG_SHA512 is not set
CONFIG_MODULE_SIG_HASH="sha256"
# CONFIG_MODULE_COMPRESS is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
# CONFIG_UNUSED_SYMBOLS is not set
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLK_SCSI_REQUEST=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
# CONFIG_BLK_CMDLINE_PARSER is not set
CONFIG_BLK_WBT=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
CONFIG_BLK_WBT_MQ=y
CONFIG_BLK_DEBUG_FS=y
CONFIG_BLK_DEBUG_FS_ZONED=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
# end of Partition Types

CONFIG_BLOCK_COMPAT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_MQ_RDMA=y
CONFIG_BLK_PM=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=y
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_BFQ_CGROUP_DEBUG is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_THP_SWAP=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
CONFIG_CMA_AREAS=19
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
# CONFIG_ZSWAP_DEFAULT_ON is not set
CONFIG_ZPOOL=y
CONFIG_ZBUD=y
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_PGTABLE_MAPPING is not set
CONFIG_ZSMALLOC_STAT=y
CONFIG_GENERIC_EARLY_IOREMAP=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ZONE_DEVICE=y
CONFIG_DEV_PAGEMAP_OPS=y
CONFIG_HMM_MIRROR=y
CONFIG_DEVICE_PRIVATE=y
CONFIG_VMAP_PFN=y
CONFIG_FRAME_VECTOR=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_BENCHMARK is not set
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_MAPPING_DIRTY_HELPERS=y
# end of Memory Management options

CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
CONFIG_TLS_DEVICE=y
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_OFFLOAD=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_USER_COMPAT is not set
# CONFIG_XFRM_INTERFACE is not set
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
# CONFIG_SMC is not set
CONFIG_XDP_SOCKETS=y
# CONFIG_XDP_SOCKETS_DIAG is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IP_TUNNEL=m
CONFIG_NET_IPGRE=m
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_NET_UDP_TUNNEL=m
# CONFIG_NET_FOU is not set
# CONFIG_NET_FOU_IP_TUNNELS is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_ESP_OFFLOAD=m
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
CONFIG_INET_RAW_DIAG=m
# CONFIG_INET_DIAG_DESTROY is not set
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_NV=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
# CONFIG_TCP_CONG_CDG is not set
CONFIG_TCP_CONG_BBR=m
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_ESP_OFFLOAD=m
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
# CONFIG_IPV6_ILA is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
CONFIG_IPV6_VTI=m
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_GRE=m
CONFIG_IPV6_MULTIPLE_TABLES=y
# CONFIG_IPV6_SUBTREES is not set
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
# CONFIG_IPV6_SEG6_LWTUNNEL is not set
# CONFIG_IPV6_SEG6_HMAC is not set
# CONFIG_IPV6_RPL_LWTUNNEL is not set
CONFIG_NETLABEL=y
# CONFIG_MPTCP is not set
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
CONFIG_NETWORK_PHY_TIMESTAMPING=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
# CONFIG_NETFILTER_NETLINK_ACCT is not set
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_COMMON=m
CONFIG_NF_LOG_NETDEV=m
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_GLUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_MASQ=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_NAT=m
# CONFIG_NFT_TUNNEL is not set
CONFIG_NFT_OBJREF=m
CONFIG_NFT_QUEUE=m
CONFIG_NFT_QUOTA=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_REJECT_INET=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_HASH=m
CONFIG_NFT_FIB=m
CONFIG_NFT_FIB_INET=m
# CONFIG_NFT_XFRM is not set
CONFIG_NFT_SOCKET=m
# CONFIG_NFT_OSF is not set
# CONFIG_NFT_TPROXY is not set
# CONFIG_NFT_SYNPROXY is not set
CONFIG_NF_DUP_NETDEV=m
CONFIG_NFT_DUP_NETDEV=m
CONFIG_NFT_FWD_NETDEV=m
CONFIG_NFT_FIB_NETDEV=m
# CONFIG_NF_FLOW_TABLE is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
# CONFIG_NETFILTER_XT_TARGET_LED is not set
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
# CONFIG_NETFILTER_XT_MATCH_TIME is not set
# CONFIG_NETFILTER_XT_MATCH_U32 is not set
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_FO=m
CONFIG_IP_VS_OVF=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
# CONFIG_IP_VS_MH is not set
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_DUP_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
# CONFIG_IP_NF_TARGET_CLUSTERIP is not set
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
# CONFIG_IP6_NF_MATCH_SRH is not set
# CONFIG_IP6_NF_TARGET_HL is not set
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_TABLES_BRIDGE=m
# CONFIG_NFT_BRIDGE_META is not set
CONFIG_NFT_BRIDGE_REJECT=m
CONFIG_NF_LOG_BRIDGE=m
# CONFIG_NF_CONNTRACK_BRIDGE is not set
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
CONFIG_TIPC=m
# CONFIG_TIPC_MEDIA_IB is not set
CONFIG_TIPC_MEDIA_UDP=y
CONFIG_TIPC_CRYPTO=y
CONFIG_TIPC_DIAG=m
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
# CONFIG_ATM_MPOA is not set
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
# CONFIG_BRIDGE_MRP is not set
CONFIG_HAVE_NET_DSA=y
# CONFIG_NET_DSA is not set
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
# CONFIG_6LOWPAN_NHC is not set
CONFIG_IEEE802154=m
# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
CONFIG_IEEE802154_SOCKET=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
# CONFIG_NET_SCH_SKBPRIO is not set
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=y
# CONFIG_NET_SCH_CAKE is not set
CONFIG_NET_SCH_FQ=m
CONFIG_NET_SCH_HHF=m
CONFIG_NET_SCH_PIE=m
# CONFIG_NET_SCH_FQ_PIE is not set
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
# CONFIG_NET_SCH_ETS is not set
CONFIG_NET_SCH_DEFAULT=y
# CONFIG_DEFAULT_FQ is not set
# CONFIG_DEFAULT_CODEL is not set
CONFIG_DEFAULT_FQ_CODEL=y
# CONFIG_DEFAULT_SFQ is not set
# CONFIG_DEFAULT_PFIFO_FAST is not set
CONFIG_DEFAULT_NET_SCH="fq_codel"

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
# CONFIG_NET_EMATCH_CANID is not set
CONFIG_NET_EMATCH_IPSET=m
# CONFIG_NET_EMATCH_IPT is not set
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
# CONFIG_NET_ACT_IPT is not set
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
# CONFIG_NET_ACT_MPLS is not set
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
# CONFIG_NET_ACT_CONNMARK is not set
# CONFIG_NET_ACT_CTINFO is not set
CONFIG_NET_ACT_SKBMOD=m
# CONFIG_NET_ACT_IFE is not set
CONFIG_NET_ACT_TUNNEL_KEY=m
# CONFIG_NET_ACT_GATE is not set
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_HYPERV_VSOCKETS=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=y
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=y
# CONFIG_HSR is not set
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
# CONFIG_CAN_J1939 is not set
# CONFIG_CAN_ISOTP is not set

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_VXCAN is not set
CONFIG_CAN_SLCAN=m
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
# CONFIG_CAN_CC770_ISA is not set
CONFIG_CAN_CC770_PLATFORM=m
# CONFIG_CAN_IFI_CANFD is not set
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
# CONFIG_CAN_F81601 is not set
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PLX_PCI=m
# CONFIG_CAN_SJA1000_ISA is not set
CONFIG_CAN_SJA1000_PLATFORM=m
CONFIG_CAN_SOFTING=m

#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# CONFIG_CAN_MCP251XFD is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
# CONFIG_CAN_8DEV_USB is not set
# CONFIG_CAN_EMS_USB is not set
# CONFIG_CAN_ESD_USB2 is not set
# CONFIG_CAN_GS_USB is not set
# CONFIG_CAN_KVASER_USB is not set
# CONFIG_CAN_MCBA_USB is not set
# CONFIG_CAN_PEAK_USB is not set
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set
# end of CAN Device Drivers

CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_HIDP=m
CONFIG_BT_HS=y
CONFIG_BT_LE=y
# CONFIG_BT_6LOWPAN is not set
# CONFIG_BT_LEDS is not set
# CONFIG_BT_MSFTEXT is not set
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set

#
# Bluetooth device drivers
#
# CONFIG_BT_HCIBTUSB is not set
# CONFIG_BT_HCIBTSDIO is not set
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
# CONFIG_BT_HCIUART_INTEL is not set
# CONFIG_BT_HCIUART_AG6XX is not set
# CONFIG_BT_HCIBCM203X is not set
# CONFIG_BT_HCIBPA10X is not set
# CONFIG_BT_HCIBFUSB is not set
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
# CONFIG_BT_MRVL_SDIO is not set
# CONFIG_BT_MTKSDIO is not set
# end of Bluetooth device drivers

# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
# CONFIG_WIMAX is not set
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_XEN is not set
# CONFIG_NET_9P_RDMA is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
# CONFIG_NFC is not set
CONFIG_PSAMPLE=m
# CONFIG_NET_IFE is not set
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SOCK_MSG=y
CONFIG_NET_DEVLINK=y
CONFIG_PAGE_POOL=y
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y
CONFIG_HAVE_EBPF_JIT=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=m
CONFIG_PCIE_ECRC=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_PCIE_DPC=y
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_BW is not set
# CONFIG_PCIE_EDR is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=y
CONFIG_PCI_PF_STUB=m
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
CONFIG_PCI_HYPERV=m
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
# CONFIG_HOTPLUG_PCI_CPCI is not set
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=y
CONFIG_PCI_HYPERV_INTERFACE=m

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

# CONFIG_PCCARD is not set
# CONFIG_RAPIDIO is not set

#
# Generic Driver Options
#
# CONFIG_UEVENT_HELPER is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# end of Firmware loader

CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_PM_QOS_KUNIT_TEST is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_KUNIT_DRIVER_PE_TEST=y
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_REGMAP_SPI=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_GNSS is not set
# CONFIG_MTD is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
# CONFIG_PARPORT_AX88796 is not set
CONFIG_PARPORT_1284=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION=y
# CONFIG_BLK_DEV_FD is not set
CONFIG_CDROM=m
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_ZRAM is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
CONFIG_BLK_DEV_NBD=m
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_RSXX is not set

#
# NVME Support
#
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
# CONFIG_NVME_RDMA is not set
CONFIG_NVME_FC=m
# CONFIG_NVME_TCP is not set
CONFIG_NVME_TARGET=m
# CONFIG_NVME_TARGET_PASSTHRU is not set
CONFIG_NVME_TARGET_LOOP=m
# CONFIG_NVME_TARGET_RDMA is not set
CONFIG_NVME_TARGET_FC=m
CONFIG_NVME_TARGET_FCLOOP=m
# CONFIG_NVME_TARGET_TCP is not set
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
# CONFIG_ICS932S401 is not set
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
CONFIG_VMWARE_BALLOON=m
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_MISC_RTSX=m
CONFIG_PVPANIC=y
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_AT25 is not set
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_INTEL_MEI_VIRTIO is not set
# CONFIG_INTEL_MEI_HDCP is not set
CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_MISC_ALCOR_PCI is not set
CONFIG_MISC_RTSX_PCI=m
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
# end of Misc devices

CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_XEN_SCSI_FRONTEND is not set
CONFIG_HYPERV_STORAGE=m
# CONFIG_LIBFC is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_GDTH is not set
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_PPA is not set
# CONFIG_SCSI_IMM is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_BFA_FC is not set
# CONFIG_SCSI_VIRTIO is not set
# CONFIG_SCSI_CHELSIO_FCOE is not set
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
# end of SCSI device support

CONFIG_ATA=m
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_DWC is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_MD_CLUSTER=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
CONFIG_DM_WRITECACHE=m
# CONFIG_DM_EBS is not set
CONFIG_DM_ERA=m
# CONFIG_DM_CLONE is not set
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
# CONFIG_DM_MULTIPATH_HST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
CONFIG_DM_INTEGRITY=m
# CONFIG_DM_ZONED is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
# CONFIG_FUSION is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_MACSEC is not set
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
# CONFIG_TUN_VNET_CROSS_LE is not set
CONFIG_VETH=m
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
# CONFIG_NET_VRF is not set
# CONFIG_VSOCKMON is not set
# CONFIG_ARCNET is not set
CONFIG_ATM_DRIVERS=y
# CONFIG_ATM_DUMMY is not set
# CONFIG_ATM_TCP is not set
# CONFIG_ATM_LANAI is not set
# CONFIG_ATM_ENI is not set
# CONFIG_ATM_FIRESTREAM is not set
# CONFIG_ATM_ZATM is not set
# CONFIG_ATM_NICSTAR is not set
# CONFIG_ATM_IDT77252 is not set
# CONFIG_ATM_AMBASSADOR is not set
# CONFIG_ATM_HORIZON is not set
# CONFIG_ATM_IA is not set
# CONFIG_ATM_FORE200E is not set
# CONFIG_ATM_HE is not set
# CONFIG_ATM_SOLOS is not set

#
# Distributed Switch Architecture drivers
#
# end of Distributed Switch Architecture drivers

CONFIG_ETHERNET=y
CONFIG_MDIO=y
CONFIG_NET_VENDOR_3COM=y
# CONFIG_VORTEX is not set
# CONFIG_TYPHOON is not set
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
# CONFIG_ENA_ETHERNET is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_AMD_XGBE is not set
CONFIG_NET_VENDOR_AQUANTIA=y
# CONFIG_AQTION is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
# CONFIG_ALX is not set
# CONFIG_NET_VENDOR_AURORA is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
# CONFIG_BCMGENET is not set
# CONFIG_BNX2 is not set
# CONFIG_CNIC is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2X is not set
# CONFIG_SYSTEMPORT is not set
# CONFIG_BNXT is not set
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
CONFIG_NET_VENDOR_CADENCE=y
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
CONFIG_CAVIUM_PTP=y
# CONFIG_LIQUIDIO is not set
# CONFIG_LIQUIDIO_VF is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
CONFIG_NET_VENDOR_CORTINA=y
# CONFIG_CX_ECAT is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
# CONFIG_NET_TULIP is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
# CONFIG_IXGBE_DCB is not set
CONFIG_IXGBE_IPSEC=y
# CONFIG_IXGBEVF is not set
CONFIG_I40E=y
# CONFIG_I40E_DCB is not set
# CONFIG_I40EVF is not set
# CONFIG_ICE is not set
# CONFIG_FM10K is not set
# CONFIG_IGC is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
# CONFIG_SKGE is not set
# CONFIG_SKY2 is not set
# CONFIG_PRESTERA is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MICROCHIP=y
# CONFIG_ENC28J60 is not set
# CONFIG_ENCX24J600 is not set
# CONFIG_LAN743X is not set
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_NETRONOME=y
# CONFIG_NFP is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_NE2K_PCI is not set
CONFIG_NET_VENDOR_NVIDIA=y
# CONFIG_FORCEDETH is not set
CONFIG_NET_VENDOR_OKI=y
# CONFIG_ETHOC is not set
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_NETXEN_NIC is not set
# CONFIG_QED is not set
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_ATP is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
CONFIG_R8169=y
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
# CONFIG_ROCKER is not set
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SOLARFLARE=y
# CONFIG_SFC is not set
# CONFIG_SFC_FALCON is not set
CONFIG_NET_VENDOR_SILAN=y
# CONFIG_SC92031 is not set
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_SOCIONEXT=y
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
# CONFIG_TLAN is not set
CONFIG_NET_VENDOR_VIA=y
# CONFIG_VIA_RHINE is not set
# CONFIG_VIA_VELOCITY is not set
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y
# CONFIG_LED_TRIGGER_PHY is not set
# CONFIG_FIXED_PHY is not set

#
# MII PHY device drivers
#
# CONFIG_AMD_PHY is not set
# CONFIG_ADIN_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
# CONFIG_AX88796B_PHY is not set
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM84881_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_CORTINA_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_INTEL_XWAY_PHY is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_MARVELL_10G_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_NXP_TJA11XX_PHY is not set
# CONFIG_QSEMI_PHY is not set
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
# CONFIG_SMSC_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_TERANETICS_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_MDIO_DEVRES=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_THUNDER is not set

#
# MDIO Multiplexers
#

#
# PCS device drivers
#
# CONFIG_PCS_XPCS is not set
# end of PCS device drivers

# CONFIG_PLIP is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
CONFIG_USB_NET_DRIVERS=y
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
CONFIG_USB_RTL8152=m
# CONFIG_USB_LAN78XX is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
CONFIG_WLAN=y
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
# CONFIG_ATH9K is not set
# CONFIG_ATH9K_HTC is not set
# CONFIG_CARL9170 is not set
# CONFIG_ATH6KL is not set
# CONFIG_AR5523 is not set
# CONFIG_WIL6210 is not set
# CONFIG_ATH10K is not set
# CONFIG_WCN36XX is not set
# CONFIG_ATH11K is not set
CONFIG_WLAN_VENDOR_ATMEL=y
# CONFIG_ATMEL is not set
# CONFIG_AT76C50X_USB is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
# CONFIG_B43LEGACY is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
CONFIG_WLAN_VENDOR_CISCO=y
# CONFIG_AIRO is not set
CONFIG_WLAN_VENDOR_INTEL=y
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
# CONFIG_IWL4965 is not set
# CONFIG_IWL3945 is not set
# CONFIG_IWLWIFI is not set
CONFIG_WLAN_VENDOR_INTERSIL=y
# CONFIG_HOSTAP is not set
# CONFIG_HERMES is not set
# CONFIG_P54_COMMON is not set
# CONFIG_PRISM54 is not set
CONFIG_WLAN_VENDOR_MARVELL=y
# CONFIG_LIBERTAS is not set
# CONFIG_LIBERTAS_THINFIRM is not set
# CONFIG_MWIFIEX is not set
# CONFIG_MWL8K is not set
CONFIG_WLAN_VENDOR_MEDIATEK=y
# CONFIG_MT7601U is not set
# CONFIG_MT76x0U is not set
# CONFIG_MT76x0E is not set
# CONFIG_MT76x2E is not set
# CONFIG_MT76x2U is not set
# CONFIG_MT7603E is not set
# CONFIG_MT7615E is not set
# CONFIG_MT7663U is not set
# CONFIG_MT7663S is not set
# CONFIG_MT7915E is not set
CONFIG_WLAN_VENDOR_MICROCHIP=y
# CONFIG_WILC1000_SDIO is not set
# CONFIG_WILC1000_SPI is not set
CONFIG_WLAN_VENDOR_RALINK=y
# CONFIG_RT2X00 is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
CONFIG_RTL_CARDS=m
# CONFIG_RTL8192CE is not set
# CONFIG_RTL8192SE is not set
# CONFIG_RTL8192DE is not set
# CONFIG_RTL8723AE is not set
# CONFIG_RTL8723BE is not set
# CONFIG_RTL8188EE is not set
# CONFIG_RTL8192EE is not set
# CONFIG_RTL8821AE is not set
# CONFIG_RTL8192CU is not set
# CONFIG_RTL8XXXU is not set
# CONFIG_RTW88 is not set
CONFIG_WLAN_VENDOR_RSI=y
# CONFIG_RSI_91X is not set
CONFIG_WLAN_VENDOR_ST=y
# CONFIG_CW1200 is not set
CONFIG_WLAN_VENDOR_TI=y
# CONFIG_WL1251 is not set
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
CONFIG_WLAN_VENDOR_ZYDAS=y
# CONFIG_USB_ZD1201 is not set
# CONFIG_ZD1211RW is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
# CONFIG_QTNFMAC_PCIE is not set
CONFIG_MAC80211_HWSIM=m
# CONFIG_USB_NET_RNDIS_WLAN is not set
# CONFIG_VIRT_WIFI is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_IEEE802154_DRIVERS=m
# CONFIG_IEEE802154_FAKELB is not set
# CONFIG_IEEE802154_AT86RF230 is not set
# CONFIG_IEEE802154_MRF24J40 is not set
# CONFIG_IEEE802154_CC2520 is not set
# CONFIG_IEEE802154_ATUSB is not set
# CONFIG_IEEE802154_ADF7242 is not set
# CONFIG_IEEE802154_CA8210 is not set
# CONFIG_IEEE802154_MCR20A is not set
# CONFIG_IEEE802154_HWSIM is not set
CONFIG_XEN_NETDEV_FRONTEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_FUJITSU_ES is not set
# CONFIG_HYPERV_NET is not set
CONFIG_NETDEVSIM=m
CONFIG_NET_FAILOVER=m
# CONFIG_ISDN is not set
# CONFIG_NVM is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_POLLDEV=m
CONFIG_INPUT_SPARSEKMAP=m
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
CONFIG_MOUSE_CYAPA=m
CONFIG_MOUSE_ELAN_I2C=m
CONFIG_MOUSE_ELAN_I2C_I2C=y
CONFIG_MOUSE_ELAN_I2C_SMBUS=y
CONFIG_MOUSE_VSXXXAA=m
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
CONFIG_RMI4_CORE=m
CONFIG_RMI4_I2C=m
CONFIG_RMI4_SPI=m
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
CONFIG_RMI4_F34=y
# CONFIG_RMI4_F3A is not set
# CONFIG_RMI4_F54 is not set
CONFIG_RMI4_F55=y

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_SERIO_ARC_PS2=m
CONFIG_HYPERV_KEYBOARD=m
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=64
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
CONFIG_SERIAL_8250_DW=y
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_IFX6X60 is not set
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
CONFIG_CYCLADES=m
# CONFIG_CYZ_INTR is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_SYNCLINK=m
CONFIG_SYNCLINKMP=m
CONFIG_SYNCLINK_GT=m
# CONFIG_ISI is not set
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set
# CONFIG_TRACE_SINK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_SERIAL_DEV_BUS is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=y
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
CONFIG_NVRAM=y
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=8192
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# CONFIG_HPET_MMAP_DEFAULT is not set
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
CONFIG_TELCLOCK=m
# CONFIG_XILLYBUS is not set
# end of Character devices

# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
# CONFIG_I2C_MUX_REG is not set
CONFIG_I2C_MUX_MLXCPLD=m
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=y
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=y
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=m
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=m
CONFIG_I2C_DESIGNWARE_BAYTRAIL=y
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
CONFIG_I2C_PARPORT=m
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
CONFIG_I2C_MLXCPLD=m
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
# CONFIG_DP83640_PHY is not set
# CONFIG_PTP_1588_CLOCK_INES is not set
CONFIG_PTP_1588_CLOCK_KVM=m
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=m
# CONFIG_PINCTRL_MCP23S08 is not set
# CONFIG_PINCTRL_SX150X is not set
CONFIG_PINCTRL_BAYTRAIL=y
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
CONFIG_PINCTRL_INTEL=y
CONFIG_PINCTRL_BROXTON=m
CONFIG_PINCTRL_CANNONLAKE=m
CONFIG_PINCTRL_CEDARFORK=m
CONFIG_PINCTRL_DENVERTON=m
# CONFIG_PINCTRL_EMMITSBURG is not set
CONFIG_PINCTRL_GEMINILAKE=m
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
CONFIG_PINCTRL_LEWISBURG=m
CONFIG_PINCTRL_SUNRISEPOINT=m
# CONFIG_PINCTRL_TIGERLAKE is not set

#
# Renesas pinctrl drivers
#
# end of Renesas pinctrl drivers

CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_CDEV=y
CONFIG_GPIO_CDEV_V1=y
CONFIG_GPIO_GENERIC=m

#
# Memory mapped GPIO drivers
#
CONFIG_GPIO_AMDPT=m
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_ICH=m
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_XILINX is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADP5588 is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
# end of USB GPIO expanders

# CONFIG_GPIO_AGGREGATOR is not set
# CONFIG_GPIO_MOCKUP is not set
# CONFIG_W1 is not set
CONFIG_POWER_RESET=y
# CONFIG_POWER_RESET_RESTART is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_BQ25980 is not set
CONFIG_CHARGER_SMB347=m
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
# CONFIG_SENSORS_AD7314 is not set
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1021=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
# CONFIG_SENSORS_ADM1177 is not set
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
# CONFIG_SENSORS_ADT7310 is not set
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
# CONFIG_SENSORS_AS370 is not set
CONFIG_SENSORS_ASC7621=m
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
# CONFIG_SENSORS_AMD_ENERGY is not set
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
# CONFIG_SENSORS_ASPEED is not set
CONFIG_SENSORS_ATXP1=m
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_DRIVETEMP is not set
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_DELL_SMM=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHMD=m
# CONFIG_SENSORS_FTSTEUTATES is not set
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
CONFIG_SENSORS_I5500=m
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_POWR1220 is not set
CONFIG_SENSORS_LINEAGE=m
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
# CONFIG_SENSORS_LTC4222 is not set
CONFIG_SENSORS_LTC4245=m
# CONFIG_SENSORS_LTC4260 is not set
CONFIG_SENSORS_LTC4261=m
# CONFIG_SENSORS_MAX1111 is not set
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX6621 is not set
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6642=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
# CONFIG_SENSORS_MAX31790 is not set
CONFIG_SENSORS_MCP3021=m
# CONFIG_SENSORS_MLXREG_FAN is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_ADCXX is not set
CONFIG_SENSORS_LM63=m
# CONFIG_SENSORS_LM70 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_NTC_THERMISTOR=m
# CONFIG_SENSORS_NCT6683 is not set
CONFIG_SENSORS_NCT6775=m
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
CONFIG_SENSORS_PCF8591=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
# CONFIG_SENSORS_ADM1266 is not set
CONFIG_SENSORS_ADM1275=m
# CONFIG_SENSORS_BEL_PFE is not set
# CONFIG_SENSORS_IBM_CFFPS is not set
# CONFIG_SENSORS_INSPUR_IPSPS is not set
# CONFIG_SENSORS_IR35221 is not set
# CONFIG_SENSORS_IR38064 is not set
# CONFIG_SENSORS_IRPS5401 is not set
# CONFIG_SENSORS_ISL68137 is not set
CONFIG_SENSORS_LM25066=m
CONFIG_SENSORS_LTC2978=m
# CONFIG_SENSORS_LTC3815 is not set
CONFIG_SENSORS_MAX16064=m
# CONFIG_SENSORS_MAX16601 is not set
# CONFIG_SENSORS_MAX20730 is not set
# CONFIG_SENSORS_MAX20751 is not set
# CONFIG_SENSORS_MAX31785 is not set
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
# CONFIG_SENSORS_MP2975 is not set
# CONFIG_SENSORS_PXE1610 is not set
# CONFIG_SENSORS_TPS40422 is not set
# CONFIG_SENSORS_TPS53679 is not set
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
# CONFIG_SENSORS_XDPE122 is not set
CONFIG_SENSORS_ZL6100=m
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHTC1 is not set
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_ADS7871 is not set
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
# CONFIG_SENSORS_TMP513 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
# CONFIG_SENSORS_W83773G is not set
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_PKG_TEMP_THERMAL=m
CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
CONFIG_INT340X_THERMAL=m
CONFIG_ACPI_THERMAL_REL=m
# CONFIG_INT3406_THERMAL is not set
CONFIG_PROC_THERMAL_MMIO_RAPL=y
# end of ACPI INT340X thermal drivers

CONFIG_INTEL_PCH_THERMAL=m
# end of Intel thermal drivers

CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_WDAT_WDT=m
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_MLX_WDT is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
# CONFIG_EBC_C384_WDT is not set
CONFIG_F71808E_WDT=m
CONFIG_SP5100_TCO=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
CONFIG_NV_TCO=m
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
CONFIG_SMSC_SCH311X_WDT=m
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_INTEL_MEI_WDT=m
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
# CONFIG_BCMA_HOST_SOC is not set
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_HTC_I2CPLD is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=y
CONFIG_LPC_SCH=m
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
CONFIG_MFD_INTEL_LPSS_PCI=y
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_EZX_PCAP is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SL28CPLD is not set
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
# CONFIG_MFD_SKY81452 is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
CONFIG_MFD_VX855=m
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_INTEL_M10_BMC is not set
# end of Multifunction device drivers

# CONFIG_REGULATOR is not set
CONFIG_RC_CORE=m
CONFIG_RC_MAP=m
CONFIG_LIRC=y
CONFIG_RC_DECODERS=y
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_SONY_DECODER=m
CONFIG_IR_SANYO_DECODER=m
# CONFIG_IR_SHARP_DECODER is not set
CONFIG_IR_MCE_KBD_DECODER=m
# CONFIG_IR_XMP_DECODER is not set
CONFIG_IR_IMON_DECODER=m
# CONFIG_IR_RCMM_DECODER is not set
CONFIG_RC_DEVICES=y
# CONFIG_RC_ATI_REMOTE is not set
CONFIG_IR_ENE=m
# CONFIG_IR_IMON is not set
# CONFIG_IR_IMON_RAW is not set
# CONFIG_IR_MCEUSB is not set
CONFIG_IR_ITE_CIR=m
CONFIG_IR_FINTEK=m
CONFIG_IR_NUVOTON=m
# CONFIG_IR_REDRAT3 is not set
# CONFIG_IR_STREAMZAP is not set
CONFIG_IR_WINBOND_CIR=m
# CONFIG_IR_IGORPLUGUSB is not set
# CONFIG_IR_IGUANA is not set
# CONFIG_IR_TTUSBIR is not set
# CONFIG_RC_LOOPBACK is not set
CONFIG_IR_SERIAL=m
CONFIG_IR_SERIAL_TRANSMITTER=y
CONFIG_IR_SIR=m
# CONFIG_RC_XBOX_DVD is not set
# CONFIG_IR_TOY is not set
CONFIG_MEDIA_CEC_SUPPORT=y
# CONFIG_CEC_CH7322 is not set
# CONFIG_CEC_SECO is not set
# CONFIG_USB_PULSE8_CEC is not set
# CONFIG_USB_RAINSHADOW_CEC is not set
CONFIG_MEDIA_SUPPORT=m
# CONFIG_MEDIA_SUPPORT_FILTER is not set
# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set

#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types

#
# Media core support
#
CONFIG_VIDEO_DEV=m
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=m
# end of Media core support

#
# Video4Linux options
#
CONFIG_VIDEO_V4L2=m
CONFIG_VIDEO_V4L2_I2C=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
# end of Video4Linux options

#
# Media controller options
#
# CONFIG_MEDIA_CONTROLLER_DVB is not set
# end of Media controller options

#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
CONFIG_DVB_MAX_ADAPTERS=16
CONFIG_DVB_DYNAMIC_MINORS=y
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options

#
# Media drivers
#
# CONFIG_MEDIA_USB_SUPPORT is not set
# CONFIG_MEDIA_PCI_SUPPORT is not set
CONFIG_RADIO_ADAPTERS=y
# CONFIG_RADIO_SI470X is not set
# CONFIG_RADIO_SI4713 is not set
# CONFIG_USB_MR800 is not set
# CONFIG_USB_DSBR is not set
# CONFIG_RADIO_MAXIRADIO is not set
# CONFIG_RADIO_SHARK is not set
# CONFIG_RADIO_SHARK2 is not set
# CONFIG_USB_KEENE is not set
# CONFIG_USB_RAREMONO is not set
# CONFIG_USB_MA901 is not set
# CONFIG_RADIO_TEA5764 is not set
# CONFIG_RADIO_SAA7706H is not set
# CONFIG_RADIO_TEF6862 is not set
# CONFIG_RADIO_WL1273 is not set
CONFIG_VIDEOBUF2_CORE=m
CONFIG_VIDEOBUF2_V4L2=m
CONFIG_VIDEOBUF2_MEMOPS=m
CONFIG_VIDEOBUF2_VMALLOC=m
# CONFIG_V4L_PLATFORM_DRIVERS is not set
# CONFIG_V4L_MEM2MEM_DRIVERS is not set
# CONFIG_DVB_PLATFORM_DRIVERS is not set
# CONFIG_SDR_PLATFORM_DRIVERS is not set

#
# MMC/SDIO DVB adapters
#
# CONFIG_SMS_SDIO_DRV is not set
# CONFIG_V4L_TEST_DRIVERS is not set
# CONFIG_DVB_TEST_DRIVERS is not set

#
# FireWire (IEEE 1394) Adapters
#
# CONFIG_DVB_FIREDTV is not set
# end of Media drivers

#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y
CONFIG_VIDEO_IR_I2C=m

#
# Audio decoders, processors and mixers
#
# CONFIG_VIDEO_TVAUDIO is not set
# CONFIG_VIDEO_TDA7432 is not set
# CONFIG_VIDEO_TDA9840 is not set
# CONFIG_VIDEO_TEA6415C is not set
# CONFIG_VIDEO_TEA6420 is not set
# CONFIG_VIDEO_MSP3400 is not set
# CONFIG_VIDEO_CS3308 is not set
# CONFIG_VIDEO_CS5345 is not set
# CONFIG_VIDEO_CS53L32A is not set
# CONFIG_VIDEO_TLV320AIC23B is not set
# CONFIG_VIDEO_UDA1342 is not set
# CONFIG_VIDEO_WM8775 is not set
# CONFIG_VIDEO_WM8739 is not set
# CONFIG_VIDEO_VP27SMPX is not set
# CONFIG_VIDEO_SONY_BTF_MPX is not set
# end of Audio decoders, processors and mixers

#
# RDS decoders
#
# CONFIG_VIDEO_SAA6588 is not set
# end of RDS decoders

#
# Video decoders
#
# CONFIG_VIDEO_ADV7180 is not set
# CONFIG_VIDEO_ADV7183 is not set
# CONFIG_VIDEO_ADV7604 is not set
# CONFIG_VIDEO_ADV7842 is not set
# CONFIG_VIDEO_BT819 is not set
# CONFIG_VIDEO_BT856 is not set
# CONFIG_VIDEO_BT866 is not set
# CONFIG_VIDEO_KS0127 is not set
# CONFIG_VIDEO_ML86V7667 is not set
# CONFIG_VIDEO_SAA7110 is not set
# CONFIG_VIDEO_SAA711X is not set
# CONFIG_VIDEO_TC358743 is not set
# CONFIG_VIDEO_TVP514X is not set
# CONFIG_VIDEO_TVP5150 is not set
# CONFIG_VIDEO_TVP7002 is not set
# CONFIG_VIDEO_TW2804 is not set
# CONFIG_VIDEO_TW9903 is not set
# CONFIG_VIDEO_TW9906 is not set
# CONFIG_VIDEO_TW9910 is not set
# CONFIG_VIDEO_VPX3220 is not set

#
# Video and audio decoders
#
# CONFIG_VIDEO_SAA717X is not set
# CONFIG_VIDEO_CX25840 is not set
# end of Video decoders

#
# Video encoders
#
# CONFIG_VIDEO_SAA7127 is not set
# CONFIG_VIDEO_SAA7185 is not set
# CONFIG_VIDEO_ADV7170 is not set
# CONFIG_VIDEO_ADV7175 is not set
# CONFIG_VIDEO_ADV7343 is not set
# CONFIG_VIDEO_ADV7393 is not set
# CONFIG_VIDEO_ADV7511 is not set
# CONFIG_VIDEO_AD9389B is not set
# CONFIG_VIDEO_AK881X is not set
# CONFIG_VIDEO_THS8200 is not set
# end of Video encoders

#
# Video improvement chips
#
# CONFIG_VIDEO_UPD64031A is not set
# CONFIG_VIDEO_UPD64083 is not set
# end of Video improvement chips

#
# Audio/Video compression chips
#
# CONFIG_VIDEO_SAA6752HS is not set
# end of Audio/Video compression chips

#
# SDR tuner chips
#
# CONFIG_SDR_MAX2175 is not set
# end of SDR tuner chips

#
# Miscellaneous helper chips
#
# CONFIG_VIDEO_THS7303 is not set
# CONFIG_VIDEO_M52790 is not set
# CONFIG_VIDEO_I2C is not set
# CONFIG_VIDEO_ST_MIPID02 is not set
# end of Miscellaneous helper chips

#
# Camera sensor devices
#
# CONFIG_VIDEO_HI556 is not set
# CONFIG_VIDEO_IMX214 is not set
# CONFIG_VIDEO_IMX219 is not set
# CONFIG_VIDEO_IMX258 is not set
# CONFIG_VIDEO_IMX274 is not set
# CONFIG_VIDEO_IMX290 is not set
# CONFIG_VIDEO_IMX319 is not set
# CONFIG_VIDEO_IMX355 is not set
# CONFIG_VIDEO_OV2640 is not set
# CONFIG_VIDEO_OV2659 is not set
# CONFIG_VIDEO_OV2680 is not set
# CONFIG_VIDEO_OV2685 is not set
# CONFIG_VIDEO_OV2740 is not set
# CONFIG_VIDEO_OV5647 is not set
# CONFIG_VIDEO_OV6650 is not set
# CONFIG_VIDEO_OV5670 is not set
# CONFIG_VIDEO_OV5675 is not set
# CONFIG_VIDEO_OV5695 is not set
# CONFIG_VIDEO_OV7251 is not set
# CONFIG_VIDEO_OV772X is not set
# CONFIG_VIDEO_OV7640 is not set
# CONFIG_VIDEO_OV7670 is not set
# CONFIG_VIDEO_OV7740 is not set
# CONFIG_VIDEO_OV8856 is not set
# CONFIG_VIDEO_OV9640 is not set
# CONFIG_VIDEO_OV9650 is not set
# CONFIG_VIDEO_OV13858 is not set
# CONFIG_VIDEO_VS6624 is not set
# CONFIG_VIDEO_MT9M001 is not set
# CONFIG_VIDEO_MT9M032 is not set
# CONFIG_VIDEO_MT9M111 is not set
# CONFIG_VIDEO_MT9P031 is not set
# CONFIG_VIDEO_MT9T001 is not set
# CONFIG_VIDEO_MT9T112 is not set
# CONFIG_VIDEO_MT9V011 is not set
# CONFIG_VIDEO_MT9V032 is not set
# CONFIG_VIDEO_MT9V111 is not set
# CONFIG_VIDEO_SR030PC30 is not set
# CONFIG_VIDEO_NOON010PC30 is not set
# CONFIG_VIDEO_M5MOLS is not set
# CONFIG_VIDEO_RDACM20 is not set
# CONFIG_VIDEO_RJ54N1 is not set
# CONFIG_VIDEO_S5K6AA is not set
# CONFIG_VIDEO_S5K6A3 is not set
# CONFIG_VIDEO_S5K4ECGX is not set
# CONFIG_VIDEO_S5K5BAF is not set
# CONFIG_VIDEO_SMIAPP is not set
# CONFIG_VIDEO_ET8EK8 is not set
# CONFIG_VIDEO_S5C73M3 is not set
# end of Camera sensor devices

#
# Lens drivers
#
# CONFIG_VIDEO_AD5820 is not set
# CONFIG_VIDEO_AK7375 is not set
# CONFIG_VIDEO_DW9714 is not set
# CONFIG_VIDEO_DW9768 is not set
# CONFIG_VIDEO_DW9807_VCM is not set
# end of Lens drivers

#
# Flash devices
#
# CONFIG_VIDEO_ADP1653 is not set
# CONFIG_VIDEO_LM3560 is not set
# CONFIG_VIDEO_LM3646 is not set
# end of Flash devices

#
# SPI helper chips
#
# CONFIG_VIDEO_GS1662 is not set
# end of SPI helper chips

#
# Media SPI Adapters
#
CONFIG_CXD2880_SPI_DRV=m
# end of Media SPI Adapters

CONFIG_MEDIA_TUNER=m

#
# Customize TV tuners
#
CONFIG_MEDIA_TUNER_SIMPLE=m
CONFIG_MEDIA_TUNER_TDA18250=m
CONFIG_MEDIA_TUNER_TDA8290=m
CONFIG_MEDIA_TUNER_TDA827X=m
CONFIG_MEDIA_TUNER_TDA18271=m
CONFIG_MEDIA_TUNER_TDA9887=m
CONFIG_MEDIA_TUNER_TEA5761=m
CONFIG_MEDIA_TUNER_TEA5767=m
CONFIG_MEDIA_TUNER_MSI001=m
CONFIG_MEDIA_TUNER_MT20XX=m
CONFIG_MEDIA_TUNER_MT2060=m
CONFIG_MEDIA_TUNER_MT2063=m
CONFIG_MEDIA_TUNER_MT2266=m
CONFIG_MEDIA_TUNER_MT2131=m
CONFIG_MEDIA_TUNER_QT1010=m
CONFIG_MEDIA_TUNER_XC2028=m
CONFIG_MEDIA_TUNER_XC5000=m
CONFIG_MEDIA_TUNER_XC4000=m
CONFIG_MEDIA_TUNER_MXL5005S=m
CONFIG_MEDIA_TUNER_MXL5007T=m
CONFIG_MEDIA_TUNER_MC44S803=m
CONFIG_MEDIA_TUNER_MAX2165=m
CONFIG_MEDIA_TUNER_TDA18218=m
CONFIG_MEDIA_TUNER_FC0011=m
CONFIG_MEDIA_TUNER_FC0012=m
CONFIG_MEDIA_TUNER_FC0013=m
CONFIG_MEDIA_TUNER_TDA18212=m
CONFIG_MEDIA_TUNER_E4000=m
CONFIG_MEDIA_TUNER_FC2580=m
CONFIG_MEDIA_TUNER_M88RS6000T=m
CONFIG_MEDIA_TUNER_TUA9001=m
CONFIG_MEDIA_TUNER_SI2157=m
CONFIG_MEDIA_TUNER_IT913X=m
CONFIG_MEDIA_TUNER_R820T=m
CONFIG_MEDIA_TUNER_MXL301RF=m
CONFIG_MEDIA_TUNER_QM1D1C0042=m
CONFIG_MEDIA_TUNER_QM1D1B0004=m
# end of Customize TV tuners

#
# Customise DVB Frontends
#

#
# Multistandard (satellite) frontends
#
CONFIG_DVB_STB0899=m
CONFIG_DVB_STB6100=m
CONFIG_DVB_STV090x=m
CONFIG_DVB_STV0910=m
CONFIG_DVB_STV6110x=m
CONFIG_DVB_STV6111=m
CONFIG_DVB_MXL5XX=m
CONFIG_DVB_M88DS3103=m

#
# Multistandard (cable + terrestrial) frontends
#
CONFIG_DVB_DRXK=m
CONFIG_DVB_TDA18271C2DD=m
CONFIG_DVB_SI2165=m
CONFIG_DVB_MN88472=m
CONFIG_DVB_MN88473=m

#
# DVB-S (satellite) frontends
#
CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_MT312=m
CONFIG_DVB_ZL10036=m
CONFIG_DVB_ZL10039=m
CONFIG_DVB_S5H1420=m
CONFIG_DVB_STV0288=m
CONFIG_DVB_STB6000=m
CONFIG_DVB_STV0299=m
CONFIG_DVB_STV6110=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA10086=m
CONFIG_DVB_TDA8261=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TDA826X=m
CONFIG_DVB_TUA6100=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
CONFIG_DVB_SI21XX=m
CONFIG_DVB_TS2020=m
CONFIG_DVB_DS3000=m
CONFIG_DVB_MB86A16=m
CONFIG_DVB_TDA10071=m

#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_SP8870=m
CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
CONFIG_DVB_S5H1432=m
CONFIG_DVB_DRXD=m
CONFIG_DVB_L64781=m
CONFIG_DVB_TDA1004X=m
CONFIG_DVB_NXT6000=m
CONFIG_DVB_MT352=m
CONFIG_DVB_ZL10353=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_DIB3000MC=m
CONFIG_DVB_DIB7000M=m
CONFIG_DVB_DIB7000P=m
CONFIG_DVB_DIB9000=m
CONFIG_DVB_TDA10048=m
CONFIG_DVB_AF9013=m
CONFIG_DVB_EC100=m
CONFIG_DVB_STV0367=m
CONFIG_DVB_CXD2820R=m
CONFIG_DVB_CXD2841ER=m
CONFIG_DVB_RTL2830=m
CONFIG_DVB_RTL2832=m
CONFIG_DVB_RTL2832_SDR=m
CONFIG_DVB_SI2168=m
CONFIG_DVB_ZD1301_DEMOD=m
CONFIG_DVB_CXD2880=m

#
# DVB-C (cable) frontends
#
CONFIG_DVB_VES1820=m
CONFIG_DVB_TDA10021=m
CONFIG_DVB_TDA10023=m
CONFIG_DVB_STV0297=m

#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
CONFIG_DVB_LGDT3305=m
CONFIG_DVB_LGDT3306A=m
CONFIG_DVB_LG2160=m
CONFIG_DVB_S5H1409=m
CONFIG_DVB_AU8522=m
CONFIG_DVB_AU8522_DTV=m
CONFIG_DVB_AU8522_V4L=m
CONFIG_DVB_S5H1411=m

#
# ISDB-T (terrestrial) frontends
#
CONFIG_DVB_S921=m
CONFIG_DVB_DIB8000=m
CONFIG_DVB_MB86A20S=m

#
# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
#
CONFIG_DVB_TC90522=m
CONFIG_DVB_MN88443X=m

#
# Digital terrestrial only tuners/PLL
#
CONFIG_DVB_PLL=m
CONFIG_DVB_TUNER_DIB0070=m
CONFIG_DVB_TUNER_DIB0090=m

#
# SEC control devices for DVB-S
#
CONFIG_DVB_DRX39XYJ=m
CONFIG_DVB_LNBH25=m
CONFIG_DVB_LNBH29=m
CONFIG_DVB_LNBP21=m
CONFIG_DVB_LNBP22=m
CONFIG_DVB_ISL6405=m
CONFIG_DVB_ISL6421=m
CONFIG_DVB_ISL6423=m
CONFIG_DVB_A8293=m
CONFIG_DVB_LGS8GL5=m
CONFIG_DVB_LGS8GXX=m
CONFIG_DVB_ATBM8830=m
CONFIG_DVB_TDA665x=m
CONFIG_DVB_IX2505V=m
CONFIG_DVB_M88RS2000=m
CONFIG_DVB_AF9033=m
CONFIG_DVB_HORUS3A=m
CONFIG_DVB_ASCOT2E=m
CONFIG_DVB_HELENE=m

#
# Common Interface (EN50221) controller drivers
#
CONFIG_DVB_CXD2099=m
CONFIG_DVB_SP2=m
# end of Customise DVB Frontends

#
# Tools to develop new frontends
#
# CONFIG_DVB_DUMMY_FE is not set
# end of Media ancillary drivers

#
# Graphics support
#
# CONFIG_AGP is not set
CONFIG_INTEL_GTT=m
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=64
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=m
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_DP_AUX_CHARDEV=y
# CONFIG_DRM_DEBUG_SELFTEST is not set
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_TTM_DMA_PAGE_POOL=y
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=y

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
CONFIG_DRM_I915_GVT=y
CONFIG_DRM_I915_GVT_KVMGT=m
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# CONFIG_DRM_VGEM is not set
# CONFIG_DRM_VKMS is not set
CONFIG_DRM_VMWGFX=m
CONFIG_DRM_VMWGFX_FBCON=y
CONFIG_DRM_GMA500=m
CONFIG_DRM_GMA600=y
CONFIG_DRM_GMA3600=y
# CONFIG_DRM_UDL is not set
CONFIG_DRM_AST=m
CONFIG_DRM_MGAG200=m
CONFIG_DRM_QXL=m
CONFIG_DRM_BOCHS=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges

# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_CIRRUS_QEMU=m
# CONFIG_DRM_GM12U320 is not set
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_XEN is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y

#
# Frame buffer Devices
#
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_MODE_HELPERS is not set
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SM501 is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_XEN_FBDEV_FRONTEND is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
CONFIG_FB_HYPERV=m
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SM712 is not set
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
# CONFIG_LCD_L4F00242T03 is not set
# CONFIG_LCD_LMS283GF05 is not set
# CONFIG_LCD_LTV350QV is not set
# CONFIG_LCD_ILI922X is not set
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_TDO24M is not set
# CONFIG_LCD_VGG2432A4 is not set
CONFIG_LCD_PLATFORM=m
# CONFIG_LCD_AMS369FG06 is not set
# CONFIG_LCD_LMS501KF03 is not set
# CONFIG_LCD_HX8357 is not set
# CONFIG_LCD_OTM3225A is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_KTD253 is not set
# CONFIG_BACKLIGHT_PWM is not set
CONFIG_BACKLIGHT_APPLE=m
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
CONFIG_BACKLIGHT_LP855X=m
# CONFIG_BACKLIGHT_GPIO is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
# end of Backlight & LCD device support

CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support

CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_LOGO_LINUX_CLUT224=y
# end of Graphics support

# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACCUTOUCH is not set
CONFIG_HID_ACRUX=m
# CONFIG_HID_ACRUX_FF is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
CONFIG_HID_ASUS=m
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=m
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
CONFIG_HID_CMEDIA=m
# CONFIG_HID_CP2112 is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
CONFIG_HID_CYPRESS=m
CONFIG_HID_DRAGONRISE=m
# CONFIG_DRAGONRISE_FF is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
CONFIG_HID_ELECOM=m
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
CONFIG_HID_GEMBIRD=m
CONFIG_HID_GFRM=m
# CONFIG_HID_GLORIOUS is not set
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_VIVALDI is not set
# CONFIG_HID_GT683R is not set
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
# CONFIG_HID_UCLOGIC is not set
CONFIG_HID_WALTOP=m
# CONFIG_HID_VIEWSONIC is not set
CONFIG_HID_GYRATION=m
CONFIG_HID_ICADE=m
CONFIG_HID_ITE=m
CONFIG_HID_JABRA=m
CONFIG_HID_TWINHAN=m
CONFIG_HID_KENSINGTON=m
CONFIG_HID_LCPOWER=m
CONFIG_HID_LED=m
CONFIG_HID_LENOVO=m
CONFIG_HID_LOGITECH=m
CONFIG_HID_LOGITECH_DJ=m
CONFIG_HID_LOGITECH_HIDPP=m
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
# CONFIG_HID_REDRAGON is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
CONFIG_HID_MULTITOUCH=m
CONFIG_HID_NTI=m
# CONFIG_HID_NTRIG is not set
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
# CONFIG_PANTHERLORD_FF is not set
# CONFIG_HID_PENMOUNT is not set
CONFIG_HID_PETALYNX=m
CONFIG_HID_PICOLCD=m
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=m
CONFIG_HID_PRIMAX=m
# CONFIG_HID_RETRODE is not set
# CONFIG_HID_ROCCAT is not set
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
# CONFIG_HID_SONY is not set
CONFIG_HID_SPEEDLINK=m
# CONFIG_HID_STEAM is not set
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_RMI=m
CONFIG_HID_GREENASIA=m
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_HYPERV_MOUSE=m
CONFIG_HID_SMARTJOYPLUS=m
# CONFIG_SMARTJOYPLUS_FF is not set
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
# CONFIG_THRUSTMASTER_FF is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
# CONFIG_HID_WACOM is not set
CONFIG_HID_WIIMOTE=m
CONFIG_HID_XINMO=m
CONFIG_HID_ZEROPLUS=m
# CONFIG_ZEROPLUS_FF is not set
CONFIG_HID_ZYDACRON=m
CONFIG_HID_SENSOR_HUB=y
CONFIG_HID_SENSOR_CUSTOM_SENSOR=m
CONFIG_HID_ALPS=m
# CONFIG_HID_MCP2221 is not set
# end of Special HID drivers

#
# USB HID support
#
CONFIG_USB_HID=y
# CONFIG_HID_PID is not set
# CONFIG_USB_HIDDEV is not set
# end of USB HID support

#
# I2C HID support
#
CONFIG_I2C_HID=m
# end of I2C HID support

#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=m
# CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set
# end of Intel ISH HID support
# end of HID support

CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
CONFIG_USB_LEDS_TRIGGER_USBPORT=y
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_MON=y

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
CONFIG_USB_XHCI_HCD=y
# CONFIG_USB_XHCI_DBGCAP is not set
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
# CONFIG_USB_XHCI_PLATFORM is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
# CONFIG_USB_MAX3421_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_BCMA is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
# CONFIG_USB_UAS is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_USB_CDNS3 is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set

#
# USB port drivers
#
# CONFIG_USB_USS720 is not set
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
# CONFIG_USB_SERIAL_SIMPLE is not set
# CONFIG_USB_SERIAL_AIRCABLE is not set
# CONFIG_USB_SERIAL_ARK3116 is not set
# CONFIG_USB_SERIAL_BELKIN is not set
# CONFIG_USB_SERIAL_CH341 is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_CP210X is not set
# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
# CONFIG_USB_SERIAL_EMPEG is not set
# CONFIG_USB_SERIAL_FTDI_SIO is not set
# CONFIG_USB_SERIAL_VISOR is not set
# CONFIG_USB_SERIAL_IPAQ is not set
# CONFIG_USB_SERIAL_IR is not set
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
# CONFIG_USB_SERIAL_GARMIN is not set
# CONFIG_USB_SERIAL_IPW is not set
# CONFIG_USB_SERIAL_IUU is not set
# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
# CONFIG_USB_SERIAL_KEYSPAN is not set
# CONFIG_USB_SERIAL_KLSI is not set
# CONFIG_USB_SERIAL_KOBIL_SCT is not set
# CONFIG_USB_SERIAL_MCT_U232 is not set
# CONFIG_USB_SERIAL_METRO is not set
# CONFIG_USB_SERIAL_MOS7720 is not set
# CONFIG_USB_SERIAL_MOS7840 is not set
# CONFIG_USB_SERIAL_MXUPORT is not set
# CONFIG_USB_SERIAL_NAVMAN is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_OTI6858 is not set
# CONFIG_USB_SERIAL_QCAUX is not set
# CONFIG_USB_SERIAL_QUALCOMM is not set
# CONFIG_USB_SERIAL_SPCP8X5 is not set
# CONFIG_USB_SERIAL_SAFE is not set
# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
# CONFIG_USB_SERIAL_SYMBOL is not set
# CONFIG_USB_SERIAL_TI is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_XIRCOM is not set
# CONFIG_USB_SERIAL_OPTION is not set
# CONFIG_USB_SERIAL_OMNINET is not set
# CONFIG_USB_SERIAL_OPTICON is not set
# CONFIG_USB_SERIAL_XSENS_MT is not set
# CONFIG_USB_SERIAL_WISHBONE is not set
# CONFIG_USB_SERIAL_SSU100 is not set
# CONFIG_USB_SERIAL_QT2 is not set
# CONFIG_USB_SERIAL_UPD78F0730 is not set
CONFIG_USB_SERIAL_DEBUG=m

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HUB_USB251XB is not set
# CONFIG_USB_HSIC_USB3503 is not set
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
# CONFIG_USB_ATM is not set

#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers

# CONFIG_USB_GADGET is not set
CONFIG_TYPEC=y
# CONFIG_TYPEC_TCPM is not set
CONFIG_TYPEC_UCSI=y
# CONFIG_UCSI_CCG is not set
CONFIG_UCSI_ACPI=y
# CONFIG_TYPEC_TPS6598X is not set
# CONFIG_TYPEC_STUSB160X is not set

#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
# CONFIG_TYPEC_MUX_PI3USB30532 is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support

#
# USB Type-C Alternate Mode drivers
#
# CONFIG_TYPEC_DP_ALTMODE is not set
# end of USB Type-C Alternate Mode drivers

# CONFIG_USB_ROLE_SWITCH is not set
CONFIG_MMC=m
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_IO_ACCESSORS=y
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_SDHCI_ACPI=m
CONFIG_MMC_SDHCI_PLTFM=m
# CONFIG_MMC_SDHCI_F_SDH30 is not set
# CONFIG_MMC_WBSD is not set
# CONFIG_MMC_TIFM_SD is not set
# CONFIG_MMC_SPI is not set
# CONFIG_MMC_CB710 is not set
# CONFIG_MMC_VIA_SDMMC is not set
# CONFIG_MMC_VUB300 is not set
# CONFIG_MMC_USHC is not set
# CONFIG_MMC_USDHI6ROL0 is not set
# CONFIG_MMC_REALTEK_PCI is not set
CONFIG_MMC_CQHCI=m
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_MMC_SDHCI_XENON is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set

#
# LED drivers
#
# CONFIG_LEDS_APU is not set
CONFIG_LEDS_LM3530=m
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
CONFIG_LEDS_LP3944=m
# CONFIG_LEDS_LP3952 is not set
# CONFIG_LEDS_LP50XX is not set
CONFIG_LEDS_CLEVO_MAIL=m
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_PWM is not set
# CONFIG_LEDS_BD2802 is not set
CONFIG_LEDS_INTEL_SS4200=m
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set

#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
CONFIG_LEDS_BLINKM=m
CONFIG_LEDS_MLXCPLD=m
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_TI_LMU_COMMON is not set

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_ONESHOT=m
# CONFIG_LEDS_TRIGGER_DISK is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_GPIO=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
CONFIG_LEDS_TRIGGER_CAMERA=m
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
CONFIG_LEDS_TRIGGER_AUDIO=m
# CONFIG_ACCESSIBILITY is not set
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_MAD=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_INFINIBAND_USER_MEM=y
CONFIG_INFINIBAND_ON_DEMAND_PAGING=y
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y
# CONFIG_INFINIBAND_MTHCA is not set
# CONFIG_INFINIBAND_EFA is not set
# CONFIG_INFINIBAND_I40IW is not set
# CONFIG_MLX4_INFINIBAND is not set
# CONFIG_INFINIBAND_OCRDMA is not set
# CONFIG_INFINIBAND_USNIC is not set
# CONFIG_INFINIBAND_BNXT_RE is not set
# CONFIG_INFINIBAND_RDMAVT is not set
CONFIG_RDMA_RXE=m
CONFIG_RDMA_SIW=m
CONFIG_INFINIBAND_IPOIB=m
# CONFIG_INFINIBAND_IPOIB_CM is not set
CONFIG_INFINIBAND_IPOIB_DEBUG=y
# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
CONFIG_INFINIBAND_SRP=m
CONFIG_INFINIBAND_SRPT=m
# CONFIG_INFINIBAND_ISER is not set
# CONFIG_INFINIBAND_ISERT is not set
# CONFIG_INFINIBAND_RTRS_CLIENT is not set
# CONFIG_INFINIBAND_RTRS_SERVER is not set
# CONFIG_INFINIBAND_OPA_VNIC is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=m
CONFIG_EDAC_GHES=y
CONFIG_EDAC_AMD64=m
# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
CONFIG_EDAC_E752X=m
CONFIG_EDAC_I82975X=m
CONFIG_EDAC_I3000=m
CONFIG_EDAC_I3200=m
CONFIG_EDAC_IE31200=m
CONFIG_EDAC_X38=m
CONFIG_EDAC_I5400=m
CONFIG_EDAC_I7CORE=m
CONFIG_EDAC_I5000=m
CONFIG_EDAC_I5100=m
CONFIG_EDAC_I7300=m
CONFIG_EDAC_SBRIDGE=m
CONFIG_EDAC_SKX=m
# CONFIG_EDAC_I10NM is not set
CONFIG_EDAC_PND2=m
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_SYSTOHC is not set
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_NVMEM=y

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
CONFIG_RTC_DRV_DS1307=m
# CONFIG_RTC_DRV_DS1307_CENTURY is not set
CONFIG_RTC_DRV_DS1374=m
# CONFIG_RTC_DRV_DS1374_WDT is not set
CONFIG_RTC_DRV_DS1672=m
CONFIG_RTC_DRV_MAX6900=m
CONFIG_RTC_DRV_RS5C372=m
CONFIG_RTC_DRV_ISL1208=m
CONFIG_RTC_DRV_ISL12022=m
CONFIG_RTC_DRV_X1205=m
CONFIG_RTC_DRV_PCF8523=m
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
CONFIG_RTC_DRV_PCF8563=m
CONFIG_RTC_DRV_PCF8583=m
CONFIG_RTC_DRV_M41T80=m
CONFIG_RTC_DRV_M41T80_WDT=y
CONFIG_RTC_DRV_BQ32K=m
# CONFIG_RTC_DRV_S35390A is not set
CONFIG_RTC_DRV_FM3130=m
# CONFIG_RTC_DRV_RX8010 is not set
CONFIG_RTC_DRV_RX8581=m
CONFIG_RTC_DRV_RX8025=m
CONFIG_RTC_DRV_EM3027=m
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set

#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
CONFIG_RTC_DRV_RX4581=m
# CONFIG_RTC_DRV_RX6110 is not set
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
CONFIG_RTC_DRV_DS3232=m
CONFIG_RTC_DRV_DS3232_HWMON=y
# CONFIG_RTC_DRV_PCF2127 is not set
CONFIG_RTC_DRV_RV3029C2=m
# CONFIG_RTC_DRV_RV3029_HWMON is not set

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=m
CONFIG_RTC_DRV_DS1511=m
CONFIG_RTC_DRV_DS1553=m
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_DS2404=m
CONFIG_RTC_DRV_STK17TA8=m
# CONFIG_RTC_DRV_M48T86 is not set
CONFIG_RTC_DRV_M48T35=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_MSM6242=m
CONFIG_RTC_DRV_BQ4802=m
CONFIG_RTC_DRV_RP5C01=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set

#
# HID Sensor RTC drivers
#
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
CONFIG_INTEL_IDMA64=m
# CONFIG_INTEL_IDXD is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_XILINX_ZYNQMP_DPDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
CONFIG_DW_DMAC=m
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
CONFIG_DMATEST=m
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# end of DMABUF options

CONFIG_DCA=m
# CONFIG_AUXDISPLAY is not set
# CONFIG_PANEL is not set
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_UIO_PDRV_GENIRQ=m
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_PRUSS is not set
# CONFIG_UIO_MF624 is not set
CONFIG_UIO_HV_GENERIC=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_VIRQFD=m
CONFIG_VFIO=m
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI=m
# CONFIG_VFIO_PCI_VGA is not set
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
CONFIG_IRQ_BYPASS_MANAGER=m
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_MEM=m
CONFIG_VIRTIO_INPUT=m
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=m
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
CONFIG_HYPERV=m
CONFIG_HYPERV_TIMER=y
CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m
# end of Microsoft Hyper-V guest support

#
# Xen driver support
#
# CONFIG_XEN_BALLOON is not set
CONFIG_XEN_DEV_EVTCHN=m
# CONFIG_XEN_BACKEND is not set
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
# CONFIG_XEN_GNTDEV is not set
# CONFIG_XEN_GRANT_DEV_ALLOC is not set
# CONFIG_XEN_GRANT_DMA_ALLOC is not set
CONFIG_SWIOTLB_XEN=y
# CONFIG_XEN_PVCALLS_FRONTEND is not set
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_EFI=y
CONFIG_XEN_AUTO_XLATE=y
CONFIG_XEN_ACPI=y
# CONFIG_XEN_UNPOPULATED_ALLOC is not set
# end of Xen driver support

# CONFIG_GREYBUS is not set
# CONFIG_STAGING is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
CONFIG_WMI_BMOF=m
# CONFIG_ALIENWARE_WMI is not set
# CONFIG_HUAWEI_WMI is not set
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
CONFIG_INTEL_WMI_THUNDERBOLT=m
CONFIG_MXM_WMI=m
# CONFIG_PEAQ_WMI is not set
# CONFIG_XIAOMI_WMI is not set
CONFIG_ACERHDF=m
# CONFIG_ACER_WIRELESS is not set
CONFIG_ACER_WMI=m
CONFIG_APPLE_GMUX=m
CONFIG_ASUS_LAPTOP=m
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=m
CONFIG_ASUS_NB_WMI=m
CONFIG_EEEPC_LAPTOP=m
CONFIG_EEEPC_WMI=m
CONFIG_DCDBAS=m
CONFIG_DELL_SMBIOS=m
CONFIG_DELL_SMBIOS_WMI=y
# CONFIG_DELL_SMBIOS_SMM is not set
CONFIG_DELL_LAPTOP=m
CONFIG_DELL_RBTN=m
CONFIG_DELL_RBU=m
CONFIG_DELL_SMO8800=m
CONFIG_DELL_WMI=m
CONFIG_DELL_WMI_DESCRIPTOR=m
CONFIG_DELL_WMI_AIO=m
CONFIG_DELL_WMI_LED=m
CONFIG_AMILO_RFKILL=m
CONFIG_FUJITSU_LAPTOP=m
CONFIG_FUJITSU_TABLET=m
# CONFIG_GPD_POCKET_FAN is not set
CONFIG_HP_ACCEL=m
CONFIG_HP_WIRELESS=m
CONFIG_HP_WMI=m
# CONFIG_IBM_RTL is not set
CONFIG_IDEAPAD_LAPTOP=m
CONFIG_SENSORS_HDAPS=m
CONFIG_THINKPAD_ACPI=m
# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
# CONFIG_THINKPAD_ACPI_DEBUG is not set
# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
# CONFIG_INTEL_ATOMISP2_PM is not set
CONFIG_INTEL_HID_EVENT=m
# CONFIG_INTEL_INT0002_VGPIO is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_INTEL_OAKTRAIL=m
CONFIG_INTEL_VBTN=m
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
CONFIG_MSI_LAPTOP=m
CONFIG_MSI_WMI=m
# CONFIG_PCENGINES_APU2 is not set
CONFIG_SAMSUNG_LAPTOP=m
CONFIG_SAMSUNG_Q10=m
CONFIG_TOSHIBA_BT_RFKILL=m
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
CONFIG_ACPI_CMPC=m
CONFIG_COMPAL_LAPTOP=m
# CONFIG_LG_LAPTOP is not set
CONFIG_PANASONIC_LAPTOP=m
CONFIG_SONY_LAPTOP=m
CONFIG_SONYPI_COMPAT=y
# CONFIG_SYSTEM76_ACPI is not set
CONFIG_TOPSTAR_LAPTOP=m
# CONFIG_I2C_MULTI_INSTANTIATE is not set
CONFIG_MLX_PLATFORM=m
CONFIG_INTEL_IPS=m
CONFIG_INTEL_RST=m
# CONFIG_INTEL_SMARTCONNECT is not set

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

CONFIG_INTEL_TURBO_MAX_3=y
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
CONFIG_INTEL_PMC_CORE=m
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
CONFIG_PMC_ATOM=y
# CONFIG_CHROME_PLATFORMS is not set
CONFIG_MELLANOX_PLATFORM=y
CONFIG_MLXREG_HOTPLUG=m
# CONFIG_MLXREG_IO is not set
CONFIG_HAVE_CLK=y
CONFIG_CLKDEV_LOOKUP=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_PWM is not set
CONFIG_HWSPINLOCK=y

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_SVM is not set
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
CONFIG_IRQ_REMAP=y
CONFIG_HYPERV_IOMMU=y

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Aspeed SoC drivers
#
# end of Aspeed SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

# CONFIG_SOC_TI is not set

#
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
# CONFIG_NTB_AMD is not set
# CONFIG_NTB_IDT is not set
# CONFIG_NTB_INTEL is not set
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
# CONFIG_NTB_PERF is not set
# CONFIG_NTB_TRANSPORT is not set
# CONFIG_VME_BUS is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
CONFIG_PWM_LPSS=m
CONFIG_PWM_LPSS_PCI=m
CONFIG_PWM_LPSS_PLATFORM=m
# CONFIG_PWM_PCA9685 is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_USB_LGM_PHY is not set
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_INTEL_LGM_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
CONFIG_INTEL_RAPL_CORE=m
CONFIG_INTEL_RAPL=m
# CONFIG_IDLE_INJECT is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
# CONFIG_ANDROID is not set
# end of Android

CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
CONFIG_DEV_DAX_PMEM_COMPAT=m
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y

#
# HW tracing support
#
CONFIG_STM=m
# CONFIG_STM_PROTO_BASIC is not set
# CONFIG_STM_PROTO_SYS_T is not set
CONFIG_STM_DUMMY=m
CONFIG_STM_SOURCE_CONSOLE=m
CONFIG_STM_SOURCE_HEARTBEAT=m
CONFIG_STM_SOURCE_FTRACE=m
CONFIG_INTEL_TH=m
CONFIG_INTEL_TH_PCI=m
CONFIG_INTEL_TH_ACPI=m
CONFIG_INTEL_TH_GTH=m
CONFIG_INTEL_TH_STH=m
CONFIG_INTEL_TH_MSU=m
CONFIG_INTEL_TH_PTI=m
# CONFIG_INTEL_TH_DEBUG is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_EXT4_KUNIT_TESTS=m
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
CONFIG_XFS_SUPPORT_V4=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_XFS_ONLINE_SCRUB=y
CONFIG_XFS_ONLINE_REPAIR=y
CONFIG_XFS_DEBUG=y
CONFIG_XFS_ASSERT_FATAL=y
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
# CONFIG_OCFS2_DEBUG_FS is not set
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
CONFIG_F2FS_FS=m
CONFIG_F2FS_STAT_FS=y
CONFIG_F2FS_FS_XATTR=y
CONFIG_F2FS_FS_POSIX_ACL=y
CONFIG_F2FS_FS_SECURITY=y
# CONFIG_F2FS_CHECK_FS is not set
# CONFIG_F2FS_IO_TRACE is not set
# CONFIG_F2FS_FAULT_INJECTION is not set
# CONFIG_F2FS_FS_COMPRESSION is not set
# CONFIG_ZONEFS_FS is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_MANDATORY_FILE_LOCKING=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_ALGS=y
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS4_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
CONFIG_CUSE=m
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=m
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
# CONFIG_FSCACHE_OBJECT_LIST is not set
CONFIG_CACHEFILES=m
# CONFIG_CACHEFILES_DEBUG is not set
# CONFIG_CACHEFILES_HISTOGRAM is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_VMCORE_DEVICE_DUMP=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_PROC_CPU_RESCTRL=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=m
CONFIG_CRAMFS_BLOCKDEV=y
CONFIG_SQUASHFS=m
# CONFIG_SQUASHFS_FILE_CACHE is not set
CONFIG_SQUASHFS_FILE_DIRECT=y
# CONFIG_SQUASHFS_DECOMP_SINGLE is not set
# CONFIG_SQUASHFS_DECOMP_MULTI is not set
CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU=y
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZ4 is not set
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_ZSTD is not set
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
# CONFIG_VXFS_FS is not set
CONFIG_MINIX_FS=m
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_PMSG is not set
# CONFIG_PSTORE_FTRACE is not set
CONFIG_PSTORE_RAM=m
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
# CONFIG_NFS_V2 is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
# CONFIG_NFSD_BLOCKLAYOUT is not set
CONFIG_NFSD_SCSILAYOUT=y
# CONFIG_NFSD_FLEXFILELAYOUT is not set
# CONFIG_NFSD_V4_2_INTER_SSC is not set
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=m
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
CONFIG_SUNRPC_DEBUG=y
CONFIG_SUNRPC_XPRT_RDMA=m
CONFIG_CEPH_FS=m
# CONFIG_CEPH_FSCACHE is not set
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=m
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_SMB_DIRECT is not set
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_MAC_ROMAN=m
CONFIG_NLS_MAC_CELTIC=m
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
CONFIG_NLS_MAC_CYRILLIC=m
CONFIG_NLS_MAC_GAELIC=m
CONFIG_NLS_MAC_GREEK=m
CONFIG_NLS_MAC_ICELAND=m
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
CONFIG_NLS_MAC_TURKISH=m
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
CONFIG_DLM_DEBUG=y
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITY_WRITABLE_HOOKS=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
# CONFIG_SECURITY_INFINIBAND is not set
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65535
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_HARDENED_USERCOPY_FALLBACK=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_HASH=y
CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
# CONFIG_SECURITY_APPARMOR_DEBUG is not set
# CONFIG_SECURITY_APPARMOR_KUNIT_TEST is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
# CONFIG_IMA_TEMPLATE is not set
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
CONFIG_IMA_DEFAULT_HASH_SHA1=y
# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
# CONFIG_IMA_DEFAULT_HASH_SHA512 is not set
CONFIG_IMA_DEFAULT_HASH="sha1"
# CONFIG_IMA_WRITE_POLICY is not set
# CONFIG_IMA_READ_POLICY is not set
CONFIG_IMA_APPRAISE=y
# CONFIG_IMA_ARCH_POLICY is not set
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
CONFIG_IMA_APPRAISE_BOOTPARAM=y
# CONFIG_IMA_APPRAISE_MODSIG is not set
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
# CONFIG_EVM_LOAD_X509 is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_APPARMOR is not set
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
# end of Memory initialization
# end of Kernel hardening options
# end of Security options

CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=m
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_SIMD=y
CONFIG_CRYPTO_GLUE_HELPER_X86=y

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# CONFIG_CRYPTO_CURVE25519_X86 is not set

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_CHACHA20POLY1305=m
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CFB=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=y
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_ADIANTUM is not set
CONFIG_CRYPTO_ESSIV=m

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_XXHASH=m
CONFIG_CRYPTO_BLAKE2B=m
# CONFIG_CRYPTO_BLAKE2S is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_POLY1305=m
CONFIG_CRYPTO_POLY1305_X86_64=m
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD128=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_SHA3=m
# CONFIG_CRYPTO_SM3 is not set
# CONFIG_CRYPTO_STREEBOG is not set
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
CONFIG_CRYPTO_DES3_EDE_X86_64=m
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CHACHA20_X86_64=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
# CONFIG_CRYPTO_SM4 is not set
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=y
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=y
CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
# CONFIG_CRYPTO_STATS is not set
CONFIG_CRYPTO_HASH_INFO=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=m
# CONFIG_CRYPTO_LIB_BLAKE2S is not set
CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=m
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=m
CONFIG_CRYPTO_LIB_POLY1305_GENERIC=m
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA256=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=m
CONFIG_CRYPTO_DEV_SP_CCP=y
CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
CONFIG_CRYPTO_DEV_SP_PSP=y
# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
CONFIG_CRYPTO_DEV_QAT=m
CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
CONFIG_CRYPTO_DEV_QAT_C3XXX=m
CONFIG_CRYPTO_DEV_QAT_C62X=m
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=m
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
CONFIG_CRYPTO_DEV_NITROX=m
CONFIG_CRYPTO_DEV_NITROX_CNN55XX=m
# CONFIG_CRYPTO_DEV_VIRTIO is not set
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_RAID6_PQ_BENCHMARK=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_CORDIC=m
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC64 is not set
# CONFIG_CRC4 is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMPRESS=m
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC8=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y
CONFIG_DMA_VIRT_OPS=y
CONFIG_SWIOTLB=y
CONFIG_DMA_COHERENT_POOL=y
CONFIG_DMA_CMA=y
# CONFIG_DMA_PERNUMA_CMA is not set

#
# Default contiguous memory area size:
#
CONFIG_CMA_SIZE_MBYTES=200
CONFIG_CMA_SIZE_SEL_MBYTES=y
# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
# CONFIG_CMA_SIZE_SEL_MIN is not set
# CONFIG_CMA_SIZE_SEL_MAX is not set
CONFIG_CMA_ALIGNMENT=8
# CONFIG_DMA_API_DEBUG is not set
CONFIG_SGL_ALLOC=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPUMASK_OFFSTACK=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_DIMLIB=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_COPY_MC=y
CONFIG_ARCH_STACKWALK=y
CONFIG_SBITMAP=y
# CONFIG_STRING_SELFTEST is not set
# end of Library routines

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_DEBUG_INFO_DWARF4=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
CONFIG_STACK_VALIDATION=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
# CONFIG_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
# end of Generic Kernel Debugging Instruments

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Memory Debugging
#
# CONFIG_PAGE_EXTENSION is not set
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_PAGE_OWNER is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
# CONFIG_DEBUG_RODATA_TEST is not set
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_SCHED_STACK_END_CHECK is not set
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VM_PGTABLE is not set
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
# CONFIG_KASAN is not set
# end of Memory Debugging

CONFIG_DEBUG_SHIRQ=y

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
# CONFIG_DETECT_HUNG_TASK is not set
# CONFIG_WQ_WATCHDOG is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_LOCK_TORTURE_TEST=m
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
CONFIG_BUG_ON_DATA_CORRUPTION=y
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
CONFIG_TORTURE_TEST=m
# CONFIG_RCU_SCALE_TEST is not set
CONFIG_RCU_TORTURE_TEST=m
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
CONFIG_LATENCYTOP=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
# CONFIG_IRQSOFF_TRACER is not set
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
# CONFIG_MMIOTRACE is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
CONFIG_HIST_TRIGGERS=y
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
CONFIG_RING_BUFFER_BENCHMARK=m
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
# CONFIG_SYNTH_EVENT_GEN_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_HIST_TRIGGERS_DEBUG is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_SAMPLES is not set
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_EARLY_PRINTK_USB_XDBC=y
# CONFIG_EFI_PGT_DUMP is not set
# CONFIG_DEBUG_TLBFLUSH is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_X86_DECODER_SELFTEST=y
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_FPU is not set
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
CONFIG_KUNIT=y
# CONFIG_KUNIT_DEBUGFS is not set
CONFIG_KUNIT_TEST=m
CONFIG_KUNIT_EXAMPLE_TEST=m
# CONFIG_KUNIT_ALL_TESTS is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
# CONFIG_FAULT_INJECTION_USERCOPY is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
# CONFIG_FAIL_FUTEX is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_FAIL_FUNCTION is not set
# CONFIG_FAIL_MMC_REQUEST is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_STRSCPY is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_BITMAP is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_OVERFLOW is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_HASH is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
# CONFIG_TEST_BITOPS is not set
# CONFIG_TEST_VMALLOC is not set
# CONFIG_TEST_USER_COPY is not set
CONFIG_TEST_BPF=m
# CONFIG_TEST_BLACKHOLE_DEV is not set
# CONFIG_FIND_BIT_BENCHMARK is not set
# CONFIG_TEST_FIRMWARE is not set
# CONFIG_TEST_SYSCTL is not set
# CONFIG_BITFIELD_KUNIT is not set
CONFIG_SYSCTL_KUNIT_TEST=m
CONFIG_LIST_KUNIT_TEST=m
# CONFIG_LINEAR_RANGES_TEST is not set
# CONFIG_BITS_TEST is not set
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_KMOD is not set
# CONFIG_TEST_MEMCAT_P is not set
# CONFIG_TEST_LIVEPATCH is not set
# CONFIG_TEST_STACKINIT is not set
# CONFIG_TEST_MEMINIT is not set
# CONFIG_TEST_HMM is not set
# CONFIG_TEST_FREE_PAGES is not set
# CONFIG_TEST_FPU is not set
# CONFIG_MEMTEST is not set
# CONFIG_HYPERV_TESTING is not set
# end of Kernel Testing and Coverage
# end of Kernel hacking

--BOKacYhQ+x31HxR3
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=job-script

#!/bin/sh

export_top_env()
{
	export suite='fio-basic'
	export testcase='fio-basic'
	export category='benchmark'
	export runtime=200
	export nr_task=48
	export time_based='tb'
	export job_origin='/lkp-src/allot/cyclic:p1:linux-devel:devel-hourly/lkp-csl-2sp6/fio-basic-2pmem-dax-256G.yaml'
	export queue_cmdline_keys='branch
commit
queue_at_least_once'
	export queue='validate'
	export testbox='lkp-csl-2sp6'
	export tbox_group='lkp-csl-2sp6'
	export kconfig='x86_64-rhel-8.3'
	export submit_id='5fbfc6bf221b7d1a3c176106'
	export job_file='/lkp/jobs/scheduled/lkp-csl-2sp6/fio-basic-4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248-debian-10.4-x86_64-20200-20201126-6716-dn14tu-2.yaml'
	export id='b87460ce9d03fd9941799863f8d3e911d535cdb9'
	export queuer_version='/lkp-src'
	export model='Cascade Lake'
	export nr_node=2
	export nr_cpu=96
	export memory='256G'
	export nr_hdd_partitions=1
	export hdd_partitions='/dev/disk/by-id/ata-WDC_WD10EZEX-75ZF5A0_WD-WCC1S1302268-part5'
	export ssd_partitions='/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4204001B800RGN-part1'
	export swap_partitions=
	export rootfs_partition='/dev/disk/by-id/ata-WDC_WD10EZEX-75ZF5A0_WD-WCC1S1302268-part4'
	export brand='Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz'
	export need_kconfig='CONFIG_LIBNVDIMM
CONFIG_BTT
CONFIG_BLK_DEV_PMEM
CONFIG_X86_PMEM_LEGACY
CONFIG_XFS_FS'
	export commit='97e8f0134a2bb794e4885f642724a50979b84f89'
	export ucode='0x5003003'
	export need_kconfig_hw='CONFIG_I40E=y
CONFIG_SATA_AHCI'
	export enqueue_time='2020-11-26 23:16:16 +0800'
	export _id='5fbfc6bf221b7d1a3c176106'
	export _rt='/result/fio-basic/4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248/lkp-csl-2sp6/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89'
	export user='lkp'
	export compiler='gcc-9'
	export head_commit='49b0a0e914926c1116def49daf207c926dde5715'
	export base_commit='418baf2c28f3473039f2f7377760bd8f6897ae18'
	export branch='linux-review/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934'
	export rootfs='debian-10.4-x86_64-20200603.cgz'
	export monitor_sha='6e02c248'
	export result_root='/result/fio-basic/4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248/lkp-csl-2sp6/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/3'
	export scheduler_version='/lkp/lkp/.src-20201126-070356'
	export LKP_SERVER='internal-lkp-server'
	export arch='x86_64'
	export max_uptime=2400
	export initrd='/osimage/debian/debian-10.4-x86_64-20200603.cgz'
	export bootloader_append='root=/dev/ram0
user=lkp
job=/lkp/jobs/scheduled/lkp-csl-2sp6/fio-basic-4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248-debian-10.4-x86_64-20200-20201126-6716-dn14tu-2.yaml
ARCH=x86_64
kconfig=x86_64-rhel-8.3
branch=linux-review/Juergen-Gross/x86-major-paravirt-cleanup/20201120-194934
commit=97e8f0134a2bb794e4885f642724a50979b84f89
BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/vmlinuz-5.10.0-rc4-00005-g97e8f0134a2b
memmap=104G!8G
memmap=104G!132G
max_uptime=2400
RESULT_ROOT=/result/fio-basic/4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248/lkp-csl-2sp6/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/3
LKP_SERVER=internal-lkp-server
nokaslr
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw'
	export modules_initrd='/pkg/linux/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/modules.cgz'
	export bm_initrd='/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20200709.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/fs_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/fio_20200714.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/fio-x86_64-3.15-1_20200907.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/mpstat_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/perf_20201126.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/perf-x86_64-fa02fcd94b0c-1_20201126.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/sar-x86_64-34c92ae-1_20200702.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz'
	export ucode_initrd='/osimage/ucode/intel-ucode-20201117.cgz'
	export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz'
	export site='inn'
	export LKP_CGI_PORT=80
	export LKP_CIFS_PORT=139
	export last_kernel='5.10.0-rc5-intel-next-02601-g683ffd637ac1'
	export repeat_to=4
	export schedule_notify_address=
	export queue_at_least_once=1
	export kernel='/pkg/linux/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/vmlinuz-5.10.0-rc4-00005-g97e8f0134a2b'
	export dequeue_time='2020-11-26 23:29:36 +0800'
	export job_initrd='/lkp/jobs/scheduled/lkp-csl-2sp6/fio-basic-4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248-debian-10.4-x86_64-20200-20201126-6716-dn14tu-2.cgz'

	[ -n "$LKP_SRC" ] ||
	export LKP_SRC=/lkp/${user:-lkp}/src
}

run_job()
{
	echo $$ > $TMP/run-job.pid

	. $LKP_SRC/lib/http.sh
	. $LKP_SRC/lib/job.sh
	. $LKP_SRC/lib/env.sh

	export_top_env

	run_setup bp1_memmap='104G!8G' bp2_memmap='104G!132G' $LKP_SRC/setup/boot_params

	run_setup nr_pmem=2 $LKP_SRC/setup/disk

	run_setup fs='xfs' mount_option='dax' $LKP_SRC/setup/fs

	run_setup rw='randwrite' bs='4k' ioengine='sync' test_size='200G' $LKP_SRC/setup/fio-setup-basic

	run_setup $LKP_SRC/setup/cpufreq_governor 'performance'

	run_monitor $LKP_SRC/monitors/wrapper kmsg
	run_monitor $LKP_SRC/monitors/no-stdout/wrapper boot-time
	run_monitor $LKP_SRC/monitors/wrapper uptime
	run_monitor $LKP_SRC/monitors/wrapper iostat
	run_monitor $LKP_SRC/monitors/wrapper heartbeat
	run_monitor $LKP_SRC/monitors/wrapper vmstat
	run_monitor $LKP_SRC/monitors/wrapper numa-numastat
	run_monitor $LKP_SRC/monitors/wrapper numa-vmstat
	run_monitor $LKP_SRC/monitors/wrapper numa-meminfo
	run_monitor $LKP_SRC/monitors/wrapper proc-vmstat
	run_monitor $LKP_SRC/monitors/wrapper proc-stat
	run_monitor $LKP_SRC/monitors/wrapper meminfo
	run_monitor $LKP_SRC/monitors/wrapper slabinfo
	run_monitor $LKP_SRC/monitors/wrapper interrupts
	run_monitor $LKP_SRC/monitors/wrapper lock_stat
	run_monitor $LKP_SRC/monitors/wrapper latency_stats
	run_monitor $LKP_SRC/monitors/wrapper softirqs
	run_monitor $LKP_SRC/monitors/one-shot/wrapper bdi_dev_mapping
	run_monitor $LKP_SRC/monitors/wrapper diskstats
	run_monitor $LKP_SRC/monitors/wrapper nfsstat
	run_monitor $LKP_SRC/monitors/wrapper cpuidle
	run_monitor $LKP_SRC/monitors/wrapper cpufreq-stats
	run_monitor $LKP_SRC/monitors/wrapper sched_debug
	run_monitor $LKP_SRC/monitors/wrapper perf-stat
	run_monitor $LKP_SRC/monitors/wrapper mpstat
	run_monitor $LKP_SRC/monitors/no-stdout/wrapper perf-profile
	run_monitor $LKP_SRC/monitors/wrapper oom-killer
	run_monitor $LKP_SRC/monitors/plain/watchdog

	run_test $LKP_SRC/tests/wrapper fio
}

extract_stats()
{
	export stats_part_begin=
	export stats_part_end=

	$LKP_SRC/stats/wrapper fio
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper boot-time
	$LKP_SRC/stats/wrapper uptime
	$LKP_SRC/stats/wrapper iostat
	$LKP_SRC/stats/wrapper vmstat
	$LKP_SRC/stats/wrapper numa-numastat
	$LKP_SRC/stats/wrapper numa-vmstat
	$LKP_SRC/stats/wrapper numa-meminfo
	$LKP_SRC/stats/wrapper proc-vmstat
	$LKP_SRC/stats/wrapper meminfo
	$LKP_SRC/stats/wrapper slabinfo
	$LKP_SRC/stats/wrapper interrupts
	$LKP_SRC/stats/wrapper lock_stat
	$LKP_SRC/stats/wrapper latency_stats
	$LKP_SRC/stats/wrapper softirqs
	$LKP_SRC/stats/wrapper diskstats
	$LKP_SRC/stats/wrapper nfsstat
	$LKP_SRC/stats/wrapper cpuidle
	$LKP_SRC/stats/wrapper sched_debug
	$LKP_SRC/stats/wrapper perf-stat
	$LKP_SRC/stats/wrapper mpstat
	$LKP_SRC/stats/wrapper perf-profile

	$LKP_SRC/stats/wrapper time fio.time
	$LKP_SRC/stats/wrapper dmesg
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper last_state
	$LKP_SRC/stats/wrapper stderr
	$LKP_SRC/stats/wrapper time
}

"$@"

--BOKacYhQ+x31HxR3
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="job.yaml"

---

#! jobs/fio-basic-2pmem-dax-256G.yaml
suite: fio-basic
testcase: fio-basic
category: benchmark
boot_params:
  bp1_memmap: 104G!8G
  bp2_memmap: 104G!132G
disk: 2pmem
fs:
  fs: xfs
  mount_option: dax
runtime: 200s
nr_task: 50%
time_based: tb
fio-setup-basic:
  rw: randwrite
  bs: 4k
  ioengine: sync
  test_size: 200G
fio: 
job_origin: "/lkp-src/allot/cyclic:p1:linux-devel:devel-hourly/lkp-csl-2sp6/fio-basic-2pmem-dax-256G.yaml"

#! queue options
queue_cmdline_keys:
- branch
- commit
- queue_at_least_once
queue: bisect
testbox: lkp-csl-2sp6
tbox_group: lkp-csl-2sp6
kconfig: x86_64-rhel-8.3
submit_id: 5fbfbc74221b7d18172f2df8
job_file: "/lkp/jobs/scheduled/lkp-csl-2sp6/fio-basic-4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248-debian-10.4-x86_64-20200-20201126-6167-1uqga7q-0.yaml"
id: f113b58cd6ffcc4b0d4dc76ea61fcc0140ea0f74
queuer_version: "/lkp-src"

#! hosts/lkp-csl-2sp6
model: Cascade Lake
nr_node: 2
nr_cpu: 96
memory: 256G
nr_hdd_partitions: 1
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD10EZEX-75ZF5A0_WD-WCC1S1302268-part5"
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4204001B800RGN-part1"
swap_partitions: 
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD10EZEX-75ZF5A0_WD-WCC1S1302268-part4"
brand: Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz

#! include/category/benchmark
kmsg: 
boot-time: 
uptime: 
iostat: 
heartbeat: 
vmstat: 
numa-numastat: 
numa-vmstat: 
numa-meminfo: 
proc-vmstat: 
proc-stat: 
meminfo: 
slabinfo: 
interrupts: 
lock_stat: 
latency_stats: 
softirqs: 
bdi_dev_mapping: 
diskstats: 
nfsstat: 
cpuidle: 
cpufreq-stats: 
sched_debug: 
perf-stat: 
mpstat: 
perf-profile: 

#! include/category/ALL
cpufreq_governor: performance

#! include/disk/nr_pmem
need_kconfig:
- CONFIG_LIBNVDIMM
- CONFIG_BTT
- CONFIG_BLK_DEV_PMEM
- CONFIG_X86_PMEM_LEGACY
- CONFIG_XFS_FS

#! include/queue/cyclic
commit: 97e8f0134a2bb794e4885f642724a50979b84f89

#! include/testbox/lkp-csl-2sp6
ucode: '0x5003003'
need_kconfig_hw:
- CONFIG_I40E=y
- CONFIG_SATA_AHCI

#! include/fs/OTHERS
enqueue_time: 2020-11-26 22:32:20.538869421 +08:00
_id: 5fbfbc74221b7d18172f2df8
_rt: "/result/fio-basic/4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248/lkp-csl-2sp6/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89"

#! schedule options
user: lkp
compiler: gcc-9
head_commit: 49b0a0e914926c1116def49daf207c926dde5715
base_commit: 418baf2c28f3473039f2f7377760bd8f6897ae18
branch: linux-devel/devel-hourly-2020112414
rootfs: debian-10.4-x86_64-20200603.cgz
monitor_sha: 6e02c248
result_root: "/result/fio-basic/4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248/lkp-csl-2sp6/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/0"
scheduler_version: "/lkp/lkp/.src-20201126-070356"
LKP_SERVER: internal-lkp-server
arch: x86_64
max_uptime: 2400
initrd: "/osimage/debian/debian-10.4-x86_64-20200603.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/jobs/scheduled/lkp-csl-2sp6/fio-basic-4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248-debian-10.4-x86_64-20200-20201126-6167-1uqga7q-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel-8.3
- branch=linux-devel/devel-hourly-2020112414
- commit=97e8f0134a2bb794e4885f642724a50979b84f89
- BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/vmlinuz-5.10.0-rc4-00005-g97e8f0134a2b
- memmap=104G!8G
- memmap=104G!132G
- max_uptime=2400
- RESULT_ROOT=/result/fio-basic/4k-performance-2pmem-xfs-sync-dax-50%-200s-randwrite-200G-tb-ucode=0x5003003-monitor=6e02c248/lkp-csl-2sp6/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/0
- LKP_SERVER=internal-lkp-server
- nokaslr
- selinux=0
- debug
- apic=debug
- sysrq_always_enabled
- rcupdate.rcu_cpu_stall_timeout=100
- net.ifnames=0
- printk.devkmsg=on
- panic=-1
- softlockup_panic=1
- nmi_watchdog=panic
- oops=panic
- load_ramdisk=2
- prompt_ramdisk=0
- drbd.minor_count=8
- systemd.log_level=err
- ignore_loglevel
- console=tty0
- earlyprintk=ttyS0,115200
- console=ttyS0,115200
- vga=normal
- rw
modules_initrd: "/pkg/linux/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/modules.cgz"
bm_initrd: "/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20200709.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/fs_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/fio_20200714.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/fio-x86_64-3.15-1_20200907.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/mpstat_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/perf_20201126.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/perf-x86_64-fa02fcd94b0c-1_20201126.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/sar-x86_64-34c92ae-1_20200702.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz"
ucode_initrd: "/osimage/ucode/intel-ucode-20201117.cgz"
lkp_initrd: "/osimage/user/lkp/lkp-x86_64.cgz"
site: inn

#! /lkp/lkp/.src-20201126-070356/include/site/inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
oom-killer: 
watchdog: 

#! runtime status
last_kernel: 5.10.0-rc5
repeat_to: 2
schedule_notify_address: 

#! user overrides
queue_at_least_once: 0
kernel: "/pkg/linux/x86_64-rhel-8.3/gcc-9/97e8f0134a2bb794e4885f642724a50979b84f89/vmlinuz-5.10.0-rc4-00005-g97e8f0134a2b"
dequeue_time: 2020-11-26 22:43:19.866920347 +08:00
job_state: finished
loadavg: 40.65 23.53 9.58 1/883 7994
start_time: '1606401300'
end_time: '1606401502'
version: "/lkp/lkp/.src-20201126-070431:ed39d797:dc1ef617f"

--BOKacYhQ+x31HxR3
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=reproduce

# load the legacy e820 pmem driver (creates /dev/pmem0 and /dev/pmem1 from
# the memmap=104G!8G and memmap=104G!132G kernel parameters above)
modprobe nd_e820
dmsetup remove_all
wipefs -a --force /dev/pmem0
wipefs -a --force /dev/pmem1
mkfs -t xfs -f /dev/pmem1
mkfs -t xfs -f /dev/pmem0
mkdir -p /fs/pmem0
modprobe xfs
mount -t xfs -o inode64 -o dax /dev/pmem0 /fs/pmem0
mkdir -p /fs/pmem1
mount -t xfs -o inode64 -o dax /dev/pmem1 /fs/pmem1

# set the cpufreq scaling governor to "performance" on every online CPU
for cpu_dir in /sys/devices/system/cpu/cpu[0-9]*
do
	online_file="$cpu_dir"/online
	[ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue

	file="$cpu_dir"/cpufreq/scaling_governor
	[ -f "$file" ] && echo "performance" > "$file"
done

echo '
[global]
bs=4k
ioengine=sync
iodepth=32
size=4473924266
nr_files=1
filesize=4473924266
direct=0
runtime=200
invalidate=1
fallocate=posix
io_size=4473924266
file_service_type=roundrobin
random_distribution=random
group_reporting
pre_read=0

time_based

[task_0]
rw=randwrite
directory=/fs/pmem0
numjobs=24

[task_1]
rw=randwrite
directory=/fs/pmem1
numjobs=24' | fio --output-format=json -
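
# Not part of the original reproduce file: a sketch of how the per-job
# size=4473924266 above appears to be derived, assuming the "200G" in the
# job name is the total data set split evenly across the 2 tasks x 24 jobs.

```shell
total_bytes=$((200 * 1024 * 1024 * 1024))   # 200 GiB total, per the job name
jobs=48                                      # numjobs=24 in each of two tasks
per_job=$((total_bytes / jobs))              # integer division, as fio expects
echo "$per_job"                              # 4473924266, matching size= above
```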

--BOKacYhQ+x31HxR3--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 06:14:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 06:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39060.71831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiX0z-0007dk-Sb; Fri, 27 Nov 2020 06:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39060.71831; Fri, 27 Nov 2020 06:13:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiX0z-0007dd-P4; Fri, 27 Nov 2020 06:13:53 +0000
Received: by outflank-mailman (input) for mailman id 39060;
 Fri, 27 Nov 2020 06:13:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiX0y-0007dV-Jv; Fri, 27 Nov 2020 06:13:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiX0y-0003mU-EZ; Fri, 27 Nov 2020 06:13:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiX0y-0006E2-3U; Fri, 27 Nov 2020 06:13:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiX0y-0001S8-31; Fri, 27 Nov 2020 06:13:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kiX0y-0007dV-Jv; Fri, 27 Nov 2020 06:13:52 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fmecPbh945VtP5+ZQECMXQsVYg+f0h4dqbE99OqX2FA=; b=FVMHfoKzTtH0cppy+X9YmADi/T
	wu2ZJ9zm294OYOphh0BLS4lLby5sGADT13AsPUvhZJBWWzoSozHBYMEjVvScNANPaydA8BhaU8wEH
	kTmTOjs3hH7rw1jNbG+s7nR5aqElLaIRpaSvMFD1s2w0Ld0Glnc/JZoDtLd7VawRkdAo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kiX0y-0003mU-EZ; Fri, 27 Nov 2020 06:13:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kiX0y-0006E2-3U; Fri, 27 Nov 2020 06:13:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kiX0y-0001S8-31; Fri, 27 Nov 2020 06:13:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157037-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157037: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=dd3d2340c4076d1735cd0f7cb61f4d8622b9562d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 06:13:52 +0000

flight 157037 qemu-mainline real [real]
flight 157046 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157037/
http://logs.test-lab.xenproject.org/osstest/logs/157046/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                dd3d2340c4076d1735cd0f7cb61f4d8622b9562d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   98 days
Failing since        152659  2020-08-21 14:07:39 Z   97 days  205 attempts
Testing same since   157020  2020-11-26 03:28:48 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69155 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 07:07:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 07:07:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39071.71846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiXqQ-0003ug-TX; Fri, 27 Nov 2020 07:07:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39071.71846; Fri, 27 Nov 2020 07:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiXqQ-0003uZ-QT; Fri, 27 Nov 2020 07:07:02 +0000
Received: by outflank-mailman (input) for mailman id 39071;
 Fri, 27 Nov 2020 07:07:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiXqP-0003uR-PF; Fri, 27 Nov 2020 07:07:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiXqP-0004t8-Fm; Fri, 27 Nov 2020 07:07:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiXqP-0000wE-40; Fri, 27 Nov 2020 07:07:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiXqP-0002jo-3W; Fri, 27 Nov 2020 07:07:01 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BxAaC/RMxiThGKe28x/rJGUVY2Lmj5DSC18EPvD/ZnM=; b=PrR4/QYKd6WKcX7qDcMApbJujo
	nzGwgLGqRczYu5A7IZpDvk59iPBhoyzf9mnWR7/aXE1uGMdSv6ribkYWGzs52h9Ej1eSoCxnaqK6q
	aY7ZwwJK3aqDaJQMRcPd3aMPt8zRSZRUKEjRgG7veY7JsVQBYGfgvHKAbuUwM7xIBIZo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157040-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157040: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=85a2c56cb4454c73f56d3099d96942e7919b292f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 07:07:01 +0000

flight 157040 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157040/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                85a2c56cb4454c73f56d3099d96942e7919b292f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  118 days
Failing since        152366  2020-08-01 20:49:34 Z  117 days  197 attempts
Testing same since   157040  2020-11-26 20:40:12 Z    0 days    1 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 684608 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 07:21:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 07:21:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39085.71867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiY4P-0005lH-Gf; Fri, 27 Nov 2020 07:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39085.71867; Fri, 27 Nov 2020 07:21:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiY4P-0005lA-DN; Fri, 27 Nov 2020 07:21:29 +0000
Received: by outflank-mailman (input) for mailman id 39085;
 Fri, 27 Nov 2020 07:21:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiY4O-0005l2-2Z; Fri, 27 Nov 2020 07:21:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiY4N-00059C-Py; Fri, 27 Nov 2020 07:21:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiY4N-0001Xk-GN; Fri, 27 Nov 2020 07:21:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiY4N-0007s2-Fr; Fri, 27 Nov 2020 07:21:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C6NsvQEZUE9rNKS0RA4lIyDls1MrieCzzmqLts1S2vE=; b=zAv38tWA1SXf7zn5IUl/X+dCy0
	ys5KOJC66ehBc2KZguSru2O39gtmlM9fYrZJuisef5urCM12uHEDLhT+CqpTyMRr3D149sikfc8fC
	nVG4HC7ravNKxaZRel/p0KVxAZLXF4T2a6thFldaL+pBBEs4GedwODJRCMQDTOGiGRek=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157042-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157042: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=872f953262d68a11da7bc2fb3ded16df234b8700
X-Osstest-Versions-That:
    ovmf=21f984cedec1c613218480bc3eb5e92349a7a812
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 07:21:27 +0000

flight 157042 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157042/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 872f953262d68a11da7bc2fb3ded16df234b8700
baseline version:
 ovmf                 21f984cedec1c613218480bc3eb5e92349a7a812

Last test of basis   157025  2020-11-26 07:11:04 Z    1 days
Testing same since   157042  2020-11-27 01:41:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  James Bottomley <jejb@linux.ibm.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   21f984cede..872f953262  872f953262d68a11da7bc2fb3ded16df234b8700 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 09:20:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 09:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39109.71886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiZuq-00082W-KI; Fri, 27 Nov 2020 09:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39109.71886; Fri, 27 Nov 2020 09:19:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiZuq-00082P-GI; Fri, 27 Nov 2020 09:19:44 +0000
Received: by outflank-mailman (input) for mailman id 39109;
 Fri, 27 Nov 2020 09:19:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiZup-00082H-Mb; Fri, 27 Nov 2020 09:19:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiZup-00082u-GB; Fri, 27 Nov 2020 09:19:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiZup-0000au-87; Fri, 27 Nov 2020 09:19:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiZup-0004eh-7c; Fri, 27 Nov 2020 09:19:43 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IbZHOrRndej+7IFn42fI1mN7aSyEmKPqfeHX8o4cnXg=; b=wOmHXq1PAGjJNpMwnQM0zW4daq
	+IZJoLVGvtWk/WRQymUPpUXkYYnzXmQsG5OzMAZSYI8RjMDmI7yUdMyJfrEiMGJlCn2k2/MkNhEfz
	GoVsZJeuFIBvBNmV3BH8Tv3VJ0coT3ZF8DHWxIbveZu9MciHwcC0FzkKdjHXTDttc89M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157045-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157045: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=6d69afe4517646811ee96981408bc6fc18b5ffbb
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 09:19:43 +0000

flight 157045 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157045/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6d69afe4517646811ee96981408bc6fc18b5ffbb
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  140 days
Failing since        151818  2020-07-11 04:18:52 Z  139 days  134 attempts
Testing same since   157045  2020-11-27 04:20:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29554 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 09:37:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 09:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.38790.71900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiaBa-0001P5-4Z; Fri, 27 Nov 2020 09:37:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 38790.71900; Fri, 27 Nov 2020 09:37:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiaBa-0001Oy-1W; Fri, 27 Nov 2020 09:37:02 +0000
Received: by outflank-mailman (input) for mailman id 38790;
 Thu, 26 Nov 2020 16:18:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QDyK=FA=redhat.com=kherbst@srs-us1.protection.inumbo.net>)
 id 1kiJyZ-0007gu-Ux
 for xen-devel@lists.xenproject.org; Thu, 26 Nov 2020 16:18:32 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fe4b196e-02ab-4885-9a13-c0a768cb4fad;
 Thu, 26 Nov 2020 16:18:31 +0000 (UTC)
Received: from mail-qt1-f198.google.com (mail-qt1-f198.google.com
 [209.85.160.198]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-422-Vyb3LmvEO0yCfU3IWWKsNg-1; Thu, 26 Nov 2020 11:18:25 -0500
Received: by mail-qt1-f198.google.com with SMTP id i20so1507587qtr.0
 for <xen-devel@lists.xenproject.org>; Thu, 26 Nov 2020 08:18:25 -0800 (PST)
X-Inumbo-ID: fe4b196e-02ab-4885-9a13-c0a768cb4fad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606407510;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IL7sPB68gF1YeWIUFuBJ//YIBDjI3bDL6qHHKubt/Z0=;
	b=IOmhlXVU0SbnGpY3BpOTvOF5fmmL+x3dr/igY6NQi8P5VRvzaBx7rYihsMQLnr/2wjS5ls
	lvhWzMsvGVZbuOMadN8VodSqXyxvXgSytlwV2pfW04pV/KU13GU2/Vlnjpaju/SpxStUa5
	Ap4HVqPWBmDJjfj+VUMbgiAsQoaIc4c=
X-MC-Unique: Vyb3LmvEO0yCfU3IWWKsNg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=IL7sPB68gF1YeWIUFuBJ//YIBDjI3bDL6qHHKubt/Z0=;
        b=G9dZgy+hfboKxAn7QJxMH5KrbOZlac7s/hADSv6HCM9jLk8BdlI+STFTz59JPgnZhQ
         TO+kppUWlqYVE5JoDI+y+S9rbPVsS7iokVTh78U1V1nRBJrz1M3UdI9fvwd1hGFzQuv3
         5ULRe/baZgsE3nXebw2V0UT7/jzcUelIbO3tvv3DwlSAwoGo0WWVqMEGqJ2NvHzfrseX
         YJOuhTfERtJRU+oZpdDGDao0tIOeU98Fw/ZD+RrGsnX/tHa4lsCvGwXSzTgr0kjpLrrI
         VnKkFCFGExnDa1AMropwgsXQ473ydc+HwT2/cc9fdAsvUl9qiKBbwNnwHJINbywtZgjf
         ERjA==
X-Gm-Message-State: AOAM530L0RUvp4fm8RTjHhox3TVlqXeNjO4gcHS/yDl8HOZ/SJbm5Eia
	0fKjZvnj0UmaHHNBwd9UGv8kDl1CA7xvV2uxthCHSkdKphkM3byDMCrs0jYWFnLPIMLv8Zr2hAd
	KtH/hgTxfv8AFEFT9fboH8ywC2veyJg+T/YwoeGR8yAc=
X-Received: by 2002:a37:ac8:: with SMTP id 191mr3793961qkk.381.1606407504845;
        Thu, 26 Nov 2020 08:18:24 -0800 (PST)
X-Google-Smtp-Source: ABdhPJzCU4CKAolN2PpaYdMoKCHma/+NC3lHjkQkQkRPTWC20j3rANbYTTy+FG9V7n634RRlgf0kcsxPjR4LO+NB5fA=
X-Received: by 2002:a37:ac8:: with SMTP id 191mr3793888qkk.381.1606407504531;
 Thu, 26 Nov 2020 08:18:24 -0800 (PST)
MIME-Version: 1.0
References: <cover.1605896059.git.gustavoars@kernel.org> <20201120105344.4345c14e@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011201129.B13FDB3C@keescook> <20201120115142.292999b2@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
 <202011220816.8B6591A@keescook> <9b57fd4914b46f38d54087d75e072d6e947cb56d.camel@HansenPartnership.com>
 <CANiq72nZrHWTA4_Msg6MP9snTyenC6-eGfD27CyfNSu7QoVZbw@mail.gmail.com>
 <1c7d7fde126bc0acf825766de64bf2f9b888f216.camel@HansenPartnership.com>
 <CANiq72m22Jb5_+62NnwX8xds2iUdWDMAqD8PZw9cuxdHd95W0A@mail.gmail.com>
 <fc45750b6d0277c401015b7aa11e16cd15f32ab2.camel@HansenPartnership.com>
 <CANiq72k5tpDoDPmJ0ZWc1DGqm+81Gi-uEENAtvEs9v3SZcx6_Q@mail.gmail.com>
 <4993259d01a0064f8bb22770503490f9252f3659.camel@HansenPartnership.com>
 <CANiq72kqO=bYMJnFS2uYRpgWATJ=uXxZuNUsTXT+3aLtrpnzvQ@mail.gmail.com>
 <44005bde-f6d4-5eaa-39b8-1a5efeedb2d3@gmail.com> <CANiq72nobq=ptWK-qWxU91JHqkKhMcRtJNnw2XJd5-vSJWZd8Q@mail.gmail.com>
 <CAMuHMdV5kOakvZJMWLxbpigFPS+Xuw6DVYsWCWZy7wGsv3idcw@mail.gmail.com>
In-Reply-To: <CAMuHMdV5kOakvZJMWLxbpigFPS+Xuw6DVYsWCWZy7wGsv3idcw@mail.gmail.com>
From: Karol Herbst <kherbst@redhat.com>
Date: Thu, 26 Nov 2020 17:18:13 +0100
Message-ID: <CACO55tsBj3gLECoMWtViDitd7fVTnW+Cp0LVmqYkR=QFBJkEmQ@mail.gmail.com>
Subject: Re: [PATCH 000/141] Fix fall-through warnings for Clang
To: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>, 
	ALSA Development Mailing List <alsa-devel@alsa-project.org>, bridge@lists.linux-foundation.org, 
	target-devel <target-devel@vger.kernel.org>, linux-iio@vger.kernel.org, 
	linux-wireless <linux-wireless@vger.kernel.org>, Linux MMC List <linux-mmc@vger.kernel.org>, 
	Linux Fbdev development list <linux-fbdev@vger.kernel.org>, dri-devel <dri-devel@lists.freedesktop.org>, 
	virtualization@lists.linux-foundation.org, 
	James Bottomley <James.Bottomley@hansenpartnership.com>, linux-ide@vger.kernel.org, 
	dm-devel@redhat.com, keyrings@vger.kernel.org, 
	MTD Maling List <linux-mtd@lists.infradead.org>, GR-everest-linux-l2@marvell.com, 
	wcn36xx@lists.infradead.org, linux-i3c@lists.infradead.org, 
	linux1394-devel@lists.sourceforge.net, linux-afs@lists.infradead.org, 
	Lars Ellenberg <drbd-dev@lists.linbit.com>, driverdevel <devel@driverdev.osuosl.org>, 
	linux-cifs@vger.kernel.org, rds-devel@oss.oracle.com, 
	scsi <linux-scsi@vger.kernel.org>, 
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>, linux-rdma <linux-rdma@vger.kernel.org>, 
	oss-drivers@netronome.com, linux-atm-general@lists.sourceforge.net, 
	ceph-devel <ceph-devel@vger.kernel.org>, amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	linux-stm32@st-md-mailman.stormreply.com, cluster-devel@redhat.com, 
	usb-storage@lists.one-eyed-alien.net, coreteam@netfilter.org, 
	intel-wired-lan@lists.osuosl.org, linux-input <linux-input@vger.kernel.org>, 
	Miguel Ojeda <ojeda@kernel.org>, Jakub Kicinski <kuba@kernel.org>, 
	Ext4 Developers List <linux-ext4@vger.kernel.org>, NetFilter <netfilter-devel@vger.kernel.org>, 
	Linux Media Mailing List <linux-media@vger.kernel.org>, Kees Cook <keescook@chromium.org>, 
	selinux@vger.kernel.org, linux-arm-msm <linux-arm-msm@vger.kernel.org>, 
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>, linux-sctp@vger.kernel.org, 
	reiserfs-devel@vger.kernel.org, linux-geode@lists.infradead.org, 
	linux-block@vger.kernel.org, 
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>, op-tee@lists.trustedfirmware.org, 
	linux-mediatek@lists.infradead.org, xen-devel@lists.xenproject.org, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, linux-hams@vger.kernel.org, 
	Nathan Chancellor <natechancellor@gmail.com>, linux-can@vger.kernel.org, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-hwmon@vger.kernel.org, 
	Nick Desaulniers <ndesaulniers@google.com>, 
	Linux Watchdog Mailing List <linux-watchdog@vger.kernel.org>, GR-Linux-NIC-Dev@marvell.com, 
	Linux-MM <linux-mm@kvack.org>, Network Development <netdev@vger.kernel.org>, 
	linux-decnet-user@lists.sourceforge.net, samba-technical@lists.samba.org, 
	"Gustavo A. R. Silva" <gustavoars@kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, 
	Linux-Renesas <linux-renesas-soc@vger.kernel.org>, Edward Cree <ecree.xilinx@gmail.com>, 
	linux-security-module <linux-security-module@vger.kernel.org>, 
	USB list <linux-usb@vger.kernel.org>, tipc-discussion@lists.sourceforge.net, 
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>, patches@opensource.cirrus.com, 
	Joe Perches <joe@perches.com>, linux-integrity <linux-integrity@vger.kernel.org>, 
	"open list:NFS, SUNRPC, AND..." <linux-nfs@vger.kernel.org>, 
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@kernel.org>, linux-hardening@vger.kernel.org
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=kherbst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"

On Thu, Nov 26, 2020 at 4:28 PM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> Hi Miguel,
>
> On Thu, Nov 26, 2020 at 3:54 PM Miguel Ojeda
> <miguel.ojeda.sandonis@gmail.com> wrote:
> > On Wed, Nov 25, 2020 at 11:44 PM Edward Cree <ecree.xilinx@gmail.com> wrote:
> > > To make the intent clear, you have to first be certain that you
> > >  understand the intent; otherwise by adding either a break or a
> > >  fallthrough to suppress the warning you are just destroying the
> > >  information that "the intent of this code is unknown".
> >
> > If you don't know what the intent of your own code is, then you
> > *already* have a problem in your hands.
>
> The maintainer is not necessarily the owner/author of the code, and
> thus may not know the intent of the code.
>
> > > or does it flag up code
> > >  that can be mindlessly "fixed" (in which case the warning is
> > >  worthless)?  Proponents in this thread seem to be trying to
> > >  have it both ways.
> >
> > A warning is not worthless just because you can mindlessly fix it.
> > There are many counterexamples, e.g. many
> checkpatch/lint/clang-format/indentation warnings, functional ones like
> > the `if (a = b)` warning...
>
> BTW, you cannot mindlessly fix the latter, as you cannot know if
> "(a == b)" or "((a = b))" was intended, without understanding the code
> (and the (possibly unavailable) data sheet, and the hardware, ...).
>

Allowing assignments in if statements was clearly a mistake in the
first place, and if you need outside information to understand the
code, the code itself is already the issue.

> P.S. So far I've stayed out of this thread, as I like it if the compiler
>      flags possible mistakes.  After all I was the one fixing new
>      "may be used uninitialized" warnings thrown up by gcc-4.1, until
>      (a bit later than) support for that compiler was removed...
>
> Gr{oetje,eeting}s,
>
>                         Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 10:23:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 10:23:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39133.71919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiau4-00060d-W3; Fri, 27 Nov 2020 10:23:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39133.71919; Fri, 27 Nov 2020 10:23:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiau4-00060W-Rp; Fri, 27 Nov 2020 10:23:00 +0000
Received: by outflank-mailman (input) for mailman id 39133;
 Fri, 27 Nov 2020 10:22:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kiau3-00060R-Nk
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 10:22:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d2630ba-47fd-4d95-a9dd-11a1eb657d8c;
 Fri, 27 Nov 2020 10:22:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C728DAC2F;
 Fri, 27 Nov 2020 10:22:57 +0000 (UTC)
X-Inumbo-ID: 0d2630ba-47fd-4d95-a9dd-11a1eb657d8c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0d2630ba-47fd-4d95-a9dd-11a1eb657d8c;
	Fri, 27 Nov 2020 10:22:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606472577; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DRWyPz7nptY60EVFk7sxsNcVyrTPLHy7zKH/6dyJ0rQ=;
	b=d/ZFS/W/0lZAu+tLiDjBIm02M1rICaaoXDLBNBVNj2ZMXOx0FjBj5zqx1o2bBGa1WLg0JH
	mlPk4LTf6WqACva8+5rdZt7IuWvhhGRE5VTf0LQ3nTmtrYyCBJA+od6JFzcnFmudZbBkM1
	yUto/Up/4vGpsKBmrhWjrrfNjBwB+Qs=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C728DAC2F;
	Fri, 27 Nov 2020 10:22:57 +0000 (UTC)
Subject: Re: [PATCH] xen/x86: Work around Clang code generation bug with asm
 parameters
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201111124512.2268-1-andrew.cooper3@citrix.com>
 <8282790a-a0bd-1d33-d992-9d194766254e@suse.com>
 <3ecb8469-8504-054a-078d-4bf32f8f82c4@citrix.com>
 <cfc7ad85-22b3-701f-f1d8-5009e5262b92@suse.com>
Message-ID: <539850cd-9e59-a07f-9c9f-ddf9fc28f203@suse.com>
Date: Fri, 27 Nov 2020 11:22:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <cfc7ad85-22b3-701f-f1d8-5009e5262b92@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.11.2020 09:14, Jan Beulich wrote:
> On 11.11.2020 21:01, Andrew Cooper wrote:
>> On 11/11/2020 15:11, Jan Beulich wrote:
>>> On 11.11.2020 13:45, Andrew Cooper wrote:
>>>> Clang 9 and later don't handle the clobber of %r10 correctly in
>>>> _hypercall64_4().  See https://bugs.llvm.org/show_bug.cgi?id=48122
>>> Are you sure this is a bug?
>>
>> Yes.
>>
>>>  With ...
>>>
>>>>  #define _hypercall64_4(type, hcall, a1, a2, a3, a4)                     \
>>>>      ({                                                                  \
>>>> -        long res, tmp__;                                                \
>>>> -        register long _a4 asm ("r10") = ((long)(a4));                   \
>>>> +        long res, _a1 = (long)(a1), _a2 = (long)(a2),                   \
>>>> +            _a3 = (long)(a3);                                           \
>>>> +        register long _a4 asm ("r10") = (long)(a4);                     \
>>>>          asm volatile (                                                  \
>>>>              "call hypercall_page + %c[offset]"                          \
>>>> -            : "=a" (res), "=D" (tmp__), "=S" (tmp__), "=d" (tmp__),     \
>>>> -              "=&r" (tmp__) ASM_CALL_CONSTRAINT                         \
>>> ... this we've requested "any register", while with ...
>>>
>>>> -            : [offset] "i" (hcall * 32),                                \
>>>> -              "1" ((long)(a1)), "2" ((long)(a2)), "3" ((long)(a3)),     \
>>>> -              "4" (_a4)                                                 \
>>> ... this we've asked for that specific register to be initialized
>>> from r10 (and without telling the compiler that r10 is going to
>>> change).
>>
>> Consider applying that same reasoning to "1" instead of "4".  In that
>> case, a1 would no longer be bound to %rdi.
> 
> That's different: "=D" specifies the register, and "1" says "use
> the same register as input". Whereas, as said, "=&r" says "use
> any register" with "1" saying "use the same register" and (_a4)
> specifying where the value is to come from.
> 
>> The use of "4" explicitly binds the input and the output, which includes
>> requiring them to be the same register.
>>
>> Furthermore, LLVM tends to consider "not behaving in the same way as
>> GCC" a bug.
> 
> That's a fair statement, but then still the description wants
> re-wording. Plus of course a future gcc is free to change its
> behavior to that currently observed with clang.
> 
> Consider the following example (on an arch where "f" is a
> floating point register and there are ways to copy directly
> between GPR and floating point registers):
> 
>    int i;
>    register float f asm("f7") = <input>;
>    asm("..." : "=r" (i) : "0" (f));
> 
> In this case obviously f7 can't be used for i (as it doesn't
> match "r"). It's merely that the initial value of i is to come
> from f7. In fact for Arm64 this
> 
> extern float flt;
> 
> int test(void) {
> 	int i;
> 	register float f asm("s7") = flt;
> 	asm("add %0,%0,5" : "=r" (i) : "0" (f));
> 	return i;
> }
> 
> behaves exactly as described:
> 
> test:
>         adrp    x0, flt
>         ldr     s7, [x0, #:lo12:flt]
>         fmov    w0, s7
>         add     x0, x0, #5
>         ret
> 
> (Whether fmov is a sensible choice here is a different question;
> I'd have expected some fcvt*.)

Meanwhile I've realized that I neither need to resort to Arm here,
nor to floating point, e.g.

int test2(int in) {
	int i;
	register int ri asm("ecx") = in;
	asm("nop %0" : "=r" (i) : "0" (ri));
	return i;
}

You'll find that the resulting code (at -O2; gcc 10.2.0) doesn't
use %ecx at all - %edi gets moved directly to %eax.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 10:46:53 2020
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/msr: don't inject #GP when trying to read FEATURE_CONTROL
Date: Fri, 27 Nov 2020 11:46:14 +0100
Message-ID: <20201127104614.71933-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.2
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Windows 10 will triple fault if #GP is injected when attempting to
read the FEATURE_CONTROL MSR on Intel or compatible hardware. Fix this
by injecting a #GP only when the vendor doesn't support the MSR, even
if there are no features to expose.

Fixes: 39ab598c50a2 ('x86/pv: allow reading FEATURE_CONTROL MSR')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/msr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index be8e363862..38b0a046e1 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -176,7 +176,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     switch ( msr )
     {
     case MSR_IA32_FEATURE_CONTROL:
-        if ( !cp->basic.vmx && !vmce_has_lmce(v) )
+        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )
             goto gp_fault;
 
         *val = IA32_FEATURE_CONTROL_LOCK;
-- 
2.29.2



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 10:56:33 2020
Subject: Re: [PATCH] x86/msr: don't inject #GP when trying to read
 FEATURE_CONTROL
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201127104614.71933-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c1f686e2-dcc3-233a-c241-edf997d2cef7@suse.com>
Date: Fri, 27 Nov 2020 11:56:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127104614.71933-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 11:46, Roger Pau Monne wrote:
> Windows 10 will triple fault if #GP is injected when attempting to
> read the FEATURE_CONTROL MSR on Intel or compatible hardware. Fix this
> by injecting a #GP only when the vendor doesn't support the MSR, even
> if there are no features to expose.
> 
> Fixes: 39ab598c50a2 ('x86/pv: allow reading FEATURE_CONTROL MSR')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

In principle
Acked-by: Jan Beulich <jbeulich@suse.com>

However, iirc it was Andrew who had suggested the conditional you
now replace, so I'd like to wait for him to voice a view.

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -176,7 +176,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>      switch ( msr )
>      {
>      case MSR_IA32_FEATURE_CONTROL:
> -        if ( !cp->basic.vmx && !vmce_has_lmce(v) )
> +        if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_CENTAUR)) )

What about Shanghai? init_shanghai() calling init_intel_cacheinfo()
suggests to me it's at least as Intel-like as Centaur/VIA.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:00:02 2020
Date: Fri, 27 Nov 2020 11:59:48 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
References: <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201126172034.GA7642@antioche.eu.org>

On Thu, Nov 26, 2020 at 06:20:34PM +0100, Manuel Bouyer wrote:
> On Thu, Nov 26, 2020 at 04:09:37PM +0100, Roger Pau Monné wrote:
> > > 
> > > Oh, that's actually very useful. The interrupt is being constantly
> > > injected from the hardware and received by Xen, it's just not then
> > > injected into dom0 - that's the bit we are missing. Let me look into
> > > adding some more debug to that path, hopefully it will tell us where
> > > things are getting blocked.
> > 
> > So I have yet one more patch for you to try, this one has more
> > debugging and a slight change in the emulated IO-APIC behavior.
> > Depending on the result I might have to find a way to mask the
> > interrupt so it doesn't spam the whole buffer in order for us to see
> > exactly what triggered this scenario you are in.
> 
> OK, here it is:
> http://www-soc.lip6.fr/~bouyer/xen-log9.txt
> 
> I had to restart from a clean source tree to apply this patch, so to make
> sure we're in sync I attached the diff from my sources

I'm quite confused about why your trace doesn't even get into
hvm_do_IRQ_dpci, so I've added some more debug info.

Here is the new patch, sorry for so many rounds of testing.
---8<---
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 38ac5fb6c7..9db3dcc957 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * to know if the GSI is pending or not.
      */
     spin_lock(&d->arch.hvm.irq_lock);
+    if ( gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
+                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
+
     if ( trig == VIOAPIC_EDGE_TRIG || !hvm_irq->gsi_assert_count[gsi] )
     {
         if ( trig == VIOAPIC_LEVEL_TRIG )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..e6748e0649 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -257,7 +257,11 @@ static void vioapic_write_redirent(
         vlapic_adjust_i8259_target(d);
     }
     else if ( ent.fields.trig_mode == VIOAPIC_EDGE_TRIG )
+    {
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC set edge trigger irq %u\n", gsi);
         pent->fields.remote_irr = 0;
+    }
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
               hvm_irq->gsi_assert_count[idx] )
@@ -278,6 +282,10 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u\n", gsi);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +293,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +416,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                          irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index 49bd778484..db7167eb4b 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..ec52e44cb7 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,12 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u nr_guests %u ack_type %u in_flight %u\n",
+                          desc->irq, action->nr_guests, action->ack_type,
+                          action->in_flight);
+
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..92f3670508 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -828,6 +828,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
+    if ( pirq->pirq == TRACK_IRQ )
+        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);
+
     pirq_dpci->masked = 1;
     raise_softirq_for(pirq_dpci);
     return 1;
@@ -1010,6 +1013,9 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u\n", guest_gsi);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..91579c33b9 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 34
+
 #endif /* __XEN_IRQ_H__ */



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:18:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:18:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39171.71967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiblu-0002k5-23; Fri, 27 Nov 2020 11:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39171.71967; Fri, 27 Nov 2020 11:18:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiblt-0002jy-UK; Fri, 27 Nov 2020 11:18:37 +0000
Received: by outflank-mailman (input) for mailman id 39171;
 Fri, 27 Nov 2020 11:18:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kibls-0002jt-Of
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:18:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 951e8e8f-b19f-4f49-8b58-9a2e698755cf;
 Fri, 27 Nov 2020 11:18:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6E5D3AC2F;
 Fri, 27 Nov 2020 11:18:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606475914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cvoOrWHyB+9oRM6OEMfIKcaLGjMbphGjIujoDFw01jU=;
	b=F74ErVc+dsrYbgWH3PxcRpIbdDvKfc/XgFcorVRdrZK8KJLIBurDZDQOm2OO5s7XIStinP
	2iAPZRZ/H4Z9D+OLyyzyBeRHGDpXfITGyPKHvI6gM1ryrjRG5yaxJ8ELmzXXq29hJAOERs
	S0SzwAwAokdXHflekSkCOu3TSBYxOSg=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Manuel Bouyer <bouyer@antioche.eu.org>
References: <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <90b6f981-2494-34c8-c611-37bc16d473b6@suse.com>
Date: Fri, 27 Nov 2020 12:18:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 11:59, Roger Pau Monné wrote:
> On Thu, Nov 26, 2020 at 06:20:34PM +0100, Manuel Bouyer wrote:
>> On Thu, Nov 26, 2020 at 04:09:37PM +0100, Roger Pau Monné wrote:
>>>>
>>>> Oh, that's actually very useful. The interrupt is being constantly
>>>> injected from the hardware and received by Xen, it's just not then
>>>> injected into dom0 - that's the bit we are missing. Let me look into
>>>> adding some more debug to that path, hopefully it will tell us where
>>>> things are getting blocked.
>>>
>>> So I have yet one more patch for you to try, this one has more
>>> debugging and a slight change in the emulated IO-APIC behavior.
>>> Depending on the result I might have to find a way to mask the
>>> interrupt so it doesn't spam the whole buffer in order for us to see
>>> exactly what triggered this scenario you are in.
>>
>> OK, here it is:
>> http://www-soc.lip6.fr/~bouyer/xen-log9.txt
>>
>> I had to restart from a clean source tree to apply this patch, so to make
>> sure we're in sync I attached the diff from my sources
> 
> I'm quite confused about why your trace doesn't even get into
> hvm_do_IRQ_dpci, so I've added some more debug info.

Are you sure it doesn't? I'm somewhat worried we may ...

> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -828,6 +828,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
>           !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
>          return 0;
>  
> +    if ( pirq->pirq == TRACK_IRQ )
> +        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);

... take the early exit path just above here. I still wouldn't be
able to say why that is: when I looked yesterday I think I found
that every failure path which leaves HVM_IRQ_DPCI_MAPPED clear
has a log message associated with it, yet Manuel said there were
no other log messages.

In the context of this I also started wondering whether it's
the right thing to do to start the EOI timer if the subsequent
call to send_guest_pirq() also doesn't actually send any event.
In this case the guest is effectively guaranteed to not handle
the interrupt. When the interrupt isn't shared, I think we
ought to ->end() it right away, but without unmasking it, to
unblock same or lower priority interrupts. What to do in the
shared case is less obvious to me ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:19:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:19:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39177.71979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibmZ-0002rD-Ge; Fri, 27 Nov 2020 11:19:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39177.71979; Fri, 27 Nov 2020 11:19:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibmZ-0002r6-DO; Fri, 27 Nov 2020 11:19:19 +0000
Received: by outflank-mailman (input) for mailman id 39177;
 Fri, 27 Nov 2020 11:19:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kibmY-0002qz-GE
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:19:18 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30ea6b90-df7a-43a1-85b9-2d095bb91b90;
 Fri, 27 Nov 2020 11:19:16 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARBJ9vS029288
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 12:19:10 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 9D7082E9CAC; Fri, 27 Nov 2020 12:19:04 +0100 (MET)
Date: Fri, 27 Nov 2020 12:19:04 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127111904.GG1717@antioche.eu.org>
References: <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 12:19:11 +0100 (MET)

On Fri, Nov 27, 2020 at 11:59:48AM +0100, Roger Pau Monné wrote:
> > 
> > I had to restart from a clean source tree to apply this patch, so to make
> > sure we're in sync I attached the diff from my sources
> 
> I'm quite confused about why your trace doesn't even get into
> hvm_do_IRQ_dpci, so I've added some more debug info.
> 
> Here is the new patch, sorry for so many rounds of testing.

No problem, it's expected for this kind of debug :)

http://www-soc.lip6.fr/~bouyer/xen-log11.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:21:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:21:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39185.71991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibob-0003gN-Ui; Fri, 27 Nov 2020 11:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39185.71991; Fri, 27 Nov 2020 11:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibob-0003gG-RA; Fri, 27 Nov 2020 11:21:25 +0000
Received: by outflank-mailman (input) for mailman id 39185;
 Fri, 27 Nov 2020 11:21:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kiboa-0003gB-JJ
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:21:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 201f8daa-b2d2-4f00-9408-0bbe4a1a60e3;
 Fri, 27 Nov 2020 11:21:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55FF3AEA2;
 Fri, 27 Nov 2020 11:21:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606476082; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wjcCoKTS1XZ8V30EJpSmW5RijefpZ514L6a72pRKebQ=;
	b=PiXq3EK3Qh0O0aWvUnBwcMNiysSaqjTOhRvW3BAPZ2PF/HrMPcnXBngFglCFPHdjQ1AuGK
	Tud45Ro7VDE5ScvDlhgmVQqlaxNfuPzofCQhaKxm9jY587JMN83NQLZXkpwWoDeT4f3Ksu
	cY12yVKbXPNohbxEWGw1qIQi0y9hAU8=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <20201127111904.GG1717@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <89aecc1b-bfe5-26fb-9d11-bec4f0aa7b84@suse.com>
Date: Fri, 27 Nov 2020 12:21:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127111904.GG1717@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 12:19, Manuel Bouyer wrote:
> On Fri, Nov 27, 2020 at 11:59:48AM +0100, Roger Pau Monné wrote:
>>>
>>> I had to restart from a clean source tree to apply this patch, so to make
>>> sure we're in sync I attached the diff from my sources
>>
>> I'm quite confused about why your trace doesn't even get into
>> hvm_do_IRQ_dpci, so I've added some more debug info.
>>
>> Here is the new patch, sorry for so many rounds of testing.
> 
> No problem, it's expected for this kind of debug :)
> 
> http://www-soc.lip6.fr/~bouyer/xen-log11.txt

Hmm, this one now has hvm_do_IRQ_dpci entries. Maybe the previous one
was again from a stale binary?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:29:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39194.72003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibwW-00040d-RF; Fri, 27 Nov 2020 11:29:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39194.72003; Fri, 27 Nov 2020 11:29:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibwW-00040W-O7; Fri, 27 Nov 2020 11:29:36 +0000
Received: by outflank-mailman (input) for mailman id 39194;
 Fri, 27 Nov 2020 11:29:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kibwV-00040R-Pn
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:29:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a5075ca-9156-4512-8c52-904a7ea1b812;
 Fri, 27 Nov 2020 11:29:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3D4FBABD7;
 Fri, 27 Nov 2020 11:29:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606476574; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=goPbyqWpRwaSLpYInMxYycqxofYVEUeukrRN1EBxPwc=;
	b=ipBG18eBzyXJYd40nN1b+NYMn+pfzA4q7EBEiO+OO63590YUV47JQhTzrb+diLhj/kAAMl
	+gyjCCWIpXlRywvmEW5THjHQ1jqXrC4SvXsuPYDDuiQSTPA6WHfQRb7aBI8ztr0OLmcPUb
	0zh7PN4bM/zxBbb4H/huu/m0LJIdRlE=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Roger Pau Monné <roger.pau@citrix.com>,
 Manuel Bouyer <bouyer@antioche.eu.org>
Cc: xen-devel@lists.xenproject.org
References: <20201124142713.GM2020@antioche.eu.org>
 <e6a0fc84-e7ed-825c-5356-29b8a6359a2b@suse.com>
 <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
Date: Fri, 27 Nov 2020 12:29:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 11:59, Roger Pau Monné wrote:
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
>       * to know if the GSI is pending or not.
>       */
>      spin_lock(&d->arch.hvm.irq_lock);
> +    if ( gsi == TRACK_IRQ )
> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);

This produces

81961 hvm_gsi_assert irq 34 trig 1 assert count 1

Since the logging occurs ahead of the call to assert_gsi(), it
means we don't signal anything to Dom0, because according to our
records there's still an IRQ in flight. Unfortunately we only
see the tail of the trace, so it's not possible to tell how / when
we got into this state.

Manuel - is this the only patch you have in place? Or did you keep
any prior ones? Iirc there once was one where Roger also suppressed
some de-assert call.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:32:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39201.72015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibyx-0004of-95; Fri, 27 Nov 2020 11:32:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39201.72015; Fri, 27 Nov 2020 11:32:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kibyx-0004oY-5X; Fri, 27 Nov 2020 11:32:07 +0000
Received: by outflank-mailman (input) for mailman id 39201;
 Fri, 27 Nov 2020 11:32:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eONm=FB=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1kibyv-0004oS-LM
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:32:06 +0000
Received: from mail.skyhub.de (unknown [5.9.137.197])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d11e0816-5014-4931-8c25-8a2fcfd6e1bd;
 Fri, 27 Nov 2020 11:32:03 +0000 (UTC)
Received: from zn.tnic (p200300ec2f0ffb00d5ac34a4508c2f14.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f0f:fb00:d5ac:34a4:508c:2f14])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id C8BD21EC0323;
 Fri, 27 Nov 2020 12:32:02 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1606476722;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=ugdjThtrhUGuPIVjsesMvfGYF6EvCSTLiEMoM1kV7p0=;
	b=H3DZCuzv0GbI72+5hcncMOCBvU3bwbQjV3Xg05BJXfrUY9iAwMey2+iadrW0uv0ZNz8JVW
	H5tHbow1z7hsKPg3aLpNWsikw0IR8bez7Yd35BVVGZVFqwLMPfDKicMg1TIF3sq/9OMGYs
	yNNdq7lFily/ytxwflNIuveYBKeRHuc=
Date: Fri, 27 Nov 2020 12:31:56 +0100
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, peterz@infradead.org,
	luto@kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Deep Shah <sdeep@vmware.com>,
	"VMware, Inc." <pv-drivers@vmware.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 03/12] x86/pv: switch SWAPGS to ALTERNATIVE
Message-ID: <20201127113156.GB13163@zn.tnic>
References: <20201120114630.13552-1-jgross@suse.com>
 <20201120114630.13552-4-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20201120114630.13552-4-jgross@suse.com>

On Fri, Nov 20, 2020 at 12:46:21PM +0100, Juergen Gross wrote:
> SWAPGS is used only for interrupts coming from user mode or for
> returning to user mode. So there is no reason to use the PARAVIRT
> framework, as it can easily be replaced by an ALTERNATIVE depending
> on X86_FEATURE_XENPV.
> 
> There are several instances using the PV-aware SWAPGS macro in paths
> which are never executed in a Xen PV guest. Replace those with the
> plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Andy Lutomirski <luto@kernel.org>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/entry/entry_64.S             | 10 +++++-----
>  arch/x86/include/asm/irqflags.h       | 20 ++++++++------------
>  arch/x86/include/asm/paravirt.h       | 20 --------------------
>  arch/x86/include/asm/paravirt_types.h |  2 --
>  arch/x86/kernel/asm-offsets_64.c      |  1 -
>  arch/x86/kernel/paravirt.c            |  1 -
>  arch/x86/kernel/paravirt_patch.c      |  3 ---
>  arch/x86/xen/enlighten_pv.c           |  3 ---
>  8 files changed, 13 insertions(+), 47 deletions(-)

I love patches like this one! Give me more...

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:42:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39213.72027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kic8s-0005rj-8I; Fri, 27 Nov 2020 11:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39213.72027; Fri, 27 Nov 2020 11:42:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kic8s-0005rc-5I; Fri, 27 Nov 2020 11:42:22 +0000
Received: by outflank-mailman (input) for mailman id 39213;
 Fri, 27 Nov 2020 11:42:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kic8r-0005rX-Gn
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:42:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a267842-0997-4bf6-b8d2-075604802f00;
 Fri, 27 Nov 2020 11:42:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB3B9AC23;
 Fri, 27 Nov 2020 11:42:19 +0000 (UTC)
X-Inumbo-ID: 1a267842-0997-4bf6-b8d2-075604802f00
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606477339; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5euZdDABe2W1BEOLxYMKgD6idvirOY2OQbp4p+WrtIU=;
	b=bVW43O75tQDPYXePNjLRHMyQg9dSEuRPdkb3nHY+V1yJpeqV8HTZCaK0XtkWX/fc+NpQkM
	jmkazpmfQZBDExH2+xLLeXSmeYF7nE6OgrjgYEKI9ynjTYbvTcvMG9oCIJ8RsGDO71k3ti
	JXoCUgslmLXJuVXdGM8a+yWy6McnMbc=
Subject: Re: [PATCH v8 1/3] xen/events: modify struct evtchn layout
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4c054bdb-e74a-4ca8-ede3-8df3874b39fb@suse.com>
Date: Fri, 27 Nov 2020 12:42:20 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201125105122.3650-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 11:51, Juergen Gross wrote:
> In order to avoid latent races when updating an event channel put
> xen_consumer and pending fields in different bytes. This is no problem
> right now, but especially the pending indicator isn't used only when
> initializing an event channel (unlike xen_consumer), so any future
> addition to this byte would need to be done with a potential race kept
> in mind.
> 
> At the same time move some other fields around to have less implicit
> paddings and to keep related fields more closely together.
> 
> Finally switch struct evtchn to no longer use fixed sized types where
> not needed.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one more adjustment (can be done while committing, I guess):

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -93,31 +93,33 @@ struct evtchn
>  #define ECS_PIRQ         4 /* Channel is bound to a physical IRQ line.       */
>  #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
>  #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
> -    u8  state;             /* ECS_* */
> -    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
> -    u8  pending:1;
> -    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
> -    u32 port;
> +    unsigned char state;   /* ECS_* */
> +#ifndef NDEBUG
> +    unsigned char old_state; /* State when taking lock in write mode. */
> +#endif
> +    unsigned char xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if != 0 */
> +    unsigned int port;

evtchn_port_t, to be in line with ...

>      union {
>          struct {
>              domid_t remote_domid;
> -        } unbound;     /* state == ECS_UNBOUND */
> +        } unbound;          /* state == ECS_UNBOUND */
>          struct {
>              evtchn_port_t  remote_port;
>              struct domain *remote_dom;
> -        } interdomain; /* state == ECS_INTERDOMAIN */
> +        } interdomain;      /* state == ECS_INTERDOMAIN */
>          struct {
> -            u32            irq;
> +            unsigned int   irq;
>              evtchn_port_t  next_port;
>              evtchn_port_t  prev_port;

... three of the fields above from here.

> -        } pirq;        /* state == ECS_PIRQ */
> -        u16 virq;      /* state == ECS_VIRQ */
> +        } pirq;             /* state == ECS_PIRQ */
> +        unsigned int virq;  /* state == ECS_VIRQ */
>      } u;
> -    u8 priority;
> -#ifndef NDEBUG
> -    u8 old_state;      /* State when taking lock in write mode. */
> -#endif
> -    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
> +
> +    bool pending;                  /* FIFO event channels only. */
> +    unsigned char priority;        /* FIFO event channels only. */
> +    unsigned short notify_vcpu_id; /* VCPU for local delivery notification */

I have to admit though that I'm not fully happy with the uses of
"unsigned char" and "unsigned short". Yes, I did ask for this
change (based on ./CODING_STYLE), but I did also hint towards the
use of bitfields. If bitfields aren't an option here to achieve
the desired dense packing, perhaps this desire should be permitted
as another reason to use fixed width types. (Question goes more
towards everyone who cares than to you specifically.)

Otoh, as long as we have the odd alignment attribute on the struct,
packing density doesn't really matter all this much. I was more
hoping for this change to be a step towards us dropping that
attribute.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:52:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:52:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39222.72039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kicIp-0006sZ-91; Fri, 27 Nov 2020 11:52:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39222.72039; Fri, 27 Nov 2020 11:52:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kicIp-0006sS-5w; Fri, 27 Nov 2020 11:52:39 +0000
Received: by outflank-mailman (input) for mailman id 39222;
 Fri, 27 Nov 2020 11:52:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3N/V=FB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kicIo-0006sN-4q
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:52:38 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c27decad-27f3-4885-ad7c-850d7b55dc45;
 Fri, 27 Nov 2020 11:52:36 +0000 (UTC)
X-Inumbo-ID: c27decad-27f3-4885-ad7c-850d7b55dc45
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606477956;
  h=from:to:cc:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=1hUheRcfQ2BE0EVK9CkZsSJd2nGjkBDnrBNC4ozpTdc=;
  b=Z+W0W7uE6VPij5D6vumMMH+x8/79pxjHM3h77X6io+HgmRMQHvAx0u2m
   Tq+JdGzJCsaNz5jjXfE5bIDA1xdB1jaj9rCrBJfv8PGygGVSHn/AuK2Zu
   63qo9kya2ojPld42/gD+PPBDlcDEEfEoGQUW/dcW2UhPPfaVs3Smt6gNQ
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 9SBS035S2MePgGWMNwR2Rpx1zK9Lxuku8kmORCHCgId8JranaZU4clmeDseIqMCRB9XnxcObE3
 Z+ostc+8jzz7oAlMN4mjxHSzp/dVqsOX4iixZOroiPEDJCUlCblwp+b1c4VvnH5hLM5zSG5J8K
 Y8a65WdVX33JzZtax1lWckPOuA7/t5VgFUtDSGGHEogMXjbS8i3dB7b0/n1ZNldw9tZhiI8QFG
 X1cCOpGxxyV/AWWCMWq48VIdDRuFjavKri4pLwnuSqJx6GTshOM2+yr3/OOn5MsRUPH8JJfjjt
 qAA=
X-SBRS: None
X-MesageID: 32045698
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,374,1599537600"; 
   d="scan'208";a="32045698"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NG+fKncufNB0XET1WmnQLIiLe2sDF8gfThd6umIn/WOXjGgjK+iXQk3QcqKAHnzG4uaC8HUIaP+jwnGX2x47EeHK6/2uwtTSKZRVC6LVUm/2JL/JGYmeMNlAuYe7BSo17YgCdj/LGQU6lxPYC1vG0lys7zu8BSIpohvKm3Sc3UXGH3G1fLRZQ8HYp5u85Xv/t2/e0sKL6InQBmJbM4GY4so+jG0bVaqdxJeFPG4LaprYE/O3W+vyLJKQgQTS47FTTYqE2WJpc5YcIc5s7DXaP8TbkWf3D9nspyOZ2km8BcGrOMOQsYe0Bq4cX4j1cc/N6ei7hNHDol0PPN7OVVDA3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1hUheRcfQ2BE0EVK9CkZsSJd2nGjkBDnrBNC4ozpTdc=;
 b=E0spEj0a2wHSXKTVVmahQJxNbl8HGfQOaqI4FLntCKKQWMIGedAwAjl8/fKbj6STKZ1VxT5yhkHD4YMw4XIsP3Z7Bjqy4Dd046hbVTkLIj2FhGD0obIAIJQo14rgwiKfclXvX+v7/xHLjagWaxbyMX/5t1Y9m+Rn2IpQwLMaw5aH5V0qaBiWiz8/WGCTL/fFticikWyYfdIi0kbmeDEIlvDhLB8kLupnVKNKBkJebT6MDbmAltYwPXQx7ltIZxKseDmQGpI6rJELXSRkUC6v4/54tzXBvTutaTeo/jV/mgb+B4QEoRtgpDyMEqLkzUvAwQJvdjKviupJqylKyduo2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1hUheRcfQ2BE0EVK9CkZsSJd2nGjkBDnrBNC4ozpTdc=;
 b=RCaHAvCiVGJuloNmd7XO2J8XhnuZHDFUrwZyco2O+lPrtalzLpCCmIBxPQAnVJqb57jOKg1rvbm+Oo788to8z2FFv0z75FLXn2+L6qAqPuoIWz708PJSYzXaQTVQUrpi5K9mf3cckN+VMRSS/1X8AlMGxQg0kcswauO5LYSKH6c=
From: George Dunlap <George.Dunlap@citrix.com>
To: "open list:X86" <xen-devel@lists.xenproject.org>
CC: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, intel-xen@intel.com,
 daniel.kiper@oracle.com, Roger Pau Monne <roger.pau@citrix.com>,
 Sergey Dyasli <sergey.dyasli@citrix.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Rich Persaud <persaur@gmail.com>,
 Kevin Pearson <kevin.pearson@ortmanconsulting.com>,
 Juergen Gross <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>,
 "Ji, John" <john.ji@intel.com>, edgar.iglesias@xilinx.com,
 robin.randhawa@arm.com, Artem Mygaiev <Artem_Mygaiev@epam.com>,
 Matt Spencer <Matt.Spencer@arm.com>,
 Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>, mirela.simonovic@aggios.com,
 Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 David Woodhouse <dwmw@amazon.co.uk>, Amit Shah <amit@infradead.org>,
 Varad Gautam <varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>,
 Robert Townley <rob.townley@gmail.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>, Corey Minyard <cminyard@mvista.com>,
 Olivier Lambert <olivier.lambert@vates.fr>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ash Wilding <ash.j.wilding@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Piotr Król <piotr.krol@3mdeb.com>, Brendan Kerrigan <brendank310@gmail.com>,
 "Thierry Laurion (Insurgo)" <insurgo@riseup.net>
Subject: [ANNOUNCE] Call for agenda items for December 2020 Community Call @
 16:00 UTC 
Thread-Topic: [ANNOUNCE] Call for agenda items for December 2020 Community
 Call @ 16:00 UTC 
Thread-Index: AQHWxLPI3g+PARWmhEi8Mt/igt/QKw==
Date: Fri, 27 Nov 2020 11:52:30 +0000
Message-ID: <6A1AC739-EB53-4996-A99B-EE68358E70DB@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 00fea6ff-e686-4233-0aa0-08d892caeb6b
x-ms-traffictypediagnostic: BYAPR03MB3509:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB35098AC4D242ACA0869EF1C599F80@BYAPR03MB3509.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: cWf09v6TSVNSJbEPQrBhrJrjP9Gw8pCZ3cIaGF8G1OL7OZZoMwmIDLEvz0BOcw/MiI1w+ZGvmRau3VVGdoP1iDg0ba0rUqgg7SFXj2aUo371lFT96fOwF4YO5cM41oVEZLb39G0WpV22gJ72oOG1+PIMuCQkTqFJs/6WnmkBRQsanl2kHfuPTrwhi273l5po+ngyqMvT0LW1AZaTBfrsOMhUeNDT4xVETH2SDySTrEkGOudFp0YA2A+JPOU6SN6SCX/7evQF4hi3cLlKK+9KZI2uF2B15CrZJQzKOCnLo2R34jdw5gtk2qSsSsZzoCkZzSeoajN63Uv2Vi8VGci7eeuXc/S2Kcj3PlvWRT5VjPyvvhFMlMb1iN4eY6r4jIMaqqQFo4PLraIR1CuJRi4yJw==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(136003)(376002)(39860400002)(396003)(366004)(76116006)(66476007)(71200400001)(66556008)(66446008)(2616005)(4743002)(91956017)(4326008)(86362001)(64756008)(6486002)(5660300002)(26005)(33656002)(186003)(7416002)(7406005)(66946007)(6506007)(55236004)(54906003)(478600001)(966005)(6512007)(83380400001)(8936002)(316002)(2906002)(8676002)(6916009)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?T2JacEZLdDlpbVNKbVlSdk14MGFTVVh1ZERaMjFuays4bER6cXdPb2x5STBq?=
 =?utf-8?B?TXB4R095dFBPWnoyODg5U0N4WjZtMy95eWtyUFEwVmk0bEJuU1h5Kzc1V0dX?=
 =?utf-8?B?WHl4WmUrM2ZjZFpnbmRCSTF3RTZmNW1PRUVMdCs3cVlDWHhHUkJ0U2orb05V?=
 =?utf-8?B?U2Y0eHRjNEowbVluWFZlc3N1d1lCWXYxcFlMVlRZcVNjNnV3TU85S29oWG4x?=
 =?utf-8?B?VkZubStMS3BiSGdIRWhSYkNkWmc1anBJS3VRZ2E4SnFRVW93S1ZZZ1QzYVhU?=
 =?utf-8?B?VUtwZHlqZzBTZy9iMHYyZnY5ZXhsT3hsVElna2h1cXo4UTZ6WTNaZkhhdjJZ?=
 =?utf-8?B?clBkcHVIUXVrbEJzWE9yd05CZThBMmU5RW9uWEYzY2hFQlpCc01hNDNYNkpL?=
 =?utf-8?B?VGtnUlF3WVdkR3ZpZDdFangxWVpzV1paZWdVd01VK1Y2ZTRvdEthSHdGR2FE?=
 =?utf-8?B?MEF1aGNSajVrQzFTczdsZFdiS1A1U21zNFgxZ01hdnhQUkt5Mk9zSVBGT3U3?=
 =?utf-8?B?Mk9sN3ArOXNqTC9ldHRlMnhTVWVwWlpuUFhnUXJaamZTQ1ZPMTlwMGZ5OHdC?=
 =?utf-8?B?R01zS1IyTGI4VnVZTERyVTZWTjZkWk9PUU1tWkxxMG96d3dvaUdONVJjV01E?=
 =?utf-8?B?cE03RTV3VXh3UDJSSFl5REdjODhSQWpWV0lZazZHUWFwRDVvWkpoWXE1d25V?=
 =?utf-8?B?aENsZGFpQTBIZEZDNi91WFgxbnY3SWZCcDR6Ry8vS25xcXF0b0NPdnZHeDFH?=
 =?utf-8?B?UFhGK0dMUUVEdGxublNPM3k1QUR2ajMyNE5BM1FLenVWV0orZmZwTWJ4MHR2?=
 =?utf-8?B?OExJL3NkYjNuRnZxQXV0eGQyYlF1VmVDYWNuTnR2NlQrVmI0eFVRUVlqUjBh?=
 =?utf-8?B?VU04UE1MNGNrWlBTVTgzTVE0eDc5U3hpSnBuTXIvb2QwV0hXY044dnhvL29v?=
 =?utf-8?B?SXovVVRTak9Ob0U5Wmp6eGpkWHNMMDFJdzU2bUZpbDRXN3ZYV1VOdUI0ZzA1?=
 =?utf-8?B?ak5Sd0hoZWpQamtpSDFIaFhOMmQrQTJGMndmV2cxMFVPU0JYS2l3RVFFY1Jv?=
 =?utf-8?B?QmRhNnVxWUo2ZHdMVzY1T2RiZkZSdnZnMjVHZDA2NGVEdTl6YmxOd3plUE1T?=
 =?utf-8?B?NkMrd3l3RVhSak1kNW9wbG9MTTZKUkRQa0pqUnI4SlNOTWtVMVExaWhRQUpM?=
 =?utf-8?B?TzRFSFg5OHJqTVBKQ2ZnbEZlVkRUaWVIejhXU0pYUmVJUlNmYTF3aTFLbFRL?=
 =?utf-8?B?QVJPbHVxYVhmc0FEOHYwb0ZDelRCUURaYVVpajNySmdVb29rdnBBM0t2QzNP?=
 =?utf-8?Q?gYOag0jDByOlOai6QGHMJVYdeUuwEg6o0q?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <1DE7F32824F952489F920BEA9EEB66CF@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 00fea6ff-e686-4233-0aa0-08d892caeb6b
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Nov 2020 11:52:30.1196
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: kXpEk8eR9nBwd4DwNi377LnwPUIis1oSnLrCmI3VOyHmh+Vf0IoNNHO1fbLjZ9KnmkXe2eD/iYOpu0pMoJMaqJmDCKOGxSx1C2b8RaJ2kVE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3509
X-OriginatorOrg: citrix.com

Hi all,

The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/OPN55rXaOncupuWuHxtddzWJ/ and you can edit to add items.  Alternatively, you can reply to this mail directly.

Agenda items appreciated a few days before the call: please put your name beside items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a provisional agenda

* To allow time to switch between meetings, we'll plan on starting the agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort out technical difficulties &c

* If you want to be CC'ed please add or remove yourself from the sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George



== Dial-in Information ==
## Meeting time
16:00 - 17:00 UTC
Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=12&day=3&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://www.gotomeet.me/GeorgeDunlap

You can also dial in using your phone.
Access Code: 168-682-109

China (Toll Free): 4008 811084
Germany: +49 692 5736 7317
Poland (Toll Free): 00 800 1124759
Ukraine (Toll Free): 0 800 50 1733
United Kingdom: +44 330 221 0088
United States: +1 (571) 317-3129
Spain: +34 932 75 2004


More phone numbers
Australia: +61 2 9087 3604
Austria: +43 7 2081 5427
Argentina (Toll Free): 0 800 444 3375
Bahrain (Toll Free): 800 81 111
Belarus (Toll Free): 8 820 0011 0400
Belgium: +32 28 93 7018
Brazil (Toll Free): 0 800 047 4906
Bulgaria (Toll Free): 00800 120 4417
Canada: +1 (647) 497-9391
Chile (Toll Free): 800 395 150
Colombia (Toll Free): 01 800 518 4483
Czech Republic (Toll Free): 800 500448
Denmark: +45 32 72 03 82
Finland: +358 923 17 0568
France: +33 170 950 594
Greece (Toll Free): 00 800 4414 3838
Hong Kong (Toll Free): 30713169906-886-965
Hungary (Toll Free): (06) 80 986 255
Iceland (Toll Free): 800 7204
India (Toll Free): 18002669272
Indonesia (Toll Free): 007 803 020 5375
Ireland: +353 15 360 728
Israel (Toll Free): 1 809 454 830
Italy: +39 0 247 92 13 01
Japan (Toll Free): 0 120 663 800
Korea, Republic of (Toll Free): 00798 14 207 4914
Luxembourg (Toll Free): 800 85158
Malaysia (Toll Free): 1 800 81 6854
Mexico (Toll Free): 01 800 522 1133
Netherlands: +31 207 941 377
New Zealand: +64 9 280 6302
Norway: +47 21 93 37 51
Panama (Toll Free): 00 800 226 7928
Peru (Toll Free): 0 800 77023
Philippines (Toll Free): 1 800 1110 1661
Portugal (Toll Free): 800 819 575
Romania (Toll Free): 0 800 410 029
Russian Federation (Toll Free): 8 800 100 6203
Saudi Arabia (Toll Free): 800 844 3633
Singapore (Toll Free): 18007231323
South Africa (Toll Free): 0 800 555 447
Sweden: +46 853 527 827
Switzerland: +41 225 4599 78
Taiwan (Toll Free): 0 800 666 854
Thailand (Toll Free): 001 800 011 023
Turkey (Toll Free): 00 800 4488 23683
United Arab Emirates (Toll Free): 800 044 40439
Uruguay (Toll Free): 0004 019 1018
Viet Nam (Toll Free): 122 80 481

First GoToMeeting? Let's do a quick system check:

https://link.gotomeeting.com/system-check


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 11:57:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 11:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39230.72051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kicNa-00077M-0C; Fri, 27 Nov 2020 11:57:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39230.72051; Fri, 27 Nov 2020 11:57:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kicNZ-00077F-Sa; Fri, 27 Nov 2020 11:57:33 +0000
Received: by outflank-mailman (input) for mailman id 39230;
 Fri, 27 Nov 2020 11:57:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kicNY-00076y-VC
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 11:57:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kicNY-0002u2-02; Fri, 27 Nov 2020 11:57:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kicNX-0005hQ-NF; Fri, 27 Nov 2020 11:57:31 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bgm1mDmHMF5DQP5MuWlxGqX+nw5RQiXrDAPsLGSpb4Y=; b=NvZGBEYanb+paG56MicZjoiDG6
	PWXf1iJze1pIsWhYEpRfQinHyUEB9cneF1H7ALS2ietUK4K3DN68921VigpkwAWRlTzTZZh7x+PcV
	2Tb9Wv1vP6MYCS2Ke2U7kgqUTDot6Lz/1syxtMJM3yzGjtkskoOizUBpW6ZrUyZPD+Co=;
Subject: Re: [PATCH v8 1/3] xen/events: modify struct evtchn layout
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-2-jgross@suse.com>
 <4c054bdb-e74a-4ca8-ede3-8df3874b39fb@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2b135f7e-1222-9267-7755-6fe46f4f2fd8@xen.org>
Date: Fri, 27 Nov 2020 11:57:29 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <4c054bdb-e74a-4ca8-ede3-8df3874b39fb@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/11/2020 11:42, Jan Beulich wrote:
> On 25.11.2020 11:51, Juergen Gross wrote:
>> In order to avoid latent races when updating an event channel put
>> xen_consumer and pending fields in different bytes. This is no problem
>> right now, but especially the pending indicator isn't used only when
>> initializing an event channel (unlike xen_consumer), so any future
>> addition to this byte would need to be done with a potential race kept
>> in mind.
>>
>> At the same time move some other fields around to have less implicit
>> paddings and to keep related fields more closely together.
>>
>> Finally switch struct evtchn to no longer use fixed sized types where
>> not needed.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one more adjustment (can be done while committing, I guess):
> 
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -93,31 +93,33 @@ struct evtchn
>>   #define ECS_PIRQ         4 /* Channel is bound to a physical IRQ line.       */
>>   #define ECS_VIRQ         5 /* Channel is bound to a virtual IRQ line.        */
>>   #define ECS_IPI          6 /* Channel is bound to a virtual IPI line.        */
>> -    u8  state;             /* ECS_* */
>> -    u8  xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if nonzero */
>> -    u8  pending:1;
>> -    u16 notify_vcpu_id;    /* VCPU for local delivery notification */
>> -    u32 port;
>> +    unsigned char state;   /* ECS_* */
>> +#ifndef NDEBUG
>> +    unsigned char old_state; /* State when taking lock in write mode. */
>> +#endif
>> +    unsigned char xen_consumer:XEN_CONSUMER_BITS; /* Consumer in Xen if != 0 */
>> +    unsigned int port;
> 
> evtchn_port_t, to be in line with ...
> 
>>       union {
>>           struct {
>>               domid_t remote_domid;
>> -        } unbound;     /* state == ECS_UNBOUND */
>> +        } unbound;          /* state == ECS_UNBOUND */
>>           struct {
>>               evtchn_port_t  remote_port;
>>               struct domain *remote_dom;
>> -        } interdomain; /* state == ECS_INTERDOMAIN */
>> +        } interdomain;      /* state == ECS_INTERDOMAIN */
>>           struct {
>> -            u32            irq;
>> +            unsigned int   irq;
>>               evtchn_port_t  next_port;
>>               evtchn_port_t  prev_port;
> 
> ... three of the fields above from here.
> 
>> -        } pirq;        /* state == ECS_PIRQ */
>> -        u16 virq;      /* state == ECS_VIRQ */
>> +        } pirq;             /* state == ECS_PIRQ */
>> +        unsigned int virq;  /* state == ECS_VIRQ */
>>       } u;
>> -    u8 priority;
>> -#ifndef NDEBUG
>> -    u8 old_state;      /* State when taking lock in write mode. */
>> -#endif
>> -    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
>> +
>> +    bool pending;                  /* FIFO event channels only. */
>> +    unsigned char priority;        /* FIFO event channels only. */
>> +    unsigned short notify_vcpu_id; /* VCPU for local delivery notification */
> 
> I have to admit though that I'm not fully happy with the uses of
> "unsigned char" and "unsigned short". Yes, I did ask for this
> change (based on ./CODING_STYLE), but I did also hint towards the
> use of bitfields. If bitfields aren't an option here to achieve
> the desired dense packing, perhaps this desire should be permitted
> as another reason to use fixed width types. (Question goes more
> towards everyone who cares than to you specifically.)

I think uint*_t would make sense here because they are storing
information received from a hypercall (all the fields should be
fixed-size there).

But I am also fine with the current patch as it is still readable.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 12:11:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 12:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39248.72063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kicbK-0000ZS-KC; Fri, 27 Nov 2020 12:11:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39248.72063; Fri, 27 Nov 2020 12:11:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kicbK-0000ZL-H5; Fri, 27 Nov 2020 12:11:46 +0000
Received: by outflank-mailman (input) for mailman id 39248;
 Fri, 27 Nov 2020 12:11:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MDOv=FB=redhat.com=cohuck@srs-us1.protection.inumbo.net>)
 id 1kicbJ-0000ZG-Nf
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 12:11:45 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 230f6e01-c1ee-43dd-a1b1-914be9ff5cf4;
 Fri, 27 Nov 2020 12:11:44 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-516-a9KGH4giPU2E32VOYNRdxg-1; Fri, 27 Nov 2020 07:11:40 -0500
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5A35180EFAC;
 Fri, 27 Nov 2020 12:11:38 +0000 (UTC)
Received: from gondolin (ovpn-113-65.ams2.redhat.com [10.36.113.65])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B4EF66085D;
 Fri, 27 Nov 2020 12:11:32 +0000 (UTC)
X-Inumbo-ID: 230f6e01-c1ee-43dd-a1b1-914be9ff5cf4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606479103;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AThlirL9N+ST9Gm1ZnxC6RPmbKOXYMe8XRrB5KU7zuo=;
	b=LLiWjBr9nHaSriR+QHR+7f9DfuBd/L1nLc3siRz9dS8enquuC+qg4VanIX4XMvXm4OV+GG
	vMHYUkxG5m3PiDJsGvSUzkfPiwAkOzJ/5Ji6oCcWKu/4J2Fc9IHeGh/0xNK78C5GIUk5pm
	NL2RRzsvG8DeD5YVQYM2RrEcP6f4rrs=
X-MC-Unique: a9KGH4giPU2E32VOYNRdxg-1
Date: Fri, 27 Nov 2020 13:11:30 +0100
From: Cornelia Huck <cohuck@redhat.com>
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-devel@nongnu.org, Thomas Huth <thuth@redhat.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Richard
 Henderson <richard.henderson@linaro.org>, Roman Bolshakov
 <r.bolshakov@yadro.com>, Gerd Hoffmann <kraxel@redhat.com>,
 xen-devel@lists.xenproject.org, Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Claudio Fontana <cfontana@suse.de>
Subject: Re: [PATCH v2 4/6] xen: Delete xen_available() function
Message-ID: <20201127131130.4a4d34ef.cohuck@redhat.com>
In-Reply-To: <20201125205636.3305257-5-ehabkost@redhat.com>
References: <20201125205636.3305257-1-ehabkost@redhat.com>
	<20201125205636.3305257-5-ehabkost@redhat.com>
Organization: Red Hat GmbH
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=cohuck@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Wed, 25 Nov 2020 15:56:34 -0500
Eduardo Habkost <ehabkost@redhat.com> wrote:

> The function can be replaced with accel_available("xen").
> 
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> ---
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> Cc: Richard Henderson <richard.henderson@linaro.org>
> Cc: Claudio Fontana <cfontana@suse.de>
> Cc: Roman Bolshakov <r.bolshakov@yadro.com>
> ---
>  include/sysemu/arch_init.h | 2 --
>  softmmu/arch_init.c        | 9 ---------
>  softmmu/vl.c               | 6 +++---
>  3 files changed, 3 insertions(+), 14 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 12:41:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 12:41:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39258.72081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kid42-0003T9-7i; Fri, 27 Nov 2020 12:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39258.72081; Fri, 27 Nov 2020 12:41:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kid42-0003T2-3y; Fri, 27 Nov 2020 12:41:26 +0000
Received: by outflank-mailman (input) for mailman id 39258;
 Fri, 27 Nov 2020 12:41:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kid40-0003Sx-Kp
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 12:41:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fe60f9b-f8f4-4bfc-8ab1-f66f683997c9;
 Fri, 27 Nov 2020 12:41:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 11110ABD7;
 Fri, 27 Nov 2020 12:41:22 +0000 (UTC)
X-Inumbo-ID: 1fe60f9b-f8f4-4bfc-8ab1-f66f683997c9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606480882; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gpTXQ5Mz0ZKtF3bYca2oj+FdcO+JKV15TV7OhmMqy5c=;
	b=Z8ZLMoFlzKPKPbSENzXCarLqAfJyA/Y8OjwwYPHkvlu/pPgA2Xfm4Rv7I9AgC8Sv6sCtJz
	6fNbQef5OqFfsKQevpltlM69PUD6lV22C6WroEQ8tJUd5JBKiUQ6Da5DUdQ5coOMVHHWN2
	s0W33MAyFiGcOhAdo6Nx3gfmSsZ9EYI=
Subject: Re: [PATCH v8 1/3] xen/events: modify struct evtchn layout
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-2-jgross@suse.com>
 <4c054bdb-e74a-4ca8-ede3-8df3874b39fb@suse.com>
 <2b135f7e-1222-9267-7755-6fe46f4f2fd8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <26b90736-61eb-8fa3-54e4-0c3ac07d234e@suse.com>
Date: Fri, 27 Nov 2020 13:41:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2b135f7e-1222-9267-7755-6fe46f4f2fd8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.11.2020 12:57, Julien Grall wrote:
> On 27/11/2020 11:42, Jan Beulich wrote:
>> I have to admit though that I'm not fully happy with the uses of
>> "unsigned char" and "unsigned short". Yes, I did ask for this
>> change (based on ./CODING_STYLE), but I did also hint towards the
>> use of bitfields. If bitfields aren't an option here to achieve
>> the desired dense packing, perhaps this desire should be permitted
>> as another reason to use fixed width types. (Question goes more
>> towards everyone who cares than to you specifically.)
> 
> I think uint*_t would make sense here because they are storing
> information received from a hypercall (all the fields should be
> fixed-size there).

"storing information received from a hypercall" is specifically
not a reason to use fixed width types, imo. All of uint8_t,
uint16_t, and uint32_t values coming from hypercalls are fine to
be passed around and stored as unsigned int, just as an example.
It is solely the packing aspect which might matter here.

> But I am also fine with the current patch as it is still readable.

Good, thanks for checking.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:10:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:10:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39266.72093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidW0-0006Hr-He; Fri, 27 Nov 2020 13:10:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39266.72093; Fri, 27 Nov 2020 13:10:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidW0-0006Hk-EZ; Fri, 27 Nov 2020 13:10:20 +0000
Received: by outflank-mailman (input) for mailman id 39266;
 Fri, 27 Nov 2020 13:10:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kidVy-0006Hf-9j
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:10:18 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d56ab460-164c-47e9-b0a4-d1d8c51af2d1;
 Fri, 27 Nov 2020 13:10:17 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARDA9mg023757
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 14:10:10 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id B8DCC2E9CAC; Fri, 27 Nov 2020 14:10:04 +0100 (MET)
X-Inumbo-ID: d56ab460-164c-47e9-b0a4-d1d8c51af2d1
Date: Fri, 27 Nov 2020 14:10:04 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127131004.GH1717@antioche.eu.org>
References: <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <20201127111904.GG1717@antioche.eu.org>
 <89aecc1b-bfe5-26fb-9d11-bec4f0aa7b84@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <89aecc1b-bfe5-26fb-9d11-bec4f0aa7b84@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 14:10:11 +0100 (MET)

On Fri, Nov 27, 2020 at 12:21:23PM +0100, Jan Beulich wrote:
> On 27.11.2020 12:19, Manuel Bouyer wrote:
> > On Fri, Nov 27, 2020 at 11:59:48AM +0100, Roger Pau Monné wrote:
> >>>
> >>> I had to restart from a clean source tree to apply this patch, so to make
> >>> sure we're in sync I attached the diff from my sources
> >>
> >> I'm quite confused about why your trace don't even get into
> >> hvm_do_IRQ_dpci, so I've added some more debug info.
> >>
> >> Here is the new patch, sorry for so many rounds of testing.
> > 
> > No problem, it's expected for this kind of debug :)
> > 
> > http://www-soc.lip6.fr/~bouyer/xen-log11.txt
> 
> Hmm, this one now has hvm_do_IRQ_dpci entries. Maybe the previous one
> was again from a stale binary?

But I do see hvm_do_IRQ_dpci in the previous one too (xen-log10.txt)

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:11:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:11:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39272.72104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidXY-0006PH-Sp; Fri, 27 Nov 2020 13:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39272.72104; Fri, 27 Nov 2020 13:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidXY-0006PA-Ps; Fri, 27 Nov 2020 13:11:56 +0000
Received: by outflank-mailman (input) for mailman id 39272;
 Fri, 27 Nov 2020 13:11:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kidXX-0006P5-Gr
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:11:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edaf4074-d24e-4c8f-b6a7-9a4507984c69;
 Fri, 27 Nov 2020 13:11:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C3ABAC23;
 Fri, 27 Nov 2020 13:11:53 +0000 (UTC)
X-Inumbo-ID: edaf4074-d24e-4c8f-b6a7-9a4507984c69
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606482713; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3v2UGyUuI7U3dO4RaooUybwMo/sdlaSTWY5yUtOWWbc=;
	b=tf2O7SeDJ+FW5H+sJpQ4wTeP9jKZxN3RUoEjTtgO4C43SIyZfF6QjwMYz0fDk7WADL+h2l
	5qiIZIJB+jQyJ1dhB++xydfRPX2YS7NIuffb0pPIwTQLJ52zus2VS7SF1bm7m7z1GSFlpS
	ORNwAIJh8ikDF9g+6T81kHpir+XbXNU=
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <68bbc88a-075e-327b-47e1-1dca1f9a3772@suse.com>
Date: Fri, 27 Nov 2020 14:11:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201125105122.3650-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 11:51, Juergen Gross wrote:
> Two cpus entering evtchn_fifo_set_pending() for the same event channel
> can race in case the first one gets interrupted after setting
> EVTCHN_FIFO_PENDING and when the other one manages to set
> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
> lead to evtchn_check_pollers() being called before the event is put
> properly into the queue, resulting eventually in the guest not seeing
> the event pending and thus blocking forever afterwards.
> 
> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
> lock") made the race just more obvious, while the fifo event channel
> implementation had this race from the beginning when an unmask operation
> was running in parallel with an event channel send operation.

I notice you've altered the Fixes: tag, but you still say "from
the beginning" here.

> Using a spinlock for the per event channel lock is problematic due to
> some paths needing to take the lock are called with interrupts off, so
> the lock would need to disable interrupts, which in turn breaks some
> use cases related to vm events.

This reads as if it got put here by mistake. May I suggest starting
with "Using ... had turned out problematic ..." and then adding
something like "Therefore that lock was switched to an rw one"?

> For avoiding this race the queue locking in evtchn_fifo_set_pending()
> needs to be reworked to cover the test of EVTCHN_FIFO_PENDING,
> EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED, too. Additionally when an
> event channel needs to change queues both queues need to be locked
> initially.

"... in order to avoid having a window with no lock held at all"?

> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>          return;
>      }
>  
> +    /*
> +     * Lock all queues related to the event channel (in case of a queue change
> +     * this might be two).
> +     * It is mandatory to do that before setting and testing the PENDING bit
> +     * and to hold the current queue lock until the event has put into the

"has been" or "was" I think?

With adjustments along these lines (which I guess could again be
done while committing) or reasons against supplied
Reviewed-by: Jan Beulich <jbeulich@suse.com>

One aspect which I wonder whether it wants adding to the description
is that this change makes a bad situation worse: Back at the time
per-channel locks were added to avoid the bottleneck of the per-
domain event lock. While a per-queue lock's scope at least isn't the
entire domain, their use on the send path still has been preventing
full parallelism here. And this patch widens the lock holding region.
If at least there was a fast path not requiring any locking ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:13:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39280.72117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidZB-0006Zp-8Y; Fri, 27 Nov 2020 13:13:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39280.72117; Fri, 27 Nov 2020 13:13:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidZB-0006Zi-5V; Fri, 27 Nov 2020 13:13:37 +0000
Received: by outflank-mailman (input) for mailman id 39280;
 Fri, 27 Nov 2020 13:13:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kidZ9-0006Zc-WC
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:13:36 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c07cea4a-6c41-4386-94b8-96cb46dddb46;
 Fri, 27 Nov 2020 13:13:34 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARDDTrD023694
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 14:13:30 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 487CE2E9CAC; Fri, 27 Nov 2020 14:13:24 +0100 (MET)
X-Inumbo-ID: c07cea4a-6c41-4386-94b8-96cb46dddb46
Date: Fri, 27 Nov 2020 14:13:24 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127131324.GJ1717@antioche.eu.org>
References: <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="dTy3Mrz/UPE2dbVg"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 14:13:30 +0100 (MET)


--dTy3Mrz/UPE2dbVg
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
> On 27.11.2020 11:59, Roger Pau Monné wrote:
> > --- a/xen/arch/x86/hvm/irq.c
> > +++ b/xen/arch/x86/hvm/irq.c
> > @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
> >       * to know if the GSI is pending or not.
> >       */
> >      spin_lock(&d->arch.hvm.irq_lock);
> > +    if ( gsi == TRACK_IRQ )
> > +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
> > +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
> 
> This produces
> 
> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
> 
> Since the logging occurs ahead of the call to assert_gsi(), it
> means we don't signal anything to Dom0, because according to our
> records there's still an IRQ in flight. Unfortunately we only
> see the tail of the trace, so it's not possible to tell how / when
> we got into this state.
> 
> Manuel - is this the only patch you have in place? Or did you keep
> any prior ones? Iirc there once was one where Roger also suppressed
> some de-assert call.

Yes, I have some of the previous patches (otherwise Xen panics).
Attached are the diffs I currently have.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

--dTy3Mrz/UPE2dbVg
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="xen.diff"

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 38ac5fb6c7..9db3dcc957 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * to know if the GSI is pending or not.
      */
     spin_lock(&d->arch.hvm.irq_lock);
+    if ( gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
+                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
+
     if ( trig == VIOAPIC_EDGE_TRIG || !hvm_irq->gsi_assert_count[gsi] )
     {
         if ( trig == VIOAPIC_LEVEL_TRIG )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..e6748e0649 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -257,7 +257,11 @@ static void vioapic_write_redirent(
         vlapic_adjust_i8259_target(d);
     }
     else if ( ent.fields.trig_mode == VIOAPIC_EDGE_TRIG )
+    {
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC set edge trigger irq %u\n", gsi);
         pent->fields.remote_irr = 0;
+    }
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
               hvm_irq->gsi_assert_count[idx] )
@@ -278,6 +282,10 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u\n", gsi);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +293,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +416,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                              irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e66fa99ec7..c28025657d 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..c3a75d98a7 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,12 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u nr_guests %u ack_type %u in_flight %u\n",
+                          desc->irq, action->nr_guests, action->ack_type,
+                          action->in_flight);
+
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..86c2db9da0 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -174,7 +174,6 @@ static void pt_irq_time_out(void *data)
          * In the identity mapped case the EOI can also be done now, this way
          * the iteration over the list of domain pirqs is avoided.
          */
-        hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
         spin_unlock(&irq_map->dom->event_lock);
@@ -828,6 +827,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
+    if ( pirq->pirq == TRACK_IRQ )
+        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);
+
     pirq_dpci->masked = 1;
     raise_softirq_for(pirq_dpci);
     return 1;
@@ -1010,6 +1012,9 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u\n", guest_gsi);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..3eb6102a61 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -370,7 +370,7 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
 
             entry->updated = false;
         }
-        else
+        else if ( msix->enabled )
             vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
 
         break;
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..871810134f 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 34
+
 #endif /* __XEN_IRQ_H__ */

--dTy3Mrz/UPE2dbVg--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39290.72129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidaS-0006hi-LC; Fri, 27 Nov 2020 13:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39290.72129; Fri, 27 Nov 2020 13:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidaS-0006hb-Gh; Fri, 27 Nov 2020 13:14:56 +0000
Received: by outflank-mailman (input) for mailman id 39290;
 Fri, 27 Nov 2020 13:14:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kidaR-0006hU-7o
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:14:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19032c15-ab24-4234-8e6d-430bb29bbe07;
 Fri, 27 Nov 2020 13:14:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0C7AAC23;
 Fri, 27 Nov 2020 13:14:53 +0000 (UTC)
X-Inumbo-ID: 19032c15-ab24-4234-8e6d-430bb29bbe07
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606482893; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8p8rG6mNNxRdp60uFi/2EGcwbvcytIrxUNojyBnQQtA=;
	b=lrKWoTWcYKmrtAVHtQCkGmDw2zVDiQ5PNNBfIkYcmmJvIIHfACwW2jqanyOZFSYFZH3+SF
	+EDYeDzL1eg7mD9CzP3QOWA2h2VvnYOrg0IXHJiiODt7AY+zmKD9ednRYYIOWFNKupaS0o
	SScNvCZ0uCy14MAHQXQbIUsgDc+iAPQ=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <20201127111904.GG1717@antioche.eu.org>
 <89aecc1b-bfe5-26fb-9d11-bec4f0aa7b84@suse.com>
 <20201127131004.GH1717@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <66716bca-4187-30c2-aba7-f6f973b194e4@suse.com>
Date: Fri, 27 Nov 2020 14:14:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127131004.GH1717@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 14:10, Manuel Bouyer wrote:
> On Fri, Nov 27, 2020 at 12:21:23PM +0100, Jan Beulich wrote:
>> On 27.11.2020 12:19, Manuel Bouyer wrote:
>>> On Fri, Nov 27, 2020 at 11:59:48AM +0100, Roger Pau Monné wrote:
>>>>>
>>>>> I had to restart from a clean source tree to apply this patch, so to make
>>>>> sure we're in sync I attached the diff from my sources
>>>>
>>>> I'm quite confused about why your traces don't even get into
>>>> hvm_do_IRQ_dpci, so I've added some more debug info.
>>>>
>>>> Here is the new patch, sorry for so many rounds of testing.
>>>
>>> No problem, it's expected for this kind of debug :)
>>>
>>> http://www-soc.lip6.fr/~bouyer/xen-log11.txt
>>
>> Hmm, this one now has hvm_do_IRQ_dpci entries. Maybe the previous one
>> was again from a stale binary?
> 
> But I do see hvm_do_IRQ_dpci in the previous one too (xen-log10.txt)

Ah yes. In your respective mail, though, the link said 9 instead of 10.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:15:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:15:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39293.72141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidau-0006oi-3B; Fri, 27 Nov 2020 13:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39293.72141; Fri, 27 Nov 2020 13:15:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidat-0006oa-Vn; Fri, 27 Nov 2020 13:15:23 +0000
Received: by outflank-mailman (input) for mailman id 39293;
 Fri, 27 Nov 2020 13:15:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kidar-0006oI-SV; Fri, 27 Nov 2020 13:15:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kidar-0004W6-N7; Fri, 27 Nov 2020 13:15:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kidar-0004Ge-EY; Fri, 27 Nov 2020 13:15:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kidar-0001sy-E2; Fri, 27 Nov 2020 13:15:21 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L5hEP5k++JlxrTNDMAEbJAjFKvv3lxLBsTNyU+4KuVw=; b=DGKqCExo8pIt7dztwMXfH8jDEA
	FEdxxVLk1QmE6HI5tgIPPcVena0DM4Msn15rVdu9sdlVjOk9vVAK7GhPmi10wRZ854aPp7SRVHXcz
	187BWB10smtfrLknq6Dl4Lnd3+y5l2KCQmywpLXhnA/gGmSq+G/0dgudK61zOUeUMLTo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157044-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157044: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:guest-start/redhat.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
X-Osstest-Versions-That:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 13:15:21 +0000

flight 157044 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157044/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 14 guest-start/redhat.repeat fail pass in 157028
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 157028
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157028
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail pass in 157028

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157028
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157028
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157028
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157028
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157028
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157028
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157028
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157028
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157028
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157028
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157028
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593
baseline version:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593

Last test of basis   157044  2020-11-27 01:51:47 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:18:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:18:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39307.72155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kideB-00077F-JA; Fri, 27 Nov 2020 13:18:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39307.72155; Fri, 27 Nov 2020 13:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kideB-000778-GF; Fri, 27 Nov 2020 13:18:47 +0000
Received: by outflank-mailman (input) for mailman id 39307;
 Fri, 27 Nov 2020 13:18:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kideB-000773-0t
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:18:47 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d120b587-f368-4529-9839-6c2163d600ce;
 Fri, 27 Nov 2020 13:18:46 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARDIfCJ023726
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 14:18:42 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 5A98B2E9CAC; Fri, 27 Nov 2020 14:18:36 +0100 (MET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
	id 1kideB-000773-0t
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:18:47 +0000
X-Inumbo-ID: d120b587-f368-4529-9839-6c2163d600ce
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d120b587-f368-4529-9839-6c2163d600ce;
	Fri, 27 Nov 2020 13:18:46 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
	by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id 0ARDIfCJ023726
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
	Fri, 27 Nov 2020 14:18:42 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
	id 5A98B2E9CAC; Fri, 27 Nov 2020 14:18:36 +0100 (MET)
Date: Fri, 27 Nov 2020 14:18:36 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127131836.GM1717@antioche.eu.org>
References: <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <20201127111904.GG1717@antioche.eu.org>
 <89aecc1b-bfe5-26fb-9d11-bec4f0aa7b84@suse.com>
 <20201127131004.GH1717@antioche.eu.org>
 <66716bca-4187-30c2-aba7-f6f973b194e4@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <66716bca-4187-30c2-aba7-f6f973b194e4@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 14:18:42 +0100 (MET)

On Fri, Nov 27, 2020 at 02:14:55PM +0100, Jan Beulich wrote:
> Ah yes. In your respective mail the link said 9 though instead of 10.

Oops, sorry. I forgot to update the link, I guess...

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:18:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:18:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39308.72168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kideK-0007AC-TR; Fri, 27 Nov 2020 13:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39308.72168; Fri, 27 Nov 2020 13:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kideK-0007A4-PE; Fri, 27 Nov 2020 13:18:56 +0000
Received: by outflank-mailman (input) for mailman id 39308;
 Fri, 27 Nov 2020 13:18:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kideJ-00079o-Qw
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:18:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e109bb4-c955-4260-aada-4e8269ab984f;
 Fri, 27 Nov 2020 13:18:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34AEBABD7;
 Fri, 27 Nov 2020 13:18:53 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kideJ-00079o-Qw
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:18:55 +0000
X-Inumbo-ID: 4e109bb4-c955-4260-aada-4e8269ab984f
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4e109bb4-c955-4260-aada-4e8269ab984f;
	Fri, 27 Nov 2020 13:18:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606483133; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8g7q5pCrLfu96e1+ucv/jpHymygtJQ9Bt4QSJe70gz0=;
	b=W/ixNgjNCAEsWAEzlqTx4QSc/w0EEvGniqciLxWMT10ckKSyeHlsbGiVlM6Uap2pybEc1f
	6QCiwBHkztVDhaiZG7IgL+hssudgGQBCxy0NyAC6Tec7jVG6kCfrPuZu2CvfjBJtJbw/ml
	vDWTPvfVbdvB/UqG0x6x2ROPZByOQwc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 34AEBABD7;
	Fri, 27 Nov 2020 13:18:53 +0000 (UTC)
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201124150842.GN2020@antioche.eu.org>
 <20201124154917.l3jwa6w4ejumjuqw@Air-de-Roger>
 <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
Date: Fri, 27 Nov 2020 14:18:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127131324.GJ1717@antioche.eu.org>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 14:13, Manuel Bouyer wrote:
> On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
>> On 27.11.2020 11:59, Roger Pau Monné wrote:
>>> --- a/xen/arch/x86/hvm/irq.c
>>> +++ b/xen/arch/x86/hvm/irq.c
>>> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
>>>       * to know if the GSI is pending or not.
>>>       */
>>>      spin_lock(&d->arch.hvm.irq_lock);
>>> +    if ( gsi == TRACK_IRQ )
>>> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
>>> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
>>
>> This produces
>>
>> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
>>
>> Since the logging occurs ahead of the call to assert_gsi(), it
>> means we don't signal anything to Dom0, because according to our
>> records there's still an IRQ in flight. Unfortunately we only
>> see the tail of the trace, so it's not possible to tell how / when
>> we got into this state.
>>
>> Manuel - is this the only patch you have in place? Or did you keep
>> any prior ones? Iirc there once was one where Roger also suppressed
>> some de-assert call.
> 
> Yes, I have some of the previous patches (otherwise Xen panics).
> Attached are the diffs I currently have.

I think you want to delete the hunk dropping the call to
hvm_gsi_deassert() from pt_irq_time_out(). Iirc it was that
addition which changed the behavior to just a single IRQ ever
making it into Dom0. And it ought to be only the change to
msix_write() which is needed to avoid the panic.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:27:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39324.72180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidmb-00089L-PB; Fri, 27 Nov 2020 13:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39324.72180; Fri, 27 Nov 2020 13:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidmb-00089E-M8; Fri, 27 Nov 2020 13:27:29 +0000
Received: by outflank-mailman (input) for mailman id 39324;
 Fri, 27 Nov 2020 13:27:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kidma-000899-B4
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:27:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75cadbde-4305-485a-804a-48616b6f99ad;
 Fri, 27 Nov 2020 13:27:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BB492ABD7;
 Fri, 27 Nov 2020 13:27:26 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kidma-000899-B4
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:27:28 +0000
X-Inumbo-ID: 75cadbde-4305-485a-804a-48616b6f99ad
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 75cadbde-4305-485a-804a-48616b6f99ad;
	Fri, 27 Nov 2020 13:27:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606483646; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=51EYpbdjdC+tNeKD+XfzFsK9Ys9IXmbL6xmukn3jCoQ=;
	b=WbWLvy72YS3uNoBw5PSbszdpDrKCHuXCKw06hQcm3F71o3TD3K3MbLFTzZxQa7XRzWVEgu
	wwmhGk4f1x07/3p4lcPr6H1YVZs6YyRSHKRjPvBp2fauud6jnAujeu4MxrQ/fit0Z2qQO1
	qdC98JaLWBb+zdYjIk35vhxlk7zksbw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id BB492ABD7;
	Fri, 27 Nov 2020 13:27:26 +0000 (UTC)
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dc81e650-166f-99c2-0982-58c0b89c1eb4@suse.com>
Date: Fri, 27 Nov 2020 14:27:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201125105122.3650-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 11:51, Juergen Gross wrote:
> --- a/xen/common/event_fifo.c
> +++ b/xen/common/event_fifo.c
> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>          return;
>      }
>  
> +    /*
> +     * Control block not mapped.  The guest must not unmask an
> +     * event until the control block is initialized, so we can
> +     * just drop the event.
> +     */
> +    if ( unlikely(!v->evtchn_fifo->control_block) )
> +    {
> +        printk(XENLOG_G_WARNING
> +               "%pv has no FIFO event channel control block\n", v);
> +        return;
> +    }

This results in bypassing the setting of PENDING and the possible
call to evtchn_check_pollers(). It may in particular be the case
that a very special purpose guest uses event channels just for
waking up pollers, which - afaict - then doesn't require setting
up a control block. To give an example, I could easily see an XTF
test avoid that step if indeed it's unnecessary.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:31:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:31:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39332.72192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidqX-0000f2-F7; Fri, 27 Nov 2020 13:31:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39332.72192; Fri, 27 Nov 2020 13:31:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidqX-0000ev-Bm; Fri, 27 Nov 2020 13:31:33 +0000
Received: by outflank-mailman (input) for mailman id 39332;
 Fri, 27 Nov 2020 13:31:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kidqW-0000eq-9i
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:31:32 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2e41ab0-119d-43d7-8b7c-8d4a451e69a8;
 Fri, 27 Nov 2020 13:31:31 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARDVQp6000255
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 14:31:27 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 3B00C2E9CAC; Fri, 27 Nov 2020 14:31:21 +0100 (MET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
	id 1kidqW-0000eq-9i
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:31:32 +0000
X-Inumbo-ID: b2e41ab0-119d-43d7-8b7c-8d4a451e69a8
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b2e41ab0-119d-43d7-8b7c-8d4a451e69a8;
	Fri, 27 Nov 2020 13:31:31 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
	by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id 0ARDVQp6000255
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
	Fri, 27 Nov 2020 14:31:27 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
	id 3B00C2E9CAC; Fri, 27 Nov 2020 14:31:21 +0100 (MET)
Date: Fri, 27 Nov 2020 14:31:21 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127133121.GN1717@antioche.eu.org>
References: <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 14:31:27 +0100 (MET)

On Fri, Nov 27, 2020 at 02:18:54PM +0100, Jan Beulich wrote:
> On 27.11.2020 14:13, Manuel Bouyer wrote:
> > On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
> >> On 27.11.2020 11:59, Roger Pau Monné wrote:
> >>> --- a/xen/arch/x86/hvm/irq.c
> >>> +++ b/xen/arch/x86/hvm/irq.c
> >>> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
> >>>       * to know if the GSI is pending or not.
> >>>       */
> >>>      spin_lock(&d->arch.hvm.irq_lock);
> >>> +    if ( gsi == TRACK_IRQ )
> >>> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
> >>> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
> >>
> >> This produces
> >>
> >> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
> >>
> >> Since the logging occurs ahead of the call to assert_gsi(), it
> >> means we don't signal anything to Dom0, because according to our
> >> records there's still an IRQ in flight. Unfortunately we only
> >> see the tail of the trace, so it's not possible to tell how / when
> >> we got into this state.
> >>
> >> Manuel - is this the only patch you have in place? Or did you keep
> >> any prior ones? Iirc there once was one where Roger also suppressed
> >> some de-assert call.
> > 
> > Yes, I have some of the previous patches (otherwise Xen panics).
> > Attached are the diffs I currently have.
> 
> I think you want to delete the hunk dropping the call to
> hvm_gsi_deassert() from pt_irq_time_out(). Iirc it was that
> addition which changed the behavior to just a single IRQ ever
> making it into Dom0. And it ought to be only the change to
> msix_write() which is needed to avoid the panic.

Yes, I did keep the hvm_gsi_deassert() patch because I expected it
to make things easier, as it allows interacting with Xen without changing
interrupt states.
I removed it; here's a new trace:

http://www-soc.lip6.fr/~bouyer/xen-log12.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:31:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:31:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39333.72204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidqe-0000hZ-O5; Fri, 27 Nov 2020 13:31:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39333.72204; Fri, 27 Nov 2020 13:31:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidqe-0000hQ-Js; Fri, 27 Nov 2020 13:31:40 +0000
Received: by outflank-mailman (input) for mailman id 39333;
 Fri, 27 Nov 2020 13:31:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kidqd-0000hD-Vn
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:31:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a92843e-e17a-47c7-a52a-b7cb9a48a700;
 Fri, 27 Nov 2020 13:31:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 00AE4AC90;
 Fri, 27 Nov 2020 13:31:37 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kidqd-0000hD-Vn
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:31:40 +0000
X-Inumbo-ID: 4a92843e-e17a-47c7-a52a-b7cb9a48a700
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4a92843e-e17a-47c7-a52a-b7cb9a48a700;
	Fri, 27 Nov 2020 13:31:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606483898; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LN3Ob1rDFoFX3TlASGbBlsqWYmGkMsYCozvzQ697bjU=;
	b=GFRhAQvKJt9IwsE9cBena4hdJa24dGO+iNcFFIBNiv3NpS5zqAOxToqua6EnxfCrFO0Z/M
	mrx0c4FenpV8vJ6TWI2Gfi0M9hnYe8gLZj+kfdhJ4mvhKO4su4zpqkjYpTNu/lwIjpdNoR
	hcOcBN0FYbKXhGwDAFze2eRvceRJX40=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 00AE4AC90;
	Fri, 27 Nov 2020 13:31:37 +0000 (UTC)
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <68bbc88a-075e-327b-47e1-1dca1f9a3772@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
Message-ID: <65eaa8b6-601f-e522-9ad9-2d88e905d447@suse.com>
Date: Fri, 27 Nov 2020 14:31:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <68bbc88a-075e-327b-47e1-1dca1f9a3772@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="2rHiV5LjUhONZVLnswYGHrQTfEeZp1OpH"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--2rHiV5LjUhONZVLnswYGHrQTfEeZp1OpH
Content-Type: multipart/mixed; boundary="SWmKRM9IfMPz2noWTThsUTRu6sebOSi82";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <65eaa8b6-601f-e522-9ad9-2d88e905d447@suse.com>
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <68bbc88a-075e-327b-47e1-1dca1f9a3772@suse.com>
In-Reply-To: <68bbc88a-075e-327b-47e1-1dca1f9a3772@suse.com>

--SWmKRM9IfMPz2noWTThsUTRu6sebOSi82
Content-Type: multipart/mixed;
 boundary="------------CFC6A8A3632F14E22D795896"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------CFC6A8A3632F14E22D795896
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.11.20 14:11, Jan Beulich wrote:
> On 25.11.2020 11:51, Juergen Gross wrote:
>> Two cpus entering evtchn_fifo_set_pending() for the same event channel
>> can race in case the first one gets interrupted after setting
>> EVTCHN_FIFO_PENDING and when the other one manages to set
>> EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
>> lead to evtchn_check_pollers() being called before the event is put
>> properly into the queue, resulting eventually in the guest not seeing
>> the event pending and thus blocking forever afterwards.
>>
>> Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
>> lock") made the race just more obvious, while the fifo event channel
>> implementation had this race from the beginning when an unmask operation
>> was running in parallel with an event channel send operation.
>
> I notice you've altered the Fixes: tag, but you still say "from
> the beginning" here.

Oh, indeed. Thanks for spotting.

>
>> Using a spinlock for the per event channel lock is problematic due to
>> some paths needing to take the lock are called with interrupts off, so
>> the lock would need to disable interrupts, which in turn breaks some
>> use cases related to vm events.
>
> This reads as if it got put here by mistake. May I suggest to start
> with "Using ... had turned out problematic ..." and then add
> something like "Therefore that lock was switched to an rw one"?

Fine with me.

>
>> For avoiding this race the queue locking in evtchn_fifo_set_pending()
>> needs to be reworked to cover the test of EVTCHN_FIFO_PENDING,
>> EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED, too. Additionally when an
>> event channel needs to change queues both queues need to be locked
>> initially.
>
> "... in order to avoid having a window with no lock held at all"?

Yes, this is better.

>
>> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>           return;
>>       }
>>
>> +    /*
>> +     * Lock all queues related to the event channel (in case of a queue change
>> +     * this might be two).
>> +     * It is mandatory to do that before setting and testing the PENDING bit
>> +     * and to hold the current queue lock until the event has put into the
>
> "has been" or "was" I think?

"has been".

>
> With adjustments along these lines (which I guess could again be
> done while committing) or reasons against supplied
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> One aspect which I wonder whether it wants adding to the description
> is that this change makes a bad situation worse: Back at the time
> per-channel locks were added to avoid the bottleneck of the per-
> domain event lock. While a per-queue lock's scope at least isn't the
> entire domain, their use on the send path still has been preventing
> full parallelism here. And this patch widens the lock holding region.

As the description already says that additional operations are to be
guarded by the lock, I think it is rather clear that the lock holding
region is being widened.

OTOH I wouldn't reject such an addition to the commit message if you
think it is required.


Juergen

--------------CFC6A8A3632F14E22D795896
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------CFC6A8A3632F14E22D795896--

--SWmKRM9IfMPz2noWTThsUTRu6sebOSi82--

--2rHiV5LjUhONZVLnswYGHrQTfEeZp1OpH
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/A/7kFAwAAAAAACgkQsN6d1ii/Ey+0
Bwf/Zzu4iY5rX6AjXpNr5+v2c7p96VUsjfWpB4wDxwhPfYKGwEnujG89v5rrepZFAZxA0cd1xOon
h3t07ccYWwe/uEXgfk4WNzy9xoUKb36TcXZlclJkS96PBKUwLUfdNcINGAM8XWQlzj8kDTIIwODh
Pk0a/abTxMmvDdI+RJNi78W5X9s7VXeKDpJCAITbG+VFWJAKrIPh3B0cz8s9QwDheeMXEYO2jafX
6drX9h8U3NxIR5ThYuN6vpQSH1D3LlhNmnlBfj5uve8iiMKNpesckkyTJ0RqPk7XZfQ3Zk5CPzii
xRWPS2KxybF9ZNTA1Z15IJEKTUsOGT6UzbmMnK0Bcw==
=MJEs
-----END PGP SIGNATURE-----

--2rHiV5LjUhONZVLnswYGHrQTfEeZp1OpH--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:34:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:34:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39347.72216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidt8-0000wm-6A; Fri, 27 Nov 2020 13:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39347.72216; Fri, 27 Nov 2020 13:34:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidt8-0000wf-2b; Fri, 27 Nov 2020 13:34:14 +0000
Received: by outflank-mailman (input) for mailman id 39347;
 Fri, 27 Nov 2020 13:34:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kidt7-0000wZ-6G
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:34:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57d78dd0-a197-40d1-a976-7e92f73930b3;
 Fri, 27 Nov 2020 13:34:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 081C6AC23;
 Fri, 27 Nov 2020 13:34:11 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kidt7-0000wZ-6G
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:34:13 +0000
X-Inumbo-ID: 57d78dd0-a197-40d1-a976-7e92f73930b3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 57d78dd0-a197-40d1-a976-7e92f73930b3;
	Fri, 27 Nov 2020 13:34:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606484051; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I+GnPMFkpmvYnH9SSx8g39Wy3zLm8uM9xXY/9+JAMpA=;
	b=XYTi4itoSDeaJs25MP6jc9C6HWEv9TV1owHWcMi56kTge+JInmBYU0RFhFe2Fq0gmf8kvx
	CUcQmT2bNomV1CAsIcS4qDdn/wjUGhK30V7mmYSRC3VEACiKmP1jXamdASTq+W0cvzSwP2
	UgUlX3oqqnQ6B23v6GU6gJgF4qpoW6o=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 081C6AC23;
	Fri, 27 Nov 2020 13:34:11 +0000 (UTC)
Subject: Re: [PATCH v4 1/3] xen/pci: Move x86 specific code to x86 directory.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1606326929.git.rahul.singh@arm.com>
 <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8da65cee-2229-bb11-89f3-0a7db80e999b@suse.com>
Date: Fri, 27 Nov 2020 14:34:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <3500f44e3b6f8f05f9d05fa170817d5bc6f39f22.1606326929.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 19:16, Rahul Singh wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -14,9 +14,6 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <xen/sched.h>
> -#include <xen/pci.h>
> -#include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
>  #include <xen/list.h>
>  #include <xen/prefetch.h>

At least the latter two of the lines you remove are clearly still
needed here, and in such a case are better to specify explicitly
than to depend on other headers including them. Since xen/sched.h
very likely also gets included indirectly anyway, I'm inclined to
suggest dropping this entire hunk (which ought to be doable while
committing). With this
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:40:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39357.72228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidz7-0001uI-Sa; Fri, 27 Nov 2020 13:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39357.72228; Fri, 27 Nov 2020 13:40:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kidz7-0001uB-Oq; Fri, 27 Nov 2020 13:40:25 +0000
Received: by outflank-mailman (input) for mailman id 39357;
 Fri, 27 Nov 2020 13:40:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kidz6-0001u1-CV
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:40:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dcbd59b8-2d7c-4ec0-af9f-377ab6008309;
 Fri, 27 Nov 2020 13:40:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99B61ABD7;
 Fri, 27 Nov 2020 13:40:21 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kidz6-0001u1-CV
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:40:24 +0000
X-Inumbo-ID: dcbd59b8-2d7c-4ec0-af9f-377ab6008309
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dcbd59b8-2d7c-4ec0-af9f-377ab6008309;
	Fri, 27 Nov 2020 13:40:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606484421; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qVjzpZNRKVxdTe81Z9CXgTix8sdGrZXX1YQdSd/d/8Q=;
	b=XzJTJbpYu/661koNsx6MNfpaz+8KuInI694w3KjGa4z2pcPIt6J623R8NbAGZrKN9TYoeS
	XAa1f27aLMlCe7z0l4vhDysgSY54m0dZrYZ6xwU7Y0VDzMmsZkLqMGpWp87kQC62JOnvid
	PrABNBadlioxg3mqReTszz6WGB0mSeg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 99B61ABD7;
	Fri, 27 Nov 2020 13:40:21 +0000 (UTC)
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
Date: Fri, 27 Nov 2020 14:40:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201127133121.GN1717@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 14:31, Manuel Bouyer wrote:
> On Fri, Nov 27, 2020 at 02:18:54PM +0100, Jan Beulich wrote:
>> On 27.11.2020 14:13, Manuel Bouyer wrote:
>>> On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
>>>> On 27.11.2020 11:59, Roger Pau Monné wrote:
>>>>> --- a/xen/arch/x86/hvm/irq.c
>>>>> +++ b/xen/arch/x86/hvm/irq.c
>>>>> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
>>>>>       * to know if the GSI is pending or not.
>>>>>       */
>>>>>      spin_lock(&d->arch.hvm.irq_lock);
>>>>> +    if ( gsi == TRACK_IRQ )
>>>>> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
>>>>> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
>>>>
>>>> This produces
>>>>
>>>> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
>>>>
>>>> Since the logging occurs ahead of the call to assert_gsi(), it
>>>> means we don't signal anything to Dom0, because according to our
>>>> records there's still an IRQ in flight. Unfortunately we only
>>>> see the tail of the trace, so it's not possible to tell how / when
>>>> we got into this state.
>>>>
>>>> Manuel - is this the only patch you have in place? Or did you keep
>>>> any prior ones? Iirc there once was one where Roger also suppressed
>>>> some de-assert call.
>>>
>>> Yes, I have some of the previous patches (otherwise Xen panics).
>>> Attached are the diffs I currently have
>>
>> I think you want to delete the hunk dropping the call to
>> hvm_gsi_deassert() from pt_irq_time_out(). Iirc it was that
>> addition which changed the behavior to just a single IRQ ever
>> making it into Dom0. And it ought to be only the change to
>> msix_write() which is needed to avoid the panic.
> 
> yes, I did keep the hvm_gsi_deassert() patch because I expected it
> to make things easier, as it allows interacting with Xen without changing
> interrupt states.

Right, but then we'd need to see the beginning of the trace,
rather than it starting at (in this case) about 95,000. Yet ...

> I removed it, here's a new trace
> 
> http://www-soc.lip6.fr/~bouyer/xen-log12.txt

... hmm, odd - no change at all:

95572 hvm_gsi_assert irq 34 trig 1 assert count 1

I was sort of expecting that this might be where we fail to
set the assert count back to zero. This will need further
thought, if nothing else about how to turn down the verbosity
without hiding crucial information. Or maybe Roger has got
some idea ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:47:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39377.72256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kie5W-0002FD-Rq; Fri, 27 Nov 2020 13:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39377.72256; Fri, 27 Nov 2020 13:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kie5W-0002F6-Nr; Fri, 27 Nov 2020 13:47:02 +0000
Received: by outflank-mailman (input) for mailman id 39377;
 Fri, 27 Nov 2020 13:47:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kie5W-0002F1-3q
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:47:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37eb4911-a61d-4333-97d0-805bc0880c2d;
 Fri, 27 Nov 2020 13:47:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 965C0ABD7;
 Fri, 27 Nov 2020 13:47:00 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kie5W-0002F1-3q
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:47:02 +0000
X-Inumbo-ID: 37eb4911-a61d-4333-97d0-805bc0880c2d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 37eb4911-a61d-4333-97d0-805bc0880c2d;
	Fri, 27 Nov 2020 13:47:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606484820; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AH8MLbQO+xGAs2wywTgpXKp3VgDz6169LTuoob1JGIc=;
	b=Dv1Zs2bVsDa3vvJuHpPqTcJRLGCzPYwSI9W3Yv+T8NuKXjSBR0gnYcCuOyQ/I+4Pp7MhnR
	nSrlgnOY3IBHPpa+PXz8jHCwWyLmWI9TDXGhP6yrbwjeiYTNoKXBL26ba5+pgvb4XIHHS5
	YZRMmLOp0ZCRFcIstuhcFCqShOX+ub8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 965C0ABD7;
	Fri, 27 Nov 2020 13:47:00 +0000 (UTC)
Subject: Re: [PATCH v4 2/3] xen/pci: solve compilation error on ARM with
 HAS_PCI enabled.
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <2ce402cfae6d90433626bcdc6314e5ee5dda103f.1606326929.git.rahul.singh@arm.com>
 <AF56CBF8-CAE2-4296-BEE9-0DED2CD6A648@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <24926459-3f9f-0481-c5a3-9d6f36401346@suse.com>
Date: Fri, 27 Nov 2020 14:47:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <AF56CBF8-CAE2-4296-BEE9-0DED2CD6A648@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.11.2020 10:05, Bertrand Marquis wrote:
> 
> 
>> On 25 Nov 2020, at 18:16, Rahul Singh <Rahul.Singh@arm.com> wrote:
>>
>> If mem-sharing, mem-paging, or log-dirty functionality is not enabled
>> for architecture when HAS_PCI is enabled, the compiler will throw an
>> error.
>>
>> Move code to x86 specific file to fix compilation error.
>>
>> Also, modify the code to use likely() in place of unlikley() for each
>> condition to make code more optimized.
>>
>> No functional change intended.
>>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> I guess the small typo could be fixed by the committer directly :-)

Indeed, and with it
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:49:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39384.72268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kie7f-0002U1-7f; Fri, 27 Nov 2020 13:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39384.72268; Fri, 27 Nov 2020 13:49:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kie7f-0002Tu-4h; Fri, 27 Nov 2020 13:49:15 +0000
Received: by outflank-mailman (input) for mailman id 39384;
 Fri, 27 Nov 2020 13:49:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kie7d-0002Tp-Uf
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:49:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5eb8aff6-ef61-4b54-893e-dadc61cb0ea4;
 Fri, 27 Nov 2020 13:49:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7CC80AC23;
 Fri, 27 Nov 2020 13:49:11 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kie7d-0002Tp-Uf
	for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:49:13 +0000
X-Inumbo-ID: 5eb8aff6-ef61-4b54-893e-dadc61cb0ea4
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5eb8aff6-ef61-4b54-893e-dadc61cb0ea4;
	Fri, 27 Nov 2020 13:49:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606484951; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=yy6jqs2Qy45tVq7pWE5Gfy9Kpou3UWJhCYvaI7gOaO4=;
	b=QrHQKqJIcRoFY0zHjEfx4kmy1NrmpfRuEZb4Gb9kgTPN8x9Z9hn0nRN7aP3+8VJXkhsFBT
	9rgP4o457EMPpv2ug94ASmEoENv0g1AgdB9hLHjKQg0nShu9jP8h4TdEMdW2YIhg0crlpW
	AKloTvzF9sbnSEvzinoXlxMhsDkNl5E=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7CC80AC23;
	Fri, 27 Nov 2020 13:49:11 +0000 (UTC)
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Jan Beulich <jbeulich@suse.com>, Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <cb87ed52-e9c1-ad27-4c89-82f5b519b83c@suse.com>
Date: Fri, 27 Nov 2020 14:49:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="nRefgtBdgfjzBGxfvXj8vUS8E9Zes41Yn"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--nRefgtBdgfjzBGxfvXj8vUS8E9Zes41Yn
Content-Type: multipart/mixed; boundary="emswArTsDVtaTLp3oTl0LQv33kxUnwAfH";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
Message-ID: <cb87ed52-e9c1-ad27-4c89-82f5b519b83c@suse.com>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
References: <20201124160914.GQ2020@antioche.eu.org>
 <20201126133444.r2oi24i3umh7shb3@Air-de-Roger>
 <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
In-Reply-To: <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>

--emswArTsDVtaTLp3oTl0LQv33kxUnwAfH
Content-Type: multipart/mixed;
 boundary="------------95E40551A22C7F7DB50B9513"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------95E40551A22C7F7DB50B9513
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.11.20 14:40, Jan Beulich wrote:
> On 27.11.2020 14:31, Manuel Bouyer wrote:
>> On Fri, Nov 27, 2020 at 02:18:54PM +0100, Jan Beulich wrote:
>>> On 27.11.2020 14:13, Manuel Bouyer wrote:
>>>> On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
>>>>>> On 27.11.2020 11:59, Roger Pau Monné wrote:
>>>>>> --- a/xen/arch/x86/hvm/irq.c
>>>>>> +++ b/xen/arch/x86/hvm/irq.c
>>>>>> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
>>>>>>        * to know if the GSI is pending or not.
>>>>>>        */
>>>>>>       spin_lock(&d->arch.hvm.irq_lock);
>>>>>> +    if ( gsi == TRACK_IRQ )
>>>>>> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
>>>>>> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
>>>>>
>>>>> This produces
>>>>>
>>>>> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
>>>>>
>>>>> Since the logging occurs ahead of the call to assert_gsi(), it
>>>>> means we don't signal anything to Dom0, because according to our
>>>>> records there's still an IRQ in flight. Unfortunately we only
>>>>> see the tail of the trace, so it's not possible to tell how / when
>>>>> we got into this state.
>>>>>
>>>>> Manuel - is this the only patch you have in place? Or did you keep
>>>>> any prior ones? Iirc there once was one where Roger also suppressed
>>>>> some de-assert call.
>>>>
>>>> Yes, I have some of the previous patches (otherwise Xen panics).
>>>> Attached are the diffs I currently have.
>>>
>>> I think you want to delete the hunk dropping the call to
>>> hvm_gsi_deassert() from pt_irq_time_out(). Iirc it was that
>>> addition which changed the behavior to just a single IRQ ever
>>> making it into Dom0. And it ought to be only the change to
>>> msix_write() which is needed to avoid the panic.
>>
>> Yes, I did keep the hvm_gsi_deassert() patch because I expected it
>> to make things easier, as it allows interacting with Xen without
>> changing interrupt states.
>
> Right, but then we'd need to see the beginning of the trace,
> rather than it starting at (in this case) about 95,000. Yet ...
>
>> I removed it, here's a new trace
>>
>> http://www-soc.lip6.fr/~bouyer/xen-log12.txt
>
> ... hmm, odd - no change at all:
>
> 95572 hvm_gsi_assert irq 34 trig 1 assert count 1
>
> I was sort of expecting that this might be where we fail to
> set the assert count back to zero. Will need further
> thinking, if nothing else about how to turn down the verbosity
> without hiding crucial information. Or maybe Roger has got
> some idea ...

Set debugtrace buffer size to something huge?

Panic when the buffer is full?

It should be noted that the debugtrace is printed in case of a
panic.


Juergen


--------------95E40551A22C7F7DB50B9513
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------95E40551A22C7F7DB50B9513--

--emswArTsDVtaTLp3oTl0LQv33kxUnwAfH--

--nRefgtBdgfjzBGxfvXj8vUS8E9Zes41Yn
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/BA9YFAwAAAAAACgkQsN6d1ii/Ey/V
DAf/QfhvI8GsqykXmV/pfBGjilhDBSm29uHEBAmQACse1lqDncdPUmgAtNCh1pfSyRQnNUZPspy2
Pww5GvtSGsI/4BVSVx1Fm/DZvn0MR2CkSVD+NLwADWqBmuc6dkJd7msBSresx8/jhDpJigVj2ZaZ
0fpyA+qu1GDH6kpipRnTDucuv3JSx4ojEPvVOrIW2lRS8AYYYSUBT5NSmyLCS/7i/aJp6meLkH72
zmHTXyHqoDvgrxdaDEMr/5myXHjyIbR4Ebc6Ukd/7OzvOyxFpTfxJb+m2jcf8HsEQ94bS1TkXYew
pYmdLWRyqWkXd7TqFwI3zKsKCJfnJdo3ew/Rv4fMuw==
=pu52
-----END PGP SIGNATURE-----

--nRefgtBdgfjzBGxfvXj8vUS8E9Zes41Yn--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:52:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39393.72280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieAq-0003K8-SS; Fri, 27 Nov 2020 13:52:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39393.72280; Fri, 27 Nov 2020 13:52:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieAq-0003K1-PR; Fri, 27 Nov 2020 13:52:32 +0000
Received: by outflank-mailman (input) for mailman id 39393;
 Fri, 27 Nov 2020 13:52:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kieAo-0003Jw-Tg
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:52:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 807bdd4b-a444-4bfb-9bb8-45c9251f1370;
 Fri, 27 Nov 2020 13:52:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 21523AC23;
 Fri, 27 Nov 2020 13:52:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606485149; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dPgI4aKfQFGfAVXXozM2BCQci3hCQWBIwgILxOif4rA=;
	b=EN7Tm/hb+RkUo0ZgLZaYmRAmT1vZzib2mVBVUcXaNY8Y16d0/wj3yWZcDO3zThyAaoRNYV
	LQloigK2m5c++eIDrFGlAFaRB/90RPBHmwwCxWnGjcfFIbjziGMein57l4Z39jRYmxWpq2
	nAr3g1iGRuoBqcDvBSLcH7Bn51gkM60=
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <dc81e650-166f-99c2-0982-58c0b89c1eb4@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <025429b8-9f07-e41b-56d0-358c933e2580@suse.com>
Date: Fri, 27 Nov 2020 14:52:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <dc81e650-166f-99c2-0982-58c0b89c1eb4@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="GMTB3IND77U4ASq8BNVQc2vpvLim2PDcY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--GMTB3IND77U4ASq8BNVQc2vpvLim2PDcY
Content-Type: multipart/mixed; boundary="nfxTNdwL3nR6FBS8W5nZOcYaL6Y74Ehly";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <025429b8-9f07-e41b-56d0-358c933e2580@suse.com>
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <dc81e650-166f-99c2-0982-58c0b89c1eb4@suse.com>
In-Reply-To: <dc81e650-166f-99c2-0982-58c0b89c1eb4@suse.com>

--nfxTNdwL3nR6FBS8W5nZOcYaL6Y74Ehly
Content-Type: multipart/mixed;
 boundary="------------83E34BBF85D1B10B3736F022"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------83E34BBF85D1B10B3736F022
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.11.20 14:27, Jan Beulich wrote:
> On 25.11.2020 11:51, Juergen Gross wrote:
>> --- a/xen/common/event_fifo.c
>> +++ b/xen/common/event_fifo.c
>> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>           return;
>>       }
>>
>> +    /*
>> +     * Control block not mapped.  The guest must not unmask an
>> +     * event until the control block is initialized, so we can
>> +     * just drop the event.
>> +     */
>> +    if ( unlikely(!v->evtchn_fifo->control_block) )
>> +    {
>> +        printk(XENLOG_G_WARNING
>> +               "%pv has no FIFO event channel control block\n", v);
>> +        return;
>> +    }
>
> This results in bypassing the setting of PENDING and the possible
> call to evtchn_check_pollers(). It may in particular be the case
> that a very special purpose guest uses event channels just for
> waking up pollers, which - afaict - then doesn't require setting
> up a control block. To give an example, I could easily see an XTF
> test avoid that step if indeed it's unnecessary.

Okay, I can move the test after setting PENDING and do a "goto unlock"
instead of returning.


Juergen


--------------83E34BBF85D1B10B3736F022--

--nfxTNdwL3nR6FBS8W5nZOcYaL6Y74Ehly--

--GMTB3IND77U4ASq8BNVQc2vpvLim2PDcY
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAl/BBJwFAwAAAAAACgkQsN6d1ii/Ey9+
KggAlI8hEoL3wYYV+x0aviGt+jxvgfShwjACoH/BaVRQ5Re4QJ9wMEeQIZotEohf7IXpb2PTxQ5X
1xbBk3tElFhNfJiJHVUvcMcpaPcB2lDcd/bwx57YhHZu8bSgQ95MO7MT1clf5EqejbCIPIUnxGRb
vy1HvHwZneiVH6RGH4h1+q1ZcV6diJj27WTeUCV81fGD68fVrBPx645T7BtRCMYjVmriQSPx6dMr
lp8HHyMZTJ4ey+aMKdL5355qrs327BukmwpQ7dpz6PzhTyVmNjloaO1w/8X1v1NbevLEI84kjEQP
mMyVVapXykvI2cCD97IN1OqsV/rCI/cRO9PtfakEkQ==
=RCA0
-----END PGP SIGNATURE-----

--GMTB3IND77U4ASq8BNVQc2vpvLim2PDcY--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:58:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39401.72292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieGW-0003cx-Ib; Fri, 27 Nov 2020 13:58:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39401.72292; Fri, 27 Nov 2020 13:58:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieGW-0003cq-Eq; Fri, 27 Nov 2020 13:58:24 +0000
Received: by outflank-mailman (input) for mailman id 39401;
 Fri, 27 Nov 2020 13:58:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kieGU-0003ck-Lb
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:58:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e9c0cf2-45b9-4a56-8731-3a773da178d6;
 Fri, 27 Nov 2020 13:58:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F168FABD7;
 Fri, 27 Nov 2020 13:58:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606485501; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QsEKiEZyrFFrYEgyt5mes67Edy/pP1CE2btlFOYBUIY=;
	b=KLZuZzN+jv0gceMzhgdGA//GfBZOCcagCLXZFt3teieFeqiy5Sm5duLGlsTRUStFPBWasD
	VVc4a24XxxKgY2WIBZo5Y3peSZ71Ujz1SnLrwmfqdP/grFlC9fk1/LAAkEBLUDDsuNoX68
	H5+l++MGCEUxT0+ITuy98BgLvwuEVU0=
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1606326929.git.rahul.singh@arm.com>
 <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bacfe1c3-d86d-95b2-c52a-4bb86f1338ea@suse.com>
Date: Fri, 27 Nov 2020 14:58:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.11.2020 19:16, Rahul Singh wrote:
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
>  #include <xen/timer.h>
>  #include <xen/serial.h>
>  #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
> @@ -51,7 +51,7 @@ static struct ns16550 {
>      unsigned int timeout_ms;
>      bool_t intr_works;
>      bool_t dw_usr_bsy;
> -#ifdef CONFIG_HAS_PCI
> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)

I'm sorry to be picky, but this being a hack wants, imo, also calling
it so, by way of a code comment. Clearly this should go at one of the
first instances, yet neither of the two above are really suitable imo.
Hence I'm coming back to my prior suggestion of introducing a
consolidated #define without this becoming a Kconfig setting:

/*
 * The PCI part of the code in this file currently is only known to
 * work on x86. Undo this hack once the logic has been suitably
 * abstracted.
 */
#if defined(CONFIG_HAS_PCI) && defined(CONFIG_X86)
# define NS16550_PCI
#endif

And then use NS16550_PCI everywhere. I'd be fine making this
adjustment while committing, if I knew that (a) you're okay with it
and (b) the R-b and A-b you've already got can be kept.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:58:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:58:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39402.72304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieGk-0003hL-Qu; Fri, 27 Nov 2020 13:58:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39402.72304; Fri, 27 Nov 2020 13:58:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieGk-0003hE-Na; Fri, 27 Nov 2020 13:58:38 +0000
Received: by outflank-mailman (input) for mailman id 39402;
 Fri, 27 Nov 2020 13:58:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kieGj-0003gt-IC
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:58:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kieGh-0005QB-1j; Fri, 27 Nov 2020 13:58:35 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kieGg-0006M2-Pw; Fri, 27 Nov 2020 13:58:34 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=YHTnUMXXOnVu4XzScm8zStrhl6YYrFwhZHHgjSQqq7I=; b=DW/UfVO2/dtdxFzAjBmCpzOXV/
	+BE17E5QzQI2dIyOjwYWrvey919jI7f4XrVPRSP0X9FNjCXvlxbUZrDrrmYKbRHpDVq79+4p/Z3qy
	DYdjjdvFe8P7SjdiMjpSwQ7f4w2jVAjxA/x6bT9+NwQ1luROpwpXsNaCt4EBMs/QdxBI=;
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
Date: Fri, 27 Nov 2020 13:58:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201125105122.3650-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 25/11/2020 10:51, Juergen Gross wrote:
> -static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
> -                                                struct evtchn *evtchn,
> -                                                unsigned long *flags)
> -{
> -    struct vcpu *v;
> -    struct evtchn_fifo_queue *q, *old_q;
> -    unsigned int try;
> -    union evtchn_fifo_lastq lastq;
> -
> -    for ( try = 0; try < 3; try++ )
> -    {
> -        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> -        v = d->vcpu[lastq.last_vcpu_id];
> -        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
> -
> -        spin_lock_irqsave(&old_q->lock, *flags);
> -
> -        v = d->vcpu[lastq.last_vcpu_id];
> -        q = &v->evtchn_fifo->queue[lastq.last_priority];
> -
> -        if ( old_q == q )
> -            return old_q;
> -
> -        spin_unlock_irqrestore(&old_q->lock, *flags);
> -    }
> -
> -    gprintk(XENLOG_WARNING,
> -            "dom%d port %d lost event (too many queue changes)\n",
> -            d->domain_id, evtchn->port);
> -    return NULL;
> -}
> -
>   static int try_set_link(event_word_t *word, event_word_t *w, uint32_t link)
>   {
>       event_word_t new, old;
> @@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>       event_word_t *word;
>       unsigned long flags;
>       bool_t was_pending;
> +    struct evtchn_fifo_queue *q, *old_q;
> +    unsigned int try;
> +    bool linked = true;
>   
>       port = evtchn->port;
>       word = evtchn_fifo_word_from_port(d, port);
> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>           return;
>       }
>   
> +    /*
> +     * Lock all queues related to the event channel (in case of a queue change
> +     * this might be two).
> +     * It is mandatory to do that before setting and testing the PENDING bit
> +     * and to hold the current queue lock until the event has been put into
> +     * list of pending events in order to avoid waking up a guest without the
> +     * event being visibly pending in the guest.
> +     */
> +    for ( try = 0; try < 4; try++ )

May I ask why the number of try is 4 rather than the original 3?

> +    {
> +        union evtchn_fifo_lastq lastq;
> +        const struct vcpu *old_v;
> +
> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> +        old_v = d->vcpu[lastq.last_vcpu_id];
> +
> +        q = &v->evtchn_fifo->queue[evtchn->priority];
> +        old_q = &old_v->evtchn_fifo->queue[lastq.last_priority];
> +
> +        if ( q == old_q )
> +            spin_lock_irqsave(&q->lock, flags);
> +        else if ( q < old_q )
> +        {
> +            spin_lock_irqsave(&q->lock, flags);
> +            spin_lock(&old_q->lock);
> +        }
> +        else
> +        {
> +            spin_lock_irqsave(&old_q->lock, flags);
> +            spin_lock(&q->lock);
> +        }
> +
> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> +        old_v = d->vcpu[lastq.last_vcpu_id];
> +        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
> +             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
> +            break;
> +
> +        if ( q != old_q )
> +            spin_unlock(&old_q->lock);
> +        spin_unlock_irqrestore(&q->lock, flags);
> +    }
> +
>       was_pending = guest_test_and_set_bit(d, EVTCHN_FIFO_PENDING, word);
>   
> +    /* If we didn't get the lock bail out. */
> +    if ( try == 4 )
> +    {
> +        gprintk(XENLOG_WARNING,
> +                "dom%d port %d lost event (too many queue changes)\n",
> +                d->domain_id, evtchn->port);

NIT: You can use %pd use in place of dom%d.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 13:59:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 13:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39414.72316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieHl-0003ru-4V; Fri, 27 Nov 2020 13:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39414.72316; Fri, 27 Nov 2020 13:59:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieHl-0003rn-1P; Fri, 27 Nov 2020 13:59:41 +0000
Received: by outflank-mailman (input) for mailman id 39414;
 Fri, 27 Nov 2020 13:59:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kieHk-0003rh-CN
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 13:59:40 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcfc3fad-179a-4ca4-a3f2-e0827cc5f877;
 Fri, 27 Nov 2020 13:59:39 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARDxYam027187
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 14:59:35 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 6D2812E9CC6; Fri, 27 Nov 2020 14:59:29 +0100 (MET)
Date: Fri, 27 Nov 2020 14:59:29 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127135929.GR1717@antioche.eu.org>
References: <20201126141608.GA4123@antioche.eu.org>
 <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 14:59:35 +0100 (MET)

On Fri, Nov 27, 2020 at 02:40:22PM +0100, Jan Beulich wrote:
> On 27.11.2020 14:31, Manuel Bouyer wrote:
> > On Fri, Nov 27, 2020 at 02:18:54PM +0100, Jan Beulich wrote:
> >> On 27.11.2020 14:13, Manuel Bouyer wrote:
> >>> On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
> >>>> On 27.11.2020 11:59, Roger Pau Monné wrote:
> >>>>> --- a/xen/arch/x86/hvm/irq.c
> >>>>> +++ b/xen/arch/x86/hvm/irq.c
> >>>>> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
> >>>>>       * to know if the GSI is pending or not.
> >>>>>       */
> >>>>>      spin_lock(&d->arch.hvm.irq_lock);
> >>>>> +    if ( gsi == TRACK_IRQ )
> >>>>> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
> >>>>> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
> >>>>
> >>>> This produces
> >>>>
> >>>> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
> >>>>
> >>>> Since the logging occurs ahead of the call to assert_gsi(), it
> >>>> means we don't signal anything to Dom0, because according to our
> >>>> records there's still an IRQ in flight. Unfortunately we only
> >>>> see the tail of the trace, so it's not possible to tell how / when
> >>>> we got into this state.
> >>>>
> >>>> Manuel - is this the only patch you have in place? Or did you keep
> >>>> any prior ones? Iirc there once was one where Roger also suppressed
> >>>> some de-assert call.
> >>>
> >>> Yes, I have some of the previous patches (otherwise Xen panics).
> >>> Attached are the diffs I currently have
> >>
> >> I think you want to delete the hunk dropping the call to
> >> hvm_gsi_deassert() from pt_irq_time_out(). Iirc it was that
> >> addition which changed the behavior to just a single IRQ ever
> >> making it into Dom0. And it ought to be only the change to
> >> msix_write() which is needed to avoid the panic.
> > 
> > yes, I did keep the hvm_gsi_deassert() patch because I expected it
> > to make things easier, as it allows interacting with Xen without changing
> > interrupt states.
> 
> Right, but then we'd need to see the beginning of the trace,
> rather than it starting at (in this case) about 95,000. Yet ...
> 
> > I removed it, here's a new trace
> > 
> > http://www-soc.lip6.fr/~bouyer/xen-log12.txt
> 
> ... hmm, odd - no change at all:
> 
> 95572 hvm_gsi_assert irq 34 trig 1 assert count 1

But I can confirm that now, entering ^A^A^A gets interrupts going again.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--
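The gating behaviour being debugged above, where an interrupt is only forwarded while the assert count sits at 0, can be captured in a toy model. This is a deliberate simplification for illustration, not Xen's actual `hvm_gsi_assert()` logic (which keeps a per-GSI count under `irq_lock`); the function names below are invented. It shows why a single lost deassert/EOI stalls the line permanently, matching the trace where `assert count 1` suppresses every further injection:

```c
/*
 * Toy model of the assert-count gate: an interrupt is only injected on
 * the 0 -> 1 transition of the count; until a deassert brings it back
 * to 0, further asserts are swallowed. A lost deassert therefore stalls
 * the line for good, which is the failure mode discussed above.
 * Illustrative only - not Xen's hvm_gsi_assert() code.
 */
static unsigned int assert_count;   /* models gsi_assert_count[gsi] */
static unsigned int injections;     /* interrupts actually forwarded */

void gsi_assert_model(void)
{
    if (assert_count == 0) {
        assert_count = 1;
        injections++;               /* would signal the guest here */
    }
    /* else: per our records an IRQ is still in flight - nothing sent */
}

void gsi_deassert_model(void)
{
    assert_count = 0;
}

unsigned int injections_so_far(void)
{
    return injections;
}
```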


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39425.72328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieLl-0004pZ-MM; Fri, 27 Nov 2020 14:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39425.72328; Fri, 27 Nov 2020 14:03:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieLl-0004pS-J2; Fri, 27 Nov 2020 14:03:49 +0000
Received: by outflank-mailman (input) for mailman id 39425;
 Fri, 27 Nov 2020 14:03:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kieLk-0004pN-H4
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:03:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69c58368-1dbb-4371-b1d4-4e5b0437596a;
 Fri, 27 Nov 2020 14:03:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 49A9AABD7;
 Fri, 27 Nov 2020 14:03:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606485826; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ycPrwFrflduIT4hxw8dZblYgL+FCxgwhp+N1wpEpCDY=;
	b=gpNxKORr+oXYM4ZnPIDe6WZQeRWi4UFdm0Op/L4Yo+4JzxgrcOyG7vXR4WUWTzJ2XXoLLG
	PUWBhsWEnlyjMNjMS5lAjhMrcxBBsGk+kH3E/LrNg1h3sw6ZI5KoVrbDyiozV0fJNuTBBQ
	NJICqP4Kpf11iMX+He40buK/+IC9TNw=
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a7acf5b6-6218-4e4c-b105-c9a20c28ea51@suse.com>
Date: Fri, 27 Nov 2020 15:03:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.11.2020 14:58, Julien Grall wrote:
> On 25/11/2020 10:51, Juergen Gross wrote:
>> +    /* If we didn't get the lock bail out. */
>> +    if ( try == 4 )
>> +    {
>> +        gprintk(XENLOG_WARNING,
>> +                "dom%d port %d lost event (too many queue changes)\n",
>> +                d->domain_id, evtchn->port);
> 
> NIT: You can use %pd in place of dom%d.

Oh, indeed - not just can, but imo really should. I'll record this
for on-commit adjustment, unless a v9 becomes necessary anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:05:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:05:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39431.72340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieNh-0004xd-1I; Fri, 27 Nov 2020 14:05:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39431.72340; Fri, 27 Nov 2020 14:05:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieNg-0004xW-Tk; Fri, 27 Nov 2020 14:05:48 +0000
Received: by outflank-mailman (input) for mailman id 39431;
 Fri, 27 Nov 2020 14:05:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kieNf-0004xQ-SS
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:05:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7a4b55b-64f4-4b09-8cb1-95f40c708522;
 Fri, 27 Nov 2020 14:05:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 180CEAC23;
 Fri, 27 Nov 2020 14:05:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606485945; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=F4mKkKoFhIJuLeKHSD4f3NCIeDTQLMr0Zld8Bpz2Vgo=;
	b=CyfmLBFjHuTzcdsF0HszQzr4SEgXrPNJzQl18RhZy58cmQCFyVAInlVQgR7UfVftoH3sB2
	JvBwYRUBkb80AXlJx01hY8fPxhwn0Sldm9jX1O3GxCp6FOE8XyRC4rZkfZYR1fnwIrlVg+
	b6Y4/Nem5WDkwRyiIL7VJajuihFuCLA=
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <2b099865-647c-3d47-1510-d429c2a4b6c6@suse.com>
Date: Fri, 27 Nov 2020 15:05:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="DSS8JncNRbdqIiwdljUiftaRGKDT9sd1l"

On 27.11.20 14:58, Julien Grall wrote:
> Hi Juergen,
> 
> On 25/11/2020 10:51, Juergen Gross wrote:
>> -static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>> -                                                struct evtchn *evtchn,
>> -                                                unsigned long *flags)
>> -{
>> -    struct vcpu *v;
>> -    struct evtchn_fifo_queue *q, *old_q;
>> -    unsigned int try;
>> -    union evtchn_fifo_lastq lastq;
>> -
>> -    for ( try = 0; try < 3; try++ )
>> -    {
>> -        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>> -        v = d->vcpu[lastq.last_vcpu_id];
>> -        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
>> -
>> -        spin_lock_irqsave(&old_q->lock, *flags);
>> -
>> -        v = d->vcpu[lastq.last_vcpu_id];
>> -        q = &v->evtchn_fifo->queue[lastq.last_priority];
>> -
>> -        if ( old_q == q )
>> -            return old_q;
>> -
>> -        spin_unlock_irqrestore(&old_q->lock, *flags);
>> -    }
>> -
>> -    gprintk(XENLOG_WARNING,
>> -            "dom%d port %d lost event (too many queue changes)\n",
>> -            d->domain_id, evtchn->port);
>> -    return NULL;
>> -}
>> -
>>  static int try_set_link(event_word_t *word, event_word_t *w,
>> uint32_t link)
>>  {
>>      event_word_t new, old;
>> @@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu
>> *v, struct evtchn *evtchn)
>>      event_word_t *word;
>>      unsigned long flags;
>>      bool_t was_pending;
>> +    struct evtchn_fifo_queue *q, *old_q;
>> +    unsigned int try;
>> +    bool linked = true;
>>      port = evtchn->port;
>>      word = evtchn_fifo_word_from_port(d, port);
>> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu
>> *v, struct evtchn *evtchn)
>>          return;
>>      }
>> +    /*
>> +     * Lock all queues related to the event channel (in case of a queue change
>> +     * this might be two).
>> +     * It is mandatory to do that before setting and testing the PENDING bit
>> +     * and to hold the current queue lock until the event has been put into the
>> +     * list of pending events in order to avoid waking up a guest without the
>> +     * event being visibly pending in the guest.
>> +     */
>> +    for ( try = 0; try < 4; try++ )
> 
> May I ask why the number of tries is 4 rather than the original 3?

Oh, I think this is just a typo. OTOH it doesn't really matter.

> 
>> +    {
>> +        union evtchn_fifo_lastq lastq;
>> +        const struct vcpu *old_v;
>> +
>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>> +        old_v = d->vcpu[lastq.last_vcpu_id];
>> +
>> +        q = &v->evtchn_fifo->queue[evtchn->priority];
>> +        old_q = &old_v->evtchn_fifo->queue[lastq.last_priority];
>> +
>> +        if ( q == old_q )
>> +            spin_lock_irqsave(&q->lock, flags);
>> +        else if ( q < old_q )
>> +        {
>> +            spin_lock_irqsave(&q->lock, flags);
>> +            spin_lock(&old_q->lock);
>> +        }
>> +        else
>> +        {
>> +            spin_lock_irqsave(&old_q->lock, flags);
>> +            spin_lock(&q->lock);
>> +        }
>> +
>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>> +        old_v = d->vcpu[lastq.last_vcpu_id];
>> +        if ( q == &v->evtchn_fifo->queue[evtchn->priority] &&
>> +             old_q == &old_v->evtchn_fifo->queue[lastq.last_priority] )
>> +            break;
>> +
>> +        if ( q != old_q )
>> +            spin_unlock(&old_q->lock);
>> +        spin_unlock_irqrestore(&q->lock, flags);
>> +    }
>> +
>>      was_pending = guest_test_and_set_bit(d, EVTCHN_FIFO_PENDING, word);
>> +    /* If we didn't get the lock bail out. */
>> +    if ( try == 4 )
>> +    {
>> +        gprintk(XENLOG_WARNING,
>> +                "dom%d port %d lost event (too many queue changes)\n",
>> +                d->domain_id, evtchn->port);
> 
> NIT: You can use %pd in place of dom%d.

Yes, indeed. This was just moved around. :-)


Juergen
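The retry pattern shared by the removed lock_old_queue() and the new inline loop — snapshot `fifo_lastq`, take the lock, then re-read and verify before trusting it — can be sketched generically. In this illustrative model (not Xen code), pthread mutexes stand in for the spinlocks, a plain `int` stands in for `fifo_lastq`, and `lock_last_queue` is an invented name; the shape of the loop is what matters:

```c
#include <pthread.h>

/*
 * Generic sketch of "lock, then re-validate": the queue recorded in
 * fifo_lastq may change while we wait for its lock, so after acquiring
 * it we re-read the record and retry on a mismatch, giving up after a
 * fixed number of tries. Names and the 2-queue setup are illustrative.
 */
#define MAX_TRIES 3

static pthread_mutex_t qlock[2] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static volatile int last_queue;     /* stands in for evtchn->fifo_lastq */

/* Returns the index of the queue whose lock is now held, or -1. */
int lock_last_queue(void)
{
    for (int try = 0; try < MAX_TRIES; try++) {
        int idx = last_queue;              /* snapshot before locking */

        pthread_mutex_lock(&qlock[idx]);
        if (idx == last_queue)             /* still the current queue? */
            return idx;

        pthread_mutex_unlock(&qlock[idx]); /* changed under us: retry */
    }
    return -1;                             /* "too many queue changes" */
}
```

The bound on retries is why a warning like "lost event (too many queue changes)" exists at all: without it, a pathological stream of queue changes could spin here indefinitely.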



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:11:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39440.72352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieT0-0005vd-RM; Fri, 27 Nov 2020 14:11:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39440.72352; Fri, 27 Nov 2020 14:11:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieT0-0005vW-NS; Fri, 27 Nov 2020 14:11:18 +0000
Received: by outflank-mailman (input) for mailman id 39440;
 Fri, 27 Nov 2020 14:11:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kieSz-0005vQ-PL
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:11:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kieSw-0005mK-Jn; Fri, 27 Nov 2020 14:11:14 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kieSw-0007Qq-CL; Fri, 27 Nov 2020 14:11:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=wf+hirX66UKoM401H3kE9XEpTvMKV31YP+IdEfWcu/w=; b=KNbeMY5lULv1NMyI0ckVAQaPQv
	hY4IQbcItJiIQg5SauCl0cSW3/g7K57SjgPk8WmpFPtwD19PdPXbLtdglPkk8buQfvQfX6bZZQOPP
	PlzlDyoIqqmtfusA1kowLvToJWOmHOGyLjCqhwB070etJOAl4+1/RbG1rIhPFKSYVSoY=;
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: Jürgen Groß <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
 <2b099865-647c-3d47-1510-d429c2a4b6c6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5f04e881-915f-e2b7-6af3-459af614f8ca@xen.org>
Date: Fri, 27 Nov 2020 14:11:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <2b099865-647c-3d47-1510-d429c2a4b6c6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 27/11/2020 14:05, Jürgen Groß wrote:
> On 27.11.20 14:58, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 25/11/2020 10:51, Juergen Gross wrote:
>>> -static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>>> -                                                struct evtchn *evtchn,
>>> -                                                unsigned long *flags)
>>> -{
>>> -    struct vcpu *v;
>>> -    struct evtchn_fifo_queue *q, *old_q;
>>> -    unsigned int try;
>>> -    union evtchn_fifo_lastq lastq;
>>> -
>>> -    for ( try = 0; try < 3; try++ )
>>> -    {
>>> -        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>>> -        v = d->vcpu[lastq.last_vcpu_id];
>>> -        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
>>> -
>>> -        spin_lock_irqsave(&old_q->lock, *flags);
>>> -
>>> -        v = d->vcpu[lastq.last_vcpu_id];
>>> -        q = &v->evtchn_fifo->queue[lastq.last_priority];
>>> -
>>> -        if ( old_q == q )
>>> -            return old_q;
>>> -
>>> -        spin_unlock_irqrestore(&old_q->lock, *flags);
>>> -    }
>>> -
>>> -    gprintk(XENLOG_WARNING,
>>> -            "dom%d port %d lost event (too many queue changes)\n",
>>> -            d->domain_id, evtchn->port);
>>> -    return NULL;
>>> -}
>>> -
>>>   static int try_set_link(event_word_t *word, event_word_t *w, 
>>> uint32_t link)
>>>   {
>>>       event_word_t new, old;
>>> @@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu 
>>> *v, struct evtchn *evtchn)
>>>       event_word_t *word;
>>>       unsigned long flags;
>>>       bool_t was_pending;
>>> +    struct evtchn_fifo_queue *q, *old_q;
>>> +    unsigned int try;
>>> +    bool linked = true;
>>>       port = evtchn->port;
>>>       word = evtchn_fifo_word_from_port(d, port);
>>> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu 
>>> *v, struct evtchn *evtchn)
>>>           return;
>>>       }
>>> +    /*
>>> +     * Lock all queues related to the event channel (in case of a 
>>> queue change
>>> +     * this might be two).
>>> +     * It is mandatory to do that before setting and testing the 
>>> PENDING bit
>>> +     * and to hold the current queue lock until the event has put 
>>> into the
>>> +     * list of pending events in order to avoid waking up a guest 
>>> without the
>>> +     * event being visibly pending in the guest.
>>> +     */
>>> +    for ( try = 0; try < 4; try++ )
>>
>> May I ask why the number of try is 4 rather than the original 3?
> 
> Oh, I think this is just a typo. OTOH it doesn't really matter.

I agree that the number of tries was likely chosen arbitrarily, and therefore 
using a different number should not matter.

However, this makes the patch more difficult to review, because it is an 
unexplained change.

I would prefer that this change be dropped. But if you want to keep it, 
then it should be explained in the commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:14:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:14:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39448.72364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieWJ-00066d-A6; Fri, 27 Nov 2020 14:14:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39448.72364; Fri, 27 Nov 2020 14:14:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieWJ-00066W-6r; Fri, 27 Nov 2020 14:14:43 +0000
Received: by outflank-mailman (input) for mailman id 39448;
 Fri, 27 Nov 2020 14:14:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kieWI-00066R-IH
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:14:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3c6a597-7eab-4853-ae5a-18c82f6f59e7;
 Fri, 27 Nov 2020 14:14:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D786AE59;
 Fri, 27 Nov 2020 14:14:39 +0000 (UTC)
X-Inumbo-ID: c3c6a597-7eab-4853-ae5a-18c82f6f59e7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606486479; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=H5+j1VjDw3rrB9Y/+RHthiScLa1UYvpUqJE4Gf0S3c8=;
	b=ZZv7G1rel6+JicX1h3L7eZiGoJfdjE+fR9vMYivS52mViznlH5kZPcq9UlZALTZnpPnGIJ
	yVtDLZKtAgf/hvRYOBnDipA2PwkCWBe+f9ZC/zzC1rhR7bhxNJ5v7sC+aiTp8qNhm9cnxF
	0Uf9Fq9kz9rU8BpFUFgGcJeTYF0uTq0=
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
 <2b099865-647c-3d47-1510-d429c2a4b6c6@suse.com>
 <5f04e881-915f-e2b7-6af3-459af614f8ca@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <c9a8e879-ff55-3fbc-41ab-df836c76be9f@suse.com>
Date: Fri, 27 Nov 2020 15:14:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <5f04e881-915f-e2b7-6af3-459af614f8ca@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="JiVqM4SgNy2AnUH7mGNfPgRFTpbJhjfKn"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JiVqM4SgNy2AnUH7mGNfPgRFTpbJhjfKn
Content-Type: multipart/mixed; boundary="eAbuZ8bV7zg10jJcjKm4fWQLNyHd3MJez";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <c9a8e879-ff55-3fbc-41ab-df836c76be9f@suse.com>
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
 <2b099865-647c-3d47-1510-d429c2a4b6c6@suse.com>
 <5f04e881-915f-e2b7-6af3-459af614f8ca@xen.org>
In-Reply-To: <5f04e881-915f-e2b7-6af3-459af614f8ca@xen.org>

--eAbuZ8bV7zg10jJcjKm4fWQLNyHd3MJez
Content-Type: multipart/mixed;
 boundary="------------143C760AC2AC5CBECD35C408"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------143C760AC2AC5CBECD35C408
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.11.20 15:11, Julien Grall wrote:
> Hi Juergen,
>
> On 27/11/2020 14:05, Jürgen Groß wrote:
>> On 27.11.20 14:58, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 25/11/2020 10:51, Juergen Gross wrote:
>>>> -static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>>>> -                                                struct evtchn *evtchn,
>>>> -                                                unsigned long *flags)
>>>> -{
>>>> -    struct vcpu *v;
>>>> -    struct evtchn_fifo_queue *q, *old_q;
>>>> -    unsigned int try;
>>>> -    union evtchn_fifo_lastq lastq;
>>>> -
>>>> -    for ( try = 0; try < 3; try++ )
>>>> -    {
>>>> -        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>>>> -        v = d->vcpu[lastq.last_vcpu_id];
>>>> -        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
>>>> -
>>>> -        spin_lock_irqsave(&old_q->lock, *flags);
>>>> -
>>>> -        v = d->vcpu[lastq.last_vcpu_id];
>>>> -        q = &v->evtchn_fifo->queue[lastq.last_priority];
>>>> -
>>>> -        if ( old_q == q )
>>>> -            return old_q;
>>>> -
>>>> -        spin_unlock_irqrestore(&old_q->lock, *flags);
>>>> -    }
>>>> -
>>>> -    gprintk(XENLOG_WARNING,
>>>> -            "dom%d port %d lost event (too many queue changes)\n",
>>>> -            d->domain_id, evtchn->port);
>>>> -    return NULL;
>>>> -}
>>>> -
>>>>   static int try_set_link(event_word_t *word, event_word_t *w, uint32_t link)
>>>>   {
>>>>       event_word_t new, old;
>>>> @@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>>>       event_word_t *word;
>>>>       unsigned long flags;
>>>>       bool_t was_pending;
>>>> +    struct evtchn_fifo_queue *q, *old_q;
>>>> +    unsigned int try;
>>>> +    bool linked = true;
>>>>       port = evtchn->port;
>>>>       word = evtchn_fifo_word_from_port(d, port);
>>>> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>>>           return;
>>>>       }
>>>> +    /*
>>>> +     * Lock all queues related to the event channel (in case of a queue change
>>>> +     * this might be two).
>>>> +     * It is mandatory to do that before setting and testing the PENDING bit
>>>> +     * and to hold the current queue lock until the event has put into the
>>>> +     * list of pending events in order to avoid waking up a guest without the
>>>> +     * event being visibly pending in the guest.
>>>> +     */
>>>> +    for ( try = 0; try < 4; try++ )
>>>
>>> May I ask why the number of try is 4 rather than the original 3?
>>
>> Oh, I think this is just a typo. OTOH it doesn't really matter.
>
> I agree that the number of try was likely random and therefore using a
> different number should not matter.
>
> However, this is making more difficult to review the patch because this
> is an unexplained change.
>
> I would prefer if this is dropped. But if you want to keep this change,
> then it should be explained in the commit message.

Well, I could argue that there is potentially one more lock to take, so
the retry count is increased by one, too. ;-)

I think we can just switch back to 3.


Juergen

--------------143C760AC2AC5CBECD35C408--

--eAbuZ8bV7zg10jJcjKm4fWQLNyHd3MJez--


--JiVqM4SgNy2AnUH7mGNfPgRFTpbJhjfKn--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:16:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:16:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39455.72376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieY4-0006Ez-OT; Fri, 27 Nov 2020 14:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39455.72376; Fri, 27 Nov 2020 14:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieY4-0006Es-JU; Fri, 27 Nov 2020 14:16:32 +0000
Received: by outflank-mailman (input) for mailman id 39455;
 Fri, 27 Nov 2020 14:16:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4e3U=FB=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kieY2-0006El-Ow
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:16:30 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03b2ded8-b195-4968-962f-f50f5cb89304;
 Fri, 27 Nov 2020 14:16:28 +0000 (UTC)
Received: from DB6P192CA0020.EURP192.PROD.OUTLOOK.COM (2603:10a6:4:b8::30) by
 DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3589.30; Fri, 27 Nov 2020 14:16:26 +0000
Received: from DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:b8:cafe::95) by DB6P192CA0020.outlook.office365.com
 (2603:10a6:4:b8::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Fri, 27 Nov 2020 14:16:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT023.mail.protection.outlook.com (10.152.20.68) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Fri, 27 Nov 2020 14:16:25 +0000
Received: ("Tessian outbound d6c201accd3c:v71");
 Fri, 27 Nov 2020 14:16:25 +0000
Received: from 9db9305aa0c5.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E599E4C9-2682-4465-B3E7-38381134B81A.1; 
 Fri, 27 Nov 2020 14:16:08 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9db9305aa0c5.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Nov 2020 14:16:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3386.eurprd08.prod.outlook.com (2603:10a6:10:46::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Fri, 27 Nov
 2020 14:16:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::8567:dffb:80c1:bc0%7]) with mapi id 15.20.3589.031; Fri, 27 Nov 2020
 14:16:07 +0000
X-Inumbo-ID: 03b2ded8-b195-4968-962f-f50f5cb89304
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W0lKqdAxGV52KUnfN/GqFWuaEaMQkLORmFiAeJApcIQ=;
 b=eQrirgNQTZCdugX6/wtQMGbkPqYJSELONm1xogfD1qDCA4oDDofSiX9g/keGdo4hIZNwqMmZJOw7P5WA6Y99/Kir9xR0DIPdhOL5BhAx9nis9HR+0guntmc49tDfBaOEaHZUJNXkB2/GEuOMt6kn2tPTceLrhfkZ0NVecHJCF2g=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 77f8d2eac79d42e1
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q3Ojk0qwf0TjDwXTritXeAoQsb9dnHqwh0S0d1hXLc/sBh2Zz2R54nmape5DmbzvCUkbU7qKuGm5WDIF0CRkVAPFqIYtCO/a8TtTMk2Aa16gQ8aIuYJ756NyTp3Wcr6aOas3BT47YZD3o98io6vQKZ9XpQ0sFyQDOAvUwSkKiUOcwKPE0RuQtCm/8RtXG7BtqK11vxdADoqEfUgWsTvWiJO2GoNzOWwHC29kygudl7uCt0XCWhwgx+SUJJoFJbewkeHdVIhGD7IzQn7rZV0GQmDd4WeCBrdqC7wgy2HqNCN2wD9PpB/65yGKHJshS6gyqjy429gBc1FeAmPzizPN8A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W0lKqdAxGV52KUnfN/GqFWuaEaMQkLORmFiAeJApcIQ=;
 b=nD4rlL53D1vCW9mkwPmjKwojUXt+yyE/dxYJGOZFHnjeTz7aFylK88NmSZsiuvVkS4BlM530Z1RI9x6hoUDlZJFtJC9fNQGXLo6Cn4GYw8PsVQ9Ld5ygqYs0tzXVA6ACuv/nAvDK92rMgNDfEvh8Q/Lf/N6YuRWYCJqLIjoYUCi/eDYXMOcKc+7jt77kidn2v3EHCBiXuSKeuuXdOMXRIZ0ceyRvL8FVNmXrmies8W8x4gm5CS0y6RcChp3GUzPRYbyrL+mHRRzl6xZ71oV+ImgGpTIKjlWBJnuZG8Km7wZBDrEHlfdGrJr6IfyKUkeQjBPTz7RNJbgF6xdkHHjXww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Rahul Singh <Rahul.Singh@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Topic: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Index: AQHWw1cvptOzj+DOhEKKe2XksTlGiKncBA0AgAAE9oA=
Date: Fri, 27 Nov 2020 14:16:07 +0000
Message-ID: <F1A3739A-D07C-429F-AC7B-47F7E2710377@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
 <bacfe1c3-d86d-95b2-c52a-4bb86f1338ea@suse.com>
In-Reply-To: <bacfe1c3-d86d-95b2-c52a-4bb86f1338ea@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 168b1d40-4c00-494d-c6e0-08d892df0697
x-ms-traffictypediagnostic: DB7PR08MB3386:|DB7PR08MB3500:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB35009B0B89885D672CBF408F9DF80@DB7PR08MB3500.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 JkBjaAUY01kxYRk/RvQgtCEcpvatQ45yl3FGdkTCa8WBec0419NiT9nnXICP/1AAGJ82L5tuKycHPJq9j6NhuvP6bKjUnXJOqmLxVv+krWjg2i1vqEZRzXsPs09MFQL4mQStQDzY/TSeHufQecirDZiBgh+96EIWltXURDXQ79WJRl4irPgklmpwamYZqf+O7sxyGHSIaYlecLdUHqcKvcmCd9WnUrLzw2c6vR8i/cqSGMMHDiDS3kXBh/bdYxgudPFG9iXhjj7GF0Pvd8jdZ8zbZP9jRl4x9MLWlp+8/duY75JxCZrx+9aOtYq3rFdfdoCcWm/6LH4RGaQoWZDpcg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(39860400002)(366004)(376002)(136003)(2906002)(5660300002)(8676002)(6512007)(6486002)(36756003)(8936002)(6916009)(86362001)(64756008)(66476007)(91956017)(76116006)(66556008)(66446008)(66946007)(4326008)(33656002)(54906003)(71200400001)(186003)(6506007)(53546011)(26005)(478600001)(2616005)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?M927DEbGUqAnGTiPl3WDg/SnxvLnwrCR6vfOgFFIxbOeaLIl5MIVIXhgTBAd?=
 =?us-ascii?Q?TA+QVixn4woItN1Ynn/FYZ0higInnF4spO12g2ntvxgzv/ELYDPrbQG2WGJ1?=
 =?us-ascii?Q?H6ROVfj45+7TEYFgo4/ttOb7+msTRJxcoTJHhLTULAryoBFm6xrJFhlz4O52?=
 =?us-ascii?Q?4otn2fsVwHF2gcO60B1fuL/svhjw6yOlP2l2YG3bEePVohlRWmj/v2EGj485?=
 =?us-ascii?Q?b3xls5b7tJnzxrkU4csH9UxQ6d0cCk3UMvoxxPBDcRHR+pdsLsaoq1hA4QCL?=
 =?us-ascii?Q?ag5IwxnsBF4c7/epqNPtoKVlaZV4CjT5IzyXeFN7FmfwwAfAWOP4yrP9KmDX?=
 =?us-ascii?Q?DUwuILy8DhC3zgwbzc2P26kje2qrbu1+4FWr5prNGkUDpTwV5XBk9WcfOj/Z?=
 =?us-ascii?Q?lNoJ4xMLFnubrvAZIhpbbs8KqOYIekKEcoBN8x3s43s3uCn/+7DTNdvHG6h3?=
 =?us-ascii?Q?7fF/m9xA6NBmK5B4wjAL1YZPEWydCrE3z/wVtiW7ZVZxqPKnzb4e5pbiwL3e?=
 =?us-ascii?Q?Y7uDL0vET1DOUyB4UUfC7mbR5ytN7lehVycVmwDxefHaLMnhbG1C48r5rjHu?=
 =?us-ascii?Q?Wmlrgvcb7sbwdB2WOFM7hhUnCz00YcPtAvycCk8d5CmMSHHV1lczs3Mum+oI?=
 =?us-ascii?Q?Vul0TTLglIBb41LEdHG+Td+nJEgW3OmUKWZ65nkIXJXtD0+fB18+KdJvlsNN?=
 =?us-ascii?Q?Za0jLEq0J03jrzSdmj/J+A52HJumJeplPBzCRw42WQlAtfR8Se/FJXqGkLAG?=
 =?us-ascii?Q?YPM83yCW8svYW9Z9a3HwYGgib/RXQUlCS9994fF+vVnmM4PN8rDtBZ0lEmkZ?=
 =?us-ascii?Q?pEuRCHnDLOnXPT6DfO5vdgbktByRZ4yW+lrSOx1ivnr4HvJ7t1c/5uM/3Kfc?=
 =?us-ascii?Q?1bUQ67QvZzORMt4gXrsbbGeC4oRxO5D+IqRLBM9lRZFoW3lnUIDqiyF+2P11?=
 =?us-ascii?Q?kYd+NvGDsnmAC3L2tCZdjv7ytHC4YjZRmjYh+5uo8CaKbHmsMOnI28LWWw1r?=
 =?us-ascii?Q?hk5G?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DB27409D3E7E6341906AB6B2C9CD9D72@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3386
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	74b975c8-5e4d-4c3e-06a6-08d892defbaf
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qaBz0E+en1i06MR5HFWrtXquCMAnJLyBcPuLhzAUgibhb1tiwk3ClVUkFbcs6bTyLw4X2/67I05lFuWjyUJq4KGbI92CsbK6pZ8y+49Tt6QRwgNIBzxncBpxzgxRkm6z+FIWkDrFRapqfmcDICQoxNMN3MJf8zHArWMxNIky8FyeS6YUwvMaEHaoOwN7k7tEjO62L1Ugwi1PNRD9BCgaGf9ImdXjfFOoCZ5dXiYipzp9xtfKIoHL38cI7wvIR+1JBgNSbqewgTlRl+NJ7fNLNxi2numWA1xqG4EEDqOVqLRSnJFT1FdYqY3z5mRP9WXTl8N4avCK798BmvMPuorDToYorJv9DXsvqZSvrmYSTAllk+CsUnya7BAG6qbJw1ELZiy6hXLhH1UO5ny/Fdui+A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(396003)(346002)(376002)(136003)(46966005)(6512007)(2906002)(70206006)(82310400003)(336012)(81166007)(36756003)(6862004)(186003)(316002)(70586007)(2616005)(8676002)(53546011)(356005)(6506007)(5660300002)(26005)(54906003)(478600001)(47076004)(33656002)(82740400003)(8936002)(4326008)(6486002)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Nov 2020 14:16:25.8837
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 168b1d40-4c00-494d-c6e0-08d892df0697
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3500

Hi Jan,

> On 27 Nov 2020, at 13:58, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.11.2020 19:16, Rahul Singh wrote:
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -16,7 +16,7 @@
>> #include <xen/timer.h>
>> #include <xen/serial.h>
>> #include <xen/iocap.h>
>> -#ifdef CONFIG_HAS_PCI
>> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>> #include <xen/pci.h>
>> #include <xen/pci_regs.h>
>> #include <xen/pci_ids.h>
>> @@ -51,7 +51,7 @@ static struct ns16550 {
>>     unsigned int timeout_ms;
>>     bool_t intr_works;
>>     bool_t dw_usr_bsy;
>> -#ifdef CONFIG_HAS_PCI
>> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>
> I'm sorry to be picky, but this being a hack wants, imo, also calling
> it so, by way of a code comment. Clearly this should go at one of the
> first instances, yet neither of the two above are really suitable imo.
> Hence I'm coming back to my prior suggestion of introducing a
> consolidated #define without this becoming a Kconfig setting:
>
> /*
> * The PCI part of the code in this file currently is only known to
> * work on x86. Undo this hack once the logic has been suitably
> * abstracted.
> */
> #if defined(CONFIG_HAS_PCI) && defined(CONFIG_X86)
> # define NS16550_PCI
> #endif
>
> And then use NS16550_PCI everywhere. I'd be fine making this
> adjustment while committing, if I knew that (a) you're okay with it
> and (b) the R-b and A-b you've already got can be kept.
>

Sounds ok to me, so you can keep my R-b if you go this way.

Cheers
Bertrand

> Jan
>



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39472.72387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieeQ-0007EC-Ir; Fri, 27 Nov 2020 14:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39472.72387; Fri, 27 Nov 2020 14:23:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieeQ-0007E5-FY; Fri, 27 Nov 2020 14:23:06 +0000
Received: by outflank-mailman (input) for mailman id 39472;
 Fri, 27 Nov 2020 14:23:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kieeO-0007E0-Vy
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:23:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kieeM-00062k-Ry; Fri, 27 Nov 2020 14:23:02 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kieeM-0008KW-Iw; Fri, 27 Nov 2020 14:23:02 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=KAlWNWKzU90CqokCU2i7K0T5BS0bM4FllDwjHF8xgck=; b=Y3geBysZnrdjanTwdzUj8x594A
	8cdeVJMYxQJPJkkYSUFBY9nxSp0oL1AaLRaX/csYcDsD6/WTbYY50YYcyKBaDndtJEskC8WAyXvHi
	zgyOTahx420DM/uir5r2I5X87lbclaRCv9PruYmXJ9NX0p+aLc8KEsHofC1VeCdOaA/8=;
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
Date: Fri, 27 Nov 2020 14:23:00 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201125105122.3650-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 25/11/2020 10:51, Juergen Gross wrote:
> evtchn_fifo_set_pending() can be simplified a little bit.

The commit message is quite light... For posterity, it would be good to 
explain why the simplification can be done. In particular, there is a 
change in behavior after this patch.

> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V8:
> - new patch
> ---
>   xen/common/event_fifo.c | 34 +++++++++++++++-------------------
>   1 file changed, 15 insertions(+), 19 deletions(-)
> 
> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
> index 443593c3b3..77609539b1 100644
> --- a/xen/common/event_fifo.c
> +++ b/xen/common/event_fifo.c
> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>           return;
>       }
>   
> +    /*
> +     * Control block not mapped.  The guest must not unmask an
> +     * event until the control block is initialized, so we can
> +     * just drop the event.
> +     */
> +    if ( unlikely(!v->evtchn_fifo->control_block) )

Sort of unrelated, but AFAICT, v->evtchn_fifo->control_block can be set 
concurrently with this access.

Thankfully, once the control block is mapped, it can't be unmapped. 
However, there is still a possibility that you may see half of the update.

Shouldn't the field be accessed with ACCESS_ONCE()?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:24:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:24:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39479.72400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiefX-0007NG-TX; Fri, 27 Nov 2020 14:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39479.72400; Fri, 27 Nov 2020 14:24:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiefX-0007N9-QS; Fri, 27 Nov 2020 14:24:15 +0000
Received: by outflank-mailman (input) for mailman id 39479;
 Fri, 27 Nov 2020 14:24:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0dHZ=FB=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kiefX-0007N4-3C
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:24:15 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 547d2cd6-08e4-4161-80ff-43340c591949;
 Fri, 27 Nov 2020 14:24:14 +0000 (UTC)
X-Inumbo-ID: 547d2cd6-08e4-4161-80ff-43340c591949
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606487054;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=D5L4R48jTaqMmWokScSU3RSjvbXkKn0bwvjerOZsofs=;
  b=FMWqw4gho2sUxs0mH13DIWvOaf47BqLp72ttAveQfJgXtDQrouI+nt7t
   tEJb6PT4bKeYQ0SZMg3c1/2zYHKTTGxFLWgdMUcL2G/jglvg2xXW8VgJr
   AK0UiUDHQ6OnvRKt10bRr9LH2unLsg1zHBYKy86+cG2UtPTerzcBHXY/4
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: W0cOlhBUV4DnLGq2gDEc1NCNhR932tcp5/Y+YsLB7PwiA8cKThwaunpOtSNhWvvlp5Wr6rLx0F
 W2A18XP4E+SFvNMbcDIqoFsA4rf8dtsH4TL9yXaFrrFH1MY6T9k+w5DO6EuuuB1iXU8eslcR2N
 m6wk86nEHJPn2dYAz4ylRzw9v5yDJMwX4oH24xGdKijsByCPUrzurAL43d/D35A8rF4J5j6yP4
 Mybi2pSan9LhqBlpgr3JpuYkUOSf1dfwmue4oPmVjLLwGtjQzf47vwySVbSX/o3SMYpwKnpnbc
 r18=
X-SBRS: None
X-MesageID: 32019534
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,374,1599537600"; 
   d="scan'208";a="32019534"
Date: Fri, 27 Nov 2020 14:24:07 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Eduardo Habkost <ehabkost@redhat.com>
CC: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
	<qemu-devel@nongnu.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>, Igor Mammedov
	<imammedo@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, Paolo Bonzini
	<pbonzini@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, "Wainer
 dos Santos Moschetta" <wainersm@redhat.com>, Aurelien Jarno
	<aurelien@aurel32.net>, Thomas Huth <thuth@redhat.com>, Alex
 =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>, Aleksandar Rikalo
	<aleksandar.rikalo@syrmia.com>, Richard Henderson <rth@twiddle.net>, "Fam
 Zheng" <fam@euphon.net>, "Daniel P . Berrange" <berrange@redhat.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH-for-6.0 v4 15/17] gitlab-ci: Add test for Xen (on CentOS
 7)
Message-ID: <20201127142407.GC2098@perard.uk.xensource.com>
References: <20201108204535.2319870-1-philmd@redhat.com>
 <20201108204535.2319870-16-philmd@redhat.com>
 <20201126173824.GB2098@perard.uk.xensource.com>
 <20201126174559.GP2271382@habkost.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201126174559.GP2271382@habkost.net>

On Thu, Nov 26, 2020 at 12:45:59PM -0500, Eduardo Habkost wrote:
> On Thu, Nov 26, 2020 at 05:38:24PM +0000, Anthony PERARD wrote:
> > Is `make check` going to do something useful with the Xen support? Or is
> > it going to need more work in order to test the Xen support of QEMU?
> > (Like starting an actual Xen guest.)
> 
> I don't think it will test Xen support, but we still want to at
> least check if --enable-xen doesn't break anything else.

That sounds good.

> Is there any public CI system anywhere where Xen support is
> tested today?

Yes, we have osstest, which regularly tests Xen with QEMU from upstream.
Results are sent to xen-devel. But that might not be very useful for
qemu-devel.

We also have a GitLab CI which does some Xen tests, but I don't think
QEMU is tested there.
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=automation/gitlab-ci/test.yaml;hb=HEAD
https://gitlab.com/xen-project/xen/

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:25:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:25:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39485.72411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiegn-0007VX-8d; Fri, 27 Nov 2020 14:25:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39485.72411; Fri, 27 Nov 2020 14:25:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiegn-0007VQ-5R; Fri, 27 Nov 2020 14:25:33 +0000
Received: by outflank-mailman (input) for mailman id 39485;
 Fri, 27 Nov 2020 14:25:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDXF=FB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kiegm-0007VI-FC
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:25:32 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.47]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id affa1838-25d0-4936-86fc-2d2ea22f2063;
 Fri, 27 Nov 2020 14:25:30 +0000 (UTC)
Received: from MRXP264CA0004.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:15::16)
 by DBBPR08MB6011.eurprd08.prod.outlook.com (2603:10a6:10:209::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Fri, 27 Nov
 2020 14:25:27 +0000
Received: from VE1EUR03FT042.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:15:cafe::53) by MRXP264CA0004.outlook.office365.com
 (2603:10a6:500:15::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Fri, 27 Nov 2020 14:25:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT042.mail.protection.outlook.com (10.152.19.62) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Fri, 27 Nov 2020 14:25:27 +0000
Received: ("Tessian outbound 082214a64d39:v71");
 Fri, 27 Nov 2020 14:25:26 +0000
Received: from b2a4441315b8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 788D8F11-D9CC-4765-AD32-A1D866C3175E.1; 
 Fri, 27 Nov 2020 14:25:11 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b2a4441315b8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 27 Nov 2020 14:25:11 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com (2603:10a6:10:49::10)
 by DBBPR08MB6073.eurprd08.prod.outlook.com (2603:10a6:10:1f7::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Fri, 27 Nov
 2020 14:25:09 +0000
Received: from DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef]) by DB7PR08MB3500.eurprd08.prod.outlook.com
 ([fe80::21f3:34c:8f7e:42ef%2]) with mapi id 15.20.3589.030; Fri, 27 Nov 2020
 14:25:08 +0000
X-Inumbo-ID: affa1838-25d0-4936-86fc-2d2ea22f2063
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WqpGUJM4S9eQHH5Ot9fLaepfAzqGMbuWwOB0pNJ9QQY=;
 b=w2rlozCN59atiW7ZSAinHL8nMPhJ01gTiiCPVtuVTnyHBco13P1/p4OXyzDYenLLYquT3I41BP4WTQ/GhK0DdrQY08mlLTH0NKUfPQMmnwEW8iQgMWAdv4nFhUDNAvjasupNhBDC6T/TzfsXasbD4mRMc6yiXmO9za45oFyOVWQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e77a87755b240a88
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=edMmVdpIo6qrSjJOS8wgXzcM3GFaUJf6OgVAih+4jKPZRQ7ayz/kRy6Sa8jV/oW3mIU0NVr8QHP1ka7hGoebX7p15CMRJHlAP2MeGGYk02Qg5RXlPosSqGzsixXzRUTsC7+HwTOjeE4OMObhGIxJSsddbgY8GYmNL2YXm5E7x1D4JJ8BdDAAlGKr5eej2MpINQVY5mhJlpBsi8clVJBPKKgVnGxyEMA0ODhjObcmXWgGlRki4RnLLN/Un/xkttwtpJxhPAG3gn/n/EhvAPIxNt/S8S5jnykHh+0rZxW8LrP/uiAjBlNzCldBdnh1eONiTbXZh96mEoDjQM7foYgFvQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WqpGUJM4S9eQHH5Ot9fLaepfAzqGMbuWwOB0pNJ9QQY=;
 b=Oc1TaJ9GoxrQaRHzt5t8RdgiMUQ6w2/cfvlZzf+mC8DdGG9N7AuFJUtD8wqLhtkdG/76Es+Tz018H0c3INv9k7X/2flTVaTEoM5PfdBUUIKWVkHHEXOVOy+oNvIrKyPokBEv6l9K0nwXHDL42pbJsH3GJ4VUpfQzsT54/unu0DTNd/kkOlqn3BaexSC6+uNbXaMOPqc3sL4lNwve9uyhEjZcPk8t8qJBmSS3MduFnKIC1ZuURDzOZ53uKqF0iry23dRN+C4TlFC/b0/4J7kh2ud6XxaTB4zMJYFNIuCViTCPkQ9xndGQTo05A8pfgLAsDRZSo30mYbYLVpsXExFMeA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Topic: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
Thread-Index: AQHWw1cvjUPnGDPA302zQ9SdCrcARancBA0AgAAHeoA=
Date: Fri, 27 Nov 2020 14:25:08 +0000
Message-ID: <2E5C1B9B-FB45-4566-9DA9-FE7D00B5AE16@arm.com>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com>
 <bacfe1c3-d86d-95b2-c52a-4bb86f1338ea@suse.com>
In-Reply-To: <bacfe1c3-d86d-95b2-c52a-4bb86f1338ea@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 29223fb9-992b-4739-f787-08d892e0495b
x-ms-traffictypediagnostic: DBBPR08MB6073:|DBBPR08MB6011:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB6011C594F2518A421938A5C7FCF80@DBBPR08MB6011.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 hoWju5HdNVcjStV1ANG1QyQBplXXqcPHcUR/5v7JfpKjXniArRtVNNyaTcbn4lc8I5STXR9YRp3IUo5fl0QIJnj+/Kd3FGuGJI6Da0SOxiM2M3CRRjA4svxCeti5Q/Mxr9ErKgudc9xXPHkiLUmitqKFQwuOkVe8kjneMK/NrJssEBZirK69BR9H7phkap9tRHLQWR65z+TzK+XlsaP6Yd4q4WiIQJpB0mzXKZY3QB8Or51+canox8n2QanwomBvbgZ658C9J/XbAiviex5lkSoTYt3IFP6d+qMOrqvrhpzZEtbcFsz+sLIzZkpRhSXppLt0wHOgyYhA/u8L0gxvMg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3500.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(396003)(346002)(39860400002)(136003)(36756003)(86362001)(33656002)(478600001)(71200400001)(6512007)(2616005)(4326008)(54906003)(8676002)(5660300002)(26005)(2906002)(8936002)(186003)(53546011)(6916009)(91956017)(6486002)(316002)(76116006)(66476007)(6506007)(66446008)(66946007)(64756008)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?jUMMd7jQA7Q/bwoKzrvA2668oPOeumluMxdu7whg5HmJcGxG5XzM2BeAGQMK?=
 =?us-ascii?Q?y0PGe2rqVubeqg4F+A8zQgf8ePBwxX5TuRetvulVrflxMlOOYJzKC6kTcUa/?=
 =?us-ascii?Q?/M9d1pBlOBexkaKGm6zqgvQBMlf9A8linVEG0cBmH2awUfW67U5pDFopvZte?=
 =?us-ascii?Q?xIozXtZgTX661ViZ41M7UB2LIF488WENXCRCTg7KOxgKoi9zuhoqT5KzrI31?=
 =?us-ascii?Q?C9EoUe0UUBVmvMTblZUXZGn8H4Dmlo5NiNxoywbGKUKYak3OT+42HwXkPi6g?=
 =?us-ascii?Q?CEkWMyKmlxb7lBVgYg41cIuRfHrnTF7YFKS/RW321oicuIXeh2gLgt4HL3sy?=
 =?us-ascii?Q?mwqUYtDb+Oi2zKNTIK0wT4/j271Yo6HvAK9DKhxo0aKHfD7xGIk2Q88ONH47?=
 =?us-ascii?Q?7zHjL4e0GRUPwrs/52awdlueJC7aft0IY8y1AbwZUCWws/P5aRFeHOXhO0i3?=
 =?us-ascii?Q?0lKLy8X2AQXwHaxDWyZtKVRWZtrS/avJNBUNm35ZTwGeUuA9N7fJwWSg/ipE?=
 =?us-ascii?Q?eMrieAweaoXz+rcT/CDxTr8jece6cBC9uRRl4V4pSC39B864XIgb0C+rXb1W?=
 =?us-ascii?Q?Kog6djD01z0Rx/rluCioVIkl3JC5/pVhVAr/UZNMbsRWHz9psoiwGvJM58Rr?=
 =?us-ascii?Q?IK4udm4F3hipYJnqG3tq34cE28D2dbS11/46dQBeeF0mCVNgo4Yn3qZuQz1F?=
 =?us-ascii?Q?netwjP/ieMJENsS6fHGOdhweInfBjUsTOJAHxcNyHQpYYbUEJgHBiJL0YrHu?=
 =?us-ascii?Q?gonzob947zKpgk5p93hWry0znKhjdVG3lxjzhZ7ntHUT0HfeeLpDM/BAvRUJ?=
 =?us-ascii?Q?Q5OZkAKtIrA+M3WZ+Phb5/wm1PGQn4IpULMI+tiliqP7Jx/Z6z1oEzFJ/54d?=
 =?us-ascii?Q?zgDoDLN/o7+eQyoph350C6PGuDqCx7pEShQ1avIX4SCrNC7QjhZf917BWDyB?=
 =?us-ascii?Q?svf+Jxua79wa0Bwlos7SnXk5t2jLKT7iWatijPe/4OVPmfXImDXD63hsIA7h?=
 =?us-ascii?Q?H8nj?=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <5C1951913E8B5C45BFA8B5916A5C7B39@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6073
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	00c16ae8-ebd8-4666-cba2-08d892e03e31
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	U2d8ob+37NeLQ/ogTIoHYq8SgqJ6VAun7J/I2YcdmoV//i0MrVfekCO9ZXi7P7K7arTBxYIYXllbDLdaEnH+Vsg3jk+W+cMm7fpeR8aA0GiaaSk/0x3UvJa27r04wrcQoTKa5ktatce9iwMZKonk3wD6lF2RKq/qXQ08Byk82OyyVc/5GrLZfq2N8AkM2LV6IJNsr4p99V2YKCJE+DemzXosNVdrolY0o4lLe8rxIKYxH1eFO2WXfvm4JvVCHq/MDsIqUGUIlBFwR1CYTLxlMy7igNzszMkkfF+ItdFdGcCichFjmb9BxAfn2QksCQcgqKn/3SU6Iual3OAPita2fwE9h0+ElJaWb+FxPKj8Po84L/PEHvndU1af6M02UkG08lAyixaw0w+0EDyYIfTHMQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(346002)(396003)(376002)(46966005)(70206006)(86362001)(70586007)(478600001)(8676002)(26005)(186003)(6506007)(53546011)(82310400003)(6862004)(54906003)(316002)(4326008)(36756003)(82740400003)(33656002)(5660300002)(356005)(47076004)(2616005)(6486002)(6512007)(336012)(8936002)(2906002)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Nov 2020 14:25:27.2659
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 29223fb9-992b-4739-f787-08d892e0495b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6011

Hello Jan,

> On 27 Nov 2020, at 1:58 pm, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.11.2020 19:16, Rahul Singh wrote:
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -16,7 +16,7 @@
>> #include <xen/timer.h>
>> #include <xen/serial.h>
>> #include <xen/iocap.h>
>> -#ifdef CONFIG_HAS_PCI
>> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>> #include <xen/pci.h>
>> #include <xen/pci_regs.h>
>> #include <xen/pci_ids.h>
>> @@ -51,7 +51,7 @@ static struct ns16550 {
>>     unsigned int timeout_ms;
>>     bool_t intr_works;
>>     bool_t dw_usr_bsy;
>> -#ifdef CONFIG_HAS_PCI
>> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
>
> I'm sorry to be picky, but this being a hack wants, imo, also calling
> it so, by way of a code comment. Clearly this should go at one of the
> first instances, yet neither of the two above are really suitable imo.
> Hence I'm coming back to my prior suggestion of introducing a
> consolidated #define without this becoming a Kconfig setting:
>
> /*
> * The PCI part of the code in this file currently is only known to
> * work on x86. Undo this hack once the logic has been suitably
> * abstracted.
> */
> #if defined(CONFIG_HAS_PCI) && defined(CONFIG_X86)
> # define NS16550_PCI
> #endif
>
> And then use NS16550_PCI everywhere. I'd be fine making this
> adjustment while committing, if I knew that (a) you're okay with it
> and

Thanks for reviewing the code. I am ok with it.

> (b) the R-b and A-b you've already got can be kept.
>
> Jan
>



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:26:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:26:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39496.72424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiehY-0007ch-Ij; Fri, 27 Nov 2020 14:26:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39496.72424; Fri, 27 Nov 2020 14:26:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiehY-0007ca-FU; Fri, 27 Nov 2020 14:26:20 +0000
Received: by outflank-mailman (input) for mailman id 39496;
 Fri, 27 Nov 2020 14:26:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiehX-0007cU-DM
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:26:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiehV-000666-PI; Fri, 27 Nov 2020 14:26:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiehV-0000MU-JY; Fri, 27 Nov 2020 14:26:17 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4/G7E8NqdL2Dnr5IPRF2frrF0OWnH8yG3AfYGeWfSgg=; b=GCGCgpqIccMuamZaWxk3wDTNbl
	cqM9Y68a+ci3f7jqFAP5fJhVXrEr2I5Lq2GzGfF3y7oRJJUNPsbuWi9Xldmk3bh3OIWubtVfvoFNS
	MVRryZDn+jMlQcR2OVqSY8/PZEvFAoYAzkX31NS/cVZ0zmPANayf5Z4/rshVr6Zxze7k=;
Subject: Re: [PATCH v8 2/3] xen/events: rework fifo queue locking
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-3-jgross@suse.com>
 <e60e4fce-8c1b-013a-9ec2-20bd2c930619@xen.org>
 <2b099865-647c-3d47-1510-d429c2a4b6c6@suse.com>
 <5f04e881-915f-e2b7-6af3-459af614f8ca@xen.org>
 <c9a8e879-ff55-3fbc-41ab-df836c76be9f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <344c5aac-8931-cf7c-f8fb-531d33a3fd0d@xen.org>
Date: Fri, 27 Nov 2020 14:26:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <c9a8e879-ff55-3fbc-41ab-df836c76be9f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 27/11/2020 14:14, Jürgen Groß wrote:
> On 27.11.20 15:11, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 27/11/2020 14:05, Jürgen Groß wrote:
>>> On 27.11.20 14:58, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 25/11/2020 10:51, Juergen Gross wrote:
>>>>> -static struct evtchn_fifo_queue *lock_old_queue(const struct 
>>>>> domain *d,
>>>>> -                                                struct evtchn 
>>>>> *evtchn,
>>>>> -                                                unsigned long *flags)
>>>>> -{
>>>>> -    struct vcpu *v;
>>>>> -    struct evtchn_fifo_queue *q, *old_q;
>>>>> -    unsigned int try;
>>>>> -    union evtchn_fifo_lastq lastq;
>>>>> -
>>>>> -    for ( try = 0; try < 3; try++ )
>>>>> -    {
>>>>> -        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>>>>> -        v = d->vcpu[lastq.last_vcpu_id];
>>>>> -        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
>>>>> -
>>>>> -        spin_lock_irqsave(&old_q->lock, *flags);
>>>>> -
>>>>> -        v = d->vcpu[lastq.last_vcpu_id];
>>>>> -        q = &v->evtchn_fifo->queue[lastq.last_priority];
>>>>> -
>>>>> -        if ( old_q == q )
>>>>> -            return old_q;
>>>>> -
>>>>> -        spin_unlock_irqrestore(&old_q->lock, *flags);
>>>>> -    }
>>>>> -
>>>>> -    gprintk(XENLOG_WARNING,
>>>>> -            "dom%d port %d lost event (too many queue changes)\n",
>>>>> -            d->domain_id, evtchn->port);
>>>>> -    return NULL;
>>>>> -}
>>>>> -
>>>>>   static int try_set_link(event_word_t *word, event_word_t *w, 
>>>>> uint32_t link)
>>>>>   {
>>>>>       event_word_t new, old;
>>>>> @@ -190,6 +158,9 @@ static void evtchn_fifo_set_pending(struct vcpu 
>>>>> *v, struct evtchn *evtchn)
>>>>>       event_word_t *word;
>>>>>       unsigned long flags;
>>>>>       bool_t was_pending;
>>>>> +    struct evtchn_fifo_queue *q, *old_q;
>>>>> +    unsigned int try;
>>>>> +    bool linked = true;
>>>>>       port = evtchn->port;
>>>>>       word = evtchn_fifo_word_from_port(d, port);
>>>>> @@ -204,17 +175,67 @@ static void evtchn_fifo_set_pending(struct 
>>>>> vcpu *v, struct evtchn *evtchn)
>>>>>           return;
>>>>>       }
>>>>> +    /*
>>>>> +     * Lock all queues related to the event channel (in case of a 
>>>>> queue change
>>>>> +     * this might be two).
>>>>> +     * It is mandatory to do that before setting and testing the 
>>>>> PENDING bit
>>>>> +     * and to hold the current queue lock until the event has put 
>>>>> into the
>>>>> +     * list of pending events in order to avoid waking up a guest 
>>>>> without the
>>>>> +     * event being visibly pending in the guest.
>>>>> +     */
>>>>> +    for ( try = 0; try < 4; try++ )
>>>>
>>>> May I ask why the number of tries is 4 rather than the original 3?
>>>
>>> Oh, I think this is just a typo. OTOH it doesn't really matter.
>>
>> I agree that the number of tries was likely arbitrary, and therefore
>> using a different number should not matter.
>>
>> However, this makes the patch more difficult to review, because it is
>> an unexplained change.
>>
>> I would prefer if this were dropped. But if you want to keep the
>> change, then it should be explained in the commit message.
> 
> Well, I could argue that there is potentially one lock more to take, so
> the retry number is increased by one, too. ;-)

I will not argue against keeping 4 :). I care more about explaining what
we do, because it is really frustrating to read some of the older commits
where there is not much rationale (I probably wrote some).

Cheers,

-- 
Julien Grall
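The retry pattern of the removed lock_old_queue() quoted above can be modelled in a few lines. This is a single-threaded sketch, not the real Xen code: the struct, the global queue pointer, and the `locked` flag standing in for spin_lock_irqsave() are all simplified illustrations of the snapshot/lock/re-check idea.

```c
#include <assert.h>
#include <stddef.h>

struct queue { int locked; };

static struct queue queues[2];
static struct queue *current_q = &queues[0];   /* may change concurrently */

/* Snapshot the current queue, take its lock, then re-check the snapshot:
 * if the event moved to another queue in between, unlock and retry a
 * bounded number of times (3 in the removed code, 4 in the new loop). */
static struct queue *lock_old_queue(unsigned int max_tries)
{
    for ( unsigned int try = 0; try < max_tries; try++ )
    {
        struct queue *old_q = current_q;       /* snapshot */

        old_q->locked = 1;                     /* spin_lock_irqsave() stand-in */

        if ( old_q == current_q )              /* still the right queue? */
            return old_q;

        old_q->locked = 0;                     /* raced with a queue change */
    }

    return NULL;                               /* too many queue changes */
}
```

The bound exists only to guarantee termination if the event keeps hopping between queues, which is why its exact value (3 vs 4) is largely arbitrary.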


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:39:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39513.72441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kietz-0000P6-2g; Fri, 27 Nov 2020 14:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39513.72441; Fri, 27 Nov 2020 14:39:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiety-0000Oz-VP; Fri, 27 Nov 2020 14:39:10 +0000
Received: by outflank-mailman (input) for mailman id 39513;
 Fri, 27 Nov 2020 14:39:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQCY=FB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kietx-0000Ou-VB
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:39:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a8edfd9-97e5-4ed7-b3de-f88a634514bf;
 Fri, 27 Nov 2020 14:39:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF675AC55;
 Fri, 27 Nov 2020 14:39:07 +0000 (UTC)
X-Inumbo-ID: 0a8edfd9-97e5-4ed7-b3de-f88a634514bf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606487948; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=jQEL1saZRtu8Ds50rj4d1r4gft1kfdAL7bU++DKRAsU=;
	b=rZcR5dYs1BNrBXablSRUo5sQhKUgbG/e2Iv/Qr/yzmU7+RVBqrkm7j9az0T6bWSS5fWXhr
	Ec1OZ/fwhYPW2YetM+4OFGGM9XVROOM/xj8JFAPrRf3BAFRjYpbXv62AAUx7Cgi+k24ZpG
	n9DXfkH5GjGxVIL6NhW4aWwOAspVf+8=
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
Date: Fri, 27 Nov 2020 15:39:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="qFJVFUWfoJ3GRGmlv25OVTUGcA1OgIM2c"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--qFJVFUWfoJ3GRGmlv25OVTUGcA1OgIM2c
Content-Type: multipart/mixed; boundary="tu9tYuq12ieXBY65NuByCg9Pgwae9UW5K";
 protected-headers="v1"
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-ID: <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
In-Reply-To: <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>

--tu9tYuq12ieXBY65NuByCg9Pgwae9UW5K
Content-Type: multipart/mixed;
 boundary="------------1EED04DB54FFE24301D417DA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1EED04DB54FFE24301D417DA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.11.20 15:23, Julien Grall wrote:
>
>
> On 25/11/2020 10:51, Juergen Gross wrote:
>> evtchn_fifo_set_pending() can be simplified a little bit.
>
> The commit message is quite light... For posterity, it would be good to
> explain why the simplification can be done. In particular, there is a
> change in behavior after this patch.
>
>> Suggested-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V8:
>> - new patch
>> ---
>>  xen/common/event_fifo.c | 34 +++++++++++++++-------------------
>>  1 file changed, 15 insertions(+), 19 deletions(-)
>>
>> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
>> index 443593c3b3..77609539b1 100644
>> --- a/xen/common/event_fifo.c
>> +++ b/xen/common/event_fifo.c
>> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>          return;
>>      }
>> +    /*
>> +     * Control block not mapped.  The guest must not unmask an
>> +     * event until the control block is initialized, so we can
>> +     * just drop the event.
>> +     */
>> +    if ( unlikely(!v->evtchn_fifo->control_block) )
>
> Sort of unrelated, AFAICT, v->evtchn_fifo->control_block can be set
> concurrently to this access.
>
> Thankfully, once the control block is mapped, it can't be unmapped.
> However, there is still a possibility that you may see half of the update.
>
> Shouldn't the field be accessed with ACCESS_ONCE()?

Shouldn't this be another patch? Especially as the writing side needs
the same treatment.


Juergen
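A minimal sketch of the ACCESS_ONCE() treatment Julien asks about is below. The macro mirrors the usual volatile-cast idiom; the struct is a simplified stand-in for Xen's per-vCPU FIFO state, not the real definition.

```c
#include <assert.h>
#include <stddef.h>

/* Usual volatile-cast idiom: forces exactly one load (or store) of x,
 * preventing the compiler from tearing, caching, or re-reading it. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

struct evtchn_fifo_vcpu { void *control_block; };

static int control_block_mapped(struct evtchn_fifo_vcpu *efv)
{
    /* Single untorn read of a field another vCPU may be writing. */
    return ACCESS_ONCE(efv->control_block) != NULL;
}
```

As the thread notes, the reader-side annotation alone is not enough: the writer that publishes the mapped control block needs the matching single-store treatment, which is why it arguably belongs in a separate patch covering both sides.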

--------------1EED04DB54FFE24301D417DA--

--tu9tYuq12ieXBY65NuByCg9Pgwae9UW5K--

--qFJVFUWfoJ3GRGmlv25OVTUGcA1OgIM2c--


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:39:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39518.72454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieuV-0000VK-Ek; Fri, 27 Nov 2020 14:39:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39518.72454; Fri, 27 Nov 2020 14:39:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kieuV-0000VD-BT; Fri, 27 Nov 2020 14:39:43 +0000
Received: by outflank-mailman (input) for mailman id 39518;
 Fri, 27 Nov 2020 14:39:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kieuT-0000V2-QD
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:39:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96b2181a-2184-4f83-be4c-abeaede88ee2;
 Fri, 27 Nov 2020 14:39:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0FBFABD7;
 Fri, 27 Nov 2020 14:39:39 +0000 (UTC)
X-Inumbo-ID: 96b2181a-2184-4f83-be4c-abeaede88ee2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606487979; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UmA4DRpHijWQ21BBicmaNvL34nvckkY07jwQErDBbIA=;
	b=lyX8Fo0RvawhGB2l3Z/ui2tbWH3kRRaCorXIeOb6FZpsvS58iHzTuinbbFFFk5ZQC1+4KN
	u6u3yBUgw11nJwvSHxd9+wrnjOmnCO6RG4upUUQk32Ip5rl5oYDI7ideUh0KDv39Ye0hPd
	KBVapFTNphHZVZh65WzCENr7sYc1b3Y=
Subject: Re: [PATCH v10 1/7] remove remaining uses of iommu_legacy_map/unmap
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Julien Grall <jgrall@amazon.com>,
 Kevin Tian <kevin.tian@intel.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bf07896c-c2ce-6f5f-eb82-4180b60fa58e@suse.com>
Date: Fri, 27 Nov 2020 15:39:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120132440.1141-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 14:24, Paul Durrant wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2489,10 +2489,16 @@ static int cleanup_page_mappings(struct page_info *page)
>  
>          if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
>          {
> -            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
> +            unsigned int flush_flags = 0;
> +            int err;
> +
> +            err = iommu_unmap(d, _dfn(mfn), 1ul << PAGE_ORDER_4K, &flush_flags);
> +            if ( !err && !this_cpu(iommu_dont_flush_iotlb) )
> +                err = iommu_iotlb_flush(d, _dfn(mfn), 1ul << PAGE_ORDER_4K,
> +                                        flush_flags);

As was the subject of XSA-346, honoring the flag on a path
leading to the freeing of a page _before_ the delayed flush
actually happens is wrong. Luckily the first of the two patches
for that XSA arranged for you to never be able to observe the
flag set, so the check here is simply pointless. But it should
still be removed, for documentation purposes.

> @@ -3014,14 +3020,20 @@ static int _get_page_type(struct page_info *page, unsigned long type,
>          if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
>          {
>              mfn_t mfn = page_to_mfn(page);
> +            dfn_t dfn = _dfn(mfn_x(mfn));
> +            unsigned int flush_flags = 0;
>  
>              if ( (x & PGT_type_mask) == PGT_writable_page )
> -                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
> -                                        1ul << PAGE_ORDER_4K);
> +                rc = iommu_unmap(d, dfn, 1ul << PAGE_ORDER_4K, &flush_flags);
>              else
> -                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
> -                                      1ul << PAGE_ORDER_4K,
> -                                      IOMMUF_readable | IOMMUF_writable);
> +            {
> +                rc = iommu_map(d, dfn, mfn, 1ul << PAGE_ORDER_4K,
> +                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
> +            }
> +
> +            if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
> +                rc = iommu_iotlb_flush(d, dfn, 1ul << PAGE_ORDER_4K,
> +                                       flush_flags);

Along these lines here - at least the unmapping needs to be
followed by a flush before the page can assume its new role.
Yet again I don't think the flag can ever be observed true
here, first and foremost because of the is_pv_domain() in
the surrounding if(). While the check could be made
conditional upon the prior operation having been a map, I
think it's again easier to simply delete the dead check.
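The caller pattern both hunks follow — map/unmap accumulates flush_flags, and the caller issues one explicit IOTLB flush afterwards — can be sketched as below. All names and bodies are simplified stand-ins for the Xen functions, kept only detailed enough to show the flag accumulation.

```c
#include <assert.h>

#define IOMMU_FLUSHF_modified 1u

static unsigned int iotlb_flushes;   /* counts batched flushes issued */

/* Tear down the mapping but do not flush: just record, via
 * *flush_flags, that a flush is now owed by the caller. */
static int iommu_unmap(unsigned long dfn, unsigned int *flush_flags)
{
    (void)dfn;
    *flush_flags |= IOMMU_FLUSHF_modified;
    return 0;
}

/* The one explicit flush the caller must issue itself; a no-op
 * when no map/unmap recorded any work. */
static int iommu_iotlb_flush(unsigned long dfn, unsigned int flush_flags)
{
    (void)dfn;
    if ( flush_flags )
        iotlb_flushes++;
    return 0;
}

/* Caller shape from the hunks above: unmap, then flush on success. */
static int cleanup_page_mapping(unsigned long dfn)
{
    unsigned int flush_flags = 0;
    int err = iommu_unmap(dfn, &flush_flags);

    if ( !err )
        err = iommu_iotlb_flush(dfn, flush_flags);

    return err;
}
```

Jan's point is about what may happen between those two calls: once the unmap has been done, the page must not be freed or repurposed until the flush has actually been issued, so gating the flush on a "don't flush now" flag on such a path is exactly the XSA-346 trap.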

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -842,15 +842,19 @@ out:
>      if ( rc == 0 && p2m_is_hostp2m(p2m) &&
>           need_modify_vtd_table )
>      {
> -        if ( iommu_use_hap_pt(d) && !this_cpu(iommu_dont_flush_iotlb) )
> -            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
> -                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
> -                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
> -                                                    : 0));
> -        else if ( need_iommu_pt_sync(d) )
> +        unsigned int flush_flags = 0;
> +
> +        if ( need_iommu_pt_sync(d) )
>              rc = iommu_flags ?
> -                iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
> -                iommu_legacy_unmap(d, _dfn(gfn), 1ul << order);
> +                iommu_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags,
> +                          &flush_flags) :
> +                iommu_unmap(d, _dfn(gfn), 1ul << order, &flush_flags);
> +        else if ( iommu_use_hap_pt(d) )
> +            flush_flags = (iommu_flags ? IOMMU_FLUSHF_added : 0) |
> +                          (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);

Is there a particular reason you inverted the order of the
iommu_use_hap_pt() and need_iommu_pt_sync() checks here?
The common (default) case for VT-x / VT-d / EPT is going to
be shared page tables, so I think this should remain the
path getting away with just one evaluation of a conditional.

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -836,8 +836,8 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>  
>      if ( is_iommu_enabled(d) )
>      {
> -       this_cpu(iommu_dont_flush_iotlb) = 1;
> -       extra.ppage = &pages[0];
> +        this_cpu(iommu_dont_flush_iotlb) = true;
> +        extra.ppage = &pages[0];
>      }

Is the respective part of the description ("no longer
pointlessly gated on is_iommu_enabled() returning true") stale?

> @@ -368,15 +360,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
>  
>  /*
>   * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
> - * avoid unecessary iotlb_flush in the low level IOMMU code.
> - *
> - * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
> - * this operation can be really expensive. This flag will be set by the
> - * caller to notify the low level IOMMU code to avoid the iotlb flushes.
> - * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
> - * the caller.
> + * avoid unnecessary IOMMU flushing while updating the P2M.
> + * Setting the value to true will cause iommu_iotlb_flush() to return without
> + * actually performing a flush. A batch flush must therefore be done by the
> + * calling code after setting the value back to false.

I guess this too was in need of updating with the v9 changes?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:49:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:49:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39541.72466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kif3T-0001Yn-CK; Fri, 27 Nov 2020 14:48:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39541.72466; Fri, 27 Nov 2020 14:48:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kif3T-0001Yg-90; Fri, 27 Nov 2020 14:48:59 +0000
Received: by outflank-mailman (input) for mailman id 39541;
 Fri, 27 Nov 2020 14:48:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kif3S-0001Yb-84
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:48:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dfb9086-fb0a-44bf-afcf-371c0ac5d62b;
 Fri, 27 Nov 2020 14:48:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6D4B6AC55;
 Fri, 27 Nov 2020 14:48:56 +0000 (UTC)
X-Inumbo-ID: 9dfb9086-fb0a-44bf-afcf-371c0ac5d62b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606488536; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VmazSFssDUXD5aQuFD5sQTaVVUyogC9qfmDS3M1uctE=;
	b=bQ4OT7GtxBDbAIW4Mz2FOhso3gTCZxwN1nGXOpHHrM9z58OKAtHLO15q5lL8yOoX2RVu9k
	eivE8bFphs2n0ZyQmQ8tSWiUFLslGoFC3avCEtn+xMuI9CyeqbAsG2Xfq8Y2vJeJRM53j/
	3/wlYG3PLvJC7Wbv4vxNowC7jHWjFgE=
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
 <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <29c8daf7-8af4-df16-716e-113bcc3e96a1@suse.com>
Date: Fri, 27 Nov 2020 15:48:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 15:39, Jürgen Groß wrote:
> On 27.11.20 15:23, Julien Grall wrote:
>> On 25/11/2020 10:51, Juergen Gross wrote:
>>> --- a/xen/common/event_fifo.c
>>> +++ b/xen/common/event_fifo.c
>>> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu 
>>> *v, struct evtchn *evtchn)
>>>           return;
>>>       }
>>> +    /*
>>> +     * Control block not mapped.  The guest must not unmask an
>>> +     * event until the control block is initialized, so we can
>>> +     * just drop the event.
>>> +     */
>>> +    if ( unlikely(!v->evtchn_fifo->control_block) )
>>
>> Sort of unrelated, AFAICT, v->evtchn_fifo->control_block can be set 
>> concurrently to this access.
>>
>> Thankfully, once the control block is mapped, it can't be unmapped. 
>> However, there is still a possibility that you may see half of the update.
>>
>> Shouldn't the field access with ACCESS_ONCE()?
> 
> Shouldn't this be another patch? Especially as the writing side needs
> the same treatment.

Indeed. As said on several different occasions - our code base is
full of places where we chance torn accesses, if there really was
a compiler to let us down on this. This recurring pattern
shouldn't lead to unrelated patches getting bloated, unless _all_
affected sites get touched anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:50:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39547.72477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kif4a-0002I8-MO; Fri, 27 Nov 2020 14:50:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39547.72477; Fri, 27 Nov 2020 14:50:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kif4a-0002I1-JC; Fri, 27 Nov 2020 14:50:08 +0000
Received: by outflank-mailman (input) for mailman id 39547;
 Fri, 27 Nov 2020 14:50:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kif4Y-0002Fx-Qg
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:50:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kif4W-0006bk-SM; Fri, 27 Nov 2020 14:50:04 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kif4W-0001yE-JB; Fri, 27 Nov 2020 14:50:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1dqHCtmdCNnKM9kTCemXbGIcMo5dsKeHoevkmqeCBGw=; b=2MiO2smzA07FyZ751/fc86Mcch
	rjqKqT7mptY3l3kKepT7qVBj+WdSoJ3Fax92ECGHEUoyFZzPau7D2P9EtR8nbQG+f5ID19tKc8t3Z
	0JzgRVrq6zv5ITPF0b5eEI9/ES0NSA8+9GUEFQFWOAOJkkjIyPjwAfTTDCBkwLVG/OHI=;
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
 <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2fc7bb0f-c658-6890-f8e1-58cb885f19f2@xen.org>
Date: Fri, 27 Nov 2020 14:50:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 27/11/2020 14:39, Jürgen Groß wrote:
> On 27.11.20 15:23, Julien Grall wrote:
>>
>>
>> On 25/11/2020 10:51, Juergen Gross wrote:
>>> evtchn_fifo_set_pending() can be simplified a little bit.
>>
>> The commit message is quite light... For posterity, it would be good 
>> to explain why the simplification can be done. In particular, there is a 
>> change in behaviour after this patch.
>>
>>> Suggested-by: Jan Beulich <jbeulich@suse.com>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V8:
>>> - new patch
>>> ---
>>>   xen/common/event_fifo.c | 34 +++++++++++++++-------------------
>>>   1 file changed, 15 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
>>> index 443593c3b3..77609539b1 100644
>>> --- a/xen/common/event_fifo.c
>>> +++ b/xen/common/event_fifo.c
>>> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu 
>>> *v, struct evtchn *evtchn)
>>>           return;
>>>       }
>>> +    /*
>>> +     * Control block not mapped.  The guest must not unmask an
>>> +     * event until the control block is initialized, so we can
>>> +     * just drop the event.
>>> +     */
>>> +    if ( unlikely(!v->evtchn_fifo->control_block) )
>>
>> Sort of unrelated, AFAICT, v->evtchn_fifo->control_block can be set 
>> concurrently to this access.
>>
>> Thankfully, once the control block is mapped, it can't be unmapped. 
>> However, there is still a possibility that you may see half of the 
>> update.
>>
>> Shouldn't the field access with ACCESS_ONCE()?
> 
> Shouldn't this be another patch? Especially as the writing side needs
> the same treatment.

Yes, it should. Sorry, I should have been clearer.

I am happy to also write the patch if you feel you have had enough of 
event channels :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 14:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 14:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39555.72490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kif6w-0002Vw-3e; Fri, 27 Nov 2020 14:52:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39555.72490; Fri, 27 Nov 2020 14:52:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kif6w-0002Vp-0Y; Fri, 27 Nov 2020 14:52:34 +0000
Received: by outflank-mailman (input) for mailman id 39555;
 Fri, 27 Nov 2020 14:52:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0dHZ=FB=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kif6u-0002Vk-Oh
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 14:52:32 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9093f5f-f704-4be1-a771-d5704ed7c4df;
 Fri, 27 Nov 2020 14:52:31 +0000 (UTC)
X-Inumbo-ID: f9093f5f-f704-4be1-a771-d5704ed7c4df
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606488751;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=jJUcZXbN3G+d9ShSTyFz3C85sAOZLC2pSugnQvqiPxY=;
  b=d7kPj7DzqNLzRrDionuwuFiLAs/bLbQS7X20Nz6WTw79PJVOdjH16PAh
   zT3Aay16Lz52augxSzo6aNTjUpMVVtnuIORpCC9D/ez5VRPrpnLXE30dN
   pUo8LS0prqnMytDdI7zGBRXWZQnvGaHTb0m8p07I/+Ay9+q9tiSKq7M+0
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: F81GW4oBIsFnf3DpXILoVgoHENqX4BfBxXBaBnkRhAWH2+0zZ2j/guKA7KooSvnQBEafrTkZ6M
 /oDh/0MtzQL+Oo/kRAgHJctdgjnO7t/IAK5MClGk/Fy5K7LKR/Nl/KgNrDQQTo+g+mlRuwJG/Q
 syHLyYIFRvxFxwKnMBuarl5+WmLWeGYUn1xa1Ucupfe+7PCki/nrGDZZY6ii6ABS+LZCuWPcXk
 1qKpPJ7dPIU7mEFeeOARLuQpowzPwkC6/nkvmQvalyskPkB4lGAc+D0dXvMcOeBlzu0FcYT8nb
 K0U=
X-SBRS: None
X-MesageID: 33214504
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,374,1599537600"; 
   d="scan'208";a="33214504"
Date: Fri, 27 Nov 2020 14:52:27 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Eduardo Habkost <ehabkost@redhat.com>
CC: <qemu-devel@nongnu.org>, Gerd Hoffmann <kraxel@redhat.com>, Thomas Huth
	<thuth@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
	<xen-devel@lists.xenproject.org>, Richard Henderson
	<richard.henderson@linaro.org>, Claudio Fontana <cfontana@suse.de>, Roman
 Bolshakov <r.bolshakov@yadro.com>
Subject: Re: [PATCH v2 4/6] xen: Delete xen_available() function
Message-ID: <20201127145227.GD2098@perard.uk.xensource.com>
References: <20201125205636.3305257-1-ehabkost@redhat.com>
 <20201125205636.3305257-5-ehabkost@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201125205636.3305257-5-ehabkost@redhat.com>

On Wed, Nov 25, 2020 at 03:56:34PM -0500, Eduardo Habkost wrote:
> The function can be replaced with accel_available("xen").
> 
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 15:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 15:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39573.72510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifPT-0004W9-Rr; Fri, 27 Nov 2020 15:11:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39573.72510; Fri, 27 Nov 2020 15:11:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifPT-0004W2-OU; Fri, 27 Nov 2020 15:11:43 +0000
Received: by outflank-mailman (input) for mailman id 39573;
 Fri, 27 Nov 2020 15:11:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kifPS-0004Vx-Um
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 15:11:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d085c82-2c1d-44ac-a00c-60b34cdc29ff;
 Fri, 27 Nov 2020 15:11:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D9806AC2D;
 Fri, 27 Nov 2020 15:11:40 +0000 (UTC)
X-Inumbo-ID: 4d085c82-2c1d-44ac-a00c-60b34cdc29ff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606489901; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nKwWJdqq6fY6+MN6akTmVY1AxQwgPhj6tjhLCt6Z5NU=;
	b=csr8vpBsYRNhd5pvRISq+9iALBsnBhAbqCAdJaJPUqIuXybGXfGevXMEqJL8dU7TQSZD/7
	hiuDkCoR8N1222vD37ftzmMmtMYDbdJILHUNFtXmtugirlp9zfxPkyVUk5VqNee52y02Pw
	t/Mbci1EZGM9IJJdh+172aaVuRS+sb4=
Subject: Re: [PATCH v10 5/7] vtd: use a bit field for root_entry
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-6-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c53f148f-0d3d-764d-7a50-f85ec5e30737@suse.com>
Date: Fri, 27 Nov 2020 16:11:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120132440.1141-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 14:24, Paul Durrant wrote:
> @@ -85,25 +85,28 @@ static bool device_in_domain(const struct vtd_iommu *iommu,
>          return false;
>      }
>  
> -    root_entry = map_vtd_domain_page(iommu->root_maddr);
> -    if ( !root_present(root_entry[pdev->bus]) )
> +    root_entries = (struct root_entry *)map_vtd_domain_page(iommu->root_maddr);

Why the cast, the more that ...

> +    root_entry = &root_entries[pdev->bus];
> +    if ( !root_entry->p )
>          goto out;
>  
> -    ctxt_entry = map_vtd_domain_page(root_entry[pdev->bus].val);
> -    if ( context_domain_id(ctxt_entry[pdev->devfn]) != did )
> +    context_entries = map_vtd_domain_page(root_entry->ctp);

... you have none here? With this dropped
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 15:17:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 15:17:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39582.72521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifVM-0004j3-Go; Fri, 27 Nov 2020 15:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39582.72521; Fri, 27 Nov 2020 15:17:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifVM-0004iw-Dy; Fri, 27 Nov 2020 15:17:48 +0000
Received: by outflank-mailman (input) for mailman id 39582;
 Fri, 27 Nov 2020 15:17:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kifVK-0004ir-MB
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 15:17:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kifVJ-0007EG-TO; Fri, 27 Nov 2020 15:17:45 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kifVJ-0003mi-Kv; Fri, 27 Nov 2020 15:17:45 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=OXmodtBH3zSpvwdmcbuG2U3A3o3pRJ/OPBOaK6sKTAQ=; b=1gazHFRuAwqxbjSO1OVtTe15Ao
	4agvzu1uE7h7hlvqITciZeJku3R12fE/WjCDZ7OF8FcqfrhpQGbnMhYqcgiatScekXQ4Z0lcjHVg8
	1eFwWOhLYI39gm7qsglgK76mrpBJuRd0yBhOILbJlXG6srztRidCtIyo2pwyUX4KH81c=;
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
 <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
 <29c8daf7-8af4-df16-716e-113bcc3e96a1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7e4f42a5-4ab6-8aac-c8d9-95403c90dc4b@xen.org>
Date: Fri, 27 Nov 2020 15:17:43 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <29c8daf7-8af4-df16-716e-113bcc3e96a1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 27/11/2020 14:48, Jan Beulich wrote:
> On 27.11.2020 15:39, Jürgen Groß wrote:
>> On 27.11.20 15:23, Julien Grall wrote:
>>> On 25/11/2020 10:51, Juergen Gross wrote:
>>>> --- a/xen/common/event_fifo.c
>>>> +++ b/xen/common/event_fifo.c
>>>> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu
>>>> *v, struct evtchn *evtchn)
>>>>            return;
>>>>        }
>>>> +    /*
>>>> +     * Control block not mapped.  The guest must not unmask an
>>>> +     * event until the control block is initialized, so we can
>>>> +     * just drop the event.
>>>> +     */
>>>> +    if ( unlikely(!v->evtchn_fifo->control_block) )
>>>
>>> Sort of unrelated, AFAICT, v->evtchn_fifo->control_block can be set
>>> concurrently to this access.
>>>
>>> Thankfully, once the control block is mapped, it can't be unmapped.
>>> However, there is still a possibility that you may see half of the update.
>>>
>>> Shouldn't the field access with ACCESS_ONCE()?
>>
>> Shouldn't this be another patch? Especially as the writing side needs
>> the same treatment.
> 
> Indeed. As said on several different occasions - our code base is
> full of places where we chance torn accesses, if there really was
> a compiler to let us down on this.

I am quite amazed that you managed to test all the versions of 
GCC/Clang ever built and confirm this is unlikely to happen :).

> This recurring pattern
> shouldn't lead to unrelated patches getting bloated, unless _all_
> affected sites get touched anyway.

You probably missed the point where I said "sort of unrelated". This 
wasn't a suggestion to fix it here (I should have been clearer 
though) but instead to point out issues as I see them.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 15:32:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 15:32:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39594.72534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifjL-0006ZP-QQ; Fri, 27 Nov 2020 15:32:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39594.72534; Fri, 27 Nov 2020 15:32:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifjL-0006ZI-Ms; Fri, 27 Nov 2020 15:32:15 +0000
Received: by outflank-mailman (input) for mailman id 39594;
 Fri, 27 Nov 2020 15:32:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kifjK-0006ZD-M5
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 15:32:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2b5f24b-4ad1-4926-97a9-0077bf2c25db;
 Fri, 27 Nov 2020 15:32:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 420DFAC2D;
 Fri, 27 Nov 2020 15:32:12 +0000 (UTC)
X-Inumbo-ID: e2b5f24b-4ad1-4926-97a9-0077bf2c25db
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606491132; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BHRaQqw4nqU2MYT8jElsl7KcUyxKLYy5JKqi1BajHxc=;
	b=TnvJUE/E8oK/0PeSyEtdgKxM9tmon2n89cPHmANfMXQMA9eVsNjHySuLAwKxMNscHxgT2k
	3Zd+kmFrzEB53US7MNoZA0O0fAWCNwU07H17PbQ1i2w5QUM/Yl+cI4xPj0B3ILBY7enRvT
	DDTjHWf3s/UTlFYaaa216v1McdQ6MXY=
Subject: Re: [PATCH v10 6/7] vtd: use a bit field for context_entry
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-7-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5aab9cb2-b5f2-97a3-2433-6301b3ae7c54@suse.com>
Date: Fri, 27 Nov 2020 16:32:13 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120132440.1141-7-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 14:24, Paul Durrant wrote:
> @@ -121,21 +119,22 @@ static int context_set_domain_id(struct context_entry *context,
>      }
>  
>      set_bit(i, iommu->domid_bitmap);
> -    context->hi |= (i & ((1 << DID_FIELD_WIDTH) - 1)) << DID_HIGH_OFFSET;
> +    context->did = i;
> +
>      return 0;
>  }
>  
>  static int context_get_domain_id(struct context_entry *context,
>                                   struct vtd_iommu *iommu)
>  {
> -    unsigned long dom_index, nr_dom;
>      int domid = -1;
>  
>      if (iommu && context)
>      {
> -        nr_dom = cap_ndoms(iommu->cap);
> +        unsigned long dom_index, nr_dom;

unsigned int will do here.

> -        dom_index = context_domain_id(*context);
> +        nr_dom = cap_ndoms(iommu->cap);
> +        dom_index = context->did;

These could also become the initializers of the variables now.

> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -198,37 +198,34 @@ struct root_entry {
>          };
>      };
>  };
> +#define ROOT_ENTRY_NR (PAGE_SIZE_4K / sizeof(struct root_entry))
>  
>  struct context_entry {
> -    u64 lo;
> -    u64 hi;
> -};
> -#define ROOT_ENTRY_NR (PAGE_SIZE_4K/sizeof(struct root_entry))
> -#define context_present(c) ((c).lo & 1)
> -#define context_fault_disable(c) (((c).lo >> 1) & 1)
> -#define context_translation_type(c) (((c).lo >> 2) & 3)
> -#define context_address_root(c) ((c).lo & PAGE_MASK_4K)
> -#define context_address_width(c) ((c).hi &  7)
> -#define context_domain_id(c) (((c).hi >> 8) & ((1 << 16) - 1))
> +    union {
> +        __uint128_t val;
> +        struct { uint64_t lo, hi; };
> +        struct {
> +            /* 0 - 63 */
> +            bool p:1;
> +            bool fpd:1;
> +            uint64_t tt:2;

unsigned int

With these taken care of
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 15:36:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 15:36:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39602.72546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifnu-0006m0-Gz; Fri, 27 Nov 2020 15:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39602.72546; Fri, 27 Nov 2020 15:36:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kifnu-0006lt-E2; Fri, 27 Nov 2020 15:36:58 +0000
Received: by outflank-mailman (input) for mailman id 39602;
 Fri, 27 Nov 2020 15:36:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kifnt-0006lo-TO
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 15:36:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9943034-007d-4868-b3b2-a216d844de95;
 Fri, 27 Nov 2020 15:36:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13F1FAC2D;
 Fri, 27 Nov 2020 15:36:56 +0000 (UTC)

X-Inumbo-ID: a9943034-007d-4868-b3b2-a216d844de95
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606491416; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yqCTdveuLnQeOaDXC6rinMP4mldrQaHAOhubivwehf8=;
	b=rXv2Th4+nR8dQizFX3ytSjb7V6be1qh9f3nb72l2JkDjiLVDjM+J5xWX3ORRP2d9MqhmQV
	Eb41dvH6qlqICs76QmsbYQU5+ZZdwv9ViiDyuecYF2ga7ywcf44dAr0vFw2QAHi288Z0jL
	5Ns0pph3VArvqDAItNv9MTe2zio4LoA=
Subject: Re: [PATCH v8 3/3] xen/events: do some cleanups in
 evtchn_fifo_set_pending()
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>
References: <20201125105122.3650-1-jgross@suse.com>
 <20201125105122.3650-4-jgross@suse.com>
 <0ab6f8b5-1a9a-845e-3935-a660e5c7fc16@xen.org>
 <a11fb0fc-9a2e-8f9a-5fd3-356c0e0a0f60@suse.com>
 <29c8daf7-8af4-df16-716e-113bcc3e96a1@suse.com>
 <7e4f42a5-4ab6-8aac-c8d9-95403c90dc4b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4de58d09-dbcb-fcb8-0761-fc464428a7c3@suse.com>
Date: Fri, 27 Nov 2020 16:36:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7e4f42a5-4ab6-8aac-c8d9-95403c90dc4b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.11.2020 16:17, Julien Grall wrote:
> 
> 
> On 27/11/2020 14:48, Jan Beulich wrote:
>> On 27.11.2020 15:39, Jürgen Groß wrote:
>>> On 27.11.20 15:23, Julien Grall wrote:
>>>> On 25/11/2020 10:51, Juergen Gross wrote:
>>>>> --- a/xen/common/event_fifo.c
>>>>> +++ b/xen/common/event_fifo.c
>>>>> @@ -175,6 +175,18 @@ static void evtchn_fifo_set_pending(struct vcpu
>>>>> *v, struct evtchn *evtchn)
>>>>>            return;
>>>>>        }
>>>>> +    /*
>>>>> +     * Control block not mapped.  The guest must not unmask an
>>>>> +     * event until the control block is initialized, so we can
>>>>> +     * just drop the event.
>>>>> +     */
>>>>> +    if ( unlikely(!v->evtchn_fifo->control_block) )
>>>>
>>>> Sort of unrelated, AFAICT, v->evtchn_fifo->control_block can be set
>>>> concurrently to this access.
>>>>
>>>> Thankfully, once the control block is mapped, it can't be unmapped.
>>>> However, there is still a possibility that you may see half of the update.
>>>>
>>>> Shouldn't the field be accessed with ACCESS_ONCE()?
>>>
>>> Shouldn't this be another patch? Especially as the writing side needs
>>> the same treatment.
>>
>> Indeed. As said on several different occasions - our code base is
>> full of places where we'd chance torn accesses, if there really were
>> a compiler to let us down on this.
> 
> I am quite amazed that you managed to test all the versions of
> GCC/Clang that were built and confirm this is unlikely to happen :).

It's (obviously) not that I tested all of them, but that I know
at least a little bit of how gcc generates code, that I'm unaware
of reports to the contrary, and that it would seem odd for a
compiler to split accesses when they can be done by one insn. Of
course one could build a compiler doing only byte accesses ...

>> This recurring pattern
>> shouldn't lead to unrelated patches getting bloated, unless _all_
>> affected sites get touched anyway.
> 
> You probably missed the point where I say "sort of unrelated". This
> wasn't a suggestion to fix it here (I should have been clearer
> though) but rather to point out issues as I see them.

Point taken.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:02:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39614.72557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigCR-0001eW-LK; Fri, 27 Nov 2020 16:02:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39614.72557; Fri, 27 Nov 2020 16:02:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigCR-0001eP-IG; Fri, 27 Nov 2020 16:02:19 +0000
Received: by outflank-mailman (input) for mailman id 39614;
 Fri, 27 Nov 2020 16:02:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kigCQ-0001eK-2Y
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:02:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab2b01bc-604a-4af7-97c8-a92a590696a8;
 Fri, 27 Nov 2020 16:02:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C230CAC2D;
 Fri, 27 Nov 2020 16:02:15 +0000 (UTC)
X-Inumbo-ID: ab2b01bc-604a-4af7-97c8-a92a590696a8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606492935; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NgfoAm+mF4Fjgn0c434j7wnv4MDSiRf0c2pqPYGz+fk=;
	b=tCK8Xcd4QZVmFex0+IWp0UgsWwwdhgT5toB3QMvcareGeoggQr5ZAYwpzJhC1yrCLVeCko
	e2ravUv8RG587zJKI8i15VjkOHRKRAulwBDsq21lXo++KBElPKu1gg/dzOl1F+ZSUAH2O+
	CJkKT5mD7GIdP2IcVEXlGfRJ1ItBQTg=
Subject: Re: [PATCH v10 7/7] vtd: use a bit field for dma_pte
To: Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Cc: Paul Durrant <pdurrant@amazon.com>, xen-devel@lists.xenproject.org
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-8-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <24774b4e-3ae8-2941-24ee-722acea69657@suse.com>
Date: Fri, 27 Nov 2020 17:02:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120132440.1141-8-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 14:24, Paul Durrant wrote:
> @@ -709,20 +709,23 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
>      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
>      pte = page + address_level_offset(addr, 1);
>  
> -    if ( !dma_pte_present(*pte) )
> +    if ( !pte->r && !pte->w )

I think dma_pte_present() wants to stay, so we would have to touch
only one place when adding support for x.

>      {
>          spin_unlock(&hd->arch.mapping_lock);
>          unmap_vtd_domain_page(page);
>          return;
>      }
>  
> -    dma_clear_pte(*pte);
> -    *flush_flags |= IOMMU_FLUSHF_modified;
> +    pte->r = pte->w = false;
> +    smp_wmb();
> +    pte->val = 0;
>  
>      spin_unlock(&hd->arch.mapping_lock);
>      iommu_sync_cache(pte, sizeof(struct dma_pte));

Just as an observation - in an earlier patch I think there was a
code sequence having these the other way around. I think we want
to settle on one way of doing this (flush then unlock, or unlock
then flush). Kevin?

> @@ -1775,15 +1778,12 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
>      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
>      pte = &page[dfn_x(dfn) & LEVEL_MASK];
>      old = *pte;
> -
> -    dma_set_pte_addr(new, mfn_to_maddr(mfn));
> -    dma_set_pte_prot(new,
> -                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> -                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> -
> -    /* Set the SNP on leaf page table if Snoop Control available */
> -    if ( iommu_snoop )
> -        dma_set_pte_snp(new);
> +    new = (struct dma_pte){
> +        .r = flags & IOMMUF_readable,
> +        .w = flags & IOMMUF_writable,
> +        .snp = iommu_snoop,
> +        .addr = mfn_x(mfn),
> +    };

We still haven't settled on a newer gcc baseline, so this kind of
initializer is still not allowed (as in: will break the build) for
struct-s with unnamed sub-struct-s / sub-union-s.

> @@ -2611,18 +2611,18 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
>              process_pending_softirqs();
>  
>          pte = &pt_vaddr[i];
> -        if ( !dma_pte_present(*pte) )
> +        if ( !pte->r && !pte->w )
>              continue;
>  
>          address = gpa + offset_level_address(i, level);
>          if ( next_level >= 1 ) 
> -            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
> +            vtd_dump_page_table_level(pfn_to_paddr(pte->addr), next_level,
>                                        address, indent + 1);
>          else
>              printk("%*sdfn: %08lx mfn: %08lx\n",
>                     indent, "",
>                     (unsigned long)(address >> PAGE_SHIFT_4K),
> -                   (unsigned long)(dma_pte_addr(*pte) >> PAGE_SHIFT_4K));
> +                   (unsigned long)(pte->addr));

Could you also drop the no longer needed pair of parentheses? I
further suspect the cast isn't needed (anymore?). (Otoh I think
I recall oddities with gcc's printf()-style format checking and
direct passing of bitfields. But if that's a problem, I think
one of the earlier ones already introduced such an issue. So
perhaps we can wait until someone actually confirms there is an
issue - quite likely this someone would be me anyway.)

> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -244,38 +244,21 @@ struct context_entry {
>  #define level_size(l) (1 << level_to_offset_bits(l))
>  #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
>  
> -/*
> - * 0: readable
> - * 1: writable
> - * 2-6: reserved
> - * 7: super page
> - * 8-11: available
> - * 12-63: Host physcial address
> - */
>  struct dma_pte {
> -    u64 val;
> +    union {
> +        uint64_t val;
> +        struct {
> +            bool r:1;
> +            bool w:1;
> +            unsigned int reserved0:1;
> +            unsigned int ignored0:4;
> +            bool ps:1;
> +            unsigned int ignored1:3;
> +            bool snp:1;
> +            uint64_t addr:52;

As per the doc I'm looking at, this extends only to bit 51 at most.
Above are 11 ignored bits and (in leaf entries) the TM one.

Considering the differences between leaf and intermediate
entries, perhaps leaf-only fields could gain a brief comment?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:07:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:07:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39623.72569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigHI-0001r7-8v; Fri, 27 Nov 2020 16:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39623.72569; Fri, 27 Nov 2020 16:07:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigHI-0001qz-5l; Fri, 27 Nov 2020 16:07:20 +0000
Received: by outflank-mailman (input) for mailman id 39623;
 Fri, 27 Nov 2020 16:07:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3FF1=FB=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kigHG-0001qs-Bp
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:07:18 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 85f75f21-a658-4529-8fb8-386dfb0ae8ae;
 Fri, 27 Nov 2020 16:07:16 +0000 (UTC)
Received: from mail-qv1-f69.google.com (mail-qv1-f69.google.com
 [209.85.219.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-101-BwW6imh3OGuofZcwe0icgA-1; Fri, 27 Nov 2020 11:07:14 -0500
Received: by mail-qv1-f69.google.com with SMTP id v8so3324981qvq.12
 for <xen-devel@lists.xenproject.org>; Fri, 27 Nov 2020 08:07:14 -0800 (PST)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id u22sm6252620qkk.51.2020.11.27.08.07.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 27 Nov 2020 08:07:13 -0800 (PST)
X-Inumbo-ID: 85f75f21-a658-4529-8fb8-386dfb0ae8ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606493236;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:content-type:content-type;
	bh=TsMD7b3MnWExoEfBuHmM8Gy0J5MPvPWD/mFWcwpBdWg=;
	b=CkShzpfcPmuV2PZI7PvezdHXAMKEvddi22lMdvyp1ZhtJquNWD2zYm8aTUx/BlFL71/m6E
	B+6HLg9ZB7JPw3nVq47ohzpnti5TFcj3FbHNE4//1PInz2VnkCXBnHArZsz8A8tl1BzTFj
	qNBkv4OdnpDJ8B6B6Sl625k5fN7X0Rg=
X-MC-Unique: BwW6imh3OGuofZcwe0icgA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=TsMD7b3MnWExoEfBuHmM8Gy0J5MPvPWD/mFWcwpBdWg=;
        b=F/7np+AnStIztfaA1dPYzlsCEqqYmlhIUnjfOIhaYfPWjecyZjGg2weN7R2h8kV4PC
         iq4Q9rR7hLny54mhHNClRNFZAYnvKjEjWb3ycBZBFUHxhgIGU0hYQ6fTonzN7BHQLdxm
         OVU5LQbhwNIXjI0eXleDfzDUfSkPXS+9W72+l9JOR7EtcesUxTeU3igNuaPB/Ii2GkHK
         d7SAoyhcqG/LNis883TMnNwf4UqcrXL8paQmXgcNUKyijyUvR0JVZHbDamoXn7uP02B4
         bgpFjKm7wSfl1teQzSuiL4PBYMc8Q6n2gsDy3c9YHBpbmAS3Z/t+G1WiC4LZbTn+2Knu
         Y37g==
X-Gm-Message-State: AOAM533nULVoQHKxtUtjdCHikgLPlBZKEk/xvG/+621p02GfwVdU73PI
	g49jMY0+pSLxzcBNHE/2I/byd5wh0Mt2McaX+S8OerMfE+4PVqMZvZi1oZ6TQ+SadyVRopJ6nea
	ADh3Lsk3UqX27/gK7JYBjCt/XbCU=
X-Received: by 2002:ac8:5059:: with SMTP id h25mr8937155qtm.283.1606493234320;
        Fri, 27 Nov 2020 08:07:14 -0800 (PST)
X-Google-Smtp-Source: ABdhPJy+zzViQ6Rqk2DxgBkqfwyhjs9VcRi7Mngdgz9bSSxFH/DfJqKgsYTy1Dp2gQ9sUoRbMvjYaw==
X-Received: by 2002:ac8:5059:: with SMTP id h25mr8937129qtm.283.1606493234096;
        Fri, 27 Nov 2020 08:07:14 -0800 (PST)
From: trix@redhat.com
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	hpa@zytor.com
Cc: x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Tom Rix <trix@redhat.com>
Subject: [PATCH] xen: remove trailing semicolon in macro definition
Date: Fri, 27 Nov 2020 08:07:07 -0800
Message-Id: <20201127160707.2622061-1-trix@redhat.com>
X-Mailer: git-send-email 2.18.4
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="US-ASCII"

From: Tom Rix <trix@redhat.com>

The macro use will already have a semicolon.

Signed-off-by: Tom Rix <trix@redhat.com>
---
 arch/x86/include/asm/xen/page.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 5941e18edd5a..1a162e559753 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -355,7 +355,7 @@ unsigned long arbitrary_virt_to_mfn(void *vaddr);
 void make_lowmem_page_readonly(void *vaddr);
 void make_lowmem_page_readwrite(void *vaddr);
 
-#define xen_remap(cookie, size) ioremap((cookie), (size));
+#define xen_remap(cookie, size) ioremap((cookie), (size))
 #define xen_unmap(cookie) iounmap((cookie))
 
 static inline bool xen_arch_need_swiotlb(struct device *dev,
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:21:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:21:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39633.72582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigV4-0003wv-DL; Fri, 27 Nov 2020 16:21:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39633.72582; Fri, 27 Nov 2020 16:21:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigV4-0003wo-AK; Fri, 27 Nov 2020 16:21:34 +0000
Received: by outflank-mailman (input) for mailman id 39633;
 Fri, 27 Nov 2020 16:21:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kigV3-0003wj-8j
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:21:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b8ffb12-8211-4ad1-ae41-8f6f41112705;
 Fri, 27 Nov 2020 16:21:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 999BBAC2D;
 Fri, 27 Nov 2020 16:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b8ffb12-8211-4ad1-ae41-8f6f41112705
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606494091; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4vKq1RsWnINe57Es3dEOBPFU5E6xXZbeF2qCahwtMek=;
	b=uyRnsBBBivbcLcWY8/KCfrliSH/Q/ouyMQm89y/C1Mh6dMLxBkqE5up20XZUbbqQpzTy1j
	bJRMQW+OVWIGbVVPFH6Sd1IXgD313IDA8/iLYKITXEJpAjhFJTtBKRhR7s1IuxacQu9pMx
	+VRInl+D+4ize4X7DfP7QFfA6Id2Exw=
Subject: Re: [PATCH v10 0/7] IOMMU cleanup
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201120132440.1141-1-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2015b3e8-0269-d9ba-c160-eb90b6ca3a99@suse.com>
Date: Fri, 27 Nov 2020 17:21:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.11.2020 14:24, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> This is the remainder of the cleanup series deferred until XSA-346 and
> XSA-347 were publicly disclosed.
> 
> Paul Durrant (7):
>   remove remaining uses of iommu_legacy_map/unmap
>   common/grant_table: batch flush I/O TLB
>   iommu: remove the share_p2m operation
>   iommu: stop calling IOMMU page tables 'p2m tables'

Are the latter two patches dependent upon the former two, or could
they go in independently?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:37:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:37:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39642.72593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigkY-00054j-QM; Fri, 27 Nov 2020 16:37:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39642.72593; Fri, 27 Nov 2020 16:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigkY-00054c-NL; Fri, 27 Nov 2020 16:37:34 +0000
Received: by outflank-mailman (input) for mailman id 39642;
 Fri, 27 Nov 2020 16:37:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oh9M=FB=amazon.co.uk=prvs=5939a8474=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kigkW-00054W-Jm
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:37:32 +0000
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1bf6e1d7-9aa5-4140-8273-41b02bae612d;
 Fri, 27 Nov 2020 16:37:31 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-2225282c.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 27 Nov 2020 16:37:19 +0000
Received: from EX13D37EUB002.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198])
 by email-inbound-relay-2c-2225282c.us-west-2.amazon.com (Postfix) with ESMTPS
 id D1A36A1D03; Fri, 27 Nov 2020 16:32:17 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D37EUB002.ant.amazon.com (10.43.166.116) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 27 Nov 2020 16:32:16 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 27 Nov 2020 16:32:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bf6e1d7-9aa5-4140-8273-41b02bae612d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1606495052; x=1638031052;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=lsH0YtbvDGLPC3AHBCFVFlW7qogk8BVDAthYXSFA5u0=;
  b=M7IAh6I/RimXacHhilP6n1ClwNNlAHjDp+w3y/mj53FdYIP5Lgh1Hp10
   mK08V3EHijmmHP+br9ulPjbyCUry8eESHddWh8YLsOLt0QHWi7GXE8FKF
   +Hpa7MT6CobplVQFeY2kZhVvk2MdfO2RNmNJVZmc1iJqh7t824Q18A/gE
   g=;
X-IronPort-AV: E=Sophos;i="5.78,375,1599523200"; 
   d="scan'208";a="98519313"
Subject: RE: [PATCH v10 0/7] IOMMU cleanup
Thread-Topic: [PATCH v10 0/7] IOMMU cleanup
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, "Grall,
 Julien" <jgrall@amazon.co.uk>, Jun Nakajima <jun.nakajima@intel.com>, "Kevin
 Tian" <kevin.tian@intel.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Thread-Index: AQHWxNloHBe1WIt8nku2zVDuU3dFH6ncK9/Q
Date: Fri, 27 Nov 2020 16:32:15 +0000
Message-ID: <4ec675a6344e46c4a50c4fbadc324bdb@EX13D32EUC003.ant.amazon.com>
References: <20201120132440.1141-1-paul@xen.org>
 <2015b3e8-0269-d9ba-c160-eb90b6ca3a99@suse.com>
In-Reply-To: <2015b3e8-0269-d9ba-c160-eb90b6ca3a99@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.145]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 27 November 2020 16:22
> To: Paul Durrant <paul@xen.org>
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Grall, Julien
> <jgrall@amazon.co.uk>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Roger
> Pau Monné <roger.pau@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>;
> xen-devel@lists.xenproject.org
> Subject: RE: [EXTERNAL] [PATCH v10 0/7] IOMMU cleanup
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 20.11.2020 14:24, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > This is the remainder of the cleanup series deferred until XSA-346 and
> > XSA-347 were publicly disclosed.
> >
> > Paul Durrant (7):
> >   remove remaining uses of iommu_legacy_map/unmap
> >   common/grant_table: batch flush I/O TLB
> >   iommu: remove the share_p2m operation
> >   iommu: stop calling IOMMU page tables 'p2m tables'
> 
> Are the latter two patches dependent upon the former two, or could
> they go in independently?
> 

Not really. They should be able to go in ahead of the other two.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:46:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:46:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39650.72606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigsf-00065H-MD; Fri, 27 Nov 2020 16:45:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39650.72606; Fri, 27 Nov 2020 16:45:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigsf-00065A-IL; Fri, 27 Nov 2020 16:45:57 +0000
Received: by outflank-mailman (input) for mailman id 39650;
 Fri, 27 Nov 2020 16:45:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kigse-000655-Ia
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:45:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26455de0-733f-40f1-a73f-482feed4813a;
 Fri, 27 Nov 2020 16:45:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8C2C2AE9A;
 Fri, 27 Nov 2020 16:45:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26455de0-733f-40f1-a73f-482feed4813a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606495553; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=QhDNPqpolbcP9N01lFwbxNevKi4DbrqrkaefBn/SE7k=;
	b=uhOWtCRCYE3lgiUOYb1Ecvh0klVekRWIAb+BvQ2hyMpOY+tyZu4S+WTMWNPhMkdAdkTHLL
	A7gTaxAK53+QtaNw9PKQ6EAtBEoFjY9IX56zXe983tFUR9jDbt1yttrqFxv8G4GeooPAxB
	lbaQ3S3TMT8p2zFBxNtHeyqz2CnxDrw=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>
Message-ID: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
Date: Fri, 27 Nov 2020 17:45:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Containing still in flight DMA was introduced to work around certain
devices / systems hanging hard upon hitting a "not-present" IOMMU fault.
Passing through (such) devices (on such systems) is inherently insecure
(as guests could easily arrange for IOMMU faults of any kind to occur).
Defaulting to a mode where admins may not even become aware of issues
with devices can be considered undesirable. Therefore convert this mode
of operation to an optional one, not one enabled by default.

This involves resurrecting code commit ea38867831da ("x86 / iommu: set
up a scratch page in the quarantine domain") did remove, in a slightly
extended and abstracted fashion. Here, instead of reintroducing a pretty
pointless use of "goto" in domain_context_unmap(), and instead of making
the function (at least temporarily) inconsistent, take the opportunity
and replace the other similarly pointless "goto" as well.

In order to key the re-instated bypasses off of there (not) being a root
page table this further requires moving the allocate_domain_resources()
invocation from reassign_device() to amd_iommu_setup_domain_device() (or
else reassign_device() would allocate a root page table anyway); this is
benign to the second caller of the latter function.

Take the opportunity and also limit the control to builds supporting
PCI.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v4: "full" -> "scratch_page". Duplicate Kconfig help text into command
    line doc. Re-base.
v3: IOMMU_quarantine_basic -> IOMMU_quarantine_fault,
    IOMMU_quarantine_full -> IOMMU_quarantine_write_fault. Kconfig
    option (choice) to select default. Limit to HAS_PCI.
v2: Don't use true/false. Introduce QUARANTINE_SKIP() (albeit I'm not
    really convinced this is an improvement). Add comment.
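
As an illustration of the resulting three-level control, the sketch below models how the command-line sub-option strings map onto the quarantine levels. It is a standalone approximation: parse_quarantine_opt() is a hypothetical stand-in for the relevant branch of Xen's parse_iommu_param(), not code from this patch.

```c
/* Standalone sketch of the three-level quarantine control; not Xen code.
 * parse_quarantine_opt() is a hypothetical stand-in for the quarantine
 * branch of parse_iommu_param(). */
#include <assert.h>
#include <string.h>

#define IOMMU_quarantine_none        0 /* aka false */
#define IOMMU_quarantine_fault       1 /* aka true */
#define IOMMU_quarantine_write_fault 2 /* writes fault, reads redirected */

/* Map one "quarantine[=...]" sub-option token to a quarantine level,
 * or return -1 for an unrecognized token. */
static int parse_quarantine_opt(const char *s)
{
    if ( !strcmp(s, "quarantine") )
        return IOMMU_quarantine_fault;       /* boolean true (default) */
    if ( !strcmp(s, "no-quarantine") )
        return IOMMU_quarantine_none;        /* boolean false */
    if ( !strcmp(s, "quarantine=scratch-page") )
        return IOMMU_quarantine_write_fault; /* reads go to a scratch page */
    return -1;
}
```

With this mapping, plain `iommu=quarantine` keeps the existing fault-on-access behaviour, while `iommu=quarantine=scratch-page` opts into the mode where in-flight reads are silently redirected.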

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1278,7 +1278,7 @@ detection of systems known to misbehave
 > Default: `new` unless directed-EOI is supported
 
 ### iommu
-    = List of [ <bool>, verbose, debug, force, required, quarantine,
+    = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
                 sharept, intremap, intpost, crash-disable,
                 snoop, qinval, igfx, amd-iommu-perdev-intremap,
                 dom0-{passthrough,strict} ]
@@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
     will prevent Xen from booting if IOMMUs aren't discovered and enabled
     successfully.
 
-*   The `quarantine` boolean can be used to control Xen's behavior when
-    de-assigning devices from guests.  If enabled (the default), Xen always
+*   The `quarantine` option can be used to control Xen's behavior when
+    de-assigning devices from guests.
+
+    When a PCI device is assigned to an untrusted domain, it is possible
+    for that domain to program the device to DMA to an arbitrary address.
+    The IOMMU is used to protect the host from malicious DMA by making
+    sure that the device addresses can only target memory assigned to the
+    guest.  However, when the guest domain is torn down, assigning the
+    device back to the hardware domain would allow any in-flight DMA to
+    potentially target critical host data.  To avoid this, quarantining
+    should be enabled.  Quarantining can be done in two ways: In its basic
+    form, all in-flight DMA will simply be forced to encounter IOMMU
+    faults.  Since there are systems where doing so can cause host lockup,
+    an alternative form is available where writes to memory will be made
+    to fault, but reads will be directed to a dummy page.  The implication
+    here is that such reads will go unnoticed, i.e. an admin may not
+    become aware of the underlying problem.
+
+    Therefore, if this option is set to true (the default), Xen always
     quarantines such devices; they must be explicitly assigned back to Dom0
-    before they can be used there again.  If disabled, Xen will only
-    quarantine devices the toolstack hass arranged for getting quarantined.
+    before they can be used there again.  If set to "scratch-page", still
+    active DMA reads will additionally be directed to a "scratch" page.  If
+    set to false, Xen will only quarantine devices the toolstack has arranged
+    for getting quarantined.
+
+    This option is only valid on builds supporting PCI.
 
 *   The `sharept` boolean controls whether the IOMMU pagetables are shared
     with the CPU-side HAP pagetables, or allocated separately.  Sharing
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -28,3 +28,31 @@ endif
 
 config IOMMU_FORCE_PT_SHARE
 	bool
+
+choice
+	prompt "IOMMU device quarantining default behavior"
+	depends on HAS_PCI
+	default IOMMU_QUARANTINE_BASIC
+	---help---
+	  When a PCI device is assigned to an untrusted domain, it is possible
+	  for that domain to program the device to DMA to an arbitrary address.
+	  The IOMMU is used to protect the host from malicious DMA by making
+	  sure that the device addresses can only target memory assigned to the
+	  guest.  However, when the guest domain is torn down, assigning the
+	  device back to the hardware domain would allow any in-flight DMA to
+	  potentially target critical host data.  To avoid this, quarantining
+	  should be enabled.  Quarantining can be done in two ways: In its basic
+	  form, all in-flight DMA will simply be forced to encounter IOMMU
+	  faults.  Since there are systems where doing so can cause host lockup,
+	  an alternative form is available where writes to memory will be made
+	  to fault, but reads will be directed to a dummy page.  The implication
+	  here is that such reads will go unnoticed, i.e. an admin may not
+	  become aware of the underlying problem.
+
+	config IOMMU_QUARANTINE_NONE
+		bool "none"
+	config IOMMU_QUARANTINE_BASIC
+		bool "basic"
+	config IOMMU_QUARANTINE_SCRATCH_PAGE
+		bool "scratch page"
+endchoice
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -25,6 +25,9 @@
 #include "iommu.h"
 #include "../ats.h"
 
+/* dom_io is used as a sentinel for quarantined devices */
+#define QUARANTINE_SKIP(d) ((d) == dom_io && !dom_iommu(d)->arch.amd.root_table)
+
 static bool_t __read_mostly init_done;
 
 static const struct iommu_init_ops _iommu_init_ops;
@@ -81,19 +84,36 @@ int get_dma_requestor_id(uint16_t seg, u
     return req_id;
 }
 
-static void amd_iommu_setup_domain_device(
+static int __must_check allocate_domain_resources(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    spin_lock(&hd->arch.mapping_lock);
+    rc = amd_iommu_alloc_root(d);
+    spin_unlock(&hd->arch.mapping_lock);
+
+    return rc;
+}
+
+static int __must_check amd_iommu_setup_domain_device(
     struct domain *domain, struct amd_iommu *iommu,
     uint8_t devfn, struct pci_dev *pdev)
 {
     struct amd_iommu_dte *table, *dte;
     unsigned long flags;
-    int req_id, valid = 1;
+    int req_id, valid = 1, rc;
     u8 bus = pdev->bus;
-    const struct domain_iommu *hd = dom_iommu(domain);
+    struct domain_iommu *hd = dom_iommu(domain);
 
-    BUG_ON( !hd->arch.amd.root_table ||
-            !hd->arch.amd.paging_mode ||
-            !iommu->dev_table.buffer );
+    if ( QUARANTINE_SKIP(domain) )
+        return 0;
+
+    BUG_ON(!hd->arch.amd.paging_mode || !iommu->dev_table.buffer);
+
+    rc = allocate_domain_resources(domain);
+    if ( rc )
+        return rc;
 
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
         valid = 0;
@@ -151,6 +171,8 @@ static void amd_iommu_setup_domain_devic
 
         amd_iommu_flush_iotlb(devfn, pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
     }
+
+    return 0;
 }
 
 int __init acpi_ivrs_init(void)
@@ -222,18 +244,6 @@ int amd_iommu_alloc_root(struct domain *
     return 0;
 }
 
-static int __must_check allocate_domain_resources(struct domain *d)
-{
-    struct domain_iommu *hd = dom_iommu(d);
-    int rc;
-
-    spin_lock(&hd->arch.mapping_lock);
-    rc = amd_iommu_alloc_root(d);
-    spin_unlock(&hd->arch.mapping_lock);
-
-    return rc;
-}
-
 static int amd_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -283,6 +293,9 @@ static void amd_iommu_disable_domain_dev
     int req_id;
     u8 bus = pdev->bus;
 
+    if ( QUARANTINE_SKIP(domain) )
+        return;
+
     BUG_ON ( iommu->dev_table.buffer == NULL );
     req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
     table = iommu->dev_table.buffer;
@@ -349,11 +362,10 @@ static int reassign_device(struct domain
         pdev->domain = target;
     }
 
-    rc = allocate_domain_resources(target);
+    rc = amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
     if ( rc )
         return rc;
 
-    amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
     AMD_IOMMU_DEBUG("Re-assign %pp from dom%d to dom%d\n",
                     &pdev->sbdf, source->domain_id, target->domain_id);
 
@@ -451,8 +463,7 @@ static int amd_iommu_add_device(u8 devfn
         spin_unlock_irqrestore(&iommu->lock, flags);
     }
 
-    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
-    return 0;
+    return amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
 }
 
 static int amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -31,9 +31,24 @@ bool_t __initdata iommu_enable = 1;
 bool_t __read_mostly iommu_enabled;
 bool_t __read_mostly force_iommu;
 bool_t __read_mostly iommu_verbose;
-bool __read_mostly iommu_quarantine = true;
 bool_t __read_mostly iommu_crash_disable;
 
+#define IOMMU_quarantine_none        0 /* aka false */
+#define IOMMU_quarantine_fault       1 /* aka true */
+#define IOMMU_quarantine_write_fault 2
+#ifdef CONFIG_HAS_PCI
+uint8_t __read_mostly iommu_quarantine =
+# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
+    IOMMU_quarantine_none;
+# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
+    IOMMU_quarantine_fault;
+# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
+    IOMMU_quarantine_write_fault;
+# endif
+#else
+# define iommu_quarantine IOMMU_quarantine_none
+#endif /* CONFIG_HAS_PCI */
+
 static bool __hwdom_initdata iommu_hwdom_none;
 bool __hwdom_initdata iommu_hwdom_strict;
 bool __read_mostly iommu_hwdom_passthrough;
@@ -64,8 +79,12 @@ static int __init parse_iommu_param(cons
         else if ( (val = parse_boolean("force", s, ss)) >= 0 ||
                   (val = parse_boolean("required", s, ss)) >= 0 )
             force_iommu = val;
+#ifdef CONFIG_HAS_PCI
         else if ( (val = parse_boolean("quarantine", s, ss)) >= 0 )
             iommu_quarantine = val;
+        else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
+            iommu_quarantine = IOMMU_quarantine_write_fault;
+#endif
 #ifdef CONFIG_X86
         else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
             iommu_igfx = val;
@@ -425,7 +444,7 @@ static int __init iommu_quarantine_init(
     dom_io->options |= XEN_DOMCTL_CDF_iommu;
 
     rc = iommu_domain_init(dom_io, 0);
-    if ( rc )
+    if ( rc || iommu_quarantine < IOMMU_quarantine_write_fault )
         return rc;
 
     if ( !hd->platform_ops->quarantine_init )
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -42,6 +42,9 @@
 #include "vtd.h"
 #include "../ats.h"
 
+/* dom_io is used as a sentinel for quarantined devices */
+#define QUARANTINE_SKIP(d) ((d) == dom_io && !dom_iommu(d)->arch.vtd.pgd_maddr)
+
 struct mapped_rmrr {
     struct list_head list;
     u64 base, end;
@@ -1289,6 +1292,9 @@ int domain_context_mapping_one(
     int agaw, rc, ret;
     bool_t flush_dev_iotlb;
 
+    if ( QUARANTINE_SKIP(domain) )
+        return 0;
+
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
     maddr = bus_to_context_maddr(iommu, bus);
@@ -1536,6 +1542,9 @@ int domain_context_unmap_one(
     int iommu_domid, rc, ret;
     bool_t flush_dev_iotlb;
 
+    if ( QUARANTINE_SKIP(domain) )
+        return 0;
+
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
 
@@ -1597,7 +1606,7 @@ static int domain_context_unmap(struct d
 {
     struct acpi_drhd_unit *drhd;
     struct vtd_iommu *iommu;
-    int ret = 0;
+    int ret;
     u8 seg = pdev->seg, bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
     int found = 0;
 
@@ -1612,14 +1621,12 @@ static int domain_context_unmap(struct d
         if ( iommu_debug )
             printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n",
                    domain, &PCI_SBDF3(seg, bus, devfn));
-        if ( !is_hardware_domain(domain) )
-            return -EPERM;
-        goto out;
+        return is_hardware_domain(domain) ? 0 : -EPERM;
 
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
     case DEV_TYPE_LEGACY_PCI_BRIDGE:
-        goto out;
+        return 0;
 
     case DEV_TYPE_PCIe_ENDPOINT:
         if ( iommu_debug )
@@ -1661,10 +1668,12 @@ static int domain_context_unmap(struct d
     default:
         dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
                 domain, pdev->type, &PCI_SBDF3(seg, bus, devfn));
-        ret = -EINVAL;
-        goto out;
+        return -EINVAL;
     }
 
+    if ( ret || QUARANTINE_SKIP(domain) )
+        return ret;
+
     /*
      * if no other devices under the same iommu owned by this domain,
      * clear iommu in iommu_bitmap and clear domain_id in domid_bitmp
@@ -1699,8 +1708,7 @@ static int domain_context_unmap(struct d
         iommu->domid_map[iommu_domid] = 0;
     }
 
-out:
-    return ret;
+    return 0;
 }
 
 static void iommu_domain_teardown(struct domain *d)
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -53,7 +53,9 @@ static inline bool_t dfn_eq(dfn_t x, dfn
 }
 
 extern bool_t iommu_enable, iommu_enabled;
-extern bool force_iommu, iommu_quarantine, iommu_verbose;
+extern bool force_iommu, iommu_verbose;
+/* Boolean except for the specific purposes of drivers/passthrough/iommu.c. */
+extern uint8_t iommu_quarantine;
 
 #ifdef CONFIG_X86
 extern enum __packed iommu_intremap {


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:51:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:51:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39657.72618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigyG-00073n-Fh; Fri, 27 Nov 2020 16:51:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39657.72618; Fri, 27 Nov 2020 16:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kigyG-00073g-Ao; Fri, 27 Nov 2020 16:51:44 +0000
Received: by outflank-mailman (input) for mailman id 39657;
 Fri, 27 Nov 2020 16:51:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kigyE-00073a-TI
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:51:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 197bf9d3-6d7b-4e8e-b26e-1c48b0ed76a8;
 Fri, 27 Nov 2020 16:51:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6057CAD0B;
 Fri, 27 Nov 2020 16:51:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 197bf9d3-6d7b-4e8e-b26e-1c48b0ed76a8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606495901; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=cu8lOUmCNVpGaa9ENLLh2dwSXaGH+rjyXb081TNyKoY=;
	b=Q0SLFWSg3Y3k4MgVsIU8kBjP0KuuJ1K+//A4spkHfstXNI84kvs9EPb62TCd5uj89VCssg
	JYz0LPjmTmvNVOwLt76YBL9gjL51kb0ngWjo89qS57QMwbRC9g57CHEvqkreIoHwPWaLW+
	rRZBslTCDo/IJZ7dx7/cYIYII99V+F4=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86: is_pv*domain() adjustments
Message-ID: <7c040eff-2746-59e3-b657-64f5df3c9085@suse.com>
Date: Fri, 27 Nov 2020 17:51:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: correct is_pv_domain() when !CONFIG_PV
2: use is_pv_64bit_domain() to avoid double evaluate_nospec()

Jan


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:55:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39666.72630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kih1P-0007El-Tt; Fri, 27 Nov 2020 16:54:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39666.72630; Fri, 27 Nov 2020 16:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kih1P-0007Ee-Qr; Fri, 27 Nov 2020 16:54:59 +0000
Received: by outflank-mailman (input) for mailman id 39666;
 Fri, 27 Nov 2020 16:54:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kih1O-0007EZ-Nf
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:54:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67ae71ff-c17c-46e0-9c0a-54e16c53547b;
 Fri, 27 Nov 2020 16:54:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9879DAC0C;
 Fri, 27 Nov 2020 16:54:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67ae71ff-c17c-46e0-9c0a-54e16c53547b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606496096; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QxFMr9113kE5SXK+sT8NwDysjdRqA9O9J29bhcLGQX0=;
	b=BHSeDi6od3LJn5VLD5KDVnUsxqyJeDvVEB2dH/6GZAJoEwQNHhAtYGUAS4brtJi3gAD/8a
	QierUdKiMkAuthZhcaB3OQoTHdjrQYH48WlNtmlRYlFysRGHMVVrAtd9LCctWDwHJdAYzE
	2eH2Fn85+5LFS6XCvcTjsA8lVBe+TyU=
Subject: [PATCH 1/2] x86: correct is_pv_domain() when !CONFIG_PV
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7c040eff-2746-59e3-b657-64f5df3c9085@suse.com>
Message-ID: <54013074-1fc4-1047-0d00-2762fcbc9ade@suse.com>
Date: Fri, 27 Nov 2020 17:54:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7c040eff-2746-59e3-b657-64f5df3c9085@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On x86, idle and other system domains are implicitly PV. While I
couldn't spot any cases where this is actively a problem, some cases
required quite close inspection to be certain there couldn't e.g. be
some ASSERT_UNREACHABLE() that would trigger in this case. Let's be on
the safe side and make sure these always have is_pv_domain() returning
true.

For the build to still work, this requires a few adjustments elsewhere.
In particular is_pv_64bit_domain() now gains a CONFIG_PV dependency,
which means that is_pv_32bit_domain() || is_pv_64bit_domain() is no
longer guaranteed to be the same as is_pv_domain().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
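
The predicate relationship described above can be sketched with a toy model (the CFG_* flags and the bool parameter are hypothetical stand-ins for CONFIG_* and the domain's HVM flag; this is not the hypervisor's code): with PV compiled out on an x86 build, is_pv_domain() can still return true, e.g. for the idle domain, while both width-specific predicates are forced false.

```c
/* Toy model of the predicate change for a build with PV compiled out.
 * CFG_* stand in for CONFIG_*; the bool parameter stands in for the
 * domain's HVM property.  Simplified, not Xen's actual implementation. */
#include <assert.h>
#include <stdbool.h>

#define CFG_X86  1 /* x86 build */
#define CFG_PV   0 /* PV guest support compiled out */
#define CFG_PV32 0 /* hence no 32-bit PV either */

static bool is_pv_domain(bool hvm)
{
    return CFG_X86 && !hvm; /* system domains are implicitly !hvm */
}

static bool is_pv_32bit_domain(bool hvm)
{
    return CFG_PV32 && is_pv_domain(hvm);
}

static bool is_pv_64bit_domain(bool hvm)
{
    /* New CONFIG_PV dependency gained by this patch. */
    return CFG_PV && is_pv_domain(hvm) && !is_pv_32bit_domain(hvm);
}
```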

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -568,7 +568,7 @@ int __init construct_dom0(struct domain
 
     if ( is_hvm_domain(d) )
         rc = dom0_construct_pvh(d, image, image_headroom, initrd, cmdline);
-    else if ( is_pv_domain(d) )
+    else if ( is_pv_64bit_domain(d) || is_pv_32bit_domain(d) )
         rc = dom0_construct_pv(d, image, image_headroom, initrd, cmdline);
     else
         panic("Cannot construct Dom0. No guest interface available\n");
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1544,6 +1544,7 @@ arch_do_vcpu_op(
  */
 static void load_segments(struct vcpu *n)
 {
+#ifdef CONFIG_PV
     struct cpu_user_regs *uregs = &n->arch.user_regs;
     unsigned long gsb = 0, gss = 0;
     bool compat = is_pv_32bit_vcpu(n);
@@ -1709,6 +1710,7 @@ static void load_segments(struct vcpu *n
         regs->cs            = FLAT_KERNEL_CS;
         regs->rip           = pv->failsafe_callback_eip;
     }
+#endif
 }
 
 /*
@@ -1723,6 +1725,7 @@ static void load_segments(struct vcpu *n
  */
 static void save_segments(struct vcpu *v)
 {
+#ifdef CONFIG_PV
     struct cpu_user_regs *regs = &v->arch.user_regs;
 
     read_sregs(regs);
@@ -1748,6 +1751,7 @@ static void save_segments(struct vcpu *v
         else
             v->arch.pv.gs_base_user = gs_base;
     }
+#endif
 }
 
 void paravirt_ctxt_switch_from(struct vcpu *v)
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -408,13 +408,13 @@ long arch_do_domctl(
     case XEN_DOMCTL_set_address_size:
         if ( is_hvm_domain(d) )
             ret = -EOPNOTSUPP;
+        else if ( is_pv_64bit_domain(d) && domctl->u.address_size.size == 32 )
+            ret = switch_compat(d);
         else if ( is_pv_domain(d) )
         {
             if ( ((domctl->u.address_size.size == 64) && !d->arch.pv.is_32bit) ||
                  ((domctl->u.address_size.size == 32) &&  d->arch.pv.is_32bit) )
                 ret = 0;
-            else if ( domctl->u.address_size.size == 32 )
-                ret = switch_compat(d);
             else
                 ret = -EINVAL;
         }
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -985,7 +985,7 @@ static always_inline bool is_control_dom
 
 static always_inline bool is_pv_domain(const struct domain *d)
 {
-    return IS_ENABLED(CONFIG_PV) &&
+    return IS_ENABLED(CONFIG_X86) &&
         evaluate_nospec(!(d->options & XEN_DOMCTL_CDF_hvm));
 }
 
@@ -1011,7 +1011,7 @@ static always_inline bool is_pv_32bit_vc
 
 static always_inline bool is_pv_64bit_domain(const struct domain *d)
 {
-    if ( !is_pv_domain(d) )
+    if ( !IS_ENABLED(CONFIG_PV) || !is_pv_domain(d) )
         return false;
 
 #ifdef CONFIG_PV32



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 16:55:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 16:55:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39672.72641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kih29-0007La-6y; Fri, 27 Nov 2020 16:55:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39672.72641; Fri, 27 Nov 2020 16:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kih29-0007LT-43; Fri, 27 Nov 2020 16:55:45 +0000
Received: by outflank-mailman (input) for mailman id 39672;
 Fri, 27 Nov 2020 16:55:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rmeX=FB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kih27-0007LK-9U
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 16:55:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c02a6c4e-420e-4b70-a2b2-970d54c8bcd1;
 Fri, 27 Nov 2020 16:55:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82F12ABD7;
 Fri, 27 Nov 2020 16:55:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c02a6c4e-420e-4b70-a2b2-970d54c8bcd1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606496141; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pFTAN+2TWd/nxQaM+OAm142XT81UHKt2LpqSGodQXMc=;
	b=UcfXgwypDRxGVEGLjjx0jnrf7YQPiHmyutBxEnqliW17XtQkFd8gjK69pQ4/UXXbGaCG55
	OkyEr1ESZLR20ftwjDaefP6DbYDIbWtn6MMPNPaSObXYA54ocbGTVcDjcQHNZ5T8XNQkYM
	ytDbR0GGeip0C0QaVnJw33QKBw6vkPQ=
Subject: [PATCH 2/2] x86: use is_pv_64bit_domain() to avoid double
 evaluate_nospec()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7c040eff-2746-59e3-b657-64f5df3c9085@suse.com>
Message-ID: <228b19f1-f604-4641-671b-b94ad6542519@suse.com>
Date: Fri, 27 Nov 2020 17:55:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <7c040eff-2746-59e3-b657-64f5df3c9085@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Signed-off-by: Jan Beulich <jbeulich@suse.com>
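
A rough sketch of the motivation, with a counter standing in for the evaluate_nospec() speculation barrier (names are modeled on Xen's, but this is not the hypervisor's code): the split form `is_pv_domain(d) && !is_pv_32bit_domain(d)` crosses the barrier twice, while the combined is_pv_64bit_domain() crosses it once.

```c
/* Counter-based sketch of the double-barrier cost; not Xen code.  The
 * bool parameter stands in for the domain's HVM property. */
#include <assert.h>
#include <stdbool.h>

static int nospec_evals; /* counts barrier crossings */

static bool evaluate_nospec(bool cond)
{
    nospec_evals++; /* stands in for the lfence-based barrier */
    return cond;
}

static bool is_pv_domain(bool hvm)       { return evaluate_nospec(!hvm); }
static bool is_pv_32bit_domain(bool hvm) { return is_pv_domain(hvm) && false; }
static bool is_pv_64bit_domain(bool hvm) { return is_pv_domain(hvm); /* no 32-bit PV here */ }

/* Old form: two separately nospec-guarded predicates. */
static int barriers_old_form(bool hvm)
{
    nospec_evals = 0;
    (void)(is_pv_domain(hvm) && !is_pv_32bit_domain(hvm));
    return nospec_evals;
}

/* New form: one combined predicate. */
static int barriers_new_form(bool hvm)
{
    nospec_evals = 0;
    (void)is_pv_64bit_domain(hvm);
    return nospec_evals;
}
```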

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1085,7 +1085,7 @@ int arch_set_info_guest(
           * update_cr3(), sh_update_cr3(), sh_walk_guest_tables(), and
           * shadow_one_bit_disable() for why that is.
           */
-         !is_hvm_domain(d) && !is_pv_32bit_domain(d) )
+         is_pv_64bit_domain(d) )
         v->arch.flags &= ~TF_kernel_mode;
 
     vcpu_setup_fpu(v, v->arch.xsave_area,
@@ -1231,7 +1231,7 @@ int arch_set_info_guest(
          * correct initial RO_MPT_VIRT_{START,END} L4 entry).
          */
         if ( d != current->domain && !VM_ASSIST(d, m2p_strict) &&
-             is_pv_domain(d) && !is_pv_32bit_domain(d) &&
+             is_pv_64bit_domain(d) &&
              test_bit(VMASST_TYPE_m2p_strict, &c.nat->vm_assist) &&
              atomic_read(&d->arch.pv.nr_l4_pages) )
         {
@@ -1960,8 +1960,7 @@ static void __context_switch(void)
 
 #if defined(CONFIG_PV) && defined(CONFIG_HVM)
     /* Prefetch the VMCB if we expect to use it later in the context switch */
-    if ( cpu_has_svm && is_pv_domain(nd) && !is_pv_32bit_domain(nd) &&
-         !is_idle_domain(nd) )
+    if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
         svm_load_segs_prefetch();
 #endif
 



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 17:34:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 17:34:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39688.72670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kihd1-0002n8-GI; Fri, 27 Nov 2020 17:33:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39688.72670; Fri, 27 Nov 2020 17:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kihd1-0002n1-Bo; Fri, 27 Nov 2020 17:33:51 +0000
Received: by outflank-mailman (input) for mailman id 39688;
 Fri, 27 Nov 2020 17:33:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kihcz-0002mt-SD; Fri, 27 Nov 2020 17:33:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kihcz-0002As-KL; Fri, 27 Nov 2020 17:33:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kihcz-000499-Aw; Fri, 27 Nov 2020 17:33:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kihcz-00039C-AT; Fri, 27 Nov 2020 17:33:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nkwfz9ott0Z+lU9+cNTwQNDt6/Nl4hjogXRQr/FfWzM=; b=l/JZk4pWZTo7JIDxPnoONzq8Gr
	aj3az1C1QCbiAS+FvfOPCMgHPuCjORbGfkzTXxfTb5VADY8XmgERXikUoQmj80jdORzETUJ69fVDF
	drxhnAiedN2rtDNeypthyIlJ/jgDDoVPz6G7U+WTEptXCpwg6CQh5ESMeMayaiC6SvgE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157048-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157048: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6cfdaa88cfde716ebc8538f60a9648483049edf4
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 17:33:49 +0000

flight 157048 qemu-mainline real [real]
flight 157054 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157048/
http://logs.test-lab.xenproject.org/osstest/logs/157054/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                6cfdaa88cfde716ebc8538f60a9648483049edf4
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   99 days
Failing since        152659  2020-08-21 14:07:39 Z   98 days  206 attempts
Testing same since   157048  2020-11-27 06:15:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69234 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 17:39:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 17:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39698.72684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kihiG-000363-8G; Fri, 27 Nov 2020 17:39:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39698.72684; Fri, 27 Nov 2020 17:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kihiG-00035v-5P; Fri, 27 Nov 2020 17:39:16 +0000
Received: by outflank-mailman (input) for mailman id 39698;
 Fri, 27 Nov 2020 17:39:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C3sB=FB=kernel.org=kuba@srs-us1.protection.inumbo.net>)
 id 1kihiE-00035q-RJ
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 17:39:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59ba2d25-8ba3-4c99-89a9-d1b31f2d7d23;
 Fri, 27 Nov 2020 17:39:14 +0000 (UTC)
Received: from kicinski-fedora-pc1c0hjn.DHCP.thefacebook.com (unknown
 [163.114.132.4])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 04B9621534;
 Fri, 27 Nov 2020 17:39:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59ba2d25-8ba3-4c99-89a9-d1b31f2d7d23
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606498753;
	bh=j1DgY7r0rMQwVPcBNG2W4pflvrkcsR7Nn0kZYvEb2Ss=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=0j8EzUjAKu+0z+mAQUkB3CXGNcEbFYIqENj9gKgasYED6nKNkPWzRrF0o9Zh8bFYY
	 ufUD4pf3x2fVB0Sf272JPD3mGvgRrWph+5enO8G3pD1njUJDyq7PPaKQC7JJXYshQ8
	 NXklXCJw7Tl57KnQy271JTtuX5+F9SUf5tUoe6PU=
Date: Fri, 27 Nov 2020 09:39:11 -0800
From: Jakub Kicinski <kuba@kernel.org>
To: Lee Jones <lee.jones@linaro.org>
Cc: linux-kernel@vger.kernel.org, Alexei Starovoitov <ast@kernel.org>,
 Benjamin Herrenschmidt <benh@kernel.crashing.org>, bpf@vger.kernel.org,
 Daniel Borkmann <daniel@iogearbox.net>, Dany Madden <drt@linux.ibm.com>,
 Daris A Nevil <dnevil@snmc.com>, "David S. Miller" <davem@davemloft.net>,
 Erik Stahlman <erik@vt.edu>, Geoff Levand <geoff@infradead.org>, Grygorii
 Strashko <grygorii.strashko@ti.com>, "Gustavo A. R. Silva"
 <gustavoars@kernel.org>, Ishizaki Kou <kou.ishizaki@toshiba.co.jp>, Ivan
 Khoronzhuk <ivan.khoronzhuk@linaro.org>, Jens Osterkamp
 <Jens.Osterkamp@de.ibm.com>, Jesper Dangaard Brouer <hawk@kernel.org>, John
 Allen <jallen@linux.vnet.ibm.com>, John Fastabend
 <john.fastabend@gmail.com>, Kurt Kanzenbach <kurt@linutronix.de>, Lijun Pan
 <ljp@linux.ibm.com>, linuxppc-dev@lists.ozlabs.org, Michael Ellerman
 <mpe@ellerman.id.au>, netdev@vger.kernel.org, Nicolas Pitre
 <nico@fluxnic.net>, Paul Durrant <paul@xen.org>, Paul Mackerras
 <paulus@samba.org>, Peter Cammaert <pc@denkart.be>, Russell King
 <rmk@arm.linux.org.uk>, Rusty Russell <rusty@rustcorp.com.au>, Santiago
 Leon <santi_leon@yahoo.com>, Sukadev Bhattiprolu <sukadev@linux.ibm.com>,
 Thomas Falcon <tlfalcon@linux.vnet.ibm.com>, Utz Bacher
 <utz.bacher@de.ibm.com>, Wei Liu <wei.liu@kernel.org>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 0/8] Rid W=1 warnings in Net
Message-ID: <20201127093911.05d9122a@kicinski-fedora-pc1c0hjn.DHCP.thefacebook.com>
In-Reply-To: <20201126133853.3213268-1-lee.jones@linaro.org>
References: <20201126133853.3213268-1-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Thu, 26 Nov 2020 13:38:45 +0000 Lee Jones wrote:
> Resending the stragglers.
> 
> This set is part of a larger effort attempting to clean up W=1
> kernel builds, which are currently overwhelmingly riddled with
> niggly little warnings.

This set doesn't apply to net-next; please rebase.


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 18:58:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 18:58:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39719.72703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiiwu-000291-87; Fri, 27 Nov 2020 18:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39719.72703; Fri, 27 Nov 2020 18:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiiwu-00028u-5A; Fri, 27 Nov 2020 18:58:28 +0000
Received: by outflank-mailman (input) for mailman id 39719;
 Fri, 27 Nov 2020 18:58:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiiws-00028h-RG; Fri, 27 Nov 2020 18:58:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiiws-0003ww-L0; Fri, 27 Nov 2020 18:58:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiiws-0006We-CF; Fri, 27 Nov 2020 18:58:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiiws-0005pR-Bk; Fri, 27 Nov 2020 18:58:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4AhP6P1n3xEvHpsSBVbHhAX//y03xlTQKzjsiPb0jgI=; b=KX9qhstytWw7xw/Zu9WkpXwlSk
	BxPG4gJ1THxQ6H+Jp3vQi84HJ3lfc+cAJS2h9GpyInEnrQbhgpkmx39DC5jwRjM9Wnw5RV3hJ23Ob
	6+Ze4XdlB0lGDSCsF+wvb2DAuSEp4OgSVkkHuoioUfomaz4ZpRp/ynZFlWJEmjq36tao=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157055-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157055: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=73b604bb1e13ff915c523180979f7b4db34b6d1b
X-Osstest-Versions-That:
    ovmf=872f953262d68a11da7bc2fb3ded16df234b8700
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 18:58:26 +0000

flight 157055 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157055/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 73b604bb1e13ff915c523180979f7b4db34b6d1b
baseline version:
 ovmf                 872f953262d68a11da7bc2fb3ded16df234b8700

Last test of basis   157042  2020-11-27 01:41:56 Z    0 days
Testing same since   157055  2020-11-27 17:09:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Peter Grehan <grehan@freebsd.org>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   872f953262..73b604bb1e  73b604bb1e13ff915c523180979f7b4db34b6d1b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 19:29:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 19:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39735.72723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kijRH-00053H-VC; Fri, 27 Nov 2020 19:29:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39735.72723; Fri, 27 Nov 2020 19:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kijRH-00053A-Rv; Fri, 27 Nov 2020 19:29:51 +0000
Received: by outflank-mailman (input) for mailman id 39735;
 Fri, 27 Nov 2020 19:29:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kijRG-000532-VU; Fri, 27 Nov 2020 19:29:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kijRG-0004aH-ON; Fri, 27 Nov 2020 19:29:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kijRG-0007Wj-Fa; Fri, 27 Nov 2020 19:29:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kijRG-0000no-F5; Fri, 27 Nov 2020 19:29:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zr/EqzTByvoO6RJqM3MRnwXTRzBxR7GJ57TQr+4fdEc=; b=vMUX0fyhBFVF51CMwszEb03Ioj
	8KvP236K8/yIdWIdogKwy9ehvyH59MurOxmDwfTOUarV7Q6tPh75Pvr1G2DKHDbsAzokJrkB5fVbW
	uzCALK/czYeOVKUgJm464sbNYocFRFjhrDyvPwDAW5DNo443Bsws53KOSefvkmVTGXi4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157049-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157049: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=85a2c56cb4454c73f56d3099d96942e7919b292f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 19:29:50 +0000

flight 157049 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157049/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157040 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 157040 pass in 157049
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 157040 pass in 157049
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157040
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157040

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                85a2c56cb4454c73f56d3099d96942e7919b292f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  118 days
Failing since        152366  2020-08-01 20:49:34 Z  117 days  198 attempts
Testing same since   157040  2020-11-26 20:40:12 Z    0 days    2 attempts

------------------------------------------------------------
3576 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 684608 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 20:04:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 20:04:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39744.72739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kijye-0000Is-Rn; Fri, 27 Nov 2020 20:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39744.72739; Fri, 27 Nov 2020 20:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kijye-0000Il-OP; Fri, 27 Nov 2020 20:04:20 +0000
Received: by outflank-mailman (input) for mailman id 39744;
 Fri, 27 Nov 2020 20:04:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JwIZ=FB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kijyc-0000Ig-U5
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 20:04:18 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e10e425f-6153-40ea-89d4-4bfb00952b27;
 Fri, 27 Nov 2020 20:04:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e10e425f-6153-40ea-89d4-4bfb00952b27
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606507457;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=L5JMnt0wjOTycQ8Eb7dBHfgB7Noge5RbaeQUYwGkL+w=;
  b=KoUBnDaxAhMSTtbeyFjP2jWQWYR/ccioIvfbMw6AXsgCGfoH6tLH95f9
   d3a9wgO+YE3mh1ykVOP9C2k1MZR8OCDNY47kXDAwfkd1OTl+s61pLq8F6
   1y0bqfBElSF+VAX7Zvfi5yJE1aACrXvxdyFs5o5dA8etZ15uJhQqY/Igl
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 8bPTCzUyw4wFCEpehrHObRBOg2Sqq4ds71YZOP4u9DGoE5w6ZZymN9nf8n7Om8Sm+WcsdX0CaM
 rYGa7Xl7NOgMdxKFI7s6PQ8qVdFcV/P6OPg5nhrd0oIUchXdcthaM9zG6rZYW33M6Voq2bg9zi
 3S5EqTyM9h8OgRKDk0vmdHbx3sxyV1sqqWNxGZDvdt21Fnnbs1Sqq+CmWm61Dm1XNiXeqfnVDt
 36RiYaejc7kKOC1+HGTj3e+7kjM6gY3+v8Rq+gJ+QAw50K9Kc6VaByzF3hhft4gneVI32q6m+m
 im0=
X-SBRS: None
X-MesageID: 32065761
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,375,1599537600"; 
   d="scan'208";a="32065761"
Subject: Re: [PATCH v4 0/3] xen/arm: Make PCI passthrough code non-x86
 specific
To: Rahul Singh <rahul.singh@arm.com>, <xen-devel@lists.xenproject.org>
CC: <bertrand.marquis@arm.com>, Jan Beulich <jbeulich@suse.com>, Paul Durrant
	<paul@xen.org>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1606326929.git.rahul.singh@arm.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <44df65a3-b23d-7dee-e6aa-28101b39ab21@citrix.com>
Date: Fri, 27 Nov 2020 20:04:09 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <cover.1606326929.git.rahul.singh@arm.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 25/11/2020 18:16, Rahul Singh wrote:
> This patch series is v4 of preparatory work to make PCI passthrough code
> non-x86 specific.
>
> Rahul Singh (3):
>   xen/pci: Move x86 specific code to x86 directory.
>   xen/pci: solve compilation error on ARM with HAS_PCI enabled.
>   ns16550: Gate all PCI code with CONFIG_X86

https://gitlab.com/xen-project/patchew/xen/-/pipelines/222243396

There was an ARM randconfig failure which looks relevant to the content
in this series.

~Andrew (in lieu of a real CI robot).


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 20:07:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 20:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39750.72751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kik1u-0000SF-Bh; Fri, 27 Nov 2020 20:07:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39750.72751; Fri, 27 Nov 2020 20:07:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kik1u-0000S8-7e; Fri, 27 Nov 2020 20:07:42 +0000
Received: by outflank-mailman (input) for mailman id 39750;
 Fri, 27 Nov 2020 20:07:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JwIZ=FB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kik1s-0000S3-V9
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 20:07:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 064e1b16-8d08-4c7b-b429-975f070b6184;
 Fri, 27 Nov 2020 20:07:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 064e1b16-8d08-4c7b-b429-975f070b6184
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606507659;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=C9n1oQhEsOHMBx+KbrK1gnFhFHJ6nRyCPoPSDo4N4EA=;
  b=L4AOfNVvUG1NLEGDug7gVrmO+GdRhyCzAPB31rmUa7nklzCWpxBd8P+q
   JCFLMpGNWbEXxHXJ9dLYEhtNYWm9LN9bNS237imtIMybFDtlmA2blFleo
   cPpxa9wOdqghydB0k72IctRvSJ2S7rQQM54RxpYg+8jvxwzvCEnd8szn5
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 5pjND+uApVf2uh0XkcuNaA8W0kHzL4eGZF4BJEi43HE5N2jakdToZ7BjBMrnIBdyp9tEJ0BHB+
 w5s0/xhO5WFHuAxIisv+crNAEv96NqOpy0WaaJi/pUFiG0kKiuAzHNf0vN2CN5y297WrbY7ny+
 bd6QUolDDx6LgTyEG40uYqvGWu8XyBOjVHl/GNXcx7MbUJVBBJ3boblBw60ThyhD0K9qxyGWxY
 AgQe6Ziu0G6jvNgf6hyuGSFz9OfAJo51lEBZ7K5KznxMM04BlZyIPUTDzLV14V06BDD8dn7c06
 uAg=
X-SBRS: None
X-MesageID: 32405802
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,375,1599537600"; 
   d="scan'208";a="32405802"
Subject: Re: [PATCH 0/7] xen/arm: Emulate ID registers
To: Bertrand Marquis <bertrand.marquis@arm.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <45b8aac3-75a6-670f-d6f2-b427c497ee2d@citrix.com>
Date: Fri, 27 Nov 2020 20:07:24 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <cover.1606151462.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 26/11/2020 15:51, Bertrand Marquis wrote:
> The goal of this series is to emulate the coprocessor ID registers so
> that Xen only publishes to guests those features that are supported by
> Xen and can actually be used by guests.
> One practical example where this is required is SVE support, which is
> forbidden by Xen as it is not supported; if Linux is compiled with it,
> it will crash on boot. Another is AMU, which is also forbidden by Xen,
> but a Linux kernel compiled with it would crash if the platform
> supports it.
>
> To emulate the coprocessor registers defining what features are
> supported by the hardware, the TID3 bit of HCR must be set, and Xen
> must emulate the values of those registers when an exception is caught
> on a guest access to them.
>
> This series first creates a guest cpuinfo structure containing the
> values that we want to publish to guests, and then provides the proper
> emulation for those registers when Xen takes an exception due to an
> access to any of them.
>
> This is a first, simple implementation to solve the problem. The way
> to define the values that we provide to guests, and which features are
> disabled, will be enhanced in a future patch set so that we can decide
> per guest what can be used or not, and from this deduce the bits to
> activate in HCR and the values that we must publish in the ID registers.
>
> Bertrand Marquis (7):
>   xen/arm: Add ID registers and complete cpufinfo
>   xen/arm: Add arm64 ID registers definitions
>   xen/arm: create a cpuinfo structure for guest
>   xen/arm: Add handler for ID registers on arm64
>   xen/arm: Add handler for cp15 ID registers
>   xen/arm: Add CP10 exception support to handle VMFR
>   xen/arm: Activate TID3 in HCR_EL2

CI found an ARM randconfig failure against this series.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/221798884

I have to admit that I can't spot an obvious connection, so it might be
collateral damage from elsewhere, but it does need looking at regardless.

~Andrew (in lieu of a real CI robot).


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 20:09:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 20:09:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39757.72763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kik3s-0000hR-OR; Fri, 27 Nov 2020 20:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39757.72763; Fri, 27 Nov 2020 20:09:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kik3s-0000hK-LC; Fri, 27 Nov 2020 20:09:44 +0000
Received: by outflank-mailman (input) for mailman id 39757;
 Fri, 27 Nov 2020 20:09:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JwIZ=FB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kik3r-0000hE-SA
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 20:09:43 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 33d1c58d-1310-46dc-875c-ebd9ed746a41;
 Fri, 27 Nov 2020 20:09:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33d1c58d-1310-46dc-875c-ebd9ed746a41
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606507782;
  h=subject:from:to:cc:references:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+EFxz8/rmX57fBxC+oYo5RiAUokwIeN+tlcej19CYOs=;
  b=JMS6JVkvH4NcQWFJfD6Jo18aW1RhN/unHuX2szVFBoyb1fq8SWOUrra2
   bRjGDPo5vLMQW605PYWZRWn4guiJemW+TmUrQu47lV0rFtgobyfmtiQ84
   LRhYWL4erPcmVHS5Gm5lyexPs46zwXGWlHnacu/UNQ8jR//bfE2gIjz53
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Hr1ouXEySYZQ9z7Q1ynnKpRuzi2sB2wC7r6ybx3qESg8K5zGj9PuT/aCcECRkYgDFjWg0C7buC
 df5m2AnLRvkAvCiSavOKWpGwDB7SFsx57I6OHeJBARgRvsm8/uHokGNc+7bhfaW8lUEvn/hSw8
 6aEFZ/serhdwZC7c1GvbX7/jWfNimrHwujFlS7wALU9/OGcpY4fB6rgtrtNj41slRQJc/i6WRT
 84wBVbOtzyC+0zkYyIpeTA/biimmkr0Ns5bluM6+BclbIMmiERbUIboP9Xm5EeX0ZulsOFfsJL
 7tY=
X-SBRS: None
X-MesageID: 32036501
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,375,1599537600"; 
   d="scan'208";a="32036501"
Subject: Re: [PATCH v4 0/3] xen/arm: Make PCI passthrough code non-x86
 specific
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Rahul Singh <rahul.singh@arm.com>, <xen-devel@lists.xenproject.org>
CC: <bertrand.marquis@arm.com>, Jan Beulich <jbeulich@suse.com>, Paul Durrant
	<paul@xen.org>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1606326929.git.rahul.singh@arm.com>
 <44df65a3-b23d-7dee-e6aa-28101b39ab21@citrix.com>
Message-ID: <c576d1f3-f9c8-9051-c5e3-83b704aac499@citrix.com>
Date: Fri, 27 Nov 2020 20:09:34 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <44df65a3-b23d-7dee-e6aa-28101b39ab21@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 27/11/2020 20:04, Andrew Cooper wrote:
> On 25/11/2020 18:16, Rahul Singh wrote:
>> This patch series is v4 of preparatory work to make PCI passthrough code
>> non-x86 specific.
>>
>> Rahul Singh (3):
>>   xen/pci: Move x86 specific code to x86 directory.
>>   xen/pci: solve compilation error on ARM with HAS_PCI enabled.
>>   ns16550: Gate all PCI code with CONFIG_X86
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/222243396
>
> There was an ARM randconfig failure which looks relevant to the content
> in this series.

Sorry - this randconfig failure was also seen against a second series,
so it is probably collateral damage from elsewhere.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 20:17:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 20:17:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39765.72774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kikB3-0001aj-GO; Fri, 27 Nov 2020 20:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39765.72774; Fri, 27 Nov 2020 20:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kikB3-0001ac-D5; Fri, 27 Nov 2020 20:17:09 +0000
Received: by outflank-mailman (input) for mailman id 39765;
 Fri, 27 Nov 2020 20:17:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kikB1-0001aU-SV; Fri, 27 Nov 2020 20:17:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kikB1-0005ej-Lv; Fri, 27 Nov 2020 20:17:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kikB1-0000L8-EJ; Fri, 27 Nov 2020 20:17:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kikB1-00026u-Dp; Fri, 27 Nov 2020 20:17:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ciwSJL8zHoeLzhSMYHcZkx05BVc5Xm1reJkxrBUnxxI=; b=aqPJ1xrpQjk5EL63Im+oEyg6Ic
	FwK5/If5QvNOHMhwL841+i/8FS3gWnvot3jJa7KRSXOc3e8jh9IWSUBtPwkGy/VhnYh8WWmEXPRJQ
	hkjnFfg/jZoIU+ziPGjNo+SZAZKbTUvN5sBdrqRp6+L8C8IrisvssbEvYld5vmKkxI3k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157058-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157058: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
X-Osstest-Versions-That:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 27 Nov 2020 20:17:07 +0000

flight 157058 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157058/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
baseline version:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593

Last test of basis   157009  2020-11-25 15:00:53 Z    2 days
Testing same since   157058  2020-11-27 18:01:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Paul Durrant <pdurrant@amazon.com>
  Rahul Singh <rahul.singh@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   181f2c224c..f7d7d53f64  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Nov 27 20:22:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 20:22:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39776.72790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kikGA-0002ZN-6V; Fri, 27 Nov 2020 20:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39776.72790; Fri, 27 Nov 2020 20:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kikGA-0002ZG-27; Fri, 27 Nov 2020 20:22:26 +0000
Received: by outflank-mailman (input) for mailman id 39776;
 Fri, 27 Nov 2020 20:22:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=33Tu=FB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kikG8-0002ZB-BS
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 20:22:24 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34b51b36-d659-4204-8330-ba1a31221c23;
 Fri, 27 Nov 2020 20:22:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34b51b36-d659-4204-8330-ba1a31221c23
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606508542;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=3Foo1KSIrLyoB1XoSKCJ82fTRPhExZiEUXCxQ28bG3U=;
  b=DY9LtKirTC29A2665JeYqfVYi+czabyB6zzVuc5Gqb2kDhvseQueX9K6
   rE/UmmM249Si91BIcwL3ApcB8go8wgd3vLyvdKaEgchP14SZ8TmnPt8MI
   PXwG2Y1hv4H9wutN7o8DjM1F/jp/povnG4Vb6RTRs+RsBfNyelDWDSBwu
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: T5CVIw6T6vD7s3fqOKDrSxGK2PDA/hYbm51IEug+wAqsxelalVfKHNZpbW1xubMpggl5NMlBMw
 jdJ0Q4KJcqy9fT8LBQlz6Mhq9g/jYKxapBbGHI5lIXqoJwBdByKA3lbeikrT/JVov+eP8856+l
 5sJjZki9WQ4QhWOndeaTyjcV2fzYhoiygcGhQ9AJu/WtBRRlCQGkWkZzQuyFueb69VFMZ/nebn
 z551pNtFk15rD/ZHAph7pXmgLc44rQ2D8diczWyjaKWgWCXIj1mpuIKkp6wfeySu+SKhalCyJ7
 K6s=
X-SBRS: None
X-MesageID: 32036929
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,375,1599537600"; 
   d="scan'208";a="32036929"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VHve03o28GbM2wh/R9tT5x2U7Xk8y+yXnBt3/Oon2Pm9tatXrRG1xGO6+AUkVGrUGbuGm0zLOKUi09LftONejuXcavP4Lp6WGAhjrpFTi/XWwbadt2kwKMDv6O1fVMVcWVT9OYg77X+t7YrZbGwvDQhMZVHXmXwPoM9zQfkyolj6LfEix33HWUm4ydlomicW9+Xh4Vzt2UgMgO+rL7E2K76GIkA4vqQjBgwZC1tF4f9d9oKq3LrD747ZRTdHCw8QzRXI8K/cvRDdJYiKGHrpfbkZLWFEAOqwo7eud/BLSIvMj+Z/aZVIhpjwAA3fYlSnUuN7pB1Pw8fybCsD+24l1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=heGh1O7nkzjYLGRgIUaleIyPjT/GnZ5533/e0xCS4+s=;
 b=XZI53RFeXESJZK17BtY7Kx7T3Nce8OLENngvHLXBMdFJJqMh/fgydO85MvTEUQwNw1FzGa9JankBxH8EMujfFyhHfNp10ohg2KgH9TlnQP5NhkWyftOrm7ui3gRLd4Qm00n/Xxw4Vr4wXH8BPTH2n/KhV/LriHxdnd/RmcNyUL8VEMA+JrTy0hFFJBzwHM0ntzPVzt+7U1HwKVXH6gbvuvqmw8Sf/kh5dJM6n0HIvbcU/31fPld55Y0OlhH6NUpvEReUMsj76ecxYg79Tn3/IeIm+Xb2CuX+hpnOz+ol78YStXl7qd5AhCzQ4aIie+YByTcvcCAUINKy4zgawjTBSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=heGh1O7nkzjYLGRgIUaleIyPjT/GnZ5533/e0xCS4+s=;
 b=cac3bM9m1h31bNZWS3cYmTSbkp0G7YQCAV3f6et7Z/Ej9YaLe+dPSoFEKQWYy8W7KKe73L7SKHL+VcffKcuzm3EFSyZj9j3HQDHHQ4jVPayleNvb3VKo/FWW80e10XugAotmGu1lvyjBggN9aQiCt2VmhtAqFW4RUhLQwO4KAFU=
Date: Fri, 27 Nov 2020 21:22:11 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127202211.eqrxloii5x54zode@Air-de-Roger>
References: <20201126142635.uzi643co3mxp5h42@Air-de-Roger>
 <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201127135929.GR1717@antioche.eu.org>
X-ClientProxiedBy: MR2P264CA0083.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::23) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 399cf661-393e-44c0-64cc-08d8931222a2
X-MS-TrafficTypeDiagnostic: DM6PR03MB4300:
X-Microsoft-Antispam-PRVS: <DM6PR03MB4300C974661D2841CFC6F0038FF80@DM6PR03MB4300.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 399cf661-393e-44c0-64cc-08d8931222a2
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Nov 2020 20:22:17.4041
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4300
X-OriginatorOrg: citrix.com

On Fri, Nov 27, 2020 at 02:59:29PM +0100, Manuel Bouyer wrote:
> On Fri, Nov 27, 2020 at 02:40:22PM +0100, Jan Beulich wrote:
> > On 27.11.2020 14:31, Manuel Bouyer wrote:
> > > On Fri, Nov 27, 2020 at 02:18:54PM +0100, Jan Beulich wrote:
> > >> On 27.11.2020 14:13, Manuel Bouyer wrote:
> > >>> On Fri, Nov 27, 2020 at 12:29:35PM +0100, Jan Beulich wrote:
> > >>>> On 27.11.2020 11:59, Roger Pau Monné wrote:
> > >>>>> --- a/xen/arch/x86/hvm/irq.c
> > >>>>> +++ b/xen/arch/x86/hvm/irq.c
> > >>>>> @@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
> > >>>>>       * to know if the GSI is pending or not.
> > >>>>>       */
> > >>>>>      spin_lock(&d->arch.hvm.irq_lock);
> > >>>>> +    if ( gsi == TRACK_IRQ )
> > >>>>> +        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
> > >>>>> +                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
> > >>>>
> > >>>> This produces
> > >>>>
> > >>>> 81961 hvm_gsi_assert irq 34 trig 1 assert count 1
> > >>>>
> > >>>> Since the logging occurs ahead of the call to assert_gsi(), it
> > >>>> means we don't signal anything to Dom0, because according to our
> > >>>> records there's still an IRQ in flight. Unfortunately we only
> > >>>> see the tail of the trace, so it's not possible to tell how / when
> > >>>> we got into this state.
> > >>>>
> > >>>> Manuel - is this the only patch you have in place? Or did you keep
> > >>>> any prior ones? Iirc there once was one where Roger also suppressed
> > >>>> some de-assert call.
> > >>>
> > >>> Yes, I have some of the previous patches (otherwise Xen panics).
> > >>> Attached are the diffs I currently have.
> > >>
> > >> I think you want to delete the hunk dropping the call to
> > >> hvm_gsi_deassert() from pt_irq_time_out(). Iirc it was that
> > >> addition which changed the behavior to just a single IRQ ever
> > >> making it into Dom0. And it ought to be only the change to
> > >> msix_write() which is needed to avoid the panic.
> > > 
> > > Yes, I did keep the hvm_gsi_deassert() patch because I expected it
> > > to make things easier, as it allows interacting with Xen without
> > > changing interrupt states.
> > 
> > Right, but then we'd need to see the beginning of the trace,
> > rather than it starting at (in this case) about 95,000. Yet ...
> > 
> > > I removed it, here's a new trace
> > > 
> > > http://www-soc.lip6.fr/~bouyer/xen-log12.txt
> > 
> > ... hmm, odd - no change at all:
> > 
> > 95572 hvm_gsi_assert irq 34 trig 1 assert count 1
> 
> But I can confirm that now, entering ^A^A^A gets interrupts going again

I think there is something odd going on with dpci interrupts that I'm
trying to understand. I have a patch now that panics when the trace
buffer is full, so we will hopefully be able to see the whole trace of
events. There will be no need for you to press the 'T' key anymore;
the system will panic once the buffer fills.

Note that this patch also removes the deassert done in pt_irq_time_out().
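
The panic-when-full change can be modeled in isolation. The sketch below is
a hypothetical standalone model (the names trace_add, panicked, and
TRACE_BYTES are illustrative), not the actual xen/common/debugtrace.c code;
it only shows why refusing to wrap preserves the oldest events:

```c
#define TRACE_BYTES 8

static char buf[TRACE_BYTES];
static unsigned int prd;    /* producer index, mirrors data->prd */
static int panicked;        /* stands in for panic("END of buffer\n") */

/* Append characters until the buffer is full.  Instead of wrapping the
 * producer index back to 0 (which would overwrite the earliest, most
 * interesting events), flag a panic so the full history up to that
 * point survives in the dump. */
static void trace_add(const char *s)
{
    while (*s && !panicked) {
        buf[prd++] = *s++;
        if (prd == TRACE_BYTES)
            panicked = 1;   /* the real patch calls panic() here */
    }
}
```

With wrapping, a long-running trace only ever shows the tail; with this
model the beginning of the trace is what gets kept.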

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 38ac5fb6c7..9db3dcc957 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * to know if the GSI is pending or not.
      */
     spin_lock(&d->arch.hvm.irq_lock);
+    if ( gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
+                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
+
     if ( trig == VIOAPIC_EDGE_TRIG || !hvm_irq->gsi_assert_count[gsi] )
     {
         if ( trig == VIOAPIC_LEVEL_TRIG )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..e6748e0649 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -257,7 +257,11 @@ static void vioapic_write_redirent(
         vlapic_adjust_i8259_target(d);
     }
     else if ( ent.fields.trig_mode == VIOAPIC_EDGE_TRIG )
+    {
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC set edge trigger irq %u\n", gsi);
         pent->fields.remote_irr = 0;
+    }
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
               hvm_irq->gsi_assert_count[idx] )
@@ -278,6 +282,10 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u\n", gsi);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +293,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +416,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                              irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index 49bd778484..db7167eb4b 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..ec52e44cb7 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,12 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u nr_guests %u ack_type %u in_flight %u\n",
+                          desc->irq, action->nr_guests, action->ack_type,
+                          action->in_flight);
+
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/common/debugtrace.c b/xen/common/debugtrace.c
index f3794b9453..b22c09297d 100644
--- a/xen/common/debugtrace.c
+++ b/xen/common/debugtrace.c
@@ -130,14 +130,14 @@ static void debugtrace_toggle(void)
 
 void debugtrace_dump(void)
 {
-    unsigned long flags;
+    //unsigned long flags;
 
     watchdog_disable();
-    spin_lock_irqsave(&debugtrace_lock, flags);
+    //spin_lock_irqsave(&debugtrace_lock, flags);
 
     debugtrace_dump_worker();
 
-    spin_unlock_irqrestore(&debugtrace_lock, flags);
+    //spin_unlock_irqrestore(&debugtrace_lock, flags);
     watchdog_enable();
 }
 
@@ -152,7 +152,10 @@ static void debugtrace_add_to_buf(char *buf)
     {
         data->buf[data->prd++] = *p;
         if ( data->prd == debugtrace_bytes )
+        {
+            panic("END of buffer\n");
             data->prd = 0;
+        }
     }
 }
 
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..c8fefc2648 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -174,7 +174,10 @@ static void pt_irq_time_out(void *data)
          * In the identity mapped case the EOI can also be done now, this way
          * the iteration over the list of domain pirqs is avoided.
          */
-        hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
+        if ( dpci_pirq(irq_map)->pirq == TRACK_IRQ )
+            debugtrace_printk("pt_irq_time_out irq %u\n",
+                              dpci_pirq(irq_map)->pirq);
+        //hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
         spin_unlock(&irq_map->dom->event_lock);
@@ -828,6 +831,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
+    if ( pirq->pirq == TRACK_IRQ )
+        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);
+
     pirq_dpci->masked = 1;
     raise_softirq_for(pirq_dpci);
     return 1;
@@ -1010,6 +1016,9 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u\n", guest_gsi);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..91579c33b9 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 34
+
 #endif /* __XEN_IRQ_H__ */
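The failure mode discussed earlier in the thread (an assert count stuck at 1
suppressing all further delivery) can be sketched as a toy model. The names
below are hypothetical and this is not the Xen implementation; it only
captures the gating done on gsi_assert_count in hvm_gsi_assert():

```c
static unsigned int assert_count;  /* models gsi_assert_count[gsi] */
static unsigned int delivered;     /* IRQs that actually reached the guest */

/* A level-triggered GSI is only forwarded while no instance is
 * recorded as in flight; a second assert before the deassert is
 * deliberately swallowed. */
static void gsi_assert(void)
{
    if (assert_count == 0) {
        assert_count = 1;
        delivered++;               /* stands in for assert_gsi() */
    }
}

/* The EOI/deassert path clears the in-flight mark, re-arming delivery. */
static void gsi_deassert(void)
{
    assert_count = 0;
}
```

If the deassert is ever lost, assert_count stays at 1 and every later
assertion is dropped, matching the trace lines showing "assert count 1"
with no interrupt reaching dom0.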



From xen-devel-bounces@lists.xenproject.org Fri Nov 27 21:45:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 27 Nov 2020 21:45:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39787.72802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kilXk-0001bU-5B; Fri, 27 Nov 2020 21:44:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39787.72802; Fri, 27 Nov 2020 21:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kilXk-0001bN-28; Fri, 27 Nov 2020 21:44:40 +0000
Received: by outflank-mailman (input) for mailman id 39787;
 Fri, 27 Nov 2020 21:44:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N/b8=FB=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kilXi-0001bI-6k
 for xen-devel@lists.xenproject.org; Fri, 27 Nov 2020 21:44:38 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad880bc6-89e3-4676-830d-a4c60720df7e;
 Fri, 27 Nov 2020 21:44:35 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ARLiQiR028880
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Fri, 27 Nov 2020 22:44:27 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id E43082E9465; Fri, 27 Nov 2020 22:44:20 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad880bc6-89e3-4676-830d-a4c60720df7e
Date: Fri, 27 Nov 2020 22:44:20 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201127214420.GA637@antioche.eu.org>
References: <20201126150937.jhbfp7iefkmtedx7@Air-de-Roger>
 <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201127202211.eqrxloii5x54zode@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Fri, 27 Nov 2020 22:44:27 +0100 (MET)

On Fri, Nov 27, 2020 at 09:22:11PM +0100, Roger Pau Monné wrote:
> > 
> > But I can confirm that now, entering ^A^A^A gets interrupts going again
> 
> I think there is something odd going on with dpci interrupts that I'm
> trying to understand. I have a patch now that panics when the trace
> buffer is full, so we will hopefully be able to see the whole trace of
> events. There will be no need for you to press the 'T' key anymore;
> the system will panic once the buffer fills.
> 
> Note that this patch also removes the deassert done in pt_irq_time_out().

Thanks. The new trace is at
http://www-soc.lip6.fr/~bouyer/xen-log13.txt

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 03:02:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 03:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39809.72826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiqVD-0002oI-MA; Sat, 28 Nov 2020 03:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39809.72826; Sat, 28 Nov 2020 03:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiqVD-0002o9-FT; Sat, 28 Nov 2020 03:02:23 +0000
Received: by outflank-mailman (input) for mailman id 39809;
 Sat, 28 Nov 2020 03:02:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiqVD-0002o1-3K; Sat, 28 Nov 2020 03:02:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiqVC-00029f-U2; Sat, 28 Nov 2020 03:02:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiqVC-0002P3-KW; Sat, 28 Nov 2020 03:02:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiqVC-0007BA-K1; Sat, 28 Nov 2020 03:02:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g/1saNe/5/IRzJyFBMMJH0AgQgjg636now6T1Yt+drA=; b=zi1UsZ9TlqPIXVSlL+LC39GAce
	JCB9XoSOKpvHnr7xmpoqb7Uysgr6OUk0Jo1qZ2vXc8O8LrBDpFEK39Qo1OVISLvrIN0SfAMf579fo
	jWbrz9CH2FbESfnFBS4DfQYunGaIXtPkiheyICjh/5mTkR7BwqkbWQZqiwff5kWUY/UI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157060-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157060: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f69a2b9a42029bcbcf88d074425ebe63495b0a08
X-Osstest-Versions-That:
    ovmf=73b604bb1e13ff915c523180979f7b4db34b6d1b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 03:02:22 +0000

flight 157060 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157060/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f69a2b9a42029bcbcf88d074425ebe63495b0a08
baseline version:
 ovmf                 73b604bb1e13ff915c523180979f7b4db34b6d1b

Last test of basis   157055  2020-11-27 17:09:39 Z    0 days
Testing same since   157060  2020-11-27 19:10:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenyi Xie <xiewenyi2@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   73b604bb1e..f69a2b9a42  f69a2b9a42029bcbcf88d074425ebe63495b0a08 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 04:56:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 04:56:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39818.72841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kisHX-0004mu-46; Sat, 28 Nov 2020 04:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39818.72841; Sat, 28 Nov 2020 04:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kisHX-0004mn-0N; Sat, 28 Nov 2020 04:56:23 +0000
Received: by outflank-mailman (input) for mailman id 39818;
 Sat, 28 Nov 2020 04:56:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3d2o=FC=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kisHV-0004mi-QI
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 04:56:21 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b213de3-0ebe-446a-91d4-f87d3ec7ecb7;
 Sat, 28 Nov 2020 04:56:20 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0AS4u9Tf049715
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 27 Nov 2020 23:56:15 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0AS4u85e049714;
 Fri, 27 Nov 2020 20:56:08 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b213de3-0ebe-446a-91d4-f87d3ec7ecb7
Date: Fri, 27 Nov 2020 20:56:08 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roman Shaposhnik <roman@zededa.com>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen on RP4
Message-ID: <X8HYaI/WjRFtM8As@mattapan.m5p.com>
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
 <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com>
 <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
 <X73owDP0UXx+lvJd@mattapan.m5p.com>
 <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Nov 25, 2020 at 10:57:31AM -0800, Stefano Stabellini wrote:
> On Tue, 24 Nov 2020, Elliott Mitchell wrote:
> > I've frankly got no idea how to ensure the correct device-tree is passed
> > to Xen.  Is GRUB's `devicetree` command correct when loading Xen?  Is a
> > device-tree matched to the Linux kernel appropriate for Xen?
> > 
> > (I'm guessing the second is "yes", but the first I don't have a clue)
> 
> Yes, devicetree is correct. I have not used the graphical output, so I
> cannot help you there, but yes, the best bet is to use the devicetree
> that comes with the kernel.
> 
> One thing I noticed is that you are missing some of the command line
> arguments for Xen and Linux in your grub config. For instance on the Xen
> line you want to have something like:
> 
>     dom0_mem=1024M console=dtuart sync_console
> 
> And on the Linux line you might want to have:
> 
>     console=tty0 console=hvc0
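
For a complete picture, the options quoted above would typically sit
together in one GRUB menuentry along these lines. The file names, the
dtb name, and the root device below are illustrative assumptions, not
values taken from this thread:

```shell
menuentry "Xen (dom0)" {
    # Hypervisor with its own command line
    multiboot2 /boot/xen.gz dom0_mem=1024M console=dtuart sync_console
    # Dom0 kernel; hvc0 is the Xen console, tty0 the local display
    module2 /boot/vmlinuz console=tty0 console=hvc0 root=/dev/mmcblk0p2
    # Device-tree passed to Xen (illustrative dtb name)
    devicetree /boot/bcm2711-rpi-4-b.dtb
}
```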

I was sending the bare minimum; some of the known bits were filtered
out.  After spending several hours pounding on this and building
multiple kernels, I'm headed towards odd theories...

I'm wondering how Debian's kernel source trees have managed to remain
broken for the Raspberry Pis for over a year:
https://bugs.debian.org/939633

Right now that feels like conspiracy theory territory, but my mind is
wandering in odd directions...   Is someone at Intel trying to sabotage
device-trees so everyone moves to UEFI?

It could simply be that Debian's kernel maintainers are very busy and
the original reporter of the bug failed to draw enough attention to a
large problem.  If that suspicion is true, though, getting EFI to
supported status on ARM might be a major concern.

Alternatively, I've been exploring an incorrect path and should simply
stick with the device-trees that come from the Raspberry Pi Foundation
rather than trying to follow the Linux kernel's device-trees.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Nov 28 05:37:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 05:37:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39826.72856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kisvF-0000SG-6u; Sat, 28 Nov 2020 05:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39826.72856; Sat, 28 Nov 2020 05:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kisvF-0000S9-3j; Sat, 28 Nov 2020 05:37:25 +0000
Received: by outflank-mailman (input) for mailman id 39826;
 Sat, 28 Nov 2020 05:37:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kisvD-0000S1-Se; Sat, 28 Nov 2020 05:37:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kisvD-0005jL-Kc; Sat, 28 Nov 2020 05:37:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kisvD-0002Sl-7U; Sat, 28 Nov 2020 05:37:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kisvD-0004bI-70; Sat, 28 Nov 2020 05:37:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U8pPWQBcmbnaPvM5/fJ9+pxk9aSdDHYjfZXT0zihk9E=; b=TUZUFK4v6MdCfLe/6cqeAJoJcx
	8nZQcULBGoMBgpruN88KE0c9L2ELp1PI/OBhGeJ0pRinkXZQZ6SnIe/sKwf+add9tiLOm8jViYM9s
	RBUFkiZw5GIG9mdXvReoZKMCM4yy7QlSInMvQS6vAw7WdpgQf0gj14txkClSTTN2OezI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157056-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157056: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ea8208249d1082eae0444934efb3b59cd3183f05
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 05:37:23 +0000

flight 157056 qemu-mainline real [real]
flight 157066 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157056/
http://logs.test-lab.xenproject.org/osstest/logs/157066/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot            fail pass in 157066-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail in 157066 never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail in 157066 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ea8208249d1082eae0444934efb3b59cd3183f05
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   99 days
Failing since        152659  2020-08-21 14:07:39 Z   98 days  207 attempts
Testing same since   157056  2020-11-27 17:37:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69269 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 06:02:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 06:02:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39837.72873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kitJc-0003Kr-Hw; Sat, 28 Nov 2020 06:02:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39837.72873; Sat, 28 Nov 2020 06:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kitJc-0003Kk-Ez; Sat, 28 Nov 2020 06:02:36 +0000
Received: by outflank-mailman (input) for mailman id 39837;
 Sat, 28 Nov 2020 06:02:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kitJb-0003Kc-5y; Sat, 28 Nov 2020 06:02:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kitJa-0006KJ-Ut; Sat, 28 Nov 2020 06:02:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kitJa-0004bq-LC; Sat, 28 Nov 2020 06:02:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kitJa-0004eh-Kh; Sat, 28 Nov 2020 06:02:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=u8sgIYlvXBF7QzMGtY7WrGBnzHlQAx1R1hPLKvlzbMk=; b=RAqsa94SPk0nZuwvECS9NHVwrI
	bb6yWKwtukM+7SCGcr3EtXVx5Nx/auFYFMnv6q32VWYhGHMO4HbEYwHa+AAcAqaXEKaV3mfugZjdB
	mXXPPY1XugWmTfS5sZQABWiasl+HyFbJlBDCGBJLMIKN7lSv+Cgyf40oXa6oTxe50W0w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157062-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157062: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=99c710c46dfc413b9c8a1a40b463ae1eaca539e5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 06:02:34 +0000

flight 157062 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157062/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                99c710c46dfc413b9c8a1a40b463ae1eaca539e5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  119 days
Failing since        152366  2020-08-01 20:49:34 Z  118 days  199 attempts
Testing same since   157062  2020-11-27 19:40:29 Z    0 days    1 attempts

------------------------------------------------------------
3586 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 685898 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 07:58:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 07:58:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39847.72889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiv7O-0005Fw-2b; Sat, 28 Nov 2020 07:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39847.72889; Sat, 28 Nov 2020 07:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiv7N-0005Fp-Vr; Sat, 28 Nov 2020 07:58:05 +0000
Received: by outflank-mailman (input) for mailman id 39847;
 Sat, 28 Nov 2020 07:58:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiv7M-0005Fh-V8; Sat, 28 Nov 2020 07:58:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiv7M-00009b-M2; Sat, 28 Nov 2020 07:58:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kiv7M-0002HZ-E6; Sat, 28 Nov 2020 07:58:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kiv7M-0007VU-Db; Sat, 28 Nov 2020 07:58:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DzTYOcF/AVjPklUBhmAOEGp7cbMcmRAd0G9KI9Y18t4=; b=F1qIBUZHa53d3eyFa0Oi55+niV
	PxP283nGqQEeMspGgos4jwdgs+ZOtiXR0BePPylPd6C9kapYbVXkQ4m5LiTPLs8zKifjV9EigM2b1
	wA0wQfyowkcQaagqD5R/UPv5AXDFsPDJflaJj7/EGsZckClg7L7kw06Tz6D3/iezeeJw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157067-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157067: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=5d789c7b37721c1dd3b4e9a3732399bf66603737
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 07:58:04 +0000

flight 157067 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157067/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              5d789c7b37721c1dd3b4e9a3732399bf66603737
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  141 days
Failing since        151818  2020-07-11 04:18:52 Z  140 days  135 attempts
Testing same since   157067  2020-11-28 04:19:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29761 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 07:59:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 07:59:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39854.72904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiv8W-0005Ox-IN; Sat, 28 Nov 2020 07:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39854.72904; Sat, 28 Nov 2020 07:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiv8W-0005Oq-FK; Sat, 28 Nov 2020 07:59:16 +0000
Received: by outflank-mailman (input) for mailman id 39854;
 Sat, 28 Nov 2020 07:59:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Sga=FC=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1kiv8U-0005Oh-Jm
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 07:59:14 +0000
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e82116d-3edf-4cba-9b27-ff542b1763f4;
 Sat, 28 Nov 2020 07:59:13 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id q5so6336580qkc.12
 for <xen-devel@lists.xenproject.org>; Fri, 27 Nov 2020 23:59:13 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e82116d-3edf-4cba-9b27-ff542b1763f4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=g6TwH1mAsGEvpO10D1ozDkWdXriqYEzkcHX3VrHAXfo=;
        b=h880v3+VZ+X2TRoR4qjQL30o0ema8uM5rktJS927fexqI3U/K9FJpMsBJ+P7Zv/FiU
         q5kdxMVUIq086+Ue3acsmNX3YqMBCOA6JGhTNufdcndPnT3o7fYGDM/FlxOVPF1M9E55
         PlsePgQ6cXbQPk1ArCrjg7zJ3D3Mb6oaiqbeJQicbHAygzfdCxuyId108MW2L0dBP5n+
         JbN2W3CkMIllwFctvgeZ0G1l25BE9ZNjt39+HFwUcXyCtpnDv3qUToH9G/R1Z7cb6fMO
         sVek+dDfKryOpDwo6DJyQmSeQgKyttm5jKes/vuRhSdKr2+wGTQ8xuEXGIzc5AEUOcr6
         GbVg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=g6TwH1mAsGEvpO10D1ozDkWdXriqYEzkcHX3VrHAXfo=;
        b=jzHa8bmiTdtRwVYQyKNxaFdvJXQHliJ4VImhhzlWqEEvXYj3n03YCg9dPq2lcmPEyd
         d4QOJgr7ZdEZNxKcF+5fXD4xuKtqPyRENv1kxLrZir31XxgXbM0AQmXNXjqbzWbJ37X6
         aI+1Qo9aKydJPf23m3CXTVT1IMRVfeXtnMMKFQ5w04ffyjwx63rIx7qb619pMjzfZC5M
         KFjHHZPbmEz9lMzsMMYTS68yJfwXbExNnMndau9hxlw6pGKxRnck+PbVzZCE6wamwQYy
         s7b6P1J9iO7iddSS2Wo7t+LiT2CPxByEfbl27QQ8ukfj0bQasYgwmf7bcjD4q2uoB/tC
         674Q==
X-Gm-Message-State: AOAM531EYb2vUEOQJkQw/bCz1MnPKRTfmPrSuH+ve1sBWGwygW4qU7qt
	umk4cdYUvEPcWj5ZENp+j0CdXQvUsVXR83yRSNIBBA==
X-Google-Smtp-Source: ABdhPJyoPxhXeDn4leCEjmmQJ1whU30j0GXX3jzrxIQvxHGmgd51/PS7TtrDLO3wLKyBz+vfCefvNiBZPMufn4Hw1KA=
X-Received: by 2002:a37:6546:: with SMTP id z67mr12564751qkb.22.1606550353466;
 Fri, 27 Nov 2020 23:59:13 -0800 (PST)
MIME-Version: 1.0
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com> <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com> <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
 <X73owDP0UXx+lvJd@mattapan.m5p.com> <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
 <X78irfLB6DQhkPvd@mattapan.m5p.com>
In-Reply-To: <X78irfLB6DQhkPvd@mattapan.m5p.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Fri, 27 Nov 2020 23:59:10 -0800
Message-ID: <CAMmSBy_4ry2DwMNT1Ai1-11wBHHuO71muvkfEWLRV=h0QiKyoA@mail.gmail.com>
Subject: Re: Xen on RP4
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Nov 25, 2020 at 7:36 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
>
> On Wed, Nov 25, 2020 at 10:57:31AM -0800, Stefano Stabellini wrote:
> > On Tue, 24 Nov 2020, Elliott Mitchell wrote:
> > > My testing section for Xen is:
> > >     xen_hypervisor /boot/xen-4.14-arm64.efi
> > >     xen_module /boot/vmlinuz-5.8.10-2rp4-6.1.7 root=UUID=01234567-dead-beef-d13d-456789abcdef ro
> > >     devicetree /boot/dtb-5.8.10-2rp4-6.1.7
> > >     xen_module --nounzip /boot/initrd.img-5.8.10-2rp4-6.1.7
> > >
> > > I've frankly got no idea how to ensure the correct device-tree is passed
> > > to Xen.  Is GRUB's `devicetree` command correct when loading Xen?  Is a
> > > device-tree matched to the Linux kernel appropriate for Xen?
> > >
> > > (I'm guessing the second is "yes", but the first I don't have a clue)
> >
> > Yes, devicetree is correct. I have not used the graphical output, so I
> > cannot help you there but yes the best bet is to use the devicetree that
> > comes with the kernel.
>
> Well, I've now got everything together for a "proper" Debian Raspberry PI
> installation.  Apparently since 5.2 (perhaps earlier, but 5.2 is
> confirmed), Debian's kernel source packages have had their Raspberry PI
> device-trees garbled.  I do have full untouched Linux kernel source
> handy, but I tend to stick with the distribution until that proves
> untenable.

Yup. Same here. I started with the upstream kernel, wasted a lot of time,
threw in the towel, and imported all of the RPi Foundation patches
wholesale :-(

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 10:20:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 10:20:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39904.72922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kixKS-0002Bx-SF; Sat, 28 Nov 2020 10:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39904.72922; Sat, 28 Nov 2020 10:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kixKS-0002Bq-OW; Sat, 28 Nov 2020 10:19:44 +0000
Received: by outflank-mailman (input) for mailman id 39904;
 Sat, 28 Nov 2020 10:19:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kixKR-0002Bi-QP; Sat, 28 Nov 2020 10:19:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kixKR-0003cj-KO; Sat, 28 Nov 2020 10:19:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kixKR-0008BF-9J; Sat, 28 Nov 2020 10:19:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kixKR-0000Mo-8q; Sat, 28 Nov 2020 10:19:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9IlpoJZL8nIFg46w5lgSQbmtGUHqTeZhL813teo188c=; b=iUEuujGRX9E7ZfA97RKWolx9zT
	AYgjbAyfjJogBQdl8IKPXHYfcq+FLaZP8dUjXaIRWbFdbQSR7lUsQUIMhIvQJsyryvqTT8XPgrT4A
	J3euAMfm+cy2pw8bVyEk7TdT7+S31wMSfe7RYwrUsdQSk7tqktDHSGyD8kU1IQsEAO3o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157063-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157063: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
X-Osstest-Versions-That:
    xen=181f2c224ccd0a2900d6ae94ec390a546731f593
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 10:19:43 +0000

flight 157063 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157063/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157044
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157044
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157044
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157044
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157044
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157044
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157044
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157044
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157044
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157044
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157044
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
baseline version:
 xen                  181f2c224ccd0a2900d6ae94ec390a546731f593

Last test of basis   157044  2020-11-27 01:51:47 Z    1 days
Testing same since   157063  2020-11-27 20:37:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Paul Durrant <pdurrant@amazon.com>
  Rahul Singh <rahul.singh@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs



Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   181f2c224c..f7d7d53f64  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5 -> master


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:14:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:14:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39925.72953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyBU-0007kM-8f; Sat, 28 Nov 2020 11:14:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39925.72953; Sat, 28 Nov 2020 11:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyBU-0007kF-58; Sat, 28 Nov 2020 11:14:32 +0000
Received: by outflank-mailman (input) for mailman id 39925;
 Sat, 28 Nov 2020 11:14:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyBT-0007kA-KE
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:14:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyBR-0004ip-1p; Sat, 28 Nov 2020 11:14:29 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyBQ-00041b-Pn; Sat, 28 Nov 2020 11:14:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=l6Zt80HSdZpb46yg8WFz7BJxgkfut/ni9w1dWOSSvmg=; b=f8HAai3TUXXp5cpnFI5rYc37IH
	V+3LJG9CyrgM5cqbeI+/DMlyfE52slHEdBAyZGBltQm3lN1lxWlPex2LDfXfI6Zp1WZ5t26tN4smy
	NuHXO1jQXAzmGDyNh52YyYDJ0+/ujTndvP6bb8RuOW3iJf8FEr4GHDrLXNymOAjH83cI=;
Subject: Re: [PATCH RFC 1/6] xen/arm: mm: Remove special case for CPU0 in
 dump_hyp_walk()
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <jgrall@amazon.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-2-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <3a783a3d-4c4d-f107-1583-16f04fe76ae0@xen.org>
Date: Sat, 28 Nov 2020 11:14:26 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201119190751.22345-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/11/2020 19:07, Julien Grall wrote:
> From: Stefano Stabellini <sstabellini@kernel.org>
> 
> There is no need to have a special case for CPU0 when converting the
> page-table virtual address into a physical address. The helper
> virt_to_maddr() is able to translate any address as long as the root
> page-tables are mapped in the virtual address space. This is the case
> for all the CPUs at the moment.
> 
> So use the same BUG_ON() regardless of the CPU.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> [julien: Rework the commit message]
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> I went back through the conversation in [1] regarding the issue when
> loading Xen below 2MB on Arm32. The example provided is wrong because,
> to find the physical address, we need to add 'phys_offset', not
> subtract it.
> 
> So I removed the comment saying the code was buggy.
> 
> [1] https://marc.info/?l=xen-devel&m=157081398022401

Stefano, can you confirm that you are happy with the new commit message?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:18:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:18:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39931.72965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyF4-00080o-Q2; Sat, 28 Nov 2020 11:18:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39931.72965; Sat, 28 Nov 2020 11:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyF4-00080h-M4; Sat, 28 Nov 2020 11:18:14 +0000
Received: by outflank-mailman (input) for mailman id 39931;
 Sat, 28 Nov 2020 11:18:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyF3-00080b-9V
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:18:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyEz-0004nz-L0; Sat, 28 Nov 2020 11:18:09 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyEz-0004DM-EE; Sat, 28 Nov 2020 11:18:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=40JYWeZt0gcBuYCPBPMxOjXYpJW4xNkZQEH7E8ilIV4=; b=GlSOo9qQtRkijw2mCblvpOc4YB
	FU0z8qC23wyFFN/g1mr+jD25FDzuZkFgHZR8i7cyHX5ekfwyXWmfmI452jaq0bsFs/xahPIDPfWbK
	S75AkwA9tbFosiPV1877ZAhOUw6fMwDeNaOzBhMViRCmxGaXRwU+S/aejpx4GPCKJI8A=;
Subject: Re: [PATCH] xen/iommu: vtd: Fix undefined behavior pci_vtd_quirks()
To: xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>
Cc: Julien Grall <jgrall@amazon.com>
References: <20201119145216.29280-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <5ce29a17-a2de-1374-c40b-6c6acd9f816b@xen.org>
Date: Sat, 28 Nov 2020 11:18:08 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201119145216.29280-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 19/11/2020 14:52, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> When booting Xen with CONFIG_UBSAN=y on Sandy Bridge, UBSAN will throw
> the following splat:
> 
> (XEN) ================================================================================
> (XEN) UBSAN: Undefined behaviour in quirks.c:449:63
> (XEN) left shift of 1 by 31 places cannot be represented in type 'int'
> (XEN) ----[ Xen-4.11.4  x86_64  debug=y   Not tainted ]----
> 
> [...]
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0802c0ccc>] ubsan.c#ubsan_epilogue+0xa/0xad
> (XEN)    [<ffff82d0802c16c9>] __ubsan_handle_shift_out_of_bounds+0xb4/0x145
> (XEN)    [<ffff82d0802eeecd>] pci_vtd_quirk+0x3d3/0x74f
> (XEN)    [<ffff82d0802e508b>] iommu.c#domain_context_mapping+0x45b/0x46f
> (XEN)    [<ffff82d08053f39e>] iommu.c#setup_hwdom_device+0x22/0x3a
> (XEN)    [<ffff82d08053dfbc>] pci.c#setup_one_hwdom_device+0x8c/0x124
> (XEN)    [<ffff82d08053e302>] pci.c#_setup_hwdom_pci_devices+0xbb/0x2f7
> (XEN)    [<ffff82d0802da5b7>] pci.c#pci_segments_iterate+0x4c/0x8c
> (XEN)    [<ffff82d08053e8bd>] setup_hwdom_pci_devices+0x25/0x2c
> (XEN)    [<ffff82d08053e916>] iommu.c#intel_iommu_hwdom_init+0x52/0x2f3
> (XEN)    [<ffff82d08053d6da>] iommu_hwdom_init+0x4e/0xa4
> (XEN)    [<ffff82d080577f32>] dom0_construct_pv+0x23c8/0x2476
> (XEN)    [<ffff82d08057cb50>] construct_dom0+0x6c/0xa3
> (XEN)    [<ffff82d080564822>] __start_xen+0x4651/0x4b55
> (XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55
> 
> Note that the splat is from 4.11.4 and not staging, although the
> problem is still present.
> 
> This can be solved by making the first operand unsigned int.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Kevin, can I get an ack on this small patch?

Cheers.

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:26:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:26:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39940.72976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyMU-0000Un-NL; Sat, 28 Nov 2020 11:25:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39940.72976; Sat, 28 Nov 2020 11:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyMU-0000Ug-KP; Sat, 28 Nov 2020 11:25:54 +0000
Received: by outflank-mailman (input) for mailman id 39940;
 Sat, 28 Nov 2020 11:25:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyMT-0000Ub-Jg
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:25:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyMQ-0004wh-JZ; Sat, 28 Nov 2020 11:25:50 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyMQ-0004hC-7l; Sat, 28 Nov 2020 11:25:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bqQj0aZdYZhq4LkThwBUQPjV3CoIyckZ/6Zjdg3jPZ0=; b=LzO/J6chHjtKz9GNErOQQp1FMJ
	8UWCeb+MMYe2PbFG8bPdILYTm3u1Mgcz4tlmD1hBCwTi+mhg6Z2SQfeVLQw1imRTQhynyT+KYZLo8
	TGkRcSO+67C8evykBHcvyd3+nUA6Cr8IIkEQVsoomBHZdtedBcaLQYK05w9HnpS6HlMg=;
Subject: Re: [PATCH] xen/irq: Propagate the error from init_one_desc_irq() in
 init_irq_data()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201119145434.28065-1-julien@xen.org>
 <alpine.DEB.2.21.2011191542200.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <499a56f6-9a66-4fb8-0687-b9fb221fdc52@xen.org>
Date: Sat, 28 Nov 2020 11:25:48 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011191542200.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/11/2020 23:44, Stefano Stabellini wrote:
> On Thu, 19 Nov 2020, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> init_one_desc_irq() can return an error if it is unable to allocate
>> memory. While this is unlikely to happen during boot (called from
>> init_irq_data()), it is better to harden the code by propagating the
>> return value.
>>
>> Spotted by coverity.
>>
>> CID: 106529
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Hi Julien,

Hi Stefano,

> Thanks for the patch. I was about to commit it when I realized there is
> one more caller: xen/arch/arm/irq.c:init_local_irq_data
> 
> Should we change that too to check for the return error?

We should change that too. I will send a new version.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:27:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:27:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39946.72989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyNg-0000bV-2r; Sat, 28 Nov 2020 11:27:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39946.72989; Sat, 28 Nov 2020 11:27:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyNf-0000bO-VR; Sat, 28 Nov 2020 11:27:07 +0000
Received: by outflank-mailman (input) for mailman id 39946;
 Sat, 28 Nov 2020 11:27:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyNe-0000bG-Nx
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:27:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyNb-0004zI-Ps; Sat, 28 Nov 2020 11:27:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyNb-0004jH-IR; Sat, 28 Nov 2020 11:27:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=HvwaVHZe+OmUARUfoPhJEeMbfY6nTVY5vm00zjtencQ=; b=GL0a92Nr7KjOxDIzt5IZIoSPFr
	mp+ejjSGtD9Hq3EGiI1ztaCGUWvodxHqRF84Qv4h5Bug21WEqiowxpe91efDPggxU9b3ZwMtUEWzL
	Gj3lpC6VRSDyjYSMJDW+qOJkp0tSxggAz188/NhIYueJgTbDarm6pczu6EqJ9Yvl7LL0=;
Subject: Re: [PATCH] xen/irq: Propagate the error from init_one_desc_irq() in
 init_irq_data()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>
References: <20201119145434.28065-1-julien@xen.org>
 <20201119151156.wgkwyslzzlpcirot@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <592a2f99-a77f-f12d-cefe-4d41e8a0f08e@xen.org>
Date: Sat, 28 Nov 2020 11:27:01 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201119151156.wgkwyslzzlpcirot@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Roger,

On 19/11/2020 15:11, Roger Pau Monné wrote:
> On Thu, Nov 19, 2020 at 02:54:34PM +0000, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> init_one_desc_irq() can return an error if it is unable to allocate
>> memory. While this is unlikely to happen during boot (called from
>> init_irq_data()), it is better to harden the code by propagating the
>> return value.
>>
>> Spotted by coverity.
>>
>> CID: 106529
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
>> ---
>>   xen/arch/x86/irq.c | 7 ++++++-
> 
> For x86:
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thank you!

> 
>>   2 files changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
>> index 3877657a5277..279d221a2b85 100644
>> --- a/xen/arch/arm/irq.c
>> +++ b/xen/arch/arm/irq.c
>> @@ -88,7 +88,12 @@ static int __init init_irq_data(void)
>>       for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>>       {
>>           struct irq_desc *desc = irq_to_desc(irq);
>> -        init_one_irq_desc(desc);
>> +        int rc;
>> +
>> +        rc = init_one_irq_desc(desc);
> 
> You could init rc at definition.

I need to send a new version, so I have merged the two lines together.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:31:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:31:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39954.73001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyRX-0001Yg-Lh; Sat, 28 Nov 2020 11:31:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39954.73001; Sat, 28 Nov 2020 11:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyRX-0001YZ-H7; Sat, 28 Nov 2020 11:31:07 +0000
Received: by outflank-mailman (input) for mailman id 39954;
 Sat, 28 Nov 2020 11:31:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyRW-0001YU-6J
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:31:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyRV-00053m-Tz; Sat, 28 Nov 2020 11:31:05 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyRV-0004ry-JI; Sat, 28 Nov 2020 11:31:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=x08oKDPsYhZfnq1rzPGbIzqD44Wkh09Na1i2ws0EUmg=; b=dVkY1BUejtiAxv1ihPPgma2BP2
	eLYN8ElHe5CmHoyZH1raJAVIjH2BU+d7Maq25284XFPUhgdUNJ5N6r5jgMPfN4YhywIKrRHd7pPBa
	zIqmG4y2ArD+dsf4jY7/6Qkr9nn9plsHxjsqB88D8RAlUBdmjdPdzur8gFvDVRuW5zUY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq() in init_*_irq_data()
Date: Sat, 28 Nov 2020 11:31:02 +0000
Message-Id: <20201128113102.6446-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

init_one_desc_irq() can return an error if it is unable to allocate
memory. While this is unlikely to happen during boot (called from
init_{,local_}irq_data()), it is better to harden the code by
propagating the return value.

Spotted by coverity.

CID: 106529

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

---
    Changes in v2:
        - Add Roger's reviewed-by for x86
        - Handle the error in init_local_irq_data() too
---
 xen/arch/arm/irq.c | 12 ++++++++++--
 xen/arch/x86/irq.c |  7 ++++++-
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3877657a5277..b71b099e6fa2 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -88,7 +88,11 @@ static int __init init_irq_data(void)
     for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
     {
         struct irq_desc *desc = irq_to_desc(irq);
-        init_one_irq_desc(desc);
+        int rc = init_one_irq_desc(desc);
+
+        if ( rc )
+            return rc;
+
         desc->irq = irq;
         desc->action  = NULL;
     }
@@ -105,7 +109,11 @@ static int init_local_irq_data(void)
     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
     {
         struct irq_desc *desc = irq_to_desc(irq);
-        init_one_irq_desc(desc);
+        int rc = init_one_irq_desc(desc);
+
+        if ( rc )
+            return rc;
+
         desc->irq = irq;
         desc->action  = NULL;
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 45966947919e..3ebd684415ac 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -428,9 +428,14 @@ int __init init_irq_data(void)
 
     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
     {
+        int rc;
+
         desc = irq_to_desc(irq);
         desc->irq = irq;
-        init_one_irq_desc(desc);
+
+        rc = init_one_irq_desc(desc);
+        if ( rc )
+            return rc;
     }
     for ( ; irq < nr_irqs; irq++ )
         irq_to_desc(irq)->irq = irq;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:36:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:36:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39961.73012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyX2-0001lV-8Z; Sat, 28 Nov 2020 11:36:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39961.73012; Sat, 28 Nov 2020 11:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyX2-0001lO-5c; Sat, 28 Nov 2020 11:36:48 +0000
Received: by outflank-mailman (input) for mailman id 39961;
 Sat, 28 Nov 2020 11:36:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyX0-0001lJ-MH
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:36:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyX0-0005C8-5M; Sat, 28 Nov 2020 11:36:46 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyWz-0005KL-Oo; Sat, 28 Nov 2020 11:36:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=x08oKDPsYhZfnq1rzPGbIzqD44Wkh09Na1i2ws0EUmg=; b=5WPhm6Iso5l4CpXiddbRQbhUo+
	mB1bP5gIWc3UXq01x505VjQmh5pOu9/BZBW9iWUg5rbIvwAp+itBCVyJfLa0pxcVL4Rb66wdwGZT4
	w5Wy6fP7lanWU56H55zNQ43u5nTHsbtTkwzJ5b3eKqZ6mgdzLA1VpndGhbgyudoVTBdY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq() in init_*_irq_data()
Date: Sat, 28 Nov 2020 11:36:42 +0000
Message-Id: <20201128113642.8265-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

init_one_desc_irq() can return an error if it is unable to allocate
memory. While this is unlikely to happen during boot (called from
init_{,local_}irq_data()), it is better to harden the code by
propagating the return value.

Spotted by coverity.

CID: 106529

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

---
    Changes in v2:
        - Add Roger's reviewed-by for x86
        - Handle the error in init_local_irq_data() too
---
 xen/arch/arm/irq.c | 12 ++++++++++--
 xen/arch/x86/irq.c |  7 ++++++-
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3877657a5277..b71b099e6fa2 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -88,7 +88,11 @@ static int __init init_irq_data(void)
     for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
     {
         struct irq_desc *desc = irq_to_desc(irq);
-        init_one_irq_desc(desc);
+        int rc = init_one_irq_desc(desc);
+
+        if ( rc )
+            return rc;
+
         desc->irq = irq;
         desc->action  = NULL;
     }
@@ -105,7 +109,11 @@ static int init_local_irq_data(void)
     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
     {
         struct irq_desc *desc = irq_to_desc(irq);
-        init_one_irq_desc(desc);
+        int rc = init_one_irq_desc(desc);
+
+        if ( rc )
+            return rc;
+
         desc->irq = irq;
         desc->action  = NULL;
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 45966947919e..3ebd684415ac 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -428,9 +428,14 @@ int __init init_irq_data(void)
 
     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
     {
+        int rc;
+
         desc = irq_to_desc(irq);
         desc->irq = irq;
-        init_one_irq_desc(desc);
+
+        rc = init_one_irq_desc(desc);
+        if ( rc )
+            return rc;
     }
     for ( ; irq < nr_irqs; irq++ )
         irq_to_desc(irq)->irq = irq;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:37:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39966.73025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyXX-0001r0-If; Sat, 28 Nov 2020 11:37:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39966.73025; Sat, 28 Nov 2020 11:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiyXX-0001qt-Ex; Sat, 28 Nov 2020 11:37:19 +0000
Received: by outflank-mailman (input) for mailman id 39966;
 Sat, 28 Nov 2020 11:37:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiyXV-0001ql-R7
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:37:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyXV-0005CU-0b; Sat, 28 Nov 2020 11:37:17 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiyXU-0005Me-QN; Sat, 28 Nov 2020 11:37:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=j4Ev1HJphgPb5eW0/uIqr2mAojAbh0QOI9ysdTCWlqs=; b=Qks2uU5fMosYcu/KNPNmIfBl9/
	gDd6yupNdMN2WbhWxwvp8Xnt7amKSMtno3Yml54Z1GqyP5ktAQ8oATc1Z/qhFDwF8YgYOAdeD479P
	uVVbOC15cSzB4kfHnTuUdVjVvey00QsKAoQ8pF9xBYLMGWsXIgXPpVl9rnnvXmgUpLMI=;
Subject: Re: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq()
 in init_*_irq_data()
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20201128113102.6446-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <1682e9e6-a1b2-f0e9-2cba-f7154c5e02b7@xen.org>
Date: Sat, 28 Nov 2020 11:37:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201128113102.6446-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

Please ignore this version as I forgot to call add_maintainers.pl.

Sorry for the noise.

Cheers,

On 28/11/2020 11:31, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> init_one_desc_irq() can return an error if it is unable to allocate
> memory. While this is unlikely to happen during boot (called from
> init_{,local_}irq_data()), it is better to harden the code by
> propagating the return value.
> 
> Spotted by coverity.
> 
> CID: 106529
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> ---
>      Changes in v2:
>          - Add Roger's reviewed-by for x86
>          - Handle the error in init_local_irq_data() too
> ---
>   xen/arch/arm/irq.c | 12 ++++++++++--
>   xen/arch/x86/irq.c |  7 ++++++-
>   2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3877657a5277..b71b099e6fa2 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -88,7 +88,11 @@ static int __init init_irq_data(void)
>       for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>       {
>           struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc = init_one_irq_desc(desc);
> +
> +        if ( rc )
> +            return rc;
> +
>           desc->irq = irq;
>           desc->action  = NULL;
>       }
> @@ -105,7 +109,11 @@ static int init_local_irq_data(void)
>       for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
>       {
>           struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc = init_one_irq_desc(desc);
> +
> +        if ( rc )
> +            return rc;
> +
>           desc->irq = irq;
>           desc->action  = NULL;
>   
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 45966947919e..3ebd684415ac 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -428,9 +428,14 @@ int __init init_irq_data(void)
>   
>       for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>       {
> +        int rc;
> +
>           desc = irq_to_desc(irq);
>           desc->irq = irq;
> -        init_one_irq_desc(desc);
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
>       }
>       for ( ; irq < nr_irqs; irq++ )
>           irq_to_desc(irq)->irq = irq;
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 11:53:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 11:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.39974.73037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiynZ-0003mz-0Z; Sat, 28 Nov 2020 11:53:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 39974.73037; Sat, 28 Nov 2020 11:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kiynY-0003ms-TD; Sat, 28 Nov 2020 11:53:52 +0000
Received: by outflank-mailman (input) for mailman id 39974;
 Sat, 28 Nov 2020 11:53:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kiynX-0003mm-6C
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 11:53:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiynV-0005WV-Pr; Sat, 28 Nov 2020 11:53:49 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kiynV-0006Gr-H0; Sat, 28 Nov 2020 11:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=0MRFU9GEJlCAUQHDbLbvMKBb6H68pNi65RWtgrYNB6E=; b=omxf2WG2ZapdC5zNJzSSrB0Jgj
	9U5ePwNW8sl3XdMrjV4bjS2yPS+YFMcXQXN7XDDs/XEf03i/PmeU4LgntnaK/qccIE1hYyRv0wfFE
	OpKxGU8JaEa5XX+l0Q3r2nBv+4uuo5TfXH6CuzBh5s9XO7HQIbHpiEdXyO19RRsWVxP4=;
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201119190751.22345-1-julien@xen.org>
 <20201119190751.22345-5-julien@xen.org>
 <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s>
 <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org>
 <alpine.DEB.2.21.2011231409050.7979@sstabellini-ThinkPad-T480s>
 <eff4cb40-ac90-940c-aa97-16a5021386d3@xen.org>
 <alpine.DEB.2.21.2011231612330.7979@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <d02e29cb-a4f1-4ebe-a04f-67b4a159a193@xen.org>
Date: Sat, 28 Nov 2020 11:53:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2011231612330.7979@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 24/11/2020 00:25, Stefano Stabellini wrote:
> On Mon, 23 Nov 2020, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 23/11/2020 22:27, Stefano Stabellini wrote:
>>> On Fri, 20 Nov 2020, Julien Grall wrote:
>>>>>>         /*
>>>>>>          * For arm32, page-tables are different on each CPUs. Yet, they
>>>>>> share
>>>>>> @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long virt,
>>>>>>           spin_lock(&xen_pt_lock);
>>>>>>     -    for ( ; addr < addr_end; addr += PAGE_SIZE )
>>>>>> +    while ( left )
>>>>>>         {
>>>>>> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
>>>>>> +        unsigned int order;
>>>>>> +        unsigned long mask;
>>>>>> +
>>>>>> +        /*
>>>>>> +         * Don't take into account the MFN when removing mapping (i.e
>>>>>> +         * MFN_INVALID) to calculate the correct target order.
>>>>>> +         *
>>>>>> +         * XXX: Support superpage mappings if nr is not aligned to a
>>>>>> +         * superpage size.
>>>>>
>>>>> It would be good to add another sentence to explain that the checks
>>>>> below are simply based on masks and rely on the mfn, vfn, and also
>>>>> nr_mfn to be superpage aligned. (It took me some time to figure it out.)
>>>>
>>>> I am not sure to understand what you wrote here. Could you suggest a
>>>> sentence?
>>>
>>> Something like the following:
>>>
>>> /*
>>>    * Don't take into account the MFN when removing mapping (i.e
>>>    * MFN_INVALID) to calculate the correct target order.
>>>    *
>>>    * This loop relies on mfn, vfn, and nr_mfn, to be all superpage
>>>    * aligned, and it uses `mask' to check for that.
>>
>> Unfortunately, I am still not sure to understand this comment.
>> The loop can deal with any (super)page size (4KB, 2MB, 1GB). There are no
>> assumption on any alignment for mfn, vfn and nr_mfn.
>>
>> By OR-ing the 3 components together, we can use it to find out the maximum
>> size that can be used for the mapping.
>>
>> So can you clarify what you mean?
> 
> In pseudo-code:
> 
>    mask = mfn | vfn | nr_mfns;
>    if (mask & ((1<<FIRST_ORDER) - 1))
>    if (mask & ((1<<SECOND_ORDER) - 1))
>    if (mask & ((1<<THIRD_ORDER) - 1))
>    ...
> 
> As you wrote the mask is used to find the max size that can be used for
> the mapping.
> 
> But let's take nr_mfns out of the equation for a moment for clarity:
> 
>    mask = mfn | vfn;
>    if (mask & ((1<<FIRST_ORDER) - 1))
>    if (mask & ((1<<SECOND_ORDER) - 1))
>    if (mask & ((1<<THIRD_ORDER) - 1))
>    ...
> 
> How would you describe this check? I'd call this an alignment check,
> is it not?
If you take the ``if``s alone, then yes, they are alignment checks. But if 
you take the code as a whole, it simply computes which mapping size can 
be used.

However, what I am disputing here is "rely", because no assumption is 
made about the alignment in the loop (we are able to cater for any 
size). In fact, the requirement that mfn and vfn be aligned to the 
mapping size comes from the hardware, not from the implementation.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 14:43:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 14:43:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40017.73079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj1RA-0002nW-Ij; Sat, 28 Nov 2020 14:42:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40017.73079; Sat, 28 Nov 2020 14:42:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj1RA-0002nI-BT; Sat, 28 Nov 2020 14:42:56 +0000
Received: by outflank-mailman (input) for mailman id 40017;
 Sat, 28 Nov 2020 14:33:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IEe=FC=qq.com=2284696125@srs-us1.protection.inumbo.net>)
 id 1kj1Hm-0001tv-7v
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 14:33:14 +0000
Received: from smtpbg.qq.com (unknown [203.205.250.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7898ce37-7440-4b50-a020-856014c6d465;
 Sat, 28 Nov 2020 14:32:43 +0000 (UTC)
Received: from qq.com (unknown [127.0.0.1]) by smtp.qq.com (ESMTP) with SMTP
 id ; Sat, 28 Nov 2020 22:32:40 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7898ce37-7440-4b50-a020-856014c6d465
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s201512;
	t=1606573960; bh=9xcn4M0R0XCRv1C2cX4i1VQxdz2N6enx/7LssllUF4k=;
	h=From:To:Subject:Mime-Version:Date:Message-ID;
	b=RBmvVyIjmaN1SSZ3q20AewNIwHPCGEP5WYwUi1fSO8k8u/v78vtaQjTQBEDWMHSyz
	 d8Y/8iBV7RFqaJUGOGJWkiWotBUX0NCErgx8Q4Rgq+axhz5HncuK1G29missiWhRaQ
	 KQMFzGIhifkjbanlHsX6B/zlMBNXAboMGiVqe78o=
X-QQ-FEAT: YuwbzFVdD1jhZGnyd/OIcNNnihs0ppT20efISsTMmjjcyE2TWhFTB5c3mjpk8
	VVhygR7QirI3luuflYsAH500/Uj2iGZW+jw4zjSVsr0kfq8guFaIUfCXDIZwkOWUF6bxSGY
	JBlfguWcOAw45clBogNqYdY8Jq8oMYxt42btoEF6fEk/LPooN5PD9EXRqaASxKmWNjGVNSk
	UsCfde16ygNHzVv/jPd/OC+0jx2xlgu1z1ic1BtmqkT95UIVcZeVXcThVzU2LiRyNgfHXIn
	trdw==
X-QQ-SSF: 0000000000000010000000000000008
X-QQ-XMAILINFO: OAYk33bJfOB1Tbu4xvO+1ePYoadnMSC9OvVXg1nUTNfJ+g0gFueyx6z6KDPYjx
	 UuaSEFKLI7O1H9suWNMq0cxI6idAQnP+/a45XWjTHq/QnTFF7l+/BiZMDMnVZh+uQKatjBjiBlgBJ
	 CnjnpQxPi2Q0EVmmPf0dN1xAwMgX+RUYVP7ZL0PJPB+BhbphJxqdF8OJqyWJQTA1HeVzlufq3a4GL
	 SIAq/8tA2v+FYtBuCV6S243Uj2zu2JMeJdR7vJyhiUGL7kbJhaGJcI1udT/MBCy/tTw8C0PpJZPfe
	 CWgc2UQn0Fx1BTJ8etdE8PcIBSs7Z1dU6ad250DKUpX/Cn5FYlFolg3JeUoaPqU0IyvueAkLznwKb
	 ukWXURHMLLz2J5lVsjJKSg+VWVj2+6a2tNs1yxFu+zMBBQGh/DAL0cS/XGE+8HBpRm0prgW8/24VA
	 ljUhWj9hMdDuNBTLhlVTwTy2N8GE2Zj6jSA0ndA+ekp3Ug+AWrBfx7e2FSMPn96VNe7L9dEgr3bhR
	 g+oLpqdfzQ3RIcy2OXt5EntT04xX4WtN1vo9C2JK0EapAwQQd3XABqcEJtniIT1ox/Zf7l97u5oV3
	 4H+hzgIjGL6Q5oXykQWguEoYJfDpQWFNgHREDPCyU0gEuZrveOpvYQTKrpMvPEnwD3LHzKpwp4ulC
	 kwW/Mn/BtfM5clotgjNOIpiJYWN+VPstTfwC8x
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 166.111.122.112
X-QQ-STYLE: 
X-QQ-mid: webmail801t1606573958t7901344
From: "=?ISO-8859-1?B?UnJvYWNo?=" <2284696125@qq.com>
To: "=?ISO-8859-1?B?eGVuLWRldmVs?=" <xen-devel@lists.xenproject.org>
Subject: help
Mime-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_5FC25F86_0F661C10_7D003348"
Content-Transfer-Encoding: 8Bit
Date: Sat, 28 Nov 2020 22:32:38 +0800
X-Priority: 3
Message-ID: <tencent_11A93B706FAC57659EC62A5E5EBF22DEB705@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-QQ-SENDSIZE: 520
Feedback-ID: webmail:qq.com:bgweb:bgweb4

This is a multi-part message in MIME format.

------=_NextPart_5FC25F86_0F661C10_7D003348
Content-Type: text/plain;
	charset="ISO-8859-1"
Content-Transfer-Encoding: base64

SGksIEknbSBhdHRlbXB0aW5nIHRvIHVzZSBhZGRyZXNzIHNhbml0aXplciBpbiBsb2NhdGlu
ZyBidWdzIGluIFhlbiA0LTEzLCB3aGlsZSB1c2UgYWRkcmVzcyBzYW5pdGl6ZXIgaW4gdG9v
bHMgbW9kdWxlcywgd2hpbGUgSSByYW4gc29tZSBiYXNpYyBpbnN0cnVjdGlvbnMgbGlrZSB4
bCwgWGVuIHJlcG9ydCBzdWNoIGJ1ZzoNCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09DQo9PTI4NjM9PUVSUk9SOiBM
ZWFrU2FuaXRpemVyOiBkZXRlY3RlZCBtZW1vcnkgbGVha3MNCg0KRGlyZWN0IGxlYWsgb2Yg
Mjk2IGJ5dGUocykgaW4gMTEgb2JqZWN0KHMpIGFsbG9jYXRlZCBmcm9tOg0KJm5ic3A7ICZu
YnNwOyAjMCAweDdmNWI5MWFlZmQyOCBpbiBtYWxsb2MgKC91c3IvbGliL3g4Nl82NC1saW51
eC1nbnUvbGliYXNhbi5zby4zKzB4YzFkMjgpDQombmJzcDsgJm5ic3A7ICMxIDB4NDY3OTk3
Jm5ic3A7ICgvdXNyL2Jpbi94ODZfNjQtbGludXgtZ251LWdjYy02KzB4NDY3OTk3KQ0KDQpJ
bmRpcmVjdCBsZWFrIG9mIDEwIGJ5dGUocykgaW4gMSBvYmplY3QocykgYWxsb2NhdGVkIGZy
b206DQombmJzcDsgJm5ic3A7ICMwIDB4N2Y1YjkxYWVmZDI4IGluIG1hbGxvYyAoL3Vzci9s
aWIveDg2XzY0LWxpbnV4LWdudS9saWJhc2FuLnNvLjMrMHhjMWQyOCkNCiZuYnNwOyAmbmJz
cDsgIzEgMHg0Njc5OTcmbmJzcDsgKC91c3IvYmluL3g4Nl82NC1saW51eC1nbnUtZ2NjLTYr
MHg0Njc5OTcpDQoNClNVTU1BUlk6IEFkZHJlc3NTYW5pdGl6ZXI6IDMwNiBieXRlKHMpIGxl
YWtlZCBpbiAxMiBhbGxvY2F0aW9uKHMpLg0KL3Jvb3QvZmF1bHR4ZW4vdG9vbHMvbGlicy90
b29sY29yZS8uLi8uLi8uLi90b29scy9SdWxlcy5tazoyMjQ6IHJlY2lwZSBmb3IgdGFyZ2V0
ICdoZWFkZXJzLmNoaycgZmFpbGVkDQptYWtlWzVdOiAqKiogW2hlYWRlcnMuY2hrXSBFcnJv
ciAxDQoNCj09NzUyMD09RVJST1I6IExlYWtTYW5pdGl6ZXI6IGRldGVjdGVkIG1lbW9yeSBs
ZWFrcw0KDQpEaXJlY3QgbGVhayBvZiAxMCBieXRlKHMpIGluIDEgb2JqZWN0KHMpIGFsbG9j
YXRlZCBmcm9tOg0KJm5ic3A7ICZuYnNwOyAjMCAweDdmZDEwMjhjOGQyOCBpbiBtYWxsb2Mg
KC91c3IvbGliL3g4Nl82NC1saW51eC1nbnUvbGliYXNhbi5zby4zKzB4YzFkMjgpDQombmJz
cDsgJm5ic3A7ICMxIDB4N2ZkMTAyMmU0M2I5IGluIF9fc3RyZHVwICgvbGliL3g4Nl82NC1s
aW51eC1nbnUvbGliYy5zby42KzB4ODAzYjkpDQoNClNVTU1BUlk6IEFkZHJlc3NTYW5pdGl6
ZXI6IDEwIGJ5dGUocykgbGVha2VkIGluIDEgYWxsb2NhdGlvbihzKS4NCj09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
DQpJdCBzZWVtcyB0aGlzIGJ1ZyBpcyB2ZXJ5IGxvdy1sZXZlbCwgYW5kIGFmZmVjdHMgbWFu
eSBiYXNpYyBvcGVyYXRpb25zLCBkbyB5b3UgaGF2ZSBhbnkgaWRlYSB3aGF0IGNhdXNlIHN1
Y2ggYnVncz8=

------=_NextPart_5FC25F86_0F661C10_7D003348
Content-Type: text/html;
	charset="ISO-8859-1"
Content-Transfer-Encoding: base64

PG1ldGEgaHR0cC1lcXVpdj0iQ29udGVudC1UeXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7IGNo
YXJzZXQ9R0IxODAzMCI+PGRpdj5IaSwgSSdtIGF0dGVtcHRpbmcgdG8gdXNlIGFkZHJlc3Mg
c2FuaXRpemVyIGluIGxvY2F0aW5nIGJ1Z3MgaW4gWGVuIDQtMTMsIHdoaWxlIHVzZSBhZGRy
ZXNzIHNhbml0aXplciBpbiB0b29scyBtb2R1bGVzLCB3aGlsZSBJIHJhbiBzb21lIGJhc2lj
IGluc3RydWN0aW9ucyBsaWtlIHhsLCBYZW4gcmVwb3J0IHN1Y2ggYnVnOjwvZGl2PjxkaXY+
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT08YnI+PT0yODYzPT1FUlJPUjogTGVha1Nhbml0aXplcjogZGV0ZWN0ZWQg
bWVtb3J5IGxlYWtzPGJyPjxicj5EaXJlY3QgbGVhayBvZiAyOTYgYnl0ZShzKSBpbiAxMSBv
YmplY3QocykgYWxsb2NhdGVkIGZyb206PGJyPiZuYnNwOyAmbmJzcDsgIzAgMHg3ZjViOTFh
ZWZkMjggaW4gbWFsbG9jICgvdXNyL2xpYi94ODZfNjQtbGludXgtZ251L2xpYmFzYW4uc28u
MysweGMxZDI4KTxicj4mbmJzcDsgJm5ic3A7ICMxIDB4NDY3OTk3Jm5ic3A7ICgvdXNyL2Jp
bi94ODZfNjQtbGludXgtZ251LWdjYy02KzB4NDY3OTk3KTxicj48YnI+SW5kaXJlY3QgbGVh
ayBvZiAxMCBieXRlKHMpIGluIDEgb2JqZWN0KHMpIGFsbG9jYXRlZCBmcm9tOjxicj4mbmJz
cDsgJm5ic3A7ICMwIDB4N2Y1YjkxYWVmZDI4IGluIG1hbGxvYyAoL3Vzci9saWIveDg2XzY0
LWxpbnV4LWdudS9saWJhc2FuLnNvLjMrMHhjMWQyOCk8YnI+Jm5ic3A7ICZuYnNwOyAjMSAw
eDQ2Nzk5NyZuYnNwOyAoL3Vzci9iaW4veDg2XzY0LWxpbnV4LWdudS1nY2MtNisweDQ2Nzk5
Nyk8YnI+PGJyPlNVTU1BUlk6IEFkZHJlc3NTYW5pdGl6ZXI6IDMwNiBieXRlKHMpIGxlYWtl
ZCBpbiAxMiBhbGxvY2F0aW9uKHMpLjxicj4vcm9vdC9mYXVsdHhlbi90b29scy9saWJzL3Rv
b2xjb3JlLy4uLy4uLy4uL3Rvb2xzL1J1bGVzLm1rOjIyNDogcmVjaXBlIGZvciB0YXJnZXQg
J2hlYWRlcnMuY2hrJyBmYWlsZWQ8YnI+bWFrZVs1XTogKioqIFtoZWFkZXJzLmNoa10gRXJy
b3IgMTxicj48YnI+PT03NTIwPT1FUlJPUjogTGVha1Nhbml0aXplcjogZGV0ZWN0ZWQgbWVt
b3J5IGxlYWtzPGJyPjxicj5EaXJlY3QgbGVhayBvZiAxMCBieXRlKHMpIGluIDEgb2JqZWN0
KHMpIGFsbG9jYXRlZCBmcm9tOjxicj4mbmJzcDsgJm5ic3A7ICMwIDB4N2ZkMTAyOGM4ZDI4
IGluIG1hbGxvYyAoL3Vzci9saWIveDg2XzY0LWxpbnV4LWdudS9saWJhc2FuLnNvLjMrMHhj
MWQyOCk8YnI+Jm5ic3A7ICZuYnNwOyAjMSAweDdmZDEwMjJlNDNiOSBpbiBfX3N0cmR1cCAo
L2xpYi94ODZfNjQtbGludXgtZ251L2xpYmMuc28uNisweDgwM2I5KTxicj48YnI+U1VNTUFS
WTogQWRkcmVzc1Nhbml0aXplcjogMTAgYnl0ZShzKSBsZWFrZWQgaW4gMSBhbGxvY2F0aW9u
KHMpLjxicj49PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PTwvZGl2PjxkaXY+SXQgc2VlbXMgdGhpcyBidWcgaXMgdmVy
eSBsb3ctbGV2ZWwsIGFuZCBhZmZlY3RzIG1hbnkgYmFzaWMgb3BlcmF0aW9ucywgZG8geW91
IGhhdmUgYW55IGlkZWEgd2hhdCBjYXVzZSBzdWNoIGJ1Z3M/IDxicj48L2Rpdj4=

------=_NextPart_5FC25F86_0F661C10_7D003348--



From xen-devel-bounces@lists.xenproject.org Sat Nov 28 14:43:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 14:43:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40015.73071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj1RA-0002n4-7X; Sat, 28 Nov 2020 14:42:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40015.73071; Sat, 28 Nov 2020 14:42:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj1RA-0002mx-3j; Sat, 28 Nov 2020 14:42:56 +0000
Received: by outflank-mailman (input) for mailman id 40015;
 Sat, 28 Nov 2020 14:31:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IEe=FC=qq.com=2284696125@srs-us1.protection.inumbo.net>)
 id 1kj1Fa-0001qO-Ks
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 14:31:00 +0000
Received: from smtpbgbr2.qq.com (unknown [54.207.22.56])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98182dfd-d9e6-4ca0-bb5d-93b90eb7c944;
 Sat, 28 Nov 2020 14:30:49 +0000 (UTC)
Received: from qq.com (unknown [127.0.0.1]) by smtp.qq.com (ESMTP) with SMTP
 id ; Sat, 28 Nov 2020 22:30:41 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98182dfd-d9e6-4ca0-bb5d-93b90eb7c944
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s201512;
	t=1606573841; bh=qV7hSTO5UjvdF5UXVXyKerelOoLU8KuiH1AUUWWvTt4=;
	h=From:To:Subject:Mime-Version:Date:Message-ID;
	b=mgN4r4bGJ3SCLMC0SPuyf0f20uyoRuzVlEMSEBJvEOeztOaCaO0ZlV6oFp01R72Lr
	 x1biOf8s3xkKQVIk0g+1Teb7+Sp5U+RZCLJ1EdCtQetebaeK41JhxeG+Sijdj3YteB
	 HTkH0j+0y+O9I8bsTtyEA0FxNUSyHo5bCQx3Stx0=
X-QQ-FEAT: DmHow04r9p0yw5mArx8i6v2IR0qP/57JF7+H5MeIdoDwb87PMS1Oe0fXZRRaY
	0eCr8sy6FZpR9toTIyMAW41maYaem4DuG6oz4g2wraNQLIMYoOaLBs/PKCI9J4eFOzLyeCW
	GiSUSUkr7fSay8b4Y4Bi+0GfZ25h5+JS6MbPii/llxfvSB9RIv9dfJd3yF273mOlePSJShB
	VXeQ0qruQ+3LZrg/t168ZnXRVDPh6bm4c71eP0tv1j/W83PDNq8hFDULUgFpbZwcX1hEMrN
	C5vg==
X-QQ-SSF: 0000000000000010000000000000008
X-QQ-XMAILINFO: NVuz4GJ0yei2+paVX28p33mneU6FNTZk/QcvJkWhww4eSW6Vzbg/y6V5/IQ5vQ
	 B9e4R+DIWMgpmRFC4YZCx2Q/8NDSqv6ede+9do1S35HF30x+B29WBc33LKt3fkqb0KDb3VbFXlt6W
	 CBXET/2Dj92AKLWk8Ljkeh/iHfTKc79rQ8CPtd2cH/GP1/MiYPIA0bxIV8xWegDyBHiZM+gLOX4At
	 /yLAmikExh1GESHDrybB868MTYbcuBBbgxA13ECKCcBlchEOto75QnQ0OExDkhbjt/nSoVoVwHt+e
	 AH+FKGXaNetuMune/jyhYyhkbqSV/1vYdzoYd4UarsUsz7H68+/7nN+w7p4AOt2LPw2znC6dqc0EV
	 /trXItgkpKJL0mmX5lnv4dtDA+zm94rHI/bEiuKeKF13qJclOOdQvQSR+/qxK0DGMZQvgfdOiYevM
	 wq+XGl+zBty5KSITvTcmTfgStuhTFWx+48fFGsxRPJLFLmPOkW1qaklJqI3U1ykcW12XevpEuX7mD
	 BshnDDL2MGr6jFbAzQKCwxIdoRnoFOEX08O9aXpdO5Ke/GjPMCPiR+al7UF1fakw31y37zhsXGuBQ
	 jUYh7MZR9BYqWoVGf0fS31aBCf97fdT/ecfCXPtFpkmQYbvuw7YnOc+KVLt5S7KvmMDQdhidpmMXy
	 t8KiriaA+Uix7ynNIeEDOPPs+TKCEXbIad00951FQgApXIbhqGSGXhL0mRjR/0wmYOgxEM
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 166.111.122.112
X-QQ-STYLE: 
X-QQ-mid: webmail801t1606573839t216799
From: "=?ISO-8859-1?B?UnJvYWNo?=" <2284696125@qq.com>
To: "=?ISO-8859-1?B?eGVuLWRldmVs?=" <xen-devel@lists.xenproject.org>
Subject: Locate a memory leak in tools modules
Mime-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_5FC25F0F_1018E510_401423E0"
Content-Transfer-Encoding: 8Bit
Date: Sat, 28 Nov 2020 22:30:39 +0800
X-Priority: 3
Message-ID: <tencent_38E67FAD4AE4912BC07DF5BB59F6267A1F09@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-QQ-SENDSIZE: 520
Feedback-ID: webmail:qq.com:bgforeign:bgforeign12
X-QQ-Bgrelay: 1

This is a multi-part message in MIME format.

------=_NextPart_5FC25F0F_1018E510_401423E0
Content-Type: text/plain;
	charset="ISO-8859-1"
Content-Transfer-Encoding: base64

SGksIEknbSBhdHRlbXB0aW5nIHRvIHVzZSBhZGRyZXNzIHNhbml0aXplciBpbiBsb2NhdGlu
ZyBidWdzIGluIFhlbiA0LTEzLCB3aGlsZSB1c2UgYWRkcmVzcyBzYW5pdGl6ZXIgaW4gdG9v
bHMgbW9kdWxlcywgd2hpbGUgSSByYW4gc29tZSBiYXNpYyBpbnN0cnVjdGlvbnMgbGlrZSB4
bCwgWGVuIHJlcG9ydCBzdWNoIGJ1ZzoNCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09DQo9PTI4NjM9PUVSUk9SOiBM
ZWFrU2FuaXRpemVyOiBkZXRlY3RlZCBtZW1vcnkgbGVha3MNCg0KRGlyZWN0IGxlYWsgb2Yg
Mjk2IGJ5dGUocykgaW4gMTEgb2JqZWN0KHMpIGFsbG9jYXRlZCBmcm9tOg0KJm5ic3A7Jm5i
c3A7Jm5ic3A7ICMwIDB4N2Y1YjkxYWVmZDI4IGluIG1hbGxvYyAoL3Vzci9saWIveDg2XzY0
LWxpbnV4LWdudS9saWJhc2FuLnNvLjMrMHhjMWQyOCkNCiZuYnNwOyZuYnNwOyZuYnNwOyAj
MSAweDQ2Nzk5NyZuYnNwOyAoL3Vzci9iaW4veDg2XzY0LWxpbnV4LWdudS1nY2MtNisweDQ2
Nzk5NykNCg0KSW5kaXJlY3QgbGVhayBvZiAxMCBieXRlKHMpIGluIDEgb2JqZWN0KHMpIGFs
bG9jYXRlZCBmcm9tOg0KJm5ic3A7Jm5ic3A7Jm5ic3A7ICMwIDB4N2Y1YjkxYWVmZDI4IGlu
IG1hbGxvYyAoL3Vzci9saWIveDg2XzY0LWxpbnV4LWdudS9saWJhc2FuLnNvLjMrMHhjMWQy
OCkNCiZuYnNwOyZuYnNwOyZuYnNwOyAjMSAweDQ2Nzk5NyZuYnNwOyAoL3Vzci9iaW4veDg2
XzY0LWxpbnV4LWdudS1nY2MtNisweDQ2Nzk5NykNCg0KU1VNTUFSWTogQWRkcmVzc1Nhbml0
aXplcjogMzA2IGJ5dGUocykgbGVha2VkIGluIDEyIGFsbG9jYXRpb24ocykuDQovcm9vdC9m
YXVsdHhlbi90b29scy9saWJzL3Rvb2xjb3JlLy4uLy4uLy4uL3Rvb2xzL1J1bGVzLm1rOjIy
NDogcmVjaXBlIGZvciB0YXJnZXQgJ2hlYWRlcnMuY2hrJyBmYWlsZWQNCm1ha2VbNV06ICoq
KiBbaGVhZGVycy5jaGtdIEVycm9yIDENCg0KPT03NTIwPT1FUlJPUjogTGVha1Nhbml0aXpl
cjogZGV0ZWN0ZWQgbWVtb3J5IGxlYWtzDQoNCkRpcmVjdCBsZWFrIG9mIDEwIGJ5dGUocykg
aW4gMSBvYmplY3QocykgYWxsb2NhdGVkIGZyb206DQombmJzcDsmbmJzcDsmbmJzcDsgIzAg
MHg3ZmQxMDI4YzhkMjggaW4gbWFsbG9jICgvdXNyL2xpYi94ODZfNjQtbGludXgtZ251L2xp
YmFzYW4uc28uMysweGMxZDI4KQ0KJm5ic3A7Jm5ic3A7Jm5ic3A7ICMxIDB4N2ZkMTAyMmU0
M2I5IGluIF9fc3RyZHVwICgvbGliL3g4Nl82NC1saW51eC1nbnUvbGliYy5zby42KzB4ODAz
YjkpDQoNClNVTU1BUlk6IEFkZHJlc3NTYW5pdGl6ZXI6IDEwIGJ5dGUocykgbGVha2VkIGlu
IDEgYWxsb2NhdGlvbihzKS4NCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09DQpJdCBzZWVtcyB0aGlzIGJ1ZyBpcyB2
ZXJ5IGxvdy1sZXZlbCwgYW5kIGFmZmVjdHMgbWFueSBiYXNpYyBvcGVyYXRpb25zLCBkbyB5
b3UgaGF2ZSBhbnkgaWRlYSB3aGF0IGNhdXNlIHN1Y2ggYnVncz8=

------=_NextPart_5FC25F0F_1018E510_401423E0
Content-Type: text/html;
	charset="ISO-8859-1"
Content-Transfer-Encoding: base64

PG1ldGEgaHR0cC1lcXVpdj0iQ29udGVudC1UeXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7IGNo
YXJzZXQ9R0IxODAzMCI+PGRpdj5IaSwgSSdtIGF0dGVtcHRpbmcgdG8gdXNlIGFkZHJlc3Mg
c2FuaXRpemVyIGluIGxvY2F0aW5nIGJ1Z3MgaW4gWGVuIDQtMTMsIHdoaWxlIHVzZSBhZGRy
ZXNzIHNhbml0aXplciBpbiB0b29scyBtb2R1bGVzLCB3aGlsZSBJIHJhbiBzb21lIGJhc2lj
IGluc3RydWN0aW9ucyBsaWtlIHhsLCBYZW4gcmVwb3J0IHN1Y2ggYnVnOjwvZGl2PjxkaXY+
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT08YnI+PT0yODYzPT1FUlJPUjogTGVha1Nhbml0aXplcjogZGV0ZWN0ZWQg
bWVtb3J5IGxlYWtzPGJyPjxicj5EaXJlY3QgbGVhayBvZiAyOTYgYnl0ZShzKSBpbiAxMSBv
YmplY3QocykgYWxsb2NhdGVkIGZyb206PGJyPiZuYnNwOyZuYnNwOyZuYnNwOyAjMCAweDdm
NWI5MWFlZmQyOCBpbiBtYWxsb2MgKC91c3IvbGliL3g4Nl82NC1saW51eC1nbnUvbGliYXNh
bi5zby4zKzB4YzFkMjgpPGJyPiZuYnNwOyZuYnNwOyZuYnNwOyAjMSAweDQ2Nzk5NyZuYnNw
OyAoL3Vzci9iaW4veDg2XzY0LWxpbnV4LWdudS1nY2MtNisweDQ2Nzk5Nyk8YnI+PGJyPklu
ZGlyZWN0IGxlYWsgb2YgMTAgYnl0ZShzKSBpbiAxIG9iamVjdChzKSBhbGxvY2F0ZWQgZnJv
bTo8YnI+Jm5ic3A7Jm5ic3A7Jm5ic3A7ICMwIDB4N2Y1YjkxYWVmZDI4IGluIG1hbGxvYyAo
L3Vzci9saWIveDg2XzY0LWxpbnV4LWdudS9saWJhc2FuLnNvLjMrMHhjMWQyOCk8YnI+Jm5i
c3A7Jm5ic3A7Jm5ic3A7ICMxIDB4NDY3OTk3Jm5ic3A7ICgvdXNyL2Jpbi94ODZfNjQtbGlu
dXgtZ251LWdjYy02KzB4NDY3OTk3KTxicj48YnI+U1VNTUFSWTogQWRkcmVzc1Nhbml0aXpl
cjogMzA2IGJ5dGUocykgbGVha2VkIGluIDEyIGFsbG9jYXRpb24ocykuPGJyPi9yb290L2Zh
dWx0eGVuL3Rvb2xzL2xpYnMvdG9vbGNvcmUvLi4vLi4vLi4vdG9vbHMvUnVsZXMubWs6MjI0
OiByZWNpcGUgZm9yIHRhcmdldCAnaGVhZGVycy5jaGsnIGZhaWxlZDxicj5tYWtlWzVdOiAq
KiogW2hlYWRlcnMuY2hrXSBFcnJvciAxPGJyPjxicj49PTc1MjA9PUVSUk9SOiBMZWFrU2Fu
aXRpemVyOiBkZXRlY3RlZCBtZW1vcnkgbGVha3M8YnI+PGJyPkRpcmVjdCBsZWFrIG9mIDEw
IGJ5dGUocykgaW4gMSBvYmplY3QocykgYWxsb2NhdGVkIGZyb206PGJyPiZuYnNwOyZuYnNw
OyZuYnNwOyAjMCAweDdmZDEwMjhjOGQyOCBpbiBtYWxsb2MgKC91c3IvbGliL3g4Nl82NC1s
aW51eC1nbnUvbGliYXNhbi5zby4zKzB4YzFkMjgpPGJyPiZuYnNwOyZuYnNwOyZuYnNwOyAj
MSAweDdmZDEwMjJlNDNiOSBpbiBfX3N0cmR1cCAoL2xpYi94ODZfNjQtbGludXgtZ251L2xp
YmMuc28uNisweDgwM2I5KTxicj48YnI+U1VNTUFSWTogQWRkcmVzc1Nhbml0aXplcjogMTAg
Ynl0ZShzKSBsZWFrZWQgaW4gMSBhbGxvY2F0aW9uKHMpLjxicj49PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PTwvZGl2
PjxkaXY+SXQgc2VlbXMgdGhpcyBidWcgaXMgdmVyeSBsb3ctbGV2ZWwsIGFuZCBhZmZlY3Rz
IG1hbnkgYmFzaWMgb3BlcmF0aW9ucywgZG8geW91IGhhdmUgYW55IGlkZWEgd2hhdCBjYXVz
ZSBzdWNoIGJ1Z3M/IDxicj48L2Rpdj4=

------=_NextPart_5FC25F0F_1018E510_401423E0--





From xen-devel-bounces@lists.xenproject.org Sat Nov 28 14:53:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 14:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40031.73094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj1bK-0003xj-IT; Sat, 28 Nov 2020 14:53:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40031.73094; Sat, 28 Nov 2020 14:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj1bK-0003xc-FU; Sat, 28 Nov 2020 14:53:26 +0000
Received: by outflank-mailman (input) for mailman id 40031;
 Sat, 28 Nov 2020 14:53:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O94F=FC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kj1bJ-0003xX-1x
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 14:53:25 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 892251ee-ef61-454e-965e-97f965bf5667;
 Sat, 28 Nov 2020 14:53:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 892251ee-ef61-454e-965e-97f965bf5667
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606575202;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=YZK+9AtpdfxzzYwaQC1WSsEEsuRX0Gr7ASA3ElH+GBA=;
  b=hScl1X8lY4e9dNM6nArXkVtBxSANRytuVicNjEHTVA77mqY7p+91S21I
   hATh4XDIO3SYvelVswtdppgZRxe7k0jlfJRIYPYghq4nvColeF7QLfEzH
   HNP5tvhkVHCd6ilMjnkStU8v60b9y+Vww3+0qJBKHPXQNxvp8iX4LKDnx
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: KfYPrbFc65l1GpieZZW6T4YeO0ENbB9JqSF2o10Txh7L+M3yGTojoY2QSlc+1Fdb7j3wTzyRl9
 OT7LhSi2dTOB5MpqTMTH2Np+zMhgd2Bz8p0Abso0E0+NcNc7Mx64GofkiwSexFezfntJtOivTB
 MDaV6ahl4TTwKHJnu67VCKFQUTgExfK0hxXohXZSXkEHk8CpFnOH4cH9KMdS/SvbhQYhcVnube
 DGGr4t/Hz5E2P+rQEUBTQyG/+nmMRbBpmLUVTZ9gBnT58/1T/JRycMQYNrKk66MzBZoWGGBO4x
 wtE=
X-SBRS: None
X-MesageID: 32315232
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,377,1599537600"; 
   d="scan'208,223";a="32315232"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RdUnPnOmA/qop23/+uKM+k3hurgZAjU0N7lg7+1WKOEI4HSWVYsGLD92EIAfcOPOJYnEp2JyMmbFqdN1IbVBXA0GdsoxqJ7ypR92HFR9Zaewygxgd097GK+jIpQVDMMEp8QleJS+P8FwxH9hvDY1UjBBtluikhcdeumlUtWK/GGkLHF0y34jjMAZR7gnhQ7AlaXtJ9e8+lfqDJPkmkNvYpjp0QBNHYT8yJVxrMGDujb6dQbKGKQL3qCUljW+XAB9wR6FxyBT8xgol4R9gasmxFMXKUVWaD+oXQMuJYEFxOCd0X4mnNoyWQMJ2KGZoUd9xNCK4paso5UW36sMrzLdgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gzy9Wfl5UFss0wnraGiyvh14GkmHtEiACEEAV9ST1Gs=;
 b=FGYm+3k8ZHC5Y8OIpdCAiZwXY0XkPAaXN0Bfsh02TCKbp4Ab5uvl8jd5BG93ZF1siLETXq0dCO02cmMEnTD2+EpaCGY2+zAdBokXkglyMlCtyxENl0B+28xOdWyuOTT6GxKddAyDbDtlmZMpmbor03kH/2w0DQov0NXfMiaQWai+8KJH1h1GrNDeL5zpCfkR6z+fuw9piLjX5J0+Y9jiXtiZqfyNUOECQwrDPWWNEGU5odXrgGog6FqhtV8sr7L6kaFtuUcfNLOBJ1pHJIjPkMDchgSO8ID6P+yQPy4ZS2m+O0smbKsVaCPk7n+zVPfmwvG3EY8bynJKCuQj+A1YIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gzy9Wfl5UFss0wnraGiyvh14GkmHtEiACEEAV9ST1Gs=;
 b=j3WdaGCt06pb0gBml1b+fQ3FE1omItlsSNWdRgF4aatD00Oqg/ls4LZX2XHPQzKpPE1GHfzl+YHzTuC2JymCCVKidX78GH2x4pueA3F3YRKck19dt9mT1m5q3GLYUkmZ7mbsiZjUmIK4r9WZ6v3uqRYwORZlFoApywGLEo1S0ZM=
Date: Sat, 28 Nov 2020 15:53:11 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
References: <20201126172034.GA7642@antioche.eu.org>
 <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
Content-Type: multipart/mixed; boundary="4ld5vgy3khrjzwcn"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201127214420.GA637@antioche.eu.org>
X-ClientProxiedBy: LO4P123CA0013.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2740643a-aa06-4b30-2fa0-08d893ad56f2
X-MS-TrafficTypeDiagnostic: DM6PR03MB3947:
X-Microsoft-Antispam-PRVS: <DM6PR03MB39470118A438B26AB66195248FF70@DM6PR03MB3947.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 2740643a-aa06-4b30-2fa0-08d893ad56f2
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Nov 2020 14:53:17.2453
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tOal45zokzuM4WW3dX3fkzW13l1CvHwx7LkuZopeiY6E5bYrL7mKNNSKFm6uE3r6fkFf5qgrDJnVDUam9VekwA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3947
X-OriginatorOrg: citrix.com

--4ld5vgy3khrjzwcn
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Fri, Nov 27, 2020 at 10:44:20PM +0100, Manuel Bouyer wrote:
> On Fri, Nov 27, 2020 at 09:22:11PM +0100, Roger Pau Monné wrote:
> > > 
> > > But I can confirm that now, entering ^A^A^A gets interrupts going again
> > 
> > I think there are some weird things with dpci interrupts that I'm
> > trying to understand. I have a patch now that will panic when the
> > buffer is full, so we will hopefully be able to see the whole trace of
> > events. There will be no need for you to press the 'T' key now; the
> > system will panic when the buffer is full.
> > 
> > Note this patch also removes the deassert done in pt_irq_time_out.
> 
> thanks
> the trace is at
> http://www-soc.lip6.fr/~bouyer/xen-log13.txt

Thanks! I think I've found the issue and I'm attaching a possible fix
(fix.patch) to this email. In any case I've also attached a further
debug patch, in case the fix turns out to be wrong. Please test the
fix first, as the debug patch will trigger a panic when the buffer
fills up.

Roger.

--4ld5vgy3khrjzwcn
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="fix.patch"
Content-Transfer-Encoding: 8bit

>From 232112a292c3b82b3063ea6c7aab56afc8e03f67 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 28 Nov 2020 15:06:26 +0100
Subject: [PATCH] x86/vioapic: fix usage of index in place of GSI in
 vioapic_write_redirent
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The usage of idx instead of the GSI in vioapic_write_redirent when
accessing gsi_assert_count can cause a PVH dom0 with multiple
vIO-APICs to lose interrupts in case a pin of an IO-APIC other than
the first one is unmasked with pending interrupts.

Switch to use gsi instead to fix the issue.

Fixes: 9f44b08f7d0e4 ('x86/vioapic: introduce support for multiple vIO APICS')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/vioapic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 67d4a6237f..e64abee7a9 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -260,7 +260,7 @@ static void vioapic_write_redirent(
         pent->fields.remote_irr = 0;
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
-              hvm_irq->gsi_assert_count[idx] )
+              hvm_irq->gsi_assert_count[gsi] )
     {
         pent->fields.remote_irr = 1;
         vioapic_deliver(vioapic, idx);
-- 
2.29.2


--4ld5vgy3khrjzwcn
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="debug.patch"

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 38ac5fb6c7..9db3dcc957 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -187,6 +187,10 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * to know if the GSI is pending or not.
      */
     spin_lock(&d->arch.hvm.irq_lock);
+    if ( gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_gsi_assert irq %u trig %u assert count %u\n",
+                          gsi, trig, hvm_irq->gsi_assert_count[gsi]);
+
     if ( trig == VIOAPIC_EDGE_TRIG || !hvm_irq->gsi_assert_count[gsi] )
     {
         if ( trig == VIOAPIC_LEVEL_TRIG )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index e64abee7a9..df82147f9b 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -257,7 +257,11 @@ static void vioapic_write_redirent(
         vlapic_adjust_i8259_target(d);
     }
     else if ( ent.fields.trig_mode == VIOAPIC_EDGE_TRIG )
+    {
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC set edge trigger irq %u\n", gsi);
         pent->fields.remote_irr = 0;
+    }
     else if ( !ent.fields.mask &&
               !ent.fields.remote_irr &&
               hvm_irq->gsi_assert_count[gsi] )
@@ -278,6 +282,12 @@ static void vioapic_write_redirent(
          */
         int ret = vioapic_hwdom_map_gsi(gsi, ent.fields.trig_mode,
                                         ent.fields.polarity);
+
+        if ( gsi == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC UNMASK irq %u irr %u mask %u assert count %u\n",
+                              gsi, pent->fields.remote_irr, pent->fields.mask,
+                              hvm_irq->gsi_assert_count[gsi]);
+
         if ( ret )
         {
             gprintk(XENLOG_ERR,
@@ -285,6 +295,9 @@ static void vioapic_write_redirent(
             unmasked = 0;
         }
     }
+    else if ( is_hardware_domain(d) && gsi == TRACK_IRQ )
+        debugtrace_printk("vIO-APIC MASK irq %u\n", gsi);
+
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -405,6 +418,10 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 
     ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
+    if ( irq == TRACK_IRQ )
+            debugtrace_printk("vIO-APIC deliver irq %u vector %u\n",
+                              irq, vector);
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index 49bd778484..db7167eb4b 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1641,6 +1641,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
     unsigned long v;
     int i;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ACK irq %u\n", desc->irq);
+
     irq_complete_move(desc);
 
     if ( !directed_eoi_enabled )
@@ -1688,6 +1691,9 @@ static void mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
 
 static void end_level_ioapic_irq_old(struct irq_desc *desc, u8 vector)
 {
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("END irq %u\n", desc->irq);
+
     if ( directed_eoi_enabled )
     {
         if ( !(desc->status & (IRQ_DISABLED|IRQ_MOVE_PENDING)) )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9fc6..ec52e44cb7 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1109,6 +1109,10 @@ static void irq_guest_eoi_timer_fn(void *data)
     unsigned int i, irq = desc - irq_desc;
     irq_guest_action_t *action;
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("irq_guest_eoi_timer_fn irq %u status %x\n",
+                          desc->irq, desc->status);
+
     spin_lock_irq(&desc->lock);
     
     if ( !(desc->status & IRQ_GUEST) )
@@ -1118,6 +1122,10 @@ static void irq_guest_eoi_timer_fn(void *data)
 
     ASSERT(action->ack_type != ACKTYPE_NONE);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("ack_type %u in_flight %u\n",
+                          action->ack_type, action->in_flight);
+
     /*
      * Is no IRQ in flight at all, or another instance of this timer already
      * running? Skip everything to avoid forcing an EOI early.
@@ -1837,6 +1845,12 @@ static void do_IRQ_guest(struct irq_desc *desc, unsigned int vector)
     unsigned int        i;
     struct pending_eoi *peoi = this_cpu(pending_eoi);
 
+    if ( desc->irq == TRACK_IRQ )
+        debugtrace_printk("do_IRQ_guest irq %u nr_guests %u ack_type %u in_flight %u\n",
+                          desc->irq, action->nr_guests, action->ack_type,
+                          action->in_flight);
+
+
     if ( unlikely(!action->nr_guests) )
     {
         /* An interrupt may slip through while freeing an ACKTYPE_EOI irq. */
diff --git a/xen/common/debugtrace.c b/xen/common/debugtrace.c
index f3794b9453..b22c09297d 100644
--- a/xen/common/debugtrace.c
+++ b/xen/common/debugtrace.c
@@ -130,14 +130,14 @@ static void debugtrace_toggle(void)
 
 void debugtrace_dump(void)
 {
-    unsigned long flags;
+    //unsigned long flags;
 
     watchdog_disable();
-    spin_lock_irqsave(&debugtrace_lock, flags);
+    //spin_lock_irqsave(&debugtrace_lock, flags);
 
     debugtrace_dump_worker();
 
-    spin_unlock_irqrestore(&debugtrace_lock, flags);
+    //spin_unlock_irqrestore(&debugtrace_lock, flags);
     watchdog_enable();
 }
 
@@ -152,7 +152,10 @@ static void debugtrace_add_to_buf(char *buf)
     {
         data->buf[data->prd++] = *p;
         if ( data->prd == debugtrace_bytes )
+        {
+            panic("END of buffer\n");
             data->prd = 0;
+        }
     }
 }
 
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 6b1305a3e5..e0949b7057 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -174,7 +174,10 @@ static void pt_irq_time_out(void *data)
          * In the identity mapped case the EOI can also be done now, this way
          * the iteration over the list of domain pirqs is avoided.
          */
-        hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
+        if ( dpci_pirq(irq_map)->pirq == TRACK_IRQ )
+            debugtrace_printk("pt_irq_time_out irq %u\n",
+                              dpci_pirq(irq_map)->pirq);
+        //hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
         spin_unlock(&irq_map->dom->event_lock);
@@ -828,6 +831,9 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !pirq_dpci || !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
+    if ( pirq->pirq == TRACK_IRQ )
+        debugtrace_printk("hvm_do_IRQ_dpci irq %u\n", pirq->pirq);
+
     pirq_dpci->masked = 1;
     raise_softirq_for(pirq_dpci);
     return 1;
@@ -1010,6 +1016,10 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
     if ( !is_iommu_enabled(d) )
         return;
 
+    if ( guest_gsi == TRACK_IRQ )
+        debugtrace_printk("hvm_dpci_eoi irq %u irr %u\n", guest_gsi,
+                          ent->fields.remote_irr);
+
     if ( is_hardware_domain(d) )
     {
         spin_lock(&d->event_lock);
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 43d567fe44..91579c33b9 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -174,4 +174,6 @@ unsigned int arch_hwdom_irqs(domid_t);
 void arch_evtchn_bind_pirq(struct domain *, int pirq);
 #endif
 
+#define TRACK_IRQ 17
+
 #endif /* __XEN_IRQ_H__ */
-- 
2.29.2


--4ld5vgy3khrjzwcn--


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 15:36:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 15:36:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40047.73113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj2GY-0007qh-6l; Sat, 28 Nov 2020 15:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40047.73113; Sat, 28 Nov 2020 15:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj2GY-0007qa-3Z; Sat, 28 Nov 2020 15:36:02 +0000
Received: by outflank-mailman (input) for mailman id 40047;
 Sat, 28 Nov 2020 15:36:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj2GW-0007qQ-C3; Sat, 28 Nov 2020 15:36:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj2GW-0001eO-4K; Sat, 28 Nov 2020 15:36:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj2GV-0007up-RK; Sat, 28 Nov 2020 15:35:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kj2GV-0006aQ-Qp; Sat, 28 Nov 2020 15:35:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ewgBy3OisbfnSmUEc3c4BG5/FZHsV0o04qYLvWYjk6w=; b=WKZ0H/zxzloPBhMwfiu9otCaXe
	zKZdHzDUeskCcQ+ToY72L6Rd55S4js3agMJDFHJQOgLUKfQHerTn8wEhgA1YOfuOiLbhYbGg/DOJE
	BJ0CtOR8yZD8P4I31roel3AKwE9emmpOIdDHyjNh3ERTTFVhVLzd2Pe2lN0tB95zkhBM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157069-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157069: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 15:35:59 +0000

flight 157069 qemu-mainline real [real]
flight 157074 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157069/
http://logs.test-lab.xenproject.org/osstest/logs/157074/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  100 days
Failing since        152659  2020-08-21 14:07:39 Z   99 days  208 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 17:15:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 17:15:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40075.73133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj3o8-0000nv-Kn; Sat, 28 Nov 2020 17:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40075.73133; Sat, 28 Nov 2020 17:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj3o8-0000no-Hs; Sat, 28 Nov 2020 17:14:48 +0000
Received: by outflank-mailman (input) for mailman id 40075;
 Sat, 28 Nov 2020 17:14:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z4pZ=FC=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kj3o7-0000nj-2H
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 17:14:47 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf7068d4-d11f-4045-8ca8-2551d36b2550;
 Sat, 28 Nov 2020 17:14:44 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0ASHEZH6028143
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Sat, 28 Nov 2020 18:14:36 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id DD2152E9465; Sat, 28 Nov 2020 18:14:30 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf7068d4-d11f-4045-8ca8-2551d36b2550
Date: Sat, 28 Nov 2020 18:14:30 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201128171430.GB631@antioche.eu.org>
References: <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Sat, 28 Nov 2020 18:14:37 +0100 (MET)

On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
> > the trace is at
> > http://www-soc.lip6.fr/~bouyer/xen-log13.txt
> 
> Thanks! I think I've found the issue and I'm attaching a possible fix
> (fix.patch) to this email. In any case I've also attached a further
> debug patch, in case the fix turns out to be wrong. Please test the
> fix first, as the debug patch will end up triggering a panic when the
> buffer is full.

Yes, fix.patch does make the system boot as expected !
thanks !

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 17:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 17:28:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40089.73146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj41a-00020Y-W8; Sat, 28 Nov 2020 17:28:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40089.73146; Sat, 28 Nov 2020 17:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj41a-00020R-Sv; Sat, 28 Nov 2020 17:28:42 +0000
Received: by outflank-mailman (input) for mailman id 40089;
 Sat, 28 Nov 2020 17:28:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj41Z-00020J-IM; Sat, 28 Nov 2020 17:28:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj41Z-0004Sa-91; Sat, 28 Nov 2020 17:28:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj41Y-0006xb-UH; Sat, 28 Nov 2020 17:28:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kj41Y-0000vY-Tp; Sat, 28 Nov 2020 17:28:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8qmq2mmqMIltgPN/Eag7E3BDKlXOvzKRT3l7GqfRYIY=; b=Dv7I3Scs8tkLM1JOeWiiuNb2c+
	DKurvAy44X8jzJSv2YgSA+Eo0D0kFpfEhkcY7btQ+c9p0neTsC2osWGmTHuvV2r0/5+1YXsjD/O1O
	iZ1PkbJhij5rKWtA3DXTJRNpiIBmx3HUd5kha1jxUNuoVQukKiKjROziTKfAZtF77LhU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157070-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157070: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c84e1efae022071a4fcf9f1899bf71777c49943a
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 17:28:40 +0000

flight 157070 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157070/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                c84e1efae022071a4fcf9f1899bf71777c49943a
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  119 days
Failing since        152366  2020-08-01 20:49:34 Z  118 days  200 attempts
Testing same since   157070  2020-11-28 06:06:38 Z    0 days    1 attempts

------------------------------------------------------------
3613 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 691530 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Nov 28 18:15:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 18:15:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40106.73165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj4ku-0006gF-Om; Sat, 28 Nov 2020 18:15:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40106.73165; Sat, 28 Nov 2020 18:15:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj4ku-0006g8-Lm; Sat, 28 Nov 2020 18:15:32 +0000
Received: by outflank-mailman (input) for mailman id 40106;
 Sat, 28 Nov 2020 18:15:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3d2o=FC=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kj4kt-0006g1-Cd
 for xen-devel@lists.xenproject.org; Sat, 28 Nov 2020 18:15:31 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 423c1e8e-983b-47ea-9180-8cbdcde1e5f9;
 Sat, 28 Nov 2020 18:15:30 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 0ASIFJCL053956
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sat, 28 Nov 2020 13:15:25 -0500 (EST) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 0ASIFIBd053955;
 Sat, 28 Nov 2020 10:15:18 -0800 (PST) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 423c1e8e-983b-47ea-9180-8cbdcde1e5f9
Date: Sat, 28 Nov 2020 10:15:17 -0800
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roman Shaposhnik <roman@zededa.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Xen on RP4
Message-ID: <X8KTtS99DDj8L2Zh@mattapan.m5p.com>
References: <X73RfHfRfBRLKkvB@mattapan.m5p.com>
 <CAMmSBy8dtUQotUeX2MVke7d2nWS0shvKPL_S=4tUeF0UKh4vgA@mail.gmail.com>
 <X73ghKgQEXLv2z2p@mattapan.m5p.com>
 <CAMmSBy-Qdpj+6FAk9D15=+87_=68T80Y1NGnvyAB=tOFveifiQ@mail.gmail.com>
 <X73owDP0UXx+lvJd@mattapan.m5p.com>
 <alpine.DEB.2.21.2011251051240.7979@sstabellini-ThinkPad-T480s>
 <X78irfLB6DQhkPvd@mattapan.m5p.com>
 <CAMmSBy_4ry2DwMNT1Ai1-11wBHHuO71muvkfEWLRV=h0QiKyoA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy_4ry2DwMNT1Ai1-11wBHHuO71muvkfEWLRV=h0QiKyoA@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Nov 27, 2020 at 11:59:10PM -0800, Roman Shaposhnik wrote:
> On Wed, Nov 25, 2020 at 7:36 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
> > Well, I've now got everything together for a "proper" Debian Raspberry PI
> > installation.  Apparently since 5.2 (perhaps earlier, but 5.2 is
> > confirmed), Debian's kernel source packages have had their Raspberry PI
> > device-trees garbled.  I do have full untouched Linux kernel source
> > handy, but I tend to stick with the distribution until that proves
> > untenable.
> 
> Yup. Same here. I started with upstream kernel, wasted a lot of time,
> threw in the towel
> and imported all of the RPi Foundation patches wholesale :-(

Any chance you could send e-mail to 939633@bugs.debian.org and state that
you have also observed corrupt device-trees with Debian's kernels
somewhere in the >=5.2 range?

I'm half a step away from marking that bug "critical" (it breaks most/all
Raspberry Pi variants, and it breaks Debian's Xen build).  I just tend to
be reluctant...

I can almost believe Intel/Tianocore are trying to create a FUD situation
around device-trees in order to push UEFI.  Debian has been cited by the
Tianocore folks as an example of a Linux distribution which can readily
install without modification.  Just a small push to kill device-trees.

Presently I have no evidence of this, just a niggling suspicion.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Nov 28 21:12:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 28 Nov 2020 21:12:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40147.73194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj7VM-0006cq-SM; Sat, 28 Nov 2020 21:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40147.73194; Sat, 28 Nov 2020 21:11:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kj7VM-0006cj-PA; Sat, 28 Nov 2020 21:11:40 +0000
Received: by outflank-mailman (input) for mailman id 40147;
 Sat, 28 Nov 2020 21:11:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj7VK-0006ca-Ly; Sat, 28 Nov 2020 21:11:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj7VK-0000iy-AI; Sat, 28 Nov 2020 21:11:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kj7VJ-0001sx-Vq; Sat, 28 Nov 2020 21:11:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kj7VJ-0006YI-VM; Sat, 28 Nov 2020 21:11:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e9ewLDdHGWZ+Ynom5pVxmnY3LuQi2rWxGmX8i1fWN/E=; b=YAUiXt/id6Xv1NvQWkNeJ+kSuh
	c6ehjrNYoeZ1yOctgU3G+/GnsSQqBdmfvS2kaDTauDIATPd0W6RL42aHZ+zcHFt3M2Son+MEhkz/o
	1A5v/q2UNNWhEW5ouNCzrkMDxyMT3DKnNrbeayhrX8JXPnW2wmwpbm1KbbbSwmHjGzJY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157072-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157072: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
X-Osstest-Versions-That:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 28 Nov 2020 21:11:37 +0000

flight 157072 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157072/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 157063

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157063
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157063
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157063
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157063
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157063
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157063
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157063
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157063
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157063
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157063
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157063
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
baseline version:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5

Last test of basis   157072  2020-11-28 10:22:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Nov 29 05:44:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 05:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40242.73263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjFVV-00010J-Vq; Sun, 29 Nov 2020 05:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40242.73263; Sun, 29 Nov 2020 05:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjFVV-00010C-SD; Sun, 29 Nov 2020 05:44:21 +0000
Received: by outflank-mailman (input) for mailman id 40242;
 Sun, 29 Nov 2020 05:44:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjFVU-000104-Ld; Sun, 29 Nov 2020 05:44:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjFVU-0008L9-Dw; Sun, 29 Nov 2020 05:44:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjFVU-0007iF-11; Sun, 29 Nov 2020 05:44:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjFVU-0005g9-0V; Sun, 29 Nov 2020 05:44:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zh+G+45nqfMANlGKcUHvmF8o7C7KGwg/Y8e6f5ThCKs=; b=lB3VD2CeFNYdMYmeQWfw69GYbX
	V8iu+Etx4U6u8gZcJLBCCY47BZvkc+LF3O65VHwmGzDw2R8n3X5LwgJh6+UDX+pJrkeTxthFF3yMn
	dDMTC72qOia3WaE5F7TTv2RTxxFOud8txLp3yFd61+kQl0irMpyM6Az6iIQBfVyHpIjI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157078-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157078: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c84e1efae022071a4fcf9f1899bf71777c49943a
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 05:44:20 +0000

flight 157078 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157078/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 157070 pass in 157078
 test-arm64-arm64-xl           8 xen-boot         fail in 157070 pass in 157078
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 157070 pass in 157078
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157070
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 157070

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 157070 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 157070 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                c84e1efae022071a4fcf9f1899bf71777c49943a
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  120 days
Failing since        152366  2020-08-01 20:49:34 Z  119 days  201 attempts
Testing same since   157070  2020-11-28 06:06:38 Z    0 days    2 attempts

------------------------------------------------------------
3613 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 691530 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 06:59:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 06:59:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40262.73285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjGg9-0007il-C9; Sun, 29 Nov 2020 06:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40262.73285; Sun, 29 Nov 2020 06:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjGg9-0007ie-8q; Sun, 29 Nov 2020 06:59:25 +0000
Received: by outflank-mailman (input) for mailman id 40262;
 Sun, 29 Nov 2020 06:59:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGg8-0007iW-Br; Sun, 29 Nov 2020 06:59:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGg8-0001VZ-2Z; Sun, 29 Nov 2020 06:59:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGg7-0002ob-PF; Sun, 29 Nov 2020 06:59:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGg7-0001YA-Ol; Sun, 29 Nov 2020 06:59:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qe1aklfNiszZxPcEsiGj3YG1YMbwvQp5GB3wHTAniFc=; b=yyxTOcH19IM1d679bK/EqVxB63
	qeBDkx15p/ycJHS7RoczdEy6y6sDRymGmpgAFM7+le+18pL4XgnamgWaVpx9tK2U05QLjZXiHcg2F
	YK3EiMoMgQmWbSMixVMs/SrCbdGemKuIihANmBJN4HPdHVdw7g9oSuCHYYIpT5skRWx4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157076-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157076: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 06:59:23 +0000

flight 157076 qemu-mainline real [real]
flight 157085 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157076/
http://logs.test-lab.xenproject.org/osstest/logs/157085/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  100 days
Failing since        152659  2020-08-21 14:07:39 Z   99 days  209 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 07:06:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 07:06:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40271.73299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjGmt-0000HC-AU; Sun, 29 Nov 2020 07:06:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40271.73299; Sun, 29 Nov 2020 07:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjGmt-0000H5-7U; Sun, 29 Nov 2020 07:06:23 +0000
Received: by outflank-mailman (input) for mailman id 40271;
 Sun, 29 Nov 2020 07:06:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGms-0000Gx-0d; Sun, 29 Nov 2020 07:06:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGmr-0001g0-Ot; Sun, 29 Nov 2020 07:06:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGmr-00033i-Gr; Sun, 29 Nov 2020 07:06:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjGmr-0000D5-GL; Sun, 29 Nov 2020 07:06:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3NKStTBH0H4mBVEcON5ivn033rIhJKM5PODxjKGBfmQ=; b=XPuu+n5qFYydN8eSINgHaWzcy7
	jqqeuYCChHOhm1SCqLYvD+hg7k9YJLPpJDchvK99Irer6V9EAHGc/9tET3SdSEZA1mJ0CalbwOg+H
	4B0dDYp+qgisYJyc27ET4vYpcGzFahemY6hf+90Ye4yOa7eHyQJT2TtRLho8hSv+AHOM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157084-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157084: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=5d789c7b37721c1dd3b4e9a3732399bf66603737
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 07:06:21 +0000

flight 157084 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157084/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              5d789c7b37721c1dd3b4e9a3732399bf66603737
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  142 days
Failing since        151818  2020-07-11 04:18:52 Z  141 days  136 attempts
Testing same since   157067  2020-11-28 04:19:31 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29761 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 09:24:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 09:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40309.73321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjIw5-00058v-FT; Sun, 29 Nov 2020 09:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40309.73321; Sun, 29 Nov 2020 09:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjIw5-00058o-CA; Sun, 29 Nov 2020 09:24:01 +0000
Received: by outflank-mailman (input) for mailman id 40309;
 Sun, 29 Nov 2020 09:24:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dMUQ=FD=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kjIw4-00058j-2W
 for xen-devel@lists.xenproject.org; Sun, 29 Nov 2020 09:24:00 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f822a9d-2300-4014-8b49-c470ebc2cdc0;
 Sun, 29 Nov 2020 09:23:57 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AT9NmVY022830
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Sun, 29 Nov 2020 10:23:49 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 931A32E9CA0; Sun, 29 Nov 2020 10:23:43 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f822a9d-2300-4014-8b49-c470ebc2cdc0
Date: Sun, 29 Nov 2020 10:23:43 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201129092343.GA2203@antioche.eu.org>
References: <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201128171430.GB631@antioche.eu.org>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Sun, 29 Nov 2020 10:23:50 +0100 (MET)

On Sat, Nov 28, 2020 at 06:14:30PM +0100, Manuel Bouyer wrote:
> On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
> > > the trace is at
> > > http://www-soc.lip6.fr/~bouyer/xen-log13.txt
> > 
> > Thanks! I think I've found the issue and I'm attaching a possible fix
> > (fix.patch) to this email. In any case I've also attached a further
> > debug patch, in case the fix turns out to be wrong. Please test the
> > fix first, as the debug patch will end up triggering a panic when the
> > buffer is full.
> 
> Yes, fix.patch does make the system boot as expected!
> Thanks!

FYI it also works with Xen 4.13

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 10:23:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 10:23:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40333.73332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjJrg-0002Ig-4U; Sun, 29 Nov 2020 10:23:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40333.73332; Sun, 29 Nov 2020 10:23:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjJrg-0002IZ-1A; Sun, 29 Nov 2020 10:23:32 +0000
Received: by outflank-mailman (input) for mailman id 40333;
 Sun, 29 Nov 2020 10:23:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJre-0002IR-HO; Sun, 29 Nov 2020 10:23:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJre-0006ED-AZ; Sun, 29 Nov 2020 10:23:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJrd-0004au-Ui; Sun, 29 Nov 2020 10:23:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJrd-0002zN-UG; Sun, 29 Nov 2020 10:23:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FF1CfXW66/ZcMJu6Txt/WnCS6wqV/DFfeLC0sdi7jf4=; b=AfQnK1l1gk1A3V3UF32DgJVSnJ
	UjSg0s5+w4UB3qJUDyMZv9svsqJuQL93W99O1WVakqTU7fcOck9wGCzzmvnm2BMXG0e3u7kQyfOwP
	/owsgy8zpUj6VfRxDfy3tkqyeedt4Z48nyMrhTOwoO6iu/KBXXUJhe/MDQOx71WZCX6g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157082-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157082: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
X-Osstest-Versions-That:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 10:23:29 +0000

flight 157082 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157082/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 157072 pass in 157082
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157072
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 157072

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157072
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157072
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157072
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157072
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157072
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157072
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157072
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157072
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157072
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157072
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157072
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
baseline version:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5

Last test of basis   157082  2020-11-29 01:51:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Nov 29 10:23:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 10:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40335.73348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjJrt-0002Md-Je; Sun, 29 Nov 2020 10:23:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40335.73348; Sun, 29 Nov 2020 10:23:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjJrt-0002MU-GP; Sun, 29 Nov 2020 10:23:45 +0000
Received: by outflank-mailman (input) for mailman id 40335;
 Sun, 29 Nov 2020 10:23:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJrs-0002MG-K8; Sun, 29 Nov 2020 10:23:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJrs-0006EK-EJ; Sun, 29 Nov 2020 10:23:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJrs-0004bG-5f; Sun, 29 Nov 2020 10:23:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjJrs-0003Eu-5D; Sun, 29 Nov 2020 10:23:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m4NkV5vD6WMPDXobGOQPDBuGrPXkq+euzYOOczxP4d0=; b=Fpo8N28CGLgJxqUm3d60B1hm4F
	tsBhw2elI9LoCFid3C3n3MkOPso5GNOliTCQi8id2tYk7WZl7T+q4Jd/wpVSfgqxpL6mMUulMlnnG
	niZJdXW71GffrtXRqtTX4Tdt1p6Gb0vnYgAprtaWrzlDCy2aeZ1v/a99RHt5mAwED/NQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157090-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 157090: all pass - PUSHED
X-Osstest-Versions-This:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
X-Osstest-Versions-That:
    xen=9b156bcc3ffcc7949edd4460b718a241e87ae302
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 10:23:44 +0000

flight 157090 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157090/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
baseline version:
 xen                  9b156bcc3ffcc7949edd4460b718a241e87ae302

Last test of basis   157003  2020-11-25 09:18:28 Z    4 days
Testing same since   157090  2020-11-29 09:18:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Paul Durrant <pdurrant@amazon.com>
  Rahul Singh <rahul.singh@arm.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9b156bcc3f..f7d7d53f64  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 13:30:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 13:30:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40409.73368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjMmQ-0002qU-JB; Sun, 29 Nov 2020 13:30:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40409.73368; Sun, 29 Nov 2020 13:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjMmQ-0002qN-GC; Sun, 29 Nov 2020 13:30:18 +0000
Received: by outflank-mailman (input) for mailman id 40409;
 Sun, 29 Nov 2020 13:30:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjMmP-0002qF-Ka; Sun, 29 Nov 2020 13:30:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjMmP-0001oW-EO; Sun, 29 Nov 2020 13:30:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjMmP-0004bM-4k; Sun, 29 Nov 2020 13:30:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjMmP-0005NX-4C; Sun, 29 Nov 2020 13:30:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ttCRuogSqc5jiWE4qwv4saZp45fE+eorBVQbZaQQpc4=; b=RN+e7L3jNq62MWS4VSAxvolvrg
	ky83ueOm19l0bAioMp3qWUw4FA2w7usJy774h6GvnEPby7YpEjQeOrNfEWoyFHNzf8Q0Wl0MqP6fg
	ua/ECwywsLYs57o2Wa6DIvYsGt7muE97oAxN0gT/7K+UdsDZVQCzF7Lw7LUSFqPByTqk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157086-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157086: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=aae5ab854e38151e69f261dbf0e3b7e396403178
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 13:30:17 +0000

flight 157086 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157086/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                aae5ab854e38151e69f261dbf0e3b7e396403178
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  120 days
Failing since        152366  2020-08-01 20:49:34 Z  119 days  202 attempts
Testing same since   157086  2020-11-29 05:47:24 Z    0 days    1 attempts

------------------------------------------------------------
3614 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 692608 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 16:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 16:29:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40474.73416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjPZr-0002F5-6Z; Sun, 29 Nov 2020 16:29:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40474.73416; Sun, 29 Nov 2020 16:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjPZr-0002Ey-2T; Sun, 29 Nov 2020 16:29:31 +0000
Received: by outflank-mailman (input) for mailman id 40474;
 Sun, 29 Nov 2020 16:28:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kLU4=FD=helimail.de=oliver_linden@srs-us1.protection.inumbo.net>)
 id 1kjPYo-0002DH-4m
 for xen-devel@lists.xenproject.org; Sun, 29 Nov 2020 16:28:26 +0000
Received: from mail.hamcom.de (unknown [2001:14f0:0:dc03::37:215])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b400b7ea-e0b8-4ca7-9719-139f725ad6aa;
 Sun, 29 Nov 2020 16:28:23 +0000 (UTC)
Received: from [37.85.82.69] (port=55888 helo=[192.168.1.100])
 by mail.hamcom.de with esmtpa (Exim 4.92.3)
 (envelope-from <oliver_linden@helimail.de>) id 1kjPYi-0002RD-9A
 for xen-devel@lists.xenproject.org; Sun, 29 Nov 2020 17:28:23 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b400b7ea-e0b8-4ca7-9719-139f725ad6aa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=helimail.de
	; s=201711; h=Content-Type:MIME-Version:Date:Message-ID:Subject:From:To:
	Sender:Reply-To:Cc:Content-Transfer-Encoding:Content-ID:Content-Description:
	Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:
	In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
	List-Post:List-Owner:List-Archive;
	bh=JLNEniUypK9+9KC8p4FYhCtIXDWa772T7XBz1uyibpk=; b=AIoG4oCwbYdRdmMwU7i+TpVkMY
	GSourdl2fRWedW3FLY2bwr87CxdvNXHw7t0OFxk0fE2A+BPJfE1SMtOM9oEDhVwXhS6D8jw/VtVQq
	pTCZG98NoVld//Yo6eA5eiitDKyw0Z/GZCZoy0pZBXA//H6JOPUgsnU12jSTFdiQkyXE=;
X-HeLi-id: cfcd208495d565ef66e7dff9f98764da
To: xen-devel@lists.xenproject.org
From: Oliver Linden <oliver_linden@helimail.de>
Subject: HVM - 2nd bridge breaking DomU config
Message-ID: <a7b21eef-8580-c3e8-a93d-e625b88bb947@helimail.de>
Date: Sun, 29 Nov 2020 17:28:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="f759fcgGUbWP7g50xGNMrNbMKsBX54s8c"
X-Spam-Score: -1.0
X-Spam-Flag: NO

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--f759fcgGUbWP7g50xGNMrNbMKsBX54s8c
Content-Type: multipart/mixed; boundary="XJ66UDMsWA5tAAePrOHIWa0o9qM0SktcF";
 protected-headers="v1"
From: Oliver Linden <oliver_linden@helimail.de>
To: xen-devel@lists.xenproject.org
Message-ID: <a7b21eef-8580-c3e8-a93d-e625b88bb947@helimail.de>
Subject: HVM - 2nd bridge breaking DomU config

--XJ66UDMsWA5tAAePrOHIWa0o9qM0SktcF
Content-Type: multipart/alternative;
 boundary="------------D0BBEA754AD91EDEC9AA23AB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D0BBEA754AD91EDEC9AA23AB
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Hi all,

got the recommendation from the freenode xen IRC channel to open a
ticket here. Hoping that's ok.

Problem:

I want to provide a second bridge to an HVM Linux DomU. But as soon as I
add this to the DomU config file the DomU doesn't start, and the qemu log
file states

*qemu-system-i386: -device
rtl8139,id=nic1,netdev=net1,mac=00:16:3e:35:af:af: xen: failed to
populate ram at 1100c0000* (same with e1000)

Providing only one bridge works fine. Since I couldn't find anything
meaningful to me on the Internet I'm asking here. Any help is highly
appreciated.

root@HAM-XEN1:/etc/xen# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal

root@HAM-XEN1:/etc/xen# uname -a
Linux HAM-XEN1 5.4.0-54-generic #60-Ubuntu SMP Fri Nov 6 10:37:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

root@HAM-XEN1:/etc/xen# xl info
host                   : HAM-XEN1
release                : 5.4.0-54-generic
version                : #60-Ubuntu SMP Fri Nov 6 10:37:59 UTC 2020
machine                : x86_64
nr_cpus                : 16
max_cpu_id             : 31
nr_nodes               : 1
cores_per_socket       : 8
threads_per_core       : 2
cpu_mhz                : 3593.326
hw_caps                : 178bf3ff:f6d8320b:2e500800:244037ff:0000000f:219c91a9:00400004:00000500
virt_caps              : hvm hvm_directio
total_memory           : 32699
free_memory            : 7150
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 11
xen_extra              : .4-pre
xen_version            : 4.11.4-pre
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_max_vcpus=2-3 dom0_mem=1024M,max:2048M iommu=amd-iommu-global-intremap loglvl=all guest_loglvl=all quiet no-real-mode edd=off
cc_compiler            : gcc (Ubuntu 9.2.1-31ubuntu3) 9.2.1 20200306
cc_compile_by          : ubuntu-devel-di
cc_compile_domain      : lists.ubuntu.com
cc_compile_date        : Tue Mar 10 09:04:06 UTC 2020
build_id               : 70edf50fce444a706eb5c69735c35c1838e4eaee
xend_config_format     : 4

Best, Oliver
--------------D0BBEA754AD91EDEC9AA23AB--
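
A second bridge is normally attached by giving the guest a second `vif` entry; a minimal sketch of the DomU config lines involved (the bridge names, the first MAC, and the file name are assumptions — the mail quotes only the failing qemu device line, not the config file itself):

```
# Hypothetical /etc/xen/domU.cfg fragment: two emulated NICs on two bridges.
# Bridge names xenbr0/xenbr1 and the first MAC are illustrative; the second
# MAC (00:16:3e:35:af:af) is the one shown in the qemu error above.
vif = [
    'model=rtl8139, mac=00:16:3e:35:af:ae, bridge=xenbr0',
    'model=rtl8139, mac=00:16:3e:35:af:af, bridge=xenbr1',
]
```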

--XJ66UDMsWA5tAAePrOHIWa0o9qM0SktcF--

--f759fcgGUbWP7g50xGNMrNbMKsBX54s8c
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQQG14HRX/gP3AY3+bqqsNqQA1FaigUCX8PMHgAKCRCqsNqQA1Fa
ilg0AJoClBHCBbkYeb54qHKtOIIwDtDGOACgnQ4GcTTAMsE13k5V+KDiilxI78Y=
=7dei
-----END PGP SIGNATURE-----

--f759fcgGUbWP7g50xGNMrNbMKsBX54s8c--


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 18:35:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 18:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40526.73434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjRXH-0005WO-JW; Sun, 29 Nov 2020 18:34:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40526.73434; Sun, 29 Nov 2020 18:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjRXH-0005WH-Fb; Sun, 29 Nov 2020 18:34:59 +0000
Received: by outflank-mailman (input) for mailman id 40526;
 Sun, 29 Nov 2020 18:34:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9YJf=FD=lunn.ch=andrew@srs-us1.protection.inumbo.net>)
 id 1kjRXG-0005WC-EG
 for xen-devel@lists.xenproject.org; Sun, 29 Nov 2020 18:34:58 +0000
Received: from vps0.lunn.ch (unknown [185.16.172.187])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8f14b3b-1407-4848-95ba-4a1133936bd0;
 Sun, 29 Nov 2020 18:34:56 +0000 (UTC)
Received: from andrew by vps0.lunn.ch with local (Exim 4.94)
 (envelope-from <andrew@lunn.ch>)
 id 1kjRX2-009NmY-4l; Sun, 29 Nov 2020 19:34:44 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8f14b3b-1407-4848-95ba-4a1133936bd0
Date: Sun, 29 Nov 2020 19:34:44 +0100
From: Andrew Lunn <andrew@lunn.ch>
To: Lee Jones <lee.jones@linaro.org>
Cc: linux-kernel@vger.kernel.org, Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	bpf@vger.kernel.org
Subject: Re: [PATCH 2/8] net: xen-netback: xenbus: Demote nonconformant
 kernel-doc headers
Message-ID: <20201129183444.GI2234159@lunn.ch>
References: <20201126133853.3213268-1-lee.jones@linaro.org>
 <20201126133853.3213268-3-lee.jones@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201126133853.3213268-3-lee.jones@linaro.org>

On Thu, Nov 26, 2020 at 01:38:47PM +0000, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
> 
>  drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'dev' not described in 'frontend_changed'
>  drivers/net/xen-netback/xenbus.c:419: warning: Function parameter or member 'frontend_state' not described in 'frontend_changed'
>  drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'dev' not described in 'netback_probe'
>  drivers/net/xen-netback/xenbus.c:1001: warning: Function parameter or member 'id' not described in 'netback_probe'
> 
> Cc: Wei Liu <wei.liu@kernel.org>
> Cc: Paul Durrant <paul@xen.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Jesper Dangaard Brouer <hawk@kernel.org>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Rusty Russell <rusty@rustcorp.com.au>
> Cc: xen-devel@lists.xenproject.org
> Cc: netdev@vger.kernel.org
> Cc: bpf@vger.kernel.org
> Signed-off-by: Lee Jones <lee.jones@linaro.org>

Reviewed-by: Andrew Lunn <andrew@lunn.ch>

    Andrew


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 19:01:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 19:01:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40537.73446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjRxH-0008Fv-PZ; Sun, 29 Nov 2020 19:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40537.73446; Sun, 29 Nov 2020 19:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjRxH-0008Fo-LR; Sun, 29 Nov 2020 19:01:51 +0000
Received: by outflank-mailman (input) for mailman id 40537;
 Sun, 29 Nov 2020 19:01:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjRxG-0008Fg-DQ; Sun, 29 Nov 2020 19:01:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjRxG-0000jL-5H; Sun, 29 Nov 2020 19:01:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjRxF-0000yO-PJ; Sun, 29 Nov 2020 19:01:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjRxF-0006LP-On; Sun, 29 Nov 2020 19:01:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h3mwnCYXjAoByNPQxTRGUSwlxaXeoXWNYtWSQHY5rOw=; b=unmkpF+Iy9jLcw38PGFz10K2Yx
	J2qGRUiCX+ZD3BQHOX/rm4QLEZoc0/EYsRFjY8rsSOsOp3EIBA1lh9AMsCz5xZn8RVnalJpsPlXk0
	U49VggSCpZklW35pSGPNSoog77fzWEu4TzYb69Y3vU4Q43FBRpA5sevfNMbc/AGss9n8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157088-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157088: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 19:01:49 +0000

flight 157088 qemu-mainline real [real]
flight 157095 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157088/
http://logs.test-lab.xenproject.org/osstest/logs/157095/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  101 days
Failing since        152659  2020-08-21 14:07:39 Z  100 days  210 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Nov 29 21:07:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 29 Nov 2020 21:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40561.73466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjTud-0002Lc-Ii; Sun, 29 Nov 2020 21:07:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40561.73466; Sun, 29 Nov 2020 21:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjTud-0002LV-Ff; Sun, 29 Nov 2020 21:07:15 +0000
Received: by outflank-mailman (input) for mailman id 40561;
 Sun, 29 Nov 2020 21:07:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjTub-0002LN-Cr; Sun, 29 Nov 2020 21:07:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjTub-0003Ns-52; Sun, 29 Nov 2020 21:07:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjTua-0005H4-Rz; Sun, 29 Nov 2020 21:07:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjTua-0003nb-RQ; Sun, 29 Nov 2020 21:07:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JDVrydkQvbB39z/+zdcy7nmp8NZYg0J2SDgZA1tU4nM=; b=g7sl9WORF73qHOTBEvu7cD3Uze
	4A+Pcj6++iazdJjEYNWjK5dd0RjkwxaLqV21IgZDOuz7IklGyAoIKbOceAQv4qZPbEenPf6Ft27l+
	M7BI2t6rmjYrBQsOoZQLe2eY4JRHN4HbFAE2VBF0Ql04iFHUilznY60dPYiDLprmx9Tk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157093-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157093: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=aae5ab854e38151e69f261dbf0e3b7e396403178
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 29 Nov 2020 21:07:12 +0000

flight 157093 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157093/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl             <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157086 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 157086
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 157086

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-xl-seattle 11 leak-check/basis(11) fail in 157086 blocked in 152332
 test-arm64-arm64-xl   11 leak-check/basis(11) fail in 157086 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                aae5ab854e38151e69f261dbf0e3b7e396403178
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  121 days
Failing since        152366  2020-08-01 20:49:34 Z  120 days  203 attempts
Testing same since   157086  2020-11-29 05:47:24 Z    0 days    2 attempts

------------------------------------------------------------
3614 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          broken  
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl broken
broken-step test-arm64-arm64-xl host-install(5)

Not pushing.

(No revision log; it would be 692608 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 02:32:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 02:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40610.73498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjYyo-0003tQ-Ti; Mon, 30 Nov 2020 02:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40610.73498; Mon, 30 Nov 2020 02:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjYyo-0003tI-Ot; Mon, 30 Nov 2020 02:31:54 +0000
Received: by outflank-mailman (input) for mailman id 40610;
 Mon, 30 Nov 2020 02:31:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjYyn-0003tA-Ei; Mon, 30 Nov 2020 02:31:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjYyn-0006e6-52; Mon, 30 Nov 2020 02:31:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjYym-0003ep-NB; Mon, 30 Nov 2020 02:31:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjYym-0007UO-Mc; Mon, 30 Nov 2020 02:31:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L6NwyzhAI10+I5BQIov6k/63SqQxghpqHXwclaFmWoA=; b=q1kxhDT5XAy0ROQLSsSvngp1Ij
	UmkaTB88kNCLG7F5yI8cHWbpMdsMzT1jsigJi9BLCrBJC90+fnoAnzXr8ASLqMrMamKkGfNB+wJPH
	Gf8it3fvs4zPKhf6CUxRmRD9J+PmfG6FqGrL9iKyYweawoWy29j04026yhQ3CmApQB2Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157097-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157097: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 02:31:52 +0000

flight 157097 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157097/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 157088

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  101 days
Failing since        152659  2020-08-21 14:07:39 Z  100 days  211 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 02:48:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 02:48:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40619.73513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjZEv-00054E-Iz; Mon, 30 Nov 2020 02:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40619.73513; Mon, 30 Nov 2020 02:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjZEv-000547-EF; Mon, 30 Nov 2020 02:48:33 +0000
Received: by outflank-mailman (input) for mailman id 40619;
 Mon, 30 Nov 2020 02:48:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x+J0=FE=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1kjZEt-000542-GK
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 02:48:31 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2f62de9-9077-45b0-bb87-83db74b4f6ca;
 Mon, 30 Nov 2020 02:48:28 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Nov 2020 18:48:27 -0800
Received: from fmsmsx602.amr.corp.intel.com ([10.18.126.82])
 by fmsmga005.fm.intel.com with ESMTP; 29 Nov 2020 18:48:26 -0800
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx602.amr.corp.intel.com (10.18.126.82) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Sun, 29 Nov 2020 18:48:26 -0800
Received: from fmsmsx603.amr.corp.intel.com (10.18.126.83) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Sun, 29 Nov 2020 18:48:26 -0800
Received: from fmsedg601.ED.cps.intel.com (10.1.192.135) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.1713.5
 via Frontend Transport; Sun, 29 Nov 2020 18:48:26 -0800
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.174)
 by edgegateway.intel.com (192.55.55.70) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1713.5; Sun, 29 Nov 2020 18:48:20 -0800
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by CO1PR11MB5028.namprd11.prod.outlook.com (2603:10b6:303:9a::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.23; Mon, 30 Nov
 2020 02:48:19 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::31c9:44c6:7323:61ac]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::31c9:44c6:7323:61ac%7]) with mapi id 15.20.3611.025; Mon, 30 Nov 2020
 02:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2f62de9-9077-45b0-bb87-83db74b4f6ca
IronPort-SDR: Ul+B/BCCOIltWytTIzMplUYu/sAT8jGd1QGlyltEkEj8RD5EPCxXHpQGD64QOqrWNS11FxkEZx
 thnRbACRoumw==
X-IronPort-AV: E=McAfee;i="6000,8403,9820"; a="160345539"
X-IronPort-AV: E=Sophos;i="5.78,379,1599548400"; 
   d="scan'208";a="160345539"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: 9XTw6KOFLyjkmdYJ9SJRy4Zh/ztqsfp193sHe345yWgPJWLmGc7DDl/PRSe19BBYFOAUQ5yy4L
 eQtwIYyhfVHA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,379,1599548400"; 
   d="scan'208";a="538428521"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ai+4F7U8mio+/ht1sGh1ZKVHqFRm01YGT7bQoxKg0Zq9tcGXgwW/iGnW6wYjyaw/rGlplc6YsI5QnR2mTnKtao45PUnZtK9YaTSQWkgR1cDWTjNzbgOJXVaB33cngYH/byXfMOIZrd+HXdWmfkvx4Y1P/Wd/ZFI4mZsV13VrCO//F59aiLR8wwJAvt5YQfeBdExR4LQEU6kVtgG1qAT8Ww6gC95dtWBS9aeccqUvjMcz6MOeO+Xk8Okql/GRir3Xp7upiK3HjdfkWQyABKMtGtM/SjSC4jMjK1H3VAX+tVSUEP0k7ftdR6mg35Va3dJvetucqaoYUx1vdvOsHyQA+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x3sqJ4zOMuW/7klBLWT2EcsUv5E5sUH3HBV6NHvBMI4=;
 b=WP2csqRx0kvlyURRn0dA6qxEn2OMyxL8p8363UAiyVjIy0kMIsEdfVmyPjYqSAWYg2bNG18QKdUtqHkRGKKEK0tZZUCaZsrfKgY8KnXUngFm2hwGUDyXV1oK980wk9h6oyUL9rD4AcB5gb7fy4Q9jifZdwwQPXwstmJAYy2cyj/B2guGWs1flBBL1JpeFJ3h1HOJzyfz4rvr4ECjWgR+0W+C9SChmWZKCyrHMGbSQiSdyMDGcIgwSrU2TCgeBXTOet/bMXgFxcKgCKia2xzxztw2twhppG6RaOWPHQaAZqWaGT66h8IQBo1CGTUMCj+14XuSki0P9hdaAscaksOBqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x3sqJ4zOMuW/7klBLWT2EcsUv5E5sUH3HBV6NHvBMI4=;
 b=Nre+7jbqfn4OBoSbTkVKCHJ4sDb26gE7teyCjCwOQ4Zhi9csqn+SHSB6dyCSS5NpERMCERUr5j47m1G67hsD+j4YIjMK8FAOveErDswW4OAbs38nye2quEPvAtZms31mJUFuvvfYU60hkH2jbu/FygJUj/iCTyEnIfo/m8UH5Iw=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, "Nakajima, Jun"
	<jun.nakajima@intel.com>, Jan Beulich <jbeulich@suse.com>, "Cooper, Andrew"
	<andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH v3 13/13] x86: replace open-coded occurrences of
 sizeof_field()...
Thread-Topic: [PATCH v3 13/13] x86: replace open-coded occurrences of
 sizeof_field()...
Thread-Index: AQHWwphVvkgPQ2EqeUezHV5G5mGBFKngASIQ
Date: Mon, 30 Nov 2020 02:48:19 +0000
Message-ID: <MWHPR11MB1645B36F8461F16F6A6151898CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201124190744.11343-1-paul@xen.org>
 <20201124190744.11343-14-paul@xen.org>
In-Reply-To: <20201124190744.11343-14-paul@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.217]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 8c3a826d-91cf-4f43-794a-08d894da652b
x-ms-traffictypediagnostic: CO1PR11MB5028:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <CO1PR11MB50288A60F8F9D49D3B569D1A8CF50@CO1PR11MB5028.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:669;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: g85pYz290st+7L/msBM9U9DlhBCje8PPWNqk+NTvWSbEEWoe+ha1PhfEg4F5/XwJzmg5sqa6wuhIbbpuHYx3+PZdS6Dx4ZfIcal4jNoqsgEtqAM/V9T/dOjK08HJUyNierQEnJWoWOekwRv29cnYntYiOCSlb9XvW+jTKpiewDYQgbQUClvOFT+A7BmSfN/Axs2i4UJsrRXr7JIgABSKGPRdQDB1Y8hCUAeSCO+A6S4LwAOHwpAVawd6ABn7HsWv8h/4eN3YfPX4oHQZw4gWG5Dz5wg4dw3lLVHPBbB14GQg34JDP7nAUqvyhVljQnc5
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1645.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(366004)(396003)(346002)(71200400001)(9686003)(52536014)(6506007)(4326008)(7696005)(86362001)(8936002)(5660300002)(54906003)(316002)(76116006)(64756008)(186003)(26005)(8676002)(2906002)(66446008)(33656002)(110136005)(66476007)(478600001)(66556008)(55016002)(66946007)(83380400001);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata: =?utf-8?B?WG5xWVZEc3ZhbEVTOUM1ckhSMVdKMFlCZFVqT1hERlVDcnUxT083R0poZ2xQ?=
 =?utf-8?B?ei83SXdOcFZCN29JZ1NiVHpBUFp3d2FVTkNodU10QTFnenF1dXgyeDdlNWsy?=
 =?utf-8?B?WWtmbXIzeWV3MUVMbFVHS3I3NWJYNHJETTR1cTdqaDgvT0ZrSkRJRjlaeGpM?=
 =?utf-8?B?R1Z6WEIxR0Y4Sy8xVGZPTE44eE5zcWhGWEJoTSs5eTV1aHVrUDE4WFBlOTFa?=
 =?utf-8?B?Mk9VM2wyc2M2Q3Q2Z3RJdlhQa2VxYzh3N0F6THpNdmV1a0pvSUJReFlxVlBp?=
 =?utf-8?B?WDk1Z1hoMzRFOURiaG1xbmZrSlh3NUl2MU9PQ21sc2hua1ZJNWlJaTN3RHY5?=
 =?utf-8?B?eGdaNkVuVVgrWWp5RzkvZ3BUSEFRT25YTTViSDVqeWtwZGpYTGhOQUZlZy9r?=
 =?utf-8?B?TUphRUxnaGkyZk9uUjF2TnUwSnk1dFhnSDVubDBoamdxM01hL0RTTm9rRTQw?=
 =?utf-8?B?YzJoUlN0cW9Hb011OU16bjlZUE0yT0VLRWkyNU9HZStlaFp5MEFocXBqMloz?=
 =?utf-8?B?Zjc4cmFIRUVNMXNsdzRMWmQxS0txK29veXNqQmJtWmhaazkwbkIyYXFpclJH?=
 =?utf-8?B?SzM4U295Um1qcFBvN1hRVWcvME1sYU1pZlNKOXY2SS9rYjdFQXM3bFdxaHJW?=
 =?utf-8?B?TGxRZTFpY2c3OUI3MU9rWDFobGFkdXE0aDlqZzNpSWFDT0Y3NFRHU0hEOWln?=
 =?utf-8?B?dUJud2xYYXNybW9tNW1ZZnRBajFTUWYwZm1RL0tnYms3aDJKMHBLMWNzbVRY?=
 =?utf-8?B?NFFHSThYOWZtWjJhMFNJM0xuNm1GT2prWmlxNWVZa1YyMFNITWQ4cmJRYXFF?=
 =?utf-8?B?bGp3bW9JNHpNYUVmR1JnbkowMVFLOGV1NzQ4RnJXaEdPL1lxNFlaMUU3dUJC?=
 =?utf-8?B?WHgrQUhGQWNNOHlPRHY4UDFvY1Y5K0NKNWVrYzhhZGEyYlZGUWl0Zkh6Qy9m?=
 =?utf-8?B?aHM5K1ZvMHFUYUxXMVdTUHhCZWtmQ0dPaUJ4SXdocFFBYkFRSVg5ZEhLMnBk?=
 =?utf-8?B?NFhWTVNQdTlsQzJlNFlpNEJSWEJJanFuLzZ0MlJXM2hEQTY3YXB0QTVEMFhS?=
 =?utf-8?B?ekI1WDRmdlNid01QTmhhZ2NZdm9DS2dCSk9hRmN6cG9GSVhhU1hVYmlDdXVQ?=
 =?utf-8?B?TUd6T2pqQ01GcmFMQnFveGhGc3oyQ1dicUNicTcyYkdCZUJxY281UzFHNytw?=
 =?utf-8?B?QlI5ZnQ5Uk9TZHFZOSsybVBqVEpURk1QeC9mSFBjZFdseTVZUDJwMkRTYWVj?=
 =?utf-8?B?RFg3aWtHM1N3RW1NTEo0UmFLN21ZQnZodDVCVzI5WjdZUUVtV2xsZHkwb29U?=
 =?utf-8?Q?CaIwdcSJO9d7c=3D?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c3a826d-91cf-4f43-794a-08d894da652b
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 02:48:19.3324
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 53wmVr29NeGQLRP2u3PJDtVau76NaYAxCouqejuhqJ012PlzaK9RSzEvUR5AIAgCEafUum3DB93x9k0mEb6KZg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR11MB5028
X-OriginatorOrg: intel.com

> From: Paul Durrant <paul@xen.org>
> Sent: Wednesday, November 25, 2020 3:08 AM
> 
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... with macro evaluations, now that it is available.
> 
> A recent patch imported the sizeof_field() macro from Linux. This patch
> makes
> use of it in places where the construct is currently open-coded.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> ---
>  xen/arch/x86/cpu/vpmu_intel.c |  4 ++--
>  xen/arch/x86/setup.c          | 16 ++++++++--------
>  2 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/vpmu_intel.c
> b/xen/arch/x86/cpu/vpmu_intel.c
> index 75aa11c6adec..6e97ce790037 100644
> --- a/xen/arch/x86/cpu/vpmu_intel.c
> +++ b/xen/arch/x86/cpu/vpmu_intel.c
> @@ -90,8 +90,8 @@ static uint64_t __read_mostly global_ovf_ctrl_mask,
> global_ctrl_mask;
>  static unsigned int __read_mostly regs_sz;
>  /* Offset into context of the beginning of PMU register block */
>  static const unsigned int regs_off =
> -        sizeof(((struct xen_pmu_intel_ctxt *)0)->fixed_counters) +
> -        sizeof(((struct xen_pmu_intel_ctxt *)0)->arch_counters);
> +    sizeof_field(struct xen_pmu_intel_ctxt, fixed_counters) +
> +    sizeof_field(struct xen_pmu_intel_ctxt, arch_counters);
> 
>  /*
>   * QUIRK to workaround an issue on various family 6 cpus.
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 44c04e273537..30d6f375a3af 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1617,19 +1617,19 @@ void __init noreturn __start_xen(unsigned long
> mbi_p)
>      total_pages = nr_pages;
> 
>      /* Sanity check for unwanted bloat of certain hypercall structures. */
> -    BUILD_BUG_ON(sizeof(((struct xen_platform_op *)0)->u) !=
> -                 sizeof(((struct xen_platform_op *)0)->u.pad));
> -    BUILD_BUG_ON(sizeof(((struct xen_domctl *)0)->u) !=
> -                 sizeof(((struct xen_domctl *)0)->u.pad));
> -    BUILD_BUG_ON(sizeof(((struct xen_sysctl *)0)->u) !=
> -                 sizeof(((struct xen_sysctl *)0)->u.pad));
> +    BUILD_BUG_ON(sizeof_field(struct xen_platform_op, u) !=
> +                 sizeof_field(struct xen_platform_op, u.pad));
> +    BUILD_BUG_ON(sizeof_field(struct xen_domctl, u) !=
> +                 sizeof_field(struct xen_domctl, u.pad));
> +    BUILD_BUG_ON(sizeof_field(struct xen_sysctl, u) !=
> +                 sizeof_field(struct xen_sysctl, u.pad));
> 
>      BUILD_BUG_ON(sizeof(start_info_t) > PAGE_SIZE);
>      BUILD_BUG_ON(sizeof(shared_info_t) > PAGE_SIZE);
>      BUILD_BUG_ON(sizeof(struct vcpu_info) != 64);
> 
> -    BUILD_BUG_ON(sizeof(((struct compat_platform_op *)0)->u) !=
> -                 sizeof(((struct compat_platform_op *)0)->u.pad));
> +    BUILD_BUG_ON(sizeof_field(struct compat_platform_op, u) !=
> +                 sizeof_field(struct compat_platform_op, u.pad));
>      BUILD_BUG_ON(sizeof(start_info_compat_t) > PAGE_SIZE);
>      BUILD_BUG_ON(sizeof(struct compat_vcpu_info) != 64);
> 
> --
> 2.20.1


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 02:50:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 02:50:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40625.73524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjZGq-0005rX-UY; Mon, 30 Nov 2020 02:50:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40625.73524; Mon, 30 Nov 2020 02:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjZGq-0005rQ-RV; Mon, 30 Nov 2020 02:50:32 +0000
Received: by outflank-mailman (input) for mailman id 40625;
 Mon, 30 Nov 2020 02:50:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x+J0=FE=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1kjZGo-0005rG-UU
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 02:50:30 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4178d6ca-2ea6-46da-bb58-05210031a6d7;
 Mon, 30 Nov 2020 02:50:28 +0000 (UTC)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Nov 2020 18:50:28 -0800
Received: from orsmsx605.amr.corp.intel.com ([10.22.229.18])
 by FMSMGA003.fm.intel.com with ESMTP; 29 Nov 2020 18:50:27 -0800
Received: from orsmsx609.amr.corp.intel.com (10.22.229.22) by
 ORSMSX605.amr.corp.intel.com (10.22.229.18) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Sun, 29 Nov 2020 18:50:27 -0800
Received: from orsmsx608.amr.corp.intel.com (10.22.229.21) by
 ORSMSX609.amr.corp.intel.com (10.22.229.22) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Sun, 29 Nov 2020 18:50:26 -0800
Received: from ORSEDG601.ED.cps.intel.com (10.7.248.6) by
 orsmsx608.amr.corp.intel.com (10.22.229.21) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.1713.5
 via Frontend Transport; Sun, 29 Nov 2020 18:50:26 -0800
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (104.47.55.105)
 by edgegateway.intel.com (134.134.137.102) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1713.5; Sun, 29 Nov 2020 18:50:24 -0800
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by CO1PR11MB5028.namprd11.prod.outlook.com (2603:10b6:303:9a::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.23; Mon, 30 Nov
 2020 02:50:23 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::31c9:44c6:7323:61ac]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::31c9:44c6:7323:61ac%7]) with mapi id 15.20.3611.025; Mon, 30 Nov 2020
 02:50:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4178d6ca-2ea6-46da-bb58-05210031a6d7
IronPort-SDR: 1P3Gxyv0q5E+5OSh8qzwZcRZxQeQOLZwe/DyuXqu76HT3zpFzZtUfiC28gsv9UdDmEoud+iHXZ
 qapbHQ2MBPiQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9820"; a="160345679"
X-IronPort-AV: E=Sophos;i="5.78,379,1599548400"; 
   d="scan'208";a="160345679"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: 3v1W/e3gG/2LaEmiAlJUgA09Rwx7PtNeUPGtcviUrXQiNl/pzFspK+nDkuWFeQhlhqcFDvroB6
 itLL1hM0qQ8A==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.78,379,1599548400"; 
   d="scan'208";a="372328842"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NTNMdMFrrbfex0bE387q1d2I6gj7GBjql6YTQfoNHQAIId086Li7xkz9GHVytADgmMZvdskykAazsJGQBRT9cRLLwzMUUnFoUIVRJNr9+fIQtNEsCQM6FaxWZKDoJciWYqSykzMCb984OwboEsL7pm3vv+0BR6QaBc36E4GldSZXtKSgv5PI3vf99YsmqPbUMNuZ5S69TKmkC+Wy+Qklll+1+goWTHtvtrGOhP1pNI1RT1qtv7BPmaoUxOeWwaexWecZhGiBhIUJZxU7NNRuo7xjXuHJdcEEFrV7zkmXpDR+bvsWdl9d1EqlNlvnSCHIRh6PlKgWrX6E6d1YELSmyg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mxGh980rurTfhXcU3ftPBq57c43XUNnP7P+xNRvGVbk=;
 b=eV+7tODrwQPNOSMfj7s+pdfR6KCPBhfid/NbgWWdbCCCMSgtEZsvBT4wV1GphthsLlq4GoefR0NGgHVbN8luBoje45eLl0W1IlrJ2jqKCvgeblN0X0H/c+OpUIkMSD/xWjCp2/1WAvUXboU4k+l8Upa0Bz6+g/2xQxB+vU7ar+/Lg5zSIWKKZ9u+Z1FccNtL4/T1t6Uyr4h9EGSxkztQCaKcom2vJxBms4RSkc1vYazF3kWvd41dyeEtPugTN6orkiW3hRYI0S5azI6E52RZTMYrCzlaopU3gIxBT3AAU8yt4zAVj5kpR28GAsa1YylFf/DK/grdizlrorLbnY4+pg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mxGh980rurTfhXcU3ftPBq57c43XUNnP7P+xNRvGVbk=;
 b=mY96ZqFvtU17/TmZIOjjyTy2nCWoT2ZL6eGqlBMRM2LuKkK2aY1h5Jbfu6Rv3/BwsoXtUm38pDqQk/tQ5yapU1XxshwBSgYfwvZ45RcK54VjFs1TVg9CTRut8SFIgaZbbOBSiC50m3KK9U60TprT41Iw+iAUhfpwtuSU8roLBJs=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>
Subject: RE: [PATCH] xen/iommu: vtd: Fix undefined behavior in pci_vtd_quirk()
Thread-Topic: [PATCH] xen/iommu: vtd: Fix undefined behavior in pci_vtd_quirk()
Thread-Index: AQHWvoOaSpI/XFJ49Uub3hmiO5sEdqngCfbQ
Date: Mon, 30 Nov 2020 02:50:23 +0000
Message-ID: <MWHPR11MB16456E395CC9B993E0C07EC48CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201119145216.29280-1-julien@xen.org>
In-Reply-To: <20201119145216.29280-1-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.217]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 7f170a4a-fe2a-4951-742e-08d894daaf3c
x-ms-traffictypediagnostic: CO1PR11MB5028:
x-microsoft-antispam-prvs: <CO1PR11MB50285EA04850D2D699A18E0D8CF50@CO1PR11MB5028.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: hh/Op+FXpv3cM9MZAR0ziLVmj7Z1IN9gsKNEtT2xKCQIfrLZ46QLgUQmEcUuZLXnfYntK5bkHjAl6U0QLq2BVncQ7gxWyZ/ukCJP0wSHiljMQ/X07NxaxGiPVo6jRtjjDauCLRPZqRkNz57+pYR/4uXORJcVHMHydwJwKuG0ms+6ASuFAr8+RZwOMYdHft09sE6A+FZs8GCTFg/HnoqmggBtfbnkpLj2Z/fzplewb838TJ+i4YjRJhL4/raZk0kOMiMmHk+5QcstTYE51rHgCypjYDnAfvKod35acW5X9/rrbjss7gJ6ZGYZyUrJzwjMPmiOD6FhMfHA4v/uBkCX741M/oDUFFox3KJECXw/Mi2gXtikxMEa9zP8RsTpubXKU2NDL7v3+BykhBH0nLH8GA==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1645.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(366004)(396003)(346002)(71200400001)(9686003)(52536014)(6506007)(4326008)(7696005)(86362001)(8936002)(5660300002)(316002)(76116006)(64756008)(186003)(26005)(966005)(8676002)(2906002)(66446008)(33656002)(110136005)(66476007)(478600001)(66556008)(55016002)(66946007)(83380400001);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata: =?us-ascii?Q?l8aQBlvABtbNL/DtlPGZCl6h/cYnPzqBU683PYs8cvi3BqWxTjOMPIO7HlmV?=
 =?us-ascii?Q?JRe/sUrwx+/gyVCacIZ0DeFu/SFfSu5QPQs/MtKMxaGtuhBdYJ/+cYR1qgcH?=
 =?us-ascii?Q?pH33Jn0Sk7SPlqCDsO0zlYyRIIPGHXuw0bndqo2Kvgwyda0MJ3kLn0/2w3zP?=
 =?us-ascii?Q?TpmDruY5IErphcUap7bFZoDPuaaCOwTX263/ZkzwQTIVwo4gqig0hOzvV6tV?=
 =?us-ascii?Q?yOk8QT5xB9Ulp+RPSG7gsnOrF9M2taFoPQpnv4IGPq7SKfKxlLa/xAOsTGdM?=
 =?us-ascii?Q?6l2Dp2SHCNCxYsVvF8gipsWo34NbCM+5IPBq0HSsalIlsCIQpRZjjPRR1PuI?=
 =?us-ascii?Q?jYEnwdmTLveHHaZa63Q0VpDsKqM8i1Au03LUV2yy895qoVcw5jiVGkVrKvPn?=
 =?us-ascii?Q?tzdOufMhxksw4yfI7H+zhhQJHI8dKCmq4kGLOH9RtA5JvDNpjZfFo9pZwLGo?=
 =?us-ascii?Q?XUnKH6pfyTD/K6Oi0pELKaHcfsfZXyxJIUTR0YlwlNPE1ju+JZh+9Vp043oL?=
 =?us-ascii?Q?3ggfz130+dUCQ/pcABOzWE1Y3J5LXv3pKWzuRO8cGpil5MxkrXpjHcspSrGy?=
 =?us-ascii?Q?z+saD1zJovRTz4R93TwhKWFllddhJSgTFYhfMCJeX6Q9xiz3CTLLEZurU5hs?=
 =?us-ascii?Q?TA9Lmd44foDaCJtOQSF8xHYj3N54x7zByLn3J0F2ADAwqoH8V2cEBaAFIYzO?=
 =?us-ascii?Q?7sRI/9wGx86OOY3KZ5z4PYuRb1TWUMkhVCP+JO4auiKwjTCYjD717SPdIeNX?=
 =?us-ascii?Q?dgf0Bp/hRuckqwCL09T4SHFN6Ijg94AvcoywulWZ0l4YpI4srsVZNOIVD3/v?=
 =?us-ascii?Q?83Szt+i+oy4az8DbHOS538EQ/VyXH45TZIL3vP55zJSwWvb03bjs9YfwEpuQ?=
 =?us-ascii?Q?rUovW7TI92JJX57MJi4ayr3ZMM7xn6ddm0+3APnT/2mhmbshC6RI7Y//qyWv?=
 =?us-ascii?Q?AarELN5kXcTOR3QzyPlUeFvr5IQYGWaquKk7L2C1J/0=3D?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f170a4a-fe2a-4951-742e-08d894daaf3c
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 02:50:23.6041
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: boWQBMNynXt7pOBMdGZeUHqWKHP9CEdY8uQc/5ZsxAYPIyqont6NnUBOUPLNU5yV5G+49iF1N6sgDWPo0zKPNw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR11MB5028
X-OriginatorOrg: intel.com

> From: Julien Grall <julien@xen.org>
> Sent: Thursday, November 19, 2020 10:52 PM
>
> From: Julien Grall <jgrall@amazon.com>
>
> When booting Xen with CONFIG_UBSAN=y on Sandy Bridge, UBSAN will throw
> the following splat:
>
> (XEN) ================================================================
> (XEN) UBSAN: Undefined behaviour in quirks.c:449:63
> (XEN) left shift of 1 by 31 places cannot be represented in type 'int'
> (XEN) ----[ Xen-4.11.4  x86_64  debug=y   Not tainted ]----
>
> [...]
>
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0802c0ccc>] ubsan.c#ubsan_epilogue+0xa/0xad
> (XEN)    [<ffff82d0802c16c9>] __ubsan_handle_shift_out_of_bounds+0xb4/0x145
> (XEN)    [<ffff82d0802eeecd>] pci_vtd_quirk+0x3d3/0x74f
> (XEN)    [<ffff82d0802e508b>] iommu.c#domain_context_mapping+0x45b/0x46f
> (XEN)    [<ffff82d08053f39e>] iommu.c#setup_hwdom_device+0x22/0x3a
> (XEN)    [<ffff82d08053dfbc>] pci.c#setup_one_hwdom_device+0x8c/0x124
> (XEN)    [<ffff82d08053e302>] pci.c#_setup_hwdom_pci_devices+0xbb/0x2f7
> (XEN)    [<ffff82d0802da5b7>] pci.c#pci_segments_iterate+0x4c/0x8c
> (XEN)    [<ffff82d08053e8bd>] setup_hwdom_pci_devices+0x25/0x2c
> (XEN)    [<ffff82d08053e916>] iommu.c#intel_iommu_hwdom_init+0x52/0x2f3
> (XEN)    [<ffff82d08053d6da>] iommu_hwdom_init+0x4e/0xa4
> (XEN)    [<ffff82d080577f32>] dom0_construct_pv+0x23c8/0x2476
> (XEN)    [<ffff82d08057cb50>] construct_dom0+0x6c/0xa3
> (XEN)    [<ffff82d080564822>] __start_xen+0x4651/0x4b55
> (XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55
>
> Note that the splat is from 4.11.4 rather than staging, although the
> problem is still present there.
>
> This can be solved by making the first operand unsigned int.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

>
> CR: https://code.amazon.com/reviews/CR-38873112
> ---
>  xen/drivers/passthrough/vtd/quirks.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
> index a8330f17bc0c..8a81d9c9308b 100644
> --- a/xen/drivers/passthrough/vtd/quirks.c
> +++ b/xen/drivers/passthrough/vtd/quirks.c
> @@ -435,7 +435,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
>      case 0x3728: /* Xeon C5500/C3500 (JasperForest) */
>      case 0x3c28: /* Sandybridge */
>          val = pci_conf_read32(pdev->sbdf, 0x1AC);
> -        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1 << 31));
> +        pci_conf_write32(pdev->sbdf, 0x1AC, val | (1U << 31));
>          printk(XENLOG_INFO "Masked VT-d error signaling on %pp\n", &pdev->sbdf);
>          break;
>
> --
> 2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 03:07:13 2020
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>
Subject: RE: [PATCH v10 5/7] vtd: use a bit field for root_entry
Thread-Topic: [PATCH v10 5/7] vtd: use a bit field for root_entry
Thread-Index: AQHWv0COaWiEdvBtMUu0J5DvTo0K7angCUXA
Date: Mon, 30 Nov 2020 03:06:54 +0000
Message-ID: <MWHPR11MB164520264945AF959D7A3ED28CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-6-paul@xen.org>
In-Reply-To: <20201120132440.1141-6-paul@xen.org>

> From: Paul Durrant <paul@xen.org>
> Sent: Friday, November 20, 2020 9:25 PM
>
> From: Paul Durrant <pdurrant@amazon.com>
>
> This makes the code a little easier to read and also makes it more
> consistent with iremap_entry.
>
> Also take the opportunity to tidy up the implementation of
> device_in_domain().
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> Cc: Kevin Tian <kevin.tian@intel.com>
>
> v10:
>  - Small tweaks requested by Jan
>  - Remove macros in favour of direct field access
>  - Add missing barrier
>
> v4:
>  - New in v4
> ---
>  xen/drivers/passthrough/vtd/iommu.c   |  9 +++++----
>  xen/drivers/passthrough/vtd/iommu.h   | 25 ++++++++++++-------------
>  xen/drivers/passthrough/vtd/utils.c   |  6 +++---
>  xen/drivers/passthrough/vtd/x86/ats.c | 27 +++++++++++++++------------
>  4 files changed, 35 insertions(+), 32 deletions(-)
>
> diff --git a/xen/drivers/passthrough/vtd/iommu.c
> b/xen/drivers/passthrough/vtd/iommu.c
> index d136fe36883b..1a038541f0a3 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -237,7 +237,7 @@ static u64 bus_to_context_maddr(struct vtd_iommu
> *iommu, u8 bus)
>      ASSERT(spin_is_locked(&iommu->lock));
>      root_entries =3D (struct root_entry *)map_vtd_domain_page(iommu-
> >root_maddr);
>      root =3D &root_entries[bus];
> -    if ( !root_present(*root) )
> +    if ( !root->p )
>      {
>          maddr =3D alloc_pgtable_maddr(1, iommu->node);
>          if ( maddr =3D=3D 0 )
> @@ -245,11 +245,12 @@ static u64 bus_to_context_maddr(struct
> vtd_iommu *iommu, u8 bus)
>              unmap_vtd_domain_page(root_entries);
>              return 0;
>          }
> -        set_root_value(*root, maddr);
> -        set_root_present(*root);
> +        root->ctp =3D paddr_to_pfn(maddr);
> +        smp_wmb();
> +        root->p =3D true;
>          iommu_sync_cache(root, sizeof(struct root_entry));
>      }
> -    maddr =3D (u64) get_context_addr(*root);
> +    maddr =3D pfn_to_paddr(root->ctp);
>      unmap_vtd_domain_page(root_entries);
>      return maddr;
>  }
> diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
> index 216791b3d634..b14628eec260 100644
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -184,21 +184,20 @@
>  #define dma_frcd_source_id(c) (c & 0xffff)
>  #define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
>
> -/*
> - * 0: Present
> - * 1-11: Reserved
> - * 12-63: Context Ptr (12 - (haw-1))
> - * 64-127: Reserved
> - */
>  struct root_entry {
> -    u64    val;
> -    u64    rsvd1;
> +    union {
> +        struct { uint64_t lo, hi; };
> +        struct {
> +            /* 0 - 63 */
> +            bool p:1;
> +            unsigned int reserved0:11;
> +            uint64_t ctp:52;
> +
> +            /* 64 - 127 */
> +            uint64_t reserved1;
> +        };
> +    };
>  };
> -#define root_present(root)    ((root).val & 1)
> -#define set_root_present(root) do {(root).val |= 1;} while(0)
> -#define get_context_addr(root) ((root).val & PAGE_MASK_4K)
> -#define set_root_value(root, value) \
> -    do {(root).val |= ((value) & PAGE_MASK_4K);} while(0)
>
>  struct context_entry {
>      u64 lo;
> diff --git a/xen/drivers/passthrough/vtd/utils.c b/xen/drivers/passthrough/vtd/utils.c
> index 4febcf506d8a..5f25a86a535c 100644
> --- a/xen/drivers/passthrough/vtd/utils.c
> +++ b/xen/drivers/passthrough/vtd/utils.c
> @@ -112,15 +112,15 @@ void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64 gmfn)
>          return;
>      }
>
> -    printk("    root_entry[%02x] = %"PRIx64"\n", bus, root_entry[bus].val);
> -    if ( !root_present(root_entry[bus]) )
> +    printk("    root_entry[%02x] = %"PRIx64"\n", bus, root_entry[bus].lo);
> +    if ( !root_entry[bus].p )
>      {
>          unmap_vtd_domain_page(root_entry);
>          printk("    root_entry[%02x] not present\n", bus);
>          return;
>      }
>
> -    val = root_entry[bus].val;
> +    val = pfn_to_paddr(root_entry[bus].ctp);
>      unmap_vtd_domain_page(root_entry);
>      ctxt_entry = map_vtd_domain_page(val);
>      if ( ctxt_entry == NULL )
> diff --git a/xen/drivers/passthrough/vtd/x86/ats.c b/xen/drivers/passthrough/vtd/x86/ats.c
> index 04d702b1d6b1..fec969ef75bb 100644
> --- a/xen/drivers/passthrough/vtd/x86/ats.c
> +++ b/xen/drivers/passthrough/vtd/x86/ats.c
> @@ -74,8 +74,8 @@ int ats_device(const struct pci_dev *pdev, const struct acpi_drhd_unit *drhd)
>  static bool device_in_domain(const struct vtd_iommu *iommu,
>                               const struct pci_dev *pdev, uint16_t did)
>  {
> -    struct root_entry *root_entry;
> -    struct context_entry *ctxt_entry = NULL;
> +    struct root_entry *root_entry, *root_entries;
> +    struct context_entry *context_entry, *context_entries = NULL;
>      unsigned int tt;
>      bool found = false;
>
> @@ -85,25 +85,28 @@ static bool device_in_domain(const struct vtd_iommu *iommu,
>          return false;
>      }
>
> -    root_entry = map_vtd_domain_page(iommu->root_maddr);
> -    if ( !root_present(root_entry[pdev->bus]) )
> +    root_entries = (struct root_entry *)map_vtd_domain_page(iommu->root_maddr);
> +    root_entry = &root_entries[pdev->bus];
> +    if ( !root_entry->p )
>          goto out;
>
> -    ctxt_entry = map_vtd_domain_page(root_entry[pdev->bus].val);
> -    if ( context_domain_id(ctxt_entry[pdev->devfn]) != did )
> +    context_entries = map_vtd_domain_page(root_entry->ctp);
> +    context_entry = &context_entries[pdev->devfn];
> +    if ( context_domain_id(*context_entry) != did )
>          goto out;
>
> -    tt = context_translation_type(ctxt_entry[pdev->devfn]);
> +    tt = context_translation_type(*context_entry);
>      if ( tt != CONTEXT_TT_DEV_IOTLB )
>          goto out;
>
>      found = true;
> -out:
> -    if ( root_entry )
> -        unmap_vtd_domain_page(root_entry);
>
> -    if ( ctxt_entry )
> -        unmap_vtd_domain_page(ctxt_entry);
> + out:
> +    if ( root_entries )
> +        unmap_vtd_domain_page(root_entries);
> +
> +    if ( context_entries )
> +        unmap_vtd_domain_page(context_entries);
>
>      return found;
>  }
> --
> 2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 03:10:57 2020
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>
Subject: RE: [PATCH v10 6/7] vtd: use a bit field for context_entry
Thread-Topic: [PATCH v10 6/7] vtd: use a bit field for context_entry
Thread-Index: AQHWv0CO1PZZYlXYt0GC2y4nZ5YxkKngDZkw
Date: Mon, 30 Nov 2020 03:10:33 +0000
Message-ID: <MWHPR11MB16457F22A1940E866803819D8CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-7-paul@xen.org>
In-Reply-To: <20201120132440.1141-7-paul@xen.org>

> From: Paul Durrant <paul@xen.org>
> Sent: Friday, November 20, 2020 9:25 PM
>
> From: Paul Durrant <pdurrant@amazon.com>
>
> This removes the need for much shifting, masking and several magic
> numbers. On the whole it makes the code quite a bit more readable.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> Cc: Kevin Tian <kevin.tian@intel.com>
>
> v10:
>  - Remove macros in favour of direct field access
>  - Adjust field types
>  - Add missing barriers
>
> v4:
>  - New in v4
> ---
>  xen/drivers/passthrough/vtd/iommu.c   | 36 +++++++++++----------
>  xen/drivers/passthrough/vtd/iommu.h   | 45 +++++++++++++--------------
>  xen/drivers/passthrough/vtd/utils.c   | 10 +++---
>  xen/drivers/passthrough/vtd/x86/ats.c |  6 ++--
>  4 files changed, 47 insertions(+), 50 deletions(-)
>
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 1a038541f0a3..fdb472ad6515 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -86,8 +86,6 @@ static int domain_iommu_domid(struct domain *d,
>      return -1;
>  }
>
> -#define DID_FIELD_WIDTH 16
> -#define DID_HIGH_OFFSET 8
>  static int context_set_domain_id(struct context_entry *context,
>                                   struct domain *d,
>                                   struct vtd_iommu *iommu)
> @@ -121,21 +119,22 @@ static int context_set_domain_id(struct context_entry *context,
>      }
>
>      set_bit(i, iommu->domid_bitmap);
> -    context->hi |= (i & ((1 << DID_FIELD_WIDTH) - 1)) << DID_HIGH_OFFSET;
> +    context->did = i;
> +
>      return 0;
>  }
>
>  static int context_get_domain_id(struct context_entry *context,
>                                   struct vtd_iommu *iommu)
>  {
> -    unsigned long dom_index, nr_dom;
>      int domid = -1;
>
>      if (iommu && context)
>      {
> -        nr_dom = cap_ndoms(iommu->cap);
> +        unsigned long dom_index, nr_dom;
>
> -        dom_index = context_domain_id(*context);
> +        nr_dom = cap_ndoms(iommu->cap);
> +        dom_index = context->did;
>
>          if ( dom_index < nr_dom && iommu->domid_map )
>              domid = iommu->domid_map[dom_index];
> @@ -1338,7 +1337,7 @@ int domain_context_mapping_one(
>      context_entries = (struct context_entry *)map_vtd_domain_page(maddr);
>      context = &context_entries[devfn];
>
> -    if ( context_present(*context) )
> +    if ( context->p )
>      {
>          int res = 0;
>
> @@ -1382,7 +1381,7 @@ int domain_context_mapping_one(
>
>      if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
>      {
> -        context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
> +        context->tt = CONTEXT_TT_PASS_THRU;
>      }
>      else
>      {
> @@ -1397,11 +1396,11 @@ int domain_context_mapping_one(
>              return -ENOMEM;
>          }
>
> -        context_set_address_root(*context, pgd_maddr);
> +        context->slptptr = paddr_to_pfn(pgd_maddr);
>          if ( ats_enabled && ecap_dev_iotlb(iommu->ecap) )
> -            context_set_translation_type(*context, CONTEXT_TT_DEV_IOTLB);
> +            context->tt = CONTEXT_TT_DEV_IOTLB;
>          else
> -            context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
> +            context->tt = CONTEXT_TT_MULTI_LEVEL;
>
>          spin_unlock(&hd->arch.mapping_lock);
>      }
> @@ -1413,9 +1412,10 @@ int domain_context_mapping_one(
>          return -EFAULT;
>      }
>
> -    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
> -    context_set_fault_enable(*context);
> -    context_set_present(*context);
> +    context->aw = level_to_agaw(iommu->nr_pt_levels);
> +    context->fpd = false;
> +    smp_wmb();
> +    context->p = true;
>      iommu_sync_cache(context, sizeof(struct context_entry));
>      spin_unlock(&iommu->lock);
>
> @@ -1567,17 +1567,19 @@ int domain_context_unmap_one(
>      context_entries = (struct context_entry *)map_vtd_domain_page(maddr);
>      context = &context_entries[devfn];
>
> -    if ( !context_present(*context) )
> +    if ( !context->p )
>      {
>          spin_unlock(&iommu->lock);
>          unmap_vtd_domain_page(context_entries);
>          return 0;
>      }
>
> -    context_clear_present(*context);
> -    context_clear_entry(*context);
> +    context->p = false;
> +    smp_wmb();
>      iommu_sync_cache(context, sizeof(struct context_entry));
>
> +    context->val = 0; /* No need to sync; present bit is already cleared */
> +
>      iommu_domid= domain_iommu_domid(domain, iommu);
>      if ( iommu_domid == -1 )
>      {
> diff --git a/xen/drivers/passthrough/vtd/iommu.h
> b/xen/drivers/passthrough/vtd/iommu.h
> index b14628eec260..33b1abf98526 100644
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -198,37 +198,34 @@ struct root_entry {
>          };
>      };
>  };
> +#define ROOT_ENTRY_NR (PAGE_SIZE_4K / sizeof(struct root_entry))
>=20
>  struct context_entry {
> -    u64 lo;
> -    u64 hi;
> -};
> -#define ROOT_ENTRY_NR (PAGE_SIZE_4K/sizeof(struct root_entry))
> -#define context_present(c) ((c).lo & 1)
> -#define context_fault_disable(c) (((c).lo >> 1) & 1)
> -#define context_translation_type(c) (((c).lo >> 2) & 3)
> -#define context_address_root(c) ((c).lo & PAGE_MASK_4K)
> -#define context_address_width(c) ((c).hi &  7)
> -#define context_domain_id(c) (((c).hi >> 8) & ((1 << 16) - 1))
> +    union {
> +        __uint128_t val;
> +        struct { uint64_t lo, hi; };
> +        struct {
> +            /* 0 - 63 */
> +            bool p:1;
> +            bool fpd:1;
> +            uint64_t tt:2;
>=20
> -#define context_set_present(c) do {(c).lo |=3D 1;} while(0)
> -#define context_clear_present(c) do {(c).lo &=3D ~1;} while(0)
> -#define context_set_fault_enable(c) \
> -    do {(c).lo &=3D (((u64)-1) << 2) | 1;} while(0)
> -
> -#define context_set_translation_type(c, val) do { \
> -        (c).lo &=3D (((u64)-1) << 4) | 3; \
> -        (c).lo |=3D (val & 3) << 2; \
> -    } while(0)
>  #define CONTEXT_TT_MULTI_LEVEL 0
>  #define CONTEXT_TT_DEV_IOTLB   1
>  #define CONTEXT_TT_PASS_THRU   2
>=20
> -#define context_set_address_root(c, val) \
> -    do {(c).lo &=3D 0xfff; (c).lo |=3D (val) & PAGE_MASK_4K ;} while(0)
> -#define context_set_address_width(c, val) \
> -    do {(c).hi &=3D 0xfffffff8; (c).hi |=3D (val) & 7;} while(0)
> -#define context_clear_entry(c) do {(c).lo =3D 0; (c).hi =3D 0;} while(0)
> +            unsigned int reserved0:8;
> +            uint64_t slptptr:52;
> +
> +            /* 64 - 127 */
> +            unsigned int aw:3;
> +            unsigned int ignored:4;
> +            unsigned int reserved1:1;
> +            unsigned int did:16;
> +            uint64_t reserved2:40;
> +        };
> +    };
> +};
>=20
>  /* page table handling */
>  #define LEVEL_STRIDE       (9)
> diff --git a/xen/drivers/passthrough/vtd/utils.c
> b/xen/drivers/passthrough/vtd/utils.c
> index 5f25a86a535c..4bca160bc663 100644
> --- a/xen/drivers/passthrough/vtd/utils.c
> +++ b/xen/drivers/passthrough/vtd/utils.c
> @@ -129,17 +129,17 @@ void print_vtd_entries(struct vtd_iommu *iommu,
> int bus, int devfn, u64 gmfn)
>          return;
>      }
>=20
> -    val =3D ctxt_entry[devfn].lo;
> -    printk("    context[%02x] =3D %"PRIx64"_%"PRIx64"\n",
> -           devfn, ctxt_entry[devfn].hi, val);
> -    if ( !context_present(ctxt_entry[devfn]) )
> +    printk("    context[%02x] =3D %"PRIx64"_%"PRIx64"\n", devfn,
> +           ctxt_entry[devfn].hi, ctxt_entry[devfn].lo);
> +    if ( !ctxt_entry[devfn].p )
>      {
>          unmap_vtd_domain_page(ctxt_entry);
>          printk("    ctxt_entry[%02x] not present\n", devfn);
>          return;
>      }
>=20
> -    level =3D agaw_to_level(context_address_width(ctxt_entry[devfn]));
> +    level =3D agaw_to_level(ctxt_entry[devfn].aw);
> +    val =3D pfn_to_paddr(ctxt_entry[devfn].slptptr);
>      unmap_vtd_domain_page(ctxt_entry);
>      if ( level !=3D VTD_PAGE_TABLE_LEVEL_3 &&
>           level !=3D VTD_PAGE_TABLE_LEVEL_4)
> diff --git a/xen/drivers/passthrough/vtd/x86/ats.c
> b/xen/drivers/passthrough/vtd/x86/ats.c
> index fec969ef75bb..cb057ced3cf7 100644
> --- a/xen/drivers/passthrough/vtd/x86/ats.c
> +++ b/xen/drivers/passthrough/vtd/x86/ats.c
> @@ -76,7 +76,6 @@ static bool device_in_domain(const struct vtd_iommu
> *iommu,
>  {
>      struct root_entry *root_entry, *root_entries;
>      struct context_entry *context_entry, *context_entries =3D NULL;
> -    unsigned int tt;
>      bool found =3D false;
>=20
>      if ( unlikely(!iommu->root_maddr) )
> @@ -92,11 +91,10 @@ static bool device_in_domain(const struct
> vtd_iommu *iommu,
>=20
>      context_entries =3D map_vtd_domain_page(root_entry->ctp);
>      context_entry =3D &context_entries[pdev->devfn];
> -    if ( context_domain_id(*context_entry) !=3D did )
> +    if ( context_entry->did !=3D did )
>          goto out;
>=20
> -    tt =3D context_translation_type(*context_entry);
> -    if ( tt !=3D CONTEXT_TT_DEV_IOTLB )
> +    if ( context_entry->tt !=3D CONTEXT_TT_DEV_IOTLB )
>          goto out;
>=20
>      found =3D true;
> --
> 2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 05:29:35 2020
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
CC: Paul Durrant <pdurrant@amazon.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v10 7/7] vtd: use a bit field for dma_pte
Thread-Topic: [PATCH v10 7/7] vtd: use a bit field for dma_pte
Thread-Index: AQHWv0CPiyT+mTsyP0CyAKicZjEQmKncLtqAgAP+YuA=
Date: Mon, 30 Nov 2020 05:29:04 +0000
Message-ID: <MWHPR11MB16456F6B57ED225135D2EC218CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-8-paul@xen.org>
 <24774b4e-3ae8-2941-24ee-722acea69657@suse.com>
In-Reply-To: <24774b4e-3ae8-2941-24ee-722acea69657@suse.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

> From: Beulich
> Sent: Saturday, November 28, 2020 12:02 AM
>
> On 20.11.2020 14:24, Paul Durrant wrote:
> > @@ -709,20 +709,23 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
> >      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> >      pte = page + address_level_offset(addr, 1);
> >
> > -    if ( !dma_pte_present(*pte) )
> > +    if ( !pte->r && !pte->w )
>
> I think dma_pte_present() wants to stay, so we would have to touch
> only one place when adding support for x.
>
> >      {
> >          spin_unlock(&hd->arch.mapping_lock);
> >          unmap_vtd_domain_page(page);
> >          return;
> >      }
> >
> > -    dma_clear_pte(*pte);
> > -    *flush_flags |= IOMMU_FLUSHF_modified;
> > +    pte->r = pte->w = false;
> > +    smp_wmb();
> > +    pte->val = 0;
> >
> >      spin_unlock(&hd->arch.mapping_lock);
> >      iommu_sync_cache(pte, sizeof(struct dma_pte));
>
> Just as an observation - in an earlier patch I think there was a
> code sequence having these the other way around. I think we want
> to settle one one way of doing this (flush then unlock, or unlock
> then flush). Kevin?
>

Agree. Generally speaking 'unlock then flush' is preferred since
spinlock doesn't protect iommu anyway.

> > @@ -1775,15 +1778,12 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
> >      page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> >      pte = &page[dfn_x(dfn) & LEVEL_MASK];
> >      old = *pte;
> > -
> > -    dma_set_pte_addr(new, mfn_to_maddr(mfn));
> > -    dma_set_pte_prot(new,
> > -                     ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
> > -                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
> > -
> > -    /* Set the SNP on leaf page table if Snoop Control available */
> > -    if ( iommu_snoop )
> > -        dma_set_pte_snp(new);
> > +    new = (struct dma_pte){
> > +        .r = flags & IOMMUF_readable,
> > +        .w = flags & IOMMUF_writable,
> > +        .snp = iommu_snoop,
> > +        .addr = mfn_x(mfn),
> > +    };
>
> We still haven't settled on a newer gcc baseline, so this kind of
> initializer is still not allowed (as in: will break the build) for
> struct-s with unnamed sub-struct-s / sub-union-s.
>
> > @@ -2611,18 +2611,18 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
> >              process_pending_softirqs();
> >
> >          pte = &pt_vaddr[i];
> > -        if ( !dma_pte_present(*pte) )
> > +        if ( !pte->r && !pte->w )
> >              continue;
> >
> >          address = gpa + offset_level_address(i, level);
> >          if ( next_level >= 1 )
> > -            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
> > +            vtd_dump_page_table_level(pfn_to_paddr(pte->addr), next_level,
> >                                        address, indent + 1);
> >          else
> >              printk("%*sdfn: %08lx mfn: %08lx\n",
> >                     indent, "",
> >                     (unsigned long)(address >> PAGE_SHIFT_4K),
> > -                   (unsigned long)(dma_pte_addr(*pte) >> PAGE_SHIFT_4K));
> > +                   (unsigned long)(pte->addr));
>
> Could you also drop the no longer needed pair of parentheses. I
> further suspect the cast isn't needed (anymore?). (Otoh I think
> I recall oddities with gcc's printf()-style format checking and
> direct passing of bitfields. But if that's a problem, I think
> one of the earlier ones already introduced such an issue. So
> perhaps we can wait until someone actually confirms there is an
> issue - quite likely this someone would be me anyway.)
>
> > --- a/xen/drivers/passthrough/vtd/iommu.h
> > +++ b/xen/drivers/passthrough/vtd/iommu.h
> > @@ -244,38 +244,21 @@ struct context_entry {
> >  #define level_size(l) (1 << level_to_offset_bits(l))
> >  #define align_to_level(addr, l) ((addr + level_size(l) - 1) & level_mask(l))
> >
> > -/*
> > - * 0: readable
> > - * 1: writable
> > - * 2-6: reserved
> > - * 7: super page
> > - * 8-11: available
> > - * 12-63: Host physcial address
> > - */
> >  struct dma_pte {
> > -    u64 val;
> > +    union {
> > +        uint64_t val;
> > +        struct {
> > +            bool r:1;
> > +            bool w:1;
> > +            unsigned int reserved0:1;

'X' bit is ignored instead of reserved when execute request is not
reported or disabled.

Thanks
Kevin

> > +            unsigned int ignored0:4;
> > +            bool ps:1;
> > +            unsigned int ignored1:3;
> > +            bool snp:1;
> > +            uint64_t addr:52;
>
> As per the doc I look at this extends only to bit 51 at most.
> Above are 11 ignored bits and (in leaf entries) the TM one.
>
> Considering the differences between leaf and intermediate
> entries, perhaps leaf-only fields could gain a brief comment?
>
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 06:13:46 2020
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "Cooper, Andrew" <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
Subject: RE: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
Thread-Topic: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
Thread-Index: AQHWxNzT5+yOOE/aLkyGjNnv/rbw96ngL6Og
Date: Mon, 30 Nov 2020 06:13:22 +0000
Message-ID: <MWHPR11MB1645257FCF6DF38A68310ABF8CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
In-Reply-To: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ee94c25-3402-4545-58ba-08d894f70abc
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 06:13:22.9744
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: w61Sd6P+JnJDEARuwcKXd31RfhLAk3hKmvvOoeh6v25dX3S3PFw7uRhTMfVJbSM6kTgaeD+p+djifK9ZkNzeiQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR11MB5012
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Saturday, November 28, 2020 12:46 AM
> 
> Containing still in flight DMA was introduced to work around certain
> devices / systems hanging hard upon hitting a "not-present" IOMMU fault.
> Passing through (such) devices (on such systems) is inherently insecure
> (as guests could easily arrange for IOMMU faults of any kind to occur).
> Defaulting to a mode where admins may not even become aware of issues
> with devices can be considered undesirable. Therefore convert this mode
> of operation to an optional one, not one enabled by default.
> 
> This involves resurrecting code commit ea38867831da ("x86 / iommu: set
> up a scratch page in the quarantine domain") did remove, in a slightly
> extended and abstracted fashion. Here, instead of reintroducing a pretty
> pointless use of "goto" in domain_context_unmap(), and instead of making
> the function (at least temporarily) inconsistent, take the opportunity
> and replace the other similarly pointless "goto" as well.
> 
> In order to key the re-instated bypasses off of there (not) being a root
> page table this further requires moving the allocate_domain_resources()
> invocation from reassign_device() to amd_iommu_setup_domain_device()
> (or
> else reassign_device() would allocate a root page table anyway); this is
> benign to the second caller of the latter function.
> 
> Take the opportunity and also limit the control to builds supporting
> PCI.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Hi, Jan,

Overall this is a nice improvement. Just one comment below...

> ---
> v4: "full" -> "scratch_page". Duplicate Kconfig help text into command
>     line doc. Re-base.
> v3: IOMMU_quarantine_basic -> IOMMU_quarantine_fault,
>     IOMMU_quarantine_full -> IOMMU_quarantine_write_fault. Kconfig
>     option (choice) to select default. Limit to HAS_PCI.
> v2: Don't use true/false. Introduce QUARANTINE_SKIP() (albeit I'm not
>     really convinced this is an improvement). Add comment.
> 
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1278,7 +1278,7 @@ detection of systems known to misbehave
>  > Default: `new` unless directed-EOI is supported
> 
>  ### iommu
> -    = List of [ <bool>, verbose, debug, force, required, quarantine,
> +    = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-
> page],
>                   sharept, intremap, intpost, crash-disable,
>                   snoop, qinval, igfx, amd-iommu-perdev-intremap,
>                   dom0-{passthrough,strict} ]
> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
>      will prevent Xen from booting if IOMMUs aren't discovered and enabled
>      successfully.
> 
> -*   The `quarantine` boolean can be used to control Xen's behavior when
> -    de-assigning devices from guests.  If enabled (the default), Xen always
> +*   The `quarantine` option can be used to control Xen's behavior when
> +    de-assigning devices from guests.
> +
> +    When a PCI device is assigned to an untrusted domain, it is possible
> +    for that domain to program the device to DMA to an arbitrary address.
> +    The IOMMU is used to protect the host from malicious DMA by making
> +    sure that the device addresses can only target memory assigned to the
> +    guest.  However, when the guest domain is torn down, assigning the
> +    device back to the hardware domain would allow any in-flight DMA to
> +    potentially target critical host data.  To avoid this, quarantining
> +    should be enabled.  Quarantining can be done in two ways: In its basic
> +    form, all in-flight DMA will simply be forced to encounter IOMMU
> +    faults.  Since there are systems where doing so can cause host lockup,
> +    an alternative form is available where writes to memory will be made
> +    fault, but reads will be directed to a dummy page.  The implication
> +    here is that such reads will go unnoticed, i.e. an admin may not
> +    become aware of the underlying problem.
> +
> +    Therefore, if this option is set to true (the default), Xen always
>      quarantines such devices; they must be explicitly assigned back to Dom0
> -    before they can be used there again.  If disabled, Xen will only
> -    quarantine devices the toolstack hass arranged for getting quarantined.
> +    before they can be used there again.  If set to "scratch-page", still
> +    active DMA reads will additionally be directed to a "scratch" page.  If
> +    set to false, Xen will only quarantine devices the toolstack has arranged
> +    for getting quarantined.

Here let's be clear about the quarantine policy when the quarantine
devices are arranged by toolstack. Based on this patch it is the 'basic'
form i.e. always getting IOMMU faults for such devices.

One may further ask whether we should allow toolstack to specify
the quarantine form for such devices. I don't have a strong opinion
here, as imo the users should be encouraged to always do quarantine
then the toolstack way is more like a niche thing and could be kept
simple like this patch.

Thanks
Kevin

> +
> +    This option is only valid on builds supporting PCI.
> 
>  *   The `sharept` boolean controls whether the IOMMU pagetables are
> shared
>      with the CPU-side HAP pagetables, or allocated separately.  Sharing
> --- a/xen/drivers/passthrough/Kconfig
> +++ b/xen/drivers/passthrough/Kconfig
> @@ -28,3 +28,31 @@ endif
> 
>  config IOMMU_FORCE_PT_SHARE
>  	bool
> +
> +choice
> +	prompt "IOMMU device quarantining default behavior"
> +	depends on HAS_PCI
> +	default IOMMU_QUARANTINE_BASIC
> +	---help---
> +	  When a PCI device is assigned to an untrusted domain, it is possible
> +	  for that domain to program the device to DMA to an arbitrary
> address.
> +	  The IOMMU is used to protect the host from malicious DMA by
> making
> +	  sure that the device addresses can only target memory assigned to
> the
> +	  guest.  However, when the guest domain is torn down, assigning the
> +	  device back to the hardware domain would allow any in-flight DMA
> to
> +	  potentially target critical host data.  To avoid this, quarantining
> +	  should be enabled.  Quarantining can be done in two ways: In its
> basic
> +	  form, all in-flight DMA will simply be forced to encounter IOMMU
> +	  faults.  Since there are systems where doing so can cause host
> lockup,
> +	  an alternative form is available where writes to memory will be
> made
> +	  fault, but reads will be directed to a dummy page.  The implication
> +	  here is that such reads will go unnoticed, i.e. an admin may not
> +	  become aware of the underlying problem.
> +
> +	config IOMMU_QUARANTINE_NONE
> +		bool "none"
> +	config IOMMU_QUARANTINE_BASIC
> +		bool "basic"
> +	config IOMMU_QUARANTINE_SCRATCH_PAGE
> +		bool "scratch page"
> +endchoice
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -25,6 +25,9 @@
>  #include "iommu.h"
>  #include "../ats.h"
> 
> +/* dom_io is used as a sentinel for quarantined devices */
> +#define QUARANTINE_SKIP(d) ((d) == dom_io && !dom_iommu(d)-
> >arch.amd.root_table)
> +
>  static bool_t __read_mostly init_done;
> 
>  static const struct iommu_init_ops _iommu_init_ops;
> @@ -81,19 +84,36 @@ int get_dma_requestor_id(uint16_t seg, u
>      return req_id;
>  }
> 
> -static void amd_iommu_setup_domain_device(
> +static int __must_check allocate_domain_resources(struct domain *d)
> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +    int rc;
> +
> +    spin_lock(&hd->arch.mapping_lock);
> +    rc = amd_iommu_alloc_root(d);
> +    spin_unlock(&hd->arch.mapping_lock);
> +
> +    return rc;
> +}
> +
> +static int __must_check amd_iommu_setup_domain_device(
>      struct domain *domain, struct amd_iommu *iommu,
>      uint8_t devfn, struct pci_dev *pdev)
>  {
>      struct amd_iommu_dte *table, *dte;
>      unsigned long flags;
> -    int req_id, valid = 1;
> +    int req_id, valid = 1, rc;
>      u8 bus = pdev->bus;
> -    const struct domain_iommu *hd = dom_iommu(domain);
> +    struct domain_iommu *hd = dom_iommu(domain);
> 
> -    BUG_ON( !hd->arch.amd.root_table ||
> -            !hd->arch.amd.paging_mode ||
> -            !iommu->dev_table.buffer );
> +    if ( QUARANTINE_SKIP(domain) )
> +        return 0;
> +
> +    BUG_ON(!hd->arch.amd.paging_mode || !iommu->dev_table.buffer);
> +
> +    rc = allocate_domain_resources(domain);
> +    if ( rc )
> +        return rc;
> 
>      if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
>          valid = 0;
> @@ -151,6 +171,8 @@ static void amd_iommu_setup_domain_devic
> 
>          amd_iommu_flush_iotlb(devfn, pdev,
> INV_IOMMU_ALL_PAGES_ADDRESS, 0);
>      }
> +
> +    return 0;
>  }
> 
>  int __init acpi_ivrs_init(void)
> @@ -222,18 +244,6 @@ int amd_iommu_alloc_root(struct domain *
>      return 0;
>  }
> 
> -static int __must_check allocate_domain_resources(struct domain *d)
> -{
> -    struct domain_iommu *hd = dom_iommu(d);
> -    int rc;
> -
> -    spin_lock(&hd->arch.mapping_lock);
> -    rc = amd_iommu_alloc_root(d);
> -    spin_unlock(&hd->arch.mapping_lock);
> -
> -    return rc;
> -}
> -
>  static int amd_iommu_domain_init(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> @@ -283,6 +293,9 @@ static void amd_iommu_disable_domain_dev
>      int req_id;
>      u8 bus = pdev->bus;
> 
> +    if ( QUARANTINE_SKIP(domain) )
> +        return;
> +
>      BUG_ON ( iommu->dev_table.buffer == NULL );
>      req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
>      table = iommu->dev_table.buffer;
> @@ -349,11 +362,10 @@ static int reassign_device(struct domain
>          pdev->domain = target;
>      }
> 
> -    rc = allocate_domain_resources(target);
> +    rc = amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
>      if ( rc )
>          return rc;
> 
> -    amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
>      AMD_IOMMU_DEBUG("Re-assign %pp from dom%d to dom%d\n",
>                      &pdev->sbdf, source->domain_id, target->domain_id);
> 
> @@ -451,8 +463,7 @@ static int amd_iommu_add_device(u8 devfn
>          spin_unlock_irqrestore(&iommu->lock, flags);
>      }
> 
> -    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
> -    return 0;
> +    return amd_iommu_setup_domain_device(pdev->domain, iommu, devfn,
> pdev);
>  }
> 
>  static int amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -31,9 +31,24 @@ bool_t __initdata iommu_enable = 1;
>  bool_t __read_mostly iommu_enabled;
>  bool_t __read_mostly force_iommu;
>  bool_t __read_mostly iommu_verbose;
> -bool __read_mostly iommu_quarantine = true;
>  bool_t __read_mostly iommu_crash_disable;
> 
> +#define IOMMU_quarantine_none        0 /* aka false */
> +#define IOMMU_quarantine_fault       1 /* aka true */
> +#define IOMMU_quarantine_write_fault 2
> +#ifdef CONFIG_HAS_PCI
> +uint8_t __read_mostly iommu_quarantine =
> +# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
> +    IOMMU_quarantine_none;
> +# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
> +    IOMMU_quarantine_fault;
> +# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
> +    IOMMU_quarantine_write_fault;
> +# endif
> +#else
> +# define iommu_quarantine IOMMU_quarantine_none
> +#endif /* CONFIG_HAS_PCI */
> +
>  static bool __hwdom_initdata iommu_hwdom_none;
>  bool __hwdom_initdata iommu_hwdom_strict;
>  bool __read_mostly iommu_hwdom_passthrough;
> @@ -64,8 +79,12 @@ static int __init parse_iommu_param(cons
>          else if ( (val = parse_boolean("force", s, ss)) >= 0 ||
>                    (val = parse_boolean("required", s, ss)) >= 0 )
>              force_iommu = val;
> +#ifdef CONFIG_HAS_PCI
>          else if ( (val = parse_boolean("quarantine", s, ss)) >= 0 )
>              iommu_quarantine = val;
> +        else if ( ss == s + 15 && !strncmp(s, "quarantine=scratch-page", 23) )
> +            iommu_quarantine = IOMMU_quarantine_write_fault;
> +#endif
>  #ifdef CONFIG_X86
>          else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
>              iommu_igfx = val;
> @@ -425,7 +444,7 @@ static int __init iommu_quarantine_init(
>      dom_io->options |= XEN_DOMCTL_CDF_iommu;
> 
>      rc = iommu_domain_init(dom_io, 0);
> -    if ( rc )
> +    if ( rc || iommu_quarantine < IOMMU_quarantine_write_fault )
>          return rc;
> 
>      if ( !hd->platform_ops->quarantine_init )
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -42,6 +42,9 @@
>  #include "vtd.h"
>  #include "../ats.h"
> 
> +/* dom_io is used as a sentinel for quarantined devices */
> +#define QUARANTINE_SKIP(d) ((d) == dom_io && !dom_iommu(d)-
> >arch.vtd.pgd_maddr)
> +
>  struct mapped_rmrr {
>      struct list_head list;
>      u64 base, end;
> @@ -1289,6 +1292,9 @@ int domain_context_mapping_one(
>      int agaw, rc, ret;
>      bool_t flush_dev_iotlb;
> 
> +    if ( QUARANTINE_SKIP(domain) )
> +        return 0;
> +
>      ASSERT(pcidevs_locked());
>      spin_lock(&iommu->lock);
>      maddr = bus_to_context_maddr(iommu, bus);
> @@ -1536,6 +1542,9 @@ int domain_context_unmap_one(
>      int iommu_domid, rc, ret;
>      bool_t flush_dev_iotlb;
> 
> +    if ( QUARANTINE_SKIP(domain) )
> +        return 0;
> +
>      ASSERT(pcidevs_locked());
>      spin_lock(&iommu->lock);
> 
> @@ -1597,7 +1606,7 @@ static int domain_context_unmap(struct d
>  {
>      struct acpi_drhd_unit *drhd;
>      struct vtd_iommu *iommu;
> -    int ret = 0;
> +    int ret;
>      u8 seg = pdev->seg, bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
>      int found = 0;
> 
> @@ -1612,14 +1621,12 @@ static int domain_context_unmap(struct d
>          if ( iommu_debug )
>              printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n",
>                     domain, &PCI_SBDF3(seg, bus, devfn));
> -        if ( !is_hardware_domain(domain) )
> -            return -EPERM;
> -        goto out;
> +        return is_hardware_domain(domain) ? 0 : -EPERM;
> 
>      case DEV_TYPE_PCIe_BRIDGE:
>      case DEV_TYPE_PCIe2PCI_BRIDGE:
>      case DEV_TYPE_LEGACY_PCI_BRIDGE:
> -        goto out;
> +        return 0;
> 
>      case DEV_TYPE_PCIe_ENDPOINT:
>          if ( iommu_debug )
> @@ -1661,10 +1668,12 @@ static int domain_context_unmap(struct d
>      default:
>          dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
>                  domain, pdev->type, &PCI_SBDF3(seg, bus, devfn));
> -        ret = -EINVAL;
> -        goto out;
> +        return -EINVAL;
>      }
> 
> +    if ( ret || QUARANTINE_SKIP(domain) )
> +        return ret;
> +
>      /*
>       * if no other devices under the same iommu owned by this domain,
>       * clear iommu in iommu_bitmap and clear domain_id in domid_bitmp
> @@ -1699,8 +1708,7 @@ static int domain_context_unmap(struct d
>          iommu->domid_map[iommu_domid] = 0;
>      }
> 
> -out:
> -    return ret;
> +    return 0;
>  }
> 
>  static void iommu_domain_teardown(struct domain *d)
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -53,7 +53,9 @@ static inline bool_t dfn_eq(dfn_t x, dfn
>  }
> 
>  extern bool_t iommu_enable, iommu_enabled;
> -extern bool force_iommu, iommu_quarantine, iommu_verbose;
> +extern bool force_iommu, iommu_verbose;
> +/* Boolean except for the specific purposes of
> drivers/passthrough/iommu.c. */
> +extern uint8_t iommu_quarantine;
> 
>  #ifdef CONFIG_X86
>  extern enum __packed iommu_intremap {


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 07:35:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 07:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40674.73602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjdih-00073T-E1; Mon, 30 Nov 2020 07:35:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40674.73602; Mon, 30 Nov 2020 07:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjdih-00073M-B4; Mon, 30 Nov 2020 07:35:35 +0000
Received: by outflank-mailman (input) for mailman id 40674;
 Mon, 30 Nov 2020 07:35:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjdig-00073H-1B
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 07:35:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be8e66e1-26ce-42d9-b49d-adc60253e2c2;
 Mon, 30 Nov 2020 07:35:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75E43AC8F;
 Mon, 30 Nov 2020 07:35:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be8e66e1-26ce-42d9-b49d-adc60253e2c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606721731; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2lNdcgcw4Oq/TdpjNWbuo+EE/TpgKTTUxuSiEeo+3Dg=;
	b=RcYanMpn7DNoZNiOlzmIJ7rbb97UIeoFattq2DiIlmk9ZFbdr0YxYJKa9cno3Happd/0vH
	XkiK6zmtqDlyWKKu2mzUScIUbPIsPrJ9yQb0bJ0stEu1FwDiSQxdjxwKqG4Mn1ITGU+t5+
	phpGdMknzganw4Mhtfgl+tfHkJb+pt4=
Subject: Re: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "Cooper, Andrew" <andrew.cooper3@citrix.com>, Paul Durrant
 <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
 <MWHPR11MB1645257FCF6DF38A68310ABF8CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9ad2b898-16d8-9f80-b6ef-8f618419d369@suse.com>
Date: Mon, 30 Nov 2020 08:35:27 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <MWHPR11MB1645257FCF6DF38A68310ABF8CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 07:13, Tian, Kevin wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Saturday, November 28, 2020 12:46 AM
>>
>> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
>>      will prevent Xen from booting if IOMMUs aren't discovered and enabled
>>      successfully.
>>
>> -*   The `quarantine` boolean can be used to control Xen's behavior when
>> -    de-assigning devices from guests.  If enabled (the default), Xen always
>> +*   The `quarantine` option can be used to control Xen's behavior when
>> +    de-assigning devices from guests.
>> +
>> +    When a PCI device is assigned to an untrusted domain, it is possible
>> +    for that domain to program the device to DMA to an arbitrary address.
>> +    The IOMMU is used to protect the host from malicious DMA by making
>> +    sure that the device addresses can only target memory assigned to the
>> +    guest.  However, when the guest domain is torn down, assigning the
>> +    device back to the hardware domain would allow any in-flight DMA to
>> +    potentially target critical host data.  To avoid this, quarantining
>> +    should be enabled.  Quarantining can be done in two ways: In its basic
>> +    form, all in-flight DMA will simply be forced to encounter IOMMU
>> +    faults.  Since there are systems where doing so can cause host lockup,
>> +    an alternative form is available where writes to memory will be made
>> +    fault, but reads will be directed to a dummy page.  The implication
>> +    here is that such reads will go unnoticed, i.e. an admin may not
>> +    become aware of the underlying problem.
>> +
>> +    Therefore, if this option is set to true (the default), Xen always
>>      quarantines such devices; they must be explicitly assigned back to Dom0
>> -    before they can be used there again.  If disabled, Xen will only
>> -    quarantine devices the toolstack hass arranged for getting quarantined.
>> +    before they can be used there again.  If set to "scratch-page", still
>> +    active DMA reads will additionally be directed to a "scratch" page.  If
>> +    set to false, Xen will only quarantine devices the toolstack has arranged
>> +    for getting quarantined.
> 
> Here let's be clear about the quarantine policy when the quarantine
> devices are arranged by toolstack. Based on this patch it is the 'basic'
> form i.e. always getting IOMMU faults for such devices.

Well, the policy is always as chosen via command line. Therefore do
you perhaps merely mean the default mode to be spelled out? This is
already the case at the beginning of the 2nd paragraph.

> One may further ask whether we should allow toolstack to specify 
> the quarantine form for such devices. I don't have a strong opinion
> here, as imo the users should be encouraged to always do quarantine
> then the toolstack way is more like a niche thing and could be kept
> simple like this patch.

That's certainly a further option, but none I'd like to pursue right
away.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 08:05:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 08:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40690.73615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjeBP-0002BU-Va; Mon, 30 Nov 2020 08:05:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "Cooper, Andrew" <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
Thread-Topic: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
Thread-Index: AQHWxNzT5+yOOE/aLkyGjNnv/rbw96ngL6OggAAdYYCAAAakUA==
Date: Mon, 30 Nov 2020 08:05:00 +0000
Message-ID: <MWHPR11MB1645ED0ED102DE2903B302878CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
 <MWHPR11MB1645257FCF6DF38A68310ABF8CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
 <9ad2b898-16d8-9f80-b6ef-8f618419d369@suse.com>
In-Reply-To: <9ad2b898-16d8-9f80-b6ef-8f618419d369@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Monday, November 30, 2020 3:35 PM
> 
> On 30.11.2020 07:13, Tian, Kevin wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Saturday, November 28, 2020 12:46 AM
> >>
> >> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
> >>      will prevent Xen from booting if IOMMUs aren't discovered and
> enabled
> >>      successfully.
> >>
> >> -*   The `quarantine` boolean can be used to control Xen's behavior when
> >> -    de-assigning devices from guests.  If enabled (the default), Xen always
> >> +*   The `quarantine` option can be used to control Xen's behavior when
> >> +    de-assigning devices from guests.
> >> +
> >> +    When a PCI device is assigned to an untrusted domain, it is possible
> >> +    for that domain to program the device to DMA to an arbitrary address.
> >> +    The IOMMU is used to protect the host from malicious DMA by making
> >> +    sure that the device addresses can only target memory assigned to the
> >> +    guest.  However, when the guest domain is torn down, assigning the
> >> +    device back to the hardware domain would allow any in-flight DMA to
> >> +    potentially target critical host data.  To avoid this, quarantining
> >> +    should be enabled.  Quarantining can be done in two ways: In its basic
> >> +    form, all in-flight DMA will simply be forced to encounter IOMMU
> >> +    faults.  Since there are systems where doing so can cause host lockup,
> >> +    an alternative form is available where writes to memory will be made
> >> +    fault, but reads will be directed to a dummy page.  The implication
> >> +    here is that such reads will go unnoticed, i.e. an admin may not
> >> +    become aware of the underlying problem.
> >> +
> >> +    Therefore, if this option is set to true (the default), Xen always
> >>      quarantines such devices; they must be explicitly assigned back to
> Dom0
> >> -    before they can be used there again.  If disabled, Xen will only
> >> -    quarantine devices the toolstack hass arranged for getting quarantined.
> >> +    before they can be used there again.  If set to "scratch-page", still
> >> +    active DMA reads will additionally be directed to a "scratch" page.  If
> >> +    set to false, Xen will only quarantine devices the toolstack has
> arranged
> >> +    for getting quarantined.
> >
> > Here let's be clear about the quarantine policy when the quarantine
> > devices are arranged by toolstack. Based on this patch it is the 'basic'
> > form i.e. always getting IOMMU faults for such devices.
> 
> Well, the policy is always as chosen via command line. Therefore do
> you perhaps merely mean the default mode to be spelled out? This is
> already the case at the beginning of the 2nd paragraph.

When I read above paragraphs, it's clear about the enabled case where
two quarantine forms are available (basic vs. scratch-page) and how to
choose them, but it's not crystal clear about the disabled case which
form is assumed for toolstack-managed devices, from an user p.o.v.

Thanks,
Kevin
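
For reference, the three quarantine settings under discussion map onto
Xen's boot command line roughly as follows. This is a sketch based on the
patch text quoted above, not on merged code; the exact spelling of the
sub-options is an assumption and should be checked against the
docs/misc/xen-command-line.pandoc of the Xen version in use:

```
# Illustrative Xen boot-line fragments (hypothetical syntax, per the
# quoted v4 patch):

# Basic quarantining (the default): in-flight DMA from a de-assigned
# device encounters IOMMU faults until the device is explicitly
# assigned back to Dom0.
iommu=quarantine

# Scratch-page form: DMA writes still fault, but reads are satisfied
# from a scratch page -- avoids host lockups on some systems, at the
# cost of such reads going unnoticed by the admin.
iommu=quarantine=scratch-page

# Disabled: Xen only quarantines devices the toolstack has arranged
# for getting quarantined.
iommu=no-quarantine
```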


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 08:14:44 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157099-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157099: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 08:14:37 +0000

flight 157099 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157099/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f91a3aa6bce480fe6e08df540129f4a923222419
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  121 days
Failing since        152366  2020-08-01 20:49:34 Z  120 days  204 attempts
Testing same since   157099  2020-11-29 21:09:17 Z    0 days    1 attempts

------------------------------------------------------------
3619 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 693037 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 08:33:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 08:33:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40719.73642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjecH-000542-Lo; Mon, 30 Nov 2020 08:33:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40719.73642; Mon, 30 Nov 2020 08:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjecH-00053v-Gz; Mon, 30 Nov 2020 08:33:01 +0000
Received: by outflank-mailman (input) for mailman id 40719;
 Mon, 30 Nov 2020 08:33:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjecG-00053q-9D
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 08:33:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b794f48e-d5ea-4726-8f27-58ca90e7ab10;
 Mon, 30 Nov 2020 08:32:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AD63BAC6A;
 Mon, 30 Nov 2020 08:32:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b794f48e-d5ea-4726-8f27-58ca90e7ab10
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606725176; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/Ydry4higSvtBV6mILjaf+1YKXT5bhOQm95fx+ARtAM=;
	b=JWbhHEHKZF5tTMohSP2BVW/98rk6cmKy6Av6TjGMFMLBt/k1EV+/3eJmhltwcN+l6e1amS
	jOuD6eXxofs9xvCYgkshYNti9cR9chtmKlLLyN2f7UZTxI2in7zp5TYjI9ds6rVN1YBuai
	6Pc8GS1ycHSc/iBzuh8eGKVvKMlxqmE=
Subject: Re: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "Cooper, Andrew" <andrew.cooper3@citrix.com>, Paul Durrant
 <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
 <MWHPR11MB1645257FCF6DF38A68310ABF8CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
 <9ad2b898-16d8-9f80-b6ef-8f618419d369@suse.com>
 <MWHPR11MB1645ED0ED102DE2903B302878CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1ae87896-1656-e383-7725-22414a8e58cd@suse.com>
Date: Mon, 30 Nov 2020 09:32:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <MWHPR11MB1645ED0ED102DE2903B302878CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 09:05, Tian, Kevin wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Monday, November 30, 2020 3:35 PM
>>
>> On 30.11.2020 07:13, Tian, Kevin wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Saturday, November 28, 2020 12:46 AM
>>>>
>>>> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
>>>>      will prevent Xen from booting if IOMMUs aren't discovered and
>> enabled
>>>>      successfully.
>>>>
>>>> -*   The `quarantine` boolean can be used to control Xen's behavior when
>>>> -    de-assigning devices from guests.  If enabled (the default), Xen always
>>>> +*   The `quarantine` option can be used to control Xen's behavior when
>>>> +    de-assigning devices from guests.
>>>> +
>>>> +    When a PCI device is assigned to an untrusted domain, it is possible
>>>> +    for that domain to program the device to DMA to an arbitrary address.
>>>> +    The IOMMU is used to protect the host from malicious DMA by making
>>>> +    sure that the device addresses can only target memory assigned to the
>>>> +    guest.  However, when the guest domain is torn down, assigning the
>>>> +    device back to the hardware domain would allow any in-flight DMA to
>>>> +    potentially target critical host data.  To avoid this, quarantining
>>>> +    should be enabled.  Quarantining can be done in two ways: In its basic
>>>> +    form, all in-flight DMA will simply be forced to encounter IOMMU
>>>> +    faults.  Since there are systems where doing so can cause host lockup,
>>>> +    an alternative form is available where writes to memory will be made
>>>> +    fault, but reads will be directed to a dummy page.  The implication
>>>> +    here is that such reads will go unnoticed, i.e. an admin may not
>>>> +    become aware of the underlying problem.
>>>> +
>>>> +    Therefore, if this option is set to true (the default), Xen always
>>>>      quarantines such devices; they must be explicitly assigned back to
>> Dom0
>>>> -    before they can be used there again.  If disabled, Xen will only
>>>> -    quarantine devices the toolstack hass arranged for getting quarantined.
>>>> +    before they can be used there again.  If set to "scratch-page", still
>>>> +    active DMA reads will additionally be directed to a "scratch" page.  If
>>>> +    set to false, Xen will only quarantine devices the toolstack has
>> arranged
>>>> +    for getting quarantined.
>>>
>>> Here let's be clear about the quarantine policy when the quarantine
>>> devices are arranged by toolstack. Based on this patch it is the 'basic'
>>> form i.e. always getting IOMMU faults for such devices.
>>
>> Well, the policy is always as chosen via command line. Therefore do
>> you perhaps merely mean the default mode to be spelled out? This is
>> already the case at the beginning of the 2nd paragraph.
> 
> When I read above paragraphs, it's clear about the enabled case where
> two quarantine forms are available (basic vs. scratch-page) and how to
> choose them, but it's not crystal clear, for the disabled case, which
> form is assumed for toolstack-managed devices, from a user's p.o.v.

Oh, now I think I got what you mean. I've added '..., and only in the
"basic" form' to that last sentence.

Jan
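
[Archive editor's note: for readers following the thread, the three-way
`quarantine` setting discussed above would be selected on the Xen command
line roughly as sketched below. This is an illustration based on the quoted
v4 documentation; the exact option spelling is as proposed in the patch and
may differ in the committed version.]

```shell
# Default: quarantine in its "basic" form - any in-flight DMA from a
# de-assigned device simply encounters IOMMU faults.
iommu=quarantine

# Proposed alternative form: writes still fault, but still-active DMA
# reads are directed to a dummy "scratch" page, for systems where
# unhandled IOMMU faults can lock up the host.
iommu=quarantine=scratch-page

# Disabled: Xen only quarantines devices the toolstack has arranged for
# quarantining (and, per this thread, only in the "basic" form).
iommu=no-quarantine
```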


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 08:37:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 08:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40730.73654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjegT-0005HT-5Q; Mon, 30 Nov 2020 08:37:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40730.73654; Mon, 30 Nov 2020 08:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjegT-0005HM-2E; Mon, 30 Nov 2020 08:37:21 +0000
Received: by outflank-mailman (input) for mailman id 40730;
 Mon, 30 Nov 2020 08:37:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjegS-0005HE-KR; Mon, 30 Nov 2020 08:37:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjegS-0007PK-A0; Mon, 30 Nov 2020 08:37:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjegR-00078o-Tk; Mon, 30 Nov 2020 08:37:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjegR-0001NL-TH; Mon, 30 Nov 2020 08:37:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T3B1+NK/GPl0UUaQ60tDWXEefNiOnCCk+LNwhS4qyUQ=; b=0qmS9RJJ02T0uy8FObZlmFKyM0
	Hth/J0ntBpEcXLvxnu79OtJ6LxzE8StI3Ts995VGbl6B3qpcMTVcuthogumjWybrxLaPXuTYLlkBe
	zJznNav3rv+DMhYVYb1fE5sAGvCI4DJ5qVtvH/IMtHweUyFjcGrBJXM9mdopMhqpnSB0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157106-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 157106: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:build-armhf-libvirt:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=5d789c7b37721c1dd3b4e9a3732399bf66603737
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 08:37:19 +0000

flight 157106 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157106/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 build-armhf-libvirt           2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              5d789c7b37721c1dd3b4e9a3732399bf66603737
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  143 days
Failing since        151818  2020-07-11 04:18:52 Z  142 days  137 attempts
Testing same since   157067  2020-11-28 04:19:31 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29761 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 09:46:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 09:46:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40755.73669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjfkv-0003FD-88; Mon, 30 Nov 2020 09:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40755.73669; Mon, 30 Nov 2020 09:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjfkv-0003F6-4p; Mon, 30 Nov 2020 09:46:01 +0000
Received: by outflank-mailman (input) for mailman id 40755;
 Mon, 30 Nov 2020 09:45:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjfkt-0003Ez-J4
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 09:45:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c69cc95-e9b1-4842-bd01-fe5f7c0f8bc1;
 Mon, 30 Nov 2020 09:45:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 35E61AC6A;
 Mon, 30 Nov 2020 09:45:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c69cc95-e9b1-4842-bd01-fe5f7c0f8bc1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606729556; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=717ZnlDtzkAXums8kkcz9Mg9J24T5NL/mgFvBcE1vKI=;
	b=gD0pigSkU2DljCnzXV3mPoF+0g0S60VuCwG5Ox9ZTJZNul+cbDamRyxisrpLf3ixBahhy9
	tJt/A5MeG/qyvN1m6tsq475p8IC8AK79JuwavF8sSXjUeT1dLUiz2WzTLFh1Nvaab1yXPG
	2eNdTJvyvAQtzjho+gULkKMyTuUAdqo=
Subject: Re: [PATCH v10 5/7] vtd: use a bit field for root_entry
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Paul Durrant <pdurrant@amazon.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20201120132440.1141-1-paul@xen.org>
 <20201120132440.1141-6-paul@xen.org>
 <MWHPR11MB164520264945AF959D7A3ED28CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5962cbc3-5aaf-7855-e00d-fb525441f454@suse.com>
Date: Mon, 30 Nov 2020 10:45:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <MWHPR11MB164520264945AF959D7A3ED28CF50@MWHPR11MB1645.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 04:06, Tian, Kevin wrote:
>> From: Paul Durrant <paul@xen.org>
>> Sent: Friday, November 20, 2020 9:25 PM
>>
>> From: Paul Durrant <pdurrant@amazon.com>
>>
>> This makes the code a little easier to read and also makes it more consistent
>> with iremap_entry.
>>
>> Also take the opportunity to tidy up the implementation of
>> device_in_domain().
>>
>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Reviewed-by: <kevin.tian@intel.com>

Besides this looking a little odd (can be easily fixed of course)
I wonder whether both here and for patch 6 you had seen my requests
for smallish changes, and whether you meant to override those, or
whether your R-b will continue to apply with them made.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:00:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:00:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40765.73680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjfyt-00054I-Hv; Mon, 30 Nov 2020 10:00:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40765.73680; Mon, 30 Nov 2020 10:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjfyt-00054B-Eh; Mon, 30 Nov 2020 10:00:27 +0000
Received: by outflank-mailman (input) for mailman id 40765;
 Mon, 30 Nov 2020 10:00:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjfys-000546-D8
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:00:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c1a65e4-b029-4ca9-acc6-5e72e9e4b163;
 Mon, 30 Nov 2020 10:00:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AE01BACBA;
 Mon, 30 Nov 2020 10:00:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c1a65e4-b029-4ca9-acc6-5e72e9e4b163
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606730424; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=95uC2XHwBM4RWytiVJt2lQqM6w1sQucsXLg4nBaFplc=;
	b=TRQe+uT8V49+lW1EtQsorkD2aXfwKk5rsbnpjbCZZUpYdJeOwlQkcnqYpk9gECoOzbwirJ
	OEqbwS79KicDolM4sYnhFL1SWegCOD4wl0LrO5bZ9D7N8W8cw2pGsEFZnWb9qFN4Q8Mi2h
	ZlokNz2AJyq3xoZ4ZCmN9gXbsnuRPng=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20201127105948.ji5gxv4e7axrvgpo@Air-de-Roger>
 <e9610278-84e5-dc32-b568-8867011de4e4@suse.com>
 <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
Date: Mon, 30 Nov 2020 11:00:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201128171430.GB631@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 28.11.2020 18:14, Manuel Bouyer wrote:
> On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
>>> the trace is at
>>> http://www-soc.lip6.fr/~bouyer/xen-log13.txt
>>
>> Thanks! I think I've found the issue and I'm attaching a possible fix
>> (fix.patch) to this email. In any case I've also attached a further
>> debug patch, in case the fix turns out to be wrong. Please test the
>> fix first, as the debug patch will end up triggering a panic when the
>> buffer is full.
> 
> Yes, fix.patch does make the system boot as expected !

May I translate this to a Tested-by?

Patch also
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks much to both of you for all the effort here!

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:16:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:16:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40774.73693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgEB-0006Oc-VR; Mon, 30 Nov 2020 10:16:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40774.73693; Mon, 30 Nov 2020 10:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgEB-0006OV-Qp; Mon, 30 Nov 2020 10:16:15 +0000
Received: by outflank-mailman (input) for mailman id 40774;
 Mon, 30 Nov 2020 10:16:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjgEA-0006OK-8z
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:16:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c41cb51-c734-4156-b1f1-ad3f04dd2374;
 Mon, 30 Nov 2020 10:16:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E14EBACB5;
 Mon, 30 Nov 2020 10:16:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c41cb51-c734-4156-b1f1-ad3f04dd2374
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606731372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fFi+ssvb0SZ1fooUTaC0EQMDJk1EiU6HlH7OlP+oWQ0=;
	b=DJNZ9lcMRM5+oCGgrN4rhi+4nwj75VO7AQa/VL44QVRMBWDGN+2sLKcNHfWRiSswuRLoZR
	sDrna/BF327HgNCk3TPYTcOSq1OlUoX4ifzMJqVPT3K4K+LQEEHKvbejVhEkdlR7wQxAU4
	y+7aCuiW1X7HEY/WcRsRguPGPYC2Apk=
Subject: Re: [PATCH 04/16] x86/srat: vmap the pages for acpi_slit
To: Hongyan Xia <hx242@xen.org>
Cc: julien@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1588278317.git.hongyxia@amazon.com>
 <f4226fafcd333c0274fcee24601c280bf6494417.1588278317.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d41fee35-8889-3ab8-2a5e-f4b442747362@suse.com>
Date: Mon, 30 Nov 2020 11:16:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <f4226fafcd333c0274fcee24601c280bf6494417.1588278317.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.04.2020 22:44, Hongyan Xia wrote:
> --- a/xen/arch/x86/srat.c
> +++ b/xen/arch/x86/srat.c
> @@ -196,7 +196,8 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
>  		return;
>  	}
>  	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
> -	acpi_slit = mfn_to_virt(mfn_x(mfn));
> +	acpi_slit = vmap_boot_pages(mfn, PFN_UP(slit->header.length));
> +	BUG_ON(!acpi_slit);
>  	memcpy(acpi_slit, slit, slit->header.length);
>  }

I'm not sure to what extent this series is still to be considered
active / pending; I still have it in my inbox as something to
look at in any event. If it is, then I think that by this patch,
at the latest, it becomes clear that we either want to make
vmalloc() boot-allocator capable, or introduce e.g. vmalloc_boot().
Having this recurring pattern, including the somewhat odd
vmap_boot_pages(), is imo not the best way forward. It would
then also no longer be necessary to allocate contiguous pages,
as none of the users up to this point appear to have such a need.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:21:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40780.73705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgIw-0007aA-IQ; Mon, 30 Nov 2020 10:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40780.73705; Mon, 30 Nov 2020 10:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgIw-0007a1-Dw; Mon, 30 Nov 2020 10:21:10 +0000
Received: by outflank-mailman (input) for mailman id 40780;
 Mon, 30 Nov 2020 10:21:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjgIv-0007Zs-6q
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:21:09 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.50]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee263c8e-45e9-4625-a8af-09a7ac580664;
 Mon, 30 Nov 2020 10:21:06 +0000 (UTC)
Received: from AM0PR01CA0103.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:10e::44) by AS8PR08MB6280.eurprd08.prod.outlook.com
 (2603:10a6:20b:29b::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20; Mon, 30 Nov
 2020 10:21:04 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:10e:cafe::19) by AM0PR01CA0103.outlook.office365.com
 (2603:10a6:208:10e::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Mon, 30 Nov 2020 10:21:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Mon, 30 Nov 2020 10:21:04 +0000
Received: ("Tessian outbound fcd5bc555ddc:v71");
 Mon, 30 Nov 2020 10:21:04 +0000
Received: from eba43a1c45ac.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 61840E19-420E-4D49-A40D-939B83E99DC8.1; 
 Mon, 30 Nov 2020 10:20:47 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eba43a1c45ac.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Nov 2020 10:20:47 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3161.eurprd08.prod.outlook.com (2603:10a6:5:1d::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Mon, 30 Nov
 2020 10:20:46 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 10:20:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee263c8e-45e9-4625-a8af-09a7ac580664
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r49K6sJHi7SUcDVDS7ZRT+jJIk+0vIWsHW3Bm479OYE=;
 b=kvO8PiN13sQEJvBajPnc24vCxlwmFdlCce7ySdknPcv2EMUhVXcyfXilpA1IW5XEt8o1dhRrIocHetsM1mtd/LJOF60Qv/qZWEVrZiB3/lYfGlCcwDc7267JH+8EzwmVXwtleh+d669jDEd3kFLITD1jVe+xPpvGODBpdmQ0KCc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8bdb98d7fc6053d5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IOtycQn+JjNMvmnE7ka3Ht2HNinr7zVzARpdRHMmF0x0DlXD+Q3RoJPGwlN9ywYQGGop1bo9+kGmXkNu+jTgFHFsjrAegrNu70O56S3rLEnfcAvnP9nKbO5hMgKSpBuycU641fNDo9EU+ooRkMRdrubYmo7ZOl6EKIFP3sqSqhTot4cfobltuTBdw68oIGYuQK5OlHH6EQqmNlvBP8LR3sKkXG8muI9hLYjsOyPFMt6Mna+TyPHFOJyaQE88HnyLV7uPC8xUCa2I7tbw6Vf4XrtgLOsOtOl43KzVsX9sECJpofV+HfjfRNgiCmUrkiOd/D3u/kOAtln2/VJtrpkrLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r49K6sJHi7SUcDVDS7ZRT+jJIk+0vIWsHW3Bm479OYE=;
 b=I8/yMLCiyXNVI6iRa9lC4Hmzr4xlTZdXFHq9G/5+sAdb21P5Psmd88GdUI0vfm3EQWCEISvSarVbkqoSelp2UurUEHxt/gRumEYC5GIwxcyJtSoxJlS2E51IBTAOz1ELt4nOXexfshmVAKzl+Y0kwRF9h3+sRdoPzn+9WQfynvuXFF8Hm9ttV3kMmlyyU7zYLqfAhQnP9J5KMGIKoE2TjHa3QOnTTl+97oHgl30bEoZIbpMAjCvjkGCJqbwSXQVP0cQFnovbcS2dAw4FOhorme2nYsY9/kUHA5V9V5rSPLjkHR5BFsdgJhBXmPJoUOryByTGokjouQNccxHTrwZB4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r49K6sJHi7SUcDVDS7ZRT+jJIk+0vIWsHW3Bm479OYE=;
 b=kvO8PiN13sQEJvBajPnc24vCxlwmFdlCce7ySdknPcv2EMUhVXcyfXilpA1IW5XEt8o1dhRrIocHetsM1mtd/LJOF60Qv/qZWEVrZiB3/lYfGlCcwDc7267JH+8EzwmVXwtleh+d669jDEd3kFLITD1jVe+xPpvGODBpdmQ0KCc=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/7] xen/arm: Emulate ID registers
Thread-Topic: [PATCH 0/7] xen/arm: Emulate ID registers
Thread-Index: AQHWxPkEic+Kxyhsf02jxIedQiFc2qngevuA
Date: Mon, 30 Nov 2020 10:20:45 +0000
Message-ID: <1BAAADF6-9E29-4BE5-857D-A8B51EB80712@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>
 <45b8aac3-75a6-670f-d6f2-b427c497ee2d@citrix.com>
In-Reply-To: <45b8aac3-75a6-670f-d6f2-b427c497ee2d@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: bb3b533c-7999-4c91-bfd6-08d89519a4d8
x-ms-traffictypediagnostic: DB7PR08MB3161:|AS8PR08MB6280:
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB6280B38C3B7C4986760AD1459DF50@AS8PR08MB6280.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KrpwvuiOdEPguUvIld+T4FiAALiSk9FAr55HdDSvkW+/Isg4AXKNOSwHCV83xL0FsaaMOtSa4Aj1sHgNdW8CRIuidVPl2rBLEyst7re81+cl42URsK1FHYrWy6L5X7VP+xQ7Jx2jNNXtV0J9dgsWo7wg7qDXuZtY9+aeu7e29/V2wNxPmix7fKyYedr0hIlhNSKEsPEzjhh0BGg/smbyhZVV+c6HO76JYR1kLIJs51i5Tifq58YasX230ZRE2r0uFhZsGH5RlzAdB5oOr5F8uH6+l0RdcGaZiPYmw8NI4R+HbYt83tvl8jMZUWtV3b/+1m5mdRJaM+Wlo+XTxXuG3leg4wSxt+XP5JWO8fe9SvUpiiq8kh7hKzvygBR6H+aqXDxwl78JPPfUitfhYXOIZw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(396003)(136003)(39860400002)(346002)(6512007)(71200400001)(478600001)(64756008)(66446008)(8936002)(4326008)(86362001)(66556008)(53546011)(83380400001)(6506007)(26005)(5660300002)(66476007)(186003)(76116006)(66946007)(8676002)(91956017)(966005)(2906002)(6916009)(54906003)(2616005)(36756003)(316002)(33656002)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?vKKWH7YMRYknevoF0babZlCvF4BNDbKBaCULTPWK/aj9cqm/xrHCIl5+iAMD?=
 =?us-ascii?Q?U3PYHXc9mbXGtl/v+n5LMcaMO3JBpEwWHa+UBCrNHqA9pkWRaHb0Q9ehDInf?=
 =?us-ascii?Q?jxpDuihq6teDMQmnyqGa+6BZBV4PPKJgxqhfOcvDrMQ0wKY3AVMR7+gxxfUR?=
 =?us-ascii?Q?NjGCUtfAhPem8sUz7oMzUMbNL0giM43NaAIPtPB35NCewfN+brs8clf80KHm?=
 =?us-ascii?Q?9tMnPQtpVDjdSM2wxgnDWa89aqCuVz2Lf6VooayUeekjzIEJHJvYWjOFsx/S?=
 =?us-ascii?Q?jTq0TbylhDbqU8zMk/LBQmpBML8g+RDaUTPyQD93z4D1dqHyd+m2oDummpG3?=
 =?us-ascii?Q?tYMUH4l+bGEIvHRusLm3wetS+CI4xwRQr5meY5YepzjLtnU0EFAFoMaQUvWY?=
 =?us-ascii?Q?kJG5eU440cnKg99SU43i6GDx+JqSfmByXxBVocqPuPfnnAldxZ02AzCmvafH?=
 =?us-ascii?Q?KmJx9p3DDP1HlvCW7rTH30R+sklWqzLmr3IiDHXLgF/WOy/SP/yOHT78N9Vv?=
 =?us-ascii?Q?pAGxDD6VKd/R4xA1Hmg3b0tI6APttOmR3CqPQr+ZB+vN0MhDWJ+t3SVL8MdL?=
 =?us-ascii?Q?5MwjBuuPbsw3G6+5TIPVB3fTnWAyFkofYwEu1rMWFNmZPS/3MiFQ7CDtLdMM?=
 =?us-ascii?Q?2idS0L+ZY0Mj1XvvhgsuD1iuGdnjg7Em1aOy1XtezLrSH1y3i0Pc5RRXfzCx?=
 =?us-ascii?Q?UlkPHdmZHUDP7sivvp/QLWde1BlOMTpTbAwujkjiqY35wep61rjcWlzqwIf/?=
 =?us-ascii?Q?WHO25yzI+gch29UCOe13U2l2lr3NbLV/n5Cf3wrcsda6i+IZZSDBj3ZW1yLc?=
 =?us-ascii?Q?R8Oq1xlRD8FeAIS/D15gYyUgOa1uZfPy44oZ+Q1o25j+TnxAcW3uE0mTqLNO?=
 =?us-ascii?Q?2MUo2S3Ronz2zlxDeUQTux6BmoIPIrZjPjM7sHkAx9tebOmdV6ALhTgz+d2r?=
 =?us-ascii?Q?20ubsTVkJiApzKyVg1p+gz67d108qWa4IqPcfO8c9Jrc9/rxzn1lWXrU5pRz?=
 =?us-ascii?Q?3p4m?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9A8D1CFBE6179C4CB9BA35DE6A32303E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3161
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f8a7a757-7575-43c1-e78b-08d8951999c7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ArKLUjWrFtBoA6b0reMfx7ZxqmprfOvPJ2dc1zztWnHnew6M8qRByvBFgrPVnUaT90ZxJ+/mkmKpLjtClcKbi8ws/vWtjo5J8Dd86Kwr3J7pgNzAwUjGyCQBmxj65Rcu6RN4TbEdl9gXS7uTdHGHJ3Hb2y6/27DOc/L+4FrC4w1ggrvCUzzufCVCbZ2k/w+QuHdLezEsK9gE5QRwWGCXMxmr/ToJTpQociCDgdhTVEe5yCJkV1ZiNx853nDDOE9RRe6ifAY2aBDNdCXwJaKRwOoZcSsobQuqz5x9smtbF9a33X8sK3yR6JcqfNhBy8V20KYeEE72fRhuBQYSA2xjjCSBLMaIbbJwLenn7WwoA0Hhj91VaQ2JKi0sTZOaqoCnw1QSz8R9iCT/60jPoGYDfwg4EkM0giUFtLJ1ZNOKL7i5zr1pHyGCT67LFFAOay4WPXZOVZg3Hb/QFudCygWulDa/YhW5c+Tf8WF6NIX+6ks=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(396003)(39860400002)(376002)(46966005)(83380400001)(36756003)(33656002)(82740400003)(47076004)(6506007)(186003)(82310400003)(336012)(26005)(53546011)(4326008)(2616005)(8676002)(478600001)(966005)(5660300002)(70586007)(70206006)(2906002)(316002)(8936002)(6512007)(356005)(36906005)(54906003)(6486002)(107886003)(81166007)(86362001)(6862004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Nov 2020 10:21:04.4764
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bb3b533c-7999-4c91-bfd6-08d89519a4d8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6280

Hi Andrew,

> On 27 Nov 2020, at 20:07, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 26/11/2020 15:51, Bertrand Marquis wrote:
>> The goal of this series is to emulate coprocessor ID registers so that
>> Xen only publishes to guests the features that are supported by Xen
>> and can actually be used by guests.
>> One practical example where this is required is SVE support, which is
>> forbidden by Xen as it is not supported, but if Linux is compiled with
>> it, it will crash on boot. Another one is AMU, which is also forbidden
>> by Xen, but a Linux compiled with it would crash if the platform
>> supports it.
>>
>> To emulate the coprocessor registers defining what features are
>> supported by the hardware, the TID3 bit of HCR must be set and
>> Xen must emulate the values of those registers when an exception is
>> caught on a guest access.
>>
>> This series first creates a guest cpuinfo structure which will
>> contain the values that we want to publish to the guests, and then
>> provides the proper emulation of those registers when Xen gets an
>> exception due to an access to any of them.
>>
>> This is a first simple implementation to solve the problem. The way
>> to define the values that we provide to guests, and which features
>> are disabled, will be enhanced in a future patchset so that we can
>> decide per guest what can be used or not and, from this, deduce the
>> bits to activate in HCR and the values that we must publish in the
>> ID registers.
>> Bertrand Marquis (7):
>>  xen/arm: Add ID registers and complete cpufinfo
>>  xen/arm: Add arm64 ID registers definitions
>>  xen/arm: create a cpuinfo structure for guest
>>  xen/arm: Add handler for ID registers on arm64
>>  xen/arm: Add handler for cp15 ID registers
>>  xen/arm: Add CP10 exception support to handle VMFR
>>  xen/arm: Activate TID3 in HCR_EL2
>
> CI found an ARM randconfig failure against this series.
>
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/221798884
>
> I have to admit that I can't spot an obvious connection, so it might be
> collateral damage from elsewhere, but it does need looking at regardless.

This is absolutely right: there is a bug in my code and I will send a V2 to
fix it.

Very nice find; I am wondering why my tests did not point this out.

Regards
Bertrand

>
> ~Andrew (in lieu of a real CI robot).



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:26:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:26:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40789.73717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgNd-0007sa-A4; Mon, 30 Nov 2020 10:26:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40789.73717; Mon, 30 Nov 2020 10:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgNd-0007sT-6n; Mon, 30 Nov 2020 10:26:01 +0000
Received: by outflank-mailman (input) for mailman id 40789;
 Mon, 30 Nov 2020 10:25:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjgNb-0007sM-MT
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:25:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64067815-f0e1-448f-bbb2-91688457eef9;
 Mon, 30 Nov 2020 10:25:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 358C8AC91;
 Mon, 30 Nov 2020 10:25:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64067815-f0e1-448f-bbb2-91688457eef9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606731958; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dBEF1eJeXnriytUNl9lwc1WRag97wA2VNUJqIUxwYDY=;
	b=OxF3xbjx7qQfha/QRMYoAUSP8aGCPmdKeAFoYWTDi19BKMWTDgXPyUFZKUIB7kPDWu4Y4T
	jXv3NVpa06QraMTo2FiSeRIVBegcRD+dp+JJgCvkJmXraJBFsPZB7aRyzpxbpRjRdXF6Yg
	Tlshyeudfc11GfnOyZNak44kxLFyqp0=
Subject: Re: [ANNOUNCE] Call for agenda items for December 2020 Community Call
 @ 16:00 UTC
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>
References: <6A1AC739-EB53-4996-A99B-EE68358E70DB@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6da4cd56-7364-bc6e-24d8-02976dbd637d@suse.com>
Date: Mon, 30 Nov 2020 11:25:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <6A1AC739-EB53-4996-A99B-EE68358E70DB@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.11.2020 12:52, George Dunlap wrote:
> The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/OPN55rXaOncupuWuHxtddzWJ/ and you can edit to add items.  Alternatively, you can reply to this mail directly.

The "New series / series requiring attention" section is gone. Was
this intentional? If not, I would have wanted to propose that items
from that list which we didn't get to on the previous call be
automatically propagated. According to my observation it is more
likely than not that nothing would have changed in their status.
Hence it may be easier to take one off the list if indeed it has
got unstalled.

> == Dial-in Information ==
> ## Meeting time
> 16:00 - 17:00 UTC
> Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=12&day=3&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179
> 
> 
> ## Dial in details
> Web: https://www.gotomeet.me/GeorgeDunlap
> 
> You can also dial in using your phone.
> Access Code: 168-682-109
> 
> China (Toll Free): 4008 811084
> Germany: +49 692 5736 7317

>From last month's meeting:

   Germany: +49 721 9881 4161

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:28:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40794.73728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgQD-00082i-OX; Mon, 30 Nov 2020 10:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40794.73728; Mon, 30 Nov 2020 10:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgQD-00082b-LZ; Mon, 30 Nov 2020 10:28:41 +0000
Received: by outflank-mailman (input) for mailman id 40794;
 Mon, 30 Nov 2020 10:28:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I3zd=FE=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kjgQC-00082V-Cy
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:28:40 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f038c638-b2bf-4372-9635-c787039fffda;
 Mon, 30 Nov 2020 10:28:38 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AUASTND013861
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 30 Nov 2020 11:28:30 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 7138D2E9CAC; Mon, 30 Nov 2020 11:28:24 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f038c638-b2bf-4372-9635-c787039fffda
Date: Mon, 30 Nov 2020 11:28:24 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201130102824.GB1084@antioche.eu.org>
References: <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
 <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 30 Nov 2020 11:28:30 +0100 (MET)

On Mon, Nov 30, 2020 at 11:00:23AM +0100, Jan Beulich wrote:
> On 28.11.2020 18:14, Manuel Bouyer wrote:
> > On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
> >>> the trace is at
> >>> http://www-soc.lip6.fr/~bouyer/xen-log13.txt
> >>
> >> Thanks! I think I've found the issue and I'm attaching a possible fix
> >> (fix.patch) to this email. In any case I've also attached a further
> >> debug patch, in case the fix turns out to be wrong. Please test the
> >> fix first, as the debug patch will end up triggering a panic when the
> >> buffer is full.
> > 
> > Yes, fix.patch does make the system boot as expected !
> 
> May I translate this to a Tested-by?

Sure !

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40802.73741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTS-0000V6-7s; Mon, 30 Nov 2020 10:32:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40802.73741; Mon, 30 Nov 2020 10:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTS-0000Uz-4o; Mon, 30 Nov 2020 10:32:02 +0000
Received: by outflank-mailman (input) for mailman id 40802;
 Mon, 30 Nov 2020 10:32:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTQ-0000Uu-Do
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:00 +0000
Received: from mail-lf1-x132.google.com (unknown [2a00:1450:4864:20::132])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8eb8bb7c-7248-47fe-aa3c-5ad1b0218f7e;
 Mon, 30 Nov 2020 10:31:58 +0000 (UTC)
Received: by mail-lf1-x132.google.com with SMTP id a9so20591042lfh.2
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:31:58 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.31.55
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:31:56 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8eb8bb7c-7248-47fe-aa3c-5ad1b0218f7e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id;
        bh=EsdzKrQa0ehB+EwQob7OH3FDoo9cYhWibeUDikgbgko=;
        b=FEPh0iyMo1kjvjPBbU7xdrCHrvpvbO55NOVrDHlcGphNIIczLJuF2cDKwZyC1wSEmk
         SzqSm96S5zD5CGoglTUTE7m+efpnVunuI82PgJa6PJqFnLQx0KsT+g4tq+CbxV2xxXyL
         Va1Gqv6QLPbPhynOsWakyFjhmgnWCKsNFoDiYYJbtrwb6mexbS3xBv2Mt4nThDvpkWj9
         omoC6Id91PlhlW/5G+4hb2Nngxi85OR8EP78WuLJXvKxealjknql4GTnWZY/Kcgc45fm
         MHFXMtRXayGNl49GaWLIUNyrJRC2RmBDJhTL3XMRGE+iepnX25mOFeT6gfIRQ6uzgT0D
         mM7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=EsdzKrQa0ehB+EwQob7OH3FDoo9cYhWibeUDikgbgko=;
        b=J1615h7ApfZTH/pxy7z8qdn3KPaNjaSzQn3kbBHD04i0+ep3xYM3NurmjwHqCEp5+M
         VwT7h5vF/RtKkq1/4zeZsf9jItOEeQTL2ZjByRkX5AJnLAewU5AdOPg0gkzVprcpnFVJ
         RiXwGRQT6p3K53A3RvUt0eYoQa+4crYHAZMC6beQBhDRTs5FWJl7GteLbRL0pQy6KW65
         /P+eUiOiv8wKY2l9rnRNaM9Z2Tf2E7JzxxNnzhs8KzMhYEd6Ezzi8SjZIe6jIX2iZLAE
         l9KJ2fKQIHQODlKr8R6At1H+E864zFebEeaEZNBLMa7nTYmNtFxksIif7Tz8W3LUPKXr
         84Xg==
X-Gm-Message-State: AOAM531e9i09UQKZFLxDDSeLTiDSuVD9/AV0hnLGbd73+x0L9ZcF3JWh
	/fajwbIbNRaaPOZrHcowB37+f6qZ7AckDw==
X-Google-Smtp-Source: ABdhPJzFhvHxnTSaxIRlYPUxVmZEOzDUffJiWLRFAOcYzEcptXl/e7gt5zTt28SDpmuMo7Z8XJPtwQ==
X-Received: by 2002:a19:e059:: with SMTP id g25mr9327867lfj.584.1606732317136;
        Mon, 30 Nov 2020 02:31:57 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien.grall@arm.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Tim Deegan <tim@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	Artem Mygaiev <joculator@gmail.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
Date: Mon, 30 Nov 2020 12:31:15 +0200
Message-Id: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>


Date: Sat, 28 Nov 2020 22:33:51 +0200
Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hello all.

The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
You can find the initial discussion at [1] and the RFC/V1/V2 series at [2]/[3]/[4].
Xen on Arm requires a mechanism to forward guest MMIO accesses to a device
model in order to implement a virtio-mmio backend or even a mediator outside of the hypervisor.
As Xen on x86 already contains the required support, this series tries to make it common
and introduces the Arm-specific bits plus some new functionality. The patch series is based on
Julien's PoC "xen/arm: Add support for Guest IO forwarding to a device emulator".
Besides splitting the existing IOREQ/DM support and introducing the Arm side, the series
also includes virtio-mmio related changes (the last 2 patches, for the toolstack)
so that reviewers are able to see what the whole picture could look like.

According to the initial discussion there are a few open questions/concerns
regarding security and performance in the VirtIO solution:
1. virtio-mmio vs virtio-pci, SPI vs MSI; different use-cases require different
   transports...
2. the virtio backend is able to access all guest memory, so some kind of protection
   is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory & memcpys in guest'
3. the interface between the toolstack and an 'out-of-qemu' virtio backend; avoid using
   Xenstore in the virtio backend if possible.
4. a lot of 'foreign mappings' could lead to memory exhaustion; Julien
   has an idea regarding that.

All of them look valid and worth considering, but the first thing
we need on Arm is a mechanism to forward guest I/O to a device emulator,
so let's focus on that in the first place.

***

There are a lot of changes since the RFC series: almost all TODOs were resolved on Arm,
the Arm code was improved and hardened, the common IOREQ/DM code became really arch-agnostic
(without HVM-isms), the "legacy" mechanism of mapping magic pages for the IOREQ servers
was left x86-specific, etc. But one TODO still remains, which is "PIO handling" on Arm.
The "PIO handling" TODO is expected to be left unaddressed in the current series.
It is not a big issue for now, while Xen doesn't have support for vPCI on Arm.
On Arm64 PIO accesses are only used for PCI I/O BARs, and we would probably want to expose
them to the emulator as PIO accesses to make a DM completely arch-agnostic. So "PIO handling"
should be implemented when we add support for vPCI.

I left the interface untouched in the following patch,
"xen/dm: Introduce xendevicemodel_set_irq_level DM op",
since there is still an open discussion about what interface to use/what
information to pass to the hypervisor.

There is a patch under review that this series depends on:
https://patchwork.kernel.org/patch/11816689

Please note that the IOREQ feature is disabled by default on Arm within the current series.

***

Patch series [5] was rebased on the recent staging branch
(181f2c2 "evtchn: double per-channel locking can't hit identical channels") and tested on
a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a virtio-mmio disk backend [6]
running in a driver domain and an unmodified Linux guest running on the existing
virtio-blk driver (frontend). No issues were observed. Guest domain 'reboot/destroy'
use-cases work properly. The patch series was only build-tested on x86.
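For context, a setup like the one above would be driven by a guest config roughly along these lines. The `vdisk` option and its syntax come from the RFC toolstack patch at the end of this series and are shown here only as an illustration; the interface is still subject to change, and the backend domain name and device path are hypothetical:

```
# Hypothetical xl guest config fragment (syntax from the RFC
# "libxl: Add support for virtio-disk configuration" patch; may change)
vdisk = [ 'backend=DomD, disks=ro:/dev/mmcblk0p3' ]
```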

Please note that the build test passed for the following modes:
1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
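For reference, mode 3 above corresponds to a xen/.config fragment roughly like this (illustrative only; the exact option set and defaults depend on the Kconfig changes introduced by this series):

```
# Hypothetical xen/.config fragment for mode 3 (Arm64, IOREQ enabled);
# CONFIG_IOREQ_SERVER is introduced by this series and is off by default on Arm.
CONFIG_ARM_64=y
CONFIG_HVM=y
CONFIG_IOREQ_SERVER=y
```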

***

Any feedback/help would be highly appreciated.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2020-08/msg00071.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2020-09/msg00732.html
[4] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01077.html
[5] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml4
[6] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml1

Julien Grall (5):
  xen/dm: Make x86's DM feature common
  xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
  arm/ioreq: Introduce arch specific bits for IOREQ/DM features
  xen/dm: Introduce xendevicemodel_set_irq_level DM op
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (18):
  x86/ioreq: Prepare IOREQ feature for making it common
  x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
  x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
  xen/ioreq: Make x86's IOREQ feature common
  xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
  xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
  xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
  xen/ioreq: Move x86's ioreq_server to struct domain
  xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
  xen/ioreq: Remove "hvm" prefixes from involved function names
  xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
  xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
    completed
  xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
  xen/ioreq: Introduce domain_has_ioreq_server()
  xen/arm: io: Abstract sign-extension
  xen/ioreq: Make x86's send_invalidate_req() common
  xen/arm: Add mapcache invalidation handling
  [RFC] libxl: Add support for virtio-disk configuration

 MAINTAINERS                                  |    8 +-
 tools/include/xendevicemodel.h               |    4 +
 tools/libs/devicemodel/core.c                |   18 +
 tools/libs/devicemodel/libxendevicemodel.map |    1 +
 tools/libs/light/Makefile                    |    1 +
 tools/libs/light/libxl_arm.c                 |   94 +-
 tools/libs/light/libxl_create.c              |    1 +
 tools/libs/light/libxl_internal.h            |    1 +
 tools/libs/light/libxl_types.idl             |   16 +
 tools/libs/light/libxl_types_internal.idl    |    1 +
 tools/libs/light/libxl_virtio_disk.c         |  109 +++
 tools/xl/Makefile                            |    2 +-
 tools/xl/xl.h                                |    3 +
 tools/xl/xl_cmdtable.c                       |   15 +
 tools/xl/xl_parse.c                          |  116 +++
 tools/xl/xl_virtio_disk.c                    |   46 +
 xen/arch/arm/Makefile                        |    2 +
 xen/arch/arm/dm.c                            |   89 ++
 xen/arch/arm/domain.c                        |    9 +
 xen/arch/arm/hvm.c                           |    4 +
 xen/arch/arm/io.c                            |   29 +-
 xen/arch/arm/ioreq.c                         |  126 +++
 xen/arch/arm/p2m.c                           |   48 +-
 xen/arch/arm/traps.c                         |   58 +-
 xen/arch/x86/Kconfig                         |    1 +
 xen/arch/x86/hvm/dm.c                        |  295 +-----
 xen/arch/x86/hvm/emulate.c                   |   80 +-
 xen/arch/x86/hvm/hvm.c                       |   12 +-
 xen/arch/x86/hvm/hypercall.c                 |    9 +-
 xen/arch/x86/hvm/intercept.c                 |    5 +-
 xen/arch/x86/hvm/io.c                        |   26 +-
 xen/arch/x86/hvm/ioreq.c                     | 1357 ++------------------------
 xen/arch/x86/hvm/stdvga.c                    |   10 +-
 xen/arch/x86/hvm/svm/nestedsvm.c             |    2 +-
 xen/arch/x86/hvm/vmx/realmode.c              |    6 +-
 xen/arch/x86/hvm/vmx/vvmx.c                  |    2 +-
 xen/arch/x86/mm.c                            |   46 +-
 xen/arch/x86/mm/p2m.c                        |   13 +-
 xen/arch/x86/mm/shadow/common.c              |    2 +-
 xen/common/Kconfig                           |    3 +
 xen/common/Makefile                          |    2 +
 xen/common/dm.c                              |  292 ++++++
 xen/common/ioreq.c                           | 1307 +++++++++++++++++++++++++
 xen/common/memory.c                          |   73 +-
 xen/include/asm-arm/domain.h                 |    3 +
 xen/include/asm-arm/hvm/ioreq.h              |  139 +++
 xen/include/asm-arm/mm.h                     |    8 -
 xen/include/asm-arm/mmio.h                   |    1 +
 xen/include/asm-arm/p2m.h                    |   19 +-
 xen/include/asm-arm/traps.h                  |   24 +
 xen/include/asm-x86/hvm/domain.h             |   43 -
 xen/include/asm-x86/hvm/emulate.h            |    2 +-
 xen/include/asm-x86/hvm/io.h                 |   17 -
 xen/include/asm-x86/hvm/ioreq.h              |   58 +-
 xen/include/asm-x86/hvm/vcpu.h               |   18 -
 xen/include/asm-x86/mm.h                     |    4 -
 xen/include/asm-x86/p2m.h                    |   24 +-
 xen/include/public/arch-arm.h                |    5 +
 xen/include/public/hvm/dm_op.h               |   16 +
 xen/include/xen/dm.h                         |   44 +
 xen/include/xen/ioreq.h                      |  146 +++
 xen/include/xen/p2m-common.h                 |    4 +
 xen/include/xen/sched.h                      |   32 +
 xen/include/xsm/dummy.h                      |    4 +-
 xen/include/xsm/xsm.h                        |    6 +-
 xen/xsm/dummy.c                              |    2 +-
 xen/xsm/flask/hooks.c                        |    5 +-
 67 files changed, 3084 insertions(+), 1884 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h
 create mode 100644 xen/include/xen/dm.h
 create mode 100644 xen/include/xen/ioreq.h

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40803.73753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTW-0000Wk-Gm; Mon, 30 Nov 2020 10:32:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40803.73753; Mon, 30 Nov 2020 10:32:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTW-0000Wc-DP; Mon, 30 Nov 2020 10:32:06 +0000
Received: by outflank-mailman (input) for mailman id 40803;
 Mon, 30 Nov 2020 10:32:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTV-0000Uu-Bd
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:05 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f36d7d1-6755-472c-bfde-d95988fc6a48;
 Mon, 30 Nov 2020 10:31:59 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id t6so20556215lfl.13
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:31:59 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.31.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:31:57 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f36d7d1-6755-472c-bfde-d95988fc6a48
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=H4oorxrqC8a8IVKIJ+B0T4aa6L2Ar1jXOispE9qF9to=;
        b=bfLk0hkt1ud8tqn2B7N9fvtJkgTUBH3gLqBX6yTp5mQBI159oL8rB332M5GpjMIfF1
         9Q5oMMP4ZyUKENrx2OHUV/9EmLzuw3GIE0ZP3MWFiGJ7devXTyVxzIieMf+H/D9MY9G6
         OuibPsWcMV5EH6Q/fyE8eB4d3+RU4DwNyk8GpCrwzq17BF/WuM6zGFhCAykN+YMDuYK1
         h4KH10xS6WOHSNFLsQeE1rOMwk0HwTUbWhLOpY3CWLza3+7qk7GOqoHWbznVAOiJRPKX
         rIPmt1lr/PvJLv51KmBu0zChFpLaLf0hqm7A8a5N0o1hWxhPme1XAObdYVeNBanmenOx
         afiA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=H4oorxrqC8a8IVKIJ+B0T4aa6L2Ar1jXOispE9qF9to=;
        b=eJor9tvvciN5SdUbVAxUneUmci/feULibv/dPtaJmBIVrFIyiSromZ2gmBsaSlDnbk
         zEzgh+eo9XticAhR03EfSk/ufjQxAsauiNoVIjUj2xnrgacbw1wQMwF3ZJgru7HeYjVL
         RliJdU4KYAcEKa30+NO7zRF3Ygl7fI5dZRt6Nre/eQIlN0T44h6xDrUUAJYIFp1Fw9f0
         cz01wsLW8OiVahxrIGC9GLuHWg5Q6AMgO+CHU4uuOjTDOAAknFwhAE3ZWo+2lYC+HSC5
         bSf9hLCbgq4Qk4hfpVIoeUtxQZg08z4eDjxALc7sJgI6kCClWoiHnicGwpDcZeplzZh2
         Fabg==
X-Gm-Message-State: AOAM533VnJs+X9TAZ4tcNaAr2XfYrrdjkoJtNYqpIRvGrSYv9r4JUC8O
	dLAn+zwfKiKTZKzhSuq0aKxZwFlzp3JStg==
X-Google-Smtp-Source: ABdhPJxkLlSzpvFrmGcakSg3BhhbgwiAmkbwvRc5I3CekIamfo2xZL+HeY8fhILdLtm0UB8yYPx75A==
X-Received: by 2002:ac2:5985:: with SMTP id w5mr8569034lfn.386.1606732318276;
        Mon, 30 Nov 2020 02:31:58 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Mon, 30 Nov 2020 12:31:16 +0200
Message-Id: <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

As a lot of x86 code can be re-used on Arm later on, this
patch makes some preparations to x86/hvm/ioreq.c before moving
it to the common code. This way we will get a verbatim copy
for the code movement in a subsequent patch.

This patch mostly introduces specific hooks to abstract arch-specific
material, taking into account the requirement to leave
the "legacy" mechanism of mapping magic pages for the IOREQ servers
x86-specific and not expose it to the common code.

These hooks are named according to the more consistent new naming
scheme right away (including dropping the "hvm" prefixes and infixes):
- IOREQ server functions start with "ioreq_server_"
- IOREQ functions start with "ioreq_"
Other functions will be renamed in subsequent patches.

It is worth mentioning that the code which checks the return value of
p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was
folded into arch_ioreq_server_map_mem_type() for a clear split.
As a result, p2m_change_entry_type_global() is now called with the
ioreq_server lock held.

Also re-order #include-s alphabetically.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make arch functions inline and put them into the arch header
     to achieve a true rename in the subsequent patch
   - return void in arch_hvm_destroy_ioreq_server()
   - return bool in arch_hvm_ioreq_destroy()
   - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
   - rename IOREQ_IO* to IOREQ_STATUS*
   - remove *handle* from arch_handle_hvm_io_completion()
   - re-order #include-s alphabetically
   - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
     and add "const" to several arguments

Changes V2 -> V3:
   - update patch description
   - name new arch hooks according to the new naming scheme
   - don't make arch hooks inline, move them to ioreq.c
   - make get_ioreq_server() local again
   - rework the whole patch taking into account that the "legacy" interface
     should remain x86 specific (additional arch hooks, etc)
   - update the code to be able to use hvm_map_mem_type_to_ioreq_server()
     in the common code (an extra arch hook, etc)
   - don't include <asm/hvm/emulate.h> from the arch header
   - add "arch" prefix to hvm_ioreq_server_get_type_addr()
   - move IOREQ_STATUS_* #define-s introduction to the separate patch
   - move HANDLE_BUFIOREQ to the arch header
   - just return relocate_portio_handler() from arch_ioreq_server_destroy_all()
   - misc adjustments proposed by Jan (adding const, unsigned int instead of uint32_t)
---
---
 xen/arch/x86/hvm/ioreq.c        | 174 ++++++++++++++++++++++++++--------------
 xen/include/asm-x86/hvm/ioreq.h |  19 +++++
 2 files changed, 133 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..e3dfb49 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -17,15 +17,15 @@
  */
 
 #include <xen/ctype.h>
+#include <xen/domain.h>
+#include <xen/event.h>
 #include <xen/init.h>
+#include <xen/irq.h>
 #include <xen/lib.h>
-#include <xen/trace.h>
+#include <xen/paging.h>
 #include <xen/sched.h>
-#include <xen/irq.h>
 #include <xen/softirq.h>
-#include <xen/domain.h>
-#include <xen/event.h>
-#include <xen/paging.h>
+#include <xen/trace.h>
 #include <xen/vpci.h>
 
 #include <asm/hvm/emulate.h>
@@ -170,6 +170,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
+{
+    switch ( io_completion )
+    {
+    case HVMIO_realmode_completion:
+    {
+        struct hvm_emulate_ctxt ctxt;
+
+        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+        vmx_realmode_emulate_one(&ctxt);
+        hvm_emulate_writeback(&ctxt);
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    return true;
+}
+
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
@@ -209,19 +232,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);
 
-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
     default:
-        ASSERT_UNREACHABLE();
-        break;
+        return arch_vcpu_ioreq_completion(io_completion);
     }
 
     return true;
@@ -477,9 +489,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
-
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      struct vcpu *v)
 {
@@ -586,7 +595,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc;
 
@@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
@@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+{
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
+}
+
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
     struct hvm_ioreq_vcpu *sv;
@@ -683,8 +698,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    arch_ioreq_server_enable(s);
 
     s->enabled = true;
 
@@ -697,6 +711,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+{
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
+}
+
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
     spin_lock(&s->lock);
@@ -704,8 +724,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    arch_ioreq_server_disable(s);
 
     s->enabled = false;
 
@@ -750,7 +769,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 
  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
@@ -764,7 +783,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     hvm_ioreq_server_remove_all_vcpus(s);
 
     /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
      *       hvm_ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
      *       are not mapped, leaving the page to be freed by the latter.
@@ -772,7 +791,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
      *       the page_info pointer to NULL, meaning the latter will do
      *       nothing.
      */
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);
     hvm_ioreq_server_free_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
@@ -836,6 +855,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
+/* Called when target domain is paused */
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+{
+    p2m_set_ioreq_server(s->target, 0, s);
+}
+
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
     struct hvm_ioreq_server *s;
@@ -855,7 +880,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    arch_ioreq_server_destroy(s);
 
     hvm_ioreq_server_disable(s);
 
@@ -900,7 +925,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 
     if ( ioreq_gfn || bufioreq_gfn )
     {
-        rc = hvm_ioreq_server_map_pages(s);
+        rc = arch_ioreq_server_map_pages(s);
         if ( rc )
             goto out;
     }
@@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
+/* Called with ioreq_server lock held */
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags)
+{
+    int rc = p2m_set_ioreq_server(d, flags, s);
+
+    if ( rc == 0 && flags == 0 )
+    {
+        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+    }
+
+    return rc;
+}
+
 /*
  * Map or unmap an ioreq server to specific memory type. For now, only
  * HVMMEM_ioreq_server is supported, and in the future new types can be
@@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = p2m_set_ioreq_server(d, flags, s);
+    rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    if ( rc == 0 && flags == 0 )
-    {
-        struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-        if ( read_atomic(&p2m->ioreq.entry_count) )
-            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
-    }
-
     return rc;
 }
 
@@ -1210,12 +1245,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
+bool arch_ioreq_server_destroy_all(struct domain *d)
+{
+    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
+}
+
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+    if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1239,33 +1279,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+int arch_ioreq_server_get_type_addr(const struct domain *d,
+                                    const ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr)
 {
-    struct hvm_ioreq_server *s;
-    uint32_t cf8;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
+    unsigned int cf8 = d->arch.hvm.pci_cf8;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
-
-    cf8 = d->arch.hvm.pci_cf8;
+        return -EINVAL;
 
     if ( p->type == IOREQ_TYPE_PIO &&
          (p->addr & ~3) == 0xcfc &&
          CF8_ENABLED(cf8) )
     {
-        uint32_t x86_fam;
+        unsigned int x86_fam, reg;
         pci_sbdf_t sbdf;
-        unsigned int reg;
 
         reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
 
         /* PCI config data cycle */
-        type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
         if ( CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
@@ -1277,16 +1312,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
 
             if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
                  (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
+                *addr |= CF8_ADDR_HI(cf8);
         }
     }
     else
     {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
     }
 
+    return 0;
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
+        return NULL;
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
@@ -1515,11 +1564,16 @@ static int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }
 
+void arch_ioreq_domain_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_ioreq_domain_init(d);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..cc79285 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,6 +19,25 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags);
+bool arch_ioreq_server_destroy_all(struct domain *d);
+int arch_ioreq_server_get_type_addr(const struct domain *d,
+                                    const ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr);
+void arch_ioreq_domain_init(struct domain *d);
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40804.73765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTb-0000aZ-Ub; Mon, 30 Nov 2020 10:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40804.73765; Mon, 30 Nov 2020 10:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTb-0000aR-R0; Mon, 30 Nov 2020 10:32:11 +0000
Received: by outflank-mailman (input) for mailman id 40804;
 Mon, 30 Nov 2020 10:32:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTa-0000Uu-Bt
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:10 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c79e6e20-71f7-4aeb-9f98-f5e9636a98a0;
 Mon, 30 Nov 2020 10:32:00 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id u19so20609196lfr.7
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:00 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.31.58
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:31:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c79e6e20-71f7-4aeb-9f98-f5e9636a98a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=DAVWNnxNI+etFFRHAWRdvvzFJ2BAef63O413YzD2Bp8=;
        b=HoKpwzrYIldx9hYT5rofXiD7iYq8fpH+km86eXPPvxLKEk0zVaLE5FUaNY6lxP7au2
         t4VQAGseGeBfRcpgYEy5/dFGyPQEr08bYNjwPJ+MFM9VKs4MbOKSRv9LJx2iYXLCdWuS
         kaWsKSDcowlOa9TlN5pyIYRzAhuYA6TvfrR1fpImak01PW13ZNTc4fbEYHzXDBjbNBSk
         GWgcWqntuS7xR5Frw4jGZG3oqwHJN1gDyLPRWmDy/28QmhN3jc0vKZq//xOgdOSToO0w
         22ETbEgDw5lCoqpLLOblmIILWVigeb/ne3+kQ4VZjBN0SG7XBibIeB2D3yJM4QXLd2EB
         RA7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=DAVWNnxNI+etFFRHAWRdvvzFJ2BAef63O413YzD2Bp8=;
        b=Hnszf4u4uZL1s8M89P8Xa7VagIUTLvX66I391bZeUSslXyrA8HgMzl7Puczz+48Gh3
         B9hi0bSdbiuYVFTGLnz6YOXqYIcOW09oHxyegoLM+rxQlmfPPRb+scgf6Uqiu8VIcYGL
         IRrcc1i5FDSqLpz7QJYvN65ZyTrBMopXVb2srWi4fta5m7QaNEycnBsK7HOGdFsYbph7
         EDxLuEFV7kWRu8UTrvdNqWpeBpO+/pk+pYyf+VEEyhJHVHPU8ncC0/6Glva2hi4K/6aG
         dFvRfwnyTYDLrpBBSOygOzUII8DB8r5R9lQDXKnlYUCNd3KJiRq/JoEAoCU2mDfDvhOM
         olew==
X-Gm-Message-State: AOAM531AselRB8BDmxQjHAkt/clfTTNa+zhH14dhLyc2n1ILpbG7KE4X
	kxyhTpWmduxAH7yUuUDPZzPhnsz+WWujUQ==
X-Google-Smtp-Source: ABdhPJzJZ7NsJA4nffR8QNtTSSiXN0wYjOa8ZqEK32XZp02Z4ytCpN1Glbo+EebOv8yWpCrpb2wrAQ==
X-Received: by 2002:a19:6551:: with SMTP id c17mr9075180lfj.46.1606732319305;
        Mon, 30 Nov 2020 02:31:59 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 02/23] x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
Date: Mon, 30 Nov 2020 12:31:17 +0200
Message-Id: <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch continues the preparation of x86/hvm/ioreq.c before
moving it to the common code.

Add IOREQ_STATUS_* #define-s and update the candidates for moving,
since X86EMUL_* shouldn't be exposed to the common code in
that form.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V2 -> V3:
 - new patch, was split from
   [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
---
 xen/arch/x86/hvm/ioreq.c        | 16 ++++++++--------
 xen/include/asm-x86/hvm/ioreq.h |  4 ++++
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index e3dfb49..9525554 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1400,7 +1400,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg = iorp->va;
 
     if ( !pg )
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
 
     /*
      * Return 0 for the cases we can't deal with:
@@ -1430,7 +1430,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
         break;
     default:
         gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     spin_lock(&s->bufioreq_lock);
@@ -1440,7 +1440,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     {
         /* The queue is full: send the iopacket through the normal path. */
         spin_unlock(&s->bufioreq_lock);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
@@ -1471,7 +1471,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
     spin_unlock(&s->bufioreq_lock);
 
-    return X86EMUL_OKAY;
+    return IOREQ_STATUS_HANDLED;
 }
 
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
@@ -1487,7 +1487,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_RETRY;
+        return IOREQ_STATUS_RETRY;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
@@ -1527,11 +1527,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             notify_via_xen_event_channel(d, port);
 
             sv->pending = true;
-            return X86EMUL_RETRY;
+            return IOREQ_STATUS_RETRY;
         }
     }
 
-    return X86EMUL_UNHANDLEABLE;
+    return IOREQ_STATUS_UNHANDLED;
 }
 
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
@@ -1545,7 +1545,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index cc79285..e9c8b2d 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -74,6 +74,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
+#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
+#define IOREQ_STATUS_RETRY       X86EMUL_RETRY
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40805.73777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTh-0000ev-7y; Mon, 30 Nov 2020 10:32:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40805.73777; Mon, 30 Nov 2020 10:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTh-0000eo-4M; Mon, 30 Nov 2020 10:32:17 +0000
Received: by outflank-mailman (input) for mailman id 40805;
 Mon, 30 Nov 2020 10:32:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTf-0000Uu-CD
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:15 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8247b874-94ea-45a8-b30a-bd01c48b57d4;
 Mon, 30 Nov 2020 10:32:01 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id o24so17026537ljj.6
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:01 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.31.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:31:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8247b874-94ea-45a8-b30a-bd01c48b57d4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=oPwVS+xymxylL7L/6Uw1WpkrUax1c9fqZs6baXhLT/w=;
        b=u26ezXmb1HRCJb8cN2LiXx5fYCJJYgz8O2qS2wZNHCim1gBF6QWCOUxmPgGnUZJdCQ
         A2u07GsuWsrAPZVkPSbcqCvD5qIr7z1DpVWz3+yYB2QNTW6tgJMTKCzn5FwuZLmVFUMP
         tqWxf9mah+ryYyhRIePhziAvpCOSpqXvL1l6/akGhCHL7rbjCdRxK2cjsH9Hk2dyKreD
         mLK+pQDSdeCZRlgHQBveuXtZ5jFRB14xGWrknxT60ICr3W22nPUSzhpC8QLxQ5epxVOW
         TL7Io7jPWTIj9N9sdrERmj6D+Gsqf9asqUJZGQO0cdmVX45AJS/XtSynS/GGAaLvvV7F
         YTfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=oPwVS+xymxylL7L/6Uw1WpkrUax1c9fqZs6baXhLT/w=;
        b=ho0zbMsGJt0juD6lEUrRYVD6n1SIRDZhx15x1YpnfxkWe3biuV90O7nAEfudHu5FEA
         lA61i1UhJ7kD31Oe4IdPXtGliy/hBT1yPmGc2zwTvvMMfAW0qpvYbp6euFg/5LcBO0I8
         Jd7u6sr+MtvCpU6K0LZwiOuNaXqhkIMEpicx9RPSdsU7qKZ6G8nA5cHlaQTGCgUVPjic
         ahEamABDw4vIJi2F6xuKr8Riqdj7ivXIGJo+GUNArx8riz9k55xHDQ7FvqXDIbnNmjDK
         3VGeLENUBmMLa79wbjNHp2X5AbbdkjuZiOioTKV6U4tbBhYX1YvdVoBxyVGRClgiJuf9
         xr0g==
X-Gm-Message-State: AOAM530ET5C56xCgF0EhGZJwwUsWYkdd4ffj92fcMf3jH20tiTSS+tgS
	R+x5Qj0yV1Ra+yI9NjR8V2jZoxuLjCoa9w==
X-Google-Smtp-Source: ABdhPJyuSNGsY9pPHt6NxlrdawJ/3pd8os8BI0EWnoUkpWEPjXmJ9k5q9+vUOTu4IB3qd3+tTl76+Q==
X-Received: by 2002:a05:651c:315:: with SMTP id a21mr8588159ljp.229.1606732320354;
        Mon, 30 Nov 2020 02:32:00 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 03/23] x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
Date: Mon, 30 Nov 2020 12:31:18 +0200
Message-Id: <1606732298-22107-4-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is about to become a common feature and Arm will have its own
implementation.

But the name of the function handle_mmio() is pretty generic and can be
confusing on Arm (we already have a try_handle_mmio()).

In order not to rename the function (which is used for a varying
set of purposes on x86) globally, and to get a non-confusing variant on
Arm, provide a wrapper ioreq_complete_mmio() to be used in common and
Arm code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "handle"
   - add Jan's A-b

Changes V2 -> V3:
   - remove Jan's A-b
   - update patch subject/description
   - use out-of-line function instead of #define
   - put earlier in the series to avoid breakage
---
 xen/arch/x86/hvm/ioreq.c        | 7 ++++++-
 xen/include/asm-x86/hvm/ioreq.h | 2 ++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 9525554..36b1e4e 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -36,6 +36,11 @@
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
+bool ioreq_complete_mmio(void)
+{
+    return handle_mmio();
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
@@ -226,7 +231,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         break;
 
     case HVMIO_mmio_completion:
-        return handle_mmio();
+        return ioreq_complete_mmio();
 
     case HVMIO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e9c8b2d..c7563e1 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -74,6 +74,8 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+bool ioreq_complete_mmio(void);
+
 #define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
 #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
 #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40806.73789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTl-0000k7-JS; Mon, 30 Nov 2020 10:32:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40806.73789; Mon, 30 Nov 2020 10:32:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTl-0000jz-FN; Mon, 30 Nov 2020 10:32:21 +0000
Received: by outflank-mailman (input) for mailman id 40806;
 Mon, 30 Nov 2020 10:32:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTk-0000Uu-CG
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:20 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2805de2e-574c-4796-84cd-436d506d06e0;
 Mon, 30 Nov 2020 10:32:04 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id y7so17018053lji.8
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:03 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2805de2e-574c-4796-84cd-436d506d06e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=rGoTPdf/l+qRRLf+/W/rrPRXJCM3LTC59V6irfyoxMU=;
        b=jV/3EVtrfs9wG+IGfm6YI638irh46ArUIkqAPh0C+AWXimT1zEZ0+fzXBVgRjPUvAO
         QDkpKJxcH4xTo/N0J0itT42n1OCqn/GAvNC/5AY0a3gs+NMiHTvSWN67HA9PkqKz/IgF
         2kvOyamIEBWswBq1h8d80RinN2ccMiyTfX1b4uRpAYGPjGdedcNVyYHndK6hZaOTqURQ
         TYZmiwQsue9OqUfUy0q4wONRuwlVwiQyzgtrPJF+Bbnh3on5+h6qVaiHPDC7Vc4y4hFD
         J1AXDtEKUpUj2REvXtdBk7ZWt3EB4SHjkp2xkIhTb6klVXwtMzFfjsRfBWYMeG2cXprc
         1YHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=rGoTPdf/l+qRRLf+/W/rrPRXJCM3LTC59V6irfyoxMU=;
        b=fSzjwr1CnhAzcVgjG6O2YXt1HXxrsswOZh7hJV6JTrAL5/fv+XDDjgxufIVjnnRl/t
         qLCDVSfTQ6nD9UseAyepeTOLpkUwITjzG5bQLQQq5xdGgtjYp3zbq225NTNxnYkaaE/4
         14YJEP/xC0V58XAzQEDFs4dZ0kRvgECtaMYETrrkHgahuqe97x/LjarG5CYoWi36FKkD
         +P5CMBe42zRbd8ozJ278P6VeJzFIOBXPVTDhje8btOvmkreMjn9Lq/aGK3Bodvs6I3DJ
         TtLdCAyIkf6krUoPggmRQtKhSkC/saBpEqQY3nGu1ZfHTlLEQYYUNLN6GyyiyzeQJgNi
         L+kw==
X-Gm-Message-State: AOAM53105iB2Wc9H66p5sb6dw2n6tHSuXa1IP2sreOARd4y3Qxs1i7pt
	8yE1xEq1xZrYRgrwdzB8GC1l+pIkaXLWiw==
X-Google-Smtp-Source: ABdhPJy47RPNrcGD+P3/cQU3hJMi9G/IQpqMoec+hwdC5rhZFoXqEO5xPoFNlLh9h6hCaQxcwARbQQ==
X-Received: by 2002:a2e:8e81:: with SMTP id z1mr8655507ljk.316.1606732322757;
        Mon, 30 Nov 2020 02:32:02 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 05/23] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
Date: Mon, 30 Nov 2020 12:31:20 +0200
Message-Id: <1606732298-22107-6-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is a common feature now and this helper will be used
on Arm as is. Move it to xen/ioreq.h and remove the "hvm" prefix.

Although PIO handling on Arm is not introduced with the current series
(it will be implemented when we add support for vPCI), technically
PIOs exist on Arm (although they are accessed the same way as MMIO)
and it would be better not to diverge now.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Paul Durrant <paul@xen.org>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - add Paul's R-b
---
 xen/arch/x86/hvm/emulate.c     | 4 ++--
 xen/arch/x86/hvm/io.c          | 2 +-
 xen/common/ioreq.c             | 4 ++--
 xen/include/asm-x86/hvm/vcpu.h | 7 -------
 xen/include/xen/ioreq.h        | 7 +++++++
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 24cf85f..5700274 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -336,7 +336,7 @@ static int hvmemul_do_io(
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+            else if ( !ioreq_needs_completion(&vio->io_req) )
                 rc = X86EMUL_OKAY;
         }
         break;
@@ -2649,7 +2649,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;
 
-    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( !ioreq_needs_completion(&vio->io_req) )
         completion = HVMIO_no_completion;
     else if ( completion == HVMIO_no_completion )
         completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 3e09d9b..b220d6b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -135,7 +135,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
 
     rc = hvmemul_do_pio_buffer(port, size, dir, &data);
 
-    if ( hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( ioreq_needs_completion(&vio->io_req) )
         vio->io_completion = HVMIO_pio_completion;
 
     switch ( rc )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 13ea959..44385ef 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -160,7 +160,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     }
 
     p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
+    if ( ioreq_needs_completion(p) )
         p->data = data;
 
     sv->pending = false;
@@ -186,7 +186,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+    vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;
 
     msix_write_completion(v);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 5ccd075..6c1feda 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,13 +91,6 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
-{
-    return ioreq->state == STATE_IOREQ_READY &&
-           !ioreq->data_is_ptr &&
-           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
-}
-
 struct nestedvcpu {
     bool_t nv_guestmode; /* vcpu in guestmode? */
     void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index ad47c61..3cc333d 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,13 @@
 
 #include <xen/sched.h>
 
+static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
+{
+    return ioreq->state == STATE_IOREQ_READY &&
+           !ioreq->data_is_ptr &&
+           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
+}
+
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40810.73801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTq-0000q4-Vz; Mon, 30 Nov 2020 10:32:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40810.73801; Mon, 30 Nov 2020 10:32:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTq-0000pu-Ry; Mon, 30 Nov 2020 10:32:26 +0000
Received: by outflank-mailman (input) for mailman id 40810;
 Mon, 30 Nov 2020 10:32:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTp-0000Uu-CO
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:25 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3f1b784-1180-4b2c-bb88-f2f13bb2d93f;
 Mon, 30 Nov 2020 10:32:05 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id j10so17065418lja.5
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:05 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3f1b784-1180-4b2c-bb88-f2f13bb2d93f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=BMgHi30kkpPIOBuivFYxLOwMmC5wtjLvN0gOaeT0A2k=;
        b=UUg5dCSoinN5YJ15ePPYLqNzyVdSWglcKASLNn74daGGVtFK3RMfdzdFbiJt/NyfDu
         PHsdv1ou5dsi5CplxXvgnzTziyzyldowCEOlOYoK84Z385ls8jRqhWyPsWJRAjjD01Dp
         K6mFVo3794BGC8F2ldWClGDS42pvVDNElVV+jn+tPvoxv8ddckP1cJg50liFgeNFitxW
         EK6IlcCDECZYBAahrbAz7EHAOPCPlm6PfLS2B8RRZzcrjzLXg8L4bFmVt+oP0yVtdCAO
         EmIlMwCIKXA9H4Y++XXpV9G2j7ZuPHiJKY1sCZn3K9CWzQ4bWk6nYBRTuHdOOusb+v/M
         nDHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=BMgHi30kkpPIOBuivFYxLOwMmC5wtjLvN0gOaeT0A2k=;
        b=Jc0hwZEhePLzP3lT6Dr6XcpWhfeQ/jBAHX3UF7pgmepsgDo6Mz3CNDvBgz+rMdUi6p
         Sxk5z8OdX0/C582G0p4yjNbEevokORXb/cTqepriEcy64fg71lwqYM7EiNw4uAMGrfil
         9OJQYjccuy+EWH3NwEWEPRjMde/imJPB4xf+rSemz4oeMP2ym1QviAQjs6FrbzDZxIB0
         RrGrwicaHq8kRPMEes7usHtENKk6bCzcQHzGxixdjghLp8g0QRCvBA5YwC9mxMsSCduc
         OMqunpIsYy0MM2lDBSe12NkfAhb30WTuNgJffZXuSap6bpSRCbDGK064ES3IU4gFiCbX
         Qmdg==
X-Gm-Message-State: AOAM530uoxSeJXdfI48x93CR/7GwqRQVcZ+u1VWmvzUFenmD3wvnIT8k
	t9XnDJnnHn62kSKRBet71wWj+XKay/xaHw==
X-Google-Smtp-Source: ABdhPJykjxP4/PJoWDRQ7qfNn7lsdrTkU5Cf96Dhxh76xp7TDH9yYoSuS+as+u7kdkrtXDH4d9oLLA==
X-Received: by 2002:a2e:924f:: with SMTP id v15mr9106399ljg.6.1606732323860;
        Mon, 30 Nov 2020 02:32:03 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 06/23] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
Date: Mon, 30 Nov 2020 12:31:21 +0200
Message-Id: <1606732298-22107-7-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and these helpers will be used
on Arm as-is. Move them to xen/ioreq.h and replace the "hvm"
prefixes with "ioreq".

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Paul Durrant <paul@xen.org>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - replace "hvm" prefix by "ioreq"

Changes V2 -> V3:
   - add Paul's R-b
---
---
 xen/arch/x86/hvm/intercept.c |  5 +++--
 xen/arch/x86/hvm/stdvga.c    |  4 ++--
 xen/common/ioreq.c           |  4 ++--
 xen/include/asm-x86/hvm/io.h | 16 ----------------
 xen/include/xen/ioreq.h      | 16 ++++++++++++++++
 5 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index cd4c4c1..02ca3b0 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -17,6 +17,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/ioreq.h>
 #include <xen/types.h>
 #include <xen/sched.h>
 #include <asm/regs.h>
@@ -34,7 +35,7 @@
 static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
                               const ioreq_t *p)
 {
-    paddr_t first = hvm_mmio_first_byte(p), last;
+    paddr_t first = ioreq_mmio_first_byte(p), last;
 
     BUG_ON(handler->type != IOREQ_TYPE_COPY);
 
@@ -42,7 +43,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
         return 0;
 
     /* Make sure the handler will accept the whole access. */
-    last = hvm_mmio_last_byte(p);
+    last = ioreq_mmio_last_byte(p);
     if ( last != first &&
          !handler->mmio.ops->check(current, last) )
         domain_crash(current->domain);
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index e267513..e184664 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -524,8 +524,8 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
      * deadlock when hvm_mmio_internal() is called from
      * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
      */
-    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
         return 0;
 
     spin_lock(&s->lock);
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 44385ef..6e9f745 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1075,8 +1075,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             break;
 
         case XEN_DMOP_IO_RANGE_MEMORY:
-            start = hvm_mmio_first_byte(p);
-            end = hvm_mmio_last_byte(p);
+            start = ioreq_mmio_first_byte(p);
+            end = ioreq_mmio_last_byte(p);
 
             if ( rangeset_contains_range(r, start, end) )
                 return s;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 558426b..fb64294 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -40,22 +40,6 @@ struct hvm_mmio_ops {
     hvm_mmio_write_t write;
 };
 
-static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
-{
-    return unlikely(p->df) ?
-           p->addr - (p->count - 1ul) * p->size :
-           p->addr;
-}
-
-static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
-{
-    unsigned long size = p->size;
-
-    return unlikely(p->df) ?
-           p->addr + size - 1:
-           p->addr + (p->count * size) - 1;
-}
-
 typedef int (*portio_action_t)(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);
 
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 3cc333d..2746bb1 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,22 @@
 
 #include <xen/sched.h>
 
+static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
+{
+    return unlikely(p->df) ?
+           p->addr - (p->count - 1ul) * p->size :
+           p->addr;
+}
+
+static inline paddr_t ioreq_mmio_last_byte(const ioreq_t *p)
+{
+    unsigned long size = p->size;
+
+    return unlikely(p->df) ?
+           p->addr + size - 1:
+           p->addr + (p->count * size) - 1;
+}
+
 static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 {
     return ioreq->state == STATE_IOREQ_READY &&
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40816.73813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTw-0000x4-Gw; Mon, 30 Nov 2020 10:32:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40816.73813; Mon, 30 Nov 2020 10:32:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgTw-0000wq-C6; Mon, 30 Nov 2020 10:32:32 +0000
Received: by outflank-mailman (input) for mailman id 40816;
 Mon, 30 Nov 2020 10:32:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgTu-0000Uu-Ci
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:30 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 120459cb-e07a-46e5-93cd-44136438b689;
 Mon, 30 Nov 2020 10:32:04 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id j205so20582119lfj.6
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:04 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.00
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 120459cb-e07a-46e5-93cd-44136438b689
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=aGQpvDbV6S2U1yz85teIC5aptKqTALR4gBxIs/mCIMY=;
        b=t2IZrF/i6aU21frm5KbsJgen8U407C3uQELtOD+6wxFSKI139uW/U8SpjTZ5EvGv01
         /BUhijalta439xShmd0Wr2b+9IYfLLVLwWFlQ6CQhaNJckZZAW8PBRc8XrMkmGagHkaL
         nMXgBEeL/ZQdaK0/Wf2VVnuO1zIiAKEU9/Eg7dIPoiqz8oHAYa04QA77zvxBahwwMuSo
         Ht1GSCzNTSspG+5zdU3nXWngU7RuxuJFSlPTXNHw8G4D+3jh9RGlNxynt2BQpmylnNru
         QZhT9h6enCuszWu7ms15Qz7vc4tJl/HaIJIN3FjPADqeQ/1CUgeIc2x672pvOsaG3ZkH
         6+IA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=aGQpvDbV6S2U1yz85teIC5aptKqTALR4gBxIs/mCIMY=;
        b=EsrrE2DQty0hNj3HrslG2lpziImHx5jMLFQBOD8YURhogmIOGx/qmtocNcdrMzx9Ep
         5ymwZmeZhgViE7NKXUiIqHL2lUaFCiVhxoIER+pREV67lTt0tSmo1yWJ85/NIanOf+gm
         OApsJ2FeNTulqJG2QuRyoX1ItO1sjObi59sPYNMdrW+aVsxfxrFxytzVgNtwxiVBG3zB
         XuacEzVKugzAzy/SkZA9PpKBejPn6Uf1I1dXXscyztUOG+emm3vZMSb/mov7djZZe5vP
         FJ4KRyMdRbO/jflczMZgEsr74dpfCJ/Sz27uq+NVB3z//hzPiTUgqXez0DrA9uQx5Jl3
         ExtA==
X-Gm-Message-State: AOAM532098CZ+0IC19iwqdrmogjcj/ZCFCQZWwtuChFG4Y5iRca8LYQH
	JoI2LhLPTt9wILJYtKE9zSfJe6Y1rmGFlA==
X-Google-Smtp-Source: ABdhPJxcZ6QbqV454audB9S8a62c7d1bGO1rQuYoFSmSMHuXQZT/Hy8LOLGIhyPFn1hqTXlQV0zDAw==
X-Received: by 2002:a19:857:: with SMTP id 84mr9546062lfi.235.1606732321729;
        Mon, 30 Nov 2020 02:32:01 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
Date: Mon, 30 Nov 2020 12:31:19 +0200
Message-Id: <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

As a lot of x86 code can be re-used on Arm later on, this patch
moves the previously prepared IOREQ support to the common code
(the code movement is a verbatim copy).

The "legacy" mechanism of mapping magic pages for the IOREQ servers
remains x86 specific and not exposed to the common code.

The common IOREQ feature is supposed to be built with the IOREQ_SERVER
option enabled, which is selected by x86's HVM config for now.

In order to avoid having a gigantic patch here, the subsequent
patches will update the remaining bits in the common code step by step:
- Make IOREQ related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove layering violation by moving corresponding fields
  out of *arch.hvm* or abstracting away accesses to them

Also include <xen/domain_page.h>, which will be needed on Arm, to
avoid touching the common code again when introducing Arm-specific bits.

This support is going to be used on Arm to make it possible to run
device emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is
on review:
https://patchwork.kernel.org/patch/11816689/
***

Changes RFC -> V1:
   - was split into three patches:
     - x86/ioreq: Prepare IOREQ feature for making it common
     - xen/ioreq: Make x86's IOREQ feature common
     - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
   - update MAINTAINERS file
   - do not use a separate subdir for the IOREQ stuff, move it to:
     - xen/common/ioreq.c
     - xen/include/xen/ioreq.h
   - update x86's files to include xen/ioreq.h
   - remove unneeded headers in arch/x86/hvm/ioreq.c
   - re-order the headers alphabetically in common/ioreq.c
   - update common/ioreq.c according to the newly introduced arch functions:
     arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make everything needed in the previous patch to achieve
     a true rename here
   - don't include unnecessary headers from asm-x86/hvm/ioreq.h
     and xen/ioreq.h
   - use __XEN_IOREQ_H__ instead of __IOREQ_H__
   - move get_ioreq_server() to common/ioreq.c

Changes V2 -> V3:
   - update patch description
   - make everything needed in the previous patch so as not to
     expose the "legacy" interface to the common code here
   - update the patch now that the "legacy" interface is x86-specific
   - include <xen/domain_page.h> in common ioreq.c
---
---
 MAINTAINERS                     |    8 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/ioreq.c        | 1356 ++-------------------------------------
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1287 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   39 +-
 xen/include/xen/ioreq.h         |   73 +++
 10 files changed, 1432 insertions(+), 1340 deletions(-)
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS
index dab38a6..5a44ba4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -333,6 +333,13 @@ X:	xen/drivers/passthrough/vtd/
 X:	xen/drivers/passthrough/device_tree.c
 F:	xen/include/xen/iommu.h
 
+I/O EMULATION (IOREQ)
+M:	Paul Durrant <paul@xen.org>
+S:	Supported
+F:	xen/common/ioreq.c
+F:	xen/include/xen/ioreq.h
+F:	xen/include/public/hvm/ioreq.h
+
 KCONFIG
 M:	Doug Goldstein <cardoe@cardoe.com>
 S:	Supported
@@ -549,7 +556,6 @@ F:	xen/arch/x86/hvm/ioreq.c
 F:	xen/include/asm-x86/hvm/emulate.h
 F:	xen/include/asm-x86/hvm/io.h
 F:	xen/include/asm-x86/hvm/ioreq.h
-F:	xen/include/public/hvm/ioreq.h
 
 X86 MEMORY MANAGEMENT
 M:	Jan Beulich <jbeulich@suse.com>
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa..abe0fce 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config PV_LINEAR_PT
 
 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
+	select IOREQ_SERVER
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 36b1e4e..b03ceee 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -41,140 +41,6 @@ bool ioreq_complete_mmio(void)
     return handle_mmio();
 }
 
-static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
-{
-    ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
-
-    d->arch.hvm.ioreq_server.server[id] = s;
-}
-
-#define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
-
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
-{
-    if ( id >= MAX_NR_IOREQ_SERVERS )
-        return NULL;
-
-    return GET_IOREQ_SERVER(d, id);
-}
-
-/*
- * Iterate over all possible ioreq servers.
- *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
- */
-#define FOR_EACH_IOREQ_SERVER(d, id, s) \
-    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
-        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
-            continue; \
-        else
-
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
-{
-    shared_iopage_t *p = s->ioreq.va;
-
-    ASSERT((v == current) || !vcpu_runnable(v));
-    ASSERT(p != NULL);
-
-    return &p->vcpu_ioreq[v->vcpu_id];
-}
-
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
-{
-    struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            if ( sv->vcpu == v && sv->pending )
-            {
-                if ( srvp )
-                    *srvp = s;
-                return sv;
-            }
-        }
-    }
-
-    return NULL;
-}
-
-bool hvm_io_pending(struct vcpu *v)
-{
-    return get_pending_vcpu(v, NULL);
-}
-
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
-{
-    unsigned int prev_state = STATE_IOREQ_NONE;
-    unsigned int state = p->state;
-    uint64_t data = ~0;
-
-    smp_rmb();
-
-    /*
-     * The only reason we should see this condition be false is when an
-     * emulator dying races with I/O being requested.
-     */
-    while ( likely(state != STATE_IOREQ_NONE) )
-    {
-        if ( unlikely(state < prev_state) )
-        {
-            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
-                     prev_state, state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        switch ( prev_state = state )
-        {
-        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
-            p->state = STATE_IOREQ_NONE;
-            data = p->data;
-            break;
-
-        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
-        case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
-            continue;
-
-        default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        break;
-    }
-
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
-        p->data = data;
-
-    sv->pending = false;
-
-    return true;
-}
-
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
 {
     switch ( io_completion )
@@ -198,52 +64,6 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
     return true;
 }
 
-bool handle_hvm_io_completion(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
-
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-        return false;
-
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
-        STATE_IORESP_READY : STATE_IOREQ_NONE;
-
-    msix_write_completion(v);
-    vcpu_end_shutdown_deferral(v);
-
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
-
-    switch ( io_completion )
-    {
-    case HVMIO_no_completion:
-        break;
-
-    case HVMIO_mmio_completion:
-        return ioreq_complete_mmio();
-
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
-
-    default:
-        return arch_vcpu_ioreq_completion(io_completion);
-    }
-
-    return true;
-}
-
 static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->target;
@@ -360,93 +180,6 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
-         * allocating a page is not permitted.
-         */
-        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
-
-    if ( !page )
-        return -ENOMEM;
-
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
-
-    iorp->va = __map_domain_page_global(page);
-    if ( !iorp->va )
-        goto fail;
-
-    iorp->page = page;
-    clear_page(iorp->va);
-    return 0;
-
- fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-
-    return -ENOMEM;
-}
-
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
-
-    if ( !page )
-        return;
-
-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
-    iorp->va = NULL;
-
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-}
-
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-{
-    const struct hvm_ioreq_server *s;
-    unsigned int id;
-    bool found = false;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
-        {
-            found = true;
-            break;
-        }
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return found;
-}
-
 static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 
 {
@@ -481,125 +214,6 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
-{
-    ASSERT(spin_is_locked(&s->lock));
-
-    if ( s->ioreq.va != NULL )
-    {
-        ioreq_t *p = get_ioreq(s, sv->vcpu);
-
-        p->vp_eport = sv->ioreq_evtchn;
-    }
-}
-
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-    int rc;
-
-    sv = xzalloc(struct hvm_ioreq_vcpu);
-
-    rc = -ENOMEM;
-    if ( !sv )
-        goto fail1;
-
-    spin_lock(&s->lock);
-
-    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
-                                         s->emulator->domain_id, NULL);
-    if ( rc < 0 )
-        goto fail2;
-
-    sv->ioreq_evtchn = rc;
-
-    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-    {
-        rc = alloc_unbound_xen_event_channel(v->domain, 0,
-                                             s->emulator->domain_id, NULL);
-        if ( rc < 0 )
-            goto fail3;
-
-        s->bufioreq_evtchn = rc;
-    }
-
-    sv->vcpu = v;
-
-    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
-
-    if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
-
-    spin_unlock(&s->lock);
-    return 0;
-
- fail3:
-    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
- fail2:
-    spin_unlock(&s->lock);
-    xfree(sv);
-
- fail1:
-    return rc;
-}
-
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
-                                         struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu != v )
-            continue;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-        break;
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv, *next;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry_safe ( sv,
-                               next,
-                               &s->ioreq_vcpu_list,
-                               list_entry )
-    {
-        struct vcpu *v = sv->vcpu;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-    }
-
-    spin_unlock(&s->lock);
-}
-
 int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc;
@@ -621,940 +235,91 @@ void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
     hvm_unmap_ioreq_gfn(s, false);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
-    int rc;
-
-    rc = hvm_alloc_ioreq_mfn(s, false);
-
-    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
-
-    if ( rc )
-        hvm_free_ioreq_mfn(s, false);
-
-    return rc;
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
 }
 
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+/* Called when target domain is paused */
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
 {
-    unsigned int i;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-        rangeset_destroy(s->range[i]);
+    p2m_set_ioreq_server(s->target, 0, s);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            ioservid_t id)
+/* Called with ioreq_server lock held */
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags)
 {
-    unsigned int i;
-    int rc;
+    int rc = p2m_set_ioreq_server(d, flags, s);
 
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
+    if ( rc == 0 && flags == 0 )
     {
-        char *name;
-
-        rc = asprintf(&name, "ioreq_server %d %s", id,
-                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
-                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
-                      "");
-        if ( rc )
-            goto fail;
-
-        s->range[i] = rangeset_new(s->target, name,
-                                   RANGESETF_prettyprint_hex);
-
-        xfree(name);
-
-        rc = -ENOMEM;
-        if ( !s->range[i] )
-            goto fail;
+        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
     }
 
-    return 0;
-
- fail:
-    hvm_ioreq_server_free_rangesets(s);
-
     return rc;
 }
 
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+bool arch_ioreq_server_destroy_all(struct domain *d)
 {
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+int arch_ioreq_server_get_type_addr(const struct domain *d,
+                                    const ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr)
 {
-    struct hvm_ioreq_vcpu *sv;
+    unsigned int cf8 = d->arch.hvm.pci_cf8;
 
-    spin_lock(&s->lock);
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
 
-    if ( s->enabled )
-        goto done;
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc &&
+         CF8_ENABLED(cf8) )
+    {
+        unsigned int x86_fam, reg;
+        pci_sbdf_t sbdf;
 
-    arch_ioreq_server_enable(s);
+        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
 
-    s->enabled = true;
+        /* PCI config data cycle */
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        /* AMD extended configuration space access? */
+        if ( CF8_ADDR_HI(cf8) &&
+             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
+             (x86_fam = get_cpu_family(
+                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
+             x86_fam < 0x17 )
+        {
+            uint64_t msr_val;
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
+                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
+                *addr |= CF8_ADDR_HI(cf8);
+        }
+    }
+    else
+    {
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
+    }
 
-  done:
-    spin_unlock(&s->lock);
-}
-
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
-{
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
-}
-
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
-{
-    spin_lock(&s->lock);
-
-    if ( !s->enabled )
-        goto done;
-
-    arch_ioreq_server_disable(s);
-
-    s->enabled = false;
-
- done:
-    spin_unlock(&s->lock);
-}
-
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
-{
-    struct domain *currd = current->domain;
-    struct vcpu *v;
-    int rc;
-
-    s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
-    spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
-
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
-    if ( rc )
-        return rc;
-
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
-    }
-
-    return 0;
-
- fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    arch_ioreq_server_unmap_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-    return rc;
-}
-
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
-{
-    ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    arch_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-}
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int i;
-    int rc;
-
-    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        return -EINVAL;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
-    {
-        if ( !GET_IOREQ_SERVER(d, i) )
-            break;
-    }
-
-    rc = -ENOSPC;
-    if ( i >= MAX_NR_IOREQ_SERVERS )
-        goto fail;
-
-    /*
-     * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
-     */
-    set_ioreq_server(d, i, s);
-
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
-    if ( rc )
-    {
-        set_ioreq_server(d, i, NULL);
-        goto fail;
-    }
-
-    if ( id )
-        *id = i;
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    return 0;
-
- fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    xfree(s);
-    return rc;
-}
-
-/* Called when target domain is paused */
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
-{
-    p2m_set_ioreq_server(s->target, 0, s);
-}
-
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    arch_ioreq_server_destroy(s);
-
-    hvm_ioreq_server_disable(s);
-
-    /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
-     * set_ioreq_server() since the target domain is paused.
-     */
-    hvm_ioreq_server_deinit(s);
-    set_ioreq_server(d, id, NULL);
-
-    domain_unpause(d);
-
-    xfree(s);
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    if ( ioreq_gfn || bufioreq_gfn )
-    {
-        rc = arch_ioreq_server_map_pages(s);
-        if ( rc )
-            goto out;
-    }
-
-    if ( ioreq_gfn )
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
-
-    if ( HANDLE_BUFIOREQ(s) )
-    {
-        if ( bufioreq_gfn )
-            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
-
-        if ( bufioreq_port )
-            *bufioreq_port = s->bufioreq_evtchn;
-    }
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    ASSERT(is_hvm_domain(d));
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    rc = hvm_ioreq_server_alloc_pages(s);
-    if ( rc )
-        goto out;
-
-    switch ( idx )
-    {
-    case XENMEM_resource_ioreq_server_frame_bufioreq:
-        rc = -ENOENT;
-        if ( !HANDLE_BUFIOREQ(s) )
-            goto out;
-
-        *mfn = page_to_mfn(s->bufioreq.page);
-        rc = 0;
-        break;
-
-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
-        rc = 0;
-        break;
-
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end)
-{
-    struct hvm_ioreq_server *s;
-    struct rangeset *r;
-    int rc;
-
-    if ( start > end )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    switch ( type )
-    {
-    case XEN_DMOP_IO_RANGE_PORT:
-    case XEN_DMOP_IO_RANGE_MEMORY:
-    case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
-        break;
-
-    default:
-        r = NULL;
-        break;
-    }
-
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
-
-    rc = -EEXIST;
-    if ( rangeset_overlaps_range(r, start, end) )
-        goto out;
-
-    rc = rangeset_add_range(r, start, end);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end)
-{
-    struct hvm_ioreq_server *s;
-    struct rangeset *r;
-    int rc;
-
-    if ( start > end )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    switch ( type )
-    {
-    case XEN_DMOP_IO_RANGE_PORT:
-    case XEN_DMOP_IO_RANGE_MEMORY:
-    case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
-        break;
-
-    default:
-        r = NULL;
-        break;
-    }
-
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
-
-    rc = -ENOENT;
-    if ( !rangeset_contains_range(r, start, end) )
-        goto out;
-
-    rc = rangeset_remove_range(r, start, end);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-/* Called with ioreq_server lock held */
-int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
-                                   uint32_t flags)
-{
-    int rc = p2m_set_ioreq_server(d, flags, s);
-
-    if ( rc == 0 && flags == 0 )
-    {
-        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-        if ( read_atomic(&p2m->ioreq.entry_count) )
-            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
-    }
-
-    return rc;
-}
-
-/*
- * Map or unmap an ioreq server to specific memory type. For now, only
- * HVMMEM_ioreq_server is supported, and in the future new types can be
- * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
- * currently, only write operations are to be forwarded to an ioreq server.
- * Support for the emulation of read operations can be added when an ioreq
- * server has such requirement in the future.
- */
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    if ( type != HVMMEM_ioreq_server )
-        return -EINVAL;
-
-    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    rc = arch_ioreq_server_map_mem_type(d, s, flags);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    if ( enabled )
-        hvm_ioreq_server_enable(s);
-    else
-        hvm_ioreq_server_disable(s);
-
-    domain_unpause(d);
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    return rc;
-}
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail;
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return 0;
-
- fail:
-    while ( ++id != MAX_NR_IOREQ_SERVERS )
-    {
-        s = GET_IOREQ_SERVER(d, id);
-
-        if ( !s )
-            continue;
-
-        hvm_ioreq_server_remove_vcpu(s, v);
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-}
-
-bool arch_ioreq_server_destroy_all(struct domain *d)
-{
-    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
-}
-
-void hvm_destroy_all_ioreq_servers(struct domain *d)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    if ( !arch_ioreq_server_destroy_all(d) )
-        return;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    /* No need to domain_pause() as the domain is being torn down */
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        hvm_ioreq_server_disable(s);
-
-        /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
-         * set_ioreq_server() since the target domain is being destroyed.
-         */
-        hvm_ioreq_server_deinit(s);
-        set_ioreq_server(d, id, NULL);
-
-        xfree(s);
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-}
-
-int arch_ioreq_server_get_type_addr(const struct domain *d,
-                                    const ioreq_t *p,
-                                    uint8_t *type,
-                                    uint64_t *addr)
-{
-    unsigned int cf8 = d->arch.hvm.pci_cf8;
-
-    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return -EINVAL;
-
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
-    {
-        unsigned int x86_fam, reg;
-        pci_sbdf_t sbdf;
-
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
-
-        /* PCI config data cycle */
-        *type = XEN_DMOP_IO_RANGE_PCI;
-        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
-        /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
-             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
-             (x86_fam = get_cpu_family(
-                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
-             x86_fam < 0x17 )
-        {
-            uint64_t msr_val;
-
-            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
-                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                *addr |= CF8_ADDR_HI(cf8);
-        }
-    }
-    else
-    {
-        *type = (p->type == IOREQ_TYPE_PIO) ?
-                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        *addr = p->addr;
-    }
-
-    return 0;
-}
-
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
-{
-    struct hvm_ioreq_server *s;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
-
-    if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
-        return NULL;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct rangeset *r;
-
-        if ( !s->enabled )
-            continue;
-
-        r = s->range[type];
-
-        switch ( type )
-        {
-            unsigned long start, end;
-
-        case XEN_DMOP_IO_RANGE_PORT:
-            start = addr;
-            end = start + p->size - 1;
-            if ( rangeset_contains_range(r, start, end) )
-                return s;
-
-            break;
-
-        case XEN_DMOP_IO_RANGE_MEMORY:
-            start = hvm_mmio_first_byte(p);
-            end = hvm_mmio_last_byte(p);
-
-            if ( rangeset_contains_range(r, start, end) )
-                return s;
-
-            break;
-
-        case XEN_DMOP_IO_RANGE_PCI:
-            if ( rangeset_contains_singleton(r, addr >> 32) )
-            {
-                p->type = IOREQ_TYPE_PCI_CONFIG;
-                p->addr = addr;
-                return s;
-            }
-
-            break;
-        }
-    }
-
-    return NULL;
-}
-
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
-{
-    struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
-    buffered_iopage_t *pg;
-    buf_ioreq_t bp = { .data = p->data,
-                       .addr = p->addr,
-                       .type = p->type,
-                       .dir = p->dir };
-    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
-    int qw = 0;
-
-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    iorp = &s->bufioreq;
-    pg = iorp->va;
-
-    if ( !pg )
-        return IOREQ_STATUS_UNHANDLED;
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
-
-    switch ( p->size )
-    {
-    case 1:
-        bp.size = 0;
-        break;
-    case 2:
-        bp.size = 1;
-        break;
-    case 4:
-        bp.size = 2;
-        break;
-    case 8:
-        bp.size = 3;
-        qw = 1;
-        break;
-    default:
-        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return IOREQ_STATUS_UNHANDLED;
-    }
-
-    spin_lock(&s->bufioreq_lock);
-
-    if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=
-         (IOREQ_BUFFER_SLOT_NUM - qw) )
-    {
-        /* The queue is full: send the iopacket through the normal path. */
-        spin_unlock(&s->bufioreq_lock);
-        return IOREQ_STATUS_UNHANDLED;
-    }
-
-    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
-
-    if ( qw )
-    {
-        bp.data = p->data >> 32;
-        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
-    }
-
-    /* Make the ioreq_t visible /before/ write_pointer. */
-    smp_wmb();
-    pg->ptrs.write_pointer += qw ? 2 : 1;
-
-    /* Canonicalize read/write pointers to prevent their overflow. */
-    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
-            qw++ < IOREQ_BUFFER_SLOT_NUM &&
-            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
-    {
-        union bufioreq_pointers old = pg->ptrs, new;
-        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
-
-        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
-    }
-
-    notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-    spin_unlock(&s->bufioreq_lock);
-
-    return IOREQ_STATUS_HANDLED;
-}
-
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
-{
-    struct vcpu *curr = current;
-    struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
-
-    ASSERT(s);
-
-    if ( buffered )
-        return hvm_send_buffered_ioreq(s, proto_p);
-
-    if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return IOREQ_STATUS_RETRY;
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu == curr )
-        {
-            evtchn_port_t port = sv->ioreq_evtchn;
-            ioreq_t *p = get_ioreq(s, curr);
-
-            if ( unlikely(p->state != STATE_IOREQ_NONE) )
-            {
-                gprintk(XENLOG_ERR, "device model set bad IO state %d\n",
-                        p->state);
-                break;
-            }
-
-            if ( unlikely(p->vp_eport != port) )
-            {
-                gprintk(XENLOG_ERR, "device model set bad event channel %d\n",
-                        p->vp_eport);
-                break;
-            }
-
-            proto_p->state = STATE_IOREQ_NONE;
-            proto_p->vp_eport = port;
-            *p = *proto_p;
-
-            prepare_wait_on_xen_event_channel(port);
-
-            /*
-             * Following happens /after/ blocking and setting up ioreq
-             * contents. prepare_wait_on_xen_event_channel() is an implicit
-             * barrier.
-             */
-            p->state = STATE_IOREQ_READY;
-            notify_via_xen_event_channel(d, port);
-
-            sv->pending = true;
-            return IOREQ_STATUS_RETRY;
-        }
-    }
-
-    return IOREQ_STATUS_UNHANDLED;
-}
-
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
-{
-    struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id, failed = 0;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( !s->enabled )
-            continue;
-
-        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
-            failed++;
-    }
-
-    return failed;
+    return 0;
 }
 
 static int hvm_access_cf8(
@@ -1574,13 +339,6 @@ void arch_ioreq_domain_init(struct domain *d)
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
-void hvm_ioreq_init(struct domain *d)
-{
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
-
-    arch_ioreq_domain_init(d);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5a50339..e4638ef 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -100,6 +100,7 @@
  */
 
 #include <xen/init.h>
+#include <xen/ioreq.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
@@ -141,7 +142,6 @@
 #include <asm/io_apic.h>
 #include <asm/pci.h>
 #include <asm/guest.h>
-#include <asm/hvm/ioreq.h>
 
 #include <asm/hvm/grant_table.h>
 #include <asm/pv/domain.h>
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index a33e100..f7d74d3 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -20,6 +20,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/ioreq.h>
 #include <xen/types.h>
 #include <xen/mm.h>
 #include <xen/trace.h>
@@ -34,7 +35,6 @@
 #include <asm/current.h>
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
-#include <asm/hvm/ioreq.h>
 #include <xen/numa.h>
 #include "private.h"
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3e2cf25..c971ded 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -139,6 +139,9 @@ config HYPFS_CONFIG
 	  Disable this option in case you want to spare some memory or you
 	  want to hide the .config contents from dom0.
 
+config IOREQ_SERVER
+	bool
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index d109f27..c0e91c4 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_GRANT_TABLE) += grant_table.o
 obj-y += guestcopy.o
 obj-bin-y += gunzip.init.o
 obj-$(CONFIG_HYPFS) += hypfs.o
+obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += keyhandler.o
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
new file mode 100644
index 0000000..13ea959
--- /dev/null
+++ b/xen/common/ioreq.c
@@ -0,0 +1,1287 @@
+/*
+ * ioreq.c: hardware virtual machine I/O emulation
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/ctype.h>
+#include <xen/domain.h>
+#include <xen/domain_page.h>
+#include <xen/event.h>
+#include <xen/init.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/paging.h>
+#include <xen/sched.h>
+#include <xen/softirq.h>
+#include <xen/trace.h>
+#include <xen/vpci.h>
+
+#include <asm/hvm/ioreq.h>
+
+#include <public/hvm/ioreq.h>
+#include <public/hvm/params.h>
+
+static void set_ioreq_server(struct domain *d, unsigned int id,
+                             struct hvm_ioreq_server *s)
+{
+    ASSERT(id < MAX_NR_IOREQ_SERVERS);
+    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+
+    d->arch.hvm.ioreq_server.server[id] = s;
+}
+
+#define GET_IOREQ_SERVER(d, id) \
+    (d)->arch.hvm.ioreq_server.server[id]
+
+static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                                 unsigned int id)
+{
+    if ( id >= MAX_NR_IOREQ_SERVERS )
+        return NULL;
+
+    return GET_IOREQ_SERVER(d, id);
+}
+
+/*
+ * Iterate over all possible ioreq servers.
+ *
+ * NOTE: The iteration is backwards such that more recently created
+ *       ioreq servers are favoured in hvm_select_ioreq_server().
+ *       This is a semantic that previously existed when ioreq servers
+ *       were held in a linked list.
+ */
+#define FOR_EACH_IOREQ_SERVER(d, id, s) \
+    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
+        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
+            continue; \
+        else
+
+static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+    shared_iopage_t *p = s->ioreq.va;
+
+    ASSERT((v == current) || !vcpu_runnable(v));
+    ASSERT(p != NULL);
+
+    return &p->vcpu_ioreq[v->vcpu_id];
+}
+
+static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                               struct hvm_ioreq_server **srvp)
+{
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        struct hvm_ioreq_vcpu *sv;
+
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+        {
+            if ( sv->vcpu == v && sv->pending )
+            {
+                if ( srvp )
+                    *srvp = s;
+                return sv;
+            }
+        }
+    }
+
+    return NULL;
+}
+
+bool hvm_io_pending(struct vcpu *v)
+{
+    return get_pending_vcpu(v, NULL);
+}
+
+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+{
+    unsigned int prev_state = STATE_IOREQ_NONE;
+    unsigned int state = p->state;
+    uint64_t data = ~0;
+
+    smp_rmb();
+
+    /*
+     * The only reason we should see this condition be false is when an
+     * emulator dying races with I/O being requested.
+     */
+    while ( likely(state != STATE_IOREQ_NONE) )
+    {
+        if ( unlikely(state < prev_state) )
+        {
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
+                     prev_state, state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        switch ( prev_state = state )
+        {
+        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+            p->state = STATE_IOREQ_NONE;
+            data = p->data;
+            break;
+
+        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
+        case STATE_IOREQ_INPROCESS:
+            wait_on_xen_event_channel(sv->ioreq_evtchn,
+                                      ({ state = p->state;
+                                         smp_rmb();
+                                         state != prev_state; }));
+            continue;
+
+        default:
+            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        break;
+    }
+
+    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    if ( hvm_ioreq_needs_completion(p) )
+        p->data = data;
+
+    sv->pending = false;
+
+    return true;
+}
+
+bool handle_hvm_io_completion(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_vcpu *sv;
+    enum hvm_io_completion io_completion;
+
+    if ( has_vpci(d) && vpci_process_pending(v) )
+    {
+        raise_softirq(SCHEDULE_SOFTIRQ);
+        return false;
+    }
+
+    sv = get_pending_vcpu(v, &s);
+    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+        return false;
+
+    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+        STATE_IORESP_READY : STATE_IOREQ_NONE;
+
+    msix_write_completion(v);
+    vcpu_end_shutdown_deferral(v);
+
+    io_completion = vio->io_completion;
+    vio->io_completion = HVMIO_no_completion;
+
+    switch ( io_completion )
+    {
+    case HVMIO_no_completion:
+        break;
+
+    case HVMIO_mmio_completion:
+        return ioreq_complete_mmio();
+
+    case HVMIO_pio_completion:
+        return handle_pio(vio->io_req.addr, vio->io_req.size,
+                          vio->io_req.dir);
+
+    default:
+        return arch_vcpu_ioreq_completion(io_completion);
+    }
+
+    return true;
+}
+
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    page = alloc_domheap_page(s->target, MEMF_no_refcount);
+
+    if ( !page )
+        return -ENOMEM;
+
+    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(s->emulator);
+        return -ENODATA;
+    }
+
+    iorp->va = __map_domain_page_global(page);
+    if ( !iorp->va )
+        goto fail;
+
+    iorp->page = page;
+    clear_page(iorp->va);
+    return 0;
+
+ fail:
+    put_page_alloc_ref(page);
+    put_page_and_type(page);
+
+    return -ENOMEM;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page = iorp->page;
+
+    if ( !page )
+        return;
+
+    iorp->page = NULL;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page_alloc_ref(page);
+    put_page_and_type(page);
+}
+
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
+{
+    const struct hvm_ioreq_server *s;
+    unsigned int id;
+    bool found = false;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
+        {
+            found = true;
+            break;
+        }
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return found;
+}
+
+static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
+                                    struct hvm_ioreq_vcpu *sv)
+{
+    ASSERT(spin_is_locked(&s->lock));
+
+    if ( s->ioreq.va != NULL )
+    {
+        ioreq_t *p = get_ioreq(s, sv->vcpu);
+
+        p->vp_eport = sv->ioreq_evtchn;
+    }
+}
+
+static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+                                     struct vcpu *v)
+{
+    struct hvm_ioreq_vcpu *sv;
+    int rc;
+
+    sv = xzalloc(struct hvm_ioreq_vcpu);
+
+    rc = -ENOMEM;
+    if ( !sv )
+        goto fail1;
+
+    spin_lock(&s->lock);
+
+    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
+                                         s->emulator->domain_id, NULL);
+    if ( rc < 0 )
+        goto fail2;
+
+    sv->ioreq_evtchn = rc;
+
+    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
+    {
+        rc = alloc_unbound_xen_event_channel(v->domain, 0,
+                                             s->emulator->domain_id, NULL);
+        if ( rc < 0 )
+            goto fail3;
+
+        s->bufioreq_evtchn = rc;
+    }
+
+    sv->vcpu = v;
+
+    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
+
+    if ( s->enabled )
+        hvm_update_ioreq_evtchn(s, sv);
+
+    spin_unlock(&s->lock);
+    return 0;
+
+ fail3:
+    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
+
+ fail2:
+    spin_unlock(&s->lock);
+    xfree(sv);
+
+ fail1:
+    return rc;
+}
+
+static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+                                         struct vcpu *v)
+{
+    struct hvm_ioreq_vcpu *sv;
+
+    spin_lock(&s->lock);
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+    {
+        if ( sv->vcpu != v )
+            continue;
+
+        list_del(&sv->list_entry);
+
+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
+            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
+
+        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
+
+        xfree(sv);
+        break;
+    }
+
+    spin_unlock(&s->lock);
+}
+
+static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+{
+    struct hvm_ioreq_vcpu *sv, *next;
+
+    spin_lock(&s->lock);
+
+    list_for_each_entry_safe ( sv,
+                               next,
+                               &s->ioreq_vcpu_list,
+                               list_entry )
+    {
+        struct vcpu *v = sv->vcpu;
+
+        list_del(&sv->list_entry);
+
+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
+            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
+
+        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
+
+        xfree(sv);
+    }
+
+    spin_unlock(&s->lock);
+}
+
+static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+{
+    int rc;
+
+    rc = hvm_alloc_ioreq_mfn(s, false);
+
+    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
+        rc = hvm_alloc_ioreq_mfn(s, true);
+
+    if ( rc )
+        hvm_free_ioreq_mfn(s, false);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+{
+    hvm_free_ioreq_mfn(s, true);
+    hvm_free_ioreq_mfn(s, false);
+}
+
+static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
+        rangeset_destroy(s->range[i]);
+}
+
+static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+                                            ioservid_t id)
+{
+    unsigned int i;
+    int rc;
+
+    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
+    {
+        char *name;
+
+        rc = asprintf(&name, "ioreq_server %d %s", id,
+                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
+                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
+                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
+                      "");
+        if ( rc )
+            goto fail;
+
+        s->range[i] = rangeset_new(s->target, name,
+                                   RANGESETF_prettyprint_hex);
+
+        xfree(name);
+
+        rc = -ENOMEM;
+        if ( !s->range[i] )
+            goto fail;
+
+        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+    }
+
+    return 0;
+
+ fail:
+    hvm_ioreq_server_free_rangesets(s);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+{
+    struct hvm_ioreq_vcpu *sv;
+
+    spin_lock(&s->lock);
+
+    if ( s->enabled )
+        goto done;
+
+    arch_ioreq_server_enable(s);
+
+    s->enabled = true;
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+        hvm_update_ioreq_evtchn(s, sv);
+
+  done:
+    spin_unlock(&s->lock);
+}
+
+static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+{
+    spin_lock(&s->lock);
+
+    if ( !s->enabled )
+        goto done;
+
+    arch_ioreq_server_disable(s);
+
+    s->enabled = false;
+
+ done:
+    spin_unlock(&s->lock);
+}
+
+static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+                                 struct domain *d, int bufioreq_handling,
+                                 ioservid_t id)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    int rc;
+
+    s->target = d;
+
+    get_knownalive_domain(currd);
+    s->emulator = currd;
+
+    spin_lock_init(&s->lock);
+    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+    spin_lock_init(&s->bufioreq_lock);
+
+    s->ioreq.gfn = INVALID_GFN;
+    s->bufioreq.gfn = INVALID_GFN;
+
+    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    if ( rc )
+        return rc;
+
+    s->bufioreq_handling = bufioreq_handling;
+
+    for_each_vcpu ( d, v )
+    {
+        rc = hvm_ioreq_server_add_vcpu(s, v);
+        if ( rc )
+            goto fail_add;
+    }
+
+    return 0;
+
+ fail_add:
+    hvm_ioreq_server_remove_all_vcpus(s);
+    arch_ioreq_server_unmap_pages(s);
+
+    hvm_ioreq_server_free_rangesets(s);
+
+    put_domain(s->emulator);
+    return rc;
+}
+
+static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+{
+    ASSERT(!s->enabled);
+    hvm_ioreq_server_remove_all_vcpus(s);
+
+    /*
+     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
+     *       hvm_ioreq_server_free_pages() in that order.
+     *       This is because the former will do nothing if the pages
+     *       are not mapped, leaving the page to be freed by the latter.
+     *       However if the pages are mapped then the former will set
+     *       the page_info pointer to NULL, meaning the latter will do
+     *       nothing.
+     */
+    arch_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
+
+    hvm_ioreq_server_free_rangesets(s);
+
+    put_domain(s->emulator);
+}
+
+int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                            ioservid_t *id)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int i;
+    int rc;
+
+    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
+        return -EINVAL;
+
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        return -ENOMEM;
+
+    domain_pause(d);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
+    {
+        if ( !GET_IOREQ_SERVER(d, i) )
+            break;
+    }
+
+    rc = -ENOSPC;
+    if ( i >= MAX_NR_IOREQ_SERVERS )
+        goto fail;
+
+    /*
+     * It is safe to call set_ioreq_server() prior to
+     * hvm_ioreq_server_init() since the target domain is paused.
+     */
+    set_ioreq_server(d, i, s);
+
+    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    if ( rc )
+    {
+        set_ioreq_server(d, i, NULL);
+        goto fail;
+    }
+
+    if ( id )
+        *id = i;
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    domain_unpause(d);
+
+    return 0;
+
+ fail:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    domain_unpause(d);
+
+    xfree(s);
+    return rc;
+}
+
+int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    domain_pause(d);
+
+    arch_ioreq_server_destroy(s);
+
+    hvm_ioreq_server_disable(s);
+
+    /*
+     * It is safe to call hvm_ioreq_server_deinit() prior to
+     * set_ioreq_server() since the target domain is paused.
+     */
+    hvm_ioreq_server_deinit(s);
+    set_ioreq_server(d, id, NULL);
+
+    domain_unpause(d);
+
+    xfree(s);
+
+    rc = 0;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
+                              unsigned long *ioreq_gfn,
+                              unsigned long *bufioreq_gfn,
+                              evtchn_port_t *bufioreq_port)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    if ( ioreq_gfn || bufioreq_gfn )
+    {
+        rc = arch_ioreq_server_map_pages(s);
+        if ( rc )
+            goto out;
+    }
+
+    if ( ioreq_gfn )
+        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+
+    if ( HANDLE_BUFIOREQ(s) )
+    {
+        if ( bufioreq_gfn )
+            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
+
+        if ( bufioreq_port )
+            *bufioreq_port = s->bufioreq_evtchn;
+    }
+
+    rc = 0;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    ASSERT(is_hvm_domain(d));
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    rc = hvm_ioreq_server_alloc_pages(s);
+    if ( rc )
+        goto out;
+
+    switch ( idx )
+    {
+    case XENMEM_resource_ioreq_server_frame_bufioreq:
+        rc = -ENOENT;
+        if ( !HANDLE_BUFIOREQ(s) )
+            goto out;
+
+        *mfn = page_to_mfn(s->bufioreq.page);
+        rc = 0;
+        break;
+
+    case XENMEM_resource_ioreq_server_frame_ioreq(0):
+        *mfn = page_to_mfn(s->ioreq.page);
+        rc = 0;
+        break;
+
+    default:
+        rc = -EINVAL;
+        break;
+    }
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end)
+{
+    struct hvm_ioreq_server *s;
+    struct rangeset *r;
+    int rc;
+
+    if ( start > end )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    switch ( type )
+    {
+    case XEN_DMOP_IO_RANGE_PORT:
+    case XEN_DMOP_IO_RANGE_MEMORY:
+    case XEN_DMOP_IO_RANGE_PCI:
+        r = s->range[type];
+        break;
+
+    default:
+        r = NULL;
+        break;
+    }
+
+    rc = -EINVAL;
+    if ( !r )
+        goto out;
+
+    rc = -EEXIST;
+    if ( rangeset_overlaps_range(r, start, end) )
+        goto out;
+
+    rc = rangeset_add_range(r, start, end);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                         uint32_t type, uint64_t start,
+                                         uint64_t end)
+{
+    struct hvm_ioreq_server *s;
+    struct rangeset *r;
+    int rc;
+
+    if ( start > end )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    switch ( type )
+    {
+    case XEN_DMOP_IO_RANGE_PORT:
+    case XEN_DMOP_IO_RANGE_MEMORY:
+    case XEN_DMOP_IO_RANGE_PCI:
+        r = s->range[type];
+        break;
+
+    default:
+        r = NULL;
+        break;
+    }
+
+    rc = -EINVAL;
+    if ( !r )
+        goto out;
+
+    rc = -ENOENT;
+    if ( !rangeset_contains_range(r, start, end) )
+        goto out;
+
+    rc = rangeset_remove_range(r, start, end);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+/*
+ * Map or unmap an ioreq server to specific memory type. For now, only
+ * HVMMEM_ioreq_server is supported, and in the future new types can be
+ * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
+ * currently, only write operations are to be forwarded to an ioreq server.
+ * Support for the emulation of read operations can be added when an ioreq
+ * server has such requirement in the future.
+ */
+int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint32_t flags)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    if ( type != HVMMEM_ioreq_server )
+        return -EINVAL;
+
+    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    rc = arch_ioreq_server_map_mem_type(d, s, flags);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
+                               bool enabled)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    domain_pause(d);
+
+    if ( enabled )
+        hvm_ioreq_server_enable(s);
+    else
+        hvm_ioreq_server_disable(s);
+
+    domain_unpause(d);
+
+    rc = 0;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    return rc;
+}
+
+int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        rc = hvm_ioreq_server_add_vcpu(s, v);
+        if ( rc )
+            goto fail;
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return 0;
+
+ fail:
+    while ( ++id != MAX_NR_IOREQ_SERVERS )
+    {
+        s = GET_IOREQ_SERVER(d, id);
+
+        if ( !s )
+            continue;
+
+        hvm_ioreq_server_remove_vcpu(s, v);
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+        hvm_ioreq_server_remove_vcpu(s, v);
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+}
+
+void hvm_destroy_all_ioreq_servers(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+
+    if ( !arch_ioreq_server_destroy_all(d) )
+        return;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    /* No need to domain_pause() as the domain is being torn down */
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        hvm_ioreq_server_disable(s);
+
+        /*
+         * It is safe to call hvm_ioreq_server_deinit() prior to
+         * set_ioreq_server() since the target domain is being destroyed.
+         */
+        hvm_ioreq_server_deinit(s);
+        set_ioreq_server(d, id, NULL);
+
+        xfree(s);
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
+        return NULL;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        struct rangeset *r;
+
+        if ( !s->enabled )
+            continue;
+
+        r = s->range[type];
+
+        switch ( type )
+        {
+            unsigned long start, end;
+
+        case XEN_DMOP_IO_RANGE_PORT:
+            start = addr;
+            end = start + p->size - 1;
+            if ( rangeset_contains_range(r, start, end) )
+                return s;
+
+            break;
+
+        case XEN_DMOP_IO_RANGE_MEMORY:
+            start = hvm_mmio_first_byte(p);
+            end = hvm_mmio_last_byte(p);
+
+            if ( rangeset_contains_range(r, start, end) )
+                return s;
+
+            break;
+
+        case XEN_DMOP_IO_RANGE_PCI:
+            if ( rangeset_contains_singleton(r, addr >> 32) )
+            {
+                p->type = IOREQ_TYPE_PCI_CONFIG;
+                p->addr = addr;
+                return s;
+            }
+
+            break;
+        }
+    }
+
+    return NULL;
+}
+
+static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+{
+    struct domain *d = current->domain;
+    struct hvm_ioreq_page *iorp;
+    buffered_iopage_t *pg;
+    buf_ioreq_t bp = { .data = p->data,
+                       .addr = p->addr,
+                       .type = p->type,
+                       .dir = p->dir };
+    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
+    int qw = 0;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    iorp = &s->bufioreq;
+    pg = iorp->va;
+
+    if ( !pg )
+        return IOREQ_STATUS_UNHANDLED;
+
+    /*
+     * Return 0 for the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return 0;
+
+    switch ( p->size )
+    {
+    case 1:
+        bp.size = 0;
+        break;
+    case 2:
+        bp.size = 1;
+        break;
+    case 4:
+        bp.size = 2;
+        break;
+    case 8:
+        bp.size = 3;
+        qw = 1;
+        break;
+    default:
+        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
+        return IOREQ_STATUS_UNHANDLED;
+    }
+
+    spin_lock(&s->bufioreq_lock);
+
+    if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=
+         (IOREQ_BUFFER_SLOT_NUM - qw) )
+    {
+        /* The queue is full: send the iopacket through the normal path. */
+        spin_unlock(&s->bufioreq_lock);
+        return IOREQ_STATUS_UNHANDLED;
+    }
+
+    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
+
+    if ( qw )
+    {
+        bp.data = p->data >> 32;
+        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
+    }
+
+    /* Make the ioreq_t visible /before/ write_pointer. */
+    smp_wmb();
+    pg->ptrs.write_pointer += qw ? 2 : 1;
+
+    /* Canonicalize read/write pointers to prevent their overflow. */
+    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
+            qw++ < IOREQ_BUFFER_SLOT_NUM &&
+            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
+    {
+        union bufioreq_pointers old = pg->ptrs, new;
+        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
+
+        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
+        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
+        cmpxchg(&pg->ptrs.full, old.full, new.full);
+    }
+
+    notify_via_xen_event_channel(d, s->bufioreq_evtchn);
+    spin_unlock(&s->bufioreq_lock);
+
+    return IOREQ_STATUS_HANDLED;
+}
+
+int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+                   bool buffered)
+{
+    struct vcpu *curr = current;
+    struct domain *d = curr->domain;
+    struct hvm_ioreq_vcpu *sv;
+
+    ASSERT(s);
+
+    if ( buffered )
+        return hvm_send_buffered_ioreq(s, proto_p);
+
+    if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
+        return IOREQ_STATUS_RETRY;
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+    {
+        if ( sv->vcpu == curr )
+        {
+            evtchn_port_t port = sv->ioreq_evtchn;
+            ioreq_t *p = get_ioreq(s, curr);
+
+            if ( unlikely(p->state != STATE_IOREQ_NONE) )
+            {
+                gprintk(XENLOG_ERR, "device model set bad IO state %d\n",
+                        p->state);
+                break;
+            }
+
+            if ( unlikely(p->vp_eport != port) )
+            {
+                gprintk(XENLOG_ERR, "device model set bad event channel %d\n",
+                        p->vp_eport);
+                break;
+            }
+
+            proto_p->state = STATE_IOREQ_NONE;
+            proto_p->vp_eport = port;
+            *p = *proto_p;
+
+            prepare_wait_on_xen_event_channel(port);
+
+            /*
+             * Following happens /after/ blocking and setting up ioreq
+             * contents. prepare_wait_on_xen_event_channel() is an implicit
+             * barrier.
+             */
+            p->state = STATE_IOREQ_READY;
+            notify_via_xen_event_channel(d, port);
+
+            sv->pending = true;
+            return IOREQ_STATUS_RETRY;
+        }
+    }
+
+    return IOREQ_STATUS_UNHANDLED;
+}
+
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
+{
+    struct domain *d = current->domain;
+    struct hvm_ioreq_server *s;
+    unsigned int id, failed = 0;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( !s->enabled )
+            continue;
+
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
+            failed++;
+    }
+
+    return failed;
+}
+
+void hvm_ioreq_init(struct domain *d)
+{
+    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+
+    arch_ioreq_domain_init(d);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index c7563e1..ab2f3f8 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,8 +19,7 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+#include <xen/ioreq.h>
 
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
 int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
@@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d,
                                     uint64_t *addr);
 void arch_ioreq_domain_init(struct domain *d);
 
-bool hvm_io_pending(struct vcpu *v);
-bool handle_hvm_io_completion(struct vcpu *v);
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port);
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn);
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end);
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end);
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags);
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-void hvm_destroy_all_ioreq_servers(struct domain *d);
-
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
-
-void hvm_ioreq_init(struct domain *d);
-
 bool ioreq_complete_mmio(void);
 
 #define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
new file mode 100644
index 0000000..ad47c61
--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,73 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_IOREQ_H__
+#define __XEN_IOREQ_H__
+
+#include <xen/sched.h>
+
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
+bool hvm_io_pending(struct vcpu *v);
+bool handle_hvm_io_completion(struct vcpu *v);
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
+
+int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                            ioservid_t *id);
+int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
+int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
+                              unsigned long *ioreq_gfn,
+                              unsigned long *bufioreq_gfn,
+                              evtchn_port_t *bufioreq_port);
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn);
+int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end);
+int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                         uint32_t type, uint64_t start,
+                                         uint64_t end);
+int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint32_t flags);
+int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
+                               bool enabled);
+
+int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
+void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
+void hvm_destroy_all_ioreq_servers(struct domain *d);
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p);
+int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+                   bool buffered);
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
+
+void hvm_ioreq_init(struct domain *d);
+
+#endif /* __XEN_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:37 2020
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 07/23] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Mon, 30 Nov 2020 12:31:22 +0200
Message-Id: <1606732298-22107-8-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and these structs will be used
on Arm as-is. Move them to xen/ioreq.h and drop the "hvm" prefixes.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - updated the patch to reflect that the "legacy interface" is x86-specific
---
---
 xen/arch/x86/hvm/emulate.c       |   2 +-
 xen/arch/x86/hvm/ioreq.c         |  36 ++++++-------
 xen/arch/x86/hvm/stdvga.c        |   2 +-
 xen/arch/x86/mm/p2m.c            |   8 +--
 xen/common/ioreq.c               | 108 +++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  36 +------------
 xen/include/asm-x86/hvm/ioreq.h  |  12 ++---
 xen/include/asm-x86/p2m.h        |   8 +--
 xen/include/xen/ioreq.h          |  40 +++++++++++++--
 9 files changed, 126 insertions(+), 126 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 5700274..4746d5a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -287,7 +287,7 @@ static int hvmemul_do_io(
          * However, there's no cheap approach to avoid above situations in xen,
          * so the device model side needs to check the incoming ioreq event.
          */
-        struct hvm_ioreq_server *s = NULL;
+        struct ioreq_server *s = NULL;
         p2m_type_t p2mt = p2m_invalid;
 
         if ( is_mmio )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index b03ceee..009a95a 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -64,7 +64,7 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
     return true;
 }
 
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -80,7 +80,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
     return INVALID_GFN;
 }
 
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -98,7 +98,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
     return hvm_alloc_legacy_ioreq_gfn(s);
 }
 
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
                                       gfn_t gfn)
 {
     struct domain *d = s->target;
@@ -116,7 +116,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
     return true;
 }
 
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
@@ -130,9 +130,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
     }
 }
 
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -144,10 +144,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     iorp->gfn = INVALID_GFN;
 }
 
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( iorp->page )
@@ -180,11 +180,11 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -195,10 +195,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -214,7 +214,7 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -229,33 +229,33 @@ int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
 }
 
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_enable(struct ioreq_server *s)
 {
     hvm_remove_ioreq_gfn(s, false);
     hvm_remove_ioreq_gfn(s, true);
 }
 
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_disable(struct ioreq_server *s)
 {
     hvm_add_ioreq_gfn(s, true);
     hvm_add_ioreq_gfn(s, false);
 }
 
 /* Called when target domain is paused */
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+void arch_ioreq_server_destroy(struct ioreq_server *s)
 {
     p2m_set_ioreq_server(s->target, 0, s);
 }
 
 /* Called with ioreq_server lock held */
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags)
 {
     int rc = p2m_set_ioreq_server(d, flags, s);
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index e184664..bafb3f6 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    struct ioreq_server *srv;
 
     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index d9cc185..7a2ba82 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -367,7 +367,7 @@ void p2m_memory_type_changed(struct domain *d)
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         struct ioreq_server *s)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -415,11 +415,11 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }
 
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
 
     spin_lock(&p2m->ioreq.lock);
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 6e9f745..3e80fc6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,7 +35,7 @@
 #include <public/hvm/params.h>
 
 static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
+                             struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
@@ -46,8 +46,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
 
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
+static struct ioreq_server *get_ioreq_server(const struct domain *d,
+                                             unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -69,7 +69,7 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
             continue; \
         else
 
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;
 
@@ -79,16 +79,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
+static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                           struct ioreq_server **srvp)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        struct hvm_ioreq_vcpu *sv;
+        struct ioreq_vcpu *sv;
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -111,7 +111,7 @@ bool hvm_io_pending(struct vcpu *v)
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -172,8 +172,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_server *s;
+    struct ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
@@ -214,9 +214,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
 
     if ( iorp->page )
@@ -262,9 +262,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
 
     if ( !page )
@@ -281,7 +281,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
-    const struct hvm_ioreq_server *s;
+    const struct ioreq_server *s;
     unsigned int id;
     bool found = false;
 
@@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
+static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
+                                    struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -314,13 +314,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
                                      struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
     int rc;
 
-    sv = xzalloc(struct hvm_ioreq_vcpu);
+    sv = xzalloc(struct ioreq_vcpu);
 
     rc = -ENOMEM;
     if ( !sv )
@@ -366,10 +366,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
                                          struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -394,9 +394,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv, *next;
+    struct ioreq_vcpu *sv, *next;
 
     spin_lock(&s->lock);
 
@@ -420,7 +420,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -435,13 +435,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
 {
     hvm_free_ioreq_mfn(s, true);
     hvm_free_ioreq_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -449,7 +449,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
                                             ioservid_t id)
 {
     unsigned int i;
@@ -487,9 +487,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -509,7 +509,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
@@ -524,7 +524,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_init(struct ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
                                  ioservid_t id)
 {
@@ -569,7 +569,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
@@ -594,14 +594,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
                             ioservid_t *id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int i;
     int rc;
 
     if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         return -EINVAL;
 
-    s = xzalloc(struct hvm_ioreq_server);
+    s = xzalloc(struct ioreq_server);
     if ( !s )
         return -ENOMEM;
 
@@ -649,7 +649,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -694,7 +694,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -739,7 +739,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     ASSERT(is_hvm_domain(d));
@@ -791,7 +791,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -843,7 +843,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -902,7 +902,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     if ( type != HVMMEM_ioreq_server )
@@ -934,7 +934,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -967,7 +967,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
     int rc;
 
@@ -1002,7 +1002,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1015,7 +1015,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     if ( !arch_ioreq_server_destroy_all(d) )
@@ -1042,10 +1042,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
@@ -1098,10 +1098,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
+    struct ioreq_page *iorp;
     buffered_iopage_t *pg;
     buf_ioreq_t bp = { .data = p->data,
                        .addr = p->addr,
@@ -1191,12 +1191,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     ASSERT(s);
 
@@ -1254,7 +1254,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id, failed = 0;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9d247ba..1c4ca47 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -30,40 +30,6 @@
 
 #include <public/hvm/dm_op.h>
 
-struct hvm_ioreq_page {
-    gfn_t gfn;
-    struct page_info *page;
-    void *va;
-};
-
-struct hvm_ioreq_vcpu {
-    struct list_head list_entry;
-    struct vcpu      *vcpu;
-    evtchn_port_t    ioreq_evtchn;
-    bool             pending;
-};
-
-#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
-
-struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
-    /* Lock to serialize toolstack modifications */
-    spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
-    struct rangeset        *range[NR_IO_RANGE_TYPES];
-    bool                   enabled;
-    uint8_t                bufioreq_handling;
-};
-
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
@@ -110,7 +76,7 @@ struct hvm_domain {
     /* Lock protects all other values in the sub-struct and the default */
     struct {
         spinlock_t              lock;
-        struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 
     /* Cached CF8 for guest PCI config cycles */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index ab2f3f8..854dc77 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -22,13 +22,13 @@
 #include <xen/ioreq.h>
 
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_pages(struct ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
+void arch_ioreq_server_enable(struct ioreq_server *s);
+void arch_ioreq_server_disable(struct ioreq_server *s);
+void arch_ioreq_server_destroy(struct ioreq_server *s);
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags);
 bool arch_ioreq_server_destroy_all(struct domain *d);
 int arch_ioreq_server_get_type_addr(const struct domain *d,
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 8d6fd1a..4603560 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -363,7 +363,7 @@ struct p2m_domain {
           * ioreq server who's responsible for the emulation of
           * gfns with specific p2m type(for now, p2m_ioreq_server).
           */
-         struct hvm_ioreq_server *server;
+         struct ioreq_server *server;
          /*
           * flags specifies whether read, write or both operations
           * are to be emulated by an ioreq server.
@@ -941,9 +941,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }
 
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         struct ioreq_server *s);
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags);
 
 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 2746bb1..979afa0 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,40 @@
 
 #include <xen/sched.h>
 
+struct ioreq_page {
+    gfn_t gfn;
+    struct page_info *page;
+    void *va;
+};
+
+struct ioreq_vcpu {
+    struct list_head list_entry;
+    struct vcpu      *vcpu;
+    evtchn_port_t    ioreq_evtchn;
+    bool             pending;
+};
+
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
+#define MAX_NR_IO_RANGES  256
+
+struct ioreq_server {
+    struct domain          *target, *emulator;
+
+    /* Lock to serialize toolstack modifications */
+    spinlock_t             lock;
+
+    struct ioreq_page      ioreq;
+    struct list_head       ioreq_vcpu_list;
+    struct ioreq_page      bufioreq;
+
+    /* Lock to serialize access to buffered ioreq ring */
+    spinlock_t             bufioreq_lock;
+    evtchn_port_t          bufioreq_evtchn;
+    struct rangeset        *range[NR_IO_RANGE_TYPES];
+    bool                   enabled;
+    uint8_t                bufioreq_handling;
+};
+
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
     return unlikely(p->df) ?
@@ -75,9 +109,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p);
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40824.73836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgU6-00019c-Kh; Mon, 30 Nov 2020 10:32:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40824.73836; Mon, 30 Nov 2020 10:32:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgU6-00019N-Gh; Mon, 30 Nov 2020 10:32:42 +0000
Received: by outflank-mailman (input) for mailman id 40824;
 Mon, 30 Nov 2020 10:32:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgU4-0000Uu-Cl
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:40 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe4ec96c-4b2d-4290-ac68-1104e63198e9;
 Mon, 30 Nov 2020 10:32:07 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id y7so17018339lji.8
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:07 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:05 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe4ec96c-4b2d-4290-ac68-1104e63198e9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=Cc66UPW5H7v9UJ3Wv0uf40SoC8B/Dy8enJYvxK9l3es=;
        b=Qb/ZibRigsTp1//JPm1gDde4wRxyyl2Jt+50ox4r4LS55UTDorgGU2J+ImajEKFr3A
         LT4DGSCAq8iDWY8ktxIRaIvZjfQaY5IUUW7VT4JtG7zEF2RC9KAcrArtEE2Ov5Ea01Dr
         AnJLobw4U//72RCmCFyn0U530UlRBEgxZhyzfatdFmGpnGOjskXbHx7kjz3r/pKLyMtS
         PByPlWILbdJAgXbXMkiUffPqqDs3OlPF5PLUkIKvmDiNzV5W3w56T8HkaXBPOy8Is1Eh
         37GaQPemxndRug59hSIfSV+Ru/9P9DXOP6nrhiGLyBJ1mhqSqNrubKlMMZluSdjsxnt7
         BfkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=Cc66UPW5H7v9UJ3Wv0uf40SoC8B/Dy8enJYvxK9l3es=;
        b=a0Xg2fUkyzYREigLLgqPD0qR+AAXnVqAe8IgsmSGkLDQCGwsnEyFw/Ej1BEZ+7oO50
         mwjLDmxJjxRo/0PnKMwhQvxF33Nx+8tY9vfWu+zWkwV0uEPf4nHj+Q4HfEs4HIvcz04Q
         z8dgIWSol7jBxRmdUWOkgqwAcMbFd3gokvF/vhTZ/PinisNn19czxg9o7JY7YQCmtPox
         mKueAJH8V1lZZoEUbQpKgpElSkDOmIfo2XFhKgvIeGRwAXKQQw3w2+Yrwz+Mdg1Zoq8C
         S+RgziSI7QaOz02inOoaW+f5KaeRRFG1J3pyLEocMwsr+jsNDHoJ7Wh/bCezZ7EYbnjQ
         zm1Q==
X-Gm-Message-State: AOAM5306fTC9Mz4zKx02OQWzMEAX1AO3HOx02t1nA/2firyguKq7Y443
	rnWnB7AY3J5GjVhxJZgCAsdY9gEWPL47Bg==
X-Google-Smtp-Source: ABdhPJy4eNMMohOGq2GH4e3eHmtQqcMmMO2w2jZDZccMSVyUoPMx30MpFl7/aZncoaaYDZOAduEAhQ==
X-Received: by 2002:a2e:7203:: with SMTP id n3mr9504708ljc.86.1606732326102;
        Mon, 30 Nov 2020 02:32:06 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 08/23] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Mon, 30 Nov 2020 12:31:23 +0200
Message-Id: <1606732298-22107-9-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and this struct will be used
on Arm as-is. Move it to the common struct domain. This also
significantly reduces the layering violation in the common code
(*arch.hvm* usage).

We don't move ioreq_gfn since it is not used in the common code
(the "legacy" mechanism is x86-specific).

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - remove the mention of "ioreq_gfn" from the patch subject/description
   - update the patch since the "legacy" interface is x86-specific
   - drop hvm_params related changes in arch/x86/hvm/hvm.c
   - leave ioreq_gfn in hvm_domain
---
---
 xen/common/ioreq.c               | 60 ++++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  8 ------
 xen/include/xen/sched.h          | 10 +++++++
 3 files changed, 40 insertions(+), 38 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 3e80fc6..b7c2d5a 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT(!s || !d->ioreq_server.server[id]);
 
-    d->arch.hvm.ioreq_server.server[id] = s;
+    d->ioreq_server.server[id] = s;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
+    (d)->ioreq_server.server[id]
 
 static struct ioreq_server *get_ioreq_server(const struct domain *d,
                                              unsigned int id)
@@ -285,7 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -296,7 +296,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return found;
 }
@@ -606,7 +606,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -634,13 +634,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -652,7 +652,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -684,7 +684,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -697,7 +697,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -731,7 +731,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -744,7 +744,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
     ASSERT(is_hvm_domain(d));
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -782,7 +782,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -798,7 +798,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -834,7 +834,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -850,7 +850,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -886,7 +886,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -911,7 +911,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -926,7 +926,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -937,7 +937,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -961,7 +961,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     return rc;
 }
 
@@ -971,7 +971,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -980,7 +980,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return 0;
 
@@ -995,7 +995,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1005,12 +1005,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     struct ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
@@ -1021,7 +1021,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1039,7 +1039,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1271,7 +1271,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 
 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 1c4ca47..b8be1ad 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,8 +63,6 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
-
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
@@ -73,12 +71,6 @@ struct hvm_domain {
         unsigned long legacy_mask; /* indexed by HVM param number */
     } ioreq_gfn;
 
-    /* Lock protects all other values in the sub-struct and the default */
-    struct {
-        spinlock_t              lock;
-        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
-    } ioreq_server;
-
     /* Cached CF8 for guest PCI config cycles */
     uint32_t                pci_cf8;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a345cc0..62cbcdb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -316,6 +316,8 @@ struct sched_unit {
 
 struct evtchn_port_ops;
 
+#define MAX_NR_IOREQ_SERVERS 8
+
 struct domain
 {
     domid_t          domain_id;
@@ -523,6 +525,14 @@ struct domain
     /* Argo interdomain communication support */
     struct argo_domain *argo;
 #endif
+
+#ifdef CONFIG_IOREQ_SERVER
+    /* Lock protects all other values in the sub-struct and the default */
+    struct {
+        spinlock_t              lock;
+        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
+    } ioreq_server;
+#endif
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:32:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:32:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40831.73849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgU9-0001EA-Vi; Mon, 30 Nov 2020 10:32:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40831.73849; Mon, 30 Nov 2020 10:32:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgU9-0001Dz-Rh; Mon, 30 Nov 2020 10:32:45 +0000
Received: by outflank-mailman (input) for mailman id 40831;
 Mon, 30 Nov 2020 10:32:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgU9-0000Uu-D0
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:45 +0000
Received: from mail-lj1-x233.google.com (unknown [2a00:1450:4864:20::233])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79765296-9490-4b12-9b66-dbbc61d640c0;
 Mon, 30 Nov 2020 10:32:08 +0000 (UTC)
Received: by mail-lj1-x233.google.com with SMTP id t22so17072185ljk.0
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:08 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.06
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79765296-9490-4b12-9b66-dbbc61d640c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=jmfuLLn6VZhTTj/XRd0e2QYAXx4n0t7Wv41UBmA+aUQ=;
        b=DkMEJ+/hAYMWc7ygU6SikW0OL+fcT2FOueY1DqxM1hhwt0kDHOxr3zPTGvTqfthyTb
         EONZ2ugtLWdlijN3cV6fljiPYD0lKvWOEBq9DHgWqVF6QoiHULR2qgUSL0cEGP51czEG
         AwmD+9ScD4j9mIBzfNcbG9KSOjMp86DLPscfkitArneWhumOMaigj/4ymithTdjzh8fX
         Gv6MQBRNaKhXZt8jxLca7UOuTgwlTPzKmNeBBIcU8xpxYLVv83gDw7rn3xip4iefM4Si
         oQSog4d87s4+LSB0RDCVmuIvNly9AVKOF8cxn1gPPQh0qbE0T3EAwnNCKSpJER62/hc7
         npjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=jmfuLLn6VZhTTj/XRd0e2QYAXx4n0t7Wv41UBmA+aUQ=;
        b=cW4EcR3islnK2sA0y9WP/g7nNqTYjtfsp9+gbpL7W041and1gYMdgcXl/FRj/wg7z5
         j+aoVYjX0EdEhhqqEpCzeEWlqVfXsk26aLXm4Ejgl5iPnq/CTzuxWpCepcHpp/F/m1B4
         yiO/yLdDc46wFafK586hn1mFYeUa0YoxQyoqokg8ufUIYkn9/6O7CIXzdCVZYOh8NIVM
         EAbtyOzOe6pKLNwtnck0+ASpm5Yo1VWMGdHTGvg664BlPKoyGt8pxZZJJ892O/eTn5Gq
         CMTx29ZY9kcjUrcEIoWV7JypIf9ii75omkiuZO8cqm8m1BcJblzf7G85bJGRciZZA4Le
         uBsg==
X-Gm-Message-State: AOAM533ne1LEublDo2mDwLXUZxbss0zAf04il9MhVltave497JiEzZKX
	iRN6W+cRIkcQeIuM/5q8kBkxdthMq57u9w==
X-Google-Smtp-Source: ABdhPJxG/YFNMEt3pHjJaZUIojgOCRGKzKOKXPksO1pbmwPbAjKmp2ZkXAI13wN8Co2FqBbyfqmrjA==
X-Received: by 2002:a2e:814a:: with SMTP id t10mr9735200ljg.30.1606732327338;
        Mon, 30 Nov 2020 02:32:07 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 09/23] xen/dm: Make x86's DM feature common
Date: Mon, 30 Nov 2020 12:31:24 +0200
Message-Id: <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

As a lot of x86 code can be re-used on Arm later on, this patch
splits device-model support into common and arch-specific parts.

The common DM feature is built with the IOREQ_SERVER option enabled
(as well as the IOREQ feature), which is selected by x86's HVM
config for now.

Also update the XSM code a bit to let the DM op be used on Arm.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - update XSM, related changes were pulled from:
     [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - introduce xen/dm.h and move definitions here

Changes V2 -> V3:
   - no changes
---
---
 xen/arch/x86/hvm/dm.c   | 291 ++++--------------------------------------------
 xen/common/Makefile     |   1 +
 xen/common/dm.c         | 291 ++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/dm.h    |  44 ++++++++
 xen/include/xsm/dummy.h |   4 +-
 xen/include/xsm/xsm.h   |   6 +-
 xen/xsm/dummy.c         |   2 +-
 xen/xsm/flask/hooks.c   |   5 +-
 8 files changed, 364 insertions(+), 280 deletions(-)
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/include/xen/dm.h

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 71f5ca4..35f860a 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -16,6 +16,7 @@
 
 #include <xen/event.h>
 #include <xen/guest_access.h>
+#include <xen/dm.h>
 #include <xen/hypercall.h>
 #include <xen/nospec.h>
 #include <xen/sched.h>
@@ -29,13 +30,6 @@
 
 #include <public/hvm/hvm_op.h>
 
-struct dmop_args {
-    domid_t domid;
-    unsigned int nr_bufs;
-    /* Reserve enough buf elements for all current hypercalls. */
-    struct xen_dm_op_buf buf[2];
-};
-
 static bool _raw_copy_from_guest_buf_offset(void *dst,
                                             const struct dmop_args *args,
                                             unsigned int buf_idx,
@@ -338,148 +332,20 @@ static int inject_event(struct domain *d,
     return 0;
 }
 
-static int dm_op(const struct dmop_args *op_args)
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
 {
-    struct domain *d;
-    struct xen_dm_op op;
-    bool const_op = true;
     long rc;
-    size_t offset;
-
-    static const uint8_t op_size[] = {
-        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
-        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
-        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
-        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
-        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
-        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
-        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
-        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
-        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
-        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
-        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
-        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
-        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
-        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
-        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
-        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
-    };
-
-    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
-    if ( rc )
-        return rc;
-
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    offset = offsetof(struct xen_dm_op, u);
-
-    rc = -EFAULT;
-    if ( op_args->buf[0].size < offset )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
-        goto out;
-
-    if ( op.op >= ARRAY_SIZE(op_size) )
-    {
-        rc = -EOPNOTSUPP;
-        goto out;
-    }
-
-    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
-
-    if ( op_args->buf[0].size < offset + op_size[op.op] )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
-                                op_size[op.op]) )
-        goto out;
-
-    rc = -EINVAL;
-    if ( op.pad )
-        goto out;
-
-    switch ( op.op )
-    {
-    case XEN_DMOP_create_ioreq_server:
-    {
-        struct xen_dm_op_create_ioreq_server *data =
-            &op.u.create_ioreq_server;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->pad[0] || data->pad[1] || data->pad[2] )
-            break;
-
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
-        break;
-    }
 
-    case XEN_DMOP_get_ioreq_server_info:
+    switch ( op->op )
     {
-        struct xen_dm_op_get_ioreq_server_info *data =
-            &op.u.get_ioreq_server_info;
-        const uint16_t valid_flags = XEN_DMOP_no_gfns;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->flags & ~valid_flags )
-            break;
-
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->bufioreq_gfn,
-                                       &data->bufioreq_port);
-        break;
-    }
-
-    case XEN_DMOP_map_io_range_to_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.map_io_range_to_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
-        break;
-    }
-
-    case XEN_DMOP_unmap_io_range_from_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.unmap_io_range_from_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
-        break;
-    }
-
     case XEN_DMOP_map_mem_type_to_ioreq_server:
     {
         struct xen_dm_op_map_mem_type_to_ioreq_server *data =
-            &op.u.map_mem_type_to_ioreq_server;
+            &op->u.map_mem_type_to_ioreq_server;
         unsigned long first_gfn = data->opaque;
 
-        const_op = false;
+        *const_op = false;
 
         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
@@ -523,36 +389,10 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }
 
-    case XEN_DMOP_set_ioreq_server_state:
-    {
-        const struct xen_dm_op_set_ioreq_server_state *data =
-            &op.u.set_ioreq_server_state;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
-        break;
-    }
-
-    case XEN_DMOP_destroy_ioreq_server:
-    {
-        const struct xen_dm_op_destroy_ioreq_server *data =
-            &op.u.destroy_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_destroy_ioreq_server(d, data->id);
-        break;
-    }
-
     case XEN_DMOP_track_dirty_vram:
     {
         const struct xen_dm_op_track_dirty_vram *data =
-            &op.u.track_dirty_vram;
+            &op->u.track_dirty_vram;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -568,7 +408,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_intx_level:
     {
         const struct xen_dm_op_set_pci_intx_level *data =
-            &op.u.set_pci_intx_level;
+            &op->u.set_pci_intx_level;
 
         rc = set_pci_intx_level(d, data->domain, data->bus,
                                 data->device, data->intx,
@@ -579,7 +419,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_isa_irq_level:
     {
         const struct xen_dm_op_set_isa_irq_level *data =
-            &op.u.set_isa_irq_level;
+            &op->u.set_isa_irq_level;
 
         rc = set_isa_irq_level(d, data->isa_irq, data->level);
         break;
@@ -588,7 +428,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_link_route:
     {
         const struct xen_dm_op_set_pci_link_route *data =
-            &op.u.set_pci_link_route;
+            &op->u.set_pci_link_route;
 
         rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
         break;
@@ -597,19 +437,19 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_modified_memory:
     {
         struct xen_dm_op_modified_memory *data =
-            &op.u.modified_memory;
+            &op->u.modified_memory;
 
         rc = modified_memory(d, op_args, data);
-        const_op = !rc;
+        *const_op = !rc;
         break;
     }
 
     case XEN_DMOP_set_mem_type:
     {
         struct xen_dm_op_set_mem_type *data =
-            &op.u.set_mem_type;
+            &op->u.set_mem_type;
 
-        const_op = false;
+        *const_op = false;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -622,7 +462,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_event:
     {
         const struct xen_dm_op_inject_event *data =
-            &op.u.inject_event;
+            &op->u.inject_event;
 
         rc = -EINVAL;
         if ( data->pad0 || data->pad1 )
@@ -635,7 +475,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_msi:
     {
         const struct xen_dm_op_inject_msi *data =
-            &op.u.inject_msi;
+            &op->u.inject_msi;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -648,7 +488,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_remote_shutdown:
     {
         const struct xen_dm_op_remote_shutdown *data =
-            &op.u.remote_shutdown;
+            &op->u.remote_shutdown;
 
         domain_shutdown(d, data->reason);
         rc = 0;
@@ -657,7 +497,7 @@ static int dm_op(const struct dmop_args *op_args)
 
     case XEN_DMOP_relocate_memory:
     {
-        struct xen_dm_op_relocate_memory *data = &op.u.relocate_memory;
+        struct xen_dm_op_relocate_memory *data = &op->u.relocate_memory;
         struct xen_add_to_physmap xatp = {
             .domid = op_args->domid,
             .size = data->size,
@@ -680,7 +520,7 @@ static int dm_op(const struct dmop_args *op_args)
             data->size -= rc;
             data->src_gfn += rc;
             data->dst_gfn += rc;
-            const_op = false;
+            *const_op = false;
             rc = -ERESTART;
         }
         break;
@@ -689,7 +529,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_pin_memory_cacheattr:
     {
         const struct xen_dm_op_pin_memory_cacheattr *data =
-            &op.u.pin_memory_cacheattr;
+            &op->u.pin_memory_cacheattr;
 
         if ( data->pad )
         {
@@ -707,97 +547,6 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }
 
-    if ( (!rc || rc == -ERESTART) &&
-         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
-                                           (void *)&op.u, op_size[op.op]) )
-        rc = -EFAULT;
-
- out:
-    rcu_unlock_domain(d);
-
-    return rc;
-}
-
-#include <compat/hvm/dm_op.h>
-
-CHECK_dm_op_create_ioreq_server;
-CHECK_dm_op_get_ioreq_server_info;
-CHECK_dm_op_ioreq_server_range;
-CHECK_dm_op_set_ioreq_server_state;
-CHECK_dm_op_destroy_ioreq_server;
-CHECK_dm_op_track_dirty_vram;
-CHECK_dm_op_set_pci_intx_level;
-CHECK_dm_op_set_isa_irq_level;
-CHECK_dm_op_set_pci_link_route;
-CHECK_dm_op_modified_memory;
-CHECK_dm_op_set_mem_type;
-CHECK_dm_op_inject_event;
-CHECK_dm_op_inject_msi;
-CHECK_dm_op_map_mem_type_to_ioreq_server;
-CHECK_dm_op_remote_shutdown;
-CHECK_dm_op_relocate_memory;
-CHECK_dm_op_pin_memory_cacheattr;
-
-int compat_dm_op(domid_t domid,
-                 unsigned int nr_bufs,
-                 XEN_GUEST_HANDLE_PARAM(void) bufs)
-{
-    struct dmop_args args;
-    unsigned int i;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    for ( i = 0; i < args.nr_bufs; i++ )
-    {
-        struct compat_dm_op_buf cmp;
-
-        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
-            return -EFAULT;
-
-#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
-        guest_from_compat_handle((_d_)->h, (_s_)->h)
-
-        XLAT_dm_op_buf(&args.buf[i], &cmp);
-
-#undef XLAT_dm_op_buf_HNDL_h
-    }
-
-    rc = dm_op(&args);
-
-    if ( rc == -ERESTART )
-        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
-                                           domid, nr_bufs, bufs);
-
-    return rc;
-}
-
-long do_dm_op(domid_t domid,
-              unsigned int nr_bufs,
-              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
-{
-    struct dmop_args args;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
-        return -EFAULT;
-
-    rc = dm_op(&args);
-
-    if ( rc == -ERESTART )
-        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
-                                           domid, nr_bufs, bufs);
-
     return rc;
 }
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index c0e91c4..460f214 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -6,6 +6,7 @@ obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += event_2l.o
 obj-y += event_channel.o
diff --git a/xen/common/dm.c b/xen/common/dm.c
new file mode 100644
index 0000000..36e01a2
--- /dev/null
+++ b/xen/common/dm.c
@@ -0,0 +1,291 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/guest_access.h>
+#include <xen/dm.h>
+#include <xen/hypercall.h>
+#include <xen/ioreq.h>
+#include <xen/nospec.h>
+
+static int dm_op(const struct dmop_args *op_args)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+    bool const_op = true;
+    const size_t offset = offsetof(struct xen_dm_op, u);
+
+    static const uint8_t op_size[] = {
+        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
+        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
+        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
+        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
+        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
+        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
+        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
+        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
+        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
+        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
+        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
+        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
+        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
+        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
+        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
+        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+    };
+
+    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = -EFAULT;
+    if ( op_args->buf[0].size < offset )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
+        goto out;
+
+    if ( op.op >= ARRAY_SIZE(op_size) )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
+
+    if ( op_args->buf[0].size < offset + op_size[op.op] )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
+                                op_size[op.op]) )
+        goto out;
+
+    rc = -EINVAL;
+    if ( op.pad )
+        goto out;
+
+    switch ( op.op )
+    {
+    case XEN_DMOP_create_ioreq_server:
+    {
+        struct xen_dm_op_create_ioreq_server *data =
+            &op.u.create_ioreq_server;
+
+        const_op = false;
+
+        rc = -EINVAL;
+        if ( data->pad[0] || data->pad[1] || data->pad[2] )
+            break;
+
+        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
+                                     &data->id);
+        break;
+    }
+
+    case XEN_DMOP_get_ioreq_server_info:
+    {
+        struct xen_dm_op_get_ioreq_server_info *data =
+            &op.u.get_ioreq_server_info;
+        const uint16_t valid_flags = XEN_DMOP_no_gfns;
+
+        const_op = false;
+
+        rc = -EINVAL;
+        if ( data->flags & ~valid_flags )
+            break;
+
+        rc = hvm_get_ioreq_server_info(d, data->id,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : (unsigned long *)&data->ioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : (unsigned long *)&data->bufioreq_gfn,
+                                       &data->bufioreq_port);
+        break;
+    }
+
+    case XEN_DMOP_map_io_range_to_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.map_io_range_to_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_unmap_io_range_from_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.unmap_io_range_from_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                                  data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_set_ioreq_server_state:
+    {
+        const struct xen_dm_op_set_ioreq_server_state *data =
+            &op.u.set_ioreq_server_state;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        break;
+    }
+
+    case XEN_DMOP_destroy_ioreq_server:
+    {
+        const struct xen_dm_op_destroy_ioreq_server *data =
+            &op.u.destroy_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_destroy_ioreq_server(d, data->id);
+        break;
+    }
+
+    default:
+        rc = arch_dm_op(&op, d, op_args, &const_op);
+    }
+
+    if ( (!rc || rc == -ERESTART) &&
+         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
+                                           (void *)&op.u, op_size[op.op]) )
+        rc = -EFAULT;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+#ifdef CONFIG_COMPAT
+#include <compat/hvm/dm_op.h>
+
+CHECK_dm_op_create_ioreq_server;
+CHECK_dm_op_get_ioreq_server_info;
+CHECK_dm_op_ioreq_server_range;
+CHECK_dm_op_set_ioreq_server_state;
+CHECK_dm_op_destroy_ioreq_server;
+CHECK_dm_op_track_dirty_vram;
+CHECK_dm_op_set_pci_intx_level;
+CHECK_dm_op_set_isa_irq_level;
+CHECK_dm_op_set_pci_link_route;
+CHECK_dm_op_modified_memory;
+CHECK_dm_op_set_mem_type;
+CHECK_dm_op_inject_event;
+CHECK_dm_op_inject_msi;
+CHECK_dm_op_map_mem_type_to_ioreq_server;
+CHECK_dm_op_remote_shutdown;
+CHECK_dm_op_relocate_memory;
+CHECK_dm_op_pin_memory_cacheattr;
+
+int compat_dm_op(domid_t domid,
+                 unsigned int nr_bufs,
+                 XEN_GUEST_HANDLE_PARAM(void) bufs)
+{
+    struct dmop_args args;
+    unsigned int i;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    for ( i = 0; i < args.nr_bufs; i++ )
+    {
+        struct compat_dm_op_buf cmp;
+
+        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
+            return -EFAULT;
+
+#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
+        guest_from_compat_handle((_d_)->h, (_s_)->h)
+
+        XLAT_dm_op_buf(&args.buf[i], &cmp);
+
+#undef XLAT_dm_op_buf_HNDL_h
+    }
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+#endif
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct dmop_args args;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
+        return -EFAULT;
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/dm.h b/xen/include/xen/dm.h
new file mode 100644
index 0000000..ef15edf
--- /dev/null
+++ b/xen/include/xen/dm.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_DM_H__
+#define __XEN_DM_H__
+
+#include <xen/sched.h>
+
+struct dmop_args {
+    domid_t domid;
+    unsigned int nr_bufs;
+    /* Reserve enough buf elements for all current hypercalls. */
+    struct xen_dm_op_buf buf[2];
+};
+
+int arch_dm_op(struct xen_dm_op *op,
+               struct domain *d,
+               const struct dmop_args *op_args,
+               bool *const_op);
+
+#endif /* __XEN_DM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 7ae3c40..5c61d8e 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -707,14 +707,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
     }
 }
 
+#endif /* CONFIG_X86 */
+
 static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-#endif /* CONFIG_X86 */
-
 #ifdef CONFIG_ARGO
 static XSM_INLINE int xsm_argo_enable(const struct domain *d)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 7bd03d8..91ecff4 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -176,8 +176,8 @@ struct xsm_operations {
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*pmu_op) (struct domain *d, unsigned int op);
-    int (*dm_op) (struct domain *d);
 #endif
+    int (*dm_op) (struct domain *d);
     int (*xen_version) (uint32_t cmd);
     int (*domain_resource_map) (struct domain *d);
 #ifdef CONFIG_ARGO
@@ -682,13 +682,13 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int
     return xsm_ops->pmu_op(d, op);
 }
 
+#endif /* CONFIG_X86 */
+
 static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
 {
     return xsm_ops->dm_op(d);
 }
 
-#endif /* CONFIG_X86 */
-
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
 {
     return xsm_ops->xen_version(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 9e09512..8bdffe7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -147,8 +147,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, ioport_permission);
     set_to_dummy_if_null(ops, ioport_mapping);
     set_to_dummy_if_null(ops, pmu_op);
-    set_to_dummy_if_null(ops, dm_op);
 #endif
+    set_to_dummy_if_null(ops, dm_op);
     set_to_dummy_if_null(ops, xen_version);
     set_to_dummy_if_null(ops, domain_resource_map);
 #ifdef CONFIG_ARGO
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 19b0d9e..11784d7 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1656,14 +1656,13 @@ static int flask_pmu_op (struct domain *d, unsigned int op)
         return -EPERM;
     }
 }
+#endif /* CONFIG_X86 */
 
 static int flask_dm_op(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_HVM, HVM__DM);
 }
 
-#endif /* CONFIG_X86 */
-
 static int flask_xen_version (uint32_t op)
 {
     u32 dsid = domain_sid(current->domain);
@@ -1865,8 +1864,8 @@ static struct xsm_operations flask_ops = {
     .ioport_permission = flask_ioport_permission,
     .ioport_mapping = flask_ioport_mapping,
     .pmu_op = flask_pmu_op,
-    .dm_op = flask_dm_op,
 #endif
+    .dm_op = flask_dm_op,
     .xen_version = flask_xen_version,
     .domain_resource_map = flask_domain_resource_map,
 #ifdef CONFIG_ARGO
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:36:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:36:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40863.73860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgXK-0001nn-OB; Mon, 30 Nov 2020 10:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40863.73860; Mon, 30 Nov 2020 10:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgXK-0001ng-Ks; Mon, 30 Nov 2020 10:36:02 +0000
Received: by outflank-mailman (input) for mailman id 40863;
 Mon, 30 Nov 2020 10:36:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjgXJ-0001nY-CK; Mon, 30 Nov 2020 10:36:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjgXJ-0001RQ-97; Mon, 30 Nov 2020 10:36:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjgXJ-0004Cm-0B; Mon, 30 Nov 2020 10:36:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjgXI-0007hu-W0; Mon, 30 Nov 2020 10:36:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8O71Y70zQclewrxCglZZ7J8boJp0UScJw1GcYmrm/Tg=; b=mxvh90PPM7TqPQkv0q1ScJuwrK
	XWpPepyFJgp19UWzqkwipPKPL0Zr46xiWTw4513ueI+oiWJyLeJg7MMZwECfJscTTDCjYihtyVrkc
	8xksA/AZELQ5u4FwaRA72pvQJE8aKwMlxtnWzR+lCz9OLTlbW3dKoBnRaoONOYdgNNGg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157104-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 157104: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8501bb0c05ad9dd7ef6504803678866b1d23f6ab
X-Osstest-Versions-That:
    ovmf=f69a2b9a42029bcbcf88d074425ebe63495b0a08
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 10:36:00 +0000

flight 157104 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157104/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8501bb0c05ad9dd7ef6504803678866b1d23f6ab
baseline version:
 ovmf                 f69a2b9a42029bcbcf88d074425ebe63495b0a08

Last test of basis   157060  2020-11-27 19:10:46 Z    2 days
Testing same since   157104  2020-11-30 03:09:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chen, Christine <Yuwei.Chen@intel.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f69a2b9a42..8501bb0c05  8501bb0c05ad9dd7ef6504803678866b1d23f6ab -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:36:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40865.73876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgXT-0001rK-1E; Mon, 30 Nov 2020 10:36:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40865.73876; Mon, 30 Nov 2020 10:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgXS-0001rD-UL; Mon, 30 Nov 2020 10:36:10 +0000
Received: by outflank-mailman (input) for mailman id 40865;
 Mon, 30 Nov 2020 10:36:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tpvk=FE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kjgXR-0001qk-81
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:36:09 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c33855f-a6d8-433f-8c86-0789e278baae;
 Mon, 30 Nov 2020 10:36:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c33855f-a6d8-433f-8c86-0789e278baae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606732568;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=xYuuqM4xf95GM/STRHtKgAhRRJqwGwniaSpMW1mPr+c=;
  b=I5eYtfomRssSz/S7+0cukW0nfwG8N7JPCX/87WGflksLxydKe9alJ3tG
   wfSHxXu8p2XMz/QO9H2tUdeqim6SvSKwHuEK00P7W1UEmHlAcIfpB1J/C
   FNWM6lIY0TPxkYTtqBUSbITK7bhMe1M390ZRaJL7LkM4ziE8uNAS8gasW
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: None
X-MesageID: 32488046
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,381,1599537600"; 
   d="scan'208";a="32488046"
Subject: Re: [PATCH 0/7] xen/arm: Emulate ID registers
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>
 <45b8aac3-75a6-670f-d6f2-b427c497ee2d@citrix.com>
 <1BAAADF6-9E29-4BE5-857D-A8B51EB80712@arm.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e119d6ff-dc61-0fd7-6da5-3e4e1b51839c@citrix.com>
Date: Mon, 30 Nov 2020 10:36:00 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1BAAADF6-9E29-4BE5-857D-A8B51EB80712@arm.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 30/11/2020 10:20, Bertrand Marquis wrote:
> Hi Andrew,
>
>> On 27 Nov 2020, at 20:07, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>
>> On 26/11/2020 15:51, Bertrand Marquis wrote:
>>> The goal of this series is to emulate coprocessor ID registers so that
>>> Xen only publishes to guests the features that are supported by Xen and
>>> can actually be used by guests.
>>> One practical example where this is required is SVE support, which is
>>> forbidden by Xen as it is not supported; but if Linux is compiled with
>>> it, it will crash on boot. Another one is AMU, which is also forbidden
>>> by Xen, but a Linux kernel compiled with it would crash if the platform
>>> supports it.
>>>
>>> To be able to emulate the coprocessor registers defining what features
>>> are supported by the hardware, the TID3 bit of HCR must be disabled and
>>> Xen must emulate the values of those registers when an exception is
>>> caught while a guest is accessing them.
>>>
>>> This series first creates a guest cpuinfo structure which will
>>> contain the values that we want to publish to the guests, and then
>>> provides the proper emulation for those registers when Xen gets
>>> an exception due to an access to any of those registers.
>>>
>>> This is a first simple implementation to solve the problem; the way
>>> we define the values that we provide to guests, and which features
>>> are disabled, will be enhanced in a future patchset so that we can
>>> decide per guest what can be used or not, and from this deduce the
>>> bits to activate in HCR and the values that we must publish in the
>>> ID registers.
>>>
>>> Bertrand Marquis (7):
>>>  xen/arm: Add ID registers and complete cpufinfo
>>>  xen/arm: Add arm64 ID registers definitions
>>>  xen/arm: create a cpuinfo structure for guest
>>>  xen/arm: Add handler for ID registers on arm64
>>>  xen/arm: Add handler for cp15 ID registers
>>>  xen/arm: Add CP10 exception support to handle VMFR
>>>  xen/arm: Activate TID3 in HCR_EL2
>> CI found an ARM randconfig failure against this series.
>>
>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/221798884
>>
>> I have to admit that I can't spot an obvious connection, so it might be
>> collateral damage from elsewhere, but it does need looking at irrespective.
> That is absolutely right; there is a bug in my code and I will send a v2 to fix it.
>
> Very nice finding; I am wondering why my tests did not point this out.

It's randconfig, so every time the test runs, it picks a new random
Kconfig configuration.

Sadly, it is non-deterministic, and the failure is not necessarily the
fault of the change the test ran against.  We're probably going to have
to tweak how we run these tests before the CI goes too much further.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:39:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:39:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40883.73888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgah-00027k-Hq; Mon, 30 Nov 2020 10:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40883.73888; Mon, 30 Nov 2020 10:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgah-00027d-EW; Mon, 30 Nov 2020 10:39:31 +0000
Received: by outflank-mailman (input) for mailman id 40883;
 Mon, 30 Nov 2020 10:39:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KbLm=FE=bitdefender.com=aisaila@srs-us1.protection.inumbo.net>)
 id 1kjgag-00027Y-5t
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:39:30 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.123]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8a74955-bcb6-4a05-be71-64c76c5f235a;
 Mon, 30 Nov 2020 10:39:28 +0000 (UTC)
Received: from DB8PR02MB5740.eurprd02.prod.outlook.com (2603:10a6:10:eb::10)
 by DB3PR0202MB3484.eurprd02.prod.outlook.com (2603:10a6:8:2::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3564.25; Mon, 30 Nov
 2020 10:39:25 +0000
Received: from DB8PR02MB5740.eurprd02.prod.outlook.com
 ([fe80::258a:bf9f:fc0f:5381]) by DB8PR02MB5740.eurprd02.prod.outlook.com
 ([fe80::258a:bf9f:fc0f:5381%5]) with mapi id 15.20.3611.020; Mon, 30 Nov 2020
 10:39:25 +0000
Received: from [192.168.1.109] (86.127.52.143) by
 VI1PR06CA0203.eurprd06.prod.outlook.com (2603:10a6:802:2c::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.22 via Frontend Transport; Mon, 30 Nov 2020 10:39:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8a74955-bcb6-4a05-be71-64c76c5f235a
Authentication-Results: bitdefender.com; dkim=none (message not signed)
 header.d=none;bitdefender.com; dmarc=none action=none
 header.from=bitdefender.com;
Subject: Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with
 per-channel lock held
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <lengyelt@ainfosec.com>,
 Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>
References: <9d7a052a-6222-80ff-cbf1-612d4ca50c2a@suse.com>
 <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
From: Isaila Alexandru <aisaila@bitdefender.com>
Organization: BD
Message-ID: <0c51ad57-0d40-0fa9-7992-d747fc31b441@bitdefender.com>
Date: Mon, 30 Nov 2020 12:39:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
In-Reply-To: <d821c715-966a-b48b-a877-c5dac36822f0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [86.127.52.143]
X-ClientProxiedBy: VI1PR06CA0203.eurprd06.prod.outlook.com
 (2603:10a6:802:2c::24) To DB8PR02MB5740.eurprd02.prod.outlook.com
 (2603:10a6:10:eb::10)
MIME-Version: 1.0

On 23.11.2020 15:30, Jan Beulich wrote:
> While there don't look to be any problems with this right now, the lock
> order implications from holding the lock can be very difficult to follow
> (and may be easy to violate unknowingly). The present callbacks don't
> (and no such callback should) have any need for the lock to be held.
> 
> However, vm_event_disable() frees the structures used by respective
> callbacks and isn't otherwise synchronized with invocations of these
> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> to wait to drop to zero before freeing the port (and dropping the lock).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com>
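
The accounting described above can be sketched outside Xen as a small
stand-alone example: C11 atomics stand in for Xen's atomic_t, the
event-channel lock is elided, and all names here are made up for
illustration (this is not the Xen code itself).

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative channel with an in-flight-call counter. */
struct chan {
    atomic_int active_calls;
};

typedef void (*notify_fn)(void *arg);

/*
 * Sender side: bump the counter around the callback so a concurrent
 * close can tell a call is still in flight (the real code drops the
 * event-channel lock before invoking the callback).
 */
static void send_notify(struct chan *ch, notify_fn fn, void *arg)
{
    atomic_fetch_add(&ch->active_calls, 1);
    fn(arg);                        /* runs without the lock held */
    atomic_fetch_sub(&ch->active_calls, 1);
}

/* Closer side: wait for in-flight callbacks to drain before freeing. */
static void drain_calls(struct chan *ch)
{
    while ( atomic_load(&ch->active_calls) )
        ;                           /* cpu_relax() in the real code */
}

static int demo_hits;
static void demo_cb(void *arg) { (void)arg; demo_hits++; }

/*
 * Single-threaded demo: one notification, then a drain that must see
 * the counter back at zero.
 */
static int demo_run(void)
{
    struct chan ch;

    atomic_init(&ch.active_calls, 0);
    send_notify(&ch, demo_cb, NULL);
    drain_calls(&ch);
    return demo_hits + atomic_load(&ch.active_calls);
}
```

In the hypervisor the drain loop runs under the per-channel lock on the
close path, which is exactly why the callback must not be invoked with
that lock held.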

> ---
> Should we make this accounting optional, to be requested through a new
> parameter to alloc_unbound_xen_event_channel(), or derived from other
> than the default callback being requested?
> ---
> v3: Drain callbacks before proceeding with closing. Re-base.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -397,6 +397,7 @@ static long evtchn_bind_interdomain(evtc
>       
>       rchn->u.interdomain.remote_dom  = ld;
>       rchn->u.interdomain.remote_port = lport;
> +    atomic_set(&rchn->u.interdomain.active_calls, 0);
>       rchn->state                     = ECS_INTERDOMAIN;
>   
>       /*
> @@ -720,6 +721,10 @@ int evtchn_close(struct domain *d1, int
>   
>           double_evtchn_lock(chn1, chn2);
>   
> +        if ( consumer_is_xen(chn1) )
> +            while ( atomic_read(&chn1->u.interdomain.active_calls) )
> +                cpu_relax();
> +
>           evtchn_free(d1, chn1);
>   
>           chn2->state = ECS_UNBOUND;
> @@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
>           rport = lchn->u.interdomain.remote_port;
>           rchn  = evtchn_from_port(rd, rport);
>           if ( consumer_is_xen(rchn) )
> +        {
> +            /* Don't keep holding the lock for the call below. */
> +            atomic_inc(&rchn->u.interdomain.active_calls);
> +            evtchn_read_unlock(lchn);
>               xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
> -        else
> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
> +            atomic_dec(&rchn->u.interdomain.active_calls);
> +            return 0;
> +        }
> +        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>           break;
>       case ECS_IPI:
>           evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -104,6 +104,7 @@ struct evtchn
>           } unbound;     /* state == ECS_UNBOUND */
>           struct {
>               evtchn_port_t  remote_port;
> +            atomic_t       active_calls;
>               struct domain *remote_dom;
>           } interdomain; /* state == ECS_INTERDOMAIN */
>           struct {
> 
> 


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:40:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40890.73900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgbu-0002v5-TE; Mon, 30 Nov 2020 10:40:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40890.73900; Mon, 30 Nov 2020 10:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgbu-0002uy-Q1; Mon, 30 Nov 2020 10:40:46 +0000
Received: by outflank-mailman (input) for mailman id 40890;
 Mon, 30 Nov 2020 10:40:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUx-0000Uu-EL
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:35 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd410a64-db72-4953-8a83-8438d237a04e;
 Mon, 30 Nov 2020 10:32:19 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id y7so17019212lji.8
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:19 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.17
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:17 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd410a64-db72-4953-8a83-8438d237a04e
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
Date: Mon, 30 Nov 2020 12:31:34 +0200
Message-Id: <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

In order to avoid code duplication (both handle_read() and
handle_ioserv() contain the same code for the sign-extension),
move this code into a common helper used by both.
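
The helper's behaviour can be sketched portably, outside Xen's headers:
register_t is modelled as uint64_t and the two dabt fields are passed
explicitly (size_exp is dabt.size, i.e. bytes accessed = 1 << size_exp).
The function and type names here are illustrative, not Xen's.

```c
#include <stdint.h>

typedef uint64_t reg_t;   /* stand-in for register_t on a 64-bit build */

static reg_t sign_extend_sketch(unsigned int size_exp, unsigned int sign,
                                reg_t r)
{
    unsigned int bits = (1u << size_exp) * 8;

    /*
     * Sign extend if required. As in the Xen code, the read handler is
     * expected to have zeroed the bits outside the access size; the
     * bits < 64 guard avoids an out-of-range shift for full-width reads.
     */
    if ( sign && bits < 64 && (r & ((reg_t)1 << (bits - 1))) )
        r |= ~(reg_t)0 << bits;

    return r;
}
```

For example, a signed byte read of 0x80 extends to all-ones above bit 7,
while an unsigned read of the same value is left untouched.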

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - no changes
---
---
 xen/arch/arm/io.c           | 18 ++----------------
 xen/arch/arm/ioreq.c        | 17 +----------------
 xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index f44cfd4..8d6ec6c 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include <asm/cpuerrata.h>
 #include <asm/current.h>
 #include <asm/mmio.h>
+#include <asm/traps.h>
 #include <asm/hvm/ioreq.h>
 
 #include "decode.h"
@@ -39,26 +40,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
      * setting r).
      */
     register_t r = 0;
-    uint8_t size = (1 << dabt.size) * 8;
 
     if ( !handler->ops->read(v, info, &r, handler->priv) )
         return IO_ABORT;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index f08190c..2f39289 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     const union hsr hsr = { .bits = regs->hsr };
     const struct hsr_dabt dabt = hsr.dabt;
     /* Code is similar to handle_read */
-    uint8_t size = (1 << dabt.size) * 8;
     register_t r = v->io.req.data;
 
     /* We are done with the IO */
@@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     if ( dabt.write )
         return IO_HANDLED;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c378..e301c44 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
         (unsigned long)abort_guest_exit_end == regs->pc;
 }
 
+/* Check whether the sign extension is required and perform it */
+static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
+{
+    uint8_t size = (1 << dabt.size) * 8;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    return r;
+}
+
 #endif /* __ASM_ARM_TRAPS__ */
 /*
  * Local variables:
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:42:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:42:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40897.73919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgdS-000369-LK; Mon, 30 Nov 2020 10:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40897.73919; Mon, 30 Nov 2020 10:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgdS-00035r-E2; Mon, 30 Nov 2020 10:42:22 +0000
Received: by outflank-mailman (input) for mailman id 40897;
 Mon, 30 Nov 2020 10:42:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgV2-0000Uu-EZ
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:40 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34b8d9a9-44f7-4625-b302-f3659906a420;
 Mon, 30 Nov 2020 10:32:21 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id s30so20589388lfc.4
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:20 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.18
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:18 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34b8d9a9-44f7-4625-b302-f3659906a420
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 20/23] xen/ioreq: Make x86's send_invalidate_req() common
Date: Mon, 30 Nov 2020 12:31:35 +0200
Message-Id: <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

As IOREQ is a common feature now, and we also need to invalidate
the qemu/demu mapcache on Arm when the required condition occurs,
this patch moves this function to the common code (and renames it
to ioreq_signal_mapcache_invalidate).
This patch also moves the per-domain qemu_mapcache_invalidate
variable out of the arch sub-struct (and drops the "qemu" prefix).

The subsequent patch will add mapcache invalidation handling on Arm.
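
The flag protocol used on the hypercall path can be sketched in
isolation: a plain read guarded by unlikely() keeps the common path
cheap, and an atomic exchange ensures only one caller consumes the
flag. Names here are made up; signal_invalidate() merely stands in for
ioreq_signal_mapcache_invalidate().

```c
#include <stdatomic.h>
#include <stdbool.h>

static int signals_sent;

/* Stand-in for the broadcast of IOREQ_TYPE_INVALIDATE. */
static void signal_invalidate(void)
{
    signals_sent++;
}

static void maybe_signal(atomic_bool *flag)
{
    /* Cheap check first; the exchange is the authoritative test-and-clear. */
    if ( atomic_load(flag) && atomic_exchange(flag, false) )
        signal_invalidate();
}
```

Calling maybe_signal() twice on a set flag sends exactly one
invalidation and leaves the flag cleared, which mirrors the
test_and_clear_bool() usage in hvm_hypercall().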

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct,
     update checks
   - remove #if defined(CONFIG_ARM64) from the common code

Changes V1 -> V2:
   - was split into:
     - xen/ioreq: Make x86's send_invalidate_req() common
     - xen/arm: Add mapcache invalidation handling
   - update patch description/subject
   - move Arm bits to a separate patch
   - don't alter the common code, the flag is set by arch code
   - rename send_invalidate_req() to send_invalidate_ioreq()
   - guard qemu_mapcache_invalidate with CONFIG_IOREQ_SERVER
   - use bool instead of bool_t
   - remove blank line between head comment and #include-s

Changes V2 -> V3:
   - update patch description
   - drop "qemu" prefix from the variable name
   - rename send_invalidate_req() to ioreq_signal_mapcache_invalidate()
---
---
 xen/arch/x86/hvm/hypercall.c     |  9 +++++----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  1 +
 xen/include/xen/sched.h          |  2 ++
 7 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index ac573c8..6d41c56 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -20,6 +20,7 @@
  */
 #include <xen/lib.h>
 #include <xen/hypercall.h>
+#include <xen/ioreq.h>
 #include <xen/nospec.h>
 
 #include <asm/hvm/emulate.h>
@@ -47,7 +48,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);
 
     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
+        curr->domain->mapcache_invalidate = true;
 
     return rc;
 }
@@ -326,9 +327,9 @@ int hvm_hypercall(struct cpu_user_regs *regs)
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
-        send_invalidate_req();
+    if ( unlikely(currd->mapcache_invalidate) &&
+         test_and_clear_bool(currd->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
 
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a0dd8d1..ba77414 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( ioreq_broadcast(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index f35dcf9..61ba761 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,20 @@
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
+/* Ask ioemu mapcache to invalidate mappings. */
+void ioreq_signal_mapcache_invalidate(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( ioreq_broadcast(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index b8be1ad..cf959f6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -122,7 +122,6 @@ struct hvm_domain {
 
     struct viridian_domain *viridian;
 
-    bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
 
     /*
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index fb64294..3da0136 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);
 
 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 2289e79..482f76f 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -129,6 +129,7 @@ struct ioreq_server *ioreq_server_select(struct domain *d,
 int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
                bool buffered);
 unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+void ioreq_signal_mapcache_invalidate(void);
 
 void ioreq_domain_init(struct domain *d);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2277995..60bf254 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -552,6 +552,8 @@ struct domain
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
         unsigned int            nr_servers;
     } ioreq_server;
+
+    bool mapcache_invalidate;
 #endif
 };
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:42:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:42:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40896.73912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgdS-00035b-8T; Mon, 30 Nov 2020 10:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40896.73912; Mon, 30 Nov 2020 10:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgdS-00035F-5P; Mon, 30 Nov 2020 10:42:22 +0000
Received: by outflank-mailman (input) for mailman id 40896;
 Mon, 30 Nov 2020 10:42:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUs-0000Uu-E5
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:30 +0000
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5285bf49-6a37-4c47-b49e-5de333c5ec0c;
 Mon, 30 Nov 2020 10:32:18 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id r18so17057209ljc.2
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:18 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.16
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:16 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5285bf49-6a37-4c47-b49e-5de333c5ec0c
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 18/23] xen/dm: Introduce xendevicemodel_set_irq_level DM op
Date: Mon, 30 Nov 2020 12:31:33 +0200
Message-Id: <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

This patch adds the ability for the device emulator to notify the
other end (some entity running in the guest) using an SPI, and
implements the Arm-specific bits for it. The proposed interface
allows the emulator to set the logical level of one of a domain's
IRQ lines.

We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level)
to inject an interrupt, as the "isa_irq" field is only 8 bits wide
and can only cover IRQs 0 - 255, whereas we need a wider range (0 - 1020).
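
The kind of argument checking the changelog describes can be sketched as
follows: a 32-bit irq field (vs. the 8-bit isa_irq), explicit padding
that must be zero, and an SPI-only range check. The struct layout and
constant names below are illustrative, not the actual
xen_dm_op_set_irq_level ABI.

```c
#include <stdbool.h>
#include <stdint.h>

struct set_irq_level {
    uint32_t irq;
    uint8_t  level;    /* logical line level: 0 or 1 */
    uint8_t  pad[3];   /* must be zero */
};

#define SPI_FIRST 32u    /* SPIs occupy INTIDs 32..1019 on the GIC */
#define SPI_LAST  1019u

static bool set_irq_level_valid(const struct set_irq_level *op)
{
    if ( op->pad[0] || op->pad[1] || op->pad[2] )
        return false;              /* reject non-zero padding */
    if ( op->level > 1 )
        return false;
    return op->irq >= SPI_FIRST && op->irq <= SPI_LAST;
}
```

Rejecting non-zero padding keeps those bytes available for future
extension of the op without ambiguity about what old callers passed.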

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, I left the interface untouched since there is still
an open discussion about what interface to use/what information to
pass to the hypervisor. The open question is whether we should
abstract away the state of the line or not.
***

Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

Changes V1 -> V2:
   - update the author of the patch
   - update the patch description
   - check that the padding is always 0
   - mention that the interface is Arm-only and that only SPIs are
     supported for now
   - allow setting the logical level of a line for non-allocated
     interrupts only
   - add xen_dm_op_set_irq_level_t

Changes V2 -> V3:
   - no changes
---
---
 tools/include/xendevicemodel.h               |  4 ++
 tools/libs/devicemodel/core.c                | 18 +++++++++
 tools/libs/devicemodel/libxendevicemodel.map |  1 +
 xen/arch/arm/dm.c                            | 57 +++++++++++++++++++++++++++-
 xen/common/dm.c                              |  1 +
 xen/include/public/hvm/dm_op.h               | 16 ++++++++
 6 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/include/xendevicemodel.h
+++ b/tools/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to a an IRQ line.
  *
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
 	global:
 		xendevicemodel_relocate_memory;
 		xendevicemodel_pin_memory_cacheattr;
+		xendevicemodel_set_irq_level;
 } VERS_1.1;
 
 VERS_1.3 {
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 5d3da37..e4bb233 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -17,10 +17,65 @@
 #include <xen/dm.h>
 #include <xen/hypercall.h>
 
+#include <asm/vgic.h>
+
 int arch_dm_op(struct xen_dm_op *op, struct domain *d,
                const struct dmop_args *op_args, bool *const_op)
 {
-    return -EOPNOTSUPP;
+    int rc;
+
+    switch ( op->op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op->u.set_irq_level;
+        unsigned int i;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /* Check that the padding is always 0; fail the op if not */
+        rc = -EINVAL;
+        for ( i = 0; i < sizeof(data->pad); i++ )
+        {
+            if ( data->pad[i] )
+                break;
+        }
+        if ( i < sizeof(data->pad) )
+            break;
+
+        /*
+         * Allow setting the logical level of a line only for
+         * non-allocated interrupts.
+         */
+        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
 }
 
 /*
diff --git a/xen/common/dm.c b/xen/common/dm.c
index 9d394fc..7bfb46c 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -48,6 +48,7 @@ static int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
         [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
         [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+        [XEN_DMOP_set_irq_level]                    = sizeof(struct xen_dm_op_set_irq_level),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 66cae1a..1f70d58 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
 };
 typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ *                         IRQ lines (currently Arm only).
+ * Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -447,6 +462,7 @@ struct xen_dm_op {
         xen_dm_op_track_dirty_vram_t track_dirty_vram;
         xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
         xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_irq_level_t set_irq_level;
         xen_dm_op_set_pci_link_route_t set_pci_link_route;
         xen_dm_op_modified_memory_t modified_memory;
         xen_dm_op_set_mem_type_t set_mem_type;
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:42:25 2020
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 15/23] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
Date: Mon, 30 Nov 2020 12:31:30 +0200
Message-Id: <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds proper handling of the return value of
vcpu_ioreq_handle_completion(), which involves using a loop
in leave_hypervisor_to_guest().

The reason to use an unbounded loop here is that a vCPU shouldn't
continue until its I/O has completed. In Xen's case, if an I/O
never completes, it most likely means that something went horribly
wrong with the device emulator, and it is most likely not safe to
continue. So letting the vCPU spin forever if the I/O never completes
is safer than letting it continue and leaving the guest in an unclear
state, and is the best we can do for now.

This wouldn't be an issue for Xen itself, as do_softirq() would be
called on every loop iteration. In case of failure, the guest will
crash and the vCPU will be unscheduled.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features

Changes V2 -> V3:
   - update patch description
---
---
 xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 036b13f..4cef43e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
     local_irq_enable();
-    vcpu_ioreq_handle_completion(v);
+    handled = vcpu_ioreq_handle_completion(v);
     local_irq_disable();
+
+    if ( !handled )
+        return true;
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2291,8 +2298,22 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    /*
+     * The reason to use an unbounded loop here is that a vCPU shouldn't
+     * continue until its I/O has completed. In Xen's case, if an I/O
+     * never completes, it most likely means that something went horribly
+     * wrong with the device emulator, and it is most likely not safe to
+     * continue. So letting the vCPU spin forever if the I/O never
+     * completes is safer than letting it continue and leaving the guest
+     * in an unclear state, and is the best we can do for now.
+     *
+     * This wouldn't be an issue for Xen itself, as do_softirq() would be
+     * called on every loop iteration. In case of failure, the guest will
+     * crash and the vCPU will be unscheduled.
+     */
+    do {
+        check_for_pcpu_work();
+    } while ( check_for_vcpu_work() );
 
     vgic_sync_to_lrs();
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:42:25 2020
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Mon, 30 Nov 2020 12:31:26 +0200
Message-Id: <1606732298-22107-12-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is a common feature now and these fields will be used
on Arm as-is. Move them to the common struct vcpu as part of a new
struct vcpu_io and drop the duplicated "io" prefixes. Also move
enum hvm_io_completion to xen/sched.h and remove the "hvm" prefixes.

This patch completely removes the layering violation in the common code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - update the patch according to the "legacy interface" being x86-specific
   - update the patch description
   - drop the "io" prefixes from the field names
   - wrap IO_realmode_completion
---
---
 xen/arch/x86/hvm/emulate.c        | 72 +++++++++++++++++++--------------------
 xen/arch/x86/hvm/hvm.c            |  2 +-
 xen/arch/x86/hvm/io.c             |  8 ++---
 xen/arch/x86/hvm/ioreq.c          |  4 +--
 xen/arch/x86/hvm/svm/nestedsvm.c  |  2 +-
 xen/arch/x86/hvm/vmx/realmode.c   |  6 ++--
 xen/common/ioreq.c                | 22 ++++++------
 xen/include/asm-x86/hvm/emulate.h |  2 +-
 xen/include/asm-x86/hvm/ioreq.h   |  2 +-
 xen/include/asm-x86/hvm/vcpu.h    | 11 ------
 xen/include/xen/sched.h           | 19 +++++++++++
 11 files changed, 79 insertions(+), 71 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 4746d5a..04e4994 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v)
 {
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
 
-    vio->io_req.state = STATE_IOREQ_NONE;
-    vio->io_completion = HVMIO_no_completion;
+    v->io.req.state = STATE_IOREQ_NONE;
+    v->io.completion = IO_no_completion;
     vio->mmio_cache_count = 0;
     vio->mmio_insn_bytes = 0;
     vio->mmio_access = (struct npfec){};
@@ -159,7 +159,7 @@ static int hvmemul_do_io(
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &curr->io;
     ioreq_t p = {
         .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
         .addr = addr,
@@ -184,13 +184,13 @@ static int hvmemul_do_io(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    switch ( vio->io_req.state )
+    switch ( vio->req.state )
     {
     case STATE_IOREQ_NONE:
         break;
     case STATE_IORESP_READY:
-        vio->io_req.state = STATE_IOREQ_NONE;
-        p = vio->io_req;
+        vio->req.state = STATE_IOREQ_NONE;
+        p = vio->req;
 
         /* Verify the emulation request has been correctly re-issued */
         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
@@ -238,7 +238,7 @@ static int hvmemul_do_io(
     }
     ASSERT(p.count);
 
-    vio->io_req = p;
+    vio->req = p;
 
     rc = hvm_io_intercept(&p);
 
@@ -247,12 +247,12 @@ static int hvmemul_do_io(
      * our callers and mirror this into latched state.
      */
     ASSERT(p.count <= *reps);
-    *reps = vio->io_req.count = p.count;
+    *reps = vio->req.count = p.count;
 
     switch ( rc )
     {
     case X86EMUL_OKAY:
-        vio->io_req.state = STATE_IOREQ_NONE;
+        vio->req.state = STATE_IOREQ_NONE;
         break;
     case X86EMUL_UNHANDLEABLE:
     {
@@ -305,7 +305,7 @@ static int hvmemul_do_io(
                 if ( s == NULL )
                 {
                     rc = X86EMUL_RETRY;
-                    vio->io_req.state = STATE_IOREQ_NONE;
+                    vio->req.state = STATE_IOREQ_NONE;
                     break;
                 }
 
@@ -316,7 +316,7 @@ static int hvmemul_do_io(
                 if ( dir == IOREQ_READ )
                 {
                     rc = hvm_process_io_intercept(&ioreq_server_handler, &p);
-                    vio->io_req.state = STATE_IOREQ_NONE;
+                    vio->req.state = STATE_IOREQ_NONE;
                     break;
                 }
             }
@@ -329,14 +329,14 @@ static int hvmemul_do_io(
         if ( !s )
         {
             rc = hvm_process_io_intercept(&null_handler, &p);
-            vio->io_req.state = STATE_IOREQ_NONE;
+            vio->req.state = STATE_IOREQ_NONE;
         }
         else
         {
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
-                vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( !ioreq_needs_completion(&vio->io_req) )
+                vio->req.state = STATE_IOREQ_NONE;
+            else if ( !ioreq_needs_completion(&vio->req) )
                 rc = X86EMUL_OKAY;
         }
         break;
@@ -1854,7 +1854,7 @@ static int hvmemul_rep_movs(
           * cheaper than multiple round trips through the device model. Yet
           * when processing a response we can always re-use the translation.
           */
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.req.state == STATE_IORESP_READY ||
           ((!df || *reps == 1) &&
            PAGE_SIZE - (saddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         sgpa = pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK);
@@ -1870,7 +1870,7 @@ static int hvmemul_rep_movs(
     if ( vio->mmio_access.write_access &&
          (vio->mmio_gla == (daddr & PAGE_MASK)) &&
          /* See comment above. */
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.req.state == STATE_IORESP_READY ||
           ((!df || *reps == 1) &&
            PAGE_SIZE - (daddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         dgpa = pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK);
@@ -2007,7 +2007,7 @@ static int hvmemul_rep_stos(
     if ( vio->mmio_access.write_access &&
          (vio->mmio_gla == (addr & PAGE_MASK)) &&
          /* See respective comment in MOVS processing. */
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.req.state == STATE_IORESP_READY ||
           ((!df || *reps == 1) &&
            PAGE_SIZE - (addr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK);
@@ -2613,13 +2613,13 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
 };
 
 /*
- * Note that passing HVMIO_no_completion into this function serves as kind
+ * Note that passing IO_no_completion into this function serves as kind
  * of (but not fully) an "auto select completion" indicator.  When there's
  * no completion needed, the passed in value will be ignored in any case.
  */
 static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     const struct x86_emulate_ops *ops,
-    enum hvm_io_completion completion)
+    enum io_completion completion)
 {
     const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs;
     struct vcpu *curr = current;
@@ -2634,11 +2634,11 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
      */
     if ( vio->cache->num_ents > vio->cache->max_ents )
     {
-        ASSERT(vio->io_req.state == STATE_IOREQ_NONE);
+        ASSERT(curr->io.req.state == STATE_IOREQ_NONE);
         vio->cache->num_ents = 0;
     }
     else
-        ASSERT(vio->io_req.state == STATE_IORESP_READY);
+        ASSERT(curr->io.req.state == STATE_IORESP_READY);
 
     hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn,
                               vio->mmio_insn_bytes);
@@ -2649,25 +2649,25 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;
 
-    if ( !ioreq_needs_completion(&vio->io_req) )
-        completion = HVMIO_no_completion;
-    else if ( completion == HVMIO_no_completion )
-        completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
-                      hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion
-                                                   : HVMIO_pio_completion;
+    if ( !ioreq_needs_completion(&curr->io.req) )
+        completion = IO_no_completion;
+    else if ( completion == IO_no_completion )
+        completion = (curr->io.req.type != IOREQ_TYPE_PIO ||
+                      hvmemul_ctxt->is_mem_access) ? IO_mmio_completion
+                                                   : IO_pio_completion;
 
-    switch ( vio->io_completion = completion )
+    switch ( curr->io.completion = completion )
     {
-    case HVMIO_no_completion:
-    case HVMIO_pio_completion:
+    case IO_no_completion:
+    case IO_pio_completion:
         vio->mmio_cache_count = 0;
         vio->mmio_insn_bytes = 0;
         vio->mmio_access = (struct npfec){};
         hvmemul_cache_disable(curr);
         break;
 
-    case HVMIO_mmio_completion:
-    case HVMIO_realmode_completion:
+    case IO_mmio_completion:
+    case IO_realmode_completion:
         BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
         vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
         memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
@@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
 
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion)
+    enum io_completion completion)
 {
     return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion);
 }
@@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
                           guest_cpu_user_regs());
     ctxt.ctxt.data = &mmio_ro_ctxt;
 
-    switch ( rc = _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) )
+    switch ( rc = _hvm_emulate_one(&ctxt, ops, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
@@ -2782,7 +2782,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     {
     case EMUL_KIND_NOWRITE:
         rc = _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write,
-                              HVMIO_no_completion);
+                              IO_no_completion);
         break;
     case EMUL_KIND_SET_CONTEXT_INSN: {
         struct vcpu *curr = current;
@@ -2803,7 +2803,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     /* Fall-through */
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
-        rc = hvm_emulate_one(&ctx, HVMIO_no_completion);
+        rc = hvm_emulate_one(&ctx, IO_no_completion);
     }
 
     switch ( rc )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 54e32e4..cc46909 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
         return;
     }
 
-    switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) )
+    switch ( hvm_emulate_one(&ctxt, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index b220d6b..327a6a2 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 
     hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs());
 
-    switch ( rc = hvm_emulate_one(&ctxt, HVMIO_no_completion) )
+    switch ( rc = hvm_emulate_one(&ctxt, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
         hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc);
@@ -122,7 +122,7 @@ bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
 bool handle_pio(uint16_t port, unsigned int size, int dir)
 {
     struct vcpu *curr = current;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &curr->io;
     unsigned int data;
     int rc;
 
@@ -135,8 +135,8 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
 
     rc = hvmemul_do_pio_buffer(port, size, dir, &data);
 
-    if ( ioreq_needs_completion(&vio->io_req) )
-        vio->io_completion = HVMIO_pio_completion;
+    if ( ioreq_needs_completion(&vio->req) )
+        vio->completion = IO_pio_completion;
 
     switch ( rc )
     {
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 009a95a..7808b75 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -41,11 +41,11 @@ bool ioreq_complete_mmio(void)
     return handle_mmio();
 }
 
-bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
+bool arch_vcpu_ioreq_completion(enum io_completion io_completion)
 {
     switch ( io_completion )
     {
-    case HVMIO_realmode_completion:
+    case IO_realmode_completion:
     {
         struct hvm_emulate_ctxt ctxt;
 
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index fcfccf7..6d90630 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
          * Delay the injection because this would result in delivering
          * an interrupt *within* the execution of an instruction.
          */
-        if ( v->arch.hvm.hvm_io.io_req.state != STATE_IOREQ_NONE )
+        if ( v->io.req.state != STATE_IOREQ_NONE )
             return hvm_intblk_shadow;
 
         if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v )
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index 768f01e..3033143 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)
 
     perfc_incr(realmode_emulations);
 
-    rc = hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion);
+    rc = hvm_emulate_one(hvmemul_ctxt, IO_realmode_completion);
 
     if ( rc == X86EMUL_UNHANDLEABLE )
     {
@@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
 
         vmx_realmode_emulate_one(&hvmemul_ctxt);
 
-        if ( vio->io_req.state != STATE_IOREQ_NONE || vio->mmio_retry )
+        if ( curr->io.req.state != STATE_IOREQ_NONE || vio->mmio_retry )
             break;
 
         /* Stop emulating unless our segment state is not safe */
@@ -202,7 +202,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
     }
 
     /* Need to emulate next time if we've started an IO operation */
-    if ( vio->io_req.state != STATE_IOREQ_NONE )
+    if ( curr->io.req.state != STATE_IOREQ_NONE )
         curr->arch.hvm.vmx.vmx_emulate = 1;
 
     if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmode )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index b7c2d5a..caf4543 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
         break;
     }
 
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    p = &sv->vcpu->io.req;
     if ( ioreq_needs_completion(p) )
         p->data = data;
 
@@ -171,10 +171,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &v->io;
     struct ioreq_server *s;
     struct ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
+    enum io_completion io_completion;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
     {
@@ -186,26 +186,26 @@ bool handle_hvm_io_completion(struct vcpu *v)
     if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
-    vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
+    vio->req.state = ioreq_needs_completion(&vio->req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;
 
     msix_write_completion(v);
     vcpu_end_shutdown_deferral(v);
 
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    io_completion = vio->completion;
+    vio->completion = IO_no_completion;
 
     switch ( io_completion )
     {
-    case HVMIO_no_completion:
+    case IO_no_completion:
         break;
 
-    case HVMIO_mmio_completion:
+    case IO_mmio_completion:
         return ioreq_complete_mmio();
 
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
+    case IO_pio_completion:
+        return handle_pio(vio->req.addr, vio->req.size,
+                          vio->req.dir);
 
     default:
         return arch_vcpu_ioreq_completion(io_completion);
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 1620cc7..131cdf4 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn(
     const char *descr);
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion);
+    enum io_completion completion);
 void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
     unsigned int errcode);
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 854dc77..ca3bf29 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -21,7 +21,7 @@
 
 #include <xen/ioreq.h>
 
-bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+bool arch_vcpu_ioreq_completion(enum io_completion io_completion);
 int arch_ioreq_server_map_pages(struct ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
 void arch_ioreq_server_enable(struct ioreq_server *s);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c1feda..8adf455 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -28,13 +28,6 @@
 #include <asm/mtrr.h>
 #include <public/hvm/ioreq.h>
 
-enum hvm_io_completion {
-    HVMIO_no_completion,
-    HVMIO_mmio_completion,
-    HVMIO_pio_completion,
-    HVMIO_realmode_completion
-};
-
 struct hvm_vcpu_asid {
     uint64_t generation;
     uint32_t asid;
@@ -52,10 +45,6 @@ struct hvm_mmio_cache {
 };
 
 struct hvm_vcpu_io {
-    /* I/O request in flight to device model. */
-    enum hvm_io_completion io_completion;
-    ioreq_t                io_req;
-
     /*
      * HVM emulation:
      *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 62cbcdb..8269f84 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
 struct waitqueue_vcpu;
 
+enum io_completion {
+    IO_no_completion,
+    IO_mmio_completion,
+    IO_pio_completion,
+#ifdef CONFIG_X86
+    IO_realmode_completion,
+#endif
+};
+
+struct vcpu_io {
+    /* I/O request in flight to device model. */
+    enum io_completion   completion;
+    ioreq_t              req;
+};
+
 struct vcpu
 {
     int              vcpu_id;
@@ -256,6 +271,10 @@ struct vcpu
     struct vpci_vcpu vpci;
 
     struct arch_vcpu arch;
+
+#ifdef CONFIG_IOREQ_SERVER
+    struct vcpu_io io;
+#endif
 };
 
 struct sched_unit {
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40923.73960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfJ-0003e3-6x; Mon, 30 Nov 2020 10:44:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40923.73960; Mon, 30 Nov 2020 10:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfJ-0003dw-3n; Mon, 30 Nov 2020 10:44:17 +0000
Received: by outflank-mailman (input) for mailman id 40923;
 Mon, 30 Nov 2020 10:44:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgVC-0000Uu-Er
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:50 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa0973e0-b233-481e-b031-1dc209cd97f5;
 Mon, 30 Nov 2020 10:32:22 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id u19so20611199lfr.7
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:22 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:20 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa0973e0-b233-481e-b031-1dc209cd97f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=ifPY9Uvp382IkhNwXLGsr13LZTrex5ciLBeHg/6gJ1w=;
        b=Qxzxqs0lfT5xEyU1rdEO3FpxNRG118Njr8elyaY4OYVfNW1Qf3ZQQMBqhUfMiCMhza
         XbwWPwK7m22WFM1ce//fK8bg+xoYLPCUX58AWm1UypIBRPwBxxNoMXsvU/dZQYjbk0Zt
         oJZaMv7XOdA4dJF4CpoPELB0Sc/ZB/6ymdat0jDQT+CEJRscndwYrH865UIaQTOv5XsY
         ZNrSteYSMDSsxEvuGk7grv6tO3pCkTnqumz6vg9alWNaYhCpIoYpix5t5IaUkwE90ydE
         wed3kQulCvdj8KNkXy85jHnc7CI+x19G1eu1pkev36HC6wwE8lCl9/o785g24UXzaSsp
         V6DQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=ifPY9Uvp382IkhNwXLGsr13LZTrex5ciLBeHg/6gJ1w=;
        b=E7X6V5RnkSfXQZoj+0LbcvrCAJHCeB8LcQNdiGJ67ftZspiGfgjVRoZS4wN/eViFpI
         gD1OD8kmmNB5tNzHPVlolmEaKeQf9fhiznUs4de/DEAOi8TrkkytvMnd+07IaSxYDSqw
         SQpkcO+H23+KI/gpW2T18nYQqNEm/7PHFga7QNDXB6GpWmXdPhMpQ2XLC3fCOBI/p2MC
         GkHwzy05MiJ2pwuIL42ZPqKCUtPA7LXT0nJumUsyjmIT00upqiRVS9JX+r7uaV15k9ra
         MHciV2n5hsWB7FLrbQpketBhQVGnpUamEm36sbpPjrkPY2KuupdNKtlhBdZiuqegtQAc
         L8mQ==
X-Gm-Message-State: AOAM530KwT9hkODzEfeA1+h2CvMsLl3e1keA6ZZa2mrR5iAZkI0bhY2R
	WzkF9EByBI6xsiiNNVB2SZAPWC/GF41PZw==
X-Google-Smtp-Source: ABdhPJxetDjXbIwzfcMq8cJC4D1KQaRrNniJGhlimdK3j8VjICnXrpr3lqGdTSoEjUU0QhpzApP0qg==
X-Received: by 2002:a19:952:: with SMTP id 79mr9235742lfj.559.1606732341362;
        Mon, 30 Nov 2020 02:32:21 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 22/23] libxl: Introduce basic virtio-mmio support on Arm
Date: Mon, 30 Nov 2020 12:31:37 +0200
Message-Id: <1606732298-22107-23-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

This patch creates a specific device node in the guest device-tree
with the allocated MMIO range and SPI interrupt if the 'virtio'
property is present in the domain config.
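
For reference (not part of the patch), the node generated by make_virtio_mmio_node()
with the defaults introduced below is expected to look roughly like the following
fragment. The reg/interrupts encoding assumes 2 address and size cells and GIC SPI
numbering (SPI 33 is encoded as GIC_SPI 1, edge rising, cpumask 0xf), so treat the
exact cell values as illustrative:

```dts
virtio@2000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x2000000 0x0 0x200>;
        /* GIC_SPI 1 (= SPI 33), edge rising, cpumask 0xf */
        interrupts = <0x0 0x1 0xf01>;
        dma-coherent;
};
```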

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes
---
---
 tools/libs/light/libxl_arm.c     | 58 ++++++++++++++++++++++++++++++++++++++--
 tools/libs/light/libxl_types.idl |  1 +
 tools/xl/xl_parse.c              |  1 +
 xen/include/public/arch-arm.h    |  5 ++++
 4 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a06..588ee5a 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -26,8 +26,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq;
+    bool vuart_enabled = false, virtio_enabled = false;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +39,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    /*
+     * XXX: Handle virtio properly
+     * A proper solution would be for the toolstack to allocate the interrupts
+     * used by each virtio backend and let the backend know which one is used.
+     */
+    if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
+        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+        virtio_enabled = true;
+    }
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +69,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled && irq == virtio_irq) {
+            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -658,6 +675,39 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, GUEST_VIRTIO_MMIO_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -961,6 +1011,9 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
+        if (libxl_defbool_val(info->arch_arm.virtio))
+            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
@@ -1178,6 +1231,7 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
 {
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
+    libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f..b054bf9 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -639,6 +639,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
+                               ("virtio", libxl_defbool),
                                ("vuart", libxl_vuart_type),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb6..10acf22 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2581,6 +2581,7 @@ skip_usbdev:
     }
 
     xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0);
+    xlu_cfg_get_defbool(config, "virtio", &b_info->arch_arm.virtio, 0);
 
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b..be7595f 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -464,6 +464,11 @@ typedef uint64_t xen_callback_t;
 #define PSCI_cpu_on      2
 #define PSCI_migrate     3
 
+/* VirtIO MMIO definitions */
+#define GUEST_VIRTIO_MMIO_BASE  xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE  xen_mk_ullong(0x200)
+#define GUEST_VIRTIO_MMIO_SPI   33
+
 #endif
 
 #ifndef __ASSEMBLY__
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40924.73966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfJ-0003eS-H2; Mon, 30 Nov 2020 10:44:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40924.73966; Mon, 30 Nov 2020 10:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfJ-0003eK-C5; Mon, 30 Nov 2020 10:44:17 +0000
Received: by outflank-mailman (input) for mailman id 40924;
 Mon, 30 Nov 2020 10:44:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUn-0000Uu-Dz
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:25 +0000
Received: from mail-lj1-x22c.google.com (unknown [2a00:1450:4864:20::22c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 553845a9-04a6-42a1-bbaf-90d801202ba7;
 Mon, 30 Nov 2020 10:32:17 +0000 (UTC)
Received: by mail-lj1-x22c.google.com with SMTP id s9so16995139ljo.11
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:17 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:15 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 553845a9-04a6-42a1-bbaf-90d801202ba7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=kU8hXo0MhUwaJAOfCWP3cQmYgBrqrc7iZRLWg/7AJHY=;
        b=H4WDeUB/57LzDiHlv6dARF/vp22ivBig+v9X086y02k3BHN3mN0dh9PeTop8n5QXKF
         cxkGhqC99WfEJ99PDxwkWG5woj6hArK3AXI2CkKJ6WpEUA3hbmLzNqnd5VNI8CXezn/h
         Ic4P1axxOApt25piQe+p2rC5dSXtTnNggsi7AO9LlclK5q6QzqBrirRbyZs5bS5sp4x9
         iYDQWya4aKnvhO3hwu4e5xFiuXZEP79ChJfj0K+IVsaOz62D8NgxbkCGZFHDYLWPUDm/
         pyGeBcnLiG0tFkm5oRzUFxFGeOa+HIOyTCdpBtOPKcgQgsJ7RI6BqBxhKIQAvkeTh6YA
         X8ZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=kU8hXo0MhUwaJAOfCWP3cQmYgBrqrc7iZRLWg/7AJHY=;
        b=eLibJvDBL0/+WhxtJYEg2/pttSmXIx6y2xbY8HrUlpzxdKdJucbMvQabDBN+QJjyLS
         O/gNSEnipoJKPkIafwWAifjOaWQXPY0RGHZV7qcOXNLhC8dmP1SqL88NjiOl9EPbp6uN
         fgWK7j9Wvs/fZZWdiGMpYeSkaEY8cNouqgWPQQy0qbDKHYPYTs8M32Ge4JXmoUnIGyb+
         hZbjAJCLISdm7zno2+VNX2Nuaqvuz4TQJIA18nb45prw7L+Mr1jMhlMrohUuvIZgHqxq
         2T7E9f+bhxCy7C7OqNsfEnqw1ub23BC7ALnHz5FNEj7QZWTn5mXwr+0mPBi9Q8yawWhk
         GW7g==
X-Gm-Message-State: AOAM533ry5IvGkZNDpIynb5MT5+U/lALAlLixo2+hGFlGp/FVSTelYoI
	6HU7oERpuB+bYQKkij0/8qzKxpw80oNHIQ==
X-Google-Smtp-Source: ABdhPJwMpyUcazry+2thIFAZp4dzdk2dlfVeiFZSM93RQu/mrnP+Z/vMPEdKvWqrRLEmtl6WU6tNhA==
X-Received: by 2002:a2e:985:: with SMTP id 127mr10145900ljj.268.1606732336342;
        Mon, 30 Nov 2020 02:32:16 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Mon, 30 Nov 2020 12:31:32 +0200
Message-Id: <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch introduces a helper whose main purpose is to check
whether a domain is using IOREQ server(s).

On Arm the current benefit is to avoid calling vcpu_ioreq_handle_completion()
(which implies iterating over all possible IOREQ servers anyway)
on every return in leave_hypervisor_to_guest() if there are no active
servers for the particular domain.
This helper will also be used by one of the subsequent patches on Arm.

This involves adding an extra per-domain variable to store the count
of servers in use.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - update patch description
   - guard helper with CONFIG_IOREQ_SERVER
   - remove "hvm" prefix
   - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
   - put suitable ASSERT()s
   - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
   - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()

Changes V2 -> V3:
   - update patch description
   - remove ASSERT()s from the helper, add a comment
   - use #ifdef CONFIG_IOREQ_SERVER inside function body
   - use new ASSERT() construction in set_ioreq_server()
---
---
 xen/arch/arm/traps.c    | 15 +++++++++------
 xen/common/ioreq.c      |  7 ++++++-
 xen/include/xen/ioreq.h | 14 ++++++++++++++
 xen/include/xen/sched.h |  1 +
 4 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 4cef43e..b6077d2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2262,14 +2262,17 @@ static bool check_for_vcpu_work(void)
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
-    bool handled;
+    if ( domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;
 
-    local_irq_enable();
-    handled = vcpu_ioreq_handle_completion(v);
-    local_irq_disable();
+        local_irq_enable();
+        handled = vcpu_ioreq_handle_completion(v);
+        local_irq_disable();
 
-    if ( !handled )
-        return true;
+        if ( !handled )
+            return true;
+    }
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4855dd8..f35dcf9 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->ioreq_server.server[id]);
+    ASSERT(!s ^ !d->ioreq_server.server[id]);
 
     d->ioreq_server.server[id] = s;
+
+    if ( s )
+        d->ioreq_server.nr_servers++;
+    else
+        d->ioreq_server.nr_servers--;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 02ff998..2289e79 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -55,6 +55,20 @@ struct ioreq_server {
     uint8_t                bufioreq_handling;
 };
 
+/*
+ * This should only be used when d == current->domain and it's not paused,
+ * or when they're distinct and d is paused. Otherwise the result is
+ * stale before the caller can inspect it.
+ */
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+#ifdef CONFIG_IOREQ_SERVER
+    return d->ioreq_server.nr_servers;
+#else
+    return false;
+#endif
+}
+
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
     return unlikely(p->df) ?
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 8269f84..2277995 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -550,6 +550,7 @@ struct domain
     struct {
         spinlock_t              lock;
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int            nr_servers;
     } ioreq_server;
 #endif
 };
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40930.73990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfX-0003o2-6z; Mon, 30 Nov 2020 10:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40930.73990; Mon, 30 Nov 2020 10:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfX-0003nr-0c; Mon, 30 Nov 2020 10:44:31 +0000
Received: by outflank-mailman (input) for mailman id 40930;
 Mon, 30 Nov 2020 10:44:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgV7-0000Uu-En
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:45 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99336363-45f7-42cc-a685-7b4091276a2a;
 Mon, 30 Nov 2020 10:32:22 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id q13so19910159lfr.10
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:21 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.19
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:19 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99336363-45f7-42cc-a685-7b4091276a2a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=Z671f8NtIQQSdLKPW7Ujw7d1GChOiu14pAFcqn9342M=;
        b=i6IB/D3cTa/9KbdcsLiGLYiGU6RKFd30neH7OpWV/zgMkEGuWqh+gK44FpPVNN/3gP
         han1c6Bc+aJ7ampbPFJ3PotG9kHTiB/SmzdbG9AtbSSXUjZZy9daEfdQyKZa9xaECzyW
         jI/U6vZJ5OlUVbejs/MYuUuU9a3pJlePvsPRLqgZcfijVh3ShuEnax5TdUD1kPrPH8Kb
         61PNowVZZudUxAYJZWDrbGRIlBfKb7O05MHYeZu7PPXcObLrevrEkWXBwv4DQhMlhafy
         1pT7yjQIXQwiI7c3sxPx7BPPTXPCvpAWBDV0rLa6uoKWRIpIp+W5i8h4+/FEyGAYFfET
         rSzw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=Z671f8NtIQQSdLKPW7Ujw7d1GChOiu14pAFcqn9342M=;
        b=AmQ+0F93qdliGcez0NUbb7n0ce5VePfxPH9FdAOijEpXxG0mk88AEgpcYWyWAhLhBV
         zs7LkE2UQV4y7p/j/NjyANNw5i8avYYF9dGo3T4ev7XURORq6pa6Auyhj4zER7kHnaJn
         +YWl8RCQ2cyrfA5DIAxViuRl7UqYo2D3W6NPepojmkoTC8PQvxXJNttYvY3fXjrAWoJe
         IhDKAInT89zxQ2SVahEB0s3thW21Nw7a4ppvZXxV/Zv4UO3hKYxIqF0wYHRhVaWq7ZUb
         GZXJQNsGKybOWwMpw4dBvPkNZ82XipQABcr4UOxTp4FNOAf7QriuFlo0llDcjIsEjh6U
         NKvg==
X-Gm-Message-State: AOAM530MEYe8YCM76tI2gcKSm1fqgT+fonxW1PYDJt1psepVYS6CLOUA
	rYCLUPG0NdHLQ1ddjx0e5qhnWAtfrLB1Aw==
X-Google-Smtp-Source: ABdhPJySun5qea1qs774Yk8F4e0pmXYkhq/Jfe962JNLAOrwOFlSTOgFx57BlncSwpz28P1vAOSXVQ==
X-Received: by 2002:a05:6512:3243:: with SMTP id c3mr8426212lfr.371.1606732340364;
        Mon, 30 Nov 2020 02:32:20 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
Date: Mon, 30 Nov 2020 12:31:36 +0200
Message-Id: <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

We need to send a mapcache invalidation request to qemu/demu every time
a page gets removed from a guest.

At the moment, the Arm code doesn't explicitly remove the existing
mapping before inserting the new mapping. Instead, this is done
implicitly by __p2m_set_entry().

So we need to recognize the case when the old entry is a RAM page *and*
the new MFN is different in order to set the corresponding flag.
The most suitable place to do this is p2m_free_entry(), where
we can find the correct leaf type. The invalidation request
will be sent in do_trap_hypercall() later on.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, some changes were derived from (+ new explanation):
     xen/ioreq: Make x86's invalidate qemu mapcache handling common
   - put setting of the flag into __p2m_set_entry()
   - clarify the conditions when the flag should be set
   - use domain_has_ioreq_server()
   - update do_trap_hypercall() by adding local variable

Changes V2 -> V3:
   - update patch description
   - move check to p2m_free_entry()
   - add a comment
   - use "curr" instead of "v" in do_trap_hypercall()
---
---
 xen/arch/arm/p2m.c   | 24 ++++++++++++++++--------
 xen/arch/arm/traps.c | 13 ++++++++++---
 2 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5b8d494..9674f6f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,6 +1,7 @@
 #include <xen/cpu.h>
 #include <xen/domain_page.h>
 #include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 #include <xen/softirq.h>
@@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     if ( !p2m_is_valid(entry) )
         return;
 
-    /* Nothing to do but updating the stats if the entry is a super-page. */
-    if ( p2m_is_superpage(entry, level) )
+    if ( p2m_is_superpage(entry, level) || (level == 3) )
     {
-        p2m->stats.mappings[level]--;
-        return;
-    }
+#ifdef CONFIG_IOREQ_SERVER
+        /*
+         * If this gets called (non-recursively) then either the entry
+         * was replaced by an entry with a different base (valid case) or
+         * the shattering of a superpage failed (error case).
+         * So, at worst, a spurious mapcache invalidation might be sent.
+         */
+        if ( domain_has_ioreq_server(p2m->domain) &&
+             (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) )
+            p2m->domain->mapcache_invalidate = true;
+#endif
 
-    if ( level == 3 )
-    {
         p2m->stats.mappings[level]--;
-        p2m_put_l3_page(entry);
+        /* Nothing to do if the entry is a super-page. */
+        if ( level == 3 )
+            p2m_put_l3_page(entry);
         return;
     }
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b6077d2..151c626 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1443,6 +1443,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
                               const union hsr hsr)
 {
     arm_hypercall_fn_t call = NULL;
+    struct vcpu *curr = current;
 
     BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
 
@@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
         return;
     }
 
-    current->hcall_preempted = false;
+    curr->hcall_preempted = false;
 
     perfc_incra(hypercalls, *nr);
     call = arm_hypercall_table[*nr].fn;
@@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
     HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
 
 #ifndef NDEBUG
-    if ( !current->hcall_preempted )
+    if ( !curr->hcall_preempted )
     {
         /* Deliberately corrupt parameter regs used by this hypercall. */
         switch ( arm_hypercall_table[*nr].nr_args ) {
@@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
 #endif
 
     /* Ensure the hypercall trap instruction is re-executed. */
-    if ( current->hcall_preempted )
+    if ( curr->hcall_preempted )
         regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+    if ( unlikely(curr->domain->mapcache_invalidate) &&
+         test_and_clear_bool(curr->domain->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
+#endif
 }
 
 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40929.73984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfW-0003nN-Qf; Mon, 30 Nov 2020 10:44:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40929.73984; Mon, 30 Nov 2020 10:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfW-0003n3-MM; Mon, 30 Nov 2020 10:44:30 +0000
Received: by outflank-mailman (input) for mailman id 40929;
 Mon, 30 Nov 2020 10:44:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUO-0000Uu-DP
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:00 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 475c828b-c025-4108-a7fe-694966bbda51;
 Mon, 30 Nov 2020 10:32:13 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id d8so20627634lfa.1
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:13 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.11
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:11 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 475c828b-c025-4108-a7fe-694966bbda51
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=nMA1D5QyuY6nqUL9SQWis4OrJ06p7JM6vXYO/WRZnDQ=;
        b=jK99CXJOBFidWNf8C240OVzB5yoEfyU15B2+M7W/8Daf5lmtZuXDIdsAjV2s85dKMH
         eg4p9Iv9wIom4ohIKNtmwsW+6oTfdv68QLZOfizUCAyswBWvgxQufepXuZlbzu6AoMMN
         /xE9QZ8huIFDaNz5yg8MCyAqB/ZlNj+zhyJo+M5arDeh1cJOVwlQkg7VypkVWKsq/6wN
         /rsMncuE1LNTSTz1LsFriaHeg8m4W4b8LP90fIWctCmPO56de5xK36yGGIxz07Rr48ax
         Sthgqur8dsVIPh9KrmXiiZyBwwrf/0wBMLG9I84eg3IbbHf5ODdtUmd4dAPG2RGiLX7a
         7yag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=nMA1D5QyuY6nqUL9SQWis4OrJ06p7JM6vXYO/WRZnDQ=;
        b=PAi4uMJ4G1cc4sWeTsrPLwgAGjmx2Gkny0UDLNm9SXui+BmgmPBCZrvDY+fbzRCAJ6
         mlCoGm/LYlBSXXIQ9m4sPmvMtaqGqy9a7mNvDddmIuu3tIu6axymWIGlthqDSr/NU+e7
         UbbKJBhjZpDN2tn33JpQtgNsmHvZWQcReotHwNUmX/ynSD7RCzqOvNWINqYtFpb4FizV
         ackTI04zxTBSZPDCPf1Aw20Z3E1cf9TAxaIb8LyFivyTEtIUVOMKNmUCtVSLRd02Ejgd
         u2vj35AAjuoxT2PPuiVMUcgZht1o10pud4bVKZW/o7e8jdIXpOqvgMGxpPU8ZexmEDEc
         qIWQ==
X-Gm-Message-State: AOAM530dWcEWcM6YLO2zOGpe6JsxtaveQve7CYYM40FXwpbDMNesFcE9
	zw/eoSXr/6AlymOnWxVQPRVh+vGXc4+A/Q==
X-Google-Smtp-Source: ABdhPJyPEH5EGMt8g7+ksXyuOES+H1T4Dvl+dgtWK2QVenb8rwRLrqqN7nDDLL9ibneLpMfhtYE4tw==
X-Received: by 2002:a19:248a:: with SMTP id k132mr9638514lfk.387.1606732332030;
        Mon, 30 Nov 2020 02:32:12 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 13/23] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
Date: Mon, 30 Nov 2020 12:31:28 +0200
Message-Id: <1606732298-22107-14-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The cmpxchg() in ioreq_send_buffered() operates on memory shared
with the emulator domain (and the target domain if the legacy
interface is used).

In order to be on the safe side we need to switch
to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm.

As there is no plan to support the legacy interface on Arm,
the page will be mapped in a single domain at a time, so we
can safely use s->emulator in guest_cmpxchg64().

Thankfully the only user of the legacy interface is x86 so far,
and there is no concern regarding the atomic operations.

Please note that the legacy interface *must not* be used on Arm
without revisiting the code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - move earlier to avoid breaking arm32 compilation
   - add an explanation to commit description and hvm_allow_set_param()
   - pass s->emulator

Changes V2 -> V3:
   - update patch description
---
---
 xen/arch/arm/hvm.c | 4 ++++
 xen/common/ioreq.c | 3 ++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8951b34..9694e5a 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -31,6 +31,10 @@
 
 #include <asm/hypercall.h>
 
+/*
+ * The legacy interface (which involves magic IOREQ pages) *must* not be used
+ * without revisiting the code.
+ */
 static int hvm_allow_set_param(const struct domain *d, unsigned int param)
 {
     switch ( param )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 3ca5b96..4855dd8 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -29,6 +29,7 @@
 #include <xen/trace.h>
 #include <xen/vpci.h>
 
+#include <asm/guest_atomics.h>
 #include <asm/hvm/ioreq.h>
 
 #include <public/hvm/ioreq.h>
@@ -1182,7 +1183,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p)
 
         new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
         new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
+        guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);
     }
 
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40931.74002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfX-0003ph-SL; Mon, 30 Nov 2020 10:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40931.74002; Mon, 30 Nov 2020 10:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfX-0003ou-Eq; Mon, 30 Nov 2020 10:44:31 +0000
Received: by outflank-mailman (input) for mailman id 40931;
 Mon, 30 Nov 2020 10:44:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUT-0000Uu-Di
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:05 +0000
Received: from mail-lf1-x136.google.com (unknown [2a00:1450:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 139e13ca-ea33-4293-acbf-50a16258e6e5;
 Mon, 30 Nov 2020 10:32:12 +0000 (UTC)
Received: by mail-lf1-x136.google.com with SMTP id z21so20560124lfe.12
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:12 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:10 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 139e13ca-ea33-4293-acbf-50a16258e6e5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=M048BVT+nCu0MjdyKQAZ2lA41UelwaMDIQ9W/KmwiOU=;
        b=ZRM3C2bwAeHcK8L2SFy9VzjIPuMZBQi1v7iYg/lOTkIPNf5r6NGWfird1X1t27vMQH
         0gfC1AybpAfANSPJw+5LZnpRbhOU3VWoiKJzE1TQKNhm35tEb7MWFMboj+Zu2lBNughJ
         1nuT9yRE7xkL9nsIlxh0TtyyWvKw+dJYOQMdrWAPRJp9QC0feOf9C1bvJw6ab7qetBQL
         A/kO0+KB1CxAt87LEjwfsK3oR7H3glLKjlj/XY/SeKyJMUiQkeodmVL8ivoFP07x0RJw
         UxjsWALAIFliBPRLfRQ0NSduRq4pzfbUA8Gc1LUAgu3EMDdzjndCgOOFFv0D714KE/fD
         uRYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=M048BVT+nCu0MjdyKQAZ2lA41UelwaMDIQ9W/KmwiOU=;
        b=ZwzUCLCbbKgm+NwNjR+GBKY1s+jpX9Vn0xonrf3QDVhlUXzUmTnnTznRcghijFdop9
         bOcU5xxfMjaRiJHVBypZFQbY5xaeOYg2DFXUdHb8RD4V/ZMG65wXPd4DCySV7IbjcclU
         GLGHvYxrykXwdkwqO2OAomCN6ao8xIVJQxoR9beYmfH79ZMPSzxf6Kmq49y/sxmjO7uI
         1TunFoOuONLhemVupSWfde62pHl0iYirSvfAJsUAEksy5SfMVuGhG3w1GHO2fG/ay1Pz
         7cNRVTsnmp0IWPv986wXWWsPSbwHsth15VB/tifykBh0TquPD2CD+cT5fTYt1wwoU5G7
         LCrw==
X-Gm-Message-State: AOAM532sqj0Ulrl8QGlo6HHd/BtPD1p7UVwgLTjGYdYUz5j8LDeg7etc
	J8QzmK9JDQfHJMHPR2KdFcl2U0VC+oTAFw==
X-Google-Smtp-Source: ABdhPJxS23rp4feIT9/kzYSLTMYBHzWOxn0CwNgpX6XLp1NId/ICuU1Bxn8Jpt1h2ggt/yBIxq9mlg==
X-Received: by 2002:ac2:5208:: with SMTP id a8mr5249176lfl.206.1606732331125;
        Mon, 30 Nov 2020 02:32:11 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Mon, 30 Nov 2020 12:31:27 +0200
Message-Id: <1606732298-22107-13-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch removes "hvm" prefixes and infixes from IOREQ related
function names in the common code and performs a renaming where
appropriate according to the more consistent new naming scheme:
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"

A few function names are clarified to better reflect their purposes:
handle_hvm_io_completion -> vcpu_ioreq_handle_completion
hvm_io_pending           -> vcpu_ioreq_pending
hvm_ioreq_init           -> ioreq_domain_init
hvm_alloc_ioreq_mfn      -> ioreq_server_alloc_mfn
hvm_free_ioreq_mfn       -> ioreq_server_free_mfn

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - update patch according to the "legacy interface" being x86-specific
   - update patch description
   - rename everything touched according to new naming scheme
---
---
 xen/arch/x86/hvm/dm.c       |   4 +-
 xen/arch/x86/hvm/emulate.c  |   6 +-
 xen/arch/x86/hvm/hvm.c      |  10 +--
 xen/arch/x86/hvm/io.c       |   6 +-
 xen/arch/x86/hvm/ioreq.c    |   2 +-
 xen/arch/x86/hvm/stdvga.c   |   4 +-
 xen/arch/x86/hvm/vmx/vvmx.c |   2 +-
 xen/common/dm.c             |  28 +++----
 xen/common/ioreq.c          | 174 ++++++++++++++++++++++----------------------
 xen/common/memory.c         |   2 +-
 xen/include/xen/ioreq.h     |  67 ++++++++---------
 11 files changed, 153 insertions(+), 152 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 35f860a..0b6319e 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -352,8 +352,8 @@ int arch_dm_op(struct xen_dm_op *op, struct domain *d,
             break;
 
         if ( first_gfn == 0 )
-            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
-                                                  data->type, data->flags);
+            rc = ioreq_server_map_mem_type(d, data->id,
+                                           data->type, data->flags);
         else
             rc = 0;
 
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 04e4994..a025f89 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -261,7 +261,7 @@ static int hvmemul_do_io(
          * an ioreq server that can handle it.
          *
          * Rules:
-         * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to
+         * A> PIO or MMIO accesses run through ioreq_server_select() to
          * choose the ioreq server by range. If no server is found, the access
          * is ignored.
          *
@@ -323,7 +323,7 @@ static int hvmemul_do_io(
         }
 
         if ( !s )
-            s = hvm_select_ioreq_server(currd, &p);
+            s = ioreq_server_select(currd, &p);
 
         /* If there is no suitable backing DM, just ignore accesses */
         if ( !s )
@@ -333,7 +333,7 @@ static int hvmemul_do_io(
         }
         else
         {
-            rc = hvm_send_ioreq(s, &p, 0);
+            rc = ioreq_send(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->req.state = STATE_IOREQ_NONE;
             else if ( !ioreq_needs_completion(&vio->req) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index cc46909..8e3c2e2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v)
 
     pt_restore_timer(v);
 
-    if ( !handle_hvm_io_completion(v) )
+    if ( !vcpu_ioreq_handle_completion(v) )
         return;
 
     if ( unlikely(v->arch.vm_event) )
@@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d)
     register_g2m_portio_handler(d);
     register_vpci_portio_handler(d);
 
-    hvm_ioreq_init(d);
+    ioreq_domain_init(d);
 
     hvm_init_guest_time(d);
 
@@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
 
     viridian_domain_deinit(d);
 
-    hvm_destroy_all_ioreq_servers(d);
+    ioreq_server_destroy_all(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc )
         goto fail5;
 
-    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
+    rc = ioreq_server_add_vcpu_all(d, v);
     if ( rc != 0 )
         goto fail6;
 
@@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v)
 {
     viridian_vcpu_deinit(v);
 
-    hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
+    ioreq_server_remove_vcpu_all(v->domain, v);
 
     if ( hvm_altp2m_supported() )
         altp2m_vcpu_destroy(v);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 327a6a2..a0dd8d1 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff)
     if ( timeoff == 0 )
         return;
 
-    if ( hvm_broadcast_ioreq(&p, true) != 0 )
+    if ( ioreq_broadcast(&p, true) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
@@ -74,7 +74,7 @@ void send_invalidate_req(void)
         .data = ~0UL, /* flush all */
     };
 
-    if ( hvm_broadcast_ioreq(&p, false) != 0 )
+    if ( ioreq_broadcast(&p, false) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
 
@@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
          * We should not advance RIP/EIP if the domain is shutting down or
          * if X86EMUL_RETRY has been returned by an internal handler.
          */
-        if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) )
+        if ( curr->domain->is_shutting_down || !vcpu_ioreq_pending(curr) )
             return false;
         break;
 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 7808b75..934189e 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -154,7 +154,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
+         * demand if ioreq_server_get_frame() is called), then
          * mapping a guest frame is not permitted.
          */
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bafb3f6..390ac51 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }
 
  done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
+    srv = ioreq_server_select(current->domain, &p);
     if ( !srv )
         return X86EMUL_UNHANDLEABLE;
 
-    return hvm_send_ioreq(srv, &p, 1);
+    return ioreq_send(srv, &p, 1);
 }
 
 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3a37e9e..a4813f0 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1516,7 +1516,7 @@ void nvmx_switch_guest(void)
      * don't want to continue as this setup is not implemented nor supported
      * as of right now.
      */
-    if ( hvm_io_pending(v) )
+    if ( vcpu_ioreq_pending(v) )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
diff --git a/xen/common/dm.c b/xen/common/dm.c
index 36e01a2..9d394fc 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -100,8 +100,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad[0] || data->pad[1] || data->pad[2] )
             break;
 
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+        rc = ioreq_server_create(d, data->handle_bufioreq,
+                                 &data->id);
         break;
     }
 
@@ -117,12 +117,12 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->flags & ~valid_flags )
             break;
 
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->bufioreq_gfn,
-                                       &data->bufioreq_port);
+        rc = ioreq_server_get_info(d, data->id,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->ioreq_gfn,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->bufioreq_gfn,
+                                   &data->bufioreq_port);
         break;
     }
 
@@ -135,8 +135,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+        rc = ioreq_server_map_io_range(d, data->id, data->type,
+                                       data->start, data->end);
         break;
     }
 
@@ -149,8 +149,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+        rc = ioreq_server_unmap_io_range(d, data->id, data->type,
+                                         data->start, data->end);
         break;
     }
 
@@ -163,7 +163,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = ioreq_server_set_state(d, data->id, !!data->enabled);
         break;
     }
 
@@ -176,7 +176,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_destroy_ioreq_server(d, data->id);
+        rc = ioreq_server_destroy(d, data->id);
         break;
     }
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index caf4543..3ca5b96 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -59,7 +59,7 @@ static struct ioreq_server *get_ioreq_server(const struct domain *d,
  * Iterate over all possible ioreq servers.
  *
  * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
+ *       ioreq servers are favoured in ioreq_server_select().
  *       This is a semantic that previously existed when ioreq servers
  *       were held in a linked list.
  */
@@ -106,12 +106,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
     return NULL;
 }
 
-bool hvm_io_pending(struct vcpu *v)
+bool vcpu_ioreq_pending(struct vcpu *v)
 {
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
+static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -168,7 +168,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
-bool handle_hvm_io_completion(struct vcpu *v)
+bool vcpu_ioreq_handle_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct vcpu_io *vio = &v->io;
@@ -183,7 +183,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     }
 
     sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+    if ( sv && !wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
     vio->req.state = ioreq_needs_completion(&vio->req) ?
@@ -214,7 +214,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
+static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
@@ -223,7 +223,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
+         * on demand if ioreq_server_get_info() is called), then
          * allocating a page is not permitted.
          */
         if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -262,7 +262,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
+static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
@@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
-                                    struct ioreq_vcpu *sv)
+static void ioreq_update_evtchn(struct ioreq_server *s,
+                                struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -314,8 +314,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
     }
 }
 
-static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
-                                     struct vcpu *v)
+static int ioreq_server_add_vcpu(struct ioreq_server *s,
+                                 struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
     int rc;
@@ -350,7 +350,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     list_add(&sv->list_entry, &s->ioreq_vcpu_list);
 
     if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
+        ioreq_update_evtchn(s, sv);
 
     spin_unlock(&s->lock);
     return 0;
@@ -366,8 +366,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
-                                         struct vcpu *v)
+static void ioreq_server_remove_vcpu(struct ioreq_server *s,
+                                     struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
 
@@ -394,7 +394,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
+static void ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv, *next;
 
@@ -420,28 +420,28 @@ static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
+static int ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
-    rc = hvm_alloc_ioreq_mfn(s, false);
+    rc = ioreq_server_alloc_mfn(s, false);
 
     if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
+        rc = ioreq_server_alloc_mfn(s, true);
 
     if ( rc )
-        hvm_free_ioreq_mfn(s, false);
+        ioreq_server_free_mfn(s, false);
 
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
+static void ioreq_server_free_pages(struct ioreq_server *s)
 {
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
+    ioreq_server_free_mfn(s, true);
+    ioreq_server_free_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
+static void ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -449,8 +449,8 @@ static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
-                                            ioservid_t id)
+static int ioreq_server_alloc_rangesets(struct ioreq_server *s,
+                                        ioservid_t id)
 {
     unsigned int i;
     int rc;
@@ -482,12 +482,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
     return 0;
 
  fail:
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct ioreq_server *s)
+static void ioreq_server_enable(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv;
 
@@ -503,13 +503,13 @@ static void hvm_ioreq_server_enable(struct ioreq_server *s)
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
                           list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+        ioreq_update_evtchn(s, sv);
 
   done:
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct ioreq_server *s)
+static void ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
@@ -524,9 +524,9 @@ static void hvm_ioreq_server_disable(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+static int ioreq_server_init(struct ioreq_server *s,
+                             struct domain *d, int bufioreq_handling,
+                             ioservid_t id)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
@@ -544,7 +544,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     s->ioreq.gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    rc = ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
@@ -552,7 +552,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
 
     for_each_vcpu ( d, v )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail_add;
     }
@@ -560,23 +560,23 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     return 0;
 
  fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
+    ioreq_server_remove_all_vcpus(s);
     arch_ioreq_server_unmap_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct ioreq_server *s)
+static void ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
+    ioreq_server_remove_all_vcpus(s);
 
     /*
      * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
+     *       ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
      *       are not mapped, leaving the page to be freed by the latter.
      *       However if the pages are mapped then the former will set
@@ -584,15 +584,15 @@ static void hvm_ioreq_server_deinit(struct ioreq_server *s)
      *       nothing.
      */
     arch_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
+    ioreq_server_free_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
 }
 
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+int ioreq_server_create(struct domain *d, int bufioreq_handling,
+                        ioservid_t *id)
 {
     struct ioreq_server *s;
     unsigned int i;
@@ -620,11 +620,11 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
     /*
      * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
+     * ioreq_server_init() since the target domain is paused.
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = ioreq_server_init(s, d, bufioreq_handling, i);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -647,7 +647,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
+int ioreq_server_destroy(struct domain *d, ioservid_t id)
 {
     struct ioreq_server *s;
     int rc;
@@ -668,13 +668,13 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     arch_ioreq_server_destroy(s);
 
-    hvm_ioreq_server_disable(s);
+    ioreq_server_disable(s);
 
     /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
+     * It is safe to call ioreq_server_deinit() prior to
      * set_ioreq_server() since the target domain is paused.
      */
-    hvm_ioreq_server_deinit(s);
+    ioreq_server_deinit(s);
     set_ioreq_server(d, id, NULL);
 
     domain_unpause(d);
@@ -689,10 +689,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     return rc;
 }
 
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port)
+int ioreq_server_get_info(struct domain *d, ioservid_t id,
+                          unsigned long *ioreq_gfn,
+                          unsigned long *bufioreq_gfn,
+                          evtchn_port_t *bufioreq_port)
 {
     struct ioreq_server *s;
     int rc;
@@ -736,8 +736,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
+int ioreq_server_get_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn)
 {
     struct ioreq_server *s;
     int rc;
@@ -756,7 +756,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = hvm_ioreq_server_alloc_pages(s);
+    rc = ioreq_server_alloc_pages(s);
     if ( rc )
         goto out;
 
@@ -787,9 +787,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end)
+int ioreq_server_map_io_range(struct domain *d, ioservid_t id,
+                              uint32_t type, uint64_t start,
+                              uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -839,9 +839,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end)
+int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id,
+                                uint32_t type, uint64_t start,
+                                uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -899,8 +899,8 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
  * Support for the emulation of read operations can be added when an ioreq
  * server has such requirement in the future.
  */
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags)
+int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
+                              uint32_t type, uint32_t flags)
 {
     struct ioreq_server *s;
     int rc;
@@ -931,8 +931,8 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+int ioreq_server_set_state(struct domain *d, ioservid_t id,
+                           bool enabled)
 {
     struct ioreq_server *s;
     int rc;
@@ -952,9 +952,9 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     domain_pause(d);
 
     if ( enabled )
-        hvm_ioreq_server_enable(s);
+        ioreq_server_enable(s);
     else
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
     domain_unpause(d);
 
@@ -965,7 +965,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -975,7 +975,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
     }
@@ -992,7 +992,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         if ( !s )
             continue;
 
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
     }
 
     spin_unlock_recursive(&d->ioreq_server.lock);
@@ -1000,7 +1000,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     return rc;
 }
 
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1008,12 +1008,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
 
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-void hvm_destroy_all_ioreq_servers(struct domain *d)
+void ioreq_server_destroy_all(struct domain *d)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1027,13 +1027,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
         /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
+         * It is safe to call ioreq_server_deinit() prior to
          * set_ioreq_server() since the target domain is being destroyed.
          */
-        hvm_ioreq_server_deinit(s);
+        ioreq_server_deinit(s);
         set_ioreq_server(d, id, NULL);
 
         xfree(s);
@@ -1042,8 +1042,8 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p)
+struct ioreq_server *ioreq_server_select(struct domain *d,
+                                         ioreq_t *p)
 {
     struct ioreq_server *s;
     uint8_t type;
@@ -1098,7 +1098,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
+static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
     struct ioreq_page *iorp;
@@ -1191,8 +1191,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
+int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
@@ -1201,7 +1201,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
     ASSERT(s);
 
     if ( buffered )
-        return hvm_send_buffered_ioreq(s, proto_p);
+        return ioreq_send_buffered(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return IOREQ_STATUS_RETRY;
@@ -1251,7 +1251,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
     return IOREQ_STATUS_UNHANDLED;
 }
 
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
+unsigned int ioreq_broadcast(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
     struct ioreq_server *s;
@@ -1262,14 +1262,14 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
+        if ( ioreq_send(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
     return failed;
 }
 
-void hvm_ioreq_init(struct domain *d)
+void ioreq_domain_init(struct domain *d)
 {
     spin_lock_init(&d->ioreq_server.lock);
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 92cf983..3363c06 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1108,7 +1108,7 @@ static int acquire_ioreq_server(struct domain *d,
     {
         mfn_t mfn;
 
-        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        rc = ioreq_server_get_frame(d, id, frame + i, &mfn);
         if ( rc )
             return rc;
 
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 979afa0..02ff998 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -81,41 +81,42 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
-bool hvm_io_pending(struct vcpu *v);
-bool handle_hvm_io_completion(struct vcpu *v);
+bool vcpu_ioreq_pending(struct vcpu *v);
+bool vcpu_ioreq_handle_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port);
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn);
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end);
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end);
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags);
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-void hvm_destroy_all_ioreq_servers(struct domain *d);
-
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p);
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
-
-void hvm_ioreq_init(struct domain *d);
+int ioreq_server_create(struct domain *d, int bufioreq_handling,
+                        ioservid_t *id);
+int ioreq_server_destroy(struct domain *d, ioservid_t id);
+int ioreq_server_get_info(struct domain *d, ioservid_t id,
+                          unsigned long *ioreq_gfn,
+                          unsigned long *bufioreq_gfn,
+                          evtchn_port_t *bufioreq_port);
+int ioreq_server_get_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn);
+int ioreq_server_map_io_range(struct domain *d, ioservid_t id,
+                              uint32_t type, uint64_t start,
+                              uint64_t end);
+int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id,
+                                uint32_t type, uint64_t start,
+                                uint64_t end);
+int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
+                              uint32_t type, uint32_t flags);
+
+int ioreq_server_set_state(struct domain *d, ioservid_t id,
+                           bool enabled);
+
+int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v);
+void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v);
+void ioreq_server_destroy_all(struct domain *d);
+
+struct ioreq_server *ioreq_server_select(struct domain *d,
+                                         ioreq_t *p);
+int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered);
+unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+
+void ioreq_domain_init(struct domain *d);
 
 #endif /* __XEN_IOREQ_H__ */
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40932.74012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfY-0003rU-HS; Mon, 30 Nov 2020 10:44:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40932.74012; Mon, 30 Nov 2020 10:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfY-0003qt-6K; Mon, 30 Nov 2020 10:44:32 +0000
Received: by outflank-mailman (input) for mailman id 40932;
 Mon, 30 Nov 2020 10:44:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUY-0000Uu-Dl
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:10 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id daa3e8e6-cf72-44a6-8ee1-0f0c2ee68849;
 Mon, 30 Nov 2020 10:32:14 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l11so20619199lfg.0
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:14 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.12
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:12 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daa3e8e6-cf72-44a6-8ee1-0f0c2ee68849
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fPq8ybHMVxvpysG13On0tY3c7FiJZ3bhgQTb8lqkCiM=;
        b=nTmscL9MBzSpuqJuv7d2U+kLjSOgPh4EfE6Dkna/f3t+K++rjv8PGxKzFFuE/Qrsp+
         s6N20OWSly3VEIj+nO38oNE3PgKQhgUkLZurBDBrRpd6N91IMEvhfzuqPMVcRoSxc+SP
         6TnLHKIrMe4M2KC9xHlSWimS62QWTAHGxD/v8nn2kjs21L0riEfjEM1BhDdrRtHIsYbp
         YyTvr6GM0dShSobdRFYIdODA4pbd4Und0oO/h/eG8DsZfvS3/PBb/3Gmp9oBYrCMxKU2
         vgehUZadkyrLQtGDKYoPK4URQsbg4vX+wxDtn2QI4c8lR7CYJ5reIMRtC6aiqgmHgYDs
         pCaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fPq8ybHMVxvpysG13On0tY3c7FiJZ3bhgQTb8lqkCiM=;
        b=HPlXO/3ZP06CQqk9KrVaDv7L8N26Y8rP683a149y2X4JfejgnlPmvkYpdJtX+MoD9j
         imUhjXBq35ZGocWvGH7scLlbaAIVosg8NS68R9Y+qJUOGKgKcfIGjXlubrfmjkRDwbPO
         7kSi09Yk1FwI3wHpT6j55mHr29aocPBM9g4qMFxLnGo8HWzVysMxRxDaYoNU2gcAztks
         YLegmlPfjODg3ytAfVT8K/OcvH9JdPVwlGyOI0yuTH+X0Y/BlDVmdlB+YeG4fck3Fxpm
         +mHe15L7M8x08jElLTK7kAI5FtSJVhZ4t39pXVU9HmIQl7Lm+9xJv6vGPNeAfGHC4aq8
         lIRw==
X-Gm-Message-State: AOAM5324Ru553N6hkZujDtFTvp6MuniC9YBXklK1VUmZZ3Fa2SgTrr9j
	GGqGzv++IJ3zYyMVQdPQ7v6bhU7WNuAvYA==
X-Google-Smtp-Source: ABdhPJx1W8tA7iTrTV9rbTzlbFKXCaj6IFoo5qPVaUoRyIsxoqoIMgJKEOtD+9JsGxSSHfeQ/OFBYA==
X-Received: by 2002:a19:83c9:: with SMTP id f192mr9387224lfd.302.1606732332978;
        Mon, 30 Nov 2020 02:32:12 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for IOREQ/DM features
Date: Mon, 30 Nov 2020 12:31:29 +0200
Message-Id: <1606732298-22107-15-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <julien.grall@arm.com>

This patch adds basic IOREQ/DM support on Arm. The subsequent
patches will improve functionality and add remaining bits.

The IOREQ/DM features are supposed to be built with IOREQ_SERVER
option enabled, which is disabled by default on Arm for now.

Please note, the "PIO handling" TODO is expected to be left unaddressed
for the current series. It is not a big issue for now, while Xen
doesn't have support for vPCI on Arm. On Arm64, PIO accesses are only
used for PCI IO BARs, and we would probably want to expose them to the
emulator as PIO accesses to make the DM completely arch-agnostic. So
"PIO handling" should be implemented when we add support for vPCI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was split into:
     - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
     - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
   - update patch description
   - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
     - arch_hvm_destroy_ioreq_server()
     - arch_handle_hvm_io_completion()
   - update arch files to include xen/ioreq.h
   - remove HVMOP plumbing
   - rewrite the logic to properly handle the case when hvm_send_ioreq() returns IO_RETRY
   - add logic to properly handle the handle_hvm_io_completion() return value
   - rename handle_mmio() to ioreq_handle_complete_mmio()
   - move paging_mark_pfn_dirty() to asm-arm/paging.h
   - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
   - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
   - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
   - use gdprintk in try_fwd_ioserv(), remove unneeded prints
   - update list of #include-s
   - move has_vpci() to asm-arm/domain.h
   - add a comment (TODO) to unimplemented yet handle_pio()
   - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
     from the arch files, they were already moved to the common code
   - remove set_foreign_p2m_entry() changes, they will be properly implemented
     in the follow-up patch
   - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
   - remove x86's realmode and other unneeded stubs from xen/ioreq.h
   - clarify ioreq_t p.df usage in try_fwd_ioserv()
   - set ioreq_t p.count to 1 in try_fwd_ioserv()

Changes V1 -> V2:
   - was split into:
     - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
     - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
   - update the author of a patch
   - update patch description
   - move a loop in leave_hypervisor_to_guest() to a separate patch
   - set IOREQ_SERVER disabled by default
   - remove already clarified /* XXX */
   - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
   - remove default case for handling the return value of try_handle_mmio()
   - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
     struct hvm_vcpu from asm-arm/domain.h, these are common materials now
   - update everything according to the recent changes (IOREQ related function
     names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields
     are part of common struct vcpu/domain now, etc)

Changes V2 -> V3:
   - update patch according to the "legacy interface" being x86 specific
   - add dummy arch hooks
   - remove dummy paging_mark_pfn_dirty()
   - don’t include <xen/domain_page.h> in common ioreq.c
   - don’t include <public/hvm/ioreq.h> in arch ioreq.h
   - remove #define ioreq_params(d, i)
---
 xen/arch/arm/Makefile           |   2 +
 xen/arch/arm/dm.c               |  34 ++++++++++
 xen/arch/arm/domain.c           |   9 +++
 xen/arch/arm/io.c               |  11 +++-
 xen/arch/arm/ioreq.c            | 141 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c            |  13 ++++
 xen/include/asm-arm/domain.h    |   3 +
 xen/include/asm-arm/hvm/ioreq.h | 139 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/mmio.h      |   1 +
 9 files changed, 352 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e6..c3ff454 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -13,6 +13,7 @@ obj-y += cpuerrata.o
 obj-y += cpufeature.o
 obj-y += decode.o
 obj-y += device.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
@@ -27,6 +28,7 @@ obj-y += guest_atomics.o
 obj-y += guest_walk.o
 obj-y += hvm.o
 obj-y += io.o
+obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.init.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
new file mode 100644
index 0000000..5d3da37
--- /dev/null
+++ b/xen/arch/arm/dm.c
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/dm.h>
+#include <xen/hypercall.h>
+
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 18cafcd..8f55aba 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -15,6 +15,7 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
+#include <xen/ioreq.h>
 #include <xen/lib.h>
 #include <xen/livepatch.h>
 #include <xen/sched.h>
@@ -696,6 +697,10 @@ int arch_domain_create(struct domain *d,
 
     ASSERT(config != NULL);
 
+#ifdef CONFIG_IOREQ_SERVER
+    ioreq_domain_init(d);
+#endif
+
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
     if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
         goto fail;
@@ -1014,6 +1019,10 @@ int domain_relinquish_resources(struct domain *d)
         if (ret )
             return ret;
 
+#ifdef CONFIG_IOREQ_SERVER
+        ioreq_server_destroy_all(d);
+#endif
+
     PROGRESS(xen):
         ret = relinquish_memory(d, &d->xenpage_list);
         if ( ret )
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index ae7ef96..f44cfd4 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include <asm/cpuerrata.h>
 #include <asm/current.h>
 #include <asm/mmio.h>
+#include <asm/hvm/ioreq.h>
 
 #include "decode.h"
 
@@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
 
     handler = find_mmio_handler(v->domain, info.gpa);
     if ( !handler )
-        return IO_UNHANDLED;
+    {
+        int rc;
+
+        rc = try_fwd_ioserv(regs, v, &info);
+        if ( rc == IO_HANDLED )
+            return handle_ioserv(regs, v);
+
+        return rc;
+    }
 
     /* All the instructions used on emulated MMIO region should be valid */
     if ( !dabt.valid )
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
new file mode 100644
index 0000000..f08190c
--- /dev/null
+++ b/xen/arch/arm/ioreq.c
@@ -0,0 +1,141 @@
+/*
+ * arm/ioreq.c: hardware virtual machine I/O emulation
+ *
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/domain.h>
+#include <xen/ioreq.h>
+
+#include <asm/traps.h>
+
+#include <public/hvm/ioreq.h>
+
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
+{
+    const union hsr hsr = { .bits = regs->hsr };
+    const struct hsr_dabt dabt = hsr.dabt;
+    /* Code is similar to handle_read */
+    uint8_t size = (1 << dabt.size) * 8;
+    register_t r = v->io.req.data;
+
+    /* We are done with the IO */
+    v->io.req.state = STATE_IOREQ_NONE;
+
+    if ( dabt.write )
+        return IO_HANDLED;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    set_user_reg(regs, dabt.reg, r);
+
+    return IO_HANDLED;
+}
+
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info)
+{
+    struct vcpu_io *vio = &v->io;
+    ioreq_t p = {
+        .type = IOREQ_TYPE_COPY,
+        .addr = info->gpa,
+        .size = 1 << info->dabt.size,
+        .count = 1,
+        .dir = !info->dabt.write,
+        /*
+         * On x86, df is used by 'rep' instruction to tell the direction
+         * to iterate (forward or backward).
+         * On Arm, all the accesses to MMIO region will do a single
+         * memory access. So for now, we can safely always set to 0.
+         */
+        .df = 0,
+        .data = get_user_reg(regs, info->dabt.reg),
+        .state = STATE_IOREQ_READY,
+    };
+    struct ioreq_server *s = NULL;
+    enum io_state rc;
+
+    switch ( vio->req.state )
+    {
+    case STATE_IOREQ_NONE:
+        break;
+
+    case STATE_IORESP_READY:
+        return IO_HANDLED;
+
+    default:
+        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->req.state);
+        return IO_ABORT;
+    }
+
+    s = ioreq_server_select(v->domain, &p);
+    if ( !s )
+        return IO_UNHANDLED;
+
+    if ( !info->dabt.valid )
+        return IO_ABORT;
+
+    vio->req = p;
+
+    rc = ioreq_send(s, &p, 0);
+    if ( rc != IO_RETRY || v->domain->is_shutting_down )
+        vio->req.state = STATE_IOREQ_NONE;
+    else if ( !ioreq_needs_completion(&vio->req) )
+        rc = IO_HANDLED;
+    else
+        vio->completion = IO_mmio_completion;
+
+    return rc;
+}
+
+bool ioreq_complete_mmio(void)
+{
+    struct vcpu *v = current;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const union hsr hsr = { .bits = regs->hsr };
+    paddr_t addr = v->io.req.addr;
+
+    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
+    {
+        advance_pc(regs, hsr);
+        return true;
+    }
+
+    return false;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd..036b13f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/irq.h>
 #include <xen/lib.h>
 #include <xen/mem_access.h>
@@ -1385,6 +1386,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_HYPFS
     HYPERCALL(hypfs_op, 5),
 #endif
+#ifdef CONFIG_IOREQ_SERVER
+    HYPERCALL(dm_op, 3),
+#endif
 };
 
 #ifndef NDEBUG
@@ -1956,6 +1960,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
             case IO_HANDLED:
                 advance_pc(regs, hsr);
                 return;
+            case IO_RETRY:
+                /* finish later */
+                return;
             case IO_UNHANDLED:
                 /* IO unhandled, try another way to handle it. */
                 break;
@@ -2254,6 +2261,12 @@ static void check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    local_irq_enable();
+    vcpu_ioreq_handle_completion(v);
+    local_irq_disable();
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
         return;
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6819a3b..c235e5b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -10,6 +10,7 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 #include <asm/vpl011.h>
+#include <public/hvm/dm_op.h>
 #include <public/hvm/params.h>
 
 struct hvm_domain
@@ -262,6 +263,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
 
 #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
 
+#define has_vpci(d)    ({ (void)(d); false; })
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
new file mode 100644
index 0000000..2bffc7a
--- /dev/null
+++ b/xen/include/asm-arm/hvm/ioreq.h
@@ -0,0 +1,139 @@
+/*
+ * hvm.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_HVM_IOREQ_H__
+#define __ASM_ARM_HVM_IOREQ_H__
+
+#include <xen/ioreq.h>
+
+#ifdef CONFIG_IOREQ_SERVER
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info);
+#else
+static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
+                                          struct vcpu *v)
+{
+    return IO_UNHANDLED;
+}
+
+static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                                           struct vcpu *v, mmio_info_t *info)
+{
+    return IO_UNHANDLED;
+}
+#endif
+
+bool ioreq_complete_mmio(void);
+
+static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
+{
+    /*
+     * TODO: For Arm64, the main user will be PCI. So this should be
+     * implemented when we add support for vPCI.
+     */
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+static inline void msix_write_completion(struct vcpu *v)
+{
+}
+
+static inline bool arch_vcpu_ioreq_completion(enum io_completion io_completion)
+{
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+/*
+ * The "legacy" mechanism of mapping magic pages for the IOREQ servers
+ * is x86 specific, so the following hooks don't need to be implemented on Arm:
+ * - arch_ioreq_server_map_pages
+ * - arch_ioreq_server_unmap_pages
+ * - arch_ioreq_server_enable
+ * - arch_ioreq_server_disable
+ */
+static inline int arch_ioreq_server_map_pages(struct ioreq_server *s)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
+{
+}
+
+static inline void arch_ioreq_server_enable(struct ioreq_server *s)
+{
+}
+
+static inline void arch_ioreq_server_disable(struct ioreq_server *s)
+{
+}
+
+static inline void arch_ioreq_server_destroy(struct ioreq_server *s)
+{
+}
+
+static inline int arch_ioreq_server_map_mem_type(struct domain *d,
+                                                 struct ioreq_server *s,
+                                                 uint32_t flags)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline bool arch_ioreq_server_destroy_all(struct domain *d)
+{
+    return true;
+}
+
+static inline int arch_ioreq_server_get_type_addr(const struct domain *d,
+                                                  const ioreq_t *p,
+                                                  uint8_t *type,
+                                                  uint64_t *addr)
+{
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
+
+    *type = (p->type == IOREQ_TYPE_PIO) ?
+             XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+    *addr = p->addr;
+
+    return 0;
+}
+
+static inline void arch_ioreq_domain_init(struct domain *d)
+{
+}
+
+#define IOREQ_STATUS_HANDLED     IO_HANDLED
+#define IOREQ_STATUS_UNHANDLED   IO_UNHANDLED
+#define IOREQ_STATUS_RETRY       IO_RETRY
+
+#endif /* __ASM_ARM_HVM_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index 8dbfb27..7ab873c 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -37,6 +37,7 @@ enum io_state
     IO_ABORT,       /* The IO was handled by the helper and led to an abort. */
     IO_HANDLED,     /* The IO was successfully handled by the helper. */
     IO_UNHANDLED,   /* The IO was not handled by the helper. */
+    IO_RETRY,       /* Retry the emulation for some reason */
 };
 
 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40935.74030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfZ-0003uf-Ta; Mon, 30 Nov 2020 10:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40935.74030; Mon, 30 Nov 2020 10:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfZ-0003u5-Ec; Mon, 30 Nov 2020 10:44:33 +0000
Received: by outflank-mailman (input) for mailman id 40935;
 Mon, 30 Nov 2020 10:44:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUi-0000Uu-Dz
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:20 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ae8ea99-6cfd-4cdd-ac5d-3e112cc031bc;
 Mon, 30 Nov 2020 10:32:16 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id o24so17027702ljj.6
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:16 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.14
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:14 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ae8ea99-6cfd-4cdd-ac5d-3e112cc031bc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=OKmGkMX/jIOJO9o6+1yYd5b6GcAGRnA31NMN1x/OSi4=;
        b=XKXYxs6V0uUIrdlAspHzTQGR3pmU8Ss17YPN4o2F4++GT9IJYtjEF760ralpDNSCm5
         4V7W5UxP5PODamVWYK7FGDpE8gP5WcHwG9Q+zN6aOpbB7dp8/7B1M01IZCecvacPcWi/
         5tvgk2Lzp77zp57KvWRUXIGDu63XI0PiHQNWPdXeEH941F6tCF1zHDA+1+XZ6+fojrnz
         Cw4/kLix3wC4JtpSxuNGTNdEDkEKTvgyxNslcrWl0LH/SgAoDiw2Izv/kCtyGbSowy6Q
         pqzL1+HxTd/UtXhAhz3mks5CB9/io+fT5JiWOwBPHVVfCoZvcq0p5E+IVR1CUh4lj+NL
         G7PQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=OKmGkMX/jIOJO9o6+1yYd5b6GcAGRnA31NMN1x/OSi4=;
        b=KIZtWPaJjKRgs3GFYOQk3B+S9A7P93bpnguTd57DLbT+JLQoOGxq8z8S5+g5d0tHu2
         2VPBCdimWfPImsOH57zFbJLrd+F4F1z06rX2VfQSxF8JbBhIkducHnrisNnWNGPu88D5
         gLRVNYVqtkJDENceFiZ0gJVVFmoWX4lpKyYwg+PGddIOV4bjkfwAkeYySwkM9bEMwZ0l
         nXoUqw49lHzjqSlN6F5Gm0DlolKSdTiK0oZi0/5CEfI1XaIXQohjtarfWRorrNAx5wLw
         ESmXph8Z7H9bVRCMSWR/ff38Jr9pU4sb6BoySD7n8Bzw6/3sjH4TCaSic9GhVeCoEtvM
         uEfg==
X-Gm-Message-State: AOAM532XwRViCvPuVaJHo7q7lfvN1VbUt/84sLfkdoV72JN4gfEhFmmx
	2iT9rWNCfRg6bPomzeXDbzz6tNaAkqDeUg==
X-Google-Smtp-Source: ABdhPJyMo+o0k5ozOLvlit07DgGXSuYm6EYjFCQns/RTQUDWl7b3OnGNV5QbcQGsq44W8TxhrbtePg==
X-Received: by 2002:a2e:2286:: with SMTP id i128mr9141277lji.396.1606732335210;
        Mon, 30 Nov 2020 02:32:15 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V3 16/23] xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
Date: Mon, 30 Nov 2020 12:31:31 +0200
Message-Id: <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch implements reference counting of foreign entries in
set_foreign_p2m_entry() on Arm. This is a mandatory action if
we want to run an emulator (IOREQ server) in a domain other than
dom0, as we can't trust it to do the right thing if it is not
running in dom0. So we need to grab a reference on the page to
prevent it from disappearing.

It is valid to always pass "p2m_map_foreign_rw" type to
guest_physmap_add_entry() since the current and foreign domains
will always be different. A case when they are equal would be
rejected by rcu_lock_remote_domain_by_id(). Besides a comment to
that effect in the code, add a respective ASSERT() to catch
incorrect usage in the future.

It was tested with the IOREQ feature to confirm that all the pages
given to this function belong to a domain, so we can use the same
approach as for the XENMAPSPACE_gmfn_foreign handling in
xenmem_add_to_physmap_one().

This involves adding an extra parameter for the foreign domain to
set_foreign_p2m_entry() and a helper to indicate whether the arch
supports reference counting of foreign entries, in which case the
restriction to the hardware domain in the common code can be
skipped.
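
The pattern described above can be sketched in plain C: take a reference
on the foreign page before inserting it into the P2M, and drop it again
on the failure path so no reference leaks. This is a toy model with
illustrative names (struct page, get_page, put_page, set_foreign_entry),
not the real Xen API or types.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for Xen's struct page_info and its reference count. */
struct page { int refcount; };

static bool get_page(struct page *pg) { pg->refcount++; return true; }
static void put_page(struct page *pg) { pg->refcount--; }

/*
 * Sketch of the refcounting pattern in the patch: grab a reference,
 * attempt the (stubbed) P2M insertion, and release the reference if
 * the insertion fails.  map_ok selects the stub's outcome.
 */
static int set_foreign_entry(struct page *pg, bool map_ok)
{
    if ( !get_page(pg) )
        return -1;

    if ( !map_ok )
    {
        put_page(pg);   /* failure path: the reference must not leak */
        return -1;
    }

    return 0;           /* success: reference held until the unmap */
}
```

On success the reference pins the page for the lifetime of the mapping;
the matching put_page() then happens when the entry is removed from the
P2M, which is what makes it safe for a non-dom0 emulator domain.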

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features"
   - rewrite a logic to handle properly reference in set_foreign_p2m_entry()
     instead of treating foreign entries as p2m_ram_rw

Changes V1 -> V2:
   - rebase according to the recent changes to acquire_resource()
   - update patch description
   - introduce arch_refcounts_p2m()
   - add an explanation why p2m_map_foreign_rw is valid
   - move set_foreign_p2m_entry() to p2m-common.h
   - add const to new parameter

Changes V2 -> V3:
   - update patch description
   - rename arch_refcounts_p2m() to arch_acquire_resource_check()
   - move comment to x86’s arch_acquire_resource_check()
   - return rc in Arm's set_foreign_p2m_entry()
   - put a respective ASSERT() into Arm's set_foreign_p2m_entry()
---
---
 xen/arch/arm/p2m.c           | 24 ++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c        |  5 +++--
 xen/common/memory.c          | 10 +++-------
 xen/include/asm-arm/p2m.h    | 19 +++++++++----------
 xen/include/asm-x86/p2m.h    | 16 +++++++++++++---
 xen/include/xen/p2m-common.h |  4 ++++
 6 files changed, 56 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4eeb867..5b8d494 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1380,6 +1380,30 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
     return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    struct page_info *page = mfn_to_page(mfn);
+    int rc;
+
+    if ( !get_page(page, fd) )
+        return -EINVAL;
+
+    /*
+     * It is valid to always use p2m_map_foreign_rw here as if this gets
+     * called then d != fd. A case when d == fd would be rejected by
+     * rcu_lock_remote_domain_by_id() earlier. Put a respective ASSERT()
+     * to catch incorrect usage in future.
+     */
+    ASSERT(d != fd);
+
+    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
+    if ( rc )
+        put_page(page);
+
+    return rc;
+}
+
 static struct page_info *p2m_allocate_root(void)
 {
     struct page_info *page;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7a2ba82..4772c86 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1321,7 +1321,8 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
 }
 
 /* Set foreign mfn in the given guest's p2m table. */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
 {
     return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
                                p2m_get_hostp2m(d)->default_access);
@@ -2621,7 +2622,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * will update the m2p table which will result in  mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
-    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
+    rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn);
     if ( rc )
         gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
                  "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 3363c06..49e3001 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1134,12 +1134,8 @@ static int acquire_resource(
     xen_pfn_t mfn_list[32];
     int rc;
 
-    /*
-     * FIXME: Until foreign pages inserted into the P2M are properly
-     *        reference counted, it is unsafe to allow mapping of
-     *        resource pages unless the caller is the hardware domain.
-     */
-    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
+    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
+         !arch_acquire_resource_check() )
         return -EACCES;
 
     if ( copy_from_guest(&xmar, arg, 1) )
@@ -1207,7 +1203,7 @@ static int acquire_resource(
 
         for ( i = 0; !rc && i < xmar.nr_frames; i++ )
         {
-            rc = set_foreign_p2m_entry(currd, gfn_list[i],
+            rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
                                        _mfn(mfn_list[i]));
             /* rc should be -EIO for any iteration other than the first */
             if ( rc && i )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 28ca9a8..4f8056e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -161,6 +161,15 @@ typedef enum {
 #endif
 #include <xen/p2m-common.h>
 
+static inline bool arch_acquire_resource_check(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is supported on Arm.
+     */
+    return true;
+}
+
 static inline
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
@@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }
 
-static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
-                                        mfn_t mfn)
-{
-    /*
-     * NOTE: If this is implemented then proper reference counting of
-     *       foreign entries will need to be implemented.
-     */
-    return -EOPNOTSUPP;
-}
-
 /*
  * A vCPU has cache enabled only when the MMU is enabled and data cache
  * is enabled.
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 4603560..8d2dc22 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -382,6 +382,19 @@ struct p2m_domain {
 #endif
 #include <xen/p2m-common.h>
 
+static inline bool arch_acquire_resource_check(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is not supported on x86.
+     *
+     * FIXME: Until foreign pages inserted into the P2M are properly
+     * reference counted, it is unsafe to allow mapping of
+     * resource pages unless the caller is the hardware domain.
+     */
+    return false;
+}
+
 /*
  * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
  */
@@ -647,9 +660,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..b4bc709 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@
 
 #include <xen/mm.h>
 
+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40934.74024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfZ-0003sz-CS; Mon, 30 Nov 2020 10:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40934.74024; Mon, 30 Nov 2020 10:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfY-0003sT-Od; Mon, 30 Nov 2020 10:44:32 +0000
Received: by outflank-mailman (input) for mailman id 40934;
 Mon, 30 Nov 2020 10:44:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgUE-0000Uu-DG
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:32:50 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a62e5870-ab8e-4ea1-91ea-27ad55cb95e1;
 Mon, 30 Nov 2020 10:32:09 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l11so20618759lfg.0
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:09 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:08 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a62e5870-ab8e-4ea1-91ea-27ad55cb95e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=BmFrzwS/kmGPL3jpIw+pFO9nB5RgVKn8PxqvE5ux3HE=;
        b=sg92dzWNiK+/Ro+9oqGcTaWQJO27limwAQq3tw95xlyV0n3Cq8uhjCySRcCye5bv8r
         sXa6pPo9nKyspMB27GhibfSj0LhXUZ+CK1bJ1ZWaaoQGvlWfvynoBb9pGQYwrdq9LAFt
         uZad3CAj73YWgfZE6JdyZ5XHdZ4/XCDSZnpkuo4noBeDWx2ErCT9Zpz2ochBGza7DCFJ
         CwDoh5r2ibnfEnBV+geiur72JAsfyoA1+J38wgNZb9zSHbtZa1ms+7NsS0Fs1mFk4uAY
         VJaqlpsdO/O/iRtbH+yQtafO0nhvn3Kybl4vSGfpxi6Q1OEKr35WvIeWeGIg7pRtSs57
         fF6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=BmFrzwS/kmGPL3jpIw+pFO9nB5RgVKn8PxqvE5ux3HE=;
        b=GhAR1oPzEIpN/sHYVIQGacEyByMgtskvEvwtMSN4aoFLkIqiCyD38FQUMEGnI7Ysgv
         facCt4KBZcpx88uyZ1efNqw+hpis4PYHwVl/3U0IFZ0O+I6uWEAaTbEtqhbP4EM3p0Sy
         VSZKXh6npFFyazbph2HpnJAVM73DAoIqoQ5K/d9TdryyllWfNRo+W+ieQoUFlmkVEkCM
         Qh6Rgyf7gEhXODkxLHval3cItRVL04nFWNZpjXAA0UX+ok9BeRwTE75ew/jpzF9Eb6F9
         1TNHYcdylAfaSBfqyuq9FTdQ6syfau1yKCkqbaHRofMuqwrneaMBEweq0RXISwZT7+Y7
         J1ig==
X-Gm-Message-State: AOAM533z7Gk5dPDdX92JZDAiECjbix0y9DofDXeCLL+PZcInkrDw9UL0
	XiAQdi0pVH5qDHYCrY8OoZ1f1gwQokUbbA==
X-Google-Smtp-Source: ABdhPJywPAlsZY/gRxAwj8tc1Y63uc2g2x+0dSFYJS21YrDzj2f6MLz0fCbuKRgQxwxerAU2m2BN0A==
X-Received: by 2002:a19:418d:: with SMTP id o135mr9066537lfa.329.1606732328561;
        Mon, 30 Nov 2020 02:32:08 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
Date: Mon, 30 Nov 2020 12:31:25 +0200
Message-Id: <1606732298-22107-11-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

As the x86 implementation of XENMEM_resource_ioreq_server can be
re-used on Arm later on, this patch makes it common and removes
arch_acquire_resource() as unneeded.

Also re-order #include-s alphabetically.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.
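
The restructuring can be sketched as follows: instead of a per-arch
arch_acquire_resource() switch, the common dispatcher handles each
resource type via a dedicated helper, with the IOREQ-server helper
compiled out to -EOPNOTSUPP when the feature is not configured. All
names and constants here are simplified stand-ins, not the real Xen
definitions.

```c
#include <assert.h>

enum { RES_GRANT_TABLE = 0, RES_IOREQ_SERVER = 1 };

#define STUB_EOPNOTSUPP (-95)

static int acquire_grant_table_stub(void)
{
    return 0;                    /* always available in common code */
}

static int acquire_ioreq_server_stub(void)
{
#ifdef CONFIG_IOREQ_SERVER_STUB
    return 0;                    /* common implementation, if enabled */
#else
    return STUB_EOPNOTSUPP;      /* feature not configured */
#endif
}

/*
 * Common dispatcher replacing the per-arch switch: each known
 * resource type gets its own helper; unknown types fail uniformly.
 */
static int acquire_resource_stub(unsigned int type)
{
    switch ( type )
    {
    case RES_GRANT_TABLE:
        return acquire_grant_table_stub();

    case RES_IOREQ_SERVER:
        return acquire_ioreq_server_stub();

    default:
        return STUB_EOPNOTSUPP;
    }
}
```

The design choice mirrors the patch: once the ioreq-server case lives
in the common switch, the arch hook no longer has any caller and can
be deleted on both x86 and Arm.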

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - don't wrap #include <xen/ioreq.h>
   - limit the number of #ifdef-s
   - re-order #include-s alphabetically
---
---
 xen/arch/x86/mm.c        | 44 ---------------------------------
 xen/common/memory.c      | 63 +++++++++++++++++++++++++++++++++++++++---------
 xen/include/asm-arm/mm.h |  8 ------
 xen/include/asm-x86/mm.h |  4 ---
 4 files changed, 51 insertions(+), 68 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e4638ef..c0a7124 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4699,50 +4699,6 @@ int xenmem_add_to_physmap_one(
     return rc;
 }
 
-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[])
-{
-    int rc;
-
-    switch ( type )
-    {
-#ifdef CONFIG_HVM
-    case XENMEM_resource_ioreq_server:
-    {
-        ioservid_t ioservid = id;
-        unsigned int i;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            break;
-
-        if ( id != (unsigned int)ioservid )
-            break;
-
-        rc = 0;
-        for ( i = 0; i < nr_frames; i++ )
-        {
-            mfn_t mfn;
-
-            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
-            if ( rc )
-                break;
-
-            mfn_list[i] = mfn_x(mfn);
-        }
-        break;
-    }
-#endif
-
-    default:
-        rc = -EOPNOTSUPP;
-        break;
-    }
-
-    return rc;
-}
-
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 2c86934..92cf983 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -8,22 +8,23 @@
  */
 
 #include <xen/domain_page.h>
-#include <xen/types.h>
+#include <xen/errno.h>
+#include <xen/event.h>
+#include <xen/grant_table.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/lib.h>
+#include <xen/mem_access.h>
 #include <xen/mm.h>
+#include <xen/numa.h>
+#include <xen/paging.h>
 #include <xen/param.h>
 #include <xen/perfc.h>
 #include <xen/sched.h>
-#include <xen/event.h>
-#include <xen/paging.h>
-#include <xen/iocap.h>
-#include <xen/guest_access.h>
-#include <xen/hypercall.h>
-#include <xen/errno.h>
-#include <xen/numa.h>
-#include <xen/mem_access.h>
 #include <xen/trace.h>
-#include <xen/grant_table.h>
+#include <xen/types.h>
 #include <asm/current.h>
 #include <asm/hardirq.h>
 #include <asm/p2m.h>
@@ -1086,6 +1087,40 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
     return 0;
 }
 
+static int acquire_ioreq_server(struct domain *d,
+                                unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                xen_pfn_t mfn_list[])
+{
+#ifdef CONFIG_IOREQ_SERVER
+    ioservid_t ioservid = id;
+    unsigned int i;
+    int rc;
+
+    if ( !is_hvm_domain(d) )
+        return -EINVAL;
+
+    if ( id != (unsigned int)ioservid )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+
+        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+#else
+    return -EOPNOTSUPP;
+#endif
+}
+
 static int acquire_resource(
     XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
 {
@@ -1144,9 +1179,13 @@ static int acquire_resource(
                                  mfn_list);
         break;
 
+    case XENMEM_resource_ioreq_server:
+        rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                  mfn_list);
+        break;
+
     default:
-        rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
-                                   xmar.nr_frames, mfn_list);
+        rc = -EOPNOTSUPP;
         break;
     }
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index f8ba49b..0b7de31 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info *page)
 
 void clear_and_clean_page(struct page_info *page);
 
-static inline
-int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
-                          unsigned long frame, unsigned int nr_frames,
-                          xen_pfn_t mfn_list[])
-{
-    return -EOPNOTSUPP;
-}
-
 unsigned int arch_get_dma_bitsize(void);
 
 #endif /*  __ARCH_ARM_MM__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index deeba75..859214e 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -639,8 +639,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[]);
-
 #endif /* __ASM_X86_MM_H__ */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:44:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40937.74054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfc-00041g-5F; Mon, 30 Nov 2020 10:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40937.74054; Mon, 30 Nov 2020 10:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgfc-00041H-0M; Mon, 30 Nov 2020 10:44:36 +0000
Received: by outflank-mailman (input) for mailman id 40937;
 Mon, 30 Nov 2020 10:44:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjgVH-0000Uu-F2
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:33:55 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f507fd0b-22b9-4436-ae24-5a8c01053eb2;
 Mon, 30 Nov 2020 10:32:27 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id s27so20591630lfp.5
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:32:23 -0800 (PST)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id 136sm2399393lfb.62.2020.11.30.02.32.21
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:32:21 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f507fd0b-22b9-4436-ae24-5a8c01053eb2
X-Received: by 2002:a19:be4a:: with SMTP id o71mr8088918lff.494.1606732342315;
        Mon, 30 Nov 2020 02:32:22 -0800 (PST)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH V3 23/23] [RFC] libxl: Add support for virtio-disk configuration
Date: Mon, 30 Nov 2020 12:31:38 +0200
Message-Id: <1606732298-22107-24-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds basic support for configuring and assisting a virtio-disk
backend (emulator) which is intended to run outside of QEMU and can be run
in any domain.

Xenstore was chosen as the communication interface so that an emulator
running in a non-toolstack domain can obtain its configuration either by
reading Xenstore directly or by receiving command-line parameters (an
updated 'xl devd' running in the same domain would read Xenstore beforehand
and invoke the backend executable with the required arguments).

An example domain configuration (two disks are assigned to the guest, the
second of which is in read-only mode):

vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

Where per-disk Xenstore entries are:
- filename and readonly flag (configured via "vdisk" property)
- base and irq (allocated dynamically)
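For reference, the set_xenstore handler in this patch publishes these
per-disk keys indexed under the device's read-only frontend directory. An
illustrative layout for the example above (the path prefix shown is only an
assumption about the usual libxl frontend path; the base/irq values are
whatever was allocated at domain creation):

```
.../device/virtio_disk/<devid>/
    0/filename = "/dev/mmcblk0p3"
    0/readonly = "0"
    0/base     = <allocated base>
    0/irq      = <allocated irq>
    1/filename = "/dev/mmcblk1p3"
    1/readonly = "1"
    1/base     = <allocated base>
    1/irq      = <allocated irq>
```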

Besides handling the 'visible' parameters described in the configuration
file, the patch also allocates virtio-mmio specific ones for each device and
writes them into Xenstore. The virtio-mmio parameters (irq and base) are
unique per guest domain; they are allocated at domain creation time and
passed through to the emulator. Each VirtIO device has at least one pair of
these parameters.

TODO:
1. An extra "virtio" property could be removed.
2. Update documentation.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Please note, there is a real concern about VirtIO interrupts allocation.
Copying here what Stefano said in the RFC thread.

So, if we end up allocating let's say 6 virtio interrupts for a domain,
the chance of a clash with a physical interrupt of a passthrough device is real.

I am not entirely sure how to solve it, but these are a few ideas:
- choosing virtio interrupts that are less likely to conflict (maybe > 1000)
- make the virtio irq (optionally) configurable so that a user could
  override the default irq and specify one that doesn't conflict
- implementing support for virq != pirq (even the xl interface doesn't
  allow to specify the virq number for passthrough devices, see "irqs")

Also there is one suggestion from Wei Chen regarding a parameter for the
domain config file which I haven't addressed yet.
[Copying here what Wei said in the V2 thread]
Can we keep using the same 'disk' parameter for virtio-disk, but add an option like
"model=virtio-disk"?
For example:
disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ]
Just like what Xen has done for x86 virtio-net.

---
---
 tools/libs/light/Makefile                 |   1 +
 tools/libs/light/libxl_arm.c              |  56 ++++++++++++---
 tools/libs/light/libxl_create.c           |   1 +
 tools/libs/light/libxl_internal.h         |   1 +
 tools/libs/light/libxl_types.idl          |  15 ++++
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_virtio_disk.c      | 109 ++++++++++++++++++++++++++++
 tools/xl/Makefile                         |   2 +-
 tools/xl/xl.h                             |   3 +
 tools/xl/xl_cmdtable.c                    |  15 ++++
 tools/xl/xl_parse.c                       | 115 ++++++++++++++++++++++++++++++
 tools/xl/xl_virtio_disk.c                 |  46 ++++++++++++
 12 files changed, 354 insertions(+), 11 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 68f6fa3..ccc91b9 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -115,6 +115,7 @@ SRCS-y += libxl_genid.c
 SRCS-y += _libxl_types.c
 SRCS-y += libxl_flask.c
 SRCS-y += _libxl_types_internal.c
+SRCS-y += libxl_virtio_disk.c
 
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 588ee5a..9eb3022 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,12 @@
 #include <assert.h>
 #include <xen/device_tree_defs.h>
 
+#ifndef container_of
+#define container_of(ptr, type, member) ({			\
+        typeof( ((type *)0)->member ) *__mptr = (ptr);	\
+        (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -39,14 +45,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
-    /*
-     * XXX: Handle properly virtio
-     * A proper solution would be the toolstack to allocate the interrupts
-     * used by each virtio backend and let the backend now which one is used
-     */
     if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
-        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        uint64_t virtio_base;
+        libxl_device_virtio_disk *virtio_disk;
+
+        virtio_base = GUEST_VIRTIO_MMIO_BASE;
         virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+
+        if (!d_config->num_virtio_disks) {
+            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
+            return ERROR_FAIL;
+        }
+        virtio_disk = &d_config->virtio_disks[0];
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            virtio_disk->disks[i].base = virtio_base;
+            virtio_disk->disks[i].irq = virtio_irq;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
+                virtio_irq, virtio_base);
+
+            virtio_irq++;
+            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
+        }
+        virtio_irq--;
+
+        nr_spis += (virtio_irq - 32) + 1;
         virtio_enabled = true;
     }
 
@@ -70,8 +94,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         }
 
         /* The same check as for vpl011 */
-        if (virtio_enabled && irq == virtio_irq) {
-            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+        if (virtio_enabled &&
+           (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq);
             return ERROR_FAIL;
         }
 
@@ -1011,8 +1036,19 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
-        if (libxl_defbool_val(info->arch_arm.virtio))
-            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+        if (libxl_defbool_val(info->arch_arm.virtio)) {
+            libxl_domain_config *d_config =
+                container_of(info, libxl_domain_config, b_info);
+            libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0];
+            unsigned int i;
+
+            for (i = 0; i < virtio_disk->num_disks; i++) {
+                uint64_t base = virtio_disk->disks[i].base;
+                uint32_t irq = virtio_disk->disks[i].irq;
+
+                FDT( make_virtio_mmio_node(gc, fdt, base, irq) );
+            }
+        }
 
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e..8da328d 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1821,6 +1821,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
+    &libxl__virtio_disk_devtype,
     NULL
 };
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9..ea497bb 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4000,6 +4000,7 @@ extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
 extern const libxl__device_type libxl__vsnd_devtype;
+extern const libxl__device_type libxl__virtio_disk_devtype;
 
 extern const libxl__device_type *device_type_tbl[];
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index b054bf9..5f8a3ff 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -935,6 +935,20 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])
 
+libxl_virtio_disk_param = Struct("virtio_disk_param", [
+    ("filename", string),
+    ("readonly", bool),
+    ("irq", uint32),
+    ("base", uint64),
+    ])
+
+libxl_device_virtio_disk = Struct("device_virtio_disk", [
+    ("backend_domid", libxl_domid),
+    ("backend_domname", string),
+    ("devid", libxl_devid),
+    ("disks", Array(libxl_virtio_disk_param, "num_disks")),
+    ])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -951,6 +965,7 @@ libxl_domain_config = Struct("domain_config", [
     ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")),
     ("vdispls", Array(libxl_device_vdispl, "num_vdispls")),
     ("vsnds", Array(libxl_device_vsnd, "num_vsnds")),
+    ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")),
     # a channel manifests as a console with a name,
     # see docs/misc/channels.txt
     ("channels", Array(libxl_device_channel, "num_channels")),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_virtio_disk.c b/tools/libs/light/libxl_virtio_disk.c
new file mode 100644
index 0000000..25e7f1a
--- /dev/null
+++ b/tools/libs/light/libxl_virtio_disk.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_internal.h"
+
+static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid,
+                                                libxl_device_virtio_disk *virtio_disk,
+                                                bool hotplug)
+{
+    return libxl__resolve_domid(gc, virtio_disk->backend_domname,
+                                &virtio_disk->backend_domid);
+}
+
+static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
+                                            libxl_devid devid,
+                                            libxl_device_virtio_disk *virtio_disk)
+{
+    const char *be_path;
+    int rc;
+
+    virtio_disk->devid = devid;
+    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
+                                  GCSPRINTF("%s/backend", libxl_path),
+                                  &be_path);
+    if (rc) return rc;
+
+    rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk->backend_domid);
+    if (rc) return rc;
+
+    return 0;
+}
+
+static void libxl__update_config_virtio_disk(libxl__gc *gc,
+                                             libxl_device_virtio_disk *dst,
+                                             libxl_device_virtio_disk *src)
+{
+    dst->devid = src->devid;
+}
+
+static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1,
+                                            libxl_device_virtio_disk *d2)
+{
+    return COMPARE_DEVID(d1, d2);
+}
+
+static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid,
+                                          libxl_device_virtio_disk *virtio_disk,
+                                          libxl__ao_device *aodev)
+{
+    libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk, aodev);
+}
+
+static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid,
+                                           libxl_device_virtio_disk *virtio_disk,
+                                           flexarray_t *back, flexarray_t *front,
+                                           flexarray_t *ro_front)
+{
+    int rc;
+    unsigned int i;
+
+    for (i = 0; i < virtio_disk->num_disks; i++) {
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i),
+                                   GCSPRINTF("%s", virtio_disk->disks[i].filename));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i),
+                                   GCSPRINTF("%d", virtio_disk->disks[i].readonly));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i),
+                                   GCSPRINTF("%"PRIu64, virtio_disk->disks[i].base));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i),
+                                   GCSPRINTF("%u", virtio_disk->disks[i].irq));
+        if (rc) return rc;
+    }
+
+    return 0;
+}
+
+static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk)
+static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk)
+static LIBXL_DEFINE_DEVICES_ADD(virtio_disk)
+
+DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK,
+    .update_config = (device_update_config_fn_t) libxl__update_config_virtio_disk,
+    .from_xenstore = (device_from_xenstore_fn_t) libxl__virtio_disk_from_xenstore,
+    .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio_disk
+);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index bdf67c8..9d8f2aa 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -23,7 +23,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
 XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
 XL_OBJS += xl_info.o xl_console.o xl_misc.o
 XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o
-XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o
+XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o xl_virtio_disk.o
 
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6..3d26f19 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -178,6 +178,9 @@ int main_vsnddetach(int argc, char **argv);
 int main_vkbattach(int argc, char **argv);
 int main_vkblist(int argc, char **argv);
 int main_vkbdetach(int argc, char **argv);
+int main_virtio_diskattach(int argc, char **argv);
+int main_virtio_disklist(int argc, char **argv);
+int main_virtio_diskdetach(int argc, char **argv);
 int main_usbctrl_attach(int argc, char **argv);
 int main_usbctrl_detach(int argc, char **argv);
 int main_usbdev_attach(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b..745afab 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -435,6 +435,21 @@ struct cmd_spec cmd_table[] = {
       "Destroy a domain's virtual sound device",
       "<Domain> <DevId>",
     },
+    { "virtio-disk-attach",
+      &main_virtio_diskattach, 1, 1,
+      "Create a new virtio block device",
+      " TBD\n"
+    },
+    { "virtio-disk-list",
+      &main_virtio_disklist, 0, 0,
+      "List virtio block devices for a domain",
+      "<Domain(s)>",
+    },
+    { "virtio-disk-detach",
+      &main_virtio_diskdetach, 0, 1,
+      "Destroy a domain's virtio block device",
+      "<Domain> <DevId>",
+    },
     { "uptime",
       &main_uptime, 0, 0,
       "Print uptime for all/some domains",
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 10acf22..6cf3524 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1204,6 +1204,120 @@ out:
     if (rc) exit(EXIT_FAILURE);
 }
 
+#define MAX_VIRTIO_DISKS 4
+
+static int parse_virtio_disk_config(libxl_device_virtio_disk *virtio_disk, char *token)
+{
+    char *oparg;
+    libxl_string_list disks = NULL;
+    int i, rc;
+
+    if (MATCH_OPTION("backend", token, oparg)) {
+        virtio_disk->backend_domname = strdup(oparg);
+    } else if (MATCH_OPTION("disks", token, oparg)) {
+        split_string_into_string_list(oparg, ";", &disks);
+
+        virtio_disk->num_disks = libxl_string_list_length(&disks);
+        if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) {
+            fprintf(stderr, "vdisk: currently only %d disks are supported\n",
+                    MAX_VIRTIO_DISKS);
+            return 1;
+        }
+        virtio_disk->disks = xcalloc(virtio_disk->num_disks,
+                                     sizeof(*virtio_disk->disks));
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            char *disk_opt;
+
+            rc = split_string_into_pair(disks[i], ":", &disk_opt,
+                                        &virtio_disk->disks[i].filename);
+            if (rc) {
+                fprintf(stderr, "vdisk: failed to split \"%s\" into pair\n",
+                        disks[i]);
+                goto out;
+            }
+
+            if (!strcmp(disk_opt, "ro"))
+                virtio_disk->disks[i].readonly = 1;
+            else if (!strcmp(disk_opt, "rw"))
+                virtio_disk->disks[i].readonly = 0;
+            else {
+                fprintf(stderr, "vdisk: failed to parse \"%s\" disk option\n",
+                        disk_opt);
+                rc = 1;
+            }
+            free(disk_opt);
+
+            if (rc) goto out;
+        }
+    } else {
+        fprintf(stderr, "Unknown string \"%s\" in vdisk spec\n", token);
+        rc = 1; goto out;
+    }
+
+    rc = 0;
+
+out:
+    libxl_string_list_dispose(&disks);
+    return rc;
+}
+
+static void parse_virtio_disk_list(const XLU_Config *config,
+                            libxl_domain_config *d_config)
+{
+    XLU_ConfigList *virtio_disks;
+    const char *item;
+    char *buf = NULL;
+    int rc;
+
+    if (!xlu_cfg_get_list (config, "vdisk", &virtio_disks, 0, 0)) {
+        libxl_domain_build_info *b_info = &d_config->b_info;
+        int entry = 0;
+
+        /* XXX Remove an extra property */
+        libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
+        if (!libxl_defbool_val(b_info->arch_arm.virtio)) {
+            fprintf(stderr, "Virtio device requires Virtio property to be set\n");
+            exit(EXIT_FAILURE);
+        }
+
+        while ((item = xlu_cfg_get_listitem(virtio_disks, entry)) != NULL) {
+            libxl_device_virtio_disk *virtio_disk;
+            char *p;
+
+            virtio_disk = ARRAY_EXTEND_INIT(d_config->virtio_disks,
+                                            d_config->num_virtio_disks,
+                                            libxl_device_virtio_disk_init);
+
+            buf = strdup(item);
+
+            p = strtok (buf, ",");
+            while (p != NULL)
+            {
+                while (*p == ' ') p++;
+
+                rc = parse_virtio_disk_config(virtio_disk, p);
+                if (rc) goto out;
+
+                p = strtok (NULL, ",");
+            }
+
+            entry++;
+
+            if (virtio_disk->num_disks == 0) {
+                fprintf(stderr, "At least one virtio disk should be specified\n");
+                rc = 1; goto out;
+            }
+        }
+    }
+
+    rc = 0;
+
+out:
+    free(buf);
+    if (rc) exit(EXIT_FAILURE);
+}
+
 void parse_config_data(const char *config_source,
                        const char *config_data,
                        int config_len,
@@ -2734,6 +2848,7 @@ skip_usbdev:
     }
 
     parse_vkb_list(config, d_config);
+    parse_virtio_disk_list(config, d_config);
 
     xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat",
                         &c_info->xend_suspend_evtchn_compat, 0);
diff --git a/tools/xl/xl_virtio_disk.c b/tools/xl/xl_virtio_disk.c
new file mode 100644
index 0000000..808a7da
--- /dev/null
+++ b/tools/xl/xl_virtio_disk.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include <stdlib.h>
+
+#include <libxl.h>
+#include <libxl_utils.h>
+#include <libxlutil.h>
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+int main_virtio_diskattach(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_disklist(int argc, char **argv)
+{
+   return 0;
+}
+
+int main_virtio_diskdetach(int argc, char **argv)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:47:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:47:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40989.74068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgii-0004oe-DH; Mon, 30 Nov 2020 10:47:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40989.74068; Mon, 30 Nov 2020 10:47:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgii-0004oX-AH; Mon, 30 Nov 2020 10:47:48 +0000
Received: by outflank-mailman (input) for mailman id 40989;
 Mon, 30 Nov 2020 10:47:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjgih-0004oS-6P
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:47:47 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.43]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9dc8397d-96ac-4274-a20a-2bc66e29197b;
 Mon, 30 Nov 2020 10:47:46 +0000 (UTC)
Received: from DB6PR0202CA0028.eurprd02.prod.outlook.com (2603:10a6:4:a5::14)
 by AM0PR08MB5348.eurprd08.prod.outlook.com (2603:10a6:208:189::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Mon, 30 Nov
 2020 10:47:39 +0000
Received: from DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a5:cafe::d0) by DB6PR0202CA0028.outlook.office365.com
 (2603:10a6:4:a5::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.20 via Frontend
 Transport; Mon, 30 Nov 2020 10:47:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT037.mail.protection.outlook.com (10.152.20.215) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3611.26 via Frontend Transport; Mon, 30 Nov 2020 10:47:39 +0000
Received: ("Tessian outbound 797fb8e1da56:v71");
 Mon, 30 Nov 2020 10:47:37 +0000
Received: from 30d5ba9b9df3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C2A55A43-3411-40DF-A1FD-12754BEBB636.1; 
 Mon, 30 Nov 2020 10:47:00 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 30d5ba9b9df3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 30 Nov 2020 10:47:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1910.eurprd08.prod.outlook.com (2603:10a6:4:75::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Mon, 30 Nov
 2020 10:46:58 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::98c7:4612:2365:cc6b%5]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 10:46:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dc8397d-96ac-4274-a20a-2bc66e29197b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sSNwJasFQ4DpZY37jelQq8pyGhAA8IgOys8LHIxMnbE=;
 b=2LtKzaM0qB3G52vzdYhPPW3khUVtcI5utigpcNX/Brts81u/Sa/tDU2lMDbI0f5LY/q95TlU98TeQdCOVJ05jXBbJzxIMjL4eHQVZwP7rf+MefLNrjxq3H7m5KyqhDQunlQtTjjDzOZ3p84DXqnuI/CsmT/YQN7Tdglng8Bx7Qk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2db7d77e50cf6b71
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Go0kvSpmKyqTIK8UoFyLrCVX+4A/JbskwlLQJbdxornN49nOgDuJNE6G/u0PyAGA2DMQZAt9QTlwDrzFXfelMbVf4IL1sURj41DQa7/7Db7mUKsS0TYxMvVYwSSqFI+HxO7B9FDqX5HMAsSVzCytCDpGBxaRFj/z7tIOnKYbxecUV0II+nCKt6j0NKyb09t0LWuj4M7enwg7kxgcKaEASAHIpwVFFmWxPRBwbavljnw09AbXXg6SHGdr3fB6TAKL3Y+bpynhaSCzlXUUoJJoccF6J3h2Fnc+UXTtgE/jJWS6R7ZTnq4r8ypSnTwtUoC/6/nyuxkbmRIcMD33tKz0Mw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sSNwJasFQ4DpZY37jelQq8pyGhAA8IgOys8LHIxMnbE=;
 b=MpCeBsKxb/1iJjIvYG+Yyz140e63w52UDUnuByEEkb0JhM2X6FN1WvG649h8z8z2nfdKspULFenXTS3HrVQyArWkatRw3vrp8tMotjZ2TsYBK3XyVwL1Lw7GrPO2JHexPeyVHBVIqk5FeYuZS2W8tJjwuHfM1TtpNVCdmXTR667kkor3XNj4xiwKy6J95eS0Nz17cjw7Ka2rou0Qj044kEj9iMoP1tOq70M4XW+1o/nVvt2Mnrkq72vDqOeGTmyNQiaOwec/kOqNYp9GmaQd3EDZnZfLxnB9+UbyXmxJonpgpMaDfOZARTRI8HSx6znP8RKUS80EUlWzTtMMRec4Xg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sSNwJasFQ4DpZY37jelQq8pyGhAA8IgOys8LHIxMnbE=;
 b=2LtKzaM0qB3G52vzdYhPPW3khUVtcI5utigpcNX/Brts81u/Sa/tDU2lMDbI0f5LY/q95TlU98TeQdCOVJ05jXBbJzxIMjL4eHQVZwP7rf+MefLNrjxq3H7m5KyqhDQunlQtTjjDzOZ3p84DXqnuI/CsmT/YQN7Tdglng8Bx7Qk=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/7] xen/arm: Emulate ID registers
Thread-Topic: [PATCH 0/7] xen/arm: Emulate ID registers
Thread-Index: AQHWxPkEic+Kxyhsf02jxIedQiFc2qngevuAgAAEQwCAAAMPgA==
Date: Mon, 30 Nov 2020 10:46:58 +0000
Message-ID: <86D0C252-F970-43F1-A876-58D8F63CFD55@arm.com>
References: <cover.1606151462.git.bertrand.marquis@arm.com>
 <45b8aac3-75a6-670f-d6f2-b427c497ee2d@citrix.com>
 <1BAAADF6-9E29-4BE5-857D-A8B51EB80712@arm.com>
 <e119d6ff-dc61-0fd7-6da5-3e4e1b51839c@citrix.com>
In-Reply-To: <e119d6ff-dc61-0fd7-6da5-3e4e1b51839c@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.9.225.195]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 40c74519-f949-40b5-a78f-08d8951d5b83
x-ms-traffictypediagnostic: DB6PR0801MB1910:|AM0PR08MB5348:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5348EC46CCD7E851F585B7099DF50@AM0PR08MB5348.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 R/e3vW5ef1Hf4jHwdWz/v9XLClhm6g6lLlrIYLcJnVaMRQloWF/imBQHQdLt5ifBrEhmuU7LRSW0XE6Sx0u7DwxTGNkNTOTS3nQ1fbBLRT28Tza7gLKqDhDRwDlqkBXIIckrVblrc4RzGJXEKbeQwYPoUVmpwcnIPxnYHjb5bQf+KJb17n3nKX14g7aJyKQTQhRRj8KpArHD2SwVhfqxAKQC20LLdpv45h2GIiLZxrUd3IdUmjNvH7zODcDLRjNYwd7enjWY9P3wDcHA8KTIFwZ2PNPMPJQdm9Vr5byBUE+J4cWUIChLWRAe/wEEo0pcBWgHS7flfVxN52qGhlm9g5y4OEvWQUJzY1qKISDF7JFIEzSC9lkw0UvozBOGNCLMPJD4lSPvqKQtxz6DDhjW2g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(136003)(366004)(376002)(5660300002)(26005)(6512007)(66946007)(478600001)(6506007)(6916009)(36756003)(53546011)(186003)(33656002)(2616005)(8936002)(66476007)(8676002)(83380400001)(4326008)(6486002)(64756008)(2906002)(76116006)(54906003)(91956017)(86362001)(66556008)(71200400001)(316002)(966005)(66446008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?us-ascii?Q?+r/LSJToeBvPwgsKwkkvTt820EQYFgalzMpaUKr3qR39sz9MUybzMTi2fpbR?=
 =?us-ascii?Q?OqJy8O1GJObw/1qaQ4mdZo5X5PDoLMp8G8Ys/sW2/byZsuvTTM387Z8/XAds?=
 =?us-ascii?Q?eHsFL66wrAgQgTKhwQzfLH02czE10US5dAAl/E6OJ/9eZ9ulZZYxQY2YZYZW?=
 =?us-ascii?Q?Z94JF8pR6OI5KO29A01myNG0kfF2PaBR/hexfd7uGxDLQ06OiCuP5/EVRWoD?=
 =?us-ascii?Q?HkiNPX0xjrJBxveD2Gya64QcsGSfDSwNg/7u856yRbf9MFsUQVkgLu+2ydGN?=
 =?us-ascii?Q?62T76SivoMjBTLxurotVoJsM1ozzxkXMkRAFNiVfNDeL6NLaaK5FEcvsdJh0?=
 =?us-ascii?Q?1gYo6Grt9/MyOCspQ5kea+25N9wK2tSL1Xh0tXJj47JKz9njoD2oNJ/we9Bg?=
 =?us-ascii?Q?1VywlITPTEAu3mSUS6r01UjzO/VtvOEJQ2hj22zfKi7uGK2JNtVElbsOwo3/?=
 =?us-ascii?Q?i/CnN15rLXx3Y6bdRQP5VVeJy7dubxE8ci/XPd4tKbc+Tyr9t+zmrJgk67IT?=
 =?us-ascii?Q?OtQxfbLrNgo3RGCPo33aM39Q93Om+pMtLPaO3Ggl/IgGDdTe+dM7DPrwhFOh?=
 =?us-ascii?Q?7RzHMKOJ4ZjkFUXQeQVOv8AdgQ1ifKtrEL8jDkjVREblYNcXUxYZobs1IWke?=
 =?us-ascii?Q?uVfCXTc1yrRu9jivjbV8lxSJ7mhnZAnJXZwM0VMYI+M+HplKhSwRYSMRpHgq?=
 =?us-ascii?Q?hxn7w/mvm+V9fmJjBpZENWOPmDWRblkE7iQPdLAO2krWM8nklcFBv0omYKof?=
 =?us-ascii?Q?4tGMS0sq400bfXqTiLVvxrwmLU1t0J5cNHglE8b/oskjQVl3+hPMi1H6TYsf?=
 =?us-ascii?Q?WtWb9+sAVSVv5lb0GXrXgDPX7pfSkkn+tfniOczETzbWODdNsEd2TKVIIqXa?=
 =?us-ascii?Q?AS4H2GDPNfvF4upnSPYuwLqn0aVVtcYFVt2DeJ+gH6hiro2jtkz/MJcSH7Cb?=
 =?us-ascii?Q?Yr0arNv3ry/pbZsUd5KP5YRDnGxtDKBbWCBz5YfOfZyOsJu6j9KOot9M27e+?=
 =?us-ascii?Q?upz4?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <FEFF8A26EFA2664981E3F58F4B1F457A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1910
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3b0c773b-6f3e-4c68-9008-08d8951d431c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dCCDtP++F+ztWrx+F+6Xr8s5hLwzUqW0ykkdlr0vuCxyBMYwCe3MYCZpB0CbnEpt5g592pMRNVzgTDfrq4PvhQnxRfUn6IdVtukcQ0lyd+TWu1ng5mE7dh5RBh5wmhNAYBmN9Ho9bn+lUuuLE73YIayBXbJhlkW4wpgE5DTPmwRNzn7q37PYWcJtgBo0ZcTU7leVfxeSIQp+gbAZdt1rObn9WjvrP3RqScnkCxavarpzjDOelV5XCWQmz6XDUt151USRS2MJ7QeuAvmen8AHOGpXnkhYAhHkChNUGDcaJGSI26UddffconG/ga3qpJPKjbhy7KCUITt8BJk5VwBpSfO5+nZafRrwyMV7F9AVpiAEEO6YJxrkumCtPmoCQsorGUpG8cBULQwkRp62Q0+6MQ+JjwsPlCwqZpTnj+UtpsMRYABMsHge0uuC8+gqbvcuGHwA21bK7TVxv+H6XuOG4SZJvUnHTtGL8xLCfxVzF+U=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(396003)(346002)(376002)(136003)(46966005)(70586007)(36756003)(86362001)(8936002)(8676002)(70206006)(53546011)(6486002)(4326008)(5660300002)(83380400001)(81166007)(82740400003)(82310400003)(356005)(47076004)(54906003)(33656002)(2906002)(186003)(26005)(6512007)(2616005)(6506007)(316002)(966005)(6862004)(478600001)(107886003)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Nov 2020 10:47:39.4790
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 40c74519-f949-40b5-a78f-08d8951d5b83
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5348

Hi Andrew,

> On 30 Nov 2020, at 10:36, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 30/11/2020 10:20, Bertrand Marquis wrote:
>> Hi Andrew,
>>
>>> On 27 Nov 2020, at 20:07, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>
>>> On 26/11/2020 15:51, Bertrand Marquis wrote:
>>>> The goal of this series is to emulate coprocessor ID registers so that
>>>> Xen only publishes to guests those features that are supported by Xen
>>>> and can actually be used by guests.
>>>> One practical example where this is required is SVE support, which is
>>>> forbidden by Xen as it is not supported; if Linux is compiled with it,
>>>> it will crash on boot. Another one is AMU, which is also forbidden by
>>>> Xen, but a Linux kernel compiled with it would crash if the platform
>>>> supports it.
>>>>
>>>> To be able to emulate the coprocessor registers defining what features
>>>> are supported by the hardware, the TID3 bit of HCR must be set and
>>>> Xen must emulate the values of those registers when an exception is
>>>> caught as a guest accesses them.
>>>>
>>>> This series first creates a guest cpuinfo structure, which contains
>>>> the values that we want to publish to guests, and then provides the
>>>> proper emulation for those registers when Xen gets an exception due
>>>> to an access to any of them.
>>>>
>>>> This is a first simple implementation to solve the problem. The way
>>>> the values provided to guests are defined, and which features are
>>>> disabled, will be enhanced in a future patch set so that we can
>>>> decide per guest what can be used or not, and from that deduce the
>>>> bits to activate in HCR and the values to publish in the ID
>>>> registers.
>>>>
>>>> Bertrand Marquis (7):
>>>> xen/arm: Add ID registers and complete cpufinfo
>>>> xen/arm: Add arm64 ID registers definitions
>>>> xen/arm: create a cpuinfo structure for guest
>>>> xen/arm: Add handler for ID registers on arm64
>>>> xen/arm: Add handler for cp15 ID registers
>>>> xen/arm: Add CP10 exception support to handle VMFR
>>>> xen/arm: Activate TID3 in HCR_EL2
>>> CI found an ARM randconfig failure against this series.
>>>
>>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/221798884
>>>=20
>>> I have to admit that I can't spot an obvious connection, so it might be
>>> collateral damage from elsewhere, but it does need looking at regardless.
>> This is absolutely right: there is a bug in my code and I will send a v2
>> to fix it.
>>
>> Very nice finding; I am wondering why my tests did not catch this.
>
> It's randconfig, so every time the test runs, it picks a new random
> Kconfig configuration.

This could be considered a form of fuzz testing and can be very useful.

>
> Sadly, it is non-deterministic, and not necessarily the fault of the
> change the test ran against.  We're probably going to have to tweak how
> we run these tests before the CI goes too much further.

Agreed, but in this case the error would have been triggered by any Arm
configuration with the right compiler flags.

Regards
Bertrand



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 10:50:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 10:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.40998.74080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgla-0005hn-TS; Mon, 30 Nov 2020 10:50:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 40998.74080; Mon, 30 Nov 2020 10:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjgla-0005hg-Pq; Mon, 30 Nov 2020 10:50:46 +0000
Received: by outflank-mailman (input) for mailman id 40998;
 Mon, 30 Nov 2020 10:50:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a3VR=FE=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kjggo-0004Aj-HG
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 10:45:50 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fce6a8ae-1655-4ee9-858f-ecb60cdda627;
 Mon, 30 Nov 2020 10:45:39 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id e7so15506949wrv.6
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 02:45:39 -0800 (PST)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:8931:214a:807a:cb80])
 by smtp.gmail.com with ESMTPSA id a65sm7213049wmc.35.2020.11.30.02.45.37
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 02:45:38 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fce6a8ae-1655-4ee9-858f-ecb60cdda627
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=/diZ7VyXKP6X0O8jl6ZE2tV3//7EXwwtnKHThnr1OrI=;
        b=EWstEzre2pVp1WZ73KD7U4Lc0jp6RrfMUZLmSEQDAYXR8SwrUJDwYbDBQMb3qkG9WO
         sRg6rnrh14Vrd7tmJh+nv0K8KO4YRrR+99SyzXSRLKlIIEgIAiYJaB5gC8pwuf3FGe6z
         YHukkUKEJPS/CJKrg76SYWrZb4mxevjNZvZQ2V9hUj3IvRlSyjp0Vd2+vAaUxYtV0Fvn
         thdmP22iJ3UlV+jjG1d3Q4ole60R0U8nMDlsUAMfFQqrcyvbAAVGspzkRtHWouA7yVrK
         O7L6Ff6DszFUCT1dB7VV+nUIjc9AKi+h9miDHCkCPkgON/7Y7JcOfasfn6GZJq1SRlbU
         lxVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=/diZ7VyXKP6X0O8jl6ZE2tV3//7EXwwtnKHThnr1OrI=;
        b=V7zGdsONtvq0hODOWeG1vFhK1AuFwFeI6u2+YFvM4vkjt1vH2Q0GTpJ5MQPsOTt3nn
         RPPRsLkm5WH+9Q/Bgo2rr1KKtlXD1KeRgQn9C2zNE4IQcqiNZxYl3c4+PsYghRBCXiRV
         EHNhoh5Zg2sx+Jhx+wu1jIPez7hAqCJQ/tPQkZVD40UBzsIJ16JE3c+3E8PC1lWZAOXu
         JWBGMH9HzCKIrbD9csSv8bcZhUD5FEgRgK0df9JUaA8SxpHKOGrjBNb5GpTXPVtYZn4I
         Ck8tYrCaO4GneOrs4sowKnRijkJl4x5xdEXnD3LZmp4neCort/vCw5iFintHIBJqHRr3
         /Vww==
X-Gm-Message-State: AOAM530kx2s+jzK27hpi049YKNkWr7AVvJlnlCcll0TrNaiQX79RwCn7
	jUHCs5v8n6nw8YGn315brC4=
X-Google-Smtp-Source: ABdhPJxl2R2FeAhkgx28P0MkA80KZrR1KI2bcy0PiaoXvbbgPlycFWvcCZKFlvEmHyHfAOyawsweGA==
X-Received: by 2002:adf:82ca:: with SMTP id 68mr27203214wrc.332.1606733138482;
        Mon, 30 Nov 2020 02:45:38 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
In-Reply-To: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
Subject: RE: [PATCH v4] IOMMU: make DMA containment of quarantined devices optional
Date: Mon, 30 Nov 2020 10:45:37 -0000
Message-ID: <013601d6c705$f09fd9a0$d1df8ce0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQI15LetYabIv76Hs54hT8VKRuHZ1akiP7Iw
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 27 November 2020 16:46
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant <paul@xen.org>; Kevin Tian
> <kevin.tian@intel.com>
> Subject: [PATCH v4] IOMMU: make DMA containment of quarantined devices optional
>
> Containing still in flight DMA was introduced to work around certain
> devices / systems hanging hard upon hitting a "not-present" IOMMU fault.
> Passing through (such) devices (on such systems) is inherently insecure
> (as guests could easily arrange for IOMMU faults of any kind to occur).
> Defaulting to a mode where admins may not even become aware of issues
> with devices can be considered undesirable. Therefore convert this mode
> of operation to an optional one, not one enabled by default.
>
> This involves resurrecting code commit ea38867831da ("x86 / iommu: set
> up a scratch page in the quarantine domain") did remove, in a slightly
> extended and abstracted fashion. Here, instead of reintroducing a pretty
> pointless use of "goto" in domain_context_unmap(), and instead of making
> the function (at least temporarily) inconsistent, take the opportunity
> and replace the other similarly pointless "goto" as well.
>
> In order to key the re-instated bypasses off of there (not) being a root
> page table this further requires moving the allocate_domain_resources()
> invocation from reassign_device() to amd_iommu_setup_domain_device() (or
> else reassign_device() would allocate a root page table anyway); this is
> benign to the second caller of the latter function.
>
> Take the opportunity and also limit the control to builds supporting
> PCI.
>=20
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v4: "full" -> "scratch_page". Duplicate Kconfig help text into command
>     line doc. Re-base.
> v3: IOMMU_quarantine_basic -> IOMMU_quarantine_fault,
>     IOMMU_quarantine_full -> IOMMU_quarantine_write_fault. Kconfig
>     option (choice) to select default. Limit to HAS_PCI.
> v2: Don't use true/false. Introduce QUARANTINE_SKIP() (albeit I'm not
>     really convinced this is an improvement). Add comment.
>
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1278,7 +1278,7 @@ detection of systems known to misbehave
>  > Default: `new` unless directed-EOI is supported
>
>  ### iommu
> -    = List of [ <bool>, verbose, debug, force, required, quarantine,
> +    = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
>                  sharept, intremap, intpost, crash-disable,
>                  snoop, qinval, igfx, amd-iommu-perdev-intremap,
>                  dom0-{passthrough,strict} ]
> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
>      will prevent Xen from booting if IOMMUs aren't discovered and enabled
>      successfully.
>
> -*   The `quarantine` boolean can be used to control Xen's behavior when
> -    de-assigning devices from guests.  If enabled (the default), Xen always
> +*   The `quarantine` option can be used to control Xen's behavior when
> +    de-assigning devices from guests.
> +
> +    When a PCI device is assigned to an untrusted domain, it is possible
> +    for that domain to program the device to DMA to an arbitrary address.
> +    The IOMMU is used to protect the host from malicious DMA by making
> +    sure that the device addresses can only target memory assigned to the
> +    guest.  However, when the guest domain is torn down, assigning the
> +    device back to the hardware domain would allow any in-flight DMA to
> +    potentially target critical host data.  To avoid this, quarantining
> +    should be enabled.  Quarantining can be done in two ways: In its basic
> +    form, all in-flight DMA will simply be forced to encounter IOMMU
> +    faults.  Since there are systems where doing so can cause host lockup,
> +    an alternative form is available where writes to memory will be made
> +    fault, but reads will be directed to a dummy page.  The implication
> +    here is that such reads will go unnoticed, i.e. an admin may not
> +    become aware of the underlying problem.
> +
> +    Therefore, if this option is set to true (the default), Xen always
>      quarantines such devices; they must be explicitly assigned back to Dom0
> -    before they can be used there again.  If disabled, Xen will only
> -    quarantine devices the toolstack hass arranged for getting quarantined.
> +    before they can be used there again.  If set to "scratch-page", still
> +    active DMA reads will additionally be directed to a "scratch" page.  If

There's an inconsistency of terms here. We should choose either 'dummy
page' or 'scratch page' (and my vote goes for the latter). Also, rather
than true or false, shouldn't we have 'off', 'basic', and 'scratch-page'?

  Paul



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 11:23:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 11:23:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41021.74101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhGk-0000PO-K6; Mon, 30 Nov 2020 11:22:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41021.74101; Mon, 30 Nov 2020 11:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhGk-0000PH-H2; Mon, 30 Nov 2020 11:22:58 +0000
Received: by outflank-mailman (input) for mailman id 41021;
 Mon, 30 Nov 2020 11:22:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjhGj-0000P8-4f
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 11:22:57 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9f3f888-0c5b-451f-8358-ed17f7e4de6e;
 Mon, 30 Nov 2020 11:22:55 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id r24so20875616lfm.8
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 03:22:55 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id x134sm2413508lff.161.2020.11.30.03.22.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 30 Nov 2020 03:22:53 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9f3f888-0c5b-451f-8358-ed17f7e4de6e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=MXrDDLpAfTaxU1bNVbBRC2S3j+N1CzIycOci6TfNZ2g=;
        b=neI9z+45JbHS5pifD7oPK6lOC8ge1SUAzfnEYFrpa9ctaC0cIsuQWaBpNMAUZcLJid
         gKDxUKoyzEU1AGZEz4h66JCwhfddejc0TXWkuVgFFffJt2mCOI5HwqmYFhkNehe6C48A
         RNyvUsze9eKr5yS8SIN23hdBmEExi3RC/yzH+vfEe7hrrL3Asw6dC8t36H3Io9nTtBtI
         VxXhesTzStcvCdG1LOcF+ePffqK2ivZPybHOG5rJuqJgSvxr0eMX02HKn/9vu6Id7Yaz
         h47CJL1/LfgGSTZEgLMC25aJCp0AgHM92y7yxro99Q+79NL4y5CSvU/sPDIRc3B8YSPj
         4Gvw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=MXrDDLpAfTaxU1bNVbBRC2S3j+N1CzIycOci6TfNZ2g=;
        b=tgeUcWzC9BSwVfahNDPedV6jf0K+UQ6btaGhqXsTOO/dYIPJzuZv14Gu9joIlal4/Z
         SCBaOeCPwNGj8M3x0HPnP/MEJySCH3w6ZNZfZxuANYBOd9ehWA3uzLALPPpYjskS8Syc
         RIkG8K3orBbqF7oRSIuvY4dxYX/Vnp5iP6GWdI6Veo3fIu8Mvu+At2MnfLPug1MPoQ5R
         PaKEyJOy9QH6x4Gh+ckUUTfNSGNx+MvRTTdX1dF5QuengBKhQ6KTl0Iv9tr/zSZtftbB
         69HbgMH8jrhxE1AYukB/xqP4n5F7bkYz5ocSYGWyjvDtYueF+Z3fn3Md7L3M3DfXE6Jj
         D4AA==
X-Gm-Message-State: AOAM532mgpI2D8NT5tLhDP1M2n7K9DAuuCcFmSlGNuaeyP8oYrOptxhx
	JPzGn3w/gV5bV3F+y+FpL2Y=
X-Google-Smtp-Source: ABdhPJyh5IDuJDJNpDZ0b0zQ7MtdUUl5vXX1HHFtuvwPWeN/eq2FYFW5XbpdUzPYn7N4byNJEl2QvA==
X-Received: by 2002:a19:cc2:: with SMTP id 185mr8964985lfm.318.1606735374534;
        Mon, 30 Nov 2020 03:22:54 -0800 (PST)
Subject: Re: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
From: Oleksandr <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Kaly Xin <Kaly.Xin@arm.com>, Artem Mygaiev <joculator@gmail.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
Message-ID: <66df4a0b-166a-81c3-9237-854649c832f9@gmail.com>
Date: Mon, 30 Nov 2020 13:22:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 30.11.20 12:31, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

I have added the missing subject line. I am sorry for the inconvenience.


>
>
> Date: Sat, 28 Nov 2020 22:33:51 +0200
> Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> Hello all.
>
> The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
> You can find an initial discussion at [1] and RFC/V1/V2 series at [2]/[3]/[4].
> Xen on Arm requires a mechanism to forward guest MMIO accesses to a device
> model in order to implement a virtio-mmio backend, or even a mediator, outside of the hypervisor.
> As Xen on x86 already contains the required support, this series tries to make it common
> and introduces the Arm-specific bits plus some new functionality. The patch series is based on
> Julien's PoC "xen/arm: Add support for Guest IO forwarding to a device emulator".
> Besides splitting the existing IOREQ/DM support and introducing the Arm side, the series
> also includes virtio-mmio related changes (the last 2 patches, for the toolstack)
> so that reviewers can see what the whole picture could look like.
>
> According to the initial discussion there are a few open questions/concerns
> regarding security, performance in VirtIO solution:
> 1. virtio-mmio vs virtio-pci, SPI vs MSI, different use-cases require different
>     transport...
> 2. virtio backend is able to access all guest memory, some kind of protection
>     is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory & memcpys in guest'
> 3. interface between toolstack and 'out-of-qemu' virtio backend, avoid using
>     Xenstore in virtio backend if possible.
> 4. a lot of 'foreign mappings' could lead to memory exhaustion; Julien
>     has some ideas regarding that.
>
> Looks like all of them are valid and worth considering, but the first thing
> we need on Arm is a mechanism to forward guest IO to a device emulator,
> so let's focus on that first.
>
> ***
>
> There are a lot of changes since the RFC series: almost all TODOs were resolved on Arm,
> the Arm code was improved and hardened, the common IOREQ/DM code became truly arch-agnostic
> (without HVM-isms), the "legacy" mechanism of mapping magic pages for the IOREQ servers
> was left x86-specific, etc. But one TODO still remains, which is "PIO handling" on Arm.
> The "PIO handling" TODO is expected to be left unaddressed in the current series.
> It is not a big issue for now, while Xen doesn't have support for vPCI on Arm.
> On Arm64, PIO accesses are only used for PCI I/O BARs, and we would probably want to expose
> them to the emulator as PIO accesses to make a DM completely arch-agnostic. So "PIO handling"
> should be implemented when we add support for vPCI.
>
> I left the interface untouched in the following patch,
> "xen/dm: Introduce xendevicemodel_set_irq_level DM op",
> since there is still an open discussion about what interface to use and
> what information to pass to the hypervisor.
>
> There is a patch under review that this series depends on:
> https://patchwork.kernel.org/patch/11816689
>
> Please note that the IOREQ feature is disabled by default on Arm in the current series.
>
> ***
>
> Patch series [5] was rebased on the recent "staging" branch
> (181f2c2 evtchn: double per-channel locking can't hit identical channels) and tested on a
> Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a virtio-mmio disk backend [6]
> running in a driver domain and an unmodified Linux guest running on the existing
> virtio-blk driver (frontend). No issues were observed. Guest domain 'reboot/destroy'
> use cases work properly. On x86 the patch series was only build-tested.
>
> Please note that the build test passed for the following modes:
> 1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
> 2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
> 3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> 4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
> 5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> 6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
>
> ***
>
> Any feedback/help would be highly appreciated.
>
> [1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
> [2] https://lists.xenproject.org/archives/html/xen-devel/2020-08/msg00071.html
> [3] https://lists.xenproject.org/archives/html/xen-devel/2020-09/msg00732.html
> [4] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01077.html
> [5] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml4
> [6] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml1
>
> Julien Grall (5):
>    xen/dm: Make x86's DM feature common
>    xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
>    arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>    xen/dm: Introduce xendevicemodel_set_irq_level DM op
>    libxl: Introduce basic virtio-mmio support on Arm
>
> Oleksandr Tyshchenko (18):
>    x86/ioreq: Prepare IOREQ feature for making it common
>    x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
>    x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
>    xen/ioreq: Make x86's IOREQ feature common
>    xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
>    xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
>    xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
>    xen/ioreq: Move x86's ioreq_server to struct domain
>    xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
>    xen/ioreq: Remove "hvm" prefixes from involved function names
>    xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
>    xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
>      completed
>    xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
>    xen/ioreq: Introduce domain_has_ioreq_server()
>    xen/arm: io: Abstract sign-extension
>    xen/ioreq: Make x86's send_invalidate_req() common
>    xen/arm: Add mapcache invalidation handling
>    [RFC] libxl: Add support for virtio-disk configuration
>
>   MAINTAINERS                                  |    8 +-
>   tools/include/xendevicemodel.h               |    4 +
>   tools/libs/devicemodel/core.c                |   18 +
>   tools/libs/devicemodel/libxendevicemodel.map |    1 +
>   tools/libs/light/Makefile                    |    1 +
>   tools/libs/light/libxl_arm.c                 |   94 +-
>   tools/libs/light/libxl_create.c              |    1 +
>   tools/libs/light/libxl_internal.h            |    1 +
>   tools/libs/light/libxl_types.idl             |   16 +
>   tools/libs/light/libxl_types_internal.idl    |    1 +
>   tools/libs/light/libxl_virtio_disk.c         |  109 +++
>   tools/xl/Makefile                            |    2 +-
>   tools/xl/xl.h                                |    3 +
>   tools/xl/xl_cmdtable.c                       |   15 +
>   tools/xl/xl_parse.c                          |  116 +++
>   tools/xl/xl_virtio_disk.c                    |   46 +
>   xen/arch/arm/Makefile                        |    2 +
>   xen/arch/arm/dm.c                            |   89 ++
>   xen/arch/arm/domain.c                        |    9 +
>   xen/arch/arm/hvm.c                           |    4 +
>   xen/arch/arm/io.c                            |   29 +-
>   xen/arch/arm/ioreq.c                         |  126 +++
>   xen/arch/arm/p2m.c                           |   48 +-
>   xen/arch/arm/traps.c                         |   58 +-
>   xen/arch/x86/Kconfig                         |    1 +
>   xen/arch/x86/hvm/dm.c                        |  295 +-----
>   xen/arch/x86/hvm/emulate.c                   |   80 +-
>   xen/arch/x86/hvm/hvm.c                       |   12 +-
>   xen/arch/x86/hvm/hypercall.c                 |    9 +-
>   xen/arch/x86/hvm/intercept.c                 |    5 +-
>   xen/arch/x86/hvm/io.c                        |   26 +-
>   xen/arch/x86/hvm/ioreq.c                     | 1357 ++------------------------
>   xen/arch/x86/hvm/stdvga.c                    |   10 +-
>   xen/arch/x86/hvm/svm/nestedsvm.c             |    2 +-
>   xen/arch/x86/hvm/vmx/realmode.c              |    6 +-
>   xen/arch/x86/hvm/vmx/vvmx.c                  |    2 +-
>   xen/arch/x86/mm.c                            |   46 +-
>   xen/arch/x86/mm/p2m.c                        |   13 +-
>   xen/arch/x86/mm/shadow/common.c              |    2 +-
>   xen/common/Kconfig                           |    3 +
>   xen/common/Makefile                          |    2 +
>   xen/common/dm.c                              |  292 ++++++
>   xen/common/ioreq.c                           | 1307 +++++++++++++++++++++++++
>   xen/common/memory.c                          |   73 +-
>   xen/include/asm-arm/domain.h                 |    3 +
>   xen/include/asm-arm/hvm/ioreq.h              |  139 +++
>   xen/include/asm-arm/mm.h                     |    8 -
>   xen/include/asm-arm/mmio.h                   |    1 +
>   xen/include/asm-arm/p2m.h                    |   19 +-
>   xen/include/asm-arm/traps.h                  |   24 +
>   xen/include/asm-x86/hvm/domain.h             |   43 -
>   xen/include/asm-x86/hvm/emulate.h            |    2 +-
>   xen/include/asm-x86/hvm/io.h                 |   17 -
>   xen/include/asm-x86/hvm/ioreq.h              |   58 +-
>   xen/include/asm-x86/hvm/vcpu.h               |   18 -
>   xen/include/asm-x86/mm.h                     |    4 -
>   xen/include/asm-x86/p2m.h                    |   24 +-
>   xen/include/public/arch-arm.h                |    5 +
>   xen/include/public/hvm/dm_op.h               |   16 +
>   xen/include/xen/dm.h                         |   44 +
>   xen/include/xen/ioreq.h                      |  146 +++
>   xen/include/xen/p2m-common.h                 |    4 +
>   xen/include/xen/sched.h                      |   32 +
>   xen/include/xsm/dummy.h                      |    4 +-
>   xen/include/xsm/xsm.h                        |    6 +-
>   xen/xsm/dummy.c                              |    2 +-
>   xen/xsm/flask/hooks.c                        |    5 +-
>   67 files changed, 3084 insertions(+), 1884 deletions(-)
>   create mode 100644 tools/libs/light/libxl_virtio_disk.c
>   create mode 100644 tools/xl/xl_virtio_disk.c
>   create mode 100644 xen/arch/arm/dm.c
>   create mode 100644 xen/arch/arm/ioreq.c
>   create mode 100644 xen/common/dm.c
>   create mode 100644 xen/common/ioreq.c
>   create mode 100644 xen/include/asm-arm/hvm/ioreq.h
>   create mode 100644 xen/include/xen/dm.h
>   create mode 100644 xen/include/xen/ioreq.h
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 11:35:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 11:35:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41032.74113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhT7-0001Ww-VS; Mon, 30 Nov 2020 11:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41032.74113; Mon, 30 Nov 2020 11:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhT7-0001Wp-SK; Mon, 30 Nov 2020 11:35:45 +0000
Received: by outflank-mailman (input) for mailman id 41032;
 Mon, 30 Nov 2020 11:35:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I3zd=FE=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kjhT6-0001Wk-GE
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 11:35:44 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b64e9de-1139-42bf-8371-6e68610583df;
 Mon, 30 Nov 2020 11:35:41 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AUBZXm3023070
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 30 Nov 2020 12:35:34 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 017D22E9CAC; Mon, 30 Nov 2020 12:35:27 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b64e9de-1139-42bf-8371-6e68610583df
Date: Mon, 30 Nov 2020 12:35:27 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201130113527.GE1084@antioche.eu.org>
References: <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
 <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="oyUTqETQ0mS9luUI"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 30 Nov 2020 12:35:34 +0100 (MET)


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Nov 30, 2020 at 11:00:23AM +0100, Jan Beulich wrote:
> On 28.11.2020 18:14, Manuel Bouyer wrote:
> > On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
> >>> the trace is at
> >>> http://www-soc.lip6.fr/~bouyer/xen-log13.txt
> >>
> >> Thanks! I think I've found the issue and I'm attaching a possible fix
> >> (fix.patch) to this email. In any case I've also attached a further
> >> debug patch, in case the fix turns out to be wrong. Please test the
> >> fix first, as the debug patch will end up triggering a panic when the
> >> buffer is full.
> > 
> > Yes, fix.patch does make the system boot as expected !
> 
> May I translate this to a Tested-by?
> 
> Patch also
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Thanks much to both of you for all the effort here!

Also, please don't forget the attached patch !
Without it, the hypervisor panics.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--

--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=patch-pvh-panic

diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..3eb6102a61 100644
--- xen/drivers/vpci/msix.c.orig
+++ xen/drivers/vpci/msix.c
@@ -370,7 +370,7 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
 
             entry->updated = false;
         }
-        else
+        else if ( msix->enabled )
             vpci_msix_arch_mask_entry(entry, pdev, entry->masked);
 
         break;

--oyUTqETQ0mS9luUI--


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 11:44:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 11:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41041.74126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhbW-0002ZV-Rg; Mon, 30 Nov 2020 11:44:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41041.74126; Mon, 30 Nov 2020 11:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhbW-0002ZO-Of; Mon, 30 Nov 2020 11:44:26 +0000
Received: by outflank-mailman (input) for mailman id 41041;
 Mon, 30 Nov 2020 11:44:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjhbV-0002ZJ-Cw
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 11:44:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43b459a2-36ea-4eda-a6ed-6c622e60e343;
 Mon, 30 Nov 2020 11:44:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC8FFAC91;
 Mon, 30 Nov 2020 11:44:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43b459a2-36ea-4eda-a6ed-6c622e60e343
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606736663; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UMdCMJiYPHVg4TmZjTwMFkZljeNUWYNdlnjTwhmYfxA=;
	b=Bt5XWvb5KGHwEtIin/sSLFyj3A4G9CiO3SHy4aDNILDW+/JbrvLb2G4R0CU5Mrqmzo3RQR
	0ZvkGF8HqhLhgmiXrgRflU59Hn3PcT/4MYe+yFDEKUlZlW3HJIInkexlEoJQKosbhL8vUX
	Ltq+0lCkyf+Dr0AGMKyXJglPM2BMJ58=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201127131324.GJ1717@antioche.eu.org>
 <714e9393-d7f4-ed47-d1ed-aff79f3552a0@suse.com>
 <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
 <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
 <20201130113527.GE1084@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7e284ec6-a3a3-6c04-ce48-10a8290304d5@suse.com>
Date: Mon, 30 Nov 2020 12:44:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201130113527.GE1084@antioche.eu.org>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.11.2020 12:35, Manuel Bouyer wrote:
> On Mon, Nov 30, 2020 at 11:00:23AM +0100, Jan Beulich wrote:
>> On 28.11.2020 18:14, Manuel Bouyer wrote:
>>> On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
>>>>> the trace is at
>>>>> http://www-soc.lip6.fr/~bouyer/xen-log13.txt
>>>>
>>>> Thanks! I think I've found the issue and I'm attaching a possible fix
>>>> (fix.patch) to this email. In any case I've also attached a further
>>>> debug patch, in case the fix turns out to be wrong. Please test the
>>>> fix first, as the debug patch will end up triggering a panic when the
>>>> buffer is full.
>>>
>>> Yes, fix.patch does make the system boot as expected !
>>
>> May I translate this to a Tested-by?
>>
>> Patch also
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> Thanks much to both of you for all the effort here!
> 
> Also, please don't forget the attached patch !
> Without it, the hypervisor panics.

Well - this one still needs a proper description and S-o-b.
The other one came in immediately consumable shape right away.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 11:50:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 11:50:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41048.74137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhhK-0003Rn-HI; Mon, 30 Nov 2020 11:50:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41048.74137; Mon, 30 Nov 2020 11:50:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhhK-0003Rg-EF; Mon, 30 Nov 2020 11:50:26 +0000
Received: by outflank-mailman (input) for mailman id 41048;
 Mon, 30 Nov 2020 11:50:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I3zd=FE=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1kjhhJ-0003Rb-7i
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 11:50:25 +0000
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 553e2e2b-eebb-4cdf-a44d-b7a8aa80c1cb;
 Mon, 30 Nov 2020 11:50:23 +0000 (UTC)
Received: from sandettie.soc.lip6.fr (82-64-3-41.subs.proxad.net [82.64.3.41])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTPS id
 0AUBoIsU021271
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=OK);
 Mon, 30 Nov 2020 12:50:19 +0100 (MET)
Received: by sandettie.soc.lip6.fr (Postfix, from userid 373)
 id 266502E9CAC; Mon, 30 Nov 2020 12:50:13 +0100 (MET)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 553e2e2b-eebb-4cdf-a44d-b7a8aa80c1cb
Date: Mon, 30 Nov 2020 12:50:13 +0100
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        xen-devel@lists.xenproject.org
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
Message-ID: <20201130115013.GF1084@antioche.eu.org>
References: <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
 <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
 <20201130113527.GE1084@antioche.eu.org>
 <7e284ec6-a3a3-6c04-ce48-10a8290304d5@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7e284ec6-a3a3-6c04-ce48-10a8290304d5@suse.com>
X-Greylist: Sender succeeded STARTTLS authentication, not delayed by milter-greylist-4.4.3 (chassiron.antioche.eu.org [151.127.5.145]); Mon, 30 Nov 2020 12:50:19 +0100 (MET)

On Mon, Nov 30, 2020 at 12:44:23PM +0100, Jan Beulich wrote:
> On 30.11.2020 12:35, Manuel Bouyer wrote:
> > On Mon, Nov 30, 2020 at 11:00:23AM +0100, Jan Beulich wrote:
> >> On 28.11.2020 18:14, Manuel Bouyer wrote:
> >>> On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
> >>>>> the trace is at
> >>>>> http://www-soc.lip6.fr/~bouyer/xen-log13.txt
> >>>>
> >>>> Thanks! I think I've found the issue and I'm attaching a possible fix
> >>>> (fix.patch) to this email. In any case I've also attached a further
> >>>> debug patch, in case the fix turns out to be wrong. Please test the
> >>>> fix first, as the debug patch will end up triggering a panic when the
> >>>> buffer is full.
> >>>
> >>> Yes, fix.patch does make the system boot as expected !
> >>
> >> May I translate this to a Tested-by?
> >>
> >> Patch also
> >> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> Thanks much to both of you for all the effort here!
> > 
> > Also, please don't forget the attached patch !
> > Without it, the hypervisor panics.
> 
> Well - this one still needs a proper description and S-o-b.
> The other one came in immediately consumable shape right away.

The patch was sent by Roger on 12 Nov 2020, in reply to my mail
about the panic. The panic is:

(XEN) Xen call trace:
(XEN)    [<ffff82d08031cc28>] R vpci_msix_arch_mask_entry+0x18/0x20
(XEN)    [<ffff82d08025a38a>] S drivers/vpci/msix.c#msix_write+0x18a/0x2b0
(XEN)    [<ffff82d08030d943>] S arch/x86/hvm/intercept.c#hvm_mmio_write+0x23/0x30
(XEN)    [<ffff82d08030dd19>] S hvm_process_io_intercept+0x1e9/0x260
(XEN)    [<ffff82d08030ddad>] S hvm_io_intercept+0x1d/0x40
(XEN)    [<ffff82d0802fe7ba>] S arch/x86/hvm/emulate.c#hvmemul_do_io+0x26a/0x4d0
(XEN)    [<ffff82d080259ef9>] S drivers/vpci/msix.c#msix_accept+0x9/0x20
(XEN)    [<ffff82d0802fea56>] S arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x36/0x70
(XEN)    [<ffff82d0802ff005>] S arch/x86/hvm/emulate.c#hvmemul_linear_mmio_access+0x1e5/0x300
(XEN)    [<ffff82d0802fff44>] S arch/x86/hvm/emulate.c#linear_write+0x84/0x160
(XEN)    [<ffff82d080301ca8>] S arch/x86/hvm/emulate.c#hvmemul_write+0xe8/0x100
(XEN)    [<ffff82d0802de6cc>] S x86_emulate+0x289dc/0x2cfb0
(XEN)    [<ffff82d08027c7ab>] S map_domain_page+0x4b/0x600
(XEN)    [<ffff82d080340eaa>] S __get_gfn_type_access+0x6a/0x100
(XEN)    [<ffff82d08034a367>] S arch/x86/mm/p2m-ept.c#ept_next_level+0x107/0x150
(XEN)    [<ffff82d0802e4961>] S x86_emulate_wrapper+0x21/0x60
(XEN)    [<ffff82d08030024f>] S arch/x86/hvm/emulate.c#_hvm_emulate_one+0x4f/0x220
(XEN)    [<ffff82d0803004ed>] S hvmemul_get_seg_reg+0x4d/0x50
(XEN)    [<ffff82d08030042e>] S hvm_emulate_one+0xe/0x10
(XEN)    [<ffff82d08030e4ca>] S hvm_emulate_one_insn+0x3a/0xf0
(XEN)    [<ffff82d0802e4af0>] S x86_insn_is_mem_access+0/0x260
(XEN)    [<ffff82d08030e5c9>] S handle_mmio_with_translation+0x49/0x60
(XEN)    [<ffff82d080305d78>] S hvm_hap_nested_page_fault+0x2c8/0x720
(XEN)    [<ffff82d0802fea56>] S arch/x86/hvm/emulate.c#hv(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 13:
(XEN) Assertion 'entry->arch.pirq != INVALID_PIRQ' failed at vmsi.c:843
(XEN) ****************************************

This is when it configures the Broadcom network interface, which interrupts
at "msix3 vec 0". It is the first MSI-X device configured; the previous
ones are MSI-only.
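The guard added by the attached patch can be sketched in isolation as follows. This is a hypothetical, heavily simplified model for illustration only: the struct layouts and the `msix_write_mask()` helper are invented here and are not Xen code. It shows why the arch-level mask must be skipped until MSI-X is enabled, i.e. until the entry's PIRQ has been set up:

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_PIRQ (-1)

/* Invented, simplified model of the vPCI MSI-X state involved in the
 * panic; names only loosely mirror Xen's structures. */
struct msix_entry { bool masked; int pirq; };
struct msix { bool enabled; struct msix_entry entry; };

/* Model of the fixed write path: only touch the arch entry state when
 * MSI-X is enabled.  Returns true if the mask was applied, false if it
 * had to be deferred. */
static bool msix_write_mask(struct msix *m)
{
    if (m->enabled) {
        /* the arch mask helper asserts this very condition (vmsi.c:843) */
        assert(m->entry.pirq != INVALID_PIRQ);
        m->entry.masked = true;
        return true;
    }
    /* Pre-fix, the code fell through to the arch mask here even with
     * MSI-X disabled and pirq still INVALID_PIRQ, tripping the ASSERT. */
    return false;
}
```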

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 11:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 11:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41055.74149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhpW-0003lM-Cd; Mon, 30 Nov 2020 11:58:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41055.74149; Mon, 30 Nov 2020 11:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhpW-0003lF-9i; Mon, 30 Nov 2020 11:58:54 +0000
Received: by outflank-mailman (input) for mailman id 41055;
 Mon, 30 Nov 2020 11:58:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjhpV-0003lA-P1
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 11:58:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09dd7da0-a390-4de8-8b81-374f416e528a;
 Mon, 30 Nov 2020 11:58:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 20CFFABD2;
 Mon, 30 Nov 2020 11:58:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09dd7da0-a390-4de8-8b81-374f416e528a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606737531; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5byG2FT0x2IRFALO4a+A1nUQJUqRrfL6T+ypiFVIAWo=;
	b=GqX3nQ+oQWQ1vY227YD6c4LMoJwwTmqxgyR18vJx85L6bzH9JMvJh58z4hTVPY4UB80d9k
	ALMjlLlw2QRR7kVNH+scl81k7Xh+tZXSD07E9qmrtbIsrIUtRHxb1lr39PF/Q1E6CDG93z
	tXRLe7jb8wyt5Emdy/hoiXj7uF1syHA=
Subject: Re: [PATCH v4] IOMMU: make DMA containment of quarantined devices
 optional
To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Kevin Tian' <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com>
 <013601d6c705$f09fd9a0$d1df8ce0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <221431d9-2435-f106-af46-0641f5a4e8f8@suse.com>
Date: Mon, 30 Nov 2020 12:58:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <013601d6c705$f09fd9a0$d1df8ce0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 11:45, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 27 November 2020 16:46
>>
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -1278,7 +1278,7 @@ detection of systems known to misbehave
>>  > Default: `new` unless directed-EOI is supported
>>
>>  ### iommu
>> -    = List of [ <bool>, verbose, debug, force, required, quarantine,
>> +    = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
>>                  sharept, intremap, intpost, crash-disable,
>>                  snoop, qinval, igfx, amd-iommu-perdev-intremap,
>>                  dom0-{passthrough,strict} ]
>> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
>>      will prevent Xen from booting if IOMMUs aren't discovered and enabled
>>      successfully.
>>
>> -*   The `quarantine` boolean can be used to control Xen's behavior when
>> -    de-assigning devices from guests.  If enabled (the default), Xen always
>> +*   The `quarantine` option can be used to control Xen's behavior when
>> +    de-assigning devices from guests.
>> +
>> +    When a PCI device is assigned to an untrusted domain, it is possible
>> +    for that domain to program the device to DMA to an arbitrary address.
>> +    The IOMMU is used to protect the host from malicious DMA by making
>> +    sure that the device addresses can only target memory assigned to the
>> +    guest.  However, when the guest domain is torn down, assigning the
>> +    device back to the hardware domain would allow any in-flight DMA to
>> +    potentially target critical host data.  To avoid this, quarantining
>> +    should be enabled.  Quarantining can be done in two ways: In its basic
>> +    form, all in-flight DMA will simply be forced to encounter IOMMU
>> +    faults.  Since there are systems where doing so can cause host lockup,
>> +    an alternative form is available where writes to memory will be made
>> +    fault, but reads will be directed to a dummy page.  The implication
>> +    here is that such reads will go unnoticed, i.e. an admin may not
>> +    become aware of the underlying problem.
>> +
>> +    Therefore, if this option is set to true (the default), Xen always
>>      quarantines such devices; they must be explicitly assigned back to Dom0
>> -    before they can be used there again.  If disabled, Xen will only
>> -    quarantine devices the toolstack hass arranged for getting quarantined.
>> +    before they can be used there again.  If set to "scratch-page", still
>> +    active DMA reads will additionally be directed to a "scratch" page.  If
> 
> There's inconsistency of terms here. We should choose either 'dummy page'
> or 'scratch page' (and my vote goes for the latter).

Oh, that wasn't intentional. I've replaced all "dummy" now.

> Also, rather than true or false, shouldn't we have 'off', 'basic', and
> 'scratch-page'?

I didn't want to break (or needlessly extend) the present boolean nature
of the option. Hence I only added "scratch-page". I wouldn't want to add
"basic" as an alias of "true", but if you think we really need this, then
I surely could do so. As to "off" vs "false" - both are permitted anyway
by the parsing functions. And to me (both as a programmer and as someone
who had been studying maths long ago) something that's boolean goes
rather with true/false than on/off; I can certainly change that wording
if you deem that more appropriate / helpful for the target audience.
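The scheme being discussed — keeping the option boolean-compatible while accepting one extra keyword — could be sketched roughly like this. This is a hypothetical illustration only: `parse_quarantine()` and the enum names are invented, and Xen's actual command-line parsing differs:

```c
#include <string.h>

enum quarantine_mode { QUARANTINE_OFF, QUARANTINE_BASIC, QUARANTINE_SCRATCH };

/* Invented sketch: extend a boolean option with one keyword value, as in
 * "quarantine[=scratch-page]", without breaking existing boolean usage.
 * Returns -1 on unrecognized input. */
static int parse_quarantine(const char *s)
{
    /* no value, or a recognized truthy value: plain (basic) quarantining */
    if (!s || !*s || !strcmp(s, "1") || !strcmp(s, "true") || !strcmp(s, "on"))
        return QUARANTINE_BASIC;
    if (!strcmp(s, "0") || !strcmp(s, "false") || !strcmp(s, "off"))
        return QUARANTINE_OFF;
    if (!strcmp(s, "scratch-page"))
        return QUARANTINE_SCRATCH;   /* the new extended value */
    return -1;
}
```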

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 12:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 12:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41063.74162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhzW-0004u4-GG; Mon, 30 Nov 2020 12:09:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41063.74162; Mon, 30 Nov 2020 12:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjhzW-0004tx-D2; Mon, 30 Nov 2020 12:09:14 +0000
Received: by outflank-mailman (input) for mailman id 41063;
 Mon, 30 Nov 2020 12:09:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjhzV-0004ts-4d
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 12:09:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7bbcd17d-f33e-46e6-9fe3-fcc10a745539;
 Mon, 30 Nov 2020 12:09:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D366AC91;
 Mon, 30 Nov 2020 12:09:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bbcd17d-f33e-46e6-9fe3-fcc10a745539
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606738151; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=H+Ui8SwHQZa3iP+Gts8zCQ4ObScAO+xHJ1EnaXEaAcg=;
	b=O/MR1I7qEg+DF+XDfCCqR64i0zYZIRXF8TuAMZ4vcgMN+AsYai3hSPy7qzH+xDgkT9q48h
	ZMN63bCJTVVH0WdmgBmeeYo6PdxE8l21goTXTSOxNqIefvrfyBsEtsxPGvnUQ9Go2IXAID
	tThQjnEU8MgzKZEUSVLcKr8k2mT90zQ=
Subject: Re: NetBSD dom0 PVH: hardware interrupts stalls
To: Manuel Bouyer <bouyer@antioche.eu.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201127133121.GN1717@antioche.eu.org>
 <96aa5a9b-3f4a-ce9d-0f41-4a24d409ed55@suse.com>
 <20201127135929.GR1717@antioche.eu.org>
 <20201127202211.eqrxloii5x54zode@Air-de-Roger>
 <20201127214420.GA637@antioche.eu.org>
 <20201128145311.3gmzq5lnkz6ajdtr@Air-de-Roger>
 <20201128171430.GB631@antioche.eu.org>
 <819e859e-0fd2-cdbf-6126-46c924364d12@suse.com>
 <20201130113527.GE1084@antioche.eu.org>
 <7e284ec6-a3a3-6c04-ce48-10a8290304d5@suse.com>
 <20201130115013.GF1084@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9fc635f0-3a80-892f-cc7a-8c30e35a6f2d@suse.com>
Date: Mon, 30 Nov 2020 13:09:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <20201130115013.GF1084@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.11.2020 12:50, Manuel Bouyer wrote:
> On Mon, Nov 30, 2020 at 12:44:23PM +0100, Jan Beulich wrote:
>> On 30.11.2020 12:35, Manuel Bouyer wrote:
>>> On Mon, Nov 30, 2020 at 11:00:23AM +0100, Jan Beulich wrote:
>>>> On 28.11.2020 18:14, Manuel Bouyer wrote:
>>>>> On Sat, Nov 28, 2020 at 03:53:11PM +0100, Roger Pau Monné wrote:
>>>>>>> the trace is at
>>>>>>> http://www-soc.lip6.fr/~bouyer/xen-log13.txt
>>>>>>
>>>>>> Thanks! I think I've found the issue and I'm attaching a possible fix
>>>>>> (fix.patch) to this email. In any case I've also attached a further
>>>>>> debug patch, in case the fix turns out to be wrong. Please test the
>>>>>> fix first, as the debug patch will end up triggering a panic when the
>>>>>> buffer is full.
>>>>>
>>>>> Yes, fix.patch does make the system boot as expected !
>>>>
>>>> May I translate this to a Tested-by?
>>>>
>>>> Patch also
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> Thanks much to both of you for all the effort here!
>>>
>>> Also, please don't forget the attached patch !
>>> Without it, the hypervisor panics.
>>
>> Well - this one still needs a proper description and S-o-b.
>> The other one came in immediately consumable shape right away.
> 
> The patch was sent by Roger on 12 Nov 2020, in reply to my mail
> about the panic.

I'm aware, but that wasn't a patch I can take and commit. I'm not
even entirely certain the code change is the final one, not least
because I haven't yet seen a description of the change.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 12:13:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 12:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41070.74174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kji3w-0005qa-3S; Mon, 30 Nov 2020 12:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41070.74174; Mon, 30 Nov 2020 12:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kji3v-0005qT-Vj; Mon, 30 Nov 2020 12:13:47 +0000
Received: by outflank-mailman (input) for mailman id 41070;
 Mon, 30 Nov 2020 12:13:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kji3u-0005qO-AP
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 12:13:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kji3t-0003Zf-5T; Mon, 30 Nov 2020 12:13:45 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=edge-cache-235.e-lhr50.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kji3s-00010A-Oy; Mon, 30 Nov 2020 12:13:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=o9iTScdYJq7mKlr6B5dIRzWBzACVKFfWdXR7I9EiAmI=; b=37yj4hV7WoLKSnHWJII0VgkCNm
	pUaDaxNWmnEv+HvzwXZAFBYhSh4oOfzwP82xRwL5Yl3ZVVACdXL6WBIniepAzogThVaGjFUb0wIdh
	HMf99fDlQLGksZ+fSIN0rhr4JQ8Btj3uREM716LScF23Ch20q+QOBlFukscbK+0xNzA0=;
Message-ID: <23cd67ea1b96ba3f8801a3cf13549298597cb331.camel@xen.org>
Subject: Re: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, jgrall@amazon.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Date: Mon, 30 Nov 2020 12:13:40 +0000
In-Reply-To: <600d3ea4-f905-3aab-e110-da3bd0d4b38a@suse.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
	 <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
	 <41d9d8d4-d5cb-8350-c118-c9e1fe73b6d0@suse.com>
	 <a4f02c292a369cfd771790b1d164f139fec6bead.camel@xen.org>
	 <f25e278f-2d63-d806-4650-983df490556f@xen.org>
	 <d75fd45c-3f66-63c9-90c7-90dc10fc5763@suse.com>
	 <8bb9eb92-ede4-0fa4-d21f-c7976fe70acf@xen.org>
	 <622a8319-a439-72f2-c045-15e7611a22e7@suse.com>
	 <3db3081d-232a-cce1-cfce-c657be64a0dd@xen.org>
	 <600d3ea4-f905-3aab-e110-da3bd0d4b38a@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

Sorry for the late reply. Been busy with something else.

On Tue, 2020-08-18 at 18:16 +0200, Jan Beulich wrote:
> On 18.08.2020 15:08, Julien Grall wrote:
> > Hi Jan,
> > 
> > On 18/08/2020 12:30, Jan Beulich wrote:
> > > On 18.08.2020 12:13, Julien Grall wrote:
> > > > Hi Jan,
> > > > 
> > > > On 18/08/2020 09:49, Jan Beulich wrote:
> > > > > On 13.08.2020 19:22, Julien Grall wrote:
> > > > > > Hi,
> > > > > > 
> > > > > > On 13/08/2020 17:08, Hongyan Xia wrote:
> > > > > > > On Fri, 2020-08-07 at 16:05 +0200, Jan Beulich wrote:
> > > > > > > > On 27.07.2020 16:21, Hongyan Xia wrote:
> > > > > > > > > From: Wei Liu <wei.liu2@citrix.com>
> > > > > > > > > 
> > > > > > > > > Rewrite those functions to use the new APIs. Modify
> > > > > > > > > its callers to
> > > > > > > > > unmap
> > > > > > > > > the pointer returned. Since alloc_xen_pagetable_new()
> > > > > > > > > is almost
> > > > > > > > > never
> > > > > > > > > useful unless accompanied by page clearing and a
> > > > > > > > > mapping, introduce
> > > > > > > > > a
> > > > > > > > > helper alloc_map_clear_xen_pt() for this sequence.
> > > > > > > > > 
> > > > > > > > > Note that the change of virt_to_xen_l1e() also
> > > > > > > > > requires
> > > > > > > > > vmap_to_mfn() to
> > > > > > > > > unmap the page, which requires domain_page.h header
> > > > > > > > > in vmap.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > > > > > > > > Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> > > > > > > > > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > > > > > > > > 
> > > > > > > > > ---
> > > > > > > > > Changed in v8:
> > > > > > > > > - s/virtual address/linear address/.
> > > > > > > > > - BUG_ON() on NULL return in vmap_to_mfn().
> > > > > > > > 
> > > > > > > > The justification for this should be recorded in the
> > > > > > > > description. In
> > > > > > > 
> > > > > > > Will do.
> > > > > > > 
> > > > > > > > reply to v7 I did even suggest how to easily address
> > > > > > > > the issue you
> > > > > > > > did notice with large pages, as well as alternative
> > > > > > > > behavior for
> > > > > > > > vmap_to_mfn().
> > > > > > > 
> > > > > > > One thing about adding SMALL_PAGES is that vmap is common
> > > > > > > code and I am
> > > > > > > not sure if the Arm side is happy with it.
> > > > > > 
> > > > > > At the moment, Arm is only using small mapping but I plan
> > > > > > to change that soon because we have regions that can be
> > > > > > fairly big.
> > > > > > 
> > > > > > Regardless that, the issue with vmap_to_mfn() is rather x86
> > > > > > specific. So I don't particularly like the idea to expose
> > > > > > such trick in common code.
> > > > > > 
> > > > > > Even on x86, I think this is not the right approach. Such
> > > > > > band-aid will impact the performance as, assuming
> > > > > > superpages are used, it will take longer to map and add
> > > > > > pressure on the TLBs.
> > > > > > 
> > > > > > I am aware that superpages will be useful for LiveUpdate,
> > > > > > but is there any use cases in upstream?
> > > > > 
> > > > > Superpage use by vmalloc() is purely occasional: You'd have
> > > > > to vmalloc()
> > > > > 2Mb or more _and_ the page-wise allocation ought to return
> > > > > 512
> > > > > consecutive pages in the right order. Getting 512 consecutive
> > > > > pages is
> > > > > possible in practice, but with the page allocator allocating
> > > > > top-down it
> > > > > is very unlikely for them to be returned in increasing-sorted 
> > > > > order.
> > > > 
> > > > So your assumption here is vmap_to_mfn() can only be called on
> > > > vmalloc-ed() area. While this may be the case in Xen today, the
> > > > name clearly suggest it can be called on all vmap-ed region.
> > > 
> > > No, I don't make this assumption, and I did spell this out in an
> > > earlier
> > > reply to Hongyan: Parties using vmap() on a sufficiently large
> > > address
> > > range with consecutive MFNs simply have to be aware that they may
> > > not
> > > call vmap_to_mfn().
> > 
> > You make it sounds easy to be aware, however there are two
> > implementations of vmap_to_mfn() (one per arch). Even looking at
> > the x86 version, it is not obvious there is a restriction.
> 
> I didn't mean to make it sound like this - I agree it's not an
> obvious
> restriction.
> 
> > So I am a bit concerned of the "misuse" of the function. This could
> > possibly be documented. Although, I am not entirely happy to
> > restrict the use because of x86.
> 
> Unless the underlying issue gets fixed, we need _some_ form of bodge.
> I'm not really happy with the BUG_ON() as proposed by Hongyan. You're
> not really happy with my first proposed alternative, and you didn't
> comment on the 2nd one (still kept in context below). Not sure what
> to do: Throw dice?

Actually I did not propose the BUG_ON() fix. I was just in favor of it
when Jan presented it as a choice in v7.

The reason I am in favor of it is that even without it, the existing
x86 code already BUG_ON()s when vmap contains a superpage, so the other
alternatives don't behave any differently for superpages anyway. I am
also not sure about returning INVALID_MFN, because if virt_to_xen_l1e()
really returns NULL, then we are calling vmap_to_mfn() on a non-vmap
address (one that isn't even populated), which frankly deserves at least
an ASSERT(). So I don't think BUG_ON() is a bad idea for now, until
vmap_to_mfn() supports superpages.
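As a rough, self-contained illustration of the behaviour being argued for
(a toy mock, not Xen's real page-table code; virt_to_l1e(), the table, and
the shift are invented, and assert() stands in for Xen's BUG_ON()):

```c
/* Hypothetical mock of the debated semantics: the lookup helper returns
 * NULL for an unpopulated address, and vmap_to_mfn() crashes loudly on
 * that instead of returning an INVALID_MFN the caller might silently
 * propagate. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t mfn_t;
#define NR_SLOTS 16

/* A toy "L1 table": slot -> mfn, 0 meaning "not populated". */
static mfn_t l1_table[NR_SLOTS];

/* Stand-in for virt_to_xen_l1e(): NULL when nothing is mapped there. */
static mfn_t *virt_to_l1e(uintptr_t va)
{
    size_t slot = (va >> 12) % NR_SLOTS;
    return l1_table[slot] ? &l1_table[slot] : NULL;
}

static mfn_t vmap_to_mfn(uintptr_t va)
{
    mfn_t *pl1e = virt_to_l1e(va);
    /* The BUG_ON() variant: a NULL here means the caller passed an
     * address that was never vmap()-ed, which is a caller bug. */
    assert(pl1e != NULL);
    return *pl1e;
}
```

The INVALID_MFN alternative would replace the assert() with
`if (!pl1e) return INVALID_MFN;`, pushing the error back to every caller.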

Of course, we could use MAP_SMALL_PAGES to avoid the whole problem, but
Arm may not be happy with that. After a quick chat with Julien: how about
introducing ARCH_VMAP_FLAGS and forcing small pages only on x86?

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 12:16:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 12:16:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41077.74186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kji6l-0005zr-Hd; Mon, 30 Nov 2020 12:16:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41077.74186; Mon, 30 Nov 2020 12:16:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kji6l-0005zk-Eb; Mon, 30 Nov 2020 12:16:43 +0000
Received: by outflank-mailman (input) for mailman id 41077;
 Mon, 30 Nov 2020 12:16:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kji6k-0005zb-1w; Mon, 30 Nov 2020 12:16:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kji6j-0003ea-T0; Mon, 30 Nov 2020 12:16:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kji6j-0007vn-En; Mon, 30 Nov 2020 12:16:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kji6j-0003IU-EI; Mon, 30 Nov 2020 12:16:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c+85PyoNL5o6Odc9ulOr2UO3grKmtif0qHfB3ZRgiC8=; b=yeEHx/BIoLaLNr0m/UXQpJLCQs
	xDIIMX+lRVLgFu8b9gxy7TZLc7wwcNZ11zupq1sL7ObOeQZEeh2z8JAnPimig9mCoYmM+qKbVRwPq
	xArL9zZJzGpxH0EiJYIBu9OYDfu7nnvdeocA5IxDcnawocGrAkAsdbcqF+49prx9Co0k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157102-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 157102: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
X-Osstest-Versions-That:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 12:16:41 +0000

flight 157102 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157102/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 157082
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 157082
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 157082
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 157082
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 157082
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 157082
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 157082
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 157082
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 157082
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157082
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 157082
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 157082
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
baseline version:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5

Last test of basis   157102  2020-11-30 01:52:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 12:16:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 12:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41079.74201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kji6r-00062r-0F; Mon, 30 Nov 2020 12:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41079.74201; Mon, 30 Nov 2020 12:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kji6q-00062i-TO; Mon, 30 Nov 2020 12:16:48 +0000
Received: by outflank-mailman (input) for mailman id 41079;
 Mon, 30 Nov 2020 12:16:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kBYl=FE=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kji6p-00062H-Tv
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 12:16:48 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 591ac46f-45b7-48d4-bf2d-89e9cb00ab6e;
 Mon, 30 Nov 2020 12:16:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 591ac46f-45b7-48d4-bf2d-89e9cb00ab6e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606738606;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=RGaoivVpLl6U9E9xdyq9SBGAoGEAgDGE+pZ5sloPuvA=;
  b=LzxhhegQpZGjd5CmN6XEcLAx3LeJOT6jNPP8zK1onG4A7KLKZwLKVLX8
   EDtrKWh0pcFOc8uuVwba6KXw5gnDPwwJk50H62ZPB1THVy4qWSWwGcfMG
   n97wZjqxqJ5sIVr305fzWqcUd98VvxjXtV7RWmSAV3F6JrP07YuEpr0ed
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 32122689
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,381,1599537600"; 
   d="scan'208";a="32122689"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IsT0mKFM1VaMFkeBZNCaa/CCRv9iEVbjFqq5DNQGr0U0ySEHZhvbL+gMqH/dygcWSLmQed0mLEuC8VTmO//aoJUOOgG31gdGIydZxOaqebzf+GSKkTrjZ/1o5mF6tWSu1sO3aCCwUGyoNv+xjqVg84ROw/kB7vEVk3t9yqCWvSdxbkGNEHokjfPMeBLwSch1FHCZ6YsLVqrNVbs+LeaS6xkBJYUrSeoEfV9pZFjEM4AI92HEM0WNI0OKJVO2U938HYEZXlQT9EMM2pdnB5qW5xF/cMP3mE1ZXL/YWiSzPZsOaF1Ubgf3Zv0foOTsn9xpvftGBGp9SLvddX6TooHKqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RGaoivVpLl6U9E9xdyq9SBGAoGEAgDGE+pZ5sloPuvA=;
 b=bznFUHH+bgdXWjSaYjrlkXoDKdzn5R1FCV6AH97ur3jad2C99RCe8hSKApdsC1Xfgs7tyjXYJE1uTBkNyaCkL5z7rNfPYGjjAYTdb5nDKIIow6Srv+Xbw31FXj+CwDGXsX1IQSHoZbYK+87gRcl6vhGD5FQrOG6y6YT64fG5P+m715HrQXLi59pXVIOPaQnxKr7QNe3dvzmVK92X+6jx0Gnz/ZXgF93bt9UUxwP7l6j4XYjGPh8qqEU5efU1/rhb9jIp6N1DK/mSaUoICZEMEVIje/wsOv+Uq6KAAm1hLXZMv2ih6AaCfPpKlOsKXEbxeO5JKt8JQFgb3E8F2X6KSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RGaoivVpLl6U9E9xdyq9SBGAoGEAgDGE+pZ5sloPuvA=;
 b=NoaxjU+z/lcVNoJrfH1wWwVi6taewVEa08+3QLSalitHye2rmTBhM6WrXs6LW0wNwHeErGbFoR+on22DrNlSngh1znmHUoJww3WMq5F9/scJNZWwGZLlfyAu+wEJv4cm4waZQz5nXFCl/uookKwgjOu6ENR82D0n8QiuoI2CIGE=
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>
Subject: Re: [ANNOUNCE] Call for agenda items for December 2020 Community Call
 @ 16:00 UTC
Thread-Topic: [ANNOUNCE] Call for agenda items for December 2020 Community
 Call @ 16:00 UTC
Thread-Index: AQHWxLPI3g+PARWmhEi8Mt/igt/QK6ngfPmAgAAe7oA=
Date: Mon, 30 Nov 2020 12:16:40 +0000
Message-ID: <49AA35F9-5056-4D42-AE1D-5A478B0CDF7B@citrix.com>
References: <6A1AC739-EB53-4996-A99B-EE68358E70DB@citrix.com>
 <6da4cd56-7364-bc6e-24d8-02976dbd637d@suse.com>
In-Reply-To: <6da4cd56-7364-bc6e-24d8-02976dbd637d@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 12a424b4-9334-4125-91d3-08d89529cb51
x-ms-traffictypediagnostic: BYAPR03MB3638:
x-microsoft-antispam-prvs: <BYAPR03MB36381CCC4AA19ABB994EC0A399F50@BYAPR03MB3638.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6AAC1EF8C62FC245B4AD6D374600F2C9@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 12a424b4-9334-4125-91d3-08d89529cb51
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 12:16:40.9374
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lAfzS4B8bgTLe5c+hMSY0xQQwmFkVwokbbozt//xzRjVTP3+lhqsYFYeCAXtt5ampnKX2PfdqbiSP+Etz6iOLtwtHDbF8o7fkEFA08ITY7E=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3638
X-OriginatorOrg: citrix.com

> On Nov 30, 2020, at 10:25 AM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 27.11.2020 12:52, George Dunlap wrote:
>> The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/OPN55rXaOncupuWuHxtddzWJ/ and you can edit to add items.  Alternatively, you can reply to this mail directly.
> 
> The "New series / series requiring attention" section is gone. Was
> this intentional? If not, I would have wanted to propose that items
> from that list which we didn't get to on the previous call be
> automatically propagated. According to my observation it is more
> likely than not that nothing would have changed in their status.
> Hence it may be easier to take one off the list if indeed it has
> got unstalled.

Oops — I meant to delete the content, but not the header.

Hopefully “not getting to that part of the call” should be rare; but yes, copying it over (perhaps with a color to indicate that it’s been carried over from last time) sounds reasonable.  I’ll do that.


>> ## Dial in details
>> Web: https://www.gotomeet.me/GeorgeDunlap
>> 
>> You can also dial in using your phone.
>> Access Code: 168-682-109
>> 
>> China (Toll Free): 4008 811084
>> Germany: +49 692 5736 7317
> 
> From last month's meeting:
> 
>   Germany: +49 721 9881 4161

Thanks.  I’d update it in my template, but it looks like Citrix is going to be switching away from GTM anyway.

 -George


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 12:31:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 12:31:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41107.74213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjiKV-00080B-BV; Mon, 30 Nov 2020 12:30:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41107.74213; Mon, 30 Nov 2020 12:30:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjiKV-000804-7V; Mon, 30 Nov 2020 12:30:55 +0000
Received: by outflank-mailman (input) for mailman id 41107;
 Mon, 30 Nov 2020 12:30:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a3VR=FE=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kjiKT-0007zz-Gb
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 12:30:53 +0000
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 046ee015-77f2-455e-a5ab-73d3674ca24a;
 Mon, 30 Nov 2020 12:30:52 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id h21so24911435wmb.2
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 04:30:52 -0800 (PST)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:8931:214a:807a:cb80])
 by smtp.gmail.com with ESMTPSA id t136sm24840963wmt.18.2020.11.30.04.30.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 30 Nov 2020 04:30:51 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 046ee015-77f2-455e-a5ab-73d3674ca24a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=dBtgtnNJH3xBmi7xgLCoFbo4xZGyYbGw3S9XIhTwjyk=;
        b=XOvroQPsAx074IVQFN4F3wmKFcE1+cUEdaG6vskqhWIw/ZqvoV+yzaa88BHQrqAGbq
         6O7Dhz7ce4XEXT0wsSpaqAtezirStKSGYVHeQ3aJzxUWhh68CNlPUCjAWGxplhr9hWQF
         W1B8+wGkvX1KWVyLjwavCQgRs//tlng6p3Os9Ug8vEVT5uh7doV30EZ3weruq2m2sznB
         Yi2PGnyJKA6f2sv3NJFJsby/QxAZ/gZoT9JcbNJjZ4Jtp1+Aq6YmyyDcw8W3df4Nt38W
         6a1HNRhY/jwbPUBjsIlQfoAt2iWYDDTAzPEkRWgxo1fmBMu1Hhz5KqwlYIXTg+bZB0B/
         hfXA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=dBtgtnNJH3xBmi7xgLCoFbo4xZGyYbGw3S9XIhTwjyk=;
        b=A2EU1wiaY8Mg5CCs/E4lCUV5O6NJfLu0PRXMq2ZX718wAbNb4TadbiJQjtiZhPB+Sz
         /LAfA1sDHc1aUG8QqDH4aCskf8Gs6NT0TLeZt2uCHb4mBjlTRwKrh/XzkH00QGYaKiEu
         +wYFg+8hYK1a3YEtQcvuqz2FLd2oVH2nIHYRzsgvHL9lGS/dfRT+jOCfS2xVHQD+cW7L
         uXzoiOG3334SZ4xQ1Ww9yZ+hN5WVw/WKjJPDMqJZOVoOu3pqGRLsNAJgQd4z9YXjL9wz
         5FqgLGVxSvXgsaRedoV7+LHq+v18VhP1pot28XTSJNZ5DfQxHJ/PLeR7CGISBYgMek08
         IBfA==
X-Gm-Message-State: AOAM532M60o7DqkL8DDatdVrsn9pShHeVRX0H7cS1dvZJM9LdQcoZsPy
	mFKAAswwr4FUuW+LKLEu2go=
X-Google-Smtp-Source: ABdhPJw139Atj6av+JfjKFdaKCBKc71kcLcGZS4IPA+5E8nNBia3R7jEE9XMY9UwJXuXk9Pzvp/b/Q==
X-Received: by 2002:a1c:f315:: with SMTP id q21mr12552776wmq.1.1606739451603;
        Mon, 30 Nov 2020 04:30:51 -0800 (PST)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	<xen-devel@lists.xenproject.org>
References: <c78e09fa-606c-c6c4-e9db-b57cb50ee5e2@suse.com> <013601d6c705$f09fd9a0$d1df8ce0$@xen.org> <221431d9-2435-f106-af46-0641f5a4e8f8@suse.com>
In-Reply-To: <221431d9-2435-f106-af46-0641f5a4e8f8@suse.com>
Subject: RE: [PATCH v4] IOMMU: make DMA containment of quarantined devices optional
Date: Mon, 30 Nov 2020 12:30:50 -0000
Message-ID: <013d01d6c714$a379a520$ea6cef60$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQI15LetYabIv76Hs54hT8VKRuHZ1QKvQI13ATnQLI2pAxZn4A==
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 30 November 2020 11:59
> To: paul@xen.org
> Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Kevin Tian' <kevin.tian@intel.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v4] IOMMU: make DMA containment of quarantined devices optional
> 
> On 30.11.2020 11:45, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 27 November 2020 16:46
> >>
> >> --- a/docs/misc/xen-command-line.pandoc
> >> +++ b/docs/misc/xen-command-line.pandoc
> >> @@ -1278,7 +1278,7 @@ detection of systems known to misbehave
> >>  > Default: `new` unless directed-EOI is supported
> >>
> >>  ### iommu
> >> -    = List of [ <bool>, verbose, debug, force, required, quarantine,
> >> +    = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
> >>                  sharept, intremap, intpost, crash-disable,
> >>                  snoop, qinval, igfx, amd-iommu-perdev-intremap,
> >>                  dom0-{passthrough,strict} ]
> >> @@ -1316,11 +1316,32 @@ boolean (e.g. `iommu=no`) can override t
> >>      will prevent Xen from booting if IOMMUs aren't discovered and enabled
> >>      successfully.
> >>
> >> -*   The `quarantine` boolean can be used to control Xen's behavior when
> >> -    de-assigning devices from guests.  If enabled (the default), Xen always
> >> +*   The `quarantine` option can be used to control Xen's behavior when
> >> +    de-assigning devices from guests.
> >> +
> >> +    When a PCI device is assigned to an untrusted domain, it is possible
> >> +    for that domain to program the device to DMA to an arbitrary address.
> >> +    The IOMMU is used to protect the host from malicious DMA by making
> >> +    sure that the device addresses can only target memory assigned to the
> >> +    guest.  However, when the guest domain is torn down, assigning the
> >> +    device back to the hardware domain would allow any in-flight DMA to
> >> +    potentially target critical host data.  To avoid this, quarantining
> >> +    should be enabled.  Quarantining can be done in two ways: In its basic
> >> +    form, all in-flight DMA will simply be forced to encounter IOMMU
> >> +    faults.  Since there are systems where doing so can cause host lockup,
> >> +    an alternative form is available where writes to memory will be made
> >> +    fault, but reads will be directed to a dummy page.  The implication
> >> +    here is that such reads will go unnoticed, i.e. an admin may not
> >> +    become aware of the underlying problem.
> >> +
> >> +    Therefore, if this option is set to true (the default), Xen always
> >>      quarantines such devices; they must be explicitly assigned back to Dom0
> >> -    before they can be used there again.  If disabled, Xen will only
> >> -    quarantine devices the toolstack hass arranged for getting quarantined.
> >> +    before they can be used there again.  If set to "scratch-page", still
> >> +    active DMA reads will additionally be directed to a "scratch" page.  If
> >
> > There's inconsistency of terms here. We should choose either 'dummy page'
> > or 'scratch page' (and my vote goes for the latter).
> 
> Oh, that wasn't intentional. I've replaced all "dummy" now.
> 
> > Also, rather than true or false, shouldn't we have 'off', 'basic', and
> > 'scratch-page'?
> 
> I didn't want to break (or needlessly extend) the present boolean nature
> of the option. Hence I only added "scratch-page". I wouldn't want to add
> "basic" as an alias of "true", but if you think we really need this, then
> I surely could do so. As to "off" vs "false" - both are permitted anyway
> by the parsing functions. And to me (both as a programmer and as someone
> who had been studying maths long ago) something that's boolean goes
> rather with true/false than on/off; I can certainly change that wording
> if you deem that more appropriate / helpful for the target audience.
> 

I think that once the option gained a third value it ceased to be a boolean and hence setting to true/false is really only for compatibility, hence I think an 'off', 'basic', 'scratch-page' enum would now make more sense (even if true/false still works underneath the covers).
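
For illustration, the tri-state parsing described above (keeping true/false working for compatibility while accepting an enum value) could be sketched roughly as below. The names `quarantine_mode` and `parse_quarantine` are made up for this sketch and do not reflect Xen's actual parsing code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical tri-state replacing the former boolean option. */
enum quarantine_mode {
    QUARANTINE_OFF,          /* no quarantining on de-assign */
    QUARANTINE_BASIC,        /* in-flight DMA forced to fault */
    QUARANTINE_SCRATCH_PAGE, /* reads redirected to a scratch page */
};

/*
 * Sketch of a parser that keeps the boolean spellings working for
 * compatibility while accepting the proposed enum values.
 * Returns 0 on success, -1 on unrecognized input.
 */
static int parse_quarantine(const char *s, enum quarantine_mode *mode)
{
    if (!strcmp(s, "true") || !strcmp(s, "on") || !strcmp(s, "basic"))
        *mode = QUARANTINE_BASIC;
    else if (!strcmp(s, "false") || !strcmp(s, "off"))
        *mode = QUARANTINE_OFF;
    else if (!strcmp(s, "scratch-page"))
        *mode = QUARANTINE_SCRATCH_PAGE;
    else
        return -1;
    return 0;
}
```

On the command line this would presumably correspond to something like `iommu=quarantine=scratch-page`, per the `quarantine[=scratch-page]` syntax in the patch above.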

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 12:51:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 12:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41122.74231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjidm-0001Wr-73; Mon, 30 Nov 2020 12:50:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41122.74231; Mon, 30 Nov 2020 12:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjidm-0001Wk-3o; Mon, 30 Nov 2020 12:50:50 +0000
Received: by outflank-mailman (input) for mailman id 41122;
 Mon, 30 Nov 2020 12:50:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjidl-0001Wf-4S
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 12:50:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52ac2cf8-4fae-4004-bdf7-88a38b749d1b;
 Mon, 30 Nov 2020 12:50:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6E508AC75;
 Mon, 30 Nov 2020 12:50:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52ac2cf8-4fae-4004-bdf7-88a38b749d1b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606740646; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KiVciFoQ8MAzUOLsUizOihN5Ldueegxfk4UdjO8hms0=;
	b=Q26+iVJTvAzlXRcy5VMaqN+bmsyZg+3+cxBiKs/t69n+4Y2Sym9WNIqApP99BM4gazllca
	EWsnzjZ1VfWQhLKlxLTGBLXbTRP9z0r+kiUu4ZDi4d41VE16gdhcr3Uqw0rD8eR3LVmv6S
	z6wdIX6Eb5b6rI+8e6iGwcTxEf/L3h0=
Subject: Re: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e
To: Hongyan Xia <hx242@xen.org>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, jgrall@amazon.com,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1595857947.git.hongyxia@amazon.com>
 <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
 <41d9d8d4-d5cb-8350-c118-c9e1fe73b6d0@suse.com>
 <a4f02c292a369cfd771790b1d164f139fec6bead.camel@xen.org>
 <f25e278f-2d63-d806-4650-983df490556f@xen.org>
 <d75fd45c-3f66-63c9-90c7-90dc10fc5763@suse.com>
 <8bb9eb92-ede4-0fa4-d21f-c7976fe70acf@xen.org>
 <622a8319-a439-72f2-c045-15e7611a22e7@suse.com>
 <3db3081d-232a-cce1-cfce-c657be64a0dd@xen.org>
 <600d3ea4-f905-3aab-e110-da3bd0d4b38a@suse.com>
 <23cd67ea1b96ba3f8801a3cf13549298597cb331.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1dab4032-6ae1-bf77-c183-c62ca06f0ad8@suse.com>
Date: Mon, 30 Nov 2020 13:50:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <23cd67ea1b96ba3f8801a3cf13549298597cb331.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 13:13, Hongyan Xia wrote:
> On Tue, 2020-08-18 at 18:16 +0200, Jan Beulich wrote:
>> On 18.08.2020 15:08, Julien Grall wrote:
>>> On 18/08/2020 12:30, Jan Beulich wrote:
>>>> On 18.08.2020 12:13, Julien Grall wrote:
>>>>> On 18/08/2020 09:49, Jan Beulich wrote:
>>>>>> On 13.08.2020 19:22, Julien Grall wrote:
>>>>>>> On 13/08/2020 17:08, Hongyan Xia wrote:
>>>>>>>> On Fri, 2020-08-07 at 16:05 +0200, Jan Beulich wrote:
>>>>>>>>> On 27.07.2020 16:21, Hongyan Xia wrote:
>>>>>>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>>>>>>>
>>>>>>>>>> Rewrite those functions to use the new APIs. Modify
>>>>>>>>>> its callers to
>>>>>>>>>> unmap
>>>>>>>>>> the pointer returned. Since alloc_xen_pagetable_new()
>>>>>>>>>> is almost
>>>>>>>>>> never
>>>>>>>>>> useful unless accompanied by page clearing and a
>>>>>>>>>> mapping, introduce
>>>>>>>>>> a
>>>>>>>>>> helper alloc_map_clear_xen_pt() for this sequence.
>>>>>>>>>>
>>>>>>>>>> Note that the change of virt_to_xen_l1e() also
>>>>>>>>>> requires
>>>>>>>>>> vmap_to_mfn() to
>>>>>>>>>> unmap the page, which requires domain_page.h header
>>>>>>>>>> in vmap.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>>>>>>>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
>>>>>>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>>>>>>
>>>>>>>>>> ---
>>>>>>>>>> Changed in v8:
>>>>>>>>>> - s/virtual address/linear address/.
>>>>>>>>>> - BUG_ON() on NULL return in vmap_to_mfn().
>>>>>>>>>
>>>>>>>>> The justification for this should be recorded in the
>>>>>>>>> description. In
>>>>>>>>
>>>>>>>> Will do.
>>>>>>>>
>>>>>>>>> reply to v7 I did even suggest how to easily address
>>>>>>>>> the issue you
>>>>>>>>> did notice with large pages, as well as alternative
>>>>>>>>> behavior for
>>>>>>>>> vmap_to_mfn().
>>>>>>>>
>>>>>>>> One thing about adding SMALL_PAGES is that vmap is common
>>>>>>>> code and I am
>>>>>>>> not sure if the Arm side is happy with it.
>>>>>>>
>>>>>>> At the moment, Arm is only using small mapping but I plan
>>>>>>> to change that soon because we have regions that can be
>>>>>>> fairly big.
>>>>>>>
>>>>>>> Regardless that, the issue with vmap_to_mfn() is rather x86
>>>>>>> specific. So I don't particularly like the idea to expose
>>>>>>> such trick in common code.
>>>>>>>
>>>>>>> Even on x86, I think this is not the right approach. Such
>>>>>>> band-aid will impact the performance as, assuming
>>>>>>> superpages are used, it will take longer to map and add
>>>>>>> pressure on the TLBs.
>>>>>>>
>>>>>>> I am aware that superpages will be useful for LiveUpdate,
>>>>>>> but is there any use cases in upstream?
>>>>>>
>>>>>> Superpage use by vmalloc() is purely occasional: You'd have to
>>>>>> vmalloc() 2Mb or more _and_ the page-wise allocation ought to
>>>>>> return 512 consecutive pages in the right order. Getting 512
>>>>>> consecutive pages is possible in practice, but with the page
>>>>>> allocator allocating top-down it is very unlikely for them to
>>>>>> be returned in increasing-sorted order.
>>>>>
>>>>> So your assumption here is that vmap_to_mfn() can only be called
>>>>> on a vmalloc()-ed area. While this may be the case in Xen today,
>>>>> the name clearly suggests it can be called on any vmap-ed region.
>>>>
>>>> No, I don't make this assumption, and I did spell this out in an
>>>> earlier reply to Hongyan: Parties using vmap() on a sufficiently
>>>> large address range with consecutive MFNs simply have to be aware
>>>> that they may not call vmap_to_mfn().
>>>
>>> You make it sound easy to be aware of; however, there are two
>>> implementations of vmap_to_mfn() (one per arch). Even looking at
>>> the x86 version, it is not obvious there is a restriction.
>>
>> I didn't mean to make it sound like this - I agree it's not an
>> obvious restriction.
>>
>>> So I am a bit concerned about possible "misuse" of the function.
>>> This could be documented, although I am not entirely happy to
>>> restrict the use because of x86.
>>
>> Unless the underlying issue gets fixed, we need _some_ form of bodge.
>> I'm not really happy with the BUG_ON() as proposed by Hongyan. You're
>> not really happy with my first proposed alternative, and you didn't
>> comment on the 2nd one (still kept in context below). Not sure what
>> to do: Throw dice?
> 
> Actually I did not propose the BUG_ON() fix. I was just in favor of it
> when Jan presented it as a choice in v7.
> 
> The reason I am in favor of it is that even without it, the existing
> x86 code already BUG_ON() when vmap has a superpage anyway, so it's not
> like other alternatives behave any differently for superpages. I am
> also not sure about returning INVALID_MFN because if virt_to_xen_l1e()
> really returns NULL, then we are calling vmap_to_mfn() on a non-vmap
> address (not even populated) which frankly deserves at least ASSERT().
> So, I don't think BUG_ON() is a bad idea for now before vmap_to_mfn()
> supports superpages.
> 
> Of course, we could use MAP_SMALL_PAGES to avoid the whole problem, but
> Arm may not be happy. After a quick chat with Julien, how about having
> ARCH_VMAP_FLAGS and only small pages for x86?

Possibly, albeit this will then make it look less like a bodge and
more like we would want to keep it this way. How difficult would it
be to actually make the thing work with superpages? Is it more than
simply
(pseudocode, potentially needed locking omitted):

vmap_to_mfn(va) {
    pl1e = virt_to_xen_l1e(va);
    if ( pl1e )
        return l1e_get_mfn(*pl1e);
    pl2e = virt_to_xen_l2e(va);
    if ( pl2e )
        return l2e_get_mfn(*pl2e) + suitable_bits(va);
    return l3e_get_mfn(*virt_to_xen_l3e(va)) + suitable_bits(va);
}

(assuming virt_to_xen_l<N>e() would be returning NULL in such a case)?
Not very efficient, but not needed anywhere anyway - the sole user of
the construct is domain_page_map_to_mfn(), which maps only individual
pages. (An even better option would be to avoid the recurring walk of
the higher levels by using only virt_to_xen_l3e() and then doing the
remaining steps "by hand". This would at once allow to avoid the here
unwanted / unneeded checking for whether page tables need allocating.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 13:02:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 13:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41132.74242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjip3-0002du-BT; Mon, 30 Nov 2020 13:02:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41132.74242; Mon, 30 Nov 2020 13:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjip3-0002dn-8X; Mon, 30 Nov 2020 13:02:29 +0000
Received: by outflank-mailman (input) for mailman id 41132;
 Mon, 30 Nov 2020 13:02:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjip1-0002di-OL
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 13:02:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb0ceebb-07d1-4432-bfa4-b9d6029b6fe8;
 Mon, 30 Nov 2020 13:02:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E7389ADAA;
 Mon, 30 Nov 2020 13:02:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb0ceebb-07d1-4432-bfa4-b9d6029b6fe8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606741346; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bFz+j0j1P0zGM8SMbfwC3ImnB8Jp5I5PkMsyfmFgqWA=;
	b=NLWwjhI2W3dNph9v4lxmODTN2yZHzD4GSSmCyXBPl0W14Dukfza0C2gJKeOKJOytdjaSoW
	AgnAeASEFJrs3d/SQByXdTt402gmVEGMQ4T7wWxl1cj6FPklkbxK/AU3lpFvo4fpMiWJYM
	Vm9vuWbPsf4PWJ4O+WHBLBkJzf39k8g=
Subject: Re: [PATCH 2/4] x86/ACPI: fix S3 wakeup vector mapping
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
References: <7f895b0e-f46f-8fe2-b0ac-e0503ef06a1f@suse.com>
 <c0210cbf-c07d-7fa6-2ae0-59764514836a@suse.com>
 <20201123152454.yjr3jgvsyucftrff@Air-de-Roger>
 <79776889-c566-5f07-abfe-2cb79cfa78fa@suse.com>
 <20201123160752.uzczcxnz5ytvtd46@Air-de-Roger>
 <fe2ec163-c6c7-12d6-0c89-57a238514e25@citrix.com>
 <094e9e27-e01f-6020-c091-f9c546e92028@suse.com>
Message-ID: <1d971d71-9a7e-f97c-6575-7f427dc1553e@suse.com>
Date: Mon, 30 Nov 2020 14:02:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <094e9e27-e01f-6020-c091-f9c546e92028@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.11.2020 12:04, Jan Beulich wrote:
> On 23.11.2020 17:14, Andrew Cooper wrote:
>> On 23/11/2020 16:07, Roger Pau Monné wrote:
>>> On Mon, Nov 23, 2020 at 04:30:05PM +0100, Jan Beulich wrote:
>>>> On 23.11.2020 16:24, Roger Pau Monné wrote:
>>>>> On Mon, Nov 23, 2020 at 01:40:12PM +0100, Jan Beulich wrote:
>>>>>> --- a/xen/arch/x86/acpi/power.c
>>>>>> +++ b/xen/arch/x86/acpi/power.c
>>>>>> @@ -174,17 +174,20 @@ static void acpi_sleep_prepare(u32 state
>>>>>>      if ( state != ACPI_STATE_S3 )
>>>>>>          return;
>>>>>>  
>>>>>> -    wakeup_vector_va = __acpi_map_table(
>>>>>> -        acpi_sinfo.wakeup_vector, sizeof(uint64_t));
>>>>>> -
>>>>>>      /* TBoot will set resume vector itself (when it is safe to do so). */
>>>>>>      if ( tboot_in_measured_env() )
>>>>>>          return;
>>>>>>  
>>>>>> +    set_fixmap(FIX_ACPI_END, acpi_sinfo.wakeup_vector);
>>>>>> +    wakeup_vector_va = fix_to_virt(FIX_ACPI_END) +
>>>>>> +                       PAGE_OFFSET(acpi_sinfo.wakeup_vector);
>>>>>> +
>>>>>>      if ( acpi_sinfo.vector_width == 32 )
>>>>>>          *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>>>      else
>>>>>>          *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
>>>>>> +
>>>>>> +    clear_fixmap(FIX_ACPI_END);
>>>>> Why not use vmap here instead of the fixmap?
>>>> Considering the S3 path is relatively fragile (as in: we end up
>>>> breaking it more often than about anything else) I wanted to
>>>> make as little of a change as possible. Hence I decided to stick
>>>> to the fixmap use that was (indirectly) used before as well.
>>> Unless there's a restriction to use the ACPI fixmap entry I would just
>>> switch to use vmap, as it's used extensively in the code and less
>>> likely to trigger issues in the future, or else a bunch of other stuff
>>> would also be broken.
>>>
>>> IMO doing the mapping differently here when it's not required will end
>>> up turning this code more fragile in the long run.
>>
>> We can't enter S3 at all until dom0 has booted, as one detail has to
>> come from AML.
>>
>> Therefore, we're fully up and running by this point, and vmap() will be
>> fine.
> 
> That's not the point of my reservation. The code here runs when the
> system already isn't "fully up and running" anymore. Secondary CPUs
> have already been offlined, and we're around the point where we
> disable interrupts. Granted when we disable them, we also turn off
> spin debugging, but I'd still prefer a path that's not susceptible
> to IRQ state. What I admit I didn't pay attention to is that
> set_fixmap(), by virtue of being a thin wrapper around
> map_pages_to_xen(), similarly uses locks. IOW - okay, I'll switch
> to vmap(). You're both aware that it, unlike set_fixmap(), can
> fail, aren't you?

Would at least one of the two of you please explicitly reply to
this last question, clarifying that you're indeed okay with this
new possible source of S3 entry failing?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:13:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:13:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41163.74273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjjvw-0000wM-3I; Mon, 30 Nov 2020 14:13:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41163.74273; Mon, 30 Nov 2020 14:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjjvv-0000wF-WD; Mon, 30 Nov 2020 14:13:40 +0000
Received: by outflank-mailman (input) for mailman id 41163;
 Mon, 30 Nov 2020 14:13:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kjjvu-0000wA-Ja
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:13:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kjjvt-00065h-04; Mon, 30 Nov 2020 14:13:37 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=edge-cache-235.e-lhr50.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kjjvs-00036I-Ix; Mon, 30 Nov 2020 14:13:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=UH0ytmsdfEMTLKGVQ2dLoIFTz1e9pLOM1P8javWKamc=; b=hbgFIgdPDvEbUC4QmriPSlXsC/
	EtSyxXtbNpnFnqahvuypvbdqAKlCRJcUZxQjafD3HObOEsbCKF/QcXeeeTywRq5D1s0f7VMu+VzIX
	qr9AraMj5g+GMw09JfmHSGwCCQVUfMtyMZS9tjVdnW/P/ukuNhYPEQz06qLj0bIkkaR8=;
Message-ID: <21e17d308adcec2854b35c5d1682927bedf45f58.camel@xen.org>
Subject: Re: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, jgrall@amazon.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Date: Mon, 30 Nov 2020 14:13:32 +0000
In-Reply-To: <1dab4032-6ae1-bf77-c183-c62ca06f0ad8@suse.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
	 <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
	 <41d9d8d4-d5cb-8350-c118-c9e1fe73b6d0@suse.com>
	 <a4f02c292a369cfd771790b1d164f139fec6bead.camel@xen.org>
	 <f25e278f-2d63-d806-4650-983df490556f@xen.org>
	 <d75fd45c-3f66-63c9-90c7-90dc10fc5763@suse.com>
	 <8bb9eb92-ede4-0fa4-d21f-c7976fe70acf@xen.org>
	 <622a8319-a439-72f2-c045-15e7611a22e7@suse.com>
	 <3db3081d-232a-cce1-cfce-c657be64a0dd@xen.org>
	 <600d3ea4-f905-3aab-e110-da3bd0d4b38a@suse.com>
	 <23cd67ea1b96ba3f8801a3cf13549298597cb331.camel@xen.org>
	 <1dab4032-6ae1-bf77-c183-c62ca06f0ad8@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-30 at 13:50 +0100, Jan Beulich wrote:
> On 30.11.2020 13:13, Hongyan Xia wrote:
> > On Tue, 2020-08-18 at 18:16 +0200, Jan Beulich wrote:
> > [...]
> > 
> > Actually I did not propose the BUG_ON() fix. I was just in favor of
> > it when Jan presented it as a choice in v7.
> > 
> > The reason I am in favor of it is that even without it, the existing
> > x86 code already BUG_ON() when vmap has a superpage anyway, so it's
> > not like other alternatives behave any differently for superpages.
> > I am also not sure about returning INVALID_MFN because if
> > virt_to_xen_l1e() really returns NULL, then we are calling
> > vmap_to_mfn() on a non-vmap address (not even populated) which
> > frankly deserves at least ASSERT(). So, I don't think BUG_ON() is a
> > bad idea for now before vmap_to_mfn() supports superpages.
> > 
> > Of course, we could use MAP_SMALL_PAGES to avoid the whole problem,
> > but Arm may not be happy. After a quick chat with Julien, how about
> > having ARCH_VMAP_FLAGS and only small pages for x86?
> 
> Possibly, albeit this will then make it look less like a bodge and
> more like we would want to keep it this way. How difficult would it
> be to actually make the thing work with superpages? Is it more than
> simply
> (pseudocode, potentially needed locking omitted):
> 
> vmap_to_mfn(va) {
>     pl1e = virt_to_xen_l1e(va);
>     if ( pl1e )
>         return l1e_get_mfn(*pl1e);
>     pl2e = virt_to_xen_l2e(va);
>     if ( pl2e )
>         return l2e_get_mfn(*pl2e) + suitable_bits(va);
>     return l3e_get_mfn(*virt_to_xen_l3e(va)) + suitable_bits(va);
> }
> 
> (assuming virt_to_xen_l<N>e() would be returning NULL in such a
> case)?

The sad part is that instead of returning NULL, such functions BUG_ON()
when there is a superpage, which is why this solution was not
considered. Changing the logic from BUG_ON() to returning NULL might
not be straightforward, since so far the callers assume NULL means
-ENOMEM and not anything else.

> Not very efficient, but not needed anywhere anyway - the sole user of
> the construct is domain_page_map_to_mfn(), which maps only individual
> pages. (An even better option would be to avoid the recurring walk of
> the higher levels by using only virt_to_xen_l3e() and then doing the
> remaining steps "by hand". This would at once allow to avoid the here
> unwanted / unneeded checking for whether page tables need
> allocating.)

The "even better option" looks more promising to me, and is what I
want to go forward with. At any rate, this fix has grown larger than
intended, so I would like to send it as an individual patch. Any
objections?

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41171.74285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk4A-0001wv-0C; Mon, 30 Nov 2020 14:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41171.74285; Mon, 30 Nov 2020 14:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk49-0001wo-Sm; Mon, 30 Nov 2020 14:22:09 +0000
Received: by outflank-mailman (input) for mailman id 41171;
 Mon, 30 Nov 2020 14:22:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk48-0001wj-Q4
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:22:08 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 256ef79b-e690-427e-bf35-c55739c15249;
 Mon, 30 Nov 2020 14:22:07 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2D866D6E;
 Mon, 30 Nov 2020 06:22:07 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7AB7F3F71F;
 Mon, 30 Nov 2020 06:22:06 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 256ef79b-e690-427e-bf35-c55739c15249
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 0/7] xen/arm: Emulate ID registers
Date: Mon, 30 Nov 2020 14:21:36 +0000
Message-Id: <cover.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

The goal of this series is to emulate coprocessor ID registers so that
Xen only publishes to guests the features that are supported by Xen
and can actually be used by guests.
One practical example where this is required is SVE support, which is
forbidden by Xen as it is not supported; if Linux is compiled with it,
it will crash on boot. Another one is AMU, which is also forbidden by
Xen, but a Linux kernel compiled with it would crash if the platform
supports it.

To be able to emulate the coprocessor registers defining what features
are supported by the hardware, the TID3 bit of HCR must be set and Xen
must emulate the values of those registers when an exception is caught
on a guest access to them.

This series first creates a guest cpuinfo structure which contains the
values that we want to publish to guests, and then provides the proper
emulation for those registers when Xen gets an exception due to an
access to any of them.

This is a first, simple implementation to solve the problem. The way
the published values are defined, and which features are disabled,
will be enhanced in a future patchset so that we can decide per guest
what can be used or not, and from that deduce the bits to activate in
HCR and the values that we must publish in the ID registers.

---
Changes in V2:
  Fix the first patch to properly handle the DFR1 register and
  increase the dbg32 size. Other patches have just been rebased.

Bertrand Marquis (7):
  xen/arm: Add ID registers and complete cpuinfo
  xen/arm: Add arm64 ID registers definitions
  xen/arm: create a cpuinfo structure for guest
  xen/arm: Add handler for ID registers on arm64
  xen/arm: Add handler for cp15 ID registers
  xen/arm: Add CP10 exception support to handle VMFR
  xen/arm: Activate TID3 in HCR_EL2

 xen/arch/arm/arm64/vsysreg.c        | 49 +++++++++++++++++++
 xen/arch/arm/cpufeature.c           | 68 +++++++++++++++++++++++++++
 xen/arch/arm/traps.c                |  7 ++-
 xen/arch/arm/vcpreg.c               | 73 +++++++++++++++++++++++++++++
 xen/include/asm-arm/arm64/hsr.h     | 37 +++++++++++++++
 xen/include/asm-arm/arm64/sysregs.h | 25 ++++++++++
 xen/include/asm-arm/cpregs.h        | 11 +++++
 xen/include/asm-arm/cpufeature.h    | 65 +++++++++++++++++++++----
 xen/include/asm-arm/perfc_defn.h    |  1 +
 xen/include/asm-arm/traps.h         |  1 +
 10 files changed, 327 insertions(+), 10 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41177.74297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk50-00023n-9l; Mon, 30 Nov 2020 14:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41177.74297; Mon, 30 Nov 2020 14:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk50-00023g-6d; Mon, 30 Nov 2020 14:23:02 +0000
Received: by outflank-mailman (input) for mailman id 41177;
 Mon, 30 Nov 2020 14:23:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk4y-00022q-Mr
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:00 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 89d869bd-b1d7-4bfa-a37c-167f277b97fc;
 Mon, 30 Nov 2020 14:22:59 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BB2A8D6E;
 Mon, 30 Nov 2020 06:22:59 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F108B3F71F;
 Mon, 30 Nov 2020 06:22:58 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89d869bd-b1d7-4bfa-a37c-167f277b97fc
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 1/7] xen/arm: Add ID registers and complete cpuinfo
Date: Mon, 30 Nov 2020 14:21:37 +0000
Message-Id: <97efd89cccdffc2a7fd987ac8156f5eea191fd3f.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Add definitions and entries in cpuinfo for ID registers introduced in
newer Arm Architecture Reference Manuals:
- ID_PFR2: Processor Feature Register 2
- ID_DFR1: Debug Feature Register 1
- ID_MMFR4 and ID_MMFR5: Memory Model Feature Registers 4 and 5
- ID_ISAR6: ISA Feature Register 6
Add more bitfield definitions in the PFR fields of cpuinfo.
Add the MVFR2 register definition for aarch32.
Add mvfr values in cpuinfo.
Add some register definitions for arm64 in sysregs, as some are not
always known by compilers.
Initialize the new values added to cpuinfo in identify_cpu() during
init.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

---
Changes in V2:
  Fix dbg32 table size and add proper initialisation of the second
  entry of the table by reading the ID_DFR1 register.
---
 xen/arch/arm/cpufeature.c           | 17 ++++++++
 xen/include/asm-arm/arm64/sysregs.h | 25 ++++++++++++
 xen/include/asm-arm/cpregs.h        | 11 +++++
 xen/include/asm-arm/cpufeature.h    | 63 ++++++++++++++++++++++++-----
 4 files changed, 107 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 44126dbf07..204be9b084 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
 
         c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
         c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
+        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
 
         c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
         c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
+
+        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
 #endif
 
         c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
         c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
+        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
 
         c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
+        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
 
         c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
 
@@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
         c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
         c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
+        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
+        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
 
         c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
         c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
@@ -137,6 +144,16 @@ void identify_cpu(struct cpuinfo_arm *c)
         c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
         c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
         c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
+        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
+
+#ifdef CONFIG_ARM_64
+        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
+        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
+        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
+#else
+        c->mvfr.bits[0] = READ_CP32(MVFR0);
+        c->mvfr.bits[1] = READ_CP32(MVFR1);
+#endif
 }
 
 /*
diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
index c60029d38f..5abbeda3fd 100644
--- a/xen/include/asm-arm/arm64/sysregs.h
+++ b/xen/include/asm-arm/arm64/sysregs.h
@@ -57,6 +57,31 @@
 #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
 #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
 
+/*
+ * Define ID coprocessor registers if they are not
+ * already defined by the compiler.
+ *
+ * Values picked from linux kernel
+ */
+#ifndef ID_AA64MMFR2_EL1
+#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
+#endif
+#ifndef ID_PFR2_EL1
+#define ID_PFR2_EL1                 S3_0_C0_C3_4
+#endif
+#ifndef ID_MMFR5_EL1
+#define ID_MMFR5_EL1                S3_0_C0_C3_6
+#endif
+#ifndef ID_ISAR6_EL1
+#define ID_ISAR6_EL1                S3_0_C0_C2_7
+#endif
+#ifndef ID_AA64ZFR0_EL1
+#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
+#endif
+#ifndef ID_DFR1_EL1
+#define ID_DFR1_EL1                 S3_0_C0_C3_5
+#endif
+
 /* Access to system registers */
 
 #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 8fd344146e..58be898891 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -63,6 +63,7 @@
 #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
 #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
 #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
+#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
 #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
 #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
 #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
@@ -108,18 +109,23 @@
 #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
 #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
 #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
+#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
 #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
+#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
 #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
 #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
 #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
 #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
 #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
+#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
+#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
 #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
 #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
 #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
 #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
 #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
 #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
+#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
 #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
 #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
 #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
@@ -312,18 +318,23 @@
 #define HSTR_EL2                HSTR
 #define ID_AFR0_EL1             ID_AFR0
 #define ID_DFR0_EL1             ID_DFR0
+#define ID_DFR1_EL1             ID_DFR1
 #define ID_ISAR0_EL1            ID_ISAR0
 #define ID_ISAR1_EL1            ID_ISAR1
 #define ID_ISAR2_EL1            ID_ISAR2
 #define ID_ISAR3_EL1            ID_ISAR3
 #define ID_ISAR4_EL1            ID_ISAR4
 #define ID_ISAR5_EL1            ID_ISAR5
+#define ID_ISAR6_EL1            ID_ISAR6
 #define ID_MMFR0_EL1            ID_MMFR0
 #define ID_MMFR1_EL1            ID_MMFR1
 #define ID_MMFR2_EL1            ID_MMFR2
 #define ID_MMFR3_EL1            ID_MMFR3
+#define ID_MMFR4_EL1            ID_MMFR4
+#define ID_MMFR5_EL1            ID_MMFR5
 #define ID_PFR0_EL1             ID_PFR0
 #define ID_PFR1_EL1             ID_PFR1
+#define ID_PFR2_EL1             ID_PFR2
 #define IFSR32_EL2              IFSR
 #define MDCR_EL2                HDCR
 #define MIDR_EL1                MIDR
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index c7b5052992..64354c3f19 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -148,6 +148,7 @@ struct cpuinfo_arm {
     union {
         uint64_t bits[2];
         struct {
+            /* PFR0 */
             unsigned long el0:4;
             unsigned long el1:4;
             unsigned long el2:4;
@@ -155,9 +156,23 @@ struct cpuinfo_arm {
             unsigned long fp:4;   /* Floating Point */
             unsigned long simd:4; /* Advanced SIMD */
             unsigned long gic:4;  /* GIC support */
-            unsigned long __res0:28;
+            unsigned long ras:4;
+            unsigned long sve:4;
+            unsigned long sel2:4;
+            unsigned long mpam:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long __res0:4;
             unsigned long csv2:4;
-            unsigned long __res1:4;
+            unsigned long csv3:4;
+
+            /* PFR1 */
+            unsigned long bt:4;
+            unsigned long ssbs:4;
+            unsigned long mte:4;
+            unsigned long ras_frac:4;
+            unsigned long mpam_frac:4;
+            unsigned long __res1:44;
         };
     } pfr64;
 
@@ -170,7 +185,7 @@ struct cpuinfo_arm {
     } aux64;
 
     union {
-        uint64_t bits[2];
+        uint64_t bits[3];
         struct {
             unsigned long pa_range:4;
             unsigned long asid_bits:4;
@@ -190,6 +205,8 @@ struct cpuinfo_arm {
             unsigned long pan:4;
             unsigned long __res1:8;
             unsigned long __res2:32;
+
+            unsigned long __res3:64;
         };
     } mm64;
 
@@ -197,6 +214,10 @@ struct cpuinfo_arm {
         uint64_t bits[2];
     } isa64;
 
+    struct {
+        uint64_t bits[1];
+    } zfr64;
+
 #endif
 
     /*
@@ -204,25 +225,38 @@ struct cpuinfo_arm {
      * when running in 32-bit mode.
      */
     union {
-        uint32_t bits[2];
+        uint32_t bits[3];
         struct {
+            /* PFR0 */
             unsigned long arm:4;
             unsigned long thumb:4;
             unsigned long jazelle:4;
             unsigned long thumbee:4;
-            unsigned long __res0:16;
+            unsigned long csv2:4;
+            unsigned long amu:4;
+            unsigned long dit:4;
+            unsigned long ras:4;
 
+            /* PFR1 */
             unsigned long progmodel:4;
             unsigned long security:4;
             unsigned long mprofile:4;
             unsigned long virt:4;
             unsigned long gentimer:4;
-            unsigned long __res1:12;
+            unsigned long sec_frac:4;
+            unsigned long virt_frac:4;
+            unsigned long gic:4;
+
+            /* PFR2 */
+            unsigned long csv3:4;
+            unsigned long ssbs:4;
+            unsigned long ras_frac:4;
+            unsigned long __res2:20;
         };
     } pfr32;
 
     struct {
-        uint32_t bits[1];
+        uint32_t bits[2];
     } dbg32;
 
     struct {
@@ -230,12 +264,23 @@ struct cpuinfo_arm {
     } aux32;
 
     struct {
-        uint32_t bits[4];
+        uint32_t bits[6];
     } mm32;
 
     struct {
-        uint32_t bits[6];
+        uint32_t bits[7];
     } isa32;
+
+#ifdef CONFIG_ARM_64
+    struct {
+        uint64_t bits[3];
+    } mvfr;
+#else
+    /* Only MVFR0 and MVFR1 exist on armv7 */
+    struct {
+        uint32_t bits[2];
+    } mvfr;
+#endif
 };
 
 extern struct cpuinfo_arm boot_cpu_data;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41178.74305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk50-00024X-Oz; Mon, 30 Nov 2020 14:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41178.74305; Mon, 30 Nov 2020 14:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk50-000245-Fa; Mon, 30 Nov 2020 14:23:02 +0000
Received: by outflank-mailman (input) for mailman id 41178;
 Mon, 30 Nov 2020 14:23:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk4z-00022w-IJ
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:01 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e43e7f3b-76e5-4957-8b4f-341920bdde02;
 Mon, 30 Nov 2020 14:23:00 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9E9E31042;
 Mon, 30 Nov 2020 06:23:00 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EF1573F71F;
 Mon, 30 Nov 2020 06:22:59 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e43e7f3b-76e5-4957-8b4f-341920bdde02
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 2/7] xen/arm: Add arm64 ID registers definitions
Date: Mon, 30 Nov 2020 14:21:38 +0000
Message-Id: <83f4e52dce23d2e83f6118e5ecb3cef22112f9e9.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Add coprocessor register definitions for all ID registers trapped
through the TID3 bit of HCR_EL2.
Those are the ones that will be emulated in Xen so that only the
features supported by Xen and accessible to guests are published to
guests.
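To make the encoding concrete, here is a small sketch of how such a trapped-sysreg identifier can be composed. This is not Xen's actual HSR_SYSREG macro — the helper name is made up — but the field positions follow the Arm ARM ISS layout for a trapped MSR/MRS access: Op0 at bits [21:20], Op2 at [19:17], Op1 at [16:14], CRn at [13:10], CRm at [4:1].

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not Xen's macro): compose the ESR_EL2 ISS
 * fields that identify a trapped MSR/MRS system register access.
 * Op0[21:20], Op2[19:17], Op1[16:14], CRn[13:10], CRm[4:1]; the Rt
 * field [9:5] and direction bit [0] are left zero here. */
static uint32_t sysreg_iss(uint32_t op0, uint32_t op1,
                           uint32_t crn, uint32_t crm, uint32_t op2)
{
    return (op0 << 20) | (op2 << 17) | (op1 << 14) |
           (crn << 10) | (crm << 1);
}
```

With this layout, ID_PFR0_EL1 (3,0,c0,c1,0) and ID_AA64PFR0_EL1 (3,0,c0,c4,0) from the patch map to distinct case labels differing only in the CRm field.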

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: rebase
---
 xen/include/asm-arm/arm64/hsr.h | 37 +++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
index ca931dd2fe..e691d41c17 100644
--- a/xen/include/asm-arm/arm64/hsr.h
+++ b/xen/include/asm-arm/arm64/hsr.h
@@ -110,6 +110,43 @@
 #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
 #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
 
+/* These registers are trapped when HCR_EL2.TID3 is set */
+#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
+#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
+#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
+#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
+#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
+#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
+#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
+#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
+#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
+#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
+#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
+#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
+#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
+#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
+#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
+#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
+#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
+#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
+#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
+#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
+#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
+#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
+
+#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
+#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
+#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
+#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
+#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
+#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
+#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
+#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
+#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
+#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
+#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
+#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
+
 #endif /* __ASM_ARM_ARM64_HSR_H */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41179.74321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk51-00026w-V6; Mon, 30 Nov 2020 14:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41179.74321; Mon, 30 Nov 2020 14:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk51-00026n-RV; Mon, 30 Nov 2020 14:23:03 +0000
Received: by outflank-mailman (input) for mailman id 41179;
 Mon, 30 Nov 2020 14:23:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk50-00022w-AC
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:02 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 68dbe204-06ab-4ef6-a8e6-cfc63250b524;
 Mon, 30 Nov 2020 14:23:01 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 81EFB1042;
 Mon, 30 Nov 2020 06:23:01 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D23CF3F71F;
 Mon, 30 Nov 2020 06:23:00 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68dbe204-06ab-4ef6-a8e6-cfc63250b524
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Date: Mon, 30 Nov 2020 14:21:39 +0000
Message-Id: <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Create a cpuinfo structure for guests and mask in it the features that
Xen does not support or does not want to publish to guests.

Modify some values in the guest cpuinfo structure to mask features we do
not want to allow guests to use (like AMU) or that Xen does not support
(like SVE).

The code groups register modifications for the same feature together, so
that in the long term a feature can easily be enabled or disabled based
on user parameters, or other register modifications (like setting or
clearing HCR bits) can be added in the same place.
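The masking itself is simple once the registers are viewed as arrays of 4-bit feature fields: clearing a field to 0 reports the feature as not implemented. A minimal sketch of that idea, with made-up helper names (only the SVE field position, bits [35:32] of ID_AA64PFR0_EL1, is taken from the architecture):

```c
#include <assert.h>
#include <stdint.h>

/* Clear the 4-bit ID register field starting at bit 'shift',
 * i.e. report the corresponding feature as "not implemented". */
static uint64_t clear_field(uint64_t reg, unsigned int shift)
{
    return reg & ~(UINT64_C(0xf) << shift);
}

/* Hypothetical guest view of ID_AA64PFR0_EL1: hide SVE (bits [35:32])
 * while leaving all other fields as the hardware reports them. */
static uint64_t mask_pfr0_for_guest(uint64_t hw_pfr0)
{
    return clear_field(hw_pfr0, 32);
}
```

The bitfield struct used in the real patch expresses the same operation as `guest_cpuinfo.pfr64.sve = 0;`, which keeps the field names next to the masking decision.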

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: rebase
---
 xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/cpufeature.h |  2 ++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index 204be9b084..309941ff37 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -24,6 +24,8 @@
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
 
+struct cpuinfo_arm __read_mostly guest_cpuinfo;
+
 void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
                              const char *info)
 {
@@ -156,6 +158,55 @@ void identify_cpu(struct cpuinfo_arm *c)
 #endif
 }
 
+/*
+ * This function creates a cpuinfo structure with values masking all CPU
+ * features that should not be published to guests. The resulting
+ * structure is then used to provide ID register values to guests.
+ */
+static int __init create_guest_cpuinfo(void)
+{
+    /*
+     * TODO: The code is currently using only the features detected on the boot
+     * core. In the long term we should try to compute values containing only
+     * features supported by all cores.
+     */
+    identify_cpu(&guest_cpuinfo);
+
+#ifdef CONFIG_ARM_64
+    /* Disable MPAM as Xen does not support it */
+    guest_cpuinfo.pfr64.mpam = 0;
+    guest_cpuinfo.pfr64.mpam_frac = 0;
+
+    /* Disable SVE as Xen does not support it */
+    guest_cpuinfo.pfr64.sve = 0;
+    guest_cpuinfo.zfr64.bits[0] = 0;
+
+    /* Disable MTE as Xen does not support it */
+    guest_cpuinfo.pfr64.mte = 0;
+#endif
+
+    /* Disable AMU */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.amu = 0;
+#endif
+    guest_cpuinfo.pfr32.amu = 0;
+
+    /* Disable RAS as Xen does not support it */
+#ifdef CONFIG_ARM_64
+    guest_cpuinfo.pfr64.ras = 0;
+    guest_cpuinfo.pfr64.ras_frac = 0;
+#endif
+    guest_cpuinfo.pfr32.ras = 0;
+    guest_cpuinfo.pfr32.ras_frac = 0;
+
+    return 0;
+}
+/*
+ * This function needs to run after all secondary CPUs are started, so
+ * that cpuinfo structures are available for all cores.
+ */
+__initcall(create_guest_cpuinfo);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 64354c3f19..0ab6dd42a0 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -290,6 +290,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
 extern struct cpuinfo_arm cpu_data[];
 #define current_cpu_data cpu_data[smp_processor_id()]
 
+extern struct cpuinfo_arm guest_cpuinfo;
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41180.74333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk54-0002AJ-EH; Mon, 30 Nov 2020 14:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41180.74333; Mon, 30 Nov 2020 14:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk54-0002AA-Am; Mon, 30 Nov 2020 14:23:06 +0000
Received: by outflank-mailman (input) for mailman id 41180;
 Mon, 30 Nov 2020 14:23:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk53-00022q-Ld
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:05 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 47a94ad0-42cd-482a-9043-806181c4771d;
 Mon, 30 Nov 2020 14:23:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 29EBCD6E;
 Mon, 30 Nov 2020 06:23:04 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7A8183F71F;
 Mon, 30 Nov 2020 06:23:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47a94ad0-42cd-482a-9043-806181c4771d
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 6/7] xen/arm: Add CP10 exception support to handle MVFR
Date: Mon, 30 Nov 2020 14:21:42 +0000
Message-Id: <58ff66d0daf610dfe8e09516302cb0c0fe17fc59.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Add support for cp10 exception decoding to be able to emulate the values
of MVFR0 and MVFR1 when the TID3 bit of HCR_EL2 is set.
This is required for aarch32 guests accessing MVFR0 and MVFR1 using the
vmrs and vmsr instructions.
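The handler matches trapped accesses on the ISS fields that identify the coprocessor register, ignoring which general register the guest used and whether it was a read or a write. A sketch of that key construction, with an invented helper name (field positions follow the Arm ARM MCR/MRC-style ISS layout: Opc2 [19:17], Opc1 [16:14], CRn [13:10], CRm [4:1]):

```c
#include <assert.h>
#include <stdint.h>

/* Build the register-identifying part of a trapped 32-bit coprocessor
 * access. Rt [9:5], the condition fields, and the direction bit [0]
 * are deliberately excluded, mirroring a mask like HSR_CP32_REGS_MASK
 * so one case label matches any Rt and both access directions. */
static uint32_t cp32_key(uint32_t opc1, uint32_t crn,
                         uint32_t crm, uint32_t opc2)
{
    return (opc2 << 17) | (opc1 << 14) | (crn << 10) | (crm << 1);
}
```

Two VMRS accesses to the same MVFR register through different general registers therefore produce the same key and hit the same emulation case.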

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: rebase
---
 xen/arch/arm/traps.c             |  5 +++++
 xen/arch/arm/vcpreg.c            | 38 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/perfc_defn.h |  1 +
 xen/include/asm-arm/traps.h      |  1 +
 4 files changed, 45 insertions(+)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 22bd1bd4c6..28d9d64558 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_cp14_dbg);
         do_cp14_dbg(regs, hsr);
         break;
+    case HSR_EC_CP10:
+        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
+        perfc_incr(trap_cp10);
+        do_cp10(regs, hsr);
+        break;
     case HSR_EC_CP:
         GUEST_BUG_ON(!psr_mode_is_32bit(regs));
         perfc_incr(trap_cp);
diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index d0c6406f34..9d6a36ca5d 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -634,6 +634,44 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
     inject_undef_exception(regs, hsr);
 }
 
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
+{
+    const struct hsr_cp32 cp32 = hsr.cp32;
+    int regidx = cp32.reg;
+
+    if ( !check_conditional_instr(regs, hsr) )
+    {
+        advance_pc(regs, hsr);
+        return;
+    }
+
+    switch ( hsr.bits & HSR_CP32_REGS_MASK )
+    {
+    /*
+     * HCR_EL2.TID3 traps accesses to the MVFR registers, which identify
+     * the VFP/SIMD features and are read via the VMRS/VMSR instructions.
+     * MVFR2 is not handled here because it cannot be accessed through
+     * VMRS/VMSR on aarch32.
+     * The exception uses the standard MRC/MCR encoding, with the register
+     * identifier in CRn, matching the MVFR0/MVFR1 declarations in cpregs.h.
+     */
+    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
+
+    default:
+        gdprintk(XENLOG_ERR,
+                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
+                 cp32.read ? "mrc" : "mcr",
+                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
+        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
+                 hsr.bits & HSR_CP32_REGS_MASK);
+        inject_undef_exception(regs, hsr);
+        return;
+    }
+
+    advance_pc(regs, hsr);
+}
+
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp cp = hsr.cp;
diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
index 6a83185163..31f071222b 100644
--- a/xen/include/asm-arm/perfc_defn.h
+++ b/xen/include/asm-arm/perfc_defn.h
@@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
 PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
 PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
 PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
+PERFCOUNTER(trap_cp10,     "trap: cp10 access")
 PERFCOUNTER(trap_cp,       "trap: cp access")
 PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
 PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c37884e..c4a3d0fb1b 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
+void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
 void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
 
 /* SMCCC handling */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41181.74345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk55-0002Cw-PZ; Mon, 30 Nov 2020 14:23:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41181.74345; Mon, 30 Nov 2020 14:23:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk55-0002Cl-Lg; Mon, 30 Nov 2020 14:23:07 +0000
Received: by outflank-mailman (input) for mailman id 41181;
 Mon, 30 Nov 2020 14:23:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk54-00022w-Gt
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:06 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 5998b097-c1a0-4e3e-89d9-09df240bb616;
 Mon, 30 Nov 2020 14:23:02 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 643A2D6E;
 Mon, 30 Nov 2020 06:23:02 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B4EEC3F71F;
 Mon, 30 Nov 2020 06:23:01 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5998b097-c1a0-4e3e-89d9-09df240bb616
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Date: Mon, 30 Nov 2020 14:21:40 +0000
Message-Id: <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Add vsysreg emulation for the registers trapped when the TID3 bit is set
in HCR_EL2.
The emulation returns the value stored in the guest_cpuinfo structure
for most registers, and the hardware value for registers not stored in
the structure (these are mostly registers that exist only as a provision
for future features and have no definition right now).
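The emulation pattern each generated case expands to is small: a read gets the sanitised value and the PC is advanced past the trapped instruction; a write is rejected. A hedged sketch with made-up names (Xen's real helper is handle_ro_read_val, which also injects an undefined-instruction exception on writes):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy register file standing in for struct cpu_user_regs. */
struct fake_regs { uint64_t x[31]; uint64_t pc; };

/* Read-only ID register emulation: reads return the sanitised value
 * from guest_cpuinfo, writes are refused (a real implementation would
 * inject #UNDEF into the guest instead of returning false). */
static bool emulate_id_read(struct fake_regs *regs, int rt,
                            bool is_read, uint64_t sanitised)
{
    if ( !is_read )
        return false;
    regs->x[rt] = sanitised;  /* publish the masked view, never raw HW */
    regs->pc += 4;            /* skip the trapped MRS instruction */
    return true;
}
```

This is why a single macro can cover thirty-plus registers: only the case label and the cpuinfo field differ between them.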

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: rebase
---
 xen/arch/arm/arm64/vsysreg.c | 49 ++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 8a85507d9d..970ef51603 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
         break;                                                          \
     }
 
+/* Macro to easily generate a case entry for ID register emulation */
+#define GENERATE_TID3_INFO(reg,field,offset)                            \
+    case HSR_SYSREG_##reg:                                              \
+    {                                                                   \
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
+                          1, guest_cpuinfo.field.bits[offset]);         \
+    }
+
 void do_sysreg(struct cpu_user_regs *regs,
                const union hsr hsr)
 {
@@ -259,6 +267,47 @@ void do_sysreg(struct cpu_user_regs *regs,
          */
         return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most identification registers used by a guest to
+     * discover the processor features.
+     */
+    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
+    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
+    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
+    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
+    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
+    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
+    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
+    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
+    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
+    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
+    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
+    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
+    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
+    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41182.74357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk5A-0002KW-6f; Mon, 30 Nov 2020 14:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41182.74357; Mon, 30 Nov 2020 14:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk5A-0002KL-2t; Mon, 30 Nov 2020 14:23:12 +0000
Received: by outflank-mailman (input) for mailman id 41182;
 Mon, 30 Nov 2020 14:23:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk58-00022q-Lr
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:10 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id b5323705-a290-4066-97f0-b64bb7f4b256;
 Mon, 30 Nov 2020 14:23:05 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0E6E1D6E;
 Mon, 30 Nov 2020 06:23:05 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5DB833F71F;
 Mon, 30 Nov 2020 06:23:04 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5323705-a290-4066-97f0-b64bb7f4b256
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 7/7] xen/arm: Activate TID3 in HCR_EL2
Date: Mon, 30 Nov 2020 14:21:43 +0000
Message-Id: <592253f7e7f02890a6ca8bab4263ad40d8a7dafc.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Activate the TID3 bit in the HCR_EL2 register when starting a guest.
This traps all coprocessor ID registers so that we can give guests
values corresponding to what they can actually use, and mask some
features from guests even though they are supported by the underlying
hardware (like SVE or MPAM).

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: rebase
---
 xen/arch/arm/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 28d9d64558..c1a9ad6056 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -98,7 +98,7 @@ register_t get_default_hcr_flags(void)
 {
     return  (HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_FMO|HCR_VM|
              (vwfi != NATIVE ? (HCR_TWI|HCR_TWE) : 0) |
-             HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
+             HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
 static enum {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41183.74364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk5A-0002LX-P2; Mon, 30 Nov 2020 14:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41183.74364; Mon, 30 Nov 2020 14:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjk5A-0002L7-Ft; Mon, 30 Nov 2020 14:23:12 +0000
Received: by outflank-mailman (input) for mailman id 41183;
 Mon, 30 Nov 2020 14:23:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DnPL=FE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kjk59-00022w-H6
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:23:11 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 034b4ba2-2386-4ae3-92ee-0b5e1b1f31fc;
 Mon, 30 Nov 2020 14:23:03 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 475591042;
 Mon, 30 Nov 2020 06:23:03 -0800 (PST)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 97B703F71F;
 Mon, 30 Nov 2020 06:23:02 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 034b4ba2-2386-4ae3-92ee-0b5e1b1f31fc
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Date: Mon, 30 Nov 2020 14:21:41 +0000
Message-Id: <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1606742184.git.bertrand.marquis@arm.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>

Add support for emulation of cp15-based ID registers (on arm32, or when
running a 32-bit guest on arm64).
The handlers return the values stored in the guest_cpuinfo structure.
The MVFR registers are not supported yet.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in V2: rebase
---
 xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index cdc91cdf5b..d0c6406f34 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
         break;                                                      \
     }
 
+/* Macro to easily generate the case for an ID co-processor register */
+#define GENERATE_TID3_INFO(reg,field,offset)                        \
+    case HSR_CPREG32(reg):                                          \
+    {                                                               \
+        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
+                          1, guest_cpuinfo.field.bits[offset]);     \
+    }
+
 void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
 {
     const struct hsr_cp32 cp32 = hsr.cp32;
@@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
          */
         return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
 
+    /*
+     * HCR_EL2.TID3
+     *
+     * This traps most identification registers used by a guest
+     * to identify the processor features
+     */
+    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
+    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
+    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
+    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
+    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
+    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
+    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
+    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
+    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
+    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
+    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
+    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
+    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
+    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
+    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
+    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
+    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
+    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
+    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
+    /* MVFR registers are in cp10, not cp15 */
+
     /*
      * HCR_EL2.TIDCP
      *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 14:48:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 14:48:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41240.74381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjkTm-0004so-TH; Mon, 30 Nov 2020 14:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41240.74381; Mon, 30 Nov 2020 14:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjkTm-0004sh-Px; Mon, 30 Nov 2020 14:48:38 +0000
Received: by outflank-mailman (input) for mailman id 41240;
 Mon, 30 Nov 2020 14:48:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lj5U=FE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kjkTl-0004sc-CF
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 14:48:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85c7a924-80fd-4ccb-b756-843dce666f2a;
 Mon, 30 Nov 2020 14:48:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0C31DB2D1;
 Mon, 30 Nov 2020 14:47:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85c7a924-80fd-4ccb-b756-843dce666f2a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1606747632; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c9rg7STKFkmnEhepr5+mSU1ShSbrW0KVLMPanlFaYlU=;
	b=glKWLgLGcLSkq1U2D6UmlhijQHupxtwzEOl7R65AuWEH8RUGh600TJWVOq13JFZazFx324
	XwV/xlY2rDVIqgpmZZnE++yRYwAXfBDF4mE9kgIiNaR8zKfj/jt46blUhbXJRqiWWSedQA
	93pt/XGpwYAYcki7dg35ZTSRIm+hu74=
Subject: Re: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e
To: Hongyan Xia <hx242@xen.org>
Cc: xen-devel@lists.xenproject.org, jgrall@amazon.com,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <cover.1595857947.git.hongyxia@amazon.com>
 <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
 <41d9d8d4-d5cb-8350-c118-c9e1fe73b6d0@suse.com>
 <a4f02c292a369cfd771790b1d164f139fec6bead.camel@xen.org>
 <f25e278f-2d63-d806-4650-983df490556f@xen.org>
 <d75fd45c-3f66-63c9-90c7-90dc10fc5763@suse.com>
 <8bb9eb92-ede4-0fa4-d21f-c7976fe70acf@xen.org>
 <622a8319-a439-72f2-c045-15e7611a22e7@suse.com>
 <3db3081d-232a-cce1-cfce-c657be64a0dd@xen.org>
 <600d3ea4-f905-3aab-e110-da3bd0d4b38a@suse.com>
 <23cd67ea1b96ba3f8801a3cf13549298597cb331.camel@xen.org>
 <1dab4032-6ae1-bf77-c183-c62ca06f0ad8@suse.com>
 <21e17d308adcec2854b35c5d1682927bedf45f58.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a885152-80be-19f7-6549-302b4b3d1a48@suse.com>
Date: Mon, 30 Nov 2020 15:47:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <21e17d308adcec2854b35c5d1682927bedf45f58.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.11.2020 15:13, Hongyan Xia wrote:
> On Mon, 2020-11-30 at 13:50 +0100, Jan Beulich wrote:
>> Not very efficient, but not needed anywhere anyway - the sole user of
>> the construct is domain_page_map_to_mfn(), which maps only individual
>> pages. (An even better option would be to avoid the recurring walk of
>> the higher levels by using only virt_to_xen_l3e() and then doing the
>> remaining steps "by hand". This would at once allow to avoid the here
>> unwanted / unneeded checking for whether page tables need
>> allocating.)
> 
> The "even better option" looks more promising to me, and is what I want
> to go forward with. At any rate, this fix grows larger than intended
> and I want to send this as an individual patch. Any objections?

Definitely not - separate changes are almost always easier to look
at and faster to get in.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:08:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:08:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41275.74393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjlic-0004WC-GZ; Mon, 30 Nov 2020 16:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41275.74393; Mon, 30 Nov 2020 16:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjlic-0004W5-DS; Mon, 30 Nov 2020 16:08:02 +0000
Received: by outflank-mailman (input) for mailman id 41275;
 Mon, 30 Nov 2020 16:08:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+HSz=FE=redhat.com=mcascell@srs-us1.protection.inumbo.net>)
 id 1kjlib-0004W0-Ky
 for xen-devel@lists.xen.org; Mon, 30 Nov 2020 16:08:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3d4015d4-a77f-4b16-b900-86db1e60403b;
 Mon, 30 Nov 2020 16:07:59 +0000 (UTC)
Received: from mail-ej1-f69.google.com (mail-ej1-f69.google.com
 [209.85.218.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-460-FXHYE6GqPgCUMdjFIAONaQ-1; Mon, 30 Nov 2020 11:07:55 -0500
Received: by mail-ej1-f69.google.com with SMTP id y23so5968772ejp.10
 for <xen-devel@lists.xen.org>; Mon, 30 Nov 2020 08:07:55 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d4015d4-a77f-4b16-b900-86db1e60403b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1606752479;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0UmyB6wD7lJntS672UF2e+w1KzmAZRBaPaC50VvJwmo=;
	b=LniLxDPzSSHChsLHlAf14sy2fZNK2qljv57gnAVEJM0Kyv/UpqkRcj6moTNJnRFUa+CfIL
	FEQulRqaWaW4uUfZAtd4z4Vp/+mebfih2x0wYMd+4UYnbKQxSb3MgUy8T4XcvSFgyeGfPW
	leV2prmhcI3P9hwYeyz5KYQmA/WWPcQ=
X-MC-Unique: FXHYE6GqPgCUMdjFIAONaQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=0UmyB6wD7lJntS672UF2e+w1KzmAZRBaPaC50VvJwmo=;
        b=IEf0hRNCbEHWO9T9HslZyDDjKa24mvuDbH9IuQY+qzwgqOXJySqw/drlctR1UsHvou
         BoS8GtTFRy09hummwg41v2QPk5q5bgkWXw1meJpu2VxcvwapjvE+QaXaN5RGALWRdwr/
         ctIDTCkEIBrAkAOJLMeUVbpSiz32zhMgHNZvA+493WpnxArySr8VDkvJMrcN0xCNlQG7
         g1qgGNdpxZCcpgX4LpRcdEhefsyJ6pFUYJTO/v8/eTQH/1ELTi8SA61iYxoJHY9LYuOw
         Nrb+Ru9j3TSx5VfZWdCWKp8As+hPJ8e0BLvtqoALwCam90G9JZVEySJyAmucghJnPPej
         SDvQ==
X-Gm-Message-State: AOAM530b+hLLc12g2BAv0otPuMx4ojZnK89GNrEL1BJ6jONAjI3RQEjl
	0BNMSpTRMgioJ1vfoTAWCT3yPXss2uFPVI8dQrHZjw9zUJu9MjjNNlJMe+3JeRE9XS3T74BnXw5
	EH3nqbTV47OhfIOMvmITyTTJVqy+rqLWZNQ==
X-Received: by 2002:a17:906:fa8b:: with SMTP id lt11mr16719953ejb.94.1606752474466;
        Mon, 30 Nov 2020 08:07:54 -0800 (PST)
X-Google-Smtp-Source: ABdhPJwPTcg7h61WhXOewQ/zmeslpkWzRb0jdxrrFpipJqK+4vktZ+WMPvlmXuKlJSjMc6S1nyGtIi5gZ4FFHqNSzIw=
X-Received: by 2002:a17:906:fa8b:: with SMTP id lt11mr16719934ejb.94.1606752474226;
 Mon, 30 Nov 2020 08:07:54 -0800 (PST)
MIME-Version: 1.0
References: <E1khX2v-0002f4-3b@xenbits.xenproject.org>
In-Reply-To: <E1khX2v-0002f4-3b@xenbits.xenproject.org>
From: Mauro Matteo Cascella <mcascell@redhat.com>
Date: Mon, 30 Nov 2020 17:07:43 +0100
Message-ID: <CAA8xKjWY2+xo57n8hsvG6yMyhs6nAH+S4NbCsEJLWEVff_aWzg@mail.gmail.com>
Subject: Re: [oss-security] Xen Security Advisory 355 v2 - stack corruption
 from XSA-346 change
To: oss-security@lists.openwall.com
Cc: xen-announce@lists.xen.org, xen-devel@lists.xen.org, 
	xen-users@lists.xen.org, 
	"Xen.org security team" <security-team-members@xen.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mcascell@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"

Hello,

Has a CVE been assigned for this issue?

Regards,

On Tue, Nov 24, 2020 at 1:06 PM Xen.org security team <security@xen.org> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
>                     Xen Security Advisory XSA-355
>                               version 2
>
>                  stack corruption from XSA-346 change
>
> UPDATES IN VERSION 2
> ====================
>
> Added metadata file.
>
> Public release.
>
> ISSUE DESCRIPTION
> =================
>
> One of the two changes for XSA-346 introduced an on-stack array.  The
> check for guarding against overrunning this array was off by one,
> allowing for corruption of the first stack slot immediately following
> this array.
>
> IMPACT
> ======
>
> A malicious or buggy HVM or PVH guest can cause Xen to crash, resulting
> in a Denial of Service (DoS) to the entire host.  Privilege escalation
> as well as information leaks cannot be excluded.
>
> VULNERABLE SYSTEMS
> ==================
>
> All Xen versions which have the patches for XSA-346 applied are
> vulnerable.
>
> Only x86 HVM and PVH guests can leverage the vulnerability.  Arm guests
> and x86 PV guests cannot leverage the vulnerability.
>
> Only x86 HVM and PVH guests which have physical devices passed through
> to them can leverage the vulnerability.
>
> MITIGATION
> ==========
>
> Not passing through physical devices to untrusted guests will avoid
> the vulnerability.
>
> CREDITS
> =======
>
> This issue was discovered by Jan Beulich of SUSE.
>
> RESOLUTION
> ==========
>
> Applying the attached patch resolves this issue.
>
> Note that patches for released versions are generally prepared to
> apply to the stable branches, and may not apply cleanly to the most
> recent release tarball.  Downstreams are encouraged to update to the
> tip of the stable branch before applying these patches.
>
> xsa355.patch           xen-unstable - Xen 4.10.x
>
> $ sha256sum xsa355*
> a93bfc376897e7cffd095d395f1a66476adb9503d7d80a59b7861e64c2675323  xsa355.meta
> dae633c11cf2eff3e304737265e18ab09213e8e4640458080a944ae7a40819a4  xsa355.patch
> $
>
> NOTE CONCERNING SHORT EMBARGO
> =============================
>
> This issue is likely to be re-discovered as the changes for XSA-346
> are deployed more widely, since the issue is also triggerable without
> any malice or bugginess.
>
> DEPLOYMENT DURING EMBARGO
> =========================
>
> Deployment of the patches and/or mitigations described above (or
> others which are substantially similar) is permitted during the
> embargo, even on public-facing systems with untrusted guest users and
> administrators.
>
> But: Distribution of updated software is prohibited (except to other
> members of the predisclosure list).
>
> Predisclosure list members who wish to deploy significantly different
> patches and/or mitigations, please contact the Xen Project Security
> Team.
>
> (Note: this during-embargo deployment notice is retained in
> post-embargo publicly released Xen Project advisories, even though it
> is then no longer applicable.  This is to enable the community to have
> oversight of the Xen Project Security Team's decisionmaking.)
>
> For more information about permissible uses of embargoed information,
> consult the Xen Project community's agreed Security Policy:
>   http://www.xenproject.org/security-policy.html
> -----BEGIN PGP SIGNATURE-----
>
> iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+89pEMHHBncEB4ZW4u
> b3JnAAoJEIP+FMlX6CvZRHQH/1D8CfjZWYgLcdYOg6sDO6BIK8IsnAiOoe2C8b9i
> M8QPFzHlUx09FI5CHVb0Va/pFliR1OS2tmmIU30DL9nmiDLcaP2uvpgJAYo5GwL5
> Rzccjo4qbXwfSRQvHmLzbr+XN8sHDxbekpFd8T5WvuarUgxOaPCLTfSG0nag/t52
> OVNIdDcP5lSt/Z88lYW75j4gBAsXUZDEXgn81JpeHj9js8YLFC3WFcwh58Jjd+hw
> 5DH955jNAKD8TRSy6uffDpvN1m9wm2vDGeXSUcJyswlV8Nqi6YRW4XO4Q6Cfj+CG
> LVBS/T977JZGJjRvTw4j0H+xAXiLFwQ1I/6v6fSZzxDMt9k=
> =+4M1
> -----END PGP SIGNATURE-----



-- 
Mauro Matteo Cascella
Red Hat Product Security
PGP-Key ID: BB3410B0



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:12:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:12:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41292.74422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjlnM-0005Xn-RJ; Mon, 30 Nov 2020 16:12:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41292.74422; Mon, 30 Nov 2020 16:12:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjlnM-0005Xg-OE; Mon, 30 Nov 2020 16:12:56 +0000
Received: by outflank-mailman (input) for mailman id 41292;
 Mon, 30 Nov 2020 16:12:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tpvk=FE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kjlnK-0005Uw-Pc
 for xen-devel@lists.xen.org; Mon, 30 Nov 2020 16:12:54 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5d482b8-e691-4967-8739-36e44bba9526;
 Mon, 30 Nov 2020 16:12:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5d482b8-e691-4967-8739-36e44bba9526
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1606752763;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=wqpac+ZXvFw1ZF2F2HGgOYXyHv5dQoS1gJbo6VOVPi8=;
  b=BpxJEcGW0/cy0ItDf52SLlB3btNDC87+HO7In3EJTUrGVpDfBsoAnK56
   uVLMeP7tAmGWH60rHUaCz1w/IfJt29QoJZDy3KO5dX90f0QBPUMj+iTr/
   PlLE0WmVk89RH5r/n7vZRltzhDVdaYn2Abu+/TxuHbJhRCNjPB5yxzF6Y
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RRsSWBnkNpbKylkUsEaWqll+sSpLu4/eGCXiDrUCHBqP8Ozyq8XAr/7IFzNt/wAjBECqPhyEPI
 Gc5HqvniTSBqcptvJtE9VcmZ7W9myGSPlJWmGL0dC+ceYzKxbYzgoj0S0LIt6XvNCWLrwId178
 /i5whk+BDBiGXwJO+1EBHN0qm4fxKhLb3Z+y3kfNSTAtdBvS1/ZLMwpcYraldDZ0lEgiDFLXfQ
 bAgBs7/ocPM3BqG1AX/cPIUMsA02qJdO3RAgPLa+QRCN06Xtxqetc5F2oM/Aa64pRwzfgYRqCY
 Bo4=
X-SBRS: None
X-MesageID: 32181387
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.78,382,1599537600"; 
   d="scan'208";a="32181387"
Subject: Re: [oss-security] Xen Security Advisory 355 v2 - stack corruption
 from XSA-346 change
To: Mauro Matteo Cascella <mcascell@redhat.com>,
	<oss-security@lists.openwall.com>
CC: <xen-announce@lists.xen.org>, <xen-devel@lists.xen.org>,
	<xen-users@lists.xen.org>, Xen.org security team
	<security-team-members@xen.org>
References: <E1khX2v-0002f4-3b@xenbits.xenproject.org>
 <CAA8xKjWY2+xo57n8hsvG6yMyhs6nAH+S4NbCsEJLWEVff_aWzg@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <10f61e14-05f3-0294-8cda-e63764d98cbc@citrix.com>
Date: Mon, 30 Nov 2020 16:10:35 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAA8xKjWY2+xo57n8hsvG6yMyhs6nAH+S4NbCsEJLWEVff_aWzg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 30/11/2020 16:07, Mauro Matteo Cascella wrote:
> Hello,
>
> Has a CVE been assigned for this issue?
>
> Regards,

Some unknown 3rd party appears to have allocated a CVE and we're
currently trying to track down who.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41309.74435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjlwG-0006lu-RG; Mon, 30 Nov 2020 16:22:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41309.74435; Mon, 30 Nov 2020 16:22:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjlwG-0006ln-NS; Mon, 30 Nov 2020 16:22:08 +0000
Received: by outflank-mailman (input) for mailman id 41309;
 Mon, 30 Nov 2020 16:22:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMVF=FE=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kjlwF-0006li-CN
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 16:22:07 +0000
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57e5423b-c123-432b-a176-0afec2d09a19;
 Mon, 30 Nov 2020 16:22:05 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id h21so25857497wmb.2
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 08:22:05 -0800 (PST)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id o5sm24521025wmh.8.2020.11.30.08.22.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 30 Nov 2020 08:22:00 -0800 (PST)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 0BA221FF7E;
 Mon, 30 Nov 2020 16:22:00 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57e5423b-c123-432b-a176-0afec2d09a19
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=TVxWV15LJeBTnS48HEZkzfLGLUUbk5sUK43fF6RwOIQ=;
        b=dDFy3zREsEBwX/oIgSBzoy2x4SeYW+l6vcHnkdlim/nODhj2xYanYBR81tK5pve5Xs
         POoFxGUkEWQePQ29LfBtdRjGbhrC7C5I6nLZZxpz0GvhWfCFQbko8MuDserSugx/rRJW
         bX7mlHeLoAEaziEPGPrcD0GPCiKXNvspFLLnQHucmtnkSZpHkH0NQsynrVNkU7j5x3Zl
         93SLCGsfFUsMQ2P0AiUtlQmB83HDgn7IyWgYZzCY+NQ4ODswmm0FLJBvs7K5L1+41iMe
         UTAr2HcZtTXBGRpYTCala9Fwpve7yD+cdS80TDVfk0Kd9OcXneVSnyGPjHGcpFHSoeez
         mgNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=TVxWV15LJeBTnS48HEZkzfLGLUUbk5sUK43fF6RwOIQ=;
        b=UUVgcRXOuu4nAkAD4A5LuMGTyRhjBXaBfH2BL6oxUeVO7vrFzkt2EScvG7gg9GOzYC
         LClsDbC2vP7tKw2G4EeiNYNiLn2lRsLCrLYzniBTVMVQvNceZg4k+Re7SlbCrWl/EAUJ
         r6YJ5Uk1FXqCrBLOQEStueu3wxNKrgvvM0QWM6HNYIWsXqAwCCRJ0gnDrO2OESGwLaMP
         mUXBZ2LZxSI6FYNW6DuFjWvYtxvCjI6Y2l7lPso9owxQgLNEHjKAUnxPQsPyNmh4Rbyf
         8c6twrDNXeJxM7SNkrKbS77oWo1ArGjQyzV6+nsoiv58Z3oKlEdwVxD8z19J89gOeimn
         r3Ug==
X-Gm-Message-State: AOAM5306CPBlILBByFvj/gbawsaQT3eNoe59jvhCq8NYTPIQnbShSSDf
	ZT4HQCduIe0oYDfCjvQJdT1ZTw==
X-Google-Smtp-Source: ABdhPJwsK3cO2rfZSGuD+pPnZSyL4MGoiEYINLpam63DvVLLHmr0RDzzEXmc3u/m1hu0lOAdaf1IkQ==
X-Received: by 2002:a1c:7d94:: with SMTP id y142mr20047595wmc.105.1606753322901;
        Mon, 30 Nov 2020 08:22:02 -0800 (PST)
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
User-agent: mu4e 1.5.7; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org, Oleksandr Tyshchenko
 <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, Jan Beulich
 <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Julien
 Grall
 <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>, Daniel De
 Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin
 Tian <kevin.tian@intel.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Kaly Xin <Kaly.Xin@arm.com>, Artem Mygaiev <joculator@gmail.com>
Subject: Re: 
In-reply-to: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
Date: Mon, 30 Nov 2020 16:21:59 +0000
Message-ID: <87h7p6u860.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Oleksandr Tyshchenko <olekstysh@gmail.com> writes:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
>
> Date: Sat, 28 Nov 2020 22:33:51 +0200
> Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> Hello all.
>
> The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
> You can find an initial discussion at [1] and RFC/V1/V2 series at [2]/[3]/[4].
> Xen on Arm requires some implementation to forward guest MMIO access to a device
> model in order to implement virtio-mmio backend or even mediator outside of hypervisor.
> As Xen on x86 already contains required support this series tries to make it common
> and introduce Arm specific bits plus some new functionality. Patch series is based on
> Julien's PoC "xen/arm: Add support for Guest IO forwarding to a device emulator".
> Besides splitting existing IOREQ/DM support and introducing Arm side, the series
> also includes virtio-mmio related changes (last 2 patches for toolstack)
> for the reviewers to be able to see how the whole picture could look
> like.

Thanks for posting the latest version.

>
> According to the initial discussion there are a few open questions/concerns
> regarding security, performance in VirtIO solution:
> 1. virtio-mmio vs virtio-pci, SPI vs MSI, different use-cases require different
>    transport...

I think I'm repeating things here I've said in various ephemeral video
chats over the last few weeks, but I should probably put them down on
the record.

I think the original intention of the virtio framers was that advanced
features would build on virtio-pci, because you get a bunch of things
"for free" - notably enumeration and MSI support. There is an assumption
that by the time you add these features to virtio-mmio you end up
re-creating your own less well tested version of virtio-pci. I've not
been terribly convinced by the argument that the guest implementation of
PCI presents a sufficiently large blob of code to make the simpler MMIO
desirable. My attempts to build two virtio kernels (PCI/MMIO) with
otherwise the same devices weren't terribly conclusive either way.

That said, virtio-mmio still has life in it because slimmed-down cloud
guests have moved to using it: PCI enumeration is a roadblock to their
fast-boot requirements. I'm sure they would also appreciate an MSI
implementation to reduce the overhead that handling notifications
currently has under trap-and-emulate.

AIUI the other downside to PCI for Xen is that you would have to emulate
it in the hypervisor, which would mean additional code at the most
privileged level.

> 2. virtio backend is able to access all guest memory, some kind of protection
>    is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory & memcpys in
>    guest'

This is also an area of interest for Project Stratos and something we
would like to see solved generally for all hypervisors. There is a good
write-up of some approaches that Jean-Philippe did on the Stratos
mailing list:

  From: Jean-Philippe Brucker <jean-philippe@linaro.org>
  Subject: Limited memory sharing investigation
  Message-ID: <20201002134336.GA2196245@myrica>

I suspect there is a good argument for the simplicity of a combined
virtqueue, but it is unlikely to be very performance oriented.

> 3. interface between toolstack and 'out-of-qemu' virtio backend, avoid using
>    Xenstore in virtio backend if possible.

I wonder how much work it would be for a Rust expert to make:

  https://github.com/slp/vhost-user-blk

handle an IOREQ signalling pathway instead of the vhost-user/eventfd
pathway. That would give a good indication of how "hypervisor blind"
these daemons could be made.

<snip>
>
> Please note, build-test passed for the following modes:
> 1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
> 2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
> 3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y

Forgive my relative newness to Xen: how do I convince the hypervisor to
build with this enabled? I've tried variants of:

  make -j9 CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64 menuconfig XEN_EXPERT=y [CONFIG_|XEN_|_]IOREQ_SERVER=y

with no joy...
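
For the record, the route I would try next (a sketch only - the
XEN_CONFIG_EXPERT gate and the exact CONFIG_IOREQ_SERVER dependencies
are assumptions about this series, not something I've verified) is to
seed the option into xen/.config and let the Kconfig machinery resolve
it, rather than passing CONFIG_* variables on the make command line,
which the Xen build ignores:

```shell
# Hedged sketch: XEN_CONFIG_EXPERT and CONFIG_IOREQ_SERVER are assumed
# names; adjust to whatever this series actually uses.
cd xen
# Generate a default arm64 config with expert options visible.
make XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
     XEN_CONFIG_EXPERT=y defconfig
# Seed the option, then let Kconfig resolve its dependencies.
echo 'CONFIG_IOREQ_SERVER=y' >> .config
make XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
     XEN_CONFIG_EXPERT=y olddefconfig
# Confirm the option survived before doing the full build.
grep CONFIG_IOREQ_SERVER .config
```

If olddefconfig silently drops the option again, that usually means an
unmet Kconfig dependency (or the expert gate), which menuconfig's
search would show.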

> 4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
> 5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> 6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
>
> ***
>
> Any feedback/help would be highly appreciated.
<snip>

-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:38:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41318.74447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmBZ-0007uB-Ul; Mon, 30 Nov 2020 16:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41318.74447; Mon, 30 Nov 2020 16:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmBZ-0007u4-Re; Mon, 30 Nov 2020 16:37:57 +0000
Received: by outflank-mailman (input) for mailman id 41318;
 Mon, 30 Nov 2020 16:37:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmBY-0007tw-S5; Mon, 30 Nov 2020 16:37:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmBY-00019j-JQ; Mon, 30 Nov 2020 16:37:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmBY-00011M-8j; Mon, 30 Nov 2020 16:37:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmBY-0002EV-8B; Mon, 30 Nov 2020 16:37:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GqQKnGIHpkCqlxTsuW+V7U32Ry7FPESlk9QcswRYb8c=; b=1q5l51qDoVn0bptIlP2yslJbq3
	7JhqktSEFa0t/Zn7yjg4SxDYW8uDZpmzdqDEoanjhCTCg3mgJmRlpeB8qQQtkLGHvE61OMOoWzms9
	MqEv0dkLhoNiiLrwwtWoFs39C4qKqLRfg4BkwQtxt44F3hspATYSrXjF5HZiFpt2s0Ls=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157103-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 157103: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 16:37:56 +0000

flight 157103 qemu-mainline real [real]
flight 157114 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157103/
http://logs.test-lab.xenproject.org/osstest/logs/157114/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                944fdc5e27a5b5adbb765891e8e70e88ba9a00ec
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  102 days
Failing since        152659  2020-08-21 14:07:39 Z  101 days  212 attempts
Testing same since   157069  2020-11-28 05:42:38 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Chen <alex.chen@huawei.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander von Gluck IV <kallisti5@unixzen.com>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andreas Gustafsson <gson@gson.org>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bharat Bhushan <bbhushan2@marvell.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Brad Smith <brad@comstyle.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Felipe Franciosi <felipe@nutanix.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gan Qixin <ganqixin@huawei.com>
  Geoffrey McRae <geoff@hostfission.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Gollu Appalanaidu <anaidu.gollu@samsung.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Haotian Li <lihaotian9@huawei.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason J. Herne <jjherne@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jens Freimann <jfreimann@redhat.com>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jin Yu <jin.yu@intel.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kai Deng <dengkai1@huawei.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Kirti Wankhede <kwankhede@nvidia.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei Rao <lei.rao@intel.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  LemonBoy <thatlemon@gmail.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Liyang Shi <shiliyang@huawei.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <1892540@bugs.launchpad.net>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Pierre Morel <pmorel@linux.ibm.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Rao, Lei <lei.rao@intel.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Rebecca Cran <rebecca@nuviainc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  shiliyang <shiliyang@huawei.com>
  Si-Wei Liu <si-wei.liu@oracle.com>
  Stafford Horne <shorne@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tom Lendacky <thomas.lendacky@amd.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tuguoyi <tu.guoyi@h3c.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinhao Zhang <zhangxinhao1@huawei.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  yuanjungong <ruc_gongyuanjun@163.com>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Chen <chen.zhang@intel.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  Zhengui li <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 69308 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:39:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:39:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41325.74462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmCh-000847-F5; Mon, 30 Nov 2020 16:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41325.74462; Mon, 30 Nov 2020 16:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmCh-000840-Ay; Mon, 30 Nov 2020 16:39:07 +0000
Received: by outflank-mailman (input) for mailman id 41325;
 Mon, 30 Nov 2020 16:39:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmCg-00083s-Dn; Mon, 30 Nov 2020 16:39:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmCg-0001B3-9W; Mon, 30 Nov 2020 16:39:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmCg-000133-1t; Mon, 30 Nov 2020 16:39:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjmCg-0003sF-1J; Mon, 30 Nov 2020 16:39:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dV5E4fUh/sZFVGtt5a+pIQrjKqHWd4+yhZT2FHt8nds=; b=d6ivy7zTNcjHjTwq2KbVy9ZVjf
	uG0wDCp9gqg7ZWecOY+41EqC+28Di4a0bRQjXfoH4t80kghNHJJ3NhbDd2FdceGzztSdeQmajra4c
	GiwaR+XItXa7gMKygGkTQUoeMAPZHClcmlNKVj6eUimIHPPUMwP6BrhaTOUW8rsKnedk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157112-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 157112: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
X-Osstest-Versions-That:
    xen=f7d7d53f6464cff94ead4c15d21e79ce4d9173f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 16:39:06 +0000

flight 157112 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157112/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53
baseline version:
 xen                  f7d7d53f6464cff94ead4c15d21e79ce4d9173f5

Last test of basis   157058  2020-11-27 18:01:37 Z    2 days
Testing same since   157112  2020-11-30 14:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Manuel Bouyer <bouyer@antioche.eu.org>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f7d7d53f64..3ae469af8e  3ae469af8e680df31eecd0a2ac6a83b58ad7ce53 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:51:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41340.74476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmOX-0001QY-K3; Mon, 30 Nov 2020 16:51:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41340.74476; Mon, 30 Nov 2020 16:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmOX-0001QR-Gv; Mon, 30 Nov 2020 16:51:21 +0000
Received: by outflank-mailman (input) for mailman id 41340;
 Mon, 30 Nov 2020 16:51:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kjmOW-0001QM-CX
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 16:51:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kjmOV-0001PU-UW; Mon, 30 Nov 2020 16:51:19 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kjmOV-0007mX-Hc; Mon, 30 Nov 2020 16:51:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=XvfdPrYOjfivOrXAndJdfQrCNwEnDaIrvyrxzNqhODk=; b=Mdru5x59232mjdZC+7aa+PxzE/
	zemvWoaOiZkQ5drZzX2l8VIXF4KYhM3ihmfT/YG6aKlvlbZHs4jDCjRujUT7Et013HZggjhMy2tDk
	1nv5uRBw2V2SzU4CY114j+hIMQCMCQzZItcGJFV2CHN1neZ/Ef0cDxHl6e3lb/3ASSM0=;
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Cc: jgrall@amazon.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
Date: Mon, 30 Nov 2020 16:50:54 +0000
Message-Id: <34de4c4326673c60d3e2cbd3bbcbcca481906524.1606755042.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1

From: Hongyan Xia <hongyxia@amazon.com>

There is simply no guarantee that vmap won't return superpages to the
caller. This can happen if the list of MFNs is contiguous, or if we
simply have a large granularity. Although rare, when it does happen we
hit the BUG_ON() and crash. Properly handle such cases in a new
implementation.

Note that vmap_to_mfn() is now too large to be a macro, so implement it
as a normal function and move the declaration to mm.h (page.h cannot
handle mfn_t).

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
 xen/arch/x86/domain_page.c |  2 +-
 xen/arch/x86/mm.c          | 43 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/mm.h   |  2 ++
 xen/include/asm-x86/page.h |  2 --
 4 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index eac5e3304fb8..4ba75d397a17 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -338,7 +338,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
         return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
-        return vmap_to_mfn(va);
+        return vmap_to_mfn(ptr);
 
     ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5a50339284c7..c22385e90d8a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5194,6 +5194,49 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         }                                          \
     } while ( false )
 
+mfn_t vmap_to_mfn(const void *v)
+{
+    bool locking = system_state > SYS_STATE_boot;
+    unsigned int l2_offset = l2_table_offset((unsigned long)v);
+    unsigned int l1_offset = l1_table_offset((unsigned long)v);
+    l3_pgentry_t *pl3e = virt_to_xen_l3e((unsigned long)v);
+    l2_pgentry_t *pl2e;
+    l1_pgentry_t *pl1e;
+    struct page_info *l3page;
+    mfn_t ret;
+
+    ASSERT(pl3e);
+    l3page = virt_to_page(pl3e);
+    L3T_LOCK(l3page);
+
+    ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
+    if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
+    {
+        ret = mfn_add(l3e_get_mfn(*pl3e),
+                      (l2_offset << PAGETABLE_ORDER) + l1_offset);
+        L3T_UNLOCK(l3page);
+        return ret;
+    }
+
+    pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
+    ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
+    if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
+    {
+        ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
+        L3T_UNLOCK(l3page);
+        return ret;
+    }
+
+    pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
+    UNMAP_DOMAIN_PAGE(pl2e);
+    ASSERT(l1e_get_flags(*pl1e) & _PAGE_PRESENT);
+    ret = l1e_get_mfn(*pl1e);
+    L3T_UNLOCK(l3page);
+    UNMAP_DOMAIN_PAGE(pl1e);
+
+    return ret;
+}
+
 int map_pages_to_xen(
     unsigned long virt,
     mfn_t mfn,
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index deeba75a1cbb..6354d165f48b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -578,6 +578,8 @@ mfn_t alloc_xen_pagetable_new(void);
 void free_xen_pagetable_new(mfn_t mfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
+mfn_t vmap_to_mfn(const void *v);
+#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
 
 int __sync_local_execstate(void);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a771baf7cb3..b2bcc95fd2de 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,8 +291,6 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
-#define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 16:56:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 16:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41351.74493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmTF-0001k5-98; Mon, 30 Nov 2020 16:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41351.74493; Mon, 30 Nov 2020 16:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjmTF-0001jy-6B; Mon, 30 Nov 2020 16:56:13 +0000
Received: by outflank-mailman (input) for mailman id 41351;
 Mon, 30 Nov 2020 16:56:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1kjmTD-0001jt-Bq
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 16:56:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kjmT5-0001Vf-NJ; Mon, 30 Nov 2020 16:56:03 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kjmT5-0002Oq-HJ; Mon, 30 Nov 2020 16:56:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/GuFncgBZL1ILEdm4G2ODWEnlFhcl0st1I6ubsiFQHA=; b=OA5u39nXo4nDOZg/pRt+k4SZRB
	Xk5u9yfypa5jtWFAxaf6d9g9OKFi0Iu0x4ipBoZWnv4HaptrWvla5K9W6JqE+5oxGoZKwOdvQk5D8
	h/ZsoVJPVpXuFvoLdZC381v39EBrA/Y53k9uowtte0CWmvs7hUCHGjX0BM9OF+sTR18Y=;
Subject: Re: Xen 4.15: Proposed release schedule
To: Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
Cc: committers@xenproject.org, George Dunlap <George.Dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Paul Durrant <xadimgnik@gmail.com>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <24510.24778.433048.477008@mariner.uk.xensource.com>
 <24510.25252.447028.364012@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a0648b20-54df-850b-2992-35dfbb86b7ca@xen.org>
Date: Mon, 30 Nov 2020 16:56:01 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <24510.25252.447028.364012@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ian,

On 25/11/2020 13:56, Ian Jackson wrote:
>    Friday 8th January    Last posting date
> 
>      Patches adding new features should be posted to the mailing list
>      by this date, although perhaps not in their final version.
> 
>    Friday 22nd January   Feature freeze
>   
>      Patches adding new features should be committed by this date.
>      Straightforward bugfixes may continue to be accepted by
>      maintainers.
We have quite a good feature line-up on Arm for this release. 
Unfortunately, some of the Arm reviewers (including myself) will be 
unavailable until mid-January. I think this will likely impair what we 
can get into Xen 4.15.

I was going to suggest a different feature freeze date for Arm (IIRC we 
did that in 2018), but the implementation of IOREQ for Arm will also 
touch x86 (in order to make the existing code common).

Therefore, would it be possible to push the "Feature Freeze" back by a week?

Note that I am not suggesting pushing the "Last posting date", so as to 
avoid increasing pressure on the number of series to review.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 18:12:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 18:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41380.74505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjneN-0000tc-5Y; Mon, 30 Nov 2020 18:11:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41380.74505; Mon, 30 Nov 2020 18:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjneN-0000tV-1R; Mon, 30 Nov 2020 18:11:47 +0000
Received: by outflank-mailman (input) for mailman id 41380;
 Mon, 30 Nov 2020 18:11:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1kjneL-0000tQ-5O
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 18:11:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kjneH-0003AP-Vv; Mon, 30 Nov 2020 18:11:41 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=edge-cache-235.e-lhr50.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kjneH-0008CH-J5; Mon, 30 Nov 2020 18:11:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=ndgVYZCrNmPbecjhfCy3ZQaLH0zpEGe05VzAmEtBl2s=; b=uSdYT+fG9DAb6Pm4Dlo2z0CP+L
	cusBBvqmPk1HkhAm+j6a7G+tcY8aUK0dlvh7vK8kCXfpkWmoaGeLCftR3JAiJVMKjjSBl0rDJisOf
	gcf/Z3jvpeimbe3IpS5s4iBEovARFM1juI8Qyl044J1dSfw9T42EaQOZjz+2cGvxvRl8=;
Message-ID: <8118aa61528cb14acab8a399bd483557bd3c921e.camel@xen.org>
Subject: Re: [PATCH 04/16] x86/srat: vmap the pages for acpi_slit
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: julien@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	 <wl@xen.org>, Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org
Date: Mon, 30 Nov 2020 18:11:38 +0000
In-Reply-To: <d41fee35-8889-3ab8-2a5e-f4b442747362@suse.com>
References: <cover.1588278317.git.hongyxia@amazon.com>
	 <f4226fafcd333c0274fcee24601c280bf6494417.1588278317.git.hongyxia@amazon.com>
	 <d41fee35-8889-3ab8-2a5e-f4b442747362@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-11-30 at 11:16 +0100, Jan Beulich wrote:
> On 30.04.2020 22:44, Hongyan Xia wrote:
> > --- a/xen/arch/x86/srat.c
> > +++ b/xen/arch/x86/srat.c
> > @@ -196,7 +196,8 @@ void __init acpi_numa_slit_init(struct
> > acpi_table_slit *slit)
> >  		return;
> >  	}
> >  	mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1);
> > -	acpi_slit = mfn_to_virt(mfn_x(mfn));
> > +	acpi_slit = vmap_boot_pages(mfn, PFN_UP(slit->header.length));
> > +	BUG_ON(!acpi_slit);
> >  	memcpy(acpi_slit, slit, slit->header.length);
> >  }
> 
> I'm not sure in how far this series is still to be considered
> active / pending; I still have it in my inbox as something to
> look at in any event. If it is, then I think the latest by this
> patch it becomes clear that we either want to make vmalloc()
> boot-allocator capable, or introduce e.g. vmalloc_boot().
> Having this recurring pattern including the somewhat odd
> vmap_boot_pages() is imo not the best way forward. It would
> then also no longer be necessary to allocate contiguous pages,
> as none of the users up to here appear to have such a need.

This series is blocked on the PTE domheap conversion series so I will
definitely come back here after that series is merged.

vmap_boot_pages() (poorly named, since there is nothing "boot" about
it) is actually useful in other patches as well, especially when there
is no direct map but we need to map a contiguous range, since
map_domain_page() can only handle a single page. So I would say there
will be a need for this function (maybe call it vmap_contig_pages()?)
even if, for this patch, a boot-capable vmalloc can do the job.

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 18:37:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 18:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41390.74523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjo2o-0002xo-AF; Mon, 30 Nov 2020 18:37:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41390.74523; Mon, 30 Nov 2020 18:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjo2o-0002xh-73; Mon, 30 Nov 2020 18:37:02 +0000
Received: by outflank-mailman (input) for mailman id 41390;
 Mon, 30 Nov 2020 18:37:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjo2m-0002xY-94; Mon, 30 Nov 2020 18:37:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjo2m-0003fk-1I; Mon, 30 Nov 2020 18:37:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kjo2l-0005Su-M6; Mon, 30 Nov 2020 18:36:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kjo2l-0001vi-LV; Mon, 30 Nov 2020 18:36:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=73zzMcxYB93xP73ULPp3oHNc7xFO0uTlNYJ33pkTdOs=; b=WJWyi7BZCf7SQxBiqiZdD+w3Bl
	8n+ymz1In1/3gf7RX6jLkzo56BQbgsNIwAIisRoUzdibawaGW2t+CKAbIcUvDZCrka2IMjl5V3PTx
	oYQr0KGblSMolU0zCmk7XsvDnJb2YzwJavIW3wLZj1p0w7ZvJw8kLhEbWikCUKLWV7ts=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-157109-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 157109: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b65054597872ce3aefbc6a666385eabdf9e288da
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 30 Nov 2020 18:36:59 +0000

flight 157109 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157109/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b65054597872ce3aefbc6a666385eabdf9e288da
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  121 days
Failing since        152366  2020-08-01 20:49:34 Z  120 days  205 attempts
Testing same since   157109  2020-11-30 08:17:04 Z    0 days    1 attempts

------------------------------------------------------------
3619 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 693043 lines long.)
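[Editorial note, not part of the original report: the per-step failure lines above follow a colon-delimited `tree:job:step:status:class` layout. A minimal, hypothetical sketch of tallying blocking regressions per step from such a list — the file name and sample lines are invented for illustration:]

```shell
# Build a small sample in the osstest summary format (invented data).
cat > /tmp/failures.txt <<'EOF'
linux-linus:test-amd64-i386-xl:xen-install:fail:regression
linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
EOF

# Keep only blocking regressions (field 5), then count failures per step
# (field 3). Step names like "xen-install/src_host" contain no colon, so
# splitting on ':' is safe for this format.
awk -F: '$5 == "regression" { count[$3]++ }
         END { for (s in count) print s, count[s] }' /tmp/failures.txt | sort
```

On the sample above this prints one line per failing step with its count (here, `xen-boot 1` and `xen-install 1`); pointing it at a real summary block would aggregate the dominant failure mode (e.g. the widespread `xen-install` failures) at a glance.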


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 19:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 19:56:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41405.74537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjpGr-00027N-Po; Mon, 30 Nov 2020 19:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41405.74537; Mon, 30 Nov 2020 19:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjpGr-00027G-Mt; Mon, 30 Nov 2020 19:55:37 +0000
Received: by outflank-mailman (input) for mailman id 41405;
 Mon, 30 Nov 2020 19:55:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBJD=FE=epam.com=prvs=06035e4899=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kjpGp-00027B-DK
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 19:55:35 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc3606af-a6f0-478c-9ab4-ccce6c7e9208;
 Mon, 30 Nov 2020 19:55:33 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0AUJjS2f028297; Mon, 30 Nov 2020 19:55:24 GMT
Received: from eur03-db5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2057.outbound.protection.outlook.com [104.47.10.57])
 by mx0a-0039f301.pphosted.com with ESMTP id 353fhjnpph-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 30 Nov 2020 19:55:24 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VE1PR03MB5678.eurprd03.prod.outlook.com (2603:10a6:803:121::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Mon, 30 Nov
 2020 19:55:20 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 19:55:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc3606af-a6f0-478c-9ab4-ccce6c7e9208
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <bertrand.marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 1/7] xen/arm: Add ID registers and complete cpufinfo
Thread-Topic: [PATCH v2 1/7] xen/arm: Add ID registers and complete cpufinfo
Thread-Index: AQHWxyRRqZC8P3GPykKfLJ52tVNQw6nhFy2A
Date: Mon, 30 Nov 2020 19:55:20 +0000
Message-ID: <875z5m8vrs.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <97efd89cccdffc2a7fd987ac8156f5eea191fd3f.1606742184.git.bertrand.marquis@arm.com>
In-Reply-To: 
 <97efd89cccdffc2a7fd987ac8156f5eea191fd3f.1606742184.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Hello Bertrand,

Bertrand Marquis writes:

> Add definition and entries in cpuinfo for ID registers introduced in
> newer Arm Architecture reference manual:
> - ID_PFR2: processor feature register 2
> - ID_DFR1: debug feature register 1
> - ID_MMFR4 and ID_MMFR5: Memory model feature registers 4 and 5
> - ID_ISAR6: ISA Feature register 6
> Add more bitfield definitions in PFR fields of cpuinfo.
> Add MVFR2 register definition for aarch32.
> Add mvfr values in cpuinfo.
> Add some register definitions for arm64 in sysregs as some are not
> always known by compilers.
> Initialize the new values added in cpuinfo in identify_cpu during init.
>
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

>
> ---
> Changes in V2:
>   Fix dbg32 table size and add proper initialisation of the second entry
>   of the table by reading ID_DFR1 register.
> ---
>  xen/arch/arm/cpufeature.c           | 17 ++++++++
>  xen/include/asm-arm/arm64/sysregs.h | 25 ++++++++++++
>  xen/include/asm-arm/cpregs.h        | 11 +++++
>  xen/include/asm-arm/cpufeature.h    | 63 ++++++++++++++++++++++++-----
>  4 files changed, 107 insertions(+), 9 deletions(-)
>
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 44126dbf07..204be9b084 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -114,15 +114,20 @@ void identify_cpu(struct cpuinfo_arm *c)
>
>          c->mm64.bits[0]  = READ_SYSREG64(ID_AA64MMFR0_EL1);
>          c->mm64.bits[1]  = READ_SYSREG64(ID_AA64MMFR1_EL1);
> +        c->mm64.bits[2]  = READ_SYSREG64(ID_AA64MMFR2_EL1);
>
>          c->isa64.bits[0] = READ_SYSREG64(ID_AA64ISAR0_EL1);
>          c->isa64.bits[1] = READ_SYSREG64(ID_AA64ISAR1_EL1);
> +
> +        c->zfr64.bits[0] = READ_SYSREG64(ID_AA64ZFR0_EL1);
>  #endif
>
>          c->pfr32.bits[0] = READ_SYSREG32(ID_PFR0_EL1);
>          c->pfr32.bits[1] = READ_SYSREG32(ID_PFR1_EL1);
> +        c->pfr32.bits[2] = READ_SYSREG32(ID_PFR2_EL1);
>
>          c->dbg32.bits[0] = READ_SYSREG32(ID_DFR0_EL1);
> +        c->dbg32.bits[1] = READ_SYSREG32(ID_DFR1_EL1);
>
>          c->aux32.bits[0] = READ_SYSREG32(ID_AFR0_EL1);
>
> @@ -130,6 +135,8 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->mm32.bits[1]  = READ_SYSREG32(ID_MMFR1_EL1);
>          c->mm32.bits[2]  = READ_SYSREG32(ID_MMFR2_EL1);
>          c->mm32.bits[3]  = READ_SYSREG32(ID_MMFR3_EL1);
> +        c->mm32.bits[4]  = READ_SYSREG32(ID_MMFR4_EL1);
> +        c->mm32.bits[5]  = READ_SYSREG32(ID_MMFR5_EL1);
>
>          c->isa32.bits[0] = READ_SYSREG32(ID_ISAR0_EL1);
>          c->isa32.bits[1] = READ_SYSREG32(ID_ISAR1_EL1);
> @@ -137,6 +144,16 @@ void identify_cpu(struct cpuinfo_arm *c)
>          c->isa32.bits[3] = READ_SYSREG32(ID_ISAR3_EL1);
>          c->isa32.bits[4] = READ_SYSREG32(ID_ISAR4_EL1);
>          c->isa32.bits[5] = READ_SYSREG32(ID_ISAR5_EL1);
> +        c->isa32.bits[6] = READ_SYSREG32(ID_ISAR6_EL1);
> +
> +#ifdef CONFIG_ARM_64
> +        c->mvfr.bits[0] = READ_SYSREG64(MVFR0_EL1);
> +        c->mvfr.bits[1] = READ_SYSREG64(MVFR1_EL1);
> +        c->mvfr.bits[2] = READ_SYSREG64(MVFR2_EL1);
> +#else
> +        c->mvfr.bits[0] = READ_CP32(MVFR0);
> +        c->mvfr.bits[1] = READ_CP32(MVFR1);
> +#endif
>  }
>
>  /*
> diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
> index c60029d38f..5abbeda3fd 100644
> --- a/xen/include/asm-arm/arm64/sysregs.h
> +++ b/xen/include/asm-arm/arm64/sysregs.h
> @@ -57,6 +57,31 @@
>  #define ICH_AP1R2_EL2             __AP1Rx_EL2(2)
>  #define ICH_AP1R3_EL2             __AP1Rx_EL2(3)
>
> +/*
> + * Define ID coprocessor registers if they are not
> + * already defined by the compiler.
> + *
> + * Values picked from the Linux kernel.
> + */
> +#ifndef ID_AA64MMFR2_EL1
> +#define ID_AA64MMFR2_EL1            S3_0_C0_C7_2
> +#endif
> +#ifndef ID_PFR2_EL1
> +#define ID_PFR2_EL1                 S3_0_C0_C3_4
> +#endif
> +#ifndef ID_MMFR5_EL1
> +#define ID_MMFR5_EL1                S3_0_C0_C3_6
> +#endif
> +#ifndef ID_ISAR6_EL1
> +#define ID_ISAR6_EL1                S3_0_C0_C2_7
> +#endif
> +#ifndef ID_AA64ZFR0_EL1
> +#define ID_AA64ZFR0_EL1             S3_0_C0_C4_4
> +#endif
> +#ifndef ID_DFR1_EL1
> +#define ID_DFR1_EL1                 S3_0_C0_C3_5
> +#endif
> +
>  /* Access to system registers */
>
>  #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index 8fd344146e..58be898891 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -63,6 +63,7 @@
>  #define FPSID           p10,7,c0,c0,0   /* Floating-Point System ID Register */
>  #define FPSCR           p10,7,c1,c0,0   /* Floating-Point Status and Control Register */
>  #define MVFR0           p10,7,c7,c0,0   /* Media and VFP Feature Register 0 */
> +#define MVFR1           p10,7,c6,c0,0   /* Media and VFP Feature Register 1 */
>  #define FPEXC           p10,7,c8,c0,0   /* Floating-Point Exception Control Register */
>  #define FPINST          p10,7,c9,c0,0   /* Floating-Point Instruction Register */
>  #define FPINST2         p10,7,c10,c0,0  /* Floating-point Instruction Register 2 */
> @@ -108,18 +109,23 @@
>  #define MPIDR           p15,0,c0,c0,5   /* Multiprocessor Affinity Register */
>  #define ID_PFR0         p15,0,c0,c1,0   /* Processor Feature Register 0 */
>  #define ID_PFR1         p15,0,c0,c1,1   /* Processor Feature Register 1 */
> +#define ID_PFR2         p15,0,c0,c3,4   /* Processor Feature Register 2 */
>  #define ID_DFR0         p15,0,c0,c1,2   /* Debug Feature Register 0 */
> +#define ID_DFR1         p15,0,c0,c3,5   /* Debug Feature Register 1 */
>  #define ID_AFR0         p15,0,c0,c1,3   /* Auxiliary Feature Register 0 */
>  #define ID_MMFR0        p15,0,c0,c1,4   /* Memory Model Feature Register 0 */
>  #define ID_MMFR1        p15,0,c0,c1,5   /* Memory Model Feature Register 1 */
>  #define ID_MMFR2        p15,0,c0,c1,6   /* Memory Model Feature Register 2 */
>  #define ID_MMFR3        p15,0,c0,c1,7   /* Memory Model Feature Register 3 */
> +#define ID_MMFR4        p15,0,c0,c2,6   /* Memory Model Feature Register 4 */
> +#define ID_MMFR5        p15,0,c0,c3,6   /* Memory Model Feature Register 5 */
>  #define ID_ISAR0        p15,0,c0,c2,0   /* ISA Feature Register 0 */
>  #define ID_ISAR1        p15,0,c0,c2,1   /* ISA Feature Register 1 */
>  #define ID_ISAR2        p15,0,c0,c2,2   /* ISA Feature Register 2 */
>  #define ID_ISAR3        p15,0,c0,c2,3   /* ISA Feature Register 3 */
>  #define ID_ISAR4        p15,0,c0,c2,4   /* ISA Feature Register 4 */
>  #define ID_ISAR5        p15,0,c0,c2,5   /* ISA Feature Register 5 */
> +#define ID_ISAR6        p15,0,c0,c2,7   /* ISA Feature Register 6 */
>  #define CCSIDR          p15,1,c0,c0,0   /* Cache Size ID Registers */
>  #define CLIDR           p15,1,c0,c0,1   /* Cache Level ID Register */
>  #define CSSELR          p15,2,c0,c0,0   /* Cache Size Selection Register */
> @@ -312,18 +318,23 @@
>  #define HSTR_EL2                HSTR
>  #define ID_AFR0_EL1             ID_AFR0
>  #define ID_DFR0_EL1             ID_DFR0
> +#define ID_DFR1_EL1             ID_DFR1
>  #define ID_ISAR0_EL1            ID_ISAR0
>  #define ID_ISAR1_EL1            ID_ISAR1
>  #define ID_ISAR2_EL1            ID_ISAR2
>  #define ID_ISAR3_EL1            ID_ISAR3
>  #define ID_ISAR4_EL1            ID_ISAR4
>  #define ID_ISAR5_EL1            ID_ISAR5
> +#define ID_ISAR6_EL1            ID_ISAR6
>  #define ID_MMFR0_EL1            ID_MMFR0
>  #define ID_MMFR1_EL1            ID_MMFR1
>  #define ID_MMFR2_EL1            ID_MMFR2
>  #define ID_MMFR3_EL1            ID_MMFR3
> +#define ID_MMFR4_EL1            ID_MMFR4
> +#define ID_MMFR5_EL1            ID_MMFR5
>  #define ID_PFR0_EL1             ID_PFR0
>  #define ID_PFR1_EL1             ID_PFR1
> +#define ID_PFR2_EL1             ID_PFR2
>  #define IFSR32_EL2              IFSR
>  #define MDCR_EL2                HDCR
>  #define MIDR_EL1                MIDR
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index c7b5052992..64354c3f19 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -148,6 +148,7 @@ struct cpuinfo_arm {
>      union {
>          uint64_t bits[2];
>          struct {
> +            /* PFR0 */
>              unsigned long el0:4;
>              unsigned long el1:4;
>              unsigned long el2:4;
> @@ -155,9 +156,23 @@ struct cpuinfo_arm {
>              unsigned long fp:4;   /* Floating Point */
>              unsigned long simd:4; /* Advanced SIMD */
>              unsigned long gic:4;  /* GIC support */
> -            unsigned long __res0:28;
> +            unsigned long ras:4;
> +            unsigned long sve:4;
> +            unsigned long sel2:4;
> +            unsigned long mpam:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long __res0:4;
>              unsigned long csv2:4;
> -            unsigned long __res1:4;
> +            unsigned long csv3:4;
> +
> +            /* PFR1 */
> +            unsigned long bt:4;
> +            unsigned long ssbs:4;
> +            unsigned long mte:4;
> +            unsigned long ras_frac:4;
> +            unsigned long mpam_frac:4;
> +            unsigned long __res1:44;
>          };
>      } pfr64;
>
> @@ -170,7 +185,7 @@ struct cpuinfo_arm {
>      } aux64;
>
>      union {
> -        uint64_t bits[2];
> +        uint64_t bits[3];
>          struct {
>              unsigned long pa_range:4;
>              unsigned long asid_bits:4;
> @@ -190,6 +205,8 @@ struct cpuinfo_arm {
>              unsigned long pan:4;
>              unsigned long __res1:8;
>              unsigned long __res2:32;
> +
> +            unsigned long __res3:64;
>          };
>      } mm64;
>
> @@ -197,6 +214,10 @@ struct cpuinfo_arm {
>          uint64_t bits[2];
>      } isa64;
>
> +    struct {
> +        uint64_t bits[1];
> +    } zfr64;
> +
>  #endif
>
>      /*
> @@ -204,25 +225,38 @@ struct cpuinfo_arm {
>       * when running in 32-bit mode.
>       */
>      union {
> -        uint32_t bits[2];
> +        uint32_t bits[3];
>          struct {
> +            /* PFR0 */
>              unsigned long arm:4;
>              unsigned long thumb:4;
>              unsigned long jazelle:4;
>              unsigned long thumbee:4;
> -            unsigned long __res0:16;
> +            unsigned long csv2:4;
> +            unsigned long amu:4;
> +            unsigned long dit:4;
> +            unsigned long ras:4;
>
> +            /* PFR1 */
>              unsigned long progmodel:4;
>              unsigned long security:4;
>              unsigned long mprofile:4;
>              unsigned long virt:4;
>              unsigned long gentimer:4;
> -            unsigned long __res1:12;
> +            unsigned long sec_frac:4;
> +            unsigned long virt_frac:4;
> +            unsigned long gic:4;
> +
> +            /* PFR2 */
> +            unsigned long csv3:4;
> +            unsigned long ssbs:4;
> +            unsigned long ras_frac:4;
> +            unsigned long __res2:20;
>          };
>      } pfr32;
>
>      struct {
> -        uint32_t bits[1];
> +        uint32_t bits[2];
>      } dbg32;
>
>      struct {
> @@ -230,12 +264,23 @@ struct cpuinfo_arm {
>      } aux32;
>
>      struct {
> -        uint32_t bits[4];
> +        uint32_t bits[6];
>      } mm32;
>
>      struct {
> -        uint32_t bits[6];
> +        uint32_t bits[7];
>      } isa32;
> +
> +#ifdef CONFIG_ARM_64
> +    struct {
> +        uint64_t bits[3];
> +    } mvfr;
> +#else
> +    /* Only MVFR0 and MVFR1 exist on armv7 */
> +    struct {
> +        uint32_t bits[2];
> +    } mvfr;
> +#endif
>  };
>
>  extern struct cpuinfo_arm boot_cpu_data;


--
Volodymyr Babchuk at EPAM
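The `cpuinfo_arm` unions in the reviewed patch overlay named 4-bit ID-register fields on the raw `bits[]` words filled in by `identify_cpu()`. The same idea can be sketched in isolation as follows; the union below is a hypothetical mirror of the patch's `pfr64` layout (GCC-style least-significant-bit-first bitfield allocation assumed), not the actual Xen structure:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the patch's pfr64 union: ID_AA64PFR0_EL1 is
 * carved into 4-bit feature fields starting at bit 0. Bitfield layout
 * is compiler-defined; GCC/clang on AArch64 and x86-64 allocate from
 * the least significant bit, which is what this sketch assumes. */
union pfr64 {
    uint64_t bits[2];
    struct {
        /* PFR0 */
        uint64_t el0:4;
        uint64_t el1:4;
        uint64_t el2:4;
        uint64_t el3:4;
        uint64_t fp:4;
        uint64_t simd:4;
        uint64_t gic:4;
        uint64_t ras:4;
        uint64_t sve:4;   /* bits [35:32] of ID_AA64PFR0_EL1 */
        /* remaining fields omitted in this sketch */
    };
};

/* Cross-check: extract the SVE field by explicit shift/mask. */
static inline unsigned int sve_by_shift(uint64_t pfr0)
{
    return (unsigned int)((pfr0 >> 32) & 0xf);
}
```

Reading a feature level through the union and through the shift must agree; the overlay is what lets later code test `c->pfr64.sve` instead of open-coding shifts against the raw register value.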


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 20:08:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 20:08:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41416.74550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjpT9-0003J8-1X; Mon, 30 Nov 2020 20:08:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41416.74550; Mon, 30 Nov 2020 20:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjpT8-0003J1-UR; Mon, 30 Nov 2020 20:08:18 +0000
Received: by outflank-mailman (input) for mailman id 41416;
 Mon, 30 Nov 2020 20:08:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBJD=FE=epam.com=prvs=06035e4899=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kjpT7-0003Iw-J3
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 20:08:17 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 352803f8-c8ed-4a55-8ce2-00e9c94fe34a;
 Mon, 30 Nov 2020 20:08:15 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0AUK5hBN025753; Mon, 30 Nov 2020 20:08:11 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2113.outbound.protection.outlook.com [104.47.17.113])
 by mx0b-0039f301.pphosted.com with ESMTP id 353dv2wugf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 30 Nov 2020 20:08:11 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR03MB3136.eurprd03.prod.outlook.com (2603:10a6:802:2f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.23; Mon, 30 Nov
 2020 20:08:08 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 20:08:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 352803f8-c8ed-4a55-8ce2-00e9c94fe34a
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <bertrand.marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 2/7] xen/arm: Add arm64 ID registers definitions
Thread-Topic: [PATCH v2 2/7] xen/arm: Add arm64 ID registers definitions
Thread-Index: AQHWxyRQqQqjgsMJ50ujNTmlFMWEoKnhGsCA
Date: Mon, 30 Nov 2020 20:08:08 +0000
Message-ID: <87zh2y7gm0.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <83f4e52dce23d2e83f6118e5ecb3cef22112f9e9.1606742184.git.bertrand.marquis@arm.com>
In-Reply-To: 
 <83f4e52dce23d2e83f6118e5ecb3cef22112f9e9.1606742184.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Bertrand Marquis writes:

> Add coprocessor register definitions for all ID registers trapped
> through the TID3 bit of HCR_EL2.
> Those are the ones that will be emulated in Xen to only publish to guests
> the features that are supported by Xen and that are accessible to
> guests.
>
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
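For background, these definitions exist so that, with HCR_EL2.TID3 set, guest reads of the ID register group trap to the hypervisor, which can then return sanitized values. A rough sketch of that dispatch follows; the bit packing in `SYSREG()` here is invented for illustration and is not Xen's actual `HSR_SYSREG` layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical encoding scheme: op0/op1/CRn/CRm/op2 packed into one
 * integer so a trap handler can switch on the faulting register.
 * Field positions are made up for this sketch. */
#define SYSREG(op0, op1, crn, crm, op2) \
    (((op0) << 14) | ((op1) << 11) | ((crn) << 7) | ((crm) << 3) | (op2))

#define SYSREG_ID_PFR0_EL1   SYSREG(3, 0, 0, 1, 0)
#define SYSREG_ID_ISAR6_EL1  SYSREG(3, 0, 0, 2, 7)

/* A TID3 trap handler looks up the emulated value to give the guest;
 * unhandled ID registers in the trapped range read as zero (RAZ). */
static uint32_t emulate_id_read(unsigned int encoding,
                                uint32_t pfr0, uint32_t isar6)
{
    switch ( encoding )
    {
    case SYSREG_ID_PFR0_EL1:  return pfr0;
    case SYSREG_ID_ISAR6_EL1: return isar6;
    default:                  return 0;
    }
}
```

A real handler would extract op0/op1/CRn/CRm/op2 from the syndrome register and write the result into the guest's destination register before advancing the PC.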

> ---
> Changes in V2: rebase
> ---
>  xen/include/asm-arm/arm64/hsr.h | 37 +++++++++++++++++++++++++++++++++
>  1 file changed, 37 insertions(+)
>
> diff --git a/xen/include/asm-arm/arm64/hsr.h b/xen/include/asm-arm/arm64/hsr.h
> index ca931dd2fe..e691d41c17 100644
> --- a/xen/include/asm-arm/arm64/hsr.h
> +++ b/xen/include/asm-arm/arm64/hsr.h
> @@ -110,6 +110,43 @@
>  #define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
>  #define HSR_SYSREG_CNTP_CVAL_EL0  HSR_SYSREG(3,3,c14,c2,2)
>
> +/* Those registers are used when HCR_EL2.TID3 is set */
> +#define HSR_SYSREG_ID_PFR0_EL1    HSR_SYSREG(3,0,c0,c1,0)
> +#define HSR_SYSREG_ID_PFR1_EL1    HSR_SYSREG(3,0,c0,c1,1)
> +#define HSR_SYSREG_ID_PFR2_EL1    HSR_SYSREG(3,0,c0,c3,4)
> +#define HSR_SYSREG_ID_DFR0_EL1    HSR_SYSREG(3,0,c0,c1,2)
> +#define HSR_SYSREG_ID_DFR1_EL1    HSR_SYSREG(3,0,c0,c3,5)
> +#define HSR_SYSREG_ID_AFR0_EL1    HSR_SYSREG(3,0,c0,c1,3)
> +#define HSR_SYSREG_ID_MMFR0_EL1   HSR_SYSREG(3,0,c0,c1,4)
> +#define HSR_SYSREG_ID_MMFR1_EL1   HSR_SYSREG(3,0,c0,c1,5)
> +#define HSR_SYSREG_ID_MMFR2_EL1   HSR_SYSREG(3,0,c0,c1,6)
> +#define HSR_SYSREG_ID_MMFR3_EL1   HSR_SYSREG(3,0,c0,c1,7)
> +#define HSR_SYSREG_ID_MMFR4_EL1   HSR_SYSREG(3,0,c0,c2,6)
> +#define HSR_SYSREG_ID_MMFR5_EL1   HSR_SYSREG(3,0,c0,c3,6)
> +#define HSR_SYSREG_ID_ISAR0_EL1   HSR_SYSREG(3,0,c0,c2,0)
> +#define HSR_SYSREG_ID_ISAR1_EL1   HSR_SYSREG(3,0,c0,c2,1)
> +#define HSR_SYSREG_ID_ISAR2_EL1   HSR_SYSREG(3,0,c0,c2,2)
> +#define HSR_SYSREG_ID_ISAR3_EL1   HSR_SYSREG(3,0,c0,c2,3)
> +#define HSR_SYSREG_ID_ISAR4_EL1   HSR_SYSREG(3,0,c0,c2,4)
> +#define HSR_SYSREG_ID_ISAR5_EL1   HSR_SYSREG(3,0,c0,c2,5)
> +#define HSR_SYSREG_ID_ISAR6_EL1   HSR_SYSREG(3,0,c0,c2,7)
> +#define HSR_SYSREG_MVFR0_EL1      HSR_SYSREG(3,0,c0,c3,0)
> +#define HSR_SYSREG_MVFR1_EL1      HSR_SYSREG(3,0,c0,c3,1)
> +#define HSR_SYSREG_MVFR2_EL1      HSR_SYSREG(3,0,c0,c3,2)
> +
> +#define HSR_SYSREG_ID_AA64PFR0_EL1   HSR_SYSREG(3,0,c0,c4,0)
> +#define HSR_SYSREG_ID_AA64PFR1_EL1   HSR_SYSREG(3,0,c0,c4,1)
> +#define HSR_SYSREG_ID_AA64DFR0_EL1   HSR_SYSREG(3,0,c0,c5,0)
> +#define HSR_SYSREG_ID_AA64DFR1_EL1   HSR_SYSREG(3,0,c0,c5,1)
> +#define HSR_SYSREG_ID_AA64ISAR0_EL1  HSR_SYSREG(3,0,c0,c6,0)
> +#define HSR_SYSREG_ID_AA64ISAR1_EL1  HSR_SYSREG(3,0,c0,c6,1)
> +#define HSR_SYSREG_ID_AA64MMFR0_EL1  HSR_SYSREG(3,0,c0,c7,0)
> +#define HSR_SYSREG_ID_AA64MMFR1_EL1  HSR_SYSREG(3,0,c0,c7,1)
> +#define HSR_SYSREG_ID_AA64MMFR2_EL1  HSR_SYSREG(3,0,c0,c7,2)
> +#define HSR_SYSREG_ID_AA64AFR0_EL1   HSR_SYSREG(3,0,c0,c5,4)
> +#define HSR_SYSREG_ID_AA64AFR1_EL1   HSR_SYSREG(3,0,c0,c5,5)
> +#define HSR_SYSREG_ID_AA64ZFR0_EL1   HSR_SYSREG(3,0,c0,c4,4)
> +
>  #endif /* __ASM_ARM_ARM64_HSR_H */
>
>  /*


--
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 20:16:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 20:16:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41424.74561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjpam-0004Jd-UQ; Mon, 30 Nov 2020 20:16:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41424.74561; Mon, 30 Nov 2020 20:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjpam-0004JW-RW; Mon, 30 Nov 2020 20:16:12 +0000
Received: by outflank-mailman (input) for mailman id 41424;
 Mon, 30 Nov 2020 20:16:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBJD=FE=epam.com=prvs=06035e4899=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kjpal-0004JP-Vd
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 20:16:12 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09c36977-c9b0-47a3-a232-578e9196c575;
 Mon, 30 Nov 2020 20:16:10 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0AUK7H9G023941; Mon, 30 Nov 2020 20:16:04 GMT
Received: from eur03-db5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2059.outbound.protection.outlook.com [104.47.10.59])
 by mx0a-0039f301.pphosted.com with ESMTP id 353epuns3e-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 30 Nov 2020 20:16:04 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR03MB3136.eurprd03.prod.outlook.com (2603:10a6:802:2f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.23; Mon, 30 Nov
 2020 20:15:53 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 20:15:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09c36977-c9b0-47a3-a232-578e9196c575
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <bertrand.marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Topic: [PATCH v2 3/7] xen/arm: create a cpuinfo structure for guest
Thread-Index: AQHWxyRRyTw/iUdji0eQScYdbZeE1KnhHOoA
Date: Mon, 30 Nov 2020 20:15:53 +0000
Message-ID: <87tut67g93.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
In-Reply-To: 
 <539cc9c817a80e35a2532dba5bc01e9b2533ff56.1606742184.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Bertrand Marquis writes:

> Create a cpuinfo structure for guest and mask into it the features that
> we do not support in Xen or that we do not want to publish to guests.
>
> Modify some values in the cpuinfo structure for guests to mask some
> features which we do not want to allow to guests (like AMU) or we do not
> support (like SVE).
>
> The code tries to group together register modifications for the same
> feature, so that in the long term a feature can easily be enabled or
> disabled depending on user parameters, and other register modifications
> can be added in the same place (like enabling/disabling HCR bits).
>
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
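The masking described above amounts to zeroing 4-bit ID-register fields in a copy of the boot CPU's values. A standalone sketch of that operation (the field offsets follow the Arm ARM for ID_AA64PFR0_EL1; the helper and constants are illustrative, not the patch's code):

```c
#include <assert.h>
#include <stdint.h>

/* Clear one 4-bit ID-register field at the given bit offset, i.e.
 * report the feature as not implemented to the guest. */
static uint64_t clear_field(uint64_t reg, unsigned int shift)
{
    return reg & ~((uint64_t)0xf << shift);
}

/* Field offsets within ID_AA64PFR0_EL1 (per the Arm ARM). */
#define AA64PFR0_SVE_SHIFT 32u   /* SVE,  bits [35:32] */
#define AA64PFR0_AMU_SHIFT 44u   /* AMU,  bits [47:44] */
```

The patch gets the same effect for free from the bitfield unions: assigning `guest_cpuinfo.pfr64.sve = 0` clears exactly one such nibble without disturbing the neighbouring fields.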
> ---
> Changes in V2: rebase
> ---
>  xen/arch/arm/cpufeature.c        | 51 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/cpufeature.h |  2 ++
>  2 files changed, 53 insertions(+)
>
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index 204be9b084..309941ff37 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -24,6 +24,8 @@
> 
>  DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
> 
> +struct cpuinfo_arm __read_mostly guest_cpuinfo;
> +
>  void update_cpu_capabilities(const struct arm_cpu_capabilities *caps,
>                               const char *info)
>  {
> @@ -156,6 +158,55 @@ void identify_cpu(struct cpuinfo_arm *c)
>  #endif
>  }
> 
> +/*
> + * This function creates a cpuinfo structure with values modified to mask
> + * all CPU features that should not be published to guests.
> + * The created structure is then used to provide ID register values to guests.
> + */
> +static int __init create_guest_cpuinfo(void)
> +{
> +    /*
> +     * TODO: The code is currently using only the features detected on the
> +     * boot core. In the long term we should try to compute values containing
> +     * only features supported by all cores.
> +     */
> +    identify_cpu(&guest_cpuinfo);
> +
> +#ifdef CONFIG_ARM_64
> +    /* Disable MPAM as Xen does not support it */
> +    guest_cpuinfo.pfr64.mpam = 0;
> +    guest_cpuinfo.pfr64.mpam_frac = 0;
> +
> +    /* Disable SVE as Xen does not support it */
> +    guest_cpuinfo.pfr64.sve = 0;
> +    guest_cpuinfo.zfr64.bits[0] = 0;
> +
> +    /* Disable MTE as Xen does not support it */
> +    guest_cpuinfo.pfr64.mte = 0;
> +#endif
> +
> +    /* Disable AMU */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.amu = 0;
> +#endif
> +    guest_cpuinfo.pfr32.amu = 0;
> +
> +    /* Disable RAS as Xen does not support it */
> +#ifdef CONFIG_ARM_64
> +    guest_cpuinfo.pfr64.ras = 0;
> +    guest_cpuinfo.pfr64.ras_frac = 0;
> +#endif
> +    guest_cpuinfo.pfr32.ras = 0;
> +    guest_cpuinfo.pfr32.ras_frac = 0;
> +
> +    return 0;
> +}
> +/*
> + * This function needs to be run after all SMP cores are started to have
> + * cpuinfo structures for all cores.
> + */

This comment contradicts the TODO at the beginning of
create_guest_cpuinfo().

> +__initcall(create_guest_cpuinfo);
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 64354c3f19..0ab6dd42a0 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -290,6 +290,8 @@ extern void identify_cpu(struct cpuinfo_arm *);
>  extern struct cpuinfo_arm cpu_data[];
>  #define current_cpu_data cpu_data[smp_processor_id()]
> 
> +extern struct cpuinfo_arm guest_cpuinfo;
> +
>  #endif /* __ASSEMBLY__ */
> 
>  #endif


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 20:22:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 20:22:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <bertrand.marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Topic: [PATCH v2 4/7] xen/arm: Add handler for ID registers on arm64
Thread-Index: AQHWxyRRk7rRSvoRk02jzxNDTD03R6nhHqgA
Date: Mon, 30 Nov 2020 20:22:06 +0000
Message-ID: <87pn3u7fyp.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
In-Reply-To: 
 <6db611491b25591829b9408267bd9bd50e266fe2.1606742184.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0



Bertrand Marquis writes:

> Add vsysreg emulation for the registers trapped when the TID3 bit is set
> in HCR_EL2.
> The emulation returns the value stored in the guest_cpuinfo structure
> for most registers, and the hardware value for registers not stored in
> the structure (those are mostly registers existing only as a provision
> for future features and which have no definition right now).

I can't see the code that returns values for the registers not stored in
the guest_cpuinfo. Perhaps you need to update the commit description?

> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: rebase
> ---
>  xen/arch/arm/arm64/vsysreg.c | 49 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
>
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 8a85507d9d..970ef51603 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -69,6 +69,14 @@ TVM_REG(CONTEXTIDR_EL1)
>          break;                                                          \
>      }
> 
> +/* Macro to easily generate case entries for ID register emulation */
> +#define GENERATE_TID3_INFO(reg,field,offset)                            \
> +    case HSR_SYSREG_##reg:                                              \
> +    {                                                                   \
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr,   \
> +                          1, guest_cpuinfo.field.bits[offset]);         \
> +    }
> +
>  void do_sysreg(struct cpu_user_regs *regs,
>                 const union hsr hsr)
>  {
> @@ -259,6 +267,47 @@ void do_sysreg(struct cpu_user_regs *regs,
>           */
>          return handle_raz_wi(regs, regidx, hsr.sysreg.read, hsr, 1);
> 
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This traps most identification registers used by a guest
> +     * to identify the processor features.
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0_EL1, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1_EL1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2_EL1, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0_EL1, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1_EL1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0_EL1, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0_EL1, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1_EL1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2_EL1, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3_EL1, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4_EL1, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5_EL1, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0_EL1, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1_EL1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2_EL1, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3_EL1, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4_EL1, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5_EL1, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6_EL1, isa32, 6)
> +    GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
> +    GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
> +    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
> +    GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
> +    GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
> +    GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ISAR0_EL1, isa64, 0)
> +    GENERATE_TID3_INFO(ID_AA64ISAR1_EL1, isa64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR0_EL1, mm64, 0)
> +    GENERATE_TID3_INFO(ID_AA64MMFR1_EL1, mm64, 1)
> +    GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
> +    GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
> +    GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
> +    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
> +
>      /*
>       * HCR_EL2.TIDCP
>       *


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 20:31:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 20:31:44 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <bertrand.marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Topic: [PATCH v2 5/7] xen/arm: Add handler for cp15 ID registers
Thread-Index: AQHWxyRSFk5jpSQ9QE+NXR5AM1rL5qnhIUQA
Date: Mon, 30 Nov 2020 20:31:27 +0000
Message-ID: <87lfei7fj5.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
In-Reply-To: 
 <86c96cd3895bf968f94010c0f4ee8dce7f0338e8.1606742184.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Bertrand Marquis writes:

> Add support for emulation of cp15-based ID registers (on arm32 or when
> running a 32-bit guest on arm64).
> The handlers return the values stored in the guest_cpuinfo
> structure.
> Currently the MVFR registers are not supported.

It is unclear what will happen with registers that are not covered by
the guest_cpuinfo structure. According to the ARM ARM, it is
implementation defined whether such accesses are trapped. On the other
hand, there are many registers which are RAZ. So a well-behaved guest
that tries to read one of those registers would get an undefined
instruction exception instead of just reading all zeroes.

> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: rebase
> ---
>  xen/arch/arm/vcpreg.c | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
>
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index cdc91cdf5b..d0c6406f34 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -155,6 +155,14 @@ TVM_REG32(CONTEXTIDR, CONTEXTIDR_EL1)
>          break;                                                      \
>      }
> 
> +/* Macro to easily generate case entries for ID co-processor emulation */
> +#define GENERATE_TID3_INFO(reg,field,offset)                        \
> +    case HSR_CPREG32(reg):                                          \
> +    {                                                               \
> +        return handle_ro_read_val(regs, regidx, cp32.read, hsr,     \
> +                          1, guest_cpuinfo.field.bits[offset]);     \
> +    }
> +
>  void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>  {
>      const struct hsr_cp32 cp32 = hsr.cp32;
> @@ -286,6 +294,33 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
>           */
>          return handle_raz_wi(regs, regidx, cp32.read, hsr, 1);
> 
> +    /*
> +     * HCR_EL2.TID3
> +     *
> +     * This traps most identification registers used by a guest
> +     * to identify the processor features.
> +     */
> +    GENERATE_TID3_INFO(ID_PFR0, pfr32, 0)
> +    GENERATE_TID3_INFO(ID_PFR1, pfr32, 1)
> +    GENERATE_TID3_INFO(ID_PFR2, pfr32, 2)
> +    GENERATE_TID3_INFO(ID_DFR0, dbg32, 0)
> +    GENERATE_TID3_INFO(ID_DFR1, dbg32, 1)
> +    GENERATE_TID3_INFO(ID_AFR0, aux32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR0, mm32, 0)
> +    GENERATE_TID3_INFO(ID_MMFR1, mm32, 1)
> +    GENERATE_TID3_INFO(ID_MMFR2, mm32, 2)
> +    GENERATE_TID3_INFO(ID_MMFR3, mm32, 3)
> +    GENERATE_TID3_INFO(ID_MMFR4, mm32, 4)
> +    GENERATE_TID3_INFO(ID_MMFR5, mm32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR0, isa32, 0)
> +    GENERATE_TID3_INFO(ID_ISAR1, isa32, 1)
> +    GENERATE_TID3_INFO(ID_ISAR2, isa32, 2)
> +    GENERATE_TID3_INFO(ID_ISAR3, isa32, 3)
> +    GENERATE_TID3_INFO(ID_ISAR4, isa32, 4)
> +    GENERATE_TID3_INFO(ID_ISAR5, isa32, 5)
> +    GENERATE_TID3_INFO(ID_ISAR6, isa32, 6)
> +    /* MVFR registers are in cp10, not cp15 */
> +
>      /*
>       * HCR_EL2.TIDCP
>       *


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 20:40:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 20:40:11 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Bertrand Marquis <bertrand.marquis@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 6/7] xen/arm: Add CP10 exception support to handle VMFR
Thread-Topic: [PATCH v2 6/7] xen/arm: Add CP10 exception support to handle
 VMFR
Thread-Index: AQHWxyRSFZ5B0f9NVEG4OUHzk+RfiKnhI6CA
Date: Mon, 30 Nov 2020 20:39:54 +0000
Message-ID: <87h7p67f52.fsf@epam.com>
References: <cover.1606742184.git.bertrand.marquis@arm.com>
 <58ff66d0daf610dfe8e09516302cb0c0fe17fc59.1606742184.git.bertrand.marquis@arm.com>
In-Reply-To: 
 <58ff66d0daf610dfe8e09516302cb0c0fe17fc59.1606742184.git.bertrand.marquis@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [85.223.209.18]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 71e2a1c6-bf7c-4ab9-6c8f-08d8957017da
x-ms-traffictypediagnostic: VI1PR03MB3952:
x-microsoft-antispam-prvs: 
 <VI1PR03MB3952F12CF92EC2D2679C34DCE6F50@VI1PR03MB3952.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 VyKhbWWqtgY1hZI1dGaQiEwhjnj9zBYI8TuPOxr80u8gpaY/pJKqrSvROh/8L/+IU6883n1pFRboFbv15wixekncQreDDauYQq9AqPZgs6GS7R4Pzb5h8/oW9lIzjDzpp//IdGskIsdxYSOCv1QGN/BdsAuP9KW/ckerZSSUZ5g4nbtrrrSSPohinTcDrl0vHt1T0Jcc5euu9UMqmweYSXicsZcC/gOjpY0EZB1d7HpitZkatPPQokC8EfazjauhiflbgA3HArASvOzaU9yPo5DjnwyR9sshe0jHTFKMGEbwSBxNtWHUhnLLVSsFY4Bu
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB6400.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(366004)(136003)(346002)(376002)(8936002)(8676002)(55236004)(478600001)(186003)(26005)(4326008)(316002)(6506007)(54906003)(2616005)(86362001)(36756003)(5660300002)(2906002)(6512007)(66556008)(66446008)(64756008)(66476007)(6916009)(91956017)(76116006)(83380400001)(66946007)(6486002)(71200400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?HbIjzGRHnPSHTn6E5aIAXqGqi/tUuBpUYZqSDoVY0MA553fNDgwDkyfVXf?=
 =?iso-8859-1?Q?k7d8PMl3XF0c1v9CZDnlSYoxiLRhd4d7V9eT2R8kEjgpTlqzuyy/TQ1sU6?=
 =?iso-8859-1?Q?lPkfNZsAV6s//3Y+u/n1tciQmDVvsjD0WJcy5uOZP2/MngWUkzGl7DdWpo?=
 =?iso-8859-1?Q?6hTd2CLzLNgNS3MfjxKzsbJUhKwYp3Wjt1GTH9jYiuGbFHle86w+8hZTUU?=
 =?iso-8859-1?Q?xWID6hyWcNmqe8jCjhnZvRTz8YUMDDcYy9qu2iQxF4tZB3Fhd+6NpgN7W3?=
 =?iso-8859-1?Q?xuxpvNyloLMeMH+JA+weMLctae5xDGicvXFrICYYQqckP4zcSvmUDwiO5V?=
 =?iso-8859-1?Q?pDANykgF1zBy6jby2ScO+odj921oJCp7P6fHwFIpO3Ofl+7bc/2VT0BN/f?=
 =?iso-8859-1?Q?TZKA4y7/4lsq3lXQvuZqZlvw3fSmmCp1xzAqIImSQrZpY00aE1lPD3XxQ1?=
 =?iso-8859-1?Q?3p6JtJDSLDlf3X2Y6gnozHOa9VlTrplW2PKZ//R/H5BcCbFrFnn+6IszmK?=
 =?iso-8859-1?Q?M3F2SRX+Vpm+crfxDAN53oVvzyEAi+/q+aX33ci3kHB5iv1+T6KlTtVOd5?=
 =?iso-8859-1?Q?JymsYP1Op5CS0l8U9dFSZ7TAMMfvKfrMVkmWuB0mHjdItJI9S0TCt23brb?=
 =?iso-8859-1?Q?eLvNZ58QW7KJIqX23QgPPNJ2XKMg0Du6fP3paVXO3IZtj8QqVULjQV3tMs?=
 =?iso-8859-1?Q?jsH/Y3zktOn28l+ps8bL/PqCmR2ne1dWC34RyfjJ0vAjkM5X0bUEcZr652?=
 =?iso-8859-1?Q?rTJ+PYM1dfaD0PHs2ZhvpWXvMm6rPldhy+41pvjpaLmJKlc7WD+qDsNdu5?=
 =?iso-8859-1?Q?hy6AsbPdUS3TFgCeqWJMVLhmZ7E6mLi+RLsvRYFT3RCy0JeHfn+6Mv/rZb?=
 =?iso-8859-1?Q?CKWoX8VDky+9mEaIY0jCGSHwg7FH8RAZD0LH9d6VLLEA7TZAqyzSR2iMaN?=
 =?iso-8859-1?Q?PNTCsJOq63Qovk9pdMl6lLQAZEETSCxfwwXYmsfZx9NjSot26Bd31uF33P?=
 =?iso-8859-1?Q?1ESqtUpjSWxGwJ1l6c2Wad+SmE+Mb5IeXpFww8?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 71e2a1c6-bf7c-4ab9-6c8f-08d8957017da
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 20:39:54.0637
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: PPs403ePclVRYmeMDNvXJtThhxtavq2OFg9N2FfS0v6zZEz0W2V9+cgYlned6E/MP+hBAZ/Mdk0YCby9gXL1cz1JxSCHU8u+z6fwQZ6QawQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB3952
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-30_11:2020-11-30,2020-11-30 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 clxscore=1015 malwarescore=0 priorityscore=1501 bulkscore=0
 mlxlogscore=876 spamscore=0 mlxscore=0 adultscore=0 phishscore=0
 impostorscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2011300132


Bertrand Marquis writes:

> Add support for cp10 exceptions decoding to be able to emulate the
> values for VMFR0 and VMFR1 when TID3 bit of HSR is activated.
> This is required for aarch32 guests accessing VMFR0 and VMFR1 using vmrs
> and vmsr instructions.

Is it VMFR or MVFR? According to the reference manual, it is MVFR. Also,
you are missing MVFR2.

> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in V2: rebase
> ---
>  xen/arch/arm/traps.c             |  5 +++++
>  xen/arch/arm/vcpreg.c            | 38 ++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/perfc_defn.h |  1 +
>  xen/include/asm-arm/traps.h      |  1 +
>  4 files changed, 45 insertions(+)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 22bd1bd4c6..28d9d64558 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2097,6 +2097,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>          perfc_incr(trap_cp14_dbg);
>          do_cp14_dbg(regs, hsr);
>          break;
> +    case HSR_EC_CP10:
> +        GUEST_BUG_ON(!psr_mode_is_32bit(regs));
> +        perfc_incr(trap_cp10);
> +        do_cp10(regs, hsr);
> +        break;
>      case HSR_EC_CP:
>          GUEST_BUG_ON(!psr_mode_is_32bit(regs));
>          perfc_incr(trap_cp);
> diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
> index d0c6406f34..9d6a36ca5d 100644
> --- a/xen/arch/arm/vcpreg.c
> +++ b/xen/arch/arm/vcpreg.c
> @@ -634,6 +634,44 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
>      inject_undef_exception(regs, hsr);
>  }
>
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
> +{
> +    const struct hsr_cp32 cp32 = hsr.cp32;
> +    int regidx = cp32.reg;
> +
> +    if ( !check_conditional_instr(regs, hsr) )
> +    {
> +        advance_pc(regs, hsr);
> +        return;
> +    }
> +
> +    switch ( hsr.bits & HSR_CP32_REGS_MASK )
> +    {
> +    /*
> +     * HSR.TID3 is trapping access to MVFR register used to identify the
> +     * VFP/Simd using VMRS/VMSR instructions.
> +     * In this case MVFR2 is not supported as the instruction does not support
> +     * it.
> +     * Exception encoding is using MRC/MCR standard with the reg field in Crn
> +     * as are declared MVFR0 and MVFR1 in cpregs.h
> +     */
> +    GENERATE_TID3_INFO(MVFR0, mvfr, 0)
> +    GENERATE_TID3_INFO(MVFR1, mvfr, 1)
> +
> +    default:
> +        gdprintk(XENLOG_ERR,
> +                 "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
> +                 cp32.read ? "mrc" : "mcr",
> +                 cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
> +        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
> +                 hsr.bits & HSR_CP32_REGS_MASK);
> +        inject_undef_exception(regs, hsr);
> +        return;
> +    }
> +
> +    advance_pc(regs, hsr);
> +}
> +
>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr)
>  {
>      const struct hsr_cp cp = hsr.cp;
> diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
> index 6a83185163..31f071222b 100644
> --- a/xen/include/asm-arm/perfc_defn.h
> +++ b/xen/include/asm-arm/perfc_defn.h
> @@ -11,6 +11,7 @@ PERFCOUNTER(trap_cp15_64,  "trap: cp15 64-bit access")
>  PERFCOUNTER(trap_cp14_32,  "trap: cp14 32-bit access")
>  PERFCOUNTER(trap_cp14_64,  "trap: cp14 64-bit access")
>  PERFCOUNTER(trap_cp14_dbg, "trap: cp14 dbg access")
> +PERFCOUNTER(trap_cp10,     "trap: cp10 access")
>  PERFCOUNTER(trap_cp,       "trap: cp access")
>  PERFCOUNTER(trap_smc32,    "trap: 32-bit smc")
>  PERFCOUNTER(trap_hvc32,    "trap: 32-bit hvc")
> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
> index 997c37884e..c4a3d0fb1b 100644
> --- a/xen/include/asm-arm/traps.h
> +++ b/xen/include/asm-arm/traps.h
> @@ -62,6 +62,7 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr);
> +void do_cp10(struct cpu_user_regs *regs, const union hsr hsr);
>  void do_cp(struct cpu_user_regs *regs, const union hsr hsr);
>
>  /* SMCCC handling */


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 20:51:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 20:51:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41456.74610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjq8u-0008R3-Mm; Mon, 30 Nov 2020 20:51:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41456.74610; Mon, 30 Nov 2020 20:51:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjq8u-0008Qw-JW; Mon, 30 Nov 2020 20:51:28 +0000
Received: by outflank-mailman (input) for mailman id 41456;
 Mon, 30 Nov 2020 20:51:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBJD=FE=epam.com=prvs=06035e4899=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kjq8s-0008Qk-Nr
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 20:51:26 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff35b66d-cae0-4ef7-a91e-9623db490fab;
 Mon, 30 Nov 2020 20:51:25 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0AUKfYwm001800; Mon, 30 Nov 2020 20:51:14 GMT
Received: from eur02-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2059.outbound.protection.outlook.com [104.47.6.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 353dv2wxay-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 30 Nov 2020 20:51:14 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR03MB2880.eurprd03.prod.outlook.com (2603:10a6:802:2d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Mon, 30 Nov
 2020 20:51:09 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 20:51:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff35b66d-cae0-4ef7-a91e-9623db490fab
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f5G95A4yQSj6Ic2y/ZmjOuT5ZhM3KkNdvrksELOCcMkqNVeXkhobGd2AwCOhKH2VYquJnHO98LeMdcnFFy7Cdjvz+kKlU7UTF0YReH4WE0nGPiquUOhAdhZBwXjKHxVYyoGlbMqzezXoAdat+tYr/TABoAvr+dhhbhfsXaDtFIrQAdEW7FMOoP92Iq19LWrRRbCDpaKa5SHnR9/Z+IYQfwd17MEra1w/JFYTwKtphw3e0XOt+CFs7sqmB3BUFR4Bfp5+F9wISXT3uBba2iAALHyrsCt6lyyllrzqnzunP53TstyC0BK0co3zv39t9E39aIEg/xZfr9RyKpwX3DlzgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KArdJbSp/HmifNqXna5we7oWwJ7unOhPaZmPbRWmTe8=;
 b=i6v2xrFUUSI+wAW6qesgo1s5KJwYzGUYXN8JCVLova7ksXY6NzVNTbqdsm+FzbPKXUjjptAg++hqgNnlIhNjTdXfJhiyJjMvmqAnau1G2Nx3XV1Zd5DsuhfrwukVhijE6VDyOSJ4rSrP2T/FugIfzV5sK5sj4/LYSlHG/he5G8Vaowm5i/XHIevr1I3iuYgTo6HY71x303GMSFkR3XBCbk22s7/V0uSidybQr39txKDARCrgh1aHf8h1MrMQvDL92Tc4f42wwCzOpDEXJSbPE4sirRwl6w097DGTBXSmNqWKIXNjcitHbZjWxnKX/hEE/F/3WG0284Sab57vNacb9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KArdJbSp/HmifNqXna5we7oWwJ7unOhPaZmPbRWmTe8=;
 b=ilVz2Ur0ZRmJAhZQmjkTu2rhSWPzdjP5E5lw1K8Un29fJpBZlo3MZT9bPN/db4F5hsX2qxGQp9pmmFTgrthmeHcaDnVpmS5mr6OTSU76HaAeg0nh6yv/Wbv4BMfyC2Zf8KqWmkg5jHa+f162kv9kVHCQiRL9h9bCEs2fovVxbcja1FYk+d3Aiv5UeZtRc4xp/Us9+hzSZAowTHqkLFS7H1Q9KQpGG1H6dEoQoqxOR6ha9Z55K1C2PqmeIvw1BddoiSB0vwjuPDLXR16eAC8aHHzWK7y4bTRNsq6pnEEXvMLGqhtO8EoEaQjDrUmXv6D5i3Eh/YgO2NYH7XEi0UuTsg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>, Julien Grall
	<julien.grall@arm.com>
Subject: Re: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
Thread-Topic: [PATCH V3 15/23] xen/arm: Stick around in
 leave_hypervisor_to_guest until I/O has completed
Thread-Index: AQHWxwQMBbxecCO+BEuoRSAjYYX8W6ngeeYAgACtHwA=
Date: Mon, 30 Nov 2020 20:51:09 +0000
Message-ID: <87czzu7emb.fsf@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-16-git-send-email-olekstysh@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [85.223.209.18]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0e062183-ff54-4456-2cf7-08d89571aa6c
x-ms-traffictypediagnostic: VI1PR03MB2880:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <VI1PR03MB2880202661A69C8948A15819E6F50@VI1PR03MB2880.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 SJ8LyQ/Jf9sFTT2d36r6U0tcBqkTEmDB0IEfaRUqHSpyCv3qRVUTKH5up5qdLA52KAIZBDNpQE3n/YJ21dlPDxnbmBroUEofQn7kjrqZfVHnI0M8Wt8cEXtOsbE8lq4yv82F3M01QqO7/kLA39Pn++YTjT8lHb0M0esoEf5qXgwkJBzFwOMPvG0J0xZLPpmQQuSt5dwlP2ICGgDOGE7lasg8TkW3alS0/mvLdBrd+LKMyRCrXW9o0HfZSPyAEjuBJHsdOA2hLpaA3X6aM+rh2dOlnZngnsAopSelt7WY2OlTj4FKaKv0mRBqhpk/WTmu
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB6400.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(136003)(376002)(366004)(346002)(54906003)(66556008)(76116006)(6506007)(71200400001)(316002)(91956017)(66946007)(66476007)(36756003)(64756008)(66446008)(2906002)(186003)(4326008)(478600001)(26005)(6512007)(6916009)(8676002)(86362001)(5660300002)(8936002)(55236004)(6486002)(83380400001)(2616005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?LAov9b8xQ4IwYJDUhqhNfQnhBRYy/URrwr4AEef0qdBDxpDl/V/C5eWzI5?=
 =?iso-8859-1?Q?MkrYu4lRH8dPM84W9xTwA82bJySpahoHfMwsGpjFjN+/Xf53jMOXfF7BSt?=
 =?iso-8859-1?Q?sulV8nbB7HM6UCJ4JxDQ6jcuv1tmjIsn9P11mQ7zfnFPqPWM7D74Fmer7y?=
 =?iso-8859-1?Q?tzxyurniwFac3DlV6mhgn/rgC5LY1HZJLRiO09wkbUppmYZtvySbHfII9p?=
 =?iso-8859-1?Q?Z3RIoydBOTrJ7N94DEI3kxelcmXlOyMUAYedidFWQ82HW/NMsxL5Wkzswz?=
 =?iso-8859-1?Q?3dNMFV46TYs+WG0FxEC5WqWF50B772OESYCveIui+4NM77zF11t9PU06DI?=
 =?iso-8859-1?Q?t6u2LAR8pFrq8qofdcCg1pYe2PHCnOqBuKUF1T1HuSSDm1qBbgTddcXCLK?=
 =?iso-8859-1?Q?FKVCAPPY/FSUKzVK/eW8cNPI5hjA9gtMh+2RMGZttmGJ7CX69eI5JBXDR/?=
 =?iso-8859-1?Q?2WH09oY/sRXMoI3V6cSHZRTqwEL9GIgr08jOoefpp/6lnh7LFRbdLBABok?=
 =?iso-8859-1?Q?osUs5aB2jtMY3NvtzXtb7H0hpWaYgGWiVYMHL4nmTabp0rAv+PwHDJTUAn?=
 =?iso-8859-1?Q?C89puc/YTfrVRlntfdZNgiHHDzU749hb8gyONa70nLZCM8vNBnsrcV4+jd?=
 =?iso-8859-1?Q?ezLCjyqwiJ+iTVy/j8Eu4XpJ7MkZtyS/Dgvl8vFjunp/pKlP3lfa1C2/6f?=
 =?iso-8859-1?Q?09NHCrkLWVAGbw0lFqpXrgJ6trb0bvEJn0T2lVrUz7gy3GVWxRwsmz7m2V?=
 =?iso-8859-1?Q?Krttb2QSkAhidsRwvZrxJvN5foPnOoQVhau0mv15HD2335+n0Lz83fVcS/?=
 =?iso-8859-1?Q?LpErIFTidSF+R61hqasr6XFHeTN6U00druLHTU28ogNgBdm/i5YgJjHDTv?=
 =?iso-8859-1?Q?XdXJD9DqfPUlS005+UFUkFp6TXofi3lAGpjAaIbIMHnQazBYTShTGVLq6p?=
 =?iso-8859-1?Q?k2LZdSl7dqIK9zKWtpeX3zXyJpFqf0FxikEV3zvU0ll//9ke3tHPQdU0T3?=
 =?iso-8859-1?Q?vlU2Yql7uF3ELIldPKyAh7Vv3LmolfU26HtLOz?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0e062183-ff54-4456-2cf7-08d89571aa6c
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 20:51:09.5108
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: OACtJrydduIdWnZObGvULsvbNDTIv6omdmQmEqocGe3iQYo39vlWFuZN3W34MgYXCYdrE9v2RARHRJOsJfTSz1CKEzK4DKCz3etNsrmsq8Y=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB2880
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-11-30_11:2020-11-30,2020-11-30 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 adultscore=0 suspectscore=0
 lowpriorityscore=0 impostorscore=0 phishscore=0 clxscore=1011 spamscore=0
 malwarescore=0 priorityscore=1501 mlxscore=0 mlxlogscore=999 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2011300133


Hello Oleksandr,

Oleksandr Tyshchenko writes:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> This patch adds proper handling of return value of
> vcpu_ioreq_handle_completion() which involves using a loop
> in leave_hypervisor_to_guest().
>
> The reason to use an unbounded loop here is the fact that a vCPU
> shouldn't continue until an I/O has completed. In Xen's case, if an I/O
> never completes then it most likely means that something went horribly
> wrong with the Device Emulator. And it is most likely not safe to
> continue. So letting the vCPU spin forever if I/O never completes
> is a safer action than letting it continue and leaving the guest in
> an unclear state, and is the best we can do for now.
>
> This wouldn't be an issue for Xen as do_softirq() would be called at
> every loop. In case of failure, the guest will crash and the vCPU
> will be unscheduled.
>

Why don't you block the vCPU there and unblock it when the response is
ready? If I got it right, the "client" vCPU will spin in the loop, eating
its own scheduling budget with no useful work done. In the worst case, it
will prevent the "server" vCPU from running.

> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
>
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
>
> Changes V1 -> V2:
>    - new patch, changes were derived from (+ new explanation):
>      arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>
> Changes V2 -> V3:
>    - update patch description
> ---
> ---
>  xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++++-----
>  1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 036b13f..4cef43e 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
>   * Process pending work for the vCPU. Any call should be fast or
>   * implement preemption.
>   */
> -static void check_for_vcpu_work(void)
> +static bool check_for_vcpu_work(void)
>  {
>      struct vcpu *v = current;
>
>  #ifdef CONFIG_IOREQ_SERVER
> +    bool handled;
> +
>      local_irq_enable();
> -    vcpu_ioreq_handle_completion(v);
> +    handled = vcpu_ioreq_handle_completion(v);
>      local_irq_disable();
> +
> +    if ( !handled )
> +        return true;
>  #endif
>
>      if ( likely(!v->arch.need_flush_to_ram) )
> -        return;
> +        return false;
>
>      /*
>       * Give a chance for the pCPU to process work before handling the vCPU
> @@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
>      local_irq_enable();
>      p2m_flush_vm(v);
>      local_irq_disable();
> +
> +    return false;
>  }
>
>  /*
> @@ -2291,8 +2298,22 @@ void leave_hypervisor_to_guest(void)
>  {
>      local_irq_disable();
>
> -    check_for_vcpu_work();
> -    check_for_pcpu_work();
> +    /*
> +     * The reason to use an unbounded loop here is the fact that vCPU
> +     * shouldn't continue until an I/O has completed. In Xen case, if an I/O
> +     * never completes then it most likely means that something went horribly
> +     * wrong with the Device Emulator. And it is most likely not safe to
> +     * continue. So letting the vCPU to spin forever if I/O never completes
> +     * is a safer action than letting it continue and leaving the guest in
> +     * unclear state and is the best what we can do for now.
> +     *
> +     * This wouldn't be an issue for Xen as do_softirq() would be called at
> +     * every loop. In case of failure, the guest will crash and the vCPU
> +     * will be unscheduled.
> +     */
> +    do {
> +        check_for_pcpu_work();
> +    } while ( check_for_vcpu_work() );
>
>      vgic_sync_to_lrs();


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 21:04:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 21:04:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41463.74621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjqL3-00018V-Rn; Mon, 30 Nov 2020 21:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41463.74621; Mon, 30 Nov 2020 21:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjqL3-00018O-Oo; Mon, 30 Nov 2020 21:04:01 +0000
Received: by outflank-mailman (input) for mailman id 41463;
 Mon, 30 Nov 2020 21:04:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBJD=FE=epam.com=prvs=06035e4899=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kjqL2-00018J-U4
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 21:04:01 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bea53fa-4ddd-49ee-8021-9ffce5a7f877;
 Mon, 30 Nov 2020 21:03:59 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0AUKe591003819; Mon, 30 Nov 2020 21:03:50 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2050.outbound.protection.outlook.com [104.47.12.50])
 by mx0a-0039f301.pphosted.com with ESMTP id 353ybc4ryf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 30 Nov 2020 21:03:50 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR0301MB2368.eurprd03.prod.outlook.com (2603:10a6:800:6f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.22; Mon, 30 Nov
 2020 21:03:47 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 21:03:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bea53fa-4ddd-49ee-8021-9ffce5a7f877
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Tziu4wLHfWcoQYXsY9STfdzpSgiIYeFBhBdr9Jpm52rPSB34AYZSTPY5CAD7QecisLnqe4X7uwDsA7DkV6Gm4MJwRvzJ5F6h1sfY5Eb6oPI+BeS4sUTWC/7Q/Xo1DcbQOpRj5zersLDI6HUTktzLmL+7dtW/AE36KB01xDfjK/nyDmdjby3zz6/9CEavoLzAFB87z5C/EyNLATAszhG2mVdWOzSg0T+Q254BEJwo3laSI+tc6Ttk/lfRVQBxKn4bEcvXpsV7Jb4bnji+vUqpP8j1QKn9td257vo0npxSJnmgBwtzqNnxBp6yQqVWJskz/dp0IGHlIlzFgXINz0lHYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z0NJJ95yqj9dtDn9vQuzwveSIrxFGX2loWzb8QV4It0=;
 b=AG/eJyO681Gp/6cFCc0ygvFCp17wvZuf7U3CYaVFyqrmSQfDjHsdOZboXQUEFzbiCCXF4XBthdbZYGcpbgVAA5vYGD7liISlE77UdO+BX2+JfNHYTG8hukfYH1BWASc0ObCrXoqQeZRPMnEg8C5dskB+KE+ABxkU1Ze+a4UMC6Wxf67rIXaAzLx/9QdfsBPP1RcMIijoNZYLHjS6+fcNftlPWfhbHonlDr80esUa3L+7b7NgQhF9V8soiLQGUxI8nVJb4IjeKouu4rhA/VIrGp8PiWh9tRUlZAjN21dLDNy5aeGwuHiadFUXI9/UaN8Q9AX0XVAN+3x9XXz4KhgXnQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z0NJJ95yqj9dtDn9vQuzwveSIrxFGX2loWzb8QV4It0=;
 b=TPJjmOA1bbVDStK8IjlYszTMTLTt//+JQsjmURV10tMr6KOW44J5qcclw/YrjOAgxUVsAlOWTRQ5skIReM36mXUfNRE8a6aDLkxBJFTPZ35jpRShS3Cz87RWRO9uggx0UfVRxvayi5q1/v+ggyKblJK5uVyJI34iK+P7v3lpB9uzwqZ2ZeIjJa7JMR9jXfDgXzoX5rluOlU40/ALwbfmMJlOFmc0ZerkuQJSuGM3hq4uQJMkktNEpYAIfw6uoeNV8dWPYbuwei8IDxcStPcLI9rxZN2j8cVzAhERgdlvGVBzXQppIbnghdDc5NmZpQ+lmiQNpsHN3MSZ/PNLYMrUXQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>, Julien Grall
	<julien.grall@arm.com>
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
Thread-Topic: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
Thread-Index: AQHWxwQMBbxecCO+BEuoRSAjYYX8W6ngeesAgACwoYA=
Date: Mon, 30 Nov 2020 21:03:47 +0000
Message-ID: <878sai7e1a.fsf@epam.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
authentication-results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [85.223.209.18]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 429f40b7-85a3-459a-5464-08d895736df3
x-ms-traffictypediagnostic: VI1PR0301MB2368:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 429f40b7-85a3-459a-5464-08d895736df3
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Nov 2020 21:03:47.0410
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 8kwiWEuhffbl1nEN4rNIwYnDk9okA7HesKSrmpPwUha+JQ1d/DaItVeF1nqVepdSvCNCGG4mqPVJdukVdiVRoXTqh2hW6przPqxOt6cIbHQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0301MB2368


Hi,

Oleksandr Tyshchenko writes:

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> In order to avoid code duplication (both handle_read() and
> handle_ioserv() contain the same sign-extension code),
> move this code into a common helper used by both.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
>
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
>
> Changes V1 -> V2:
>    - new patch
>
> Changes V2 -> V3:
>    - no changes
> ---
> ---
>  xen/arch/arm/io.c           | 18 ++----------------
>  xen/arch/arm/ioreq.c        | 17 +----------------
>  xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
>  3 files changed, 27 insertions(+), 32 deletions(-)
>
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index f44cfd4..8d6ec6c 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -23,6 +23,7 @@
>  #include <asm/cpuerrata.h>
>  #include <asm/current.h>
>  #include <asm/mmio.h>
> +#include <asm/traps.h>
>  #include <asm/hvm/ioreq.h>
> 
>  #include "decode.h"
> @@ -39,26 +40,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
>       * setting r).
>       */
>      register_t r = 0;
> -    uint8_t size = (1 << dabt.size) * 8;
> 
>      if ( !handler->ops->read(v, info, &r, handler->priv) )
>          return IO_ABORT;
> 
> -    /*
> -     * Sign extend if required.
> -     * Note that we expect the read handler to have zeroed the bits
> -     * outside the requested access size.
> -     */
> -    if ( dabt.sign && (r & (1UL << (size - 1))) )
> -    {
> -        /*
> -         * We are relying on register_t using the same as
> -         * an unsigned long in order to keep the 32-bit assembly
> -         * code smaller.
> -         */
> -        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
> -        r |= (~0UL) << size;
> -    }
> +    r = sign_extend(dabt, r);
> 
>      set_user_reg(regs, dabt.reg, r);
> 
> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
> index f08190c..2f39289 100644
> --- a/xen/arch/arm/ioreq.c
> +++ b/xen/arch/arm/ioreq.c
> @@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
>      const union hsr hsr = { .bits = regs->hsr };
>      const struct hsr_dabt dabt = hsr.dabt;
>      /* Code is similar to handle_read */
> -    uint8_t size = (1 << dabt.size) * 8;
>      register_t r = v->io.req.data;
> 
>      /* We are done with the IO */
> @@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
>      if ( dabt.write )
>          return IO_HANDLED;
> 
> -    /*
> -     * Sign extend if required.
> -     * Note that we expect the read handler to have zeroed the bits
> -     * outside the requested access size.
> -     */
> -    if ( dabt.sign && (r & (1UL << (size - 1))) )
> -    {
> -        /*
> -         * We are relying on register_t using the same as
> -         * an unsigned long in order to keep the 32-bit assembly
> -         * code smaller.
> -         */
> -        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
> -        r |= (~0UL) << size;
> -    }
> +    r = sign_extend(dabt, r);
> 
>      set_user_reg(regs, dabt.reg, r);
> 
> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
> index 997c378..e301c44 100644
> --- a/xen/include/asm-arm/traps.h
> +++ b/xen/include/asm-arm/traps.h
> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
>          (unsigned long)abort_guest_exit_end == regs->pc;
>  }
> 
> +/* Check whether the sign extension is required and perform it */
> +static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
> +{
> +    uint8_t size = (1 << dabt.size) * 8;
> +
> +    /*
> +     * Sign extend if required.
> +     * Note that we expect the read handler to have zeroed the bits
> +     * outside the requested access size.
> +     */
> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
> +    {
> +        /*
> +         * We are relying on register_t using the same as
> +         * an unsigned long in order to keep the 32-bit assembly
> +         * code smaller.
> +         */
> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
> +        r |= (~0UL) << size;

If `size` is 64, you will get undefined behavior there.

> +    }
> +
> +    return r;
> +}
> +
>  #endif /* __ASM_ARM_TRAPS__ */
>  /*
>   * Local variables:


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 21:25:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 21:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41471.74634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjqfQ-00039W-Pl; Mon, 30 Nov 2020 21:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41471.74634; Mon, 30 Nov 2020 21:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjqfQ-00039P-Lq; Mon, 30 Nov 2020 21:25:04 +0000
Received: by outflank-mailman (input) for mailman id 41471;
 Mon, 30 Nov 2020 21:25:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs/p=FE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kjqfP-00039K-HM
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 21:25:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99a77342-8768-4009-9563-bc1d3d4ffbc2;
 Mon, 30 Nov 2020 21:25:01 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4718A2076A;
 Mon, 30 Nov 2020 21:25:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99a77342-8768-4009-9563-bc1d3d4ffbc2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606771500;
	bh=e540j0UGjlJbQMyrXi4CmNfYK65m/QwHyK8BV8E9HeQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=oYeuId2/wXss50ZBVp9lxYjQ7qc2/XAmsERie0V7XWJmsSKk/fT7/N+cxiqua7Tzn
	 dQm8HHCsPSxtSOFByoTITNOiMOBRe5gbCIGeOHKDAl54pXkQ0f4Y4+724uI+A2PTk5
	 ksoqwEWbZv8j2Q/a1EeemCjf5Ke/HBvhecmvGbkc=
Date: Mon, 30 Nov 2020 13:24:59 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Jan Beulich <jbeulich@suse.com>, Rahul Singh <Rahul.Singh@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 3/3] ns16550: Gate all PCI code with CONFIG_X86
In-Reply-To: <F1A3739A-D07C-429F-AC7B-47F7E2710377@arm.com>
Message-ID: <alpine.DEB.2.21.2011301324510.1100@sstabellini-ThinkPad-T480s>
References: <cover.1606326929.git.rahul.singh@arm.com> <6d64bb35a6ce247faaa3df2ebae27b6bfa1d969e.1606326929.git.rahul.singh@arm.com> <bacfe1c3-d86d-95b2-c52a-4bb86f1338ea@suse.com> <F1A3739A-D07C-429F-AC7B-47F7E2710377@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 27 Nov 2020, Bertrand Marquis wrote:
> Hi Jan,
> 
> > On 27 Nov 2020, at 13:58, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 25.11.2020 19:16, Rahul Singh wrote:
> >> --- a/xen/drivers/char/ns16550.c
> >> +++ b/xen/drivers/char/ns16550.c
> >> @@ -16,7 +16,7 @@
> >> #include <xen/timer.h>
> >> #include <xen/serial.h>
> >> #include <xen/iocap.h>
> >> -#ifdef CONFIG_HAS_PCI
> >> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
> >> #include <xen/pci.h>
> >> #include <xen/pci_regs.h>
> >> #include <xen/pci_ids.h>
> >> @@ -51,7 +51,7 @@ static struct ns16550 {
> >>     unsigned int timeout_ms;
> >>     bool_t intr_works;
> >>     bool_t dw_usr_bsy;
> >> -#ifdef CONFIG_HAS_PCI
> >> +#if defined(CONFIG_X86) && defined(CONFIG_HAS_PCI)
> > 
> > I'm sorry to be picky, but this being a hack wants, imo, also calling
> > it so, by way of a code comment. Clearly this should go at one of the
> > first instances, yet neither of the two above are really suitable imo.
> > Hence I'm coming back to my prior suggestion of introducing a
> > consolidated #define without this becoming a Kconfig setting:
> > 
> > /*
> > * The PCI part of the code in this file currently is only known to
> > * work on x86. Undo this hack once the logic has been suitably
> > * abstracted.
> > */
> > #if defined(CONFIG_HAS_PCI) && defined(CONFIG_X86)
> > # define NS16550_PCI
> > #endif
> > 
> > And then use NS16550_PCI everywhere. I'd be fine making this
> > adjustment while committing, if I knew that (a) you're okay with it
> > and (b) the R-b and A-b you've already got can be kept.
> > 
> 
> Sounds ok to me so you can keep my R-b if you go this way.

I am OK with that too


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 21:28:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 21:28:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41478.74646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjqik-0003LB-Bq; Mon, 30 Nov 2020 21:28:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41478.74646; Mon, 30 Nov 2020 21:28:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjqik-0003L4-76; Mon, 30 Nov 2020 21:28:30 +0000
Received: by outflank-mailman (input) for mailman id 41478;
 Mon, 30 Nov 2020 21:28:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs/p=FE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kjqii-0003Kz-JG
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 21:28:28 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41fb2263-0a64-41c1-a950-60869ca0f168;
 Mon, 30 Nov 2020 21:28:28 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 247E62076A;
 Mon, 30 Nov 2020 21:28:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41fb2263-0a64-41c1-a950-60869ca0f168
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606771707;
	bh=nRkhpXnGjHJGH3BKb1G/qP5N1PKFw4d6Y40Sy/kY1ps=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DpdC9uqks5pPHPFDx2tEwTqEET2dzNOoklp05CrqT2LJiBmmRoUqE9evb4PSG+dBM
	 KZMUi6Vln0PKwwrCPqooMjAb5F/lzBougs/eR0wrGwNGLSMYsQJTHfWXDVr/nfYuaL
	 IPIwgah91UgCxyobhXyX32H4UNvRQPfYEvuvlE4s=
Date: Mon, 30 Nov 2020 13:28:25 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Anthony PERARD <anthony.perard@citrix.com>
cc: Eduardo Habkost <ehabkost@redhat.com>, 
    =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@redhat.com>, 
    qemu-devel@nongnu.org, Jiaxun Yang <jiaxun.yang@flygoat.com>, 
    Igor Mammedov <imammedo@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    Wainer dos Santos Moschetta <wainersm@redhat.com>, 
    Aurelien Jarno <aurelien@aurel32.net>, Thomas Huth <thuth@redhat.com>, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>, 
    Richard Henderson <rth@twiddle.net>, Fam Zheng <fam@euphon.net>, 
    "Daniel P . Berrange" <berrange@redhat.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH-for-6.0 v4 15/17] gitlab-ci: Add test for Xen (on CentOS
 7)
In-Reply-To: <20201127142407.GC2098@perard.uk.xensource.com>
Message-ID: <alpine.DEB.2.21.2011301326110.1100@sstabellini-ThinkPad-T480s>
References: <20201108204535.2319870-1-philmd@redhat.com> <20201108204535.2319870-16-philmd@redhat.com> <20201126173824.GB2098@perard.uk.xensource.com> <20201126174559.GP2271382@habkost.net> <20201127142407.GC2098@perard.uk.xensource.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 27 Nov 2020, Anthony PERARD wrote:
> On Thu, Nov 26, 2020 at 12:45:59PM -0500, Eduardo Habkost wrote:
> > On Thu, Nov 26, 2020 at 05:38:24PM +0000, Anthony PERARD wrote:
> > > Is `make check` going to do something useful with the Xen support? Or is
> > > it going to need more work in order to test the Xen support of QEMU?
> > > (Like starting an actual Xen guest.)
> > 
> > I don't think it will test Xen support, but we still want to at
> > least check if --enable-xen doesn't break anything else.
> 
> That sounds good.
> 
> > Is there any public CI system anywhere where Xen support is
> > tested today?
> 
> Yes, we have osstest, which regularly tests Xen with QEMU from upstream.
> Results are sent to xen-devel. But that might not be very useful for
> qemu-devel.
> 
> We also have a GitLab CI which does some Xen tests, but I don't think
> QEMU is tested there.
> https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=automation/gitlab-ci/test.yaml;hb=HEAD
> https://gitlab.com/xen-project/xen/

QEMU (the version of QEMU picked by the Xen tools) is built but not used
in the Xen Project CI-loop yet.

I am extending the CI-loop with more tests [1], and I would like to have at
least one QEMU test at some point soon. Probably something based on Xen 9pfs.

[1] https://marc.info/?l=xen-devel&m=160627845825763 


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 21:58:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 21:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41487.74663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjrBN-0006F9-Mz; Mon, 30 Nov 2020 21:58:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41487.74663; Mon, 30 Nov 2020 21:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjrBN-0006F2-K3; Mon, 30 Nov 2020 21:58:05 +0000
Received: by outflank-mailman (input) for mailman id 41487;
 Mon, 30 Nov 2020 21:58:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs/p=FE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kjrBM-0006Ex-Ae
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 21:58:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fd5c1fc-875d-4fe3-ac41-4d227b580585;
 Mon, 30 Nov 2020 21:58:03 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 27DF92084C;
 Mon, 30 Nov 2020 21:58:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fd5c1fc-875d-4fe3-ac41-4d227b580585
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606773482;
	bh=6gAQm7UPZej/TlUFIJ+RG2HvFpxnuc65fvjVOTR5uHk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=xEza86sRYW8/X3PV1MLwxoHDVR3BqQpZDbxy3tuISn2fk7IWYRX8LAtBVE+KWx3jb
	 xgSwQKKLfzcVVg12VXAoQWHDcQimTe5+Yz4MKc8hopZczMpIOVZDK1mo7ngF37WlpZ
	 Z/g9Lf+gO6bV0krIXUZRPbABFpzwIdld2id8cC9E=
Date: Mon, 30 Nov 2020 13:58:01 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFC 1/6] xen/arm: mm: Remove special case for CPU0 in
 dump_hyp_walk()
In-Reply-To: <3a783a3d-4c4d-f107-1583-16f04fe76ae0@xen.org>
Message-ID: <alpine.DEB.2.21.2011301357530.1100@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-2-julien@xen.org> <3a783a3d-4c4d-f107-1583-16f04fe76ae0@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 28 Nov 2020, Julien Grall wrote:
> On 19/11/2020 19:07, Julien Grall wrote:
> > From: Stefano Stabellini <sstabellini@kernel.org>
> > 
> > There is no need to have a special case for CPU0 when converting the
> > page-table virtual address into a physical address. The helper
> > virt_to_maddr() is able to translate any address as long as the root
> > page-tables are mapped in the virtual address space. This is the case
> > for all the CPUs at the moment.
> > 
> > So use the same BUG_ON() regardless of the CPU.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > [julien: Rework the commit message]
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > 
> > ---
> > 
> > I went back through the conversation in [1] regarding the issue when
> > loading Xen below 2MB on Arm32. The example provided is wrong because to
> > find the physical address, we need to add 'phys_offset', not subtract it.
> > 
> > So I removed the comment regarding the code was buggy.
> > 
> > [1] https://marc.info/?l=xen-devel&m=157081398022401
> 
> Stefano, can you confirm that you are happy with the new commit message?

Yes, that's OK


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 22:05:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 22:05:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41495.74675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjrIX-0007IS-Ev; Mon, 30 Nov 2020 22:05:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41495.74675; Mon, 30 Nov 2020 22:05:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjrIX-0007IL-Bp; Mon, 30 Nov 2020 22:05:29 +0000
Received: by outflank-mailman (input) for mailman id 41495;
 Mon, 30 Nov 2020 22:05:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qs/p=FE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kjrIW-0007IG-7h
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 22:05:28 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e11f1b65-4675-4214-9e42-c131d4317e0a;
 Mon, 30 Nov 2020 22:05:27 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 244D42076C;
 Mon, 30 Nov 2020 22:05:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e11f1b65-4675-4214-9e42-c131d4317e0a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1606773926;
	bh=rgXlfBrxIRPu9ogxe3dMJbhm9O98pcHHp9kCwex3gC4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=m6IpA0gQ9tGRJIA2RWXS8qFQVLmUM8GAHFmsqlU0zrhqWH0Ytv7sJmn9aj6fBMmgo
	 JOwJcYvMKNBHedg3lgnsaNxn+5FrExB5f0gwJq5uQa374me++3E4Gx2FD+x43O+2h1
	 l6IeQkfjjRnS7HdehGfJcw968y4Q94qgL5xaukDo=
Date: Mon, 30 Nov 2020 14:05:25 -0800 (PST)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFC 4/6] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
In-Reply-To: <d02e29cb-a4f1-4ebe-a04f-67b4a159a193@xen.org>
Message-ID: <alpine.DEB.2.21.2011301359290.1100@sstabellini-ThinkPad-T480s>
References: <20201119190751.22345-1-julien@xen.org> <20201119190751.22345-5-julien@xen.org> <alpine.DEB.2.21.2011191706420.7979@sstabellini-ThinkPad-T480s> <1ba4afef-7efa-6d1a-5929-ec2652dbbb21@xen.org> <alpine.DEB.2.21.2011231409050.7979@sstabellini-ThinkPad-T480s>
 <eff4cb40-ac90-940c-aa97-16a5021386d3@xen.org> <alpine.DEB.2.21.2011231612330.7979@sstabellini-ThinkPad-T480s> <d02e29cb-a4f1-4ebe-a04f-67b4a159a193@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 28 Nov 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 24/11/2020 00:25, Stefano Stabellini wrote:
> > On Mon, 23 Nov 2020, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 23/11/2020 22:27, Stefano Stabellini wrote:
> > > > On Fri, 20 Nov 2020, Julien Grall wrote:
> > > > > > >         /*
> > > > > > >          * For arm32, page-tables are different on each CPUs. Yet,
> > > > > > > they
> > > > > > > share
> > > > > > > @@ -1265,14 +1287,43 @@ static int xen_pt_update(unsigned long
> > > > > > > virt,
> > > > > > >           spin_lock(&xen_pt_lock);
> > > > > > >     -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> > > > > > > +    while ( left )
> > > > > > >         {
> > > > > > > -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> > > > > > > +        unsigned int order;
> > > > > > > +        unsigned long mask;
> > > > > > > +
> > > > > > > +        /*
> > > > > > > +         * Don't take into account the MFN when removing mapping
> > > > > > > (i.e
> > > > > > > +         * MFN_INVALID) to calculate the correct target order.
> > > > > > > +         *
> > > > > > > +         * XXX: Support superpage mappings if nr is not aligned
> > > > > > > to a
> > > > > > > +         * superpage size.
> > > > > > 
> > > > > > It would be good to add another sentence to explain that the checks
> > > > > > below are simply based on masks and rely on the mfn, vfn, and also
> > > > > > nr_mfn to be superpage aligned. (It took me some time to figure it
> > > > > > out.)
> > > > > 
> > > > > I am not sure to understand what you wrote here. Could you suggest a
> > > > > sentence?
> > > > 
> > > > Something like the following:
> > > > 
> > > > /*
> > > >    * Don't take into account the MFN when removing mapping (i.e
> > > >    * MFN_INVALID) to calculate the correct target order.
> > > >    *
> > > >    * This loop relies on mfn, vfn, and nr_mfn, to be all superpage
> > > >    * aligned, and it uses `mask' to check for that.
> > > 
> > > Unfortunately, I am still not sure to understand this comment.
> > > The loop can deal with any (super)page size (4KB, 2MB, 1GB). There are no
> > > assumption on any alignment for mfn, vfn and nr_mfn.
> > > 
> > > By OR-ing the 3 components together, we can use it to find out the maximum
> > > size that can be used for the mapping.
> > > 
> > > So can you clarify what you mean?
> > 
> > In pseudo-code:
> > 
> >    mask = mfn | vfn | nr_mfns;
> >    if (mask & ((1<<FIRST_ORDER) - 1))
> >    if (mask & ((1<<SECOND_ORDER) - 1))
> >    if (mask & ((1<<THIRD_ORDER) - 1))
> >    ...
> > 
> > As you wrote the mask is used to find the max size that can be used for
> > the mapping.
> > 
> > But let's take nr_mfns out of the equation for a moment for clarity:
> > 
> >    mask = mfn | vfn;
> >    if (mask & ((1<<FIRST_ORDER) - 1))
> >    if (mask & ((1<<SECOND_ORDER) - 1))
> >    if (mask & ((1<<THIRD_ORDER) - 1))
> >    ...
> > 
> > How would you describe this check? I'd call this an alignment check,
> > is it not?
> If you take the ``if`` alone, yes, they are alignment checks. But if you
> take the code as a whole, it simply computes which mapping size can be used.
> 
> However, what I am disputing here is "rely" because there is no assumption
> made on the alignment in the loop (we are able to cater for any size). In
> fact, the requirement that mfn and vfn be aligned to the mapping size comes
> from the hardware, not from the implementation.

OK, maybe the "rely" gives a bad impression. What about:

This loop relies on mfn, vfn, and nr_mfn all being superpage aligned
(mfn and vfn have to be, architecturally), and it uses `mask' to check
for that.

Feel free to reword it differently if you have a better idea.


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 22:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 22:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41503.74688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjrZb-0000nb-0u; Mon, 30 Nov 2020 22:23:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41503.74688; Mon, 30 Nov 2020 22:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjrZa-0000nU-TR; Mon, 30 Nov 2020 22:23:06 +0000
Received: by outflank-mailman (input) for mailman id 41503;
 Mon, 30 Nov 2020 22:23:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjrZa-0000nP-1N
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 22:23:06 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba570b5d-06f1-4549-9d10-d6ac8e7bbc85;
 Mon, 30 Nov 2020 22:23:04 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id i2so18455551wrs.4
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 14:23:04 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id u26sm892884wmm.24.2020.11.30.14.23.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 30 Nov 2020 14:23:03 -0800 (PST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba570b5d-06f1-4549-9d10-d6ac8e7bbc85
Subject: Re: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien.grall@arm.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Kaly Xin <Kaly.Xin@arm.com>, Artem Mygaiev <joculator@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <87h7p6u860.fsf@linaro.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <248003d5-87a2-a7b2-5b30-e94c2a49945b@gmail.com>
Date: Tue, 1 Dec 2020 00:22:56 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <87h7p6u860.fsf@linaro.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 30.11.20 18:21, Alex Bennée wrote:

Hi Alex

[added the missing subject line]

> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>>
>> Date: Sat, 28 Nov 2020 22:33:51 +0200
>> Subject: [PATCH V3 00/23] IOREQ feature (+ virtio-mmio) on Arm
>> MIME-Version: 1.0
>> Content-Type: text/plain; charset=UTF-8
>> Content-Transfer-Encoding: 8bit
>>
>> Hello all.
>>
>> The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
>> You can find an initial discussion at [1] and RFC/V1/V2 series at [2]/[3]/[4].
>> Xen on Arm requires some implementation to forward guest MMIO accesses to a device
>> model in order to implement a virtio-mmio backend or even a mediator outside of the
>> hypervisor. As Xen on x86 already contains the required support, this series tries
>> to make it common and introduce the Arm-specific bits plus some new functionality.
>> The patch series is based on Julien's PoC "xen/arm: Add support for Guest IO
>> forwarding to a device emulator". Besides splitting the existing IOREQ/DM support
>> and introducing the Arm side, the series also includes virtio-mmio related changes
>> (the last 2 patches, for the toolstack) so that reviewers can see what the whole
>> picture could look like.
> Thanks for posting the latest version.
>
>> According to the initial discussion there are a few open questions/concerns
>> regarding security, performance in VirtIO solution:
>> 1. virtio-mmio vs virtio-pci, SPI vs MSI, different use-cases require different
>>     transport...
> I think I'm repeating things here I've said in various ephemeral video
> chats over the last few weeks but I should probably put things down on
> the record.
>
> I think the original intention of the virtio framers was that advanced
> features would build on virtio-pci, because you get a bunch of things
> "for free" - notably enumeration and MSI support. There is an assumption
> that by the time you add these features to virtio-mmio you end up
> re-creating your own, less well tested version of virtio-pci. I've not
> been terribly convinced by the argument that the guest implementation of
> PCI presents a sufficiently large blob of code to make the simpler MMIO
> desirable. My attempts to build two virtio kernels (PCI/MMIO) with
> otherwise the same devices weren't terribly conclusive either way.
>
> That said, virtio-mmio still has life in it: the cloudy, slimmed-down
> guests moved to using it because the enumeration of PCI is a road
> block to their fast boot-up requirements. I'm sure they would also
> appreciate an MSI implementation to reduce the overhead that handling
> notifications currently has on trap-and-emulate.
>
> AIUI for Xen the other downside to PCI is that you would have to emulate
> it in the hypervisor, which would be additional code at the most
> privileged level.
Thank you for putting things together here, and for the valuable input. As
for me, the "virtio-mmio & MSI solution" as a performance improvement indeed
sounds interesting. Flipping through the virtio-mmio links I found a
discussion regarding that [1].
I think this needs additional investigation and experiments; however, I am
not sure there is existing infrastructure in Xen on Arm to do so.
Once we make some progress with the IOREQ series I would be able to
focus on enhancements which we would consider worthwhile.


>
>> 2. virtio backend is able to access all guest memory, some kind of protection
>>     is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory & memcpys in
>>     guest'
> This is also an area of interest for Project Stratos and something we
> would like to be solved generally for all hypervisors. There is a good
> write-up of some approaches that Jean-Philippe did on the Stratos
> mailing list:
>
>    From: Jean-Philippe Brucker <jean-philippe@linaro.org>
>    Subject: Limited memory sharing investigation
>    Message-ID: <20201002134336.GA2196245@myrica>
>
> I suspect there is a good argument for the simplicity of a combined
> virtqueue, but it is unlikely to be very performance oriented.

I will look at it.


>> 3. interface between toolstack and 'out-of-qemu' virtio backend, avoid using
>>     Xenstore in virtio backend if possible.
> I wonder how much work it would be for a Rust expert to make:
>
>    https://github.com/slp/vhost-user-blk
>
> handle an IOREQ signalling pathway instead of the vhost-user/eventfd
> pathway? That would give a good indication of how "hypervisor blind"
> these daemons could be made.
>
> <snip>
>> Please note, build-test passed for the following modes:
>> 1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
>> 2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
>> 3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> Forgive my relative newness to Xen, how do I convince the hypervisor to
> build with this on? I've tried variants of:
>
>    make -j9 CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64 menuconfig XEN_EXPERT=y [CONFIG_|XEN_|_]IOREQ_SERVER=y
CONFIG_IOREQ_SERVER is not protected by CONFIG_XEN_EXPERT. I mentioned
how to enable CONFIG_IOREQ_SERVER on Arm (since it is disabled by
default within this series) when describing to Masami how to test this
series, but forgot to add it here. Could you apply the one-line patch [2]
and rebuild? Sorry for the inconvenience.


[1] https://lwn.net/Articles/812055/
[2] 
https://github.com/otyshchenko1/xen/commit/b371bc9a3c954595bfce01bad244260364bbcd48

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Nov 30 23:04:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 23:04:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41512.74706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjsDX-0004mQ-Gt; Mon, 30 Nov 2020 23:04:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41512.74706; Mon, 30 Nov 2020 23:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjsDX-0004mJ-Cm; Mon, 30 Nov 2020 23:04:23 +0000
Received: by outflank-mailman (input) for mailman id 41512;
 Mon, 30 Nov 2020 23:04:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QBJD=FE=epam.com=prvs=06035e4899=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kjsDV-0004mE-QG
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 23:04:21 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8219f09-c044-4491-9347-4e7187a4a8fa;
 Mon, 30 Nov 2020 23:04:20 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 0AUMufpP004133; Mon, 30 Nov 2020 23:04:16 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2058.outbound.protection.outlook.com [104.47.12.58])
 by mx0b-0039f301.pphosted.com with ESMTP id 353ejmx3u9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 30 Nov 2020 23:04:15 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR03MB4205.eurprd03.prod.outlook.com (2603:10a6:803:57::29)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3611.25; Mon, 30 Nov
 2020 23:04:11 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::d7a:2503:2ffd:1c51%6]) with mapi id 15.20.3611.031; Mon, 30 Nov 2020
 23:04:11 +0000
X-Inumbo-ID: f8219f09-c044-4491-9347-4e7187a4a8fa
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Julien
 Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Jan
 Beulich <jbeulich@suse.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq()
 in init_*_irq_data()
Thread-Topic: [PATCH v2] xen/irq: Propagate the error from init_one_desc_irq()
 in init_*_irq_data()
Thread-Index: AQHWxXrGmX4mPPsRQ02tfcrd+7CSaanhT0MA
Date: Mon, 30 Nov 2020 23:04:11 +0000
Message-ID: <874kl678gl.fsf@epam.com>
References: <20201128113642.8265-1-julien@xen.org>
In-Reply-To: <20201128113642.8265-1-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.4.10; emacs 27.1
Content-Type: text/plain; charset="utf-8"
Content-ID: <DC2A197D3DC742449A059868208A851B@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien,

Julien Grall writes:

> From: Julien Grall <jgrall@amazon.com>
>
> init_one_desc_irq() can return an error if it is unable to allocate
> memory. While this is unlikely to happen during boot (called from
> init_{,local_}irq_data()), it is better to harden the code by
> propagating the return value.
>
> Spotted by Coverity.
>
> CID: 106529
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

For the ARM part, obviously.

>
> ---
>     Changes in v2:
>         - Add Roger's reviewed-by for x86
>         - Handle
> ---
>  xen/arch/arm/irq.c | 12 ++++++++++--
>  xen/arch/x86/irq.c |  7 ++++++-
>  2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3877657a5277..b71b099e6fa2 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -88,7 +88,11 @@ static int __init init_irq_data(void)
>      for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
>      {
>          struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc = init_one_irq_desc(desc);
> +
> +        if ( rc )
> +            return rc;
> +
>          desc->irq = irq;
>          desc->action  = NULL;
>      }
> @@ -105,7 +109,11 @@ static int init_local_irq_data(void)
>      for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
>      {
>          struct irq_desc *desc = irq_to_desc(irq);
> -        init_one_irq_desc(desc);
> +        int rc = init_one_irq_desc(desc);
> +
> +        if ( rc )
> +            return rc;
> +
>          desc->irq = irq;
>          desc->action  = NULL;
>  
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 45966947919e..3ebd684415ac 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -428,9 +428,14 @@ int __init init_irq_data(void)
>  
>      for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>      {
> +        int rc;
> +
>          desc = irq_to_desc(irq);
>          desc->irq = irq;
> -        init_one_irq_desc(desc);
> +
> +        rc = init_one_irq_desc(desc);
> +        if ( rc )
> +            return rc;
>      }
>      for ( ; irq < nr_irqs; irq++ )
>          irq_to_desc(irq)->irq = irq;

-- 
Volodymyr Babchuk at EPAM
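As a side note, the hardening pattern in this patch — checking and propagating the return value of init_one_irq_desc() instead of discarding it — can be sketched standalone. The stub below is purely illustrative (the failure injection and the -12/-ENOMEM value are made up); it only demonstrates that the first failure in the loop reaches the caller.

```c
#include <assert.h>

#define NR_IRQS 4

/*
 * Illustrative stand-in for Xen's init_one_irq_desc(): fail for one
 * IRQ so the error path is exercised (-12 mimics -ENOMEM).
 */
static int init_one_irq_desc_stub(unsigned int irq)
{
    return (irq == 2) ? -12 : 0;
}

/*
 * After the patch, the first failure is returned to the caller
 * instead of being silently ignored.
 */
static int init_irq_data_sketch(void)
{
    unsigned int irq;

    for ( irq = 0; irq < NR_IRQS; irq++ )
    {
        int rc = init_one_irq_desc_stub(irq);

        if ( rc )
            return rc;
    }

    return 0;
}
```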


From xen-devel-bounces@lists.xenproject.org Mon Nov 30 23:28:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 30 Nov 2020 23:28:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.41522.74724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjsaL-0006oY-LU; Mon, 30 Nov 2020 23:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 41522.74724; Mon, 30 Nov 2020 23:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kjsaL-0006oR-Gz; Mon, 30 Nov 2020 23:27:57 +0000
Received: by outflank-mailman (input) for mailman id 41522;
 Mon, 30 Nov 2020 23:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=avKr=FE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kjsaJ-0006oM-Nt
 for xen-devel@lists.xenproject.org; Mon, 30 Nov 2020 23:27:55 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 884c9a1d-8b19-42d0-bfb1-7ab34675d380;
 Mon, 30 Nov 2020 23:27:54 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id t4so18625592wrr.12
 for <xen-devel@lists.xenproject.org>; Mon, 30 Nov 2020 15:27:54 -0800 (PST)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id c2sm30888854wrf.68.2020.11.30.15.27.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 30 Nov 2020 15:27:52 -0800 (PST)
X-Inumbo-ID: 884c9a1d-8b19-42d0-bfb1-7ab34675d380
Subject: Re: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Julien Grall <julien.grall@arm.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
 <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
 <878sai7e1a.fsf@epam.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <cad0d7fe-3a9f-3992-9d89-8e9bb438dfbe@gmail.com>
Date: Tue, 1 Dec 2020 01:27:46 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <878sai7e1a.fsf@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 30.11.20 23:03, Volodymyr Babchuk wrote:
> Hi,

Hi Volodymyr


>
> Oleksandr Tyshchenko writes:
>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> In order to avoid code duplication (both handle_read() and
>> handle_ioserv() contain the same code for the sign extension),
>> move this code into a common helper to be used by both.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> Please note, this is a split/cleanup/hardening of Julien's PoC:
>> "Add support for Guest IO forwarding to a device emulator"
>>
>> Changes V1 -> V2:
>>     - new patch
>>
>> Changes V2 -> V3:
>>     - no changes
>> ---
>> ---
>>   xen/arch/arm/io.c           | 18 ++----------------
>>   xen/arch/arm/ioreq.c        | 17 +----------------
>>   xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
>>   3 files changed, 27 insertions(+), 32 deletions(-)
>>
>> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
>> index f44cfd4..8d6ec6c 100644
>> --- a/xen/arch/arm/io.c
>> +++ b/xen/arch/arm/io.c
>> @@ -23,6 +23,7 @@
>>   #include <asm/cpuerrata.h>
>>   #include <asm/current.h>
>>   #include <asm/mmio.h>
>> +#include <asm/traps.h>
>>   #include <asm/hvm/ioreq.h>
>>   
>>   #include "decode.h"
>> @@ -39,26 +40,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
>>        * setting r).
>>        */
>>       register_t r = 0;
>> -    uint8_t size = (1 << dabt.size) * 8;
>>   
>>       if ( !handler->ops->read(v, info, &r, handler->priv) )
>>           return IO_ABORT;
>>   
>> -    /*
>> -     * Sign extend if required.
>> -     * Note that we expect the read handler to have zeroed the bits
>> -     * outside the requested access size.
>> -     */
>> -    if ( dabt.sign && (r & (1UL << (size - 1))) )
>> -    {
>> -        /*
>> -         * We are relying on register_t using the same as
>> -         * an unsigned long in order to keep the 32-bit assembly
>> -         * code smaller.
>> -         */
>> -        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>> -        r |= (~0UL) << size;
>> -    }
>> +    r = sign_extend(dabt, r);
>>   
>>       set_user_reg(regs, dabt.reg, r);
>>   
>> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
>> index f08190c..2f39289 100644
>> --- a/xen/arch/arm/ioreq.c
>> +++ b/xen/arch/arm/ioreq.c
>> @@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
>>       const union hsr hsr = { .bits = regs->hsr };
>>       const struct hsr_dabt dabt = hsr.dabt;
>>       /* Code is similar to handle_read */
>> -    uint8_t size = (1 << dabt.size) * 8;
>>       register_t r = v->io.req.data;
>>   
>>       /* We are done with the IO */
>> @@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
>>       if ( dabt.write )
>>           return IO_HANDLED;
>>   
>> -    /*
>> -     * Sign extend if required.
>> -     * Note that we expect the read handler to have zeroed the bits
>> -     * outside the requested access size.
>> -     */
>> -    if ( dabt.sign && (r & (1UL << (size - 1))) )
>> -    {
>> -        /*
>> -         * We are relying on register_t using the same as
>> -         * an unsigned long in order to keep the 32-bit assembly
>> -         * code smaller.
>> -         */
>> -        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>> -        r |= (~0UL) << size;
>> -    }
>> +    r = sign_extend(dabt, r);
>>   
>>       set_user_reg(regs, dabt.reg, r);
>>   
>> diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
>> index 997c378..e301c44 100644
>> --- a/xen/include/asm-arm/traps.h
>> +++ b/xen/include/asm-arm/traps.h
>> @@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
>>           (unsigned long)abort_guest_exit_end == regs->pc;
>>   }
>>   
>> +/* Check whether the sign extension is required and perform it */
>> +static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
>> +{
>> +    uint8_t size = (1 << dabt.size) * 8;
>> +
>> +    /*
>> +     * Sign extend if required.
>> +     * Note that we expect the read handler to have zeroed the bits
>> +     * outside the requested access size.
>> +     */
>> +    if ( dabt.sign && (r & (1UL << (size - 1))) )
>> +    {
>> +        /*
>> +         * We are relying on register_t using the same as
>> +         * an unsigned long in order to keep the 32-bit assembly
>> +         * code smaller.
>> +         */
>> +        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
>> +        r |= (~0UL) << size;
> If `size` is 64, you will get undefined behavior there.
I don't think we need to worry about undefined behavior here. size=64 is 
only possible for a doubleword access (dabt.size=3), but the "r" 
adjustment is only performed when the Syndrome Sign Extend bit is set, 
and that implies a byte, halfword or word access (dabt.size<3). Or did I 
miss something?
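To make the point concrete, here is a stand-alone model of the helper (not the Xen code itself: register_t is modeled as uint64_t, and sign_extend_model/dabt_size are hypothetical names). The explicit size < 64 guard marks exactly where the shifts would be undefined behavior per C11 6.5.7; the argument above is that dabt.sign is never set for dabt.size=3, so that case cannot be reached in practice:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of register_t for the 64-bit case. */
typedef uint64_t register_t;

/*
 * Stand-alone sketch of sign_extend(). dabt_size encodes the access
 * size as in the DABT syndrome: 0=byte, 1=halfword, 2=word, 3=doubleword.
 */
static register_t sign_extend_model(unsigned int dabt_size, bool sign,
                                    register_t r)
{
    uint8_t size = (1u << dabt_size) * 8;

    /*
     * If size were 64, both shifts below would be undefined behavior
     * (shift count >= width of the type, C11 6.5.7). The guard makes
     * that explicit; in Xen it is unreachable because SSE is only set
     * for sub-doubleword accesses (dabt_size < 3).
     */
    if ( sign && size < 64 && (r & (UINT64_C(1) << (size - 1))) )
        r |= ~UINT64_C(0) << size;

    return r;
}
```

For example, a signed byte read of 0x80 extends to 0xffffffffffffff80, while 0x7fff read as a signed halfword is left unchanged since its sign bit is clear.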


-- 
Regards,

Oleksandr Tyshchenko



